---
abstract: |
  We consider a nonlinear transmission problem for a Bresse beam, which consists of two parts, damped and undamped. The mechanical damping in the damped part is present in the shear angle equation only, and the damped part may be of arbitrary positive length. We prove well-posedness of the corresponding PDE system in the energy space and establish the existence of a regular global attractor under certain conditions on the nonlinearities and the coefficients of the damped part only. Moreover, we study singular limits of the problem when $l\rightarrow 0$ or $l\rightarrow 0$ simultaneously with $k_i\rightarrow+\infty$ and perform numerical modelling for these processes.

  Keywords: Bresse beam, transmission problem, global attractor, singular limit
author:
- Tamara Fastovska $^{1,2,*}$, Dirk Langemann $^{3}$ and Iryna Ryzhkova $^{2}$
title: Qualitative properties of solutions to a nonlinear transmission problem for an elastic Bresse beam
---

$^{1}$Department of Mathematics and Computer Science, V.N. Karazin Kharkiv National University, Kharkiv, Ukraine\
$^{2}$ Institut für Mathematik, Humboldt-Universität zu Berlin, Berlin, Germany\
$^{3}$Institut für Partielle Differentialgleichungen, Technische Universität Braunschweig, Braunschweig, Germany\
fastovskaya\@karazin.ua

# Introduction

In this paper we consider a contact problem for the Bresse beam. Originally the mathematical model for homogeneous Bresse beams was derived in [@Bre1859]. We use the variant of the model described in [@LagLeu1994 Ch. 3]. Let the whole beam occupy a part of a circle of length $L$ and have the curvature $l=R^{-1}$. We consider the beam as a one-dimensional object and measure the coordinate $x$ along the beam. Thus, the coordinate $x$ varies within the interval $(0,L)$. The parts of the beam occupying the intervals $(0,L_0)$ and $(L_0,L)$ consist of different materials. The part lying in the interval $(0,L_0)$ is partially subjected to structural damping (see Figure [1](#FigBeam){reference-type="ref" reference="FigBeam"}).

![Composite Bresse beam.](bresse_beam.eps){#FigBeam width="45%"}

The Bresse system describes the evolution of three quantities: the transversal displacement, the longitudinal displacement, and the shear angle variation. We denote by ${\varphi}$, $\psi$, and ${\omega}$ the transversal displacement, the shear angle variation, and the longitudinal displacement of the left part of the beam lying in $(0,L_0)$. Analogously, we denote by $u$, $v$, and $w$ the transversal displacement, the shear angle variation, and the longitudinal displacement of the right part of the beam occupying the interval $(L_0,L)$. We assume the presence of mechanical dissipation in the equation for the shear angle variation for the left part of the beam. We also assume that both ends of the beam are clamped.
Nonlinear oscillations of the composite beam can be described by the following system of equations
$$\begin{aligned} & \rho_1{\varphi}_{tt}-k_1({\varphi}_x+\psi+l{\omega})_x - l{\sigma}_1({\omega}_x-l{\varphi}) +f_1({\varphi}, \psi, {\omega})=p_1(x,t), \label{Eq1}\\ & {\beta}_1\psi_{tt} -\lambda_1 \psi_{xx} +k_1({\varphi}_x+\psi+l{\omega}) +{\gamma}(\psi_t) +h_1({\varphi}, \psi, {\omega})=r_1(x,t),\;x\in (0,L_0), t>0,\label{Eq2}\\ & \rho_1{\omega}_{tt}- {\sigma}_1({\omega}_x-l{\varphi})_x+lk_1({\varphi}_x+\psi+l{\omega})+g_1({\varphi}, \psi, {\omega})=q_1(x,t), \label{Eq3} \end{aligned}$$
and
$$\begin{aligned} & \rho_2u_{tt}-k_2(u_x+v+lw)_x - l{\sigma}_2(w_x-lu) +f_2(u, v, w)=p_2(x,t), \label{Eq4}\\ & {\beta}_2v_{tt} -\lambda_2 v_{xx} +k_2(u_x+v+lw) +h_2(u, v, w)=r_2(x,t),\qquad \qquad x\in (L_0,L), t>0,\label{Eq5}\\ & \rho_2w_{tt}- {\sigma}_2(w_x-lu)_x+lk_2(u_x+v+lw)+g_2(u, v, w)=q_2(x,t), \label{Eq6} \end{aligned}$$
where $\rho_j,\;{\beta}_j, \; k_j, \; {\sigma}_j,\; \lambda_j$ are positive parameters, $f_j, \; g_j, \; h_j:{\mathbb R}^3\rightarrow{\mathbb R}$ are nonlinear feedbacks, $p_j, \; q_j, \; r_j:(0,L)\times{\mathbb R}_+\rightarrow{\mathbb R}$ are known external loads, and $\gamma:{\mathbb R}\rightarrow{\mathbb R}$ is a nonlinear damping. The system is subjected to the Dirichlet boundary conditions
$$\label{BC} {\varphi}(0,t)=u(L,t)=0, \quad \psi(0,t)=v(L,t)=0, \quad {\omega}(0,t)=w(L,t)=0,$$
the transmission conditions
$$\begin{aligned} & {\varphi}(L_0,t)=u(L_0,t), \quad \psi(L_0,t)=v(L_0,t), \quad {\omega}(L_0,t)=w(L_0,t), \label{TC1} \\ & k_1({\varphi}_x+\psi+l{\omega})(L_0,t)=k_2(u_x+v+lw)(L_0,t), \\ & \lambda_1 \psi_{x}(L_0,t)= \lambda_2 v_{x}(L_0,t),\\ & {\sigma}_1({\omega}_x-l{\varphi})(L_0,t)={\sigma}_2(w_x-lu)(L_0,t), \label{TC4} \end{aligned}$$
and supplemented with the initial conditions
$$\begin{aligned} &{\varphi}(x,0)={\varphi}_0(x),\quad \psi(x,0)=\psi_0(x),\quad {\omega}(x,0)={\omega}_0(x),\\ &{\varphi}_t(x,0)={\varphi}_1(x),\quad \psi_t(x,0)=\psi_1(x),\quad {\omega}_t(x,0)={\omega}_1(x), \end{aligned}$$
$$\begin{aligned} &u(x,0)=u_0(x),\quad v(x,0)=v_0(x),\quad w(x,0)=w_0(x),\\ &u_t(x,0)=u_1(x),\quad v_t(x,0)=v_1(x),\quad w_t(x,0)=w_1(x).\label{IC} \end{aligned}$$
One can single out expressions in the problem which have a clear physical meaning:
$$\begin{aligned} & Q_i(\xi, \zeta, \eta)=k_i(\xi_x+\zeta +l\eta) \mbox{ are shear forces},\\ & N_i(\xi, \zeta, \eta)={\sigma}_i(\eta_x-l\xi) \mbox{ are axial forces},\\ & M_i(\xi, \zeta, \eta)=\lambda_i\zeta_x \mbox{ are bending moments} \end{aligned}$$
for the damped ($i=1$) and undamped ($i=2$) parts respectively. Later we will use them to rewrite the problem in a compact and physically natural form.
The paper is devoted to the well-posedness and long-time behaviour of the system [\[Eq1\]](#Eq1){reference-type="eqref" reference="Eq1"}-[\[IC\]](#IC){reference-type="eqref" reference="IC"}. Our main goal is to establish conditions under which the assumed amount of dissipation is sufficient to guarantee the existence of a global attractor. The paper is organized as follows. In Section 2 we introduce the functional spaces and pose the problem in an abstract form. In Section 3 we prove that the problem is well-posed and possesses strong solutions provided the nonlinearities and initial data are smooth enough. Section 4 is devoted to the main result on the existence of a compact attractor.
The nature of the dissipation prevents us from proving dissipativity explicitly, thus we show that the corresponding dynamical system has a gradient structure and is asymptotically smooth. We establish the unique continuation property by means of the observability inequality obtained in [@TriYao2002] to prove the gradient property. The compensated compactness approach is used to prove the asymptotic smoothness. In Section 5 we show that solutions to [\[Eq1\]](#Eq1){reference-type="eqref" reference="Eq1"}-[\[IC\]](#IC){reference-type="eqref" reference="IC"} tend to solutions to a transmission problem for the Timoshenko beam when $l\rightarrow 0$ and to solutions to a transmission problem for the Euler-Bernoulli beam when $l\rightarrow 0$ and $k_i \rightarrow\infty$, and we also perform numerical modelling of these singular limits.

# Preliminaries and Abstract formulation

## Spaces and notations

Let us denote $$\Phi^1=({\varphi}, \psi, {\omega}), \quad\Phi^2=(u,v,w), \quad \Phi=(\Phi^1,\Phi^2).$$ Thus, $\Phi$ is a six-dimensional vector of functions. Analogously,
$$\begin{aligned} & F_j=(f_j, h_j, g_j): {\mathbb R}^3\rightarrow{\mathbb R}^3, \quad F=(F_1,F_2), \\ & P_j=(p_j, r_j, q_j): (0,L)\times {\mathbb R}_+\rightarrow{\mathbb R}^3, \quad P=(P_1,P_2), \\ & R_j=diag\{\rho_j,{\beta}_j,\rho_j\}, \quad R=diag\{\rho_1,{\beta}_1,\rho_1,\rho_2,{\beta}_2,\rho_2\}, \\ & {\Gamma}(\Phi_t)=(0,{\gamma}(\psi_{t}),0,0,0,0), \end{aligned}$$
where $j=1,2$. The static linear part of the equation system can be formally rewritten as
$$\label{oper} A\Phi=\left( \begin{aligned} & -\partial_x Q_1(\Phi^1)-lN_1(\Phi^1)\\ & -\partial_x M_1(\Phi^1)+Q_1(\Phi^1)\\ & -\partial_x N_1(\Phi^1)+lQ_1(\Phi^1) \\ & -\partial_x Q_2(\Phi^2)-lN_2(\Phi^2)\\ & -\partial_x M_2(\Phi^2)+Q_2(\Phi^2) \\ & -\partial_x N_2(\Phi^2)+lQ_2(\Phi^2) \end{aligned} \right).$$
Then transmission conditions [\[TC1\]](#TC1){reference-type="eqref" reference="TC1"}-[\[TC4\]](#TC4){reference-type="eqref" reference="TC4"} can be written as follows
$$\begin{aligned} &\Phi^1(L_0,t)=\Phi^2(L_0,t), \\ & Q_1(\Phi^1(L_0,t))=Q_2(\Phi^2(L_0,t)),\\ & M_1(\Phi^1(L_0,t))=M_2(\Phi^2(L_0,t)),\\ & N_1(\Phi^1(L_0,t))=N_2(\Phi^2(L_0,t)). \end{aligned}$$
Throughout the paper we use the notation $||\cdot||$ for the $L^2$-norm of a function and $(\cdot,\cdot)$ for the $L^2$-inner product. In these notations we skip the domain on which the functions are defined. We adopt the notation $||\cdot||_{L^2(\Omega)}$ only when the domain is not evident. We also use the same notations $||\cdot||$ and $(\cdot,\cdot)$ for $[L^2({\Omega})]^3$.\
To write our problem in an abstract form we introduce the following spaces. For the velocities of the displacements we use the space $$H_v=\{\Phi=(\Phi^1,\Phi^2):\; \Phi^1\in [L^2(0,L_0)]^3, \;\Phi^2\in [L^2(L_0,L)]^3 \}$$ with the norm $$||\Phi||^2_{H_v}=||\Phi||^2_v=\sum_{j=1}^{2}||\sqrt{R_j}\Phi^j||^2,$$ which is equivalent to the standard $L^2$-norm.\
For the beam displacements we use the space $$\begin{aligned} H_d=\left\{\Phi\in H_v:\; \right.&\Phi^1\in [H^1(0,L_0)]^3, \;\Phi^2\in [H^1(L_0,L)]^3, \\ &\left.\Phi^1(0,t)=\Phi^2(L,t)=0,\; \Phi^1(L_0,t)=\Phi^2(L_0,t) \right\} \end{aligned}$$ with the norm $$||\Phi||^2_{H_d}=||\Phi||^2_d=\sum_{j=1}^{2}\left(||Q_j(\Phi^j)||^2+||N_j(\Phi^j)||^2+||M_j(\Phi^j)||^2 \right).$$ This norm is equivalent to the standard $H^1$-norm. Moreover, the equivalence constants can be chosen independent of $l$ for $l$ small enough (see [@MaMo2017], Remark 2.1).
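For the reader's convenience, substituting the definitions of $Q_j$, $N_j$ and $M_j$ into the last formula, the norm can be written out in the original variables as
$$||\Phi||^2_d= k_1^2||{\varphi}_x+\psi+l{\omega}||^2_{L^2(0,L_0)}+{\sigma}_1^2||{\omega}_x-l{\varphi}||^2_{L^2(0,L_0)}+\lambda_1^2||\psi_x||^2_{L^2(0,L_0)} + k_2^2||u_x+v+lw||^2_{L^2(L_0,L)}+{\sigma}_2^2||w_x-lu||^2_{L^2(L_0,L)}+\lambda_2^2||v_x||^2_{L^2(L_0,L)}.$$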
If we set $$\Psi(x)=\left\{ \begin{aligned} &\Phi^1(x), \quad &x\in (0,L_0),\\ &\Phi^2(x), \quad &x\in [L_0,L) \end{aligned} \right.$$ we see that there is an isomorphism between $H_d$ and $[H^1_0(0,L)]^3$.

## Abstract formulation

The operator $A:D(A)\subset H_v\rightarrow H_v$ is defined by formula [\[oper\]](#oper){reference-type="eqref" reference="oper"}, where
$$\begin{gathered} D(A)=\left\{\Phi\in H_d:\right. \Phi^1\in H^2(0,L_0), \; \Phi^2\in H^2(L_0,L), \; Q_1(\Phi^1(L_0,t))=Q_2(\Phi^2(L_0,t)), \\ N_1(\Phi^1(L_0,t))=N_2(\Phi^2(L_0,t)), \left.M_1(\Phi^1(L_0,t))=M_2(\Phi^2(L_0,t))\right.\}. \end{gathered}$$
Arguing analogously to Lemmas 1.1-1.3 from [@LiuWil2000], one can prove the following lemma.

**Lemma 1**. *The operator $A$ is positive and self-adjoint. Moreover, $$\label{AForm} \begin{aligned} (A^{1/2}\Phi, A^{1/2}B)= & \frac{1}{k_1} (Q_1(\Phi^1),Q_1(B^1)) + \frac{1}{{\sigma}_1} (N_1(\Phi^1),N_1(B^1)) + \frac{1}{\lambda_1} (M_1(\Phi^1),M_1(B^1)) + \\ & \frac{1}{k_2} (Q_2(\Phi^2),Q_2(B^2)) + \frac{1}{{\sigma}_2} (N_2(\Phi^2),N_2(B^2)) + \frac{1}{\lambda_2} (M_2(\Phi^2),M_2(B^2)) \end{aligned}$$ and $D(A^{1/2})=H_d\subset H_v$.*

Thus, we can rewrite equations [\[Eq1\]](#Eq1){reference-type="eqref" reference="Eq1"}-[\[Eq6\]](#Eq6){reference-type="eqref" reference="Eq6"} in the form $$\label{AEq} R\Phi_{tt}+A\Phi+{\Gamma}(\Phi_t) + F(\Phi)=P(x,t),$$ boundary conditions [\[BC\]](#BC){reference-type="eqref" reference="BC"} in the form $$\Phi^1(0,t)=\Phi^2(L,t)=0, \label{ABC}$$ and transmission conditions [\[TC1\]](#TC1){reference-type="eqref" reference="TC1"}-[\[TC4\]](#TC4){reference-type="eqref" reference="TC4"} can be written as $$\begin{aligned} &\Phi^1(L_0,t)=\Phi^2(L_0,t), \label{ATC1} \\ & Q_1(\Phi^1(L_0,t))=Q_2(\Phi^2(L_0,t)),\\ & M_1(\Phi^1(L_0,t))=M_2(\Phi^2(L_0,t)),\\ & N_1(\Phi^1(L_0,t))=N_2(\Phi^2(L_0,t)). \label{ATC4} \end{aligned}$$ Initial conditions have the form $$\Phi(x,0)=\Phi_0(x), \qquad \Phi_t(x,0)=\Phi_1(x). \label{AIC}$$ We use $H=H_d\times H_v$ as a phase space.

# Well-posedness

In this section we study strong, generalized and variational (weak) solutions to [\[AEq\]](#AEq){reference-type="eqref" reference="AEq"}-[\[AIC\]](#AIC){reference-type="eqref" reference="AIC"}.

**Definition 2**. *$\Phi\in C(0,T;H_d)\bigcap C^1(0,T;H_v)$ such that $\Phi(x,0)=\Phi_0(x)$, $\Phi_t(x,0)=\Phi_1(x)$ is said to be a strong solution to [\[AEq\]](#AEq){reference-type="eqref" reference="AEq"}-[\[AIC\]](#AIC){reference-type="eqref" reference="AIC"} if* - *$\Phi(t)$ lies in $D(A)$ for almost all $t$;* - *$\Phi(t)$ is an absolutely continuous function with values in $H_d$ and $\Phi_t \in L_1(a,b;H_d)$ for $0<a<b<T$;* - *$\Phi_t(t)$ is an absolutely continuous function with values in $H_v$ and $\Phi_{tt} \in L_1(a,b;H_v)$ for $0<a<b<T$;* - *equation [\[AEq\]](#AEq){reference-type="eqref" reference="AEq"} is satisfied for almost all $t$.*

**Definition 3**.
*$\Phi\in C(0,T;H_d)\bigcap C^1(0,T;H_v)$ such that $\Phi(x,0)=\Phi_0(x)$, $\Phi_t(x,0)=\Phi_1(x)$ is said to be a generalized solution to [\[AEq\]](#AEq){reference-type="eqref" reference="AEq"}-[\[AIC\]](#AIC){reference-type="eqref" reference="AIC"} if there exists a sequence of strong solutions $\Phi^{(n)}$ to [\[AEq\]](#AEq){reference-type="eqref" reference="AEq"}-[\[AIC\]](#AIC){reference-type="eqref" reference="AIC"} with the initial data $(\Phi_0^{(n)},\Phi_1^{(n)})$ and right-hand side $P^{(n)}(x,t)$ such that $$\lim_{n\rightarrow\infty} \max_{t\in[0,T]} \left(||\Phi^{(n)}(\cdot,t)-\Phi(\cdot,t)||_d + ||\Phi_t^{(n)}(\cdot,t)-\Phi_t(\cdot,t)||_v \right) =0.$$*
We also need a definition of a variational solution. We use six-dimensional vector-functions $B=(B^1, B^2)$, $B^j=(\beta^j,\gamma^j,\delta^j)$ from the space $$F_T=\{B\in L^2(0,T;H_d), \; B_t\in L^2(0,T;H_v), \; B(T)=0\}$$ as test functions.

**Definition 4**. *$\Phi$ is said to be a variational (weak) solution to [\[AEq\]](#AEq){reference-type="eqref" reference="AEq"}-[\[AIC\]](#AIC){reference-type="eqref" reference="AIC"} if* - *$\Phi\in L^\infty(0,T;H_d), \; \Phi_t\in L^\infty(0,T;H_v)$;* - *$\Phi$ satisfies the following variational equality for all $B\in F_T$ $$\begin{gathered} \label{VEq} -\int\limits_0^T (R\Phi_t,B_t)(t)dt- (R\Phi_1, B(0)) + \int_0^T (A^{1/2}\Phi, A^{1/2}B)(t)dt +\\ \int_0^T (\Gamma(\Phi_t), B)(t)dt + \int_0^T (F(\Phi), B)(t)dt - \int_0^T (P, B)(t)dt=0; \end{gathered}$$* - *$\Phi(x,0)=\Phi_0(x)$.*

Now we state a well-posedness result for problem [\[AEq\]](#AEq){reference-type="eqref" reference="AEq"}-[\[AIC\]](#AIC){reference-type="eqref" reference="AIC"}.

**Theorem 5** (Well-posedness). *Let $$\begin{aligned} & f_i, \; g_i, \; h_i: {\mathbb R}^3 \rightarrow{\mathbb R}\mbox{ are locally Lipschitz, i.e., for every } K>0 \mbox{ there exists } L(K)>0 \mbox{ such that} \\ & |f_i(a)-f_i(b)|\le L(K)|a-b|, \quad \mbox{provided } |a|,|b|\le K; \tag{N1}\label{NLip} \end{aligned}$$ $$\begin{aligned} & \mbox{there exist } \mathcal{F}_i: {\mathbb R}^3\rightarrow{\mathbb R}\mbox{ such that } (f_i,h_i,g_i)={\nabla}\mathcal{F}_i; \\ &\mbox{ there exists } \delta>0 \mbox{ such that } \mathcal{F}_i(a) \ge -\delta \mbox{ for all } a\in {\mathbb R}^3; \label{NBoundBelow} \tag{N2} \end{aligned}$$ $$P\in L^2(0,T;H_v); \label{RSmooth}\tag{R1}$$ and the nonlinear dissipation satisfies $$\label{DCont} \gamma \in C({\mathbb R}) \mbox{ is non-decreasing}, \quad \gamma(0) = 0. \tag{D1}$$ Then for every initial data $\Phi_0\in H_d, \Phi_1\in H_v$ and time interval $[0,T]$ there exists a unique generalized solution to [\[AEq\]](#AEq){reference-type="eqref" reference="AEq"}-[\[AIC\]](#AIC){reference-type="eqref" reference="AIC"} with the following properties:* - *every generalized solution is variational;* - *the energy inequality $$\label{EE} \mathcal{E}(T)+\int_0^T ({\gamma}(\psi_t),\psi_t)dt \le \mathcal{E}(0)+ \int_0^T (P(t),\Phi_t(t))dt$$ holds, where $$\mathcal{E}(t)=\frac 12 \left[||R^{1/2}\Phi_t(t)||^2+||A^{1/2}\Phi(t)||^2\right] + \int\limits_0^L \mathcal{F}(\Phi(x,t))dx$$ and $$\mathcal{F}(\Phi(x,t))=\left\{ \begin{aligned} & \mathcal{F}_1({\varphi}(x,t),\psi(x,t),{\omega}(x,t)), \quad & x\in (0,L_0),\\ & \mathcal{F}_2(u(x,t),v(x,t),w(x,t)), \quad & x\in (L_0,L).
\end{aligned} \right.$$* - *If, additionally, $\Phi_0\in D(A)$, $\Phi_1\in H_d$ and $$\partial_t P(x,t)\in L_2(0,T;H_v) \label{RAddSmooth} \tag{R2}$$ then the generalized solution is also strong and satisfies the energy equality.*

*Proof.* The proof essentially uses the theory of monotone operators. It is rather standard by now (see, e.g., [@ChuEllLa2002]), so in some parts we give only references to the corresponding arguments. However, we give some details which demonstrate the peculiarity of 1D problems.\
*Step 1. Abstract formulation.* We need to reformulate problem [\[AEq\]](#AEq){reference-type="eqref" reference="AEq"}-[\[AIC\]](#AIC){reference-type="eqref" reference="AIC"} as a first-order problem. Let us denote $$U=(\Phi,\Phi_t), \quad U_0=(\Phi_0, \Phi_1) \in H=H_d\times H_v,$$ $$\mathcal{T}U= \begin{pmatrix} I & 0 \\ 0 & R^{-1} \end{pmatrix} \begin{pmatrix} 0 & -I \\ A & 0 \end{pmatrix} U + \begin{pmatrix} 0\\ {\Gamma}(\Phi_t) \end{pmatrix}.$$ Consequently, $D(\mathcal{T})=D(A)\times H_d\subset H$. In what follows we use the notations $$\mathcal{B}(U)= \begin{pmatrix} I & 0 \\ 0 & R^{-1} \end{pmatrix} \begin{pmatrix} 0 \\ F(\Phi) \end{pmatrix}, \quad \mathcal{P}(x,t)=\begin{pmatrix} 0 \\ P(x,t) \end{pmatrix}.$$ Thus, we can rewrite problem [\[AEq\]](#AEq){reference-type="eqref" reference="AEq"}-[\[AIC\]](#AIC){reference-type="eqref" reference="AIC"} in the form $$\label{FirstOrderForm} U_t+\mathcal{T}U +\mathcal{B}(U) = \mathcal{P}, \quad U(0)=U_0\in H.$$
*Step 2. Existence and uniqueness of a local solution.* Here we use Theorem 7.2 from [@ChuEllLa2002]. For the reader's convenience we formulate it below.

**Theorem 6** ([@ChuEllLa2002]). *Consider the initial value problem $$\label{AbstrIVP} U_t+\mathcal{T}U +B(U) = f, \quad U(0)=U_0\in H.$$ Suppose that $\mathcal{T}:D(\mathcal{T})\subset H \rightarrow H$ is a maximal monotone mapping, $0\in \mathcal{T}0$ and $B:H\rightarrow H$ is locally Lipschitz, i.e. there exists $L(K)>0$ such that $$||B(U)-B(V)||_H \le L(K)||U-V||_H, \quad ||U||_H, ||V||_H \le K.$$ If $U_0\in D(\mathcal{T})$, $f\in W_1^1(0,t;H)$ for all $t>0$, then there exists $t_{max}\le \infty$ such that [\[AbstrIVP\]](#AbstrIVP){reference-type="eqref" reference="AbstrIVP"} has a unique strong solution $U$ on $(0,t_{max})$.\
If $U_0\in \overline{D(\mathcal{T})}$, $f\in L^1(0,t;H)$ for all $t>0$, then there exists $t_{max}\le \infty$ such that [\[AbstrIVP\]](#AbstrIVP){reference-type="eqref" reference="AbstrIVP"} has a unique generalized solution $U$ on $(0,t_{max})$.\
In both cases $$\lim_{t\rightarrow t_{max}}||U(t)||_H=\infty \quad \mbox{provided} \quad t_{max}<\infty.$$*

First, we need to check that $\mathcal{T}$ is a maximal monotone operator. Monotonicity is a direct consequence of Lemma [Lemma 1](#lem:ASelfAdjont){reference-type="ref" reference="lem:ASelfAdjont"} and [\[DCont\]](#DCont){reference-type="eqref" reference="DCont"}.\
To prove $\mathcal{T}$ is maximal as an operator from $H$ to $H$, we use Theorem 1.2 from [@Bar1976 Ch. 2]. Thus, we need to prove that $Range(I+\mathcal{T})=H$. Let $z=(\Phi_z,\Psi_z)\in H_d\times H_v$. We need to find $y=(\Phi_y,\Psi_y)\in D(A)\times H_d=D(\mathcal{T})$ such that $$\begin{aligned} & -\Psi_y + \Phi_y=\Phi_z,\\ & A\Phi_y+\Psi_y + {\Gamma}(\Psi_y) = \Psi_z, \end{aligned}$$ or, equivalently, find $\Psi_y\in H_d$ such that $$M(\Psi_y)=\frac 12 A\Psi_y + \frac 12 A\Psi_y+\Psi_y +{\Gamma}(\Psi_y) = \Psi_z - A\Phi_z = \Theta_z$$ for an arbitrary $\Theta_z\in H_d'=D(A^{1/2})'$.
Naturally, due to Lemma [Lemma 1](#lem:ASelfAdjont){reference-type="ref" reference="lem:ASelfAdjont"} $A$ is a duality map between $H_d$ and $H_d'$, thus the operator $M$ is onto if and only if $\frac 12 A\Psi_y+\Psi_y +{\Gamma}(\Psi_y)$ is maximal monotone as an operator from $H_d$ to $H_d'$. According to Corollary 1.1 from [@Bar1976 Ch. 2], this operator is maximal monotone if $\frac 12 A$ is maximal monotone (it follows from Lemma [Lemma 1](#lem:ASelfAdjont){reference-type="ref" reference="lem:ASelfAdjont"}) and $I+{\Gamma}(\cdot)$ is monotone, bounded and hemicontinuous from $H_d$ to $H_d'$. The last statement is evident for the identity map; let us prove it for ${\Gamma}$.\
Monotonicity is evident. Due to the continuity of the embedding $H^1(0,L_0)\subset C(0,L_0)$ in 1D, every bounded set $X$ in $H^1(0,L_0)$ is bounded in $C(0,L_0)$, and thus due to [\[DCont\]](#DCont){reference-type="eqref" reference="DCont"} ${\Gamma}(X)$ is bounded in $C(0,L_0)$ and, consequently, in $L^2(0,L_0)$. To prove hemicontinuity we take an arbitrary $\Phi=({\varphi}, \psi, {\omega}, u,v,w) \in H_d$, an arbitrary $\Theta=(\theta_1,\theta_2, \theta_3, \theta_4,\theta_5, \theta_6)\in H_d$ and consider $$({\Gamma}(\Psi_y+t\Phi),\Theta)=\int_0^{L_0} {\gamma}(\psi_y(x) + t\psi(x))\theta_2(x)dx,$$ where $\Psi_y=({\varphi}_y, \psi_y, {\omega}_y, u_y,v_y,w_y)$. Since $\psi_y + t\psi\rightarrow\psi_y$ as $t\rightarrow 0$ in $H^1(0,L_0)$ and in $C(0,L_0)$, we obtain that ${\gamma}(\psi_y(x) + t\psi(x)) \rightarrow{\gamma}(\psi_y(x))$ as $t\rightarrow 0$ for every $x\in [0,L_0]$, and it has an integrable bound from above due to [\[DCont\]](#DCont){reference-type="eqref" reference="DCont"}. This implies ${\gamma}(\psi_y(x) + t\psi(x)) \rightarrow{\gamma}(\psi_y(x))$ in $L^1(0,L_0)$ as $t\rightarrow 0$. Since $\theta_2\in H^1(0,L_0)\subset L^\infty (0,L_0)$, then $$({\Gamma}(\Psi_y+t\Phi),\Theta) \rightarrow({\Gamma}(\Psi_y),\Theta), \quad t\rightarrow 0.$$ Hemicontinuity is proved.\
Further, we need to prove that $\mathcal{B}$ is locally Lipschitz on $H$, that is, $F$ is locally Lipschitz from $H_d$ to $H_v$. The embedding $H^{1/2+\varepsilon}(0,L)\subset C(0,L)$ and [\[NLip\]](#NLip){reference-type="eqref" reference="NLip"} imply $$|F_j(\widetilde{\Phi}^j(x))-F_j(\widehat{\Phi}^j(x))|\le C(\max( ||\widetilde{\Phi}||_d, ||\widehat{\Phi}||_d)) ||\widetilde{\Phi}^j-\widehat{\Phi}^j||_{1} \label{FLip}$$ for all $x\in [0,L_0]$ if $j=1$, and for all $x\in [L_0,L]$ if $j=2$. This, in turn, gives us the estimate $$||F(\widetilde{\Phi})-F(\widehat{\Phi})||_v\le C(\max(||\widetilde{\Phi}||_d, ||\widehat{\Phi}||_d)) ||\widetilde{\Phi}-\widehat{\Phi}||_d.$$ Thus, all the assumptions of Theorem [Theorem 6](#th:AbstrEx){reference-type="ref" reference="th:AbstrEx"} are satisfied and the existence of a local strong/generalized solution is proved.\
*Step 3. Energy inequality and global solutions.* It can be verified by direct calculation that strong solutions satisfy the energy equality.
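Formally, this is the standard multiplier computation behind [\[EE\]](#EE){reference-type="eqref" reference="EE"}: taking the $L^2$-product of [\[AEq\]](#AEq){reference-type="eqref" reference="AEq"} with $\Phi_t$ and using [\[NBoundBelow\]](#NBoundBelow){reference-type="eqref" reference="NBoundBelow"} (so that $(F(\Phi),\Phi_t)=\frac{d}{dt}\int_0^L \mathcal{F}(\Phi)\,dx$), one obtains $$\frac{d}{dt}\mathcal{E}(t) + ({\gamma}(\psi_t),\psi_t) = (P(t),\Phi_t(t)),$$ which after integration over $(0,T)$ gives [\[EE\]](#EE){reference-type="eqref" reference="EE"} with the equality sign.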
Using the same arguments as in the proof of Proposition 1.3 in [@ChuLa2007] and [\[DCont\]](#DCont){reference-type="eqref" reference="DCont"}, we can pass to the limit and prove [\[EE\]](#EE){reference-type="eqref" reference="EE"} for generalized solutions.\
Let us assume that a local generalized solution exists on a maximal interval $(0, t_{max})$, $t_{max}<\infty$. Then [\[EE\]](#EE){reference-type="eqref" reference="EE"} (combined with Gronwall's lemma) implies that $\mathcal{E}(t)$ is bounded on $(0,t_{max})$. Since due to [\[NBoundBelow\]](#NBoundBelow){reference-type="eqref" reference="NBoundBelow"} $$c_1||U(t)||^2_H - c_2 \le \mathcal{E}(t)$$ with some positive constants $c_1, c_2$, the norm $||U(t)||_H$ remains bounded as $t\rightarrow t_{max}$. Thus, we arrive at a contradiction with Theorem [Theorem 6](#th:AbstrEx){reference-type="ref" reference="th:AbstrEx"}, which implies $t_{max}=\infty$.
*Step 4. Generalized solution is variational (weak).* We formulate the following obvious estimate as a lemma for future use.

**Lemma 7**. *Let [\[NLip\]](#NLip){reference-type="eqref" reference="NLip"} hold and let $\widetilde{\Phi}$, $\widehat{\Phi}$ be two weak solutions to [\[AEq\]](#AEq){reference-type="eqref" reference="AEq"}-[\[AIC\]](#AIC){reference-type="eqref" reference="AIC"} with the initial conditions $(\widetilde{\Phi}_0, \widetilde{\Phi}_1)$ and $(\widehat{\Phi}_0, \widehat{\Phi}_1)$ respectively. Then the following estimate is valid for all $x\in [0,L], \; t>0$ and $\epsilon\in[0,1/2)$: $$|F_j(\widetilde{\Phi}^j(x,t))-F_j(\widehat{\Phi}^j(x,t))|\le C(\max(||(\widetilde{\Phi}_0, \widetilde{\Phi}_1)||_H, ||(\widehat{\Phi}_0, \widehat{\Phi}_1)||_H)) ||\widetilde{\Phi}^j(\cdot,t)-\widehat{\Phi}^j(\cdot,t)||_{1-\epsilon}, \quad j=1,2.$$*

*Proof.* The energy inequality and the embedding $H^{1/2+\varepsilon}(0,L)\subset C(0,L)$ imply that for every weak solution $\Phi$ $$\max_{t\in[0,T], x\in[0,L]} |\Phi(x,t)| \le C(||\Phi_0||_d, ||\Phi_1||_v).$$ Thus, using [\[NLip\]](#NLip){reference-type="eqref" reference="NLip"} and [\[FLip\]](#FLip){reference-type="eqref" reference="FLip"}, we prove the lemma. ◻

Evidently, [\[VEq\]](#VEq){reference-type="eqref" reference="VEq"} is valid for strong solutions. We can find a sequence of strong solutions $\Phi^{(n)}$ which converges to a generalized solution $\Phi$ strongly in $C(0,T; H_d)$, and such that $\Phi_t^{(n)}$ converges to $\Phi_t$ strongly in $C(0,T; H_v)$. Using Lemma [Lemma 7](#lem:lip){reference-type="ref" reference="lem:lip"}, we can easily pass to the limit in the nonlinear feedback term in [\[VEq\]](#VEq){reference-type="eqref" reference="VEq"}. Since the test function $B\in L^\infty(0,T;H_d)\subset L^\infty((0,T)\times (0,L))$, we can use the same arguments as in the proof of Proposition 1.6 in [@ChuLa2007] to pass to the limit in the nonlinear dissipation term. Namely, we can extract from $\Phi_t^{(n)}$ a subsequence that converges to $\Phi_t$ almost everywhere and prove that it converges to $\Phi_t$ strongly in $L^1((0,T)\times (0,L))$. ◻

**Remark 1**. *In space dimension greater than one we do not have the embedding $H^1(\Omega)\subset C(\Omega)$; therefore, we need to assume polynomial growth of the derivatives of the nonlinearities to obtain estimates similar to Lemma [Lemma 7](#lem:lip){reference-type="ref" reference="lem:lip"}.*

# Existence of attractors.

In this section we study the long-time behaviour of solutions to problem [\[AEq\]](#AEq){reference-type="eqref" reference="AEq"}-[\[AIC\]](#AIC){reference-type="eqref" reference="AIC"} in the framework of dynamical systems theory. From Theorem [Theorem 5](#th:WeakWP){reference-type="ref" reference="th:WeakWP"} we have

**Corollary 1**. *Let, additionally to the conditions of Theorem [Theorem 5](#th:WeakWP){reference-type="ref" reference="th:WeakWP"}, $P(x,t)=P(x)$.
Then [\[AEq\]](#AEq){reference-type="eqref" reference="AEq"}-[\[AIC\]](#AIC){reference-type="eqref" reference="AIC"} generates a dynamical system $(H, S_t)$ by the formula $$S_t(\Phi_0,\Phi_1)=(\Phi(t),\Phi_t(t)),$$ where $\Phi(t)$ is the weak solution to [\[AEq\]](#AEq){reference-type="eqref" reference="AEq"}-[\[AIC\]](#AIC){reference-type="eqref" reference="AIC"} with initial data $(\Phi_0,\Phi_1)$.*

To establish the existence of the attractor for this dynamical system we use Theorem [Theorem 15](#abs){reference-type="ref" reference="abs"} below; thus we need to prove the gradient property and the asymptotic smoothness, as well as the boundedness of the set of stationary points.

## Gradient structure

In this subsection we prove that the dynamical system generated by [\[AEq\]](#AEq){reference-type="eqref" reference="AEq"}-[\[AIC\]](#AIC){reference-type="eqref" reference="AIC"} possesses a specific structure, namely, it is gradient under some additional conditions on the nonlinearities.

**Definition 8** ([@Chueshov; @CFR; @CL]). *Let $Y\subseteq X$ be a positively invariant set of $(X,S_t)$.* - *a continuous functional $L(y)$ defined on $Y$ is said to be a *Lyapunov function* of the dynamical system $(X,S_t)$ on the set $Y$, if the function $t\mapsto L(S_ty)$ is non-increasing for any $y\in Y$;* - *the Lyapunov function $L(y)$ is said to be *strict* on $Y$, if the equality $L(S_{t}y)=L(y)$ *for all* $t>0$ implies $S_{t}y=y$ for all $t>0$;* - *a dynamical system $(X,S_t)$ is said to be *gradient*, if it possesses a strict Lyapunov function on the whole phase space $X$.*

The following result holds true.

**Theorem 9**. *Let, additionally to the assumptions of Corollary [Corollary 1](#DSGen){reference-type="ref" reference="DSGen"}, the following conditions hold $$\begin{aligned} & f_1=g_1=0, \quad h_1({\varphi}, \psi,{\omega})=h_1(\psi), \label{NZero}\tag{N3} \\ & f_2,\; g_2,\; h_2 \in C^1({\mathbb R}^3), \label{NAddSmooth}\tag{N4} \\ & {\gamma}(s)s > 0 \quad \mbox{ for all } s\neq 0. \label{DNonZero} \tag{D2} \end{aligned}$$ Then the dynamical system $(H, S_t)$ is gradient.*

*Proof.* We use as a Lyapunov function $$\label{lap} L(\Phi(t))=L(t) =\frac 12\left( ||R^{1/2}\Phi_t(t)||^2+ ||A^{1/2}\Phi(t)||^2\right) + \int\limits_0^L \mathcal{F}(\Phi(x,t))dx - (P,\Phi(t)).$$ Energy inequality [\[EE\]](#EE){reference-type="eqref" reference="EE"} implies that $L(t)$ is non-increasing. The equality $L(t)=L(0)$ together with [\[DNonZero\]](#DNonZero){reference-type="eqref" reference="DNonZero"} implies that $\psi_t(t)\equiv 0$ on $[0,T]$. We need to prove that $\Phi(t)\equiv const$, which is equivalent to $\Phi(t+h)-\Phi(t)=0$ for every $h>0$. In what follows we use the notation $\Phi(t+h)-\Phi(t)=\overline{\Phi}(t)=(\overline{\varphi}, \overline{\psi},\overline{{\omega}},\overline{u},\overline{v},\overline{w})(t)$.\
*Step 1.* Let us prove that $\overline{\Phi}^1\equiv 0$. In this step we use distribution theory (see, e.g., [@Kan2004]), because some functions involved in the computations have too low smoothness. Let us set the test function $B=(B^1,0)=(\beta^1,\gamma^1,\delta^1, 0,0,0)$. Then $\overline{\Phi}(t)$ satisfies $$\begin{gathered} -\int\limits_0^T (R_1 \overline{\Phi}^1_t, B_t)(t)dt - (R_1(\Phi^1_t(h)-\Phi_1^1), B^1(0)) +\\ \int\limits_0^T \left[\frac 1{k_1}(Q_1(\overline{\Phi}^1),Q_1(B^1))(t) + \frac 1{{\sigma}_1}(N_1(\overline{\Phi}^1), N_1(B^1))(t) \right] dt + \\ \int\limits_0^T (h_1(\psi(t+h))-h_1(\psi(t)), \gamma^1(t))dt =0.
\end{gathered}$$ The last term equals zero due to [\[NZero\]](#NZero){reference-type="eqref" reference="NZero"} and $\psi(t)\equiv const$.\
Setting in turn $B=(0,\gamma^1,0, 0,0,0)$, $B=(0,0,{\delta}^1, 0,0,0)$, $B=(\beta^1, 0,0, 0,0,0)$ we obtain $$\begin{aligned} & \overline{\varphi}_x+l\overline{{\omega}}=0 \qquad&\mbox{ almost everywhere on } (0,L_0)\times (0,T), \label{GrSpace} \\ & \rho_1\overline{{\omega}}_{tt} - {\sigma}_1(\overline{{\omega}}_x-l\overline{\varphi})_x =0 \qquad&\mbox{ almost everywhere on } (0,L_0)\times (0,T), \label{GrOmega} \\ & \rho_1\overline{\varphi}_{tt} - l{\sigma}_1(\overline{{\omega}}_x-l\overline{\varphi})=0 \qquad&\mbox{ in the sense of distributions on } (0,L_0)\times (0,T). \label{GrPhi} \end{aligned}$$ These equalities imply $$\label{GrTime} \overline{\varphi}_{ttx}=0, \quad \overline{{\omega}}_{tt}=0 \quad \mbox{ in the sense of distributions}.$$ Similar to regular functions, if a partial derivative of a distribution equals zero, then the distribution "does not depend" on the corresponding variable (see [@Kan2004 Ch. 7], Example 2). That is, $$\overline{{\omega}}_t=c_1(x)\times 1(t) \qquad \mbox{ in the sense of distributions}.$$ However, Theorem [Theorem 5](#th:WeakWP){reference-type="ref" reference="th:WeakWP"} implies that $\overline{{\omega}}_t$ is a regular distribution, thus we can treat the equality above as an equality almost everywhere. Furthermore, $$\overline{{\omega}}(x,t)=\overline{{\omega}}(x,0)+\int_0^t c_1(x)d\tau = \overline{{\omega}}(x,0) +tc_1(x).$$ Since $||\overline{{\omega}}(\cdot,t)||\le C$ for all $t\in{\mathbb R}_+$, $c_1(x)$ must be zero. Thus, $$\label{GrOmegaConst} \overline{{\omega}}(x,t)=c_2(x),$$ which together with [\[GrSpace\]](#GrSpace){reference-type="eqref" reference="GrSpace"} implies $$\begin{aligned} & \overline{\varphi}_x=-lc_2(x), \\ & \overline{\varphi}(x,t)= \overline{\varphi}(0,t) - l\int\limits_0^x c_2(y)dy = c_3(x), \\ & \overline{\varphi}_{tt} =0. \end{aligned}$$ The last equality together with [\[GrSpace\]](#GrSpace){reference-type="eqref" reference="GrSpace"}, [\[GrPhi\]](#GrPhi){reference-type="eqref" reference="GrPhi"} and boundary conditions [\[ABC\]](#ABC){reference-type="eqref" reference="ABC"} gives us that $\overline{\varphi}, \overline{{\omega}}$ are solutions to the following Cauchy problem (with respect to $x$): $$\begin{aligned} &\overline{{\omega}}_x = l\overline{\varphi},\\ &\overline{\varphi}_x = -l\overline{{\omega}}, \\ &\overline{{\omega}}(0,t)=\overline{\varphi}(0,t)=0. \end{aligned}$$ Consequently, $\overline{{\omega}}\equiv\overline{\varphi}\equiv 0$.
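For completeness we note that this claim follows by a direct computation with the system above: $$\partial_x\left(\overline{\varphi}^2+\overline{{\omega}}^2\right)=2\overline{\varphi}\,\overline{\varphi}_x+2\overline{{\omega}}\,\overline{{\omega}}_x=-2l\overline{\varphi}\,\overline{{\omega}}+2l\overline{{\omega}}\,\overline{\varphi}=0,$$ so $\overline{\varphi}^2+\overline{{\omega}}^2$ does not depend on $x$ and vanishes at $x=0$.\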
*Step 2.* Let us prove that $\overline{u}\equiv\overline{v}\equiv\overline{w}\equiv 0$. Due to [\[NAddSmooth\]](#NAddSmooth){reference-type="eqref" reference="NAddSmooth"}, we can use the Taylor expansion of the difference $F^2(\Phi^2(t+h))-F^2(\Phi^2(t))$, and thus $(\overline{u}, \overline{v}, \overline{w})$ satisfies on $(0,T)\times (L_0,L)$ $$\begin{aligned} & \rho_2\overline{u}_{tt}-k_2\overline{u}_{xx} +g_u(\partial_x \overline{\Phi}^2, \overline{\Phi}^2) + {\nabla}f_2(\zeta_{1,h}(x,t))\cdot\overline{\Phi}^2=0, \label{WE1} \\ & \beta_2\overline{v}_{tt} -\lambda_2 \overline{v}_{xx} + g_v(\partial_x \overline{\Phi}^2, \overline{\Phi}^2) + {\nabla}h_2(\zeta_{2,h}(x,t))\cdot\overline{\Phi}^2=0,\\ & \rho_2\overline{w}_{tt}- {\sigma}_2 \overline{w}_{xx} + g_w(\partial_x \overline{\Phi}^2, \overline{\Phi}^2) + {\nabla}g_2(\zeta_{3,h}(x,t))\cdot\overline{\Phi}^2=0, \\ & \overline{u}(L_0,t)=\overline{v}(L_0,t)=\overline{w}(L_0,t)=0,\\ & \overline{u}(L,t)=\overline{v}(L,t)=\overline{w}(L,t)=0, \\ & {k_2(\overline{u}_x+\overline{v}+l\overline{w})(L_0,t)=0}, \label{WEBC1}\\ &{\overline{v}_x(L_0,t)=0, \qquad \sigma_2(\overline{w}_x-l\overline{u})(L_0,t)=0}, \label{WEBC2}\\ & \overline{\Phi}^2(x,0)=\Phi^2(x,h)-\Phi^2_0, \quad \overline{\Phi}^2_t(x,0)=\Phi^2_t(x,h)-\Phi^2_1, \label{WEIC} \end{aligned}$$ where $g_u, g_v, g_w$ are linear combinations of $\overline{u}_x,\overline{v}_x,\overline{w}_x, \overline{u},\overline{v},\overline{w}$ with constant coefficients, and $\zeta_{j,h}(x,t)$ are 3D vector functions whose components lie between $u(x, t+h)$ and $u(x,t)$, $v(x,t+h)$ and $v(x,t)$, $w(x,t+h)$ and $w(x,t)$ respectively. Thus, we have a system of linear wave equations on $(L_0,L)$ with overdetermined boundary conditions. $L^2$-regularity of $u_x,v_x, w_x$ on the boundary for solutions to a linear wave equation was established in [@LaTri1983]; thus, boundary conditions [\[WEBC1\]](#WEBC1){reference-type="eqref" reference="WEBC1"}-[\[WEBC2\]](#WEBC2){reference-type="eqref" reference="WEBC2"} make sense.\
It is easy to generalize the observability inequality [@TriYao2002 Th. 8.1] to the case of a system of wave equations.

**Theorem 10** ([@TriYao2002]). *For the solution to problem [\[WE1\]](#WE1){reference-type="eqref" reference="WE1"}-[\[WEIC\]](#WEIC){reference-type="eqref" reference="WEIC"} the following estimate holds: $$\int_0^T [|\overline{u}_x|^2+|\overline{v}_x|^2+|\overline{w}_x|^2](L_0,t) dt\ge C(E(0)+E(T)),$$ where $$E(t)=\frac 12 \left(||\overline{u}_t(t)||^2+ ||\overline{v}_t(t)||^2 +||\overline{w}_t(t)||^2 + ||\overline{u}_x(t)||^2 +||\overline{v}_x(t)||^2+||\overline{w}_x(t)||^2 \right).$$*

Therefore, if conditions [\[WEBC1\]](#WEBC1){reference-type="eqref" reference="WEBC1"}, [\[WEBC2\]](#WEBC2){reference-type="eqref" reference="WEBC2"} hold true, then $\overline{u}=\overline{v}=\overline{w}=0$. The theorem is proved. ◻

## Asymptotic smoothness.

**Definition 11** ([@Chueshov; @CFR; @CL]). *A dynamical system $(X,S_t)$ is said to be asymptotically smooth if for any closed bounded set $B\subset X$ that is positively invariant ($S_tB\subseteq B$) one can find a compact set $\mathcal{K}=\mathcal{K}(B)$ which uniformly attracts $B$, i. e. $\sup\{{\rm dist}_X(S_ty,\mathcal{K}):\ y\in B\}\to 0$ as $t\to\infty$.*

In order to prove the asymptotic smoothness of the system considered we rely on the compactness criterion due to [@Khanmamedov], which is recalled below in an abstract version formulated in [@CL].

**Theorem 12**. *[@CL] [\[theoremCL\]]{#theoremCL label="theoremCL"} Let $(S_t, H)$ be a dynamical system on a complete metric space $H$ endowed with a metric $d$.
Assume that for any bounded positively invariant set $B$ in $H$ and for any $\varepsilon>0$ there exists $T = T(\varepsilon, B)$ such that $$\label{te} d(S_T y_1, S_T y_2) \le \varepsilon+ \Psi_{\varepsilon,B,T} (y_1, y_2), \quad y_i \in B,$$ where $\Psi_{\varepsilon,B,T} (y_1, y_2)$ is a function defined on $B \times B$ such that $$\liminf\limits_{m\to\infty}\liminf\limits_{n\to\infty}\Psi_{\varepsilon,B,T} (y_n, y_m) = 0$$ for every sequence $\{y_n\} \subset B$. Then $(S_t, H)$ is an asymptotically smooth dynamical system.*

To formulate the result on the asymptotic smoothness of the system considered we need the following lemma.

**Lemma 13**. *Let assumption [\[DCont\]](#DCont){reference-type="eqref" reference="DCont"} hold. Let, moreover, there exist a positive constant $M$ such that $$\frac{\gamma(s_1)-\gamma(s_2)}{s_1-s_2}\le M, \quad s_1, s_2\in {\mathbb R}, \,\,s_1\ne s_2.\tag{D3}\label{GammaLip1}$$ Then for any $\varepsilon>0$ there exists $C_\varepsilon>0$ such that $$\label{GammaEst} \left|\int\limits_0^{L_0}({\gamma}( \xi_1)-{\gamma}( \xi_2)) \zeta dx\right|\le \varepsilon \|\zeta\|^2+C_\varepsilon \int\limits_0^{L_0}({\gamma}( \xi_1)-{\gamma}( \xi_2)) (\xi_1-\xi_2)dx$$ for any $\xi_1, \xi_2, \zeta\in L^2(0,L_0)$.*

The proof is similar to that given in [@CL Th.5.5].

**Theorem 14**. *Let the assumptions of Theorem [Theorem 5](#th:WeakWP){reference-type="ref" reference="th:WeakWP"}, [\[GammaLip1\]](#GammaLip1){reference-type="eqref" reference="GammaLip1"}, and $$m\le \frac{\gamma(s_1)-\gamma(s_2)}{s_1-s_2}, \quad s_1, s_2\in {\mathbb R}, \,\,s_1\ne s_2\tag{D4}\label{GammaLip2}$$ with $m>0$ hold. Let, moreover, $$\begin{gathered} k_1=\sigma_1, \label{c1}\\ \frac{\rho_1}{k_1}=\frac{\beta_1}{\lambda_1}. \label{c2} \end{gathered}$$ Then the dynamical system $(H, S_t)$ generated by problem [\[Eq1\]](#Eq1){reference-type="eqref" reference="Eq1"}--[\[TC4\]](#TC4){reference-type="eqref" reference="TC4"} is asymptotically smooth.*

*Proof.* In this proof we perform all the calculations for strong solutions and then pass to the limit in the final estimate to justify it for weak solutions. Let us consider strong solutions $\hat U(t)=(\hat\Phi(t), \hat\Phi_t(t))$ and $\tilde U(t)=(\tilde\Phi(t), \tilde\Phi_t(t))$ to problem [\[Eq1\]](#Eq1){reference-type="eqref" reference="Eq1"}--[\[TC4\]](#TC4){reference-type="eqref" reference="TC4"} with initial conditions $\hat U_0=(\hat \Phi_0, \hat \Phi_1)$ and $\tilde U_0=(\tilde \Phi_0, \tilde \Phi_1)$ lying in a ball, i.e. there exists $R>0$ such that $$\label{inboun} \|\tilde U_0\|_H+\|\hat U_0\|_H\le R.$$ Denote $U(t)=\tilde U(t)-\hat U(t)$ and $U_0=\tilde U_0-\hat U_0$.
Obviously, $U(t)$ is a weak solution to the problem $$\begin{aligned} & \rho_1{\varphi}_{tt}-k_1({\varphi}_x+\psi+l{\omega})_x - l{\sigma}_1({\omega}_x-l{\varphi}) +f_1(\tilde{\varphi}, \tilde\psi, \tilde{\omega})-f_1(\hat{\varphi}, \hat\psi, \hat{\omega})=0\label{eq1}\\ & {\beta}_1\psi_{tt} -\lambda_1 \psi_{xx} +k_1({\varphi}_x+\psi+l{\omega}) +{\gamma}( \tilde\psi_t)-{\gamma}( \hat\psi_t) +h_1(\tilde{\varphi}, \tilde\psi, \tilde{\omega})-h_1(\hat{\varphi}, \hat\psi, \hat{\omega})=0\label{eq2}\\ & \rho_1{\omega}_{tt}- {\sigma}_1({\omega}_x-l{\varphi})_x+lk_1({\varphi}_x+\psi+l{\omega})+g_1(\tilde{\varphi}, \tilde\psi, \tilde{\omega})-g_1(\hat{\varphi}, \hat\psi, \hat{\omega})=0 \label{eq3}\\ & \rho_2u_{tt}-k_2(u_x+v+lw)_x - l{\sigma}_2(w_x-lu) +f_2(\tilde u, \tilde v, \tilde w)-f_2(\hat u, \hat v, \hat w)=0 \label{eq4}\\ & {\beta}_2v_{tt} -\lambda_2 v_{xx} +k_2(u_x+v+lw) +h_2(\tilde u, \tilde v, \tilde w)-h_2(\hat u, \hat v, \hat w)=0,\label{eq5}\\ & \rho_2w_{tt}- {\sigma}_2(w_x-lu)_x+lk_2(u_x+v+lw)+g_2(\tilde u, \tilde v, \tilde w)-g_2(\hat u, \hat v, \hat w)=0 \label{eq6} \end{aligned}$$ with boundary conditions [\[BC\]](#BC){reference-type="eqref" reference="BC"}, [\[TC1\]](#TC1){reference-type="eqref" reference="TC1"}--[\[TC4\]](#TC4){reference-type="eqref" reference="TC4"} and the initial conditions $U(0)=\tilde U_0-\hat U_0$; here, with a slight abuse of notation, $({\varphi},\psi,{\omega},u,v,w)$ denote the components of the difference $\Phi=\tilde\Phi-\hat\Phi$, e.g. ${\varphi}=\tilde{\varphi}-\hat{\varphi}$. It is easy to see by the energy argument that $$\label{En1} E(U(T))+ \int\limits_t^T \int\limits_0^{L_0}({\gamma}( \tilde\psi_s)-{\gamma}( \hat\psi_s)) \psi_s dx ds=E(U(t))+ \int\limits_t^T H(\hat U(s),\tilde U(s)) ds,$$ where $$\begin{gathered} \label{H} H(\hat U(t),\tilde U(t))=\int\limits_0^{L_0}(f_1(\hat{\varphi}, \hat\psi, \hat{\omega})-f_1(\tilde{\varphi}, \tilde\psi, \tilde{\omega})){\varphi}_t dx+\int\limits_0^{L_0}(h_1(\hat{\varphi}, \hat\psi, \hat{\omega})-h_1(\tilde{\varphi}, \tilde\psi, \tilde{\omega}))\psi_t dx\\+ \int\limits_0^{L_0}(g_1(\hat{\varphi}, \hat\psi, \hat{\omega})-g_1(\tilde{\varphi}, \tilde\psi, \tilde{\omega}))\omega_t dx+ \int\limits_{L_0}^L(f_2(\hat u, \hat v, \hat w)-f_2(\tilde u, \tilde v, \tilde w)) u_t dx\\+\int\limits_{L_0}^L(h_2(\hat u, \hat v, \hat w)-h_2(\tilde u, \tilde v, \tilde w))v_t dx+ \int\limits_{L_0}^L(g_2(\hat u, \hat v, \hat w)-g_2(\tilde u, \tilde v, \tilde w))w_t dx, \end{gathered}$$ and $$\label{E} E(t)=E_1(t)+E_2(t),$$ here $$\begin{gathered} \label{E1} E_1(t)=\rho_1\int\limits_0^{L_0} \omega_t^2dx +\rho_1 \int\limits_0^{L_0}{\varphi}_{t}^2 dx +\beta_1 \int\limits_0^{L_0}\psi_{t}^2 dx+\sigma_1\int\limits_0^{L_0} (\omega_x-l{\varphi})^2 dx\\+k_1\int\limits_0^{L_0} ({\varphi}_x+\psi+l{\omega})^2 dx+\lambda_1\int\limits_0^{L_0} \psi_x^2 dx \end{gathered}$$ and $$\begin{gathered} \label{E2} E_2(t)=\rho_2\int\limits_{L_0}^{L} w_t^2dx +\rho_2 \int\limits_{L_0}^{L}u_{t}^2 dx +\beta_2 \int\limits_{L_0}^{L}v_{t}^2 dx+\sigma_2\int\limits_{L_0}^{L} (w_x-lu)^2 dx\\+k_2\int\limits_{L_0}^{L} (u_x+v+lw)^2 dx+\lambda_2\int\limits_{L_0}^{L} v_x^2 dx. \end{gathered}$$ Integrating [\[En1\]](#En1){reference-type="eqref" reference="En1"} over the interval $(0,T)$ we come to $$\label{En2} TE(U(T))+\int\limits_0^T \int\limits_t^T \int\limits_0^{L_0} ({\gamma}( \tilde\psi_s)-{\gamma}( \hat\psi_s))\psi_s dx ds dt\\=\int\limits_0^T E(U(t)) dt+ \int\limits_0^T \int\limits_t^T H(\hat U(s),\tilde U(s)) ds dt.$$ Now we estimate the first term in the right-hand side of [\[En2\]](#En2){reference-type="eqref" reference="En2"}.
In what follows we present formal estimates which can be performed on strong solutions.\ *Step 1.* We multiply equation [\[eq3\]](#eq3){reference-type="eqref" reference="eq3"} by $\omega$ and $x\cdot\omega_x$ and sum up the results. After integration by parts with respect to $t$ we obtain $$\begin{gathered} %\label{o1} \rho_1 \int\limits_0^T \int\limits_0^{L_0} \omega_t x \omega_{tx} dx dt +\rho_1 \int\limits_0^T \int\limits_0^{L_0} \omega_t^2 dx dt+\sigma_1 \int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi})_x x \omega_{x} dx dt\\ +\sigma_1 \int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi})_x \omega dx dt-k_1 l \int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l\omega)x\omega_x dx dt\\ -k_1 l \int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l\omega)\omega dx dt- \int\limits_0^T \int\limits_0^{L_0}(g_1(\tilde{\varphi},\tilde\psi,\tilde\omega)-g_1(\hat{\varphi},\hat\psi,\hat\omega))(x\omega_x+\omega) dx dt\\ =\rho_1 \int\limits_0^{L_0} \omega_t(x,T) x \omega_{x}(x,T) dx+\rho_1 \int\limits_0^{L_0} \omega_t(x,T) \omega(x,T) dx \end{gathered}$$ $$\label{o1} -\rho_1 \int\limits_0^{L_0} \omega_t(x,0) x \omega_{x}(x,0) dx-\rho_1 \int\limits_0^{L_0} \omega_t(x,0) \omega(x,0) dx.$$ Integrating by parts with respect to $x$ we get $$\label{o2} \rho_1 \int\limits_0^T \int\limits_0^{L_0} \omega_t x \omega_{tx} dx dt =-\frac{\rho_1}{2} \int\limits_0^T \int\limits_0^{L_0} \omega_t^2 dx dt+\frac{\rho_1L_0}{2}\int\limits_0^T\omega_t^2(L_0,t)dt$$ and $$\begin{gathered} \label{o3} \sigma_1 \int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi})_x x \omega_{x} dx dt -k_1 l \int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l\omega)x\omega_x dx dt\\ = \sigma_1 \int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi})_x x (\omega_x-l{\varphi}) dx dt+\sigma_1 l\int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi})_x x {\varphi}dx dt\\-k_1 l \int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l\omega)x\omega_x dx dt= - \frac{\sigma_1}{2} \int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi})^2 dx dt\\ +\frac{\sigma_1 L_0}{2} \int\limits_0^T (\omega_x-l{\varphi})^2(L_0,t) dt-\sigma_1 l\int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi}) {\varphi}dx dt\\ -2\sigma_1 l\int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi})x({\varphi}_x +\psi+l\omega)dx dt+\sigma_1 l\int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi})x(\psi+l\omega)dx dt\\ -\sigma_1 lL_0\int\limits_0^T (\omega_x-l{\varphi}) (L_0,t) {\varphi}(L_0,t) dt-k_1 l^2 \int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l\omega)x{\varphi}dx dt. \end{gathered}$$ Analogously, $$\begin{gathered} \label{o4} \sigma_1 \int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi})_x \omega dx dt=-\sigma_1 \int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi})^2 dx dt\\ +\sigma_1 \int\limits_0^T (\omega_x-l{\varphi})(L_0,t) \omega(L_0,t) dt-l\sigma_1 \int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi}){\varphi}dx dt. 
\end{gathered}$$ It follows from Lemma [Lemma 7](#lem:lip){reference-type="ref" reference="lem:lip"}, energy relation [\[EE\]](#EE){reference-type="eqref" reference="EE"}, and property [\[NBoundBelow\]](#NBoundBelow){reference-type="eqref" reference="NBoundBelow"} that $$\label{nn} \int\limits_0^T \int\limits_0^{L_0}|g_1(\tilde{\varphi},\tilde\psi,\tilde\omega)-g_1(\hat{\varphi},\hat\psi,\hat\omega)|^2 dx dt \le C(R,T)\max\limits_{t\in[0,T]} \|\Phi(\cdot,t)\|_{H^{1-\epsilon}}^2,\, 0<\epsilon<1/2.$$ Therefore, for every $\varepsilon>0$ $$\label{o5} \left|\int\limits_0^T \int\limits_0^{L_0}(g_1(\tilde{\varphi},\tilde\psi,\tilde\omega)-g_1(\hat{\varphi},\hat\psi,\hat\omega))(x\omega_x+\omega) dx dt\right|\\ \le \varepsilon\int\limits_0^T\|\omega_x-l{\varphi}\|^2 dt+C(\varepsilon,R,T) lot,$$ where we use the notation $$\begin{gathered} \label{lot} lot=\max\limits_{t\in[0,T]} (\| {\varphi}(\cdot,t)\|_{H^{1-\epsilon}}^2+\| \psi(\cdot,t)\|_{H^{1-\epsilon}}^2+ \|\omega(\cdot,t)\|_{H^{1-\epsilon}}^2\\ +\| u(\cdot,t)\|_{H^{1-\epsilon}}^2+ \|v(\cdot,t)\|_{H^{1-\epsilon}}^2+\|w(\cdot,t)\|_{H^{1-\epsilon}}^2),\quad 0<\epsilon<1/2. \end{gathered}$$ Similar estimates hold for nonlinearities $g_2$, $f_i$, $h_i$, $i=1,2$.\ We note that for any $\eta\in H^1(0,L_0)$ (or analogously $\eta\in H^1(L_0, L)$) $$\label{lote} \eta(L_0)\le \sup\limits_{(0,L_0)}|\eta|\le C \| \eta\|_{H^{1-\epsilon}}, \quad 0<\epsilon<1/2.$$ Since due to [\[c1\]](#c1){reference-type="eqref" reference="c1"} $$\begin{gathered} 2\sigma_1 l\left|\int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi})x({\varphi}_x +\psi+l\omega)dx dt\right|\\\le \frac{\sigma_1}{16} \int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi})^2 dx dt+16k_1 l^2L_0^2\int\limits_0^T \int\limits_0^{L_0} ({\varphi}_x +\psi+l\omega)^2dx dt, \end{gathered}$$ the following estimate can be obtained from [\[o1\]](#o1){reference-type="eqref" reference="o1"}-- [\[o5\]](#o5){reference-type="eqref" reference="o5"} $$\begin{gathered} \label{ol1} \frac{\rho_1}{2} \int\limits_0^T \int\limits_0^{L_0} \omega_t^2dx dt +\frac{\rho_1L_0}{2}\int\limits_0^T\omega_t^2(L_0,t)dt+\frac{13\sigma_1L_0}{8} \int\limits_0^T (\omega_x-l{\varphi})^2(L_0,t) dt\\ \le \frac{13\sigma_1}{8} \int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi})^2 dx dt+17k_1 l^2L_0^2 \int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l\omega)^2 dx dt+C(R,T)lot\\+C(E(0)+E(T)), \end{gathered}$$ where $C>0$.\ *Step 2.* Multiplying equation [\[eq3\]](#eq3){reference-type="eqref" reference="eq3"} by $\omega$ and $(x-L_0)\cdot\omega_x$ and arguing as above we come to the estimate $$\begin{gathered} \label{ol2} \frac{\rho_1}{2} \int\limits_0^T \int\limits_0^{L_0} \omega_t^2dx dt+\frac{13\sigma_1L_0}{8} \int\limits_0^T (\omega_x-l{\varphi})^2(0,t) dt \le \frac{13\sigma_1}{8} \int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi})^2 dx dt\\+17k_1 l^2L_0^2 \int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l\omega)^2 dx dt+C(R,T)lot+C(E(0)+E(T)). 
\end{gathered}$$ Summing up estimates [\[ol1\]](#ol1){reference-type="eqref" reference="ol1"} and [\[ol2\]](#ol2){reference-type="eqref" reference="ol2"} and multiplying the result by $\frac{1}{2}$ we get $$\begin{gathered} \label{ol3} \frac{\rho_1}{2}\int\limits_0^T \int\limits_0^{L_0} \omega_t^2dx dt +\frac{\rho_1L_0}{4}\int\limits_0^T\omega_t^2(L_0,t)dt+\frac{3\sigma_1L_0}{16} \int\limits_0^T (\omega_x-l{\varphi})^2(L_0,t) dt\\+\frac{3\sigma_1L_0}{16} \int\limits_0^T (\omega_x-l{\varphi})^2(0,t) dt\le \frac{13\sigma_1}{8} \int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi})^2 dx dt\\+17k_1 l^2L_0^2 \int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l\omega)^2 dx dt+C(R,T)lot+C(E(0)+E(T)). \end{gathered}$$ *Step 3.* Next we multiply equation [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"} by $-\frac{1}{l}(\omega_x-l{\varphi})$, equation [\[eq3\]](#eq3){reference-type="eqref" reference="eq3"} by $\frac{1}{l}{\varphi}_x$, summing up the results and integrating by parts with respect to $t$ we arrive at $$\begin{gathered} %\label{o6} \frac{\rho_1}{l}\int\limits_0^T \int\limits_0^{L_0}{\varphi}_{t}(\omega_{tx}-l{\varphi}_t)dx dt+\frac{k_1}{l}\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})_x (\omega_x-l{\varphi}) dx dt\\ + {\sigma}_1\int\limits_0^T \int\limits_0^{L_0}({\omega}_x-l{\varphi})^2 dx dt- \frac{1}{l}\int\limits_0^T \int\limits_0^{L_0}(f_1(\tilde{\varphi},\tilde\psi,\tilde\omega)- f_1(\hat{\varphi},\hat\psi,\hat\omega))(\omega_x-l{\varphi}) dx dt\\ +\frac{\rho_1}{l}\int\limits_0^T \int\limits_0^{L_0}{\omega}_{t}{\varphi}_{tx} dx dt+ \frac{{\sigma}_1}{l}\int\limits_0^T \int\limits_0^{L_0}({\omega}_x-l{\varphi})_x{\varphi}_{x} dx dt\\ -k_1\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega}){\varphi}_{x} dx dt-\int\limits_0^T \int\limits_0^{L_0}(g_1(\tilde{\varphi},\tilde\psi,\tilde\omega)-g_1(\hat{\varphi},\hat\psi,\hat\omega)){\varphi}_{x} dx dt\\ =\frac{\rho_1}{l} \int\limits_0^{L_0}{\varphi}_{t}(x,T)(\omega_{x}-l{\varphi})(x,T)dx-\frac{\rho_1}{l} \int\limits_0^{L_0}{\varphi}_{t}(x,0)(\omega_{x}-l{\varphi})(x,0)dx \end{gathered}$$ $$\label{o6} \qquad\qquad\qquad\qquad +\frac{\rho_1}{l} \int\limits_0^{L_0}{\omega}_{t}(x,T){\varphi}_{x}(x,T) dx-\frac{\rho_1}{l} \int\limits_0^{L_0}{\omega}_{t}(x,0){\varphi}_{x}(x,0) dx.$$ Integrating by parts with respect to $x$ we obtain $$\left|\frac{\rho_1}{l}\int\limits_0^T \int\limits_0^{L_0}{\varphi}_{t}\omega_{tx}dx dt+\frac{\rho_1}{l}\int\limits_0^T \int\limits_0^{L_0}{\omega}_{t}{\varphi}_{tx} dx dt\right| =\left|\frac{\rho_1}{l}\int\limits_0^T {\varphi}_{t}(L_0,t)\omega_{t}(L_0,t) dt\right|$$ $$\label{o7} \qquad\qquad \qquad\qquad \qquad\qquad \qquad\qquad\le \frac{\rho_1L_0}{8}\int\limits_0^T\omega_{t}^2(L_0,t) dt+\frac{2\rho_1}{l^2L_0}\int\limits_0^T {\varphi}_{t}^2(L_0,t) dt.$$ Taking into account [\[c1\]](#c1){reference-type="eqref" reference="c1"} we get $$\begin{gathered} \label{o8} \frac{k_1}{l}\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})_x (\omega_x-l{\varphi}) dx dt+\frac{{\sigma}_1}{l}\int\limits_0^T \int\limits_0^{L_0}({\omega}_x-l{\varphi})_x{\varphi}_{x} dx dt\\ =\frac{k_1}{l}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})(L_0,t) (\omega_x-l{\varphi})(L_0,t)dt -\frac{k_1}{l}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})(0,t)(\omega_x-l{\varphi})(0,t)dt\\ +\frac{k_1}{l}\int\limits_0^T \int\limits_0^{L_0}\psi_x (\omega_x-l{\varphi}) dx dt+{\sigma}_1\int\limits_0^T \int\limits_0^{L_0}(\omega_x-l{\varphi})^2 dx dt+{\sigma}_1 l\int\limits_0^T 
\int\limits_0^{L_0}(\omega_x-l{\varphi}){\varphi}dx dt. \end{gathered}$$ Using the estimates $$\begin{gathered} \left|\frac{k_1}{l}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})(L_0,t) (\omega_x-l{\varphi})(L_0,t)dt\right|\\\le \frac{4k_1}{l^2L_0}\int\limits_0^T({\varphi}_x+\psi+l{\omega})^2(L_0,t)dt+ \frac{{\sigma}_1L_0}{16}\int\limits_0^T(\omega_x-l{\varphi})^2(L_0,t)dt, \end{gathered}$$ $$\left|\frac{k_1}{l}\int\limits_0^T \int\limits_0^{L_0}\psi_x (\omega_x-l{\varphi}) dx dt\right|\le \frac{4k_1}{l^2}\int\limits_0^T \int\limits_0^{L_0}\psi_x^2 dx dt+ \frac{{\sigma}_1}{16}\int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi})^2 dx dt$$ and [\[o6\]](#o6){reference-type="eqref" reference="o6"}--[\[o8\]](#o8){reference-type="eqref" reference="o8"} we infer $$\begin{gathered} \label{ol4} \frac{15{\sigma}_1}{8}\int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi})^2 dx dt\le \rho_1 \int\limits_0^T \int\limits_0^{L_0} {\varphi}_t^2 dx dt+ 2k_1\int\limits_0^T \int\limits_0^{L_0} ({\varphi}_{x}+\psi+l\omega)^2dx dt\\ +\frac{4k_1}{l^2} \int\limits_0^T \int\limits_0^{L_0} \psi_x^2 dx dt+\frac{4k_1}{l^2L_0}\int\limits_0^T ({\varphi}_{x}+\psi+l\omega)^2(L_0,t) dt+\frac{{\sigma}_1L_0}{8}\int\limits_0^T(\omega_x-l{\varphi})^2(L_0,t) dt \\ + \frac{4k_1}{l^2L_0}\int\limits_0^T({\varphi}_{x}+\psi+l\omega)^2(0,t) dt+\frac{{\sigma}_1L_0}{8}\int\limits_0^T(\omega_x-l{\varphi})^2(0,t)dt\\ \frac{\rho_1L_0}{8}\int\limits_0^T \omega_t^2(L_0,t) dt+\frac{2\rho_1}{l^2L_0}\int\limits_0^T{\varphi}_t^2(L_0,t) dt+ C(R,T)lot+C(E(0)+E(T)). \end{gathered}$$ Adding [\[ol4\]](#ol4){reference-type="eqref" reference="ol4"} to [\[ol3\]](#ol3){reference-type="eqref" reference="ol3"} we obtain $$\begin{gathered} \label{ol5} \frac{{\sigma}_1}{4}\int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi})^2 dx dt+\frac{\rho_1}{2}\int\limits_0^T \int\limits_0^{L_0} \omega_t^2dx dt +\frac{\rho_1L_0}{8}\int\limits_0^T\omega_t^2(0,t)dt+\frac{\sigma_1L_0}{16} \int\limits_0^T (\omega_x-l{\varphi})^2(L_0,t) dt\\+\frac{\sigma_1L_0}{16} \int\limits_0^T (\omega_x-l{\varphi})^2(L_0,t) dt\le \rho_1 \int\limits_0^T \int\limits_0^{L_0} {\varphi}_t^2 dx dt+ k_1(2+17l^2L_0^2)\int\limits_0^T \int\limits_0^{L_0} ({\varphi}_{x}+\psi+l\omega)^2dx dt\\+\frac{4k_1}{l^2L_0}\int\limits_0^T ({\varphi}_{x}+\psi+l\omega)^2(L_0,t) dt+ \frac{4k_1}{l^2L_0}\int\limits_0^T({\varphi}_{x}+\psi+l\omega)^2(0,t) dt\\+\frac{4k_1}{l^2} \int\limits_0^T \int\limits_0^{L_0} \psi_x^2 dx dt +\frac{2\rho_1}{l^2L_0}\int\limits_0^T{\varphi}_t^2(L_0,t) dt+C(R,T)lot+C(E(0)+E(T)). \end{gathered}$$ *Step 4.* Now we multiply equation [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"} by $-\frac{16}{l^2L_0^2}x{\varphi}_x$ and $-\frac{16}{l^2L_0^2}(x-L_0){\varphi}_x$ and sum up the results. 
After integration by parts with respect to $t$ we get $$\begin{gathered} %\label{p1} \frac{16\rho_1}{l^2L_0^2}\int\limits_0^T \int\limits_0^{L_0}{\varphi}_{t}x{\varphi}_{tx}dx dt+\frac{16\rho_1}{l^2L_0^2}\int\limits_0^T \int\limits_0^{L_0}{\varphi}_{t}(x-L_0){\varphi}_{tx}dx dt\\ +\frac{16k_1}{l^2L_0^2}\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})_xx{\varphi}_xdx dt+\frac{16k_1}{l^2L_0^2}\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})_x(x-L_0){\varphi}_xdx dt \end{gathered}$$ $$\begin{gathered} \label{p1} + \frac{16{\sigma}_1}{lL_0^2}\int\limits_0^T \int\limits_0^{L_0}({\omega}_x-l{\varphi})x{\varphi}_xdx dt+\frac{16{\sigma}_1}{lL_0^2}\int\limits_0^T \int\limits_0^{L_0}({\omega}_x-l{\varphi})(x-L_0){\varphi}_xdx dt \\ -\frac{16}{l^2L_0^2}\int\limits_0^T \int\limits_0^{L_0}(f_1(\tilde{\varphi},\tilde\psi,\tilde\omega)-f_1(\hat{\varphi},\hat\psi,\hat\omega))(2x-L_0){\varphi}_xdx dt\\ =\frac{16\rho_1}{l^2L_0^2} \int\limits_0^{L_0}{\varphi}_{t}(x,T)(2x-L_0){\varphi}_{x}(x,T)dx-\frac{16\rho_1}{l^2L_0^2} \int\limits_0^{L_0}{\varphi}_{t}(x,T)(2x-L_0){\varphi}_{x}(x,T)dx . \end{gathered}$$ It is easy to see that $$\frac{16\rho_1}{l^2L_0^2}\int\limits_0^T \int\limits_0^{L_0}{\varphi}_{t}x{\varphi}_{tx}dx dt+\frac{16\rho_1}{l^2L_0^2}\int\limits_0^T \int\limits_0^{L_0}{\varphi}_{t}(x-L_0){\varphi}_{tx}dx dt$$ $$\label{p2} \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad =-\frac{16\rho_1}{l^2L_0^2}\int\limits_0^T \int\limits_0^{L_0}{\varphi}_{t}^2 dx dt+\frac{8\rho_1}{l^2L_0}\int\limits_0^T{\varphi}_{t}^2 (L_0,t) dt$$ and $$\begin{gathered} %\label{p3} \frac{16k_1}{l^2L_0^2}\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})_xx{\varphi}_xdx dt+\frac{16k_1}{l^2L_0^2}\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})_x(x-L_0){\varphi}_xdx dt\\ =-\frac{16k_1}{l^2L_0^2}\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})^2dx dt+\frac{8k_1}{l^2L_0}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})^2(0,t) dt\\+\frac{8k_1}{l^2L_0}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})^2(L_0,t) dt -\frac{16k_1}{l^2L_0^2}\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})_xx(\psi+l{\omega})dx dt\\ -\frac{16k_1}{l^2L_0^2}\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})_x(x-L_0)(\psi+l{\omega})dx dt\\ =-\frac{16k_1}{l^2L_0^2}\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})^2dx dt+\frac{8k_1}{l^2L_0}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})^2(0,t) dt\\ +\frac{8k_1}{l^2L_0}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})^2(L_0,t) dt-\frac{16k_1}{l^2L_0}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})(L_0,t)(\psi+l{\omega})(L_0,t) dt \end{gathered}$$ $$\begin{gathered} \label{p3} +\frac{32k_1}{l^2L_0^2}\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})(\psi+l{\omega})dx dt+ +\frac{16k_1}{lL_0^2}\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})(2x-L_0)({\omega}_x-l{\varphi})dx dt\\+\frac{16k_1}{l^2L_0^2}\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})(2x-L_0)\psi_xdx dt +\frac{16k_1}{L_0^2}\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})(2x-L_0){\varphi}dx dt. 
\end{gathered}$$ Moreover, $$\begin{gathered} %\label{p4} \frac{16{\sigma}_1}{lL_0^2}\int\limits_0^T \int\limits_0^{L_0}({\omega}_x-l{\varphi})x{\varphi}_xdx dt+\frac{16{\sigma}_1}{lL_0^2}\int\limits_0^T \int\limits_0^{L_0}({\omega}_x-l{\varphi})(x-L_0){\varphi}_xdx dt\\ = \frac{16{\sigma}_1}{lL_0^2}\int\limits_0^T \int\limits_0^{L_0}({\omega}_x-l{\varphi})(2x-L_0)({\varphi}_x+\psi+l{\omega})dx dt \end{gathered}$$ $$\label{p4} \qquad\qquad\qquad \qquad\qquad\qquad- \frac{16{\sigma}_1}{lL_0^2}\int\limits_0^T \int\limits_0^{L_0}({\omega}_x-l{\varphi})(2x-L_0)(\psi+l{\omega})dx dt.$$ Collecting [\[p1\]](#p1){reference-type="eqref" reference="p1"}--[\[p4\]](#p4){reference-type="eqref" reference="p4"} and using the estimates $$\begin{gathered} \left|\frac{32k_1}{lL_0^2}\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})(2x-L_0)({\omega}_x-l{\varphi})dx dt\right|\\\le \frac{{\sigma}_1}{8}\int\limits_0^T \int\limits_0^{L_0}({\omega}_x-l{\varphi})^2dx dt+\frac{2046k_1}{l^2L_0^2}\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})^2dx dt \end{gathered}$$ and $$\begin{gathered} \left|\frac{16k_1}{l^2L_0^2}\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})(2x-L_0)\psi_xdx dt\right|\\\le \frac{k_1}{l^2}\int\limits_0^T \int\limits_0^{L_0}\psi_x^2dx dt+\frac{64k_1}{l^2L_0^2}\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})^2dx dt \end{gathered}$$ we come to $$% \label{ol6} \frac{7k_1}{l^2L_0}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})^2(L_0,t) dt+\frac{7k_1}{l^2L_0}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})^2(0,t) dt\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad$$ $$\begin{gathered} \label{ol6} +\frac{8\rho_1}{l^2L_0}\int\limits_0^T {\varphi}_t^2(L_0,t) dt\le \frac{16\rho_1}{l^2L_0^2}\int\limits_0^T \int\limits_0^{L_0} {\varphi}_t^2 dx dt+ \frac{2150 k_1}{l^2L_0^2}\int\limits_0^T \int\limits_0^{L_0} ({\varphi}_{x}+\psi+l\omega)^2dx dt\\+\frac{k_1}{l^2} \int\limits_0^T \int\limits_0^{L_0} \psi_x^2 dx dt+\frac{3{\sigma}_1}{16}\int\limits_0^T \int\limits_0^{L_0}({\omega}_x-l{\varphi})^2dx dt+C(R,T)lot+C(E(0)+E(T)). \end{gathered}$$ Adding [\[ol6\]](#ol6){reference-type="eqref" reference="ol6"} to [\[ol5\]](#ol5){reference-type="eqref" reference="ol5"} we arrive at $$\begin{gathered} \label{ol7} \frac{{\sigma}_1}{16}\int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi})^2 dx dt+\frac{\rho_1}{2}\int\limits_0^T \int\limits_0^{L_0} \omega_t^2dx dt +\frac{\rho_1L_0}{8}\int\limits_0^T\omega_t^2(L_0,t)dt\\+\frac{\sigma_1L_0}{16} \int\limits_0^T (\omega_x-l{\varphi})^2(L_0,t) dt+\frac{\sigma_1L_0}{16} \int\limits_0^T (\omega_x-l{\varphi})^2(0,t) dt\\ + \frac{3k_1}{l^2L_0}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})^2(L_0,t) dt+\frac{3k_1}{l^2L_0}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})^2(0,t) dt\\+\frac{6\rho_1}{l^2L_0}\int\limits_0^T {\varphi}_t^2(L_0,t) dt \le \rho_1 \left(1+\frac{16}{l^2L_0^2} \right)\int\limits_0^T \int\limits_0^{L_0} {\varphi}_t^2 dx dt\\+ k_1\left(2+17l^2L_0^2+\frac{2150}{l^2L_0^2}\right)\int\limits_0^T \int\limits_0^{L_0} ({\varphi}_{x}+\psi+l\omega)^2dx dt\\+\frac{5k_1}{l^2} \int\limits_0^T \int\limits_0^{L_0} \psi_x^2 dx dt+C(R,T)lot+C(E(0)+E(T)). 
\end{gathered}$$ *Step 5.* Next we multiply equation [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"} by $-\left(1+\frac{18}{l^2L_0^2}\right){\varphi}$ and integrate by parts with respect to $t$ $$\begin{gathered} \label{p5} \rho_1\left(1+\frac{18}{l^2L_0^2}\right)\int\limits_0^T \int\limits_0^{L_0}{\varphi}_{t}^2 dx dt+k_1\left(1+\frac{18}{l^2L_0^2}\right)\int\limits_0^T\int\limits_0^{L_0} ({\varphi}_x+\psi+l{\omega})_x{\varphi}dx dt \\ + l{\sigma}_1\left(1+\frac{18}{l^2L_0^2}\right)\int\limits_0^T\int\limits_0^{L_0}({\omega}_x-l{\varphi}){\varphi}dx dt -\left(1+\frac{18}{l^2L_0^2}\right)\int\limits_0^T\int\limits_0^{L_0}(f_1(\tilde{\varphi}, \tilde\psi,\tilde{\omega})-f_1(\hat{\varphi},\hat\psi,\hat{\omega})){\varphi}dx dt=\\ \rho_1\left(1+\frac{18}{l^2L_0^2}\right) \int\limits_0^{L_0}({\varphi}_t(x,T){\varphi}(x,T)-{\varphi}_t(x,0){\varphi}(x,0))dx. \end{gathered}$$ Since $$\begin{gathered} \label{p6} k_1\left(1+\frac{18}{l^2L_0^2}\right)\int\limits_0^T\int\limits_0^{L_0} ({\varphi}_x+\psi+l{\omega})_x{\varphi}dx dt =-k_1\left(1+\frac{18}{l^2L_0^2}\right)\int\limits_0^T \int\limits_0^{L_0} ({\varphi}_x+\psi+l{\omega})^2 dx dt\\ +k_1\left(1+\frac{18}{l^2L_0^2}\right)\int\limits_0^T ({\varphi}_x+\psi+l{\omega})(L_0,t){\varphi}(L_0,t) dt\\ +k_1\left(1+\frac{18}{l^2L_0^2}\right)\int\limits_0^T ({\varphi}_x+\psi+l{\omega})(\psi+l{\omega}) dx dt \end{gathered}$$ we obtain the estimate $$\begin{gathered} \label{ol8} \rho_1\left(1+\frac{17}{l^2L_0^2}\right)\int\limits_0^T \int\limits_0^{L_0}{\varphi}_{t}^2 dx dt\le k_1\left(2+\frac{18}{l^2L_0^2}\right)\int\limits_0^T\int\limits_0^{L_0} ({\varphi}_x+\psi+l{\omega})^2 dx dt\\ +\frac{k_1}{l^2L_0}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})^2(L_0,t) dt+\frac{{\sigma}_1}{32}\int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi})^2 dx dt\\+C(R,T)lot+C(E(0)+E(T)). \end{gathered}$$ Summing up [\[ol7\]](#ol7){reference-type="eqref" reference="ol7"} and [\[ol8\]](#ol8){reference-type="eqref" reference="ol8"} we get $$\begin{gathered} %\label{ol9} \frac{{\sigma}_1}{32}\int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi})^2 dx dt+\frac{\rho_1}{2}\int\limits_0^T \int\limits_0^{L_0} \omega_t^2dx dt +\frac{\rho_1L_0}{8}\int\limits_0^T\omega_t^2(L_0,t)dt\\+\frac{\sigma_1L_0}{16} \int\limits_0^T (\omega_x-l{\varphi})^2(L_0,t) dt+\frac{\sigma_1L_0}{16} \int\limits_0^T (\omega_x-l{\varphi})^2(0,t) dt\\+ \frac{2k_1}{l^2L_0}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})^2(L_0,t) dt+\frac{2k_1}{l^2L_0}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})^2(0,t) dt\\ +\frac{6\rho_1}{l^2L_0}\int\limits_0^T {\varphi}_t^2(L_0,t) dt+ \frac{1}{l^2L_0^2}\int\limits_0^T \int\limits_0^{L_0}{\varphi}_{t}^2 dx dt\\ \le k_1\left(4+17l^2L_0^2+\frac{2200}{l^2L_0^2}\right)\int\limits_0^T ({\varphi}_x+\psi+l{\omega})^2 dx dt \end{gathered}$$ $$\label{ol9} \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\frac{6k_1}{l^2} \int\limits_0^T \int\limits_0^{L_0} \psi_x^2 dx dt+C(R,T)lot+C(E(0)+E(T)).$$ *Step 6.* Next we multiply equation [\[eq2\]](#eq2){reference-type="eqref" reference="eq2"} by $C_1({\varphi}_x+\psi+l{\omega})$ and equation [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"} by $C_1\frac{\beta_1}{\rho_1}\psi_x$, where $C_1=2(6+17l^2L_0^2+\frac{2200}{l^2L_0^2})$. Then we sum up the results and integrate by parts with respect to $t$. 
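The constant $C_1$ is chosen to dominate the coefficient $k_1\left(4+17l^2L_0^2+\frac{2200}{l^2L_0^2}\right)$ appearing on the right-hand side of [\[ol9\]](#ol9){reference-type="eqref" reference="ol9"}: indeed, $$\frac{C_1k_1}{2}-k_1\left(4+17l^2L_0^2+\frac{2200}{l^2L_0^2}\right)=2k_1,$$ which is exactly the coefficient of $\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})^2dx dt$ retained on the left-hand side of [\[ol11\]](#ol11){reference-type="eqref" reference="ol11"} below.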
Taking into account [\[c1\]](#c1){reference-type="eqref" reference="c1"}, [\[c2\]](#c2){reference-type="eqref" reference="c2"} we come to $$\begin{gathered} \label{p7} -\beta_1C_1 \int\limits_0^T \int\limits_0^{L_0}{\varphi}_{t}\psi_{tx}dx dt-\lambda_1C_1 \int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})_x\psi_x dx dt\\ - lC_1\lambda_1\int\limits_0^T \int\limits_0^{L_0}({\omega}_x-l{\varphi})\psi_x dx dt +C_1\frac{\beta_1}{\rho_1}\int\limits_0^T \int\limits_0^{L_0}(f_1(\tilde{\varphi}, \tilde\psi, \tilde{\omega})-f_1(\hat{\varphi}, \hat\psi, \hat{\omega}))\psi_x dx dt\\ -{\beta}_1C_1 \int\limits_0^T \int\limits_0^{L_0}\psi_{t}({\varphi}_{xt}+\psi_t+l{\omega}_t)dx dt -\lambda_1C_1 \int\limits_0^T \int\limits_0^{L_0}\psi_{xx}({\varphi}_x+\psi+l{\omega})dx dt\\ +k_1C_1 \int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})^2 dx dt + C_1 \int\limits_0^T \int\limits_0^{L_0}({\gamma}(\tilde\psi_t)-{\gamma}(\hat\psi_t)) ({\varphi}_x+\psi+l{\omega})dx dt\\ +C_1\int\limits_0^T \int\limits_0^{L_0}(h_1(\tilde{\varphi}, \tilde\psi, \tilde{\omega})-h_1(\hat{\varphi}, \hat\psi, \hat{\omega}))({\varphi}_x+\psi+l{\omega})dx dt =\beta_1C_1\int\limits_0^{L_0}{\varphi}_{t}(x,0)\psi_{x}(x,0)dx\\-\beta_1C_1\int\limits_0^{L_0}{\varphi}_{t}(x,T)\psi_{x}(x,T)dx +{\beta}_1C_1 \int\limits_0^{L_0}\psi_{t}(x,0)({\varphi}_{x}+\psi+l{\omega})(x,0)dx\\ -{\beta}_1C_1 \int\limits_0^{L_0}\psi_{t}(x,T)({\varphi}_{x}+\psi+l{\omega})(x,T)dx. \end{gathered}$$ Integrating by parts with respect to $x$ we get $$\begin{gathered} \label{p8} \left|\beta_1C_1 \int\limits_0^T \int\limits_0^{L_0}{\varphi}_{t}\psi_{tx}dx dt+{\beta}_1C_1 \int\limits_0^T \int\limits_0^{L_0}\psi_{t}({\varphi}_{xt}+l{\omega}_t)dx dt\right|\\ \le \left|\beta_1C_1 \int\limits_0^T {\varphi}_{t}(L_0,t)\psi_{t}(L_0,t) dt+{\beta}_1C_1 l\int\limits_0^T \int\limits_0^{L_0}\psi_{t}{\omega}_tdx dt\right|\le \frac{\rho_1}{l^2L_0}\int\limits_0^T {\varphi}_{t}^2(L_0,t) dt\\+ \frac{\beta_1^2C_1^2l^2L_0}{4\rho_1}\int\limits_0^T \psi_{t}^2(L_0,t) dt + \frac{\rho_1}{4}\int\limits_0^T \int\limits_0^{L_0}{\omega}_t^2dx dt+ \frac{{\beta}_1^2C_1^2 l^2}{\rho_1}\int\limits_0^T \int\limits_0^{L_0}\psi_{t}^2dx dt \end{gathered}$$ and $$\begin{gathered} \label{p9} \left| \lambda_1C_1 \int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})_x\psi_x dx dt+\lambda_1C_1 \int\limits_0^T \int\limits_0^{L_0}\psi_{xx}({\varphi}_x+\psi+l{\omega})dx dt\right|\\=\left|\lambda_1C_1 \int\limits_0^T ({\varphi}_x+\psi+l{\omega})(L_0,t)\psi_x(L_0,t) dt-\lambda_1C_1 \int\limits_0^T ({\varphi}_x+\psi+l{\omega})(0,t)\psi_x(0,t) dt\right|\\ \le \frac{k_1}{l^2L_0}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})^2(L_0,t) dt+\frac{k_1}{l^2L_0}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})^2(0,t) dt\\ +\frac{l^2L_0\lambda_1^2C_1^2}{4k_1}\int\limits_0^T \psi_x^2(L_0,t) dt+\frac{l^2L_0\lambda_1^2C_1^2}{4k_1}\int\limits_0^T \psi_x^2(0,t) dt. 
\end{gathered}$$ Moreover, $$\label{p10} \left|lC_1\lambda_1\int\limits_0^T \int\limits_0^{L_0}({\omega}_x-l{\varphi})\psi_x dx dt \right|\le \frac{{\sigma}_1}{64}\int\limits_0^T \int\limits_0^{L_0}({\omega}_x-l{\varphi})^2 dx dt+\frac{16l^2C_1^2\lambda_1^2}{{\sigma}_1}\int\limits_0^T \int\limits_0^{L_0}\psi_x^2 dx dt.$$ It follows from Lemma [\[lem:GammaEst\]](#lem:GammaEst){reference-type="eqref" reference="lem:GammaEst"} with $\varepsilon=\frac{k_1 C_1}{4}$ $$\begin{gathered} \label{p111} \left|C_1 \int\limits_0^T \int\limits_0^{L_0} ({\gamma}(\tilde\psi_t)-{\gamma}(\hat\psi_t)) ({\varphi}_x+\psi+l{\omega})dx dt\right|\\ \le \frac{k_1 C_1}{4} \int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})^2dx dt+C\int\limits_0^T \int\limits_0^{L_0}({\gamma}(\tilde\psi_t)-{\gamma}(\hat\psi_t))\psi_t dx dt \end{gathered}$$ Consequently, collecting [\[p7\]](#p7){reference-type="eqref" reference="p7"}--[\[p111\]](#p111){reference-type="eqref" reference="p111"} we obtain $$\begin{gathered} %\label{ol10} \frac{C_1k_1}{2}\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})^2dx dt\le \frac{{\sigma}_1}{64}\int\limits_0^T \int\limits_0^{L_0}({\omega}_x-l{\varphi})^2 dx dt\\ +\frac{20l^2C_1^2\lambda_1^2}{{\sigma}_1}\int\limits_0^T \int\limits_0^{L_0}\psi_x^2 dx dt+ C_1\left(\beta_1+\frac{\beta_1^2l^2}{\rho_1}\right)\int\limits_0^T \int\limits_0^{L_0}\psi_t^2 dx dt + \\ \frac{k_1}{l^2L_0}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})^2(L_0,t) dt+\frac{k_1}{l^2L_0}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})^2(0,t) dt \end{gathered}$$ $$\begin{gathered} \label{ol10} +\frac{l^2L_0\lambda_1^2C_1^2}{4k_1}\int\limits_0^T \psi_x^2(L_0,t) dt+\frac{l^2L_0\lambda_1^2C_1^2}{4k_1}\int\limits_0^T \psi_x^2(0,t) dt\\ +\frac{\rho_1}{l^2L_0}\int\limits_0^T {\varphi}_{t}^2(L_0,t) dt+ \frac{\beta_1^2C_1^2l^2L_0}{4\rho_1}\int\limits_0^T \psi_{t}^2(L_0,t) dt+\frac{\rho_1}{4}\int\limits_0^T \int\limits_0^{L_0}{\omega}_t^2dx dt \\+C\int\limits_0^T \int\limits_0^{L_0}({\gamma}(\tilde\psi_t)-{\gamma}(\hat\psi_t))\psi_t dx dt+ C(R,T)lot+C(E(0)+E(T)). \end{gathered}$$ Combining [\[ol10\]](#ol10){reference-type="eqref" reference="ol10"} with [\[ol9\]](#ol9){reference-type="eqref" reference="ol9"} we get $$\begin{gathered} \label{ol11} \frac{{\sigma}_1}{64}\int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi})^2 dx dt+\frac{\rho_1}{4}\int\limits_0^T \int\limits_0^{L_0} \omega_t^2dx dt +\frac{\rho_1L_0}{8}\int\limits_0^T\omega_t^2(L_0,t)dt\\ +\frac{\sigma_1L_0}{16} \int\limits_0^T (\omega_x-l{\varphi})^2(L_0,t) dt+\frac{\sigma_1L_0}{16} \int\limits_0^T (\omega_x-l{\varphi})^2(0,t) dt\\ +\frac{k_1}{l^2L_0}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})^2(L_0,t) dt+\frac{k_1}{l^2L_0}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})^2(0,t) dt\\ +\frac{5\rho_1}{l^2L_0}\int\limits_0^T {\varphi}_t^2(L_0,t) dt+ \frac{1}{l^2L_0^2}\int\limits_0^T \int\limits_0^{L_0}{\varphi}_{t}^2 dx dt+ 2k_1\int\limits_0^T \int\limits_0^{L_0} ({\varphi}_x+\psi+l{\omega})^2 dx dt\\ \le \left(\frac{6k_1}{l^2}+\frac{20l^2C_1^2\lambda_1^2}{\sigma_1}\right) \int\limits_0^T \int\limits_0^{L_0} \psi_x^2 dx dt+C_1\left(\beta_1+\frac{\beta_1^2l^2}{\rho_1}\right)\int\limits_0^T \int\limits_0^{L_0}\psi_t^2 dx dt\\ +\frac{l^2L_0\lambda_1^2C_1^2}{4k_1}\int\limits_0^T \psi_x^2(L_0,t) dt+\frac{l^2L_0\lambda_1^2C_1^2}{4k_1}\int\limits_0^T \psi_x^2(0,t) dt +\frac{\beta_1^2C_1^2l^2L_0}{4}\int\limits_0^T \psi_{t}^2(L_0,t) dt\\+C\int\limits_0^T \int\limits_0^{L_0}({\gamma}(\tilde\psi_t)-{\gamma}(\hat\psi_t))\psi_t dx dt+C(R,T)lot+C(E(0)+E(T)). 
\end{gathered}$$ *Step 7.* Our next step is to multiply equation [\[eq2\]](#eq2){reference-type="eqref" reference="eq2"} by $-C_2x\psi_x-C_2(x-L_0)\psi_x$, where $C_2=\frac{l^2\lambda_1C_1^2}{k_1}$. After integration by parts with respect to $t$ we obtain $$%\label{p11} {\beta}_1C_2\int\limits_0^T \int\limits_0^{L_0}\psi_{t}x \psi_{xt} dx dt+{\beta}_1C_2\int\limits_0^T \int\limits_0^{L_0}\psi_{t}(x-L_0) \psi_{xt} dx dt\qquad\qquad\qquad\qquad\qquad\qquad$$ $$\begin{gathered} \label{p11} +\lambda_1C_2 \int\limits_0^T \int\limits_0^{L_0}\psi_{xx}x \psi_{x} dx dt+\lambda_1C_2 \int\limits_0^T \int\limits_0^{L_0}\psi_{xx}(x-L_0) \psi_{x} dx dt\\ -k_1C_2\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})(2x-L_0)\psi_{x} dx dt - C_2 \int\limits_0^T \int\limits_0^{L_0}(\gamma(\tilde\psi_t)-\gamma(\hat\psi_t))(2x-L_0)\psi_{x} dx dt\\ + \int\limits_0^T \int\limits_0^{L_0}(h_1(\tilde{\varphi},\tilde\psi, \tilde{\omega})-h_1(\hat{\varphi},\hat\psi, \hat{\omega}))(2x-L_0)\psi_{x} dx dt\\ = {\beta}_1C_2 \int\limits_0^{L_0}\psi_{t}(x,T)(2x-L_0) \psi_{x}(x,T) dx -{\beta}_1C_2 \int\limits_0^{L_0}\psi_{t}(x,0)(2x-L_0) \psi_{x}(x,0) dx. \end{gathered}$$ After integration by parts with respect to $x$ we get $$\begin{gathered} \label{p12} {\beta}_1C_2\int\limits_0^T \int\limits_0^{L_0}\psi_{t}x \psi_{xt} dx dt+{\beta}_1C_2\int\limits_0^T \int\limits_0^{L_0}\psi_{t}(x-L_0) \psi_{xt} dx dt\\=-{\beta}_1C_2\int\limits_0^T \int\limits_0^{L_0}\psi_{t}^2 dx dt+\frac{{\beta}_1C_2L_0}{2}\int\limits_0^T \psi_{t}^2(L_0,t)dt \end{gathered}$$ and $$\begin{gathered} \label{p13} \lambda_1C_2 \int\limits_0^T \int\limits_0^{L_0}\psi_{xx}x \psi_{x} dx dt+\lambda_1C_2 \int\limits_0^T \int\limits_0^{L_0}\psi_{xx}(x-L_0) \psi_{x} dx dt\\ =\frac{\lambda_1C_2L_0}{2} \int\limits_0^T\psi_{x}^2(L_0,t) dt+\frac{\lambda_1C_2L_0}{2} \int\limits_0^T\psi_{x}^2(0,t) dt- \lambda_1C_2 \int\limits_0^T \int\limits_0^{L_0}\psi_{x}^2 dx dt. \end{gathered}$$ Furthermore, $$\begin{gathered} \label{p14} \left| k_1C_2\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})(2x-L_0)\psi_{x} dx dt\right|\\ \le k_1\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})^2 dx dt+ \frac{k_1C_2^2L_0^2}{4}\int\limits_0^T \int\limits_0^{L_0}\psi_{x}^2 dx dt. \end{gathered}$$ By Lemma [\[lem:GammaEst\]](#lem:GammaEst){reference-type="eqref" reference="lem:GammaEst"} with $\varepsilon=\frac{k_1 C2^2L_0^2}{4}$ we have $$\label{p141} \left| C_2 \int\limits_0^T \int\limits_0^{L_0}\psi_t(2x-L_0)\psi_{x} dx dt\right|\le \frac{k_1C_2^2L_0^2}{4}\int\limits_0^T \int\limits_0^{L_0}\psi_{x}^2 dx dt+C \int\limits_0^T \int\limits_0^{L_0}(\gamma(\tilde\psi_t)-\gamma(\hat\psi_t))\psi_tdx dt.$$ As a result of [\[p11\]](#p11){reference-type="eqref" reference="p11"}-- [\[p141\]](#p141){reference-type="eqref" reference="p141"} we obtain the estimate $$\begin{gathered} \label{ol12} \frac{{\beta}_1C_2L_0}{2}\int\limits_0^T \psi_{t}^2(L_0,t)dt+\frac{\lambda_1C_2L_0}{2} \int\limits_0^T\psi_{x}^2(L_0,t) dt+\frac{\lambda_1C_2L_0}{2} \int\limits_0^T\psi_{x}^2(0,t)dt\\ \le k_1 \int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})^2 dx dt+ \left(k_1C_2^2L_0^2+\lambda_1C_2\right)\int\limits_0^T \int\limits_0^{L_0}\psi_{x}^2 dx dt+\beta_1C_2 \int\limits_0^T \int\limits_0^{L_0}\psi_t^2dx dt\\ +C \int\limits_0^T \int\limits_0^{L_0}(\gamma(\tilde\psi_t)-\gamma(\hat\psi_t))\psi_tdx dt+C(R,T)lot+C(E(0)+E(T)). 
\end{gathered}$$ Summing up [\[ol11\]](#ol11){reference-type="eqref" reference="ol11"} and [\[ol12\]](#ol12){reference-type="eqref" reference="ol12"} and using [\[c2\]](#c2){reference-type="eqref" reference="c2"} we infer $$\begin{gathered} % \label{ol13} \frac{{\sigma}_1}{64}\int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi})^2 dx dt+\frac{\rho_1}{4}\int\limits_0^T \int\limits_0^{L_0} \omega_t^2dx dt +\frac{\rho_1L_0}{8}\int\limits_0^T\omega_t^2(L_0,t)dt\\ +\frac{\sigma_1L_0}{16} \int\limits_0^T (\omega_x-l{\varphi})^2(L_0,t) dt+\frac{\sigma_1L_0}{16} \int\limits_0^T (\omega_x-l{\varphi})^2(0,t) dt\\ +\frac{k_1}{l^2L_0}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})^2(L_0,t) dt+\frac{k_1}{l^2L_0}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})^2(0,t) dt\\ +\frac{5\rho_1}{l^2L_0}\int\limits_0^T {\varphi}_t^2(L_0,t) dt+ \frac{1}{l^2L_0^2}\int\limits_0^T \int\limits_0^{L_0}{\varphi}_{t}^2 dx dt+ k_1\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})^2 dx dt\\ \frac{l^2L_0\lambda_1^2C_1^2}{4k_1}\int\limits_0^T \psi_x^2(L_0,t) dt+\frac{l^2L_0\lambda_1^2C_1^2}{4k_1}\int\limits_0^T \psi_x^2(0,t) dt\\ +\frac{\beta_1^2C_1^2l^2L_0}{4\rho_1}\int\limits_0^T \psi_{t}^2(L_0,t) dt\le \left(\frac{6k_1}{l^2}+\frac{20l^2C_1^2\lambda_1^2}{\sigma_1}+\lambda_1 C_2+k_1C_2^2L_0^2\right) \int\limits_0^T \int\limits_0^{L_0} \psi_x^2 dx dt \end{gathered}$$ $$\begin{gathered} \label{ol13} +\left((C_1+C_2)\beta_1+\frac{C_1\beta_1^2l^2}{\rho_1}\right)\int\limits_0^T \int\limits_0^{L_0}\psi_t^2 dx dt\\+C \int\limits_0^T \int\limits_0^{L_0}(\gamma(\tilde\psi_t)-\gamma(\hat\psi_t))\psi_tdx dt +C(R,T)lot+C(E(0)+E(T)). \end{gathered}$$ *Step 8.* Now we multiply equation [\[eq2\]](#eq2){reference-type="eqref" reference="eq2"} by $C_3\psi$, where $C_3=\frac{2}{\lambda_1}\left(\frac{6k_1}{l^2}+\frac{20l^2C_1^2\lambda_1^2}{\sigma_1}+\lambda_1 C_2+k_1C_2^2L_0^2\right)$ and integrate by parts with respect to $t$ $$\begin{gathered} % \label{p15} -C_3{\beta}_1\int\limits_0^T \int\limits_0^{L_0}\psi_{t}^2 dx dt-\lambda_1C_3 \int\limits_0^T \int\limits_0^{L_0}\psi_{xx}\psi dx dt +k_1C_3\int\limits_0^T \int\limits_0^{L_0}({\varphi}_x+\psi+l{\omega})\psi dx dt\\ + C_3\int\limits_0^T \int\limits_0^{L_0} ({\gamma}(\tilde\psi_t)-{\gamma}(\hat\psi_t))\psi dx dt +C_3\int\limits_0^T \int\limits_0^{L_0}(h_1(\tilde{\varphi},\tilde\psi,\tilde{\omega})-h_1(\hat{\varphi},\hat\psi,\hat{\omega}))\psi dx dt \end{gathered}$$ $$\label{p15} \qquad\qquad\qquad\qquad= C_3{\beta}_1\int\limits_0^{L_0}\psi_{t}(x,0)\psi(x,0) dx- C_3{\beta}_1\int\limits_0^{L_0}\psi_{t}(x,T)\psi(x,T) dx$$ After integration by parts we infer the estimate $$\begin{gathered} \label{ol14} \lambda_1C_3 \int\limits_0^T \int\limits_0^{L_0}\psi_x^2 dx dt \le \frac{k_1}{2}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})^2 dx dt+C_3\beta_1\int\limits_0^T \int\limits_0^{L_0} \psi_t^2 dx dt + \frac{l^2L_0\lambda_1^2C_1^2}{8k_1}\int\limits_0^T \psi_x^2(L_0,t) dt \\ +C \int\limits_0^T \int\limits_0^{L_0}(\gamma(\tilde\psi_t)-\gamma(\hat\psi_t))\psi_tdx dt +C(R,T)lot+C(E(0)+E(T)). 
\end{gathered}$$ Combining [\[ol14\]](#ol14){reference-type="eqref" reference="ol14"} with [\[ol13\]](#ol13){reference-type="eqref" reference="ol13"} we obtain $$\begin{gathered} %\label{ol15} \frac{{\sigma}_1}{64}\int\limits_0^T \int\limits_0^{L_0} (\omega_x-l{\varphi})^2 dx dt+\frac{\rho_1}{4}\int\limits_0^T \int\limits_0^{L_0} \omega_t^2dx dt +\frac{\rho_1L_0}{8}\int\limits_0^T\omega_t^2(L_0,t)dt\\ +\frac{\sigma_1L_0}{16} \int\limits_0^T (\omega_x-l{\varphi})^2(L_0,t) dt+\frac{\sigma_1L_0}{16} \int\limits_0^T (\omega_x-l{\varphi})^2(0,t) dt\\ +\frac{k_1}{l^2L_0}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})^2(L_0,t) dt+\frac{k_1}{l^2L_0}\int\limits_0^T ({\varphi}_x+\psi+l{\omega})^2(0,t) dt \end{gathered}$$ $$\begin{gathered} \label{ol15} +\frac{5\rho_1}{l^2L_0}\int\limits_0^T {\varphi}_t^2(L_0,t) dt+ \frac{1}{l^2L_0^2}\int\limits_0^T \int\limits_0^{L_0}{\varphi}_{t}^2 dx dt+ \frac{k_1}{2}\int\limits_0^T \int\limits_0^{L_0} ({\varphi}_x+\psi+l{\omega})^2 dx dt\\ \frac{l^2L_0\lambda_1^2C_1^2}{8k_1}\int\limits_0^T \psi_x^2(L_0,t) dt+\frac{l^2L_0\lambda_1^2C_1^2}{4k_1}\int\limits_0^T \psi_x^2(0,t) dt +\frac{\beta_1^2C_1^2l^2L_0}{4\rho_1}\int\limits_0^T \psi_{t}^2(L_0,t) dt\\ + \left(\frac{6k_1}{l^2}+\frac{20l^2C_1^2\lambda_1^2}{\sigma_1}+\lambda_1 C_2+k_1C_2^2L_0^2\right) \int\limits_0^T \int\limits_0^{L_0} \psi_x^2 dx dt\\ \le\left((C_1+C_2)\beta_1+\frac{C_1\beta_1^2l^2}{\rho_1}+C_3\beta_1\right)\int\limits_0^T \int\limits_0^{L_0}\psi_t^2 dx dt\\ +C \int\limits_0^T \int\limits_0^{L_0}(\gamma(\tilde\psi_t)-\gamma(\hat\psi_t))\psi_tdx dt +C(R,T)lot+C(E(0)+E(T)). \end{gathered}$$ *Step 9.* Consequently, it follows from [\[ol15\]](#ol15){reference-type="eqref" reference="ol15"} and assumption [\[GammaLip2\]](#GammaLip2){reference-type="eqref" reference="GammaLip2"} for any $l>0$ where exist constants $M_i$, $i=\overline{\{1,3\}}$ (depending on $l$) such that $$\begin{gathered} \label{l1} \int\limits_0^T E_1(t) dt+\int\limits_0^T B_1(t) dt\le M_1 \int\limits_0^T \int\limits_0^{L_0}(\gamma(\tilde\psi_t)-\gamma(\hat\psi_t))\psi_tdx dt\\+M_2(R,T)lot+M_3(E(T)+E(0)), \end{gathered}$$ where $$\begin{gathered} \label{b1} B_1(t)=\int\limits_0^T (\omega_x-l{\varphi})^2(L_0,t) dt+ \int\limits_0^T ({\varphi}_x+\psi+l{\omega})^2(L_0,t) dt+\int\limits_0^T \psi_x^2(L_0,t) dt \\+\int\limits_0^T\omega_t^2(L_0,t)dt +\int\limits_0^T \psi_{t}^2(L_0,t) dt+\int\limits_0^T {\varphi}_t^2(L_0,t) dt. \end{gathered}$$ *Step 10.* Finally, we multiply equation [\[eq4\]](#eq4){reference-type="eqref" reference="eq4"} by $(x-L)u_x$, equation [\[eq5\]](#eq5){reference-type="eqref" reference="eq5"} by $(x-L)v_x$, and [\[eq6\]](#eq6){reference-type="eqref" reference="eq6"} by $(x-L)w_x$. 
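Since the weight $(x-L)$ vanishes at the clamped end $x=L$, it produces boundary contributions only at the interface $x=L_0$: for any sufficiently smooth function $z$ on $(L_0,L)$, $$\int\limits_{L_0}^{L} z\,(x-L)z_x\, dx=\frac{L-L_0}{2}\,z^2(L_0)-\frac{1}{2}\int\limits_{L_0}^{L} z^2\, dx,$$ which is applied below with $z=u_t,\, v_t,\, w_t$ and with $z=u_x+v+lw,\ w_x-lu,\ v_x$.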
Summing up the results and integrating by parts with respect to $t$ we arrive at $$\begin{gathered} %\label{r1} - \rho_2\int\limits_0^T \int\limits_{L_0}^Lu_{t}(x-L)u_{tx} dx dt-k_2\int\limits_0^T \int\limits_{L_0}^L(u_x+v+lw)_x (x-L)u_{x} dx dt\\ - l{\sigma}_2\int\limits_0^T \int\limits_{L_0}^L(w_x-lu)(x-L)u_{x} dx dt +\int\limits_0^T \int\limits_{L_0}^L(f_2(\tilde u, \tilde v, \tilde w)-f_2(\hat u, \hat v, \hat w))(x-L)u_{x} dx dt \end{gathered}$$ $$\begin{gathered} \label{r1} -{\beta}_2\int\limits_0^T \int\limits_{L_0}^L v_{t} (x-L)v_{xt} dx dt-\lambda_2 \int\limits_0^T \int\limits_{L_0}^L v_{xx}(x-L)v_{x} dx dt\\ +k_2 \int\limits_0^T \int\limits_{L_0}^L(u_x+v+lw)(x-L)v_{x} dx dt +\int\limits_0^T \int\limits_{L_0}^L(h_2(\tilde u, \tilde v, \tilde w)-h_2(\hat u, \hat v, \hat w))(x-L)v_{x} dx dt\\ -\rho_2\int\limits_0^T \int\limits_{L_0}^Lw_{t}(x-L)w_{xt} dx dt- {\sigma}_2\int\limits_0^T \int\limits_{L_0}^L(w_x-lu)_x(x-L)w_{x} dx dt\\ +lk_2\int\limits_0^T \int\limits_{L_0}^L(u_x+v+lw)(x-L)w_{x} dx dt+\int\limits_0^T \int\limits_{L_0}^L(g_2(\tilde u, \tilde v, \tilde w)-g_2(\hat u, \hat v, \hat w))(x-L)w_{x} dx dt=\\ - \rho_2 \int\limits_{L_0}^L (x-L)((u_{t}u_{x})(x,T) - (u_{t}u_{x})(x,0))dx -{\beta}_2 \int\limits_{L_0}^L (x-L)((v_{t}v_{x})(x,T) -(v_{t}v_{x})(x,0))dx \\ - \rho_2 \int\limits_{L_0}^L (x-L)((w_{t}w_{x})(x,T) - (w_{t}w_{x})(x,0))dx . \end{gathered}$$ After integration by parts with respect to $x$ we infer $$\begin{gathered} \label{r2} - \rho_2\int\limits_0^T \int\limits_{L_0}^Lu_{t}(x-L)u_{tx} dx- {\beta}_2\int\limits_0^T \int\limits_{L_0}^L v_{t} (x-L)v_{xt} dx dt -\rho_2\int\limits_0^T \int\limits_{L_0}^Lw_{t}(x-L)w_{xt} dx dt\\= \frac{\rho_2}{2}\int\limits_0^T \int\limits_{L_0}^L u_{t}^2 dx+ \frac{{\beta}_2}{2}\int\limits_0^T \int\limits_{L_0}^L v_{t}^2 dx dt +\frac{\rho_2}{2}\int\limits_0^T \int\limits_{L_0}^Lw_{t}^2 dx dt\\- \frac{\rho_2(L-L_0)}{2}\int\limits_0^T u_{t}^2(L_0) dt- \frac{{\beta}_2(L-L_0)}{2}\int\limits_0^T v_{t}^2(L_0) dt -\frac{\rho_2(L-L_0)}{2}\int\limits_0^T w_{t}^2(L_0) dt \end{gathered}$$ and $$\begin{gathered} %\label{r3} -k_2\int\limits_0^T \int\limits_{L_0}^L(u_x+v+lw)_x (x-L)u_{x} dx dt- l{\sigma}_2\int\limits_0^T \int\limits_{L_0}^L(w_x-lu)(x-L)u_{x} dx dt\\ -\lambda_2 \int\limits_0^T \int\limits_{L_0}^L v_{xx}(x-L)v_{x} dx dt+k_2 \int\limits_0^T \int\limits_{L_0}^L(u_x+v+lw)(x-L)v_{x} dx dt \end{gathered}$$ $$\begin{gathered} \label{r3} - {\sigma}_2\int\limits_0^T \int\limits_{L_0}^L(w_x-lu)_x(x-L)w_{x} dx dt+lk_2\int\limits_0^T \int\limits_{L_0}^L(u_x+v+lw)(x-L)w_{x} dx dt=\\ -k_2\int\limits_0^T \int\limits_{L_0}^L(u_x+v+lw)_x (x-L)(u_x+v+lw) dx dt\\-{\sigma}_2 \int\limits_0^T \int\limits_{L_0}^L(w_x-lu)_x(x-L)(w_x-lu) dx dt -\lambda_2 \int\limits_0^T \int\limits_{L_0}^L v_{xx}(x-L)v_{x} dx dt\\ - l{\sigma}_2(L-L_0)\int\limits_0^T(w_x-lu)(L_0)u(L_0)dt +k_2(L-L_0) \int\limits_0^T(u_x+v+lw)(L_0)v(L_0)dt\\ +lk_2(L-L_0)\int\limits_0^T (u_x+v+lw)(L_0)w(L_0)dt=\\ -\frac{k_2(L-L_0)}{2}\int\limits_0^T (u_x+v+lw)^2(L_0) dt+\frac{k_2}{2}\int\limits_0^T \int\limits_{L_0}^L(u_x+v+lw)^2 dx dt\\ +\frac{{\sigma}_2}{2} \int\limits_0^T \int\limits_{L_0}^L(w_x-lu)^2 dx dt -\frac{{\sigma}_2(L-L_0)}{2} \int\limits_0^T(w_x-lu)^2(L_0) dt\\ +\frac{\lambda_2}{2} \int\limits_0^T \int\limits_{L_0}^L v_{x}^2 dx dt-\frac{\lambda_2(L-L_0) }{2}\int\limits_0^T v_{x}^2(L_0) dt - l{\sigma}_2(L-L_0)\int\limits_0^T(w_x-lu)(L_0)u(L_0)dt\\ +k_2(L-L_0) \int\limits_0^T(u_x+v+lw)(L_0)v(L_0)dt\\ +lk_2(L-L_0)\int\limits_0^T (u_x+v+lw)(L_0)w(L_0)dt. 
\end{gathered}$$ Consequently, it follows from [\[r1\]](#r1){reference-type="eqref" reference="r1"}--[\[r3\]](#r3){reference-type="eqref" reference="r3"} that for any $l>0$ where exist constants $M_4, M_5, M_6>0$ such that $$\label{l2} \int\limits_0^T E_2(t) dt\le M_4\int\limits_0^T B_2(t) dt+ M_5(R,T)lot+M_6(E(T)+E(0)),$$ where $$\begin{gathered} \label{b2} B_2(t)=\int\limits_0^T (w_x-lu)^2(L_0,t) dt+ \int\limits_0^T (u_x+v+lw)^2(L_0,t) dt+\int\limits_0^T v_x^2(L_0,t) dt \\+\int\limits_0^Tw_t^2(L_0,t)dt +\int\limits_0^T v_{t}^2(L_0,t) dt+\int\limits_0^T u_t^2(L_0,t) dt. \end{gathered}$$ Then, due to transmission conditions [\[TC1\]](#TC1){reference-type="eqref" reference="TC1"}--[\[TC4\]](#TC4){reference-type="eqref" reference="TC4"} there exist $\delta, M_7, M_8>0$ (depending on $l$), such that $$\label{l3} \int\limits_0^T E(t) dt\le \delta \int\limits_0^T \int\limits_0^{L_0}(\gamma(\tilde\psi_t)-\gamma(\hat\psi_t))\psi_t dx dt+ M_7(R,T)lot+M_8(E(T)+E(0)).$$ It follows from [\[En1\]](#En1){reference-type="eqref" reference="En1"} that there exists $C>0$ such that $$\label{En11} \int\limits_0^T \int\limits_0^{L_0}(\gamma(\tilde\psi_t)-\gamma(\hat\psi_t)) \psi_t dx dt\le C\left( E(0)+ \int\limits_0^T | H(\hat U(t),\tilde U(t))| dt\right).$$ By Lemma [Lemma 7](#lem:lip){reference-type="ref" reference="lem:lip"} we have that for any $\varepsilon>0$ there exists $C(\varepsilon, R)>0$ such that $$\label{HEst} \int\limits_0^T | H(\hat U(t), \tilde U(t))| dt\le \varepsilon\int\limits_0^T\int\limits_0^{L_0} E(t) dx dt+ C(\varepsilon, R,T)lot.$$ Combining [\[HEst\]](#HEst){reference-type="eqref" reference="HEst"} with [\[En11\]](#En11){reference-type="eqref" reference="En11"} we arrive at $$\label{DampEst} \int\limits_0^T \int\limits_0^{L_0}(\gamma(\tilde\psi_t)-\gamma(\hat\psi_t)) \psi_t dx dt\le C E(0)+C(R,T)lot .$$ Substituting [\[DampEst\]](#DampEst){reference-type="eqref" reference="DampEst"} into [\[l3\]](#l3){reference-type="eqref" reference="l3"} we obtain $$\label{l4} \int\limits_0^T E(t) dt\le C(R,T)lot+C(E(T)+E(0))$$ for some $C, C(R,T)>0$.\ Our remaining task is to estimate the last term in [\[En2\]](#En2){reference-type="eqref" reference="En2"}. $$\label{En22} \left| \int\limits_0^T \int\limits_t^T H(\hat U(s),\tilde U(s)) ds dt\right|\le \int\limits_0^T E(t) dt+T^3C(R) lot.$$ Then, it follows from [\[En2\]](#En2){reference-type="eqref" reference="En2"} and [\[En22\]](#En22){reference-type="eqref" reference="En22"} that $$\label{En222} TE(T)\le C\int\limits_0^T E(t) dt+C(T,R) lot.$$ Then the combination of [\[En222\]](#En222){reference-type="eqref" reference="En222"} with [\[l4\]](#l4){reference-type="eqref" reference="l4"} leads to $$\label{l} TE(T)\le C(R,T)lot+C(E(T)+E(0)).$$ Choosing $T$ large enough one can obtain estimate [\[te\]](#te){reference-type="eqref" reference="te"} which together with Theorem [\[theoremCL\]](#theoremCL){reference-type="ref" reference="theoremCL"} immediately leads to the asymptotic smoothness of the system. ◻ ## Existence of attractors. The following statement collects criteria on existence and properties of attractors to gradient systems. **Theorem 15** ([@CFR; @CL]). *Assume that $( H, S_t)$ is a gradient asymptotically smooth dynamical system. Assume its Lyapunov function $L(y)$ is bounded from above on any bounded subset of $H$ and the set $\mathcal{W}_R=\{y: L(y) \le R\}$ is bounded for every $R$. If the set $\EuScript N$ of stationary points of $(H, S_t)$ is bounded, then $(S_t, H)$ possesses a compact global attractor. 
Moreover, the global attractor consists of full trajectories $\gamma=\{ U(t)\, :\, t\in{\mathbb R}\}$ such that $$\label{conv-N} \lim_{t\to -\infty}{\rm dist}_{H}(U(t),\EuScript N)=0 ~~ \mbox{and} ~~ \lim_{t\to +\infty}{\rm dist}_{H}(U(t),\EuScript N)=0$$ and $$\label{7.4.1} \lim_{t\to +\infty}{\rm dist}_{H}(S_tx,\EuScript N)=0 ~~\mbox{for any $x \in H$;}$$ that is, any trajectory stabilizes to the set $\EuScript N$ of stationary points.* Now we state the result on the existence of an attractor. **Theorem 16**. *Let the assumptions of Theorems [Theorem 9](#th:grad){reference-type="ref" reference="th:grad"} and [Theorem 14](#th:AsSmooth){reference-type="ref" reference="th:AsSmooth"} hold true and, moreover, $$\begin{gathered} \liminf\limits_{|s|\to \infty}\frac{h_1(s)}{s}>0, \tag{N5}\label{fl}\\ \nabla\mathcal{F}_2(u,v,w)(u,v,w)-a_1\mathcal{F}_2(u,v,w)\ge -a_2,\qquad a_i\ge 0. \end{gathered}$$ Then the dynamical system $(H, S_t)$ generated by [\[Eq1\]](#Eq1){reference-type="eqref" reference="Eq1"}-[\[TC4\]](#TC4){reference-type="eqref" reference="TC4"} possesses a compact global attractor $\mathfrak A$ with properties [\[conv-N\]](#conv-N){reference-type="eqref" reference="conv-N"}, [\[7.4.1\]](#7.4.1){reference-type="eqref" reference="7.4.1"}.* *Proof.* In view of Theorems [Theorem 9](#th:grad){reference-type="ref" reference="th:grad"}, [Theorem 14](#th:AsSmooth){reference-type="ref" reference="th:AsSmooth"}, [Theorem 15](#abs){reference-type="ref" reference="abs"} our remaining task is to show the boundedness of the set of stationary points and of the set $W_R=\{Z: L(Z)\le R\}$, where $L$ is given by [\[lap\]](#lap){reference-type="eqref" reference="lap"}. The second statement follows immediately from the structure of the function $L$ and property [\[fl\]](#fl){reference-type="eqref" reference="fl"}. The first statement can easily be shown by energy-like estimates for stationary solutions, taking into account [\[fl\]](#fl){reference-type="eqref" reference="fl"}. ◻ # Singular Limits on finite time intervals ## Singular limit $l\rightarrow 0$ Let the nonlinearities $f_j,h_j, g_j$ be such that $$\begin{aligned} & f_1({\varphi},\psi,{\omega})=f_1({\varphi},\psi), && h_1({\varphi},\psi,{\omega})=h_1({\varphi},\psi), && g_1({\varphi},\psi,{\omega})=g_1({\omega}), \\ & f_2(u,v,w)=f_2(u,v), && h_2(u,v,w)=h_2(u,v), && g_2(u,v,w)=g_2(w). 
\tag{N6}\label{Ndecouple} \end{aligned}$$ If we formally set $l=0$ in [\[AEq\]](#AEq){reference-type="eqref" reference="AEq"}-[\[AIC\]](#AIC){reference-type="eqref" reference="AIC"}, we obtain the contact problem for a straight Timoshenko beam $$\begin{aligned} & \rho_1{\varphi}_{tt}-k_1({\varphi}_x+\psi)_x +{f_1({\varphi},\psi)}=p_1(x,{t}), &(x,t)\in (0,L_0)\times (0,T), \label{TimEq1}\\ & \beta_1\psi_{tt} -\lambda_1 \psi_{xx} +k_1({\varphi}_x+\psi) +{\gamma}(\psi_t) +h_1({\varphi},\psi)=r_1(x,{t}), &(x,t)\in (0,L_0)\times (0,T), \\ & \rho_2u_{tt}-k_2(u_x+v)_x +f_2(u,v)=p_2(x,{t}), &(x,t)\in (L_0,L)\times (0,T), \\ & \beta_2v_{tt} -\lambda_2 v_{xx} +k_2(u_x+v) +h_2(u,v)=r_2(x,{t}), &(x,t)\in (L_0,L)\times (0,T), \\ &{\varphi}(0,t)=\psi(0,t)=0, \quad u(L,t)=v(L,t)=0,\\ & {\varphi}(L_0,t)=u(L_0,t), \quad \psi(L_0,t)=v(L_0,t),\\ & k_1({\varphi}_x+\psi)(L_0,t)=k_2(u_x+v)(L_0,t), \, \lambda_1 \psi_{x}(L_0,t)=\lambda_2 v_{x}(L_0,t),\label{TimEq2} \end{aligned}$$ and an independent contact problem for wave equations $$\begin{aligned} & \rho_1{\omega}_{tt}- {\sigma}_1{\omega}_{xx}+{g_1({\omega})}=q_1(x,t), &(x,t)\in (0,L_0)\times (0,T), \label{WaveEq1}\\ & \rho_2w_{tt}- {\sigma}_2 w_{xx}+g_2(w)=q_2(x,{t}), &(x,t)\in (L_0,L)\times (0,T), \\ & {\sigma}_1 {\omega}_x(L_0,t)={\sigma}_2w_x(L_0,t),\quad {\omega}(L_0,t)=w(L_0,t), \\ &w(L,t)=0, \quad {\omega}(0,t)=0. \label{WaveEq2} \end{aligned}$$ The following theorem gives an answer, how close are solutions to [\[AEq\]](#AEq){reference-type="eqref" reference="AEq"}-[\[AIC\]](#AIC){reference-type="eqref" reference="AIC"} to the solution of decoupled system [\[TimEq1\]](#TimEq1){reference-type="eqref" reference="TimEq1"}-[\[WaveEq2\]](#WaveEq2){reference-type="eqref" reference="WaveEq2"} when $l\rightarrow 0$. **Theorem 17**. *Assume that the conditions of Theorem [Theorem 5](#th:WeakWP){reference-type="ref" reference="th:WeakWP"}, [\[GammaLip1\]](#GammaLip1){reference-type="eqref" reference="GammaLip1"} and [\[Ndecouple\]](#Ndecouple){reference-type="eqref" reference="Ndecouple"} hold. 
Let $\Phi^{(l)}$ be the solution to [\[AEq\]](#AEq){reference-type="eqref" reference="AEq"}-[\[AIC\]](#AIC){reference-type="eqref" reference="AIC"} with fixed $l$ and the initial data $$\Phi(x,0)=({\varphi}_0,\psi_0,{\omega}_0, u_0,v_0,w_0)(x), \quad \Phi_t(x,0)=({\varphi}_1,\psi_1,{\omega}_1, u_1,v_1,w_1)(x).$$ Then for every $T>0$ $$\begin{aligned} &\Phi^{(l)} \stackrel{\ast}{\rightharpoonup} ({\varphi},\psi,{\omega}, u,v,w) \quad &\mbox{in } L^\infty(0,T;H_d) \; &\mbox{ as } l\rightarrow 0,\\ &\Phi^{(l)}_t \stackrel{\ast}{\rightharpoonup} ({\varphi}_t,\psi_t,{\omega}_t, u_t,v_t,w_t) \quad &\mbox{in } L^\infty(0,T;H_v)\; &\mbox{ as } l\rightarrow 0, \end{aligned}$$ where $({\varphi},\psi, u,v)$ is the solution to [\[TimEq1\]](#TimEq1){reference-type="eqref" reference="TimEq1"}-[\[TimEq2\]](#TimEq2){reference-type="eqref" reference="TimEq2"} with the initial conditions $$({\varphi},\psi, u,v)(x,0)=({\varphi}_0,\psi_0, u_0,v_0)(x), \quad ({\varphi}_t,\psi_t, u_t,v_t)(x,0)=({\varphi}_1,\psi_1, u_1,v_1)(x),$$ and $({\omega}, w)$ is the solution to [\[WaveEq1\]](#WaveEq1){reference-type="eqref" reference="WaveEq1"}-[\[WaveEq2\]](#WaveEq2){reference-type="eqref" reference="WaveEq2"} with the initial conditions $$({\omega},w)(x,0)=({\omega}_0, w_0)(x), \quad ({\omega}_t,w_t)(x,0)=({\omega}_1, w_1)(x).$$* The proof is similar to that of Theorem 3.1 in [@MaMo2017] for the homogeneous Bresse beam with obvious changes, except for the limit transition in the nonlinear dissipation term. For future use we formulate it as a lemma. **Lemma 18**. *Let [\[GammaLip1\]](#GammaLip1){reference-type="eqref" reference="GammaLip1"} hold. Then $$\int_0^T \int_0^{L_0} {\gamma}(\psi^{(l)}(x,t)){\gamma}^1(x,t) dxdt \rightarrow\int_0^T \int_0^{L_0} {\gamma}(\psi(x,t)){\gamma}^1(x,t) dxdt \quad \mbox{ as } l\rightarrow 0$$ for every ${\gamma}^1\in L^2(0,T;H^1(0,L_0))$.* *Proof.* Since [\[DCont\]](#DCont){reference-type="eqref" reference="DCont"} and [\[GammaLip1\]](#GammaLip1){reference-type="eqref" reference="GammaLip1"} hold, we have $|{\gamma}(s)|\le M|s|$, therefore $$||{\gamma}(\psi^{(l)})||_{L^\infty(0,T; L^2(0,L_0))} \le C( ||\psi^{(l)}||_{L^\infty(0,T; L^2(0,L_0))}).$$ Thus, due to Lemmas [Lemma 1](#lem:ASelfAdjont){reference-type="ref" reference="lem:ASelfAdjont"}, [Lemma 7](#lem:lip){reference-type="ref" reference="lem:lip"} the sequence $$R\Phi^{(l)}_{tt}=A\Phi^{(l)}+{\Gamma}(\Phi^{(l)}_t) + F(\Phi^{(l)}) + P$$ is bounded in $L^\infty(0,T; H^{-1}(0,L))$ and we can extract from $\Phi^{(l)}_{tt}$ a subsequence that converges $\ast$-weakly in $L^\infty(0,T; H^{-1}(0,L))$. Thus, $$\Phi^{(l)}_t \rightarrow\Phi_t \quad \mbox{strongly in } L^2(0,T; H^{-\varepsilon}(0,L)), \; \varepsilon>0.$$ Consequently, $$\begin{gathered} \left|\int_0^T \int_0^{L_0} ({\gamma}(\psi^{(l)}(x,t))-{\gamma}(\psi(x,t))) {\gamma}^1(x,t) dxdt \right| \le \\ C(L) \int_0^T \int_0^{L_0} |\psi^{(l)}(x,t)-\psi(x,t)| |{\gamma}^1(x,t)|dxdt \rightarrow 0. \end{gathered}$$ ◻ We perform numerical modelling for the original problem with $l=1,1/3, 1/10, 1/30, 1/100, 1/300, 1/1000$ and the limiting problem ($l=0$) with the following values of constants $\rho_1=\rho_2=1$, $\beta_1=\beta_2=2$, $\sigma_1=4$, $\sigma_2=2$, $\lambda_1=8$, $\lambda_2=4$, $L=10$, $L_0=4$ and the right-hand sides $$\begin{aligned} & p_1(x)=\sin x, && r_1(x)=x, && q_1(x)=\sin x, \label{rhs1}\\ & p_2(x)=\cos x, && r_2(x)=x+1, && q_2(x)=\cos x. 
\label{rhs2} \end{aligned}$$ In this subsection we consider the nonlinearities with the potentials $$\begin{aligned} & {\mathbb F}_1({\varphi}, \psi, {\omega})=|{\varphi}+\psi|^4-|{\varphi}+\psi|^2+|{\varphi}\psi|^2 + |{\omega}|^3,\\ & {\mathbb F}_2(u,v,w)=|u+v|^4-|u+v|^2+|uv|^2 + |w|^3. \end{aligned}$$ Consequently, the nonlinearities have the form $$\begin{aligned} &f_1({\varphi}, \psi, {\omega}) = 4({\varphi}+\psi)^3-2({\varphi}+\psi)+2{\varphi}\psi^2, && f_2(u,v,w)=4(u+v)^3-2(u+v)+2uv^2,\\ &h_1({\varphi}, \psi, {\omega}) = 4({\varphi}+\psi)^3-2({\varphi}+\psi)+2{\varphi}^2\psi, && h_2(u,v,w)=4(u+v)^3-2(u+v)+2u^2v, \\ &g_1({\varphi}, \psi, {\omega})=3|{\omega}|{\omega}, && g_2(u,v,w)=3|w|w. \end{aligned}$$ For modelling we choose the following (globally Lipschitz) dissipation $${\gamma}(s)=\left\{ \begin{aligned} & \frac{1}{100}s^3, &&|s|\le 10, \\ & 10s, && |s|>10 \end{aligned} \right.$$ and the following initial data: $$\begin{aligned} &{\varphi}(x,0)=-\frac{3}{16}x^2+\frac{3}{4}x, && u(x,0)=0,\\ &\psi(x,0)=-\frac{1}{12}x^2+\frac{7}{12}x, && v(x,0)=-\frac{1}{6}x+\frac{5}{3},\\ &{\omega}(x,0)=\frac{1}{16}x^2-\frac{1}{4}x, && w(x,0)=-\frac{1}{12}x^2+\frac 76 x-\frac{10}{3},\\ & {\varphi}_t(x,0)=\frac{x}{4}, && u_t(x,0)=-\frac{1}{6}(x-10), \\ & \psi_t(x,0)=\frac{x}{4}, && v_t(x,0)=-\frac{1}{6}(x-10), \\ & {\omega}_t(x,0)=\frac{x}{4}, && w_t(x,0)=-\frac{1}{6}(x-10). \end{aligned}$$ Figures [2](#fig:sl1_first){reference-type="ref" reference="fig:sl1_first"}-[3](#fig:sl1_last){reference-type="ref" reference="fig:sl1_last"} show the behavior of solutions when $l\rightarrow 0$ for the chosen cross-sections of the beam. ![Transversal displacement of the beam, cross-section $x=2$.](phiellto0.eps){#fig:sl1_first width="90%" height="0.27\\textheight"} ![Transversal displacement of the beam, cross-section $x=6$.](uellto0.eps){width="90%" height="0.27\\textheight"} ![Shear angle variation of the beam, cross-section $x=2$.](psiellto0.eps){width="90%" height="0.27\\textheight"} ![Shear angle variation of the beam, cross-section $x=6$.](vellto0.eps){width="90%" height="0.27\\textheight"} ![Longitudinal displacement of the beam, cross-section $x=2$.](omegaellto0.eps){width="90%" height="0.27\\textheight"} ![Longitudinal displacement of the beam, cross-section $x=6$.](wellto0.eps){#fig:sl1_last width="90%" height="0.27\\textheight"} ## Singular limit $k_i\rightarrow\infty, \;l\rightarrow 0$ The singular limit for the straight Timoshenko beam ($l=0$) as $k_i\rightarrow+\infty$ is the Euler-Bernoulli beam equation [@Lag1989 Ch. 4]. We have a similar result for the Bresse composite beam when $k_i\rightarrow\infty, \;l\rightarrow 0$. **Theorem 19**. *Let the assumptions of Theorem [Theorem 5](#th:WeakWP){reference-type="ref" reference="th:WeakWP"}, [\[Ndecouple\]](#Ndecouple){reference-type="eqref" reference="Ndecouple"} and [\[GammaLip1\]](#GammaLip1){reference-type="eqref" reference="GammaLip1"} hold. Moreover, $$\begin{aligned} \begin{split} ({\varphi}_0,u_0)\in\left\{{\varphi}_0\in H^2(0,L_0), \; u_0\in H^2(L_0,L), \; {\varphi}_0(0)=u_0(L)=0,\;\right. \\ \left. 
\partial_x\phi_0(0)=\partial_x u_0(L)=0,\; \partial_x{\varphi}_0(L_0,t)=\partial_x u_0(L_0,t) \right\}; \end{split}\label{ICDsmooth}\tag{I1}\\ & \psi_0=-\partial_x{\varphi}_{0},\;v_0=-\partial_x u_{0}; \label{ICdep}\tag{I2} \\ & ({\varphi}_1,u_1) \in\{{\varphi}_1\in H^1(0,L_0), \; u_1\in H^1(L_0,L),\; {\varphi}_1(0)=u_1(L)=0,\; {\varphi}_1(L_0,t)= u_1(L_0,t)\}; \label{ICVSmooth} \tag{I3} \\ & \omega_0=w_0=0; \label{ICLZero} \tag{I4} \\ & h_1, h_2\in C^1(\mathbb R^2); \label{NC1}\tag{N6} \\ \begin{split} r_1\in L^\infty(0,T;H^1(0,L_0)), \; r_2\in L^\infty(0,T;H^1(L_0,L)), \\ r_1(L_0,t)=r_2(L_0,t) \quad \mbox{ for almost all } t>0. \end{split}\label{Rsmooth}\tag{R3} \end{aligned}$$ Let $k_j^{(n)}\rightarrow\infty$, $l^{(n)}\rightarrow 0$ as $n\rightarrow\infty$, and $\Phi^{(n)}$ be weak solutions to [\[AEq\]](#AEq){reference-type="eqref" reference="AEq"}-[\[AIC\]](#AIC){reference-type="eqref" reference="AIC"} with fixed $k_j^{(n)}, \; l^{(n)}$ and the same initial data $$\Phi(x,0)=({\varphi}_0,\psi_0,{\omega}_0, u_0,v_0,w_0)(x), \quad \Phi_t(x,0)=({\varphi}_1,\psi_1,{\omega}_1, u_1,v_1,w_1).$$ Then for every $T>0$ $$\begin{aligned} &\Phi^{(n)} \stackrel{\ast}{\rightharpoonup} ({\varphi},\psi,{\omega}, u,v,w) \quad &\mbox{in } L^\infty(0,T;H_d) \; &\mbox{ as } n\rightarrow\infty,\\ &\Phi^{(n)}_t \stackrel{\ast}{\rightharpoonup} ({\varphi}_t,\psi_t,{\omega}_t, u_t,v_t,w_t) \quad &\mbox{in } L^\infty(0,T;H_v)\; &\mbox{ as } n\rightarrow\infty, \end{aligned}$$ where* - *$({\varphi}, u)$ is a weak solution to $$\begin{aligned} \begin{split} \rho_1{\varphi}_{tt}-\beta_1{\varphi}_{ttxx} +\lambda_1 {\varphi}_{xxxx} -{\gamma}'(-{\varphi}_{tx}) {\varphi}_{txx} + \partial_x h_1({\varphi},-\varphi_x) +{f_1({\varphi},-\varphi_x)}= \qquad\\ p_1(x,{t})+\partial_x r_1(x,{t}), \quad (x,t)\in (0,L_0)\times (0,T), \label{KirchEq1} \end{split}\\ \begin{split} \rho_2u_{tt} -\beta_2u_{ttxx} +\lambda_2 u_{xxxx}+ \partial_x h_2(u,-u_x)+f_2(u,-u_x)= \qquad \qquad \qquad \qquad\qquad\\ p_2(x,{t})+\partial_x r_2(x,{t}), \quad (x,t)\in (L_0,L)\times (0,T), \label{KirchEq2} \end{split}\\ &{\varphi}(0,t)={\varphi}_x(0,t)=0, \, u(L,t)=u_x(L,t)=0,\,\\ &{\varphi}(L_0,t)=u(L_0,t), {\varphi}_x(L_0,t)=u_x(L_0,t), \lambda_1 {\varphi}_{xx}(L_0,t)=\lambda_2 u_{xx}(L_0,t), \, \label{KirchTC1}\\ \begin{split} \lambda_1 {\varphi}_{xxx}(L_0,t)-\beta_1 {\varphi}_{ttx}(L_0,t)+ h_1({\varphi}(L_0,t),-{\varphi}_x(L_0,t)) +{\gamma}(-{\varphi}_{tx}(L_0,t))=\qquad\\ \qquad\lambda_2 u_{xxx}(L_0,t)-\beta_2 u_{ttx}(L_0,t)+h_2(u(L_0,t),-u_x(L_0,t)),\label{KirchTC2} \end{split} \end{aligned}$$ with the initial conditions $$({\varphi}, u)(x,0)=({\varphi}_0, u_0)(x), \quad ({\varphi}_t, u_t)(x,0)=({\varphi}_1, u_1)(x).$$* - *$\psi=-{\varphi}_x, v=-u_x$;* - *$({\omega}, w)$ is the solution to $$\begin{aligned} &\rho_1{\omega}_{tt}- {\sigma}_1{\omega}_{xx}+{g_1({\omega})}=q_1(x,t), \quad (x,t)\in (0,L_0)\times (0,T), \label{KWaveEq}\\ & \rho_2w_{tt}- {\sigma}_2 w_{xx}+g_2(w)=q_2(x,{t}), \quad (x,t)\in (L_0,L)\times (0,T),\\ &{\omega}(0,t)=0,\,w(L,t)=0,\,\\ & {\sigma}_1 {\omega}_x(L_0,t)={\sigma}_2 w_x(L_0,t), \, {\omega}(L_0,t)=w(L_0,t) \label{KWaveTC} \end{aligned}$$ with the initial conditions $$({\omega},w)(x,0)=(0,0), \quad ({\omega}_t,w_t)(x,0)=({\omega}_1, w_1)(x).$$* *Proof.* The proof uses the idea from [@Lag1989 Ch. 4.3] and differs from it mainly in transmission conditions. 
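Before passing to the details, let us indicate formally how the limit equations can be anticipated. As $k_1^{(n)}\rightarrow\infty$, the boundedness of the energy forces ${\varphi}_x+\psi+l{\omega}\rightarrow 0$, i.e. $\psi=-{\varphi}_x$ in the limit (with $l=0$). Expressing $k_1({\varphi}_x+\psi)_x$ from [\[Eq1\]](#Eq1){reference-type="eqref" reference="Eq1"} (with $l=0$), inserting it into the $x$-derivative of [\[Eq2\]](#Eq2){reference-type="eqref" reference="Eq2"} and substituting $\psi=-{\varphi}_x$, we obtain $$\rho_1{\varphi}_{tt}-\beta_1{\varphi}_{ttxx}+\lambda_1{\varphi}_{xxxx}-{\gamma}'(-{\varphi}_{tx}){\varphi}_{txx}+f_1({\varphi},-{\varphi}_x)+\partial_x h_1({\varphi},-{\varphi}_x)=p_1+\partial_x r_1,$$ which is exactly [\[KirchEq1\]](#KirchEq1){reference-type="eqref" reference="KirchEq1"}; the same computation without the damping term gives [\[KirchEq2\]](#KirchEq2){reference-type="eqref" reference="KirchEq2"}.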
We skip the details of the proof, which coincide with [@Lag1989].\ Energy inequality [\[EE\]](#EE){reference-type="eqref" reference="EE"} implies $$\begin{aligned} & \partial_t ({\varphi}^{(n)}, \psi^{(n)}, {\omega}^{(n)}, u^{(n)}, v^{(n)},w^{(n)}) &\mbox{ bounded in } L^\infty(0,T;H_v),\\ & \psi^{(n)} &\mbox{ bounded in } L^\infty(0,T;H^1(0,L_0)), \label{ConvF}\\ & v^{(n)} &\mbox{ bounded in } L^\infty(0,T;H^1(L_0,L)),\\ & {\omega}^{(n)}_x - l^{(n)}{\varphi}^{(n)} &\mbox{ bounded in } L^\infty(0,T;L_2(0,L_0)),\\ & w^{(n)}_x - l^{(n)} u^{(n)} &\mbox{ bounded in } L^\infty(0,T;L_2(L_0,L)),\\ & k_1^{(n)}({\varphi}^{(n)}_x + \psi^{(n)} + l^{(n)} {\omega}^{(n)}) &\mbox{ bounded in } L^\infty(0,T;L_2(0,L_0)),\\ & k_2^{(n)}(u^{(n)}_x + v^{(n)} + l^{(n)} w^{(n)}) &\mbox{ bounded in } L^\infty(0,T;L_2(L_0,L)). \label{ConvL} \end{aligned}$$ Thus, we can extract subsequences which converge in the corresponding spaces weak-$\ast$. Similarly to [@Lag1989] we have $${\varphi}^{(n)}_x + \psi^{(n)} + l^{(n)} {\omega}^{(n)} \stackrel{\ast}{\rightharpoonup} 0 \quad \mbox{ in } L^\infty(0,T;L_2(0,L_0)),$$ therefore $${\varphi}_x =- \psi.$$ Analogously, $$u_x =- v.$$ [\[ConvF\]](#ConvF){reference-type="eqref" reference="ConvF"}-[\[ConvL\]](#ConvL){reference-type="eqref" reference="ConvL"} imply $$\begin{aligned} &{\omega}^{(n)} \stackrel{\ast}{\rightharpoonup} {\omega}&\mbox{ in } L^\infty(0,T;H^1(0,L_0)), & &w^{(n)} \stackrel{\ast}{\rightharpoonup} w &\mbox{ in } L^\infty(0,T;H^1(L_0,L)), \label{Conv1}\\ &{\varphi}^{(n)} \stackrel{\ast}{\rightharpoonup} {\varphi}&\mbox{ in } L^\infty(0,T;H^1(0,L_0)), & &u^{(n)} \stackrel{\ast}{\rightharpoonup} u &\mbox{ in } L^\infty(0,T;H^1(L_0,L)). \label{Conv2} \end{aligned}$$ Thus, Aubin's lemma gives that $$\label{Conv3} \Phi^{(n)} \rightarrow\Phi \mbox{ strongly in } C(0,T; [H^{1-\varepsilon}(0,L_0)]^3\times [H^{1-\varepsilon}(L_0,L)]^3)$$ for every $\varepsilon>0$, and then $$\partial_x {\varphi}_0 + \psi_0 + l^{(n)} {\omega}_0 \rightarrow 0 \quad \mbox{ strongly in } H^{-\varepsilon}(0,L_0).$$ This implies that $$\partial_x {\varphi}_0 =- \psi_0 , \quad {\omega}_0=0.$$ Analogously, $$\partial_x u_0 =- v_0 , \quad w_0=0.$$ Let us choose a test function of the form $B=({\beta}^1,-{\beta}^1_x,0,{\beta}^2,-{\beta}^2_x,0)\in F_T$ such that ${\beta}^1_x(L_0, t)={\beta}^2_x(L_0, t)$ for almost all $t$. Due to [\[Conv1\]](#Conv1){reference-type="eqref" reference="Conv1"}-[\[Conv3\]](#Conv3){reference-type="eqref" reference="Conv3"} and Lemma [Lemma 18](#lem:DissTrans){reference-type="ref" reference="lem:DissTrans"} we can pass to the limit in the variational equality [\[VEq\]](#VEq){reference-type="eqref" reference="VEq"} as $n\rightarrow\infty$. In the same way as in [@Lag1989 Ch. 
4.3] we obtain that the limiting functions ${\varphi}, u$ are of higher regularity and satisfy the following variational equality $$\begin{gathered} \label{LimVarEq} \int_0^T \int_0^{L_0} \left(\rho_1{\varphi}_t\beta^1_t - {\beta}_1{\varphi}_{tx}{\beta}^1_{tx}\right)dxdt + \int_0^T \int_{L_0}^L \left(\rho_2 u_t\beta^2_t - {\beta}_1 u_{tx}{\beta}^2_{tx}\right)dxdt - \\ \int_0^{L_0}\left(\rho_1 ({\varphi}_t \beta^1_t)(x,0) - {\beta}_1({\varphi}_{tx}{\beta}^1_{tx})(x,0)\right)dx + \int_{L_0}^L\left(\rho_2(u_t \beta^2_t)(x,0) - {\beta}_1(u_{tx}{\beta}^2_{tx})(x,0)\right) dx +\\ \int_0^T \int_0^{L_0} \lambda_1{\varphi}_{xx}{\beta}^1_{xx}dxdt + \int_0^T \int_{L_0}^L\lambda_2u_{xx} {\beta}^2_{xx}dxdt - \int_0^T\int_0^{L_0} {\gamma}'(-{\varphi}_{xt}){\varphi}_{txx}{\beta}^1dxdt +\\ \int_0^T\int_0^{L_0} \left(f_1({\varphi},-{\varphi}_x){\beta}^1 - h_1({\varphi},-{\varphi}_x){\beta}^1_x\right)dxdt + \int_0^T \int_{L_0}^L \left(f_2(u,-u_x){\beta}^2 - h_2(u,-u_x) {\beta}^2_x\right)dxdt =\\ \int_0^T \int_0^{L_0} \left( p_1{\beta}^1 - r_1 {\beta}^1_x\right)dxdt + \int_0^T \int_{L_0}^L \left(p_2{\beta}^2 - r_2 {\beta}^2_x\right)dxdt. \end{gathered}$$ Provided ${\varphi}, u$ are smooth enough, we can integrate [\[LimVarEq\]](#LimVarEq){reference-type="eqref" reference="LimVarEq"} by parts with respect to $x,\; t$ and obtain $$\begin{gathered} \label{number} \int_0^T \int_0^{L_0} (\rho_1-{\beta}_1\partial_{xx}){\varphi}_{tt}{\beta}^1 dxdt + \int_0^T \int_{L_0}^L (\rho_2-{\beta}_2\partial_{xx})u_{tt} {\beta}^2 dxdt + \\ \int_0^T \left[ {\beta}_1{\varphi}_{ttx}(t,L_0)-{\beta}_2 u_{ttx}(t,L_0)\right] {\beta}^1(t,L_0) dt + \\ \int_0^T \int_0^{L_0} \lambda_1{\varphi}_{xxxx}{\beta}^1 dxdt + \int_0^T \int_{L_0}^L \lambda_2u_{xxxx}{\beta}^2 dxdt +\\ \int_0^T \left[\lambda_1{\varphi}_{xx}-\lambda_2 u_{xx}\right] (t,L_0) {\beta}^1_x(t,L_0) dt - \int_0^T \left[\lambda_1{\varphi}_{xxx}-\lambda_2 u_{xxx}\right] (t,L_0) {\beta}^1(t,L_0) dt -\\ \int_0^T \int_0^{L_0} {\gamma}'(-{\varphi}_{xt}){\varphi}_{xxt}{\beta}^1 dxdt - \int_0^T {\gamma}(-{\varphi}_{xt}(L_0,t)){\beta}^1(L_0,t) +\\ \int_0^T\int_0^{L_0} \left(f_1({\varphi},-{\varphi}_x) + \partial_x h_1({\varphi},-{\varphi}_x)\right){\beta}^1dxdt + \int_0^T\int_{L_0}^L \left(f_2(u,-u_x) + \partial_x h_2(u,-u_x) \right){\beta}^2dxdt + \\ \int_0^T \left(h_2(u(L_0,t),-u_x(L_0,T)) -h_1({\varphi}(L_0,t),-{\varphi}_x(L_0,T))\right){\beta}^1(L_0,t) dt=\\ \int_0^T \int_0^{L_0} (p_1+\partial_x r_1) {\beta}^1 dxdt + \int_0^T \int_{L_0}^L (p_2+\partial_x r_2) {\beta}^2 dxdt + \int_0^T \left[ r_2(t,L_0)-r_1(t,L_0)\right] {\beta}^1(t,L_0)dt. \end{gathered}$$ Requiring all the terms containing ${\beta}^1(L_0,t)$, ${\beta}^1_x(L_0,t)$ to be zero, we get transmission conditions [\[KirchTC1\]](#KirchTC1){reference-type="eqref" reference="KirchTC1"}-[\[KirchEq2\]](#KirchEq2){reference-type="eqref" reference="KirchEq2"}. Equations [\[KirchEq1\]](#KirchEq1){reference-type="eqref" reference="KirchEq1"}-[\[KirchEq2\]](#KirchEq2){reference-type="eqref" reference="KirchEq2"} are recovered from the variational equality [\[number\]](#number){reference-type="eqref" reference="number"}. Problem [\[KWaveEq\]](#KWaveEq){reference-type="eqref" reference="KWaveEq"}-[\[KWaveTC\]](#KWaveTC){reference-type="eqref" reference="KWaveTC"} can be obtained in the same way. 
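For illustration, collecting in [\[number\]](#number){reference-type="eqref" reference="number"} the terms containing ${\beta}^1(L_0,t)$ and using $r_1(L_0,t)=r_2(L_0,t)$ from [\[Rsmooth\]](#Rsmooth){reference-type="eqref" reference="Rsmooth"}, we obtain $$\begin{gathered} \beta_1{\varphi}_{ttx}(L_0,t)-\beta_2 u_{ttx}(L_0,t)-\lambda_1{\varphi}_{xxx}(L_0,t)+\lambda_2 u_{xxx}(L_0,t)\\ -{\gamma}(-{\varphi}_{xt}(L_0,t))+h_2(u(L_0,t),-u_x(L_0,t))-h_1({\varphi}(L_0,t),-{\varphi}_x(L_0,t))=0, \end{gathered}$$ which is precisely the transmission condition [\[KirchTC2\]](#KirchTC2){reference-type="eqref" reference="KirchTC2"}.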
◻ We perform numerical modelling for the original problem with the initial parameters $$l^{(1)}=1, \; k_1^{(1)}=4, \; k_2^{(1)}=1.$$ We model the simultaneous convergence $l\rightarrow 0$ and $k_1,k_2\rightarrow\infty$ in the following way: we divide $l$ by the factor $\chi$ and multiply $k_1,k_2$ by the factor $\chi$. Calculations were performed for the original problem with $$\chi=1,\; \chi=3,\; \chi=10, \; \chi=30, \; \chi=100, \; \chi=300$$ and the limiting problem [\[KirchEq1\]](#KirchEq1){reference-type="eqref" reference="KirchEq1"}-[\[KirchTC2\]](#KirchTC2){reference-type="eqref" reference="KirchTC2"}. Other constants in the original problem are the same as in the previous subsection, and we choose the functions in the right-hand sides [\[rhs1\]](#rhs1){reference-type="eqref" reference="rhs1"}-[\[rhs2\]](#rhs2){reference-type="eqref" reference="rhs2"} as follows: $$r_1(x)=x+4, \quad r_2(x)=2x.$$ The nonlinear feedbacks are $$\begin{aligned} & f_1({\varphi}, \psi,{\omega})=4{\varphi}^3-2{\varphi}, && f_2(u,v,w)=4u^3-8u, \\ & h_1({\varphi}, \psi,{\omega})=0, && h_2(u,v,w)=0,\\ & g_1({\varphi}, \psi,{\omega})=3|{\omega}|{\omega}, && g_2(u,v,w)=6|w|w. \end{aligned}$$ We use linear dissipation ${\gamma}(s)=s$ and choose the following initial displacement and shear angle variation $${\varphi}_0(x)=-\frac{13}{640}x^4 +\frac{9}{40}x^3-\frac{23}{40}x^2,$$ $$u_0(x)= \frac{41}{2160}x^4 -\frac{68}{135}x^3 + \frac{823}{180}x^2 -\frac{439}{27}x + \frac{520}{27},$$ $$\psi_0(x)= -\left(-\frac{13}{160}x^3 +\frac{27}{40}x^2 -\frac{23}{20}x\right),$$ $$v_0(x)=-\left(\frac{41}{540}x^3 -\frac{68}{45}x^2+\frac{823}{90}x -\frac{439}{27}\right),$$ and set $${\omega}_0(x)=w_0(x)=0.$$ We choose the following initial velocities $${\varphi}_1(x)=-\frac{1}{32}x^3+\frac{3}{16}x^2, \quad u_1(x)=\frac{1}{108}x^3 -\frac{7}{36}x^2+\frac{10}{9}x-\frac{25}{27},$$ $$\omega_1(x)=\psi_1(x)=\frac{3}{5}x,$$ $$w_1(x)=v_1(x)=-\frac{2}{5}x+4.$$ The double limit case appeared to be more challenging from the numerical point of view than the case $l\rightarrow 0$. The numerical simulations of the coupled system [\[Eq1\]](#Eq1){reference-type="eqref" reference="Eq1"}-[\[BC\]](#BC){reference-type="eqref" reference="BC"}, including the interface conditions [\[TC1\]](#TC1){reference-type="eqref" reference="TC1"}-[\[TC4\]](#TC4){reference-type="eqref" reference="TC4"}, were done by a semidiscretization of the functions ${\varphi}, \psi,{\omega}, u, v,w$ with respect to the position $x$ and by using an explicit scheme for the time integration. This allows choosing the discretized values at grid points near the interface in a separate step so that they obey the transmission conditions. It was necessary to solve a nonlinear system of equations for the six functions at three grid points (at the interface, and left and right of the interface) in each time step. Any attempt to use a fully implicit numerical scheme led to extremely time-expensive computations due to the large nonlinear system over all discretized values which had to be solved in each time step. On the other hand, increasing $k_1,k_2$ increases the stiffness of the system of ordinary differential equations which results from the semidiscretization, and the CFL condition requires small time steps --- otherwise numerical oscillations occur. Figures [4](#fig:sl2_first){reference-type="ref" reference="fig:sl2_first"}-[5](#fig:sl2_last){reference-type="ref" reference="fig:sl2_last"} present smoothed numerical solutions; the smoothing is particularly necessary for large factors $\chi$, e.g. 
$\chi=300$. When the parameters $k_1,\; k_2$ are large, the material of the beam gets stiff, and so does the discretized system of differential equations. Nevertheless, the oscillations are still noticeable in the graph. The observation that the factor $\chi$ cannot be enlarged arbitrarily underlines the importance of having the limit problem for $\chi\to\infty$ of [\[Eq1\]](#Eq1){reference-type="eqref" reference="Eq1"}-[\[IC\]](#IC){reference-type="eqref" reference="IC"}. ![Transversal displacement of the beam, cross-section $x=2$.](phiellandkto_smooth.eps){#fig:sl2_first width="90%" height="0.27\\textheight"} ![Transversal displacement of the beam, cross-section $x=6$.](uellandkto_smooth.eps){width="90%" height="0.27\\textheight"} ![Shear angle variation of the beam, cross-section $x=2$.](psiellandkto_smooth.eps){width="90%" height="0.27\\textheight"} ![Shear angle variation of the beam, cross-section $x=6$.](vellandkto_smooth.eps){width="90%" height="0.27\\textheight"} ![Longitudinal displacement of the beam, cross-section $x=2$.](omegaellandkto_smooth.eps){width="90%" height="0.27\\textheight"} ![Longitudinal displacement of the beam, cross-section $x=6$.](wellandkto_smooth.eps){#fig:sl2_last width="90%" height="0.27\\textheight"} # Discussion There are a number of papers devoted to the long-time behaviour of linear homogeneous Bresse beams (with various boundary conditions and types of dissipation). If damping is present in all three equations, it appears to be sufficient for the exponential stability of the system without additional assumptions on the parameters of the problem (see, e.g., [@AlmSan2010]). The situation is different if dissipation of any kind is present in one or two equations only. First of all, it matters in which equations the dissipation is present. There are results for Timoshenko beams [@MuRa2002] and Bresse beams [@Oro2015] showing that damping in only one of the equations does not guarantee the exponential stability of the whole system. It seems that for the Bresse system the presence of dissipation in the shear angle equation is necessary for stability of any kind. To get exponential stability, one needs additional assumptions on the coefficients of the problem, usually the equality of the propagation speeds: $$k_1=\sigma_1, \quad \frac{\rho_1}{k_1}=\frac{\beta_1}{\lambda_1}.$$ Otherwise, only polynomial (non-uniform) stability holds (see, e.g., [@AlaMuAl2011] for mechanical dissipation and [@Oro2015] for thermal dissipation). In [@ChaSoFla2013] analogous results are established in the case of nonlinear damping. If dissipation is present in all three equations of the Bresse system, the corresponding problems with nonlinear source forces of local nature possess global attractors under the standard assumptions on the nonlinear terms (see, e.g., [@MaMo2017]). Otherwise, nonlinear source forces create technical difficulties and may cause instability of the system. To the best of our knowledge, there is no literature on such cases. In the present paper we study a transmission problem for the Bresse system. Transmission problems for various types of equations already have some history of investigation. One can find a number of papers concerning their well-posedness, long-time behaviour and other aspects (see, e.g., [@Pot2012] for a nonlinear thermoelastic/isothermal plate, or [@Fast2013] for the Euler-Bernoulli/Timoshenko beam and [@Fast2022] for the full von Karman beam). Problems with localized damping are close to transmission problems. 
In recent years a number of such problems for Bresse beams have been studied in, e.g., [@MaMo2017; @ChaSoFla2013]. To prove the existence of attractors in this case, a unique continuation property is an important tool, as well as the frequency method. The only paper we know on a transmission problem for the Bresse system is [@You2022]. The beam in this work consists of a thermoelastic (damped) and an elastic (undamped) part, both purely linear. Despite the presence of dissipation in all three equations for the damped part, the corresponding semigroup is not exponentially stable for any set of parameters, but only polynomially (non-uniformly) stable. In contrast to [@You2022], we consider mechanical damping only in the equation for the shear angle of the damped part. However, we can establish exponential stability for the linear problem and the existence of an attractor for the nonlinear one under restrictions on the coefficients in the damped part only. The assumption on the nonlinearities can be simplified in the 1D case (cf. e.g. [@Fast2014]). # Conflict of Interest Statement {#conflict-of-interest-statement .unnumbered} The research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. # Acknowledgements {#acknowledgements .unnumbered} The research is supported by the Volkswagen Foundation project \"From Modeling and Analysis to Approximation\". The first and the third authors were also successively supported by the Volkswagen Foundation project \"Dynamic Phenomena in Elasticity Problems\" at Humboldt-Universität zu Berlin, Funding for Refugee Scholars and Scientists from Ukraine. Fatiha Alabau-Boussouira and Jaime E. Munoz-Rivera and Dilberto da S. Almeida Junior, Stability to weak dissipative Bresse system, Journal of Mathematical Analysis and Applications, 374 (2011), 481-498, https://doi.org/10.1016/j.jmaa.2010.07.046. Dilberto da S. Almeida Junior and M. L. Santos, Numerical exponential decay to dissipative Bresse system, Journal of Applied Mathematics, Art. ID 848620, 17 pp. Viorel Barbu, Nonlinear Semigroups and Differential Equations in Banach Spaces, Noordhoff, 1976. J. A. C. Bresse, Cours de Mécanique Appliquée, Mallet Bachelier, Paris, 1859. R.P. Kanwal, Generalized functions. Theory and applications, Birkhäuser, 2004. Wenden Charles and J.A. Soriano and Flávio A. Falcão Nascimento and J.H. Rodrigues, Decay rates for Bresse system with arbitrary nonlinear localized damping, Journal of Differential Equations, 255 (2013), 2267-2290. Igor Chueshov, Strong solutions and attractor of the von Karman equations (in Russian), Mathematicheskii Sbornik, 181 (1990), 25--36. Igor Chueshov, Introduction to the Theory of Infinite-Dimensional Dissipative Systems, Acta, Kharkiv, 2002. Igor Chueshov and Matthias Eller and Irena Lasiecka, On the attractor for a semilinear wave equation with critical exponent and nonlinear boundary dissipation, Communications in Partial Differential Equations, 27 (2002), 1901--1951. Igor Chueshov and Tamara Fastovska and Iryna Ryzhkova, Quasistability method in study of asymptotical behaviour of dynamical systems, J. Math. Phys. Anal. Geom., 15 (2019), 448-501, 10.15407/mag15.04.448. Igor Chueshov and Irena Lasiecka, Long-time dynamics of von Karman semi-flows with non-linear boundary/interior damping, Journal of Differential Equations, 233 (2007), 42--86. 
Igor Chueshov and Irena Lasiecka, Long-time behavior of second order evolution equations with nonlinear damping, Memoirs of AMS, 912, AMS, Providence, RI, 2008. F. Dell'Oro, Asymptotic stability of thermoelastic systems of Bresse type, J. Differential Equations, 258 (2015), 3902-3927. Tamara Fastovska, Decay rates for Kirchhoff-Timoshenko transmission problems, Communications on Pure and Applied Analysis, 12 (2013), 2645-2667, 10.3934/cpaa.2013.12.2645. Tamara Fastovska, Global attractors for a full von Karman beam transmission problem, Communications on Pure and Applied Analysis, 22 (2023), 1120-1158. T. Fastovska, Attractor for a composite system of nonlinear wave and thermoelastic plate equations, Visnyk of Kharkiv National University, 70 (2014), 4-35. A.Kh. Khanmamedov, Global attractors for von Karman equations with nonlinear dissipation, Journal of Mathematical Analysis and Applications, 318 (2006), 92-101, 10.1016/j.jmaa.2005.05.031. H. Koch and I. Lasiecka, Hadamard well-posedness of weak solutions in nonlinear dynamic elasticity-full von Karman systems, Prog. Nonlinear Differ. Equ. Appl., vol. 50, Birkhäuser, Basel, 2002, 197-216. J.E. Lagnese, Boundary Stabilization of Thin Plates, SIAM, Philadelphia, PA, 1989. J. E. Lagnese and G. Leugering and E. J. P. G. Schmidt, Modeling, Analysis and Control of Dynamic Elastic Multi-Link Structures, Birkhäuser, Boston, 1994. Irena Lasiecka and Roberto Triggiani, Regularity of Hyperbolic Equations Under $L^2(0,T; L^2({\Gamma}))$-Dirichlet Boundary Terms, Appl. Math. Optim., 10 (1983), 275-286. J.-L. Lions and E. Magenes, Problèmes aux limites non homogènes et applications, Vol. 1, Dunod, Paris, 1968. Weijiu Liu and Graham H. Williams, Exact Controllability for Problems of Transmission of the Plate Equation with Lower-order Terms, Quarterly of Applied Math., 58 (2000), 37-68. To Fu Ma and Rodrigo Nunes Monteiro, Singular Limit and Long-Time Dynamics of Bresse Systems, SIAM Journal on Mathematical Analysis, 49 (2017), 2468-2495, 10.1137/15M1039894. Jaime E. Muñoz Rivera and Reinhard Racke, Mildly dissipative nonlinear Timoshenko systems---global existence and exponential stability, Journal of Mathematical Analysis and Applications, 276 (2002), 10.1016/S0022-247X(02)00436-5. M. Potomkin, A nonlinear transmission problem for a compound plate with thermoelastic part, Mathematical Methods in the Applied Sciences, 35 (2012), 530-546. Roberto Triggiani and P. F. Yao, Carleman Estimates with No Lower-Order Terms for General Riemann Wave Equations. Global Uniqueness and Observability in One Shot, Appl. Math. Optim., 46 (2002), 331-375. W. Youssef, Asymptotic behavior of the transmission problem of the Bresse beam in thermoelasticity, Z. Angew. Math. Phys., 73 (2022), 10.1007/s00033-022-01797-7.
arxiv_math
{ "id": "2309.15171", "title": "Qualitative properties of solutions to a nonlinear transmission problem\n for an elastic Bresse beam", "authors": "Tamara Fastovska, Dirk Langemann, and Iryna Ryzhkova", "categories": "math.AP", "license": "http://creativecommons.org/licenses/by-nc-nd/4.0/" }
--- abstract: | In this short note, we study the special values of zeta functions of totally real fields using Shintani's cone decomposition. We prove a certain congruence between the special values of zeta functions under a prime degree field extension. This congruence implies the 'torsion congruence' proved by Ritter-Weiss ([@RW]), which is crucial in the proof of the noncommutative Iwasawa main conjecture for totally real fields. address: | Department of Mathematical Sciences\ Durham University\ South Rd.\ Durham, DH1 3LE, U.K. author: - Yubo Jin title: On the Torsion Congruence for Zeta Functions of Totally Real Fields --- # Introduction and the Main Theorem {#section 1} Generalizing the classical commutative Iwasawa main conjectures, the noncommutative Iwasawa main conjectures were proposed in [@CFKSV; @FK] (see also [@K13 Section 2] for totally real fields). By the algebraic result of Burns [@B15] (see also [@K13; @RW04; @RW08] for totally real fields) on the $K$-theory of noncommutative Iwasawa algebras, the proof of the noncommutative Iwasawa main conjectures is reduced to the proof of certain congruences between special $L$-values and the commutative Iwasawa main conjectures. For totally real fields, certain congruences and the noncommutative Iwasawa main conjectures are proved in [@K13; @RW]. We recall the congruence in [@RW]. Let $F$ be a totally real number field of degree $n$ over $\mathbb{Q}$ with ring of integers $\mathcal{O}_F$. Fix an odd prime $p$ and let $K$ be a totally real Galois extension of $F$ of degree $p$ with ring of integers $\mathcal{O}_K$. Fix a finite set $S$ of nonarchimedean places of $F$ containing all places above $p$ and all places that ramify in $K$. Let $F_S$ (resp. $K_S$) be the maximal abelian extension of $F$ (resp. $K$) unramified at all nonarchimedean places outside $S$. Denote $\Sigma=\mathrm{Gal}(K/F), G_S=\mathrm{Gal}(F_S/F), H_S=\mathrm{Gal}(K_S/K)$. Let $\lambda_F\in\mathbb{Z}_p[[G_S]]$ (resp. $\lambda_K\in\mathbb{Z}_p[[H_S]]$) be Serre's pseudomeasures [@Se] interpolating the special values of zeta functions for $F$ (resp. $K$). The two pseudomeasures can be compared via the transfer map $\mathrm{ver}:G_S\to H_S$. The 'torsion congruence' in [@RW] is stated in the following theorem. **Theorem 1**. *Let $\mathcal{I}$ be the ideal in the ring $\mathbb{Z}_p[[H_S]]^{\Sigma}$ of $\Sigma$-fixed points of $\mathbb{Z}_p[[H_S]]$ consisting of elements of the form $\sum_{\sigma\in\Sigma}\alpha^{\sigma},\alpha\in\mathbb{Z}_p[[H_S]]$. Then for $g\in G_S, h=\mathrm{ver}(g)\in H_S$, $$\label{torsion} \mathrm{ver}\left((1-g)\lambda_F\right)\equiv (1-h)\lambda_K\qquad\text{ mod }\mathcal{I}.$$* The proof of this theorem is based on two steps. The first is to reduce the theorem to a rather simpler congruence of twisted partial zeta functions ([@RW Proposition 4]). The second is to prove this congruence by studying the zeta functions through constant terms of Eisenstein series as in [@DR]. The 'torsion congruence' [\[torsion\]](#torsion){reference-type="eqref" reference="torsion"} can also be stated by replacing $\lambda_F,\lambda_K$ with other $p$-adic measures interpolating special $L$-values of motives over $F,K$. In [@SU; @WX], the $p$-adic $L$-functions for Hilbert modular forms are also studied via the constant terms of Eisenstein series.
However, proving the 'torsion congruence' for Hilbert modular forms using a similar method seems much more complicated, as the Eisenstein series used in [@SU; @WX] are of 'Klingen type', whose Fourier coefficients are more difficult to study. The aim of this paper is to give another proof of the 'torsion congruence' [\[torsion\]](#torsion){reference-type="eqref" reference="torsion"}. That is, we study the zeta functions using Shintani's cone decomposition, which is also used in [@CN78] to construct $p$-adic measures. This method also has a cohomological explanation in [@CDG]. We hope this new proof can provide a new approach to studying the 'torsion congruence'. In particular, we expect that the 'torsion congruence' for Hilbert modular forms can be studied via the cohomological approach as in [@BH]. We now introduce the main theorem proved in this paper. Fix an integral ideal $\mathfrak{f}\subset\mathcal{O}_F$ such that $p|\mathfrak{f}$. Denote by $\mathrm{Cl}_{F}^+(\mathfrak{f})$ the narrow ray class group modulo $\mathfrak{f}$. Denote $F^+(\mathfrak{f})=\{\alpha\in F:\alpha\equiv 1\text{ mod }\mathfrak{f},\alpha\gg0\}$ and $U_F^+(\mathfrak{f})=\mathcal{O}_{F,+}^{\times}\cap F^+(\mathfrak{f})$. Let $\mathfrak{f}'=\mathfrak{f}\mathcal{O}_K$ and similarly denote $\mathrm{Cl}_K^+(\mathfrak{f}'),K^+(\mathfrak{f}'),U_K^+(\mathfrak{f}')$. Denote $N_{K/F},N_{K}:=N_{K/\mathbb{Q}},N_F:=N_{F/\mathbb{Q}}$ and $\mathrm{Tr}_{K/F},\mathrm{Tr}_{K}:=\mathrm{Tr}_{K/\mathbb{Q}},\mathrm{Tr}_F:=\mathrm{Tr}_{F/\mathbb{Q}}$ for norms and traces between number fields. We use the same notation for both the norm of elements and the norm of ideals. For integral ideals $\mathfrak{a}\subset\mathcal{O}_F$ (resp. $\mathcal{A}\subset\mathcal{O}_K$), we define the partial zeta functions $$\begin{aligned} \zeta_{\mathfrak{f}}^F(\mathfrak{a}^{-1},s)&=\sum_{\mathfrak{b}\sim_{\mathfrak{f}}\mathfrak{a}^{-1}}N_F(\mathfrak{b})^{-s},\\ \zeta_{\mathfrak{f'}}^K(\mathcal{A}^{-1},s)&=\sum_{\mathcal{B}\sim_{\mathfrak{f}'}\mathcal{A}^{-1}}N_K(\mathcal{B})^{-s}, \end{aligned}$$ where the sum is over all integral ideals $\mathfrak{b}$ (resp. $\mathcal{B}$) that are in the same class of $\mathrm{Cl}_F^+(\mathfrak{f})$ (resp. $\mathrm{Cl}_K^+(\mathfrak{f}')$) as $\mathfrak{a}^{-1}$ (resp. $\mathcal{A}^{-1}$). Fix prime ideals $\mathfrak{c}\subset\mathcal{O}_F$ and $\mathfrak{c}'\subset\mathcal{O}_K$ relatively prime to $\mathfrak{f}$ and with residue degree $1$ above the prime number $c$. The twisted partial zeta functions are defined by $$\label{zeta} \begin{aligned} \zeta_{\mathfrak{f,c}}^F(\mathfrak{a}^{-1},s)&=N_F(\mathfrak{c})^{1-s}\zeta_{\mathfrak{f}}^F((\mathfrak{ac})^{-1},s)-\zeta_{\mathfrak{f}}^F(\mathfrak{a}^{-1},s),\\ \zeta_{\mathfrak{f',c'}}^K(\mathcal{A}^{-1},s)&=N_K(\mathfrak{c}')^{1-s}\zeta_{\mathfrak{f'}}^K((\mathcal{A}\mathfrak{c}')^{-1},s)-\zeta_{\mathfrak{f'}}^K(\mathcal{A}^{-1},s). \end{aligned}$$ Our main theorem, rephrased from [@RW Proposition 4], is stated as follows. **Theorem 2**. *Assume $\mathcal{A}^{\sigma}=\mathcal{A}$ for all $\sigma\in\mathrm{Gal}(K/F)$ and take $\mathfrak{a}=\mathcal{A}\cap\mathcal{O}_F$. Then, for all integers $k\geq 1$, $$\zeta_{\mathfrak{f',c'}}^K(\mathcal{A}^{-1},1-k)\equiv\zeta_{\mathfrak{f,c}}^F(\mathfrak{a}^{-1},1-pk)\qquad\text{ mod }p\mathbb{Z}_p.$$* We review Shintani's cone decomposition as well as Colmez's explicit construction in Section [2](#section 2){reference-type="ref" reference="section 2"}.
The key ingredient is a relative version of the cone decomposition in Propositions [Proposition 3](#decomposition){reference-type="ref" reference="decomposition"} and [Proposition 4](#decomposition2){reference-type="ref" reference="decomposition2"}. In Section [3](#section 3){reference-type="ref" reference="section 3"}, we summarize the computations for the special values of zeta functions using the cone decomposition, following [@R15 Section 3], and the proof of Theorem [Theorem 2](#main){reference-type="ref" reference="main"} is then completed in Section [4](#section 4){reference-type="ref" reference="section 4"}. # Shintani's Cone Decomposition {#section 2} Let $F$ be a totally real number field of degree $n$ over $\mathbb{Q}$ with $\mathrm{Gal}(F/\mathbb{Q})=\{\tau_1,...,\tau_n\}$. Then each $\tau_i$ defines an embedding $F\to\mathbb{R}$. We can thus embed $F$ into $\mathbb{R}^n$ by sending $x\mapsto(\tau_1(x),...,\tau_n(x))$. Denote by $\mathcal{O}_{F,+}^{\times}$ the group of totally positive units, whose action on $\mathbb{R}_{>0}^n$ is given by the componentwise multiplication $\epsilon.(x_1,...,x_n)=(\epsilon x_1,...,\epsilon x_n)$. In [@Sh76], Shintani provides a cone decomposition for the fundamental domain $\mathbb{R}_{>0}^n/\mathcal{O}_{F,+}^{\times}$ (see also [@Ne99 (9.3)]). A more explicit construction of the fundamental domain is given by Colmez in [@Co88] (see also [@DF12 Section 2]). We first review the explicit construction of Colmez. Let $\{\epsilon_1,...,\epsilon_{n-1}\}$ be a set of linearly independent totally positive units generating $\mathcal{O}_{F,+}^{\times}$. For a permutation $\tau\in S_{n-1}$ of indices $\{1,...,n-1\}$, set $$f_{i,\tau}=\epsilon_{\tau(1)}\epsilon_{\tau(2)}...\epsilon_{\tau(i-1)}=\prod_{j=1}^{i-1}\epsilon_{\tau(j)},\qquad 1\leq i\leq n,\tau\in S_{n-1},$$ where $f_{1,\tau}$ is understood as $1=(1,...,1)\in\mathbb{R}_+^n$. We make the following assumption (called the **Unit Assumption $F$**): $$\mathrm{sgn}(\tau)=\mathrm{sgn}\left(\det(f_{1,\tau},f_{2,\tau},...,f_{n,\tau})\right)\qquad\text{ for all }\tau\in S_{n-1}.$$ Here $\mathrm{sgn}(\tau)$ is the signature of the permutation $\tau$ and $\mathrm{sgn}\left(\det(f_{1,\tau},f_{2,\tau},...,f_{n,\tau})\right)$ is the sign of the determinant of the $n\times n$ matrix whose rows are $f_{i,\tau},1\leq i\leq n$. For $\tau\in S_{n-1}$ and $J$ a non-empty subset of $\{1,...,n\}$, we consider open cones $$C_{\tau,J}=\sum_{i\in J}\mathbb{R}_{>0}\cdot f_{i,\tau}.$$ Define an equivalence relation $C_{\tau,J}\sim C_{\tau',J'}$ if there exists some $\epsilon\in\mathcal{O}_{F,+}^{\times}$ such that $C_{\tau,J}=\epsilon C_{\tau',J'}$. Let $\mathcal{C}_F=\left\{C_{\tau,J}\right\}/\sim$ be the resulting class of open cones. Then the Shintani decomposition is given by $$\mathbb{R}_{>0}^n=\coprod_{\epsilon\in\mathcal{O}_{F,+}^{\times}}\coprod_{C\in\mathcal{C}_F}\epsilon\cdot C.$$ Fix an odd prime $p$. Let $K$ be a totally real Galois extension of degree $p$ over $F$ with ring of integers $\mathcal{O}_K$. Take $\sigma$ to be a generator of $\mathrm{Gal}(K/F)$; then $\mathrm{Gal}(K/\mathbb{Q})=\{\tau_i^j:1\leq i\leq n,0\leq j\leq p-1\}$ with $\tau_i^j=\tau_i\sigma^j$. We can thus embed $K$ into $\mathbb{R}^{pn}$ by sending $x\mapsto(\tau_1^0(x),...,\tau_n^0(x),...,\tau_1^{p-1}(x),...,\tau_n^{p-1}(x))$. We also embed $\mathbb{R}^n$ into $\mathbb{R}^{pn}$ by sending $$\label{iota} (x_1,...,x_n)\mapsto\iota(x_1,...,x_n):=(x_1,...,x_n,x_1,...,x_n,...,x_1,...,x_n)$$ with $x_1,...,x_n$ repeating $p$ times.
The action of the group of totally positive units $\mathcal{O}_{K,+}^{\times}$ on $\mathbb{R}_{>0}^{pn}$ is given by the componentwise multiplication as for the field $F$. Replacing $F$ by $K$ above, we obtain a cone decomposition for the fundamental domain $\mathbb{R}_{>0}^{pn}/\mathcal{O}_{K,+}^{\times}$. For our purpose, we need to compare the two decompositions $\mathbb{R}_{>0}^{n}/\mathcal{O}_{F,+}^{\times}$ and $\mathbb{R}_{>0}^{pn}/\mathcal{O}_{K,+}^{\times}$. **Proposition 3**. *There is a finite class of open cones $\mathcal{C}_F$ in $\mathbb{R}_+^n$ and a finite class of open cones $\mathcal{C}_K$ in $\mathbb{R}_+^{pn}$ such that $$\label{2.6} \begin{aligned} \mathbb{R}_{>0}^n&=\coprod_{\epsilon\in\mathcal{O}_{F,+}^{\times}}\coprod_{C\in\mathcal{C}_F}\epsilon\cdot C,\\ \mathbb{R}_{>0}^{pn}&=\coprod_{\epsilon\in\mathcal{O}_{K,+}^{\times}}\coprod_{C\in\mathcal{C}_K}\epsilon\cdot C. \end{aligned}$$ Moreover, we have $\mathcal{C}_K=\mathcal{C}_K^{0}\sqcup\mathcal{C}_K'$ such that:\ (1) The cones $C\in\mathcal{C}_K^{0}$ are obtained from cones $C_0\in\mathcal{C}_F$ by the map $\iota:\mathbb{R}_{>0}^n\to\mathbb{R}_{>0}^{pn}$, i.e. $C=\iota(C_0)$. Hence $x^{\sigma}=x$ for any $x\in C$ and $\sigma\in\mathrm{Gal}(K/F)$,\ (2) The cones $C\in\mathcal{C}_K'$ satisfy $x^{\sigma}\neq x$ for any $x\in C$ and any nontrivial $\sigma\in\mathrm{Gal}(K/F)$.* *Proof.* We first take $\{\epsilon_1,...,\epsilon_{n-1}\}$ generating $\mathcal{O}_{F,+}^{\times}$ and satisfying the **Unit Assumption $F$**. The existence of these units is proved in [@Co88 Lemme 2.1]. We recall the idea of the proof. For any $M>0$, denote $$l^F_i(M)=\left(\frac{-M}{n-1},...,\frac{-M}{n-1},M,\frac{-M}{n-1},...,\frac{-M}{n-1}\right)\in\mathbb{R}^n,\qquad 1\leq i\leq n$$ with $M$ in the $i$-th entry. There is a constant $r_F$ depending only on $\mathcal{O}_{F,+}^{\times}$ such that for all $M>0$ there exist $\epsilon_1,...,\epsilon_{n-1}\in \mathcal{O}_{F,+}^{\times}$ such that $\mathrm{Log}\epsilon_i\in B(l^F_i(M),r_F)$. To satisfy the **Unit Assumption $F$**, it suffices to take such $\{\epsilon_1,...,\epsilon_{n-1}\}$ for some large enough $M$. Here $\mathrm{Log}:\mathbb{R}_{>0}^n\to\mathbb{R}^n$ is the map $(x_1,...,x_n)\mapsto (\log x_1,...,\log x_n)$ and $B(x,r)$ is the open ball in $\mathbb{R}^n$ with center $x$ and radius $r$. We use the same notation $\mathrm{Log}:\mathbb{R}_{>0}^{pn}\to\mathbb{R}^{pn}$, and write $B(x,r)$ for the open ball in $\mathbb{R}^{pn}$ with center $x$ and radius $r$, for the field $K$. For $1\leq i\leq n$ and $M'>0$, let $l_i^K(M')=\iota(l_i^F(M'))$ be the embedding of $l_i^F(M')$ under the map [\[iota\]](#iota){reference-type="eqref" reference="iota"}. For $n+1\leq i\leq pn-1$, we set $$l^K_i(M')=\left(\frac{-M'}{pn-1},...,\frac{-M'}{pn-1},M',\frac{-M'}{pn-1},...,\frac{-M'}{pn-1}\right)\in\mathbb{R}^{pn},$$ with $M'$ in the $i$-th entry. Then $\{l_i^K(M'):1\leq i\leq pn-1\}$ forms a basis of the trace zero hyperplane $\{(x_1,...,x_{pn})\in\mathbb{R}^{pn}:x_1+...+x_{pn}=0\}$ in $\mathbb{R}^{pn}$. There is a constant $r_K$ depending only on $\mathcal{O}_{K,+}^{\times}$, such that for all $M'>0$ there exist $\epsilon_1',\ldots,\epsilon'_{pn-1}\in\mathcal{O}_{K,+}^{\times}$ such that $\mathrm{Log}\epsilon_i'\in B(l_i^K(M'),r_K)$. We enlarge $r_K$ if necessary to assume $r_K>r_F$. The same computations as in [@Co88 Lemme 2.1] show that to satisfy the **Unit Assumption $K$** (defined analogously to the Unit Assumption $F$), it suffices to take such $\{\epsilon'_1,...,\epsilon'_{pn-1}\}$ for some large enough $M'$.
Take $\epsilon_1,...,\epsilon_{n-1}\in\mathcal{O}_{F,+}^{\times}$ satisfying $\mathrm{Log}\epsilon_i\in B(l_i^F(M),r_F)$ for some $M$. Then, viewed as elements of $\mathcal{O}_{K,+}^{\times}$, they satisfy $\mathrm{Log}\epsilon_i\in B(l_i^K(M'),r_K)$ if $$r_F-r_K-M<M'<r_K-r_F+M.$$ Therefore, taking $M,M'$ large enough and satisfying the above inequality, we can find $\{\epsilon_1,\ldots,\epsilon_{pn-1}\}$ generating $\mathcal{O}_{K,+}^{\times}$ and satisfying the **Unit Assumption $K$**, such that $\{\epsilon_1,\ldots,\epsilon_{n-1}\}$ generates $\mathcal{O}_{F,+}^{\times}$ and satisfies the **Unit Assumption $F$**. The proposition then follows from Colmez's explicit constructions of the cones. ◻ Let $$C(\lambda_1,\ldots,\lambda_g):=\mathbb{R}_{>0}\lambda_1+...+\mathbb{R}_{>0}\lambda_g,$$ be a cone of $\mathbb{R}_{>0}^{pn}$ generated by $\lambda_1,...,\lambda_g\in K$. We define the action of $\mathrm{Gal}(K/F)$ on this cone by $$C(\lambda_1,...,\lambda_g)^{\sigma}:=C(\lambda_1^{\sigma},...,\lambda_g^{\sigma}).$$ **Proposition 4**. *The decomposition of $\mathbb{R}_{>0}^{pn}$ in [\[2.6\]](#2.6){reference-type="eqref" reference="2.6"} can be further refined such that $$\mathbb{R}_{>0}^{pn}=\coprod_{\epsilon\in\mathcal{O}_{K,+}^{\times}}\coprod_{C\in\mathcal{C}_K}\epsilon\cdot C,\qquad\mathcal{C}_K=\mathcal{C}_K^0\sqcup\mathcal{C}_K^1\sqcup\mathcal{C}_K^2$$ with $\mathcal{C}_K^0,\mathcal{C}'_K=\mathcal{C}_K^1\sqcup\mathcal{C}_K^2$ as in Proposition [Proposition 3](#decomposition){reference-type="ref" reference="decomposition"} and\ (1) The cones $C\in\mathcal{C}_K^1$ satisfy $C^{\sigma}=C$ for any $\sigma\in\mathrm{Gal}(K/F)$,\ (2) If $C\in\mathcal{C}_K^2$, then $C\neq C^{\sigma}\in\mathcal{C}_K^2$ for any nontrivial $\sigma\in\mathrm{Gal}(K/F)$.* *Proof.* Write $\mathcal{C}_K'=\{C_1,...,C_m\}$ and let $\sigma$ be a generator of $\mathrm{Gal}(K/F)$. For $1\leq i_1,...,i_p\leq m$, denote $$C_{(i_1,...,i_p)}=C_{i_1}\cap C_{i_2}^{\sigma}\cap C_{i_3}^{\sigma^2}\cap...\cap C_{i_p}^{\sigma^{p-1}}.$$ Since $$\mathcal{O}_{K,+}^{\times}\left(C_1\cup...\cup C_m\right)=\mathcal{O}_{K,+}^{\times}\left(C_1^{\sigma}\cup...\cup C_m^{\sigma}\right)=...=\mathcal{O}_{K,+}^{\times}\left(C_1^{\sigma^{p-1}}\cup...\cup C_m^{\sigma^{p-1}}\right),$$ we have $$\mathcal{O}_{K,+}^{\times}\left(C_1\cup...\cup C_m\right)=\mathcal{O}_{K,+}^{\times}\coprod_{1\leq i_1,...,i_p\leq m}C_{(i_1,...,i_p)}.$$ Therefore, the proposition follows by taking $$\begin{aligned} \mathcal{C}_K^1&=\{C_{(i_1,...,i_p)}:i_1=...=i_p\},\\ \mathcal{C}_K^2&=\{C_{(i_1,...,i_p)}:i_1,...,i_p\text{ not all equal}\}. \end{aligned}$$ ◻ # Special Values of Zeta Functions {#section 3} In this section, we summarize the computations in [@R15 Section 3] for the special values of zeta functions using the cone decomposition. Let $F$ be a totally real number field and take ideals $\mathfrak{f},\mathfrak{c},\mathfrak{a}$ as in Section [1](#section 1){reference-type="ref" reference="section 1"}. We consider the twisted partial zeta function $\zeta_{\mathfrak{f,c}}^F(\mathfrak{a}^{-1},s)$ defined in [\[zeta\]](#zeta){reference-type="eqref" reference="zeta"}. For $0\leq j\leq c$, let $\xi^F_j:\mathcal{O}_F\to\mathbb{C}^{\times}$ be the additive character given by $\xi^F_j(x)=e^{2\pi i\frac{j\mathrm{Tr}_F(x)}{c}}$.
Then by [@R15 Proposition 3.6], $$\zeta^F_{\mathfrak{f,c}}(\mathfrak{a}^{-1},s)=N_F(\mathfrak{a})^s\sum_{j=1}^c\sum_{\alpha\in\left(\mathfrak{a}\cap F^+(\mathfrak{f})\right)/U^+_F(\mathfrak{f})}\xi^F_j(\alpha)N_F(\alpha)^{-s}.$$ Recall that we have a decomposition $$\mathbb{R}_{>0}^n=\coprod_{\epsilon\in\mathcal{O}_{F,+}^{\times}}\coprod_{C\in\mathcal{C}_F}\epsilon\cdot C.$$ By multiplying the generators of $C$ by suitable scalars, we may assume that each $C\in\mathcal{C}_F$ can be written as $$C=C(\lambda_1,...,\lambda_g):=\mathbb{R}_{>0}\cdot\lambda_1+...+\mathbb{R}_{>0}\cdot\lambda_g\qquad\text{ with }\lambda_1,...,\lambda_g\in\mathfrak{af}$$ and for such a cone $C$ we denote $$\begin{aligned} C^1&:=C^1(\lambda_1,...,\lambda_g)=\{t_1\lambda_1+...+t_g\lambda_g:0<t_1,...,t_g\leq 1\},\\ C^0(\beta)&:=C^0(\beta;\lambda_1,...,\lambda_g)=\beta+\mathbb{Z}_{\geq0}\lambda_1+...+\mathbb{Z}_{\geq0}\lambda_g\qquad\text{ for }\beta\in C^1. \end{aligned}$$ Then by [@CN78 Lemma 1] (see also [@R15 Section 3.3]), $$\mathfrak{a}\cap F^+(\mathfrak{f})=\coprod_{\epsilon\in U_F^+(\mathfrak{f})}\coprod_{\substack{C\in\mathcal{C}_F\\ \beta\in\mathfrak{a}\cap F^+(\mathfrak{f})\cap C^1 }}\epsilon\cdot C^0(\beta).$$ Hence $$\zeta^F_{\mathfrak{f,c}}(\mathfrak{a}^{-1},s)=N_F(\mathfrak{a})^s\sum_{j=1}^c\sum_{\substack{C\in\mathcal{C}_F\\ \beta\in \mathfrak{a}\cap F^+(\mathfrak{f})\cap C^1 }}\sum_{\alpha\in C^0(\beta)}\xi_j^F(\alpha)N_F(\alpha)^{-s}.$$ For a cone $C^0(\beta):=C^0(\beta;\lambda_1,\ldots,\lambda_g)$ and $\xi^F_j$ an additive character of $\mathcal{O}_F$ as above, we define a power series $$P_F(C^0(\beta),\xi^F_j;T)=\frac{\xi^F_j(\beta)(1+T)^{N_F(\beta)}}{\prod_{i=1}^g\left(1-\xi^F_j(\lambda_i)(1+T)^{N_F(\lambda_i)}\right)}.$$ As proved in [@R15 Corollary 4.2], this is indeed a power series in $\mathbb{Z}_p[[T]]$. Let $\Delta=(1+T)\frac{\partial}{\partial T}$ be the differential operator acting on $\mathbb{Z}_p[[T]]$. The computations in [@R15 Section 3.4, 3.5] show that $$\zeta^F_{\mathfrak{f,c}}(\mathfrak{a}^{-1},1-k)=N_F(\mathfrak{a})^{1-k}\sum_{j=1}^c\sum_{\substack{C\in\mathcal{C}_F\\\beta\in \mathfrak{a}\cap F^+(\mathfrak{f})\cap C^1}}\Delta^{k-1}P_F(C^0(\beta),\xi^F_j;T)|_{T=0},$$ for any integer $k\geq 1$. We remark that in [@R15], the special zeta value $\zeta_{\mathfrak{f,c}}^F(\mathfrak{a}^{-1},1-k)$ is first interpolated by a power series in $\mathbb{C}[[T_1,...,T_n]]$. After applying the projection operator $\Omega:\mathbb{C}[[T_1,...,T_n]]\to\mathbb{C}[[T]]$ defined in [@R15 Section 3.5], we then obtain the above one-variable power series. This is important for us since now we can compare the power series obtained from the different fields $F$ and $K$, and thus the special zeta values $\zeta_{\mathfrak{f,c}}^F(\mathfrak{a}^{-1},1-pk)$ and $\zeta_{\mathfrak{f',c'}}^K(\mathcal{A}^{-1},1-k)$. # Proof of the Main Theorem {#section 4} We now start the proof of our main result, Theorem [Theorem 2](#main){reference-type="ref" reference="main"}. Let $K/F$ be a totally real Galois extension of degree $p$ and take ideals $\mathfrak{f'},\mathfrak{c'},\mathcal{A}$ as in Section [1](#section 1){reference-type="ref" reference="section 1"}. In particular, we assume $\mathcal{A}^{\sigma}=\mathcal{A}$ for any $\sigma\in\mathrm{Gal}(K/F)$ and $\mathfrak{a}=\mathcal{A}\cap\mathcal{O}_F$ as in Theorem [Theorem 2](#main){reference-type="ref" reference="main"}.
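For later use we record the elementary identity behind all the evaluations at $T=0$ below (a one-line computation, included here for the reader's convenience): since $\Delta(1+T)^{N}=(1+T)\frac{\partial}{\partial T}(1+T)^{N}=N(1+T)^{N}$, we have $$\Delta^{m}(1+T)^{N}\big|_{T=0}=N^{m}\qquad\text{ for all integers }m\geq0.$$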
Take $\mathcal{C}_F$ to be a class of open cones in $\mathbb{R}_{>0}^n$ and $\mathcal{C}_K=\mathcal{C}_K^0\sqcup\mathcal{C}_K^1\sqcup\mathcal{C}_K^2$ a class of open cones in $\mathbb{R}_{>0}^{pn}$ as in Proposition [Proposition 4](#decomposition2){reference-type="ref" reference="decomposition2"}. Replacing $F$ by $K$ in the previous section we have $$\zeta_{\mathfrak{f',c'}}^K(\mathcal{A}^{-1},1-k)=N_K(\mathcal{A})^{1-k}\sum_{j=1}^c\sum_{\substack{C\in\mathcal{C}_K\\\beta\in \mathcal{A}\cap K^+(\mathfrak{f}')\cap C^1}}\Delta^{k-1}P_K(C^0(\beta),\xi^K_j;T)|_{T=0}$$ where for a cone $C=C(\lambda_1,...,\lambda_g)\in\mathcal{C}_K$, $\beta\in C^1$ and $\xi_j^K$ an additive character of $\mathcal{O}_K$, $$P_K(C^0(\beta),\xi_j^K;T)=\frac{\xi^K_j(\beta)(1+T)^{N_K(\beta)}}{\prod_{i=1}^g\left(1-\xi^K_j(\lambda_i)(1+T)^{N_K(\lambda_i)}\right)}.$$ We further write $$\zeta_{\mathfrak{f',c'}}^K(\mathcal{A}^{-1},1-k)=\zeta_{\mathfrak{f',c'}}^{K,0}(\mathcal{A}^{-1},1-k)+\zeta_{\mathfrak{f',c'}}^{K,1}(\mathcal{A}^{-1},1-k)+\zeta_{\mathfrak{f',c'}}^{K,2}(\mathcal{A}^{-1},1-k),$$ where for $i=0,1,2$, $$\zeta_{\mathfrak{f',c'}}^{K,i}(\mathcal{A}^{-1},1-k)=N_K(\mathcal{A})^{1-k}\sum_{j=1}^c\sum_{\substack{C\in\mathcal{C}^i_K\\\beta\in \mathcal{A}\cap K^+(\mathfrak{f}')\cap C^1}}\Delta^{k-1}P_K(C^0(\beta),\xi^K_j;T)|_{T=0}.$$ **Lemma 5**. *For $i=1,2$ and any $1\leq j\leq c$, $$\sum_{\substack{C\in\mathcal{C}^i_K\\\beta\in \mathcal{A}\cap K^+(\mathfrak{f}')\cap C^1}}P_K(C^0(\beta),\xi^K_j;T)\in p\mathbb{Z}_p[[T]].$$* *Proof.* Note that by the definition of $P_K(C^0(\beta),\xi_j^K;T)$, we have $$P_K(C^0(\beta),\xi_j^K;T)=P_K(C^0(\beta)^{\sigma},\xi_j^K;T)\qquad\text{ for }\sigma\in\mathrm{Gal}(K/F).$$ For $i=1$, since for any $\sigma\in\mathrm{Gal}(K/F)$ we have $\mathcal{A}^{\sigma}=\mathcal{A}$ by assumption and $C^{\sigma}=C$ for any $C\in\mathcal{C}_K^1$ by Proposition [Proposition 4](#decomposition2){reference-type="ref" reference="decomposition2"}, the Galois group $\mathrm{Gal}(K/F)$ acts freely on $\mathcal{A}\cap K^+(\mathfrak{f}')\cap C^1$ and each orbit has $p$ elements. Thus, $$\begin{aligned} &\sum_{\substack{C\in\mathcal{C}^1_K\\\beta\in \mathcal{A}\cap K^+(\mathfrak{f}')\cap C^1}}P_K(C^0(\beta),\xi^K_j;T)\\ =&\sum_{\substack{C\in\mathcal{C}^1_K\\\beta\in \left(\mathcal{A}\cap K^+(\mathfrak{f}')\cap C^1\right)/\mathrm{Gal}(K/F)}}\sum_{\sigma\in\mathrm{Gal}(K/F)}P_K(C^0(\beta^{\sigma}),\xi^K_j;T)\\ =&p\sum_{\substack{C\in\mathcal{C}^1_K\\\beta\in \left(\mathcal{A}\cap K^+(\mathfrak{f}')\cap C^1\right)/\mathrm{Gal}(K/F)}}P_K(C^0(\beta),\xi^K_j;T)\in p\mathbb{Z}_p[[T]]. \end{aligned}$$ For $i=2$, since $C\neq C^{\sigma}\in\mathcal{C}_K^2$ for any $C\in\mathcal{C}_K^2$ and any nontrivial $\sigma\in\mathrm{Gal}(K/F)$ by Proposition [Proposition 4](#decomposition2){reference-type="ref" reference="decomposition2"}, the Galois group $\mathrm{Gal}(K/F)$ acts freely on $\mathcal{C}_K^2$ and each orbit has $p$ elements. Thus, $$\begin{aligned} &\sum_{\substack{C\in\mathcal{C}^2_K\\\beta\in \mathcal{A}\cap K^+(\mathfrak{f}')\cap C^1}}P_K(C^0(\beta),\xi^K_j;T)\\ =&\sum_{\substack{C\in\mathcal{C}^2_K/\mathrm{Gal}(K/F)\\\beta\in \mathcal{A}\cap K^+(\mathfrak{f}')\cap C^1}}\sum_{\sigma\in\mathrm{Gal}(K/F)}P_K(C^0(\beta)^{\sigma},\xi^K_j;T)\\ =&p\sum_{\substack{C\in\mathcal{C}^2_K/\mathrm{Gal}(K/F)\\\beta\in \mathcal{A}\cap K^+(\mathfrak{f}')\cap C^1}}P_K(C^0(\beta),\xi^K_j;T)\in p\mathbb{Z}_p[[T]].
\end{aligned}$$ ◻ Hence, $$\begin{aligned} \zeta_{\mathfrak{f',c'}}^K(\mathcal{A}^{-1},1-k)&\equiv\zeta_{\mathfrak{f',c'}}^{K,0}(\mathcal{A}^{-1},1-k)\\ &\equiv N_F(\mathfrak{a})^{1-pk}\sum_{j=1}^c\sum_{\substack{C\in\mathcal{C}_K^0\\\beta\in\mathcal{A}\cap K^+(\mathfrak{f}')\cap C^1}}\Delta^{k-1}P_K(C^0(\beta),\xi_j^K;T)|_{T=0}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \text{ mod }p. \end{aligned}$$ By Proposition [Proposition 3](#decomposition){reference-type="ref" reference="decomposition"} on $\mathcal{C}_K^0$, $$\sum_{\substack{C\in\mathcal{C}_K^0\\\beta\in\mathcal{A}\cap K^+(\mathfrak{f}')\cap C^1}}P_K(C^0(\beta),\xi_j^K;T)=\sum_{\substack{C\in\mathcal{C}_F\\\beta\in\mathfrak{a}\cap F^+(\mathfrak{f})\cap C^1}}P_K(C^0(\beta),\xi_j^K;T),$$ where on the right-hand side we write $C^0(\beta)$ for its image in $\mathbb{R}_{>0}^{pn}$ under the embedding [\[iota\]](#iota){reference-type="eqref" reference="iota"}. **Lemma 6**. *Let $C\in\mathcal{C}_F$ and $\beta\in\mathfrak{a}\cap F^+(\mathfrak{f})\cap C^1$. Then $$\label{remains} \sum_{j=1}^c\Delta^{k-1}P_K(C^0(\beta),\xi_j^K;T)|_{T=0}\equiv\sum_{j=1}^c\Delta^{pk-1}P_F(C^0(\beta),\xi_j^F;T)|_{T=0}\text{ mod }p.$$* *Proof.* For integers $m$ and $M$ with $0\leq m\leq M$ define the rational function $$B_{m,M}(x)=(-1)^m\sum_{i=m}^M\left(\begin{array}{c}i\\m\end{array}\right)\left(\frac{x}{x-1}\right)^i.$$ Then by [@R15 Theorem 4.1], we have $$\begin{aligned} &P_K(C^0(\beta),\xi_j^K;T)\\ \equiv&\frac{\xi_j^K(\beta)}{\prod_{i=1}^g\left(1-\xi_j^K(\lambda_i)\right)}\sum_{m_1,...,m_g=0}^{kpn}(1+T)^{N_K(\beta+m_1\lambda_1+...+m_g\lambda_g)}\prod_{i=1}^gB_{m_i,kpn}(\xi_j^K(\lambda_i))\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \text{ mod }T^{k+1}. \end{aligned}$$ Hence, since $\Delta^{k-1}(1+T)^{N}|_{T=0}=N^{k-1}$, the left hand side of [\[remains\]](#remains){reference-type="eqref" reference="remains"} equals $$\begin{aligned} &\sum_{j=1}^c\frac{\xi_j^F(p\beta)}{\prod_{i=1}^g\left(1-\xi_j^F(p\lambda_i)\right)}\sum_{m_1,...,m_g=0}^{kpn}N_K(\beta+m_1\lambda_1+...+m_g\lambda_g)^{k-1}\prod_{i=1}^gB_{m_i,kpn}(\xi_j^F(p\lambda_i))\\ =&\sum_{j=1}^c\frac{\xi_j^F(\beta)}{\prod_{i=1}^g\left(1-\xi_j^F(\lambda_i)\right)}\sum_{m_1,...,m_g=0}^{kpn}N_F(\beta+m_1\lambda_1+...+m_g\lambda_g)^{p(k-1)}\prod_{i=1}^gB_{m_i,kpn}(\xi_j^F(\lambda_i))\\ \equiv&\sum_{j=1}^c\frac{\xi_j^F(\beta)}{\prod_{i=1}^g\left(1-\xi_j^F(\lambda_i)\right)}\sum_{m_1,...,m_g=0}^{kpn}N_F(\beta+m_1\lambda_1+...+m_g\lambda_g)^{pk-1}\prod_{i=1}^gB_{m_i,kpn}(\xi_j^F(\lambda_i))\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \text{ mod }p. \end{aligned}$$ Here we used that $\xi^K_j(x)=\xi^F_j(px)$ and $N_K(x)=N_F(x)^p$ for $x\in F$, and, for the second equality, that replacing $j$ by $pj$ permutes the residue classes modulo $c$ (as $p$ is invertible modulo $c$). Applying [@R15 Theorem 4.1] to the right hand side of [\[remains\]](#remains){reference-type="eqref" reference="remains"}, we have $$\begin{aligned} &P_F(C^0(\beta),\xi_j^F;T)\\ \equiv&\frac{\xi_j^F(\beta)}{\prod_{i=1}^g\left(1-\xi_j^F(\lambda_i)\right)}\sum_{m_1,...,m_g=0}^{kpn}(1+T)^{N_F(\beta+m_1\lambda_1+...+m_g\lambda_g)}\prod_{i=1}^gB_{m_i,kpn}(\xi_j^F(\lambda_i))\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \text{ mod }T^{pk+1}. \end{aligned}$$ Hence the right hand side of [\[remains\]](#remains){reference-type="eqref" reference="remains"} equals $$\sum_{j=1}^c\frac{\xi_j^F(\beta)}{\prod_{i=1}^g\left(1-\xi_j^F(\lambda_i)\right)}\sum_{m_1,...,m_g=0}^{kpn}N_F(\beta+m_1\lambda_1+...+m_g\lambda_g)^{pk-1}\prod_{i=1}^gB_{m_i,kpn}(\xi_j^F(\lambda_i)),$$ which completes the proof of the lemma.
◻ Therefore, Theorem [Theorem 2](#main){reference-type="ref" reference="main"} follows from $$\begin{aligned} \zeta_{\mathfrak{f',c'}}^K(\mathcal{A}^{-1},1-k)&\equiv N_F(\mathfrak{a})^{1-pk}\sum_{j=1}^c\sum_{\substack{C\in\mathcal{C}_F\\\beta\in\mathfrak{a}\cap F^+(\mathfrak{f})\cap C^1}}\Delta^{pk-1}P_F(C^0(\beta),\xi_j^F;T)|_{T=0}\\ &\equiv\zeta_{\mathfrak{f,c}}^F(\mathfrak{a}^{-1},1-pk)\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \text{ mod }p. \end{aligned}$$ Bergdall, John; Hansen, David. On $p$-adic $L$-functions for Hilbert modular forms. arXiv:1710.05324. Burns, David. On main conjectures in non-commutative Iwasawa theory and related conjectures. J. Reine Angew. Math.698(2015), 105--159. Cassou-Noguès, Pierrette. $p$-adic $L$-functions for totally real number field. Proceedings of the Conference on p-adic Analysis (Nijmegen, 1978), pp. 24--37. Katholieke Universiteit, Mathematisch Instituut, Nijmegen, 1978. Charollois, Pierre; Dasgupta, Samit; Greenberg, Matthew. Integral Eisenstein cocycles on $\mathrm{GL}_n$, II: Shintani's method. Comment. Math. Helv.90(2015), no.2, 435--477. Coates, John; Fukaya, Takako; Kato, Kazuya; Sujatha, Ramdorai; Venjakob, Otmar. The $\mathrm{GL}_2$ main conjecture for elliptic curves without complex multiplication. Publ. Math. Inst. Hautes Études Sci.(2005), no.101, 163--208. Colmez, Pierre. Résidu en $s=1$ des fonctions zêta $p$-adiques. Invent. Math.91(1988), no.2, 371--389. Diaz y Diaz, Francisco; Friedman, Eduardo. Colmez cones for fundamental units of totally real cubic fields. J. Number Theory132(2012), no.8, 1653--1663. Deligne, Pierre; Ribet, Kenneth A. Values of abelian $L$-functions at negative integers over totally real fields. Invent. Math.59(1980), no.3, 227--286. Fukaya, Takako; Kato, Kazuya. A formulation of conjectures on $p$-adic zeta functions in noncommutative Iwasawa theory. Proceedings of the St. Petersburg Mathematical Society. Vol. XII, 1--85. Amer. Math. Soc. Transl. Ser. 2, 219. American Mathematical Society, Providence, RI, 2006. ISBN:978-0-8218-4205-8. ISBN:0-8218-4205-6. Kakde, Mahesh. The main conjecture of Iwasawa theory for totally real fields. Invent. Math.193(2013), no.3, 539--626. Neukirch, Jürgen. Algebraic number theory. Translated from the 1992 German original and with a note by Norbert Schappacher. With a foreword by G. Harder. Grundlehren Math. Wiss., 322\[Fundamental Principles of Mathematical Sciences\]. Springer-Verlag, Berlin, 1999. xviii+571 pp. ISBN:3-540-65399-6. Ritter, Jürgen; Weiss, Alfred. Toward equivariant Iwasawa theory. II. Indag. Math. (N.S.)15(2004), no.4, 549--572. Ritter, Jürgen; Weiss, Alfred. Non-abelian pseudomeasures and congruences between abelian Iwasawa $L$-functions. Pure Appl. Math. Q.4,Special Issue: In honor of Jean-Pierre Serre. Part 1(2008), no.4, 1085--1106. Ritter, Jürgen; Weiss, Alfred. Congruences between abelian pseudomeasures. Math. Res. Lett.15(2008), no.4, 715--725. Roblot, Xavier-François. Computing $p$-adic $L$-functions of totally real number fields. Math. Comp.84(2015), no.292, 831--874. Serre, Jean-Pierre. Sur le résidu de la fonction zêta p-adique d'un corps de nombres. C. R. Acad. Sci. Paris Sér. A-B287(1978), no.4, A183--A188. Shintani, Takuro. On evaluation of zeta functions of totally real algebraic number fields at non-positive integers. J. Fac. Sci. Univ. Tokyo Sect. IA Math.23(1976), no.2, 393--417. Skinner, Christopher; Urban, Eric. The Iwasawa main conjectures for $\mathrm{GL}_2$. Invent. Math.195(2014), no.1, 1--277. Wan, Xin.
The Iwasawa main conjecture for Hilbert modular forms. Forum Math. Sigma 3 (2015), Paper No. e18, 95 pp.
arxiv_math
{ "id": "2310.04385", "title": "On the Torsion Congruence for Zeta Functions of Totally Real Fields", "authors": "Yubo Jin", "categories": "math.NT", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | Using the 4th and the 3rd degree spherical harmonics as representations of volumetric frames, we describe a simple algebraic technique for combining multiple frame orientation constraints into a single quadratic penalty function. This technique allows one to solve volumetric frame fields design problems using a coarse-to-fine strategy on hierarchical grids with immersed boundaries. These results were presented for the first time at the FRAMES 2023 European workshop on meshing. address: Siemens Digital Industries Software author: - Yu. Nesterenko bibliography: - lit.bib title: | **Spectral boundary conditions\ for volumetric frame fields design** --- # Introduction In this research note we consider the question: how to enforce multiple boundary conditions in a volumetric frame fields design problem? The reasons for such multiplicity may be different: noisy input data, presence of multiple orientation constraints (e.g. in a CAD model with a piecewise smooth surface), or consideration of boundary conditions in an immersed way. The last scenario, in our opinion, is of particular interest, since it opens up the possibility of optimizing volumetric frame fields using a coarse-to-fine strategy on hierarchical grids with immersed boundaries. Based on the two different representations of volumetric frames --- the 4th and the 3rd degree spherical harmonics --- we propose a simple algebraic technique allowing one to combine arbitrarily many frame orientation constraints into a single quadratic penalty function (of 9 and 7 variables respectively). # Spherical harmonics of degree 4 We consider real-valued spherical harmonics of the 4th degree on the unit sphere. These functions form a 9D vector space with the standard orthonormal basis $Y_{4,-4}, \ldots, Y_{4,4}$ (see [@Gorller1996]). ![ Spherical plots of basis functions $Y_{4,\scalebox{0.35}[1.0]{\( - \)}4}, \ldots, Y_{4,4}$. ](Y4i.png){#fig:Y4i width="13.0cm"} In coordinate form, all octahedrally symmetric spherical harmonics (up to multiplication by $-1$) may be obtained from the reference one --- $$\tilde{h} = (0,0,0,0,\sqrt{\frac{7}{12}},0,0,0,\sqrt{\frac{5}{12}})^T \in \mathds{R}^9$$ --- by rotations $$h = R_x(\alpha) \times R_y(\beta) \times R_z(\gamma) \times \tilde{h},$$ where $\alpha, \beta, \gamma$ are Euler angles. ![ The reference harmonic and its rotation. ](ref9.png){#fig:def width="8.0cm"} Appendix A.1 describes the construction of the rotation matrices $R_x$, $R_y$ and $R_z$. If we restrict ourselves to rotations about the $z$ axis, we get the following 1D manifold of spherical harmonics: $$\begin{split} & h = (0,0,0,0,\sqrt{\frac{7}{12}},0,0,0,0)^T + \\ \alpha \, (1,0,0,& 0,0,0,0,0,0)^T + \beta \, (0,0,0,0,0,0,0,0,1)^T, \end{split}$$ where $\alpha^2 + \beta^2 = 5/12$. Further in this section, we will denote the coordinate vectors from the last formula as $b_z$, $p_z$ and $q_z$ respectively.
To enforce an arbitrary symmetric harmonic[^1] to lie on the given manifold --- and thus to respect the orientation constraints given by $z$ axis --- we can apply the penalty function $$E_z(h) = \mathop{\rm dist}\nolimits^2 (h - b_z, \, \mathop{\rm Span}\nolimits\{p_z, q_z\}).$$ The similar formula holds for orientation constraints with an arbitrary normal vector $n$: $$\label{dist4} E_n(h) = \mathop{\rm dist}\nolimits^2 (h - b_n, \, \mathop{\rm Span}\nolimits\{p_n, q_n\}),$$ where $b_n = R_{z \rightarrow n} \times b_z$, $p_n = R_{z \rightarrow n} \times p_z$, $q_n = R_{z \rightarrow n} \times q_z$ and $R_{z \rightarrow n}$ is the corresponding $9 \times 9$ rotation matrix. The formula ([\[dist4\]](#dist4){reference-type="ref" reference="dist4"}) may be rewritten as follows: $$\label{E4} E_n(h) = h^T (I - p_n \, p_n^T - q_n \, q_n^T) \, h - 2 b_n^T \, h + b_n^T \, b_n,$$ where the $9 \times 9$ matrix $I - p_n \, p_n^T - q_n \, q_n^T$ is the orthogonal projection onto $\mathop{\rm Span}\nolimits^\perp\{p_n, q_n\}$ and the free term $b_n^T \, b_n$ is obviously equal to $\frac{7}{12}$. Now we can enforce multiple orientation constraints by simply summing the corresponding coefficients of the quadratic penalty functions ([\[E4\]](#E4){reference-type="ref" reference="E4"}): $$\label{sE4} \sum_n w_n \, E_n(h) = h^T \left(\sum_n w_n \, A_n\right) h - 2 \left(\sum_n w_n \, b_n\right)^T h + \sum_n w_n \, b_n^T \, b_n.$$ Here, $w_n$ are some positive weights (in case a weighted sum is needed) and $A_n = I - p_n \, p_n^T - q_n \, q_n^T$. # Spherical harmonics of degree 3 (octupoles) In this section we will describe the technique similar to the discussed above but for the 7D space of octupoles --- real-valued spherical harmonics of degree 3. Note that all notations from the previous section will be reintroduced here with the similar meaning. ![ Spherical plots of basis functions $Y_{3,\scalebox{0.35}[1.0]{\( - \)}3}, \ldots, Y_{3,3}$. ](Y3i.png){#fig:Y3i width="13.0cm"} Let's consider the 3D manifold of *semisymmetric* octupoles --- octupoles possessing octahedral symmetries up to multiplication by $-1$ (see [@Nesterenko2023]). In the coordinate form, all octupoles of this kind may be obtained from the reference one --- $$\tilde{h} = (0,1,0,0,0,0,0)^T \in \mathds{R}^7$$ --- by rotations $$h = R_x(\alpha) \times R_y(\beta) \times R_z(\gamma) \times \tilde{h},$$ where $\alpha, \beta, \gamma$ are Euler angles. ![ The reference octupole and its rotation. ](ref7.png){#fig:ref7 width="8.0cm"} Appendix A.2 describes the construction of the rotation matrices $R_x$, $R_y$ and $R_z$ for the space of octupoles. If we restrict ourselves to rotations about $z$ axis, we get the next 1D manifold of octupoles: $$h = \alpha \, (0,1,0,0,0,0,0)^T + \beta \, (0,0,0,0,0,1,0)^T,$$ where $\alpha^2 + \beta^2 = 1$. Further, we will denote the coordinate vectors from the last formula as $p_z$ and $q_z$ respectively. 
To enforce an arbitrary semisymmetric octupole[^2] to lie on the given 1D manifold --- and thus to respect the orientation constraints given by the $z$ axis --- we can apply the penalty function $$E_z(h) = \mathop{\rm dist}\nolimits^2 (h, \, \mathop{\rm Span}\nolimits\{p_z, q_z\}).$$ A similar formula holds for orientation constraints with an arbitrary normal vector $n$: $$\label{dist3} E_n(h) = \mathop{\rm dist}\nolimits^2 (h, \, \mathop{\rm Span}\nolimits\{p_n, q_n\}),$$ where $p_n = R_{z \rightarrow n} \times p_z$, $q_n = R_{z \rightarrow n} \times q_z$ and $R_{z \rightarrow n}$ is the corresponding $7 \times 7$ rotation matrix. The formula ([\[dist3\]](#dist3){reference-type="ref" reference="dist3"}) may be rewritten as follows: $$\label{E3} E_n(h) = h^T (I - p_n \, p_n^T - q_n \, q_n^T) \, h,$$ where the $7 \times 7$ matrix $I - p_n \, p_n^T - q_n \, q_n^T$ is the orthogonal projection onto the subspace $\mathop{\rm Span}\nolimits^\perp\{p_n, q_n\}$. Now we can enforce multiple orientation constraints by simply summing the corresponding coefficients of the quadratic penalty functions ([\[E3\]](#E3){reference-type="ref" reference="E3"}): $$\label{sE3} \sum_n w_n \, E_n(h) = h^T \left(\sum_n w_n \, A_n\right) h.$$ Here, $w_n$ are some positive weights (in case a weighted sum is needed) and $A_n = I - p_n \, p_n^T - q_n \, q_n^T$. Note that in contrast to ([\[sE4\]](#sE4){reference-type="ref" reference="sE4"}), the quadratic penalty function ([\[sE3\]](#sE3){reference-type="ref" reference="sE3"}) is homogeneous and requires storing 28 coefficients of its symmetric $7\times7$ matrix instead of 55 coefficients of a symmetric $9\times9$ matrix together with a 9D vector and a constant (per boundary frame). # Spectral analysis In this section we consider some particular combinations of orientation constraints and analyse their spectral properties. Among other things, the special role of spectra in the presented technique explains its name. For simplicity, we will limit ourselves to the octupole case (using the corresponding notations). ## \"Cube\" Let's assume that the orientation constraints for some boundary octupole consist of three mutually orthogonal axis-aligned normal vectors (taken with unit weights). The corresponding quadratic form has the matrix $$\begin{Small} \begin{bmatrix} \frac{21}{8} & 0 & \frac{\sqrt{15}}{8} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \frac{\sqrt{15}}{8} & 0 & \frac{19}{8} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 3 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{19}{8} & 0 & \frac{-\sqrt{15}}{8} \\ 0 & 0 & 0 & 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 0 & \frac{-\sqrt{15}}{8} & 0 & \frac{21}{8} \end{bmatrix} \end{Small}$$ with the eigenvalues $\{0, 2, 2, 2, 3, 3, 3\}$ and the null space $\mathop{\rm Span}\nolimits\{ \tilde{h} \}$. Thus, the given quadratic form penalizes all components of its argument, except the suitable one --- $\tilde{h}$.
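As an illustrative aside (not part of the original note), the \"cube\" computation above can be checked numerically from the explicit matrices of Appendix A.2. The sketch below assembles the matrices $A_n$ of ([\[sE3\]](#sE3){reference-type="ref" reference="sE3"}) for the three coordinate axes, assuming that $R_{z \rightarrow n}$ may be taken to be $R_y(\pi/2)$ for the $x$ axis and $R_x(\pi/2)$ for the $y$ axis (the sign of the axis is irrelevant, since only $\mathop{\rm Span}\nolimits\{p_n, q_n\}$ enters the penalty); the helper names are ours.

```python
import numpy as np

s6, s10, s15 = np.sqrt(6.0), np.sqrt(10.0), np.sqrt(15.0)

def Rz(g):
    """7x7 rotation about the z axis for degree-3 harmonics (Appendix A.2)."""
    R = np.zeros((7, 7))
    R[3, 3] = 1.0
    for m, (i, j) in zip((3, 2, 1), ((0, 6), (1, 5), (2, 4))):
        R[i, i] = R[j, j] = np.cos(m * g)
        R[i, j] = np.sin(m * g)
        R[j, i] = -np.sin(m * g)
    return R

# R_x(pi/2) for degree-3 harmonics, transcribed from Appendix A.2
Rx90 = 0.25 * np.array([
    [0,    0,  0,    s10,  0,  -s6,   0],
    [0,   -4,  0,    0,    0,   0,    0],
    [0,    0,  0,    s6,   0,   s10,  0],
    [-s10, 0, -s6,   0,    0,   0,    0],
    [0,    0,  0,    0,   -1,   0,   -s15],
    [s6,   0, -s10,  0,    0,   0,    0],
    [0,    0,  0,    0,   -s15, 0,    1]])

def Ry(b):
    # composition rule from Appendix A.2: R_y(b) = R_x(pi/2) R_z(b) R_x(pi/2)^T
    return Rx90 @ Rz(b) @ Rx90.T

p_z = np.eye(7)[1]   # reference octupole \tilde{h}
q_z = np.eye(7)[5]

def penalty(R_zn):
    """A_n = I - p_n p_n^T - q_n q_n^T with p_n = R_{z->n} p_z, q_n = R_{z->n} q_z."""
    p, q = R_zn @ p_z, R_zn @ q_z
    return np.eye(7) - np.outer(p, p) - np.outer(q, q)

# "Cube": constraints along the three coordinate axes with unit weights
A = penalty(np.eye(7)) + penalty(Ry(np.pi / 2)) + penalty(Rx90)
print(np.round(np.linalg.eigvalsh(A), 6))   # expected: [0. 2. 2. 2. 3. 3. 3.]
print(np.round(A @ p_z, 6))                 # \tilde{h} lies in the null space
```

The printed eigenvalues should reproduce the spectrum $\{0, 2, 2, 2, 3, 3, 3\}$ stated above, and the second line confirms that $\tilde{h}$ spans the null space.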
## \"Cylinder\" If the boundary is a cylinder oriented along $y$ axis, then (up to a constant factor) the corresponding quadratic form has the matrix $$\frac{1}{2\pi} \int_{0}^{2\pi} (I - R_y(\beta) p_z \, p_z^T R_y(\beta)^T - R_y(\beta) q_z \, q_z^T R_y(\beta)^T) d\beta =$$ $$\begin{Small} \begin{bmatrix} \frac{13}{16} & 0 & \frac{\sqrt{15}}{16} & 0 & 0 & 0 & 0 \\ 0 & \frac{1}{2} & 0 & 0 & 0 & 0 & 0 \\ \frac{\sqrt{15}}{16} & 0 & \frac{11}{16} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{49}{64} & 0 & -\frac{\sqrt{15}}{64} & 0 \\ 0 & 0 & 0 & 0 & \frac{103}{128} & 0 & -\frac{\sqrt{15}}{128} \\ 0 & 0 & 0 & -\frac{\sqrt{15}}{64} & 0 & \frac{47}{64} & 0 \\ 0 & 0 & 0 & 0 & -\frac{\sqrt{15}}{128} & 0 & \frac{89}{128} \end{bmatrix} \end{Small}$$ with the eigenvalues $\{\frac{1}{2}, \frac{1}{2}, \frac{11}{16}, \frac{11}{16}, \frac{13}{16}, \frac{13}{16}, 1\}$. It means that this quadratic form penalizes all the octupole components. To prevent such behavior we can simply shift the form spectra by subtracting $\frac{1}{2}I$ from its matrix. The null space of the shifted quadratic form --- $\mathop{\rm Span}\nolimits\{\tilde{h}, R_y(\frac{\pi}{4}) \times \tilde{h}\}$ --- corresponds to the free rotation of the reference semisymmetric octupole $\tilde{h}$ about $y$ axis. In other words, a thin cylinder immersed into a relatively coarse grid contributes to its cells' boundary conditions similarly to the axis of the given cylinder. ## \"Shpere\" Let's consider the unit sphere $S^2$. It can be shown by direct calculations that $$\frac{1}{4 \pi} \int_{n \in S^2} A_n dS = \frac{5}{7} I.$$ Thus, taking into account the spectral shift, we can conclude that spheres lying entirely inside a grid cell don't contribute to its boundary conditions. # Conclusion Using the 4th and the 3rd degree spherical harmonics as the representations of volumetric frames, we have described the two types of quadratic penalty functions enforcing an arbitrary number of frame orientation constraints. In combination with the (semi)symmetrization techniques from [@Nesterenko2020] and [@Nesterenko2023], this approach allows to solve volumetric frame fields design problems using a coarse-to-fine strategy on hierarchical grids with immersed boundaries. # Acknowledgements I would like to thank my friend Svyatoslav Proskurnya for a lot of fruitful discussions and, in particular, for giving the name to the presented technique long before it was formulated. 
# Appendix A.1 The rotational matrices $R_x$, $R_y$ and $R_z$ for spherical harmonics of degree 4 are defined as follows: $$R_z(\gamma) = \begin{bmatrix} \cos 4\gamma & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \sin 4\gamma \\ 0 & \cos 3\gamma & 0 & 0 & 0 & 0 & 0 & \sin 3\gamma & 0 \\ 0 & 0 & \cos 2\gamma & 0 & 0 & 0 & \sin 2\gamma & 0 & 0 \\ 0 & 0 & 0 & \cos \gamma & 0 & \sin \gamma & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -\sin \gamma & 0 & \cos \gamma & 0 & 0 & 0 \\ 0 & 0 & -\sin 2\gamma & 0 & 0 & 0 & \cos 2\gamma & 0 & 0 \\ 0 & -\sin 3\gamma & 0 & 0 & 0 & 0 & 0 & \cos 3\gamma & 0 \\ -\sin 4\gamma & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \cos 4\gamma \end{bmatrix},$$ $$R_x(\frac{\pi}{2}) = \frac{1}{8} \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 2\sqrt{14} & 0 & -2\sqrt{2} & 0 \\ 0 & -6 & 0 & 2\sqrt{7} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 2\sqrt{2} & 0 & 2\sqrt{14} & 0 \\ 0 & 2\sqrt{7} & 0 & 6 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 3 & 0 & 2\sqrt{5} & 0 & \sqrt{35} \\ -2\sqrt{14} & 0 & -2\sqrt{2} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 2\sqrt{5} & 0 & 4 & 0 & -2\sqrt{7} \\ 2\sqrt{2} & 0 & -2\sqrt{14} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \sqrt{35} & 0 & -2\sqrt{7} & 0 & 1 \end{bmatrix},$$ $$R_y(\beta) = R_x(\frac{\pi}{2}) \times R_z(\beta) \times R_x(\frac{\pi}{2})^T,$$ $$R_x(\alpha) = R_y(\frac{\pi}{2})^T \times R_z(\alpha) \times R_y(\frac{\pi}{2}).$$ See [@Blanco1997; @Choi1999; @Collado1989; @Ivanic1996] for more details. # Appendix A.2 The rotational matrices $R_x$, $R_y$ and $R_z$ for spherical harmonics of degree 3 (octupoles) are defined as follows: $$R_z(\gamma) = \begin{bmatrix} \cos 3\gamma & 0 & 0 & 0 & 0 & 0 & \sin 3\gamma \\ 0 & \cos 2\gamma & 0 & 0 & 0 & \sin 2\gamma & 0 \\ 0 & 0 & \cos \gamma & 0 & \sin \gamma & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & -\sin \gamma & 0 & \cos \gamma & 0 & 0 \\ 0 & -\sin 2\gamma & 0 & 0 & 0 & \cos 2\gamma & 0 \\ -\sin 3\gamma & 0 & 0 & 0 & 0 & 0 & \cos 3\gamma \end{bmatrix},$$ $$R_x(\frac{\pi}{2}) = \frac{1}{4} \begin{bmatrix} 0 & 0 & 0 & \sqrt{10} & 0 & -\sqrt{6} & 0 \\ 0 & -4 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \sqrt{6} & 0 & \sqrt{10} & 0 \\ -\sqrt{10} & 0 & -\sqrt{6} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 & -\sqrt{15} \\ \sqrt{6} & 0 & -\sqrt{10} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -\sqrt{15} & 0 & 1 \end{bmatrix},$$ $$R_y(\beta) = R_x(\frac{\pi}{2}) \times R_z(\beta) \times R_x(\frac{\pi}{2})^T,$$ $$R_x(\alpha) = R_y(\frac{\pi}{2})^T \times R_z(\alpha) \times R_y(\frac{\pi}{2}).$$ See [@Blanco1997; @Choi1999; @Collado1989; @Ivanic1996] for more details. [^1]: We refer the reader to [@Nesterenko2020] for the definition of the symmetrization penalty term. [^2]: We refer the reader to [@Nesterenko2023] for the definition of the semisymmetrization penalty term.
arxiv_math
{ "id": "2309.13477", "title": "Spectral boundary conditions for volumetric frame fields design", "authors": "Yuri Nesterenko", "categories": "math.NA cs.GR cs.NA", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We investigate aspects of the metric bubble tree for non-collapsing degenerations of (log) Kähler-Einstein metrics in complex dimensions one and two, and further describe a conjectural higher dimensional picture. address: - "Department of Mathematical Sciences, Loughborough University, Loughborough LE11 3TU (United Kingdom), *m.de-borbon\\@lboro.ac.uk*" - "Department of Mathematics, Aarhus University, Ny Munkegade 118 8000 Aarhus (Denmark), *c.spotti\\@math.au.dk*" author: - Martin de Borbon and Cristiano Spotti bibliography: - bub.bib title: Some models for bubbling of (log) Kähler-Einstein metrics --- # Introduction In the last decade there has been an intense study of non-collapsed limits of Kähler-Einstein (KE) manifolds and their relation to algebraic geometry. For instance, results include the algebraicity of non-collapsed Gromov-Hausdorff (GH) limits ([@donsun1; @liuszek]), which led to the solution of the Yau-Tian-Donaldson conjecture in the Fano case ([@cds; @datarszek; @chensunwang; @bermanboucksomjonsson]), the construction of Fano K-moduli spaces (e.g., [@blumxu; @SSY; @odaka2; @LWX]), and the algebro-geometric detection of the metric tangent cone at the singularities ([@donsun2; @chili; @liwangxu]). When a singularity is formed, the metric tangent cone is the first approximation of the geometry of the space around such a singular point. However, during the singularity formation, the metrics contract certain portions of the degenerating spaces at different rates. From a metric point of view, such a phenomenon is captured by the so-called *metric bubble tree*, which encodes all possible rescaled limits of a degenerating family, including the metric tangent cone at the singularity as the first non-trivial rescaled limit. In the KE case, these resulting limiting bubbles are asymptotically conical Calabi-Yau (CY) spaces, which are in general still singular. This metric picture leads us to the following natural problem: given a *flat family of non-collapsing Kähler-Einstein spaces* and a singular point $p_0 \in \mbox{Sing}(X_0)$ $$\begin{tikzcd}[column sep=0pt,nodes={inner xsep=0pt,outer xsep=0pt}] p_0 \in X_0 & \hookrightarrow & \mathcal{X}\arrow[d] \\ & & \Delta, \end{tikzcd}$$ how can we find all the *rescaled* pointed Gromov-Hausdorff limits (i.e., *bubbles*) $$\lim_{i\rightarrow \infty} (\lambda(t_i)X_{t_i}, p_{t_i})=B_{\infty}$$ at points $p_{t_i} \in X_{t_i}$ such that $p_{t_i}\rightarrow p_0$ and $\lambda(t_i) \rightarrow +\infty$, as $t_i\rightarrow 0$? More precisely, one can ask the following, possibly interrelated, questions. Is there a *purely (local) algebro-geometric way of describing such metric bubbles*, somewhat generalizing to families Chi Li's [@chili] normalized volume for detecting the tangent cone? What kind of *structure* may the space $\mathcal{T}=\{B_\infty\}$ of all such bubbles $B_\infty$ have? In this experimental elementary note we start to investigate this problem in low dimensions. This, combined with the recent differential geometric foundational work of S. Sun [@sun] refining Donaldson-Sun [@donsun2], could give some evidence of the emergence of an algebro-geometric general theory of multiscale non-collapsing for families of KE varieties. **Remark 1**. Of course similar questions linking algebraic and differential geometry of rescaled limits are relevant also in the case of (local and global) geometric *collapsing* of KE metrics.
This is expected to be linked to deeper non-Archimedean aspects of a degenerating family (e.g., [@odaka1]), but we will not discuss this problem here. We refer the readers to the sections below for the precise results, but here we give a quick survey of the content of the paper. In Section [2](#sec:1dim){reference-type="ref" reference="sec:1dim"} we describe bubbling in the case of conical (log) KE metrics on Riemann surfaces, establishing a recipe for their algebraic detection (Theorem [Theorem 1](#thm:flatP1){reference-type="ref" reference="thm:flatP1"}). Moreover, for the case of flat metrics on the Riemann sphere, we investigate the relation of bubbling with the Deligne-Mostow and Deligne-Mumford compactifications of moduli spaces of such flat metrics (Theorem [Theorem 2](#thm:DelMum){reference-type="ref" reference="thm:DelMum"}). In Section [3](#sec:2dim){reference-type="ref" reference="sec:2dim"} we then move to the complex two dimensional case, where we study local bubbling for the $A_k$ ALE spaces thanks to the Gibbons-Hawking ansatz description, and discuss how the metric rescaling can indeed be detected by a certain algebro-geometric rescaling of the deformation family (Theorem [Theorem 3](#thm:aklimits){reference-type="ref" reference="thm:aklimits"}). The logarithmic two dimensional situation is, however, more subtle: here we describe certain examples where algebraic rescalings alone would not be able to detect the bubbles; one rather needs \"weighted bubbles\" similar to the \"weighted tangent cone\" (see [@sun]). Finally, in Section [4](#sec:highdim){reference-type="ref" reference="sec:highdim"}, we put together our observations to discuss some aspects of the emerging higher dimensional picture for bubbling of non-collapsing KE metrics.\ *Acknowledgments.* We thank Song Sun for discussions on the topic on several occasions, and for having shared with us his paper [@sun]. We thank Yuji Odaka for very useful comments on a draft of this paper that considerably improved our presentation. The first author thanks Dmitri Panov for discussions on flat metrics with cone points and the Deligne-Mumford compactification. During the writing of this paper the second author was supported by a Villum Young Investigator Grant 0019098 and Villum YIP+ 53062. # Bubbling in the one dimensional logarithmic case {#sec:1dim} Let $\beta$ be a positive real number and write $\mathbb{C}_{\beta}$ for the complex plane $\mathbb{C}$ endowed with the metric of a cone of total angle $2\pi\beta$ with its vertex located at $0$. Using a standard complex coordinate $z$, the metric is given by the line element $$\label{eq:conemodel} |z|^{\beta-1}|dz| .$$ Here we consider constant curvature metrics on Riemann surfaces with a finite number of cone points at which the metric is asymptotic, in local complex coordinates, to the model given by Equation [\[eq:conemodel\]](#eq:conemodel){reference-type="eqref" reference="eq:conemodel"} above, and we want to study *the blow-up limits that arise after we rescale an algebraic family of such metrics where a cluster of cone points merges into a single conical singularity*. Geometrically, multiscale bubbling happens due to the different relative speeds of collision among the conical points. Note that this problem has already been considered in the literature ([@mazzeo; @monpan1; @monpan2]), but below we will further investigate certain aspects more relevant to establishing an algebraic description of bubbling.
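As a quick check on the model ([\[eq:conemodel\]](#eq:conemodel){reference-type="eqref" reference="eq:conemodel"}), added here for convenience: a branch of $w=z^{\beta}/\beta$ satisfies $$dw=z^{\beta-1}\,dz,\qquad |dw|=|z|^{\beta-1}|dz| ,$$ so the line element ([\[eq:conemodel\]](#eq:conemodel){reference-type="eqref" reference="eq:conemodel"}) is the pullback of the Euclidean metric under $z\mapsto z^{\beta}/\beta$, which maps the angular sector $0\leq\arg z\leq 2\pi$ onto a Euclidean sector of angle $2\pi\beta$; this is why the total angle at the vertex equals $2\pi\beta$. The same change of coordinates reappears in the uniformization argument of Lemma [Lemma 1](#lem:inflatmet){reference-type="ref" reference="lem:inflatmet"} below.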
## Infinite flat metrics with cone points We begin with a purely local (non-compact) description of bubbling of flat conical metrics. The metric bubbles relevant to this section can be written down explicitly in complex coordinates as explained in the next Lemma [Lemma 1](#lem:inflatmet){reference-type="ref" reference="lem:inflatmet"} (a schematic picture of the geometry of these metric bubbles is given later in Figure [\[fig:inflatmet\]](#fig:inflatmet){reference-type="ref" reference="fig:inflatmet"}). **Lemma 1**. *Let $p_1, \ldots, p_N \in \mathbb{C}$ and let $\beta_1, \ldots, \beta_N \in (0,1)$ be such that $$\sum_{i=1}^N (1-\beta_i) < 1 .$$ Then the line element $$\label{eq:inflatmet} \left(\prod_{i=1}^N |z-p_i|^{\beta_i-1}\right) |dz|$$ defines a flat Kähler metric on $\mathbb{C}$ with the following properties.* 1. *At the points $p_i$ it is locally isometric to a cone of total angle $2\pi\beta_i$.* 2. *Outside a compact set it is isometric to the open end of a cone of total angle $2\pi\gamma$ where $\gamma \in (0,1)$ is given by $$\label{eq:gamma} 1-\gamma = \sum_{i=1}^N (1-\beta_i) .$$* *Proof.* The Gaussian curvature $K$ of a conformal metric $g = e^{2u}|dz|^2$ is given by the formula $K = e^{-2u}\Delta u$, therefore $K=0$ if and only if $u$ is harmonic. In particular, the metric given by Equation [\[eq:inflatmet\]](#eq:inflatmet){reference-type="eqref" reference="eq:inflatmet"} is flat outside the points $p_i$ because the function $u = \sum_i (\beta_i-1) \log |z-p_i|$ is harmonic on $\mathbb{C}\setminus \{p_1, \ldots, p_N\}$. The local uniformization theorem asserts that if $g = e^{2u}|dz|^2$ is flat then we can find a local holomorphic coordinate $w$ in which $g$ is equal to the Euclidean metric. Indeed, we can locally write the harmonic function $u$ as the real part of a holomorphic function, say $u = \mbox{Re}(h)$. If we let $F$ be the solution of $F' = e^h$ with $F(0)=0$ and change coordinates to $w=F(z)$, then $g = |dw|^2$ as wanted. In order to prove items (i) and (ii) we consider a slightly more general situation. Let $\beta$ be a real number and let $g$ be the metric defined on a punctured disc $D^* \subset \mathbb{C}$ around $0$ given by $$g = e^{2u} |z|^{2\beta-2} |dz|^2$$ where $u$ is harmonic and extends smoothly over the origin. **Claim.** If $\beta \in \mathbb{R}\setminus \{-1, -2, \ldots\}$ then $g$ is isometric in a neighbourhood of the origin to a cone of total angle $2\pi\beta$ if $\beta>0$, an infinite end of a cylinder if $\beta=0$ or to an infinite end of a cone of total angle $2\pi(-\beta)$ if $\beta<0$. We prove the claim following the arguments of Troyanov [@troyanov Proposition 2]. Write $u = \mbox{Re}(h)$ where $h$ is holomorphic. We have a power series expansion $e^h= \sum_{k \geq 0} a_k z^k$ with $a_0 \neq 0$ which is convergent on a disc. Assume first that $\beta \neq 0$ and let $f = \sum_{k\geq0} b_kz^k$ with $b_k = (1+k/\beta)^{-1} a_k$ (here we use that $\beta$ is not a negative integer). Then $f$ solves the equation $$\label{eq:solf} \begin{aligned} \frac{d}{dz}\left( \frac{z^{\beta}}{\beta}f\right) &= z^{\beta-1} \sum_{k\geq 0} \left(b_k + \frac{k}{\beta}b_k\right)z^k \\ &= z^{\beta-1} e^h . \end{aligned}$$ Note that $f(0) = a_0 \neq 0$ so we can take a single-valued holomorphic branch of $f^{1/\beta}$ around the origin. 
Let $$w= z f^{1/\beta} .$$ It follows from Equation [\[eq:solf\]](#eq:solf){reference-type="eqref" reference="eq:solf"} that $w^{\beta-1}dw= z^{\beta-1}e^hdz$ and hence $g=|w|^{2\beta-2}|dw|^2$ which implies the claim when $\beta \neq 0$. On the other hand, if $\beta=0$ then we solve $df/dz = (a_0^{-1}e^h-1)/z$ and change coordinates to $w=ze^f$ so that $a_0w^{-1}dw = z^{-1}e^hdz$ and $g=|a_0|^2 |w|^{-2}|dw|^2$. This finishes the proof of the claim. Item (i) then follows immediately from the claim applied to $\beta = \beta_i$. Furthermore, outside a compact set $|z| \gg 1$ we can write the line element [\[eq:inflatmet\]](#eq:inflatmet){reference-type="eqref" reference="eq:inflatmet"} as $|z|^{\gamma-1} e^u |dz|$ where $u$ is harmonic and bounded. Let $\tilde{z} =1/z$ so that close to $\tilde{z}=0$ we have $|\tilde{z}|^{-\gamma-1}e^u|d\tilde{z}|$. Hence item (ii) follows from the claim applied to $\beta = -\gamma$. ◻ **Remark 2**. Somewhat surprisingly, Lemma [Lemma 1](#lem:inflatmet){reference-type="ref" reference="lem:inflatmet"} *requires* the angle at infinity $\gamma$ to be non-integer. Since we are restricting to the case that $\beta_i \in (0,1)$, it is immediate from Equation [\[eq:gamma\]](#eq:gamma){reference-type="eqref" reference="eq:gamma"} that $\gamma \in (0,1)$. On the other hand, if the angle at infinity is an integer multiple of $2\pi$ then Item (ii) of Lemma [Lemma 1](#lem:inflatmet){reference-type="ref" reference="lem:inflatmet"} might indeed not hold. For example, the infinite flat metric with two cone points of angles $\pi$ and $3\pi$ given by $$|z+1|^{-1/2} |z-1|^{1/2} |dz|$$ has an end asymptotic to the Euclidean plane (i.e. $\gamma=1$) but it is not isometric to it outside a compact set. See Figure [\[fig:integerangle\]](#fig:integerangle){reference-type="ref" reference="fig:integerangle"}. ## Local models for singularity formation Next we consider families of infinite flat metrics $g_t$ of the form given by Equation [\[eq:inflatmet\]](#eq:inflatmet){reference-type="eqref" reference="eq:inflatmet"}, where the cone points $p_i = p_i(t)$ are *holomorphic functions* on $\Delta = \{ |t| < 1\}$. We examine the case when all cone points collide at $0$, i.e. $p_i(0) = 0$ for all $i$, and describe algebraically the bubble tree $\mathcal{B}$ of all possible rescaled limits. Let us begin by discussing a concrete, essentially trivial example: **Example 1** (Collision of two cone points). The basic case occurs when two cone points with angle parameters $\beta_1, \beta_2 \in (0,1)$, $\beta_1+\beta_2>1$, collide into a single cone point of parameter $\gamma = \beta_1 + \beta_2 -1$. For instance, this is described by the family of metrics parameterized by $\epsilon(=|t|)>0$ given by $$g_{\epsilon} = |z+\epsilon|^{2\beta_1-2} |z -\epsilon|^{2\beta_2-2}|dz|^2 .$$ If we let $z = \epsilon \tilde{z}$ then we see that the family $$g_{\epsilon} = \epsilon^{2\beta_1+2\beta_2-2} |\Tilde{z}+1|^{2\beta_1-2} |\Tilde{z}-1|^{2\beta_2-2}|d\Tilde{z}|^2$$ is just a rescaling $g_{\epsilon} = \epsilon^{2\beta_1+2\beta_2-2} \cdot g_1$. The distance with respect to $g_{\epsilon}$ between the points $z = \pm \epsilon$ is a constant multiple of $\epsilon^{\beta_1+\beta_2-1}$ and the family $g_{\epsilon}$ converges to the $2$-cone $\mathbb{C}_{\gamma}$ of total angle $2\pi\gamma$ as $\epsilon \to 0$. 
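The scaling in the example above can also be observed numerically. The following Python sketch (with sample values of $\beta_1, \beta_2$) computes the $g_{\epsilon}$-length of the straight segment joining the two cone points, which scales at the same rate $\epsilon^{\beta_1+\beta_2-1}$ as their distance, and recovers the exponent by comparing two values of $\epsilon$.

```python
# Collision of two cone points: the g_eps-length of the segment [-eps, eps]
# scales like eps^(beta1 + beta2 - 1), the same rate as the distance between
# the two cone points.
import math

beta1, beta2 = 0.8, 0.7   # sample angle parameters with beta1 + beta2 > 1
n = 100_000               # midpoint rule; the endpoint singularities are integrable

def segment_length(eps):
    dx = 2 * eps / n
    total = 0.0
    for i in range(n):
        x = -eps + (i + 0.5) * dx
        total += abs(x + eps) ** (beta1 - 1) * abs(x - eps) ** (beta2 - 1) * dx
    return total

e1, e2 = 0.1, 0.01
fitted_exponent = math.log(segment_length(e1) / segment_length(e2)) / math.log(e1 / e2)
print(fitted_exponent, beta1 + beta2 - 1)   # both are 0.5 (up to rounding)
```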
We can realize the metrics $g_{\epsilon}$ of this family by taking the *double* (that is, by gluing along the boundary two copies) of the truncated wedge shown in yellow in Figure [\[fig:collision\]](#fig:collision){reference-type="ref" reference="fig:collision"}. As the red segment slides parallel to the left, the distance between the $2$ cone points $\beta_1, \beta_2$ goes to $0$. In order to describe algebraically the general case of a cluster of cone points colliding we begin by reviewing a standard construction of a tree out of a finite set of holomorphic functions by grouping them according to their relative order of vanishing, as explained in Étienne Ghys's book [@ghys p.27]. Let $\mathcal{O}_{\mathbb{C},0}$ be the local ring of germs of holomorphic functions defined in a neighbourhood of $0 \in \mathbb{C}$. For $f \in \mathcal{O}_{\mathbb{C}, 0}$ we let $\nu(f)$ be its order of vanishing at $0$. For any integer $k \geq 0$ we let $\sim_k$ be the equivalence relation on $\mathcal{O}_{\mathbb{C},0}$ defined by $f \sim_k g$ if $\nu(f-g) \geq k$. This gives a nested sequence of equivalence relations in the sense that if $f \sim_{k'} g$ and $k' \geq k$ then $f \sim_{k} g$, i.e. the equivalence classes of $\sim_{k'}$ refine the equivalence classes of $\sim_{k}$. Consider now a finite set $S = \{p_1, \ldots, p_N\} \subset \mathcal{O}_{\mathbb{C},0}$. We construct a tree $\mathcal{T} = \mathcal{T}(S)$ by grouping the elements of $S$ according to their relative order of vanishing as defined next. The vertices of $\mathcal{T}$ are subsets of $S$ and the children of a node make a partition of it. We begin by defining a tree $\Tilde{\mathcal{T}}$ whose $k$-th level consists of the equivalence classes in $S$ given by $\sim_k$. The root of $\Tilde{\mathcal{T}}$ is the set $S$ itself, the children of $S$ are the equivalence classes of $\sim_1$ and so on. The tree $\Tilde{\mathcal{T}}$ is infinite; however, since the set $S$ is finite, there is a smallest $k_0$ such that the equivalence classes of $\sim_k$ are singletons $\{p_i\}$ for all $k \geq k_0$. **Definition 1**. $\mathcal{T}$ is the finite tree obtained up to the $k_0$-th level of $\Tilde{\mathcal{T}}$. Moreover, we bypass every interior node with a single child by replacing every occurrence of a triplet $\to \bullet \to$ with an edge $\to$. **Example 2**. If $S$ is made of the $4$ functions $p_1 = t$, $p_2 = t-t^4$, $p_3 = t + t^4$, and $p_4 = t^2$ then $\mathcal{T}$ is the tree shown in Figure [\[fig:polytree\]](#fig:polytree){reference-type="ref" reference="fig:polytree"}. A short computational sketch of this construction is given below. Let $S = \{p_1(t), \ldots, p_N(t)\}$ be holomorphic functions on the unit disc $\Delta$ and let $\beta_1, \ldots, \beta_N$ be real numbers in the interval $(0,1)$ such that $\sum_i (1-\beta_i) < 1$. We proceed to define the bubble tree associated to the family of infinite flat metrics $$\label{eq:flatfamily} \left(\prod_{i=1}^N |z-p_i(t)|^{\beta_i-1}\right) |dz| .$$ Let $\textbf{v}$ be an interior (i.e. non-leaf) node of $\mathcal{T}(S)$ corresponding to a subset $V \subset \{1, \ldots, N\}$. The children $\mathbf{v}_1, \ldots, \mathbf{v}_{\ell}$ of $\mathbf{v}$ give a partition $V_1 \cup \ldots \cup V_{\ell} = V$. There is a non-negative integer $k$ such that the derivatives of the $p_i$ at $0$ agree up to order $k$ for all $i \in V$, but the $(k+1)$-th derivatives $p_i^{(k+1)}(0)$, $p_j^{(k+1)}(0)$ are different if $i \in V_r$ and $j \in V_s$ with $r \neq s$, and equal if $r=s$. 
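The construction of $\mathcal{T}(S)$ above is purely combinatorial and can be carried out mechanically. The following Python sketch (which assumes, as an input convention, that each germ is given by a truncated list of Taylor coefficients, and that the germs are pairwise distinct within the chosen truncation order) computes $\mathcal{T}(S)$ and, applied to the four functions of Example 2, reproduces the tree of Figure [\[fig:polytree\]](#fig:polytree){reference-type="ref" reference="fig:polytree"}.

```python
# Sketch of the construction of the tree T(S) from Definition 1, with germs
# represented by truncated lists of Taylor coefficients.  The input below
# encodes Example 2: p1 = t, p2 = t - t^4, p3 = t + t^4, p4 = t^2.

def order_of_vanishing(coeffs):
    """Index of the first nonzero coefficient (+infinity for the zero germ)."""
    for k, c in enumerate(coeffs):
        if c != 0:
            return k
    return float("inf")

def split(indices, series, k):
    """Equivalence classes of ~_k on the given indices: p_i ~_k p_j iff nu(p_i - p_j) >= k."""
    classes = []
    for i in indices:
        for cl in classes:
            diff = [a - b for a, b in zip(series[i], series[cl[0]])]
            if order_of_vanishing(diff) >= k:
                cl.append(i)
                break
        else:
            classes.append([i])
    return classes

def tree(indices, series, k=0):
    """Nodes are tuples of indices; interior nodes with a single child are bypassed."""
    if len(indices) == 1:
        return (tuple(indices), [])
    children = split(indices, series, k + 1)
    if len(children) == 1:                       # bypass  -> . ->
        return tree(indices, series, k + 1)
    return (tuple(indices), [tree(c, series, k + 1) for c in children])

S = [[0, 1, 0, 0, 0],    # p1 = t
     [0, 1, 0, 0, -1],   # p2 = t - t^4
     [0, 1, 0, 0, 1],    # p3 = t + t^4
     [0, 0, 1, 0, 0]]    # p4 = t^2
print(tree(range(4), S))
# ((0, 1, 2, 3), [((0, 1, 2), [((0,), []), ((1,), []), ((2,), [])]), ((3,), [])])
```

The output should be read as the root $S$ with children $\{p_1,p_2,p_3\}$ (an interior node splitting further into the three leaves) and the leaf $\{p_4\}$.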
Let $q_1, \ldots, q_{\ell} \in \mathbb{C}$ be the values of the $(k+1)$-th derivatives $p_i^{(k+1)}(0)$ for $i \in V$ corresponding to the $\ell$ different children of the node $\mathbf{v}$. For each $r = 1, \ldots, \ell$ we let $\Tilde{\beta}_r \in (0,1)$ be given by $$1-\Tilde{\beta}_r = \sum_{i \in V_r} (1-\beta_i) .$$ **Definition 2**. To each $\textbf{v} \in \mathcal{T}$ as above we associate the following objects: - $\mathcal{C}_{\mathbf{v}}$ is the $2$-cone $\mathbb{C}_{\tilde{\gamma}}$ of total angle $2\pi\Tilde{\gamma}$ where $$1-\Tilde{\gamma} = \sum_{i \in V} (1-\beta_i) .$$ - $B_{\mathbf{v}}$ is the infinite flat metric on $\mathbb{C}$ with cone angles $2\pi\Tilde{\beta}_1, \ldots, 2\pi\Tilde{\beta}_{\ell}$ at the points $q_1, \ldots, q_{\ell}$ and isometric to the end of the cone $\mathcal{C}_{\mathbf{v}}$ at infinity. **Definition 3**. The *metric bubble tree* $\mathcal{B}$ is the set of all infinite flat metrics $B_{\mathbf{v}}$ labelled by the interior nodes $\mathbf{v} \in \mathcal{T}(S)$. Let $\pi: \Delta \times \mathbb{C}\to \Delta$ be the projection map $(t, z) \mapsto t$ and equip the fibres $\pi^{-1}(t)$ with the infinite flat metrics with cone points at $p_i = p_i(t)$ given by Equation [\[eq:flatfamily\]](#eq:flatfamily){reference-type="eqref" reference="eq:flatfamily"}. Fix a section of $\pi$ given by a holomorphic function $t \mapsto (t, s(t))$ with $s(0)=0$. The section determines a path in the tree $\mathcal{T}$ by looking at the functions from $\{p_1, \ldots, p_N\}$ that best approximate $s$, as explained in the next paragraph. Recall that $\mathcal{T} = \mathcal{T}(S)$ is the tree associated to $S= \{p_1, \ldots, p_N\}$. Let $\mathcal{T}'$ be the tree associated to $\{s\} \cup S$. Take the path from the root of $\mathcal{T}'$ to the leaf $\{s\}$. Remove $s$ from every interior node of the path to obtain a path in $\mathcal{T}$ and label its interior nodes starting from the root of $\mathcal{T}$ as $\textbf{v}_1, \ldots, \textbf{v}_{\ell}$. See Figure [\[fig:polytreepath\]](#fig:polytreepath){reference-type="ref" reference="fig:polytreepath"}. For each $\alpha>0$ we let $h_{\alpha}$ be the pointed Gromov-Hausdorff limit $$\label{eq:halpha} h_{\alpha} = \lim_{t \to 0} \left(\mathbb{C}, |t|^{-2\alpha} g_t, s(t) \right) .$$ Let $\textbf{v}_1, \ldots, \textbf{v}_{\ell}$ be the vertices of $\mathcal{T}$ determined by the section $s$ and let $B_{\textbf{v}_i}$ be the corresponding bubbles. The section $s$ also determines points $q_i \in B_{\textbf{v}_i}$ where $q_i$ is the value of the $k$-th derivative of the section and $k$ is the smallest integer such that the values of the $k$-th derivatives of the functions in the equivalence class represented by $\textbf{v}_i$ are not all equal. **Lemma 2**. *There is an increasing sequence $0 = \alpha_0 < \alpha_1 < \ldots < \alpha_{\ell}$ such that, up to scale, $h_{\alpha}$ is isometric to:* - *the cone $\mathcal{C}_{\mathbf{v}_i}$ with base point the vertex if $\alpha_{i-1} < \alpha < \alpha_i$;* - *the bubble $B_{\mathbf{v}_i}$ with base point $q_i$ determined by the section $s$ if $\alpha = \alpha_i$.* *If $s$ is different from all $p_i$ then $h_{\alpha}$ is isometric to $\mathbb{C}$ for all $\alpha>\alpha_{\ell}$. 
While if $s = p_i$ for some $i$ then $h_{\alpha}$ is isometric to $\mathbb{C}_{\beta_i}$ for all $\alpha > \alpha_{\ell}$.* *Proof.* Center coordinates at $s(t)$ by letting $$\Tilde{z} = z - s(t) .$$ In the $\Tilde{z}$ coordinate the position of the cone points is given by $$\Tilde{p}_i(t) = p_i(t) - s(t) .$$ Write $\Tilde{p}_{j}(t) = a_{j}t^{d(j)} + \text{(h.o.t)}$ with $a_{j} \neq 0$ and $d(j) \geq 1$. We have a finite set of orders of vanishing $D \subset \mathbb{N}$ and a partition $\sqcup_{d \in D} I_d = \{ 1, \ldots, N \}$ such that $d(j) = d$ for all $j \in I_d$. Order the elements of $D$ as $d_1 < d_2 < \ldots <d_{\ell}$. The interior nodes $\mathbf{v}_1, \ldots, \mathbf{v}_{\ell}$ of $\mathcal{T}$ correspond to the subsets $\{\tilde{p}_j \, | \, d(j) \geq d_1 \}, \ldots, \{\tilde{p}_j \, | \, d(j) \geq d_{\ell} \}$. If we change coordinates to $\Tilde{z} \mapsto t^{d_i}\Tilde{z}$ then the line elements of $g_t$ transform up to a positive $O(1)$ factor to $$\label{eq:ztilde} |t|^{\alpha_i} \left( \prod_{j | d(j) \geq d_i} |\Tilde{z} - t^{-d_i}\Tilde{p}_j(t)|^{\beta_j -1} \right) \left( \prod_{j | d(j) < d_i} |1 - t^{d_i}\Tilde{p}^{-1}_j(t)\Tilde{z}|^{\beta_j -1}\right) |d\Tilde{z}|$$ where $$\alpha_i = d_i \left( 1 + \sum_{j | d(j) \geq d_i} (\beta_j-1) \right) + \sum_{j | d(j) < d_i} (\beta_j-1) d_j .$$ If we let $\gamma \in (0,1)$ be given by $\gamma - 1 = \sum_i (\beta_i -1)$ then $$\label{eq:alphai} \alpha_i = d_i \gamma + \sum_{j | d(j) < d_i} (1-\beta_j) (d_i - d(j))$$ from which it is clear that $\alpha_1 < \ldots < \alpha_{\ell}$. It follows from Equation [\[eq:ztilde\]](#eq:ztilde){reference-type="eqref" reference="eq:ztilde"} that, up to scale, the metrics $h_{\alpha_i}$ are isometric to the bubbles $B_{\mathbf{v}_i}$. Indeed the terms $t^{-d_i}\Tilde{p}_j(t)$ inside the first parenthesis in Equation [\[eq:ztilde\]](#eq:ztilde){reference-type="eqref" reference="eq:ztilde"} converge to the $d_i$-th derivative of $\Tilde{p}_j$ at the origin $$\lim_{t \to 0} \frac{\Tilde{p}_j(t)}{t^{d_i}} = \frac{\Tilde{p}_j^{(d_i)}(0)}{d_i!}$$ while the factors $|1-t^{d_i}\Tilde{p}_j^{-1}(t) \Tilde{z}|$ in the second parenthesis in Equation [\[eq:ztilde\]](#eq:ztilde){reference-type="eqref" reference="eq:ztilde"} converge uniformly on compact subsets of $\mathbb{C}$ to $1$ as $t \to 0$. More generally, for $\lambda>0$ we dilate coordinates by $\Tilde{z} \mapsto |t|^{\lambda} \Tilde{z}$ so that the line element $\sqrt{g_t}$ is, up to a positive $O(1)$ factor, $$\label{eq:tildez2} |t|^{\alpha(\lambda)} \left( \prod_{j | d(j) \geq \lambda} |\Tilde{z} - |t|^{-\lambda} \Tilde{p}_j(t)|^{\beta_j -1} \right) \left( \prod_{j | d(j) < \lambda} |1 - |t|^{\lambda} \Tilde{p}_j^{-1}(t) \Tilde{z}|^{\beta_j -1} \right) |d\Tilde{z}| ,$$ where $$\alpha(\lambda) = \gamma \lambda + \left( \sum_{j | d(j) < \lambda} (1-\beta_j) \right) \lambda + \sum_{j | d(j) < \lambda} (\beta_j-1)d(j) .$$ This defines a piecewise linear, continuous, convex bijection $\alpha (\lambda) : \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ with $\alpha (d_i) = \alpha_i$. If $\alpha_{i-1} < \alpha < \alpha_i$ then $d_{i-1} < \lambda < d_i$, and therefore all the terms $|t|^{-\lambda}\Tilde{p}_j(t)$ inside the first parenthesis in Equation [\[eq:tildez2\]](#eq:tildez2){reference-type="eqref" reference="eq:tildez2"} converge to $0$ as $t \to 0$; and the line elements given by Equation [\[eq:tildez2\]](#eq:tildez2){reference-type="eqref" reference="eq:tildez2"} converge to the cone $\mathcal{C}_{\mathbf{v}_i}$ as $t \to 0$. 
◻ ## Bubbling for flat metrics on the $2$-sphere We recall the well-known classification of flat metrics with conical singularities on the $2$-sphere $S^2$ in terms of their cone angles and conformal structures. We follow [@thurston; @troyanov]. Let $g$ be a flat metric on $S^2$ with a finite number of cone points $p_1, \ldots, p_N$ of total angles $2\pi\beta_1, \ldots, 2\pi\beta_N$ where $\beta_i \in (0,1)$. In precise terms, $g$ is a smooth flat metric on $S^2 \setminus \{p_1, \ldots, p_N\}$ and in a neighbourhood of each $p_i$ the metric $g$ is equivalent in geodesic polar coordinates centred at $p_i$ to the $2$-cone of total angle $2\pi\beta_i$ given by $dr^2 + \beta_i^2r^2d\theta^2$. More geometrically, by the Alexandrov embedding theorem [@pak Theorem 37.1] the metric $g$ is either isometric to the surface of a convex polyhedron in $\mathbb{R}^3$ or to the double of a convex polygon in $\mathbb{R}^2$. By Gauss-Bonnet the cone angles satisfy $$\label{eq:GB} \sum_{i=1}^N (1-\beta_i) = 2 .$$ Conversely we have the following. **Lemma 3**. *Let $\beta_1, \ldots, \beta_N \in (0,1)$ be such that the Gauss-Bonnet constraint [\[eq:GB\]](#eq:GB){reference-type="eqref" reference="eq:GB"} is satisfied and let $p_1, \ldots, p_N$ be distinct points in the complex plane $\mathbb{C}$. Then the line element $$\label{eq:flatsphere} \left( \prod_i |z - p_i|^{\beta_i-1} \right) |dz|$$ extends smoothly over infinity to define a metric $g$ on the $2$-sphere with $N$ cone points of angles $2\pi\beta_1, \ldots, 2\pi\beta_N$.* *Proof.* An easy check shows that if we let $z=1/w$ then the line element [\[eq:flatsphere\]](#eq:flatsphere){reference-type="eqref" reference="eq:flatsphere"} takes the form $$\left( \prod_i |1 - p_i w|^{\beta_i-1} \right) |dw|$$ which extends smoothly over $w=0$. On the other hand, the proof of Lemma [Lemma 1](#lem:inflatmet){reference-type="ref" reference="lem:inflatmet"} shows that close to every $p_i$ we can find a holomorphic coordinate $\xi$ in which the metric $g$ agrees with the flat $2$-cone $|\xi|^{\beta_i-1}|d\xi|$. ◻ **Lemma 4**. *Every flat metric $g$ on the $2$-sphere with $N$ cone points of total angles $2\pi\beta_1, \ldots, 2\pi\beta_N$ is isometric to a metric of the form given by Equation [\[eq:flatsphere\]](#eq:flatsphere){reference-type="eqref" reference="eq:flatsphere"}.* *Proof.* The metric $g$ induces in the usual way a complex structure away from the cone points given by an anti-clockwise rotation of $90^\circ$. This complex structure extends smoothly over the cone points as can be checked for the $2$-cone. Indeed, if $\xi = r^{1/\beta} e^{i\theta}$ then $dr^2 + \beta^2r^2d\theta^2 = \beta^2 |\xi|^{2\beta-2}|d\xi|^2$ showing that the $2$-cone endows the Euclidean plane $\mathbb{R}^2$ with the complex structure of $\mathbb{C}$. In particular, the metric $g$ endows the $2$-sphere with the structure of a Riemann surface. By the Uniformization Theorem we know that this Riemann surface is biholomorphic to $\mathbb{CP}^1$. Let $z$ be a complex coordinate on $\mathbb{C}\subset \mathbb{CP}^1$ such that $\infty$ is a smooth point of $g$ and let $p_1, \ldots, p_N \in \mathbb{C}$ be the cone points. Then we can write $$g = e^{2u} \left( \prod_i |z - p_i|^{2\beta_i-2} \right) |dz|^2$$ for a real function $u$. Since $g$ is flat, $u$ is harmonic away from the cone points. The fact that the cone angle at $p_i$ is $2\pi\beta_i$ implies that $u$ extends smoothly over $p_i$. 
As a consequence, the function $u$ is smooth and harmonic on the whole $\mathbb{CP}^1$ and by the maximum principle it is constant. ◻ Now, let $\mathcal{M}_{0,N}$ be the moduli space of configurations of $N$ marked points in the Riemann sphere $\mathbb{CP}^1$ modulo the action of projective linear transformations $PSL(2, \mathbb{C})$ that preserve the markings. Let $Met_{\beta}$ be the space of all flat metrics on the $2$-sphere with $N$ cone points of angles $2\pi\beta_1, \ldots, 2\pi\beta_N$ modulo marked isometries and scale. **Corollary 1**. *The forgetful map $$\label{eq:forget} Met_{\beta} \to \mathcal{M}_{0,N}$$ that records the conformal structure induced by the metric is a bijection.* *Proof.* Surjectivity follows from Lemma [Lemma 3](#lem:fms1){reference-type="ref" reference="lem:fms1"} and injectivity from Lemma [Lemma 4](#lem:fms2){reference-type="ref" reference="lem:fms2"}. ◻ Consider a family of unit area flat conical metrics $g_t$ on $\mathbb{CP}^1$ parameterized by $t$ in the unit disc $\Delta$. We assume that the positions of the cone points $p_1(t), \ldots, p_N(t)$ depend holomorphically on $t \in \Delta$ and that for $t \neq 0$ we have $N$ different cone points ($p_i(t) \neq p_j(t)$ for $i \neq j$) of fixed cone angles $2\pi\beta_1, \ldots, 2\pi\beta_N$. However, for $t=0$ we allow some of the cone points to come together. More precisely, we have a partition of the index set $$\bigcup_{k=1}^M I_k = \{1, \ldots, N\}$$ by disjoint subsets $I_k$ such that $p_i(0) = p_j(0)$ if $i, j$ belong to the same subset $I_k$ and $p_i(0) \neq p_j(0)$ if $i \in I_k$, $j \in I_{\ell}$ with $k \neq \ell$. Moreover, we assume that for every $k=1, \ldots, M$ we have $$\label{eq:collisions} \sum_{i \in I_k} (1-\beta_i) < 1 .$$ Let $q_1, \ldots, q_M$ be the distinct values of $p_1(0), \ldots, p_N(0)$, so that the limiting metric is $$\label{eq:g0} g_0 = C_0 \left( \prod_{j=1}^M |z - q_j|^{2\gamma_j - 2} \right) |dz|^2$$ where the angles $\gamma_j$ are determined by $1 - \gamma_j = \sum_{i \in I_j}(1-\beta_i)$ and $C_0>0$ is determined by the unit area normalization. **Theorem 1**. *Let $\sigma$ be a section of the projection map $(t, z) \mapsto t$ from $\Delta \times \mathbb{CP}^1$ to $\Delta$ given by $t \mapsto \sigma(t) = (t, s(t))$ where $s$ is a holomorphic function with $s(0) = q_j$ for some $1 \leq j \leq M$. Then the rescaled pointed Gromov-Hausdorff limits $$h_{\alpha} = \lim_{t \to 0} \left(\mathbb{CP}^1, |t|^{-2\alpha} \cdot g_t, s(t) \right)$$ for $\alpha>0$ are given as in Lemma [Lemma 2](#lem:rescaledsection){reference-type="ref" reference="lem:rescaledsection"} with $\mathcal{T} = \mathcal{T}(S)$ where $S = \{p_i \, | \, i \in I_j\}$.* *Proof.* The metrics $g_t$ are given for $t \neq 0$ by the explicit formula $$\label{eq:gts} g_t = C_t \left( \prod_{i=1}^N |z - p_i(t)|^{2\beta_i-2} \right) |dz|^2$$ where $C_t > 0$ are determined by the condition $\text{Area}(g_t) = 1$. At $t=0$ the limiting metric $g_0$ is given by Equation [\[eq:g0\]](#eq:g0){reference-type="eqref" reference="eq:g0"}. It is easy to check that the distance functions induced by the metrics $g_t$ converge uniformly to the distance function given by $g_0$ and that there is a uniform $C>0$ such that $C^{-1} < C_t < C$ for all $t$ with $|t|< 1/2$. Given this, the same proof as in Lemma [Lemma 2](#lem:rescaledsection){reference-type="ref" reference="lem:rescaledsection"} applies to this case. ◻ **Remark 3**. 
Using gluing methods we expect Theorem [Theorem 1](#thm:flatP1){reference-type="ref" reference="thm:flatP1"} to hold for non-collapsed sequences of hyperbolic and spherical metrics. This goes by producing approximate solutions, using the infinite flat model families, and then perturbing the approximation by analyzing the linearization of the singular Liouville equation. See [@mazzeo]. ## Relations to Deligne-Mostow and Deligne-Mumford compactifications The space $\mathcal{M}_{0,N}$ of configurations of $N \geq 3$ marked points in the Riemann sphere is an open complex manifold of dimension $N-3$. The automorphism group of the projective line $PSL(2, \mathbb{C})$ acts diagonally on the Cartesian product $(\mathbb{CP}^1)^N$. This action is free away from the diagonals. If we let $$U = (\mathbb{CP}^1)^N \setminus \bigcup_{i \neq j} \{z_i=z_j\}$$ then $\mathcal{M}_{0, N}$ is the space of $PSL(2, \mathbb{C})$-orbits on the open set $U$. The *Deligne-Mostow compactification* $\overline{\mathcal{M}}^{\beta}_{0, N}$ of the configuration space $\mathcal{M}_{0,N}$ is defined as follows. Fix $0<\beta_i < 1$ for $i = 1, \ldots, N$ such that the Gauss-Bonnet constraint $$\sum_{i=1}^N (1-\beta_i) = 2$$ is satisfied. Moreover, we assume the generic condition that[^1] $$\label{eq:noncollapse} \sum_{i \in I} (1-\beta_i) \neq 1 \, \text{ for all subsets } I \subset \{1, \ldots, N\} .$$ Let $z = (z_1, \ldots, z_N)$ be a point in $(\mathbb{CP}^1)^N$. The point $z$ defines a partition of the index set $\{1, \ldots, N\}$ by subsets $I_1, \ldots, I_{\ell}$ such that $z_i=z_j$ if and only if both $i, j$ belong to the same subset $I_k$. We say that $z$ is $\beta$-*stable* if $$\label{eq:stability} \sum_{i \in I_j} (1-\beta_i) < 1 \, \text{ for all } 1 \leq j \leq \ell .$$ In particular, if $z \in U$ then the subsets $I_j$ are singletons and the stability condition [\[eq:stability\]](#eq:stability){reference-type="eqref" reference="eq:stability"} trivially holds. However, for arbitrary tuples in $(\mathbb{CP}^1)^N$ the notion of stability depends on the angle parameters $\beta = (\beta_1, \ldots, \beta_N)$. The upshot is that the space of $PSL(2, \mathbb{C})$-orbits of $\beta$-stable points is a projective manifold $\overline{\mathcal{M}}^{\beta}_{0,N}$ that contains $\mathcal{M}_{0,N}$ as a Zariski open subset, see [@DM Section 4]. Under the correspondence between $\mathcal{M}_{0, N}$ and the space of unit area flat metrics on the $2$-sphere with prescribed cone angles $Met_{\beta}$ given by Corollary [Corollary 1](#lem:forget){reference-type="ref" reference="lem:forget"}, the projective manifold $\overline{\mathcal{M}}^{\beta}_{0,N}$ corresponds to the Gromov-Hausdorff compactification of $Met_{\beta}$. **Example 3**. Take $N=5$ and suppose that the first $4$ cone angle parameters are equal, $\beta_1 = \ldots = \beta_4 = \beta$, while $\beta_5$ is determined by Gauss-Bonnet $$\beta_5 = 3 - 4\beta .$$ The requirement that $0< \beta_5 < 1$ implies that $1/2 < \beta < 3/4$. For $2/3 < \beta < 3/4$ the space $\overline{\mathcal{M}}^{\beta}_{0,5}$ is equivalent to the projective plane $\mathbb{CP}^2$ and the boundary divisor (i.e. the complement of $\mathcal{M}_{0, 5}$) is the arrangement of $6 = \binom{4}{2}$ lines that join $2$ out of $4$ points in general position. When $1/2 < \beta < 2/3$ the space $\overline{\mathcal{M}}^{\beta}_{0,5}$ is equivalent to the blow-up $\text{Bl}_4\mathbb{CP}^2$ of the projective plane at the $4$ triple points of the arrangement. 
The boundary divisor is made of $10 = \binom{5}{2}$ rational curves that intersect in a normal crossing configuration. Equivalently, $\mathcal{M}_{0,5}$ is the complement in $\mathbb{CP}^1 \times \mathbb{CP}^1$ of $7$ curves: the $6$ lines where one of the coordinates is $0, 1$ or $\infty$, plus the diagonal, and $\overline{\mathcal{M}}^{\beta}_{0,5}$ for $1/2<\beta<2/3$ is obtained by blowing up the points $(0,0), (1,1), (\infty, \infty) \in \mathbb{CP}^1 \times \mathbb{CP}^1$. **Remark 4**. As Example [Example 3](#ex:delmost){reference-type="ref" reference="ex:delmost"} shows, the boundary divisor $\overline{\mathcal{M}}^{\beta}_{0,N} \setminus \mathcal{M}_{0,N}$ is not always normal crossing. The *Deligne-Mumford* compactification $\overline{\mathcal{M}}_{0,N}$ is a projective manifold that contains $\mathcal{M}_{0,N}$ as a Zariski open subset and such that its complement $\overline{\mathcal{M}}_{0,N} \setminus \mathcal{M}_{0,N}$ is a normal crossing divisor $$D = \sum_P D_P$$ whose irreducible components $D_P$ are in one-to-one correspondence with partitions $P=\{I_0, I_1\}$ of the index set $\{1, \ldots, N\}$ into $2$ subsets $I_0, I_1$ such that $\min \{|I_0|, |I_1|\} \geq 2$. The boundary points $m \in \overline{\mathcal{M}}_{0,N}$ represent connected nodal curves $C_m$ with $N$ marked points whose irreducible components are $\mathbb{CP}^1$'s. The total number of nodal and marked points in each irreducible component is at least $3$. The topology of $C_m$ is encoded by a tree whose vertices represent the irreducible components of $C_m$, and an edge connects $2$ vertices if the corresponding $\mathbb{CP}^1$'s intersect at a nodal point. The number of divisors that contain $m$ is equal to the number of nodes of $C_m$. More precisely, splitting a node $y \in C_m$ into $2$ divides the curve $C_m$ into $2$ connected components $C_0, C_1$ and then we have a partition $P = \{I_0, I_1\}$ by recording which indices correspond to marked points in each component. We proceed to relate the Deligne-Mostow and Deligne-Mumford compactifications. Same as before, fix $0< \beta_i < 1$ for $i=1, \ldots, N$ such that the Gauss-Bonnet constraint $\sum_i (1-\beta_i) =2$ is satisfied. **Lemma 5** ([@koziarz]). *There is a logarithmic resolution $$\pi: \overline{\mathcal{M}}_{0,N} \to \overline{\mathcal{M}}^{\beta}_{0,N} .$$* *Proof.* We follow [@koziarz Section 5]. For simplicity of notation, we write $b(x_i) = 1-\beta_i$ for $i=1, \ldots, N$ and refer to it as the weight at $x_i$. The Gauss-Bonnet formula is equivalent to $\sum_i b(x_i) = 2$. Let $m \in \overline{\mathcal{M}}_{0,N}$ and let $C_m$ be the corresponding nodal curve with marked points $x_1, \ldots, x_N$. Split each node of $C_m$ into a pair of points $\{y, y'\}$ and let $Y, Y'$ be the $2$ connected components of the resulting curve that contain $y, y'$ respectively. Define $$\label{eq:yweight} b(y) = \sum_{x_i \in Y'} b(x_i) .$$ In particular $b(y) + b(y') = 2$ and exactly one of the alternatives $b(y) < 1, b(y') > 1$ or $b(y)> 1, b(y') < 1$ must hold. Given an irreducible component $C_j$ of $C_m$ we have a finite set of marked points $$\Sigma_j = \{x_i \, | \, x_i \in C_j\} \bigcup \{y_i \, | \, y_i \in C_j\}$$ and the sum of weights over all points in $\Sigma_j$ is equal to $2$. The key elementary fact [@koziarz Lemma 5.1] is that there is a unique irreducible component $C_j$ such that $b(z) < 1$ for all $z \in \Sigma_j$, called the $\beta$-*principal* component of $C_m$. 
The $\beta$-principal component $C_j$ gives us an $N$-tuple $(z_1, \ldots, z_N)$ where $z_i = x_i$ if $x_i \in C_j$, and $z_i = y$ if $x_i \notin C_j$, where $y \in C_j$ is the nodal point such that $x_i$ lies in the corresponding component $Y'$. By definition, the $N$-tuple $(z_1, \ldots, z_N)$ is $\beta$-stable and the resolution is given by the map $$\pi(m) = [(z_1, \ldots, z_N)] . \qedhere$$ ◻ **Example 4**. Take $N=5$ and $\beta_1 = \ldots = \beta_4 = \beta$ with $1/2 < \beta < 3/4$ as in Example [Example 3](#ex:delmost){reference-type="ref" reference="ex:delmost"}. Let $\pi: \overline{\mathcal{M}}_{0,5} \to \overline{\mathcal{M}}_{0,5}^{\beta}$ be the logarithmic resolution of the previous lemma. If $1/2 < \beta < 2/3$ then $\pi$ is an isomorphism. If $2/3 < \beta < 3/4$ then $\pi$ is the blow-up at the $4$ triple points of the boundary divisor. We can now provide a metric interpretation, in relation to bubbling, of the boundary points in the Deligne-Mumford compactification. **Theorem 2**. *Every point in the Deligne-Mumford compactification represents a bubble tree for a family of flat metrics with cone points on $\mathbb{CP}^1$ as in Theorem [Theorem 1](#thm:flatP1){reference-type="ref" reference="thm:flatP1"}. Conversely, every such family of metrics determines a unique point in the Deligne-Mumford compactification $\overline{\mathcal{M}}_{0,N}$.* *Proof.* Let $m \in \overline{\mathcal{M}}_{0,N}$ and let $F: \Delta \to \overline{\mathcal{M}}_{0,N}$ be a holomorphic map with $F(0)=m$ and $F(\Delta^*) \subset \mathcal{M}_{0,N}$. Let $C_m$ be the nodal curve with marked points $x_1, \ldots, x_N$ corresponding to $m$. The irreducible components of $C_m$ make a tree and we set the root of this tree to be the $\beta$-principal component. The map $F$ gives us a family of flat metrics (Corollary [Corollary 1](#lem:forget){reference-type="ref" reference="lem:forget"}) for which the $\beta$-principal component of $C_m$ represents the limiting compact metric on $\mathbb{CP}^1$. Every other component $C_j$ has a unique nodal point $y$ with $b(y) > 1$ and represents an infinite flat metric with a cone end of total angle $2\pi(b(y)-1)$ and cone points of angles $2\pi(1-b(x_i))$ at the $x_i \in C_j$ and $2\pi(1-b(y_i))$ at all the other nodes $y_i \in C_j$ which are different from $y$. The children $q_1, \ldots, q_k$ of the $\beta$-principal component represent clusters of cone points that coalesce to single cone points in the limiting compact metric. The sub-trees with roots $q_1, \ldots, q_k$ represent the metric bubble trees that arise after taking rescaled limits as in Theorem [Theorem 1](#thm:flatP1){reference-type="ref" reference="thm:flatP1"}. Conversely, given a family of metrics as in Theorem [Theorem 1](#thm:flatP1){reference-type="ref" reference="thm:flatP1"} we define a nodal curve $C_m$ with marked points $x_1, \ldots, x_N$ whose irreducible components are the vertices of the metric bubble tree, and two such components meet at a nodal point $y_j$ if and only if the corresponding vertices are connected by an edge. The marked points $x_i$ are the limiting (non-clustered) cone points of the family while the nodal points $y_i$ represent a collision of a cluster of cone points for one component and a cone end for the other. ◻ **Remark 5**. The Deligne-Mumford compactification also carries a differential geometric meaning as the moduli compactification of *hyperbolic metrics* with cusps at the marked points, with further cusp formation at the nodal points. Thus the above theorem provides a different differential geometric interpretation of the same moduli space. 
It may be interesting to study further whether Hassett's moduli compactifications [@hassett] also carry this combined meaning of moduli of conical hyperbolic metrics (with their degenerations) and bubbles (up to a certain scale). See also the discussion in Section [4.3](#mK){reference-type="ref" reference="mK"}. # Bubbling in two dimensions {#sec:2dim} It is well-known that the singularities forming in non-collapsing sequences of Kähler-Einstein manifolds of dimension $2$ are isolated orbifold singularities [@Anderson; @BKN], with ALE spaces bubbling off [@Bando]. Here, in analogy to what we described in the previous section, we investigate relations of bubbling and algebraic geometry for the simplest type of such singularities, namely $A_k$-singularities, showing that the picture is essentially analogous to the one dimensional log case. However, we also point out (Section [3.5](#Log2){reference-type="ref" reference="Log2"}) that if one instead considers the more general case of log KE metrics (so conical along a divisor) in this dimension, the bubbling picture seems to be more complicated, and related to jumping phenomena, as recently pointed out in [@sun]. ## The $A_k$-singularity Let $k$ be a positive integer and consider the cyclic group $\Gamma_{k+1} \subset SU(2)$ of order $k+1$ generated by the diagonal matrix with eigenvalues $\exp(2\pi i / (k+1))$ and $\exp(-2\pi i/ (k+1))$. The polynomial functions $u=X_1^{k+1}$, $v = X_2^{k+1}$, and $z = X_1 X_2$ are invariant under the action of $\Gamma_{k+1}$ on $\mathbb{C}^2$ and give an isomorphism between the orbifold quotient and the $A_k$-singularity $$\mathbb{C}^2/ \Gamma_{k+1} \cong \{ uv = z^{k+1} \} \subset \mathbb{C}^3 .$$ The group $\Gamma_{k+1}$ preserves the Euclidean metric on $\mathbb{C}^2$ and it acts freely on the unit $3$-sphere. Thus the $A_k$-singularity comes equipped with a flat Kähler cone metric $d\rho^2 + \rho^2 g_{S^3/\Gamma_{k+1}}$ where $$\rho^2 = |u|^{\frac{2}{k+1}} + |v|^{\frac{2}{k+1}}$$ measures the intrinsic squared distance to the vertex located at $0$. The linear $\mathbb{C}^*$-action on $\mathbb{C}^3$ with weights $(k+1, k+1, 2)$ given by $$\label{eq:akweights} \lambda \cdot (u, v, z) = (\lambda^{k+1} u, \lambda^{k+1} v , \lambda^{2}z)$$ preserves the $A_k$-singularity and scales the intrinsic distance by $|\lambda|$. ## Gibbons-Hawking ansatz We recall the Gibbons-Hawking construction of ALE manifolds of type $A_k$. Let $x_1, \ldots, x_{k+1}$ be distinct points in $\mathbb{R}^3$ and let $f$ be the harmonic function $$f(x) = \frac{1}{2} \sum_{i=1}^{k+1} \frac{1}{|x-x_i|} .$$ Let $\pi_0: M_0 \to \mathbb{R}^3 \setminus \{x_1, \ldots, x_{k+1}\}$ be the circle bundle with first Chern class $-1$ on spheres around the punctures, equipped with a connection $\eta \in \Omega^1(M_0)$ such that $$d\eta = - *df .$$ Then $$g = f g_{\mathbb{R}^3} + f^{-1} \eta^2$$ defines a complete hyperkähler metric on a $4$-manifold $M = M_0 \cup \{\Tilde{x}_i\}$ obtained by adding $k+1$ points $\Tilde{x}_1, \ldots, \Tilde{x}_{k+1}$ to $M_0$. The metric $g$ is asymptotic at infinity to the flat orbifold $\mathbb{C}^2/\Gamma_{k+1}$ where $\Gamma_{k+1} \subset SU(2)$ is the cyclic group generated by the diagonal matrix with eigenvalues $\exp(2\pi i / (k+1))$ and $\exp(-2\pi i/ (k+1))$. The manifold $M$ admits a circle action which preserves the hyperkähler structure, with moment map $\pi: M \to \mathbb{R}^3$. 
The circle action has $k+1$ fixed points at $\Tilde{x}_1, \ldots, \Tilde{x}_{k+1}$ and $\pi(\Tilde{x}_i) = x_i$, while the restriction of $\pi$ to $M_0$ is equal to $\pi_0$. Every vector $v$ in the unit sphere $S^2 \subset \mathbb{R}^3$ determines a parallel complex structure $I_v$ that sends the horizontal lift of the constant vector field $v$ to the derivative of the circle action. Let $x^1, x^2, x^3$ be linear coordinates in $\mathbb{R}^3$. Consider the complex structure $I$ determined by the $\partial/\partial x_3$ vector field. Then $z = x^1 + i x^2$ is a circle invariant holomorphic function on $(M,I)$. As shown in [@lebrun], one can further produce holomorphic functions $u,v$ on $(M,I)$ of weights $1,-1$ for the circle action which give an equivariant biholomorphism between $(M, I)$ and the complex surface in $\mathbb{C}^3$ defined by the equation $$uv = \prod_{i=1}^{k+1} (z - z_i)$$ where $z_i = z(x_i)$, equipped with the circle action $e^{it} \cdot (u, v, z) = (e^{it}u, e^{-it}v, z)$. If $s \subset \mathbb{R}^3$ is a segment that connects two points $x_i, x_j$ then the preimage $\pi^{-1}(s)$ is a $2$-sphere $S_{ij} \subset M$. The second homology group $H_2(M, \mathbb{Z})$ is generated by such spheres. If $v$ is a unit vector in $\mathbb{R}^3$ corresponding to a parallel complex structure $I_v$ on $M$ then the cohomology class of the Kähler form $\omega_v = g (I_v \cdot, \cdot)$ is determined by its pairing with the $2$-spheres $S_{ij}$ given by $$\frac{1}{2\pi} \int_{S_{ij}} \omega_v = \langle v, x_i - x_j \rangle$$ where $\langle \cdot, \cdot \rangle$ denotes the Euclidean inner product in $\mathbb{R}^3$. In particular, if $v = \partial/ \partial x^3$ and the cohomology class of the Kähler form $\omega = \omega_v$ vanishes then we can assume that all the points $x_i$ lie on the plane $x^3=0$ which we identify with $\mathbb{C}$ via $z= x^1 + i x^2$. **Remark 6**. All the above can be extended to ALE orbifolds. In this setting the points $x_i$ have multiplicities $m_i \in \mathbb{Z}_{\geq 1}$. The harmonic potential is $$f(x) = \frac{1}{2} \sum_i \frac{m_i}{|x-x_i|}$$ and the metric has orbifold singularities of type $A_{m_i-1}$ at the points with $m_i > 1$. The asymptotic cone at infinity is $\mathbb{C}^2/\Gamma_{k+1}$ where $k+1 = \sum_i m_i$. ## Local bubbling models for $A_k$-singularities We provide model families of ALE manifolds of type $A_k$. Let $z_i(t)$ for $i=1, \ldots, k+1$ be holomorphic functions of $t$, for $t$ in the unit disc $\Delta$ with $z_i(t) \neq z_j (t)$ for $i\neq j$ and $t \in \Delta^*$ and $z_i(0) = 0$ for all $1 \leq i \leq k+1$. We consider the family of Gibbons-Hawking metrics $g_t$ given by the harmonic potentials $$f_t(x) = \frac{1}{2} \sum_i \frac{1}{|x - x_i(t)|}$$ where $x_i(t) = (z_i(t), 0)$ under the identification of $\mathbb{R}^3$ with $\mathbb{C}\times \mathbb{R}$ given by $(x^1, x^2, x^3) \mapsto (z, x^3)$ where $z = x^1 + i x^2$. We take the complex structure given by the $x_3$-axis, so the corresponding complex surfaces are $$X_t = \{uv = \prod_{i=1}^{k+1} (z - z_i(t)) \} \subset \mathbb{C}^3$$ and the Kähler forms $\omega_t$ are all $\partial\Bar{\partial}$-exact by our choice of points $x_i(t)$. Let $\mathcal{X}$ be the complex $3$-fold which is the total space of the family. So we have a holomorphic submersion $$\Pi : \mathcal{X} \to \Delta$$ whose fibers are the complex surfaces $X_t$. 
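To fix ideas, the following sympy sketch (the functions $z_i(t)$ below are illustrative sample choices, not taken from the text) writes out the fibres $X_t$ of such a family for $k+1=3$: the central fibre is the $A_2$-singularity $uv=z^3$, while for $t \neq 0$ the points $z_i(t)$ are pairwise distinct and the fibre is smooth.

```python
# A concrete instance of the family X_t = { uv = prod_i (z - z_i(t)) }:
# sample monopole points z_1(t) = t, z_2(t) = -t, z_3(t) = t^2 (so k + 1 = 3),
# all vanishing at t = 0.  These choices are illustrative only.
import sympy as sp

t, z = sp.symbols('t z')
z_pts = [t, -t, t**2]

P = sp.expand(sp.Mul(*[z - zi for zi in z_pts]))
print('X_t :  u*v =', P)               # z**3 - t**2*z**2 - t**2*z + t**4 (up to term order)
print('X_0 :  u*v =', P.subs(t, 0))    # z**3, the A_2-singularity

# for t != 0 the differences z_i(t) - z_j(t) are nonzero, so the fibre X_t is smooth
print([sp.simplify(a - b) for a, b in [(t, -t), (t, t**2), (-t, t**2)]])
```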
Same as in Section [2](#sec:1dim){reference-type="ref" reference="sec:1dim"} we can construct a tree $\mathcal{T} = \mathcal{T}(S)$ from the set of holomorphic functions $S = \{z_1(t), \ldots, z_{k+1}(t)\}$ by grouping them according to their relative order of vanishing at the origin. **Theorem 3**. *The set of *non-cone* pointed Gromov-Hausdorff limits $$\label{eq:ptGHGH} h_{\alpha} = \lim_{t \to 0} |t|^{-2\alpha} \cdot (X_t, g_t, \sigma(t))$$ for $\alpha>0$ and $\sigma$ a holomorphic section of $\Pi$ is in one-to-one correspondence with the set of interior (i.e. non-leaf) vertices of $\mathcal{T}$. Each interior vertex $\mathbf{v} \in \mathcal{T}$ corresponds to an orbifold ALE space $B_{\mathbf{v}}$ asymptotic to $\mathbb{C}^2/\Gamma_{\ell+1}$ where $\ell+1$ is the number $|\mathbf{v}|$ of functions $z_i(t)$ in the equivalence class represented by $\mathbf{v}$. If $\mathbf{w}$ is a child of $\mathbf{v}$ then it corresponds to an orbifold point of $B_{\mathbf{v}}$ of type $A_{|\mathbf{w}|-1}$.* *Proof.* Let $\lambda>0$ and let $(M, g)$ and $(\Tilde{M}, \Tilde{g})$ be Gibbons-Hawking ALE spaces determined by the monopole points $x_1, \ldots, x_{k+1} \in \mathbb{R}^3$ and $\Tilde{x}_1, \ldots, \Tilde{x}_{k+1} \in \mathbb{R}^3$ where $\Tilde{x}_i = \lambda x_i$ for all $i$. Then the metric $\Tilde{g}$ is isometric to $\lambda g$ by isometries that act transitively on the circle fibres over $0 \in \mathbb{R}^3$. Indeed, if $m_{\lambda}$ denotes the scalar multiplication by $\lambda$ in $\mathbb{R}^3$ then the respective harmonic potentials of the metrics $g, \Tilde{g}$ satisfy $$\Tilde{f} \circ m_{\lambda} = \lambda^{-1} f .$$ We can lift $m_{\lambda}$ to a circle bundle map $\Phi: M \to \Tilde{M}$ that preserves the respective connections, i.e. $\Phi^* \Tilde{\eta} = \eta$. Pulling back by $\Phi$ we see that $$\label{eq:ghscale} \lambda g = \lambda \left( f g_{\mathbb{R}^3} + f^{-1} \eta^2 \right) = \Phi^* \left( \Tilde{f} g_{\mathbb{R}^3} + \Tilde{f}^{-1} \Tilde{\eta}^2 \right) = \Phi^* \Tilde{g} .$$ Fix a section $\sigma = (u(t), v(t), z(t))$. To prove the theorem it suffices to show that the pointed limits [\[eq:ptGHGH\]](#eq:ptGHGH){reference-type="eqref" reference="eq:ptGHGH"} for $\alpha>0$ behave in the same manner as described in Lemma [Lemma 2](#lem:rescaledsection){reference-type="ref" reference="lem:rescaledsection"}. Compose $\sigma$ with the moment maps $\pi_t: X_t \to \mathbb{R}^3$ to obtain $c(t) = (z(t), x^3(t))$ with $c(0)=0$. For each $t$ we change the moment map $\pi_t$ by subtracting the constant $c(t)$. This way we can assume that $c(t)$ is identically zero and the positions of the monopole points $x_i(t)$ change to $\Tilde{x}_i(t) = (\Tilde{z}_i(t), -x^3(t))$ where $\Tilde{z}_i(t) = z_i(t) - z(t)$. Let $d_1 < \ldots < d_{\ell}$ be the distinct orders of vanishing at $t=0$ of $\Tilde{z}_1(t), \ldots, \Tilde{z}_{k+1}(t)$. To prevent collision of points with order of vanishing $d_i$ we change $x = (z, x^3)$ to $\Tilde{x} = (\Tilde{z}, x^3)$ where $z = t^{d_i} \Tilde{z}$. Taking the limit as $t \to 0$, all points $\Tilde{z}_j(t)$ whose order of vanishing at $t=0$ is $> d_i$ collide at $0$, all points whose order of vanishing is $< d_i$ are sent off to infinity, and all the points with order of vanishing equal to $d_i$ converge to a limiting configuration given by the $d_i$-th derivatives at $0$. 
It follows from Equation [\[eq:ghscale\]](#eq:ghscale){reference-type="eqref" reference="eq:ghscale"} that the pointed Gromov-Hausdorff limit [\[eq:ptGHGH\]](#eq:ptGHGH){reference-type="eqref" reference="eq:ptGHGH"} for $\alpha = d_i / 2$ is the Gibbons-Hawking ALE orbifold pointed at an $A_{|\mathbf{v}|-1}$ orbifold point where $\mathbf{v} \in \mathcal{T}$ represents all points that equal the section $z(t)$ to order $>d_i$. The configuration of monopole points is $0$ with multiplicity $|\mathbf{v}|$ and then one monopole point for each child $\mathbf{w}$ of $\mathbf{v}$ with multiplicity $|\mathbf{w}|$. In the same way, using Equation [\[eq:ghscale\]](#eq:ghscale){reference-type="eqref" reference="eq:ghscale"}, we see that the pointed Gromov-Hausdorff limit [\[eq:ptGHGH\]](#eq:ptGHGH){reference-type="eqref" reference="eq:ptGHGH"} for $d_i/2 < \alpha <d_{i+1}/2$ is the cone $\mathbb{C}^2/\Gamma_{|\mathbf{v}|}$ pointed at its vertex. ◻ **Remark 7**. Given the section $\sigma(t) = (u(t), v(t), z(t))$ we can change coordinates to $u = \Tilde{u} + u(t)$, $v = \Tilde{v} + v(t)$, and $z = \Tilde{z} +z(t)$. In $\Tilde{u}, \Tilde{v}, \Tilde{z}$ coordinates the family is $$\label{eq:akfamily} \Tilde{u} \Tilde{v} = \left( \prod_i (\Tilde{z} - \Tilde{z}_i(t)) \right) - \ell_t(\Tilde{u}, \Tilde{v})$$ where $\ell_t = v(t)\Tilde{u} + u(t)\Tilde{v} + u(t)v(t)$, $\Tilde{z}_i(t) = z_i(t) - z(t)$, and the section $\sigma$ is identically $0$. We rescale coordinates $\Tilde{u}, \Tilde{v}, \Tilde{z}$ according to the weights of the cone $\mathbb{C}^2/ \Gamma_{k+1}$ given by Equation [\[eq:akweights\]](#eq:akweights){reference-type="eqref" reference="eq:akweights"} as follows $$\Tilde{u} \mapsto t^{(k+1)\lambda} \Tilde{u}, \hspace{2mm} \Tilde{v} \mapsto t^{(k+1)\lambda} \Tilde{v}, \hspace{2mm} \Tilde{z} \mapsto t^{2\lambda} \Tilde{z} .$$ where $\lambda>0$ is to be determined later. Let $d \geq 1$ be the lowest order of vanishing of the functions $\Tilde{z}_i(t)$ at $t=0$. Take $\lambda = d/2$ then in the rescaled coordinates the family [\[eq:akfamily\]](#eq:akfamily){reference-type="eqref" reference="eq:akfamily"} is $$\label{eq:akansatz} \Tilde{u} \Tilde{v} = \left( \prod_i (\Tilde{z} - t^{-d}\Tilde{z}_i) \right) - \Tilde{\ell}_t$$ where $\Tilde{\ell}_t \to 0$ as $t \to 0$. Taking the limit as $t \to 0$ in Equation [\[eq:akansatz\]](#eq:akansatz){reference-type="eqref" reference="eq:akansatz"} we recover the minimal bubble limit as in Theorem [Theorem 3](#thm:aklimits){reference-type="ref" reference="thm:aklimits"}. **Remark 8**. The norm square $|\text{Riem}_g|^2$ of the Riemann curvature tensor of a Gibbons-Hawking metric $g$ with harmonic potential $f$ is given by the formula $$|\text{Riem}_g|^2 = \frac{1}{4} \Delta \Delta f^{-1}$$ where $\Delta$ is the usual Laplacian of $\mathbb{R}^3$. It follows from this that in the context of Theorem [Theorem 3](#thm:aklimits){reference-type="ref" reference="thm:aklimits"} the curvature of the ALE metrics $g_t$ along the section $\sigma$ satisfies a bound of the form $$|\text{Riem}_{g_t}(\sigma(t))|^2 = O(|t|^{-C})$$ for uniform $C>0$ as $t\to 0$. ## Towards a global construction The above local discussion for the bubblings for $A_k$-singularities should similarly describe what happens when we consider a holomorphic family of compact varieties forming such singularities. 
In particular, by combining the gluing techniques developed in the work of Biquard and Rollin for cscK metrics [@biquardrollin] with Ozuch's multiscale analysis of non-collapsed limits of Einstein $4$-manifolds [@ozuch], we expect one should be able to prove the following result: **Conjecture 1**. *Let $\pi:\mathcal{X} \rightarrow \Delta$ be a smoothing of a KE orbifold $(X_0, \omega_0)$ with $A_k$-singularities and $Aut(X_0)$ discrete. Then the nearby fibres $X_t$ admit KE metrics whose full multiscale bubble tree can be recovered just from local algebraic data of the given family. More precisely, by considering the curve induced by the family in the versal deformation space of the $A_k$-singularity, and by varying sections of the family passing through the singularity, the tree of pointed Gromov-Hausdorff limits at a given singularity matches the local description given in Theorem [Theorem 3](#thm:aklimits){reference-type="ref" reference="thm:aklimits"}.* **Remark 9**. The condition on $Aut(X_0)$ is not essential: the important aspect is that the smoothings admit KE metrics. Moreover, the above conjecture easily extends (at least) to orbifolds admitting $\mathbb{Q}$-Gorenstein smoothings, as well as to the more general constant scalar curvature (cscK) case. ## Logarithmic situation {#Log2} We now describe a simple example in the $2$ dimensional logarithmic situation where a new phenomenon occurs, suggesting that simple algebraic rescalings are not enough to describe the bubbling. This was first pointed out in [@sun p.21] for the absolute case; here we consider a logarithmic analogue. We consider a family of smooth curves $C_t$ for $t \neq 0$ (see Equation [\[eq:family\]](#eq:family){reference-type="eqref" reference="eq:family"}) that develop a cuspidal singularity $\{w^2 = z^3\}$ at $t=0$. Moreover, we fix the cone angle parameter $\beta$ to be equal to $5/6$, which corresponds to the *strictly semi-stable* case. In order to explain where the value $\beta=5/6$ comes from, we begin by reviewing the basics of the theory of stability for klt pair singularities in this case. **Lemma 6**. *Consider the cuspidal curve $C = \{w^2 = z^3\} \subset \mathbb{C}^2$. Then the pair $(\mathbb{C}^2, b \cdot C)$ with $b=1-\beta \in (0,1)$ is klt if and only if the cone angle parameter $\beta$ belongs to the interval $(1/6,1)$.* *Proof.* The pair is klt if and only if the integral $$\label{eq:integral} I = \int_{B_1} |w^2-z^3|^{-2b} d\mu$$ is finite, where $B_1 \subset \mathbb{C}^2$ is the unit ball and $d\mu$ is the standard Lebesgue measure. In order to evaluate this integral we fix $\lambda \in (0,1)$, say $\lambda=1/2$, and consider the action $\lambda \cdot(z,w) = (\lambda^2 z, \lambda^3 w)$. The function $|w^2-z^3|^{-2b}$ is clearly integrable on the annulus $A_0 = B_1 \setminus \lambda \cdot B_1$ and we let $a_0 = \int_{A_0} |w^2-z^3|^{-2b} d\mu$. For $k \geq 1$ we let $A_k = \lambda^k \cdot A_0$ and $a_k = \int_{A_k} |w^2-z^3|^{-2b} d\mu$, so the integral [\[eq:integral\]](#eq:integral){reference-type="eqref" reference="eq:integral"} is finite if and only if the series $\sum a_k$ converges and $I = \sum_{k \geq 0} a_k$. On the other hand, by changing variables $\Tilde{z}= \lambda^2 z, \Tilde{w}= \lambda^3 w$ we see that $a_k = \lambda^{ck} a_0$ where $c = 10 - 12b$. Therefore $\sum a_k$ is a geometric series which converges if and only if $c >0$, equivalently $b < 5/6$. 
◻ Now consider the natural $\mathbb{C}^*$-action which preserves the curve $C = \{w^2 = z^3\}$ where $\lambda \in \mathbb{C}^*$ acts by $$\label{eq:action} \lambda \cdot (z, w) = (\lambda^2 z, \lambda^3 w) .$$ **Lemma 7**. *Let $\beta \in (1/6, 1)$ so that the pair $(\mathbb{C}^2, b \cdot C)$ with $b=1-\beta$ is klt. Then the pair is semistable with respect to the action [\[eq:action\]](#eq:action){reference-type="eqref" reference="eq:action"} if and only if $\beta \leq 5/6$.* *Proof.* The quotient of the pair $(\mathbb{C}^2, b \cdot C)$ by the action [\[eq:action\]](#eq:action){reference-type="eqref" reference="eq:action"} is a sphere with three marked points $(\mathbb{CP}^1, \Delta)$ with $\Delta = b_0 \cdot 0 + b_{\infty} \cdot \infty + b \cdot 1$ where $0$ and $\infty$ form the orbifold locus of the quotient map and $b_0 = 1/2, b_{\infty} = 2/3$. The pair $(\mathbb{C}^2, b \cdot C)$ with the action [\[eq:action\]](#eq:action){reference-type="eqref" reference="eq:action"} is (semi)stable if and only if $(\mathbb{CP}^1, \Delta)$ is (semi)stable. On the other hand, $(\mathbb{CP}^1, \Delta)$ is semi-stable if and only if the coefficients $b_0, b_{\infty}, b$ satisfy the (closed) triangle inequalities. The inequalities $b \leq 1/2 + 2/3$ and $1/2 \leq 2/3 + b$ are always satisfied; therefore $(\mathbb{CP}^1, \Delta)$ is semi-stable if and only if $2/3 \leq 1/2 + b$, i.e. $b \geq 1/6$. ◻ The local behaviour of a KE metric $g_{KE}$ with cone angle $2\pi\beta$ along a cuspidal curve $C = \{w^2 = z^3\}$ depends on the cone angle parameter $\beta$ as follows. - **Stable case.** If $\beta \in (1/6,5/6)$ then there is a flat Kähler cone metric $g_C$ on $\mathbb{C}^2$ with cone angle $2\pi\beta$ along $C$. We can write $g_C = d\rho^2 + \rho^2 \Bar{g}$ where $\rho>0$ is the intrinsic distance to the origin and $\Bar{g}$ is a metric on the $3$-sphere with constant sectional curvature $1$ and cone angle $2\pi\beta$ along the trefoil knot $C \cap S^3$. It is proved in [@deborbonspotti] that the metric $g_{KE}$ is asymptotic at a polynomial rate to $g_C$, i.e. in suitable local coordinates $|g_{KE} - g_C|_{g_C} = O(\rho^\mu)$ for some $\mu>0$ as $\rho \to 0$. - **Unstable case.** If $\beta \in (5/6, 1)$ then [@dbe] produce a Calabi-Yau metric $g_{CY}$ in a neighbourhood of $0 \in \mathbb{C}^2$ with cone angle $2\pi\beta$ along $C \setminus \{0\}$ whose tangent cone at the origin is equal to the product $\mathbb{C}\times \mathbb{C}_{\gamma}$ where $\gamma = 2\beta -1$. Following [@chiuszek] we expect that $g_{KE}$ has polynomial convergence to (a multiple of) $g_{CY}$ at the level of potentials. - **Strictly semistable case.** If $\beta = 5/6$ then the tangent cone at $0$ of $g_{KE}$ as predicted algebraically by the theory of normalized volumes (see [@deborbonspotti Section 7]) is the product $\mathbb{C}\times \mathbb{C}_{\gamma}$ where $\gamma = 2\beta -1$, same as in the unstable case. However, in this case $g_{KE}$ is only expected to converge to its tangent cone at a logarithmic rate. **Remark 10**. We can provide a geometric interpretation for the change of tangent cone as $\beta \to 5/6$. 
In the stable case $\beta \in (1/6, 5/6)$ there is a metric $g_{\mathbb{CP}^1}$ on the $2$-sphere with $3$ cone points of total angle $2\pi/3, \pi$, and $2\pi\beta$ which lifts through the Seifert fibration $S^3 \to \mathbb{CP}^1$ given by the quotient projection by the action [\[eq:action\]](#eq:action){reference-type="eqref" reference="eq:action"} to the constant curvature $1$ metric $\Bar{g}$ on $S^3$ that is the link of the tangent cone $g_C = d\rho^2 + \rho^2 \Bar{g}$. The metric $g_{\mathbb{CP}^1}$ is the double of a spherical triangle with angles $\pi/3, \pi/2$, and $\pi\beta$. As $\beta \to 5/6$ the triangle converges to a spherical bigon with angles $\pi/3$. Taking the double of the bigon and lifting through the Seifert fibration gives a constant curvature $1$ metric on the $3$-sphere with cone angle $2\pi\gamma$ (with $\gamma=2/3$) along the circle lying over the $1/2$ orbifold point which is the link of the tangent cone $\mathbb{C}\times \mathbb{C}_{\gamma}$ when $\beta=5/6$. The Euler vector field $e$ and the metric dilations $d_{\lambda}$ for $\lambda>0$ of the tangent cone are as follows. - **Stable case.** If $\beta \in (1/6, 5/6)$ then $$\label{eq:euler1} e = \frac{2}{\alpha} z \frac{\partial}{\partial z} + \frac{3}{\alpha} w \frac{\partial}{\partial w} \hspace{2mm} \mbox{ and } \hspace{2mm} d_{\lambda} (z,w) = (\lambda^{2/\alpha} z, \lambda^{3/\alpha}w) ,$$ where $\alpha = 3\beta - (1/2)$. The Euler vector field is tangent to $C=\{w^2=z^3\}$ and the metric dilations preserve this curve. - **Unstable case.** If $\beta \in (5/6, 1)$ then $$\label{eq:euler2} e = z \frac{\partial}{\partial z} + \frac{1}{\gamma} w \frac{\partial}{\partial w} \hspace{2mm} \mbox{ and } \hspace{2mm} d_{\lambda} (z,w) = (\lambda z, \lambda^{1/\gamma}w) .$$ This time the Euler vector field of the cone is *not* tangent to the cusp $C=\{w^2=z^3\}$ and the metric dilations of the cone degenerate the curve $C$ into the line $\{w=0\}$. More precisely, if we let $z = \lambda \Tilde{z}$, $w = \lambda^{1/\gamma} \Tilde{w}$ then the curve $C$ is given by $\Tilde{w}^2 = \lambda^{3-(2/\gamma)} \Tilde{z}^3$. The condition $\beta>5/6$ is equivalent to $3-(2/\gamma)>0$, hence the preimage of $C$ under the action $d_{\lambda}$ given by Equation [\[eq:euler2\]](#eq:euler2){reference-type="eqref" reference="eq:euler2"} converges to the double line $\{\Tilde{w}^2=0\}$ as $\lambda \to 0$. - **Strictly semistable case.** If $\beta=5/6$ then the two vector fields and actions given by Equations [\[eq:euler1\]](#eq:euler1){reference-type="eqref" reference="eq:euler1"} and [\[eq:euler2\]](#eq:euler2){reference-type="eqref" reference="eq:euler2"} agree. The cuspidal curve $C$ is invariant under the action, however the tangent cone is $\mathbb{C}\times \mathbb{C}_{\gamma}$ and the degeneration of the pair $(\mathbb{C}^2, C)$ to its tangent cone is through a different (non-canonical) action, such as $\lambda \cdot (z,w) = (\lambda z, \lambda w)$ for example. From now on we focus on the strictly semistable case and fix $\beta=5/6$. 
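The elementary algebraic statements in the stable and unstable cases above can be verified symbolically. The following sympy sketch (with sample values of $\beta$ in each range; it only checks the action on the defining equation of $C$, nothing metric) confirms that the stable-case dilations preserve the cusp, while the unstable-case dilations degenerate it to the double line $\{\Tilde{w}^2=0\}$.

```python
# How the dilations of the tangent cone act on the cusp C = {w^2 = z^3}:
# sample beta in the stable range (1/6, 5/6) and in the unstable range (5/6, 1).
import sympy as sp

z, w = sp.symbols('z w')
lam = sp.symbols('lam', positive=True)
F = w**2 - z**3                                    # defining equation of the cusp C

# stable case, beta = 7/10: d_lam(z, w) = (lam^(2/alpha) z, lam^(3/alpha) w)
alpha = 3*sp.Rational(7, 10) - sp.Rational(1, 2)
Fs = F.subs([(z, lam**(2/alpha)*z), (w, lam**(3/alpha)*w)])
print(sp.simplify(Fs / lam**(6/alpha)))            # w**2 - z**3 : the cusp is preserved

# unstable case, beta = 9/10: gamma = 2*beta - 1, d_lam(z, w) = (lam z, lam^(1/gamma) w)
gamma = 2*sp.Rational(9, 10) - 1
Fu = F.subs([(z, lam*z), (w, lam**(1/gamma)*w)])
print(sp.expand(Fu / lam**(2/gamma)))              # w**2 - lam**(3 - 2/gamma)*z**3, exponent > 0
print(sp.limit(Fu / lam**(2/gamma), lam, 0))       # w**2 : the cusp degenerates to the double line
```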
Consider a sequence of KE metrics with cone angle $2\pi\beta$ along a family of smooth curves $C_t$ that develop a cuspidal singularity at $t=0$, say given by $$\label{eq:family} C_t = \{w^2 = z^3 + tz\} .$$ We rescale the family of curves [\[eq:family\]](#eq:family){reference-type="eqref" reference="eq:family"} using the weights $(1,3/2)$ of $\mathbb{C}\times \mathbb{C}_{\gamma}$ (where $\gamma = 2\beta-1 = 2/3$) which is the tangent cone at the origin of $g_0$, as in the case of $A_k$-singularities (see Remark [Remark 7](#rmk:ansatz){reference-type="ref" reference="rmk:ansatz"}). We let $z = t^c \Tilde{z}$, $w = t^{3c/2} \Tilde{w}$ and take $c=1/2$ to obtain in the limit as $t \to 0$ the curve $$\label{eq:ssbubble} \Tilde{w}^2 = \Tilde{z}^3 + \Tilde{z}.$$ **Expectation 1**. There is no CY metric on $\mathbb{C}^2$ with cone angle $2\pi\beta$ along the curve [\[eq:ssbubble\]](#eq:ssbubble){reference-type="eqref" reference="eq:ssbubble"} and whose tangent cone at infinity is $\mathbb{C}\times \mathbb{C}_{\gamma}$, where $\beta=5/6$ and $\gamma=2\beta-1$. Indeed, if such a metric exists then rescaling the coordinates by the weights $(1,3/2)$ of the tangent cone at infinity $\Tilde{z}\mapsto \lambda \Tilde{z}$, $\Tilde{w}\mapsto \lambda^{3/2} \Tilde{w}$ degenerates the curve [\[eq:ssbubble\]](#eq:ssbubble){reference-type="eqref" reference="eq:ssbubble"} to $$\Tilde{w}^2 = \Tilde{z}^3 + \lambda^{-2} \Tilde{z}.$$ which converges to $\Tilde{w}^2 = \Tilde{z}^3$ as $\lambda \to \infty$. By the results of [@sunzhang] we would expect that there is a CY metric on $\mathbb{C}^2$ with cone angle $2\pi\beta$ along cuspidal curve $\Tilde{w}^2 = \Tilde{z}^3$ whose tangent cone at infinity is $\mathbb{C}\times \mathbb{C}_{\gamma}$. However, the tangent cone at the origin should also be $\mathbb{C}\times \mathbb{C}_{\gamma}$ and by Bishop-Gromov the metric must be a cone but this is impossible as jumping occurs. On the other hand, if we rescale the family [\[eq:family\]](#eq:family){reference-type="eqref" reference="eq:family"} by $z = t \Tilde{z}$, $w = t \Tilde{w}$ we obtain $\Tilde{w}^2 = t \Tilde{z}^3 + \Tilde{z}$ which converges to $$\label{eq:parabola} \Tilde{w}^2 = \Tilde{z}$$ as $t \to 0$. **Expectation 2**. The bubble of the family [\[eq:family\]](#eq:family){reference-type="eqref" reference="eq:family"} is a CY metric on $\mathbb{C}^2$ with cone angle $2\pi\beta$ along the parabola [\[eq:parabola\]](#eq:parabola){reference-type="eqref" reference="eq:parabola"} and tangent cone $\mathbb{C}\times \mathbb{C}_{\gamma}$ at infinity. If we rescale the parabola by the weights of the tangent cone $\Tilde{z}\mapsto \lambda \Tilde{z}$, $\Tilde{w}\mapsto \lambda^{3/2} \Tilde{w}$ we obtain the family $\Tilde{w}^2 = \lambda^{-2} \Tilde{z}$ which converges to the double line $\Tilde{w}^2 =0$ as $\lambda \to \infty$, as wanted. # Towards an algebro-geometric picture of bubbling and possible multiscale K-moduli compactifications $\overline{\mathcal{M}}^\lambda$ {#sec:highdim} In this last section we will first propose a tentative more invariant algebraic picture for detecting the bubbles, slightly reinforcing Sun's conjecture in [@sun]. Finally, we will formulate a more general framework to study bubbling, by speculating about the existence of what we call *multiscale K-moduli compactifications* $\mathcal{T M}^K$, that is birational modifications of the K-moduli spaces which parameterize algebraic spaces supporting all the possible multiscale bubble limits of non-collapsing KE metrics. 
## Algorithm for the algebro-geometric detection of the metric bubble tree of a $1$-dimensional family Suppose we have a flat holomorphic family $$\pi:\mathcal{X} \rightarrow \Delta$$ over the complex disc of non-collapsing KE spaces (note that in what follows smoothness is not really required), and take a section $\sigma: \Delta \rightarrow \mathcal{X}$ for the family such that $\sigma(0) \in \mbox{Sing}(X_0)$. Working locally, and ignoring convergence issues, we can restrict to consider our family as defined on the local ring $\mathbb{C}[[t]]$ of formal power series, and the section to be a ring morphism $s: R \rightarrow \mathbb{C}[[t]]$ from a finitely generated $\mathbb{C}[[t]]$-algebra $R$. Note that $\sigma(0)\in \mathcal{X}$ is then the image of the unique maximal ideal of $\mathbb{C}[[t]]$ via the map of schemes induced by $s$, so we can think of $\sigma(0)$ as a point $x \in \emph{Spec}(R)$. Now consider the non-Archimedean link $\mbox{Val}_{R_x}$, that is, maps $v: R_x \rightarrow \mathbb{R} \cup \{+\infty\}$ satisfying the usual axioms for valuations on the local ring $R_x$. The claim is that there is the following inductive procedure (terminating in finitely many steps), mirroring Sun's differential geometric termination of bubbles. Following [@sun] we call *minimal bubble* a non-cone rescaled limit whose tangent cone at infinity is equal to the tangent cone at the singularity. There is a valuation $v_1 \in \mbox{Val}_{R_x}$ such that the following properties hold: 1. *Weighted family*. The graded ring $R^1:= \bigoplus_j \frac{I_{k_j}^1}{I_{k_{j+1}}^1}$ is a finitely generated $\mathbb{C}[t]$-algebra, with ideals $I_{k_j}=\{f\in R_x, v_1(f)\geq k_j\}$. Then $$\mathcal{W}^1=\mbox{Spec}(R^1) \rightarrow \mathbb{C}$$ is a $\mathbb{C}^*$-equivariant *weighted family* such that the general fiber $W^1_t$ is a (minimal) *weighted bubble*, a negative weight deformation (see [@CH]) of the special fiber $W^1_0$, the *weighted tangent cone*. 2. *Bubbling family*. There exists a non-canonical $\mathbb{C}^\ast$-equivariant degeneration of this family to a new equivariant family $$\mathcal{B}^1 \rightarrow \mathbb{C}$$ (called the *bubbling family*) where the general fiber $B^1_t$ is the (minimal) *algebraic bubble*, a negative weight deformation of the central fiber $B^1_0$, the *algebraic tangent cone*. Note that the space of negative weight deformations should always be finite dimensional, even for a cone with non-isolated singularities. 3. *Iteration*: By specialization, the original family with its section gives rise to two new families (isomorphic to the original family away from zero) $\Tilde{\mathcal{X}}_t^1$ and $\Tilde{\Tilde{\mathcal{X}}}_t^1$, having the (weighted) bubbles in the fiber over the origin. We are now back to the beginning: there should exist a valuation such that we can repeat the above steps $(1)(2)$, giving a new local weighted family $\mathcal{W}^2$ and a new bubbling family $\mathcal{B}^2$, and new iteration families $\Tilde{\mathcal{X}}_t^2$ and $\Tilde{\Tilde{\mathcal{X}}}_t^2$. The process repeats for a finite number of steps until we reach the situation in which the algebraic tangent cone coincides with $\mathbb{C}^n$. We refer to the corresponding algebraic bubble in the step before that as the *deepest bubble*. The weighted and bubbling families of the above items $(1)(2)$ are represented in Figure [\[fig:2stepdeg\]](#fig:2stepdeg){reference-type="ref" reference="fig:2stepdeg"}. **Remark 11**. 
Note that, even if we start with a family with compact fibers that is only defined locally in $t$, the rescaled local families $\mathcal{W, B}$ of items $(1)(2)$ are going to be affine and defined on all of $\mathbb{C}$. Regarding the crucial step $(3)$, the process should be described by a weighted blow-up of the original family, giving two families still over $\mathbb{C}[[t]]$, say $\Tilde{\mathcal{X}}_t$ and $\Tilde{\Tilde{\mathcal{X}}}_t$, which have the bubbles $W_1$ and the actual metric bubble $B_1$ (as components) in the fiber over the origin. In particular, $W_1$ and $B_1$ will give (divisorial) valuations corresponding to special points in the Berkovich analytification (over the given section) of the original family $\pi:\mathcal{X} \rightarrow \Delta$ (of course such valuations are expected to be linked to the one giving the local weighted and bubbling families of steps $(1),(2)$). Motivated by the analysis in the previous sections, we then propose the following. **Expectation 3** (Relation to differential geometry). The algebraic algorithm outlined above computes all possible metric bubbles (pointed Gromov-Hausdorff limits) of a non-collapsing family of KE metrics along a section $\sigma$. More precisely, - All possible metric bubbles do not depend on the particular sequence of $t_k\rightarrow 0$, but only on the holomorphic section $\sigma$. - The algebraic bubbles $B^i_1$ can be identified with all the (non-cone) rescaled metric bubbles, and hence they admit (possibly singular) Calabi-Yau metrics asymptotic to the Calabi-Yau cones $B^i_0$. - The algebraic finite step iterations of the algorithm correspond to the finite number of possible metric bubbles, as described in [@sun]. **Remark 12**. In the above context, for the given flat family $\mathcal{X} \to \Delta$ of non-collapsing KE metrics and an algebraic section $\sigma: \Delta \to \mathcal{X}$, we expect that the curvature of the manifolds $X_t$ at the points $\sigma(t)$ for $t \neq 0$ blows up at a polynomial rate of the form $|\mbox{Riem}_{g_t}(\sigma(t))| \leq C |t|^{-C}$ for some $C>0$, corresponding to the finite step termination of the algorithm. See Remark [Remark 8](#rmk:blowupcurvature){reference-type="ref" reference="rmk:blowupcurvature"} for the case of $2$-dimensional $A_k$ singularities. Of course, the crucial aspect of the conjectural picture is to determine a priori via algebraic geometry which valuations make such an identification between algebraic and differential geometric rescalings hold. As our experiments show (see also Section [4.2](#sec:ak){reference-type="ref" reference="sec:ak"}), it seems that the valuations are related to the ones giving the normalized volume for the klt singularities in the central fiber $X_0$ (that is, differential geometrically, the ones computing the metric tangent cone) via Li's normalized volume. **Remark 13**. Evocatively, we can think of the above algorithm as providing a *geometric algebraic expansion* of the space $\mathcal{X}$ around the section $\sigma$: $$\mathcal{X}_{\sigma} \cong \sum_i B^i_1 t^{v_i},$$ where $v_i$ are the valuations computing the tangent cones. More generally, one can ask what happens by letting the sections vary. Our previous examples in Sections [2](#sec:1dim){reference-type="ref" reference="sec:1dim"} and [3](#sec:2dim){reference-type="ref" reference="sec:2dim"} suggest the following picture: **Expectation 4**. 
There are equivalence relations on the set of sections through a point $p\in X_0$ so that two equivalent sections induce the same bubbles up to a certain scale. Thus, by varying the sections we expect to be able to algebro-geometrically detect the full metric bubble tree for the degenerating family $\pi:\mathcal{X} \rightarrow \Delta$ at a given singularity. Schematically, this looks like a $1$-dimensional tree $\mathcal{T}$, with root given by the space $X_0$ itself and leaves given by all the possible deepest bubbles (removing all flat limiting spaces, or, more generally, if we consider singular families with section $\sigma$ taking values at singular points, all tangent cones at the generic point of $\sigma$). The segments correspond to the tangent cones. A choice of a section will determine a non-branching path from the root to a leaf, as indicated in Figure [\[fig:polytreepath\]](#fig:polytreepath){reference-type="ref" reference="fig:polytreepath"}. We now discuss some conceptual examples (see also the previous Section $3.5$ for a logarithmic situation). Steps $(1)(2)$ above can be simply obtained by rescaling a given family centered at the singularity of the central fiber, where one uses the weights of the tangent cone in the vertical directions, and a suitable rescaling on the horizontal $t$ variable, in analogy to the usual computations for the metric tangent cone. The next examples (we will focus on step (3) and its iterations) are, for simplicity, given only locally, and they are still conjectural (but the rescalings we consider are in analogy to the ones that work in low dimension as described in previous sections). ## $A_k$-singularities {#sec:ak} Let $n \geq 3$ and consider the $n$-dimensional $A_k$-singularity given by $$A_k = \{x_1^2 + \ldots + x_n^2 = x_0^{k+1}\} \subset \mathbb{C}^{n+1} .$$ Suppose that the singularity is strictly unstable, that is, $$\label{eq:akunst} k+1 > 2 \frac{n-1}{n-2} .$$ If the unstable condition [\[eq:akunst\]](#eq:akunst){reference-type="eqref" reference="eq:akunst"} holds then the weighted tangent cone $W$ as well as the tangent cone at the origin $C(Y)$ are equal to the product of $\mathbb{C}$ with the $(n-1)$-dimensional $A_1$-singularity $$W = C(Y) = \mathbb{C}\times A_1^{(n-1)} .$$ The weights of the tangent cone are $$\textbf{w} = \left( 1, \frac{n-1}{n-2}, \ldots, \frac{n-1}{n-2} \right) .$$ The space of negative weight deformations of $C(Y)$ is made of hypersurfaces $B \subset \mathbb{C}^{n+1}$ of the form $$\label{eq:negweight} B = \{ x_1^2 + \ldots + x_n^2 = P(x_0) \hspace{2mm} | \hspace{2mm} \deg(P) \leq 3 \text{ if } n = 3, \hspace{2mm} \deg(P) \leq 2 \text{ if } n \geq 4\} .$$ In particular, any bubble arising from the formation of an unstable $A_k$-singularity should be of the form given by Equation [\[eq:negweight\]](#eq:negweight){reference-type="eqref" reference="eq:negweight"}. **Example 5**. Take $(n, k) = (3, 4)$ and consider the family $$\label{eq:akfamily} X_t = \{ x_1^2 + x_2^2 + x_3^2 = x_0 (x_0 + t) (x_0 + t + t^2) (x_0 + t^3) (x_0 + t^3 + t^4) \} .$$ We describe rescaled limits centered at the zero section. The weights are $\textbf{w} = (1,2,2,2)$. Let $c>0$ and rescale $x_0 = t^c \tilde{x}_0$ and $x_i = t^{2c} \Tilde{x}_i$. 
Taking $c=2$ we obtain the family $$\label{eq:rescfam} \Tilde{X}_t = \{ \Tilde{x}_1^2 + \Tilde{x}_2^2 + \Tilde{x}_3^2 = \Tilde{x}_0 (t \Tilde{x}_0 + 1) (t \Tilde{x}_0 + 1 + t) (\Tilde{x}_0 + t) (\Tilde{x}_0 + t + t^2) \} .$$ The minimal bubble $B_{\alpha_1}$ is obtained by setting $t=0$ in Equation [\[eq:rescfam\]](#eq:rescfam){reference-type="eqref" reference="eq:rescfam"} $$B_{\alpha_1} = \{ \Tilde{x}_1^2 + \Tilde{x}_2^2 + \Tilde{x}_3^2 = \Tilde{x}_0^3 \} .$$ The minimal bubble $B_{\alpha_1}$ is the total space of the $3$-dimensional $A_2$-singularity. The $A_2$-singularity is stable (Equation [\[eq:akunst\]](#eq:akunst){reference-type="eqref" reference="eq:akunst"} is strictly violated) and admits a Calabi-Yau cone metric found by Li-Sun [@lisun]. We expect that $B_{\alpha_1}$ should admit a Calabi-Yau metric asymptotic to the Li-Sun metric at the origin and whose tangent cone at infinity is the product $\mathbb{C}\times (\mathbb{C}^2/\mathbb{Z}_2)$. Next we rescale the family [\[eq:rescfam\]](#eq:rescfam){reference-type="eqref" reference="eq:rescfam"} using the weights of the $A_2$-singularity. Let $c>0$ and write $\Tilde{x}_0 = t^{2c} x_0'$, $\Tilde{x}_i = t^{3c} x_i'$. Setting $c=1/2$ and taking $t \to 0$ we obtain that the second bubble is $$B_{\alpha_2} = \{ (x'_1)^2 + (x'_2)^2 + (x'_3)^2 = x'_0 (x'_0+1)^2 \} .$$ The space $B_{\alpha_2}$ has an isolated $A_1$ singularity at $x_0' = -1$, $x_i'=0$ for $i=1, 2, 3$ and it is otherwise smooth. We expect that $B_{\alpha_2}$ admits a Calabi-Yau metric asymptotic to the Stenzel cone metric at its singular point and whose tangent cone at infinity is the Li-Sun cone metric on the $A_2$-singularity. Finally, since we are rescaling at the zero section and the origin is a smooth point of $B_{\alpha_2}$, the next rescaled limit is flat $\mathbb{C}^3$ and the process terminates. See Figure [\[fig:aklimits\]](#fig:aklimits){reference-type="ref" reference="fig:aklimits"}. The next two examples illustrate the dependence of the bubbles on the section. **Example 6** (Non-isolated singularities). Consider the family $$X_t = \{x_1^2 + x_2^2 + x_3^2 = t x_0\} .$$ Let $\sigma (t) = (x_0(t), x_1(t), x_2(t))$ be the section with $\sigma(0)=0$. In the generic case that the first derivative $x_0'(0) \neq 0$ the bubble is the product of Eguchi-Hanson with $\mathbb{C}$. In the special case that $x_0'(0)=0$ the bubble is a metric on $\mathbb{C}^3$ with tangent cone at infinity $(\mathbb{C}^2/\mathbb{Z}_2) \times \mathbb{C}$ as found in [@yli; @conlonrochon; @szek]. **Example 7**. Suppose that the unstable condition [\[eq:akunst\]](#eq:akunst){reference-type="eqref" reference="eq:akunst"} holds and consider the family $$X_t = \{x_1^2 + \ldots + x_n^2 = \prod_{j=1}^{k+1} \left( x_0 - a_j t \right) \}$$ where $a_j \in \mathbb{C}$ and $a_i \neq a_j$ for $i \neq j$. If the section $s(t) = (x_0(t), \ldots, x_n(t))$ with $s(0) = 0$ is generic, in the sense that $x_0(t) = at + \text{(h.o.t)}$ with $a \neq a_j$ for all $j$, then we obtain the product of the Stenzel metric on $\{x_1^2+\ldots+x_n^2=1\}$ with $\mathbb{C}$ as a bubble. However, if the section is of the form $x_0(t) = a_j t + \text{(h.o.t)}$ for some $1 \leq j \leq k+1$ then we expect the bubble to be one of the Calabi-Yau metrics on $\mathbb{C}^n$ with splitting tangent cone at infinity $A_1^{n-1} \times \mathbb{C}$ found in [@szek]. **Remark 14**. All these local examples can be easily realised as local analytic singularities' formations of families of *compact* KE varieties. 
In particular, establishing an a-priori algebro-geometric detection of bubbling will give a construction of asymptotically conical CY metrics (also singular and with splitting tangent cones at infinity, as described in the examples above) *directly as bubble limits of compact KE manifolds*, hence avoiding a direct and subtle weighted analysis of the Monge-Ampère equation on (possibly singular) affine spaces but instead relying on some sort of *multiscale moduli continuity method*. More concretely, let's illustrate this philosophy with a toy example, showing (and this is a rigorous deduction, thanks to the already existing theory) the emergence of Stenzel's AC Calabi-Yau metric on the affine variety $x_1^2 + \ldots + x_n^2 = 1$ *directly* as a bubble limit. **Proposition 1**. *Let $\pi:\mathcal{X} \rightarrow \Delta$ be a non-collapsing family of compact KE manifolds, and assume that $p\in X_0$ is an $A_1$-singularity. Then the Stenzel CY metric on $x_1^2 + \ldots + x_n^2 = 1$ appears as a minimal bubble (no gluing required!).* *Proof.* By the established deep general theory of convergence, we know that such metrics GH converge to a singular KE metric on $X_0$ whose metric tangent cone is Calabi's ansatz cone metric on the canonical bundle of the KE Fano on the smooth quadric hypersurface. Moreover, by [@sun] we know that there should be a rescaled minimal bubble that has such cone as its tangent cone at infinity, and it is a negative weight deformation of it [@CH]. However, versal deformations of an $A_1$ singularity are trivial: there is only *a* smoothing. Hence, there must be an AC CY metric on the affine space $x_1^2 + \ldots + x_n^2 = 1$ with that tangent cone at infinity (which has to be precisely Stenzel's AC metric by uniqueness). ◻ **Remark 15**. For explicit examples, just take a family of smooth projective hypersurfaces in $\mathbb{C}\mathbb{P}^N$ of sufficiently high degree degenerating to a klt variety $X_0$ with an $A_1$ singularity (the generic singularity at the boundary of their moduli space). ## Towards multiscale K-moduli compactifications? {#mK} The previous study of the Deligne-Mumford/Mostow compactifications for the log $1$-dimensional case in Section [2](#sec:1dim){reference-type="ref" reference="sec:1dim"}, combined with the study of deformations of $A_k$-singularities in Section [4.2](#sec:ak){reference-type="ref" reference="sec:ak"}, points towards the following highly speculative picture, which we will now roughly describe in its more optimistic formulation (thus our "expectations\" should be tuned appropriately). In short, we can ask the following: **Question 1**. Are there compactifications of moduli spaces which parametrize bubbles as well? Let's elaborate a bit more. First one can ask what happens if we let the family $\pi:\mathcal{X} \rightarrow \Delta$ vary while fixing the central fiber $X_0$. A family $\pi:\mathcal{X} \rightarrow \Delta$ can be considered as a curve in the non-collapsing part of the K-moduli compactification $\overline{\mathcal{M}}$ passing through $[X_0]\in \overline{\mathcal{M}}$. From this point of view, a section is just a lift of that curve to the universal family $\mathcal{U} \rightarrow \mathcal{M}$ (assume we have one for simplicity). We should have a recipe to associate to a one-dimensional family the sets of minimal bubbles by varying $p\in \mbox{Sing}(X_0)$. 
Note that two different families may give the same minimal bubbles at $p$ (e.g., this could happen if the reduction mod $t^2$ of the families gives the same tangent vector in $T_{[X_0]}\mathcal{M}$, but the general picture may be more complicated than that). Each minimal bubble $B^1$ comes with an equivariant *negative weight* $\mathbb{C}^\ast$ degeneration to its associated tangent cone. The space of such $\mathbb{Q}$-Gorenstein negative weight deformations $Def_\mathbb{Q}^-(B_0^0)$ should be some finite dimensional space. Hence, at least in good situations (as we have in the example of $A_k$-singularities in dimension two), we can expect a subset $Z^1\subseteq \mathbb{P} Def_\mathbb{Q}^-(B_0^0)$ to be a *moduli space of the first minimal bubble at $p\in X_0$*. In concrete situations we expect $Z$ to depend on all of $X_0$, and not only on obstructions to the local deformation of the singularity. This is related to the fact that there are local to global obstructions to the deformation of singularities (remarkably, this is not the case for del Pezzo surfaces, as shown by Hacking and Prokhorov [@hackingprokhorov]). This is true even when jumping phenomena do not occur. Thus, motivated by the bubbling interpretation of the Deligne-Mumford compactification in [2](#sec:1dim){reference-type="ref" reference="sec:1dim"}, it is tempting to ask if there is some algebro-geometric moduli space (a birational modification of the K-moduli) $\overline{\mathcal{M}}^{\lambda_1}$ parametrizing in its boundary the first minimal bubbles at the various singularities of $X_0 \in \partial \overline{\mathcal{M}}^{K}$ as discussed above. Moreover, by varying scales and (equivalence classes of) sections/lifts one could hope to find a set of birational modifications of K-moduli spaces parametrizing bubbles at *different scales* $\mathcal{T M}^K=\{\overline{\mathcal{M}}^{\lambda_i}\}$ (also, which scales are indeed realizable algebro-geometrically? All?). It may be interesting to investigate if something like that is happening on some simple examples (e.g., K-moduli of del Pezzo surfaces or K3 surfaces). [^1]: In terms of metrics Equation [\[eq:noncollapse\]](#eq:noncollapse){reference-type="eqref" reference="eq:noncollapse"} rules out collapsing sequences.
arxiv_math
{ "id": "2309.03705", "title": "Some models for bubbling of (log) K\\\"ahler-Einstein metrics", "authors": "Martin de Borbon and Cristiano Spotti", "categories": "math.DG math.AG", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | [\[sec:abstract\]]{#sec:abstract label="sec:abstract"} This paper focuses on the stabilization and regulation of linear systems affected by quantization in state-transition data and actuated input. The observed data are composed of tuples of current state, input, and the next state's interval ranges based on sensor quantization. Using an established characterization of input-logarithmically-quantized stabilization based on robustness to sector-bounded uncertainty, we formulate a nonconservative infinite-dimensional linear program that enforces superstabilization of all possible consistent systems under assumed priors. We solve this problem by posing a pair of exponentially-scaling linear programs, and demonstrate the success of our method on example quantized systems. author: - "Jared Miller$^{1, 2}$, Jian Zheng$^2$, Mario Sznaier$^2$, Chris Hixenbaugh$^3$ [^1] [^2] [^3] [^4]" bibliography: - references.bib title: "**Data-Driven Superstabilization of Linear Systems under Quantization**" --- # Introduction {#sec:introduction} This paper performs [DDC]{acronym-label="DDC" acronym-form="singular+short"} of discrete-time linear systems under data quantization in the state-transition records and logarithmic quantization in the input. Input quantization can be encountered in data-rate constraints for network models when sending instructions to digital actuators, and its presence adds a nonlinearity to system dynamics [@brockett2000quantized; @elia2001stabilization; @nesic2009unified]. The logarithmic input quantizer offers the coarsest possible quantization density [@elia2001stabilization] among all possible quantization schemes. These logarithmic quantizers admit a nonconservative characterization as a Luré-type sector-bounded input [@fu2005sector; @fu2009finite; @zhou2013adaptive]. Data quantization could occur in the storage of sensor data into bits on a computer, and admits the mixed-precision setting of sensor fusion with different per-sensor precisions. [DDC]{acronym-label="DDC" acronym-form="singular+short"} is a design method to synthesize control laws directly from acquired system observations and model/noise priors, without first performing system-identification/robust-synthesis pipeline [@HOU20133; @formentin2014comparison; @hou2017datasurvey]. This paper utilizes a Set-Membership approach to [DDC]{acronym-label="DDC" acronym-form="singular+short"}: furnishing a controller along with a certificate that the set of all quantized data-consistent plants are contained within the set of all commonly-stabilized plants. Certificate methods for set-membership [DDC]{acronym-label="DDC" acronym-form="singular+short"} approaches include Farkas certificates for polytope-in-polytope containment [@cheng2015robust; @miller2023ddcpos], a Matrix S-Lemma for [QMIs]{acronym-label="QMI" acronym-form="plural+short"} to prove quadratic and robust stabilization [@waarde2020noisy; @van2023quadratic; @miller2022lpvqmi], and Sum-of-Squares certificates of polynomial nonnegativity [@dai2020semi; @martin2021data; @miller2022eiv_short; @zheng2023robust]. 
Other methods for [DDC]{acronym-label="DDC" acronym-form="singular+short"} include Iterative Feedback Tuning [@hjalmarsson1998iterative], Virtual Reference Feedback Tuning [@campi2002virtual; @formentin2012non], Behavioral characterizations (Willems' Fundamental Lemma) with applications to Model-Predictive Control [@willems2005note; @depersis2020formulas; @coulson2019data; @berberich2021robustmpc], moment proofs for switching control [@dai2018moments], learning with Lipschitz bounds [@robey2020learning; @lindemann2021learning], and kernel regression [@thorpe2023physicsinformed]. The most relevant prior work to the quantized [DDC]{acronym-label="DDC" acronym-form="singular+short"} approach in this paper is the research in [@zhao2022data]. The work in [@zhao2022data] utilizes the approach of [@fu2005sector] by treating logarithmically quantized control as an $H_\infty$ small-gain task. They then formulate the consistency set of data-plants as [QMIs]{acronym-label="QMI" acronym-form="plural+short"}, and use the Matrix S-Lemma [@waarde2020noisy] to certify common stabilization. In contrast, our work includes quantized data as well as quantized control by developing a polytopic description of the plant consistency set. We then restrict to superstabilization [@polyak2001optimal; @polyak2002superstable] to formulate [DDC]{acronym-label="DDC" acronym-form="singular+short"} [LPs]{acronym-label="LP" acronym-form="plural+short"} over the polytopic consistency set. In the case of quantization of data, the [QMI]{acronym-label="QMI" acronym-form="singular+short"} approach in [@zhao2022data] would then over-approximate the polytopic consistency constraint with a single ellipsoidal region. The contributions of this work are: - A formulation for superstabilizing [DDC]{acronym-label="DDC" acronym-form="singular+short"} under input and data quantization - A sign-based [LP]{acronym-label="LP" acronym-form="singular+short"} for data-driven quantized superstabilization that grows exponentially in $n$ and $m$ - A more tractable [AARC]{acronym-label="AARC" acronym-form="singular+short"} that is exponential in $m$ alone. This paper has the following structure: Section [2](#sec:preliminaries){reference-type="ref" reference="sec:preliminaries"} introduces notation and superstabilization. Section [3](#sec:quantization){reference-type="ref" reference="sec:quantization"} provides an overview of the data and logarithmic-input quantization schemes considered in this work. Section [4](#sec:quantized_ddc){reference-type="ref" reference="sec:quantized_ddc"} formulates superstabilizing [DDC]{acronym-label="DDC" acronym-form="singular+short"} under quantization as a pair of equivalent [LPs]{acronym-label="LP" acronym-form="plural+short"}. Section [5](#sec:examples){reference-type="ref" reference="sec:examples"} demonstrates these algorithms on example quantized systems. Section [\[sec:conclusion\]](#sec:conclusion){reference-type="ref" reference="sec:conclusion"} concludes the paper. 
# Preliminaries {#sec:preliminaries} \[QMIs\]Quadratic Matrix Inequalities ## Notation ------------------------------------------------ ---------------------------------------------------------------- $a..b$ Natural numbers between $a$ and $b$ $\mathbb{R}^n$ $n$-dimensional real Euclidean space $\mathbb{R}^n_{\geq 0} \ (\mathbb{R}^n_{> 0})$ $n$-dimensional nonnegative (positive) orthant $\mathbb{R}^{n\times m}$ $n\times m$-dimensional real matrix space $\mathbf{1}_n, \ \mathbf{0}_n$ Vector of all ones or zeros $I_n$ Identity matrix $\otimes$ Kronecker product $\text{vec}(X)$ Column-wise vectorization of a matrix $X^T$ Matrix transpose $\norm{x}_\infty$ $L_\infty$-norm (vector): $\max_i \abs{x}_i$ $\norm{X}_\infty$ Induced $L_\infty$ norm (matrix): $\max_i \sum_j \abs{X_{ij}}$ $x./y$ Element-wise division between $x$ and $y$ $A \leq B$ Element-wise $\leq$ between $A, B \in \mathbb{R}^{n \times m}$ ------------------------------------------------ ---------------------------------------------------------------- ## Superstabilization {#sec:superstability} A discrete-time system $x_+ = A x$ is *Extended Superstable* if there exists nonnegative weights $v > 0$ such that $\norm{x./v}_\infty$ is a Lyapunov function [@polyak2004extended]. This condition may be expressed using an operator norm through the definition $Y = \text{diag}(v)$ and the constraint $\norm{Y A Y^{-1}}_\infty < 1$. Standard *superstability* is the restriction of extended superstability when $v=\mathbf{1}_n$. A discrete-time linear system with input of $$\begin{aligned} x^{+} &= A x + B u \label{eq:dyn_discrete}\end{aligned}$$ is extended-superstabilized by the full-state-feedback controller $u = K x$ if there exists [@polyak2004extended] $v \in \mathbb{R}_{>0}^n, \ S \in \mathbb{R}^{m \times n}$ with $$\begin{aligned} & \forall i \in 1..n, \ \alpha \in \{-1, 1\}^n: \nonumber \\ &\qquad \textstyle \sum_{j=1}^n \alpha_i \left(A_{ij}v_j + \sum_{k=1} B_{ik}S_{kj}\right) < v_i. & &\label{eq:ess_sign}\end{aligned}$$ The controller $K$ forming the input $u=Kx$ is then recovered by $K = S \diag{1./v}$. Problem [\[eq:ess_sign\]](#eq:ess_sign){reference-type="eqref" reference="eq:ess_sign"} is a set of $n 2^n$ strict linear inequality constraints. A more efficient method of imposing extended-superstability is by introducing a new matrix $M \in \mathbb{R}^{n\times n}$ [@yannakakis1991expressing], [\[eq:ess_standard\]]{#eq:ess_standard label="eq:ess_standard"} $$\begin{aligned} &\textstyle \sum_{j=1}^n M_{ij} < v_i & & \forall i \in 1..n \\ &\textstyle \abs{A_{ij}v_j + \sum_{k=1} B_{ik}S_{kj}} \leq M_{ij} & &\forall i,j \in 1..n.\end{aligned}$$ Problems [\[eq:ess_sign\]](#eq:ess_sign){reference-type="eqref" reference="eq:ess_sign"} and [\[eq:ess_standard\]](#eq:ess_standard){reference-type="eqref" reference="eq:ess_standard"} are equivalent, in which an admissible selection for $M$ is $M_{ij} = \abs{A_{ij}v_j + \sum_{k=1} B_{ik}S_{kj}}.$ The conditions in [\[eq:ess_sign\]](#eq:ess_sign){reference-type="eqref" reference="eq:ess_sign"} and [\[eq:ess_standard\]](#eq:ess_standard){reference-type="eqref" reference="eq:ess_standard"} is necessary and sufficient for full-state feedback extended superstabilization. 
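As a concrete illustration of the $M$-based formulation [\[eq:ess_standard\]](#eq:ess_standard){reference-type="eqref" reference="eq:ess_standard"}, the following Python/cvxpy sketch solves the extended superstabilization feasibility LP for a *known* plant $(A,B)$ and recovers $K = S \,\mathrm{diag}(1./v)$. This is only an illustrative sketch and not the implementation accompanying the paper; the toy plant, the function name, and the margin `eta` used to approximate the strict inequalities are our own choices.

```python
import numpy as np
import cvxpy as cp

def ess_controller(A, B, eta=1e-3):
    """Extended superstabilization for a known plant (A, B): find v > 0, S, M with
    row sums of M below v and |A diag(v) + B S| <= M entrywise; return K, v."""
    n, m = B.shape
    v = cp.Variable(n)
    S = cp.Variable((m, n))
    M = cp.Variable((n, n))
    closed_loop = A @ cp.diag(v) + B @ S          # entries A_ij v_j + sum_k B_ik S_kj
    constraints = [v >= eta,
                   cp.sum(M, axis=1) <= v - eta,  # strict row-sum bound via margin eta
                   closed_loop <= M,
                   -M <= closed_loop]
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve()
    if problem.status not in ("optimal", "optimal_inaccurate"):
        return None, None
    return S.value @ np.diag(1.0 / v.value), v.value

# Example usage on a small hypothetical plant:
A = np.array([[0.9, 0.5], [0.0, 1.1]])
B = np.array([[0.0], [1.0]])
K, v = ess_controller(A, B)
if K is not None:
    weighted_rows = np.abs(A + B @ K) @ v / v     # forced below 1 by the LP constraints
    print(K, weighted_rows)
```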
If the system in [\[eq:dyn_discrete\]](#eq:dyn_discrete){reference-type="eqref" reference="eq:dyn_discrete"} is superstabilized $(v=\mathbf{1}_n)$ and $\norm{A + BK}_\infty \leq \lambda$ with $\lambda < 1$, then any closed-loop trajectory $x_t$ starting at $x_0$ with $\forall t: u_t =0$ will satisfy $\norm{x_t}_\infty \leq \lambda^t \norm{x_0}_\infty$ [@SZNAIER19963550; @polyak2001optimal]. The quantity $\lambda$ can be interpreted as a decay rate, and the controller $K$ can be designed to minimize $\lambda$ and ensure the fastest possible convergence. A similar minimal peak-to-peak design task for extended superstabilization requires the solution of a parametric [LP]{acronym-label="LP" acronym-form="singular+short"} with a single free parameter [@polyak2004extended]. # Quantization {#sec:quantization} This section will introduce the two sources of quantization considered in this paper. ## Quantization of Data Our data $\mathcal{D}$ with $N_s$ samples is composed of the current state $\hat{x}$, input $\hat{u}$, and bounds on the subsequent state $[p, q]$, forming the $N_s$ tuples $\mathcal{D}= \cup_{s=1}^{N_s} (\hat{x}_s, \hat{u}_s, p_s, q_s)$. We define the polytope $\mathcal{P}(A, B)$ as the set of all plants that are consistent with the data in $\mathcal{D}$: $$\begin{aligned} \mathcal{P}= \{(A, B) \mid \forall s \in 1..N_s: A \hat{x}_s + B \hat{u}_s & \in [p_s, q_s] \}. \label{eq:residual_constraints}\end{aligned}$$ The bounds $p_s, q_s$ at each sample-index $s$ may arise from interval quantization. In the case where a quantization process performs rounding to the first decimal place, the true state transition $x^+ = 0.368$ would be restricted to the interval described by $p = 0.3$ and $q = 0.4$. This data-quantization framework in $\mathcal{D}$ allows for the integration of $L_\infty$-bounded process-noise. In the case where there exists a process-noise $w_s$ such that $A \hat{x}_s + B \hat{u}_s + w_s \in [p_s, q_s]$ with $\norm{w_s}_\infty \leq \epsilon$, interval arithmetic can be used to express the data constraint as $A \hat{x}_s + B \hat{u}_s \in [p_s-\epsilon, q_s+\epsilon]$. ## Quantization of Input A scalar logarithmic quantizer with density $\rho \in (0, 1)$ and step $\delta = (1-\rho)/(1+\rho)$ is defined by $g_\rho: \mathbb{R}\rightarrow \mathbb{R}$ [@fu2005sector Equation 7]: $$\begin{aligned} g_\rho(z) = \begin{cases} \rho^i & \exists i \in \mathbb{N}\mid \frac{1}{1+\delta} \rho^i \leq z \leq \frac{1}{1-\delta} \rho^i \\ 0 & z=0 \\ -g_\rho(-z) & z < 0.\end{cases} \label{eq:log_quantize}\end{aligned}$$ We will obey the convention of [@fu2005sector] in referring to $\rho$ as the *quantization density*, in which a smaller $\rho$ refers to a coarser quantizer with wider intervals. A $\rho$-logarithmically-quantized linear system has dynamics $$\begin{aligned} x_{t+1} &= A x_t + B g_\rho(u_t), \label{eq:dyn_quantize}\end{aligned}$$ where the quantization in $g_\rho$ should be understood to occur elementwise in $u_t$. The following proposition establishes a sector-bound characterization of logarithmic quantization (for $m=1$): **Proposition 1** (Eq. (21)-(22) in [@fu2005sector]). *For any $z \geq 0$ and logarithmic quantization density $\rho>0$ with $$\delta = (1-\rho)/(1+\rho), \label{eq:delta_rho}$$ the quantization error at $z$ satisfies a multiplicative bound $$z - g_\rho(z) \in [-\delta z, \delta z]. 
\label{eq:mult_error_bound}$$* Figure [1](#fig:log_quantizer){reference-type="ref" reference="fig:log_quantizer"} plots the graph of a logarithmic quantizer with $\rho = 0.4, \ \delta = 0.4286$ along with the error bound in [\[eq:mult_error_bound\]](#eq:mult_error_bound){reference-type="eqref" reference="eq:mult_error_bound"} over the interval $u \in [-8, 8]$. ![Logarithmic quantizer $(\rho=0.4)$ and error bound](img/log_quantizer.png){#fig:log_quantizer width="\\exfiglength"} The trajectories of a logarithmically quantized system with $m=1$ are therefore contained in the class of scalar-$\Delta$ sector-bounded models: $$\begin{aligned} x_{t+1} &= \left[A + (1+\Delta) B K \right]x_t & & \forall \Delta \in [-\delta, \delta].\label{eq:dyn_quantize_feedback}\end{aligned}$$ Theorem 2.1 of [@fu2005sector] proves that the state-feedback controller $u_t = K x_t$ with $K \in \mathbb{R}^{1 \times n}$ **quadratically** stabilizes [\[eq:dyn_discrete\]](#eq:dyn_discrete){reference-type="eqref" reference="eq:dyn_discrete"} iff $u=Kx$ can **quadratically** stabilize [\[eq:dyn_quantize\]](#eq:dyn_quantize){reference-type="eqref" reference="eq:dyn_quantize"}. For systems in which each input channel $u_j$ has a separate quantization density $(\rho_j, \delta_j)$, quadratic state-feedback stabilization of the quantized system will occur if $u=Kx$ quadratically stabilizes every system in the family [@fu2005sector Theorem 3.2]: $$\begin{aligned} & \forall \Delta \in \textstyle \prod_{j=1}^m [-\delta_j, \delta_j] \nonumber \\ & \qquad x_{t+1} = \left[A + B(I_m+\diag{\Delta}) K \right]x_t. \label{eq:dyn_quantize_feedback_vector}\end{aligned}$$ The works in [@fu2005sector] and [@zhao2022data] treat common stabilization of [\[eq:dyn_quantize_feedback\]](#eq:dyn_quantize_feedback){reference-type="eqref" reference="eq:dyn_quantize_feedback"} as an $H_\infty$ optimization using the small-gain theorem for a sector-bounded uncertainty. The multi-input small-gain formulation in [\[eq:dyn_quantize_feedback_vector\]](#eq:dyn_quantize_feedback_vector){reference-type="eqref" reference="eq:dyn_quantize_feedback_vector"} is posed and solved using a conservative multi-block S-Lemma. ## Combined Superstability and Input-Quantization We can apply the extended superstabilization method of Section [2.2](#sec:superstability){reference-type="ref" reference="sec:superstability"} towards the control of input-quantized systems, as represented by the sector-bounded model class in [\[eq:dyn_quantize_feedback_vector\]](#eq:dyn_quantize_feedback_vector){reference-type="eqref" reference="eq:dyn_quantize_feedback_vector"}. **Theorem 2**. *A logarithmically quantized system in [\[eq:dyn_quantize\]](#eq:dyn_quantize){reference-type="eqref" reference="eq:dyn_quantize"} is extended superstabilized by a controller $u= K x$ if there exist $v \in \mathbb{R}_{>0}^n, \ S \in \mathbb{R}^{m \times n}, M \in \mathbb{R}^{n \times n}$ with $Y = \diag{v}$* *[\[eq:ess_quantized\]]{#eq:ess_quantized label="eq:ess_quantized"} $$\begin{aligned} & \forall i \in 1..n \nonumber \\ & \qquad \textstyle \sum_{j=1}^n M_{ij} < v_i & & \\ & \forall \Delta \in \textstyle \prod_{j=1}^m [-\delta_j, \delta_j] \nonumber \\ &\textstyle -M \leq A Y+ B (I_m + \diag{\Delta}) S \leq M. 
\label{eq:ess_quantized_con} \end{aligned}$$* *The recovered controller is $K = S Y^{-1}.$* *Proof.* In the case where $\Delta=0$, the quantized program [\[eq:ess_quantized\]](#eq:ess_quantized){reference-type="eqref" reference="eq:ess_quantized"} is equivalent to the unquantized program [\[eq:ess_standard\]](#eq:ess_standard){reference-type="eqref" reference="eq:ess_standard"}. We can apply Proposition [Proposition 1](#prop:quant_error){reference-type="ref" reference="prop:quant_error"} to generate a sector-bound description of quantization, together with separate input channel quantization based on Equation [\[eq:dyn_quantize_feedback_vector\]](#eq:dyn_quantize_feedback_vector){reference-type="eqref" reference="eq:dyn_quantize_feedback_vector"} regarding the multiplicative perturbations $\Delta$. The linear inequality constraints [\[eq:ess_quantized\]](#eq:ess_quantized){reference-type="eqref" reference="eq:ess_quantized"} are convex, such that a common $M$ is a worst-case certificate over all possible closed-loop matrices, with $-M \leq A Y + B(I_m + \diag{\Delta}) S \leq M$. Such a certificate ensures extended superstability of all systems in [\[eq:dyn_quantize_feedback\]](#eq:dyn_quantize_feedback){reference-type="eqref" reference="eq:dyn_quantize_feedback"}. ◻ **Corollary 1**. *We can enumerate the convex constraint [\[eq:ess_quantized\]](#eq:ess_quantized){reference-type="eqref" reference="eq:ess_quantized"} over the vertices of the hypercube formed by $\delta$, producing the equivalent statement of $$\begin{aligned} & \forall \gamma \in \textstyle\prod_{j=1}^m\{-\delta_j, \delta_j\} \nonumber \\ &\textstyle -M \leq A Y+ B (I_m + \diag{\gamma}) S \leq M.\label{eq:ess_quantized_enum} \end{aligned}$$* **Corollary 2**. *An equivalent formulation to [\[eq:ess_quantized\]](#eq:ess_quantized){reference-type="eqref" reference="eq:ess_quantized"} with respect to sign-enumeration in [\[eq:ess_sign\]](#eq:ess_sign){reference-type="eqref" reference="eq:ess_sign"} and substitution $\beta = 1+\gamma$ is the following [LP]{acronym-label="LP" acronym-form="singular+short"} in $n 2^{n+m}$ constraints: $$\begin{aligned} & \forall i \in 1..n, \ \alpha \in \{-1, 1\}^n, \ \beta \in \textstyle\prod_{j=1}^m\{1-\delta_j, 1+\delta_j\} : \nonumber \\ &\quad \textstyle \sum_{j=1}^n \alpha_j \left( A_{ij}v_j + \sum_{k=1}^m \beta_k B_{ik} S_{kj} \right)< v_i. & &\label{eq:ess_sign_quantized}\end{aligned}$$* **Proposition 3**. *A controller $K$ that is feasible for quantization $\delta \in \mathbb{R}_{\geq 0}^m$ in [\[eq:ess_sign_quantized\]](#eq:ess_sign_quantized){reference-type="eqref" reference="eq:ess_sign_quantized"} will also be feasible for every $\delta' \in \mathbb{R}_{\geq 0}^m$ with $\delta' \leq \delta$.* # Quantized DDC {#sec:quantized_ddc} This section will outline our approach towards quantized superstability. Given data in $\mathcal{D}$, let $\mathcal{P}$ in [\[eq:residual_constraints\]](#eq:residual_constraints){reference-type="eqref" reference="eq:residual_constraints"} be the polytopic consistency set of plants $(A, B)$ in agreement with $\mathcal{D}$. Our task is to solve the following problem: **Problem 4**. *Find a state-feedback controller $u = K x$ such that the quantized system [\[eq:dyn_quantize_feedback\]](#eq:dyn_quantize_feedback){reference-type="eqref" reference="eq:dyn_quantize_feedback"} is (extended) superstable for all $(A, B) \in \mathcal{P}$. 
[\[prob:ddc_quantized\]]{#prob:ddc_quantized label="prob:ddc_quantized"}* ## Consistency Polytope Representation Let us define $\mathbf{X}, \mathbf{U}, \mathbf{p}, \mathbf{q}$ as the following concatenations of data in $\mathcal{D}$: [\[eq:data_matrices\]]{#eq:data_matrices label="eq:data_matrices"} $$\begin{aligned} \mathbf{X}&= \begin{bmatrix} \hat{x}_1; & \hat{x}_2; & \ldots & \hat{x}_{N_s} \end{bmatrix} \\ \mathbf{U}&= \begin{bmatrix} \hat{u}_1; & \hat{u}_2; & \ldots & \hat{u}_{N_s} \end{bmatrix}\\ \mathbf{p} &= \begin{bmatrix} p_1; & p_2; & \ldots & p_{N_s} \end{bmatrix}\\ \mathbf{q} &= \begin{bmatrix} q_1; & q_2; & \ldots & q_{N_s} \end{bmatrix}.\end{aligned}$$ The data-consistency polytope in [\[eq:residual_constraints\]](#eq:residual_constraints){reference-type="eqref" reference="eq:residual_constraints"} may be represented using the data matrices in [\[eq:data_matrices\]](#eq:data_matrices){reference-type="eqref" reference="eq:data_matrices"} as $$\begin{aligned} G_\mathcal{D}&= \begin{bmatrix} -\mathbf{X}^T \otimes I_n & -\mathbf{U}^T \otimes I_n \\ \mathbf{X}^T \otimes I_n & \mathbf{U}^T \otimes I_n \end{bmatrix} \nonumber\\ h_\mathcal{D}&= \begin{bmatrix} - \mathbf{p}; & \mathbf{q} \end{bmatrix}\label{eq:polytope_data}\\ \mathcal{P}&= \{(A, B) \mid G_\mathcal{D}[\textrm{vec}(A); \textrm{vec}(B)] \leq h_\mathcal{D}\}, \nonumber\end{aligned}$$ using the Kronecker identity $\textrm{vec}(P X Q) = (Q^T \otimes P)\textrm{vec}(X)$ for matrices $(P, X, Q)$ of compatible dimensions. We will denote $L \leq 2 n N_s$ as the number of faces in [\[eq:polytope_data\]](#eq:polytope_data){reference-type="eqref" reference="eq:polytope_data"} ($h_\mathcal{D}\in \mathbb{R}^{1 \times L}$). The number of faces $L$ can be reduced from $2n N_s$ by pruning redundant constraints from $\mathcal{P}$ [@caron1989degenerate] through iterative [LPs]{acronym-label="LP" acronym-form="plural+short"}. ## Sign-Based Approach The sign-based program in [\[eq:ess_sign_quantized\]](#eq:ess_sign_quantized){reference-type="eqref" reference="eq:ess_sign_quantized"} in the [DDC]{acronym-label="DDC" acronym-form="singular+short"} case can be considered as a finite-dimensional robust [LP]{acronym-label="LP" acronym-form="singular+short"}: $$\begin{aligned} & \forall i \in 1..n, \ \alpha \in \{-1, 1\}^n, \ \beta \in \textstyle\prod_{j=1}^m\{1-\delta_j, 1+\delta_j\}: \label{eq:ess_sign_quantized_ddc} \\ &\quad \textstyle \sum_{j=1}^n \alpha_j \left(A_{ij}v_j + \sum_{k=1} \beta_k B_{ik} S_{kj}\right) < v_i, \ \forall (A, B) \in \mathcal{P}. \nonumber\end{aligned}$$ Program [\[eq:ess_sign_quantized_ddc\]](#eq:ess_sign_quantized_ddc){reference-type="eqref" reference="eq:ess_sign_quantized_ddc"} features a total of $n 2^{n+m}$ strict robust inequalities. We will add a stability tolerance $\eta > 0$ in order to modify the comparator and right-hand side of [\[eq:ess_sign_quantized_ddc\]](#eq:ess_sign_quantized_ddc){reference-type="eqref" reference="eq:ess_sign_quantized_ddc"} into a nonstrict inequality $\leq v_i - \eta$. Each nonstrict robust inequality in $\alpha, \beta$ may be formulated as a polytope: $$\begin{aligned} G_{\alpha \beta} &= \begin{bmatrix} (\diag{v} \alpha)^T \otimes I_n & (\diag{\beta} S \alpha)^T \otimes I_n \end{bmatrix} \nonumber \\ h_{\alpha \beta} &= v - \eta \mathbf{1} \label{eq:polytope_sign_stab}\\ \mathcal{P}_{\alpha\beta} &= \{(A, B) \mid G_{\alpha \beta} [\textrm{vec}(A); \textrm{vec}(B)] \leq h_{\alpha \beta} \}. 
\nonumber\end{aligned}$$ We will enforce containment of $\mathcal{P}$ in each $\mathcal{P}_{\alpha \beta}$ using the Extended Farkas Lemma: **Lemma 5** (Extended Farkas Lemma [@hennet1989farkas; @henrion1999control]). *Let $P_1 = \{x \mid G_1 x \leq h_1\}$ and $P_2 = \{x \mid G_2 x \leq h_2\}$ be a pair of polytopes with $G_1 \in \mathbb{R}^{m \times n}$ and $G_2 \in \mathbb{R}^{p \times n}$. Then $P_1 \subseteq P_2$ if and only if there exists a matrix $Z \in \mathbb{R}^{p \times m}_{\geq 0}$ such that, $$\begin{aligned} Z G_1 &= G_2, & Z h_1 \leq h_2.\end{aligned}$$* **Remark 1**. *The Extended Farkas Lemma is a particular instance of a robust counterpart [@ben2009robust] when certifying validity of a system of linear inequalities over polytopic uncertainty.* A sign-based program to solve Problem [\[prob:ddc_quantized\]](#prob:ddc_quantized){reference-type="ref" reference="prob:ddc_quantized"} is: [\[eq:lp_stab_sign\]]{#eq:lp_stab_sign label="eq:lp_stab_sign"} $$\begin{aligned} \mathop{\mathrm{find}}_{v, S, Z} \quad & \forall \alpha \in \{-1, 1\}^n, \ \beta \in \textstyle\prod_{j=1}^m\{1-\delta_j, 1+\delta_j\}: \\ & \qquad Z_{\alpha \beta} G_\mathcal{D}= G_{\alpha \beta}, \quad Z_{\alpha \beta} h_\mathcal{D}\leq h_{\alpha \beta} \label{eq:lp_stab_ext}\\ & \qquad Z_{\alpha \beta} \in \mathbb{R}_{\geq 0} ^{n \times L} \\ & v-\eta \mathbf{1}_n \in \mathbb{R}_{\geq 0}^n, \ S \in \mathbb{R}^{n \times m}. \label{eq:lp_stab_var}\end{aligned}$$ ## Lifted Approach We can solve Problem [\[prob:ddc_quantized\]](#prob:ddc_quantized){reference-type="ref" reference="prob:ddc_quantized"} by posing [\[eq:ess_quantized\]](#eq:ess_quantized){reference-type="eqref" reference="eq:ess_quantized"} as an infinite-dimensional [LP]{acronym-label="LP" acronym-form="singular+short"} in terms of a function $M: \mathcal{P}\rightarrow \mathbb{R}^{n \times n}$. **Theorem 6**. *A state-feedback controller $u = Kx$ will solve Problem [\[prob:ddc_quantized\]](#prob:ddc_quantized){reference-type="ref" reference="prob:ddc_quantized"} if the following infinite-dimensional [LP]{acronym-label="LP" acronym-form="singular+short"} has a feasible solution with $v \in \mathbb{R}_{>0}^n, \ S \in \mathbb{R}^{m \times n}, M: \mathcal{P}\rightarrow \mathbb{R}^{n \times n}$ with $Y = \diag{v}$* *[\[eq:ess_quantized_ddc\]]{#eq:ess_quantized_ddc label="eq:ess_quantized_ddc"} $$\begin{aligned} & \forall i \in 1..n \nonumber \\ & \qquad \textstyle \sum_{j=1}^n M_{ij}(A, B) < v_i & & \label{eq:ess_quantized_constraint_1} \\ & \forall \beta \in \textstyle\prod_{j=1}^m \{1-\delta_j, 1+\delta_j\} \nonumber \\ &\textstyle -M(A, B) \leq A Y+ B ( \diag{\beta}) S \leq M(A, B). \label{eq:ess_quantized_constraint} \end{aligned}$$* *Proof.* Each plant $(A, B) \in \mathcal{P}$ has a certificate of extended superstabilizability $(v, M(A, B))$ by Theorem [Theorem 2](#thm:quant_ess){reference-type="ref" reference="thm:quant_ess"}. If [\[eq:ess_quantized_ddc\]](#eq:ess_quantized_ddc){reference-type="eqref" reference="eq:ess_quantized_ddc"} is feasible, then all plants in $\mathcal{P}$ simultaneously extended superstabilized by a common $K = S Y^{-1}$ . ◻ **Remark 2**. 
*The function $M(A, B)$ may be treated as an adjustable decision variable given the a-priori unknown $(A, B) \in \mathcal{P}$ [@yanikouglu2019survey].* The infinite-dimensional [LP]{acronym-label="LP" acronym-form="singular+short"} in [\[eq:ess_quantized_ddc\]](#eq:ess_quantized_ddc){reference-type="eqref" reference="eq:ess_quantized_ddc"} must be truncated into a finite-dimensional convex program in order to admit computationally tractable formulations. One method to perform this truncation is to restrict $M(A, B)$ to an affine function by defining $M^0, M^{A}_{ij}, M^{B}_{ik} \in \mathbb{R}^{n \times n}$ to form $$\begin{aligned} M(A, B) &= \textstyle M^0 + \sum_{i j} M^{A}_{ij} A_{ij} + \sum_{i k} M^{B}_{ik} B_{ik}, \label{eq:m_affine} \end{aligned}$$ We can define the quantities $\mathbf{m} = (m^0, m^a, m^b)$ $$\begin{aligned} m^0 &= \textrm{vec}(M^0)\\ m^a &= \begin{bmatrix} \textrm{vec}(M^A_{11}), & \textrm{vec}(M^A_{21}), & \ldots, & \textrm{vec}(M^A_{nn}) \end{bmatrix} \\ m^b &= \begin{bmatrix} \textrm{vec}(M^B_{11}), & \textrm{vec}(M^B_{21}), & \ldots, & \textrm{vec}(M^B_{nm}) \end{bmatrix}.\end{aligned}$$ in order to obtain a vectorized expression for [\[eq:m_affine\]](#eq:m_affine){reference-type="eqref" reference="eq:m_affine"} with $$\begin{aligned} \textrm{vec}(M(A, B)) &= m^0 + m^a \textrm{vec}(A) + m^b \textrm{vec}(B). \\ \intertext{The row-sums of $M$ can be expressed as } \textrm{vec}(M(A, B) \mathbf{1}_n ) &= (\mathbf{1}_n^T \otimes I_n) (m^0 + m^a \textrm{vec}(A)) \\ &\qquad + (\mathbf{1}_n^T \otimes I_n) (m^b \textrm{vec}(B)).\nonumber \end{aligned}$$ The constraint in [\[eq:ess_quantized_constraint_1\]](#eq:ess_quantized_constraint_1){reference-type="eqref" reference="eq:ess_quantized_constraint_1"} with stability factor $\eta>0$ can be reformulated as membership in the following polytope $\mathcal{P}_M$: $$\begin{aligned} G_M &= (\mathbf{1}_n^T \otimes I_n)\begin{bmatrix} m^a, & m^b \end{bmatrix} \nonumber \\ h_M &= [v - \eta - (\mathbf{1}_n^T \otimes I_n)m^0] \\ \mathcal{P}_{M} &= \{(A, B) \mid G_{M} [\textrm{vec}(A); \textrm{vec}(B)] \leq h_{M} \}. \nonumber \end{aligned}$$ The polytopic constraint region in [\[eq:ess_quantized_constraint\]](#eq:ess_quantized_constraint){reference-type="eqref" reference="eq:ess_quantized_constraint"} for each $\beta \in \textstyle\prod_{j=1}^m\{1-\delta_j, 1+\delta_j\}$ is $$\begin{aligned} G_{\beta}^s &= \begin{bmatrix} -m^a - \diag{v}^T\otimes I_n & -m^b- (\diag{\beta} S)^T \otimes I_n \\ -m^a + \diag{v}^T\otimes I_n & -m^b+ (\diag{\beta} S)^T \otimes I_n \end{bmatrix} \nonumber \\ h_{\beta} &= \begin{bmatrix} m^0; & m^0 \end{bmatrix} \label{eq:polytope_sign_stab_aarc}\\ \mathcal{P}_{\beta} &= \{(A, B) \mid G_{\beta} [\textrm{vec}(A); \textrm{vec}(B)] \leq h_{\beta} \}. 
\nonumber\end{aligned}$$ The affine restriction of $M$ in [\[eq:m_affine\]](#eq:m_affine){reference-type="eqref" reference="eq:m_affine"} results in program for [\[eq:ess_quantized_ddc\]](#eq:ess_quantized_ddc){reference-type="eqref" reference="eq:ess_quantized_ddc"}: [\[eq:lp_stab_aarc\]]{#eq:lp_stab_aarc label="eq:lp_stab_aarc"} $$\begin{aligned} \mathop{\mathrm{find}}_{v, S, Z, \mathbf{m}} \quad & \forall \beta \in \textstyle\prod_{j=1}^m\{1-\delta_j, 1+\delta_j\}: \\ & Z_{\beta} G_\mathcal{D}= G_{\beta}, \quad Z_{\beta} h_\mathcal{D}\leq h_{\beta} \label{eq:lp_stab_ext_aarc}\\ & Z_{\beta} \in \mathbb{R}_{\geq 0} ^{2 n^2 \times L} \\ & Z_M G_\mathcal{D}= G_M, \quad Z_M h_\mathcal{D}\leq h_M \\ & Z_M \in \mathbb{R}^{n \times L}_{\geq 0} \\ &m^0 \in \mathbb{R}^{n^2 \times 1}, \ m^A \in \mathbb{R}^{n^2 \times n^2}\\ &m^B \in \mathbb{R}^{n^2 \times nm} \\ & v-\eta \mathbf{1}_n \in \mathbb{R}_{\geq 0}^n, \ S \in \mathbb{R}^{n \times m}. \label{eq:lp_stab_var_aarc}\end{aligned}$$ ## Computational Complexity We will quantify the computational complexity [\[eq:lp_stab_sign\]](#eq:lp_stab_sign){reference-type="eqref" reference="eq:lp_stab_sign"} and [\[eq:lp_stab_aarc\]](#eq:lp_stab_aarc){reference-type="eqref" reference="eq:lp_stab_aarc"} based on the number of robust inequalities (for [\[eq:ess_sign_quantized_ddc\]](#eq:ess_sign_quantized_ddc){reference-type="eqref" reference="eq:ess_sign_quantized_ddc"} and [\[eq:ess_quantized_ddc\]](#eq:ess_quantized_ddc){reference-type="eqref" reference="eq:ess_quantized_ddc"}), scalar variables $(v, S, Z, \mathbf{m})$, slack variables/constraints introduced in reformulations of scalar inequality constraints (e.g., $v - \eta \mathbf{1}_n \in \mathbb{R}^n_{\geq 0} \mapsto q \in \mathbb{R}^n_{\geq 0}, \ v-\eta \mathbf{1}_n = q$), scalar inequality constraints $(\in \mathbb{R}_{\geq 0})$, and scalar equality constraints. These counts (up to the highest order terms to save space) are listed in Table [1](#tab:complexity){reference-type="ref" reference="tab:complexity"}. -------------- -------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------- sign-based [\[eq:lp_stab_sign\]](#eq:lp_stab_sign){reference-type="eqref" reference="eq:lp_stab_sign"} [AARC]{acronym-label="AARC" acronym-form="singular+short"} [\[eq:lp_stab_aarc\]](#eq:lp_stab_aarc){reference-type="eqref" reference="eq:lp_stab_aarc"} robust ineq. $n 2^{n+m}$ $n + n^2 2^{m+1}$ scalar vars. $n(m+1+ 2^{n+m}L)$ $n(m+1) + n^2(1+nm+n^2)$ $\quad + n^2 L 2^{m+1}$ slack vars. $n+2^{n+m+1} n^2$ $n(1+2^{m+1})$ eq. cons. $n^2(n+m) 2^{n+m} + n$ $(2^{m+1} n^2+n)(n+m) + n$ ineq. cons. 
$2^{n+m}(n + nL) + n$ $n(L+2) + n^2 (L+1)2^{m+1}$ -------------- -------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------- : Comparison between [LPs]{acronym-label="LP" acronym-form="plural+short"} [\[eq:lp_stab_sign\]](#eq:lp_stab_sign){reference-type="eqref" reference="eq:lp_stab_sign"} and [\[eq:lp_stab_aarc\]](#eq:lp_stab_aarc){reference-type="eqref" reference="eq:lp_stab_aarc"} Note how $n$ appears exponentially in the sign-based scheme [\[eq:lp_stab_sign\]](#eq:lp_stab_sign){reference-type="eqref" reference="eq:lp_stab_sign"}, while $n$ enters only polynomially for quantities in the [AARC]{acronym-label="AARC" acronym-form="singular+short"} [\[eq:lp_stab_aarc\]](#eq:lp_stab_aarc){reference-type="eqref" reference="eq:lp_stab_aarc"}. Given that the running-time of an Interior Point Method for $N$-variable [LPs]{acronym-label="LP" acronym-form="plural+short"} up to $\gamma$-optimality is approximately $O(N^{\omega+0.5} \abs{\log(1/\gamma)})$ (with matrix multiplication constant $\omega$) [@wright1997primal], the [AARC]{acronym-label="AARC" acronym-form="singular+short"} is more computationally efficient than the sign-based scheme as $n$ increases. # Numerical Examples {#sec:examples} MATLAB (2021a) code to execute all examples is publicly available [^5]. The convex optimization problems [\[eq:lp_stab_sign\]](#eq:lp_stab_sign){reference-type="eqref" reference="eq:lp_stab_sign"} and [\[eq:lp_stab_aarc\]](#eq:lp_stab_aarc){reference-type="eqref" reference="eq:lp_stab_aarc"} are modeled in YALMIP [@lofberg2004yalmip] (including the robust programming module [@lofberg2012] with option '`lplp.duality`') and solved in Mosek 9.2 [@mosek92]. ## 3-state 2-input The first example will involve superstabilization of the following 3-state 2-input discrete-time linear system: [\[eq:sys1\]]{#eq:sys1 label="eq:sys1"} $$\begin{aligned} A &= \begin{bmatrix} -0.1300&-0.3974&0.2030\\ -0.3974&-0.5000&0.2990\\ 0.2030&0.2990&-0.5262 \end{bmatrix}, \\ B &= \begin{bmatrix} 0.2179&1.2300\\ 0.3592&0\\ -1.1553&0 \end{bmatrix}.\end{aligned}$$ System [\[eq:sys1\]](#eq:sys1){reference-type="eqref" reference="eq:sys1"} is open-loop unstable with eigenvalues of $[-1.0185, -0.2613, 0.1236].$ We collect $T=100$ input-state-transition observations of system [\[eq:sys1\]](#eq:sys1){reference-type="eqref" reference="eq:sys1"} to form $\mathcal{D}$. The transition observations are quantized according to the following partition with 9 bins: $$\begin{aligned} (-\infty, -4] \cup [-4, -3] \cup [-3, -2] \ldots [3, 4] \cup [4, \infty). \label{eq:partition1}\end{aligned}$$ Superstabilization $(v=\mathbf{1}_3)$ is performed by solving the sign-based scheme in [\[eq:ess_sign_quantized_ddc\]](#eq:ess_sign_quantized_ddc){reference-type="eqref" reference="eq:ess_sign_quantized_ddc"}. An objective is added to minimize $\lambda \in \mathbb{R}$ such that $\forall i: \sum_{j} M_{ij} \leq \lambda$, in which $\lambda < 1$ indicates a successful worst-case superstabilization under input and data quantization. Figure [2](#fig:ss_rho_vs_lam){reference-type="ref" reference="fig:ss_rho_vs_lam"} plots worst-case optimal values of $\lambda$ as a function of the quantization density $\rho$, in which $\rho$ is the same for all inputs. 
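For concreteness, the interval data $(p_s, q_s)$ induced by the partition [\[eq:partition1\]](#eq:partition1){reference-type="eqref" reference="eq:partition1"} can be generated along the lines of the following numpy sketch. This is purely illustrative and not the released MATLAB code; the excitation distribution and the function names are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
edges = np.arange(-4.0, 4.5, 1.0)         # breakpoints -4, -3, ..., 4 of the partition

def interval_bounds(x, edges):
    """Elementwise bin bounds: p = largest breakpoint <= x (else -inf),
    q = smallest breakpoint >= x (else +inf)."""
    e = edges[None, :]
    xc = np.asarray(x, dtype=float)[:, None]
    p = np.where(xc >= e, e, -np.inf).max(axis=1)
    q = np.where(xc <= e, e, np.inf).min(axis=1)
    return p, q

def collect_quantized_data(A, B, T):
    """Form tuples (x_s, u_s, p_s, q_s) of the set D from exact transitions."""
    n, m = B.shape
    data = []
    for _ in range(T):
        x = rng.uniform(-2.0, 2.0, size=n)    # sampled state (our assumption)
        u = rng.uniform(-1.0, 1.0, size=m)    # sampled input (our assumption)
        x_next = A @ x + B @ u                # exact next state, then only its bin is kept
        p, q = interval_bounds(x_next, edges)
        data.append((x, u, p, q))
    return data
```

Only the bin endpoints of each measured next state are stored, which is exactly the information entering the consistency polytope [\[eq:residual_constraints\]](#eq:residual_constraints){reference-type="eqref" reference="eq:residual_constraints"}.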
The $T=60$ data preserves the first 60 elements of the 100 observations in $\mathcal{D}$ (with a similar process for $T=80$). Gain values for the ground truth (model-based case when [\[eq:sys1\]](#eq:sys1){reference-type="eqref" reference="eq:sys1"} is known) are presented as a comparison. We note that $\rho \rightarrow 1$ results in $\delta \rightarrow 0$ by [\[eq:delta_rho\]](#eq:delta_rho){reference-type="eqref" reference="eq:delta_rho"}, for which the (limiting) quantization law at $\rho=1$ is $g_{\rho=1}(u) = u$. ![Peak-to-peak gain $(\lambda)$ vs. quantization density $(\rho)$](img/quantization_T_60_100.png){#fig:ss_rho_vs_lam width="0.8\\exfiglength"} Table [2](#tab:min_rho){reference-type="ref" reference="tab:min_rho"} lists the minimal feasible $\rho$ (up to four decimal places) such that the sign-based formulation in [\[eq:ess_sign_quantized_ddc\]](#eq:ess_sign_quantized_ddc){reference-type="eqref" reference="eq:ess_sign_quantized_ddc"} returns a feasible superstabilizing (SS) or extended superstabilizing (ESS) controller. The symbol $\varnothing$ indicates primal infeasibility of the [LP]{acronym-label="LP" acronym-form="singular+short"} for all $\rho \leq 1$. $T$ 60 80 100 Truth ----- --------------- -------- -------- -------- SS $\varnothing$ 0.6727 0.6182 0.3182 ESS 0.9397 0.3494 0.2081 0.1422 : Minimal $\rho$ with sign-based stabilization of [\[eq:sys1\]](#eq:sys1){reference-type="eqref" reference="eq:sys1"} Table [3](#tab:min_rho_aarc){reference-type="ref" reference="tab:min_rho_aarc"} lists the minimal $\rho$ for [AARC]{acronym-label="AARC" acronym-form="singular+short"}-based quantized superstabilization. There is no difference between the ground-truth values in Tables [2](#tab:min_rho){reference-type="ref" reference="tab:min_rho"} and [3](#tab:min_rho_aarc){reference-type="ref" reference="tab:min_rho_aarc"}, because the underlying finite-dimensional [LPs]{acronym-label="LP" acronym-form="plural+short"} with nonrobust inequality constraints are equivalent. $T$ 60 80 100 Truth ----- --------------- --------------- -------- -------- SS $\varnothing$ $\varnothing$ 0.9500 0.3182 ESS $\varnothing$ $\varnothing$ 0.7723 0.1422 : Minimal $\rho$ with [AARC]{acronym-label="AARC" acronym-form="singular+short"} stabilization of [\[eq:sys1\]](#eq:sys1){reference-type="eqref" reference="eq:sys1"} ## 5-state 3-input The second example performs extended superstabilization over the following system with 5 states and 3 inputs: $$\begin{aligned} A &= (1/5)[\min(i/j, j/i)]_{ij} + (1/2) I_{5},\\ B &= [I_3; \mathbf{0}_{2 \times 3}].\end{aligned}$$ [\[eq:sys2\]]{#eq:sys2 label="eq:sys2"} System [\[eq:sys2\]](#eq:sys2){reference-type="eqref" reference="eq:sys2"} is open-loop unstable with purely real eigenvalues of $[1.0633, 0.6507, 0.5502, 0.5046, 0.4812]$. The nominal system in [\[eq:sys2\]](#eq:sys2){reference-type="eqref" reference="eq:sys2"} can be extended-superstabilized until $\rho = 0.2245$. The $T=350$ collected state-input transitions of [\[eq:sys2\]](#eq:sys2){reference-type="eqref" reference="eq:sys2"} are quantized according to the following partition with 26 bins: $$\begin{aligned} (-\infty, -6] \cup [-6, -5.5] \cup [-5.5, -5] \ldots [5.5, 6] \cup [6, \infty). \label{eq:partition2}\end{aligned}$$ The polytope in [\[eq:residual_constraints\]](#eq:residual_constraints){reference-type="eqref" reference="eq:residual_constraints"} has $2nT = 3500$ faces in $n(n+m)=40$ dimensions, of which $185$ are nonredundant. 
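The reduction from $3500$ faces to $185$ nonredundant faces can be carried out with the standard one-LP-per-face redundancy test underlying the iterative-LP pruning cited in Section [4](#sec:quantized_ddc){reference-type="ref" reference="sec:quantized_ddc"}. A minimal scipy sketch of that test (our own illustration, with our own function name and tolerance, not the authors' implementation) is:

```python
import numpy as np
from scipy.optimize import linprog

def nonredundant_rows(G, h, tol=1e-9):
    """Indices i such that row i of {x : G x <= h} is not implied by the other rows:
    maximize g_i^T x with row i relaxed; keep the row when the optimum exceeds h_i
    (or when the relaxed LP is unbounded)."""
    keep = []
    for i in range(G.shape[0]):
        h_relaxed = h.astype(float)
        h_relaxed[i] += 1.0                      # relax row i so it cannot bind
        res = linprog(-G[i], A_ub=G, b_ub=h_relaxed,
                      bounds=[(None, None)] * G.shape[1], method="highs")
        if res.status == 0 and -res.fun <= h[i] + tol:
            continue                             # row i is redundant: drop it
        keep.append(i)
    return keep

# Hypothetical usage on the consistency polytope {z : G_D z <= h_D}:
# idx = nonredundant_rows(G_D, h_D); G_D, h_D = G_D[idx], h_D[idx]
```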
We successfully solve the data-driven common-extended-superstabilizing [AARC]{acronym-label="AARC" acronym-form="singular+short"} program in [\[eq:lp_stab_aarc\]](#eq:lp_stab_aarc){reference-type="eqref" reference="eq:lp_stab_aarc"} at $\rho=0.8$ to acquire a feasible controller with parameters $$\begin{aligned} K &= -\begin{bmatrix} 0.6434 & 0.0943 & 0.0785 & 0.0609 & 0.0330 \\ 0.0965 & 0.6513 & 0.1409 & 0.0899 & 0.0842 \\ 0.0650 & 0.1392 & 0.6528 & 0.1463 & 0.1183 \\ \end{bmatrix} \nonumber \\ v &= \begin{bmatrix} 0.0137, & 0.0069 & 0.0058 & 0.0289 & 0.0289 \end{bmatrix}. \end{aligned}$$ # Acknowledgements {#acknowledgements .unnumbered} The authors would like to thank Roy Smith and the Automatic Control Lab of ETH Zürich for their support. [^1]: $^1$J. Miller is with the Automatic Control Laboratory (IfA), Department of Information Technology and Electrical Engineering (D-ITET), ETH Zürich, Physikstrasse 3, 8092, Zürich, Switzerland (e-mail: jarmiller\@control.ee.ethz.ch). [^2]: $^2$ J. Miller, J. Zheng, and M. Sznaier are with the Robust Systems Lab, ECE Department, Northeastern University, Boston, MA 02115. (e-mails: zheng.jian1\@northeastern.edu, msznaier\@coe.neu.edu) [^3]: $^3$ Naval Undersea Warfare Center 1176 Howell St, Newport, RI 02841 (e-mail: chixenbaugh\@umassd.edu). [^4]: J. Miller was partially supported by Swiss National Science Foundation Grant 200021_178890. J. Miller, J. Zheng, and M. Sznaier were partially supported by NSF grants CNS--1646121, ECCS--1808381 and CNS--2038493, AFOSR grant FA9550-19-1-0005, and ONR grant N00014-21-1-2431. [^5]: <https://github.com/Jarmill/quantized_ddc>
arxiv_math
{ "id": "2309.13712", "title": "Data-Driven Superstabilization of Linear Systems under Quantization", "authors": "Jared Miller, Jian Zheng, Mario Sznaier, Chris Hixenbaugh", "categories": "math.OC cs.SY eess.SY", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
---
abstract: |
  In this paper, we consider higher regularity of a weak solution $({\bf u},p)$ to stationary Stokes systems with variable coefficients. Under the assumptions that the coefficients and data are piecewise $C^{s,\delta}$ in a bounded domain consisting of a finite number of subdomains with interfacial boundaries in $C^{s+1,\mu}$, where $s$ is a positive integer, $\delta\in (0,1)$, and $\mu\in (0,1]$, we show that $D{\bf u}$ and $p$ are piecewise $C^{s,\delta_{\mu}}$, where $\delta_{\mu}=\min\big\{\frac{1}{2},\mu,\delta\big\}$. Our result is new even in the 2D case with piecewise constant coefficients.
address:
- Division of Applied Mathematics, Brown University, 182 George Street, Providence, RI 02912, USA
- School of Mathematical Sciences, Beijing Normal University, Laboratory of Mathematics and Complex Systems, Ministry of Education, Beijing 100875, China.
- Academy for Multidisciplinary Studies, Capital Normal University, Beijing 100048, China.
author:
- Hongjie Dong
- Haigang Li
- Longjuan Xu
title: On higher regularity of Stokes systems with piecewise Hölder continuous coefficients
---

[^1] [^2] [^3]

# Introduction and main results

Stokes systems with variable coefficients have been studied extensively in the literature. See, for instance, the pioneering work of Giaquinta and Modica [@gm82]. Stokes systems of this type can be used to model the motion of inhomogeneous fluids with density-dependent viscosity [@ls1975; @l1996; @agz2011]. In this paper, we study stationary Stokes systems with piecewise smooth coefficients $$\begin{aligned}
\label{stokes}
\begin{cases} D_\alpha (A^{\alpha\beta}D_\beta {\bf u})+Dp=D_{\alpha}{\bf f}^{\alpha}\\ \operatorname{div}{\bf u}=g \end{cases}\,\,\mbox{in }~\mathcal{D}\end{aligned}$$ where ${\bf u}=(u^1,\ldots, u^d)^{\top}$ and ${\bf f}^{\alpha}=(f_1^\alpha,\ldots,f_d^\alpha)^{\top}$, $d\geq2$, and we use the Einstein summation convention over repeated indices. We assume that the bounded domain $\mathcal{D}$ in $\mathbb R^d$ contains a finite number of disjoint subdomains $\mathcal{D}_j$, $j=1,\dots,M$, and that the coefficients and the data may have jumps across the boundaries of the subdomains. By approximation, we may assume that any point $x\in\mathcal{D}$ belongs to the boundaries of at most two of the $\mathcal{D}_{j}$'s. With these assumptions, the Stokes system [\[stokes\]](#stokes){reference-type="eqref" reference="stokes"} is connected to the study of composite materials with closely spaced interfacial boundaries (see, for instance, [@mnmv1987; @jg2015]), as well as the study of the motion of two fluids with interfacial boundaries [@cd2019; @dk2018; @dk2019; @kw2018; @kmw2021]. This problem is also stimulated by the study of regularity of weak solutions for equations with rough coefficients. There have been significant developments in the regularity theory for partial differential equations and systems with coefficients satisfying suitable piecewise continuity conditions. We shall begin by reviewing results on gradient estimates in this setting from the past two decades.
Bonnetier and Vogelius [@bv2000] first considered divergence form second-order elliptic equations with piecewise constant coefficients: $$\label{homoscalar} D_\alpha(a(x)D_\alpha u)=0\quad\mbox{in}~\mathcal{D},$$ where $a(x)$ is given by $$\begin{aligned} a(x)=a_0\mathbbm{1}_{\mathcal{D}_1\cup\mathcal{D}_2}+\mathbbm{1}_{\mathcal{D}\setminus(\mathcal{D}_1\cup\mathcal{D}_2)},\end{aligned}$$ with $0<a_0<\infty$, where $\mathbbm{1}_{\bullet}$ is the indicator function. They proved that the gradient of the solution is bounded when the subdomains are circular touching fibers of comparable radii. Li and Vogelius [@lv2000] studied general elliptic equations in divergence form: $$%\label{elliptic} D_\alpha(A^{\alpha\beta}D_\beta u)=D_\alpha f^\alpha\quad\mbox{in}~\mathcal{D},$$ where the coefficients $A^{\alpha\beta}$ and the data $f^\alpha$ are $C^{\delta}$ ($\delta\in(0,1)$) up to the boundary in each subdomain with $C^{1,\mu}$ boundary, $\mu\in(0,1]$, but may have jump discontinuities across the boundaries of the subdomains. They established global Lipschitz and piecewise $C^{1,\delta'}$ estimates of the solution with $\delta'\in(0,\min\{\delta,\frac{\mu}{d(\mu+1)}\}]$. This result was extended to elliptic systems under the same conditions by Li and Nirenberg [@ln2003], and the range of $\delta'$ was improved to $\delta'\in(0,\min\{\delta,\frac{\mu}{2(\mu+1)}\}]$. Dong and Xu [@dx2019] further relaxed the range to $\delta'\in(0,\min\{\delta,\frac{\mu}{\mu+1}\}]$ by using a completely different argument from [@lv2000; @ln2003]. Notably, the estimates in [@lv2000; @ln2003; @dx2019] are independent of the distances between subdomains. For more related results, we refer the reader to [@ckvc86; @cf2012; @d2012; @xb13; @z2021] and the references therein. The estimates were extended to the case of parabolic equations and systems with piecewise continuous coefficients [@fknn13; @ll2017; @dx2021], and to stationary Stokes systems with piecewise Dini mean oscillation coefficients [@cdx2022].

Now let us turn to higher regularity of solutions to partial differential equations and systems with piecewise smooth coefficients. Significant progress has been made on the second-order elliptic equation [\[homoscalar\]](#homoscalar){reference-type="eqref" reference="homoscalar"} with piecewise constant coefficients. By using conformal mappings, Li and Vogelius [@lv2000] proved that the solutions to [\[homoscalar\]](#homoscalar){reference-type="eqref" reference="homoscalar"} are piecewise smooth up to interfacial boundaries when the subdomains $\mathcal{D}_1$ and $\mathcal{D}_2$ are two touching unit disks in $\mathbb R^2$ and $\mathcal{D}$ is a disk $B_{R_0}$ with sufficiently large $R_0$. Dong and Zhang [@dz2016] removed the requirement that $R_0$ be sufficiently large with the help of the construction of Green's function. Dong and Li [@dl2019] then applied the Green function method to obtain higher derivative estimates, demonstrating the explicit dependence on the coefficients and on the distance between the interfacial boundaries of the inclusions. Related results about higher derivative estimates with circular inclusions were investigated in [@jk2023; @dy2023]. It is worth noting that in all these works, the dimension is always assumed to be two and the inclusions are circular. To the best of our knowledge, there is no corresponding result available for Stokes systems.
Recently, Dong and Xu [@dx2022] tackled more general divergence form parabolic systems in any dimension with piecewise Hölder continuous coefficients and data in a bounded domain consisting of a finite number of cylindrical subdomains. By using a completely different method from those in [@lv2000; @dz2016; @dl2019; @jk2023; @dy2023], they established piecewise higher derivative estimates for weak solutions to such parabolic systems, and the estimates are independent of the distance between the interfaces. This result also implies piecewise higher regularity for the corresponding elliptic systems, addressing the open question proposed in [@lv2000]. In this paper, we study higher regularity for solutions to the Stokes system [\[stokes\]](#stokes){reference-type="eqref" reference="stokes"}, closely following the scheme in [@dx2022]. However, the presence of the pressure term $p$ introduces added difficulties in the proofs below.

To state our main result precisely, we first give the following assumption imposed on the domain $\mathcal{D}$.

**Assumption 1**. The bounded domain $\mathcal{D}$ in $\mathbb R^d$ contains $M$ disjoint subdomains $\mathcal{D}_{j},j=1,\ldots,M$, and the interfacial boundaries are $C^{s+1,\mu}$, where $s\in\mathbb N$ and $\mu\in(0,1]$. We also assume that any point $x\in\mathcal{D}$ belongs to the boundaries of at most two of the $\mathcal{D}_{j}$'s.

For $0<\delta<1$, we denote the $C^{\delta}$ Hölder semi-norm by $$[u]_{C^{\delta}(\mathcal{D})}:=\sup_{\substack{x,y\in \mathcal{D}\\ x\neq y}} \frac{|u(x)-u(y)|}{|x-y|^\delta},$$ and the $C^{\delta}$ norm by $$|u|_{\delta;\mathcal{D}}:=[u]_{C^{\delta}(\mathcal{D})}+|u|_{0;\mathcal{D}},\quad \text{where}\,\,|u|_{0;\mathcal{D}}=\sup_{\mathcal{D}}|u|.$$ By $C^\delta(\mathcal{D})$ we denote the set of all bounded measurable functions $u$ satisfying $[u]_{C^{\delta}(\mathcal{D})}<\infty$. The function spaces $C^{s,\delta}(\mathcal{D}),s\in\mathbb N,$ are defined accordingly. For $\varepsilon>0$ small, we set $$\mathcal{D}_{\varepsilon}:=\{x\in \mathcal{D}: \mbox{dist}(x,\partial \mathcal{D})>\varepsilon\}.$$

**Assumption 2**. The coefficients $A^{\alpha\beta}$ are bounded and satisfy the strong ellipticity condition, that is, there exists $\nu\in (0,1)$ such that $$%\label{strongellp} |A^{\alpha\beta}(x)|\le \nu^{-1}, \quad \sum_{\alpha,\beta=1}^d A^{\alpha\beta}(x)\xi_\beta\cdot \xi_\alpha\ge \nu \sum_{\alpha=1}^d |\xi_\alpha|^2$$ for any $x\in \mathbb{R}^d$ and $\xi_\alpha\in \mathbb{R}^d$, $\alpha\in \{1,\ldots, d\}$. Moreover, $A^{\alpha\beta}$, ${\bf f}^{\alpha}$, and $g$ are assumed to be of class $C^{s,\delta}(\mathcal{D}_{\varepsilon}\cap\overline{{\mathcal{D}}_{j}}),j=1,\ldots,M$, where $s\in\mathbb N$ and $\delta\in(0,1)$.

Here is our main result.

**Theorem 3**. *Let $\varepsilon\in (0,1)$ and $q\in(1,\infty)$. Assume that $\mathcal{D}$ satisfies Assumption [Assumption 1](#assumpdomain){reference-type="ref" reference="assumpdomain"}, and $A^{\alpha\beta}$, ${\bf f}^{\alpha}$, and $g$ satisfy Assumption [Assumption 2](#assump){reference-type="ref" reference="assump"}. Let $({\bf u},p)\in W^{1,q}(\mathcal{D})^d\times L^q(\mathcal{D})$ be a weak solution to [\[stokes\]](#stokes){reference-type="eqref" reference="stokes"} in $\mathcal{D}$.
Then $({\bf u},p)\in C^{s+1,\delta_{\mu}}( \mathcal{D}_{\varepsilon}\cap\overline{{\mathcal{D}}_{j_0}})^d\times C^{s,\delta_{\mu}}( \mathcal{D}_{\varepsilon}\cap\overline{{\mathcal{D}}_{j_0}})$ and it holds that $$\begin{aligned} |{\bf u}|_{s+1,\delta_{\mu};\mathcal{D}_{\varepsilon}\cap\overline{{\mathcal{D}}_{j_0}}}+|p|_{s,\delta_{\mu};\mathcal{D}_{\varepsilon}\cap\overline{{\mathcal{D}}_{j_0}}}\leq N\Big(\|D{\bf u}\|_{L^{1}(\mathcal{D})}+\|p\|_{L^{1}(\mathcal{D})}+\sum_{j=1}^{M}|{\bf f}^\alpha|_{s,\delta;\overline{\mathcal{D}_{j}}}+\sum_{j=1}^{M}|g|_{s,\delta;\overline{\mathcal{D}_{j}}}\Big),\end{aligned}$$ where $j_0=1,\ldots,M$, $\delta_{\mu}=\min\big\{\frac{1}{2},\mu,\delta\big\}$, $N$ depends on $d$, $M$, $q$, $\nu$, $\varepsilon$, $|A|_{s,\delta;\overline{\mathcal{D}_{j}}}$, and the $C^{s+1,\mu}$ characteristic of $\mathcal{D}_{j}$.* *Remark 4*. The piecewise Hölder-regularity of $(D{\bf u},p)$ for $s=0$ was proved in [@cdx2022] with $\delta_{\mu}=\min\{\delta,\frac{\mu}{\mu+1}\}$. As mentioned in [@cdx2022 p. 3616], the results in Theorem [Theorem 3](#Mainthm){reference-type="ref" reference="Mainthm"} can also be applied to anisotropic Stokes systems in the form $$\begin{aligned} %\label{anistokes} \begin{cases} \operatorname{div} (\tau \mathcal{S}{\bf u})+D p=D_{\alpha}{\bf f}^{\alpha}\\ \operatorname{div}{\bf u}=g \end{cases}\,\,\mbox{in }~\mathcal{D},\end{aligned}$$ where $\tau=\tau(x)$ is a piecewise $C^{s,\delta}$ scalar function satisfying $\nu\le \tau\le \nu^{-1}$ and $\mathcal{S}{\bf u}=\frac{1}{2}(D{\bf u}+(D{\bf u})^{\top})$ is the rate of deformation tensor or strain tensor. The remainder of this paper is structured as follows: Section [2](#secpreliminaries){reference-type="ref" reference="secpreliminaries"} provides an overview of the notation, vector fields, and coordinate systems introduced in [@dx2022], along with several auxiliary results. In Section [3](#newsystem){reference-type="ref" reference="newsystem"}, we derive a new Stokes system for the case when $s=1$. Sections [4](#auxilemma){reference-type="ref" reference="auxilemma"} and [5](#bddestimate){reference-type="ref" reference="bddestimate"} contain the key components of the proof of Theorem [Theorem 3](#Mainthm){reference-type="ref" reference="Mainthm"} with $s=1$. It is important to note that we encounter challenges due to the presence of the pressure term $p$, as exemplified in the proof of Lemma [Lemma 10](#lemmaup){reference-type="ref" reference="lemmaup"} below. Finally, in Section [6](#prfprop){reference-type="ref" reference="prfprop"}, we conclude the proof of Theorem [Theorem 3](#Mainthm){reference-type="ref" reference="Mainthm"} with $s=1$ by utilizing the results from Sections [4](#auxilemma){reference-type="ref" reference="auxilemma"} and [5](#bddestimate){reference-type="ref" reference="bddestimate"}. In Section [7](#general){reference-type="ref" reference="general"}, we extend the proof to cover Theorem [Theorem 3](#Mainthm){reference-type="ref" reference="Mainthm"} for general $s\geq2$. # Preliminaries {#secpreliminaries} In this section, we first review the notation, vector fields, and coordinate systems in [@dx2022]. Then we give some auxiliary lemmas which will be used in the proof of our results. ## Notation, vector fields, and coordinate systems We use $x=(x',x^d)$ to denote a generic point in the Euclidean space $\mathbb{R}^d$, where $d\ge 2$ and $x'=(x^1,\ldots, x^{d-1})\in \mathbb{R}^{d-1}$. 
For $r>0$, we denote $$B_{r}(x)=\{y\in\mathbb R^{d}: |y-x|<r\},\quad B'_{r}(x')=\{y'\in\mathbb R^{d-1}: |y'-x'|<r\}.$$ We often write $B_r$ and $B'_r$ for $B_r(0)$ and $B'_r(0)$, respectively. For $q\in (0, \infty]$, we define $$L_0^q(\mathcal{D})=\{f\in L^q(\mathcal{D}): (f)_{\mathcal{D}}=0\},$$ where $(f)_{\mathcal{D}}$ is the average of $f$ over $\mathcal{D}$: $$(f)_\mathcal{D}=\fint_{\mathcal{D}} f\, dx=\frac{1}{|\mathcal{D}|}\int_{\mathcal{D}} f \,dx.$$ We denote by $W^{1,q}(\mathcal{D})$ the usual Sobolev space and by $W^{1,q}_0(\mathcal{D})$ the completion of $C^\infty_0(\mathcal{D})$ in $W^{1,q}(\mathcal{D})$, where $C^\infty_0(\mathcal{D})$ is the set of all infinitely differentiable functions with compact support in $\mathcal{D}$.

For simplicity, we take $\mathcal{D}$ to be $B_1$. By suitable rotation and scaling, we may suppose that a finite number of subdomains lie in $B_{1}$ and that they can be represented by $$x^{d}=h_{j}(x'),\quad x'\in B'_{1},~j=1,\ldots,m(<M),$$ where $$%\label{eqhj} -1<h_{1}(x')<\dots<h_{m}(x')<1,$$ $h_{j}(x')\in C^{s+1,\mu}(B'_{1})$ with $s\in\mathbb N$. Set $h_{0}(x')=-1$ and $h_{m+1}(x')=1$. Then we have $m+1$ regions: $$\mathcal{D}_{j}:=\{x\in \mathcal{D}: h_{j-1}(x')<x^{d}<h_{j}(x')\},\quad1\leq j\leq m+1.$$ The interfacial boundary is denoted by $\Gamma_j:=\{x^d=h_j(x')\}$, and the normal direction of $\Gamma_j$ is given by $$\label{normal} {\bf n}_j:=(n_j^1,\ldots,n_j^d)=\frac{(-D_{x'}h_j(x'),1)^\top}{(1+|D_{x'}h_j(x')|^2)^{1/2}}\in \mathbb{R}^{d},\quad j=1,\ldots,m.$$ As in [@dx2022 Section 2.3], we fix a coordinate system such that $0\in\mathcal{D}_{i_0}$ for some $i_0\in\{1,\ldots,m+1\}$, the closest point on $\partial\mathcal{D}_{i_0}$ to the origin is $x_{i_0}=(0',h_{i_0}(0'))$, and $\nabla_{x'}h_{i_0}(0')=0'$. In this coordinate system, we shall use $x=(x',x^d)$ and $D_{x}$ to denote the point and the derivatives, respectively.

The following vector field was introduced in [@dx2022]. For the completeness of the paper and the reader's convenience, we review it here. For each $k=1,\ldots,d-1$, we define a vector field $\ell^{k,0}:\mathbb{R}^{d}\to \mathbb{R}^d$ near the center point $0$ of $B_1$ as follows: $\ell^{k,0}=(0,\ldots,0,1,0,\ldots,\ell_d^{k,0})$, where $$\ell^{k,0}_i=\delta_{ki},\quad i=1,\dots,d-1,$$ $\delta_{ki}$ are Kronecker delta symbols, and $$%\label{elld} \ell^{k,0}_d= \begin{cases} D_kh_m(x'),\quad&x^d\ge h_m,\\ \frac {x^d-h_{j-1}}{h_{j}-h_{j-1}}D_kh_j(x')+\frac {h_{j}-x^d}{h_{j}-h_{j-1}} D_kh_{j-1}(x'),\quad&h_{j-1}\le x^d< h_j,~j=1,\dots,m,\\ D_kh_1(x'),\quad&x^d< h_1. \end{cases}$$ Here, $D_{k}:=D_{x_k}$. One can see that $\ell_d^{k,0}=D_kh_{j}(x')$ on $\Gamma_j$ and thus $\ell^{k,0}$ is in a tangential direction. Moreover, it follows from $h_j\in C^{s+1,\mu}$ that $\ell^{k,0}$ is $C^{s,\mu}$ on $\Gamma_j$. Introduce the projection operator defined by $$\mbox{proj}_{a}b=\frac{\langle a,b\rangle}{\langle a,a\rangle}a,$$ where $\langle a,b\rangle$ denotes the inner product of the vectors $a$ and $b$, and $\langle a,a\rangle=|a|^{2}$. By using the Gram-Schmidt process: $$\label{defell} \begin{split} \tilde\ell^{1}&=\ell^{1,0}, \quad\ell^1={\tilde\ell^{1}}/{|\tilde\ell^{1}|},\\ \tilde\ell^{2}&=\ell^{2,0} -\mbox{proj}_{\ell^{1}}\ell^{2,0}, \quad\ell^2={\tilde\ell^{2}}/{|\tilde\ell^{2}|},\\ &\vdots\\ \tilde\ell^{d-1}&=\ell^{d-1,0}-\sum_{j=1}^{d-2} \mbox{proj}_{\ell^{j}}\ell^{d-1,0},\quad\ell^{d-1}= {\tilde\ell^{d-1}}/{|\tilde\ell^{d-1}|}, \end{split}$$ we obtain vector fields $\ell^{1},\ldots,\ell^{d-1}$ that are mutually orthogonal unit vectors.
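For instance (included only as an illustration), in the simplest case $d=2$ there is a single field $\ell^{1,0}=(1,\ell^{1,0}_{2})$, and the process [\[defell\]](#defell){reference-type="eqref" reference="defell"} reduces to one normalization, $$\ell^{1}=\frac{\tilde\ell^{1}}{|\tilde\ell^{1}|}=\frac{(1,\ell^{1,0}_{2})^{\top}}{\big(1+(\ell^{1,0}_{2})^{2}\big)^{1/2}},$$ so that on each interface $\Gamma_j$, where $\ell^{1,0}_{2}=D_1h_j(x')$, the field $\ell^{1}$ is the unit tangent vector to $\Gamma_j$.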
Now we define the corresponding unit normal direction, which is orthogonal to $\ell^{k,0}$, $k=1,\ldots,d-1$ (and thus also to $\ell^k$): $$\label{defnorm} {\bf n}(x)=(n^1,\ldots,n^d)^{\top}=\frac{(-\ell_d^{1,0},\ldots,-\ell_d^{d-1,0},1)^{\top}}{\big(1+\sum_{k=1}^{d-1}(\ell_d^{k,0})^2\big)^{1/2}}.$$ Obviously, ${\bf n}(x)={\bf n}_j$ on $\Gamma_j$. For any point $x_0\in B_{3/4}\cap \mathcal{D}_{j_{0}}$, $j_0=1,\ldots,m+1$, suppose the closest point on $\partial \mathcal{D}_{j_{0}}$ to $x_0$ is $y_0:=(y'_0,h_{j_{0}}(y'_0))$. On the surface $\Gamma_{j_0}$, the unit normal vector at $(y'_0,h_{j_0}(y'_0))$ is $$\label{defny0} {\bf n}_{y_0}=(n^1_{y_0},\ldots, n_{y_0}^d)^{\top}= \frac{\big(-\nabla_{x'}h_{j_0}(y'_0),1\big)^{\top}}{\big(1+|\nabla_{x'}h_{j_0}(y'_0)|^{2}\big)^{1/2}}.$$ The corresponding tangential vectors are defined by $$\label{deftauk} \tau_k=\ell^{k}(y_0),\quad k=1,\ldots,d-1,$$ where $\ell^{k}$ is defined in [\[defell\]](#defell){reference-type="eqref" reference="defell"}. In the coordinate system associated with $x_0$, with the axes parallel to ${\bf n}_{y_0}$ and $\tau_k,k=1,\ldots,d-1$, we will use $y=(y',y^d)$ and $D_{y}$ to denote the point and the derivatives, respectively. Moreover, we have $y=\Lambda x$, where $$\Lambda=(\Lambda^1,\ldots,\Lambda^d)^\top=(\Lambda^{\alpha\beta})_{\alpha,\beta=1}^{d}$$ is a $d\times d$ matrix representing the linear transformation from the coordinate system associated with $0$ to the coordinate system associated with $x_0$, and $\tau_k=(\Gamma^{1k},\dots,\Gamma^{dk})^\top,k=1,\ldots,d-1$, ${\bf n}_{y_0}=(\Gamma^{1d},\dots,\Gamma^{dd})^\top$, where $\Gamma=\Lambda^{-1}$. Finally, we introduce $m+1$ "strips" (in the $y$-coordinates) $$\Omega_j:=\{y\in\mathcal{D}: y_{j-1}^d<y^{d}<y_j^d\},\quad j=1,\ldots,m+1,$$ where $y_j:=(\Lambda'y_0,y_j^d)\in \Gamma_j$ and $\Lambda'=(\Lambda^1,\ldots,\Lambda^{d-1})^\top$. For any $0<r\leq1/4$, we have $$\label{volume} |(\mathcal{D}_{j}\setminus\Omega_{j})\cap (B_{r}(\Lambda x_0))|\leq Nr^{d+1/2},\quad j=1,\ldots,m+1.$$ See, for instance, [@dx2019 Lemma 2.3].

## Auxiliary results

Here we collect some elementary results. The following weak type-$(1,1)$ estimate is almost the same as [@cd2019 Lemma 3.4].

**Lemma 5**. *Let $q\in(1,\infty)$. Let $({\bf v},\pi)\in W_0^{1,q}(B_{r}(\Lambda x_0))^{d}\times L_0^q(B_{r}(\Lambda x_0))$ be a weak solution to $$\begin{aligned} \begin{cases} D_{\alpha}(\overline{{\mathcal A}^{\alpha\beta}}(y^{d})D_{\beta}{\bf v})+D\pi={\boldsymbol{\mathfrak f}}\mathbbm{1}_{B_{r/2}(\Lambda x_0)}+D_\alpha({\bf F}^\alpha\mathbbm{1}_{B_{r/2}(\Lambda x_0)})\\ \operatorname{div}{\bf v}=\mathcal H\mathbbm{1}_{B_{r/2}(\Lambda x_0)}-(\mathcal H\mathbbm{1}_{B_{r/2}(\Lambda x_0)})_{B_{r}(\Lambda x_0)} \end{cases}\,\, \mbox{in }B_{r}(\Lambda x_0),\end{aligned}$$ where ${\boldsymbol{\mathfrak f}}, {\bf F}^\alpha, \mathcal H\in L^{q}(B_{r/2}(\Lambda x_0))$. Then for any $t>0$, we have $$\begin{aligned} |\{y\in B_{r/2}(\Lambda x_0): |D{\bf v}(y)|+|\pi(y)|>t\}|\leq\frac{N}{t}\int_{B_{r/2}(\Lambda x_0)}\left(|{\bf F}^\alpha|+|\mathcal H|+r|{\boldsymbol{\mathfrak f}}|\right)\, dy,\end{aligned}$$ where $N=N(d,q,\nu)$.*

**Lemma 6**. *[@cdx2022 Theorem 2.4][\[lemlocbdd\]]{#lemlocbdd label="lemlocbdd"} Let $\varepsilon\in (0,1)$, $q\in(1,\infty)$, $A^{\alpha\beta}$, ${\bf f}^{\alpha}$, and $g$ satisfy Assumption [Assumption 2](#assump){reference-type="ref" reference="assump"} with $s=0$.
Let $({\bf u},p)\in W^{1,q}(B_1)^d\times L^q(B_1)$ be a weak solution to [\[stokes\]](#stokes){reference-type="eqref" reference="stokes"} in $B_1$. Then $({\bf u},p)\in C^{1,\delta'}(B_{1-\varepsilon}\cap\overline{{\mathcal{D}}_{j_0}})^d\times C^{\delta'}(B_{1-\varepsilon}\cap\overline{{\mathcal{D}}_{j_0}})$ and it holds that $$\begin{aligned} %\label{est Du''} &\|D{\bf u}\|_{L^{\infty}(B_{1/4})}+|{\bf u}|_{1,\delta'; B_{1-\varepsilon}\cap\overline{{\mathcal{D}}_{j_0}}}+\|p\|_{L^{\infty}(B_{1/4})}+|p|_{\delta'; B_{1-\varepsilon}\cap\overline{{\mathcal{D}}_{j_0}}}\\ &\leq N\big(\|D{\bf u}\|_{L^{1}(B_1)}+\|p\|_{L^{1}(B_1)}+\sum_{j=1}^{M}|{\bf f}^\alpha|_{\delta;\overline{\mathcal{D}_{j}}}+\sum_{j=1}^{M}|g|_{\delta;\overline{\mathcal{D}_{j}}}\big),\end{aligned}$$ where $j_0=1,\ldots,m+1$, $\delta'=\min\{\delta,\frac{\mu}{1+\mu}\}$, $N>0$ is a constant depending only on $d,m,q,\nu,\varepsilon$, $|A|_{\delta;\overline{\mathcal{D}_{j}}}$, and the $C^{1,\mu}$ norm of $h_j$.* # A new Stokes system {#newsystem} This section is devoted to deriving a new Stokes system in $B_{3/4}$ as follows: $$\begin{aligned} \label{eqtildeu} \begin{cases} D_\alpha(A^{\alpha\beta}D_\beta \tilde {\bf u})+D\tilde p={\bf f}+ D_\alpha \tilde {\bf f}^\alpha,\\ \operatorname{div}\tilde {\bf u}=D_\ell g+D\ell_i D_i{\bf u}-\sum_{j=1}^{m+1}\mathbbm{1}_{_{\mathcal{D}_j}}D\ell_{i,j}D_i{\bf u}(P_jx_0)-\sum_{j=1}^{m+1}(\mathbbm{1}_{_{\mathcal{D}_j^c}}D \tilde\ell_{i,j}D_i{\bf u}(P_jx_0))_{B_1}, \end{cases}\end{aligned}$$ where $\tilde {\bf u}$ and $\tilde p$ are defined in [\[def-tildeu\]](#def-tildeu){reference-type="eqref" reference="def-tildeu"}, ${\bf f}$ and $\tilde {\bf f}^\alpha$ are defined in [\[defg\]](#defg){reference-type="eqref" reference="defg"} and [\[def-tildefalpha\]](#def-tildefalpha){reference-type="eqref" reference="def-tildefalpha"}, respectively, and $\tilde\ell_{,j}:=(\tilde\ell_{1,j},\dots,\tilde\ell_{d,j})$ is a smooth extension of $\ell|_{\mathcal{D}_j}$ to $\cup_{k=1,k\neq j}^{m+1}\mathcal{D}_k$. To prove [\[eqtildeu\]](#eqtildeu){reference-type="eqref" reference="eqtildeu"}, we first use the definition of weak solutions to find that the problem [\[stokes\]](#stokes){reference-type="eqref" reference="stokes"} is equivalent to a homogeneous transmission problem $$\begin{aligned} \label{eq2.57} \begin{cases} D_\alpha(A^{\alpha\beta}D_\beta {\bf u})+Dp=D_\alpha {\bf f}^\alpha \qquad \text{in}\,\,\bigcup_{j=1}^{m+1}\mathcal{D}_j, \\ {\bf u}|_{\Gamma_j}^+={\bf u}|_{\Gamma_j}^-,\quad[n^\alpha_j(A^{\alpha\beta} D_\beta {\bf u} -{\bf f}^\alpha)+p{\bf n}_j]_{\Gamma_j}=0,\quad j=1,\ldots,m,\\ \operatorname{div}{\bf u}=g\qquad \text{in}\,\,\bigcup_{j=1}^{m+1}\mathcal{D}_j, \end{cases}\end{aligned}$$ where $$\begin{aligned} &[n_j^\alpha(A^{\alpha\beta} D_\beta {\bf u} -{\bf f}^\alpha)+p{\bf n}_j]_{\Gamma_j}\\ &:=(n_j^\alpha(A^{\alpha\beta} D_\beta {\bf u} -{\bf f}^\alpha)+p{\bf n}_j)|_{\Gamma_j}^+-(n_j^\alpha(A^{\alpha\beta} D_\beta {\bf u} -{\bf f}^\alpha)+p{\bf n}_j)|_{\Gamma_j}^-,\end{aligned}$$ ${\bf n}_j$ is the unit normal vector on $\Gamma_j$ defined by [\[normal\]](#normal){reference-type="eqref" reference="normal"}, ${\bf u}|_{\Gamma_j}^+$ and ${\bf u}|_{\Gamma_j}^-$ ($n_j^\alpha A^{\alpha\beta} D_\beta {\bf u} |_{\Gamma_j}^+$ and $n_j^\alpha A^{\alpha\beta} D_\beta {\bf u} |_{\Gamma_j}^-$) are the left and right limits of ${\bf u}$ (its conormal derivatives) on $\Gamma_j$, respectively, $j=1,\ldots,m$. 
Here and throughout this paper the superscript $\pm$ indicates the limit from outside and inside the domain, respectively. Taking the directional derivative of [\[eq2.57\]](#eq2.57){reference-type="eqref" reference="eq2.57"} along the direction $\ell:=\ell^k$, $k=1,\ldots,d-1$, we get the following inhomogeneous transmission problem $$\begin{aligned} \label{eqsecond} \begin{cases} D_\alpha(A^{\alpha\beta}D_\beta D_\ell {\bf u})+DD_\ell p={\bf f}+ D_\alpha {\bf f}^{\alpha,1} \quad\text{in}\,\,\bigcup_{j=1}^{m+1}\mathcal{D}_j, \\ D_\ell {\bf u}|_{\Gamma_j}^+=D_\ell {\bf u}|_{\Gamma_j}^-,\quad[n_j^\alpha (A^{\alpha\beta} D_\beta D_\ell {\bf u}-{\bf f}^{\alpha,1})+{\bf n}_jD_\ell p]_{\Gamma_j}=\tilde {\bf h}_j,~j=1,\dots,m,\\ \operatorname{div}(D_\ell {\bf u})=D_\ell g+D\ell_i D_i{\bf u}\qquad \text{in}\,\,\bigcup_{j=1}^{m+1}\mathcal{D}_j, \end{cases}\end{aligned}$$ where $$\label{defg} \begin{split} {\bf f}&=(A^{\alpha\beta} D_\beta D{\bf u}+DA^{\alpha\beta}D_\beta {\bf u}-D{\bf f}^\alpha)D_\alpha\ell+D\ell Dp,\\ {\bf f}^{\alpha,1}&=D_\ell {\bf f}^\alpha+A^{\alpha\beta}(D_\beta \ell_i) D_i{\bf u}-D_{\ell} A^{\alpha\beta}D_\beta {\bf u}, \end{split}$$ and $$\begin{aligned} \label{deftildeh} \tilde {\bf h}_j=[D_\ell n_j^\alpha (-A^{\alpha\beta}D_\beta {\bf u}+{\bf f}^\alpha)-pD_\ell {\bf n}_j]_{\Gamma_j}.\end{aligned}$$ From [\[normal\]](#normal){reference-type="eqref" reference="normal"}, it follows that $D_\ell {\bf n}_j$ is a tangential direction on $\Gamma_j$ and thus we may write $\tilde {\bf h}_j=\tilde {\bf h}_j(x')$ and $D_{\ell}{\bf n}_j\in C^{\mu}$. Now by adding a term $$\sum_{j=1}^{m}D_d\Big(\mathbbm{1}_{x^d>h_j(x')} {\tilde {\bf h}_j(x')}/{n^d_j(x')}\Big)$$ to the first equation in [\[eqsecond\]](#eqsecond){reference-type="eqref" reference="eqsecond"}, where $\mathbbm{1}_{\bullet}$ is the indicator function, we can get rid of $\tilde {\bf h}_j$ in the second equation of [\[eqsecond\]](#eqsecond){reference-type="eqref" reference="eqsecond"} and reduce the problem [\[eqsecond\]](#eqsecond){reference-type="eqref" reference="eqsecond"} to a homogeneous transmission problem: $$\begin{aligned} \label{homosecond0} \begin{cases} D_\alpha(A^{\alpha\beta}D_\beta D_\ell {\bf u})+DD_\ell p={\bf f}+ D_\alpha {\bf f}^{\alpha,2} \quad\text{in}\,\,\bigcup_{j=1}^{m+1}\mathcal{D}_j, \\ D_\ell {\bf u}|_{\Gamma_j}^+=D_\ell {\bf u}|_{\Gamma_j}^-,\quad[n_j^\alpha (A^{\alpha\beta} D_\beta D_\ell {\bf u}-{\bf f}^{\alpha,2})+{\bf n}_jD_\ell p]_{\Gamma_j}=0,\\ \operatorname{div}(D_\ell {\bf u})=D_\ell g+D\ell_i D_i{\bf u}\qquad \text{in}\,\,\bigcup_{j=1}^{m+1}\mathcal{D}_j, \end{cases}\end{aligned}$$ where $$\begin{aligned} {\bf f}^{\alpha,2}:={\bf f}^{\alpha,1}+\delta_{\alpha d}\sum_{j=1}^{m}\mathbbm{1}_{x^d>h_j(x')} \frac{\tilde {\bf h}_j(x')}{n^d_j(x')},\end{aligned}$$ $\delta_{\alpha d}=1$ if $\alpha=d$, and $\delta_{\alpha d}=0$ if $\alpha\neq d$. Note that $D\ell$ is singular at any point where two interfaces touch or are very close to each other. 
To cancel out this singularity, for $x_{0}\in B_{3/4}\cap \overline{\mathcal{D}_{j_{0}}}$, we consider $$\label{tildeu} {\bf u}_{_\ell}:={\bf u}_{_\ell}(x;x_0)=D_{\ell}{\bf u}-{\bf u}_0,$$ where $$\begin{aligned} \label{defu0} {\bf u}_0:={\bf u}_0(x;x_0)=\sum_{j=1}^{m+1}\tilde\ell_{i,j}D_i{\bf u}(P_jx_0),\end{aligned}$$ $$\begin{aligned} \label{Pjx} P_jx_0=\begin{cases} x_0&\quad\mbox{for}\quad j=j_0,\\ (x'_0,h_j(x'_0))&\quad\mbox{for}\quad j<j_0,\\ (x'_0,h_{j-1}(x'_0))&\quad\mbox{for}\quad j>j_0, \end{cases}\end{aligned}$$ and the vector field $\tilde\ell_{,j}:=(\tilde\ell_{1,j},\dots,\tilde\ell_{d,j})$ is a smooth extension of $\ell|_{\mathcal{D}_j}$ to $\cup_{k=1,k\neq j}^{m+1}\mathcal{D}_k$. Then it follows from [\[homosecond0\]](#homosecond0){reference-type="eqref" reference="homosecond0"} that $$\begin{aligned} \label{homosecond} \begin{cases} D_\alpha(A^{\alpha\beta}D_\beta {\bf u}_{_\ell})+DD_\ell p={\bf f}+ D_\alpha {\bf f}^{\alpha,3} &\quad\text{in}\,\,\bigcup_{j=1}^{m+1}\mathcal{D}_j, \\ [n_j^\alpha (A^{\alpha\beta} D_\beta {\bf u}_{_\ell}-{\bf f}^{\alpha,3})+{\bf n}_jD_\ell p]_{\Gamma_j}=0,~\quad j=1,\dots,m,\\ \operatorname{div}{\bf u}_{_\ell}=D_\ell g+D\ell_i D_i{\bf u}-\sum_{j=1}^{m+1}D \tilde\ell_{i,j}D_i{\bf u}(P_jx_0)&\quad \text{in}\,\,\bigcup_{j=1}^{m+1}\mathcal{D}_j, \end{cases}\end{aligned}$$ where $$\begin{aligned} \label{tildef1} {\bf f}^{\alpha,3}&:={\bf f}^{\alpha,3}(x;x_0)={\bf f}^{\alpha,2}-A^{\alpha\beta}\sum_{j=1}^{m+1}D_\beta \tilde\ell_{i,j}D_i{\bf u}(P_jx_0)\nonumber\\ &=D_\ell {\bf f}^\alpha-D_{\ell} A^{\alpha\beta}D_\beta {\bf u}+A^{\alpha\beta}\big(D_\beta \ell_i D_i {\bf u}-\sum_{j=1}^{m+1}D_\beta \tilde\ell_{i,j}D_i{\bf u}(P_jx_0)\big)\nonumber\\ &\quad+\delta_{\alpha d}\sum_{j=1}^{m}\mathbbm{1}_{x^d>h_j(x')} (n^d_j(x'))^{-1}\tilde {\bf h}_j(x').\end{aligned}$$ Note that the mean oscillation of $$A^{\alpha\beta}\big(D_\beta \ell_i D_i {\bf u}-\sum_{j=1}^{m+1}D_\beta \tilde\ell_{i,j}D_i{\bf u}(P_jx_0)\big)$$ in [\[tildef1\]](#tildef1){reference-type="eqref" reference="tildef1"} is only bounded. For this, we choose a cut-off function $\zeta\in C_{0}^\infty(B_1)$ satisfying $$0\leq\zeta\leq1,\quad\zeta\equiv 1~\mbox{in}~B_{3/4},\quad|D\zeta|\leq8.$$ Denote $$\label{mathcalA} \tilde A^{\alpha\beta}:=\zeta A^{\alpha\beta}+\nu(1-\zeta)\delta_{\alpha\beta}\delta_{ij}.$$ For $j=1,\ldots,m+1$, denote $\mathcal{D}_j^c:=\mathcal{D}\setminus\mathcal{D}_j$. From [@cl2017 Corollary 5.3], it follows that there exists $({\boldsymbol{\mathfrak u}}_j(\cdot;x_0),{\mathfrak \pi}_j(\cdot;x_0))\in W^{1,q}(B_1)^d\times L_0^q(B_1)$ such that $$\begin{aligned} \label{eq-rmu} \begin{cases} D_{\alpha}(\tilde A^{\alpha\beta}D_\beta {\boldsymbol{\mathfrak u}}_j(\cdot;x_0))+D{\mathfrak \pi}_j(\cdot;x_0)=-D_{\alpha}(\mathbbm{1}_{_{\mathcal{D}_j^c}}A^{\alpha\beta}D_\beta \tilde\ell_{i,j}D_i{\bf u}(P_jx_0))&\,\, \mbox{in}~B_1,\\ \operatorname{div}{\boldsymbol{\mathfrak u}}_j(\cdot;x_0)=-\mathbbm{1}_{_{\mathcal{D}_j^c}}D \tilde\ell_{i,j}D_i{\bf u}(P_jx_0)+(\mathbbm{1}_{_{\mathcal{D}_j^c}}D \tilde\ell_{i,j}D_i{\bf u}(P_jx_0))_{B_1}&\,\, \mbox{in}~B_1,\\ {\boldsymbol{\mathfrak u}}_j(\cdot;x_0)=0&\,\, \mbox{on}~\partial B_1, \end{cases}\end{aligned}$$ where $1<q<\infty$. 
Moreover, by using the fact that $\mathbbm{1}_{_{\mathcal{D}_j^c}}D_\beta \tilde\ell_{,j}$ is piecewise $C^{\mu}$ and the local boundedness estimate of $D{\bf u}$ in Lemma [\[lemlocbdd\]](#lemlocbdd){reference-type="ref" reference="lemlocbdd"}, it holds that $$\begin{aligned} \label{rmuj} &\|{\boldsymbol{\mathfrak u}}_j(\cdot;x_0)\|_{W^{1,q}(B_1)}+\|{\mathfrak \pi}_j(\cdot;x_0)\|_{L^{q}(B_1)}\nonumber\\ &\leq N\|\mathbbm{1}_{_{\mathcal{D}_j^c}}A^{\alpha\beta}D_\beta \tilde\ell_{i,j}D_i{\bf u}(P_jx_0)\|_{L^q(B_1)}+N\|\mathbbm{1}_{_{\mathcal{D}_j^c}}D \tilde\ell_{i,j}D_i{\bf u}(P_jx_0)\|_{L^q(B_1)}\nonumber\\ &\leq N\big(\|D{\bf u}\|_{L^{1}(B_1)}+\|p\|_{L^{1}(B_1)}+\sum_{j=1}^{M}|{\bf f}^\alpha|_{1,\delta;\overline{\mathcal{D}_{j}}}+\sum_{j=1}^{M}|g|_{1,\delta; \overline{\mathcal{D}_{j}}}\big),\end{aligned}$$ where $N>0$ is a constant depending on $d,m,q,\nu,\varepsilon$, $|A|_{\delta;\overline{\mathcal{D}_{j}}}$, and the $C^{1,\mu}$ norm of $h_j$. We also obtain from Lemma [\[lemlocbdd\]](#lemlocbdd){reference-type="ref" reference="lemlocbdd"} that $$({\boldsymbol{\mathfrak u}}_j(\cdot;x_0),{\mathfrak \pi}_j(\cdot;x_0))\in C^{1,\mu'}(\overline{\mathcal{D}_i}\cap B_{1-\varepsilon})^d\times C^{\mu'}(\overline{\mathcal{D}_i}\cap B_{1-\varepsilon}),\quad i=1,\dots,m+1,$$ with the estimate $$\begin{aligned} &\|D{\boldsymbol{\mathfrak u}}_j\|_{L^\infty(B_{1/4})}+|{\boldsymbol{\mathfrak u}}_j|_{1,\mu';\overline{\mathcal{D}_i}\cap B_{1-\varepsilon}}+\|{\mathfrak \pi}_j\|_{L^\infty(B_{1/4})}+|{\mathfrak \pi}_j|_{\mu';\overline{\mathcal{D}_i}\cap B_{1-\varepsilon}}\nonumber\\ &\leq N\big(\|D{\boldsymbol{\mathfrak u}}_j(\cdot;x_0)\|_{L^{1}(B_1)}+\|{\mathfrak \pi}_j(\cdot;x_0)\|_{L^{1}(B_1)}+|\mathbbm{1}_{_{\mathcal{D}_j^c}}A^{\alpha\beta}D_\beta \tilde\ell_{i,j}D_i{\bf u}(P_jx_0)|_{\mu;\overline{\mathcal{D}_{j}}}\nonumber\\ &\quad+|\mathbbm{1}_{_{\mathcal{D}_j^c}}D \tilde\ell_{i,j}D_i{\bf u}(P_jx_0)|_{\mu;\overline{\mathcal{D}_{j}}}\big)\nonumber\\ &\le N\big(\|D{\bf u}\|_{L^{1}(B_1)}+\|p\|_{L^{1}(B_1)}+\sum_{j=1}^{M}|{\bf f}^\alpha|_{1,\delta;\overline{\mathcal{D}_{j}}}+\sum_{j=1}^{M}|g|_{1,\delta; \overline{\mathcal{D}_{j}}}\big),\end{aligned}$$ where $\mu':=\min\{\mu,\frac{1}{2}\}$ and we used [\[rmuj\]](#rmuj){reference-type="eqref" reference="rmuj"} in the second inequality.
Denote $$%\label{def-v} {\boldsymbol{\mathfrak u}}:={\boldsymbol{\mathfrak u}}(x;x_0)=\sum_{j=1}^{m+1}{\boldsymbol{\mathfrak u}}_j(x;x_0),\quad {\mathfrak \pi}:={\mathfrak \pi}(x;x_0)=\sum_{j=1}^{m+1}{\mathfrak \pi}_j(x;x_0).$$ Then for each $i=1,\ldots,m+1$, we have $$\begin{aligned} \label{estauxiu} &\|D{\boldsymbol{\mathfrak u}}\|_{L^\infty(B_{1/4})}+|{\boldsymbol{\mathfrak u}}|_{1,\mu';\overline{\mathcal{D}_i}\cap B_{1-\varepsilon}}+\|{\mathfrak \pi}\|_{L^\infty(B_{1/4})}+|{\mathfrak \pi}|_{\mu';\overline{\mathcal{D}_i}\cap B_{1-\varepsilon}}\nonumber\\ &\le N\big(\|D{\bf u}\|_{L^{1}(B_1)}+\|p\|_{L^{1}(B_1)}+\sum_{j=1}^{M}|{\bf f}^\alpha|_{1,\delta;\overline{\mathcal{D}_{j}}}+\sum_{j=1}^{M}|g|_{1,\delta; \overline{\mathcal{D}_{j}}}\big).\end{aligned}$$ We further define $$\label{def-tildeu} \tilde {\bf u}:=\tilde {\bf u}(x;x_0)={\bf u}_{_\ell}-{\boldsymbol{\mathfrak u}}=D_{\ell}{\bf u}-{\bf u}_0-{\boldsymbol{\mathfrak u}},\quad \tilde p:=\tilde p(x;x_0)=D_\ell p-{\mathfrak \pi}.$$ Then $(\tilde {\bf u},\tilde p)$ satisfies [\[eqtildeu\]](#eqtildeu){reference-type="eqref" reference="eqtildeu"}, where $$\begin{aligned} \label{def-tildefalpha} \tilde {\bf f}^\alpha:=\tilde {\bf f}^\alpha(x;x_0)=\tilde {\bf f}^{\alpha,1}(x;x_0)+\tilde {\bf f}^{\alpha,2}(x),\end{aligned}$$ with $$\begin{aligned} \label{deff1} \tilde {\bf f}^{\alpha,1}(x;x_0):=A^{\alpha\beta}\big(D_\beta \ell_i D_i {\bf u}-\sum_{j=1}^{m+1}\mathbbm{1}_{_{\mathcal{D}_j}}D_\beta \ell_{i}D_i{\bf u}(P_jx_0)\big),\end{aligned}$$ and $$\begin{aligned} \label{deff2} \tilde {\bf f}^{\alpha,2}(x):=D_\ell {\bf f}^\alpha-D_{\ell} A^{\alpha\beta}D_\beta {\bf u}+\delta_{\alpha d}\sum_{j=1}^{m}\mathbbm{1}_{x^d>h_j(x')} (n^d_j(x'))^{-1}\tilde {\bf h}_j(x').\end{aligned}$$ Compared to [\[tildef1\]](#tildef1){reference-type="eqref" reference="tildef1"}, such data $\tilde {\bf f}^\alpha$ is good enough for us to apply Campanato's method in [@c1963; @g1983], since the mean oscillation of $\tilde {\bf f}^\alpha$ vanishes at a certain rate as the radii of the balls go to zero (see the proof of [\[estF\]](#estF){reference-type="eqref" reference="estF"} below for the details). # Decay estimates {#auxilemma} Let us denote $$\label{deftildeU} \tilde {\bf U}:=\tilde {\bf U}(x;x_0)=n^\alpha(A^{\alpha\beta}D_\beta \tilde {\bf u}-\tilde {\bf f}^\alpha)+{\bf n}\tilde p,$$ where $n^\alpha$ and ${\bf n}$ are defined in [\[defnorm\]](#defnorm){reference-type="eqref" reference="defnorm"}, $\alpha=1,\ldots,d$. Denote $$\label{defPhi} \Phi(x_0,r):=\inf_{\mathbf q^{k'},\mathbf Q\in\mathbb R^{d}}\left(\fint_{B_r(x_0)}\big(|D_{\ell^{k'}}\tilde {\bf u}(x;x_0)-\mathbf q^{k'}|^{\frac{1}{2}}+|\tilde {\bf U}(x;x_0)-\mathbf Q|^{\frac{1}{2}}\big)\,dx \right)^{2},$$ where $\tilde {\bf u}$ and $\tilde {\bf U}$ are defined in [\[def-tildeu\]](#def-tildeu){reference-type="eqref" reference="def-tildeu"} and [\[deftildeU\]](#deftildeU){reference-type="eqref" reference="deftildeU"}, respectively. 
We shall adapt the argument in [@dx2022] to establish a decay estimate of $$\begin{aligned} \label{def-phi} \phi(\Lambda x_0,r):=\inf_{\mathbf q^{k'},\mathbf Q\in\mathbb R^{d}}\Big(\fint_{B_r(\Lambda x_0)}\big(|D_{y^{k'}}\tilde{\bf v}(y;\Lambda x_0)-\mathbf q^{k'}|^{\frac{1}{2}}+|\tilde{\bf V}(y;\Lambda x_0)-\mathbf Q|^{\frac{1}{2}}\big)\,dy\Big)^{2},\end{aligned}$$ where $$\label{tildeV} \tilde{\bf V}(y;\Lambda x_0)=\mathcal{A}^{d\beta}D_{y^\beta}\tilde{\bf v}(y;\Lambda x_0)-\tilde {\boldsymbol{\mathfrak f}}^d(y;\Lambda x_0)+\tilde{\mathfrak p}(y;\Lambda x_0){\bf e}_d,$$ ${\bf e}_d$ is the $d$-th unit vector in $\mathbb R^d$, $\tilde {\boldsymbol{\mathfrak f}}^\alpha=(\tilde{\mathfrak f}_1^\alpha,\dots,\tilde{\mathfrak f}_d^\alpha)^{\top}$ with $\alpha=1,\dots,d$, $$\label{transformation} \begin{split} &\mathcal{A}^{\alpha\beta}(y)=\Lambda\Lambda^{\alpha k}A^{ks}(x)\Lambda^{s\beta}\Gamma,\quad \tilde {\bf v}(y;\Lambda x_0)=\Lambda\tilde {\bf u}(x;x_0),\quad \tilde{\mathfrak p}(y;\Lambda x_0)=\tilde p(x;x_0), \\ &\tilde{\mathfrak f}_\tau^\alpha(y;\Lambda x_0)=\Lambda^{\tau m}\Lambda^{\alpha k}\tilde f_m^k(x;x_0),\quad\tau=1,\dots,d, \end{split}$$ $\tilde f_m^k(x;x_0)$ is the $m$-th component of $\tilde{\bf f}^k(x;x_0)$ defined in [\[def-tildefalpha\]](#def-tildefalpha){reference-type="eqref" reference="def-tildefalpha"} with $k$ in place of $\alpha$, $y=\Lambda x$, $\Lambda=(\Lambda^{\alpha\beta})_{\alpha,\beta=1}^{d}$ is defined in Section [2](#secpreliminaries){reference-type="ref" reference="secpreliminaries"} (see p.), and $\Gamma=\Lambda^{-1}$. Denote $$\label{defG} G:=G(x;x_0)=D_\ell g+D\ell_i D_i{\bf u}-\sum_{j=1}^{m+1}\mathbbm{1}_{_{\mathcal{D}_j}}D\ell_{i,j}D_i{\bf u}(P_jx_0)-\sum_{j=1}^{m+1}(\mathbbm{1}_{_{\mathcal{D}_j^c}}D \tilde\ell_{i,j}D_i{\bf u}(P_jx_0))_{B_1},$$ and set $$\label{def-G} \mathcal G:=\mathcal G(y;\Lambda x_0)=G(x;x_0),\quad {\boldsymbol{\mathfrak f}}=({\mathfrak f}_1,\dots,{\mathfrak f}_d)^{\top},~{\mathfrak f}_\tau(y)=\Lambda^{\tau m} f_m(x).$$ Then it follows from [\[eqtildeu\]](#eqtildeu){reference-type="eqref" reference="eqtildeu"} that $\tilde{\bf v}$ satisfies $$\label{eqfraku} \begin{cases} D_\alpha(\mathcal{A}^{\alpha\beta}D_\beta \tilde{\bf v})+D\tilde{\mathfrak p}={\boldsymbol{\mathfrak f}}+ D_\alpha \tilde {\boldsymbol{\mathfrak f}}^\alpha\\ \operatorname{div}\tilde {\bf v}=\mathcal G \end{cases}\,\,\mbox{in}~\Lambda(B_{3/4}),$$ where $\tilde {\boldsymbol{\mathfrak f}}^\alpha=(\tilde{\mathfrak f}_1^\alpha,\dots,\tilde{\mathfrak f}_d^\alpha)^{\top}$. From [\[transformation\]](#transformation){reference-type="eqref" reference="transformation"}, the $\tau$-th component of $\tilde {\boldsymbol{\mathfrak f}}^{\alpha,1}$ and $\tilde {\boldsymbol{\mathfrak f}}^{\alpha,2}$ is $$\label{deff1f2} \tilde {\mathfrak f}_\tau^{\alpha,1}(y;\Lambda x_0)=\Lambda^{\tau m}\Lambda^{\alpha k}\tilde f_m^{k,1}(x;x_0),\quad \tilde {\mathfrak f}_\tau^{\alpha,2}(y)=\Lambda^{\tau m}\Lambda^{\alpha k}\tilde f_m^{k,2}(x),$$ where $\tilde f_m^{k,1}(x;x_0)$ and $\tilde f_m^{k,2}(x)$ are the $m$-th component of $\tilde {\bf f}^{k,1}(x;x_0)$ and $\tilde {\bf f}^{k,2}(x)$ defined in [\[deff1\]](#deff1){reference-type="eqref" reference="deff1"} and [\[deff2\]](#deff2){reference-type="eqref" reference="deff2"}, respectively. Then $\tilde {\boldsymbol{\mathfrak f}}^{\alpha,1}+\tilde {\boldsymbol{\mathfrak f}}^{\alpha,2}=\tilde {\boldsymbol{\mathfrak f}}^\alpha$ which is defined in [\[transformation\]](#transformation){reference-type="eqref" reference="transformation"}. 
Recalling that ${\bf f}^\alpha, A^{\alpha\beta}\in C^{1,\delta}(\overline{\mathcal{D}_{j}})$, $D_{\ell}n_j^\alpha\in C^{\mu}$, the assumption that $D{\bf u}$ and $p$ are piecewise $C^1$, and the fact that the vector field $\ell$ is $C^{1/2}$ (see [@dx2022 Lemma 2.1]), we find that $\tilde {\boldsymbol{\mathfrak f}}^{\alpha,2}$ is piecewise $C^{\delta_{\mu}}$, where $\delta_{\mu}=\min\big\{\frac{1}{2},\mu,\delta\big\}$. Now we denote $$\begin{aligned} \label{def-Falpha} &{\bf F}^\alpha:={\bf F}^\alpha(y;\Lambda x_0)=(\overline{{\mathcal A}^{\alpha\beta}}(y^{d})-\mathcal{A}^{\alpha\beta}(y))D_{y^\beta}\tilde{\bf v}(y;\Lambda x_0)+\tilde {\boldsymbol{\mathfrak f}}^{\alpha,1}(y;\Lambda x_0)+\tilde {\boldsymbol{\mathfrak f}}^{\alpha,2}(y)-\bar{\boldsymbol{\mathfrak f}}^{\alpha,2}(y^d),\nonumber\\ & {\bf F}=(F^1,\ldots,F^d),\quad\mathcal H:=\mathcal G-\overline{\mathcal G},\end{aligned}$$ where $\bar{\boldsymbol{\mathfrak f}}^{\alpha,2}(y^d)$ and $\overline{\mathcal G}$ are piecewise constant functions corresponding to $\tilde {\boldsymbol{\mathfrak f}}^{\alpha,2}(y)$ and $\mathcal G$, respectively. For the convenience of notation, set $$\begin{aligned} \label{defC1} \mathcal{C}_1:=\sum_{j=1}^{m+1}\|D^2{\bf u}\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}+\sum_{j=1}^{m+1}|{\bf f}^\alpha|_{1,\delta;\overline{\mathcal{D}_{j}}}+\sum_{j=1}^{m+1}|g|_{1,\delta; \overline{\mathcal{D}_{j}}}+\|D{\bf u}\|_{L^{1}(B_1)}+\|p\|_{L^{1}(B_1)},\end{aligned}$$ and $$\begin{aligned} \label{defC0} \mathcal{C}_0:=\mathcal{C}_1+\sum_{j=1}^{m+1}\|Dp\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}.\end{aligned}$$ **Lemma 7**. *Let ${\boldsymbol{\mathfrak f}}$, ${\bf F}$, and $\mathcal H$ be defined as in [\[def-G\]](#def-G){reference-type="eqref" reference="def-G"} and [\[def-Falpha\]](#def-Falpha){reference-type="eqref" reference="def-Falpha"}, respectively. Then we have $$\begin{aligned} \label{estfL1} \|\boldsymbol{\mathfrak f}\|_{L^{1}(B_{r}(\Lambda x_0))} \leq N\mathcal{C}_0 r^{d-\frac{1}{2}},\end{aligned}$$ $$\begin{aligned} \label{estF} \|{\bf F}\|_{L^1(B_{r}(\Lambda x_0))}\leq N\mathcal{C}_0r^{d+\delta_{\mu}},\end{aligned}$$ and $$\begin{aligned} \label{LemmaH} \|\mathcal H\|_{L^1(B_{r}(\Lambda x_0))}\leq N\mathcal{C}_1r^{d+\delta_{\mu}},\end{aligned}$$ where $\mathcal{C}_0$ and $\mathcal{C}_1$ are defined in [\[defC0\]](#defC0){reference-type="eqref" reference="defC0"} and [\[defC1\]](#defC1){reference-type="eqref" reference="defC1"}, respectively, $\delta_{\mu}=\min\big\{\frac{1}{2},\mu,\delta\big\}$, $N$ depends on $|A|_{1,\delta;\overline{\mathcal{D}_{j}}}$, $d,q,m,\nu$, and the $C^{2,\mu}$ norm of $h_j$.* *Proof.* Note that $$\begin{aligned} \label{estDellk0} \int_{B_r(x_0)\cap\mathcal{D}_j}|D\ell|\,dx\leq Nr^{d-\frac{1}{2}},\end{aligned}$$ see [@dx2022 (3.26)]. Here, $N$ depends only on the $C^{2,\mu}$ norm of $h_j$. Then together with ${\mathfrak f}_\tau(y)=\Lambda^{\tau m}f_m(x)$, Lemma [\[lemlocbdd\]](#lemlocbdd){reference-type="ref" reference="lemlocbdd"}, and [\[defg\]](#defg){reference-type="eqref" reference="defg"}, we obtain [\[estfL1\]](#estfL1){reference-type="eqref" reference="estfL1"}. 
Since $\tilde {\boldsymbol{\mathfrak f}}^{\alpha,2}(y)$ is piecewise $C^{\delta_{\mu}}$, we have $$\begin{aligned} \label{estDf} \int_{B_{r}(\Lambda x_0)}\big|\tilde {\boldsymbol{\mathfrak f}}^{\alpha,2}(y)-\bar{\boldsymbol{\mathfrak f}}^{\alpha,2}(y^d)\big| &\leq Nr^{d+\delta_{\mu}}\Big(\sum_{j=1}^{m+1}\|D^2{\bf u}\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}+\sum_{j=1}^{m+1}\|Dp\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}\nonumber\\ &\quad+\sum_{j=1}^{m+1}|{\bf f}^\alpha|_{1,\delta;\overline{\mathcal{D}_{j}}}+\|D{\bf u}\|_{L^{1}(B_1)}+\|p\|_{L^{1}(B_1)}\Big),\end{aligned}$$ where $\delta_{\mu}=\min\big\{\frac{1}{2},\mu,\delta\big\}$, and $N$ depends on $d,m$, and the $C^{2,\mu}$ norm of $h_j$. By using [\[defu0\]](#defu0){reference-type="eqref" reference="defu0"} and [\[def-tildeu\]](#def-tildeu){reference-type="eqref" reference="def-tildeu"}, we have $$\begin{aligned} D_s\tilde {\bf u}(x;x_0)=\ell_iD_{s}D_i{\bf u}-D_{s}{\boldsymbol{\mathfrak u}}+D_s \ell_i D_i{\bf u}-\sum_{j=1}^{m+1}D_s \tilde\ell_{i,j}D_i{\bf u}(P_jx_0).\end{aligned}$$ Then combining with [\[transformation\]](#transformation){reference-type="eqref" reference="transformation"}, we have $$\begin{aligned} &(\overline{{\mathcal A}^{\alpha\beta}}(y^{d})-\mathcal{A}^{\alpha\beta}(y))D_{y^\beta}\tilde{\bf{ v}}(y;\Lambda x_0)\\ &=(\overline{{\mathcal A}^{\alpha\beta}}(y^{d})-\mathcal{A}^{\alpha\beta}(y))\Lambda\Gamma^{\beta s}D_s\tilde {\bf u}(x;x_0)\\ &=(\overline{{\mathcal A}^{\alpha\beta}}(y^{d})-\mathcal{A}^{\alpha\beta}(y))\Lambda\Gamma^{\beta s}\big(\ell_iD_{s}D_i{\bf u}-D_{s}{\boldsymbol{\mathfrak u}}+D_s \ell_i D_i{\bf u}-\sum_{j=1}^{m+1}D_s \tilde\ell_{i,j}D_i{\bf u}(P_jx_0)\big).\end{aligned}$$ Using [\[deff1\]](#deff1){reference-type="eqref" reference="deff1"}, [\[deff1f2\]](#deff1f2){reference-type="eqref" reference="deff1f2"}, and $\mathcal{A}^{\alpha\beta}(y)=\Lambda\Lambda^{\alpha k}A^{ks}(x)\Lambda^{s\beta}\Gamma$ in [\[transformation\]](#transformation){reference-type="eqref" reference="transformation"}, we have for each $\tau=1,\dots,d$, $$\begin{aligned} %\label{f1alpha} \tilde {\mathfrak f}_\tau^{\alpha,1}(y;\Lambda x_0)=\Lambda^{\tau m}\Lambda^{\alpha k}\tilde f_m^{k,1}(x;x_0) &=\Lambda^{\tau m}\Lambda^{\alpha k}A_{mn}^{ks}(x)\big(D_s \ell_i D_i {\bf u}^n-\sum_{j=1}^{m+1}\mathbbm{1}_{_{\mathcal{D}_j}}D_s \ell_{i}D_i{\bf u}^n(P_jx_0)\big)\nonumber\\ &=\mathcal{A}_{\tau\gamma}^{\alpha\beta}(y)\Lambda^{\gamma n}\Gamma^{\beta s}\big(D_s \ell_i D_i {\bf u}^n-\sum_{j=1}^{m+1} \mathbbm{1}_{_{\mathcal{D}_j}}D_s \ell_{i}D_i{\bf u}^n(P_jx_0)\big).\end{aligned}$$ Thus, $$\tilde {\boldsymbol{\mathfrak f}}^{\alpha,1}(y;\Lambda x_0)=\mathcal{A}^{\alpha\beta}(y)\Lambda\Gamma^{\beta s}\big(D_s \ell_i D_i {\bf u}-\sum_{j=1}^{m+1} \mathbbm{1}_{_{\mathcal{D}_j}}D_s \ell_{i}D_i{\bf u}(P_jx_0)\big)$$ and $$\begin{aligned} %\label{Af1} &(\overline{{\mathcal A}^{\alpha\beta}}(y^{d})-\mathcal{A}^{\alpha\beta}(y))D_{y^\beta}\tilde{\bf{ v}}(y;\Lambda x_0)+\tilde {\boldsymbol{\mathfrak f}}^{\alpha,1}(y;\Lambda x_0)\nonumber\\ &=(\overline{{\mathcal A}^{\alpha\beta}}(y^{d})-\mathcal{A}^{\alpha\beta}(y))\Lambda\Gamma^{\beta s}(\ell_iD_{s}D_i{\bf u}-D_{s}{\boldsymbol{\mathfrak u}}-\sum_{j=1}^{m+1}\mathbbm{1}_{_{\mathcal{D}_j^c}}D_s \tilde\ell_{i,j}D_i{\bf u}(P_jx_0))\nonumber\\ &\quad+\overline{{\mathcal A}^{\alpha\beta}}(y^{d})\Lambda\Gamma^{\beta s} \big(D_s \ell_i D_i{\bf u}-\sum_{j=1}^{m+1} \mathbbm{1}_{_{\mathcal{D}_j}} D_s\ell_{i}D_i{\bf u}(P_jx_0)\big).\end{aligned}$$ Together with $\mathcal{A}\in 
C^{1,\delta}(\mathcal{D}_{\varepsilon}\cap\overline{\mathcal{D}}_j)$, [\[volume\]](#volume){reference-type="eqref" reference="volume"}, [\[estauxiu\]](#estauxiu){reference-type="eqref" reference="estauxiu"}, [\[estDellk0\]](#estDellk0){reference-type="eqref" reference="estDellk0"}, and the fact that $\mathbbm{1}_{_{\mathcal{D}_j^c}}D_s \tilde\ell_{i,j}$ is piecewise $C^\mu$, we have $$\begin{aligned} %\label{estADtildeu} \|(\overline{{\mathcal A}^{\alpha\beta}}(y^{d})-\mathcal{A}^{\alpha\beta}(y))D_{y^\beta}\tilde{\bf{ v}}(y;\Lambda x_0)+\tilde {\boldsymbol{\mathfrak f}}^{\alpha,1}(y;\Lambda x_0)\|_{L^1(B_{r}(\Lambda x_0))}\leq N\mathcal{C}_1r^{d+\frac{1}{2}}.\end{aligned}$$ Combining with [\[estDf\]](#estDf){reference-type="eqref" reference="estDf"}, we derive [\[estF\]](#estF){reference-type="eqref" reference="estF"}. Finally, recalling $\mathcal G:=\mathcal G(y;\Lambda x_0)=G(x;x_0)$, where $G(x;x_0)$ is defined in [\[defG\]](#defG){reference-type="eqref" reference="defG"}, using [\[volume\]](#volume){reference-type="eqref" reference="volume"}, Lemma [\[lemlocbdd\]](#lemlocbdd){reference-type="ref" reference="lemlocbdd"}, and the fact that $\mathbbm{1}_{_{\mathcal{D}_j^c}}D\tilde\ell_{i,j}$ is piecewise $C^\mu$ again, we have [\[LemmaH\]](#LemmaH){reference-type="eqref" reference="LemmaH"}. The proof of the lemma is complete. ◻ **Lemma 8**. *Let $\varepsilon\in(0,1)$ and $q\in(1,\infty)$. Suppose that $A^{\alpha\beta}$, ${\bf f}^\alpha$, and $g$ satisfy Assumption [Assumption 2](#assump){reference-type="ref" reference="assump"} with $s=1$. If $(\tilde{\bf v},\tilde{\mathfrak p})$ is a weak solution to [\[eqfraku\]](#eqfraku){reference-type="eqref" reference="eqfraku"}, then for any $0<\rho\leq r\leq 1/4$, we have $$\begin{aligned} %\label{est phi'} \phi(\Lambda x_0,\rho)&\leq N\Big(\frac{\rho}{r}\Big)^{\delta_{\mu}}\phi(\Lambda x_0,r/2)+N\mathcal{C}_0\rho^{\delta_{\mu}},\end{aligned}$$ where $\phi(\Lambda x_0,r)$ is defined in [\[def-phi\]](#def-phi){reference-type="eqref" reference="def-phi"}, $\mathcal{C}_0$ is defined in [\[defC0\]](#defC0){reference-type="eqref" reference="defC0"}, $\delta_{\mu}=\min\big\{\frac{1}{2},\mu,\delta\big\}$, $N$ depends on $d,m,q,\nu$, the $C^{2,\mu}$ norm of $h_j$, and $|A|_{1,\delta;\overline{\mathcal{D}_{j}}}$.* *Proof.* Let ${\bf v_0}=(v_0^1,\dots,v_0^d)$ and $p_0$ be functions of $y^d$, such that $v_0^d=\overline{\mathcal G}$, $\overline{{\mathcal A}^{dd}}{\bf v}_0+p_0{\bf e}_d=\bar{\boldsymbol{\mathfrak f}}_2^d$, where $\overline{\mathcal G}$ and $\bar{\boldsymbol{\mathfrak f}}_2^d$ are piecewise constant functions corresponding to $\mathcal G$ and $\tilde{\boldsymbol{\mathfrak f}}_2^d$, respectively. Set $${\bf v}_e=\tilde{\bf v}-\int_{\Lambda x_0^d}^{y^d}{\bf v}_0(s)\,ds,\quad p_e=\tilde {\mathfrak p}-p_0.$$ Then according with [\[eqfraku\]](#eqfraku){reference-type="eqref" reference="eqfraku"}, we have $$\begin{cases} D_\alpha(\overline{{\mathcal A}^{\alpha\beta}}(y^{d})D_\beta {\bf v}_e)+Dp_e={\boldsymbol{\mathfrak f}}+ D_\alpha {\bf F}^\alpha\\ \operatorname{div}{\bf v}_e=\mathcal H \end{cases}\,\,\mbox{in}~B_{r}(\Lambda x_0),$$ where $\mathcal H=\mathcal G-\overline{\mathcal G}$, and ${\bf F}^\alpha$ is defined in [\[def-Falpha\]](#def-Falpha){reference-type="eqref" reference="def-Falpha"}. 
Now we decompose $({\bf v}_e,p_e)=({\bf v},p_1)+({\bf w},p_2)$, where $({\bf v},p_1)\in W_0^{1,q}(B_{r}(\Lambda x_0))^d\times L_0^q(B_{r}(\Lambda x_0))$ satisfies $$\begin{aligned} \begin{cases} D_{\alpha}(\overline{{\mathcal A}^{\alpha\beta}}(y^d)D_{\beta}{\bf v})+Dp_1={\boldsymbol{\mathfrak f}}\mathbbm{1}_{B_{r/2}(\Lambda x_0)}+D_\alpha({\bf F}^\alpha\mathbbm{1}_{B_{r/2}(\Lambda x_0)})\\ \operatorname{div}{\bf v}=\mathcal H\mathbbm{1}_{B_{r/2}(\Lambda x_0)}-(\mathcal H\mathbbm{1}_{B_{r/2}(\Lambda x_0)})_{B_{r}(\Lambda x_0)} \end{cases}\,\, \mbox{in}~B_{r}(\Lambda x_0).\end{aligned}$$ Then by Lemmas [Lemma 5](#weak est barv){reference-type="ref" reference="weak est barv"} and [Lemma 7](#lemmaFG){reference-type="ref" reference="lemmaFG"}, we have $$\begin{aligned} \label{holder v} \left(\fint_{B_{r/2}(\Lambda x_0)}(|D{\bf v}|+|p_1|)^{\frac{1}{2}}\,dy\right)^2\leq N\mathcal{C}_0r^{\delta_{\mu}},\end{aligned}$$ where $\mathcal{C}_0$ is defined in [\[defC0\]](#defC0){reference-type="eqref" reference="defC0"}. Moreover, $({\bf w},p_2)$ satisfies $$\begin{aligned} \begin{cases} D_{\alpha}(\overline{{\mathcal A}^{\alpha\beta}}(y^d)D_{\beta}{\bf w})+Dp_2=0\\ \operatorname{div}{\bf w}=(\mathcal H\mathbbm{1}_{B_{r/2}(\Lambda x_0)})_{B_{r}(\Lambda x_0)} \end{cases}\,\, \mbox{in}~B_{r/2}(\Lambda x_0).\end{aligned}$$ Then it follows from [@cd2019 (3.7)] that $$\begin{aligned} \label{diff-wW} &\Bigg(\fint_{B_{\kappa r}(\Lambda x_0)}\big(|D_{y^{k'}}{\bf w}(y;\Lambda x_0)-(D_{y^{k'}}{\bf w})_{B_{\kappa r}(\Lambda x_{0})}|^{\frac{1}{2}}+|{\bf W}(y;\Lambda x_0)-({\bf W})_{B_{\kappa r}(\Lambda x_{0})}|^{\frac{1}{2}}\big)\,dy\Bigg)^{2}\nonumber\\ &\leq N\kappa\left(\fint_{B_{r/2}(\Lambda x_0)}\big(|D_{y^{k'}}{\bf w}(y;\Lambda x_0)-\mathbf{q}^{k'}|^{\frac{1}{2}}+|{\bf W}(y;\Lambda x_0)-\mathbf Q|^{\frac{1}{2}}\big)\,dy\right)^{2},\end{aligned}$$ where ${\bf W}:={\bf W}(y;\Lambda x_0)=\overline{\mathcal A^{d\beta}}(y^d)D_{y^\beta}{\bf w}(y;\Lambda x_0)+p_2{\bf e}_d$ and $\kappa\in(0,1/2)$ to be fixed later. Set $${\bf V}_e=\overline{\mathcal A^{d\beta}}(y^d)D_{y^\beta}{\bf v}_e(y;\Lambda x_0)+p_e{\bf e}_d.$$ Then $$\begin{aligned} \tilde{\bf V}-{\bf V}_e=-{\bf F}^d(y;\Lambda x_0),\end{aligned}$$ where $\tilde{\bf V}$ and ${\bf F}^d$ are defined in [\[tildeV\]](#tildeV){reference-type="eqref" reference="tildeV"} and [\[def-Falpha\]](#def-Falpha){reference-type="eqref" reference="def-Falpha"}, respectively. 
Thus, combining the triangle inequality, [\[holder v\]](#holder v){reference-type="eqref" reference="holder v"}, [\[diff-wW\]](#diff-wW){reference-type="eqref" reference="diff-wW"}, and [\[estF\]](#estF){reference-type="eqref" reference="estF"}, we obtain $$\begin{aligned} %\label{estkappar} &\Bigg(\fint_{B_{\kappa r}(\Lambda x_0)}\big(|D_{y^{k'}}\tilde{\bf v}(y;\Lambda x_0)-(D_{y^{k'}}{\bf w})_{B_{\kappa r}(\Lambda x_{0})}|^{\frac{1}{2}}+|\tilde{\bf V}(y;\Lambda x_0)-({\bf W})_{B_{\kappa r}(\Lambda x_{0})}|^{\frac{1}{2}}\big)\,dy\Bigg)^{2}\nonumber\\ &\leq N\kappa\left(\fint_{B_{r/2}(\Lambda x_0)}\big(|D_{y^{k'}}\tilde{\bf v}(y;\Lambda x_0)-\mathbf{q}^{k'}|^{\frac{1}{2}}+|\tilde{\bf V}(y;\Lambda x_0)-\mathbf Q|^{\frac{1}{2}}\big)\,dy\right)^{2}\nonumber\\ &\quad+N\kappa^{-2d}\left(\fint_{B_{r/2}(\Lambda x_0)}|{\bf F}^d(y;\Lambda x_0)|^{\frac{1}{2}}\,dy\right)^{2}+N\kappa^{-2d}\mathcal{C}_0r^{\delta_{\mu}}\\ &\leq N\kappa\left(\fint_{B_{r/2}(\Lambda x_0)}\big(|D_{y^{k'}}\tilde{\bf v}(y;\Lambda x_0)-\mathbf{q}^{k'}|^{\frac{1}{2}}+|\tilde{\bf V}(y;\Lambda x_0)-\mathbf Q|^{\frac{1}{2}}\big)\,dy\right)^{2}+N\kappa^{-2d}\mathcal{C}_0r^{\delta_{\mu}}.\end{aligned}$$ Using the fact that $\mathbf{q}^{k'}, \mathbf{Q}\in\mathbb R^{d}$ are arbitrary, we deduce $$\begin{aligned} %\label{iterating Du} \phi(\Lambda x_0,\kappa r)\leq N_{0}\kappa\phi(\Lambda x_0,r/2)+N\kappa^{-2d}\mathcal{C}_0r^{\delta_{\mu}}.\end{aligned}$$ Choosing $\kappa\in(0,1/2)$ small enough so that $N_{0}\kappa\leq\kappa^{\gamma}$ for any fixed $\gamma\in(\delta_{\mu},1)$ and iterating, we get $$\begin{aligned} %\label{iteration phi} \phi(\Lambda x_0,\kappa^{j}r)\leq\kappa^{j\delta_{\mu}}\phi(\Lambda x_0,r/2)+N\mathcal{C}_0(\kappa^{j}r)^{\delta_{\mu}}.\end{aligned}$$ Therefore, for any $\rho$ with $0<\rho\leq r\leq1/4$ and $\kappa^j r\leq\rho<\kappa^{j-1}r$, we have $$\begin{aligned} %\label{est phi'} \phi(\Lambda x_0,\rho)\leq N\Big(\frac{\rho}{r}\Big)^{\delta_{\mu}}\phi(\Lambda x_0,r/2)+N\mathcal{C}_0\rho^{\delta_{\mu}}.\end{aligned}$$ The lemma is proved. ◻ Now we are ready to prove the decay estimate of $\Phi(x_{0},r)$ defined in [\[defPhi\]](#defPhi){reference-type="eqref" reference="defPhi"} as follows. **Lemma 9**. *Let $\varepsilon\in(0,1)$ and $q\in(1,\infty)$. Suppose that $A^{\alpha\beta}$, ${\bf f}^\alpha$, and $g$ satisfy Assumption [Assumption 2](#assump){reference-type="ref" reference="assump"} with $s=1$. If $(\tilde{\bf u},\tilde p)$ is a weak solution to [\[eqtildeu\]](#eqtildeu){reference-type="eqref" reference="eqtildeu"}, then for any $0<\rho\leq r\leq 1/4$, we have $$\begin{aligned} \label{est phi'} \Phi(x_{0},\rho)&\leq N\Big(\frac{\rho}{r}\Big)^{\delta_{\mu}}\Phi(x_{0},r/2)+N\mathcal{C}_0\rho^{\delta_{\mu}},\end{aligned}$$ where $\mathcal{C}_0$ is defined in [\[defC0\]](#defC0){reference-type="eqref" reference="defC0"}, $\delta_{\mu}=\min\big\{\frac{1}{2},\mu,\delta\big\}$, $N$ depends on $d,m,q,\nu$, the $C^{2,\mu}$ norm of $h_j$, and $|A|_{1,\delta;\overline{\mathcal{D}_{j}}}$.* *Proof.* The proof is an adaptation of [@dx2022 Lemma 3.4]. Let $y_0$ be as in Section [2](#secpreliminaries){reference-type="ref" reference="secpreliminaries"}. 
Note that $$\label{Dell-nalpha} \begin{split} &D_{\ell^k}\tilde {\bf u}(x;x_0)-\Gamma D_{y^k}\tilde{\bf v}(y;\Lambda x_0)=(\ell^k(x)-\tau_k)\cdot D\tilde {\bf u}(x;x_0),\\ &\tilde {\bf U}(x;x_0)-\Gamma\big(\mathcal{A}^{d\beta}(y)D_{y^\beta}\tilde{\bf v}(y;\Lambda x_0)-\tilde{\boldsymbol{\mathfrak f}}^d(y;\Lambda x_0)+\tilde{\mathfrak p}(y;\Lambda x_0){\bf e}_d\big)\\ &=(n^\alpha-n_{y_0}^\alpha)(A^{\alpha\beta}(x)D_\beta \tilde {\bf u}(x;x_0)-\tilde {\bf f}^\alpha(x;x_0))+({\bf n}-{\bf n}_{y_0})\tilde p(x;x_0), \end{split}$$ where $\tau_k$ and $n_{y_0}^\alpha$ are defined in [\[deftauk\]](#deftauk){reference-type="eqref" reference="deftauk"} and [\[defny0\]](#defny0){reference-type="eqref" reference="defny0"}, respectively. For any $x\in B_r(x_0)\cap\mathcal{D}_{j}$, where $r\in(|x_0-y_0|,1)$ and $j=1,\ldots,m+1$, we have $$\begin{aligned} |\ell^k(x)-\tau_{k}|\leq N\sqrt r,\quad |{\bf n}(x)-{\bf n}_{y_0}|\leq N\sqrt r,\end{aligned}$$ where $k=1,\ldots,d-1$. See the proof of [@dx2022 Lemma 3.4] for the details. Then coming back to [\[Dell-nalpha\]](#Dell-nalpha){reference-type="eqref" reference="Dell-nalpha"}, we obtain $$\label{difference-coor} \begin{aligned} &|D_{\ell^k}\tilde {\bf u}(x;x_0)-\Gamma D_{y^k}\tilde{\bf v}(y;\Lambda x_0)|\leq N\sqrt r|D\tilde {\bf u}(x;x_0)|,\\ &|\tilde {\bf U}(x;x_0)-\Gamma(\mathcal{A}^{d\beta}(y)D_{y^\beta}\tilde{\bf v}(y;\Lambda x_0)-\tilde{\boldsymbol{\mathfrak f}}^d(y;\Lambda x_0)+\tilde{\mathfrak p}(y;\Lambda x_0){\bf e}_d)|\\ &\leq N\sqrt r(|D\tilde {\bf u}(x;x_0)|+|\tilde {\bf f}^\alpha(x;x_0)|+|\tilde p(x;x_0)|). \end{aligned}$$ By using [\[def-tildeu\]](#def-tildeu){reference-type="eqref" reference="def-tildeu"}, [\[defu0\]](#defu0){reference-type="eqref" reference="defu0"}, and [\[estauxiu\]](#estauxiu){reference-type="eqref" reference="estauxiu"}, we have $$\begin{aligned} \label{meanDtildeu} \fint_{B_{r}(x_{0})}(|D\tilde {\bf u}|+|\tilde p|)\,dx&\leq \sum_{j=1}^{m+1}\|D^2{\bf u}\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}+\sum_{j=1}^{m+1}\|Dp\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}+\|D{\boldsymbol{\mathfrak u}}\|_{L^\infty(B_{r}(x_0))}\nonumber\\ &\quad+\|\pi\|_{L^\infty(B_{r}(x_0))}+\fint_{B_{r}(x_{0})}\big|D\ell^k D{\bf u}-\sum_{j=1}^{m+1}D\tilde\ell_{,j}^kD{\bf u}(P_jx_0)\big|\,dx\nonumber\\ &\leq \sum_{j=1}^{m+1}\|D^2{\bf u}\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}+\sum_{j=1}^{m+1}\|Dp\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}\nonumber\\ &\quad+N\big(\|D{\bf u}\|_{L^{1}(B_1)}+\|p\|_{L^{1}(B_1)}+\sum_{j=1}^{M}|{\bf f}^\alpha|_{1,\delta;\overline{\mathcal{D}_{j}}}+\sum_{j=1}^{M}|g|_{1,\delta; \overline{\mathcal{D}_{j}}}\big)\nonumber\\ &\quad+\fint_{B_{r}(x_{0})}\big|D\ell^k D{\bf u}-\sum_{j=1}^{m+1}D\tilde\ell_{,j}^kD{\bf u}(P_jx_0)\big|\,dx.\end{aligned}$$ To estimate the last term on the right-hand side above, on one hand, using the fact that $\tilde\ell_{,j}$ is the smooth extension of $\ell|_{\mathcal{D}_j}$ to $\cup_{k=1,k\neq j}^{m+1}\mathcal{D}_k$ and the local boundedness of $D{\bf u}$ in Lemma [\[lemlocbdd\]](#lemlocbdd){reference-type="ref" reference="lemlocbdd"}, we obtain $$\begin{aligned} \label{estDellk} &\Big\|\sum_{j=1,j\neq i}^{m+1}D\tilde\ell_{,j}^kD{\bf u}(P_jx_0)\Big\|_{L_1(B_{r}(x_0)\cap\mathcal{D}_i)}\nonumber\\ &\leq Nr^{d}\big(\|D{\bf u}\|_{L^{1}(B_1)}+\|p\|_{L^{1}(B_1)}+\sum_{j=1}^{M}|{\bf f}^\alpha|_{1,\delta;\overline{\mathcal{D}_{j}}} +\sum_{j=1}^{M}|g|_{1,\delta; \overline{\mathcal{D}_{j}}}\big),\end{aligned}$$ where $i=1,\ldots,m+1$. 
On the other hand, it follows from [\[estDellk0\]](#estDellk0){reference-type="eqref" reference="estDellk0"} that $$\begin{aligned} \label{estDellk00} &\big\|D\ell^k (D{\bf u}-D{\bf u}(P_ix_0))\big\|_{L_1(B_{r}(x_0)\cap\mathcal{D}_i)}\nonumber\\ &\leq Nr\|D^2{\bf u}\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_i)}\int_{B_r(x_0)\cap\mathcal{D}_i}|D\ell^k|\,dx\leq Nr^{d+\frac{1}{2}}\|D^2{\bf u}\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_i)}.\end{aligned}$$ Thus, coming back to [\[meanDtildeu\]](#meanDtildeu){reference-type="eqref" reference="meanDtildeu"}, and using [\[estDellk\]](#estDellk){reference-type="eqref" reference="estDellk"} and [\[estDellk00\]](#estDellk00){reference-type="eqref" reference="estDellk00"}, we obtain $$\begin{aligned} \label{estfintDtildeu} \fint_{B_{r}(x_{0})}(|D\tilde {\bf u}|+|\tilde p|)\,dx\leq N\mathcal{C}_0,\end{aligned}$$ where $\mathcal{C}_0$ is defined in [\[defC0\]](#defC0){reference-type="eqref" reference="defC0"}. It follows from [\[def-tildefalpha\]](#def-tildefalpha){reference-type="eqref" reference="def-tildefalpha"} that $$\begin{aligned} %\label{meantildef} \fint_{B_{r}(x_{0})}|\tilde {\bf f}^\alpha|\,dx &\leq N\big(\sum_{j=1}^{M}|{\bf f}^\alpha|_{1,\delta;\overline{\mathcal{D}_{j}}}+\|D{\bf u}\|_{L^{1}(B_1)}\big)\nonumber\\ &\quad+N\fint_{B_{r}(x_0)}\big|D_\beta \ell_i D_i{\bf u}-\sum_{j=1}^{m+1}\mathbbm{1}_{_{\mathcal{D}_j}}D_\beta \ell_{i}D_i{\bf u}(P_jx_0)\big|\,dx\nonumber\\ &\quad+\fint_{B_{r}(x_{0})}\big|\delta_{\alpha d}\sum_{j=1}^{m}\mathbbm{1}_{x^d>h_j(x')} (n^d_j(x'))^{-1}\tilde {\bf h}_j(x')\big|\,dx.\end{aligned}$$ Then by using [\[estDellk0\]](#estDellk0){reference-type="eqref" reference="estDellk0"} and [\[deftildeh\]](#deftildeh){reference-type="eqref" reference="deftildeh"}, we derive $$\begin{aligned} \label{estfinttildef} \fint_{B_{r}(x_{0})}|\tilde {\bf f}^\alpha|\,dx\leq N\mathcal{C}_1,\end{aligned}$$ where $\mathcal{C}_1$ is defined in [\[defC1\]](#defC1){reference-type="eqref" reference="defC1"}. Using the triangle inequality and [\[difference-coor\]](#difference-coor){reference-type="eqref" reference="difference-coor"}--[\[estfinttildef\]](#estfinttildef){reference-type="eqref" reference="estfinttildef"}, we have $$\begin{aligned} &\left(\fint_{B_\rho(x_0)}\big(|D_{\ell^{k'}}\tilde {\bf u}(x;x_0)-\mathbf q^{k'}|^{\frac{1}{2}}+|\tilde {\bf U}(x;x_0)-\mathbf Q|^{\frac{1}{2}}\big)\,dx \right)^{2}\\ &\leq \Bigg(\fint_{B_\rho(\Lambda x_0)}\big(|\Gamma(D_{y^{k'}}\tilde{\bf v}(y;\Lambda x_0)-\Gamma^{-1}\mathbf q^{k'})|^{\frac{1}{2}}\nonumber\\ &\qquad+|\Gamma(\mathcal{A}^{d\beta}(y)D_{y^\beta}\tilde{\bf v}(y;\Lambda x_0)-\tilde{\boldsymbol{\mathfrak f}}^d(y;\Lambda x_0)+\tilde{\mathfrak p}(y;\Lambda x_0){\bf e}_d-\Gamma^{-1}\mathbf Q)|^{\frac{1}{2}}\Big)\,dy\Bigg)^{2}+N\mathcal{C}_0\sqrt{\rho}\\ &\leq \Bigg(\fint_{B_\rho(\Lambda x_0)}\big(|D_{y^{k'}}\tilde{\bf v}(y;\Lambda x_0)-\Gamma^{-1}\mathbf q^{k'}|^{\frac{1}{2}}\nonumber\\ &\qquad+|\mathcal{A}^{d\beta}(y)D_{y^\beta}\tilde{\bf v}(y;\Lambda x_0)-\tilde{\boldsymbol{\mathfrak f}}^d(y;\Lambda x_0)+\tilde{\mathfrak p}(y;\Lambda x_0){\bf e}_d-\Gamma^{-1}\mathbf Q|^{\frac{1}{2}}\Big)\,dy\Bigg)^{2}+N\mathcal{C}_0\sqrt{\rho},\end{aligned}$$ where $0<\rho\leq r\leq 1/4$ and $\mathcal{C}_0$ is defined in [\[defC0\]](#defC0){reference-type="eqref" reference="defC0"}. 
By using the fact that $\mathbf{q}^{k'}, \mathbf{Q}\in\mathbb R^{d}$ are arbitrary, we obtain $$\begin{aligned} %\label{est phi'} \Phi(x_{0},\rho)\leq \phi(\Lambda x_{0},\rho)+N\mathcal{C}_0\sqrt{\rho}.\end{aligned}$$ Combining with Lemma [Lemma 8](#lemma iteraphi){reference-type="ref" reference="lemma iteraphi"}, we derive $$\begin{aligned} \label{est phiPhi} \Phi(x_{0},\rho)\leq N\Big(\frac{\rho}{r}\Big)^{\delta_{\mu}}\phi(\Lambda x_{0},r/2)+N\mathcal{C}_0\rho^{\delta_{\mu}}.\end{aligned}$$ Similarly, we have $$\begin{aligned} \phi(\Lambda x_{0},r/2)\leq\Phi(x_{0},r/2)+N\mathcal{C}_0\sqrt{r}.\end{aligned}$$ Substituting it into [\[est phiPhi\]](#est phiPhi){reference-type="eqref" reference="est phiPhi"} and using $\delta_{\mu}\leq1/2$, we obtain $$\begin{aligned} %\label{est Phi'} \Phi(x_{0},\rho)\leq N\Big(\frac{\rho}{r}\Big)^{\delta_{\mu}}\Phi(x_{0},r/2)+N\mathcal{C}_0\rho^{\delta_{\mu}}.\end{aligned}$$ The lemma is proved. ◻ # The boundedness of $\|D^2{\bf u}\|_{L^\infty}$ and $\|Dp\|_{L^\infty}$ {#bddestimate} For convenience, set $$\begin{aligned} \label{defC3} \mathcal{C}_2:=\|D{\bf u}\|_{L^{1}(B_1)} +\|p\|_{L^{1}(B_1)}+\sum_{j=1}^{m+1}|{\bf f}^\alpha|_{1,\delta;\overline{\mathcal{D}_{j}}}+\sum_{j=1}^{m+1}|g|_{1,\delta;\overline{\mathcal{D}_{j}}}.\end{aligned}$$ We first prove the estimates of $\|D\tilde{\bf u}(\cdot;x_0)\|_{L^2(B_{r/2}(x_0))}$ and $\|\tilde p(\cdot;x_0)\|_{L^2(B_{r/2}(x_0))}$ in the following lemma. **Lemma 10**. *Under the same assumptions as in Lemma [Lemma 9](#lemma itera){reference-type="ref" reference="lemma itera"}, we have $$\begin{aligned} %\label{estDuell} &\|D\tilde{\bf u}(\cdot;x_0)\|_{L^2(B_{r/2}(x_0))}+\|\tilde p(\cdot;x_0)\|_{L^2(B_{r/2}(x_0))}\nonumber\\ &\leq Nr^{\frac{d+1}{2}}\Big(\sum_{j=1}^{m+1}\|D^2{\bf u}\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}+\sum_{j=1}^{m+1}\|Dp\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}\Big)+N\mathcal{C}_2r^{\frac{d}{2}-1},\end{aligned}$$ where $x_0\in \mathcal{D}_{\varepsilon}\cap{{\mathcal{D}}_{j_0}}$, $r\in(0,1/4)$, $\tilde{\bf u}$ and $\tilde p$ are defined in [\[def-tildeu\]](#def-tildeu){reference-type="eqref" reference="def-tildeu"}, the constant $N>0$ depends on $d,m,q,\nu,\varepsilon$, $|A|_{1,\delta;\overline{\mathcal{D}_{j}}}$, and the $C^{2,\mu}$ norm of $h_j$.* *Proof.* We start with proving the estimate of $\|D\tilde{\bf u}(\cdot;x_0)\|_{L^2(B_{r/2}(x_0))}$. By using the definition of weak solutions, the transmission problem [\[homosecond\]](#homosecond){reference-type="eqref" reference="homosecond"} is equivalent to $$\begin{aligned} \label{equell} \begin{cases} D_\alpha(A^{\alpha\beta}D_\beta {\bf u}_{_\ell})+D(D_\ell p-(D_\ell p)_{B_{r}(x_0)})={\bf f}+ D_\alpha {\bf f}^{\alpha,3}\\ \operatorname{div}{\bf u}_{_\ell}=D_\ell g+D\ell_i D_i{\bf u}-\sum_{j=1}^{m+1}D \tilde\ell_{i,j}D_i{\bf u}(P_jx_0) \end{cases}\,\, \text{in}\,B_1.\end{aligned}$$ By [@art2005 Lemma 10], one can find $\boldsymbol\psi\in H_0^1(B_{r}(x_0))^d$ satisfying $$\operatorname{div}\boldsymbol\psi=D_\ell p-(D_\ell p)_{B_{r}(x_0)}\quad\mbox{in}~B_{r}(x_0),$$ and $$\label{estellp} \|\boldsymbol\psi\|_{L^2(B_{r}(x_0))}+r\|D\boldsymbol\psi\|_{L^2(B_{r}(x_0))}\leq Nr\|D_\ell p-(D_\ell p)_{B_{r}(x_0)}\|_{L^2(B_{r}(x_0))},$$ where $N=N(d)$. 
Then by applying $\boldsymbol\psi$ to [\[equell\]](#equell){reference-type="eqref" reference="equell"} as a test function, and using Young's inequality and [\[estellp\]](#estellp){reference-type="eqref" reference="estellp"}, we obtain $$\begin{aligned} &\int_{B_{r}(x_0)}|D_\ell p-(D_\ell p)_{B_{r}(x_0)}|^2\,dx\\ &=-\int_{B_{r}(x_0)}A^{\alpha\beta}D_\beta {\bf u}_{_\ell}D_\alpha\boldsymbol{\psi}\,dx-\int_{B_{r}(x_0)}{\bf f}\boldsymbol\psi\,dx+\int_{B_{r}(x_0)}{\bf f}^{\alpha,3} D_\alpha\boldsymbol\psi\,dx\\ &\leq\varepsilon_0\int_{B_{r}(x_0)}|D_\ell p-(D_\ell p)_{B_{r}(x_0)}|^2\, dx+N(d,\varepsilon_0)\int_{B_{r}(x_0)}(|D{\bf u}_{_\ell}|^2+r^2|{\bf f}|^2+|{\bf f}^{\alpha,3}|^2)\,dx.\end{aligned}$$ Taking $\varepsilon_0=\frac{1}{2}$, we have $$\begin{aligned} \label{estDellp} \|D_\ell p-(D_\ell p)_{B_{r}(x_0)}\|_{L^2(B_{r}(x_0))}\leq N\big(\|D{\bf u}_{_\ell}\|_{L^2(B_{r}(x_0))}+r\|{\bf f}\|_{L^2(B_{r}(x_0))}+\|{\bf f}^{\alpha,3}\|_{L^2(B_{r}(x_0))}\big).\end{aligned}$$ Now we choose $\eta\in C_0^\infty(B_r(x_0))$ such that $$\label{defeta} 0\leq\eta\leq1,\quad\eta=1\quad\mbox{in}~B_{r/2}(x_0),\quad |D\eta|\leq \frac{N(d)}{r}.$$ Then we apply $\eta^2{\bf u}_{_\ell}$ to [\[equell\]](#equell){reference-type="eqref" reference="equell"} as a test function to obtain $$\begin{aligned} &\int_{B_{r}(x_0)}\eta^2A^{\alpha\beta}D_\beta {\bf u}_{_\ell}D_\alpha{\bf u}_{_\ell}\,dx\\ &=-2\int_{B_{r}(x_0)}\eta {\bf u}_{_\ell}A^{\alpha\beta}D_\beta {\bf u}_{_\ell}D_\alpha\eta\, dx-2\int_{B_{r}(x_0)}\eta{\bf u}_{_\ell}D\eta(D_\ell p-(D_\ell p)_{B_{r}(x_0)})\,dx\\ &\quad-\int_{B_{r}(x_0)}\eta^2(D_\ell p-(D_\ell p)_{B_{r}(x_0)})(D_\ell g+D\ell_i D_i{\bf u}-\sum_{j=1}^{m+1}D \tilde\ell_{i,j}D_i{\bf u}(P_jx_0))\,dx\\ &\quad-\int_{B_{r}(x_0)}{\bf f}\eta^2{\bf u}_{_\ell}\,dx+\int_{B_{r}(x_0)}\eta^2{\bf f}^{\alpha,3} D_\alpha{\bf u}_{_\ell}\,dx+2\int_{B_{r}(x_0)}\eta{\bf f}^{\alpha,3} D_\alpha\eta{\bf u}_{_\ell}\,dx.\end{aligned}$$ Using the ellipticity condition, Young's inequality, [\[estDellp\]](#estDellp){reference-type="eqref" reference="estDellp"}, and $\eta=1$ in $B_{r/2}(x_0)$, we derive $$\begin{aligned} \|D{\bf u}_{_\ell}(\cdot;x_0)\|_{L^2(B_{r/2}(x_0))} &\leq N\big(r^{-1}\|{\bf u}_{_\ell}(\cdot;x_0)\|_{L^2(B_{r}(x_0))}+r\|{\bf f}\|_{L^2(B_{r}(x_0))}+\|{\bf f}^{\alpha,3}(\cdot;x_0)\|_{L^2(B_{r}(x_0))}\nonumber\\ &\quad+\|D_\ell g+D\ell_i D_i{\bf u}-\sum_{j=1}^{m+1}D \tilde\ell_{i,j}D_i{\bf u}(P_jx_0)\|_{L^2(B_{r}(x_0))}\big)\\ &\quad+\varepsilon_1\|D{\bf u}_{_\ell}(\cdot;x_0)\|_{L^2(B_{r}(x_0))},\end{aligned}$$ where $\varepsilon_1>0$. This, in combination with a well-known iteration argument (see, for instance, [@g93 pp. 81--82]), yields $$\begin{aligned} \label{est-Dul} \|D{\bf u}_{_\ell}(\cdot;x_0)\|_{L^2(B_{r/2}(x_0))} &\leq N\big(r^{-1}\|{\bf u}_{_\ell}(\cdot;x_0)\|_{L^2(B_{r}(x_0))}+r\|{\bf f}\|_{L^2(B_{r}(x_0))}+\|{\bf f}^{\alpha,3}(\cdot;x_0)\|_{L^2(B_{r}(x_0))}\nonumber\\ &\quad+\|D_\ell g+D\ell_i D_i{\bf u}-\sum_{j=1}^{m+1}D \tilde\ell_{i,j}D_i{\bf u}(P_jx_0)\|_{L^2(B_{r}(x_0))}\big).\end{aligned}$$ Next we estimate the terms on the right-hand side above. 
By using [\[tildeu\]](#tildeu){reference-type="eqref" reference="tildeu"} and the local boundedness estimate of $D{\bf u}$ in Lemma [\[lemlocbdd\]](#lemlocbdd){reference-type="ref" reference="lemlocbdd"}, we obtain $$\label{est-uell} \|{\bf u}_{_\ell}(\cdot;x_0)\|_{L^2(B_{r}(x_0))}\leq Nr^{\frac{d}{2}}\Big(\|D{\bf u}\|_{L^{1}(B_1)}+\|p\|_{L^{1}(B_1)}+\sum_{j=1}^{M}|{\bf f}^\alpha|_{1,\delta;\overline{\mathcal{D}_{j}}}+\sum_{j=1}^{M}|g|_{1,\delta;\overline{\mathcal{D}_{j}}}\Big).$$ From the definition of $\ell$ in [\[defell\]](#defell){reference-type="eqref" reference="defell"}, it follows that $$\begin{aligned} \label{estDellk02} \int_{B_r(x_0)\cap\mathcal{D}_i}|D\ell^k|^2\,dx\leq N\int_{B_r'(x'_0)}\frac{\min\{2r,h_i-h_{i-1}\}}{|h_{i}-h_{i-1}|}\, dx'\leq Nr^{d-1}.\end{aligned}$$ See [@dx2022 lemma 2.1] for the properties of $\ell$. Using this and recalling the definition of ${\bf f}$ in [\[defg\]](#defg){reference-type="eqref" reference="defg"}, we get $$\begin{aligned} \label{est-g1} \|{\bf f}\|_{L^2(B_{r}(x_0))}\leq N\mathcal{C}_0r^{\frac{d-1}{2}},\end{aligned}$$ where $\mathcal{C}_0$ is defined in [\[defC0\]](#defC0){reference-type="eqref" reference="defC0"}. Similarly, we have $$\begin{aligned} \left(\fint_{B_{r}(x_{0})}\big|D\ell_i D_i{\bf u}-\sum_{j=1}^{m+1}D \tilde\ell_{i,j}D_i{\bf u}(P_jx_0)\big|^2\,dx\right)^{1/2}\leq Nr^{1/2}\sum_{j=1}^{m+1}\|D^2{\bf u}\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}+N\mathcal{C}_2,\end{aligned}$$ where $\mathcal{C}_2$ is defined in [\[defC3\]](#defC3){reference-type="eqref" reference="defC3"}. According to [\[tildef1\]](#tildef1){reference-type="eqref" reference="tildef1"}, we have $$\begin{aligned} \label{est-f1breve} \|{\bf f}^{\alpha,3}(\cdot;x_0)\|_{L^2(B_{r}(x_0))}\leq Nr^{\frac{d+1}{2}}\sum_{j=1}^{m+1}\|D^2{\bf u}\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}+N\mathcal{C}_2r^{\frac{d}{2}}.\end{aligned}$$ Thus, substituting [\[est-uell\]](#est-uell){reference-type="eqref" reference="est-uell"}, [\[est-g1\]](#est-g1){reference-type="eqref" reference="est-g1"}, and [\[est-f1breve\]](#est-f1breve){reference-type="eqref" reference="est-f1breve"} into [\[est-Dul\]](#est-Dul){reference-type="eqref" reference="est-Dul"}, we obtain $$\begin{aligned} \label{estDuell} &\|D{\bf u}_{_\ell}(\cdot;x_0)\|_{L^2(B_{r/2}(x_0))}\notag\\ &\leq Nr^{\frac{d+1}{2}}\Big(\sum_{j=1}^{m+1}\|D^2{\bf u}\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}+\sum_{j=1}^{m+1}\|Dp\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}\Big) +N\mathcal{C}_2r^{\frac{d}{2}-1}.\end{aligned}$$ Combining [\[estDuell\]](#estDuell){reference-type="eqref" reference="estDuell"} with [\[estauxiu\]](#estauxiu){reference-type="eqref" reference="estauxiu"} and [\[def-tildeu\]](#def-tildeu){reference-type="eqref" reference="def-tildeu"}, the estimate of $\|D\tilde{\bf u}(\cdot;x_0)\|_{L^2(B_{r/2}(x_0))}$ follows. Next we proceed to estimate $\|\tilde p\|_{L^2(B_{r/2}(x_0))}$. Integrating $D_\ell (p\eta^2)$ over $B_r(x_0)$ directly, and using the integration by parts and $\eta\in C_0^\infty(B_r(x_0))$, we obtain $$\int_{B_{r}(x_0)}\big(D_\ell (p\eta^2)+p\eta^2\operatorname{div}\ell\big) \,dx=0.$$ Then by using [@art2005 Lemma 10] again, there exists a function $\boldsymbol\varphi\in H_0^1(B_{r}(x_0))^d$ such that $$\operatorname{div}\boldsymbol\varphi=D_\ell (p\eta^2)+p\eta^2\operatorname{div}\ell\quad\mbox{in}~B_{r}(x_0),$$ and $$\|\boldsymbol\varphi\|_{L^2(B_{r}(x_0))}+r\|D\boldsymbol\varphi\|_{L^2(B_{r}(x_0))}\leq Nr\|D_\ell (p\eta^2)+p\eta^2\operatorname{div}\ell\|_{L^2(B_{r}(x_0))},$$ where $N=N(d)$. 
Moreover, combining [\[defeta\]](#defeta){reference-type="eqref" reference="defeta"}, the local boundedness of $p$ in Lemma [\[lemlocbdd\]](#lemlocbdd){reference-type="ref" reference="lemlocbdd"} and [\[estDellk02\]](#estDellk02){reference-type="eqref" reference="estDellk02"}, we have $$\begin{aligned} \label{estellpeta} &\|\boldsymbol\varphi\|_{L^2(B_{r}(x_0))}+r\|D\boldsymbol\varphi\|_{L^2(B_{r}(x_0))}\nonumber\\ &\leq Nr\|D_\ell p\eta\|_{L^2(B_{r}(x_0))}+Nr\|p\eta D_\ell \eta\|_{L^2(B_{r}(x_0))}+Nr\|p\eta^2\operatorname{div}\ell\|_{L^2(B_{r}(x_0))}\nonumber\\ &\leq Nr\|D_\ell p\eta\|_{L^2(B_{r}(x_0))}+N\mathcal{C}_2r^{d/2}.\end{aligned}$$ Applying $\boldsymbol\varphi$ to [\[equell\]](#equell){reference-type="eqref" reference="equell"} as a test function, we have $$\begin{aligned} \int_{B_{r}(x_0)}\eta^2|D_\ell p|^2\,dx &=-\int_{B_{r}(x_0)}A^{\alpha\beta}D_\beta {\bf u}_{_\ell}D_\alpha\boldsymbol\varphi\,dx-\int_{B_{r}(x_0)}{\bf f}\boldsymbol\varphi\,dx+\int_{B_{r}(x_0)}{\bf f}^{\alpha,3} D_\alpha\boldsymbol\varphi\,dx\\ &\quad-\int_{B_{r}(x_0)}D_\ell p(2p\eta D_\ell\eta+p\eta^2\operatorname{div}\ell)\,dx.\end{aligned}$$ By Young's inequality and [\[estellpeta\]](#estellpeta){reference-type="eqref" reference="estellpeta"}, we have $$\begin{aligned} \|\eta D_\ell p\|_{L^2(B_{r}(x_0))} &\leq \varepsilon_2\|\eta D_\ell p\|_{L^2(B_{r}(x_0))}+N(\varepsilon_2)\big(\|D{\bf u}_{_\ell}\|_{L^2(B_{r}(x_0))}+\|{\bf f}^{\alpha,3}\|_{L^2(B_{r}(x_0))}\\ &\quad+r\|{\bf f}\|_{L^2(B_{r}(x_0))}\big)+N\mathcal{C}_2r^{\frac{d}{2}-1}.\end{aligned}$$ Taking $\varepsilon_2=\frac{1}{2}$, and using $\eta=1$ in $B_{r/2}(x_0)$, [\[est-g1\]](#est-g1){reference-type="eqref" reference="est-g1"}--[\[estDuell\]](#estDuell){reference-type="eqref" reference="estDuell"}, we obtain $$\begin{aligned} \|D_\ell p\|_{L^2(B_{r/2}(x_0))} &\leq Nr^{\frac{d+1}{2}}\Big(\sum_{j=1}^{m+1}\|D^2{\bf u}\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}+\sum_{j=1}^{m+1}\|Dp\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}\Big)\nonumber\\ &\quad+N\mathcal{C}_2r^{\frac{d}{2}-1}.\end{aligned}$$ This together with [\[estauxiu\]](#estauxiu){reference-type="eqref" reference="estauxiu"} gives the estimate of $\|\tilde p\|_{L^2(B_{r/2}(x_0))}$. The lemma is proved. ◻ **Lemma 11**. *Let $\varepsilon\in(0,1)$ and $q\in(1,\infty)$. Suppose that $A^{\alpha\beta}$, ${\bf f}^\alpha$, and $g$ satisfy Assumption [Assumption 2](#assump){reference-type="ref" reference="assump"} with $s=1$. If $({\bf u},p)\in W^{1,q}(B_{1})^d\times L^q(B_1)$ is a weak solution to $$\begin{aligned} \begin{cases} D_\alpha (A^{\alpha\beta}D_\beta {\bf u})+Dp=D_{\alpha}{\bf f}^{\alpha}\\ \operatorname{div}{\bf u}=g \end{cases}\,\,\mbox{in }~B_{1},\end{aligned}$$ then we have $$\begin{aligned} %\label{est Du''} \sum_{j=1}^{m+1}\|D^2{\bf u}\|_{L^\infty(B_{1/4}\cap\overline\mathcal{D}_j)}+\sum_{j=1}^{m+1}\|Dp\|_{L^\infty(B_{1/4}\cap\overline\mathcal{D}_j)}\leq N\mathcal{C}_2,\end{aligned}$$ where $\mathcal{C}_2$ is defined in [\[defC3\]](#defC3){reference-type="eqref" reference="defC3"}, $N>0$ is a constant depending only on $d,m,q,\nu,\varepsilon$, $|A|_{1,\delta;\overline{\mathcal{D}_{j}}}$, and the $C^{2,\mu}$ norm of $h_j$.* *Proof.* For any $s\in(0,1)$, let $\mathbf q_{x_{0},s}^{k'},\mathbf Q_{x_{0},s}\in\mathbb R^d$ be chosen such that $$\Phi(x_{0},s)=\left(\fint_{B_{s}(x_{0})}\big(|D_{\ell^{k'}}\tilde {\bf u}(x;x_0)-\mathbf q_{x_{0},s}^{k'}|^{\frac{1}{2}}+|\tilde {\bf U}(x;x_0)-\mathbf Q_{x_{0},s}|^{\frac{1}{2}}\big)\,dx\right)^{2},$$ where $k'=1,\ldots,d-1$. 
It follows from the triangle inequality that $$\begin{aligned} |\mathbf q_{x_{0},s/2}^{k'}-\mathbf q_{x_{0},s}^{k'}|^{\frac{1}{2}}\leq|D_{\ell^{k'}}\tilde {\bf u}(x;x_0)-\mathbf q_{x_{0}, s/2}^{k'}|^{\frac{1}{2}}+|D_{\ell^{k'}}\tilde {\bf u}(x;x_0)-\mathbf q_{x_{0},s}^{k'}|^{\frac{1}{2}}\end{aligned}$$ and $$\begin{aligned} |\mathbf Q_{x_{0},s/2}-\mathbf Q_{x_{0},s}|^{\frac{1}{2}}\leq|\tilde {\bf U}(x;x_0)-\mathbf Q_{x_{0},s/2}|^{\frac{1}{2}}+|\tilde {\bf U}(x;x_0)-\mathbf Q_{x_{0},s}|^{\frac{1}{2}}.\end{aligned}$$ Taking the average over $x\in B_{s/2}(x_{0})$ and then taking the square, we obtain $$\begin{aligned} |\mathbf q_{x_{0},s/2}^{k'}-\mathbf q_{x_{0},s}^{k'}|+|\mathbf Q_{x_{0},s/2}-\mathbf Q_{x_{0},s}|\leq N(\Phi(x_{0},s/2)+\Phi(x_{0},s)).\end{aligned}$$ By iterating and using the triangle inequality, we derive $$\label{itera} |\mathbf q_{x_{0},2^{-L}s}^{k'}-\mathbf q_{x_{0},s}^{k'}|+|\mathbf Q_{x_{0},2^{-L} s}-\mathbf Q_{x_{0},s}|\leq N\sum_{j=0}^{L}\Phi(x_{0},2^{-j} s).$$ Using [\[def-tildeu\]](#def-tildeu){reference-type="eqref" reference="def-tildeu"} and [\[deftildeU\]](#deftildeU){reference-type="eqref" reference="deftildeU"}, we have $$\begin{aligned} \label{Dellk'tildeu} D_{\ell^{k'}}\tilde {\bf u}(x;x_0)=\ell_i^k\ell_j^{k'}D_iD_j{\bf u}+D_{\ell^{k'}}\ell_i D_i{\bf u}-\sum_{j=1}^{m+1}D_{\ell^{k'}}\tilde\ell_{i,j} D_i{\bf u}(P_jx_0)-D_{\ell^{k'}}{\boldsymbol{\mathfrak u}}\end{aligned}$$ and $$\begin{aligned} \label{tildeU} &\tilde {\bf U}(x;x_0)=n^\alpha\Big(A^{\alpha\beta}D_\beta D_i {\bf u}\ell_i^k-D_{\ell}{\bf f}^\alpha+D_\ell A^{\alpha\beta}D_\beta {\bf u}-A^{\alpha\beta}D_\beta{\boldsymbol{\mathfrak u}}\nonumber\\ &\quad-\delta_{\alpha d}\sum_{j=1}^{m}\mathbbm{1}_{x^d>h_j(x')} (n^d_j(x'))^{-1}\tilde {\bf h}_j(x')-A^{\alpha\beta}\sum_{j=1}^{m+1}\mathbbm{1}_{_{\mathcal{D}_j^c}}D_\beta \tilde\ell_{i,j}D_i{\bf u}(P_jx_0)\Big)+{\bf n}(D_\ell p-\pi).\end{aligned}$$ Recalling the assumption that $D{\bf u}$ and $p$ are piecewise $C^1$, $A^{\alpha\beta}$ and ${\bf f}^\alpha$ are piecewise $C^{1,\delta}$, and using [\[estauxiu\]](#estauxiu){reference-type="eqref" reference="estauxiu"}, it follows that $D_{\ell^{k'}}\tilde {\bf u}(x;x_0),\tilde {\bf U}(x;x_0)\in C^0 (\mathcal{D}_{\varepsilon}\cap\overline{{\mathcal{D}}_{j}})$. Taking $\rho=2^{-L} s$ in [\[est phi\'\]](#est phi'){reference-type="eqref" reference="est phi'"}, we have $$\lim_{L\rightarrow\infty}\Phi(x_{0},2^{-L} s)=0.$$ Thus, for any $x_0\in \mathcal{D}_{\varepsilon}\cap{\mathcal{D}}_{j}$, we obtain $$\lim_{L\rightarrow\infty}\mathbf q_{x_{0},2^{-L}s}^{k'}=D_{\ell^{k'}}\tilde {\bf u}(x_{0};x_0),\quad \lim_{L\rightarrow\infty}\mathbf Q_{x_{0},2^{-L} s}=\tilde {\bf U}(x_{0};x_0).$$ Now taking $L\rightarrow\infty$ in [\[itera\]](#itera){reference-type="eqref" reference="itera"}, choosing $s=r/2$, and using Lemma [Lemma 9](#lemma itera){reference-type="ref" reference="lemma itera"}, we have for $r\in(0,1/4)$, $k'=1,\ldots,d-1$, and $x_0\in \mathcal{D}_{\varepsilon}\cap{{\mathcal{D}}_{j}}$, $$\begin{aligned} \label{diffDtildeu} &|D_{\ell^{k'}}\tilde {\bf u}(x_{0};x_{0})-\mathbf q_{x_{0},r/2}^{k'}|+|\tilde {\bf U}(x_{0};x_{0})-\mathbf Q_{x_{0},r/2}|\nonumber\\ &\leq N\sum_{j=0}^{\infty}\Phi(x_{0},2^{-j-1}r)\leq N\Phi(x_{0},r/2)+N\mathcal{C}_0r^{\delta_{\mu}},\end{aligned}$$ where $\delta_{\mu}=\min\big\{\frac{1}{2},\mu,\delta\big\}$, and $\mathcal{C}_0$ is defined in [\[defC0\]](#defC0){reference-type="eqref" reference="defC0"}. 
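Here the convergence of the series is a consequence of the decay estimate [\[est phi\'\]](#est phi'){reference-type="eqref" reference="est phi'"}: applying it with $\rho=2^{-j-1}r$ and summing the resulting geometric series, we get $$\begin{aligned} \sum_{j=0}^{\infty}\Phi(x_{0},2^{-j-1}r)\leq N\Phi(x_{0},r/2)\sum_{j=0}^{\infty}2^{-(j+1)\delta_{\mu}}+N\mathcal{C}_0r^{\delta_{\mu}}\sum_{j=0}^{\infty}2^{-(j+1)\delta_{\mu}}\leq N\Phi(x_{0},r/2)+N\mathcal{C}_0r^{\delta_{\mu}},\end{aligned}$$ since $\sum_{j\geq0}2^{-(j+1)\delta_{\mu}}<\infty$; this is the summation used in the last step of [\[diffDtildeu\]](#diffDtildeu){reference-type="eqref" reference="diffDtildeu"}. 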
By averaging the inequality $$\begin{aligned} |\mathbf q_{x_{0},r/2}^{k'}|+|\mathbf Q_{x_{0},r/2}|&\leq |D_{\ell^{k'}}\tilde {\bf u}(x;x_{0})-\mathbf q_{x_{0},r/2}^{k'}|+|\tilde {\bf U}(x;x_{0})-\mathbf Q_{x_{0},r/2}|+|D_{\ell^{k'}}\tilde {\bf u}(x;x_{0})|\\ &\quad+|\tilde {\bf U}(x;x_{0})|\end{aligned}$$ over $x\in B_{r/2}(x_{0})$ and then taking the square, we have $$\begin{aligned} |\mathbf q_{x_{0},r/2}^{k'}|+|\mathbf Q_{x_{0},r/2}|&\leq N\Phi(x_{0},r/2)+N\left(\fint_{B_{r/2}(x_{0})}\big(|D_{\ell^{k'}}\tilde {\bf u}(x;x_{0})|^{\frac{1}{2}}+|\tilde {\bf U}(x;x_{0})|^{\frac{1}{2}}\big)\,dx\right)^{2}\\ &\leq Nr^{-d}\Big(\|D_{\ell^{k'}}\tilde {\bf u}(\cdot;x_{0})\|_{L^{1}(B_{r/2}(x_{0}))}+\|\tilde {\bf U}(\cdot;x_{0})\|_{L^{1}(B_{r/2}(x_{0}))}\Big).\end{aligned}$$ Therefore, combining [\[diffDtildeu\]](#diffDtildeu){reference-type="eqref" reference="diffDtildeu"} and the triangle inequality, we obtain $$\begin{aligned} \label{Dx'Uz0} &|D_{\ell^{k'}}\tilde {\bf u}(x_0;x_0)|+|\tilde {\bf U}(x_0;x_0)|\nonumber\\ &\leq Nr^{-d}\Big(\|D_{\ell^{k'}}\tilde {\bf u}(\cdot;x_0)\|_{L^{1}(B_{r/2}(x_0))}+\|\tilde {\bf U}(\cdot;x_0)\|_{L^{1}(B_{r/2}(x_0))}\Big)+N\mathcal{C}_0r^{\delta_{\mu}}.\end{aligned}$$ By using Hölder's inequality and Lemma [Lemma 10](#lemmaup){reference-type="ref" reference="lemmaup"}, we have $$\begin{aligned} &\|D\tilde{\bf u}(\cdot;x_0)\|_{L^1(B_{r/2}(x_0))}+\|\tilde p\|_{L^1(B_{r/2}(x_0))}\nonumber\\ &\leq Nr^{d+\frac{1}{2}}\big(\sum_{j=1}^{m+1}\|D^2{\bf u}\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}+\sum_{j=1}^{m+1}\|Dp\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}\big)+N\mathcal{C}_2r^{d-1}.\end{aligned}$$ Recalling [\[deftildeU\]](#deftildeU){reference-type="eqref" reference="deftildeU"} and [\[def-tildeu\]](#def-tildeu){reference-type="eqref" reference="def-tildeu"}, and using [\[estauxiu\]](#estauxiu){reference-type="eqref" reference="estauxiu"}, we have $$\begin{aligned} &\|\tilde{\bf U}(\cdot;x_0)\|_{L^1(B_{r/2}(x_0))}\nonumber\\ &\leq\|D\tilde{\bf u}(\cdot;x_0)\|_{L^1(B_{r/2}(x_0))}+\|\tilde{\bf f}(\cdot;x_0)\|_{L^1(B_{r/2}(x_0))}+\|\tilde p\|_{L^1(B_{r/2}(x_0))}\nonumber\\ &\leq Nr^{d+\frac{1}{2}}\big(\sum_{j=1}^{m+1}\|D^2{\bf u}\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}+\sum_{j=1}^{m+1}\|Dp\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}\big)+N\mathcal{C}_2r^{d-1}.\end{aligned}$$ These estimates together with [\[Dx\'Uz0\]](#Dx'Uz0){reference-type="eqref" reference="Dx'Uz0"} imply that $$\begin{aligned} \label{estDtildeuU} &|D_{\ell^{k'}}\tilde {\bf u}(x_0;x_0)|+|\tilde {\bf U}(x_0;x_0)|\nonumber\\ &\leq Nr^{\delta_{\mu}}\big(\sum_{j=1}^{m+1}\|D^2{\bf u}\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}+\sum_{j=1}^{m+1}\|Dp\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}\big)+N\mathcal{C}_2r^{-1}.\end{aligned}$$ It follows from [\[stokes\]](#stokes){reference-type="eqref" reference="stokes"} that $$\label{Dalpbetau} A^{\alpha\beta}D_{\alpha\beta}{\bf u}+Dp=D_\alpha {\bf f}^\alpha-D_\alpha A^{\alpha\beta}D_{\beta}{\bf u}$$ and $$\label{Ddiv} D(\operatorname{div}{\bf u})=Dg,$$ in $B_{1-\varepsilon}\cap\mathcal{D}_j$, $j=1,\ldots,M$. To solve for $D^2{\bf u}$ and $Dp$, we need to show that the determinant of the coefficient matrix in [\[Dellk\'tildeu\]](#Dellk'tildeu){reference-type="eqref" reference="Dellk'tildeu"}, [\[tildeU\]](#tildeU){reference-type="eqref" reference="tildeU"}, [\[Dalpbetau\]](#Dalpbetau){reference-type="eqref" reference="Dalpbetau"}, and [\[Ddiv\]](#Ddiv){reference-type="eqref" reference="Ddiv"} is not equal to 0. 
To this end, let us define $$y=\Lambda x,\quad{\bf v}(y)=\Lambda{\bf u}(x),\quad \pi(y)=p(x),\quad\mathcal{A}^{\alpha\beta}(y)=\Lambda\Lambda^{\alpha k}A^{ks}(x)\Lambda^{s\beta}\Gamma,$$ where $\Gamma=\Lambda^{-1}$, $\Lambda$ is the linear transformation from the coordinate system associated with $0$ to the coordinate system associated with the fixed point $x\in B_r(x_0)$, which is defined in Section [2](#secpreliminaries){reference-type="ref" reference="secpreliminaries"}. A direct calculation yields $$\label{nAalp} n^\alpha A^{\alpha\beta}D_\beta D_i{\bf u}\ell_i^k+{\bf n}D_\ell p=\Gamma\mathcal{A}^{d\beta}D_{\beta}D_{k}{\bf v}+{\bf n}D_k\pi.$$ Using the definitions of $\Lambda$ and ${\bf n}$ in Section [2](#secpreliminaries){reference-type="ref" reference="secpreliminaries"}, we have $$\Lambda{\bf n}=(0,\dots,0,1)^{\top}=:{\bf e}_d.$$ Then [\[nAalp\]](#nAalp){reference-type="eqref" reference="nAalp"} becomes $$\Lambda\big(n^\alpha A^{\alpha\beta}D_\beta D_i{\bf u}\ell_i^k+{\bf n}D_\ell p\big)=\mathcal{A}^{d\beta}D_{\beta}D_{k}{\bf v}+{\bf e}_dD_k\pi.$$ Similarly, we obtain $$\ell_i^k\ell_j^{k'}D_iD_j{\bf u}=\Gamma D_{k}D_{k'}{\bf v}$$ and $$A^{\alpha\beta}D_{\alpha\beta}{\bf u}+Dp=\Gamma (\mathcal{A}^{\alpha\beta}D_{\alpha\beta}{\bf v}+D\pi),\quad D(\operatorname{div}{\bf u})=D(\operatorname{div}{\bf v})\Lambda.$$ Thus, in view of [\[Dellk\'tildeu\]](#Dellk'tildeu){reference-type="eqref" reference="Dellk'tildeu"}, [\[tildeU\]](#tildeU){reference-type="eqref" reference="tildeU"}, [\[Dalpbetau\]](#Dalpbetau){reference-type="eqref" reference="Dalpbetau"}, and [\[Ddiv\]](#Ddiv){reference-type="eqref" reference="Ddiv"}, we obtain the equations for $D^2{\bf v}$ and $D\pi$ as follows: $$\begin{aligned} \label{eq-D2v} \begin{cases} D_{k}D_{k'}{\bf v}=\mathcal{R}_1,\\ \mathcal{A}^{d\beta}D_{\beta}D_{k}{\bf v}+{\bf e}_dD_k\pi=\mathcal{R}_2,\\ \mathcal{A}^{\alpha\beta}D_{\alpha\beta}{\bf v}+D\pi=\mathcal{R}_3,\\ D(\operatorname{div}{\bf v})=\mathcal{R}_4, \end{cases}\end{aligned}$$ where $k,k'=1,\dots,d-1$, $\mathcal{R}_m$, $m=1,2,3,4$, is derived from the terms in [\[Dellk\'tildeu\]](#Dellk'tildeu){reference-type="eqref" reference="Dellk'tildeu"}, [\[tildeU\]](#tildeU){reference-type="eqref" reference="tildeU"}, [\[Dalpbetau\]](#Dalpbetau){reference-type="eqref" reference="Dalpbetau"}, and [\[Ddiv\]](#Ddiv){reference-type="eqref" reference="Ddiv"}, respectively. It follows from the first and last equations in [\[eq-D2v\]](#eq-D2v){reference-type="eqref" reference="eq-D2v"} that $D_{k}D_{k'}{\bf v}$ and $D_kD_dv^d$ are solved, where $k,k'=1,\dots,d-1$. If we solve for $D_dD_iv^j$ and $D_d\pi$, $i=1,\dots,d$, $j=1,\dots,d-1$, and $(i,j)=(d,d)$, then $D_k\pi$ are obtained from the second equation in [\[eq-D2v\]](#eq-D2v){reference-type="eqref" reference="eq-D2v"}, $k=1,\dots,d-1$. 
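To spell out the first part of this reduction, note that for $k=1,\dots,d-1$ the $k$-th component of the last equation in [\[eq-D2v\]](#eq-D2v){reference-type="eqref" reference="eq-D2v"} reads $\sum_{j=1}^{d}D_kD_jv^j=\mathcal{R}_4^k$, so that $$D_kD_dv^d=\mathcal{R}_4^k-\sum_{j=1}^{d-1}D_kD_jv^j$$ is determined once the tangential derivatives $D_kD_{k'}{\bf v}$ are known from the first equation. It therefore remains to solve for $D_dD_iv^j$ and $D_d\pi$. 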
For this, we rewrite the last three equations in [\[eq-D2v\]](#eq-D2v){reference-type="eqref" reference="eq-D2v"} as follows, for $i=1,\dots,d-1$ and $k=1,\dots,d-1$: $$\begin{aligned} \label{eq-D2vDpi} \begin{cases} \sum_{j=1}^{d-1}\mathcal{A}_{ij}^{dd}D_{d}D_{k}v^j=\widetilde{\mathcal{R}}_2^i,\\ \sum_{\beta,j=1}^{d-1}\mathcal{A}_{ij}^{d\beta}D_{d\beta}v^j+\sum_{\alpha,j=1}^{d-1}\mathcal{A}_{ij}^{\alpha d}D_{\alpha d}v^j+\mathcal{A}_{ij}^{dd}D_{d }^2v^j-\sum_{j=1}^{d-1}\mathcal{A}_{dj}^{dd}D_{d}D_{i}v^j=\widetilde{\mathcal{R}}_3^i,\\ \sum_{\beta,j=1}^{d-1}\mathcal{A}_{dj}^{d\beta}D_{d\beta}v^j+\sum_{\alpha,j=1}^{d-1}\mathcal{A}_{dj}^{\alpha d}D_{\alpha d}v^j+\mathcal{A}_{dj}^{dd}D_{d }^2v^j+D_d\pi=\widetilde{\mathcal{R}}_3^d,\\ D_dD_jv^j=\mathcal{R}_4^d, \end{cases}\end{aligned}$$ where $$\begin{aligned} \widetilde{\mathcal{R}}_2^i=\mathcal{R}_2^i-\sum_{\beta=1}^{d-1}\mathcal{A}_{ij}^{d\beta}D_{\beta}D_{k}v^j-\mathcal{A}_{id}^{dd}D_{d}D_{k}v^d,\end{aligned}$$ $$\begin{aligned} \widetilde{\mathcal{R}}_3^i&=\mathcal{R}_3^i-\sum_{\alpha,\beta=1}^{d-1}\mathcal{A}_{ij}^{\alpha\beta}D_{\alpha\beta}v^j-\mathcal{R}_2^d+\sum_{\beta=1}^{d-1}\mathcal{A}_{dj}^{d\beta}D_{\beta}D_{i}v^j-\sum_{\beta=1}^{d-1}\mathcal{A}_{id}^{d\beta}D_{d\beta}v^d\\ &\quad-\sum_{\alpha=1}^{d-1}\mathcal{A}_{id}^{\alpha d}D_{\alpha d}v^d+\mathcal{A}_{dd}^{dd}D_{d}D_{i}v^d,\end{aligned}$$ $$\begin{aligned} \widetilde{\mathcal{R}}_3^d=\mathcal{R}_3^d-\sum_{\alpha,\beta=1}^{d-1}\mathcal{A}_{dj}^{\alpha\beta}D_{\alpha\beta}v^j-\sum_{\beta=1}^{d-1}\mathcal{A}_{dd}^{d\beta}D_{d\beta}v^d-\sum_{\alpha=1}^{d-1}\mathcal{A}_{dd}^{\alpha d}D_{\alpha d}v^d,\end{aligned}$$ and $\mathcal{R}_m^i$ is the $i$-th component of $\mathcal{R}_m$, $m=2,3,4$. A direct calculation shows that the determinant of the coefficient matrix in [\[eq-D2vDpi\]](#eq-D2vDpi){reference-type="eqref" reference="eq-D2vDpi"} is $(\mathrm{cof}(A_{dd}^{dd}))^d\neq0$, where $\mathrm{cof}(A_{dd}^{dd})$ denotes the cofactor of the entry $A_{dd}^{dd}$ in $(A^{dd})$. This implies that $D_dD_iv^j$ and $D_d\pi$ can be solved for by Cramer's rule, and thus so can $D^2{\bf u}$ and $Dp$. 
Moreover, using [\[estDtildeuU\]](#estDtildeuU){reference-type="eqref" reference="estDtildeuU"} and [\[estauxiu\]](#estauxiu){reference-type="eqref" reference="estauxiu"}, we obtain $$\begin{aligned} \label{D2bfuDp} &|D^2{\bf u}(x_0)|+|Dp(x_0)|\notag\\ &\leq N\Big(|D_{\ell^{k'}}\tilde {\bf u}(x_0;x_0)|+|\tilde {\bf U}(x_0;x_0)|+|D{\boldsymbol{\mathfrak u}}(x_0)|\Big)\nonumber\\ &\quad+N\big(\sum_{j=1}^{m+1}|{\bf f}^\alpha|_{1,\delta;\overline{\mathcal{D}_{j}}}+\sum_{j=1}^{m+1}|g|_{1,\delta; \overline{\mathcal{D}_{j}}}+\|D{\bf u}\|_{L^{1}(B_1)}+\|p\|_{L^{1}(B_1)}\big)\nonumber\\ &\leq Nr^{\delta_{\mu}}\big(\sum_{j=1}^{m+1}\|D^2{\bf u}\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}+\sum_{j=1}^{m+1}\|Dp\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}\big)+N\mathcal{C}_2r^{-1}.\end{aligned}$$ For any $x_1\in B_{1/4}$ and $r\in(0,1/4)$, by taking the supremum with respect to $x_0\in B_r(x_1)\cap \mathcal{D}_j$, we have $$\begin{aligned} &\sum_{j=1}^{m+1}\|D^2{\bf u}\|_{L^\infty(B_{r}(x_1)\cap\mathcal{D}_j)}+\sum_{j=1}^{m+1}\|Dp\|_{L^\infty(B_{r}(x_1)\cap\mathcal{D}_j)}\nonumber\\ &\leq Nr^{\delta_{\mu}}\big(\sum_{j=1}^{m+1}\|D^2{\bf u}\|_{L^\infty(B_{2r}(x_1)\cap\mathcal{D}_j)}+\sum_{j=1}^{m+1}\|Dp\|_{L^\infty(B_{2r}(x_1)\cap\mathcal{D}_j)}\big)+N\mathcal{C}_2r^{-1}.\end{aligned}$$ Applying an iteration argument (see, for instance, [@dx2019 Lemma 3.4]), we conclude that $$\begin{aligned} \sum_{j=1}^{m+1}\|D^2{\bf u}\|_{L^\infty(B_{1/4}\cap\overline\mathcal{D}_j)}+\sum_{j=1}^{m+1}\|Dp\|_{L^\infty(B_{1/4}\cap\overline\mathcal{D}_j)}\leq N\mathcal{C}_2.\end{aligned}$$ We finish the proof of the lemma. ◻ # Proof of Theorem [Theorem 3](#Mainthm){reference-type="ref" reference="Mainthm"} with $s=1$ {#prfprop} In this section, we first estimate $|D_{\ell^{k'}}\tilde {\bf u}(x;x_{0})-D_{\ell^{k'}}\tilde {\bf u}(x;x_{1})|$ and $|\tilde {\bf U}(x;x_{0})-\tilde {\bf U}(x;x_{1})|$, where $x_0, x_1\in B_{1-\varepsilon}$. Then we establish an a priori estimate of the modulus of continuity of $(D_{\ell^{k'}}\tilde{\bf u}, \tilde {\bf U})$ by using the results in Sections [4](#auxilemma){reference-type="ref" reference="auxilemma"} and [5](#bddestimate){reference-type="ref" reference="bddestimate"}, which implies Theorem [Theorem 3](#Mainthm){reference-type="ref" reference="Mainthm"} with $s=1$. **Lemma 12**. *Let $\varepsilon\in(0,1)$ and $q\in(1,\infty)$. Suppose that $A^{\alpha\beta}$, ${\bf f}^\alpha$, and $g$ satisfy Assumption [Assumption 2](#assump){reference-type="ref" reference="assump"} with $s=1$. 
If $(\tilde{\bf u},\tilde p)$ is a weak solution to [\[eqtildeu\]](#eqtildeu){reference-type="eqref" reference="eqtildeu"}, then for any $x_0, x_1\in B_{1-\varepsilon}$, we have $$\begin{aligned} \label{diffUz0z1} |D_{\ell^{k'}}\tilde {\bf u}(x;x_{0})-D_{\ell^{k'}}\tilde {\bf u}(x;x_{1})|+|\tilde {\bf U}(x;x_{0})-\tilde {\bf U}(x;x_{1})|\leq N\mathcal{C}_2r,\end{aligned}$$ where $r=|x_0-x_1|$, $\mathcal{C}_2$ is defined in [\[defC3\]](#defC3){reference-type="eqref" reference="defC3"}, $N$ depends on $d,m,q,\nu,\varepsilon$, $|A|_{1,\delta;\overline{\mathcal{D}_{j}}}$, and the $C^{2,\mu}$ characteristic of $\mathcal{D}_{j}$.*

*Proof.* We first note that for any $x_0\in B_{1/8}\cap\overline\mathcal{D}_{j_0}$ and $x_1\in B_{1/8}\cap\overline\mathcal{D}_{j_1}$, by using [\[Pjx\]](#Pjx){reference-type="eqref" reference="Pjx"} and $h_j\in C^{2,\mu}$, we have $$|P_jx_0-P_jx_1|\leq N|x_0-x_1|.$$ Combining with Lemma [Lemma 11](#lemma Dtildeu){reference-type="ref" reference="lemma Dtildeu"}, we have $$\begin{aligned} \label{DuPjx} \left|D{\bf u}(P_jx_0)-D{\bf u}(P_jx_1)\right|\leq Nr\|D^2{\bf u}\|_{L^\infty(B_{1/4}\cap\overline\mathcal{D}_j)}\leq N\mathcal{C}_2r.\end{aligned}$$ By [\[eq-rmu\]](#eq-rmu){reference-type="eqref" reference="eq-rmu"}, one has $$\begin{aligned} \begin{cases} D_{\alpha}(\tilde A^{\alpha\beta}D_\beta \tilde{\boldsymbol{\mathfrak u}}_j)+D\tilde{\mathfrak \pi}_j=-D_{\alpha}(\mathbbm{1}_{_{\mathcal{D}_j^c}}A^{\alpha\beta}D_\beta \tilde\ell_{i,j}(D_i{\bf u}(P_jx_0)-D_i{\bf u}(P_jx_1)))& \mbox{in}~B_1,\\ \operatorname{div}\tilde{\boldsymbol{\mathfrak u}}_j=\mathbbm{1}_{_{\mathcal{D}_j^c}}D \tilde\ell_{i,j}(D_i{\bf u}(P_jx_1)-D_i{\bf u}(P_jx_0))\\ \qquad\qquad+(\mathbbm{1}_{_{\mathcal{D}_j^c}}D \tilde\ell_{i,j}(D_i{\bf u}(P_jx_0)-D_i{\bf u}(P_jx_1)))_{B_1}& \mbox{in}~B_1,\\ \tilde{\boldsymbol{\mathfrak u}}_j=0& \mbox{on}~\partial B_1, \end{cases}\end{aligned}$$ where $$\tilde{\boldsymbol{\mathfrak u}}_j:={\boldsymbol{\mathfrak u}}_j(x;x_{0})-{\boldsymbol{\mathfrak u}}_j(x;x_{1}),\quad\tilde{\pi}_j:=\pi_j(x;x_0)-\pi_j(x;x_1).$$ Then by using Lemma [\[lemlocbdd\]](#lemlocbdd){reference-type="ref" reference="lemlocbdd"}, [\[DuPjx\]](#DuPjx){reference-type="eqref" reference="DuPjx"}, and the fact that $\mathbbm{1}_{_{\mathcal{D}_j^c}}D_\beta \tilde\ell_{i,j}$ is piecewise $C^\mu$, $i=1,\ldots,m+1$, we obtain $$\begin{aligned} %\label{mathuz} &|\tilde{\boldsymbol{\mathfrak u}}_j|_{1,\mu';\overline{\mathcal{D}_i}\cap B_{1-\varepsilon}}\nonumber\\ &\leq N\|D\tilde{\boldsymbol{\mathfrak u}}_j\|_{L^1(B_{1})}+\|\pi_j\|_{L^1(B_{1})}+N\sum_{j=1}^{m+1}|\mathbbm{1}_{_{\mathcal{D}_j^c}}A^{\alpha\beta}D_\beta \tilde\ell_{i,j}(D_i{\bf u}(P_jx_0)-D_i{\bf u}(P_jx_1))|_{\mu;B_1}\nonumber\\ &\quad+N|\mathbbm{1}_{_{\mathcal{D}_j^c}}D\tilde\ell_{i,j}(D_i{\bf u}(P_jx_1)-D_i{\bf u}(P_jx_0))|_{\mu;B_1}\nonumber\\ &\leq N\|\mathbbm{1}_{_{\mathcal{D}_j^c}}A^{\alpha\beta}D_\beta \tilde\ell_{i,j}(D_i{\bf u}(P_jx_0)-D_i{\bf u}(P_jx_1))\|_{L^q(B_{1})}+N\mathcal{C}_2r\nonumber\\ &\leq N\mathcal{C}_2r,\end{aligned}$$ where $\mu'=\min\{\mu,\frac{1}{2}\}$. 
Thus, $$\begin{aligned} %\label{mathuz} |{\boldsymbol{\mathfrak u}}(\cdot;x_{0})-{\boldsymbol{\mathfrak u}}(\cdot;x_{1})|_{1,\mu';\overline{\mathcal{D}_i}\cap B_{1-\varepsilon}}\leq \sum_{j=1}^{m+1}|\tilde{\boldsymbol{\mathfrak u}}_j|_{1,\mu';\overline{\mathcal{D}_i}\cap B_{1-\varepsilon}}\leq N\mathcal{C}_2r.\end{aligned}$$ This combined with [\[def-tildeu\]](#def-tildeu){reference-type="eqref" reference="def-tildeu"}, [\[defu0\]](#defu0){reference-type="eqref" reference="defu0"}, and [\[DuPjx\]](#DuPjx){reference-type="eqref" reference="DuPjx"} yields $$\begin{aligned} \label{diffDuz0z1} &|D_{\ell^{k'}}\tilde {\bf u}(x;x_{0})-D_{\ell^{k'}}\tilde {\bf u}(x;x_{1})|\nonumber\\ &=\Big|\sum_{j=1}^{m+1}D_{\ell^{k'}}\tilde{\ell}_{i,j}\big(D_i{\bf u}(P_jx_0)-D_i{\bf u}(P_jx_1)\big) +D_{\ell^{k'}}{\boldsymbol{\mathfrak u}}(x;x_{0})-D_{\ell^{k'}}{\boldsymbol{\mathfrak u}}(x;x_{1})\Big|\leq N\mathcal{C}_2r.\end{aligned}$$ Similarly, we have the estimate of $|\tilde {\bf U}(x;x_{0})-\tilde {\bf U}(x;x_{1})|$ and thus the proof of Lemma [Lemma 12](#lemma uU){reference-type="ref" reference="lemma uU"} is finished. ◻ Together with the results in Sections [4](#auxilemma){reference-type="ref" reference="auxilemma"} and [5](#bddestimate){reference-type="ref" reference="bddestimate"}, we obtain an a priori estimate of the modulus of continuity of $(D_{\ell^{k'}}\tilde{\bf u}, \tilde {\bf U})$ as follows. **Proposition 13**. *Let $\varepsilon\in(0,1)$ and $q\in(1,\infty)$. Suppose that $A^{\alpha\beta}$, ${\bf f}^\alpha$, and $g$ satisfy Assumption [Assumption 2](#assump){reference-type="ref" reference="assump"} with $s=1$. If $({\bf u},p)\in W^{1,q}(B_{1})^d\times L^q(B_1)$ is a weak solution to $$\begin{cases} D_\alpha (A^{\alpha\beta}D_\beta {\bf u})+Dp=D_{\alpha}{\bf f}^{\alpha}\\ \operatorname{div}{\bf u}=g \end{cases}\,\,\mbox{in }~B_1,$$ then for any $x_0, x_1\in B_{1-\varepsilon}$, we have $$\begin{aligned} \label{holdertildeuU} |(D_{\ell^{k'}}\tilde {\bf u}(x_{0};x_{0})-D_{\ell^{k'}}\tilde {\bf u}(x_{1};x_{1})|+|\tilde {\bf U}(x_{0};x_{0})-\tilde {\bf U}(x_{1};x_{1})|\leq N\mathcal{C}_2|x_{0}-x_{1}|^{\delta_{\mu}},\end{aligned}$$ where $k'=1,\ldots,d-1$, $\mathcal{C}_2$ is defined in [\[defC3\]](#defC3){reference-type="eqref" reference="defC3"}, $\tilde {\bf u}$ and $\tilde {\bf U}$ are defined in [\[def-tildeu\]](#def-tildeu){reference-type="eqref" reference="def-tildeu"} and [\[deftildeU\]](#deftildeU){reference-type="eqref" reference="deftildeU"}, respectively, $\delta_{\mu}=\min\big\{\frac{1}{2},\mu,\delta\big\}$, $N$ depends on $d,m,q,\nu,\varepsilon$, $|A|_{1,\delta;\overline{\mathcal{D}_{j}}}$, and the $C^{2,\mu}$ characteristic of $\mathcal{D}_{j}$.* *Proof.* It follows from [\[Dellk\'tildeu\]](#Dellk'tildeu){reference-type="eqref" reference="Dellk'tildeu"} that $$\begin{aligned} \label{Dellk'tildeu00} D_{\ell^{k'}}\tilde {\bf u}(x_0;x_0)=\ell_i^k(x_0)\ell_j^{k'}(x_0)D_iD_j{\bf u}(x_0)-\sum_{j=1,j\neq j_0}^{m+1}D_{\ell^{k'}}\tilde\ell_{i,j}(x_0) D_i{\bf u}(P_jx_0)-D_{\ell^{k'}}{\boldsymbol{\mathfrak u}}(x_0).\end{aligned}$$ For any $x_{1}\in B_{1/8}\cap\mathcal{D}_{j_{1}}$, where $j_{1}\in\{1,\ldots,m+1\}$, if $|x_{0}-x_{1}|\geq1/16$, then by using [\[Dellk\'tildeu00\]](#Dellk'tildeu00){reference-type="eqref" reference="Dellk'tildeu00"}, Lemma [\[lemlocbdd\]](#lemlocbdd){reference-type="ref" reference="lemlocbdd"}, Lemma [Lemma 11](#lemma Dtildeu){reference-type="ref" reference="lemma Dtildeu"}, and [\[estauxiu\]](#estauxiu){reference-type="eqref" reference="estauxiu"}, we have 
$$\begin{aligned} %\label{esttildeu} |D_{\ell^{k'}}\tilde {\bf u}(x_{0};x_0)-D_{\ell^{k'}}\tilde {\bf u}(x_{1};x_1)| &\leq N\sum_{j=1}^{m+1}\|D^2{\bf u}\|_{L^\infty(B_{1/4}\cap\overline\mathcal{D}_j)}+N\|D{\boldsymbol{\mathfrak u}}\|_{L^\infty(B_{1/4})}+N\mathcal{C}_2\notag\\ &\leq N\mathcal{C}_2|x_{0}-x_{1}|^{\delta_{\mu}}.\end{aligned}$$ Similarly, by using [\[tildeU\]](#tildeU){reference-type="eqref" reference="tildeU"} and the equation [\[stokes\]](#stokes){reference-type="eqref" reference="stokes"}, we have $$\begin{aligned} \label{tildeU00} \tilde {\bf U}(x_0;x_0)&=n^\alpha(x_0)\Big(A^{\alpha\beta}(x_0)D_\beta D_i {\bf u}(x_0)\ell_i^k(x_0)-D_{\ell}{\bf f}^\alpha(x_0)+D_\ell A^{\alpha\beta}(x_0)D_\beta {\bf u}(x_0)\nonumber\\ &\quad-A^{\alpha\beta}(x_0)D_\beta{\boldsymbol{\mathfrak u}}(x_0)-\delta_{\alpha d}\sum_{j=1}^{m}\mathbbm{1}_{x^d>h_j(x')} (n^d_j(x'_0))^{-1}\tilde {\bf h}_j(x'_0)\nonumber\\ &\quad-A^{\alpha\beta}(x_0)\sum_{j=1,j\neq j_0}^{m+1}\mathbbm{1}_{_{\mathcal{D}_j^c}}D_\beta \tilde\ell_{i,j}(x_0)D_i{\bf u}(P_jx_0)\Big)\nonumber\\ &\quad+{\bf n}(x_0)\ell(x_0)\big(D_\alpha {\bf f}^\alpha(x_0)-D_\alpha A^{\alpha\beta}D_{\beta}{\bf u}(x_0)-A^{\alpha\beta}(x_0)D_{\alpha\beta}{\bf u}(x_0)\big)\nonumber\\ &\quad-{\bf n}(x_0)\pi(x_0;x_0),\end{aligned}$$ and thus, $$\begin{aligned} |\tilde {\bf U}(x_{0};x_0)-\tilde {\bf U}(x_{1};x_1)| \leq N\mathcal{C}_2|x_{0}-x_{1}|^{\delta_{\mu}}.\end{aligned}$$ If $|x_0-x_1|<1/16$, then we set $r=|x_0-x_1|$. By the triangle inequality, for any $x\in B_{r}(x_0)\cap B_{r}(x_1)$, we have $$\label{differenceDu} \begin{split} &|D_{\ell^{k'}}\tilde {\bf u}(x_0;x_0)-D_{\ell^{k'}}\tilde {\bf u}(x_1;x_1)|^{\frac{1}{2}}+|\tilde {\bf U}(x_0;x_0)-\tilde {\bf U}(x_1;x_1)|^{\frac{1}{2}}\\ &\leq|D_{\ell^{k'}}\tilde {\bf u}(x_0;x_0)-\mathbf q_{x_0,r}^{k'}|^{\frac{1}{2}}+|D_{\ell^{k'}}\tilde {\bf u}(x;x_0)-\mathbf q_{x_0,r}^{k'}|^{\frac{1}{2}}+|D_{\ell^{k'}}\tilde {\bf u}(x;x_1)-\mathbf q_{x_1,r}^{k'}|^{\frac{1}{2}}\\ &\quad+|D_{\ell^{k'}}\tilde {\bf u}(x;x_0)-D_{\ell^{k'}}\tilde {\bf u}(x;x_1)|^{\frac{1}{2}}+|D_{\ell^{k'}}\tilde {\bf u}(x_1;x_1)-\mathbf q_{x_1,r}^{k'}|^{\frac{1}{2}}\\ &\quad+|\tilde {\bf U}(x_0;x_0)-\mathbf Q_{x_0,r}|^{\frac{1}{2}}+|\tilde {\bf U}(x;x_0)-\mathbf Q_{x_0,r}|^{\frac{1}{2}}+|\tilde {\bf U}(x;x_1)-\mathbf Q_{x_1,r}|^{\frac{1}{2}}\\ &\quad+|\tilde {\bf U}(x;x_0)-\tilde {\bf U}(x;x_1)|^{\frac{1}{2}}+|\tilde {\bf U}(x_1;x_1)-\mathbf Q_{x_1,r}|^{\frac{1}{2}}, \end{split}$$ where $\mathbf q_{x_{0},r}^{k'},\mathbf Q_{x_{0},r},\mathbf q_{x_{1},r}^{k'},\mathbf Q_{x_{1},r}\in\mathbb R^d$, $k'=1,\ldots,d-1$, satisfy $$\Phi(x_{0},r)=\left(\fint_{B_{r}(x_{0})}\big(|D_{\ell^{k'}}\tilde {\bf u}(x;x_{0})-\mathbf q_{x_{0},r}^{k'}|^{\frac{1}{2}}+|\tilde {\bf U}(x;x_{0})-\mathbf Q_{x_{0},r}|^{\frac{1}{2}}\big)\,dx\right)^{2},$$ and $$\Phi(x_{1},r)=\left(\fint_{B_{r}(x_{1})}\big(|D_{\ell^{k'}}\tilde {\bf u}(x;x_1)-\mathbf q_{x_{1},r}^{k'}|^{\frac{1}{2}}+|\tilde {\bf U}(x;x_1)-\mathbf Q_{x_{1},r}|^{\frac{1}{2}}\big)\,dx\right)^{2},$$ respectively. 
Taking the average over $x\in B_{r}(x_{0})\cap B_r(x_1)$ and then taking the square in [\[differenceDu\]](#differenceDu){reference-type="eqref" reference="differenceDu"}, we obtain $$\label{diffDtildeuU} \begin{split} &|D_{\ell^{k'}}\tilde {\bf u}(x_{0};x_0)-D_{\ell^{k'}}\tilde {\bf u}(x_{1};x_1)|+|\tilde {\bf U}(x_{0};x_0)-\tilde {\bf U}(x_{1};x_1)|\\ &\leq|D_{\ell^{k'}}\tilde {\bf u}(x_0;x_0)-\mathbf q_{x_{0},r}^{k'}|+|\tilde {\bf U}(x_{0};x_0)-\mathbf Q_{x_{0},r}|+\Phi(x_0,r)+\Phi(x_1,r)\\ &\quad+|D_{\ell^{k'}}\tilde {\bf u}(x_{1};x_1)-\mathbf q_{x_{1},r}^{k'}|+|\tilde {\bf U}(x_1;x_1)-\mathbf Q_{x_{1},r}|\\ &\quad+\left(\fint_{B_{r}(x_{0})\cap B_r(x_1)}\big(|D_{\ell^{k'}}\tilde {\bf u}(x;x_0)-D_{\ell^{k'}}\tilde {\bf u}(x;x_1)|^{\frac{1}{2}}+|\tilde {\bf U}(x;x_0)-\tilde {\bf U}(x;x_1)|^{\frac{1}{2}}\big)\,dx\right)^{2}. \end{split}$$ It follows from Lemmas [Lemma 9](#lemma itera){reference-type="ref" reference="lemma itera"} and [Lemma 11](#lemma Dtildeu){reference-type="ref" reference="lemma Dtildeu"}, [\[estauxiu\]](#estauxiu){reference-type="eqref" reference="estauxiu"}, [\[estfintDtildeu\]](#estfintDtildeu){reference-type="eqref" reference="estfintDtildeu"}, and [\[estfinttildef\]](#estfinttildef){reference-type="eqref" reference="estfinttildef"} with $B_{1/8}$ in place of $B_r(x_0)$ that $$\begin{aligned} \label{estsupphi} \sup_{x_{0}\in B_{1/8}}\Phi(x_{0},r)&\leq Nr^{\delta_{\mu}}\Big(\sum_{j=1}^{m+1}\|D^2{\bf u}\|_{L^\infty(B_{1/4}\cap\overline\mathcal{D}_j)}+\sum_{j=1}^{m+1}\|Dp\|_{L^\infty(B_{1/4}\cap\overline\mathcal{D}_j)}+\|D\tilde {\bf u}\|_{L^1(B_{1/4})}\nonumber\\ &\quad+\|\tilde {\bf f}\|_{L^1(B_{1/4})}+\|\tilde p\|_{L^1(B_{1/4})}+\sum_{j=1}^{m+1}|{\bf f}^\alpha|_{1,\delta; \overline{\mathcal{D}_{j}}}+\sum_{j=1}^{m+1}|g|_{1,\delta; \overline{\mathcal{D}_{j}}}+\|D{\bf u}\|_{L^{1}(B_1)}\nonumber\\ &\quad+\|p\|_{L^{1}(B_1)}\Big)\leq N\mathcal{C}_2r^{\delta_{\mu}}.\end{aligned}$$ Applying [\[diffDtildeu\]](#diffDtildeu){reference-type="eqref" reference="diffDtildeu"} and using [\[estsupphi\]](#estsupphi){reference-type="eqref" reference="estsupphi"}, we derive $$\begin{aligned} \label{estDk'DdU} \sup_{x_{0}\in B_{1/8}}\big(|D_{\ell^{k'}}\tilde {\bf u}(x_{0};x_{0})-\mathbf q_{x_{0},r}^{k'}|+|\tilde {\bf U}(x_{0};x_{0})-\mathbf Q_{x_{0},r}|\big)\leq N\mathcal{C}_2r^{\delta_{\mu}}.\end{aligned}$$ Substituting [\[estsupphi\]](#estsupphi){reference-type="eqref" reference="estsupphi"}, [\[estDk\'DdU\]](#estDk'DdU){reference-type="eqref" reference="estDk'DdU"}, [\[diffDuz0z1\]](#diffDuz0z1){reference-type="eqref" reference="diffDuz0z1"}, and [\[diffUz0z1\]](#diffUz0z1){reference-type="eqref" reference="diffUz0z1"} into [\[diffDtildeuU\]](#diffDtildeuU){reference-type="eqref" reference="diffDtildeuU"}, we obtain [\[holdertildeuU\]](#holdertildeuU){reference-type="eqref" reference="holdertildeuU"}. 
◻ ***Proof of Theorem [Theorem 3](#Mainthm){reference-type="ref" reference="Mainthm"} with $s=1$**.* By using [\[Dalpbetau\]](#Dalpbetau){reference-type="eqref" reference="Dalpbetau"} and [\[Ddiv\]](#Ddiv){reference-type="eqref" reference="Ddiv"} at the point $x=x_0$, [\[Dellk\'tildeu00\]](#Dellk'tildeu00){reference-type="eqref" reference="Dellk'tildeu00"}, [\[tildeU00\]](#tildeU00){reference-type="eqref" reference="tildeU00"}, and Cramer's rule, we get that $D^{2}{\bf u}(x_0)$ and $Dp(x_0)$ are combinations of $$\label{uteq} Dg(x_0),\quad D_\alpha {\bf f}^\alpha(x_0)-D_\alpha A^{\alpha\beta}(x_0)D_{\beta}{\bf u}(x_0),$$ $$\label{Duell} D_{\ell^{k'}}\tilde {\bf u}(x_0;x_0)+\sum_{j=1,j\neq j_0}^{m+1}D_{\ell^{k'}}\tilde\ell_{i,j}(x_0) D_i{\bf u}(P_jx_0)+D_{\ell^{k'}}{\boldsymbol{\mathfrak u}}(x_0),$$ and $$\begin{aligned} \label{tildeUeq} &\tilde {\bf U}(x_0;x_0)+n^\alpha(x_0)\Big(D_{\ell}{\bf f}^\alpha(x_0)-D_\ell A^{\alpha\beta}(x_0)D_\beta {\bf u}(x_0)+A^{\alpha\beta}(x_0)D_\beta{\boldsymbol{\mathfrak u}}(x_0)\nonumber\\ &\,+\delta_{\alpha d}\sum_{j=1}^{m}\mathbbm{1}_{x^d>h_j(x')} (n^d_j(x'_0))^{-1}\tilde {\bf h}_j(x'_0)+A^{\alpha\beta}(x_0)\sum_{j=1,j\neq j_0}^{m+1}\mathbbm{1}_{_{\mathcal{D}_j^c}}D_\beta \tilde\ell_{i,j}(x_0)D_i{\bf u}(P_jx_0)\Big)\nonumber\\ &\quad-{\bf n}(x_0)\ell(x_0)\big(D_\alpha {\bf f}^\alpha(x_0)-D_\alpha A^{\alpha\beta}D_{\beta}{\bf u}(x_0)\big)+{\bf n}(x_0)\pi(x_0;x_0).\end{aligned}$$ Similarly, for any $\tilde x_{0}\in B_{1-\varepsilon}\cap\overline{\mathcal{D}}_{j_0}$, $D^{2}{\bf u}(\tilde x_0)$ and $Dp(\tilde x_0)$ are combinations of [\[uteq\]](#uteq){reference-type="eqref" reference="uteq"}--[\[tildeUeq\]](#tildeUeq){reference-type="eqref" reference="tildeUeq"} with $x_0$ replaced with $\tilde x_0$. It follows from [\[holdertildeuU\]](#holdertildeuU){reference-type="eqref" reference="holdertildeuU"} and [\[estauxiu\]](#estauxiu){reference-type="eqref" reference="estauxiu"} that $$[D^2{\bf u}]_{\delta_{\mu};B_{1-\varepsilon}\cap\overline{\mathcal{D}}_{j_0}}+[Dp]_{\delta_{\mu};B_{1-\varepsilon}\cap\overline{\mathcal{D}}_{j_0}} \le N\mathcal{C}_2$$ for any $j_0=1,\ldots,m+1$. Theorem [Theorem 3](#Mainthm){reference-type="ref" reference="Mainthm"} is proved. ◻ # The case when $s\geq2$ {#general} ## Main ingredients of the proof We first use an induction argument for $s\geq 2$ to obtain $$\begin{aligned} \label{Dlu} D_{\ell}\cdots D_{\ell}{\bf u} =\ell_{i_1}\ell_{i_2}\cdots\ell_{i_{_{s}}}D_{i_1}D_{i_2}\cdots D_{i_{_{s}}}{\bf u}+R({\bf u}),\end{aligned}$$ where we used $D_\ell(fg)=gD_\ell f+fD_\ell g$ and the Einstein summation convention over repeated indices, $\ell_{i_{\tau}}:=\ell_{i_{\tau}}^{k_{\tau}}$, $\tau=1,\ldots,s$, $k_{\tau}=1,\ldots,d-1$, $i_{\tau}=1,\ldots,d$, and $$\begin{aligned} %\label{defLu} R({\bf u})&=D_{\ell_{i_1}}(\ell_{i_{2}}\cdots\ell_{i_{_{s}}})D_{i_{2}}\cdots D_{i_{_{s}}}{\bf u}\nonumber\\ &\quad+D_{\ell_{i_1}}\Bigg(D_{\ell_{i_2}}(\ell_{i_{3}}\cdots\ell_{i_{_{s}}})D_{i_{3}}\cdots D_{i_{_{s}}}{\bf u}+D_{\ell_{i_2}}\Big(D_{\ell_{i_3}}(\ell_{i_{4}}\cdots\ell_{i_{_{s}}})D_{i_{4}}\cdots D_{i_{_{s}}}{\bf u}\nonumber\\ &\qquad\qquad+D_{\ell_{i_3}}\big(D_{\ell_{i_4}}(\ell_{i_{5}}\cdots\ell_{i_{_{s}}})D_{i_{5}}\cdots D_{i_{_{s}}}u+\cdots+D_{\ell_{i_{s-2}}}(D_{\ell_{i_{s-1}}}\ell_{i_s}D_{i_s}{\bf u})\big)\Big)\Bigg),\end{aligned}$$ which is the summation of the products of directional derivatives of $\ell$ and derivatives of ${\bf u}$. 
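For orientation, in the simplest case $s=2$ the identity [\[Dlu\]](#Dlu){reference-type="eqref" reference="Dlu"} is just one application of the product rule: writing $D_{\ell^{k_1}}=\ell_{i_1}^{k_1}D_{i_1}$ and using $\ell_{i_\tau}=\ell_{i_\tau}^{k_\tau}$, we have $$\begin{aligned} D_{\ell^{k_1}}D_{\ell^{k_2}}{\bf u}=\ell_{i_1}^{k_1}D_{i_1}\big(\ell_{i_2}^{k_2}D_{i_2}{\bf u}\big) =\ell_{i_1}\ell_{i_2}D_{i_1}D_{i_2}{\bf u}+D_{\ell^{k_1}}\big(\ell_{i_2}\big)D_{i_2}{\bf u},\end{aligned}$$ so that for $s=2$ the remainder $R({\bf u})$ consists of the single term $D_{\ell^{k_1}}(\ell_{i_2})D_{i_2}{\bf u}$; for larger $s$ the same computation is iterated, which produces the nested expression above. 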
Taking $D_\ell\cdots D_{\ell}$ to the equation $D_\alpha(A^{\alpha\beta}D_\beta {\bf u})+Dp=D_\alpha {\bf f}^\alpha$ and $\operatorname{div}{\bf u}=g$, respectively, we obtain in $\bigcup_{j=1}^{m+1}\mathcal{D}_j$, $$\begin{aligned} \label{eqDllu} \begin{cases} D_\alpha\big(A^{\alpha\beta}D_\beta (D_{\ell}\cdots D_{\ell}{\bf u})\big)+D(D_{\ell}\cdots D_{\ell}p)=D_\alpha \breve {\bf f}^{\alpha,1}+\breve {\bf f},\\ \operatorname{div}(D_\ell\cdots D_{\ell}{\bf u})=\ell_{i_1}\ell_{i_2}\cdots\ell_{i_{_{s}}}D_{i_1}D_{i_2}\cdots D_{i_{_{s}}}g+D_\alpha(R(u^\alpha))\\ \qquad\qquad\qquad\qquad+D_\alpha(\ell_{i_1}\ell_{i_2}\cdots\ell_{i_{_{s}}})D_{i_1}D_{i_2}\cdots D_{i_{_{s}}}u^\alpha, \end{cases}\end{aligned}$$ where $\breve {\bf f}^{\alpha,1}=(\breve {f}_1^{\alpha,1},\dots,\breve { f}_d^{\alpha,1})^\top$, $\breve {\bf f}=(\breve {f}_1,\dots,\breve {f}_d)^\top$, for the $i$-th equation, $i=1,\dots,d$, $$\begin{aligned} \label{defbrevef1} \breve { f}_i^{\alpha,1}&:=\ell_{i_1}\ell_{i_2}\cdots\ell_{i_{_{s}}}D_{i_1}D_{i_2}\cdots D_{i_{_{s}}}{f}_i^\alpha+A_{ij}^{\alpha\beta}D_\beta(\ell_{i_1}\ell_{i_2}\cdots\ell_{i_{_{s}}})D_{i_1}D_{i_2}\cdots D_{i_{_{s}}}{u^j}\notag\\ &\quad +A_{ij}^{\alpha\beta}D_\beta(R({u^j}))+\delta_{\alpha i}R(p)-\ell_{i_1}\ell_{i_2}\cdots\ell_{i_{_{s}}}\big(D_{i_1}A_{ij}^{\alpha\beta}D_\beta D_{i_2}\cdots D_{i_{_{s}}}{u^j}\nonumber\\ &\qquad+\sum_{\tau=1}^{s-1}D_{i_1}\cdots D_{i_{\tau}}(D_{i_{\tau+1}}A_{ij}^{\alpha\beta}D_\beta D_{i_{\tau+2}}\cdots D_{i_{_{s}}}{u^j})\big),\end{aligned}$$ and $$\begin{aligned} \label{defbrevef} \breve { f}_i&:=D_\alpha(\ell_{i_1}\ell_{i_2}\cdots\ell_{i_{_{s}}})\big(D_{i_1}D_{i_2}\cdots D_{i_{_{s}}}(A_{ij}^{\alpha\beta}D_\beta {u^j}-{f}_i^\alpha+\delta_{\alpha i}p)\big)\nonumber\\ &\quad+R(D_\alpha({f}_i^\alpha-A_{ij}^{\alpha\beta}D_\beta {u^j})-D_ip).\end{aligned}$$ Similarly, by taking $D_{\ell}\cdots D_\ell$ to $[n^\alpha_j(A^{\alpha\beta} D_\beta {\bf u} -{\bf f}^\alpha)+p{\bf n}_j]_{\Gamma_j}=0$, we obtain the boundary condition $$\label{boundary} [n_j^\alpha (A^{\alpha\beta} D_\beta (D_{\ell}\cdots D_{\ell}{\bf u})- \breve {\bf f}^{\alpha,1})]_{\Gamma_j}=\breve {\bf h}_j,$$ where $$\begin{aligned} %\label{brevehj} \breve {\bf h}_j&:=\Big[-\ell_{i_1}\ell_{i_2}\cdots\ell_{i_{_{s}}}\big(\sum_{\tau=1}^{s}D_{i_{\tau}}n_j^\alpha D_{i_1}\cdots D_{\tau_{s-1}}D_{i_{\tau+1}}\cdots D_{i_s}(A^{\alpha\beta}D_\beta {\bf u}-{\bf f}^\alpha)\nonumber\\ &\quad+\sum_{\tau=1}^{s}D_{i_{\tau}}{\bf n}_j D_{i_1}\cdots D_{\tau_{s-1}}D_{i_{\tau+1}}\cdots D_{i_s}p\nonumber\\ &\quad+\sum_{1\leq \tau_1<\tau_2\leq s}D_{i_{\tau_1}}D_{i_{\tau_2}}n_j^\alpha D_{i_1}\cdots D_{i_{\tau_1}-1}D_{i_{\tau_1}+1}\cdots D_{i_{\tau_2}-1}D_{i_{\tau_2}+1}\cdots D_{i_s}(A^{\alpha\beta}D_\beta {\bf u}-{\bf f}^\alpha)\nonumber\\ &\quad+\sum_{1\leq \tau_1<\tau_2\leq s}D_{i_{\tau_1}}D_{i_{\tau_2}}{\bf n}_j D_{i_1}\cdots D_{i_{\tau_1}-1}D_{i_{\tau_1}+1}\cdots D_{i_{\tau_2}-1}D_{i_{\tau_2}+1}\cdots D_{i_s}p\nonumber\\ &\quad+\cdots+D_{i_1}D_{i_2}\cdots D_{i_s}n_j^\alpha (A^{\alpha\beta}D_\beta {\bf u}-{\bf f}^\alpha)+D_{i_1}D_{i_2}\cdots D_{i_s}{\bf n}_jp\big)\Big]_{\Gamma_j}\notag\\ &\quad -[R(n_j^\alpha (A^{\alpha\beta} D_\beta {\bf u}- {\bf f}^\alpha))+R({\bf n}_jp)]_{\Gamma_j}.\end{aligned}$$ By adding a term $$\sum_{j=1}^{m} D_d(\mathbbm{1}_{x^d>h_j(x')} (n^d_j(x'))^{-1}\breve{\bf h}_j(x'))$$ to the first equation in [\[eqDllu\]](#eqDllu){reference-type="eqref" reference="eqDllu"}, then [\[eqDllu\]](#eqDllu){reference-type="eqref" reference="eqDllu"} and 
[\[boundary\]](#boundary){reference-type="eqref" reference="boundary"} become $$\begin{aligned} \label{eq Dbreveu} \begin{cases} D_\alpha\big(A^{\alpha\beta}D_\beta (D_{\ell}\cdots D_{\ell}{\bf u})\big)+D(D_{\ell}\cdots D_{\ell}p)=D_\alpha \breve {\bf f}^{\alpha,2}+\breve {\bf f},\\ \operatorname{div}(D_\ell\cdots D_{\ell}{\bf u})=\ell_{i_1}\ell_{i_2}\cdots\ell_{i_{_{s}}}D_{i_1}D_{i_2}\cdots D_{i_{_{s}}}g+D_\alpha(R(u^\alpha))\\ \qquad\qquad\qquad\qquad+D_\alpha(\ell_{i_1}\ell_{i_2}\cdots\ell_{i_{_{s}}})D_{i_1}D_{i_2}\cdots D_{i_{_{s}}}u^\alpha,\\ [n_j^\alpha (A^{\alpha\beta} D_\beta (D_{\ell}\cdots D_{\ell}{\bf u})- \breve {\bf f}^{\alpha,2})]_{\Gamma_j}=0, \end{cases}\end{aligned}$$ where $$\begin{aligned} %\label{brevef} \breve {\bf f}^{\alpha,2}:=\breve {\bf f}^{\alpha,1}+\delta_{\alpha d}\sum_{j=1}^m\mathbbm{1}_{x^d>h_j(x')} (n^d_j(x'))^{-1}\breve{\bf h}_j(x').\end{aligned}$$ As mentioned above [\[tildeu\]](#tildeu){reference-type="eqref" reference="tildeu"}, since $D_\beta(\ell_{i_1}\ell_{i_2}\cdots\ell_{i_{_{s}}})$ and $R(u^j)$ are singular at any point where two interfaces touch or are close to each other, we cannot prove the smallness of the mean oscillation of [\[defbrevef1\]](#defbrevef1){reference-type="eqref" reference="defbrevef1"}. To cancel out the singularity, we choose $$\begin{aligned} \label{defu-0} &{\bf u}_0:={\bf u}_0(x;x_0)\notag\\ &=\sum_{j=1}^{m+1}\tilde\ell_{i_{1},j}\tilde\ell_{i_{2},j}\cdots\tilde\ell_{i_{s},j}D_{i_1}D_{i_2}\cdots D_{i_{_{s}}}{\bf u}(P_jx_0)\nonumber\\ &\quad+\sum_{j=1}^{m+1}\sum_{\tau=1}^{s-1}D_{\tilde\ell_{i_1,j}}D_{\tilde\ell_{i_2,j}}\cdots D_{\tilde\ell_{i_\tau},j}(\tilde\ell_{i_{\tau+1},j}\cdots\tilde\ell_{i_{_{s}},j})\big(D_{i_{\tau+1}}\cdots D_{i_{_{s}}}{\bf u}(P_jx_0)\nonumber\\ &\quad+(x-x_0)\cdot DD_{i_{\tau+1}}\cdots D_{i_{_{s}}}{\bf u}(P_jx_0)\big)+\cdots\nonumber\\ &\quad+\sum_{j=1}^{m+1}(D_{\tilde\ell_{i_{s-1},j}}\tilde \ell_{i_s,j})\tilde\ell_{i_{1},j}\tilde\ell_{i_{2},j}\cdots\tilde\ell_{i_{s-2},j}\big(D_{i_1}D_{i_2}\cdots D_{i_{_{s-2}}}D_{i_{_{s}}}{\bf u}(P_jx_0)\nonumber\\ &\quad+(x-x_0)\cdot DD_{i_{1}}D_{i_{2}}\cdots D_{i_{_{s-2}}}D_{i_{_{s}}}{\bf u}(P_jx_0)\big),\end{aligned}$$ where $P_jx_0$ is defined in [\[Pjx\]](#Pjx){reference-type="eqref" reference="Pjx"}, $x_{0}\in B_{3/4}\cap \mathcal{D}_{j_{0}}$, $r\in(0,1/4)$, $\tilde\ell_{,j}$ is the smooth extension of $\ell|_{\mathcal{D}_j}$ to $\cup_{k=1,k\neq j}^{m+1}\mathcal{D}_k$. 
Denote $$\label{defuell} {\bf u}^{\ell}:={\bf u}^{\ell}(x;x_0)=D_{\ell}\cdots D_{\ell}{\bf u}-{\bf u}_0.$$ Then by using [\[eq Dbreveu\]](#eq Dbreveu){reference-type="eqref" reference="eq Dbreveu"}, we obtain $$\begin{aligned} \label{homosecond2} \begin{cases} D_\alpha(A^{\alpha\beta}D_\beta {\bf u}^{\ell})+DD_{\ell}\cdots D_{\ell}p=D_\alpha \breve{\bf f}^{\alpha,3}+\breve{\bf f}, \\ [n_j^\alpha (A^{\alpha\beta} D_\beta {\bf u}^{\ell}-{\bf f}^{\alpha,3})+{\bf n}_jD_{\ell}\cdots D_{\ell}p]_{\Gamma_j}=0,\\ \operatorname{div}{\bf u}^{\ell}=\ell_{i_1}\ell_{i_2}\cdots\ell_{i_{_{s}}}D_{i_1}D_{i_2}\cdots D_{i_{_{s}}}g+D_\alpha(R(u^\alpha))-\operatorname{div}{\bf u}_0\\ \qquad\qquad+D_\alpha(\ell_{i_1}\ell_{i_2}\cdots\ell_{i_{_{s}}})D_{i_1}D_{i_2}\cdots D_{i_{_{s}}}u^\alpha, \end{cases}\end{aligned}$$ where $\breve{\bf f}^{\alpha,3}=(\breve f_1^{\alpha,3},\dots,\breve f_d^{\alpha,3})^{\top}$, and $$\begin{aligned} \label{brevef} \breve f_i^{\alpha,3}&:=\breve f_i^{\alpha,3}(x;x_0)=\breve f_i^{\alpha,2}-A_{ij}^{\alpha\beta}D_\beta u_0^j,\quad i=1,\dots,d.\end{aligned}$$ Finally, we consider the following problem: $$\begin{aligned} \label{eqmathsfu} \begin{cases} D_\alpha(\tilde A^{\alpha\beta}D_\beta {\mathsf u})+D\pi=-D_{\alpha}\big(A^{\alpha\beta}{\bf F}_\beta\big)\\ \operatorname{div}{\mathsf u}=-{\bf E}+({\bf E})_{B_1} \end{cases}\,\, \mbox{in}~B_1,\end{aligned}$$ where $({\mathsf u}(\cdot;x_0),\pi(\cdot;x_0))\in W_0^{1,q}(B_1)^d\times L_0^q(B_1)$, the coefficient $\tilde A^{\alpha\beta}$ is defined in [\[mathcalA\]](#mathcalA){reference-type="eqref" reference="mathcalA"}, $$\begin{aligned} \label{defmathF} {\bf F}_\beta&:=\sum_{j=1}^{m+1}\mathbbm{1}_{_{\mathcal{D}_j^c}}D_\beta (\tilde\ell_{i_{1},j}\tilde\ell_{i_{2},j}\cdots\tilde\ell_{i_{s},j})D_{i_1}D_{i_2}\cdots D_{i_{_{s}}}{\bf u}(P_jx_0)+\cdots\nonumber\\ &\quad+\sum_{j=1}^{m+1}\mathbbm{1}_{_{\mathcal{D}_j^c}}D_\beta\big((D_{\tilde\ell_{i_{s-1},j}}\tilde \ell_{i_s,j})\tilde\ell_{i_{1},j}\tilde\ell_{i_{2},j}\cdots\tilde\ell_{i_{s-2},j}\big)\big(D_{i_1}D_{i_2}\cdots D_{i_{_{s-2}}}D_{i_{_{s}}}{\bf u}(P_jx_0)\nonumber\\ &\quad\quad+(x-x_0)\cdot DD_{i_{1}}D_{i_{2}}\cdots D_{i_{_{s-2}}}D_{i_{_{s}}}{\bf u}(P_jx_0)\big)\notag\\ &\quad+\sum_{j=1}^{m+1}\mathbbm{1}_{_{\mathcal{D}_j^c}} (D_{\tilde\ell_{i_{s-1},j}}\tilde \ell_{i_s,j})\tilde\ell_{i_{1},j}\tilde\ell_{i_{2},j}\cdots \tilde\ell_{i_{s-2},j} D_\beta D_{i_{1}}D_{i_{2}}\cdots D_{i_{_{s-2}}}D_{i_{_{s}}}{\bf u}(P_jx_0),\end{aligned}$$ which is the summation of the products of $\mathbbm{1}_{_{\mathcal{D}_j^c}}$ and derivatives of the terms on the right-hand side of [\[defu-0\]](#defu-0){reference-type="eqref" reference="defu-0"}, and $$\begin{aligned} {\bf E}&:=\sum_{j=1}^{m+1}\mathbbm{1}_{_{\mathcal{D}_j^c}}D (\tilde\ell_{i_{1},j}\tilde\ell_{i_{2},j}\cdots\tilde\ell_{i_{s},j})D_{i_1}D_{i_2}\cdots D_{i_{_{s}}}{\bf u}(P_jx_0)+\cdots\nonumber\\ &\quad+\sum_{j=1}^{m+1}\mathbbm{1}_{_{\mathcal{D}_j^c}}D\big((D_{\tilde\ell_{i_{s-1},j}}\tilde \ell_{i_s,j})\tilde\ell_{i_{1},j}\tilde\ell_{i_{2},j}\cdots\tilde\ell_{i_{s-2},j}\big)\big(D_{i_1}D_{i_2}\cdots D_{i_{_{s-2}}}D_{i_{_{s}}}{\bf u}(P_jx_0)\nonumber\\ &\quad\quad+(x-x_0)\cdot DD_{i_{1}}D_{i_{2}}\cdots D_{i_{_{s-2}}}D_{i_{_{s}}}{\bf u}(P_jx_0)\big)\notag\\ &\quad+\sum_{j=1}^{m+1}\mathbbm{1}_{_{\mathcal{D}_j^c}} (D_{\tilde\ell_{i_{s-1},j}}\tilde \ell_{i_s,j})\tilde\ell_{i_{1},j}\tilde\ell_{i_{2},j}\cdots \tilde\ell_{i_{s-2},j} DD_{i_{1}}D_{i_{2}}\cdots D_{i_{_{s-2}}}D_{i_{_{s}}}{\bf u}(P_jx_0).\end{aligned}$$ Define $$\label{defbreveu} \breve {\bf 
u}:=\breve {\bf u}(x;x_0)={\bf u}^{\ell}-{\mathsf u},\quad\breve p:=\breve p(x;x_0)=D_{\ell}\cdots D_{\ell}p-\pi.$$ Then it follows from [\[homosecond2\]](#homosecond2){reference-type="eqref" reference="homosecond2"} and [\[eqmathsfu\]](#eqmathsfu){reference-type="eqref" reference="eqmathsfu"} that in $B_{3/4}$, $\breve {\bf u}$ and $\breve p$ satisfy $$\begin{aligned} \label{eqbreveu} \begin{cases} D_\alpha(A^{\alpha\beta}D_\beta \breve {\bf u})+D\breve p=D_\alpha \breve {\bf f}^\alpha+\breve{\bf f},\\ \operatorname{div}\breve {\bf u}=\ell_{i_1}\ell_{i_2}\cdots\ell_{i_{_{s}}}D_{i_1}D_{i_2}\cdots D_{i_{_{s}}}g-\operatorname{div}{\bf u}_0+D_\alpha(\ell_{i_1}\ell_{i_2}\cdots\ell_{i_{_{s}}})D_{i_1}D_{i_2}\cdots D_{i_{_{s}}}u^\alpha\\ \qquad\qquad+{\bf E}-({\bf E})_{B_1}, \end{cases}\end{aligned}$$ where $\breve {\bf f}^\alpha=(\breve f_1^\alpha,\dots,\breve f_d^\alpha)^{\top}$, and for $i=1,\dots,d$, $$\begin{aligned} \label{def-brevefalpha} \breve f_i^\alpha:=\breve f_i^\alpha(x;x_0)=\breve f_i^{\alpha,1}+\delta_{\alpha d}\sum_{j=1}^m\mathbbm{1}_{x^d>h_j(x')} (n^d_j(x'))^{-1}\breve h_j^i(x')-A_{ij}^{\alpha\beta}D_\beta u_0^j+A_{ij}^{\alpha\beta}F_\beta^j,\end{aligned}$$ and $\breve f_i^{\alpha,1}$ is defined in [\[defbrevef1\]](#defbrevef1){reference-type="eqref" reference="defbrevef1"}. The general case $s\geq2$ will be proved by induction on $s$. If $A^{\alpha\beta}$, ${\bf f}^{\alpha}$, and $g$ are piecewise $C^{s-1,\delta}$, and the interfacial boundaries are $C^{s,\mu}$, then we have $$\begin{aligned} \label{estinduction} &|{\bf u}|_{s,\delta_{\mu};\mathcal{D}_{\varepsilon}\cap\overline{{\mathcal{D}}_{j_0}}}+|p|_{s-1,\delta_{\mu};\mathcal{D}_{\varepsilon}\cap\overline{{\mathcal{D}}_{j_0}}}\nonumber\\ &\leq N\Big(\|D{\bf u}\|_{L^{1}(\mathcal{D})}+\|p\|_{L^{1}(\mathcal{D})}+\sum_{j=1}^{M}|{\bf f}^\alpha|_{s-1,\delta;\overline{\mathcal{D}_{j}}}+\sum_{j=1}^{M}|g|_{s-1,\delta;\overline{\mathcal{D}_{j}}}\Big),\end{aligned}$$ where $j_0=1,\dots,m+1$, $\delta_{\mu}=\min\big\{\frac{1}{2},\mu,\delta\big\}$, and $N$ depends on $d,m,q,\nu,\varepsilon$, the $C^{s,\mu}$ characteristic of $\mathcal{D}_{j}$, and $|A|_{s-1+\delta;\overline{\mathcal{D}_{j}}}$. Now assuming that $A^{\alpha\beta}$, ${\bf f}^{\alpha}$, and $g$ are piecewise $C^{s,\delta}$, and the interfacial boundaries are $C^{s+1,\mu}$, we will prove that ${\bf u}$ is piecewise $C^{s+1,\delta_\mu}$ and $p$ is piecewise $C^{s,\delta_\mu}$. Recalling that $\tilde\ell_{,j}$ is the smooth extension of $\ell|_{\mathcal{D}_j}$ to $\cup_{k=1,k\neq j}^{m+1}\mathcal{D}_k$ and using [\[estinduction\]](#estinduction){reference-type="eqref" reference="estinduction"}, one can see that the right-hand side of [\[eqmathsfu\]](#eqmathsfu){reference-type="eqref" reference="eqmathsfu"} is piecewise $C^{\delta_{\mu}}$. Then by applying Lemma [\[lemlocbdd\]](#lemlocbdd){reference-type="ref" reference="lemlocbdd"} to [\[eqmathsfu\]](#eqmathsfu){reference-type="eqref" reference="eqmathsfu"}, we have $$\begin{aligned} \label{est-Dmathu} &|\mathsf u|_{1+\delta_\mu;\overline{\mathcal{D}_i}\cap B_{1-\varepsilon}}+|\pi|_{\delta_\mu;\overline{\mathcal{D}_i}\cap B_{1-\varepsilon}}\nonumber\\ &\leq N\big(\|D{\bf u}\|_{L^{1}(\mathcal{D})}+\|p\|_{L^{1}(\mathcal{D})}+\sum_{j=1}^{M}|{\bf f}^\alpha|_{s-1,\delta;\overline{\mathcal{D}_{j}}}+\sum_{j=1}^{M}|g|_{s-1,\delta;\overline{\mathcal{D}_{j}}}\big),\end{aligned}$$ where $i=1,\dots,m+1$. 
Therefore, combining with [\[defbreveu\]](#defbreveu){reference-type="eqref" reference="defbreveu"}, to derive the regularity of $D_{\ell}\cdots D_{\ell}{\bf u}$ and $D_{\ell}\cdots D_{\ell}p$, it suffices to prove that for $\breve {\bf u}$ and $\breve p$. For this, by replicating the argument in the proof of Lemma [Lemma 9](#lemma itera){reference-type="ref" reference="lemma itera"}, we obtain the decay estimate of $\Psi(x_0,r)$ as follows, where $$%\label{defPhi} \Psi(x_0,r):=\inf_{\mathbf q^{k'},\mathbf Q\in\mathbb R^{d}}\left(\fint_{B_r(x_0)}\big(|D_{\ell^{k'}}\breve {\bf u}(x;x_0)-\mathbf q^{k'}|^{\frac{1}{2}}+|\breve {\bf U}(x;x_0)-\mathbf Q|^{\frac{1}{2}}\big)\,dx \right)^{2},$$ and $$\label{defbreveU} \breve {\bf U}(x;x_0)=n^\alpha(A^{\alpha\beta}D_\beta \breve {\bf u}-\breve {\bf f}^\alpha)+{\bf n}\breve p.$$ **Lemma 14**. *Let $\varepsilon\in(0,1)$ and $q\in(1,\infty)$. Suppose that $A^{\alpha\beta}$, ${\bf f}^\alpha$, and $g$ satisfy Assumption [Assumption 2](#assump){reference-type="ref" reference="assump"} with $s\geq2$. If $(\breve{\bf u},\breve p)$ is a weak solution to [\[eqbreveu\]](#eqbreveu){reference-type="eqref" reference="eqbreveu"}, then for any $0<\rho\leq r\leq 1/4$, we have $$\begin{aligned} %\label{est phi'} \Psi(x_{0},\rho)&\leq N\Big(\frac{\rho}{r}\Big)^{\delta_{\mu}}\Psi(x_{0},r/2)+N\mathcal{C}_3\rho^{\delta_{\mu}},\end{aligned}$$ where $$\begin{aligned} %\label{defC3} \mathcal{C}_3&:=\sum_{j=1}^{m+1}\|D^{s+1}{\bf u}\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}+\sum_{j=1}^{m+1}\|D^sp\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}+\mathcal{C}_4,\end{aligned}$$ $$\begin{aligned} \label{defC4} \mathcal{C}_4:=\|D{\bf u}\|_{L^{1}(B_1)}+\|p\|_{L^{1}(B_1)}+\sum_{j=1}^{M}|{\bf f}^\alpha|_{s,\delta;\overline{\mathcal{D}_{j}}}+\sum_{j=1}^{M}|g|_{s,\delta; \overline{\mathcal{D}_{j}}},\end{aligned}$$ $\delta_{\mu}=\min\big\{\frac{1}{2},\mu,\delta\big\}$, $N$ depends on $d,m,q,\nu$, the $C^{s+1,\mu}$ norm of $h_j$, and $|A|_{s,\delta;\overline{\mathcal{D}_{j}}}$.* By the definitions of $\breve{\bf f}$, ${\bf u}_0$, and $\breve{\bf f}^{\alpha,3}$ in [\[defbrevef\]](#defbrevef){reference-type="eqref" reference="defbrevef"}, [\[defu-0\]](#defu-0){reference-type="eqref" reference="defu-0"}, and[\[brevef\]](#brevef){reference-type="eqref" reference="brevef"}, respectively, using [\[estDellk02\]](#estDellk02){reference-type="eqref" reference="estDellk02"}, and mimicking the proof of Lemma [Lemma 10](#lemmaup){reference-type="ref" reference="lemmaup"}, we obtain the following result. **Lemma 15**. *Under the same assumptions as in Lemma [Lemma 14](#lemmaiter){reference-type="ref" reference="lemmaiter"}, we have $$\begin{aligned} %\label{estDuell} &\|D\breve{\bf u}(\cdot;x_0)\|_{L^2(B_{r/2}(x_0))}+\|\breve p(\cdot;x_0)\|_{L^2(B_{r/2}(x_0))}\nonumber\\ &\leq Nr^{\frac{d+1}{2}}\Big(\sum_{j=1}^{m+1}\|D^{s+1}{\bf u}\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}+\sum_{j=1}^{m+1}\|D^sp\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}\Big)+N\mathcal{C}_4r^{\frac{d}{2}-1},\end{aligned}$$ where $x_0\in \mathcal{D}_{\varepsilon}\cap{{\mathcal{D}}_{j_0}}$, $r\in(0,1/4)$, $\breve{\bf u}$ and $\breve p$ are defined in [\[defbreveu\]](#defbreveu){reference-type="eqref" reference="defbreveu"}, the constant $N>0$ depends on $d,m,q,\nu,\varepsilon$, $|A|_{s,\delta;\overline{\mathcal{D}_{j}}}$, and the $C^{s+1,\mu}$ norm of $h_j$.* **Lemma 16**. 
*Under the same assumptions as in Lemma [Lemma 14](#lemmaiter){reference-type="ref" reference="lemmaiter"}, if $({\bf u},p)\in W^{1,q}(B_{1})^d\times L^q(B_1)$ is a weak solution to $$\begin{aligned} \begin{cases} D_\alpha (A^{\alpha\beta}D_\beta {\bf u})+Dp=D_{\alpha}{\bf f}^{\alpha}\\ \operatorname{div}{\bf u}=g \end{cases}\,\,\mbox{in }~B_{1},\end{aligned}$$ then we have $$\begin{aligned} %\label{est Du''} \sum_{j=1}^{m+1}\|D^{s+1}{\bf u}\|_{L^\infty(B_{1/4}\cap\overline\mathcal{D}_j)}+\sum_{j=1}^{m+1}\|D^{s}p\|_{L^\infty(B_{1/4}\cap\overline\mathcal{D}_j)}\leq N\mathcal{C}_4,\end{aligned}$$ where $\mathcal{C}_4$ is defined in [\[defC4\]](#defC4){reference-type="eqref" reference="defC4"}, $N>0$ is a constant depending on $d,m,q,\nu,\varepsilon$, $|A|_{s,\delta;\overline{\mathcal{D}_{j}}}$, and the $C^{s+1,\mu}$ norm of $h_j$.* *Proof.* The proof is similar to that of Lemma [Lemma 11](#lemma Dtildeu){reference-type="ref" reference="lemma Dtildeu"}. It follows from [\[Dlu\]](#Dlu){reference-type="eqref" reference="Dlu"}, [\[defuell\]](#defuell){reference-type="eqref" reference="defuell"}, [\[defbreveu\]](#defbreveu){reference-type="eqref" reference="defbreveu"}, and [\[def-brevefalpha\]](#def-brevefalpha){reference-type="eqref" reference="def-brevefalpha"} that $$\begin{aligned} \label{Dbreveu} D_{\ell^{k}}\breve {\bf u}(x;x_0) &=\ell_{i_1}\ell_{i_2}\cdots\ell_{i_{_{s}}}D_{\ell^k}D_{i_1}D_{i_2}\cdots D_{i_{_{s}}}{\bf u}+D_{\ell^k}(\ell_{i_1}\ell_{i_2}\cdots\ell_{i_{_{s}}})D_{i_1}D_{i_2}\cdots D_{i_{_{s}}}{\bf u}\nonumber\\ &\quad+D_{\ell^k}(R({\bf u}))-D_{\ell^{k}} {\bf u}_0-D_{\ell^{k}}{\mathsf u}\end{aligned}$$ and $$\begin{aligned} \label{breveU} &\breve {\bf U}(x;x_0)=n^\alpha\big(A^{\alpha\beta}D_\beta \breve {\bf u}-\breve {\bf f}^\alpha\big)+{\bf n}\breve p\nonumber\\ &=n^\alpha\big(A^{\alpha\beta}\ell_{i_1}\ell_{i_2}\cdots\ell_{i_{_{s}}}D_\beta D_{i_1}D_{i_2}\cdots D_{i_{_{s}}}{\bf u}-A^{\alpha\beta}D_\beta{\mathsf u}-\ell_{i_1}\ell_{i_2}\cdots\ell_{i_{_{s}}}D_{i_1}D_{i_2}\cdots D_{i_{_{s}}}{\bf f}^\alpha\nonumber\\ &\quad+\ell_{i_1}\ell_{i_2}\cdots\ell_{i_{_{s}}}\big(D_{i_1}A^{\alpha\beta}D_\beta D_{i_2}\cdots D_{i_{_{s}}}{\bf u}+\sum_{\tau=1}^{s-1}D_{i_1}\cdots D_{i_{\tau}}(D_{i_{\tau+1}}A^{\alpha\beta}D_\beta D_{i_{\tau+2}}\cdots D_{i_{_{s}}}{\bf u})\big)\nonumber\\ &\quad-\delta_{\alpha d}\sum_{j=1}^m\mathbbm{1}_{x^d>h_j(x')} (n^d_j(x'))^{-1}\breve {\bf h}_j(x')-A^{\alpha\beta}{\bf F}_\beta\big)+{\bf n}(\ell_{i_1}\ell_{i_2}\cdots\ell_{i_{_{s}}}D_{i_1}D_{i_2}\cdots D_{i_{_{s}}}p-\pi).\end{aligned}$$ Then using Lemmas [Lemma 14](#lemmaiter){reference-type="ref" reference="lemmaiter"}, [Lemma 15](#lemmbddup-s){reference-type="ref" reference="lemmbddup-s"}, and the argument that led to [\[estDtildeuU\]](#estDtildeuU){reference-type="eqref" reference="estDtildeuU"}, we have $$\begin{aligned} \label{estDbreveU} &|D_{\ell^{k'}}\breve {\bf u}(x_0;x_0)|+|\breve {\bf U}(x_0;x_0)|\nonumber\\ &\leq Nr^{\delta_{\mu}}\big(\sum_{j=1}^{m+1}\|D^{s+1}{\bf u}\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}+\sum_{j=1}^{m+1}\|D^{s}p\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}\big)+N\mathcal{C}_4r^{-1}.\end{aligned}$$ Note that $D^{s+1}{\bf u}$ and $D^sp$ have $d\tbinom{d+s}{s+1}$ and $\tbinom{d+s-1}{s}$ components, respectively. 
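For the reader's convenience, we record the elementary count behind these numbers. The number of distinct derivatives of order $k$ in $d$ variables equals the number of multi-indices $\gamma\in\mathbb{N}_0^d$ with $|\gamma|=k$, namely $\tbinom{d+k-1}{k}$; since ${\bf u}$ has $d$ components and $p$ is scalar, this yields the counts $d\tbinom{d+s}{s+1}$ and $\tbinom{d+s-1}{s}$ quoted above. Moreover, by Pascal's rule, $$d\tbinom{d+s}{s+1}+\tbinom{d+s-1}{s}=d\tbinom{d+s-1}{s+1}+d\tbinom{d+s-2}{s}+d\tbinom{d+s-2}{s-1}+\tbinom{d+s-1}{s},$$ which matches the total number of equations collected below, so the linear system solved there for the components of $D^{s+1}{\bf u}$ and $D^{s}p$ is square.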
To solve for them, we first take the $(s-1)$-th derivative of the first equation [\[stokes\]](#stokes){reference-type="eqref" reference="stokes"} in each subdomain to get the following $d\tbinom{d+s-2}{s-1}$ equations $$\begin{aligned} \label{eqDDDu} A^{\alpha\beta}D_{\alpha\beta} D^{s-1}{\bf u}+D^{s}p=D^{s-1}D_\alpha {\bf f}^{\alpha}-\sum_{i=1}^{s-1}\tbinom{s-1}{i}D^i A^{\alpha\beta}D^{s-1-i}D_{\alpha\beta} {\bf u}-D^{s-1}(D_\alpha A^{\alpha\beta}D_\beta {\bf u}).\end{aligned}$$ Here, it follows from [\[estinduction\]](#estinduction){reference-type="eqref" reference="estinduction"}, the assumption on $A^{\alpha\beta}$ and ${\bf f}^{\alpha}$ in Assumption [Assumption 2](#assump){reference-type="ref" reference="assump"} that the right-hand side of [\[eqDDDu\]](#eqDDDu){reference-type="eqref" reference="eqDDDu"} is of class piecewise $C^{\delta_{\mu}}$. Next, by taking the $s$-th derivative of the second equation [\[stokes\]](#stokes){reference-type="eqref" reference="stokes"} in each subdomain, we obtain $\tbinom{d+s-1}{s}$ equations $$\label{eqDsduv} D^s(\operatorname{div}{\bf u})=D^sg.$$ Finally, by the $d\tbinom{d+s-1}{s+1}+d\tbinom{d+s-2}{s}$ equations in [\[Dbreveu\]](#Dbreveu){reference-type="eqref" reference="Dbreveu"} and [\[breveU\]](#breveU){reference-type="eqref" reference="breveU"}, and using [\[eqDDDu\]](#eqDDDu){reference-type="eqref" reference="eqDDDu"}, [\[eqDsduv\]](#eqDsduv){reference-type="eqref" reference="eqDsduv"}, and Cramer's rule, we derive $D^{s+1}{\bf u}$ and $D^sp$. Furthermore, combining [\[est-Dmathu\]](#est-Dmathu){reference-type="eqref" reference="est-Dmathu"} and [\[estDbreveU\]](#estDbreveU){reference-type="eqref" reference="estDbreveU"}, we obtain $$\begin{aligned} &|D^{s+1}{\bf u}(x_0)|+|D^{s}p(x_0)|\\ &\leq Nr^{\delta_{\mu}}\big(\sum_{j=1}^{m+1}\|D^{s+1}{\bf u}\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}+\sum_{j=1}^{m+1}\|D^{s}p\|_{L^\infty(B_{r}(x_0)\cap\mathcal{D}_j)}\big) +N\mathcal{C}_4r^{-1}.\end{aligned}$$ Finally, following the argument below [\[D2bfuDp\]](#D2bfuDp){reference-type="eqref" reference="D2bfuDp"}, Lemma [Lemma 16](#lemma Dbreveu){reference-type="ref" reference="lemma Dbreveu"} is proved. ◻ ## Proof of Theorem [Theorem 3](#Mainthm){reference-type="ref" reference="Mainthm"} with $s\geq2$ {#proof-of-theorem-mainthm-with-sgeq2} Using Lemmas [Lemma 14](#lemmaiter){reference-type="ref" reference="lemmaiter"} --[Lemma 16](#lemma Dbreveu){reference-type="ref" reference="lemma Dbreveu"}, and following the argument in the proof of [\[holdertildeuU\]](#holdertildeuU){reference-type="eqref" reference="holdertildeuU"}, we reach an a priori estimate of the modulus of continuity of $(D_{\ell^{k'}}\breve {\bf u},\breve {\bf U})$ as follows: $$\begin{aligned} \label{piecebreveu} |(D_{\ell^{k'}}\breve {\bf u}(x_{0};x_{0})-D_{\ell^{k'}}\breve {\bf u}(x_{1};x_{1})|+|\breve {\bf U}(x_{0};x_{0})-\breve {\bf U}(x_{1};x_{1})|\leq N\mathcal{C}_4|x_{0}-x_{1}|^{\delta_{\mu}},\end{aligned}$$ where $\mathcal{C}_4$ is defined by [\[defC4\]](#defC4){reference-type="eqref" reference="defC4"}, $x_0, x_1\in B_{1-\varepsilon}$, $k'=1,\ldots,d-1$, $\breve {\bf u}$ and $\breve {\bf U}$ are defined in [\[defbreveu\]](#defbreveu){reference-type="eqref" reference="defbreveu"} and [\[defbreveU\]](#defbreveU){reference-type="eqref" reference="defbreveU"}, respectively, $\delta_{\mu}=\min\big\{\frac{1}{2},\mu,\delta\big\}$, $N$ depends on $d,m,q,\nu,\varepsilon$, $|A|_{s,\delta;\overline{\mathcal{D}_{j}}}$, and the $C^{s+1,\mu}$ characteristic of $\mathcal{D}_{j}$. 
For any $x_0\in B_{1-\varepsilon}\cap\overline{\mathcal{D}}_{j_0}$, it follows from [\[Dlu\]](#Dlu){reference-type="eqref" reference="Dlu"} and [\[defu-0\]](#defu-0){reference-type="eqref" reference="defu-0"} that the terms containing (directional) derivatives of $\ell$ at $x_0$ in [\[Dbreveu\]](#Dbreveu){reference-type="eqref" reference="Dbreveu"} are cancelled. Then using [\[Dbreveu\]](#Dbreveu){reference-type="eqref" reference="Dbreveu"}, [\[breveU\]](#breveU){reference-type="eqref" reference="breveU"}, [\[eqDDDu\]](#eqDDDu){reference-type="eqref" reference="eqDDDu"}, and [\[eqDsduv\]](#eqDsduv){reference-type="eqref" reference="eqDsduv"} with $x=x_0$ and Cramer's rule, one can solve for $D^{s+1}{\bf u}(x_0)$ and $D^sp(x_0)$. For any $x_1\in B_{1-\varepsilon}\cap\overline{\mathcal{D}}_{j_0}$, $D^{s+1}{\bf u}(x_1)$ and $D^sp(x_1)$ are similarly solved. Thus, combining [\[estinduction\]](#estinduction){reference-type="eqref" reference="estinduction"}, [\[est-Dmathu\]](#est-Dmathu){reference-type="eqref" reference="est-Dmathu"}, [\[piecebreveu\]](#piecebreveu){reference-type="eqref" reference="piecebreveu"}, and Assumption [Assumption 2](#assump){reference-type="ref" reference="assump"}, we derive $$\begin{aligned} [D^{s+1}{\bf u}]_{\delta_{\mu};B_{1-\varepsilon}\cap\overline{\mathcal{D}}_{j_0}}+[D^{s}p]_{\delta_{\mu};B_{1-\varepsilon}\cap\overline{\mathcal{D}}_{j_0}}\leq N\mathcal{C}_4\end{aligned}$$ for $j_0=1,\ldots,m+1$. Theorem [Theorem 3](#Mainthm){reference-type="ref" reference="Mainthm"} with $s\geq2$ follows. ◻
[^1]: H. Dong was partially supported by Simons Fellows Award 007638 and the NSF under agreement DMS-2055244. [^2]: H. Li was partially supported by NSF of China (11971061). [^3]: L. Xu was partially supported by NSF of China (12301141).
arxiv_math
{ "id": "2309.06722", "title": "On higher regularity of Stokes systems with piecewise H\\\"{o}lder\n continuous coefficients", "authors": "Hongjie Dong, Haigang Li, and Longjuan Xu", "categories": "math.AP", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | Tao and Vu showed that every centrally symmetric convex progression $C\subset\mathbb{Z}^d$ is contained in a generalised arithmetic progression of size $d^{O(d^2)} \#C$. Berg and Henk improved the size bound to $d^{O(d\log d)} \#C$. We obtain the bound $d^{O(d)} \#C$, which is sharp up to the implied constant, and is of the same form as the bound in the continuous setting given by John's Theorem. author: - Peter van Hintum - Peter Keevash bibliography: - references.bib title: Sharp bounds for the Tao-Vu Discrete John's Theorem --- # Introduction A classical theorem of John [@fritz1948extremum] shows that for any centrally symmetric convex set $K\subset \mathbb{R}^d$ there exists an ellipsoid $E$ centred at the origin so that $E\subset K\subset \sqrt{d}E$. This immediately implies that there exists a parallelotope $P$ so that $P\subset E\subset K\subset \sqrt{d}E\subset dP$. In the discrete setting, quantitative covering results are of great interest in Additive Combinatorics, a prominent example being the Polynomial Freiman-Ruzsa Conjecture, which asks for effective bounds on covering sets of small doubling by convex progressions. In this context, a natural analogue of John's Theorem in $\mathbb{Z}^d$ would be covering centrally symmetric convex progressions by generalised arithmetic progressions. Here, a $d$-dimensional *convex progression* is a set of the form $K\cap \mathbb{Z}^d$, where $K\subset \mathbb{R}^d$ is convex, and a $d$-dimensional *generalized arithmetic progression* ($d$-GAP) is a translate of a set of the form $\left\{\sum_{i=1}^d m_ia_i: 1\leq m_i\leq n_i\right\}$ for some $n_i\in\mathbb{N}$ and $a_i\in \mathbb{Z}^d$. Tao and Vu [@tao2006additive; @tao2008john] obtained such a discrete version of John's Theorem, showing that for any centrally symmetric $d$-dimensional convex progression $C\subset\mathbb{Z}^d$ there exists a $d$-GAP $P$ so that $P\subset C\subset O(d)^{3d/2}\cdot P$, where $m\cdot P:=\left\{\sum_{i=1}^m p_i: p_i\in P\right\}$ denotes the iterated sumset. Berg and Henk [@berg2019discrete] improved this to $P\subset C\subset d^{O(\log(d))}\cdot P$. Our focus will be on the covering aspect of these results, i.e. minimising the ratio $\#P' / \#C$, where $P'$ is a $d$-GAP covering $C$. This ratio is bounded by $d^{O(d^2)}$ by Tao and Vu and by $d^{O(d\log d)}$ by Berg and Henk. We obtain the bound $d^{O(d)}$, which is optimal. **Theorem 1**. For any centrally symmetric convex progression $C\subset\mathbb{Z}^d$ there exists a $d$-GAP $P$ containing $C$ with $\#P\leq O(d)^{3d} \#C$. **Corollary 2**. For any centrally symmetric convex progression $C\subset\mathbb{Z}^d$ and linear map $\phi:\mathbb{R}^d\to\mathbb{R}$, there exists a $d$-GAP $P$ containing $C$ with $\#\phi(P)\leq O(d)^{3d} \#\phi(C)$. The optimality of is demonstrated by the intersection of a ball $B$ with a lattice $L$. Moreover, Lovett and Regev [@lovett2017] obtained a more emphatic negative result, disproving the GAP analogue of the Polynomial Freiman-Ruzsa Conjecture, by showing that by considering a random lattice $L$ one can find a convex $d$-progression $C = B \cap L$ such that any $O(d)$-GAP $P$ with $|P| \le |C|$ has $|P \cap C| < d^{-\Omega(d)} |C|$. Our result can be viewed as the positive counterpart that settles this line of enquiry, showing that indeed $d^{\Theta(d)}$ is the optimal ratio for covering convex progressions by GAPs. # Proof We start by recording two simple observations and a property of Mahler Lattice Bases. **Observation 3**. 
Given a centrally symmetric convex set $K\subset\mathbb{R}^d$, there exists a centrally symmetric parallelotope $Q$ and a centrally symmetric ellipsoid $E$ so that $\frac1d Q\subset E\subset K\subset\sqrt{d}E\subset Q$, so in particular $|Q|\leq d^{d}|K|$. **Observation 4**. Let $X,X'\in\mathbb{R}^{d\times d}$ be so that the rows of $X$ and $X'$ generate the same lattice of full rank in $\mathbb{R}^d$. Then $\exists T\in GL_d(\mathbb{Z})$ so that $TX=X'$. **Proposition 5** (Corollary 3.35 from [@tao2006additive]). Given a lattice $\Lambda\subset\mathbb{R}^d$ of full rank, there exists a lattice basis $v_1,\dots, v_d$ of $\Lambda$ so that $\prod_{i=1}^{d} \|v_i\|_2 \leq O(d^{3d/2})\det(v_1,\dots, v_d)$. With these three results in mind, we prove the theorem. *Proof of Theorem 1.* By passing to a subspace if necessary, we may assume that $C$ is full-dimensional. Write $C = K \cap \mathbb{Z}^d$ where $K\subset\mathbb{R}^d$ is centrally symmetric and convex. Use Observation 3 to find a parallelotope $Q\supset K$ so that $|Q|\leq d^d |K|$. Let the defining vectors of $Q$ be $u_1,\dots,u_d$, i.e. $Q=\{\sum \lambda_i u_i: \lambda_i\in[-1,1]\}$. Write $u_i^j$ for the $j$-th coordinate of $u_i$ and write $U$ for the matrix $(u_i^j)$ with rows $u^j$ and columns $u_i$. Consider the lattice $\Lambda$ generated by the vectors $u^j$ (these are the vectors formed by the $j$-th coordinates of the vectors $u_i$). Using Proposition 5, find a basis $v^1,\dots,v^d$ of $\Lambda$ so that $\prod_{j=1}^{d} |v^j|\leq O(d^{3d/2})\det(v^1,\dots, v^d)$. Write $v^j_i$ for the $i$-th coordinate of $v^j$ and write $V:=(v_i^j)$. By Observation 4, we can find $T\in GL_d(\mathbb{Z})$ so that $TU=V$. Let $T'\colon\mathbb{R}^d\to\mathbb{R}^d$ be defined by $T' u_i = v_i$ for $1 \le i \le d$. Note that $T'(\mathbb{Z}^d)=\mathbb{Z}^d$ and in the standard basis $T'$ corresponds to matrix multiplication by $T$. Write $Q':=T'(Q)=\{\sum \lambda_i v_i: \lambda_i\in[-1,1]\}$ and consider the smallest axis-aligned box $B:=\prod [-a_i,a_i]$ containing $Q'$. Note that $a_j\leq\sum_{i} |v_i^j|=||v^{j}||_1\leq \sqrt{d}||v^{j}||_2$. Hence, we find $$\begin{aligned} |B|&=2^d\prod_{i=1}^d a_i\leq 2^d\prod_{j=1}^d \sqrt{d}||v^j||_2\leq O(d)^{2d} \det(v^1,\dots,v^d)= O(d)^{2d} \det(v_1,\dots,v_d)=O(d)^{2d} |Q'|.\end{aligned}$$ Now we cover $C$ by a $d$-GAP $P$, constructed by the following sequence: $$C=K\cap \mathbb{Z}^d\subset Q\cap \mathbb{Z}^d= T'^{-1}(Q')\cap \mathbb{Z}^d \subset T'^{-1}(B)\cap \mathbb{Z}^d = T'^{-1}(B\cap \mathbb{Z}^d) =: P.$$ It remains to bound $\#P$. As $C$ is full-dimensional each $a_i \ge 1$, so $$\begin{aligned} \#P&=\#(B \cap \mathbb{Z}^d) \leq 2^d |B|\leq O(d)^{2d} |Q'|= O(d)^{2d} |Q| \leq O(d)^{3d} |K|\leq O(d)^{3d} \#C. \qedhere\end{aligned}$$ ◻ *Proof of Corollary 2.* Let $m:=\max_{x\in\mathbb{Z}}\#(\phi^{-1}(x)\cap C)$ and note that $\#\phi(C)\geq \#C/ m$. Analogously, let $m':=\max_{x\in\mathbb{Z}}\#(\phi^{-1}(x)\cap P)$, so that $m'\geq m$. By translation, we may assume that $m'$ is achieved at $x=0$. Note that for any $x = \phi(p)$ with $p \in P$ and $p' \in P \cap \phi^{-1}(0)$ we have $p+p' \in P+P$ with $\phi(p+p')=x$, so $\#(\phi^{-1}(x)\cap (P+P))\geq m'$. We conclude that $$\begin{aligned} \#\phi(P) &\leq \#(P+P)/m'\leq 2^d \#P/m\leq O(d)^{3d}\#C/m\leq O(d)^{3d}\#\phi(C). \qedhere\end{aligned}$$ ◻
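For completeness, we record a short verification of the two counting steps used above. Since $B=\prod_{i=1}^d[-a_i,a_i]$ with each $a_i\ge 1$, we have $$\#(B\cap\mathbb{Z}^d)=\prod_{i=1}^d(2\lfloor a_i\rfloor+1)\leq\prod_{i=1}^d(2a_i+1)\leq\prod_{i=1}^d4a_i=2^d\prod_{i=1}^d2a_i=2^d|B|,$$ using $2a_i+1\leq4a_i$ for $a_i\geq1$. Moreover, as $T'$ is a linear bijection of $\mathbb{Z}^d$ and $B\cap \mathbb{Z}^d=\prod_{i=1}^d\{-\lfloor a_i\rfloor,\dots,\lfloor a_i\rfloor\}$, $$\#(P+P)=\#\big((B\cap\mathbb{Z}^d)+(B\cap\mathbb{Z}^d)\big)=\prod_{i=1}^d(4\lfloor a_i\rfloor+1)\leq\prod_{i=1}^d2(2\lfloor a_i\rfloor+1)=2^d\,\#P,$$ which is the bound used in the proof of Corollary 2.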
arxiv_math
{ "id": "2309.12386", "title": "Sharp bounds for the Tao-Vu Discrete John's Theorem", "authors": "Peter van Hintum and Peter Keevash", "categories": "math.CO", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | This paper refers to an imaging problem in the presence of nonlinear materials. Specifically, the problem we address falls within the framework of Electrical Resistance Tomography and involves two different materials, one or both of which are nonlinear. Tomography with nonlinear materials in the early stages of developments, although breakthroughs are expected in the not-too-distant future. The original contribution this work makes is that the nonlinear problem can be approximated by a weighted $p_0-$Laplace problem. From the perspective of tomography, this is a significant result because it highlights the central role played by the $p_0-$Laplacian in inverse problems with nonlinear materials. Moreover, when $p_0=2$, this result allows all the imaging methods and algorithms developed for linear materials to be brought into the arena of problems with nonlinear materials. The main result of this work is that for ''small'' Dirichlet data, (i) one material can be replaced by a perfect electric conductor and (ii) the other material can be replaced by a material giving rise to a weighted $p_0-$Laplace problem. [**MSC 2020**]{.smallcaps}: 35J62, 35R30, 78A46. [**Key words and phrases**]{.smallcaps}. Inverse problem, Electrical Resistance Tomography, Elliptic PDE, Quasilinear PDE, Nonlinear problems, Linear approximation, Asymptotic behaviour, Imaging. author: - Antonio Corbo Esposito$^1$, Luisa Faella$^1$, Gianpaolo Piscitelli$^2$, Vincenzo Mottola$^1$, Ravi Prakash$^3$, Antonello Tamburrino$^{1,4}$ bibliography: - biblioCFPPT.bib title: The $p_0-$Laplace ''Signature'' for Quasilinear Inverse Problems --- [^1] # Introduction {#1-PosProb} This paper is focused on nonlinear imaging problems in Electrical Resistance Tomography where the aim is to retrieve the nonlinear electrical conductivity $\sigma$, starting from boundary measurements in stationary conditions (steady currents). This is a nonlinear variant of the Calderón problem [@calderon1980inverse; @calderon2006inverse]. Analysis of this class of problems highlights the limiting behaviour of the solution (electric scalar potential) for boundary data approaching zero. In this case, the solution approaches a limit which is the solution of a weighted $p-$Laplace problem. Moreover, the materials with nondominant growth can be replaced by either a *perfect electric conductor* or a *perfect electric insulator*. These results are significant from both a mathematical and an engineering point of view, since they make it possible to approximate a nonlinear phenomenon with a weighted $p-$Laplace problem. In one sense, this suggests the ''fingerprint'' of a weighted $p-$Laplace problem in a nonlinear problem. The linear case, i.e. $p=2$, is of paramount importance. In this case, we have a powerful bridge to apply all the imaging methods and algorithms developed for linear materials to nonlinear materials. The behaviour for large data has been studied in [@corboesposito2023Large] where we use different set of test functions for the Dirichlet energy as we do not have different growth exponents ($p$ and $p_0$) for the asymptotic behaviour. 
Hereafter, we consider steady current operations where the constitutive relationship is nonlinear, local, isotropic and memoryless: $$\label{J} {\bf J}(x)=\sigma(x,\vert {\bf E}(x)\vert) {\bf E}(x)\quad\forall x\in\Omega.$$ In ([\[J\]](#J){reference-type="ref" reference="J"}), $\sigma$ is the nonlinear electrical conductivity, ${\bf J}$ the electric current density, ${\bf E}$ the electric field and $\Omega\subset\mathbb{R}^n$, $n \geq 2,$ is an open bounded domain with Lipschitz boundary. $\Omega$ represents the region occupied by the conducting material. The electric field can be expressed through the electrical scalar potential $u$ as ${\bf E}(x)=-\nabla u(x)$, where $u$ solves the steady current problem: $$\label{gproblem1} \begin{cases} \mathop{\mathrm{div}}\Big(\sigma (x, |\nabla u(x)|) \nabla u (x)\Big) =0\ \text{in }\Omega\vspace{0.2cm}\\ u(x) =f(x)\qquad\qquad\qquad\quad\ \text{on }\partial\Omega, \end{cases}$$ where $f$ is the applied boundary potential. Both $u$ and $f$ belong to proper function spaces that will be defined in the following. [\[2-IPnonlinear\]]{#2-IPnonlinear label="2-IPnonlinear"} The literature contains very few contributions on imaging in the presence of nonlinear materials. As quoted in [@lam2020consistency] (2020), *'' \... the mathematical analysis for inverse problems governed by nonlinear Maxwell's equations is still in the early stages of development.''*. It can be expected that as new methods and algorithms become available, the demand for nondestructive evaluation and imaging of nonlinear materials will eventually rise significantly . Among the contributions to the nonlinear Calderón problem, special attention has been paid to the case based on the $p-$Laplacian, where $\sigma(x,\vert {\bf E}(x)\vert)=\theta(x)\vert {\bf E}(x)\vert^{p-2}$ in equation $(\ref{J})$, with $\theta$ being an appropriate weight function. The nonlinear $p-$Laplace variant of the Calderón problem was initially posed by Salo and Zhong [@Salo2012_IP] and subsequently studied in [@brander2015enclosure; @brander2016calderon; @brander2018superconductive; @guo2016inverse; @brander2018monotonicity; @hauer2015p]. As well as the nonlinear $p-$Laplace problem, mention must also be made of the work by Sun [@sun2004inverse; @Sun_2005] for weak nonlinearities, the work by Cârstea and Kar [@carstea2020recovery] which treated a nonlinear problem (linear plus a nonlinear term) and the work by Corbo Esposito et al. [@corboesposito2021monotonicity]. The latter treat a general nonlinearity within the framework of the Monotonicity Principle Method. [\[3-Applications\]]{#3-Applications label="3-Applications"} From the application perspective, nonlinear electrical conductivities can be found in semiconducting and ceramic materials (see [@bueno2008sno2]), with applications to cable termination in high voltage (HV) and medium voltage (MV) systems [@boucher2018interest; @lupo1996field], for instance. Nonlinear electrical conductivities characterize superconductors, key materials for such applications as energy storage, magnetic levitation systems, superconducting magnets (nuclear fusion devices, nuclear magnetic resonance) and high-frequency radio technology [@seidel2015applied; @krabbes2006high]. Nonlinear electrical conductivity also appears in the area of biological tissues (see [@foster1989dielectric]). For instance, [@corovic2013modeling] proved that nonlinear models fit the experimental data better than linear models. 
[\[4-Generalizations\]]{#4-Generalizations label="4-Generalizations"} Problem ([\[gproblem1\]](#gproblem1){reference-type="ref" reference="gproblem1"}) is common to steady currents as well as to other physical settings. In the framework of electromagnetism, both nonlinear electrostatic and nonlinear magnetostatic[^2]$^1$ phenomena can be modelled as in ([\[gproblem1\]](#gproblem1){reference-type="ref" reference="gproblem1"}). In the first case the constitutive relationship is ${\bf D}(x)=\varepsilon(x,\vert {\bf E}(x)\vert) {\bf E}(x)$ (see [@miga2011non] and references therein, and [@yarali20203d]), where $\bf D$ is the electric displacement field, $\varepsilon$ is the dielectric permittivity and $\bf E$ the electric field. In the second case ${\bf B}(x)=\mu(x,\vert {\bf H}(x)\vert) {\bf H}(x)$ (see [@1993ferr.book.....B]), where $\bf B$ is the magnetic flux density, $\mu$ is the magnetic permeability, and $\bf H$ is the magnetic field. [\[5-Inverse Problem\]]{#5-Inverse Problem label="5-Inverse Problem"} From a general perspective, the inverse problem of retrieving a coefficient of a PDE ( Partial Differential Equation) from boundary measurements, such as the electrical conductivity $\sigma$ appearing in [\[gproblem1\]](#gproblem1){reference-type="eqref" reference="gproblem1"}, is nonlinear and ill-posed in the sense of Hadamard, i.e. it is an inverse problem. [\[5a-IT\]]{#5a-IT label="5a-IT"} A classic approach for solving an inverse problem consists in casting it in terms of the minimization of a proper cost function [@tikhonov1977solutions; @tikhonov1998nonlinear]. The minimizer of this cost function gives the estimate of the unknown quantity. The cost function is usually built as the weighted sum of the discrepancy on the data and proper a priori information that must be provided to complement the loss of information inherent to the physics of the measurement process. There are many iterative approaches devoted to the search for the solution (the minimizer) of an inverse problem. An overview can be found in several specialized textbooks [@bertero1998introduction; @tarantola2005inverse; @vogel2002computational; @engl1996regularization]. Other than the Gauss-Newton and its variant (see [@qi2000iteratively] for a review), let us mention some relevant iterative approaches applied to inverse problems such as the Quadratic Born approximation [@pierri1997local], Bayesian approaches [@premel2002eddy], the Total Variation regularization [@RudinOsher1992nonlinear; @pirani2008multi], the Levenberg-Marquardt method for nonlinear inverse problems [@Hanke_1997], the Level Set method [@dorn2000shape; @harabetian1998regularization], the Topological Derivative method [@Jackowska-Strumillo2002231; @ammari2012stability; @fernandez2019noniterative] and the Communication Theory approach [@tamburrino2000communications]. [\[5b-nonIT\]]{#5b-nonIT label="5b-nonIT"} Iterative methods suffer from two major drawbacks: (i) they may be trapped into local minima and (ii) the computational cost may be very high. Indeed, the objective function to be minimized in order to achieve the reconstruction might present several/many local minima which may constitute points where an iterative algorithm may be trapped. Moreover, the computational cost at each iteration may be very high because it entails computing the objective function and, optionally, its gradient. Both computations are expensive in terms of computational resources. An excellent alternative to iterative methods is provided by noniterative ones. 
Noniterative methods are attractive because they call for the computation of a proper function of the space (the so-called indicator function) giving the shape of the interface between two different materials, i.e. the support of the region occupied by a specific material. They usually require a larger amount of data than iterative approaches, but the computation of the indicator function is much less expensive. In general, noniterative methods are suitable for real-time operations. Only a handful of noniterative methods are currently available. These include the Linear Sampling Method (LSM) by Colton and Kirsch [@Colton_1996], which evolved into the Factorization Method (FM) proposed by Kirsch [@Kirsch_1998]. Ikehata proposed the Enclosure Method (EM) [@ikehata1999draw; @Ikehata_2000] and Devaney applied MUSIC (MUltiple SIgnal Classification), a well-known algorithm in signal processing, as an imaging method [@Devaney2000]. Finally, Tamburrino and Rubinacci proposed the Monotonicity Principle Method (MPM) [@Tamburrino_2002]. [\[6-PEC/PEI\]]{#6-PEC/PEI label="6-PEC/PEI"} The prototype problem which motivated this study consists in imaging a two-phase material where the outer phase is linear and the inner phase is nonlinear (see Figure [1](#fig_1_intro){reference-type="ref" reference="fig_1_intro"}). A configuration of this type may be encountered when testing/imaging superconducting cables (see, for instance, [@lee2005nde; @amoros2012effective; @takahashi2014non; @Higashikawa2021_SC]). The main result of this work is the proof that for ''small'' Dirichlet data $f$, the nonlinear material can be replaced by either a perfect electric conductor (PEC) or a perfect electric insulator (PEI). Consequently, when one material is linear, the limiting version of the original nonlinear problem is linear. These results provide a powerful bridge to bring all the imaging methods and algorithms developed for linear materials into the arena of problems presenting nonlinear materials. ![Description of two possible applications. Left: inverse obstacle problem where the interface ($\partial A$) between two phases is unknown. $A$ and $B$ are the regions occupied by the inner material and the outer material, respectively. Right: nondestructive testing where regions A and B are known, while the position and shape of region $C$ (a crack) is unknown. The materials in regions $A$ and $B$ are also known.](Lim_1_intro.png){#fig_1_intro width="\\textwidth"} Moreover, in order to reach a thorough understanding of the underlying mathematics, the results have been proved in a more general setting where both materials are nonlinear. In this case, one material is replaced by either a perfect electric conductor or a perfect electric insulator, and the other is replaced by a material yielding a weighted $p_0-$Laplace problem. [\[7-math\]]{#7-math label="7-math"} A specific feature of this work concerns the required assumptions, which are general and sharp, as discussed in Sections [3](#fram_sec){reference-type="ref" reference="fram_sec"} and [6](#counter_sec){reference-type="ref" reference="counter_sec"}. The assumptions are general: other than the standard conditions for existence and uniqueness of the solution of ([\[gproblem1\]](#gproblem1){reference-type="ref" reference="gproblem1"}), they involve pointwise convergence, only. 
The assumptions are sharp: the fundamental conditions specifically introduced for replacing one material with either a PEC or PEI cannot be removed, as shown by the counterexamples in Section [6](#counter_sec){reference-type="ref" reference="counter_sec"}. [\[8-arch\]]{#8-arch label="8-arch"} The paper is organized as follows: in Section [2](#underlying){reference-type="ref" reference="underlying"} we present the ideas underpinning the work; in Section [3](#fram_sec){reference-type="ref" reference="fram_sec"} we set out the notations and the problem, together with the required assumptions; in Section [4](#mean_sec0){reference-type="ref" reference="mean_sec0"} we give a fundamental inequality for small Dirichlet data; in Section [5](#small_sec){reference-type="ref" reference="small_sec"} we discuss the limiting case for small Dirichlet data; in Section [6](#counter_sec){reference-type="ref" reference="counter_sec"} we provide the counterexamples proving that the specific assumptions are sharp; in Section [7](#num_sec){reference-type="ref" reference="num_sec"} we provide numerical validation of the proposed theory; finally, in Section [8](#Con_sec){reference-type="ref" reference="Con_sec"} we provide some conclusions. # Underlying ideas and expected results {#underlying} In this section we present the main ideas underpinning this work. The key is the ''educated guess'' that when the boundary data is ''small'', the electric field $\mathbf{E}=-\nabla{u}$ is small a.e. in $\Omega$ and, therefore, its behaviour has to be governed by the asymptotic behaviour of $\sigma \left(x,E\right)$ in the constitutive relationship [\[J\]](#J){reference-type="eqref" reference="J"}. Specifically, let $A\subset\subset\Omega$ and $B:=\Omega\setminus\overline A$, we assume that there exist two constants $p_0$ and $q_0$, and two functions $\beta_0$ and $\alpha_0$ which capture the behaviour of $\sigma$, as $E \to 0^+$, in $B$ and $A$, respectively: $$\begin{aligned} \sigma_B(x,E) \sim \beta_0 (x)E^{p_0-2} \quad \text{for a.e.}\ x \in B,\\ \sigma_A(x,E) \sim \alpha_0 (x)E^{q_0-2} \quad \text{for a.e.}\ x \in A,\end{aligned}$$ where $\sigma_B$ and $\sigma_A$ are the restriction of $\sigma$ to $B$ and $A$, respectively and $E = \left| \mathbf{E} \right| = \left| \nabla u \right|$. Analysis of nonlinear problems is fascinating because of the wide variety of different cases. The most representative cases are shown in Figure [6](#fig_2_sigma){reference-type="ref" reference="fig_2_sigma"}. ![The electrical conductivities of the outer material and of the inner conducting material, when $\sigma_B(\cdot,E)=E^{p_0-2}$ is represented by the continuous lines and $\sigma_A(\cdot,E)= E^{q_0-2}$ is represented by the dashed lines. The configurations when the order relation between $p_0$ and $q_0$ is reversed easily follow.](Lim_2_a.png "fig:"){#fig_2_sigma width="32.5%"} ![The electrical conductivities of the outer material and of the inner conducting material, when $\sigma_B(\cdot,E)=E^{p_0-2}$ is represented by the continuous lines and $\sigma_A(\cdot,E)= E^{q_0-2}$ is represented by the dashed lines. The configurations when the order relation between $p_0$ and $q_0$ is reversed easily follow.](Lim_2_b.png "fig:"){#fig_2_sigma width="32.5%"} ![The electrical conductivities of the outer material and of the inner conducting material, when $\sigma_B(\cdot,E)=E^{p_0-2}$ is represented by the continuous lines and $\sigma_A(\cdot,E)= E^{q_0-2}$ is represented by the dashed lines. 
The configurations when the order relation between $p_0$ and $q_0$ is reversed easily follow.](Lim_2_c.png "fig:"){#fig_2_sigma width="32.5%"} ![The electrical conductivities of the outer material and of the inner conducting material, when $\sigma_B(\cdot,E)=E^{p_0-2}$ is represented by the continuous lines and $\sigma_A(\cdot,E)= E^{q_0-2}$ is represented by the dashed lines. The configurations when the order relation between $p_0$ and $q_0$ is reversed easily follow.](Lim_2_d.png "fig:"){#fig_2_sigma width="32.5%"} ![The electrical conductivities of the outer material and of the inner conducting material, when $\sigma_B(\cdot,E)=E^{p_0-2}$ is represented by the continuous lines and $\sigma_A(\cdot,E)= E^{q_0-2}$ is represented by the dashed lines. The configurations when the order relation between $p_0$ and $q_0$ is reversed easily follow.](Lim_2_e.png "fig:"){#fig_2_sigma width="32.5%"} When, for instance, $q_0<p_0$ it can be reasonably expected that either (i) region $A$ is a perfect electric conductor or (ii) region $B$ is a perfect electric insulator, because $\sigma_A$ would be dominant if compared to $\sigma_B$, at small electric fields. When $A\subset\subset\Omega$, the ambiguity between (i) and (ii) is resolved in Section [5](#small_sec){reference-type="ref" reference="small_sec"}, where we prove that region $B$ cannot be assimilated to a PEI and, therefore, $A$ has to be assimilated to a PEC. Finally, the case $p_0=q_0$ (that is the case when $A=\emptyset$) has been treated in [@corboesposito2021monotonicity; @MPMETHODS]. Moreover, the limiting problem where the conductor in region $A$ is replaced by a PEC can reliably be modelled by a $p_0-$Laplace problem in region $B$, with a boundary condition given by a constant scalar potential $u$ on each connected component of $\partial A$. In other words, $u \sim u_{p_0}$ in $B$, where $u_{p_0}$ is the solution of the weighted $p_0-$Laplace problem arising from the electrical conductivity $\beta_0(x) E^{p_0-2}$ in $B$ and $|\nabla u_{p_0}|=0$ on $A$. The latter observation is also inspiring as it properly defines the concept of ''small'' boundary data and the limiting problem. Specifically, it is well known that the operator mapping the boundary data $f$ into the solution of a weighted ${p_0}-$Laplace problem is a homogeneous operator of degree 1, i.e. the solution corresponding to $\lambda f(x)$ is equal to $\lambda u_{p_0}(x)$, where $u_{p_0}$ is the solution corresponding to the boundary data $f$. Thus, the term ''problem for small boundary data'' means [\[gproblem1\]](#gproblem1){reference-type="eqref" reference="gproblem1"} where the boundary data is $\lambda f$ and $\lambda \to 0$. Moreover, this suggests the need to study convergence properties of the normalized solution $v^\lambda$, defined as the ratio $u^\lambda / \lambda$, where $u^\lambda$ is the solution of [\[gproblem1\]](#gproblem1){reference-type="eqref" reference="gproblem1"} corresponding to the Dirichlet data $\lambda f(x)$. Indeed, if $u^\lambda$ can be approximated by the solution of the weighted $p_0-$Laplace problem, then the normalized solution $v^\lambda(x)$ converges in $B$, i.e. it is expected to be constant w.r.t. $\lambda$, as $\lambda$ approaches $0$. We denote this limit by $v^0$ and we expect it to be equal to $u_{p_0}$, i.e. the solution of the weighted $p_0-$Laplace problem with boundary data $f$.
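The following formal computation is only heuristic, but it already indicates which limiting regime to expect. Suppose $u^\lambda\approx\lambda v$ with $v$ independent of $\lambda$, and recall that, near $E=0$, $\sigma_B\sim\beta_0(x)E^{p_0-2}$ and $\sigma_A\sim\alpha_0(x)E^{q_0-2}$, so that the corresponding Dirichlet energy densities behave like $\beta_0(x)E^{p_0}$ and $\alpha_0(x)E^{q_0}$ up to multiplicative constants. Dividing the Dirichlet energy of $\lambda v$ by $\lambda^{p_0}$ then gives, up to such constants, $$\int_B\beta_0(x)|\nabla v(x)|^{p_0}\,dx+\lambda^{q_0-p_0}\int_A\alpha_0(x)|\nabla v(x)|^{q_0}\,dx.$$ When $q_0<p_0$ the factor $\lambda^{q_0-p_0}$ blows up as $\lambda\to0^+$, forcing $\nabla v\approx 0$ in $A$, i.e. region $A$ becomes equipotential on each connected component (a PEC); when $p_0<q_0$ the same factor vanishes, the contribution of $A$ disappears from the limit and only a weighted $p_0-$Laplace problem in $B$, with a no-flux condition on $\partial A$, survives (a PEI).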
From the formal point of view, when $q_0<p_0$, $v^\lambda$ weakly converges to ${w^0\in W^{1,p_0}(\Omega)}$ for $\lambda\to 0^+$, where $w^0$ is constant in each connected component of $A$, and it is the solution of: $$\label{pproblem_Bgrad0} \begin{cases} \mathop{\mathrm{div}}\Big(\beta_0 (x) |\nabla w^0(x)|^{p_0-2}\nabla w^0 (x)\Big) =0 & \text{in }B,\vspace{0.2cm}\\ {|\nabla w^0(x)| =0} &\text{a.e. in } A,\vspace{0.2cm}\\ \int_{\partial A}\sigma(x,|\nabla w^0(x)|)\partial_\nu w^0(x)dS=0\vspace{0.2cm}\\ w^0(x) =f(x) & \text{on }\partial \Omega, \end{cases}$$ in $B$. In this case, from the physical standpoint, region $A$ can be replaced by a Perfect Electric Conductor (PEC). The solution of problem [\[pproblem_Bgrad0\]](#pproblem_Bgrad0){reference-type="eqref" reference="pproblem_Bgrad0"} satisfies the minimum problem [\[H\]](#H){reference-type="eqref" reference="H"}, described in Section [5](#small_sec){reference-type="ref" reference="small_sec"}. On the other hand, when $p_0<q_0$, $v^\lambda$ converges, in $B$, to $v^0_B\in W^{1,p_0}(B)$, that is the solution of the weighted $p_0-$Laplace problem in region $B$: $$\label{pproblem_B0} \begin{cases} \mathop{\mathrm{div}}\Big(\beta_0 (x) |\nabla v^0_B(x)|^{p_0-2}\nabla v^0_B (x)\Big) =0 & \text{in }B,\vspace{0.2cm}\\ \beta_0 (x) |\nabla v^0_B(x)|^{p_0-2}\partial_\nu v^0_B(x) =0 & \text{on }\partial A,\vspace{0.2cm}\\ v^0_B(x) =f(x) & \text{on }\partial \Omega. \end{cases}$$ From the physical standpoint, problem [\[pproblem_B0\]](#pproblem_B0){reference-type="eqref" reference="pproblem_B0"} corresponds to stationary currents where the electrical conductivity is $\sigma(x,E)=\beta_0(x)E^{p_0-2}$, and region $A$ is replaced by a perfectly electrical insulating material (PEI). The solution of problem [\[pproblem_B0\]](#pproblem_B0){reference-type="eqref" reference="pproblem_B0"} satisfies the minimum problem [\[Hii\]](#Hii){reference-type="eqref" reference="Hii"}, described in Section [5](#small_sec){reference-type="ref" reference="small_sec"}. # Framework of the Problem {#fram_sec} ## Notations Throughout this paper, $\Omega$ denotes the region occupied by the conducting materials. We assume that $\Omega\subset\mathbb{R}^n$, $n\geq 2$, is a bounded domain (i.e. an open and connected set) with Lipschitz boundary and $A\subset\subset\Omega$ is an open bounded set with Lipschitz boundary and a finite number of connected components, such that $B:=\Omega\setminus\overline A$ is still a domain. Hereafter we consider the growth exponents $p, q, p_0$ and $q_0$ such that $1< p,q<\infty$, $p \neq q$, $1< p_0 \leq p<\infty$, $1< q_0 \leq q<\infty$ and $p_0\neq q_0$. $p$ ($p_0$) is related to the growth of the electrical conductivity in region $B$ for large (small) electric fields (see Section [3.3](#subsec_hyp){reference-type="ref" reference="subsec_hyp"} for further details). Similarly, $q$ ($q_0$) is related to the growth of the electrical conductivity in region $A$ for large (small) electric fields (see Figure [7](#fig_4_omega){reference-type="ref" reference="fig_4_omega"}). ![A two phase problem (left) together with the electrical conductivity growth exponents for the electric field in a neighborhood of $+\infty$ (center) and in a neighborhood of $0$ (right).](Lim_4_omega.png){#fig_4_omega width="\\textwidth"} We denote by $dx$ and $dS$ the $n-$dimensional and the $(n-1)-$dimensional Hausdorff measure, respectively. Moreover, we set $$L^\infty_+(\Omega):=\{\theta\in L^\infty(\Omega)\ |\ \theta\geq c_0\ \text{a.e.
in}\ \Omega, \ \text{for a positive constant}\ c_0\}.$$ Furthermore, for any $1<s<+\infty$ we denote by $W^{1,s}_0(\Omega)$ the closure set of $C_0^1(\Omega)$ with respect to the $W^{1,s}-$norm. The applied boundary voltage $f$ belongs to the abstract trace space $B^{1-\frac 1p,p}(\partial\Omega)$, which, for any bounded Lipschitz open set, is a Besov space (refer to [@JERISON1995161; @leoni17]), equipped with the following norm: $$||u||_{B^{1-\frac 1p,p}(\partial\Omega)}=||u||_{L^p(\partial\Omega)}+|u|_{B^{1-\frac 1p,p}(\partial\Omega)}<+\infty,$$ where $|u|_{B^{1-\frac 1p,p}(\partial\Omega)}$ is the Slobodeckij seminorm: $$|u|_{B^{1-\frac 1p,p}(\partial\Omega)}=\left(\int_{\partial\Omega}\int_{\partial\Omega}\frac{|u(x)-u(y)|^p}{||x-y||^{N-1+(1-\frac 1p)p} }dS (y)d S (x) \right)^\frac 1p,$$ see Definition 18.32, Definition 18.36 and Exercise 18.37 in [@leoni17]. This guarantees the existence of a function in $W^{1,p}(\Omega)$ whose trace is $f$ [@leoni17 Th. 18.40]. For the sake of brevity, we denote this space by $X^p(\partial \Omega)$ and its elements can be identified as the functions in $W^{1,p}(\Omega)$, modulo the equivalence relation $f\in [g]_{X^p(\partial \Omega)}$ if and only if $f-g\in W^{1,p}_0(\Omega)$, see [@leoni17 Th. 18.7]. Finally, we denote by $X^p_\diamond (\partial \Omega)$ the set of elements in $X^p(\partial \Omega)$ with zero average on $\partial\Omega$ with respect to the measure $dS$. ## The Scalar Potential and Dirichlet energy In terms of the electric scalar potential, that is ${\bf E}(x)=-\nabla u(x)$, the nonlinear Ohm's law [\[J\]](#J){reference-type="eqref" reference="J"} is $$% \label{gOhm} {\bf J} (x)=- \sigma (x, |\nabla u(x)|)\nabla u(x),$$ where $\sigma$ is the electrical conductivity, ${\bf E}$ is the electric field, and ${\bf J}$ is the electric current density. The electric scalar potential $u$ solves the steady current problem: $$\label{gproblem} \begin{cases} \mathop{\mathrm{div}}\Big(\sigma (x, |\nabla u(x)|) \nabla u (x)\Big) =0\ \text{in }\Omega\vspace{0.2cm}\\ u(x) =f(x)\qquad\qquad\qquad\quad\ \text{on }\partial\Omega, \end{cases}$$ where $f\in X_\diamond^p(\partial \Omega)$. Problem [\[gproblem\]](#gproblem){reference-type="eqref" reference="gproblem"} is meant in the weak sense, that is $$\int_{\Omega }\sigma \left( x,| \nabla u(x) |\right) \nabla u (x) \cdot\nabla \varphi (x)\ \text{d}x=0\quad\forall\varphi\in C_c^\infty(\Omega).$$ The solution $u$ restricted to $B$ belongs to $W^{1,p}(B)$, whereas $u$ restricted to $A$ belongs to $W^{1,q}(A)$; however the solution $u$ as a whole is an element of the largest between the two functional spaces $W^{1,p}(\Omega)$ and $W^{1,q}(\Omega)$. Furthermore, (i) if $p\leq q$ then $W^{1,p}(\Omega)\cup W^{1,q}(\Omega)=W^{1,p}(\Omega)$, and (ii) if $p\geq q$ then $W^{1,p}(\Omega)\cup W^{1,q}(\Omega)=W^{1,q}(\Omega)$. The solution $u$ satisfies the boundary condition in the sense that $u-f\in W_0^{1,p}(\Omega)\cup W_0^{1,q}(\Omega)$ and we write $u|_{\partial\Omega}=f$. 
Moreover, the solution $u$ is variationally characterized as $$\label{gminimum} \arg\min\left\{ \mathbb{E}_\sigma\left( u\right)\ :\ u\in W^{1,p}(\Omega)\cup W^{1,q}(\Omega), \ u|_{\partial\Omega}=f\right\}.$$ In ([\[gminimum\]](#gminimum){reference-type="ref" reference="gminimum"}), the functional $\mathbb{E}_\sigma\left( u\right)$ is the Dirichlet energy $$%\label{genergy} \mathbb{E}_\sigma \left( u \right) = \int_{B} Q_B (x,|\nabla u(x)|)\ \text{d}x+ \int_A Q_A (x,|\nabla u(x)|)\ \text{d}x$$ where $Q_B$ and $Q_A$ are the Dirichlet energy density in $B$ and in $A$, respectively: $$\begin{aligned} %\label{Bdensity} & Q_{B} \left( x,E\right) :=\int_{0}^{E} \sigma_B\left( x,\xi \right)\xi \text{d}\xi\quad \text{for a.e.}\ x\in B\ \text{and}\ \forall E\geq0,\\ & Q_{A}\left( x,E\right) :=\int_{0}^{E} \sigma_A\left( x,\xi \right)\xi \text{d}\xi\quad \text{for a.e.}\ x\in A\ \text{and}\ \forall E\geq 0,%\label{Adensity}\end{aligned}$$ and $\sigma_B$ and $\sigma_A$ are the restiction of the electrical conductivity $\sigma$ in $B$ and $A$, respectively. ## Requirements on the Dirichlet energy densities {#subsec_hyp} In this Section, we provide the assumptions on the Dirichlet energy densities $Q_B$ and $Q_A$, to guarantee the well-posedness of the problem and to prove the main convergence results of this paper. For each individual result, we will make use of a minimal set of assumptions, among those listed in the following. Firstly, we recall the definition of the Carathéodory functions. **Definition 1**. *$Q:\Omega\times[0,+\infty)\to\mathbb{R}$ is a Carathéodory function iff:* 1. *$\Omega\ni x\mapsto Q(x,E)$ is measurable for every $E\in [0,+\infty)$,* 2. *$[0,+\infty)\ni E\mapsto Q(x, E)$ is continuous for almost every $x\in\Omega$.* The assumptions on $Q_B$ and $Q_A$, required to guarantee the existence and uniqueness of the solution, are as follows. - $Q_B$ and $Q_A$ are Carathéodory functions; - $[0,+\infty)\ni E\mapsto Q_B(x,E)$ and $[0,+\infty)\ni E\to Q_A(x,E)$ are nonnegative, $C^1$, strictly convex, $Q_B(x,0)=0$ for a.e. $x\in B$, and $Q_A(x,0)=0$ for a.e. $x\in A$. The behaviour of $Q_A$ and $Q_B$ for small Dirichlet boundary data, satisfies the following assumptions: - There exists two exponents $p_0$ and $q_0$ with $1< p_0 \leq p<\infty$, $1< q_0 \leq q<\infty$ and $p_0\neq q_0$, such that: $$\begin{split} &(i)\ \underline{Q}\ {\max\left\{ \left(\frac{E}{E_0}\right)^{p_0},\left(\frac{ E}{E_0}\right)^p\right\}}\leq Q_B(x, E)\leq\overline Q \max\left\{ \left(\frac{E}{E_0}\right)^{p_0},\left(\frac{ E}{E_0}\right)^p\right\}\\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \text{for a.e.} \ x\in B \ \text{and}\ \forall\ E\ge 0,\\ &(ii)\ \underline{Q} \ {\max\left\{ \left(\frac{E}{E_0}\right)^{q_0},\left(\frac{ E}{E_0}\right)^q\right\}}\leq Q_A(x, E)\leq\overline Q \max\left\{ \left(\frac{E}{E_0}\right)^{q_0},\left(\frac{ E}{E_0}\right)^q\right\}\\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \text{for a.e.} \ x\in A \ \text{and}\ \forall\ E\ge 0. \end{split}$$ Assumption (A2) implies that both $Q_B$ and $Q_A$ are increasing functions in $E$; moreover, (A2) and (A3) imply that both $Q_B(x, E)\leq \overline Q$ and $Q_A(x,E)\le\overline Q$, when $0\le E\leq E_0$. Finally, assumption (A3) is implied by the well-known hypothesis used in the literature (see e.g. assumptions (H4) in [@corboesposito2021monotonicity] and (0.2) in [@giaquinta1982regularity]). 
- There exists a function $\beta_0\in L^\infty_+(B)$ such that: $$\begin{split} \lim_{E\to 0^+} \frac{Q_B (x,E)}{E^{p_0}}=\beta_0(x)\quad \text{for a.e.}\ x\in B. \end{split}$$ In Section [6](#counter_sec){reference-type="ref" reference="counter_sec"}, we will provide another counterexample to show that assumption (A4) is sharp. ## Connection among $\sigma$, ${\bf J}$ and $Q$ This paper is focused on the properties of the Dirichlet energy density $Q$, while, in physics and engineering the electrical conductivity $\sigma$ is of greater interest. From this perspective, assumptions (Ax) are able to include a wide class of electrical conductivities (see Figure [13](#fig_5_assumptions){reference-type="ref" reference="fig_5_assumptions"}). In other words, the (Ax)s are not restrictive in practical applications. ![Behaviour of the constitutive relationship in a neighborhood of $E=0$ for $p_0>2$, $p_0=2$ and $p_0<2$: in terms of (a) the electrical conductivity $\sigma$ and (b) the electrical current density $J$. Dashed lines correspond to the upper and lower bounds to either $\sigma$ or $J$.](Lim_5_1_sigmaEp_0ge2.png "fig:"){#fig_5_assumptions width="42%"} ![Behaviour of the constitutive relationship in a neighborhood of $E=0$ for $p_0>2$, $p_0=2$ and $p_0<2$: in terms of (a) the electrical conductivity $\sigma$ and (b) the electrical current density $J$. Dashed lines correspond to the upper and lower bounds to either $\sigma$ or $J$.](Lim_5_2_JEp_0ge2.png "fig:"){#fig_5_assumptions width="42%"} ![Behaviour of the constitutive relationship in a neighborhood of $E=0$ for $p_0>2$, $p_0=2$ and $p_0<2$: in terms of (a) the electrical conductivity $\sigma$ and (b) the electrical current density $J$. Dashed lines correspond to the upper and lower bounds to either $\sigma$ or $J$.](Lim_5_3_sigmaEp_0=2.png "fig:"){#fig_5_assumptions width="42%"} ![Behaviour of the constitutive relationship in a neighborhood of $E=0$ for $p_0>2$, $p_0=2$ and $p_0<2$: in terms of (a) the electrical conductivity $\sigma$ and (b) the electrical current density $J$. Dashed lines correspond to the upper and lower bounds to either $\sigma$ or $J$.](Lim_5_4_JEp_0=2.png "fig:"){#fig_5_assumptions width="42%"} ![Behaviour of the constitutive relationship in a neighborhood of $E=0$ for $p_0>2$, $p_0=2$ and $p_0<2$: in terms of (a) the electrical conductivity $\sigma$ and (b) the electrical current density $J$. Dashed lines correspond to the upper and lower bounds to either $\sigma$ or $J$.](Lim_5_5_sigmaEp_0le2.png "fig:"){#fig_5_assumptions width="42%"} ![Behaviour of the constitutive relationship in a neighborhood of $E=0$ for $p_0>2$, $p_0=2$ and $p_0<2$: in terms of (a) the electrical conductivity $\sigma$ and (b) the electrical current density $J$. Dashed lines correspond to the upper and lower bounds to either $\sigma$ or $J$.](Lim_5_6_JEp_0le2.png "fig:"){#fig_5_assumptions width="42%"} There is a close connection between $\sigma$, $J$ and $Q$. 
Indeed, $$\begin{split} Q_B \left( x,E \right) =\int_{0}^{E}{J_B} ({ x, \xi}) \ \text{d}{ \xi}\quad\text{for a.e.} \ x\in B\ \text{and}\ \forall E> 0,\\ Q_A \left( x,E \right) =\int_{0}^{E}{J_A} ({ x, \xi}) \ \text{d}{ \xi}\quad\text{for a.e.} \ x\in A\ \text{and}\ \forall E> 0, \end{split}$$ where $J_B$ and $J_A$ is the magnitude of the current density in regions $B$ and $A$, respectively: $$\label{connsJQ} \begin{split} J_B (x, E)&=\partial_E Q_B(x,E)=\sigma_B(x, E)E\quad \text{for a.e.} \ x \in B\ \text{and}\ \forall E>0,\\ J_A (x, E)&=\partial_E Q_A(x,E)=\sigma_A(x, E)E\quad \text{for a.e.} \ x \in A\ \text{and}\ \forall E>0. \end{split}$$ The electrical conductivity $\sigma(x,E)$ is the secant to the graph of the function $J_\sigma(x,E(x))$ and $Q_\sigma (x, E(x))$ is the area of the sub-graph of $J_\sigma(x, E(x))$. For a geometric interpretation of the connections between $\sigma$, $J_\sigma$ and $Q_\sigma$, see Figure [15](#fig_6_JE){reference-type="ref" reference="fig_6_JE"}. ![For any given spatial point in the region $\Omega$, (a) the electrical conductivity $\sigma(\cdot,E)$ is the secant line to the graph of the function $J_\sigma(\cdot,E)$; (b) $Q_\sigma (\cdot, E)$ is the area of the sub-graph of $J_\sigma(\cdot, E)$.](Lim_6_JE.png "fig:"){#fig_6_JE width="45%"} ![For any given spatial point in the region $\Omega$, (a) the electrical conductivity $\sigma(\cdot,E)$ is the secant line to the graph of the function $J_\sigma(\cdot,E)$; (b) $Q_\sigma (\cdot, E)$ is the area of the sub-graph of $J_\sigma(\cdot, E)$.](Lim_6_JEarea.png "fig:"){#fig_6_JE width="45%"} ## Existence and uniqueness of the solutions The proof of the existence and uniqueness of the solution for [\[gproblem\]](#gproblem){reference-type="eqref" reference="gproblem"} in its variational form, relies on standard methods of the Calculus of Variations, when the Dirichlet energy density presents the same growth in any point of the domain $\Omega$. The case treated in this work is nonstandard, because the Dirichlet energy density presents different growth in $B$ and $A$ and, hence, we provide a proof in the following. **Theorem 2**. *Let $1<p, q<+\infty$, $p\neq q$ and $f\in X_\diamond^p(\partial \Omega)$. If (A1), (A2), (A3) hold, then there exists a unique solution of problem [\[gminimum\]](#gminimum){reference-type="eqref" reference="gminimum"}.* *Proof.* Before distinguishing the two cases depending on the exponents order, we observe that there exists a function $u_0\in W^{1,p}(\Omega)$ that assumes a suitable constant value in $A$, with $Tr(u_0)=f$ on $\partial\Omega$, such that $||u_0||_{W^{1,p}(\Omega)}\leq C(\partial\Omega) ||f||_{X^p_\diamond(\partial\Omega)}<+\infty$, by the Inverse Trace inequality in Besov spaces [@leoni17 Th. 18.34]. For this function $u_0$, it is easily seen that $\mathbb E_\sigma(u_0)<+\infty$. As a consequence, $\mathbb E_\sigma$ is proper convex function, as required to apply [@dacorogna2007direct Th. 3.30]. Moreover, the strictly convex function $Q_\sigma(x,\cdot)$ is coercive for a.e. $x\in\Omega$ with respect to $\min\{p,q\}$. Indeed, if $p>q$ and $E\ge E_0$, then, by assumption (A3), we have $$\begin{split} &Q_B(x, E)\ge \underline{Q}\left(\frac{E}{E_0}\right)^p\ge \underline{Q}\left(\frac{E}{E_0}\right)^q\ge \underline{Q}\left[\left(\frac{E}{E_0}\right)^q-1\right]\quad \text{a.e. 
in } B;\\ %& Q_B(x, E)\ge \underline{Q}\left[ \left(\frac{E}{E_0}\right)^p-\left(\frac{\tilde E}{E_0}\right)^p\right]\ge\underline{Q}\left[ \left(\frac{\tilde E}{E_0}\right)^{p-q}\left(\frac{E}{E_0}\right)^q-\left(\frac{\tilde E}{E_0}\right)^p\right]\quad\text{a.e. in } B;\\ &Q_A(x, E)\ge \underline{Q} \left(\frac{E}{E_0}\right)^q\ge \underline{Q}\left[\left(\frac{E}{E_0}\right)^q-1\right] \qquad\qquad\qquad\ \ \text{a.e. in } A. %& Q_A(x, E)\ge \underline{Q}\left[ \left(\frac{E}{E_0}\right)^q-\left(\frac{\tilde E}{E_0}\right)^q\right]\quad\text{a.e. in } A. \end{split}$$ If $E<E_0$, then $$\begin{split} &Q_B(x, E)\ge 0%\underline{Q}\left(\frac{E}{E_0}\right)^p%\ge\underline{Q}\left[\left(\frac{E}{E_0}\right)^q+\left(\frac{E}{E_0}\right)^p-\left(\frac{E}{E_0}\right)^q\right] \ge\underline{Q}\left[\left(\frac{E}{E_0}\right)^q-1\right]\quad \text{a.e. in } B;\\ & Q_A(x, E)\ge 0 %\underline{Q} \left(\frac{E}{E_0}\right)^q\ge 0 \ge\underline{Q}\left[\left(\frac{E}{E_0}\right)^q-1\right]\quad \text{a.e. in } A. \end{split}$$ Therefore, setting $Q_\sigma=Q_B$ in $B$ and $Q_\sigma=Q_A$ in A, we have $$Q_\sigma(x, E)\ge \underline{Q}\left[\left(\frac{E}{E_0}\right)^q-1\right]\quad \text{a.e. in }\Omega,$$ for any $E\ge 0$. Similarly, when $p<q$, we have $Q_\sigma(x,E)\ge \underline{Q}\left[\left(\frac{E}{E_0}\right)^p-1\right]$, for any $E\ge0$. Therefore, in both cases ($p>q$ and $p<q$), all the assumptions of [@dacorogna2007direct Th. 3.30] are satisfied and, thus, the solution exists and is unique. ◻ **Remark 3**. By invoking [@dacorogna2007direct Th. 3.30] one finds that the solution is an element of $W^{1,\min\{p,q\}}(\Omega)=W^{1,p}(\Omega)\cup W^{1,q}(\Omega)$. In both cases ($p>q$ and $p<q$) the boundary data $f\in X_\diamond^p(\partial \Omega)$ is compatible with the solution space. Let us observe that optimization problems on domains with holes have received a great deal of interest in recent years, see e.g. [@della2020optimal; @gavitone2021isoperimetric; @paoli2020stability; @paoli2020sharp] and references therein. ## Normalized solution Through this paper we study the behaviour of the solution of problem [\[gminimum\]](#gminimum){reference-type="eqref" reference="gminimum"} for small Dirichlet boundary data, i.e. the behaviour of $u^\lambda$ defined as: $$\label{Fspezzata} \min_{\substack{u\in W^{1,p}(\Omega)\cup W^{1,q}(\Omega)\\ u=\lambda f\ \text{on}\ \partial \Omega}}\mathbb E_\sigma(u),$$ for $\lambda\to 0^+$ (small Dirichlet data). To this purpose, as discussed in Section [2](#underlying){reference-type="ref" reference="underlying"}, it is convenient to introduce the normalized solution $v^\lambda$ defined as: $$%\label{v_scal} v^\lambda=\frac{u^\lambda}{\lambda}.$$ In the following Sections, we prove that the behaviour of the normalized function $v^\lambda$ is $p_0-$Laplace modeled for $\lambda \to 0^+$. For any prescribed $f\in X^p_\diamond(\partial \Omega)$ and $\lambda>0$, $v^\lambda$ is the solution of the following variational problem: $$\label{G_norm} \min_{\substack{v\in W^{1,p}(\Omega)\cup W^{1,q}(\Omega)\\ v=f\ \text{on}\ \partial \Omega}}\mathbb G_0^\lambda(v),\quad \mathbb G_0^\lambda(v)=\frac 1 {\lambda^{p_0}}\left(\int_{B} Q_B(x,\lambda|\nabla v(x)|)dx+\int_{A} Q_A(x,\lambda|\nabla v(x)|)dx\right).$$ The multiplicative factor $1/\lambda^{p_0}$ is introduced in order to guarantee that the functionals $\mathbb G_0^\lambda$ are equibounded for small $\lambda$. 
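To see why this is the natural normalization, consider the model case of a weighted $p_0-$Laplace density, $Q_B(x,E)=\beta_0(x)E^{p_0}$ (cf. the density $Q_F$ introduced in Section [4](#mean_sec0){reference-type="ref" reference="mean_sec0"}): in this case the contribution of $B$ to $\mathbb G_0^\lambda$ does not depend on $\lambda$ at all, since $$\frac 1 {\lambda^{p_0}}\int_{B} Q_B(x,\lambda|\nabla v(x)|)dx=\frac 1 {\lambda^{p_0}}\int_{B} \beta_0(x)\lambda^{p_0}|\nabla v(x)|^{p_0}dx=\int_{B} \beta_0(x)|\nabla v(x)|^{p_0}dx.$$ For a general density satisfying (A3.i), the same quantity is controlled from below by $\frac{\underline Q}{E_0^{p_0}}\int_B|\nabla v(x)|^{p_0}dx$ and from above, for $0<\lambda\le 1$ and any fixed $v$ with $\nabla v\in L^p(B)$, by $\overline Q\left(E_0^{-p_0}\int_B|\nabla v(x)|^{p_0}dx+E_0^{-p}\lambda^{p-p_0}\int_B|\nabla v(x)|^{p}dx\right)$.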
The normalized solution makes it possible to ''transfer'' the parameter $\lambda$ in [\[Fspezzata\]](#Fspezzata){reference-type="eqref" reference="Fspezzata"} from the boundary data to the functional $\mathbb G_0^\lambda$. Specifically, in the following Sections, we will prove that $v^\lambda$ converges, under very mild hypotheses, as $\lambda\to 0^+$. If $q_0<p_0$, the limiting problem of [\[G_norm\]](#G_norm){reference-type="eqref" reference="G_norm"} is a problem where the inner region $A$ is replaced by a PEC. The limit solution is termed $w^0$. If $p_0<q_0$, the limiting problem of [\[G_norm\]](#G_norm){reference-type="eqref" reference="G_norm"} is a problem where the inner region $A$ is replaced by a PEI. The limit of $v^\lambda$ in $B$ is termed $v^0_B$. Finally, we remark that $v^0_B$ and $w^0$ arise from a weighted $p_0-$Laplace problem. # The fundamental inequality for small Dirichlet data {#mean_sec0} In this Section we provide the main tool to achieve the convergence results in the limiting cases for ''small'' Dirichlet boundary data. Specifically, we show that the asymptotic behaviour of the Dirichlet energy corresponds to a $p_0-$Laplace modelled equation [@Salo2012_IP; @guo2016inverse] in domain $B$. In the following, we study the asymptotic behaviour of the Dirichlet energy in the outer region $B$. To do this, we prove the following general Lemma, first for a weighted $p_0-$Laplace problem and, then, for the quasilinear case. Let $Q_F(x,E)=\theta(x)E^{p_0}$ be the Dirichlet energy density for a weighted $p_0-$Laplace problem defined in $F$, a bounded Lipschitz domain. We observe that $Q_F$ satisfies assumption (A4). **Lemma 4**. *Let $1<p_0<+\infty$, $F\subset\mathbb{R}^n$ be a bounded domain with Lipschitz boundary, $\theta$ be a nonnegative measurable function in $F$ and $\{w_n\}_{n\in\mathbb{N}}$ be a sequence weakly convergent to $w$ in $W^{1,p_0}(F)$. Then we have $$\label{only_in_factor_E3} \int_{F} \theta(x)|\nabla w(x)|^{p_0} dx\le\liminf_{n\to+\infty}\int_{F} \theta(x)|\nabla w_n(x)|^{p_0} dx.$$* *Proof.* Let us set $L:=\liminf_{n\to+\infty}\int_{F} \theta(x)|\nabla w_n(x)|^{p_0} dx$. If $L=+\infty$, the inequality [\[only_in_factor_E3\]](#only_in_factor_E3){reference-type="eqref" reference="only_in_factor_E3"} is trivial; otherwise we consider a subsequence $\{n_j\}_{j\in\mathbb{N}}$ such that $$\lim_{ j\to+\infty}\int_{F} \theta(x)|\nabla w_{ {n_j}}(x)|^{p_0} dx=L.$$ This means that for any $\varepsilon>0$, there exists $\nu\in\mathbb{N}$ such that $$\label{defin_liminf} L-\varepsilon<\int_{F} \theta(x)|\nabla w_{ {n_j}}(x)|^{p_0} dx<L+\varepsilon$$ for any $j\ge\nu$. Then, by Mazur's Lemma (see, for example, [@dunford1963linear; @renardy2006introduction]), there exist a function $N:\mathbb{N}\to\mathbb{N}$ and, for any $n\in\mathbb{N}$, a sequence $\{\alpha_{n,k}\}_{k=n}^{N(n)}$ such that (M1) $\alpha_{n,k}\geq 0$ for any $(n,k)\in\mathbb{N}\times[n,N(n)]$; (M2) $\sum_{k=n}^{N(n)}\alpha_{n,k}=1$ for any $n\in\mathbb{N}$; (M3) $z_n:=\sum_{k=n}^{N(n)}\alpha_{n,k}w_{ {n_k}}\to w$ in $W^{1,p_0}(F)$. Then there exists a subsequence $\{z_{ {n_l}}\}_{ l\in\mathbb{N}}$ such that $$\label{eguagl_liminf} \liminf_{n\to+\infty} \int_{F} \theta(x)|\nabla z_n(x)|^{p_0} dx= \lim_{ l\to+\infty}\int_{F} \theta(x)|\nabla z_{ {n_l}}(x)|^{p_0} dx$$ and another subsequence, again denoted by $\{z_{ {n_l}}\}_{ l\in\mathbb{N}}$, such that $\nabla z_{ {n_l}}\to \nabla w$ a.e. in $F$ [@leoni17 Chap. 18].
Therefore, we have $$\begin{split} \mathbb \int_{F} \theta(x)|\nabla w(x)|^{p_0} dx&= \int_{F} \theta(x)\liminf_{ l\to+\infty}|\nabla z_{ {n_l}}(x)|^{p_0} dx\leq \liminf_{ {l}\to+\infty}\int_{F} \theta(x)|\nabla z_{ {n_l}}(x)|^{p_0} dx\\ &=\lim_{ l\to+\infty}\int_{F} \theta(x)|\nabla z_{ {n_l}}(x)|^{p_0} dx =\liminf_{n\to+\infty}\int_{F} \theta(x)|\nabla z_n(x)|^{p_0} dx\\ &\le\liminf_{n\to+\infty}\sum_{k=n}^{N(n)}\alpha_{n,k}\int_{F} \theta(x)|\nabla w_{ {n_k}}(x)|^{p_0} dx\\ &<\liminf_{n\to+\infty}\mathbb \sum_{k=n}^{N(n)}\alpha_{n,k}\left(L+\varepsilon\right)=L+\varepsilon\\ &=\liminf_{n\to+\infty}\int_{F} \theta(x)|\nabla w_n(x)|^{p_0} dx+\varepsilon, \end{split}$$ where in the first line the equality follows from the convergence result of (M3) and the inequality follows from Fatou's Lemma, in the second line we applied [\[eguagl_liminf\]](#eguagl_liminf){reference-type="eqref" reference="eguagl_liminf"}, in the third line we applied the convexity of $|\cdot|^{p_0}$, and in the fourth line we applied [\[defin_liminf\]](#defin_liminf){reference-type="eqref" reference="defin_liminf"}. Conclusion [\[only_in_factor_E3\]](#only_in_factor_E3){reference-type="eqref" reference="only_in_factor_E3"} follows from the arbitrariness of $\varepsilon>0$. ◻ The next step consists in extending [\[only_in_factor_E3\]](#only_in_factor_E3){reference-type="eqref" reference="only_in_factor_E3"} from the weighted $p_0-$Laplace case to the quasilinear case. In doing this, we restrict the validity of the result to sequences of the solutions of problem [\[G_norm\]](#G_norm){reference-type="eqref" reference="G_norm"}. The main difficulty in proving this result lies in evaluating an upper bound of the measure of that part of $B$ where the solutions $v^\lambda$ admit large values of the gradient (see Figure [16](#fig_7_insiemi){reference-type="ref" reference="fig_7_insiemi"}). ![The objective of the proof is to show that the set of the points in $B$ such that the solution $v^{\lambda_{n_j}}$ does not satisfy the fundamental inequality ($C^c_{\delta,n_j}$) and admit large values of the gradient ($D^c_{\ell,n_j}$) can be made sufficiently small (shaded region).](Lim_7_insiemi.png){#fig_7_insiemi width="\\textwidth"} **Lemma 5**. *Let $1<p_0\leq p<+\infty$, $f\in X^p_\diamond(\partial \Omega)$, (A1), (A2), (A3), (A4) hold, and let the solution $v^\lambda$ of [\[G_norm\]](#G_norm){reference-type="eqref" reference="G_norm"} be weakly convergent to $v$ in $W^{1,p_0}(B)$, for $\lambda\to 0^+$. Let $\lambda_n\to 0^+$ be a decreasing sequence for $n\to+\infty$, such that $$\label{seq_to_liminf_0} \lim_{n\to+\infty}\frac{1}{\lambda_n^{p_0}}\int_{B}Q_B(x,\lambda_n|\nabla v^{\lambda_n}(x)|)dx=\liminf_{\lambda\to0^+}\frac{1}{\lambda^{p_0}}\int_{B}Q_B(x,\lambda|\nabla v^{\lambda}(x)|)dx.$$ Then, for any $\delta>0$ and $\theta>0$, there exists a set $F_{\delta,\theta}\subseteq B$ with $|B\setminus F_{\delta,\theta}|<\theta$ such that $$\label{step_to_conclude_0} \liminf_{n\to+ \infty}\int_{F_{\delta,\theta}} (\beta_0(x)-\delta)|\nabla v^{\lambda_n}(x)|^{p_0} dx\le \lim_{n\to+\infty}\frac{1}{\lambda_n^{p_0}}\int_{B}Q_B(x,\lambda_n|\nabla v^{\lambda_n}(x)|)dx.$$* *Proof.* Let ${w^0}$ be the solution of problem [\[pproblem_Bgrad0\]](#pproblem_Bgrad0){reference-type="eqref" reference="pproblem_Bgrad0"}. 
We have $$\begin{split} \frac{\underline{Q}}{E_0^{p_0}}\int_{B}|\nabla v^\lambda(x)|^{p_0}dx&\leq\frac 1 {\lambda^{p_0}}\int_{B} Q_B(x,\lambda|\nabla v^\lambda(x)|)dx\leq \mathbb{G}_0^\lambda(v^\lambda)\leq\mathbb{G}_0^\lambda( {w^0})\\ &=\frac 1 {\lambda^{p_0}}\int_{B} Q_B(x,\lambda|\nabla {w^0}(x)|)dx\\ &\leq\max\left\{\frac{\overline Q}{E_0^{p_0}}\int_B|\nabla {w^0}(x)|^{p_0}dx , \frac{\overline Q}{E_0^p}\lambda^{p-p_0}\int_B|\nabla {w^0}(x)|^pdx\right\}, \end{split}$$ where in the first inequality we used the left-hand side of (A3.i), in the second inequality we added the integral term on $A$, in the third inequality we tested $\mathbb G_0^\lambda$ with ${w^0}$, and in the last inequality we used the right-hand side of (A3.i). Since $p-p_0\geq 0$, we find that $\int_{B}|\nabla v^\lambda(x)|^{p_0}dx$ is definitively upper bounded and, therefore, $\int_{B} |\nabla v^{\lambda_n}(x)|^{p_0}dx$ is upper bounded by a constant $M>0$, for any $n\in\mathbb{N}$. Let us fix $\delta>0$ and $\theta>0$. For any $n\in\mathbb{N}$, we set $$\label{def_C0} C_{\delta,n}=C_{n}:=\left\{x\in B\ : \ (\beta_0(x)-\delta)(\lambda_n|\nabla v^{\lambda_n}(x)|)^{p_0}\le Q_B(x,\lambda_n|\nabla v^{\lambda_n}(x)|)\right\}.$$ Now, for any constant $L>0$, we set $$\begin{split} D_{n,L}=D_{n}:=\{ x\in B \ : \ |\nabla v^{\lambda_n}(x)|< L\},\\ D^c_{n,L}=D^c_{n}:=\{ x\in B \ : \ |\nabla v^{\lambda_n}(x)|\ge L\}. \end{split}$$ By definition of $D^c_{n}$, we have $$|D^c_{n}|L^{p_0}\le \int_{B} |\nabla v^{\lambda_n}(x)|^{p_0}dx \le M,$$ which gives $|D^c_n|\leq \frac{M}{L^{p_0}}$, for any $n\in\mathbb{N}$. By choosing $L>\left(\frac{4M}{\theta}\right)^\frac{1}{p_0}$, we have $$\label{stimaD3_0} |D^c_n|<\frac\theta 4.$$ Let $E_{n}$ be defined as $$E_{\delta,L,n}=E_{n}:=\left\{x\in B\ : \ (\beta_0(x)-\delta)E^{p_0}\leq {Q_B(x,E)}\ \ \forall\ 0\leq E<\lambda_nL\right\}.$$ Then, for any $n\in\mathbb{N}$, $$\label{chain_inclusion3_0} C_{n}\supseteq C_{n}\cap D_{n}\supseteq E_{n}\cap D_{n}.$$ $E_n$ is increasing with respect to $n$ and $\left| \bigcup_{n=1}^{+\infty}E_{n}\right|=|B|$. Therefore there exists a natural number $n_1=n_1(\theta)$ such that $$\label{stimaF3_0} \left|\bigcup_{n=1}^{n_1} E_{n}\right|=\left|E_{n_1}\right|\ge |B|-\frac \theta 4.$$ By considering the complementary sets in $B$, [\[chain_inclusion3_0\]](#chain_inclusion3_0){reference-type="eqref" reference="chain_inclusion3_0"} for $n=n_1$ gives $$C^c_{n_1}\subseteq E^c_{n_1}\cup D^c_{n_1}$$ with $$|C_{n_1}^c|\leq \frac\theta 4+\frac\theta 4=\frac\theta 2,$$ as follows from [\[stimaD3_0\]](#stimaD3_0){reference-type="eqref" reference="stimaD3_0"} and [\[stimaF3_0\]](#stimaF3_0){reference-type="eqref" reference="stimaF3_0"}.
Similarly, repeating the previous argument with $4$ replaced by $2^{j+1}$, we construct a subsequence $\{\lambda_{n_j}\}_{j\in\mathbb{N}}$ such that $$|C_{n_j}^c|\leq \frac{\theta}{2^j}.$$ Then, by defining $$F_{\delta,\theta}:=\bigcap_{j=1}^\infty C_{n_j},$$ we have $$F^c_{\delta,\theta}= \bigcup_{j=1}^\infty C_{n_j}^c$$ with $|F^c_{\delta,\theta}|\leq \theta\sum_{j=1}^{+\infty} {2^{-j}}=\theta$, which means $$|B|-\theta\le |F_{\delta,\theta}|\le |B|.$$ Therefore, we have $$\begin{split} &\liminf_{j\to+\infty}\int_{F_{\delta,\theta}}(\beta_0(x)-\delta)|\nabla v^{\lambda_{n_j}}(x)|^{p_0} dx\le \liminf_{j\to+\infty}\frac{1}{\lambda_{n_j}^{p_0}}\int_{F_{\delta,\theta}} Q_B(x,\lambda_{n_j}|\nabla v^{\lambda_{n_j}}(x)|)dx\\ &\le\liminf_{j\to +\infty}\frac{1}{\lambda_{n_j}^{p_0}}\int_{B} Q_B(x,\lambda_{n_j}|\nabla v^{\lambda_{n_j}}(x)|)dx=\lim_{n\to+\infty}\frac{1}{\lambda_n^{p_0}}\int_{B}Q_B(x,\lambda_n|\nabla v^{\lambda_n}(x)|)dx, \end{split}$$ where in the first inequality we use $F_{\delta,\theta}\subseteq C_{n_j}$ for any $j\in\mathbb{N}$ and the inequality appearing in [\[def_C0\]](#def_C0){reference-type="eqref" reference="def_C0"}, in the second inequality we take into account that $F_{\delta,\theta}\subseteq B$, and, in the last equality, we exploit the convergence given by [\[seq_to_liminf_0\]](#seq_to_liminf_0){reference-type="eqref" reference="seq_to_liminf_0"}. ◻ Finally, we prove the result on the fundamental inequality holding for $\lambda\to 0^+$. **Proposition 6**. *Let $1<p_0\le p<+\infty$, $f\in X^p_\diamond(\partial \Omega)$, (A1), (A2), (A3), (A4) hold, and let the solution $v^\lambda$ of [\[G_norm\]](#G_norm){reference-type="eqref" reference="G_norm"} be weakly convergent to $v$ in $W^{1,p_0}(B)$, as $\lambda\to 0^+$. Then $$\label{fundamental_inequality3_0} \int_{B}\beta_0(x)|\nabla v|^{p_0}dx\le \liminf_{\lambda\to 0^+}\frac 1 {\lambda^{p_0}}\int_{B} Q_B(x,\lambda|\nabla v^\lambda(x)|)dx.$$* *Proof.* First, we assume that $v$ is nonconstant in $B$, otherwise the conclusion is trivial. Therefore, the integral on the l.h.s. of [\[fundamental_inequality3_0\]](#fundamental_inequality3_0){reference-type="eqref" reference="fundamental_inequality3_0"} is positive because $\beta_0\in L^\infty_+(B)$. Let $\{\lambda_n\}_{n \in \mathbb{N}}$ be a decreasing sequence such that $\lambda_n\to 0^+$ for $n\to+\infty$, and $$\label{seq_to_liminf3_0} \lim_{n\to+\infty}\frac{1}{\lambda_n^{p_0}}\int_{B}Q_B(x,\lambda_n|\nabla v^{\lambda_n}(x)|)dx=\liminf_{\lambda\to0^+}\frac{1}{\lambda^{p_0}}\int_{B}Q_B(x,\lambda|\nabla v^{\lambda}(x)|)dx.$$ To prove [\[fundamental_inequality3_0\]](#fundamental_inequality3_0){reference-type="eqref" reference="fundamental_inequality3_0"}, we use Lemma [Lemma 5](#lem_dis_fund_0){reference-type="ref" reference="lem_dis_fund_0"}. To this purpose, we observe that the measure $$\kappa: F\in {\mathcal B(\Omega)}\mapsto \int_F\beta_0(x)|\nabla v(x)|^{p_0}dx,$$ where $\mathcal B(\Omega)$ is the class of the Borel sets contained in $\Omega$, is absolutely continuous with respect to the Lebesgue measure. Therefore, for any $\varepsilon>0$, there exists $\theta>0$ such that $\kappa(F)<\frac\varepsilon 2$ for any $F$ with $|F|<\theta$.
Therefore, for any fixed $0<\delta\le\inf_{B}\beta_0$, $|B\setminus F_{\delta,\theta}|<\theta$ implies $$\label{mu3_0} \kappa(B\setminus F_{\delta,\theta})=\int_{B\setminus F_{\delta,\theta}}\beta_0(x)|\nabla v(x)|^{p_0}dx <\frac \varepsilon 2.$$ Hence, for $\delta\le \min\left\{\varepsilon/\left(2\int_{B}|\nabla v(x)|^{p_0} dx\right), \inf_B \beta_0(x)\right\}$, we have $$\begin{split} &\int_{B} \beta_0(x)|\nabla v(x)|^{p_0} dx-\varepsilon<\int_{B} \beta_0(x)|\nabla v(x)|^{p_0} dx-\int_{B\setminus F_{\delta,\theta}} \beta_0(x)|\nabla v(x)|^{p_0} dx-\frac \varepsilon 2\\ &\le\int_{F_{\delta,\theta}} \beta_0(x)|\nabla v(x)|^{p_0} dx- \delta \int_{B} |\nabla v(x)|^{p_0} dx\le \int_{F_{\delta,\theta}} (\beta_0(x)-\delta)|\nabla v(x)|^{p_0} dx\\ &\le \liminf_{\lambda\to 0^+}\int_{F_{\delta,\theta}} (\beta_0(x)-\delta)|\nabla v^{\lambda}(x)|^{p_0} dx \le\lim_{n\to+\infty}\frac{1}{\lambda_n^{p_0}}\int_{B}Q_B(x,\lambda_n|\nabla v^{\lambda_n}(x)|)dx\\ &=\liminf_{\lambda\to 0^+}\frac{1}{\lambda^{p_0}}\int_{B}Q_B(x,\lambda|\nabla v^{\lambda}(x)|)dx, \end{split}$$ where in the first line we applied [\[mu3_0\]](#mu3_0){reference-type="eqref" reference="mu3_0"}, in the second line the connection between $\delta$ and $\varepsilon$, in the third line we used Lemma [Lemma 4](#factorizable_lemma3){reference-type="ref" reference="factorizable_lemma3"} with $F= F_{\delta,\theta}$ and the inequality [\[step_to_conclude_0\]](#step_to_conclude_0){reference-type="eqref" reference="step_to_conclude_0"} of the previous Lemma [Lemma 5](#lem_dis_fund_0){reference-type="ref" reference="lem_dis_fund_0"}, and in the fourth line we used [\[seq_to_liminf3_0\]](#seq_to_liminf3_0){reference-type="eqref" reference="seq_to_liminf3_0"}. The conclusion follows from the arbitrariness of $\varepsilon$. ◻ # Limiting Problems for small Dirichlet data {#small_sec} In this Section we treat the limiting case of problem [\[Fspezzata\]](#Fspezzata){reference-type="eqref" reference="Fspezzata"} for small Dirichlet boundary data. We will distinguish two cases depending on $p_0$ and $q_0$: 1. $1<q_0<p_0<+\infty$ (see Section [5.1](#Small_bigger){reference-type="ref" reference="Small_bigger"}); 2. $1<p_0<q_0<+\infty$ (see Section [5.2](#Small_smaller){reference-type="ref" reference="Small_smaller"}). In the first case, we prove that: - $v^\lambda\rightharpoonup w ^0$ in $W^{1,p_0}(B)$, as $\lambda\to 0^+$, where $w^0$ in $B$ is the unique solution of problem [\[pproblem_Bgrad0\]](#pproblem_Bgrad0){reference-type="eqref" reference="pproblem_Bgrad0"}; - $v^\lambda\to w^0$ in $W^{1,q_0}(A)$, as $\lambda\to 0^+$, where $w^0$ is constant in any connected component of $A$; - $v^\lambda\rightharpoonup w ^0$ in $W^{1,p_0}(\Omega)$, as $\lambda\to 0^+$, where $w^0$ in $B$ is the unique solution of problem [\[pproblem_Bgrad0\]](#pproblem_Bgrad0){reference-type="eqref" reference="pproblem_Bgrad0"} and in $A$ is constant in any connected component. The limiting solution in $\Omega$, for $1<q_0<p_0<+\infty$, is characterized by: $$\label{H} \min_{\substack{v\in W^{1,p_0}(\Omega)\\ |\nabla v|=0\ \text{a.e. in}\ A \\ v=f\ \text{on}\ \partial \Omega}}\mathbb B_0(v),\quad \mathbb B_0(v)=\int_{B} \beta_0(x)|\nabla v(x)|^{p_0} dx.$$ Problem [\[H\]](#H){reference-type="eqref" reference="H"} is the variational form of problem [\[pproblem_Bgrad0\]](#pproblem_Bgrad0){reference-type="eqref" reference="pproblem_Bgrad0"}. 
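For the reader's orientation, and assuming enough regularity of the minimizer, the optimality conditions of [\[H\]](#H){reference-type="eqref" reference="H"} can be written formally as $$\begin{cases} \operatorname{div}\left(\beta_0(x)|\nabla v|^{p_0-2}\nabla v\right)=0 & \text{in}\ B,\\ v=f & \text{on}\ \partial\Omega,\\ v=c_j\ \text{(an unknown constant)} & \text{on}\ \partial A_j,\\ \displaystyle\int_{\partial A_j}\beta_0(x)|\nabla v|^{p_0-2}\,\partial_\nu v\ \text{d}S=0 & \text{for each connected component}\ A_j\ \text{of}\ A, \end{cases}$$ where $\nu$ denotes the unit normal on $\partial A_j$. This is only a sketch of the strong form behind the PEC interpretation (the rigorous statement is the one encoded in [\[pproblem_Bgrad0\]](#pproblem_Bgrad0){reference-type="eqref" reference="pproblem_Bgrad0"}): the last condition expresses the fact that no net current flows into the perfectly conducting inclusions.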
In the second case, instead, we prove that: - $v^\lambda\rightharpoonup v_B^0$ in $W^{1,p_0}(B)$, as $\lambda\to 0^+$, where $v_B^0$ in $B$ is the unique solution of problem [\[pproblem_B0\]](#pproblem_B0){reference-type="eqref" reference="pproblem_B0"}. For $1<p_0<q_0<+\infty$, the limiting solution is characterized by the following problem: $$\label{Hii} \min_{\substack{v\in W^{1,p_0}(B)\\ v=f\ \text{on}\ \partial \Omega}}\mathbb B_0(v),\quad \mathbb B_0(v)=\int_{B} \beta_0(x)|\nabla v(x)|^{p_0}dx.$$ The problem [\[Hii\]](#Hii){reference-type="eqref" reference="Hii"} is the variational form of [\[pproblem_B0\]](#pproblem_B0){reference-type="eqref" reference="pproblem_B0"}. We recall that $v_B^0\in W^{1,p_0}(B)$ is the unique normalized solution of the limiting problem [\[Hii\]](#Hii){reference-type="eqref" reference="Hii"} in the region $B$. Using the results developed in Section [4](#mean_sec0){reference-type="ref" reference="mean_sec0"}, we are in a position to prove the main convergence results. ## First case: $\mathbf{q_0<p_0}$ {#Small_bigger} In the whole section we assume $q_0< p_0\le p$ and $q_0\leq q$; hence we have the continuous embeddings $W^{1,p}(\cdot)\hookrightarrow W^{1,p_0}(\cdot)\hookrightarrow W^{1,q_0}(\cdot)$ and $W^{1,q}(\cdot)\hookrightarrow W^{1,q_0}(\cdot)$, for any bounded set with Lipschitz boundary. For any fixed $f\in X^{p}_\diamond(\partial \Omega)$ we study problem [\[G_norm\]](#G_norm){reference-type="eqref" reference="G_norm"} as $\lambda$ approaches zero. The variational problem [\[G_norm\]](#G_norm){reference-type="eqref" reference="G_norm"} particularizes as $$\label{G^t} \min_{\substack{v\in W^{1,q_0}(\Omega)\\ v=f\ \text{on}\ \partial \Omega}}\mathbb G_0^\lambda(v),\quad \mathbb G_0^\lambda(v)=\frac 1 {\lambda^{p_0}}\left(\int_{B} Q_B(x,\lambda|\nabla v(x)|)dx+\int_{A} Q_A(x,\lambda|\nabla v(x)|)dx\right).$$ Let $v^\lambda$ be the minimizer of [\[G\^t\]](#G^t){reference-type="eqref" reference="G^t"} and ${w^0}$ be the minimizer of [\[H\]](#H){reference-type="eqref" reference="H"}; the aim of this Section is to prove the following convergence result $$v^\lambda\rightharpoonup {w^0}\quad \text{in}\ W^{1,q_0}(\Omega) \quad\text{as}\ \lambda\to 0^+.$$ The condition $|\nabla v|=0$ a.e. in $A$ is equivalent to saying that $v$ is constant on each connected component of $A$. This makes it possible to decouple the problems associated with regions $B$ and $A$. Specifically, region $A$ behaves as a PEC, with respect to problem [\[H\]](#H){reference-type="eqref" reference="H"}, whereas the outer region $B$ behaves as a weighted $p_0-$Laplace modelled material with a PEC on $\partial A$. We first prove that $v^\lambda$ is weakly convergent as $\lambda \to 0^+$ and we identify the limiting function $v^0\in W^{1,q_0}(\Omega)$, by proving that $v^0$ satisfies $\mathbb B_0(v^0)=\mathbb B_0(w^0)$. The latter equality implies $v^0= {w^0}$, because of the uniqueness of the solution of problem [\[H\]](#H){reference-type="eqref" reference="H"}. Then, under stronger hypotheses, we prove the strong convergence in $W^{1,q_0}(\Omega)$. **Theorem 7**. *Let $1<q_0<p_0<+\infty$ be such that $p_0\leq p$, $q_0\leq q$, $f\in X^{p}_\diamond(\partial \Omega)$ and $v^\lambda$ be the solution of [\[G\^t\]](#G^t){reference-type="eqref" reference="G^t"}.
If (A1), (A2), (A3) and (A4) hold, then* - *$v^\lambda\rightharpoonup {w^0}$ in $W^{1,p_0}(B)$, as $\lambda\to 0^+$,* - *$v^\lambda\to {w^0}$ in $W^{1,q_0}(A)$, as $\lambda\to 0^+$,* - *$v^\lambda\rightharpoonup {w^0}$ in $W^{1,q_0}(\Omega)$, as $\lambda\to 0^+$,* *where ${w^0}\in W^{1,p_0}(\Omega)$ is the unique solution of [\[H\]](#H){reference-type="eqref" reference="H"}.* *Proof.* For the sake of simplicity, we will only treat the case when $A$ has one connected component. The general case can be treated with the same approach. Let us consider a function in $W^{1,p}(\Omega)$ whose trace on $\partial\Omega$ is $f$ and is such that $f\equiv w^0$ in $A$. We again denote this function by $f$ and hence we have $w^0-f\in W^{1,p_0}_0(B)$. A density argument [@leoni17 Th. 11.35] ensures that there exists $\{v_n\}_{n\in\mathbb{N}}\subseteq C_c^\infty(B)$ such that $$v_n \to w_0-f\quad \text{in } W^{1,p_0}(\Omega), \text{ as } n\to\infty.$$ Consequently, we have $$\lim_{n\to+\infty} \int_\Omega |\nabla v_n - \nabla (w^0-f)|^{p_0}dx=0.$$ We immediately deduce that $f+v_n\in W^{1,p}(B)$ and $$\lim_{n\to+\infty} \int_\Omega \beta_0(x)|\nabla (v_n+f) - \nabla w^0|^{p_0}dx=0.$$ Hence, this implies that $$\lim_{n\to+\infty} \mathbb B_0(v_n+f) =\mathbb B_0( w^0).$$ Therefore, for any $\varepsilon>0$, there exists $\omega\in W^{1,p}(\Omega)$ with $Tr(\omega)=f$ on $\partial\Omega$ and with constant value on $A$ such that: $$\label{lavrentiev} \mathbb B_0(\omega)<\mathbb B_0(w^0)+\varepsilon.$$ Hence, $$\label{chainGtvtv0Omega} \begin{split} \frac{ \underline{Q}}{E_0^{p_0}} \int_{B}|\nabla v^\lambda(x)|^{p_0}dx& \leq\frac 1 {\lambda^{p_0}}\int_{B} Q_B(x,\lambda|\nabla v^\lambda(x)|)dx\\ &\leq\mathbb G_0^\lambda (v^\lambda)\le \mathbb G_0^\lambda ( {\omega})=\frac 1 {\lambda^{p_0}}\int_{B} Q_B(x,|\lambda\nabla {\omega}(x)|)dx\\ &\le\max\left\{ \frac{ \overline{Q}}{E_0^{p_0}} \int_{B}|\nabla {\omega}(x)|^{p_0}dx,\frac{ \overline{Q}}{E_0^{p}}\lambda^{p-p_0} \int_{B}|\nabla {\omega}(x)|^{p}dx\right\}, \end{split}$$ where in the first inequality we used (A3.i)-left, in the second inequality we exploited the fact that $\mathbb G^\lambda_0$ also contains the integral term over $A$, in the third inequality we used the fact that $v^\lambda$ is the minimizer of $\mathbb G_0^\lambda$, in the last inequality we used (A3.i)-right. Since $\lambda^{p-p_0}$ is bounded, as $\lambda\to0^+$, we find that $\int_{B}|\nabla v^\lambda(x)|^{p_0}dx$ is definitively upper bounded by [\[chainGtvtv0Omega\]](#chainGtvtv0Omega){reference-type="eqref" reference="chainGtvtv0Omega"}. We first prove that $\{v^\lambda\}_\lambda\subseteq L^{p_0}(B)$ is equibounded. By using the Poincaré inequality [@leoni17 Th. 13.19], we have: $$\label{bounded_poincare_vl} \begin{split} ||v^\lambda||_{L^{p_0}(B)}&\leq ||v^\lambda- f||_{L^{p_0}(B)}+|| f ||_{L^{p_0}(B)}\leq C||\nabla v^\lambda-\nabla f||_{L^{p_0}}+||f||_{L^{p_0}(B)}\\ &\leq C||\nabla v^\lambda||_{L^{p_0}(B)}+C||\nabla f||_{L^{p_0}(B)}+||f||_{L^{p_0}(B)}. \end{split}$$ The claim is proved because the right hand side is equibounded. Taking into account [\[chainGtvtv0Omega\]](#chainGtvtv0Omega){reference-type="eqref" reference="chainGtvtv0Omega"} and [\[bounded_poincare_vl\]](#bounded_poincare_vl){reference-type="eqref" reference="bounded_poincare_vl"}, it follows that there exists $v^0\in W^{1,{p_0}}(B)$ such that, up to a subsequence, $v^\lambda\rightharpoonup v^0$ in $W^{1,{p_0}}(B)$, as $\lambda\to 0^+$, that is *(i)*. 
Similarly to [\[chainGtvtv0Omega\]](#chainGtvtv0Omega){reference-type="eqref" reference="chainGtvtv0Omega"}, for region $A$, we have $$\label{chainGtvtv0A} \begin{split} \frac{ \underline{Q}}{E_0^{q_0}}\int_{A}|\nabla v^\lambda(x)|^{q_0}dx & \leq\frac 1 {\lambda^{q_0}}\int_{A} Q_A(x,\lambda|\nabla v^\lambda(x)|)dx=\frac 1 {\lambda^{q_0-p_0}}\frac 1 {\lambda^{p_0}}\int_{A} Q_A(x,\lambda|\nabla v^\lambda(x)|)dx\\ &\leq\frac{1}{\lambda^{q_0-p_0}}\mathbb G_0^\lambda (v^\lambda)\le \frac{1}{\lambda^{q_0-p_0}}\mathbb G_0^\lambda ( {\omega})=\frac{1}{\lambda^{q_0-p_0}}\frac 1 {\lambda^{p_0}}\int_{B} Q_B(x,\lambda|\nabla {\omega}(x)|)dx\\ &\le\frac{1}{\lambda^{q_0-p_0}}\max\left\{ \frac{ \overline{Q}}{E_0^{p_0}} \int_{B}|\nabla {\omega} (x)|^{p_0}dx,\frac{ \overline{Q}}{E_0^p}\lambda^{p-p_0} \int_{B}|\nabla {\omega}(x)|^{p}dx\right\}, \end{split}$$ where we exploited (A3.ii)-left in the first inequality. Therefore, by passing [\[chainGtvtv0A\]](#chainGtvtv0A){reference-type="eqref" reference="chainGtvtv0A"} to the limit, we have $$\lim_{\lambda\to 0^+}\lambda^{q_0-p_0}\int_{A}|\nabla v^\lambda(x)|^{q_0}dx\leq\frac{\overline{Q}}{\underline Q} E_0^{q_0-p_0} \int_{B}|\nabla {\omega}(x)|^{p_0}dx.$$ Therefore, we find that $\int_{A}|\nabla v^\lambda(x)|^{q_0}dx=O(\lambda^{p_0-q_0})$; now, we prove that $\{v^\lambda\}_\lambda\subseteq L^{q_0}(A)$ is equibounded. By using the Poincaré inequality [@leoni17 Th. 13.19], we have: $$\label{bounded_poincare_vlq} \begin{split} ||v^\lambda||_{L^{q_0}(A)}\leq ||v^\lambda||_{L^{q_0}(\Omega)}& \leq ||v^\lambda- f||_{L^{q_0}(\Omega)}+|| f ||_{L^{q_0}(\Omega)}\\ &\leq C||\nabla v^\lambda-\nabla f||_{L^{q_0}(\Omega)}+||f||_{L^{q_0}(\Omega)}\\ &\leq C||\nabla v^\lambda||_{L^{q_0}(\Omega)}+C||\nabla f||_{L^{q_0}(\Omega)}+||f||_{L^{q_0}(\Omega)}. \end{split}$$ The claim is proved because the right hand side is equibounded. Hence, $||v^\lambda||_{W^{1,q_0}(A)}$ is definitively upper bounded. Moreover, by taking into account [\[chainGtvtv0Omega\]](#chainGtvtv0Omega){reference-type="eqref" reference="chainGtvtv0Omega"}, [\[chainGtvtv0A\]](#chainGtvtv0A){reference-type="eqref" reference="chainGtvtv0A"}, [\[bounded_poincare_vlq\]](#bounded_poincare_vlq){reference-type="eqref" reference="bounded_poincare_vlq"}, that $q_0<p_0\le p$, it turns out that $v^\lambda\in W^{1,q_0}(\Omega)$ and that $||v^\lambda||_{W^{1,q_0}(\Omega)}$ is upper bounded. Therefore, up to a subsequence, we find that $v^\lambda\rightharpoonup v^0$ in $W^{1,q_0}(\Omega)$. Moreover, $v^\lambda\to v^0$ in $W^{1,q_0}(A)$ and $v^0$ is constant in $A$ because $\nabla v^\lambda \to 0$ in $L^{q_0}(A)$. We have thus proved convergences *(ii)* and *(iii)*. The final step is to prove that $v^\lambda$ converges to ${w^0}$, which is the minimizer for [\[H\]](#H){reference-type="eqref" reference="H"}.
Specifically, we have the following inequalities: $$\label{chain_small_i} \begin{split} \mathbb B_0( {w^0})\le\mathbb B_0(v^0)&\le\liminf_{\lambda\to 0^+}\frac 1 {\lambda^{p_0}}\int_{B} Q_B(x,|\lambda\nabla v^\lambda(x)|)dx\le \liminf_{\lambda\to 0^+}\mathbb G_0^\lambda(v^\lambda)\\ &\le %\limsup_{\lambda\to 0^+}\mathbb G_0^\lambda(v^\lambda)\le \lim_{\lambda\to 0^+}\mathbb G_0^\lambda( {\omega})= \mathbb B_0(\omega)<\mathbb B_0(w^0)+\varepsilon, \end{split}$$ where in the first inequality we exploited that ${w^0}$ is the minimizer of $\mathbb B_0$, in the second inequality we used the fundamental inequality of Proposition [Proposition 6](#fund_ine_propt0){reference-type="ref" reference="fund_ine_propt0"}, in the third inequality we added the integral term on region $A$, in the fourth inequality we exploited that $v^\lambda$ is the minimizer of $\mathbb G_0^\lambda$, in the equality in the second line we have taken into account assumption (A4) and the dominated convergence Theorem, and in the last inequality we have used [\[lavrentiev\]](#lavrentiev){reference-type="eqref" reference="lavrentiev"}. By the arbitrariness of $\varepsilon>0$, this implies that $\mathbb B_0( {w^0})=\mathbb B_0(v^0)$ and, hence, $v^0= {w^0}$ due to uniqueness of the solution of minimization problem [\[H\]](#H){reference-type="eqref" reference="H"}. ◻ **Remark 8**. [\[chain_small_i\]](#chain_small_i){reference-type="eqref" reference="chain_small_i"} implies the equality in the fundamental inequality [\[fundamental_inequality3_0\]](#fundamental_inequality3_0){reference-type="eqref" reference="fundamental_inequality3_0"}. ## Second case: $\mathbf{p_0<q_0}$ {#Small_smaller} In the whole section we assume $p_0< q_0\le q$ and $p_0\leq p$; hence we have the continuous embeddings $W^{1,q}(\cdot)\hookrightarrow W^{1,q_0}(\cdot)\hookrightarrow W^{1,p_0}(\cdot)$ and $W^{1,p}(\cdot)\hookrightarrow W^{1,p_0}(\cdot)$ on any bounded set with Lipschitz boundary. In this case, problem [\[G_norm\]](#G_norm){reference-type="eqref" reference="G_norm"} particularizes as $$\label{G^tii} \min_{\substack{v\in W^{1,p_0}(\Omega)\\ v=f\ \text{on}\ \partial \Omega}}\mathbb G_0^\lambda(v),\quad \mathbb G_0^\lambda(v)=\frac 1 {\lambda^{p_0}}\left(\int_{B} Q_B(x,\lambda|\nabla v(x)|)dx+\int_{A} Q_A(x,\lambda|\nabla v(x)|)dx\right),$$ for any prescribed $f\in X_\diamond^{p}(\partial \Omega)$. In this case, limiting problem [\[Hii\]](#Hii){reference-type="eqref" reference="Hii"} plays a key role. Specifically, we have the following Theorem. **Theorem 9**. *Let $1<p_0<q_0<+\infty$ with $p_0\leq p$ and $q_0\leq q$, $f\in X^{p}_\diamond(\partial \Omega)$ and $v^\lambda$ be the solution of [\[G\^tii\]](#G^tii){reference-type="eqref" reference="G^tii"}. If (A1), (A2), (A3) and (A4) hold, then* - *$v^\lambda\rightharpoonup v_B^0$ in $W^{1,p_0}(B)$, as $\lambda\to 0^+$,* *where $v_B^0\in W^{1,p_0}(B)$ is the unique solution of [\[Hii\]](#Hii){reference-type="eqref" reference="Hii"}.* *Proof.* Let us consider a Sobolev extension $\tilde v_B^0$ in $W^{1,p_0}(\Omega)$ of $v_B^0 \in W^{1,p_0}(B)$ and a function in $W^{1,p}(\Omega)$ whose trace on $\partial\Omega$ is $f$, that we again denote by $f$. Hence we observe that $\tilde v_B^0-f\in W^{1,p_0}_0(\Omega)$. A density argument [@leoni17 Th. 
11.35] ensures that there exists $\{v_n\}_{n\in\mathbb{N}}\subseteq C_c^\infty(\Omega)$ such that $$v_n \to \tilde v_B^0-f\quad \text{in } W^{1,p_0}(\Omega), \text{ as } n\to\infty.$$ Consequently, we have $$\lim_{n\to+\infty} \int_\Omega |\nabla v_n - \nabla (\tilde v_B^0-f)|^{p_0}dx=0.$$ Therefore, we immediately deduce that $f+v_n\in W^{1,p}(\Omega)$ and $$\lim_{n\to+\infty} \int_B \beta_0(x)|\nabla (v_n+f) - \nabla v_B^0|^{p_0}dx=0.$$ Hence, this implies that $$\lim_{n\to+\infty} \mathbb B_0(v_n+f) =\mathbb B_0( v_B^0).$$ Therefore, for any $\varepsilon>0$, there exists $\omega\in W^{1,p}(\Omega)$ with $Tr(\omega)=f$ on $\partial\Omega$ such that: $$\label{lavrentiev2} \mathbb B_0(\omega)<\mathbb B_0(v_B^0)+\varepsilon.$$ Hence, we have $$\label{chainGtvtv0Omega2} \begin{split} \frac{ \underline{Q}}{E_0^{p_0}} \int_{B}|\nabla v^\lambda(x)|^{p_0}dx& \leq\frac 1 {\lambda^{p_0}}\int_{B} Q_B(x,\lambda|\nabla v^\lambda(x)|)dx\\ &\leq\mathbb G_0^\lambda (v^\lambda)\le \mathbb G_0^\lambda ( {\omega})=\frac 1 {\lambda^{p_0}}\int_{B} Q_B(x,|\lambda\nabla {\omega}(x)|)dx\\ &\le\max\left\{ \frac{ \overline{Q}}{E_0^{p_0}} \int_{B}|\nabla {\omega}(x)|^{p_0}dx,\frac{ \overline{Q}}{E_0^{p}}\lambda^{p-p_0} \int_{B}|\nabla {\omega}(x)|^{p}dx\right\}, \end{split}$$ where in the first inequality we used (A3.i)-left, in the second inequality we exploited the fact that $\mathbb G^\lambda_0$ also contains the integral term over $A$, in the third inequality we used the fact that $v^\lambda$ is the minimizer of $\mathbb G_0^\lambda$ and in the last inequality we used (A3.i)-right. Since $||\nabla v^\lambda||_{L^{p_0}(B)}^{p_0}$ is definitively upper bounded and $v^\lambda=f$ on $\partial\Omega$, then by the Rellich--Kondrachov's compactness Theorem [@leoni17 Th. 12.18], there exists a function $v^0\in W^{1,p_0}(B)$ such that $$\label{conv_vt_v_0} v^\lambda \rightharpoonup v^0\quad\text{in}\ W^{1,p_0}(B)\quad\text{as}\ \lambda\to 0^+.$$ The final step is to prove that $v^0$ is equal to $v_B^0$, the solution of the limiting problem [\[Hii\]](#Hii){reference-type="eqref" reference="Hii"}. Let $\delta>0$ be prescribed, and let $A_\delta$ be the $\delta$-Minkowski neighbourhood of $A$: $$A_\delta=\{x\in\Omega\ : \ \mathop{\mathrm{dist}}(x,A)<\delta\}.$$ For any $\tau$ such that $0<\tau<\delta$, we denote the mollified function $({ {\omega}})_\tau=\rho_\tau * { {\omega}}$, where $\rho_\tau$ is the canonical mollifier. Then, for any $0<\tau<\delta$, we define $${z}(x)= \begin{cases} {\omega} &\text{in}\ B\setminus A_{\delta},\\ \frac{\mathop{\mathrm{dist}}(x,A_\delta)}\delta {\omega}+\left(1-\frac{\mathop{\mathrm{dist}}(x,A_\delta)}\delta \right)( { \omega})_\tau & \text{in}\ A_{\delta}\setminus A,\\ ( { \omega})_\tau &\text{in}\ A. \end{cases}$$ Let us observe that the mollification of $\omega$ in $A$ (in the definition of $z$) is well-defined, because $A\subset\subset \Omega$. 
We have the following inequalities: $$\label{chainii} \begin{split} \mathbb B_0(v_B^0)&\le\mathbb B_0(v^0)\le \liminf_{\lambda\to 0^+}\mathbb G_0^\lambda(v^\lambda) \le\lim_{\lambda\to 0^+}\mathbb G_0^\lambda( {z})\\ &\leq\int_{B\setminus A_{\delta}}\beta_0(x)|\nabla {\omega}(x)|^{p_0}dx\\ &\qquad+\frac{ \overline{Q}}{E_0^p} \left(\int_{A_{\delta}\setminus A} |\nabla {\omega}(x)|^{p_0}+|\nabla ( {\omega})_\tau(x)|^{p_0}+\frac{| {\omega}(x)-( {\omega})_\tau(x)|^{p_0}}{\delta^{p_0}}dx\right)\\ &\qquad+\frac{ \overline{Q}}{E_0^{q_0}}\lim_{\lambda\to 0^+} \lambda^{q_0-p_0}\int_A|\nabla ( {\omega})_\tau(x)|^{q_0}dx\\ &\leq \mathbb B_0( {\omega})+I_{\delta,\tau} {<\mathbb B_0(v_B^0)+I_{\delta,\tau}+\varepsilon}, \end{split}$$ where in the first inequality we exploited the fact that $v^0_B$ is the minimizer of $\mathbb B_0$, in the second inequality we used the fundamental inequality stated in Proposition [Proposition 6](#fund_ine_propt0){reference-type="ref" reference="fund_ine_propt0"} (since assumption (A4) holds), in the third inequality we used the fact that $v^\lambda$ is the minimizer of $\mathbb G^\lambda_0$, in the fourth inequality we used the dominated convergence Theorem thanks to assumption (A4), in the fifth equality we exploited the fact that $\lim_{\lambda \to 0^+} \lambda^{q_0-p_0}=0$ for $q_0>p_0$, and in the sixth inequality we used [\[lavrentiev2\]](#lavrentiev2){reference-type="eqref" reference="lavrentiev2"}. The symbol $I_{\delta,\tau}$ refers to the terms in round brackets. Since [\[chainii\]](#chainii){reference-type="eqref" reference="chainii"} holds for any $0<\tau<\delta$, by first letting $\tau\to 0^+$ and then $\delta\to 0^+$, we have $$\lim_{\delta\to 0^+}\lim_{\tau\to 0^+}I_{\delta,\tau}=2\lim_{\delta\to 0^+}\int_{A_{\delta}\setminus A} |\nabla {\omega}(x)|^{p_0}dx= 0,$$ where in the first equality we exploited the uniform convergence of the Sobolev extension [@brezis1986analisi Prop. IV.21] and in the second equality we exploited the fact that the measure of $A_\delta\setminus A$ vanishes as $\delta \to 0^+$. Hence, by the arbitrariness of $\varepsilon>0$, the inequality [\[chainii\]](#chainii){reference-type="eqref" reference="chainii"} implies that $\mathbb B_0(v_B^0)= \mathbb B_0(v^0)$ and, therefore, $v_B^0=v^0$ thanks to the uniqueness of [\[Hii\]](#Hii){reference-type="eqref" reference="Hii"}. This result together with [\[conv_vt_v\_0\]](#conv_vt_v_0){reference-type="eqref" reference="conv_vt_v_0"} yields the conclusion. ◻ **Remark 10**. From [\[chainii\]](#chainii){reference-type="eqref" reference="chainii"} we find that the fundamental inequality [\[fundamental_inequality3_0\]](#fundamental_inequality3_0){reference-type="eqref" reference="fundamental_inequality3_0"} of Proposition [Proposition 6](#fund_ine_propt0){reference-type="ref" reference="fund_ine_propt0"} (i.e. the second inequality in [\[chainii\]](#chainii){reference-type="eqref" reference="chainii"}) holds as an equality: $$\mathbb B_0(v^0)= \liminf_{\lambda\to 0^+}\mathbb G_0^\lambda(v^\lambda).$$ # The pointwise convergence assumption in the limiting cases {#counter_sec} The main aim of this Section is to prove that assumption (A4) for small Dirichlet data is sharp.
Specifically, we provide an example where (A4) does not hold and the previous convergence results (Theorems [Theorem 7](#Thm_conv_lim){reference-type="ref" reference="Thm_conv_lim"} and [Theorem 9](#Thm_conv_limii){reference-type="ref" reference="Thm_conv_limii"}) fail. With a similar approach, not reported here for the sake of brevity, it is possible to prove that even assumption (A4') is sharp. We prove this result by providing a Dirichlet energy density for which the ratio $Q_B(x,E)/E^{p_0}$ does not admit a limit as $E\to 0^+$. As before, we first need to provide two suitable sequences (see Figure [18](#fig_9_counter){reference-type="ref" reference="fig_9_counter"} for the geometric interpretation). ![The continuous line represents the function describing the Dirichlet energy density used in the counterexample for small Dirichlet data.](Lim_8_countersmallPhi.png "fig:"){#fig_9_counter width="47.5%"} ![The continuous line represents the function describing the Dirichlet energy density used in the counterexample for small Dirichlet data.](Lim_8_countersmallQB.png "fig:"){#fig_9_counter width="47.5%"} **Lemma 11**. *Let $L>1$. Then there exist two sequences $$\{\lambda_n'\}_{n\in\mathbb{N}}\downarrow 0^+ \quad\text{and}\quad \{\lambda_n''\}_{n\in\mathbb{N}}\downarrow 0^+$$ such that $$\lambda_{n+1}''< L \lambda_{n+1}''<\lambda_n'< L \lambda_n'< \lambda_n''\quad \forall n\in\mathbb{N},$$ and a strictly convex function $$\Psi:[0,+\infty[\to[0,+\infty[$$ such that $$\Psi|_{[\lambda_n',L \lambda_n']}(E)=2E^2\quad\text{and}\quad \Psi|_{[\lambda_n'',L \lambda_n'']}(E)=3E^2.$$* *Proof.* Let us fix $\lambda_1''>0$. For each $n \in \mathbb{N}$ we set the auxiliary function $\Phi$ equal to $2 E^2$ in $(\lambda_n'', L \lambda_n'')$ and equal to $E^2$ in $(\lambda_n', L \lambda_n')$. In the interval $(L \lambda_n',\lambda_n'')$ the function $\Phi$ is equal to the tangent line to the function $2E^2$ at $\lambda_n''$; the point $L\lambda_n'$ is found at the intersection of this tangent line with the function $E^2$. In the interval $(L \lambda_{n+1}'',\lambda_n')$ the function $\Phi$ is a straight line, continuous with $E^2$ at $\lambda_n'$ and tangent to $2E^2$; the point $L\lambda_{n+1}''$ is found as the abscissa of the tangency point between this straight line and the function $2E^2$. This procedure is applied iteratively from $n=1$. Function $\Phi$ is convex and the sequences $\{\lambda_n'\}_{n \in \mathbb{N}}$ and $\{\lambda_n''\}_{n \in \mathbb{N}}$ are monotonically decreasing to zero. Therefore, the measure of the intervals where $\Phi$ is equal to $E^2$ or equal to $2E^2$ is nonvanishing. It is possible to prove that $\{\lambda'_n\}_{n\in\mathbb{N}}$ and $\{\lambda_n''\}_{n\in\mathbb{N}}$ are two geometric sequences. Indeed $$\lambda_n'=\frac{1}{c_2L}\lambda_n'',\ \lambda_{n+1}''=\frac{1}{c_1L} \lambda_n',\ \lambda_{n+1}'=\frac 1{C^2L^2}\lambda_n',\ \lambda_{n+1}''=\frac 1{C^2L^2}\lambda_n'',$$ where $c_1=2+\sqrt 2$, $c_2=1+\frac{\sqrt 2}{2}$ and $C$ is the geometric mean of $c_1$ and $c_2$, that is $C^2=c_1c_2=3+2\sqrt 2$. Finally, we set $\Psi(E)=\Phi(E)+E^2$. $\Psi$ is a strictly convex function. ◻ The construction of the counterexample in the planar case ($n=2$) for $p_0=2$ and $1<q_0<+\infty$ follows the steps of the previous Section line by line but with the aim of showing that $\lim_{\lambda\to 0^+}\mathbb G_0^\lambda(v^\lambda)\not\in\mathbb{R}$, where $\mathbb G_0^\lambda$ is defined in [\[G_norm\]](#G_norm){reference-type="eqref" reference="G_norm"}.
Specifically, the two sequences $\{\lambda_n'\}_{n\in\mathbb{N}}\downarrow 0$ and $\{\lambda_n''\}_{n\in\mathbb{N}}\downarrow 0$ satisfy $$\limsup_{n\to +\infty}\mathbb G_0^{\lambda_n'}(v^{\lambda_n'})\le m_1(r)<m_2(r)\le\liminf_{n\to +\infty}\mathbb G_0^{\lambda_n''}(v^{\lambda_n''}).$$ The Dirichlet energy density defined as $Q_B(x,E)=\Psi(E)$ satisfies all the assumptions except (A4). This energy density is the basis to build a counterexample proving that (A4) is sharp. Specifically, we consider a 2D case ($n=2$) and $p_0=2$ in the outer region. The growth exponent $q_0$ satisfies the condition $1<q_0<+\infty$. Let $r$ be greater than or equal to $10$ and let the outer region be the annulus $D_r\setminus\overline D_1$, where $D_r$ and $D_1$ are the disks of radii $r$ and $1$, respectively, centred at the origin. The inner region is, therefore, $D_1$. We focus on problem [\[G_norm\]](#G_norm){reference-type="eqref" reference="G_norm"}, where the Dirichlet energy density is defined as $$\begin{split} Q_B(x,E)&=\Psi(E)\quad\text{in}\ \left(D_r\setminus\overline D_1\right)\times [0,+\infty[,\qquad\\ Q_A(x,E)&=E^{q_0}\qquad\text{in}\ D_1\times [0,+\infty[. \end{split}$$ Let $\gamma$ be defined as $\gamma=7+\frac {12}{r^2}$. We denote $x=(x_1,x_2) \in \mathbb{R}^2$ and we consider the problem $$\label{G^tc1large} \min_{\substack{v\in W^{1,q_0}(D_r)\\ v=\gamma x_1\ \text{on}\ \partial D_r}}\mathbb G^\lambda(v),\quad \mathbb G^\lambda(v)=\frac 1 {\lambda^2}\left(\int_{ D_r\setminus D_1} \Psi(\lambda|\nabla v(x)|)dx+\int_{D_1} \lambda^{q_0}|\nabla v(x)|^{q_0} dx\right).$$ Here we prove that $\lim_{\lambda\to 0^+}\mathbb G^\lambda(v^\lambda)$ does not exist. Specifically, the two sequences $\{\lambda_n'\}_{n\in\mathbb{N}}\downarrow 0^+$ and $\{\lambda_n''\}_{n\in\mathbb{N}}\downarrow 0^+$ of Lemma [Lemma 11](#succ_L0){reference-type="ref" reference="succ_L0"} give $$\label{counter_resultlarge} \limsup_{n\to +\infty}\mathbb{G}^{\lambda_n'}(v^{\lambda_n'})\le\ell_1<\ell_2\le\liminf_{n\to +\infty}\mathbb{G}^{\lambda_n''}(v^{\lambda_n''}).$$ As usual, $v^\lambda$ is the solution of [\[G\^tc1large\]](#G^tc1large){reference-type="eqref" reference="G^tc1large"}. Let us consider the following problem $$\begin{aligned} \label{problem_down} &\min_{\substack{v\in H^{1}(D_r\setminus\overline D_1)\\ v=\gamma x_1\ \text{on}\ \partial D_r\\v=const.\ \text{on}\ \partial D_1}}\mathbb B_0 (v), \quad \mathbb B_0(v)=\int_{ D_r\setminus D_1} |\nabla v(x)|^2dx.\end{aligned}$$ The symmetry of the domain and the zero average of the boundary data imply that the constant appearing in [\[problem_down\]](#problem_down){reference-type="eqref" reference="problem_down"} on $\partial D_1$ is zero.
An easy computation reveals that $$v_{D_r}(x)=\frac{7r^2+12}{r^2-1}\left(1-\frac{1}{x_1^2+x_2^2}\right)x_1\quad\text{in}\ D_r\setminus\overline D_1$$ is the solution of [\[problem_down\]](#problem_down){reference-type="eqref" reference="problem_down"}, that $\Delta v_{D_r}=0$ in $D_r\setminus\overline D_1$, and that we have $$\begin{split} \frac{7r^2-12}{r^2-1}\left(1-\frac{1}{\rho^2}\right)\le|\nabla v_{D_r}(x)|\le\frac{7r^2+12}{r^2-1} \left(1+\frac 1{\rho^2}\right)\quad\text{on}\ \partial D_{\rho},\ 1<\rho\le r. \end{split}$$ Consequently, when $\rho\ge2$, we have $$\label{stima_nablaw} 1\le\frac 34 \frac{7r^2-12}{r^2-1}\le|\nabla v_{D_r}(x)|\le\frac 54 \frac{7r^2+12}{r^2-1}\leq 10\quad\text{in}\ D_r\setminus D_2.$$ Let $L$ be greater than $10$ and let $\lambda_n'\downarrow 0^+$ and $\lambda_n''\downarrow 0^+$ be the two sequences of Lemma [Lemma 11](#succ_L0){reference-type="ref" reference="succ_L0"}. It turns out that $$\label{stime_up_down} \lambda_n' \le \lambda_n'|\nabla v_{D_r}(x)| \le L \lambda'_n\quad\text{in}\ D_r\setminus D_2.$$ We have $$\label{inf_counter} \begin{split} &\limsup_{n\to +\infty}\mathbb G^{\lambda'_n}(v^{\lambda_n'})\leq\limsup_{n\to +\infty} \mathbb G^{\lambda'_n}(v_{D_r})\\ &\leq\limsup_{n\to +\infty}\frac 1{(\lambda'_n)^2} \int_{D_r\setminus D_2}\Psi(\lambda'_n|\nabla v_{D_r}(x)|)dx+\limsup_{n\to +\infty}\frac 1{(\lambda'_n)^2} \int_{D_2\setminus D_1}\Psi(\lambda'_n|\nabla v_{D_r}(x)|)dx\\ &\leq 2 \int_{D_r\setminus D_2}|\nabla v_{D_r}(x)|^2dx+3 \int_{D_2\setminus D_1}|\nabla v_{D_r}(x)|^2dx, \end{split}$$ where in the first line we used the minimality of $v^{\lambda'_n}$ for $\mathbb G^{\lambda'_n}$ and that $v_{D_r}$ (extended by zero in $D_1$) is an admissible function for problem [\[G\^tc1large\]](#G^tc1large){reference-type="eqref" reference="G^tc1large"}, in the second line we exploited the property that the gradient of $v_{D_r}$ in $D_1$ is vanishing, and in the third line we used [\[stime_up_down\]](#stime_up_down){reference-type="eqref" reference="stime_up_down"}. By setting $\ell_1$ equal to the right-hand side of [\[inf_counter\]](#inf_counter){reference-type="eqref" reference="inf_counter"}: $$\ell_1:=2 \int_{D_r\setminus D_2}|\nabla v_{D_r}(x)|^2dx+3 \int_{D_2\setminus D_1}|\nabla v_{D_r}(x)|^2dx,$$ we have the leftmost inequality in [\[counter_resultlarge\]](#counter_resultlarge){reference-type="eqref" reference="counter_resultlarge"}.
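As a quick sanity check on the closed-form function $v_{D_r}$ used above (this is only an illustrative verification, not part of the argument; the `sympy` library is assumed to be available), the following short symbolic computation confirms that $v_{D_r}$ is harmonic in the annulus, equals $\gamma x_1$ on $\partial D_r$ and vanishes on $\partial D_1$.

```python
import sympy as sp

# Symbolic sanity check for v_{D_r}: harmonicity and boundary values.
x1, x2, theta = sp.symbols("x1 x2 theta", real=True)
r = sp.symbols("r", positive=True)

v = (7 * r**2 + 12) / (r**2 - 1) * (1 - 1 / (x1**2 + x2**2)) * x1
gamma = 7 + 12 / r**2

# Laplacian vanishes away from the origin, so v is harmonic in the annulus.
print(sp.simplify(sp.diff(v, x1, 2) + sp.diff(v, x2, 2)))          # -> 0

# On the outer circle |x| = r the trace equals gamma * x1.
outer = v.subs({x1: r * sp.cos(theta), x2: r * sp.sin(theta)})
print(sp.simplify(outer - gamma * r * sp.cos(theta)))              # -> 0

# On the inner circle |x| = 1 the trace is the constant 0.
inner = v.subs({x1: sp.cos(theta), x2: sp.sin(theta)})
print(sp.simplify(inner))                                          # -> 0
```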
To obtain the rightmost inequality in [\[counter_resultlarge\]](#counter_resultlarge){reference-type="eqref" reference="counter_resultlarge"}, we consider the following problems $$\label{AuxF} \min_{\substack{v\in H^{1}(D_r\setminus\overline D_1)\\ v=\gamma x_1\ \text{on}\ \partial D_r}} \mathbb H^\lambda (v),\quad\mathbb H^\lambda(v)= \frac{1}{\lambda^2}\int_{D_r\setminus D_2}\Psi(\lambda|\nabla v(x)|)dx+2 \int_{D_2\setminus D_1}|\nabla v(x)|^2dx.$$ $$\label{problem_up} \min_{\substack{v\in H^{1}(D_r\setminus\overline D_1)\\ v=\gamma x_1\ \text{on}\ \partial D_r}}\mathbb D(v),\quad \mathbb D (v) =3\int_{D_r\setminus D_2}|\nabla v(x)|^2dx+2\int_{D_2\setminus D_1}|\nabla v(x)|^2dx.$$ The unique solution of [\[problem_up\]](#problem_up){reference-type="eqref" reference="problem_up"} is $$w_{D_r}(x)= \begin{cases} \left(7+\frac{12}{x_1^2+x_2^2}\right)x_1&\quad\text{in}\ D_r\setminus\overline D_2\\ 8\left(1+\frac{1}{x_1^2+x_2^2}\right)x_1 &\quad\text{in}\ D_2\setminus\overline D_1. \end{cases}$$ Analogously to [\[stima_nablaw\]](#stima_nablaw){reference-type="eqref" reference="stima_nablaw"}, it can be easily proved that $$1\le 4\le \left(7-\frac{12}{\rho^2}\right)\le|\nabla w_{D_r}(x)|\le \left(7+\frac {12}{\rho^2}\right)\le 10< L\quad\text{on}\ \partial D_{\rho},\ 2\le\rho\le r,$$ and hence, for $L>10$, we have $$\label{stime_up} \lambda_n'' \le \lambda_n''|\nabla w_{D_r}(x)| \le L \lambda''_n\quad\text{in}\ D_r\setminus D_2.$$ Therefore, we have $$\label{sup_counter} \mathbb{G}^{\lambda_n''}(v^{\lambda_n''})\geq\mathbb H^{\lambda''_n}(w_{D_r})=\mathbb D(w_{D_r}),$$ where the inequality comes from the definition of $\Psi$. The equality follows from the fact that $\mathbb H^{\lambda''_n}$ coincides with $\mathbb D$ by [\[stime_up\]](#stime_up){reference-type="eqref" reference="stime_up"} and the definition of $\Psi$. Note that $w_{D_r}$ is a local minimizer in $W^{1,\infty}(D_r)\cap W^{1,2}(D_r)$, since $\Psi$ does not depend on $x$ (see [@cianchi2010global] for details). Finally, $w_{D_r}$ is a global minimizer thanks to the uniqueness of [\[AuxF\]](#AuxF){reference-type="eqref" reference="AuxF"}. By setting $$\ell_2(r):= \mathbb D(w_{D_r}),$$ we have the rightmost inequality in [\[counter_resultlarge\]](#counter_resultlarge){reference-type="eqref" reference="counter_resultlarge"} by passing to the limit in [\[sup_counter\]](#sup_counter){reference-type="eqref" reference="sup_counter"}. At this stage, it only remains to be proved that $\ell_1(r)<\ell_2(r)$. To this purpose, we notice that: $$\begin{split} \ell_1(r)&= 2 \int_{D_r\setminus D_2}|\nabla v_{D_r}(x)|^2dx+3 \int_{D_2\setminus D_1}|\nabla v_{D_r}(x)|^2dx\\ \ell_2(r)&= 3\int_{D_r\setminus D_2}|\nabla w_{D_r}(x)|^2dx+2\int_{D_2\setminus D_1}|\nabla w_{D_r}(x)|^2dx.
\end{split}$$ Condition $\ell_1(r)<\ell_2(r)$ holds for large $r$, by observing that (i) $v_{D_r}$ and $w_{D_r}$ solve the same associated Euler-Lagrange equation on $D_2\setminus\overline D_1$, (ii) $\nabla v_{D_r}(x)$ and $\nabla w_{D_r}(x)$ are bounded functions on the bounded domain $D_2\setminus D_1$ by [\[stime_up_down\]](#stime_up_down){reference-type="eqref" reference="stime_up_down"} and [\[stime_up\]](#stime_up){reference-type="eqref" reference="stime_up"}, respectively, and (iii) it turns out that $$\begin{split} &\lim_{r\to+\infty}\frac{ \int_{ D_r\setminus D_2} |\nabla v_{D_r}(x)|^2dx}{\int_{ D_r\setminus D_2} |\nabla w_{D_r}(x)|^2dx}= 1,\\ &\lim_{r\to+\infty}\int_{D_r\setminus D_2}|\nabla v_{D_r}(x)|^2dx=\lim_{r\to+\infty}\int_{D_r\setminus D_2}|\nabla w_{D_r}(x)|^2dx=+\infty. \end{split}$$ # Forward and Inverse Problems: Applications and Numerical Analysis {#num_sec} ![Picture of the cross section for typical superconducting cables. The cable consists of several petals (36 petals for (a), and 18 petals for (b) and (c)). Each petal is made up of many thin SC wires (19 wires for (a), 37 wires for (b) and 61 for (c)). The picture is from [@instruments4020017 Fig. 4] and is courtesy of Instruments-MDPI.](Lim_9_SC.png){#fig_9_SC width="100%"} In this Section we propose some applications of the theoretical results of the previous Sections. The case study concerns superconducting wires, a major component in technological applications. After a brief presentation of superconducting materials, we show the impact of the theoretical results on both the *Forward Problem*, i.e. finding the scalar potential $u$ given the materials and the boundary data, and the *Inverse Problem*, i.e. retrieving the shape of defects in the cross section of the wire. Figure [19](#fig_9_SC){reference-type="ref" reference="fig_9_SC"} shows a typical cross section for a few superconducting cables. A type II High Temperature Superconducting (HTS) material [@seidel2015applied; @krabbes2006high], in its superconductive state, is well described by a constitutive relationship given by $$\label{E-J_power_law} E(J)=E_0\left({J}/{J_c}\right)^n.$$ This constitutive relationship, named *E-J Power Law*, was proposed by Rhyner in [@rhyner1993magnetic] to properly reflect the nonlinear relationship between the electric field and the current density in HTS materials. An HTS described by an ideal E-J Power Law behaves like a PEC for a ''small'' boundary potential, when ''immersed'' in a linear conductive material. Indeed, its electrical conductivity is given by $$\sigma = \frac {J_c}{E_0} \left( \frac{E}{E_0} \right) ^\frac{1-n}n,$$ and it turns out that $\sigma\to+\infty$ as $E \to 0^+$. Typical parameters for $J_c$ and $n$ are given in Table [1](#table_par){reference-type="ref" reference="table_par"}. They refer to commercial products from European Superconductors (EAS-EHTS) and American Superconductors (AMSC) [@lamas2011electrical]. The value of $E_0$ is almost independent of the material and equal to $0.1$ mV/m [@yao2019numerical].

  Type               $J_c$\[A/mm$^2$\]   $n$
  ------------------ ------------------- -----
  BSCCO EAS          85                  17
  BSCCO AMSC         135                 16
  YBCO AMSC          136                 28
  YBCO SP SF12100    290                 30
  YBCO SP SCS12050   210                 36

  : Typical parameters for $J_c$ and $n$.

The numerical examples have been developed with an in-house Finite Element Method (FEM) code based on [@grilli2005finite]. We consider a standard Bi-2212 round wire.
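Before detailing the geometry, the following minimal sketch (written in Python and independent of the in-house FEM code; the values of $J_c$, $n$ and $E_0$ are those quoted above) collects the constitutive quantities implied by the E-J Power Law, namely $J(E)$, the secant conductivity $\sigma(E)$ and the Dirichlet energy density $Q(E)$ obtained from [\[connsJQ\]](#connsJQ){reference-type="eqref" reference="connsJQ"}. It also records the resulting growth exponent $q_0=1+1/n$ of $Q$, which is smaller than the exponent $p_0=2$ of the linear matrix, so that the small-data limiting behaviour is the PEC one of Section [5.1](#Small_bigger){reference-type="ref" reference="Small_bigger"}.

```python
import numpy as np

# Constitutive quantities of the E-J Power Law (sketch; parameters as quoted above).
E0 = 1e-4            # V/m   (0.1 mV/m)
Jc = 8000e6          # A/m^2 (8000 A/mm^2)
n = 27

def J(E):            # current density magnitude, inverse of E(J) = E0*(J/Jc)**n
    return Jc * (E / E0) ** (1.0 / n)

def sigma(E):        # secant conductivity sigma(E) = J(E)/E
    return (Jc / E0) * (E / E0) ** ((1.0 - n) / n)

def Q(E):            # energy density Q(E) = integral of J from 0 to E
    return n / (n + 1.0) * Jc * E0 * (E / E0) ** ((n + 1.0) / n)

q0 = 1.0 + 1.0 / n   # growth exponent of Q near E = 0: q0 < p0 = 2 (linear matrix)

for E in (1e-7, 1e-9, 1e-11):
    print(f"E = {E:.0e} V/m : sigma = {sigma(E):.3e} S/m, Q = {Q(E):.3e} W/m^3")
# sigma(E) blows up as E -> 0+, i.e. the PEC-like behaviour exploited in the sequel.
```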
The geometry of the cable is shown in Figure [21](#fig_10_mesh){reference-type="ref" reference="fig_10_mesh"} (left), together with the finite element mesh used for the numerical computations (see Figure [21](#fig_10_mesh){reference-type="ref" reference="fig_10_mesh"} (right)). ![Left: geometry of the cross-section of the superconducting cable. The solid superconducting material (light grey) in the linear matrix (white). Right: the finite element mesh used in the numerical computations.](Lim_10_GEO.pdf "fig:"){#fig_10_mesh width="45%"}![Left: geometry of the cross-section of the superconducting cable. The solid superconducting material (light grey) in the linear matrix (white). Right: the finite element mesh used in the numerical computations.](Lim_10_mesh.pdf "fig:"){#fig_10_mesh width="45%"} The radius $R_e$ of this HTS cable is equal to 0.6 mm [@huang2013bi]. The geometry of the problem is simplified w.r.t. those in Figure [19](#fig_9_SC){reference-type="ref" reference="fig_9_SC"}. Specifically, each ''petal'' is assumed to be made up of an individual (solid) superconducting wire, rather than many thin superconducting wires. The electrical conductivity of the matrix surrounding the petals is $5.55 \times 10^7$ S/m [@li2015rrr]. This electrical conductivity is equal to $95.8\%$ IACS. As reported in [@Ba21], the superconducting material is characterized by a very high value of critical current $J_c$. In particular, we assume $$J_c=8000\,\text{A/mm$^2$}, \quad n=27.$$ ## Solution of the forward problem {#forward_problem} The replacement of the original problem with its limiting ''version'' has a major impact when numerically solving the *Forward Problem*, i.e. the computation of the scalar potential $u$ for prescribed materials and boundary data. Specifically, the solution of the original nonlinear problem is carried out by an iterative method. At each iteration the solution of a linear system of equations is required. This linear system of equations is characterized by a sparse matrix and, therefore, its solution is obtained by an iterative approach. However, the regions corresponding to the nonlinear materials may give rise to strongly ill-conditioned matrices, posing relevant challenges when solving the related linear system of equations. On the other hand, when it is possible to replace the original problem with its limiting version, the nonlinear material is replaced by a PEC and the overall problem is linear. In the following we compare the solution of the *Forward Problem* obtained by simulating the actual (nonlinear) superconducting material and the limiting problem where the superconducting material is replaced by the PEC. In all numerical calculations, the applied boundary potential is equal to $f(x,y)=V_0 x/R_e$. In this way the parameter $V_0$ represents the maximum value for the boundary potential. Figure [22](#fig_11_err){reference-type="ref" reference="fig_11_err"} shows the errors for ''small'' boundary potential when replacing the superconducting material with a PEC. The error metrics are equal to the relative error in the $L^2-$norm ($e_2$) and in the $L^\infty-$norm ($e_\infty$). As expected, the PEC approximation is valid for ''small'' $V_0$. The accuracy of the approximation is very high in its domain of validity. 
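Before turning to the error plots, the origin of this small-$V_0$ regime can be reproduced with a deliberately crude one-dimensional surrogate. The sketch below is only an illustration under simplifying assumptions (a matrix segment and a superconducting segment in series, with made-up segment lengths) and is not the FEM model used here: both segments carry the same current density $J$, so the applied voltage splits as $V_0=L_m J/\sigma_m+L_s E_0(J/J_c)^n$, and the share of the voltage dropped across the superconducting segment vanishes as $V_0\to 0$, i.e. the superconductor acts as a PEC.

```python
from scipy.optimize import brentq

sigma_m = 5.55e7              # S/m, conductivity of the linear matrix (value used in this section)
E0, Jc, n = 1e-4, 8000e6, 27  # E-J power law parameters of this section (J_c converted to A/m^2)
L_m, L_s = 1e-3, 1e-3         # illustrative segment lengths in metres (assumption of this sketch)

def voltage_balance(J, V0):
    """Residual of V0 = L_m*J/sigma_m + L_s*E0*(J/Jc)**n for the 1D series connection."""
    return L_m * J / sigma_m + L_s * E0 * (J / Jc) ** n - V0

for V0 in (1e1, 1e-1, 1e-3, 1e-6):
    # the current density solving the balance lies between 0 and sigma_m*V0/L_m
    J = brentq(voltage_balance, 0.0, sigma_m * V0 / L_m, args=(V0,))
    V_sc = L_s * E0 * (J / Jc) ** n  # voltage drop across the superconducting segment
    print(f"V0 = {V0:.0e} V -> voltage share of the superconductor: {V_sc / V0:.3e}")
```

In this toy model the superconducting segment carries a sizeable share of the voltage only for large $V_0$, consistent with the PEC approximation being accurate only for ''small'' boundary potentials.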
![Plot of the error for the PEC approximation (''small'' boundary potential).](Lim_11_err_CEP.eps){#fig_11_err width="70%"} Figure [24](#fig_12_E_CEP_J_IEP){reference-type="ref" reference="fig_12_E_CEP_J_IEP"} shows the spatial distribution of the electric field $\mathbf{E}$ (top) and of the electric scalar potential $u$ (bottom) for small ($V_0=10^{-6}$V) boundary potential, in the presence of the actual HTS cable. It is evident from the plot that $\mathbf{E}$ is perpendicular to the HTS regions for small $V_0$. This is in line with the concept that for small $V_0$ the HTS regions behave like a PEC, where $\mathbf{E}$ is orthogonal to their boundaries. ![Plot of the electric field $\mathbf{E}$ (top) and of the electric scalar potential $u$ (bottom) for a ''small'' boundary potential ($V_0=10^{-6}$ V).](Lim_12_E_CEP.eps "fig:"){#fig_12_E_CEP_J_IEP width="70%"} ![Plot of the electric field $\mathbf{E}$ (top) and of the electric scalar potential $u$ (bottom) for a ''small'' boundary potential ($V_0=10^{-6}$ V).](Lim_12_u_CEP.eps "fig:"){#fig_12_E_CEP_J_IEP width="70%"} ## Imaging via linear methods In this Section we provide numerical examples related to the solution of the inverse problem. We treat the case of $p_0=2$ since the limiting problem, being linear, provides a powerful ''bridge'' toward well assessed and mature methods and algorithms developed for linear problems. From a more general perspective ($p_0 \ne 2$), the limiting case approach is very relevant because it brings to the light that, when facing an inverse problem with nonlinear materials, the canonical problem consists in solving an inverse problem for the weighted $p_0-$Laplace equation, regardless of the specific nature of the nonlinearity. The configuration is that of section [7.1](#forward_problem){reference-type="ref" reference="forward_problem"}, and it refers to a superconducting cable. Specifically, the inverse problem consists in retrieving the shape of defects in the Mg-Al alloy matrix of a superconducting wire, starting from boundary data. This is a very challenging task because of the nonlinear behaviour of the superconductive petals, which results in a nonlinear relationship between the applied boundary voltages and the measured boundary currents. From a general perspective, the inverse problem can be tackled as follows: (i) the data are collected on the actual (nonlinear) configuration under small boundary data operations and (ii) the data are processed assuming that the governing equations are those for the limiting problem that, in this case, is linear. As mentioned above, this means that the inverse problem of retrieving the position and shape of anomalies in the Mg-Al matrix can be addressed by means of the imaging methods developed for linear materials. ### The imaging algorithm The imaging algorithm attempts to estimate a tomographic reconstruction of the shape and position of the anomalies in the Mg-Al matrix, starting from boundary data. The boundary data we adopt is the Dirichlet-to-Neumann (DtN) operator, which maps a prescribed boundary potential into the corresponding normal component of the electrical current density entering through $\partial \Omega$. In other words, the DtN operator gives the voltage-to-current relationship on the boundary $\partial \Omega$. 
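To make the notion of a discrete DtN operator concrete, the following sketch computes the boundary conductance matrix of a tiny resistor network by eliminating its interior nodes from the weighted graph Laplacian via a Schur complement; the resulting symmetric matrix maps prescribed boundary potentials to the currents entering through the boundary nodes. The network is a made-up stand-in for illustration and is unrelated to the FEM discretization used below.

```python
import numpy as np

# weighted graph Laplacian of a small resistor network:
# nodes 0-2 are boundary nodes, nodes 3-4 are interior nodes
edges = [(0, 3, 1.0), (1, 3, 2.0), (2, 4, 1.5), (3, 4, 0.5), (1, 4, 1.0)]
L = np.zeros((5, 5))
for i, j, g in edges:  # g is the conductance of edge (i, j)
    L[i, i] += g; L[j, j] += g
    L[i, j] -= g; L[j, i] -= g

bd, inner = [0, 1, 2], [3, 4]  # boundary and interior node indices
L_bb, L_bi = L[np.ix_(bd, bd)], L[np.ix_(bd, inner)]
L_ib, L_ii = L[np.ix_(inner, bd)], L[np.ix_(inner, inner)]

# discrete Dirichlet-to-Neumann map: Schur complement eliminating the interior nodes
G = L_bb - L_bi @ np.linalg.solve(L_ii, L_ib)

u_b = np.array([1.0, 0.0, -1.0])            # prescribed boundary potentials
print("boundary currents:", G @ u_b)        # currents entering through the boundary nodes
print("total current (should be ~0):", float((G @ u_b).sum()))
print("G symmetric:", bool(np.allclose(G, G.T)))
```

The same voltage-to-current structure is what the matrix $\mathbf{G}$ introduced next captures for the continuous problem, restricted to a finite set of boundary electrodes.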
In a practical setting we can measure a noisy version of a discrete approximation of the DtN operator, that is a noisy version of the DtN operator restricted to a finite dimensional linear subspace of applied voltages, as in the case of a finite set of boundary electrodes. Hereafter, this discrete approximation is referred as $\mathbf{G}$, the conductivity matrix. In order to solve the inverse problem we adopt an imaging method based on the Monotonicity Principle [@Tamburrino_2002], here briefly summarized for the sake of completeness. Both the DtN operator and its discrete counterpart, the conductances matrix $\mathbf{G}$, satisfy a Monotonicity Principle (see [@gisser1990electric; @Tamburrino_2002; @corboesposito2021monotonicity]): $$\label{eqn:mono1} \sigma_1\leq\sigma_2 \Longrightarrow \mathbf{G}_1 \preceq \mathbf{G}_2,$$ where $\sigma_1 \le \sigma_2$ is meant a.e. in $\Omega$ and $\mathbf{G}_1 \preceq \mathbf{G}_2$ means that $\mathbf{G}_1 - \mathbf{G}_2$ is a negative semi-definite matrix. The Monotonocity Principle states a monotonic relationship between the pointwise values of the electrical conductivities and the measured boundary operator. The targeted problem, that is the imaging of anomalies in the Mg-Al phase of a superconducting cable, is a two phase problem. One phase corresponds to the healthy material (the Mg-Al phase plus the PEC which replaces the superconductor, at small boundary data) while the other phase corresponds to the damaged region, having an electrical conductivity $\sigma_I$ smaller than the electrical conductivity $\sigma_{BG}$ of the Mg-Al phase. As a consequence, if $V_1$ and $V_2$ are two possible anomalies well-contained in the Mg-Al phase, and $V_1 \subseteq V_2$, it turns out that $\sigma_1 \ge \sigma_2$ and, therefore $$\label{eqn:mono2} V_1 \subseteq V_2 \Longrightarrow \mathbf{G}_{V_1} \succeq \mathbf{G}_{V_2}$$ Equation [\[eqn:mono2\]](#eqn:mono2){reference-type="eqref" reference="eqn:mono2"} can be translated in terms of an imaging method, as originally proposed in [@Tamburrino_2002]. Specifically, [\[eqn:mono2\]](#eqn:mono2){reference-type="eqref" reference="eqn:mono2"} is equivalent to $\mathbf{G}_{V_1} \nsucceq \mathbf{G}_{V_2} \Longrightarrow V_1 \nsubseteq V_2$, which gives for $V_1=T$ and $V_2=V$ $$\label{eqn:mono3} \mathbf{G}_{T} \nsucceq \mathbf{G}_{V} \Longrightarrow T \nsubseteq V,$$ where $V$ is the domain occupied by the unknown anomaly and $T$ a known domain occupied by test domain. Equation [\[eqn:mono3\]](#eqn:mono3){reference-type="eqref" reference="eqn:mono3"} makes it possible to infer whether the test domain $T$ is not contained in the unknown region occupied by the anomaly $V$. By repeating this type of test for several prescribed test anomalies occupying regions $T_1, \ T_2, \ \ldots$, we get an estimate ${V}_U$ of the unknown anomaly $V$ as: $$\begin{aligned} V_U=\bigcup_{k\in\Theta} T_k, \qquad \Theta=\{T_k\,|\,\mathbf{G}_{V}-\mathbf{G}_{T_k}\succeq 0\}.\end{aligned}$$ In the presence of noise, as is the case in any real-world problem, we adopt a slightly different strategy (see [@harrach2015resolution; @Tamburrino2016284]). Specifically, let $\tilde{\mathbf{G}}_V=\mathbf{G}_V+\mathbf{N}$ be the noisy version of the data, where $\delta$ is an upper bound to the 2-norm of the noise matrix $\mathbf{N}$, i.e. 
$\left\Vert\mathbf{N}\right\Vert_2\leq\delta$, then the reconstruction is obtained as $$\label{eq:rule} \tilde{V}_U =\bigcup_{k\in\Theta} T_k, \qquad \Theta=\{T_k\,|\,\tilde{\mathbf{G}}_{V}-\mathbf{G}_{T_k}+\delta\,\mathbf{I}\succeq 0\}$$ where $\mathbf{I}$ is the identity matrix. Moreover, it has been shown that, under proper assumptions $V \subseteq \tilde{V}_U$ even in the presence of noisy data [@Tamburrino2016284]. With a similar imaging method it is possible to reconstruct a lower bound $\tilde{V}_L$ to $V$. Existence of upper ($\tilde{V}_U$) and lower ($\tilde{V}_L$) bounds to the unknown anomaly is an unique feature of this imaging method. Finally, the matrices related to test domains $T_k$ are evaluated numerically by replacing the superconducting petals with perfect electric conductors. ### Numerical Results ![Reconstructions obtained by means of monotonicity based algorithm for linear materials in the ''small'' boundary data regime.](Lim_13_Ric1.pdf "fig:"){#fig_13_CEP width="40%"} ![Reconstructions obtained by means of monotonicity based algorithm for linear materials in the ''small'' boundary data regime.](Lim_13_Ric2.pdf "fig:"){#fig_13_CEP width="40%"}\ ![Reconstructions obtained by means of monotonicity based algorithm for linear materials in the ''small'' boundary data regime.](Lim_13_Ric3.pdf "fig:"){#fig_13_CEP width="40%"} ![Reconstructions obtained by means of monotonicity based algorithm for linear materials in the ''small'' boundary data regime.](Lim_13_Ric4.pdf "fig:"){#fig_13_CEP width="40%"}\ ![Reconstructions obtained by means of monotonicity based algorithm for linear materials in the ''small'' boundary data regime.](Lim_13_Ric5.pdf "fig:"){#fig_13_CEP width="40%"} ![Reconstructions obtained by means of monotonicity based algorithm for linear materials in the ''small'' boundary data regime.](Lim_13_Ric6.pdf "fig:"){#fig_13_CEP width="40%"} The reference geometry is that of the superconducting cable in Figure [21](#fig_10_mesh){reference-type="ref" reference="fig_10_mesh"} (left). We apply 16 electrodes at the boundary. The electrodes are equal and uniformly spaced. We assume the following noise model $$\mathbf{N}=\eta \Delta G_{max} \mathbf{A}$$ where $\mathbf{A}$ is a random matrix belonging to the Gaussian Orthogonal Ensamble (GOE) (see [@mehta2004random] for details), $$\Delta G_{max}=\max_{i,j}\left\vert(G_V)_{ij}-(G_{BG})_{ij}\right\vert,$$ $\mathbf{G}_{BG}$ is the conductance matrix associated to the healthy superconducting wire, i.e. without defects, and $\eta$ is a parameter representing the (relative) noise level. Hereafter we assume the noise level $\eta$ equal to $0.01$. Figure [30](#fig_13_CEP){reference-type="ref" reference="fig_13_CEP"} shows the reconstructions obtained with this method. The reconstructed defects are represented in black whereas the red line is the boundary of the real defects. The boundary of the petals is reported in black, like the electrodes but with a thicker line. These reconstructions clearly demonstrate the effectiveness of the approach. The conductance matrix was evaluated by applying boundary voltages of $1$mV. # Conclusions {#Con_sec} This study is a contribution to Inverse Problems in the presence of nonlinear materials. This subject is still at an early stage of development, as stated in [@lam2020consistency]. We focus on Electrical Resistance Tomography where the aim is to retrieve the electrical conductivity/resistivity of a material by means of stationary (DC) currents. 
Our main results prove that the original nonlinear problem can be replaced by a suitable weighted $p_0-$Laplace problem, when the prescribed Dirichlet data is ''small''. Specifically, we prove that in the presence of two different materials, where at least one is nonlinear, the scalar potential in the outer region in contact with the boundary where the Dirichlet data is prescribed can be computed by (i) replacing the interior region with either a Perfect Electric Conductor ($q_0<p_0$) or a Perfect Electric Insulator ($q_0>p_0$) and (ii) replacing the original problem (material) in the outer region with a weighted $p_0-$Laplace problem. The presence of the ''fingerprint'' of a weighted $p_0-$Laplace problem can be recognized to some extent in an arbitrary nonlinear problem. From the perspective of tomography, this is a significant result because it highlights the central role played by the $p_0-$Laplacian in inverse problems with nonlinear materials. For $p_0=2$, i.e. when the material in the outer region is linear, these results constitute a powerful bridge allowing all theoretical results, imaging methods and algorithms developed for linear materials to be brought into the arena of problems with nonlinear materials. The fundamental tool to prove the convergence results is the inequality appearing in Proposition [Proposition 6](#fund_ine_propt0){reference-type="ref" reference="fund_ine_propt0"}, which expresses the asymptotic behaviour of the Dirichlet energy for the outer region in terms of a factorized $p_0-$Laplacian form. Moreover, we have proved that our assumptions are sharp, by means of suitable counterexamples. Finally, we provide a numerical example, referring to a superconducting cable, as an application of the theoretical results proved in this paper. # Acknowledgements {#acknowledgements .unnumbered} This work has been partially supported by the MiUR-Progetto Dipartimenti di eccellenza 2018-2022 grant ''Sistemi distribuiti intelligenti'' of Dipartimento di Ingegneria Elettrica e dell'Informazione ''M. Scarano'', by the MiSE-FSC 2014-2020 grant ''SUMMa: Smart Urban Mobility Management'' and by GNAMPA of INdAM. # Data availability Statements {#data-availability-statements .unnumbered} The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request. [^1]: \ $^1$Dipartimento di Ingegneria Elettrica e dell'Informazione ''M. Scarano'', Università degli Studi di Cassino e del Lazio Meridionale, Via G. Di Biasio n. 43, 03043 Cassino (FR), Italy.\ $^2$Dipartimento di Matematica e Applicazioni ''R. Caccioppoli'', Università degli Studi di Napoli Federico II, Via Cinthia n. 26, Complesso Universitario Monte Sant'Angelo, 81026 Napoli, Italy.\ $^3$Departamento de Matemática, Facultad de Ciencias Físicas y Matemáticas, Universidad de Concepción, Avenida Esteban Iturra s/n, Bairro Universitario, Casilla 160 C, Concepción, Chile.\ $^4$Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI-48824, USA.\ Email: corbo\@unicas.it, l.faella\@unicas.it, gianpaolo.piscitelli\@unina.it *(corresponding author)*, vincenzo.mottola\@unicas.it, rprakash\@udec.cl, antonello.tamburrino\@unicas.it. [^2]: $^1$In magnetostatics, it is possible to introduce a magnetic scalar potential for treating simply connected and source free regions.
arxiv_math
{ "id": "2309.15865", "title": "The $p_0$-Laplace \"Signature\" for Quasilinear Inverse Problems", "authors": "A. Corbo Esposito, L. Faella, V. Mottola, G. Piscitelli, R. Prakash,\n A. Tamburrino", "categories": "math.AP", "license": "http://creativecommons.org/publicdomain/zero/1.0/" }
--- abstract: | It is well known that any port-Hamiltonian (pH) system is passive, and conversely, any minimal and stable passive system has a pH representation. Nevertheless, this equivalence is only concerned with the input-output mapping but not with the Hamiltonian itself. Thus, we propose to view a pH system either as an enlarged dynamical system with the Hamiltonian as additional output or as two dynamical systems with the input-output and the Hamiltonian dynamic. Our first main result is a structure-preserving Kalman-like decomposition of the enlarged pH system that separates the controllable and zero-state observable parts. Moreover, for further approximations in the context of structure-preserving model-order reduction (MOR), we propose to search for a Hamiltonian in the reduced pH system that minimizes the $\mathcal{H}_2$-distance to the full-order Hamiltonian without altering the input-output dynamic, thus discussing a particular aspect of the corresponding multi-objective minimization problem corresponding to $\mathcal{H}_2$-optimal MOR for pH systems. We show that this optimization problem is uniquely solvable, can be recast as a standard semidefinite program, and present two numerical approaches for solving it. The results are illustrated with three academic examples. address: - ${}^{\star}$ Department of Mathematics, University of Stuttgart, Pfaffenwaldring 5a, 70569 Stuttgart, Germany - ${}^{\dagger}$ Stuttgart Center for Simulation Science (SC SimTech), University of Stuttgart, Universitätsstr. 32, 70569 Stuttgart, Germany - ${}^{\ddagger}$ Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, United States author: - Tobias Holicki${}^\star$ - Jonas Nicodemus${}^\dagger$ - Paul Schwerdtner$^{\ddagger}$ - Benjamin Unger${}^\dagger$ bibliography: - journalabbr.bib - literature.bib title: Energy matching in reduced passive and port-Hamiltonian systems --- [Keywords:]{.smallcaps} Port-Hamiltonian systems, structure-preserving model-order reduction, energy matching, quadratic output system, $\mathcal{H}_2$-optimal, semidefinite program [AMS subject classification:]{.smallcaps} 37J06,37M99,65P10,93A30,93C05,90C22 # Introduction {#sec:intro} The port-Hamiltonian ([pH]{.sans-serif}) modelling paradigm offers an intuitive energy-based formulation of dynamical systems across a wide variety of physical domains such as electrical systems [@EstT00; @GunF99a; @GunF99b], fluid-flow problems [@DomHLMMT21], or mechanical multi-body systems [@BeaMXZ18 Ex. 12]. By design, [pH]{.sans-serif} systems are automatically stable and passive and can be coupled across different scales and physical domains, which makes them valuable building blocks for large network models [@MehU22]. Since first-principle full-order models ([FOMs]{.sans-serif}) of complex systems or large system networks often have a high state-space dimension, model order reduction ([MOR]{.sans-serif}) is necessary in many cases to enable efficient numerical simulations or even real-time model-based control by computing a reduced-order model ([ROM]{.sans-serif}) that is used instead of the [FOM]{.sans-serif} for simulations and control. In this article, we offer a new perspective on the [MOR]{.sans-serif} problem for linear time-invariant ([LTI]{.sans-serif}) [pH]{.sans-serif} systems. 
In particular, we argue that [pH]{.sans-serif} systems should not be treated merely as a special case of standard [LTI]{.sans-serif}(state-space) systems during [MOR]{.sans-serif} but instead propose to view [pH]{.sans-serif} systems as two dynamical (respectively an extended) dynamical systems consisting of the classical input-output mapping, which is typically approximated during [MOR]{.sans-serif}, and, additionally, a dynamical system representing the evolution of the *Hamiltonian*, which represents the energy that is stored in a [pH]{.sans-serif} system. Besides the fact that the approximation of the Hamiltonian can be of interest in energy-related applications, it is particularly important for energy-aware control synthesis; see e.g. [@CalDRSK22; @RasBZSJF22]. Consequently, we study the approximation quality of different structure-preserving [MOR]{.sans-serif} algorithms in two different error measures that reflect both the input-output and the Hamiltonian dynamic. This goes beyond *structure-preserving* [MOR]{.sans-serif} for [pH]{.sans-serif} systems, where a [ROM]{.sans-serif} with [pH]{.sans-serif} structure is computed rather than a general [LTI]{.sans-serif}[ROM]{.sans-serif}. For our investigation, we first provide an analysis of the extended [pH]{.sans-serif} system and derive a structure-preserving Kalman-like decomposition in . Finally, we provide a new post-processing algorithm in --- to be performed after any structure-preserving [MOR]{.sans-serif} method --- that minimizes the approximation error of the Hamiltonian dynamic without changing the system's input output dynamic. We consider [LTI]{.sans-serif}[pH]{.sans-serif} systems defined as follows. **Definition 1** (Port-Hamiltonian system [@SchJ14]). *A linear time-invariant dynamical system of the form* *[\[eq:pH\]]{#eq:pH label="eq:pH"} $$\label{eq:pH:sys} \Sigma_{\mathsf{pH}}\quad \left\{\quad\begin{aligned} \dot{x}(t) &= (J-R)Qx(t) + (G-P)u(t),\\ y(t) &= {(G+P)}^\ensuremath\mathsf{T}Qx(t) + (S-N)u(t), \end{aligned}\right.$$ with matrices $J,R,Q\in\ensuremath\mathbb{R}^{n\times n}$, $G,P\in\ensuremath\mathbb{R}^{n\times m}$, $S,N\in\ensuremath\mathbb{R}^{m\times m}$, together with a *Hamiltonian* function $$\label{eq:pH:Hamiltonian} \mathcal{H}\colon \ensuremath\mathbb{R}^{n}\to\ensuremath\mathbb{R},\qquad x\mapsto \tfrac{1}{2} x^\ensuremath\mathsf{T}Qx,$$* *is called a *port-Hamiltonian* system, if* 1. *the structure matrix $\Gamma\vcentcolon= \begin{bsmallmatrix} J & G\\ -\smash{G^\ensuremath\mathsf{T}} & N \end{bsmallmatrix}$ is skew symmetric,* 2. *the dissipation matrix $W\vcentcolon= \begin{bsmallmatrix} R & P\\ P^\ensuremath\mathsf{T}& S \end{bsmallmatrix}$ is symmetric positive semi-definite, and* 3. 
*the Hessian of the Hamiltonian $Q$ is symmetric positive semi-definite.* *The variables $x$, $u$, and $y$ are referred to as the *state*, *input*, and *output*, respectively.* For such systems, structure-preserving [MOR]{.sans-serif} computes [pH]{.sans-serif}[ROMs]{.sans-serif} $$\label{eq:pH:rom} \tilde{\Sigma}_{\mathsf{pH}}\quad \left\{\quad\begin{aligned} \dot{\tilde{x}}(t) &= (\tilde{J}-\tilde{R})\tilde{Q}\tilde{x}(t) + (\tilde{G}-\tilde{P})u(t),\\ \tilde{y}(t) &= {(\tilde{G}+\tilde{P})}^\ensuremath\mathsf{T}\tilde{Q} \tilde{x}(t) + (\tilde{S}-\tilde{N})u(t), \end{aligned}\right.$$ with matrices $\tilde{J},\tilde{R},\tilde{Q}\in\ensuremath\mathbb{R}^{r\times r}$, $\tilde{G},\tilde{P}\in\ensuremath\mathbb{R}^{r\times m}$, $\tilde{S},\tilde{N}\in\ensuremath\mathbb{R}^{m\times m}$, that satisfy the same constraints as in but with $r\ll n$. Typically, [MOR]{.sans-serif}(and also structure-preserving [MOR]{.sans-serif}) aims to compute [ROMs]{.sans-serif} such that $y- \tilde{y}$ is small for all admissible inputs $u$ in an appropriate norm (which results in a good approximation of the input-output mapping). The approximation of the Hamiltonian, i.e., $\mathcal{H}- \tilde{\mathcal{H}}$ in some appropriate norm is typically not considered; here $\tilde{\mathcal{H}}$ denotes the Hamiltonian of the reduced system [\[eq:pH:rom\]](#eq:pH:rom){reference-type="eqref" reference="eq:pH:rom"}, given by $$\begin{aligned} \tilde{\mathcal{H}}\colon \ensuremath\mathbb{R}^{r} \to \ensuremath\mathbb{R}, \qquad \tilde{x}\mapsto \frac{1}{2} \tilde{x}^\ensuremath\mathsf{T}\tilde{Q} \tilde{x}.\end{aligned}$$ Exploiting the *Kalman-Yakubovitch-Popov* ([KYP]{.sans-serif}) inequality, see the forthcoming , we propose a novel post-processing step called *energy matching* for the [ROM]{.sans-serif} such that $\mathcal{H}- \tilde{\mathcal{H}}$ is minimized in some appropriate norm the difference. ## Literature review and state-of-the-art {#literature-review-and-state-of-the-art .unnumbered} [MOR]{.sans-serif} for standard [LTI]{.sans-serif} systems of the form $$\label{eq:LTI} \Sigma\quad \left\{\quad\begin{aligned} \dot{x}(t) &= Ax(t) + Bu(t),\\ y(t) &= Cx(t) + Du(t), \end{aligned}\right.$$ where $A \in \ensuremath\mathbb{R}^{n \times n}, B \in \ensuremath\mathbb{R}^{n \times m}, C \in \ensuremath\mathbb{R}^{p \times n}$, and $D \in \ensuremath\mathbb{R}^{p \times m}$, is well understood. There exist several well-established algorithms that compute [ROMs]{.sans-serif} of the form $$\label{eq:LTI:rom} \tilde{\Sigma}\quad \left\{\quad\begin{aligned} \dot{\tilde{x}}(t) &= \tilde{A}\tilde{x}(t) + \tilde{B}u(t),\\ \tilde{y}(t) &= \tilde{C}\tilde{x}(t) + \tilde{D}u(t), \end{aligned}\right.$$ with matrices $\tilde{A}\in\ensuremath\mathbb{R}^{r\times r}$, $\tilde{B}\in\ensuremath\mathbb{R}^{r\times m}$, $\tilde{C}\in\ensuremath\mathbb{R}^{p\times r}$, and $\tilde{D}\in\ensuremath\mathbb{R}^{p\times m}$ that approximate the [FOM]{.sans-serif} with high fidelity. One standard input-output error measure is the $\mathcal{H}_2$ error $$\| H - \tilde{H} \|_{\mathcal{H}_2} \vcentcolon= \sqrt{\frac{1}{2\pi} \int_{-\infty}^{\infty} \left\|H(\ensuremath\mathrm{i}\omega) - \tilde{H}(\ensuremath\mathrm{i}\omega)\right\|^2_{\mathrm{F}} \,\mathrm{d}\omega},$$ that measures the deviation of the [ROM]{.sans-serif} transfer function $\tilde{H}$ from the [FOM]{.sans-serif} transfer function $H$. 
These transfer functions are defined as $$H(s) \vcentcolon= C{(s I_{n} - A)}^{-1} B + D \qquad\text{and}\qquad \tilde{H}(s) \vcentcolon= \tilde{C}{(s I_{r} - \tilde{A})}^{-1} \tilde{B} + \tilde{D},$$ and, we have that $\|y-\tilde{y}\|_{L_\infty} \le \|H-\tilde{H}\|_{\mathcal{H}_2} \|u\|_{L_2}$, which ensures that a small $\mathcal{H}_2$-error leads to a good approximation of the input-output map in the $L_\infty$-norm. A comprehensive review of the classical [MOR]{.sans-serif} methods is beyond the scope of this paper, and we refer to [@Ant05; @AntBG20; @BenCOW17] for an overview on this topic. A popular [MOR]{.sans-serif} method that often leads to locally $\mathcal{H}_2$-optimal [ROMs]{.sans-serif} is the *iterative rational Krylov algorithm* ([IRKA]{.sans-serif}), which is introduced in [@BeaG09]. [IRKA]{.sans-serif} computes [ROMs]{.sans-serif} based on a subspace projection onto the $r$-dimensional subspaces $\mathop{\mathrm{im}}(V)$ and $\mathop{\mathrm{im}}(W)$ of $\ensuremath\mathbb{R}^{n}$ encoded via the matrices $V,W \in \ensuremath\mathbb{R}^{n\times r}$, i.e., the [ROM]{.sans-serif} matrices are defined as $\tilde{A} = W^\ensuremath\mathsf{T}A V$, $\tilde{B} = W^\ensuremath\mathsf{T}B$, $\tilde{C} = CV$, and $\tilde{D} = D$. The projection spaces $\mathop{\mathrm{im}}(V)$ and $\mathop{\mathrm{im}}(W)$ are updated according to a fixed point iteration until local $\mathcal{H}_2$-optimality is attained. While popular [MOR]{.sans-serif} approaches such as [IRKA]{.sans-serif} often lead to highly accurate [ROMs]{.sans-serif}, they do not generically preserve the [pH]{.sans-serif} structure. For this, [pH]{.sans-serif} structure-preserving variants of classical approaches are introduced, e.g., in [@PolS12; @GugPBV09; @GugPBV12]. Here, the projection is adapted such that the symmetries and definiteness properties of a [pH]{.sans-serif} system are preserved during the projection. However, this often leads to a drastically reduced accuracy, as emphasized in [@SchV20; @BreU22]. Another approach towards structure-preserving [MOR]{.sans-serif} is based on preserving the passivity of a [pH]{.sans-serif} system during [MOR]{.sans-serif} and then recovering a [pH]{.sans-serif} representation from a general [ROM]{.sans-serif} as in [\[eq:LTI:rom\]](#eq:LTI:rom){reference-type="eqref" reference="eq:LTI:rom"}, which is possible for minimal passive [ROMs]{.sans-serif}(see [@CheGH22; @ReiRV15] for a thorough investigation). Passivity preserving [MOR]{.sans-serif} is pursued in [@DesP84; @ReiS10], in which *positive-real balanced truncation* ([PRBT]{.sans-serif}) is introduced and extended to descriptor systems, respectively. In [@BreU22], a passivity preserving [MOR]{.sans-serif} method is designed based on a [MOR]{.sans-serif} of the spectral factors of a given passive [FOM]{.sans-serif}. Other [pH]{.sans-serif}[MOR]{.sans-serif} algorithms include the methods in [@BreMS21; @BorSF21; @PenM16; @AfkH17; @AfkH19; @BucBH19; @EggKLMM18] and optimization-based methods in [@MosL20; @SchV20; @SatS18]. [MOR]{.sans-serif} methods for linear systems are evaluated based on their approximation of the input-output mapping, which can be assessed using well-established error measures (such as the $\mathcal{H}_2$ norm) based on the transfer function distances. The evaluation of the approximation quality of the Hamiltonian requires a more advanced error analysis that has only recently been established. 
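As a concrete (and deliberately naive) illustration of such an input-output error assessment, the sketch below builds a small dissipative LTI system, reduces it by a one-sided Galerkin projection onto a random subspace --- in contrast to the carefully chosen projection spaces of [IRKA]{.sans-serif} --- and evaluates the relative $\mathcal{H}_2$ error through the controllability Gramian of the error system; numpy and scipy are assumed to be available, and all matrices are random toy data.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n, m, r = 30, 2, 6

# dissipative toy FOM: A = J - R with J skew-symmetric and R symmetric positive definite,
# so A and every Galerkin projection V^T A V are guaranteed to be Hurwitz
J = rng.standard_normal((n, n)); J = J - J.T
R = rng.standard_normal((n, n)); R = R @ R.T + 0.1 * np.eye(n)
A, B, C = J - R, rng.standard_normal((n, m)), rng.standard_normal((m, n))

# one-sided Galerkin projection onto a (here: random) r-dimensional subspace
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
Ar, Br, Cr = V.T @ A @ V, V.T @ B, C @ V

def h2_norm(A, B, C):
    """H2 norm via the controllability Gramian, ||H||_{H2}^2 = tr(C P C^T)."""
    P = solve_continuous_lyapunov(A, -B @ B.T)  # A P + P A^T + B B^T = 0
    return np.sqrt(np.trace(C @ P @ C.T))

# the difference H - H_r is realized by the block-diagonal error system
Ae = np.block([[A, np.zeros((n, r))], [np.zeros((r, n)), Ar]])
err = h2_norm(Ae, np.vstack([B, Br]), np.hstack([C, -Cr]))
print(f"relative H2 error of the projected ROM: {err / h2_norm(A, B, C):.3e}")
```

Replacing the random subspace by projection spaces computed with [IRKA]{.sans-serif} or with the structure-preserving methods discussed in this section would typically reduce this error considerably.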
When we add the Hamiltonian as an additional output, an [LTI]{.sans-serif}[pH]{.sans-serif} system becomes a *linear time-invariant system with quadratic output* ([LTIQO]{.sans-serif}). [MOR]{.sans-serif} for such systems is considered, e.g., in [@VanM10; @VanVLM12], in which single output [LTIQO]{.sans-serif} systems are simplified to standard [LTI]{.sans-serif} systems with multiple outputs such that either balancing or Krylov based [MOR]{.sans-serif} methods can be applied. In [@PulN19], [LTIQO]{.sans-serif} systems are rewritten as quadratic-bilinear ([QB]{.sans-serif}) systems that are subsequently reduced via balanced truncation. Our approach to approximating the Hamiltonian is based on developments from [@BenGP22], in which the $\mathcal{H}_2$ error measure is extended to [LTIQO]{.sans-serif} systems. Moreover, in [@BenGP22], energy functionals and Gramians are introduced for [LTIQO]{.sans-serif} systems such that balanced truncation can be applied directly. Finally, in [@GosA19], an iterative structure preserving [MOR]{.sans-serif} algorithm is presented, based on solving two Sylvester equations, and in [@GosG22] the *Adaptive Antoulas-Anderson* ([AAA]{.sans-serif}) algorithm is extended to [LTIQO]{.sans-serif} to develop a data-driven modelling framework. ## Organization of the manuscript {#organization-of-the-manuscript .unnumbered} Our manuscript is organized as follows, first we recall the basics of the [pH]{.sans-serif} framework (cf. ) and review [LTIQO]{.sans-serif} systems (cf. ) in . The view of [pH]{.sans-serif} systems as dual-dynamical systems and in particular the Hamiltonian dynamic are presented and analyzed in . We then present our proposed method for optimizing the energy of a [ROM]{.sans-serif} to match the energy of the [FOM]{.sans-serif} in . Finally, the efficiency of the method is demonstrated in three numerical examples in . ## Notation and abbreviations {#notation-and-abbreviations .unnumbered} We use the symbols $\ensuremath\mathbb{N}$, $\ensuremath\mathbb{R}$, $\ensuremath\mathbb{R}^n$, $\ensuremath\mathbb{R}^{n\times m}$, $\mathrm{GL}_{n}$, $\mathcal{S}^{n}_{\succ}$, $\mathcal{S}^{n}_{\succcurlyeq}$, and $\mathrm{O}_{n}$ to denote the positive integers, the real numbers, the set of column vectors with $n\in\ensuremath\mathbb{N}$ real entries, the set of $n\times m$ real matrices, the set of nonsingular matrices, the set of symmetric positive definite, the set of symmetric positive semi-definite matrices, and the orthogonal matrices, respectively. For a matrix $A\in\ensuremath\mathbb{R}^{n\times m}$, we use the symbols $A^\ensuremath\mathsf{T}$, $\mathop{\mathrm{sym}}(A) = \tfrac{1}{2}(A+A^\ensuremath\mathsf{T})$, and $\mathop{\mathrm{skew}}(A)=\tfrac{1}{2}(A - A^\ensuremath\mathsf{T})$, for the transpose, the symmetric part, and the skew-symmetric part, respectively. # Preliminaries {#sec:preliminaries} We first recall a few basic notions from [LTI]{.sans-serif} systems and pH systems, that we will later use for our developments in . Moreover, we briefly explain error measures for [LTIQO]{.sans-serif} systems that we use for our energy matching algorithm in . ## Controllability and Observability An [LTI]{.sans-serif} system such as [\[eq:LTI\]](#eq:LTI){reference-type="eqref" reference="eq:LTI"} is called controllable or observable, if the corresponding controllability and observability matrices, have full row and column rank, respectively, i. e. 
$$\mathop{\mathrm{rank}}\begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix} = n\qquad\text{and}\qquad \mathop{\mathrm{rank}} \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix} = n.$$ The system [\[eq:LTI\]](#eq:LTI){reference-type="eqref" reference="eq:LTI"} is called minimal if it is controllable and observable. Controllability and observability are closely related to the (infinite) Gramians [\[eq:gramian:int\]]{#eq:gramian:int label="eq:gramian:int"} $$\begin{aligned} \label{eq:gramian:int:ctrl} \mathcal{P}&\vcentcolon= \int_0^\infty \exp(A\tau)BB^\ensuremath\mathsf{T}\exp(A^\ensuremath\mathsf{T}\tau) \,\mathrm{d}\tau,\\ \label{eq:gramian:int:obs} \mathcal{O}&\vcentcolon= \int_0^\infty \exp(A^\ensuremath\mathsf{T}\tau)C^\ensuremath\mathsf{T}C\exp(A\tau)\,\mathrm{d}\tau, \end{aligned}$$ which exist if the dynamical system [\[eq:LTI\]](#eq:LTI){reference-type="eqref" reference="eq:LTI"} is asymptotically stable, i.e., if all eigenvalues of $A$ are in the open left-half plane. In this case, the Gramians can be computed as solutions of the Lyapunov equations [\[eq:gramian:lyap\]]{#eq:gramian:lyap label="eq:gramian:lyap"} $$\begin{aligned} \label{eq:gramian:lyap:ctrl} A\mathcal{P}+ \mathcal{P}A^\ensuremath\mathsf{T}+ BB^\ensuremath\mathsf{T}&= 0, \\ \label{eq:gramian:lyap:obs} A^\ensuremath\mathsf{T}\mathcal{O}+ \mathcal{O}A + C^\ensuremath\mathsf{T}C &= 0, \end{aligned}$$ respectively, and we have that $\Sigma$ is controllable if and only if $\mathop{\mathrm{rank}}(\mathcal{P}) = n$, and observable if and only if $\mathop{\mathrm{rank}}(\mathcal{O}) = n$. ## Port-Hamiltonian systems and the Kalman-Yakubovich-Popov inequality {#subsec:pH} We say that a general [LTI]{.sans-serif} system as in [\[eq:LTI\]](#eq:LTI){reference-type="eqref" reference="eq:LTI"} has a [pH]{.sans-serif} representation, whenever we can factorize the system matrices in the form of [\[eq:pH:sys\]](#eq:pH:sys){reference-type="eqref" reference="eq:pH:sys"} with the properties given in . While the specific matrices in a [pH]{.sans-serif} system are typically obtained during the modelling process, the factorization of the system matrices is not unique, in general. Indeed, it is easily seen that a [pH]{.sans-serif} system is passive and vice versa, any stable and minimal passive system has a [pH]{.sans-serif} representation; see for instance [@BeaMX22]. 
If $\Sigma$ in [\[eq:LTI\]](#eq:LTI){reference-type="eqref" reference="eq:LTI"} is passive, then a [pH]{.sans-serif} representation can be obtained via a symmetric positive-definite solution $X\in\mathcal{S}^{n}_{\succ}$ of the *Kalman-Yakubovich-Popov* ([KYP]{.sans-serif}) inequality $$\label{eq:KYP} \mathcal{W}_\Sigma(X) \in \mathcal{S}^{n+m}_{\succcurlyeq}$$ with $$\mathcal{W}_\Sigma\colon \ensuremath\mathbb{R}^{n\times n}\to\ensuremath\mathbb{R}^{(n+m)\times(n+m)},\qquad X \mapsto \begin{bmatrix} -A^\ensuremath\mathsf{T}X - XA & C^\ensuremath\mathsf{T}- XB\\ C-B^\ensuremath\mathsf{T}X & D+D^\ensuremath\mathsf{T} \end{bmatrix}.$$ In more detail, defining the set $\mathbb{X}_{\Sigma} \vcentcolon= \{X\in\mathcal{S}^{n}_{\succ} \mid \mathcal{W}_{\Sigma}(X)\in\mathcal{S}^{n+m}_{\succcurlyeq}\}$, it is easy to verify that for a passive [LTI]{.sans-serif} system [\[eq:LTI\]](#eq:LTI){reference-type="eqref" reference="eq:LTI"}, any $X\in\mathbb{X}_{\Sigma}$ of [\[eq:KYP\]](#eq:KYP){reference-type="eqref" reference="eq:KYP"} yields a [pH]{.sans-serif} representation by setting $$\begin{gathered} Q\vcentcolon= X, \quad J \vcentcolon= \mathop{\mathrm{skew}}(AX^{-1}), \quad R \vcentcolon= -\mathop{\mathrm{sym}}(AX^{-1}) \\ G \vcentcolon= \tfrac{1}{2}(X^{-1}C^\ensuremath\mathsf{T}+B), \quad P \vcentcolon= \frac{1}{2}(X^{-1}C^\ensuremath\mathsf{T}- B), \quad S \vcentcolon= \mathop{\mathrm{sym}}(D), \quad N \vcentcolon= \mathop{\mathrm{skew}}(D).\end{gathered}$$ Note, that we have $$(J-R)Q= \tfrac{1}{2}\left(AX^{-1} - X^{-1}A^\ensuremath\mathsf{T}+ AX^{-1} + X^{-1}A^\ensuremath\mathsf{T}\right)X = A,$$ and similarly for the other matrices. Hence, the [pH]{.sans-serif} representation does not affect the state-space description [\[eq:LTI\]](#eq:LTI){reference-type="eqref" reference="eq:LTI"}, but is merely a special decomposition of the system matrices. **Remark 2**. *If $Q\in\mathcal{S}^{n}_{\succ}$, then it is sometimes convenient to multiply the dynamical equation in [\[eq:pH:sys\]](#eq:pH:sys){reference-type="eqref" reference="eq:pH:sys"} from the left with $Q$, such that, after renaming of the matrices, the [pH]{.sans-serif} system takes the form $$\label{eq:pH:descriptorForm} \left\{\quad \begin{aligned} Q\dot{x}(t) &= (J-R)x(t) + (G-P)u(t),\\ y(t) &= (G+P)^\ensuremath\mathsf{T}x(t) + (S-N)u(t). \end{aligned}\right.$$ In this setting, all matrices in [\[eq:pH:descriptorForm\]](#eq:pH:descriptorForm){reference-type="eqref" reference="eq:pH:descriptorForm"} appear linearly, which can be exploited in model order reduction and system identification, see for instance [@SchV20; @MorNU22]. Let us emphasize that this formulation sometimes appears natural during the modeling process, which then also allows $Q$ to be singular; cf. [@BeaMXZ18; @MehU22]. Moreover, the generalized state-space description for our [pH]{.sans-serif} representation does not require the inverse of $X$. The [pH]{.sans-serif} representation is simply obtained by multiplying the state equation in [\[eq:LTI\]](#eq:LTI){reference-type="eqref" reference="eq:LTI"} from the left with $Q\vcentcolon= X$ and taking the skew-symmetric and symmetric part of the right-hand side. 
In more details, if $X\in\mathcal{S}^{n}_{\succ}$ is a solution of the [KYP]{.sans-serif} inequality [\[eq:KYP\]](#eq:KYP){reference-type="eqref" reference="eq:KYP"}, then we can set $$\begin{aligned} \begin{bmatrix} J & G\\ -G^\ensuremath\mathsf{T}& N \end{bmatrix} \vcentcolon= \mathop{\mathrm{skew}}\left( \begin{bmatrix} XA & B\\ -C & -D \end{bmatrix}\right)\qquad\text{and}\qquad \begin{bmatrix} R & P\\ P^\ensuremath\mathsf{T}& S \end{bmatrix} \vcentcolon= -\mathop{\mathrm{sym}}\left( \begin{bmatrix} XA & B\\ -C & -D \end{bmatrix}\right). \end{aligned}$$* For our forthcoming analysis we gather several results from the literature about the [KYP]{.sans-serif} inequality [\[eq:KYP\]](#eq:KYP){reference-type="eqref" reference="eq:KYP"}. **Theorem 3**. *Consider the dynamical system $\Sigma$ in [\[eq:LTI\]](#eq:LTI){reference-type="eqref" reference="eq:LTI"} and the associated [KYP]{.sans-serif} inequality [\[eq:KYP\]](#eq:KYP){reference-type="eqref" reference="eq:KYP"}.* 1. *[\[thm:KYP:symPosSemi\]]{#thm:KYP:symPosSemi label="thm:KYP:symPosSemi"} If the dynamical system is asymptotically stable, i.e., the eigenvalues of $A$ are in the open left half plane, then any solution $X\in\ensuremath\mathbb{R}^{n\times n}$ of [\[eq:KYP\]](#eq:KYP){reference-type="eqref" reference="eq:KYP"} is symmetric positive semi-definite.* 2. *[\[thm:KYP:posDef\]]{#thm:KYP:posDef label="thm:KYP:posDef"} If the dynamical system is observable, then any solution $X\in\mathcal{S}^{n}_{\succcurlyeq}$ of [\[eq:KYP\]](#eq:KYP){reference-type="eqref" reference="eq:KYP"} is positive definite.* 3. *[\[thm:KYP:bounded\]]{#thm:KYP:bounded label="thm:KYP:bounded"} Suppose the dynamical system is minimal and asymptotically stable. Then there exist matrices $X_{\min},X_{\max}\in\mathbb{X}_{\Sigma}$ such that any $X\in\mathbb{X}_{\Sigma}$ satisfies $$X_{\min}\preccurlyeq X \preccurlyeq X_{\max}.$$ In particular, the set $\mathbb{X}_{\Sigma}$ is bounded.* *Proof.* Since the results are well-known, we simply refer to the respective literature. 1. Let $X\in\ensuremath\mathbb{R}^{n\times n}$ be a solution of [\[eq:KYP\]](#eq:KYP){reference-type="eqref" reference="eq:KYP"}. Then there exists a matrix $M\in\mathcal{S}^{n}_{\succcurlyeq}$ such that $$-A^\ensuremath\mathsf{T}X - XA^\ensuremath\mathsf{T}= M.$$ The result is thus an immediate consequence of [@LanT85 Cha. 12.3, Thm. 3]. 2. See [@CamIV14 Prop. 1]. 3. See [@Wil72 Thm. 3].  ◻ If $D$ is regular, solutions of the [KYP]{.sans-serif} that minimize $\mathop{\mathrm{rank}}(\mathcal{W}_\Sigma(\cdot))$ can be computed by solving an associated *algebraic Riccati equation* ([ARE]{.sans-serif}) of the form $$\begin{aligned} \label{eq:passivity-riccati} A^\ensuremath\mathsf{T}X +XA + (-C^\ensuremath\mathsf{T}+ XB){(D+D^\ensuremath\mathsf{T})}^{-1}(-C+B^\ensuremath\mathsf{T}X) = 0.\end{aligned}$$ The connection between solutions of this [ARE]{.sans-serif} and the [KYP]{.sans-serif} are studied to great detail in [@Wil71]. Numerical solvers for the [ARE]{.sans-serif} are readily available[^1] and can be used to compute both *minimal* and *maximal solutions* to the [ARE]{.sans-serif}, which are also the minimal and maximal solutions of the [KYP]{.sans-serif} inequality from  [\[thm:KYP:bounded\]](#thm:KYP:bounded){reference-type="ref" reference="thm:KYP:bounded"}. These solutions have the property that for each solution $X$ of the [ARE]{.sans-serif}, we have that $X-X_{\min} \in \mathcal{S}^{n}_{\succcurlyeq}$ and $X_{\max} - X \in \mathcal{S}^{n}_{\succcurlyeq}$. 
Moreover, each solution of the [ARE]{.sans-serif} can be constructed as $X = X_{\max}\mathfrak{P}+ X_{\min}(I-\mathfrak{P})$, where $\mathfrak{P}$ and $I-\mathfrak{P}$ are projections onto invariant subspaces of associated matrices; see [@Wil71] for further details. ## Linear systems with quadratic output {#subsec:LTIQO} For our forthcoming analysis, we recall some results on *linear time-invariant systems with quadratic output* ([LTIQO]{.sans-serif}) of the form $$\label{eq:LTIQO} \Sigma_{\mathsf{QO}}\quad\left\{\quad\begin{aligned} \dot{x}(t) &= Ax(t) + Bu(t),\\ y(t) &= x(t)^\ensuremath\mathsf{T}M x(t), \end{aligned}\right.$$ with $M = M^\ensuremath\mathsf{T}\in\ensuremath\mathbb{R}^{n\times n}$. Similarly as before, the controllability Gramian of [\[eq:LTIQO\]](#eq:LTIQO){reference-type="eqref" reference="eq:LTIQO"} is given by [\[eq:gramian:int:ctrl\]](#eq:gramian:int:ctrl){reference-type="eqref" reference="eq:gramian:int:ctrl"} and can be computed as the solution of the Lyapunov equation [\[eq:gramian:lyap:ctrl\]](#eq:gramian:lyap:ctrl){reference-type="eqref" reference="eq:gramian:lyap:ctrl"}. To define the observability Gramian we assume, as common in the [MOR]{.sans-serif} setting, $x(0) = 0$. Then, following [@BenGP22; @GosA19], the output of [\[eq:LTIQO\]](#eq:LTIQO){reference-type="eqref" reference="eq:LTIQO"} is given by $$%\label{eq:LTIQO:outputConvolution} y(t) = \int_0^t \int_0^t h(\tau,\sigma) \big(u(t-\tau)\otimes u(t-\sigma)\big)\,\mathrm{d}\tau\,\mathrm{d}\sigma$$ with kernel $h(\tau,\sigma) \vcentcolon= \big(\mathrm{vec}\big(B^\ensuremath\mathsf{T}\exp(A^\ensuremath\mathsf{T}\sigma) M \exp(A\tau)B\big)\big)^\ensuremath\mathsf{T}$. Accordingly, the [LTIQO]{.sans-serif} observability Gramian can be defined as $$\label{eq:LTIQO:obs-gramian} \mathcal{O}_{\mathsf{QO}}= \int_0^\infty \int_0^\infty \exp(A^\ensuremath\mathsf{T}\sigma)M\exp(A \tau) B \left(\exp(A^\ensuremath\mathsf{T}\sigma)M\exp(A \tau) B\right)^\ensuremath\mathsf{T}\,\mathrm{d}\sigma\,\mathrm{d}\tau,$$ and can be computed as a solution of the Lyapunov equation $$\label{eq:qo-obs-gramian:lyap} A^\ensuremath\mathsf{T}\mathcal{O}_{\mathsf{QO}}+ \mathcal{O}_{\mathsf{QO}}A + M \mathcal{P}M = 0,$$ where $\mathcal{P}$ is the controllability Gramian given by [\[eq:gramian:int:ctrl\]](#eq:gramian:int:ctrl){reference-type="eqref" reference="eq:gramian:int:ctrl"}. Similar to the case of linear output, a system [\[eq:LTIQO\]](#eq:LTIQO){reference-type="eqref" reference="eq:LTIQO"} is controllable or observable if and only if the corresponding Gramians are nonsingular. With these preparations, the $\mathcal{H}_2$-norm for the [LTIQO]{.sans-serif} system [\[eq:LTIQO\]](#eq:LTIQO){reference-type="eqref" reference="eq:LTIQO"} can be defined as $$\|\Sigma_{\mathsf{QO}}\|_{\mathcal{H}_2} \vcentcolon= \bigg(\int_0^\infty \int_0^\infty \|h(\tau,s)\|_2^{2} \,\mathrm{d}\tau\,\mathrm{d}s\bigg)^{\tfrac{1}{2}} = \sqrt{\mathop{\mathrm{tr}}{B^\ensuremath\mathsf{T}\mathcal{O}_{\mathsf{QO}}B}}.$$ with output bound $\|y\|_{L_\infty} \leq \|\Sigma_{\mathsf{QO}}\|_{\mathcal{H}_2}\|u\otimes u\|_{L_2\otimes L_2}$; see [@BenGP22 Sec. III B]. Hereby, the $L_2\otimes L_2$-norm is defined as $\|u\otimes u\|_{L_2\otimes L_2} \vcentcolon= (\int_0^\infty\int_0^\infty \|u(\tau)\otimes u(\sigma)\|_2^2 \,\mathrm{d}\tau\,\mathrm{d}\sigma)^{1/2}$. 
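The Gramian-based quantities above translate directly into a few lines of code. The following sketch (with randomly generated, dissipative toy matrices --- an assumption of the example, not data from this paper) computes the controllability Gramian from [\[eq:gramian:lyap:ctrl\]](#eq:gramian:lyap:ctrl){reference-type="eqref" reference="eq:gramian:lyap:ctrl"}, the quadratic-output observability Gramian from [\[eq:qo-obs-gramian:lyap\]](#eq:qo-obs-gramian:lyap){reference-type="eqref" reference="eq:qo-obs-gramian:lyap"}, and the resulting $\mathcal{H}_2$ norm $\sqrt{\mathop{\mathrm{tr}}(B^\ensuremath\mathsf{T}\mathcal{O}_{\mathsf{QO}}B)}$.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
n, m = 20, 2

# toy LTIQO system: Hurwitz A (skew-symmetric minus positive definite part) and symmetric M
J = rng.standard_normal((n, n)); J = J - J.T
R = rng.standard_normal((n, n)); R = R @ R.T + 0.1 * np.eye(n)
A = J - R
B = rng.standard_normal((n, m))
M = rng.standard_normal((n, n)); M = 0.5 * (M + M.T)

# controllability Gramian:      A P + P A^T + B B^T = 0
P = solve_continuous_lyapunov(A, -B @ B.T)
# QO observability Gramian:     A^T O_QO + O_QO A + M P M = 0
O_qo = solve_continuous_lyapunov(A.T, -M @ P @ M)

print("smallest eigenvalue of the QO observability Gramian:", np.linalg.eigvalsh(O_qo).min())
print("H2 norm of the quadratic-output system:", np.sqrt(np.trace(B.T @ O_qo @ B)))
```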
If we replace [\[eq:LTIQO\]](#eq:LTIQO){reference-type="eqref" reference="eq:LTIQO"} with a surrogate model of order $r\ll n$ of the form $$\begin{aligned} \label{eq:LTIQO:red} \tilde{\Sigma}_{\mathsf{QO}}\quad\left\{\quad\begin{aligned} \dot{\tilde{x}}(t) &= \tilde{A}\tilde{x}(t) + \tilde{B}u(t),\\ \tilde{y}(t) &= \tilde{x}(t)^\ensuremath\mathsf{T}\tilde{M} \tilde{x}(t) \end{aligned}\right.\end{aligned}$$ with matrices $\tilde{A},\tilde{M}\in\ensuremath\mathbb{R}^{r\times r}$ and $\tilde{B}\in\ensuremath\mathbb{R}^{r\times m}$, then the output error ${\|y-\tilde{y}\|}_{L_\infty}$ can be bounded by the difference of the systems in the $\mathcal{H}_2$ norm multiplied with the norm of $u\otimes u$ in the $L_2\otimes L_2$ norm. In more detail, we obtain [@BenGP22 Thm. 3.4] $$\label{eq:ltiqo:h2:error} \|\Sigma_{\mathsf{QO}}- \tilde{\Sigma}_{\mathsf{QO}}\|_{\mathcal{H}_2}^2 = \mathop{\mathrm{tr}}(B^\ensuremath\mathsf{T}\mathcal{O}_{\mathsf{QO}}B + \tilde{B}^\ensuremath\mathsf{T}\tilde{\mathcal{O}}_{\mathsf{QO}}\tilde{B} - 2B^\ensuremath\mathsf{T}Z\tilde{B}),$$ where $Z\in \ensuremath\mathbb{R}^{n\times r}$ and $Y\in\ensuremath\mathbb{R}^{n\times r}$ are the unique solution of the Sylvester equations $$\begin{aligned} \label{eq:h2-error:syl1} A^\ensuremath\mathsf{T}Z + Z \tilde{A} + M Y \tilde{M} &= 0,\\ \label{eq:h2-error:syl2} A Y + Y \tilde{A}^\ensuremath\mathsf{T}+ B \tilde{B}^\ensuremath\mathsf{T}&= 0.\end{aligned}$$ # Hamiltonian dynamic and pH-minimality {#sec:hamiltonian-dynamic} A [MOR]{.sans-serif} technique that does not involve any approximation error for the input-output behavior of a given [LTI]{.sans-serif} system is the computation of a minimal realization for example based on the Kalman decomposition. However, in general this technique does not preserve structure of the given system or other quantities of relevance. Since we are dealing with [pH]{.sans-serif} systems, we are particularly interested in the evaluation of the Hamiltonian along system trajectories next to the system's input output behavior. Thus, we develop next a Kalman-like decomposition that permits us to construct a reduced order model that preserves the input output dynamic and the Hamiltonian dynamic as defined next. **Definition 4** (Hamiltonian dynamic). *Consider the [pH]{.sans-serif} system [\[eq:pH\]](#eq:pH){reference-type="eqref" reference="eq:pH"}. Then we call the dynamical system $$\label{eq:HamiltonianDynamic} \Sigma_{\mathcal{H}}\quad \left\{\quad \begin{aligned} \dot{x}(t) &= (J-R)Qx(t) + (G-P)u(t),\\ y_{\mathcal{H}}(t) &= \tfrac{1}{2}x(t)^\ensuremath\mathsf{T}Qx(t),\\ \end{aligned}\right.$$ the *Hamiltonian dynamic* associated with the [pH]{.sans-serif} system [\[eq:pH\]](#eq:pH){reference-type="eqref" reference="eq:pH"}.* If we thus want to approximate a [pH]{.sans-serif} system or find a minimal realization of a [pH]{.sans-serif} system, then we have to do this simultaneously for the input-output dynamic [\[eq:pH:sys\]](#eq:pH:sys){reference-type="eqref" reference="eq:pH:sys"} and the Hamiltonian dynamic [\[eq:HamiltonianDynamic\]](#eq:HamiltonianDynamic){reference-type="eqref" reference="eq:HamiltonianDynamic"}. **Example 5**. *Let $n=2$, $Q= I_2$, $S=N=0$, $P=0$, and $$\begin{aligned} \label{eq:exPHmini:FOM} J &= \begin{bmatrix} 0 & -1\\ 1 & 0 \end{bmatrix}, & R &= \begin{bmatrix} 1 & -1\\ -1 & 2 \end{bmatrix}, & A &= (J-R)Q= \begin{bmatrix} -1 & 0\\ 2 & -2 \end{bmatrix}, & G &= \begin{bmatrix} 1\\0 \end{bmatrix}. 
\end{aligned}$$ It is easy to see that with this choice, the input-output dynamic $\Sigma_{\mathsf{pH}}$ is controllable but not observable. A minimal realization is given by the dynamical system [\[eq:LTI\]](#eq:LTI){reference-type="eqref" reference="eq:LTI"} with $$\begin{aligned} \label{eq:exPHmini:ROM} r&=1, & \tilde{A}&=-1, & \tilde{B}&=1, & \tilde{C}&=1. \end{aligned}$$ The minimal realization is passive with unique solution $\tilde{Q}^\star = 1$ of the [KYP]{.sans-serif} inequality [\[eq:KYP\]](#eq:KYP){reference-type="eqref" reference="eq:KYP"}. Nevertheless, straightforward computations[^2] show that the Hamiltonian dynamic $\Sigma_{\mathcal{H}}$ of the original system and the minimal realization do not coincide. We conclude that while [\[eq:exPHmini:ROM\]](#eq:exPHmini:ROM){reference-type="eqref" reference="eq:exPHmini:ROM"} constitutes a minimal realization for the input-output dynamic of [\[eq:exPHmini:FOM\]](#eq:exPHmini:FOM){reference-type="eqref" reference="eq:exPHmini:FOM"}, the reduced system given by [\[eq:exPHmini:ROM\]](#eq:exPHmini:ROM){reference-type="eqref" reference="eq:exPHmini:ROM"} introduces an approximation error for the Hamiltonian dynamic.* To simplify our further discussion, we combine the input-output dynamic [\[eq:pH:sys\]](#eq:pH:sys){reference-type="eqref" reference="eq:pH:sys"} and the Hamiltonian dynamic [\[eq:HamiltonianDynamic\]](#eq:HamiltonianDynamic){reference-type="eqref" reference="eq:HamiltonianDynamic"} into the extended dynamical system $$\label{eq:pHQO} \Sigma_{\mathsf{pH}+\mathcal{H}}\quad\left\{\quad\begin{aligned} \dot{x}(t) &= (J-R)Qx(t) + (G-P)u(t),\\ y(t) &= (G+P)^\ensuremath\mathsf{T}Qx+ (S-N)u,\\ y_{\mathcal{H}}(t) &= \tfrac{1}{2}x(t)^\ensuremath\mathsf{T}Qx(t) \end{aligned}\right.$$ with extended output variable $y_{\mathsf{pH}+\mathcal{H}}\vcentcolon= [y^\ensuremath\mathsf{T}, y_{\mathcal{H}}]^\ensuremath\mathsf{T}\in\ensuremath\mathbb{R}^{m+1}$. Let us emphasize that in this formulation the dissipation inequality can be represented as $$\ensuremath{\frac{\mathrm{d}}{\mathrm{d}t}} \begin{bmatrix} 0 & 1 \end{bmatrix} y_{\mathsf{pH}+\mathcal{H}}(t) \leq y_{\mathsf{pH}+\mathcal{H}}(t)^\ensuremath\mathsf{T} \begin{bmatrix} I_{m} \\ 0 \end{bmatrix}u(t),$$ and hence does no longer require the state, but is purely an input-output property. Towards a structure-preserving Kalman-like decomposition, let $V\in\mathrm{O}_{n}$ (i.e., $V^\ensuremath\mathsf{T}V = I_{n}$) such that $$V^\ensuremath\mathsf{T}QV = \begin{bmatrix} Q_{\mathsf{o}} & 0\\ 0 & 0 \end{bmatrix}$$ with $Q_{\mathsf{o}}\in\mathcal{S}^{\mathop{\mathrm{rank}}(Q)}_{\succ}$. Setting $$\begin{gathered} V^\ensuremath\mathsf{T}x= \begin{bmatrix} x_{\mathsf{o}}\\ x_{\overline{\mathsf{o}}} \end{bmatrix}, \qquad V^\ensuremath\mathsf{T}J V = \begin{bmatrix} J_{\mathsf{o}} & -J_{\overline{\mathsf{o}}}^\ensuremath\mathsf{T}\\ J_{\overline{\mathsf{o}}} & J_{\star} \end{bmatrix}, \qquad V^\ensuremath\mathsf{T}R V = \begin{bmatrix} R_{\mathsf{o}} & R_{\overline{\mathsf{o}}}^\ensuremath\mathsf{T}\\ R_{\overline{\mathsf{o}}} & R_{\star} \end{bmatrix},\\ V^\ensuremath\mathsf{T}G = \begin{bmatrix} G_{\mathsf{o}}\\ G_{\overline{\mathsf{o}}} \end{bmatrix} , \qquad V^\ensuremath\mathsf{T}P = \begin{bmatrix} P_{\mathsf{o}}\\ P_{\overline{\mathsf{o}}} \end{bmatrix}\end{gathered}$$ we immediately observe that the dynamic corresponding to the state $x_{\overline{\mathsf{o}}}$ is not observable and hence can be removed without altering the input-output mapping. 
In particular, the [pH]{.sans-serif} system $$\label{eq:pHQO:nonsingHam} \left\{\quad\begin{aligned} \dot{x}_{\mathsf{o}}(t) &= \left(J_{\mathsf{o}}-R_{\mathsf{o}}\right)Q_{\mathsf{o}} x_{\mathsf{o}}(t) + (G_{\mathsf{o}}-P_{\mathsf{o}})u(t)\\ y(t) &= (G_{\mathsf{o}} + P_{\mathsf{o}})^\ensuremath\mathsf{T}Q_{\mathsf{o}}x_{\mathsf{o}}(t) + (S-N)u(t)\\ y_{\mathcal{H}}(t) &= \tfrac{1}{2} x_{\mathsf{o}}(t)^\ensuremath\mathsf{T}Q_{\mathsf{o}} x_{\mathsf{o}}(t) \end{aligned}\right.$$ has the same input-output mapping as [\[eq:pHQO\]](#eq:pHQO){reference-type="eqref" reference="eq:pHQO"}, i.e., from an approximation perspective, we can always assume $Q\in\mathcal{S}^{n}_{\succ}$ since we can simply remove the singular components, which is considered favorable from a modeling perspective [@MehU22 Sec. 4.3]. **Theorem 6**. *Consider the [pH]{.sans-serif} system [\[eq:pHQO\]](#eq:pHQO){reference-type="eqref" reference="eq:pHQO"} with $Q\in\mathcal{S}^{n}_{\succ}$. Then, there exists a matrix $V\in\mathrm{GL}_{n}$ such that a state-space transformation with $V$ transforms the [pH]{.sans-serif} system [\[eq:pHQO\]](#eq:pHQO){reference-type="eqref" reference="eq:pHQO"} into $$\label{eq:pHQO:KalmanContrForm} \left\{\quad \begin{aligned} \begin{bmatrix} \dot{x}_{\mathsf{c}}(t)\\ \dot{x}_{\overline{\mathsf{c}}}(t) \end{bmatrix} &= \begin{bmatrix} \left(J_{\mathsf{c}} - R_{\mathsf{c}}\right) & 0\\ 0 & \left(J_{\overline{\mathsf{c}}} - R_{\overline{\mathsf{c}}}\right) \end{bmatrix}\begin{bmatrix} x_{\mathsf{c}}(t)\\ x_{\overline{\mathsf{c}}}(t) \end{bmatrix} + \begin{bmatrix} G_{\mathsf{c}}-P_{\mathsf{c}}\\ 0\\ \end{bmatrix}u(t),\\ y(t) &= (G_{\mathsf{c}} + P_{\mathsf{c}})^\ensuremath\mathsf{T}x_{\mathsf{c}}(t) + (S-N)u(t),\\ y_{\mathcal{H}}(t) &= \tfrac{1}{2} x_{\mathsf{c}}(t)^\ensuremath\mathsf{T}x_{\mathsf{c}}(t) + \tfrac{1}{2}x_{\overline{\mathsf{c}}}(t)^\ensuremath\mathsf{T}x_{\overline{\mathsf{c}}}(t) \end{aligned}\right.$$ with $V^{-1}QV = I_n$ and $$\begin{aligned} V\begin{bmatrix}x_{\mathsf{c}}\\x_{\overline{\mathsf{c}}}\end{bmatrix} &= x, & \begin{bmatrix} J_{\mathsf{c}} - R_{\mathsf{c}} & 0\\ 0 & J_{\overline{\mathsf{c}}} - R_{\overline{\mathsf{c}}} \end{bmatrix} &= V^{-1} (J-R) V, & \begin{bmatrix} G_{\mathsf{c}} - P_{\mathsf{c}}\\ 0 \end{bmatrix} &= V^{-1} (G-P) \end{aligned}$$ such that the subsystem corresponding to $x_{\mathsf{c}}$ is in [pH]{.sans-serif}-form and controllable.* *Proof.* Let $Q= LL^\ensuremath\mathsf{T}$ denote the Cholesky decomposition of the Hessian of the Hamiltonian, and define $$\begin{aligned} \tilde{J} &\vcentcolon= L^\ensuremath\mathsf{T}J L, & \tilde{R} &\vcentcolon= L^\ensuremath\mathsf{T}R L, & \tilde{G} &\vcentcolon= L^\ensuremath\mathsf{T}G, & \tilde{P} &\vcentcolon= L^\ensuremath\mathsf{T}P. \end{aligned}$$ Using a classical Kalman decomposition, let $\tilde{V}\in\mathrm{O}_{n}$ be such that $$\left(\tilde{V}^\ensuremath\mathsf{T}(\tilde{J}-\tilde{R})\tilde{V}, \tilde{V}^\ensuremath\mathsf{T}(\tilde{G}-\tilde{P})\right) = \left(\begin{bmatrix} J_{\mathsf{c}} -R_{\mathsf{c}} & J_\star - R_\star\\ 0 & J_{\overline{\mathsf{c}}} - R_{\overline{\mathsf{c}}} \end{bmatrix}, \begin{bmatrix} G_{\mathsf{c}} - P_{\mathsf{c}}\\ 0 \end{bmatrix}\right)$$ is such that $(J_{\mathsf{c}}-R_{\mathsf{c}},G_{\mathsf{c}}-P_{\mathsf{c}})$ is controllable. Note that the transformation is a congruence transformation, such that we conclude $J_\star = 0 = R_\star$. The result follows with $V \vcentcolon= L^{-\ensuremath\mathsf{T}}\tilde{V}$. ◻ **Corollary 7**. 
*Consider the system [\[eq:pHQO:KalmanContrForm\]](#eq:pHQO:KalmanContrForm){reference-type="eqref" reference="eq:pHQO:KalmanContrForm"} with $J_{\mathsf{c}}\in\ensuremath\mathbb{R}^{n_{\mathsf{c}}\times n_{\mathsf{c}}}$ and $J_{\overline{\mathsf{c}}}\in\ensuremath\mathbb{R}^{n_{\overline{\mathsf{c}}}\times n_{\overline{\mathsf{c}}}}$. Then [\[eq:pHQO:KalmanContrForm\]](#eq:pHQO:KalmanContrForm){reference-type="eqref" reference="eq:pHQO:KalmanContrForm"} is zero-state observable. It is controllable, if and only if $n_{\overline{\mathsf{c}}} = 0$. In this case, asymptotic stability implies that the controllability and observability Gramians defined in [\[eq:gramian:int:ctrl\]](#eq:gramian:int:ctrl){reference-type="eqref" reference="eq:gramian:int:ctrl"} and [\[eq:LTIQO:obs-gramian\]](#eq:LTIQO:obs-gramian){reference-type="eqref" reference="eq:LTIQO:obs-gramian"} are positive definite.* *Proof.* For zero-state observability, observe that $u\equiv 0$ implies $x_{\mathsf{c}}(t) = \exp((J_{\mathsf{c}}-R_{\mathsf{c}})t)x_{\mathsf{c},0}$ and $x_{\overline{\mathsf{c}}}(t) = \exp((J_{\overline{\mathsf{c}}}-R_{\overline{\mathsf{c}}})t)x_{\overline{\mathsf{c}},0}$. In particular, using $$y_{\mathcal{H}}(t) = \tfrac{1}{2}\|x_{\mathsf{c}}(t)\|_2^2 + \tfrac{1}{2}\|x_{\overline{\mathsf{c}}}(t)\|_2^2$$ yields $y_{\mathcal{H}}\equiv 0$ if and only if $x_{\mathsf{c},0} = 0$ and $x_{\overline{\mathsf{c}},0} = 0$. Controllability is a consequence of , which also implies positive definiteness of the controllability Gramian. The positive definiteness of the observability Gramian follows immediately from [@LanT85 Cha. 12.3, Thm. 3]. ◻ Summarizing the previous discussion, we obtain the main result of this section, namely a Kalman-like decomposition for [pH]{.sans-serif} systems. **Theorem 8**. *Consider the [pH]{.sans-serif} system [\[eq:pHQO\]](#eq:pHQO){reference-type="eqref" reference="eq:pHQO"}. Then, there exists a matrix $V\in\mathrm{GL}_{n}{\ensuremath\mathbb{R}}$ such that a state-space transformation with $V$ transforms the [pH]{.sans-serif} system [\[eq:pHQO\]](#eq:pHQO){reference-type="eqref" reference="eq:pHQO"} into $$\label{eq:pHQO:KalmanForm} \left\{\quad\begin{aligned} \begin{bmatrix} \dot{x}_{\mathsf{c}\mathsf{o}}(t)\\ \dot{x}_{\overline{\mathsf{c}}\mathsf{o}}(t)\\ \dot{x}_{\mathsf{c}\overline{\mathsf{o}}}(t)\\ \dot{x}_{\overline{\mathsf{c}}\overline{\mathsf{o}}}(t) \end{bmatrix} &= \begin{bmatrix} J_{\mathsf{c}\mathsf{o}} - R_{\mathsf{c}\mathsf{o}} & 0 & 0 & 0\\ 0 & J_{\overline{\mathsf{c}}\mathsf{o}} - R_{\overline{\mathsf{c}}\mathsf{o}} & 0 & 0\\ J_{\mathsf{c}\overline{\mathsf{o}},1} - R_{\mathsf{c}\overline{\mathsf{o}},1} & J_{\mathsf{c}\overline{\mathsf{o}},2} - R_{\mathsf{c}\overline{\mathsf{o}},2} & 0 & 0\\ 0 & 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} x_{\mathsf{c}\mathsf{o}}(t)\\ x_{\overline{\mathsf{c}}\mathsf{o}}(t)\\ x_{\mathsf{c}\overline{\mathsf{o}}}(t)\\ x_{\overline{\mathsf{c}}\overline{\mathsf{o}}}(t) \end{bmatrix} + \begin{bmatrix} G_{\mathsf{c}\mathsf{o}} - P_{\mathsf{c}\mathsf{o}}\\ 0\\ G_{\mathsf{c}\overline{\mathsf{o}}} - P_{\mathsf{c}\overline{\mathsf{o}}}\\ 0 \end{bmatrix}u(t),\\ y(t) &= (G_{\mathsf{c}\mathsf{o}} + P_{\mathsf{c}\mathsf{o}})^\ensuremath\mathsf{T}x_{\mathsf{c}\mathsf{o}}(t) + (S-N)u(t),\\ y_{\mathcal{H}}(t) &= \tfrac{1}{2} x_{\mathsf{c}\mathsf{o}}^\ensuremath\mathsf{T}(t) x_{\mathsf{c}\mathsf{o}}(t) + \tfrac{1}{2}x_{\overline{\mathsf{c}}\mathsf{o}}^\ensuremath\mathsf{T}(t) x_{\overline{\mathsf{c}}\mathsf{o}}(t) \end{aligned}\right.$$ such that* 1. 
*the subsystem corresponding to $x_{\mathsf{c}\mathsf{o}}$ is in [pH]{.sans-serif} form, controllable, and zero-state observable,* 2. *the subsystem corresponding to $x_{\overline{\mathsf{c}}\mathsf{o}}$ is in [pH]{.sans-serif} form and zero-state observable,* 3. *the subsystem corresponding to $x_{\mathsf{c}\mathsf{o}}$ and $x_{\overline{\mathsf{c}}\mathsf{o}}$ is zero-state observable, and* 4. *the subsystem corresponding to $x_{\mathsf{c}\mathsf{o}}$ and $x_{\mathsf{c}\overline{\mathsf{o}}}$ is controllable.* *Proof.* Using a classical Kalman decomposition, let $\mathcal{V}_{\mathsf{c}}\subseteq\ensuremath\mathbb{R}^{n}$ denote the controllability space associated with [\[eq:pHQO\]](#eq:pHQO){reference-type="eqref" reference="eq:pHQO"} and define the spaces $$\begin{aligned} \mathcal{V}_{\overline{\mathsf{c}}\overline{\mathsf{o}}} &\vcentcolon= \mathcal{V}_{\mathsf{c}}^\perp \cap \ker(Q), & \tilde{\mathcal{V}}_{\mathsf{c}\overline{\mathsf{o}}} &\vcentcolon= \mathcal{V}_{\mathsf{c}} \cap \ker(Q), & \mathcal{V}_1 &\vcentcolon= (\mathcal{V}_{\overline{\mathsf{c}}\overline{\mathsf{o}}} + \tilde{\mathcal{V}}_{\mathsf{c}\overline{\mathsf{o}}})^\perp. \end{aligned}$$ Using $\mathcal{V}_{\mathsf{c}}+\mathcal{V}_{\mathsf{c}}^\perp = \ensuremath\mathbb{R}^n$, we conclude $\ker(Q) \perp \mathcal{V}_1$. Assume that the columns of $V_1, V_{\mathsf{c}\overline{\mathsf{o}}},V_{\overline{\mathsf{c}}\overline{\mathsf{o}}}$ form a basis for $\mathcal{V}_1,\mathcal{V}_{\mathsf{c}\overline{\mathsf{o}}},\mathcal{V}_{\overline{\mathsf{c}}\overline{\mathsf{o}}}$ such that $\tilde{V} = [V_1,V_{\mathsf{c}\overline{\mathsf{o}}},V_{\overline{\mathsf{c}}\overline{\mathsf{o}}}] \in\mathrm{O}_{n}$. Define $Q_1 \vcentcolon= V_1^\ensuremath\mathsf{T}QV_1$, $J_1 \vcentcolon= V_1^\ensuremath\mathsf{T}JV_1$, and $R_1 \vcentcolon= V_1^\ensuremath\mathsf{T}RV_1$. A state-space transformation of [\[eq:pHQO\]](#eq:pHQO){reference-type="eqref" reference="eq:pHQO"} with $\tilde{V}$ then yields $$\begin{aligned} \left\{\quad\begin{aligned} \begin{bmatrix} \dot{x}_1(t)\\ \dot{x}_{\mathsf{c}\overline{\mathsf{o}}}(t)\\ \dot{x}_{\overline{\mathsf{c}}\overline{\mathsf{o}}}(t) \end{bmatrix} &= \begin{bmatrix} J_1-R_1 & 0 & 0\\ J_{\mathsf{c}\overline{\mathsf{o}}} - R_{\mathsf{c}\overline{\mathsf{o}}} & 0 & 0\\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} Q_1x_1(t)\\ x_{\mathsf{c}\overline{\mathsf{o}}}(t)\\ x_{\overline{\mathsf{c}}\overline{\mathsf{o}}}(t) \end{bmatrix} + \begin{bmatrix} G_1 - P_1\\ G_{\mathsf{c}\overline{\mathsf{o}}} - P_{\mathsf{c}\overline{\mathsf{o}}}\\ 0 \end{bmatrix}u(t),\\ y(t) &= (G_1^\ensuremath\mathsf{T}+ P_1^\ensuremath\mathsf{T})Q_1x_1(t) + (S-N)u(t),\\ y_{\mathcal{H}}(t) &= \tfrac{1}{2} x_1^\ensuremath\mathsf{T}(t) Q_1x_1(t), \end{aligned}\right. \end{aligned}$$ where the subsystem corresponding to $x_1$ is in [pH]{.sans-serif} form. The result follows from applying and to the [pH]{.sans-serif} subsystem corresponding to $x_1$. ◻ **Corollary 9**. 
*Consider the [pH]{.sans-serif} system [\[eq:pHQO\]](#eq:pHQO){reference-type="eqref" reference="eq:pHQO"} with initial value $x(0) = 0$ and, using the notation of , the reduced controllable and zero-state observable [pH]{.sans-serif} system $$\left\{\quad\begin{aligned} \dot{x}_{\mathsf{c}\mathsf{o}}(t) &= (J_{\mathsf{c}\mathsf{o}} - R_{\mathsf{c}\mathsf{o}})x_{\mathsf{c}\mathsf{o}}(t) + (G_{\mathsf{c}\mathsf{o}} - P_{\mathsf{c}\mathsf{o}})u(t),\\ \tilde{y}(t) &= (G_{\mathsf{c}\mathsf{o}} + P_{\mathsf{c}\mathsf{o}})^\ensuremath\mathsf{T}x_{\mathsf{c}\mathsf{o}}(t) + (S-N)u(t),\\ \tilde{y}_{\mathcal{H}}(t) &= \tfrac{1}{2} x_{\mathsf{c}\mathsf{o}}^\ensuremath\mathsf{T}(t) x_{\mathsf{c}\mathsf{o}}(t) \end{aligned}\right.$$ with initial value $x_{\mathsf{c}\mathsf{o}}(0) = 0$. Then $y\equiv \tilde{y}$ and $y_{\mathcal{H}}\equiv \tilde{y}_{\mathcal{H}}$ for any control input $u$.*

**Remark 10**. *A Kalman decomposition for [pH]{.sans-serif} systems considering only the input-output dynamic [\[eq:pH:sys\]](#eq:pH:sys){reference-type="eqref" reference="eq:pH:sys"} is obtained in [@PolS08], which however requires certain invertibility assumptions to preserve the [pH]{.sans-serif} structure and, moreover, does not take the Hamiltonian dynamic [\[eq:HamiltonianDynamic\]](#eq:HamiltonianDynamic){reference-type="eqref" reference="eq:HamiltonianDynamic"} into consideration.*

# Energy matching algorithm in surrogate models {#sec:em}

In an optimization setting, the approximation of a [pH]{.sans-serif} system can be interpreted as a *multi-objective optimization problem* accounting for the approximation of the input-output dynamic and the Hamiltonian dynamic. If we seek optimal approximations, then we have to solve the multi-objective optimization problem $$\label{eq:dual-objective-ph-mor} \min_{\tilde{\Sigma}_{\mathsf{pH}+\mathcal{H}}} \tfrac{1}{2}\begin{bmatrix} \|\Sigma_{\mathsf{pH}}- \tilde{\Sigma}_{\mathsf{pH}}\|_{\mathcal{H}_2}^2\\ \|\Sigma_{\mathcal{H}}- \tilde{\Sigma}_{\mathcal{H}}\|_{\mathcal{H}_2}^2 \end{bmatrix}.$$ As a first approach to solving [\[eq:dual-objective-ph-mor\]](#eq:dual-objective-ph-mor){reference-type="eqref" reference="eq:dual-objective-ph-mor"}, we propose a two-step strategy towards the Pareto front: first find a good (optimal) surrogate for the input-output dynamic, for instance by applying classical structure preserving [MOR]{.sans-serif} methods; then minimize the error of the Hamiltonian dynamic without changing the surrogate input-output dynamic. This strategy is motivated by the many effective structure preserving [MOR]{.sans-serif} methods that are already available and that we wish to exploit. The investigation of other approaches to deal with the multi-objective problem [\[eq:dual-objective-ph-mor\]](#eq:dual-objective-ph-mor){reference-type="eqref" reference="eq:dual-objective-ph-mor"} is left for future research.
## The optimal Hamiltonian surrogate We replace [\[eq:pH:sys\]](#eq:pH:sys){reference-type="eqref" reference="eq:pH:sys"} with a reduced [pH]{.sans-serif} system of the form $$\label{eq:pHred:sys} \tilde{\Sigma}_{\mathsf{pH}}\quad \left\{\quad\begin{aligned} \dot{\tilde{x}}(t) &= (\tilde{J}-\tilde{R})\tilde{Q}\tilde{x}(t) + (\tilde{G}-\tilde{P})u(t),\\ \tilde{y}(t) &= (\tilde{G}-\tilde{P})^\ensuremath\mathsf{T}\tilde{Q}\tilde{x}(t) + (\tilde{S}-\tilde{N})u(t), \end{aligned}\right.$$ with reduced dimension $r\ll n$ and reduced Hamiltonian dynamic $$\label{eq:pHred:hamiltonian} \tilde{\Sigma}_{\mathcal{H}}\quad \left\{\quad\begin{aligned} \dot{\tilde{x}}(t) &= (\tilde{J}-\tilde{R})\tilde{Q}\tilde{x}(t) + (\tilde{G}-\tilde{P})u(t),\\ \tilde{y}_{\mathcal{H}}(t) &= \tfrac{1}{2} \tilde{x}(t)^\ensuremath\mathsf{T}\tilde{Q}\tilde{x}(t). \end{aligned}\right.$$ For notational convenience, we introduce the reduced system matrices $$\begin{aligned} \label{eq:pHred:matrices} \tilde{A} &\vcentcolon= (\tilde{J}-\tilde{R})\tilde{Q}, & \tilde{B} &\vcentcolon= \tilde{G}-\tilde{P}, & \tilde{C} &\vcentcolon= (\tilde{G}-\tilde{P})^\ensuremath\mathsf{T}\tilde{Q}, & \tilde{D} &\vcentcolon= \tilde{S}-\tilde{N}\end{aligned}$$ and assume that $\tilde{A}$ is asymptotically stable. We assume for the moment that the surrogate [\[eq:pHred:sys\]](#eq:pHred:sys){reference-type="eqref" reference="eq:pHred:sys"} is already available, for instance via the system theoretic methods mentioned in the introduction. Since these methods usually aim at an approximation of the input-output mapping and not at an optimal approximation of the Hamiltonian dynamic (see ), we in general encounter a poor approximation of the Hamiltonian dynamic. However, for any given [pH]{.sans-serif}[ROM]{.sans-serif}, we can replace the Hessian of the Hamiltonian $\tilde{Q}$ by any other positive definite solution of the [KYP]{.sans-serif} inequality [\[eq:KYP\]](#eq:KYP){reference-type="eqref" reference="eq:KYP"} without changing the input-output mapping. Hence, this matrix can be treated as a decision variable, then we are interested in solving the constrained minimization problem $$\label{eq:energyErrMin} \min_{\tilde{Q}\in\mathcal{S}^{r}_{\succ}} \ \tfrac{1}{2} \|\Sigma_{\mathcal{H}}- \tilde{\Sigma}_{\mathcal{H}}\|_{\mathcal{H}_2}^2 \qquad\text{s.t.} \qquad \mathcal{W}_{\tilde{\Sigma}}(\tilde{Q}) \in \mathcal{S}^{r+m}_{\succcurlyeq},$$ where the reduced system $\tilde{\Sigma}_{\mathcal{H}}$ depends on $\tilde{Q}$; see . Note that the constraint in [\[eq:energyErrMin\]](#eq:energyErrMin){reference-type="eqref" reference="eq:energyErrMin"} ensures that the input-output mapping of the given [ROM]{.sans-serif} is preserved. 
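Since every admissible $\tilde{Q}$ in the constrained problem above must satisfy the [KYP]{.sans-serif} constraint, it is convenient to have a direct feasibility check. The following minimal Julia sketch is our own illustration; the block convention used for the [KYP]{.sans-serif} matrix is an assumption on our part, chosen to be consistent with the $2\times 2$ toy example worked out later in this section.

```julia
using LinearAlgebra

# Assumed block convention for the KYP matrix of an LTI system (A, B, C, D):
#   W(X) = [ -A'X - XA   C' - XB ;
#             C - B'X    D + D'  ]  ⪰ 0   characterizes passivity.
function kyp_matrix(A, B, C, D, X)
    W11 = -A' * X - X * A
    W12 = C' - X * B
    W22 = D + D'
    return [W11 W12; W12' W22]
end

kyp_feasible(A, B, C, D, X; tol = 1e-10) =
    eigmin(Symmetric(kyp_matrix(A, B, C, D, X))) >= -tol

# Tiny reduced model (r = m = 1), written as 1x1 matrices:
At = fill(-2.0, 1, 1); Bt = fill(6.0, 1, 1); Ct = fill(6.0, 1, 1); Dt = fill(1.0, 1, 1)

@show kyp_feasible(At, Bt, Ct, Dt, fill(1.0, 1, 1))   # true:  Qt = 1 is admissible
@show kyp_feasible(At, Bt, Ct, Dt, fill(3.0, 1, 1))   # false: Qt = 3 lies outside the feasible interval
```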
Using the discussion in , we note that the cost functional $\bar{\mathcal{J}}(\tilde{Q}) \vcentcolon= \tfrac{1}{2} \|\Sigma_{\mathcal{H}}- \tilde{\Sigma}_{\mathcal{H}}\|_{\mathcal{H}_2}^2$ can be computed as $$\label{eq:costFunctional} \bar{\mathcal{J}}(\tilde{Q}) = \tfrac{1}{2}\mathop{\mathrm{tr}}(B^\ensuremath\mathsf{T}\mathcal{O}_{\mathsf{QO}}B + \tilde{B}^\ensuremath\mathsf{T}\tilde{\mathcal{O}}_{\mathrm{QO}}\tilde{B} - 2B^\ensuremath\mathsf{T}Z\tilde{B})$$ with $\mathcal{O}_{\mathsf{QO}},\tilde{\mathcal{O}}_{\mathsf{QO}}$, and $Z$ are the unique solution of the linear matrix equations [\[eq:costFunctional:matrixEquations\]]{#eq:costFunctional:matrixEquations label="eq:costFunctional:matrixEquations"} $$\begin{aligned} \label{eq:costFunctional:matrixEquations:ctrlGramianLyapEquations} A\mathcal{P}+ \mathcal{P}A^\ensuremath\mathsf{T}+ BB^\ensuremath\mathsf{T}&= 0, & \tilde{A}\tilde{\mathcal{P}} + \tilde{\mathcal{P}} \tilde{A}^\ensuremath\mathsf{T}+ \tilde{B}\tilde{B}^\ensuremath\mathsf{T}&= 0, \\ \label{eq:costFunctional:matrixEquations:qoObsGramianLyapEquations} A^\ensuremath\mathsf{T}\mathcal{O}_{\mathsf{QO}}+ \mathcal{O}_{\mathsf{QO}}A + \tfrac{1}{4}Q\mathcal{P}Q&= 0, & \tilde{A}^\ensuremath\mathsf{T}\tilde{\mathcal{O}}_{\mathsf{QO}}+ \tilde{\mathcal{O}}_{\mathsf{QO}}\tilde{A} + \tfrac{1}{4}\tilde{Q}\tilde{\mathcal{P}} \tilde{Q}&= 0,\\ \label{eq:costFunctional:matrixEquations:sylvesterEquations} A^\ensuremath\mathsf{T}Z+ Z\tilde{A} + \tfrac{1}{4} QY\tilde{Q}&= 0, & A Y+ Y\tilde{A}^\ensuremath\mathsf{T}+ B \tilde{B}^\ensuremath\mathsf{T}&= 0.\end{aligned}$$ **Example 11**. *To illustrate the optimization problem, we discuss [\[eq:energyErrMin\]](#eq:energyErrMin){reference-type="eqref" reference="eq:energyErrMin"} with a concrete academic toy example. Suppose the [FOM]{.sans-serif} [\[eq:pH\]](#eq:pH){reference-type="eqref" reference="eq:pH"} is given by the matrices $$\begin{aligned} J &= \begin{bmatrix} 0 & 1\\ -1 & 0 \end{bmatrix}, & R &= \begin{bmatrix} 2 & 0\\ 0 & 1 \end{bmatrix}, & Q &= \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}, & A &= (J-R)Q = \begin{bmatrix} -2 & 1\\ -1 & -1 \end{bmatrix}, & G &= \begin{bmatrix} 6\\0 \end{bmatrix}, & D &= 1. \end{aligned}$$ Accordingly, the Gramians are $$\begin{aligned} \mathcal{P}&= \begin{bmatrix} 8 & -2\\ -2 & 2 \end{bmatrix}, & \mathcal{O}&= \begin{bmatrix} 8 & 2\\ 2 & 2 \end{bmatrix}, & \mathcal{O}_{\mathsf{QO}}&= \tfrac{1}{36}\begin{bmatrix} 19 & -2\\ -2 & 7 \end{bmatrix} \end{aligned}$$ and hence $\|\Sigma_{\mathcal{H}}\|_{\mathcal{H}_2}^2 = \mathop{\mathrm{tr}}(B^\ensuremath\mathsf{T}\mathcal{O}_{\mathsf{QO}}B) = 19$. For the reduced model, we make the choice $$\begin{aligned} \label{eq:exEnergyMinimization:ROM} \tilde{A} &= -2, & \tilde{B} &= 6, & \tilde{C} &= 6, & \tilde{D} &= 1, \end{aligned}$$ and we immediately see that the [KYP]{.sans-serif} inequality $$\mathcal{W}_\Sigma(\tilde{Q}) = \begin{bmatrix} 4\tilde{Q}& 6-6\tilde{Q}\\ 6-6\tilde{Q}& 2 \end{bmatrix}\succcurlyeq 0$$ is satisfied for any $\tilde{Q}\in [\tfrac{10}{9} - \tfrac{\sqrt{76}}{18},\tfrac{10}{9} + \tfrac{\sqrt{76}}{18}] = \mathbb{X}_{\Sigma}$. In particular, the [ROM]{.sans-serif} is passive and the optimization problem [\[eq:energyErrMin\]](#eq:energyErrMin){reference-type="eqref" reference="eq:energyErrMin"} is feasible. 
The Gramians for the [ROM]{.sans-serif} and the solutions of the matrix equations [\[eq:costFunctional:matrixEquations\]](#eq:costFunctional:matrixEquations){reference-type="eqref" reference="eq:costFunctional:matrixEquations"} are $$\begin{aligned} \tilde{\mathcal{P}} &= 9, & \tilde{\mathcal{O}} &= 9, & \tilde{\mathcal{O}}_{\mathsf{QO}}&= \tfrac{9}{16} \tilde{Q}^2, & Y&= \tfrac{1}{13}\begin{bmatrix} 108\\-36 \end{bmatrix}, & Z&= \tfrac{\tilde{Q}}{169}\begin{bmatrix} 90\\-9 \end{bmatrix}. \end{aligned}$$ We thus obtain $$\begin{aligned} \bar{\mathcal{J}}(\tilde{Q}) = \tfrac{1}{2}\left(19 + \tfrac{81}{4}\tilde{Q}^2 - 2\cdot\tfrac{3240}{169}\tilde{Q}\right) = \tfrac{19}{2} + \tfrac{81}{8}\tilde{Q}^2 - \tfrac{3240}{169}\tilde{Q}. \end{aligned}$$ Hence, the first-order necessary condition implies $\tilde{Q}^\star = \tfrac{160}{169} \approx 0.95$, which is an element of the feasible set and thus the optimal point. We would like to stress that the [ROM]{.sans-serif} [\[eq:exEnergyMinimization:ROM\]](#eq:exEnergyMinimization:ROM){reference-type="eqref" reference="eq:exEnergyMinimization:ROM"} is obtained via Galerkin projection onto the space spanned by the matrix $V = [1,0]^\ensuremath\mathsf{T}$, which in this particular setting preserves the [pH]{.sans-serif}-structure. Nevertheless, we have $\tilde{Q}^\star \neq V^\ensuremath\mathsf{T}QV = 1$, i.e., a standard projection framework does not automatically yield the best Hamiltonian in the [ROM]{.sans-serif}. Moreover, the optimal Hamiltonian is not an element of the solution set of the [ARE]{.sans-serif} [\[eq:passivity-riccati\]](#eq:passivity-riccati){reference-type="eqref" reference="eq:passivity-riccati"} for the [ROM]{.sans-serif}, which is $\{\tfrac{10}{9} - \tfrac{\sqrt{76}}{18},\tfrac{10}{9} + \tfrac{\sqrt{76}}{18}\}$.*

**Remark 12**. *Since the first term of $\bar{\mathcal{J}}(\tilde{Q})$ is independent of $\tilde{Q}$ (and the trace operator is linear), it can be neglected, and the cost functional simplifies to $$\begin{aligned} \label{eq:obj-fun} \mathcal{J}(\tilde{Q}) &= \tfrac{1}{2}\mathop{\mathrm{tr}}(\tilde{B}^\ensuremath\mathsf{T}\tilde{\mathcal{O}}_{\mathsf{QO}}\tilde{B} - 2B^\ensuremath\mathsf{T}Z\tilde{B}), \end{aligned}$$ where $\tilde{\mathcal{O}}_{\mathsf{QO}}$ and $Z$ depend on $\tilde{Q}$ via [\[eq:costFunctional:matrixEquations\]](#eq:costFunctional:matrixEquations){reference-type="eqref" reference="eq:costFunctional:matrixEquations"}. To determine the computational cost of repeated evaluations of the objective function, we notice that due to the Kronecker product it is sufficient to solve the (sparse) linear system $$(I_r\otimes A^\ensuremath\mathsf{T}+ \tilde{A}^\ensuremath\mathsf{T}\otimes I_n) K= (I_r \otimes QY)$$ once, after which $$\mathop{\mathrm{vec}}(Z) = -\tfrac{1}{4} K\mathop{\mathrm{vec}}(\tilde{Q}) \quad \text{and} \quad \mathop{\mathrm{vec}}(B^\ensuremath\mathsf{T}Z\tilde{B}) = -\tfrac{1}{4}(\tilde{B}^\ensuremath\mathsf{T}\otimes B^\ensuremath\mathsf{T}) K\mathop{\mathrm{vec}}(\tilde{Q}).$$ In particular, we observe that $Z$ depends linearly on $\tilde{Q}$ and, whenever we have computed $K$, the cost for evaluating $\mathcal{J}$ does not depend on the full dimension $n$.* We make the following observations. **Lemma 13**. *Assume that $\Sigma$ is asymptotically stable and $\tilde{\Sigma}$ is minimal and asymptotically stable.
Then $\mathcal{J}\colon\mathcal{S}^{r}_{\succ}\to\ensuremath\mathbb{R}$ as defined in [\[eq:obj-fun\]](#eq:obj-fun){reference-type="eqref" reference="eq:obj-fun"} is twice Frechét differentiable, strictly convex, and $$\label{eq:gradient} \nabla_{\tilde{Q}} \mathcal{J}(\tilde{Q}) = \tfrac{1}{4} \left(\tilde{\mathcal{P}} \tilde{Q}\tilde{\mathcal{P}} - Y^\ensuremath\mathsf{T}QY\right).$$* *Proof.* We first discuss the Frechét differentiability and compute the gradient using similar ideas as for the gradient calculation for the $\mathcal{H}_2$ norm for standard [LTI]{.sans-serif} systems (without quadratic outputs) as presented in [@VanGA08 Thm. 3.3]. A perturbation $\Delta_{\tilde{Q}}$ of $\tilde{Q}$ leads to a perturbation $\Delta_{\tilde{\mathcal{O}}_{\mathsf{QO}}}$ in $\tilde{\mathcal{O}}_{\mathsf{QO}}$ and $\Delta_{Z}$ in $Z$. Then, using the cyclic property of the trace, we obtain $$\begin{aligned} \mathcal{J}(\tilde{Q}+\Delta_{\tilde{Q}}) - \mathcal{J}(\tilde{Q}) &= \tfrac{1}{2}\mathop{\mathrm{tr}}(\tilde{B}^\ensuremath\mathsf{T}\Delta_{\tilde{\mathcal{O}}_{\mathsf{QO}}} \tilde{B} - 2B^\ensuremath\mathsf{T}\Delta_{Z} \tilde{B}) = \tfrac{1}{2}\mathop{\mathrm{tr}}(\tilde{B}\tilde{B}^\ensuremath\mathsf{T}\Delta_{\tilde{\mathcal{O}}_{\mathsf{QO}}} - 2\tilde{B} B^\ensuremath\mathsf{T}\Delta_{Z}), \end{aligned}$$ where $\Delta_{\tilde{\mathcal{O}}_{\mathsf{QO}}}$ and $\Delta_{Z}$ are solutions of the Lyapunov equation [\[eq:costFunctional:matrixEquations:qoObsGramianLyapEquations\]](#eq:costFunctional:matrixEquations:qoObsGramianLyapEquations){reference-type="eqref" reference="eq:costFunctional:matrixEquations:qoObsGramianLyapEquations"} and Sylvester equation [\[eq:costFunctional:matrixEquations:sylvesterEquations\]](#eq:costFunctional:matrixEquations:sylvesterEquations){reference-type="eqref" reference="eq:costFunctional:matrixEquations:sylvesterEquations"} $$\begin{aligned} \label{eq:lyapQ:perturbation} 0 &= \tilde{A}^\ensuremath\mathsf{T}\Delta_{\tilde{\mathcal{O}}_{\mathsf{QO}}} + \Delta_{\tilde{\mathcal{O}}_{\mathsf{QO}}} \tilde{A} + \tfrac{1}{4}\left(\Delta_{\tilde{Q}}\tilde{\mathcal{P}}\tilde{Q}+ \tilde{Q}\tilde{\mathcal{P}}\Delta_{\tilde{Q}} + \Delta_{\tilde{Q}}\tilde{\mathcal{P}}\Delta_{\tilde{Q}}\right),\\ \label{eq:sylvesterZ:perturbation} 0 &= A^\ensuremath\mathsf{T}\Delta_{Z} + \Delta_{Z} \tilde{A} + \tfrac{1}{4} QY\Delta_{\tilde{Q}}, \end{aligned}$$ respectively. Then, applying [@VanGA08 Lem. 
3.2] to [\[eq:gramian:lyap:ctrl\]](#eq:gramian:lyap:ctrl){reference-type="eqref" reference="eq:gramian:lyap:ctrl"} and [\[eq:lyapQ:perturbation\]](#eq:lyapQ:perturbation){reference-type="eqref" reference="eq:lyapQ:perturbation"} respectively to [\[eq:h2-error:syl2\]](#eq:h2-error:syl2){reference-type="eqref" reference="eq:h2-error:syl2"} and [\[eq:sylvesterZ:perturbation\]](#eq:sylvesterZ:perturbation){reference-type="eqref" reference="eq:sylvesterZ:perturbation"}, we obtain $$\begin{aligned} \mathop{\mathrm{tr}}\left(\tilde{B}\tilde{B}^\ensuremath\mathsf{T}\Delta_{\tilde{\mathcal{O}}_{\mathsf{QO}}}\right) &= \tfrac{1}{4}\mathop{\mathrm{tr}}\left(\left(\Delta_{\tilde{Q}}\tilde{\mathcal{P}}\tilde{Q}+ \tilde{Q}\tilde{\mathcal{P}}\Delta_{\tilde{Q}} + \Delta_{\tilde{Q}}\tilde{\mathcal{P}}\Delta_{\tilde{Q}}\right)^\ensuremath\mathsf{T}\tilde{\mathcal{P}}\right)\\ &= \tfrac{1}{2}\mathop{\mathrm{tr}}\left(\Delta_{\tilde{Q}}^\ensuremath\mathsf{T}\tilde{\mathcal{P}}\tilde{Q}\tilde{\mathcal{P}}\right) + \tfrac{1}{4}\mathop{\mathrm{tr}}\left(\Delta_{\tilde{Q}}^\ensuremath\mathsf{T}\tilde{\mathcal{P}} \Delta_{\tilde{Q}}^\ensuremath\mathsf{T}\tilde{\mathcal{P}}\right),\\ \mathop{\mathrm{tr}}\left(\tilde{B}B^\ensuremath\mathsf{T}\Delta_{Z}\right) &= \tfrac{1}{4}\mathop{\mathrm{tr}}\left(\Delta_{\tilde{Q}}^\ensuremath\mathsf{T}Y^\ensuremath\mathsf{T}QY\right). \end{aligned}$$ With these preparations we arrive at, $$\begin{aligned} \mathcal{J}(\tilde{Q}+\Delta_{\tilde{Q}}) - \mathcal{J}(\tilde{Q}) &= \tfrac{1}{4}\mathop{\mathrm{tr}}\left(\Delta_{\tilde{Q}}^\ensuremath\mathsf{T}\left(\tilde{\mathcal{P}}\tilde{Q}\tilde{\mathcal{P}} - Y^\ensuremath\mathsf{T}QY\right)\right) - \tfrac{1}{8} \mathop{\mathrm{tr}}\left(\Delta_{\tilde{Q}}^\ensuremath\mathsf{T}\tilde{\mathcal{P}} \Delta_{\tilde{Q}}^\ensuremath\mathsf{T}\tilde{\mathcal{P}}\right). \end{aligned}$$ The Cauchy-Schwarz inequality yields $$\frac{\left|\mathop{\mathrm{tr}}(\Delta_{\tilde{Q}}^\ensuremath\mathsf{T}\tilde{\mathcal{P}} \Delta_{\tilde{Q}}^\ensuremath\mathsf{T}\tilde{\mathcal{P}})\right|}{\|\Delta_{\tilde{Q}}\|_{\mathrm{F}}} \leq \|\Delta_{\tilde{Q}}\|_{\mathrm{F}}\|\tilde{\mathcal{P}}\|_{\mathrm{F}}^2$$ such that we conclude that $\mathcal{J}$ is Frechét differentiable with directional derivative $$\mathcal{D}_{\Delta_{\tilde{Q}}}\mathcal{J}(\tilde{Q}) = \tfrac{1}{4}\langle \tilde{\mathcal{P}}\tilde{Q}\tilde{\mathcal{P}} - Y^\ensuremath\mathsf{T}QY, \Delta_{\tilde{Q}}\rangle_{\mathrm{F}}$$ and the gradient is given as in [\[eq:gradient\]](#eq:gradient){reference-type="eqref" reference="eq:gradient"}. For the second derivative, we observe $$\begin{aligned} \mathcal{D}_{\Delta_{\tilde{Q}}}\mathcal{J}(\tilde{Q}-\Gamma) - \mathcal{D}_{\Delta_{\tilde{Q}}}\mathcal{J}(\tilde{Q}) = \tfrac{1}{4}\langle \tilde{\mathcal{P}}\Gamma\tilde{\mathcal{P}},\Delta_{\tilde{Q}}\rangle_{\mathrm{F}}. \end{aligned}$$ Hence, also the second derivative exists with $\mathcal{D}^2_{\Delta,\Gamma} \mathcal{J}(\tilde{Q}) = \tfrac{1}{4}\langle \Gamma \tilde{\mathcal{P}},\tilde{\mathcal{P}}\Delta\rangle_{\mathrm{F}}$. Using properties of the Kronecker product and the $\mathop{\mathrm{vec}}$ operator, we conclude $$\mathcal{D}^2_{\Delta,\Delta} \mathcal{J}(\tilde{Q})(\Delta,\Delta) = \tfrac{1}{4}\mathop{\mathrm{vec}}(\Delta)^\ensuremath\mathsf{T}(\tilde{\mathcal{P}}\otimes \tilde{\mathcal{P}})\mathop{\mathrm{vec}}(\Delta) > 0$$ for all $\Delta\neq 0$. Hence, $\mathcal{J}$ is strictly convex. ◻ **Theorem 14**. 
*In addition to the assumptions of the previous lemma, suppose that $\tilde{\Sigma}$ is passive. Then the optimization problem [\[eq:energyErrMin\]](#eq:energyErrMin){reference-type="eqref" reference="eq:energyErrMin"} is solvable and has a unique solution.* *Proof.* Since $\tilde{\Sigma}$ is minimal and passive,  [\[thm:KYP:posDef\]](#thm:KYP:posDef){reference-type="ref" reference="thm:KYP:posDef"} implies the existence of $\smash{\tilde{Q}\in\mathbb{X}_{\tilde{\Sigma}}}$. Moreover, $\mathcal{J}$ is bounded from below. Let $(\tilde{Q}_k)_{k\in\ensuremath\mathbb{N}}$ denote a sequence in $\smash{\mathbb{X}_{\tilde{\Sigma}}}$ such that $$\lim_{k\to\infty} \mathcal{J}(\tilde{Q}_k) = \inf_{X\in\mathbb{X}_{\tilde{\Sigma}}} \mathcal{J}(X).$$ Since $\smash{\mathbb{X}_{\tilde{\Sigma}}}$ is bounded, cf.  [\[thm:KYP:bounded\]](#thm:KYP:bounded){reference-type="ref" reference="thm:KYP:bounded"}, we can choose a convergent subsequence $(\tilde{Q}_{k_j})_{j\in\ensuremath\mathbb{N}}$ with limit $\tilde{Q}^\star \vcentcolon= \lim_{j\to\infty} \tilde{Q}_{k_j}$. By construction, we obtain $\smash{\mathcal{W}_{\tilde{\Sigma}}}(\tilde{Q}^\star)\in\mathcal{S}^{r+m}_{\succcurlyeq}$ such that  [\[thm:KYP:posDef\]](#thm:KYP:posDef){reference-type="ref" reference="thm:KYP:posDef"} implies $\tilde{Q}^\star\in\mathbb{X}_{\tilde{\Sigma}}$. The continuity of $\mathcal{J}$ (cf. ) thus implies that $\tilde{Q}^\star$ is a minimizer of [\[eq:energyErrMin\]](#eq:energyErrMin){reference-type="eqref" reference="eq:energyErrMin"}. The uniqueness follows from the strict convexity of $\mathcal{J}$; cf. . ◻

## A special case: Positive-real balanced truncation {#sec:specialcase-prbt}

To obtain further insights into the optimization problem [\[eq:energyErrMin\]](#eq:energyErrMin){reference-type="eqref" reference="eq:energyErrMin"}, we consider the special case that the [ROM]{.sans-serif} is obtained via [PRBT]{.sans-serif} and the Hessian of the Hamiltonian of the [FOM]{.sans-serif} is given by the minimal solution of the [KYP]{.sans-serif} inequality. In this case, the minimal solution of the [KYP]{.sans-serif} inequality of the [ROM]{.sans-serif} is given via projection of the minimal solution of the [FOM]{.sans-serif} [KYP]{.sans-serif} inequality, and hence one might get the idea that in this specific scenario, [PRBT]{.sans-serif} is optimal with respect to [\[eq:energyErrMin\]](#eq:energyErrMin){reference-type="eqref" reference="eq:energyErrMin"}. We refer to the forthcoming numerical experiments for a corresponding example. The following two toy examples, generated via the balanced parametrization for positive-real systems from [@Obe91], demonstrate that [PRBT]{.sans-serif} can be optimal in the setting described above (cf. ), but in general, there is no guarantee; see .

**Example 15**. *Consider the system described by the matrices $$\begin{aligned} A &= \begin{bmatrix} -2 & -4 \\ -4 & -9 \end{bmatrix}, & B &= \begin{bmatrix} 4 \\ 4 \end{bmatrix}, & C &= \begin{bmatrix} 4 & 4 \end{bmatrix}, & D &= 1, & Q_{\min} &= \begin{bmatrix} \tfrac{1}{2} & 0\\ 0 & \tfrac{1}{4} \end{bmatrix}, \end{aligned}$$ which is already in positive-real balanced form [@Obe91] with Gramians $Q_{\min}$. Then the [ROM]{.sans-serif} obtained by [PRBT]{.sans-serif} of order $r=1$ is given by the upper left entry, i.e., $$\label{eq:prbtToy:optimalROM} \begin{aligned} \tilde{A} &= -2, & \tilde{B} &= 4, & \tilde{C} &= 4, & \tilde{D} &= 1, & \tilde{Q}_{\min} &= \tfrac{1}{2}.
\end{aligned}$$ For any given $\tilde{Q}\in\mathbb{X}_{\tilde{\Sigma}} = [\tfrac{1}{2},2]$ we obtain $\tilde{\mathcal{P}} = 4$, $\tilde{\mathcal{O}}_{\mathsf{QO}}= \tfrac{1}{4}\tilde{Q}^2$, $Y= \left[\begin{smallmatrix} 4\\0 \end{smallmatrix}\right]$, $Z= \frac{\tilde{Q}}{14} \left[\begin{smallmatrix} 11/4\\-1 \end{smallmatrix}\right]$ such that the simplified cost functional [\[eq:obj-fun\]](#eq:obj-fun){reference-type="eqref" reference="eq:obj-fun"} is given by $$\begin{aligned} \mathcal{J}(\tilde{Q}) &= 2 \tilde{Q}^2 - 2 \tilde{Q}, & \nabla\mathcal{J}(\tilde{Q}) &= 4 \tilde{Q}- 2, \end{aligned}$$ which is minimized for $\tilde{Q}^\star = \frac{1}{2} = \tilde{Q}_{\min}$, i.e., the [PRBT]{.sans-serif}[ROM]{.sans-serif} [\[eq:prbtToy:optimalROM\]](#eq:prbtToy:optimalROM){reference-type="eqref" reference="eq:prbtToy:optimalROM"} is optimal.* **Example 16**. *Consider the positive-real balanced system $$\begin{aligned} A &= \begin{bmatrix} -1 & -\frac{9}{2} \\ -\frac{9}{2} & -27 \end{bmatrix}, & B &= \begin{bmatrix} 4 \\ 4 \end{bmatrix}, & C &= \begin{bmatrix} 4 & 4 \end{bmatrix}, & D &= \frac{1}{3}, & Q_{\min} &= \begin{bmatrix} \tfrac{3}{4} & 0\\ 0 & \tfrac{1}{4} \end{bmatrix}, \end{aligned}$$ with diagonal Gramian $Q_{\min}$. The one-dimensional [PRBT]{.sans-serif}[ROM]{.sans-serif} is given by $$\begin{aligned} \tilde{A} &= -1, & \tilde{B} &= 4, & \tilde{C} &= 4, & \tilde{D} &= \tfrac{1}{3}, & \tilde{Q}_{\min} &= \tfrac{3}{4}. \end{aligned}$$ For any given $\tilde{Q}\in\mathbb{X}_{\tilde{\Sigma}} = [\tfrac{3}{4},\tfrac{4}{3}]$ we obtain $\tilde{\mathcal{P}} = 8$, $\tilde{\mathcal{O}}_{\mathsf{QO}}= \tilde{Q}^2$, $Y= -\tfrac{16}{143} \left[\begin{smallmatrix} -94\\\phantom{-}10 \end{smallmatrix}\right]$, $Z= \frac{\tilde{Q}}{143^2} \left[\begin{smallmatrix} 31764\\-5156 \end{smallmatrix}\right]$ such that the simplified cost functional [\[eq:obj-fun\]](#eq:obj-fun){reference-type="eqref" reference="eq:obj-fun"} is given by $$\begin{aligned} \mathcal{J}(\tilde{Q}) &= 8 \tilde{Q}^2 - \tfrac{425728}{143^2}\tilde{Q}, & \nabla\mathcal{J}(\tilde{Q}) &= 16 \tilde{Q}- \tfrac{425728}{143^2}. \end{aligned}$$ We deduce $\tilde{Q}_{\min}<\tilde{Q}^\star = \tfrac{26608}{143^2} \in \mathbb{X}_{\tilde{\Sigma}}$ and thus conclude that the reduced Hamiltonian $\tilde{Q}_{\min}$ obtained via [PRBT]{.sans-serif} is not optimal.* ## Equivalent semi-definite program {#subsec:sdp} In this section we show that the energy matching optimization problem [\[eq:energyErrMin\]](#eq:energyErrMin){reference-type="eqref" reference="eq:energyErrMin"} is equivalent to a standard *semi-definite program* ([SDP]{.sans-serif}). This is a consequence of the following observation. **Lemma 17**. *Let $A$ be Hurwitz, $\mathcal{P}\in \mathcal{S}^{n}_{\succ}$, and $\mathcal{O}_{\mathsf{QO}}\in\mathcal{S}^{n}_{\succcurlyeq}$ be the controllability and quadratic output observability Gramian of the [LTIQO]{.sans-serif} system [\[eq:LTIQO\]](#eq:LTIQO){reference-type="eqref" reference="eq:LTIQO"}, respectively. Then, for any $\gamma\geq 0$, the following statements are equivalent:* 1. *[\[it:equivalent:a\]]{#it:equivalent:a label="it:equivalent:a"} $\|\Sigma_{\mathsf{QO}}\|_{\mathcal{H}_2}^2 = \mathop{\mathrm{tr}}(B^\ensuremath\mathsf{T}\mathcal{O}_{\mathsf{QO}}B) \leq \gamma$.* 2. 
*[\[it:equivalent:b\]]{#it:equivalent:b label="it:equivalent:b"} There exists $\check{\mathcal{O}}_{\mathsf{QO}}\in\mathcal{S}^{n}_{\succcurlyeq}$ satisfying $A^\ensuremath\mathsf{T}\check{\mathcal{O}}_{\mathsf{QO}}+ \check{\mathcal{O}}_{\mathsf{QO}}A + M \mathcal{P}M \preccurlyeq 0$ with $\mathop{\mathrm{tr}}(B^\ensuremath\mathsf{T}\check{\mathcal{O}}_{\mathsf{QO}}B) \leq \gamma$.* 3. *[\[it:equivalent:c\]]{#it:equivalent:c label="it:equivalent:c"} There exists $\check{\mathcal{O}}_{\mathsf{QO}}\in\mathcal{S}^{n}_{\succcurlyeq}$ satisfying $\begin{bmatrix} A^\ensuremath\mathsf{T}\check{\mathcal{O}}_{\mathsf{QO}}+ \check{\mathcal{O}}_{\mathsf{QO}}A & M\\ M & -\mathcal{P}^{-1} \end{bmatrix} \preccurlyeq 0$ with $\mathop{\mathrm{tr}}(B^\ensuremath\mathsf{T}\check{\mathcal{O}}_{\mathsf{QO}}B) \leq \gamma$.* *Proof.* The equivalence of [\[it:equivalent:a\]](#it:equivalent:a){reference-type="ref" reference="it:equivalent:a"} and [\[it:equivalent:b\]](#it:equivalent:b){reference-type="ref" reference="it:equivalent:b"} follows immediately from the observation that any $\check{\mathcal{O}}_{\mathsf{QO}}$ with $A^\ensuremath\mathsf{T}\check{\mathcal{O}}_{\mathsf{QO}}+ \check{\mathcal{O}}_{\mathsf{QO}}A + M \mathcal{P}M \preccurlyeq 0$ satisfies $\mathcal{O}_{\mathsf{QO}}\preccurlyeq \check{\mathcal{O}}_{\mathsf{QO}}$. The equivalence of [\[it:equivalent:b\]](#it:equivalent:b){reference-type="ref" reference="it:equivalent:b"} and [\[it:equivalent:c\]](#it:equivalent:c){reference-type="ref" reference="it:equivalent:c"} is an immediate consequence of the Schur complement. ◻ The preceding lemma, in combination with the fact that $Z$ depends linearly on $\tilde{Q}$ (cf. ), allows us to reformulate the optimization problem [\[eq:energyErrMin\]](#eq:energyErrMin){reference-type="eqref" reference="eq:energyErrMin"} as the equivalent standard [SDP]{.sans-serif} (using $\tilde{M}=\tfrac{1}{2}\tilde{Q}$) [\[eq:energymatching:sdp\]]{#eq:energymatching:sdp label="eq:energymatching:sdp"} $$\min_{\tilde{Q}=\tilde{Q}^\ensuremath\mathsf{T}, \check{\mathcal{O}}_{\mathsf{QO}}=\check{\mathcal{O}}_{\mathsf{QO}}^\ensuremath\mathsf{T}} \mathop{\mathrm{tr}}(\tilde{B}^\ensuremath\mathsf{T}\check{\mathcal{O}}_{\mathsf{QO}}\tilde{B} - 2B^\ensuremath\mathsf{T}Z(\tilde{Q}) \tilde{B})$$ subject to $$\begin{aligned} \begin{bmatrix} \tilde{A}^\ensuremath\mathsf{T}\check{\mathcal{O}}_{\mathsf{QO}}+ \check{\mathcal{O}}_{\mathsf{QO}}\tilde{A} & \frac{1}{2} \tilde{Q}\\ \frac{1}{2} \tilde{Q}& -\tilde{\mathcal{P}}^{-1} \end{bmatrix} \preccurlyeq 0 & \quad \text{and} \quad & \begin{bmatrix} \tilde{A}^\ensuremath\mathsf{T}\tilde{Q}+ \tilde{Q}\tilde{A} & \tilde{Q}\tilde{B} - \tilde{C}^\ensuremath\mathsf{T}\\ \tilde{B}^\ensuremath\mathsf{T}\tilde{Q}- \tilde{C} & -\tilde{D} - \tilde{D}^\ensuremath\mathsf{T} \end{bmatrix} \preccurlyeq 0 \end{aligned}.$$

## Numerical approach

To solve the energy matching problem [\[eq:energyErrMin\]](#eq:energyErrMin){reference-type="eqref" reference="eq:energyErrMin"} numerically, we propose two different strategies. Our first strategy exploits the fact that the energy matching problem can be recast as the standard [SDP]{.sans-serif} problem [\[eq:energymatching:sdp\]](#eq:energymatching:sdp){reference-type="eqref" reference="eq:energymatching:sdp"}, such that we can apply state-of-the-art [SDP]{.sans-serif} solvers. Our second strategy is to directly apply an interior-point approach for [\[eq:energyErrMin\]](#eq:energyErrMin){reference-type="eqref" reference="eq:energyErrMin"} using a barrier function and a quasi-Newton method.
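Before turning to the details of either strategy, it is useful to have a direct, if naive, way of evaluating the simplified cost functional and the gradient formula derived above. The following Julia sketch is our own illustration (written for small dense matrices only; it solves the Lyapunov and Sylvester equations by brute-force vectorization rather than with a dedicated solver) and reproduces the optimizer $\tilde{Q}^\star = \tfrac{160}{169}$ of the academic toy example from Example 11.

```julia
using LinearAlgebra

# Solve A*X + X*B + C = 0 by vectorization (small dense problems only).
function sylvester_vec(A, B, C)
    n, m = size(C)
    K = kron(Matrix(1.0I, m, m), A) + kron(transpose(B), Matrix(1.0I, n, n))
    return reshape(-(K \ vec(C)), n, m)
end

# Full-order data of the academic toy example (Example 11).
A = [-2.0 1.0; -1.0 -1.0];  B = reshape([6.0, 0.0], 2, 1);  Q = Matrix(1.0I, 2, 2)
# Reduced-order data obtained by Galerkin projection onto span{e1}.
At = fill(-2.0, 1, 1);  Bt = fill(6.0, 1, 1)

P  = sylvester_vec(A, A', B * B')          # controllability Gramian of the FOM
Pt = sylvester_vec(At, At', Bt * Bt')      # controllability Gramian of the ROM
Y  = sylvester_vec(A, At', B * Bt')        # Sylvester solution coupling FOM and ROM

# Simplified cost (constant FOM term dropped) and its gradient in Qt.
function cost_and_gradient(Qt)
    Ot = sylvester_vec(At', At, 0.25 * Qt * Pt * Qt)   # quadratic-output Gramian of the ROM
    Z  = sylvester_vec(A', At, 0.25 * Q * Y * Qt)      # mixed Sylvester solution
    J  = 0.5 * tr(Bt' * Ot * Bt - 2 * B' * Z * Bt)
    g  = 0.25 * (Pt * Qt * Pt - Y' * Q * Y)
    return J, g
end

# Stationary point of the strictly convex cost: solve Pt*Qt*Pt = Y'*Q*Y.
Qt_star = Pt \ ((Y' * Q * Y) / Pt)
@show Qt_star                          # ≈ 160/169 ≈ 0.9467, as in Example 11
@show cost_and_gradient(Qt_star)[2]    # gradient ≈ 0 at the optimizer
```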
In more detail, we define the barrier function $$\label{eq:barrierFunction} \psi\colon\ensuremath\mathbb{R}^{r\times r}\to \overline{\ensuremath\mathbb{R}},\quad \tilde{Q}\mapsto \begin{cases} -\ln \det \left(\mathcal{W}_{\tilde{\Sigma}_{\mathrm{pH}}}(\tilde{Q})\right), & \text{if } \mathcal{W}_{\tilde{\Sigma}_{\mathrm{pH}}}(\tilde{Q})\in\mathcal{S}^{r+m}_{\succ},\\ \infty, & \text{otherwise}, \end{cases}$$ and consider for $\alpha>0$ the parametrized cost function $\mathcal{J}_{\alpha,\psi}(\tilde{Q}) \vcentcolon= \mathcal{J}(\tilde{Q}) + \alpha\psi(\tilde{Q})$ and corresponding optimization problem $$\label{eq:energyErrMinWithBarrier} \min_{\tilde{Q}\in\mathcal{S}^{r}_{\succcurlyeq}} \mathcal{J}_{\alpha,\psi}(\tilde{Q}).$$ Note that the barrier function [\[eq:barrierFunction\]](#eq:barrierFunction){reference-type="eqref" reference="eq:barrierFunction"} requires [\[eq:pH:rom\]](#eq:pH:rom){reference-type="eqref" reference="eq:pH:rom"} to be strictly passive. If this is not the case, then a perturbation of the feedthrough term is required (see the forthcoming ). **Proposition 18**. *Assume that the [ROM]{.sans-serif} [\[eq:pHred:sys\]](#eq:pHred:sys){reference-type="eqref" reference="eq:pHred:sys"} is passive and let $X\in\mathbb{X}_{\tilde{\Sigma}}$ with $\det(\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X))>0$. Then the gradient of the barrier function is given by $$\begin{aligned} \nabla_X \ln\left(\det\left(\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X)\right)\right) = \begin{bmatrix} -\tilde{A} & -\tilde{B} \end{bmatrix} {\left(\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X)\right)}^{-1} \begin{bmatrix} I \\ 0 \end{bmatrix} + \begin{bmatrix} I & 0 \end{bmatrix} {\left(\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X)\right)}^{-1} \begin{bmatrix} -\tilde{A}^\ensuremath\mathsf{T}\\ -\tilde{B}^\ensuremath\mathsf{T} \end{bmatrix}. \end{aligned}$$* *Proof.* Using the chain rule, we obtain $$\begin{aligned} \nabla_X \ln\left(\det\left(\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X)\right)\right) &= \frac{1}{\det\left(\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X)\right)} \det\left(\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X)\right)\mathop{\mathrm{tr}}\left({\left(\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X)\right)}^{-1} \nabla_X \left(\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X)\right)\right). 
\end{aligned}$$ The directional derivative of $\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X)$ is given by $$\begin{aligned} \mathcal{D}_{\Delta_X} \mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X)&= \begin{bmatrix} -\tilde{A}^\ensuremath\mathsf{T}\Delta_X - \Delta_X \tilde{A} & -\Delta_X \tilde{B}\\ -\tilde{B}^\ensuremath\mathsf{T}\Delta_X & 0 \end{bmatrix} = \begin{bmatrix} -\tilde{A}^\ensuremath\mathsf{T}\\ -\tilde{B}^\ensuremath\mathsf{T} \end{bmatrix} \Delta_X \begin{bmatrix} I & 0 \end{bmatrix} + \begin{bmatrix} I \\ 0 \end{bmatrix} \Delta_X \begin{bmatrix} -\tilde{A} & -\tilde{B} \end{bmatrix} \end{aligned}$$ resulting in $$\begin{gathered} \mathcal{D}_{\Delta_X} \ln\left(\det\left(\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X)\right)\right)\\ =\mathop{\mathrm{tr}}\left(\begin{bmatrix} I & 0 \end{bmatrix} {\left(\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X)\right)}^{-1} \begin{bmatrix} -\tilde{A}^\ensuremath\mathsf{T}\\ -\tilde{B}^\ensuremath\mathsf{T} \end{bmatrix} \Delta_X \right) + \mathop{\mathrm{tr}}\left( \begin{bmatrix} -\tilde{A} & -\tilde{B} \end{bmatrix} {\left(\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X)\right)}^{-1} \begin{bmatrix} I \\ 0 \end{bmatrix} \Delta_X \right).\qedhere \end{gathered}$$ ◻ The idea of our energy matching algorithm is now straightforward. For a decreasing sequence of $\alpha_k$, we minimize [\[eq:energyErrMinWithBarrier\]](#eq:energyErrMinWithBarrier){reference-type="eqref" reference="eq:energyErrMinWithBarrier"} with a gradient-based optimization method such as a quasi-Newton method. Since the surrogate model is assumed to be stable, any solution of the [KYP]{.sans-serif} inequality is positive definite. Hence, the barrier function automatically ensures that $\tilde{Q}$ is symmetric positive definite whenever $\mathcal{J}_{\alpha,\psi}(\tilde{Q})$ is finite. In our numerical implementation, we reduce the degrees of freedom by explicitly forcing $\tilde{Q}$ to be symmetric via the *half vectorization* operator $\mathop{\mathrm{vech}}\colon\ensuremath\mathbb{R}^{r\times r}\to\ensuremath\mathbb{R}^{r(r+1)/2}$; see for instance [@MagN19]. In this way, we can represent an $r\times r$ symmetric matrix as a vector of length $r(r+1)/2$ and vice versa. Straightforward calculations show that $$\nabla_z \mathcal{J}_{\alpha,\psi}(\mathop{\mathrm{vech}}^{-1}(z)) = \mathop{\mathrm{vech}}\left( 2\nabla_{ \mathop{\mathrm{vech}}^{-1}(z) } \mathcal{J}_{\alpha,\psi}(\mathop{\mathrm{vech}}^{-1}(z)) - \mathop{\mathrm{diag}}( \nabla_{\mathop{\mathrm{vech}}^{-1}(z)} \mathcal{J}_{\alpha,\psi}(\mathop{\mathrm{vech}}^{-1}(z))) \right),$$ such that we can compute the gradient using and . The resulting algorithm, starting from $z_0 \vcentcolon= \mathop{\mathrm{vech}}(\tilde{Q}_0)$, is described in .

# Numerical experiments {#sec:num-exp}

In the following sections, we illustrate the effectiveness of our energy-matching algorithm on a [pH]{.sans-serif} mass-spring-damper model and a [pH]{.sans-serif} poroelasticity model. For this, we compare the structure-preserving [MOR]{.sans-serif} algorithm [pH-IRKA]{.sans-serif} and the passivity-preserving [MOR]{.sans-serif} algorithm [PRBT]{.sans-serif} with our method, *energy-matched* [PRBT]{.sans-serif} ([EM-PRBT]{.sans-serif}), where we use the [ROMs]{.sans-serif} obtained by [PRBT]{.sans-serif} as initialization.
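For orientation, the following minimal sketch is our own illustration of the barrier scheme described at the end of the previous section (it is not a reproduction of the original algorithm listing, and it assumes that the helpers `cost_and_gradient` and `kyp_matrix` from the earlier sketches are in scope): for a decreasing sequence of $\alpha$, the regularized cost is minimized with the BFGS implementation of `Optim.jl`, parametrizing the symmetric matrix $\tilde{Q}$ by its half-vectorization.

```julia
using LinearAlgebra, Optim, LineSearches

# Half-vectorization of a symmetric r x r matrix and its inverse.
vech(S) = [S[i, j] for j in 1:size(S, 1) for i in j:size(S, 1)]
function unvech(z, r)
    S = zeros(r, r); k = 0
    for j in 1:r, i in j:r
        k += 1; S[i, j] = z[k]; S[j, i] = z[k]
    end
    return S
end

# Log-det barrier for the KYP constraint of the reduced model (Inf outside the cone).
function barrier(Qt, At, Bt, Ct, Dt)
    W = Symmetric(kyp_matrix(At, Bt, Ct, Dt, Qt))
    F = cholesky(W; check = false)
    return issuccess(F) ? -2 * sum(log, diag(F.U)) : Inf
end

# Continuation sweep: decrease α and re-minimize, warm-starting from the previous iterate.
function energy_match(Qt0, At, Bt, Ct, Dt; alphas = [1.0, 0.1, 0.01, 0.001])
    r = size(Qt0, 1)
    z = vech(Qt0)            # Qt0 must be strictly feasible (barrier finite)
    for α in alphas
        obj = z -> cost_and_gradient(unvech(z, r))[1] +
                   α * barrier(unvech(z, r), At, Bt, Ct, Dt)
        z = Optim.minimizer(optimize(obj, z, BFGS(linesearch = BackTracking())))
    end
    return unvech(z, r)
end

# Toy usage with the 1x1 reduced model from the sketches above; the result
# approaches 160/169 as α decreases.
# energy_match(fill(1.0, 1, 1), At, Bt, fill(6.0, 1, 1), fill(1.0, 1, 1))
```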
Regarding the implementation details of the methods, the following remarks are in order: As in [@BreU22], we use the minimal solution of the [KYP]{.sans-serif} inequality [\[eq:KYP\]](#eq:KYP){reference-type="eqref" reference="eq:KYP"} as the Hamiltonian to obtain a [pH]{.sans-serif} representation for the [ROMs]{.sans-serif} from [PRBT]{.sans-serif}. For the computation of the extremal solutions of the [KYP]{.sans-serif} inequality [\[eq:KYP\]](#eq:KYP){reference-type="eqref" reference="eq:KYP"}, i.e., the stabilizing and anti-stabilizing solution of the [ARE]{.sans-serif} [\[eq:passivity-riccati\]](#eq:passivity-riccati){reference-type="eqref" reference="eq:passivity-riccati"}, we added an artificial feedthrough term $D= \num{1e-6} I_{m}$ and then used the built-in MATLAB function `icare` to obtain the solutions. The computation of the $\mathcal{H}_2$-norm for standard [LTI]{.sans-serif} systems is done via the Julia package `ControlSystems.jl`[^3]. For the implementation details for [pH-IRKA]{.sans-serif} and [PRBT]{.sans-serif}, we refer to [@BreU22]. The [SDP]{.sans-serif} solvers for [\[eq:energymatching:sdp\]](#eq:energymatching:sdp){reference-type="eqref" reference="eq:energymatching:sdp"} are applied within the JuMP[^4] framework. For the minimization of [\[eq:energyErrMinWithBarrier\]](#eq:energyErrMinWithBarrier){reference-type="eqref" reference="eq:energyErrMinWithBarrier"} we use the BFGS implementation from `Optim.jl` [@MogR18]. To initialize , we pick $\tilde{Q}_0$ as the optimal solution of the optimization problem [\[eq:energyErrMin\]](#eq:energyErrMin){reference-type="eqref" reference="eq:energyErrMin"}, where we replace the [KYP]{.sans-serif} inequality with the [ARE]{.sans-serif} [\[eq:passivity-riccati\]](#eq:passivity-riccati){reference-type="eqref" reference="eq:passivity-riccati"}. Note that the resulting [KYP]{.sans-serif} matrix $\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(\tilde{Q}_0)$ is rank deficient by construction and is hence perturbed to render it positive definite.

## Mass-spring-damper system

Our first experiment considers a [pH]{.sans-serif} mass-spring-damper system with $n=100$ degrees of freedom and an input/output dimension $m=2$. The system was introduced in [@GugPBV12] and is described in detail in the [pH]{.sans-serif} benchmark systems collection[^5]. Comparing the $\mathcal{H}_2$ errors of the structure-preserving [MOR]{.sans-serif} algorithms in , it can be observed that [pH-IRKA]{.sans-serif} leads to an approximation error that is, in general, a few orders of magnitude worse compared to [PRBT]{.sans-serif} (as already observed in [@SchV20; @BreU22]). However, [pH-IRKA]{.sans-serif} yields better approximations of the Hamiltonian dynamic than [PRBT]{.sans-serif}. Using either or the solution of the [SDP]{.sans-serif} solver (which gives approximately the same result in this example), we can significantly improve the error of the Hamiltonian dynamic of [PRBT]{.sans-serif} (see [EM-PRBT]{.sans-serif} in ). In fact, after the optimization, the $\mathcal{H}_2$-error of the Hamiltonian dynamic is similar to or even better than the one obtained when using [pH-IRKA]{.sans-serif}, whereas the input-output dynamic matches [PRBT]{.sans-serif}. The comparison also shows that it is not sufficient to search only for the best rank-minimizing solution of the [KYP]{.sans-serif} inequality, i.e., the solutions of the [ARE]{.sans-serif} [\[eq:passivity-riccati\]](#eq:passivity-riccati){reference-type="eqref" reference="eq:passivity-riccati"}.
These [ROMs]{.sans-serif} are denoted with $\textsf{PRBT}(X^\star)$ and only slightly improve the Hamiltonian dynamic $\mathcal{H}_2$-error of [PRBT]{.sans-serif}.

*(Figure: (a) input-output dynamic $\mathcal{H}_2$-error; (b) Hamiltonian dynamic $\mathcal{H}_2$-error.)*

In , we show the error trajectories $\|y(t)-\tilde{y}(t)\|_2$ and $|y_\mathcal{H}(t)-\tilde{y}_\mathcal{H}(t)|$ of the [ROMs]{.sans-serif} with reduced order $r=20$ for the input signal $u(t) = \smash{[\sin(t),\cos(t)]^\ensuremath\mathsf{T}}$ for times $t>50$ at which the system response has approximately settled. These trajectories are in line with our observations from . Note that the output error trajectory for [pH-IRKA]{.sans-serif} is worse than the errors of [PRBT]{.sans-serif} and [EM-PRBT]{.sans-serif}. As expected, the output errors of [PRBT]{.sans-serif} are identical before and after optimization. In contrast, the Hamiltonian error is worst for [PRBT]{.sans-serif} (before energy matching) but even better than the error of [pH-IRKA]{.sans-serif} after applying our method.

*(Figure: (a) output error; (b) Hamiltonian error.)*

## Mass-spring-damper with $X_{\mathrm{min}}$ as Hamiltonian {#subsec:MSDxMINHam}

For our numerical experiment, we investigate the findings of , i.e., we analyze the situation when the Hessian of the Hamiltonian is given by the minimal solution of the [KYP]{.sans-serif} inequality (which corresponds to the optimal choice for [pH-IRKA]{.sans-serif} [@BreU22]). In particular, we consider the mass-spring-damper system from the previous subsection and modify the Hamiltonian of the [FOM]{.sans-serif} to the minimal solution of the [KYP]{.sans-serif} inequality [\[eq:KYP\]](#eq:KYP){reference-type="eqref" reference="eq:KYP"} and transform the other matrices accordingly, see . The $\mathcal{H}_2$-error of [PRBT]{.sans-serif} before and after optimization is presented in . We conclude that for this example, [PRBT]{.sans-serif} already provides a close-to-optimal approximation of the Hamiltonian since the error is almost identical before and after the optimization.

| $r$ | 4 | 8 | 12 | 16 | 20 |
|---|---|---|---|---|---|
| $\textsf{PRBT}$ | $\num{4.11e-01}$ | $\num{1.02e-02}$ | $\num{4.24e-04}$ | $\num{1.55e-04}$ | $\num{1.52e-04}$ |
| $\textsf{EM-PRBT}$ | $\num{4.11e-01}$ | $\num{1.02e-02}$ | $\num{4.20e-04}$ | $\num{1.45e-04}$ | $\num{1.42e-04}$ |

: Hamiltonian dynamic $\mathcal{H}_2$-errors of [PRBT]{.sans-serif} and [EM-PRBT]{.sans-serif} for the mass-spring-damper example with $X_{\mathrm{min}}$ as Hamiltonian.

## Linear poroelasticity

In our third example, we apply our proposed method to Biot's consolidation model for poroelasticity. A general [pH]{.sans-serif} formulation was derived in [@AltMU21], and the system is also part of the [pH]{.sans-serif} benchmark collection. In , we can observe that [pH-IRKA]{.sans-serif} leads to the better input-output dynamic $\mathcal{H}_2$-error and also to the best Hamiltonian dynamic $\mathcal{H}_2$-error at the same time.
In fact, [PRBT]{.sans-serif} computes [ROMs]{.sans-serif} with a Hamiltonian dynamic $\mathcal{H}_2$-error that is between one and two orders of magnitude worse than [pH-IRKA]{.sans-serif}. In this example, we compare with the state-of-the-art [SDP]{.sans-serif} solvers `COSMO`[^6], `MOSEK`[^7], and `SeDuMi`[^8], denoted with `EM-PRBT-COSMO`, `EM-PRBT-MOSEK`, and `EM-PRBT-SeDuMi`, respectively. We observe that the barrier method provides the best results among the methods, especially for larger reduced orders.

*(Figure: (a) input-output dynamic $\mathcal{H}_2$-error; (b) Hamiltonian dynamic $\mathcal{H}_2$-error.)*

# Conclusions {#sec:conclusion}

We introduced the view of [pH]{.sans-serif} systems as a dual-dynamical system: the well-known input-output dynamic and additionally the Hamiltonian dynamic (cf. ). We studied how this view affects observability and have derived a corresponding structure-preserving Kalman-like decomposition in . Consequently, the optimal approximation of a [pH]{.sans-serif} system can be considered a multi-objective minimization problem: one objective for the approximation quality of the input-output dynamic and one for the approximation quality of the Hamiltonian dynamic. Using the observation that the [KYP]{.sans-serif} inequality determines all possible Hamiltonians, we proposed a [MOR]{.sans-serif} post-processing method called energy-matching: Given a structure-preserving [ROM]{.sans-serif} for the input-output dynamic, solely consider the optimization problem [\[eq:energyErrMin\]](#eq:energyErrMin){reference-type="eqref" reference="eq:energyErrMin"} for finding the best Hamiltonian. We showed that this optimization problem is uniquely solvable and convex --- see --- and can be recast as a standard semi-definite program (cf. ). We presented two numerical approaches to solve this problem and demonstrated their feasibility on three academic examples. Future work will be the further analysis of the multi-objective optimization problem [\[eq:dual-objective-ph-mor\]](#eq:dual-objective-ph-mor){reference-type="eqref" reference="eq:dual-objective-ph-mor"}.

## Acknowledgments {#acknowledgments .unnumbered}

We thank Prof. Carsten Scherer (U Stuttgart) for valuable comments regarding the [SDP]{.sans-serif} formulation of the energy matching problem. P. Schwerdtner acknowledges funding from the DFG within the project 424221635. T. Holicki, J. Nicodemus and B. Unger acknowledge funding from the DFG under Germany's Excellence Strategy -- EXC 2075 -- 390740016 and are thankful for support by the Stuttgart Center for Simulation Science (SimTech).

[^1]: e.g. via the MATLAB command `icare`

[^2]: *One can take for instance a nonzero, constant control input and explicitly compute the Hamiltonian dynamic $\Sigma_{\mathcal{H}}$.*

[^3]: <https://github.com/JuliaControl/ControlSystems.jl>

[^4]: <https://jump.dev>

[^5]: <https://port-hamiltonian.io>

[^6]: <https://github.com/oxfordcontrol/COSMO.jl>

[^7]: <https://www.mosek.com>

[^8]: <https://sedumi.ie.lehigh.edu>
---
abstract: |
  We prove the absolute winning property of weighted simultaneous inhomogeneous badly approximable vectors on non-degenerate analytic curves. This answers a question by Beresnevich, Nesharim, and Yang. In particular, our result is an inhomogeneous version of the main result in [@BNY22] by Beresnevich, Nesharim, and Yang. Also, the generality of the inhomogeneous part that we consider extends the previous result in [@ABV]. Moreover, our results also contribute to the classical setting, namely establishing the inhomogeneous Schmidt conjecture in arbitrary dimensions.
address:
- Department of Mathematics, Uppsala University, Sweden
- Department of Mathematics, University of Michigan, Ann Arbor, USA
author:
- Shreyasi Datta
- Liyang Shao
bibliography:
- inhomo_curve.bib
title: Winning of Inhomogeneous bad for curves
---

# Introduction

In this paper, we study inhomogeneous weighted badly approximable vectors for analytic curves in $\mathbb{R}^n.$ Let $\mathbf{r}=(r_1,\cdots,r_n)\in\mathbb{R}^n$ be such that $r_i\geq 0$ and $\sum_{i=1}^n r_i=1.$ Such $\mathbf{r}$ is called a *weight* in $\mathbb{R}^n$. Let $\vert \cdot\vert_{\mathbb{Z}}$ denote the distance from the nearest integer. Given a weight $\mathbf{r}$ and $\mathbf{\boldsymbol\theta}=(\theta_i)_{i=1}^n$, where the $\theta_i:\mathbb{R}^n\to \mathbb{R}$ are maps, we define the set of simultaneous inhomogeneous bad vectors as follows, $$\label{def: simul_inho} \mathbf{Bad}_{\mathbf{\boldsymbol\theta}}(\mathbf{r}):=\{\mathbf{x}\in\mathbb{R}^n:\liminf_{q\in\mathbb{Z}\setminus\{0\}, \vert q\vert \to\infty}\max_{1\leq i\leq n}\vert qx_i-\theta_i(\mathbf{x})\vert_{\mathbb{Z}}^{1/r_i} \vert q\vert>0 \}.$$ Here we define $\vert x\vert^{1/0}:=0,$ for any $x\in (0,1).$ For $\mathbf{\boldsymbol\theta}(\mathbf{x})=\mathbf{0} ~\forall~ \mathbf{x}\in\mathbb{R}^n$, we denote $\mathbf{Bad}(\mathbf{r}):=\mathbf{Bad}_{\mathbf{0}}(\mathbf{r}),$ and call it the homogeneous bad set. See Remark [\[remk on defn\]](#remk on defn){reference-type="ref" reference="remk on defn"} on the definition. The sets defined in [\[def: simul_inho\]](#def: simul_inho){reference-type="ref" reference="def: simul_inho"} possess a rich structure, which we will discuss shortly, and which makes them interesting objects in Diophantine approximation. Another reason these sets are interesting to a broader class of mathematicians is their connection with homogeneous dynamics; see [@Da]. Jarník [@Jarnik] showed that the Hausdorff dimension of the bad set $\mathbf{Bad}(1)$ in $\mathbb{R}$ is $1$. In higher dimensions, when $\mathbf{r}=(1/n,\cdots,1/n)$, Schmidt showed that $\mathbf{Bad}(1/n,\cdots,1/n)$ has full Hausdorff dimension, as a consequence of a stronger property, namely winning with respect to a game that he introduced in [@Schmidt]. In [@Sc], Schmidt conjectured that $$\mathbf{Bad}(1/3,2/3)\cap \mathbf{Bad}(2/3,1/3)\neq \emptyset.$$ This conjecture was open for nearly 30 years, and attracted much attention due to its connection with Littlewood's conjecture, another long-standing open problem, and with homogeneous dynamics. In 2011 Schmidt's conjecture was settled affirmatively in [@BPV2011], in a stronger form. Moreover, the question of the winning property of $\mathbf{Bad}(\mathbf{r})$ in the sense of Schmidt's game was first posed by Kleinbock in [@Kleinbock_Duke_98]. For dimension $2$, this question was addressed in [@An], and in full generality for arbitrary dimensions, a stronger statement was recently proved in [@BNY21].
In the fifties and sixties, Cassels, Davenport, and Schmidt first studied badly approximable vectors on curves. Since then, understanding the behavior of the set of (weighted) badly approximable vectors inside manifolds and fractals has developed a very rich history and still contains many open problems to this date. As mentioned before, in a breakthrough [@BPV2011], Schmidt's conjecture was proved; the heart of the proof requires showing that for any badly approximable number $\alpha$, the Hausdorff dimension of $L_{\alpha}\cap \mathbf{Bad}(i,j)$ is $1$, where $L_{\alpha}$ is the line parallel to the $y$-axis passing through $(\alpha,0).$ A higher-dimensional Schmidt conjecture was established in [@Beresnevich2015; @Lei2019]. The papers [@BV11; @Beresnevich2015; @Lei2019] also answer Davenport's problem by showing that $\mathbf{Bad}(\mathbf{r})$ has full Hausdorff dimension inside *nondegenerate* manifolds in $\mathbb{R}^n.$ Besides the works already mentioned, there is a series of works from the past few decades that establish Schmidt winning in different contexts; some of them are [@AGGL; @GY; @LDM; @KL; @BMS2017; @LW; @JLD; @KW13; @KW10; @BFKRW; @BFK; @ET; @Mos11; @Jim1; @Jim2; @Fish09; @KW2005; @Da]. A stronger and more desirable result, namely the *absolute winning* property of $\mathbf{Bad}(\mathbf{r})$ inside nondegenerate analytic curves, was recently proved by Beresnevich, Nesharim, and Yang in [@BNY22]. In [@McM2010], this stronger notion of winning was first introduced by McMullen, extending Schmidt's notion of winning in [@Schmidt]. We refer to [@McM2010; @BFKRW; @BHNS] for the definition and several properties of absolute winning sets. In [@BNY22], the authors ask if their result can be extended to the inhomogeneous setting. In this paper, we address that question by studying $\mathbf{Bad}_{\mathbf{\boldsymbol\theta}}(\mathbf{r})$ inside nondegenerate analytic curves.

**Theorem 1**. *Let $\mathbf{\boldsymbol\theta}\in\mathbb{R}^n$, let $\mathbf{r}$ be a weight in $\mathbb{R}^n$, and let $U\subset \mathbb{R}$ be an open interval. Suppose that $\varphi:U\to \mathbb{R}^n$ is an analytic map that is nondegenerate. Then $\varphi^{-1}(\mathbf{Bad}_{\mathbf{\boldsymbol\theta}}(\mathbf{r}))$ is absolute winning on $U.$*

Our main theorem is as follows:

**Theorem 2**. *Let $\mathbf{r}$ be a weight in $\mathbb{R}^n$. Suppose that $U\subset \mathbb{R}$ is an open interval and that $\varphi:U\to \mathbb{R}^n$ is an analytic map that is nondegenerate. Let $\mathbf{\boldsymbol\theta}=(\theta_i)_{i=1}^n$, where $\theta_i:\mathbb{R}^n \to \mathbb{R}$ are maps such that $\theta_i|_{\varphi(U)}$ is a Lipschitz function for each $1\leq i \leq n$. Then $\varphi^{-1}(\mathbf{Bad}_{\mathbf{\boldsymbol\theta}}(\mathbf{r}))$ is absolute winning on $U.$*

A map $\theta:A\subseteq \mathbb{R}^n\to \mathbb{R}$ is called Lipschitz on $A$ if there exists $d>0$ such that $$|\theta(\mathbf{x})-\theta(\mathbf{y})|\leq d \max_{i=1}^n\vert x_i-y_i\vert, \forall ~ \mathbf{x}=(x_i),\mathbf{y}=(y_i)\in A.$$ Any continuously differentiable map on a closed bounded interval in $\mathbb{R}$ is always Lipschitz due to the mean value theorem. One can find the definition of a nondegenerate map in [Definition 2](#nondeg){reference-type="ref" reference="nondeg"}. Now we state some corollaries of [Theorem 2](#thm: main){reference-type="ref" reference="thm: main"}. Using properties of absolute winning sets (see [@BNY22 Lemma 1.2]), as an immediate consequence of [Theorem 2](#thm: main){reference-type="ref" reference="thm: main"}, we get the following.

**Corollary 1**.
*Let $n_i\in\mathbb{N}$, $i\geq 1$, let $\{\mathbf{\boldsymbol\theta}^{i}_{j}\}_{j\geq 1}$, $\mathbf{\boldsymbol\theta}^{i}_j:\mathbb{R}^{n_i}\to\mathbb{R}^{n_i}$, be sequences of Lipschitz maps, let $\mathbf{r}_i$ be weights in $\mathbb{R}^{n_i}$, and let $U\subset \mathbb{R}$ be an open interval. Suppose that for each $i\in\mathbb{N}$, $\varphi_i: U\to \mathbb{R}^{n_i}$ is an analytic nondegenerate map. Let $\mu$ be an Ahlfors regular measure on $U$ such that $U\cap\operatorname{supp}\mu\neq \emptyset.$ Then $$\dim \bigcap_{i\geq 1}\bigcap_{j\geq 1} \varphi_i^{-1}(\mathbf{Bad}_{\mathbf{\boldsymbol\theta}^{i}_{j}}(\mathbf{r_i}))\cap\operatorname{supp}\mu=\dim (\operatorname{supp}\mu).$$ In case $\mu$ is the Lebesgue measure, $$\dim \bigcap_{i\geq 1}\bigcap_{j\geq 1} \varphi_i^{-1}(\mathbf{Bad}_{\mathbf{\boldsymbol\theta}^{i}_{j}}(\mathbf{r_i}))=1.$$*

The above corollary answers the inhomogeneous version of Davenport's problem [@Dav64] for analytic curves. Moreover, using a fibering lemma [@Beresnevich2015 §2.1] and Marstrand's slicing lemma (see [@Mat1995 Theorem 10.11], [@Fal2003 Lemma 7.12]), we extend some parts of the previous corollary to arbitrary analytic nondegenerate manifolds. The deduction of [Corollary 2](#cor_manifold){reference-type="ref" reference="cor_manifold"} from [Corollary 1](#cor_curve){reference-type="ref" reference="cor_curve"} is exactly the same as in [@Beresnevich2015 §2.1].

**Corollary 2**. *Let $m,n,k\in \mathbb{N}$ and $1\leq i\leq k,$ let $\{\mathbf{\boldsymbol\theta}^{i}_{j}\}_{j\geq 1}$, $\mathbf{\boldsymbol\theta}^{i}_j:\mathbb{R}^{n}\to\mathbb{R}^{n}$, be sequences of Lipschitz maps, let $W$ be a countable set of weights in $\mathbb{R}^{n}$, and let $U\subset \mathbb{R}^m$ be an open ball. Suppose that for each $1\leq i \leq k$, $\mathbf{\varphi}_i: U\to \mathbb{R}^{n}$ is an analytic nondegenerate map. Then $$\dim \bigcap_{i=1}^k\bigcap_{\mathbf{r}\in W}\bigcap_{j\geq 1} \varphi_i^{-1}(\mathbf{Bad}_{\mathbf{\boldsymbol\theta}^{i}_{j}}(\mathbf{r}))=m.$$*

By [@BNY22 Lemma 1.2 (iv)] (and [@BHNS]), [Theorem 2](#thm: main){reference-type="ref" reference="thm: main"} implies that for any Ahlfors regular measure $\mu$ with $U\cap \operatorname{supp}\mu\neq\emptyset$, we must have $\varphi^{-1}(\mathbf{Bad}_{\mathbf{\boldsymbol\theta}}(\mathbf{r}))\bigcap\operatorname{supp}\mu\neq\emptyset.$ Surprisingly, by [@BHNS Proposition 2.18, Corollary 4.2] ([@BNY22 Lemmata 1.2-1.4]), the last implication is sufficient to prove absolute winning. Hence we prove the following theorem, which is equivalent to [Theorem 2](#thm: main){reference-type="ref" reference="thm: main"}.

**Theorem 3**. *Let $\mathbf{\boldsymbol\theta}, \mathbf{r},U,\varphi$ be as in [Theorem 2](#thm: main){reference-type="ref" reference="thm: main"}. Suppose that $\mu$ is a $(C,\alpha)$-Ahlfors regular measure such that $U\cap \operatorname{supp}{\mu}\neq \emptyset.$ Then $$\varphi^{-1}(\mathbf{Bad}_{\mathbf{\boldsymbol\theta}}(\mathbf{r}))\bigcap\operatorname{supp}\mu\neq\emptyset.$$*

The relevant definition of an *Ahlfors regular measure* can be found in § [2](#preli){reference-type="ref" reference="preli"}. Next, we point out how our main theorem relates to previous results in this direction.

## Remarks {#remarks .unnumbered}

1. [Theorem 2](#thm: main){reference-type="ref" reference="thm: main"} represents the complete inhomogeneous version of [@BNY22 Theorem 1.1].

2. For $n=2$, [Theorem 2](#thm: main){reference-type="ref" reference="thm: main"} extends [@ABV Theorem 1.1] from $\mathbf{\boldsymbol\theta}$ being a constant to any Lipschitz function.
3. [Corollary 2](#cor_manifold){reference-type="ref" reference="cor_manifold"} gives the inhomogeneous analogue of [@Beresnevich2015 Theorem 1], and in particular, answers Davenport's questions [@Dav64 p. 52] in much more generality. 4. [Corollary 2](#cor_manifold){reference-type="ref" reference="cor_manifold"}, for $n=m\geq 2$, $\varphi_i(\mathbf{x})=\mathbf{x}~\forall~ \mathbf{x}\in\mathbb{R}^n, 1\leq i\leq k$ is also new, giving a contribution to the classical set-up, namely solving the inhomogeneous analogue of Schmidt's conjecture in arbitrary dimensions. The basic idea of the proof of [Theorem 3](#thm: twin main){reference-type="ref" reference="thm: twin main"} is to construct the inhomogeneous constraints inside homogeneous constraints, which has been earlier exploited in [@ABV; @BV13]. In the proof of the homogeneous result, in [@BNY22], authors take the *dual* definition of bad vectors; see §[4](#final){reference-type="ref" reference="final"} for the definition of dual bad. First, we transfer the dynamical property of the elements in the Cantor set $\mathcal{K}_\infty$, constructed in [@BNY22], in a number theoretic property. Then using a transference principle, we transfer these properties in a simultaneous setting. For the rest of the paper, our aim is to construct a new generalized Cantor set $\mathcal{K}'_\infty$ inside $\varphi^{-1}(\mathbf{Bad}_{\mathbf{\boldsymbol\theta}}(\mathbf{r}))\bigcap\operatorname{supp}\mu\cap \mathcal{K}_\infty$ and show that the new Cantor set is nonempty. For the nonemptiness, we need to make sure that the rate of removal is controlled in a certain way. Since we took the inhomogeneous part as a function, more generally than a constant, the constructions of the Cantor set avoid *dangerous domains*. The idea is to partition a chosen interval centered at the support of $\mu$, and treat the problem as if the inhomogeneous part is constant inside each of these subintervals. This means we construct comparable *dangerous intervals*, which allows us to measure the parts removed in the process of Cantor set construction. Also, we categorize shifted rationals near the curve in a more complex manner than it appeared in [@ABV]. # Preliminaries {#preli} Let us first recall the definition of Ahlfors regular measures from [@BNY22 §1]. **Definition 1**. *A Borel measure $\mu$ on $\mathbb{R}^d$ is called $(C,\alpha)$-Ahlfors regular if there exists $\rho_0>0$ such that for any ball $B(x,\rho)\subset\mathbb{R}^d$ where $x\in\operatorname{supp}\mu$ and $\rho\leq\rho_0$, we have $$C^{-1}\rho^{\alpha}\leq\mu(B(x,\rho))\leq C\rho^{\alpha}.$$* Let us recall the definition of a nondegenerate map. **Definition 2**. *A analytic map $\varphi=(\varphi_1,\cdots,\varphi_n):U\to \mathbb{R}^n$, defined on an open interval $U\subset \mathbb{R}$ is nondegenerate if $\varphi_1,\cdots,\varphi_n$ together with the constant function 1 are linearly independent over $\mathbb{R}.$* The above definition is restrictive to analytic maps. In general, the definition of nondegenerate map with respect to some measure can be found in [@KM]. For every $\varepsilon>0$, let us define $$K_{\varepsilon}:=\{\Lambda\in X_{n+1}:=\mathrm{SL_{n+1}}(\mathbb{R})/\mathrm{SL_{n+1}}(\mathbb{Z})~|~ \inf_{\mathbf{x}\in \Lambda\setminus 0}\Vert \mathbf{x}\Vert\geq \varepsilon\}.$$ Mahler's compactness criterion ensures that these are compact sets in the space of unimodular lattices $X_{n+1}.$ Here and elsewhere $\Vert \cdot\Vert$ is the Euclidean norm. 
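Although the argument below never needs to compute such sets explicitly, a small numerical sketch may help build intuition for $K_{\varepsilon}$. Given a unimodular matrix $g$, one can enumerate nonzero integer vectors in a finite box and compare the shortest lattice vector found against $\varepsilon$; the example matrix, the dimension, and the size of the search box below are illustrative assumptions, and the finite search is only a heuristic check rather than a certificate of membership in $K_{\varepsilon}$.

``` python
import itertools

import numpy as np


def in_K_eps_heuristic(g, eps, box=10):
    # Heuristically test whether the unimodular lattice g * Z^d lies in K_eps:
    # enumerate nonzero integer vectors with entries in [-box, box] and compare
    # the shortest image found with eps (a finite search, not a certificate).
    d = g.shape[1]
    shortest = min(
        np.linalg.norm(g @ np.array(v, dtype=float))
        for v in itertools.product(range(-box, box + 1), repeat=d)
        if any(v)
    )
    return shortest >= eps, shortest


# Assumed toy lattice in dimension 3 (n = 2): upper triangular with determinant 1.
g = np.array([[4.0, 0.3, -0.7],
              [0.0, 0.5, 0.2],
              [0.0, 0.0, 0.5]])
print(in_K_eps_heuristic(g, eps=0.4))
```

In the proofs, the lattices of interest arise from the group actions recalled later in this section, and membership in $K_{\varepsilon}$ is established structurally rather than by search.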
Next, we recall the definition of generalized Cantor set, following [@BNY22 §2.2]. **Definition 3**. *Given $R\in\mathbb{N}$, $I\subset\mathbb{R}$ a closed interval, the $R$-partition of $I$ is the collection of closed intervals obtained by dividing $I$ to $R$ closed subintervals of the same length $R^{-1}|I|$, denoted as **Par**$_R(I)$. Here, $\vert I\vert$ is the Lebesgue measure of the interval $I.$* *Also, given $\mathcal{J}$ a collection of closed intervals,$$\mathbf{Par}_R(\mathcal{J}):=\cup_{I\in\mathcal{J}}\mathbf{Par}_R(I).$$* To define a generalized Cantor set, we first let $\mathcal{J}_0=\{I_0\}$, where $I_0$ is a closed interval. Then inductively $\mathcal{I}_{q+1}=\mathbf{Par}_R(\mathcal{J}_q)$. In addition, we remove a subcollection $\hat{\mathcal{J}}_q$ from $\mathcal{I}_{q+1}$. In other words, we let $\mathcal{J}_{q+1}=\mathcal{I}_{q+1}\setminus\hat{\mathcal{J}}_q$. In particular, $$\label{length} \forall I\in\mathcal{J}_q,|I|=|I_0|R^{-q}.$$ We define the generalized Cantor set, $$\label{K_infty}\mathcal{K}_{\infty}:=\bigcap_{q\in\mathbb{N}\cup\{0\}}\bigcup_{I\in\mathcal{J}_q}I.$$ Next, we recall the following theorem that we will use to get the non-emptiness of a generalized Cantor set. **Theorem 4**. *[@BV11multi Theorem 3] [\[nemp\]]{#nemp label="nemp"} If $\forall q\in\mathbb{N}\cup\{0\}$, $\hat{\mathcal{J}}_q$ can be written as $$\hat{\mathcal{J}}_q=\cup_{p=0}^q\hat{\mathcal{J}}_{p,q}\label{jq},$$ and we denote $h_{p,q}:=\max_{J\in\mathcal{J}_p}\#\{I\in\hat{\mathcal{J}}_{p,q}:I\subset J\}$, such that $$\label{formula} t_0:= R- h_{0,0}>0, t_q:=R-h_{q,q}-\sum_{j=1}^q\frac{h_{q-j,q}}{\prod_{i=1}^jt_{q-i}}>0, ~\forall~ q\geq 1.$$ Then $\mathcal{K}_{\infty}\neq\emptyset.$* We also recall the following transference lemma from [@Mahler1939], which we state in the form it appeared in [@Beresnevich2015 Lemma 11]. **Lemma 1**. *Let $T_0,\cdots, T_n>0$, $L_0,L_1,\cdots,L_n$ be a system of linear forms in variables $u_0,u_1,\cdots,u_n$ with real coefficients and determinants $d\neq 0$, and let $L_0',L_1',\cdots,L_n'$ be the transposed system of linear forms in variables $v_0,v_1,\cdots,v_n$, so that $\sum_{i=0}^nL_iL_i'=\sum_{i=0}^nu_iv_i$. Let $\iota=\frac{\prod_{i=0}^nT_i}{|d|}.$ Suppose there exists $(u_0,u_1,\cdots,u_n)\in\mathbb{Z}^{n+1}\setminus \{\mathbf{0}\}$ such that $$\label{system1} |L_i(u_0,u_1,\cdots,u_n)|\leq T_i,~~\forall~~ 0\leq i\leq n.$$ Then there exists a non-zero integer point $(v_0,v_1,\cdots,v_n)$ such that $$\label{system2} |L_0'(v_0,v_1,\cdots,v_n)|\leq n\iota/T_0\textit{ and }|L'_i(v_0,v_1,\cdots,v_n)|\leq \iota/T_i,~~\forall~~ 1\leq i\leq n.$$* ## Notations and auxiliary results {#subsection: Notation} Let us denote $\mathbf{x}=(x_i)\in \mathbb{R}^n$, where $x_i$ is the $i$-th coordinate. Next, without loss of generality, we assume, $$\label{weights chain} r_1\geq r_2\geq \cdots\geq r_n>0,$$ where $\mathbf{r}$ is the weight in $\mathbb{R}^n$ that we considered in [Theorem 2](#thm: main){reference-type="ref" reference="thm: main"}. 
The reason why $r_n>0$ is without loss of generality is that otherwise, $\textbf{Bad}_{\hat\mathbf{\boldsymbol\theta}}{(\hat\mathbf{r})}\times \mathbb{R}\subseteq \textbf{Bad}_{\mathbf{\boldsymbol\theta}}(\mathbf{r}),$ where $\hat\mathbf{r}=(r_1,\cdots,r_{n-1}),$ and $\hat\mathbf{\boldsymbol\theta}=(\theta_i)_{i=1}^{n-1}.$ Let $\rho_0$ be as defined as [Definition 1](#ahlfors){reference-type="ref" reference="ahlfors"} and $I_0$ be as in [@BNY22 §2.1] satisfying $x_0\in\operatorname{supp}\mu$ and $\varphi_1'(x_0)\neq 0$, where $x_0$ is the center of $I_0$, and $$\label{U containing I_0}3^{n+1}I_0\subset U,$$ and $3|I_0|\leq\min\{\rho_0, 1\}$. Also, as assumed in [@BNY22 Equation 2.2], $$\label{Change of variable} \varphi(x)=(\varphi_1(x),\varphi_2(x),\cdots,\varphi_n(x)), \varphi_1(x)=x, \forall x\in 3^{n+1}I_0.$$ Let $\mathbf{\boldsymbol\theta}$ be as in [Theorem 2](#thm: main){reference-type="ref" reference="thm: main"}. Note that, $$\varphi^{-1}(\mathbf{Bad}_{\mathbf{\boldsymbol\theta}}(\mathbf{r})):=\{x\in\mathbb{R}:\liminf_{q\in\mathbb{Z}\setminus\{0\}, |q|\to\infty}\max_{1\leq i\leq n}\vert q\varphi_i(x)-\theta_i(\varphi(x))\vert_{\mathbb{Z}}^{1/r_i} \vert q\vert>0 \}.$$ Let $\theta^{\varphi}_i:U\to\mathbb{R}$, be defined as $\theta^{\varphi}_i(x):=\theta_i(\varphi(x)).$ Since each $\theta_i$ is Lipschitz on $\varphi(U)$, and $\varphi$ is analytic, we have $\theta^{\varphi}_i$ to be Lipschitz on $3^{n+1}I_0$. We will drop the suffix $\varphi$ here onwards and write $\theta_i$ for clarity of notation. So for every $1\leq i \leq n$ there exists $d_i>0$ such that, $$|\theta_i(x)-\theta_i(y)|\leq d_i \vert x-y\vert, \forall ~ x,y\in 3^{n+1}I_0.$$ Let us recall the definitions of the following actions as in [@BNY22]. $$\begin{aligned} & a(t):=\operatorname{diag}\{e^t,e^{-r_1t},\cdots, e^{-r_nt}\},b(t):=\operatorname{diag}\{e^{-t/n},e^{t},e^{-t/n}, \cdots,e^{-t/n}\},\\ & u(\mathbf{x}):=\begin{pmatrix} 1&\mathbf{x}\\ 0&\mathrm{I}_n \end{pmatrix}, \mathbf{x}\in \mathbb{R}^n, \text{ where } \mathrm{I}_n \text{ is the identity matrix of size } n\times n.\end{aligned}$$ For any $\mathbf{y}\in\mathbb{R}^{n-1}$, and $x\in\mathbb{R},$ let us define $$\label{inho3} u_1(\mathbf{y}):=\begin{pmatrix} 1&0&0& 0& \cdots&0\\ 0&1&y_2& y_3& \cdots&y_n\\ \mathbf{0}&\mathbf{0} & & &\mathrm{I}_{n-1}\\%&1& 0&\cdots& 0\\ %& & &\ddots&\\ %&0& 0&\cdots & 0& 0&1 \end{pmatrix}, ~~z(x):= u_1(\hat\varphi'(x)),$$ where $\hat\varphi(x):=(\varphi_2(x),\cdots, \varphi_n(x)).$ Let $R>0$ be a fixed large number and $\beta,\beta'>1$ be such that $$\label{beta} e^{(1+r_1)\beta}=R=e^{(1+1/n)\beta'}.$$ Also, let us fix $\epsilon>0$ such that $$\label{def: epsilon} 0<\epsilon<\frac{r_n}{4n}.$$ In [@BNY22], a non-empty generalized Cantor set $\mathcal{K}_{\infty}$ determined by $\hat{\mathcal{J}}_q=\cup_{p=0}^q\hat{\mathcal{J}}_{p,q}$ has already been constructed within $\varphi^{-1}(\mathbf{Bad}(\mathbf{r}))\cap\operatorname{supp}\mu$; see [@BNY22 Proposition 2.6]. 
We recall the construction, let $$\label{jqq} \hat{\mathcal{J}}_{q,q}:=\{I\in\mathcal{I}_{q+1},\mu(I)<(3C)^{-1}|I|^{\alpha}\}.$$ Next, for $\forall q\geq 1$ define $\hat{\mathcal{J}}_{p,q}=\emptyset$ if $0<p\leq q/2$ or $0<p<q$ with $p\not\equiv q\pmod 4$, define $\hat{\mathcal{J}}_{0,q}$ to be the collection of $I\in\mathcal{I}_{q+1}\setminus\hat{\mathcal{J}}_{q,q}$ such that there exists $l\in\mathbb{Z}$ with $\max(1,q/8)\leq l\leq q/4$ satisfying $$\label{jpq} b(\beta'l)a(\beta(q+1))z(x)u(\varphi(x))\mathbb{Z}^{n+1}\notin K_{e^{-\epsilon\beta l}}\textit{ for some }x\in I,$$ and finally if $q/2<p<q$ and $p=q-4l$ for some $l\in\mathbb{Z}$ define $$\hat{\mathcal{J}}_{p,q}:=\{I\in\mathcal{I}_{q+1}\setminus\left(\hat{\mathcal{J}}_{q,q}\cup\bigcup_{0\leq p'<p}\hat{\mathcal{J}}_{p',q}\right):\cref{jpq}\textit{ holds}\}$$ Then $\hat{\mathcal{J}}_{q}$ is as defined in [\[jq\]](#jq){reference-type="ref" reference="jq"}, $\mathcal{J}_{q+1}=\mathcal{I}_{q+1}\setminus\hat{\mathcal{J}}_{q}$ and $\mathcal{K}_{\infty}$ is as defined in [\[K_infty\]](#K_infty){reference-type="ref" reference="K_infty"}. Let us denote $\tilde u_1(O(1)):= u_1(\mathbf{y}),$ such that $\Vert \mathbf{y}\Vert\ll 1.$ Then let us recall the following lemma that was proved in [@BNY22 Equation 2.35]. **Lemma 2**. *Given $q>8$, then $\forall I\in\mathcal{J}_{q}$, we have* *$$\label{eqn: Dani reversing b} \forall x\in I,a(\beta q)u(\varphi(x))\mathbb{Z}^{n+1}\in \tilde u_1(O(1))^{-1}b(-\beta')K_{e^{-\epsilon\beta}}.$$* Also, denote $h_{p,q}:=\max_{J\in\mathcal{J}_p}\#\{I\in\hat{\mathcal{J}}_{p,q}:I\subset J\},0\leq p\leq q$. The following two propositions provide upper bounds for $h_{p,q}$. **Proposition 1**. *[@BNY22 Proposition 2.7] Let $C,\alpha$ be as defined in [Theorem 3](#thm: twin main){reference-type="ref" reference="thm: twin main"} and $R^\alpha\geq 21C^2$, for $\forall q\in\mathbb{N}\cup\{0\},$ $$\label{hqq} h_{q,q}\leq R-(4C)^{-2}R^{\alpha}.$$* **Proposition 2**. *[@BNY22 Propositions 3.3 and 3.4][\[hpq\]]{#hpq label="hpq"} There exist constants $R_0\geq1,C_0>0$ and $\eta_0>0$ such that if $R\geq R_0$, then for any $p,q\in\mathbb{N}\cup\{0\}$ with $p<q$, we have $$h_{p,q}\leq C_0R^{\alpha(1-\eta_0)(q-p+1)}.$$* # Proof of the main theorem **Lemma 3**. *There exists $\xi>0$ such that $\forall ~q\in\mathbb{N}$,  $\forall I\in\mathcal{J}_{q},\forall x\in I$, $$\{a(\beta t)u(\varphi(x))\mathbb{Z}^{n+1}, 0<t\leq q\}\subset K_{\xi}.$$* *Proof.* As shown in [Lemma 2](#234){reference-type="ref" reference="234"}, the right-hand side of [\[eqn: Dani reversing b\]](#eqn: Dani reversing b){reference-type="ref" reference="eqn: Dani reversing b"} is contained in a bounded subset of $X_{n+1}$, independent of $q$ and $x$. Therefore, $\{a(\beta q_0)u(\varphi(x))\mathbb{Z}^{n+1}:q_0\leq q,q_0\in\mathbb{N}\}$ is in a bounded set in $X_{n+1}$, that is independent of $q$ and $x$. If $q_0-1<t\leq q_0$, then $a(\beta t)u(\varphi(x))\mathbb{Z}^{n+1}= a(-\beta(q_0-t))a(\beta q_0)u(\varphi(x))\mathbb{Z}^{n+1}$. Since $0\leq q_0-t<1$, $\{a(\beta t)u(\varphi(x))\mathbb{Z}^{n+1}, 0<t\leq q, x\in I, I\in \mathcal{J}_q, q\in\mathbb{N}\}$ is bounded in $X_{n+1}$. So, by Mahler's compactness criterion, the lemma follows. 
◻ Let $\xi$ be as in [Lemma 3](#bounded){reference-type="ref" reference="bounded"}, and without loss of generality, we assume $$\xi<\frac{n+1}{e}\label{xi}.$$ We define the following quantities: $$\label{def: f_0} f_0:=1+\sup_{x\in I_0, i=1,\cdots, n}|\varphi_i'(x)|.$$ $$\label{def: lambda_1}\lambda_1:=\left\lceil\ln{\left(\frac{n+1}{\xi}\right)^{\frac{1}{\beta r_n}}}\right\rceil,$$ $$k_1:=\frac{\xi}{n+1}e^{-\beta\lambda_1}. \label{def: k_1}$$ Note that $\beta$ depends on $R$, and therefore, in the above definitions $\lambda_1\in \mathbb{N}$ and $k_1<1$ are dependent on $R.$ From [\[def: lambda_1\]](#def: lambda_1){reference-type="ref" reference="def: lambda_1"}, we have $$\label{eqn: defn of lambda_1 gives} e^{-\beta\lambda_1 r_i}\leq\frac{\xi}{n+1}, 1\leq i\leq n.$$ Also, $$\label{upper bound on k_1r_1} k_1^{1+r_1}<R^{-\lambda_1},$$ since by [\[xi\]](#xi){reference-type="ref" reference="xi"}, $\frac{n+1}{\xi}\stackrel{\eqref{def: k_1}}{=}k_1^{-1}e^{-\beta\lambda_1}>1\implies k_1<e^{-\beta\lambda_1}\stackrel{\eqref{beta}}{\implies} k_1^{1+r_1}<R^{-\lambda_1}.$ We then have the following corollary. **Corollary 3**. *$\forall q\in\mathbb{N}, q>\lambda_1, ~\forall ~1\leq H<e^{\beta(q-\lambda_1)},~\forall I\in\mathcal{J}_q,\forall x\in I,$ there are no non-zero integer solutions $(a_0,a_1,\cdots,a_n)$ to $$|a_0+\sum_{i=1}^na_i\varphi_i(x)|\leq k_1H^{-1},|a_i|\leq H^{r_i},~ 1\leq i \leq n.\label{dual system}$$* *Proof.* We prove this by contradiction. Let $(a_0,a_1,\cdots,a_n)\in\mathbb{Z}^{n+1}\setminus \{\mathbf{0}\}$ be a solution to [\[dual system\]](#dual system){reference-type="ref" reference="dual system"} with a positive number $$1\leq H<e^{\beta(q-\lambda_1)}\label{H},$$ and $x\in I$ for some $I\in\mathcal{J}_q$. Then, we let $$\label{t_0} t_0:=\beta\lambda_1+\ln H,$$ Thus by [\[H\]](#H){reference-type="ref" reference="H"} and $\lambda_1>0$,$$\label{t_0'}0< t_0<\beta q.$$ Also, let $$\label{w} \mathbf{w}:=(w_i)_{i=0}^n=a(t_0)u(\varphi(x))\cdot (a_0,a_1,\cdots,a_n)^T.$$ Thus, we have $\vert w_0\vert =e^{\beta\lambda_1}H|a_0+\sum_{i=1}^na_i\varphi_i(x)|\stackrel{\eqref{dual system},\eqref{def: k_1}}{\leq}\frac{\xi}{n+1}$ and $\forall 1\leq i\leq n$, $\vert w_i\vert =e^{-\beta\lambda_1r_i}H^{-r_i}|a_i|\stackrel{\eqref{dual system}, \eqref{eqn: defn of lambda_1 gives}}{\leq}\frac{\xi}{n+1}$, thus $0\neq \Vert \mathbf{w}\Vert=\sqrt{\sum_{i=0}^nw_i^2}<\xi$. This implies that there exists $x\in I,$ for some $I\in \mathcal{J}_q,$ and by [\[t_0\'\]](#t_0'){reference-type="ref" reference="t_0'"}, some $0< t_0/\beta< q$ such that $$a(\beta t_0/\beta) u(\varphi(x))\mathbb{Z}^{n+1}\notin K_{\xi},$$ which is a contradiction to [Lemma 3](#bounded){reference-type="ref" reference="bounded"}. ◻ **Proposition 3**. *For $\forall q\in\mathbb{N}, q>\lambda_1, \forall~ 1\leq Q<e^{\beta(q-\lambda_1)},\forall I\in\mathcal{J}_q,\forall x\in I$, there are no non-zero integer solutions $(m,p_1,...,p_n)$ to the system $$\label{simul system} |m|\leq Q, |m\varphi_i(x)-p_i|<\frac{k_1}{n}Q^{-r_i},~\forall~ 1\leq i\leq n.$$* *Proof.* Suppose by contradiction, $(m,p_1,\cdots,p_n)$ be a non-zero integer solution to [\[simul system\]](#simul system){reference-type="ref" reference="simul system"} with a positive number $1\leq Q<e^{\beta(q-\lambda_1)}$ and $x\in I$ for some $I\in\mathcal{J}_q$. 
Let $$L_0(m,p_1,\cdots,p_n):=m,L_i(m,p_1,\cdots,p_n):=m\varphi_i(x)-p_i,~\forall~ 1\leq i\leq n.$$ The transposed system is $L_0'(v_0,v_1,...,v_n)=v_0+\sum_{i=1}^n\varphi_i(x)v_i$ and $L_i'(v_0,v_1,...,v_n)=-v_i,~\forall~ 1\leq i\leq n$. Then [\[simul system\]](#simul system){reference-type="ref" reference="simul system"} implies [\[system1\]](#system1){reference-type="ref" reference="system1"} with these $L_0,L_1,...,L_n$ and $T_0=Q,T_i=\frac{k_1}{n}Q^{-r_i},~\forall 1\leq i\leq n$. By [Lemma 1](#transfer){reference-type="ref" reference="transfer"}, we have a non-zero integer solution $(v_0,v_1,\cdots,v_n)$ to [\[system2\]](#system2){reference-type="ref" reference="system2"} with these $L_i', ~0\leq i\leq n$ and $\iota=\left(\frac{k_1}{n}\right)^n$. Since $k_1<1$ and $n\geq1$, we have $\iota n\leq k_1$ and $\iota \frac{n}{k_1}\leq 1.$ So $(v_0,v_1,...,v_n)$ is a non-zero integer solution to [\[dual system\]](#dual system){reference-type="ref" reference="dual system"} with $H=Q$, which is a contradiction to [Corollary 3](#dual){reference-type="ref" reference="dual"}. ◻ Define a new quantity $c$, $$\label{def: c} c:=\frac{k_1^{1+r_1}d_1}{200 n(f_0+\max_{2\leq i\leq n}d_i)(1+d_1)^2R^4}.$$ We take $$\label{R_0'} R>\frac{1}{|I_0|}.$$ Given $m\in\mathbb{N}$, let us partition $I_0$ as a disjoint union of $m^{\star}:=\lceil\frac{2d_1}{c}m^{r_1}\rceil\stackrel{\eqref{def: c}}{\geq} 2$ many intervals, such that for $1\leq j\leq m^\star-1,$ $\vert I_0^{j,m}\vert=\frac{c|I_0|}{2d_1 m^{r_1}}$, and $\vert I_0^{m^\star, m}\vert\leq \frac{c|I_0|}{2d_1 m^{r_1}}.$ Suppose the center of each $I_0^{j,m}$ is $x_0^{j,m}.$ For $m\in\mathbb{N}, 1\leq j\leq m^\star,$ let us denote $\theta_1^{j,m}:= \theta_1(x_0^{j,m}),$ and let $$\label{def: Vj} \mathcal{V}^{j,m}:=\left\{\frac{\mathbf{p}}{m} ~\left|~\begin{aligned}&\frac{p_1+\theta_1^{j,m}}{m}\in 3^{n+1}I_0,~ \mathbf{p}\in\mathbb{Z}^n\text{ and }\\&\left|\varphi_i\left(\frac{p_1+\theta_1^{j,m}}{m}\right)-\frac{p_i+\theta_i\left(\frac{p_1+\theta_1^{j,m}}{m}\right)}{m}\right|<\frac{(f_0+d_i) c}{m^{1+r_i}}, \forall~ 2\leq i\leq n\end{aligned}\right.\right\},$$ and define $$\label{def V upper union}\mathcal{V}:=\bigcup_{m\in\mathbb{N},m>17|I_0|^{-1}d_1}\bigcup_{j=1}^{m^\star} \mathcal{V}^{j,m}.$$ For any $\mathbf{v}=\frac{\mathbf{p}}{m}$, $1\leq k\leq m^\star,$ let $$\label{def: inho_interval1} \Delta_{\mathbf{\boldsymbol\theta}}^{k,m}(\mathbf{v}):=\left\{x\in I^{k,m}_0:\left|x-\frac{p_1+\theta_1(x)}{m}\right|<\frac{c}{m^{1+r_1}}\right\}.$$ Note that $\{\Delta_{\mathbf{\boldsymbol\theta}}^{k,m}(\mathbf{v})\}_{k=1}^{m^\star}$, is a collection of disjoint domains. For $\mathbf{v}=\frac{\mathbf{p}}{m}\in \mathbb{Q}^n, 1\leq j\leq m^\star$, let $$\label{def: inho_interval} \widetilde\Delta^{j,m}_{\mathbf{\boldsymbol\theta}}(\mathbf{v}):=\left\{x\in I^{j,m}_0:\left|x-\frac{p_1+\theta^{j,m}_1}{m}\right|<\frac{c}{2m^{1+r_1}}\right\}.$$ We have the following lemma showing that these intervals are comparable with the domains $\Delta^{j,m}_{\mathbf{\boldsymbol\theta}}(\mathbf{v})$, **Lemma 4**. *For every $m\in \mathbb{N},\mathbf{p}\in\mathbb{Z}^n$, $1\leq j\leq m^\star,$ $\mathbf{v}=\frac{\mathbf{p}}{m},$ $$\widetilde\Delta^{j,m}_{\mathbf{\boldsymbol\theta}}(\mathbf{v})\subseteq \Delta^{j,m}_{\mathbf{\boldsymbol\theta}}(\mathbf{v})\subseteq 4\widetilde\Delta^{j,m}_{\mathbf{\boldsymbol\theta}}(\mathbf{v}).$$* *Proof.* Pick any $x\in \widetilde\Delta^{j,m}_{\mathbf{\boldsymbol\theta}}(\mathbf{v}).$ Since $x\in I^{j,m}_0,$ $\vert x-x_0^{j,m}\vert< \frac{c}{4d_1 m^{r_1}}$. 
Then $$\begin{aligned} \left|x-\frac{p_1+\theta_1(x)}{m}\right|&< \left|x-\frac{p_1+\theta_1^{j,m}}{m}\right|+ \left\vert \frac{\theta_1(x)-\theta_1^{j,m}}{m}\right\vert\\ &< \frac{c}{2 m^{1+r_1}}+ d_1 \frac{c}{4d_1 m^{1+r_1}}< \frac{c}{m^{1+r_1}}. \end{aligned}$$ Now, suppose $x\in \Delta^{j,m}_{\mathbf{\boldsymbol\theta}}(\mathbf{v}),$ $$\begin{aligned} \left|x-\frac{p_1+\theta_1^{j,m}}{m}\right|&< \left|x-\frac{p_1+\theta_1(x)}{m}\right|+ \left\vert \frac{\theta_1(x)-\theta_1^{j,m}}{m}\right\vert\\ &< \frac{c}{ m^{1+r_1}}+ d_1 \frac{c}{4d_1 m^{1+r_1}}< \frac{2c}{m^{1+r_1}}. \end{aligned}$$ ◻ Next, we have the following lemma that shows that $\Delta^{j,m}_{\mathbf{\boldsymbol\theta}}(\mathbf{v})$ are the *dangerous domains* to avoid. **Lemma 5**. *Let $I_0$, $\mathcal{V}^{j,m}$ and $\Delta^{j,m}_{\mathbf{\boldsymbol\theta}}(\mathbf{v})$ be as before. Then $$I_0\setminus\bigcup_{m> 17 \vert I_0\vert^{-1} d_1}\bigcup_{j=1}^{m^\star}\bigcup_{\mathbf{v}\in\mathcal{V}^{j,m}}\Delta^{j,m}_{\mathbf{\boldsymbol\theta}}(\mathbf{v})\subseteq\varphi^{-1}(\mathbf{Bad}_{\mathbf{\boldsymbol\theta}}(\mathbf{r})).$$* *Proof.* Suppose $x\notin\varphi^{-1}(\mathbf{Bad}_{\mathbf{\boldsymbol\theta}}(\mathbf{r}))$ and $x\in I_0$, then there exists $(m,\mathbf{p})\in\mathbb{N}\times \mathbb{Z}^n$, with $$\label{large m} m> 17|I_0|^{-1}d_1,$$ such that $$\label{phi1} \left|x-\frac{p_1+\theta_1(x)}{m}\right|\stackrel{\eqref{Change of variable}}{=}\left|\varphi_1(x)-\frac{p_1+\theta_1(x)}{m}\right|<\frac{c}{4m^{1+r_1}},$$ and $$\left|\varphi_i(x)-\frac{p_i+\theta_i(x)}{m}\right|<\frac{c}{4m^{1+r_i}},\forall i\geq 2. \label{phii}$$ Since $x\in I_0$, there exists $1\leq j\leq m^\star$ such that $x\in I_0^{j,m}.$ Moreover, $$\label{eqn: constant theta near theta func}\begin{aligned}\left|x-\frac{p_1+\theta^{j,m}_1}{m}\right|&< \left\vert x-\frac{p_1+\theta_1(x)}{m}\right\vert + \left\vert\frac{\theta_1(x)-\theta_1^{j,m}}{m}\right\vert\\ & \stackrel{\eqref{phi1}}{<}\frac{c}{4m^{1+r_1}}+ \frac{d_1}{m} \vert x- x_0^{j,m}\vert\\ & <\frac{c}{4m^{1+r_1}}+ d_1 \frac{c}{4d_1 m^{1+r_1}}< \frac{c}{m^{1+r_1}}.\end{aligned}$$ On combining with [\[large m\]](#large m){reference-type="ref" reference="large m"}, [\[R_0\'\]](#R_0'){reference-type="ref" reference="R_0'"} and $x\in I_0$, [\[eqn: constant theta near theta func\]](#eqn: constant theta near theta func){reference-type="ref" reference="eqn: constant theta near theta func"} implies $\frac{p_1+\theta^{j,m}_1}{m}\in 3^{n+1}I_0.$ For $\forall~ i\geq 2,$ $$\begin{aligned} &\left\vert\varphi_i\left(\frac{p_1+\theta^{j,m}_1}{m}\right)-\frac{p_i+\theta_i\left(\frac{p_1+\theta^{j,m}_1}{m}\right)}{m}\right\vert\\ & \leq\left|\varphi_i\left(\frac{p_1+\theta^{j,m}_1}{m}\right)-\varphi_i(x)\right|+\left|\varphi_i(x)-\frac{p_i+\theta_i\left(\frac{p_1+\theta^{j,m}_1}{m}\right)}{m}\right|\\ &\leq\left|\varphi_i\left(\frac{p_1+\theta^{j,m}_1}{m}\right)-\varphi_i(x)\right|+\left|\varphi_i(x)-\frac{p_i+\theta_i(x)}{m}\right|+\\& \hspace{2.2 cm}\frac{1}{m} \left\vert \theta_i(x)-\theta_i\left(\frac{p_1+\theta^{j,m}_1}{m}\right)\right\vert. 
\end{aligned}$$ Therefore, by [\[phii\]](#phii){reference-type="ref" reference="phii"} and mean value theorem, $$\label{eqn: second inequality} \begin{aligned} &\left\vert\varphi_i\left(\frac{p_1+\theta^{j,m}_1}{m}\right)-\frac{p_i+\theta_i\left(\frac{p_1+\theta^{j,m}_1}{m}\right)}{m}\right\vert\\ &<(f_0-1)\left|x-\frac{p_1+\theta^{j,m}_1}{m}\right|+\frac{c}{4 m^{1+r_i}}+ \\ & \hspace{ 2.2 cm}\frac{d_i}{m} \left\vert x-\frac{p_1+\theta_1^{j,m}}{m}\right\vert \\ %& \stackrel{\eqref{eqn: constant theta near theta func}}{<}(f_0-1)\left|x-\frac{p_1+\theta^j_1}{m}\right|+\frac{c}{m^{1+r_i}}+d_i \frac{c}{m^{1+r_1}}+ d_1 \\ &\stackrel{\eqref{eqn: constant theta near theta func}}{<}(f_0-1)\frac{c}{m^{1+r_1}}+\frac{c}{m^{1+r_i}}+d_i \frac{c}{m^{1+r_1}}\\ &\stackrel{\eqref{weights chain}}{\leq}\frac{(f_0+d_i)c}{m^{1+r_i}}. \end{aligned}$$ By [\[eqn: second inequality\]](#eqn: second inequality){reference-type="ref" reference="eqn: second inequality"}, $\mathbf{v}:=\frac{\mathbf{p}}{m}\in\mathcal{V}^{j,m}$ and by [\[eqn: constant theta near theta func\]](#eqn: constant theta near theta func){reference-type="ref" reference="eqn: constant theta near theta func"}, $x\in\widetilde\Delta^{j,m}_{\mathbf{\boldsymbol\theta}}(\mathbf{v})$. By [Lemma 4](#interval inside domain){reference-type="ref" reference="interval inside domain"}, $x\in\Delta^{j,m}_{\mathbf{\boldsymbol\theta}}(\mathbf{v})$ which completes the proof. ◻ For $q\in\mathbb{N}$, we define, $$\label{def: V_n} \mathcal{V}_q:=\left\{\mathbf{v}=\frac{\mathbf{p}}{m}\in\mathcal{V}, 2c|I_0|^{-1}R^{q+1}\leq m^{1+r_1}<2c|I_0|^{-1}R^{q+2}\right\},$$ and $\mathcal{V}_0:=\emptyset.$ Note that $$\mathcal{V}_q=\emptyset, ~~\forall~~ 0\leq q\leq \lambda_1+1, q\in\mathbb{N}.\label{empty}$$ This is because if $\frac{\mathbf{p}}{m}\in\mathcal{V}_q,$ $$\label{upper bound on m} 1\leq m\stackrel{\eqref{R_0'}, \eqref{def: c}}{<}\left(\frac{k_1^{1+r_1}}{n}R^{q-1}\right)^{\frac{1}{1+r_1}}<e^{\beta (q-1-\lambda_1)}.$$ By [\[beta\]](#beta){reference-type="ref" reference="beta"}, the last inequality holds if $\frac{k_1^{1+r_1}}{n } < R^{-\lambda_1}$, which is true by [\[upper bound on k_1r_1\]](#upper bound on k_1r_1){reference-type="ref" reference="upper bound on k_1r_1"}. Now [\[upper bound on m\]](#upper bound on m){reference-type="ref" reference="upper bound on m"} implies for $q\leq \lambda_1+1$, there is no such $m$, hence $\mathcal{V}_q=\emptyset.$ Then by definition, we have $$\bigcup_{q\in\mathbb{N}}\mathcal{V}_q=\mathcal{V}. \label{eqn: union}$$ Also, let us define $\mathcal{V}_q^{j,m}:=\mathcal{V}_q\cap \mathcal{V}^{j,m}.$ Then by [\[eqn: union\]](#eqn: union){reference-type="ref" reference="eqn: union"}, $$\label{union_sublevel}\bigcup_{q\in\mathbb{N}} \mathcal{V}_q^{j,m}=\mathcal{V}^{j,m}.$$ Then we have the following two propositions, which are crucial to constructing Cantor set later. **Proposition 4**. *Suppose $\mathbf{v}=\frac{\mathbf{p}}{m}\in\mathcal{V}$. 
Then, there are at most two $1\leq j\leq m^\star$ such that $\Delta^{j,m}_{\mathbf{\boldsymbol\theta}}(\mathbf{v})\neq\emptyset$.* *[\[key1\]]{#key1 label="key1"}* *Proof.* Suppose $1\leq j_1\neq j_2\neq \cdots \neq j_k\leq m^\star$ such that for $1\leq s \leq k$, $\Delta_{\mathbf{\boldsymbol\theta}}^{j_s,m}(\mathbf{v})\neq\emptyset.$ $$\label{inho_10} \forall 1\leq s \leq k,~ \exists~ x_{j_s}\in I^{j_s,m}_0 %x_{j_s}\in I\cap I^{j_s,m}_0, \textit{such that }\left|x_{j_s}-\frac{p_{1}+\theta_{1}(x_{j_s})}{m}\right|<\frac{c}{m^{1+r_1}}.$$ Hence for any $1\leq i,i'\leq k$, $$\begin{aligned}\vert x_{j_i}-x_{j_{i'}}\vert &\leq \sum_{s=i,i'}\left\vert x_{j_s}-\frac{p_1+\theta_1(x_{j_s})}{m}\right\vert +\frac{1}{m}\vert \theta_1(x_{j_i})-\theta_1(x_{j_{i'}})\vert\\ & \leq \frac{2c}{m^{1+r_1}}+ \frac{d_1}{m}\vert x_{j_i}-x_{j_{i'}}\vert. \end{aligned}$$ Since $m\in \mathcal{V},$ $m-d_1>0.$ Hence we have $$\vert x_{j_i}-x_{j_{i'}}\vert \leq \frac{2c}{m^{r_1}(m-d_1)}.$$ Moreover, by $|I_0|<1$, and $m>17|I_0|^{-1}d_1$ by [\[def V upper union\]](#def V upper union){reference-type="ref" reference="def V upper union"}, we have $m-d_1>16|I_0|^{-1}d_1$. Thus $\vert x_{j_i}-x_{j_{i'}}\vert < \frac{c|I_0|}{4d_1 m^{r_1}},$ which together with the fact that for any $1\leq s\leq k$, $x_{j_s}\in I_0^{j_s,m}$, implies that there are at most two such $s$ satisfying [\[inho_10\]](#inho_10){reference-type="ref" reference="inho_10"}. ◻ **Proposition 5**. *$\forall q\in\mathbb{N},~\forall I\in\mathcal{J}_{q-1}$, there will be at most one $\mathbf{v}=\frac{\mathbf{p}}{m}\in\mathcal{V}_q$ such that for some $1\leq j\leq m^\star$, $\mathbf{v}\in\mathcal{V}^{j,m}$ and $\Delta^{j,m}_{\mathbf{\boldsymbol\theta}}(\mathbf{v})\cap I\neq\emptyset$. [\[key0\]]{#key0 label="key0"}* *Proof.* Since by [\[empty\]](#empty){reference-type="ref" reference="empty"}, $\mathcal{V}_q=\emptyset$, for $q\leq \lambda_1+1$, the proposition is vacuously true in this range of $q$. Thus we prove the proposition for $q\geq \lambda_1+2$. Suppose, by contradiction, there are $q\geq \lambda_1+2$, $I\in\mathcal{J}_{q-1}$, and two such distinct $\mathbf{v}_s=\frac{\mathbf{p}_s}{m_s}\in\mathcal{V}_q$, $\mathbf{p}_{s}=(p_{i,s})_{i=1}^n, s=1,2$, for some $1\leq j_s\leq m_s^\star,$ $\mathbf{v}_s\in\mathcal{V}^{j_s,m_s}$ and there exists $x_s\in\Delta_{\mathbf{\boldsymbol\theta}}^{j_s,m_s}(\mathbf{v}_s)\cap I\neq\emptyset$. In particular, $$\label{inho_1} \forall s=1,2,~ \exists~ x_s\in I\cap I^{j_s,m_s}_0, \textit{s.t. },\left|x_s-\frac{p_{1,s}+\theta_{1}(x_s)}{m_s}\right|<\frac{c}{m_s^{1+r_1}},$$ and $$\label{inho_2} \forall i\geq 2,~ \left|\varphi_i\left(\frac{p_{1,s}+\theta^{j_s,m_s}_1}{m_s}\right)-\frac{p_{i,s}+\theta_i\left(\frac{p_{1,s}+\theta^{j_s,m_s}_1}{m_s}\right)}{m_s}\right|<\frac{(f_0+d_i)c}{m_s^{1+r_i}}.$$ Without loss of generality, $$m_1\geq m_2. \label{assume}$$ Since $x_1,x_2\in I$, we have $$|x_1-x_2|<|I|=|I_0|R^{-q+1}. 
\label{diam}$$ Since $\theta_1$ is Lipschitz, and $x_s\in I_0^{j_s,m_s}$, by [\[inho_1\]](#inho_1){reference-type="ref" reference="inho_1"}, $$\label{eqn: shifted rational near xs } \forall ~s=1,2,~\left|x_s-\frac{p_{1,s}+\theta_{1}^{j_s,m_s}}{m_s}\right|<\frac{5c}{4m_s^{1+r_1}}.$$ Then we have, $$\label{ho1} \begin{aligned} |(m_1-m_2)x_1-(p_{1,1}-p_{1,2})| &\leq m_2|x_1-x_2|+\sum_{s=1,2}m_s\left |x_s-\frac{p_{1,s}+\theta_1(x_s)}{m_s}\right|+\vert \theta_1(x_1)-\theta_1(x_2)\vert \\ &\stackrel{\eqref{diam},\eqref{inho_1}}{<}m_2|I_0|R^{-q+1}+\sum_{s=1,2}\frac{c}{m_s^{r_1}}+ d_1 |I_0|R^{-q+1}\\ &\stackrel{\eqref{assume}}{\leq}m_2(1+d_1)|I_0|R^{-q+1}+\frac{2c}{m_2^{r_1}}. \end{aligned}$$ Note that since $\frac{\mathbf{p}_2}{m_2}\in\mathcal{V}_q$, $m_2(1+d_1)|I_0|R^{-q+1}< 2c(1+d_1)R^{3}\stackrel{\eqref{def: c}}{<}\frac{1}{2},$ and $\frac{2c}{m_2^{r_1}}<\frac{1}{2}.$ This makes the right side of [\[ho1\]](#ho1){reference-type="eqref" reference="ho1"} $<1.$ Moreover, for $\forall i\geq 2,$ $$\begin{aligned} |(m_1-m_2)\varphi_i\left(\frac{p_{1,1}+\theta^{j_1,m_1}_1}{m_1}\right) & - \left.(p_{i,1}-p_{i,2})\right| \nonumber\\ \leq& m_2 \left|\varphi_i\left(\frac{p_{1,1}+\theta^{j_1,m_1}_1}{m_1}\right)-\varphi_i\left(\frac{p_{1,2}+\theta^{j_2,m_2}_1}{m_2}\right)\right|\nonumber\\&+\left|\theta_i\left(\frac{p_{1,1}+\theta^{j_1,m_1}_1}{m_1}\right)-\theta_i\left(\frac{p_{1,2}+\theta^{j_2,m_2}_1}{m_2}\right)\right|\nonumber\\&+\sum_{s=1,2}m_s\left|\varphi_i\left(\frac{p_{1,s}+\theta^{j_s,m_s}_1}{m_s}\right)-\frac{p_{i,s}+\theta_i\left(\frac{p_{1,s}+\theta^{j_s,m_s}_1}{m_s}\right)}{m_s}\right|\nonumber\\\stackrel{MVT, \eqref{inho_2}}{<}& m_2(f_0+d_i)\left(|x_1-x_2|+\sum_{s=1,2}\left|x_s-\frac{p_{1,s}+\theta^{j_s,m_s}_1}{m_s}\right|\right)\nonumber\\&+\sum_{s=1,2}\frac{(f_0+d_i)c}{m_s^{r_i}}\label{ho2}\\ \stackrel{\eqref{eqn: shifted rational near xs }, \eqref{diam}}{\leq}&\frac{5(f_0+d_i)c}{m_2^{r_i}}+m_2(f_0+d_i)|I_0|R^{-q+1}\nonumber.\end{aligned}$$ Using [\[def: c\]](#def: c){reference-type="ref" reference="def: c"}, the right side of [\[ho2\]](#ho2){reference-type="eqref" reference="ho2"} is $<1.$ Now note, we may assume $m_1>m_2$. Otherwise, as the right sides of [\[ho1\]](#ho1){reference-type="ref" reference="ho1"}, and [\[ho2\]](#ho2){reference-type="ref" reference="ho2"} are $<1$, $\mathbf{p}_1=\mathbf{p}_2$, giving $\mathbf{v}_1=\mathbf{v}_2,$ that contradicts our assumption. Next, we claim that $$\label{sol1} (m_1-m_2)^{r_1}|(m_1-m_2)x_1-(p_{1,1}-p_{1,2})|<\frac{k_1}{n},$$ and for $2\leq i\leq n,$ $$\label{sol2} (m_1-m_2)^{r_i}|(m_1-m_2)\varphi_i(x_1)-(p_{i,1}-p_{i,2})|<\frac{k_1}{n}.$$ First we show [\[sol1\]](#sol1){reference-type="ref" reference="sol1"} is true; $$\begin{aligned} (m_1-m_2)^{r_1}|(m_1-m_2)x_1-(p_{1,1}-p_{1,2})|&\stackrel{\eqref{ho1},\eqref{assume}}{<}m_1^{1+r_1}(1+d_1)|I_0|R^{-q+1}+\frac{2cm_1^{r_1}}{m_2^{r_1}}\\&\stackrel{\eqref{def: V_n}}{<}2c(1+d_1)R^3+2cR\stackrel{\eqref{def: c}}{<}\frac{k_1}{n}. \end{aligned} \label{abc1}$$ Next, we show that [\[sol2\]](#sol2){reference-type="ref" reference="sol2"} holds. 
$$\label{abc} \begin{aligned} &(m_1-m_2)^{r_i}|(m_1-m_2)\varphi_i(x_1)-(p_{i,1}-p_{i,2})|\\ &\leq (m_1-m_2)^{1+r_1}\left|\varphi_i(x_1)-\varphi_i\left(\frac{p_{1,1}+\theta^{j_1,m_1}_1}{m_1}\right)\right|\\ & \hspace{2 cm}+(m_1-m_2)^{r_i}\left|(m_1-m_2)\varphi_i\left(\frac{p_{1,1}+\theta^{j_1,m_1}_1}{m_1}\right)-(p_{i,1}-p_{i,2})\right|\\ &\stackrel{\eqref{assume},\eqref{ho2},\eqref{eqn: shifted rational near xs },MVT}{\leq}\frac{5 f_0 m_1^{1+r_1} c}{4m_1^{1+r_1}}+\frac{ 5(f_0+d_i)cm_1^{r_i}}{m_2^{r_i}}+ m_1^{1+r_i}(f_0+d_i)|I_0|R^{-q+1} \\ & \hspace{1.5 cm}= \frac{5}{4}f_0c + \frac{ 5 (f_0+d_i)cm_1^{r_i}}{m_2^{r_i}}+ m_1^{1+r_i}(f_0+d_i)|I_0|R^{-q+1}\\ &\hspace{1.5 cm} \stackrel{\eqref{assume}}{\leq}\frac{10(f_0+d_i)cm_1^{r_i}}{m_2^{r_i}}+m_1^{1+r_i}(f_0+d_i)|I_0|R^{-q+1} \\& \hspace{1.2 cm}\stackrel{\eqref{def: V_n},\eqref{weights chain}}{<}10(f_0+d_i)cR+2(f_0+d_i)cR^3\stackrel{\eqref{def: c}}{<}\frac{k_1}{n}. \end{aligned}$$ We also have, $$1\leq m_1-m_2\leq m_1\stackrel{\eqref{R_0'}, \eqref{def: c}}{<}\left(\frac{k_1^{1+r_1}}{n}R^{q-1}\right)^{\frac{1}{1+r_1}}\stackrel{\eqref{beta},\eqref{upper bound on k_1r_1}}{<}e^{\beta (q-1-\lambda_1)}.$$ Since $m_1>m_2$, by [\[Change of variable\]](#Change of variable){reference-type="ref" reference="Change of variable"}, [\[sol1\]](#sol1){reference-type="ref" reference="sol1"} and [\[sol2\]](#sol2){reference-type="ref" reference="sol2"} the integer vector $(m_1-m_2,\mathbf{p}_1-\mathbf{p}_2)$ is a non-zero solution to the system [\[simul system\]](#simul system){reference-type="ref" reference="simul system"} with $Q=m_1-m_2<e^{\beta(q-1-\lambda_1)}$, $q\geq\lambda_1+2, \text{ and }I\in\mathcal{J}_{q-1}, x\in I$, which leads to a contradiction to [Proposition 3](#result){reference-type="ref" reference="result"}. ◻ Combining [\[key1\]](#key1){reference-type="ref" reference="key1"} and [\[key0\]](#key0){reference-type="ref" reference="key0"}, we have the following key proposition. **Proposition 6**. *$\forall q\in\mathbb{N},~\forall I\in\mathcal{J}_{q-1}$, there will be at most one $\mathbf{v}=\frac{\mathbf{p}}{m}\in\mathcal{V}_q$ and at most two $1\leq j\leq m^\star$ such that $\mathbf{v}\in\mathcal{V}^{j,m}$ and $\Delta^{j,m}_{\mathbf{\boldsymbol\theta}}(\mathbf{v})\cap I\neq\emptyset$. [\[key\]]{#key label="key"}* In what follows, we take $p,q\in\mathbb{N}\cup\{0\}.$ For any $q\geq 0$, we define, $$\label{def: M} \mathcal{M}_{q,q+1}:=\left\{I\in\mathcal{I}_{q+2}:\exists~\mathbf{v}=\frac{\mathbf{p}}{m}\in\mathcal{V}_{q+1}, \left| \begin{aligned} &~\mathbf{v}\in\mathcal{V}^{j,m} \text{ and }\\ &~\Delta_{\mathbf{\boldsymbol\theta}}^{j,m}(\mathbf{v})\cap I\neq\emptyset\end{aligned} \right. \text{ for some } 1\leq j\leq m^\star\right\}.$$ and let $\mathcal{M}_{p,q}:=\emptyset, 0\leq p\leq q, p\neq q-1$. Next, for $0\leq p\leq q,$ we define $$f_{p,q}:=\max_{J\in\mathcal{J}_{p}}\#\{I\in\mathcal{M}_{p,q}: I\subset J\}.$$ Trivially, $$\label{f0q} f_{q,q}=0.$$ And for $p<q,$ we have the following proposition. **Proposition 7**. *For all $p\geq 0,q\geq 1$ with $p< q$, we have $$f_{p,q}\leq 6 .$$* *Proof.* If $p\neq q-1$, then $f_{p,q}=0$, which trivially satisfies the upperbound. 
If $p=q-1$, then given $J\in\mathcal{J}_p$, by [\[key\]](#key){reference-type="ref" reference="key"}, there is at most one $\mathbf{v}=\frac{\mathbf{p}}{m}\in\mathcal{V}_q$ and at most $2$, $1\leq j\leq m^\star$ such that $\mathbf{v}\in \mathcal{V}^{j,m}$, and $$\label{split in jm} \Delta^{j,m}_{\mathbf{\boldsymbol\theta}}(\mathbf{v})\cap J\neq\emptyset.$$ By [Lemma 4](#interval inside domain){reference-type="ref" reference="interval inside domain"}, we have $$4\widetilde \Delta^{j,m}_{\mathbf{\boldsymbol\theta}}(\mathbf{v})\cap J\neq\emptyset.$$ Also, by [\[def: V_n\]](#def: V_n){reference-type="ref" reference="def: V_n"} and [\[def: inho_interval\]](#def: inho_interval){reference-type="ref" reference="def: inho_interval"}, $|4 \widetilde\Delta^{j,m}_{\mathbf{\boldsymbol\theta}}(\mathbf{v})|\leq 2|I_0|R^{-q-1}$. Then for each $j$, the domain $\Delta^{j,m}_{\mathbf{\boldsymbol\theta}}(\mathbf{v})$ can intersect at most $3$ of intervals in $\{I\subset J:I\in\mathcal{I}_{q+1}\}$, which completes the proof. ◻ ## The new Cantor set . We construct the Cantor set by induction on $q\geq 0$. Let $\mathcal{J}_0'=\{I_0\}$. Let us assume that $\mathcal{J}'_1,\cdots, \mathcal{J}'_q$, for $q\geq 0$ are already constructed. Let $\mathcal{I}_{q+1}'=\mathbf{Par}_R(\mathcal{J}_q').$ Then we define, $$\hat{\mathcal{J}}_{p,q}':=\mathcal{I}_{q+1}'\cap(\mathcal{M}_{p,q}\cup\hat{\mathcal{J}}_{p,q}),~~\forall~~ 0\leq p\leq q .$$ Now, let $\hat{\mathcal{J}}_{q}'=\cup_{p=0}^q\hat{\mathcal{J}}_{p,q}'$ . Then we remove the subcollection $\hat{\mathcal{J}}_q'$ from $\mathcal{I}_{q+1}'$, i.e. $\mathcal{J}_{q+1}'=\mathcal{I}_{q+1}'\setminus\hat{\mathcal{J}}_q'.$ **Lemma 6**. *For $q\geq 1$, $\mathcal{J}_q'\subseteq \mathcal{J}_q$, and $\mathcal{I}'_{q}\subseteq\mathcal{I}_q$.* *Proof.* We prove this lemma by induction. When $q=1$, note that $\mathcal{I}_1'=\mathcal{I}_1,$ and $\mathcal{\hat J}'_0= \mathcal{\hat J}'_{0,0}= \mathcal{I}_1'\cap \mathcal{\hat J}_{0,0}= \mathcal{I}_1\cap \mathcal{\hat J}_{0}= \mathcal{\hat J}_{0}.$ Hence $\mathcal{J}_1'=\mathcal{I}_1'\setminus \mathcal{\hat J}_{0}'= \mathcal{I}_1\setminus \mathcal{\hat J}_{0}=\mathcal{J}_1$ By induction hypothesis, $\mathcal{J}'_i\subseteq \mathcal{J}_i$ and $\mathcal{I}'_i\subseteq \mathcal{I}_i$ for $i=1,\cdots,q.$ Then $\mathcal{I}'_{q+1}= \mathbf{Par}_R(\mathcal{J}_q')\subseteq \mathbf{Par}_R(\mathcal{J}_q)= \mathcal{I}_{q+1}.$ Note that $\hat{\mathcal{J}}_{p,q}\cap \mathcal{I}_{q+1}'\subseteq \hat{\mathcal{J}}_{p,q}',\forall~0\leq p\leq q\implies \hat{\mathcal{J}}_{q}\cap \mathcal{I}_{q+1}'\subseteq \hat{\mathcal{J}}_{q}'.$ Then $\mathcal{J}_{q+1}' \subseteq \mathcal{I}_{q+1}'\setminus \left( \mathcal{I}_{q+1}'\cap \hat{\mathcal{J}}_q\right)= \left(\mathcal{I}_{q+1}'\cap\mathcal{I}_{q+1}\right)\setminus \left( \mathcal{I}_{q+1}'\cap \hat{\mathcal{J}}_q\right)= \left(\mathcal{I}_{q+1}\setminus\hat{\mathcal{J}}_q\right)\cap\mathcal{I}_{q+1}'\subseteq\mathcal{J}_{q+1}$ and the induction follows. ◻ Lastly, we define a new Cantor set $$\label{new cantor} \mathcal{K}_{\infty}':=\bigcap_{q\geq 0}\bigcup_{I\in\mathcal{J}_q'}I.$$ **Lemma 7**. *Let $\mathcal{K}'_{\infty}$ be the cantor set we constructed in [\[new cantor\]](#new cantor){reference-type="ref" reference="new cantor"}. 
Then, $$\mathcal{K}'_{\infty}\subseteq \mathcal{K}_{\infty}\subseteq \operatorname{supp}{\mu},$$ where $\mathcal{K}_{\infty}$ is the Cantor set defined in [2.1](#subsection: Notation){reference-type="ref" reference="subsection: Notation"}, and $\mu$ is as in [Theorem 3](#thm: twin main){reference-type="ref" reference="thm: twin main"}.* *Proof.* Note $\mathcal{J}_0'=\mathcal{J}_0$, and by [Lemma 6](#prime inside old){reference-type="ref" reference="prime inside old"}, $\mathcal{J}'_{q}\subseteq\mathcal{J}_q,\forall q\geq 1$. Hence $\mathcal{K}_{\infty}'=\cap_{q\geq 0}\cup_{I\in\mathcal{J}_q'}I\subseteq\cap_{q\geq 0}\cup_{I\in\mathcal{J}_q}I=\mathcal{K}_{\infty}.$ The last inclusion was already proved in [@BNY22 Proposition 2.6]. ◻ **Lemma 8**. *Let $\mathcal{K}'_{\infty}$ be as before, and let $\mathbf{Bad}_{\mathbf{\boldsymbol\theta}}(\mathbf{r})$ be as defined in [\[def: simul_inho\]](#def: simul_inho){reference-type="ref" reference="def: simul_inho"}. Then $$\mathcal{K}'_{\infty}\subseteq\varphi^{-1}(\mathbf{Bad}_{\mathbf{\boldsymbol\theta}}(\mathbf{r})).$$* *Proof.* By [\[def: M\]](#def: M){reference-type="ref" reference="def: M"}, $\forall q\geq 2, ~\forall~ I\in\mathcal{J}_q',~ \forall~ \mathbf{v}=\frac{\mathbf{p}}{m}\in\mathcal{V}_{q-1}$, for all $1\leq j\leq m^\star,$ $$\Delta_{\mathbf{\boldsymbol\theta}}^{j,m}(\mathbf{v})\cap I=\emptyset \text{ when } \mathbf{v}\in \mathcal{V}^{j,m}.$$ Hence for any $q\geq 2,$ $$\bigcup_{I\in\mathcal{J}_q'}I\subseteq I_0\setminus \bigcup_{m> 17\vert I_0\vert^{-1}d_1}\bigcup_{j=1}^{m^\star}\bigcup_{\mathbf{v}\in \mathcal{V}_{q-1}^{j,m}}\Delta^{j,m}_{\mathbf{\boldsymbol\theta}}(\mathbf{v}).$$ Therefore, by [Lemma 5](#avoiding inho_interval){reference-type="ref" reference="avoiding inho_interval"}, [\[union_sublevel\]](#union_sublevel){reference-type="ref" reference="union_sublevel"}, and [\[new cantor\]](#new cantor){reference-type="ref" reference="new cantor"}, $$\mathcal{K}_{\infty}'\subseteq I_0\setminus \bigcup_{m> 17\vert I_0\vert^{-1}d_1}\bigcup_{j=1}^{m^\star}\bigcup_{\mathbf{v}\in\mathcal{V}^{j,m}}\Delta^{j,m}_{\mathbf{\boldsymbol\theta}}(\mathbf{v})\subseteq\varphi^{-1}(\mathbf{Bad}_{\mathbf{\boldsymbol\theta}}(\mathbf{r})).$$ ◻ Now by [Lemma 7](#new inside old){reference-type="ref" reference="new inside old"} and [Lemma 8](#Cantor inside bad){reference-type="ref" reference="Cantor inside bad"}, we have $$\label{new cantor set inside curve and support} \mathcal{K}'_\infty\subseteq\varphi^{-1}(\mathbf{Bad}_{\mathbf{\boldsymbol\theta}}(\mathbf{r}))\cap\operatorname{supp}\mu.$$ **Remark 1**. *Note that since $\mathcal{K}'_\infty\subseteq \mathcal{K}_\infty,$ by [@BNY22 Proposition 2.6] and [\[new cantor set inside curve and support\]](#new cantor set inside curve and support){reference-type="ref" reference="new cantor set inside curve and support"}, $$\mathcal{K}'_\infty\subseteq \varphi^{-1}(\mathbf{Bad}_{\mathbf{\boldsymbol\theta}}(\mathbf{r}))\cap \varphi^{-1}(\mathbf{Bad}(\mathbf{r}))\cap\operatorname{supp}\mu.$$* Now, to prove [Theorem 3](#thm: twin main){reference-type="ref" reference="thm: twin main"}, the main task is to show that $\mathcal{K}'_{\infty}$ is non-empty, which will be the content of the rest of this section.
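Since the non-emptiness of $\mathcal{K}'_{\infty}$ will be verified through the counting criterion of Theorem 4, the following minimal numerical sketch of that recursion may help fix ideas before the required upper bounds are derived: given a table of removal counts $h_{p,q}$, it evaluates the quantities $t_q$ and reports whether they all stay positive. The values of $R$, $C$, $\alpha$, $C_0$, and $\eta_0$ used to populate the table are illustrative assumptions chosen only so that the recursion can be run; they are not the constants produced by the proof.

``` python
import numpy as np


def cantor_nonempty(R, h):
    # Evaluate the recursion of Theorem 4: t_0 = R - h[0][0] and
    # t_q = R - h[q][q] - sum_j h[q-j][q] / (t[q-1] * ... * t[q-j]).
    # The generalized Cantor set is nonempty if every t_q is positive.
    Q = len(h)
    t = np.zeros(Q)
    t[0] = R - h[0][0]
    for q in range(1, Q):
        if t[q - 1] <= 0:
            return False, t
        s = sum(h[q - j][q] / np.prod(t[q - j:q]) for j in range(1, q + 1))
        t[q] = R - h[q][q] - s
    return bool(np.all(t > 0)), t


# Illustrative (assumed) removal bounds mimicking the shape of the upper bounds
# derived below: h[q][q] = R - (4C)^{-2} R^alpha on the diagonal and
# h[p][q] = C0 * R^{alpha (1 - eta0)(q - p + 1)} off the diagonal.
R, C, alpha, C0, eta0, Q = 10**4, 1.0, 1.0, 2.0, 0.5, 12
h = [[0.0] * Q for _ in range(Q)]
for q in range(Q):
    h[q][q] = R - (4 * C) ** (-2) * R**alpha
    for p in range(q):
        h[p][q] = C0 * R ** (alpha * (1 - eta0) * (q - p + 1))

nonempty, t = cantor_nonempty(R, h)
print(nonempty, t[-1])
```

With these assumed values the recursion stabilizes at a positive level, which is exactly the behaviour the remainder of the section establishes rigorously for the actual counts $h'_{p,q}$.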
## New upper bounds For all $0\leq p\leq q,$ let us define $$h_{p,q}':=\max_{J\in\mathcal{J}'_{p}}\#\{I\in\hat{\mathcal{J} }'_{p,q}:I\subset J\}.$$ By definition of $\hat{\mathcal{J}}'_{p,q}$ and [Lemma 6](#prime inside old){reference-type="ref" reference="prime inside old"}, we have that $$\begin{split}h_{p,q}'\leq\max_{J\in\mathcal{J}_{p}}\#\{I\in\hat{\mathcal{J} }'_{p,q}:I\subset J\} &\leq \max_{J\in\mathcal{J}_{p}}\#\{I\in\hat{\mathcal{J }}_{p,q}:I\subset J\}+\max_{J\in\mathcal{J}_{p}}\#\{I\in\mathcal{M}_{p,q}: I\subset J\}\\=&h_{p,q}+f_{p,q}.\end{split}$$ Therefore, $$\label{triangle} h_{p,q}'\leq h_{p,q}+f_{p,q}.$$ **Proposition 8**. *Let $C,\alpha$ be as in [Theorem 3](#thm: twin main){reference-type="ref" reference="thm: twin main"}. For $\forall q\geq 0$, and $R^{\alpha}\geq 21C^2$ $$h_{q,q}'\leq R-(4C)^{-2}R^{\alpha}.$$* *Proof.* By [\[triangle\]](#triangle){reference-type="ref" reference="triangle"}, [\[f0q\]](#f0q){reference-type="ref" reference="f0q"}, and [\[hqq\]](#hqq){reference-type="ref" reference="hqq"}, we have $h_{q,q}'\leq h_{q,q}+f_{q,q}\leq R-(4C)^{-2}R^{\alpha}.$ ◻ **Proposition 9**. *There exist constants $R_0'\geq 1,C_0'>0,\eta_0'>0$, such that $\forall R\geq R_0'$ and $~0\leq p<q$,$$h_{p,q}'\leq C_0'R^{\alpha(1-\eta_0')(q-p+1)}.$$* *Proof.* Let $R_0, C_0$ and $\eta_0$ be as in [\[hpq\]](#hpq){reference-type="ref" reference="hpq"}. Choose $R_0',C_0',\eta_0'$ such that $R_0'>\max\{ R_0,\frac{1}{|I_0|}\},C_0'\geq\max\{16,8C_0\}$ and $\eta_0=\eta_0'$. Then by [\[triangle\]](#triangle){reference-type="ref" reference="triangle"}, [\[hpq\]](#hpq){reference-type="ref" reference="hpq"} and [Proposition 7](#fpq){reference-type="ref" reference="fpq"}, $$h_{p,q}'\leq h_{p,q}+f_{p,q}\leq C_0R^{\alpha(1-\eta_0)(q-p+1)}+6\leq C_0'R^{\alpha(1-\eta_0')(q-p+1)}.$$ ◻ ## Completing the proof of [Theorem 3](#thm: twin main){reference-type="ref" reference="thm: twin main"} {#completing-the-proof-of-thm-twin-main} . As mentioned earlier, the non-emptiness of $\mathcal{K}'_{\infty}$ implies [Theorem 3](#thm: twin main){reference-type="ref" reference="thm: twin main"}. Given [Proposition 8](#hqq'){reference-type="ref" reference="hqq'"} and [Proposition 9](#hpq'){reference-type="ref" reference="hpq'"}, the rest of the proof is exactly the same as in [@BNY22 Section 4], which we rewrite for completion. By [\[nemp\]](#nemp){reference-type="ref" reference="nemp"}, it suffices to prove that for $q\geq0$, $$t_q':=R-h_{q,q}'-\sum_{j=1}^q\frac{h_{q-j,q}'}{\prod_{i=1}^jt_{q-i}'}\geq(6C)^{-2}R^{\alpha} \label{non}.$$ We prove this by induction. When $q=0$, $t_0'=R-h_{0,0}'\geq (4C)^{-2}R^{\alpha}$ by [Proposition 8](#hqq'){reference-type="ref" reference="hqq'"}, which satisfies [\[non\]](#non){reference-type="ref" reference="non"}. Suppose [\[non\]](#non){reference-type="ref" reference="non"} holds for $q<q_1$ where $q_1\geq 1$. 
Then by [Proposition 9](#hpq'){reference-type="ref" reference="hpq'"} and [Proposition 8](#hqq'){reference-type="ref" reference="hqq'"}, we have $$t_{q_1}'\geq(4C)^{-2}-R^{\alpha}(C_0'R^{-\eta_0'\alpha}\sum_{j=1}^{\infty}(\frac{36C^2}{R^{\eta_0'\alpha}})^j).$$ Thus, we only have to make sure that $R$ is large enough such that $C_0'R^{-\eta_0'\alpha}<\frac{1}{32C^2}$ and $\sum_{j=1}^{\infty}(\frac{36C^2}{R^{\eta_0'\alpha}})^j\leq 1$, while satisfying the lowerbounds for $R$ in [\[R_0\'\]](#R_0'){reference-type="ref" reference="R_0'"}, [Proposition 8](#hqq'){reference-type="ref" reference="hqq'"} and [Proposition 9](#hpq'){reference-type="ref" reference="hpq'"}, to make the induction work. Thus the proof is done. We end this section with two minor remarks. ## Remark {#rmk: main2 .unnumbered} 1. Let $\mathbf{r}$ and $\varphi$ be as in [Theorem 2](#thm: main){reference-type="ref" reference="thm: main"}. Let $\tilde\theta_i:U\to \mathbb{R}$, Lipschitz function for $1\leq i\leq n$. Then we showed, $$\{x\in U:\liminf_{q\in\mathbb{Z}\setminus\{0\}, \vert q\vert \to\infty}\max_{1\leq i\leq n}\vert q\varphi_i(x)-\tilde\theta_i(x)\vert_{\mathbb{Z}}^{1/r_i} \vert q\vert>0 \}$$ is absolute winning on $U.$ The main difference between this statement and [Theorem 2](#thm: main){reference-type="ref" reference="thm: main"} is that, in the case of curves, we can take the inhomogeneous function to be any Lipschitz function, and don't have to restrict to composite function $\theta^\varphi_i$ as in [2.1](#subsection: Notation){reference-type="ref" reference="subsection: Notation"}. 2. [\[remk on defn\]]{#remk on defn label="remk on defn"} Note that $\textbf{Bad}(\mathbf{r})$ is same as the set $$\{\mathbf{x}\in\mathbb{R}^n:\inf_{q\in\mathbb{Z}\setminus\{0\}}\max_{1\leq i\leq n}\vert qx_i\vert_{\mathbb{Z}}^{1/r_i} \vert q\vert>0 \}.$$ But in the inhomogeneous set-up, one may not replace '$\liminf$' in [\[def: simul_inho\]](#def: simul_inho){reference-type="ref" reference="def: simul_inho"} by $`\inf$'. For instance, if $\theta(x)=ax+b$, where $a\in \mathbb{Z}\setminus\{0\}, b\in\mathbb{Z}$, one can easily see that replacing '$\liminf$' by '$\inf$' will make the set of bad to be empty. However, this kind of function is not interesting since the corresponding inhomogeneous set is actually the same as a homogeneous bad. So for a sake of generality, we took $`\liminf$' in the definition in [\[def: simul_inho\]](#def: simul_inho){reference-type="ref" reference="def: simul_inho"}. # Final Remark {#final} In this paper, we consider simultaneous inhomogeneous bad. One can consider a dual analogue of inhomogeneous bad as follows. For any $\theta:\mathbb{R}^n\to\mathbb{R},$ map let $$\label{def: dual_inho} \widetilde{\mathbf{Bad}_{\theta}(\mathbf{r})}:=\{\mathbf{x}\in\mathbb{R}^n:\liminf_{\mathbf{q}=(q_i)\in\mathbb{Z}^n\setminus\{0\}, \Vert \mathbf{q}\Vert\to\infty}\vert \mathbf{q}\cdot \mathbf{x}-\theta(\mathbf{x})\vert_{\mathbb{Z}} \max_{1\leq i\leq n}\vert q_i\vert^{1/r_i} >0 \}.$$ In [@Beresnevich2015], it was shown that $\widetilde{\mathbf{Bad}_{0}(\mathbf{r})}=\mathbf{Bad}_{\mathbf{0}}(\mathbf{r}),$ using Khintchine's transference principle. But more generally in the inhomogeneous setting, it is not clear if the dual and simultaneous bad sets are related. Therefore, it is natural to consider the following problem. **Problem 1**. 
*Let $\varphi:U \subset \mathbb{R}\to \mathbb{R}^n$ be as in [Theorem 2](#thm: main){reference-type="ref" reference="thm: main"}, and suppose $\theta:\mathbb{R}^n\to\mathbb{R}$ has suitable regularity properties (one may simply take a constant map). Prove that $\varphi^{-1}(\widetilde{\mathbf{Bad}_{\theta}(\mathbf{r})})$ is absolute winning on $U.$* Another interesting direction is to find the measure of the inhomogeneous bad vectors inside nondegenerate curves. More precisely, **Problem 2**. *Let $\varphi:U\subset \mathbb{R}\to \mathbb{R}^n$ and $\mathbf{\boldsymbol\theta}$ be as in [Theorem 2](#thm: main){reference-type="ref" reference="thm: main"}. Show that $$\lambda(\varphi^{-1}(\mathbf{Bad}_{\mathbf{\boldsymbol\theta}}(\mathbf{r})))=0,$$ where $\lambda$ is the Lebesgue measure on $\mathbb{R}.$* Our main result, [Theorem 2](#thm: main){reference-type="ref" reference="thm: main"}, guarantees that this set has Hausdorff dimension $1$. It is therefore natural to ask for the value of its measure at this dimension. One can follow the general framework developed in [@BDGW_null] to achieve such results. ## Acknowledgments {#acknowledgments .unnumbered} We thank the University of Michigan for giving us the opportunity to work on this project.
--- abstract: | We present an optimization-based framework to construct confidence intervals for functionals in constrained inverse problems, ensuring valid one-at-a-time frequentist coverage guarantees. Our approach builds upon the now-called strict bounds intervals, originally pioneered by @burrus1965utilization [@rust_burrus], which offer ways to directly incorporate any side information about parameters during inference without introducing external biases. Notably, this family of methods allows for uncertainty quantification in ill-posed inverse problems without needing to select a regularizing prior. By tying our proposed intervals to an inversion of a constrained likelihood ratio test, we translate interval coverage guarantees into type-I error control, and characterize the resulting interval via solutions of optimization problems. Along the way, we refute the Burrus conjecture, which posited that, for possibly rank-deficient linear Gaussian models with positivity constraints, a correction based on the quantile of the chi-squared distribution with one degree of freedom suffices to shorten intervals while maintaining frequentist coverage guarantees. Our framework provides a novel approach to analyze the conjecture and construct a counterexample by employing a stochastic dominance argument, which we also use to disprove a general form of the conjecture. We illustrate our framework with several numerical examples and provide directions for extensions beyond the Rust--Burrus method for non-linear, non-Gaussian settings with general constraints. author: - | Pau Batlle [^1] ¶\ <pbatllef@caltech.edu>  - | Pratik Patil [^2] ¶\ <pratikpatil@berkeley.edu>  - | Michael Stanley [^3] ¶\ <mcstanle@andrew.cmu.edu>  - | Houman Owhadi ¶\ <owhadi@caltech.edu>  - | Mikael Kuusela ¶\ <mkuusela@andrew.cmu.edu>  bibliography: - references.bib title: " Optimization-based frequentist confidence intervals for functionals in constrained inverse problems: Resolving the Burrus conjecture" --- # Introduction {#sec:intro} Advances in data collection and computational power in recent years have led to an increase in the prevalence of high-dimensional, ill-posed inverse problems, especially within the physical sciences. These challenges are particularly evident in domains like remote sensing and data assimilation, where uncertainty quantification (UQ) in inverse problems is of paramount importance. Many of these inverse problems also come with inherent physical constraints on their parameters. This paper focuses on constrained inverse problems for which the noise model is known and the forward model, defined on a finite-dimensional parameter space, can be computationally evaluated. Our primary objective is to construct a confidence interval for a functional of the forward model parameters. Formally, we consider statistical models of the form ${\bm{y}}\sim P_{{\bm{x}}^*}$, where ${\bm{y}}\in \mathbb{R}^m$ is sampled according to a parametric probability distribution. Here ${\bm{x}}^* \in \mathbb{R}^p$ is a fixed unknown parameter, which we know a priori lies within the set $\mathcal X$; see for an illustration. Our goal is to construct confidence intervals for a known one-dimensional functional $\varphi({\bm{x}}^*) \in \mathbb{R}$. Ideally, we want the length of these intervals to be as small as possible while still maintaining a frequentist coverage guarantee in finite sample. 
In other words, given a prescribed coverage level $1 - \alpha$ for some $\alpha \in (0, 1)$, we want to construct functions of the data $I^-({\bm{y}})$ and $I^+({\bm{y}})$ such that the following coverage guarantee holds in finite sample[^4]: $$\label{u} \inf_{{\bm{x}}\in\mathcal X}\mathbb{P}_{{\bm{y}}\sim P_{{\bm{x}}}}\big(\varphi({\bm{x}}) \in [I^-({\bm{y}}), I^+({\bm{y}})] \big) \ge 1-\alpha.$$ While the requirement [\[u\]](#u){reference-type="eqref" reference="u"} necessitates that we maintain at least $1 - \alpha$ coverage, we also want it to be approximately accurate by minimizing slack in the inequality. Ensuring such *proper calibration*, namely, confidence intervals that do not *undercover* (fail to meet the $1-\alpha$ guarantee for some ${\bm{x}}\in \mathcal X$) or *overcover* (are too large and therefore exceed the required coverage) is paramount in practical applications. This is especially true in contexts that necessitate stringent safety and certification standards. Intervals that undercover yield unreliable inferences that may expose the system to unforeseen risks. Conversely, intervals that overcover might lead to excessive economic costs by needing to guard against scenarios that are unlikely to occur. ![ Illustration of the problem setup. We seek to construct confidence intervals $[I^-({\bm{y}}), I^+({\bm{y}})] \subseteq \mathbb{R}$ for $\varphi({\bm{x}}^*) \in \mathbb{R}$ from an observation ${\bm{y}}\in \mathbb{R}^m$ sampled from $P_{{\bm{x}}^*}$ that satisfies a frequentist coverage guarantee in finite sample while being as small (in length) as possible. ](figures/images/diagram-intro-v5.pdf){#fig:diagram-intro width="0.75\\columnwidth"} In many applied contexts, Bayesian methods constitute a primary set of techniques for uncertainty quantification. These methods leverage a prior for regularization, derived either from the intrinsic details of the problem or introduced externally. A key advantage of this regularization approach is the natural UQ that emerges from the Bayesian statistical framework. Specifically, the combination of a predefined prior and data likelihood results in a posterior distribution via Bayes' theorem. This distribution can subsequently be used to derive the intended posterior UQ. However, there is a caveat: Bayesian methods can offer *marginal coverage* (probability over $\bm{x}$ and $\bm{y}$), if the prior is correctly specified. They do not necessarily provide *conditional coverage* (probability over $\bm{y}$ given $\bm{x}$). The former notion of coverage is weaker (and in particular, the latter implies the former, but the converse may not be true), as it replaces the infimum in the coverage requirement [\[u\]](#u){reference-type="eqref" reference="u"} with a probability distribution over ${\bm{x}}$. Generally, Bayesian methods may not align with the analyst's expectations due to the inherent bias [@kuusela_phd_thesis; @patil]. While, in theory, priors present an effective mechanism to incorporate scientific knowledge into UQ, they can inadvertently introduce extraneous information [@stark_constraints_priors] and a lack of robustness in the resulting estimates [@owhadi2015brittlenessb; @owhadi2015brittlenessa; @owhadi2017qualitative]. 
On the other hand, we could consider a basic worst-case approach that is rooted in the simple observation that: $$\label{worst_case} \varphi({\bm{x}}^*) \in \bigg[ \inf_{{\bm{x}}\in \mathcal X} \varphi({\bm{x}}), \; \sup_{{\bm{x}}\in \mathcal X} \varphi({\bm{x}}) \bigg].$$ Of course, this method is inherently conservative given the absence of assumptions and any specific knowledge regarding data generation. More importantly, the method does not use the observations $y$ in any way to calibrate the confidence set. This means the sets cannot be fine-tuned to approximately achieve the desired $1 - \alpha$ coverage level. Nonetheless, they illustrate the essential idea of constructing a confidence interval based on the outcomes of two boundary optimization problems---an approach that the more sophisticated methods that we will study in this paper build on. We will henceforth refer to such intervals with the notation: $$\begin{aligned} \inf_{{\bm{x}}}/\sup_{{\bm{x}}} \quad & \varphi({\bm{x}}) \\ \mathop{\mathrm{subject\,\,to}}\quad & {\bm{x}}\in \mathcal X \end{aligned} := \bigg [ \inf_{{\bm{x}}\in \mathcal X} \varphi({\bm{x}}), \; \sup_{{\bm{x}}\in \mathcal X} \varphi({\bm{x}}) \bigg ].$$ One example of such more sophisticated method is the so-called "simultaneous" approach [@stark_strict_bounds; @stark1994simultaneous], which provides intervals with at least $1-\alpha$ frequentist coverage for the functional of interest $\varphi({\bm{x}}^*)$ from confidence sets for the parameter ${\bm{x}}^*$. The approach can be summarized in three steps (see for an illustration): 1. Construct a set $\mathcal{C}({\bm{y}}) \in \mathbb{R}^p$ that is a $1-\alpha$ confidence set for ${\bm{x}}^*$. 2. Intersect this set $\mathcal{C}({\bm{y}})$ with the constraint set $\mathcal X$. 3. Project this intersection through the functional of interest $\varphi$. The term "simultaneous" refers to Steps 1 and 2 being independent of the quantity of interest $\varphi$, so the resulting set from Step 2 can be simultaneously projected to different quantities of interest. Under mild assumptions, the resulting intervals can be equivalently written as: $$\label{simultaneous} \mathcal{I}_{\mathsf{SSB}}({\bm{y}}) := \bigg [\inf_{{\bm{x}}\in \mathcal X\cap \mathcal{C}({\bm{y}})} \varphi({\bm{x}}), \sup_{{\bm{x}}\in \mathcal X \cap \mathcal{C}({\bm{y}})} \varphi({\bm{x}}) \bigg ] = \begin{aligned} \inf_{{\bm{x}}}/\sup_{{\bm{x}}} \quad & \varphi({\bm{x}}) \\ \mathop{\mathrm{subject\,\,to}}\quad & {\bm{x}}\in \mathcal X \cap \mathcal{C}({\bm{y}}). \end{aligned}$$ This illustrates how the simultaneous approach is a refinement of the basic worst-case method [\[worst_case\]](#worst_case){reference-type="eqref" reference="worst_case"}: the observation of the data ${\bm{y}}$ shrinks the $\mathcal X$ into a smaller $\mathcal X \cap \mathcal{C}({\bm{y}})$, which is then projected through $\varphi$ in a worst-case manner. Given that this simultaneous framework is broadly encapsulated in [@stark_strict_bounds] as "strict bounds," we label these intervals as "simultaneous strict bounds" or SSB intervals, for short. Unlike methods that rely on explicit regularization through a prior, the techniques outlined above leverage only the physical constraints and the functional of interest to address the underlying ill-posedness of the inverse problem. This approach allows for uncertainty quantification without the need to assume a prior distribution, circumventing potential biases and miscalibrated coverage issues previously mentioned. 
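To make the simultaneous construction above concrete, the following is a minimal computational sketch for a hypothetical instance: a linear Gaussian model (the setting examined in the next subsection), the non-negative orthant as $\mathcal X$, a chi-squared ball as $\mathcal{C}({\bm{y}})$, and a linear functional $\varphi$. The dimensions, the randomly generated data, and the use of the `cvxpy` and `scipy` packages are illustrative assumptions, not part of the method itself.

``` python
import cvxpy as cp
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Assumed toy instance: y = K x* + eps with eps ~ N(0, I_m) and x* >= 0.
m, p = 10, 20
K = rng.uniform(0.1, 1.0, size=(m, p))
x_star = np.abs(rng.standard_normal(p))
y = K @ x_star + rng.standard_normal(m)
h = np.ones(p) / p  # functional of interest: phi(x) = h^T x
alpha = 0.05

# Step 1: C(y) = {x : ||y - K x||_2^2 <= chi-squared quantile}, a 1 - alpha set for x*.
radius2 = chi2.ppf(1 - alpha, df=m)

# Steps 2-3: intersect with X = {x >= 0} and project through phi by solving
# the two endpoint optimization problems.
x = cp.Variable(p)
constraints = [x >= 0, cp.sum_squares(y - K @ x) <= radius2]
lower = cp.Problem(cp.Minimize(h @ x), constraints).solve()
upper = cp.Problem(cp.Maximize(h @ x), constraints).solve()
print(f"SSB interval: [{lower:.3f}, {upper:.3f}], truth: {h @ x_star:.3f}")
```

In rank-deficient problems the upper endpoint can be infinite when $\mathcal X \cap \mathcal{C}({\bm{y}})$ does not bound the functional; in this sketch the strictly positive entries of the assumed forward matrix keep both endpoints finite.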
Although interval [\[simultaneous\]](#simultaneous){reference-type="eqref" reference="simultaneous"} has guaranteed coverage for $\varphi({\bm{x}}^*)$ inherited from the coverage of $\mathcal{C}({\bm{y}})$, this method generally suffers from overcoverage, especially when the dimension of $\mathcal X$ is large [@patil; @stanley_unfolding; @kuusela2017shape]. This happens due to two main factors: (i) its generality cannot account for the specific structure of $P$, $\varphi$, and $\mathcal X$; and (ii) while the set $\mathcal C({\bm{y}})$ being a $1-\alpha$ confidence set is a sufficient condition for [\[simultaneous\]](#simultaneous){reference-type="eqref" reference="simultaneous"} to ensure valid coverage, it is not a necessary one, implying that smaller sets might also yield valid confidence intervals. Consequently, an important research direction has been constructing confidence intervals that are shorter than those of the simultaneous approach but still maintain nominal coverage for a given $\varphi$. Sometimes, this is achieved by assuming that $P$, $\varphi$, and $\mathcal X$ come from a particular class [@stark1994simultaneous; @rust_burrus; @tenorio2007confidence; @patil; @stanley_unfolding]. In the sequel, we discuss one such special class.

![ Illustration of the simultaneous approach for confidence interval building, which works generically for any $\mathcal X$, $\varphi$ and $P$. The intersection of $\mathcal{X}$ and $\mathcal{C}({\bm{y}})$ occurs in the original parameter space $\mathbb{R}^p$, and is then projected via the functional of interest $\varphi$ onto the real line. The confidence interval is then constructed using the minimum and maximum of the quantity of interest $\varphi$ over the intersection $\mathcal X \cap \mathcal{C}({\bm{y}})$. ](figures/images/baselina-v5.pdf){#fig:diagram-intro2 width="0.8\\columnwidth"}

## The Burrus conjecture

The Gaussian linear forward model with non-negativity constraints and a linear functional of interest is a setting that has attracted significant attention, going back to the works of [@burrus1965utilization; @rust_burrus]. These foundational studies consider the applied problem of unfolding gamma-ray and neutron spectra from pulse-height distributions under rank-deficient linear systems. They demonstrated that incorporating the physical non-negativity constraint allowed for the computation of non-trivial (i.e., finite-length) intervals for linear functionals of the parameters. In order to describe the construction of these intervals, consider the canonical form of the Gaussian linear model with non-negativity constraints, along with a linear functional of interest: $$\label{eq:lineargaussianmodel}
\underbrace{{\bm{y}}= {\bm{K}}{\bm{x}}^* + \bm{\varepsilon}, \quad \bm{\varepsilon} \sim \mathcal{N}(\bm{0}, {\bm{I}}_m)}_{\text{model}}, \quad \text{with} \quad \underbrace{{\bm{x}}^* \geq \bm{0}}_{\text{constraints}} \quad \text{and} \quad \underbrace{\varphi({\bm{x}}) = {\bm{h}}^\top{\bm{x}}}_{\text{functional}}.$$ Here, ${\bm{K}}\in \mathbb{R}^{m \times p}$ is the forward operator[^5], ${\bm{x}}^* \in \mathbb{R}^p$ is the true parameter vector, and ${\bm{h}}\in \mathbb{R}^p$ contains the weights of the functional of interest.
In this setting, [@burrus1965utilization; @rust_burrus] conjectured that the following interval construction yields valid $1-\alpha$ confidence intervals, a result now known as the *Burrus conjecture* [@rust1994confidence]: $$\label{RB1}
\mathcal{I}_{\mathsf{OSB}}({\bm{y}}) :=
\begin{aligned}
\min_{{\bm{x}}}/\max_{{\bm{x}}} \quad & {\bm{h}}^\top{\bm{x}}\\
\mathop{\mathrm{subject\,\,to}}\quad &\Vert {\bm{y}}- {\bm{K}}{\bm{x}}\Vert_2^2 \leq \psi^2_\alpha({\bm{y}}),\\
& {\bm{x}}\geq \bm{0}, \\
\end{aligned}$$ where $\psi^2_\alpha({\bm{y}}) = z_{\alpha/2}^2 + s^2({\bm{y}})$. Here $z_\alpha$ is the upper $\alpha$ quantile of the standard normal distribution, i.e., $\mathbb{P}(Z>z_\alpha) = \alpha$ for $Z \sim \mathcal{N}(0,1)$, and $s^2({\bm{y}})$ is defined through an optimization problem as follows: $$\begin{aligned}
s^2({\bm{y}}) :=
\begin{cases}
\begin{aligned}
\min_{{\bm{z}}} \:\qquad & \Vert {\bm{y}}- {\bm{K}}{\bm{z}}\Vert^2_2 \\
\mathop{\mathrm{subject\,\,to}}\quad & {\bm{z}}\geq \bm{0}.
\end{aligned}
\end{cases}
\end{aligned}$$ Comparison of [\[RB1\]](#RB1){reference-type="eqref" reference="RB1"} with [\[simultaneous\]](#simultaneous){reference-type="eqref" reference="simultaneous"} shows that Rust and Burrus proposed a construction in which the set $\{{\bm{x}}: \Vert {\bm{y}}- {\bm{K}}{\bm{x}}\Vert_2^2 \leq \psi^2_\alpha({\bm{y}})\}$ plays the role of $\mathcal{C}({\bm{y}})$, even though it typically is not a $1-\alpha$ confidence set for ${\bm{x}}^*$; this relaxes the stringent requirement of the SSB interval construction. Furthermore, a possible simultaneous interval for this setting can be built by observing that $\| {\bm{y}}-{\bm{K}}{\bm{x}}^* \|_2^2 \sim \chi^2_m$. This yields the following valid $1-\alpha$ interval: $$\begin{aligned}
\min_{{\bm{x}}}/\max_{{\bm{x}}} \quad & {\bm{h}}^\top{\bm{x}}\\
\mathop{\mathrm{subject\,\,to}}\quad &\Vert {\bm{y}}- {\bm{K}}{\bm{x}}\Vert_2^2 \leq Q_{\chi^2_m}(1-\alpha)\\
& {\bm{x}}\geq \bm{0}. \\
\end{aligned}$$ Here $Q_{\chi^2_m}$ is the quantile function of a $\chi^2_m$ distribution. It is worth noting that the data-dependent term $\psi^2_\alpha({\bm{y}})$ in [\[RB1\]](#RB1){reference-type="eqref" reference="RB1"} can be considerably smaller than $Q_{\chi^2_m}(1-\alpha)$, especially when $m$ is large and $\alpha$ is small. So if the Burrus conjecture were true, it would provide a significant reduction in the length of the interval for problems in the class [\[eq:lineargaussianmodel\]](#eq:lineargaussianmodel){reference-type="eqref" reference="eq:lineargaussianmodel"}. For instance, assuming $\alpha = 0.05$ (so that we are after a $95$% coverage level), [@stanley_unfolding] observe an expected length reduction of about a factor of two across a variety of functionals in a particle unfolding application. The gain in interval length originates from the fact that these intervals take into account that coverage needs to be guaranteed for only *one specific* functional. Given that intervals of the form [\[RB1\]](#RB1){reference-type="eqref" reference="RB1"} are designed to provide coverage for one functional at a time, following the nomenclature of [@stanley_unfolding], we refer to these intervals as "one-at-a-time strict bounds" or OSB intervals, for short[^6]. [@rust_burrus] and subsequently [@rust1994confidence] investigated the conjecture posed in [@burrus1965utilization]. The latter work purported to have found a definitive proof of the conjecture's validity.
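For concreteness, the following is a minimal numerical sketch (a sketch only, assuming `numpy`, `scipy`, and `cvxpy` are available) of how the OSB endpoints in [\[RB1\]](#RB1){reference-type="eqref" reference="RB1"} can be computed; whether the resulting interval actually attains $1-\alpha$ coverage is precisely what the conjecture asserts and what this paper examines.

```python
import numpy as np
import cvxpy as cp
from scipy.optimize import nnls
from scipy.stats import norm

def osb_interval(y, K, h, alpha):
    """One-at-a-time strict bounds (Rust--Burrus) endpoints of Eq. (RB1):
    min/max h^T x  s.t.  ||y - K x||_2^2 <= z_{alpha/2}^2 + s^2(y),  x >= 0,
    where s^2(y) is the squared residual of the non-negative least squares fit."""
    _, residual_norm = nnls(K, y)                      # scipy returns ||K z - y||_2
    psi2 = norm.ppf(1 - alpha / 2) ** 2 + residual_norm ** 2
    endpoints = []
    for sense in (cp.Minimize, cp.Maximize):
        x = cp.Variable(K.shape[1], nonneg=True)
        problem = cp.Problem(sense(h @ x), [cp.sum_squares(y - K @ x) <= psi2])
        problem.solve()
        endpoints.append(problem.value)
    return tuple(endpoints)

# Toy usage; all numbers are illustrative.
rng = np.random.default_rng(1)
m, p = 30, 10
K = rng.uniform(size=(m, p))
x_true = np.abs(rng.normal(size=p))
y = K @ x_true + rng.normal(size=m)
print(osb_interval(y, K, h=np.ones(p), alpha=0.05))
```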
However, this claimed proof was later refuted by [@tenorio2007confidence] through a two-dimensional counterexample. In this work, we demonstrate that this two-dimensional counterexample is, in fact, not a valid counterexample. However, we present and prove another counterexample that does refute the conjecture, and we propose ways to fix the previous faulty results by reinterpreting the conjecture. We achieve this through a novel hypothesis-test-based framework that not only revisits the linear Gaussian setting with positivity constraints in which the conjecture was originally proposed, but also broadens the scope beyond it.

## Summary and outline

In this paper, we frame the problem of confidence interval construction for functionals in constrained ill-posed problems through the inversion of a particular likelihood ratio test. This perspective allows us to reinterpret the interval coverage guarantee in terms of type-I error control associated with the test, and subsequently, the distribution of the log-likelihood ratio under the null hypothesis. A detailed summary of contributions in this paper along with an outline is given below.

- **Strict bounds intervals from test inversion.** In , we present a general framework to construct strict bounds intervals through test inversion, resulting in two optimization problems for the interval endpoints. This approach generalizes the Rust--Burrus type interval technique to potentially non-linear and non-Gaussian settings. Our main result in proves coverage of the test inversion construction, and provides sufficient conditions under which the coverage is tight. Examples in provide straightforward but concrete analytical illustrations of our framework.

- **General interval construction methodology.** In , we propose in a general methodology for computing the intervals in practice. This approach integrates theoretical tools such as stochastic dominance () and computational methods such as sampling (). Additionally, for Gaussian models, we present a hybrid method () that provably improves upon SSB intervals and the intervals obtained from the general approach. This improvement is formalized in .

- **Refuting the Burrus conjecture.** In , we demonstrate that our method successfully recovers previously proposed OSB intervals for the linear Gaussian setting. In , we leverage this novel interpretation to disprove the Burrus conjecture [@rust_burrus; @rust1994confidence] in the general case, by refuting a previously proposed counterexample and providing a new, provably correct counterexample in . Furthermore, we provide a negative result disproving a natural generalization of the original conjecture in . Our proof technique provides a method to detect when the Rust--Burrus approach is effective and when it falls short, and it introduces a means to rectify the earlier erroneous examples.

- **Illustrative numerical examples.** In , we elucidate our findings through a suite of numerical illustrations. These span various scenarios, including the counterexample to the Burrus conjecture.

## Other related work {#subsec:RelatedWork}

Given the effectiveness of the strict bounds methodology in high-dimensional ill-posed inverse problems, this paper seeks to deepen our understanding of these intervals and provide related perspectives by connecting them with the broader statistical literature.
Specifically, we relate these intervals to the well-developed areas of likelihood ratio tests, test inversions, and constrained inference, which enables us to make rigorous statements about their properties and generalize the methodology beyond its earlier confines. We provide below a brief overview of earlier work in this area. #### Confidence intervals in constrained inverse problems. Various optimization-based strategies exist for constructing confidence intervals for functionals in linear inverse problems with constraints. A common method is to construct confidence regions based on the generalized least squares estimator of model parameters that optimizes a penalty function to balance data misfit with regularization, while adhering to prior constraints [@hansen1992analysis; @hansen1993use]. Another strategy employs Bayesian methods to estimate the posterior distribution of model parameters and subsequently constructs credible intervals from marginal distributions [@tarantola2005inverse; @stuart2010inverse]. While these methods effectively quantify uncertainty in model parameter estimates, their coverage heavily relies on the precision of prior information, regularization, and noise assumptions. A growing line of work in optimization-based confidence intervals focuses on ensuring correct frequentist coverage. This approach is more resistant to the aforementioned pitfalls associated with relying heavily on prior assumptions and offers a robust framework for uncertainty quantification. We will describe these optimization-based methods in detail in the following section. #### Optimization-based confidence intervals and the Burrus conjecture. This paper was largely motivated by the literature surrounding the Burrus conjecture (see for further discussion), which makes a claim about how to set a parameter in optimization-based confidence interval construction such that the resulting interval has at least some desired level of coverage ([@rust_burrus; @oleary_rust; @rust1994confidence; @tenorio2007confidence; @patil; @stanley_unfolding]). These references consider only the Gaussian-linear inverse problem and can thus be seen as an instance of optimization-based confidence intervals. [@donoho1994statistical] discusses the optimality of statistical procedures in the context of recovery from indirect and noisy observations. [@schafer2009constructing] presents a novel approach to constructing confidence regions for high-dimensional parameters that achieves the optimal expected size. The paper further derives theoretical results on the performance of the proposed method, including bounds on the expected size of the confidence regions and the probability of coverage. #### Inverting likelihood ratio tests and constrained inference. Traditionally, the optimization-based confidence interval constructions in inverse problems and physical sciences have developed independently of the broader statistical literature, often overlooking the duality between confidence intervals and hypothesis testing [@casella_berger; @wasserman2004all; @lehmann_romano]. Our work reinterprets these optimization-based confidence intervals from the inverse problem literature as inverted hypothesis tests and situates them within the realm of constrained testing and inference; see, e.g., [@gourieroux; @wolak87; @robertson; @shapiro1988towards; @wolak89; @molenberghs2007likelihood], among others. 
The constrained inference literature often employs the $\bar{\chi}^2$ distribution, a mixture of $\chi^2$ distributions with different degrees of freedom whose weights are dictated by the problem constraints. Recent work of [@yu2019constrained] has extended these constrained testing frameworks to high-dimensional settings with linear inequality constraints, examining both sparse and non-sparse scenarios. While such tests can be more powerful than their unconstrained counterparts, their definitions typically limit the null hypothesis to linear subspaces, complicating their use in test inversion scenarios [@silvapulle2011constrained]. Although there have been applications of constrained test inversion ([@Feldman_1998]), these are limited in scope due to their grid-based inversion approaches. The statistics literature contains other approaches to inverting likelihood ratio tests (LRTs), which center around sampling procedures. One approach is to use bootstrapping to resample a statistical estimator and iteratively update interval endpoints ([@cash1979parameter; @venzon1988method; @garthwaite_buckland; @LikTestInv; @neale1997use; @schweiger; @fisher2020efficient]). Alternatively, one can sample from the parameter space and the forward model to generate training data for a quantile regression, which can then be used to invert an LRT ([@dalmasso2020confidence; @heinrich2022learning; @waldo]). Since these latter approaches require sampling points in the parameter space, they are practically limited to compact parameter spaces and again encounter difficulties in higher dimensions.

#### Worst-case and likelihood-free methods.

We finally mention a body of work that considers constructing confidence intervals for quantities of interest without assuming a likelihood model. Techniques such as Conformal Prediction (see, e.g., [@shafer2007tutorial; @angelopoulos2022gentle]) and Optimal Uncertainty Quantification (OUQ) (see, e.g., [@owhadiOUQ]) do not assume a particular likelihood function to perform statistical inference, relying instead on worst-case bounds. While these methods are advantageous in contexts where the likelihood is uncertain or unknown, they tend to yield conservative estimates when a well-defined likelihood exists. Another avenue of research explores settings in which one has only sampling access to the likelihood, typically via a simulator; this has found particular relevance in the physical sciences [@LikFree1; @LikFree2]. In scenarios where the data can be split, approaches such as Universal Inference [@universal_inference] offer a method to obtain confidence sets for irregular likelihoods with finite-sample coverage.

# Strict bounds intervals from test inversion {#sec:data_gen_test_def_inv_arg}

Suppose we observe data ${\bm{y}}\in \mathbb{R}^m$ according to a data generating process ${\bm{y}}\sim P_{{\bm{x}}^*}$. Here, $P_{{\bm{x}}^*}$ is a distribution that depends on a fixed but unknown parameter ${\bm{x}}^* \in \mathbb{R}^p$. Furthermore, suppose we have prior knowledge that this parameter ${\bm{x}}^*$ lies in a constraint set $\mathcal{X} \subseteq \mathbb{R}^p$, namely ${\bm{x}}^* \in \mathcal{X}$. Given a nominal coverage level $1-\alpha$, where $\alpha \in (0,1)$, this paper investigates methods for constructing a $1-\alpha$ confidence interval for $\varphi({\bm{x}}^*)$, where $\varphi: \mathbb{R}^p \to \mathbb{R}$ is a known one-dimensional quantity of interest[^7] (we will also refer to $\varphi$ as a functional of interest).
More precisely, we are interested in constructing an interval $\mathcal{I}_\alpha({\bm{y}}) \subseteq \mathbb{R}$ for $\varphi({\bm{x}}^*)$ that satisfies the following coverage requirement: $$\label{eq:coverage_requirement} \mathbb{P}_{{\bm{y}}\sim P_{{\bm{x}}^*}} \big(\varphi({\bm{x}}^*) \in \mathcal{I}_\alpha({\bm{y}}) \big) \geq 1 - \alpha, \quad \text{for all} \quad {\bm{x}}^* \in \mathcal{X}.$$ Our primary focus lies in intervals that: (i) effectively utilize the information that ${\bm{x}}^* \in \mathcal X$, (ii) are valid (i.e., satisfying the coverage requirement in [\[eq:coverage_requirement\]](#eq:coverage_requirement){reference-type="eqref" reference="eq:coverage_requirement"}) in the finite data and noisy regimes (rather than, e.g., in the large system or noiseless limits), (iii) do not make overly restrictive assumptions (e.g., identifiability) about the structure of the parametric model $P_{{\bm{x}}^*}$, and (iv) are short in length[^8]. We view the observation vector ${\bm{y}}$ as a single observation in $\mathbb{R}^{m}$ drawn from a multivariate distribution $P_{{\bm{x}}^*}$. This may include the case of repeated sampling (i.i.d. or not) from an experiment and aggregating them in a vector. In this case, $P_{{\bm{x}}^*}$ is then defined as the measure that accounts for all the observations.[^9] ## Review: classical test inversion for non-composite null hypotheses {#subsec:test_inversion} We briefly review the concept of test inversion and duality between hypothesis testing and confidence sets that the subsequent subsections will build upon. After observing ${\bm{y}}\sim P_{{\bm{x}}^*}$, two classical statistical tasks emerge: (i) determining whether ${\bm{x}}^* = {\bm{x}}$ for a particular ${\bm{x}}\in \mathcal X$ at a significance level $\alpha$ (hypothesis testing), and (ii) constructing a subset of $\mathcal X$ that contains ${\bm{x}}^*$ with a coverage level $1 -\alpha$ (confidence set building). In hypothesis testing, for a given parameter ${\bm{x}}$, one can consider the hypothesis test $$\label{eq:h_test_classic} H_0 : {\bm{x}}^* = {\bm{x}}\quad \text{versus} \quad H_1: {\bm{x}}^* \neq {\bm{x}}.$$ We then build an acceptance region $A({\bm{x}})$ in the data space (the space in which the observations ${\bm{y}}$ live) corresponding to those observations that would not reject $H_0$, with the condition that $H_0$ is rejected with probability at most $\alpha$ when it is true. In confidence set building, one builds a subset in parameter space (the space in which the parameters ${\bm{x}}$ live) as a function of the data, $\mathcal{C}({\bm{y}})$, such that it contains ${\bm{x}}^*$ with probability at least $1-\alpha$ (over repeated samples of ${\bm{y}}\sim P_{{\bm{x}}^*}$). Lifting to the product space of data and parameter spaces (see for an illustration), both tasks amount to constructing a compatibility region $\mathcal S$. For a fixed observation ${\bm{y}}$, a confidence set is given by $\mathcal{C}({\bm{y}}) = \{ {\bm{x}}: ({\bm{y}}, {\bm{x}}) \in \mathcal S\}$, and for fixed parameter ${\bm{x}}$, the acceptance region is given by $\mathcal{A}({\bm{x}}) = \{ {\bm{y}}: ({\bm{y}}, {\bm{x}}) \in \mathcal S\}$. Observe that $\mathbb{P}({\bm{y}}\in \mathcal{A}({\bm{x}})) = \mathbb{P}({\bm{x}}\in \mathcal{C}({\bm{y}}))$. Hence, a procedure that forms confidence sets with coverage $1-\alpha$ for all possible data ${\bm{y}}$ also creates a procedure that yields valid hypothesis tests at level $\alpha$ for all possible parameter values ${\bm{x}}$, and vice versa. 
This observation can be employed to create confidence sets as the set of parameter values that would not be rejected by a hypothesis test, a construction known as test inversion (see, e.g., Chapter 7 of [@casella_berger] or Chapter 5 of [@Panaretos2016]). ![image](figures/images/TestInversion-v5.pdf){width="0.4\\columnwidth"} ## Formulation and inversion of constrained likelihood ratio tests {#subsec:LRT} The starting point of this work is the inversion of specific hypothesis tests that can incorporate the constraint information $\mathcal X$ and the functional of interest $\varphi$. We will establish that the test inversion can be achieved by solving two endpoint optimization problems. We note that unlike the simple null versus composite alternative tests [\[eq:h_test_classic\]](#eq:h_test_classic){reference-type="eqref" reference="eq:h_test_classic"} described in , the tests that we will consider have composite nulls. We focus on the continuous case and assume that the Lebesgue measure dominates the set of distributions $\mathcal P := \{P_{{\bm{x}}} \mid {\bm{x}}\in \mathcal X\}$. However, a discrete analog can also be constructed using a similar approach as in [@Feldman_1998]. Let $L_{\bm{x}}$ be the density of $P_{\bm{x}}$, and let $\ell_{\bm{x}}:= \log L_{\bm{x}}$. For any $\mu \in \mathbb{R}$, denote the level sets of the quantity of interest $\varphi$ by $\Phi_\mu$. These are defined as follows: $$\label{eq:affine_subspace} \Phi_\mu := \{{\bm{x}}: \varphi({\bm{x}}) = \mu \} \subseteq \mathbb{R}^p.$$ Subsequently, define a hypothesis test $T_\mu$ as follows: $$\label{eq:h_test_oi_full} H_0 : {\bm{x}}^* \in \Phi_\mu \cap \mathcal{X} \quad \text{versus} \quad H_1: {\bm{x}}^* \in \mathcal{X} \setminus \Phi_\mu.$$ We can test hypothesis [\[eq:h_test_oi_full\]](#eq:h_test_oi_full){reference-type="eqref" reference="eq:h_test_oi_full"} (for a fixed $\mu$) with a Likelihood Ratio (LR) test statistic defined as the following function of the observed data ${\bm{y}}$: $$\label{eq:likelihood_ratio} \Lambda(\mu, {\bm{y}}) := \frac{\underset{{\bm{x}}\in \Phi_\mu \cap \mathcal{X}}{\text{sup}} \; L_{{\bm{x}}}({\bm{y}})}{\underset{{\bm{x}}\in \mathcal{X}}{\text{sup}} \; L_{{\bm{x}}}({\bm{y}})}.$$ The corresponding log-likelihood ratio (LLR) statistic is given by: $$\label{eq:log_likelihood_ratio_full} \lambda(\mu, {\bm{y}}) := -2 \log \Lambda(\mu, {\bm{y}}) = -2 ~ \bigg\{\underset{{\bm{x}}\in \Phi_\mu \cap \mathcal{X}}{\text{sup}} \; \ell_{{\bm{x}}}({\bm{y}})- \underset{{\bm{x}}\in \mathcal{X}}{\text{sup}} \; \ell_{{\bm{x}}}({\bm{y}}) \bigg\} = \underset{{\bm{x}}\in \Phi_\mu \cap \mathcal{X}} {\text{inf}} \; -2\ell_{{\bm{x}}}({\bm{y}})- \underset{{\bm{x}}\in \mathcal{X}}{\text{inf}} \; -2 \ell_{{\bm{x}}}({\bm{y}}).$$ As is standard (see, e.g., [@casella_berger; @wasserman2004all]), we use the supremum over all $\mathcal X$ in the denominator of [\[eq:likelihood_ratio\]](#eq:likelihood_ratio){reference-type="eqref" reference="eq:likelihood_ratio"}, instead of over $\mathcal X \setminus \Phi_\mu$[^10]. The factor of $-2$ helps connect with the standard likelihood ratio test in the context of Wilks' theorem, and is needed, together with the optimization being over the whole space, to reinterpret previous constrained inference intervals as coming from the inversion of this test (see ). #### Motivation behind the choice of test and test statistic. 
In addition to the reinterpretation of previous constrained inference intervals as a result of inverting this test, there are other theoretical and practical reasons that make it a reasonable choice for the uncertainty quantification purposes of this work. Theoretically, the LR emerges as the optimal test statistic (resulting in the most powerful level-$\alpha$ test) in the simple versus simple hypothesis testing setting via the Neyman-Pearson Lemma [@casella_berger; @lehmann_romano]. Although uniformly most powerful tests do not exist in general, the LR has been effective in several contexts. For example, [@wald_1943] provides some optimality properties for the likelihood ratio test in terms of its asymptotic average power. While our test of interest does not fall under the simple versus simple paradigm and we are interested in non-asymptotic properties, these two properties make the LR-based test a sensible choice. Moreover, the literature on constrained inference [@robertson; @silvapulle2011constrained] extensively leverages the LR, deriving the asymptotic and non-asymptotic distributions of the log-likelihood ratio (LLR) under diverse scenarios, often leading to the $\bar{\chi}^2$ distribution. These characterizations indicate that, in certain situations, it is plausible to obtain the test statistic's distribution under the null hypothesis, either completely or in an asymptotic sense. On a practical note, the explicit relationship between the test and the composite null set simplifies incorporating constraints.

#### The distribution of the LLR and test inversion.

In hypothesis testing, we reject the null hypothesis when the value of $\lambda(\mu, {\bm{y}})$ surpasses a certain threshold, which indicates that there is substantial evidence against the data being generated by a distribution in the composite null defined by $\mu$. To choose a rejection region, we next study the distribution of the LLR $\lambda(\mu, {\bm{y}})$ in the context where $\mu = \varphi({\bm{x}})$ (pertaining to the null hypothesis) and ${\bm{y}}\sim P_{{\bm{x}}}$, a data sampling model, across various values of ${\bm{x}}\in \mathcal X$. Let $F_{\bm{x}}$ denote the distribution of $\lambda(\varphi({\bm{x}}), {\bm{y}})$ for any ${\bm{x}}\in \mathcal X$, where ${\bm{y}}\sim P_{\bm{x}}$. To simplify notation, we will write $\lambda \sim F_{\bm{x}}$ to indicate that a random LLR is sampled following the procedure described above. To ensure a level-$\alpha$ test for test inversion, we need to control the distribution of the test statistic under the null hypothesis. Since the null is composite, the false positive rate bound must hold for every parameter in the null hypothesis $H_0$. Assume a true parameter ${\bm{x}}^* \in \mathcal{X}$, and suppose we are conducting a test $T_\mu$ to determine whether ${\bm{x}}^* \in \Phi_{\mu} \cap \mathcal{X}$ for some $\mu \in \varphi(\mathcal X) \subseteq \mathbb{R}$. We use $\lambda > q_\alpha$ as the rejection region, where $q_\alpha$ is a pre-determined decision threshold. If, for a given $\alpha \in (0, 1)$, the decision threshold satisfies: $$\label{eq:type1_err_c_alpha}
\sup_{{\bm{x}}\in \Phi_{\mu} \cap \mathcal{X}} \mathbb{P}_{\lambda \sim F_{{\bm{x}}}} \left(\lambda > q_\alpha \right) \leq \alpha,$$ then we say the test $T_\mu$ is a *level-$\alpha$* test. Inverting the test will require choosing an appropriate $q_\alpha$ for every $\mu$; we will henceforth denote it as $q_\alpha(\mu)$.
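Evaluating $\lambda(\mu, {\bm{y}})$ itself amounts to solving the two constrained optimization problems in [\[eq:log_likelihood_ratio_full\]](#eq:log_likelihood_ratio_full){reference-type="eqref" reference="eq:log_likelihood_ratio_full"}. The following is a minimal sketch (assuming `numpy` and `cvxpy` are available) for the Gaussian linear model with non-negativity constraints and $\varphi({\bm{x}}) = {\bm{h}}^\top{\bm{x}}$, in which case $-2\ell_{{\bm{x}}}({\bm{y}})$ equals $\Vert {\bm{y}}- {\bm{K}}{\bm{x}}\Vert_2^2$ up to an additive constant that cancels in the difference.

```python
import numpy as np
import cvxpy as cp

def llr(mu, y, K, h):
    """LLR lambda(mu, y) for the Gaussian linear model with X = {x >= 0} and
    phi(x) = h^T x: the difference between the fit restricted to h^T x = mu
    (intersected with X) and the fit over X alone."""
    p = K.shape[1]
    x_null = cp.Variable(p, nonneg=True)          # x in Phi_mu intersected with X
    restricted = cp.Problem(cp.Minimize(cp.sum_squares(y - K @ x_null)),
                            [h @ x_null == mu])
    restricted.solve()
    x_free = cp.Variable(p, nonneg=True)          # x in X only
    unrestricted = cp.Problem(cp.Minimize(cp.sum_squares(y - K @ x_free)))
    unrestricted.solve()
    return restricted.value - unrestricted.value

# Toy usage; all numbers are illustrative.
rng = np.random.default_rng(0)
m, p = 15, 6
K = rng.normal(size=(m, p))
x_true = np.abs(rng.normal(size=p))
y = K @ x_true + rng.normal(size=m)
h = np.ones(p)
print(llr(h @ x_true, y, K, h))
```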
It is often the case that the dependence of $q_\alpha(\mu)$ on $\mu$ is difficult to compute, and therefore, the aim is to find a constant $q_\alpha$ valid for any $\mu$, at the cost of increasing the type-2 error. We develop general results valid if $q_\alpha(\mu)$ can be obtained, but we eventually move to the constant $q_\alpha$ setting for computation (see ). We seek to invert this test using a methodology similar to that outlined in , but adapted to accommodate the composite null hypothesis. The acceptance region is formally defined as: $$\label{eq:acceptance_region_c_alpha} \mathcal{A}_\alpha(\mu) := \left\{ {\bm{y}}: \lambda(\mu, {\bm{y}}) \leq q_\alpha(\mu) \right\}.$$ Subsequently, we define the proposed confidence set for $\mu^* = \varphi({\bm{x}}^*) \in \mathbb{R}$ through test inversion as follows: $$\label{eq:conf_set_def} \mathcal{C}_\alpha({\bm{y}}) := \{\mu : \lambda(\mu, {\bm{y}}) \leq q_\alpha(\mu) \}.$$ We prove in that if [\[eq:type1_err_c\_alpha\]](#eq:type1_err_c_alpha){reference-type="eqref" reference="eq:type1_err_c_alpha"} is satisfied for $\mu^* := \varphi({\bm{x}}^*)$ (i.e., $T_{\mu^*}$ is a *level-$\alpha$* test), the resulting confidence set will have the desired $1 - \alpha$ coverage, thereby extending the classical test inversion framework to our specific case. **Lemma 1** (Coverage of the inverted test). *Let $\alpha \in (0, 1)$. Let ${\bm{x}}^*, \mu^*$ be the true parameter value and its corresponding image under $\varphi$, respectively. If $T_{\mu^*}$ is a level-$\alpha$ test, then $$\mathbb{P}_{{\bm{y}}\sim P_{{\bm{x}}^*}} \left( \mu^* \in \mathcal{C}_\alpha({\bm{y}}) \right) \geq 1 - \alpha.$$* *Proof sketch.* The proof is based on a straightforward test inversion argument. For a detailed proof, see . ◻ To ensure that condition [\[eq:type1_err_c\_alpha\]](#eq:type1_err_c_alpha){reference-type="eqref" reference="eq:type1_err_c_alpha"} holds in practice, where both ${\bm{x}}^*$ and therefore $\mu^*$ are unknown, we aim to satisfy this condition for all possible null hypotheses. Specifically, we choose an appropriate $q_\alpha(\mu)$ for each $\mu$ to make sure that all hypothesis tests $T_\mu$ are *level-$\alpha$*. Formally, this is expressed as: $$\label{eq:combined_type_1} \sup_{\mu \in \varphi(\mathcal X)} \sup_{{\bm{x}}\in \Phi_\mu \cap \mathcal{X}} \mathbb{P}_{\lambda \sim F_{{\bm{x}}}} \left(\lambda> q_\alpha(\mu) \right) \leq \alpha.$$ This condition is equivalent[^11] to: $$\label{eq:combined_type_1_better} \sup_{{\bm{x}}\in \mathcal{X}} \mathbb{P}_{\lambda \sim F_{{\bm{x}}}} \left(\lambda> q_\alpha(\varphi({\bm{x}})) \right) \leq \alpha.$$ Although [\[eq:combined_type_1\_better\]](#eq:combined_type_1_better){reference-type="eqref" reference="eq:combined_type_1_better"} lacks the interpretation of [\[eq:combined_type_1\]](#eq:combined_type_1){reference-type="eqref" reference="eq:combined_type_1"} of having hypothesis tests for each different $\mu \in \varphi(\mathcal{X})$, it simplifies computations. We refer to a set of values $q_\alpha(\mu)$ satisfying [\[eq:combined_type_1\_better\]](#eq:combined_type_1_better){reference-type="eqref" reference="eq:combined_type_1_better"} (or equivalently [\[eq:combined_type_1\]](#eq:combined_type_1){reference-type="eqref" reference="eq:combined_type_1"}) as *valid values*. 
Since $\mu$ in [\[eq:combined_type_1\]](#eq:combined_type_1){reference-type="eqref" reference="eq:combined_type_1"} is equal to $\varphi({\bm{x}})$ for ${\bm{x}}\in \Phi_\mu$, we use $q_\alpha(\mu)$ and $q_\alpha(\varphi({\bm{x}}))$ interchangeably. From , we know that valid values can be used in [\[eq:conf_set_def\]](#eq:conf_set_def){reference-type="eqref" reference="eq:conf_set_def"} to construct a confidence set for $\mu^*$ with the correct $1-\alpha$ coverage. Moreover, as argued in the proof of , the probability of the set [\[eq:conf_set_def\]](#eq:conf_set_def){reference-type="eqref" reference="eq:conf_set_def"} covering the unknown $\mu^*$ is exactly $1-\mathbb{P}_{\lambda \sim F_{{\bm{x}}^*}} \left(\lambda > q_\alpha(\mu^*) \right)$, which is guaranteed to be at least $1-\alpha$ by condition [\[eq:combined_type_1\]](#eq:combined_type_1){reference-type="eqref" reference="eq:combined_type_1"}.

Even though it is not always feasible, one approach to obtaining valid $q_\alpha(\mu)$ is to have full knowledge of the distributions $F_{{\bm{x}}}$ for all possible ${\bm{x}}\in \mathcal X$. Indeed, if we let $Q_{F_{\bm{x}}}: [0,1] \to \mathbb{R}$ be the quantile function of $F_{{\bm{x}}}$ such that $$\label{eq:quantile_function}
\mathbb{P}_{\lambda \sim F_{{\bm{x}}}} \left(\lambda > Q_{F_{\bm{x}}}(1-\alpha) \right) = \alpha,$$ then one can choose $q_\alpha(\mu) = \sup_{{\bm{x}}'\in \Phi_\mu \cap \mathcal{X}} Q_{F_{{\bm{x}}'}}(1-\alpha)$, since for all ${\bm{x}}\in \Phi_\mu \cap \mathcal X$ it holds that: $$\label{eq:quantileproof}
\mathbb{P}_{\lambda \sim F_{{\bm{x}}}} \Big(\lambda > \sup_{{\bm{x}}'\in \Phi_\mu \cap \mathcal{X}}Q_{F_{{\bm{x}}'}}(1-\alpha) \Big) \leq \mathbb{P}_{\lambda \sim F_{{\bm{x}}}} \left(\lambda > Q_{F_{\bm{x}}}(1-\alpha) \right) = \alpha.$$ However, having closed-form expressions for the distributions $F_{{\bm{x}}}$ is rare, due both to potentially complicated likelihoods and to the constraint set $\mathcal X$. Methods for obtaining valid $q_\alpha(\mu)$ are discussed in , and we explore the computation of $\mathcal{C}_\alpha({\bm{y}})$ via optimization techniques in the next section.

## Characterizing the inverted confidence set via optimization problems {#subsec:conf_set_as_opt}

The set defined in [\[eq:conf_set_def\]](#eq:conf_set_def){reference-type="eqref" reference="eq:conf_set_def"} yields a random collection of real numbers that encompasses the true functional value with a probability of at least $1 - \alpha$. Although this set is not necessarily an interval, it can be enclosed within an interval whose bounds are computable through optimization techniques. Given a valid $q_\alpha(\mu)$, which satisfies either [\[eq:combined_type_1\]](#eq:combined_type_1){reference-type="eqref" reference="eq:combined_type_1"} or [\[eq:combined_type_1\_better\]](#eq:combined_type_1_better){reference-type="eqref" reference="eq:combined_type_1_better"}, let us define the following sets: $$\begin{aligned}
\mathcal{D}({\bm{y}}) &:= \{ {\bm{x}}: -2\ell_{{\bm{x}}}({\bm{y}}) \leq q_\alpha(\varphi({\bm{x}})) + \inf_{{\bm{x}}' \in \mathcal X} -2\ell_{{\bm{x}}'}({\bm{y}}) \} \subseteq \mathbb R^p \label{eq:defD}\\
\bar{\mathcal X}_\alpha({\bm{y}}) &:= \mathcal X \cap \mathcal{D}({\bm{y}}). \label{eq:defcalX}\end{aligned}$$
If $\bar{\mathcal X}_\alpha({\bm{y}}) \neq \emptyset$, we further define: $$\label{intervaldef}
\mathcal{I}_\alpha({\bm{y}}) := \bigg [\inf_{\lambda(\mu, {\bm{y}}) \leq q_\alpha(\mu)} \mu,\; \sup_{\lambda(\mu, {\bm{y}}) \leq q_\alpha(\mu)} \mu \bigg ] = \bigg [\inf_{{\bm{x}}\in \bar{\mathcal X}_\alpha({\bm{y}})} \varphi({\bm{x}}),\; \sup_{{\bm{x}}\in \bar{\mathcal X}_\alpha({\bm{y}})} \varphi({\bm{x}}) \bigg ].$$ If $\bar{\mathcal X}_\alpha({\bm{y}}) = \emptyset$, let $\widetilde{{\bm{x}}}$ be the point of $\mathcal X$ closest to $\mathcal{D}({\bm{y}})$ and define $\mathcal{I}_\alpha({\bm{y}})$ to be the degenerate interval $[\varphi(\widetilde{{\bm{x}}}), \varphi(\widetilde{{\bm{x}}})]$.

**Theorem 2** (From test inversion to optimization-based intervals). *For any $\alpha \in (0, 1)$ and any ${\bm{x}}\in \mathcal{X}$, let $\mathcal{I}_\alpha({\bm{y}})$ be the interval constructed according to [\[intervaldef\]](#intervaldef){reference-type="eqref" reference="intervaldef"}. It holds that $$\mathbb{P}_{{\bm{y}}\sim P_{{\bm{x}}}} \left(\varphi({\bm{x}}) \in \mathcal{I}_\alpha({\bm{y}}) \right) \geq 1-\alpha.$$ In other words, $\mathcal{I}_\alpha({\bm{y}})$ is a valid $1-\alpha$ confidence interval for $\varphi({\bm{x}}^*)$.*

*Proof sketch.* The definition given in [\[intervaldef\]](#intervaldef){reference-type="eqref" reference="intervaldef"} is derived by enclosing $\mathcal{C}_\alpha({\bm{y}})$ within the smallest possible interval, which preserves its coverage guarantee. The final equality arises from the equivalence between the optimization problems under consideration. For a comprehensive proof, see . ◻

**Remark 1** (Comparison with the simultaneous strict bound intervals). Observe that the construction of $\mathcal{I}_\alpha({\bm{y}})$ follows the form outlined in [\[simultaneous\]](#simultaneous){reference-type="eqref" reference="simultaneous"} for the simultaneous strict bound intervals. A key distinction, however, is that $\mathcal{D}({\bm{y}})$ is not required to be a $1-\alpha$ confidence set for ${\bm{x}}^*$. This relaxation will translate into shorter intervals when $q_\alpha$ is chosen appropriately.

**Remark 2** (Handling empty constrained sets). If $\alpha$ is chosen so large that $1-\alpha$ becomes too small, the set $\bar{\mathcal{X}}_\alpha({\bm{y}})$ can be empty. However, the actual interval produced under this circumstance does not compromise the $1-\alpha$ coverage level the theorem provides. A point interval is chosen to minimize average length. Specifically, the point $\widetilde{{\bm{x}}}$ is chosen to ensure continuity of the interval with respect to $\alpha$ under many standard scenarios. Generally, an empty set $\bar{\mathcal{X}}_\alpha({\bm{y}})$ should inform one of two possibilities: either (i) an outlier event has been observed, or (ii) the initial assumption that ${\bm{x}}^* \in \mathcal X$ is flawed. Here, the definition of an "outlier" is intrinsically linked to the choice of $\alpha$: a larger $\alpha$ will make such events more frequent, as it broadens the range of data considered as outliers.

We also present a partial converse result, stating that interval coverage implies the validity of $q_\alpha$, subject to appropriate assumptions on $\varphi$, $P$, and $\mathcal X$. This result will be instrumental in refuting the coverage claims of Rust--Burrus intervals, and consequently, the Burrus conjecture, as discussed in .

**Proposition 3** (Coverage implies validity of quantile levels).
*Assume that $\mathcal X$ forms a convex cone, $\ell_{\bm{x}}({\bm{y}})$ is a concave function, and $\varphi({\bm{x}})$ is linear. Define $\mathcal{I}_\alpha({\bm{y}})$ as in , for a particular choice of $q_\alpha(\mu)$. If $\mathcal{I}_\alpha({\bm{y}})$ is a valid $1-\alpha$ confidence interval, then the values of $q_\alpha(\mu)$ are valid.*

*Proof sketch.* Generally, the values of $q_\alpha(\mu)$ are valid if and only if $\mathcal{C}_\alpha({\bm{y}})$ constitutes a $1-\alpha$ confidence set. Since $\mathcal{I}_\alpha({\bm{y}})$ is the smallest interval that contains $\mathcal{C}_\alpha({\bm{y}})$, if $\mathcal{C}_\alpha({\bm{y}})$ is already an interval, then the result holds. The assumptions on $\mathcal X$, $\ell_{\bm{x}}({\bm{y}})$, and $\varphi$ ensure this is the case by convexity of the function $$\mu \mapsto \inf_{\substack{\varphi({\bm{x}}) = \mu \\ {\bm{x}}\in \mathcal{X}}} -2\ell_{\bm{x}}({\bm{y}})$$ for any ${\bm{y}}$. For a detailed proof, see . ◻

#### A decision-theoretic connection.

The intervals derived herein can be understood through the lens of decision theory. Specifically, consider a two-player zero-sum game where Nature acts as an adversary, selecting a world state ${\bm{x}}$, while the statistician aims to infer a quantity of interest, $\varphi({\bm{x}})$, based on data. In the classical formulation by [@wald1945statistical], Nature chooses ${\bm{x}}$, and the statistician selects a decision function $d$ that estimates the quantity of interest $\varphi({\bm{x}})$ based on the data ${\bm{y}}\sim P_{{\bm{x}}}$, leading to the loss function: $$\label{LalphadefW0l2}
L({\bm{x}},d):=\mathbb{E}_{{\bm{y}}\sim P_{{\bm{x}}}}\big[\|\varphi({\bm{x}})-d({\bm{y}})\|^2\big]$$ for some norm $\|\cdot\|$ (typically the $L_2$ norm) in the prediction space. To identify the Nash equilibria of this game, mixed (randomized) strategies are considered for Nature. Taking $\pi$ to be a probability distribution over states of the world ${\bm{x}}$, we consider the lift $$\label{LalphadefW0lifted}
L(\pi,d):=\mathbb{E}_{{\bm{x}}\sim \pi} \big[L({\bm{x}},d)\big]$$ of the game [\[LalphadefW0l2\]](#LalphadefW0l2){reference-type="eqref" reference="LalphadefW0l2"}. A minimax optimal estimate of $\varphi({\bm{x}}^*)$ is then obtained by identifying a Nash equilibrium (a saddle point) for [\[LalphadefW0lifted\]](#LalphadefW0lifted){reference-type="eqref" reference="LalphadefW0lifted"}, i.e., $\pi^*$ and $d^*$ satisfying $$L(\pi,d^*) \leq L(\pi^{*},d^*) \leq L(\pi^{*},d)$$ for all other $d, \pi$. This framework is equivalent to identifying a worst-case prior $\pi^*$ within Bayesian inference. [@Bajgiran2022] considers a similar two-player game in which the statistician's decision is made *post hoc*, after seeing the data. Here, the adversary's power is parameterized by $\beta \in [0,1]$. For a sub-region $X_\beta$ in parameter space (which contains the whole space for $\beta = 0$ and shrinks to the MLE for $\beta = 1$), and after observing the data ${\bm{y}}$, the game considered is: $$\label{Lalphadefa}
L(\pi,d):= \mathbb{E}_{{\bm{x}}\sim \pi}\big[\| \varphi({\bm{x}})-d \|^2\big], \quad \text{where} \quad \pi \in \mathcal{P}(X_\beta) \text{ and } d \in \varphi(X_\beta).$$ Invoking Theorem 3.3 from [@Bajgiran2022], the optimal decision $d^*$ in the game defined by [\[Lalphadefa\]](#Lalphadefa){reference-type="eqref" reference="Lalphadefa"} for the $L_2$ norm is determined by the center of the minimum enclosing ball around $\varphi(X_\beta)$.
When $\varphi$ is continuous and one-dimensional and $X_\beta$ is compact, one has $\varphi(X_\beta) = \big[ \min_{{\bm{x}}\in X_\beta} \varphi({\bm{x}}),\; \max_{{\bm{x}}\in X_\beta} \varphi({\bm{x}}) \big]$, with optimal decision $d^*$ and one possible optimally adversarial prior $\pi^*$ respectively given by: $$d^* = \frac{1}{2}\Big( \min_{{\bm{x}}\in X_\beta} \varphi({\bm{x}}) + \max_{{\bm{x}}\in X_\beta} \varphi({\bm{x}}) \Big) \quad \text{and} \quad \pi^* = \frac{1}{2}\Big( \delta_{\min_{{\bm{x}}\in X_\beta}\varphi({\bm{x}}) } + \delta_{\max_{{\bm{x}}\in X_\beta}\varphi({\bm{x}}) } \Big).$$ This leads to the following game value: $$L(\pi^*, d^*) = \frac{1}{4} \Big(\max_{{\bm{x}}\in X_\beta} \varphi({\bm{x}}) - \min_{{\bm{x}}\in X_\beta} \varphi({\bm{x}})\Big)^2.$$ In [@Bajgiran2022], the set $X_\beta$ is derived by inverting a classical LRT with a non-composite null hypothesis, independent of the quantity of interest. Nonetheless, if we consider the parameterized family of sets $X_\beta$ to be $\bar{\mathcal X}_\alpha({\bm{y}})$ as defined in this work, which depends on $\varphi$, the result remains applicable. Therefore, the midpoints of our confidence intervals act as minimax optimal statistical estimators for $\varphi({\bm{x}}^*)$, provided that the adversary's prior is supported on the parameter space subset $\bar{\mathcal X}_\alpha({\bm{y}})$ (which, importantly, encodes the constraint ${\bm{x}}\in \mathcal X$). This connection also opens up avenues for exploring higher-dimensional quantities of interest, which we defer to future work.

## Illustrative examples {#subsec:examples_theorem}

To elucidate the general methodology outlined in , we offer two simple illustrative examples where the LLR and its distribution are explicitly computable: a one-dimensional constrained Gaussian scenario and an unconstrained linear Gaussian case.

#### Constrained Gaussian in one dimension.

As a tangible example, consider the following one-dimensional model: $$\label{eq:1d_toy_model}
\underbrace{y= x^* + \varepsilon, \quad \varepsilon \sim \mathcal{N}(0,1)}_{\text{model}} \quad \text{with} \quad \underbrace{x^* \geq 0}_{\text{constraints}} \quad \text{and} \quad \underbrace{\varphi(x) = x}_{\text{functional}}.$$ In this case, the distribution of the LLR is precisely known. Hence, a confidence interval can be constructed without resorting to the techniques that will be introduced in , which are otherwise necessary when such information is not tractable. The form of the hypothesis test $T_\mu$, as given in [\[eq:h_test_oi_full\]](#eq:h_test_oi_full){reference-type="eqref" reference="eq:h_test_oi_full"}, is as follows: $$\label{eq:1d_hypoth}
H_0: x^* = \mu \quad \text{versus} \quad H_1: x^* \neq \mu \text{ and } x^* \geq 0.$$ The LLR as defined in [\[eq:log_likelihood_ratio_full\]](#eq:log_likelihood_ratio_full){reference-type="eqref" reference="eq:log_likelihood_ratio_full"} for the test [\[eq:1d_hypoth\]](#eq:1d_hypoth){reference-type="eqref" reference="eq:1d_hypoth"} is given by: $$\begin{aligned}
\label{eq:1d_log_lr}
\lambda(\mu, y)~ &= \inf_{x = \mu, x \geq 0} (y - x)^2 - \inf_{x \geq 0} (y - x)^2 \nonumber \\
&= \begin{cases}
(y - \mu)^2 & y \geq 0 \\
(y - \mu)^2 - y^2 & y < 0.
\end{cases}\end{aligned}$$ We can also derive its distribution under the null hypothesis (i.e., when $x^* = \mu$, leading to $y = \mu + \varepsilon$) for any $\mu \in [0, \infty)$, as formalized below.

**Example 4** (Distribution of the LLR statistic for constrained Gaussian in one dimension).
*For $\lambda(\mu, y)$ as defined in [\[eq:1d_log_lr\]](#eq:1d_log_lr){reference-type="eqref" reference="eq:1d_log_lr"} with $\mu \geq 0$, when $y \sim \mathcal N(\mu, 1)$ (null hypothesis), for all $c > 0$, we have $$\mathbb{P} (\lambda(\mu, y) \leq c ) = \chi^2_1(c) \cdot \bm{1} \{ c < {\mu}^2 \} + \big\{ \Phi (\sqrt{c}) - \Phi\big( (-{\mu}^2 - c)/(2 \mu) \big) \big\} \cdot \bm{1} \{ c \geq {\mu}^2 \},$$ where $\chi^2_1$ and $\Phi$ are the CDFs of a $\chi^2_1$ and a standard Gaussian, respectively.*

*Proof.* See . ◻

The expression for $\lambda(\mu, y)$ is equivalent, up to an appropriately scaled log transformation, to Equation (4.3) in [@Feldman_1998], where the Neyman confidence interval construction is considered for the same problem. [@Feldman_1998] characterizes this quantity as a likelihood ordering for determining an acceptance region. By virtue of the previous result, we can use [\[eq:quantile_function\]](#eq:quantile_function){reference-type="eqref" reference="eq:quantile_function"} and take $q_\alpha(\mu)$ satisfying [\[eq:type1_err_c\_alpha\]](#eq:type1_err_c_alpha){reference-type="eqref" reference="eq:type1_err_c_alpha"} as $Q_{\mu}(1-\alpha)$, where $Q_\mu$ is the quantile function of the distribution of $\lambda(\mu, y)$ for fixed $\mu$. A direct computation shows $$\label{eq:1d_quantile_func}
q_\alpha(\mu) = Q_{\mu}(1-\alpha) = \begin{cases}
Q_{\chi^2_1}(1-\alpha) & 1-\alpha < \chi^2_1(\mu^2) \\
r_{\mu, \alpha} & 1-\alpha \geq \chi^2_1(\mu^2), \\
\end{cases}$$ where $r_{\mu, \alpha}$ is the unique non-negative root of the function $x \mapsto \Phi(\sqrt{x}) - \Phi\big((-\mu^2-x)/(2\mu)\big) - (1-\alpha)$, which can be found with numerical methods. Therefore, with $\mathcal D = \{x: (y-x)^2 \leq q_\alpha(x) + \min_{x' \geq 0} (y-x')^2 \}$, the final form of the confidence interval becomes: $$\mathcal{I}_\alpha(y) = \bigg [ \min_{\substack{x \in \mathcal D \\ x \geq 0}} \, x ,\; \max_{\substack{x \in \mathcal D \\ x \geq 0}} \, x \bigg].$$ For a numerical comparison of this interval with alternative methods, we refer the reader to .

#### Unconstrained Gaussian linear model.

Consider the following problem setup: $$\label{eq:lineargaussianmodel-unconstrained}
\underbrace{{\bm{y}}= {\bm{K}}{\bm{x}}^* + \bm{\varepsilon}, \quad \bm{\varepsilon} \sim \mathcal{N}(\bm{0}, {\bm{I}}_m)}_{\text{model}} \quad \text{and} \quad \underbrace{\varphi({\bm{x}}) = {\bm{h}}^\top{\bm{x}}}_{\text{functional}}.$$ Assume ${\bm{K}}\in \mathbb{R}^{m \times p}$ has full column rank. The assumption $\text{Cov}({\bm{y}}) = {\bm{I}}_m$ is without loss of generality, as it is equivalent to assuming a known positive definite covariance for ${\bm{y}}$ and performing a change of basis with the Cholesky factor. Note that this setup is the same as [\[eq:lineargaussianmodel\]](#eq:lineargaussianmodel){reference-type="eqref" reference="eq:lineargaussianmodel"}, except that the parameter space is unconstrained, i.e., $\mathcal X = \mathbb{R}^p$, and the forward model ${\bm{K}}$ is assumed to have full column rank.
Utilizing the framework established in , we aim to invert the following family of hypothesis tests: $$\label{eq:hyp_unconstrained} H_0: {\bm{h}}^\top{\bm{x}}^* = \mu \quad \text{versus} \quad H_1: {\bm{h}}^\top{\bm{x}}^* \neq \mu.$$ The LLR as defined in [\[eq:log_likelihood_ratio_full\]](#eq:log_likelihood_ratio_full){reference-type="eqref" reference="eq:log_likelihood_ratio_full"} for the test [\[eq:hyp_unconstrained\]](#eq:hyp_unconstrained){reference-type="eqref" reference="eq:hyp_unconstrained"} takes the form: $$\label{eq:llr_full_rank_unconstrain} \lambda(\mu, {\bm{y}}) ~:= \min_{{\bm{x}}: {\bm{h}}^\top{\bm{x}}= \mu} \lVert {\bm{y}}- {\bm{K}}{\bm{x}}\rVert_2^2 - \min_{{\bm{x}}} \lVert {\bm{y}}- {\bm{K}}{\bm{x}}\rVert_2^2.$$ In this particular scenario, the LLR admits a closed-form expression and has a straightforward distribution, as formalized below: **Example 5** (Distribution of the LLR statistic for unconstrained Gaussian linear model). *$\lambda(\mu, {\bm{y}})$ for the unconstrained full column rank Gaussian linear model [\[eq:llr_full_rank_unconstrain\]](#eq:llr_full_rank_unconstrain){reference-type="eqref" reference="eq:llr_full_rank_unconstrain"} can be expressed in closed form as $$\label{eq:test_stat_eq_unconstr} \lambda(\mu, {\bm{y}}) = \frac{({\bm{h}}^\top({\bm{K}}^\top{\bm{K}})^{-1} {\bm{K}}^\top{\bm{y}}- \mu)^2}{{\bm{h}}^\top({\bm{K}}^\top{\bm{K}})^{-1} {\bm{h}}}.$$ Furthermore for any ${\bm{x}}^*$, whenever $y \sim \mathcal{N}({\bm{K}}{\bm{x}}^*, {\bm{I}}_m)$, $\lambda({\bm{h}}^\top{\bm{x}}^*, {\bm{y}})$ is distributed as a chi-squared distribution with $1$ degree of freedom.* *Proof.* See . ◻ Leveraging the above results, we can set $q_{\alpha}(\mu) = Q_{\chi^2_1}(1-\alpha)$ for all values of $\mu$. Here, $Q_{\chi^2_1}$ represents the quantile function of a chi-squared distribution with $1$ degree of freedom. Consequently, we can express the interval in [\[intervaldef\]](#intervaldef){reference-type="eqref" reference="intervaldef"} as: $$\label{int:unconstrained_full_rank_opt} \mathcal{I}_\alpha({\bm{y}}) = \bigg[\min_{{\bm{x}}\in \mathcal{D}({\bm{y}})} {\bm{h}}^\top{\bm{x}},\; \max_{{\bm{x}}\in \mathcal{D}({\bm{y}})} {\bm{h}}^\top{\bm{x}}\bigg], \text{ where } \mathcal{D}\left({\bm{y}}\right) := \Big\{ {\bm{x}}: \lVert {\bm{y}}- {\bm{K}}{\bm{x}}\rVert_2^2 \leq Q_{\chi^2_1}(1-\alpha) + \min_{{\bm{x}}'} \lVert {\bm{y}}- {\bm{K}}{\bm{x}}' \rVert_2^2 \Big\}.$$ Similarly, let us define $z_{\alpha} = \Phi^{-1}(1 - \alpha)$, where $\Phi$ is the cumulative distribution function of the standard normal distribution. Using the equivalence $z^2_{\alpha/2} = Q_{\chi^2_1}(1-\alpha)$, we can rewrite the expression in terms of the standard normal. Moreover, as shown in Appendix A of [@patil], the endpoints of the above interval can be calculated in closed-form and are given by: $$\label{int:unconstrained_full_rank} \mathcal{I}_\alpha({\bm{y}}) = \bigg[{\bm{h}}^\top\widehat{{\bm{x}}} - z_{\alpha / 2} \sqrt{{\bm{h}}^\top\left({\bm{K}}^\top{\bm{K}}\right)^{-1} {\bm{h}}},\; {\bm{h}}^\top\widehat{{\bm{x}}} + z_{\alpha / 2} \sqrt{{\bm{h}}^\top\left({\bm{K}}^\top{\bm{K}}\right)^{-1} {\bm{h}}} \bigg],$$ where we define the least squares projection $\widehat{{\bm{x}}}= ({\bm{K}}^\top{\bm{K}})^{-1} {\bm{K}}^\top{\bm{y}}$. This interval is equivalent to the one derived from observing that $\widehat{{\bm{x}}} \sim \mathcal{N}({\bm{x}}^*, ({\bm{K}}^\top{\bm{K}})^{-1})$. 
Hence, we have ${\bm{h}}^\top\widehat{{\bm{x}}} \sim \mathcal{N}({\bm{h}}^\top{\bm{x}}^*, {\bm{h}}^\top({\bm{K}}^\top{\bm{K}})^{-1} {\bm{h}})$. The interval in [\[int:unconstrained_full_rank\]](#int:unconstrained_full_rank){reference-type="eqref" reference="int:unconstrained_full_rank"} is thus a standard construction of a Gaussian $1 - \alpha$ confidence interval.

# General interval construction methodology {#sec:interval_methodology}

In this section, we outline the core methodology for constructing intervals as derived from . We also address the problem of determining provably valid decision thresholds, denoted as $q_\alpha$, that guarantee the specified coverage. To summarize the preceding section, asserts that if we know $q_\alpha(\mu)$ satisfying [\[eq:combined_type_1\_better\]](#eq:combined_type_1_better){reference-type="eqref" reference="eq:combined_type_1_better"}, we can invert the hypothesis test defined in [\[eq:h_test_oi_full\]](#eq:h_test_oi_full){reference-type="eqref" reference="eq:h_test_oi_full"} with a composite null hypothesis to yield a valid $1 - \alpha$ confidence interval. We propose two classes of methods to identify such valid $q_\alpha(\mu)$ values: a) analytical methods, which rely on the stochastic dominance of random variables, described in ; b) computational methods, which utilize sampling-based algorithms, described in . The section is encapsulated into a meta-algorithm, detailed in . Additionally, we present an improved hybrid method specifically tailored for cases involving Gaussian models, described in .

## Analytical ways to obtain quantile levels via stochastic dominance {#subsec:stochastic_dominance}

In this subsection, we address the challenge of identifying a valid quantile level, denoted as $q_\alpha$, within an optimization framework. Importantly, this choice is independent of the parameter $\mu$ and allows for straightforward evaluation at any confidence level $\alpha \in (0, 1)$. We propose taking $q_\alpha = Q_X(1-\alpha)$, where $Q_X$ is the quantile function of a random variable $X$ with known distribution. We establish that for the resulting confidence interval to maintain a $1 - \alpha$ coverage guarantee, $X$ must stochastically dominate $\lambda(\mu^*, {\bm{y}})$, where ${\bm{y}}$ is a random variable with distribution $P_{{\bm{x}}^*}$. This is denoted as $X \succeq \lambda(\mu^*, {\bm{y}})$. Following the classical definition of stochastic dominance for real-valued random variables (see, e.g., [@lehmann_rojo]), we say that $X \succeq Y$ if and only if $\mathbb{P}(X \geq z) \geq \mathbb{P}(Y \geq z)$ for all $z \in \mathbb{R}$.

**Lemma 6** (Valid quantile level via stochastic dominance). *$Q_X(1-\alpha)$ serves as a valid choice for $q_\alpha$ for all $\alpha$ if and only if $X \succeq \lambda(\mu^*, {\bm{y}})$.*

*Proof.* See . ◻

**Remark 3** (Conditional validity of quantile levels). If $X$ fails to stochastically dominate $\lambda(\mu^*, {\bm{y}})$, a valid $q_\alpha$ can still be identified for specific $\alpha$ levels, provided that certain conditions are met. Specifically, if $\mathbb{P}(X \leq z) \leq \mathbb{P}(\lambda(\mu^*, {\bm{y}}) \leq z)$ for some value of $z$, then $z$ can serve as a valid $q_\alpha$ for $\alpha = 1 - F_X(z)$, with $F_X$ being the cumulative distribution function of $X$.

**Remark 4** (Support restriction). Candidates for $X$ can be restricted to the range $[0, \infty)$ without loss of generality, as $\lambda(\mu^*, {\bm{y}})$ is supported on this range, by moving any mass in $(-\infty, 0)$ to $0$.
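As a small numerical complement to the lemma and remarks above, the following sketch (assumptions: `numpy` and `scipy` are available) checks the stochastic dominance condition empirically for the one-dimensional constrained Gaussian example introduced earlier, using $\chi^2_1$ as the candidate $X$; dominance holds up to Monte Carlo error, consistent with the analytical result established below.

```python
import numpy as np
from scipy.stats import chi2

def llr_1d(mu, y):
    """Closed-form LLR for the 1-D constrained Gaussian example, Eq. (eq:1d_log_lr)."""
    return np.where(y >= 0.0, (y - mu) ** 2, (y - mu) ** 2 - y ** 2)

def dominance_gap(mu, n_draws=200_000, seed=0):
    """Empirical check of chi^2_1 dominating lambda(mu, y): returns
    max_z { P(lambda >= z) - P(chi^2_1 >= z) } over a grid of z, which
    should be <= 0 (up to Monte Carlo error) when dominance holds."""
    rng = np.random.default_rng(seed)
    y = mu + rng.normal(size=n_draws)          # y ~ P_mu, i.e. under the null
    lam = llr_1d(mu, y)
    z_grid = np.linspace(0.0, 10.0, 200)
    p_lam = np.array([(lam >= z).mean() for z in z_grid])
    return float((p_lam - chi2.sf(z_grid, df=1)).max())

print([round(dominance_gap(mu), 4) for mu in (0.0, 0.5, 2.0)])
```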
The economic interpretation of our result is that agents with non-decreasing utility functions would prefer a reward drawn from $X$ over one drawn from $\lambda(\mu^*, {\bm{y}})$. In practical scenarios where the true parameter ${\bm{x}}^*$ is unknown, it is essential to establish stochastic dominance over the entire family $F_{{\bm{x}}}$, ${\bm{x}}\in \mathcal X$. While all stochastically dominating distributions provide correct coverage when their quantile is used for $q_\alpha$, a larger stochastic dominance gap produces more conservative bounds. Additionally, if $X_1$ and $X_2$ both stochastically dominate the family $F_{{\bm{x}}}$ for ${\bm{x}}\in \mathcal X$, one can take the pointwise minimum $q_\alpha = \min\{ Q_{X_1}(1-\alpha), Q_{X_2}(1-\alpha) \}$, which will be no worse than using either $X_1$ or $X_2$ alone.

The perspective of stochastic dominance also enables the use of coupling arguments to identify stochastically dominating distributions. For instance, one approach to finding distributions that stochastically dominate a given $F_{{\bm{x}}}$ is to find a function $g(\varphi({\bm{x}}), {\bm{y}})$ such that, for all $z$: $$\mathbb P(g(\varphi({\bm{x}}), {\bm{y}}) \geq z) \geq \mathbb {P}(\lambda(\varphi({\bm{x}}), {\bm{y}}) \geq z),$$ where the randomness is from ${\bm{y}}\sim P_{{\bm{x}}}$. A particular case is that of non-random bounds: if $g(\varphi({\bm{x}}), {\bm{y}}) \geq \lambda(\varphi({\bm{x}}), {\bm{y}})$ holds almost surely (i.e., as a pointwise bound in ${\bm{y}}$ rather than merely a distributional statement), then this provides a coupling of the two random variables once ${\bm{y}}$ is sampled, which implies stochastic dominance by Theorem 4.2.3 of [@ProbabilityBook].

As an illustration, we revisit the one-dimensional constrained example discussed in . We consider the model $y = x^* + \varepsilon$, where $\varepsilon \sim \mathcal{N}(0,1)$, $x^* \geq 0$, and $\varphi(x) = x$. We recall that $\lambda(\mu, y) = (y-\mu)^2 - \bm{1}(y < 0)y^2$ and that the quantile function of the distribution of $\lambda(\mu, y)$ is available analytically for every $\mu$, which we can use as a valid $q_\alpha(\mu)$. We extend below the results in to prove that the distribution of $\lambda(\mu, y)$ is stochastically dominated by a $\chi^2_1$ distribution. See for an illustration.

**Example 7** (Stochastic dominance for LLR in constrained Gaussian in one dimension). *For the LLR $\lambda(\mu, y)$, when $y \sim \mathcal{N}(\mu, 1)$ under the null hypothesis, we have that $\chi^2_1 \succeq \lambda(\mu, y)$ for all $\mu \geq 0$.*

*Proof.* See . ◻

For this example, given that the quantile function can be analytically computed, we can define $1-\alpha$ confidence intervals using either the exact quantile $q_\alpha(\mu)$ or the dominating quantile $Q_{\chi^2_1}(1-\alpha)$, as shown in [\[opt4new1d_quantile_sec3\]](#opt4new1d_quantile_sec3){reference-type="eqref" reference="opt4new1d_quantile_sec3"}. $$\label{opt4new1d_quantile_sec3}
\mathcal{I}_\alpha(y) :=~
\begin{aligned}
\min_{x}/\max_{x} \quad & x \\
\mathop{\mathrm{subject\,\,to}}\quad & x \geq 0\\
& (x-y)^2 \leq q_\alpha(x) + \min_{x' \geq 0} (x'-y)^2.
\end{aligned}$$
\end{aligned}$$

![](figures/images_1d/1d_example_quantiles_ABOVE.png){#fig:1d_quantiles width="\\textwidth"}

![](figures/images_1d/1d_cdfs.png){#fig:1d_cdfs width="95%"}

## Computational ways to obtain quantile levels via sampling {#subsec:Sampling}

When constructing the interval [\[intervaldef\]](#intervaldef){reference-type="eqref" reference="intervaldef"}, the smallest quantile level $q_\alpha$ that produces a valid interval is simply the $(1 - \alpha)$ quantile function evaluated at ${\bm{x}}^*$, i.e., $Q_{F_{{\bm{x}}^*}}(1 - \alpha)$. For future reference, we refer to intervals computed using this quantile as "Oracle" intervals (or "OQ" for short), as the oracle quantile is unknown in practice and reflects the best possible constant threshold in the constrained setting. Using the quantile from a stochastically dominating distribution as done in  addresses the unknown nature of the optimal quantile level by using a quantile valid for all ${\bm{x}}^*$. However, a stochastically dominating distribution might not be known, and intervals computed using its quantiles can be conservative if the stochastic dominance gap is too large. Both of these challenges can potentially be addressed via computation if the true parameter ${\bm{x}}^*$ is known to lie within a compact set.

Formally, let $\mathcal{B} \subseteq \mathcal{X}$ be compact and assume that ${\bm{x}}^* \in \mathcal{B}$. Since $Q^u := \max_{{\bm{x}}\in \mathcal{B}} Q_{F_{{\bm{x}}}}(1 - \alpha) \geq Q_{F_{{\bm{x}}^*}}(1 - \alpha)$ for all ${\bm{x}}^* \in \mathcal{B}$, the maximum quantile $Q^u$ across all ${\bm{x}}\in \mathcal{B}$ can serve as a quantile level to construct a valid test and interval. To estimate $Q^u$ computationally, we propose the following steps:

1. Randomly sample $M$ points ${\bm{x}}_1, \dots, {\bm{x}}_M$ within the compact set $\mathcal{B}$.

2. For each sampled point ${\bm{x}}_i$:

    1. Generate $N$ samples from the distribution of the LLR test statistic under the null defined at ${\bm{x}}_i$, i.e., $\lambda_{i,1}, \dots, \lambda_{i,N} \sim F_{{\bm{x}}_i}$. Note that $N$ depends on $M$, so technically, it is $N(M)$, but we will suppress this dependence for notational brevity.

    2. Compute the estimated $1 - \alpha$ quantile at ${\bm{x}}_i$, denoted by $\widehat{q}_i$, using the appropriate order statistic: $\widehat{q}_i := \lambda_{i, (\lfloor N (1 - \alpha) \rfloor)}$, where $(\lfloor N (1 - \alpha) \rfloor)$ denotes the $\lfloor N (1 - \alpha) \rfloor$-th order statistic.

3. Calculate the estimated upper bound $Q^u$ using the empirical maximum as $\widehat{Q}^u := \max_{i \in [M]} \widehat{q}_i$.

As shown in  below, $\widehat{Q}^u$ is a consistent estimator of $Q^u$ as $M, N \to \infty$, assuming a growth rate $N = o(M)$.

**Lemma 8** (Consistency of the maximum quantile $\widehat{Q}^u$). *Assume $\mathcal{B} \subset \mathcal{X}$ is compact and let $N = o(M)$. If the quantile function is continuous with respect to the parameter ${\bm{x}}$, then, for any $\epsilon > 0$, $\mathbb{P} ( \lvert \widehat{Q}^u - Q^u \rvert \geq \epsilon ) \to 0$ as $M \to \infty$.*

*Proof.* See . ◻

For future reference, we refer to intervals computed using $\widehat{Q}^u$ as "Max Quantile" (abbreviated as "MQ") intervals. To summarize these settings for constant $q_\alpha$, we have: $$\begin{aligned}
q_\alpha^{\mathsf{OQ}} := Q_{F_{{\bm{x}}^*}}(1 - \alpha) \quad \text{and} \quad q_\alpha^{\mathsf{MQ}} := \widehat{Q}^u. \label{oracle-max_quant}\end{aligned}$$ In practice, implementing the above steps requires some nuance.
First, the assumption of compactness for $\mathcal{B}$ may not always be reasonable, and the computational complexity increases with the dimensionality of $\mathcal{X}$, since more sample points $M$ are needed. However, assuming the compactness of this set is often reasonable in scientific applications when there are physically motivated constraints on the parameter vector ${\bm{x}}^*$. Second, in the numerical examples we consider (see  for more details), the non-trivial quantile behavior usually occurs near the boundaries of $\mathcal{X}$. As such, even in low-dimensional settings, it could be helpful to sample more frequently near the boundaries of $\mathcal{X}$. Third, to perform each draw $\lambda_{i,j} \sim F_{{\bm{x}}_i}$, one must solve the optimization problems defining the LLR, which can be non-trivial even if the problems are convex in ${\bm{x}}$.

## General confidence interval construction {#subsec:algorithm}

In this section, we present our meta-algorithm that uses the methodologies discussed in the preceding sections. The goal of this meta-algorithm is to construct a $1-\alpha$ confidence interval for a given quantity of interest $\varphi({\bm{x}}^*)$. The algorithmic steps are outlined in .

**Input**: Observed data ${\bm{y}}$, log-likelihood model $\ell_{\bm{x}}({\bm{y}})$, quantity of interest function $\varphi$, constraint set $\mathcal X$, miscoverage level $\alpha$.

**Test statistic**: Write down the LLR test statistic $\lambda(\mu, {\bm{y}}) ~= \underset{{\bm{x}}\in \Phi_\mu \cap \mathcal{X}}{\text{inf}} \; -2\ell_{{\bm{x}}}({\bm{y}})- \underset{{\bm{x}}\in \mathcal{X}}{\text{inf}} \; -2 \ell_{{\bm{x}}}({\bm{y}})$.

**Distribution control**: Control $F_{{\bm{x}}}$, the distribution of $\lambda(\varphi({\bm{x}}), {\bm{y}})$ whenever ${\bm{y}}\sim P_{{\bm{x}}}$, for all ${\bm{x}}\in \mathcal X$, by either:

1. *Explicit Solution*: Obtaining it explicitly, and letting $q_\alpha(\mu) := \sup_{{\bm{x}}\in \Phi_\mu \cap \mathcal{X}} Q_{F_{\bm{x}}}(1-\alpha)$.

2. *Analytical way via stochastic dominance* (): Finding a distribution $X$ that stochastically dominates all of $F_{{\bm{x}}}$, the distribution of the LLR under the null ${\bm{x}}\in \Phi_\mu \cap \mathcal{X}$, and letting $q_\alpha := Q_X(1-\alpha)$.

3. *Computational way to directly find a valid $q_\alpha$* (): In the event that neither an explicit nor a stochastic dominance solution is available, one can develop computational approaches to obtain a deterministic or probabilistic upper bound on $q_\alpha(\mu^*)$, i.e., the desired quantile for the LLR distribution at the true parameter value.

**Confidence interval calculation**: Obtain the confidence interval by solving whichever of the following pairs of optimization problems is easier in the particular case:

- Parameter space formulation: $$\label{optA}
  \begin{aligned}
  \min_{{\bm{x}}}/\max_{{\bm{x}}} \quad & \varphi({\bm{x}}) \\
  \mathop{\mathrm{subject\,\,to}}\quad & {\bm{x}}\in \mathcal X\\
  \quad & -2\ell_{{\bm{x}}}({\bm{y}}) \leq q_\alpha(\varphi({\bm{x}}))+ \inf_{{\bm{x}}' \in \mathcal X} -2\ell_{{\bm{x}}'}({\bm{y}}).
  \end{aligned}$$

- Functional space formulation: $$\label{optB}
  \begin{aligned}
  \min_\mu/\max_{\mu} \quad &\mu \\
  \mathop{\mathrm{subject\,\,to}}\quad & \mu \in \mathbb{R} \\
  \quad & \underset{{\bm{x}}\in \Phi_\mu \cap \mathcal{X}} {\text{inf}} \; -2\ell_{{\bm{x}}}({\bm{y}})- \underset{{\bm{x}}\in \mathcal{X}}{\text{inf}} \; -2 \ell_{{\bm{x}}}({\bm{y}})\leq q_\alpha(\mu).
  \end{aligned}$$

**Output**: Confidence interval with coverage $1-\alpha$.
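To make the meta-algorithm concrete, the following minimal Python sketch runs the pipeline end to end on a hypothetical toy instance (our own illustration, not the paper's implementation): a positivity-constrained linear-Gaussian model with ${\bm{K}}= {\bm{I}}_2$ and $\varphi({\bm{x}}) = {\bm{h}}^\top{\bm{x}}$, the computational Max Quantile route for distribution control over an assumed box $\mathcal{B} = [0,1]^2$, and the parameter-space formulation [\[optA\]](#optA){reference-type="eqref" reference="optA"} for the interval calculation. Names such as `llr_samples` and the values of $M$ and $N$ are illustrative choices.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
K, h = np.eye(2), np.array([1.0, -1.0])       # hypothetical toy instance: y ~ N(Kx*, I_2), x* >= 0

def neg2_loglik(x, y):                         # -2 * log-likelihood, up to an additive constant
    return np.sum((y - K @ x) ** 2)

def llr_samples(x0, n):
    """Draw n realizations of the null LLR at parameter x0 (step 2(a) of the MQ procedure)."""
    mu0, out = h @ x0, np.empty(n)
    for j in range(n):
        y = K @ x0 + rng.normal(size=K.shape[0])
        _, r_free = optimize.nnls(K, y)                    # min_{x >= 0} ||y - Kx||_2
        r_null = optimize.minimize(neg2_loglik, x0, args=(y,), method="SLSQP",
                                   bounds=[(0.0, None)] * K.shape[1],
                                   constraints=[{"type": "eq", "fun": lambda x: h @ x - mu0}])
        out[j] = r_null.fun - r_free ** 2                  # null fit minus positivity-constrained fit
    return out

# Distribution control, computational route: Max Quantile over the assumed box B = [0, 1]^2.
alpha, M, N = 0.05, 30, 300                    # small M, N for illustration only
q_hat = max(np.quantile(llr_samples(rng.uniform(0.0, 1.0, size=2), N), 1 - alpha)
            for _ in range(M))

# Confidence interval calculation: parameter-space formulation with threshold q_hat.
y_obs = K @ np.array([0.3, 0.3]) + rng.normal(size=2)
x_fit, r_obs = optimize.nnls(K, y_obs)
slack = q_hat + r_obs ** 2
cons = [{"type": "ineq", "fun": lambda x: slack - neg2_loglik(x, y_obs)}]
ends = [h @ optimize.minimize(lambda x, s=s: s * (h @ x), x_fit, method="SLSQP",
                              bounds=[(0.0, None)] * 2, constraints=cons).x
        for s in (+1.0, -1.0)]
print(f"MQ threshold {q_hat:.3f}; 95% interval for h^T x*: [{min(ends):.3f}, {max(ends):.3f}]")
```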
It is worth noting that the optimization problems defined in [\[optA\]](#optA){reference-type="eqref" reference="optA"} and [\[optB\]](#optB){reference-type="eqref" reference="optB"} may not always be convex or straightforward to solve. However, their dual formulations can be constructed, offering provably valid confidence intervals that may merely be conservative in the absence of strong duality. We defer the exploration of specialized optimization techniques specifically tailored for solving [\[optA\]](#optA){reference-type="eqref" reference="optA"} and [\[optB\]](#optB){reference-type="eqref" reference="optB"} to future work. Such techniques could potentially yield more efficient or tighter confidence intervals, further increasing the practical utility of our proposed methodology.

## A hybrid improvement for Gaussian models {#sec:hsb}

In the particular case of Gaussian models, we can further improve upon the general construction outlined in . In this section, we introduce a hybrid method that combines the simultaneous interval approach from  with our general construction. This hybrid method ensures that the resulting interval is at least as good as using either of the two individual approaches. We refer to this as the "hybrid strict bounds" intervals, or HSB for short.

Consider the model ${\bm{y}}= f({\bm{x}}^*) + \bm{\varepsilon}$, $\bm{\varepsilon} \sim \mathcal{N}(\bm{0},{\bm{I}}_m)$, where ${\bm{x}}^* \in \mathcal X \subseteq \mathbb{R}^p$ without any assumptions on $\mathcal X$, and an arbitrary functional $\varphi: \mathbb{R}^p \to \mathbb{R}$. Observe that $\|{\bm{y}}-f({\bm{x}}^*)\|^2_2 \sim \chi^2_m$, so $\mathcal{C}({\bm{y}}) := \{{\bm{x}}: \|{\bm{y}}-f({\bm{x}})\|^2_2 \leq Q_{\chi^2_m}(1-\alpha)\}$ is a $1-\alpha$ confidence set for ${\bm{x}}^*$. Therefore, using the simultaneous approach, a valid $1-\alpha$ interval is given by: $$\label{eq:ssb_gauss}
\mathcal{I}_{\mathsf{SSB}}({\bm{y}}) := \begin{aligned}
\inf_{{\bm{x}}}/\sup_{{\bm{x}}} \quad & \varphi({\bm{x}}) \\
\mathop{\mathrm{subject\,\,to}}\quad &\Vert {\bm{y}}- f({\bm{x}}) \Vert_2^2 \leq Q_{\chi^2_m}(1-\alpha) \nonumber \\
& {\bm{x}}\in \mathcal X. \nonumber
\end{aligned}$$ Consider now the approach of , and write the LLR: $$\label{eq:llr_gaussian}
\lambda(\mu, {\bm{y}}) ~= \inf_{\substack{\varphi({\bm{x}}) = \mu \\ {\bm{x}}\in \mathcal X}} \lVert {\bm{y}}- f({\bm{x}}) \rVert_2^2 - \inf_{{\bm{x}}\in \mathcal X} \lVert {\bm{y}}- f({\bm{x}}) \rVert_2^2.$$ Suppose we know a valid $q_\alpha$ from any of the methods presented in this section. Then, an interval with coverage according to  is: $$\label{eq:osb_gauss}
\mathcal{I}_{\mathsf{LLR}}({\bm{y}}) := \begin{aligned}
\inf_{{\bm{x}}}/\sup_{{\bm{x}}} \quad & \varphi({\bm{x}}) \\
\mathop{\mathrm{subject\,\,to}}\quad & \Vert {\bm{y}}- f({\bm{x}}) \Vert_2^2 \leq q_\alpha + \inf_{{\bm{x}}\in \mathcal X} \lVert {\bm{y}}- f({\bm{x}}) \rVert_2^2 \nonumber \\
& {\bm{x}}\in \mathcal X. \nonumber
\end{aligned}$$ We propose a combined approach in the following proposition.

**Proposition 9** (Hybrid strict bounds interval construction). *Consider the model ${\bm{y}}= f({\bm{x}}^*) + \bm{\varepsilon}$ with ${\bm{x}}^* \in \mathcal X$ and $\bm{\varepsilon} \sim \mathcal{N}(\bm{0},{\bm{I}}_m)$, and any functional $\varphi$. Let $q_\alpha$ be valid.
Consider the hybrid strict bounds interval: $$\label{eq:hsb}
\mathcal{I}_{\mathsf{HSB}}({\bm{y}}) := \begin{aligned}
\inf_{{\bm{x}}}/\sup_{{\bm{x}}} \quad & \varphi({\bm{x}}) \\
\mathop{\mathrm{subject\,\,to}}\quad & \Vert {\bm{y}}- f({\bm{x}}) \Vert_2^2 \leq \min \big\{ Q_{\chi^2_m}(1-\alpha), \; q_\alpha + \inf_{{\bm{x}}\in \mathcal X} \lVert {\bm{y}}- f({\bm{x}}) \rVert_2^2 \big\} \nonumber \\
& {\bm{x}}\in \mathcal X, \nonumber
\end{aligned}$$ where the minimum is taken after observing a particular ${\bm{y}}$. The hybrid strict bounds interval $\mathcal{I}_{\mathsf{HSB}}$ is:*

1. *a $1-\alpha$ confidence interval;*

2. *never larger (in length) than the simultaneous interval [\[eq:ssb_gauss\]](#eq:ssb_gauss){reference-type="eqref" reference="eq:ssb_gauss"} and never larger than the LLR-based interval [\[eq:osb_gauss\]](#eq:osb_gauss){reference-type="eqref" reference="eq:osb_gauss"}.*

*Proof sketch.* The proof follows by exploiting the fact that both intervals have $1-\alpha$ coverage and that one of them is always contained in the other. See  for the complete proof. ◻

In , we empirically compare the HSB interval with both the OSB and SSB intervals. Our findings indicate that the HSB intervals tend to overcover less frequently than the OSB intervals, as expected from .

# Refuting the Burrus conjecture {#sec:rust_burrus}

As discussed in , the family of constrained problems that has received the most attention is the positivity-constrained version of the problem as described in . To recap, the model is defined as follows: $$\label{eq:BurrusConjectureModel}
{\bm{y}}\sim \mathcal{N}({\bm{K}}{\bm{x}}^*, {\bm{I}}_m) \quad \text{with} \quad \mathcal{X} = \{{\bm{x}}: {\bm{x}}\geq \bm{0}\} \quad \text{and} \quad \varphi({\bm{x}}) = {\bm{h}}^\top{\bm{x}}.$$ Here ${\bm{K}}\in \mathbb{R}^{m \times p}$ is the forward linear operator. It was initially conjectured in [@burrus1965utilization; @rust_burrus] that a valid $1-\alpha$ confidence interval could be obtained as: $$\label{opt4burrus}
\begin{aligned}
\min_{{\bm{x}}}/\max_{{\bm{x}}} \quad & {\bm{h}}^\top{\bm{x}}\\
\mathop{\mathrm{subject\,\,to}}\quad & \Vert {\bm{y}}- {\bm{K}}{\bm{x}}\Vert_2^2 \leq \psi^2_\alpha\\
\quad & {\bm{x}}\geq \bm{0}.
\end{aligned}$$ Here $\psi^2_\alpha = z_{\alpha/2}^2 + s^2({\bm{y}})$, with $z_{\alpha/2}$ being the previously defined standard Gaussian quantile and $s^2({\bm{y}})$ defined as the optimal value of: $$\begin{aligned}
\min_{{\bm{z}}} \quad & \Vert {\bm{y}}- {\bm{K}}{\bm{z}}\Vert^2_2 \\
\mathop{\mathrm{subject\,\,to}}\quad & {\bm{z}}\geq \bm{0}.
\end{aligned}$$ Although initially believed to be proven true in [@rust1994confidence], a flaw in the proof was later identified, along with a counterexample provided in [@tenorio2007confidence]. However, we demonstrate that this counterexample actually satisfies the conjecture, leaving the conjecture unresolved, to the best of our knowledge, prior to our work. The main result of this section is the construction of a new counterexample, using the test inversion perspective developed in  and the stochastic dominance perspective in , which disproves the conjecture.

**Theorem 10** (Refutation of the Burrus conjecture). *The Burrus conjecture is false in general.
The previously proposed two-dimensional example of a particular instance of [\[eq:BurrusConjectureModel\]](#eq:BurrusConjectureModel){reference-type="eqref" reference="eq:BurrusConjectureModel"} in [@tenorio2007confidence]: $${\bm{K}}= {\bm{I}}_2 \quad \text{and} \quad {\bm{h}}= (1,-1) \quad \text{with} \quad {\bm{x}}^* = (a, a) \text{ such that } a \geq 0$$ does not constitute a valid counterexample to the Burrus conjecture. However, the following constitutes a valid counterexample for the Burrus conjecture: $${\bm{K}}= {\bm{I}}_3 \quad \text{and} \quad {\bm{h}}= (1,1,-1) \quad \text{with} \quad {\bm{x}}^* = (0,0,1).$$* The main idea of the proof is to first connect the conjecture to our framework, identifying the conjectured intervals as a particular case of our construction with a particular choice of $q_\alpha$. We then apply to show that coverage is equivalent to a valid choice of $q_\alpha$. Finally, we present a counterexample to prove that the proposed $q_\alpha$ is not universally valid. The proof is divided into several lemmas for clarity. Our approach is novel in that it diverges from previous geometric perspectives on the Gaussian likelihood, instead leveraging the test inversion and stochastic dominance perspectives developed in and . ## Proof outline of This subsection provides a structured outline of the proof for , which refutes the Burrus conjecture. We break down the proof into several key lemmas. **Lemma 11** (Framing the Burrus conjecture via test inversion). *The interval construction in [\[opt4burrus\]](#opt4burrus){reference-type="eqref" reference="opt4burrus"} for a particular instance of the problem $({\bm{K}}, {\bm{h}})$ is equivalent to the general construction in for the model ${\bm{y}}\sim \mathcal{N}({\bm{K}}{\bm{x}}^*, {\bm{I}}_m)$, with ${\bm{x}}^* \geq \bm{0}$ component wise, and $\varphi({\bm{x}}) = {\bm{h}}^\top{\bm{x}}$, using threshold $q_\alpha(\mu) = z_{\alpha/2}^2$ independent of $\mu$. Therefore, it comes from inverting a hypothesis test $H_0: {\bm{h}}^\top{\bm{x}}^* = \mu \text{ versus } H_1: {\bm{h}}^\top{\bm{x}}^* \neq \mu$ with LLR $$\label{eq:llrburrus} \lambda(\mu, {\bm{y}}) ~:= \min_{\substack{{\bm{h}}^\top{\bm{x}}= \mu \\ {\bm{x}}\geq \bm{0}}} \lVert {\bm{y}}- {\bm{K}}{\bm{x}}\rVert_2^2 - \min_{{\bm{x}}\geq \bm{0}} \lVert {\bm{y}}- {\bm{K}}{\bm{x}}\rVert_2^2.$$ Furthermore, the interval has correct coverage if and only if $q_\alpha=z^2_{\alpha/2}$ is a valid choice (in the sense of satisfying the false positive guarantees [\[eq:combined_type_1\]](#eq:combined_type_1){reference-type="eqref" reference="eq:combined_type_1"}).* *Proof.* See . ◻ **Lemma 12** (Reducing of the Burrus conjecture to stochastic dominance). *The interval construction in [\[opt4burrus\]](#opt4burrus){reference-type="eqref" reference="opt4burrus"} has the right coverage for any $\alpha$ (and hence the conjecture holds) for a particular instance of the problem $({\bm{x}}^*, {\bm{K}}, {\bm{h}})$ if and only if the log-likelihood ratio test statistic $$\lambda(\mu = {\bm{h}}^\top{\bm{x}}^*, {\bm{y}}) ~:= \min_{\substack{{\bm{h}}^\top{\bm{x}}= {\bm{h}}^\top{\bm{x}}^* \\ {\bm{x}}\geq \bm{0}}} \lVert {\bm{y}}- {\bm{K}}{\bm{x}}\rVert_2^2 - \min_{{\bm{x}}\geq \bm{0}} \lVert {\bm{y}}- {\bm{K}}{\bm{x}}\rVert_2^2$$ is stochastically dominated by a $\chi^2_1$ distribution whenever ${\bm{y}}\sim \mathcal N({\bm{K}}{\bm{x}}^*, {\bm{I}}_m)$.* *Proof.* See . 
◻ As an example, the constrained one-dimensional example considered in satisfies the stochastic dominance result, and hence the conjecture. Furthermore, using , an alternative characterization of the conjecture is the stochastic dominance of the unconstrained LLR statistic $\min_{{\bm{h}}^\top{\bm{x}}= {\bm{h}}^\top{\bm{x}}^*} \lVert {\bm{y}}- {\bm{K}}{\bm{x}}\rVert_2^2 - \min_{{\bm{x}}} \lVert {\bm{y}}- {\bm{K}}{\bm{x}}\rVert_2^2$ over the constrained test statistic $\min_{\substack{{\bm{h}}^\top{\bm{x}}= {\bm{h}}^\top{\bm{x}}^* \\ {\bm{x}}\geq \bm{0}}} \lVert {\bm{y}}- {\bm{K}}{\bm{x}}\rVert_2^2 - \min_{{\bm{x}}\geq \bm{0}} \lVert {\bm{y}}- {\bm{K}}{\bm{x}}\rVert_2^2$. We use to prove both that the example in [@tenorio2007confidence] satisfies the conjecture and that our new counterexample does not. #### Invalidity of previous counterexample in two dimensions. The previously proposed counterexample is a two-dimensional example with ${\bm{K}}= {\bm{I}}_2$, ${\bm{x}}^* = (a,a)$ with $a \geq 0$, ${\bm{h}}= (1,-1)$ (and therefore $\mu^* = {\bm{h}}^\top{\bm{x}}^* = 0$). The LLR test statistic is $$\lambda(\mu^* = 0, {\bm{y}}) = \min_{\substack{x_1 = x_2 \\ {\bm{x}}\geq \bm{0}}} \Vert {\bm{x}}-{\bm{y}}\Vert^2_2 - \min_{{\bm{x}}\geq \bm{0} } \Vert {\bm{x}}-{\bm{y}}\Vert^2_2$$ which, after solving the optimization problems, is equal to $$\lambda(\mu^*, {\bm{y}}) = \begin{cases} y_1^2 + y_2^2 & y_1 + y_2 < 0 \\ \frac 1 2 (y_1-y_2)^2 & y_1 + y_2 \geq 0 \end{cases} -(y_1 - \max(y_1, 0))^2 - (y_2 - \max(y_2, 0))^2,$$ which we can equivalently write as $$\label{tenorioce} \lambda(\mu^*, {\bm{y}}) = (y_1^2 + y_2^2)\mathbbm{1}\{y_1+y_2 < 0\} + \frac 1 2 (y_1-y_2)^2 \mathbbm{1}\{y_1+y_2 \geq 0\} - y_1^2 \mathbbm{1}\{y_1<0\} -y_2^2\mathbbm{1}\{y_2 < 0\}.$$ **Lemma 13** (Invalidity of a previous counterexample). *$\lambda(\mu^*, {\bm{y}})$ in [\[tenorioce\]](#tenorioce){reference-type="eqref" reference="tenorioce"} is stochastically dominated by a $\chi^2_1$ random variable whenever $y \sim \mathcal N({\bm{x}}^*, {\bm{I}})$. Therefore, it does not constitute a valid counterexample to the conjecture.* *Proof sketch.* The proof follows by a coupling argument between the LLR and a $\chi^2_1$ random variable. See for proof details. ◻ In summary, we use to demonstrate that the previously proposed counterexample actually satisfies the conjecture, while our new counterexample refutes it. We discuss it next. #### A new provable counterexample in three dimensions. We present a new counterexample in $\mathbb{R}^3$ to refute the Burrus conjecture. Specifically, we consider ${\bm{K}}= {\bm{I}}_3$, ${\bm{x}}^* = (0,0,1)$, and ${\bm{h}}= (1,1,-1)$, yielding $\mu^* = -1$. We prove that $\chi^2_1$ does not stochastically dominate $\lambda(\mu^*, {\bm{y}})$, which in this case is: $$\label{ource} \lambda(\mu^* = -1, {\bm{y}}) ~= \min_{\substack{x_1 + x_2 -x_3 = -1\\ {\bm{x}}\geq \bm{0}}} \Vert {\bm{x}}-{\bm{y}}\Vert^2_2 - \min_{{\bm{x}}\geq \bm{0} } \Vert {\bm{x}}-{\bm{y}}\Vert^2_2.$$ We prove that $\mathbb{E}[\lambda(\mu^*, {\bm{y}})] > \mathbb{E}[\chi^2_1] = 1$. Here expectation is taken with respect to $y \sim \mathcal N({\bm{x}}^*, {\bm{I}}_3)$, and the inequality is a general sufficient condition for the conjecture to break. **Lemma 14** (Validity of a new counterexample). *$\lambda(\mu^*, {\bm{y}})$ in [\[ource\]](#ource){reference-type="eqref" reference="ource"} is *not* stochastically dominated by a $\chi^2_1$ random variable whenever $y \sim \mathcal N({\bm{x}}^*, {\bm{I}})$. 
Therefore, it constitutes a valid counterexample to the general conjecture.*

*Proof sketch.* We compute the expected value and show that it is greater than 1 (the expected value of a $\chi^2_1$), disproving stochastic dominance. See  for proof details. ◻

**Remark 5** (A more general counterexample). The validity of the counterexample does not hinge on ${\bm{x}}^*$ being on the boundary of the constraint set. Indeed, the example remains valid for ${\bm{x}}^* = (\varepsilon, \varepsilon, 1)$ with $\varepsilon > 0$. We choose $\varepsilon = 0$ for proof simplicity. See  for numerical evidence, where quantiles above the dashed line correspond to valid counterexamples.

 shows the difference between the two examples. By plotting the difference between the CDF of $\lambda$ (obtained numerically with $N = 10^6$ samples) and the CDF of a $\chi^2_1$ distribution, we observe stochastic dominance for the two-dimensional example in  (left panel) and a violation of stochastic dominance (hence a failure of the conjecture) for the three-dimensional example in  (right panel).

![ Difference of cumulative distribution functions between LLR test statistics and $\chi^2$ distributions for the statistics defined in [\[tenorioce\]](#tenorioce){reference-type="eqref" reference="tenorioce"} (left) and [\[ource\]](#ource){reference-type="eqref" reference="ource"} (right). Stochastic dominance, which is equivalent to the Burrus conjecture, is broken in the right example only. There is a direct correspondence between the points at which the CDF difference is negative and coverages $\alpha$ that fail to hold (see ). ](figures/images/2DCDFs.pdf "fig:"){#fig:sdominance width="0.95\\columnwidth"} [\[fig:sdominanceA\]]{#fig:sdominanceA label="fig:sdominanceA"}

![ Difference of cumulative distribution functions between LLR test statistics and $\chi^2$ distributions for the statistics defined in [\[tenorioce\]](#tenorioce){reference-type="eqref" reference="tenorioce"} (left) and [\[ource\]](#ource){reference-type="eqref" reference="ource"} (right). Stochastic dominance, which is equivalent to the Burrus conjecture, is broken in the right example only. There is a direct correspondence between the points at which the CDF difference is negative and coverages $\alpha$ that fail to hold (see ). ](figures/images/3DCDFs.pdf "fig:"){#fig:sdominance width="0.95\\columnwidth"} [\[fig:sdominanceB\]]{#fig:sdominanceB label="fig:sdominanceB"}

## A negative result in high dimensions

After establishing that the $\chi^2_1$ distribution fails to stochastically dominate the constrained log-likelihood ratio (LLR), a natural question arises: is there another distribution, possibly within the $\chi^2_k$ family, that can stochastically dominate the constrained LLR? If such a distribution exists, it would allow us to redefine $\psi^2_\alpha$ in [\[opt4burrus\]](#opt4burrus){reference-type="eqref" reference="opt4burrus"} as $s^2 + Q_X(1-\alpha)$, making the $Q_X$ term in the optimization problem dimension-independent and leading to shorter intervals in large dimensions. It is worth noting that in the unconstrained scenario, the LLR distribution is precisely $\chi^2_1$, regardless of the problem's dimensionality. However, the following proposition shows that no such dimension-independent distribution exists for the constrained case.

**Proposition 15** (A negative result in high dimensions).
*The family of constrained LLR, defined as $$\lambda(\mu = {\bm{h}}^\top{\bm{x}}^*, {\bm{y}}) ~= \min_{\substack{{\bm{h}}^\top{\bm{x}}= {\bm{h}}^\top{\bm{x}}^* \\ {\bm{x}}\geq \bm{0}}} \lVert {\bm{y}}- {\bm{K}}{\bm{x}}\rVert_2^2 - \min_{{\bm{x}}\geq \bm{0}} \lVert {\bm{y}}- {\bm{K}}{\bm{x}}\rVert_2^2,$$ can not be uniformly stochastically bounded in a dimension-independent way by any finite-mean distribution (including all $\chi^2_k$ for $k \ge 1$).* *Proof sketch.* We construct a sequence of examples with increasing dimensions and demonstrate that the expected value of the constrained LLR grows unbounded as the dimension increases. This result negates the possibility of stochastic dominance by any finite-mean distribution. For a detailed proof, see . ◻ # Numerical examples {#sec:numerical-examples} In this section, we provide numerical illustrations for the procedures proposed above. Recall different intervals we have discussed so far: $\mathcal{I}_{\mathsf{SSB}}$ per [\[simultaneous\]](#simultaneous){reference-type="eqref" reference="simultaneous"}; $\mathcal{I}_{\mathsf{OSB}}$ per [\[RB1\]](#RB1){reference-type="eqref" reference="RB1"}; $\mathcal{I}_{\mathsf{HSB}}$ per [\[eq:hsb\]](#eq:hsb){reference-type="eqref" reference="eq:hsb"}; $\mathcal{I}_{\mathsf{OQ}}$ using quantiles from [\[oracle-max_quant\]](#oracle-max_quant){reference-type="eqref" reference="oracle-max_quant"}; $\mathcal{I}_{\mathsf{MQ}}$ using quantiles from [\[oracle-max_quant\]](#oracle-max_quant){reference-type="eqref" reference="oracle-max_quant"}. This section contains three numerical experiments, each with its own macro goal. In , we demonstrate a direct application of the framework presented in to a one-dimensional constrained Gaussian scenario where we can compute (to arbitrary precision) the null LLR test statistic $\alpha$-quantile function, $q_\alpha(x)$. presents a two-dimensional constrained Gaussian scenario (originally explored in [@tenorio2007confidence]) to show that $95\%$ intervals computed using the MQ approach from outperform the OSB intervals in terms of both coverage and length by leveraging the quantile structure of the null LLR test statistic distributions. Finally, provides numerical evidence of the Burrus conjecture's failure in a three-dimensional constrained Gaussian scenario proven in by showing miscalibration of $68\%$ OSB intervals. We also provide numerical evidence that the Max Quantile approach does produce valid $68\%$ intervals in this setting, demonstrating a viable alternative. ## Constrained Gaussian in one dimension {#subsec:1d_numerics} ![](figures/images_1d/1d_example_coverage_TRUE_QUANT.png){#fig:1d_interval_coverages width="95%"} ![](figures/images_1d/1d_example_lengths_TRUE_QUANT.png){#fig:1d_interval_lengths width="95%"} We revisit the constrained Gaussian model in one dimension [\[eq:1d_toy_model\]](#eq:1d_toy_model){reference-type="eqref" reference="eq:1d_toy_model"} described in . We perform a simulation experiment using six true parameter settings of $x^* \in \{0, 2^{-3}, 2^{-2}, 2^{-1}, 2^0, 2^1 \}$. We favor settings closer to the boundary as that is where the most difference between the considered intervals exists. 
For each of these settings, we simulate $10^5$ observations according to model [\[eq:1d_toy_model\]](#eq:1d_toy_model){reference-type="eqref" reference="eq:1d_toy_model"} described in  and compute three different $95\%$ confidence intervals for each sample: the interval in [\[intervaldef\]](#intervaldef){reference-type="eqref" reference="intervaldef"} using the actual quantile function $q_\alpha(x)$ as described in [\[eq:1d_quantile_func\]](#eq:1d_quantile_func){reference-type="eqref" reference="eq:1d_quantile_func"}, the interval in [\[intervaldef\]](#intervaldef){reference-type="eqref" reference="intervaldef"} using the stochastically dominating $\chi^2_{1, \alpha}$ quantile, and the standard truncated Gaussian interval, which exactly equals a simultaneous interval construction (SSB). To empirically estimate coverage, for each $x^*$ setting and each interval type, we compute $10^5$ intervals and keep track of their coverage of the true parameter.

The intervals computed with the true quantile function are characterized by: $$\label{opt4new1d_quantile}
\begin{aligned}
\mathcal{I}_\alpha(y) :=~ \min_{x}/\max_{x} \quad & x \\
\mathop{\mathrm{subject\,\,to}}\quad & x \geq 0\\
& (x-y)^2 \leq q_\alpha(x) + \min_{x \geq 0} (x-y)^2.
\end{aligned}$$ For the stochastically dominating $\chi^2_{1, \alpha}$, the interval in [\[intervaldef\]](#intervaldef){reference-type="eqref" reference="intervaldef"} becomes: $$\label{opt4new1d}
\begin{aligned}
\mathcal{I}_{\mathsf{OSB}}(y) = \min_{x}/\max_{x} \quad & x \\
\mathop{\mathrm{subject\,\,to}}\quad & x \geq 0\\
& (x-y)^2 \leq \chi^2_{1, \alpha} + \min_{x \geq 0} (x-y)^2.
\end{aligned}$$ The truncated Gaussian interval is defined as: $$\label{eq:TG1D}
\mathcal{I}_{\mathsf{SSB}}(y) := \left[ y - z_{\alpha / 2},\; y + z_{\alpha / 2} \right] \cap \mathbb{R}_{\ge 0}.$$ Observe that [\[opt4new1d\]](#opt4new1d){reference-type="eqref" reference="opt4new1d"} admits an explicit solution: $$\begin{aligned}
\label{eq:explicit1d}
\mathcal{I}_{\mathsf{OSB}}(y) = \begin{cases} [y - \sqrt{\chi^2_{1, \alpha}},\ y + \sqrt{\chi^2_{1, \alpha}} ] \cap \mathbb{R}_{\ge 0} & y \geq 0 \\ [y - \sqrt{\chi^2_{1, \alpha} + y^2},\; y + \sqrt{\chi^2_{1, \alpha} + y^2} ] \cap \mathbb{R}_{\ge 0} & y < 0. \end{cases}\end{aligned}$$ Further note that $\sqrt{\chi^2_{1, \alpha}} = z_{\alpha/2}$, so that the interval in [\[eq:explicit1d\]](#eq:explicit1d){reference-type="eqref" reference="eq:explicit1d"} always contains the interval in [\[eq:TG1D\]](#eq:TG1D){reference-type="eqref" reference="eq:TG1D"} when the $\chi^2_1$ upper bound is used. In turn, we can express [\[eq:TG1D\]](#eq:TG1D){reference-type="eqref" reference="eq:TG1D"} as the solution to a pair of optimization problems, illustrating that the truncated Gaussian interval is an SSB interval in this case: $$\label{TGopt1D}
\begin{aligned}
\mathcal{I}_{\mathsf{SSB}}(y) = \min_{x}/\max_{x} \quad & x \\
\mathop{\mathrm{subject\,\,to}}\quad & x \geq 0\\
& (x-y)^2 \leq z_{\alpha/2}^2.
\end{aligned}$$ To compare each method, we estimate coverage and expected interval length for each setting of $x^*$ using the $10^5$ computed random intervals.  shows how the interval in [\[intervaldef\]](#intervaldef){reference-type="eqref" reference="intervaldef"} based on $c_\alpha := \chi^2_{1, \alpha}$ over-covers when the true parameter is on the boundary, which makes sense as this setting of $c_\alpha$ holds for all $x^*$ and is therefore a conservative quantile.
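For reference, a minimal Monte Carlo sketch of this comparison (ours, not the paper's code) is given below; it estimates coverage of the closed-form OSB interval [\[eq:explicit1d\]](#eq:explicit1d){reference-type="eqref" reference="eq:explicit1d"} and of the truncated-Gaussian SSB interval [\[eq:TG1D\]](#eq:TG1D){reference-type="eqref" reference="eq:TG1D"}; the true-quantile interval is omitted since it requires $q_\alpha(x)$.

```python
import numpy as np
from scipy import stats

alpha, n_rep = 0.05, 100_000
z = stats.norm.ppf(1 - alpha / 2)        # z_{alpha/2}; note sqrt(chi^2_{1,alpha}) = z_{alpha/2}
rng = np.random.default_rng(0)

def covers_osb(y, x_star):
    """Closed-form OSB interval based on the chi^2_1 bound."""
    half = z if y >= 0 else np.sqrt(z ** 2 + y ** 2)
    return max(y - half, 0.0) <= x_star <= max(y + half, 0.0)

def covers_ssb(y, x_star):
    """Truncated-Gaussian (SSB) interval [y - z, y + z] intersected with [0, inf)."""
    return (y + z >= 0.0) and (max(y - z, 0.0) <= x_star <= y + z)

for x_star in [0.0, 0.125, 0.25, 0.5, 1.0, 2.0]:
    ys = rng.normal(x_star, 1.0, size=n_rep)
    cov_osb = np.mean([covers_osb(y, x_star) for y in ys])
    cov_ssb = np.mean([covers_ssb(y, x_star) for y in ys])
    print(f"x* = {x_star:5.3f}: OSB coverage {cov_osb:.3f}, SSB coverage {cov_ssb:.3f}")
```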
As expected, the interval computed with $q_\alpha(x)$ maintains nominal $95\%$ coverage over all considered $x^*$ values, which is remarkable since knowing the quantile function means that we can compute an interval with exact nominal coverage that is adaptive to the true underlying parameter value. Additionally, we note that as the signal strength increases (i.e., larger values of $x^*$), the estimated coverage values across these methods converge, illustrating the intuition that when signal dominates noise, the problem is essentially unconstrained, and all considered methods produce nearly identical results.

 shows each interval method's expected length as a function of $x^*$. Again, we observe the tightness of the interval [\[intervaldef\]](#intervaldef){reference-type="eqref" reference="intervaldef"} constructed with the true quantile function $q_\alpha(x)$ relative to the interval constructed with the stochastically dominating quantile, $\chi^2_{1, \alpha}$. Similarly to coverage, as the signal strength increases, the expected interval lengths converge and the methods become indistinguishable. Also observe that the truncated-Gaussian intervals have a smaller expected length compared to the intervals computed with $q_\alpha(x)$. This can partly be explained by one of the primary failure modes of the truncated-Gaussian interval, which is that extreme observations can lead to intervals of length zero.

## Constrained Gaussian in two dimensions {#subsec:2d_gauss_num}

This section presents simulation study results for $95\%$ confidence intervals using the two-dimensional data generating process from [@tenorio2007confidence]. Namely, this data generating process is the Linear-Gaussian model in [\[eq:lineargaussianmodel\]](#eq:lineargaussianmodel){reference-type="eqref" reference="eq:lineargaussianmodel"} with ${\bm{K}}= {\bm{I}}_2$, and $\varphi({\bm{x}}) = {\bm{h}}^\top{\bm{x}}= x_1 - x_2$. [@tenorio2007confidence] propose this scenario as a counterexample to the Burrus conjecture, but as shown in , it is in fact a case where the $\chi^2_1$ stochastically dominates the LLR for all true ${\bm{x}}^* \in \{{\bm{x}}: {\bm{h}}^\top{\bm{x}}= 0, {\bm{x}}\in \mathbb{R}^2_+ \}$, so the OSB intervals proposed by the conjecture have the correct coverage for ${\bm{x}}^*$ in this set. We estimate interval coverage and expected length under five interval procedures and three true parameter values to show that the MQ approach from  outperforms the OSB intervals in terms of both coverage and length by leveraging the quantile structure of the null LLR test statistic distributions.

The five considered intervals are the SSB, OSB, HSB, MQ, and OQ intervals. To define the MQ intervals, we assume that ${\bm{x}}^* \in \mathcal{B} = [0, 1] \times [0, 1]$. This choice is somewhat arbitrary for this example, but as mentioned in , $\mathcal{B}$ can be defined using known physical constraints in applications. To compute the oracle quantile for the OQ intervals, we sample $2 \times 10^4$ draws from the LLR test statistic distribution under the null for this problem, compute the $95\%$ nonparametric confidence interval for the empirical $95\%$ quantile using the method in [@hahn_meeker], and set the oracle quantile as the upper endpoint of that interval. Given the large sample size, this upper endpoint is likely only slightly conservative. This OQ procedure is repeated for each true parameter value.
The three considered true parameter values are ${\bm{x}}^*_0 = \begin{pmatrix} 0 & 0 \end{pmatrix}^\top$, ${\bm{x}}^*_1 = \begin{pmatrix} 0.33 & 0.33 \end{pmatrix}^\top$, and ${\bm{x}}^*_2 = \begin{pmatrix} 0.33 & 0.5 \end{pmatrix}^\top$. These parameter values are chosen to provide a sense of how the interval properties change starting at the origin of the cone constraint compared to farther along the line bisecting the positive quadrant. The stochastic dominance shown in  implies that the OSB intervals are valid along the bisecting line. The third considered parameter, ${\bm{x}}^*_2$, is a departure from this level-set. Since  applies only for true parameter settings on the bisecting line of the positive quadrant, the OSB intervals (and therefore the HSB intervals as well) do not have a coverage guarantee for this third setting. By contrast, the SSB, MQ, and OQ intervals have provable coverage guarantees in this scenario. For each parameter value and all intervals, coverage and expected length are estimated by drawing $10^3$ observations from the data generating process, computing all interval types for each generated observation, and then checking coverage and length. The coverage confidence intervals are $95\%$ Clopper-Pearson intervals for a binomial parameter, whereas the length confidence intervals are standard asymptotic Gaussian intervals using sample means and standard errors.

Although the OSB and HSB intervals do not have provable coverage for all scenarios, the left side of  demonstrates their empirical validity for all parameter settings. The left side of  also illustrates the primary coverage advantage of the HSB intervals over both OSB and SSB for the single functional of interest across all tested true parameter values. Similarly, the right side of  shows minor expected length improvements for HSB over OSB and SSB, especially when the true parameter is at the origin. The reason for these advantages becomes clearer when considering the histograms in , which show the distributions of the OSB constraint, $\psi_{\mathsf{OSB}, \alpha}^2 = \chi^2_{1, \alpha} + s^2({\bm{y}})$, versus the HSB constraint, $\psi_{\mathsf{HSB}, \alpha}^2 = \min \{\psi_{\mathsf{OSB}, \alpha}^2, Q_{\chi^2_2}(1 - \alpha) \}$. The OSB constraint has a long tail, occurring when the observed ${\bm{y}}$ is far from the known parameter constraints. This long tail is clipped for the HSB intervals whenever the OSB constraint exceeds the simultaneous quantile. As previously mentioned, a failure mode of the SSB intervals is that the feasible region is empty if an improbable enough ${\bm{y}}$ is observed, in which case we simply define the trivial interval where both endpoints equal the plug-in estimate of the functional at the maximum likelihood estimate.

![ Estimated interval coverage and expected lengths for $95\%$ intervals resulting from the SSB, OSB, HSB, MQ, and OQ interval settings. Of the shown intervals, SSB and MQ are provably valid, and HSB is valid if and only if OSB is valid, which the numerical evidence strongly suggests, and yet the MQ intervals have better coverage **(left)** and length **(right)** across the three true parameter settings as compared to OSB. ](figures/images_2d/2DFigureAfterMask.png){#fig:2d_coverage_and_length width="0.9\\columnwidth"}

![ For each true parameter setting, we show the distributions of the OSB and HSB constraints. When the observation ${\bm{y}}$ is far from the parameter constraints, the OSB constraint becomes large, resulting in a long tail in all cases.
Since the HSB constraint can be at most $Q_{\chi^2_2}(1 - \alpha)$, the HSB intervals have a clear size advantage in these low probability events. ](figures/images_2d/2d_hsb_v_osb.png){#fig:2d_quantiles width="0.95\\columnwidth"}

![image](figures/images_2d/quantile_heatmap.png){width="0.3\\columnwidth"}

As expected, the OQ intervals have the best coverage and length as shown in . Among the intervals with provable coverage across all parameter settings, the MQ intervals achieve better coverage than OSB, and better length than OSB and HSB. These performance improvements are due to the conservative nature of the $\chi^2_1$ distribution used for both the OSB and HSB intervals, whereas the MQ intervals take advantage of smaller $(1 - \alpha)$ quantiles over the defined bounded region, $\mathcal{B}$. The estimated $95$th quantiles over $\mathcal{B}$ are shown in . More precisely, while $\chi^2_{1, 0.05} \approx 3.84$, we obtain $\widehat{Q}^u = 3.48$, attained towards the upper right corner of . With even more information about $\mathcal{B}$, we could obtain even tighter intervals due to the diminishing quantiles towards the origin.

## Constrained Gaussian in three dimensions {#subsec:3d_numerics}

Using the proposed three-dimensional counterexample from , we provide additional numerical evidence that the OSB $68\%$ intervals under-cover, along with contrasting evidence that the MQ $68\%$ intervals cover slightly above the nominal level. To provide additional context, we provide the estimated coverage and expected lengths for the SSB intervals and the intervals proposed in Corollary 1(ii) of [@tenorio2007confidence], both of which have provable coverage at all true parameter values for this problem setup. We refer to the intervals from [@tenorio2007confidence] as "Transformed" as they guarantee coverage by transforming the problem into a two-dimensional problem and then computing valid SSB intervals on the transformed two-dimensional problem with $Q_{\chi^2_2}(1 - \alpha)$. We also consider OSB, MQ, and OQ intervals for this scenario. The oracle quantile defining the OQ intervals is found in the same way as described in , simply adapted to this scenario. Note that we do not consider the HSB intervals for this scenario since they are a function of the OSB intervals, which are invalid.

To define the MQ intervals, we define $\mathcal{B} = \left\{{\bm{x}}^*(t) : 0 \leq t \leq e \right\}$, where ${\bm{x}}^*(t) = \begin{pmatrix}t & t & 1 \end{pmatrix}^\top$. In other words, we consider a line moving away from the true parameter value at ${\bm{x}}^* = \begin{pmatrix}0 & 0 & 1 \end{pmatrix}^\top$. The estimated $68\%$ quantiles for the LLR test statistic are shown in  along with $95\%$ nonparametric confidence intervals for percentiles [@hahn_meeker].  provides additional evidence and intuition as to why the Burrus conjecture fails in this scenario: since the estimated quantile curve over $\mathcal{B}$ exceeds $\chi^2_{1, 0.32}$ for some values of $t$, $\chi^2_1$ does not stochastically dominate the LLR test statistic distribution. Note how the dominance violation occurs near the boundary (and at the boundary, as shown by the estimated interval coverage in ). We have generally observed that non-trivial quantile behavior (and therefore coverage behavior) occurs near the boundary, in contrast to quantile behavior far from the boundary where the problem is essentially unconstrained.
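A rough Monte Carlo sketch of the quantile curve just described (ours, for intuition only; the sample size and the grid of $t$ values are arbitrary choices) is given below: it estimates the 68th percentile and the mean of the constrained LLR at ${\bm{x}}^*(t)$ and compares them with the corresponding $\chi^2_1$ values.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
h, alpha = np.array([1.0, 1.0, -1.0]), 0.32              # 68% level, as in the text
chi2_q = stats.chi2(df=1).ppf(1 - alpha)

def llr(y, mu_star):
    """Constrained LLR: null {x >= 0, h^T x = mu_star} versus alternative {x >= 0}."""
    free = np.sum(np.minimum(y, 0.0) ** 2)               # min_{x >= 0} ||x - y||^2 in closed form
    x0 = np.array([mu_star, 0.0, 0.0]) if mu_star >= 0 else np.array([0.0, 0.0, -mu_star])
    res = optimize.minimize(lambda x: np.sum((x - y) ** 2), x0, method="SLSQP",
                            bounds=[(0.0, None)] * 3,
                            constraints=[{"type": "eq", "fun": lambda x: h @ x - mu_star}])
    return res.fun - free

n = 2000
for t in [0.0, 0.25, 0.5, 1.0, 2.0]:
    x_star = np.array([t, t, 1.0])
    mu_star = h @ x_star
    lam = np.array([llr(x_star + rng.normal(size=3), mu_star) for _ in range(n)])
    print(f"t = {t:4.2f}: 68th percentile of LLR = {np.quantile(lam, 1 - alpha):.3f} "
          f"(chi^2_1: {chi2_q:.3f}), mean = {lam.mean():.3f} (chi^2_1 mean: 1)")
```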
![image](figures/images_3d/parameterized_68th_qs_10k_samp.png){width="0.5\\columnwidth"}

Similar to the experiment setup in , coverage and expected length for each interval are estimated by sampling $10^3$ observations from the data generating process defined in , computing all interval types for each data draw, checking coverage with respect to the true functional value, and computing all interval lengths. The results are shown in , along with $95\%$ Clopper-Pearson intervals for coverage values and $95\%$ asymptotic Gaussian intervals for expected length values. As predicted by , the OSB intervals under-cover at the $68\%$ confidence level, thus invalidating the Burrus conjecture. By contrast, the SSB, Transformed, and MQ intervals all have provable coverage in this scenario, which is reflected in the estimated coverage values in . Notably, both the SSB and Transformed intervals are conservative, whereas the MQ intervals are nearly at the nominal level as a result of the assumed form of $\mathcal{B}$. Namely, we assume substantial knowledge of $\mathcal{B}$, which is why the MQ coverage and length are nearly the same as those of the OQ interval. Although our assumed knowledge of $\mathcal{B}$ is practically unreasonable, the validity of the MQ interval is guaranteed as long as $\mathcal{B}$ contains the true parameter value, and this definition has the important benefit of providing an elucidating quantile visualization in . We therefore find this set sufficient to demonstrate the important point that MQ intervals are valid in this key scenario where the Burrus conjecture fails.

![ Using $10^3$ draws from the three-dimensional data generating process defined in , we estimate interval coverage and expected length for SSB, Transformed, OSB, MQ, and OQ intervals. Following , the left panel provides numerical evidence that the OSB intervals under-cover in this scenario, thus invalidating the Burrus conjecture. The Transformed and SSB intervals are included here as comparisons that are known to provide coverage at the desired confidence level. ](figures/images_3d/3DFigureAfterMaskSpaced.png){#fig:3d_coverage_length width="0.9\\columnwidth"}

# Discussion {#sec:discussion_and_conclusion}

This paper presents a framework for constructing confidence intervals with guaranteed frequentist coverage for one-dimensional functionals of forward model parameters in the presence of constraints. For the specific case of the Gaussian linear forward model with non-negativity constraints, we refute the Burrus conjecture [@burrus1965utilization] by providing a counterexample and propose a more generalized approach for interval construction. Our methodology hinges on the inversion of a specific likelihood ratio test, and we offer theoretical and practical insights into the properties of the constructed intervals via illustrative examples. Our framework is versatile, accommodating potentially non-linear and non-Gaussian settings.

At a high level, the practical effectiveness of UQ methods often depends on the (sometimes implicit) assumptions of the method. Different methods come into play depending on what we assume or know, be it the likelihood, the prior, or even the noise structure. In classical statistics, confidence intervals serve as a valuable tool for UQ, especially for one-dimensional quantities of interest. These intervals are constructed to offer guaranteed coverage under repeated sampling, aligning with frequentist principles.
While frequentist coverage guarantees are a useful criterion, especially in contexts where repeatability is essential, we acknowledge that the "best" UQ method is often context-dependent. For example, this frequentist approach is the most natural in applications like OCO-2 XCO2 retrievals ([@patil]), where repeatability is a key feature. Conversely, when a well-motivated scientific prior is available, Bayesian methods are natural and have desirable properties.

We conclude the paper by discussing a few possible directions for future work.

- **Data-adaptive sampling procedure.** We saw in Section 5 that, crucially, MQ is valid where OSB is not and can leverage smaller quantiles to produce tighter intervals. In the event one does not have the assumed true-parameter bounding box, it is possible to create a data-generated one and adjust the error budget accordingly. Such a procedure would expand the use of the MQ intervals to scenarios with unbounded parameter constraints.

- **Joint confidence sets for multiple functionals.** Since our framework is devised for one-dimensional functionals, its application to collections of functionals (i.e., a higher-dimensional quantity of interest) would be a natural and desirable extension. Trivially, given a collection of $K$ functionals, one could apply this methodology $K$ times and use the Bonferroni correction to adjust the confidence levels such that they all cover at the desired level. While this approach might be practically reasonable when $K$ is small, it becomes markedly inefficient as $K$ gets large. Furthermore, this approach would create a $K$-dimensional hyper-rectangle for the quantity of interest, which may not take known relationships between functionals into account. As such, extending the framework of  to simultaneously consider the $K$ functionals of interest would be the first step to creating a more nuanced approach.

- **Choice of test statistics beyond LLR.** The log-likelihood ratio test statistic considered in this work connects with the intervals proposed by Rust and Burrus and is observed to perform well in practice, but other choices can be explored in future work. While the LLR is a natural choice for the generic problem, improving the interval length on particular families of problems with different test statistics might be possible. Since the main theoretical machinery comes from the test inversion framework, which is independent of the actual form of the test statistic, alternate versions of  can be constructed as long as the test statistic yields valid *level*-$\alpha$ hypothesis tests; the resulting intervals could then be explored theoretically and numerically.

- **Generalization to simulation-based problems.** An extension of our methodology to settings in which the likelihood is not exactly known can be considered, ranging from only partial knowledge of the form of the likelihood to full simulation-based (likelihood-free) settings where the likelihood is not known explicitly but can be sampled. A possible avenue is to develop worst-case robust approaches with respect to possible likelihoods. In the fully likelihood-free setting, approaches such as [@dalmasso2020confidence; @heinrich2022learning; @waldo] provide ways to invert hypothesis tests to obtain confidence sets in these scenarios in multidimensional parameter spaces. These sets could produce a confidence interval for a functional of the model parameters as seen for the SSB intervals.
However, as we have explored, orienting the hypothesis test to the functional of interest can have dramatic length benefits for the resulting confidence interval (as seen for the OSB, HSB, and MQ intervals). Since the log-likelihood plays a key role in the definition of this methodology's intervals, extensions providing ways to relax that dependence would be a necessary first step.

Through these and related questions, we aim to broaden the applicability of our approach, contributing to the evolving landscape of uncertainty quantification for inverse problems in the physical sciences with constraints.

# Acknowledgements {#acknowledgements .unnumbered}

We warmly thank Larry Wasserman, Ann Lee, and other members of the STAMPS (Statistical Methods in the Physical Sciences) working group at Carnegie Mellon University for fruitful discussions on this work. We also warmly thank Andrew Stuart, George Karniadakis, and their groups at the California Institute of Technology and Brown University for useful comments and stimulating discussions. We are also grateful to Jon Hobbs and Amy Braverman at the Jet Propulsion Laboratory of the National Aeronautics and Space Administration for many discussions surrounding this work, for hosting visits by PP, MS, and MK, and for their kind hospitality. PB and HO gratefully acknowledge support by the Air Force Office of Scientific Research under MURI award number FA9550-20-1-0358 (Machine Learning and Physics-Based Modeling and Simulation), the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration, and Beyond Limits (Learning Optimal Models) through CAST (The Caltech Center for Autonomous Systems and Technologies). MS and MK were supported by NSF grants DMS-2053804 and PHY-2020295, JPL RSA No. 1689177 and a grant from the C3.AI Digital Transformation Institute.

**[Supplementary Material]{.ul}**

This document serves as a supplement to the paper entitled "Optimization-based frequentist confidence intervals for functionals in constrained inverse problems: Resolving the Burrus conjecture." We outline below the structure of the supplement and summarize key notation used both in this supplement and the main paper.

## Organization {#organization .unnumbered}

The content of this supplement is organized as follows.

**Appendix** & **Description**
 & Proofs of , , , and  (from )
 & Proofs of , , and  (from )
 & Proofs of , , and  (from )

## Notation {#notation .unnumbered}

A summary of the general notation used throughout this paper is as follows. (Any specific notation needed is explained in respective sections as necessary.)

**Notation** & **Description**
Non-bold lower or upper case & Denotes scalars (e.g., $\alpha$, $\mu$, $Q$).
Bold lower case & Denotes vectors (e.g., ${\bm{x}}$, ${\bm{y}}$, ${\bm{h}}$).
Bold upper case & Denotes matrices (e.g., ${\bm{K}}$, ${\bm{I}}$).
Calligraphic font & Denotes sets (e.g., $\mathcal{X}$, $\mathcal{C}$, $\mathcal{D}$).
$\mathbb{R}$ & Set of real numbers.
$\mathbb{R}_{\ge 0}$ & Set of non-negative real numbers.
$[n]$ & Set $\{1, \dots, n\}$ for a positive integer $n$.
$(x)_{+}$ & Positive part of a real number $x$.
$\bm{1}\{A\}$, $\mathbb{P}(A)$ & Indicator random variable associated with an event $A$ and probability of $A$.
$\mathbb{E}[X], \mathop{\mathrm{Var}}(X)$ & Expectation and variance of a random variable $X$.
$\langle \mathbf{u}, \mathbf{v} \rangle$ & The inner product of vectors $\mathbf{u}$ and $\mathbf{v}$.
$\| \mathbf{u} \|_{p}$ & The $\ell_p$ norm of a vector $\mathbf{u}$.
$\| f \|_{L_p}$ & The $L_p$ norm of a function $f$.
${\bm{u}}\le {\bm{v}}$ & Lexicographic ordering for vectors ${\bm{u}}$ and ${\bm{v}}$.
$\bm{A} \preceq \bm{B}$ & Loewner ordering for symmetric matrices $\bm{A}$ and $\bm{B}$.
$X \preceq Y$ & Stochastic dominance order for random variables $X$ and $Y$ (see  for details).
$Y = \mathcal{O}_\alpha(X)$ & Deterministic big-O notation, indicating that $Y$ is bounded by $| Y | \le C_\alpha X$.
$C_\alpha$ & A numerical constant that may depend on the ambient parameter $\alpha$ in the context.
$\mathcal{O}_p$ & Probabilistic big-O notation.
$\xrightarrow{\textup{d}}$ & Convergence in distribution.
$\xrightarrow{\textup{p}}$ & Convergence in probability.

A note about $\min/\max$ versus $\inf/\sup$: We use $\min/\max$ when the optimal value of an optimization problem is attained, otherwise we use $\inf/\sup$.

A note about uniqueness of optimization problems: When we express an equality involving the minimizer of an optimization problem, this is intended to signify a set inclusion. The guarantees presented in our paper are applicable to all solutions, and we make no distinction among multiple solutions.

# Proofs in  {#app:sec:data_gen_test_def_inv_arg_proofs}

## Proof of  {#proof:conf_set_coverage}

To prove the lemma, we need to show that the probability of $\mu^*$ being in the confidence set $\mathcal{C}_\alpha({\bm{y}})$ is at least $1 - \alpha$. Towards this end, observe that $$\begin{aligned}
\mathbb{P}_{{\bm{y}}\sim P_{{\bm{x}}^*}} (\mu^* \in \mathcal{C}_\alpha({\bm{y}})) &= \mathbb{P}_{{\bm{y}}\sim P_{{\bm{x}}^*}} ({\bm{y}}\in A_\alpha(\mu^*)) \\
&= 1 - \mathbb{P}_{{\bm{y}}\sim P_{{\bm{x}}^*}} ({\bm{y}}\notin A_\alpha(\mu^*)) \\
&\geq 1 - \sup_{{\bm{x}}\in \Phi_{\mu^*} \cap \mathcal{X}} \mathbb{P}_{{\bm{y}}\sim P_{{\bm{x}}}} ({\bm{y}}\notin A_\alpha(\mu^*)) \\
&\geq 1 - \alpha,\end{aligned}$$ as desired. This completes the proof.

## Proof of  {#proof:main}

Assume $\bar{\mathcal X}_\alpha({\bm{y}})$ is nonempty and write $\inf_{{\bm{x}}\in \mathcal X}/\sup_{{\bm{x}}\in \mathcal X} f({\bm{x}})$ for the interval $[\inf_{{\bm{x}}\in \mathcal X} f({\bm{x}}), \sup_{{\bm{x}}\in \mathcal X} f({\bm{x}})]$. Observe that: $$\mathcal{C}_\alpha({\bm{y}}) \subseteq \inf_{\mu \in \mathcal{C}_\alpha({\bm{y}})}/\sup_{\mu \in \mathcal{C}_\alpha({\bm{y}})} \mu = \mathcal{I}_\alpha({\bm{y}}).$$ From , $\mathcal{C}_\alpha({\bm{y}}) \subseteq \mathcal{I}_\alpha({\bm{y}})$ implies that $\mathcal{I}_\alpha({\bm{y}})$ is also a $1 - \alpha$ confidence interval. We prove this interval exactly equals the second in [\[intervaldef\]](#intervaldef){reference-type="eqref" reference="intervaldef"}. Unpacking the definition of $\mathcal{C}_\alpha({\bm{y}})$, we write the interval $$\label{opt1}
\begin{aligned}
\inf_\mu/\sup_{\mu} \quad & \mu \\
\mathop{\mathrm{subject\,\,to}}\quad & \mu \in \mathbb{R} \\
& -2\log\Lambda(\mu, {\bm{y}}) \leq q_\alpha(\mu).
\end{aligned}$$ We can write different optimization problems which are equivalent to the optimization problem [\[opt1\]](#opt1){reference-type="eqref" reference="opt1"}. First, we use the definition of $\Lambda$ to write: $$\label{opt2}
\begin{aligned}
\inf_\mu/\sup_{\mu} \quad & \mu \\
\mathop{\mathrm{subject\,\,to}}\quad & \mu \in \mathbb{R} \\
& \inf_{\varphi({\bm{x}}) = \mu, {\bm{x}}\in \mathcal X} -2\ell_{{\bm{x}}}({\bm{y}}) \leq q_\alpha(\mu) + \inf_{{\bm{x}}\in \mathcal X} -2\ell_{{\bm{x}}}({\bm{y}}).
\end{aligned}$$
Notice that the feasibility condition on $\mu$, namely $$\inf_{\varphi({\bm{x}}) = \mu, {\bm{x}}\in \mathcal X} -2\ell_{{\bm{x}}}({\bm{y}})\leq q_\alpha(\mu) + \inf_{{\bm{x}}\in \mathcal X} -2\ell_{{\bm{x}}}({\bm{y}}),$$ can be rewritten as the existence of ${\bm{x}}\in \mathcal X$ such that $\varphi({\bm{x}}) = \mu$ and $$-2\ell_{{\bm{x}}}({\bm{y}}) \leq q_\alpha(\mu) + \inf_{{\bm{x}}\in \mathcal X} -2\ell_{{\bm{x}}}({\bm{y}}).$$ Therefore, the optimization problem can be rewritten with ${\bm{x}}$ and $\mu$ as the optimization variables: $$\label{opt3}
\begin{aligned}
\inf_{\mu, {\bm{x}}}/\sup_{\mu, {\bm{x}}} \quad & \mu \\
\mathop{\mathrm{subject\,\,to}}\quad & {\bm{x}}\in \mathcal X,~ \mu \in \mathbb{R} \\
& \varphi({\bm{x}}) = \mu \\
& -2\ell_{{\bm{x}}}({\bm{y}}) \leq q_\alpha(\mu) + \inf_{{\bm{x}}\in \mathcal X} -2\ell_{{\bm{x}}}({\bm{y}}).
\end{aligned}$$ And $\mu$ can be eliminated using the constraint, yielding $$\begin{aligned}
\inf_{{\bm{x}}}/\sup_{{\bm{x}}} \quad & \varphi({\bm{x}}) \\
\mathop{\mathrm{subject\,\,to}}\quad & {\bm{x}}\in \mathcal X\\
& -2\ell_{{\bm{x}}}({\bm{y}}) \leq q_\alpha(\varphi({\bm{x}}))+ \inf_{{\bm{x}}\in \mathcal X} -2\ell_{{\bm{x}}}({\bm{y}}),
\end{aligned}$$ i.e., $\inf_{{\bm{x}}\in \bar{\mathcal X}_\alpha({\bm{y}})}/\sup_{{\bm{x}}\in \bar{\mathcal X}_\alpha({\bm{y}})} \varphi({\bm{x}})$. The choice when $\bar{\mathcal X}_\alpha({\bm{y}})$ is empty does not affect coverage properties. This finishes the proof.

## Proof of  {#proof:main_converse}

We have, by definition and test inversion, that $q_\alpha(\mu)$ are valid if and only if $$\mathcal{C}_\alpha({\bm{y}}) := \{\mu : \lambda(\mu, {\bm{y}}) \leq q_\alpha(\mu)\}$$ is a valid $1-\alpha$ confidence set for any ${\bm{x}}\in \mathcal{X}$. Since $\mathcal{I}_\alpha({\bm{y}})$ is the smallest interval that contains $\mathcal{C}_\alpha({\bm{y}})$, we aim to prove that $\mathcal{C}_\alpha({\bm{y}})$ is already an interval (including singletons or empty sets), so that $\mathcal{C}_\alpha({\bm{y}}) = \mathcal{I}_\alpha({\bm{y}})$ and the result holds. Define the function $$\mu \mapsto \mathcal{F}(\mu) = \inf_{\substack{\varphi({\bm{x}}) = \mu \\ {\bm{x}}\in \mathcal{X}}} -2\ell_{\bm{x}}({\bm{y}})$$ for a given ${\bm{y}}$, defined for all $\mu$ such that $\Phi_\mu \cap \mathcal X \neq \emptyset$. Writing $\mathcal{C}_\alpha({\bm{y}})$ explicitly using [\[eq:log_likelihood_ratio_full\]](#eq:log_likelihood_ratio_full){reference-type="eqref" reference="eq:log_likelihood_ratio_full"}, we get $$\mathcal{C}_\alpha({\bm{y}}) := \Bigg \{\mu : \underset{\substack{\varphi({\bm{x}}) = \mu \\ {\bm{x}}\in \mathcal{X}}} {\text{inf}} \; -2\ell_{{\bm{x}}}({\bm{y}})- \underset{{\bm{x}}\in \mathcal{X}}{\text{inf}} \; -2 \ell_{{\bm{x}}}({\bm{y}}) \leq q_\alpha \Bigg \}.$$ The second term on the left-hand side does not depend on $\mu$, so it is enough to prove that any set of the form $\{\mu : \mathcal{F}(\mu) \leq z\}$ is an interval, which is implied by the function $\mathcal{F}(\mu)$ being convex in $\mu$ (for a fixed ${\bm{y}}$). (Indeed, if such a set is not an interval, there exist $\mu^- < \mu < \mu^+$ with $\mathcal{F}(\mu^-), \mathcal{F}(\mu^+) \leq z$ and $\mathcal{F}(\mu) > z$; writing $\mu = \gamma \mu^- + (1-\gamma)\mu^+$ for some $\gamma \in (0,1)$, this contradicts convexity since $$\mathcal{F}(\mu) > z \geq \gamma \mathcal{F}(\mu^-) + (1-\gamma)\mathcal{F}(\mu^+) \geq \mathcal{F}(\mu).)$$ To see convexity and finish the proof, let $\mu_1 \neq \mu_2$ and let $\mathcal{G}({\bm{x}}) := -2\ell_{\bm{x}}({\bm{y}})$, a convex function by assumption.
Write for $i = 1,2$: $$x_i \in \mathop{\mathrm{\arg\!\min}}_{\substack{\varphi({\bm{x}}) = \mu \relax {\bm{x}}\in \mathcal{X}}} -2\ell_{\bm{x}}({\bm{y}}),$$ with $x_i$ being any possible element in the set of argminizers, so that $\mathcal{F}(\mu_i) = \mathcal{G}({\bm{x}}_i)$. For any $0 < \gamma < 1$, $\gamma {\bm{x}}_1 + (1-\gamma) {\bm{x}}_2 \in \mathcal X$ since $\mathcal X$ is a convex cone and $$\varphi(\gamma {\bm{x}}_1 + (1-\gamma) {\bm{x}}_2) = \gamma \mu_1 + (1-\gamma)\mu_2,$$ since $\varphi$ is linear, so $\gamma {\bm{x}}_1 + (1-\gamma) {\bm{x}}_2$ is a feasible point of the optimization problem $$\inf_{\substack{\varphi({\bm{x}}) = \gamma \mu_1 + (1-\gamma)\mu_2 \relax {\bm{x}}\in \mathcal{X}}} -2\ell_{\bm{x}}({\bm{y}}),$$ that has optimal value $\mathcal{F}(\gamma\mu_1 + (1-\gamma)\mu_2)$. Therefore, by convexity of $\mathcal G$ and definition of the ${\bm{x}}_i$, we have that: $$\mathcal{F}(\gamma\mu_1 + (1-\gamma)\mu_2) \leq \mathcal G(\gamma {\bm{x}}_1 + (1-\gamma) {\bm{x}}_2) \leq \gamma \mathcal G({\bm{x}}_1) + (1-\gamma) \mathcal{G}({\bm{x}}_2) = \gamma \mathcal F(\mu_1) + (1-\gamma) \mathcal{F}(\mu_2).$$ This completes the proof. ## Proof of {#app:proof-lemma:1d_cdf_part1} Since the case when $\mu^* = 0$ is of particular interest, we show the result in this specific case and then generalize to the case of $\mu^* > 0$. #### [Case of $\mu^*$ = 0]{.ul}. When $\mu^* = 0$, we can argue from symmetry of the standard Gaussian about the origin to write down the CDF in closed form. For $c \geq 0$, we have $$\begin{aligned} \mathbb{P}_{\mu_0} \left(\ell_0 \leq c \right) &= \mathbb{P}_{\mu_0} \left(\ell_0 \leq c, y < 0 \right) + \mathbb{P}_{\mu_0} \left(\ell_0\leq c, y \geq 0 \right) \nonumber \relax &= \mathbb{P}_{\mu_0} \left(\ell_0 \leq c \mid y < 0 \right)\mathbb{P}_{\mu_0}(y < 0) + \mathbb{P}_{\mu_0} \left(\ell_0 \leq c \mid y \geq 0 \right)\mathbb{P}_{\mu_0}(y \geq 0). \label{eq:tot_prob_form}\end{aligned}$$ By definition, $\mathbb{P}_{\mu_0}(y < 0) = \mathbb{P}_{\mu_0}(y \geq 0) = \frac{1}{2}$, so only the conditional probabilities remain. By [\[eq:1d_log_lr\]](#eq:1d_log_lr){reference-type="eqref" reference="eq:1d_log_lr"}, we have $$\begin{aligned} \mathbb{P}_{\mu_0} \left(\ell_0 \leq c \mid y < 0 \right) &= \mathbb{P}_{\mu_0} \left(0 \leq c \mid y < 0 \right) = 1 \nonumber \relax \mathbb{P}_{\mu_0} \left(\ell_0 \leq c \mid y \geq 0 \right) &= \mathbb{P}_{\mu_0} \left(y^2 \leq c \mid y \geq 0 \right). \label{eq:cond_cdf_2}\end{aligned}$$ In [\[eq:cond_cdf_2\]](#eq:cond_cdf_2){reference-type="eqref" reference="eq:cond_cdf_2"}, we immediately observe that $$\mathbb{P}_{\mu_0} \left(y^2 \leq c \mid y \geq 0 \right) = \mathbb{P}_{\mu_0} \left(y^2 \leq c, y \geq 0 \right) \mathbb{P}_{\mu_0} \left(y \geq 0 \right)^{-1} = 2 \mathbb{P}_{\mu_0} \left(0 \leq y \leq \sqrt{c} \right) = 2 \Phi(\sqrt{c}) - 1.$$ But we also have that $$\mathbb{P}_{\mu_0} \left(y^2 \leq c \right) = \mathbb{P}_{\mu_0} \left(-\sqrt{c} \leq y \leq \sqrt{c} \right) = 2 \Phi(\sqrt{c}) - 1.$$ So we have $$\mathbb{P}_{\mu_0} \left(y^2 \leq c \mid y \geq 0 \right) = \mathbb{P}_{\mu_0} \left(y^2 \leq c \right).$$ Hence, we obtain $$\mathbb{P}_{\mu_0} \left(\ell_0 \leq c \mid y \geq 0 \right) = \chi^2_1(c).$$ Note this independence on the sign of $y$ means that the magnitude of $y$ is statistically independent of its direction. Thus, when $\mu_0 = 0$, the log-likelihood ratio has the following distribution: $$\ell_0 \sim \frac{1}{2} + \frac{1}{2} \chi^2_1.$$ This completes the case when $\mu^* = 0$. 
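As a quick numerical sanity check of this mixture form (an illustration only, not used in the argument), one can simulate the one-dimensional model $y \sim \mathcal{N}(\mu_0, 1)$ with the constraint $\mu \geq 0$, in which case, consistent with the case analysis above, $\ell_0 = (y - \mu_0)^2 - \min\{y, 0\}^2$, and compare the empirical CDF at $\mu_0 = 0$ with $\tfrac12 + \tfrac12 \chi^2_1(c)$.

```python
# Monte Carlo sketch (not part of the proof): at mu0 = 0 the LLR
# ell_0 = (y - mu0)^2 - min(y, 0)^2 should have CDF 1/2 + 1/2 * chi2_1(c).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu0 = 0.0
y = rng.normal(mu0, 1.0, size=200_000)
ell0 = (y - mu0) ** 2 - np.minimum(y, 0.0) ** 2

for c in [0.5, 1.0, 2.0, 4.0]:
    empirical = np.mean(ell0 <= c)
    predicted = 0.5 + 0.5 * stats.chi2.cdf(c, df=1)
    print(f"c={c:4.1f}  empirical={empirical:.4f}  predicted={predicted:.4f}")
```

The point mass at zero comes from the event $\{y < 0\}$, on which the constrained and unconstrained fits coincide.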
#### [Case of $\mu^* > 0$]{.ul}. When $\mu > 0$, the closed-form solution to the CDF of $\ell_0$ becomes more complicated, as we can no longer use symmetry around the origin. Picking up at [\[eq:tot_prob_form\]](#eq:tot_prob_form){reference-type="eqref" reference="eq:tot_prob_form"}, we first note that when $y \sim \mathcal{N}(\mu_0, 1)$, we have $$\begin{aligned} \mathbb{P}_{\mu_0} \left(y < 0 \right) = \Phi(- \mu_0) \quad \text{and} \quad \mathbb{P}_{\mu_0} \left(y \geq 0 \right) = \Phi(\mu_0).\end{aligned}$$ Next, we must find the conditional probabilities. Starting with the case when $\{y < 0 \}$, we obtain $$\begin{aligned} \mathbb{P}_{\mu_0} \left((y - \mu_0)^2 - y^2 \leq c \mid y < 0 \right) &= \mathbb{P}_{\mu_0} \left(-2 y \mu_0 + \mu_0^2 \leq c \mid y < 0 \right) \nonumber \relax &= \mathbb{P}_{\mu_0} \left(y \geq \frac{\mu_0^2 - c}{2 \mu_0} \mid y < 0 \right) \nonumber \relax &= \Phi(- \mu_0)^{-1} \mathbb{P}_{\mu_0} \left(y \geq \frac{\mu_0^2 - c}{2 \mu_0}, y < 0 \right) \nonumber \relax &= \Phi(- \mu_0)^{-1} \left\{ 0 \cdot \bm{1} \{c \leq \mu_0^2 \} + \mathbb{P}_{\mu_0} \left( \frac{\mu_0^2 - c}{2 \mu_0} \leq y \leq 0 \right) \bm{1} \{c > \mu_0^2 \} \right\} \nonumber \relax &= \Phi(- \mu_0)^{-1} \mathbb{P}_{\mu_0} \left( \frac{-\mu_0^2 - c}{2 \mu_0} \leq y - \mu_0 \leq - \mu_0 \right) \bm{1} \{ c > \mu_0^2 \} \nonumber \relax &= \Phi(- \mu_0)^{-1} \left\{\Phi(- \mu_0) - \Phi\left(\frac{-\mu_0^2 - c}{2 \mu_0} \right) \right\}\bm{1} \{ c > \mu_0^2 \}. \label{eq:cond_y_l_0}\end{aligned}$$ Then, when $\{y \geq 0 \}$, we have $$\begin{aligned} \mathbb{P}_{\mu_0} \left((y - \mu_0)^2 \leq c \mid y \geq 0 \right) &= \mathbb{P}_{\mu_0} \left(- \sqrt{c} \leq y - \mu_0 \leq \sqrt{c} \mid y \geq 0 \right) \nonumber \relax &= \Phi(\mu_0)^{-1} \mathbb{P}_{\mu_0} \left(- \sqrt{c} \leq y - \mu_0 \leq \sqrt{c}, y \geq 0 \right) \nonumber \relax &= \Phi(\mu_0)^{-1} \mathbb{P}_{\mu_0} \left(0 \leq y \leq \sqrt{c} + \mu_0 \right) \bm{1} \{ - \sqrt{c} + \mu_0 \leq 0 \} \nonumber \relax &\quad + \Phi(\mu_0)^{-1} \mathbb{P}_{\mu_0} \left(-\sqrt{c} + \mu_0 \leq y \leq \sqrt{c} + \mu_0 \right) \bm{1} \{ - \sqrt{c} + \mu_0 > 0 \} \nonumber \relax &= \Phi(\mu_0)^{-1} \left\{ \left( \Phi(\sqrt{c}) - \Phi(- \mu_0) \right) \bm{1} \{ c \geq \mu_0^2 \} + (2 \Phi(\sqrt{c}) - 1) \bm{1} \{ c < \mu_0^2 \} \right\}. \label{eq:cond_y_geq_0}\end{aligned}$$ Putting together [\[eq:cond_y\_l_0\]](#eq:cond_y_l_0){reference-type="eqref" reference="eq:cond_y_l_0"} and [\[eq:cond_y\_geq_0\]](#eq:cond_y_geq_0){reference-type="eqref" reference="eq:cond_y_geq_0"}, we obtain the following CDF: $$\mathbb{P}_{\mu_0} \left( \ell_0 \leq c \right) = \chi^2_1(c) \cdot \bm{1} \{ c < \mu_0^2 \} + \left\{ \Phi(\sqrt{c}) - \Phi\left( \frac{-\mu_0^2 - c}{2 \mu_0} \right) \right\} \cdot \bm{1} \{ c \geq \mu_0^2 \}.$$ This completes the case of $\mu^* > 0$. ## Proof of {#proof:unconstrchi2} We derive this result using a duality argument inspired by [@gourieroux]. By definition, we have $$\label{eq:log_like_def} \lambda(\mu^*, {\bm{y}}) = \min_{{\bm{x}}: \theta({\bm{x}}) = \mu^*} \lVert {\bm{y}}- {\bm{K}}{\bm{x}}\rVert_2^2 - \min_{{\bm{x}}} \lVert {\bm{y}}- {\bm{K}}{\bm{x}}\rVert_2^2.$$ For ease of notation, let $\widehat{{\bm{x}}}^* = \underset{{\bm{x}}: {\bm{h}}^\top{\bm{x}}= \mu^*}{\text{argmin}} \; \lVert {\bm{y}}- {\bm{K}}{\bm{x}}\rVert_2^2$. 
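As a numerical aside (a sketch under the assumption that ${\bm{K}}$ has full column rank, with arbitrary simulated data; it is not part of the argument), both minimizers can be computed directly, and the resulting value of $\lambda(\mu^*, {\bm{y}})$ can be compared against the closed form $({\bm{h}}^\top\widehat{{\bm{x}}} - \mu^*)^2 / ({\bm{h}}^\top({\bm{K}}^\top{\bm{K}})^{-1}{\bm{h}})$ obtained at the end of this proof.

```python
# Illustrative check (not the paper's code): compute both least-squares
# problems in the LLR directly and compare with the closed form derived below.
import numpy as np

rng = np.random.default_rng(1)
m, p = 12, 5
K = rng.normal(size=(m, p))                  # assumed to have full column rank
h = rng.normal(size=p)
y = K @ rng.normal(size=p) + rng.normal(size=m)
mu_star = 0.3                                # arbitrary hypothesized value

# unconstrained minimizer \hat{x}
x_hat, *_ = np.linalg.lstsq(K, y, rcond=None)

# \hat{x}^*: minimizer subject to h^T x = mu*, via its KKT linear system
KtK = K.T @ K
kkt = np.block([[2 * KtK, h[:, None]], [h[None, :], np.zeros((1, 1))]])
rhs = np.concatenate([2 * K.T @ y, [mu_star]])
x_hat_star = np.linalg.solve(kkt, rhs)[:p]

llr_direct = np.sum((y - K @ x_hat_star) ** 2) - np.sum((y - K @ x_hat) ** 2)
llr_closed = (h @ x_hat - mu_star) ** 2 / (h @ np.linalg.solve(KtK, h))
print(llr_direct, llr_closed)                # agree up to numerical rounding
```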
Consider the Lagrangian for the first optimization in [\[eq:log_like_def\]](#eq:log_like_def){reference-type="eqref" reference="eq:log_like_def"}: $$\label{eq:lagrangian_unconstr} L({\bm{x}}, \lambda) = \lVert {\bm{y}}- {\bm{K}}{\bm{x}}\rVert^2_2 + \lambda ({\bm{h}}^\top{\bm{x}}- \mu^*).$$ First-order optimality allows solving for $\widehat{{\bm{x}}}^*$ as a function of the dual variable $\lambda$: $$\begin{aligned} \nabla_{{\bm{x}}} L({\bm{x}}, \lambda) &= -2 {\bm{K}}^\top({\bm{y}}- {\bm{K}}{\bm{x}}) + \lambda {\bm{h}}= 0 \relax &\implies -2 {\bm{K}}^\top{\bm{y}}+ 2 {\bm{K}}^\top{\bm{K}}{\bm{x}}+ \lambda {\bm{h}}= 0 \relax &\implies \widehat{{\bm{x}}}^* = ({\bm{K}}^\top{\bm{K}})^{-1} {\bm{K}}^\top{\bm{y}}- \frac{1}{2} \lambda ({\bm{K}}^\top{\bm{K}})^{-1} {\bm{h}}\relax &\implies \widehat{{\bm{x}}}^* = \widehat{{\bm{x}}} - \frac{1}{2} \lambda ({\bm{K}}^\top{\bm{K}})^{-1} {\bm{h}}.\end{aligned}$$ Substituting back into the LLR, we obtain $$\begin{aligned} \lambda(\mu^*, {\bm{y}}) &= \lVert {\bm{y}}- {\bm{K}}\widehat{{\bm{x}}}^* \rVert_2^2 - \lVert {\bm{y}}- {\bm{K}}\widehat{{\bm{x}}} \rVert_2^2 \nonumber \relax &= \lVert {\bm{y}}- {\bm{K}}\widehat{{\bm{x}}} + \frac{1}{2} \lambda {\bm{K}}({\bm{K}}^\top{\bm{K}})^{-1} {\bm{h}}\rVert^2_2 - \lVert {\bm{y}}- {\bm{K}}\widehat{{\bm{x}}} \rVert_2^2. \label{eq:reduction_log_lik}\end{aligned}$$ Performing some algebra, we note that $$\begin{aligned} \lVert {\bm{y}}- {\bm{K}}\widehat{{\bm{x}}} + \frac{1}{2} \lambda {\bm{K}}({\bm{K}}^\top{\bm{K}})^{-1} {\bm{h}}\rVert^2_2 &= \lVert {\bm{y}}- {\bm{K}}\widehat{{\bm{x}}} \rVert_2^2 \relax & \quad + \lambda ({\bm{y}}- {\bm{K}}\widehat{{\bm{x}}})^\top{\bm{K}}({\bm{K}}^\top{\bm{K}})^{-1} {\bm{h}}+ \frac{1}{4} \lambda^2 {\bm{h}}^\top({\bm{K}}^\top{\bm{K}})^{-1} {\bm{h}}.\end{aligned}$$ Thus, we have $$\begin{aligned} \lambda ( {\bm{y}}- {\bm{K}}\widehat{{\bm{x}}} )^\top{\bm{K}}({\bm{K}}^\top{\bm{K}})^{-1} {\bm{h}}&= \lambda {\bm{y}}^\top{\bm{K}}({\bm{K}}^\top{\bm{K}})^{-1} {\bm{h}}- \lambda \widehat{{\bm{x}}}^\top{\bm{K}}^\top{\bm{K}}({\bm{K}}^\top{\bm{K}})^{-1} {\bm{h}}\relax &= \lambda \widehat{{\bm{x}}}^\top{\bm{h}}- \lambda \widehat{{\bm{x}}}^\top{\bm{h}}\relax &= 0.\end{aligned}$$ So the substitution in [\[eq:reduction_log_lik\]](#eq:reduction_log_lik){reference-type="eqref" reference="eq:reduction_log_lik"} can be further simplified such that: $$\label{eq:simplified_log_lik_unconstr} \lambda(\mu^*, {\bm{y}}) = \frac{1}{4} \lambda^2 {\bm{h}}^\top({\bm{K}}^\top{\bm{K}})^{-1} {\bm{h}}.$$ We now turn our attention to finding $\lambda$. Note that this optimization defining the Lagrangian [\[eq:lagrangian_unconstr\]](#eq:lagrangian_unconstr){reference-type="eqref" reference="eq:lagrangian_unconstr"} is convex with an affine equality constraint. Therefore, strong duality holds. We then define the dual function as follows: $$\begin{aligned} g(\lambda) &= \min_{{\bm{x}}} L({\bm{x}}, \lambda) = L(\widehat{{\bm{x}}}^*, \lambda) \nonumber \relax &= \lVert {\bm{y}}- {\bm{K}}\widehat{{\bm{x}}}^* \rVert^2_2 + \lambda ({\bm{h}}^\top\widehat{{\bm{x}}}^* - \mu^*) \nonumber \relax &= \lVert {\bm{y}}- {\bm{K}}\widehat{{\bm{x}}} + \frac{1}{2} \lambda {\bm{K}}({\bm{K}}^\top{\bm{K}})^{-1} {\bm{h}}\rVert^2_2 + \lambda \left({\bm{h}}^\top\widehat{{\bm{x}}} - \frac{1}{2} \lambda {\bm{h}}^\top({\bm{K}}^\top{\bm{K}})^{-1} {\bm{h}}- \mu^* \right). 
\end{aligned}$$ We note that we can make many of the same simplifications above to arrive at the simplified dual function: $$g(\lambda) = \lVert {\bm{y}}- {\bm{K}}\widehat{{\bm{x}}} \rVert_2^2 - \frac{1}{4} \lambda^2 {\bm{h}}^\top({\bm{K}}^\top{\bm{K}})^{-1} {\bm{h}}+ \lambda {\bm{h}}^\top\widehat{{\bm{x}}} - \lambda \mu^*.$$ To maximize $g(\lambda)$, we again use the following first-order optimality condition: $$\begin{aligned} \frac{d g}{d \lambda} &= - \frac{1}{2} \lambda {\bm{h}}^\top({\bm{K}}^\top{\bm{K}})^{-1} {\bm{h}}+ {\bm{h}}^\top\widehat{{\bm{x}}} - \mu^* = 0 \nonumber \relax \implies \widehat{\lambda} &= \frac{2 \left( {\bm{h}}^\top\widehat{{\bm{x}}} - \mu^* \right)}{{\bm{h}}^\top({\bm{K}}^\top{\bm{K}})^{-1} {\bm{h}}}. \label{eq:lambda_hat_unconstr}\end{aligned}$$ Substituting [\[eq:lambda_hat_unconstr\]](#eq:lambda_hat_unconstr){reference-type="eqref" reference="eq:lambda_hat_unconstr"} back into [\[eq:simplified_log_lik_unconstr\]](#eq:simplified_log_lik_unconstr){reference-type="eqref" reference="eq:simplified_log_lik_unconstr"}, we obtain $$\begin{aligned} \lambda(\mu^*, {\bm{y}}) &= \frac{1}{4} \left(\frac{2 \left( {\bm{h}}^\top\widehat{{\bm{x}}} - \mu^* \right)}{{\bm{h}}^\top({\bm{K}}^\top{\bm{K}})^{-1} {\bm{h}}} \right)^2 {\bm{h}}^\top({\bm{K}}^\top{\bm{K}})^{-1} {\bm{h}}\relax &= \frac{({\bm{h}}^\top\widehat{{\bm{x}}} - \mu^*)^2}{{\bm{h}}^\top({\bm{K}}^\top{\bm{K}})^{-1} {\bm{h}}}.\end{aligned}$$ For the second part, observe that when ${\bm{y}}\sim \mathcal{N}({\bm{K}}{\bm{x}}^*, {\bm{I}}_m)$, we have $${\bm{h}}^\top({\bm{K}}^\top{\bm{K}})^{-1} {\bm{K}}^\top{\bm{y}}\sim \mathcal{N}({\bm{h}}^\top{\bm{x}}^*, {\bm{h}}^\top ({\bm{K}}^\top{\bm{K}})^{-1} {\bm{h}}),$$ hence [\[eq:test_stat_eq_unconstr\]](#eq:test_stat_eq_unconstr){reference-type="eqref" reference="eq:test_stat_eq_unconstr"} is the square of a one-dimensional standard Gaussian random variable. This finishes the proof. # Proofs in {#app:sec:interval_methodology_proofs} ## Proof for {#sec:proof-subsec:stochastic_dominance} Let $Y := \lambda ({\bm{y}}, \mu^*)$. Recall that the validity of $q_\alpha$ can be written as $\mathbb{P}(Y\leq q_\alpha) \geq 1-\alpha$ from [\[eq:type1_err_c\_alpha\]](#eq:type1_err_c_alpha){reference-type="eqref" reference="eq:type1_err_c_alpha"}. Then: $$\begin{aligned} X \succeq Y &\iff \mathbb{P}(X\geq \gamma) \geq \mathbb{P}(Y \geq \gamma), \; \; \text{for all } \gamma \relax &\iff \alpha = \mathbb{P}(X\geq Q_X(1-\alpha)) \geq \mathbb{P}(Y\geq Q_X(1-\alpha)), \; \; \text{for all } \alpha \relax &\iff 1-\alpha = \mathbb{P}(X\leq Q_X(1-\alpha))\leq \mathbb{P}(Y \leq Q_X(1-\alpha)), \; \; \text{for all } \alpha \relax &\iff Q_X(1-\alpha) \text{ is a valid } q_\alpha.\end{aligned}$$ This finishes the proof. ## Proof of {#sec:proof:lem:consistency_Qu} The strategy is to upper bound the failure probability with an expression that goes to zero as $M$ and $N$ grow to infinity. First, note that by the triangle inequality, we have $$\lvert Q^u - \max_{i \in [M]} \widehat{q}_i(N) \rvert \leq \lvert Q^u - \max_{i \in [M]} q_i \rvert + \lvert \max_{i \in [M]} q_i - \max_{i \in [M]} \widehat{q}_i(N) \rvert.$$ Here $q_i$ is the $(1 - \alpha)$ quantile of the LLR when the true parameter is at ${\bm{x}}_i$, and $\widehat{q}_i(N)$ is the corresponding estimated quantile via an order statistic, where the $N$ is included to more clearly show its dependence on the number of draws $N$.
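For concreteness, the sketch below shows one way the order-statistic estimates $\widehat{q}_i(N)$ and the resulting estimate of $Q^u$ could be computed. It is an illustration only; the helper `simulate_llr`, which stands in for a sampler of the LLR statistic under a given ${\bm{x}}_i$, is hypothetical and is stubbed here with a chi-square draw purely for demonstration.

```python
# Illustrative sketch of the plug-in quantile estimates \hat{q}_i(N)
# (not the paper's implementation).
import numpy as np

rng = np.random.default_rng(2)

def simulate_llr(x_i, N):
    # hypothetical placeholder: in practice this would draw y ~ P_{x_i}
    # N times and evaluate the LLR test statistic on each draw
    return rng.chisquare(df=1, size=N)

def estimated_quantile(samples, alpha):
    # the floor(N(1 - alpha))-th order statistic of the simulated sample
    order = np.sort(samples)
    k = int(np.floor(len(samples) * (1 - alpha)))
    return order[max(k - 1, 0)]

alpha, M, N = 0.05, 50, 4000
xs = [rng.normal(size=3) for _ in range(M)]       # sampled parameter points
q_hat = [estimated_quantile(simulate_llr(x, N), alpha) for x in xs]
Q_u_estimate = max(q_hat)                         # plug-in estimate of Q^u
print(Q_u_estimate)
```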
Next, since $$\lvert \max_{i \in [M]} q_i - \max_{i \in [M]} \widehat{q}_i(N) \rvert \leq \max_{i \in [M]} \lvert q_i - \widehat{q}_i(N) \rvert,$$ we have $$\lvert Q^u - \max_{i \in [M]} \widehat{q}_i(N) \rvert \leq \lvert Q^u - \max_{i \in [M]} q_i \rvert + \max_{i \in [M]} \lvert q_i - \widehat{q}_i(N) \rvert.$$ Define $\bm{q} \in \mathbb{R}^M$ to be the vector of true quantiles at the sampled ${\bm{x}}_i$'s and $\widehat{\bm{q}} \in \mathbb{R}^M$ the vector of estimated quantiles. Then, we have $$\max_{i \in [M]} \lvert q_i - \widehat{q}_i(N) \rvert = \lVert \bm{q} - \widehat{\bm{q}} \rVert_\infty \leq \lVert \bm{q} - \widehat{\bm{q}} \rVert_2.$$ Using the above upper bounds and Markov's inequality, for $\epsilon > 0$, yields: $$\begin{aligned} \mathbb{P} \left(\lvert Q^u - \max_{i \in [M]} \widehat{q}_i(N) \rvert > \epsilon \right) &\leq \mathbb{P} \left( \lvert Q^u - \max_{i \in [M]} q_i \rvert + \lVert \bm{q} - \widehat{\bm{q}} \rVert_2 > \epsilon \right) \relax &\leq \frac{\mathbb{E}[\lvert Q^u - \max_{i \in [M]} q_i \rvert] + \mathbb{E}[\lVert \bm{q} - \widehat{\bm{q}} \rVert_2] }{\epsilon},\end{aligned}$$ We next show $\mathbb{E}[\lvert Q^u - \max_{i \in [M]} q_i \rvert] \to 0$ and $\mathbb{E}[\lVert \bm{q} - \widehat{\bm{q}} \rVert_2] \to 0$ as $M, N \to \infty$. For the first of these two expectations, we start with the following: $$\mathbb{E}[\lvert Q^u - \max_{i \in [M]} q_i \rvert] = \int_0^{Q^u} \mathbb{P} \left(\lvert Q^u - \max_{i \in [M]} q_i \rvert > t \right) \; dt.$$ The probability within the integral can then be written as follows: $$\begin{aligned} \mathbb{P} \Big(\lvert Q^u - \max_{i \in [M]} q_i \rvert > t \Big) &= \mathbb{P} \Big( Q^u - \max_{i \in [M]} q_i > t \Big) \relax &= \mathbb{P} \Big( \max_{i \in [M]} q_i < Q^u - t \Big) \relax &= \prod_{i = 1}^M \mathbb{P} ( q_i < Q^u - t ) \relax % &= F(Q^u - t)^M,\end{aligned}$$ Every element in the product equals one minus the probability of having sampled a point inside the region $Q^{-1}(B_t(Q^u)) = \{{\bm{x}}: |Q({\bm{x}}) - Q^u| \leq t\}$, where $Q({\bm{x}})$ maps ${\bm{x}}$ to the quantile of the log-likelihood ratio test statistic when ${\bm{x}}$ is the null. Since $Q$ is continuous by assumption, using the $\epsilon-$ball definition $Q^{-1}(B_t(Q^u))$ contains a ball of a certain radius $\delta$ around $Q^{-1}(Q^u)$, which has positive Lebesgue measure. Therefore, $\mathbb{P} ( q_i < Q^u - t ) < 1$ and the term inside the integral $\mathbb{P} \Big(\lvert Q^u - \max_{i \in [M]} q_i \rvert > t \Big) \to 0$ as $M \to \infty$. Thus, by the Monotone Convergence Theorem, it follows that $\mathbb{E}[\lvert Q^u - \max_{i \in [M]} q_i \rvert] \to 0$ as $M \to \infty$. For the second expectation, Jensen's inequality yields: $$\mathbb{E}[\lVert \bm{q} - \widehat{\bm{q}} \rVert_2] \leq \sqrt{\mathbb{E}[\lVert \bm{q} - \widehat{\bm{q}} \rVert_2^2}] = \sqrt{\sum_{i = 1}^M \mathbb{E}[(q_i - \widehat{q_i}(N))^2]}.$$ Since the $\lfloor N (1 - \alpha) \rfloor$ order statistic is a consistent estimator of the $(1 - \alpha)$ quantile, we have $\mathbb{E}[q_i - \widehat{q_i}(N)] \to 0$ as $N \to \infty$. Therefore, the continuous mapping theorem implies that $\mathbb{E} [( q_i - \widehat{q_i}(N))^2] \to 0$ as $N \to \infty$. As such, it follows that for fixed $M$, $\mathbb{E}[\lVert \bm{q} - \widehat{\bm{q}} \rVert_2] \to 0$ as $N \to \infty$. We obtain the final result by assuming that $N$ can always be chosen larger than $M$, which means that the additive effect of $M$ in the upper bound can be overwhelmed by the choice of $N$. 
This is a reasonable assumption in practice, as the statistician is capable of choosing these integers in any desired way. This finishes the proof. ## Proof of {#sec:proof:lem:bbw} We first prove part (ii), and then prove part (i). #### [Part (ii)]{.ul}. Observe that the family of intervals $$\label{eq:nested-intervals} \mathcal{I}(c) := \begin{aligned} \inf_{{\bm{x}}}/\sup_{{\bm{x}}} \quad & \varphi({\bm{x}}) \relax \mathop{\mathrm{subject\,\,to}}\quad & \Vert {\bm{y}}- f({\bm{x}}) \Vert_2^2 \leq c \relax & {\bm{x}}\in \mathcal X \end{aligned}$$ is nested as a function of the parameter $c \ge 0$, i.e., $\mathcal{I}(c_1) \subseteq \mathcal{I}(c_2)$ if $0 \leq c_1 \leq c_2$. This is because the feasible set for the optimization problem in [\[eq:nested-intervals\]](#eq:nested-intervals){reference-type="eqref" reference="eq:nested-intervals"} is nested as a function of the parameter $c$. This proves part (ii). #### [Part (i)]{.ul}. For any given ${\bm{y}}$, note that $\mathcal{I}_{\mathsf{HSB}}({\bm{y}})$ equals either $\mathcal{I}_{\mathsf{LLR}}({\bm{y}})$ or $\mathcal{I}_{\mathsf{SSB}}({\bm{y}})$. We thus have $$\begin{aligned} \mathbb{P}(\varphi(x^*) \in \mathcal{I}_{\mathsf{HSB}}({\bm{y}})) &= \mathbb{P}(\varphi(x^*) \in \mathcal{I}_{\mathsf{HSB}}({\bm{y}}) \mid \mathcal{I}_{\mathsf{HSB}}({\bm{y}}) \relax &= \mathcal{I}_{\mathsf{LLR}}({\bm{y}})) \cdot \mathbb{P}(\mathcal{I}_{\mathsf{HSB}}({\bm{y}}) = \mathcal{I}_{\mathsf{LLR}}({\bm{y}})) \relax &\quad + \mathbb{P}(\varphi(x^*) \in \mathcal{I}_{\mathsf{HSB}}({\bm{y}}) \mid \mathcal{I}_{\mathsf{HSB}}({\bm{y}}) = \mathcal{I}_{\mathsf{SSB}}({\bm{y}})) \cdot \mathbb{P}(\mathcal{I}_{\mathsf{HSB}}({\bm{y}}) = \mathcal{I}_{\mathsf{SSB}}({\bm{y}})).\end{aligned}$$ Since $\mathcal{I}_{\mathsf{LLR}}({\bm{y}})$ and $\mathcal{I}_{\mathsf{SSB}}({\bm{y}})$ are $1-\alpha$ confidence intervals, we get $$\mathbb{P}(\varphi(x^*) \in \mathcal{I}_{\mathsf{HSB}}({\bm{y}})) \geq (1-\alpha) \cdot \mathbb{P}(\mathcal{I}_{\mathsf{HSB}}({\bm{y}}) = \mathcal{I}_{\mathsf{LLR}}({\bm{y}})) + (1-\alpha) \cdot \mathbb{P}(\mathcal{I}_{\mathsf{HSB}}({\bm{y}}) = \mathcal{I}_{\mathsf{SSB}}({\bm{y}})) = 1-\alpha.$$ This proves part (i). ## Proof of {#app:constrained_1d_gaussian} Similar to in , this proof is divided into two cases. #### [Case of $\mu^*$ = 0]{.ul}. Since the case when $\mu^* = 0$ is of particular interest, we show the result in this specific case and then generalize. Thus, when $\mu_0 = 0$, the log-likelihood ratio has the following distribution: $$\ell_0 \sim \frac{1}{2} + \frac{1}{2} \chi^2_1.$$ Additionally, this distribution implies the following stochastic dominance: $$\label{eq:llr_cdf_mu_0} \mathbb{P}_{\mu_0} \left(\ell_0 \leq c \right) = \frac{1}{2} \left( 1 + \chi^2_1(c) \right) \geq \chi^2_1(c),$$ i.e., the log-likelihood ratio CDF is stochastically dominated by the chi-squared with one degree of freedom distribution. This means that the type-I error of the test can be controlled at the $\alpha$ level. When $\mu > 0$, the closed-form solution to the CDF of $\ell_0$ becomes more complicated, as we can no longer use symmetry around the origin. 
From the result of , we have the following CDF: $$\label{eq:llr_cdf_non_zero_1} \mathbb{P}_{\mu_0} \left( \ell_0 \leq c \right) = \chi^2_1(c) \cdot \bm{1} \{ c < \mu_0^2 \} + \left\{ \Phi(\sqrt{c}) - \Phi\left( \frac{-\mu_0^2 - c}{2 \mu_0} \right) \right\} \cdot \bm{1} \{ c \geq \mu_0^2 \}.$$ Note, a quick check of [\[eq:llr_cdf_non_zero_1\]](#eq:llr_cdf_non_zero_1){reference-type="eqref" reference="eq:llr_cdf_non_zero_1"} when $\mu_0 = 0$ reveals agreement with [\[eq:llr_cdf_mu_0\]](#eq:llr_cdf_mu_0){reference-type="eqref" reference="eq:llr_cdf_mu_0"} such that $$\mathbb{P}_{\mu_0} \left(\ell_0 \leq c \right) = \Phi(\sqrt{c}) = \Phi(\sqrt{c}) - \frac{1}{2} + \frac{1}{2} = \frac{1}{2}\left(2 \Phi(\sqrt{c}) - 1 \right) + \frac{1}{2} = \frac{1}{2} \chi^2_1(c) + \frac{1}{2}.$$ This completes the case of $\mu^* = 0$. #### [Case of $\mu^* > 0$]{.ul}. We already demonstrated above the chi-squared with one degree of freedom dominates the log-likelihood ratio when $\mu_0 = 0$. We now show that the dominance holds when $\mu_0 > 0$. Clearly, when $c < \mu_0^2$, $\mathbb{P}_{\mu_0} \left(\ell_0 \leq 0 \right) = \chi^2_1(c)$, making it in fact equal to the chi-squared with one degree of freedom. Suppose $c \geq \mu_0^2$. Define $$h(c) := \Phi(\sqrt{c}) - \Phi\left( \frac{-\mu_0^2 - c}{2 \mu_0} \right) - \chi^2_1(c).$$ The stochastic dominance occurs if and only if $h(c) \geq 0$ for all $c \geq \mu_0^2$. Note first that $\chi^2_1(c) = \Phi(\sqrt{c}) - \Phi(-\sqrt{c})$ and therefore $h(c) = \Phi(-\sqrt{c}) - \Phi\left( \frac{-\mu_0^2 - c}{2 \mu_0} \right)$. Since $\Phi(\cdot)$ is a monotonically increasing function, it is sufficient to show that $-\sqrt{c} - \frac{-\mu_0^2 - c}{2 \mu_0} \geq 0$ for all $c \geq \mu_0^2$. We do so below. Define a function $f$ as follows: $$f(c) = -\sqrt{c} - \frac{-\mu_0^2 - c}{2 \mu_0}.$$ Observe that when $c = \mu_0^2$, $f(c) = 0$. Consider when $c > \mu_0^2$. We obtain the following first and second derivatives: $$f'(c) = \frac{-\mu_0 + \sqrt{c}}{2 \mu_0 \sqrt{c}} \quad \text{and} \quad f''(c) = \frac{1}{4}c^{-3/2}.$$ By the constraint $c > \mu_0^2$, it follows that $-\mu_0 + \sqrt{c} > 0$, and therefore, $f'(c) > 0$ for all $c > \mu_0^2$. Additionally, $f''(c) > 0$ for all $c > \mu_0^2$, so $f$ is convex. Hence, we conclude that $f$ is a monotonically increasing function for $c > \mu_0^2$, which starts at $0$ when $c = \mu_0^2$, and thus $f(c) \geq 0$ for all $c \geq \mu_0^2$. It therefore follows that $$\Phi(-\sqrt{c}) \geq \Phi\left(\frac{-\mu_0^2 - c}{2 \mu_0}\right),$$ and hence $h(c) \geq 0$ for all $c \geq \mu_0^2$. As such, we conclude that $\mathbb{P}_{\mu_0}\left(\ell_0 \leq c \right) \geq \chi^2_1(c)$ for all $c \geq 0$, i.e., that the sampling distribution for the log-likelihood ratio is stochastically dominated by a chi-squared distribution with one degree of freedom. This completes the case of $\mu^* > 0$. # Proofs in {#app:sec:rust_burrus_proofs} ## Proof of {#proof-of} The proof follows by combining . ## Proof of {#sec:proof:lemma:burruschi2} The proof follows by direct inspection and substitution of $q_\alpha$ and $-2\ell_{{\bm{x}}}({\bm{y}}) = \lVert {\bm{y}}- {\bm{K}}{\bm{x}}\rVert_2^2$. The interval has the coverage if the $q_\alpha$ is valid by and only if by . ## Proof of {#sec:proof:lemma:burruschi2B} The proof follows by observing $z^2_{\alpha/2} = Q_{\chi^2_1}(1-\alpha)$ and applying ## Proof of {#proof:invalidity-counterexample} We argue by coupling. 
Note that $\frac 1 2 (y_1-y_2)^2 \sim \chi^2_1$, so that it suffices to show $\lambda \leq \frac 1 2 (y_1-y_2)^2$ for every $y$ to constitute a valid coupling that proves stochastic dominance. This is clearly true when $y_1 + y_2 \geq 0$, since $\lambda- \frac 1 2 (y_1-y_2)^2$ is then a sum of non-positive terms only. When $y_1+y_2 < 0$, if both are strictly negative then $\lambda = 0 \leq \frac 1 2 (y_1-y_2)^2$. Otherwise, assume without loss of generality that $y_1$ is non-negative; then $y_2$ has to be negative. Then $\lambda = y_1^2$, but $y_1 \geq 0$, $y_2 < 0$ and $y_1 < - y_2$ imply that $|y_1 - y_2| = y_1 - y_2 \geq 2y_1 \geq \sqrt{2}y_1$, and squaring both sides gives $\frac{1}{2}(y_1-y_2)^2 \geq y_1^2 = \lambda$, as required. This finishes the proof. ## Proof of {#ProofCounter} Consider the LLR $$\label{ource_appendix} \lambda(\mu^* = -1, {\bm{y}}) ~= \min_{\substack{x_1 + x_2 -x_3 = -1\relax {\bm{x}}\geq \bm{0}}} \Vert {\bm{x}}-{\bm{y}}\Vert^2_2 - \min_{{\bm{x}}\geq \bm{0} } \Vert {\bm{x}}-{\bm{y}}\Vert^2_2.$$ The goal of this proof is to show that $\chi^2_1$ does not stochastically dominate [\[ource_appendix\]](#ource_appendix){reference-type="eqref" reference="ource_appendix"} when ${\bm{y}}\sim \mathcal{N}(x^* = (0,0,1), {\bm{I}}_3)$. By Corollary 4.26 in @ProbabilityBook, $X \succeq Y$ implies $\mathbb E[X] \geq \mathbb E[Y]$, so it suffices to show that $$\mathbb E[\lambda(\mu^* = -1, {\bm{y}})] > \mathbb E[\chi^2_1] = 1$$ to complete the proof. Observe that $$\mathbb E [\lambda(\mu^* = -1, {\bm{y}})] = \mathbb E \bigg [\min_{\substack{{\bm{h}}^\top{\bm{x}}= -1 \relax {\bm{x}}\geq \bm{0}}} \Vert {\bm{x}}-{\bm{y}}\Vert^2_2\bigg ] - \mathbb{E} \bigg [\min_{{\bm{x}}\geq \bm{0} } \Vert {\bm{x}}-{\bm{y}}\Vert^2_2\bigg ].$$ We begin by computing the second term. Since $$\min_{{\bm{x}}\geq \bm{0} } \Vert {\bm{x}}-{\bm{y}}\Vert^2_2 = \sum_{i=1}^3 (y_i - \max\{y_i, 0\})^2,$$ we have $$\begin{aligned} \mathbb{E} \bigg [\min_{{\bm{x}}\geq \bm{0} } \Vert {\bm{x}}-{\bm{y}}\Vert^2_2\bigg ] &= \sum_{i=1}^3 \mathbb{E} \bigg [ (y_i - \max\{y_i, 0\})^2\bigg ] \relax &= 2 \mathbb{E}_{z \sim \mathcal N(0,1)} \bigg[ (z-\max\{z,0\})^2 \bigg ] + \mathbb{E}_{z \sim \mathcal N(1,1)} \bigg[ (z-\max\{z,0\})^2 \bigg ].\end{aligned}$$ Let $g(z) := (z-\max\{z, 0\})^2$. Using in both cases, we obtain $$\begin{aligned} \mathbb E[g(z)] = {\underbrace{\mathbb E[g(z)\mid z\geq 0]}_{0}} {\cdot} {\mathbb P(z \geq 0)} + \mathbb E[g(z)\mid z < 0] \cdot \mathbb P(z < 0) = \mathbb E[z^2 \mid z < 0] \cdot \mathbb P(z < 0).\end{aligned}$$ Note that $$\begin{aligned} \mathbb E[z^2 \mid z < 0] &= (\mathbb E[z\mid z < 0])^2 + \mathrm{Var}[z\mid z<0] \relax &= \begin{dcases} \bigg(-\frac{\phi(0)}{\Phi(0)}\bigg)^2 + \bigg(1 - \Big(\frac{\phi(0)}{\Phi(0)}\Big)^2\bigg),\; z \sim \mathcal N(0,1)\relax \bigg(1- \frac{\phi(-1)}{\Phi(-1)}\bigg)^2 + 1 + \frac{\phi(-1)}{\Phi(-1)} - \Big(\frac{\phi(-1)}{\Phi(-1)}\Big)^2,\; z \sim \mathcal N(1,1) \end{dcases} \relax &= \begin{dcases} 1, \; z \sim \mathcal N(0,1)\relax 2- \frac{\phi(-1)}{\Phi(-1)}, \; z \sim \mathcal N(1,1), \end{dcases} \end{aligned}$$ where we used the formulas for mean and variance of a truncated Gaussian.
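These two truncated-moment values are easy to confirm numerically. The following sketch (an illustration only, not needed for the argument) integrates $\mathbb{E}[(z - \max\{z,0\})^2]$ for $z \sim \mathcal{N}(0,1)$ and $z \sim \mathcal{N}(1,1)$ and compares with $1/2$ and $2\Phi(-1) - \phi(-1)$, the values combined in the next display.

```python
# Numerical check (illustrative only) of E[(z - max(z,0))^2] = E[z^2 | z<0] P(z<0)
# for z ~ N(0,1) and z ~ N(1,1), against the closed forms used in the proof.
import numpy as np
from scipy import integrate, stats

def expected_g(mu):
    # integrate z^2 * phi(z - mu) over z < 0
    val, _ = integrate.quad(lambda z: z**2 * stats.norm.pdf(z, loc=mu), -np.inf, 0.0)
    return val

phi, Phi = stats.norm.pdf, stats.norm.cdf
print(expected_g(0.0), 0.5)                       # both ~ 0.5
print(expected_g(1.0), 2 * Phi(-1) - phi(-1))     # both ~ 0.0753
```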
Finally, $$\begin{aligned} \mathbb{E} \bigg [\min_{{\bm{x}}\geq \bm{0} } \Vert {\bm{x}}-{\bm{y}}\Vert^2_2\bigg ] = 2 \cdot 1/2 \cdot 1 + (2- \phi(-1)/\Phi(-1))\cdot(\Phi(-1)) = 1 + 2\Phi(-1) - \phi(-1) \approx 1.0753.\end{aligned}$$ It suffices to prove that $$\mathbb E \bigg [\min_{\substack{{\bm{h}}^\top{\bm{x}}= -1 \relax {\bm{x}}\geq \bm{0}}} \Vert {\bm{x}}-{\bm{y}}\Vert^2_2 \bigg] > 2 + 2\Phi(-1) - \phi(-1) \approx 2.0753.$$ We will prove that $$\mathbb E \bigg [\min_{\substack{{\bm{h}}^\top{\bm{x}}= -1 \relax {\bm{x}}\geq \bm{0}}}\Vert {\bm{x}}-{\bm{y}}\Vert^2_2 \bigg] = 13/6 \approx 2.166.$$ Note that the intersection of the plane ${\bm{h}}^\top{\bm{x}}= x_1 +x_2 - x_3 = -1$ and ${\bm{x}}\geq \bm{0}$ is the parametric surface $\mathcal S = \left \{(u,\ v,\ u+v+1), u\geq 0, v \geq 0 \right\}$, so we can write $$\min_{{\bm{x}}\in \mathcal S} \Vert {\bm{x}}-{\bm{y}}\Vert^2_2 = \min_{u \geq 0, v \geq 0} (y_1-u)^2 + (y_2-v)^2 + (y_3 - u - v - 1)^2.$$ It is convenient to define a new variable $z_3 = 1-y_3 \sim \mathcal{N}(0,1)$, so that $(y_1, y_2, z_3)$ is sampled from a standard three dimensional Gaussian. Abusing notation we will still write $y_3$ for $z_3$ and then ${\bm{y}}\sim \mathcal{N}((0,0,0), {\bm{I}})$. The optimization problem becomes: $$\min_{u \geq 0, v \geq 0} (y_1-u)^2 + (y_2-v)^2 + (-y_3 - u - v)^2.$$ This can be explicitly solved to yield: $$\begin{aligned} \label{formula} \min_{\substack{{\bm{h}}^\top{\bm{x}}= -1 \relax {\bm{x}}\geq \bm{0}}} \Vert {\bm{x}}-{\bm{y}}\Vert^2_2\ = \begin{dcases} y_1^2+y_2^2+y_3^2 & y_1-y_3\leq 0 \text{ and } y_2-y_3\leq 0 \relax \frac{1}{2} \left(y_1^2+2 y_1 y_3+2 y_2^2+y_3^2\right) & y_1-y_3\geq 0 \text{ and } y_1-2 y_2+y_3\geq 0 \relax \frac{1}{2} \left(2 y_1^2+y_2^2+2 y_2 y_3+y_3^2\right) & y_2-y_3\geq 0 \text{ and }2 y_1-y_2-y_3\leq 0 \relax \frac{1}{3} \left(y_1+y_2+y_3\right)^2 & \begin{cases} 2y_1 - y_3 \geq y_2 \geq \max\{y_1, y_3\} \relax 2y_2 - y_3 \geq y_1 \geq \max\{y_2, y_3\} \end{cases}. \end{dcases}\end{aligned}$$ We split $$\int_{\mathbb R^3} \min_{\substack{{\bm{h}}^\top{\bm{x}}= -1 \relax {\bm{x}}\geq \bm{0}}}\Vert {\bm{x}}-{\bm{y}}\Vert^2_2\ \phi(y_1)\phi(y_2)\phi(y_3) \, \mathrm{d}y$$ into the different domains given by [\[formula\]](#formula){reference-type="eqref" reference="formula"}, with the value of the expectation being equal to the sum of the different integrals, which we proceed to compute. #### [Region 1]{.ul}: $$\begin{aligned} I_1 = \int_{y_3 \geq y_1, y_3 \geq y_2} \frac{e^{-\frac{1}{2}(y_1^2 + y_2^2 +y_3^2)}}{2 \sqrt{2} \pi ^{3/2}} (y_1^2 + y_2^2 + y_3^2) \, \mathrm{d}y.\end{aligned}$$ Note that by symmetry of the variables in the integrand, we have $$\begin{aligned} I_1 = \int_{y_2 \geq y_1, y_2 \geq y_3} \frac{e^{-\frac{1}{2}(y_1^2 + y_2^2 +y_3^2)}}{2 \sqrt{2} \pi ^{3/2}} (y_1^2 + y_2^2 + y_3^2) \, \mathrm{d}y = \int_{y_1 \geq y_3, y_1 \geq y_2} \frac{e^{-\frac{1}{2}(y_1^2 + y_2^2 +y_3^2)}}{2 \sqrt{2} \pi ^{3/2}} (y_1^2 + y_2^2 + y_3^2) \, \mathrm{d}y.\end{aligned}$$ And since one of the $y_i$ will always be the largest one, the sum of the domains is $\mathbb R^3$ (modulo measure zero intersections that do not affect integration) and we can write $$\begin{aligned} I_1 = \dfrac 1 3 \int_{\mathbb R^3} \frac{e^{-\frac{1}{2}(y_1^2 + y_2^2 +y_3^2)}}{2 \sqrt{2} \pi ^{3/2}} (y_1^2 + y_2^2 + y_3^2) \, \mathrm{d}y = \dfrac 1 3 \cdot 3 = 1.\end{aligned}$$ Here we used that the integral is the expected value of $y_1^2 + y_2^2 + y_3^2$, which is 3 since the $y_i$ are centered with unit variance. 
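Before turning to the remaining regions, the piecewise expression [\[formula\]](#formula){reference-type="eqref" reference="formula"} can also be checked numerically. The sketch below (an illustration only, assuming SciPy's bound-constrained solver, which is a choice made here and not part of the proof) compares the closed form with a direct minimization over $u, v \geq 0$ at random points, and Monte Carlo estimates the expectation, which should be close to the target value $13/6$.

```python
# Sanity check of the closed form and of the value 13/6 (illustrative sketch).
import numpy as np
from scipy import optimize

def closed_form(y1, y2, y3):
    if y1 - y3 <= 0 and y2 - y3 <= 0:
        return y1**2 + y2**2 + y3**2
    if y1 - y3 >= 0 and y1 - 2*y2 + y3 >= 0:
        return 0.5 * (y1**2 + 2*y1*y3 + 2*y2**2 + y3**2)
    if y2 - y3 >= 0 and 2*y1 - y2 - y3 <= 0:
        return 0.5 * (2*y1**2 + y2**2 + 2*y2*y3 + y3**2)
    return (y1 + y2 + y3)**2 / 3.0

def direct_min(y1, y2, y3):
    # minimize (y1-u)^2 + (y2-v)^2 + (-y3-u-v)^2 over u, v >= 0
    obj = lambda w: (y1 - w[0])**2 + (y2 - w[1])**2 + (y3 + w[0] + w[1])**2
    res = optimize.minimize(obj, x0=[1.0, 1.0], bounds=[(0, None), (0, None)])
    return res.fun

rng = np.random.default_rng(3)
ys = rng.normal(size=(20, 3))
print(max(abs(closed_form(*y) - direct_min(*y)) for y in ys))   # ~ 0

samples = rng.normal(size=(100_000, 3))
print(np.mean([closed_form(*y) for y in samples]), 13 / 6)      # both ~ 2.1667
```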
#### [Region 2]{.ul}: $$\begin{aligned} \label{eqi2} I_2 = \int_{y_1-y_3\geq 0, y_1-2 y_2+y_3\geq 0 } \frac{e^{-\frac{1}{2}(y_1^2 + y_2^2 +y_3^2)}}{4 \sqrt{2} \pi ^{3/2}} \left(y_1^2+2 y_1 y_3+2 y_2^2+y_3^2\right) \, \mathrm{d}y.\end{aligned}$$ Partition $\mathbb R^3$ in four spaces with measure zero intersection, and we aim to argue that the integral of the integrand in [\[eqi2\]](#eqi2){reference-type="eqref" reference="eqi2"} has the same value when integrating over any of them: $$\begin{aligned} A &:= \left\{{\bm{y}}: y_1 \geq y_3, y_2 \geq \frac{y_1 + y_3}{2}\right\}\relax B &:= \left\{{\bm{y}}: y_1 \geq y_3, y_2 \leq \frac{y_1 + y_3}{2}\right\}\relax C &:= \left\{{\bm{y}}: y_1 \leq y_3, y_2 \geq \frac{y_1 + y_3}{2}\right\}\relax D &:= \left\{{\bm{y}}: y_1 \leq y_3, y_2 \leq \frac{y_1 + y_3}{2}\right\}.\end{aligned}$$ Clearly $I_2 = I_B = \int_A {\bm{h}}(y_1, y_2, y_3) \, \mathrm{d}y$. Since ${\bm{h}}$ satisfies ${\bm{h}}(x_1, x_2, x_3) = {\bm{h}}(x_3, x_2, x_1)$, we can exchange $y_1$ and $y_3$ in the definitions of the sets, so $I_A = I_C$ and $I_B = I_D$. And since ${\bm{h}}$ is even with respect to $x_2$ and odd with respect to $x_1, x_3$ we can exchange $y_i$ to $-y_i$ for $i = 1,2,3$ without the result changing. This flips both inequalities, proving $I_A = I_D$ and $I_B = I_C$. We therefore have $$\begin{aligned} I_2 = \dfrac 1 4 \int_{\mathbb R^3} \frac{e^{-\frac{1}{2}(y_1^2 + y_2^2 +y_3^2)}}{4 \sqrt{2} \pi ^{3/2}} \left(y_1^2+2 y_1 y_3+2 y_2^2+y_3^2\right) \, \mathrm{d}y = \dfrac{1}{4} \cdot 2 = \dfrac 1 2.\end{aligned}$$ Here, in the integral, we factor out the sum and using that, the expected value of $y_iy_j$ is $\delta_{ij}$. #### [Region 3]{.ul}: $$\begin{aligned} I_3 = \int_{y_2-y_3\geq 0, y_2-2 y_1+y_3\geq 0 } \frac{e^{-\frac{1}{2}(y_1^2 + y_2^2 +y_3^2)}}{4 \sqrt{2} \pi ^{3/2}} \left(2y_1^2+2 y_2 y_3+y_2^2+y_3^2\right) \, \mathrm{d}y.\end{aligned}$$ This is exactly the same integral as $I_2$ by switching $y_2$ with $y_1$, so $I_3 = \dfrac 1 2$. #### [Region 4]{.ul}: $$\begin{aligned} I_4 &= \int\displaylimits_{ 2y_1 - y_3 \geq y_2 \geq \max(y_1, y_3)} \frac{e^{-\frac{1}{2}(y_1^2 + y_2^2 +y_3^2)}}{6 \sqrt{2} \pi ^{3/2}} (y_1 + y_2 + y_3)^2 \, \mathrm{d}y \nonumber \relax & \qquad + \int\displaylimits_{ 2y_2 - y_3 \geq y_1 \geq \max(y_2, y_3)} \frac{e^{-\frac{1}{2}(y_1^2 + y_2^2 +y_3^2)}}{6 \sqrt{2} \pi ^{3/2}} (y_1 + y_2 + y_3)^2 \, \mathrm{d}y. \label{eqi4}\end{aligned}$$ We partition $\mathbb R^3$ in 12 subspaces with measure 0 intersection and we aim to argue that the integral of the integrand in [\[eqi4\]](#eqi4){reference-type="eqref" reference="eqi4"} (considering one of the integrals only) has the same value when integrating over any of them. For $\sigma$ a permutation of $(y_1, y_2, y_3)$, we define the first 6 subsets as: $$\{{\bm{y}}: 2y_{\sigma(1)} - y_{\sigma(2)} \geq y_{\sigma(3)} \geq \max\{y_{\sigma(1)}, y_{\sigma(2)}\}\},$$ and the last 6 subsets as: $$\{{\bm{y}}: 2y_{\sigma(1)} - y_{\sigma(2)} \leq y_{\sigma(3)} \leq \min\{y_{\sigma(1)}, y_{\sigma(2)}\}\}.$$ We need to prove that the integral has the same value in any of the 12 subsets. Since that the integrand $${\bm{h}}(y_1, y_2, y_3) := \frac{e^{-\frac{1}{2}(y_1^2 + y_2^2 +y_3^2)}}{6 \sqrt{2} \pi ^{3/2}} (y_1 + y_2 + y_3)^2$$ satisfies ${\bm{h}}(y_1, y_2, y_3) = {\bm{h}}(y_{\sigma(1)},y_{\sigma(2)},y_{\sigma(3)})$ for all permutations $\sigma$, the value of the integral in between the first and second groups of 6 subsets is the same. 
For a fixed $\sigma$ (say, the identity), since ${\bm{h}}(y_1, y_2, y_3) = {\bm{h}}(-y_1, -y_2,-y_3)$, the value over $$\{ {\bm{y}}: 2y_1-y_2 \geq y_3 \geq \max\{y_1,y_2\} \}$$ is the same as the value over $$\{ {\bm{y}}: -2y_1+y_2 \geq -y_3 \geq \max\{-y_1,-y_2\} \} = \{ {\bm{y}}: 2y_1-y_2 \leq y_3 \leq \min\{y_1,y_2\} \},$$ so the value over the 12 sets is complete. It remains to be seen that for a generic ${\bm{y}}= (y_1, y_2, y_3)$, $y_1 \neq y_2 \neq y_3 \neq y_1$ (which can be assumed with probability 1 without affecting the integral), the point belongs to one and just one of the sets. Assume without loss of generality that $y_1$ is the greater of the three and $y_3$ is the smallest. Then since $y_1 > \max\{y_2, y_3\}$ and $y_3 < \min\{y_1, y_2\}$ the only subsets that ${\bm{y}}$ can belong to are: $$\begin{aligned} A &:= \{ {\bm{y}}: 2y_2-y_3 \geq y_1 \geq \max\{y_2, y_3\} \}\relax B &:=\{ {\bm{y}}: 2y_3-y_2 \geq y_1 \geq \max\{y_2, y_3\} \}\relax C &:=\{ {\bm{y}}: 2y_1-y_2 \leq y_3 \leq \min\{y_1, y_2\} \}\relax D &:=\{ {\bm{y}}: 2y_2-y_1 \leq y_3 \leq \min\{y_1, y_2\} \}.\end{aligned}$$ But ${\bm{y}}$ is not in $B$ because that would require $y_3 \geq \frac{y_1 + y_2}{2}$ but $y_3 < y_1$ and $y_3 < y_2$, and it is also not in $C$ because that would require $y_1 \leq \frac{y_2+y_3}{2}$ and $y_1 > y_2$ and $y_1 > y_3$. ${\bm{y}}$ will be in $A$ if $y_2 > \frac{y_1+y_3}{2}$ and in $D$ if, on the contrary, $y_2 < \frac{y_1+y_3}{2}$, both of which are possible, but not at the same time. We conclude by identifying $I_4$ as the sum of two integrals over subsets that we have defined, and therefore $$\begin{aligned} I_4 = \dfrac{2}{12} \int_{\mathbb R^3} \frac{e^{-\frac{1}{2}(y_1^2 + y_2^2 +y_3^2)}}{6 \sqrt{2} \pi ^{3/2}} (y_1 + y_2 + y_3)^2 \, \mathrm{d}y = \dfrac 1 6 \cdot 1 = \dfrac 1 6.\end{aligned}$$ Here we expand the sum and use again that the expected value of $y_iy_j$ is $\delta_{ij}$. The proof concludes by adding up $$\mathbb E \bigg [\min_{\substack{{\bm{h}}^\top{\bm{x}}= -1 \relax {\bm{x}}\geq \bm{0}}}\Vert {\bm{x}}-{\bm{y}}\Vert^2_2 \bigg] =I_1 + I_2 + I_3 + I_4 = \frac {13}{6}.$$ ## Proof of {#proof:dimbounding} We construct a series of counterexamples, indexed by the dimension $p$, and prove that as $p \rightarrow \infty$, the expected value of the LLR diverges. Since stochastic dominance implies inequality of expectations (when expectations are finite), we conclude that the distribution can not be stochastically dominated. For all $p \in \mathbb{N}$, consider the example in $\mathbb{R}^p (= \mathbb{R}^m)$, ${\bm{K}}= {\bm{I}}_p$, ${\bm{x}}^* = (0,\ldots,0, 1)$, ${\bm{h}}= (1,\ldots,1,-1)$ (such that $\mu^* = -1$). 
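Before bounding the expectation analytically, here is a Monte Carlo sketch of this family of examples (an illustration only; the constrained projection is computed with SciPy's SLSQP solver, a choice made here and not part of the argument). For $p = 3$ the example reduces to the one treated in the previous proof, whose expected LLR is $13/6 - (1 + 2\Phi(-1) - \phi(-1)) \approx 1.09$, and the estimates grow with $p$, in line with the divergence established below.

```python
# Monte Carlo illustration (not part of the proof) of the growth of the
# expected LLR in the family K = I_p, x* = e_p, h = (1, ..., 1, -1).
import numpy as np
from scipy import optimize

def llr(y, h):
    p = len(y)
    x0 = np.zeros(p); x0[-1] = 1.0            # feasible starting point
    # min over {x >= 0, h^T x = -1} of ||x - y||^2
    res = optimize.minimize(
        lambda x: np.sum((x - y) ** 2), x0,
        jac=lambda x: 2 * (x - y),
        bounds=[(0, None)] * p,
        constraints=[{"type": "eq", "fun": lambda x: h @ x + 1.0}],
        method="SLSQP",
    )
    # min over {x >= 0} of ||x - y||^2 has the closed form sum(min(y, 0)^2)
    return res.fun - np.sum(np.minimum(y, 0.0) ** 2)

rng = np.random.default_rng(4)
for p in [3, 10, 30]:
    h = np.append(np.ones(p - 1), -1.0)
    x_star = np.zeros(p); x_star[-1] = 1.0
    est = np.mean([llr(x_star + rng.normal(size=p), h) for _ in range(500)])
    print(p, round(est, 2))   # compare with E[chi^2_1] = 1; grows with p
```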
Let $$\lambda_n(\mu^* = -1, {\bm{y}}) = \min_{\substack{\sum_{i =1}^{p-1}x_i -x_{p} = -1\relax {\bm{x}}\geq \bm{0}}} \Vert {\bm{x}}-{\bm{y}}\Vert^2_2 - \min_{{\bm{x}}\geq \bm{0} } \Vert {\bm{x}}-{\bm{y}}\Vert^2_2.$$ And compute $$\mathbb{E}_{{\bm{y}}\sim \mathcal N({\bm{x}}^*, {\bm{I}}_n)}[\lambda_n(-1, {\bm{y}})] = \mathbb{E}_{{\bm{y}}\sim \mathcal N({\bm{x}}^*, {\bm{I}}_n)}\bigg [\min_{\substack{\sum_{i =1}^{p-1}x_i -x_{p} = -1\relax {\bm{x}}\geq \bm{0}}} \Vert {\bm{x}}-{\bm{y}}\Vert^2_2\bigg] - \mathbb{E}_{{\bm{y}}\sim \mathcal N({\bm{x}}^*, {\bm{I}}_n)}\bigg[\min_{{\bm{x}}\geq \bm{0} } \Vert {\bm{x}}-{\bm{y}}\Vert^2_2\bigg].$$ For the second term, we have $$\begin{aligned} \mathbb{E} \bigg [\min_{{\bm{x}}\geq \bm{0} } \Vert {\bm{x}}-{\bm{y}}\Vert^2_2\bigg ] &= \sum_{i=1}^p \mathbb{E} \bigg [ (y_i - \max\{y_i, 0\})^2\bigg ] \relax &= (p-1) \mathbb{E}_{z \sim \mathcal N(0,1)} \bigg[ (z-\max\{z,0\})^2 \bigg ] + \mathbb{E}_{z \sim \mathcal N(1,1)} \bigg[ (z-\max\{z,0\})^2 \bigg ] \relax &= (p-1)\dfrac{1}{2} + (2- \phi(-1)/\Phi(-1))\cdot(\Phi(-1)),\end{aligned}$$ using similar arguments as the proof in . We will lower bound the first term using duality. For simplicity, define ${\bm{z}}= (y_1, \ldots, y_{p-1}, y_n-1) \sim \mathcal N(\mathbf{0}, {\bm{I}}_n)$, and equivalently optimize $$\min_{\substack{\sum_{i =1}^{p-1}x_i = x_{p}\relax {\bm{x}}\geq \bm{0}}} \Vert {\bm{x}}-{\bm{z}}\Vert^2_2,$$ where we defined the feasible $\widetilde{{\bm{x}}} = (x_1, \ldots, x_n-1 = \sum_{i =1}^{p-1} x_i)$ and replaced $\widetilde{{\bm{x}}}$ by ${\bm{x}}$, abusing notation. Using Fenchel duality, we have that $$\begin{aligned} \min_{\substack{\sum_{i =1}^{p-1}x_i = x_{p}\relax {\bm{x}}\geq \bm{0}}} \Vert {\bm{x}}-{\bm{z}}\Vert^2_2 \geq \sup_{\bm{\xi} \in \mathbb{R}^p} (- f^*(\bm{\xi}) - g^*(-\bm{\xi})), \end{aligned}$$ where we have noted by $f^*$ the convex conjugate of $f({\bm{x}}) := \|{\bm{x}}- {\bm{z}}\|_2^2$ and, letting $S$ be the feasible set, we denoted by $g^*$ the convex conjugate of $$\begin{aligned} g({\bm{x}}) = \begin{cases} 0 &\quad \text{if } {\bm{x}}\in S \relax \infty &\quad \text{if } {\bm{x}}\notin S. \end{cases}\end{aligned}$$ Note that with these definitions, $\min_{\substack{\sum_{i =1}^{p-1}x_i = x_{p}\relax {\bm{x}}\geq \bm{0}}} \Vert {\bm{x}}-{\bm{z}}\Vert^2_2 = \inf_{{\bm{x}}} (f(x) + g(x))$ so the weak Fenchel duality applies. We compute $f^*(\bm{\xi}) = \frac{1}{4} \Vert \bm{\xi} \Vert^2_2 + {\bm{z}}^\top \bm{\xi} - {\bm{z}}^\top {\bm{z}}$, and $$\begin{aligned} g^*(\bm{\xi}) = \begin{cases} 0 &\quad \text{if } \xi_i+\xi_p \leq 0 \text{ for } i \in [p-1] \relax \infty &\quad \text{otherwise}, \end{cases} \end{aligned}$$ so that $$\begin{aligned} \sup_{\bm{\xi} \in \mathbb{R}^p} (-f^*(\bm{\xi}) - g^*(-\bm{\xi})) &= \sup_{\xi_i + \xi_n \geq 0, \text{ for all } i \in [p-1]} \bigg [-\frac{1}{4} \Vert \bm{\xi} \Vert^2_2 - {\bm{z}}^\top \bm{\xi} + {\bm{z}}^\top {\bm{z}}\bigg] \relax &\geq \sup_{\xi_i + \xi_n \geq 0, \text{ for all } i \in [p-1]} \bigg [-\frac{1}{4} \Vert \bm{\xi} \Vert^2_2 - {\bm{z}}^\top \bm{\xi} \bigg] . \end{aligned}$$ Since the supremum is lower bounded by any feasible point, we can further bound by picking a feasible $\bm{\xi}^*$ for each possible ${\bm{z}}$. We define the following: $$\bm{\xi}^*({\bm{z}}) = \begin{cases} -{\bm{z}}& \text{if }-{\bm{z}}\text{ is feasible } (-z_i \geq z_n \text{ for all } i) \relax (-z_1, \ldots, -z_{p-1}, \max_{i \in [p-1]}{z_i}) & \text{otherwise}. 
\end{cases}$$ Observe that $$\begin{aligned} &\min_{\substack{\sum_{i =1}^{p-1}x_i = x_{p}\relax {\bm{x}}\geq \bm{0}}} \Vert {\bm{x}}-{\bm{z}}\Vert^2_2 \relax&\qquad \geq -\frac{1}{4} \Vert \bm{\xi}^*({\bm{z}}) \Vert^2_2 - {\bm{z}}^\top \bm{\xi}^*({\bm{z}}) \relax&\qquad = \begin{dcases} \frac{3}{4} \|{\bm{z}}\|^2 & \text{if }-{\bm{z}}\text{ is feasible } (-z_i \geq z_n \text{ for all } i) \relax \frac{3}{4}\sum_{i=1}^{p-1}z_i^2 - z_n\max_{i \in [p-1]}{z_i} -\frac{1}{4}\Big(\max_{i \in [p-1]}{z_i}\Big)^2 & \text{otherwise}. \end{dcases}\end{aligned}$$ We note that $-{\bm{z}}$ is feasible with probability $1/p$, by symmetry. Taking expected value over the inequality and using the law of total expectation yields $$\begin{aligned} \mathbb{E}\bigg[\min_{\substack{\sum_{i =1}^{p-1}x_i = x_{p}\relax {\bm{x}}\geq \bm{0}}} \Vert {\bm{x}}-{\bm{z}}\Vert^2_2\bigg ] &\geq \frac{1}{p} \times \frac{3}{4}\mathbb{E}\big[\|z\|_2^2\big] + \frac{p-1}{p} \bigg \{\mathbb{E}\bigg[\frac{3}{4}\sum_{i=1}^{p-1}z_i^2\bigg]- \mathbb{E}\bigg[z_n\max_{i \in [p-1]}{z_i}\bigg] -\frac{1}{4}\mathbb{E}\bigg[\Big(\max_{i \in [p-1]}{z_i}\Big)^2\bigg] \bigg \} \relax &= \frac{3}{4} + \frac{p-1}{p}\bigg\{\frac{3(p-1)}{4} - 0 - \frac{1}{4}\mathbb{E}\bigg[\Big(\max_{i \in [p-1]}{z_i}\Big)^2\bigg]\bigg\}.\end{aligned}$$ To bound the last term, we use $$\mathbb{E}\bigg[\Big(\max_{i \in [p-1]}{z_i}\Big)^2\bigg] = \bigg(\mathbb{E}\bigg[\max_{i \in [p-1]}{z_i}\bigg]\bigg)^2 + \mathrm{Var}\bigg[\max_{i \in [p-1]}{z_i}\bigg] \leq 2\log(p-1) + 1,$$ where the moment bounds are standard results: the expectation bound can be found using Jensen's inequality on $\exp({\sqrt{2\log p}\max_i{z_i}})$ and then bounding $\exp({\sqrt{2\log p}\max_i{z_i}}) \leq \sum_i \exp({\sqrt{2\log p}\, z_i})$, and the variance bound with Poincaré's inequality applied to a smooth maximum, even though it can be refined [@10.1214/ECP.v17-2210]. Putting everything together, we obtain $$\begin{aligned} \mathbb{E}_{{\bm{y}}\sim \mathcal N({\bm{x}}^*, {\bm{I}}_n)}[\lambda_n(-1, {\bm{y}})] \geq \frac{p-1}{p}\bigg\{\frac{3(p-1)}{4} - \frac{2\log(p-1) + 1}{4} \bigg\} - \frac{p}{2} + \mathcal{O}(1),\end{aligned}$$ which grows linearly in $p$ and therefore tends to $\infty$ as $p\rightarrow\infty$. This completes the proof. [^1]: Department of Computing and Mathematical Sciences, California Institute of Technology, CA 91125, USA. [^2]: Department of Statistics, University of California, Berkeley, CA 94720, USA. [^3]: Department of Statistics and Data Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA. [^4]: This form of "simple" interval is only for expositional simplicity. One can consider more general forms of confidence sets $\mathcal{I}({\bm{y}})$ beyond simple intervals, which we will do when describing the general framework in . [^5]: Note that the forward operator ${\bm{K}}$ is allowed to be column rank deficient, and in particular, the overparameterized setting when $p > m$ is allowed. [^6]: [@patil; @stanley_unfolding] also extend the setting and the conjecture to encompass linear constraints of the form $\boldsymbol{A}{\bm{x}}\leq \boldsymbol{b}$. Such constraints are of interest in practical applications such as X$_{\mathrm{CO2}}$ retrieval and particle unfolding. For simplicity, we present the positivity constraint case only here. However, our counterexample based on positivity constraints in will be sufficient to disprove the conjecture in this general case as well.
[^7]: Confidence sets of several functionals of interest with guarantees can be constructed by using the proposed method with, e.g., Bonferroni correction, but studying the performance of that approach is beyond the scope of this work [^8]: Note that length will in general depend on the unknown parameter ${\bm{x}}^*$. Several aggregate notions exist for "optimality" of the method with respect to length, such as minimax length [@donoho1994statistical; @schafer2009constructing] or expected length [@stanley_unfolding], among others. [^9]: For example, in the typical case where a $d$ dimensional vector is observed a total of $n$ times, we aggregate the results in an $m = n \times d$ dimensional vector. Throughout, we use $m$ to denote the total dimensionality of the observation vector. [^10]: [@schervish] provides conditions of equality of both test statistics. [^11]: Note that every ${\bm{x}}\in \mathcal{X}$ is accounted for in [\[eq:combined_type_1\]](#eq:combined_type_1){reference-type="eqref" reference="eq:combined_type_1"} when $\mu = \varphi({\bm{x}})$. An alternative viewpoint arises from the proof of , where the condition $\mathbb{P}_{{\bm{y}}\sim P_{{\bm{x}}^*}} \left({\bm{y}}\notin A_\alpha(\mu^*) \right) \leq \alpha$ is shown to be sufficient for achieving the desired coverage. This condition is actually weaker than requiring $T_{\mu^*}$ to be a *level-$\alpha$* test. As opposed to the *level-$\alpha$* requirement, we do not need the condition $\mathbb{P}_{{\bm{y}}\sim P_{{\bm{x}}}} \left({\bm{y}}\notin A_\alpha(\varphi({\bm{x}})) \right) \leq \alpha$ to hold $\text{for all } {\bm{x}}\in \Phi_{\mu^*} \cap \mathcal{X}$, but only for ${\bm{x}}^*$. Ensuring this weaker condition for all possible true parameters ${\bm{x}}\in \mathcal X$ is then the same as [\[eq:combined_type_1\_better\]](#eq:combined_type_1_better){reference-type="eqref" reference="eq:combined_type_1_better"}, which is equivalent to requiring the hypotheses tests $\{T_\mu : \mu \in \varphi(\mathcal{X})\}$ to be *level-$\alpha$* $\text{for all } \mu$.
arxiv_math
{ "id": "2310.02461", "title": "Optimization-based frequentist confidence intervals for functionals in\n constrained inverse problems: Resolving the Burrus conjecture", "authors": "Pau Batlle, Pratik Patil, Michael Stanley, Houman Owhadi, Mikael\n Kuusela", "categories": "math.ST stat.ME stat.TH", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | In this paper, we study a geometric counterpart of the cyclic vector which allow us to put a rank 2 meromorphic connection on a curve into a "companion" normal form. This allow us to naturally identify an open set of the moduli space of $\mathrm{GL}_2$-connections (with fixed generic spectral data, i.e. unramified, non resonant) with some Hilbert scheme of points on the twisted cotangent bundle of the curve. We prove that this map is symplectic, therefore providing Darboux (or canonical) coordinates on the moduli space, i.e. separation of variables. On the other hand, for $\mathrm{SL}_2$-connections, we give an explicit formula for the symplectic structure for a birational model given by Matsumoto. We finally detail the case of an elliptic curve with a divisor of degree $2$. address: - Department of Material Science, Graduate School of Science, University of Hyogo, 2167 Shosha, Himeji, Hyogo 671-2280, Japan - Univ Rennes, CNRS, IRMAR - UMR 6625, F-35000 Rennes, France - Department of Data Science, Faculty of Business Administration, KobeGakuin University, Minatojima, Chuou-ku, Kobe, 650-8586, Japan - Department of Algebra and Geometry, Insitute of Mathematics, Faculty of Natural Sciences, Budapest University of Technology and Economics, Műegyetem rkp. 3., H-1111 Budapest, Hungary author: - Arata Komyo - Frank Loray - Masa-Hiko Saito - Szilárd Szabó title: Canonical coordinates for moduli spaces of rank two irregular connections on curves. --- # Introduction In this paper, we introduce coordinates on the moduli spaces of rank 2 meromorphic connections on a Riemann surface, and we describe the symplectic structures on the moduli spaces by the introduced coordinates. Finally, we will have canonical coordinates on the moduli spaces. Our motivation is to give explicit descriptions of the isomonodromic deformations of meromorphic connections over a general Riemann surface. It is well-know that the isomonodromic deformations have non-autonomous Hamiltonian descriptions (in detail, see [@Krich02], [@Hurt08], [@Fedorov], for example). If we find explicit formulae for the isomonodromic Hamiltonians, then we have explicit descriptions of isomonodromic deformations. To find explicit formulae of Hamiltonians, it is necessary to introduce canonical coordinates (which are also called Darboux coordinates) on the moduli space of meromorphic connections. The present paper is a first step to give explicit descriptions of isomonodromic deformations. For the isomonodromic deformations of rank 2 projective connections with regular singular points, there are some results of explicit descriptions. For example, Okamoto considered non-autonomous Hamiltonian descriptions of isomonodromic deformations on elliptic curves in [@Oka86] and [@Oka95]. Iwasaki generalized for general Riemann surfaces in [@Iwa91] and [@Iwa92]. Here the independent variables of the isomonodromic deformations are the position of regular singular points on the Riemann surfaces. That is, they are isomonodromic deformations of fixed Riemann surfaces. On the other hand, Kawai [@Kawa03] gave explicit descriptions of the isomonodromic Hamiltonians varying the elliptic curve. Okamoto, Iwasaki, and Kawai in these papers introduced canonical coordinates on (a generic part of) the moduli space of rank 2 meromorphic projective connections by using apparent singularities. For our purpose, we take this strategy. 
That is, we will also introduce canonical coordinates on the moduli space of meromorphic connections by using apparent singularities. On the other hand, in this paper, we are interested in the isomonodromic deformations of $\mathrm{GL}_2$-connections and of $\mathrm{SL}_2$-connections. The coordinates using apparent singularities are an analog of the separation of variables in the Hitchin system, which is a birational map from the moduli space of stable Higgs bundles to the Hilbert scheme of points on the cotangent bundle over the underlying curve of the Higgs bundles (see [@Hurt] and [@GNR]). This approach has been generalized to Higgs bundles with unramified irregular singularities [@Kon-Soi Section 8.3], [@Szabo_BNR], [@Kon-Ode Section 4.2]. The moduli spaces of Higgs bundles corresponding to the Painlevé cases were analyzed from this perspective in [@ISS1], [@ISS2], [@ISS3]. Here this map is a symplectomorphism of the open dense subsets of the moduli space. The definition of the apparent singularities for general rank meromorphic connections will be given in [@SS]. ## Our setting Let $\nu$ be a positive integer. We set $I:= \{ 1,2,\ldots,\nu\}$. Let $C$ be a compact Riemann surface of genus $g$ ($g\geq 0$), and $D= \sum_{i \in I} m_i[t_i]$ be an effective divisor on $C$. Let $E$ be a vector bundle over $C$ and $\nabla \colon E \rightarrow E \otimes \Omega^1_{C}(D)$ be a meromorphic connection acting on $E$. We assume that the leading term of the expansion of a connection matrix of $\nabla$ at $t_i$ has distinct eigenvalues. If $m_i=1$, then we assume that the difference of eigenvalues of the residue matrix at $t_i$ is not integer. That is, $t_i$ is an generic unramified irregular singular point of $\nabla$ or a non-resonant regular singular point of $\nabla$. When $C$ is the projective line and $E$ is the trivial bundle, the moduli space of meromorphic connections has been studied by Boalch [@Boalch01] and Hiroe--Yamakawa [@HY14]. This moduli space has the natural symplectic structure coming from the symplectic structure on the (extended) coadjoint orbits. For general $C$ and $E$, the moduli space of meromorphic connections (with quasi-parabolic structures) has been studied by Inaba--Iwasaki--Saito [@IIS; @IIS2], Inaba [@Ina], and Inaba--Saito [@IS]. For general $C$ and $E$, the moduli space has also the natural symplectic structure. In these papers, the symplectic form described by a pairing of the hypercohomologies of some complex. This description of the symplectic structure is an analog of the symplectic structure of the moduli spaces of stable Higgs bundles due to Bottacin [@Bott]. For the case where $\nabla$ has only regular singular points, Inaba showed that this symplectic structure coincides with the pull-back of the Goldman symplectic structure on the character variety via the Riemann--Hilbert map in [@Ina the proof of Proposition 7.3]. Our purpose in this paper is to introduce canonical coordinates on the moduli spaces of meromorphic connections. For this purpose, there are some strategies. First one is to consider canonical coordinates on the products of coadjoint orbits. This direction was studied by Jimbo--Miwa--Mori--Sato [@JMMS], Harnad [@Harnad94], and Woodhouse [@Woodhouse07]. Sakai--Kawakami--Nakamura [@KNS18] and Gaiur--Mazzocco--Rubtsov [@GMR23] gave some explicit formulae for the isomonodromic Hamiltonians by the coordinates of this direction. Second one is to consider the apparent singularities. As mentioned above, we take this strategies. 
In this paper, we consider only the case where the rank of $E$ is two. Let $X$ be an irregular curve, which is described in Section [2.3](#SubSect:SpectralData){reference-type="ref" reference="SubSect:SpectralData"}. That is, $X$ is a tuple of (i) a compact Riemann surface $C$, (ii) an effective divisor $D$ on $C$, (iii) local coordinates around the support with $D$, and (iv) spectral data of meromorphic connections at the support with $D$ (with data of residue parts). Here, the spectral data is described in Section [2.3](#SubSect:SpectralData){reference-type="ref" reference="SubSect:SpectralData"}. We fix an irregular curve $X$. That is, we fix spectral data of rank 2 meromorphic connections at each point of the support with $D$. By applying elementary transformations (which is also called Hecke modifications), we may change the degree of the underlying vector bundle of a meromorphic connection freely. So we assume that $\deg(E)=2g-1$. By this condition, the Euler characteristic of the vector bundle $E$ is $1$ by the Riemann--Roch theorem. In this situation, for generic meromorphic connections $(E,\nabla)$, we have $\dim_{\mathbb{C}} H^0(C,E)=1$. So the global section of $E$ is uniquely determined up to constant. This is convenient for the definition of the apparent singularities. In this paper, we consider only meromorphic connections with $\dim_{\mathbb{C}} H^0(C,E)=1$. Moreover we assume that meromorphic connections $(E,\nabla)$ are irreducible. By this condition, the definition of apparent singularities becomes simple. ## $\mathrm{GL}_2$-connections In the first part of this paper, we discuss on $\mathrm{GL}_2$-connections. That is, we consider rank 2 meromorphic connections. We do not fix the determinant bundles of the underlying vector bundles and the traces of connections. Our purpose is to introduce canonical coordinates on the moduli space of rank 2 meromorphic connections by using apparent singularities. When $C$ is the projective line, many people introduced canonical coordinates on the moduli space by using the apparent singularities ([@Oka86], [@Obl05], [@DM07], [@Szabo13], [@KS19], [@DiarraLoray], and [@Kom]). In this paper, we consider apparent singularities for general Riemann surfaces. Let $X$ be the fixed irregular curve. If $(E,\nabla)$ is a rank 2 meromorphic connection such that $\deg(E) =2g-1$, $\dim_{\mathbb{C}} H^0(C,E)=1$, and $(E,\nabla)$ is irreducible, then we can define apparent singularities for $(E,\nabla)$. (In detail, see Definition [Definition 1](#2023_8_22_13_40){reference-type="ref" reference="2023_8_22_13_40"} below). The apparent singularities are the set of points $\{q_1,\ldots,q_N\}$ on the underlying curve $C$. Here we set $N := 4g-3+\deg(D)$. Let $M_X$ be the following moduli space $$M_X := \left. \left\{ (E,\nabla ) \ \middle| \ \begin{array}{l} \text{(i) $E$ is a rank 2 vector bundle on $C$ with $\deg(E)=2g-1$}\\ \text{(ii) $\nabla \colon E \rightarrow E \otimes \Omega^1_C(D)$ is a connection} \\ \text{(iii) $(E,\nabla)$ is irreducible, and}\\ \text{(iv) $\nabla$ has the fixed spectral data in $X$} \end{array} \right\} \right/ \cong.$$ This moduli space $M_X$ has a natural symplectic structure due to Inaba--Iwasaki--Saito [@IIS], Inaba [@Ina], and Inaba--Saito [@IS]. We consider a Zariski open subset $M_X^0$ of $M_X$ as follows: $$M^0_X := \left. 
\left\{ (E,\nabla ) \in M_X \ \middle| \ \begin{array}{l} \text{(i) $\dim_{\mathbb{C}} H^0(C,E) =1$}, \\ \text{(ii) $q_1+\cdots+q_{N}$ is reduced, and} \\ \text{(iii) $q_1+\cdots+q_{N}$ has disjoint support with $D$} \end{array} \right\} \right/ \cong$$ (in detail, see Section [3.1](#subsect_moduli){reference-type="ref" reference="subsect_moduli"}). The dimension of the moduli space $M^0_X$ is $2N$ (Proposition [Proposition 10](#prop:dimension){reference-type="ref" reference="prop:dimension"}). By taking apparent singularities, we have a map $$\begin{aligned}\label{eq:App} \operatorname{App}\colon M^0_X &\longrightarrow \mathrm{Sym}^N(C) \\ (E,\nabla) &\longmapsto \{ q_1,q_2 , \ldots ,q_N \} . \end{aligned}$$ Remark that the dimension of $\mathrm{Sym}^N(C)$ is half of the dimension of $M^0_X$. To introduce coordinates on $M^0_X$, it is necessary to find further invariants of connections, that are customarily called *accessory parameters*. To find these parameters, we introduce a twist of $\Omega^1_C(D)$ by $c_d$, which is the first Chern class $c_1(\det (E)) \in H^1(C,\Omega^1_C)$ of $E$. (In detail, Section [3.5](#subsect:Canonical_Coor){reference-type="ref" reference="subsect:Canonical_Coor"} below). We denote by $\Omega_C^1(D,c_d)$ the twist of $\Omega^1_C(D)$. Let $$\pi_{c_d} \colon \boldsymbol{\Omega}(D,c_d) \longrightarrow C$$ the total space of $\Omega^1_C(D,c_d)$. Let $\omega_{D,c_d}$ be the rational 2-form on $\boldsymbol{\Omega}(D,c_d)$ induced by the Liouville symplectic form. This rational 2-form $\omega_{D,c_d}$ induces a symplectic structure on $\boldsymbol{\Omega}(D,c_d) \setminus \pi_{c_d}^{-1}(D)$. We consider the symmetric product $\mathrm{Sym}^N(\boldsymbol{\Omega}(D,c_d))$. Let $\sum_{j=1}^N \mathrm{pr}_j^* (\omega_{D,c_d})$ be the rational 2-form on the product $\boldsymbol{\Omega}(D,c_d)^N$. Here $\mathrm{pr}_j \colon \boldsymbol{\Omega}(D,c_d)^N \rightarrow \boldsymbol{\Omega}(D,c_d)$ is the $j$-th projection. This rational 2-form $\sum_{j=1}^N \mathrm{pr}_j^*( \omega_{D,c_d})$ induces a symplectic structure on a generic part of $\mathrm{Sym}^N(\boldsymbol{\Omega}(D,c_d))$. We will define a map from $M^0_X$ to $\mathrm{Sym}^N(\boldsymbol{\Omega}(D,c_d))$ by the following idea. By the theory of apparent singularities discussed in Section [2.1](#SubSect:AppGL2){reference-type="ref" reference="SubSect:AppGL2"}, we have a canonical inclusion morphism $$\mathcal{O}_C \oplus (\Omega^1_C(D))^{-1} \longrightarrow E.$$ By this morphism, we have the connection $\nabla_0$ on $\mathcal{O}_C \oplus (\Omega^1_C(D))^{-1}$ induced by a connection $\nabla$ on $E$. Notice that $\nabla_0$ has simple poles at the apparent singularities. By applying automorphisms on $\mathcal{O}_C \oplus (\Omega^1_C(D))^{-1}$, we may normalize $\nabla_0$ as $$\nabla_0= \begin{pmatrix} \operatorname{d} & \beta \\ 1 & \delta \end{pmatrix},$$ which is called a companion normal form (in detail, see Section [2.2](#SubSect:CNFGL2){reference-type="ref" reference="SubSect:CNFGL2"} below). Here $\operatorname{d}$ is the exterior derivative on $C$, $\beta \in H^0(C, (\Omega^1_C)^{\otimes 2} (2D + q_1+\cdots+q_N))$, and $\delta$ is a connection on $(\Omega^1_C(D))^{-1}$, which has poles at the support of $D$ and the apparent singularities $q_1,\ldots,q_N$. 
Then we may define a map $$\label{2023_8_23_8_9} \begin{aligned} f_{\mathrm{App}} \colon M^0_X &\longrightarrow \mathrm{Sym}^N(\boldsymbol{\Omega}(D,c_d)) \\ (E,\nabla) &\longmapsto \{ (q_j, \mathrm{res}_{q_j} (\beta) + \mathrm{tr}(\nabla)|_{q_j} )\}_{ 1\leq j \leq N}. \end{aligned}$$ Here, notice that $\mathrm{res}_{q_j} (\beta) \in \Omega^1_C(D)|_{q_j}$, and the expression $\mathrm{tr}(\nabla)|_{q_j}$ is justified by considering the twisted cotangent bundle (for details, see Definition [Definition 16](#2023_7_12_23_06){reference-type="ref" reference="2023_7_12_23_06"} below). Note that the dimension of $\mathrm{Sym}^N(\boldsymbol{\Omega}(D,c_d))$ is equal to the dimension of $M^0_X$. A generic part of $\mathrm{Sym}^N(\boldsymbol{\Omega}(D,c_d))$ has the natural symplectic structure induced by the symplectic structure on the product $(\boldsymbol{\Omega}(D,c_d) \setminus \pi_{c_d}^{-1}(D)) \times \cdots \times (\boldsymbol{\Omega}(D,c_d)\setminus \pi_{c_d}^{-1}(D))$. The first main theorem is the following:

**Theorem 1** (Theorem [Theorem 20](#2023_8_22_12_09){reference-type="ref" reference="2023_8_22_12_09"} below). *The pull-back of the symplectic form on a generic part of $\mathrm{Sym}^N(\boldsymbol{\Omega}(D,c_d))$ under the map [\[2023_8\_23_8\_9\]](#2023_8_23_8_9){reference-type="eqref" reference="2023_8_23_8_9"} coincides with the symplectic form on $M^0_X$.*

If we take canonical coordinates on $\boldsymbol{\Omega}(D,c_d)$, then we obtain canonical coordinates on $\mathrm{Sym}^N(\boldsymbol{\Omega}(D,c_d))$, since the symplectic structure on $\mathrm{Sym}^N(\boldsymbol{\Omega}(D,c_d))$ is induced by the 2-form $\sum_{j=1}^N \mathrm{pr}_j^*( \omega_{D,c_d})$. Then we have canonical coordinates on $M^0_X$ by Theorem [Theorem 1](#TheoremA){reference-type="ref" reference="TheoremA"}. The construction of concrete canonical coordinates on $M^0_X$ is discussed in detail in the paragraph after the proof of Theorem [Theorem 20](#2023_8_22_12_09){reference-type="ref" reference="2023_8_22_12_09"} below. In Section [5](#Sect:CompForElliptic){reference-type="ref" reference="Sect:CompForElliptic"}, we consider an example of this argument. We will calculate the canonical coordinates for an elliptic curve and a divisor $D$ of length $2$. The moduli space of rank 2 meromorphic connections with fixed trace connection on an elliptic curve with two simple poles was studied in [@LR20] and [@FL]. In this paper, we will discuss the $\mathrm{GL}_2$-connection case.

## $\mathrm{SL}_2$-connections

In the second part of this paper, we discuss $\mathrm{SL}_2$-connections. That is, we consider rank 2 meromorphic connections with fixed trace connection $(L_0,\nabla_0)$. Here $L_0$ is a fixed line bundle on $C$ of degree $2g-1$ and $\nabla_0 \colon L_0 \rightarrow L_0 \otimes \Omega^1_C(D)$ is a fixed connection. More precisely, we consider rank 2 quasi-parabolic connections $(E,\nabla , \{ l^{(i)}\} )$, defined in [@IS Definition 2.1], with fixed trace connection $(L_0,\nabla_0)$. Here the spectral data of $\nabla_0$ is determined by the fixed irregular curve $X$. The quasi-parabolic structure $l^{(i)}$ at $t_i$ induces a one-dimensional subspace $l^{(i)}_{\mathrm{red}}$ of $E|_{t_i}$, namely the restriction of $l^{(i)}$ to $t_i$ (without multiplicity). Our moduli space is as follows: $$M_X(L_0,\nabla_0)_0 := \left.
\left\{ (E,\nabla , \{ l^{(i)}\} ) \ \middle| \ \begin{array}{l} \text{(i) $\nabla$ has the fixed spectral data in $X$,}\\ \text{(ii) $E$ is an extension of $L_0$ by $\mathcal{O}_C$,}\\ \text{(iii) $\dim_{\mathbb{C}} H^0(C,E) =1$, and} \\ \text{(iv) $l_{\text{red}}^{(i)} \not\in \mathcal{O}_{C}|_{t_i} \subset \mathbb{P}(E)$ for any $i$} \end{array} \right\} \right/ \cong,$$ which is described in Section [4.2](#SubSect:ModuliSL2){reference-type="ref" reference="SubSect:ModuliSL2"}. Here $(E,\nabla , \{ l^{(i)}\} )$ are rank 2 quasi-parabolic connections on $(C,D)$ with fixed trace connection $(L_0,\nabla_0)$. When $g=0$, we impose one more condition (for details, see the paragraph after the proof of Lemma [Lemma 26](#2023_7_10_12_15){reference-type="ref" reference="2023_7_10_12_15"} below). This moduli space also has a natural symplectic structure. The dimension of the moduli space $M_X(L_0,\nabla_0)_0$ is $2N_0$, where $N_0 := 3g-3+\deg(D)$. For $(E,\nabla , \{ l^{(i)}\} ) \in M_X(L_0,\nabla_0)_0$, we can also define apparent singularities (Section [4.2](#SubSect:ModuliSL2){reference-type="ref" reference="SubSect:ModuliSL2"} below). The apparent singularities give an element of $\mathbb{P}H^0(C, L_0\otimes \Omega_C^1(D))$. So we have a map $$\pi_{\mathrm{App}} \colon M_X(L_0,\nabla_0)_0 \longrightarrow \mathbb{P}H^0(C, L_0\otimes \Omega_C^1(D)).$$ For $(E,\nabla , \{ l^{(i)}\} ) \in M_X(L_0,\nabla_0)_0$, forgetting the connection $\nabla$ yields a quasi-parabolic bundle $(E, \{ l^{(i)}\} )$. By taking the extension class of the quasi-parabolic bundle $(E, \{ l^{(i)}\} )$, we have a map $$\pi_{\mathrm{Bun}} \colon M_X(L_0,\nabla_0)_0 \longrightarrow \mathbb{P}H^1(C, L_0^{-1}(-D)).$$ Here the extension class is described in Section [4.1](#SubSect:ModuliParaBunSL2){reference-type="ref" reference="SubSect:ModuliParaBunSL2"} below. We consider the product $$\pi_{\mathrm{App}} \times \pi_{\mathrm{Bun}} \colon M_X(L_0,\nabla_0)_0 \longrightarrow \mathbb{P}H^0(C, L_0\otimes \Omega_C^1(D)) \times \mathbb{P}H^1(C, L_0^{-1}(-D)).$$ This map has been studied by Loray--Saito--Simpson [@LSS], Loray--Saito [@LS], Fassarella--Loray [@FL], Fassarella--Loray--Muniz [@FLM], and Matsumoto [@Matsu]. Notice that $H^1(C, L_0^{-1}(-D))$ is isomorphic to the dual of $H^0(C, L_0\otimes \Omega_C^1(D))$. Note that $$\dim_{\mathbb{C}} \mathbb{P} H^0(C, {L_0} \otimes \Omega^1_C(D)) = \dim_{\mathbb{C}} \mathbb{P} H^1(C, L_0^{-1}(-D)) =N_0.$$ Let us introduce the homogeneous coordinates $\boldsymbol{a} = (a_0 :\cdots : a_{N_0})$ on $\mathbb{P} H^0(C, {L_0} \otimes \Omega^1_C(D))\cong \mathbb{P}_{\boldsymbol{a}}^{N_0}$ and the dual coordinates $\boldsymbol{b} = (b_0 :\cdots : b_{N_0})$ on $$\mathbb{P} H^1(C, L_0^{-1}(-D))\cong \mathbb{P} H^0(C, {L_0}\otimes \Omega^1_C(D))^{\vee} \cong \mathbb{P}_{\boldsymbol{b}}^{N_0}.$$ We may define a $1$-form $\eta$ on $\mathbb{P}_{\boldsymbol{a}}^{N_0} \times \mathbb{P}_{\boldsymbol{b}}^{N_0}$ by $$\eta = \left( \text{constant} \right) \cdot \frac{a_0 \, d b_0 + a_1 \, d b_1 + \cdots + a_{N_0} \, d b_{N_0} }{a_0b_0 + a_1b_1 + \cdots + a_{N_0}b_{N_0}}$$ (for details, see Section [4.4](#SubSect:SyplecticSL2){reference-type="ref" reference="SubSect:SyplecticSL2"}). The $2$-form $d \eta$ gives a symplectic structure on $\mathbb{P}_{\boldsymbol{a}}^{N_0} \times \mathbb{P}_{\boldsymbol{b}}^{N_0} \setminus \Sigma$.
Here we set $$\Sigma \colon (a_0b_0 + a_1b_1 + \cdots + a_{N_0}b_{N_0}=0) \subset \mathbb{P}_{\boldsymbol{a}}^{N_0} \times \mathbb{P}_{\boldsymbol{b}}^{N_0}.$$ The image of $M_X(L_0,\nabla_0)_0$ is contained in $\mathbb{P}_{\boldsymbol{a}}^{N_0} \times \mathbb{P}_{\boldsymbol{b}}^{N_0} \setminus \Sigma$ (for details, see Section [4.3](#SubSect:MapsSL2){reference-type="ref" reference="SubSect:MapsSL2"}). The second main theorem is the following:

**Theorem 2** (Theorem [Theorem 31](#2023_8_22_16_22){reference-type="ref" reference="2023_8_22_16_22"} below). *We assume that the fixed spectral data satisfies the generic condition [\[2023_8\_24_8\_47\]](#2023_8_24_8_47){reference-type="eqref" reference="2023_8_24_8_47"} below. The pull-back of the symplectic form $d\eta$ on $\mathbb{P}_{\boldsymbol{a}}^{N_0} \times \mathbb{P}_{\boldsymbol{b}}^{N_0}\setminus \Sigma$ under the map $\pi_{\mathrm{App}} \times \pi_{\mathrm{Bun}}$ coincides with the symplectic form on the moduli space $M_X(L_0,\nabla_0)_0$.*

## The organization of this paper

In Section [2](#sect:CompanionNF){reference-type="ref" reference="sect:CompanionNF"}, the apparent singularities for a generic rank 2 meromorphic connection are defined. After the definition of the apparent singularities, we will discuss the companion normal form of a generic rank 2 meromorphic connection. We will use this companion normal form when we introduce canonical coordinates. In Section [3](#Sect:SymplecticGL2andCano){reference-type="ref" reference="Sect:SymplecticGL2andCano"}, first, we will describe our moduli space of rank 2 meromorphic connections. Second, we will discuss the tangent spaces of the moduli space of rank 2 meromorphic connections. We will recall that the tangent space at a meromorphic connection is isomorphic to the first hypercohomology of the complex defined by the meromorphic connection. After that, we will describe a natural symplectic structure on the moduli space of rank 2 meromorphic connections. Section [3.3](#2023_7_4_13_59){reference-type="ref" reference="2023_7_4_13_59"} and Section [3.4](#2023_7_12_16_39){reference-type="ref" reference="2023_7_12_16_39"} are preliminaries for the proof of the first main theorem. In Section [3.5](#subsect:Canonical_Coor){reference-type="ref" reference="subsect:Canonical_Coor"}, we will give the map from a generic part of the moduli space to $\mathrm{Sym}^N(\boldsymbol{\Omega}(D,c_d))$ and will show the first main theorem. In Section [4](#sect:sympl_FixedDet){reference-type="ref" reference="sect:sympl_FixedDet"}, we will consider rank 2 meromorphic connections with fixed trace connection. First, to describe the bundle map $\pi_{\mathrm{Bun}}$, we recall the moduli space of stable quasi-parabolic bundles with fixed determinant. Second, we will describe our moduli space of rank 2 meromorphic connections with fixed trace connection. Third, we will describe the map $\pi_{\mathrm{App}}$ defined by considering the apparent singularities. In Section [4.4](#SubSect:SyplecticSL2){reference-type="ref" reference="SubSect:SyplecticSL2"}, we will recall a natural symplectic structure on the moduli space of rank 2 meromorphic connections with fixed trace connection, and will show the second main theorem.
In Section [5](#Sect:CompForElliptic){reference-type="ref" reference="Sect:CompForElliptic"}, we will apply the argument in Section [2](#sect:CompanionNF){reference-type="ref" reference="sect:CompanionNF"} and Section [3](#Sect:SymplecticGL2andCano){reference-type="ref" reference="Sect:SymplecticGL2andCano"} to the case of an elliptic curve with a divisor $D$ of length $2$. When $D$ is reduced, this amounts to two logarithmic singularities, otherwise to an irregular singularity. It is remarkable that, using our approach, these two cases can be studied in a completely parallel way. In Section [6](#Sec:Higgs){reference-type="ref" reference="Sec:Higgs"}, we will provide a method for obtaining canonical coordinates $\tilde{p}_j \in \boldsymbol{\Omega}(D, c_d)_{|q_j}$ for generic $(E, \nabla) \in M^0_X$ by introducing a section $s \in H^0(C, \det(E))$ and $\gamma \in H^0(C, \Omega^1_C(D))$. We will utilize an open set $U_{0} = C \setminus \{ s=0, \gamma=0 \}$ and the trivialization of $E_{|U_0}$ to define $\tilde{p}_j \in \Omega^1_C(D)_{|q_j}$. This method can also be used for constructing a meromorphic connection $\nabla_1\colon E \longrightarrow E \otimes \Omega^1_C(D(s))$ for a given $s \in H^0(C, \det(E))$, where $D(s)$ denotes the zero divisor of $s$. In Theorem [Theorem 41](#thm:birational){reference-type="ref" reference="thm:birational"}, we will provide an alternative proof of the birationality of $f_{\mathrm{App}}$ (cf. Proposition [Proposition 17](#prop:birational){reference-type="ref" reference="prop:birational"}) by utilizing the Higgs fields $\nabla - \nabla_1$ and the BNR correspondence [@BNR]. This approach may shed new light on the relationship between the canonical coordinates of the moduli spaces of connections and those of the moduli spaces of Higgs bundles (cf. [@SS]).

## Acknowledgments {#acknowledgments .unnumbered}

The authors would like to warmly thank Michi-aki Inaba and Takafumi Matsumoto for useful discussions. The first, third, and fourth authors would like to thank Frank Loray for his hospitality at IRMAR, Univ. Rennes.

# Companion normal form {#sect:CompanionNF}

Let $C$ be a compact Riemann surface of genus $g$ ($g\geq 0$), and let $D$ be an effective divisor on $C$. We assume $4g-3+n>0$, where $n=\deg(D)$. We consider a rank 2 meromorphic connection $$\label{2023_7_13_12_11} \nabla:E\longrightarrow E\otimes\Omega^1_C(D)$$ on $C$, where $\deg(E)=2g-1$. When $g=0$, Diarra--Loray have given companion normal forms of rank 2 meromorphic connections in [@DiarraLoray]. Using these companion normal forms, one may construct a universal family of rank 2 meromorphic connections over a generic part of the moduli space of rank 2 meromorphic connections. This universal family is useful for describing the isomonodromic deformations [@Kom]. The purpose of this section is to give companion normal forms of rank 2 meromorphic connections when $g\geq 0$. For this purpose, first, we will introduce the apparent singularities for (generic) rank 2 meromorphic connections.

## Apparent singularities {#SubSect:AppGL2}

First we assume that $\dim_{\mathbb{C}} H^0(C,E)=1$ for the rank 2 meromorphic connection [\[2023_7\_13_12_11\]](#2023_7_13_12_11){reference-type="eqref" reference="2023_7_13_12_11"}. This assumption holds when the underlying rank 2 vector bundle $E$, of degree $2g-1$, is generic.
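For the reader's convenience, here is a sketch of the dimension count behind this genericity. By the Riemann--Roch theorem, $$\chi(E) = \dim_{\mathbb{C}} H^0(C,E) - \dim_{\mathbb{C}} H^1(C,E) = \deg(E) + \operatorname{rk}(E)(1-g) = (2g-1) + 2(1-g) = 1,$$ so $\dim_{\mathbb{C}} H^0(C,E)=1$ exactly when $H^1(C,E)=0$, which is the expected behaviour for a generic rank 2 vector bundle of degree $2g-1$.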
For an element of $H^0(C,E)$, we define the sequence of $\mathbb C$-linear maps $$\label{2023_7_11_12_46} \varphi_\nabla \colon \mathcal O_C \longrightarrow E \xrightarrow{\ \nabla \ } E\otimes\Omega^1_C(D) \longrightarrow E/ \mathcal O_C \otimes\Omega^1_C(D) .$$ This composition $\varphi_\nabla$ is an $\mathcal O_C$-linear map. From now on we assume that $\varphi_\nabla \neq 0$. This assumption holds for every $(E,\nabla)$, provided that the eigenvalues of the residues are chosen generically (see Remark [Remark 4](#2023_7_13_16_34){reference-type="ref" reference="2023_7_13_16_34"} below). We call the global section in $H^0(C,E)$ in [\[2023_7\_11_12_46\]](#2023_7_11_12_46){reference-type="eqref" reference="2023_7_11_12_46"} the *cyclic vector*. Let us now define $E_0\subset E$ as the rank $2$ locally free subsheaf spanned by $\mathcal O_C$ and $$\operatorname{Im}\left\{\nabla|_{\mathcal O_C} \otimes \operatorname{Id}_{(\Omega^1_C(D))^{-1}} \colon (\Omega^1_C(D))^{-1} \to E \right\}.$$ This construction gives rise to a short exact sequence of coherent sheaves $$0 \longrightarrow \mathcal O_C \longrightarrow E_0 \longrightarrow (\Omega^1_C(D))^{-1} \longrightarrow 0.$$ We claim that this sequence splits, i.e. $$\label{eq:decomposition} E_0\cong \mathcal O_C\oplus (\Omega^1_C(D))^{-1}.$$ Indeed, equivalence classes of extensions of $(\Omega^1_C(D))^{-1}$ by $\mathcal O_C$ are classified by the group $$\operatorname{Ext}^{1} ((\Omega^1_C(D))^{-1} , \mathcal O_C) = \operatorname{Ext}^{1} (\mathcal O_C (-D) , \Omega^1_C) \cong H^0 (C, \mathcal O_C (-D) )^{\vee} = 0,$$ where we have used Grothendieck--Serre duality. We denote by $$\label{eq:phi_nabla} \phi_\nabla \colon E_0 \longrightarrow E.$$ the canonical inclusion morphism, and define the meromorphic connection $$\label{2023_7_13_12_39} \nabla_0 = \phi_\nabla^* (\nabla)$$ on $E_0$. We note that the polar divisor of $\nabla_0$ is $D+B$ where $$\label{eq:B} B=\mathrm{div}(\varphi_\nabla ).$$ We note that $$\label{eq:deg(B)} \deg(B)=4g-3+n.$$ From now on, moreover, we assume that $B$ is reduced, with support disjoint from $D$. In different terms, in view of [\[eq:deg(B)\]](#eq:deg(B)){reference-type="eqref" reference="eq:deg(B)"}, we have $$B=q_1+\cdots+q_{4g-3+n}$$ where $q_i \neq q_j$ once $i\neq j$ and $q_i \notin D$ for all $i$. **Definition 1**. *Assume that $\varphi_\nabla \neq 0$ and $\mathrm{div}(\varphi_\nabla )$ is reduced, with support disjoint from $D$. We call the points of the support $\{ q_1,\ldots,q_{4g-3+n}\}$ of $\mathrm{div}(\varphi_\nabla )$ the apparent singularities of $(E,\nabla)$.* ## Companion normal form {#SubSect:CNFGL2} The desired companion normal form is a normal form of $\nabla_0$ in [\[2023_7\_13_12_39\]](#2023_7_13_12_39){reference-type="eqref" reference="2023_7_13_12_39"}. So the companion normal form is given by normalization of $\nabla_0$ by applying automorphisms on $\mathcal O_C\oplus (\Omega^1_C(D))^{-1}$. 
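Before turning to this normalization, we record a sketch of one way to see the degree count [\[eq:deg(B)\]](#eq:deg(B)){reference-type="eqref" reference="eq:deg(B)"}. The inclusion $\phi_\nabla \colon E_0 \rightarrow E$ of [\[eq:phi_nabla\]](#eq:phi_nabla){reference-type="eqref" reference="eq:phi_nabla"} is an isomorphism away from $B$ and raises the degree by $\deg(B)$ (it acts as a positive elementary transformation at the points of $B$; cf. the discussion before Lemma [Lemma 7](#lem:independence){reference-type="ref" reference="lem:independence"} below). Using $\det(E_0)\cong (\Omega^1_C(D))^{-1}$ from the decomposition [\[eq:decomposition\]](#eq:decomposition){reference-type="eqref" reference="eq:decomposition"}, we get $$\deg(B) = \deg(E)-\deg(E_0) = (2g-1) + \deg\big(\Omega^1_C(D)\big) = (2g-1)+(2g-2+n) = 4g-3+n.$$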
To give the companion normal form, first, we describe a decomposition of $\nabla_0$ relative to [\[eq:decomposition\]](#eq:decomposition){reference-type="eqref" reference="eq:decomposition"}: $$\nabla_0=\begin{pmatrix}\alpha & \beta\\ \gamma & \delta \end{pmatrix}$$ where $$\left\{\begin{matrix} \alpha&:& \mathcal O_C\longrightarrow \Omega^1_C(D+B) \hfill (\text{connection})\\ \beta&:& (\Omega^1_C(D))^{-1}\longrightarrow \Omega^1_C(D+B) \hfill (\mathcal O_C\text{-linear})\\ \gamma&:& \mathcal O_C\longrightarrow \mathcal O_C(B) \hfill (\mathcal O_C\text{-linear})\\ \delta&:& (\Omega^1_C(D))^{-1}\longrightarrow (\Omega^1_C(D))^{-1}\otimes \Omega^1_C(D+B) \hskip1cm (\text{connection}) \end{matrix}\right.$$ This form is unique only up to pre-composition by an element of the automorphism group $\mathrm{Aut}(E_0)$ of $E_0$. Elements of $\mathrm{Aut}(E_0)$ are described as follows: $$\begin{pmatrix} \lambda_1 & F \\ 0& \lambda_2 \end{pmatrix},$$ where $\lambda_1, \lambda_2 \in \mathbb C^*$ and $F \in H^0(C, \Omega^1_C(D))$. It follows by construction that $\nabla_0$ admits no pole in restriction to $\mathcal O_C$ over the divisor $B$, so that actually we have $$\left\{\begin{matrix} \alpha&:& \mathcal O_C\longrightarrow \Omega^1_C(D) \hskip1cm (\text{connection})\\ \gamma&:& \mathcal O_C\longrightarrow \mathcal O_C \hfill (=\text{identity}) \end{matrix}\right.$$ The action of an automorphism of the form $$\label{eq:automorphism} \begin{pmatrix} 1 & F\\ 0 & 1 \end{pmatrix},\ \ \ F\in H^0(C, \Omega^1_C(D))$$ transforms $\alpha$ into $\alpha-F\gamma$ (without affecting $\gamma$). Therefore, there exists a unique choice $F$ such that $\alpha=\operatorname{d}$ is the trivial connection on $\mathcal O_C$. We thus get the unique companion normal form $$\label{eq:normal_form} \nabla_0=\begin{pmatrix}\operatorname{d} & \beta\\ 1 & \delta \end{pmatrix}.$$ Notice that the same companion normal form is obtained simply by taking the generator $\varphi_\nabla (1)$ for the second factor of [\[eq:decomposition\]](#eq:decomposition){reference-type="eqref" reference="eq:decomposition"}, and the action of the automorphism [\[eq:automorphism\]](#eq:automorphism){reference-type="eqref" reference="eq:automorphism"} in the above argument simply amounts to switching to this particular generator. ## Spectral data {#SubSect:SpectralData} Now we consider the polar part of the meromorphic connection [\[2023_7\_13_12_11\]](#2023_7_13_12_11){reference-type="eqref" reference="2023_7_13_12_11"} at each point of the support of $D$. We impose some conditions on the polar parts. To describe the conditions, we introduce the notion of irregular curves with residues. Let $\nu$ be a positive integer. We set $I := \{ 1,2,\ldots,\nu\}$. Let $\mathfrak{h}$ be the Cartan subalgebra $$\mathfrak{h}= \left\{ \begin{pmatrix} h_1 & 0 \\ 0 & h_2 \end{pmatrix} \ \middle| \ h_1,h_2 \in \mathbb{C} \right\}$$ of the Lie algebra $\mathfrak{gl}_2(\mathbb{C})$. Let $\mathfrak{h}_0$ be the regular locus of $\mathfrak{h}$. **Definition 2**. 
*We say $X=(C,D,\{ z_i \}_{i \in I} , \{\boldsymbol{\theta}_{i} \}_{i \in I}, \boldsymbol{\theta}_{\mathrm{res}})$ is an irregular curve with residues if* - *$C$ is a compact Riemann surface of genus $g$,* - *$D = \sum_{i\in I} m_i [t_i]$ is an effective divisor on $C$.* - *$z_i$ is a generator of the maximal ideal of $\mathcal{O}_{C,t_i}$,* - *$\boldsymbol{\theta}_{i} = (\theta_{i,-m_i} ,( \theta_{i,-m_i+1} ,\ldots, \theta_{i,-2}) ) \in \mathfrak{h}_0 \times \mathfrak{h}^{m_i -2}$, and* - *$\boldsymbol{\theta}_{\mathrm{res}} = (\theta_{1,-1}, \theta_{2,-1},\ldots, \theta_{\nu,-1})$, where $\theta_{i,-1} \in \mathfrak{h}$, such that $\sum_{i=1}^{\nu} \mathrm{tr}(\theta_{i,-1} ) =-(2g-1)$.* *We set $$\theta_{i,-1} = \begin{pmatrix} \theta_{i,-1}^{-} & 0 \\ 0 & \theta_{i,-1}^+ \end{pmatrix} \quad \text{for each $i \in I$}.$$ We assume that $\sum_{i=1}^{\nu} \theta^{\pm}_{i,-1} \not\in \mathbb{Z}$ whatever are the signs $\pm$, and, if $m_i =1$, then $\theta^+_{i,-1} - \theta^-_{i,-1} \not\in \mathbb{Z}$.* For an irregular curve with residues $X$, we set $$\label{2023_4_10_19_32} \omega_i(X ) := \theta_{i,-m_i} \frac{\operatorname{d}\!z_i}{z_i^{m_i}} + \theta_{i,-m_i+1} \frac{\operatorname{d}\!z_i}{z_i^{m_i-1}} + \cdots+ \theta_{i,-2}\frac{\operatorname{d}\!z_i}{z_i^{2}} + \theta_{i,-1}\frac{\operatorname{d}\!z_i}{z_i}$$ and $\mathcal{O}_{m_i[t_i]} := \mathcal{O}_{C,t_i}/(z_i^{m_i})$. For an irregular curve with residues $X$ and a meromorphic connection $(E,\nabla)$ in [\[2023_7\_13_12_11\]](#2023_7_13_12_11){reference-type="eqref" reference="2023_7_13_12_11"}, we set $E|_{m_i [t_i]} := E \otimes \mathcal{O}_{m_i[t_i]}$. Let $$\nabla|_{m_i[t_i]} \colon E|_{m_i [t_i]} \longrightarrow E|_{m_i [t_i]} \otimes \Omega^1_C(D)$$ be the morphism induced by $\nabla$. **Definition 3**. *We call $(E,\nabla )$ a rank $2$ meromorphic connection over an irregular curve with residues $X$ if* - *$E$ is a rank $2$ vector bundle of degree $2g-1$ on $C$,* - *$\nabla \colon E \rightarrow E \otimes \Omega^1_{C}(D)$ is a connection, and* - *there exists an isomorphism $\varphi_{m_i[t_i]} \colon E|_{m_i [t_i]} \rightarrow \mathcal{O}^{\oplus 2}_{m_i[t_i]}$ for each $i \in I$ such that $$( \varphi_{m_i[t_i]}\otimes 1) \circ \nabla|_{m_i[t_i]} \circ \varphi^{-1}_{m_i[t_i]} = \operatorname{d} +\, \omega_i(X ).$$ Here $\omega_i(X )$ is defined in [\[2023_4\_10_19_32\]](#2023_4_10_19_32){reference-type="eqref" reference="2023_4_10_19_32"}.* *We call $\omega_i(X )$ the spectral data of $(E,\nabla )$ and call the submodule $\varphi_{m_i[t_i]}^{-1}(\mathcal{O}_{m_i[t_i]} \oplus 0)$ of $E|_{m_i [t_i]}$ the quasi-parabolic structure of $(E,\nabla )$ at $t_i$.* From now on, by a connection we will mean a rank $2$ meromorphic connection over a fixed irregular curve with residues $X$. So we impose the condition (iii) of Definition [Definition 3](#2023_7_13_16_05){reference-type="ref" reference="2023_7_13_16_05"} on the polar parts of the meromorphic connection $\nabla$ in [\[2023_7\_13_12_11\]](#2023_7_13_12_11){reference-type="eqref" reference="2023_7_13_12_11"} at the points of the support of $D$. This condition means that the polar parts of $\nabla$ at $t_i$ are diagonalizable with eigenvalues equal to the diagonal entries of $\omega_i(X )$ for $i=1,2,\ldots,n$. **Remark 4**. *In Definition [Definition 3](#2023_7_13_16_05){reference-type="ref" reference="2023_7_13_16_05"}, we impose the condition that $\sum_{i=1}^{\nu} \theta^{\pm}_{i,-1} \not\in \mathbb{Z}$ whatever are the signs $\pm$. 
By this assumption and the argument as in [@KLS Proposition 6], we have that $(E,\nabla)$ is irreducible. This simplifies several arguments. For example, $\varphi_\nabla = 0$ if and only if the subsheaf $\mathcal O_C$ of $E$ is a proper $\nabla$-invariant subbundle. So we have $\varphi_{\nabla} \neq 0$. Moreover, $(E,\nabla)$ is automatically stable (see Section [3.1](#subsect_moduli){reference-type="ref" reference="subsect_moduli"} below).*

## The polar parts of $\delta$ {#subsec:delta}

We fix an irregular curve with residues $X$. Let $(E,\nabla )$ be a rank $2$ meromorphic connection over $X$ and let $\nabla_0$ be the companion normal form for $(E,\nabla )$. We consider the $(2,2)$-entry $\delta$ of this companion normal form $\nabla_0$. It immediately follows from [\[eq:normal_form\]](#eq:normal_form){reference-type="eqref" reference="eq:normal_form"} that the connection $\delta$ coincides with the trace connection $\mathop{\mathrm{tr}}(\nabla_0)$ on $\det(E_0)=(\Omega^1_C(D))^{-1}$. It is further related to the trace connection $\mathop{\mathrm{tr}}(\nabla)$ by $$\delta = \mathop{\mathrm{tr}}(\nabla_0)=\mathop{\mathrm{tr}}(\nabla)+\frac{\operatorname{d}\!\varphi_\nabla}{\varphi_\nabla}.$$

**Lemma 5**. 

1. *The polar part of $\delta$ over $D$ is determined by the spectral data;*

2. *The polar part of $\delta$ over $B$ is logarithmic with residue $+1$;*

3. *$\delta$ is determined by the irregular curve with residues $X$ up to adding a holomorphic $1$-form of $C$.*

*Proof.* The polar part of $\delta$ at $t_i$ is equal to $\operatorname{tr} (\omega_i(X) )$, showing the first assertion. In view of our assumption $q_{j_1}\neq q_{j_2}$ for ${j_1}\neq {j_2}$, the second assertion is classical. Let now $\delta, \delta'$ be the $(2,2)$-entries of companion normal forms $\nabla_0, \nabla_0'$ of connections $\nabla, \nabla'$ satisfying the conditions of Definition [Definition 3](#2023_7_13_16_05){reference-type="ref" reference="2023_7_13_16_05"}. By the first two assertions, $\delta - \delta'$ is then a global holomorphic $1$-form of $C$. ◻

As a consequence of the lemma and since $\operatorname{dim}_{\mathbb{C}} H^0(C, \Omega_C^1) = g$, the possible values of $\delta$ represent $g$ free parameters for a meromorphic connection over $X$.

## The polar parts of $\beta$ {#subsec:beta}

Next we consider the $(1,2)$-entry $\beta$ of the companion normal form $\nabla_0$ of a meromorphic connection $\nabla$ over the irregular curve with residues $X$. By the condition $\gamma = 1$ in [\[eq:normal_form\]](#eq:normal_form){reference-type="eqref" reference="eq:normal_form"}, $\beta$ accounts (up to sign) for the determinant of the connection matrix of $\nabla_0$, that is, for the constant term of its characteristic polynomial. By Definition [Definition 3](#2023_7_13_16_05){reference-type="ref" reference="2023_7_13_16_05"}, the eigenvalues of the connection matrix of $\nabla$ are meromorphic differentials with a pole of order at most $m_i$ at $t_i$. The same condition then holds for $\nabla_0$ too, because it only differs from $\nabla$ by elementary modifications at points $q_j \neq t_i$. As the determinant of a $2\times 2$ matrix is a quadratic expression in the eigenvalues, we see that $\beta$ must be a quadratic differential with poles of order at most $2m_i$ at $t_i$. Over $B$, a similar argument shows that $\beta$ has poles of order at most $2$. Let us fix local coordinate charts $z_i$ centered at the poles $t_i$.
One may then expand $\beta$ into a Laurent series: $$\beta = \left( \beta_{i, -2m_i} z_i^{-2m_i} + \cdots + \beta_{i, -2} z_i^{-2} + O(z_i^{-1}) \right) (\operatorname{d}\! z_i)^{\otimes 2}.$$ Notice that for given $\beta$ the coefficient $\beta_{i, -2}$ is independent of the chosen coordinate chart $z_i$, whereas the other coefficients depend on $z_i$. We also fix local coordinate charts $z_j$ centered at the apparent singularities $q_j$, and have a similar expansion $$\beta = \left( \beta_{j, -2} z_j^{-2} + \beta_{j, -1} z_j^{-1} + O(z_j^0) \right) (\operatorname{d}\! z_j)^{\otimes 2}.$$ Analogously to Lemma [Lemma 5](#lem:delta){reference-type="ref" reference="lem:delta"}, we therefore find

**Lemma 6**. 

1. *The coefficients $\beta_{i, -2m_i}, \ldots , \beta_{i, -2}$ are uniquely determined by the irregular curve with residues $X$ (and the holomorphic coordinate $z_i$);*

2. *We have $\beta_{j, -2} =0$.*

3. *$\beta$ is determined by the irregular curve with residues $X$ up to adding a section of $(\Omega^1_C)^{\otimes2}(D)$.*

*Proof.* The coefficients $\beta_{i, -2m_i}, \ldots , \beta_{i, -2}$ all admit homogeneous quadratic expressions in terms of the eigenvalues of $\boldsymbol{\theta}_{i}$ and $\boldsymbol{\theta}_{\mathrm{res}}$; therefore they are determined by these data. Conversely, the coefficients $\beta_{i, -2m_i}, \ldots , \beta_{i, -2}$ determine the polar part of the eigenvalues. It is classical that for an apparent singularity of $\nabla_0$, one of the two eigenvalues of the residue must vanish. This implies that for every $q\in B$ the product of the eigenvalues of $\operatorname{res}_q(\nabla_0 )$ vanishes. As this latter product gives the leading (second-order) term $\beta_{j, -2}$, we get the second assertion. The last part follows from the first two as in Lemma [Lemma 5](#lem:delta){reference-type="ref" reference="lem:delta"}. ◻

As a consequence of the lemma and since $\operatorname{dim}_{\mathbb{C}} H^0(C, (\Omega^1_C)^{\otimes2}(D)) = 3g-3+n$, the possible values of $\beta$ represent $3g-3+n$ free parameters for a connection $\nabla$ over $X$ having apparent singularities at a fixed reduced divisor $B$ of length $N$. From now on, we set $\beta_{j, -1} = \zeta_j$, so that we have the expansion $$\label{eq:expansion_beta} \beta = \zeta_j \frac{(\operatorname{d}\! z_j)^{\otimes 2}}{z_j} + \beta^{(j)}$$ for some local holomorphic quadratic differential $\beta^{(j)}$. Notice that $\zeta_j$ depends on the coordinate $z_j$; however, the element $\zeta_j \operatorname{d}\! z_j \in \Omega_C^1\vert_{q_j}$ of the fiber of the holomorphic cotangent (or canonical) bundle over $q_j$ does not. As a matter of fact, since $\beta$ belongs to an affine space modelled over $H^0(C, (\Omega^1_C)^{\otimes2}(D))$ (and in order to be consistent with the decomposition [\[eq:decomposition\]](#eq:decomposition){reference-type="eqref" reference="eq:decomposition"}), it is even more rigorous to consider the $\zeta_j \operatorname{d}\! z_j$ as elements of the fibers $\Omega^1_C (D)\vert_{q_j}$, using the inclusion $\Omega^1_C \subset \Omega^1_C(D)$. In the sequel we will consider them to be such elements. It will turn out that these quantities $\zeta_j \operatorname{d}\! z_j$ are closely related to accessory parameters.

## Determination of $\beta$ and $\delta$ in terms of $\zeta$

Fix a reduced divisor $B$ of length $N$ on $C$ with support disjoint from $D$.
In Subsections [2.4](#subsec:delta){reference-type="ref" reference="subsec:delta"}, [2.5](#subsec:beta){reference-type="ref" reference="subsec:beta"} we have found that (normal forms of) meromorphic connections with residue on $X$ that have apparent singularities at $B$ can be described by an affine space of complex dimension $g + 3g-3+n = N$ ($g$ coming from the choice of $\delta$ and $3g-3+n$ from the choice of $\beta$). In this section, we provide a description of such connections in terms of analogs of separated variables. Namely, it will turn out that generically the data of $\delta, \beta$ is equivalent to the $N$-tuple $(\zeta_1 \operatorname{d}\! z_1, \ldots, \zeta_N \operatorname{d}\! z_N)$. The fact that singular points are apparent over $B$ imposes further constraints on $\beta$ and $\delta$. This constraint gives $1$ linear condition for each point $q_j$ and we can expect that these constraints fix $\beta$ and $\delta$ uniquely in terms of the data $(q_j,\zeta_j \operatorname{d}\! z_j)_{j=1}^N$. In fact, this is true for the genus $g=0$ case (see [@DiarraLoray]) and we will show in Lemma [Lemma 7](#lem:independence){reference-type="ref" reference="lem:independence"} that this is also true for *generic* choices of $(q_j,\zeta_j \operatorname{d}\! z_j)_{j=1}^N$ if $g>0$. In fact, the data of $\zeta_j \operatorname{d}\! z_j$ can be interpreted as a certain quasi-parabolic structure over $B$. Indeed, at a point $q_j$ and with respect to the decomposition [\[eq:decomposition\]](#eq:decomposition){reference-type="eqref" reference="eq:decomposition"}, the residue of $\nabla_0$ reads as $$\mathrm{res}_{q_j}\nabla_0=\begin{pmatrix} 0 & \zeta_j \operatorname{d}\! z_j\\ 0 & 1 \end{pmatrix} .$$ So, the vector $\begin{pmatrix} \zeta_j \operatorname{d}\! z_j\\ 1 \end{pmatrix}$ is an eigenvector with respect to eigenvalue $1$ and the map $\phi_\nabla$ (see [\[eq:phi_nabla\]](#eq:phi_nabla){reference-type="eqref" reference="eq:phi_nabla"}) is just the positive elementary transformation with respect to these parabolic directions at all points $q_j$. In summary, the data of all values $\zeta_j \operatorname{d}\! z_j$ is equivalent to the data of a quasi-parabolic structure of $E_0$ over $B$ (i.e., a line in the fiber of $E_0$ over each $q_j$) distinct from the destabilizing subbundle $\mathcal O_C\subset E_0$ for every $j$. Let us denote by $\boldsymbol{\Omega}(D)$ the total space of the line bundle $\Omega^1_C(D)$. **Lemma 7**. *For generic data $(q_j,\zeta_j\operatorname{d}\! z_j)_j\in \mathrm{Sym}^{4g-3+n}(\boldsymbol{\Omega}(D))$ there exist unique $\beta$ and $\delta$ as above such that the corresponding $\nabla_0$ has apparent singular points at all the points $q_j$ ($1\leq j \leq N:= 4g-3+n$), and such that the Laurent expansion [\[eq:expansion_beta\]](#eq:expansion_beta){reference-type="eqref" reference="eq:expansion_beta"} is fulfilled.* *Proof.* Let us consider $(q_j,\zeta_j\operatorname{d}\! z_j)_j$ such that $q_j$'s are pair-wise distinct, and do not intersect the support of $D$. Given one point $(q_j,\zeta_j\operatorname{d}\! z_j)$, we can diagonalize the residue $\mathrm{res}_{q_i}\nabla_0$ by conjugating by a triangular matrix $$\label{diag_by_conj_by_triangular} \begin{pmatrix} 1 & \zeta_j \operatorname{d}\! z_j\\ 0& 1 \end{pmatrix}^{-1} \begin{pmatrix} 0 & \beta\\ 1 & \delta \end{pmatrix} \begin{pmatrix} 1 & \zeta_j\operatorname{d}\! z_j\\ 0 & 1 \end{pmatrix} +\begin{pmatrix} 1 & \zeta_j\operatorname{d}\! 
z_j\\ 0& 1 \end{pmatrix}^{-1} d\begin{pmatrix} 1 & \zeta_j\operatorname{d}\! z_j\\ 0 & 1 \end{pmatrix}$$ $$=\begin{pmatrix} - \zeta_j \operatorname{d}\! z_j & \beta-\zeta_j \delta \otimes \operatorname{d}\! z_j -\zeta_j^2 \operatorname{d}\! z_j^{\otimes 2} \\ 1 & \delta+\zeta_j\operatorname{d}\! z_j \end{pmatrix} =\begin{pmatrix} 0 & 0\\ 0 & \frac{dz_j}{z_j} \end{pmatrix}+\text{holomorphic}$$ where $z_j$ stands for a local coordinate at $q_j$. Then the elementary transformation $\phi_\nabla$ is locally equivalent to the conjugacy by $\begin{pmatrix} 1 & 0\\ 0 & z_j^{-1} \end{pmatrix}$ yielding $$\label{conn_matrix_at_qj} \begin{pmatrix} - \zeta_j \operatorname{d}\! z_j & \frac{\beta- \zeta_j\delta\otimes \operatorname{d}\! z_j-\zeta_j^2 \operatorname{d}\! z_j^{\otimes 2}}{z_j} \\ z_j & \delta+\zeta_j\operatorname{d}\! z_j-\frac{dz_j}{z_j} \end{pmatrix}.$$ The apparent point condition is therefore equivalent to saying that $\beta-\zeta_j\delta\otimes \operatorname{d}\! z_j -\zeta_j^2\operatorname{d}\! z_j^{\otimes 2}$ is (holomorphic and) **vanishing** at $q_j$. This condition is linear on $\beta$ and $\delta$ and rewrites $$\label{eq:apparent} \underbrace{\beta-\zeta_j\delta \otimes \operatorname{d}\! z_j}_\text{holomorphic}\vert_{q_j} = \zeta_j^2\operatorname{d}\! z_j^{\otimes 2}|_{q_j},$$ where the right hand side does not involve $\beta$ and $\delta$. If we assume that $(q_1, \ldots, q_N)$ lies in the image of the map $\operatorname{App}$ (see [\[eq:App\]](#eq:App){reference-type="eqref" reference="eq:App"}), then the normal form of any $(E,\nabla )$ in the preimage produces a solution $(\delta_0, \beta_0)$. Fixing such solutions, by Lemmas [Lemma 5](#lem:delta){reference-type="ref" reference="lem:delta"}, [Lemma 6](#2023_7_11_12_53){reference-type="ref" reference="2023_7_11_12_53"} we may rewrite $$\left\{\begin{matrix} \beta&=& \beta_0+b_1\nu_1+\cdots+b_{N-g}\nu_{N-g} \hfill \\ \delta&=& \delta_0+d_1\omega_1+\cdots+d_g\omega_g \end{matrix}\right.$$ where $(\omega_l)_{l=1}^g$, $(\nu_k)_{k=1}^{N-g}$ are respective bases of $H^0(C,\Omega^1_C)$ and $H^0(C,(\Omega^1_C)^{\otimes2}(D))$. Using these expressions, the constraint that $q_j$ is an apparent singularity can be rewritten as a linear system consisting of $N$ equations in the $N$ variables $b_k$, $d_l$. The condition to uniquely determine $\beta$ and $\delta$ in terms of the data $(q_j,\zeta_j \operatorname{d}\! z_j)$ is that the following determinant does not vanish $$\label{eq:determinant} \det\begin{pmatrix} \nu_1(q_1)& \cdots& \nu_{N-g}(q_1) & \zeta_1\operatorname{d}\! z_1 \omega_1(q_1)&\cdots& \zeta_1 \operatorname{d}\! z_1 \omega_g(q_1) \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ \nu_1(q_{N})& \cdots& \nu_{N-g}(q_{N}) & \zeta_{N}\operatorname{d}\! z_N \omega_1(q_{N})&\cdots& \zeta_{N}\operatorname{d}\! z_N \omega_g(q_{N}) \end{pmatrix}$$ Of course, it is sufficient for our purpose to check that we can find some $(q_j,\zeta_j \operatorname{d}\! z_j)$'s such that this determinant does not vanish, so that it will be generically non vanishing. If we set $\zeta_1=\cdots=\zeta_{N-g}=0$, then the matrix has a zero block of dimension $(N-g)\times g$ in the top right corner, and the determinant factors as $$\zeta_{N-g+1}\operatorname{d}\! z_{N-g+1} \cdots \zeta_{N}\operatorname{d}\! 
z_N\cdot \det\begin{pmatrix} \nu_1(q_1)& \cdots& \nu_{N-g}(q_1) \\ \vdots & \ddots & \vdots \\ \nu_1(q_{N-g})& \cdots& \nu_{N-g}(q_{N-g}) \end{pmatrix} \cdot \det\begin{pmatrix} \omega_1(\tilde q_{1})&\cdots& \omega_g(\tilde q_1) \\ \vdots & \ddots & \vdots \\ \omega_1(\tilde q_g)&\cdots& \omega_g(\tilde q_g) \end{pmatrix}$$ where $\tilde q_j=q_{j+N-g}$. After setting $\zeta_{N-g+1}=\cdots=\zeta_{N}=1$, it is enough to find $q_j$'s such that the two smaller determinants are nonzero. To conclude the proof, let us denote by $L$ either of the two line bundles $\Omega^1_C$ or $(\Omega^1_C)^{\otimes2}(D)$, and by $\mu_1,\ldots,\mu_{N'}$ a corresponding basis of $H^0(C,L)$. Then we want to prove that the image of the curve under the evaluation map $$C\stackrel{\text{ev}}{\longrightarrow} \mathbb P^{N'-1}\ ;\ q\mapsto (\mu_1(q):\ldots:\mu_{N'}(q))$$ is not contained in any hyperplane, i.e. that we can find $q_1,\ldots,q_{N'}\in C$ whose images span $\mathbb P^{N'-1}$. But this is true: otherwise, we would have a linear relation between $\mu_1,\ldots,\mu_{N'}$, contradicting the fact that they form a basis. ◻

**Remark 8**. *In the previous proof, the locus of $q_j$'s for which $\det(\omega_i(\tilde q_j))_{i,j}$ vanishes corresponds to the Brill--Noether locus for the divisor $\tilde q_1+\cdots+\tilde q_g$.*

**Lemma 9**. *When $g=0$, any data $(q_j,\zeta_j\operatorname{d}\! z_j)_j\in \mathrm{Sym}^{n-3}(\boldsymbol{\Omega}(D))$ gives rise to unique $\beta$ and $\delta$ such that the corresponding $\nabla_0$ has apparent singular points at all $q_j$'s. However, for $g>0$, there always exist data $(q_j,\zeta_j\operatorname{d}\! z_j)_j$ such that the determinant ([\[eq:determinant\]](#eq:determinant){reference-type="ref" reference="eq:determinant"}) vanishes.*

*Proof.* When $g=0$, this directly follows from [@DiarraLoray] (a consequence of Lagrange interpolation). When $g>0$, fix generic $q_j$'s and let $\omega\in H^0(C,\Omega^1_C(D))$. If we set $\zeta_j:=\omega(q_j)$, then the last column of ([\[eq:determinant\]](#eq:determinant){reference-type="ref" reference="eq:determinant"}) is just the evaluation of the section $\omega\otimes\omega_g\in H^0(C,(\Omega^1_C)^{\otimes2}(D))$ at $q_1,\cdots,q_{4g-3+n}$ and is therefore a linear combination of the first $3g-3+n$ columns. ◻

# Symplectic structure and canonical coordinates {#Sect:SymplecticGL2andCano}

We fix an irregular curve with residues $X=(C,D,\{ z_i \}_{i \in I} , \{\boldsymbol{\theta}_{i} \}_{i \in I}, \boldsymbol{\theta}_{\mathrm{res}})$. As usual, we use the notation $N:= 4g+n-3$, where $g$ is the genus of $C$ and $n = \deg (D)$. We will consider the moduli space $M_{X}$ of rank 2 meromorphic connections over $X$. This moduli space is constructed in [@IS Theorem 2.1] and carries a natural symplectic structure described in [@IS Proposition 4.1]. The purpose of this section is to give canonical coordinates on an open subset of $M_{X}$ with respect to this symplectic structure. First we describe the moduli space $M_{X}$.

## Moduli spaces {#subsect_moduli}

Let $(E,\nabla)$ be a rank 2 meromorphic connection over $X$. Then the subsheaf $$l^{(i)} := \varphi_{m_i[t_i]}^{-1}(\mathcal{O}_{m_i[t_i]} \oplus 0) \subset E|_{m_i[t_i]}$$ equips $(E,\nabla)$ with a canonical quasi-parabolic structure at each $t_i$. So we may consider $(E,\nabla)$ as a *quasi-parabolic connection* $(E,\nabla , \{l^{(i)}\} )$ as defined in [@Ina Definition 1.1] and [@IS Definition 2.1].
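For orientation (this is only a reformulation of condition (iii) in Definition [Definition 3](#2023_7_13_16_05){reference-type="ref" reference="2023_7_13_16_05"}, recorded here for convenience): when $m_i=1$, the quasi-parabolic structure is simply an eigenline of the residue, $$l^{(i)} \subset E|_{t_i}, \qquad \mathrm{res}_{t_i}(\nabla)\big|_{l^{(i)}} = \theta^{-}_{i,-1}\cdot \mathrm{id}_{l^{(i)}},$$ and this eigenline is well defined since $\theta^{+}_{i,-1} \neq \theta^{-}_{i,-1}$ by the assumption made in the definition of an irregular curve with residues.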
A stability condition for quasi-parabolic connections is introduced in [@Ina Definition 2.1] and [@IS Definition 2.2]. The moduli space of stable quasi-parabolic connections is constructed in [@Ina Theorem 2.1] and [@IS Theorem 2.1]. In our situation, every rank $2$ meromorphic connection over $X$ is irreducible (see Remark [Remark 4](#2023_7_13_16_34){reference-type="ref" reference="2023_7_13_16_34"}). So our objects are automatically stable, and we omit the stability condition for quasi-parabolic connections. Let $M_{X}$ be the moduli space of rank 2 meromorphic connections over the irregular curve with residues $X$. If $(E,\nabla ) \in M_{X}$ satisfies $\dim_{\mathbb{C}} H^0(C,E)=1$, then we have an $\mathcal{O}_C$-morphism $\varphi_\nabla$ as in [\[2023_7\_11_12_46\]](#2023_7_11_12_46){reference-type="eqref" reference="2023_7_11_12_46"}, which is unique up to a nonzero constant multiple. The $\mathcal{O}_C$-morphism $\varphi_\nabla$ is nonzero, since $(E,\nabla )$ is irreducible. So we may define the divisor $\mathrm{div}(\varphi_\nabla )$ in [\[eq:B\]](#eq:B){reference-type="eqref" reference="eq:B"} for $(E,\nabla)$. We set $$M_{X}^0 := \left\{ (E,\nabla ) \in M_X \ \middle| \ \begin{array}{l} \text{$\dim_{\mathbb{C}} H^0(C,E) =1$,}\\ \text{$\mathrm{div}(\varphi_\nabla )$ is reduced, and} \\ \text{$\mathrm{div}(\varphi_\nabla )$ has support disjoint from $D$} \end{array} \right\}.$$ Next we recall the natural symplectic structure on $M_X$.

## Symplectic structure {#subsect:symplecticGL2}

We will describe the natural symplectic structure on $M_X$ via Čech cohomology. It is defined in [@Ina Proposition 7.2] and [@IS Proposition 4.1], and is an analog of the symplectic form on the moduli space of stable Higgs bundles in [@Bott]. This description of the symplectic structure is useful for comparing it with the Goldman symplectic structure on the character variety via the Riemann--Hilbert map (see, for example, [@Ina the proof of Proposition 7.3] and [@Bis Theorem 3.2]). Moreover, this description of the symplectic structure is also useful for describing the isomonodromic deformations (see, for example, [@BHH Proposition 4.3], [@BHH1 Proposition 4.4], and [@Kom0 Proposition 3.8]). First we recall the description of the tangent space of $M_{X}$ at $(E,\nabla ) \in M_{X}$ in terms of the hypercohomology of a certain complex ([@Ina the proof of Theorem 2.1] and [@IS the proof of Proposition 4.1]). We consider $(E,\nabla )$ as a quasi-parabolic connection $(E,\nabla , \{l^{(i)}\} )$. We define a complex ${\mathcal F}^{\bullet}$ for $(E,\nabla , \{l^{(i)}\} )$ by $$\label{2023_7_4_14_18} \begin{aligned} &{\mathcal F}^0 := \left\{ s \in {\mathcal E}nd (E) \ \middle| \ s |_{m_i t_i} (l^{(i)}) \subset l^{(i)} \text{ for any $i$} \right\} \\ &{\mathcal F}^1 := \left\{ s \in {\mathcal E}nd (E)\otimes \Omega^1_{C}(D) \ \middle| \ s|_{m_i t_i} (l^{(i)}) \subset l^{(i)} \otimes \Omega^1_C \text{ for any $i$} \right\} \\ &\nabla_{{\mathcal F}^{\bullet}} \colon {\mathcal F}^0 \longrightarrow{\mathcal F}^1; \quad \nabla_{{\mathcal F}^{\bullet}} (s) = \nabla \circ s - s \circ \nabla. \end{aligned}$$ Then we have an isomorphism between the tangent space $T_{(E,\nabla , \{l^{(i)}\} )} M_{X}$ and ${\mathbf H}^1(\mathcal{F}^{\bullet})$. Now we recall this isomorphism.
We take an analytic (or affine) open covering $C = \bigcup_{\alpha} U_{\alpha}$ such that $E|_{U_{\alpha}} \cong \mathcal{O}^{\oplus 2}_{U_{\alpha}}$ for any $\alpha$, $\sharp\{ i \mid t_i \cap U_{\alpha} \neq \emptyset \} \le 1$ for any $\alpha$ and $\sharp\{ \alpha \mid t_i \cap U_{\alpha} \neq \emptyset \} \le 1$ for any $i$. Take a tangent vector $v \in T_{(E,\nabla , \{l^{(i)}\} )} M_{X}$. The field $v$ corresponds to an infinitesimal deformation $(E_{\epsilon},\nabla_{\epsilon}, \{ l_{\epsilon}^{(i)} \})$ of $(E,\nabla , \{l^{(i)}\} )$ over $C \times \mathrm{Spec}\,\mathbb{C}[\epsilon]$ such that $(E_{\epsilon},\nabla_{\epsilon}, \{ l_{\epsilon}^{(i)} \}) \otimes \mathbb{C}[\epsilon]/(\epsilon) \cong ( E, \nabla, \{ l^{(i)} \})$, where $\mathbb{C}[\epsilon]= \mathbb{C}[t]/(t^2)$. There is an isomorphism $$\label{equation isom verphi 1} \varphi_{\alpha} \colon E_{\epsilon}|_{U_{\alpha}\times \mathrm{Spec}\, \mathbb{C}[\epsilon] } \xrightarrow{\sim} \mathcal{O}^{\oplus 2}_{U_{\alpha}\times \mathrm{Spec}\, \mathbb{C}[\epsilon]} \xrightarrow{\sim} E|_{U_{\alpha}} \otimes \mathbb{C}[\epsilon]$$ such that $\varphi_{\alpha}\otimes \mathbb{C}[\epsilon]/(\epsilon ) \colon E_{\epsilon}\otimes \mathbb{C}[\epsilon]/(\epsilon)|_{U_{\alpha}} \xrightarrow{\sim} E|_{U_{\alpha}}\otimes \mathbb{C}[\epsilon]/(\epsilon) =E|_{U_{\alpha}}$ is the given isomorphism and that $\varphi_{\alpha}|_{t_i\times \mathop{\rm Spec}\nolimits\mathbb{C}[\epsilon]} (l_{\epsilon}^{(i)}) = l^{(i)}|_{U_{\alpha} \times \mathop{\rm Spec}\nolimits\mathbb{C}[\epsilon]}$ if $t_i \cap U_{\alpha} \neq \emptyset$. We put $$\begin{aligned} u_{\alpha\beta} :=&\ \varphi_{\alpha} \circ \varphi_{\beta}^{-1} - \mathrm{id}_{E|_{U_{\alpha\beta}\times \mathop{\rm Spec}\nolimits\mathbb{C}[\epsilon]}},\\ v_{\alpha} :=&\ (\varphi_{\alpha}\otimes \mathrm{id}) \circ \nabla_{\epsilon} |_{U_{\alpha}\times \mathop{\rm Spec}\nolimits\mathbb{C}[\epsilon]} \circ \varphi^{-1}_{\alpha} - \nabla|_{U_{\alpha}\times \mathop{\rm Spec}\nolimits\mathbb{C}[\epsilon]}. \end{aligned}$$ Then $\{ u_{\alpha\beta} \} \in C^1 ((\epsilon) \otimes {\mathcal F}^0)$, $\{ v_{\alpha} \} \in C^0((\epsilon) \otimes {\mathcal F}^1)$ and we have the cocycle conditions $$u_{\beta \gamma}-u_{\alpha \gamma} +u_{\alpha\beta} = 0 \quad \text{and} \quad \nabla\circ u_{\alpha\beta}-u_{\alpha\beta} \circ \nabla = v_{\beta}-v_{\alpha}.$$ So $[(\{ u_{\alpha\beta} \},\{ v_{\alpha} \} )]$ determines an element of ${\mathbf H}^1(\mathcal{F}^{\bullet})$. This correspondence gives an isomorphism between the tangent space $T_{(E,\nabla , \{l^{(i)}\} )} M_{X}$ and ${\mathbf H}^1(\mathcal{F}^{\bullet})$. We define a pairing $$\label{2020.11.7.15.48} \begin{aligned} {\mathbf H}^1( {\mathcal F}^{\bullet}) \otimes {\mathbf H}^1({\mathcal F}^{\bullet}) &\longrightarrow{\mathbf H}^2(\mathcal{O}_C \xrightarrow{d}\Omega_{C}^{1}) \cong \mathbb{C}\\ [(\{ u_{\alpha\beta} \}, \{ v_{\alpha} \})]\otimes[(\{ u_{\alpha\beta}' \}, \{ v_{\alpha}' \} )] &\longmapsto [ (\{ \mathrm{tr}( u_{\alpha\beta} \circ u_{\beta\gamma}') \}, -\{ \mathrm{tr} (u_{\alpha\beta} \circ v_{\beta}') - \mathrm{tr} (v_{\alpha} \circ u'_{\alpha\beta}) \} )], \end{aligned}$$ considered in Čech cohomology with respect to an open covering $\{ U_{\alpha} \}$ of $C$, $\{ u_{\alpha\beta} \} \in C^1({\mathcal F}^0)$, $\{ v_{\alpha} \} \in C^0({\mathcal F}^1)$ and so on. This pairing gives a nondegenerate 2-form on the moduli space $M_{X}$. 
This fact follows from the Serre duality and the five lemma: $$\label{diagram:hypercohomology_LES} \xymatrix{ H^0(\mathcal{F}^0) \ar[r] \ar[d]^-{\sim} & H^0(\mathcal{F}^1) \ar[r] \ar[d]^-{\sim} & {\mathbf H}^1(\mathcal{F}^{\bullet}) \ar[r] \ar[d]^-{\sim} & H^1(\mathcal{F}^0) \ar[r] \ar[d]^-{\sim} & H^1(\mathcal{F}^1) \ar[d]^-{\sim} \\ H^1(\mathcal{F}^1)^{\vee} \ar[r] & H^1(\mathcal{F}^0)^{\vee} \ar[r] & {\mathbf H}^1(\mathcal{F}^{\bullet})^{\vee} \ar[r] & H^0(\mathcal{F}^1)^{\vee} \ar[r] & H^0(\mathcal{F}^0)^{\vee}\rlap{.} }$$ We denote by $\omega$ the nondegenerate 2-form on $M_{X}$. This 2-form $\omega$ is a symplectic structure. That is, we have $d\omega=0$ (see [@Ina Proposition 7.3] and [@IS Proposition 4.2]). We get as a consequence: **Proposition 10**. *The dimension of $M_{X}^0$ is equal to $2N$, where $N = 4g - 3 + n$.* *Proof.* By irreducibility of $(E,\nabla)$ and Schur's lemma we have $${\mathbf H}^0(\mathcal{F}^{\bullet})\cong \mathbb{C}.$$ On a Zariski open subset of $M_{X}$, the underlying quasi-parabolic vector bundle $(E, \{ l^{(i)} \}_i )$ is irreducible, so we also have $$H^0(\mathcal{F}^{0})\cong \mathbb{C}.$$ Clearly, we have $\operatorname{deg} (\mathcal{F}^0) = -\operatorname{length} (D)$. From Riemann--Roch we find $$\begin{aligned} \operatorname{dim}_{\mathbb{C}} H^1 (\mathcal{F}^0) & = \operatorname{dim}_{\mathbb{C}} H^0 (\mathcal{F}^0) + 4 (g-1) - \operatorname{deg} (\mathcal{F}^0) \\ & = 4g - 3 + n = N. \end{aligned}$$ By Serre duality and Euler characteristic count applied to the hypercohomology long exact sequence [\[diagram:hypercohomology_LES\]](#diagram:hypercohomology_LES){reference-type="eqref" reference="diagram:hypercohomology_LES"}, we get the statement. ◻ ## Trivializations of $E$ {#2023_7_4_13_59} Our purpose is to give canonical coordinates of $M_{X}^0$ with respect to the symplectic form [\[2020.11.7.15.48\]](#2020.11.7.15.48){reference-type="eqref" reference="2020.11.7.15.48"}. To do it, we will calculate the Čech cohomology by taking trivializations of $E$. To simplify the calculation, we take trivializations of $E$ by using $$\phi_{\nabla} \colon E_0 \xrightarrow{\ \subset \ } E,$$ whose cokernel defines the apparent singularities. In this section, we will discuss construction of the trivializations of $E$ by using $\phi_{\nabla}$. We take $(E,\nabla, \{ l^{(i)} \}) \in M_{X}^0$. Let $\{(q_j, \zeta_j\operatorname{d}\! z_j)\}_{j=1,2,\ldots,N}$ be the point on $\mathrm{Sym}^{N}(\boldsymbol{\Omega}(D))$ corresponding to $(E,\nabla, \{ l^{(i)} \})$. We assume that the point $\{(q_j, \zeta_j\operatorname{d}\! z_j)\}_{j=1,2,\ldots,N}$ is generic in the sense of Lemma [Lemma 7](#lem:independence){reference-type="ref" reference="lem:independence"}. Let $U^{\mathrm{an}}_{q_j}$ be an analytic open subset of $C$ such that $q_j \in U^{\mathrm{an}}_{q_j}$ and $U^{\mathrm{an}}_{t_i}$ be an analytic open subset of $C$ such that $t_i \in U^{\mathrm{an}}_{t_i}$. We assume that $U^{\mathrm{an}}_{q_j}$ and $U^{\mathrm{an}}_{t_i}$ are small enough. We take an analytic coordinate $z_j$ on $U^{\mathrm{an}}_{q_j}$ such that it is independent of the moduli space $M_{X}^0$. We denote also by $q_j$ the complex number so that the point $q_j$ on $C$ is defined by $z_j -q_j =0$. **Definition 11**. 
*Let $\{ U_{\alpha} \}_{\alpha}$ be an analytic open covering of $C$: $C = \bigcup_{\alpha} U_{\alpha}$ such that* - *$\sharp\{ i \mid t_i \cap U_{\alpha} \neq \emptyset \} \le 1$ for any $\alpha$, and $\sharp\{ \alpha \mid t_i \cap U_{\alpha} \neq \emptyset \} \le 1$ for any $i$,* - *$\sharp\{ j \mid q_j \cap U_{\alpha} \neq \emptyset \} \le 1$ for any $\alpha$, and $\sharp\{ \alpha \mid q_j \cap U_{\alpha} \neq \emptyset \} \le 1$ for any $j$,* - *$\Omega^1_{C}(D)$ is free on $U_{\alpha}$ for any $\alpha$, that is, $\Omega^1_{C}(D)|_{U_{\alpha}} \cong \mathcal{O}_{U_{\alpha}}$,* - *$U_{\alpha_{t_i}} = U^{\mathrm{an}}_{t_i}$ and $U_{\alpha_{q_j}} = U^{\mathrm{an}}_{q_j}$.* *Here we denote by $\alpha_{t_i}$ the index $\alpha$ such that $t_i \in U_{\alpha}$, and by $\alpha_{q_j}$ the index $\alpha$ such that $q_j \in U_{\alpha}$.* We fix trivializations $\omega_{\alpha} \colon \mathcal{O}_{U_{\alpha}}\xrightarrow{\ \sim \ } \Omega^1_{C}(D)|_{U_{\alpha}}$ of $\Omega^1_{C}(D)$. We assume that $\omega_{\alpha}$ is independent of the moduli space $M_{X}^0$. By using $\omega_{\alpha}$, we have $\omega^{-1}_{\alpha} \colon \mathcal{O}_{U_{\alpha}} \xrightarrow{\ \sim \ } (\Omega^1_{C}(D))^{-1}|_{U_{\alpha}}$. By the trivializations, we have trivializations $\varphi_{\alpha}^{\mathrm{norm}} \colon \mathcal{O}^{\oplus 2}_{U_{\alpha}} \xrightarrow{\ \sim \ } E_0 |_{U_{\alpha}}$ of $E_0$. Assume that the connection matrices $A_{\alpha}^{\mathrm{norm}}$ of $\nabla_0$ associated to $\varphi_{\alpha}^{\mathrm{norm}}$ are $$\label{2023_4_5_11_56} A_{\alpha}^{\mathrm{norm}}= \begin{pmatrix} 0 & \beta_{\alpha} \\ \gamma_{\alpha} & \delta_{\alpha} \end{pmatrix},$$ where $\beta_{\alpha},\delta_{\alpha} \in \Omega_C^1(D+B)|_{U_{\alpha}}$ are determined by $\{(q_j, \mathrm{res}_{q_j} (\beta))\}_{j=1,2,\ldots,N}$ (see Lemma [Lemma 7](#lem:independence){reference-type="ref" reference="lem:independence"}). The 1-form $\gamma_{\alpha} \in \Omega_C^1(D)|_{U_{\alpha}}$ is the image of $1$ under the composition $$\mathcal{O}_{U_{\alpha}} \xrightarrow{ \ \sim \ } (\Omega^1_{C}(D))^{-1} \otimes \Omega^1_{C}(D) |_{U_{\alpha}} \xrightarrow{\omega_{\alpha}\otimes 1 } \mathcal{O}_{U_{\alpha}}\otimes \Omega^1_{C}(D).$$ In particular, $\gamma_{\alpha}$ is independent of the moduli space $M_{X}^0$ for any $\alpha$. The polar part of $A_{\alpha_{t_i}}^{\mathrm{norm}}$ at $t_i$ is independent of the moduli space $M_{X}^0$ for any $i$. We set $$\label{2023_7_12_16_04} \zeta_j := \frac{\mathrm{res}_{q_j} (\beta)}{ \gamma_{\alpha_{q_j}}|_{q_j} } \in \mathbb{C} \quad \text{ for $j=1,2,\ldots,N$.}$$ Here $\beta \in H^0(C,(\Omega^1_C)^{\otimes 2}(2D+B))$ is the $(1,2)$-entry of [\[eq:normal_form\]](#eq:normal_form){reference-type="eqref" reference="eq:normal_form"}. Notice that $\beta|_{U_{\alpha}} = \beta_{\alpha}\gamma_{\alpha}$, where $\beta_{\alpha}$ and $\gamma_{\alpha}$ are in [\[2023_4\_5_11_56\]](#2023_4_5_11_56){reference-type="eqref" reference="2023_4_5_11_56"}. So we have $$\mathrm{res}_{q_j} ( A_{\alpha_{q_j}}^{\mathrm{norm}} ) = \begin{pmatrix} 0 & \zeta_j \\ 0 &1 \end{pmatrix}\quad \text{ for $j=1,2,\ldots,N$.}$$ **Definition 12**. 
*We define other trivializations $\varphi_{\alpha}^{\mathrm{App},0} \colon \mathcal{O}^{\oplus 2}_{U_{\alpha}} \xrightarrow{\ \sim \ } E_0 |_{U_{\alpha}}$ of $E_0$ for each $\alpha$ as follows:* - *When $\alpha = \alpha_{q_j}$, we take a trivialization $\varphi_{\alpha}^{\mathrm{App},0}$ as $$\label{2023_3_16_18_38} \varphi_{\alpha}^{\mathrm{App},0} = \varphi_{\alpha}^{\mathrm{norm}} \circ \begin{pmatrix} 1 &\zeta_j \\ 0 & 1 \end{pmatrix}.$$ Note that this triangular matrix appeared in [\[diag_by_conj_by_triangular\]](#diag_by_conj_by_triangular){reference-type="eqref" reference="diag_by_conj_by_triangular"}.* - *Otherwise, we take a trivialization $\varphi_{\alpha}^{\mathrm{App},0}$ as $\varphi_{\alpha}^{\mathrm{App},0} = \varphi_{\alpha}^{\mathrm{norm}}$.* Let $A_{\alpha}^{\mathrm{App},0}$ be the connection matrix of $\nabla_0$ associated to $\varphi_{\alpha}^{\mathrm{App},0}$, that is, $$(\varphi_{\alpha}^{\mathrm{App},0} )^{-1} \circ ( \phi_{\nabla}^* \nabla ) \circ \varphi_{\alpha}^{\mathrm{App},0} = \operatorname{d} + A_{\alpha}^{\mathrm{App},0}.$$ We have that $$A_{\alpha}^{\mathrm{App},0}= \begin{cases} \begin{pmatrix} - \zeta_j \gamma_{\alpha} & \beta_{\alpha}-\zeta_j\delta_{\alpha}-\zeta_j^2\gamma_{\alpha}\\ \gamma_{\alpha} & \delta_{\alpha} + \zeta_j\gamma_{\alpha} \end{pmatrix} & \text{when $\alpha = \alpha_{q_j}$} \\ \begin{pmatrix} 0 & \beta_{\alpha} \\ \gamma_{\alpha} & \delta_{\alpha} \end{pmatrix} & \text{otherwise}. \end{cases}$$ We have $$\mathrm{res}_{q_j} ( A_{\alpha_{q_j}}^{\mathrm{App},0} ) = \begin{pmatrix} 0 & 0 \\ 0 &1 \end{pmatrix}\quad \text{ for $j=1,2,\ldots,N$.}$$ Now we define trivializations of $E$ by using $\phi_{\nabla} \colon E_0 \rightarrow E$ in [\[eq:phi_nabla\]](#eq:phi_nabla){reference-type="eqref" reference="eq:phi_nabla"} and the trivialization of $E_0$ in Definition [Definition 12](#2023_7_8_23_29){reference-type="ref" reference="2023_7_8_23_29"}. **Definition 13**. *Now we define trivialization $\varphi^{\mathrm{App}}_{\alpha} \colon \mathcal{O}^{\oplus 2}_{U_{\alpha}} \xrightarrow{\ \sim \ } E|_{U_{\alpha}}$ of $E$ for the open covering $\{ U_{\alpha} \}_{\alpha}$ in Definition [Definition 11](#2023_7_8_20_43){reference-type="ref" reference="2023_7_8_20_43"} as follows.* - *When $\alpha = \alpha_{q_j}$, we take a trivialization $\varphi^{\mathrm{App}}_{\alpha}$ so that $$(\varphi^{\mathrm{App}}_{\alpha} )^{-1}\circ \phi_{\nabla}|_{U_{\alpha}} \circ \varphi^{\mathrm{App},0}_{\alpha} =\begin{pmatrix} 1 & 0 \\ 0 & z_j -q_j \end{pmatrix}.$$* - *When $\alpha = \alpha_{t_i}$, we take $g^{t_i}_{\alpha} \in \mathrm{Aut}(\mathcal{O}_{U_{\alpha}}^{\oplus 2})$ so that the polar part of $(g^{t_i}_{\alpha})^{-1} A_{\alpha}^{\mathrm{norm}} g^{t_i}_{\alpha}$ is diagonal at $m_i[t_i]$. We take a trivialization $\varphi^{\mathrm{App}}_{\alpha}$ as $$\varphi^{\mathrm{App}}_{\alpha} = \phi_{\nabla}|_{U_{\alpha}} \circ \varphi_{\alpha}^{\mathrm{norm}} \circ g^{t_i}_{\alpha}.$$ Here remark that $\phi_{\nabla}|_{U_{\alpha}}$ is invertible. Since the polar part of $A_{\alpha_{t_i}}^{\mathrm{norm}}$ at $t_i$ is independent of the moduli space $M_{X}^0$, we may assume that $(g^{t_i}_{\alpha})_{< m_i}$ is independent of the moduli space $M_{X}^0$. 
Here we define $(g^{t_i}_{\alpha})_{< m_i}$ so that $g^{t_i}_{\alpha} = (g^{t_i}_{\alpha})_{< m_i} + O(z_i^{m_i})$.* - *Otherwise, we take a trivialization $\varphi^{\mathrm{App}}_{\alpha}$ so that $$(\varphi^{\mathrm{App}}_{\alpha} )^{-1} \circ \phi_{\nabla}|_{U_{\alpha}} \circ \varphi^{\mathrm{norm}}_{\alpha} =\begin{pmatrix} 1 &0 \\ 0 & 1 \end{pmatrix}.$$ Since $\phi_{\nabla}|_{U_{\alpha}}$ is invertible in this case, $\varphi^{\mathrm{App}}_{\alpha} = \phi_{\nabla}|_{U_{\alpha}} \circ \varphi^{\mathrm{norm}}_{\alpha}$.* Let $A_{\alpha}$ be the connection matrix of $\nabla$ associated to $\varphi^{\text{App}}_{\alpha}$, that is $$(\varphi^{\text{App}}_{\alpha} )^{-1} \circ \nabla \circ \varphi^{\text{App}}_{\alpha} = \operatorname{d} + A_{\alpha}.$$ We have that $$\label{2023_3_6_18_19} A_{\alpha}= \begin{cases} \begin{pmatrix} - \zeta_j \gamma_{\alpha} & \frac{\beta_{\alpha}-\zeta_j\delta_{\alpha}-\zeta_j^2\gamma_{\alpha}}{z_j -q_j}\\ (z_j -q_j)\gamma_{\alpha} & \delta_{\alpha} + \zeta_j\gamma_{\alpha} -\frac{dz_j}{z_j-q_j} \end{pmatrix} & \text{when $\alpha = \alpha_{q_j}$} \\ \omega_i(X ) + [\text{holo.\ part}] & \text{when $\alpha = \alpha_{t_i}$} \\ \begin{pmatrix} 0 & \beta_{\alpha} \\ \gamma_{\alpha} & \delta_{\alpha} \end{pmatrix} & \text{otherwise}. \end{cases}$$ Here $\omega_i(X )$ is the 1-form defined in [\[2023_4\_10_19_32\]](#2023_4_10_19_32){reference-type="eqref" reference="2023_4_10_19_32"}. The connection matrix $A_{\alpha_{q_j}}$ on $U_{\alpha_{q_j}}$ appeared in [\[conn_matrix_at_qj\]](#conn_matrix_at_qj){reference-type="eqref" reference="conn_matrix_at_qj"}. The connection matrix $A_{\alpha_{q_j}}$ has no pole at $q_j$ for any $j=1,2,\ldots,N$, since $\beta_{\alpha},\delta_{\alpha}$ are determined by Lemma [Lemma 7](#lem:independence){reference-type="ref" reference="lem:independence"}. We have considered diagonalization of the polar part of the connection $(E,\nabla)$ at each $t_i$. The reason why we consider diagonalization of the polar parts is that we use the connection matrix [\[2023_3\_6_18_19\]](#2023_3_6_18_19){reference-type="eqref" reference="2023_3_6_18_19"} to calculate an infinitesimal deformation of $(E,\nabla)$. So we will calculate variations of the transition functions with respect to the trivializations in Definition [Definition 13](#2023_7_2_11_52){reference-type="ref" reference="2023_7_2_11_52"} and variations of the connection matrices [\[2023_3\_6_18_19\]](#2023_3_6_18_19){reference-type="eqref" reference="2023_3_6_18_19"}. These are elements of $\mathcal{F}^0$ and $\mathcal{F}^1$ of [\[2023_7\_4_14_18\]](#2023_7_4_14_18){reference-type="eqref" reference="2023_7_4_14_18"}, respectively. To be elements of $\mathcal{F}^0$ and $\mathcal{F}^1$, we need the compatibility with the quasi-parabolic structure. However, this compatibility follows directly from diagonalization of the polar parts. ## Descriptions of the cocycles of an infinitesimal deformation {#2023_7_12_16_39} Let $\boldsymbol{\Omega} (D) \rightarrow C$ be the total space of $\Omega^1_C(D)$. By the argument as in Lemma [Lemma 6](#2023_7_11_12_53){reference-type="ref" reference="2023_7_11_12_53"}, we may define a map $$\label{2023_7_12_16_02} \begin{aligned} f_{\mathrm{App},0} \colon M_{X}^0 &\longrightarrow \mathrm{Sym}^{N}(\boldsymbol{\Omega} (D)) \\ (E,\nabla ) & \longmapsto \left\{ (q_j, \mathrm{res}_{q_j} (\beta)) \right\}_{j=1,2,\ldots,N}. 
\end{aligned}$$ Here $\beta \in H^0(C, (\Omega^1_C)^{\otimes 2} (2D+B))$ is the $(1,2)$-entry of [\[eq:normal_form\]](#eq:normal_form){reference-type="eqref" reference="eq:normal_form"} and $\mathrm{res}_{q_j} (\beta) \in \Omega_C^1(D)|_{q_j}$. We take an analytic open subset $V$ of $M_{X}^0$ and assume that we may define a composition $$\begin{aligned} V \longrightarrow f_{\mathrm{App},0} (V) & \longrightarrow \mathrm{Sym}^{N}(\mathbb{C}_{(q,\zeta)}^2) \\ (E,\nabla ) \longmapsto \left\{ (q_j, \mathrm{res}_{q_j} (\beta)) \right\}_{j=1,2,\ldots,N} & \longmapsto \left\{ (q_j, \zeta_j) \right\}_{j=1,2,\ldots,N}, \end{aligned}$$ where $\zeta_j$ is defined in [\[2023_7\_12_16_04\]](#2023_7_12_16_04){reference-type="eqref" reference="2023_7_12_16_04"}, and the image of $V$ under the composition is isomorphic to some analytic open subset of $\mathbb{C}_{(q,\zeta)}^{2N}$. Let $U_{(q,\zeta)}$ be such an analytic open subset of $\mathbb{C}_{(q,\zeta)}^{2N}$. So we have a map $$\label{2023_7_12_16_26} \begin{aligned} M_{X}^0 \supset V &\longrightarrow U_{(q,\zeta)} \subset \mathbb{C}_{(q,\zeta)}^{2N} \\ (E,\nabla ) &\longmapsto (q_1,\ldots,q_N, \zeta_1,\ldots,\zeta_N), \end{aligned}$$ which gives the coordinates that we will use in this subsection. We consider the family of $(E,\nabla,\{ l^{(i)} \})$ parametrized by $U_{(q,\zeta)}$ such that this family induces the inverse map of the map $V \rightarrow U_{(q,\zeta)}$. Here this family is constructed by Lemma [Lemma 7](#lem:independence){reference-type="ref" reference="lem:independence"}. By using the trivializations $\{\varphi^{\text{App}}_{\alpha}\}_{\alpha}$ of $E$ in Definition [Definition 13](#2023_7_2_11_52){reference-type="ref" reference="2023_7_2_11_52"}, we have transition functions and connection matrices of the family of $(E,\nabla,\{ l^{(i)} \})$ parametrized by $U_{(q,\zeta)}$. Indeed, the transition function is $$\label{2023_7_2_11_55} B_{\alpha\beta} :=( \varphi^{\text{App}}_{\alpha}|_{U_{\alpha\beta}})^{-1} \circ \varphi^{\text{App}}_{\beta}|_{U_{\alpha\beta}} \colon \mathcal{O}^{\oplus 2}_{U_{\alpha \beta}} \longrightarrow \mathcal{O}^{\oplus 2}_{U_{\alpha \beta}},$$ and the connection matrix is as in [\[2023_3\_6_18_19\]](#2023_3_6_18_19){reference-type="eqref" reference="2023_3_6_18_19"}. Let $(q_j,\zeta_j)_j$ be a point on $U_{(q,\zeta)}$. The purpose of this subsection is to describe the tangent map $$\label{2023_7_12_16_59} \begin{aligned} T_{(q_j,\zeta_j)_j} \mathbb{C}_{(q,\zeta)}^{2N} &\longrightarrow T_{(E,\nabla , \{l^{(i)}\} )} M_{X}^0 \cong {\bf H}^1 (\mathcal{F}^{\bullet}) \\ v &\longmapsto [(\{u_{\alpha\beta} (v)\}, \{ v_{\alpha} (v)\} )] \end{aligned}$$ induced by the inverse map of [\[2023_7\_12_16_26\]](#2023_7_12_16_26){reference-type="eqref" reference="2023_7_12_16_26"}. For this purpose, we will calculate the variations of the transition functions and the connection matrices parametrized by $U_{(q,\zeta)}$ with respect to the tangent vector $v$ in $U_{(q,\zeta)} \subset \mathbb{C}_{(q,\zeta)}^{2N}$. By using these variations, we will calculate the cocycles $(\{u_{\alpha\beta} (v)\}, \{ v_{\alpha} (v)\} )$ of the infinitesimal deformation of $(E,\nabla , \{l^{(i)}\} )$ with respect to $v$. First, we calculate $u_{\alpha\beta} (v) \in \mathcal{F}^0(U_{\alpha\beta}).$
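Before carrying out these computations, let us record the relations that such a pair has to satisfy in order to represent a class in ${\bf H}^1 (\mathcal{F}^{\bullet})$. Here we assume that the differential of the complex $\mathcal{F}^{\bullet}$ in [\[2023_7\_4_14_18\]](#2023_7_4_14_18){reference-type="eqref" reference="2023_7_4_14_18"} is $s \longmapsto \nabla \circ s - s \circ \nabla$, as for the trace-free complex considered later, so the following is only a sketch under this convention. With the descriptions of $u_{\alpha\beta}(v)$ and $v_{\alpha}(v)$ obtained below, a direct calculation gives $$u_{\alpha\gamma}(v) = u_{\alpha\beta}(v) + u_{\beta\gamma}(v) \ \text{ on } U_{\alpha\beta\gamma}, \qquad v_{\beta}(v) - v_{\alpha}(v) = \nabla \circ u_{\alpha\beta}(v) - u_{\alpha\beta}(v) \circ \nabla \ \text{ on } U_{\alpha\beta},$$ so that $(\{u_{\alpha\beta} (v)\}, \{ v_{\alpha} (v)\} )$ is indeed a 1-cocycle for $\mathcal{F}^{\bullet}$.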
We consider the variation of $B_{\alpha\beta}$ in [\[2023_7\_2_11_55\]](#2023_7_2_11_55){reference-type="eqref" reference="2023_7_2_11_55"} by $v$: $$B_{\alpha\beta} ( \mathrm{id} + \epsilon B_{\alpha\beta}^{-1} v(B_{\alpha\beta})) \colon \mathcal{O}^{\oplus 2}_{U_{\alpha \beta}} \longrightarrow \mathcal{O}^{\oplus 2}_{U_{\alpha \beta}} \otimes \mathbb{C}[\epsilon].$$ Then $u_{\alpha\beta}(v)$ has the following description: $$\label{2023_7_4_14_16} u_{\alpha\beta} (v)= \varphi^{\text{App}}_{\beta}|_{U_{\alpha\beta}} \circ \left( B_{\alpha\beta}^{-1} v(B_{\alpha\beta}) \right) \circ (\varphi^{\text{App}}_{\beta}|_{U_{\alpha\beta}})^{-1}.$$ **Lemma 14**. *Let $I_{\mathrm{cov}}$ be the set of the indices of the open covering $\{ U_{\alpha} \}$ in Definition [Definition 11](#2023_7_8_20_43){reference-type="ref" reference="2023_7_8_20_43"}. We set $I^t_{\mathrm{cov}} = \{\alpha_{t_1}, \ldots,\alpha_{t_\nu}\}$ and $I^q_{\mathrm{cov}} = \{\alpha_{q_1}, \ldots,\alpha_{q_{N}}\}$, which are subsets of $I_{\mathrm{cov}}$. For $v\in T_{(E,\nabla , \{l^{(i)}\} )} M_X^0$, we have the equality $$\label{2023_3_12_23_00} u_{\alpha\beta} (v) = \begin{cases} 0 & \alpha,\beta \in I_{\mathrm{cov}} \setminus (I^t_{\mathrm{cov}} \cup I^q_{\mathrm{cov}}) \\ \varphi^{\mathrm{App}}_{\alpha_{q_j}}|_{U_{\alpha\alpha_{q_j}}} \circ \begin{pmatrix} 0 & \frac{v(\zeta_j)}{z_j -q_j} \\ 0 & \frac{v(q_j)}{z_j -q_j} \end{pmatrix}\circ (\varphi^{\mathrm{App}}_{\alpha_{q_j}}|_{U_{\alpha\alpha_{q_j}}})^{-1} & \alpha \in I_{\mathrm{cov}} \setminus (I^t_{\mathrm{cov}} \cup I^q_{\mathrm{cov}}) , \beta = \alpha_{q_j} \in I^q_{\mathrm{cov}} \\ \varphi^{\mathrm{App}}_{\alpha_{t_i}}|_{U_{\alpha\alpha_{t_i}}} \circ \left( (g^{t_i}_{\alpha_{t_i}})^{-1} v(g^{t_i}_{\alpha_{t_i}}) \right) \circ (\varphi^{\mathrm{App}}_{\alpha_{t_i}}|_{U_{\alpha\alpha_{t_i}}})^{-1} &\alpha \in I_{\mathrm{cov}}\setminus (I^t_{\mathrm{cov}} \cup I^q_{\mathrm{cov}}) , \beta = \alpha_{t_i} \in I^t_{\mathrm{cov}}, \end{cases}$$ and we have that $$\label{2023_3_13_17_13} (g^{t_i}_{\alpha_{t_i}})^{-1} v(g^{t_i}_{\alpha_{t_i}}) =O(z_i^{m_i} ).$$* *Proof.* Let $\alpha \in I_{\mathrm{cov}}\setminus (I^t_{\mathrm{cov}} \cup I^q_{\mathrm{cov}})$. If $\beta \in I_{\mathrm{cov}}\setminus (I^t_{\mathrm{cov}} \cup I^q_{\mathrm{cov}})$, then we have the following equalities: $$\begin{aligned} B_{\alpha\beta} &=( \varphi^{\text{App}}_{\alpha}|_{U_{\alpha\beta}})^{-1} \circ \varphi^{\text{App}}_{\beta}|_{U_{\alpha\beta}} \\ &= (\varphi^{\mathrm{norm}}_{\alpha}|_{U_{\alpha\beta}})^{-1} \circ (\phi_{\nabla}|_{U_{\alpha\beta}})^{-1} \circ \phi_{\nabla}|_{U_{\alpha\beta}} \circ \varphi^{\mathrm{norm}}_{\beta}|_{U_{\alpha\beta}}\\ &= (\varphi^{\mathrm{norm}}_{\alpha}|_{U_{\alpha\beta}})^{-1} \circ \varphi^{\mathrm{norm}}_{\beta} |_{U_{\alpha\beta}}= \begin{pmatrix} 1 & 0 \\ 0 & ( (\omega_{\alpha}^{-1})^{-1} \circ \omega_{\alpha_{q_j}}^{-1} ) \end{pmatrix}. \end{aligned}$$ Here $\omega^{-1}_{\alpha}$ is a trivialization $\mathcal{O}_{U_{\alpha}} \xrightarrow{\cong} (\Omega^1_{C}(D))^{-1}|_{U_{\alpha}}$ for any $\alpha$. Since $( (\omega_{\alpha}^{-1})^{-1} \circ \omega_{\alpha_{q_j}}^{-1} )$ is independent of the moduli space $M_X^0$, we have $v(B_{\alpha\beta})=0$. So $u_{\alpha\beta}(v)=0$. 
If $\beta = \alpha_{q_j}$, then we have the following equalities: $$\label{2023_3_12_23_33} \begin{aligned} B_{\alpha\alpha_{q_j}} &=( \varphi^{\text{App}}_{\alpha}|_{U_{\alpha\alpha_{q_j}}})^{-1} \circ \varphi^{\text{App}}_{\alpha_{q_j}}|_{U_{\alpha\alpha_{q_j}}} \\ &= (\varphi^{\mathrm{App},0}_{\alpha}|_{U_{\alpha\alpha_{q_j}}})^{-1} \circ (\phi_{\nabla}|_{U_{\alpha\alpha_{q_j}}})^{-1} \circ \phi_{\nabla}|_{U_{\alpha\alpha_{q_j}}} \circ \varphi^{\mathrm{App},0}_{\alpha_{q_j}} |_{U_{\alpha\alpha_{q_j}}} \circ \begin{pmatrix} 1 & 0 \\ 0& \frac{1}{z_j-q_j} \end{pmatrix}\\ &= (\varphi^{\mathrm{App},0}_{\alpha}|_{U_{\alpha\alpha_{q_j}}})^{-1} \circ \varphi^{\mathrm{App},0}_{\alpha_{q_j}}|_{U_{\alpha\alpha_{q_j}}} \circ \begin{pmatrix} 1 & 0 \\ 0& \frac{1}{z_j-q_j} \end{pmatrix}\\ &=(\varphi^{\mathrm{norm}}_{\alpha}|_{U_{\alpha\alpha_{q_j}}})^{-1} \circ \varphi^{\mathrm{norm}}_{\alpha_{q_j}}|_{U_{\alpha\alpha_{q_j}}} \circ \begin{pmatrix} 1 & \zeta_j \\ 0& 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0& \frac{1}{z_j-q_j} \end{pmatrix}\\ &= \begin{pmatrix} 1 & 0 \\ 0 & ( (\omega_{\alpha}^{-1})^{-1} \circ \omega_{\alpha_{q_j}}^{-1} ) \end{pmatrix} \begin{pmatrix} 1 & \frac{\zeta_j}{z_j-q_j} \\ 0& \frac{1}{z_j-q_j} \end{pmatrix}. \end{aligned}$$ So we have $$B_{\alpha\alpha_{q_j}}^{-1} v(B_{\alpha\alpha_{q_j}}) =\begin{pmatrix} 1 & -\zeta_j \\ 0& z_j-q_j \end{pmatrix} \begin{pmatrix} 0 & \frac{v(\zeta_j) (z_j-q_j) + \zeta_j v(q_j) }{(z_j-q_j)^2} \\ 0& - \frac{-v(q_j)}{(z_j-q_j)^2} \end{pmatrix} =\begin{pmatrix} 0 & \frac{v(\zeta_j)}{z_j -q_j} \\ 0 & \frac{v(q_j)}{z_j -q_j} \end{pmatrix}.$$ If $\beta = \alpha_{t_i}$, then we have the following equalities: $$\begin{aligned} B_{\alpha\alpha_{t_i}} &=( \varphi^{\text{App}}_{\alpha}|_{U_{\alpha\alpha_{t_i}}})^{-1} \circ \varphi^{\text{App}}_{\alpha_{t_i}} |_{U_{\alpha\alpha_{t_i}}}\\ &= (\varphi^{\mathrm{norm}}_{\alpha}|_{U_{\alpha\alpha_{t_i}}})^{-1} \circ (\phi_{\nabla}|_{U_{\alpha\alpha_{t_i}}})^{-1} \circ \phi_{\nabla}|_{U_{\alpha\alpha_{t_i}}} \circ \varphi^{\mathrm{norm}}_{\alpha_{t_i}} |_{U_{\alpha\alpha_{t_i}}} \circ g^{t_i}_{\alpha_{t_i}}\\ &= \begin{pmatrix} 1 & 0 \\ 0 & ( (\omega_{\alpha}^{-1})^{-1} \circ \omega_{\alpha_{q_j}}^{-1} ) \end{pmatrix}\circ g^{t_i}_{\alpha_{t_i}}. \end{aligned}$$ So we have $B_{\alpha\alpha_{t_i}}^{-1} v(B_{\alpha\alpha_{t_i}}) =(g^{t_i}_{\alpha_{t_i}})^{-1} v(g^{t_i}_{\alpha_{t_i}})$. Since $(g^{t_i}_{\alpha})_{< m_i}$ is independent of the moduli space $M^0_X$, we have that $v(g^{t_i}_{\alpha_{t_i}}) = O(z_i^{m_i})$. Finally, we have the statement of the lemma. ◻ Next we calculate $v_{\alpha}(v) \in \mathcal{F}^1(U_{\alpha})$ for $v\in T_{(E,\nabla , \{l^{(i)}\} )} M^0_X$. This is given by calculating the variation of the connection matrix $A_{\alpha}$ in [\[2023_3\_6_18_19\]](#2023_3_6_18_19){reference-type="eqref" reference="2023_3_6_18_19"} with respect to $v$. 
So we have $$\label{2023_3_12_22_59} v_{\alpha}(v)= \begin{cases} \varphi^{\text{App}}_{\alpha}\circ \begin{pmatrix} - v( \zeta_j) \gamma_{\alpha} & v\left( \frac{\beta_{\alpha}-\zeta_j\delta_{\alpha}-\zeta_j^2\gamma_{\alpha}}{z_j -q_j} \right)\\ -v(q_j) \gamma_{\alpha} & v(\mathrm{tr}(A_{\alpha_{q_j}})) + v(\zeta_j)\gamma_{\alpha} \end{pmatrix} \circ ( \varphi^{\text{App}}_{\alpha})^{-1} & \text{when $\alpha = \alpha_{q_j}$} \\ \varphi^{\text{App}}_{\alpha} \circ \begin{pmatrix} 0 & v(\beta_{\alpha}) \\ 0 &v(\mathrm{tr}(A_{\alpha})) \end{pmatrix} \circ( \varphi^{\text{App}}_{\alpha})^{-1}& \text{when $\alpha \in I_{\mathrm{cov}}\setminus (I^t_{\mathrm{cov}} \cup I^q_{\mathrm{cov}})$} \end{cases}.$$ Here remark that $\gamma_{\alpha}$ is independent of the moduli space $M_{X}^0$ for any $\alpha$. When $\alpha = \alpha_{t_i}$, we have that $v_{\alpha}(v)$ is holomorphic at $t_i$. ## Canonical coordinates {#subsect:Canonical_Coor} Now we introduce canonical coordinates on $M_{X}^0$ with respect to the symplectic form [\[2020.11.7.15.48\]](#2020.11.7.15.48){reference-type="eqref" reference="2020.11.7.15.48"}. We recall that we have set $N:= 4g+n-3$. Let $\pi \colon \boldsymbol{\Omega}(D) \rightarrow C$ and $\pi_0 \colon \boldsymbol{\Omega} \rightarrow C$ be the total spaces of $\Omega^1_C(D)$ and $\Omega^1_C$, respectively. The total space $\boldsymbol{\Omega}$ has the Liouville symplectic form $\omega_{\text{Liouv}}$. Since we have an isomorphism $$\pi_0^{-1} (C\setminus \mathrm{Supp}(D)) \xrightarrow{ \ \sim \ } \pi^{-1} (C\setminus \mathrm{Supp}(D)) ,$$ the Liouville symplectic form induces a symplectic form $\pi^{-1} (C\setminus \mathrm{Supp}(D))$. Let $\pi_N \colon \mathrm{Sym}^{N}(\boldsymbol{\Omega}(D)) \rightarrow \mathrm{Sym}^{N}(C)$ be the map induced by the map $\pi \colon \boldsymbol{\Omega}(D) \rightarrow C$. We set $$\mathrm{Sym}^{N}(\boldsymbol{\Omega}(D))_0 := \left\{ \{ q_1,\ldots,q_N \} \in \pi_N^{-1} (\mathrm{Sym}^{N}(C \setminus \mathrm{Supp}(D) )) \ \middle| \ q_{j_1} \neq q_{j_2} \ (j_1\neq j_2) \right\}.$$ Then $\mathrm{Sym}^{N}(\boldsymbol{\Omega}(D))_0$ has the induced symplectic form from the Liouville symplectic form. **Remark 15**. *We have a map $f_{\mathrm{App},0} \colon M_{X}^0 \rightarrow \mathrm{Sym}^{N}(\boldsymbol{\Omega}(D))_0$, which is described in [\[2023_7\_12_16_02\]](#2023_7_12_16_02){reference-type="eqref" reference="2023_7_12_16_02"}. Notice that $M_{X}^0$ and $\mathrm{Sym}^{N}(\boldsymbol{\Omega}(D))_0$ have symplectic forms. But by the explicit calculation as below, we realize that this map $f_{\mathrm{App},0}$ does not preserve these symplectic structures. So $f_{\mathrm{App},0}$ does not give canonical coordinates directly. To give canonical coordinates, we have to modify the map $f_{\mathrm{App},0}$ as follows.* We twist $\boldsymbol{\Omega}(D)$ by a class in $H^1(C, \Omega^1_C)$ as follows. Let $c_d$ be the image of the line bundle $\mathrm{det} (E)$ under the morphism $$H^1 ( C , \mathcal{O}^*_C) \xrightarrow{ \ \operatorname{d} \log \ } H^1(C, \Omega^1_C) \cong \operatorname{Ext}_C^1(T_C, \mathcal{O}_C ).$$ Let $\mathcal{A}_{C}(c_d)$ be the sheaf produced by the Atiyah sequence on $C$ with respect to $c_d$, that is, $\mathcal{A}_{C}(c_d)$ is given by the extension $$\label{2023_3_24_19_28} 0\longrightarrow \mathcal{O}_C \longrightarrow \mathcal{A}_{C}(c_d) \longrightarrow T_C \longrightarrow 0$$ with respect to $c_d \in H^1(C, \Omega_C^1)$. 
Then, $\mathcal{A}_{C}(c_d)$ is naturally a Lie-algebroid, called the Atiyah algebroid of the $\mathbb{G}_m$-principal bundle $\operatorname{Tot} (T_C) \setminus 0$, where $0$ stands for the $0$-section; for details, see [@LM Section 3.1.2]. We denote by $\mathrm{symb}_1 \colon \mathcal{A}_{C}(c_d) \rightarrow T_C$ the morphism in [\[2023_3\_24_19_28\]](#2023_3_24_19_28){reference-type="eqref" reference="2023_3_24_19_28"}. We consider the subsheaf $T_C (-D) \subset T_C$. We set $\mathcal{A}_{C}(c_d,D) := \mathrm{symb}_1^{-1} T_C (-D)$, which is an extension $$0\longrightarrow \mathcal{O}_C \longrightarrow \mathcal{A}_{C}(c_d,D) \longrightarrow T_C(-D) \longrightarrow 0.$$ Let $\Omega^1_C(D, c_d)$ be the twisted cotangent bundle over $C$ with respect to $\mathcal{A}_{C}(c_d,D)$, that is, $$\Omega^1_C(D, c_d) = \left\{ \phi \in \mathcal{A}_{C}(c_d,D)^{\vee} \ \middle|\ \langle \phi, 1_{\mathcal{A}_{C}(c_d,D)} \rangle =1 \right\}.$$ We denote by $$\pi_{c_d} \colon \boldsymbol{\Omega}(D, c_d) \longrightarrow C$$ the total space of the twisted cotangent bundle $\Omega^1_C(D, c_d)$, and a generic element of this affine bundle by $(q, \tilde{p} )$ in analogy with classical notation $(q, p )$ for points of $\boldsymbol{\Omega}(D)$. For each $(E,\nabla , \{ l^{(i)} \}) \in M_{X}^0$, we have $(\mathrm{det}(E) ,\mathrm{tr} (\nabla))$. The connection $\mathrm{tr} (\nabla)$ on the line bundle $\mathrm{det}(E)$ is considered as a *global* section of $\boldsymbol{\Omega}(D,c_d) \rightarrow C$, which is the total space of the twisted cotangent bundle with respect to $\mathrm{det}(E)$. The global section $\mathrm{tr} (\nabla)$ gives a diffeomorphism $$\begin{aligned} \boldsymbol{\Omega} (D) \longrightarrow \boldsymbol{\Omega}(D,c_d) ;\quad (q, p ) \longmapsto (q, p + \mathrm{tr} (\nabla) ). \end{aligned}$$ Notice that $\mathrm{tr} (\nabla)$ *does* depend on $M_{X}^0$. So this morphism depends on $M_{X}^0$. Moreover, it is not a morphism of vector bundles. **Definition 16**. *We define the *accessory parameter* associated to $(E,\nabla )$ at $q_j$ by $$\tilde{p}_j = \mathrm{res}_{q_j} (\beta) + \mathrm{tr}(\nabla)|_{q_j},$$ where $\beta \in H^0(C, (\Omega^1_C)^{\otimes 2} (2D+B))$ is the $(1,2)$-entry of [\[eq:normal_form\]](#eq:normal_form){reference-type="eqref" reference="eq:normal_form"} and $\mathrm{res}_{q_j}(\beta) \in \Omega^1_C(D) |_{q_j}$. The $N$-tuple $\left\{ \left( q_j , \tilde{p}_j \right) \right\}_{j=1,2,\ldots,N}$ will be called *canonical coordinates* of $(E,\nabla )$. We let $f_{\mathrm{App}}$ be the map $$\begin{aligned} f_{\mathrm{App}} \colon M_{X}^0 &\longrightarrow \mathrm{Sym}^N (\boldsymbol{\Omega}(D,c_d)) \\ (E,\nabla , \{l^{(i)}\} ) &\longmapsto \left\{ \left( q_j , \tilde{p}_j \right) \right\}_{j=1,2,\ldots,N}. \end{aligned}$$* Notice that the map $f_{\mathrm{App},0}$ in [\[2023_7\_12_16_02\]](#2023_7_12_16_02){reference-type="eqref" reference="2023_7_12_16_02"} is defined by using only $\mathrm{res}_{q_j}(\beta)$. The reason why we consider the twisted cotangent bundle $\boldsymbol{\Omega}(D,c_d)$ is to justify $\mathrm{tr}(\nabla)|_{q_j}$. The next proposition shows that the quantities introduced in the definition may indeed be called coordinates. **Proposition 17**. 
*The map $f_{\mathrm{App}}$ introduced in Definition [Definition 16](#2023_7_12_23_06){reference-type="ref" reference="2023_7_12_23_06"} is birational.* *Proof.* It follows from Proposition [Proposition 10](#prop:dimension){reference-type="ref" reference="prop:dimension"} that the dimensions of the source and target of $f_{\mathrm{App}}$ agree. We therefore need to show two things: first, that $f_{\mathrm{App}}$ is rational, and second, that it admits an inverse over a Zariski open subset of $\mathrm{Sym}^N (\boldsymbol{\Omega}(D,c_d))$. The first assertion is trivial, because the construction of the apparent singularities $q_j$ and their accessory parameters $\tilde{p}_j$ follow from algebraic arguments on certain Zariski open subsets. The key statement is existence of a generic inverse. This is now a variant of Lemma [Lemma 7](#lem:independence){reference-type="ref" reference="lem:independence"}. Namely, fixing generic $\left\{ \left( q_j , \tilde{p}_j \right) \right\}_{j=1,2,\ldots,N}$, we must find a unique $(\delta, \beta)$. Since we have $\delta = \mathrm{tr}(\nabla_0)$, we get the expression $$\tilde{p}_j = \zeta_j \operatorname{d}\! z_j + \delta - \frac{\operatorname{d}\! z_j}{z_j}.$$ An algebraic manipulation shows that the constraint [\[eq:apparent\]](#eq:apparent){reference-type="eqref" reference="eq:apparent"} expressing that the singularity at $q_j$ be apparent is equivalent to the holomorphicity and vanishing of the expression $$\label{eq:quantum_characteristic} \beta + \delta \left( \tilde{p}_j + \frac{\operatorname{d}\! z_j}{z_j} \right) - \left( \tilde{p}_j + \frac{\operatorname{d}\! z_j}{z_j} \right)^2.$$ We now study these conditions by taking the Laurent expansion of this expression with respect to $z_j$. We first observe that it clearly admits a pole of order at most $2$ at $q_j$, because $q_j\neq t_i$. Since $\delta$ has a simple pole with residue $1$, the term of degree $-2$ is $$\left( \operatorname{d}\! z_j\right)^{\otimes 2}-\left( \operatorname{d}\! z_j\right)^{\otimes 2}=0.$$ So the pole is automatically at most simple. For the study of the residue, we need to introduce some notation: let us write $$\begin{aligned} \delta_0 & = \frac{\operatorname{d}\! z_j}{z_j} + \delta_0^{(j)} \\ \beta_0 & = \zeta_j \frac{\left( \operatorname{d}\! z_j\right)^{\otimes 2}}{z_j} + \beta_0^{(j)}\end{aligned}$$ for a holomorphic rank $1$ connection $\delta_0^{(j)}$ and a holomorphic quadratic differential $\beta_0^{(j)}$ on $U_{q_j}$. Then, the degree $-1$ part of [\[eq:quantum_characteristic\]](#eq:quantum_characteristic){reference-type="eqref" reference="eq:quantum_characteristic"} is (up to a global factor $\operatorname{d}\! z_j$) $$\zeta_j \operatorname{d}\! z_j + \tilde{p}_j + \left( \delta - \frac{\operatorname{d}\! z_j}{z_j} \right) - 2 \tilde{p}_j = 0$$ by the definition of $\tilde{p}_j$. Finally, to deal with the vanishing constraint, we make use of the same basis expansions for $\delta$ and $\beta$ as in Lemma [Lemma 7](#lem:independence){reference-type="ref" reference="lem:independence"}. 
Then, the conditions read as $$\sum_{k=1}^{N-g} b_k \nu_k (q_j) + \tilde{p}_j \sum_{l=1}^g d_l \omega_l (q_j) = \left( \tilde{p}_j\right)^{\otimes 2} - \delta_0^{(j)} (q_j) \tilde{p}_j - \beta_0^{(j)} (q_j).$$ Now, the determinant of this linear system of $N$ equations (for $1\leq j \leq N$) in $N$ variables $b_1,\ldots, b_{N-g}, d_1, \ldots, d_g$ agrees with the determinant studied in Lemma [Lemma 7](#lem:independence){reference-type="ref" reference="lem:independence"}, up to replacing each occurrence of $\zeta_j \operatorname{d}\! z_j$ by $\tilde{p}_j$. The end of the proof then follows word by word the method of Lemma [Lemma 7](#lem:independence){reference-type="ref" reference="lem:independence"}. ◻ **Remark 18**. *The expression [\[eq:quantum_characteristic\]](#eq:quantum_characteristic){reference-type="eqref" reference="eq:quantum_characteristic"} has variables $\tilde{p}_j$ in the twisted cotangent sheaf rather than the ordinary cotangent sheaf. The quadratic polynomial of $\tilde{p}_j$ can be viewed as the characteristic polynomial of the connection matrix of $\nabla_0$. Thus, in a sense the vanishing condition on [\[eq:quantum_characteristic\]](#eq:quantum_characteristic){reference-type="eqref" reference="eq:quantum_characteristic"} may be interpreted as the requirement that $\tilde{p}_j$ lie on the *quantum spectral curve* of $\nabla_0$, see e.g. [@DumMul].* By taking a local trivialization of $\mathrm{det}(E)$, we have a concrete description of the map $f_{\mathrm{App}}$. Now we will discuss on such a description of $f_{\mathrm{App}}$. The description discussed below is useful for the proof of Theorem [Theorem 20](#2023_8_22_12_09){reference-type="ref" reference="2023_8_22_12_09"} below. Let $(E,\nabla , \{ l^{(i)} \}) \in M_{X}^0$. As a local trivialization of $\mathrm{det}(E)$, we take the isomorphism $$\label{2023_8_24_14_00} \mathrm{det} (\varphi_{\alpha_{q_j}}^{\mathrm{App}}) \colon \mathcal{O}_{U_{\alpha_{q_j}}} \longrightarrow \mathrm{det}(E)|_{U_{\alpha_{q_j}}},$$ which is the determinant of the trivialization in Definition [Definition 13](#2023_7_2_11_52){reference-type="ref" reference="2023_7_2_11_52"}. Notice that the composition $$\mathcal{O}_{U_{\alpha_{q_j}}} \xrightarrow{\omega_{\alpha_{q_j}}^{-1}} ( \Omega^1_C(D))^{-1}|_{U_{\alpha_{q_j}}} \xrightarrow{ \mathrm{det}(\phi_{\nabla})|_{U_{\alpha_{q_j}}} } \mathrm{det}(E)|_{U_{\alpha_{q_j}}} \xrightarrow{\mathrm{det} (\varphi_{\alpha_{q_j}}^{\mathrm{App}})^{-1}} \mathcal{O}_{U_{\alpha_{q_j}}}$$ coincides with $(z_j -q_j) \colon \mathcal{O}_{U_{\alpha_{q_j}}} \rightarrow \mathcal{O}_{U_{\alpha_{q_j}}}$. Let $\mathrm{tr}(A_{\alpha_{q_j}}) \in \Omega^1_C(D)|_{U_{\alpha_{q_j}}}$ be the connection matrix of $(\mathrm{det}(E) ,\mathrm{tr}(\nabla))$ on $U_{\alpha_{q_j}}$ with respect to the local trivialization $\mathrm{det} (\varphi_{\alpha_{q_j}}^{\mathrm{App}})$. Then, by using [\[2023_7\_12_16_04\]](#2023_7_12_16_04){reference-type="eqref" reference="2023_7_12_16_04"}, the map $f_{\mathrm{App}}$ has the following description: $$\label{2023_3_13_17_31} \begin{aligned} f_{\mathrm{App}} \colon (E,\nabla , \{l^{(i)}\} ) \longmapsto \left\{ \left( q_j , \zeta_j \gamma_{\alpha_{q_j}}|_{q_j} + \mathrm{tr}(A_{\alpha_{q_j}})|_{q_j} \right) \right\}_{j=1,2,\ldots,N}, \end{aligned}$$ Here $\zeta_j \gamma_{\alpha_{q_j}}|_{q_j} + \mathrm{tr}(A_{\alpha_{q_j}})|_{q_j}$ is an element of $\Omega^1_C(D)|_{q_j}$. 
We set $$\label{2023_7_12_16_43} p_j := \mathrm{res}_{q_j}\left( \frac{\zeta_j \gamma_{\alpha_{q_j}}}{z_j-q_j} \right) +\mathrm{res}_{q_j}\left( \frac{\mathrm{tr}(A_{\alpha_{q_j}})}{z_j-q_j} \right) ,$$ which is the image of $\zeta_j \gamma_{\alpha_{q_j}}|_{q_j} + \mathrm{tr}(A_{\alpha_{q_j}})|_{q_j}$ under the isomorphism $\Omega^1_C(D)|_{q_j} \cong \mathbb{C}$. **Remark 19**. *This $p_j$ is just the evaluation of the $(2,2)$-entry of the connection matrix $A_{\alpha_{q_j}}$ in [\[2023_3\_6_18_19\]](#2023_3_6_18_19){reference-type="eqref" reference="2023_3_6_18_19"} at $q_j$. Note that the $(2,1)$-entry of this connection matrix $A_{\alpha_{q_j}}$ at $q_j$ vanishes. So $p_j$ is an "eigenvalue" of $\nabla$ at $q_j$. (On the other hand, $\zeta_j$ is an "eigenvector" of $\nabla_0$ at $q_j$). This fact means that the coordinates $(q_j , p_j)_j$ are an analog of the coordinates on the moduli space of (parabolic) Higgs bundles given in [@GNR] and [@Hurt]. The coordinates on the moduli space of (parabolic) Higgs bundles are constructed by using the BNR correspondence [@BNR] (see Section [6](#Sec:Higgs){reference-type="ref" reference="Sec:Higgs"}).* Let $\pi_{c_d, N} \colon \mathrm{Sym}^{N}(\boldsymbol{\Omega}(D,c_d)) \rightarrow \mathrm{Sym}^{N}(C)$ be the map induced by the map $\pi_{c_d} \colon \boldsymbol{\Omega}(D,c_d) \rightarrow C$. We set $$\mathrm{Sym}^{N}(\boldsymbol{\Omega}(D,c_d))_0 := \left\{ \{ (q_j, \tilde{p}_j) \}_{j=1}^N \in \pi_{c_d,N}^{-1} (\mathrm{Sym}^{N}(C \setminus \mathrm{Supp}(D) )) \ \middle| \ q_{j_1} \neq q_{j_2} \ (j_1\neq j_2) \right\}.$$ Then $\mathrm{Sym}^{N}(\boldsymbol{\Omega}(D,c_d))_0$ has the induced symplectic form from the Liouville symplectic form. Notice that by construction the image of $M_{X}^0$ under the map $f_{\mathrm{App}}$ is contained in $\mathrm{Sym}^{N}(\boldsymbol{\Omega}(D,c_d))_0$. **Theorem 20**. *Let $\omega$ be the symplectic form on $M_{X}^0$ defined by [\[2020.11.7.15.48\]](#2020.11.7.15.48){reference-type="eqref" reference="2020.11.7.15.48"}. The pull-back of the symplectic form on $\mathrm{Sym}^N (\boldsymbol{\Omega}(D, c_d))_0$ under the map $$f_{\mathrm{App}} \colon M_{X}^0 \longrightarrow \mathrm{Sym}^{N}(\boldsymbol{\Omega}(D,c_d))_0$$ in Definition [Definition 16](#2023_7_12_23_06){reference-type="ref" reference="2023_7_12_23_06"} coincides with $\omega$.* *Proof.* Let $V$ be an analytic open subset of $M_{X}^0$ as in Section [3.4](#2023_7_12_16_39){reference-type="ref" reference="2023_7_12_16_39"}. Moreover, we assume that we may define a composition $$\begin{aligned} V \longrightarrow f_{\mathrm{App}} (V) & \longrightarrow \mathrm{Sym}^{N}(\mathbb{C}_{(q,p)}^2) \\ (E,\nabla ) \longmapsto f_{\mathrm{App}}(E,\nabla ) & \longmapsto \left\{ (q_j, p_j) \right\}_{j=1,2,\ldots,N}, \end{aligned}$$ where $p_j$ is defined in [\[2023_7\_12_16_43\]](#2023_7_12_16_43){reference-type="eqref" reference="2023_7_12_16_43"}, and the image of $V$ under the composition is isomorphic to some analytic open subset of $\mathbb{C}_{(q,p)}^{2N}$. Let $U_{(q,p)}$ be such an analytic open subset of $\mathbb{C}_{(q,p)}^{2N}$. We denote by $f_2$ the map $$\begin{aligned} M_{X}^0 \supset V &\longrightarrow U_{(q,p)} \subset \mathbb{C}_{(q,p)}^{2N} \\ (E,\nabla) &\longmapsto (q_1,\ldots,q_N, p_1,\ldots, p_N).
\end{aligned}$$ We consider the following maps $$\xymatrix{ U_{(q,\zeta)} & \ar[l]_-{f_1}^-{\sim} V \ar[r]^-{f_2} & U_{(q,p)} \rlap{.} }$$ Here $f_1 \colon V \xrightarrow{\sim} U_{(q,\zeta)}$ is the isomorphism [\[2023_7\_12_16_26\]](#2023_7_12_16_26){reference-type="eqref" reference="2023_7_12_16_26"}. The symplectic structure on $U_{(q,p)}$ induced by the symplectic structure on $\mathrm{Sym}^N (\boldsymbol{\Omega}(D,c_d))$ is $\sum^N_{j=1} d p_j \wedge dq_j$. We will show that $$(f_1^{-1})^* (\omega|_V) = (f_2 \circ f_1^{-1})^*\left( \sum^N_{j=1} d p_j \wedge dq_j \right).$$ Let $v,v'$ be elements of $T_{(q_j,\zeta_j)_j } U_{(q,\zeta)}$ for $(q_j,\zeta_j)_j \in U_{(q,\zeta)}$. We will use the description of the tangent map [\[2023_7\_12_16_59\]](#2023_7_12_16_59){reference-type="eqref" reference="2023_7_12_16_59"} of $f^{-1}_1 \colon U_{(q,\zeta)} \rightarrow V$. That is, we calculate $(f_1^{-1})^* (\omega|_V)$ by applying the descriptions [\[2023_3\_12_23_00\]](#2023_3_12_23_00){reference-type="eqref" reference="2023_3_12_23_00"} and [\[2023_3\_12_22_59\]](#2023_3_12_22_59){reference-type="eqref" reference="2023_3_12_22_59"} of $u_{\alpha\beta}(v)$ and $v_{\alpha}(v)$, respectively. First we consider $\{ u_{\alpha\beta}(v)u_{\beta\gamma} (v')\}_{\alpha\beta\gamma}$. Remark that $U_{\alpha_{q_{j_1}}} \cap U_{\alpha_{q_{j_2}}} = \emptyset$ for any $j_1$ and $j_2$, $U_{\alpha_{t_{i_1}}} \cap U_{\alpha_{t_{i_2}}} = \emptyset$ for any $i_1$ and $i_2$, and $U_{\alpha_{q_{j}}} \cap U_{\alpha_{t_{i}}} = \emptyset$ for any $j$ and $i$. Then we have $u_{\alpha\beta}u_{\beta\gamma}=0$ by Lemma [Lemma 14](#2023_3_20_20_28){reference-type="ref" reference="2023_3_20_20_28"}. So we may take a representative of the class in the pairing [\[2020.11.7.15.48\]](#2020.11.7.15.48){reference-type="eqref" reference="2020.11.7.15.48"} so that $$[- \{ \mathrm{tr} (u_{\alpha\beta}(v) \circ v_{\beta}(v')) - \mathrm{tr} (v_{\alpha} (v) \circ u_{\alpha\beta}(v'))\}_{\alpha\beta} ] \in H^1(C,\Omega^1_C) \cong \mathbb{C}.$$ Now we calculate $\mathrm{tr} (u_{\alpha\beta}(v) \circ v_{\beta}(v')) - \mathrm{tr} (v_{\alpha} (v) \circ u_{\alpha\beta}(v'))$. 
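In the case-by-case computation below, the cocycle conditions that we will invoke all come from the following first-order identity; we record it here as a convenient sketch, using only the definition of $B_{\alpha\beta}$ in [\[2023_7\_2_11_55\]](#2023_7_2_11_55){reference-type="eqref" reference="2023_7_2_11_55"} and the definition of the connection matrices. Since $\varphi^{\text{App}}_{\beta} = \varphi^{\text{App}}_{\alpha} \circ B_{\alpha\beta}$ on $U_{\alpha\beta}$, the connection matrices satisfy the gauge relation $A_{\beta} = B_{\alpha\beta}^{-1} A_{\alpha} B_{\alpha\beta} + B_{\alpha\beta}^{-1} \operatorname{d}\! B_{\alpha\beta}$, and differentiating this relation along the tangent vector $v$ gives $$v(A_{\beta}) - B_{\alpha\beta}^{-1}\, v(A_{\alpha})\, B_{\alpha\beta} = \operatorname{d}\! \left( B_{\alpha\beta}^{-1} v(B_{\alpha\beta}) \right) + \left[ \, A_{\beta} \, , \, B_{\alpha\beta}^{-1} v(B_{\alpha\beta}) \, \right].$$ Its analogue for $(\mathrm{det}(E), \mathrm{tr}(\nabla))$ is the cocycle condition used in the case $\beta = \alpha_{q_j}$, and, up to the moduli-independent diagonal factor of $B_{\alpha\alpha_{t_i}}$, it is the condition used in the case $\beta = \alpha_{t_i}$.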
If $\alpha \in I_{\mathrm{cov}}\setminus (I^t_{\mathrm{cov}} \cup I^q_{\mathrm{cov}})$ and $\beta = \alpha_{q_j}$, then, by applying [\[2023_3\_12_23_00\]](#2023_3_12_23_00){reference-type="eqref" reference="2023_3_12_23_00"} and [\[2023_3\_12_22_59\]](#2023_3_12_22_59){reference-type="eqref" reference="2023_3_12_22_59"}, we have the following equalities $$\label{2023_3_12_23_44} \begin{aligned} & \mathrm{tr} (u_{\alpha\alpha_{q_j}}(v) v_{\alpha_{q_j}}(v')) - \mathrm{tr} (v_{\alpha} (v) u_{\alpha\alpha_{q_j}}(v')) \\ &= \mathrm{tr} \left( \begin{pmatrix} 0 & \frac{v(\zeta_j)}{z_j -q_j} \\ 0 & \frac{v(q_j)}{z_j -q_j} \end{pmatrix} \begin{pmatrix} * & * \\ -v'(q_j) \gamma_{\alpha_{q_j}} & v'(\mathrm{tr}(A_{\alpha_{q_j}})) + v'(\zeta_j)\gamma_{\alpha_{q_j}} \end{pmatrix} \right) \\ &\qquad - \mathrm{tr} \left( \begin{pmatrix} * & * \\ 0 & v(\mathrm{tr}(A_{\alpha})) \end{pmatrix} \begin{pmatrix} 0 & \frac{v'(\zeta_j)}{z_j -q_j} \\ 0 & \frac{v'(q_j)}{z_j -q_j} \end{pmatrix}\right) \\ &= -\frac{v(\zeta_j)v'(q_j) \gamma_{\alpha_{q_j}}}{z_j -q_j} + \frac{v(q_j) \left(v'(\mathrm{tr}(A_{\alpha_{q_j}})) +v'(\zeta_i)\gamma_{\alpha_{q_j}} \right)}{z_j -q_j} - \frac{v'(q_j)\left(v(\mathrm{tr}(A_{\alpha})) \right)}{z_j -q_j}\\ &= -\frac{ \left( v(\zeta_j) \gamma_{\alpha_{q_j}} + v(\mathrm{tr}(A_{\alpha})) \right) v'(q_j) }{z_j -q_j} + \frac{v(q_j) \left(v'(\mathrm{tr}(A_{\alpha_{q_j}})) +v'(\zeta_i)\gamma_{\alpha_{q_j}} \right)}{z_j -q_j} . \end{aligned}$$ Now we consider the difference between $v(\mathrm{tr}(A_{\alpha_{q_j}}))$ and $v(\mathrm{tr}(A_{\alpha}))$. So we consider infinitesimal deformation of $(\mathrm{det}(\nabla) , \mathrm{tr}(\nabla))$. We have that $$\mathrm{det} (B_{\alpha\alpha_{q_j}}) = \mathrm{det}\left( \begin{pmatrix} 1 & 0 \\ 0 & ( (\omega_{\alpha}^{-1})^{-1} \circ \omega_{\alpha_{q_j}}^{-1} ) \end{pmatrix} \begin{pmatrix} 1 & \frac{\zeta_j}{z_j-q_j} \\ 0& \frac{1}{z_j-q_j} \end{pmatrix} \right) = \frac{( (\omega_{\alpha}^{-1})^{-1} \circ \omega_{\alpha_{q_j}}^{-1} )}{z_j-q_j}.$$ Here $B_{\alpha\alpha_{q_j}}$ is calculated in [\[2023_3\_12_23_33\]](#2023_3_12_23_33){reference-type="eqref" reference="2023_3_12_23_33"}. Set $$\label{2023_3_18_22_48} u^{\mathrm{det}}_{\alpha\alpha_{q_j}}(v) := \mathrm{det} (B_{\alpha\alpha_{q_j}})^{-1} v(\mathrm{det} (B_{\alpha\alpha_{q_j}})) = \frac{v(q_j)}{z_j -q_j} .$$ Here remark that $( (\omega_{\alpha}^{-1})^{-1} \circ \omega_{\alpha_{q_j}}^{-1} )$ is independent of the moduli space $M_{X}^0$. We have a cocycle condition $$v(\mathrm{tr}(A_{\alpha_{q_j}})) -v(\mathrm{tr}(A_{\alpha})) = \mathrm{tr} (\nabla) \circ u^{\mathrm{det}}_{\alpha\alpha_{q_j}} - u^{\mathrm{det}}_{\alpha\alpha_{q_j}} \circ \mathrm{tr} (\nabla) .$$ So we have $$v(\mathrm{tr}(A_{\alpha_{q_j}})) -v(\mathrm{tr}(A_{\alpha})) = \operatorname{d} \left( \frac{v(q_j)}{z_j -q_j} \right) = - \frac{v(q_j)\operatorname{d}\!z_j}{(z_j -q_j)^2} .$$ By applying this difference to [\[2023_3\_12_23_44\]](#2023_3_12_23_44){reference-type="eqref" reference="2023_3_12_23_44"}, we have that $$\label{2023_3_13_9_7} \begin{aligned} & \mathrm{tr} (u_{\alpha\alpha_{q_j}}(v) v_{\alpha_{q_j}}(v')) - \mathrm{tr} (v_{\alpha} (v) u_{\alpha\alpha_{q_j}}(v')) \\ &= -\frac{ \left( v(\zeta_j) \gamma_{\alpha_{q_j}} + v(\mathrm{tr}(A_{\alpha_{q_j}})) \right) v'(q_j) }{z_j -q_j} + \frac{v(q_j) \left(v'(\mathrm{tr}(A_{\alpha_{q_j}})) +v'(\zeta_i)\gamma_{\alpha_{q_j}} \right)}{z_j -q_j} - \frac{ v(q_j ) v'(q_j) \operatorname{d}\! z_j}{(z_j -q_j)^3} . 
\end{aligned}$$ So we may extend the 1-form $$\mathrm{tr} (u_{\alpha\alpha_{q_j}}(v) v_{\alpha_{q_j}}(v')) - \mathrm{tr} (v_{\alpha} (v) u_{\alpha\alpha_{q_j}}(v'))$$ from $U_{\alpha\alpha_{q_j}}$ to $U_{\alpha_{q_j}}$ by [\[2023_3\_13_9\_7\]](#2023_3_13_9_7){reference-type="eqref" reference="2023_3_13_9_7"}. Then we have a meromorphic 1-form defined on $U_{\alpha_{q_j}}$, which has a pole at $q_j$. We denote by $\omega_{\alpha_{q_j}}(v,v')$ the meromorphic 1-form defined on $U_{\alpha_{q_j}}$. Next we consider the case where $\alpha \in I_{\mathrm{cov}}\setminus (I^t_{\mathrm{cov}} \cup I^q_{\mathrm{cov}})$ and $\beta = \alpha_{t_i}$. We have the following equalities $$\begin{aligned} & \mathrm{tr} (u_{\alpha\alpha_{t_i}}(v) v_{\alpha_{t_i}}(v')) - \mathrm{tr} (v_{\alpha} (v) u_{\alpha\alpha_{t_i}}(v')) \\ &= \mathrm{tr} \left( (g^{t_i}_{\alpha_{t_i}})^{-1} v(g^{t_i}_{\alpha_{t_i}}) v'(A_{\alpha_{t_i}}) \right) - \mathrm{tr} \left( \left( (g^{t_i}_{\alpha_{t_i}} )^{-1} v(A_{\alpha}) g^{t_i}_{\alpha_{t_i}}\right) (g^{t_i}_{\alpha_{t_i}})^{-1} v'(g^{t_i}_{\alpha_{t_i}}) \right) \end{aligned}$$ We have the cocycle condition $$\begin{aligned} &v(A_{\alpha_{t_i}}) - (g^{t_i}_{\alpha_{t_i}})^{-1}v(A_{\alpha}) (g^{t_i}_{\alpha_{t_i}}) \\ &= (\operatorname{d} + A_{\alpha_{t_i}}) \circ \left( (g^{t_i}_{\alpha_{t_i}})^{-1} v(g^{t_i}_{\alpha_{t_i}}) \right) -\left( (g^{t_i}_{\alpha_{t_i}})^{-1} v(g^{t_i}_{\alpha_{t_i}}) \right) \circ (\operatorname{d} + A_{\alpha_{t_i}})\\ &= \operatorname{d} \left( (g^{t_i}_{\alpha_{t_i}})^{-1} v(g^{t_i}_{\alpha_{t_i}}) \right) + \left[ \, A_{\alpha_{t_i}} , \, \left( (g^{t_i}_{\alpha_{t_i}})^{-1} v(g^{t_i}_{\alpha_{t_i}}) \right) \, \right]. \end{aligned}$$ By this condition, we have $$\label{2023_3_13_17_15} \begin{aligned} & \mathrm{tr} (u_{\alpha\alpha_{t_i}}(v) v_{\alpha_{t_i}}(v')) - \mathrm{tr} (v_{\alpha} (v) u_{\alpha\alpha_{t_i}}(v')) \\ &= \mathrm{tr} \left( (g^{t_i}_{\alpha_{t_i}})^{-1} v(g^{t_i}_{\alpha_{t_i}}) v'(A_{\alpha_{t_i}}) \right) - \mathrm{tr} \left( v(A_{\alpha_{t_i}})(g^{t_i}_{\alpha_{t_i}})^{-1} v'(g^{t_i}_{\alpha_{t_i}}) \right) \\ &\qquad + \mathrm{tr} \left( \left( \operatorname{d} \left( (g^{t_i}_{\alpha_{t_i}})^{-1} v(g^{t_i}_{\alpha_{t_i}}) \right) + \left[ \, A_{\alpha_{t_i}} , \, \left( (g^{t_i}_{\alpha_{t_i}})^{-1} v(g^{t_i}_{\alpha_{t_i}}) \right) \right] \right) (g^{t_i}_{\alpha_{t_i}})^{-1} v'(g^{t_i}_{\alpha_{t_i}}) \right) \end{aligned}$$ So we may extend the 1-form $$\mathrm{tr} (u_{\alpha\alpha_{t_i}}(v) v_{\alpha_{t_i}}(v')) - \mathrm{tr} (v_{\alpha} (v) u_{\alpha\alpha_{t_i}}(v'))$$ from $U_{\alpha\alpha_{t_i}}$ to $U_{\alpha_{t_i}}$ by [\[2023_3\_13_17_15\]](#2023_3_13_17_15){reference-type="eqref" reference="2023_3_13_17_15"}. Since we have the vanishing of the lower terms [\[2023_3\_13_17_13\]](#2023_3_13_17_13){reference-type="eqref" reference="2023_3_13_17_13"}, the extended 1-form defined on $U_{\alpha_{t_i}}$ is holomorphic. We denote by $\omega_{\alpha_{t_i}}(v,v')$ the holomorphic 1-form defined on $U_{\alpha_{t_i}}$. For $\alpha \in I_{\mathrm{cov}}\setminus (I^t_{\mathrm{cov}} \cup I^q_{\mathrm{cov}})$, we set $\omega_{\alpha}(v,v') =0$. 
By [\[2023_3\_13_9\_7\]](#2023_3_13_9_7){reference-type="eqref" reference="2023_3_13_9_7"} and [\[2023_3\_13_17_15\]](#2023_3_13_17_15){reference-type="eqref" reference="2023_3_13_17_15"}, we have a meromorphic coboundary $\{ \omega_{\alpha}(v,v')\}_{\alpha}$ of $$\{ \mathrm{tr} (u_{\alpha\beta}(v) \circ v_{\beta}(v')) - \mathrm{tr} (v_{\alpha} (v) \circ u_{\alpha\beta}(v'))\}_{\alpha\beta}.$$ So we have $$\begin{aligned} H^1(C,\Omega_C^1) &\xrightarrow{ \ \cong \ }\mathbb{C} \\ [ -\{ \mathrm{tr} (u_{\alpha\beta}(v) \circ v_{\beta}(v')) - \mathrm{tr} (v_{\alpha} (v) \circ u_{\alpha\beta}(v'))\}_{\alpha\beta} ] &\longmapsto \sum_{x \in C } - \mathrm{res}_{x} \left(\omega_{\alpha}(v,v') \right) . \end{aligned}$$ By taking the residues of the right hand sides of [\[2023_3\_13_9\_7\]](#2023_3_13_9_7){reference-type="eqref" reference="2023_3_13_9_7"} and [\[2023_3\_13_17_15\]](#2023_3_13_17_15){reference-type="eqref" reference="2023_3_13_17_15"}, we have that $$\begin{aligned} -\sum_{x \in C} \mathrm{res}_{x} \left(\omega_{\alpha}(v,v') \right) &=\sum_{j=1}^N \mathrm{res}_{q_j} \left( \frac{ \left( v(\zeta_j) \gamma_{\alpha_{q_j}} + v(\mathrm{tr}(A_{\alpha_{q_j}})) \right) v'(q_j) }{z_j -q_j} \right) \\ &\qquad - \sum_{j=1}^N\mathrm{res}_{q_j} \left( \frac{v(q_j) \left(v'(\mathrm{tr}(A_{\alpha_{q_j}})) +v'(\zeta_i)\gamma_{\alpha_{q_j}} \right)}{z_j -q_j} \right) \\ &= \sum_{j=1}^N \left( v(p_j) v'(q_j) - v(q_j) v'(p_j) \right) = \left( \sum^N_{j=1} d p_j \wedge dq_j \right) (v,v'). \end{aligned}$$ Here remark that $\gamma_{\alpha}$ is independent of the moduli space $M_{X}^0$ for any $\alpha$. ◻ By the map $f_{\mathrm{App}}$, we have concrete canonical coordinates as follows. We take an analytic open subset $V$ of $M^0_X$ at a point $(E,\nabla)$, which is small enough. We define functions $q_j$ and $p_j$ ($j=1,2,\ldots,N$) on $V$ as follows. (So, here, the notation $q_j$ has a double meaning). Let $U_{\alpha_{q_j}}$ be an analytic open subset of $C$ such that $U_{\alpha_{q_j}}$ contains the apparent singularity $q_j$ of the point $(E,\nabla)$ and is small enough. Let $q_j'$ be the apparent singularity of each $(E',\nabla') \in V$, where $q_j' \in U_{\alpha_{q_j}}$. First we take a local coordinate $z_j$ on $U_{\alpha_{q_j}}$. By evaluating the apparent singularity $q_j'$ by the local coordinate $z_j$ for each $(E',\nabla') \in V$, we have a function $q_j \colon V \rightarrow \mathbb{C}$. Second, let $(E_V, \nabla_V)$ be a vector bundles on $C \times V$, which is a family of vector bundles on $C$ parametrized by $V$. We take a trivialization of $\det(E_V)$ on $U_{\alpha_{q_j}} \times V$ which depends on only $q_j \colon V \rightarrow \mathbb{C}$ (which is described in [\[2023_8\_24_14_00\]](#2023_8_24_14_00){reference-type="eqref" reference="2023_8_24_14_00"}). We take the connection matrix of $\mathrm{tr} (\nabla_V)$ with respect to the local trivialization. Let $\boldsymbol{\Omega}(D,c_d)_{V} \rightarrow C \times V$ be the relative twisted cotangent bundle over $V$ with respect to the family of line bundles $\det(E_V)$ on $C \times V$. We have an identification between $\boldsymbol{\Omega}(D,c_d)_V$ and $\boldsymbol{\Omega}(D) \times V$ on $U_{\alpha_{q_j}} \times V$ that depends only on $q_j \colon V \rightarrow \mathbb{C}$. 
By evaluating $\mathrm{res}_{q_j'} (\beta') + \mathrm{tr}(\nabla')|_{q_j'}$ by the identification $\boldsymbol{\Omega}(D,c_d)|_{q_j'} \cong \boldsymbol{\Omega}(D)|_{q_j'} \cong \mathbb{C}$ for each $(E',\nabla') \in V$, we have a function $p_j \colon V \rightarrow \mathbb{C}$. This is just [\[2023_7\_12_16_43\]](#2023_7_12_16_43){reference-type="eqref" reference="2023_7_12_16_43"}. That is, this is the following composition: $$\begin{aligned} V &\longrightarrow U_{\alpha_{q_j}} \times V \longrightarrow \boldsymbol{\Omega}(D,c_d)_V|_{U_{\alpha_{q_j}} \times V} \longrightarrow \boldsymbol{\Omega}(D)|_{U_{\alpha_{q_j}}} \longrightarrow \mathbb{C} \\ (E',\nabla') &\longmapsto (q_j', (E',\nabla')) \longmapsto \left( (\zeta_j \gamma_{\alpha_{q_j}} )_V + \mathrm{tr}(\nabla_V) \right)|_{(q_j', (E',\nabla'))} \\ &\qquad \longmapsto \left( \zeta_j' \gamma_{\alpha_{q_j}} + \mathrm{tr}(A'_{\alpha_{q_j}}) \right)|_{q_j'} \longmapsto \mathrm{res}_{q'_j} \left( \frac{\zeta'_j \gamma_{\alpha_{q_j}} +\mathrm{tr}(A'_{\alpha_{q_j}}) }{z_j-q'_j} \right). \end{aligned}$$ By Theorem [Theorem 20](#2023_8_22_12_09){reference-type="ref" reference="2023_8_22_12_09"}, the symplectic structure on $V$ has the following description: $\sum_{j=1}^N \operatorname{d}\! p_j \wedge \operatorname{d}\! q_j$. **Remark 21**. *We set $$p_j^0 := \mathrm{res}_{q_j} \left( \frac{\zeta_j \gamma_{\alpha_{q_j}} }{z_j-q_j} \right) \in \mathbb{C}.$$ If $g=0$, then $\mathrm{res}_{q_j}\left( \frac{\mathrm{tr}(A_{\alpha_{q_j}})}{z_j-q_j} \right)$ depends on only $q_j$. So we have $\sum^N_{j=1} \operatorname{d}\! p_j \wedge \operatorname{d}\! q_j =\sum^N_{j=1} \operatorname{d}\! p^0_j \wedge \operatorname{d}\! q_j$. Here the symplectic form $\sum^N_{j=1} \operatorname{d}\! p^0_j \wedge \operatorname{d}\! q_j$ is induced by the symplectic form on $\mathrm{Sym}^N (\boldsymbol{\Omega}(D))_0$.* **Remark 22**. *In general, $\sum^N_{j=1} \operatorname{d}\! p_j \wedge \operatorname{d}\! q_j \not= \sum^N_{j=1} \operatorname{d}\! p^0_j \wedge \operatorname{d}\! q_j$, that is, $$\label{2023_3_18_22_44} \sum_{j} \operatorname{d}\! \left(\mathrm{res}_{q_j}\left( \frac{\mathrm{tr}(A_{\alpha_{q_j}})}{z_j-q_j} \right) \right) \wedge \operatorname{d}\! q_j$$ does not vanish. This is related to the determinant map $$\begin{aligned} M_{X}^0 &\longrightarrow M^{\mathrm{rk}=1}_X(\boldsymbol{\nu}_{\textrm{res}}) \\ (E,\nabla,\{l^{(i)}\}) &\longmapsto (\mathrm{det}(E), \mathrm{tr}(\nabla)). \end{aligned}$$ The 2-form [\[2023_3\_18_22_44\]](#2023_3_18_22_44){reference-type="eqref" reference="2023_3_18_22_44"} comes from $$\left[\{ u^{\mathrm{det}}_{\alpha\beta}(v)u^{\mathrm{det}}_{\beta\gamma}(v') \} , -\{ u^{\mathrm{det}}_{\alpha\beta}(v)v'(\mathrm{tr}(A_{\beta})) - v(\mathrm{tr}(A_{\alpha})) u^{\mathrm{det}}_{\alpha\beta}(v') \} \right] \in {\mathbf H}^2 (\mathcal{O}_C \rightarrow \Omega^1_C).$$ Here $u^{\mathrm{det}}_{\alpha\beta}(v)$ is defined as in [\[2023_3\_18_22_48\]](#2023_3_18_22_48){reference-type="eqref" reference="2023_3_18_22_48"}. This class gives rise to the $2$-form on $M_{X}^0$ which is just the pull-back of the natural symplectic form on $M^{\mathrm{rk}=1}_X(\boldsymbol{\theta}_{\textrm{res}})$ under the determinant map. The determinant map is not degenerate in general. 
So the class [\[2023_3\_18_22_44\]](#2023_3_18_22_44){reference-type="eqref" reference="2023_3_18_22_44"} does not vanish in general.* # Symplectic structure on the moduli space with fixed trace connection {#sect:sympl_FixedDet} In this section, we consider the moduli spaces of rank 2 quasi-parabolic connections *with fixed trace connection*. When the effective divisor $D$ is reduced, this moduli space is detailed in [@AL], [@LS] (when $g=0$), [@FL], [@FLM] (when $g=1$), and [@Matsu] (when $g\geq 1$). The moduli spaces of rank 2 quasi-parabolic connections with fixed trace connection has a natural symplectic structure described as in Section [3.2](#subsect:symplecticGL2){reference-type="ref" reference="subsect:symplecticGL2"}. The purpose of this section is to give coordinates on some generic part of the moduli space and to describe the natural symplectic structure by using the coordinates. As in the case where the effective divisor $D$ is reduced ([@LS], [@FL], [@FLM], [@Matsu]), we may define the map forgetting connections and the apparent map. These maps are from a generic part of the moduli space to projective spaces. These maps will give our coordinates on the generic part of the moduli space. First we describe these maps. ## Moduli space of quasi-parabolic bundles with fixed determinant {#SubSect:ModuliParaBunSL2} To describe the map forgetting connections, we recall the moduli space of quasi-parabolic bundles. The moduli space of (quasi-)parabolic bundles was introduced in Mehta--Seshadri [@MS]. Yokogawa generalized this notion to (quasi-)parabolic sheaves and studied their moduli [@Yoko1]. Let $\nu$ be a positive integer. Set $I:=\{ 1,2,\ldots,\nu\}$. Let $C$ be a compact Riemann surface of genus $g$, and $D = \sum_{i\in I} m_i [t_i]$ be an effective divisor on $C$. We assume $3g-3+n>0$ where $n=\operatorname{length}(D)$. Let $z_i$ be a generator of the maximal ideal of $\mathcal{O}_{C,t_i}$. We fix a line bundle $L_0$ with $\deg(L_0) =2g-1$. **Definition 23**. *We say $(E, \{l^{(i)}\} )$ a rank $2$ quasi-parabolic bundle with determinant $L_0$ over $(C,D)$ if* - *$E$ is a rank $2$ vector bundle of degree $2g-1$ on $C$ with $\det(E) \cong L_0$, and* - *$E|_{m_i[t_i]} \supset l^{(i)} \supset 0$ is a filtration by free $\mathcal{O}_{m_i[t_i]}$-modules such that $E|_{m_i[t_i]}/ l^{(i)}\cong \mathcal{O}_{m_i[t_i]}$ and $l^{(i)}\cong \mathcal{O}_{m_i[t_i]}$ for any $i \in I$.* We fix weights $\boldsymbol{w} = (w_1,\ldots , w_{\nu})$ such that $w_i \in [0,1]$ for any $i \in I$. When $g=0$, we assume that $(w_i)_{i\in I}$ satisfies $$\label{2023_7_10_11_59(1)} w_1=\cdots = w_{\nu} \quad \text{and}\quad \frac{1}{\deg(D)} <w_i < \frac{1}{\deg(D)-2}.$$ When $g\geq 1$, we assume that $(w_i)_{i\in I}$ satisfy $$\label{2023_7_10_11_59(2)} 0 <w_i \ll 1.$$ **Definition 24**. *Let $(E, \{l^{(i)}\} )$ be a rank $2$ quasi-parabolic bundle with determinant $L_0$. Let $L$ be a line subbundle of $E$. We define the $\boldsymbol{w}$-stability index of $L$ to be the real number $$\mathrm{Stab}_{\boldsymbol{w} } (L) := \deg (E) - 2 \deg(L) + \sum_{i\in I} w_{i} \left(m_i - 2 \, \mathrm{length} (l_{i}\cap L|_{m_i[t_i]}) \right).$$* **Definition 25**. 
*A rank $2$ quasi-parabolic bundle $(E, \{l^{(i)}\} )$ is $\boldsymbol{w}$-stable if for any subbundle $L \subset E$, the inequality $\mathrm{Stab}_{\boldsymbol{w}}(L)>0$ holds.* We say that a quasi-parabolic bundle $(E, \{ l^{ (i) } \} )$ is decomposable if there exists a decomposition $E =L_1 \oplus L_2$ such that $l^{(i)} = l^{(i)}_{1}$ or $l^{(i)} = l^{(i)}_{2}$ for any $i\in I$, where we set $l^{(i)}_{1} := l^{(i)}\cap (L_1|_{m_i[t_i]})$ and $l^{(i)}_{2} := l^{(i)}\cap (L_2|_{m_i[t_i]})$. We say that $(E, \{ l^{ (i)} \} )$ is undecomposable if $(E, \{ l^{ (i) } \} )$ is not decomposable. A free $\mathcal{O}_{m_i [t_i]}$-submodule $l^{ (i) }$ of $E|_{m_i [t_i]}$ induces a one dimensional subspace $l^{(i)}_{\mathrm{red}}$ of $E|_{t_i}$, that is the restriction of $l^{(i)}$ to $t_i$ (without multiplicity). **Lemma 26**. *Let $(E, \{l^{(i)}\} )$ be a rank $2$ quasi-parabolic bundle with determinant $L_0$. If* - *$E$ is an extension of ${L_0}$ by $\mathcal{O}_C$ (when $g=0$, moreover we assume that $(E, \{l^{(i)}\} )$ is undecomposable)* - *$\dim_{\mathbb{C}} H^1(C,E) =0$* - *$l_{\text{red}}^{(i)} \not\in \mathcal{O}_{C}|_{t_i} \subset \mathbb{P}(E)$ for any $i$,* *then $(E, \{l^{(i)}\} )$ is $\boldsymbol{w}$-stable.* *Proof.* When $g=0$, we have this statement from [@KLS Proposition 46] by the condition [\[2023_7\_10_11_59(1)\]](#2023_7_10_11_59(1)){reference-type="eqref" reference="2023_7_10_11_59(1)"}. When $g\geq 1$, we have that $E$ is stable, that is, $\deg(E) - 2\deg(L)$ is a positive integer for any line subbundle $L \subset E$. This claim follows from the same argument as in [@Matsu Lemma 4.2]. Since $0 <w_i \ll 1$ in [\[2023_7\_10_11_59(2)\]](#2023_7_10_11_59(2)){reference-type="eqref" reference="2023_7_10_11_59(2)"}, we have that $\mathrm{Stab}_{\boldsymbol{w}}(L)>0$. ◻ Let $P_{(C,D)}^{\boldsymbol{w}}$ be a moduli space of $\boldsymbol{w}$-stable quasi-parabolic bundles constructed in [@Yoko1]. Let $P_{(C,D)}^{\boldsymbol{w}}(L_0)$ be the fiber of $L_0$ under the determinant map $$P_{(C,D)}^{\boldsymbol{w}} \longrightarrow \mathrm{Pic}_C^{2g-1}; \quad (E, \{l^{(i)}\} ) \longmapsto \det(E).$$ We set $$P_{(C,D)}(L_0)_0 := \left\{ (E,\{ l^{(i)}\} ) \ \middle| \ \begin{array}{l} \text{$(E,\{ l^{(i)}\} )$ is rank $2$ quasi-parabolic bundle over $(C,D)$ such that}\\ \text{(i) $\det (E) \cong L_0$,\quad (ii) $E$ is an extension of ${L_0}$ by $\mathcal{O}_C$,} \\ \text{(iii) $\dim_{\mathbb{C}} H^1(C,E) =0$,\quad (iv) $l_{\text{red}}^{(i)} \not\in \mathcal{O}_{C}|_{t_i} \subset \mathbb{P}(E)$ for any $i$,} \\ \text{(v) $(E,\{ l^{(i)}\} )$ is undecomposable (when $g=0$)} \end{array} \right\}.$$ By Lemma [Lemma 26](#2023_7_10_12_15){reference-type="ref" reference="2023_7_10_12_15"}, we have an inclusion $$P_{(C,D)}(L_0)_0 \subset P_{(C,D)}^{\boldsymbol{w}}(L_0).$$ For $(E,\{ l^{(i)}\} ) \in P_{(C,D)}(L_0)_0$, we have an extension $$\label{2023_7_10_13_56} 0 \longrightarrow \mathcal{O}_C \longrightarrow E \longrightarrow L_0 \longrightarrow 0.$$ Since $\dim_{\mathbb{C}} H^1(C,E) =0$, we have that $\dim_{\mathbb{C}} H^0(C,E) =1$. So the injection $\mathcal{O}_C \xrightarrow{\subset} E$ in [\[2023_7\_10_13_56\]](#2023_7_10_13_56){reference-type="eqref" reference="2023_7_10_13_56"} is unique up to a constant. **Definition 27**. *Let $(E,\{ l^{(i)}\} ) \in P_{(C,D)}(L_0)_0$. We take an affine open covering $\{ U_{\alpha} \}_{\alpha}$ of $C$, i.e. $C = \bigcup_{\alpha} U_{\alpha}$. 
Let $\{\varphi_{\alpha}^{\mathrm{Ext}}\}_{\alpha}$ be trivializations $\varphi_{\alpha}^{\mathrm{Ext}} \colon \mathcal{O}_{U_{\alpha}}^{\oplus 2} \rightarrow E|_{U_{\alpha}}$ of the underlying vector bundle $E$ such that* - *the composition $$\begin{aligned} \mathcal{O}_{U_{\alpha}} &\longrightarrow \mathcal{O}_{U_{\alpha}}^{\oplus 2} \xrightarrow{ \varphi_{\alpha}^{\mathrm{Ext}} } E|_{U_{\alpha}} \\ f &\longmapsto (f,0) \end{aligned}$$ is just the inclusion $\mathcal{O}_C \subset E$ of the extension [\[2023_7\_10_13_56\]](#2023_7_10_13_56){reference-type="eqref" reference="2023_7_10_13_56"} for any $\alpha$, and* - *the image of the composition $$\begin{aligned} \mathcal{O}_{U_{\alpha}} &\longrightarrow \mathcal{O}_{U_{\alpha}}^{\oplus 2} \xrightarrow{ \varphi_{\alpha}^{\mathrm{Ext}} } E|_{U_{\alpha}} \longrightarrow E|_{m_i[t_i]} \\ f &\longmapsto (0,f) \end{aligned}$$ generates the submodule $l^{(i)} \subset E|_{m_i[t_i]}$ for each $i$ and $\alpha$ where $t_i \in U_{\alpha}$.* Notice that the claim that we may take $\varphi_{\alpha}^{\mathrm{Ext}}$ which satisfies the condition (ii) of Definition [Definition 27](#2023_7_10_12_29){reference-type="ref" reference="2023_7_10_12_29"} follows from the condition that $l_{\text{red}}^{(i)} \not\in \mathcal{O}_{C}|_{t_i} \subset \mathbb{P}(E)$ for any $i$. Now we define a map $$\label{2023_7_10_12_44} P_{(C,D)}(L_0)_0 \longrightarrow \mathbb{P} H^1(C, L_0^{-1}(-D))$$ as follows. Let $\{\varphi_{\alpha}^{\mathrm{Ext}}\}_{\alpha}$ be the trivializations in Definition [Definition 27](#2023_7_10_12_29){reference-type="ref" reference="2023_7_10_12_29"}. We have the transition matrices $$B_{\alpha\beta} := (\varphi_{\alpha}^{\mathrm{Ext}} |_{U_{\alpha\beta}})^{-1} \circ \varphi_{\beta}^{\mathrm{Ext}} |_{U_{\alpha\beta}} \colon \mathcal{O}_{U_{\alpha\beta}}^{\oplus 2} \longrightarrow \mathcal{O}_{U_{\alpha\beta}}^{\oplus 2} .$$ We represent $B_{\alpha\beta}$ as a matrix: $$\label{2023_7_10_20_51(1)} B_{\alpha\beta} = \begin{pmatrix} 1 & b^{12}_{\alpha\beta} \\ 0 & b^{22}_{\alpha\beta} \end{pmatrix}.$$ Remark that $\{ b^{22}_{\alpha\beta}\}_{\alpha\beta}$ is a multiplicative cocycle which defines the fixed line bundle $L_0$. We take a meromorphic coboundary $$\label{2023_7_10_12_37} \{ b^{22}_{\alpha} \}_{\alpha} \quad \left( \text{where } b^{22}_{\alpha\beta} = \frac{b^{22}_{\alpha}}{b^{22}_{\beta}} \right)$$ of the multiplicative cocycle $\{ b^{22}_{\alpha\beta}\}_{\alpha\beta}$. By using the coboundary $\{ b^{22}_{\alpha} \}_{\alpha}$, we define a cocycle $$\label{2023_7_10_13_12(1)} b^{\text{Bun}}_{\alpha\beta} := b^{12}_{\alpha\beta}b^{22}_{\alpha},$$ which gives a class $[\{ b^{\text{Bun}}_{\alpha\beta}\} ] \in H^1(C, L_0^{-1} (-D))$. Then we have a map [\[2023_7\_10_12_44\]](#2023_7_10_12_44){reference-type="eqref" reference="2023_7_10_12_44"}: $$(E,\{ l^{(i)}\} ) \longmapsto \overline{[\{ b^{\text{Bun}}_{\alpha\beta}\} ]}.$$ ## Moduli space of quasi-parabolic connections with fixed trace connection {#SubSect:ModuliSL2} Now we recall the moduli space of quasi-parabolic connections. We fix an irregular curve with residues $X=(C,D,\{ z_i \} , \{\boldsymbol{\theta}_{i} \}, \boldsymbol{\theta}_{\mathrm{res}})$ defined in Definition [Definition 2](#2023_7_14_23_01){reference-type="ref" reference="2023_7_14_23_01"}. 
Moreover we assume that $$\label{2023_8_24_8_47} \sum_{i \in I}\theta_{i,-1}^{-} \neq 0.$$ Let $({L_0},\nabla_{L_0} \colon {L_0} \rightarrow {L_0}\otimes \Omega^1_C(D))$ be a rank $1$ connection on $C$ with degree $2g-1$ such that the polar part of $\nabla_{L_0}$ at $t_i$ is $\mathrm{tr} (\omega_i(X ))$. **Definition 28**. *We say $(E,\nabla , \lambda, \{l^{(i)}\} )$ a rank $2$ quasi-parabolic $\lambda$-connection over $X$ with fixed trace connection $({L_0},\nabla_{L_0})$ if* - *$E$ is a rank $2$ vector bundle on $C$ with $\det(E) \cong L_0$,* - *$\lambda \in \mathbb{C}$ and $\nabla \colon E \rightarrow E \otimes \Omega^1_{C}(D)$ is a $\lambda$-connection that is, $\nabla (fs ) = \lambda s \otimes df + f \nabla (s)$ for any $f \in \mathcal{O}_C$ and $s \in E$, and* - *$\nabla (s_1) \wedge s_2 + s_1 \wedge \nabla(s_2) = \lambda \nabla_L(s_1\wedge s_2)$ for $s_1,s_2 \in E$,* - *$E|_{m_i[t_i]} \supset l^{(i)} \supset 0$ is a filtration by free $\mathcal{O}_{m_i[t_i]}$-modules such that, for any $i \in I$,* - *$E|_{m_i[t_i]}/ l^{(i)}\cong \mathcal{O}_{m_i[t_i]}$ and $l^{(i)}\cong \mathcal{O}_{m_i[t_i]}$,* - *$\nabla|_{m_i[t_i]} (l^{(i)}) \subset l^{(i)} \otimes \Omega^1_{C}(D)$, and* - *the image of $(E|_{m_i[t_i]}/ l^{(i)}) \oplus l^{(i)}$ under $\mathrm{Gr}_i (\nabla) - \lambda \cdot \omega_i(X )$ is contained in $$\left( (E|_{m_i[t_i]}/ l^{(i)}) \oplus l^{(i)} \right) \otimes \Omega^1_C.$$* *Here $\mathrm{Gr}_i (\nabla)$ is the induced morphism $$\mathrm{Gr}_i (\nabla) \colon (E|_{m_i[t_i]}/ l^{(i)}) \oplus l^{(i)} \longrightarrow \left( (E|_{m_i[t_i]}/ l^{(i)}) \oplus l^{(i)} \right) \otimes \Omega^1_C(D).$$* Notice that, if $\lambda=0$, then $\nabla$ is an $\mathcal{O}_C$-morphism, which is called a Higgs field. So $(E,\nabla , \lambda, \{l^{(i)}\} )$ is called a (trace free) quasi-parabolic Higgs bundle when $\lambda=0$. We consider only rank $2$ quasi-parabolic $\lambda$-connections $(E,\nabla , \lambda, \{l^{(i)}\} )$ over $X$ with $({L_0},\nabla_{L_0})$ such that the underlying quasi-parabolic bundle $(E,\{l^{(i)}\} )$ is in the moduli space $P_{(C,D)}(L_0)_0$. We define the moduli spaces $\widetilde{M}_X({L_0},\nabla_{L_0})_0$ and $M_X(L_0, \nabla_{L_0})_0$ as follows: $$\widetilde{M}_X({L_0},\nabla_{L_0})_0 = \left. \left\{ \begin{array}{l} (E,\nabla, \lambda, \{ l^{(i)}\} ) \\ \text{quasi-parabolic $\lambda$-connection} \\ \text{over $X$ with trace $({L_0},\nabla_{L_0})$} \end{array} \ \middle| \ \begin{array}{l} (E, \{ l^{(i)}\} ) \in P_{(C,D)}(L_0)_0 \end{array} \right\} \right/ \cong$$ and $$M_X(L_0, \nabla_{L_0})_0 = \left. \left\{ \begin{array}{l} (E,\nabla, \lambda, \{ l^{(i)}\} ) \\ \text{quasi-parabolic $\lambda$-connection} \\ \text{over $X$ with trace $({L_0},\nabla_{L_0})$} \end{array} \ \middle| \ \begin{array}{l} (E, \{ l^{(i)}\} ) \in P_{(C,D)}(L_0)_0 \\ \text{and }\lambda \neq 0 \end{array} \right\} \right/ \cong.$$ ## Maps from the moduli space {#SubSect:MapsSL2} Now we describe two maps: the forgetful map $\pi_{\mathrm{Bun}}$ forgetting connections and the apparent map $\pi_{\mathrm{App}}$. First we consider the composition $$\widetilde{M}_X({L_0},\nabla_{L_0})_0 \longrightarrow P_{(C,D)}(L_0)_0 \longrightarrow \mathbb{P} H^1( C, L_0^{-1}(-D)).$$ Here the first map is the forgetful map, and the second map is [\[2023_7\_10_12_44\]](#2023_7_10_12_44){reference-type="eqref" reference="2023_7_10_12_44"}. We denote by $$\pi_{\mathrm{Bun}} \colon \widetilde{M}_X({L_0},\nabla_{L_0})_0 \longrightarrow \mathbb{P} H^1( C, L_0^{-1}(-D)).$$ the composition. 
Second we define a map $$\label{2023_7_10_13_02} \pi_{\mathrm{App}} \colon \widetilde{M}_X({L_0},\nabla_{L_0})_0 \longrightarrow \mathbb{P} H^0(C, {L_0}\otimes \Omega^1_C(D))$$ as follows. Let $(E,\nabla, \lambda, \{ l^{(i)}\} )$ be a point on $\widetilde{M}_X({L_0},\nabla_{L_0})_0$. Let $\{\varphi_{\alpha}^{\mathrm{Ext}}\}_{\alpha}$ be the trivializations in Definition [Definition 27](#2023_7_10_12_29){reference-type="ref" reference="2023_7_10_12_29"}. Let $A_{\alpha}$ be the connection matrix of the $\lambda$-connection $\nabla$ with respect to $\varphi_{\alpha}^{\mathrm{Ext}}$, that is, $$\lambda \operatorname{d} + A_{\alpha} := (\varphi_{\alpha}^{\mathrm{Ext}} )^{-1} \circ \nabla \circ \varphi_{\alpha}^{\mathrm{Ext}} \colon \mathcal{O}_{U_{\alpha}}^{\oplus 2} \longrightarrow (\Omega^1_{U_{\alpha}}(D))^{\oplus 2} .$$ We denote the matrix $A_{\alpha}$ as follows: $$\label{2023_7_10_20_51(2)} A_{\alpha} = \begin{pmatrix} a_{\alpha}^{11} & a^{12}_{\alpha} \\ a^{21}_{\alpha} & a^{22}_{\alpha} \end{pmatrix}.$$ By the condition (ii) in Definition [Definition 27](#2023_7_10_12_29){reference-type="ref" reference="2023_7_10_12_29"} and the condition (iv) in Definition [Definition 28](#2023_7_10_14_08){reference-type="ref" reference="2023_7_10_14_08"}, the polar part of the connection matrix $A_{\alpha}$ at $t_i$ is a lower triangular matrix, that is, the Laurent expansion of $A_{\alpha}$ at $t_i$ is as follows: $$\label{2023_7_10_20_59} A_{\alpha} = \begin{pmatrix} \lambda\nu_{i}^{-} & 0 \\ * &\lambda \nu_{i}^{+} \end{pmatrix} \frac{1}{z_i^{m_i}} +[\text{ holo. part }].$$ Here $\nu_{i}^{-}, \nu_{i}^{+} \in \Omega^1_C(D)|_{m_i[t_i]}$ are defined so that $$\lambda \cdot \omega_i(X ) =\begin{pmatrix} \lambda\nu_{i}^{-} & 0 \\ 0 &\lambda \nu_{i}^{+} \end{pmatrix} .$$ By using the coboundary $\{ b^{22}_{\alpha} \}_{\alpha}$ in [\[2023_7\_10_12_37\]](#2023_7_10_12_37){reference-type="eqref" reference="2023_7_10_12_37"}, we define cocycles $$\label{2023_7_10_13_12(2)} a^{\text{App}}_{\alpha} := a^{21}_{\alpha}(b^{22}_{\alpha} )^{-1},$$ which give a class $[\{ a^{\text{App}}_{\alpha}\} ] \in H^0(C, {L_0} \otimes \Omega^1_C(D))$. Then we have a map [\[2023_7\_10_13_02\]](#2023_7_10_13_02){reference-type="eqref" reference="2023_7_10_13_02"}: $$(E,\nabla, \lambda, \{ l^{(i)}\} ) \longmapsto \overline{[\{ a^{\text{App}}_{\alpha}\} ]}.$$ Finally, we have a map $$\label{2023_9_8_10_17} (\pi_{\mathrm{App}}, \pi_{\mathrm{Bun}}) \colon \widetilde{M}_X({L_0},\nabla_{L_0})_0 \longrightarrow \mathbb{P} H^0(C, {L_0}\otimes \Omega^1_C(D)) \times \mathbb{P} H^1( C, L_0^{-1}(-D)).$$ We consider the natural pairing $$\label{2023_7_10_20_48} H^0(C, {L_0}\otimes \Omega^1_C(D)) \times H^1(C, L_0^{-1} (-D)) \longrightarrow H^1(C,\Omega^1_{C}) \cong \mathbb{C} .$$ **Lemma 29**. *Let $(E,\nabla, \lambda, \{ l^{(i)}\} ) \in \widetilde{M}_X({L_0},\nabla_{L_0})_0$. Let $a^{\text{App}}_{\alpha}$ and $b^{\text{Bun}}_{\alpha\beta}$ be the cocycles in [\[2023_7\_10_13_12(2)\]](#2023_7_10_13_12(2)){reference-type="eqref" reference="2023_7_10_13_12(2)"} and in [\[2023_7\_10_13_12(1)\]](#2023_7_10_13_12(1)){reference-type="eqref" reference="2023_7_10_13_12(1)"}, respectively.
Then we have $$[\{ b^{\mathrm{Bun}}_{\alpha\beta} \cdot a^{\mathrm{App}}_{\beta} \} ] = \lambda \cdot \sum_{i \in I}\theta_{i,-1}^-.$$ Here the left hand side is the pairing [\[2023_7\_10_20_48\]](#2023_7_10_20_48){reference-type="eqref" reference="2023_7_10_20_48"}.* *Proof.* Let $B_{\alpha\beta}$ be the transition function in [\[2023_7\_10_20_51(1)\]](#2023_7_10_20_51(1)){reference-type="eqref" reference="2023_7_10_20_51(1)"}. Let $A_{\alpha}$ be the connection matrix in [\[2023_7\_10_20_51(2)\]](#2023_7_10_20_51(2)){reference-type="eqref" reference="2023_7_10_20_51(2)"}. Then we have $$\lambda \cdot \operatorname{d}\! B_{\alpha\beta} + A_{\alpha} B_{\alpha\beta} = B_{\alpha\beta}A_{\beta}.$$ By comparing the $(1,1)$-entries of both sides, we have $$a_{\alpha}^{11} - a_{\beta}^{11} = b^{\mathrm{Bun}}_{\alpha\beta} \cdot a^{\mathrm{App}}_{\beta}.$$ By [\[2023_7\_10_20_59\]](#2023_7_10_20_59){reference-type="eqref" reference="2023_7_10_20_59"} and the isomorphism $H^1(C,\Omega^1_{C}) \cong \mathbb{C}$, we have $[\{ b^{\mathrm{Bun}}_{\alpha\beta} \cdot a^{\mathrm{App}}_{\beta}\} ] = \lambda \cdot \sum_{i \in I}\theta_{i,-1}^-$. ◻ Set $$N_0 := \dim_{\mathbb{C}} \mathbb{P} H^0(C, {L_0} \otimes \Omega^1_C(D)) = 3g+n-3.$$ Let us introduce the homogeneous coordinates $\boldsymbol{a} = (a_0 :\cdots : a_{N_0})$ on $\mathbb{P} H^0(C, {L_0} \otimes \Omega^1_C(D))\cong \mathbb{P}_{\boldsymbol{a}}^{N_0}$ and the dual coordinates $\boldsymbol{b} = (b_0 :\cdots : b_{N_0})$ on $$\mathbb{P} H^1(C, L_0^{-1}(-D))\cong \mathbb{P} H^0(C, {L_0}\otimes \Omega^1_C(D))^{\vee} \cong \mathbb{P}_{\boldsymbol{b}}^{N_0}.$$ Let $\Sigma\subset \mathbb{P}_{\boldsymbol{a}}^{N_0} \times \mathbb{P}_{\boldsymbol{b}}^{N_0}$ be the incidence variety whose defining equation is given by $\sum_ja_jb_j =0$. By Lemma [Lemma 29](#2023_7_10_21_01){reference-type="ref" reference="2023_7_10_21_01"}, the map $(\pi_{\mathrm{App}}, \pi_{\mathrm{Bun}})$ restricts to a map $$\widetilde{M}_X({L_0},\nabla_{L_0})_0 \setminus M_X({L_0},\nabla_{L_0})_0 \xrightarrow{ \ (\pi_{\mathrm{App}}, \pi_{\mathrm{Bun}}) \ } \Sigma.$$ **Remark 30**. *Loray--Saito (for $g=0$) and Matsumoto (for $g\geq 1$) discussed the birationality of the map [\[2023_9\_8_10_17\]](#2023_9_8_10_17){reference-type="eqref" reference="2023_9_8_10_17"}. They showed the birationality of the map [\[2023_9\_8_10_17\]](#2023_9_8_10_17){reference-type="eqref" reference="2023_9_8_10_17"} when $D$ is a reduced effective divisor ([@LS Theorem 4.3] for $g=0$ and [@Matsu Theorem 4.5] for $g\ge 1$). In these cases, quasi-parabolic connections have only simple poles. However, we may apply the arguments in [@LS Theorem 4.3] and in [@Matsu Theorem 4.5] to our cases where quasi-parabolic connections admit generic unramified irregular singular points.
So we can reconstruct $(E,\nabla, \lambda, \{ l^{(i)}\} ) \in \widetilde{M}_X({L_0},\nabla_{L_0})_0$ from an element of $$\mathbb{P} H^1( C, L_0^{-1}(-D))_0 \times \mathbb{P} H^0( C, L_0\otimes \Omega^1_C(D)).$$ Here we set $$\mathbb{P} H^1( C, L_0^{-1}(-D))_0 := \left\{ b\in \mathbb{P} H^1( C, L_0^{-1}(-D)) \ \middle| \ \begin{array}{l} \text{{\rm The extension $E$ corresponding to $b$}} \\ \text{{\rm satisfies $\dim_{\mathbb{C}} H^1(C,E)=0$}} \end{array} \right\}.$$ Then we have isomorphisms $$\widetilde{M}_X({L_0},\nabla_{L_0})_0 \cong \mathbb{P} H^1( C, L_0^{-1}(-D))_0 \times \mathbb{P} H^0( C, L_0\otimes \Omega^1_C(D))$$ and $$M_X({L_0},\nabla_{L_0})_0 \cong \mathbb{P} H^1( C, L_0^{-1}(-D))_0 \times \mathbb{P} H^0( C, L_0\otimes \Omega^1_C(D)) \setminus \Sigma.$$* ## Symplectic structure and explicit description {#SubSect:SyplecticSL2} Now we recall the natural symplectic structure on $M_X({L_0},\nabla_{L_0})_0$. We define a complex ${\mathcal F}_0^{\bullet}$ for $(E, \frac{1}{\lambda} \nabla , \{l^{(i)}\} )$ by $$\begin{aligned} &{\mathcal F}_0^0 := \left\{ s \in {\mathcal E}nd (E) \ \middle| \ \mathrm{tr}(s)=0,\, s |_{m_i t_i} (l^{(i)}) \subset l^{(i)} \text{ for any $i$} \right\} \\ &{\mathcal F}^1_0 := \left\{ s \in {\mathcal E}nd (E)\otimes \Omega^1_{C}(D) \ \middle| \ \mathrm{tr}(s)=0,\, s|_{m_i t_i} (l^{(i)}) \subset l^{(i)} \otimes \Omega^1_C \text{ for any $i$} \right\} \\ &\nabla_{{\mathcal F}^{\bullet}} \colon {\mathcal F}_0^0 \longrightarrow{\mathcal F}_0^1; \quad \nabla_{{\mathcal F}_0^{\bullet}} (s) = (\frac{1}{\lambda}\nabla)\circ s - s \circ (\frac{1}{\lambda}\nabla). \end{aligned}$$ We define the following morphism $$\label{2023_3_12_20_42} \begin{aligned} {\mathbf H}^1( {\mathcal F}_0^{\bullet}) \otimes {\mathbf H}^1({\mathcal F}_0^{\bullet}) \longrightarrow{\mathbf H}^2(\mathcal{O}_C \xrightarrow{d}\Omega_{C}^{1}) \cong \mathbb{C} \end{aligned}$$ as in [\[2020.11.7.15.48\]](#2020.11.7.15.48){reference-type="eqref" reference="2020.11.7.15.48"}. This pairing gives the symplectic form on $M_X({L_0},\nabla_{L_0})_0$. We denote by $\omega_0$ the symplectic form. The maps $\pi_{\mathrm{App}}$ and $\pi_{\mathrm{Bun}}$ give coordinates on $M_X({L_0},\nabla_{L_0})_0$ (see Remark [Remark 30](#2023_7_14_9_20){reference-type="ref" reference="2023_7_14_9_20"}). Now we describe the symplectic structure [\[2023_3\_12_20_42\]](#2023_3_12_20_42){reference-type="eqref" reference="2023_3_12_20_42"} by using the coordinates on $M_X({L_0},\nabla_{L_0})_0$. We define a 1-form $\eta$ on $\mathbb{P}_{\boldsymbol{a}}^{N_0} \times \mathbb{P}_{\boldsymbol{b}}^{N_0}$ as follows: $$\eta := \left( - \sum_i \theta_{i,-1}^- \right) \cdot \frac{a_0 \, d b_0 + a_1 \, d b_1 + \cdots + a_{N_0} \, d b_{N_0} }{a_0b_0 + a_1b_1 + \cdots + a_{N_0}b_{N_0}}.$$ **Theorem 31**. *Assume that $\sum_{i \in I}\theta_{i,-1}^{-} \neq 0$. Let $\omega_{\boldsymbol{a}, \boldsymbol{b}}$ be the $2$-form on $\mathbb{P}^{N_0}_{\boldsymbol{a}} \times \mathbb{P}^{N_0}_{\boldsymbol{b}}$ defined by $\omega_{\boldsymbol{a}, \boldsymbol{b}}= d \eta$. The pull-back of $\omega_{\boldsymbol{a}, \boldsymbol{b}}$ under the map $$M_X({L_0},\nabla_{L_0})_0 \xrightarrow{\ (\pi_{\mathrm{App}}, \pi_{\mathrm{Bun}}) \ } \mathbb{P}_{\boldsymbol{a}}^{N_0} \times \mathbb{P}_{\boldsymbol{b}}^{N_0}$$ coincides with the symplectic form $\omega_0$ on $M_X({L_0},\nabla_{L_0})_0$.* *Proof.* Let $v , v' \in T_{(E, \frac{1}{\lambda} \nabla , \{l^{(i)}\} )} M_X({L_0},\nabla_{L_0})_0$. 
We have the isomorphism $$T_{(E, \frac{1}{\lambda} \nabla , \{l^{(i)}\} )} M_X({L_0},\nabla_{L_0})_0 \xrightarrow{ \ \cong \ } {\mathbf H}^1( {\mathcal F}_0^{\bullet}).$$ Let $u_{\alpha\beta} (v)$ and $v_{\alpha} (v)$ be cocycles such that the class $[\{u_{\alpha\beta} (v)\}_{\alpha\beta},\{v_{\alpha} (v)\}_{\alpha}]$ is the image of $v$ under the isomorphism. We calculate $u_{\alpha\beta} (v)$ and $v_{\alpha} (v)$ by using the trivialization $\{ \varphi_{\alpha}^{\mathrm{Ext}} \}_{\alpha}$ as follows: $$\label{2023_3_14_11_49} \begin{aligned} u_{\alpha\beta} (v) &= \varphi_{\beta}^{\mathrm{Ext}}|_{U_{\alpha\beta}} \circ \left( B_{\alpha\beta}^{-1} v(B_{\alpha\beta}) \right) \circ (\varphi_{\beta}^{\mathrm{Ext}} |_{U_{\alpha\beta}})^{-1} \\ &= \varphi_{\beta}^{\mathrm{Ext}} |_{U_{\alpha\beta}} \circ \begin{pmatrix} 0 & v (b^{12}_{\alpha\beta}) \\ 0 & 0 \end{pmatrix} \circ (\varphi_{\beta}^{\mathrm{Ext}}|_{U_{\alpha\beta}} )^{-1} \\ &= \varphi_{\beta}^{\mathrm{Ext}} |_{U_{\alpha\beta}} \circ \begin{pmatrix} 0 & \frac{v (b^{\text{Bun}}_{\alpha\beta} )}{b^{22}_{\alpha}} \\ 0 & 0 \end{pmatrix} \circ (\varphi_{\beta}^{\mathrm{Ext}}|_{U_{\alpha\beta}} )^{-1} \end{aligned}$$ and $$\label{2023_3_14_11_50} \begin{aligned} v_{\alpha} (v) &= \varphi_{\alpha}^{\mathrm{Ext}} \circ v \left( \frac{1}{\lambda } A_{\alpha} \right) \circ (\varphi_{\alpha}^{\mathrm{Ext}} )^{-1} \\ &= \varphi_{\alpha}^{\mathrm{Ext}} \circ \begin{pmatrix} v(a_{\alpha}^{11}/\lambda) & v(a^{12}_{\alpha} /\lambda) \\ v(a^{21}_{\alpha}/\lambda )& v(a^{22}_{\alpha}/\lambda) \end{pmatrix} \circ (\varphi_{\alpha}^{\mathrm{Ext}} )^{-1}\\ &= \varphi_{\alpha}^{\mathrm{Ext}} \circ \begin{pmatrix} v(a_{\alpha}^{11}/\lambda) & v(a^{12}_{\alpha} /\lambda) \\ v(a^{\text{App}}_{\alpha}/\lambda ) b^{22}_{\alpha} & v(a^{22}_{\alpha}/\lambda) \end{pmatrix} \circ (\varphi_{\alpha}^{\mathrm{Ext}} )^{-1}. \end{aligned}$$ Here $\{ b^{22}_{\alpha} \}_{\alpha}$ is the coboundary in [\[2023_7\_10_12_37\]](#2023_7_10_12_37){reference-type="eqref" reference="2023_7_10_12_37"}. Since we fix the determinant bundle $L_0$, we may assume that the coboundary $\{ b^{22}_{\alpha} \}_{\alpha}$ is independent of the moduli space $M_X({L_0},\nabla_{L_0})_0$. Now we calculate the class $$\label{2023.3.18.22.55} [ (\{ \mathrm{tr}( u_{\alpha\beta} (v) u_{\beta\gamma} (v') ) \}, -\{ \mathrm{tr} \left(u_{\alpha\beta} (v) v_{\beta} (v') \right) - \mathrm{tr} \left( v_{\alpha} (v) u_{\alpha\beta} (v') \right) \} )]$$ in ${\mathbf H}^2(\mathcal{O}_C \xrightarrow{d}\Omega_{C}^{1}) \cong \mathbb{C}$. 
First we calculate $u_{\alpha\beta} (v) u_{\beta\gamma} (v')$ as follows: $$\begin{aligned} &u_{\alpha\beta} (v) u_{\beta\gamma} (v') \\ &= \varphi_{\beta}^{\mathrm{Ext}} |_{U_{\alpha\beta}} \circ \begin{pmatrix} 0 & \frac{v (b^{\text{Bun}}_{\alpha\beta} )}{b^{22}_{\alpha}} \\ 0 & 0 \end{pmatrix} \circ (\varphi_{\beta}^{\mathrm{Ext}}|_{U_{\alpha\beta}} )^{-1}\circ \varphi_{\gamma}^{\mathrm{Ext}}|_{U_{\alpha\beta}} \circ \begin{pmatrix} 0 & \frac{v (b^{\text{Bun}}_{\beta\gamma} )}{b^{22}_{\alpha}} \\ 0 & 0 \end{pmatrix} \circ (\varphi_{\gamma}^{\mathrm{Ext}} |_{U_{\alpha\beta}})^{-1} \\ &= \varphi_{\beta}^{\mathrm{Ext}} |_{U_{\alpha\beta}} \circ \begin{pmatrix} 0 & \frac{v (b^{\text{Bun}}_{\alpha\beta} )}{b^{22}_{\alpha}} \\ 0 & 0 \end{pmatrix} B_{\beta\gamma} \begin{pmatrix} 0 & \frac{v (b^{\text{Bun}}_{\beta\gamma} )}{b^{22}_{\alpha}} \\ 0 & 0 \end{pmatrix} \circ (\varphi_{\gamma}^{\mathrm{Ext}} |_{U_{\alpha\beta}})^{-1} \\ &= \varphi_{\beta}^{\mathrm{Ext}} |_{U_{\alpha\beta}} \circ \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} \circ (\varphi_{\gamma}^{\mathrm{Ext}}|_{U_{\alpha\beta}} )^{-1} =0. \end{aligned}$$ So we may take a representative of the class [\[2023.3.18.22.55\]](#2023.3.18.22.55){reference-type="eqref" reference="2023.3.18.22.55"} so that $$[ -\{ \mathrm{tr} \left(u_{\alpha\beta} (v) v_{\beta} (v') \right) - \mathrm{tr} \left( v_{\alpha} (v) u_{\alpha\beta} (v') \right) \} ]$$ is in $H^1(C,\Omega^1_C)$. By using equalities [\[2023_3\_14_11_49\]](#2023_3_14_11_49){reference-type="eqref" reference="2023_3_14_11_49"} and [\[2023_3\_14_11_50\]](#2023_3_14_11_50){reference-type="eqref" reference="2023_3_14_11_50"}, we have the following equality $$\label{2023_3_14_11_51} \mathrm{tr} \left(u_{\alpha\beta} (v) v_{\beta} (v') \right)- \mathrm{tr} \left( v_{\alpha} (v) u_{\alpha\beta} (v') \right) = v (b^{\text{Bun}}_{\alpha\beta} ) v'\left( \frac{a^{\text{App}}_{\beta} }{\lambda} \right) -v\left( \frac{a^{\text{App}}_{\alpha} }{\lambda} \right) v' (b^{\text{Bun}}_{\alpha\beta} ) .$$ We take bases $$a^{\text{App}(0)}, a^{\text{App}(1)}, \ldots ,a^{\text{App}(N_0)} \in H^0(C, L_0\otimes \Omega^1_C(D))$$ of $H^0(C, L_0\otimes \Omega^1_C(D))$ and $$[\{ b_{\alpha\beta}^{\text{App}(0)}\}], [\{ b_{\alpha\beta}^{\text{App}(1)}\}], \ldots , [\{ b_{\alpha\beta}^{\text{App}(N_0)}\}]$$ of $H^1(C, L_0^{-1}(-D))$ so that these bases give the homogeneous coordinates $(a_0 :\cdots : a_{N_0})$ on $\mathbb{P}_{\boldsymbol{a}}^{N_0}$ and $(b_0 :\cdots : b_{N_0})$ on $\mathbb{P}_{\boldsymbol{b}}^{N_0}$. We may assume that these bases are independent of the moduli space $M_X({L_0},\nabla_{L_0})_0$. 
We set $$a^{\text{App}}_{\alpha} = a_0 a^{\text{App}(0)}|_{U_{\alpha}} + a_1 a^{\text{App}(1)}|_{U_{\alpha}}+\cdots +a_{N_0} a^{\text{App}(N_0)}|_{U_{\alpha}}$$ and $$b^{\text{App}}_{\alpha\beta} = b_0 b^{\text{App}(0)}_{\alpha\beta} + b_1 b^{\text{App}(1)}_{\alpha\beta}+\cdots +b_{N_0} b^{\text{App}(N_0)}_{\alpha\beta}.$$ By [\[2023_3\_14_11_51\]](#2023_3_14_11_51){reference-type="eqref" reference="2023_3_14_11_51"}, we have that $$\begin{aligned} \mathrm{tr} \left(u_{\alpha\beta} (v) v_{\beta} (v') \right)- \mathrm{tr} \left( v_{\alpha} (v) u_{\alpha\beta} (v') \right) &= v \left(\sum_{k=0}^{N_0}b_k b^{\text{App}(k)}_{\alpha\beta} \right) v'\left( \frac{\sum_{k=0}^{N_0} a_k a^{\text{App}(k)}|_{U_{\alpha}} }{\lambda} \right) \\ &\qquad -v\left( \frac{\sum_{k=0}^{N_0} a_k a^{\text{App}(k)}|_{U_{\alpha}} }{\lambda} \right) v'\left(\sum_{k=0}^{N_0}b_k b^{\text{App}(k)}_{\alpha\beta} \right) \\ &= \left( \sum_{k=0}^{N_0} v \left(b_k \right) b^{\text{App}(k)}_{\alpha\beta}\right) \left( \sum_{k=0}^{N_0} v'\left( \frac{a_k}{\lambda} \right)a^{\text{App}(k)}|_{U_{\alpha}} \right) \\ &\qquad -\left( \sum_{k=0}^{N_0} v\left( \frac{a_k}{\lambda} \right)a^{\text{App}(k)}|_{U_{\alpha}} \right) \left( \sum_{k=0}^{N_0} v' \left(b_k \right) b^{\text{App}(k)}_{\alpha\beta}\right). \end{aligned}$$ Since $(b_0 :\cdots : b_{N_0})$ is dual of $(a_0 :\cdots : a_{N_0})$ with respect to the natural pairing $$H^0(C, L_0\otimes \Omega^1_C(D)) \times H^1(C, L_0^{-1} (-D)) \longrightarrow H^1(C,\Omega^1_{C}) \cong \mathbb{C},$$ we have that $$\begin{aligned} &\mathrm{tr} \left(u_{\alpha\beta} (v) v_{\beta} (v') \right)- \mathrm{tr} \left( v_{\alpha} (v) u_{\alpha\beta} (v') \right) \\ &= \sum_{k=0}^{N_0} v \left(b_k \right) v'\left( \frac{a_k}{\lambda} \right) -\sum_{k=0}^{N_0} v' \left(b_k \right) v\left( \frac{a_k}{\lambda} \right) . \end{aligned}$$ On the other hand, we have that $$\lambda = \frac{ \langle [\{ a^{\text{App}}_{\alpha} \} ,[ \{ b^{\text{Bun}}_{\alpha\beta}\}] \rangle}{ -\sum_i \theta_{i,-1}^-} = \frac{ a_0b_0 + a_1b_1 + \cdots + a_{N_0}b_{N_0}}{ -\sum_i \theta_{i,-1}^-}.$$ Then we have $$\begin{aligned} H^1(C,\Omega^1_C) &\xrightarrow{\ \cong \ } \mathbb{C} \\ [ -\{ \mathrm{tr} \left(u_{\alpha\beta} (v) v_{\beta} (v') \right) - \mathrm{tr} \left( v_{\alpha} (v) u_{\alpha\beta} (v') \right) \} ] &\longmapsto d\eta(v,v') . \end{aligned}$$ This means the statement. ◻ # Companion normal forms for an elliptic curve with two poles {#Sect:CompForElliptic} In Section [2](#sect:CompanionNF){reference-type="ref" reference="sect:CompanionNF"}, we introduced the companion normal form of a rank 2 meromorphic connection with some assumption. The purpose of the present section is to detail the case of an elliptic curve with two simple poles, or with an unramified irregular singularity of order $2$. The latter case arises by confluence from the first one, up to some modification in the arguments. We will give explicit description of the companion normal form for an elliptic curve in these cases. Moreover, we will calculate the canonical coordinates introduced in Section [3.5](#subsect:Canonical_Coor){reference-type="ref" reference="subsect:Canonical_Coor"}. First we start from construction of the companion normal form $(\mathcal{O}_C\oplus (\Omega_C^1(D))^{-1} , \nabla_0)$. Next we will construct a rank 2 meromorphic connection $(E,\nabla)$ by transforming the companion normal form. 
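Before turning to the elliptic case, we note that the gauge computation that drives the proofs of Lemma 29 and Theorem 31 in the previous subsection can also be checked symbolically. The following sympy sketch is not from the paper: it abstracts the trivializations of Definition 27 into a generic upper-triangular transition matrix $B$ with entries $1$, $b^{12}$, $0$, $b^{22}$, identifies $1$-forms with their coefficient functions of a variable $x$, and solves $\lambda\operatorname{d}\!B + A_{\alpha}B = BA_{\beta}$ for $A_{\beta}$. Comparing entries then gives $a^{11}_{\alpha} - a^{11}_{\beta} = b^{12}\, a^{21}_{\beta}$ and $a^{21}_{\beta} = a^{21}_{\alpha}/b^{22}$; translating these into the cocycles $b^{\mathrm{Bun}}_{\alpha\beta}$ and $a^{\mathrm{App}}_{\beta}$ requires the precise conventions of Definition 27, which we do not reproduce here.

```python
# Sketch (not from the paper): gauge compatibility behind Lemma 29/Theorem 31.
# With B upper triangular and  lam*dB + A_alpha*B = B*A_beta, compare entries.
import sympy as sp

x, lam = sp.symbols('x lam')
b12, b22 = sp.Function('b12')(x), sp.Function('b22')(x)
a11, a12, a21, a22 = [sp.Function(n)(x) for n in ('a11', 'a12', 'a21', 'a22')]

B = sp.Matrix([[1, b12], [0, b22]])
A_alpha = sp.Matrix([[a11, a12], [a21, a22]])

# Solve the compatibility relation for A_beta:
A_beta = sp.simplify(B.inv() * (lam * B.diff(x) + A_alpha * B))

# (1,1)-entries: a11_alpha - a11_beta = b12 * a21_beta
print(sp.simplify(A_alpha[0, 0] - A_beta[0, 0] - b12 * A_beta[1, 0]))  # 0
# (2,1)-entries: a21_beta = a21_alpha / b22
print(sp.simplify(A_beta[1, 0] - a21 / b22))                           # 0
```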
Let $C$ be the elliptic curve constructed by gluing affine cubic curves $$U_{0} := (y_1^2-x_1(x_1-1)(x_1-\lambda) =0) \quad\text{and} \quad U_{\infty} := ( y_2^2-x_2(1-x_2)(1-\lambda x_2) =0)$$ with the relations $x_1 = x_2^{-1}$ and $y_1 = y_2 x_2^{-2}$. We fix some $t\in\mathbb{C}$ and set $D=t_1+t_2$ where $t_1=(t,s)$ and $t_2=(t,-s)$, so that $D$ is the positive part of $\mathrm{div}(x-t)$. Let $q_1,q_2,q_3$ be points on $C$: $$q_j\colon (x_1,y_1) = (u_j,v_j)$$ for each $j=1,2,3$. Now we assume that $u_j \not\in \{ 0,1,\lambda, \infty, t \}$ for any $j$. We take trivialization of the line bundle $(\Omega^1_{C}(D))^{-1}$ over $C$ as follows: $$\label{2023_6_30_21_56(1)} \mathcal{O}_{U_0} \xrightarrow{\ \sim \ } (\Omega^1_{C}(D))^{-1} |_{U_0} ; \quad 1 \longmapsto \left( \frac{\operatorname{d}\!x_1}{(x_1-t)y_1} \right)^{-1}$$ and $$\label{2023_6_30_21_56(2)} \mathcal{O}_{U_\infty} \xrightarrow{\ \sim \ } (\Omega^1_{C}(D))^{-1} |_{U_\infty} ;\quad 1 \longmapsto \left( \frac{\operatorname{d}\!x_2}{(1-tx_2)y_2} \right)^{-1}.$$ Then the corresponding transition function $f_{0\infty}$ is as follows: $$\label{2023_7_1_12_35} \begin{aligned} f_{\infty0} \colon \mathcal{O}_{U_0} |_{U_0 \cap U_\infty} &\xrightarrow{\ \sim \ } \mathcal{O}_{U_\infty} |_{U_0 \cap U_\infty} \\ 1 &\longmapsto -\frac{1}{x_2}. \end{aligned}$$ ## Definition of a connection $\nabla_0$ on $\mathcal{O}_C\oplus (\Omega^1_{C}(D))^{-1}$ For $\zeta_1,\zeta_2,\zeta_3 \in \mathbb{C}$, we define $1$-forms $\omega_{12}$, $\omega_{21}$, and $\omega_{22}$ as follows: $$\label{2023_7_1_10_5} \begin{aligned} &\omega_{12}=\sum_{j=1}^3\frac{\zeta_j}{2} \cdot \frac{y_1+v_{j}}{x_1-u_{j}} \cdot \frac{\operatorname{d}\!x_1}{y_1} +\left( \frac{A_1 + A_2 y_1 }{x_1-t} + A_3 +A_4 x_1 \right) \frac{\operatorname{d}\!x_1}{y_1}\\ &\omega_{21}:= \frac{1}{x_1-t}\frac{\operatorname{d}\!x_1}{y_1} \\ &\omega_{22}:=\sum_{j=1}^3\frac{1}{2} \cdot \frac{y_1+v_{j}}{x_1-u_{j}} \cdot \frac{\operatorname{d}\!x_1}{y_1} + \left(\frac{B_1 + B_2 y_1 }{x_1-t} +B_3 \right) \frac{\operatorname{d}\!x_1}{y_1} . \end{aligned}$$ Here $A_1,\ldots,A_4 \in \mathbb{C}$ and $B_1,\ldots,B_3\in \mathbb{C}$ are parameters. Notice that $\omega_{12} \otimes \omega_{21}$ is a global section of $(\Omega_{C}^1)^{\otimes 2}(2D+B)$ and $\omega_{22}$ is a global section of $\Omega_{C}^1(D+B+\infty)$. ### Fixing the polar parts in the logarithmic case We start by analyzing the case where $t\notin \{ 0,1,\lambda, \infty \}$. In this case, we have $s\neq 0$, so $t_1\neq t_2$. We fix complex numbers $\theta_1^{\pm}, \theta_2^{\pm}$ such that $\sum_{i=1}^2 (\theta^+_i+\theta^-_i) = -1$, which is called Fuchs' relation. Now we assume that the eigenvalues of the matrix $$\mathrm{res}_{t_1} \begin{pmatrix} 0 & \omega_{12}\\ \omega_{21} & \omega_{22} \end{pmatrix}$$ are given by $\theta_1^+, \theta_1^-$ and the eigenvalues of the matrix $$\mathrm{res}_{t_2} \begin{pmatrix} 0 & \omega_{12}\\ \omega_{21} & \omega_{22} \end{pmatrix}$$ are given by $\theta_2^+, \theta_2^-$. (To be coherent with Definition [Definition 2](#2023_7_14_23_01){reference-type="ref" reference="2023_7_14_23_01"}, we should write $\theta_{1,-1}$ and $\theta_{2,-1}$ for elements of the Cartan subalgebra, and $\theta_{1,-1}^{\pm}$ and $\theta_{2,-1}^{\pm}$ for their eigenvalues; however, we drop the subscript $-1$ for ease of notation, because there are only poles of order $1$, so no confusion is possible.) 
Specifically, these conditions read as $$\label{2023_6_30_22_02(1)} \mathrm{res}_{(t,s)}\omega_{12} \cdot \mathrm{res}_{(t,s)}\omega_{21} = \theta^+_1 \cdot \theta^-_1, \qquad \mathrm{res}_{(t,-s)}\omega_{12} \cdot \mathrm{res}_{(t,-s)}\omega_{21} = \theta^+_2 \cdot \theta^-_2,$$ and $$\label{2023_6_30_22_02(2)} \mathrm{res}_{(t,s)}\omega_{22} =\theta^+_1 + \theta^-_1, \qquad \mathrm{res}_{(t,-s)}\omega_{22} = \theta^+_2 + \theta^-_2.$$ Notice that $\mathrm{res}_{(u_j,v_j)}\omega_{22} = 1$ for each $j$. By the residue theorem, $\mathrm{res}_{\infty}\omega_{22} = -2$. By the assumption [\[2023_6\_30_22_02(1)\]](#2023_6_30_22_02(1)){reference-type="eqref" reference="2023_6_30_22_02(1)"} and [\[2023_6\_30_22_02(2)\]](#2023_6_30_22_02(2)){reference-type="eqref" reference="2023_6_30_22_02(2)"}, we may determine the parameters $A_1,A_2, B_1$, and $B_2$. **Lemma 32**. *Let complex numbers $\theta_1^{\pm}, \theta_2^{\pm}$ satisfying Fuchs' relation be given. Then, there exist unique values of the parameters $A_1,A_2, B_1$, and $B_2$ such that [\[2023_6\_30_22_02(1)\]](#2023_6_30_22_02(1)){reference-type="eqref" reference="2023_6_30_22_02(1)"} and [\[2023_6\_30_22_02(2)\]](#2023_6_30_22_02(2)){reference-type="eqref" reference="2023_6_30_22_02(2)"} are fulfilled. Moreover, these parameter values are independent of $u_1,u_2,u_3$, $\zeta_1,\zeta_2$, and $\zeta_3$. So the polar parts of $\omega_{12},\omega_{21}$, and $\omega_{22}$ at $t_i$ are independent of $u_1,u_2,u_3$, $\zeta_1,\zeta_2$, and $\zeta_3$.* *Proof.* By the equalities [\[2023_6\_30_22_02(1)\]](#2023_6_30_22_02(1)){reference-type="eqref" reference="2023_6_30_22_02(1)"}, we have $$\frac{A_1 + A_2s}{s} \cdot \frac{1}{s} = \theta^+_1 \cdot \theta^-_1 \quad \text{and} \quad \frac{A_1 - A_2s}{-s} \cdot \frac{1}{-s} = \theta^+_2 \cdot \theta^-_2.$$ By the equalities in [\[2023_6\_30_22_02(2)\]](#2023_6_30_22_02(2)){reference-type="eqref" reference="2023_6_30_22_02(2)"}, we have $$\frac{B_1 + B_2s}{s} = \theta^+_1 + \theta^-_1 \quad \text{and} \quad \frac{B_1 - B_2s}{-s} = \theta^+_2 + \theta^-_2.$$ By these equalities, $A_1,A_2, B_1$, and $B_2$ are determined, and $A_1,A_2, B_1$, and $B_2$ are independent of $u_1,u_2,u_3$, $\zeta_1,\zeta_2$, and $\zeta_3$. It is clear that the polar parts of $\omega_{12},\omega_{21}$, and $\omega_{22}$ at $t_i$ are independent of $u_1,u_2,u_3$, $\zeta_1,\zeta_2$, and $\zeta_3$. ◻ ### Fixing the polar part in the irregular case We now study the situation $t\in \{ 0, 1 , \lambda , \infty \}$. For sake of concreteness, we let $t=0$, the other cases being similar. Then, $s=0$ and $t_1 = t_2$, so the divisor $D$ is reduced of length $2$. A local holomorphic coordinate of the elliptic curve $C$ in a neighbourhood of $t_1$ is given by $y_1$. We fix $\theta_{-2}^{\pm}, \theta_{-1}^+\in\mathbb{C}$ so that $\theta_{-2}^+ \neq \theta_{-2}^-$ and set $\theta_{-1}^- = - 1 - \theta_{-1}^+$. (To be coherent with Definition [Definition 2](#2023_7_14_23_01){reference-type="ref" reference="2023_7_14_23_01"}, we should write $\theta_{1,-2}$ and $\theta_{1,-1}$ for elements of the Cartan subalgebra, and $\theta_{1,-2}^{\pm}$ and $\theta_{1,-1}^{\pm}$ for their eigenvalues; however, we omit the subscript $1$ for ease of notation, because there is only one singular point, so no confusion is possible.) **Lemma 33**. *Fix $\theta_{-2}^{\pm}, \theta_{-1}^{\pm}$ as above. 
Then, there exist unique values $A_1, A_2, B_1, B_2\in\mathbb{C}$ such that the eigenvalues of $$\mathrm{res} \begin{pmatrix} 0 & \omega_{12}\\ \omega_{21} & \omega_{22} \end{pmatrix}$$ admit Laurent expansions of the form $$\left( \theta_{-2}^{\pm} \frac{1}{y_1^2} + \theta_{-1}^{\pm} \frac{1}{y_1} + O(1) \right) \otimes \operatorname{d}\! y_1.$$ Moreover, the resulting values are independent of $u_i, \zeta_i$.* *Proof.* By the inverse function theorem, there exist an analytic open subset $U\subset \mathbb{C}$ and a holomorphic function $h\colon U \to \mathbb{C}$ satisfying $h(0)=0$ such that $C$ is given by the explicit equation $x_1 = h (y_1^2 )$. It is obvious that this function $h$ is independent of the choice of $u_i, \zeta_i$, and it is easy to see that $h'(0) = \frac{1}{\lambda}\neq 0$. From the defining equation of $C$ we get $$\frac{\operatorname{d}\!x_1}{y_1} = \frac{2 \operatorname{d}\!y_1}{3 x_1^2 -2(1+\lambda ) x_1 + \lambda},$$ so $\frac{\operatorname{d}\!x_1}{y_1}$ is a holomorphic $1$-form around $t_1$. Moreover, $$\frac{\operatorname{d}\!x_1}{x_1 y_1} = \frac{\operatorname{d}\!y_1}{y_1^2} g(y_1^2)$$ for some holomorphic function $g\colon U \to \mathbb{C}$ satisfying $g(0)=2$. The polar parts of the coefficients can be separated as $$\begin{aligned} \omega_{12} & = (A_1 + A_2 y_1 ) \frac{\operatorname{d}\!x_1}{x_1y_1} + O(1) = 2 (A_1 + A_2 y_1 ) \frac{\operatorname{d}\!y_1}{y_1^2} + O(1) \\ \omega_{21} & = 2 \frac{\operatorname{d}\!y_1}{y_1^2} + O(1) \\ \omega_{22} & = (B_1 + B_2 y_1 ) \frac{\operatorname{d}\!x_1}{x_1y_1} + O(1) = 2 (B_1 + B_2 y_1 ) \frac{\operatorname{d}\!y_1}{y_1^2} + O(1).\end{aligned}$$ Now, the sum of the eigenvalues must be $$( \theta_{-2}^+ + \theta_{-2}^- ) \frac{1}{y_1^2} + ( \theta_{-1}^+ + \theta_{-1}^-) \frac{1}{y_1}.$$ These conditions determine $$B_1 = \frac 12 ( \theta_{-2}^+ + \theta_{-2}^- ), \qquad B_2 = \frac 12 (\theta_{-1}^+ + \theta_{-1}^-) = - \frac12 .$$ Moreover, we have $$- \omega_{12} \omega_{21} = -4 (A_1 + A_2 y_1 ) \frac{\left( \operatorname{d}\!y_1 \right)^{\otimes 2}}{y_1^4} + O\left( \frac{1}{y_1^2} \right).$$ On the other hand, the product of the eigenvalues must have the expansion (up to a global factor $\left( \operatorname{d}\!y_1 \right)^{\otimes 2}$) $$\theta_{-2}^+ \theta_{-2}^- \frac{1}{y_1^4} + ( \theta_{-2}^+ \theta_{-1}^- + \theta_{-2}^- \theta_{-1}^+) \frac{1}{y_1^3}.$$ These conditions then determine $$A_1 = - \frac 14 \theta_{-2}^+ \theta_{-2}^-, \qquad A_2 = - \frac 14 (\theta_{-2}^+ \theta_{-1}^- + \theta_{-2}^- \theta_{-1}^+).$$ This finishes the proof.
◻ ### Construction of the connection We define $$\begin{aligned} &\beta \colon (\Omega^1_{C}(D))^{-1} \longrightarrow \Omega^1_{C}(D+B) && \text{($\mathcal{O}_{C}$-morphism)} \\ &\delta \colon (\Omega^1_{C}(D))^{-1} \longrightarrow (\Omega^1_{C}(D))^{-1} \otimes \Omega^1_{C}(D+B) && \text{(connection)} \\ &\gamma \colon \mathcal{O}_C \longrightarrow (\Omega^1_{C}(D))^{-1} \otimes \Omega^1_{C}(D) &&\text{($\mathcal{O}_{C}$-morphism)} \end{aligned}$$ by using the trivializations [\[2023_6\_30_21_56(1)\]](#2023_6_30_21_56(1)){reference-type="eqref" reference="2023_6_30_21_56(1)"} and [\[2023_6\_30_21_56(2)\]](#2023_6_30_21_56(2)){reference-type="eqref" reference="2023_6_30_21_56(2)"} of $(\Omega^1_{C}(D))^{-1}$ as follows: $$\beta= \begin{cases} \omega_{12} \colon \mathcal{O}_{U_0} \rightarrow \mathcal{O}_{U_0} \otimes \Omega^1_{C}(D+B)|_{U_0} \\ \mathrm{id} \circ \omega_{12} \circ f^{-1}_{\infty0} \colon \mathcal{O}_{U_\infty} \rightarrow \mathcal{O}_{U_\infty} \otimes \Omega^1_{C}(D+B)|_{U_\infty}, \end{cases}$$ $$\delta= \begin{cases} \operatorname{d}+\omega_{22} \colon \mathcal{O}_{U_0} \rightarrow \mathcal{O}_{U_0} \otimes \Omega^1_{C}(D+B)|_{U_0} \\ \operatorname{d} + f_{\infty0} \circ \omega_{22} \circ f^{-1}_{\infty0} + f_{\infty0} \circ \operatorname{d}\! f^{-1}_{\infty0} \colon \mathcal{O}_{U_\infty} \rightarrow \mathcal{O}_{U_\infty} \otimes\Omega^1_{C}(D+B)|_{U_\infty}, \end{cases}$$ $$\gamma := \begin{cases} \omega_{21} \colon \mathcal{O}_{U_0} \rightarrow \mathcal{O}_{U_0} \otimes \Omega^1_{C}(D+B)|_{U_0} \\ f_{\infty 0}\circ \omega_{21} \circ \mathrm{id} \colon \mathcal{O}_{U_\infty} \rightarrow \mathcal{O}_{U_\infty} \otimes \Omega^1_{C}(D+B)|_{U_\infty}. \end{cases}$$ Here $f_{\infty0}$ is the transition function of $(\Omega^1_{C}(D))^{-1}$ described in [\[2023_7\_1_12_35\]](#2023_7_1_12_35){reference-type="eqref" reference="2023_7_1_12_35"}. Notice that $$f_{\infty0} \circ \omega_{22} \circ f^{-1}_{\infty0} + f_{\infty0} \circ \operatorname{d}\! f^{-1}_{\infty0} = \omega_{22} + \frac{\operatorname{d}\!x_2}{x_2},$$ which is holomorphic at $\infty \in C$, since we have $\mathrm{res}_{\infty}\omega_{22} = -2$. We define a connection as follows: $$\label{2023_7_14_17_29} \nabla_0 := \operatorname{d} +\begin{pmatrix}0&\beta\\ \gamma& \delta \end{pmatrix} \colon \mathcal{O}_C \oplus (\Omega^1_{C}(D))^{-1} \longrightarrow \left( \mathcal{O}_C \oplus (\Omega^1_{C}(D))^{-1} \right) \otimes \Omega^1_{C}(D+B),$$ which is the companion normal form. Remark that $$\mathrm{res}_{q_j} (\nabla_0) = \begin{pmatrix} 0 & \zeta_j \\ 0 & 1 \end{pmatrix}$$ for $j=1,2,3$. **Lemma 34**. *The fact that $\nabla_0$ has apparent singular points at $q_1,q_2,q_3$ imposes 3 linear conditions on $A_3,A_4,B_3$ in terms of spectral data, and $((u_j,v_j),\zeta_j)$'s; we can uniquely determine $A_3,A_4,B_3$ from these conditions if, and only if, we have $$\label{2023_7_3_22_49} \det\begin{pmatrix}1&u_1&\zeta_1\\ 1&u_2&\zeta_2 \\ 1&u_3&\zeta_3\end{pmatrix}\not=0.$$* *Proof.* It is just Lemma [Lemma 7](#lem:independence){reference-type="ref" reference="lem:independence"} specified to the present elliptic case with 2 poles. 
We set $$\label{2023_7_4_13_18} C_{j}= \sum_{j'\in\{1,2,3\}\setminus \{j\}}\frac{\zeta_{j'}-\zeta_j}{2} \cdot \frac{v_j+v_{j'}}{u_j-u_{j'}} + \frac{A_1 + A_2 v_j-\zeta_j (B_1 + B_2 v_j) -\zeta_j^2 }{u_j-t} .$$ We denote by $((a_j)_j,(b_j)_j,(c_j)_j)$ the $3\times 3$-matrix $$((a_j)_j,(b_j)_j,(c_j)_j)= \begin{pmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2\\ a_3 & b_3 & c_3 \end{pmatrix}.$$ The condition where $q_1 ,q_2,q_3$ are apparent singularities means that $$\label{2023_7_1_12_13} ((1)_j,(u_j)_j,(-\zeta_j)_j) \begin{pmatrix} A_3 \\ A_4 \\ B_3 \end{pmatrix} =- \begin{pmatrix} C_1 \\ C_2 \\ C_3 \end{pmatrix}.$$ By Cramer's rule, the parameters $A_3,A_4,B_3$ of the family of connections $\nabla_0$ are uniquely determined $$\begin{aligned} &A_3= - \frac{\det((C_j)_j,(u_j)_j,(\zeta_j)_j )}{ \det(((1)_j,(u_j)_j,(\zeta_j)_j))} && &A_4= - \frac{\det((1)_j,(C_j)_j,(\zeta_j)_j)}{\det((1)_j,(u_j)_j,(\zeta_j)_j)}\\ &B_3= \frac{\det((1)_j,(u_j)_j,(C_j)_j)}{\det((1)_j,(u_j)_j,(\zeta_j)_j)}, && \end{aligned}$$ if and only if [\[2023_7\_3_22_49\]](#2023_7_3_22_49){reference-type="eqref" reference="2023_7_3_22_49"}. ◻ **Lemma 35**. *We have: $$\det\begin{pmatrix}1&u_1&\zeta_1\\ 1&u_2&\zeta_2 \\ 1&u_3&\zeta_3\end{pmatrix}=0$$ if, and only if, $E$ is not stable.* *Proof.* The vanishing of the determinant gives that $\zeta_j=\sigma(q_j)$ for a global section $\sigma\in H^0(C,\Omega_C^1(D))$. In other words, the quasi-parabolic structure on $E_0$ given over each $q_j$ by the eigenvectors corresponding to eigenvalue $1$ lie on a subbundle $(\Omega^1_C(D))^{-1}\subset E_0$. After elementary transformations at each $q_j$, we get $L\subset E$ with $\deg(L)=1$ (in fact $L=\det(E)$). ◻ ## Definition of a rank 2 vector bundle $E$ We set $$\tilde{U}_0 := U_0 \setminus \{ q_1,q_2,q_3\} \quad \text{and} \quad \tilde{U}_\infty := U_\infty \setminus \{ q_1,q_2,q_3\}.$$ We take an analytic open subsets $\tilde U_{q_j}$ ($j=1,2,3$) of $C$ such that $q_j \in \tilde U_{q_j}$ and $\tilde U_{q_j}$ are small enough. In particular, $(u_j,-v_j) \not\in \tilde U_{q_j}$. On $\tilde U_{q_j}$, the apparent singular point $q_j$ is defined by $x_1-u_j=0$. We have an open covering $(\tilde U_k)_{k \in \{ 0,1,q_1,q_2,q_3\} }$ of $C$. We define transition functions $B_{k_1k_2}$ ($k_1,k_2 \in \{ 0,1,q_1,q_2,q_3\}$) as follows: $$B_{0 q_j} := \begin{pmatrix} 1 & \frac{\zeta_j}{x_1 - u_j} \\ 0 & \frac{1}{x_1 - u_j} \end{pmatrix}\colon \mathcal{O}^{\oplus 2}_{\tilde U_{q_j}}|_{\tilde U_{0} \cap \tilde U_{q_j}} \xrightarrow{ \ \sim \ } \mathcal{O}^{\oplus 2}_{\tilde U_{0}}|_{\tilde U_{0} \cap \tilde U_{q_j}};$$ $$B_{0 \infty} := \begin{pmatrix} 1 & 0 \\ 0 & -x_2 \end{pmatrix}\colon \mathcal{O}^{\oplus 2}_{\tilde U_{\infty}}|_{\tilde U_{0} \cap \tilde U_{\infty}} \xrightarrow{ \ \sim \ } \mathcal{O}^{\oplus 2}_{\tilde U_{0}}|_{\tilde U_{0} \cap \tilde U_{\infty}}.$$ Then we have a vector bundle $$E = \left((\tilde U_k)_{k \in \{ 0,1,q_1,q_2,q_3\} } , \ (B_{k_1k_2} )_{k_1,k_2 \in \{ 0,1,q_1,q_2,q_3\}} \right),$$ where $E$ is trivial on each $\tilde U_k$ and the transition function from $\tilde U_{k_2}$ to $\tilde U_{k_1}$ is $B_{k_1k_2}$. ## Definition of a connection $\nabla$ on $E$ We define matrices $A_0,A_{q_j}, A_\infty$ as follows: $$\begin{aligned} &A_0 := \begin{pmatrix} 0 & \omega_{12}\\ \omega_{21} & \omega_{22} \end{pmatrix}, &&A_{\infty}:= \begin{pmatrix} 0 & -x_2 \omega_{12} \\ -\frac{\omega_{21}}{x_2} & \omega_{22} + \frac{\operatorname{d}\! 
x_2}{x_2} \end{pmatrix}, \\ &A_{q_j}:= \begin{pmatrix} \omega^{(j)}_{11} & \frac{\omega^{(j)}_{12}}{x_1-u_j}\\ (x_1-u_j)\omega_{21} & \omega^{(j)}_{22} \end{pmatrix}.&& \end{aligned}$$ The 1-form $\omega_{12}$, $\omega_{21}$, and $\omega_{22}$ are defined in [\[2023_7\_1_10_5\]](#2023_7_1_10_5){reference-type="eqref" reference="2023_7_1_10_5"}. The 1-form $\omega_{12}^{(j)}$, $\omega_{21}^{(j)}$, and $\omega_{22}^{(j)}$ are defined as follows: $$\begin{aligned} \omega^{(j)}_{11}&= -\frac{\zeta_j}{x_1-t} \cdot \frac{\operatorname{d}\!x_1}{y_1} , \\ \omega^{(j)}_{12}&= \sum_{j'\in\{1,2,3\}\setminus \{j\}}\frac{\zeta_{j'}-\zeta_j}{2} \cdot \frac{y_1+v_{j'}}{x_1-u_{j'}} \cdot \frac{\operatorname{d}\!x_1}{y_1} \\ & \qquad + \left( \frac{A_1 + A_2 y_1-\zeta_j (B_1 + B_2 y_1) -\zeta_j^2 }{x_1-t} +A_3 + A_4 x_1 - \zeta_j B_3 \right) \frac{\operatorname{d}\!x_1}{y_1} , \\ \omega^{(j)}_{22}&=\frac{1}{2} \cdot \frac{-y_1+v_{j}}{x_1-u_{j}} \cdot \frac{\operatorname{d}\!x_1}{y_1} + \sum_{j'\in\{1,2,3\}\setminus \{j\}}\frac{1}{2} \cdot \frac{y_1+v_{j'}}{x_1-u_{j'}} \cdot \frac{\operatorname{d}\!x_1}{y_1} \\ &\qquad + \left( \frac{B_1 + B_2 y_1 }{x_1-t} + B_3 + \frac{\zeta_j}{x_1-t} \right) \frac{\operatorname{d}\!x_1}{y_1} . \end{aligned}$$ **Proposition 36**. - *The $(1,2)$-entry of $A_{q_j}$ is a section of $\Omega_C^1(D)|_{\tilde U_{q_j}}$ for each $j=1,2,3$.* - *We define a local connection on each $\tilde U_k$ $(k \in \{ 0,1,q_1,q_2,q_3\})$ by $$\begin{cases} \operatorname{d}+A_0 \colon \mathcal{O}_{\tilde U_{0}}^{\oplus 2} \longrightarrow \mathcal{O}_{\tilde U_{0}}^{\oplus 2} \otimes \Omega_C^1(D)|_{\tilde U_{0}} & \text{on $\tilde U_{0}$} \\ \operatorname{d}+A_{q_j} \colon \mathcal{O}_{\tilde U_{q_j}}^{\oplus 2} \longrightarrow \mathcal{O}_{\tilde U_{q_j}}^{\oplus 2} \otimes \Omega_C^1(D)|_{\tilde U_{q_j}} & \text{on $\tilde U_{q_j}$}\\ \operatorname{d}+A_\infty \colon \mathcal{O}_{\tilde U_{\infty}}^{\oplus 2} \longrightarrow \mathcal{O}_{\tilde U_{\infty}}^{\oplus 2} \otimes \Omega_C^1(D)|_{\tilde U_{\infty}} & \text{on $\tilde U_{\infty}$}. \end{cases}$$ Then we can glue these local connections. So we have a global connection $\nabla \colon E \rightarrow E \otimes \Omega^1_C(D)$ on $E$.* *Proof.* Since $A_3,A_4,B_3$ are determined so that these parameters satisfy the condition [\[2023_7\_1_12_13\]](#2023_7_1_12_13){reference-type="eqref" reference="2023_7_1_12_13"}, we have $$\omega_{12}^{(j)} |_{q_j} = \left(C_j + A_3 + A_4 u_j - \zeta_j B_3 \right) \frac{\operatorname{d}\!x_1|_{q_j}}{v_j} =0.$$ Here, $C_j$ is in [\[2023_7\_4_13_18\]](#2023_7_4_13_18){reference-type="eqref" reference="2023_7_4_13_18"}. So $\frac{\omega^{(j)}_{12}}{x_1-u_j}$ has no pole at $q_j$ for each $j=1,2,3$. Since we have $$B_{k_1k_2}^{-1}A_{k_1}B_{k_1k_2} +B_{k_1k_2}^{-1} \operatorname{d}\! B_{k_1k_2} = A_{k_2}$$ for each $k_1,k_2 \in \{ 0,\infty ,q_1,q_2,q_3\}$, the connection $\nabla$ acting on $E$ is defined globally. ◻ **Remark 37**. *By Definition [Definition 13](#2023_7_2_11_52){reference-type="ref" reference="2023_7_2_11_52"} in Section [3.3](#2023_7_4_13_59){reference-type="ref" reference="2023_7_4_13_59"}, we have trivializations of $E$. On $C \setminus \{ t_1,t_2 \}$, the trivializations in Definition [Definition 13](#2023_7_2_11_52){reference-type="ref" reference="2023_7_2_11_52"} coincide with the trivializations described in the present section. 
We have defined the trivialization in Definition [Definition 13](#2023_7_2_11_52){reference-type="ref" reference="2023_7_2_11_52"} at $t_i$ $(i=1,2)$ so that the residue matrix (respectively, the polar part in the reduced case) is a diagonal matrix. On the other hand, by the trivializations described in the present section, the residue matrix at $t_i$ $(i=1,2)$ (respectively, the polar part) is not a diagonal matrix. The reason why the residue matrix at $t_i$ $(i=1,2)$ is a diagonal matrix is that the corresponding description of the variation [\[2023_7\_4_14_16\]](#2023_7_4_14_16){reference-type="eqref" reference="2023_7_4_14_16"} satisfies the compatibility conditions of the quasi-parabolic structure in $\mathcal{F}^0$ and $\mathcal{F}^1$ of [\[2023_7\_4_14_18\]](#2023_7_4_14_18){reference-type="eqref" reference="2023_7_4_14_18"}. On the other hand, now we are interested in behavior of the connection $\nabla$ around $q_j$ $(j=1,2,3)$. So now we do not consider the diagonalization of the residue matrices at $t_i$ $(i=1,2)$ (respectively, of the polar part when $D$ is reduced).* ## Canonical coordinates {#canonical-coordinates} We will calculate the canonical coordinates introduced in Section [3.5](#subsect:Canonical_Coor){reference-type="ref" reference="subsect:Canonical_Coor"}. For the transition functions $B_{k_1k_2}$ ($k_1,k_2 \in \{ 0,1,q_1,q_2,q_3\}$) of $E$, we have transition functions of $\det(E)$ as follows: $$\det(B_{0 q_j}) = \frac{1}{x_1 - u_j} \colon \mathcal{O}_{\tilde U_{q_j}}|_{\tilde U_{0} \cap \tilde U_{q_j}} \xrightarrow{ \ \sim \ } \mathcal{O}_{\tilde U_{0}}|_{\tilde U_{0} \cap \tilde U_{q_j}};$$ $$\det(B_{0 \infty} ) = -x_2 \colon \mathcal{O}_{\tilde U_{\infty}}|_{\tilde U_{0} \cap \tilde U_{\infty}} \xrightarrow{ \ \sim \ } \mathcal{O}_{\tilde U_{0}}|_{\tilde U_{0} \cap \tilde U_{\infty}}.$$ So we have a cocycle $(\det(B_{k_1k_2} ))_{k_1,k_2 \in \{ 0,1,q_1,q_2,q_3\}}$, which gives a class of $H^1(C,\mathcal{O}_C^{*})$. We have $$\operatorname{d} \log (\det(B_{0 q_j})) = - \frac{\operatorname{d}\!x_1}{x_1 - u_j} \quad \text{and} \quad \operatorname{d} \log (\det(B_{0 \infty} )) = \frac{\operatorname{d}\!x_2}{x_2},$$ and these 1-forms give a class of $H^1(C,\Omega^1_C)$. We denote by $c_1$ and $\boldsymbol{\Omega}(D,c_1)$ the class of $H^1(C,\Omega^1_C)$ and the total space of the twisted cotangent bundle corresponding to $c_1$, respectively. We have the following description of $\mathrm{tr}(\nabla)$: $$\mathrm{tr}(\nabla) = \begin{cases} \operatorname{d}+\omega_{22} \colon \mathcal{O}_{\tilde U_{0}} \longrightarrow \mathcal{O}_{\tilde U_{0}} \otimes \Omega_C^1(D)|_{\tilde U_{0}} & \text{on $\tilde U_{0}$} \\ \operatorname{d} +\omega_{11}^{(j)}+\omega_{22}^{(j)} \colon \mathcal{O}_{\tilde U_{q_j}} \longrightarrow \mathcal{O}_{\tilde U_{q_j}}^{\oplus 2} \otimes \Omega_C^1(D)|_{\tilde U_{q_j}} & \text{on $\tilde U_{q_j}$}\\ \operatorname{d} +\omega_{22} + \frac{\operatorname{d}\!x_2}{x_2} \colon \mathcal{O}_{\tilde U_{\infty}} \longrightarrow \mathcal{O}_{\tilde U_{\infty}}^{\oplus 2} \otimes \Omega_C^1(D)|_{\tilde U_{\infty}} & \text{on $\tilde U_{\infty}$}. \end{cases}$$ Notice that we have $$\omega_{11}^{(j)}+\omega_{22}^{(j)} = \omega_{22} + \operatorname{d} \log (\det(B_{0 q_j})), \text{ and}$$ $$\omega_{22} + \frac{\operatorname{d}\! x_2}{x_2} = \omega_{22}+\operatorname{d} \log (\det(B_{0 \infty} )) .$$ So these connection matrices of $\mathrm{tr}(\nabla)$ give an explicit global section of $\boldsymbol{\Omega}(D,c_1) \rightarrow C$. 
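The identities above are an instance of a general gauge fact: for any invertible transition matrix $B$ one has $\mathrm{tr}\big(B^{-1}AB + B^{-1}\operatorname{d}\!B\big) = \mathrm{tr}(A) + \operatorname{d}\log\det(B)$, so the local connection forms of $\mathrm{tr}(\nabla)$ differ by $\operatorname{d}\log$ of the determinant transition functions and glue to a section of $\boldsymbol{\Omega}(D,c_1)$. A short sympy sketch (not from the paper; generic $2\times 2$ matrices, with $1$-forms identified with coefficient functions of a variable $x$) confirms this.

```python
# Sketch (not from the paper): trace of a gauge-transformed connection matrix,
#   tr(B^{-1} A B + B^{-1} dB) = tr(A) + d log det(B).
import sympy as sp

x = sp.symbols('x')
b11, b12, b21, b22 = [sp.Function(n)(x) for n in ('b11', 'b12', 'b21', 'b22')]
a11, a12, a21, a22 = [sp.Function(n)(x) for n in ('a11', 'a12', 'a21', 'a22')]

B = sp.Matrix([[b11, b12], [b21, b22]])
A = sp.Matrix([[a11, a12], [a21, a22]])

gauge = B.inv() * A * B + B.inv() * B.diff(x)
lhs = gauge.trace()
rhs = A.trace() + sp.diff(sp.log(B.det()), x)
print(sp.simplify(lhs - rhs))  # prints 0
```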
We consider a section of $\boldsymbol{\Omega}(D,c_1) |_{\tilde U_{q_j}}\rightarrow \tilde U_{q_j}$ $$\frac{ \zeta_j \operatorname{d}\!x_1}{(x_1-t)y_1} +\omega_{11}^{(j)}+\omega_{22}^{(j)}.$$ For this section on $\tilde U_{q_j}$, we define $p_j$ ($j=1,2,3$) by $$p_j = \mathrm{res}_{q_j} \left( \frac{\zeta_j}{x_1-u_j}\cdot \frac{\operatorname{d}\!x_1}{(x_1-t)y_1} \right) + \mathrm{res}_{q_j} \left( \frac{\omega_{11}^{(j)}+\omega_{22}^{(j)}}{x_1 - u_j}\right) .$$ Then we have a map $$(E,\nabla) \longmapsto (u_1,u_2,u_3,\zeta_1,\zeta_2,\zeta_3) \longmapsto (u_1,u_2,u_3, p_1,p_2, p_3),$$ where $$\begin{aligned} p_j&= \frac{\zeta_j}{(u_j-t)v_j} - \frac{K'(u_j)}{4v_j^2} + \sum_{j'\in\{1,2,3\}\setminus \{j\}}\frac{1}{2} \cdot \frac{v_j+v_{j'}}{u_j-u_{j'}} \cdot \frac{1}{v_j} \\ &\qquad + \left( \frac{B_1 + B_2 v_j }{u_j-t} + B_3 \right) \frac{1}{v_j} \end{aligned}$$ Here we set $K(x_1) := x_1(x_1-1)(x_1 - \lambda)$. Notice that $B_1$ and $B_2$ are determined by Lemma [Lemma 32](#2023_7_21_17_07){reference-type="ref" reference="2023_7_21_17_07"} and $B_3$ is determined by Lemma [Lemma 34](#2023_7_21_17_07(2)){reference-type="ref" reference="2023_7_21_17_07(2)"}. Notice that $B_3$ depends on $\zeta_1,\zeta_2$ and $\zeta_3$. The symplectic structure is $\sum_{j=1}^3 \operatorname{d}\! p_j \wedge \operatorname{d}\! u_j$ by Theorem [Theorem 20](#2023_8_22_12_09){reference-type="ref" reference="2023_8_22_12_09"}. # Canonical coordinates revised and another proof for birationality {#Sec:Higgs} In this section, we will give another proof of Proposition [Proposition 17](#prop:birational){reference-type="ref" reference="prop:birational"}. For simplicity, we will consider the cases where $D$ is a reduced effective divisor. Let $(E, \nabla) \in M_X^0$ be a connection on a fixed irregular curve $X =(C, D, \{ z_i \}_{i \in I}, \{ \theta_i \}_{i \in I}, \theta_{res})$ with genericity conditions as before. We set $D = t_1 + \cdots + t_n$ and the connection is given by $$\nabla \colon E \longrightarrow E \otimes \Omega^1_C(D).$$ In this section, we assume that $g= g(C) \geq 1$ and $n \geq 1$ as in the previous sections. Moreover if $g=g(C) =1$, we assume that $n \geq 2$. Note that we have the unique extension $$\label{eq:ext_6} 0 \longrightarrow{\mathcal O}_C \longrightarrow E \longrightarrow L_0 \longrightarrow 0$$ with $L_0 = \det (E)$. Moreover for $(E, \nabla) \in M^0_X$ we have $\deg L_0= 2g-1$ and $\dim_{\mathbb{C}} H^0(C, E)=1$. Then we can define apparent singularities $q_1, \ldots, q_N \in C$ where $N = 4g-3 +n$. Since $\deg D =2g-2+n \geq 1$ and $\deg L_0 =2g-1 \geq 1$, we see that $\dim_{\mathbb{C}} H^0(C, \Omega^1_C(D)) = g-1+n \geq 2$. We can choose $\gamma\in H^0(C, \Omega^1_C(D))$ and $s \in H^0(C, L_0)$ whose zeros are given by $$\{ \gamma=0 \} =\{ c_1, \ldots, c_{2g-2+n} \}\quad \text{and} \quad \{ s =0 \} = \{u_1, \ldots, u_{2g-1}\}.$$ We assume the following genericity conditions: 1. $u_{i_1} \not= u_{i_2}$ (for $i_1\not=i_2$), and $c_{k_1} \not= c_{k_2}$ (for $k_1 \neq k_2$); 2. $\{ u_1, \ldots, u_{2g-1} \} \cap \{ c_1, \ldots, c_{2g-2 + n } \} = \emptyset$; 3. $\{ q_1, \ldots, q_N \} \cap \{u_1, \ldots, u_{2g-1}, c_1, \ldots, c_{2g-2 +n } \} = \emptyset.$ Set $$U_0 = C \setminus \{u_1, \ldots, u_{2g-1}, c_1, \ldots, c_{2g-2 + n } \}.$$ Moreover we take small an analytic neighborhood $U_{i}$ of $u_i$ for $1 \leq i \leq 2g-1$ and $U_{2g-1+k}$ of $c_k$ for $1 \leq k \leq 2g-2+n$. 
For $i = 1,\ldots, 4g-3+n$, we can identify $U_i$ with a unit disc $\Delta = \{ z \in \mathbb C\mid |z| < 1 \}$ with the origin corresponding to $u_i$ ($1 \leq i \leq 2g-1$) and $c_{i-2g+1}$ ($2g \leq i \leq 4g-3+n$). We can assume that $U_{i_1} \cap U _{i_2} = \emptyset$ for $i_1 \not=i_2$, $i_1, i_2 \geq 1$. Note that since $U_0$ is an affine variety and $U_0 \cap U_i \cong \Delta \setminus \{ 0 \}$ for $i=1, \ldots, 4g-3+n$, the covering $C = U_0 \cup U_1 \cup \cdots \cup U_{4g-3+n}$ gives a Stein covering of $C$. For $0 \leq i \leq 4g-3+n$, we have nonzero sections $\boldsymbol{e}_1^{(i)} \in \mathcal{O}_{U_i}, \boldsymbol{e}_2^{(i)} \in (L_0)_{|U_i}$ giving trivializations of $E$ on $U_i$ respectively: $$E_{|U_i} \simeq {\mathcal O}_{|U_i}\boldsymbol{e}_1^{(i)} \oplus {\mathcal O}_{|U_i} \boldsymbol{e}_2^{(i)}.$$ Moreover we have a transition matrix $H_{0i}$ on $U_{0} \cap U_{i}$ of the form $$H_{0i}= \begin{pmatrix} 1 & h_{0i} \\ 0 & g_{0i} \end{pmatrix}$$ satisfying $$\label{eq:trans_6} (\boldsymbol{e}_1^{(i)}, \boldsymbol{e}_2^{(i)} ) = (\boldsymbol{e}_1^{(0)}, \boldsymbol{e}_2^{(0)}) H_{0i} =( \boldsymbol{e}_1^{(0)}, h_{0i} \boldsymbol{e}_1^{(0)}+ g_{0i} \boldsymbol{e}_2^{(0)} ) .$$ Here $\{h_{0i}\}_i \in \operatorname{Ext}^1(L_0, {\mathcal O}_C) \cong H^1(C, L_0^{-1})$ corresponds to the extension class of ([\[eq:ext_6\]](#eq:ext_6){reference-type="ref" reference="eq:ext_6"}) and $\{ g_{0i} \}_i \in H^1(C, {\mathcal O}_C^{*})$ gives the transition function of $L_0 = \det(E)$. With these trivializations we have connection matrices $A^{(i)}$: $$\label{eq:conn_6} \nabla (\boldsymbol{e}_1^{(i)}, \boldsymbol{e}_2^{(i)}) =(\boldsymbol{e}_1^{(i)}, \boldsymbol{e}_2^{(i)}) A^{(i)}$$ of the form $$A^{(i)} = \begin{pmatrix} a_{11}^{(i)} \gamma_i & a_{12}^{(i)} \gamma_i \\ a_{21}^{(i)} \gamma_i & a_{22}^{(i)} \gamma_i \end{pmatrix}.$$ Here $a_{kl}^{(i)} \in \Gamma(U_i, {\mathcal O}_{U_i})$ and $\gamma_i \in \Gamma(U_i, \Omega^1_{U_i}(D))$. We set $\gamma_0 =\gamma_{|U_0}$ as above. From ([\[eq:trans_6\]](#eq:trans_6){reference-type="ref" reference="eq:trans_6"}) and ([\[eq:conn_6\]](#eq:conn_6){reference-type="ref" reference="eq:conn_6"}), we can verify the following **Lemma 38**. *For $1 \leq i \leq 4g-3 + n$, on $U_{0} \cap U_{i}$, we gave $$A^{(i)} = H_{0i}^{-1} A^{(0)} H_{0i} + H_{0i}^{-1} \operatorname{d}\! H_{0i}.$$ Specifically, we have the following identities: $$\label{eq:app_6} a_{21}^{(i)} \gamma_i = a_{21}^{(0)} \gamma_0 g_{0i}^{-1}; \quad \text{and}$$ $$\label{eq:dual_6} a_{22}^{(i)} \gamma_i = a_{22}^{(0)} \gamma_0 + a_{21}^{(0)} \gamma_0 h_{0i} g_{0i}^{-1} + \frac{\operatorname{d}\! g_{0i}}{g_{0i}} .$$* The identity ([\[eq:app_6\]](#eq:app_6){reference-type="ref" reference="eq:app_6"}) shows that $a_{21}^{(i)}\gamma_i$ defines a section of $H^0(C, \Omega_1(D) \otimes L_0)$ and the zeros of this section are nothing but the apparent singularities $q_1, \ldots, q_N$. Evaluating the identity ([\[eq:dual_6\]](#eq:dual_6){reference-type="ref" reference="eq:dual_6"}) at $q_j$ ($j=1,\ldots,N$), we then have $$\label{eq:twist_6} ( a_{22}^{(i)} \gamma_i)_{q_j} = (a_{22}^{(0)} \gamma_0)_{q_j} + \left( \frac{\operatorname{d}\! g_{0i}}{g_{0i}}\right)_{q_j}$$ Noting that the cohomology class of the cocycle $\left\{ \frac{\operatorname{d}\!g_{0i}}{g_{0i}} \right\}_i$ corresponds to $c_d = c_1(L_0)$, from ([\[eq:twist_6\]](#eq:twist_6){reference-type="ref" reference="eq:twist_6"}), we have the following **Proposition 39**. 
*For each $0 \leq j \leq N$, the data $(E, \nabla) \in M^0_X$ defines $N$ points $(q_j, \tilde{p}_j)$ on the total space of $\boldsymbol{\Omega}(D,c_d)$ by the formula $$\tilde{p}_j = (a_{22}^{(0)} \gamma_0)_{q_j} \in \Omega^1_C(D, c_d)_{|q_j}$$* The above definition of $\tilde{p}_j$ does not depend on the choice of the sections $s \in H^0(C, L_0)$ and $\gamma\in H^0(C, \Omega^1_C(D))$ and defines the same map as in Definition [Definition 16](#2023_7_12_23_06){reference-type="ref" reference="2023_7_12_23_06"}: $$f_{\mathrm{App}}\colon M^0_X \longrightarrow \operatorname{Sym}^N (\boldsymbol{\Omega}(D,c_d)) .$$ Now we consider $q_j$ as a local coordinate near $q_j$ and we write $\gamma = c(q_j) \operatorname{d}\! q_j$ for some local holomorphic function $c(q_j)$. Then we have $$\tilde{p}_j = p_j \operatorname{d}\! q_j$$ with $$p_j = a_{22}^{(0)}(q_j) c(q_j) .$$ As we have proved in Theorem [Theorem 20](#2023_8_22_12_09){reference-type="ref" reference="2023_8_22_12_09"}, the map $f_{\mathrm{App}}$ is symplectic. ## From a connection to a Higgs field Keeping the notation, let us consider the section $s \in H^0(C, L_0)$ as before, and set $s^{(0)} = s$. Take trivialization of $L_{0|U_i}$ over $U_i$ we have a holomorphic function $s^{(i)} \in \Gamma(U_i, {\mathcal O}_{U_i})$ such that $$s^{(0)}= g_{0i} s^{(i)}.$$ Note that $s^{(i)}$ has zeros at $u_i \in U_i$ for $1 \leq i \leq 2g-1$. Set $D(s)= u_1+\cdots + u_{2g-1}$. We can show the following **Lemma 40**. *There exists a connection $$\nabla_{1}\colon E \longrightarrow E \otimes \Omega_C^1(D(s))$$ such that for each $0\leq i \leq N = 4g -3 + n$, on $U_i$ it has the form $$\nabla_1^{(i)} = \operatorname{d} + \, S^{(i)} =\operatorname{d} + \begin{pmatrix} 0 & -\frac{\beta_i}{s^{(i)} } \\ 0 & -\frac{\operatorname{d}\!s^{(i)}}{s^{(i)}} \end{pmatrix}$$ with respect to the trivialization $(\boldsymbol{e}_1^{(i)}, \boldsymbol{e}_2^{(i)})$. Here $\beta_i \in \Gamma(U_i,\Omega^1_{U_i})$.* *Proof.* Since $s^{(0)}= g_{0i} s^{(i)}$, one has $$\frac{\operatorname{d}\! s^{(0)}}{s^{(0)}} = \frac{\operatorname{d}\!g_{0i}}{g_{0i}} + \frac{\operatorname{d}\! s^{(i)}}{s^{(i)}}$$ in $U_{0i} = U_0 \cap U_i$. The compatibility condition for connection matrices $S^{(i)}$ is $$\label{eq:sp_6} S^{(i)} = H_{0i}^{-1} S^{(0)} H_{0i} + H_{0i}^{-1} \operatorname{d}\! H_{0i}.$$ The right hand side of ([\[eq:sp_6\]](#eq:sp_6){reference-type="ref" reference="eq:sp_6"}) is $$\label{eq:iden_6} \begin{pmatrix} 0 & -g_{0i} \frac{\beta_0}{s^{(0)}} + h_{0i} \left(\frac{\operatorname{d}\! s^{(0)}}{s^{(0)}}- \frac{\operatorname{d}\! g_{0i}}{g_{0i}} \right) + \operatorname{d}\! h_{0i} \\ 0 & -\frac{\operatorname{d}\! s^{(0)}}{s^{(0)}} + \frac{\operatorname{d}\! g_{0i}}{g_{0i}} \end{pmatrix}$$ Since $\{ h_{0i} \}_i$ is a class in $H^1(C, L_0^{-1})$ and $s \in H^0(C, L_0)$, the class $\{ s^{(i)} h_{0i} \}_i$ defines a class in $H^1(C, {\mathcal O}_{C})$. Then, by the Hodge theory, the derivative $\{ \operatorname{d} (s^{(i)} h_{0i} ) \}_i \in H^1(C, \Omega^1_C)$ vanishes, so there exist $\beta_i \in \Gamma(U_i, \Omega^1_{U_i})$ such that $$\operatorname{d} (s^{(i)} h_{0i} ) = \beta_0 - \beta_i.$$ Choose such $\beta_i$'s for the formula. Then we have $$\operatorname{d}\! h_{0i} = - h_{0i} \frac{\operatorname{d}\!s^{(i)}}{s^{(i)} } + g_{0i} \frac{\beta_0}{s^{(0)}}- \frac{ \beta_i}{s^{(i)}}.$$ Then the right hand side of ([\[eq:iden_6\]](#eq:iden_6){reference-type="ref" reference="eq:iden_6"}) becomes $$\begin{pmatrix} 0 & - \frac{\beta_i}{s^{(i)}} \\ 0 & -\frac{\operatorname{d}\! 
s^{(i)}}{s^{(i)} } \end{pmatrix}$$ as desired. ◻ For any $(E, \nabla) \in M^0_X$, the difference $$\nabla- \nabla_1\colon E \longrightarrow E \otimes \Omega^1_C(D+D(s))$$ defines an ${\mathcal O}_C$-homomorphism, that is a rational Higgs fields on $E$. We reprove Proposition [Proposition 17](#prop:birational){reference-type="ref" reference="prop:birational"}. **Theorem 41**. *For generic $(E, \nabla) \in M_X^0$, the point $(q_j, \tilde{p}_j)_{j=1,\ldots,N} \in \mathrm{Sym}^N(\boldsymbol{\Omega}(D, c_d))$ determines $(E, \nabla)$. So the map $f_{\mathrm{App}}$ is birational.* *Proof.* Consider the Higgs field $$\Phi=\Phi_{\nabla} = \nabla - \nabla_1 \colon E \longrightarrow E \otimes \Omega^1_C(D + D(s))$$ where $D = t_1 + \cdots + t_n$ and $D(s) = u_1 + \cdots + u_{2g-1}$ as in the notation above. We assume that the set of apparent singularities $q_1, \ldots, q_N$ of $(E, \nabla)$ is disjoint from $D$ and $D(s)$. We will consider the characteristic curve of $\Phi$. On $U_i$, we have $$\Phi_{i} = A^{(i)} - S^{(i)} = \begin{pmatrix} \tilde{a}_{11} & \tilde{a}_{12} + \frac{\beta_i}{s^{(i)}} \\ \tilde{a}_{21} & \tilde{a}_{22} + \frac{\operatorname{d}\!s^{(i)}}{s^{(i)}} \end{pmatrix}.$$ The characteristic curve $C_s$ can be defined in the total space of $\boldsymbol{\Omega}(D+D(s))$ of the line bundle $\Omega^1_C(D+D(s))$ by $$C_s: x^2 - b_1 x - b_2 = 0$$ with $b_i \in H^0(C, (\Omega^1_C(D+D(s)))^{\otimes i})$, and $x$ the canonical section. The dimension of the family of spectral curves is thus given by $$\begin{aligned} \dim H^0( C, \Omega^1_C(D+D(s))) + \dim H^0( C, (\Omega^1_C(D+D(s)))^{\otimes 2}) &= & N + 1-g + 2N + 1-g \\ &= & 3N +2-2g = 3(4g-3+n) +2-2g \\ &= & 10g - 7 + 3n. \end{aligned}$$ Then $\Phi$ is constrained by the following conditions. 1. At $t_i, i=1, \ldots, n$, $\Phi$ has eigenvalues fixed by data $X$. These impose $2n -1$ conditions because of the Fuchs relation. 2. At $u_k$, $k=1, \ldots,2g-1$, take a local coordinate $z_k$ such that $z_k(u_k)=0$. Then $\Phi$ has the following form near $z_k=0$ $$\Phi = \begin{pmatrix} 0 & \frac{\beta_i(0)}{z_k} \\ 0 & \frac{\operatorname{d}\!z_k}{z_k} \end{pmatrix} + \mbox{holomorphic}.$$ Then eigenvalues of the residue matrix are $0, 1$ and the $\beta_i (0)$ gives a restriction on $C_s$. Then totally we have $3 \times (2g-1)$ conditions. 3. At $q_j, j=1, \ldots, N$, the points $\tilde{a}_{22}(q_j) + \frac{ds^{(i)}}{s^{(i)}}(q_j) = \tilde{p}_j + c_j \in \boldsymbol{\Omega}(D+D(s))$ lie on the characteristic curve $C_s$. These give $N=4g-3+n$ conditions. For generic choice of $q_1, \cdots, q_N$ and $s \in H^0(C, L_0)$, we can see using the method of Lemma [Lemma 7](#lem:independence){reference-type="ref" reference="lem:independence"} and Proposition [Proposition 17](#prop:birational){reference-type="ref" reference="prop:birational"} that these conditions are independent, so we obtain a total of $$2n-1 + 3(2g-1) + (4g-3 + n) = 10g-7 + 3n$$ conditions, so these determine the spectral curve $C_s$. Now the divisor $\mu= \sum_{j=1}^N ( \tilde{p}_j-c_j) + \sum_{k=1}^{2g-1} (1_k)$ determines the rank 1 sheaf ${\mathcal O}_{C_s}(\mu)$ where $(1_k) \in C_s$ denotes the point over $u_k$ corresponding to the eigenvalue $-1$ of the residue of $\Phi$ at $u_k$. Then $(\pi\colon C_s \longrightarrow C, {\mathcal O}_{C_s}(\mu) )$ determines $(E, \Phi)$ uniquely by [@BNR Proposition 3.6]. Hence $E$ and $\nabla = \Phi + \nabla_1$ is determined uniquely. ◻ 1 D. Arinkin, S. Lysenko. 
*Isomorphisms between moduli spaces of SL(2)-bundles with connections on $\boldsymbol{P}^1 \setminus \{x_1, \ldots , x_4\}$*. Math. Res. Lett., **4** (2-3): 181--190, 1997. M. F. Atiyah, *Complex analytic connections in fibre bundles*, Trans. Amer. Math. Soc. **85** (1957), 181--207. A. Beauville, M.S. Narasimhan, S. Ramanan, *Spectral curves and the generalised theta divisor*. J. Reine Angew. Math. **398** (1989), 169--179. I. Biswas, *On the moduli space of holomorphic $G$-connections on a compact Riemann surface*, Euro. Jour. Math. **6** (2020), 321--335. I. Biswas, V. Heu, J. Hurtubise, *Isomonodromic deformations of logarithmic connections and stability*, Math. Ann., **366** (2016), 121--140. I. Biswas, V. Heu, J. Hurtubise, *Isomonodromic deformations of irregular connections and stability of bundles* Comm. Anal. Geom. **29** (2021), no. 1, 1--18. P. Boalch, *Symplectic manifolds and isomonodromic deformations*. Adv. Math. **163** (2), (2001), 137--205 F. Bottacin, *Symplectic geometry on moduli spaces of stable pairs*, Ann. Sci. École Norm. Sup. **28** (1995), 391--433. K. Diarra, F. Loray, *Normal forms for rank two linear irregular differential equations and moduli spaces.* Period. Math. Hungar. **84** (2022) 303--320. B. Dubrovin, M. Mazzocco, *Canonical structure and symmetries of the Schlesinger equations*. Comm. Math. Phys. **271** (2007), no.2, 289--373. O. Dumitrescu, M. Mulase, *Quantum curves for Hitchin fibrations and the Eynard--Orantin theory*. Letters in Mathematical Physics **104** (2014) 635--671. T. Fassarella, F. Loray *Flat parabolic vector bundles on elliptic curves*. J. Reine Angew. Math. **761** (2020) 81--122. T. Fassarella, F. Loray, A. Muniz, *Flat parabolic vector bundles on elliptic curves II*. Math. Zeichr. **301** (2022) 4079--4118. R. Fedorov, *Algebraic and Hamiltonian approaches to isoStokes deformations*. Transform. Groups **11** (2006), no. 2, 137--160. I. Gaiur, M. Mazzocco, V. Rubtsov, *Isomonodromic Deformations: Confluence, Reduction and Quantisation* Commun. Math. Phys. **400**, (2023), 1385--1461. A. Gorsky, N. Nekrasov, V. Rubtsov, *Hilbert schemes, separated variables, and D-branes*. Comm. Math. Phys. **222** (2001), no. 2, 299--318. J. Harnad, *Dual Isomonodromic Deformations and Moment Maps to Loop Algebras*. Commun. Math. Phys. **166**, (1994), 337--365. K. Hiroe, D. Yamakawa, *Moduli spaces of meromorphic connections and quiver varieties*. Adv. Math. **266**, (2014), 120--151. J. Hurtubise, *Integrable systems and algebraic surfaces*. Duke. Math. J. **83**, 19--50 (1996). J. Hurtubise, *On the geometry of isomonodromic deformations*. J. Geom. Phys. **58** (2008), no.10, 1394--1406. M.-A. Inaba, *Moduli of parabolic connections on curves and the Riemann-Hilbert correspondence*. J. Algebraic Geom. **22** (2013), no. 3, 407--480. M.-A. Inaba, K. Iwasaki, M.-H. Saito, *Moduli of stable parabolic connections, Riemann-Hilbert correspondence and geometry of Painlevé equation of type VI. I.* Publ. Res. Inst. Math. Sci. **42** (2006), no. 4, 987--1089. M.-A. Inaba, K. Iwasaki, M.-H. Saito, *Moduli of stable parabolic connections, Riemann-Hilbert correspondence and geometry of Painlevé equation of type VI. II*. Moduli spaces and arithmetic geometry, 387-432, Adv. Stud. Pure Math., **45**, Math. Soc. Japan, Tokyo, 2006. M.-A. Inaba, M.-H. Saito, *Moduli of unramified irregular singular parabolic connections on a smooth projective curve*. Kyoto J. Math. **53** (2013) 433--482. P. Ivanics, A. Stipsicz, Sz. Szabó. 
{ "id": "2309.05012", "title": "Canonical coordinates for moduli spaces of rank two irregular\n connections on curves", "authors": "Arata Komyo and Frank Loray and Masa-Hiko Saito and Szilard Szabo", "categories": "math.AG math.SG", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We show that every locally flat topological embedding of a 3-manifold in a smooth 5-manifold is homotopic, by a small homotopy, to a smooth embedding. We deduce that topologically locally flat concordance implies smooth concordance for smooth surfaces in smooth 4-manifolds. address: - School of Mathematics and Statistics, University of Glasgow, United Kingdom - School of Mathematics and Statistics, University of Glasgow, United Kingdom author: - Michelle Daher - Mark Powell bibliography: - bib.bib title: Smoothing 3-manifolds in 5-manifolds --- # Introduction Let $Y^3=Y_1\sqcup \cdots \sqcup Y_m$ be a compact 3-manifold with connected components $Y_i$, and let $N^5$ be a compact, connected, smooth 5-manifold. Note that $Y$ and $N$ are possibly nonorientable and can have nonempty boundary. Since $Y$ is 3-dimensional it admits a unique smooth structure up to isotopy [@Moise52], [@Munkres-smoothing Theorem 6.3], [@WhdJ1961a Corollary 1.18]. **Theorem 1**. *Let $f\colon Y\to N$ be a locally flat proper topological embedding that is smooth near $\partial Y$. Then $f$ is homotopic rel. boundary, via an arbitrarily small homotopy, to a smooth embedding.* Here *proper* means that $f^{-1}(\partial N) = \partial Y$. It is not possible in general to isotope $f$ to a smooth embedding, so the homotopy in the theorem is necessary. For instance, Lashof [@Lashof] constructed a locally flat knot $L \cong S^3 \subseteq S^5$ that is not isotopic, in fact not even concordant, to any smooth knot. We will make crucial use of Lashof's knot in our proof of [Theorem 1](#thm:main-intro){reference-type="ref" reference="thm:main-intro"}. In the rest of the introduction, we explain an application to concordance of surfaces, then we compare with the situation for codimension two embeddings in other dimensions, before finally outlining our proof of [Theorem 1](#thm:main-intro){reference-type="ref" reference="thm:main-intro"}. ## Topological concordance implies smooth concordance for surfaces in 4-manifolds Let $\Sigma$ be a closed, smooth surface, possibly disconnected, and possibly nonorientable. We consider a smooth, closed, connected 4-manifold $X$, again possibly nonorientable, and two smooth submanifolds $\Sigma_0$ and $\Sigma_1$ in $X$ with $\Sigma_0 \cong \Sigma \cong \Sigma_1$. **Definition 1**. We say that $\Sigma_0$ and $\Sigma_1$ are *topologically concordant* (respectively *smoothly concordant*) if there is a locally flat (respectively smooth) submanifold $C \cong \Sigma \times I$, properly embedded in $X \times I$, whose intersection with $X \times \{0,1\}$ is precisely $\Sigma_0 \subseteq X \times \{0\}$ and $\Sigma_1 \subseteq X \times \{1\}$. We call $C$ a *topological concordance* (respectively *smooth concordance*). **Corollary 2**. *Suppose that $C$ is a topological concordance between $\Sigma_0 \subseteq X\times \{0\}$ and $\Sigma_1 \subseteq X \times \{1\}$. Then the inclusion map $C \to X \times I$ is homotopic rel. $\Sigma_0 \cup \Sigma_1$, via an arbitrarily small homotopy, to an embedding whose image is a smooth concordance between $\Sigma_0$ and $\Sigma_1$.* This follows immediately from [Theorem 1](#thm:main-intro){reference-type="ref" reference="thm:main-intro"} by taking $Y = \Sigma \times I$, $N= X \times I$, and $f \colon Y \to N$ to be an embedding with $C=f(Y)$. Special cases of [Corollary 2](#corollary:concordance){reference-type="ref" reference="corollary:concordance"} were known before.
First, Kervaire [@Kervaire-slice-knots] proved that every 2-knot is slice. This holds in both categories, from which it follows that smooth and topological concordance coincide for 2-knots. Sunukjian [@Sunuk] proved more generally that homologous connected surfaces in a simply-connected 4-manifold $X$ are both smoothly and topologically concordant. Again, it follows immediately that smooth and topological concordance coincide. Similarly, Cha-Kim [@Cha-Kim Corollary J] proved that smooth and topological concordance coincide for smoothly embedded spheres with a common smoothly embedded geometrically dual framed sphere. Work defining surface concordance obstructions includes [@Stong-uniqueness; @Schwartz-I; @Klug-Miller; @ST-FQ; @AMY]. Other than in [@Stong-uniqueness], the authors restricted to the smooth category. Our result implies that one automatically obtains topological concordance obstructions. ## Comparison with other dimensions We start with low dimensions. In dimension 3, every locally flat embedding $Y^1 \subseteq N^3$ is isotopic to a smooth embedding. On the other hand, in dimension 4, the existence of topologically slice knots that are not smoothly slice implies the existence of a locally flat embedding $D^2 \hookrightarrow D^4$ that is not even homotopic rel. boundary to a smooth embedding. There are also examples of closed locally flat surfaces in closed 4-manifolds, in particular in $S^2 \times S^2$, $\mathbb{CP}^2$, and $S^2 \widetilde{\times} S^2$, that cannot be smoothed up to homotopy [@Kuga; @Rudolph; @Luo; @LW-90]. We deal with dimension 5 in this article. The analogue of [Theorem 1](#thm:main-intro){reference-type="ref" reference="thm:main-intro"} for locally flat embeddings of smooth 4-manifolds embedded in smooth 6-manifolds is open, and we intend to investigate it in future work. Now we discuss high dimensions. For codimension 2 proper embeddings $f \colon Y^m \to N^{m+2}$, when the dimension $m$ of $Y$ is greater than or equal to 5, Schultz [@SchultzSmoothableSO] proved the following; cf. [@Lashof-Rothenberg]. **Theorem 3** (Schultz). *Let $m \geq 5$ and $n>m$. Let $N^n$ be a smooth compact $n$-manifold, and let $Y^m$ be a compact topological manifold equipped with a smooth structure near $\partial Y$. Let $f\colon Y\to N$ be a locally flat proper topological embedding that is smooth near $\partial Y$. Then there is a smooth structure on $Y$, extending the given smooth structure on $\partial Y$, such that $f$ is isotopic rel. boundary to a smooth embedding if and only if $Y$ has a topological vector bundle neighbourhood.* Topological vector bundle neighbourhoods always exist for locally flat codimension 1 and 2 embeddings [@Brown62; @KS-normal-bundles-codim-2], so Schultz [@SchultzSmoothableSO] deduced the following result. **Theorem 4** (Schultz). *Let $k=1$ or $2$, let $m\geq5$, and let $n=m+k$. Let $N^{n}$ be a smooth compact $n$-manifold, and let $Y^m$ be a compact topological manifold equipped with a smooth structure near $\partial Y$. Let $f\colon Y\to N$ be a locally flat proper topological embedding that is smooth near $\partial Y$. Then there is a smooth structure on $Y$, extending the given smooth structure on $\partial Y$, such that $f$ is isotopic rel. boundary to a smooth embedding.* Note that in the statements of [Theorem 3](#thm:Schultz1){reference-type="ref" reference="thm:Schultz1"} and [Theorem 4](#thm:Schultz2){reference-type="ref" reference="thm:Schultz2"}, $Y$ is not a priori smoothable.
The existence of a topological vector bundle neighbourhood for an embedding $f \colon Y\to N$ guarantees a smooth structure on $Y\times\mathbb{R}^p$ for some $p\in\mathbb{N}^*$. Hence, for $m\geq5$, the Product Structure Theorem [@Kirby-Siebenmann:1977-1 Essay I] implies that $Y$ is smoothable. The proofs of [Theorem 3](#thm:Schultz1){reference-type="ref" reference="thm:Schultz1"} and [Theorem 4](#thm:Schultz2){reference-type="ref" reference="thm:Schultz2"} then proceed by using smoothing theory and the Concordance Implies Isotopy Theorem [@Kirby-Siebenmann:1977-1 Essay I] for smooth structures on $Y$. If one first fixes a smooth structure on $Y$, then the induced structure on $Y$ that emerges from Schultz's argument need not be isotopic to the fixed one. This is a feature of the problem, not a failure of the proof. In fact, if we fix a smooth structure on $Y$, in general in high dimensions $f$ is not even homotopic to a smooth embedding. For example, Hsiang-Levine-Szczarba [@Hsiang-Levine-Szczarba] showed that the exotic 16-sphere does not embed smoothly in $S^{18}$. Certainly it does embed topologically. The results from [@Kirby-Siebenmann:1977-1] do not apply in the same way for $Y^3 \subseteq N^5$. Our approach is rather different. We fix a smooth structure on a tubular neighbourhood of $f(Y)$ and try to extend it to all of $N$. As we will describe next, we face obstructions along the way that will require us in general to modify the embedding by a small homotopy to obtain a smooth embedding of $Y$ in $N$. ## Outline of the proof of [Theorem 1](#thm:main-intro){reference-type="ref" reference="thm:main-intro"} {#subsec:outline-of-proof} For a submanifold $K$ of a manifold $X$ with an open tubular neighbourhood, i.e. the image of an embedding of a normal bundle, denote the tubular neighbourhood by $\nu K$. Write $\overline{\nu} K$ for the closure of the tubular neighbourhood of $K$ in $X$, which has the structure of a disc bundle over $K$. Given a closed subset $C$ of $X$, *a smooth structure on $C$* will always mean a smooth structure on an open neighbourhood $U$ of $C$ in $X$. The proof of [Theorem 1](#thm:main-intro){reference-type="ref" reference="thm:main-intro"} breaks naturally into two distinct steps, the outlines of which we shall explain next. For a smooth structure $\sigma$ on a topological manifold $X$, we write $X_\sigma$ to specify that $X$ is equipped with the smooth structure $\sigma$. In what follows, we will write $N_{\mathop{\mathrm{std}}}$ for $N$ equipped with the given smooth structure. **Step 1:** *We show that $f\colon Y\to N$ is homotopic, by a small homotopy, to a smooth embedding $g\colon Y\to N_{\sigma}$, for some $\sigma$.* We write $M := f(Y)$ for the image of $f$. The idea of the proof is to consider a standard smooth structure on $\overline{\nu} M$ and on $\partial N$, and then to try to extend this to all of $N$. We denote the exterior of $M$ by $W_f := N \setminus\nu M$. Smoothing theory (recapped in Section [2](#section:smoothing-theory){reference-type="ref" reference="section:smoothing-theory"}) gives a Kirby-Siebenmann obstruction in $H^4(W_f,\partial W_f;\mathbb{Z}/2)$ that vanishes if and only if the smooth structure on $\partial W_f$ extends to all of $W_f$.
It turns out that this obstruction does not always vanish, but that by taking ambient connected sums of $M$ with copies of Lashof's nonsmoothable 3-knot $L \cong S^3 \subseteq S^5$ from [@Lashof], which we discuss in Section [2.2](#section:lashofs-knot){reference-type="ref" reference="section:lashofs-knot"}, we can arrange that this obstruction vanishes. Whence $f$ is homotopic, via a small homotopy, to $g \colon Y \to N$ such that $M' := g(Y)$ is smooth in some smooth structure $\sigma$ on $N$, that restricts to the given smooth structure on $\partial N$. **Step 2:** *We show that $g\colon Y\to N_{\sigma}$ is homotopic, via a small homotopy, to a smooth embedding $g'\colon Y\to N_{\mathop{\mathrm{std}}}$.* Smoothing theory implies that we can arrange for the smooth structure $\sigma$ on $N$ and the given smooth structure $\mathop{\mathrm{std}}$ to agree away from a tubular neighbourhood $\nu S$ of a surface $S \subseteq N$. By transversality we can assume that $g(Y)$ intersects $S$ in finitely many points, in a neighbourhood of which $M':= g(Y)$ is smooth in $\sigma$ but not in $\mathop{\mathrm{std}}$. This reduces the smoothing problem for $M'$ in $\mathop{\mathrm{std}}$ to finitely many local problems, which can be resolved using a proof analogous to Kervaire's proof [@Kervaire-slice-knots Théorème III.6] that every 2-knot is smoothly slice. Kervaire's result was generalised by Sunukjian [@Sunuk], who showed that homologous connected surfaces in 1-connected 4-manifolds are smoothly concordant, and it is Sunukjian's arguments that apply in our situation. *Remark 5*. The changes to $f$ in Steps 1 and 2 can be characterised as topological isotopies, together with adding and removing local knots. ## Organisation {#organisation .unnumbered} In Section [2](#section:smoothing-theory){reference-type="ref" reference="section:smoothing-theory"} we recap smoothing theory, prove lemmas on properties of the Kirby-Siebenmann invariant, and recall Lashof's nonsmoothable 3-knot. We prove Step 1 in Section [3](#section:smoothing-complement){reference-type="ref" reference="section:smoothing-complement"}, and we prove Step 2 in Section [4](#section:comparing-with-std){reference-type="ref" reference="section:comparing-with-std"}. Then in Section [5](#section:Jae-choon-suggestions){reference-type="ref" reference="section:Jae-choon-suggestions"} we give conditions under which smoothing up to isotopy is possible. ## Acknowledgements {#acknowledgements .unnumbered} MP thanks the participants of a discussion group on surfaces in 4-manifolds in Le Croisic, June 2022, which brought this problem to his attention, and by extension thanks the organisers of this enjoyable conference. We thank Sander Kupers for his interest and suggesting a citation, and we are grateful to Jae Choon Cha for suggesting that we write Section [5](#section:Jae-choon-suggestions){reference-type="ref" reference="section:Jae-choon-suggestions"}. MD was supported by EPSRC New Horizons grant EP/V04821X/2. MP was partially supported by EPSRC New Investigator grant EP/T028335/2 and EPSRC New Horizons grant EP/V04821X/2. # Smoothing theory {#section:smoothing-theory} In this section we give a brief recap of smoothing theory, and recall the results we will need. 
Smoothing theory was developed by Cairns [@Cairns], Munkres [@Munkres-smoothing; @Munkres-smoothing-II; @Munkres-smoothing-III; @Munkres-smoothing-IV; @Munkres-2-sphere], Milnor [@Milnor-ICM; @Milnor-microbundles], Hirsch [@Hirsch-smoothing], Hirsch-Mazur [@Hirsch-Mazur], Lashof-Rothenberg [@Lashof-Rothenberg], and Cerf [@Cerf-VI; @Cerf-I; @Cerf-II; @Cerf-III; @Cerf-IV; @Cerf-V], among others. Their goal, which they achieved to a large extent, was to understand which PL manifolds admit smooth structures, and if so how many. The theory was extended around 1970 by Kirby and Siebenmann [@Kirby-Siebenmann:1977-1] to allow one to start with a topological manifold, provided that one is not trying to understand smooth structures on a 4-manifold. For the purposes of this article, since we work in dimensions four and five, the smooth and PL categories are interchangeable. Since it is more common nowadays to work in the smooth category, we shall also do so. ## Recap of smoothing theory Let $X$ be a topological $n$-manifold possibly with boundary, let $C$ be a closed subset of $X$, and let $\sigma$ be a smooth structure on an open neighbourhood $U$ of $C$. Let $V \subseteq U$ be a smaller open neighbourhood of $C$. Denote the set of isotopy classes of smooth structures on $X$ that agree with $\sigma$ near $C$ by $\mathcal{S}_{\mathrm{Diff}}(X, C,\sigma)$. We write $\mathop{\mathrm{BTOP}}(k)$ for the classifying space for topological $\mathbb{R}^k$ bundles, and $\mathop{\mathrm{BO}}(k)$ for the classifying space for rank $k$ smooth vector bundles. Define $\mathop{\mathrm{BTOP}}:= \mathop{\mathrm{colim}}_k \mathop{\mathrm{BTOP}}(k)$ and $\mathop{\mathrm{BO}}:= \mathop{\mathrm{colim}}_k \mathop{\mathrm{BO}}(k)$, the corresponding stable bundle classifying spaces. Consider the following diagram, which is induced by the stable classifying maps of the tangent bundle of a neighbourhood $U$ of $C$ and the stable tangent microbundle of $X$: $$\begin{tikzcd}[row sep = small] & \mathop{\mathrm{TOP}}/\mathop{\mathrm{O}}\ar[d] \\ V \ar[r,"T_{V}"] \ar[d] & \mathop{\mathrm{BO}}\ar[d] \\ X \ar[r,"\tau_X"'] \ar[ur,dashed] & \mathop{\mathrm{BTOP}}\ar[d] \\ & \mathop{\mathrm{B}}(\mathop{\mathrm{TOP}}/\mathop{\mathrm{O}}). \end{tikzcd}$$ Smoothing theory implies that for $n \geq 6$ or $(n=5 \ \mathrm{and}\ \partial X \subseteq C)$, isotopy classes of smooth structures on $X$ correspond to lifts $X\to \mathop{\mathrm{BO}}$ of the map $X \to \mathop{\mathrm{BTOP}}$, relative to the fixed lift on the smaller neighbourhood $V \subseteq U$. The vertical sequence is a principal fibration, which implies that such a lift exists if and only if the composite $X \to \mathop{\mathrm{BTOP}}\to \mathop{\mathrm{B}}(\mathop{\mathrm{TOP}}/\mathop{\mathrm{O}})$ is null-homotopic, and that homotopy classes of such lifts correspond to $[(X,V),(\mathop{\mathrm{TOP}}/\mathop{\mathrm{O}},*)]$ , homotopy classes of maps $X\to \mathop{\mathrm{TOP}}/\mathop{\mathrm{O}}$ that send $V$ to the base point. The main result of smoothing theory, applied to 5-manifolds, reads as follows [@Kirby-Siebenmann:1977-1 Theorem IV.10.1]. **Theorem 6**. 
*Let $X$ be a $5$-dimensional topological manifold, let $C$ be a closed subset of $X$ with $\partial X\subseteq C$, and fix a smooth structure $\sigma$ on an open neighbourhood $U$ of $C$.* (i) *[\[item:1-smoothing-thy\]]{#item:1-smoothing-thy label="item:1-smoothing-thy"} There is an obstruction $\operatorname{ks}(X,C):= \operatorname{ks}(X,C,U,\sigma) \in H^4(X, C;\mathbb{Z}/2)$ that vanishes if and only if $X$ admits a smooth structure extending the given smooth structure on some neighbourhood $V \subseteq U$ of $C$.* (ii) *[\[item:2-smoothing-thy\]]{#item:2-smoothing-thy label="item:2-smoothing-thy"} Given two smooth structures $\sigma$ and $\pi$ on $X$ extending the given smooth structure on $U \supseteq C$, there is an obstruction $\operatorname{ks}(\sigma,\pi) \in H^3(X, C;\mathbb{Z}/2)$ that vanishes if and only if there is a neighbourhood $V \subseteq U$ of $C$ such that $\sigma$ and $\pi$ are isotopic rel. $V$, i.e. if there is a homeomorphism $f \colon X \to X$ with $f|_{V} = \operatorname{Id}$, such that $f^*(\pi) = \sigma$ and such that $f$ is topologically isotopic rel. $V$ to $\operatorname{Id}_X$.* (iii) *[\[item:3-smoothing-thy\]]{#item:3-smoothing-thy label="item:3-smoothing-thy"} The Kirby-Siebenmann obstructions $\operatorname{ks}(X,C)$ and $\operatorname{ks}(\sigma,\pi)$ from [\[item:1-smoothing-thy\]](#item:1-smoothing-thy){reference-type="eqref" reference="item:1-smoothing-thy"} and [\[item:2-smoothing-thy\]](#item:2-smoothing-thy){reference-type="eqref" reference="item:2-smoothing-thy"} are natural for restriction to open submanifolds of $X$. More precisely, let $W$ be an open submanifold of $X$ and let $i\colon W\hookrightarrow X$ be the inclusion map. Then $i^*\colon H^4(X,C;\mathbb{Z}/2)\to H^4(W,W\cap C;\mathbb{Z}/2)$ sends $\operatorname{ks}(X,C)$ to $\operatorname{ks}(W,W\cap C)$ and $i^*\colon H^3(X,C;\mathbb{Z}/2)\to H^3(W,W\cap C;\mathbb{Z}/2)$ sends $\operatorname{ks}(\sigma,\pi)$ to $\operatorname{ks}(\sigma|_W,\pi|_W)$.* (iv) *[\[item:4-smoothing-thy\]]{#item:4-smoothing-thy label="item:4-smoothing-thy"} Given a smooth structure on some neighbourhood $V$ of $C$ in $X$, the Kirby-Siebenmann obstruction $\operatorname{ks}(X,C)$ from [\[item:1-smoothing-thy\]](#item:1-smoothing-thy){reference-type="eqref" reference="item:1-smoothing-thy"} is natural with respect to restriction to a neighbourhood $V'\subseteq V$ of a closed subset $C' \subseteq C$. That is, the inclusion map $H^4(X,C;\mathbb{Z}/2) \to H^4(X,C';\mathbb{Z}/2)$ sends $\operatorname{ks}(X,C)$ to $\operatorname{ks}(X,C')$.* *Proof.* The first three items of the theorem for $\mathop{\mathrm{PL}}$ structures instead of smooth structures follows from [@Kirby-Siebenmann:1977-1 Theorem IV.10.1] and the fact that $\mathop{\mathrm{TOP}}/\mathop{\mathrm{PL}}\simeq K(\mathbb{Z}/2,3)$ [@Kirby-Siebenmann:1977-1 Section IV.10.12]. However $\mathop{\mathrm{PL}}$ 5-manifolds with smooth boundary admit a unique smooth structure up to isotopy, by smoothing theory and since $\mathop{\mathrm{PL}}/\mathop{\mathrm{O}}$ is 6-connected [@Hirsch-Mazur; @Munkres-2-sphere; @Cerf-V; @Kervaire-Milnor:1963-1]. Hence it is legitimate to replace $\mathop{\mathrm{PL}}$ structures by smooth structures, as we have done. The final item can be seen from the following diagram. $$\begin{tikzcd}[row sep = small] % && \TOP/\OO \ar[d] \\ V' \ar[r] \ar[dr] & V \ar[r,"T_{V}"] \ar[d] & \mathop{\mathrm{BO}}\ar[d] & \\ & X \ar[r,"\tau_X"'] & \mathop{\mathrm{BTOP}}\ar[r] & \mathop{\mathrm{B}}(\mathop{\mathrm{TOP}}/\mathop{\mathrm{O}}). 
\end{tikzcd}$$ The obstructions $\operatorname{ks}(X,C)$ and $\operatorname{ks}(X,C')$ are both represented by the map $X \xrightarrow{\tau_X} \mathop{\mathrm{BTOP}}\to \mathop{\mathrm{B}}(\mathop{\mathrm{TOP}}/\mathop{\mathrm{O}})$, and the inclusion induced map sends the former to the latter. ◻ Next we apply [Theorem 6](#thm:smoothing-thy-main-thm){reference-type="ref" reference="thm:smoothing-thy-main-thm"} to deduce a naturality result for the Kirby-Siebenmann obstructions that will be useful for submanifolds with corners. Let $K$ be a smooth 5-manifold with corners. Suppose the corner set of $K$, denoted by $\angle K$, separates $\partial K$ into $\partial_1K$ and $\partial_2 K$. Note that $\partial_1K$ and $\partial_2K$ are smooth manifolds with boundary. Fix a smooth structure[^1] $\sigma$ on a neighbourhood $U$ of $\partial K$. By [@Wall-diff-topology Proposition 1.5.6], $U$ contains a smooth embedding of $\partial_1K\times[0,1)$ in $K$. Let $K'$ denote the result of attaching $\partial_1 K\times[0,1)$ to $K$ by the map $h\colon\partial_1K\times\{0\}\to\partial_1K$ given by $h(x,0)=x$, and extend $\sigma|_{\partial_1 K}$ to a product structure along $[0,1)$ in $\partial_1K \times [0,1)$. Denote the resulting smooth structure on $U':=U\cup\partial_1K\times[0,1)$ by $\sigma'$. Note that $U'$ is a smooth manifold with boundary, i.e. no corners. Apply the same procedure to $K'$ along $\partial K'$ to obtain a manifold $K''$ without boundary and a smooth structure $\sigma''$ on $U'':=U'\cup\partial K'\times[0,1)$. Let $j\colon K\hookrightarrow K''$ be the inclusion map. **Definition 7**. Let $K$, $K''$, and $j$ be as above. Let $\sigma$ be a smooth structure on a neighbourhood $U$ of $\partial K$. Define the Kirby-Siebenmann obstruction $\operatorname{ks}(K,\partial K)$ as $j^*\operatorname{ks}(K'',\partial K)$ where $\operatorname{ks}(K'',\partial K) = \operatorname{ks}(K'',\partial K,U'',\sigma'')$ is the obstruction to extending the smooth structure $\sigma''$ on $U''$ to $K''$. Let $X$ be a smooth 5-manifold with boundary and let $K$ be a smooth 5-manifold with corners that is a submanifold of $X$ such that the corner set of $K$ separates $\partial K$ into $\partial_1K:=K\cap\partial X$ and $\partial_2 K$ with $\mathop{\mathrm{Int}}\partial_2 K \subseteq \mathop{\mathrm{Int}}X$. (By definition of a submanifold, $\partial_2 K$ intersects $\partial X$ transversely.) Consider a smooth structure $\sigma$ on a neighbourhood $U$ of $\partial X\cup\partial K$ such that $\partial_2 K\hookrightarrow U$ is smooth; this condition guarantees the existence of a smooth bicollar neighbourhood of $\partial_2 K$ in $U$, which will be implicitly used in the proof of the next proposition. **Proposition 8**. *Let $i\colon(K,\partial K)\hookrightarrow(X,\partial X\cup\partial K)$ be the inclusion. The induced map $$i^*\colon H^4(X,\partial X\cup\partial K;\mathbb{Z}/2)\to H^4(K,\partial K;\mathbb{Z}/2)$$ sends $\operatorname{ks}(X,\partial X\cup\partial K)$ to $\operatorname{ks}(K,\partial K)$.* *Proof.* Let $X'$ be the open topological manifold obtained from $X$ by attaching an exterior collar $\partial X \times [0,1)$, where $x$ in $\partial X$ is identified with $(x,0)$ in $\partial X \times [0,1)$. Extend $\sigma$, by taking a product structure along $\partial X \times [0,1)$, to $U' := U \cup \partial X \times [0,1)$, which is a neighbourhood of $\partial X\cup\partial K$ in $X'$. Let $\sigma'$ be the resulting smooth structure on $U'$.
Then, by 'absorbing the boundary' [@Kirby-Siebenmann:1977-1 Proposition IV.2.1], this construction determines a natural bijection $\theta\colon \mathcal{S}_{\mathrm{Diff}}(X, \partial X\cup\partial K,\sigma)\to \mathcal{S}_{\mathrm{Diff}}(X', \partial X\cup\partial K,\sigma')$ and it follows from [@Kirby-Siebenmann:1977-1 Theorem IV.10.1] and [@Kirby-Siebenmann:1977-1 Remark IV.10.2] that $\operatorname{ks}(X,\partial X\cup\partial K) \mapsto \operatorname{ks}(X',\partial X\cup\partial K)$ under the isomorphism on $H^4(-,\partial X\cup\partial K;\mathbb{Z}/2)$ induced by the obvious homotopy equivalence $X' \simeq X$. Consider the following diagram: $$\begin{tikzcd} H^4(X,\partial X\cup\partial K;\mathbb{Z}/2) \ar[rr,"\cong"] \ar[d,"i^*"] && H^4(X',\partial X\cup\partial K;\mathbb{Z}/2) \ar[d,"i_1^*"] \\ H^4(K,\partial K;\mathbb{Z}/2) & H^4(K'',\partial K;\mathbb{Z}/2) \ar[l,"j^*"'] & H^4(X',\partial K;\mathbb{Z}/2). \ar[l,"g^*"'] \end{tikzcd}$$ The map $i^*_1$ is induced by the inclusion map $i_1 \colon (X',\partial K) \to (X',\partial X \cup \partial K)$, $j^*$ is induced by the inclusion $j \colon (K,\partial K) \to (K'',\partial K)$ from [Definition 7](#defn:ks-corners){reference-type="ref" reference="defn:ks-corners"}, and $g^*$ is induced by the inclusion $(K'',\partial K)\hookrightarrow(X',\partial K)$. As per the above discussion, $\operatorname{ks}(X,\partial X\cup\partial K)$ is sent to $\operatorname{ks}(X',\partial X\cup\partial K)$ by the top horizontal map. By [Theorem 6](#thm:smoothing-thy-main-thm){reference-type="ref" reference="thm:smoothing-thy-main-thm"} [\[item:4-smoothing-thy\]](#item:4-smoothing-thy){reference-type="eqref" reference="item:4-smoothing-thy"}, $i^*_1\operatorname{ks}(X',\partial X\cup\partial K) = \operatorname{ks}(X',\partial K)$. It follows from [Theorem 6](#thm:smoothing-thy-main-thm){reference-type="ref" reference="thm:smoothing-thy-main-thm"} [\[item:3-smoothing-thy\]](#item:3-smoothing-thy){reference-type="eqref" reference="item:3-smoothing-thy"} that $g^*\operatorname{ks}(X',\partial K)=\operatorname{ks}(K'',\partial K)$. Finally by [Definition 7](#defn:ks-corners){reference-type="ref" reference="defn:ks-corners"} we have that $j^*\operatorname{ks}(K'',\partial K)=\operatorname{ks}(K,\partial K)$. This concludes the proof that $i^*\operatorname{ks}(X,\partial X\cup\partial K) = \operatorname{ks}(K,\partial K)$. ◻ In practice, the manifold with corners $K$ will be either a closed tubular neighbourhood $\overline{\nu} f(Y)$ of a locally flat proper embedding $f\colon Y\to N$, or the complement $N \setminus\nu f(Y)$. In fact, let $p \colon\overline{\nu} f(Y)\to f(Y)$ be a disc bundle and denote its boundary sphere bundle by $\Sigma$. Then $\overline{\nu} f(Y)$ is a smooth manifold with corners (note that it is not necessarily smooth in $N$) and $\angle\overline{\nu} f(Y)=p^{-1}\partial f(Y)\cap\Sigma$ separates $\partial\overline{\nu} f(Y)$ in two parts with closures $p^{-1}f(\partial Y)$ and $\Sigma$. We also need the following more detailed characterisation of the obstruction $\operatorname{ks}(\sigma,\pi)$. We write $X_\sigma$ to denote a topological 5-manifold $X$ equipped with a smooth structure $\sigma$, and let $\pi$ be another smooth structure on $X$ that agrees with $\sigma$ near $\partial X$. **Proposition 9**. *Suppose that $S \subseteq X$ is a closed surface smoothly embedded in $\mathop{\mathrm{Int}}X_{\sigma}$ whose $\mathbb{Z}/2$-fundamental class is Poincaré dual to $\operatorname{ks}(\sigma,\pi) \in H^3(X,\partial X;\mathbb{Z}/2)$. 
Then there is an arbitrarily small isotopy of $\sigma$, supported away from $S$ and $\partial X$, to a smooth structure that agrees with $\pi$ on $X \setminus S$.* *Proof.* Using the inclusion $X \setminus S \to X$ we have a map $H^3(X,\partial X;\mathbb{Z}/2) \to H^3(X \setminus S,\partial X;\mathbb{Z}/2)$. By naturality of Kirby-Siebenmann invariants ([Theorem 6](#thm:smoothing-thy-main-thm){reference-type="ref" reference="thm:smoothing-thy-main-thm"} [\[item:3-smoothing-thy\]](#item:3-smoothing-thy){reference-type="eqref" reference="item:3-smoothing-thy"}), this sends the Kirby-Siebenmann invariant $\operatorname{ks}(\sigma,\pi)$ to the invariant of the restricted structures $\operatorname{ks}(\sigma|_{X \setminus S},\pi|_{X \setminus S})$. We will denote restricted structures by $\sigma| := \sigma|_{X \setminus S}$, and similarly for $\pi$, from now on. The long exact sequence of the triple $\partial X \subseteq X \setminus S \subseteq X$ gives the top row of the following diagram. $$\begin{tikzcd}%[row sep = small] H^3(X,X\setminus S;\mathbb{Z}/2) \ar[r] \ar[d,"\cong"] & H^3(X,\partial X;\mathbb{Z}/2) \ar[r] \ar[dd,"\cong"] & H^3(X \setminus S, \partial X;\mathbb{Z}/2) \\ H^3(\overline{\nu}S,\partial \overline{\nu}S;\mathbb{Z}/2) \ar[d,"\cong"] & & \\ H_2(S;\mathbb{Z}/2) \ar[r] & H_2(X;\mathbb{Z}/2) & \end{tikzcd}$$ The vertical isomorphisms are given by combining homotopy invariance of homology, excision, and Poincaré-Lefschetz duality. It follows from the diagram that the Poincaré dual to $[S] \in H_2(X;\mathbb{Z}/2)$, which by hypothesis equals $\operatorname{ks}(\sigma,\pi)$, lies in the kernel of the map $H^3(X,\partial X;\mathbb{Z}/2) \to H^3(X \setminus S,\partial X;\mathbb{Z}/2)$. Thus $\operatorname{ks}(\sigma|,\pi|) =0 \in H^3(X \setminus S,\partial X;\mathbb{Z}/2)$. By smoothing theory ([Theorem 6](#thm:smoothing-thy-main-thm){reference-type="ref" reference="thm:smoothing-thy-main-thm"}) there is an isotopy of $\sigma|$ to $\pi|$ on $X \setminus S$ rel. $\partial X$. That is, we have an isotopy of homeomorphisms $f_t \colon X\setminus S \to X \setminus S$, where $f_0 = \operatorname{Id}$, $f_t|_{\partial X}=\operatorname{Id}_{\partial X}$, and $f_1^*(\pi|) = \sigma|$. To prove the desired result we have to delve into the proof of [Theorem 6](#thm:smoothing-thy-main-thm){reference-type="ref" reference="thm:smoothing-thy-main-thm"} a little. Such an isotopy is constructed chart by chart, and within each chart via a decomposition into handles. Then the handles are smoothed iteratively using [@Kirby-Siebenmann:1977-1 Theorem I.3.1]. Let $d$ be a metric on $X$. We can and shall choose charts $\{U_i\}$ covering $X \setminus S$ to be such that, for every $\varepsilon>0$, if there exists $x \in U_i$ with $d(x,S) < \varepsilon$, then $\operatorname{diam}(U_i) < \varepsilon/10$. We can also make all charts have diameter smaller than an arbitrarily chosen global positive constant. The construction of $f_t$ guarantees that for all $i$, if $x \in U_i$, then $f_t(x) \in U_i$ for all $t \in [0,1]$. It follows from this and the fact that we controlled the size of the charts as they approach $S$ that $f_t$ extends continuously to an isotopy $F_t \colon X \to X$ that fixes $S$ pointwise for all $t \in [0,1]$. This gives the desired isotopy of $\sigma$ to a smooth structure $\sigma'$ on $X$ such that $\sigma'|_{X \setminus S} = \pi|_{X \setminus S}$, i.e. they agree away from $S$. Since we controlled the global size of all charts, we can also arrange for the isotopy to be arbitrarily small.
◻ ## Lashof's nonsmoothable 3-knot {#section:lashofs-knot} Lashof [@Lashof] constructed a locally flat 3-knot $L \cong S^3 \subseteq S^5$ that is not isotopic to any smooth knot. As observed by Kwasik and Vogel [@Kwasik-Vogel; @Kwasik-nonsmoothable], Lashof's knot bounds a Seifert 4-manifold $V$ in $S^5$ with $\operatorname{sign}(V)/8 \equiv 1 \mod{2}$. We can use this to explain why $L$ is not smoothable. The proof is as follows. If $L$ were smoothable, it would bound a smooth Seifert 4-manifold $V'$, which would be spin by naturality of $w_2$, and therefore would satisfy $\operatorname{sign}(V')/8 \equiv 0 \mod{2}$ by Rochlin's theorem. Since the signature of a Seifert 4-manifold is a knot invariant [@Levine:1969-1], we arrive at a contradiction and it follows that $L$ cannot be smoothed. Since the signature is a concordance invariant, it follows also that $L$ is not concordant to any smooth $3$-knot. Let $E_L := S^5 \setminus\nu L$ be the exterior of $L$, and equip $\partial E_L \cong S^3 \times S^1$ with a standard smooth structure. **Lemma 10**. *The Kirby-Siebenmann invariant of $E_L$ satisfies $$\operatorname{ks}(E_L,\partial E_L) =1 \in H^4(E_L,\partial E_L;\mathbb{Z}/2) \cong H_1(E_L;\mathbb{Z}/2) \cong \mathbb{Z}/2.$$* *Proof.* If $\operatorname{ks}(E_L,\partial E_L)$ were trivial, then by smoothing theory there would be a smooth structure $\tau$ on $S^5$ extending the standard smooth structure on $\overline{\nu}L$. Thus $L$ would be smooth in $\tau$. But in fact there is a unique smooth structure on $S^5$ up to isotopy [@Kervaire-Milnor:1963-1], and hence $L$ would be isotopic to a smooth knot in the standard smooth structure on $S^5$. Since Lashof proved this is not the case, we deduce that $\operatorname{ks}(E_L,\partial E_L)$ is indeed nontrivial. ◻ # Smoothing the complement of an embedding {#section:smoothing-complement} In this section we prove the following result, which proves Step 1 from [1.3](#subsec:outline-of-proof){reference-type="ref" reference="subsec:outline-of-proof"}. **Proposition 11**. *Let $N$ be a compact, connected, smooth 5-dimensional manifold with $($possibly empty$)$ boundary, let $Y$ be a compact 3-dimensional manifold with $($possibly empty$)$ boundary, and let $f\colon Y\to N$ be a locally flat proper topological embedding such that $f$ is smooth near $\partial Y$. Then $f$ is homotopic rel. boundary, via an arbitrarily small homotopy, to a smooth embedding in some smooth structure $\sigma$ on $N$ that agrees with the given smooth structure on $N$ near $\partial N$.* Let $Y=Y_1\sqcup \cdots \sqcup Y_m$, where each $Y_i$ is connected. We write $M:= f(Y)$ and $M_i:= f(Y_i)$. By [@KS-normal-bundles-codim-2], $M$ has a normal vector bundle in $N$. Let $\nu M \subseteq N$ denote the image of an embedding of the normal bundle. Let $W_f := N \setminus\nu M$, $E_i:=\partial\overline{\nu} M_i \cap\partial W_f$, and define $E:= \bigcup_{i=1}^m E_i$; see [\[figure:N\]](#figure:N){reference-type="ref" reference="figure:N"}. We fix a smooth structure on a neighbourhood of $\partial N\cup E=\partial\overline{\nu} M\cup\partial W_f$. To do this, we use that $Y$, as a 3-manifold, admits an essentially unique smooth structure. Since $\mathop{\mathrm{TOP}}(2) \simeq \mathop{\mathrm{O}}(2)$, we may assume the normal bundle of $M$ has $\mathop{\mathrm{O}}(2)$ structure group, and hence that the total space of the normal bundle is smooth. 
The closed tubular neighbourhood $\overline{\nu} M$, which has the structure of a $D^2$-bundle $\pi \colon \overline{\nu}M \xrightarrow{} M$, therefore has the structure of a smooth manifold with corners, with the property that $M \hookrightarrow \overline{\nu} M$ is a smooth map. The corner set gives rise to a decomposition $$\partial \overline{\nu} M = E \cup \pi^{-1}(\partial M),$$ such that $E$ and $\pi^{-1}(\partial M)$ become smooth 4-manifolds with boundary. We have a smooth structure on a collar neighbourhood of $\partial N$ in $N$. Next, we choose a topological bicollar neighbourhood of $E$ in $N$ and endow it with a smooth structure that is compatible with the given smooth structure on a neighbourhood of $\partial N$. Choose the collar neighbourhood of $E$ in $\overline{\nu} M$ to be smooth with respect to the smooth structure on $\overline{\nu}M$. For the outside collar of $E$ into $W_f$, first consider that given a collar, we obtain a product smooth structure on that collar induced from the smooth structure of $E$. In a neighbourhood of $\partial N$, choose the outside collar of $E$ so that the resulting smooth structure is compatible with the smooth structure we already have near $\partial N$. We can do this because $f$ is smooth near $\partial Y$. Then extend the collar to the rest of $E$. Now extend the smooth structure on $E$ to its bicollar as a product structure. Since we chose collars carefully to arrange for the two smooth structures on the bicollar of $E$ and the collar of $\partial N$ to be compatible, we obtain a smooth structure on a neighbourhood of $\partial N\cup E$. We obtain as well a smooth structure on a neighbourhood of $\partial W_f$ in $W_f$ which is the smoothly compatible collar neighbourhood of $\partial N\setminus\mathop{\mathrm{Int}}(\pi^{-1}(\partial M))$ and the part of the bicollar neighbourhood of $E$ that lies in $W_f$. As such, the neighbourhood of $\partial W_f$ is a smooth manifold with corners, with corner set $\partial\pi^{-1}(\partial M)$. By [Theorem 6](#thm:smoothing-thy-main-thm){reference-type="ref" reference="thm:smoothing-thy-main-thm"} we therefore have an obstruction $$\operatorname{ks}(N, \partial\overline{\nu}M\cup\partial W_f)\in H^4(N,\partial\overline{\nu}M\cup\partial W_f)$$ to extending this smooth structure to all of $N$. It will be shown in [Proposition 13](#prop:ks-lies-in-A){reference-type="ref" reference="prop:ks-lies-in-A"} below that $\operatorname{ks}(N, \partial\overline{\nu}M\cup\partial W_f)$ is determined by $$\operatorname{ks}(\overline{\nu}M, \partial\overline{\nu}M)\in H^4( \overline{\nu}M, \partial\overline{\nu}M) \text{ and } \operatorname{ks}(W_f,\partial W_f)\in H^4(W_f,\partial W_f;\mathbb{Z}/2).$$ Since the smooth structure on $E$ was obtained by restricting a structure on $\overline{\nu} M$, it follows that $\operatorname{ks}(\overline{\nu} M, \partial \overline{\nu} M)=0$. Thus, $\operatorname{ks}(N, \partial\overline{\nu}M\cup\partial W_f)$ is determined by $\operatorname{ks}(W_f,\partial W_f)$. If the latter vanishes, then so does $\operatorname{ks}(N, \partial\overline{\nu}M\cup\partial W_f)$. We also note that $\operatorname{ks}(N,\partial N)=0$, because $N$ is a smooth manifold. Our goal will therefore be to modify $f$ by a small homotopy to arrange for $\operatorname{ks}(W_f,\partial W_f)=0$. To begin, we record a homology computation in the next lemma. **Lemma 12**. 
*The homology of $E$ satisfies $H_1(E;\mathbb{Z}/2) \cong \oplus^m_{i=1} (H_1(M_i;\mathbb{Z}/2)\oplus B_i)$, where $B_i$ is a quotient of $\mathbb{Z}/2$, and may depend on $i$. If $B_i$ is nontrivial then it is generated by a meridian of $M_i$.* *Proof.* Since $S^1\hookrightarrow E_i\rightarrow M_i$ is a fibration, $M_i$ is path connected, and $\pi_1 (M_i)$ acts trivially on $H_*(S^1;\mathbb{Z}/2)$, we will use the Leray-Serre spectral sequence to compute $H_1(E_i;\mathbb{Z}/2)$. We have $$E^2_{p,q}\cong H_p(M_i;H_q(S^1;\mathbb{Z}/2)) \cong \begin{cases} H_p(M_i;\mathbb{Z}/2) & q=0,1 \\ 0 & \mathrm{otherwise}. \end{cases}$$ and $E^3_{p,q}=E^\infty_{p,q}$. Since the coefficient group is a field, the extension problem is trivial and $$\begin{aligned} H_1(E_i;\mathbb{Z}/2)&\cong E^\infty_{1,0}\oplus E^\infty_{0,1}\cong H_1(M_i;\mathbb{Z}/2)\oplus \mathbb{Z}/2/\operatorname{Im}(d^2 \colon E^2_{2,0}\rightarrow E^2_{0,1}) \\ &\cong H_1(M_i;\mathbb{Z}/2)\oplus B_i.\end{aligned}$$ It follows that $H_1(E;\mathbb{Z}/2) \cong \bigoplus^m_{i=1} H_1(M_i;\mathbb{Z}/2)\oplus B_i$. The $B_i$ are quotients of the terms on the $E^2$-page $H_0(M_i;H_1(S^1;\mathbb{Z}/2)) \cong H_1(S^1;\mathbb{Z}/2)$, and so if $B_i$ is nontrivial it is generated by a meridian to $M_i$, as asserted. ◻ Whether or not $B_i$ is trivial depends on the differential $d^2$. It will not be important for our later proofs whether $B_i$ is nontrivial, and so we do not include an investigation of this. Let $A \subseteq H^4(W_f,\partial W_f;\mathbb{Z}/2)$ be the subgroup generated by $\{PD^{-1}[\mu_i]\}_{i=1}^m$, where $[\mu_i] \in H_1(W_f;\mathbb{Z}/2)$ is the class represented by a meridian to $M_i$, and $PD$ denotes the Poincaré-Lefschetz duality isomorphism. That is, writing $\iota \colon E \to W_f$ for the inclusion map, $A$ is by definition the subgroup of $H^4(W_f,\partial W_f;\mathbb{Z}/2)$ Poincaré dual to $\oplus_{i=1}^m \iota(B_i) \subseteq H_1(W_f;\mathbb{Z}/2)$. **Proposition 13**. *The Kirby-Siebenmann obstruction $\operatorname{ks}(N, \partial\overline{\nu}M\cup\partial W_f)\in H^4(N,\partial\overline{\nu}M\cup\partial W_f)$ is determined by $\operatorname{ks}(W_f,\partial W_f) \in H^4(W_f,\partial W_f;\mathbb{Z}/2)$. Moreover, $\operatorname{ks}(W_f,\partial W_f)$ lies in the subgroup $A$.* *Proof.* All homology and cohomology in this proof will be with $\mathbb{Z}/2$ coefficients, and so to save space we omit them from the notation. Decompose the pair $(N, \partial\overline{\nu}M\cup\partial W_f)$ as $$( \overline{\nu}M, \partial\overline{\nu}M) \cup (W_f,\partial W_f).$$ The intersections are $\overline{\nu}M \cap W_f = E = \partial\overline{\nu}M \cap \partial W_f$. Consider the relative cohomology Mayer-Vietoris sequence [@Hatcher p. 204]: $$\cdots \to H^{n-1}(E,E) \to H^n(N, \partial\overline{\nu}M\cup\partial W_f)\to H^n( \overline{\nu}M, \partial\overline{\nu}M) \oplus H^n(W_f,\partial W_f)\to H^n(E,E)\to\cdots$$ Taking $n=4$ and observing that $H^i(E,E) =0$ for all $i$, we deduce that $$H^4(N, \partial\overline{\nu}M\cup\partial W_f)\cong H^4( \overline{\nu}M, \partial\overline{\nu}M)\oplus H^4(W_f,\partial W_f)$$ where this isomorphism has coordinates the two restrictions to $(\overline{\nu}M, \partial\overline{\nu}M)$ and $(W_f,\partial W_f)$. 
Therefore, by  [Proposition 8](#prop:naturality-corner-5dim){reference-type="ref" reference="prop:naturality-corner-5dim"} applied twice, once to the inclusion $\overline{\nu}M \hookrightarrow N$ and once to the inclusion $W_f \hookrightarrow N$, this isomorphism sends $\operatorname{ks}(N, \partial\overline{\nu}M\cup\partial W_f) \in H^4(N, \partial\overline{\nu}M\cup\partial W_f)$ to $$\begin{aligned} (\operatorname{ks}(\overline{\nu}M, \partial\overline{\nu}M), \operatorname{ks}(W_f,\partial W_f)) = (0, \operatorname{ks}(W_f,\partial W_f)) \in H^4( \overline{\nu}M, \partial\overline{\nu}M)\oplus H^4(W_f,\partial W_f).\end{aligned}$$ Here we use that $\operatorname{ks}(\overline{\nu}M, \partial\overline{\nu}M)=0$, which as mentioned above holds because our chosen smooth structure on $\partial\overline{\nu}M$ was obtained by restricting a structure on $\overline{\nu} M$. This proves the first statement of the proposition. To prove the second sentence, consider the following diagram: $$\begin{tikzcd}[column sep = tiny] H^3(\partial\overline{\nu}M\cup\partial W_f,\partial N) \ar[r] \ar[d,"\cong"] & H^4(N,\partial\overline{\nu}M\cup\partial W_f) \ar[r,"j^*"] \ar[d,"\cong"] & H^4(N,\partial N) \\ H^3(E,\partial E)\ar[r] \ar[d,"\cong"] & H^4( \overline{\nu}M, \partial\overline{\nu}M)\oplus H^4(W_f,\partial W_f) \ar[d,"\cong"] \\ H_1(E) \ar[r,"k"] & H_1(\overline{\nu}M)\oplus H_1(W_f) & \end{tikzcd}$$ where the upper row is an excerpt from the cohomology long exact sequence of the triple $\partial N\subseteq \partial\overline{\nu}M\cup\partial W_f\subseteq N$, the top left vertical isomorphism is by excision and the bottom vertical isomorphisms use Poincaré--Lefschetz duality. Let $(0, \gamma)\in H_1(\overline{\nu}M)\oplus H_1(W_f)$ be the Poincaré-Lefschetz dual of $$(\operatorname{ks}(\overline{\nu}M, \partial\overline{\nu}M), \operatorname{ks}(W_f,\partial W_f))= (0,\operatorname{ks}(W_f,\partial W_f)).$$ Since $$j^*(\operatorname{ks}(N, \partial\overline{\nu}M\cup\partial W_f))=\operatorname{ks}(N, \partial N)=0,$$ by [Theorem 6](#thm:smoothing-thy-main-thm){reference-type="ref" reference="thm:smoothing-thy-main-thm"} [\[item:4-smoothing-thy\]](#item:4-smoothing-thy){reference-type="eqref" reference="item:4-smoothing-thy"} and the fact that $N$ is smooth, it follows from exactness of the top row and commutativity of the diagram that $(0,\gamma) \in \operatorname{Im}k$. The map $k \colon H_1(E) \to H_1(\overline{\nu}M) \oplus H_1(W_f)$ is induced by the inclusions $\kappa_1 \colon E \hookrightarrow \overline{\nu}M$ and $\kappa_2 \colon E \hookrightarrow W_f$. By [Lemma 12](#lemma:homology-computation-boundary){reference-type="ref" reference="lemma:homology-computation-boundary"}, $$H_1(E) \cong \oplus^m_{i=1} (H_1(M_i)\oplus B_i) \cong H_1(M) \oplus^m_{i=1} B_i.$$ Let $$(\alpha,\beta_1,\dots,\beta_m) \in H_1(M) \oplus^m_{i=1} B_i$$ be such that $k(\alpha,\beta_1,\dots,\beta_m) = (0,\gamma) \in H_1(\overline{\nu}M)\oplus H_1(W_f)$. Note that $\kappa_1|_{B_i} =0$ and $\kappa_1|_{H_1(M)}$ is an isomorphism. Since $\kappa_1|_{B_i} =0$ it follows that $\kappa_1(\alpha,\beta_1,\dots,\beta_m) = \kappa_1|_{H_1(M)}(\alpha)$. Since $\kappa_1(\alpha,\beta_1,\dots,\beta_m)=0$, we have that $\kappa_1|_{H_1(M)}(\alpha)=0$. Using that $\kappa_1|_{H_1(M)}$ is an isomorphism, we deduce that $\alpha=0$. Thus $PD(\operatorname{ks}(W_f,\partial W_f)) = (0,\beta_1,\dots,\beta_m) \in \operatorname{Im}(\oplus_{i=1}^m B_i)$ is a sum of meridians of the connected components $M_i$ of $M$. 
It follows that $\operatorname{ks}(W_f,\partial W_f)$ lies in $A$, as desired. ◻ Let $L \cong S^3 \subseteq S^5$ denote Lashof's non-smoothable 3-knot (see [2.2](#section:lashofs-knot){reference-type="ref" reference="section:lashofs-knot"}). Write $$\operatorname{ks}(W_f,\partial W_f) = \sum_{i=1}^m a_i (PD^{-1}[\mu_i]),$$ for $a_i \in \mathbb{Z}/2$ defined by this equality and the stipulation that we take $a_i=0$ if $PD^{-1}[\mu_i]=0$ in $H^4(W_f,\partial W_f;\mathbb{Z}/2)$. If $a_i =1$ then we form a connected sum $M_i \# L$ in an arbitrarily small $5$-ball, while if $a_i =0$ we leave $M_i$ alone. Let $g\colon Y\hookrightarrow N$ denote the resulting embedding. Define $$\mathcal{I}:= \{i \mid a_i =1\} \subseteq \{1,\dots,m\}.$$ Let $W_g := N \setminus\nu g(Y)$. Note that $$W_g\cong W_f\cup_{\sqcup_{\mathcal{I}} (S^1\times D^3)_i} \bigsqcup_{\mathcal{I}} E_{L_i}.$$ That is, $W_f$ and $\sqcup_i E_{L_i}$ attached along $\sqcup_{\mathcal{I}} (S^1\times D^3)_i$, where $(S^1\times {0})_i$ is identified with a meridian to $M_i$ and a meridian to $L_i$, for each $i \in \mathcal{I}$, and we extend to a tubular neighbourhood $(S^1 \times D^3)_i$ in $\partial W_g$ and $\partial E_{L_i}$ respectively. Also, note that $$\partial W_g= \Big(\partial W_f \setminus\sqcup_{\mathcal{I}} (S^1 \times \mathring{D}^3)_i\Big) \cup_{\sqcup_{\mathcal{I}} (S^1 \times S^2)_i} \Big( \bigsqcup_{\mathcal{I}} \partial E_{L_i} \setminus\sqcup_{\mathcal{I}} (S^1 \times \mathring{D}^3)_i\Big)$$ Hence, $\partial W_g\cup(\sqcup_{\mathcal{I}} (S^1\times D^3)_i)$ decomposes as $\partial W_f\cup \bigcup_{\mathcal{I}} \partial E_{L_i}$. [\[figure:W-g\]](#figure:W-g){reference-type="ref" reference="figure:W-g"} shows an illustration of $W_g$ when one Lashof knot is attached. **Proposition 14**. *We have that $\operatorname{ks}(W_g,\partial W_g) =0 \in H^4(W_g,\partial W_g;\mathbb{Z}/2)$.* *Proof.* All homology and cohomology in this proof will be with $\mathbb{Z}/2$ coefficients, and so to save space we omit them from the notation. Recall that $\operatorname{ks}(W_f,\partial W_f) = \sum_{\mathcal{I}} PD^{-1}[\mu_i] \in H^4(W_f,\partial W_f)$. From the relative cohomology Mayer-Vietoris sequence of the pair $$(W_g,\partial W_g\cup(\sqcup_{\mathcal{I}} (S^1\times D^3)_i))=(W_f, \partial W_f)\cup(\sqcup_{\mathcal{I}} E_{L_i},\sqcup_{\mathcal{I}}\partial E_{L_i}),$$ using that $W_f\cap(\sqcup_{\mathcal{I}} E_{L_i})=\partial W_f\cap(\sqcup_{\mathcal{I}}\partial E_{L_i})=\sqcup_{\mathcal{I}} (S^1\times D^3)_i$, we get via an argument similar to that in the proof of [Proposition 13](#prop:ks-lies-in-A){reference-type="ref" reference="prop:ks-lies-in-A"}, that $$H^4 (W_g,\partial W_g\cup(\sqcup_{\mathcal{I}} (S^1\times D^3)_i))\cong H^4(W_f, \partial W_f)\oplus_{\mathcal{I}} H^4 (E_{L_i},\partial E_{L_i}).$$ Hence the image of $\operatorname{ks}(W_g,\partial W_g\cup(\sqcup_{\mathcal{I}} (S^1\times D^3)_i))$ under this isomorphism is $$(\operatorname{ks}(W_f, \partial W_f),\operatorname{ks}(E_{L_1},\partial E_{L_1}),\dots,\operatorname{ks}(E_{L_k},\partial E_{L_k})) \in H^4(W_f, \partial W_f)\oplus_{\mathcal{I}} H^4 (E_{L_i},\partial E_{L_i}).$$ Recall that by [Lemma 10](#lemma-ks-of-L-nonzero){reference-type="ref" reference="lemma-ks-of-L-nonzero"} we have that $\operatorname{ks}(E_{L_i},\partial E_{L_i}) =1$ for each $i \in \mathcal{I}$, represented by $PD^{-1}[\mu_{L_i}]$, the Poincaré dual to a meridian of $L_i$. 
Consider the following diagram: $$\begin{tikzcd}[column sep = tiny] H^3(\partial W_g\cup(\sqcup_{\mathcal{I}} (S^1\times D^3))_i,\partial W_g) \ar[r] \ar[d,"\cong"] & H^4(W_g, \partial W_g\cup(\sqcup_{\mathcal{I}} (S^1\times D^3)_i)) \ar[r,"j^*"] \ar[d,"\cong"] & H^4(W_g,\partial W_g)\ar[d,"="] \\ H^3(\sqcup_{\mathcal{I}} (S^1\times D^3)_i,\sqcup_{\mathcal{I}} (S^1\times S^2)_i)\ar[r] \ar[d,"\cong"] & H^4(W_f, \partial W_f)\oplus_{\mathcal{I}} H^4(E_{L_i},\partial E_{L_i})\ar[r]\ar[d,"\cong"] & H^4(W_g,\partial W_g)\ar[d,"\cong"]\\ H_1(\sqcup_{\mathcal{I}} (S^1\times D^3)_i)\cong\oplus_{\mathcal{I}} H_1((S^1\times {0})_i) \ar[r,"{\Phi}"] & H_1(W_f)\oplus_{\mathcal{I}} H_1(E_{L_i})\ar[r] & H_1(W_g) & \end{tikzcd}$$ The upper row is an excerpt from the cohomology long exact sequence of the triple $$\partial W_g\subseteq \partial W_g\cup(\sqcup_{\mathcal{I}} (S^1\times D^3))_i\subseteq W_g,$$ the top left vertical isomorphism is by excision and the bottom vertical isomorphisms use Poincaré--Lefschetz duality. By naturality of the Kirby-Siebenmann obstruction ([Proposition 8](#prop:naturality-corner-5dim){reference-type="ref" reference="prop:naturality-corner-5dim"} applied twice), the upper middle vertical isomorphism sends $$\kappa := \operatorname{ks}(W_g, \partial W_g\cup(\sqcup_{\mathcal{I}} (S^1\times D^3)_i))$$ to $$\big(\operatorname{ks}(W_f, \partial W_f),\operatorname{ks}(E_{L_1},\partial E_{L_1}),\dots,\operatorname{ks}(E_{L_k},\partial E_{L_k})\big) \in H^4(W_f, \partial W_f)\oplus_{\mathcal{I}} H^4(E_{L_i},\partial E_{L_i}).$$ On the other hand by [Theorem 6](#thm:smoothing-thy-main-thm){reference-type="ref" reference="thm:smoothing-thy-main-thm"} [\[item:4-smoothing-thy\]](#item:4-smoothing-thy){reference-type="eqref" reference="item:4-smoothing-thy"}, $j^*(\kappa) = \operatorname{ks}(W_g,\partial W_g) \in H^4(W_g,\partial W_g)$. By commutativity of the top right square it follows that $$\big(\operatorname{ks}(W_f, \partial W_f),\operatorname{ks}(E_{L_1},\partial E_{L_1}),\dots,\operatorname{ks}(E_{L_k},\partial E_{L_k})\big)$$ maps under the right hand map of the middle row to $$\operatorname{ks}(W_g,\partial W_g) \in H^4(W_g,\partial W_g).$$ By commutativity of the bottom right square of the diagram, the Poincaré-Lefschetz dual of the former, $(\sum_{\mathcal{I}}[\mu_i],\sum_{\mathcal{I}}[\mu_{L_i}]) \in H_1(W_f)\oplus_{\mathcal{I}} H_1(E_{L_i})$, is sent to $$\gamma := PD\big(\operatorname{ks}(W_g,\partial W_g)\big) \in H_1(W_g),$$ the Poincaré-Lefschetz dual of $\operatorname{ks}(W_g,\partial W_g)\in H^4(W_g,\partial W_g)$. Note that the bottom row is the Mayer-Vietoris homology sequence of the decomposition $W_f\cup_{\mathcal{I}} E_{L_i}=W_g$, and for each $i \in \mathcal{I}$ we have $\Phi([S^1\times {0}]_i)=([\mu_i], [\mu_{L_i}])$, where $[S^1\times {0}]_i$ is the generator of $H_1((S^1\times {0})_i)$. Hence by linearity, $$\Phi\big(\sum_{\mathcal{I}} [S^1\times {0}]_i\big)=\big(\sum_{\mathcal{I}} [\mu_i],\sum_{\mathcal{I}}[\mu_{L_i}]\big).$$ Thus $\gamma=0$ by exactness, and since $PD$ is an isomorphism it follows that $\operatorname{ks}(W_g,\partial W_g)=0$. ◻ The main result of this section follows. *Proof of Proposition [Proposition 11](#prop:exotic-embedding){reference-type="ref" reference="prop:exotic-embedding"}.* Since $\operatorname{ks}(W_g,\partial W_g) =0$, we can extend the standard smooth structure on $\partial N \cup \overline{\nu} g(Y)$ to all of $N$. Call the resulting smooth structure $\sigma$. 
By construction, $g(Y)$ is smooth in $\sigma$, and $\sigma$ agrees with the given smooth structure of $N$ near $\partial N$. Each connected sum of $M_i$ with $L_i$ can be done arbitrarily close to $M_i=f(Y_i)$, so we can assume that we altered $f$ by an arbitrarily small homotopy. ◻ # Comparing with the standard smooth structure on $N$ {#section:comparing-with-std} Next, we need to compare the smooth structure $\sigma$ we have just constructed with the given smooth structure $\mathop{\mathrm{std}}$ on $N$. The submanifold $g(Y)$ is smooth in $\sigma$, but is a priori not smooth in $\mathop{\mathrm{std}}$. We aim to reduce to a finite collection of local problems, namely neighbourhoods $V_i \subseteq \mathop{\mathrm{Int}}N$ where $g(Y)$ need not be smooth in $\mathop{\mathrm{std}}$. Then we will apply the argument that all 2-knots are smoothly slice [@Kervaire-slice-knots; @Sunuk] to further modify $g(Y)$ in each of these neighbourhoods $V_i$, replacing $g(Y) \cap V_i$ with a slice disc for $g(Y) \cap \partial V_i \cong S^2$ that is smooth in the structure $\mathop{\mathrm{std}}$. Our aim is the following proposition, which proves Step 2 from the introduction. The combination of [Proposition 11](#prop:exotic-embedding){reference-type="ref" reference="prop:exotic-embedding"} and [Proposition 15](#prop:smooth-embedding-body){reference-type="ref" reference="prop:smooth-embedding-body"} proves [Theorem 1](#thm:main-intro){reference-type="ref" reference="thm:main-intro"}. **Proposition 15**. *Let $N$ be a compact, connected, smooth 5-dimensional manifold with $($possibly empty$)$ boundary, let $Y$ be a compact 3-dimensional manifold with $($possibly empty$)$ boundary, and let $g\colon Y\to N_\sigma$ be a smooth embedding for some $\sigma$ such that $\sigma$ and $\mathop{\mathrm{std}}$ agree near $\partial N$. Then $g$ is homotopic rel. boundary, via an arbitrarily small homotopy, to a smooth embedding in $N_{\mathop{\mathrm{std}}}$.* To begin, recall that the structures $\sigma$ and $\mathop{\mathrm{std}}$ correspond via smoothing theory ([Theorem 6](#thm:smoothing-thy-main-thm){reference-type="ref" reference="thm:smoothing-thy-main-thm"}) to two lifts $\sigma, \mathop{\mathrm{std}}\colon N \to \mathop{\mathrm{BO}}$ of $\tau_{N} \colon N \to \mathop{\mathrm{BTOP}}$. The difference between these lifts gives rise to a map $N \to \mathop{\mathrm{TOP}}/\mathop{\mathrm{O}}$, and hence to an element $\operatorname{ks}(\sigma,\mathop{\mathrm{std}}) \in H^3(N,\partial N ; \mathbb{Z}/2) \cong H_2(N;\mathbb{Z}/2)$. In this section we redefine $M:= g(Y)$. **Lemma 16**. *The class $PD(\operatorname{ks}(\sigma,\mathop{\mathrm{std}})) \in H_2(N;\mathbb{Z}/2)$ can be represented by a closed surface $S \subseteq \mathop{\mathrm{Int}}N$, which is smoothly embedded in $\sigma$ and is transverse to $M:= g(Y)$.* *Proof.* We consider the group $\mathcal{N}_2(N)$ of unoriented surfaces mapping to $N$, up to bordism. The Atiyah-Hirzebruch spectral sequence for this has $E^2$-page $$E^2_{p,q} \cong H_p(N;\mathcal{N}_q).$$ The unoriented bordism groups are given [@Thom-cobordism] in the range $q \in \{0,1,2\}$ by $\mathcal{N}_0 \cong \mathbb{Z}/2 \cong \mathcal{N}_2$, and $\mathcal{N}_1=0$.
Using that the $q=1$ row on the $E^2$-page consists entirely of zeros, we have an exact sequence $$H_3(N;\mathbb{Z}/2) \xrightarrow{d^3_{3,0}} H_0(N;\mathbb{Z}/2) \to\mathcal{N}_2(N) \to H_2(N;\mathbb{Z}/2) \to 0.$$ In particular every element of $H_2(N;\mathbb{Z}/2)$ lifts to $\mathcal{N}_2(N)$, and so can be represented by a map $h \colon \Sigma \to N$ from some closed surface $\Sigma$ into $N$. By [@Hirsch-diff-top Theorem 2.2.6] we can approximate $h$ by a smooth map in $[N]_{\sigma}$, and by [@Hirsch-diff-top Theorems 2.2.12 and 2.2.14] we can approximate the result by an embedding, $h' \colon \Sigma \to N$. We write $S:= h'(\Sigma)$. Since both $S$ and $M$ are smooth in $\sigma$, we apply transversality to complete the proof. ◻ By Proposition [Proposition 9](#prop:sigma-pi-agree-away-from-S){reference-type="ref" reference="prop:sigma-pi-agree-away-from-S"}, by an arbitrarily small isotopy of $\sigma$ away from $S$ and $\partial N$, and hence of $M \cap (N \setminus S)$, we can assume that the smooth structures $\sigma$ and $\mathop{\mathrm{std}}$ agree in the complement of the surface $S$. Replace $M$ and $\sigma$ by the outcomes of this isotopy. Let $\nu S$ denote a smooth open tubular neighbourhood of $S$ in the smooth structure $\sigma$. We have that $M\setminus\nu S$ is smooth in $[N \setminus\nu S]_{\mathop{\mathrm{std}}}$. By compactness and transversality, $S \pitchfork M$ consists of finitely many points, $p_1,\dots,p_n$ say. Moreover, the intersection $M \cap \partial \overline{\nu} S$ consists of a copy of $S^2$ for each point $p_i \in S \pitchfork M$, which bounds a 3-ball $D^3_i \subseteq M \cap \overline{\nu} S$ with the centre of $D^3_i$ equal to $p_i$. In fact the intersection $M \cap \overline{\nu} S$ comprises exactly $\bigcup_{i=1}^n D^3_i$; the $D^3_i$ are pairwise disjoint. Since $D^3_i$ is locally flat and codimension 2, it has a normal bundle [@KS-normal-bundles-codim-2]. We take a normal bundle of each $D^3_i$ in $\overline{\nu} S$. We obtain an inclusion of pairs $$(D^3_i \times \mathbb{R}^2,S^2 \times \mathbb{R}^2) \subseteq (\overline{\nu} S, \partial \overline{\nu} S).$$ Pull back the smooth structure $\mathop{\mathrm{std}}$ to this to obtain $$V_i := [D^3_i \times \mathbb{R}^2]_{\mathop{\mathrm{std}}}.$$ This $V_i$ is a smooth manifold that is homeomorphic to $D^3 \times \mathbb{R}^2$, with boundary $\partial V_i$ identified with $S^2 \times \mathbb{R}^2$. In the boundary, $M \cap \partial V_i$ is a 2-sphere $T_i$ that is identified with $S^2 \times \{0\} \subseteq S^2 \times \mathbb{R}^2$. We remark that $(V_i,\partial V_i)$ may not be diffeomorphic rel. boundary to $(D^3 \times \mathbb{R}^2,S^2 \times \mathbb{R}^2)$. In addition while $M \cap V_i$ is smooth in $\sigma$, this need not be the case in $\mathop{\mathrm{std}}$. **Lemma 17**. *The 2-sphere $T_i \subseteq \partial V_i$ bounds a compact, orientable 3-manifold $Z_i$ smoothly embedded in $V_i = [D^3_i \times \mathbb{R}^2]_{\mathop{\mathrm{std}}}$.* *Proof.* Consider the sequence of maps $$f \colon S^2 \times D^2 \hookrightarrow S^2 \times \mathbb{R}^2 \to \mathbb{R}^2 \xrightarrow{\cong} S^2 \setminus\{*\} \hookrightarrow S^2 \hookrightarrow \mathbb{CP}^2 \hookrightarrow \mathbb{CP}^{\infty}.$$ These are given respectively by the inclusion, the projection, the inverse of stereographic projection, the inclusion, identification with $\mathbb{CP}^1$, and the standard inclusion again. 
Choose another embedding $\mathbb{CP}^1 \to \mathbb{CP}^2$ that intersects our original $\mathbb{CP}^1$ transversely in exactly the image of $\{0\} \in \mathbb{R}^2$ under $\mathbb{R}^2 \xrightarrow{\cong} S^2 \setminus\{*\} \hookrightarrow S^2 \xrightarrow{\cong} \mathbb{CP}^1 \hookrightarrow \mathbb{CP}^2$. Let $\ell \colon \mathbb{CP}^1 \to \mathbb{CP}^\infty$ denote the composition of this embedding with the inclusion $\mathbb{CP}^2 \hookrightarrow \mathbb{CP}^\infty$. We observe that $f^{-1}(\ell(\mathbb{CP}^1)) = T_i$. Let $V_i' := D^3 \times D^2 \subseteq D^3 \times \mathbb{R}^2$. We seek an extension of the form: $$\begin{tikzcd} S^2 \times D^2 \ar[r,"f"] \ar[d] & \mathbb{CP}^{\infty} \\ V_i' \ar[ur,dashed,"F"'] & \end{tikzcd}$$ Since $\mathbb{CP}^{\infty} \simeq K(\mathbb{Z},2)$, there is a unique obstruction in $$H^3(D^3 \times D^2,S^2 \times D^2;\pi_2(\mathbb{CP}^{\infty})) \cong H^3(D^3,S^2;\pi_2(\mathbb{CP}^{\infty})) \cong H^3(D^3,S^2;\mathbb{Z}) \cong \mathbb{Z}$$ to extending $f$ to $F$. However, the boundary of the $D^3$ in question is $T_i \subseteq S^2 \times D^2$. The image $f(S^2 \times \{0\})$ is a point in $\mathbb{CP}^{\infty}$, which represents the trivial element in $\pi_2(\mathbb{CP}^\infty) \cong \mathbb{Z}$. As a result the obstruction cocycle is trivial, and hence the obstruction cohomology class is too. Thus we obtain a map $F \colon V_i' \to \mathbb{CP}^\infty$ as desired. Using that $V_i'$ is 5-dimensional, homotope $F$ rel. $F|_{S^2 \times D^2} = f$ to a map $F'$ with image in $\mathbb{CP}^2$. We can and shall assume, by perturbing $F'$ further if necessary, that the inverse image of $\ell(\mathbb{CP}^1) \subseteq \mathbb{CP}^2$ lies in the interior of $V_i'$. Next we perturb $F'$ to be smooth in the smooth structure on the interior of $V_i'$ induced by $\mathop{\mathrm{std}}$, and so that $F'$ is transverse to $\ell(\mathbb{CP}^1)$. By making the perturbation sufficiently small, we can assume that the inverse image still lies in $V_i'$. The inverse image of $\ell(\mathbb{CP}^1)$ is thus a smooth 3-manifold $Z_i$ in $V_i$ with boundary $S^2 \times \{0\} \subseteq \partial V_i$. As it is the inverse image of a closed set, $Z_i$ is closed, and since $Z_i \subseteq V_i'$ and $V_i'$ is compact, we see that $Z_i$ is compact. Since $V_i$ is orientable, $w_1(Z_i) = w_1(\nu Z_i)$. However $w_1(\nu Z_i)$ is zero because $\nu Z_i$ can be obtained as the pullback of the normal bundle of $\ell(\mathbb{CP}^1) \subseteq \mathbb{CP}^2$, and $w_1(\nu \mathbb{CP}^1)$ is necessarily trivial since $H^1(\mathbb{CP}^1;\mathbb{Z}/2)=0$. It follows that $w_1(Z_i)=0$ and so $Z_i$ is orientable. Recalling that $V_i' \subseteq V_i$, we see that we have constructed the 3-manifold $Z_i \subseteq V_i$ that we desire. ◻

Now we can prove Proposition [Proposition 15](#prop:smooth-embedding-body){reference-type="ref" reference="prop:smooth-embedding-body"}, which is the goal of this section. *Proof of Proposition [Proposition 15](#prop:smooth-embedding-body){reference-type="ref" reference="prop:smooth-embedding-body"}.* To prove the proposition, it remains to find, for each $i=1,\dots,n$, a smooth slice disc $D^3 \subseteq V_i$ with $\partial D^3 = T_i = S^2 \times \{0\} \subseteq S^2 \times \mathbb{R}^2 = \partial V_i$. By Lemma [Lemma 17](#lemma:finding:Y-s){reference-type="ref" reference="lemma:finding:Y-s"}, for each $i$ we have a smooth, compact, orientable 3-manifold $Z_i$ with $\partial Z_i = T_i$. Since $Z_i$ is orientable and 3-dimensional, it is parallelisable, and thus is in particular spin. 
We now apply the argument of Sunukjian [@Sunuk] from his Section 5 and the proof of his Theorem 6.1. As mentioned in the introduction, this is similar to and was inspired by Kervaire's theorem [@Kervaire-slice-knots] that every 2-knot is slice. For the convenience of the reader we give an outline here. First perform ambient 1-surgeries on $Z_i$ to arrange that $\pi_1(V_i \setminus Z_i)$ is cyclic. By [@Sunuk Proposition 5.1 and Lemma 5.2], there is a spin structure on $Z_i$ such that every spin structure preserving surgery on $Z_i$ can be performed ambiently. Here we use that $\pi_1(V_i \setminus Z_i)$ is cyclic, so that every circle in $Z_i$ bounds an embedded 2-disc whose interior lies in $V_i \setminus Z_i$. Using this spin structure, the union $Z_i \cup D^3$ is a closed, smooth, spin 3-manifold. The group $\Omega_3^{\operatorname{Spin}} =0$, so $Z_i \cup D^3$ is spin null-bordant. By [@Sunuk Lemma 5.4], there is a sequence of spin structure compatible surgeries on circles in $Z_i$ that convert it to $D^3$. Perform these surgeries ambiently, and obtain a smoothly embedded $D^3 \subseteq [V_i]_{\mathop{\mathrm{std}}}$, as desired, in the restriction to $V_i$ of the smooth structure $\mathop{\mathrm{std}}$. Replacing $M \cap V_i$ with this 3-ball, for each $i$, yields a smooth embedding $g'\colon Y\hookrightarrow N$ in the smooth structure $\mathop{\mathrm{std}}$. By making the $V_i$ as small as we please, and using that $\pi_3(V_i,\partial V_i)=0$, we can arrange that we changed $g$ by an arbitrarily small homotopy. ◻ As mentioned above, Propositions [Proposition 11](#prop:exotic-embedding){reference-type="ref" reference="prop:exotic-embedding"} and [Proposition 15](#prop:smooth-embedding-body){reference-type="ref" reference="prop:smooth-embedding-body"} combine to complete the proof of Theorem [Theorem 1](#thm:main-intro){reference-type="ref" reference="thm:main-intro"}, noting that in both cases all the modifications we made to the embedding, from $f$ to $g$ to $g'$, consisted of local homotopies or isotopies, in all cases supported outside a neighbourhood of $\partial N$. # Conditions for smoothing up to isotopy {#section:Jae-choon-suggestions} As shown by Lashof's 3-knot [@Lashof] (Section [2.2](#section:lashofs-knot){reference-type="ref" reference="section:lashofs-knot"}), it is not in general possible to isotope a locally flat embedding of a 3-manifold to a smooth embedding. Our main result shows this is possible with an arbitrarily small homotopy. Here we discuss the extent to which smoothing up to isotopy is possible. As above let $Y=Y_1\sqcup \cdots \sqcup Y_m$ be a compact 3-manifold with connected components $Y_i$, and let $N$ be a compact, connected, smooth 5-manifold. We will use the Kirby-Siebenmann invariant $\operatorname{ks}(W_f,\partial W_f) \in H^4(W_f,\partial W_f;\mathbb{Z}/2)$ of the exterior $W_f := N \setminus\nu f(Y)$, and we will use the relative Kirby-Siebenmann invariant $\operatorname{ks}(\sigma,\mathop{\mathrm{std}}) \in H^3(N,\partial N)$ comparing the smooth structure $\sigma$ on $N$ arising from Step 1 (Proposition [Proposition 11](#prop:exotic-embedding){reference-type="ref" reference="prop:exotic-embedding"}) with the given smooth structure $\mathop{\mathrm{std}}$ on $N$. These invariants were recalled in detail in Section [2](#section:smoothing-theory){reference-type="ref" reference="section:smoothing-theory"}. In practice, these invariants are not always easy to evaluate. 
One way to do this for [\[it:i-smoothing-up-to-isotopy\]](#it:i-smoothing-up-to-isotopy){reference-type="eqref" reference="it:i-smoothing-up-to-isotopy"} could be to use the ideas of Kwasik and Vogel [@Kwasik-Vogel; @Kwasik-nonsmoothable] discussed in Section [2.2](#section:lashofs-knot){reference-type="ref" reference="section:lashofs-knot"} to relate $\operatorname{ks}(W_f,\partial W_f)$ to the signature of an appropriate 4-manifold. **Scholium 18**. *Let $f\colon Y\to N$ be a locally flat proper topological embedding that is smooth near $\partial Y$.* (i) *[\[it:i-smoothing-up-to-isotopy\]]{#it:i-smoothing-up-to-isotopy label="it:i-smoothing-up-to-isotopy"} If $\operatorname{ks}(W_f,\partial W_f)=0$ then there exists a smooth structure $\sigma$ on $N$ with respect to which $f$ is smooth.* (ii) *If in addition $\langle \operatorname{ks}(\sigma,\mathop{\mathrm{std}}),[f(Y_i)]\rangle = 0 \in \mathbb{Z}/2$ for each connected component $Y_i$ of $Y$, then $f$ is topologically isotopic rel. boundary, via an arbitrarily small isotopy, to a smooth embedding.* *Proof.* If $\operatorname{ks}(W_f,\partial W_f)=0$, then Step 1 (Proposition [Proposition 11](#prop:exotic-embedding){reference-type="ref" reference="prop:exotic-embedding"}) can be completed without connect summing with any Lashof knots. We obtain a smooth structure $\sigma$ on $N$ in which $f$ is smooth, that agrees with the standard smooth structure on $N$ near $\partial N$. Now suppose $\langle \operatorname{ks}(\sigma,\mathop{\mathrm{std}}),[f(Y_i)]\rangle = 0$ for each $i=1,\dots,m$. Let $S$ be an embedded surface Poincaré dual to $\operatorname{ks}(\sigma,\mathop{\mathrm{std}})$ that intersects $f(Y)$ transversely (such an $S$ was produced in Lemma [Lemma 16](#lemma:represent-ks-by-S){reference-type="ref" reference="lemma:represent-ks-by-S"}). The condition implies, by intersection theory, that for each $i$ the count of transverse intersection points between $S$ and $f(Y_i)$ is even. For every $i$, tube $S$ to itself, along $f(Y_i)$, to obtain a new surface $S'$, in the same $\mathbb{Z}/2$-homology class, $[S] = [S'] \in H_2(N;\mathbb{Z}/2)$, and such that $S' \cap f(Y) = \emptyset$. It then follows from Proposition [Proposition 9](#prop:sigma-pi-agree-away-from-S){reference-type="ref" reference="prop:sigma-pi-agree-away-from-S"} that $f$ is isotopic to a smooth embedding in $\mathop{\mathrm{std}}$. ◻ [^1]: Note that this is a *smooth structure with corners*, which means that it is a maximal atlas in which two charts with corners $(U,\phi)$ and $(V,\theta)$ are smoothly compatible if $\phi\circ\theta^{-1}\colon\theta(U\cap V)\to\phi(U\cap V)$ admits a smooth extension to an open neighbourhood of each point. See [@Lee00].
--- abstract: | We present the Fast Chebyshev Transform (FCT), a fast, randomized algorithm to compute a Chebyshev approximation of functions in high-dimensions from the knowledge of the location of its nonzero Chebyshev coefficients. Rather than sampling a full-resolution Chebyshev grid in each dimension, we randomly sample several grids with varied resolutions and solve a least-squares problem in coefficient space in order to compute a polynomial approximating the function of interest across all grids simultaneously. We theoretically and empirically show that the FCT exhibits quasi-linear scaling and high numerical accuracy on challenging and complex high-dimensional problems. We demonstrate the effectiveness of our approach compared to alternative Chebyshev approximation schemes. In particular, we highlight our algorithm's effectiveness in high dimensions, demonstrating significant speedups over commonly-used alternative techniques. author: - "Dalton Jones[^1]" - Pierre-David Letourneau - Matthew J. Morse - M. Harper Langston bibliography: - references.bib title: A Sparse Fast Chebyshev Transform for High-Dimensional Approximation --- Chebyshev polynomial, high dimensional approximation, Discrete Cosine Transform, randomized sampling, least squares, fast algorithms # Introduction Polynomial interpolation is a fundamental building block of numerical analysis and scientific computing, often serving as the connection between continuous application domains and discrete computational domains. Further, Chebyshev polynomials are a popular basis for interpolation due to their favorable convergence properties for smooth and analytic functions, their numerical stability, their near-best approximation power [@li2004near], and a simple FFT-based interpolation procedure [@trefethen2019approximation; @trefethen2000spectral; @powell1967maximum]. As a result, Chebyshev polynomial interpolation has been used in a variety of domains, such as solving partial differential equations [@mason1967chebyshev; @trefethen2000spectral; @deville1985chebyshev], signal processing [@shuman2011chebyshev; @kabal1986computation] and optimization [@cheng1999nonlinear; @sherali2001global], and they serve as the foundation for the popular numerical computing toolkit `chebfun` [@battles2004extension]. The standard approach for interpolating a function in $D$ dimensions with a $d$ degree polynomial is to sample the target function on a tensor product grid of $(d+1)^D$ points and solve a linear system for the coefficients of a polynomial $p({\undefined{x}})$; i.e., $$p({\undefined{x}}) = \sum_{{\undefined{n}}={\undefined{0}}}^{\undefined{D}} c_{{\undefined{n}}} T_{{\undefined{n}}}({\undefined{x}}), \label{eq:chebyshev_expansion}$$ where ${\undefined{n}}= (n_1, \hdots, n_D)$ is a multi-index, ${\undefined{x}}= (x_1, \hdots, x_D) \in \mathbb{R}^D$,$T_{\undefined{n}}({\undefined{x}}) = \prod_{i=1}^D T_{n_i}(x_i)$, and $T_n(x)$ is the $n$th Chebyshev polynomial. However, as the dimension grows, the number of required samples and unknown coefficients grow exponentially, and the problem quickly becomes intractable. Due to the importance of high-dimensional function approximation, many alternative methods have been developed to overcome this *curse of dimensionality*. A more thorough review of the literature can be found in [@trefethen2017cubature], but we briefly summarize various approaches here. 
Low-rank approximations [@bebendorf2011adaptive; @hashemi2017chebfun; @townsend2013extension; @oseledets2011tensor; @grasedyck2013literature] try to reduce the problem to computing coefficients of a reduced set of separated functions that still approximate the target function well. The accuracy of low-rank approximations can be improved by increasing the number of terms in the approximation or by applying the method hierarchically on low-rank submatrices of the resulting system matrix [@bebendorf2007hierarchical; @hackbusch2012tensor]. Sparse grids [@bungartz2004sparse; @griebel2014fast; @griebel2021generalized] reduce the degrees of freedom by using a reduced tensor-product basis formed from a progressively finer hierarchy of functions. After removing basis elements with mixed derivative terms below a threshold tolerance, the remaining coefficients decay rapidly, allowing for scaling independent of dimension (up to logarithmic factors). Adaptive methods have been proposed [@conrad2013adaptive], leveraging the pseudospectral method combined with sparse grids and Smolyak's algorithm [@smolyak1963quadrature] to reach even higher dimensions. Recently, deep neural networks have demonstrated their effectiveness at approximating and interpolating high-dimensional data [@devore2021neural]. This has become more relevant in light of the "double-descent" phenomenon [@nakkiran2021deep; @belkin2019reconciling]: test accuracy actually continues to improve even after a network overfits the training data. Some progress has been made to determine the theoretical underpinning of this behavior [@allen2019convergence; @xie2022overparameterization; @belkin2019does]. Although designed for Gaussian process regression and limited to low dimensions, the most recent work that resembles our proposed approach is [@greengard2022equispaced], though our approach is more broadly scoped and applied. Standard tensor product representations for Chebyshev interpolation in high dimensions have largely fallen by the wayside due to their exponential complexity. The most relevant improvement to the baseline algorithm, while still leveraging tensor product grids, is a faster DCT algorithm that reduces the constant runtime prefactor [@chen2003fast]. Recently, however, [@trefethen2017cubature; @trefethen2017multivariate] have shown that Chebyshev coefficients decay rapidly when approximating smooth functions, but anisotropically; that is, the normal tensor product grid underresolves the diagonal of the hypercube $[-1,1]^D$ by a factor of $\sqrt{D}$. Interpolating with a bounded Euclidean degree instead of bounded total degree will address this anisotropy and recover the usual convergence behavior. Another important observation from [@trefethen2017cubature] is that, regardless of choosing a bounded total or Euclidean degree interpolant, the Chebyshev coefficients above machine precision are contained in a sphere of radius $d$ under the $s$-norm ($s=1$ for bounded total degree, $s=2$ for Euclidean). An obvious consequence of this is that by increasing the radius of this sphere in coefficient-space, we increase the accuracy of the resulting Chebyshev interpolant. In other words, to achieve a given target accuracy, $\epsilon$, we can truncate coefficients outside of this coefficient sphere of radius $d_\epsilon$ without losing accuracy. This is algorithmically important because there are exponentially fewer coefficients inside a sphere under the 1- or 2-norm than under the $\infty$-norm (i.e. 
bounded maximal degree) [@smith1989small; @trefethen2017multivariate]. In this paper, we describe an algorithm, the Fast Chebyshev Transform (FCT), whose runtime is proportional to the number of coefficients in this coefficient-space $s$-norm sphere rather than the size of a $D$-dimensional tensor-product grid. To do this, we estimate the number of significant coefficients $N$ and sample $L$ distinct tensor product Chebyshev grids with random resolutions in each dimension, such that the product of the resolutions roughly equals $N$. We then sample the target function at all of these grid points, perform separate $D$-dimensional DCTs on each grid, and solve a least-squares problem in coefficient space with the conjugate gradient (CG) method. Since the system matrix is well-conditioned, CG converges rapidly. More importantly, we can apply each matrix-vector multiplication in $O(N)$ time due to the sparsity pattern induced by an aliasing identity derived from Chebyshev polynomials. The FCT algorithm has an overall complexity of $O(N\log N)$. We limit our scope to Chebyshev approximation in this work, but FCT can be applied directly to high dimensional cosine approximation of smooth functions with minimal modifications. The remainder of this paper is structured as follows: In Section 2, we discuss relevant notation and background information. In Section 3, we discuss the mathematical formulation that underpins the Fast Chebyshev Transform, present the full algorithm, and discuss its algorithmic complexity and numerical behavior. Numerical results and comparisons are presented in Section 4, and we conclude in Section 5.

# Background and notation {#sec:preliminary}

We aim to approximate a given smooth function $f: \mathbb{R}^D \to \mathbb{R}$ by a tensor product Chebyshev polynomial in the form of the expansion introduced above. Bold script will represent vector quantities, such as the real vector ${\undefined{x}}=(x_1, \hdots, x_D)$ and integer multi-indices ${\undefined{n}}= (n_1,\hdots, n_D)$. The $n$th Chebyshev polynomial is given by $$T_{n}(x) = \cos( n \, \arccos(x)).$$ Chebyshev polynomials are orthogonal with respect to the measure $\frac{1}{\sqrt{1-x^2}}, \, x\in[-1,1]$: $$\int_{-1}^1 \frac{T_m (x) \, T_n(x) }{\sqrt{1-x^2}} dx= \left\{ \begin{array}{ll} 0 & \mbox{if } n \not = m \\ \pi & \mbox{if } n = m = 0 \\ \pi/2 & \mbox{if } n = m \geq 1 \end{array} \right.$$ and obey the recurrence relation $$T_{n+1}(x) = 2xT_n(x) - T_{n-1}(x).$$ A multi-dimensional Chebyshev polynomial defined on $\mathbb{R}^D$ is a tensor product of one dimensional Chebyshev polynomials: $$T_{\undefined{n}}({\undefined{x}}) = \prod_{i=1}^D T_{n_i}(x_i).$$ When approximating a function in $D$ dimensions by polynomials, we often consider the class of polynomials with bounded degree $d$ with respect to some norm as approximation candidates; i.e., members of the set $$\mathbb{P}^D_{d,s} = \left\{p({\undefined{x}}) = \sum_{\undefined{n}}c_{\undefined{n}}T_{\undefined{n}}({\undefined{x}}) \in {\mathbb{P}^D_d}\biggm| ||{\undefined{n}}||_{s} \leq d \right\},$$ where ${\mathbb{P}^D_d}$ is the space of polynomials $p:\mathbb{R}^D \to \mathbb{R}$ with maximum degree $d$, and $||{\undefined{n}}||_{s}$ is the $s$-norm of the vector ${\undefined{n}}$. We will refer to ${\mathbb{P}^D_{d,1}}$, ${\mathbb{P}^D_{d,2}}$ and ${\mathbb{P}^D_d}$ as polynomials with bounded total degree, Euclidean degree and maximum degree, respectively. Increasing the degree of a polynomial approximation of $f$ increases the approximation accuracy [@trefethen2019approximation]. 
This means that to approximate $f$ by a polynomial $p$ with accuracy $\epsilon$, there is a degree $d = d_{\epsilon,s}$ such that $\| f-p \| \leq \epsilon$ if $p$ is in $\mathbb{P}^D_{d,s}$. Choosing $d$ determines the total number of coefficients of basis elements with bounded exponents in the $s$-norm, i.e., $\|{\undefined{n}}\|_s \leq d$, which define $p$. We denote this set of indices by $$\mathcal{S}^s_{d,D} =\{{\undefined{n}}\in \mathbb{N}^D \,\mid\, \|{\undefined{n}}\|_s \leq d \}.$$ Known results tell us that $$\begin{aligned} |\mathcal{S}^1_{d,D} | &= \binom{d + D}{d} \quad \cite{moore2018monomial}\label{eq:N_1norm}\\ |\mathcal{S}^2_{d,D} | &\sim \frac{(\sqrt{\pi}d/2)^D}{\Gamma\left(\frac{D}{2} + 1 \right)}, \quad D \to \infty\quad \cite{smith1989small}\label{eq:N_2norm}\\ |\mathcal{S}^\infty_{d,D}| &= d^D \label{eq:N_infnorm}.\end{aligned}$$ The important aspect of $|\mathcal{S}^1_{d,D}|$ and $|\mathcal{S}^2_{d,D}|$ is that they grow much more slowly than $|\mathcal{S}^\infty_{d,D}|$ as $D$ increases. This makes these index sets more efficient to work with directly than the full tensor-product index set, while still approximating $f$ well. These values are used to discuss the computational complexity of this work; we typically choose $N = |\mathcal{S}^1_{d,D}|$ or $|\mathcal{S}^2_{d,D}|$.

#### Problem setup

One way to approximate a smooth function $f$ is to sample it on a $D$-dimensional tensor product of 1D Chebyshev grids: $$x_{k}= \cos(\theta_{k}),\quad \theta_{k} = \frac{\left(k + \frac{1}{2}\right)\pi}{N+1}, \quad k=0,\hdots, N. \label{eq:chebyshev_grid}$$ Then, apply a DCT in each dimension. This produces a polynomial interpolant of the function samples; since Chebyshev polynomials obey the following relation after a change of variables from $[-1,1]$ to $[0,\pi]$, $$\label{eq:chebeq} T_{n} (x_k) = T_{n} (\cos(\theta_k)) = \cos(n \theta_k),$$ the expansion becomes a multi-dimensional cosine series. However, as mentioned above, the size of the tensor-product grid grows exponentially with $D$, limiting this approach's practicality to low dimensions. Another common approach is to approximate $f$ with a least-squares solution: find $p$ that minimizes $\|f - p\|_2^2$. The standard way to solve this problem is to sample $f$ at a set of points, then form a matrix $B$ with entries $$B_{i,j} = T_{{\undefined{n}}^{(j)}} \left({\undefined{x}}^{(i)}\right), \label{eq:least_squares_matrix}$$ where ${\undefined{n}}^{(j)}$ and ${\undefined{x}}^{(i)}$ imply an ordering on the multi-indices and sample points. After constructing $B$, we solve the minimization problem, $$\min_{\undefined{c}}\|B{\undefined{c}}- {\undefined{f}}\|_2^2, \label{eq:least_squares_opt}$$ where ${\undefined{c}}_i = c_{{\undefined{n}}^{(i)}}$ and ${\undefined{f}}_i = f({\undefined{x}}^{(i)})$. This can be solved in many ways, either with a direct approach, such as solving the normal equations, or by using the singular value decomposition (SVD), or with an iterative solver like the conjugate gradient method (CG) [@kershaw1978incomplete]. The advantage of a least-squares-based approach is that the complexity is in terms of the number of unknown coefficients $N$, rather than an exponentially growing grid size. A disadvantage of this approach is the implicit ambiguity in choosing sample point locations: poorly sampling $f$ can produce a bad approximation or slow convergence, while provable convergence often requires exponentially many more samples [@demanet2019stable]. Another drawback is the relatively large complexity; direct methods require $O(N^3)$ work, while the CG method still requires dense matrix-vector multiplications costing $O(N^2)$ each. 
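To make the one-dimensional baseline concrete, the following is a minimal sketch in Python/NumPy (not the implementation used in this paper): sampling $f$ on the first-kind Chebyshev grid above and rescaling a type-II DCT yields the Chebyshev coefficients of the degree-$N$ interpolant. The scaling factors are our own bookkeeping, matched to SciPy's unnormalized DCT convention.

```python
# Minimal 1-D sketch: Chebyshev interpolation by sampling f on the first-kind
# grid x_k = cos((k+1/2)pi/(N+1)) and applying a type-II DCT.
import numpy as np
from numpy.polynomial.chebyshev import chebval
from scipy.fft import dct

def cheb_coeffs_1d(f, N):
    M = N + 1                                  # number of grid points
    theta = (np.arange(M) + 0.5) * np.pi / M
    x = np.cos(theta)                          # first-kind Chebyshev grid
    y = dct(f(x), type=2)                      # y[n] = 2 * sum_k f(x_k) cos(n theta_k)
    c = y / M
    c[0] /= 2.0                                # c_0 carries an extra factor 1/2
    return c

f = lambda x: np.exp(x) * np.sin(3.0 * x)
c = cheb_coeffs_1d(f, 30)
xs = np.linspace(-1.0, 1.0, 1000)
print("max interpolation error:", np.max(np.abs(chebval(xs, c) - f(xs))))
```

For a smooth $f$ such as the one above, the reported error reaches roughly machine precision, illustrating the convergence behaviour quoted from [@trefethen2019approximation].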
# Algorithm {#sec:main}

To address these shortcomings, we propose an alternative approach: the Fast Chebyshev Transform (FCT). The core of the FCT algorithm is a consistent linear system of the form $$\begin{aligned} \label{eq:main_linear_system} \begin{bmatrix} F^{(1)} & 0 & \hdots & 0 \\ 0 & F^{(2)} & \hdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \hdots & F^{(L)} \end{bmatrix} &\, \begin{bmatrix} {\undefined{f}}^{(1)}\\ {\undefined{f}}^{(2)}\\ \vdots \\ {\undefined{f}}^{(L)} \end{bmatrix} = \begin{bmatrix} A^{(1)} \\ A^{(2)} \\ \vdots \\ A^{(L)} \\ \end{bmatrix} \, {\undefined{c}} \\ \mathcal{F}{\undefined{f}}&= A{\undefined{c}}.\end{aligned}$$ Each matrix $A^{(\ell)}$ is a sparse $O( N ) \times N$ matrix; these blocks are concatenated into a large $O(NL)\times N$ matrix with $L$ sparse blocks. The block matrices $F^{(\ell)}$ are $O(N) \times O(N)$ matrices corresponding to DCTs, ${\undefined{c}}= [c_{{\undefined{n}}^{(1)}},\hdots,c_{{\undefined{n}}^{(N)}}]^T$ is the vector containing non-zero coefficients of $f$'s polynomial approximation, and $[{\undefined{f}}^{(1)},\hdots, {\undefined{f}}^{(L)} ]^T$ is the vector of values of $f$ at the sample points ${\undefined{x}}^{(i)}$. To determine the sampling pattern of $f$ and the sparsity pattern of $A$, we construct a collection of $L$ distinct tensor product grids, each with randomly selected resolutions in each dimension, but whose total number of points is roughly $N$. We call this collection of grids an *$L$-grid*; an example of one possible $L$-grid for $N=90$ in three dimensions is shown in the left panel of the schematic figure below.

![Schematic of the matrix-vector multiplication step in the sparse Fast Chebyshev Transform. *Left*: First, we sample the high-dimensional input function on a collection of $L$ grids with varied, randomly chosen discretizations in each dimension, the product of which roughly equals the estimated number of non-zero coefficients $N$. *Middle*: Next, for each sampled grid, we compute the non-zero entries (red and blue entries) of the linear system resulting from an interpolation problem on the grid. This is sparse due to aliasing effects of Chebyshev polynomials (see the aliasing discussion below and [@trefethen2019approximation Chapter 4]). *Right*: Finally, we multiply each sparse matrix with the coefficient vector (magenta) and apply a Discrete Cosine Transform (green matrix) to the result. To solve for unknown polynomial coefficients, we solve a sparse least-squares problem using an iterative method such as the CG algorithm.](fct-algorithm-schematic.pdf){#fig:fct_schematic width="\\textwidth"}

Due to the aliasing properties of Chebyshev polynomials (see [@trefethen2019approximation Chapter 4]), the matrices $A^{(\ell)}$ are sparse and contain $O(N)$ non-zero entries. These properties, along with fast DCT algorithms [@chen1977fast; @ahmed1974discrete], allow us to compute both sides of this system with $O(LN\log(N))$ time complexity. When $L$ is well-chosen, the matrix $A$ has favorable conditioning properties, allowing an iterative solver such as the CG method [@kershaw1978incomplete] to converge rapidly to the least-squares solution. A schematic of the algorithm is shown in the figure above. In the remainder of this section, we first highlight the aliasing properties of Chebyshev polynomials on $L$-grids and how this leads to the sparsity of $A$. We then describe how to appropriately choose an $L$-grid, and finally combine each of these pieces to describe the full FCT algorithm. 
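As a small illustration of the $L$-grid idea ahead of the detailed discussion in the following subsections, one simple way to draw per-dimension point counts whose product is roughly $N$ is to split $\log N$ randomly across the $D$ dimensions. This is only a sketch of one possible randomized rule; the paper's actual sampling scheme is described later and may differ.

```python
# Sketch: draw L random tensor-product resolutions whose product is roughly N.
import numpy as np

def random_resolutions(N, D, rng):
    # Split log(N) randomly over the D dimensions (Dirichlet weights), so that
    # the product of the per-dimension point counts is roughly N.
    w = rng.dirichlet(np.ones(D))
    return np.maximum(1, np.rint(N ** w)).astype(int)

def make_L_grid(N, D, L, seed=0):
    rng = np.random.default_rng(seed)
    return [random_resolutions(N, D, rng) for _ in range(L)]

for res in make_L_grid(N=90, D=3, L=4):
    print("points per dimension:", res, " total points:", int(np.prod(res)))
```

With $N=90$ and $D=3$, as in the example mentioned above, each draw produces a differently shaped grid of roughly 90 points.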
## Aliasing {#sec:aliasing}

Chebyshev polynomials evaluated on a Chebyshev grid exhibit a special property known as *aliasing*: a higher degree Chebyshev polynomial realizes the same values as a single lower degree Chebyshev polynomial on a grid of fixed size [@trefethen2019approximation Chapter 4]. When approximating functions with Chebyshev polynomials, this aliasing has another side effect: if a low degree Chebyshev polynomial is zero on a given Chebyshev grid, certain higher degree Chebyshev polynomials will also be zero. We leverage this to produce a sparse system matrix below.

#### Aliasing in 1D

To highlight this relationship, consider the one-dimensional problem of interpolating a smooth function $f$ by a degree $N$ Chebyshev polynomial on $[-1,1]$. We know that $f$ can be expressed as a unique Chebyshev series on $[-1,1]$: $$f(x) = \sum_{n=0}^\infty c_nT_n(x). \label{eq:f_inf_cheb_expansion}$$ To approximate this, we want to find a set of $M+1$ coefficients $c_n$ defining a polynomial $p(x)$ such that $$f(x) \approx p(x) = \sum_{n=0}^M c_nT_n(x), \quad x \in [-1,1], \quad c_n = \frac{2 - \delta_0(n)}{\pi}\int_{-1}^1 \frac{T_n(x)f(x)}{\sqrt{1-x^2}} dx,$$ where $\delta_0(n)$ is the Kronecker delta, equal to $1$ when $n=0$ and $0$ otherwise. Specifically, we would like to approximate $f$ by a truncated Chebyshev series. After applying a change of variables $x = \cos(\theta)$ from $[-1,1]$ into $[0,\pi]$, the interpolation problem becomes $$\begin{aligned} f(\cos(\theta)) \approx p(\cos(\theta)) &= \sum_{n=0}^M c_n\cos(n\theta), \quad \theta \in [0,\pi], \quad \quad \\ c_n &= \int_{0}^\pi \cos(n\theta)f(\cos(\theta)) d\theta. \label{eq:poly_approx}\end{aligned}$$ Substituting the polynomial approximation of $f$ into the expression for the $n$th Chebyshev coefficient above, we obtain $$\begin{aligned} c_n =& \int_{0}^\pi \cos(n\theta) \left(\sum_{m=0}^M c_m\cos(m\theta)\right) d\theta \label{eq:cn_equation} \\=& \sum_{m=0}^M c_m\int_{0}^\pi \cos(n\theta) \cos(m\theta) d\theta \label{eq:inner_product_1D}\end{aligned}$$ for $n\leq M$, thanks to the orthogonality of cosines over $[0,\pi]$. If we discretize this integral using $N+1$ equispaced points in $[0, \pi]$ (corresponding to a first-kind Chebyshev grid of $N+1$ points as in the problem setup above), the quadrature becomes $$b_{n,N} :=\sum_{m=0}^M c_m \left ( \frac{1}{N+1} \, \sum_{k=0}^N \cos(n\theta_k) \cos(m\theta_k) \right ) = \frac{1}{N+1} \, \sum_{k=0}^N \cos(n\theta_k) \, p \left( \cos(\theta_k) \right ). \label{eq:quad_1D}$$ In particular, note that if $N \geq M$, the trapezoidal rule is exact for the $(M+1)$-term cosine series and $b_{n,N} = c_n$. However, this is not the case when $N < M$ due to *aliasing*. Such aliasing is quantified by the following lemma: [\[lem:aliasing\]]{#lem:aliasing label="lem:aliasing"} For every positive integer $N$, $$\begin{aligned} \frac{1}{N+1} \, &\sum_{k=0}^{N} \, \cos \left ( \frac{n (k+1/2) \pi}{N+1} \right ) \, \cos \left ( \frac{m (k+1/2) \pi}{N+1} \right ) \\&= \frac{1}{2}\Delta_N(m+n) + \frac{1}{2}\Delta_N(m-n),\end{aligned}$$ where $$\label{eq:aliasing_delta} \Delta_N(\ell) = \left\{ \begin{array}{ll} 1 & \text{if } \ell \, \mathrm{mod} \, (4(N+1)) = 0 \\ -1 & \text{if } \ell \, \mathrm{mod} \, (4(N+1)) = 2(N+1) \\ 0 & \mbox{o.w. } \end{array} \right.$$ Substituting this identity into the quadrature above: $$\begin{aligned} b_{n,N} &= \sum_{m=0}^M c_m\left(\frac{1}{2}\Delta_N(m+n) + \frac{1}{2}\Delta_N(m-n)\right) \\&= \frac{1}{N+1} \, \sum_{k=0}^N \cos(n\theta_k) \, p \left( \cos(\theta_k) \right ) . 
\end{aligned}$$ This equation establishes a relationship between the aliased coefficients, the polynomial coefficients, and the polynomial values, which can be expressed as $${\undefined{b}}_N = A \, {\undefined{c}}= F \, {\undefined{p}},$$ where ${\undefined{c}}$ is the vector of Chebyshev coefficients of the polynomial, ${\undefined{p}}$ contains polynomial values sampled on the Chebyshev grid, $F$ corresponds to a size-$(N+1)$ Discrete Cosine Transform (DCT), and $A$ is sparse with entries given by $$A_{n,m} = \frac{1}{2}\Delta_N(m+n) + \frac{1}{2}\Delta_N(m-n). \label{eq:aliasing_matrix_1d}$$

#### Higher dimensions

To see how this phenomenon impacts higher dimensions, we generalize the above relations from scalars in $[-1,1]$ to vectors in $[-1,1]^D$. Chebyshev interpolation of a function $f:[-1,1]^D \to \mathbb{R}$ involves tensor-products of Chebyshev polynomials and replacing the single index $n$ with multi-index ${\undefined{n}}=(n_1, \hdots, n_D) \in \mathbb{N}^D$: $$\begin{aligned} f({\undefined{x}}) \approx p({\undefined{x}}) = \sum_{{\undefined{n}}\in \mathcal{N}} c_{\undefined{n}}T_{\undefined{n}}({\undefined{x}}), &\quad {\undefined{x}}\in [-1,1]^D, \;\\ &\quad c_{\undefined{n}}= \prod_{i=1}^D\frac{2 - \delta_0(n_i)}{\pi}\int_{[-1,1]^D} \frac{T_{\undefined{n}}({\undefined{x}})f({\undefined{x}})}{\sqrt{\undefined{1}-{\undefined{x}}^2}} d{\undefined{x}},\end{aligned}$$ where $T_{\undefined{n}}({\undefined{x}}) = T_{n_1}(x_1)T_{n_2}(x_2)\dots T_{n_D}(x_D)$, and $\mathcal{N}$ is the set of multi-indices of the non-zero coefficients of the interpolant $p(\cdot)$. In general, for interpolation purposes, $\mathcal{N}$ corresponds to $\mathcal{S}^1_{d,D}$ or $\mathcal{S}^2_{d,D}$ (see Section 2). We apply the same change of variables in each dimension as in the one-dimensional case, producing $$\begin{aligned} p(\cos(\theta_1), ..., \cos(\theta_D)) &= \sum_{{\undefined{n}}\in \mathcal{N}} c_{\undefined{n}}\cos({\undefined{n}}{\undefined{\theta}}), \quad {\undefined{\theta}}\in [0,\pi]^D, \\ c_{\undefined{n}}&= \int_{[0,\pi]^D} \cos({\undefined{n}}{\undefined{\theta}})f(\cos(\theta_1), ..., \cos(\theta_D)) d{\undefined{\theta}} \label{eq:poly_approx_nd}\end{aligned}$$ with ${\undefined{\theta}}= (\theta_1,\hdots, \theta_D) \in [0, \pi]^D$ and  $\cos({\undefined{n}}{\undefined{\theta}}) = \cos(n_1\theta_1)\cos(n_2\theta_2) \dots \cos(n_D\theta_D)$. Combining the two parts of this system, we obtain $$\begin{aligned} c_{\undefined{n}} =& \sum_{{\undefined{m}}\in \mathcal{N} }c_{\undefined{m}}\prod_{i=1}^D\int_0^\pi \cos(n_i\theta_i)\cos(m_i\theta_i) d\theta_i.\label{eq:inner_product_nd} \end{aligned}$$ Discretizing this integral, using $N_i + 1$ points in each dimension $i$, provides the multi-dimensional analogue of the one-dimensional quadrature: $$\begin{aligned} b_{{\undefined{n}}, {\undefined{N}}} &= \sum_{{\undefined{m}}\in \mathcal{N}} c_{\undefined{m}} \prod_{i=1}^D \left(\frac{1}{ N_i+1} \sum_{k_i =0}^{N_i} \cos(n_i \, \theta_{i,k_i} ) \cos(m_i \, \theta_{i,k_i}) \right) \\ &= \sum_{k_1=0}^{N_1} ... \sum_{k_D=0}^{N_D} \, \left ( \frac{1}{\prod_{i=1}^D (N_i+1)} \, \prod_{i=1}^D \cos(n_i\theta_{i,k_i}) \right ) \, p \left (\cos(\theta_{1,k_1}), \, ..., \, \cos(\theta_{D,k_D}) \right ). 
\label{eq:quad_nd}\end{aligned}$$ In particular, note that if we sample on a grid where $N_i \geq \max_{n\in \mathcal{N}} (\max_i n_i)$ as is done traditionally, then $b_{{\undefined{n}},{\undefined{N}}} = c_{\undefined{n}}$. However, as the number of samples increases exponentially in higher dimensions, this traditional approach becomes prohibitively expensive computationally. To alleviate this issue, we perform a form of under-sampling, leveraging the aliasing result of the lemma above; i.e., we use grids with $N_i+1$ points per dimension $i$, chosen in such a way that the total size $\prod_{i=1}^D (N_i+1)$ is roughly equal to $\lvert \mathcal{N} \rvert$ (see the following subsections for details). Following the one-dimensional derivation, this results in a linear system of the form $$\begin{aligned} b_{{\undefined{n}}, {\undefined{N}}} &= \sum_{{\undefined{m}}\in \mathcal{N}} c_{\undefined{m}} \prod_{i=1}^D \left( \frac{1}{2}\Delta_{N_i}(m_i+n_i) + \frac{1}{2}\Delta_{N_i} (m_i-n_i) \right)\\ &= \sum_{k_1=0}^{N_1} ... \sum_{k_D=0}^{N_D} \, \left ( \frac{1}{\prod_{i=1}^D (N_i+1)} \, \prod_{i=1}^D \cos(n_i\theta_{i,k_i}) \right ) \, p \left (\cos(\theta_{1,k_1}), \, ..., \, \cos(\theta_{D,k_D}) \right ), \label{eq:quad_nd_matrix} \end{aligned}$$ which may be written as $$b_{{\undefined{N}}} = A_{\mathcal{N}, {\undefined{N}}} {\undefined{c}}= F_{{\undefined{N}}} {\undefined{p}}_{\undefined{N}}. \label{eq:multi_dim_dct}$$ Here, ${\undefined{c}}$ is the vector of Chebyshev coefficients of the polynomial with multi-indices corresponding to elements of $\mathcal{N}$, ${\undefined{p}}_{\undefined{N}}$ contains polynomial values from the non-uniform grid with $N_i+1$ samples per dimension, $F_{{\undefined{N}}}$ corresponds to a non-uniform Discrete Cosine Transform (DCT) with $N_i+1$ samples per dimension, and $A_{\mathcal{N}, {\undefined{N}}}$ is a size-$O(\lvert \mathcal{N} \rvert) \times \lvert \mathcal{N} \rvert$ matrix with rows corresponding to elements of the non-uniform grid and columns corresponding to elements of the set $\mathcal{N}$ with entries $$A_{{\undefined{n}}, {\undefined{m}}}= \prod_{i=1}^D \, A^{(i)}_{n_i, m_i} = \prod_{i=1}^D \left(\frac{1}{2}\Delta_{N_i}(m_i+n_i) + \frac{1}{2}\Delta_{N_i}(m_i-n_i)\right). \label{eq:aliasing_matrix_nd_diff}$$ In particular, the size of this system is *linear* in the number of Chebyshev coefficients ($\lvert \mathcal{N} \rvert$), which leads to computational efficiency. Further, the matrix $A$ can be computed efficiently by considering the products of non-zero elements of lower-dimensional sparse aliasing matrices only.

## $L$-grids

With the system matrix on a single tensor-product grid defined, we can apply the same construction to a collection of different grids. Let ${\bar{\undefined{N}}}^{(\ell)} = (N_1^{(\ell)},\hdots, N_D^{(\ell)})$ for $\ell=1, \hdots, L$ be the $\ell^{th}$ choice of tensor-product discretization. The $L$ discretizations obtained by discretizing each dimension in this way for each $\ell$ are collectively called an *$L$-grid*, which is completely determined by the sampling rates ${\bar{\undefined{N}}}^{(\ell)}$. For each grid within a given $L$-grid, we compute the matrix $A^{(\ell)}$ using the entries above and concatenate them into a final full matrix $A$: $$A = \begin{bmatrix} A^{(1)} \\ A^{(2)} \\ \vdots \\ A^{(L)} \end{bmatrix} \label{eq:L-grid_A_mat}$$ Combining this with the per-grid relations, we obtain the block system stated at the beginning of this section. From this, the FCT computes the DCT on the left side using the function samples on the entire $L$-grid, followed by solving a least-squares problem for the unknown polynomial coefficients ${\undefined{c}}$. 
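The sketch below (Python/NumPy, with the $1/(N_i+1)$ normalization used above; the index set, grid resolutions, and DCT scaling are our own illustrative choices) assembles one aliasing block from products of one-dimensional terms and checks the relation $b_{{\undefined{N}}} = A_{\mathcal{N},{\undefined{N}}}\,{\undefined{c}} = F_{{\undefined{N}}}\,{\undefined{p}}_{{\undefined{N}}}$ against a multi-dimensional DCT of polynomial samples on a deliberately under-resolved two-dimensional grid.

```python
# Build a 2-D aliasing block from products of 1-D aliasing terms and verify it
# against the (scaled) multi-dimensional DCT of samples of a known polynomial.
import itertools
import numpy as np
from numpy.polynomial.chebyshev import chebval
from scipy.fft import dctn

def Delta(ell, N):
    # Aliasing symbol Delta_N from the lemma above.
    r = ell % (4 * (N + 1))
    return 1.0 if r == 0 else (-1.0 if r == 2 * (N + 1) else 0.0)

def T(n, x):
    # 1-D Chebyshev polynomial T_n evaluated via its coefficient vector.
    e = np.zeros(n + 1)
    e[n] = 1.0
    return chebval(x, e)

D, d = 2, 6
calN = [m for m in itertools.product(range(d + 1), repeat=D) if sum(m) <= d]
res = (3, 4)                                     # (N_1, N_2): a 4 x 5 point grid
rows = list(itertools.product(*(range(Ni + 1) for Ni in res)))

A = np.array([[np.prod([0.5 * Delta(m[i] + n[i], res[i])
                        + 0.5 * Delta(m[i] - n[i], res[i]) for i in range(D)])
               for m in calN] for n in rows])

# Sample a random polynomial supported on calN on the coarse grid and DCT it.
rng = np.random.default_rng(0)
c = rng.uniform(-1.0, 1.0, len(calN))
grids = [np.cos((np.arange(Ni + 1) + 0.5) * np.pi / (Ni + 1)) for Ni in res]
X1, X2 = np.meshgrid(*grids, indexing="ij")
p_vals = sum(ci * T(m[0], X1) * T(m[1], X2) for ci, m in zip(c, calN))
b = dctn(p_vals, type=2) / np.prod([2.0 * (Ni + 1) for Ni in res])

print("block shape:", A.shape, " nonzero entries:", np.count_nonzero(A))
print("max |A c - b|:", np.max(np.abs(A @ c - b.ravel())))
```

The reported difference is at the level of round-off, and the count of nonzero entries illustrates the sparsity that the product structure of the entries induces.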
## Sampling and Matrix Construction {#sec:sampling_schedule}

With the block system above in hand, we need to determine an $L$-grid, along with an appropriate value of $L$, such that the resulting matrix $A$ is well-conditioned and has full column rank. To ensure this, we adopt a randomized approach by randomly assigning sampling rates along each dimension $i$ in such a way that $$\prod_{i=1}^D N_i^{(\ell)} \approx N, \quad \ell=1,\hdots, L$$ as detailed below. With our randomly selected ${\bar{\undefined{N}}}$, we sample $f$ on the associated $L$-grid and form the matrix $A$ block by block as described above. The goal for a sufficiently large value of $L$ is to sample enough points to produce a well-conditioned least-squares system to recover the Chebyshev coefficients. To determine $L$, we iteratively sample a new random grid, construct the corresponding matrix $A^{(\ell)}$, and append it to the matrix computed for the previous $\ell-1$ grids. This process terminates when $A$ is full-rank with a condition number lower than some provided bound $\kappa$; we typically choose $\kappa< 10^4$. This process is summarized below. We have performed a variety of numerical experiments with this sampling scheme. Empirically, choosing $L = c D$ for some $c \geq 2$ is sufficient to guarantee invertibility and well-conditioning of the matrix $A$.

#### Precomputation of $L$-grids

Since $A$ depends only on $\mathcal{N}$ and is independent of the sample values, we can precompute and store the sampling rates ${\bar{\undefined{N}}}$ of an appropriate $L$-grid as well as the matrix $A$ for various values of $N$, $D$, $d$ and a given set of multi-indices $\mathcal{N}$ (e.g., $\mathcal{S}^1_{d,D}$ or $\mathcal{S}^2_{d,D}$). This makes the construction of $A$ a precomputation with no runtime cost in practical applications.

## Full algorithm {#sec:algorithm}

We now give a description of the full FCT algorithm. Given a target function $f(\cdot)$, we first randomly sample an $L$-grid and construct the corresponding sparse matrix $A$ using the aliasing-based construction above. Then, we sample the function $f(\cdot)$ at the discretization points on each grid in the chosen $L$-grid. We denote the vector containing these samples on a single grid ${\undefined{f}}^{(\ell)}$ and the stacked vector of all values ${\undefined{f}}= [{\undefined{f}}^{(1)}; \hdots; {\undefined{f}}^{(L)} ]$. We then form the linear system and solve for the Chebyshev coefficients ${\undefined{c}}$ via least-squares: $${\undefined{c}}^* = \mathrm{argmin}_{\undefined{c}}||\mathcal{F} {\undefined{f}}- A {\undefined{c}}||_2^2.$$ We solve this least-squares problem with the CG method. Since we can efficiently compute Type-II DCTs with the Fast Fourier Transform (FFT) and can efficiently compute matrix-vector products $A{\undefined{c}}$ due to the sparsity of $A$, we can solve this problem rapidly. Combined with the favorable conditioning of $A$, our approach allows the CG method to converge rapidly to a solution. The scheme is summarized in the following listing.

**Input:** target function $f$, degree $d$, dimension $D$, multi-index norm $s$.
**Output:** Chebyshev coefficients ${\undefined{c}}^*$ approximating $f$.

1. $\kappa_0 \gets 10^4$
2. $A, L, \mathfrak{N} \gets$ the aliasing matrix, number of grids, and $L$-grid constructed for $(\mathcal{S}^s_{d,D}, \kappa_0)$ as described above
3. For each $\ell = 1, \hdots, L$: ${\undefined{f}}^{(\ell)} \gets$ Sample $f$ at $\mathfrak{N}_\ell$
4. For each $\ell = 1, \hdots, L$: $\mathcal{F}^{(\ell)} = F^{(\ell)} \, {\undefined{f}}^{(\ell)}$ (apply Type-II DCTs to ${\undefined{f}}^{(\ell)}$)
5. $\mathcal{F} = [\mathcal{F}^{(1)};\hdots; \mathcal{F}^{(L)}]$
6. Solve the least-squares problem ${\undefined{c}}^* = \mathrm{argmin}_{\undefined{c}}||\mathcal{F} - A {\undefined{c}}||_2^2$ using CG
7. Return ${\undefined{c}}^*$
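Below is a compact, self-contained sketch of the pipeline in the listing above. It is illustrative only: it uses Python/NumPy with dense linear algebra and a direct least-squares solve in place of the sparse C++/`Eigen` CG implementation described later, the Dirichlet-based resolution rule and the helper names (`block`, `scaled_dct`) are our own, and an unlucky random draw may require appending further grids.

```python
# End-to-end toy FCT: random L-grid, stacked aliasing matrix, DCT'd samples,
# least-squares recovery of the coefficients of a polynomial supported on calN.
import itertools
import numpy as np
from numpy.polynomial.chebyshev import chebval
from scipy.fft import dctn

def Delta(ell, N):
    r = ell % (4 * (N + 1))
    return 1.0 if r == 0 else (-1.0 if r == 2 * (N + 1) else 0.0)

def T(n, x):
    e = np.zeros(n + 1)
    e[n] = 1.0
    return chebval(x, e)

def block(calN, res):
    # Aliasing block A^(l) for one grid with res[i] + 1 points per dimension.
    rows = itertools.product(*(range(Ni + 1) for Ni in res))
    return np.array([[np.prod([0.5 * Delta(m[i] + n[i], res[i])
                               + 0.5 * Delta(m[i] - n[i], res[i])
                               for i in range(len(res))])
                      for m in calN] for n in rows])

def scaled_dct(f, res):
    # Sample f on the grid and apply the normalized type-II DCT (right-hand side).
    grids = [np.cos((np.arange(Ni + 1) + 0.5) * np.pi / (Ni + 1)) for Ni in res]
    mesh = np.meshgrid(*grids, indexing="ij")
    vals = f(*mesh)
    return (dctn(vals, type=2) / np.prod([2.0 * (Ni + 1) for Ni in res])).ravel()

rng = np.random.default_rng(0)
D, d = 4, 3
calN = [m for m in itertools.product(range(d + 1), repeat=D) if sum(m) <= d]
N, L = len(calN), 3 * D                       # empirical choice L = 3D from the text

# Target: a random polynomial supported on calN.
c_true = rng.uniform(-1.0, 1.0, N)
f = lambda *X: sum(ci * np.prod([T(m[i], X[i]) for i in range(D)], axis=0)
                   for ci, m in zip(c_true, calN))

resolutions = [tuple(np.maximum(1, np.rint(N ** rng.dirichlet(np.ones(D)))).astype(int))
               for _ in range(L)]
A = np.vstack([block(calN, res) for res in resolutions])
b = np.concatenate([scaled_dct(f, res) for res in resolutions])
c_rec, *_ = np.linalg.lstsq(A, b, rcond=None)   # CG on the sparse system in practice

print("N =", N, " rows =", A.shape[0], " cond(A) =", np.linalg.cond(A))
print("max |c_rec - c_true| =", np.max(np.abs(c_rec - c_true)))
```

If the reported condition number is large or the recovery error is not near round-off, more random grids can simply be appended, mirroring the condition-number-controlled loop described above.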
## Scaling & Complexity {#sec:complexity}

#### Precomputation complexity

The primary work of the precomputation phase of the FCT involves forming the matrix $A$ using the aliasing construction described above. Computing the sampling rate for a given block of $A$ requires $O(D)$ time to complete. For a given sampling rate $(N_1^{(\ell)}, N_2^{(\ell)}, \hdots, N_D^{(\ell)})$, we then compute the non-zero elements of $A^{(\ell)}$ by using the nonzeros of one dimensional aliasing matrices as above; since each one dimensional aliasing matrix has $O(1)$ nonzeros per dimension per column, computing all of the nonzeros results in $O(ND)$ computational complexity. Further, because the loop over grids repeats $L$ times, we can construct $A$ in $O(NDL)$ time (the construction of the $\ell^{th}$ aliasing matrix dominates the cost of each loop iteration). It is also worth noting that, as the matrix $A$ is sparse, the memory complexity is $O(NL)$.

#### FCT complexity

There are two main steps in the FCT: (1) the Type-II DCTs applied to the function samples and (2) solving the least-squares problem with the CG method. The adjoint Type-II DCTs applied on $L$ different $D$-dimensional grids (each containing $O(N)$ points) can be computed in $O(NL\log(N))$ time using FFTs [@ahmed1974discrete]. The least-squares problem can be solved with the CG method in $n_{CG}$ iterations with each iteration requiring a sparse matrix-vector multiply in $O(NL)$ time, resulting in an overall computational complexity of $O(n_{CG}NL)$. Note that if the condition number of $A$ is not too large in value, resulting in a *well-conditioned* system, $n_{CG}$ is small [@golub2013matrix Chapter 10.2], and the CG method converges rapidly. We discuss this further in the following section, but in the case of a well-conditioned $A$, the complexity of FCT is $O(NL\log(N))$. In summary, the precomputation phase computational complexity is $O(NDL)$, while applying FCT to $f$ results in $O(NL\log(N))$ complexity.

# Numerical Results {#sec:numerical_results}

In this section, we show results in terms of scalability and accuracy for FCT. We begin with a brief background discussion on numerical approximation and error, followed by a description of the methods used for comparison and of the different types of functions studied for scaling and accuracy. We briefly note some details of the numerical implementation, and then present the specific experimental results on scaling and accuracy for FCT against the comparison methods on these functions.

## Numerical Approximation and Error Discussion {#sec:approx_and_details}

The error introduced from the polynomial approximation of a smooth function $f$ evaluated at Chebyshev points is a well-studied problem [@trefethen2019approximation]. On a tensor product Chebyshev grid in high dimensions, if $f$ is analytic in a Bernstein $\rho$-hyperellipse containing $[-1,1]^D$, then an interpolating polynomial of Euclidean degree $k$ will converge geometrically with rate $O(\rho^{-k})$ [@trefethen2017multivariate]. In the context of FCT, we are solving a least-squares problem for an approximating polynomial to $f$ on an $L$-grid instead of interpolating $f$ on a single $D$-dimensional grid. The quadrature rule used to compute $c_{\undefined{n}}$ is exact due to the orthogonality of cosines; however, when $f$ is underresolved in a certain dimension as a result of the randomized grid subsampling, greater care must be taken. 
Suppose $f$ is a polynomial with bounded total degree $k$, but whose leading order term has a multi-index ${\undefined{k}}= (k_1, k_2, \hdots, k_D)$ with $\sum_{i=1}^D k_i \leq N$. We have observed empirically that as long as $L$ scales on the order of $O(D+d)$, where $D$ is the dimension of the ambient space and $d$ is the maximum degree we wish to interpolate, the resulting aliasing matrix $A$ has full column rank with high probability; indeed, experiments show this scaling for $L$ holds for all numerical results considered below in the following sections. In this case, we can obtain an accurate Chebyshev expansion approximating $f$ using only a constrained set of polynomials in our basis from $\mathcal{N}$. With this background going forward, we next discuss the methods we employ for evaluating the accuracy and scaling benefits of FCT.

## Methods for Numerical Comparison {#sec:numerical_comparisons}

We compare FCT with two standard approaches for performing Chebyshev interpolation: (1) the baseline DCT-based interpolation approach and (2) a least-squares approach based on a randomized interpolation matrix, which we call Randomized Least-Squares Interpolation (RLSI), with the following specific details:

- **DCT approach**: we sample the target function $f$ on a $D$-dimensional tensor product Chebyshev grid and apply the DCT to compute the Chebyshev coefficients. This method has a complexity of $O(Dd^D\log d)$.
- **RLSI approach**: consider the general least-squares approach introduced in Section 2. For RLSI, we construct the matrix $B$ by uniformly sampling $CN$ points from a full uniform tensor-product grid of size $d^D$ (without explicitly forming it). We further choose $C=1.2$ to ensure that $B$ is invertible with bounded condition number less than $\kappa_{\mathrm{RLSI}} = 10^4$; this mirrors the conditions imposed on the FCT sampling scheme for a fair comparison. The resulting least-squares problem is solved numerically using CG, available through the `Eigen` library [@eigenweb]. This method has complexity $O(\lvert \mathcal{N} \rvert^2)$.

Note that for consistency, we use a tolerance of $10^{-3}$ for both RLSI and FCT. We choose $L=3D$ for FCT for all experiments unless otherwise specified. To demonstrate the scalability and efficiency of the FCT algorithm versus the DCT-based and RLSI-based approaches, we compare the methods on three different challenging families of functions, as detailed below.

## Numerical Experiments Function Families {#sec:numerical_families}

For our numerical experiments, we used three families of functions as detailed below; these three function families are studied extensively below against the methods detailed above. These function families are chosen for their challenging complexity in approximation, interpolation, and scalability by FCT, DCT-based, and RLSI methods, providing a clear and fair comparison of all three methods.

### Function Class \#1 (Runge Function) {#sec:numerical_runge}

The first family $f_1(\cdot)$ can be parameterized as: $$f_1({\undefined{x}}) = \frac{1}{1 + 10\|{\undefined{x}}\|_2^2}.$$ This function is challenging to approximate accurately due to the nearby pole in the complex extension of $f_1$ from $\mathbb{R}^D$ to $\mathbb{C}^D$ located at $\pm \frac{i}{\sqrt{10}}\undefined{e}_i$, where $\undefined{e}_i$ denotes the $i$th unit vector. Results for experiments using this function are reported below. 
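As a quick one-dimensional illustration of why this pole limits the achievable accuracy (a sketch only, with our own choice of degree and the same DCT scaling as the earlier example), the Chebyshev coefficients of the univariate slice $1/(1+10x^2)$ decay geometrically at a rate governed by the Bernstein ellipse through $\pm i/\sqrt{10}$, which is much slower than for an entire function:

```python
# Coefficient decay of the univariate Runge-type slice 1/(1 + 10 x^2).
import numpy as np
from scipy.fft import dct

def cheb_coeffs_1d(f, N):
    # Degree-N Chebyshev interpolation on the first-kind grid (as in Section 2).
    theta = (np.arange(N + 1) + 0.5) * np.pi / (N + 1)
    c = dct(f(np.cos(theta)), type=2) / (N + 1)
    c[0] /= 2.0
    return c

runge = lambda x: 1.0 / (1.0 + 10.0 * x ** 2)
c = cheb_coeffs_1d(runge, 60)
# The function is even, so even-order coefficients dominate; print every 10th one.
print(np.abs(c[::10]))
```

The printed magnitudes shrink by only a few orders of magnitude per ten degrees, so fairly high degrees, and hence many coefficients, are needed to reach tight tolerances.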
### Function Class \#2 (Oscillatory Function) {#sec:numerical_oscillatory} The second family of functions, $f_2(\cdot)$, are parameterized as: $$f_2({\undefined{x}}) = \sin(3\cos(3\exp(\|{\undefined{x}}\|_2^2))) + \exp(\sin(3({\undefined{x}}\cdot \undefined{1}))).$$ These functions are analytic, but they exhibit a complex behavior in the region $[-1,1]^D$. In this case, accurate interpolation requires high degree, and therefore many coefficients, making this family challenging computationally. Results for experiments using this function can be seen in . ### Function Class \#3 (Sparse Support Function) {#sec:numerical_sparse_support} Finally, we consider a third family of functions $f_3(\cdot)$, which have *sparse support*; i.e., $\mathcal{N}$ has little structure and is a sparse subset of $S^\infty_{d,D}$. To generate elements of this family, we fix a dimension $D$ and degree $d$ and randomly select $N$ multi-indices within $S^\infty_{d,D}$. We then assign random values in $[-1,1]$ to the coefficients corresponding to the chosen multi-indices. Results for experiments using this function can be seen in . ## Numerical Implementation {#sec:implementation} We have implemented the FCT algorithm described in in `C++`. We use the `fftw` package [@frigo1998fftw] for its optimized DCT implementation and `Eigen` [@eigenweb] for its linear algebra routines, including its CG method. All experiments were performed with a single core on a machine with an AMD EPYC 7452 processor and 256GB of RAM. ## Numerical Experiment Studies on Scaling and Accuracy {#sec:numerical_experiment_results} Using our FCT implementation, we provide the specific results of our numerical experiments against the methods detailed in (i.e., DCT-based and RLSI) on the three function families described in . We begin with Function Family \#1 (Runge Function $f_1(\cdot)$) as described in . ### Runge Function ($f_1(\cdot)$) {#sec:numerical_runge_res} We first focus on Function Family \#1 as described in in order to study scaling of FCT against DCT-based and RLSI methods. #### Scaling Comparison with $f_1(\cdot)$ To compare the scaling behaviors of each method, we continue to focus on $f_1(\cdot)$ and construct polynomials in ${\mathbb{P}^D_{d,1}}$ with random coefficients $c_{\undefined{n}}$ in $[-1,1]^D$ and attempt to recover them with FCT, the DCT-based approach, and RLSI. Each of these algorithms should recover the polynomials accurately with varying complexity. We test polynomials of degree $d=3,6$ and in dimensions $D= 2, \hdots, 25$ with $N=|\mathcal{S}^1_{d,D}| = \binom{D+d}{d}$ coefficients. In , we report the wall time of each method in terms of the number of nonzero coefficients $N$ for a given value of $d$ and $D$. ![Wall time versus number of coefficients $N=|\mathcal{S}^1_{d,D}| = \binom{D+d}{d}$ for the interpolation of polynomials with degree $d=3$ (left) and at $d = 6$ (right), and varying dimension. The FCT significantly outperforms the DCT and RLSI with a crossover size of roughly $N=2 \times 10^2$. Each point on each curve corresponds to dimension $D=2,\hdots,25$. Curves with fewer points indicate the algorithm in question ran out of memory.](ScalingComparisonDeg3.pdf "fig:"){#fig:result_dense_support width="\\textwidth"} [\[fig:scaling_deg_3\]]{#fig:scaling_deg_3 label="fig:scaling_deg_3"} ![Wall time versus number of coefficients $N=|\mathcal{S}^1_{d,D}| = \binom{D+d}{d}$ for the interpolation of polynomials with degree $d=3$ (left) and at $d = 6$ (right), and varying dimension. 
The FCT significantly outperforms the DCT and RLSI with a crossover size of roughly $N=2 \times 10^2$. Each point on each curve corresponds to dimension $D=2,\hdots,25$. Curves with fewer points indicate the algorithm in question ran out of memory.](ScalingComparisonDeg6.pdf "fig:"){#fig:result_dense_support width="\\textwidth"} [\[fig:scaling_deg_6\]]{#fig:scaling_deg_6 label="fig:scaling_deg_6"} We do not report timing results when the algorithm runs out of system memory. We see that the runtime of the DCT approach grows exponentially with $D$, quickly running out of system memory even for $d=3$. The RLSI approach fares better, exhibiting roughly $O(N^2)$ scaling as expected; however, it still runs out of memory when $d=6$ while FCT is able to recover the most polynomials in high dimensions. It is worth highlighting that, in low dimensions, RLSI and the DCT approach are in fact faster than FCT. #### Accuracy Comparison In , we compare the approximation accuracy of the three methods by providing the mean $\ell_2$ coefficient error: for a computed solution ${\undefined{c}}$ and known solution $\tilde{{\undefined{c}}}$, we report $\|{\undefined{c}}- \tilde{{\undefined{c}}}\|_2/N$. Tests that run out of system memory in this range are not plotted. The target function for interpolation in this case was generated by randomly selecting Chebyshev coefficients between $[-1, 1]$ in the specified dimensions and degrees. ![Mean $\ell_2$ coefficient error versus number of coefficients $N=|\mathcal{S}^1_{d,D}| = \binom{D+d}{d}$ for the interpolation of polynomials with degree $d=3$ (left) and at $d = 6$ (right), and varying dimension. Each point on each curve corresponds to dimension $D=2,\hdots,25$. ](ScalingComparisonError_deg3.pdf "fig:"){#fig:result_dense_support_error width="\\textwidth"} [\[fig:scalingerror_deg_3\]]{#fig:scalingerror_deg_3 label="fig:scalingerror_deg_3"} ![Mean $\ell_2$ coefficient error versus number of coefficients $N=|\mathcal{S}^1_{d,D}| = \binom{D+d}{d}$ for the interpolation of polynomials with degree $d=3$ (left) and at $d = 6$ (right), and varying dimension. Each point on each curve corresponds to dimension $D=2,\hdots,25$. ](ScalingComparisonError_deg6.pdf "fig:"){#fig:result_dense_support_error width="\\textwidth"} [\[fig:scalingerror_deg_6\]]{#fig:scalingerror_deg_6 label="fig:scalingerror_deg_6"} The error of each method is essentially flat (negative slopes are due to increasing $N$). The DCT-based approach exactly recovers the polynomial coefficients as expected, whereas the accuracy of RLSI is bounded by the CG error tolerance, since the system matrix is full-rank. The jump in error for FCT in the degree case is caused by a phase change as we move from low condition number aliasing matrices for small problem instances, to ones with more moderate condition numbers. We observe that FCT becomes more efficient than the DCT-based approach and RLSI at $N\approx 2\times 10^2$ for $d=3$ and $N\approx 2\times 10^3$ for $d=6$, i.e., $D=10$ for $d=3$ and $D=7$ for $d=6$. ### Oscillatory Function ($f_2(\cdot)$) {#sec:numerical_osc_res} To investigate the accuracy of function approximations produced with FCT, we consider the function $f_2(\cdot)$ as described in , where we choose $D=5$. A 2D slice of $f_2$ through the origin can be seen in -Right. We approximate $f_2$ with a polynomial of bounded Euclidean degree $d_E$; i.e., $N = |\mathcal{S}_{d,D}^2|$ and compare the $L^\infty$ absolute error.[^2]. 
We choose a CG tolerance at machine precision to ensure that all error is solely due to FCT. ![*Left*: FCT wall time and $L^\infty$ absolute error for $f_2$, with bounded Euclidean degree $d_E=2,\hdots, 50$; *Right*: a horizontal slice through the origin of $f_2$. We observe rapid error decay with increasing Euclidean degree, while the computational cost scales quasi-linearly.](accuracy_vs_time.pdf){#fig:accuracy width="\\textwidth"} ![*Left*: FCT wall time and $L^\infty$ absolute error for $f_2$, with bounded Euclidean degree $d_E=2,\hdots, 50$; *Right*: a horizontal slice through the origin of $f_2$. We observe rapid error decay with increasing Euclidean degree, while the computational cost scales quasi-linearly.](accuracy_function.pdf){#fig:accuracy width="\\textwidth"} In -Left, we plot the relative error and wall time of FCT in terms of the number of coefficients $N$, determined by a bounded Euclidean degree. We see that the $L^\infty$ error decays rapidly with $N$, and therefore with $d_E$, as expected from [@trefethen2017multivariate]. By $d_E=50$ ($N\approx 5000$), we have resolved $f_2$ to machine precision, which is a behavior consistent with a standard Chebyshev interpolation scheme. We also observe quasi-linear scaling in wall time with increasing $N$, requiring only a $1/10 s$ to recover 5000 coefficients to machine precision. ### Sparse Coefficient Recovery ($f_3(\cdot)$) {#sec:numerical_sparse_res} We can use the FCT algorithm to approximate a function $f$ using an expansion consisting of specific subset of Chebyshev coefficients as described in . Provided we know the location of the nonzero coefficients in the expansion of our function of interest, FCT will recover these rapidly and with high accuracy. Suppose we know that the target function $f$ is a polynomial with $N$ non-zero coefficients whose locations we know in advance. In other words, we know the multi-indices ${\undefined{n}}^{(i_1)},{\undefined{n}}^{(i_2)},\hdots,{\undefined{n}}^{(i_N)}$ which have non-zero coefficients $c_{{\undefined{n}}^{(i_j)}}$. ![Comparison of the computation time of the FCT and RLSI algorithms for the recovery of the Chebyshev coefficients of a polynomials in dimension $D=100$ having an increasing number of sparse coefficients. ](SparseCoeffsDim100.pdf){#fig:wall_time_big_scaling width="50%"} We compare the walltime for FCT and RLSI in in dimension $D=100$. FCT recovers $N=10^7$ Chebyshev coefficients in $\sim 10$ seconds, while RLSI runs out of memory at $N=10^4$. Note that $A$ is constructed via equation , and only contains columns corresponding to the nonzero multi-indices (i.e., only $N$ columns). # Conclusions {#sec:conclusions} In this paper, we have introduced the Fast Chebyshev Transform (FCT), an algorithm for computing the Chebyshev coefficients of a polynomial from the knowledge of the location of its nonzero Chebyshev coefficients with applications to Chebyshev interpolation. We have theoretically and empirically demonstrated quasi-linear scaling and competitive numerical accuracy on high-dimensional problems. We have compared the performance of our method with that of commonly-used alternative techniques based on least-squares (RLSI) and the Discrete Cosine Transform (DCT). We have further demonstrated significant speedups. Moving forward, our most immediate next step is to improve the sampling scheme of FCT to further reduce the number of required function samples. 
We also plan to apply FCT to problems in polynomial optimization, and to extend the sampling scheme to more general transforms of smooth functions, such as the NUFFT (Non-Uniform Fast Fourier Transform) and Gaussian process regression [@greengard2022equispaced; @greengard2004accelerating].

# Data Availability Statement

The data produced for this paper will be made available upon reasonable request, subject to the policy of Qualcomm Technologies Inc. for the public release of internal research material.

# Proofs

## Proof of aliasing identity {#sec:lemma_proof}

Let $1 < N \in \mathbb{N}$. Then,
$$\frac{1}{N} \, \sum_{k=0}^{N-1} \, \cos \left ( \frac{n (k+1/2) \pi}{N} \right ) \, \cos \left ( \frac{m (k+1/2) \pi}{N} \right ) = \frac{1}{2} \Delta(m+n) + \frac{1}{2}\Delta(m-n),$$
where
$$\label{eq:aliasing_delta_app}
\Delta(l) = \left\{ \begin{array}{ll} 1 & \mbox{if } l \equiv 0 \ (\mathrm{mod}\ 4N), \\ -1 & \mbox{if } l \equiv 2N \ (\mathrm{mod}\ 4N), \\ 0 & \mbox{otherwise.} \end{array} \right.$$

*Proof.* Consider
$$\begin{aligned}
&\frac{1}{N} \, \sum_{k=0}^{N-1} \, \cos \left ( \frac{n (k+1/2) \pi}{N} \right ) \, \cos \left ( \frac{m (k+1/2) \pi}{N} \right ) \\
&=\frac{1}{N} \, \sum_{k=0}^{N-1} \, \left [ \frac{1}{2} \cos \left ( \frac{(k+1/2) (n+m) \pi}{N} \right ) + \frac{1}{2} \cos \left ( \frac{ (k+1/2) (n-m) \pi}{N} \right ) \right ].
\end{aligned}$$
Next, for any $l \in \mathbb{Z}$,
$$\begin{aligned}
\sum_{k=0}^{N-1} \, \cos \left ( \frac{(k+1/2) l \pi}{N} \right ) &= \frac{e^{i \frac{\pi}{2N} l}}{2} \, \sum_{k=0}^{N-1}e^{i \frac{\pi}{N} k \, l} + \frac{e^{-i \frac{\pi}{2N} l}}{2} \, \sum_{k=0}^{N-1}e^{-i \frac{\pi}{N} k \, l} \\
&= \left ( \frac{e^{i \frac{\pi}{2} l}}{2} + \frac{e^{-i \frac{\pi}{2} l}}{2} \right ) \, \frac{\sin(\pi l/2) }{\sin( \pi l /(2N) )} \\
&= \cos( \pi l /2) \, \frac{\sin(\pi l/2) }{\sin( \pi l /(2N) )} \\
&= \frac{1}{2} \, \frac{\sin(\pi l) }{\sin( \pi l /(2N) )},
\end{aligned}$$
where we used the Dirichlet kernel identity
$$\sum_{k=0}^{N-1} e^{i k \, x} = e^{i \frac{N-1}{2} x} \, \frac{\sin(N x/2) }{\sin( x/2)}$$
and the product-to-sum formula
$$\cos( \pi l /2) \, \sin(\pi l/2) = \frac{1}{2} \sin(\pi l) + \frac{1}{2} \sin(0)= \frac{1}{2} \sin(\pi l).$$
Therefore,
$$\begin{aligned}
&\frac{1}{N} \, \sum_{k=0}^{N-1} \, \left [ \frac{1}{2} \cos \left ( \frac{(k+1/2) (n+m) \pi}{N} \right ) + \frac{1}{2} \cos \left ( \frac{ (k+1/2) (n-m) \pi}{N} \right ) \right ] \\
&= \frac{1}{2} \, \left ( \frac{1}{2N}\, \frac{\sin(\pi (n+m)) }{\sin( \pi (n+m) /(2N) )} + \frac{1}{2N}\, \frac{\sin(\pi (n-m)) }{\sin( \pi (n-m) /(2N) )} \right ) \\
&= \frac{1}{2} \, \Delta(n+m) + \frac{1}{2} \, \Delta(n-m),
\end{aligned}$$
and the result follows after defining, for all $l \in \mathbb{Z}$,
$$\Delta(l) := \frac{1}{2N}\, \frac{\sin(\pi l) }{\sin( \pi l /(2N) )} = \left\{ \begin{array}{ll} 1 & \mbox{if } l \equiv 0 \ (\mathrm{mod}\ 4N), \\ -1 & \mbox{if } l \equiv 2N \ (\mathrm{mod}\ 4N), \\ 0 & \mbox{otherwise,} \end{array} \right.$$
where the ratio is understood in the limiting sense whenever $\sin(\pi l/(2N)) = 0$. ◻

[^1]: Qualcomm AI Research. Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc.

[^2]: To approximate the $L^\infty$ error, we compute the maximum error over a random sampling of 5000 points in $[-1,1]^D$.
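For readers who want to sanity-check the aliasing identity proved above, the short script below evaluates both sides numerically for a small $N$ and a range of integer frequencies. It is an illustrative sketch only; the choice of $N=8$ and the frequency range are arbitrary and are not taken from the paper.

```python
import numpy as np

def delta(l, N):
    """Aliasing symbol Delta(l) from the lemma above."""
    l = l % (4 * N)
    if l == 0:
        return 1.0
    if l == 2 * N:
        return -1.0
    return 0.0

def lhs(n, m, N):
    """Left-hand side: (1/N) sum_k cos(n(k+1/2)pi/N) cos(m(k+1/2)pi/N)."""
    k = np.arange(N)
    theta = (k + 0.5) * np.pi / N
    return np.mean(np.cos(n * theta) * np.cos(m * theta))

N = 8
for n in range(40):
    for m in range(40):
        rhs = 0.5 * delta(m + n, N) + 0.5 * delta(m - n, N)
        assert abs(lhs(n, m, N) - rhs) < 1e-12, (n, m)
print("aliasing identity verified for N = 8 and 0 <= n, m < 40")
```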
--- author: - Yoav Noah bibliography: - IEEEabrv.bib - refs.bib date: August 2021 title: Consensus and Shearing Optimization ---

# Introduction

## Motivation

In the era of big data, the problems we need to solve have become more complex, and this affects a huge number of areas such as internet applications, marketing, finance, network analysis, etc. Therefore, it is very important to be able to solve problems with a large number of features. Such problems typically have the following characteristics:

- since it is possible to store very detailed information, the data is often very high dimensional;
- the data is collected or stored in a distributed manner.

Many systems in engineering domains are composed of partially independent subsystems, and these subsystems frequently need to reach a common goal. Therefore, distributed optimization is commonly encountered in many applications, such as reaching an optimal set point or meeting a power demand at minimum cost. Since distributed optimization needs to achieve consensus, i.e., reach a common goal, a large amount of communication has to be exchanged between the subsystems. This costs time and bandwidth; hence, deep unfolding has emerged as a paradigm for deriving learned optimizers that are known to require far fewer iterations than conventional model-based implementations.

## Our Aims

- Empower and improve distributed optimization techniques by introducing deep unfolding.
- Extend the deep unfolding framework to the distributed domain.

Based on the above, we focus on distributed Alternating Direction Method of Multipliers (ADMM) optimization, a common iterative method for tackling convex optimization in a distributed manner.

# Preliminaries

## ADMM algorithm

ADMM is a simple but powerful algorithm that is well suited to distributed convex optimization. It solves convex optimization problems by breaking them into smaller sub-problems, each of which is easier to handle. In addition, it blends the benefits of dual decomposition and augmented Lagrangian methods for constrained optimization. Let us consider the case with a single global variable [@boyd2004convex] $$\label{eqn:ADMM} \mathop{\rm minimize}_x f(x) = \sum_{i=1}^{N} f_i(x).$$ In [\[eqn:ADMM\]](#eqn:ADMM){reference-type="eqref" reference="eqn:ADMM"}, $x \in {R^n}$ and the functions $f_i:{R^n}\to R\cup\{+\infty\}$ are convex. Our goal is to solve this optimization problem in cases where [\[eqn:ADMM\]](#eqn:ADMM){reference-type="eqref" reference="eqn:ADMM"} is very hard to solve directly. Therefore, we reformulate it and solve the constrained problem $$\label{eqn:ADMMcons} \begin{aligned} \mathop{\rm minimize} \quad & \sum_{i=1}^{N} f_i(x_i)\\ \textrm{subject to} \quad & x_i - z = 0, \; i=1,\dots,N \end{aligned}$$ where the $x_i\in{R^n}$ are local variables and $z$ is a common global variable. Since the constraint forces all local variables to agree (i.e., to be equal), this problem is called the global consensus problem. Problem [\[eqn:ADMMcons\]](#eqn:ADMMcons){reference-type="eqref" reference="eqn:ADMMcons"} is still hard to solve directly; therefore, we use the augmented Lagrangian method to solve the dual problem (i.e., ADMM for problem [\[eqn:ADMMcons\]](#eqn:ADMMcons){reference-type="eqref" reference="eqn:ADMMcons"}). See, e.g., [\[eqn:loss1\]](#eqn:loss1){reference-type="eqref" reference="eqn:loss1"}.
$$\label{eqn:loss1} L_\rho(x_1,\dots,x_N,z,y) = \sum_{i=1}^{N}\left(f_i(x_i)+y_{i}^{T}(x_i-z) + (\rho/2)\lVert x_i-z \rVert_2^2\right)$$

The resulting ADMM algorithm is the following: $$\begin{aligned} \label{eqn:ADMM1} x_i^{k+1} &:= \mathop{\arg \min}\limits_{x_i} \left(f_i(x_i)+y_{i}^{kT}(x_i-z^{k}) + (\rho/2)\lVert x_i-z^{k} \rVert_2^2\right)\\ \label{eqn:ADMM2} z^{k+1} &:= \frac{1}{N}\sum_{i=1}^{N}\left(x_{i}^{k+1}+ \frac{1}{\rho}\, y_{i}^{k}\right)\\ \label{eqn:ADMM3} y_i^{k+1} &:= y_i^{k}+\rho(x_{i}^{k+1}-z^{k+1}), \end{aligned}$$ where the first and last steps are carried out independently for each $i=1,\dots,N$. Note that the $z$ update is just the projection of $x^{k+1}+\frac{1}{\rho}y^{k}$ onto the constraint set $\{(x_1,\dots,x_N) \mid x_1=x_2=\dots=x_N\}$ (i.e., block-constant vectors). Furthermore, ADMM is an augmented Lagrangian-based algorithm that consists of only one loop.

**Optimality conditions:**

- Primal feasibility.
- Dual feasibility (with respect to $x$ and to $z$): one primal residual and two dual conditions.
- With the ADMM update ($x^{k+1}, z^{k+1}, y^{k+1}$), where $x^{k+1}$ and $z^{k+1}$ are primal variables and $y^{k+1}$ is the dual variable, the second dual feasibility condition is satisfied:
  - $g(z)$ -- projection;
  - $f(x)$ -- smooth minimization (if $f(x)$ is smooth).

If $f$ and $g$ are closed, proper, and convex, and the Lagrangian has a saddle point, then strong duality holds, i.e., the optimal values of the primal and dual problems are equal. In that case we have primal residual convergence, i.e., $r^{k} \rightarrow 0$, and objective convergence, i.e., $p^{k} \rightarrow p^{*}$, where $p^{k} = f(x^{k}) + g(z^{k})$. Here $s^{k+1} = \rho A^{T}B(z^{k+1} - z^{k})$ is the dual residual at iteration $k+1$ and $r^{k+1} = Ax^{k+1} + Bz^{k+1} - c$ is the primal residual at iteration $k+1$.

## Distributed ADMM Algorithm

In order to solve separable optimization problems in networks of interconnected nodes or agents, we use the Distributed Alternating Direction Method of Multipliers (D-ADMM). In a separable optimization problem there is a different cost function and a different constraint set at each node. The goal is to minimize the sum of all the cost functions while constraining the solution to lie in the intersection of all the constraint sets. The D-ADMM algorithm is a good fit for the following problems from signal processing and control: average consensus, compressed sensing, and support vector machines. Although D-ADMM is proven to converge when the network is bipartite or when all the functions are strongly convex, in practice convergence is observed even when these conditions are not met. In order to solve separable optimization problems we consider the following distributed formulation [@mota2013d] $$\label{eqn:DADMM} \begin{aligned} \mathop{minimize}\limits_{x} \quad & f_1(x) + f_2(x) + \cdots + f_{P}(x)\\ \textrm{subject to} \quad & {x\in X_1\cap X_2\cap \cdots\cap X_{P}} \end{aligned}$$ In [\[eqn:DADMM\]](#eqn:DADMM){reference-type="eqref" reference="eqn:DADMM"}, $x \in {R^n}$ is the global optimization variable, and $x^*$ denotes a solution of [\[eqn:DADMM\]](#eqn:DADMM){reference-type="eqref" reference="eqn:DADMM"}. Implicit in [\[eqn:DADMM\]](#eqn:DADMM){reference-type="eqref" reference="eqn:DADMM"} is a network with $P$ nodes, where only node $p$ can evaluate the cost function $f_p$ and the set $X_p$.
In addition, **each node can communicate with its neighbors and together they have to solve** [\[eqn:DADMM\]](#eqn:DADMM){reference-type="eqref" reference="eqn:DADMM"} **in a collaborative way**. Such a method is called a *distributed algorithm*, since there is no central node at a specific location that aggregates all the data.

*Formal problem statement:* Given a network with $P$ nodes, each $f_p$ and $X_p$ in [\[eqn:DADMM\]](#eqn:DADMM){reference-type="eqref" reference="eqn:DADMM"} is associated with the $p$th node of the network. We make the following assumptions:

1. Each $f_p : R^n \rightarrow R$ is a convex function over $R^n$, and each set $X_p$ is closed and convex.
2. Problem [\[eqn:DADMM\]](#eqn:DADMM){reference-type="eqref" reference="eqn:DADMM"} is solvable.
3. The network is connected and does **not** vary with time (it is time invariant).
4. A coloring scheme of the network is available.

Assumption 2 means that [\[eqn:DADMM\]](#eqn:DADMM){reference-type="eqref" reference="eqn:DADMM"} has **at least** one solution $x^*$. In assumption 3, a network is connected if there is a path between every pair of nodes. An assignment of numbers to the nodes of the network such that no adjacent nodes have the same number is called a coloring scheme. We call these numbers colors, and they will be used to set up the distributed algorithm. Under the assumptions above, we solve the following problem: *given a network, design a distributed algorithm that solves* [\[eqn:DADMM\]](#eqn:DADMM){reference-type="eqref" reference="eqn:DADMM"}. The D-ADMM solution of this problem relies on the ADMM algorithm; since ADMM is not directly applicable to [\[eqn:DADMM\]](#eqn:DADMM){reference-type="eqref" reference="eqn:DADMM"}, we first need to reformulate the problem. The reformulation takes advantage of the node coloring. Before we reformulate the problem, we introduce some notation.

- *Network notation:* We think of the network as an undirected graph $G = (V,\varepsilon)$, where $V = \{1, 2,\dots, P\}$ is the set of all nodes and $\varepsilon \subseteq V \times V$ is the set of all edges, with cardinalities $P$ and $E$, respectively. An edge is represented by $(i,j)$ with $i<j$; if $(i,j) \in \varepsilon$, then nodes $i$ and $j$ can exchange data with each other. We define the neighborhood $N_p$ of node $p$ as the set of nodes connected to node $p$, excluding $p$ itself. The cardinality of this set is the degree of node $p$, $D_p := |N_p|$.

- *Coloring:* We assume that the network is given together with a coloring scheme of $C$ colors. The set of nodes with color $c$ is denoted by $C_c$, for $c = 1,\dots,C$; with a slight abuse of notation, its cardinality is also denoted by $C_c$. Note that $\{ C_c \}_{c=1}^C$ partitions $V$.

*Problem manipulations:* Without loss of generality, we assume that the nodes are ordered such that the first $C_1$ nodes have color 1, the next $C_2$ nodes have color 2, and so on, i.e., $C_1=\{1, 2, \dots, C_1 \}$, $C_2 = \{C_1 + 1, C_1 + 2, \dots, C_1 + C_2 \}$, $\dots$ We split problem [\[eqn:DADMM\]](#eqn:DADMM){reference-type="eqref" reference="eqn:DADMM"} by assigning a copy of the global variable $x$ to each node and then constraining all copies to be equal in an edge-based way.
Let $x_p \in R^n$ denote the copy held by node $p$ and rewrite [\[eqn:DADMM\]](#eqn:DADMM){reference-type="eqref" reference="eqn:DADMM"} as $$\label{eqn:ADMMmani} \begin{aligned} \mathop{minimize}\limits_{\Bar{x}=(x_1,\dots,x_P)} \quad & f_1(x_1) + f_2(x_2) + \cdots + f_P(x_P)\\ \textrm{subject to} \quad & {x_p\in X_p, \; p = 1,\dots,P},\\ \quad & {x_i = x_j, \; (i,j) \in \varepsilon} \end{aligned}$$ where $\Bar{x} = (x_1,\dots, x_P) \in (R^n)^P$ is the optimization variable. Problem [\[eqn:ADMMmani\]](#eqn:ADMMmani){reference-type="eqref" reference="eqn:ADMMmani"} is no longer coupled by a global variable as [\[eqn:DADMM\]](#eqn:DADMM){reference-type="eqref" reference="eqn:DADMM"} is. Instead, [\[eqn:ADMMmani\]](#eqn:ADMMmani){reference-type="eqref" reference="eqn:ADMMmani"} enforces $x_i = x_j$ for all pairs $(i,j) \in \varepsilon$. Furthermore, since the network is connected (assumption 3), these equations impose that all the copies are equal. We can write these constraints in the more compact form $(B^T \otimes I_n)\Bar{x} = 0$, where $B \in R^{P \times E}$ is the node-arc incidence matrix of the graph, $I_n$ is the identity matrix in $R^n$, and $\otimes$ is the Kronecker product. Each column of $B$ is associated with an edge $(i,j) \in \varepsilon$ and has $1$ and $-1$ in the $i$th and $j$th entries, respectively, while the remaining entries are zero. The numbering assumption yields a natural partition of $B$ as $[B_1^T \quad B_2^T \quad \cdots \quad B_C^T]^T$, where the rows of $B_c$ are associated with the nodes of color $c$. We partition $\Bar{x}$ similarly as $\Bar{x} = (\Bar{x}_1, \dots, \Bar{x}_C)$, where $\Bar{x}_c \in (R^n)^{C_c}$ collects the copies of all nodes with color $c$. Thus we can rewrite [\[eqn:ADMMmani\]](#eqn:ADMMmani){reference-type="eqref" reference="eqn:ADMMmani"} as $$\label{eqn:ADMMsplit} \begin{aligned} \mathop{minimize}\limits_{\Bar{x}_1, \dots, \Bar{x}_C} \quad & \sum_{c=1}^C \sum_{p \in C_c} f_p(x_p)\\ \textrm{subject to} \quad & {\Bar{x}_c\in \Bar{X}_c, \; c = 1,\dots,C},\\ \quad & {\sum_{c=1}^C (B_c^T \otimes I_n)\Bar{x}_c = 0} \end{aligned}$$ where $\Bar{X}_c := \prod_{p \in C_c}X_p$. In order to solve problem [\[eqn:ADMMsplit\]](#eqn:ADMMsplit){reference-type="eqref" reference="eqn:ADMMsplit"} we use the extended ADMM, explained next.

*Extended ADMM:* The extended ADMM is a generalization of the ADMM. Given $C$ functions $g_c$, $C$ sets $X_c$, and $C$ matrices $A_c$, all with the **same** number of rows, the extended ADMM solves $$\label{eqn:EADMM} \begin{aligned} \mathop{minimize}\limits_{x=(x_1,\dots,x_C)} \quad & \sum_{c=1}^C g_c(x_c)\\ \textrm{subject to} \quad & {x_c\in X_c, \; c = 1,\dots,C},\\ \quad & {\sum_{c=1}^C A_cx_c = 0} \end{aligned}$$ where $x := (x_1, \dots, x_C)$ is the optimization variable.
The extended ADMM consists of iterating on $k$: $$\begin{aligned} \label{eqn:EADMM1} x_1^{k+1} &= \mathop{\arg \min}\limits_{x_1 \in \mathcal{X}_1} L_{\rho}(x_1, x_2^k, \dots, x_C^k; \lambda^k)\\ \label{eqn:EADMM2} x_2^{k+1} &= \mathop{\arg \min}\limits_{x_2 \in \mathcal{X}_2} L_{\rho}(x_1^{k+1}, x_2, x_3^k, \dots, x_C^k; \lambda^k)\\\vdots\\ \label{eqn:EADMM3} x_C^{k+1} &= \mathop{\arg \min}\limits_{x_C \in \mathcal{X}_C} L_{\rho}(x_1^{k+1}, x_2^{k+1}, \dots, x_C; \lambda^k)\\ \label{eqn:EADMM4} \lambda^{k+1} &= \lambda^k + \rho \sum_{c=1}^C A_cx_c^{k+1}\end{aligned}$$ where $L_{\rho}(x;\lambda) = \sum_{c=1}^C (g_c(x_c) + \lambda^TA_cx_c) + (\rho/2)\lVert \sum_{c=1}^C A_cx_c \rVert_2^2$ is the augmented Lagrangian of [\[eqn:EADMM\]](#eqn:EADMM){reference-type="eqref" reference="eqn:EADMM"}, $\lambda$ is the dual variable, and $\rho$ is a positive parameter. When $C = 2$, [\[eqn:EADMM1\]](#eqn:EADMM1){reference-type="eqref" reference="eqn:EADMM1"} - [\[eqn:EADMM4\]](#eqn:EADMM4){reference-type="eqref" reference="eqn:EADMM4"} become the ordinary ADMM, which converges under very mild assumptions. When $C>2$, the proof of convergence holds only when all the functions $g_c$ are strongly convex, and the following theorem applies.

*Theorem 1:* Let $g_c : R^{n_c} \rightarrow R$ be a convex function over $R^{n_c}$, $X_c \subseteq R^{n_c}$ a **closed convex set**, and $A_c$ an $m \times n_c$ matrix, for $c = 1, \dots, C$. Assume [\[eqn:EADMM\]](#eqn:EADMM){reference-type="eqref" reference="eqn:EADMM"} is solvable and that either

1. $C = 2$ and each $A_c$ has full column rank,
2. or $C \geq 2$ and each $g_c$ is **strongly convex**.

Then, the sequence $\{ (x_1^k, \dots, x_C^k, \lambda^k) \}$ generated by [\[eqn:EADMM1\]](#eqn:EADMM1){reference-type="eqref" reference="eqn:EADMM1"} - [\[eqn:EADMM4\]](#eqn:EADMM4){reference-type="eqref" reference="eqn:EADMM4"} converges to $(x_1^*, \dots , x_C^*, \lambda^*)$, where $(x_1^*, \dots, x_C^*)$ solves [\[eqn:EADMM\]](#eqn:EADMM){reference-type="eqref" reference="eqn:EADMM"} and $\lambda^*$ solves the dual problem of [\[eqn:EADMM\]](#eqn:EADMM){reference-type="eqref" reference="eqn:EADMM"}: $\max_{\lambda} G_1(\lambda)+ \cdots + G_C(\lambda)$, where $G_c(\lambda) = \inf_{x_c \in X_c} (g_c(x_c) + \lambda^TA_cx_c)$, for $c = 1, \dots , C$. The proof of case 1 of the theorem can be found in [@mota2011proof].

D-ADMM is asynchronous in the sense that **nodes operate in a color-based order, with nodes of the same color operating in parallel**. Nodes with the same color are not neighbors, so they can indeed work in parallel; at first sight, however, enforcing the color-based order would seem to require some kind of coordination. In fact, no coordination is needed, since each node knows its own color and the colors of its neighbors.

# Undirected graph properties

1. Degree distribution $P(k)$ of each node:
   - $N_k$ = number of nodes with degree $k$;
   - normalized histogram: $P(k) = N_k/N$.
2. Define a **path**, i.e., a sequence of nodes in which each node is linked to the next one:
   - $P_n = \{ i_0, i_1, \dots, i_n \}$; $P_n = \{ (i_0, i_1), (i_1,i_2), (i_2, i_3), \dots, (i_{n-1}, i_n) \}$;
   - a path can intersect itself and pass through the same edge multiple times.

# Random graph models

## Erdős-Rényi model

- $G_{n,m}$ is the set of all graphs with $n$ vertices and $m$ edges.
To generate a random graph with uniform distribution over the set $G_{n,m}$, we throw down $m$ edges between pairs of nodes chosen at random from $n$ initially unconnected nodes.

- $G_{n,p}$ is the set of all graphs on $n$ vertices in which each pair of vertices is connected independently with probability $p$. To generate a random graph with uniform distribution from the set $G_{n,p}$, we take $n$ initially unconnected nodes and go through each pair of them, joining the pair with an edge with probability $p$, or leaving it unconnected with probability $1-p$.

For our purposes the two models are essentially equivalent in the limit of large $n$. Since $G_{n,p}$ is simpler to work with than $G_{n,m}$, we concentrate on $G_{n,p}$. The random graph $G_{n,p}$ has a binomial degree distribution: the probability $p_k$ that a randomly chosen node is connected to exactly $k$ others is $$\label{eqn:Bern} \begin{aligned} p_{k} = {n \choose k}p^{k}(1-p)^{n-k}. \end{aligned}$$ In the limit $n \rightarrow \infty$ with the mean degree $z = np$ held fixed, this becomes $$\label{eqn:Bernlim} \begin{aligned} p_{k} = \lim_{n \rightarrow \infty} \frac{n^k}{k!}\left(\frac{p}{1-p}\right)^{k}(1-p)^n = \frac{z^{k}e^{-z}}{k!}. \end{aligned}$$ This is a Poisson distribution in $k$; it is sharply peaked around the mean degree $z$ and has a tail that decays as $1/k!$.

## Watts-Strogatz model

## Geometric model

## Barabasi model

## Lattice model

# The optimization problem

To test D-ADMM we consider the problem of finding a sparse solution of a linear system, which is important in many areas such as statistics, compressed sensing, and cognitive radio. We use LASSO as the distributed optimization problem to be solved: $$\label{eqn:LASSO} \begin{aligned} \mathop{\rm minimize}_{x} \quad & \lVert \mathbf{x} \rVert_1\\ \textrm{subject to} \quad & \lVert \mathbf{Ax-b} \rVert \leq \sigma \end{aligned}$$ where $A \in \mathbf{R}^{m \times n}$, the vector $b \in \mathbf{R}^{m}$, and the parameter $\sigma > 0$ are given. We solve LASSO with a column partition, since with a row partition it is not trivial to cast LASSO in the form of [\[eqn:DADMM\]](#eqn:DADMM){reference-type="eqref" reference="eqn:DADMM"}. Each node $p$ therefore stores the $p$th block of columns of the matrix $A$, which is known only to node $p$. In a column partition we assume that all nodes know the vector $b$, the parameter $\sigma$, and the number of nodes $P$. To write LASSO in the form of [\[eqn:DADMM\]](#eqn:DADMM){reference-type="eqref" reference="eqn:DADMM"} we need to use duality. However, since the objective of LASSO is not strictly convex, we cannot recover the primal solution of LASSO from the dual problem; thus we regularize LASSO to make it strictly convex: $$\label{eqn:DualLASSO} \begin{aligned} \mathop{\rm minimize}_{x} \quad & \lVert \mathbf{x} \rVert_1 + \frac{\delta}{2}\lVert \mathbf{x} \rVert^2\\ \textrm{subject to} \quad & \lVert \mathbf{Ax-b} \rVert \leq \sigma \end{aligned}$$ where $\delta > 0$ is small enough. One of the conditions is that the objective is linear and the constraint set is the intersection of a linear system with a closed polyhedral cone.
We can rewrite [\[eqn:DualLASSO\]](#eqn:DualLASSO){reference-type="eqref" reference="eqn:DualLASSO"} as: $$\label{eqn:FinalLASSO} \begin{aligned} \mathop{\rm minimize}_{x_1,\dots,x_P,\,y} \quad & \sum_{p=1}^{P} \left(\lVert x_p \rVert_1 + \frac{\delta}{2}\lVert x_p \rVert^2\right)\\ \textrm{subject to} \quad & \lVert \mathbf{y} \rVert \leq \sigma\\ \quad & y = \sum_{p=1}^{P} A_px_p - b \end{aligned}$$ If we dualize only the last constraint of [\[eqn:FinalLASSO\]](#eqn:FinalLASSO){reference-type="eqref" reference="eqn:FinalLASSO"}, we get the dual problem of minimizing $\sum_{p=1}^{P} g_p(\lambda)$, where $g_p(\lambda) := \frac{1}{P}\left(b^T\lambda + \sigma \lVert \lambda \rVert\right) - \inf_{x_p}\left(\lVert x_p \rVert_1 + (A^T_p \lambda)^{T}x_p + \frac{\delta}{2} \lVert x_p \rVert^2\right)$ is the function associated with node $p$.

*Experimental Setup:* We generated 5 networks with $P = 50$. $A$ is a $500 \times 2000$ matrix with i.i.d. Gaussian entries with zero mean and variance $\frac{1}{\sqrt{m}}$, and $b$ is a vector of size 500. Each node of the network stores $m_p = \frac{m}{P}$ rows of $A$; since $P=50$, the value of $m_p$ is an integer. In addition, we choose $\sigma = 0.1$ and $\delta = 10^{-2}$.

# Compressed sensing

A sparse representation of a signal means that we can represent a signal of length $n$ using only $k \ll n$ nonzero coefficients, where the number of nonzero entries is counted by $$\label{eqn:zeroNorm} \begin{aligned} \lVert \mathbf{x} \rVert_0 = \left|\{ i: x_i \neq 0 \}\right|. \end{aligned}$$

# Distributed Support Vector Machines for DADMM

$$\label{eqn:SVM} \begin{aligned} \mathop{\rm minimize}_{s,r, z} \quad & \frac{1}{2}\lVert \mathbf{s} \rVert^{2} + \beta \mathbf{1}^{T}_{K}z \\ \textrm{subject to} \quad & y_{k}(s^{T}x_{k}-r) \geq 1-z_{k}, \quad k=1, \dots, K\\ \quad & z \geq 0 \end{aligned}$$
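To make the global consensus iterations [\[eqn:ADMM1\]](#eqn:ADMM1){reference-type="eqref" reference="eqn:ADMM1"} - [\[eqn:ADMM3\]](#eqn:ADMM3){reference-type="eqref" reference="eqn:ADMM3"} concrete, the sketch below runs them on a toy instance with quadratic local costs $f_i(x) = \frac{1}{2}\lVert A_i x - b_i \rVert_2^2$. The problem sizes, the random data, and the penalty $\rho$ are illustrative assumptions and are not taken from the experimental setup above.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, m = 5, 10, 20          # nodes, variable dimension, local measurements per node
rho = 1.0

# local quadratic costs f_i(x) = 0.5 * ||A_i x - b_i||^2 (illustrative data)
A = [rng.standard_normal((m, n)) for _ in range(N)]
b = [rng.standard_normal(m) for _ in range(N)]

x = [np.zeros(n) for _ in range(N)]
y = [np.zeros(n) for _ in range(N)]   # dual variables
z = np.zeros(n)                       # global consensus variable

for k in range(200):
    # x-update: argmin_x f_i(x) + y_i^T (x - z) + (rho/2) ||x - z||^2
    for i in range(N):
        H = A[i].T @ A[i] + rho * np.eye(n)
        g = A[i].T @ b[i] - y[i] + rho * z
        x[i] = np.linalg.solve(H, g)
    # z-update: average of x_i + (1/rho) y_i (projection onto the consensus set)
    z = np.mean([x[i] + y[i] / rho for i in range(N)], axis=0)
    # dual update
    for i in range(N):
        y[i] = y[i] + rho * (x[i] - z)

# compare with the centralized solution of min_x sum_i f_i(x)
A_all, b_all = np.vstack(A), np.concatenate(b)
x_star = np.linalg.lstsq(A_all, b_all, rcond=None)[0]
print("consensus error:", max(np.linalg.norm(x[i] - z) for i in range(N)))
print("distance to centralized solution:", np.linalg.norm(z - x_star))
```

Since each $f_i$ here is quadratic, the local $x$-updates reduce to small linear solves; for general convex $f_i$ they would be local optimization problems, which is exactly what makes the scheme attractive in a distributed setting.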
--- abstract: | We propose a method for providing communication network infrastructure in autonomous multi-agent teams. In particular, we consider a set of communication agents that are placed alongside regular agents from the system in order to improve the rate of information transfer between the latter. In order to find the optimal positions to place such agents, we define a flexible performance function that adapts to network requirements for different systems. We provide an algorithm based on shadow prices of a related convex optimization problem in order to drive the configuration of the complete system towards a local maximum. We apply our method to three different performance functions associated with three practical scenarios in which we show both the performance of the algorithm and the flexibility it allows for optimizing different network requirements. author: - "Ignacio Boero$^{1, 2}$, Igor Spasojevic$^{1}$, Mariana del Castillo$^{2}$ George Pappas$^{1}$, Vijay Kumar$^{1}$, Alejandro Ribeiro$^{1}$ [^1]" bibliography: - papers.bib title: Navigation with shadow prices to optimize multi-commodity flow rates --- # Introduction Autonomous multi-agent systems have lately found applications in numerous challenging tasks. Examples include a team of robots mapping an unknown or disaster-stricken environment, performing surveillance missions, and carrying out search and rescue operations. A key hurdle lies in the fact that individual agents have to make decisions with outdated or even missing information from their teammates. This is due to the inherent problem that communications cannot be instant. To overcome this issue, a large body of research has been done on developing algorithms to take optimal decisions with the available information. A parallel line of works seeks to improve the communication network infrastructure formed by a multi agent system, with the aim of making communications faster and more reliable. This paper belongs to the second line of research. A common approach to solving this problem involves deploying a second team of agents whose purpose is to provide network infrastructure for the first team [@mox2020mobile; @4543418; @stephan2017concurrent; @fink2013robust; @6213085]. The problem then amounts to positioning the newly deployed agents in order to allow the greatest amount of communication flow. Previous work has predominantly focused on maximizing the connectivity of the graph defined by the spatial configuration of agents [@zavlanos2011graph; @kim2005maximizing; @zavlanos2005controlling; @mox2022learning]. This line of works relies heavily on quantities from algebraic graph theory that capture a notion of connectivity. Indeed, one common figure of merit for the connectivity of a graph is the second eigenvalue of its Laplacian. The latter can be computed efficiently, and many approaches solve the problem by finding methods to maximize the second eigenvalue. Although such a notion of connectivity is attractive in that it captures a complex notion with a single number, we believe that resulting solutions often suffer from lack of flexibility to adapt to different communication requirements that a multi agent system needs. In our work, we consider a solution that captures a more complete notion than only of connectivity. This has been previously done by [@6213085], nonetheless we differ in that we depart from the assumption that we know the minimum desired communication rate for each agent, which many times is not the case. 
This modifies the framework from restricting agents movement to feasible configurations to moving in directions that maximize an objective. In particular, we define the figure of merit for a given configuration of agents using the solution of a Multi-Commodity Flow Problem. Multi-Commodity Flow Problems (MCFP) are widely used in many areas [@feo1989flight; @foulds1981multi], including network modeling [@assad1978multicommodity; @ali1984multicommodity]. It consists of a graph and a set of commodities, where a commodity can model the "supply" of information from some nodes, usually named source nodes, to other "demand" nodes, named sink nodes. The restrictions on this problem are that there is a cost associated for a commodity to flow from one node to another, and edges between nodes have limitations on how much cost they can withstand. Then, the problem is to maximize a utility function that increases with the amount of commodities that can be pushed from sources to sinks, while respecting given edge capacities. This model bears a straightforward translation to our problem. In our setting, the graph is defined by the spatial configuration of the agents, the commodities are information being transmitted between agents, and the capacity constraints between nodes are the link capacities between agents. Then, the performance of a configuration is given by the value of the utility function for the solution of that MCFP. The flexibility of this figure of merit lies in the fact that different agents can be selected as source or sink agents. Furthermore, the options for the utility function can capture a range of different modelling aspects that are more refined than mere graph connectivity. Having defined the figure of merit, we present an algorithm that from random initial positions moves select agents to positions that locally maximize the utility of the MCFP. When moving an agent, we perturb the capacity constraints of the MCFP, as we increase the link between the moved agent and the agents to whom the distance decreased, and vice-versa. Then, the dilemma is to choose which link to increase in order to maximize the value of the MCFP. For doing that we rely on perturbation theory, which establishes that dual variables associated with a constraint are closely related to the gain of relaxing that constraint [@boyd2004convex]. We then do numerical experiments which demonstrate the efficacy of the algorithm, as it consistently increases the utility function of the MCFP on all experiments. Ultimately, we show the flexibility of this approach, illustrating how different practical scenarios can lead to different definitions for the MCFP, which in turn leads to different solutions of the algorithm. Regarding the rest of the paper, in section II we give the mathematical formulation for a general figure of merit defined by a MCFP. In section III we explain the proposed method, and provide a mathematical justification. In section IV we apply the algorithm to different use cases involving various practically-motivated static scenarios, and we finish with a test of our approach on a dynamic scenario. Finally, in section V we summarize our findings and propose future work. # Dynamic Multi-commodity Flow Problem We consider a maximum multi-commodity flow problem on a dynamic graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, with vertex set $\mathcal{V}$ and edge set $\mathcal{E}$. Vertices in $\mathcal{V}$ correspond to agents in a robot team whose configurations determine capacities of edges in $\mathcal{E}$. 
Letting $|\mathcal{V}| = N$, we identify $\mathcal{V}$ with $[N] = \{1,2,...,N\}$, and denote the configuration of agent $i \in [N]$ by $\mathbf{x}_i \in \mathbb{R}^2$. The capacity function $$\begin{aligned} c : \mathbb{R}^2 \times \mathbb{R}^2 & \rightarrow \mathbb{R}_{++} \\ \end{aligned}$$ constrains the maximum amount of cumulative flow along any edge $(i,j) \in \mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ by $c(\mathbf{x}_i, \mathbf{x}_j)$. Here, $c$ is a positive differentiable function, with $c(\mathbf{x},\mathbf{y})$ decreasing in $|| \mathbf{x} - \mathbf{y} ||_2$ for fixed $\mathbf{x}$ (or $\mathbf{y}$) and fixed $(\mathbf{x}-\mathbf{y})/||\mathbf{x}-\mathbf{y}||_2$. Letting $\mathcal{I} \subseteq \mathcal{V}$ denote agents with controllable configurations, we solve: $$\tag{DMCF} \max_{(\mathbf{x}_i)_{i \in \mathcal{I}}} \Phi(\mathbf{x}_{1:N}) \label{eqn:global_problem}$$ where $\Phi$ represents the value of the following *generalized* multi-commodity flow problem $$\tag{P-MCF} \begin{aligned} &\Phi(\mathbf{x}_{1:N}) \ = \max_{\textbf{r} \in[0,1]^{N \times N \times K}, \ \mathbf{a} \in \mathbb{R}_+^{N \times K}} \quad \mathcal{U}\left((a_i^k)_{\ k \in \mathcal{A}, \ i \in S_k} \right)\\ s.t. & \\ & a^k_i \leq \sum_{j=1}^{N} r_{i,j}^{(k)} - \sum_{j=1}^{N} r_{j,i}^{(k)} \quad \forall k \in \mathcal{A}, \ \forall i \in \mathcal{S}_k \\ & \sum_{j=1}^{N} r_{i,j}^{(k)} - \sum_{j=1}^{N} r_{j,i}^{(k)} = 0 \quad \forall k \in \mathcal{A}, \ i \in \mathcal{I} \\ &\sum_{k \in \mathcal{A}} r_{i,j}^{(k)} \leq c(\mathbf{x}_i, \mathbf{x}_j) \quad \forall i,j \in \mathcal{V}. \\ \end{aligned} \label{eqn:P-MCF}$$ In the latter, $\mathcal{A} = \mathcal{V} \setminus \mathcal{I}$ represents the subset of agents that need to exchange information. We have one commodity for each agent in $\mathcal{A}$, indexed by $k \in \mathcal{A}$, and let $K := |\mathcal{A}|$ denote the total number of commodities. We also let $I := |\mathcal{I}|$, and note that $N = K + I$. For commodity $k$, agents in $S_k \subseteq \mathcal{A}$ act as source nodes while the rest of the agents in $\mathcal{A}$ act as sinks. This is (implicitly) encoded by the first constraint of Problem [\[eqn:P-MCF\]](#eqn:P-MCF){reference-type="ref" reference="eqn:P-MCF"}. In any flow, agents in $\mathcal{I}$ are relay nodes. They neither generate nor accumulate information, as described by the second constraint of the problem. The edge capacity between any two agents represents the maximum rate at which they can send information between themselves across all flows. The latter is captured by the third constraint of Problem [\[eqn:P-MCF\]](#eqn:P-MCF){reference-type="ref" reference="eqn:P-MCF"}. Ultimately, the objective of [\[eqn:P-MCF\]](#eqn:P-MCF){reference-type="ref" reference="eqn:P-MCF"} is a general one in that we only require concavity of the global *utility* function $\mathcal{U}$. Classical maximum multi-commodity flow problems focus on cases where $\mathcal{U}$ is the sum of $(a^{k}_i)_{k \in \mathcal{A}, i \in \mathcal{S}_k}$. The present formulation allows us to generalize this to other measures of team performance, as we will see in section IV. Our problem is dynamic due to edge capacities being a function of the positions of the agents, which can be varied. This differs from usual multi-commodity flow problems in which the capacity constraints are fixed, and only the flows can be optimized. 
The set of decision variables therefore consists of the positions of agents in $\mathcal{I}$, denoted by $(\mathbf{x}_i)_{i \in \mathcal{I}}$, as well as $(r_{i,j}^{(k)})_{i,j \in \mathcal{V}, k \in \mathcal{A}}$, the flow of each commodity $k$ transmitted between each pair of agents $(i,j)$, and $(a^{k}_i)_{k \in \mathcal{A}, i \in \mathcal{S}_k}$, the amount of commodity $k$ generated by agent $i$. We also introduce vector notation for the latter two variables, where $\mathbf{r} \in [0,1]^{N\times N\times K}$ collects $r_{ij}^{(k)}$ in ascending lexicographic order, first by source $i$, then by sink $j$, and finally by commodity $k$, while $\mathbf{a} \in \mathbb{R}_+^{N\times K}$ arranges $a^{k}_i$ similarly, first by agent $i$ and then by commodity $k$.

# Shadow Price Ascent Algorithm

## Proposed Algorithm

As $c(\mathbf{x},\mathbf{y})$ cannot be a concave function, [\[eqn:global_problem\]](#eqn:global_problem){reference-type="eqref" reference="eqn:global_problem"} is in general a non-convex problem. As a result, we develop an algorithm for finding a first-order stationary point. We use an iterative method where at each step we solve the dual problem of [\[eqn:P-MCF\]](#eqn:P-MCF){reference-type="ref" reference="eqn:P-MCF"}, and update the positions of $(\mathbf{x}_{i})_{i \in \mathcal{I}}$ along a direction of local increase. The dual of [\[eqn:P-MCF\]](#eqn:P-MCF){reference-type="ref" reference="eqn:P-MCF"} is given by: $$\tag{D-MCF} \begin{aligned} D^* (\mathbf{x}_{1:N}) &=\min_{\substack{\bm{\lambda} \in \mathbb{R}_+^{|S_k|\times K}, \\ \bm{\mu} \in \mathbb{R}_+^{N\times N}, \\ \bm{\nu} \in \mathbb{R}^{K \times I}}} D(\bm{\lambda},\bm{\mu},\bm{\nu}, \mathbf{x}_{1:N}) \\ &=\min_{\substack{\bm{\lambda} \in \mathbb{R}_+^{|S_k|\times K}, \\ \bm{\mu} \in \mathbb{R}_+^{N\times N}, \\ \bm{\nu} \in \mathbb{R}^{K \times I}}} \ \max_{\substack{\textbf{r} \in[0,1]^{N \times N \times K}, \\ \mathbf{a} \in \mathbb{R}_+^{N \times K}}} \ L(\bm{\lambda},\bm{\mu},\bm{\nu}, \mathbf{a},\mathbf{r},\mathbf{x}_{1:N}), \end{aligned} \label{eqn:D-MCF}$$ where $L$ is the Lagrangian of [\[eqn:P-MCF\]](#eqn:P-MCF){reference-type="eqref" reference="eqn:P-MCF"} defined as: $$\begin{aligned} L(\bm{\lambda},\bm{\mu},\bm{\nu},\mathbf{a},\mathbf{r},\mathbf{x}_{1:N}) = & \quad \mathcal{U}\left((a_i^k)_{\ k \in \mathcal{A}, \ i \in S_k} \right) \\ &-\sum_{i\in \mathcal{S}_k,k \in \mathcal{A}} \lambda^{k}_{i} \left[a_{i}^{k} - \Big(\sum_{j=1}^{N} r_{i,j}^{(k)} - \sum_{j=1}^{N} r_{j,i}^{(k)}\Big)\right]\\ &-\sum_{i \in \mathcal{I},k \in \mathcal{A}} \nu^{k}_{i} \left[\sum_{j=1}^{N} r_{i,j}^{(k)} - \sum_{j=1}^{N} r_{j,i}^{(k)} \right]\\ &-\sum_{i,j \in \mathcal{V}} \mu_{i,j}\left[\Big(\sum_{k \in \mathcal{A}} r_{i,j}^{(k)}\Big) - c(\mathbf{x}_i, \mathbf{x}_j)\right]. \end{aligned} \label{eqn:L-MCMF}$$ The dual variables $\bm{\lambda} \in \mathbb{R}_+^{|S_k|\times K}$, $\bm{\nu} \in \mathbb{R}^{K \times I}$, and $\bm{\mu} \in \mathbb{R}_+^{N\times N}$ are associated with constraints one, two, and three, respectively. In light of the fact that $c > 0$, it is not difficult to show that [\[eqn:P-MCF\]](#eqn:P-MCF){reference-type="eqref" reference="eqn:P-MCF"} is a convex program for which Slater's condition holds. In particular, it has zero duality gap. Solving the dual problem [\[eqn:D-MCF\]](#eqn:D-MCF){reference-type="ref" reference="eqn:D-MCF"} is therefore equivalent to solving the primal. Nevertheless, for our purposes it is more convenient, as it allows us to readily extract the sensitivity of the objective of the team problem [\[eqn:global_problem\]](#eqn:global_problem){reference-type="ref" reference="eqn:global_problem"} with respect to the configurations of the controllable agents.\
**Proposition 1.** Let $\{\mu_{ij}^*(\mathbf{x}_{1:N})\}_{i,j \in \mathcal{V}}$ be the optimal edge multipliers in the solution to [\[eqn:D-MCF\]](#eqn:D-MCF){reference-type="eqref" reference="eqn:D-MCF"} for a given set of positions $\mathbf{x}_{1:N}$. Then $\partial \Phi / \partial X \in (\mathbb{R}^2)^{\mathcal{I}}$ given by $$\left. \left( \frac{\partial \Phi}{\partial X} \right)_i \right|_{\mathbf{x}_{1:N}} = \sum_{j \in \mathcal{V}} (\mu_{ij}^*(\mathbf{x}_{1:N}) + \mu_{ji}^*(\mathbf{x}_{1:N}))\nabla_{\mathbf{x}_i} c(\mathbf{x}_i,\mathbf{x}_j)|_{\mathbf{x}_{1:N}} \label{eqn:subG-P}$$ with $i \in \mathcal{I}$ is a direction of local increase for [\[eqn:global_problem\]](#eqn:global_problem){reference-type="ref" reference="eqn:global_problem"}.\
$X_{\mathcal{I}} \gets X_{\mathcal{I}}^{0}$\
$\Phi_{0} \gets -\infty, \ \Delta \Phi \gets +\infty$\
$(C)_{ij} \gets c(x_i,x_j) \ \forall x_i,x_j \in X_{\mathcal{V}}$\
$(\partial C)_{ij} \gets \nabla_{x_i} c(x_i,x_j) \ \forall x_i,x_j \in X_{\mathcal{V}}$\
$t \gets 0$\
Our approach is summarized in Algorithm [\[alg:alg2\]](#alg:alg2){reference-type="ref" reference="alg:alg2"}. From a high level, it is a gradient ascent method. Lines $1-5$ initialize all variables. Thereafter, until the change in the team objective function $\Phi$ drops below the specified tolerance value $tol$, the algorithm calculates a local direction of ascent by using the dual variables in an optimal solution to problem [\[eqn:D-MCF\]](#eqn:D-MCF){reference-type="ref" reference="eqn:D-MCF"} obtained by the interior point solver MOSEK [@mosek]. We refer to the latter sub-procedure as "Solve-DMCF"; it takes as input a graph with fixed edge capacities, and outputs the optimal flows along every edge, the team value function, as well as the optimal dual (Lagrangian) variables for every constraint. Lines $10-12$ then leverage the dual variables to perform the gradient ascent step using a stepsize $\alpha$ that is tuned for the family of problems at hand.

## Intuition behind the direction of local increase

Any algorithm that seeks to maximize the cumulative capacity of the network connecting the agents faces a trade-off. In order to maximize the rate of information transfer between agents $i$ and $j$, it needs to update the position of agent $i$ in the direction of the gradient of $c(\mathbf{x}_i,\mathbf{x}_j)$ w.r.t $\mathbf{x}_i$.
Nonetheless, it is easy to see that this heuristic can be problematic, as maximizing the quality of the connection between one pair of agents could easily diminish the quality between another. A simplistic approach would be to move agent $i$ along the average of the gradients of $c(\mathbf{x}_i,\mathbf{x}_j)$ w.r.t $\mathbf{x}_i$ across all $j$. However, this approach would miss important information regarding the complexity of how information flows through the network. In particular, if a link is a bottleneck in the network, it would be desirable to increase its capacity before that of others. Our algorithm attacks this problem via leveraging a well known concept on constrained optimization - the sensitivity of the perturbation function. In a constrained optimization problem in which strong duality holds, the dual variable associated with a constraint can be seen as the sensitivity of the primal function to perturbations of the value of the bound in that constraint. Bottlenecks manifest as the edges to which the primal function displays maximal sensitivity, thereby emerging as most influential during the update of agent positions. # Numerical Experiments In this section, we provide numerical experiments to show the performance of our algorithm. First, we set several parameters that will be used throughout all experiments. We use the capacity function $$c(\mathbf{x}, \mathbf{y}) = exp \left( - \left( \frac{|| \mathbf{x} - \mathbf{y} ||_2}{d_0} \right)^{D} \right) \label{eqn:c_func}$$ Intuitively, $D$ captures the shape of the fading of the capacity of the link between a pair of agents as function of their distance, whereas $d_0$ captures the characteristic length scale of the decay. In all experiments, the density of agents $\rho$ is the same; the size of the area in which agents are spawned will depend on the number of agents. We set $\rho = 1 \ agents/km^2$, $D=2$, and $d_0 = 1km$. A plot of the rate function is shown in Figure [1](#fig: cfunc){reference-type="ref" reference="fig: cfunc"}. ![Function $c(\mathbf{x},\mathbf{x})$ from equation [\[eqn:c_func\]](#eqn:c_func){reference-type="eqref" reference="eqn:c_func"} with $D = 2$ and $d_0$ = 1km](images/link.pdf){#fig: cfunc} We used the step size $\alpha = 0.4$, scaling it by the value of $0.97$ after every iteration. For all experiments we restrict the utility function of the P-MCF to a linear combination of the minimum $a^k_i$ of each commodity $k$. Therefore it is defined as $$\mathcal{U}\left((a_i^k)_{\ k \in \mathcal{A}, \ i \in S_k} \right) = \sum_{k \in \mathcal{A}} \omega_k \min_{i \in S_{k}} a^k_i$$ We present three distinct study cases where the merit function above can be used to model different network requirements. ### Ad-Hoc Networking The agents represent a mobile Ad-Hoc Network; they navigate together and need to remain connected with as much network capacity as possible. In this case, we set $\omega = \mathbf{1}$, weighing all commodities equally. ### Routing to infrastructure AP A set of agents is performing a task where they need to maintain communication with an access point (AP). This setting can arise in scenarios where agents carry out a search and rescue mission in a remote area, and need to relay local information to an AP. There is only one sink, and so only one commodity is required. Therefore, $\omega$ is set to be all zero except for the $j$-th entry, corresponding to the AP, which is set to 1. 
### Distributed Algorithm Agents are running a distributed algorithm that periodically needs to update information from surrounding agents. We set $\omega$ to be a time-varying vector, whose entries associated with agents that need to update their information for a given instant are set to one, with remaining entries set to zero. In the remaining part of this section, we use the previous three study cases to show multiple properties of our method. ## Performance We start by showing results for several scenarios with a small number of agents, in which the form of optimal solutions can be gauged by inspection. In all such setups, we use the first study case, setting $\omega = \mathbf{1}$. We reserve the term "network agents" for agents in the controllable subset $\mathcal{I} \subseteq \mathcal{V}$, and the term "task agents" for those in subset $\mathcal{V} \setminus \mathcal{I} \equiv \mathcal{A}$. First, we consider a problem with two task agents and one network agent. The expected result is for the network agent to be placed in the middle of the segment formed by its teammates. The next scenario is composed of four task agents forming a square shape, and two network agents. Here, the intuitive solution is to place the latter pair close to the middle of the square. The last scenario also involves four task agents; now, however, three of them are close together, while the fourth one is far away. The last example serves to illustrate an important aspect of the utility function. We defined it to be the sum of $a^k$, where $a^k$ is the *minimum* $a^k_i$ across all $i \in S_{k} \ (\text{in this case } S_{k} = \mathcal{A} \setminus \{k\})$ for commodity $k$. Therefore, if one agent is disconnected from the rest, it will lead to $a^k=0$. For this reason, we expect the algorithm will connect this agent to its teammates, in order to increase $a^k$. Figure [2](#fig:intuitive){reference-type="ref" reference="fig:intuitive"} illustrates outputs of the algorithm running in these three scenarios. In all cases, expected behaviour emerges. Furthermore, the team utility function increases with the number of iterations, thus providing reassuring evidence of the soundness of the algorithm. ![Test with small teams of task and network agents in various configurations. The first row involves two task agents and one network agent. The second row has four task agents arranged in a square and two network agents. The third row corresponds to a scenario with four task agents, where one is separated from the other three, and two network agents. In all cases, the first column depicts initial positions of all agents, the second column the positions of networks agents adjusted by the algorithm, and the third column the evolution of the team utility function with the number of iterations.](images/intuitive_cases.pdf){#fig:intuitive} ## Flexibility We consider two scenarios, in each of which we run the algorithm three times, using the same task agent configuration, but varying the utility function. In the first run of the algorithm, we use the all-ones weight vector, simulating the first study case. For the next two runs, simulating the second study case, we choose a different task agent to play the role of the AP. In the first scenario, we use 5 task agents and 4 network agents, and in the second scenario we use 25 task agents and 10 network agents. The positions of network agents computed by our algorithm are shown in Figure [3](#fig:case1vscase2){reference-type="ref" reference="fig:case1vscase2"}. 
We can observe a marked difference in solutions for each case. While using $\omega = \mathbf{1}$, the solution exhibits higher spatial dispersion, in line with the intuitive goal to connect all agents. However, when there is only one commodity in the objective, the communication agents are placed closer to the agent playing the role of the AP. ![Test with one commodity in the objective. The first row corresponds to 5 task agents and 4 network agents. The second row involves 25 task agents and 10 network agents. In both cases, the first column is the solution with all weights equal to one. In the second and third column, the weight of the commodity of one randomly selected task agent was set to one, while the rest were set to zero.](images/one_sink_cases.pdf){#fig:case1vscase2} We also repeat the latter experiment, this time comparing the first case study with the third. Figure [4](#fig:mulitple-sinks){reference-type="ref" reference="fig:mulitple-sinks"} shows how, once again, different solutions are reached, where in cases where commodities are restricted to a few task agents, the communication agents end up placed close to them. ![Test comparing different utility functions. Both rows illustrate examples with 25 task agents and 10 communication agents. In the first column all 25 commodities are active, whereas in the second and third 5 out of the 25 commodities are active.](images/multiple_sink_cases.pdf){#fig:mulitple-sinks} ## Scalability Our final goal was to determine how the computational performance of our method scales with the number of agents/size of the graph. To this end, we ran 100 simulations for different numbers of task agents, and computed the average time it took to solve an MCFP of that size. In all cases, the number of communication agents was half the number of task agents, and the weight vector was set to one. The results are shown in Table [1](#tab:tab1){reference-type="ref" reference="tab:tab1"}. We show examples for 30, 35 and 45 agents in Figure [5](#fig:scalability){reference-type="ref" reference="fig:scalability"}. Furthermore, we can see how the algorithm places network agents in positions that increase the utility function with the number of iterations of the algorithm. \# Task Agents Time to solve MCFP ---------------- ------------------------- A=2 (0.0005 $\pm$ 0.0001) s A=5 (0.0014 $\pm$ 0.0002) s A=10 (0.0173 $\pm$ 0.0006) s A=20 (0.20 $\pm$ 0.02) s A=25 (0.42 $\pm$ 0.05) s A=30 (1.0 $\pm$ 0.1) s : The presented values are the average times calculated for solving a MCFP for different sizes. For each number of task agents we run 50 different scenarios, and solved 10 MCFP for each. In all cases the number of communication agents is half the number of task agents, rounded down on even cases. Experiments were run on an AMD Ryzen Threadripper PRO 3995WX 64-Cores. ![Test with weight vector equal to one. The first row involves 20 task agents and 10 network agents, and the second involves 25 task agents and 10 network agents. The third row corresponds to 30 task agents and 15 network agents. In all cases the first column depicts initial positions of all agents, the second column the output of the algorithm, and the third column the evolution of the performance function.](images/scalability.pdf){#fig:scalability} ## Mobility We lastly show how the algorithm may be used in a dynamic scenario. For these simulations, we run two parallel threads. The first thread updates the positions of all agents every $\Delta T$ seconds. 
The positions to which the task agents are updated follow a trajectory given by a random acceleration model. The equations of this model for agent $i \in \mathcal{A}$ are $$\begin{aligned} &\mathbf{v}_i(t+\Delta T) = \mathbf{v}_i(t) + \mathbf{a}_i(t) \Delta T \\ &\mathbf{x}_i(t+\Delta T) = \mathbf{x}_i(t) + \mathbf{v}_i(t) \Delta T + \frac{1}{2} \mathbf{a}_i(t) \Delta T^2\end{aligned}$$ where $\mathbf{a}_i(t) \sim \mathcal{N}(0, a I_2)$. To keep the density $\rho$ constant, we limit the area in which task agents can navigate. If an agent hits one of the walls, we change the sign of its velocity to generate a "bouncing" effect. The directions along which the positions of the network agents are updated are determined by the second thread. Letting $\mathbf{d}$ be that direction, the positions are updated by a step in that direction with norm bounded by $v_{max}$, where $v_{max}$ is a bound on the speed of the agents. The equation for updating the position of agent $i \in \mathcal{I}$ is $$\begin{aligned} \mathbf{x}_i(t + \Delta T) = \mathbf{x}_i(t) + \min(||\mathbf{d}||,v_{max}) \frac{\mathbf{d}}{||\mathbf{d}||} \Delta T. \end{aligned}$$ Initial positions for the task agents $X_{\mathcal{A}}^0$ are randomly sampled, while initial positions for the network agents $X_{\mathcal{I}}^0$ are obtained by finding optimal positions for $X_{\mathcal{A}}^0$ before the simulation starts. The second thread iteratively samples the agent positions at a given instant, solves the MCFP, and outputs the directions of local increase for the network agents. These are the directions $\mathbf{d}$ mentioned above. We run the scenario simulating case study two. The agent that plays the role of the AP is fixed in space. We use $\Delta T = 0.2 s$, $a = 0.01 km/s^2$ and $v_{max} = 90 m/s$. We simulate for 20 seconds. Snapshots of the simulation are shown in Figure [6](#fig:dynamic){reference-type="ref" reference="fig:dynamic"}. We observe that the performance drops between seconds 8 and 12 of the simulation. This can be attributed to two task agents that separate from the swarm towards the upper right and lower left corners, as can be seen in Figure [6](#fig:dynamic){reference-type="ref" reference="fig:dynamic"}. Nonetheless, the network agents close to them move in the direction of those distant agents so as to connect them back to the sink, thus recovering the performance function.

![Test for the dynamic scenario simulation. Each frame corresponds to a snapshot of the agent configuration at a given instant. ](images/dy_one_sink_cases.pdf){#fig:dynamic}

![Utility as a function of time for the dynamic scenario simulation.](images/perf_one_sink.pdf){#fig:perfDyn}

# Conclusion

We presented a novel method for tackling the problem of providing dynamic communication infrastructure for a team of agents. Our approach leveraged the fact that we can extract directions of local increase of the objective function using the dual solution of a closely related convex optimization problem. We showed the efficacy of our algorithm on a range of different scenarios in simulation, demonstrating the scalability of the method to large teams of agents. Future work will include using learning techniques to decrease the running time of computing the solution to the Multi-Commodity Flow Problem, as well as using different utility functions to model a wider range of practical scenarios.

[^1]: This work was supported in part by NSF Grant CCR-2112665.
$^{1}$Authors are with the Department for Electrical and Systems Engineering, University of Pennsylvania, USA; $^{2}$Authors with the Institute of Electrical Engineering, Universidad de la República, Uruguay; `iboero, mdelcastillo@fing.edu.uy, igorspas, pappasg, kumar, aribeiro@seas.upenn.edu`
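To make the shadow-price ascent loop described above concrete, the sketch below runs it on a toy single-commodity instance: one source, one sink, and a single controllable relay, with cvxpy standing in for the MOSEK-based "Solve-DMCF" routine. The geometry, the step size, and the use of a plain max-flow objective are illustrative assumptions and are not taken from the paper's experiments.

```python
import numpy as np
import cvxpy as cp

d0 = 1.0  # characteristic length scale of the capacity decay

def cap(xi, xj):
    """Capacity c(x_i, x_j) = exp(-(||x_i - x_j|| / d0)^2)."""
    return float(np.exp(-(np.linalg.norm(xi - xj) / d0) ** 2))

def grad_cap(xi, xj):
    """Gradient of c(x_i, x_j) with respect to x_i."""
    return -2.0 / d0**2 * cap(xi, xj) * (xi - xj)

# toy team: task agents 0 (source) and 1 (sink), controllable relay 2
x = np.array([[0.0, 0.0], [3.0, 0.0], [1.0, 1.0]])
relays = [2]
N = len(x)
alpha = 0.4  # ascent step size

for it in range(50):
    C = np.array([[cap(x[i], x[j]) for j in range(N)] for i in range(N)])
    r = cp.Variable((N, N), nonneg=True)  # single-commodity flow on each directed edge
    a = cp.Variable(nonneg=True)          # rate delivered from the source to the sink
    capacity = r <= C                     # edge-capacity constraints (shadow prices live here)
    net = cp.sum(r, axis=1) - cp.sum(r, axis=0)
    constraints = [capacity, cp.diag(r) == 0, net[0] == a, net[1] == -a]
    constraints += [net[i] == 0 for i in relays]
    problem = cp.Problem(cp.Maximize(a), constraints)
    problem.solve()
    mu = capacity.dual_value              # multipliers of the capacity constraints
    # move each relay along the ascent direction of Proposition 1
    for i in relays:
        d = sum((mu[i, j] + mu[j, i]) * grad_cap(x[i], x[j]) for j in range(N) if j != i)
        x[i] = x[i] + alpha * d

print("final relay position:", x[2])
print("final throughput:", problem.value)
```

At each iteration the small LP plays the role of the multi-commodity flow problem, and the multipliers of the capacity constraints are exactly the shadow prices that drive the position update of the relay.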
arxiv_math
{ "id": "2309.14284", "title": "Navigation with shadow prices to optimize multi-commodity flow rates", "authors": "Ignacio Boero, Igor Spasojevic, Mariana del Castillo, George Pappas,\n Vijay Kumar, Alejandro Ribeiro", "categories": "math.OC cs.RO", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We consider the stochastic Landau-Lifshitz-Gilbert equation, perturbed by a real-valued Wiener process. We add an external control to the effective field as an attempt to drive the magnetization to a desired state and also to control thermal fluctuations. We use the theory of Young measures to relax the given control problem along with the associated cost. We consider a control operator that can depend (possibly non-linearly) on both the control and the associated solution. Moreover, we consider a fairly general associated cost functional without any special convexity assumption. We use certain compactness arguments, along with the Jakubowski version of the Skorohod Theorem to show that the relaxed problem admits an optimal control. address: Symbiosis Institute of Technology, Pune author: - Soham Gokhale bibliography: - References_GokhaleSoham.bib title: Relaxed optimal control for the stochastic Landau-Lifshitz-Gilbert equation --- # Introduction {#section Introduction} Magnetic data storage devices are some of the most widely used devices to store and process digital data, for example a magnetic hard drive. One of the modern day problems is to optimize this storage, that is to have the ability to store and process large amounts of data on a small device, without hampering the processing speed. This speed of operation can depend on the speed of switching of the magnetization. Therefore, a study of this switching phenomenon becomes key in such applications. A lot of these devices use ferromagnets (such as iron, oxides of chromium, etc.) as a material. One of the main points of study in this regard is optimal control. Is it possible to have an external input (control) that can drive the magnetization from one state to another? Can we control the thermal fluctuations to some degree, and reduce the possible damage or loss of storage for the given device? Having commercial uses, can this be done with optimal cost? We try to answer this question in some sense. Weiss initiated the study of the theory of ferromagnetism, (see for example [@brown1963micromagnetics] and references therein). This was further developed by Landau and Lifshitz [@landau1992theory] and Gilbert [@Gilbert]. Let $\mathcal{O}\subset \mathbb{R}$ be a bounded interval. For a temperature below the Curie temperature, the magnetization $m$ satisfies the Landau-Lifshitz-Gilbert (LLG) equation $$\begin{aligned} \label{deterministic equation 1.1} \begin{cases} &d m = \left[ ( m \times H_{\text{eff}}(m) ) - \alpha \, m \times ( m \times H_{\text{eff}}(m)) \right] \, dt \ \text{in}\ \mathcal{O}_T=(0,T)\times \mathcal{O},\\ & \frac{\partial m}{\partial \nu} =\, 0\ \text{on}\ \partial \mathcal{O}_T,\\ & m(0,\cdot) = m_0\ \text{on}\ \mathcal{O}. \end{cases} \end{aligned}$$ Here $\times$ denotes the vector product in $\mathbb{R}^3$. Also, $\nu$ denotes the outward pointing normal vector. The constant $\alpha>0$ depends on the gyromagnetic ratio and the damping parameter [@Cimrak_2008]. $H_{\text{eff}}$ denotes the effective field, which will be described shortly. For ferromagnetic materials at temperature below the Curie temperature, the modulus of the magnetization remains constant, giving rise to a constraint condition (see [\[eqn constraint condition\]](#eqn constraint condition){reference-type="eqref" reference="eqn constraint condition"}). For simplicity, we assume the constant to be 1. 
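This constraint is formally consistent with the structure of [\[deterministic equation 1.1\]](#deterministic equation 1.1){reference-type="eqref" reference="deterministic equation 1.1"}: both terms on the right-hand side are pointwise orthogonal to $m$, so that, at least formally, $$\begin{aligned} \frac{1}{2}\frac{d}{dt}\left|m\right|_{\mathbb{R}^3}^2 = \left\langle m , m \times H_{\text{eff}}(m) \right\rangle_{\mathbb{R}^3} - \alpha \left\langle m , m \times \left( m \times H_{\text{eff}}(m) \right) \right\rangle_{\mathbb{R}^3} = 0, \end{aligned}$$ since $\left\langle a , a \times b \right\rangle_{\mathbb{R}^3} = 0$ for all $a , b \in \mathbb{R}^3$. Hence $\left|m(t,x)\right|_{\mathbb{R}^3} = \left|m_0(x)\right|_{\mathbb{R}^3} = 1$ whenever the initial datum is $\mathbb{S}^2$-valued.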
The solutions of [\[deterministic equation 1.1\]](#deterministic equation 1.1){reference-type="eqref" reference="deterministic equation 1.1"} correspond to the equilibrium states of the ferromagnet. We aim to study the phase transition between equilibrium states that is induced by the thermal fluctuations in the effective field $H_{\text{eff}}$. In order to do this, we modify the equation [\[deterministic equation 1.1\]](#deterministic equation 1.1){reference-type="eqref" reference="deterministic equation 1.1"} to include the random fluctuations affecting the dynamics of $m$. The analysis of noise-induced transitions was initiated by Neel [@Neel_1946] and further studied in [@Brown_Thermal_Fluctuations; @Kamppeter+FranzEtAl_StochasticVortexDynamics], among others. The introduction of a stochastic forcing term in [\[deterministic equation 1.1\]](#deterministic equation 1.1){reference-type="eqref" reference="deterministic equation 1.1"} is due to Brown, see [@brown1963micromagnetics; @Brown_Thermal_Fluctuations]. The effective field $H_{\text{eff}}=H_{\text{eff}}(m)= - D\mathcal{E}(m)$ is deduced from the Landau--Lifshitz energy $\mathcal{E}(m)$, primarily consisting of three parts: the exchange energy, the anisotropy energy and the magnetostatic energy, see [@Kruzik+Prohl] for example. Visintin [@Visintin_1985] studied the equation with contributions from all three energies to the total magnetic energy. For simplicity, only the exchange energy is considered here. Similar simplifying assumptions can be seen in [@Michiel_EtAl_2001; @ZB+BG+TJ_Weak_3d_SLLGE], to name a few. DeSimone [@DeSimone_HysterisisSmallFerromagneticParticles], Gioia and James [@Gioia1997micromagnetics] studied the LLG equation (with exchange energy only) as a limiting case of the models that include the other types of energy. The exchange energy is given by $$\begin{aligned} \mathcal{E}(m)=\int_{\mathcal{O}}\frac{A_{\text{exc}}}{2}|\nabla m|_{\mathbb{R}^3}^2 \,dx, \end{aligned}$$ where $A_{\text{exc}}$ is a material constant [@Cimrak_2008]. In the calculations that follow, we assume that $A_{\text{exc}} = 1$. Therefore, we have the effective field $H_{\text{eff}} = \Delta m$. Following [@ZB+BG+TJ_Weak_3d_SLLGE], we perturb the equation by adding Gaussian noise to the effective field. Further, the noise needs to be invariant under coordinate transformations. Hence it is suggested, see [@Brown_Thermal_Fluctuations; @Kamppeter+FranzEtAl_StochasticVortexDynamics] for instance, that we consider the noise in the Stratonovich form. The resulting stochastic counterpart (stochastic LLG equation) of equation [\[deterministic equation 1.1\]](#deterministic equation 1.1){reference-type="eqref" reference="deterministic equation 1.1"} is $$\begin{aligned} \label{equation 1.1} \begin{cases} &dm = \left( m\times H_{\text{eff}} - \alpha \, m\times(m\times H_{\text{eff}}) \right) \, dt \\ & \qquad \ + \bigl(m \times h - \alpha \, m \times (m \times h) \bigr) \circ \, dW(t)\ \text{in}\ \mathcal{O}_T=(0,T)\times \mathcal{O},\\ & \frac{\partial m}{\partial \nu} =\, 0\ \text{on}\ \partial \mathcal{O}_T,\\ & m(0,\cdot) = m_0\ \text{on}\ \mathcal{O}. \end{cases} \end{aligned}$$ Here $h : \mathcal{O} \to \mathbb{R}^3$ is a given bounded function. $W$ is a real-valued Wiener process on a given probability space $\left( \Omega , \mathcal{F} , \mathbb{P} \right)$. Also, $\circ \, dW$ denotes the Stratonovich differential.
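Although the analysis in this paper is purely theoretical, it may help the intuition to see how a trajectory of [\[equation 1.1\]](#equation 1.1){reference-type="eqref" reference="equation 1.1"} can be simulated. The following minimal sketch is not part of the analysis and makes several illustrative choices of our own (a uniform grid on $\mathcal{O}=(0,1)$, an explicit Heun predictor--corrector step, and a pointwise renormalization after each step); it integrates the one-dimensional equation with $H_{\text{eff}}=\Delta m$, homogeneous Neumann boundary conditions and a single real-valued Wiener process.

``` python
import numpy as np

# Minimal illustrative sketch (not the method of this paper): Heun time stepping,
# which is consistent with the Stratonovich interpretation of the noise, for the
# 1-D stochastic LLG equation with H_eff = Laplacian(m) and Neumann boundaries.
N, dt, n_steps, alpha = 64, 5e-5, 1000, 0.5
dx = 1.0 / (N - 1)
rng = np.random.default_rng(0)

x = np.linspace(0.0, 1.0, N)
# S^2-valued initial datum m_0 and a fixed bounded field h : O -> R^3
m = np.stack([np.cos(2 * np.pi * x), np.sin(2 * np.pi * x), np.zeros(N)], axis=1)
h = np.tile(np.array([0.0, 0.0, 1.0]), (N, 1))

def laplacian(m):
    """Second-order differences with reflecting (Neumann) boundary values."""
    mp = np.pad(m, ((1, 1), (0, 0)), mode="edge")
    return (mp[2:] - 2.0 * mp[1:-1] + mp[:-2]) / dx**2

def drift(m):
    H = laplacian(m)
    return np.cross(m, H) - alpha * np.cross(m, np.cross(m, H))

def noise_coeff(m):
    return np.cross(m, h) - alpha * np.cross(m, np.cross(m, h))

for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal()          # real-valued Wiener increment
    m_pred = m + drift(m) * dt + noise_coeff(m) * dW  # predictor
    m = m + 0.5 * (drift(m) + drift(m_pred)) * dt \
          + 0.5 * (noise_coeff(m) + noise_coeff(m_pred)) * dW
    m /= np.linalg.norm(m, axis=1, keepdims=True)     # project back onto the sphere

print("max deviation of |m| from 1:", np.abs(np.linalg.norm(m, axis=1) - 1.0).max())
```

The projection step mimics the constraint condition stated next; the Heun scheme is one standard way of approximating a Stratonovich differential, and no claim is made here about its convergence for this particular equation.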
The equation [\[equation 1.1\]](#equation 1.1){reference-type="eqref" reference="equation 1.1"} is subject to the constraint condition $$\label{eqn constraint condition} \left|m(t, x)\right|_{\mathbb{R}^3} = 1,\ \text{for Leb. a.a. }x\in \mathcal{O},\ \text{for all }t\in[0,T], \ \mathbb{P}^\prime\text{-a.s.}$$ **Remark 1**. *A natural question here is about what happens when the temperature is above the Curie temperature. In that case, the model can be replaced by the Landau-Lifshitz-Bloch (LLB) equation. The model was proposed by Geranin in [@garanin1997fokker] in 1997. The LLB equation essentially interpolates between the LLG equation at low temperatures and the Ginzburg-Landau theory of phase transitions. Le in [@LE_Deterministic_LLBE] showed the existence of a weak solution to the LLB equation. The same author with Brzeźniak and Goldys in [@ZB+BG+Le_SLLBE] showed the existence and uniqueness of a solution for the stochastic LLB equation, along with the existence of invariant measures for dimensions 1 and 2. On similar lines, Jiang, Ju and Wang in [@Jiang+Ju+Wang_MartingaleWeakSolnSLLBE] showed the existence of a weak martingale solution to the stochastic LLB equation. In [@Qiu+Tang_Wang_AsymptoticBehaviourSLLBE], the authors established a large deviation principle and central limit theorem for the 1 dimensional stochastic Landau-Lifshitz-Bloch equation. The authors in the preprint [@UM+SG_2022Preprint_SLLBE_LDP] show the existence and pathwise uniqueness of a solution for the stochastic LLB equation (dimensions $1,2$) driven by a pure jump noise. Further, the authors establish a Wentzell-Friedlin type large deviations principle for the small noise asymptotic of the solutions. On a related note, the authors in [@UM+SG_2022_SLLBE_WongZakai] proved Wong-Zakai type approximations for the stochastic LLB equation perturbed by a real-valued Wiener process. In the preprint [@UM+SG_2022Preprint_SLLBE_RelaxedControl], the authors consider the stochastic LLB equation. Following the strategy from [@D+M+P+V] and [@ZB+RS] and considering a very general control operator and cost, the authors show that the problem admits a relaxed optimal control.* As indicated earlier, our aim is to try and optimize the switching of magnetization by giving some external input. Following the works [@ZB+UM+SG_2022Preprint_SLLGE_Control; @Dunst+Klein; @D+M+P+V], we add the control to the effective field $H_{\text{eff}}$ as an attempt to control the magnetization switching, and also to control the thermal fluctuations. Instead of just adding the control process, we add operators $L_1(m,u),L_2(m,u)$ that depend (possibly non-linearly) on both the control process $u$ and the corresponding solution process $m$. The resulting controlled equation is as follows. $$\begin{aligned} \label{eqn controlled problem equation considered} \nonumber dm(t) = & \big[ m(t) \times \Delta m(t) - \alpha m(t) \times \left( m(t) \times \Delta m(t) \right) \\ \nonumber & + m(t) \times L_1(m,u) - \alpha m(t) \times \left( m(t) \times L_2(m,u) \right) \big] \, dt \\ & + \left[ m(t) \times h - \alpha m(t) \times \left( m(t) \times h \right) \right] \circ dW(t)\end{aligned}$$ **Remark 2**. *A natural question here would be why we have not considered the control term just as $L(m,u)$ (say) and why in the specific form above in [\[eqn controlled problem equation considered\]](#eqn controlled problem equation considered){reference-type="eqref" reference="eqn controlled problem equation considered"}. 
It is recommended (see [@Dunst+Klein; @D+M+P+V]) that the added control should lie in the plane perpendicular to the solution $m$. For some vector $\mathscr{V}$ (non-zero and not parallel to $m$), the said tangent plane can be spanned by the vectors $m \times \mathscr{V}$ and $m \times (m \times \mathscr{V})$. Hence, considering the said form of the control term in [\[eqn controlled problem equation considered\]](#eqn controlled problem equation considered){reference-type="eqref" reference="eqn controlled problem equation considered"} suffices to study all controls acting on the tangent plane of the solution $m$.* For the system [\[eqn controlled problem equation considered\]](#eqn controlled problem equation considered){reference-type="eqref" reference="eqn controlled problem equation considered"}, we define the associated cost $J$. Let $\left( \Omega , \mathcal{F} , \mathbb{P} \right)$ denote a probability space. For a solution process $m$ corresponding to a control process $u$, the cost functional $J(m,u)$ is given by $$\begin{aligned} \label{eqn cost functional} J(m,u) = \mathbb{E}\left[ \int_{0}^{T} F(m,u) \, dt + \Psi(m(T)) \right].\end{aligned}$$ Here, $F$ is the running cost and $\Psi$ denotes the terminal cost. $\mathbb{E}$ denotes the expectation on the space $\Omega$ with respect to the probability measure $\mathbb{P}$ defined on the $\sigma$-algebra $\mathcal{F}$. The corresponding control problem is then to minimize $J$ over the space of all solutions to the problem [\[eqn controlled problem equation considered\]](#eqn controlled problem equation considered){reference-type="eqref" reference="eqn controlled problem equation considered"}. In this work, we employ the theory of Young measures, see for instance [@Balder_2000Book_LecturesYoungMeasureTheory; @Young_1969Book_LecturesCalculusofVariations], to obtain a relaxed version of the above control problem. Let us denote the considered control set by $\mathbb{U}$, which is assumed to be a metrizable Suslin space. Let $L: L^2 \times \mathbb{U} \to L^2$ denote the control operator, which depends (possibly non-linearly) on the control $u\in\mathbb{U}$ as well as the corresponding solution $m$. Let $\lambda$ denote a random Young measure on the space $\mathbb{U}$, see Definition [Definition 5](#definition Young measure){reference-type="ref" reference="definition Young measure"}.\ **Relaxed Control Problem:** $$\begin{aligned} \label{eqn relaxed controlled problem equation considered} \nonumber dm(t) = & \big[ m(t) \times \Delta m(t) - \alpha m(t) \times \left( m(t) \times \Delta m(t) \right) \big] \, dt \\ \nonumber & + \int_{\mathbb{U}} \big[ m(t) \times L_1(m,v) - \alpha m(t) \times \left( m(t) \times L_2(m,v) \right) \, \lambda(dv,dt) \big] \\ & + \left[ m(t) \times h - \alpha m(t) \times \left( m(t) \times h \right) \right] \circ dW(t).\end{aligned}$$ **Relaxed Cost Functional:**\ Corresponding to the problem [\[eqn relaxed controlled problem equation considered\]](#eqn relaxed controlled problem equation considered){reference-type="eqref" reference="eqn relaxed controlled problem equation considered"}, we relax the cost [\[eqn cost functional\]](#eqn cost functional){reference-type="eqref" reference="eqn cost functional"} as follows.
Let $\pi = \left( \Omega , \mathcal{F} , \mathbb{F} , \mathbb{P} , m , \lambda , W\right)$ be a weak martingale solution to the relaxed problem [\[eqn relaxed controlled problem equation considered\]](#eqn relaxed controlled problem equation considered){reference-type="eqref" reference="eqn relaxed controlled problem equation considered"} (see Definition [Definition 13](#def weak martingale solution){reference-type="ref" reference="def weak martingale solution"}). The corresponding relaxed cost is given by $$\begin{aligned} \label{eqn relaxed cost functional} \mathscr{J}(\pi) = \mathbb{E}\left[ \int_{0}^{T} \int_{\mathbb{U}} F(m,v) \, \lambda(dv,dt) + \Psi(m(T)) \right].\end{aligned}$$ Our aim is to show the existence of an optimal control, see Definition [Definition 16](#def optimal control){reference-type="ref" reference="def optimal control"}, to the above problem. That is, we want to show that there exists a weak martingale solution with $m$ as the process corresponding to the control process $u$ such that the cost $J$ is minimized. **Remark 3**. *By Lemma [Lemma 6](#lemma disintegration of Young measures){reference-type="ref" reference="lemma disintegration of Young measures"}, one can disintegrate a given random Young measure $\lambda$ using a relaxed control process $\left\{q_t\right\}_{t\in[0,T]}$ and the Lebesgue measure $dt$. The original problem, along with the cost functional, can be recovered from the respective relaxed ones by replacing the random Young measure $\lambda(dv,dt) = q_t(dv) dt$ by $\lambda(dv,dt) = \delta_{u(t)}(dv) dt$, where $\delta_{u(t)}$ denotes the Dirac delta mass concentrated at $u(t)$.* Observe here that no special conditions on the control operator $L$ with respect to the control process $u$ are assumed, and no special convexity assumption is made on the cost functional $F$. With that in mind, we extend the space of admissible controls to a bigger space, called the space of admissible relaxed controls. This is done by the method of relaxation (Young [@Young_1942_GeneralizedSurfaces; @Young_1969Book_LecturesCalculusofVariations] and Warga [@Warga_1962_NecessaryConditionsRelaxedVariationalProblems; @Warga_1962_RelaxedVariationalProblems]). Ahmed [@Ahmed_1983_PropertiesRelaxedTrajectories] and Papageorgiou *et al.* ([@Avgerinos+Papageorgiou_1990_OptimalControlAndRelaxation; @Papageorgiou_1989_ExistenceOptimalControl; @Papageorgiou_1989_RelaxationInfiniteDimensionControlSystem]) use relaxed controls in evolution equations in Banach spaces, with Polish space-valued controls (see also Lou [@Lou_2003_ExistenceOptimalControlSemilinearParabolicEqn; @Lou_2007_ExitenceNonExistenceOptimalControl], Karoui, Nguyen, and Picque [@Karoui+Nguyen+Picque_1987_CompactificationMethodsExistenceOptimalControl], Gatarek, Sobczyk [@Gatarek+Sobczyk_1994_OnExistenceOptimalControlsHilbertSpaceValuedDiffusions], Haussmann, Lepeltier [@Haussmann+Lepeltier_2990_OnExistenceOptimalControl]). Regarding the stochastic case, Nagase and Nisio [@Nagase_ExistenceOptimalControlSPDE; @Nagase+Nisio_OptimalControlsSPDE] initiated the study of relaxed controls for stochastic partial differential equations. Fleming and Nisio [@Fleming+Nisio_1984_OnStochasticRelaxedControl; @Fleming_1980_MeasureValuedProcessesControl] introduced the Young measure technique for finite-dimensional stochastic systems. Sritharan [@Sritharan_DeterministicAndStochasticControl_SNSE] studied the optimal relaxed control of both deterministic and stochastic Navier-Stokes equations with linear and non-linear constitutive relations.
He used the Minty-Browder technique (along with the martingale problem formulation of Stroock and Varadhan for the stochastic case) to establish the existence of optimal controls. Cutland and Grzesiak [@Cutland+Grzesiak_2005_OptimalControl3DSNSE; @Cutland+Grzesiak_2007_OptimalControl2DSNSE] study the existence of an optimal control for 2D (respectively 3D) Navier-Stokes equations. They use non-standard analysis (see also [@Berg+Neves_2007Book_StrengthNonstandardAnalysis]), along with relaxed control techniques. Brzeźniak and Serrano [@ZB+RS] study an optimal relaxed control problem for a class of semilinear stochastic partial differential equations on Banach spaces. They use compactness properties of the class of Young measures on metrizable Suslin spaces (the control sets). Manna and Mukherjee [@UM+DM] use the technique of relaxed controls , with Suslin space-valued controls, combine them with the martingale problem formulation in the sense of Stroock and Varadhan to establish the existence of optimal controls. ## Comparison with some relevant works - Dunst, Majee, Prohl and Vallet in [@D+M+P+V] considered the stochastic LLG equation driven by a Wiener process, and showed the existence of a relaxed optimal control. The main differences between that work and this work are as follows. We have considered the full noise (including the non-linear term), of which the said paper [@D+M+P+V] considers only the linear term. We have also considered a very general form of the control operator as well as the cost functional. - The authors in [@ZB+UM+SG_2022Preprint_SLLGE_Control] (preprint) consider the stochastic LLG equation in dimension $d=1$. They show the existence of a unique maximal regular solution, while considering the full noise. They also show the existence of an optimal control. The main difference between that work and this is that we consider the dimensions $d=1,2,3$ here, as opposed to $d=1$ in [@ZB+UM+SG_2022Preprint_SLLGE_Control]. Moreover, we do not tackle the problem of existence of a solution in this work. We have considered a fairly general control operator and cost functional as compared to the specific ones in the said work. - The authors in [@UM+SG_2022Preprint_SLLBE_RelaxedControl] consider the stochastic Landau-Lifshitz-Bloch equations. They consider a general (Suslin space valued) control operator, along with a general cost functional. Relaxing the problem, they first show the existence of a weak martingale solution to the controlled (relaxed) equation, followed by showing the existence of a relaxed optimal control to the said problem. In this paper, we consider a similar approach, but for the stochastic LLG equation. # Some Notations, Preliminaries {#section some notations, preliminaries} We state some of the results that we have used in the proofs. We have mostly imitated the notations and borrowed results from [@ZB+RS; @Castaing_Book], and [@Sritharan_DeterministicAndStochasticControl_SNSE]. For more details, we refer the reader to [@Balder_2000Book_LecturesYoungMeasureTheory; @Haussman+Lepeltier_1990_OnExistenceOptimalControls; @Jean+Jean_1981Book; @Fitte_2003_CompactnessCriteriaStableTopology; @Young_1969Book_LecturesCalculusofVariations] among others. We assume that $\mathcal{O}\subset\mathbb{R}^d,\ d=1,2,3$ is a bounded domain with a smooth boundary. Let $0<T<\infty$ be arbitrary but fixed. Let $\mathbb{U}$ (the control set) denote a Hausdorff topological space. Let $\mathcal{B}\left(\mathbb{U}\right)$ denote the Borel $\sigma$-algebra on $\mathbb{U}$. 
Let $\mathcal{P}\left(\mathbb{U}\right)$ denote the set of all probability measures on $\mathcal{B}\left(\mathbb{U}\right)$. The set $\mathcal{P}\left(\mathbb{U}\right)$ is endowed with the $\sigma$-algebra generated by the projection maps $$\pi_C : \mathcal{P}\left(\mathbb{U}\right) \ni q \mapsto q(C) \in [0,1] ,\ C\in \mathcal{B}\left(\mathbb{U}\right).$$ Let $\left( \Omega , \mathcal{F} , \mathbb{F}, \mathbb{P} \right)$ denote a filtered probability space (satisfying the usual hypotheses) with filtration $\mathbb{F} = \left(\mathcal{F}_t\right)_{t\in[0,T]}$. Throughout the work, let $W$ denote an $\mathbb{R}$-valued Wiener process on the given probability space. Let $0 < T < \infty$ and let $h$ denote the given function. **Definition 4** (Relaxed Control Process). *A stochastic process $q = \left\{q_t\right\}_{t\in[0,T]}$ with values in $\mathcal{P}\left(\mathbb{U}\right)$ is called a relaxed control process on $\mathbb{U}$ if the map $$[0,T] \times \Omega \ni \left(t,\omega\right) \mapsto q_t\left( \omega , \cdot \right) \in \mathcal{P}\left( \mathbb{U} \right),$$ is measurable. In other words, a relaxed control process on $\mathbb{U}$ is a measurable process with values in $\mathcal{P}\left( \mathbb{U} \right)$.* **Definition 5** (Young Measure). *Let $\lambda$ denote a non-negative $\sigma$-additive measure on $\mathcal{B}\bigl( \mathbb{U} \times [0,T]\bigr)$. We say that $\lambda$ is a Young measure on $\mathbb{U}$ if and only if $\lambda$ satisfies $$\lambda\left( \mathbb{U} \times D \right) = \text{Leb}\left(D\right),\ \forall D\in\mathcal{B}\bigl([0,T]\bigr),$$ with $\text{Leb}$ denoting the Lebesgue measure on $[0,T]$. The space of all Young measures on $\mathbb{U}$ will be denoted by $\mathcal{Y}\left(0,T: \mathbb{U} \right)$.* **Lemma 6** (Disintegration of Young measures, Lemma 2.3, [@ZB+RS]). *Let $\mathbb{U}$ be a Radon space. Let $\lambda:\Omega\to \mathcal{Y}\left(0,T : \mathbb{U} \right)$ be such that for every $J\in\mathcal{B}\bigl( \mathbb{U} \times [0,T]\bigr)$, the mapping $$\label{eqn definition measurability of Young measure} \Omega \ni \omega \mapsto \lambda\left(\omega\right)\left(J\right) \in [0,T],$$ is measurable. Then there exists a relaxed control process $q = \left\{q_t\right\}_{t\in[0,T]}$ on $\mathbb{U}$ such that $$\label{eqn disintegration formula 1} \lambda\left( \omega , C \times D \right) = \int_{D} q_t\left(\omega , C\right) \, dt,\ \forall C\in \mathcal{B}\left(\mathbb{U}\right),\ D\in \mathcal{B}\bigl([0,T]\bigr).$$* The formula [\[eqn disintegration formula 1\]](#eqn disintegration formula 1){reference-type="eqref" reference="eqn disintegration formula 1"} is frequently denoted by $$\label{eqn disintegration formula 2} \lambda\left( du,dt \right) = q_t\left( du \right)\, dt.$$ Note that the above mapping [\[eqn definition measurability of Young measure\]](#eqn definition measurability of Young measure){reference-type="eqref" reference="eqn definition measurability of Young measure"} being measurable justifies calling $\lambda$ a random Young measure. **Definition 7** (Stable Topology). *The stable topology on $\mathcal{Y}(0,T:\mathbb{U})$ is the weakest topology on $\mathcal{Y}(0,T:\mathbb{U})$ such that the mappings $$\mathcal{Y}(0,T:\mathbb{U}) \ni \lambda \mapsto \int_{D} \int_{\mathbb{U}}f(u) \, \lambda(du,dt)\in \mathbb{R},$$ are continuous for every $D\in\mathcal{B}([0,T])$ and $f\in C_{\text{b}}(\mathbb{U})$.* **Proposition 8**.
*The space of Young measures on a metrizable space (respectively metrizable Suslin) with the stable topology is metrizable (respectively metrizable Suslin).* **Definition 9** (Flexible Tighness). *We say that a set $\mathscr{J}\subset \mathcal{Y}(0,T:\mathbb{U})$ is flexibly tight, if for each $\varepsilon>0$, there exists a measurable set-valued mapping $[0,T]\ni t \mapsto K_t\subset\mathbb{U}$ such that $K_t$ is compact for all $t\in[0,T]$ and $$\sup_{\lambda\in\mathscr{J}}\int_{0}^{T}\int_{\mathbb{U}} \mathbbm{1}_{K_t^c}(u)\lambda(du,dt) < \varepsilon.$$* **Definition 10** (Inf-compactness). *A function $\eta:\mathbb{U} \to [0,\infty]$ is called inf-compact if and only if for every $R>0$ the level set $$\left\{ \eta \leq R \right\} = \left\{ u\in\mathbb{U} : \eta(u) \leq R \right\},$$ is compact.* **Lemma 11** (Theorem 2.13, [@ZB+RS] ). *The following are equivalent for any $\mathscr{J}\subset \mathcal{Y}(0,T:\mathbb{U})$.* 1. *$\mathscr{J}$ is flexibly tight.* 2. *There exists an inf-compact function $\eta$ such that $$\sup_{\lambda \in \mathscr{J}} \int_{0}^{T} \int_{\mathbb{U}} \eta(t,u) \lambda(du,dt) < \infty.$$* **Lemma 12** (Theorem 2.16, [@ZB+RS]). *Let $\mathbb{U}$ be a metrizable Suslin space. If $\lambda_n \to \lambda$ stably in $\mathcal{Y}(0,T:\mathbb{U})$, then for every $f\in L^1(0,T:C_{\text{b}}(\mathbb{U}))$ we have $$\lim_{n\to\infty} \int_{0}^{T}\int_{\mathbb{U}} f(t,u) \, \lambda_n(du,dt) = \int_{0}^{T}\int_{\mathbb{U}} f(t,u) \, \lambda(du,dt).$$* # Formulation of the problem, Statements of the main result First of all, recall that the equality (specifically the stochastic differential) in [\[eqn relaxed controlled problem equation considered\]](#eqn relaxed controlled problem equation considered){reference-type="eqref" reference="eqn relaxed controlled problem equation considered"} is to be understood in the Stratonovich sense. This can be written in the Itô sense by adding a correction term as follows. Let us define an operator $G:L^2 \to L^2$ given by $$\begin{aligned} G(v) = v \times h - \alpha v \times ( v \times h ),\ v\in L^2. \end{aligned}$$ With this in mind, the equality [\[eqn relaxed controlled problem equation considered\]](#eqn relaxed controlled problem equation considered){reference-type="eqref" reference="eqn relaxed controlled problem equation considered"} is equivalent see [@Prato+Zabczyk; @Oksendal_Book], to the equality $$\begin{aligned} \label{eqn relaxed controlled problem equation considered Ito form} \nonumber dm(t) = & \big[ m(t) \times \Delta m(t) - \alpha m(t) \times \left( m(t) \times \Delta m(t) \right) \big] \, dt \\ \nonumber & + \int_{\mathbb{U}} \big[ m(t) \times L_1(m,v) - \alpha m(t) \times \left( m(t) \times L_2(m,v) \right) \, \lambda(dv,dt) \big] \\ & + \frac{1}{2} DG(m(t))\bigl(G(m(t))\bigr) \, dt + G(m(t)) \, dW(t).\end{aligned}$$ Here $DG$ denotes the Gateaux derivative of $G$. **Definition 13** (Weak Martingale Solution). 
*A tuple $\pi^\prime= \left( \Omega^\prime, \mathcal{F}^\prime, \mathbb{F}^\prime, \mathbb{P}^\prime, m^\prime, \lambda^\prime, W^\prime\right)$ is said to be a **weak martingale solution to the problem** [\[eqn relaxed controlled problem equation considered\]](#eqn relaxed controlled problem equation considered){reference-type="eqref" reference="eqn relaxed controlled problem equation considered"} (equivalently a relaxed solution to the problem [\[eqn controlled problem equation considered\]](#eqn controlled problem equation considered){reference-type="eqref" reference="eqn controlled problem equation considered"} if the following hold:* 1. *$\left( \Omega^\prime, \mathcal{F}^\prime, \mathbb{F}^\prime, \mathbb{P}^\prime\right)$ is a filtered probability space that satisfies the usual hypotheses.* 2. *$W^\prime$ is a Wiener process on the said probability space.* 3. *$\lambda^\prime$ is a random Young measure on the space $\mathbb{U}$ such that the laws of $\lambda^\prime$ and $\lambda$ are equal on the space $\mathcal{Y}(0,T:\mathbb{U})$.* 4. *The process $m^\prime: \Omega^\prime\times [0,T] \times \mathscr{O}$ is adapted, taking values in the space $C([0,T] : (H^1)^\prime)$, and satisfies the equality [\[eqn relaxed controlled problem equation considered Ito form\]](#eqn relaxed controlled problem equation considered Ito form){reference-type="eqref" reference="eqn relaxed controlled problem equation considered Ito form"} in the weak sense. That is, for $\phi \in W^{1,4}$, the following equality holds. $$\begin{aligned} \label{eqn relaxed controlled problem equation considered Ito form weak sense} \nonumber \left\langle m^{\prime}(t) ,\phi \right\rangle_{L^2} = & \int_{0}^{t} \left\langle \big[ m^{\prime}(t) \times \Delta m^{\prime}(t) - \alpha m^{\prime}(t) \times \left( m^{\prime}(t) \times \Delta m^{\prime}(t) \right) \big] , \phi \right\rangle_{L^2} \, dt \\ \nonumber & + \int_{0}^{t} \int_{\mathbb{U}} \left\langle \big[ m^{\prime}(t) \times L(m^{\prime}(t),v) - \alpha m^{\prime}(t) \times \left( m^{\prime}(t) \times L(m^{\prime}(t),v) \right) \, \lambda^\prime(dv,dt) \big] , \phi \right\rangle_{L^2} \\ & + \frac{1}{2} \int_{0}^{t} \left\langle DG\left(m^{\prime}(t)\right)\bigl(G\left(m^{\prime}(t)\right)\bigr) , \phi \right\rangle_{L^2} \, dt + \int_{0}^{t} \left\langle G\left(m^{\prime}(t)\right) , \phi \right\rangle_{L^2} \, dW(t).\end{aligned}$$* 5. *The process $m^{\prime}$ satisfies the following bounds for $p\geq 1$:* 1. *$$\mathbb{E} \sup_{t\in[0,T]} \left| m^{\prime}(t) \right|_{H^1}^{2p} < \infty,$$* 2. *$$\mathbb{E} \int_{0}^{T} \left| m^{\prime}(t) \times \Delta m^{\prime}(t) \right|_{L^2}^{2p} \, dt < \infty.$$* 6. *The process $m^{\prime}$ satisfies the constraint condition [\[eqn constraint condition\]](#eqn constraint condition){reference-type="eqref" reference="eqn constraint condition"}.* **An Assumption on the control operators:**\ We assume that there exist two inf-compact functions $\kappa_1,\kappa_2$ along with constants $C_1,C_2$ such that for some $r\geq 0$, $$\begin{aligned} L_1(m,u) \leq C_1 \left| m \right|_{L^2}^r + C_1 \kappa_1(u,\cdot) + C_1 \left| m \right|^r_{L^2} \kappa_1(u,\cdot),\ m\in L^2,u\in\mathbb{U},\end{aligned}$$ $$\begin{aligned} L_2(m,u) \leq C_2 \left| m \right|_{L^2}^r + C_2 \kappa_2(u,\cdot) + C_2 \left| m \right|^r_{L^2} \kappa_2(u,\cdot),\ m\in L^2,u\in\mathbb{U}.\end{aligned}$$ **Remark 14**. *In the two inequalities above, one can do away with the first two elements on the right hand sides. 
This is because they only add length to the calculations, and the complexity remains the same if we consider only the last terms on the respective right hand sides. Moreover, we assume $L_1 = L_2$, and hence also $\kappa_1 = \kappa_2$ and $C_1 = C_2$. Note that this also does not simplify the problem any more than reducing the length of certain calculations.* **Remark 15**. *For comparing this work with the preprint [@ZB+UM+SG_2022Preprint_SLLGE_Control], a main contrast is that even if we assume $L(m,u) = u$, the inf-compact function cannot be just the $L^2$ norm since it may not satisfy the required properties. In that case, we may need to either assume $\kappa$ to be the $H^1$ norm, or maybe assume the space of admissible controls to be $L^2_{\text{w}}(0,T:L^2)$, which is the space $L^2(0,T:L^2)$ endowed with the weak topology. For the latter case, the following embedding is compact. $$L^2(0,T:L^2) \hookrightarrow L^2_{\text{w}}(0,T:L^2).$$ Also, the authors in [@ZB+UM+SG_2022Preprint_SLLGE_Control] deals with the control problem in dimension $d=1$, proving the existence of a unique maximal regular solution. In this work, we do not prove the existence/non-existence of a solution for the controlled problem.* We now state some assumptions on the control operator $L$ the cost functional $F$. **Assumption 18**. *We assume the following.* 1. *We will assume, in the first part of the paper, that $\alpha = 0$ for the noise part. This will keep only the linear terms of the noise part. Later, in Appendix [5](#section Appendix){reference-type="ref" reference="section Appendix"}, we present the arguments for the case $\alpha \neq 0$. With this in mind, the equality [\[eqn relaxed controlled problem equation considered Ito form\]](#eqn relaxed controlled problem equation considered Ito form){reference-type="eqref" reference="eqn relaxed controlled problem equation considered Ito form"} takes the form $$\begin{aligned} \nonumber dm(t) = & \big[ m(t) \times \Delta m(t) - m(t) \times \left( m(t) \times \Delta m(t) \right) \big] \, dt \\ \nonumber & + \big[ m(t) \times L(m,v) - m(t) \times \left( m(t) \times L(m,v) \right) \, \lambda(dv,dt) \big]\\ & + \frac{1}{2} DG(m(t))\bigl(G(m(t))\bigr) \, dt + G(m(t)) \, dW(t), \end{aligned}$$ with $G(m) = m \times h$ and $DG(m)(G(m)) = G^2(m) = ( m \times h ) \times h$. For the coefficient of the triple product term $\left(m \times \left( m \times \Delta m \right) \right)$, we have replaced $\alpha$ by $1$ for simplicity.* 2. *Taking a cue from the previous (formal) discussion in Remark [Remark 14](#remark assumption on L){reference-type="ref" reference="remark assumption on L"}, we assume that there exists an inf-compact function $\kappa$ and a constant $C>0$ such that the following holds. Let $r\geq 0$. Then $$\left| L(m,u) \right|_{L^2} \leq C \left| m \right|_{L^2}^r \kappa ( \cdot , u).$$ Further, we assume that for any random Young measure, we have $$\label{eqn assumption on kappa} \mathbb{E} \int_{0}^{T} \kappa^{5}(\cdot , v) \lambda(dv,dt) < \infty.$$ For higher order estimates, we require the following. For $p\geq 1$, $$\label{eqn assumption on kappa p} \mathbb{E} \int_{0}^{T} \kappa^{5p}(\cdot , v) \lambda(dv,dt) < \infty.$$* 3. *The operator $L$ is continuous in the first variable in the following sense. For any $R>0$, let $K_R = \left\{ v\in\mathbb{U} : \kappa(\cdot,v) \leq R \right\}$. Let $m_n \to m$ in $L^4(\Omega:L^4(0,T:L^4))$ as $n\to\infty$ and let $\lambda$ denote a random Young measure on $\mathbb{U}$. 
Then* *$$\begin{aligned} \mathbb{E}\int_{0}^{T} \sup_{v\in K_R} \left| L(m_n,v) - L(m,v) \right|_{L^2}^2 \, ds \to 0,\ \text{as} \ n\to\infty. \end{aligned}$$* 4. *The functional $F$ is lower semicontinuous on the space $(H^1)^\prime$. That is, if $m_n\to m$ as $n\to\infty$ in $(H^1)^\prime$ then $$F(m) \leq \liminf_{n\to\infty} F(m_n).$$* 5. *The terminal cost $\Psi$ is assumed to be continuous with respect to the topology of the space $(H^1)^\prime$.* 6. *The terminal time $0<T<\infty$ and the given data $h\in W^{2,\infty}$ are fixed.* 7. *We assume the following coercivity assumption on the cost $F$. $$\begin{aligned} \label{eqn coercivity assumption} F(m,v) \geq C \kappa^5(m(\cdot),v). \end{aligned}$$* 8. *We assume that the initial data and the given data have the following regularity: $m_0 \in W^{1,2}(\mathscr{O}:\mathbb{S}^2)$ and $h\in W^{2,\infty}$. Here, $\mathbb{S}^2$ denotes the unit sphere in $\mathbb{R}^3$. That is, we have assumed $\mathbb{S}^2$-valued initial data.* *For the control problem, we need to define the space of admissible solutions. Here, we consider a weak martingale solution $\pi$ to be admissible if the associated relaxed cost is finite. That is, $$\begin{aligned} \mathscr{J}(\pi) < \infty. \end{aligned}$$ Let us denote by $\mathcal{U}_{m_0,T}$ the space of all admissible solutions corresponding to the initial data $m_0$ and the terminal time $T$.* ***Definition 16**. *A weak martingale solution $\pi^*$ to the problem [\[eqn relaxed controlled problem equation considered\]](#eqn relaxed controlled problem equation considered){reference-type="eqref" reference="eqn relaxed controlled problem equation considered"} is said to be an optimal control for the problem [\[eqn relaxed controlled problem equation considered\]](#eqn relaxed controlled problem equation considered){reference-type="eqref" reference="eqn relaxed controlled problem equation considered"} with respect to the cost [\[eqn relaxed cost functional\]](#eqn relaxed cost functional){reference-type="eqref" reference="eqn relaxed cost functional"} (a relaxed optimal control for the problem [\[eqn controlled problem equation considered\]](#eqn controlled problem equation considered){reference-type="eqref" reference="eqn controlled problem equation considered"} with the cost [\[eqn cost functional\]](#eqn cost functional){reference-type="eqref" reference="eqn cost functional"}) if $$\mathscr{J}(\pi^*) = \inf_{\pi\in\mathcal{U}_{m_0,T}} \mathscr{J}(\pi).$$** ***Theorem 17**. *Let $d=1,2,3$. Let Assumption [Assumption 18](#assumption){reference-type="ref" reference="assumption"} hold. Then the problem [\[eqn relaxed controlled problem equation considered\]](#eqn relaxed controlled problem equation considered){reference-type="eqref" reference="eqn relaxed controlled problem equation considered"}, with the cost [\[eqn relaxed cost functional\]](#eqn relaxed cost functional){reference-type="eqref" reference="eqn relaxed cost functional"}, admits an optimal control.** *Idea of the proof.* We first give a heuristic idea of the proof. First of all, if we assume $L=0$, we can conclude that the problem [\[eqn relaxed controlled problem equation considered Ito form\]](#eqn relaxed controlled problem equation considered Ito form){reference-type="eqref" reference="eqn relaxed controlled problem equation considered Ito form"} admits a weak martingale solution (see for instance [@ZB+BG+TJ_Weak_3d_SLLGE; @D+M+P+V; @UM+SG_2022Preprint_SLLBE_LDP; @UM+SG_2022Preprint_SLLBE_RelaxedControl]).
We have assumed a fairly general control operator and cost functional. Therefore we can take the liberty to assume that there exists at least one weak martingale solution with finite cost. That is, we can assume that the infimum stated in Theorem [Theorem 17](#thm optimal control){reference-type="ref" reference="thm optimal control"} is finite. If not, just showing the existence of a weak martingale solution suffices to show the existence of an optimal control (with possibly infinite cost). Hence, we can say that $\inf_{\mathcal{U}_{m_0,T}} \mathscr{J}(\pi) = \Lambda$ (say) is finite. Therefore, there exists a sequence $\left\{\pi_n\right\}_{n\in\mathbb{N}}$ of weak martingale solutions, whose costs converge to $\Lambda$. From this sequence, we extract a subsequence and then show that it converges (in some appropriate sense) to $\pi^*$, which is a weak martingale solution to the problem [\[eqn relaxed controlled problem equation considered\]](#eqn relaxed controlled problem equation considered){reference-type="eqref" reference="eqn relaxed controlled problem equation considered"} and a minimizer of the cost [\[eqn relaxed cost functional\]](#eqn relaxed cost functional){reference-type="eqref" reference="eqn relaxed cost functional"}. ◻ # Proof of Theorem [Theorem 17](#thm optimal control){reference-type="ref" reference="thm optimal control"} {#proof-of-theorem-thm-optimal-control} The section is comprised of several lemmas, which culminate into the proof of Theorem [Theorem 17](#thm optimal control){reference-type="ref" reference="thm optimal control"}. Since we have assumed that $\Lambda<\infty$, there exists a sequence $\left\{ \pi_n \right\}_{n\in\mathbb{N}}$ of weak martingale solutions to the problem [\[eqn relaxed controlled problem equation considered\]](#eqn relaxed controlled problem equation considered){reference-type="eqref" reference="eqn relaxed controlled problem equation considered"} such that $$\lim_{n\to\infty} \mathscr{J}(\pi_n) = \Lambda.$$ As a consequence, there exists some positive constant (say $\mathscr{R}$) such that $$\begin{aligned} \sup_{n\in\mathbb{N}} \mathscr{J}(\pi_n) \leq \mathscr{R}.\end{aligned}$$ In particular, $$\begin{aligned} \label{eqn minimizing sequence bound on F assumption} \sup_{n\in\mathbb{N}} \mathbb{E} \int_{0}^{T} \int_{\mathbb{U}} F(m,v) \, \lambda_n(dv,ds) \leq \mathscr{R}.\end{aligned}$$ Note that all the above solutions are corresponding to the same initial data $m_0$ and terminal time $T$. Also recall that since each of them is a weak martingale solution, the constraint condition [\[eqn constraint condition\]](#eqn constraint condition){reference-type="eqref" reference="eqn constraint condition"} is also satisfied for each process $m_n,n\in\mathbb{N}$. ## Uniform Bounds {#section Uniform bounds} We start by showing some uniform estimates for the sequence of processes $m_n,n\in\mathbb{N}$. **Lemma 19**. *There exists a constant $C>0$ such that the following holds for every $n\in\mathbb{N}$. 
$$\label{eqn minimizing sequence constraint condition} \left| m_n \right|_{\mathbb{R}^3}= 1,$$ $$\label{eqn minimizing sequence L infinity L2 bound} \sup_{t\in[0,T]} \left| m_n(t) \right|_{L^2}^2 \leq C,$$ $$\label{eqn minimizing sequence L infinity H1 bound} \mathbb{E}^n \sup_{t\in[0,T]} \left| m_n(t) \right|_{H^1}^2 \leq C,$$ $$\label{eqn minimizing sequence mn times Delta mn bound} \mathbb{E}^n \int_{0}^{T} \left| m_n(t) \times \Delta m_n(t) \right|_{L^2}^2 \, dt \leq C.$$* *Proof of Lemma [Lemma 19](#lemma bounds 1){reference-type="ref" reference="lemma bounds 1"}.* Firstly, since $\pi_n$ is a weak martingale solution, we have $$\begin{aligned} \nonumber dm_n(t) = & \left[ m_n(t) \times \Delta m_n(t) - m_n(t) \times \left( m_n(t) \times \Delta m_n(t) \right) + \frac{1}{2} \left( m_n(t) \times h \right) \times h \right] \, dt \\ \nonumber & + \int_{\mathbb{U}} \big[ m_n(t) \times L(m_n,v) - m_n(t) \times \left( m_n(t) \times L(m_n,v) \right) \, \lambda_n(dv,dt) \big] \\ & + \left[ \left( m_n(t) \times h \right) \right] \, dW_n(t). \end{aligned}$$ **Proof of [\[eqn minimizing sequence constraint condition\]](#eqn minimizing sequence constraint condition){reference-type="eqref" reference="eqn minimizing sequence constraint condition"}:**\ Proof of [\[eqn minimizing sequence constraint condition\]](#eqn minimizing sequence constraint condition){reference-type="eqref" reference="eqn minimizing sequence constraint condition"} follows from the fact that each of the tuples $\pi_n$ is a weak martingale solution to the problem [\[eqn relaxed controlled problem equation considered\]](#eqn relaxed controlled problem equation considered){reference-type="eqref" reference="eqn relaxed controlled problem equation considered"}, and hence satisfies the constraint condition.\ **Proof of [\[eqn minimizing sequence L infinity L2 bound\]](#eqn minimizing sequence L infinity L2 bound){reference-type="eqref" reference="eqn minimizing sequence L infinity L2 bound"}:**\ The inequality [\[eqn minimizing sequence L infinity L2 bound\]](#eqn minimizing sequence L infinity L2 bound){reference-type="eqref" reference="eqn minimizing sequence L infinity L2 bound"} is a consequence of the constraint condition combined with the boundedness of the domain $\mathscr{O}$ (and hence the finiteness of its measure). We now prove the inequalities [\[eqn minimizing sequence L infinity H1 bound\]](#eqn minimizing sequence L infinity H1 bound){reference-type="eqref" reference="eqn minimizing sequence L infinity H1 bound"}, [\[eqn minimizing sequence mn times Delta mn bound\]](#eqn minimizing sequence mn times Delta mn bound){reference-type="eqref" reference="eqn minimizing sequence mn times Delta mn bound"}.\ **Proof of [\[eqn minimizing sequence L infinity H1 bound\]](#eqn minimizing sequence L infinity H1 bound){reference-type="eqref" reference="eqn minimizing sequence L infinity H1 bound"}:**\ We apply the Itô formula to the function $\phi: H^1 \to \mathbb{R}$ given by $$\phi(v) = \frac{1}{2} \left| \nabla v \right|_{L^2}^2,\ v\in H^1.$$ Let $v,w_1,w_2 \in H^1$. The Gateaux derivative $\phi^\prime$ of $\phi$ is given by $$\phi^\prime(v)(w_1) = \left\langle \nabla v , \nabla w_1 \right\rangle_{L^2}.$$ For the second derivative $\phi^{\prime\prime}$, we have the following. $$\phi^{\prime\prime}(v)(w_1,w_2) = \left\langle \nabla w_1 , \nabla w_2 \right\rangle_{L^2}.$$ Note that we will have to apply an infinite dimensional version of the Itô formula, see for example [@Prato+Zabczyk; @Gyongy+Krylov_1982_StochasticEquationsSemimartingales; @Pardoux_1979].
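Before doing so, we record for the reader's convenience the elementary vector identities behind the cancellations in the estimates below (a routine pointwise computation, stated here only to ease the reading): for all $a , b \in \mathbb{R}^3$, $$\begin{aligned} \left\langle a \times b , b \right\rangle_{\mathbb{R}^3} = 0 \quad \text{and} \quad \left\langle a \times \left( a \times b \right) , b \right\rangle_{\mathbb{R}^3} = \left\langle a \times b , b \times a \right\rangle_{\mathbb{R}^3} = - \left| a \times b \right|_{\mathbb{R}^3}^2. \end{aligned}$$ Taking $a = m_n(s,x)$, $b = \Delta m_n(s,x)$ and integrating over $\mathcal{O}$ yields $$\begin{aligned} \left\langle m_n(s) \times \Delta m_n(s) - \alpha \, m_n(s) \times \left( m_n(s) \times \Delta m_n(s) \right) , -\Delta m_n(s) \right\rangle_{L^2} = - \alpha \left| m_n(s) \times \Delta m_n(s) \right|_{L^2}^2, \end{aligned}$$ which is precisely the computation of the term $I_{1,1}$ below.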
Applying the Itô formula gives $$\begin{aligned} \nonumber \frac{1}{2} \left| \nabla m_n(t) \right|_{L^2}^2 = & \frac{1}{2} \left| \nabla m_0 \right|_{L^2}^2 \\ \nonumber & + \int_{0}^{t} \langle \bigg[ m_n(t) \times \Delta m_n(s) - m_n(s) \times \left( m_n(s) \times \Delta m_n(s) \right) \\ \nonumber & \qquad + \frac{1}{2} \left( m_n(s) \times h \right) \times h \bigg] , -\Delta m_n(t) \rangle_{L^2} \, ds \\ \nonumber & + \int_{0}^{t} \bigg\langle \int_{\mathbb{U}} \bigg[ m_n(s) \times L(m_n(s),v) \\ \nonumber & \qquad - m_n(s) \times \left( m_n(s) \times L(m_n(s),v) \right) \, \lambda_n(dv,ds) \bigg] , -\Delta m_n(s) \bigg\rangle_{L^2} \\ \nonumber & + \frac{1}{2} \int_{0}^{t} \left| \nabla \left( m_n(s) \times h \right) \right|_{L^2}^2 \, ds \\ \nonumber & + \int_{0}^{t} \left\langle \left( m_n(s) \times h \right) \times h + \left[ m_n(s) \times h \right] , -\Delta m_n(s) \right\rangle_{L^2} \, dW_n(s) \\ = & \frac{1}{2} \left| \nabla m_0 \right|_{L^2}^2 + \sum_{i=1}^{4} I_i(t). \end{aligned}$$ **Calculations for $I_1$:** $$\begin{aligned} \nonumber I_1(t) \nonumber = & \int_{0}^{t} \left\langle \big[ m_n(t) \times \Delta m_n(s) - m_n(s) \times \left( m_n(s) \times \Delta m_n(s) \right) , -\Delta m_n(t) \right\rangle_{L^2} \, ds \\ \nonumber & + \frac{1}{2} \int_{0}^{t} \left\langle \left( m_n(s) \times h \right) \times h \big] , -\Delta m_n(t) \right\rangle_{L^2} \, ds \\ = & I_{1,1}(t) + I_{1,2}(t). \end{aligned}$$ $$\begin{aligned} \nonumber I_{1,1}(t) & = \int_{0}^{t} \left\langle \big[ m_n(t) \times \Delta m_n(s) - m_n(s) \times \left( m_n(s) \times \Delta m_n(s) \right) \big] , -\Delta m_n(t) \right\rangle_{L^2} \, ds \\ & = - \alpha \int_{0}^{t} \left| m_n(s) \times \Delta m_n(s) \right|_{L^2}^2 \, ds \leq 0. \end{aligned}$$ $$\begin{aligned} \nonumber I_{1,2}(t) = & \frac{1}{2} \int_{0}^{t} \left\langle \left( m_n(s) \times h \right) \times h \big] , -\Delta m_n(t) \right\rangle_{L^2} \\ \leq & C + C \int_{0}^{t} \left|\nabla m_n(s) \right|_{L^2}^2 \, ds. \end{aligned}$$ **Calculations for $I_2$:** $$\begin{aligned} \nonumber I_2(t) = & \int_{0}^{t} \bigg\langle \int_{\mathbb{U}} \big[ m_n(s) \times L(m_n(s),v) \\ \nonumber & \qquad - m_n(s) \times \left( m_n(s) \times L(m_n(s),v) \right) \, \lambda_n(dv,ds) \big] , -\Delta m_n(s) \bigg\rangle_{L^2} \\ \nonumber = & \int_{0}^{t} \left\langle \int_{\mathbb{U}} \big[ m_n(s) \times L(m_n(s),v) \, \lambda_n(dv,ds) \big] , -\Delta m_n(s) \right\rangle_{L^2} \\ \nonumber & + \int_{0}^{t} \left\langle \int_{\mathbb{U}} \big[ - m_n(s) \times \left( m_n(s) \times L(m_n(s),v) \right) \, \lambda_n(dv,ds) \big] , -\Delta m_n(s) \right\rangle_{L^2} \\ = & I_{2,1}(t) + I_{2,2}(t). 
\end{aligned}$$ **Calculation for $I_{2,1}$:** $$\begin{aligned} \nonumber \left| I_{2,1}(t) \right| = & \left| \int_{0}^{t} \left\langle \int_{\mathbb{U}} \big[ m_n(s) \times L(m_n(s),v) \, \lambda_n(dv,ds) \big] , -\Delta m_n(s) \right\rangle_{L^2} \right| \\ \nonumber \leq & \int_{0}^{t} \int_{\mathbb{U}} \left| L(m_n(s),v) \right|_{L^2} \left| m_n(s) \times \Delta m_n(s) \right|_{L^2} \, \lambda_n(dv,ds) \\ \nonumber \leq & \frac{\varepsilon}{2} \int_{0}^{t} \left| m_n(s) \times \Delta m_n(s) \right|_{L^2}^2 ds + \frac{1}{2\varepsilon} \int_{0}^{t} \int_{\mathbb{U}} \left| L(m_n(s),v) \right|_{L^2}^2 \, \lambda_n(dv,ds) \\ \nonumber \leq & \frac{\varepsilon}{2} \int_{0}^{t} \left| m_n(s) \times \Delta m_n(s) \right|_{L^2}^2 ds + C(\varepsilon) \int_{0}^{t} \int_{\mathbb{U}} \left| m_n(s) \right|_{L^2}^{2r} \kappa^2(m_n(s),v) \, \lambda_n(dv,ds) \\ \leq & \frac{\varepsilon}{2} \int_{0}^{t} \left| m_n(s) \times \Delta m_n(s) \right|_{L^2}^2 ds + C(\varepsilon) \int_{0}^{t} \int_{\mathbb{U}} \kappa^2(m_n(s),v) \, \lambda_n(dv,ds). \end{aligned}$$ Similarly, one can show that $$\begin{aligned} \left| I_{2,2}(t) \right| \leq \frac{\varepsilon}{2} \int_{0}^{t} \left| m_n(s) \times \Delta m_n(s) \right|_{L^2}^2 ds + C(\varepsilon) \int_{0}^{t} \int_{\mathbb{U}} \kappa^2(m_n(s),v) \, \lambda(dv,ds). \end{aligned}$$ **Calculations for $I_3$:** $$\begin{aligned} \nonumber \left| I_3(t) \right| = & \frac{1}{2} \int_{0}^{t} \left| \nabla \left( m_n(s) \times h \right) \right|_{L^2}^2 \, ds \\ \leq & C + C \int_{0}^{t} \left| \nabla m_n(s) \right|_{L^2}^2 \, ds. \end{aligned}$$ **Calculations for $I_4$:** $$\begin{aligned} \nonumber \mathbb{E} \sup_{t\in[0,T]} \left| I_4(t) \right| = & \mathbb{E} \sup_{t\in[0,T]} \left| \int_{0}^{t} \left\langle \left[ m_n(s) \times h \right] , -\Delta m_n(s) \right\rangle_{L^2} \, dW_n(s) \right| \\ \nonumber \leq & C \mathbb{E} \left( \int_{0}^{T} \left\langle \left[ m_n(s) \times h \right] , -\Delta m_n(s) \right\rangle_{L^2}^2 \, ds \right)^{\frac{1}{2}} \\ \nonumber \leq & C + C \mathbb{E} \left( \int_{0}^{T} \left| \nabla m_n(s) \right|_{L^2}^4 \, ds \right)^{\frac{1}{2}} \\ \nonumber \leq & C + C \mathbb{E} \sup_{t\in[0,T]} \left| \nabla m_n(s) \right|_{L^2} \left( \int_{0}^{T} \left| \nabla m_n(s) \right|_{L^2}^2 \, ds \right)^{\frac{1}{2}} \\ \leq & \frac{1}{4} \mathbb{E} \sup_{t\in[0,T]} \left| \nabla m_n(s) \right|_{L^2}^2 + C + 4 C \mathbb{E} \int_{0}^{T} \left| \nabla m_n(s) \right|_{L^2}^2 \, ds. \end{aligned}$$ Combining all the above calculations (except for $I_4$), we obtain the following. 
$$\begin{aligned} \nonumber \frac{1}{2} \left| \nabla m_n(t) \right|_{L^2}^2 \leq & C + \frac{1}{2} \left| \nabla m_0 \right|_{L^2}^2 - \int_{0}^{t} \left| m_n(s) \times \Delta m_n(s) \right|_{L^2}^2 \, ds \\ \nonumber & + \frac{\varepsilon}{2} \int_{0}^{t} \left| m_n(s) \times \Delta m_n(s) \right|_{L^2}^2 ds + C(\varepsilon) \int_{0}^{t} \int_{\mathbb{U}} \kappa^2(m_n(s),v) \, \lambda(dv,ds) \\ \nonumber & + C \int_{0}^{t} \left| \nabla m_n(s) \right|_{L^2}^2 \, ds \\ \nonumber & + \left| \int_{0}^{t} \left\langle \left( m_n(s) \times h \right) \times h + \left[ m_n(s) \times h \right] , -\Delta m_n(s) \right\rangle_{L^2} \, dW_n(s) \right| \\ \nonumber = & C + \frac{1}{2} \left| \nabla m_0 \right|_{L^2}^2 + \left(- 1 + \varepsilon\right) \int_{0}^{t} \left| m_n(s) \times \Delta m_n(s) \right|_{L^2}^2 \, ds \\ \nonumber & + C(\varepsilon) \int_{0}^{t} \int_{\mathbb{U}} \kappa^2(m_n(s),v) \, \lambda(dv,ds) + C \int_{0}^{t} \left| \nabla m_n(s) \right|_{L^2}^2 \, ds \\ & + \left| \int_{0}^{t} \left\langle \left[ m_n(s) \times h \right] , -\Delta m_n(s) \right\rangle_{L^2} \, dW_n(s) \right|. \end{aligned}$$ The above inequality can be written as $$\begin{aligned} \label{eqn minimizing sequence main inequality 1} \nonumber \frac{1}{2} \left| \nabla m_n(t) \right|_{L^2}^2 & + \left( 1 - \varepsilon \right) \int_{0}^{t} \left| m_n(s) \times \Delta m_n(s) \right|_{L^2}^2 \, ds \\ \nonumber \leq & C + \frac{1}{2} \left| \nabla m_0 \right|_{L^2}^2 + \frac{\varepsilon}{2} \int_{0}^{t} \left| m_n(s) \times \Delta m_n(s) \right|_{L^2}^2 ds \\ \nonumber & + C(\varepsilon) \int_{0}^{t} \int_{\mathbb{U}} \kappa^2(m_n(s),v) \, \lambda_n(dv,ds) + C \int_{0}^{t} \left| \nabla m_n(s) \right|_{L^2}^2 \, ds \\ & + \left| \int_{0}^{t} \left\langle \left( m_n(s) \times h \right) \times h + \left[ m_n(s) \times h \right] , -\Delta m_n(s) \right\rangle_{L^2} \, dW_n(s) \right|. \end{aligned}$$ Observe that the second term on the left hand side of the above inequality is non-negative (whenever $\varepsilon < 1$), and hence can be neglected without changing the inequality. Therefore, $$\begin{aligned} \label{eqn minimizing sequence main inequality 2} \nonumber \frac{1}{2} \left| \nabla m_n(t) \right|_{L^2}^2 \leq & C + \frac{1}{2} \left| \nabla m_0 \right|_{L^2}^2 + C(\varepsilon) \int_{0}^{t} \int_{\mathbb{U}} \kappa^2(m_n(s),v) \, \lambda_n(dv,ds) + C \int_{0}^{t} \left| \nabla m_n(s) \right|_{L^2}^2 \, ds \\ & + \left| \int_{0}^{t} \left\langle \left( m_n(s) \times h \right) \times h + \left[ m_n(s) \times h \right] , -\Delta m_n(s) \right\rangle_{L^2} \, dW_n(s) \right|. \end{aligned}$$ Taking the supremum over $[0,T]$ followed by taking the expectation of both sides gives $$\begin{aligned} \label{eqn minimizing sequence main inequality 3} \nonumber \frac{1}{2} \mathbb{E} \sup_{t\in[0,T]} \left| \nabla m_n(t) \right|_{L^2}^2 \leq & C + \frac{1}{2} \mathbb{E} \left| \nabla m_0 \right|_{L^2}^2 + C(\varepsilon) \mathbb{E} \int_{0}^{T} \int_{\mathbb{U}} \kappa^2(m_n(s),v) \, \lambda_n(dv,ds) \\ \nonumber & + C \mathbb{E} \int_{0}^{T} \left| \nabla m_n(s) \right|_{L^2}^2 \, ds \\ \nonumber & + \mathbb{E} \sup_{t\in[0,T]} \left| \int_{0}^{t} \left\langle \left( m_n(s) \times h \right) \times h + \left[ m_n(s) \times h \right] , -\Delta m_n(s) \right\rangle_{L^2} \, dW_n(s) \right| \\ \leq & C + C \mathbb{E} \int_{0}^{T} \left| \nabla m_n(s) \right|_{L^2}^2 \, ds + \frac{1}{4} \mathbb{E} \sup_{t\in[0,T]} \left| \nabla m_n(s) \right|_{L^2}^2. 
\end{aligned}$$ Therefore, $$\begin{aligned} \label{eqn minimizing sequence main inequality 4} \frac{1}{4} \mathbb{E} \sup_{t\in[0,T]} \left| \nabla m_n(t) \right|_{L^2}^2 \leq & C + C \mathbb{E} \int_{0}^{T} \left| \nabla m_n(s) \right|_{L^2}^2 \, ds. \end{aligned}$$ Multiplying by a suitable constant and using the Gronwall inequality gives the inequality [\[eqn minimizing sequence L infinity H1 bound\]](#eqn minimizing sequence L infinity H1 bound){reference-type="eqref" reference="eqn minimizing sequence L infinity H1 bound"}.\ **Proof of inequality [\[eqn minimizing sequence mn times Delta mn bound\]](#eqn minimizing sequence mn times Delta mn bound){reference-type="eqref" reference="eqn minimizing sequence mn times Delta mn bound"}:**\ We now go back to the inequality [\[eqn minimizing sequence main inequality 1\]](#eqn minimizing sequence main inequality 1){reference-type="eqref" reference="eqn minimizing sequence main inequality 1"} (with $\varepsilon < 1$). Now, the first term in the resulting inequality is non-negative, and hence can be neglected for now, while still preserving the inequality. Taking the supremum over $[0,T]$ of both sides, followed by taking the expectation, gives the following for some constant $C>0$. $$\begin{aligned} \left( 1 - \varepsilon \right) \mathbb{E} \int_{0}^{T} \left| m_n(s) \times \Delta m_n(s) \right|_{L^2}^2 \, ds \leq & C + C \mathbb{E} \int_{0}^{T} \left| \nabla m_n(s) \right|_{L^2}^2 \, ds + \frac{1}{4} \mathbb{E} \sup_{t\in[0,T]} \left| \nabla m_n(s) \right|_{L^2}^2. \end{aligned}$$ Multiplying by a suitable constant and then using the inequality [\[eqn minimizing sequence L infinity H1 bound\]](#eqn minimizing sequence L infinity H1 bound){reference-type="eqref" reference="eqn minimizing sequence L infinity H1 bound"} gives us the required inequality [\[eqn minimizing sequence mn times Delta mn bound\]](#eqn minimizing sequence mn times Delta mn bound){reference-type="eqref" reference="eqn minimizing sequence mn times Delta mn bound"}. This concludes the proof of Lemma [Lemma 19](#lemma bounds 1){reference-type="ref" reference="lemma bounds 1"}. ◻ **Lemma 20**. *Let $p\geq 1$. There exists a constant $C>0$ such that the following holds for every $n\in\mathbb{N}$. $$\begin{aligned} \mathbb{E}^n \sup_{t\in[0,T]} \left| m_n(t) \right|_{L^2}^{2p} \leq C, \end{aligned}$$ $$\label{eqn minimizing sequence p L infinity H1 bound} \mathbb{E}^n \sup_{t\in[0,T]} \left| m_n(t) \right|_{H^1}^{2p} \leq C,$$ $$\label{eqn minimizing sequence p mn times Delta mn bound} \mathbb{E}^n \int_{0}^{T} \left| m_n(t) \times \Delta m_n(t) \right|_{L^2}^{2p} \, dt \leq C.$$* *Proof of Lemma [Lemma 20](#lemma bounds p){reference-type="ref" reference="lemma bounds p"}.* We skip the proof of this lemma and instead give a very brief outline (see the proof of Theorem 3.5 in [@ZB+BG+TJ_Weak_3d_SLLGE] for a similar argument). After simplifying the result of applying the Itô formula, we can raise both sides of equation [\[eqn minimizing sequence main inequality 2\]](#eqn minimizing sequence main inequality 2){reference-type="eqref" reference="eqn minimizing sequence main inequality 2"} to the power $p$, before taking the supremum over $[0,T]$ followed by the expectation. Note that this will require the assumption [\[eqn assumption on kappa p\]](#eqn assumption on kappa p){reference-type="eqref" reference="eqn assumption on kappa p"} on the inf-compact function $\kappa$. ◻ **Lemma 21**. *The following bounds hold.
There exists a constant $C>0$ such that for all $p\in[2,\infty)$ and $\gamma\in\left(0,\frac{1}{2}\right)$ $$\label{eqn mn bound 1} \mathbb{E}^n \left| m_n \right|_{W^{\gamma,p}(0,T: L^{\frac{6}{5}})} \leq C.$$* *As a consequence, the following inequality holds (possibly for a different constant $C>0$). $$\label{eqn minimizing sequence W gamma p bound} \mathbb{E}^n \left| m_n \right|_{W^{\gamma,p}(0,T: (H^{1})^{\prime})} \leq C.$$* *Proof.* We skip the proof. The idea is to show that each integral on the right hand side of [\[eqn relaxed controlled problem equation considered Ito form\]](#eqn relaxed controlled problem equation considered Ito form){reference-type="eqref" reference="eqn relaxed controlled problem equation considered Ito form"} satisfies the bounds [\[eqn mn bound 1\]](#eqn mn bound 1){reference-type="eqref" reference="eqn mn bound 1"}, [\[eqn minimizing sequence W gamma p bound\]](#eqn minimizing sequence W gamma p bound){reference-type="eqref" reference="eqn minimizing sequence W gamma p bound"}, and therefore each $m_n$ satisfies the bounds as well. For similar details, we refer the reader to Lemma 4.1 in [@ZB+BG+TJ_Weak_3d_SLLGE]. ◻ **Lemma 22**. *The sequence of laws of the sequence $\left\{ \left( m_n , \lambda_n \right) \right\}_{n\in\mathbb{N}}$ is tight on the space $L^4(0,T:L^4)\cap C(0,T:(H^{1})^{\prime}) \times \mathcal{Y}(0,T:\mathbb{U})$.* *Proof of Lemma [Lemma 22](#lemma tightness full noise){reference-type="ref" reference="lemma tightness full noise"}:.* We skip the proof of this result, and refer the reader to Lemma 4.2 in [@ZB+BG+TJ_Weak_3d_SLLGE], see also [@UM+SG_2022Preprint_SLLBE_RelaxedControl], along with using Lemma [Lemma 11](#lemma tightness){reference-type="ref" reference="lemma tightness"}. ◻ ## Use of the Skorohod Theorem {#section Use of the Skorohod Theorem} **Lemma 23**. *There exists a probability space $\left( \tilde{\Omega} , \tilde{\mathcal{F}} , \tilde{\mathbb{P}}\right)$, along with $L^4(0,T:L^4)\cap C(0,T:(H^{1})^{\prime}) \times \mathcal{Y}(0,T:\mathbb{U})$-valued random variables $\left( \tilde{m}_n,\tilde{\lambda}_n\right), n\in\mathbb{N}$, $\tilde{m},\tilde{\lambda}$ and $C([0,T]:\mathbb{R})$-valued random variables $\tilde{W}_n$, $n\in\mathbb{N}$, $W$ such that the following hold.* 1. *$$\tilde{m_n} \to \tilde{m} \text{ in } L^4(0,T:L^4)\cap C(0,T:(H^{1})^{\prime}),\ \tilde{\mathbb{P}}-a.s.$$* 2. *$$\tilde{W}_n \to \tilde{W} \text{ in } C([0,T]:\mathbb{R}),\ \tilde{\mathbb{P}}-a.s.$$* 3. *$$\tilde{\lambda}_n \to \tilde{\lambda} \text{ stably in } \mathcal{Y}(0,T:\mathbb{U}),\ \tilde{\mathbb{P}}-a.s.$$* 4. *The processes $\left( \tilde{m}_n,\tilde{\lambda}_n,\tilde{W}_n\right)$ have the same laws (on the space $L^4(0,T:L^4)\cap C(0,T:(H^{1})^{\prime}) \times \mathcal{Y}(0,T:\mathbb{U}) \times C([0,T]:\mathbb{R})$) as the processes $\left( m_n,\lambda_n, W_n\right)$.* *Proof of Lemma [Lemma 23](#lemma Skorohod Theorem){reference-type="ref" reference="lemma Skorohod Theorem"}.* The lemma from the Skorohod Theorem, see for instance Theorem 4.30 in [@OK_FoundationsOfModernProbability]. 
◻

Appealing to the Kuratowski Theorem (see Theorem 1.1 in [@Vakhania_Probability_distributions_on_Banach_spaces]; see also [@ZB+Dhariwal_2012_SNSE_LevyNoise]), we can show that the processes $\tilde{m}_n$ and the corresponding $m_n$ satisfy the same bounds, in particular the bounds given in Lemma [Lemma 19](#lemma bounds 1){reference-type="ref" reference="lemma bounds 1"}, Lemma [Lemma 20](#lemma bounds p){reference-type="ref" reference="lemma bounds p"} and Lemma [Lemma 21](#lemma bounds 2){reference-type="ref" reference="lemma bounds 2"}. This is made formal in the following lemma.

**Lemma 24**. *Let $p\geq 1$. There exists a constant $C>0$ such that the following holds for every $n\in\mathbb{N}$. $$\label{eqn mn L infinity L2 bound} \sup_{t\in[0,T]} \left| \tilde{m}_n(t) \right|_{L^2}^2 \leq C.$$ $$\label{eqn mn L infinity H1 bound} \tilde{\mathbb{E}} \sup_{t\in[0,T]} \left| \tilde{m}_n(t) \right|_{H^1}^{2p} \leq C,$$ $$\label{eqn mn times Delta mn bound} \tilde{\mathbb{E}} \int_{0}^{T} \left| \tilde{m}_n(t) \times \Delta \tilde{m}_n(t) \right|_{L^2}^{2p} \, dt \leq C.$$*

The process $\tilde{m}$ also satisfies the same bounds. We write the bounds in the following lemma.

**Lemma 25**. *Let $p\geq 1$. There exists a constant $C>0$ such that the following holds. $$\label{eqn m L infinity L2 bound} \sup_{t\in[0,T]} \left| \tilde{m}(t) \right|_{L^2}^2 \leq C.$$ $$\label{eqn mnL infinity H1 p bound} \tilde{\mathbb{E}} \sup_{t\in[0,T]} \left| \tilde{m}(t) \right|_{H^1}^{2p} \leq C,$$ $$\label{eqn mn times Delta m bound} \tilde{\mathbb{E}} \int_{0}^{T} \left| \tilde{m}(t) \times \Delta \tilde{m}(t) \right|_{L^2}^{2p} \, dt \leq C.$$*

## Convergence Arguments {#section convergence arguments}

From Lemma [Lemma 23](#lemma Skorohod Theorem){reference-type="ref" reference="lemma Skorohod Theorem"}, we recall the following convergence. $$\begin{aligned} \tilde{m}_n \to \tilde{m} \text{ in } L^4(0,T:L^4)\cap C(0,T:(H^{1})^{\prime}),\ \tilde{\mathbb{P}}-a.s.\end{aligned}$$ By the bounds in Lemma [Lemma 24](#lemma bounds mn 1){reference-type="ref" reference="lemma bounds mn 1"}, we conclude that the sequence $\tilde{m}_n$ is uniformly integrable in the space $L^4(\tilde{\Omega}:L^4(0,T:L^4))$. Hence, we have the following convergence. $$\begin{aligned} \label{eqn convergence L4L4L4} \tilde{m}_n \to \tilde{m} \text{ in } L^4(\tilde{\Omega}:L^4(0,T:L^4)).\end{aligned}$$

**Note:** Henceforth in this section, we suppress the notation $\tilde{\ }$ for simplicity of writing. That is, we replace $\tilde{m}_n$ by $m_n$, $\tilde{\lambda}_n$ by $\lambda_n$ and so on. The resulting processes are not to be confused with the processes from the previous sections.

**Lemma 26**. *The following convergences hold. $$\begin{aligned} \lim_{n\to\infty} \mathbb{E} \left| \int_{0}^{T} \int_{\mathbb{U}} \left\langle m_n \times L(m_n , v) , \phi \right\rangle_{L^2} \, \lambda_n(dv,ds) - \int_{0}^{T} \int_{\mathbb{U}} \left\langle m \times L(m,v) , \phi \right\rangle_{L^2} \, \lambda(dv,ds) \right| = 0. \end{aligned}$$ $$\begin{aligned} \nonumber \lim_{n\to\infty} \mathbb{E} \bigg| & \int_{0}^{T} \int_{\mathbb{U}} \left\langle m_n \times \left( m_n \times L(m_n , v) \right) , \phi \right\rangle_{L^2} \, \lambda_n(dv,ds) \\ \qquad &- \int_{0}^{T} \int_{\mathbb{U}} \left\langle m \times \left( m \times L(m,v) \right) , \phi \right\rangle_{L^2} \, \lambda(dv,ds) \bigg| = 0.
\end{aligned}$$* *Proof.* First, we observe that $$\begin{aligned} \nonumber \int_{0}^{T} & \int_{\mathbb{U}} \left\langle m_n \times \left( m_n \times L(m_n , v) \right) , \phi \right\rangle_{L^2} \, \lambda_n(dv,ds) - \int_{0}^{T} \int_{\mathbb{U}} \left\langle m \times \left( m \times L(m,v) \right) , \phi \right\rangle_{L^2} \, \lambda(dv,ds) \\ \nonumber = & \bigg( \int_{0}^{T} \int_{\mathbb{U}} \left\langle m_n \times \left( m_n \times L(m_n , v) \right) , \phi \right\rangle_{L^2} \, \lambda_n(dv,ds) \\ \nonumber & - \int_{0}^{T} \int_{\mathbb{U}} \left\langle m \times \left( m_n \times L(m_n,v) \right) , \phi \right\rangle_{L^2} \, \lambda_n(dv,ds) \bigg) \\ \nonumber & + \bigg( \int_{0}^{T} \int_{\mathbb{U}} \left\langle m \times \left( m_n \times L(m_n,v) \right) , \phi \right\rangle_{L^2} \, \lambda_n(dv,ds) \\ \nonumber &- \int_{0}^{T} \int_{\mathbb{U}} \left\langle m \times \left( m \times L(m_n,v) \right) , \phi \right\rangle_{L^2} \, \lambda_n(dv,ds) \bigg) \\ \nonumber & + \bigg( \int_{0}^{T} \int_{\mathbb{U}} \left\langle m \times \left( m \times L(m_n,v) \right) , \phi \right\rangle_{L^2} \, \lambda_n(dv,ds) \\ \nonumber &- \int_{0}^{T} \int_{\mathbb{U}} \left\langle m \times \left( m \times L(m,v) \right) , \phi \right\rangle_{L^2} \, \lambda_n(dv,ds) \bigg) \\ \nonumber & + \bigg( \int_{0}^{T} \int_{\mathbb{U}} \left\langle m \times \left( m \times L(m,v) \right) , \phi \right\rangle_{L^2} \, \lambda_n(dv,ds) \\ \nonumber &- \int_{0}^{T} \int_{\mathbb{U}} \left\langle m \times \left( m \times L(m,v) \right) , \phi \right\rangle_{L^2} \, \lambda(dv,ds) \bigg) \\ = & \sum_{i=1}^{4} S_i. \end{aligned}$$ We calculate the terms separately. We show calculations first for $S_1$. The term $S_2$ can be handled similarly.\ **Calculations for $S_1$:** $$\begin{aligned} %\nonumber & \mathbb{E} \l| \int_{0}^{T} \int_{\mathbb{U}} \l\langle m_n \times \l( m_n \times L(m_n , v) \r) , \phi \r\rangle_{L^2} \, \lambda_n(dv,ds) %- \int_{0}^{T} \int_{\mathbb{U}} \l\langle m \times \l( m_n \times L(m_n,v) \r) , \phi \r\rangle_{L^2} \, \lambda_n(dv,ds) \r| \\ \nonumber \mathbb{E} \left| S_1 \right| & = \mathbb{E} \left| \int_{0}^{T} \int_{\mathbb{U}} \left\langle \left( m_n - m \right) \times \left( m_n \times L(m_n , v) \right) , \phi \right\rangle_{L^2} \, \lambda_n(dv,ds) \right| \\ \nonumber & \leq \int_{0}^{T} \int_{\mathbb{U}} \left| m_n - m \right|_{L^4} \left| m_n \right|_{L^4} \left| L(m_n , v) \right|_{L^2} \left| \phi \right|_{L^{\infty}} \, \lambda_n(dv,ds) \\ \nonumber & \leq \left( \mathbb{E} \int_{0}^{T} \left| m_n - m \right|_{L^4}^4 \, dt \right)^{\frac{1}{4}} \left( \mathbb{E} \int_{0}^{T} \left| m_n \right|_{L^4}^4 \, dt \right)^{\frac{1}{4}} \centerdot \\ \nonumber & \qquad \centerdot \left( \mathbb{E} \int_{0}^{T} \int_{\mathbb{U}} \left| L(m_n , v) \right|_{L^2} \, \lambda_n(dv,ds) \right)^{\frac{1}{4}} \left( \mathbb{E} \int_{0}^{T} \left| \phi \right|_{L^{\infty}}^4 \, dt \right)^{\frac{1}{4}} \\ \nonumber & \leq C \left( \mathbb{E} \int_{0}^{T} \left| m_n - m \right|_{L^4}^4 \, dt \right)^{\frac{1}{4}} \left( \mathbb{E} \int_{0}^{T} \int_{\mathbb{U}} \left| m_n(s) \right|_{L^2}^{4r} \kappa^4(s,v) \, \lambda_n(dv,ds) \right)^{\frac{1}{4}} \\ \nonumber & \leq C \left( \mathbb{E} \int_{0}^{T} \left| m_n - m \right|_{L^4}^4 \, dt \right)^{\frac{1}{4}} \left( \sup_{s\in[0,T]} \left| m_n(s) \right|_{L^2}^{4r} \mathbb{E} \int_{0}^{T} \int_{\mathbb{U}} \kappa^4(s,v) \, \lambda_n(dv,ds) \right)^{\frac{1}{4}} \\ \nonumber & \leq C \left( \mathbb{E} \int_{0}^{T} \left| m_n - m 
\right|_{L^4}^4 \, dt \right)^{\frac{1}{4}} \left( \mathbb{E} \int_{0}^{T} \int_{\mathbb{U}} \kappa^4(s,v) \, \lambda_n(dv,ds) \right)^{\frac{1}{4}} \\ & \leq C \left( \mathbb{E} \int_{0}^{T} \left| m_n - m \right|_{L^4}^4 \, dt \right)^{\frac{1}{4}}. \end{aligned}$$ Note that the dot ($\centerdot$) in the third step is used to denote the product of terms on the left and right. The right hand side of the above inequality goes to $0$ as $n\to\infty$. The calculations for $S_2$ are similar. Note that we have used the coercivity assumption [\[eqn coercivity assumption\]](#eqn coercivity assumption){reference-type="eqref" reference="eqn coercivity assumption"}, along with the inequality [\[eqn minimizing sequence bound on F assumption\]](#eqn minimizing sequence bound on F assumption){reference-type="eqref" reference="eqn minimizing sequence bound on F assumption"} to conclude that $$\sup_{n\in\mathbb{N}}\mathbb{E} \int_{0}^{T} \int_{\mathbb{U}} \kappa^4(s,v) \, \lambda_n(dv,ds) < \infty.$$ **Calculations for $S_3$:** $$\begin{aligned} \mathbb{E} ( S_3 ) = & \mathbb{E} \int_{0}^{T} \int_{\mathbb{U}} \left\langle m \times \left[ m \times \left( L(m_n,v) - L(m,v) \right) \right] , \phi \right\rangle_{L^2} \, \lambda_n(dv,ds). \end{aligned}$$ Let us define a cut-off function $\Phi_R: \mathbb{U} \to [0,1]$ as follows. $$\begin{aligned} \label{eqn definition cutoff function} \Phi_R(v) = \begin{cases} 1,\ \text{if } \kappa(\cdot,v) \leq R,\\ 0,\ \text{if } \kappa(\cdot,v) \geq 2R. \end{cases} \end{aligned}$$ Further, let us assign a temporary notation as follows. Let $$F(t,v) = \left\langle m(s) \times \left[ m(s) \times \left( L(m_n(s),v) - L(m(s),v) \right) \right] , \phi \right\rangle_{L^2}.$$ Therefore, $$\begin{aligned} \nonumber \mathbb{E} ( S_3 ) = & \mathbb{E} \int_{0}^{T} \int_{\mathbb{U}} F(s,v) \, \lambda_n(dv,ds) \\ \nonumber = & \mathbb{E} \int_{0}^{T} \int_{\mathbb{U}} \Phi(v) F(s,v) \, \lambda_n(dv,ds) + \mathbb{E} \int_{0}^{T} \int_{\mathbb{U}} (1-\Phi(v))F(s,v) \, \lambda_n(dv,ds) \\ = & S_{3,1} + S_{3,2}. \end{aligned}$$ **Calculations for $S_{3,1}$:** $$\begin{aligned} \nonumber \left| S_{3,1} \right| = & \mathbb{E} \left| \int_{0}^{T} \int_{\mathbb{U}} \Phi_R(v) \left\langle m \times \left[ m \times \left( L(m_n,v) - L(m,v) \right) \right] , \phi \right\rangle_{L^2} \, \lambda_n(dv,ds) \right| \\ \nonumber \leq & \mathbb{E} \int_{0}^{T} \int_{\mathbb{U}} \Phi_R(v)\left| m \right|_{L^4} \left| m \right|_{L^4} \left| L(m_n,v) - L(m,v) \right|_{L^2} \left| \phi \right|_{L^{\infty}} \, \lambda_n(dv,ds) \\ \nonumber \leq & \left( \mathbb{E} \int_{0}^{T} \left| m \right|_{L^4}^4 \, ds \right)^{\frac{1}{2}} \left( \mathbb{E} \int_{0}^{T} \int_{\mathbb{U}} \Phi_R(v) \left| L(m_n,v) - L(m,v) \right|_{L^2}^2 \, \lambda_n(dv,ds) \right)^{\frac{1}{2}} \centerdot \\ \nonumber & \qquad \centerdot \left( \mathbb{E} \int_{0}^{T} \left| \phi \right|_{L^{\infty}} \, ds \right)^{\frac{1}{4}} \\ \nonumber \leq & C \left( \mathbb{E} \int_{0}^{T} \int_{\mathbb{U}} \Phi_R(v) \left| L(m_n,v) - L(m,v) \right|_{L^2}^2 \, \lambda_n(dv,ds) \right)^{\frac{1}{2}} \\ \leq & C \left( \mathbb{E} \int_{0}^{T} \sup_{v\in \mathbb{U}} \Phi_R(v) \left| L(m_n,v) - L(m,v) \right|_{L^2}^2 \, ds \right)^{\frac{1}{2}}. 
\end{aligned}$$ By (3) in Assumption [Assumption 18](#assumption){reference-type="ref" reference="assumption"}, for every $R>0$, the right hand side of the above inequality converges to $0$ as $n\to\infty$.\
**Calculations for $S_{3,2}$:** $$\begin{aligned} \nonumber \left| S_{3,2} \right| \leq & \mathbb{E} \left| \int_{0}^{T} \int_{\mathbb{U}} (1-\Phi(v))F(s,v) \, \lambda_n(dv,ds) \right| \\ \nonumber \leq & \mathbb{E} \int_{0}^{T} \int_{\left\{v \in \mathbb{U}: \kappa(\cdot , v) \geq R\right\}} \left| \left\langle m \times \left[ m \times \left( L(m_n,v) - L(m,v) \right) \right] , \phi \right\rangle_{L^2} \right| \, \lambda_n(dv,ds) \\ \nonumber \leq & \mathbb{E} \int_{0}^{T} \int_{\left\{v \in \mathbb{U}: \kappa(\cdot , v) \geq R\right\}} \left| m \right|_{L^4} \left| m \right|_{L^4} \left( \left| L(m_n,v) \right|_{L^2} + \left| L(m,v) \right|_{L^2} \right) \left| \phi \right|_{L^{\infty}} \, \lambda_n(dv,ds) \\ \nonumber \leq & C \left( \mathbb{E} \int_{0}^{T} \left| m \right|_{L^4}^{4} \, ds \right)^{\frac{1}{2}} \left( \mathbb{E} \int_{0}^{T} \int_{\left\{v \in \mathbb{U}: \kappa(\cdot , v) \geq R\right\}} \left[ \left| m_n \right|_{L^2}^{4r} + \left| m \right|_{L^2}^{4r} \right] \kappa(s,v)^4 \, \lambda_n(dv,ds) \right)^{\frac{1}{2}} \centerdot\\ \nonumber & \qquad \centerdot \left( \mathbb{E} \int_{0}^{T} \left| \phi \right|_{L^{\infty}} \, ds \right)^{\frac{1}{4}} \\ \nonumber \leq & C \left( \mathbb{E} \int_{0}^{T} \int_{\left\{v \in \mathbb{U}: \kappa(\cdot , v) \geq R\right\}} \kappa^4(s,v) \, \lambda_n(dv,ds) \right)^{\frac{1}{2}} \\ \nonumber \leq & C \left( \mathbb{E} \int_{0}^{T} \int_{\left\{v \in \mathbb{U}: \kappa(\cdot , v) \geq R\right\}} \kappa^{5}(s,v) \cdot \kappa^{-1}(s,v) \, \lambda_n(dv,ds) \right)^{\frac{1}{2}} \\ \nonumber \leq & \frac{C}{R^{\frac{1}{2}}} \left( \mathbb{E} \int_{0}^{T} \int_{\mathbb{U}} \kappa^{5}(s,v) \, \lambda_n(dv,ds) \right)^{\frac{1}{2}} \\ \leq & \frac{C}{R^{\frac{1}{2}}}. \end{aligned}$$ The right hand side of the above inequality converges to $0$ as $R\to\infty$.\
**Calculations for $S_4$:**\
As before, let us fix the following notation. $$\hat{F}(s,v) = \left\langle m(s) \times \left( m(s) \times L(m(s),v) \right) , \phi \right\rangle_{L^2}.$$ Therefore, $$\begin{aligned} \mathbb{E} \left( S_4 \right) = & \mathbb{E} \left( \int_{0}^{T} \int_{\mathbb{U}} \hat{F}(s,v) \, \lambda_n(dv,ds) - \int_{0}^{T} \int_{\mathbb{U}} \hat{F}(s,v) \, \lambda(dv,ds) \right). \end{aligned}$$ Recall the cut-off function $\Phi$ defined previously in [\[eqn definition cutoff function\]](#eqn definition cutoff function){reference-type="eqref" reference="eqn definition cutoff function"}. Observe that $$\begin{aligned} \nonumber & \mathbb{E} \int_{0}^{T} \int_{\mathbb{U}} \hat{F}(s,v) \, \lambda_n(dv,ds) - \mathbb{E} \int_{0}^{T} \int_{\mathbb{U}} \hat{F}(s,v) \, \lambda(dv,ds) \\ \nonumber = & \mathbb{E} \int_{0}^{T} \int_{\mathbb{U}} \Phi(v) \hat{F}(s,v) \, \lambda_n(dv,ds) - \mathbb{E} \int_{0}^{T} \int_{\mathbb{U}} \Phi(v) \hat{F}(s,v) \, \lambda(dv,ds) \\ & + \mathbb{E} \int_{0}^{T} \int_{\mathbb{U}} \left(1-\Phi(v) \right)\hat{F}(s,v) \, \lambda_n(dv,ds) - \mathbb{E} \int_{0}^{T} \int_{\mathbb{U}} \left(1-\Phi(v) \right)\hat{F}(s,v) \, \lambda(dv,ds).
\end{aligned}$$ **Claim:** $$\begin{aligned} \Phi(\cdot)\hat{F} \in L^1(0,T:C_{b}(\mathbb{U})). \end{aligned}$$ That is, we want to show that $$\int_{0}^{T} \sup_{v\in\mathbb{U}} \left| \Phi(v) \hat{F}(s,v) \right| \, ds < \infty.$$ Towards this aim, we have the following sequence of inequalities. $$\begin{aligned} \nonumber \int_{0}^{T} \sup_{v\in\mathbb{U}} \left| \Phi(v) \hat{F}(s,v) \right| \, ds = & \int_{0}^{T} \sup_{v\in\mathbb{U}} \Phi(v) \left| \left\langle m \times \left( m \times L(m,v) \right) , \phi \right\rangle_{L^2} \right| \, ds \\ \nonumber \leq & \int_{0}^{T} \sup_{v\in\mathbb{U}} \Phi(v) \left| m \right|_{L^4}^2 \left| L(m,v) \right|_{L^2} \left| \phi \right|_{L^{\infty}} \, ds \\ \nonumber \leq & \int_{0}^{T} \sup_{v\in\mathbb{U}} \Phi(v) \left| m \right|_{L^4}^2 \left| m \right|_{L^2}^{r} \kappa(s,v) \left| \phi \right|_{L^{\infty}} \, ds \\ \leq & C \left( \int_{0}^{T} \left| m \right|_{L^4}^4 \, ds \right)^{\frac{1}{2}} \left( \int_{0}^{T} \sup_{v\in\mathbb{U}} \Phi^2(v) \kappa^2(s,v) \, ds \right)^{\frac{1}{2}} \left| \phi \right|_{L^{\infty}}. \end{aligned}$$ The right-hand side of the above inequality is finite almost surely. This concludes the proof of the claim. Therefore, by Lemma [Lemma 12](#lemma for convergence of integral of continuous bounded function Young measures){reference-type="ref" reference="lemma for convergence of integral of continuous bounded function Young measures"}, the following convergence holds for almost all $t\in[0,T]$. $$\begin{aligned} \int_{0}^{t} \int_{\mathbb{U}} \Phi(v) \hat{F}(s,v) \, \lambda_n(dv,ds)\ \text{converges to}\ \int_{0}^{t} \int_{\mathbb{U}} \Phi(v) \hat{F}(s,v) \, \lambda(dv,ds)\ \text{as } n\to\infty. \end{aligned}$$ Moreover, $$\begin{aligned} \mathbb{E} \left| \int_{0}^{T} \int_{\mathbb{U}} \Phi(v) \hat{F}(s,v) \, \lambda_n(dv,ds) \right|^{\frac{4}{3}} \leq & C \left( \mathbb{E} \int_{0}^{T} \left| m \right|_{L^4}^4 \, ds \right)^{\frac{2}{3}} \left( \mathbb{E} \left| \phi \right|_{L^{\infty}}^{4} \right)^{\frac{2}{3}}. \end{aligned}$$ Therefore, by the Vitali Convergence Theorem (see Theorem 4.5.4 in [@BogachevBook_MeasureTheor_2007]), we have $$\begin{aligned} \lim_{n\to\infty} \mathbb{E} \left| \int_{0}^{T} \int_{\mathbb{U}} \Phi(v) \hat{F}(s,v) \, \lambda_n(dv,ds) - \int_{0}^{T} \int_{\mathbb{U}} \Phi(v) \hat{F}(s,v) \, \lambda(dv,ds) \right| = 0. \end{aligned}$$ The second part of $S_4$ can be handled similarly to the term $S_{3,2}$. This concludes the proof of the lemma. ◻

**Lemma 27**. *Let $\phi \in L^4(\Omega:L^4(0,T:L^4))$. Then the following convergences hold. $$\begin{aligned} \lim_{n\to\infty} \mathbb{E} \int_{0}^{T} \left\langle \left( m_n \times \Delta m_n \right) , \phi \right\rangle_{L^2} \, ds = \mathbb{E} \int_{0}^{T} \left\langle \left( m \times \Delta m \right) , \phi \right\rangle_{L^2} \, ds. \end{aligned}$$ $$\begin{aligned} \lim_{n\to\infty} \mathbb{E} \int_{0}^{T} \left\langle m_n \times \left( m_n \times \Delta m_n \right) , \phi \right\rangle_{L^2} \, ds = \mathbb{E} \int_{0}^{T} \left\langle m \times \left( m \times \Delta m \right) , \phi \right\rangle_{L^2} \, ds. \end{aligned}$$*

*Proof.* We skip the proof and refer the reader to Section 4 in [@ZB+BG+TJ_Weak_3d_SLLGE]. ◻

**Lemma 28**.
*Let $\phi \in L^2(\Omega:L^2(0,T:W^{1,4}))$. Then $$\begin{aligned} \lim_{n\to\infty} \mathbb{E} \int_{0}^{T} \left\langle G(m_n) - G(m) , \phi \right\rangle_{L^2} \, ds = 0. \end{aligned}$$* *Proof.* We give a short idea of the proof. We have the following sequence of inequalities. $$\begin{aligned} \nonumber & \left| \mathbb{E} \int_{0}^{T} \left\langle G(m_n) - G(m) , \phi \right\rangle_{L^2} \, ds \right| \\ \nonumber = & \left| \mathbb{E} \int_{0}^{T} \left\langle m_n - m , \phi \times h \right\rangle_{L^2} \, ds \right| \\ \nonumber \leq & \left( \mathbb{E} \int_{0}^{T} \left| m_n - m \right|_{L^4}^4 \, ds \right)^{\frac{1}{4}} \left( \mathbb{E} \int_{0}^{T} \left| \phi \right|_{L^2}^2 \, ds \right)^{\frac{1}{2}} \left( \mathbb{E} \int_{0}^{T} \left| h \right|_{L^4}^4 \, ds \right)^{\frac{1}{4}} \\ \leq & C \left( \mathbb{E} \int_{0}^{T} \left| m_n - m \right|_{L^4}^4 \, ds \right)^{\frac{1}{4}}. %\to 0\text{ as }n\to\infty \end{aligned}$$ The right hand side of the above inequality converges to $0$ as $n\to\infty$ due to the convergence [\[eqn convergence L4L4L4\]](#eqn convergence L4L4L4){reference-type="eqref" reference="eqn convergence L4L4L4"}. ◻ **Lemma 29**. *The process $m$ satisfies the equality in [\[eqn relaxed controlled problem equation considered Ito form\]](#eqn relaxed controlled problem equation considered Ito form){reference-type="eqref" reference="eqn relaxed controlled problem equation considered Ito form"}, in the sense given in Definition [Definition 13](#def weak martingale solution){reference-type="ref" reference="def weak martingale solution"}.* *Proof.* For a similar result, we refer the reader to Section 5 in [@ZB+BG+TJ_Weak_3d_SLLGE]. ◻ ## Verification of the constraint condition {#section constraint condition} So far, we have shown that the process $m$ satisfies the equality [\[eqn relaxed controlled problem equation considered Ito form\]](#eqn relaxed controlled problem equation considered Ito form){reference-type="eqref" reference="eqn relaxed controlled problem equation considered Ito form"} and satisfies the required bounds. We are yet to show that the constraint condition is satisfied. Towards that, we consider the function $v \mapsto \frac{1}{2} \left| v \right|_{L^2}^2$, and apply the Itô formula from the work of Pardoux [@Pardoux_1979]. That all the necessary conditions (see Theorem 1.2 in [@Pardoux_1979]) are satisfied can be verified using the bounds in Lemma [Lemma 25](#lemma bounds on m){reference-type="ref" reference="lemma bounds on m"}. Applying the Itô formula, we get the following equality. Let $\phi\in C_{c}^{\infty}(\mathscr{O})$. Consider the function $v\mapsto\frac{1}{2}\left\langle \phi v , v \right\rangle_{L^2}$. 
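In the computation below, we use at several places the elementary pointwise identity $$\left\langle a \times b , a \right\rangle_{\mathbb{R}^3} = 0, \qquad a , b \in \mathbb{R}^3;$$ for example, $$\left\langle m(s) \times \Delta m(s) , \phi\, m(s) \right\rangle_{L^2} = \int_{\mathscr{O}} \phi(x) \left\langle m(s,x) \times \Delta m(s,x) , m(s,x) \right\rangle_{\mathbb{R}^3} \, dx = 0.$$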
$$\begin{aligned} \nonumber \frac{1}{2} \left\langle m(t) , \phi m(t) \right\rangle_{L^2} = & \frac{1}{2} \left\langle m_0 , \phi m_0 \right\rangle_{L^2} \\ \nonumber & + \int_{0}^{t} \left\langle \big[ m(s) \times \Delta m(s) - \alpha m(s) \times \left( m(s) \times \Delta m(s) \right) \big] , \phi m(s) \right\rangle_{L^2} \, ds \\ \nonumber & + \int_{0}^{t} \int_{\mathbb{U}} \bigg\langle \big[ m(s) \times L(m(s),v) \\ \nonumber & \qquad - \alpha m(s) \times \left( m(s) \times L(m(s),v) \right) \big] , \phi m(s) \bigg\rangle_{L^2} \, \lambda(dv,ds) \\ \nonumber & + \frac{1}{2} \int_{0}^{t} \left| \left( m(s) \times h \right) \right|_{L^2}^2 \, ds \\ \nonumber & + \int_{0}^{t} \left\langle \left[ m(s) \times h - \alpha \left( m(s) \times h \right) \times h \right] , \phi m(s) \right\rangle_{L^2} \, dW(s) \\ = & \frac{1}{2} \left\langle m_0 , \phi m_0 \right\rangle_{L^2} + \sum_{i=1}^{4} J_i(t).\end{aligned}$$ We now observe that, for each $i$ and every $t\in[0,T]$, $$\begin{aligned} J_i(t) = 0.\end{aligned}$$ Therefore, the resulting equality is $$\begin{aligned} \left\langle m(t) , \phi m(t) \right\rangle_{L^2} = \left\langle m_0 , \phi m_0 \right\rangle_{L^2}.\end{aligned}$$ That is, $$\begin{aligned} \int_{\mathscr{O}} \left\langle m(t,x) , \phi(x) m(t,x) \right\rangle_{\mathbb{R}^3} \, dx = \int_{\mathscr{O}} \left\langle m_0(x) , \phi(x) m_0(x) \right\rangle_{\mathbb{R}^3} \, dx.\end{aligned}$$ Since $\phi\in C_{c}^{\infty}(\mathscr{O})$ is arbitrary, $$\begin{aligned} \left| m(t,x) \right|_{\mathbb{R}^3}^2 = \left| m_0(x)\right|_{\mathbb{R}^3}^2 = 1,\ Leb.\ a.a.\ x\in\mathscr{O}.\end{aligned}$$ This completes the verification of the constraint condition.

## Minimizing the cost

We now come back to the notation $\tilde{\pi}$ for denoting the elements obtained after using the Skorohod Theorem (see Lemma [Lemma 23](#lemma Skorohod Theorem){reference-type="ref" reference="lemma Skorohod Theorem"}). In the previous sections, we showed that the tuple $\tilde{\pi}$ is a weak martingale solution to the problem [\[eqn relaxed controlled problem equation considered\]](#eqn relaxed controlled problem equation considered){reference-type="eqref" reference="eqn relaxed controlled problem equation considered"}. We now show that this obtained solution also minimizes the cost [\[eqn relaxed cost functional\]](#eqn relaxed cost functional){reference-type="eqref" reference="eqn relaxed cost functional"}. The aim of this section is therefore to show that $$\begin{aligned} \mathscr{J}(\tilde{\pi}) = \Lambda.\end{aligned}$$ By Lemma [Lemma 23](#lemma Skorohod Theorem){reference-type="ref" reference="lemma Skorohod Theorem"}, we know that the processes $(m_n,\lambda_n)$ and $(\tilde{m}_n,\tilde{\lambda}_n)$ have the same laws on the space $L^4(0,T:L^4)\cap C([0,T]:(H^1)^\prime) \times \mathcal{Y}(0,T:\mathbb{U})$. Therefore $$\begin{aligned} \liminf_{n\to\infty} \mathbb{E}^n \int_{0}^{T} \int_{\mathbb{U}} F(m_n,v)\lambda_n(dv,ds) \geq \liminf_{n\to\infty} \tilde{\mathbb{E}}^n \int_{0}^{T} \int_{\mathbb{U}} F(\tilde{m}_n,v)\tilde{\lambda}_n(dv,ds).\end{aligned}$$ We now recall the assumption that the running cost $F$ is lower semicontinuous. We have that $$\begin{aligned} \tilde{m}_n \to \tilde{m} \text{ in } L^2(0,T:L^2)\ \text{and}\ \tilde{\lambda}_n \to \tilde{\lambda}\ \text{stably in}\ \mathcal{Y}(0,T:\mathbb{U}).\end{aligned}$$ Combining the lower semicontinuity of $F$ with Fatou's Lemma, we have the following inequality.
$$\begin{aligned} \int_{0}^{T} \int_{\mathbb{U}} F(\tilde{m},v)\tilde{\lambda}(dv,ds) \leq \liminf_{n\to\infty} \int_{0}^{T} \int_{\mathbb{U}} F(\tilde{m}_n,v)\tilde{\lambda}_n(dv,ds).\end{aligned}$$ Hence $$\begin{aligned} \tilde{\mathbb{E}} \int_{0}^{T} \int_{\mathbb{U}} F(\tilde{m},v)\tilde{\lambda}(dv,ds) \leq \liminf_{n\to\infty} \tilde{\mathbb{E}} \int_{0}^{T} \int_{\mathbb{U}} F(\tilde{m}_n,v)\tilde{\lambda}_n(dv,ds).\end{aligned}$$ Therefore, $$\begin{aligned} \nonumber \tilde{\mathbb{E}} \int_{0}^{T} \int_{\mathbb{U}} F(\tilde{m},v)\tilde{\lambda}(dv,ds) & \leq \liminf_{n\to\infty} \tilde{\mathbb{E}} \int_{0}^{T} \int_{\mathbb{U}} F(\tilde{m}_n,v)\tilde{\lambda}_n(dv,ds)\\ & \leq \liminf_{n\to\infty} \mathbb{E}^n \int_{0}^{T} \int_{\mathbb{U}} F(m_n,v)\lambda_n(dv,ds).\end{aligned}$$ Recall from Assumption [Assumption 18](#assumption){reference-type="ref" reference="assumption"} that we have assumed $\Psi$ to be continuous and that $\tilde{m}_n \to \tilde{m}$ in $C(0,T:(H^1)^\prime)$. Therefore $$\begin{aligned} \tilde{\mathbb{E}} \Psi(\tilde{m}(T)) \leq \liminf_{n\to\infty} \tilde{\mathbb{E}} \Psi(\tilde{m}_n(T)).\end{aligned}$$ Since $\pi_n$ is a minimizing sequence and $\Lambda$ is the infimum, we have $$\begin{aligned} \Lambda \leq \tilde{\mathbb{E}}\left( \int_{0}^{T} \int_{\mathbb{U}} F(\tilde{m},v)\tilde{\lambda}(dv,ds)+\Psi(\tilde{m}(T))\right) \leq \Lambda.\end{aligned}$$ Hence the infimum is attained at $\tilde{\pi}$.\
\
**Acknowledgement:** The author would like to thank Prof. Utpal Manna, Indian Institute of Science Education and Research, Thiruvananthapuram, for helpful discussions and insights.

# Result with full noise {#section Appendix}

We begin with a remark.

**Remark 30**. *In the previous part, we noted that for $L = 0$ we at least have the existence of a weak martingale solution. For the problem [\[eqn relaxed controlled problem equation considered Ito form\]](#eqn relaxed controlled problem equation considered Ito form){reference-type="eqref" reference="eqn relaxed controlled problem equation considered Ito form"} with full noise, we cannot say so for $d=2,3$, since the existence is not yet known, to the best of our knowledge. It is known that the problem admits a strong solution for $d=1$, see [@ZB+BG+TJ_LargeDeviations_LLGE; @D+M+P+V; @ZB+UM+SG_2022Preprint_SLLGE_Control]. Therefore, we cannot simply argue that the choice $L=0$ provides an admissible solution. We have considered a very general cost functional in [\[eqn cost functional\]](#eqn cost functional){reference-type="eqref" reference="eqn cost functional"} (and hence also [\[eqn relaxed cost functional\]](#eqn relaxed cost functional){reference-type="eqref" reference="eqn relaxed cost functional"}). As before, we take the liberty here to assume that there exists at least one solution with finite cost.*

We now state the equation that has been considered. $$\begin{aligned} \nonumber dm_n(t) = & \big[ m_n(t) \times \Delta m_n(t) - \alpha m_n(t) \times \left( m_n(t) \times \Delta m_n(t) \right) \big] \, dt \\ \nonumber & + \int_{\mathbb{U}} \big[ m_n(t) \times L(m_n,v) - \alpha\, m_n(t) \times \left( m_n(t) \times L(m_n,v) \right) \big] \, \lambda_n(dv,dt) \\ & + \frac{1}{2} DG\left(m_n(t)\right)\bigl(G\left(m_n(t)\right)\bigr) \, dt + G\left(m_n(t)\right) \, dW_n(t).
\end{aligned}$$ with $G(m_n) = m_n \times h - \alpha \, m_n \times \left( m_n \times h \right)$ and $$\begin{aligned} \label{eqn Stratonivich to Ito correction term} \nonumber DG\big(m_n\big)\bigl(G(m_n)\bigr) =& (m_n \times h) \times h - \alpha \, \left( \big(m_n \times (m_n \times h)\big) \times h \right)\\ \nonumber& - \alpha \, \left(m_n \times h\right) \times \left(m_n \times h\right) \\ \nonumber & + m_n \times \big((m_n \times h) \times h\big)\\ \nonumber &-\alpha \, \big(m_n \times (m_n \times h)\big) \times (m_n \times h)\\ \nonumber & - \alpha \, m_n \times \Big( \big(m_n \times (m_n \times h)\big)\times h\Big)\\ = & \sum_{i=1}^{6} K_i. \end{aligned}$$

**Theorem 31**. *Let $d=1,2,3$. Let Assumption [Assumption 18](#assumption){reference-type="ref" reference="assumption"} hold with $\alpha \neq 0$. Then the problem [\[eqn relaxed controlled problem equation considered\]](#eqn relaxed controlled problem equation considered){reference-type="eqref" reference="eqn relaxed controlled problem equation considered"}, with the cost [\[eqn relaxed cost functional\]](#eqn relaxed cost functional){reference-type="eqref" reference="eqn relaxed cost functional"}, admits an optimal control.*

*Idea of the proof of Theorem [Theorem 31](#thm optimal control full noise){reference-type="ref" reference="thm optimal control full noise"}.* We do not give all the proof arguments here; we only give the arguments for the additional correction term that arises due to the non-linear part of the noise. In other words, we explain how the calculations change when the extra term $DG\big(m_n\big)\bigl(G(m_n)\bigr)$ is added, and we therefore discuss only the calculations corresponding to this term. The rest of the section is dedicated to this discussion. ◻

**Remark 32**. *We state here some observations about the correction term [\[eqn Stratonivich to Ito correction term\]](#eqn Stratonivich to Ito correction term){reference-type="eqref" reference="eqn Stratonivich to Ito correction term"}.*

1. *The term $K_1$ is linear in $m_n$.*

2. *The terms $K_{i},i=2,3,4$ are quadratic (in $m_n$).*

3. *The terms $K_{i},i=5,6$ are of third order (in $m_n$).*

## Some calculations for the correction term {#section Calculations for correction term}

In this subsection, we show some of the calculations needed to prove the uniform estimates when the full noise term is considered. The following lemma is an analogue of Lemma [Lemma 19](#lemma bounds 1){reference-type="ref" reference="lemma bounds 1"}.

**Lemma 33**. *There exists a constant $C>0$ such that the following holds for every $n\in\mathbb{N}$. $$\label{eqn minimizing sequence constraint condition full noise} \left| m_n \right|_{\mathbb{R}^3}= 1,$$ $$\label{eqn minimizing sequence L infinity L2 bound full noise} \sup_{t\in[0,T]} \left| m_n(t) \right|_{L^2}^2 \leq C,$$ $$\label{eqn minimizing sequence L infinity H1 bound full noise} \mathbb{E}^n \sup_{t\in[0,T]} \left| m_n(t) \right|_{H^1}^2 \leq C,$$ $$\label{eqn minimizing sequence mn times Delta mn bound full noise} \mathbb{E}^n \int_{0}^{T} \left| m_n(t) \times \Delta m_n(t) \right|_{L^2}^2 \, dt \leq C.$$*

*Proof.* The proof is in line with the proof of Lemma [Lemma 19](#lemma bounds 1){reference-type="ref" reference="lemma bounds 1"}. We do not give a complete proof of the result here. Instead, we give the arguments that, if added to the proof of Lemma [Lemma 19](#lemma bounds 1){reference-type="ref" reference="lemma bounds 1"}, complete the result.
That is, we only show some calculations related to the correction term ($DG(m_n)(G(m_n))$). The constraint condition holds because each $m_n$ is a part of a weak martingale solution to the problem [\[eqn relaxed controlled problem equation considered\]](#eqn relaxed controlled problem equation considered){reference-type="eqref" reference="eqn relaxed controlled problem equation considered"}. The inequality [\[eqn minimizing sequence L infinity L2 bound full noise\]](#eqn minimizing sequence L infinity L2 bound full noise){reference-type="eqref" reference="eqn minimizing sequence L infinity L2 bound full noise"} can be shown again by using the constraint condition. For the proof of inequality [\[eqn minimizing sequence L infinity H1 bound full noise\]](#eqn minimizing sequence L infinity H1 bound full noise){reference-type="eqref" reference="eqn minimizing sequence L infinity H1 bound full noise"}, we recall the inequality [\[eqn minimizing sequence main inequality 1\]](#eqn minimizing sequence main inequality 1){reference-type="eqref" reference="eqn minimizing sequence main inequality 1"}. We will, as done in the proof of [\[eqn mn L infinity H1 bound\]](#eqn mn L infinity H1 bound){reference-type="eqref" reference="eqn mn L infinity H1 bound"}, apply the Itô formula to the function $$v \mapsto \frac{1}{2} \left| \nabla v \right|_{L^2}^2.$$ The resulting equation will be as follows. $$\begin{aligned} \nonumber \frac{1}{2} \left| \nabla m_n(t) \right|_{L^2}^2 = & \frac{1}{2} \left| \nabla m_0 \right|_{L^2}^2 + \int_{0}^{t} \big\langle \big[ m_n(s) \times \Delta m_n(s) \\ \nonumber & \qquad - \alpha m_n(s) \times \left( m_n(s) \times \Delta m_n(s) \right) \big] , -\Delta m_n(s) \big\rangle_{L^2} \, ds \\ \nonumber & + \int_{0}^{t} \int_{\mathbb{U}} \bigg\langle \big[ m_n(s) \times L(m_n(s),v) \\ \nonumber & \qquad - \alpha m_n(s) \times \left( m_n(s) \times L(m_n(s),v) \right) \big] , -\Delta m_n(s) \bigg\rangle_{L^2} \, \lambda_n(dv,ds) \\ \nonumber & + \frac{1}{2} \int_{0}^{t} \left| \nabla G\left(m_n(s)\right) \right|_{L^2}^2 \, ds + \frac{1}{2} \int_{0}^{t} \left\langle DG\left(m_n(s)\right)\left[G\left(m_n(s)\right)\right] , -\Delta m_n(s) \right\rangle_{L^2} \, ds \\ \nonumber & + \int_{0}^{t} \left\langle G\left(m_n(s)\right) , -\Delta m_n(s) \right\rangle_{L^2} \, dW_n(s) \\ = & \frac{1}{2} \left| \nabla m_0 \right|_{L^2}^2 + \sum_{i=1}^{5} I_i(t). \end{aligned}$$ Since the constraint condition holds for each $m_n$, we do not show all the calculations for the uniform energy bounds; we only briefly outline the idea, which is as follows. The calculations for $I_1,I_2$ are the same as the corresponding integrals ($I_{1,1},I_2$) in the proof of Lemma [Lemma 19](#lemma bounds 1){reference-type="ref" reference="lemma bounds 1"}. We now show the calculations for the term $I_4$. The crux of the rest of the proof stays the same.\
**Calculations for $I_4$:**\
Recall the term $DG(m_n)(G(m_n))$ from [\[eqn Stratonivich to Ito correction term\]](#eqn Stratonivich to Ito correction term){reference-type="eqref" reference="eqn Stratonivich to Ito correction term"}.
We do not show the calculations for all the individual terms on the right hand side of [\[eqn Stratonivich to Ito correction term\]](#eqn Stratonivich to Ito correction term){reference-type="eqref" reference="eqn Stratonivich to Ito correction term"}. We show detailed calculations for one of the terms $K_6$. The others can be done in a similar spirit. $$\begin{aligned} \nonumber \left| \int_{0}^{t} K_6(s) \, ds \right| \leq & \left| \alpha \right| \int_{0}^{t} \left| \left\langle \left(m_n \times \big((m_n \times \big(m_n \times h)\big)\times h\big)\right) , \Delta m_n \right\rangle_{L^2} \right| \, ds \\ \nonumber = & \left| \alpha \right| \int_{0}^{t} \left| \left\langle \big((m_n \times \big(m_n \times h)\big)\times h\big) , m_n \times \Delta m_n \right\rangle_{L^2} \right| \, ds \\ \nonumber \leq & \left| \alpha \right| \int_{0}^{t} \left| \big((m_n \times \big(m_n \times h)\big)\times h\big) \right|_{L^2} \left| m_n \times \Delta m_n \right|_{L^2} \, ds \\ \nonumber \leq & \left| \alpha \right| \int_{0}^{t} \left| m_n \right|_{L^{\infty}}^2 \left| h \right|_{L^{\infty}}^2 \left| m_n \times \Delta m_n \right|_{L^2} \, ds \\ \nonumber \leq & \frac{2 \left| \alpha \right|^2 }{\varepsilon} \int_{0}^{t} \left| m_n \right|_{L^{\infty}}^4 \left| h \right|_{L^{\infty}}^4 \, ds + \frac{\varepsilon}{2} \int_{0}^{t} \left| m_n \times \Delta m_n \right|_{L^2}^2 \, ds \\ \leq & C(\varepsilon,h) + \frac{\varepsilon}{2} \int_{0}^{t} \left| m_n \times \Delta m_n \right|_{L^2}^2 \, ds. \end{aligned}$$ We recall here the equations [\[eqn minimizing sequence main inequality 1\]](#eqn minimizing sequence main inequality 1){reference-type="eqref" reference="eqn minimizing sequence main inequality 1"}-[\[eqn minimizing sequence main inequality 2\]](#eqn minimizing sequence main inequality 2){reference-type="eqref" reference="eqn minimizing sequence main inequality 2"}. The choice of $\varepsilon$ is made in the same spirit as done there. The rest of the proof is the same as that of Lemma [Lemma 19](#lemma bounds 1){reference-type="ref" reference="lemma bounds 1"}. ◻ ## Calculations for convergence: Continuing in the format of Section [4.3](#section convergence arguments){reference-type="ref" reference="section convergence arguments"}, we have the following lemma. Note that all the results from the previous section will continue to hold. That is, with the calculations in Section [5.1](#section Calculations for correction term){reference-type="ref" reference="section Calculations for correction term"}, the bounds established in Lemmas [Lemma 20](#lemma bounds p){reference-type="ref" reference="lemma bounds p"}, [Lemma 21](#lemma bounds 2){reference-type="ref" reference="lemma bounds 2"} will hold. Therefore, following Section [4.2](#section Use of the Skorohod Theorem){reference-type="ref" reference="section Use of the Skorohod Theorem"}, we can use the Skorohod Theorem to obtain a pointwise convergent sequence of processes (possibly on a different probability space) with the same laws as those of the given processes. We again use $m_n$ to denote the newly obtained processes. All we give here are some of the calculations and results corresponding to the correction term $DG(m_n)\left(G(m_n)\right)$. *A general idea of the calculations:.* We state some general idea. The terms in the correction term are polynomials in $m_n$ (see Remark [Remark 32](#remark observations about the correction term){reference-type="ref" reference="remark observations about the correction term"}). 
We add and subtract suitable terms, and then use the Hölder inequality in order to use the previously established bounds and convergences. ◻ **Lemma 34**. *Let $\phi\in L^4(\Omega : L^4(0,T:W^{1,4}))$. We have the following convergence. $$\begin{aligned} \lim_{n\to\infty}\mathbb{E} \int_{0}^{t} \left\langle DG(m_n)(G(m_n)) , \phi \right\rangle_{L^2} \, ds = \mathbb{E} \int_{0}^{t} \left\langle DG(m)(G(m)) , \phi \right\rangle_{L^2} \, ds. \end{aligned}$$* *Proof.* We again show calculations for one of the terms from [\[eqn Stratonivich to Ito correction term\]](#eqn Stratonivich to Ito correction term){reference-type="eqref" reference="eqn Stratonivich to Ito correction term"} for the convergence arguments. $$\begin{aligned} \nonumber & \bigg(m_n \times \big((m_n \times \big(m_n \times h)\big)\times h\big)\bigg) - \bigg(m \times \big((m \times \big(m \times h)\big)\times h\big)\bigg) \\ \nonumber = & \bigg(m_n \times \big((m_n \times \big(m_n \times h)\big)\times h\big)\bigg) - \bigg(m \times \big((m_n \times \big(m_n \times h)\big)\times h\big)\bigg) \\ \nonumber & + \bigg(m \times \big((m_n \times \big(m_n \times h)\big)\times h\big)\bigg) - \bigg(m \times \big((m \times \big(m_n \times h)\big)\times h\big)\bigg) \\ \nonumber & + \bigg(m \times \big((m \times \big(m_n \times h)\big)\times h\big)\bigg) - \bigg(m \times \big((m \times \big(m \times h)\big)\times h\big)\bigg) \\ \nonumber = & \left(m_n - m\right) \times \big((m_n \times \big(m_n \times h)\big)\times h\big) \\ \nonumber & + m \times \big(\big(m_n - m \big) \times \big(m_n \times h)\times h\big) \\ & + m \times \big((m \times \big( \left( m_n - m \right) \times h)\big)\times h\big). \end{aligned}$$ Therefore, $$\begin{aligned} \nonumber & \mathbb{E} \int_{0}^{T} \left| \left\langle \bigg(m_n \times \big((m_n \times \big(m_n \times h)\big)\times h\big)\bigg) - \bigg(m \times \big((m \times \big(m \times h)\big)\times h\big)\bigg) , \phi \right\rangle_{L^2} \right| \, ds \\ \nonumber \leq & \mathbb{E} \int_{0}^{T} \left| \left\langle \left(m_n - m\right) \times \big((m_n \times \big(m_n \times h)\big)\times h\big), \phi \right\rangle_{L^2} \right| \, ds \\ \nonumber & + \mathbb{E} \int_{0}^{T} \left| \left\langle m \times \big(\big(m_n - m \big) \times \big(m_n \times h)\times h\big) , \phi \right\rangle_{L^2} \right| \, ds \\ \nonumber & + \mathbb{E} \int_{0}^{T} \left| \left\langle m \times \big((m \times \big( \left( m_n - m \right) \times h)\big)\times h\big) , \phi \right\rangle_{L^2} \right| \, ds \\ = & I_1 + I_2 + I_3. \end{aligned}$$ We show calculations for one term. Others can be done similarly. $$\begin{aligned} \nonumber & \mathbb{E} \int_{0}^{T} \left| \left\langle \left(m_n - m\right) \times \big((m_n \times \big(m_n \times h)\big)\times h\big), \phi \right\rangle_{L^2} \right| \, ds \\ \nonumber \leq & \left| h \right|_{L^{\infty}} \left( \mathbb{E} \int_{0}^{T} \left| m_n - m\right|_{L^4}^4 \, ds \right)^{\frac{1}{4}} \left( \mathbb{E} \int_{0}^{T} \left|m\right|_{L^4}^4 \, ds \right)^{\frac{1}{2}} \left( \mathbb{E} \int_{0}^{T} \left|\phi\right|_{L^4}^4 \, ds \right)^{\frac{1}{4}} \\ \leq & C \left( \mathbb{E} \int_{0}^{T} \left| m_n - m\right|_{L^4}^4 \, ds \right)^{\frac{1}{4}}. %\to 0\ \text{as}\ n\to\infty \end{aligned}$$ The right-hand side of the above inequality converges to $0$ as $n\to\infty$. For the remaining convergence arguments, we refer the reader to Section 5 in [@ZB+BG+TJ_Weak_3d_SLLGE], see also [@ZB+BG+TJ_LargeDeviations_LLGE], the preprint [@ZB+UM+SG_2022Preprint_SLLGE_Control]. 
Even though the second and the third stated references consider the equation only in dimension $d=1$, the crux of the argument can be applied to this case as well. ◻
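We close this section by recording, for the reader's convenience, the standard $\mathbb{R}^{3}$ identities used freely in the calculations above (for instance, in the estimate for the term $K_{6}$ and in the proof of the previous lemma): for $a, b, c \in \mathbb{R}^{3}$, $$\left\langle a \times b , c \right\rangle_{\mathbb{R}^3} = \left\langle b , c \times a \right\rangle_{\mathbb{R}^3} = - \left\langle b , a \times c \right\rangle_{\mathbb{R}^3}, \qquad \left| a \times b \right|_{\mathbb{R}^3} \leq \left| a \right|_{\mathbb{R}^3} \left| b \right|_{\mathbb{R}^3}.$$ In particular, for any sufficiently regular vector field $B$, $$\left\langle m_n \times B , \Delta m_n \right\rangle_{L^2} = - \left\langle B , m_n \times \Delta m_n \right\rangle_{L^2},$$ which is the step used above to pass from $K_{6}$ to the term $m_n \times \Delta m_n$.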
arxiv_math
{ "id": "2309.12556", "title": "Relaxed optimal control for the stochastic Landau-Lifshitz-Gilbert\n equation", "authors": "Soham Gokhale", "categories": "math.OC math.AP math.PR", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We show that the derived categories of symmetric products of a curve are embedded into the derived categories of the moduli spaces of vector bundles of large ranks on the curve. It supports a prediction of the existence of a semiorthogonal decomposition of the derived category of the moduli space, expected by a motivic computation. As an application, we show that all Jacobian varieties, symmetric products of curves and all principally polarized abelian varieties of dimension at most three, are Fano visitors. We also obtain similar results for motives. address: - Kyoung-Seog Lee, Department of Mathematics, POSTECH, 77, Cheongam-ro, Nam-gu, Pohang-si, Gyeongsangbuk-do, 37673, Korea - Han-Bom Moon, Department of Mathematics, Fordham University, New York, NY 10023 author: - Kyoung-Seog Lee - Han-Bom Moon title: | Derived categories of symmetric products and\ moduli spaces of vector bundles on a curve --- # Introduction {#sec:intro} Let $X$ be a smooth projective curve of genus $g \ge 2$, and $L$ be a line bundle on $X$ of degree $d$. The moduli space $\mathrm{M} _{X}(r, L)$ of rank $r$, determinant $L$ semistable vector bundles on $X$ is one of the most intensively studied moduli spaces in the past decades. When $(r, d) = 1$, it is a smooth projective Fano variety of dimension $(r^{2}-1)(g-1)$ of index two [@Ram73]. ## Derived category of the moduli space of vector bundles Since the pioneering works of Narasimhan in [@Nar17; @Nar18] and Fonarev-Kuznetsov in [@FK18], many works have been done to understand the bounded derived category of coherent sheaves $\mathrm{D} ^{b}(\mathrm{M} _{X}(r, L))$ of the moduli space, in particular its semiorthogonal decomposition. Narasimhan, and independently Belmans--Galkin--Mukhopadhyay, conjectured that in the rank two case, $\mathrm{D} ^{b}(\mathrm{M} _{X}(2, L))$ has an explicit semiorthogonal decomposition [@Lee18; @BGM23], where all indecomposable components are equivalent to $\mathrm{D} ^{b}(X_{n})$, where $X_{n} = X^{n}/S_{n}$. A proof of this conjecture was recently announced by Tevelev and Torres [@TT21; @Tev23]. For the higher rank case, based on motivic computation in [@GL20], it has been conjectured that $\mathrm{D} ^{b}(\mathrm{M} _{X}(r, L))$ has a semiorthogonal decomposition whose components can be described in terms of symmetric products $X_{n}$ and its Jacobian $\mathrm{Jac}(X)$ [@GL20 Conjecture 1.3]. As the initial step, building upon earlier works of Narasimhan in [@Nar17; @Nar18], Fonarev-Kuznetsov in [@FK18], Belmans-Mukhopadhyay in [@BM19], we proved that $\mathrm{D} ^{b}(X)$ can be embedded into $\mathrm{D} ^{b}(\mathrm{M} _{X}(r, L))$ in [@LM21; @LM23] for any curve $X$, rank $r \geq 2$ and coprime degree $d$. In this paper, we extend this result to symmetric products. **Theorem 1**. *Suppose that $r > 2n$. Then $\mathrm{D} ^{b}(X_{n})$ is embedded into $\mathrm{D} ^{b}(\mathrm{M} _{X}(r, L))$.* As we can see below, this result implies that $\mathrm{D} ^{b}(\mathrm{Jac}(X))$ is embedded into $\mathrm{D} ^{b}(\mathrm{M} _{X}(r, L))$ if $r > 2g$. ## Fano visitor problem Mirror symmetry predicts that a mirror of a Fano variety is given by a Landau-Ginzburg model, and people have tried to understand Fano varieties via their Landau-Ginzburg mirrors. 
From this perspective, it is essential to know which categories can appear as semiorthogonal components of the derived categories of Fano varieties since we expect they will also appear as Fukaya-Seidel categories associated with some critical loci of the Landau-Ginzburg mirrors. On the other hand, studying semiorthogonal decompositions of derived categories of Fano varieties has played an important role in the theory of derived categories. It has many interesting (sometimes conjectural) consequences for birational geometry, especially rationality, mirror symmetry, moduli spaces of ACM/Ulrich bundles, motives, quantum cohomology, and other geometric properties of Fano varieties. When the derived category of a Fano variety contains the derived category of a projective variety, those two varieties are expected to interchange geometric information. See [@KKLL17; @KL23] and references therein for more details. Therefore, it is an interesting question which categories can be embedded into the derived categories of Fano varieties.

**Definition 1**. Let $V$ be a smooth projective variety and $W$ be a smooth projective Fano variety. If there is a fully faithful functor $\mathrm{D} ^{b}(V) \hookrightarrow \mathrm{D} ^{b}(W)$, we say $V$ is a *Fano visitor*, and $W$ is a *Fano host*. The smallest dimension of Fano hosts of $V$ is called the *Fano dimension* of $V$ and denoted by $\mathrm{Fdim}\;V$. If there is no Fano host, we say $\mathrm{Fdim}\;V = \infty$.

In 2011, Bondal asked the following fundamental question:

**Question 2** (Bondal). Is every smooth projective variety a Fano visitor?

In other words, he asked if $\mathrm{Fdim}\;V < \infty$ for every smooth projective variety $V$. It was predicted by Orlov [@Orl09 Conjecture 10] and recently proved by Olander [@Ola21 Theorem 2] that $\mathrm{D} ^{b}(V) \hookrightarrow \mathrm{D} ^{b}(W)$ implies $\dim V \le \dim W$. This implies that $\dim V \le \mathrm{Fdim}\;V$, and clearly $\dim V = \mathrm{Fdim}\;V$ if $V$ is a Fano variety. Thus, one can use the *Fano defect* $\mathrm{Fdim}\;V - \dim V$ to measure how far the given variety $V$ is from the class of Fano varieties.

There are a few cases in which the affirmative answer to the Fano visitor problem is known. Examples of Fano visitors include all curves [@Nar17; @Nar18; @FK18], all complete intersections [@KKLL17], Hirzebruch surfaces [@KL23], and general Enriques surfaces [@Kuz19]. Theorem [Theorem 1](#thm:embedding){reference-type="ref" reference="thm:embedding"} immediately implies that for any genus $g \ge 2$ curve $X$ and $n \in \mathbb{Z} _{\ge 0}$, its symmetric product $X_{n}$ is a Fano visitor, and $\mathrm{Fdim}\;X_{n} \le ((2n+1)^{2}-1)(g-1)$. On the other hand, $\mathrm{D} ^{b}(\mathrm{Jac}(X))$ is embedded into $\mathrm{D} ^{b}(X_{g})$ (Section [7](#sec:Fanovisitor){reference-type="ref" reference="sec:Fanovisitor"}). Thus, we obtain:

**Theorem 2**. *For any nonsingular projective curve $X$ of genus $g \ge 2$, its Jacobian $\mathrm{Jac}(X)$ is a Fano visitor, and $\mathrm{Fdim}\;\mathrm{Jac}(X) \le \dim \mathrm{M} _{X}(2g+1, L) = 4(g+1)g(g-1)$.*

## Idea of proof

Let $\mathcal{M} _{X}(r, L)$ be the moduli stack of rank $r$, determinant $L$ vector bundles on $X$. Let $\mathcal{E}$ be the universal bundle over $X \times \mathcal{M} _{X}(r, L)$. Choosing a section $\sigma : \mathrm{M} _{X}(r, L) \to \mathcal{M} _{X}(r, L)^{s}$, we have a *Poincaré bundle* $\sigma^{*}\mathcal{E}$ over $X \times \mathrm{M} _{X}(r, L)$.
Inspired by earlier works in [@LN21; @TT21], define the Fourier-Mukai kernel $\mathcal{F}$ over $X_{n} \times \mathrm{M} _{X}(r, L)$ by taking the $S_{n}$-invariant part of the push-forward $q_{*}(\bigotimes_{i} q_{i}^{*}\sigma^{*}\mathcal{E} )^{S_{n}}$, where $q : X^{n} \times \mathrm{M} _{X}(r, L) \to X_{n} \times \mathrm{M} _{X}(r, L)$ is the projection (Section [3.1](#ssec:kernel){reference-type="ref" reference="ssec:kernel"}). Then $\mathcal{F}$ is a rank $r^{n}$ vector bundle over $X_{n} \times \mathrm{M} _{X}(r, L)$. We consider the Fourier-Mukai transform $\Phi_{\mathcal{F} } : \mathrm{D} ^{b}(X_{n}) \to \mathrm{D} ^{b}(\mathrm{M} _{X}(r, L))$ and show that $\Phi_{\mathcal{F} }$ is fully-faithful. Applying the Bondal-Orlov criterion (Theorem [Theorem 20](#thm:BondalOrlov){reference-type="ref" reference="thm:BondalOrlov"}), the fully-faithfulness can be shown by evaluating cohomology groups of the form $\mathcal{F} _{\mathbf{p} } \otimes \mathcal{F} _{\mathbf{q} }^{*}$. Using the deformation of vector bundles, we may replace the problem by computation of cohomology of bundles of the form $\bigotimes_{i=1}S_{\lambda_{i}}\mathcal{E} _{p_{i}}$, where $p_{i}$ are distinct points and $S_{\lambda_{i}}\mathcal{E} _{p_{i}}$ are Schur functors of the bundle $\mathcal{E} _{p_{i}}$ associated to partitions $\lambda_{i}$. The cohomology groups of these 'standard' bundles can be evaluated by employing the Borel-Weil-Bott-Teleman theory (Section [5.1](#ssec:Teleman){reference-type="ref" reference="ssec:Teleman"}, [@Tel98]) once the bundles are over the moduli stack $\mathcal{M} _{X}(r, \mathcal{O} )$ of all bundles with trivial determinants. Under the numerical condition $r > 2n$, we identify these cohomologies with that over $\mathrm{M} _{X}(r, L)$ by studying the contribution of the unstable locus (Section [4](#sec:quantization){reference-type="ref" reference="sec:quantization"}, [@HL15]) and geometry of moduli spaces of parabolic bundles (Section [2.1](#ssec:moduliparabolic){reference-type="ref" reference="ssec:moduliparabolic"}). ## Some questions Here we leave some questions for the interested readers. We believe the large rank assumption ($r > 2n$) is not essential, but in our proof, it is required to eliminate the contribution of the unstable locus (Section [4.2](#ssec:quantizationmoduli){reference-type="ref" reference="ssec:quantizationmoduli"}) and realize cohomological boundedness via the deformation argument. Our combinatorial approach does not seem to work for larger $r$ (Section [5.2](#ssec:cohomologyonstack){reference-type="ref" reference="ssec:cohomologyonstack"}). **Question 3**. Can we lift the large rank assumption from Theorem [Theorem 1](#thm:embedding){reference-type="ref" reference="thm:embedding"}? From a motivic computation and earlier results [@GL20; @LM21; @LM23], it is expected that many copies of $\mathrm{D} ^{b}(X_{n})$ are embedded in $\mathrm{D} ^{b}(\mathrm{M} _{X}(r, L))$ and these copies are obtained by twisting the image with the pluricanonical divisor. **Question 4**. Find an explicit semiorthogonal decomposition of $\mathrm{D} ^{b}(\mathrm{M} _{X}(r, L))$. In our earlier work [@LM21; @LM23], the vanishing of cohomology was also used to show that $\mathcal{E} _{p}$ is an arithmetically Cohen-Macaulay (ACM) bundle on $\mathrm{M} _{X}(r, L)$ for any $p \in X$. **Question 5**. 
For any $\mathbf{p} \subset X$ and a collection of partitions $\lambda_{1}, \lambda_{2}, \cdots, \lambda_{k}$, under which conditions is the product of Schur functors $\bigotimes_{i=1}^{k}S_{\lambda_{i}}\mathcal{E} _{p_{i}}$ an ACM bundle? Using this, can we show that $\mathrm{M} _{X}(r, L)$ has nontrivial families of ACM bundles of arbitrarily large dimension? In other words, is $\mathrm{M} _{X}(r,L)$ of wild representation type [@CH11]?

A natural question that arises from Theorem [Theorem 2](#thm:Fanovisitor){reference-type="ref" reference="thm:Fanovisitor"} is the following.

**Question 6**. Is every abelian variety a Fano visitor?

It turns out that a parallel statement for motives holds, as follows. See Section [7](#sec:Fanovisitor){reference-type="ref" reference="sec:Fanovisitor"} for details.

**Proposition 7**. *All symmetric products of curves and abelian varieties are motivic Fano visitors.*

## Structure of the paper

Section [2](#sec:parabolic){reference-type="ref" reference="sec:parabolic"} reviews the moduli space/stack of parabolic bundles, functorial morphisms between them, Schur functors of the universal bundle, and the GIT construction. All results are classical. Section [3](#sec:BondalOrlov){reference-type="ref" reference="sec:BondalOrlov"} defines the Fourier-Mukai kernel $\mathcal{F}$. Section [4](#sec:quantization){reference-type="ref" reference="sec:quantization"} explains the negligibility of the contribution of unstable loci. In Section [5](#sec:boundedness){reference-type="ref" reference="sec:boundedness"}, employing the Borel-Weil-Bott-Teleman theory, we prove the boundedness and triviality of certain vector bundles, which is a necessary condition in the Bondal-Orlov criterion. Section [6](#sec:simple){reference-type="ref" reference="sec:simple"} shows the simplicity of the restricted Fourier-Mukai kernel and completes the proof of Theorem A. In the last section (Section [7](#sec:Fanovisitor){reference-type="ref" reference="sec:Fanovisitor"}), we prove Theorem B and discuss the Fano visitor problem for motives.

## Convention {#convention .unnumbered}

We work over $\mathbb{C}$. We use $X$ to denote a smooth projective curve of genus $g \geq 2$ and $L$ to denote a line bundle of degree $d$ on $X$.

Part of this work was done when the first author was working at the Institute of the Mathematical Sciences of the Americas, University of Miami as an IMSA Research Assistant Professor. He thanks Ludmil Katzarkov and the Simons Foundation for partially supporting this work via Simons Investigator Award-HMS. He also thanks Claire Voisin for helpful discussions and for telling him about her work on the Fano visitor problem for motives.

# Moduli space of parabolic bundles {#sec:parabolic}

In this section, we give an overview of the moduli space of parabolic bundles.

## Moduli spaces of parabolic bundles {#ssec:moduliparabolic}

Let $X$ be a smooth projective curve of genus $g$ and let $\mathbf{p} = (p_{1}, \cdots, p_{k})$ be an ordered set of distinct closed points on $X$. For notational simplicity, we only discuss parabolic bundles with full flags.

**Definition 8**. A *parabolic bundle* over $(X, \mathbf{p} )$ is a collection of data $(E, \{W_{\bullet}^{i}\})$ where

1. $E$ is a rank $r$ vector bundle over $X$;

2. For each $1 \le i \le k$, $W_{\bullet}^{i} \in \mathrm{Fl}(E|_{p_{i}})$; in other words, $W_{\bullet}^{i}$ is a strictly increasing sequence of subspaces of $E|_{p_{i}}$ as follows.
$$0 \subsetneq W_{1}^{i} \subsetneq W_{2}^{i} \subsetneq \cdots \subsetneq W_{r-1}^{i} \subsetneq E|_{p_{i}}$$ **Definition 9**. Let $\mathcal{M} _{X, \mathbf{p} }(r, L)$ (resp. $\mathcal{M} _{X, \mathbf{p} }(r, d)$) be the moduli stack of rank $r$, determinant $L$ (resp. degree $d$) parabolic bundles over $(X, \mathbf{p} )$. When $k = 0$, so there is no parabolic point, then $\mathcal{M} _{X, \mathbf{p} }(r, L)$ is the moduli stack $\mathcal{M} _{X}(r, L)$ of rank $r$ vector bundles. Let $\mathcal{E}$ be the universal bundle over $X \times \mathcal{M} _{X}(r, L)$. There is a forgetful morphism $\pi : \mathcal{M} _{X, \mathbf{p} }(r, L) \to \mathcal{M} _{X}(r, L)$ and for each point $[E] \in \mathcal{M} _{X}(r, L)$, its fiber $\pi^{-1}([E])$ is a product of flag varieties $\prod_{i=1}^{k}\mathrm{Fl}(E|_{p_{i}})$. Thus, $$\mathcal{M} _{X, \mathbf{p} }(r, L) = \times_{\mathcal{M} _{X}(r, L)}\mathrm{Fl}(\mathcal{E} |_{p_{i}}).$$ To obtain a separated moduli stack with projective moduli space, one can employ a stability condition. For the moduli space of vector bundles, there is a standard notion of slope stability, but for parabolic bundles, the stability condition depends on numerical data, and they form a family. **Definition 10**. A *parabolic weight* $\mathbf{a}$ is a collection of data $\mathbf{a} = (a_{\bullet}^{1}, \cdots, a_{\bullet}^{k})$ where each $a_{\bullet}^{i}$ is a length $r$ strictly decreasing sequence of real numbers $$1 > a_{1}^{i} > a_{2}^{i} > \cdots > a_{r-1}^{i} > a_{r}^{i} \ge 0.$$ If $a_{r}^{i} = 0$ for all $i$, we say $\mathbf{a}$ is *normalized*. For a pointed curve $(X, \mathbf{p} )$ with $|\mathbf{p} | = k$, the space of normalized parabolic weights is the interior of $\Delta_{r-1}^{k}$, where $\Delta_{r-1}$ is an $(r-1)$-dimensional simplex. Indeed, $\Delta_{r-1} = \{(x_{i}) \in \mathbb{R} ^{r}\;|\; \sum_{i=1}^{r} x_{i} = 1, x_{i} \ge 0\}$ and $\mathrm{int}\;\Delta_{r-1}$ is identified with the set of parabolic weights on a single point, after setting $a_{0}^{i} = 1$, via $a_{\bullet}^{i} \mapsto (a_{j}^{i} - a_{j+1}^{i})_{0 \le j \le r-1} \in \Delta_{r-1}$. **Definition 11**. Let $(E, \{W_{\bullet}^{i}\})$ be a parabolic bundle. The *parabolic degree* of $(E, \{W_{\bullet}^{i}\})$ with respect to $\mathbf{a}$ is $$\mathrm{pardeg}_{\mathbf{a} } (E, \{W_{\bullet}^{i}\}) := \deg E + \sum_{i=1}^{k}\sum_{j=1}^{r}a_{j}^{i}.$$ Its *parabolic slope* is $$\mu_{\mathbf{a} } (E, \{W_{\bullet}^{i}\}) = \frac{\mathrm{pardeg}_{\mathbf{a} } (E, \{W_{\bullet}^{i}\})}{\mathrm{rank}\; E}.$$ Fix a parabolic bundle $(E, \{W_{\bullet}^{i}\})$ over $(X, \mathbf{p} )$. Let $F \subset E$ be a subbundle. For each point $p_{i}$, consider a (non-strictly increasing) filtration $$W_{1}^{i} \cap F|_{p_{i}} \subset W_{2}^{i} \cap F|_{p_{i}} \subset \cdots \subset W_{r-1}^{i} \cap F|_{p_{i}}$$ of $F|_{p_{i}}$. We define a full flag $W|_{F \bullet}^{i}$ of $F|_{p_{i}}$, by taking $(W|_{F}^{i})_{j}$ as $W_{\ell}^{i} \cap F_{p_{i}}$ with the smallest index $\ell$ such that $\dim (W_{\ell}^{i} \cap F_{p_{i}}) = j$. We also define the induced parabolic weight $\mathbf{b} = (\mathbf{b} _{\bullet}^{k})$ as $\mathbf{b} _{j}^{i} = \mathbf{a} _{\ell}^{i}$, hence a (non-normalized) *parabolic subbundle* $(F, \{W|_{F \bullet}^{i}\})$. By taking the quotient bundle $Q = E/F$ and the quotient filtration $\mathrm{im}\, (W_{j}^{i} \to Q|_{p_{i}})$, one can define a *quotient parabolic bundle* $(Q, \{(W/F)_{\bullet}^{i}\})$ in a similar way. 
By taking the quotient bundle $Q = E/F$ and the quotient filtration $\mathrm{im}\, (W_{j}^{i} \to Q|_{p_{i}})$, one can define a *quotient parabolic bundle* $(Q, \{(W/F)_{\bullet}^{i}\})$ in a similar way. We define the quotient parabolic weight $\mathbf{c}$ on $(Q, \{(W/F)_{\bullet}^{i}\})$ by taking the complementary weight data of $\mathbf{b}$. **Definition 12**. Fix a parabolic weight $\mathbf{a}$. We say a parabolic bundle $(E, \{W_{\bullet}^{i}\})$ is *$\mathbf{a}$-(semi)-stable* if for any parabolic subbundle $(F, \{W|_{F \bullet}^{i}\})$ with induced parabolic weight $\mathbf{b}$, $$\mu_{\mathbf{b} }(F, \{W|_{F \bullet}^{i}\}) (\le) < \mu_{\mathbf{a} }(E, \{W_{\bullet}^{i}\}).$$ Let $\mathcal{M} _{X, \mathbf{p} }(r, L, \mathbf{a} ) \subset \mathcal{M} _{X, \mathbf{p} }(r, L)$ be the substack of $\mathbf{a}$-semistable parabolic bundles. There is a good moduli space morphism $p : \mathcal{M} _{X, \mathbf{p} }(r, L, \mathbf{a} ) \to \mathrm{M} _{X, \mathbf{p} }(r, L, \mathbf{a} )$, and $\mathrm{M} _{X, \mathbf{p} }(r, L, \mathbf{a} )$ is a normal projective variety. If $\mathbf{a}$ is general, then stability and semistability coincide, and $\mathrm{M} _{X, \mathbf{p} }(r, L, \mathbf{a} )$ is a smooth projective variety [@MS80]. When $k = 0$, we denote the moduli stack of stable (resp. semistable) bundles by $\mathcal{M} _{X}(r, L)^{s}$ (resp. $\mathcal{M} _{X}(r, L)^{ss}$). ## Functorial morphisms {#ssec:functorial} Because of the connection between type A conformal blocks and the moduli stack of (untwisted) principal parabolic $\mathrm{SL}_{r}$-bundles [@BL94; @Pau96; @MY20; @MY21], the case of $L = \mathcal{O}$ has been spelled out most explicitly in the literature. For a general $L \in \mathrm{Pic} ^{d}(X)$, we may describe $\mathrm{M} _{X, \mathbf{p} }(r, L, \mathbf{a} )$ as a contraction of $\mathrm{M} _{X, \mathbf{p} '}(r, \mathcal{O} , \mathbf{a} ')$ for some $\mathbf{p} '$ and $\mathbf{a} '$. In this section, we describe functorial morphisms between moduli stacks and moduli spaces. Let $\mathbf{p} := (p_{1}, \cdots, p_{k})$ and $\mathbf{p} ' := \mathbf{p} \sqcup \{p_{k+1}\}$. Fix $d$ such that $1 \le d \le r-1$. For a parabolic bundle $(E, \{W_{\bullet}^{i}\}_{1 \le i \le k+1}) \in \mathcal{M} _{X, \mathbf{p} '}(r, \mathcal{O} )$, consider the following epimorphism $$\label{eqn:fiberwisemodification} E \to E|_{p_{k+1}} \to E|_{p_{k+1}}/W_{d}^{k+1} \to 0.$$ Let $E_{d-r}$ be the kernel. Then $E_{d-r}$ is a vector bundle with determinant $\mathcal{O} (-(r-d)p_{k+1}) = \mathcal{O} ((d-r)p_{k+1})$. Forgetting all flags on $p_{k+1}$, we have an induced parabolic bundle $(E, \{W_{\bullet}^{i}\}_{1 \le i \le k})$ over $(X, \mathbf{p} )$. The map $(E, \{W_{\bullet}^{i}\}_{1 \le i \le k+1}) \mapsto (E_{d-r}, \{W_{\bullet}^{i}\}_{1 \le i \le k})$ induces a morphism of stacks $$m_{d} : \mathcal{M} _{X, \mathbf{p} '}(r, \mathcal{O} ) \to \mathcal{M} _{X, \mathbf{p} }(r, \mathcal{O} ((d-r)p_{k+1})).$$ By selecting stability conditions appropriately, we may induce a morphism between their good moduli spaces. Let $\mathbf{a} \in \Delta_{r-1}^{k}$ be a general parabolic weight. For the $(k+1)$-st point, we define $a_{\bullet}^{k+1}$ as $a_{j}^{k+1} < \epsilon$ for $j > d$ and $a_{j}^{k+1} > 1 - \epsilon$ for $j \le d$, for sufficiently small $\epsilon > 0$. We set $\mathbf{a} ' := \mathbf{a} \cup a_{\bullet}^{k+1}$. 
By comparing the stabilities (consult the proof of [@LM23 Proposition 2.9]), we have an induced morphism of stacks $$m_{d} : \mathcal{M} _{X, \mathbf{p} '}(r, \mathcal{O} , \mathbf{a} ') \to \mathcal{M} _{X, \mathbf{p} }(r, \mathcal{O} ((d-r)p_{k+1}), \mathbf{a} )$$ and the corresponding morphism between their good moduli spaces (we use the same notation if there is no chance of confusion) $$\label{eqn:modification} m_{d} : \mathrm{M} _{X, \mathbf{p} '}(r, \mathcal{O} , \mathbf{a} ') \to \mathrm{M} _{X, \mathbf{p} }(r, \mathcal{O} ((d-r)p_{k+1}), \mathbf{a} ).$$ On the other hand, by tensoring $E$ with an appropriate line bundle $A$, we obtain an isomorphism $$\label{eqn:twistingiso} \begin{split} \mathcal{M} _{X, \mathbf{p} }(r, L, \mathbf{a} ) &\cong \mathcal{M} _{X, \mathbf{p} }(r, L \otimes A^{r}, \mathbf{a} ),\\ (E, \{W_{\bullet}^{i}\}) & \mapsto (E \otimes A, \{W_{\bullet}^{i}\}). \end{split}$$ Therefore, if $\deg (L_{1} \otimes L_{2}^{-1})$ is a multiple of $r$, $\mathcal{M} _{X, \mathbf{p} }(r, L_{1}) \cong \mathcal{M} _{X, \mathbf{p} }(r, L_{2})$ and there are similar isomorphisms between moduli stacks of $\mathbf{a}$-semistable bundles and their good moduli spaces. By composing [\[eqn:twistingiso\]](#eqn:twistingiso){reference-type="eqref" reference="eqn:twistingiso"} and [\[eqn:modification\]](#eqn:modification){reference-type="eqref" reference="eqn:modification"}, if $\deg L = d$, we obtain a morphism $\mathrm{M} _{X, \mathbf{p} '}(r, \mathcal{O} , \mathbf{a} ') \to \mathrm{M} _{X, \mathbf{p} }(r, L, \mathbf{a} )$, induced by $m_{d}$. Assume further that $(r, d) = 1$ and $\mathbf{a}$ is sufficiently small, in the sense that $\sum_{i=1}^{k}\sum_{j=1}^{r-1}a_{j}^{i} < \epsilon$. Then the forgetful morphism $\pi : \mathcal{M} _{X, \mathbf{p} }(r, L) \to \mathcal{M} _{X}(r, L)$ induces $$\begin{split} \pi : \mathcal{M} _{X, \mathbf{p} }(r, L, \mathbf{a} ) &\to \mathcal{M} _{X}(r, L)^{s}\\ (E, \{W_{\bullet}^{i}\}) &\mapsto E, \end{split}$$ and the fiber is a product of flag varieties, because the flag structure does not affect the stability computation. In summary, if $\mathbf{a}$ is sufficiently small, we have the following commutative diagram $$\label{eqn:commutativediagram} \xymatrix{&\mathcal{M} _{X, \mathbf{p} '}(r, \mathcal{O} ) \ar[r]^{m_{d}} \ar@/_4.0pc/[ddd]_{\pi}& \mathcal{M} _{X, \mathbf{p} }(r, L)\ar@/^4.0pc/[ddd]^{\pi}\\ \mathrm{M} _{X, \mathbf{p} '}(r, \mathcal{O} , \mathbf{a} ') \ar@{-->}[d]^{\pi}&\mathcal{M} _{X, \mathbf{p} '}(r, \mathcal{O} , \mathbf{a} ') \ar[l]_{p}\ar[u]_{j} \ar[r]^{m_{d}} \ar@{-->}[d]^{\pi}& \mathcal{M} _{X, \mathbf{p} }(r, L, \mathbf{a} ) \ar[u]_{j} \ar[d]^{\pi} \ar[r]^{p} & \mathrm{M} _{X, \mathbf{p} }(r, L, \mathbf{a} ) \ar[d]^{\pi}\\ \mathrm{M} _{X}(r, \mathcal{O} ) &\mathcal{M} _{X}(r, \mathcal{O} )^{ss} \ar[l]_{p} \ar[d]^{j} & \mathcal{M} _{X}(r, L)^{s} \ar[r]^{p} \ar[d]^{j}& \mathrm{M} _{X}(r, L)\\ & \mathcal{M} _{X}(r, \mathcal{O} ) & \mathcal{M} _{X}(r, L). }$$ Each $\pi$ is a forgetful map, $p$ is a good moduli map, and $j$ is a natural inclusion. Note that some of the maps $\pi$ on the left-hand side are not regular morphisms but are only defined on an open substack. ## Schur functors and Borel-Weil-Bott theorem {#ssec:Schur} Let $F$ be a rank $r$ vector bundle on a stack $\mathcal{M}$. For a partition $\lambda = (\lambda_{1} \ge \lambda_{2} \ge \cdots \ge \lambda_{k} > 0)$ of $n$ with at most $r$ parts, we denote the associated Schur functor bundle by $S_{\lambda}F$. For instance, if $\lambda = (n)$, $S_{\lambda}F = \mathrm{Sym}^{n}F$. 
If $\lambda = (1, 1, \cdots, 1)$, $S_{\lambda}F = \wedge^{n}F$. Equivalently, following the standard representation theory of $\mathfrak{sl}_{r}$, any partition $\lambda$ can be understood as a dominant integral weight $\lambda = \sum a_{j}\omega_{j}$. Here $\omega_{j}$ is the $j$-th fundamental weight. Then $a_{j} = \lambda_{j} - \lambda_{j+1}$. Let $\mathcal{E}$ be the universal bundle over $X \times \mathcal{M} _{X}(r, L)$. Note that there is a forgetful $\mathcal{M} _{X}(r, L)$-morphism $\mathcal{M} _{X, \mathbf{p} }(r, L) \to \mathrm{Fl}(\mathcal{E} |_{p_{i}})$. For each partition $\lambda$ with at most $r$ parts, there is an associated line bundle $L_{p_{i}, \lambda}$ over $\mathrm{Fl}(\mathcal{E} |_{p_{i}})$. By taking the pull-back, we have a line bundle $L_{p_{i}, \lambda}$ over $\mathcal{M} _{X, \mathbf{p} }(r, L)$. By the Borel-Weil-Bott theorem, we have $$\label{eqn:pushforwardformula} \pi_{*}L_{p_{i}, \lambda} = S_{\lambda}\mathcal{E} _{p_{i}}.$$ Furthermore, using the Leray spectral sequence, one can identify their cohomology groups: $$\label{eqn:cohomologycomparison} \mathrm{H} ^{*}(\mathcal{M} _{X, \mathbf{p} }(r, L), L_{p_{i}, \lambda}) \cong \mathrm{H} ^{*}(\mathcal{M} _{X}(r, L), S_{\lambda}\mathcal{E} _{p_{i}}).$$ When $\mathbf{a}$ is small, the restricted map $\pi : \mathcal{M} _{X, \mathbf{p} }(r, L, \mathbf{a} ) \to \mathcal{M} _{X}(r, L)^{s}$ is also a fibration with the same fiber, because $\mathcal{M} _{X, \mathbf{p} }(r, L, \mathbf{a} ) = \mathcal{M} _{X}(r, L)^{s} \times_{\mathcal{M} _{X}(r, L)}\mathcal{M} _{X, \mathbf{p} }(r, L)$. So the same cohomology formula holds: $$\label{eqn:cohomologycomparison2} \mathrm{H} ^{*}(\mathcal{M} _{X, \mathbf{p} }(r, L, \mathbf{a} ), L_{p_{i}, \lambda}) \cong \mathrm{H} ^{*}(\mathcal{M} _{X}(r, L)^{s}, S_{\lambda}\mathcal{E} _{p_{i}}).$$ On the other hand, note that the morphism $m_{d} : \mathcal{M} _{X, \mathbf{p} '}(r, \mathcal{O} ) \to \mathcal{M} _{X, \mathbf{p} }(r, L)$ in [\[eqn:fiberwisemodification\]](#eqn:fiberwisemodification){reference-type="eqref" reference="eqn:fiberwisemodification"} makes no change along $p_{i}$ for $1 \le i \le k$. Thus, we have $$\label{eqn:pullbackoflambda} m_{d}^{*}L_{p_{i}, \lambda} = L_{p_{i}, \lambda}.$$ From now on, for notational simplicity, we will suppress the pull-back $m_{d}^{*}$ if there is no chance of confusion. ## GIT construction {#ssec:GITconstruction} The moduli space $\mathrm{M} _{X, \mathbf{p} }(r, L, \mathbf{a} )$ of semistable parabolic bundles can also be constructed by GIT. In this section, we briefly describe a construction. Fix a degree one line bundle $\mathcal{O} _{X}(1)$ over $X$. Take a sufficiently large $m \in \mathbb{Z}$ so that $\mathrm{H} ^{1}(E(m)) = 0$ and $E(m)$ is globally generated for all $(E, \{W_{\bullet}^{i}\}) \in \mathcal{M} _{X, \mathbf{p} }(r, L, \mathbf{a} )$. Let $\chi_{m} := \dim \mathrm{H} ^{0}(E(m)) = d + r(m + 1-g)$. Let $\mathbf{Q} (m) = \mathrm{Quot}(\mathcal{O} _{X}^{\chi_{m}})$ be the quot scheme parametrizing quotients $\mathcal{O} _{X}^{\chi_{m}} \to F \to 0$ such that the Hilbert polynomial of $F$ is that of $E(m)$. Let $\mathbf{R} (m) \subset \mathbf{Q} (m)$ be the locally closed subscheme parametrizing the quotients $\mathcal{O} _{X}^{\chi_{m}} \stackrel{\varphi}{\to} F \to 0$ such that $\mathrm{H} ^{1}(F) = 0$, $\mathrm{H} ^{0}(\mathcal{O} _{X}^{\chi_{m}}) \stackrel{\mathrm{H} ^{0}(\varphi)}{\cong} \mathrm{H} ^{0}(F) \cong \mathbb{C} ^{\chi_{m}}$, $F$ is locally free, and $\wedge^{r}F \cong L(rm)$. 
For the universal quotient $\mathcal{O} _{X \times \mathbf{R} (m)}^{\chi_{m}} \to \mathcal{F} \to 0$, let $$\widetilde{\mathbf{R} }(m) := \times_{\mathbf{R} (m)}\mathrm{Fl}(\mathcal{F} |_{p_{i}})$$ be the fiber product of full-flag bundles. Then $\widetilde{\mathbf{R} }(m)$ admits an $\mathrm{SL}_{\chi_{m}}$-action, and $\mathrm{M} _{X, \mathbf{p} }(r, L, \mathbf{a} )$ is constructed as a GIT quotient $\widetilde{\mathbf{R} }(m)/\!/ \mathrm{SL}_{\chi_{m}}$ with a certain linearization. The linearization is constructed explicitly in [@Bho89]. Let $Z := \mathbb{P} \mathrm{Hom} (\wedge^{r}\mathbb{C} ^{\chi_{m}}, \mathrm{H} ^{0}(L(rm)))^{*}$. Then for any $[\mathcal{O} _{X}^{\chi_{m}} \stackrel{\varphi}{\to} F \to 0] \in \mathbf{R} (m)$, we assign $$\wedge^{r}\mathbb{C} ^{\chi_{m}} \stackrel{\wedge^{r}\varphi}{\to}\wedge^{r}\mathrm{H} ^{0}(F) \to \mathrm{H} ^{0}(\wedge^{r}F) \cong \mathrm{H} ^{0}(L(rm)).$$ This assignment induces a morphism $\mathbf{R} (m) \to Z$, which is indeed an embedding [@Tha96 Section 7]. Furthermore, for each $p_{i} \in \mathbf{p}$, we have an evaluation map $\psi_{i} : \mathbb{C} ^{\chi_{m}} \cong \mathrm{H} ^{0}(F) \to F|_{p_{i}}$. Then for each $W_{j}^{i}$, we have $\psi_{i}^{-1}(W_{j}^{i}) \in \mathrm{Gr}(\chi_{m} - r + j, \mathbb{C} ^{\chi_{m}})$. Therefore, we have a morphism $$\label{eqn:GITembedding} \widetilde{\mathbf{R} }(m) \to Z \times \prod_{i=1}^{k}\prod_{j=1}^{r-1}\mathrm{Gr}(\chi_{m}-r+j, \mathbb{C} ^{\chi_{m}}).$$ In [@Bho89], Bhosle described an explicit ample line bundle $A(\mathbf{a} )$ on the right-hand side, such that $\widetilde{\mathbf{R} }(m) /\!/ _{A(\mathbf{a} )} \mathrm{SL}_{\chi_{m}} \cong \mathrm{M} _{X, \mathbf{p} }(r, L, \mathbf{a} )$. On the atlas $\widetilde{\mathbf{R} }(m) \to [\widetilde{\mathbf{R} }(m)/\mathrm{SL}_{\chi_{m}}]$, note that the line bundle $L_{p_{i}, \omega_{j}}$ is given by the restriction of $\mathcal{O} _{\mathrm{Gr}(\chi_{m} - r + j, \mathbb{C} ^{\chi_{m}})}(1)$ for the $i$-th factor in [\[eqn:GITembedding\]](#eqn:GITembedding){reference-type="eqref" reference="eqn:GITembedding"}. A parabolic bundle $(E, \{W_{\bullet}^{i}\})$ is $\mathbf{a}$-unstable if it has a maximal destabilizing parabolic subbundle $(F, \{W|_{F \bullet^{i}}\})$ such that $\mu_{\mathbf{b} }(F, \{W|_{F \bullet}^{i}\}) > \mu_{\mathbf{a} }(E, \{W_{\bullet}^{i}\})$. If we set $s = \mathrm{rank}\; F$, $e = \deg F$, and $J^{i} \subset [r]$ such that $$\mu_{\mathbf{b} }(F, \{W|_{F \bullet}^{i}\}) = \frac{e + \sum_{i=1}^{k}\sum_{j \in J^{i}}a_{j}^{i}}{s},$$ this irreducible component is indexed by the numerical triple $(s, e, \{J^{i}\})$. **Definition 13**. We denote the unstable stratum associated to the numerical data $(s, e, \{J^{i}\})$ by $S_{(s, e, \{J^{i}\})} \subset \widetilde{\mathbf{R} }(m)$. Under the condition $L = \mathcal{O}$, the codimension of the unstable locus is evaluated by Sun [@Sun00]. See [@MY20 Section 3.2] for a summary. In particular, in [@MY20 p.254, (3.2)], it was shown that codimension of $S_{(s, e, \{J^{i}\})}$ is given by $$s(r-s)(g-1) + \sum_{i=1}^{k}\mathrm{codim}\; Y^{i} + re,$$ where $Y^{i} \subset \mathrm{Fl}(\mathbb{C} ^{r})$ is a certain flag variety. In [@Sun00 Lemma 5.2] (see also [@MY20 p.254]), it was shown that $\sum_{i=1}^{k}\mathrm{codim}\; Y^{i} + re$ is positive. In summary, we have: **Lemma 14**. 
*The codimension of the unstable locus $S_{(s, e, \{J^{i}\})}$ is at least $s(r-s)(g-1) + 1$.* # Bondal-Orlov criterion and cohomology of line bundles {#sec:BondalOrlov} Many natural functors between two derived categories of algebraic varieties are constructed as Fourier-Mukai transforms $\Phi_{\mathcal{F} }$. In this section, we describe the Fourier-Mukai kernel that we will use and reformulate the Bondal-Orlov criterion for the fully-faithfulness of $\Phi_{\mathcal{F} }$. ## Fourier-Mukai kernel {#ssec:kernel} Let $X^{n}$ be the product of $n$ copies of $X$ and let $q_{i} : X^{n} \to X$ be the projection to its $i$-th factor. There is a natural $S_{n}$-action on $X^{n}$. We denote the quotient map $q : X^{n} \to X_{n} = X^{n}/S_{n} \cong \mathrm{Hilb}^{n}(X)$, which is a finite flat morphism. This construction can be relativized, obviously. For any scheme or stack $\mathcal{M}$, we have a quotient map $q_{\mathcal{M} } : X^{n} \times \mathcal{M} \to X_{n} \times \mathcal{M}$. If there is no chance of confusion, we will suppress the subscript $\mathcal{M}$ and denote it by $q$. Let $\mathcal{E}$ be the universal bundle on the moduli stack $\mathcal{M} _{X}(r, L)$. Let $\mathcal{M} _{X}(r, L)^{ss} \subset \mathcal{M} _{X}(r, L)$ be the open substack of the semistable bundles and $p : \mathcal{M} _{X}(r, L)^{ss} \to \mathrm{M} _{X}(r, L)$ be the good moduli space morphism. Suppose that $(r, d) = 1$, where $d = \deg L$. Then there is a section $\sigma : \mathrm{M} _{X}(r, L) \to \mathcal{M} _{X}(r, L)^{ss} = \mathcal{M} _{X}(r, L)^{s}$. A *Poincaré bundle* on the coarse moduli space is $\sigma^{*}\mathcal{E}$, the pull-back of the universal bundle over the moduli stack. Note that it depends on the choice of a section $\sigma$, which is equivalent to a choice of a line bundle on $\mathrm{M} _{X}(r, L)$. **Definition 15**. Consider a Poincaré bundle $\sigma^{*}\mathcal{E}$ over $X \times \mathrm{M} _{X}(r, L)$. We set $$(\sigma^{*}\mathcal{E} )^{\otimes n} := \bigotimes_{i=1}^{n}q_{i}^{*}\sigma^{*}\mathcal{E} .$$ Then $(\sigma^{*}\mathcal{E} )^{\otimes n}$ is an $S_{n}$-equivariant bundle of rank $r^{n}$. By taking the invariant functor, we obtain a vector bundle $$\label{eqn:kernelF} \mathcal{F} := (q_{*}(\sigma^{*}\mathcal{E} )^{\otimes n})^{S_{n}}$$ on $X_{n} \times \mathrm{M} _{X}(r, L)$ [@TT21 Lemma 2.1]. **Definition 16**. For $\mathbf{p} = (\sum p_{j})\in X_{n}$, let $\mathcal{F} _{\mathbf{p} } = \iota_{\mathbf{p} }^{*}\mathcal{F}$ be the restriction of $\mathcal{F}$ by $\iota_{\mathbf{p} } : \{\mathbf{p} \} \times \mathrm{M} _{X}(r, L) \hookrightarrow X_{n}\times \mathrm{M} _{X}(r, L)$. **Lemma 17** (). *The restricted bundle $\mathcal{F} _{\mathbf{p} }$ is a deformation of $\bigotimes_{i}q_{i}^{*}\sigma^{*}\mathcal{E} _{p_{i}}$. In other words, there is a family of bundles over $\mathbb{A} ^{1} \times \mathrm{M} _{X}(r, L)$ such that the restriction to $\{0\} \times \mathrm{M} _{X}(r, L)$ is isomorphic to $\bigotimes_{i}q_{i}^{*}\sigma^{*}\mathcal{E} _{p_{i}}$ and that to $\{t\} \times \mathrm{M} _{X}(r, L)$ for $t \ne 0$ is isomorphic to $\mathcal{F} _{\mathbf{p} }$.* **Definition 18**. Let $\Phi_{\mathcal{F} } : \mathrm{D} ^{b}(X_{n}) \to \mathrm{D} ^{b}(\mathrm{M} _{X}(r, L))$ be the Fourier-Mukai transform with the kernel $\mathcal{F}$ in [\[eqn:kernelF\]](#eqn:kernelF){reference-type="eqref" reference="eqn:kernelF"}, that is, $$\Phi_{\mathcal{F} }(E^{\bullet}) = Rp_{2 *}(Lp_{1}^{*}E^{\bullet} \otimes^{L} \mathcal{F} ).$$ **Remark 19**. 
The definition of the Fourier-Mukai kernel $\mathcal{F}$ and the Fourier-Mukai transform $\Phi_{\mathcal{F} }$ depend on a choice of Poincaré bundle $\sigma^{*}\mathcal{E}$. To be more precise, we denote $\mathcal{F} ^{\sigma} := (q_{*}(\sigma^{*}\mathcal{E} )^{\otimes n})^{S_{n}}$. Then for two sections $\sigma, \sigma' : \mathrm{M} _{X}(r, L) \to \mathcal{M} _{X}(r, L)^{s}$, ${\sigma'}^{*}\mathcal{E} = (\sigma^{*}\mathcal{E} ) \otimes p_{2}^{*}A$ for some $A \in \mathrm{Pic} (\mathrm{M} _{X}(r, L))$. Then $$\mathcal{F} ^{\sigma'} = (q_{*}({\sigma'}^{*}\mathcal{E} )^{\otimes n})^{S_{n}} \cong (q_{*}(\sigma^{*}\mathcal{E} \otimes p_{2}^{*}A)^{\otimes n})^{S_{n}} \cong (q_{*}(\sigma^{*}\mathcal{E} )^{\otimes n})^{S_{n}} \otimes p_{2}^{*}A^{n} = \mathcal{F} ^{\sigma}\otimes p_{2}^{*}A^{n}.$$ By the projection formula, $$\Phi_{\mathcal{F} ^{\sigma'}}(E^{\bullet}) = \Phi_{\mathcal{F} ^{\sigma}}(E^{\bullet}) \otimes A^{n}.$$ Therefore, the fully-faithfulness does not change and we may choose any Poincaré bundle. We will prove Theorem [Theorem 1](#thm:embedding){reference-type="ref" reference="thm:embedding"} by showing that the functor $\Phi_{\mathcal{F} } : \mathrm{D} ^{b}(X_{n}) \to \mathrm{D} ^{b}(\mathrm{M} _{X}(r, L))$ is fully-faithful. ## Bondal-Orlov criterion We show the fully-faithfulness by employing the classical result of Bondal and Orlov [@BO95 Theorem 1.1]. We state a version adapted to our situation. **Theorem 20** (Bondal-Orlov criterion). *The functor $\Phi_{\mathcal{F} }$ is fully faithful if and only if the following three conditions hold:* 1. *(Simplicity) $\mathrm{H} ^{0}(\mathrm{M} _{X}(r, L), \mathcal{F} _{\mathbf{p} } \otimes \mathcal{F} _{\mathbf{p} }^{*}) \cong \mathbb{C}$.* 2. *(Cohomological boundedness) $\mathrm{H} ^{i}(\mathrm{M} _{X}(r, L), \mathcal{F} _{\mathbf{p} } \otimes \mathcal{F} _{\mathbf{p} }^{*}) = 0$ for $i > n$.* 3. *(Cohomological triviality) $\mathrm{H} ^{i}(\mathrm{M} _{X}(r, L), \mathcal{F} _{\mathbf{p} } \otimes \mathcal{F} _{\mathbf{q} }^{*}) = 0$ for all $\mathbf{p} \ne \mathbf{q} \in X_{n}$ and $i \in \mathbb{Z}$.* **Remark 21**. We say a coherent sheaf $F$ on an algebraic stack (or a scheme) $\mathcal{M}$ is *cohomologically bounded up to degree $n$* if $\mathrm{H} ^{i}(\mathcal{M} , F) = 0$ for all $i > n$. $F$ is *cohomologically trivial* if $\mathrm{H} ^{i}(\mathcal{M} , F) = 0$ for all $i \ge 0$. # Quantization {#sec:quantization} The cohomological boundedness/triviality on the coarse moduli space will be shown by comparing the cohomology groups with those on the moduli stack. To do this, we need to estimate the effect of the unstable strata. By employing [@HL15], we show that the contribution of the unstable loci is negligible if $n$ is relatively small. ## Contribution of unstable loci {#ssec:unstableloci} Let $V$ be a smooth quasi-projective variety with a reductive group $G$ action. Let $A$ be a $G$-linearization on $V$. We review [@HL15] which explains how to compare $\mathrm{D} ^{b}([V/G])$ and $\mathrm{D} ^{b}([V^{ss}(A)/G])$ where the latter quotient stack $[V^{ss}(A)/G]$ has a good moduli space $V/\!/ _{A}G$ (thus $\pi^{*} : \mathrm{D} ^{b}(V/\!/ _{A}G) \to \mathrm{D} ^{b}([V^{ss}(A)/G])$ is fully-faithful). The Kempf-Ness stratification of $V$ can be constructed as follows. For each one-parameter subgroup $\lambda : \mathbb{C} ^{*} \to G$, let $Z \subset V^{\lambda}$ be an irreducible component of the torus fixed locus $V^{\lambda}$. 
Then, one may compute the numerical invariant $$\mu(\lambda, Z) := -\frac{\mathrm{wt}_{\lambda}A|_{Z}}{|\lambda|}.$$ Take a pair $(\lambda, Z)$ such that $\mu(\lambda, Z)$ is the largest positive one. We set $Y_{\lambda, Z} := \{x \in V\;|\; \lim_{t \to 0}\lambda(t)\cdot x \in Z\}$ and $S_{\lambda, Z} := G\cdot Y_{\lambda, Z}$. Then $S_{\lambda, Z}$ is a stratum in the unstable locus. One can continue this construction by starting with $V \setminus S_{\lambda, Z}$, and the $G$-action on it. **Theorem 22** (Quantization Theorem ). *Let $\eta$ be the $\lambda$-weight of $\wedge^{\mathrm{top}}N_{S_{\lambda, Z}/V}^{*}|_{Z}$. Let $E^{\bullet} \in \mathrm{D} ^{b}([V/G])$, and suppose that the $\lambda$-weight of $\mathcal{H}^{*}(E^{\bullet}|_{Z})$ is supported on $(-\infty, \eta)$. Then $$\mathrm{H} ^{*}([V/G], E^{\bullet}) \cong \mathrm{H} ^{*}([V^{ss}(A)/G], E^{\bullet}|_{[V^{ss}(A)/G]}).$$* Thus, if the given sheaf does not have a too large $\lambda$-weight, then its cohomology on the semistable locus coincides with the cohomology over the whole quotient stack. By induction, this theorem is valid for the GIT quotient with many components in the unstable loci. ## The case of moduli space of parabolic bundles {#ssec:quantizationmoduli} We apply Theorem [Theorem 22](#thm:quantization){reference-type="ref" reference="thm:quantization"} to $\mathcal{M} _{X, \mathbf{p} }(r, \mathcal{O} , \mathbf{a} )$. The main result is Corollary [Corollary 24](#cor:unstablelociarenegligible){reference-type="ref" reference="cor:unstablelociarenegligible"}, which shows that the unstable loci are negligible in the cohomology calculation. Let $S = S_{(s, e, \{J^{i}\})}$ be a stratum of the unstable locus, and let the associated one-parameter subgroup be $\lambda(t)$. The $\lambda$-fixed locus $Z \subset S$ parametrizes data $$\{[\mathcal{O} ^{\chi^{+}} \oplus \mathcal{O} ^{\chi^{-}} \stackrel{\varphi}{\to} E^{+}(m) \oplus E^{-}(m) \to 0], \{W_{\bullet}^{i}\}\},$$ where $\varphi = \varphi^{+}\oplus \varphi^{-}$, $E^{+}$ (resp. $E^{-}$) is of degree $e$ (resp. $-e$), and $W_{\bullet}^{i} \subset E^{+} \cup E^{-}$, in other words, $W_{j}^{i} = (E^{+}|_{p_{i}}\cap W_{j}^{i}) \oplus (E^{-}|_{p_{i}} \cap W_{j}^{i})$. Here $\chi^{+}(m) = \dim \mathrm{H} ^{0}(E^{+}(m))$ and $\chi^{-}(m) = \dim \mathrm{H} ^{0}(E^{-}(m))$. If we take a general point of $S$, then both $E^{+}$ and $E^{-}$ are simple; hence $\lambda(t)$ acts on each factor as a scalar multiplication. So, up to normalization, $\lambda(t)$ acts as $$\left(\begin{array}{cc}t^{-\chi^{-}} & 0 \\ 0 & t^{\chi^{+}}\end{array}\right).$$ For a general point in $Z$, $(E^{+}, \{W|_{E^{+} \bullet}^{i}\})$ has the following property. For any $j$ with $J_{k}^{i} \le j < J_{k+1}^{i}$, $\dim E^{+}|_{p_{i}} \cap W_{j}^{i} = k$ and $\dim E^{-}|_{p_{i}} \cap W_{j}^{i} = j-k$. Therefore, $$\dim \mathbb{C} ^{\chi^{+}} \cap \psi_{i}^{-1}(W_{j}^{i}) = \chi^{+} - s + k,$$ and $$\dim \mathbb{C} ^{\chi^{-}} \cap \psi_{i}^{-1}(W_{j}^{i}) = \chi^{-} - (r - s) + j - k.$$ Thus, $$\begin{split} \mathrm{wt}_{\lambda}L_{p_{i}, \omega_{j}} &= -\chi^{-}(\chi^{+}-s + k) + \chi^{+}(\chi^{-}-r+s+j-k) = \chi(s-k) - \chi^{+}(r-j)\\ &= r(m+1-g)(s-k) - (e+s(m+1-g))(r-j)\\ &= (m+1-g)(sj - rk) - e(r-j). \end{split}$$ Since $j -k \le \dim E^{-}|_{p_{i}} = r-s$, $sj - rk \le s(r-s+k) - rk = (r-s)(s-k)$. 
If $m \gg 0$, since $\chi$ grows linearly in $m$ while $e$ and $j$ are bounded, $$\begin{split} \mathrm{wt}_{\lambda}L_{p_{i}, \omega_{j}} &= (m+1-g)(sj-rk) - e(r-j) \le (m+1-g)(r-s)(s-k) -e(r-j)\\ &= \chi\frac{(r-s)(s-k)}{r} - e(r-j) < \chi \frac{(r-s)(s-k)+1}{r} < \chi\frac{(r-s)s+1}{r}. \end{split}$$ Therefore, for a dominant weight $\sum a_{j}\omega_{j}$ with $a_{j} \ge 0$, the $\lambda$-weight of the associated line bundle $L_{p_{i}, \lambda} = L_{p_{i}, \sum a_{j}\omega_{j}}$ is at most $$\chi (\sum a_{j})\frac{(r-s)s+1}{r}.$$ On the other hand, by [@Tha96 Section 7], $\lambda(t)$ acts on $N_{S/V}|_{Z}$ by multiplication by $t^{-\chi}$. Thus, $$\eta = \mathrm{wt}_{\lambda}\wedge^{\mathrm{top}}N_{S/V}^{*}|_{Z} = \chi \cdot \mathrm{rank}\; N_{S/V} \ge \chi \left(s(r-s)(g-1) + 1\right)$$ by Lemma [Lemma 14](#lem:unstablecodim){reference-type="ref" reference="lem:unstablecodim"}. In particular, if $\sum a_{j} \le r((r-s)s(g-1)+1)/((r-s)s+1)$, $$\mathrm{wt}_{\lambda} L_{p_{i}, \sum a_{j}\omega_{j}} < \chi (\sum a_{j})\frac{(r-s)s+1}{r} \le \chi ((r-s)s(g-1)+1) \le \eta.$$ The minimum of $r((r-s)s(g-1)+1)/((r-s)s+1)$ for $1 \le s \le r-1$ is achieved when $s = 1$, hence it is $r((r-1)(g-1)+1)/(r-1+1) = (r-1)(g-1)+1$. Then we obtain the following result by the Quantization Theorem (Theorem [Theorem 22](#thm:quantization){reference-type="ref" reference="thm:quantization"}). Note that in the statement we use $\lambda$ to denote a partition, not a one-parameter subgroup. **Proposition 23**. *For a partition $\lambda = \sum a_{j}\omega_{j}$ with $\sum a_{j} \le (r-1)(g-1)+1$, $$\mathrm{H} ^{*}([\widetilde{\mathbf{R} }(m)/\mathrm{SL}_{\chi_{m}}], L_{p_{i}, \lambda}) \cong \mathrm{H} ^{*}([\widetilde{\mathbf{R} }(m)^{ss}(A(\mathbf{a} ))/\mathrm{SL}_{\chi_{m}}], L_{p_{i}, \lambda}) \cong \mathrm{H} ^{*}(\mathcal{M} _{X, \mathbf{p} }(r, \mathcal{O} , \mathbf{a} ), L_{p_{i}, \lambda}).$$* **Corollary 24**. *For a partition $\lambda = \sum a_{j}\omega_{j}$ with $\sum a_{j} \le (r-1)(g-1)+1$, $$\label{eqn:unstablelociarenegligible} \mathrm{H} ^{*}(\mathcal{M} _{X, \mathbf{p} }(r, \mathcal{O} ), L_{p_{i}, \lambda}) \cong \mathrm{H} ^{*}(\mathcal{M} _{X, \mathbf{p} }(r, \mathcal{O} , \mathbf{a} ), L_{p_{i}, \lambda}).$$* *Proof.* There is an open embedding $$[\widetilde{\mathbf{R} }(m)/\mathrm{SL}_{\chi_{m}}] \subset [\widetilde{\mathbf{R} }(m+1)/\mathrm{SL}_{\chi_{m+1}}],$$ and we have an isomorphism of stacks $$\mathcal{M} _{X, \mathbf{p} }(r, \mathcal{O} ) \cong \varinjlim_{m}[\widetilde{\mathbf{R} }(m)/\mathrm{SL}_{\chi_{m}}].$$ For $L_{p_{i}, \lambda}$ with $p_{i} \in \mathbf{p}$, we have a morphism $$\label{eqn:inverselimit} \mathrm{H} ^{*}(\mathcal{M} _{X, \mathbf{p} }(r, \mathcal{O} ), L_{p_{i}, \lambda}) \to \varprojlim_{m}\mathrm{H} ^{*}([\widetilde{\mathbf{R} }(m)/\mathrm{SL}_{\chi_{m}}], \iota_{m}^{*}L_{p_{i}, \lambda})$$ where $\iota_{m} : [\widetilde{\mathbf{R} }(m)/\mathrm{SL}_{\chi_{m}}] \subset \mathcal{M} _{X, \mathbf{p} }(r, \mathcal{O} )$. By Proposition [Proposition 23](#prop:unstablelociarenegligible){reference-type="ref" reference="prop:unstablelociarenegligible"}, each cohomology is identified with the cohomology of a line bundle on a projective variety $\mathrm{M} _{X, \mathbf{p} }(r, \mathcal{O} , \mathbf{a} )$. Hence it is finite-dimensional. Thus, the inverse system on the right-hand side satisfies the Mittag-Leffler condition. Therefore, the map in [\[eqn:inverselimit\]](#eqn:inverselimit){reference-type="eqref" reference="eqn:inverselimit"} is an isomorphism. ◻ 
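As a quick numerical sanity check of the bound obtained above (our own verification script, not part of the argument), one can confirm for small values of $r$ and $g$ that the minimum of $r((r-s)s(g-1)+1)/((r-s)s+1)$ over $1 \le s \le r-1$ is indeed $(r-1)(g-1)+1$, attained at $s = 1$.

```python
from fractions import Fraction

def weight_bound(r, g, s):
    # r((r-s)s(g-1)+1) / ((r-s)s+1), the quantity minimized over s above
    x = (r - s) * s
    return Fraction(r * (x * (g - 1) + 1), x + 1)

for r in range(2, 12):
    for g in range(2, 8):
        minimum = min(weight_bound(r, g, s) for s in range(1, r))
        assert minimum == weight_bound(r, g, 1) == (r - 1) * (g - 1) + 1, (r, g)
print("the minimum over s is (r-1)(g-1)+1, attained at s = 1, for all tested r, g")
```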
# Cohomological boundedness {#sec:boundedness} The classical Borel-Weil-Bott theorem provides a recipe to compute the cohomology of all line bundles over the full-flag variety $\mathrm{Fl}(V)$ of a finite-dimensional vector space $V$. This is extended to the case of the moduli stack of vector bundles with trivial determinant by Teleman [@Tel98]. In this section, we review the Borel-Weil-Bott-Teleman theory and its implications for the cohomological boundedness/triviality. ## Borel-Weil-Bott for curves {#ssec:Teleman} Recall that $\mathrm{Pic} (\mathcal{M} _{X}(r, \mathcal{O} )) \cong \mathbb{Z}$. Let $\Theta \in \mathrm{Pic} (\mathcal{M} _{X}(r, \mathcal{O} ))$ be the ample generator. For the universal family $\mathcal{E}$ over $X \times \mathcal{M} _{X}(r, \mathcal{O} )$, a point $p \in X$, and a partition $\lambda \vdash n$ of length at most $r-1$, let $S_{\lambda}\mathcal{E} _{p}$ be the Schur functor applied to $\mathcal{E} _{p}$ (Section [2.3](#ssec:Schur){reference-type="ref" reference="ssec:Schur"}). If we have $k$ distinct points $\mathbf{p} = (p_{1}, p_{2}, \cdots, p_{k})$ and $k$ partitions $\lambda_{1}, \lambda_{2}, \cdots, \lambda_{k}$, we may construct a vector bundle $$\Theta^{h} \otimes \bigotimes_{i=1}^{k}S_{\lambda_{i}}\mathcal{E} _{p_{i}}.$$ Teleman's extension of the Borel-Weil-Bott theorem evaluates the cohomology groups of these bundles. Here we give some relevant representation theoretic definitions, specialized to $\mathrm{SL}_{r}$. Let $h$ be a fixed nonnegative integer. Let $\mathfrak{h}$ be a Cartan subalgebra of $\mathfrak{sl}_{r}$. On the Euclidean space $\mathfrak{h}^{*}$ with the normalized Killing form $(- ,-)$, let $\{\beta_{j}\}$ be a fixed set of simple roots, and $\{\omega_{j}\}$ be the associated fundamental weights. With respect to the Killing form, $\{\omega_{j}\}$ is the dual basis of $\{\beta_{j}\}$. We denote by $\rho$ the half sum of all positive roots, or equivalently, the sum of all fundamental weights. The set of hyperplanes $\{\lambda \;|\; (\lambda, \beta) \in (h+r)\mathbb{Z} \}$, where $\beta$ is a root of $\mathfrak{sl}_{r}$, divides $\mathfrak{h}^{*}$ into polyhedral chambers, the so-called *Weyl alcoves*. The alcove containing the small highest weights is called the *positive alcove*. The positive alcove is an open simplex bounded by $\{(\lambda, \beta_{j}) > 0\}_{1 \le j \le r-1}$ and $(\lambda, \sum_{j=1}^{r-1}\beta_{j}) < h+r$. We say a weight $\lambda$ is *regular* if $\lambda+\rho$ is in the interior of one of the alcoves. Otherwise, $\lambda$ is called *singular*. For a regular weight, the length $\ell(\lambda)$ is defined as the number of Weyl reflections required to map $\lambda+\rho$ to $\mu+\rho$ in the positive alcove. Equivalently, $\ell(\lambda)$ is the minimum number of hyperplanes one needs to cross to move from the alcove containing $\lambda+\rho$ to the positive alcove. In this situation, $\mu$ is called the *ground form* of $\lambda$. Since $\rho = \sum \omega_{j}$ and $\{\omega_{j}\}$ is the dual basis of $\{\beta_{j}\}$, for a given weight $\lambda = \sum a_{j}\omega_{j}$, $\lambda + \rho$ is in the positive alcove if and only if $a_{j} \ge 0$ and $\sum a_{j} \le h$. **Remark 25**. If $h = 0$, the only $\lambda$ such that $\lambda + \rho$ is in the positive alcove is $\lambda = 0$. **Theorem 26** ([@Tel98]). *Fix $h \ge 0$. Suppose that, with respect to $h + r$, all $\lambda_{i}$'s are regular. 
Then $\mathrm{H} ^{\ell}(\mathcal{M} _{X}(r, \mathcal{O} ), \Theta^{h} \otimes \bigotimes_{i=1}^{k}S_{\lambda_{i}}\mathcal{E} _{p_{i}}) \cong \mathrm{H} ^{0}(\mathcal{M} _{X}(r, \mathcal{O} ), \Theta^{h} \otimes \bigotimes_{i=1}^{k}S_{\mu_{i}}\mathcal{E} _{p_{i}})$, where $\ell = \sum \ell(\lambda_{i})$ is the sum of the lengths of $\lambda_{i}$'s and $\mu_{i}$ is the ground form of $\lambda_{i}$, and all other cohomology groups are trivial. If one of $\lambda_{i}$'s is singular, then $\mathrm{H} ^{*}(\mathcal{M} _{X}(r, \mathcal{O} ), \Theta^{h} \otimes \bigotimes_{i=1}^{k}S_{\lambda_{i}}\mathcal{E} _{p_{i}}) = 0$.* If we set $\mathbf{p} = (p_{1}, p_{2}, \cdots, p_{k})$ as the set of parabolic points, the moduli stack $\mathcal{M} _{X, \mathbf{p} }(r, L)$ of the parabolic bundles over $(X, \mathbf{p} )$ is obtained by taking the fiber product of the relative flag bundles. (Section [2.1](#ssec:moduliparabolic){reference-type="ref" reference="ssec:moduliparabolic"}) By the Borel-Weil-Bott and the degeneration of the Leray spectral sequence and the fact that the push-forward of $L_{p_{i}, \lambda_{i}}$ is $S_{\lambda_{i}}\mathcal{E} _{p_{i}}$, we obtain $$\label{eqn:cohidentification} \mathrm{H} ^{*}(\mathcal{M} _{X, \mathbf{p} }(r, \mathcal{O} ), \Theta^{h}\otimes \bigotimes_{i=1}^{k}L_{p_{i}, \lambda_{i}}) \cong \mathrm{H} ^{*}(\mathcal{M} _{X}(r, \mathcal{O} ), \Theta^{h} \otimes \bigotimes_{i=1}^{k}S_{\lambda_{i}}\mathcal{E} _{p_{i}}).$$ Therefore, Teleman's theorem can be understood as the cohomology evaluation of line bundles on the moduli stack of parabolic bundles. ## Cohomological boundedness and triviality on the moduli stack {#ssec:cohomologyonstack} In this section, all bundles are over $\mathcal{M} _{X}(r, \mathcal{O} )$. We start with a few combinatorial lemmas. Let $\mathcal{E}$ be the universal bundle over $X \times \mathcal{M} _{X}(r, \mathcal{O} )$. Recall that a coherent sheaf $F$ is cohomologically bounded up to degree $n$ if $\mathrm{H} ^{i}(F) = 0$ for all $i > n$ (Remark [Remark 21](#rmk:boundedandtriviality){reference-type="ref" reference="rmk:boundedandtriviality"}). Under the assumption $h = 0$, here we describe the Weyl reflection more explicitly. Let $\lambda = \sum a_{j}\omega_{j}$ be a dominant weight. Then $\lambda + \rho = \sum (a_{j}+1)\omega_{j}$. We set $h = 0$ in the statement of Theorem [Theorem 26](#thm:BWB){reference-type="ref" reference="thm:BWB"}. We investigate the effect of a Weyl reflection on $\lambda$. For notational simplicity, we set $\omega_{0} = \omega_{r} = 0$. There are $r(r-1)/2$ positive roots $\beta := \beta_{i} + \beta_{i+1} + \cdots + \beta_{j} = -\omega_{i-1}+\omega_{i} + \omega_{j} - \omega_{j+1}$ with $i \le j$. When $i = j$, we have $\beta = \beta_{i} = -\omega_{i-1}+2\omega_{i} - \omega_{i+1}$. The partition $\lambda$ is singular if there is a positive root $\beta$ such that $r | (\lambda+\rho, \beta)$. If $\lambda$ is regular, the Weyl alcove containing $\lambda+\rho$ is bounded by affine hyperplanes $(\lambda+\rho, \beta) \equiv \pm 1 \; \mathrm{mod}\;r$ for positive roots, and there is a positive root $\beta$ such that $(\lambda+\rho, \beta) \equiv 1\; \mathrm{mod}\; r$. The Weyl reflection along the $\beta$ moves $\lambda+\rho$ to $\lambda +\rho - \beta$. **Lemma 27**. *Suppose that $r > 2n$. For any partitions $\lambda$ and $\mu$ with $|\lambda| = |\mu| = n$, $S_{\lambda}\mathcal{E} _{p} \otimes S_{\mu}\mathcal{E} _{p}^{*}$ is cohomologically bounded up to degree $n$.* *Proof.* Suppose that $\mu$ has *columns* (not rows!) 
$\mu^{1} \ge \mu^{2} \ge \cdots \ge \mu^{t} > 0$. Then the dual partition $\mu^{*}$, which gives the isomorphism $S_{\mu^{*}}\mathcal{E} _{p} \cong S_{\mu}\mathcal{E} _{p}^{*}$, has the columns $r - \mu^{t} \ge r - \mu^{t-1} \ge \cdots \ge r - \mu^{1}$. Since $|\mu| = n$ and $r > 2n$, the length of each column of $\mu^{*}$ is at least $n+1$. We may decompose the tensor product of $S_{\lambda}\mathcal{E} _{p}$ and $S_{\mu}\mathcal{E} _{p}^{*}$ as a direct sum of Schur functors: $$S_{\lambda}\mathcal{E} _{p} \otimes S_{\mu}\mathcal{E} _{p}^{*} \cong \bigoplus _{\nu}c_{\lambda, \mu^{*}}^{\nu}S_{\nu}\mathcal{E} _{p}.$$ By the Littlewood-Richardson rule [@Ful97 p. 121, Corollary 2], $c_{\lambda, \mu^{*}}^{\nu}$ is the number of skew semistandard Young tableaux of shape $\nu / \mu^{*}$ and weight $\lambda$ satisfying some extra combinatorial conditions. Because $r > 2n$, $\nu$ with a nonzero $c_{\lambda, \mu^{*}}^{\nu}$ is a partition that has the following properties. It has at most $n$ boxes of 'small height' (the columns containing the boxes have height at most $n \le (r-1)/2$) and at least $tr-n$ boxes of 'large height' (the columns containing the boxes have height at least $(r-1)/2 + 1 \ge n+1$). See Figure [\[fig:Youngdiagrams\]](#fig:Youngdiagrams){reference-type="ref" reference="fig:Youngdiagrams"}. Let $\nu = \sum a_{j}\omega_{j}$. If $\nu$ is singular, then by Theorem [Theorem 26](#thm:BWB){reference-type="ref" reference="thm:BWB"}, $S_{\nu}\mathcal{E} _{p}$ is cohomologically trivial. So, suppose that $\nu$ is regular. Then there is a positive root $\beta = \beta_{i}+ \beta_{i+1}+\cdots +\beta_{j}$ such that $(\nu+\rho, \beta) \equiv 1 \; \mathrm{mod}\; r$. Because $(\nu + \rho, \beta) = \sum_{k= i}^{j}(a_{k}+1) \le 2n + r-1 < 2r - 1$, $(\nu + \rho, \beta) = 1$ or $r+1$. The first case only occurs if $\beta = \beta_{i}$ and $a_{i} = 0$. In this case, the $i$-th coefficient of $\nu + \rho - \beta$ is $-1$, so it is out of the region of dominant weights. Thus, we do not consider the reflection. Hence, we may assume that there is a root $\beta = \beta_{i} + \beta_{i+1} + \cdots + \beta_{j}$ with $i < j$ such that $(\nu + \rho, \beta) = r + 1$. We claim that $i \le (r-1)/2$ and $j \ge (r-1)/2+1$. If not, $j \le (r-1)/2$ or $i \ge (r-1)/2+1$. In the former case, $(\nu + \rho, \beta) \le \sum_{k=i}^{(r-1)/2}(a_{k} + 1) \le n + (r-1)/2$. But $n + (r-1)/2 < (2r-1)/2 < r+1 = (\nu + \rho, \beta)$, hence this is impossible. We may exclude the $i \ge (r-1)/2+1$ case similarly. Take the Weyl reflection associated with $\beta = \beta_{i} + \cdots + \beta_{j}$. Since $i \le (r-1)/2$ and $j \ge (r-1)/2 + 1$, in terms of the associated Young diagram, this corresponds to 1. Removing a box from one row in the small height part; 2. Adding a box to one row in the large height part; 3. Eliminating any column with length $r$ (if it exists). Since there are at most $n$ boxes of small height on $\nu$, this procedure terminates in at most $n$ steps and reaches $0 + \rho$. Therefore, the length $\ell(\nu)$ is at most $n$. By Theorem [Theorem 26](#thm:BWB){reference-type="ref" reference="thm:BWB"}, with $h = 0$, $\mathrm{H} ^{i}(\mathcal{M} _{X}(r, \mathcal{O} ), S_{\nu}\mathcal{E} _{p}) = 0$ for $i > n$. Therefore, all irreducible factors $S_{\nu}\mathcal{E} _{p}$ are cohomologically bounded up to degree $n$, and the same is true for $S_{\lambda}\mathcal{E} _{p} \otimes S_{\mu}\mathcal{E} _{p}^{*}$. ◻ 
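Since the singularity test with $h = 0$ is purely combinatorial, the criterion used above, and the statement of the next lemma, can be checked mechanically for small parameters. The following sketch (our own verification, with hypothetical sample values of $n$ and $r$) tests whether $(\lambda+\rho, \beta_{i}+\cdots+\beta_{j}) \equiv 0 \pmod{r}$ for some positive root, and confirms that every nonzero partition of $n$ and its dual are singular when $r > 2n$.

```python
def weight_coeffs(partition, r):
    # coefficients a_j = lambda_j - lambda_{j+1} of the weight sum a_j * omega_j
    rows = list(partition) + [0] * r
    return [rows[j - 1] - rows[j] for j in range(1, r)]

def is_singular(a, r):
    # h = 0: singular iff (lambda + rho, beta_i + ... + beta_j) = sum_{k=i}^{j} (a_k + 1)
    # is divisible by r for some 1 <= i <= j <= r - 1
    for i in range(1, r):
        total = 0
        for j in range(i, r):
            total += a[j - 1] + 1
            if total % r == 0:
                return True
    return False

def partitions(n, largest=None):
    # all partitions of n as weakly decreasing tuples
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

n, r = 4, 9                      # sample values with r > 2n
for lam in partitions(n):
    a = weight_coeffs(lam, r)
    a_dual = list(reversed(a))   # the dual weight corresponds to beta_j <-> beta_{r-j}
    assert is_singular(a, r) and is_singular(a_dual, r), lam
print("every nonzero partition of", n, "and its dual are singular for r =", r)
```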
**Lemma 28**. *Suppose that $r > 2n$ and $n \ge 1$. For a partition $\lambda \vdash n$, both $\lambda$ and $\lambda^{*}$ are singular. Thus $S_{\lambda}\mathcal{E} _{p}$ and $S_{\lambda^{*}}\mathcal{E} _{p}$ are cohomologically trivial.* *Proof.* Set $\lambda = \sum a_{j}\omega_{j}$. Then $\lambda + \rho = \sum (a_{j}+1)\omega_{j}$. Since $|\lambda| = \sum ja_{j} = n$, $a := \sum a_{j} \le n$ and $a_{j} = 0$ for $j > r - a > n$. Now if we take $\beta := \beta_{1} + \cdots + \beta_{r-a}$, $$(\lambda+\rho, \beta) = \sum_{j=1}^{r-a}(a_{j}+1) = \sum_{j=1}^{r-1}(a_{j}+1) - (a-1) = a + r-1 - (a-1) = r.$$ Therefore $\lambda$ is singular. The singularity of $\lambda^{*}$ follows from the symmetry $\beta_{j} \leftrightarrow \beta_{r-j}$. The second statement follows from Theorem [Theorem 26](#thm:BWB){reference-type="ref" reference="thm:BWB"}. ◻ ## Cohomological boundedness and triviality on the moduli space {#ssec:cohomologyonmodulispace} We are ready to prove the desired cohomological boundedness and triviality over $\mathrm{M} _{X}(r, L)$. In this section, we assume that $(r, d) = 1$ where $d = \deg L$. The following lemma tells us that for 'balanced' tensor products of Schur functors, the universal bundle gives the same bundle as the pull-back of a Poincaré bundle. **Lemma 29**. *Let $\mathcal{E}$ be the universal bundle over $X \times \mathcal{M} _{X}(r, L)$ and let $\sigma^{*}\mathcal{E}$ be a Poincaré bundle over $X \times \mathrm{M} _{X}(r, L)$ for a section $\sigma : \mathrm{M} _{X}(r, L) \to \mathcal{M} _{X}(r, L)^{s}$. For two sequences of points $p_{1}, p_{2}, \cdots, p_{k} \in X$, $q_{1}, q_{2}, \cdots, q_{\ell} \in X$, and two sequences of partitions $\lambda_{1}, \lambda_{2}, \cdots, \lambda_{k}$ and $\mu_{1}, \mu_{2}, \cdots, \mu_{\ell}$ such that $\sum |\lambda_{i}| = \sum |\mu_{j}|$, there is an isomorphism $$\bigotimes S_{\lambda_{i}}\mathcal{E} _{p_{i}} \otimes \bigotimes S_{\mu_{j}}\mathcal{E} _{q_{j}}^{*} \cong p^{*}\left(\bigotimes S_{\lambda_{i}}\sigma^{*}\mathcal{E} _{p_{i}} \otimes \bigotimes S_{\mu_{j}}\sigma^{*}\mathcal{E} _{q_{j}}^{*}\right).$$* *Proof.* For the good moduli map $p : \mathcal{M} _{X}(r, L)^{s} \to \mathrm{M} _{X}(r, L)$, since $\mathcal{M} _{X}(r, L)^{s}$ is a $\mathbb{C} ^{*}$-gerbe over $\mathrm{M} _{X}(r, L)$, the pull-back of a universal bundle $p^{*}\sigma^{*}\mathcal{E}$ differs from $\mathcal{E}$ by a tensor product with a line bundle $F$ on the stack. Because the Schur functor construction is functorial, $$\begin{split} p^{*}\left(\bigotimes S_{\lambda_{i}}\sigma^{*}\mathcal{E} _{p_{i}} \otimes \bigotimes S_{\mu_{j}}\sigma^{*}\mathcal{E} _{q_{j}}^{*}\right) &\cong \bigotimes S_{\lambda_{i}}p^{*}\sigma^{*}\mathcal{E} _{p_{i}} \otimes \bigotimes S_{\mu_{j}}p^{*}\sigma^{*}\mathcal{E} _{q_{j}}^{*}\\ &\cong \bigotimes S_{\lambda_{i}}\left(\mathcal{E} _{p_{i}}\otimes F\right) \otimes \bigotimes S_{\mu_{j}}\left(\mathcal{E} _{q_{j}} \otimes F\right)^{*}\\ &\cong \bigotimes \left(S_{\lambda_{i}}\mathcal{E} _{p_{i}}\right) \otimes F^{|\lambda_{i}|} \otimes \bigotimes \left(S_{\mu_{j}}\mathcal{E} _{q_{j}}^{*}\right) \otimes F^{-|\mu_{j}|}\\ &\cong \bigotimes S_{\lambda_{i}}\mathcal{E} _{p_{i}} \otimes \bigotimes S_{\mu_{j}}\mathcal{E} _{q_{j}}^{*} \otimes F^{\sum |\lambda_{i}| - \sum |\mu_{j}|} \end{split}$$ and the power of $F$ is zero by the assumption. ◻ We first show the cohomological boundedness. **Proposition 30**. *Suppose $r > 2n$ and let $\mathbf{p} \in X_{n}$. 
Then $\mathrm{H} ^{i}(\mathrm{M} _{X}(r, L), \mathcal{F} _{\mathbf{p} } \otimes \mathcal{F} _{\mathbf{p} }^{*}) = 0$ for $i > n$.* *Proof.* We first consider the case that $\mathbf{p} = (np)$. Since $\mathcal{F} _{\mathbf{p} }$ is a deformation of $(\sigma^{*}\mathcal{E} _{p})^{\otimes n}$ (Lemma [Lemma 17](#lem:FMkernel){reference-type="ref" reference="lem:FMkernel"}), by the semicontinuity, it is sufficient to show that $\mathrm{H} ^{i}(\mathrm{M} _{X}(r, L), (\sigma^{*}\mathcal{E} _{p})^{\otimes n} \otimes (\sigma^{*}\mathcal{E} _{p}^{*})^{\otimes n}) = 0$ for $i > n$. By the Schur-Weyl duality, there is an isomorphism $$(\sigma^{*}\mathcal{E} _{p})^{\otimes n} \cong \bigoplus_{\lambda \vdash n}V_{\lambda} \otimes S_{\lambda}(\sigma^{*}\mathcal{E} _{p}),$$ where $V_{\lambda}$ is an irreducible $S_{n}$-representation associated to $\lambda$. Then $$\begin{split} (\sigma^{*}\mathcal{E} _{p})^{\otimes n} \otimes (\sigma^{*}\mathcal{E} _{p}^{*})^{\otimes n} &\cong \bigoplus_{\lambda \vdash n}V_{\lambda} \otimes S_{\lambda}(\sigma^{*}\mathcal{E} _{p}) \otimes \bigoplus_{\lambda \vdash n}V_{\lambda} \otimes S_{\lambda}(\sigma^{*}\mathcal{E} _{p}^{*}) \\ &\cong \bigoplus_{\lambda, \mu \vdash n} m_{\lambda, \mu}S_{\lambda}(\sigma^{*}\mathcal{E} _{p}) \otimes S_{\mu}(\sigma^{*}\mathcal{E} _{p}^{*}), \end{split}$$ for some integers $m_{\lambda, \mu}$. Since $p^{*} : \mathrm{D} ^{b}(\mathrm{M} _{X}(r, L)) \to \mathrm{D} ^{b}(\mathcal{M} _{X}(r, L)^{s})$ is fully-faithful [@LM23 Lemma 6.1], the cohomology of $S_{\lambda}(\sigma^{*}\mathcal{E} _{p}) \otimes S_{\mu}(\sigma^{*}\mathcal{E} _{p}^{*})$ can be computed after pulling back to $\mathcal{M} _{X}(r, L)^{s}$. By Lemma [Lemma 29](#lem:Poincareisuniversal){reference-type="ref" reference="lem:Poincareisuniversal"}, this is isomorphic to $S_{\lambda}\mathcal{E} _{p} \otimes S_{\mu}\mathcal{E} _{p}^{*}$. The Littlewood-Richardson rule implies $$S_{\lambda}\mathcal{E} _{p} \otimes S_{\mu}\mathcal{E} _{p}^{*} \cong \bigoplus _{\nu}c_{\lambda, \mu^{*}}^{\nu}S_{\nu}\mathcal{E} _{p}.$$ So, we may reduce the statement to the cohomological boundedness of $S_{\nu}\mathcal{E} _{p}$. Then, by the cohomology identifications in previous sections, $$\label{eqn:reductiontotrivialdet} \begin{split} \mathrm{H} ^{*}(\mathcal{M} _{X}(r, L)^{s}, S_{\nu}\mathcal{E} _{p}) & \stackrel{\eqref{eqn:cohomologycomparison2}}{\cong} \mathrm{H} ^{*}(\mathcal{M} _{X, (p)}(r, L, \mathbf{a} ), L_{p, \nu}) \stackrel{\eqref{eqn:pullbackoflambda}}{\cong} \mathrm{H} ^{*}(\mathcal{M} _{X, (p, p')}(r, \mathcal{O} , \mathbf{a} '), L_{p, \nu}) \\ & \stackrel{\eqref{eqn:unstablelociarenegligible}}{\cong} \mathrm{H} ^{*}(\mathcal{M} _{X, (p, p')}(r, \mathcal{O} ), L_{p, \nu}) \stackrel{\eqref{eqn:cohomologycomparison}}{\cong} \mathrm{H} ^{*}(\mathcal{M} _{X}(r, \mathcal{O} ), S_{\nu}\mathcal{E} _{p}). \end{split}$$ In other words, we may reduce the cohomology computation of $S_{\lambda}\mathcal{E} _{p} \otimes S_{\mu}\mathcal{E} _{p}^{*}$ to a computation over $\mathcal{M} _{X}(r, \mathcal{O} )$. Finally, by Lemma [Lemma 27](#lem:boundedness){reference-type="ref" reference="lem:boundedness"}, over $\mathcal{M} _{X}(r, \mathcal{O} )$, $S_{\nu}\mathcal{E} _{p}$ is cohomologically bounded up to degree $n$. Now, we prove the general case. The symmetric product $X_{n} \cong \mathrm{Hilb}^{n}(X)$ is naturally stratified by the multiplicities of the point configuration. 
For a partition $\lambda = (\lambda_{1} \ge \lambda_{2} \ge \cdots \ge \lambda_{k}>0)$ of $n$, we set $$X_{\lambda} = \{ \lambda_{1}p_{1} + \lambda_{2}p_{2} + \cdots + \lambda_{k}p_{k} \in \mathrm{Hilb}^{n}(X)\;|\; p_{i}\ne p_{j} \mbox{ if } i \ne j\}.$$ Then $X_{n} = \sqcup_{\lambda \vdash n}X_{\lambda}$. By the semicontinuity of cohomology, the vanishing of cohomology is true for general points of arbitrary strata $X_{\lambda} \subset X_{n}$, as $X_{(n)}$ is the unique closed stratum and the closure of any other stratum intersects $X_{(n)}$. Theorem [Theorem 26](#thm:BWB){reference-type="ref" reference="thm:BWB"} does not depend on the actual point configuration. Therefore, if the cohomology vanishing is true for one point of $X_{\lambda}$, then it is true for every point on $X_{\lambda}$. Thus, the desired statement is true for the whole $X_{n}$. ◻ Along the same lines of the proof, we can show the cohomological triviality as well. **Proposition 31**. *Suppose that $r > 2n$ and $\mathbf{p} \ne \mathbf{q} \in X_{n}$. Then $\mathcal{F} _{\mathbf{p} } \otimes \mathcal{F} _{\mathbf{q} }^{*}$ is cohomologically trivial.* *Proof.* First, consider $\mathbf{p} = (np)$ and $\mathbf{q} = (nq)$ with $p \ne q \in X$. Because of the deformation property of $\mathcal{F} _{\mathbf{p} }$ (Lemma [Lemma 17](#lem:FMkernel){reference-type="ref" reference="lem:FMkernel"}), it is sufficient to show the cohomological triviality of $(\sigma^{*}\mathcal{E} _{p})^{\otimes n} \otimes (\sigma^{*}\mathcal{E} _{q}^{*})^{\otimes n}$. Applying the Schur-Weyl duality and the tensor decomposition, we have $$(\sigma^{*}\mathcal{E} _{p})^{\otimes n}\otimes (\sigma^{*}\mathcal{E} _{q}^{*})^{\otimes n} \cong \bigoplus_{\lambda, \mu \vdash n}m_{\lambda, \mu}S_{\lambda}(\sigma^{*}\mathcal{E} _{p})\otimes S_{\mu}(\sigma^{*}\mathcal{E} _{q}^{*}).$$ By virtue of Lemma [Lemma 29](#lem:Poincareisuniversal){reference-type="ref" reference="lem:Poincareisuniversal"}, over $\mathcal{M} _{X}(r, L)^{s}$, $p^{*}(S_{\lambda}(\sigma^{*}\mathcal{E} _{p})\otimes S_{\mu}(\sigma^{*}\mathcal{E} _{q}^{*})) \cong S_{\lambda} \mathcal{E} _{p} \otimes S_{\mu}\mathcal{E} _{q}^{*}$. Then, as in [\[eqn:reductiontotrivialdet\]](#eqn:reductiontotrivialdet){reference-type="eqref" reference="eqn:reductiontotrivialdet"}, we may reduce the computation to the bundle over $\mathcal{M} _{X}(r, \mathcal{O} )$. Lemma [Lemma 28](#lem:cohtrivial){reference-type="ref" reference="lem:cohtrivial"} implies that $\lambda$ is singular. By Theorem [Theorem 26](#thm:BWB){reference-type="ref" reference="thm:BWB"}, $S_{\lambda}\mathcal{E} _{p} \otimes S_{\mu}\mathcal{E} _{q}^{*}$ is cohomologically trivial regardless of $\mu$. Hence $(\sigma^{*}\mathcal{E} _{p})^{\otimes n}\otimes (\sigma^{*}\mathcal{E} _{q}^{*})^{\otimes n}$ is cohomologically trivial, too. The semicontinuity tells us that the cohomological triviality is valid for general points on $X_{\lambda} \times X_{\mu} \subset X_{n} \times X_{n}$. Since the cohomology computation only depends on the pair of partitions by Theorem [Theorem 26](#thm:BWB){reference-type="ref" reference="thm:BWB"}, we obtain the desired triviality for all points. ◻ In summary, we have Items (2) and (3) of Theorem [Theorem 20](#thm:BondalOrlov){reference-type="ref" reference="thm:BondalOrlov"}. **Remark 32**. One may observe that we only used a particular case ($h = 0$) of Theorem [Theorem 26](#thm:BWB){reference-type="ref" reference="thm:BWB"}. This is because all of the bundles whose cohomology we need to compute are 'balanced' ones. 
The technique developed in this article can be extended to more general questions including the computation of a semiorthogonal decomposition in $\mathrm{D} ^{b}(\mathrm{M} _{X}(r, L))$ and a construction of ACM bundles on $\mathrm{M} _{X}(r, L)$ as in [@LM21; @LM23]. We expect that in future work along this direction, a twist by the theta divisor $\Theta$ will play a crucial role, and we will need to employ the full power of Theorem [Theorem 26](#thm:BWB){reference-type="ref" reference="thm:BWB"}. # Simplicity {#sec:simple} To complete the verification of the conditions of Theorem [Theorem 20](#thm:BondalOrlov){reference-type="ref" reference="thm:BondalOrlov"}, and hence the proof of Theorem [Theorem 1](#thm:embedding){reference-type="ref" reference="thm:embedding"}, it remains to show the simplicity of $\mathcal{F} _{\mathbf{p} }$ for $\mathbf{p} \in X_{n}$. In this section, we prove this result. Throughout, we assume $(r, d) = 1$ where $d = \deg L$. Let $\sigma^{*}\mathcal{E}$ be a Poincaré bundle over $X \times \mathrm{M} _{X}(r, L)$. **Proposition 33**. *For any $\mathbf{p} \in X_{n}$, $\mathcal{F} _{\mathbf{p} }$ is simple. That is, $\mathrm{End}(\mathcal{F} _{\mathbf{p} }) \cong \mathbb{C}$.* *Proof.* For any vector bundle $E$, $\mathrm{End}(E) \ne 0$. Thus, it is sufficient to show that $\dim \mathrm{End}(\mathcal{F} _{\mathbf{p} }) \le 1$. Let $\mathbf{p} = (\sum \lambda_{j}p_{j}) \in X_{n}$ with $p_{i} \ne p_{j}$ if $i \ne j$. Then $\mathcal{F} _{\mathbf{p} } \cong \bigotimes_{j}\mathcal{F} _{\lambda_{j}p_{j}}$ [@TT21 Lemma 2.6] and it is an $S_{\lambda_{1}} \times S_{\lambda_{2}} \times \cdots \times S_{\lambda_{k}}$-invariant bundle. Since it is a deformation of $\bigotimes_{j}(\sigma^{*}\mathcal{E} _{p_{j}})^{\otimes \lambda_{j}}$, by the semicontinuity, there is an injective map $$\mathrm{Hom}(\mathcal{F} _{\mathbf{p} }, \mathcal{F} _{\mathbf{p} }) \hookrightarrow \mathrm{Hom}(\mathcal{F} _{\mathbf{p} }, \bigotimes_{j}(\sigma^{*}\mathcal{E} _{p_{j}})^{\otimes \lambda_{j}}) \cong \mathrm{Hom}(\mathcal{F} _{\mathbf{p} }, \bigotimes_{j}\bigoplus_{\mu \vdash \lambda_{j}}V_{\mu} \otimes S_{\mu}(\sigma^{*}\mathcal{E} _{p_{j}})).$$ Since the deformation is $S_{\lambda_{1}} \times S_{\lambda_{2}} \times \cdots \times S_{\lambda_{k}}$-equivariant and $S_{\lambda_{1}} \times S_{\lambda_{2}} \times \cdots \times S_{\lambda_{k}}$ trivially acts on $\mathcal{F} _{\mathbf{p} }$, the image factors through the invariant subbundle, that is, $\bigotimes_{j}\mathrm{Sym}^{\lambda_{j}}(\sigma^{*}\mathcal{E} _{p_{j}})$. 
Therefore, we have an injective map $$\mathrm{Hom}(\mathcal{F} _{\mathbf{p} }, \mathcal{F} _{\mathbf{p} }) \hookrightarrow \mathrm{Hom}(\mathcal{F} _{\mathbf{p} }, \bigotimes_{j}\mathrm{Sym}^{\lambda_{j}}(\sigma^{*}\mathcal{E} _{p_{j}})).$$ We may apply the deformation argument once again; then we have an inclusion $$\mathrm{Hom}(\mathcal{F} _{\mathbf{p} }, \bigotimes_{j}\mathrm{Sym}^{\lambda_{j}}(\sigma^{*}\mathcal{E} _{p_{j}})) \hookrightarrow \mathrm{Hom}(\bigotimes_{j}(\sigma^{*}\mathcal{E} _{p_{j}})^{\otimes \lambda_{j}}, \bigotimes_{j}\mathrm{Sym}^{\lambda_{j}}(\sigma^{*}\mathcal{E} _{p_{j}}))$$ $$\cong \mathrm{Hom}(\bigotimes_{j}\bigoplus_{\mu \vdash \lambda_{j}}V_{\mu} \otimes S_{\mu}(\sigma^{*}\mathcal{E} _{p_{j}}), \bigotimes_{j}\mathrm{Sym}^{\lambda_{j}}(\sigma^{*}\mathcal{E} _{p_{j}}))$$ and the last space is a direct sum of spaces of the form $$\mathrm{H} ^{0}(\mathrm{M} _{X}(r, L), \bigotimes_{j}S_{\mu}(\sigma^{*}\mathcal{E} _{p_{j}}^{*}) \otimes S_{(\lambda_{j})}(\sigma^{*}\mathcal{E} _{p_{j}})),$$ where for each $j$ the partition $\mu \vdash \lambda_{j}$ varies. By applying the chain of isomorphisms in [\[eqn:reductiontotrivialdet\]](#eqn:reductiontotrivialdet){reference-type="eqref" reference="eqn:reductiontotrivialdet"}, the computation of the cohomology group is reduced to that of $$\label{eqn:globalsectionforsimpleness} \mathrm{H} ^{0}(\mathcal{M} _{X}(r, \mathcal{O} ), \bigotimes_{j}S_{\mu^{*}}(\mathcal{E} _{p_{j}}) \otimes S_{(\lambda_{j})}(\mathcal{E} _{p_{j}})).$$ Note that because we are only interested in the zeroth cohomology and the codimension of the unstable locus is large (Lemma [Lemma 14](#lem:unstablecodim){reference-type="ref" reference="lem:unstablecodim"}), the map in [\[eqn:unstablelociarenegligible\]](#eqn:unstablelociarenegligible){reference-type="eqref" reference="eqn:unstablelociarenegligible"} is an isomorphism without the condition on the partition. Decompose $S_{\mu^{*}}\mathcal{E} _{p_{j}} \otimes S_{(\lambda_{j})}\mathcal{E} _{p_{j}}\cong \bigoplus c_{\mu^{*}, (\lambda_{j})}^{\nu}S_{\nu}\mathcal{E} _{p_{j}}$. Note that among the direct summands on the right-hand side, the only one for which $\nu + \rho$ is already in the positive Weyl alcove is $S_{(t, t, \cdots, t)}\mathcal{E} _{p_{j}} \cong S_{0}\mathcal{E} _{p_{j}} \cong \mathcal{O}$, by Remark [Remark 25](#rem:positivealcovewhenhiszero){reference-type="ref" reference="rem:positivealcovewhenhiszero"}, and it appears only if $\mu = (\lambda_{j})$. By Theorem [Theorem 26](#thm:BWB){reference-type="ref" reference="thm:BWB"}, the only nonzero contribution to [\[eqn:globalsectionforsimpleness\]](#eqn:globalsectionforsimpleness){reference-type="eqref" reference="eqn:globalsectionforsimpleness"} is given by the direct sum of $$\mathrm{H} ^{0}(\mathcal{M} _{X}(r, \mathcal{O} ), \bigotimes_{j}S_{(\lambda_{j})^{*}}(\mathcal{E} _{p_{j}})\otimes S_{(\lambda_{j})}(\mathcal{E} _{p_{j}})),$$ and its multiplicity is precisely the product of the multiplicities of $S_{(\lambda_{j})}(\mathcal{E} _{p_{j}}) \cong \mathrm{Sym}^{\lambda_{j}}\mathcal{E} _{p_{j}}$ in $(\mathcal{E} _{p_{j}})^{\otimes \lambda_{j}}$, which is one. Therefore, $$\dim\mathrm{End}(\mathcal{F} _{\mathbf{p} }) \le \dim \mathrm{H} ^{0}(\mathcal{M} _{X}(r, \mathcal{O} ), \bigotimes_{j}S_{\mu^{*}}(\mathcal{E} _{p_{j}}) \otimes S_{(\lambda_{j})}(\mathcal{E} _{p_{j}})) = 1$$ and we are done. ◻ This completes the verification of the conditions of Theorem [Theorem 20](#thm:BondalOrlov){reference-type="ref" reference="thm:BondalOrlov"}, and hence the proof of Theorem [Theorem 1](#thm:embedding){reference-type="ref" reference="thm:embedding"}. 
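The multiplicity-one statement at the end of the proof is an instance of Schur-Weyl duality, which can also be checked numerically. The sketch below (our own verification, not part of the argument) uses the hook length and hook content formulas to confirm that $\sum_{\lambda \vdash n}\dim V_{\lambda}\cdot \dim S_{\lambda}(\mathbb{C}^{r}) = r^{n}$ and that the multiplicity space $V_{(n)}$ of $\mathrm{Sym}^{n}$ is one-dimensional.

```python
from math import factorial, prod

def partitions(n, largest=None):
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def hooks(lam):
    conj = [sum(1 for row in lam if row > j) for j in range(lam[0])]
    return [lam[i] - j + conj[j] - i - 1 for i in range(len(lam)) for j in range(lam[i])]

def dim_Sn_irrep(lam):           # hook length formula for the S_n-representation V_lambda
    return factorial(sum(lam)) // prod(hooks(lam))

def dim_schur(lam, r):           # hook content formula for S_lambda(C^r)
    return prod(r + j - i for i in range(len(lam)) for j in range(lam[i])) // prod(hooks(lam))

n, r = 4, 9                      # sample values with r > 2n
total = sum(dim_Sn_irrep(lam) * dim_schur(lam, r) for lam in partitions(n))
assert total == r ** n           # Schur-Weyl decomposition of the n-th tensor power
assert dim_Sn_irrep((n,)) == 1   # Sym^n appears in the n-th tensor power with multiplicity one
print(total, "=", r, "^", n)
```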
# Fano visitor {#sec:Fanovisitor} ## Fano visitors for derived categories In this section, we will show that $X_n$ and $\mathrm{Jac}(X)$ are Fano visitors. Since $\mathrm{M} _{X}(r, L)$ is a smooth Fano variety of dimension $(r^{2}-1)(g-1)$, Theorem [Theorem 1](#thm:embedding){reference-type="ref" reference="thm:embedding"} immediately implies the following consequence. **Corollary 34**. *Let $X$ be a smooth curve of genus $g \ge 2$. Then the symmetric product $X_{n}$ is a Fano visitor, and its Fano dimension $\mathrm{Fdim} \;X_{n}$ is at most $((2n+1)^{2}-1)(g-1)$.* **Remark 35**. The upper bound of the Fano dimension is not tight. For $n \le g-1$, the embedding $\mathrm{D} ^{b}(X_{n}) \to \mathrm{D} ^{b}(\mathrm{M} _{X}(2, L))$ is obtained by the work of Tevelev-Torres in [@TT21]. Thus, for this range, the upper bound of the Fano dimension is $\dim \mathrm{M} _{X}(2, L) = 3(g-1)$, so $\mathrm{Fdim}\;X_{n} \le 3g - 3$. For general curves of low genus, one can find better upper bounds [@KL23 Section 5]. For any birational morphism $f : V \to W$ between two smooth projective varieties, the natural morphism $\mathcal{O} _{W} \to R f_{*}\mathcal{O} _{V}$ is an isomorphism [@Hir64 p. 144]. Thus, for any $F^{\bullet}, G^{\bullet} \in \mathrm{D} ^{b}(W)$, $$\mathrm{Hom}(L f^{*}F^{\bullet}, L f^{*}G^{\bullet}) \cong \mathrm{Hom}(F^{\bullet}, Rf_{*}Lf^{*}G^{\bullet}) \cong \mathrm{Hom}(F^{\bullet}, G^{\bullet})$$ by the projection formula. Thus, $L f^{*} : \mathrm{D} ^{b}(W) \to \mathrm{D} ^{b}(V)$ is fully-faithful. *Proof of Theorem [Theorem 2](#thm:Fanovisitor){reference-type="ref" reference="thm:Fanovisitor"}.* The *Abel-Jacobi map* $AJ : X_{g} \to \mathrm{Jac}(X)$ is a birational morphism between smooth varieties. By the above argument, we have an embedding $L AJ^{*} : \mathrm{D} ^{b}(\mathrm{Jac}(X)) \to \mathrm{D} ^{b}(X_{g})$. So we have an embedding $$\Phi_{\mathcal{F} } \circ L AJ^{*} : \mathrm{D} ^{b}(\mathrm{Jac}(X)) \to \mathrm{D} ^{b}(\mathrm{M} _{X}(2g+1, L)).$$ The dimension of $\mathrm{M} _{X}(2g+1, L)$ is $((2g+1)^{2}-1)(g-1) = 4(g+1)g(g-1)$. So we obtain an upper bound for $\mathrm{Fdim} \; \mathrm{Jac}(X)$. ◻ **Remark 36**. If $g(X) = 0$, the Jacobian is trivial. When $g(X) = 1$, the Abel-Jacobi map is an isomorphism. It is well-known that any principally polarized abelian variety $A$ of dimension $\le 3$ is either the Jacobian of a curve or a product of Jacobians. Thus, applying [@Kuz11 Corollary 5.10], $\mathrm{D} ^{b}(A)$ is embedded into the derived category of a product of Fano varieties, which is Fano. **Corollary 37**. *All principally polarized abelian varieties of dimension $\le 3$ are Fano visitors.* **Remark 38**. Since $X_{n}$ and abelian varieties are not simply connected, they are not complete intersections. Thus, they are new examples of Fano visitors. **Remark 39**. Derived categories of abelian varieties are also indecomposable [@KO15 Corollary 1.5]. In the range of $1 \le n \le g-1$, the indecomposability of $\mathrm{D} ^{b}(X_{n})$ is obtained by an accumulation of many works (see [@Lin21]). On the other hand, when $n \ge g$, an explicit semiorthogonal decomposition of $\mathrm{D} ^{b}(X_{n})$ is obtained in [@Tod21 Section 5.5]. Indeed, $\mathrm{D} ^{b}(X_{n})$ can be decomposed into the derived categories of $\mathrm{Jac}(X)$ and $X_{m}$ with $m \le g-1$. 
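Before turning to motives, we record a small numerical illustration (our own sketch) of the Fano dimension bounds above: it tabulates the bound of Corollary 34 for a few values of $(g, n)$ and checks the identity $((2g+1)^{2}-1)(g-1) = 4(g+1)g(g-1)$ used in the proof of Theorem 2.

```python
def fano_dim_bound(g, n):
    # Corollary 34: Fdim X_n <= dim M_X(2n+1, L) = ((2n+1)^2 - 1)(g - 1)
    return ((2 * n + 1) ** 2 - 1) * (g - 1)

for g in range(2, 6):
    # n = g is the case used for Jac(X) in Theorem 2; the bound factors as 4(g+1)g(g-1)
    assert fano_dim_bound(g, g) == 4 * (g + 1) * g * (g - 1)
    bounds = [fano_dim_bound(g, n) for n in range(1, 5)]
    print(f"g = {g}: bounds for n = 1..4 are {bounds}; for n <= g-1 one also has 3g-3 = {3 * g - 3}")
```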
From this perspective, it is natural to introduce the following definition. **Definition 40**. A smooth projective variety $V$ is a *motivic Fano visitor* if its rational Chow motive (or one of its tensor products with Lefschetz motives) is a direct summand of the rational Chow motive of a smooth Fano variety $W$. Then we can state the Fano visitor problem for motives as follows. **Question 41** (Fano visitor problem for motives). Is every smooth projective variety a motivic Fano visitor? One may ask similar questions for other versions of motives, for instance, the Grothendieck ring of varieties, numerical motives, Voevodsky motives, and so on. For simplicity, we restrict ourselves to rational Chow motives in this paper. See [@GL20] and references therein for more details about motives. Del Baño obtained the following formula. **Theorem 42** (del Baño). *Let $r \geq 2$ and $d$ be two coprime integers. Then the motivic Poincaré polynomial of $\mathrm{M} _{X}(r,d)$ is $$\sum_{s=1}^r \sum_{r_1+\cdots+r_s=r, r_i \in \mathbb{N} } (-1)^{s-1} \frac{((1+1)^{h^1(C)})^s}{(1-\mathbb{L} )^{s-1}} \prod_{j=1}^s \prod_{i=1}^{r_j-1} \frac{(1+\mathbb{L} ^{ i})^{h^1(C)}}{(1-\mathbb{L} ^{ i})(1-\mathbb{L} ^{i+1})}$$ $$\prod_{j=1}^{s-1}\frac{1}{1-\mathbb{L} ^{r_j+r_{j+1}}} \mathbb{L} ^{ (\sum_{i < j} r_i r_j (g-1)+\sum_{i=1}^{s-1}(r_i+r_{i+1})\langle -\frac{r_1+\cdots+r_i}{r}d \rangle)}.$$* Taking the quotient by the motive of the Jacobian, we obtain an analogous formula for $\mathrm{M} _{X}(r, L)$. See [@GL20] for the details of the argument. Fu, Hoskins, and Lehalleur proved that, under some finiteness assumption, comparing motivic Poincaré polynomials suffices to obtain an isomorphism between Chow motives. **Proposition 43** (Fu-Hoskins-Lehalleur). *Let $M_1, M_2$ be two Kimura finite-dimensional effective Chow motives whose motivic Poincaré polynomials are the same. Then $M_1$ is isomorphic to $M_2$ as effective Chow motives.* Using the strategies of [@GL20] and [@FHL21], we get the following proposition. **Proposition 44**. *For any smooth curve $X$ and $n \in \mathbb{N}$, the symmetric product $X_{n}$ is a motivic Fano visitor.* *Proof.* Let $h(\mathrm{M} _X(r,L))$ be the Chow motive of $\mathrm{M} _X(r,L).$ It admits the following decomposition $$h(\mathrm{M} _X(r,L)) = h_{-}(\mathrm{M} _X(r,L)) \oplus h_{0}(\mathrm{M} _X(r,L)) \oplus h_{+}(\mathrm{M} _X(r,L))$$ which provides the following cohomology decomposition, $$\mathrm{H} ^*(\mathrm{M} _X(r,L)) = \mathrm{H} ^{i \leq 4n}(\mathrm{M} _X(r,L)) \oplus \mathrm{H} ^{4n < i < (r^2-1)(g-1)-4n}(\mathrm{M} _X(r,L)) \oplus \mathrm{H} ^{i \geq (r^2-1)(g-1)-4n}(\mathrm{M} _X(r,L))$$ after applying the realization functor [@Ban98]. When $r$ is sufficiently large, Theorem [\[Bano;inversion\]](#Bano;inversion){reference-type="ref" reference="Bano;inversion"}, the proof of [@GL20 Theorem 1.3], and Proposition [Proposition 43](#prop:equivalenceofeffectivemotives){reference-type="ref" reference="prop:equivalenceofeffectivemotives"} imply that $h_{-}(\mathrm{M} _X(r,L))$ admits a decomposition $h_{-}(\mathrm{M} _X(r,L)) = h(X_n) \otimes \mathbb{L} ^{\otimes n} \oplus R$ where $R$ is an effective Chow motive. Using Proposition [Proposition 43](#prop:equivalenceofeffectivemotives){reference-type="ref" reference="prop:equivalenceofeffectivemotives"} again, we obtain the desired conclusion.
◻ Vial proved that if there is a surjective morphism $f : W \to V$ between two smooth projective varieties, the rational Chow motive of $V$ is a direct summand of that of $W$ [@Via13 Theorem 1.4]. Note that the *Albanese map* $Alb : X_{n} \to \mathrm{Jac}(X)$ is surjective for any smooth curve $X$ when $n \geq g.$ Thus, we obtain the following: **Corollary 45**. *For any smooth curve $X$, the Jacobian $\mathrm{Jac}(X)$ is a motivic Fano visitor.* On the other hand, Matsusaka's theorem says that every abelian variety admits a surjection from $\mathrm{Jac}(X)$ for some curve $X$ [@Mat52]. From Matsusaka's theorem and Vial's theorem, we have the following conclusion, which provides a shred of evidence for the affirmative answer to Question [Question 6](#que:abelian){reference-type="ref" reference="que:abelian"}. **Corollary 46**. *Every abelian variety is a motivic Fano visitor.* KKLL17 S. del Baño. *On motives and moduli spaces of stable vector bundles over a curve.* Thesis, 1998. S. del Baño. *On the Chow motive of some moduli spaces.* J. Reine Angew. Math. 532 (2001), 105--132. A. Beauville and Y. Laszlo, Conformal blocks and generalized theta functions. *Commun. Math. Phys.*, 164, 385--419, 1994. P. Belmans, S. Galkin, and S. Mukhopadhyay, Decompositions of moduli spaces of vector bundles and graph potentials. *Forum Math. Sigma*, 11 (2023), Paper No. e16, 28 pp. P. Belmans and S. Mukhopadhyay, Admissible subcategories in derived categories of moduli of vector bundles on curves. *Adv. Math.* 351 (2019), 653--675. M. Bernardara, M. Bolognesi, and D. Faenzi. Homological projective duality for determinantal varieties. *Adv. Math.* 296 (2016), 181--209. U. Bhosle, Parabolic vector bundles on curves. *Ark. Mat.* 27 (1989), no. 1, 15--22. A. Bondal and D. Orlov, Semiorthogonal decomposition for algebraic varieties. Preprint, arXiv:alg-geom/9506012, 1995. M. Casanellas and R. Hartshorne, ACM bundles on cubic surfaces. *J. Eur. Math. Soc. (JEMS)* 13 (2011), no. 3, 709--731. L. Fu, V. Hoskins, S. P. Lehalleur, Motives of moduli spaces of rank 3 vector bundles and Higgs bundles on a curve. *Electronic Research Archive* 2022, Volume 30, Issue 1: 66--89. A. Fonarev and A. Kuznetsov, Derived categories of curves as components of Fano manifolds. *J. London Math. Soc.* (2) 97 (2018) 24--46. W. Fulton, *Young tableaux. With applications to representation theory and geometry*. London Mathematical Society Student Texts, 35. Cambridge University Press, Cambridge, 1997. x+260 pp. ISBN: 0-521-56144-2. T. Gómez and K.-S. Lee, Motivic decompositions of moduli spaces of vector bundles on curves. Preprint, arXiv:2007.06067. D. Halpern-Leistner, The derived category of a GIT quotient. *J. Amer. Math. Soc.* 28 (2015), no. 3, 871--912. H. Hironaka, Resolution of singularities of an algebraic variety over a field of characteristic zero. I. Ann. of Math. (2) 79 (1964), 109--203. K. Kawatani, S. Okawa, Nonexistence of semiorthogonal decompositions and sections of the canonical bundle. Preprint, arXiv:1508.00682. Y.-H. Kiem, I.-K. Kim, H. Lee, and K.-S. Lee, All complete intersection varieties are Fano visitors. Adv. Math. 311 (2017), 649--661. Y.-H. Kiem and K.-S. Lee, Fano visitors, Fano dimension and Fano orbifolds. *Proceedings for the Moscow-Shanghai-Pohang conferences.* Springer Proc. Math. Stat., 409 Springer, Cham, 2023, 517--544. A. Kuznetsov, Base change for semiorthogonal decompositions. *Compos. Math.* 147 (2011), no. 3, 852--876. A. 
Kuznetsov, Embedding derived categories of an Enriques surface into derived categories of Fano varieties. *Izv. Ross. Akad. Nauk Ser. Mat.* 83 (2019), no. 3, 127--132. K.-S. Lee, Remarks on motives of moduli spaces of rank 2 vector bundles on curves. Preprint, arXiv:1806.11101. K.-S. Lee and H.-B. Moon, Positivity of the Poincaré bundle on the moduli space of vector bundles and its applications. Preprint, arXiv:2106.04857. K.-S. Lee and H.-B. Moon, Derived category and ACM bundles of moduli space of vector bundles on a curve. Forum. Math, Sigma. Volume 11, 2023, e81 DOI: <https://doi.org/10.1017/fms.2023.75>. K.-S. Lee and M. S. Narasimhan, Symmetric products and moduli spaces of vector bundles of curves. Preprint, arXiv:2106.04872. X. Lin, On nonexistence of semi-orthogonal decompositions in algebraic geometry. Preprint, arXiv:2107.09564. T. Matsusaka, On a generating curve of an Abelian variety. *Natur. Sci. Rep. Ochanomizu Univ.* 3(1952), 1--4. V. B. Mehta and C. S. Seshadri. Moduli of vector bundles on curves with parabolic structures. *Math. Ann.* 248 (1980), no. 3, 205--239. H.-B. Moon and S.-B. Yoo. Finite generation of the algebra of type A conformal blocks via birational geometry II: higher genus. *Proc. Lond. Math. Soc.*, vol.120 (2020), issue 2, 242--264. H.-B. Moon and S-B. Yoo, Finite generation of the algebra of type A conformal blocks via birational geometry. *Int. Math. Res. Not. IMRN*, (2021), no. 7, 4941--4974. M. S. Narasimhan, Derived categories of moduli spaces of vector bundles on curves. *J. Geom. Phys.* 122 (2017), 53--58. M. S. Narasimhan, Derived categories of moduli spaces of vector bundles on curves II. *Geometry, algebra, number theory, and their information technology applications*, 375-382, Springer Proc. Math. Stat., 251, Springer, Cham, 2018. N. Olander. Fully faithful functors and dimension, preprint, arXiv:2110.09256. D. Orlov, Derived categories of coherent sheaves, and motives. *Uspekhi Mat. Nauk* 60 (2005), no. 6(366), 231--232. D. Orlov, Remarks on generators and dimensions of triangulated categories. *Mosc. Math. J.* 9 (2009), no. 1, 153--159 C. Pauly, Espaces de modules de fibrés paraboliques et blocs conformes. *Duke Math. J.*, 84(1):217--235, 1996. S. Ramanan, The moduli spaces of vector bundles over an algebraic curve. *Math. Ann.* 200 (1973), 69--84. C. Teleman, Borel-Weil-Bott theory on the moduli stack of $G$-bundles over a curve. *Invent. Math.* 134 (1998), no. 1, 1--57. J. Tevelev, Braid and phantom. Preprint, arXiv:2304.01825. J. Tevelev and S. Torres, The BGMN conjecture via stable pairs. Preprint, arXiv:2108.11951. M. Thaddeus, Geometric invariant theory and flips. *J. Amer. Math. Soc.* 9 (1996), no. 3, 691--723. Y. Toda, Semiorthogonal decompositions of stable pair moduli spaces via $d$-critical flips. *J. Eur. Math. Soc. (JEMS)* 23 (2021), no. 5, 1675--1725. X. Sun, Degeneration of moduli spaces and generalized theta functions, *J. Algebraic Geom.* 9 (2000) 459--527. C. Vial, Algebraic cycles and fibrations. *Doc. Math*. 18 (2013), 1521--1553.
--- abstract: | We generalize a result by Vasil'ev on the algebraic independence of periods of abelian varieties to the case when some of these periods are replaced by their exponentials. We eventually derive some applications to values of the beta function at rational points. author: - Riccardo Tosi date: September 6, 2023 title: On the algebraic independence of periods of abelian varieties and their exponentials --- # Introduction Let X be an abelian variety of dimension $g$ defined over a number field $K$. It is possible to choose a basis $\varphi_1,\dots,\varphi_{2g}$ of $\text{H}^1_{\text{dR}}(X,K)$ in such a way that $\varphi_1,\dots,\varphi_g\in \Gamma(X,\Omega^1_{X/K})$. Fix $2g$ paths $\gamma_1,\dots,\gamma_{2g}$ on $X(\mathbb{C})$ which induce a basis for $\text{H}_1(X(\mathbb{C}),\mathbb{Q})$ and consider the period matrix $$\Omega=(\omega_{ij})=\left( \int_{\gamma_j} \varphi_i\right), \quad i,j=1,\dots,2g.$$ As far as the transcendence properties of the periods of $X$ are concerned, Chudnovsky [@Chudnovsky-main] proved that there are always two algebraically independent numbers among the entries of $\Omega$, following the methods introduced in his celebrated theorem on the transcendence degree of periods of elliptic curves. Later on, Vasil'ev [@Vasil'ev] managed to show that two algebraically independent periods exist even when restricting to any $g+1$ rows of $\Omega$. A bunch of applications have been derived, in particular concerning values of the beta and gamma function at rational points. To quote some, Chudnovsky's result applied to the case of elliptic curves with complex multiplication led to the first proof of the algebraic independence of $\pi$ with both $\Gamma(\frac{1}{4})$ and $\Gamma(\frac{1}{3})$. On the other hand, Vasil'ev's refinement yields for any $n\ge 3$ the existence of at least two algebraically independent numbers among $\pi, \Gamma\left(\frac{1}{n}\right), \dots, \Gamma\left(\frac{g}{n}\right)$, where $g=\lfloor \frac{n-1}{2}\rfloor$. The aim of the present work is to expose a transcendence result along these lines in the case when some periods are replaced by their exponentials. Before stating our main result, we will briefly recall some well-known facts regarding transcendence measures. If $\omega\in\mathbb{C}$ is transcendental, we say that a function $\varphi:\mathbb{R}^2\to \mathbb{R}$ is a *transcendence measure* for $\omega$ if for all non-zero polynomials $P$ with integer coefficients of degree at most $n$ and height at most $h$ we have $\log |P(\omega)|\ge -\varphi(n,\log h)$. A real number $\tau>0$ is said to be a *transcendence type* for $\omega$ if there is a constant $c(\omega,\tau)>0$ such that $c(\omega,\tau)(n+\log h)^{\tau}$ is a transcendence measure for $\omega$. An elementary argument involving the pigeonhole principle shows that any transcendence type $\tau$ for a transcendental number $\omega$ must satisfy $\tau\ge 2$. On the other hand, almost all transcendental numbers admit a transcendence type $\le 2+\varepsilon$ for all $\varepsilon>0$; we refer for instance to [@Amoroso] for a proof of this assertion. The only particular number which is known to have a transcendence type $\le 2+\varepsilon$ for all $\varepsilon>0$ is $\pi$, for which explicit transcendence measures can be found in [@Waldschmidt-measures]. We may now state the main result that we wish to prove. 
Fix $g$ non-zero complex numbers $\xi_1,\dots, \xi_g$ and define the $g\times 2g$ matrix $E$ whose $i$-th row, for $i=1,\dots, g$, coincides with $$\left(e^{\xi_i \omega_{i1}}, \dots, e^{\xi_i \omega_{i,2g}}\right).$$ Pick two integers $m,n$ such that $1\le m \le 2g$ and $0\le n\le g$. Let us choose $m$ rows of the matrix $\Omega$ and $n$ rows of the matrix $E$, say the rows $i_1,\dots, i_n$ for $1\le i_1,\dots, i_n\le g$. Let $S$ be the set made up of the entries of the chosen rows, together with $\xi_{i_1},\dots,\xi_{i_n}$. According to [@Schneider], each row of $\Omega$ cannot have only algebraic entries, so the field $\mathbb{Q}(S)$ has transcendence degree at least $1$ over $\mathbb{Q}$. **Theorem 1**. *Suppose that $\mathbb{Q}(S)$ has transcendence degree exactly $1$ over $\mathbb{Q}$. If $2m+n>2g$, then any transcendence type $\tau$ of a transcendental number $\omega\in\mathbb{Q}(S)$ satisfies $$\tau\ge2+\frac{2m+n-2g}{2g+n}.$$* In particular, any such transcendence type is bounded away from $2$, which is the case for almost no transcendental number. In fact, we expect the transcendence degree of $\mathbb{Q}(S)$ to be at least $2$ whenever $2m+n>2g$. We will later expose the main obstructions to obtaining a result of this kind. For the proof of Theorem [Theorem 1](#theo: 3){reference-type="ref" reference="theo: 3"}, we follow closely the strategy of [@Vasil'ev], which can be recovered from the case $n=0$. It relies on the construction of an auxiliary function vanishing with high multiplicity on prescribed points of a lattice in $\mathbb{C}^g$ by exploiting Siegel's Lemma. As far as the structure of our exposition is concerned, the next section covers the general setting and some preliminary lemmas. The third section revolves around the construction of the aforementioned auxiliary function, whereas the conclusion of the proof of Theorem [Theorem 1](#theo: 3){reference-type="ref" reference="theo: 3"} takes place in the fourth section, and is carried out by analytic means via a generalization by Bombieri and Lang of Schwarz's Lemma to several variables. We will eventually reserve the last section for some applications to values of the beta function at rational points, in which it is possible to turn Theorem [Theorem 1](#theo: 3){reference-type="ref" reference="theo: 3"} into an algebraic independence criterion. For example, we shall prove that there are at least two algebraically independent numbers among $$B\left( \frac{1}{12},\frac{1}{12}\right), \; B\left(\frac{5}{12},\frac{5}{12}\right),\; \pi, \; e^{\pi^2},\; e^{i\pi^2}.$$ ## Acknowledgements {#acknowledgements .unnumbered} I wish to thank Johannes Sprang for his continuous support, his patience and his thorough dedication in helping me write the present text. All this would not have been possible without him. While writing this article, I was supported by the DFG Research Training Group 2553 *Symmetries and classifying spaces: analytic, arithmetic and derived*. # Notation and setting We first introduce some notation. The standard coordinates in $\mathbb{C}^g$ will be denoted by $z=(z_1,\dots,z_g)\in\mathbb{C}^g$. For $j=1,\dots, 2g$, we set $\lambda_j= (\omega_{1j},\dots,\omega_{gj})$, that is, $\lambda_j$ is the vector of the first $g$ entries of the $j$-th column of $\Omega$. Since we have chosen $\varphi_1,\dots,\varphi_g\in \Gamma(X,\Omega^1_{X/K})$, the vectors $\lambda_1,\dots,\lambda_{2g}$ generate a lattice $\Lambda$ of $\mathbb{C}^g$ which yields an isomorphism $X(\mathbb{C})\cong \mathbb{C}^g/\Lambda$. 
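For orientation, it may help to keep the simplest case $g=1$ in mind; this illustration is ours and is not used in what follows. For an elliptic curve in Weierstrass form one may take $\varphi_1 = \frac{dx}{y}$ and $\varphi_2 = \frac{x\,dx}{y}$, so that, up to the usual sign and normalization conventions, $$\Omega = \begin{pmatrix} \omega_{1} & \omega_{2}\\ \eta_{1} & \eta_{2} \end{pmatrix},$$ where $\omega_1, \omega_2$ are the periods and $\eta_1, \eta_2$ the quasi-periods. The Legendre relation $\omega_1\eta_2 - \omega_2\eta_1 = \pm 2\pi i$ then shows that $2\pi i$, hence $\pi$ up to the algebraic factor $2i$, is generated by the entries of $\Omega$; this is a special case of the remark in the last section that $\pi$ belongs to the field generated by the entries of $\Omega$.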
For an integral vector $k=(k_1,\dots, k_g)\in \mathbb{Z}^g$, we set $$|k|\coloneqq \sum_{j=1}^g |k_j|, \quad \|k\|\coloneqq \max_{j=1,\dots,g} |k_j|, \quad k\cdot \lambda\coloneqq \sum_{j=1}^g k_j\lambda_j, \quad z^k=z_1^{k_1}\dots z_g^{k_g}.$$ Given a differentiation operator $$\partial=\frac{\partial^m}{\partial z_1^{m_1}\dots\partial z_g^{m_g}},$$ its order will be denoted by $|\partial|=m=m_1+\dots +m_g$. The notation $|\partial|=0$ signifies that $\partial$ is the identity operator. For an entire function $f:\mathbb{C}^g\to \mathbb{C}$, we write $|f|_R$ for the maximum modulus of $f$ in the closed ball of radius $R\ge 0$ centred at the origin. Given a polynomial $P\in \mathbb{C}[x_1,\dots,x_n,y_1,\dots,y_m]$, we set for the sake of brevity $x=(x_1,\dots,x_n)$, $y=(y_1,\dots,y_m)$. We will write $\deg P$ for the degree of $P$, while the notation $\deg_x P$ will refer to the degree of $P$ in $x$, and similarly for $\deg_y P$. The maximum modulus of the coefficients of $P$, the *height* of $P$, will be denoted by $H(P)$. If $P\neq 0$, we also denote by $t(P)$ the *type* of $P$, which stands for the maximum between the degree and the logarithm of the height of $P$. Let now $L$ be a subfield of $\mathbb{C}$ such that the extension $L/\mathbb{Q}$ is finitely generated. We may write $L=\mathbb{Q}(\omega_1,\dots,\omega_q,\chi)$ for some $\omega_1,\dots,\omega_q\in\mathbb{C}$ algebraically independent and $\chi\in \mathbb{C}$ integral over $\mathbb{Z}[\omega_1,\dots,\omega_q]$ of degree $d$, say. Any element $a\in L$ can be written uniquely in the form $aP=\sum_{i=1}^d P_i y^{i-1}$ for suitable $P,P_1,\dots, P_d\in \mathbb{Z}[\omega_1,\dots,\omega_q]$ such that $\gcd(P,P_1,\dots, P_d)=1$. For $i=1,\dots, q$, we define the degree of $a$ in $\omega_i$ and the type of $a$ respectively as $$\begin{gathered} \deg_{\omega_i} a \coloneqq \max \{\deg_{\omega_i} P, \deg_{\omega_i} P_1,\dots, \deg_{\omega_i} P_d \},\\ t(a)\coloneqq \max\{t(P),t(P_1),\dots, t(P_d) \}. \end{gathered}$$ We remark that these notions depend on the choice of generators $\omega_1,\dots, \omega_q,\chi$ for the extension $L/\mathbb{Q}$. Upper bounds for the type of a number in $L$ can be reduced to computations with types of polynomials via the following **Lemma 1**. *There is a constant $c>0$, only depending on $\omega_1,\dots,\omega_q,\chi$, such that for any polynomial $P\in\mathbb{Z}[x_1,\dots,x_q,y]$ $$t(P(\omega_1,\dots,\omega_q,\chi))\le c \,t(P)+c\log(\deg P).$$* *Proof.* [@Waldschimdt-fr Lemme 4.2.5]. ◻ Let us now consider the matrix $\Omega$ and the lattice $\Lambda$ defined above. There exist $3g$ meromorphic functions $A_1,\dots, A_g,H_1,\dots, H_{2g}:\mathbb{C}^g\to \mathbb{C}$ with the following properties: 1. all these functions vanish at the origin of $\mathbb{C}^g$; 2. for all $i=1,\dots, g$, $j=1,\dots, 2g$ we have $A_i(z+\lambda_j)=A_i(z)$; 3. for all $i,j=1,\dots, 2g$ we have $H_i(z+\lambda_j)=H_i(z)+\omega_{ij}$. The functions $A_1,\dots, A_g$ are called *abelian*, while $H_1,\dots, H_{2g}$ *quasi-periodic*. Moreover, there exists a theta function $\vartheta:\mathbb{C}^g\to \mathbb{C}$ which is entire, does not vanish at the origin and is a denominator for the abelian and quasi-periodic functions, in the sense that $\vartheta A_1,\dots, \vartheta A_g,\vartheta H_1,\dots, \vartheta H_{2g}$ are entire. 
We recall that, by definition of theta function, for all $j=1,\dots,2g$ we may find $u_{j1},\dots, u_{jg}, v_j\in \mathbb{C}$ satisfying $$\vartheta(z+\lambda_j)=\vartheta(z)\exp( 2\pi i(u_{j1}z_1+\dots+u_{jg}z_g+v_j)).$$ The proof of these facts can be found for instance in [@Lange-Birkenhake] and [@Grinspan]. We conclude this section with some arithmetic properties of these functions. **Lemma 2**. *The functions $A_1,\dots, A_g$ and $H_1,\dots, H_{2g}$ can be expanded at the origin in power series of the form $$\sum_{|\mu|\ge 1} b_{\mu} z_1^{\mu_1}\dots z_g^{\mu_g},$$ where $z=(z_1,\dots,z_g)\in \mathbb{C}^g$, $\mu=(\mu_1,\dots,\mu_g)\in\mathbb{Z}^g$ and $\mu_i\ge 0$ for all $i=1,\dots, g$. Furthermore, the coefficients $b_\mu$ lie in $K$ and enjoy the following properties:* 1. *There exists a positive integer $c_1$ such that the maximum modulus of the conjugates of $b_\mu$ satisfies $\le e^{c_1|\mu|}$;* 2. *there exists a positive integer $c_2$ such that $$(3|\mu|)!c_2^{|\mu|}b_{\mu}$$ is an algebraic integer.* *Proof.* [@Grinspan Corollary 4.5]. ◻ # The auxiliary function The first step in proving Theorem [Theorem 1](#theo: 3){reference-type="ref" reference="theo: 3"}, which will be carried out in this section, is the construction of an auxiliary function which vanishes with high multiplicity on a prescribed lattice in $\mathbb{C}^g$. This purpose will be accomplished by means of the following version of Siegel's Lemma: **Lemma 3**. *Let $L=\mathbb{Q}(x_1,\dots, x_q,y)$ for some $x_1,\dots, x_q\in\mathbb{C}$ algebraically independent and $y$ integral over $\mathbb{Z}[x_1,\dots, x_q]$. Then there exists a constant $C>0$ which enjoys the following property.\ Let $n$ and $r$ be positive integers with $n\ge 2r$ and consider $a_{ij}\in\mathbb{Z}[x_1,\dots,x_q,y]$, for $i=1,\dots, n$, $j=1,\dots, r$. Then there exist $\xi_1,\dots,\xi_n\in\mathbb{Z}[x_1,\dots,x_q,y]$, not all zero, such that for all $j=1,\dots, r$ $$\sum_{i=1}^n \xi_i a_{ij}=0 \quad \text{and} \quad \max_{i=1,\dots, n} t(\xi_i)\le C\left(\max_{i,j} t(a_{ij}) + \log n \right).$$* *Proof.* [@Waldschimdt-fr Lemme 4.3.1] ◻ Without loss of generality, we may suppose that the chosen rows are the first $m$ of $\Omega$ and the first $n$ of $E$. Let $K$ be the number field, embedded in $\mathbb{C}$, over which the abelian variety $X$ is defined; we may suppose that $K$ is a Galois extension of $\mathbb{Q}$. Also, from Lemma [Lemma 2](#lemma: alg coeff series of qper funct){reference-type="ref" reference="lemma: alg coeff series of qper funct"} it follows that $K$ contains all the coefficients of the Taylor expansion of $H_1,\dots, H_m$ and $A_1,\dots,A_g$ around the origin. Set $\delta\coloneqq [K:\mathbb{Q}]$ and let $\alpha_1,\dots,\alpha_\delta$ be an integral basis for the ring of integers of $K$. We fix a transcendental number $\omega\in\mathbb{Q}(S)$ and we suppose that all numbers in $S$ are algebraic over $\mathbb{Q}(\omega)$. Let $\chi\in\mathbb{C}$ be integral over $\mathbb{Z}[\omega]$ and such that $\mathbb{Q}(\omega,\chi)$ contains both $\mathbb{Q}(S)$ and $K$. We let $N$ signify a sufficiently large positive integer and we write $c_1,c_2,\dots$ for positive constants depending only on $S$. 
We set for short $b=2m+n-2g$, which is positive by hypothesis, and we let $a$ be a real number such that $$a>\frac{2g+n}{2m+n-2g}$$ We then choose a real number $\varepsilon_1$ satisfying $$0<\varepsilon_1 < \min\left\{\frac{b}{2m+n}, \, \frac{ab-2g-n}{a(m+1)+g(a+1)} \right\}.$$ For $\varepsilon_1$ in this range of values, it is always possible to find $\varepsilon_2>0$ such that $$\varepsilon_1<\varepsilon_2< \min\left\{\frac{b-\varepsilon_1 g}{m}, \,\frac{ab-2g-n + \varepsilon_1(a(m+n-g)-g )}{a(2m+n)+m} \right\}.$$ We define the quantities $$\begin{aligned} r&= m+n,\\ t &= 2g+\varepsilon_1(g+n+m)-\varepsilon_2 n+n, \\ d &= t-r\varepsilon_1,\\ d_0 &= t-r(\varepsilon_1+1-\varepsilon_2), \end{aligned}$$ which have been chosen in such a manner that they satisfy $$\begin{cases} gt+2gr=nd_0+(g+m)d,\\ 0<d_0<d<t<d_0+r,\\ \left(2+\frac{1}{a}\right)d_0+\frac{1}{a}r<t. \end{cases}$$ We finally set $$R=N^r, \quad T=N^t,\quad D=N^d, \quad D_0=\lfloor N^{d_0}\log N\rfloor.$$ We are ready to introduce our auxiliary function. **Proposition 1**. *There exists a constant $C(\omega)$ only depending on $S$, $\omega$, $\alpha_1,\dots,\alpha_\delta$ and $\chi$ and there exist numbers $E_{hl\nu}(\omega)\in\mathbb{Z}[\omega]$ not all zero satisfying the following property. Consider the function $$\Phi(z)\coloneqq C(N)^{\delta}\sum_{\|h\|\le \delta D_0}\sum_{\|l\|\le \delta D}\sum_{\|\nu\|\le \delta D} E_{hl\nu}(\omega) \prod_{r=1}^ne^{h_r\xi_r z_r}\prod_{i=1}^{m}H_{i}^{l_{i}}(z) \prod_{s=1}^g A_s^{\nu_s}(z),$$ with $$C(N)=C(\omega)^{nT+2gnD_0R+2gmD+1}(3T)!c_2^T ,$$ $c_2$ as in Lemma [Lemma 2](#lemma: alg coeff series of qper funct){reference-type="ref" reference="lemma: alg coeff series of qper funct"}, $h=(h_1,\dots,h_n)\in\mathbb{Z}^{n}_{\ge 0}$, $l=(l_1,\dots,l_m)\in\mathbb{Z}^{m}_{\ge0}$ and $\nu=(\nu_1,\dots,\nu_g)\in\mathbb{Z}^g_{\ge 0}$. Then $\Phi$ satisfies the conditions $$\partial \Phi(k\cdot \lambda)=0 \quad \text{for $|\partial|\le T$ and $k\in\mathbb{Z}^{2g}$ with $\|k\|\le R$}.$$ Moreover, the $E_{hl\nu}(\omega)$'s, as polynomials in $\omega$, satisfy $$t(E_{hl\nu}) \le c_3 D_0R\log (D_0R).$$* The proof of Proposition [Proposition 1](#prop: aux. function 3){reference-type="ref" reference="prop: aux. function 3"} relies on regarding (a minor modification of) the equations $\partial \Phi(k\cdot \lambda)=0$ as a homogeneous linear system in the $E_{hl\nu}(\omega)$'s, so as to apply Siegel's Lemma [Lemma 3](#lemma: Siegel){reference-type="ref" reference="lemma: Siegel"}. The following Lemma deals with the quantities which arise as the coefficients of such linear system. **Lemma 4**. *For $k=(k_1,\dots,k_{2g})\in\mathbb{Z}^g$ and $h,l, \nu$ as above with $\|h\|\le D_0$ and $\|l\|,\|\nu\|\le D$, define $$F_{khl\nu}(z)\coloneqq \prod_{r=1}^n e^{h_r\xi_r(z_r+(k\cdot\lambda)_r)}\prod_{i=1}^{m} H_i^{l_i}(z+k\cdot \lambda) \prod_{s=1}^g A_s^{\nu_s}(z),$$ and let $\partial$ be a differential operator of order $|\partial|\le T$. Then there is a constant $C(\omega)\in\mathbb{Z}[\omega]$ depending only on $S$, $\omega$, $\alpha_1,\dots,\alpha_\delta$ and $\chi$ such that $$P_{\partial khl\nu}(\omega,\chi)\coloneqq C(N)\partial F_{khl\nu}(0)\in \mathbb{Z}[\omega,\chi],$$ with $C(N)$ defined as in Proposition [Proposition 1](#prop: aux. function 3){reference-type="ref" reference="prop: aux. 
function 3"}, satisfies $$t(P_{\partial khl\nu}(\omega,\chi))\le c_4 D_0R\log (D_0R).$$* *Proof.* By the quasi-periodicity of $H_i$, we have $$\begin{aligned} H_i^{l_i}(z+k\cdot \lambda) & =\sum_{m_0+\dots+m_{2g}=l_i}\binom{l_i}{m_0;\dots; m_{2g}} H_i^{m_0}(z)\prod_{j=1}^{2g}(k_j\omega_{ij})^{m_j}\\ & =\sum_{m_1+\dots+m_{2g}\le l_i} \varphi_{i,m_1,\dots,m_{2g}}(z)\prod_{j=1}^{2g}\omega_{ij}^{m_j}, \end{aligned}$$ setting for short $$\varphi_{i,m_1,\dots,m_{2g}}(z)=\frac{l_i!H_i^{l_i-m_1-\dots-m_{2g}}(z)}{m_1!\dots m_{2g}!(l_i-m_1-\dots- m_{2g})!}\prod_{j=1}^{2g}k_j^{m_j}.$$ Let now $\kappa$ range through the $(g+1)\times 2g$ matrices with non-negative integer coefficients $\kappa_{ij}$ such that for $i=1,\dots,g+1$ we have $\kappa_{i1}+\dots+\kappa_{i,2g}\le l_i$. We then define $\psi_\kappa$ in such a way that $$\begin{aligned} \prod_{i=1}^{m}H_i^{l_i}(z+k\cdot \lambda) = \sum_{\kappa} \psi_\kappa(z)\prod_{i=1}^{m}\prod_{j=1}^{2g}\omega_{ij}^{\kappa_{ij}}. \end{aligned}$$ Let us also write $$\psi_\kappa'(z)=\psi_\kappa(z)\prod_{s=1}^g A_s^{\nu_s}(z),$$ with Taylor expansion around $0$ given by $$\psi_\kappa'(z)=\sum_q \beta_{\kappa, q} \frac{z_1^{q_1}\dots z_g^{q_g}}{q_1!\dots q_g!}.$$ for some algebraic integers $\beta_{\kappa,q}\in K$. By [@Vasil'ev Lemma 2], the components of $\beta_{\kappa,q}$ with respect to the integral basis $\alpha_1,\dots,\alpha_\delta$ have modulus $\le e^{c_5T\log T}$.\ Furthermore, we have $$\begin{aligned} \prod_{r=1}^n e^{h_r\xi_r(z_r+(k\cdot\lambda)_r)}&=\prod_{r=1}^n e^{h_r\xi_r z_r} \prod_{r=1}^n\prod_{j=1}^{2g} \left(e^{\xi_r\omega_{rj}}\right)^{h_rk_j}\\ &= \prod_{r=1}^n\prod_{j=1}^{2g} \left(e^{\xi_r\omega_{rj}}\right)^{h_rk_j} \sum_p \gamma_p \frac{z_1^{p_1}\dots z_g^{p_g}}{p_1!\dots p_g!}, \end{aligned}$$ where $p=(p_1,\dots,p_g)\in\mathbb{Z}^g_{\ge 0}$ and $\gamma_p=0$ if $p_j\neq 0$ for some $j=n+1,\dots,g$, while $\gamma_p= \prod_{r=1}^n h_r^{p_r}\xi_r^{p_r}$ otherwise. It follows that $$\begin{gathered} \prod_{r=1}^n e^{h_r\xi_rz_r}\prod_{i=1}^{m} H_i^{l_i}(z+k\cdot \lambda) \prod_{s=1}^g A_s^{\nu_s}(z)=\\ =\left(\sum_p \gamma_p \frac{z_1^{p_1}\dots z_g^{p_g}}{p_1!\dots p_g!}\right)\left( \sum_\kappa\sum_q \beta_{\kappa, q}\prod_{i=1}^{m}\prod_{j=1}^{2g}\omega_{ij}^{\kappa_{ij}} \frac{z_1^{q_1}\dots z_g^{q_g}}{q_1!\dots q_g!}\right)=\\ = \sum_{p,q} \sum_\kappa \gamma_p \beta_{\kappa,q} \prod_{i=1}^{m}\prod_{j=1}^{2g}\omega_{ij}^{\kappa_{ij}} \frac{z_1^{p_1+q_1}\dots z_g^{p_g+q_g}}{p_1!q_1!\dots p_g!q_g!}=\\ = \sum_{p,q} \sum_\kappa \gamma_p \beta_{\kappa,q} \binom{p_1+q_1}{p_1}\dots \binom{p_g+q_g}{p_g}\prod_{i=1}^{m}\prod_{j=1}^{2g}\omega_{ij}^{\kappa_{ij}} \frac{z_1^{p_1+q_1}\dots z_g^{p_g+q_g}}{(p_1+q_1)!\dots (p_g+q_g)!}. \end{gathered}$$ The $t$-th term in the Taylor expansion of $F_{khl\nu}$ at $0$ therefore coincides with $$\sum_{p+q=t}\sum_\kappa \beta_{\kappa,q} \prod_{a=1}^g\binom{p_a+q_a}{p_a}\prod_{r=1}^n h_r^{p_r}\xi_r^{p_r}\prod_{i=1}^{m}\prod_{j=1}^{2g}\omega_{ij}^{\kappa_{ij}}\prod_{r=1}^n\prod_{j=1}^{2g} \left(e^{\xi_r\omega_{rj}}\right)^{h_rk_j} \frac{z_1^{t_1}\dots z_g^{t_g}}{t_1!\dots t_g!}$$ Let us now find a suitable quantity that clears out the denominator of the coefficient of the latter term. Let $C(\omega)$ be a denominator in $\mathbb{Z}[\omega,\chi]$ for $\alpha_1,\dots,\alpha_\delta$ and for the numbers in $S$. By [@Vasil'ev Lemma 7], $C(\omega)(3T)!c_2^T$ is a denominator for all the $\beta_{\kappa,q}$'s. 
Hence $$P_{\partial khl\nu}(\omega,\chi)=C(\omega)^{nT+2gmD+2gnD_0R+1}(3T)!c_2^T \partial F_{khl\nu}(0)\in\mathbb{Z}[\omega,\chi].$$ In order to estimate the type of $P_{\partial khl\nu}(\omega,\chi)$, in view of Lemma [Lemma 1](#lemma: how to deal with type proved){reference-type="ref" reference="lemma: how to deal with type proved"} it suffices for our purposes to bound the degree in $\omega$ and $\chi$ and the height of the above polynomial expression. First, $C(\omega)(3T)!c_2^T\beta_{\kappa,q}$ has bounded degree in $\omega$ and by [@Vasil'ev Lemma 2] logarithm of the height $\le c_6 T\log T$. Moreover, $$\log\left(\prod_{a=1}^g\binom{p_a+q_a}{p_a}\right)\le \log\left(2^{gT}\right)\le c_7T, \quad \log \left(\prod_{r=1}^n h_r^{p_r}\right)\le c_8T\log D_0,$$ and these terms only contribute with their height, being constant in $\omega$. Furthermore, $C(\omega)^{nT}\prod_{r=1}^n \xi_r^{p_r}$ yields a polynomial expression in $\mathbb{Z}[\omega,\chi]$ with both degree in $\omega$ and $\chi$ satisfying $\le c_9 T$, while logarithm of the height $\le c_{10} T\log T$. For the product of the $\omega_{ij}$'s we have degree in $\omega$ and $\chi$ bounded by $c_{11} D$ and logarithm of the height $\le c_{12} D\log D$, while for the product of the $e^{\xi_r\omega_{rj}}$'s we get degrees $\le c_{13}D_0 R$ and logarithm of the height $\le c_{14}D_0R\log (D_0 R)$. These computations lead to the desired inequality $$t(P_{\partial khl\nu}(\omega,\chi))\le c_4 D_0R\log (D_0R).$$ ◻ We may now conclude the proof of Proposition [Proposition 1](#prop: aux. function 3){reference-type="ref" reference="prop: aux. function 3"}. We first consider the function $$\widetilde{\Phi}(z)\coloneqq C(N)\sum_{\|h\|\le D_0}\sum_{\|l\|\le D}\sum_{\|\nu\|\le D} \widetilde{E}_{hl\nu}(\omega,\chi) \prod_{r=1}^ne^{h_r\xi_r z_r}\prod_{i=1}^{m}H_{i}^{l_{i}}(z) \prod_{s=1}^g A_s^{\nu_s}(z),$$ for some $\widetilde{E}_{hl\nu}(\omega,\chi)\in\mathbb{Z}[\omega,\chi]$ not all zero to be chosen in such a way that $$\partial \widetilde{\Phi}(k\cdot \lambda)=0 \quad \text{for $|\partial|\le T$ and $k\in\mathbb{Z}^{2g}$ with $\|k\|\le R$}.$$ We may regard these conditions as a linear system in the $\widetilde{E}_{hl\nu}$'s whose coefficients coincide with the $P_{\partial k h l \nu}(\omega,\chi)$ of Lemma [Lemma 4](#lemma: coeff for Siegel 3){reference-type="ref" reference="lemma: coeff for Siegel 3"}. Since $T^gR^{2g}<D_0^{n}D^{g+m}$, Siegel's Lemma [Lemma 3](#lemma: Siegel){reference-type="ref" reference="lemma: Siegel"} ensures the existence of the desired $\widetilde{E}_{hl\nu}$, and in combination with Lemma [Lemma 4](#lemma: coeff for Siegel 3){reference-type="ref" reference="lemma: coeff for Siegel 3"} it also yields $$t(\widetilde{E}_{hl\nu}(\omega,\chi))\le c_{16} D_0R \log (D_0R).$$ We are only left with removing $\chi$ from the coefficients $\widetilde{E}_{hl\nu}(\omega,\chi)$, which can be achieved via a standard argument. We define $\Phi(z)$ to be the product of all the functions $\widetilde{\Phi}_\sigma$ obtained from $\widetilde{\Phi}$ by replacing each $\widetilde{E}_{hl\nu}$ by $\sigma(\widetilde{E}_{hl\nu})$ for a Galois automorphism $\sigma$ of $\mathbb{Q}(\omega,\chi)$ over $\mathbb{Q}(\omega)$. Such $\Phi(z)$ finally satisfies the desired properties as in Proposition [Proposition 1](#prop: aux. function 3){reference-type="ref" reference="prop: aux. function 3"}. 
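The counting condition $T^{g}R^{2g}<D_{0}^{n}D^{g+m}$ invoked above rests on the exponent identity $gt+2gr=nd_{0}+(g+m)d$ satisfied by the quantities chosen in this section. The following short script is our own symbolic sanity check of this identity and of the elementary differences behind the chain $d_0<d<t<d_0+r$; it is not part of the original argument, and the variable names `epsilon1`, `epsilon2` simply mirror $\varepsilon_1$, $\varepsilon_2$.

```python
# Symbolic sanity check (ours, not from the paper) of the exponent identity
# g*t + 2*g*r = n*d0 + (g+m)*d behind the counting step T^g R^(2g) < D_0^n D^(g+m).
import sympy as sp

g, m, n, e1, e2 = sp.symbols('g m n epsilon1 epsilon2', positive=True)

r = m + n
t = 2*g + e1*(g + n + m) - e2*n + n
d = t - r*e1
d0 = t - r*(e1 + 1 - e2)

# Difference of the two exponent counts: expands to 0 identically.
print(sp.expand(g*t + 2*g*r - (n*d0 + (g + m)*d)))   # prints 0

# The gaps in the chain d0 < d < t < d0 + r reduce to simple expressions:
print(sp.factor(t - d))        # equals (m + n)*epsilon1, positive
print(sp.factor(d - d0))       # equals (m + n)*(1 - epsilon2)
print(sp.factor(d0 + r - t))   # equals (m + n)*(epsilon2 - epsilon1), positive since epsilon2 > epsilon1
```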
# End of the proof We shall now construct a polynomial $Q$ with integer coefficients which realizes the transcendence measure for $\omega$ in the statement of Theorem [Theorem 1](#theo: 3){reference-type="ref" reference="theo: 3"}. This polynomial is a byproduct of the result of the previous section, and the missing estimate for $|Q(\omega)|$ will be obtained by analytic means. The functions $A_1,\dots, A_g$, $H_1,\dots, H_m$ and $e^{\xi_1z_1},\dots, e^{\xi_nz_n}$ are algebraically independent [@Brownawell-Kubota Corollary 7], so $\Phi(z)$ does not vanish identically. As a result, we can find an integer $N_0\ge N$ such that $$\partial \Phi(k\cdot\lambda)=0 \quad \text{for $\|k\|\le N_0^r,\; |\partial|\le N_0^t$},$$ but there are $k_0$, $\partial_0$ with $\|k_0\|\le (N_0+1)^r$, $|\partial_0|<(N_0+1)^t$ such that $$\begin{gathered} \partial \Phi(k_0\cdot \lambda)=0 \quad \text{for $|\partial|<|\partial_0|$,}\\ \partial_0\Phi(k_0\cdot\lambda)\neq 0. \end{gathered}$$ We set for short $R_0=(N_0+1)^r$ and $T_0=(N_0+1)^t$. The same argument proposed in Lemma [Lemma 4](#lemma: coeff for Siegel 3){reference-type="ref" reference="lemma: coeff for Siegel 3"} shows that $$P(\omega,\chi)\coloneqq \frac{C(N_0)^\delta}{C(N)^\delta}\partial_0\Phi(k_0\cdot \lambda)$$ is a non-zero element of $\mathbb{Z}[\omega,\chi]$ of type $$t(P(\omega,\chi))\le c_{17} D_0R_0\log(D_0R_0).$$ We now aim at estimating $|P(\omega,\chi)|$ from above: by eventually removing $\chi$, this will lead us to the desired transcendence measure for $\omega$. The main tool that we are going to exploit is the Bombieri-Lang version of Schwarz's Lemma. **Proposition 2**. *There exist positive constants $c_{16}$, $c_{17}$ and $c_{18}$, only depending on $\lambda_1\dots, \lambda_{2g}$, such that for any $T\ge 1$, $\varrho\ge 1$ and $\varrho_1\ge c_{16}\varrho$ and any entire function $f:\mathbb{C}^g\to \mathbb{C}$ satisfying the equation $\partial f(k\cdot\lambda)=0$ for $|\partial|<T$ and $\|k\|\le \varrho$ the following inequality holds: $$|f|_{\varrho_1}\le |f|_{c_{17}\varrho_1}\exp(-c_{18}TR^2).$$* *Proof.* [@Masser Lemma 7] ◻ We consider the entire function $$\Psi(z)=\vartheta_0(z)^{(g+m)\delta D}\Phi(z).$$ Since $k_0\cdot \lambda$ is contained in the ball centred at the origin of radius $\le c_{18}R_0$, by Proposition [Proposition 2](#prop: Schwarz lemma several variables){reference-type="ref" reference="prop: Schwarz lemma several variables"} we infer that $$|\Psi|_{c_{19}R_0}\le |\Psi|_{c_{20}R_0}e^{-c_{21}T_0 R_0^2}.$$ The coefficients $E_{hl\nu}$ have modulus at most $e^{c_{22}D_0R\log(D_0R)}$ by Lemma [Lemma 4](#lemma: coeff for Siegel 3){reference-type="ref" reference="lemma: coeff for Siegel 3"}. We use an elementary Lemma in order to obtain an upper bound for $|\Psi|_{c_{20}R_0}$. **Lemma 5**. *Let $G(z)$ be one of the functions $1,A_1,\dots, A_g, H_1,\dots, H_{g+1}$. Then $\vartheta G$ is entire and for any $\varrho>0$ $$|\vartheta(z)G(z)|_\varrho\le e^{c_{15}\varrho^2}$$* *Proof.* By definition of theta function, for all $j=1,\dots,2g$ we may find $u_{j1},\dots, u_{jg}, v_j\in \mathbb{C}$ satisfying $$\vartheta(z+\lambda_j)=\vartheta(z)\exp( 2\pi i(u_{j1}z_1+\dots+u_{jg}z_g+v_j)).$$ Let us denote by $U$ the $2g\times g$ complex matrix with entries the $u_{ij}$'s and by $V$ the vector $(v_1,\dots,v_{2g})\in\mathbb{C}^{2g}$. For any $k\in\mathbb{Z}^{2g}$ we have $$\vartheta(z+k\cdot \lambda)=\vartheta(z)\exp\left(2\pi i(\,^tk U z+ V\cdot k )\right).$$ Let us now pick $z\in\mathbb{C}^g$ with $|z|\le\varrho$. 
We may write $z=w+k\cdot\lambda$ for some $k\in\mathbb{Z}^{2g}$ and $w$ in the fundamental parallelogram generated by $\lambda_1,\dots,\lambda_{2g}$. If $M$ denotes the maximum of $|\vartheta|$ on this parallelogram, we have $$|\vartheta(z)|=|\vartheta(w)|\exp\left(\text{Re}(2\pi i\,^tk U z+ 2\pi iV\cdot k )\right)\le M\exp\left(|2\pi\,^tk U z+ 2\pi V\cdot k|\right).$$ Let $L$ be the minimum modulus of the $\lambda_j$'s, so that $\|k\|\le \frac{1}{L}\varrho$. If $\|U\|$ denotes the maximum modulus of the $u_{ij}$'s, then $$\left|^tk Uz\right|\le \frac{2g^2\|U\|}{L}\varrho^2, \quad |V\cdot k|\le 2g\|V\|\varrho,$$ and the claim for $G=1$ follows. The remaining cases can be treated analogously, by exploiting the fact that $G$ is either abelian or quasi-periodic. ◻ These computations lead us to the estimate $$|\Psi|_{c_{20}R_0}\le e^{c_{22}(D_0R_0\log(D_0R)+DR_0^2)},$$ and therefore $$|\Psi|_{c_{19}R_0}\le e^{-c_{23}T_0R_0^2}.$$ A similar inequality carries through to $\partial_0\Psi(k_0\cdot\lambda)$. Indeed, by Cauchy's formula $$\frac{\partial^t\Psi(z)}{\partial z_1^{t_1}\dots \partial z_g^{t_g}}= \frac{t_1!\dots t_g!}{(2\pi i)^g}\int_{\gamma_1}\dots \int_{\gamma_g} \frac{\Psi(\zeta)}{(\zeta_1-z_1)^{t_1+1}\dots(\zeta_g-z_g)^{t_g+1}}\,d\zeta,$$ where $\gamma_i$ denotes a circle centred at $z_i$ of radius $\le 1$ for all $i=1,\dots, g$. Hence, we derive the inequality $$\left|\partial_0 \Psi(k_0\cdot\lambda) \right|\le \exp(-c_{24}T_0 R_0^2).$$ Since $\vartheta_0(0)\neq 0$, it follows that $|\vartheta_0(k_0\cdot\lambda)^{(g+m)\delta D}|\ge e^{c_{25}DR_0^2}$, hence $$|P(\omega,\chi)|=\frac{|\partial_0\Psi(k_0\cdot\lambda)|}{|\vartheta_0(k_0\cdot\lambda)^{(g+m)\delta D}|}\le e^{-c_{26}T_0R_0^2}.$$ By taking the norm of $P(\omega,\chi)$ over $\mathbb{Q}(\omega)$, we eventually obtain a polynomial $Q$ with integer coefficients satisfying $$0<|Q(\omega)|\le e^{-c_{27}T_0R_0^2}, \qquad t(Q)\le c_{28} D_0R_0\log(D_0R_0).$$ An exposition of the missing computation can be found for example in [@Feldman]. Since $(D_0R_0\log(D_0R_0))^{2+\frac{1}{a}}<T_0R_0^2$, we conclude that $\omega$ has the desired transcendence type. # Some applications Let us now comment on some features of Theorem [Theorem 1](#theo: 3){reference-type="ref" reference="theo: 3"}. As we previously remarked, almost all transcendental numbers have transcendence type $\le 2+\varepsilon$ for any $\varepsilon>0$. It should therefore be expected that in almost all cases Theorem [Theorem 1](#theo: 3){reference-type="ref" reference="theo: 3"} yields in fact the existence of two algebraically independent numbers in $S$, provided $2m+n>2g$. Another reason why it seems likely that this theorem can be strengthened to the statement $\text{trdeg}(\mathbb{Q}(S)/\mathbb{Q})\ge 2$ is that it is possible to see that $\pi$ belongs to the field generated by the entries of $\Omega$. Thus, in case $\pi$ can actually be generated by fewer than $g+1$ rows of $\Omega$, we deduce the existence of two algebraically independent numbers among the ones in $S$ when selecting precisely those rows. The main obstruction to turning Theorem [Theorem 1](#theo: 3){reference-type="ref" reference="theo: 3"} into an algebraic independence statement lies in the fact that we have not been able to apply Gelfond's criterion [@Waldschimdt-fr Théorème 5.1.1] at the end of the proof.
By letting $N$ range through all sufficiently large natural numbers, our argument yields a sequence of polynomials $Q_N$ satisfying $$0<|Q_N(\omega)|\le e^{-c_{27}T_0R_0^2}, \qquad t(Q_N)\le c_{28} D_0R_0\log(D_0R_0),$$ where the quantities $T_0,R_0, D_0$ are defined as above and depend on some $N_0\ge N$. If we managed to bound $N_0$ from above by a suitable power of $N$, Gelfond's criterion would lead to a contradiction with the transcendence of $\omega$. One could achieve this by supposing that the function $\Phi(z)$ of Proposition [Proposition 1](#prop: aux. function 3){reference-type="ref" reference="prop: aux. function 3"} vanishes with multiplicity $T_0$ at all points of the form $k\cdot\lambda$ with $\|k\|\le R_0$. Imposing these conditions yields a homogeneous linear system in the $E_{hl\nu}$, with the number of unknowns depending on $N$ and the one of equations depending on $N_0$. Using the fact that the $E_{hl\nu}$'s are not all zero, one may derive an upper bound of $N_0$ in terms of $N$ by computing the rank of the matrix associated with this linear system. However, the exponential terms in the expression of $\Phi(z)$ make it somewhat unclear how to practically compute such rank. We now examine some applications of Theorem [Theorem 1](#theo: 3){reference-type="ref" reference="theo: 3"}. For an integer $N\ge 3$, let us consider the curve $y^2=1-4x^N$, which has genus $\lfloor\frac{N-1}{2}\rfloor$. Let us also write $\zeta=e^{\frac{2\pi i}{N}}$. As shown in [@Lang-introduction Chapter V], the period lattice of the Jacobian variety associated with this curve has generators given by the vectors $$\lambda_j=\left( \dots, \zeta^{kj}\left(1-\zeta^k\right)^2\frac{1}{N}B\left(\frac{k}{N},\frac{k}{N} \right), \dots\right)$$ for $j=0,\dots, N-1$, the components running over $k=1,\dots,\lfloor\frac{N-1}{2}\rfloor$, together with the vector $$\left( \dots, \left(1-\zeta^k\right)\frac{1}{N}B\left(\frac{k}{N},\frac{k}{N} \right), \dots\right).$$ An analogous expression applies to the quasi-periods, provided we let $k$ run from $\lfloor\frac{N+1}{2}\rfloor$ to $N-1$. One may now apply Theorem [Theorem 1](#theo: 3){reference-type="ref" reference="theo: 3"} in order to derive results of algebraic independence for $B$-values, for instance by exploiting the fact that for any $a\in\mathbb{Q}\smallsetminus\mathbb{Z}$ the number $B(a,a)B(1-a,1-a)$ is a non-zero algebraic multiple of $\pi$. Let us go through some examples of these arguments. **Corollary 1**. *For any non-zero complex number $\xi$, there are two algebraically independent numbers among $$B\left( \frac{1}{12},\frac{1}{12}\right), \; B\left(\frac{5}{12},\frac{5}{12}\right),\; \pi,\; \xi, \; e^{\xi}, \; e^{i\sqrt{3}\xi}.$$* *Proof.* We apply Theorem [Theorem 1](#theo: 3){reference-type="ref" reference="theo: 3"} to the complex Abelian variety described above for $N=12$ with the following choices. We choose the rows of the period matrix whose components are algebraic multiples of $$B\left( \frac{1}{12},\frac{1}{12}\right), \; B\left(\frac{5}{12},\frac{5}{12}\right),\; B\left( \frac{7}{12},\frac{7}{12}\right), \; B\left(\frac{11}{12},\frac{11}{12}\right).$$ As for the matrix $E$ defined before Theorem [Theorem 1](#theo: 3){reference-type="ref" reference="theo: 3"}, we choose its fourth row, and we also pick $$\xi_4=\xi\left(\left(1-\zeta^4\right)^2\frac{1}{12}B\left(\frac{4}{12},\frac{4}{12} \right)\right)^{-1},$$ where $\zeta=e^{\frac{2\pi i}{12}}$. 
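The computation referred to as "readily checked" just below rests on a classical identity for the beta function, which we spell out for convenience (this derivation is ours and not part of the original text). For $a \in \mathbb{Q}$ with $2a \notin \mathbb{Z}$, writing $B(a,a) = \Gamma(a)^2/\Gamma(2a)$ and applying the reflection formula $\Gamma(s)\Gamma(1-s) = \pi/\sin(\pi s)$ to $s=a$ and $s=2a$, together with $\Gamma(2-2a) = (1-2a)\Gamma(1-2a)$, gives $$B(a,a)B(1-a,1-a) = \frac{\bigl(\Gamma(a)\Gamma(1-a)\bigr)^2}{\Gamma(2a)\Gamma(2-2a)} = \frac{\pi^2/\sin^2(\pi a)}{(1-2a)\,\pi/\sin(2\pi a)} = \frac{2\cot(\pi a)}{1-2a}\,\pi,$$ and $\cot(\pi a)$ is a non-zero algebraic number for such $a$.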
It is readily checked that the products $$B\left( \frac{1}{12},\frac{1}{12}\right)B\left(\frac{11}{12},\frac{11}{12}\right), \quad B\left(\frac{5}{12},\frac{5}{12}\right)B\left( \frac{7}{12},\frac{7}{12}\right)$$ are non-zero algebraic multiples of $\pi$. Thus, $\pi$ belongs to the field generated by $S$, with $S$ defined as in Theorem [Theorem 1](#theo: 3){reference-type="ref" reference="theo: 3"}. As a result, $\text{trdeg}(\mathbb{Q}(S)/\mathbb{Q})\ge 2$. The numbers of $S$ appearing in the matrix $E$ are of the form $e^{\xi\zeta^{4j}}$ for some integers $j$, together with $e^{\xi(1-\zeta^{4})^{-1}}$. Moreover, $\zeta^4=\varrho$, where $\varrho=e^{\frac{2\pi i}{3}}$, while $(1-\zeta^4)$ is a primitive sixth root of unity. Hence, these numbers turn out to be $$e^\xi, \; e^{\xi\varrho}, \; e^{\xi\varrho^2}, \; e^{\xi\varrho^{-1/2}}.$$ Since $\varrho=(-1+i\sqrt{3})/2$, it follows that the transcendence degree of $\mathbb{Q}(S)$ coincides with the one of the field generated over $\mathbb{Q}$ by the numbers in the statement, which is therefore proved. ◻ A similar strategy, choosing the third row of the matrix $E$, allows for example to prove that there are at least two algebraically independent numbers among $$B\left( \frac{1}{12},\frac{1}{12}\right), \; B\left(\frac{5}{12},\frac{5}{12}\right),\; \pi, \; \xi, \; e^{\xi},\; e^{i\xi}.$$ By choosing $\xi=\log 2$ or $\xi=\pi^2$, one deduces for instance the existence of two algebraically independent numbers in each of the following sets: $$\begin{gathered} \left\{B\left( \frac{1}{12},\frac{1}{12}\right), \; B\left(\frac{5}{12},\frac{5}{12}\right),\; \pi, \; \log 2,\; 2^{i}\right\};\\ \left\{B\left( \frac{1}{12},\frac{1}{12}\right), \; B\left(\frac{5}{12},\frac{5}{12}\right),\; \pi, \; e^{\pi^2},\; e^{i\pi^2}\right\}. \end{gathered}$$ 99 Amoroso, F. *Polynomials with high multiplicity*, Acta Arith. 56, no. 4 (1990), 345-64, MR1096348. Birkenhake, C. and Lange, H. *Complex Abelian Varieties*, Grundlehren der mathematischen Wissenschaften 302, Springer-Verlag, Berlin (1992), MR1217487. Brownawell, W.D. and Kubota, K.K. *The algebraic independence of Weierstrass functions and some related numbers*, Acta Arith. 33, no. 2 (1977), 111-49, MR444582. Chudnovsky, G.V. *Contributions to the theory of transcendental numbers*, Mathematical Surveys and Monographs 19, American Mathematical Society, Providence, Rhode Island (1984), MR772027. Fel'dman, N.I. *Hilbert's Seventh Problem* (Russian), Moskov. Gos. Univ., Moscow, (1982), MR710652. Grinspan, P. *Measures of Simultaneous Approximations for Quasi-Periods of Abelian Varieties*, J. Number Theory 94, no. 1 (2002), 136-176, MR1904966. Lang, S. *Introduction to transcendental numbers*, Addison-Wesley Publishing Co., Reading, Mass.-London-Don Mills, Ont. (1966), MR0214547. Masser, D.W. *On the periods of Abelian functions in two variables*, Mathematika 22, no. 2 (1975), 97-107, MR399000. Mumford, D. *Tata Lectures on Theta I*, Progress in Mathematics 28, with the assistance of C. Musili, M. Nori, E. Previato and M. Stillman, Birkhäuser, Inc., Boston, MA (1983), MR688651. Schneider, T. *Zur Theorie der Abelschen Funktionen und Integrale* (German), J. Reine Angew. Math. 183, no. 2 (1941), 110-28, MR6170. Vasil'ev, K.G. *On the algebraic independence of the periods of Abelian integrals* (Russian), Mat. Zametki 60, no. 5 (1996), 681-91, 799, translation in Mat. Notes 60, no. 5-6 (1997), 510-8, MR1619893. Waldschmidt, M. *Nombres transcendants* (French), Lecture Notes in Mathematics, Vol. 
402, Springer-Verlag, Berlin-New York (1974), MR0360483. Waldschmidt, M. *Transcendence measures for exponentials and logarithms*, J. Austral. Math. Soc. Ser. A 25, no. 4 (1978), 445-65, MR508469.
--- abstract: | Conway and Doyle have claimed to be able to divide by three. We attempt to replicate their achievement and fail. In the process, we get tangled up in some shoes and socks and forget how to multiply. author: - "Patrick Lutz[^1]" bibliography: - bibliography.bib title: Conway and Doyle Can Divide by Three, But I Can't --- # Introduction In the paper "Division by Three" [@doyle1994division], Conway and Doyle show that it is possible to divide by 3 in cardinal arithmetic, even without the axiom of choice. Actually, they show that it is possible to divide by $n$ for all natural numbers $n > 0$; they called their paper "Division by Three" rather than "Division by $n$" because the case $n = 3$ seems to capture all the difficulty of the full result. More precisely, they give a proof of the following theorem. **Theorem 1**. *It is provable in $\mathsf{ZF}$ (Zermelo-Fraenkel set theory without the axiom of choice) that for any natural number $n > 0$ and any sets $A$ and $B$, if $|A\times n| = |B \times n|$ then $|A| = |B|$.* Here we are using the notation $A\times n$ to denote the set $A\times\{1,2,\ldots, n\}$ and the notation $|A| = |B|$ to mean that there is a bijection between $A$ and $B$. The purpose of this article is to question whether the statement of Theorem [Theorem 1](#thm-shoe_division){reference-type="ref" reference="thm-shoe_division"} is really the correct definition of "dividing by $n$ without choice." We will propose an alternative statement, show that it is not provable without the axiom of choice, and explain what all this has to do with Bertrand Russell's socks. Of course, none of this should be taken too seriously. I'm not really here to argue about what "division by $n$ without choice" means. Instead, the goal is to have fun with some interesting mathematics, and the question of what "division by $n$ without choice" should really mean is merely an inviting jumping-off point. ## Mathematics Without Choice {#mathematics-without-choice .unnumbered} What does it mean to do math without the axiom of choice? In brief, it means that if we are proving something and want to describe a construction that requires infinitely many choices then we must describe explicitly how these choices are to be made, rather than just assuming that they can be made any-which-way when the time comes. There is a well-known example, due to Bertrand Russell, that illustrates this issue. Suppose there is a millionaire who loves to buy shoes and socks. Every day, he buys a pair of shoes and a pair of socks, and after infinitely many days have passed, he has amassed infinitely many pairs of each. He then asks his butler to pick out one shoe from each pair for him to display in his foyer. The butler wants to make sure he is following the millionaire's instructions precisely, so he asks how to decide which shoe to pick from each pair. The millionaire replies that he can pick the left shoe each time. The next day, the millionaire decides he would also like to display one sock from each pair and so he asks the butler to do so. When the butler again asks how he should decide which sock to pick from each pair, the millionaire is stymied---there is no obvious way to distinguish one sock in a pair from the other.[^2] The point of this example is that if we have a sequence $\{A_i\}_{i \in \mathbb{N}}$ of sets of size 2, then there is no way to prove without the axiom of choice that $\Pi_{i \in \mathbb{N}} A_i$ is nonempty. 
Doing so would require explicitly constructing an element of $\Pi_{i \in \mathbb{N}}A_i$, which is analogous to giving a way to choose one sock from each pair in the millionaire's collection. On the other hand, if we have a fixed ordering on each set $A_i$ in the sequence, then we *can* show without choice that $\Pi_{i \in \mathbb{N}}A_i$ is nonempty, just as it was possible to choose one shoe from each pair. Russell's story about the shoes and socks may seem like just a charming and straightforward illustration of the axiom of choice, but we will return to it a few times throughout this article and see that there is more to it than is initially apparent. # Failing to Divide by Three ## You Can Divide by Three {#you-can-divide-by-three .unnumbered} As we mentioned earlier, in the paper "Division by Three," Conway and Doyle prove without the axiom of choice that for any natural number $n > 0$ and any sets $A$ and $B$, if $|A \times n| = |B \times n|$ then $|A| = |B|$. What this requires is giving an explicit procedure to go from a bijection between $A\times n$ and $B\times n$ to a bijection between $A$ and $B$. This result has a long history. It was (probably) first proved by Lindenbaum and Tarski in 1926 [@lindenbaum1926communication], but the proof was not published and seems to have been forgotten. The first published proof was by Tarski in 1949 and is regarded as somewhat complicated [@tarski1949cancellation]. Conway and Doyle gave a simpler (and more entertainingly exposited) proof, which they claimed may be the original proof by Lindenbaum and Tarski. Later, the proof was simplified even more by Doyle and Qiu in the paper "Division by Four" [@doyle2015division]. There is also a charming exposition of Doyle and Qiu's proof in the article "Pangalactic Division" by Schwartz [@schwartz2015pan]. ## Can You Divide by Three? {#can-you-divide-by-three .unnumbered} Does the statement of Theorem [Theorem 1](#thm-shoe_division){reference-type="ref" reference="thm-shoe_division"} really capture what it means to divide by $n$ without choice? To explain what we mean, we first need to say a little about how division by $n$ is proved. Recall that we are given a bijection between $A\times n$ and $B\times n$, and we need to construct a bijection between $A$ and $B$. We can think of both $A\times n$ and $B\times n$ as unions of collections of disjoint sets of size $n$. Namely, $$\begin{aligned} A\times n &= \bigcup_{a \in A} \{(a, 1), (a, 2), \ldots, (a, n)\}\\ B\times n &= \bigcup_{b \in B} \{(b, 1), (b, 2), \ldots, (b, n)\}.\end{aligned}$$ A key point, which every known proof uses, is that we can simultaneously order every set in the two collections using the ordering induced by the usual ordering on $\{1, 2, \ldots, n\}$. But if we are already working without the axiom of choice, this seems like an unfair restriction. Why not also allow collections of *unordered* sets of size $n$? This gives us an alternative version of "division by $n$ without choice" in which we replace the collections $A\times n$ and $B\times n$ with collections of unordered sets of size $n$ (we will give a precise statement of this version below). Since collections of ordered sets of size $n$ behave like the pairs of shoes from Russell's example while collections of unordered sets of size $n$ behave like the pairs of socks, we will refer to the standard version as "shoe division" and the alternative version as "sock division." **Definition 2**. *Suppose $n > 0$ is a natural number. 
**Shoe division by** $n$ is the principle that for any sets $A$ and $B$, if $|A\times n| = |B \times n|$ then $|A| = |B|$.* **Definition 3**. *Suppose $n > 0$ is a natural number. **Sock division by** $n$ is the principle that for any sets $A$ and $B$ and any collections $\{X_a\}_{a \in A}$ and $\{Y_b\}_{b \in B}$ of disjoint sets of size $n$, if $|\bigcup_{a \in A}X_a| = |\bigcup_{b \in B}Y_b|$ then $|A| = |B|$.* Since we know that shoe division by $n$ is provable without the axiom of choice, it is natural to wonder whether the same is true of sock division. By the way, this is not the first time that someone has asked about the necessity of having collections of ordered rather than unordered sets when dividing by $n$ in cardinal arithmetic. In the paper "Equivariant Division" [@bajpai2017equivariant], Bajpai and Doyle consider when it is possible to go from a bijection $A\times n \to B \times n$ to a bijection $A \to B$ when the bijections are required to respect certain group actions on $A$, $B$, and $n$. Since the axiom of choice can be considered a way to "break symmetries," the question of whether sock division is provable without choice is conceptually very similar to the questions addressed by Bajpai and Doyle. ## You Can't Divide by Three {#you-cant-divide-by-three .unnumbered} In this section we will show that sock division by 3 is not provable without the axiom of choice. In fact, neither is sock division by 2 or, for that matter, sock division by $n$ for any $n > 1$. **Theorem 4**. *For any natural number $n > 1$, the principle of sock division by $n$ is not provable in $\mathsf{ZF}$.* *Proof.* We will show that if sock division by 2 is possible then it is also possible to choose socks for Bertrand Russell's millionaire. The full theorem follows by noting that the proof works not just for human socks but also for socks for octopi with $n > 1$ tentacles. More precisely, suppose sock division by 2 does hold and let $\{A_i\}_{i \in \mathbb{N}}$ be a sequence of disjoint sets of size 2. We will show that $\Pi_{i \in \mathbb{N}}A_i$ is nonempty by constructing a choice function for $\{A_i\}_{i \in \mathbb{N}}$. We can picture $\{A_i\}_{i \in \mathbb{N}}$ as a sequence of pairs of socks. ![image](socks_1.png){width="\\columnwidth"} Now consider taking a single pair of socks---say $A_0 = \{x_0, y_0\}$---and forming the Cartesian product of this pair with the set $\{0, 1\}$. This gives us a 4 element set, depicted by the grid below. ![image](socks_2.png){width="0.4\\columnwidth"} We will divide this 4 element set into a pair of 2 element sets in two different ways. First, we can take the rows of the grid to get the sets $\{(x_0, 0), (x_0, 1)\}$ and $\{(y_0, 0), (y_0, 1)\}$. ![image](socks_3.png){width="0.75\\columnwidth"} Second, we can take the columns of the grid to get the sets $\{(x_0, 0), (y_0, 0)\}$ and $\{(x_0, 1), (y_0, 1)\}$. ![image](socks_4.png){width="0.7\\columnwidth"} If we repeat this procedure for every pair of socks, we end up with two collections of disjoint sets of size 2---one consisting of the rows of the grids formed from each pair and the other consisting of the columns. ![image](socks_5.png){width="0.9\\columnwidth"} Now we will observe a few things about these two collections. - First, each set in the collection of rows has the form $\{(x, 0), (x, 1)\}$ for some $x \in \bigcup_{i \in \mathbb{N}}A_i$, so we can think of the collection of rows as being indexed by $\bigcup_{i \in \mathbb{N}}A_i$ (i.e. indexed by the individual socks). 
- Second, each set in the collection of columns either has the form $A_i\times\{0\}$ for some $i \in \mathbb{N}$ or the form $A_i \times \{1\}$ for some $i \in \mathbb{N}$, so we can think of the collection of columns as being indexed by $\mathbb{N}\times \{0, 1\}$. - Lastly, the union of the collection of rows and the union of the collection of columns are identical---they are both just equal to $\bigcup_{i \in \mathbb{N}}(A_i\times\{0, 1\})$. ![image](socks_6.png){width="0.9\\columnwidth"} The principle of sock division by 2 says that if the unions of two collections of disjoint sets of size 2 are in bijection then the sets indexing those collections are also in bijection. Thus we can conclude that there is a bijection $f \colon (\bigcup_{i \in \mathbb{N}}A_i) \to \mathbb{N}\times\{0,1\}$. We can now describe how to choose one sock from each pair. Consider a pair of socks, $A_i = \{x, y\}$. We know that $x$ is mapped by $f$ to some pair $(n, b) \in \mathbb{N}\times \{0, 1\}$ and $y$ is mapped to some other pair, $(m, c)$. We can choose between $x$ and $y$ by picking whichever one is mapped to the smaller pair in the lexicographic ordering on $\mathbb{N}\times \{0, 1\}$. ◻ # Cardinal Arithmetic and the Power of Sock Division In this section we will discover another view of sock division by considering how to define multiplication of cardinals without choice. ## Shoes and Socks, Revisited {#shoes-and-socks-revisited .unnumbered} Suppose we have two sets, $A$ and $B$. How should we define the product of their cardinalities? The standard answer is that it is the cardinality of their Cartesian product---i.e. $|A\times B|$. But there is another possible definition. Suppose $\{X_a\}_{a \in A}$ is a collection of disjoint sets such that each $X_a$ has the same cardinality as $B$. Since taking a disjoint union of sets corresponds to taking the sum of their cardinalities, we can think of $|\bigcup_{a \in A} X_a| = \sum_{a \in A}|X_a|$ as an alternative definition of "the cardinality of $A$ times the cardinality of $B$." One way to think of these two definitions is that the first interprets multiplication as the area of a rectangle, while the second interprets it as repeated addition. **Multiplication is ...** --------------------------- ----------------------------------------- The area of a rectangle: $|A|\times |B| = |A\times B|$ Repeated addition: $|A|\times|B| = |\bigcup_{a \in A}X_a|$ One problem with thinking of multiplication as repeated addition, however, is that without the axiom of choice, it may not be well-defined. In particular, it is possible to have two collections $\{X_a\}_{a \in A}$ and $\{Y_a\}_{a \in A}$ of disjoint sets of size $|B|$ such that $|\bigcup_{a \in A}X_a| \neq |\bigcup_{a \in A} Y_a|$. In fact, this is actually the original context for Russell's example about shoes and socks. The following passage is from his 1919 book *Introduction to Mathematical Philosophy* [@russell1919introduction] (note that he refers to the axiom of choice as the "multiplicative axiom," since it guarantees that every nonzero product of nonzero cardinalities is nonzero). > Another illustration may help to make the point clearer. We know that $2\times \aleph_0 = \aleph_0$. Hence we might suppose that the sum of $\aleph_0$ pairs must have $\aleph_0$ terms. But this, though we can prove that it is sometimes the case, cannot be proved to happen always unless we assume the multiplicative axiom. 
This is illustrated by the millionaire who bought a pair of socks whenever he bought a pair of boots, and never at any other time, and who had such a passion for buying both that at last he had $\aleph_0$ pairs of boots and $\aleph_0$ pairs of socks. The problem is: How many boots had he, and how many socks? One would naturally suppose that he had twice as many boots and twice as many socks as he had pairs of each, and that therefore he had $\aleph_0$ of each, since that number is not increased by doubling. But this is an instance of the difficulty, already noted, of connecting the sum of $\nu$ classes each having $\mu$ terms with $\mu\times \nu$. Sometimes this can be done, sometimes it cannot. In our case it can be done with the boots, but not with the socks, except by some very artificial device. ## Multiplication vs. Division {#multiplication-vs.-division .unnumbered} Let's revisit the difference between shoe division and sock division in light of what we have just discussed. When discussing "division by $n$ without choice," we have implicitly defined division in terms of multiplication. Being able to divide by $n$ means that whenever we have $|A|\times n = |B| \times n$, we can cancel the $n$'s to get $|A| = |B|$. The only difference between shoe division and sock division is what definition of multiplication is used (i.e. what is meant by $|A|\times n$ and $|B|\times n$). In shoe division, multiplication is interpreted in the usual way, i.e. as "the area of a rectangle." In sock division, it is interpreted as "repeated addition." When we view shoe division and sock division in this way, it is clear that if "multiplication by $n$ as repeated addition of $n$" is well-defined then sock division by $n$ holds (because in this case it is equivalent to shoe division by $n$). Thus it is natural to ask what the exact relationship is between these two principles. A priori, they are fairly different statements. Sock division by $n$ says that if we have two collections $\{X_a\}_{a \in A}$ and $\{Y_b\}_{b \in B}$ of disjoint sets of size $n$ then we can go from a bijection between $\bigcup_{a \in A}X_a$ and $\bigcup_{b \in B}Y_b$ to a bijection between $A$ and $B$ while "multiplication by $n$ as repeated addition of $n$ is well-defined" says that we can go from a bijection between $A$ and $B$ to a bijection between $\bigcup_{a \in A}X_a$ and $\bigcup_{b \in B}Y_b$. However, it turns out that the two principles are actually equivalent and the proof of this is implicit in our proof of Theorem [Theorem 4](#thm-sock_division){reference-type="ref" reference="thm-sock_division"}. Let's make all of this more precise. **Definition 5**. *Suppose $n > 0$ is a natural number. **Multiplication by $n$ is equal to repeated addition of $n$** is the principle that for any set $A$ and any collection $\{X_a\}_{a \in A}$ of disjoint sets of size $n$, $|\bigcup_{a \in A}X_a| = |A\times n|$.* What we can show is that in $\mathsf{ZF}$, the principle of sock division by $n$ is equivalent to the principle that multiplication by $n$ is equal to repeated addition of $n$. **Theorem 6**. *It is provable in $\mathsf{ZF}$ that for any natural number $n > 0$, the principle of sock division by $n$ holds if and only if the principle that multiplication by $n$ is equal to repeated addition by $n$ holds.* *Proof.* ($\impliedby$) First suppose "multiplication by $n$ is equal to repeated addition of $n$" holds. 
Let $A$ and $B$ be any sets and let $\{X_a\}_{a \in A}$ and $\{Y_b\}_{b \in B}$ be two collections of disjoint sets of size $n$ such that $|\bigcup_{a \in A}X_a| = |\bigcup_{b \in B}Y_b|$. Applying "multiplication is repeated addition," we have $$\begin{aligned} |A\times n| = \left|\bigcup\nolimits_{a \in A}X_a\right| = \left|\bigcup\nolimits_{b \in B}Y_b\right| = |B\times n|.\end{aligned}$$ And by applying shoe division by $n$, we get $|A| = |B|$. ($\implies$) Now suppose sock division by $n$ holds and let $A$ be any set and $\{X_a\}_{a \in A}$ be a collection of disjoint sets of size $n$. Consider the set $\bigcup_{a \in A}(X_a \times n)$. We can view this set as a union of a collection of disjoint sets of size $n$ in two different ways: $$\begin{aligned} \bigcup\nolimits_{a \in A}(X_a \times n) &= \bigcup\nolimits_{a \in A,\ i \le n} \{(x, i) \mid x \in X_a\}\\ \bigcup\nolimits_{a \in A}(X_a \times n) &= \bigcup\nolimits_{a \in A,\ x \in X_a} \{(x, i) \mid i \le n\}.\end{aligned}$$ The first of these two collections is indexed by $A\times n$ and the second is indexed by $\bigcup_{a \in A}X_a$. And since the unions of the two collections are identical, sock division implies that $|A\times n| = |\bigcup_{a \in A}X_a|$. ◻ ## Sock Geometry {#sock-geometry .unnumbered} Just how powerful is sock division? We have just seen that if sock division by $n$ holds then multiplication by $n$ is equal to repeated addition of $n$. In other words, for any set $A$ and any collection $\{X_a\}_{a \in A}$ of disjoint sets of size $n$, there is a bijection between $\bigcup_{a \in A}X_a$ and $A\times n$. However, this bijection does not necessarily respect the structure of $\bigcup_{a \in A}X_a$ and $A\times n$ as collections of size $n$ sets indexed by $A$: we are not guaranteed that the image of each $X_a$ is equal to $\{a\}\times n$. It seems reasonable, then, to ask whether sock division by $n$ implies the existence of a bijection that does respect this structure. It is natural to phrase this question using terms from geometry, and in particular in the language of fiber bundles. It is possible to understand everything in this section even if you do not know what a bundle is, but our choice of terminology may seem a bit odd. We can think of a collection $\{X_a\}_{a \in A}$ of disjoint sets of size $n$ as a kind of bundle over $A$. We will refer to it as an **$n$-sock bundle**, or just a **sock bundle** for short. We can think of the index set $A$ as the **base space** of the bundle and the union $\bigcup_{a \in A}X_a$ as the **total space**. If $\{X_a\}_{a \in A}$ and $\{Y_a\}_{a \in A}$ are two $n$-sock bundles then a **sock bundle isomorphism** between them is a bijection $f \colon \bigcup_{a \in A}X_a \to \bigcup_{a \in A}Y_a$ such that for each $a$, the image of $X_a$ is $Y_a$---in other words, such that the following diagram commutes (where $\pi$ and $\pi'$ denote the natural projections $\bigcup_{a \in A}X_a \to A$ and $\bigcup_{a \in A}Y_a \to A$). We will refer to $A\times n$ as the **trivial $n$-sock bundle**[^3] and call a sock bundle **trivializable** if it is isomorphic to $A\times n$. We summarize some of these terms in the table below. **Geometry** **Sock Geometry** ---------------------- ---------------------------------------- Bundle $\{X_a\}_{a \in A}$ Total space $\bigcup_{a \in A}X_a$ Base space $A$ Trivial bundle $A\times n$ Trivializable bundle $\bigcup_{a \in A}X_a \cong A\times n$ Restated in these terms, here's the question we asked above. **Question 7**. 
*Let $n > 0$ be a natural number. Does $\mathsf{ZF}$ prove that sock division by $n$ implies that all $n$-sock bundles are trivializable?* There is at least one special case in which this question has a positive answer: when the base space $A$ can be linearly ordered. To see why, suppose sock division by $n$ holds, let $A$ be any set and let $\preceq$ be a linear order on $A$. If $\{X_a\}_{a \in A}$ is a collection of disjoint sets of size $n$ then we know by Theorem [Theorem 6](#thm-sock_multiplication){reference-type="ref" reference="thm-sock_multiplication"} that sock division by $n$ implies that there is a bijection $f \colon \bigcup_{a \in A}X_a \to A\times n$. Since $\preceq$ is a linear order on $A$, we can linearly order $A\times n$ using $\preceq$ and the standard ordering on $\{1, 2, \ldots, n\}$. Thus we can use $f$ to simultaneously linearly order all the $X_a$'s and thereby trivialize the bundle $\{X_a\}_{a \in A}$. ## Sock Division and Divisibility {#sock-division-and-divisibility .unnumbered} We will end with one more question about the power of sock division. Consider the question of what it means for the cardinality of a set $A$ to be divisible by a natural number $n$. It seems natural to define divisibility in terms of multiplication: $|A|$ is divisible by $n$ if there is some set $B$ such that $|A| = |B|\times n$. However, we saw above that there are two possible ways to interpret $|B|\times n$, and without the axiom of choice these two are not necessarily equivalent. Thus we have two possible notions of divisibility by $n$ without choice, one based on interpreting multiplication as the area of a rectangle and the other based on interpreting multiplication as repeated addition. These two notions were studied by Blair, Blass and Howard [@blair2005divisibility], who called them **strong divisibility by $n$** and **divisibility by $n$**, respectively. To help distinguish the two notions, we will refer to divisibility by $n$ as **weak divisibility by $n$**. **Definition 8**. *Suppose $n > 0$ is a natural number. A set $A$ is **strongly divisible by $n$** if there is a set $B$ such that $|A| = |B\times n|$ and **weakly divisible by $n$** if there is a set $B$ and a collection $\{X_b\}_{b \in B}$ of disjoint sets of size $n$ such that $|A| = |\bigcup_{b \in B}X_b|$.* In the language of sock geometry, one might say that $A$ is strongly divisible by $n$ if it is in bijection with the total space of some trivial $n$-sock bundle and weakly divisible by $n$ if it is in bijection with the total space of any $n$-sock bundle. It is clear that strong divisibility by $n$ implies weak divisibility by $n$, but without the axiom of choice, weak divisibility does not imply strong divisibility (for example, this is implied by the results of Herrlich and Tachtsis in [@herrlich2006number]). Since sock division by $n$ implies that multiplication by $n$ is equal to repeated addition of $n$, sock division by $n$ implies that strong and weak divisibility by $n$ are equivalent. However, it is not clear whether the converse holds and this seems like an interesting question. **Question 9**. *Let $n > 0$ be a natural number. 
Does $\mathsf{ZF}$ prove that if strong and weak divisibility by $n$ are equivalent then sock division by $n$ holds?* I believe that answering questions [Question 7](#q:sock_geometry){reference-type="ref" reference="q:sock_geometry"} and [Question 9](#q:divisibility){reference-type="ref" reference="q:divisibility"} above would give insight into the world of set theory without choice more generally. # Acknowledgements Thanks to Peter Doyle for a lively email conversation on the topic of this paper and for coming up with the name "sock division," an anonymous reviewer for suggesting question [Question 9](#q:divisibility){reference-type="ref" reference="q:divisibility"}, Rahul Dalal for reading and commenting on a draft, Brandon Walker for inadvertently providing me with the motivation to work on this topic and, of course, Conway, Doyle, Qiu and all the rest for inspiration. [^1]: University of California, Los Angeles, Department of Mathematics Email: pglutz\@math.ucla.edu [^2]: When Russell introduced this example, he was careful to point out that in real life there actually are ways to distinguish between socks---for instance, one of them probably weighs slightly more than the other---but he asked for "a little goodwill" on the part of the reader in interpreting the example. [^3]: It would also be reasonable to call it the $n$-shoe bundle.
arxiv_math
{ "id": "2309.11634", "title": "Conway and Doyle Can Divide by Three, But I Can't", "authors": "Patrick Lutz", "categories": "math.LO", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We introduce a finite dimensional version of backstepping controller design for stabilizing solutions of PDEs from boundary. Our controller uses only a finite number of Fourier modes of the state of solution, as opposed to the classical backstepping controller which uses all (infinitely many) modes. We apply our method to the reaction-diffusion equation, which serves only as a canonical example but the method is applicable also to other PDEs whose solutions can be decomposed into a slow finite-dimensional part and a fast tail, where the former dominates the evolution in large time. One of the main goals is to estimate the sufficient number of modes needed to stabilize the plant at a prescribed rate. In addition, we find the minimal number of modes that guarantee the stabilization at a certain (unprescribed) decay rate. Theoretical findings are supported with numerical solutions. address: - | $^a$Department of Mathematics, Koç University\ Sarıyer, İstanbul, 34450 Turkey - | $^b$Department of Mathematics, Bilkent University\ Çankaya, Ankara, 06800 Turkey - | $^c$Department of Mathematics, İzmir Institute of Technology\ Urla, Izmir, 35430 Turkey author: - Varga K. Kalantarov$^{\MakeLowercase{a}}$, Türker Özsarı$^{\MakeLowercase{b},*}$ and Kemal Cem Yılmaz$^{\MakeLowercase{c}}$ bibliography: - ref.bib title: Finite dimensional backstepping controller design --- [^1] [^2] [^3] # Introduction {#sec:introduction} ## Problem statements The goal of this paper is to introduce a finite dimensional analogue of the classical backstepping control algorithm for stabilizing solutions of PDEs from boundary. As a canonical example, we will use the reaction diffusion equation posed on an interval with Dirichlet boundary conditions. However, the method is applicable also to other evolutionary equations whose solutions decompose into a slow finite-dimensional part and a fast tail, where the former dominates the dynamics in long time. To this end, let us consider the following plant: $$\begin{aligned} \label{pde_nl} \begin{cases} u_t - \nu u_{xx} - \alpha u + u^3 = 0, \quad x\in(0,L), t > 0, \\ u(0,t) = 0, u(L,t) = g(t), \\ u(x,0) = u_0(x). \end{cases}\end{aligned}$$ Here $\nu, \alpha > 0$ are given constant coefficients and $g(t)$ is a soughtafter feedback controller that involves only finitely many Fourier sine modes of $u$. In the absence of control and for certain values of $\nu$, $\alpha$ and $L$, zero equilibrium solution may be either asymptotically stable or unstable. To be more precise, if $\nu\lambda_1 - \alpha > 0$, where $\lambda_1$ is the first eigenvalue of the operator $-\frac{d^2}{dx^2}$ subject to Dirichlet boundary conditions, [\[pde_nl\]](#pde_nl){reference-type="eqref" reference="pde_nl"} has a unique equilibrium solution $u \equiv 0$. For this case, it is asymptotically stable. Conversely, if $\nu\lambda_1 - \alpha < 0$, then there will be at least two nontrivial stationary solutions, exactly two of which are asymptotically stable, and all solutions bifurcate from the zero equilibrium (see, e.g. [@chafee; @henry; @sattinger] for a more detailed discussion on this topic). So for the latter case, $u \equiv 0$ is no longer stable. Our aim is to construct a feedback law of the form $$\begin{aligned} \label{feedbackcontroller}&u(L,t) = \int_0^L \xi(y) \Gamma[P_N u](y,t)dy,\end{aligned}$$ such that all solutions are steered asymptotically to zero. 
Here, $\Gamma$ is a linear bounded operator on a certain $L^2-$based functional space, $\xi$ is a suitable smooth function to be constructed, and $P_N$ is the projection operator $$P_N\varphi(x) = \sum_{j=1}^N e_j(x) \left(e_j(\cdot), \varphi(\cdot)\right)_2, \quad e_j(x) = \sqrt{\frac{2}{L}} \sin \left(\frac{j \pi x}{L}\right).$$ We will refer to boundary feedbacks of above form as a finite dimensional backstepping controller. By abusing notation, we denote the feedback controller defined by the integral at the right hand side of [\[feedbackcontroller\]](#feedbackcontroller){reference-type="eqref" reference="feedbackcontroller"} (whose more precise version is [\[controller\]](#controller){reference-type="eqref" reference="controller"} below) as $g(t)$, $g(u(t))$, and $g(P_Nu(t))$. We are interested in two different stabilization problems. **Problem 1**. **Let $\nu, \alpha >0$ be such that $\nu\lambda_1 - \alpha < 0$ and $u_0 \in H^\ell(0,L)$, where $\ell=0$ or $\ell=1$. For **given** $\gamma > 0$, can you find $N > 0$ and construct a feedback control law of the form $u(L,t) = g(P_Nu(t))$ such that the solution of [\[pde_nl\]](#pde_nl){reference-type="eqref" reference="pde_nl"} satisfies $\|u(t)\|_{H^\ell(0,L)} = \mathcal{O}(e^{-\gamma t})$ for $t \ge 0$?** In contrast with the above *rapid stabilization* problem, if one is only interested in decay at a certain non-prescribed rate, then the problem takes the following form: **Problem 2**. **Let $\nu, \alpha >0$ be such that $\nu\lambda_1 - \alpha < 0$ and $u_0 \in H^\ell(0,L)$, where $\ell = 0$ or $\ell=1$. What is the minimum value of $N$ for which the solution of [\[pde_nl\]](#pde_nl){reference-type="eqref" reference="pde_nl"} satisfies $\|u(t)\|_{H^\ell(0,L)} = \mathcal{O}(e^{-\gamma t})$, $t \ge 0$ for **some** $\gamma > 0$ with a boundary feedback of the form $u(L,t) = g(P_Nu(t))$?** *Remark 3*. In Problem [Problem 1](#prob){reference-type="ref" reference="prob"} and Problem [Problem 2](#prob2){reference-type="ref" reference="prob2"}, we are not only interested in establishing the existence of a sufficiently large $N$ that fulfills the goal, in addition we want to provide a precise estimate for $N$ in terms of problem parameters such as the decay rate and the coefficients in the main equation. Problem [Problem 1](#prob){reference-type="ref" reference="prob"} and Problem [Problem 2](#prob2){reference-type="ref" reference="prob2"} will be treated by combining ideas from the theory of asymptotic dynamics of dissipative systems and boundary control of PDEs via backstepping method, see e.g., [@krsticbook]. We give an affirmative answer to Problem [Problem 1](#prob){reference-type="ref" reference="prob"} in Theorems [Theorem 4](#wpstab_lin){reference-type="ref" reference="wpstab_lin"} and [Theorem 5](#wpstab_nonlin){reference-type="ref" reference="wpstab_nonlin"}. Regarding Problem [Problem 2](#prob2){reference-type="ref" reference="prob2"}, we find that $N$ is exactly equal to the instability level of [\[pde_nl\]](#pde_nl){reference-type="eqref" reference="pde_nl"}. More precisely, if the number of positive eigenvalues of the differential operator $-\nu \frac{d^2}{dx^2} - \alpha I$ with domain $\{\varphi \in H^2(0,L) \, | \, \varphi(0) = \varphi(L) = 0\}$ is $N$, then it suffices to design a boundary controller involving only the first $N$ Fourier sine modes of $u$. 
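To make this instability count concrete, the following minimal sketch (ours, for illustration only; it is not part of the paper's analysis or of its numerical section, and the function name and the sample values of $\nu$, $\alpha$, $L$ are hypothetical) counts the unstable Dirichlet sine modes, i.e. the indices $j$ with $\nu\lambda_j < \alpha$, where $\lambda_j = \left(\frac{j\pi}{L}\right)^2$. This is the number $N$ with $\lambda_N < \frac{\alpha}{\nu} < \lambda_{N+1}$ that suffices in the answer to Problem [Problem 2](#prob2){reference-type="ref" reference="prob2"}.

```python
import math

def unstable_mode_count(nu, alpha, L):
    """Number of Dirichlet sine modes j >= 1 with nu*lambda_j < alpha,
    where lambda_j = (j*pi/L)**2; we assume alpha/nu is not itself an
    eigenvalue, as in the condition lambda_N < alpha/nu < lambda_{N+1}."""
    threshold = (L / math.pi) * math.sqrt(alpha / nu)  # mode j is unstable iff j < threshold
    return max(0, math.ceil(threshold) - 1)

# illustrative values only: nu = 1, alpha = 30, L = pi give lambda_j = j^2,
# so the unstable modes are j = 1, ..., 5 and five modes must be fed back
print(unstable_mode_count(1.0, 30.0, math.pi))  # prints 5
```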
Corresponding results are stated in Theorem [Theorem 7](#wpstab_lin2){reference-type="ref" reference="wpstab_lin2"} and Theorem [Theorem 9](#wpstab_nonlin2){reference-type="ref" reference="wpstab_nonlin2"}. ## Discussion of previous work and motivation {#motivation} Control and stabilization of evolutionary partial differential equations is an important topic that has attracted the interest of many researchers in recent decades. Several different approaches exist for designing control systems. In some of these systems, control inputs act from the interior of the medium, perhaps only from a local part. Other systems may use boundary controllers, especially when access to the medium is restricted. The control mechanism is of feedback type if the controller depends on the state of the model. A backstepping-based boundary feedback controller is a classical example. See [@AaSmKr; @KrGuSm; @Liu03; @SmCeKr; @SmKr1; @SmKr2] for its application to second order PDEs and [@BaOzYi; @CeCo; @CoLu; @OzAr; @OzBa; @OzYi] for its application to third order PDEs. Stabilization of a nonlinear diffusion equation (with a quadratic nonlinearity) via the standard backstepping transform was previously achieved by [@Yu14]. Designing a finite dimensional controller for a problem similar to the one in the current paper is more challenging. For instance, the usual methods for proving bounded invertibility of the standard backstepping transformation fail in the current finite dimensional case, and one needs to give a new proof of such a result. Moreover, proofs of decay rate estimates through multipliers are more delicate in the present case, as projections are involved both in the equations and in the estimates. Designing a control system where the controller involves only finitely many parameters of the solution rather than the full state of the solution also offers an important practical advantage. The idea of stabilization of solutions of nonlinear parabolic equations by finite-dimensional controllers goes back to the pioneering works on 2D Navier-Stokes equations (see [@BaTi], [@Fur1]). The results obtained in these works and the pioneering results on infinite-dimensional dissipative systems generated by 2D Navier-Stokes equations, nonlinear parabolic equations, nonlinear damped wave equations and related systems of PDEs (see, e.g., [@BaVi], [@Lad1], [@FTR]) inspired further investigations on the stabilization of these equations by internal and boundary finite-dimensional controllers (see [@AzTi], [@Che], [@KaTi], [@KaOz], [@LHM], [@Mun1] and references therein). A number of papers are devoted to the boundary stabilization of nonlinear parabolic equations by finite-dimensional controllers ([@Bar1], [@Mun1], [@Mun2]). In particular, Barbu [@Bar1] studied the problem of stabilization of the stationary solution of the system $$\begin{cases} u_t=\Delta u+f(x,u),\ \ \mbox{in} \ \ (0,\infty)\times \Omega,\\ u=v, \ \ \mbox{on} \ \ (0,\infty)\times \Gamma_1, \ \ \frac{\partial u}{\partial n}=0 \ \ \mbox{on} \ \ (0,\infty)\times \Gamma_2, \\ u(x,0)=u_0(x)\ \ \mbox{in} \ \ \Omega, \end{cases}$$ with a feedback boundary controller depending on finitely many parameters. Here $\Omega\subset \mathbb R^d$ is a bounded domain with the boundary $\partial \Omega =\Gamma_1\cup \Gamma_2$, where $\Gamma_1,\Gamma_2$ are disjoint connected components of $\partial \Omega.$ They assume that the system $\{(\partial e_j/\partial n) \, | \, 1 \leq j \leq N\}$ of unstable Fourier modes is linearly independent on $\Gamma_1$.
Their strategy works successfully for $d \geq 2$. However, in one spatial dimension, this assumption is true only if the number of unstable Fourier modes is one. This situation results in a restriction $\alpha < \lambda_2$ on the coefficient of the zero order term of the main equation, where $\lambda_2$ is the second eigenvalue of the differential operator $-\frac{d^2}{dx^2}$ with domain $\{\varphi \in H^2(0,L) \, | \, \varphi^\prime(0) = \varphi(L) = 0\}$. The strategy that we design for Problem [Problem 1](#prob){reference-type="ref" reference="prob"} and Problem [Problem 2](#prob2){reference-type="ref" reference="prob2"} works for an arbitrary choice of $\alpha$, i.e., for arbitrary levels of instability. Very recently, stabilization of the 1D linear heat equation by observer-based finite dimensional controllers using in-domain point measurement and boundary measurement was studied in [@KaFr] and [@LhPr], respectively. There are also similar results for the semilinear heat equation, see [@Kat23]. Their strategy relies on homogenizing the boundary conditions by changing variables to transfer the boundary control input into the domain, and then decomposing the main equation with respect to the Fourier modes. Utilizing the fact that the eigenvalues $\lambda_j, j \in \mathbb{Z}^+$, of the differential operator are countable and ordered, so that the unstable Fourier modes are finitely many, say the first $N_0$, they control only the first $N_0$ modes by considering an $N_0-$dimensional system of ODEs. Another recent study on the stabilization of the linear heat equation in higher dimensions is [@Feng2] (see also [@Feng1] for the 1D case), which is again based on stabilizing the model via a spectral truncation stabilizer. The studies mentioned in the previous paragraph are based on stabilizing the infinite dimensional dynamical system by controlling a corresponding finite dimensional system, which eventually amounts to using finite dimensional feedback controllers. In our present study, we will follow a different approach that allows us to stabilize the zero equilibrium by directly controlling the infinite dimensional system via finite dimensional boundary feedback controllers. Our approach is inspired by the fact that dissipative dynamical systems are known to possess finite-dimensional determining parameters (see, e.g., [@BaVi; @FoPr; @Hale; @Rob; @Temam]). We combine this theory with the backstepping method by choosing the so-called *target* model in a novel way. The advantage of our strategy is that it allows us to achieve exponential stabilization of the zero equilibrium with a desired decay rate, while the instability level of the model is allowed to be arbitrarily large. This in turn means that the decay rate can be chosen arbitrarily large, which is also known as rapid stabilization. We would also like to note that the method in our present study is applicable to other second order dissipative nonlinear PDEs such as the Ginzburg-Landau equation, the Fitzhugh--Nagumo equation, the viscous Burgers' equation, and so on. It should be remarked that recently a nice strategy, called late-lumping, which combines the method of backstepping with a finite dimensional controller, was introduced; see for instance [@Aur19] and [@Woit17] for its application to the heat equation. There are certain important differences when the methodology of these papers is compared with ours. The first one of these is that the aforementioned works use the standard backstepping transformation as in [@krsticbook].
Therefore, it does not involve the projection operator inside the integral term, and this contrasts with our transformation (see equation [\[bt\]](#bt){reference-type="eqref" reference="bt"} below), which is a priori given with the projection $P_N$ in its definition. The motivation for this is to obtain a target model with homogeneous boundary conditions amenable to a Lyapunov analysis that yields precise information on the sufficient number of Fourier modes for the desired purpose (stabilization with either a prescribed or an unprescribed rate). Such an estimate is not provided in the late-lumping approach. In addition, the late-lumping approach is based on a crucial assumption that the finite dimensional controller uniformly converges to the standard controller that uses the full state of the solution. Such an assumption was verified in [@Aur19 Lemma 5]; however, its proof relies on the embedding $H^1(0,L)\hookrightarrow L^\infty(0,L)$. This forces solutions, and in turn the initial state, to be taken from $H^1(0,L)$, too, for proving decay estimates at the $L^2$-level. On the other hand, our method does not rely on the verification of such a property, and we establish stabilization at the $L^2$-level with initial datum taken from $L^2(0,L)$. Furthermore, we also provide decay rate estimates at the $H^1-$level with data from $H^1(0,L)$, the space used for $L^2$-level decay in the late-lumping approach. ## Methodology Let us now describe our strategy. We first consider the following linearized model associated with [\[pde_nl\]](#pde_nl){reference-type="eqref" reference="pde_nl"}: $$\label{pde_l} \begin{cases} u_t - \nu u_{xx} - \alpha u = 0, \quad x\in(0,L), t > 0, \\ u(0,t) = 0, u(L,t) = g(t),\\ u(x,0) = u_0(x). \end{cases}$$ The steps of our strategy are given below:\ ***Step i.*** We first design the following target model: $$\label{pde_tl} \begin{cases} w_t - \nu w_{xx} - \alpha w + \mu P_Nw= 0, \quad x\in(0,L), t > 0, \\ w(0,t) = w(L,t) = 0,\\ w(x,0)=w_0(x), \end{cases}$$ where $w_0$ is to be computed in step (iii) below. This target model is chosen to convert the feedback action on the boundary to an interior stabilizing effect (see, e.g. [@Liu03]). In our case, the feedback involves a finite number of Fourier modes; therefore, we consider a target model with a stabilizing term given by $\mu P_N w$. Indeed, dissipative dynamical systems possess finite dimensional asymptotic dynamics (see e.g., [@Hale; @Rob; @Temam]) since such systems possess a finite number of determining modes [@FoPr]. Therefore, the term $\mu P_Nw$ is capable of preventing large fluctuations or uncontrolled growth of the solution for large $N$ due to the term $-\alpha w$. We will show that solutions of [\[pde_tl\]](#pde_tl){reference-type="eqref" reference="pde_tl"} can be steered to zero with any prescribed exponential decay rate provided $\mu > \alpha - \nu\lambda_1$ and $N$ fulfills certain criteria.\ ***Step ii.*** Next, we introduce our backstepping transformation: $$\label{bt} \begin{split} u(x,t) &= w(x,t) + \int_0^x k(x,y) P_N w(y,t) dy \\ &\doteq [(I + \Upsilon_k P_N)w](x,t) \\ &\doteq [T_Nw](x,t). \end{split}$$ In Section [2](#sec_bstrans){reference-type="ref" reference="sec_bstrans"} we show that if $k$ is a sufficiently smooth solution of a certain boundary value problem (see [\[ker_pde\]](#ker_pde){reference-type="eqref" reference="ker_pde"}) posed on the triangular region $\Delta_{x,y} \doteq \left\{(x,y) \in \mathbb{R}^2 : x\in(0,L), y \in (0,x)\right\}$, then [\[bt\]](#bt){reference-type="eqref" reference="bt"} successfully maps the target model [\[pde_tl\]](#pde_tl){reference-type="eqref" reference="pde_tl"} to the linearized model [\[pde_l\]](#pde_l){reference-type="eqref" reference="pde_l"}. Finding the kernel $k$ allows us to write the control input as $$\begin{aligned} \label{ctrl_input} &g(t) = \int_0^L k(L,y) P_N w(y,t) dy\end{aligned}$$ [\[ctrl_input\]](#ctrl_input){reference-type="eqref" reference="ctrl_input"} will be rewritten in the form of a feedback using the inverse operator discussed in the next step.\ ***Step iii.*** We prove that the transformation $I + \Upsilon_kP_N$ is bounded invertible if $(\mu,N)$ is an admissible decay rate-mode pair (see Definition [Definition 10](#ratemodepair){reference-type="ref" reference="ratemodepair"} in Section [2](#sec_bstrans){reference-type="ref" reference="sec_bstrans"}). Bounded invertibility and step (ii) imply that the decay properties of the target model [\[pde_tl\]](#pde_tl){reference-type="eqref" reference="pde_tl"} are inherited by the linear plant [\[pde_l\]](#pde_l){reference-type="eqref" reference="pde_l"}. Moreover, the inverse operator can be expressed as $I - \Phi_N$, where $\Phi_N : L^2(0,L) \to H^\ell(0,L)$ is a bounded linear operator. Thus, the boundary controller [\[ctrl_input\]](#ctrl_input){reference-type="eqref" reference="ctrl_input"} can be expressed as $$\begin{aligned} \label{controller} &g(t) = \int_0^L k(L,y) \Gamma[P_N u](y,t)dy,\end{aligned}$$ where $\Gamma \doteq (I - P_N \Phi_N): H^\ell(0,L) \to H^\ell(0,L)$. The initial state of the target model takes the form $w_0 \doteq (I - \Phi_N) u_0$. We use the same backstepping transformation to transform the nonlinear model [\[pde_nl\]](#pde_nl){reference-type="eqref" reference="pde_nl"} into the corresponding nonlinear target model below: $$\begin{cases} w_t - \nu w_{xx} - \alpha w + \mu P_Nw + f(w) = 0, \quad x\in(0,L), t > 0, \\ w(0,t) = w(L,t) = 0, \\ w(x,0) = w_0(x), \end{cases}$$ where $f(w) = (I - \Phi_N) \left[((I + \Upsilon_kP_N)w)^3\right].$ We will see in Section [4](#sec_wpstab_nonlin){reference-type="ref" reference="sec_wpstab_nonlin"} that, choosing $\mu > \alpha - \nu \lambda_1$ and $N$ sufficiently large, solutions of the above nonlinear target model tend to zero at a prescribed exponential decay rate, provided that $\|w_0\|_{H^\ell(0,L)}$ is sufficiently small. Now, thanks to steps (ii)-(iii) above, under a smallness assumption on $u_0$, the same decay result will also hold for the nonlinear plant [\[pde_nl\]](#pde_nl){reference-type="eqref" reference="pde_nl"}. ## Preliminaries {#prelim} $L^p(0,L)$, $1 \leq p \leq \infty$, is the usual Lebesgue space, and given $\varphi \in L^p(0,L)$ we denote its $L^p-$norm by $\|\varphi\|_{L^p(0,L)}$. For $p = 2$, we write $\|\varphi\|$ instead of $\|\varphi\|_{L^2(0,L)}$. Given $\ell > 0$, we represent the $L^2-$based Sobolev space by $H^\ell(0,L)$. $H_0^1(0,L)$ is the closed subspace of $H^1(0,L)$ that consists of those functions in $H^1(0,L)$ which vanish (have zero trace) on the boundary of $(0,L)$.
If $A:H^\ell(0,L) \to H^\ell(0,L)$ is a linear bounded operator, $\ell = 0,1$, we denote its operator norm as $\|A\|_{H^\ell(0,L) \to H^\ell(0,L)}$. We write $a \lesssim b$ to denote an inequality $a \leq c b$, where $c > 0$ may depend on only some fixed parameters. With $\lambda_{j}$, $j = 1, 2, \dotsc$, where $\lambda_j = j^2 \lambda_1=\left(\frac{j\pi }{L}\right)^2$, we denote the $j-$th eigenvalue of the operator $-\frac{d^2}{dx^2}$ subject to Dirichlet boundary conditions, and $e_j=\sqrt{\frac{2}{L}}\sin(\frac{j\pi x}{L})$ is the corresponding $L^2-$normalized eigenfunction. We introduce the (solution) spaces $$X_T^\ell\equiv L^\infty(0,T;H^\ell(0,L)) \cap L^2(0,T;H^{1+\ell}(0,L)), \quad \ell=0, 1.$$ We equip these spaces with the norms $$\begin{aligned} \|\psi\|_{X_T^\ell}^2 \equiv\mathop{\mathrm{ess\,sup}}_{t\in [0,T]}\|\psi(\cdot,t)\|_{H^{\ell}(0,L)}^2+\int_0^T\|\psi(\cdot,t)\|_{H^{1+\ell}(0,L)}^2dt.\end{aligned}$$ The following inequalities are useful while carrying out some of the norm estimates.\ - Cauchy's inequality with $\epsilon$: $ab \leq \frac{\epsilon a^2}{2} + \frac{b^2}{2\epsilon}$, $\epsilon>0$. - Cauchy-Schwarz inequality: For $\varphi,\psi \in L^2(0,L)$, $\left| \int_0^L \varphi(x)\psi(x) dx \right| \leq \|\varphi\| \|\psi\|.$ - Poincaré inequality: For $\varphi \in H_0^1(0,L)$ or for $\varphi\in H^1(0,L)$ with $\int_{0}^L\varphi dx=0$, it follows $\|\varphi\|^2 \leq \lambda_1^{-1} \|\varphi^\prime\|^2.$ In particular, $\|\varphi'\|_{2}^2\le \lambda_1^{-1}\|\varphi''\|_2^2$ for $\varphi\in H_0^1(0,L)\cap H^2(0,L)$. - Poincaré type inequality: For $u \in H_0^1(0,L)$, $$\|u - P_N u\|^2 \leq \lambda_{N+1}^{-1} \|u^\prime\|^2, \quad \lambda_{N+1} = (N+1)^2 \lambda_1,$$ The proof of this inequality is easy, indeed with $\hat{u}_j=(u,e_j)_2$, one has $$\|u'\|^2=\sum\limits_{j=1}^\infty\lambda_j\hat{u}_j^2\ge \sum\limits_{j=N+1}^\infty\lambda_j\hat{u}_j^2 \ge \lambda_{N+1}\sum\limits_{j=N+1}^\infty\hat{u}_j^2= \lambda_{N+1}\|u-P_Nu\|^2.$$ - Gagliardo-Nirenberg's inequality: Let $u \in H_0^1(0,L)$, $p \geq 2$, $\alpha = \frac{1}{2} - \frac{1}{p}$. Then $\|u\|_{L^p(0,L)} \leq c^* \|u^{\prime}\|^\alpha \|u\|^{1-\alpha},$ where $c^* > 0$ (is say best GN-constant) depends on $p, \alpha$, and $L$. ## Main results We first solve the rapid stabilization problem for the linearized model and prove the theorem below. $(\mu,N)$ is assumed to be an admissible decay rate-mode pair throughout (see Definition [Definition 10](#ratemodepair){reference-type="ref" reference="ratemodepair"} in Section [2](#sec_bstrans){reference-type="ref" reference="sec_bstrans"}). **Theorem 4**. *Let $\nu, \alpha > 0$, $\mu>\alpha - \nu \lambda_1 \geq 0$, and $u_0 \in H^\ell(0,L)$, where $\ell = 0$ or $\ell=1$. Then the weak solution $u$ of the linear problem [\[pde_l\]](#pde_l){reference-type="eqref" reference="pde_l"}, which satisfies the feedback control law $u(L,t) = g(P_N u(t))$, belongs to $X_T^\ell$. Moreover, if $$N > \max \left\{ \frac{\mu}{2\nu\lambda_1} - 1 , \frac{\mu}{\mu + \nu\lambda_1 - \alpha} - 1\right\},$$ then $u$ satisfies the decay estimate $$\|u(t)\|_{H^\ell(0,L)} \leq c_k e^{-\gamma t} \|u_0\|_{H^\ell(0,L)}, \quad \text{ for a.e. }t \geq 0,$$ where $\gamma = \nu\lambda_1 - \alpha + \mu (1 - \frac{1}{N+1})$ and $c_k$ is a nonnegative constant dependent on the kernel $k$ and independent of $u_0$.* Secondly, we treat the corresponding nonlinear model. **Theorem 5**. 
*Let $\nu, \alpha > 0$, $\mu>\alpha - \nu \lambda_1 \geq 0$, $d\in(0,1)$, and $u_0 \in H^\ell(0,L)$, where $\ell = 0$ or $\ell=1$. Let $\epsilon>0$ be any number satisfying $\epsilon\lambda_1\le \max\{\nu-\frac{\mu}{N+1}, 2\gamma\}$ and assume that $\|u_0\|_{H^\ell}$ is sufficiently small in the sense that $\|u_0^{(\ell)}\|_2^2\le d\frac{\sqrt{\epsilon(2\gamma-\epsilon\lambda_1)}\lambda_1^{\ell}}{c_0c_{1}}{\|T_N^{-1}\|_{H^\ell(0,L)\rightarrow H^\ell(0,L)}^{-2}}$, where $\gamma$ is given by [\[gamma\]](#gamma){reference-type="eqref" reference="gamma"}, $c_0, c_1$ are defined in [\[c0def\]](#c0def){reference-type="eqref" reference="c0def"} and [\[c1def\]](#c1def){reference-type="eqref" reference="c1def"}, respectively. If $$N > \max \left\{\frac{\mu}{2\nu\lambda_1} - 1, \frac{\mu}{\mu + \nu\lambda_1 - \alpha} - 1\right\},$$ then the nonlinear problem [\[pde_nl\]](#pde_nl){reference-type="eqref" reference="pde_nl"} has a global solution $u\in X_T^\ell$, which satisfies the feedback control law $u(L,t) = g(P_N u(t))$, and has the decay estimate $$\|u(t)\|_{H^\ell(0,L)} \leq c_k e^{-\gamma't} \|u_0\|_{H^\ell(0,L)} \text{ for a.e. } t \geq 0,$$ for any $\gamma'$ with $0<\gamma'<\gamma= \nu\lambda_1 - \alpha + \mu \left(1 - \frac{1}{N+1}\right)$, where $c_k$ is a non-negative constant dependent on the kernel $k$, $d$ and independent of $u_0$.* *Remark 6*. The smallness assumption on initial datum is used only for stabilization and it is not a necessary assumption for local existence of solutions. Smallness condition for stabilization of nonlinear PDEs via backstepping controllers is common, see for instance [@CeCo]. This is because the backstepping transformation turns the original nonlinear plant into another plant (target model) in which the monotone structure of the nonlinear term is disrupted. Next, we consider the stabilization problem with a certain unprescribed not necessarily large decay rate for the linearized model. We show that if $N$ is the number of unstable modes, it suffices to employ a controller that involves only the first $N$ modes to achieve exponential stabilization. **Theorem 7**. *Let $\nu, \alpha > 0$, $u_0 \in H^\ell(0,L)$, where $\ell = 0$ or $\ell=1$, $N$ be such that $\lambda_N < \frac{\alpha}{\nu} < \lambda_{N+1}$, and $\mu$ satisfy $$\label{mucond} 2(\alpha-\nu\lambda_1) \left(1 - \frac{1}{(N+1)^2}\right)^{-1} < \mu < 2\nu \lambda_{N+1}.$$ Then the solution $u$ of linear problem [\[pde_l\]](#pde_l){reference-type="eqref" reference="pde_l"}, which satisfies the feedback control law $u(L,t) = g(P_N u(t))$, belongs to $X_T^\ell$ and has the decay estimate $$\|u(t)\|_{H^\ell(0,L)} \leq c_k e^{-\rho t} \|u_0\|_{H^\ell(0,L)}, \quad \text{ for a.e. } t \geq 0,$$ where $\rho = \nu\lambda_1 - \alpha + \frac{\mu}{2}(1 - \frac{1}{(N+1)^2})$ and, $c_k$ is a nonnegative constant depending on $k$ and independent of $u_0$.* *Remark 8*. The set of $\mu$ that satisfies [\[mucond\]](#mucond){reference-type="eqref" reference="mucond"} is nonempty as shown in Section [5](#sec_wpstab2){reference-type="ref" reference="sec_wpstab2"}. Moreover, $\rho$ is strictly positive by construction. Similar remarks also apply to Theorem [Theorem 9](#wpstab_nonlin2){reference-type="ref" reference="wpstab_nonlin2"} below, where we extend the above result to the nonlinear model. **Theorem 9**. *Let $\nu, \alpha > 0$, $d\in (0,1)$ and $u_0 \in H^\ell(0,L)$, where $\ell = 0$ or $\ell=1$, $N$ be such that $\lambda_N < \frac{\alpha}{\nu} < \lambda_{N+1}$. 
Assume that $\|u_0\|_{H^\ell}$ is sufficiently small in the sense of Theorem [Theorem 5](#wpstab_nonlin){reference-type="ref" reference="wpstab_nonlin"} and $\mu$ satisfies $$2(\alpha-\nu\lambda_1) \left(1 - \frac{1}{(N+1)^2}\right)^{-1} < \mu < 2\nu \lambda_{N+1}.$$ Then for any $\rho'$ with $0<\rho'<\rho= \nu\lambda_1 - \alpha + \frac{\mu}{2}\left(1 - \frac{1}{(N+1)^2}\right)$, the nonlinear problem [\[pde_nl\]](#pde_nl){reference-type="eqref" reference="pde_nl"} has a solution $u\in X_T^\ell$, which satisfies the feedback control law $u(L,t) = g(P_N u(t))$, and has the decay estimate $$\|u(t)\|_{H^\ell(0,L)} \leq c_k e^{-\rho' t} \|u_0\|_{H^\ell(0,L)} \quad \text{ for a.e. } t \geq 0,$$ where $c_k$ is a nonnegative constant depending on $k$, $d$, and independent of $u_0$.* ## Orientation This paper consists of six sections. In Section [2](#sec_bstrans){reference-type="ref" reference="sec_bstrans"} we find sufficient conditions for the backstepping kernel so that the corresponding backstepping transformation maps the linearized plant to a desired target model. Using these conditions, we obtain an explicit representation of the kernel. Then, we prove the invertibility of the backstepping transformation with a bounded inverse. In Sections [3](#sec_wpstab_lin){reference-type="ref" reference="sec_wpstab_lin"} and [4](#sec_wpstab_nonlin){reference-type="ref" reference="sec_wpstab_nonlin"} we answer Problem [Problem 1](#prob){reference-type="ref" reference="prob"}, and in Section [5](#sec_wpstab2){reference-type="ref" reference="sec_wpstab2"} we answer Problem [Problem 2](#prob2){reference-type="ref" reference="prob2"}. Finally, in Section [6](#numerics){reference-type="ref" reference="numerics"}, we give some numerical simulations verifying our theoretical results. # Backstepping transformation: existence and bounded invertibility {#sec_bstrans} ## Kernel Here, we state the sufficient conditions for the kernel so that the backstepping transformation maps the target model to the linearized plant. Adapting arguments given in [@krsticbook Section 4] to the operator in [\[bt\]](#bt){reference-type="eqref" reference="bt"} , it turns out that it is enough if $k$ solves the boundary value problem $$\label{ker_pde} \begin{cases} \nu (k_{xx} - k_{yy}) + \mu k = 0, \quad (x,y) \in \Delta_{x,y}, \\ k(x,0) = 0, \quad k(x,x) = -\frac{\mu x}{2\nu}, \quad x \in (0,L), \end{cases}$$ on $\Delta_{x,y}$. Moreover, the solution can be expressed explicitly as $$\label{ksolrep} k(x,y) = -\frac{\mu y}{2 \nu} \sum\limits_{m=0}^\infty \left(-\frac{\mu}{4 \nu}\right)^m \frac{(x^2 - y^2)^m}{m! (m+1)!},$$ which is a uniformly and absolutely convergent series on $\overline{\Delta_{x,y}}$. It is important that $P_N$ and $\frac{d^2}{dy^2}$ commutes in derivation of [\[ker_pde\]](#ker_pde){reference-type="eqref" reference="ker_pde"}. ## Invertibility In this section, we discuss the bounded invertibility of the backstepping transformation on $H^\ell(0,L)$, $\ell = 0,1,2$. At first, we argue that the invertibility does not necessarily hold for every choice of $\mu>0$ in [\[ker_pde\]](#ker_pde){reference-type="eqref" reference="ker_pde"}. To infer this, let $\Upsilon_k : H^\ell (0,L) \to H^\ell (0,L)$ be the integral operator defined by $(\Upsilon_k \psi)(x) \doteq \int_0^x k(x,y) \psi(y) dy$. Let $N=1$ as an example and consider the backstepping transformation $I + \Upsilon_k P_N=I + \Upsilon_k P_1$, which uses a single Fourier mode. 
Let $\mu>0$ be such that $(e_1,\Upsilon_ke_1)_2=-1$ (such $\mu$ exists by intermediate value theorem). Let $\varphi$ be a function in $H^\ell(0,L)$ with the property $\hat{\varphi}_1=(\varphi,e_1)_2\neq 0$. We claim that there is no $\psi$ with $\varphi = (I + \Upsilon_k P_1)\psi$ because if it is the case then setting $v = \Upsilon_k P_1\psi$, we see that $v$ must satisfy $$v = \Upsilon_k P_1 (\varphi - v)=(\hat \varphi_1 - \hat{v}_1) \Upsilon_ke_1.$$ Taking $L^2-$inner product of both sides with $e_1$ and using $(e_1,\Upsilon_ke_1)_2=-1$, it follows that $0=-\hat \varphi_1\neq 0$, which is a contradiction. Hence, $(I + \Upsilon_k P_1)$ cannot be invertible if $\mu$ (equivalently the associated kernel $k=k(x,y;\mu)$) is such that $(e_1,\Upsilon_ke_1)_2=-1$. Note however, if $\mu>0$ is such that $(e_1,\Upsilon_ke_1)_2\neq -1$, then given $\varphi$, the function $\psi=\varphi-v$, where $v=\hat{v}_1\Upsilon_ke_1$ with $$\hat{v}_1=\frac{\hat{\varphi}_1}{1+(e_1,\Upsilon_ke_1)_2}$$ satisfies $(I + \Upsilon_k P_1)\psi=\varphi$. The preimage $$\displaystyle\varphi-v=\varphi-\frac{\hat{\varphi}_1}{1+(e_1,\Upsilon_ke_1)_2}\Upsilon_ke_1$$ leads to a well-defined inverse operator in the form $I-\Phi_1$, where $$\Phi_1\varphi=\frac{\hat{\varphi}_1}{1+(e_1,\Upsilon_ke_1)_2}\Upsilon_ke_1.$$ Now, let us consider the case of two modes ($N=2$) and the transformation $I+\Upsilon_kP_2$. Suppose $\mu>0$ is such that $(e_1,\Upsilon_ke_1)_2\neq -1$ so that the operator $\Phi_1$ given in above paragraph is well-defined. Let us write $\varphi = (I + \Upsilon_k P_2)\psi$ and set $v = \Upsilon_k P_2\psi$. Then, we have $$v = \Upsilon_k P_{2}(\varphi - v)=\Upsilon_k P_{2}\varphi-\Upsilon_k P_{1}v-\hat{v}_2\Upsilon_k e_2,$$ from which it follows that $(I+\Upsilon_k P_{1})v=\Upsilon_k P_{2}\varphi - \hat{v}_{2}\Upsilon_k e_{2}$. Now, applying $I-\Phi_1$ to both sides, we get $$v=(I - \Phi_1)[\Upsilon_k P_{2}\varphi] - \hat{v}_{2}(I - \Phi_1)[\Upsilon_k e_{2}].$$ Taking inner product of above expression with $e_2$ and rearranging the terms, we obtain $$\hat v_{2}=\frac{\left((I - \Phi_1)[\Upsilon_k P_{2}\varphi],e_{2}\right)_2}{1+\left((I-\Phi_1)[\Upsilon_k e_{2}],e_{2}\right)_2}$$ provided $\mu>0$ is also such that $\left((I-\Phi_1)[\Upsilon_k e_{2}],e_{2}\right)_2\neq -1.$ Under the aforementioned two conditions on $\mu$, the inverse takes the form $I-\Phi_2$, where $$\Phi_{2}\varphi=(I - \Phi_1)[\Upsilon_k P_{2}\varphi] -\frac{\left((I - \Phi_1)[\Upsilon_k P_{2}\varphi],e_{2}\right)_2}{1+\left((I-\Phi_1)[\Upsilon_k e_{2}],e_{2}\right)_2} \times(I - \Phi_1)[\Upsilon_k e_{2}].$$ The above argument can be extended to any $N>2$ via recursion (whose details are given in Step 3 in the proof of Lemma [Lemma 11](#invlem){reference-type="ref" reference="invlem"} below). The following definition plays a crucial role in order for such recursive argument to work: **Definition 10** (Admissible decay-mode pair). Let $\mu>0$ and $N\in\mathbb{Z}_+$. Then, $(\mu,N)$ is said to be an *admissible finite dimensional backstepping decay rate-mode pair* if $$\left((I-\Phi_{j-1})[\Upsilon_k e_{j}],e_{j}\right)_2\neq -1$$ for $1\le j\le N$, where $\Phi_0\doteq 0$ and $$\Phi_{j}\varphi=(I - \Phi_{j-1})[\Upsilon_k P_{j}\varphi] -\frac{\left((I - \Phi_{j-1})[\Upsilon_k P_{j}\varphi],e_{j}\right)_2}{1+\left((I-\Phi_{j-1})[\Upsilon_k e_{j}],e_{j}\right)_2} \times(I - \Phi_{j-1})[\Upsilon_k e_{j}]$$ for $1\le j\le N$. **Lemma 11**. *Let $\ell\in\{0, 1, 2\}$ and $(\mu,N)$ be an admissible decay-mode pair. 
Then $T_N\doteq I + \Upsilon_k P_N : H^\ell(0,L) \to H^\ell(0,L)$, is bounded invertible. Moreover $T_N^{-1}$ can be written as $I - \Phi_N$, where $\Phi_N : L^2(0,L) \to H^\ell(0,L)$ is linear bounded.* *Proof.* We write the backstepping transformation in operator form as $\varphi = (I + \Upsilon_k P_N)\psi$. Set $v = \Upsilon_k P_N\psi$. Then, we have $\psi = \varphi - v$ and we get $$\label{vequation} v = \Upsilon_k P_N(\varphi - v).$$ Given $\psi\in H^\ell(0,L)$, we have $(I + \Upsilon_k P_N)\psi\in H^\ell(0,L).$ Therefore, we have the inclusion $R(I + \Upsilon_k P_N)\subset H^\ell(0,L).$ We will prove the invertibility with induction on $N$. 1. We first consider the case $N=1$. Using [\[vequation\]](#vequation){reference-type="eqref" reference="vequation"} with $N=1$, we write $v = \Upsilon_k P_1 (\varphi - v)$. Note that $P_1\varphi=\hat \varphi_1 e_1$ and $P_1v=\hat v_1 e_1$, where $\hat \varphi_1=(\varphi,e_1)_2$ and $\hat{v}_1=(v,e_1)_2$. Using linearity of $\Upsilon_k$, we get $$\label{veqnew} v=\Upsilon_k(\hat \varphi_1-\hat v_1)e_1= (\hat \varphi_1 - \hat{v}_1) \Upsilon_ke_1.$$ Now, we look for a solution in the form $\label{vform} v=\alpha_1 \Upsilon_k e_1$. Note that then $\hat v_1=(e_1,\alpha_1 \Upsilon_k e_1)_2=\alpha_1\beta_1$, where $\beta_1 = \int_{0}^{L}e_1(s) [\Upsilon_k e_1](s) ds$. Using this equality in [\[veqnew\]](#veqnew){reference-type="eqref" reference="veqnew"}, we find that $\alpha_1$ should satisfy $\label{alphajsolve}\alpha_1 \Upsilon_ke_1 = (\hat \varphi_1 - \alpha_1 \beta_1)\Upsilon_k e_1$. This equation will hold with the choice $\alpha_1(1+\beta_1)=\hat\varphi_1$. Since $(\mu,N)$ is assumed to be an admissible decay rate-mode pair, we have $\beta_1\neq -1$, so that we can take $\alpha_1=\frac{\hat\varphi_1}{1+\beta_1}.$ This shows surjectivity of $I + \Upsilon_k P_1$. Note that we in particular have $v\in H^\ell(0,L)$ (indeed better than this) so that $\psi=\varphi-v\in H^\ell(0,L)$. Note we have $$\psi=\varphi-v=\varphi-\frac{\hat\varphi_1}{1+\beta_1}\Upsilon_k e_1 = \varphi - \frac{1}{1 + \beta_1} \Upsilon_k P_1 \varphi.$$ Now we set $\label{phiN} \Phi_1 \varphi\equiv \frac{1}{1 + \beta_1} \Upsilon_k P_1 \varphi$. It is easy to verify that $I-\Phi_1$ is both a right inverse and a left inverse for $I + \Upsilon_k P_1$ and moreover $\Phi_1: L^2(0,L) \to H^\ell(0,L)$ is a linear bounded operator. Therefore, $(I + \Upsilon_k P_1)^{-1}$ exists and is given by $(I + \Upsilon_k P_1)^{-1}{\varphi}=(I-\Phi_1)\varphi$. 2. We assume that there exists some $K\ge 1$ such that the statement of the lemma holds true for $N=K$. 3. Now, we claim that the statement of the lemma must also be true for $N=K+1$. Replacing $N$ by $K+1$ we have $$\label{invlem_vKplus1} \begin{split} v &= \Upsilon_k P_{K+1}(\varphi - v) \\ &=\Upsilon_k P_{K+1}\varphi-\Upsilon_k P_{K}v-\Upsilon_k E_{K+1}v, \end{split}$$ where we used $P_{K+1}=P_K+E_{K+1}$ with $E_{K+1}$ being the projection onto the $(K+1)-$th Fourier sine mode. Rearranging the terms, we get $(I+\Upsilon_k P_{K})v=\Upsilon_k P_{K+1}\varphi - \hat{v}_{K+1}\Upsilon_k e_{K+1}$, where $\hat{v}_{K+1}$ is the $(K+1)-$th Fourier sine mode of $v$. 
By using the induction assumption in Step 2, we obtain $$v=(I - \Phi_K)[\Upsilon_k P_{K+1}\varphi] - \hat{v}_{K+1}(I - \Phi_K)[\Upsilon_k e_{K+1}].$$ Taking the inner product of both sides of the above equation with $e_{K+1}$, we get $$\hat v_{K+1}=\frac{\left((I - \Phi_K)[\Upsilon_k P_{K+1}\varphi],e_{K+1}\right)_2}{1+\left((I-\Phi_K)[\Upsilon_k e_{K+1}],e_{K+1}\right)_2}.$$ Note that $\hat v_{K+1}$ is well-defined since the denominator above does not vanish due to the assumption that $(\mu,N)$ is an admissible decay rate-mode pair. Therefore, we find $$\label{PhiKplus1} \begin{split} v=&(I - \Phi_K)[\Upsilon_k P_{K+1}\varphi] \\ &-\frac{\left((I - \Phi_K)[\Upsilon_k P_{K+1}\varphi],e_{K+1}\right)_2}{1+\left((I-\Phi_K)[\Upsilon_k e_{K+1}],e_{K+1}\right)_2} \times(I - \Phi_K)[\Upsilon_k e_{K+1}]\equiv \Phi_{K+1}\varphi. \end{split}$$ This shows that $I + \Upsilon_k P_{K+1}$ is surjective and has a right inverse given by $I-\Phi_{K+1}$, where $\Phi_{K+1}$ is given in [\[PhiKplus1\]](#PhiKplus1){reference-type="eqref" reference="PhiKplus1"}. Moreover, given $\varphi$, we see from the above that $v$ is uniquely determined, and therefore the right inverse is unique. This implies that the right inverse is also a left inverse, so $I + \Upsilon_k P_{K+1}$ is invertible and we have $$\psi=(I + \Upsilon_k P_{K+1})^{-1}{\varphi} =\varphi-v =(I-\Phi_{K+1})\varphi.$$ It is easy to check that $\Phi_{K+1}: L^2(0,L) \to H^\ell(0,L)$ is a linear bounded operator. This follows from the definition of $\Phi_{K+1}$ together with the induction assumption in Step 2. ◻ *Remark 12*. Replacing $w$ in the control input $g(t) = \int_0^L k(L,y) P_Nw(y,t) dy$ with $w = (I - \Phi_N)u$, and noting $\Phi_Nu=\Phi_NP_Nu$, we see that $g$ can be expressed as $$g(t) =\int_0^L k(L,y) \Gamma_N[P_Nu](y,t)dy,$$ where $\Gamma_N \doteq I - P_N\Phi_N: H^\ell(0,L) \to H^\ell(0,L)$ is a bounded operator. Consequently, (i) our control input is indeed of feedback type and (ii) it is calculated by using only finitely many Fourier sine modes of the state of the solution. # Linear theory (Proof of Theorem [Theorem 4](#wpstab_lin){reference-type="ref" reference="wpstab_lin"}) {#sec_wpstab_lin} In this section, we prove well-posedness of the linearized model [\[pde_l\]](#pde_l){reference-type="eqref" reference="pde_l"} and establish uniform decay rate estimates for its solutions. Our analysis will first be carried out through the target model [\[pde_tl\]](#pde_tl){reference-type="eqref" reference="pde_tl"}, and then we will use the boundedness of the backstepping transformation and its inverse to deduce that the well-posedness and decay results that we obtain for [\[pde_tl\]](#pde_tl){reference-type="eqref" reference="pde_tl"} are inherited by [\[pde_l\]](#pde_l){reference-type="eqref" reference="pde_l"}. Existence of a local solution of [\[pde_tl\]](#pde_tl){reference-type="eqref" reference="pde_tl"} can be obtained rigorously by using the standard Galerkin method. We will only give a proof showing the uniform exponential decay of solutions of [\[pde_tl\]](#pde_tl){reference-type="eqref" reference="pde_tl"} at the topological levels $H^\ell(0,L)$ with $\ell=0$ and $\ell=1$, which will also imply global existence and uniqueness of solutions. This will be done formally by using the multiplier method.
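Before carrying out the formal estimates, the following minimal sketch (our own illustration in Python; the parameter values are hypothetical and this is not the code behind the paper's numerical section) writes down the Galerkin system of the linear target model [\[pde_tl\]](#pde_tl){reference-type="eqref" reference="pde_tl"}, which decouples in the sine basis, and compares its slowest modal decay rate with the rate $\gamma$ of Theorem [Theorem 4](#wpstab_lin){reference-type="ref" reference="wpstab_lin"}; the rigorous argument below constructs exactly this type of Galerkin approximation and then passes to the limit.

```python
import math

# illustrative parameters only (not from the paper): lambda_j = j^2 for L = pi
nu, alpha, L = 1.0, 30.0, math.pi
mu = 40.0        # must satisfy mu > alpha - nu*lambda_1 = 29
N = 25           # number of fed-back sine modes, chosen to satisfy cond_N
J = 200          # modes kept in the Galerkin truncation

lam = [(j * math.pi / L) ** 2 for j in range(1, J + 1)]

# modal ODEs of w_t - nu*w_xx - alpha*w + mu*P_N w = 0:
# d/dt w_j = (alpha - nu*lambda_j - mu*[j <= N]) w_j, so the modes decouple
expo = [alpha - nu * lam[j - 1] - (mu if j <= N else 0.0) for j in range(1, J + 1)]

gamma = nu * lam[0] - alpha + mu * (1.0 - 1.0 / (N + 1))  # rate from Theorem 4
print("slowest modal decay rate:", -max(expo))
print("guaranteed rate gamma   :", gamma)  # should not exceed the observed rate

# L^2 norm of the truncated solution started from w_j(0) = 1/j
for t in (0.0, 1.0, 2.0):
    norm = math.sqrt(sum((math.exp(e * t) / j) ** 2
                         for j, e in enumerate(expo, start=1)))
    print(f"t = {t:.0f}, ||w^J(t)|| = {norm:.3e}")
```

With these illustrative values the slowest modal rate is $11$ while $\gamma \approx 9.46$, consistent with the fact that the multiplier estimates below only provide a lower bound on the actual decay rate.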
Indeed, one first construct approximate regular solutions in the form $w^m(t)=\sum_{k=1}^m d_{m}^{k}(t)e_k\in C([0,T];H_0^\ell(0,L))\cap L^2(0,T;H^{\ell+1}(0,L))$ via Galerkin method, apply the multiplier technique with these solutions and then use a limiting argument to obtain the same type of final estimates for the soughtafter weak solution. This procedure is rather standard and will be omitted. Let us first consider the case $u_0\in L^2(0,L)$. This implies $w_0=(I - \Upsilon_kP_N)^{-1}u_0\in L^2(0,L)$ thanks to the bounded invertibility of the backstepping transformation on $L^2(0,L)$ (see Lemma [Lemma 11](#invlem){reference-type="ref" reference="invlem"}). Now, taking the $L^2-$inner product of the main equation in [\[pde_tl\]](#pde_tl){reference-type="eqref" reference="pde_tl"} by $2w$, we get $$\label{l2-inner} \frac{d}{dt}\|w(t)\|^2 + 2\nu \|w_x(t)\|^2 - 2\alpha \|w(t)\|^2 + 2\mu \int_0^L w(x,t) P_Nw(x,t) dx = 0.$$ Using the Cauchy-Schwarz inequality and Cauchy's inequality with $\epsilon_1 > 0$, we see that the last term at the left hand side of [\[l2-inner\]](#l2-inner){reference-type="eqref" reference="l2-inner"} is bounded from below as $$\label{ctrl_term} \begin{split} 2\mu \int_0^L w(x,t) P_Nw(x,t) dx &= -2\mu \int_0^L w(x,t) \left(w - P_Nw\right)(x,t) dx + 2\mu\|w(\cdot,t)\|^2 \\ &\geq (2\mu - \epsilon_1\mu) \|w(\cdot,t)\|^2 - \frac{\mu}{\epsilon_1} \|\left(w-P_Nw\right)(\cdot,t)\|^2. \end{split}$$ Using the above bound in [\[l2-inner\]](#l2-inner){reference-type="eqref" reference="l2-inner"}, we get $$\frac{d}{dt}\|w(t)\|^2 + 2\nu \|w_x(t)\|^2 - 2\alpha \|w(t)\|^2 + (2\mu - \mu\epsilon_1) \|w(t)\|^2 - \frac{\mu}{\epsilon_1} \|\left(w-P_Nw\right)(t)\|^2 \leq 0.$$ Employing the Poincaré type inequality, we obtain $$\label{l2h1} \frac{d}{dt}\|w(t)\|^2 + 2\left(\nu - \frac{\mu}{2\epsilon_1 \lambda_{1} (N+1)^2}\right) \|w_x(t)\|^2 + (2\mu - \mu\epsilon_1 - 2\alpha)\|w(t)\|^2 \leq 0.$$ For $\epsilon_1 > 0$ satisfying $$\label{poincare} \nu - \frac{\mu}{2\epsilon_1 \lambda_{1}(N+1)^2} > 0,$$ we apply the Poincaré inequality to the second term at the left hand side of in [\[l2h1\]](#l2h1){reference-type="eqref" reference="l2h1"}: $$\label{l2-ineq} \frac{d}{dt}\|w(t)\|^2 + 2 \left[\nu\lambda_1 - \alpha + \mu \left(1 - \frac{\epsilon_1}{2} - \frac{1}{2 \epsilon_1 (N+1)^2}\right) \right]\|w(t)\|^2 \leq 0.$$ To maximize the stabilizing effect one must choose $\epsilon_1 = \frac{1}{N+1}$. Note that [\[poincare\]](#poincare){reference-type="eqref" reference="poincare"} with $\epsilon_1 = \frac{1}{N+1}$ will hold provided $$\label{cond_N1} N > \frac{\mu}{2\nu\lambda_1} - 1.$$ Therefore, under condition [\[cond_N1\]](#cond_N1){reference-type="eqref" reference="cond_N1"} on $N$, we get $$\begin{aligned} \label{l2decay_ineq} \frac{d}{dt}\|w(t)\|^2 + 2 \gamma \|w(t)\|^2 \leq 0,\end{aligned}$$ where $$\label{gamma} \gamma = \nu\lambda_1 - \alpha + \mu \left(1 - \frac{1}{(N+1)}\right).$$ In order to have decay, we must have $\gamma > 0$ which imposes $$\label{cond_N2} N > \frac{\mu}{\mu + \nu\lambda_1 - \alpha} - 1.$$ Finally, integrating [\[l2decay_ineq\]](#l2decay_ineq){reference-type="eqref" reference="l2decay_ineq"}, we deduce $$\label{l2decayrate} \|w(t)\| \leq e^{-\gamma t} \|w_0\|, \quad \text{ for a.e. 
} t \geq 0,$$ under the assumption that $$\label{cond_N} N > \max \left\{ \frac{\mu}{2\nu\lambda_1} - 1 , \frac{\mu}{\mu + \nu\lambda_1 - \alpha} - 1\right\}.$$ Next, we obtain uniform estimates for solutions of [\[pde_tl\]](#pde_tl){reference-type="eqref" reference="pde_tl"} in the case $u_0\in H^1(0,L)$. We take $L^2-$inner product of the main equation in [\[pde_tl\]](#pde_tl){reference-type="eqref" reference="pde_tl"} by $-2w_{xx}$ and deduce $$\label{h1-inner} \frac{d}{dt}\|w_x(t)\|^2 + 2\nu \|w_{xx}(t)\|^2 - 2\alpha \|w_x(t)\|^2 - 2\mu \int_0^L w_{xx}(x,t) P_Nw(x,t) dx = 0.$$ Applying the Cauchy-Schwarz inequality and then Cauchy's inequality with $\epsilon_2 > 0$, the last term at the left hand side of the above identity can be bounded from below as $$\label{h1est-1} \begin{split} -2\mu \int_0^L w_{xx}(x,t) P_Nw(x,t) dx =& 2\mu \int_0^L w_{xx}(x,t) (w-P_Nw)(x,t) dx \\ &- 2\mu \int_0^L w_{xx}(x,t) w(x,t) dx \\ \geq& -\mu\epsilon_2 \|w_{xx}(t)\|^2 - \frac{\mu}{\epsilon_2} \|(w-P_Nw)(t)\|^2+ 2\mu \|w_x(t)\|^2. \end{split}$$ Next we use the Poincaré type inequality to get $$-2\mu \int_0^L w_{xx}(x,t) P_Nw(x,t) dx \geq -\mu\epsilon_2 \|w_{xx}(t)\|^2 + \left(2\mu - \frac{\mu}{\epsilon_2\lambda_1 (N+1)^2}\right)\|w_x(t)\|^2.$$ Combining the above estimate with [\[h1-inner\]](#h1-inner){reference-type="eqref" reference="h1-inner"}, we obtain $$\label{h1-inner-2} \frac{d}{dt}\|w_x(t)\|^2 + 2\left(\nu - \frac{\mu \epsilon_2}{2}\right) \|w_{xx}(t)\|^2 + 2 \left(\mu - \alpha - \frac{\mu}{2 \epsilon_2\lambda_1 (N+1)^2}\right)\|w_x(t)\|^2 \leq 0.$$ Now for a given $\mu$, we can find $\epsilon_2>0$ such that $$\label{poincareh} \nu - \frac{\mu \epsilon_2}{2} > 0.$$ For this choice of $\epsilon_2$, we apply the Poincaré inequality to the second term at the left hand side of [\[h1-inner-2\]](#h1-inner-2){reference-type="eqref" reference="h1-inner-2"} and get $$\label{h1-inner-3} \frac{d}{dt}\|w_x(t)\|^2 + 2\left[\nu \lambda_1 - \alpha + \mu \left(1 - \frac{\epsilon_2 \lambda_1}{2} - \frac{1}{2\epsilon_2\lambda_1(N+1)^2}\right)\right] \|w_x(t)\|^2 \leq 0.$$ Observe that the inner parenthesis is maximized for the choice $\epsilon_2 = \frac{1}{\lambda_1 (N+1)}$. Using this choice we rewrite [\[h1-inner-3\]](#h1-inner-3){reference-type="eqref" reference="h1-inner-3"} and get $$\label{h1decay_ineq} \frac{d}{dt}\|w_x(t)\|^2 + 2\gamma \|w_x(t)\|^2 \leq 0,$$ where $\gamma$ is given by [\[gamma\]](#gamma){reference-type="eqref" reference="gamma"}. Note that letting $\epsilon_2 = \frac{1}{\lambda_1 (N+1)}$ in [\[poincareh\]](#poincareh){reference-type="eqref" reference="poincareh"} yields the same condition [\[cond_N1\]](#cond_N1){reference-type="eqref" reference="cond_N1"} on $N$ as we obtained in the $L^2-$decay case. Finally, combining [\[h1decay_ineq\]](#h1decay_ineq){reference-type="eqref" reference="h1decay_ineq"} with [\[l2decay_ineq\]](#l2decay_ineq){reference-type="eqref" reference="l2decay_ineq"}, we obtain the following decay result $$\|w(t)\|_{H^1(0,L)} \leq e^{-\gamma t} \|w_0\|_{H^1(0,L)}, \quad \text{ for a.e. } t \geq 0,$$ under condition [\[cond_N\]](#cond_N){reference-type="eqref" reference="cond_N"} on $N$. Finally, we deduce $w\in X_T^\ell$ by using the definition of norm of $X_T^\ell$ once we integrate [\[l2decay_ineq\]](#l2decay_ineq){reference-type="eqref" reference="l2decay_ineq"} and [\[h1decay_ineq\]](#h1decay_ineq){reference-type="eqref" reference="h1decay_ineq"} over $(0,T)$ for the cases $\ell=0$ and $\ell=1$, respectively. The following proposition follows. **Proposition 13**. 
**Let $\nu, \alpha > 0$, $w_0 \in H^\ell(0,L)$, where $\ell = 0$ or $\ell=1$, $\mu>\alpha - \nu \lambda_1 \geq 0$, $N$ satisfy [\[cond_N\]](#cond_N){reference-type="eqref" reference="cond_N"}, and let $(\mu,N)$ be admissible (Definition [Definition 10](#ratemodepair){reference-type="ref" reference="ratemodepair"}). Then, the solution $w$ of [\[pde_tl\]](#pde_tl){reference-type="eqref" reference="pde_tl"}, which belongs to $X_T^\ell$, satisfies $$\|w(t)\|_{H^\ell(0,L)} \leq e^{-\gamma t} \|w_0\|_{H^\ell(0,L)}, \quad \forall t\geq 0,$$ where $\gamma$ is given by [\[gamma\]](#gamma){reference-type="eqref" reference="gamma"}.** *Remark 14*. If one uses all Fourier sine modes ($N=\infty$), then the main equation of the target model takes the form $$w_t - \nu w_{xx} - \alpha w + \mu w = 0.$$ Using standard multipliers, one can see that $\|w(t)\|_{H^\ell(0,L)} = \mathcal{O}(e^{-(\nu\lambda_1 - \alpha + \mu) t})$, $t \geq 0$. Hence, the condition $\mu > \alpha - \nu\lambda_1$ is necessary for solutions to decay. Therefore, this condition, which also appears in the above proposition, is a natural assumption. Using boundedness of the backstepping transformation, we deduce that $u$, the solution of [\[pde_l\]](#pde_l){reference-type="eqref" reference="pde_l"}, also belongs to $X_T^\ell$. Moreover, from Proposition [Proposition 13](#l2h1tarlin_decay){reference-type="ref" reference="l2h1tarlin_decay"}, we have $$\label{backtoplant1} \begin{split} \|u(t)\|_{H^\ell(0,L)} &\leq \left(1 + \|k\|_{H^\ell(\Delta_{x,y})}\right) \|w(t)\|_{H^\ell(0,L)} \\ &\le \left(1 + \|k\|_{H^\ell(\Delta_{x,y})}\right) e^{-\gamma t} \|w_0\|_{H^\ell(0,L)}. \end{split}$$ Due to the invertibility of the backstepping transformation with a bounded inverse, we get $$\label{backtoplant2} \|w_0\|_{H^\ell(0,L)} \leq \|T_N^{-1}\|_{H^\ell(0,L) \to H^\ell(0,L)} \|u_0\|_{H^\ell(0,L)} .$$ Combining [\[backtoplant1\]](#backtoplant1){reference-type="eqref" reference="backtoplant1"} and [\[backtoplant2\]](#backtoplant2){reference-type="eqref" reference="backtoplant2"}, we conclude that $$\label{backtoplant3} \|u(t)\|_{H^\ell(0,L)} \leq c_k e^{-\gamma t} \|u_0\|_{H^\ell(0,L)}, \quad \forall t \geq 0,$$ where $c_k = (1 + \|k\|_{H^\ell(\Delta_{x,y})}) \|T_N^{-1}\|_{H^\ell(0,L) \to H^\ell(0,L)}$ is a nonnegative constant independent of the initial datum. [\[backtoplant3\]](#backtoplant3){reference-type="eqref" reference="backtoplant3"} also provides continuous dependence of solutions on the data. This completes the proof of Theorem [Theorem 4](#wpstab_lin){reference-type="ref" reference="wpstab_lin"}. # Nonlinear theory (Proof of Theorem [Theorem 5](#wpstab_nonlin){reference-type="ref" reference="wpstab_nonlin"}) {#sec_wpstab_nonlin} In this section, we study the well-posedness and stabilization of the nonlinear plant [\[pde_nl\]](#pde_nl){reference-type="eqref" reference="pde_nl"}. The same backstepping transformation that was used in Section [2](#sec_bstrans){reference-type="ref" reference="sec_bstrans"} relates the following target system posed on $(0,L)\times(0,T)$ to the original nonlinear plant, where $\label{nl_term} Fw = (I - \Phi_N) \left[((I + \Upsilon_kP_N)w)^3\right],$ with $\Phi_N$ being the operator defined in Lemma [Lemma 11](#invlem){reference-type="ref" reference="invlem"}: $$\label{pde_tnl} \begin{cases} w_t - \nu w_{xx} - \alpha w + \mu P_Nw = -Fw, \quad x \in (0,L), t > 0, \\ w(0,t) = 0, w(L,t) = 0, \\ w(x,0) = w_0(x). \end{cases}$$ We first prove a linear nonhomogeneous estimate in $X_T^0$ assuming $q_0\in L^2(0,L)$.
Taking the $L^2-$inner product of the main equation by $2q$ and integrating by parts, we get $$\label{apriori_l2} \frac{d}{dt}\|q(t)\|^2 + 2\nu \|q_x(t)\|^2 - 2\alpha \|q(t)\|^2 +2\mu \int_0^L q(t) P_Nq(t) dx \leq 2\int_0^L |f(t)||q(t)| dx.$$ Integrating the above inequality over $[0,t]$ and using the Cauchy-Schwarz inequality in $x$, we get $$\begin{split} \min&\{1,2\nu\}\cdot\left(\|q(t)\|^2 + \int_0^t\|q_x(s)\|^2ds\right)\\ &\le 2\alpha\int_0^t \|q(s)\|^2ds+2\mu \int_{0}^{t}\|q(s)\|\|P_Nq(s)\|ds +\|q_0\|^2+2\int_{0}^{t}\|q(s)\|\|f(s)\|ds\\ &\le \|q_0\|^2+2(\alpha+\mu)\int_0^T \|q(s)\|^2ds+2\left(\mathop{\mathrm{ess\,sup}}_{t\in[0,T]}\|q(t)\|\right)\cdot\|f\|_{L^1(0,T;L^2(0,L))}. \end{split}$$ It follows that for a.e. $t\in [0,T]$, in view of Cauchy's inequality with $\epsilon>0$, we get $$\begin{split} \|&q(t)\|^2 + \int_0^t\|q_x(s)\|^2ds\\ &\le C_1\|q_0\|^2+C_2T\mathop{\mathrm{ess\,sup}}_{t\in[0,T]}\|q(t)\|^2 +{\epsilon}\left(\mathop{\mathrm{ess\,sup}}_{t\in[0,T]}\|q(t)\|^2\right)+C_\epsilon\|f\|_{L^1(0,T;L^2(0,L))}^2, \end{split}$$ where the constants at the right hand side are given by $C_1=\frac{1}{\min\{1,2\nu\}}$, $C_2=\frac{2(\alpha+\mu)}{\min\{1,2\nu\}}$, and $C_\epsilon=\frac{1}{\epsilon\cdot \min\{1,2\nu\}^2}$. Now, taking $\mathop{\mathrm{ess\,sup}}$ with respect to $t$ on $[0,T]$, we get $$\begin{aligned} \label{apest01} &\|q\|_{X_T^0}\le c_T(\|q_0\|+\|f\|_{L^1(0,T;L^2(0,L))}),\end{aligned}$$ where $c_T=\frac{\max\{C_1,C_\epsilon\}}{1-\epsilon-C_2T}>0$ with $\epsilon$ and $T$ small enough that $\epsilon+C_2T<1$. Next, we prove a linear non-homogeneous estimate in $X_T^1$ assuming $q_0\in H_0^1(0,L)$. Taking the $L^2-$inner product of the main equation by $-2q_{xx}$, $$\label{h1aoriori} \frac{d}{dt}\|q_x(t)\|^2 + 2\nu \|q_{xx}(t)\|^2 - 2\alpha \|q_x(t)\|^2 - 2\mu \int_0^L q_{xx}(x,t) P_Nq(x,t) dx \leq 2\int_0^L |f(x,t)||q_{xx}(x,t)| dx.$$ Using arguments similar to those used for [\[apest01\]](#apest01){reference-type="eqref" reference="apest01"}, we obtain $$\label{XT1est}\|q\|_{X_T^1}\le c_T(\|q_0\|_{H_0^1(0,L)}+\|f\|_{L^1(0,T;L^2(0,L))}).$$ ### Contraction We assume $u_0\in H^{\ell}(0,L)$ where either $\ell=0$ or $\ell=1$. Therefore, by the bounded invertibility of the backstepping transformation, we have $w_0\in H^{\ell}(0,L)$ for the same value of $\ell$. We define the solution map $$\label{wp_map2} w = \Psi z(t) \doteq S(t)w_0 + \int_0^t S(t - \tau) Fz(\tau) d\tau,$$ where $S(t)w_0$ is the solution of the corresponding linear problem [\[pde_tl\]](#pde_tl){reference-type="eqref" reference="pde_tl"}. Using the estimates of Section [\[apriori\]](#apriori){reference-type="ref" reference="apriori"} and boundedness of $T_N^{-1}=I - \Phi_N$ on $L^2(0,L)$, we obtain $$\label{nonlin_loc1} \begin{split} \|\Psi z\|_{X_T^\ell} &\leq \left\| S(t)w_0 + \int_0^t S(t - \tau) Fz(\tau) d\tau\right\|_{X_T^\ell} \\ &\leq c_T\|w_0\|_{H^{\ell}(0,L)} + c_{T,k}\int_0^T \left\|(T_Nz)^3(t) \right\| dt.
\end{split}$$ Applying the Gagliardo-Nirenberg inequality and using boundedness of the backstepping transformation, we get $$\label{nonlin_loc1.5} \begin{split} \left\|\left(T_Nz\right)^3(t)\right\| &= \|T_Nz(t)\|_{L^6(0,L)}^3 \\ &\leq c_L\|\partial_x(T_Nz)(t)\| \|T_Nz(t)\|^2 \\ &\leq c_{k,L} \|z_x(t)\| \|z(t)\|^2 \end{split}$$ from which it follows that $$\label{nonlin_loc2} \begin{split} \int_0^T \left\|((I + \Upsilon_kP_N)z)^3(t) \right\| dt &\leq \int_0^T \left(c_{k,L}\|z_x(t)\| \|z(t)\|^2 \right) dt \\ &\leq c_{k,L} \sqrt{T}\|z\|_{L^\infty(0,T;L^2(0,L))}^2 \|z_x\|_{L^2(0,T;L^2(0,L))} \\ &\leq c_{k,L}\sqrt{T}\|z\|_{X_T^\ell}^3. \end{split}$$ Combining [\[nonlin_loc1.5\]](#nonlin_loc1.5){reference-type="eqref" reference="nonlin_loc1.5"}-[\[nonlin_loc2\]](#nonlin_loc2){reference-type="eqref" reference="nonlin_loc2"}, we get $$\label{fixed1} \|\Psi z\|_{X_T^\ell} \leq c_T\left(\|w_0\|_{H^{\ell}(0,L)} + \sqrt{T}\|z\|_{X_T^\ell}^3\right),$$ where $c_T$ is uniformly bounded in $T$, and it also depends on the parameters $\nu,\alpha,\mu, k, L$, which are fixed throughout. To prove local existence, it is enough to show that the mapping [\[wp_map2\]](#wp_map2){reference-type="eqref" reference="wp_map2"} has a fixed point for some $T > 0$. Let $B_{R}^T \doteq \left\{ z \in X_T^\ell \, | \, \|z\|_{X_T^\ell} \leq R \right\}$, where $R \doteq 2c_{T} \|w_0\|_{H^{\ell}(0,L)}$. Then, $$\|\Psi z\|_{X_T^\ell} \leq \frac{R}{2} + c_{T}\sqrt{T} R^3.$$ Let us choose $T > 0$ small enough that $2 c_{T}\sqrt{T}R^2 < 1$ holds. This yields $\|\Psi z\|_{X_T^\ell} < R$, i.e. $\Psi$ maps $B_{R}^T$ into itself for sufficiently small $T > 0$. Next, we show that $\Psi$ is a contraction on $B_R^T$ for sufficiently small $T$. Let $z_1, z_2 \in B_R^T$. Then $$\label{cont1} \begin{split} \left\|\Psi z_1 - \Psi z_2\right\|_{X_T^\ell} =& \left\| \int_0^t S(\cdot - s) \left(Fz_1 - Fz_2\right) ds \right\|_{X_T^\ell} \\ \leq& c_{T}\int_0^T \left\|Fz_1 - Fz_2\right\| dt \\ =& c_{T} \int_0^T \left\| T_N^{-1} \left[\left(T_Nz_1\right)^3 - \left(T_Nz_2\right)^3 \right] \right\| dt \\ \leq& c_{T} \int_0^T \left\|\left(T_Nz_1\right)^3 - \left(T_Nz_2\right)^3 \right\| dt \\ \leq& c_{T} \int_0^T \left\|\left(T_N(z_1 - z_2)\right) \left(\left(T_Nz_1\right)^2 + \left(T_Nz_2\right)^2 \right) \right\| dt \\ \leq& c_{T}\int_0^T \left(\|T_Nz_1\|_{\infty}^2 +\|T_Nz_2\|_{\infty}^2\right)\|T_N(z_1 - z_2)\|dt\\ \leq& c_{T}\int_0^T \left(\|\partial_x z_1\|\|z_1\| +\|\partial_x z_2\|\|z_2\|\right)\|z_1 - z_2\|dt\\ \leq& c_{T}\sqrt{T}(\|z_1\|_{X_T^\ell}^2+\|z_2\|_{X_T^\ell}^2)\|z_1 - z_2\|_{X_T^\ell}, \end{split}$$ where we used the boundedness of the backstepping transformation as well as its inverse. For sufficiently small $T > 0$, we guarantee that $c_{T} T^\frac{1}{2} R^2 < \frac{1}{2}$. Hence $\Psi$ becomes a contraction on the closed ball $B_R^T$. This yields a unique solution in $B_R^T\subset X_T^\ell$. ## Stabilization {#stab_nonlin} In this section, we prove the exponential decay for solutions of [\[pde_tnl\]](#pde_tnl){reference-type="eqref" reference="pde_tnl"} in $H^\ell(0,L)$ for $\ell=0$ and $\ell=1$. ### $L^2-$decay Let $u_0\in L^2(0,L)$ so that $w_0=T_N^{-1}u_0\in L^2(0,L)$. Taking the $L^2-$inner product of [\[pde_tnl\]](#pde_tnl){reference-type="eqref" reference="pde_tnl"} by $2w$: $$\label{nonlin_mult} \begin{split} \frac{d}{dt}\|w(t)\|^2 &+ 2\nu \|w_x(t)\|^2 - 2\alpha \|w(t)\|^2 + 2\mu \int_0^L w(x,t) P_Nw(x,t) dx\\ &= -2 \int_0^L T_N^{-1} \left[(T_Nw)^3\right]w(x,t) dx.
\end{split}$$ Applying Cauchy-Schwarz inequality and using boundedness of $T_N^{-1}$ on $L^2(0,L)$, the term at the right hand side of [\[nonlin_mult\]](#nonlin_mult){reference-type="eqref" reference="nonlin_mult"} is dominated by $2c_0\left\| (T_Nw)^3(t) \right\| \|w(t)\|,$ where $$\label{c0def}c_0=\|T_N^{-1}\|_{2\rightarrow 2}.$$ Applying arguments similar to those in [\[nonlin_loc1.5\]](#nonlin_loc1.5){reference-type="eqref" reference="nonlin_loc1.5"} and using Cauchy's inequality with $\epsilon$, we get $$\label{nonlin_est2} 2c_0\left\|\left(T_Nw\right)^3(t)\right\| \|w(t)\|\leq 2c_0c_{1}\|w_x(t)\| \|w(t)\|^3 \leq \epsilon\|w_x(t)\|^2 + \frac{c_0^2c_{1}^2}{\epsilon}\|w(t)\|^6,$$ where $$\label{c1def} c_1=c^*\|T_N\|_{H^{1}(0,L)\rightarrow H^{1}(0,L)}\|T_N\|_{2\rightarrow 2}^2$$ with $c^*$ being the Gagliardo-Nirenberg constant. Using $$2\mu \int_0^L w(x,t) P_Nw(x,t) dx \geq (2\mu - \epsilon_1 \mu) \|w(t)\|^2 - \frac{\mu}{\epsilon_1} \|(w - P_Nw)(t)\|^2$$ with $\epsilon_1 = 1/(N+1)$, we get $$\label{nonlin_mult3} \begin{split} \frac{d}{dt}&\|w(t)\|^2 + (2\nu - \epsilon) \|w_x(t)\|^2+ \left(2\mu - \frac{\mu}{N+1} - 2\alpha\right) \|w(t)\|^2 \\ &- \mu (N+1) \|\left(w-P_Nw\right)(t)\|^2 \leq \frac{c_0^2c_{1}^2}{\epsilon} \|w(t)\|^6. \end{split}$$ Thanks to the Poincaré type inequality, we have $$\label{nonlin_mult4} \begin{split} \frac{d}{dt}&\|w(t)\|^2 + 2(\nu - \frac{\epsilon}{2} - \frac{\mu}{2\lambda_1(N+1)}) \|w_x(t)\|^2 \\ &+ \left(2\mu - \frac{\mu}{N+1} - 2\alpha\right) \|w(t)\|^2 \leq \frac{c_0^2c_{1}^2}{\epsilon} \|w(t)\|^6. \end{split}$$ One can find $\epsilon$ such that $$\label{poincare_nl} \nu - \frac{\mu}{2\lambda_1(N+1)} \geq \frac{\epsilon}{2} > 0$$ provided the same condition [\[cond_N1\]](#cond_N1){reference-type="eqref" reference="cond_N1"} on $N$ given in the linear theory holds. For such $N$, using Poincaré inequality above, we get $$\label{nonlin_mult7} \frac{d}{dt}\|w(t)\|^2 + 2\left(\gamma - \frac{\epsilon \lambda_1}{2}\right) \|w(t)\|^2 \leq \frac{c_0^2c_{1}^2}{\epsilon} \|w(t)\|^6,$$ where $\gamma$ is as in [\[gamma\]](#gamma){reference-type="eqref" reference="gamma"}. Now if $\mu > \alpha -\nu \lambda_1 \geq 0$ and $N$ in addition satisfies [\[cond_N2\]](#cond_N2){reference-type="eqref" reference="cond_N2"}, then [\[nonlin_mult7\]](#nonlin_mult7){reference-type="eqref" reference="nonlin_mult7"} implies $L^2-$decay provided $\|w_0\|$ is small (see Proposition [Proposition 16](#l2h1tarnonlin_decay){reference-type="ref" reference="l2h1tarnonlin_decay"} below). This follows from Lemma [Lemma 15](#lem_nonlinh1){reference-type="ref" reference="lem_nonlinh1"} by taking $y(t)=\|w(t)\|^2$. ### $H^1-$decay Here, we consider the case $u_0\in H^1(0,L)$ so that $w_0=T_N^{-1}u_0\in H^1(0,L)$. We take $L^2-$inner product of the main equation in [\[pde_tnl\]](#pde_tnl){reference-type="eqref" reference="pde_tnl"} by $-2w_{xx}$, proceed as in [\[h1est-1\]](#h1est-1){reference-type="eqref" reference="h1est-1"}-[\[h1-inner-2\]](#h1-inner-2){reference-type="eqref" reference="h1-inner-2"} with $\epsilon_2 = \frac{1}{\lambda_1(N+1)}$, and thereby obtain $$\label{nonlin_multh1} \begin{split} \frac{d}{dt}\|w_x(t)\|^2 &+ 2\left(\nu - \frac{\mu}{2\lambda_1(N+1)}\right) \|w_{xx}(t)\|^2 + 2 \left(\mu - \alpha - \frac{\mu}{2 (N+1)}\right)\|w_x(t)\|^2 \\ &\leq 2 \int_0^L T_N^{-1} \left[(T_Nw)^3\right](t) w_{xx}(t) dx. 
\end{split}$$ Applying the Cauchy-Schwarz inequality and using boundedness of $T_N^{-1}$ on $L^2(0,L)$, the right hand side of [\[nonlin_multh1\]](#nonlin_multh1){reference-type="eqref" reference="nonlin_multh1"} can be estimated as $$\label{nonlin_h1est1} 2 \int_0^L T_N^{-1}\left[(T_Nw)^3\right](x,t) w_{xx}(x,t) dx \leq 2c_0 \left\| (T_Nw)^3(t) \right\| \|w_{xx}(t)\|.$$ Proceeding as in [\[nonlin_loc1.5\]](#nonlin_loc1.5){reference-type="eqref" reference="nonlin_loc1.5"}, applying the Poincaré inequality and Cauchy's inequality with $\epsilon> 0$, we get $$\label{nonlin_h1est2} \begin{split} 2c_0 \left\| (T_Nw)^3(t) \right\| \|w_{xx}(t)\| &\leq \left(2c_{0}c_1 \|w_x(t)\| \|w(t)\|^2\right) \|w_{xx}(t)\| \\ &\leq 2c_{0}c_1\lambda_1^{-1} \|w_x(t)\|^3 \|w_{xx}(t)\| \\ &\leq \frac{c_{0}^2c_1^2\lambda_1^{-2}}{\epsilon}\|w_x(t)\|^6 + \epsilon \|w_{xx}(t)\|^2. \end{split}$$ Using [\[nonlin_h1est1\]](#nonlin_h1est1){reference-type="eqref" reference="nonlin_h1est1"}-[\[nonlin_h1est2\]](#nonlin_h1est2){reference-type="eqref" reference="nonlin_h1est2"} in [\[nonlin_multh1\]](#nonlin_multh1){reference-type="eqref" reference="nonlin_multh1"}, we get $$\label{nonlin_multh2} \begin{split} \frac{d}{dt}&\|w_x(t)\|^2 + 2\left(\nu - \frac{\epsilon}{2} - \frac{\mu}{2\lambda_1(N+1)}\right) \|w_{xx}(t)\|^2 \\ &+ 2 \left(\mu - \alpha - \frac{\mu}{2 (N+1)}\right)\|w_x(t)\|^2 \leq \frac{c_{0}^2c_1^2\lambda_1^{-2}}{\epsilon}\|w_x(t)\|^6. \end{split}$$ Again, one can find $\epsilon> 0$ such that $$\nu - \frac{\mu}{2\lambda_1(N+1)} \geq \frac{\epsilon}{2} > 0$$ provided the same condition [\[cond_N1\]](#cond_N1){reference-type="eqref" reference="cond_N1"} on $N$ given in the linear theory holds. So assuming [\[cond_N1\]](#cond_N1){reference-type="eqref" reference="cond_N1"}, we can apply the Poincaré inequality and get $$\begin{aligned} \label{nonlin_multh4} &\frac{d}{dt}\|w_x(t)\|^2 + 2 \left(\gamma - \dfrac{\epsilon \lambda_1}{2}\right) \|w_x(t)\|^2 \leq \dfrac{c_{0}^2c_1^2\lambda_1^{-2}}{\epsilon}\|w_x(t)\|^6,\end{aligned}$$ where $\gamma > 0$ is given by [\[gamma\]](#gamma){reference-type="eqref" reference="gamma"}. If $\mu > \alpha - \nu \lambda_1$, then [\[nonlin_multh4\]](#nonlin_multh4){reference-type="eqref" reference="nonlin_multh4"} yields $H^1-$decay of solutions. This can be obtained from the following lemma by setting $y(t)=\|w_x(t)\|^2$. **Lemma 15**. *Let $a,b>0$, $d\in(0,1)$, and $y = y(t)> 0$ satisfy $$\begin{aligned} \label{lem_ineqh} &y^\prime(t) +ay(t) - by^3(t) \leq 0, \quad t \geq 0. \end{aligned}$$ If $y(0)\le d\sqrt{\frac{a}{b}}$, then $y(t) \le \frac{1}{\sqrt{1-d^2}}y(0)e^{-at}$, $t\ge 0$.* *Proof.* [\[lem_ineqh\]](#lem_ineqh){reference-type="eqref" reference="lem_ineqh"} is a Bernoulli type differential inequality. Solving this inequality, we obtain $$\begin{aligned} &y^2(t) \leq \left(\left(\dfrac{1}{y^2(0)} - \dfrac{b}{a}\right) e^{2at} + \dfrac{b}{a}\right)^{-1}. \end{aligned}$$ The result follows from the assumption $y(0) \le d\sqrt{\frac{a}{b}}$. ◻ Applying the above lemma to the $H^\ell-$estimates, we establish the proposition below for the target system. **Proposition 16**. *Let $\nu, \alpha > 0$, $\mu>\alpha - \nu \lambda_1 \geq 0$, $d\in(0,1)$, $w_0 \in H^\ell(0,L)$, $\ell = 0$ or $\ell=1$.
Let $\epsilon>0$ be any number satisfying $\epsilon\lambda_1\le 2\nu\lambda_1-\frac{\mu}{N+1}$ and $\epsilon\lambda_1< 2\gamma$, suppose $\|w_0^{(\ell)}\|_2^2\le d\frac{\sqrt{\epsilon(2\gamma-\epsilon\lambda_1)}\lambda_1^{\ell}}{c_0c_{1}}$ and $N$ satisfies [\[cond_N\]](#cond_N){reference-type="eqref" reference="cond_N"}. Then, the solution of [\[pde_tnl\]](#pde_tnl){reference-type="eqref" reference="pde_tnl"}, which belongs to $X_T^\ell$, has the decay estimate $$\label{l2tarnonlin_decayy} \|w(t)\|_{H^\ell(0,L)} \lesssim e^{-\left(\gamma - \frac{\epsilon \lambda_1}{2}\right) t} \|w_0\|_{H^\ell(0,L)}, \text{ for a.e. } t \geq 0,$$ where $\gamma$ is given by [\[gamma\]](#gamma){reference-type="eqref" reference="gamma"}, and $c_0, c_1$ are defined in [\[c0def\]](#c0def){reference-type="eqref" reference="c0def"} and [\[c1def\]](#c1def){reference-type="eqref" reference="c1def"}, respectively.* Proceeding as in [\[backtoplant1\]](#backtoplant1){reference-type="eqref" reference="backtoplant1"}-[\[backtoplant3\]](#backtoplant3){reference-type="eqref" reference="backtoplant3"}, we conclude that the well-posedness and exponential decay results for the target model are also true for the original plant. Hence, we have Theorem [Theorem 5](#wpstab_nonlin){reference-type="ref" reference="wpstab_nonlin"}. *Remark 17*. Recall that $w_0=T_N^{-1}u_0=(I-\Phi_N)u_0$, therefore $$\|w_0\|_{H^{\ell}(0,L)}\le \|T_N^{-1}\|_{H^\ell(0,L)\rightarrow H^\ell(0,L)}\|u_0\|_{H^{\ell}(0,L)}.$$ Hence, the smallness condition of Proposition [Proposition 16](#l2h1tarnonlin_decay){reference-type="ref" reference="l2h1tarnonlin_decay"} will hold provided $$\|u_0^{(\ell)}\|_2^2\le d\frac{\sqrt{\epsilon(2\gamma-\epsilon\lambda_1)}\lambda_1^{\ell}}{c_0c_{1}}{\|T_N^{-1}\|_{H^\ell(0,L)\rightarrow H^\ell(0,L)}^{-2}},$$ which characterizes the domain of attraction for the original plant. Note that the above estimate is still implicit, however with the help of the definition of $\Phi_N$, one may be able to find an upper bound for the quantities $c_0$ and $\|T_N^{-1}\|_{H^\ell(0,L) \to H^\ell(0,L)}$, which would yield an explicit estimate for the domain of attraction. # Minimal number of Fourier modes for decay (Proofs of Theorem [Theorem 7](#wpstab_lin2){reference-type="ref" reference="wpstab_lin2"} and Theorem [Theorem 9](#wpstab_nonlin2){reference-type="ref" reference="wpstab_nonlin2"}) {#sec_wpstab2} Section [3](#sec_wpstab_lin){reference-type="ref" reference="sec_wpstab_lin"} and Section [4](#sec_wpstab_nonlin){reference-type="ref" reference="sec_wpstab_nonlin"} focused on the rapid stabilization problem. In this section, we are interested in determining the minimal number of Fourier sine modes of $u$ needed to gain exponential stabilization of the zero equilibrium for some unprescribed and not necessarily large decay rate. To this end, we will assume that we want to gain exponential stabilization with $N$ Fourier sine modes, and we will extract the conditions that the problem parameters must satisfy. Recall from [\[poincare\]](#poincare){reference-type="eqref" reference="poincare"} and [\[l2-ineq\]](#l2-ineq){reference-type="eqref" reference="l2-ineq"} that the conditions $$\mu < 2\epsilon_1 \nu \lambda_1 (N+1)^2 \quad \text{and} \quad \mu > \frac{\alpha - \nu \lambda_1}{1 - \frac{\epsilon_1}{2} - \frac{1}{2\epsilon_1 (N+1)^2}}$$ must be satisfied by $\mu$.
To guarantee that such $\mu$ exists, we must have $$\label{mu-l2} \frac{\alpha - \nu \lambda_1}{1 - \frac{\epsilon_1}{2} - \frac{1}{2\epsilon_1 (N+1)^2}} < 2\epsilon_1 \nu \lambda_1 (N+1)^2,$$ which holds provided $$\label{alpha_nu} \frac{\alpha}{\nu} < \lambda_1 (N+1)^2(2\epsilon_1 - \epsilon_1^2) = \lambda_{N+1}(2\epsilon_1 - \epsilon_1^2).$$ Note that for $\lambda_{j+1}>\frac{\alpha}{\nu} > \lambda_j$, where $j \in \{ 1, 2, \dotsc\}$, the first $j$ Fourier modes are unstable while the $(j+1)-$th Fourier mode is stable. So let us maximize the right hand side of [\[alpha_nu\]](#alpha_nu){reference-type="eqref" reference="alpha_nu"} in order to maximize the range selection for $\frac{\alpha}{\nu}$. This is done with the choice $\epsilon_1 = 1$, which yields $$\label{instalevel} \frac{\alpha}{\nu} < \lambda_{N+1}.$$ Similarly, from [\[poincareh\]](#poincareh){reference-type="eqref" reference="poincareh"} and [\[h1-inner-3\]](#h1-inner-3){reference-type="eqref" reference="h1-inner-3"}, we need $$\label{mu-h1} \mu < \frac{2\nu}{\epsilon_2} \quad \text{and} \quad \mu > \frac{\alpha - \nu \lambda_1}{1 - \frac{\epsilon_2 \lambda_1}{2} - \frac{1}{2 \epsilon_2 \lambda_{1}(N+1)^2}}.$$ Combining these inequalities, we deduce that we must have $$\frac{\alpha}{\nu} < \frac{2}{\epsilon_2} - \frac{1}{\epsilon_2^2 \lambda_1 (N+1)^2}$$ in order to guarantee that $\mu$ satisfying [\[mu-h1\]](#mu-h1){reference-type="eqref" reference="mu-h1"} exists. We again maximize the right hand side with $\epsilon_2 = \frac{1}{\lambda_1 (N+1)^2}$, and therefore get the same inequality given by [\[instalevel\]](#instalevel){reference-type="eqref" reference="instalevel"}. Consequently, as long as the $(N+1)-$th Fourier mode is stable, we can achieve exponential stabilization of zero equilibrium by using a boundary feedback controller involving only the first $N$ Fourier modes. Now using the above choices of $\epsilon_1$ and $\epsilon_2$ in [\[l2-ineq\]](#l2-ineq){reference-type="eqref" reference="l2-ineq"} and [\[h1-inner-3\]](#h1-inner-3){reference-type="eqref" reference="h1-inner-3"}, respectively, one can see that the decay rate becomes $$\rho = \nu\lambda_1 - \alpha + \frac{\mu}{2}\left(1 - \frac{1}{(N+1)^2}\right) > 0,$$ where $\mu$ obeys $$\begin{aligned} \label{mu2} 2(\alpha-\nu\lambda_1) \left(1 - \frac{1}{(N+1)^2}\right)^{-1} < \mu < 2\nu \lambda_{N+1}.\end{aligned}$$ Since conditions [\[mu-l2\]](#mu-l2){reference-type="eqref" reference="mu-l2"} and [\[mu-h1\]](#mu-h1){reference-type="eqref" reference="mu-h1"} above are required for the stabilization of both linear and nonlinear target models, the conclusion we obtain here is valid for both. The resuts are stated in Theorem [Theorem 7](#wpstab_lin2){reference-type="ref" reference="wpstab_lin2"} and Theorem [Theorem 9](#wpstab_nonlin2){reference-type="ref" reference="wpstab_nonlin2"}. # Numerical algorithm and simulations {#numerics} Notice that our control input involves the integral operator $\Upsilon_k$ and the inverse backstepping operator $I - \Phi_N$. So, before deriving a numerical scheme for our continuous models, we need to express discrete counterparts of these operators. 
To this end, let $N_x > 1$ be a positive integer, $1 \leq j \leq i \leq N_x$, and $(x_i,y_j)$ be distinct points of the triangular region $\Delta_{x,y}$, where $$\delta_x = \frac{L}{N_x - 1}, \quad x_i = (i - 1)\delta_x, \quad y_j = (j - 1)\delta_x.$$ We derive $k$ approximately by setting $$k^M(x_i,y_j) = -\frac{\mu y_j}{2 \nu} \sum_{m=0}^M \left(-\frac{\mu}{4 \nu}\right)^m \frac{(x_i^2 - y_j^2)^m}{m! (m+1)!},$$ where $M$ is an appropriate value in the sense that $\underset{1 \leq j \leq i \leq N_x}{\max}|k^{M+1}(x_i,y_j) - k^M(x_i,y_j)|$ is less than a tolerance value. Using $k^M$ and applying the composite trapezoidal rule for the integration, the discrete counterpart $\mathbf{\Upsilon}_k^h$ of $\Upsilon_k$ can be expressed by an $N_x$ dimensional square matrix with elements $$(\mathbf{\Upsilon}_k^h)_{i,j} = \begin{cases} 0, &\text{if }j > i, \\ \frac{\delta_x}{2}k(x_i,x_j), &\text{if }j = i, \\ \delta_x k(x_i,x_j), &\text{else}. \end{cases}$$ The discrete counterpart of the operator $P_N$, say $\mathbf{P}_N^h$, can also be expressed by an ${N_x}$ dimensional square matrix $\mathbf{P}_N^h = \delta_x \mathbf{W} \mathbf{W}^T$, where $\mathbf{W}$ is an $N_x \times N$ dimensional matrix with elements $$(\mathbf{W})_{j,n} = \sqrt{\frac{2}{L}} \sin \left(\frac{ n\pi x_j}{L}\right), \quad n = 1, \dotsc, N.$$ Next, we obtain an approximation to $\Phi_N u$ using the iteration $$\begin{aligned} \label{invlem_vN1} \Phi_{N} u &\equiv& (I - \Phi_{N-1})[\Upsilon_k P_{N} u] -\dfrac{\left( (I-\Phi_{N-1})[\Upsilon_kP_N u],e_{N}\right)_2}{1+\left((I-\Phi_{N-1})[\Upsilon_k e_N],e_{N}\right)_2}(I-\Phi_{N-1})[\Upsilon_ke_N], \\ \Phi_0 u &=& 0. \nonumber\end{aligned}$$ Iteration [\[invlem_vN1\]](#invlem_vN1){reference-type="eqref" reference="invlem_vN1"} can be viewed as a mapping, say $\Psi$, from the $p-$th level to the $(p+1)-$th level, $p = 1, 2, \dotsc, N-1$, since the derivation of $\Phi_N u$ via [\[invlem_vN1\]](#invlem_vN1){reference-type="eqref" reference="invlem_vN1"} requires information related to $\Phi_{N-1}$. More precisely, one explicitly needs to know $\Phi_{N-1}[\Upsilon_k P_N u]$ and $\Phi_{N-1}[\Upsilon_ke_N]$ to derive $\Phi_N u$. Let us illustrate this situation schematically as follows: $$\label{scheme2} \left. \begin{aligned} &\Phi_{N-1} [\Upsilon_kP_N u] \\ &\Phi_{N-1} [\Upsilon_ke_N] \end{aligned} \right\} \xlongrightarrow{\Psi} \Phi_N u.$$ However, $\Phi_{N-1} [\Upsilon_kP_N u]$ and $\Phi_{N-1} [\Upsilon_ke_N]$ are unknowns as well, since they depend on $\Phi_{N-2}$ due to iteration [\[invlem_vN1\]](#invlem_vN1){reference-type="eqref" reference="invlem_vN1"}. In other words, considering [\[invlem_vN1\]](#invlem_vN1){reference-type="eqref" reference="invlem_vN1"} for $N -1$ and replacing $u$ with $\Upsilon_k P_N u$ or $\Upsilon_ke_N$, we see that we need information about $\Phi_{N-2}[(\Upsilon_k P_{N-1})(\Upsilon_k P_N) u]$ and $\Phi_{N-2}[\Upsilon_k e_{N-1}]$ or $\Phi_{N-2}[(\Upsilon_k P_{N-1})(\Upsilon_k e_N)]$ and $\Phi_{N-2}[\Upsilon_k e_{N-1}]$, respectively. Together with the iteration [\[scheme2\]](#scheme2){reference-type="eqref" reference="scheme2"}, let us illustrate this as the following scheme: $$\label{scheme3} \left. \begin{aligned} \left. \begin{aligned} &\Phi_{N-2}[(\Upsilon_k P_{N-1})(\Upsilon_k P_N) u] \\ &\Phi_{N-2}[\Upsilon_k e_{N-1}] \end{aligned} \right\} \xlongrightarrow{\Psi} &\Phi_{N-1} [\Upsilon_kP_N u] \\ \left.
\begin{aligned} &\Phi_{N-2}[(\Upsilon_k P_{N-1})(\Upsilon e_N)] \\ &\Phi_{N-2}[\Upsilon_k e_{N-1}] \end{aligned} \right\} \xlongrightarrow{\Psi} &\Phi_{N-1}[\Upsilon_k e_N] \end{aligned} \right\} \xlongrightarrow{\Psi} \Phi_N u.$$ The functions lying at the most left end of scheme [\[scheme3\]](#scheme3){reference-type="eqref" reference="scheme3"} are unknown since they depend on $\Phi_{N-3}$. Iteratively, each backward step to determine $\Phi_j$ gives rise to a new unknown including $\Phi_{j-1}$, $j = 3, \dotsc , N$. Therefore, in order to understand what total information is required to derive $\Phi_N u$, we need to go backward step by step until we reach $\Phi_2$, since it depends on $\Phi_1$, which is already known since $\Phi_0 \varphi = 0$ implies $$\Phi_1 \varphi = \frac{1}{1 + \beta_1} \Upsilon_kP_1 \varphi,$$ where $\beta_1 = \int_0^L e_1(s) [\Upsilon_k e_1](s) ds$. For example, if one needs to find $\Phi_3 u$, then one can see by taking $N = 3$ on [\[scheme3\]](#scheme3){reference-type="eqref" reference="scheme3"} that $(\Upsilon_k P_1)[\Upsilon_ke_2]$, $(\Upsilon_k P_1)(\Upsilon_k P_2)[\Upsilon_k e_3]$ and $(\Upsilon_kP_1)(\Upsilon_kP_2)(\Upsilon_kP_3)u$ are required inputs. In general, to derive $\Phi_N u$ explicitly, we need $$\label{inviter_input1} (\Upsilon_k P_1)(\Upsilon_k P_2) \dotsm (\Upsilon_k P_{N-1})(\Upsilon_k P_{N})u$$ and $$\begin{aligned} &(\Upsilon_k P_1)[\Upsilon_k e_2] \tag{6.4--2} \label{inviter_input2} \\ &(\Upsilon_k P_1)(\Upsilon_k P_2)[\Upsilon_k e_3] \tag{6.4--3} \label{inviter_input3} \\ & \quad\vdots \nonumber\\ &(\Upsilon_k P_1)(\Upsilon_k P_2) \dotsc (\Upsilon_k P_{j-1})[\Upsilon_k e_j] \tag{6.4--j} \label{inviter_inputj} \\ & \quad\vdots \nonumber\\ &(\Upsilon_k P_1)(\Upsilon_k P_2) \dotsc (\Upsilon_k P_{j-1}) \dotsc (\Upsilon_k P_{N-1})[\Upsilon_k e_N] \tag{6.4--N} \label{inviter_inputN}\end{aligned}$$ as an input. Algorithm [\[alg:inviter\]](#alg:inviter){reference-type="ref" reference="alg:inviter"} gives a numerical construction of this. $\varphi$ $\mathbf{\Phi_N^h}u \gets \frac{1}{1 + \beta_1} \mathbf{\Upsilon_k^h} \mathbf{P_N^h}\varphi;$ $K_\text{old}(1) \gets \text{expression } \eqref{inviter_input1};$ $K_\text{old}(j) \gets \text{expression } \eqref{inviter_inputj};$ $i \gets N;$ $K_\text{new} \gets \mathbf{0}_{i-1}$ $K_\text{new}(1) \gets \Psi(K_\text{old}(1),K_\text{old}(2));$ $K_\text{new}(j-1) \gets \Psi\left(K_\text{old}(2),K_\text{old}(j)\right);$ $K_\text{old};$ $K_\text{old} \gets K_\text{new};$ $i \gets i - 1;$ $\mathbf{\Phi_N^h}\varphi \gets K_\text{old}(1);$ In view of the above discretization procedure, control input is of the form $$\label{num_ctrl} g(u(:,t)) = \int_0^L k(L,y) \mathbf{P}_N^h [(\mathbf{I}_{N_x} - \mathbf{\Phi}_N^h) u(y,t)] dy,$$ where $\mathbf{\Phi}_N^h$ is derived by using the above iterative scheme involving finite dimensional operators $\mathbf{\Upsilon}_k^h$, $\mathbf{P}_N^h$ and $\mathbf{I}_{N_x}$ is the $N_x$ dimensional identity matrix. Note that we evaluate the integral [\[num_ctrl\]](#num_ctrl){reference-type="eqref" reference="num_ctrl"} by the composite trapezoidal rule. Next, we derive discrete schemes for our linear and nonlinear continuous models. #### *Linear case.* Define the finite dimensional space $\mathrm{X}^{h} \doteq \left\{\mathbf{\varphi} \, | \, \mathbf{\varphi} = [\varphi_1 \, \dotsm \, \varphi_{N_x}]^T \in \mathbb{R}^{N_x}\right\}$ with the property that $\varphi_1 = 0$ and $\varphi_{N_x} = g(\varphi)$. 
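The discrete objects introduced above, including the feedback value $g(\varphi)$ that appears in the definition of $\mathrm{X}^h$, can be assembled in a few lines of NumPy. The sketch below is ours and not the authors' implementation: the parameter values are placeholders, the variable names (`K`, `Ups`, `PN`, `Phi`, `feedback`) are invented for illustration, and the inverse transformation is only implemented in the base case $N=1$, where $\Phi_N=\Phi_1$ and no backward iteration is needed.

```python
import numpy as np
from math import factorial

# Placeholder parameter values chosen only for illustration (not prescribed by the paper).
L, nu, mu, N, N_x, M = 1.0, 1.0, 6.0, 1, 200, 30

dx = L / (N_x - 1)
x = np.linspace(0.0, L, N_x)                 # x_i = (i-1)*delta_x
X, Y = np.meshgrid(x, x, indexing="ij")      # X[i, j] = x_i, Y[i, j] = y_j

# Partial sum k^M(x_i, y_j) of the series representation of the kernel k.
K = np.zeros_like(X)
for m in range(M + 1):
    K += (-mu / (4.0 * nu)) ** m * (X ** 2 - Y ** 2) ** m / (factorial(m) * factorial(m + 1))
K *= -mu * Y / (2.0 * nu)

# Discrete Volterra operator Upsilon_k^h exactly as defined in the text:
# zero above the diagonal, half trapezoidal weight on the diagonal, full weight below.
Ups = np.tril(dx * K)
Ups[np.diag_indices(N_x)] *= 0.5

# Discrete projection P_N^h = delta_x * W W^T onto the first N Fourier sine modes.
modes = np.arange(1, N + 1)
W = np.sqrt(2.0 / L) * np.sin(np.pi * np.outer(x, modes) / L)   # N_x x N matrix
PN = dx * W @ W.T

def trap(f):
    """Composite trapezoidal rule on the uniform grid x."""
    return dx * (np.sum(f) - 0.5 * (f[0] + f[-1]))

# Base case Phi_1 = Upsilon_k P_1 / (1 + beta_1) of the inversion iteration (valid for N = 1).
e1 = np.sqrt(2.0 / L) * np.sin(np.pi * x / L)
beta1 = trap(e1 * (Ups @ e1))
Phi = (Ups @ PN) / (1.0 + beta1)

def feedback(u):
    """Discrete control input g(u) = int_0^L k(L, y) * (P_N^h (I - Phi_N^h) u)(y) dy."""
    return trap(K[-1, :] * (PN @ (u - Phi @ u)))
```

For $N>1$, `Phi` would instead be produced by the backward iteration of Algorithm [\[alg:inviter\]](#alg:inviter){reference-type="ref" reference="alg:inviter"}.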
Define the operator $\mathbf{A}: \mathrm{X}^h \to \mathrm{X}^h$, $$\mathbf{A} \doteq -\nu \mathbf{\Delta} - \alpha \mathbf{I}_{N_x} + \mu \mathbf{P}_N^h,$$ where $\mathbf{\Delta}$ is the $N_x$ dimensional tridiagonal matrix obtained by applying central difference approximation to the second order derivative. Let $N_t$ denotes the number of time steps, $T_\text{max}$ is the final time and $\delta_t = \frac{T_\text{max}}{N_t - 1}$. Let $\mathbf{u}^n = [u_1^n \, \dotsm \, u_{N_x}^n]^T \in \mathrm{X}^h$ be an approximation of the solution $u$ at $n-$th time level. Using Crank-Nicolson time stepping scheme, which yields unconditionally numerical stable results in time, we end up with the following fully discrete problem for the linear target model [\[pde_tl\]](#pde_tl){reference-type="eqref" reference="pde_tl"}: $$\label{disc_lin} \begin{cases} \text{Given } \mathbf{u}^n \in \mathrm{X}^h, \text{ find } \mathbf{u}^{n+1} \in \mathrm{X}^h \text{ such that} \\ \left(\mathbf{I}_{N_x} + \frac{\delta_t}{2} \mathbf{A}\right) \mathbf{u}^{n+1} = \mathbf{B}_l^n, \quad n = 0, 1, \dotsc, N_t, \\ u^{n+1}_{N_x} = g(\mathbf{u}^{n}), \end{cases}$$ where $$\label{disc_lin_rhs} \mathbf{B}_l^n \doteq \left(\mathbf{I}_{N_x} - \frac{\delta_t}{2} \mathbf{A}\right) \mathbf{u}^{n}.$$ Now [\[disc_lin\]](#disc_lin){reference-type="eqref" reference="disc_lin"}-[\[disc_lin_rhs\]](#disc_lin_rhs){reference-type="eqref" reference="disc_lin_rhs"} is linear in $\mathbf{u}^{n+1}$, hence can be solved directly. See Algorithm [\[alg:numlin\]](#alg:numlin){reference-type="ref" reference="alg:numlin"} for the algorithm to get simulations for linear model. Initial state $u_0$. $\mathbf{u}^1 \gets \mathbf{u_0}$; $u^{n+1}(1) \gets 0$; $u^{n+1}(N_x) \gets \text{trapz}(y,k(L,y_j)\mathbf{P}_N^h[(\mathbf{I}_{N_x}-\mathbf{\Phi}_N^h) u^{n}(y_j)])$; $\mathbf{u}^{n+1}$; #### *Nonlinear case.* Applying the same discretization procedure as in the linear case, we obtain the following fully discrete system of equations for nonlinear model [\[pde_nl\]](#pde_nl){reference-type="eqref" reference="pde_nl"} $$\label{disc_nonlin} \left(\mathbf{I}_{N_x} + \frac{\delta_t}{2} \mathbf{A}\right)\mathbf{u}^{n+1} + \frac{\delta_t}{2} \left(\mathbf{u}^{n+1}\right)^3 = \mathbf{B}_l^n + \mathbf{B}_{nl}^n,$$ where $\mathbf{B}_l^n$ is given by [\[disc_lin_rhs\]](#disc_lin_rhs){reference-type="eqref" reference="disc_lin_rhs"} and $\mathbf{B}_{nl}^n = -\frac{\delta_t}{2}\left(\mathbf{u}^{n}\right)^3$. Observe that [\[disc_nonlin\]](#disc_nonlin){reference-type="eqref" reference="disc_nonlin"} is nonlinear in the unknown $\mathbf{u}^{n+1}$. To solve [\[disc_nonlin\]](#disc_nonlin){reference-type="eqref" reference="disc_nonlin"} for $\mathbf{u}^{n+1}$, we apply the following linearization scheme: Let $\mathbf{u}^{n,p}$ be an approximation to the unknown $\mathbf{u}^{n+1}$ and consider performing the iteration $$\label{iter_nonlin} \begin{cases} \mathbf{u}^{n,p+1} = \mathbf{u}^{n,p} + \mathbf{du}, \quad p = 0, 1, 2 \dotsc, \\ \mathbf{u}^{n,0} = \mathbf{u}^n, \end{cases}$$ together with $$\label{iter_nonlin2} u^{n,p+1}(N_x) = g(\mathbf{u}^{n,p}),$$ to obtain a better approximation, $\mathbf{u}^{n,p+1}$, until $\underset{1 \leq i \leq N_x}{\max} |\mathbf{du}|$ is small enough. 
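Before turning to the determination of $\mathbf{du}$, the fully discrete linear step [\[disc_lin\]](#disc_lin){reference-type="eqref" reference="disc_lin"}-[\[disc_lin_rhs\]](#disc_lin_rhs){reference-type="eqref" reference="disc_lin_rhs"} can be written out in code, continuing the NumPy sketch above (so `x`, `dx`, `N_x`, `nu`, `mu`, `PN` and `feedback` are reused). The values of $\alpha$, $T_\text{max}$ and $N_t$ are again placeholders, and the way the two boundary rows are imposed below is one possible choice, since the text leaves this detail to Algorithm [\[alg:numlin\]](#alg:numlin){reference-type="ref" reference="alg:numlin"}.

```python
import numpy as np

# Placeholder time-stepping parameters (illustrative only).
alpha, T_max, N_t = 12.0, 1.0, 400
dt = T_max / (N_t - 1)

# Central-difference approximation of the second derivative on the uniform grid.
Delta = (np.diag(np.ones(N_x - 1), -1) - 2.0 * np.eye(N_x)
         + np.diag(np.ones(N_x - 1), 1)) / dx ** 2

# Discrete operator A = -nu*Delta - alpha*I + mu*P_N^h.
A = -nu * Delta - alpha * np.eye(N_x) + mu * PN

def cn_step(u):
    """One Crank-Nicolson step: (I + dt/2 A) u^{n+1} = (I - dt/2 A) u^n,
    with u^{n+1}(0) = 0 and u^{n+1}(L) = g(u^n) imposed on the boundary rows."""
    lhs = np.eye(N_x) + 0.5 * dt * A
    rhs = (np.eye(N_x) - 0.5 * dt * A) @ u
    lhs[0, :] = 0.0
    lhs[0, 0] = 1.0
    rhs[0] = 0.0
    lhs[-1, :] = 0.0
    lhs[-1, -1] = 1.0
    rhs[-1] = feedback(u)
    return np.linalg.solve(lhs, rhs)

# March an arbitrary initial state forward in time.
u = np.sin(2.0 * np.pi * x / L)
for _ in range(N_t - 1):
    u = cn_step(u)
```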
To determine $\mathbf{du}$ in each iteration, we replace $\mathbf{u}^{n+1}$ by $\mathbf{u}^{n,p} + \mathbf{du}$ for the linear term and approximate the nonlinear term $\left(\mathbf{u}^{n+1}\right)^3$ linearly in $\mathbf{du}$ by $\left(\mathbf{u}^{n,p}\right)^3 + 3 \mathbf{du} \left(\mathbf{u}^{n,p}\right)^2$. Then, we get $$\label{disc_nonlin2} \left(\mathbf{I}_{N_x} + \frac{\delta_t}{2} \mathbf{A}\right) (\mathbf{u}^{n,p} + \mathbf{du}) + \frac{\delta_t}{2} \left(\left(\mathbf{u}^{n,p}\right)^3 + 3 \mathbf{du} \left(\mathbf{u}^{n,p}\right)^2\right) = \mathbf{B}_l^n + \mathbf{B}_{nl}^n.$$ Now we re-express [\[disc_nonlin2\]](#disc_nonlin2){reference-type="eqref" reference="disc_nonlin2"} in terms of $\mathbf{du}$ and write $$\label{dw_lin} \left(\mathbf{I}_{N_x} + \frac{\delta_t}{2} \mathbf{A} + \frac{3\delta_t}{2} \text{diag} \left(\mathbf{u}^{n,p}\right)^2 \right)\mathbf{du} = \mathbf{B}_l^n + \mathbf{B}_{nl}^n - \left(\mathbf{I}_{N_x} + \frac{\delta_t}{2} \mathbf{A}\right)\mathbf{u}^{n,p} - \frac{\delta_t}{2} \left(\mathbf{u}^{n,p}\right)^3.$$ Here $\text{diag} \left(\mathbf{u}^{n,p}\right)^2$ represents the $N_x$ dimensional diagonal matrix whose diagonal elements are $\left(u^{n,p}(x_i)\right)^2$, $i = 1, \dotsc, N_x$. Observe that [\[dw_lin\]](#dw_lin){reference-type="eqref" reference="dw_lin"} is linear in $\mathbf{du}$. Consequently, for given $\mathbf{u}^{n,p}$, [\[dw_lin\]](#dw_lin){reference-type="eqref" reference="dw_lin"} can be solved directly to obtain $\mathbf{du}$. Once $\underset{1 \leq i \leq N_x}{\max} |\mathbf{du}|$ is small enough, we stop the iteration [\[iter_nonlin\]](#iter_nonlin){reference-type="eqref" reference="iter_nonlin"}-[\[iter_nonlin2\]](#iter_nonlin2){reference-type="eqref" reference="iter_nonlin2"} and set $\mathbf{u}^{n+1} = \mathbf{u}^{n,p} + \mathbf{du}$. See Algorithm [\[alg:numnonlin\]](#alg:numnonlin){reference-type="ref" reference="alg:numnonlin"} for the algorithm to get simulations for the nonlinear model. Initial state $u_0$. $\text{TOL} \gets 10^{-12}$; $\mathbf{u}^1 \gets \mathbf{u_0}$; $u^{n+1}(1) \gets 0$; $p \gets 0$; $\mathbf{u}^{n,p} \gets \mathbf{u}^n$; $\mathbf{du} = \mathbf{1}_{N_x}$; $\mathbf{du}$; $\mathbf{u}^{n,p+1} \gets \mathbf{u}^{n,p} + \mathbf{du}$; $u^{n,p+1}(N_x) \gets \text{trapz}(y,k(L,y_j)\mathbf{P}_N^h[(\mathbf{I}_{N_x}-\mathbf{\Phi}_N^h) u^{n,p}(y_j)])$; $p \gets p + 1$; $\mathbf{u}^{n+1} = \mathbf{u}^{n,p+1}$; In the next part of this section, we present two experiments which verify our theoretical stabilization results both for the linear and nonlinear models. Results are obtained by taking $N_x = 1000$ spatial nodes and $N_t = 1000$ time steps. The value $M$ for the partial sum of the series representation of the backstepping kernel $k$ is chosen such that $\underset{1 \leq j \leq i \leq N_x}{\max}|k^{M+1}(x_i,y_j) - k^M(x_i,y_j)| \sim 10^{-12}$.\ #### **Experiment 1 - Stabilization with a single Fourier mode.** Consider the linear model $$\label{num1} \begin{cases} u_t - u_{xx} - 12 u = 0, \quad x\in(0,1), t > 0, \\ u(0,t) = 0, u(1,t) = g(t), \quad t > 0, \\ u(x,0) = 10x\left(x - \frac{1}{2}\right)(x - 1)^2, \quad x \in (0,1). \end{cases}$$ We have $\nu = 1, \alpha = 12$, and $\lambda_1 = \pi^2 < 12 < 4\pi^2 = \lambda_2$. So, in the absence of control, the zero equilibrium is unstable. As the instability level is one, we should be able to stabilize the zero equilibrium by a controller involving a single Fourier mode. So let us take $N = 1$.
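As a quick arithmetic check before fixing $\mu$, the two endpoints of the admissible interval prescribed by [\[mu2\]](#mu2){reference-type="eqref" reference="mu2"} for these parameter values can be evaluated numerically (a short sketch; the variable names are ours):

```python
import numpy as np

# Endpoints of the admissible interval for mu in [mu2] with nu = 1, alpha = 12,
# lambda_1 = pi^2 and N = 1.
nu, alpha, lam1, N = 1.0, 12.0, np.pi ** 2, 1
lower = 2.0 * (alpha - nu * lam1) / (1.0 - 1.0 / (N + 1) ** 2)   # = 8(12 - pi^2)/3
upper = 2.0 * nu * lam1 * (N + 1) ** 2                            # = 2*nu*lambda_2 = 8*pi^2
print(f"{lower:.3f} < mu < {upper:.3f}")   # approximately 5.681 < mu < 78.957
```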
Then, in order to fulfill the condition [\[mu2\]](#mu2){reference-type="eqref" reference="mu2"}, $\mu$ should be such that $$\frac{8(12 - \pi^2)}{3} < \mu < 8 \pi^2.$$ We take $\mu = 6$. With this choice $1 + \beta_1 = 0.632 \neq 0$, so $(\mu,N) = (6,1)$ is an admissible decay rate-mode pair. Thus, with these problem parameters, we ensure that Lemma [Lemma 11](#invlem){reference-type="ref" reference="invlem"} holds. See Figure [\[fig:ctrl_lin\]](#fig:ctrl_lin){reference-type="ref" reference="fig:ctrl_lin"} for numerical simulations of the stabilized linear model [\[num1\]](#num1){reference-type="eqref" reference="num1"}. ![Time evolution of the solution.](3d_wcont_lin.png){#fig:ctrl_lin_3d width="\\textwidth"} ![Time evolution of $L^2-$norm of the solution.](l2_lin.png){#fig:ctrl_lin_l2 width="\\textwidth"} \ #### **Experiment 2 - Rapid stabilization of a nonlinear model** Now we consider the nonlinear model $$\label{num2} \begin{cases} u_t - u_{xx} - 15 u + u^3 = 0, \quad x\in(0,1), t > 0, \\ u(0,t) = 0, u(1,t) = g(t), \quad t > 0, \\ u(x,0) = \sin(2\pi x) - \frac{1}{2}\sin(3\pi x), \quad x \in (0,1). \end{cases}$$ We have $\alpha - \nu \lambda_1 = 15 - \pi^2 > 0$. Therefore, if there is no control input acting on the model, then the zero equilibrium is unstable and the given initial state will evolve into a nontrivial equilibrium in time (see Figure [3](#fig:3d_wocont_nonlin){reference-type="ref" reference="fig:3d_wocont_nonlin"}). ![Time evolution of the solution of uncontrolled nonlinear model [\[num2\]](#num2){reference-type="eqref" reference="num2"}.](3d_wocont_nonlin.png){#fig:3d_wocont_nonlin width="0.5\\columnwidth"} Let us take $\mu = 15 > \alpha - \nu \lambda_1 = 15 - \pi^2$. Then choosing $N = 2$ fulfills the condition $$N = 2> \max \left\{\frac{\mu}{2\nu\lambda_1} - 1, \frac{\mu}{\mu + \nu\lambda_1 - \alpha} - 1\right\} = \max\left\{\frac{15}{2 \pi^2} -1,\frac{15}{\pi^2}- 1 \right\}.$$ With this choice of $\mu$ and $N$, we have $1 + \beta_1 = 1.746 \neq 0$ and $1+\left((I-\Phi_1)[\Upsilon_k e_{2}],e_{2}\right)_2 = 0.845 \neq 0$. Hence $(\mu,N) = (15,2)$ is an admissible decay rate-mode pair and we ensure that Lemma [Lemma 11](#invlem){reference-type="ref" reference="invlem"} holds. See Figure [\[fig:ctrl_nonlin\]](#fig:ctrl_nonlin){reference-type="ref" reference="fig:ctrl_nonlin"} for the numerical simulation of the stabilized nonlinear model [\[num2\]](#num2){reference-type="eqref" reference="num2"}. ![Time evolution of the solution.](3d_wcont_nonlin.png){#fig:ctrl_nonlin_3d width="\\textwidth"} ![Time evolution of $L^2-$norm of the solution.](l2_nonlin.png){#fig:ctrl_nonlin_l2 width="\\textwidth"} [^1]: This research was funded by TÜBİTAK 1001 Grant \#122F084. [^2]: T. Özsarı's research was partially supported by Science Academy's Young Scientist Award (BAGEP 2020). [^3]: \*Corresponding author: Türker Özsarı, turker.ozsari\@bilkent.edu.tr
arxiv_math
{ "id": "2309.02196", "title": "Finite dimensional backstepping controller design", "authors": "Varga Kalantarov, T\\\"urker \\\"Ozsar{\\i} and Kemal Cem Y{\\i}lmaz", "categories": "math.OC", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We show how to formulate some recent results from homological stability of algebras in Graham and Lehrer's language of cellular algebras. The aim is to begin to connect the new results from topology to well-established representation theoretic understandings of these objects. address: Mathematical Institute, Utrecht University, Heidelberglaan 8 3584 CS Utrecht, The Netherlands author: - Guy Boyde bibliography: - references.bib title: Homological stability and Graham-Lehrer cellular algebras --- # Introduction Recently, there has been interest among topologists in certain diagram algebras, such as the Temperley-Lieb [@BH; @BHComb], Brauer [@BHP], Iwahori-Hecke [@Hepworth; @Moselle], partition [@BHP2; @Boyde2], and Jones annular algebras [@Boyde2], as well as equivariant and braided analogues of these [@Graves]. These algebras have a long history in representation theory (see for example [@GrahamLehrer; @KonigXi1; @KonigXi2; @Xi; @HalversonRam; @RidoutSaintAubin; @BDM]), and Patzt and Sitaraman [@Patzt; @Sitaraman] have shown that topological language can be used to prove results about representations, so it would be good to show conversely that the representation theoretic language can be used to answer topological questions. The aim of this paper is to make some basic steps in this direction. Our algebras $A$ (over a commutative unital ground ring $R$) will always be equipped with a trivial module $\mathbbm{1}$. For the Brauer and Temperley-Lieb algebras, for example, this is the one-dimensional $R$-module where diagrams with the maximal number of left-to-right connections act by 1, and other diagrams act by zero. By the *homology of $A$* we will always mean the $R$-modules $$\mathrm{Tor}_q^A(\mathbbm{1},\mathbbm{1})$$ for $q \geq 0$. The primary goal on the topology side is the computation of these modules. In this paper we attempt to connect this goal to representation theory by giving reasonably general results which describe the homology, with hypotheses in a form familiar to representation theorists. ## Classes of algebras We will define two new classes of algebras, which we call *naive-cellular* and *diagram-like*. Both are inspired by the Graham and Lehrer's *cellular algebras* [@GrahamLehrer], and the definitions are closely related. For a given algebra, we have a diagram of implications: We introduce these classes essentially because we wish to work in the language of cellular algebras, but view the algebras themselves in a less refined way. In particular, the Robinson-Schensted correspondence seems to be unnecessary for our purposes and too sensitive to the ground ring. Naive-cellular algebras are essentially defined (Definition [Definition 6](#algDef){reference-type="ref" reference="algDef"}) by stripping away the Robinson-Schensted correspondence from Graham and Lehrer's definition. What remains is just the idea (familiar from Graham and Lehrer's work) that for example a Brauer diagram may be uniquely decomposed into 'two link states and a permutation' or 'a left piece, a right piece, and middle piece': $=$ For some of what we want to do, this definition is not rigid enough, so we introduce diagram-like algebras (Definition [Definition 9](#def:dgrmLike){reference-type="ref" reference="def:dgrmLike"}). 
Roughly, this is an algebra with a basis which can be decomposed into 'a left piece, a right piece, and middle piece', as before, but now the product of two basis elements must be a multiple of another basis element, and the decomposition of the result must depend in a specific (fairly obvious) way on the decomposition of the factors. We will show that the Brauer, Jones annular, and Temperley-Lieb algebras are diagram-like, hence naive-cellular. The facts we need are essentially already written down in Graham and Lehrer's paper. Xi [@Xi] has shown that the partition algebras are cellular, and they are almost certainly diagram-like. However, Xi takes a different approach to Graham and Lehrer, and this paper is already quite long, so we chose to omit the partition algebras, rather than spend more pages proving the necessary basic facts. One concrete sense in which cellular algebras are 'too refined' is as follows: the results we wish to prove sometimes hold in situations where the algebra in question is not cellular. For example, we will use our methods to provide an alternate proof Theorem 1.5 of [@Boyde2], which says that the homology of the Jones annular algebra $\mathrm{J}_n(\delta)$ coincides with that of the cyclic group $C_n$ whenever $n$ is odd or $\delta$ is invertible, but Graham and Lehrer's proof that $\mathrm{J}_n(\delta)$ is cellular assumes the splitting of a certain polynomial in the ground ring [@GrahamLehrer Theorem 6.15]. A word of caution: although cellular algebras are naive-cellular, we will not in general be working with the naive-cellular structure obtained from Graham and Lehrer's cellular structure. This happens to be true for the Temperley-Lieb algebras (because 'the only planar permutation is the identity') but for the Brauer and Jones annular algebras we use a different structure (albeit one which shows up in Graham and Lehrer's work - see Remark [Remark 14](#rmk:NotTheSame){reference-type="ref" reference="rmk:NotTheSame"}). ## Overview of results Our first main result, Theorem [Theorem 1](#thm:global){reference-type="ref" reference="thm:global"}, is for computing all of the homology of an algebra. We frame it for naive-cellular algebras, and is intended to illustrate the connection to Graham and Lehrer's language, thereby making clear that the hypotheses are quite mild. As an example, we will use it to recover the 'global' results on the Jones annular algebras $\mathrm{J}_n(\delta)$ from [@Boyde2], taking the opportunity to give an entertaining alternative proof via 'twisting' that would not have been possible for the Temperley-Lieb algebras. It could equally well have been used to recover the known global results on the Temperley-Lieb [@BH; @Sroka], Brauer [@BHP], or Partition algebras [@BHP2], but we will restrict ourselves to a single example. In Section [1.3](#subsection:globalResults){reference-type="ref" reference="subsection:globalResults"}, we will state this theorem, and then in Section [1.5](#subsection:Cellular){reference-type="ref" reference="subsection:Cellular"} we will give the special case of that statement for cellular algebras. These statements closely resemble some of Graham and Lehrer's [@GrahamLehrer]. It is also desirable to make some connection to stability, which is to say, results which hold under milder hypotheses, but only in a range of homological degrees. From a topological point of view, these are the more interesting results. 
We will prove a technical proposition (Proposition [Proposition 69](#prop:Idempotents){reference-type="ref" reference="prop:Idempotents"}) and show how to use it in tandem with the main theorem of our previous paper [@Boyde2] to prove basic stability results. This is where we introduce diagram-like algebras. Roughly speaking, we want to think of Proposition [Proposition 69](#prop:Idempotents){reference-type="ref" reference="prop:Idempotents"} as the representation-theoretic 'front-end' of this pipeline, and the main theorem from [@Boyde2] as the topological 'back-end'. As an example, we will recover Sroka's homological stability and vanishing result for the Temperley-Lieb algebras. Again, we could equally well have recovered the known stability results for the Temperley-Lieb [@BH; @Sroka], Jones annular [@Boyde2], or Partition algebras [@BHP2], but we will restrict ourselves to a single example. The hope is that by running the machine consisting of Proposition [Proposition 69](#prop:Idempotents){reference-type="ref" reference="prop:Idempotents"} and the main theorem of [@Boyde2], one puts the topology in a black box, so that users not familiar with e.g. spectral sequences can easily establish basic stability results. We stress that stability results obtained in this way will not in general be optimal: Boyd-Hepworth's original proof of stability for the Temperley-Lieb algebras obtains a better range, of slope $1$, and requires much more intricate work. That said, for the partition algebras, one can obtain the optimal stability range due to Boyd-Hepworth-Patzt [@BHP2] in this way [@Boyde2]. ## Global results {#subsection:globalResults} Readers familiar with cellular algebras may wish to first skip to Section [1.5](#subsection:Cellular){reference-type="ref" reference="subsection:Cellular"}, where we give a version in that context. We will say that a subset $X$ of a poset $\Lambda$ is *downward closed* if whenever $\mu \leq \lambda$ and $\lambda \in X$ then $\mu \in X$. Many of the symbols appearing here will not be defined until later: for naive-cellular algebras see Definition [Definition 6](#algDef){reference-type="ref" reference="algDef"}, for the modules $W(\lambda)$ see Definition [Definition 40](#def:LinkMod){reference-type="ref" reference="def:LinkMod"}, for the bilinear forms $\langle \phantom{x},\phantom{x} \rangle_{\tau}$ see Definition [Definition 44](#def:innerProduct){reference-type="ref" reference="def:innerProduct"}, and for the ideals $I_X$ see Definition [Definition 23](#def:I){reference-type="ref" reference="def:I"}. **Theorem 1**. *Let $A$ be a naive-cellular algebra over a ring $R$, with naive-cellular datum $(\Lambda, G, M, C, *)$. Let $Y \subset X$ be downward closed subsets of $\Lambda$, with $X \setminus Y$ finite. Suppose that for each $\lambda \in X \setminus Y$ we have the following. For every $q \in M(\lambda)$, there exists $v \in W(\lambda)$ such that for some $\sigma \in G(\lambda)$ we have $$\langle C_q, v \rangle_{\tau} = \begin{cases} 1 & \textrm{ if } \tau=\sigma, \textrm{ and} \\ 0 & \textrm{ otherwise.} \end{cases}$$ Then, for any right $A$-module $M$ and left $A$-module $N$, where $I_X$ acts trivially on both, we have $$\mathrm{Tor}_*^{\faktor{A}{I_{Y}}}(M,N) \cong \mathrm{Tor}_*^{\faktor{A}{I_{X}}}(M,N).$$* We will prove this theorem in Section [6](#section:GlobalResults){reference-type="ref" reference="section:GlobalResults"}. 
In the case of cellular algebras, our definitions coincide with Graham and Lehrer's, and their results suggest that the hypotheses of this theorem are not too strong (see Section [1.5](#subsection:Cellular){reference-type="ref" reference="subsection:Cellular"}). ## Results which hold only in a range In our paper [@Boyde2] we made the following definition and proved the following theorem. Write $\underline{w}$ for the set $\{1,2,\dots, w\}$. **Definition 2**. Let $A$ be an $R$-algebra, let $I$ be a twosided ideal of $A$, and let $w \geq h \geq 1$. An *idempotent (left) cover* of $I$ of *height $h$* and *width $w$* is a finite collection of left ideals $J_1, \dots J_w$ of $A$, which cover $I$ in the sense that $J_1 + \dots + J_w = I$, and such that for each $S \subset \underline{w}$ with $\lvert{S}\rvert \leq h$, the intersection $$\bigcap_{i \in S} J_i$$ is either zero or is a principal left ideal generated by an idempotent. If $I$ is free as an $R$-module, then an idempotent cover is said to be *$R$-free* if there is a choice of $R$-basis for $I$ such that each $J_i$ is free on a subset of this basis. **Theorem 3**. *Let $A$ be an augmented $R$-algebra with trivial module $\mathbbm{1}$. Let $I$ be a twosided ideal of $A$ which is free as an $R$-module and acts trivially on $\mathbbm{1}$. Suppose that there exists an $R$-free idempotent left cover of $I$ of height $h$. Then the natural map $$\mathrm{Tor}_q^A(\mathbbm{1},\mathbbm{1}) \to \mathrm{Tor}_q^{\faktor{A}{I}}(\mathbbm{1},\mathbbm{1})$$ is an isomorphism for $q \leq h - 2$, and a surjection for $q = h-1$. If $h$ is equal to the width $w$ of the cover, then this map is an isomorphism for all $q$.* In that paper, we constructed covers for standard ideals of the Jones annular and partition algebras, computed their height, and applied the theorem to obtain stable homology isomorphisms. The most challenging step of this process is to compute the height (i.e. to show that certain ideals are principal and generated by idempotents), and the point of the second half of this paper is that the (technical) Proposition [Proposition 69](#prop:Idempotents){reference-type="ref" reference="prop:Idempotents"} can be used to put this computation into cellular-type language. As a proof of concept, we will use this method to recover Sroka's results on the homology of the Temperley-Lieb algebras. ## The cellular case of Theorem [Theorem 1](#thm:global){reference-type="ref" reference="thm:global"} {#subsection:Cellular} Consider Theorem [Theorem 1](#thm:global){reference-type="ref" reference="thm:global"}. In the cellular case, each $G(\lambda)$ is trivial (Proposition [Proposition 8](#prop:cellIsWossit){reference-type="ref" reference="prop:cellIsWossit"}), and the single bilinear form $\langle \phantom{x}, \phantom{y} \rangle_1$ corresponding to the identity element is their bilinear form $\phi_\lambda$ (c.f. Remarks [Remark 41](#rmk:cellMod){reference-type="ref" reference="rmk:cellMod"} and [Remark 45](#rmk:cellForm){reference-type="ref" reference="rmk:cellForm"}). The resulting special case of Theorem [Theorem 1](#thm:global){reference-type="ref" reference="thm:global"} is as follows: **Theorem 4**. *Let $A$ be a cellular algebra over a ring $R$, with cell datum $(\Lambda, M, C, *)$. Let $Y \subset X$ be downward closed subsets of $\Lambda$, with $X \setminus Y$ finite. Suppose that for each $\lambda \in X \setminus Y$ we have the following. 
For every $q \in M(\lambda)$, there exists $v \in W(\lambda)$ such that $$\phi_\lambda(C_q, v) = 1.$$ Then, for any right $A$-module $M$ and left $A$-module $N$, where $I_X$ acts trivially on both, we have $$\mathrm{Tor}_*^{\faktor{A}{I_{Y}}}(M,N) \cong \mathrm{Tor}_*^{\faktor{A}{I_{X}}}(M,N).$$* In words, the condition of the theorem asks that for every member $C_q$ of the canonical basis of $W(\lambda)$, the image of the function $W(\lambda) \to R$ given by $v \mapsto \phi_\lambda(C_q,v)$ is all of $R$. If $R$ is a field, then this is equivalent to asking that the elements $C_q$ do not lie in the radical of $\phi_\lambda$. Graham and Lehrer show [@GrahamLehrer Theorem 7.3] that, over a field, semisimplicity of $A$ is equivalent to all of the bilinear forms $\phi_\lambda$ being nondegenerate, which is a much stronger condition. Since their condition is defined $\lambda$-wise, this is actually equivalent to semisimplicity of the quotients $\faktor{A}{I_X}$ for each $X$. In this case, the above theorem is trivial: if $\faktor{A}{I_X}$ and $\faktor{A}{I_Y}$ are both semisimple, then all exact sequences of modules over these two algebras are split, so all of their finitely generated modules are projective, and both of the $\mathrm{Tor}$ groups appearing in Theorem [Theorem 4](#thm:globalCellular){reference-type="ref" reference="thm:globalCellular"} vanish automatically. ## The structure of this paper In Section [2](#section:Classes){reference-type="ref" reference="section:Classes"} we will define naive-cellular and diagram-like algebras (Definitions [Definition 6](#algDef){reference-type="ref" reference="algDef"} and [Definition 9](#def:dgrmLike){reference-type="ref" reference="def:dgrmLike"}), and prove that diagram like algebras are naive-cellular (Proposition [Proposition 10](#prop:dgrmLikeIsWossit){reference-type="ref" reference="prop:dgrmLikeIsWossit"}), and that cellular algebras are naive-cellular (Proposition [Proposition 8](#prop:cellIsWossit){reference-type="ref" reference="prop:cellIsWossit"}). We will then show (Section [3](#section:Ex){reference-type="ref" reference="section:Ex"}) that the Brauer, Jones annular, and Temperley-Lieb algebras are diagram-like. We repeat the caution that the naive-cellular structures on these algebras that we will work with are not the same as the ones obtained from Graham and Lehrer's cellular structures. In Section [4](#section:BasicProperties){reference-type="ref" reference="section:BasicProperties"}, we develop the basic theory. This follows Graham and Lehrer's development quite closely, with no important surprises. The basic theory culminates with the definition of the link modules $W(\lambda)$ and the associated bilinear form, which again is a direct analogue of Graham and Lehrer's definition. At this stage, we have the language to prove Theorem [Theorem 1](#thm:global){reference-type="ref" reference="thm:global"}, and we do so in Section [6](#section:GlobalResults){reference-type="ref" reference="section:GlobalResults"}. We specialise this theorem to subalgebras of the Brauer algebras in Section [7](#section:Specialisation){reference-type="ref" reference="section:Specialisation"}, and in Section [8](#section:Jones){reference-type="ref" reference="section:Jones"} we use the specialised theorem to recover the odd strand and invertible parameter results for the Jones annular algebra proven in [@Boyde2] from a different perspective. To prove stability results, we need a bit more. 
In Section [9](#section:LSO){reference-type="ref" reference="section:LSO"} we define the notion of a *link state ordering* (Definition [Definition 62](#def:LSOrdering){reference-type="ref" reference="def:LSOrdering"}) and we use it to prove our second main theoretical result, Proposition [Proposition 69](#prop:Idempotents){reference-type="ref" reference="prop:Idempotents"}. In the final section (Section [10](#section:HS){reference-type="ref" reference="section:HS"}), we use Proposition [Proposition 69](#prop:Idempotents){reference-type="ref" reference="prop:Idempotents"} together with the main theorem of [@Boyde2] (Theorem [Theorem 3](#oldThm){reference-type="ref" reference="oldThm"}) to give a different perspective on Sroka's proof of homological stability for Temperley-Lieb algebras. ## Omissions and further directions There are many things that we do not discuss here, but which might be good directions for further work. Patzt [@Patzt] and Sitaraman [@Sitaraman] have already studied representation stability (á la Church-Ellenberg-Farb [@ChurchEllenbergFarb]) for the Temperley-Lieb algebras, and Patzt also treats the Brauer and partition algebras. We wonder if there is a useful way to take a cellular point of view on these results. There is another point of view on cellular algebras, due to König and Xi [@KonigXi1; @KonigXi2]. Some of what we do here closely resembles some of what they did, and it would be interesting to see if there is any connection. ## Acknowledgements {#acknowledgements .unnumbered} I would like to thank Richard Hepworth for his encouragement, and for providing the original impetus for this project, by telling me about Graham and Lehrer's paper [@GrahamLehrer], and for suggesting that it might provide the correct language for the results of my paper [@Boyde]. I would also like to acknowledge the substantial technical debt to Graham and Lehrer. This work was supported by the European Research council (ERC) through Gijs Heuts' grant "Chromatic homotopy theory of spaces", grant no. 950048. # Classes of algebras {#section:Classes} In this section, we define naive-cellular and diagram-like algebras, recalling first the definition of cellular algebras. ## Cellular algebras Throughout, $R$ will be a commutative ring with unit. In [@GrahamLehrer], the authors define a class of algebras that they call *cellular*: **Definition 5** ([@GrahamLehrer Definition 1.1]). A *cellular algebra* over $R$ is an associative unital algebra $A$, together with *cell datum* $(\Lambda, M, C, *)$, where [\[item:C1\]]{#item:C1 label="item:C1"} $\Lambda$ is a poset, and for each $\lambda \in \Lambda$, $M(\lambda)$ is a finite set (the set of *tableaux of type $\lambda$*) such that $$\begin{aligned} \coprod_{\lambda \in \Lambda} M(\lambda) \times M(\lambda) & \to A \\ (S,T) & \mapsto C_{S,T}^{\lambda} \end{aligned}$$ is an injective map with image an $R$-basis of $A$. [\[item:C2\]]{#item:C2 label="item:C2"} The map $*$ is an $R$-linear anti-involution of $A$ such that $$(C_{S,T}^{\lambda})^*=C_{T,S}^{\lambda}.$$ [\[item:C3\]]{#item:C3 label="item:C3"} If $\lambda \in \Lambda$ and $S,T \in M(\lambda)$ then for any element $a \in A$ we have $$a C_{S,T}^{\lambda} \equiv \sum_{S' \in M(\lambda)} r_a(S',S) C_{S',T}^{\lambda} (\mathrm{mod} A(< \lambda)),$$ where $r_a(S',S) \in R$ is independent of $T$ and where $A(< \lambda)$ is the $R$-submodule of $A$ generated by the elements $\{C_{A,B}^{\mu} \lvert A,B \in M(\mu), \mu < \lambda\}$. 
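Before turning to the generalisation, it may help to have a very small example of a cell datum in hand; the following toy example is ours and is included only for orientation. Let $A = R[x]/(x^2)$, let $\Lambda = \{\lambda_1 < \lambda_2\}$, let each $M(\lambda_i)$ be a one-element set $\{*\}$, let $*$ be the identity map of $A$, and set $$C_{*,*}^{\lambda_1} = x, \qquad C_{*,*}^{\lambda_2} = 1.$$ Then $\{x,1\}$ is an $R$-basis of $A$, and the identity is an $R$-linear anti-involution fixing both basis elements. For $a = \alpha + \beta x$ we have $a \cdot x = \alpha x$, and $a \cdot 1 \equiv \alpha \cdot 1$ modulo $A(<\lambda_2) = Rx$, so Axiom [\[item:C3\]](#item:C3){reference-type="ref" reference="item:C3"} holds (the coefficient $\alpha$ is trivially independent of $T$, each $M(\lambda)$ being a singleton), and $A$ is cellular. Note that $C_{*,*}^{\lambda_1}C_{*,*}^{\lambda_1} = x^2 = 0$, so the bilinear form $\phi_{\lambda_1}$ of Graham and Lehrer vanishes identically, consistent with the fact that $A$ is not semisimple when $R$ is a field.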
## Naive-cellular algebras Our main class of algebra will be defined as follows. **Definition 6**. A *naive-cellular algebra* over $R$ is an associative unital algebra $A$, together with *naive-cellular datum* $(\Lambda, G, M, C, *)$, where [\[item:W1\]]{#item:W1 label="item:W1"} $\Lambda$ is a poset. For each $\lambda \in \Lambda$, $G(\lambda)$ is a group, and $M(\lambda)$ is a finite set (the set of *link states of type $\lambda$*) such that $$\begin{aligned} \coprod_{\lambda \in \Lambda} M(\lambda) \times G(\lambda) \times M(\lambda) & \to A \\ (p,\sigma,q) & \mapsto C_{p,q}^{\sigma} \end{aligned}$$ is an injective map with image an $R$-basis of $A$. [\[item:W2\]]{#item:W2 label="item:W2"} The map $*$ is an $R$-linear anti-involution of $A$ such that $$(C_{p,q}^{\sigma})^*=C_{q,p}^{\sigma^{-1}}.$$ [\[item:W3\]]{#item:W3 label="item:W3"} If $\sigma \in G(\lambda)$ and $p,q \in M(\lambda)$ then for any element $a \in A$ we have $$a C_{p,q}^{\sigma} \equiv \sum_{\substack{\sigma' \in G(\lambda) \\ p' \in M(\lambda) }} r_a(p',\sigma'\sigma^{-1},p) C_{p',q}^{\sigma'} (\mathrm{mod} I_{< \lambda}),$$ where $r_a(p',\sigma'\sigma^{-1},p) \in R$ depends only on $a$, $p'$, $p$, and the product $\sigma'\sigma^{-1}$, and where $I_{< \lambda}$ is the $R$-submodule of $A$ generated by the elements $\{C_{p'',q''}^{\tau} \lvert \mu < \lambda, \tau \in G(\mu), p'',q'' \in M(\mu)\}$. *Remark 7*. Much as for cellular algebras, applying the involution $*$ to the equation [\[item:W3\]](#item:W3){reference-type="ref" reference="item:W3"}, we get, for any $a \in A$, $\lambda \in \Lambda$, $\sigma \in G(\lambda)$ and $p,q \in M(\lambda)$, $$C_{q,p}^{\sigma^{-1}}a^* \equiv \sum_{\substack{\sigma' \in G(\lambda) \\ p' \in M(\lambda)}} r_a(p',\sigma'\sigma^{-1},p)C_{q,p'}^{(\sigma')^{-1}} (\mathrm{mod} I_{< \lambda}).$$ Here we have used the fact that Axiom [\[item:W2\]](#item:W2){reference-type="ref" reference="item:W2"} implies that $I_{< \lambda}^*=I_{< \lambda}$. For later use, we record the more useful form of this equation obtained by changing variables: $$C_{p,q}^{\tau}b \equiv \sum_{\substack{\tau' \in G(\lambda) \\ q' \in M(\lambda)}} r_{b^*}(q',(\tau')^{-1}\tau,q)C_{p,q'}^{\tau'} (\mathrm{mod} I_{< \lambda}).$$ It follows immediately from the definitions that naive-cellular algebras do indeed generalise cellular algebras: **Proposition 8**. *The map $$(\Lambda, M, C, *) \mapsto (\Lambda, 1, M, C, *),$$ given by assigning the trivial group to each $\lambda \in \Lambda$, carries a cell datum to a naive-cellular datum, and identifies cellular algebras with those naive-cellular algebras where each group $G(\lambda)$ is the trivial group. 0◻* ## Diagram-like algebras **Definition 9**. A *diagram-like algebra* over $R$ is defined identically to a naive-cellular algebra, except that instead of Axiom [\[item:W3\]](#item:W3){reference-type="ref" reference="item:W3"} of that definition, we require the following condition on the product of two basis elements: For any $\lambda_1, \lambda_2 \in \Lambda$, $\sigma_1 \in G(\lambda_1), \sigma_2 \in G(\lambda_2)$, $p_1, q_1 \in M(\lambda_1)$ and $p_2,q_2 \in M(\lambda_2)$, we have: $$C_{p_1,q_1}^{\sigma_1} C_{p_2,q_2}^{\sigma_2} = \kappa C_{p,q}^{\sigma},$$ where the objects ($\kappa,p,q,\sigma$) on the right are implicitly functions of those on the left, such that (letting $\lambda \in \Lambda$ be the element corresponding to $C_{p,q}^{\sigma}$) we have: 1. [\[item:dl1\]]{#item:dl1 label="item:dl1"} $\lambda \leq \lambda_1$, and $\lambda \leq \lambda_2$ 2. 
[\[item:dl2\]]{#item:dl2 label="item:dl2"} $\lambda$ and $\kappa$ depend only on $q_1$ and $p_2$ 3. [\[item:dl3\]]{#item:dl3 label="item:dl3"} $\sigma$ depends only on $\sigma_1, q_1, p_2,$ and $\sigma_2$ 4. [\[item:dl4\]]{#item:dl4 label="item:dl4"} $p$ depends only on $p_1,\sigma_1,q_1,$ and $p_2$. 5. [\[item:dl5\]]{#item:dl5 label="item:dl5"} If $\lambda = \lambda_2$, then $q = q_2$, and $\sigma \sigma_2^{-1}$ depends only on $\sigma_1, q_1,$ and $p_2$. We will call Conditions ([\[item:dl1\]](#item:dl1){reference-type="ref" reference="item:dl1"}) - ([\[item:dl5\]](#item:dl5){reference-type="ref" reference="item:dl5"}) the *dependency conditions*. **Proposition 10**. *A diagram-like algebra is in particular a naive-cellular algebra.* *Proof.* The two definitions differ only in the replacement of the third axiom ([\[item:W3\]](#item:W3){reference-type="ref" reference="item:W3"}) defining a naive-cellular algebra by the condition of Definition [Definition 9](#def:dgrmLike){reference-type="ref" reference="def:dgrmLike"}. It therefore suffices to show that, together with the first two axioms defining a naive-cellular algebra, this condition implies the third axiom. The algebra multiplication is $R$-bilinear, so in particular it is $R$-linear in the first variable, and $a \in A$ is of course uniquely expressible in the basis. It therefore suffices to verify this third axiom in the case that $a = C_{p_1,q_1}^{\sigma_1}$ is a basis element. Thus, consider a product $a C_{p_2,q_2}^{\sigma_2} = C_{p_1,q_1}^{\sigma_1} C_{p_2,q_2}^{\sigma_2}$. The idea is to transform the problem into one about indicator functions. Definition [Definition 9](#def:dgrmLike){reference-type="ref" reference="def:dgrmLike"} says first of all that $$a C_{p_2,q_2}^{\sigma_2} = C_{p_1,q_1}^{\sigma_1} C_{p_2,q_2}^{\sigma_2} = \kappa C_{p,q}^{\sigma}.$$ By Dependency Condition ([\[item:dl1\]](#item:dl1){reference-type="ref" reference="item:dl1"}) of the condition, the product lies in $I_{\leq \lambda_2}$, so the above equation may be rewritten as $$C_{p_1,q_1}^{\sigma_1} C_{p_2,q_2}^{\sigma_2} = \sum_{\lambda' \leq \lambda_2} \sum_{\substack{\sigma' \in G(\lambda') \\ p' \in M(\lambda')}} \kappa(q_1,p_2)\chi_{\sigma}(\sigma')\chi_p(p') C_{p',q_2}^{\sigma'}$$ where $\chi_\sigma$ and $\chi_p$ are the indicator functions of $\sigma$ and $p$ respectively. By Dependency Condition ([\[item:dl2\]](#item:dl2){reference-type="ref" reference="item:dl2"}), there exists a function $\overline{\kappa}:M(\lambda_1) \times M(\lambda_2) \to R$ defined by setting $\overline{\kappa}(q_1,p_2) = \begin{cases} \kappa & \lambda = \lambda_2 \\ 0 & \lambda < \lambda_2. \end{cases}$ Modulo $I(< \lambda_2)$, we may therefore write: $$a C_{p_2,q_2}^{\sigma_2} = C_{p_1,q_1}^{\sigma_1} C_{p_2,q_2}^{\sigma_2} \equiv \sum_{\substack{\sigma' \in G(\lambda_2) \\ p' \in M(\lambda_2)}} \overline{\kappa}(q_1,p_2)\chi_{\sigma}(\sigma')\chi_p(p') C_{p',q_2}^{\sigma'},$$ and what must be established is that this is of the correct form for Axiom [\[item:W3\]](#item:W3){reference-type="ref" reference="item:W3"}, namely that $\overline{\kappa}(q_1,p_2)\chi_{\sigma}(\sigma')\chi_p(p') \in R$ depends only on $a$, $\sigma'\sigma_2^{-1},$ $p_2$, and $p'$. It suffices to verify this factor-by factor. First $\overline{\kappa}(q_1,p_2)$ depends only on $q_1$ and $p_2$ (Dependency Condition ([\[item:dl2\]](#item:dl2){reference-type="ref" reference="item:dl2"})) which is to say only on $a$ and $p_2$, as required. 
This implies the result when $\overline{\kappa}(q_1,p_2) = 0$, so it suffices to establish it in the case $\overline{\kappa}(q_1,p_2) \neq 0$, which means we may henceforth assume $\lambda=\lambda_2$. On the face of it, $\chi_{\sigma}(\sigma')$ depends on $\sigma$ and $\sigma'$, but $\sigma$ is implicitly (Dependency Condition ([\[item:dl3\]](#item:dl3){reference-type="ref" reference="item:dl3"})) a function of $a$, $p_2$, and $\sigma_2$. We must show that this reduces the dependency of $\chi_{\sigma}(\sigma')$ on $\sigma$ and $\sigma'$ to a dependency on $a$, $p_2$, and $\sigma'\sigma_2^{-1}$. Using the fact that $\chi_{\sigma}$ is an indicator function on a group, we get $$\chi_{\sigma}(\sigma') = \chi_{\sigma}(\sigma'\sigma_2^{-1}\sigma_2) = \chi_{\sigma \sigma_2^{-1}}(\sigma'\sigma_2^{-1}).$$ By Dependency Condition ([\[item:dl5\]](#item:dl5){reference-type="ref" reference="item:dl5"}), $\sigma \sigma_2^{-1}$ depends only on $a$ and $p_2$, so, $\chi_{\sigma}(\sigma')$ depends only on $a$, $p_2$, and $\sigma'\sigma_2^{-1}$, as required. Dependency Condition ([\[item:dl4\]](#item:dl4){reference-type="ref" reference="item:dl4"}) then immediately gives that $\chi_p(p')$ depends only on $a$, $p_2$, and $p'$, as required. ◻ ## Restriction of link states and groups The following ideas will be useful for verifying examples. **Definition 11**. Let $A$ be a diagram-like algebra, with naive-cellular datum $(\Lambda,G,M,C,*)$. A subalgebra $A'$ of $A$ will be said to be *obtained by restriction* if for each $\lambda$ in $\Lambda$ there exists a (perhaps empty) subset $M'(\lambda)$ of $M(\lambda)$ and a subgroup $G'(\lambda)$ of $G(\lambda)$ such that $$\begin{aligned} \coprod_{\lambda \in \Lambda} M'(\lambda) \times G'(\lambda) \times M'(\lambda) & \to A \\ (p,\sigma,q) & \mapsto C_{p,q}^{\sigma}\end{aligned}$$ has image an $R$-basis of $A'$. If $G'(\lambda)=G(\lambda)$ for each $\lambda$, that is, if we have only restricted the set of available link states, then we will say that $A'$ is obtained from $A$ *by restriction of link states*. Similarly, if $M'(\lambda)=M(\lambda)$ for each $\lambda$ then we will say that $A'$ is obtained from $A$ *by restriction of groups*. **Proposition 12**. *Let $A$ be a diagram-like algebra, with naive-cellular datum $(\Lambda,G,M,C,*)$. Let $A' \subset A$ be a subalgebra. If $A'$ is obtained by restriction in the sense of Definition [Definition 11](#def:restriction){reference-type="ref" reference="def:restriction"}, then $A'$ is a diagram-like algebra, with naive-cellular datum $(\Lambda\lvert_{M'},G',M',C,*)$, where $$\Lambda\lvert_{M'} = \{\lambda \in \Lambda \mid M'(\lambda) \neq \emptyset\},$$ and we abuse notation to identify $G',C$, and $*$ with appropriate restrictions.* Note that we are taking it as given that $A'$ is closed under the multiplication in $A$, but not necessarily that it is closed under the involution $*$. *Proof.* Either $\Lambda\lvert_{M'}$ is empty, in which case $A'=0$ and we are done, or it is non-empty, and inherits a poset structure as a subset of $\Lambda$. We must verify Axioms [\[item:W1\]](#item:W1){reference-type="ref" reference="item:W1"} and [\[item:W2\]](#item:W2){reference-type="ref" reference="item:W2"} of Definition [Definition 6](#algDef){reference-type="ref" reference="algDef"}, as well as the dependency conditions of Definition [Definition 9](#def:dgrmLike){reference-type="ref" reference="def:dgrmLike"}. 
Axiom [\[item:W1\]](#item:W1){reference-type="ref" reference="item:W1"} follows immediately from the definition of restriction (Definition [Definition 11](#def:restriction){reference-type="ref" reference="def:restriction"}). Axiom [\[item:W2\]](#item:W2){reference-type="ref" reference="item:W2"} holds (in the sense that the involution $*$ preserves $A'$), since the basis $C_{p,q}^{\sigma}$ of $A'$ is closed under interchanging $p$ and $q$, and under inversion of $\sigma$ (since each $G'(\lambda)$ is a subgroup of $G(\lambda)$). We have assumed that $A'$ is multiplicatively closed, and the dependency conditions hold in $A'$ since they held already in $A$. This completes the proof. ◻ # Examples {#section:Ex} We will use the 'priming/unpriming' convention of [@Boyde2 Definition 3.1]. The sets that we will use as vertex labels will consist of a positive integer, together with some number of superscript 'prime' markings. In particular, we write $\underline{n}$ for the set of symbols $\{1,2,\dots , n\}$, and $\underline{n}'$ for the set $\{1',2', \dots, n'\}$. *Example 13*. Let $R$ be a commutative ring with unit, and let $\delta \in R$. We take the *Brauer algebra* $\mathrm{Br}_n(\delta)$ [@Brauer] to be defined as having an $R$-basis consisting of partitions of the set $\underline{n} \cup \underline{n}'$ where each part contains exactly two elements. As is standard (see e.g. [@GrahamLehrer] or [@BHP]), we will think of these basis elements as diagrams on the vertex set $\underline{n} \cup \underline{n}'$, where an edge is drawn between pairs of vertices which lie in the same part of the partition. The multiplication is given by 'concatenation of diagrams and replacement of loops by factors of $\delta$' (c.f. [@GrahamLehrer; @BHP]). *Remark 14*. Graham and Lehrer show [@GrahamLehrer Theorem 4.10] that the Brauer algebra $\mathrm{Br}_n(\delta)$ is cellular. It then follows from Proposition [Proposition 8](#prop:cellIsWossit){reference-type="ref" reference="prop:cellIsWossit"} that $\mathrm{Br}_n(\delta)$ is naive-cellular. We will work with a more elementary naive-cellular structure. For the reader familiar with [@GrahamLehrer], our basis elements $C_{p,q}^{\sigma}$ are what Graham and Lehrer would denote $[p,q,\sigma]$ (though they prefer to use different letters, as in $[S,T,w]$). **Definition 15**. Formally, we assign a datum $(\Lambda,G,M,C,*)$ (which is independent of $\delta$, and will be seen to make $\mathrm{Br}_n(\delta)$ into a diagram-like algebra) as follows: - $\mathbf{\Lambda:}$ Let $\Lambda = \{t \in \underline{n}_0 \mid n-t \textrm{ is even} \}$. - $\mathbf{G:}$ For each $t \in \Lambda$ with $t \geq 1$, let $G(t)=\Sigma_t$ be the symmetric group on $t$ letters. We regard $\Sigma_0$ as the trivial group, i.e. as self-bijections of the empty set $\emptyset$. - $\mathbf{M:}$ Let $M(t)$ be the *Brauer link states on $\underline{n}$ with $t$ defects*: these are partitions of $\underline{n}$ into parts of cardinality either 1 or 2, exactly $t$ of which have cardinality 1. The parts of cardinality 1 (which may be identified with their single element) will be called *defects* or *defect vertices*. - $\mathbf{C:}$ Let $t \in \Lambda$. For $\sigma \in \Sigma_t$ and $p,q \in M(t)$, the basis element $C_{p,q}^\sigma$ is the unique partition of $\underline{n} \cup \underline{n}'= \{1, \dots, n, 1', \dots, n'\}$ such that: - The restriction of $C_{p,q}^\sigma$ to $\underline{n} = \{1, \dots , n\}$ is $p$.
- The restriction of $C_{p,q}^\sigma$ to $\underline{n}' = \{1', \dots n'\}$ is the partition $q'$ which corresponds to $q$ under the priming/unpriming bijection $\{1, \dots , n\} \cong \{1', \dots, n'\}$. - If $t=0$, then ($G(t)$ is the trivial group, and) $C_{p,q}^1$ has no parts containing both an element of $\underline{n}$ and $\underline{n}'$. - If $t \geq 1$, then the partition $C_{p,q}^\sigma$ will have $t$ parts containing both an element of $\underline{n}$ and an element of $\underline{n}'$, as follows. The $t$ defects in $p$ are a well-ordered subset of $\underline{n}$, and this defines a bijection with $\underline{t}$. The vertex mapped to $i \in \underline{t}$ by this correspondence will be called the *$i$-th defect*. Similarly, we put the parts of $q'$ having a defect in bijection with $\underline{t}'$. The partition $C_{p,q}^\sigma$ consists of the cardinality 2 parts of $p$ and $q'$, together with the unions of the $i'$-th defect vertex of $q'$ with the $\sigma(i)$-th defect vertex of $p$ for each $i$. - $\mathbf{*:}$ The anti-involution $*$ is the unique bijection which interchanges $\underline{n}$ and $\underline{n}'$, and preserves the order of each of these subsets individually. The following definition will be helpful. **Definition 16**. Let $q,p$ be Brauer link states with $t_1$ and $t_2$ defects respectively. The *pair graph* $\Gamma_{\langle q,p \rangle}$ is the graph with vertices $\underline{n}$, an edge (thought of as 'on the left') connecting each pair of vertices which are connected in $q$, and an edge (thought of as 'on the right') connecting each pair of vertices which are connected in $p$. The *pair set* $S_{\langle q,p \rangle}$ is the subset of the path components $\pi_0(\Gamma_{\langle q,p \rangle})$ given by the intersection of the images of the maps $\underline{t}_1 \to \Gamma_{\langle q,p \rangle} \to \pi_0(\Gamma_{\langle q,p \rangle})$ and $\underline{t}_2 \to \Gamma_{\langle q,p \rangle} \to \pi_0(\Gamma_{\langle q,p \rangle})$ marking the defects of $q$ and of $p$ respectively. In other words, $S_{\langle q,p \rangle}$ is the set of path components in the pair graph which contain a defect vertex from $q$ and a defect vertex from $p$. *Example 17*. (The diagrams for this example are omitted.) For $n=11$, the original figures depict a link state $q \in M(3)$, a link state $p \in M(1)$, the resulting pair graph $\Gamma_{\langle q,p \rangle}$, and the single (dashed) path component which makes up $S_{\langle q,p \rangle} \subset \pi_0(\Gamma_{\langle q,p \rangle})$. **Proposition 18**. *With the datum of Definition [Definition 15](#def:BrauerDGL){reference-type="ref" reference="def:BrauerDGL"}, the Brauer algebra $\mathrm{Br}_n(\delta)$ is diagram-like. In particular, given elements $C_{p_1,q_1}^{\sigma_1}$ associated to $t_1 \in \Lambda$ and $C_{p_2,q_2}^{\sigma_2}$ associated to $t_2 \in \Lambda$, we have $$C_{p_1,q_1}^{\sigma_1} C_{p_2,q_2}^{\sigma_2} = \delta^i C_{p,q}^{\sigma},$$ where* - *$i$ is the number of loops in the pair graph $\Gamma_{\langle q_1, p_2 \rangle}$, and* - *the element $t \in \Lambda$ associated to the product $C_{p,q}^{\sigma}$ is equal to $t_2$ if and only if the cardinality of the pair-set $S_{\langle q_1, p_2 \rangle}$ is $t_2$.* *Proof.* The ingredients we need are all proven by Graham and Lehrer (in different notation). They show [@GrahamLehrer Proposition 4.4] that the $C_{p,q}^{\sigma}$ do indeed form an $R$-module basis of $\mathrm{Br}_n(\delta)$ (which is the canonical basis used to define $\mathrm{Br}_n(\delta)$), verifying Condition ([\[item:W1\]](#item:W1){reference-type="ref" reference="item:W1"}) of Definition [Definition 6](#algDef){reference-type="ref" reference="algDef"}.
The first bullet point above then follows immediately from the definition of the Brauer algebras. Likewise, Condition ([\[item:W2\]](#item:W2){reference-type="ref" reference="item:W2"}) follows immediately from the definitions, and the dependency conditions of Definition [Definition 9](#def:dgrmLike){reference-type="ref" reference="def:dgrmLike"} are verified by Graham and Lehrer in different notation [@GrahamLehrer Proposition 4.7]. Precisely, it follows from the definition of the Brauer algebra that a product of two basis elements takes the form: $$C_{p_1,q_1}^{\sigma_1} C_{p_2,q_2}^{\sigma_2} = \kappa C_{p,q}^{\sigma},$$ and - Conditions ([\[item:dl1\]](#item:dl1){reference-type="ref" reference="item:dl1"}) and ([\[item:dl2\]](#item:dl2){reference-type="ref" reference="item:dl2"}) follow from [@GrahamLehrer Remark 4.6iii]. - Conditions ([\[item:dl3\]](#item:dl3){reference-type="ref" reference="item:dl3"}) and ([\[item:dl4\]](#item:dl4){reference-type="ref" reference="item:dl4"}) follow respectively from the third and second displayed equations in [@GrahamLehrer Proposition 4.7]. The second bullet point above follows likewise from the third equation. - Condition ([\[item:dl5\]](#item:dl5){reference-type="ref" reference="item:dl5"}) follows from the same two displayed equations: specifically, if $\lambda = \lambda_2$, then (in Graham and Lehrer's notation) the restriction functions become the identity, the second displayed equation becomes $$S_2'' = S_2',$$ (so $q = q_2$), and the third displayed equation becomes $$w'' = w \lvert_{T_{S_2}(S_2,S_1')} w(S_2,S_1') w',$$ so $$w'' (w')^{-1} = w \lvert_{T_{S_2}(S_2,S_1')} w(S_2,S_1'),$$ so $\sigma \sigma_2^{-1} = w'' (w')^{-1}$ depends only on $q_1=S_2$, $p_2=S_1'$, and $\sigma_1=w$, as required. This verifies the conditions of Definition [Definition 9](#def:dgrmLike){reference-type="ref" reference="def:dgrmLike"}, and completes the proof. ◻ Recall from e.g. [@Jones; @GrahamLehrer] that the Jones annular algebra $\mathrm{J}_n(\delta)$ is the subalgebra of $\mathrm{Br}_n(\delta)$ spanned by diagrams 'which can be embedded on the cylinder without overlapping edges'. We will freely conflate the vertex set $\underline{n}$ with the cyclic group $C_n$. The following definition is due to Graham and Lehrer [@GrahamLehrer Lemma 6.12]. **Definition 19**. Let $p$ be a Brauer link state, that is to say, a partition of $\underline{n}$ into subsets of cardinality 1 or 2, and recall that the defect parts are precisely those of cardinality 1. We will say that $p$ is *annular* if whenever $i$ and $j$ are connected by an edge in $p$, we have: - No edge in $p$ connects a vertex from the cyclic interval $(i,j)$ to one from $(j,i)$. In terms of partitions, $(i,j)$ and $(j,i)$ are unions of parts of $p$. - Either all defects of $p$ are contained in $(i,j)$, or all defects of $p$ are contained in $(j,i)$. *Example 20*. As in [@Boyde2], by the *cyclic interval* $[i,j]$ in the cyclic group $C_n$ we mean the set $\{i,i+1,i+2, \dots, j \}$. Open cyclic intervals are defined similarly. For example, we then have that $[i,j]$ and $(j,i)$ are disjoint, and that their union is $C_n$. In what follows, our presentation will differ from Graham and Lehrer's because we speak in cyclic intervals.
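To illustrate Definition [Definition 19](#def:AnnularLS){reference-type="ref" reference="def:AnnularLS"}, here are three small examples of our own, for $n=4$. The link state $\{\{1,2\},\{3\},\{4\}\}$ is annular: the only edge is $\{1,2\}$, the open cyclic interval $(1,2)$ is empty, and both defects $3$ and $4$ lie in $(2,1)$. The link state $\{\{1,3\},\{2,4\}\}$ is not annular, since the edge $\{2,4\}$ connects the vertex $2 \in (1,3)$ to the vertex $4 \in (3,1)$, violating the first condition. The link state $\{\{1,3\},\{2\},\{4\}\}$ is also not annular: no edge crosses the edge $\{1,3\}$, but the defects $2$ and $4$ lie in $(1,3)$ and $(3,1)$ respectively, violating the second condition.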
Graham and Lehrer [@GrahamLehrer Proposition 6.14] show (with different terminology) that annular partitions are precisely those $\rho = C_{p,q}^{\sigma}$ (corresponding to $t \in \Lambda$) for which: - $\sigma \in C_t \leq \Sigma_t$ is an element of the cyclic group of order $t$, and - $p$ and $q$ are annular link states in the sense of Definition [Definition 19](#def:AnnularLS){reference-type="ref" reference="def:AnnularLS"}. These results of Graham and Lehrer show that $\mathrm{J}_n(\delta)$ is obtained from $\mathrm{Br}_n(\delta)$ by restriction (Definition [Definition 11](#def:restriction){reference-type="ref" reference="def:restriction"}) of both link states and groups. It follows by Proposition [Proposition 12](#prop:restriction){reference-type="ref" reference="prop:restriction"} that $\mathrm{J}_n(\delta)$ is a diagram-like algebra. Its naive-cellular datum coincides with that of $\mathrm{Br}_n(\delta)$ except that: - For $t \in \Lambda = \{t \in \underline{n}_0 \mid n-t \textrm{ is even} \}$, we set $G(t) = C_t$, the cyclic group of order $t$ (with the convention that $C_0$ is the trivial group). - For $t \in \Lambda$, $M(t)$ is the set of annular link states with $t$ defects. Again, the following definition is due to Graham and Lehrer [@GrahamLehrer Lemma 6.2]. **Definition 21**. We will say that a Brauer link state $p$ is *planar* if whenever $i<j$ and $i,j$ are connected in $p$, we have: - If $k, \ell$ lie in the same part of $p$ and $k$ lies in the closed interval $[i,j]$, then $\ell$ also lies in $[i,j]$. In terms of partitions, $[i,j]$ is a union of parts of $p$, and - the closed interval $[i,j]$ contains no defects. *Example 22*. We take the *Temperley-Lieb algebra* $\mathrm{TL}_n(\delta)$ to be a subalgebra of $\mathrm{Br}_n(\delta)$, defined identically to the Jones annular algebra, except that diagrams are now required to have planar representatives in the square $[0,1] \times [0,1]$ when $i$ is embedded as $(\frac{i-1}{n-1},0)$ and $i'$ is embedded as $(\frac{i-1}{n-1},1)$. These algebras were originally defined by Temperley and Lieb [@TemperleyLieb], and their diagrammatic interpretation is due to Kauffman [@Kauffman]. Graham and Lehrer show [@GrahamLehrer Proposition 6.5ii] that the Temperley-Lieb algebra has a basis consisting of those partitions $\rho = C_{p,q}^{\sigma}$ for which $\sigma = \mathrm{id}$ is the identity permutation, and $p$ and $q$ are *planar* link states. As usual, this implies that $\mathrm{TL}_n(\delta)$ is diagram-like (obtained by restriction from $\mathrm{J}_n(\delta)$), with diagram-like data which coincides with that of $\mathrm{Br}_n(\delta)$ except that: - Each $G(t)$ is trivial, and - $M(t)$ is the set of planar link states with $t$ defects. # Basic properties {#section:BasicProperties} ## Twosided Ideals Throughout this subsection, to clean up the statements, $A$ will be a fixed naive-cellular algebra, with naive-cellular datum $(\Lambda, G, M, C, *)$. **Definition 23**. Let $X \subset \Lambda$. Define $$I_X := \mathrm{Span}_R\{C_{p,q}^{\sigma} \mid \mu \in X, \sigma \in G(\mu), p,q \in M(\mu)\}.$$ In particular, given $\lambda \in \Lambda$, write $$I_{\leq \lambda} := \mathrm{Span}_R\{C_{p,q}^{\sigma} \mid \mu \leq \lambda, \sigma \in G(\mu), p,q \in M(\mu)\},$$ and $$I_{< \lambda} := \mathrm{Span}_R\{C_{p,q}^{\sigma} \mid \mu < \lambda, \sigma \in G(\mu), p,q \in M(\mu)\}.$$ The following lemma is an immediate consequence of Axiom [\[item:W2\]](#item:W2){reference-type="ref" reference="item:W2"}. **Lemma 24**.
*Let $X \subset \Lambda$ be any subset. Then $(I_X)^*=I_X$. ◻* A free $R$-module modulo the span of a subset of a basis is again a free $R$-module on the complement of this subset. In our case, we have: **Lemma 25**. *Let $X \subset \Lambda$ be any subset, and let $\pi_X : A \to \faktor{A}{I_X}$ denote the projection. Then $\faktor{A}{I_X}$ is a free $R$-module with basis $$\pushQED{\qed} \{\pi_X(C_{p,q}^{\sigma}) \mid \mu \not\in X, \sigma \in G(\mu), p,q \in M(\mu)\}. \qedhere \popQED$$* Recall that a subset $X$ of a poset $\Lambda$ is said to be downward closed if whenever $\mu \leq \lambda$ and $\lambda \in X$ then $\mu \in X$. **Lemma 26**. *Let $X \subset \Lambda$ be downward closed. Then $I_X$ is a twosided ideal of $A$. In particular, for each $\lambda \in \Lambda$, both $I_{\leq \lambda}$ and $I_{< \lambda}$ are twosided ideals.* *Proof.* Since $X \subset \Lambda$ is downward closed, we have that if $\mu \in X$ then $I_{< \mu} \subset I_X$. The result then follows from Axiom [\[item:W3\]](#item:W3){reference-type="ref" reference="item:W3"} and its dual, Remark [Remark 7](#rmk:W3){reference-type="ref" reference="rmk:W3"}. ◻ If $I_X$ is a twosided ideal of $A$, then the quotient $\faktor{A}{I_X}$ is again an $R$-algebra. In fact, it is again a naive-cellular algebra: **Proposition 27**. *Let $X \subset \Lambda$ be a downward closed proper subset. Then $\faktor{A}{I_X}$ is a naive-cellular algebra with naive-cellular datum $$(\Lambda \setminus X, G, M, \pi_X \circ C, *),$$ where $\pi_X$ is the projection $A \to \faktor{A}{I_X}$, and we abuse notation by still writing $G$, $M$, and $C$ for the restrictions of those assignments to $\Lambda \setminus X$, and by identifying $*$ with the induced map on the quotient.* *Proof.* Since $X$ is downward closed, $I_X$ is a twosided ideal, so $\faktor{A}{I_X}$ is again an associative $R$-algebra. Since $X$ is proper, $I_X$ is a proper ideal of $A$, so $\faktor{A}{I_X}$ is unital. The set $\Lambda \setminus X$ is a poset by restricting the order. The functions $G$ and $M$ still assign a group and a set respectively to each $\lambda \in \Lambda \setminus X$, and $\pi_X \circ C$ assigns an element of $\faktor{A}{I_X}$ to each triple $(p, \sigma, q)$ in $\coprod_{\lambda \in \Lambda \setminus X} M(\lambda) \times G(\lambda) \times M(\lambda)$. The involution $*$ descends to the quotient by Lemma [Lemma 24](#lem:IStar){reference-type="ref" reference="lem:IStar"}. This establishes that the datum is of the correct form, and we must now check that it satisfies the axioms. Axiom [\[item:W1\]](#item:W1){reference-type="ref" reference="item:W1"} follows from Lemma [Lemma 25](#lem:quotientBasis){reference-type="ref" reference="lem:quotientBasis"}. Axiom [\[item:W2\]](#item:W2){reference-type="ref" reference="item:W2"} is automatic. Axiom [\[item:W3\]](#item:W3){reference-type="ref" reference="item:W3"} is essentially the correspondence theorem: note that for $\lambda \in \Lambda \setminus X$, the correspondence theorem identifies the ideal $\pi_X(I_{< \lambda})$ of $\faktor{A}{I_X}$ with the ideal $I_{< \lambda} + I_X$ of $A$. This latter ideal certainly contains $I_{< \lambda}$, so Axiom [\[item:W3\]](#item:W3){reference-type="ref" reference="item:W3"} for $\faktor{A}{I_X}$ is implied by Axiom [\[item:W3\]](#item:W3){reference-type="ref" reference="item:W3"} for $A$. ◻ *Remark 28*.
We will often abuse notation by writing $C_{p,q}^{\sigma}$ for $\pi_X(C_{p,q}^{\sigma})$, which is to say we will identify the basis of $\faktor{A}{I_X}$ with the appropriate subset of the basis of $A$. If $\lambda$ is a minimal element of $\Lambda \setminus X$, then $I_{< \lambda} \subset I_X$. This implies the following refinement of Axiom [\[item:W3\]](#item:W3){reference-type="ref" reference="item:W3"}, and its dual, Remark [Remark 7](#rmk:W3){reference-type="ref" reference="rmk:W3"}, in the quotient $\faktor{A}{I_X}$. **Proposition 29**. *Let $X$ be a downward closed subset of $\Lambda$, and consider the quotient algebra $\faktor{A}{I_X}$. For any minimal element $\lambda$ of $\Lambda \setminus X$, any $\sigma \in G(\lambda)$, any $p,q \in M(\lambda)$, and any $a \in \faktor{A}{I_X}$, we have the equalities $$a C_{p,q}^{\sigma} = \sum_{\substack{\sigma' \in G(\lambda) \\ p' \in M(\lambda) }} r_a(p',\sigma'\sigma^{-1},p) C_{p',q}^{\sigma'}$$ and $$C_{p,q}^{\sigma}a = \sum_{\substack{\sigma' \in G(\lambda) \\ q' \in M(\lambda)}} r_{a^*}(q',(\sigma')^{-1}\sigma,q)C_{p,q'}^{\sigma'}$$ in $\faktor{A}{I_{X}}$. ◻* The following two lemmas are elementary. **Lemma 30**. *If $X$ is a downward closed subset of a poset $\Lambda$, and $\lambda$ is a minimal element of $\Lambda \setminus X$, then $X \cup \{ \lambda \}$ is again downward closed. ◻* **Lemma 31**. *If $Y \subset X$ are downward closed subsets of a poset $\Lambda$, and $\lambda$ is minimal in $X \setminus Y$, then $Y \cup \{\lambda\}$ is again downward closed. ◻* ## The product of two basis elements For any minimal element $\lambda$ of $\Lambda \setminus X$, consider the product of two basis elements $C_{p_1,q_1}^{\sigma_1} C_{p_2,q_2}^{\sigma_2}$ using the formulae of Proposition [Proposition 29](#prop:quotientAx3){reference-type="ref" reference="prop:quotientAx3"} (with $\sigma_1, \sigma_2 \in G(\lambda)$, and any $p_1,p_2,q_1,q_2 \in M(\lambda)$). The first formula shows that only basis elements of the form $C_{p',q_2}^{\sigma'}$ may appear with nonzero coefficient, and the second shows that only basis elements of the form $C_{p_1,q'}^{\sigma'}$ may appear with nonzero coefficient, so in fact only basis elements of the form $C_{p_1,q_2}^{\sigma'}$ may appear. We therefore have: $$\begin{aligned} C_{p_1,q_1}^{\sigma_1} C_{p_2,q_2}^{\sigma_2} & = \sum_{\substack{\sigma' \in G(\lambda)}} r_{C_{p_1,q_1}^{\sigma_1}}(p_1,\sigma'\sigma_2^{-1},p_2) C_{p_1,q_2}^{\sigma'} \\ & = \sum_{\substack{\sigma' \in G(\lambda)}} r_{C_{q_2,p_2}^{\sigma_2^{-1}}}(q_2,(\sigma')^{-1}\sigma_1,q_1)C_{p_1,q_2}^{\sigma'}\end{aligned}$$ Equating coefficients, we have $$r_{C_{p_1,q_1}^{\sigma_1}}(p_1,\sigma'\sigma_2^{-1},p_2) = r_{C_{q_2,p_2}^{\sigma_2^{-1}}}(q_2,(\sigma')^{-1}\sigma_1,q_1)$$ for each $\sigma' \in G(\lambda)$. In this equality, $p_1$ appears only on the left hand side, $q_2$ appears only on the right hand side, and the two appear in the same two entries. Thus, $r_{C_{p_1,q_1}^{\sigma_1}}(p_1,\sigma'\sigma_2^{-1},p_2)$ is independent of $p_1$, and the function $$s(\sigma_1,q_1,p_2,\tau) := r_{C_{p_1,q_1}^{\sigma_1}}(p_1,\tau,p_2) \in R$$ is well defined. We have established the following corollary of Proposition [Proposition 29](#prop:quotientAx3){reference-type="ref" reference="prop:quotientAx3"}. **Corollary 32**. *Let $X$ be a downward closed subset of $\Lambda$, and consider the quotient algebra $\faktor{A}{I_X}$.
For any minimal element $\lambda$ of $\Lambda \setminus X$, any $\sigma_1, \sigma_2 \in G(\lambda)$, and any $p_1,p_2,q_1,q_2 \in M(\lambda)$, we have $$\begin{aligned} C_{p_1,q_1}^{\sigma_1} C_{p_2,q_2}^{\sigma_2} & = \sum_{\substack{\sigma' \in G(\lambda)}} s(\sigma_1,q_1,p_2,\sigma'\sigma_2^{-1}) C_{p_1,q_2}^{\sigma'} \\ & = \sum_{\substack{\sigma' \in G(\lambda)}} s(\sigma_2^{-1},p_2,q_1,(\sigma')^{-1}\sigma_1) C_{p_1,q_2}^{\sigma'}\end{aligned}$$ in $\faktor{A}{I_{X}}$. In particular, for each $\sigma' \in G(\lambda)$ we have $$\pushQED{\qed} s(\sigma_1,q_1,p_2,\sigma'\sigma_2^{-1})=s(\sigma_2^{-1},p_2,q_1,(\sigma')^{-1}\sigma_1). \qedhere \popQED$$* ## Onesided ideals Again, in this subsection, $A$ will be a fixed naive-cellular algebra, with naive-cellular datum $(\Lambda, G, M, C, *)$. **Definition 33**. Let $\lambda \in \Lambda$, and let $q \in M(\lambda)$. Define $$J_q := \mathrm{Span}_R\{C_{p,q}^{\sigma} \mid \sigma \in G(\lambda), p \in M(\lambda)\} \subset A.$$ If $\lambda$ is minimal in $\Lambda \setminus X$ then $I_{<\lambda} \subset I_X$. Axiom [\[item:W3\]](#item:W3){reference-type="ref" reference="item:W3"} then implies the following lemma. **Lemma 34**. *If $X$ is a downward closed subset of $\Lambda$ (so that by Proposition [Proposition 27](#prop:quotientWossit){reference-type="ref" reference="prop:quotientWossit"}, $\faktor{A}{I_X}$ is again a naive-cellular algebra), and $\lambda$ is minimal in $\Lambda \setminus X$, then $\pi_X(J_q)$ is a left ideal of $\faktor{A}{I_X}$. ◻* Axiom [\[item:W1\]](#item:W1){reference-type="ref" reference="item:W1"} says that if $q \neq q'$ then the canonical bases of $J_q$ and $J_{q'}$ are disjoint subsets of the canonical basis of $A$. This gives the following structural property. **Lemma 35**. *Let $\lambda, \lambda' \in \Lambda$, $q \in M(\lambda)$, and $q' \in M(\lambda')$. If $q \neq q'$ then $J_q \cap J_{q'}=0$ in $A$. ◻[\[JInt\]]{#JInt label="JInt"}* From this lemma, we obtain the next one, which will be very important to us. **Lemma 36**. *If $X$ is a downward closed subset of $\Lambda$, and $\lambda$ is a minimal element of $\Lambda \setminus X$, then the inclusions $J_q \to I_{X \cup \{\lambda\}}$ induce an isomorphism $$\pi_X(I_{X \cup \{\lambda\}}) = \faktor{I_{X \cup \{\lambda\}}}{I_{X}} \cong \bigoplus_{q \in M(\lambda)} \pi_X(J_q)$$ of left $A$-modules.* *Proof.* By Lemma [Lemma 34](#lem:JIdeal){reference-type="ref" reference="lem:JIdeal"}, the quotients $\pi_X(J_q)$ are indeed left $A$-modules. By Lemma [Lemma 30](#lem:XcupLambda){reference-type="ref" reference="lem:XcupLambda"}, $X \cup \{ \lambda \}$ is again downwards closed, so by Lemma [Lemma 26](#lem:IIdeal){reference-type="ref" reference="lem:IIdeal"}, $I_{X \cup \{\lambda\}}$, hence the quotient, is a left (in fact twosided) $A$-module. The twosided ideal $I_X$ of $A$ is defined to be the $R$-span of the elements $C_{p,q}^{\sigma}$, for $\mu \in X$, $\sigma \in G(\mu)$, and $p,q \in M(\mu)$. It follows that $\faktor{I_{X \cup \{\lambda\}}}{I_{X}}$ is the $R$-span of the elements $C_{p,q}^{\sigma}$ for $\sigma \in G(\lambda)$ and $p,q \in M(\lambda)$, hence that every element of $\faktor{I_{X \cup \{\lambda\}}}{I_{X}}$ is an $R$-linear combination of elements drawn from the $J_q$, for $q \in M(\lambda)$: $$\faktor{I_{X \cup \{\lambda\}}}{I_{X}} \cong \sum_{q \in M(\lambda)} \pi_X(J_q).$$ By Lemma [Lemma 35](#lem:JDisjt){reference-type="ref" reference="lem:JDisjt"}, this sum is in fact direct. This completes the proof.
◻ ## Equivariance Let $R$ be a commutative ring, and let $A$ be a naive-cellular algebra over $R$, with naive-cellular datum $(\Lambda, G, M, C, *)$. We here give a perhaps more intuitive reformulation of the equivariance condition of Axiom [\[item:W3\]](#item:W3){reference-type="ref" reference="item:W3"}, and its dual, Remark [Remark 7](#rmk:W3){reference-type="ref" reference="rmk:W3"}. Let $\lambda \in \Lambda$, $\sigma, \tau \in G(\lambda)$, and $p,q \in M(\lambda)$. Then for any element $a \in A$, Axiom [\[item:W3\]](#item:W3){reference-type="ref" reference="item:W3"} gives $$a C_{p,q}^{\sigma \tau} \equiv \sum_{\substack{\sigma'' \in G(\lambda) \\ p' \in M(\lambda) }} r_a(p',\sigma''\tau^{-1}\sigma^{-1},p) C_{p',q}^{\sigma''} (\mathrm{mod} I_{< \lambda}),$$ and reindexing the sum via $\sigma'=\sigma'' \tau^{-1}$ gives $$\begin{aligned} a C_{p,q}^{\sigma \tau} & \equiv \sum_{\substack{\sigma' \in G(\lambda) \\ p' \in M(\lambda) }} r_a(p',(\sigma' \tau) \tau^{-1}\sigma^{-1},p) C_{p',q}^{\sigma' \tau} (\mathrm{mod} I_{< \lambda}) \\ & = \sum_{\substack{\sigma' \in G(\lambda) \\ p' \in M(\lambda) }} r_a(p',\sigma' \sigma^{-1},p) C_{p',q}^{\sigma' \tau} (\mathrm{mod} I_{< \lambda}).\end{aligned}$$ We record this congruence, and the dual obtained from Remark [Remark 7](#rmk:W3){reference-type="ref" reference="rmk:W3"}, as the next lemma. **Lemma 37**. *For any $\lambda \in \Lambda$, $\sigma, \tau \in G(\lambda)$ and $p,q \in M(\lambda)$, and any $a \in A$, we have $$a C_{p,q}^{\sigma \tau} \equiv \sum_{\substack{\sigma' \in G(\lambda) \\ p' \in M(\lambda) }} r_a(p',\sigma' \sigma^{-1},p) C_{p',q}^{\sigma' \tau} (\mathrm{mod} I_{< \lambda}),$$ and $$\pushQED{\qed} C_{p,q}^{\tau\sigma}a \equiv \sum_{\substack{\sigma' \in G(\lambda) \\ q' \in M(\lambda)}} r_{a^*}(q',(\sigma')^{-1}\sigma,q)C_{p,q'}^{\tau\sigma'} (\mathrm{mod} I_{< \lambda}). \qedhere \popQED$$* For a product of basis elements corresponding to the same $\lambda$, in terms of the function $s$ of Corollary [Corollary 32](#cor:basisMultFormula){reference-type="ref" reference="cor:basisMultFormula"}, we get: **Corollary 38**. *Let $X$ be a downward closed subset of $\Lambda$, and consider the quotient algebra $\faktor{A}{I_X}$. For any minimal element $\lambda$ of $\Lambda \setminus X$, any $\sigma_1, \sigma_2, \tau_1, \tau_2 \in G(\lambda)$, and any $p_1,p_2,q_1,q_2 \in M(\lambda)$, we have $$\begin{aligned} C_{p_1,q_1}^{\tau_1 \sigma_1} C_{p_2,q_2}^{\sigma_2 \tau_2} & = \sum_{\substack{\sigma' \in G(\lambda)}} s(\sigma_1,q_1,p_2,\sigma'\sigma_2^{-1}) C_{p_1,q_2}^{\tau_1 \sigma' \tau_2} \\ & = \sum_{\substack{\sigma' \in G(\lambda)}} s(\sigma_2^{-1},p_2,q_1,(\sigma')^{-1}\sigma_1) C_{p_1,q_2}^{\tau_1 \sigma' \tau_2}\end{aligned}$$ in $\faktor{A}{I_{X}}$. ◻* One useful consequence of equivariance is the following, which will be necessary to show that the *link modules* of the next section are indeed $A$-modules (Proposition [Proposition 43](#prop:linkIsModule){reference-type="ref" reference="prop:linkIsModule"}). **Lemma 39**.
*For each $\lambda \in \Lambda$, and every $a,b \in A$, the function $r_a$ satisfies the equation $$\sum_{\sigma'\in G(\lambda)} r_{ab}(p',\sigma',p) = \sum_{\sigma'\in G(\lambda)} \sum_{\substack{\sigma'' \in G(\lambda) \\ p'' \in M(\lambda)}} r_a(p',\sigma',p'') r_b(p'',\sigma'',p)$$* *Proof.* This lemma is essentially a consequence of the following: since $A$ is associative, we have $$(ab)C_{p,q}^1 = a(bC_{p,q}^1).$$ Applying Axiom [\[item:W3\]](#item:W3){reference-type="ref" reference="item:W3"} to the left hand side, one sees that the coefficient of $C_{p',q}^{\rho}$ in $(ab)C_{p,q}^1$ is $r_{ab}(p',\rho,p)$. Applying the same axiom (twice) to the right hand side, one sees that the coefficient of the same basis element in $a(bC_{p,q}^1)$ is $\sum_{\sigma'', p''} r_a(p',\rho(\sigma'')^{-1},p'')r_b(p'',\sigma'',p)$. These coefficients must be equal, that is: $$r_{ab}(p',\rho,p) = \sum_{\sigma'', p''} r_a(p',\rho(\sigma'')^{-1},p'')r_b(p'',\sigma'',p).$$ The point is now that equivariance allows us to reindex in the desired manner on the right hand side, at least after an additional sum over $\rho$. Namely, $$\begin{aligned} \sum_{\rho \in G(\lambda)} r_{ab}(p',\rho,p) & = \sum_{\rho \in G(\lambda)}\sum_{\substack{\sigma'' \in G(\lambda) \\ p'' \in M(\lambda)}} r_a(p',\rho(\sigma'')^{-1},p'')r_b(p'',\sigma'',p) \\ & = \sum_{\substack{(\rho,\sigma'') \in G(\lambda) \times G(\lambda) \\ p'' \in M(\lambda)}} r_a(p',\rho(\sigma'')^{-1},p'')r_b(p'',\sigma'',p) \\ & = \sum_{\substack{(\tau,\sigma'') \in G(\lambda) \times G(\lambda) \\ p'' \in M(\lambda)}} r_a(p',\tau,p'')r_b(p'',\sigma'',p) \\ & = \sum_{\tau \in G(\lambda)}\sum_{\substack{\sigma'' \in G(\lambda) \\ p'' \in M(\lambda)}} r_a(p',\tau,p'')r_b(p'',\sigma'',p),\end{aligned}$$ where we used the fact that $$(\rho,\sigma'') \mapsto (\rho (\sigma'')^{-1}, \sigma'') = (\tau, \sigma'')$$ is a self-bijection of $G(\lambda) \times G(\lambda)$. This completes the proof. ◻ # Link modules and bilinear forms {#section:Form} In this subsection, fix a commutative ring $R$, and a naive-cellular algebra $A$ over $R$, with naive-cellular datum $(\Lambda, G, M, C, *)$. **Definition 40**. For $\lambda \in \Lambda$, let the *link module* $W(\lambda)$ be the free $R$-module on the symbols $C_p$, together with a left action of $A$ given by $$a C_p = \sum_{\substack{\sigma'\in G(\lambda) \\ p' \in M(\lambda)}} r_a(p',\sigma',p) C_{p'}.$$ *Remark 41*. If $A$ is a cellular algebra in the sense that each $G(\lambda)$ is trivial (Proposition [Proposition 8](#prop:cellIsWossit){reference-type="ref" reference="prop:cellIsWossit"}), then this definition reduces to Graham and Lehrer's definition [@GrahamLehrer Definition 2.1] of the *cell module* associated to $\lambda$ (which they also denote by $W(\lambda)$). *Remark 42*. As in the cellular case, note that the definition 'mimics the multiplication from $A$' in the sense that Axiom [\[item:W3\]](#item:W3){reference-type="ref" reference="item:W3"} gives that, modulo $A(< \lambda)$, we have $$a C_{p,q}^{\tau} = \sum_{\substack{\sigma'\in G(\lambda) \\ p' \in M(\lambda)}} r_a(p',\sigma',p) C_{p',q}^{\sigma' \tau},$$ where we use the form of equivariance from Lemma [Lemma 37](#lem:equivariance){reference-type="ref" reference="lem:equivariance"}. We must verify that this multiplication rule actually makes $W(\lambda)$ into a left $A$-module. **Proposition 43**. 
*Each link module $W(\lambda)$ is a left $A$-module.* *Proof.* We must check that the multiplication is bilinear, and that for $x \in W(\lambda)$ and $a,b\in A$, we have $1 \cdot x = x$, and $a(bx)=(ab)x$. Since the algebra multiplication in $A$ is bilinear, for fixed $p',\sigma,$ and $p$, we have that $r_a(p',\sigma,p)$ is $R$-linear in $a$. The action of $A$ on $C_p$ is therefore $R$-linear in the first variable. Linearity in the second variable is automatic, because the action is defined as the $R$-linear extension of an action on the basis $C_p$. This establishes bilinearity, and it therefore suffices to check the equalities $1 \cdot x = x$ and $a(bx)=(ab)x$ in the case that $x = C_p$ is a basis element. The first equality then follows immediately from Remark [Remark 42](#rmk:mimicMult){reference-type="ref" reference="rmk:mimicMult"}. For the second equality, we must check that $$\sum_{\substack{\sigma'\in G(\lambda) \\ p'\in M(\lambda)}} r_{ab}(p',\sigma',p) = \sum_{\substack{\sigma'\in G(\lambda) \\ p'\in M(\lambda)}} \sum_{\substack{\sigma'' \in G(\lambda) \\ p'' \in M(\lambda)}} r_a(p',\sigma',p'') r_b(p'',\sigma'',p),$$ but this is just the sum over $p'$ of the equality given in Lemma [Lemma 39](#lem:eqProductReindexing){reference-type="ref" reference="lem:eqProductReindexing"}. This completes the proof. ◻ ## Bilinear forms Let $C_{p_1,q_1}^{\sigma_1}$ and $C_{p_2,q_2}^{\sigma_2}$ be basis elements associated to the same $\lambda \in \Lambda$. Recall that by Corollary [Corollary 32](#cor:basisMultFormula){reference-type="ref" reference="cor:basisMultFormula"}, there is a function $s$, defined by $$s(\sigma_1,q_1,p_2,\rho) := r_{C_{p_1,q_1}^{\sigma_1}}(p_1,\rho,p_2) \in R,$$ such that, modulo $I_{< \lambda}$, we have $$\begin{aligned} C_{p_1,q_1}^{\sigma_1} C_{p_2,q_2}^{\sigma_2} & = \sum_{\substack{\sigma' \in G(\lambda)}} s(\sigma_1,q_1,p_2,\sigma'\sigma_2^{-1}) C_{p_1,q_2}^{\sigma'} \\ & = \sum_{\substack{\sigma' \in G(\lambda)}} s(\sigma_2^{-1},p_2,q_1,(\sigma')^{-1}\sigma_1) C_{p_1,q_2}^{\sigma'}.\end{aligned}$$ **Definition 44**. Let $\lambda \in \Lambda$, and let $\tau \in G(\lambda)$. The *bilinear form (on $W(\lambda)$) associated to $\tau$* is defined on the basis by setting $$\langle C_q,C_p \rangle_{\tau} := s(1,q,p,\tau),$$ and then extended bilinearly over all of $W(\lambda) \times W(\lambda)$. In other words, $\langle C_q,C_p \rangle_{\tau}$ is the coefficient of $C_{p_1,q_2}^{\tau}$ in $C_{p_1,q}^{1} C_{p,q_2}^{1}$ for any $p_1,q_2 \in M(\lambda)$. *Remark 45*. If $A$ is a cellular algebra in the sense that each $G(\lambda)$ is trivial (Proposition [Proposition 8](#prop:cellIsWossit){reference-type="ref" reference="prop:cellIsWossit"}), then (Remark [Remark 41](#rmk:cellMod){reference-type="ref" reference="rmk:cellMod"}) the module $W(\lambda)$ coincides with Graham and Lehrer's cell module, and this definition reduces immediately to Graham and Lehrer's definition [@GrahamLehrer Definition 2.3] of the bilinear form $\phi_\lambda$. We record some easy properties of this bilinear form. **Proposition 46**. 1. *$\langle C_q,C_p \rangle_{\tau}$ is equal to the coefficient of $C_{p_1,q_2}^{\sigma_1\tau\sigma_2}$ in $C_{p_1,q}^{\sigma_1} C_{p,q_2}^{\sigma_2}$ for any $\sigma_1,\sigma_2 \in G(\lambda)$ and any $p_1,q_2 \in M(\lambda)$.* 2.
*We have the symmetry property $$\langle u,v \rangle_{\tau} = \langle v,u \rangle_{\tau^{-1}}.$$* *Proof.* Since $\langle C_q,C_p \rangle_{\tau}$ is the coefficient of $C_{p_1,q_2}^{\tau}$ in $C_{p_1,q}^{1} C_{p,q_2}^{1}$ for any $p_1,q_2 \in M(\lambda)$, the first property follows from Corollary [Corollary 38](#cor:basisMultEquivariance){reference-type="ref" reference="cor:basisMultEquivariance"}. By bilinearity, it suffices to verify the second property in the case $u=C_q, v=C_p$, but in this case the result is immediate from Corollary [Corollary 32](#cor:basisMultFormula){reference-type="ref" reference="cor:basisMultFormula"}. This completes the proof. ◻ If $A$ is a diagram-like algebra, then the definition of the bilinear form becomes more comprehensible. It is given by the next lemma, which follows immediately from the definitions. **Lemma 47**. *Suppose that $A$ is diagram-like, and let $\lambda \in \Lambda$, $p,q \in M(\lambda)$, and $\tau \in G(\lambda)$. From Definition [Definition 9](#def:dgrmLike){reference-type="ref" reference="def:dgrmLike"}, we have, for any $p_1,q_2 \in M(\lambda)$, $$C_{p_1,q}^1 C_{p,q_2}^1 = \kappa C_{p_1,q_2}^\sigma,$$ where the basis element on the right is associated to $\mu \in \Lambda$ depending only on $p$ and $q$, and $\kappa$ and $\sigma$ also depend only on $p,q$. Writing $\kappa$, $\sigma$, and $\mu$ as functions of $p$ and $q$, we have $$\pushQED{\qed} \langle C_q,C_p \rangle_{\tau} = \begin{cases} \kappa(p,q) \chi_{\sigma(p,q)}(\tau) & \textrm{ if $\lambda = \mu(p,q)$, and } \\ 0 & \textrm{ otherwise.} \end{cases} \qedhere \popQED$$* *Remark 48*. If one just wishes to work with diagram algebras, it may be better to define a single bilinear form on $W(\lambda)$ as $\langle C_q, C_p\rangle = \kappa(q,p)$. We do not do so because this definition doesn't work well for naive-cellular algebras, and we wanted to be able to compare our Theorem [Theorem 1](#thm:global){reference-type="ref" reference="thm:global"} with Graham and Lehrer's results on cellular algebras (as in Section [1.5](#subsection:Cellular){reference-type="ref" reference="subsection:Cellular"}). The reason for introducing these bilinear forms is the next proposition. Recall the definition of the ideals $J_q$ (Definition [Definition 33](#def:J){reference-type="ref" reference="def:J"}), and recall that by Lemma [Lemma 34](#lem:JIdeal){reference-type="ref" reference="lem:JIdeal"}, if $X$ is a downward closed subset of $\Lambda$, and $\lambda$ is a minimal element of $\Lambda \setminus X$, then the image $\pi_X(J_q)$ in $\faktor{A}{I_X}$ is a left ideal. **Proposition 49**. *Let $X$ be a downward closed subset of $\Lambda$, and let $\lambda$ be a minimal element of $\Lambda \setminus X$. Fix $q \in M(\lambda)$. Then there exists an idempotent $e_q$ in $\faktor{A}{I_X}$ which generates $\pi_X(J_q)$ as a left ideal if and only if the indicator function of the identity in $G(\lambda)$ lies in the $R$-span of the functions $$\begin{aligned} G(\lambda) & \to R \\ \tau & \mapsto \langle C_q , C_p \rangle_{\tau \sigma^{-1}},\end{aligned}$$ for $p \in M(\lambda)$, $\sigma \in G(\lambda)$.* We will use the following lemma in the proof. **Lemma 50**. *Let $A$ be an $R$-algebra, let $J$ be a left ideal of $A$, and let $e \in J$. Then $y e = y$ for every $y \in J$ if and only if $e$ is idempotent and generates $J$ as a left ideal.* *Proof.* If $e$ is idempotent and generates $J$, then every element $y \in J$ has the form $x e$, so $y e = x e^2 = xe = y$.
Conversely, if $ye = y$ for every $y \in J$, then $e$ generates $J$ as a left ideal, and, since we have in particular that $e \in J$, setting $y = e$ gives that $e$ is idempotent. ◻ *Proof of Proposition [Proposition 49](#prop:formBegetIdempt){reference-type="ref" reference="prop:formBegetIdempt"}.* In this proof, we will commit the abuse of Remark [Remark 28](#rmk:basisAbuse){reference-type="ref" reference="rmk:basisAbuse"}, and write $C_{p,q}^\sigma$ for the image of that basis vector in $\faktor{A}{I_X}$. By Lemma [Lemma 50](#lem:idempotentGenCharacterisation){reference-type="ref" reference="lem:idempotentGenCharacterisation"}, an element $e_q$ of $\pi_X(J_q)$ is an idempotent generator of $\pi_X(J_q)$ if and only if $y e_q = y$ for all $y \in \pi_X(J_q)$, if and only if $C_{p,q}^\sigma e_q = C_{p,q}^\sigma$ for every basis element $C_{p,q}^\sigma$ of $\pi_X(J_q)$. Expressing $e_q$ in this basis gives $e_q = \sum_{\substack{p' \in M(\lambda) \\ \sigma' \in G(\lambda)}} \alpha_{p',\sigma'} C_{p',q}^{\sigma'}$, and we write: $$\begin{aligned} C_{p,q}^\sigma e_q & = C_{p,q}^\sigma \sum_{p', \sigma'} \alpha_{p',\sigma'} C_{p',q}^{\sigma'} \\ & = \sum_{p', \sigma'} \alpha_{p',\sigma'} C_{p,q}^\sigma C_{p',q}^{\sigma'} \\ & = \sum_{p', \sigma'} \alpha_{p',\sigma'} \sum_{\tau' \in G(\lambda)} \langle C_q, C_{p'}\rangle_{\tau'} C_{p,q}^{\sigma \tau' \sigma'} \\ & = \sum_{\tau'} \sum_{p', \sigma'} \alpha_{p',\sigma'} \langle C_q, C_{p'}\rangle_{\tau'} C_{p,q}^{\sigma \tau' \sigma'} \\ & = \sum_{\tau}( \sum_{p', \sigma'} \alpha_{p',\sigma'} \langle C_q, C_{p'}\rangle_{\tau (\sigma')^{-1}}) C_{p,q}^{\sigma \tau},\end{aligned}$$ where the third equality is by Proposition [Proposition 46](#prop:bilFormProperties){reference-type="ref" reference="prop:bilFormProperties"}. This last expression is equal to $C_{p,q}^\sigma$ if and only if $\sum_{p', \sigma'} \alpha_{p',\sigma'} \langle C_q, C_{p'}\rangle_{\tau (\sigma')^{-1}}$ is equal to $1$ if $\tau$ is the identity, and zero otherwise. That is, the coefficients $\alpha_{p',\sigma'}$ express the indicator function of the identity in $G(\lambda)$ as a linear combination of the $\tau \mapsto \langle C_q, C_{p'}\rangle_{\tau (\sigma')^{-1}}$ if and only if they define an element $e_q$ (as a linear combination of $C_{p',q}^{\sigma'}$) which satisfies $C_{p,q}^\sigma e_q = C_{p,q}^\sigma$ for each $p$ and $\sigma$. In particular, the indicator function can be expressed as such a linear combination if and only if $\pi_X(J_q)$ is principal and generated by an idempotent, as required. ◻ We will use Proposition [Proposition 49](#prop:formBegetIdempt){reference-type="ref" reference="prop:formBegetIdempt"} in conjunction with the following lemma. Note that this actually verifies something stronger than the hypothesis of that proposition, since it expresses the indicator function as a span of only those functions corresponding to a particular element $\sigma$. **Lemma 51**. *Let $\lambda \in \Lambda$, and fix $q \in M(\lambda)$. If there exists $v \in W(\lambda)$ such that for some $\sigma \in G(\lambda)$ we have $$\langle C_q, v \rangle_{\tau} = \begin{cases} 1 & \textrm{ if } \tau=\sigma, \textrm{ and} \\ 0 & \textrm{ otherwise,} \end{cases}$$ then the indicator function of $1 \in G(\lambda)$ lies in the span of the functions $$\begin{aligned} G(\lambda) & \to R \\ \tau & \mapsto \langle C_q , C_p \rangle_{\tau \sigma},\end{aligned}$$ for $p \in M(\lambda)$.* *Proof.* The hypothesis gives $\chi_{\sigma}(\tau) = \langle C_q, v \rangle_{\tau}$. 
Expressing $v$ in the basis $C_p$ of $W(\lambda)$ gives $$v = \sum_{p \in M(\lambda)} \alpha_p C_p$$ for coefficients $\alpha_p \in R$. We may then write $$\chi_{1}(\tau)= \chi_{\sigma}(\tau \sigma) = \langle C_q , v \rangle_{\tau \sigma} =\sum_{p \in M(\lambda)} \alpha_p \langle C_q ,C_p \rangle_{\tau \sigma},$$ which is of the desired form. ◻ # Proof of Theorem [Theorem 1](#thm:global){reference-type="ref" reference="thm:global"} {#section:GlobalResults} In this section we will prove Theorem [Theorem 1](#thm:global){reference-type="ref" reference="thm:global"}. To do so, we will apply the following observation from [@Boyde] inductively. **Theorem 52** ([@Boyde Theorem 3.3]). *Let $A$ be an $R$-algebra, let $M$ be a right $A$-module and let $N$ be a left $A$-module. Let $I$ be a twosided ideal of $A$ which acts trivially on $M$ and $N$, and which as a left ideal is a direct sum $I \cong J_1 \oplus \dots \oplus J_k$. Suppose that each $J_i$ is generated as a left ideal by finitely many commuting idempotents. Then $$\pushQED{\qed} \mathrm{Tor}^A_*(M,N) \cong \mathrm{Tor}^{\faktor{A}{I}}_*(M,N). \qedhere \popQED$$* *Remark 53*. The hypotheses of this theorem are reminiscent of the definition of a *block* of an associative algebra (see for example [@Benson Section 1.8]) though of course we are asking for something much weaker than a block decomposition. *Proof of Theorem [Theorem 1](#thm:global){reference-type="ref" reference="thm:global"}.* We are given - a naive-cellular algebra $A$ over a ring $R$ (commutative with unit), with naive-cellular datum $(\Lambda, G, M, C, *)$, and - two downward closed subsets $Y \subset X$ of $\Lambda$, with $X \setminus Y$ finite. We assume that for each $\lambda \in X \setminus Y$ we have the following. - For every $q \in M(\lambda)$, there exists $v \in W(\lambda)$ such that $$\langle C_q, v \rangle_{\tau} = \begin{cases} 1 & \textrm{ if } \tau=1, \textrm{ and} \\ 0 & \textrm{ otherwise.} \end{cases}$$ In this situation, we must show that for any right $A$-module $M$ and left $A$-module $N$, where $I_X$ acts trivially on both, we have $$\mathrm{Tor}_*^{\faktor{A}{I_{Y}}}(M,N) \cong \mathrm{Tor}_*^{\faktor{A}{I_{X}}}(M,N).$$ Since $X \setminus Y$ is assumed to be finite and $Y$ is arbitrary downward closed, it suffices by induction and Lemma [Lemma 31](#lem:YcupLambda){reference-type="ref" reference="lem:YcupLambda"} to prove that for any minimal element $\lambda$ of $X \setminus Y$ we have $$\mathrm{Tor}_*^{\faktor{A}{I_{Y}}}(M,N) \cong \mathrm{Tor}_*^{\faktor{A}{I_{Y \cup \{\lambda\}}}}(M,N).$$ By Lemma [Lemma 36](#lem:decomposition){reference-type="ref" reference="lem:decomposition"}, we have an isomorphism of left $A$-modules $$\faktor{I_{Y \cup \{\lambda \}}}{I_Y} \cong \bigoplus_{q \in M(\lambda)} \pi_X(J_q),$$ so by Theorem [Theorem 52](#thm:quotientingPlus){reference-type="ref" reference="thm:quotientingPlus"} applied to the ideal $\faktor{I_{Y \cup \{\lambda \}}}{I_Y}$ of the algebra $\faktor{A}{I_Y}$, it suffices to show that each left ideal $\pi_X(J_q)$ of $\faktor{A}{I_X}$ is generated by a single idempotent, but this follows from the assumption by Proposition [Proposition 49](#prop:formBegetIdempt){reference-type="ref" reference="prop:formBegetIdempt"} and Lemma [Lemma 51](#lem:ezCase){reference-type="ref" reference="lem:ezCase"}. This completes the proof. 
◻ # Specialisation of Theorem [Theorem 1](#thm:global){reference-type="ref" reference="thm:global"} to subalgebras of the Brauer algebras {#section:Specialisation} Our first job is to give a concrete interpretation of the bilinear form of Definition [Definition 44](#def:innerProduct){reference-type="ref" reference="def:innerProduct"} in the examples. Since all of the examples we consider here are obtained by restriction from the Brauer algebras, it suffices to do so in that case. **Lemma 54**. *Consider the Brauer algebra $\mathrm{Br}_n(\delta)$ (Example [Example 13](#ex:Brauer){reference-type="ref" reference="ex:Brauer"}). Let $t \in \underline{n}$, and let $q,p \in M(t)$ be link states with $t$ defects. Form the pair-graph $\Gamma_{\langle q,p \rangle}$ (Definition [Definition 16](#def:pairGraph){reference-type="ref" reference="def:pairGraph"}).* *Recall that the pair-set $S_{\langle q,p \rangle} \subset \pi_0(\Gamma_{\langle q,p \rangle})$ is the intersection of those path components containing a defect vertex of both $q$ and $p$. Note that $S_{\langle q,p \rangle} \leq t$.* *The bilinear forms $\langle C_q,C_p \rangle_{\tau}$ in the Brauer algebra $\mathrm{Br}_n(\delta)$ are determined as follows:* - *If $\lvert{S_{\langle q,p \rangle}}\rvert < t$, then $\langle C_q,C_p \rangle_{\tau} = 0$ for all $\tau$.* - *If $\lvert{S_{\langle q,p \rangle}}\rvert = t$, then $\langle C_q,C_p \rangle_{\tau} = \delta^i \chi_{\sigma(p,q)}(\tau)$, for some $\sigma = \sigma(p,q) \in G(t)$, where $i$ is the number of components in $\pi_0(\Gamma_{\langle q,p \rangle})$ not hit by either copy of $\underline{t}$.* *Proof.* Combine Lemma [Lemma 47](#lem:dgrmLikeBilinearForm){reference-type="ref" reference="lem:dgrmLikeBilinearForm"} and Proposition [Proposition 18](#prop:BrauerDGL){reference-type="ref" reference="prop:BrauerDGL"}. ◻ **Definition 55**. Let $A$ be a diagram-like subalgebra of $\mathrm{Br}_n(\delta)$ on a subset of the diagrams, with naive-cellular datum $(\Lambda, M, G, C, *)$ obtained by restriction (Definition [Definition 11](#def:restriction){reference-type="ref" reference="def:restriction"}) from the datum for $\mathrm{Br}_n(\delta)$ given in Definition [Definition 15](#def:BrauerDGL){reference-type="ref" reference="def:BrauerDGL"}. Let $t$ be an integer with $0 \leq t \leq n-1$. We will say that $A$ *satisfies Hypothesis $(\dag)$ at $t$* if: - For all $q \in M(t)$ there exists $p \in M(t)$ such that in the pair-graph $\Gamma_{\langle q,p \rangle}$, we have that the cardinality of the pair-set $\lvert{S_{\langle q,p \rangle}}\rvert$ is $t$, and that $\delta^i$ is invertible, where $i$ is the number of components in $\pi_0(\Gamma_{\langle q,p \rangle})$ not hit by either copy of $\underline{t}$. The following theorem specialises Theorem [Theorem 1](#thm:global){reference-type="ref" reference="thm:global"} to subalgebras of the Brauer algebras, and is what we will use in our first application. Recall that $I_{\leq t}$ (Definition [Definition 23](#def:I){reference-type="ref" reference="def:I"}) is the twosided ideal of $A$ spanned by diagrams with at most $t$ left-to-right connections. **Theorem 56**. *Let $A$ be a diagram-like subalgebra of $\mathrm{Br}_n(\delta)$ on a subset of the partitions, with naive-cellular datum $(\Lambda, M, G, C, *)$ obtained by restriction from the datum for $\mathrm{Br}_n(\delta)$ given in Definition [Definition 15](#def:BrauerDGL){reference-type="ref" reference="def:BrauerDGL"}. 
If there exist integers $a$ and $b \leq n-1$ such that $A$ satisfies Hypothesis $(\dag)$ for all $t$ with $a < t \leq b$, then the quotient map induces an isomorphism $$\mathrm{Tor}_*^{\faktor{A}{I_{\leq a}}}(\mathbbm{1},\mathbbm{1}) \cong \mathrm{Tor}_*^{\faktor{A}{I_{\leq b}}}(\mathbbm{1},\mathbbm{1}).$$* *Proof.* We wish to apply Theorem [Theorem 1](#thm:global){reference-type="ref" reference="thm:global"} with $X = \{t \in \Lambda \mid t \leq b \}$ and $Y = \{t \in \Lambda \mid t \leq a \}$, noting that $b \leq n-1$, so $I_X$ acts trivially on $\mathbbm{1}$. To verify the hypothesis, we must fix $q \in M(t)$ and find $v \in W(t)$ such that $\langle C_q,v\rangle_\tau$, regarded as a function of $\tau$, is the indicator function of some $\sigma \in G(t) = \Sigma_t$. Letting $p$ be as given in the hypotheses, since $\delta^i$ is invertible, we may form the element $$v = \frac{1}{\delta^i} C_p \in W(t),$$ which will suffice by Lemma [Lemma 54](#lem:formInterpretation){reference-type="ref" reference="lem:formInterpretation"}. ◻ # Application: global results for Jones annular algebras {#section:Jones} ## Jones annular algebras Recall that the Jones annular algebra $\mathrm{J}_n(\delta)$ is diagram-like, with naive-cellular datum obtained by restriction from that of the Brauer algebra $\mathrm{Br}_n(\delta)$ (Example [Example 20](#ex:Jones){reference-type="ref" reference="ex:Jones"}). We have the following lemma (see e.g. [@Boyde2]): **Lemma 57**. *The quotient $\faktor{\mathrm{J}_n(\delta)}{I_{\leq n-1}}$ of the Jones annular algebra by the twosided ideal $I_{\leq n-1}$ spanned by diagrams having at most $n-1$ left-to-right connections is the group algebra $R C_n$ of the cyclic group of order $n$. ◻* Recall the definition of the pair graph $\Gamma_{\langle q, p \rangle}$ (Definition [Definition 16](#def:pairGraph){reference-type="ref" reference="def:pairGraph"}). **Lemma 58**. *Let $q \in M(t)$ be an annular link state in $\mathrm{J}_n(\delta)$, for $t \geq 0$. Let $p \in M(t)$ be the link state obtained by 'rotating $q$ by one place', so that (mod $n$) if $i$ and $j$ are connected in $q$ then $i+1$ and $j+1$ are connected in $p$.* *Then, for each $i \in \underline{n}$, either $i$ has a defect in $q$, or $i$ is connected to $i+1$ in $\Gamma_{\langle q, p \rangle}$.* *Proof.* We will show that if $i$ does not have a defect in $q$, then $i$ is connected to $i+1$ in $\Gamma_{\langle q, p \rangle}$. If $i$ does not have a defect in $q$, then $i$ is connected to some $j$ via an edge in $q$. Since $q$ is annular (Example [Example 20](#ex:Jones){reference-type="ref" reference="ex:Jones"}), all of its defects lie in either the cyclic interval $(i,j)$ or in the cyclic interval $(j,i)$. By definition, $p$ has an edge between $i+1$ and $j+1$, so $i$ and $i+1$ are connected in $\Gamma_{\langle q, p \rangle}$ if and only if $j$ and $j+1$ are. We may therefore assume without loss of generality (i.e. perhaps interchanging $i$ and $j$) that $q$ has no defects in the cyclic interval $(i,j)$. By definition, $p$ has no defects in the cyclic interval $(i+1,j+1)$, so in particular $j$ cannot be a defect in $p$. Since $p$ is annular, $(i+1,j+1)$ is a union of parts of $p$, so $j$ must lie in the same part of $p$ as some $k \in (i+1,j)$ (to which it is connected by an edge in $p$). 
A picture of this portion of $\Gamma_{\langle q, p\rangle}$, with $q$ on the left and $p$ on the right, illustrates the situation (figure omitted here). Continuing, $k$ is connected via an edge in $q$ to some vertex $\ell$, which must be contained in $(i,j)$ and not have a defect, so is connected in turn to some vertex $m$ via an edge in $p$, and so on. This sequence can only terminate by reaching vertex $i+1$ via some edge in $q$. The conclusion is that $j$ and $i+1$, hence $i$ and $i+1$, lie in the same component of $\Gamma_{\langle q, p \rangle}$, as required. ◻ **Corollary 59**. *Let $q \in M(t)$ be an annular link state in $\mathrm{J}_n(\delta)$, for $t \geq 0$. Let $p \in M(t)$ be the link state obtained by 'rotating $q$ by one place', so that (mod $n$) if $i$ and $j$ are connected in $q$ then $i+1$ and $j+1$ are connected in $p$.* *Then, if $t > 0$, $\Gamma_{\langle q, p \rangle}$ consists of $t$ contractible path components, each containing a single defect from $q$ and a single defect from $p$. If $t=0$ then $\Gamma_{\langle q, p \rangle} \simeq S^1$.* *Proof.* Let $i \in \underline{n}$ be any vertex. By Lemma [Lemma 58](#lem:Jones){reference-type="ref" reference="lem:Jones"}, either $i$ has a defect in $q$, or $i$ is connected to $i+1$ in $\Gamma_{\langle q, p \rangle}$. By symmetry, we also have that either $i$ has a defect in $p$ or $i$ is connected to $i-1$ in $\Gamma_{\langle q, p \rangle}$. It follows that if $t \geq 1$ then the (preimages in $\underline{n}$ of) connected components of $\Gamma_{\langle q, p \rangle}$ are the cyclic intervals $[i,j]$, where $j$ is a defect in $q$, $i$ is a defect in $p$ and there are no defects in $(i,j)$. Since all vertices have valence 2, this establishes the result for $t \geq 1$. If $p$ has no defects ($t=0$) then all vertices lie in the same connected component of $\Gamma_{\langle q, p \rangle}$. Again, since all vertices have valence 2, the result follows. ◻ **Theorem 60**. *Let $R$ be a commutative ring, and let $\delta \in R$. The Jones annular algebra $\mathrm{J}_n(\delta)$ satisfies Hypothesis $(\dag)$ for $1 \leq t \leq n-1$, and if $\delta$ is invertible, then additionally Hypothesis $(\dag)$ holds for $t=0.$* *Proof.* For the first claim, let $q \in M(t)$ for $t \geq 1$. Let $p \in M(t)$ be obtained by rotation, so that if $i$ and $j$ are connected in $q$ then $i+1$ and $j+1$ are connected in $p$. By Corollary [Corollary 59](#cor:Jones){reference-type="ref" reference="cor:Jones"}, $\lvert{S_{\langle q,p\rangle}}\rvert = t$, and each path component of $\Gamma_{\langle q, p \rangle}$ is contractible, so $i=0$ by Lemma [Lemma 54](#lem:formInterpretation){reference-type="ref" reference="lem:formInterpretation"}. This establishes Hypothesis $(\dag)$. When $t=0$, we still have $\lvert{S_{\langle q,p\rangle}}\rvert = t$, but now $\Gamma_{\langle q, p \rangle}$ is a single path component containing no defects (Corollary [Corollary 59](#cor:Jones){reference-type="ref" reference="cor:Jones"}), so the exponent in Lemma [Lemma 54](#lem:formInterpretation){reference-type="ref" reference="lem:formInterpretation"} is $i=1$; since $\delta$ is assumed invertible, $\delta^i = \delta$ is invertible, and Hypothesis $(\dag)$ again holds. This completes the proof. ◻ **Corollary 61**. *Let $R$ be a commutative ring, let $\delta \in R$, and consider the Jones annular algebra $\mathrm{J}_n(\delta)$ (Example [Example 20](#ex:Jones){reference-type="ref" reference="ex:Jones"}). 
Let $I_0$ denote the ideal spanned by partitions $\rho$ such that no part of $\rho$ contains both primed and unprimed elements.* *We have $$\mathrm{Tor}_*^{\faktor{\mathrm{J}_n(\delta)}{I_{0}}}(\mathbbm{1},\mathbbm{1}) \cong \mathrm{Tor}_*^{R C_n}(\mathbbm{1},\mathbbm{1}),$$ and if $\delta$ is invertible or $n$ is odd then additionally $$\mathrm{Tor}_*^{\mathrm{J}_n(\delta)}(\mathbbm{1},\mathbbm{1}) \cong \mathrm{Tor}_*^{\faktor{\mathrm{J}_n(\delta)}{I_{0}}}(\mathbbm{1},\mathbbm{1}).$$* *Proof.* All parts of the claim apart from the case $n$ odd follow from Theorem [Theorem 56](#thm:subalgebrasOfBrauer){reference-type="ref" reference="thm:subalgebrasOfBrauer"}, Theorem [Theorem 60](#thm:JR1){reference-type="ref" reference="thm:JR1"}, and Lemma [Lemma 57](#lem:topThingJ){reference-type="ref" reference="lem:topThingJ"}. If $n$ is odd then a partition of $\underline{n} \cup \underline{n}'$ where all parts have cardinality 2 must have at least one part with both a primed and an unprimed element. It follows that $I_0 = 0$, so the second isomorphism holds trivially. ◻ # Link state orderings {#section:LSO} **Definition 62**. Let $A$ be a naive-cellular algebra over a ring $R$, with naive-cellular datum $(\Lambda, G, M, C, *)$. A *link state ordering* on the naive-cellular datum is a partial order $<$ on $$M_{\Lambda} := \coprod_{\lambda \in \Lambda} M(\lambda),$$ such that: 1. [\[item:C1\]]{#item:C1 label="item:C1"} whenever $p,q \in M_{\Lambda}$ have a common lower bound, then they have a greatest common lower bound, 2. [\[item:C2\]]{#item:C2 label="item:C2"} the map $$\pi: \coprod_{\lambda \in \Lambda} M(\lambda) \to \Lambda$$ sending $p \in M(\lambda)$ to $\lambda$ is strictly order-preserving, and 3. [\[item:C3\]]{#item:C3 label="item:C3"} the ordering is compatible with the multiplication in the sense that for all $a \in A$, $\lambda \in \Lambda$, $p,q \in M(\lambda)$ and $\sigma \in G(\lambda)$ we have $$a C_{p,q}^{\sigma} \in \mathrm{Span}_R\{C_{p',q'}^{\sigma'} \ \mid \ \mu \leq \lambda, \sigma' \in G(\mu), p',q' \in M(\mu), q' \leq q \}.$$ By a *link state ordered* naive-cellular algebra, we mean a naive-cellular algebra with a prescribed choice of naive-cellular datum and link state ordering. We write $<$ for both the order on $\Lambda$ and that on $M_{\Lambda}$. ## Ideals In this subsection we fix a naive-cellular algebra over $R$, with naive-cellular datum $(\Lambda, G, M, C, *)$, and a choice of link state ordering. Recall that for $\lambda \in \Lambda$, and $q \in M(\lambda)$, we define an $R$-submodule $J_q$ of $A$ via $$J_q := \mathrm{Span}_R\{C_{p,q}^{\sigma} \ \mid \ \sigma \in G(\lambda), p \in M(\lambda)\}.$$ Using the link state ordering, we now further define $$J_{\leq q} := \mathrm{Span}_R\{C_{p,q'}^{\sigma} \ \mid \ \mu \leq \lambda, \sigma \in G(\mu), p,q' \in M(\mu), q' \leq q \}.$$ Definition [Definition 62](#def:LSOrdering){reference-type="ref" reference="def:LSOrdering"}, Condition ([\[item:C3\]](#item:C3){reference-type="ref" reference="item:C3"}) is then equivalent to: **Lemma 63**. *$J_{\leq q}$ is a left ideal of $A$. ◻* Write $M_{\Lambda} \cup \{-\infty\}$ for the set consisting of $M_{\Lambda}$, together with a single additional element $- \infty$, with the convention that $-\infty < p$ for all $p \in M_{\Lambda}$. By Definition [Definition 62](#def:LSOrdering){reference-type="ref" reference="def:LSOrdering"}, Condition ([\[item:C1\]](#item:C1){reference-type="ref" reference="item:C1"}), we then have: **Lemma 64**. *$M_{\Lambda} \cup \{-\infty\}$ has all meets. 
◻* By using the convention that $J_{\leq (- \infty)} = J_{- \infty} = 0$, we can avoid a case statement in the next lemma: **Lemma 65**. *For $p,q \in M_{\Lambda}$, we have $J_{\leq p} \cap J_{\leq q} = J_{\leq (p \wedge q)}$. ◻* ## Idempotent generation In this subsection we fix a naive-cellular algebra over $R$, with naive-cellular datum $(\Lambda, G, M, C, *)$, and a choice of link state ordering. **Definition 66**. A link state ordering will be said to be *(left) generating* if, for every $\lambda \in \Lambda$ and every $q \in M(\lambda)$, $J_{q}$ generates $J_{\leq q}$ as a left ideal. Suppose that $A$ is diagram-like. Then, the product of two basis elements $C_{p_1,q_1}^{\sigma_1} C_{p_2,q_2}^{\sigma_2}$ associated to the same $\lambda \in \Lambda$ is a multiple of some element of the basis of $I_{\leq \lambda}$, associated to some $\lambda'$, and knowledge of $q_1$ and $p_2$ is sufficient to determine whether $\lambda = \lambda'$. Let $S_q \subset M(\lambda)$ be the set of those $p$ for which the product $C_{p_1,q}^{\sigma_1} C_{p,q_2}^{\sigma_2}$ lies in the span of the basis elements associated to $\lambda$ (this question being independent of $p_1, \sigma_1, q_2,$ and $\sigma_2$), and let $$\rho_q : J_q \to J_q$$ be the projection onto those basis elements $C_{p',q}^{\sigma'}$ for which $p' \in S_q$. **Lemma 67**. *For any $a \in J_q$, right multiplication by $\rho_q(a)$ preserves $J_q$.* *Proof.* It suffices to verify the lemma in the case that $a = C_{p,q}^{\sigma}$ is a basis element. Then $$\rho_q(C_{p,q}^{\sigma})=\begin{cases} C_{p,q}^{\sigma} & \textrm{ if } p \in S_q, \textrm{ and} \\ 0 & \textrm{ otherwise.} \end{cases}$$ That is, we must establish that if $p \in S_q$ then right multiplication by $C_{p,q}^{\sigma}$ preserves $J_q$. The definition of a diagram-like algebra gives that: $$C_{p',q}^{\sigma'} C_{p,q}^{\sigma} = \kappa C_{p'',q''}^{\sigma''},$$ and then, since $p \in S_q$, we have either that $\kappa = 0$, or that $C_{p'',q''}^{\sigma''}$ is associated to $\lambda$. In the first case we are done, and in the second case, the definition of a diagram-like algebra (Definition [Definition 9](#def:dgrmLike){reference-type="ref" reference="def:dgrmLike"}, Condition [\[item:dl5\]](#item:dl5){reference-type="ref" reference="item:dl5"}) gives that $q''=q$, as required. ◻ **Lemma 68**. *Suppose that $A$ is diagram-like. Let $\lambda \in \Lambda$, $q \in M(\lambda)$. If the right action of $e \in J_q$ (equivalently, the right action of $\pi(e) \in \pi(J_q)$) fixes the submodule $\pi(J_q) \subset \faktor{A}{I_{< \lambda}}$, then the right action of $\rho_q(e)$ fixes $J_q$.* *Proof.* The kernel of $\rho_q$ acts trivially on $\pi(J_q)$ (from the right), so the right multiplication maps $\cdot \rho_q(e)$ and $\cdot e$ coincide as maps $\pi(J_q) \to \faktor{A}{I_{< \lambda}}$ on the quotient. By Lemma [Lemma 67](#lem:rhoPreserves){reference-type="ref" reference="lem:rhoPreserves"}, the action of $\rho_q(e)$ preserves $J_q$, so we get a commutative diagram (omitted here). The vertical map $\pi$ is an isomorphism of $R$-modules, so since the bottom map is assumed to be the identity, the top map is too. This completes the proof. ◻ **Proposition 69**. *Suppose that $A$ is diagram-like. Suppose that the link state ordering is left generating (Definition [Definition 66](#def:generating){reference-type="ref" reference="def:generating"}). Let $\lambda \in \Lambda$, $q \in M(\lambda)$. 
If there exists $v \in W(\lambda)$ such that for some $\sigma \in G(\lambda)$ we have $$\langle C_{q}, v \rangle_{\tau} = \begin{cases} 1 & \textrm{ if } \tau=\sigma, \textrm{ and} \\ 0 & \textrm{ otherwise,} \end{cases}$$ then there exists an idempotent $e_{q}$ in $J_{q}$ which generates $J_{\leq q}$ as a left ideal.* *Proof.* By Proposition [Lemma 34](#lem:JIdeal){reference-type="ref" reference="lem:JIdeal"}, the image $\pi(J_q)$ in the quotient algebra $\faktor{A}{I_{<\lambda}}$ is a left ideal. By Lemma [Lemma 51](#lem:ezCase){reference-type="ref" reference="lem:ezCase"} and Proposition [Proposition 49](#prop:formBegetIdempt){reference-type="ref" reference="prop:formBegetIdempt"}, the condition of this theorem guarantees that $\pi(J_q)$ is generated by some idempotent $\varepsilon_q$. This means that every element of $\pi(J_q)$ is of the form $x \varepsilon_q$ for $x \in A$, so since $\varepsilon_q$ is idempotent, right multiplication by $\varepsilon_q$ fixes $\pi(J_q)$. Let $\widetilde{\varepsilon_q} \in J_q$ be the unique element with $\pi(\widetilde{\varepsilon_q}) = \varepsilon_q$, and let $e_q = \rho_q(\widetilde{\varepsilon_q})$. By Lemma [Lemma 68](#lem:IdempotentLift){reference-type="ref" reference="lem:IdempotentLift"}, right multiplication by $e_q$ fixes $J_q$. By the assumption that the link state ordering is left generating, we have that any $y \in J_{\leq q}$ is of the form $ax$ for $x \in J_q$. Since right multiplication by $e_q$ fixes $J_q$ we have $x = x e_q$, so $y = ax = ax e_q$. That is, $e_q$ generates $J_{\leq q}$ as a left ideal of $A$. Lastly, $e_q$ is itself in $J_q$, so is fixed by right multiplication by $e_q$: $e_q \cdot e_q = e_q$. This establishes that $e_q$ is idempotent, and completes the proof. ◻ # Application: Homological stability for Temperley-Lieb algebras {#section:HS} In this section, we will convert Sroka's proof of homological stability for the Temperley-Lieb algebras into our language. We wish to use Proposition [Proposition 69](#prop:Idempotents){reference-type="ref" reference="prop:Idempotents"}, so we must first write down a link state ordering and show that it is left generating. For the most part, the maneuvers we make with diagrams will be familiar ones: see for example [@Kauffman; @FGG; @FGG-TL; @BHComb]. **Definition 70**. Let $p$ and $q$ be Temperley-Lieb link states. Say that $p \leq q$ if each connection from $q$ is present in $p$. Equivalently, $p \leq q$ if each defect from $p$ is present in $q$, and for each connection in $p$, either this connection is present in $q$, or both of its endpoints are defects in $q$. **Lemma 71**. *The ordering of Definition [Definition 70](#def:TLLSO){reference-type="ref" reference="def:TLLSO"} defines a link state ordering on the Temperley-Lieb algebra $\mathrm{TL}_n(\delta)$.* *Proof.* We must verify the three conditions of Definition [Definition 62](#def:LSOrdering){reference-type="ref" reference="def:LSOrdering"}. We begin with Condition ([\[item:C1\]](#item:C1){reference-type="ref" reference="item:C1"}). Suppose that link states $p$ and $q$ have a common lower bound. By Definition [Definition 70](#def:TLLSO){reference-type="ref" reference="def:TLLSO"}, this means that each connection in $p$ is either present in $q$ or has endpoints which are defects in $q$, and vice versa. In particular, a connection from $p$ and a connection from $q$ are either equal or disjoint. We will frequently use this fact without comment in the remainder of the proof. 
We may therefore let $b$ be the link state having all connections from $p$, all connections from $q$, and defects elsewhere. We must show that $b$ is planar. By the definition of planarity (Definition [Definition 21](#def:PlanarLS){reference-type="ref" reference="def:PlanarLS"}) this means we must argue that no two connections from $b$ can cross, and that no defect can lie inside a connection. Suppose first that two connections cross. Since $p$ and $q$ are planar, it must be that one of these connections is drawn from each. Write $c_p$ for the connection from $p$ and $c_q$ for the connection from $q$. Since $c_p$ is not present in $q$, its endpoints must be defects in $q$. Since $c_p$ and $c_q$ overlap, one of these defects must lie inside $c_q$, but this contradicts planarity of $q$. Thus, no two connections in $b$ can cross. Suppose then that a defect of $b$ (at height $i$, say) lies inside a connection. Without loss of generality, we may assume that $i$ is a defect in $p$ lying inside a connection $c_q$ in $q$. Since $i$ lies inside $c_q$, by planarity of $q$ we must have that $i$ lies at one end of a connection in $q$. But then, by definition, $i$ must also lie at one end of a connection in $b$, contradicting the assumption that $i$ was a defect in $b$. This establishes that $b$ is planar. By construction, $b$ is a common lower bound for $p$ and $q$, and we must show that it is the greatest such. If $c$ is another common lower bound, then $c \leq p$ implies that $c$ has all connections from $p$, and $c \leq q$ implies that $c$ has all connections from $q$. This says precisely that $c \leq b$, so $b$ is indeed the greatest common lower bound for $p$ and $q$. This establishes that our ordering satisfies Condition ([\[item:C1\]](#item:C1){reference-type="ref" reference="item:C1"}). Condition ([\[item:C2\]](#item:C2){reference-type="ref" reference="item:C2"}) follows from the observation that if $p < q$ then $p$ must have strictly fewer defects than $q$. Condition ([\[item:C3\]](#item:C3){reference-type="ref" reference="item:C3"}) holds, since (the underlying diagram of) a product of Temperley-Lieb diagrams $C_{p',q'}^{\sigma'} C_{p,q}^{\sigma}$ must retain all right-to-right connections from $C_{p,q}^{\sigma}$. This completes the proof. ◻ **Lemma 72**. *The link state ordering of Definition [Definition 70](#def:TLLSO){reference-type="ref" reference="def:TLLSO"} is left generating.* *Proof.* Fix a Temperley-Lieb link state $q$. It suffices to argue that any diagram $C_{p,q'}^{1}$ with right link state $q'$ strictly less than $q$ is a left multiple of a diagram with right link state precisely $q$. Note that by Condition [\[item:C2\]](#item:C2){reference-type="ref" reference="item:C2"} of Definition [Definition 62](#def:LSOrdering){reference-type="ref" reference="def:LSOrdering"}, this implies that $q$ has at least one defect (since $q'$ must have strictly fewer). We will take the liberty of giving the remainder of the proof by pictures. These pictures will be drawn in the case $n=11$ (i.e. in $\mathrm{TL}_{11}(\delta)$), where $q =$ , $q' =$ , $C_{p,q'}^{1}$ = First 'stretch out' the right hand side of $C_{p,q'}^1$, so that the connections which were already present in $q$ stay as they were, and the connections which are new to $q'$ come into the middle. In the example this looks as follows. 
$C_{p,q'}^{1}$ = We add vertices at the intersection of these connections with the central vertical, push these new vertices to the bottom, and insert new ones above: $C_{p,q'}^{1}$ "=" The number of new vertices inserted is $n-t$, where $t$ is the number of defects in $q$ (i.e. $q \in M(t)$). It is automatic that $n-t$ must be even. As noted at the start of the proof, $t \geq 1$, so there exists at least one left-to-right connection in the right-hand half of this 'diagram'. The topmost such connection may now take a 'detour' through the remaining vertices, since there are an even number of them: $C_{p,q'}^{1}$ "=" This expresses $C_{p,q}^1$ as a product of the desired form, namely: $C_{p,q'}^{1}$ = $\cdot$ This completes the proof. ◻ ## The idempotent cover In [@Sroka], Sroka constructs a resolution of the trivial module $\mathbbm{1}$ by projective left $\mathrm{TL}_n(\delta)$-modules. In the language of [@Boyde2], his complex can be described as an *idempotent cover* (indeed, it was the motivation for introducing that terminology). In this section, we will reconstruct Sroka's complex in this language. For $1 \leq i \leq n-1$, let $K_i$ be the left ideal of $\mathrm{TL}_n(\delta)$ spanned by those diagrams which have a connection between the (right-hand) nodes $i'$ and $(i+1)'$, and let $I_{\leq n-1}$ be the twosided ideal spanned by diagrams with fewer than $n$ left-to-right connections. **Lemma 73**. *There is an equality of left ideals $K_1 + \dots + K_{n-1} = I_{\leq n-1}$, where the sum is the internal sum in $\mathrm{TL}_n(\delta)$.* In the language of [@Boyde2], this says that the ideals $K_i$ *cover* $I_{\leq n-1}$. *Proof.* Any diagram with a right-to-right connection must have fewer than $n$ primed vertices with a left-to-right connection, hence in particular fewer than $n$ left-to-right connections. This gives that $K_i \subset I_{\leq n-1}$ for each $i$, and it suffices to show that any diagram with fewer than $n$ left-to-right connections must be contained in some $K_i$. Any diagram having fewer than $n$ left-to-right connections must have a primed vertex without a left-to-right connection. This vertex, $j_0'$, say, must be connected via a right-to-right connection to some other primed vertex $k_0'$. If $|j_0-k_0|=1$ then we are done, otherwise, since the right link state is planar (Definition [Definition 21](#def:PlanarLS){reference-type="ref" reference="def:PlanarLS"}), the primed vertices lying between $j_0'$ and $k_0'$ can only be connected to one another. Choose a connection among this set, say from $j_1$ to $k_1$, and repeat. Necessarily at each stage we have $|j_\ell-k_\ell| < |j_{\ell-1}-k_{\ell-1}|$, so we must eventually reach a connection between adjacent vertices $i$ and $i+1$, which shows that the diagram lies in $K_i$, as required. ◻ Following Sroka, call a subset $S$ of $\underline{n}$ *innermost* if there exists no $i$ for which $i$ and $i+1$ are both elements of $S$. The following lemma is immediate. **Lemma 74**. *For $S \subset \underline{n}$, the intersection $$\bigcap_{i \in S} K_i$$ is the $R$-span of those diagrams having a connection from $i'$ to $(i+1)'$ for each $i \in S$. This intersection is non-zero if and only if $S$ is innermost. 0◻* **Lemma 75**. *For $S \subset \underline{n}$ innermost, let $q=q(S)$ be the link state having a connection from $i$ to $i+1$ whenever $i \in S$, and defects elsewhere. 
We have an equality of left ideals $$\bigcap_{i \in S} K_i = J_{\leq q}.$$* *Proof.* To say that a diagram $C_{p',q'}^1$ lies in $\bigcap_{i \in S} K_i$ is precisely to say that $q' \leq q$ in the link state ordering (Definition [Definition 70](#def:TLLSO){reference-type="ref" reference="def:TLLSO"}). Since the link state ordering is left generating (Lemma [Lemma 72](#lem:TLLSOGenerating){reference-type="ref" reference="lem:TLLSOGenerating"}), and since link state orderings respect the multiplication (Definition [Definition 62](#def:LSOrdering){reference-type="ref" reference="def:LSOrdering"}, Condition ([\[item:C3\]](#item:C3){reference-type="ref" reference="item:C3"})) this is equivalent to saying that $C_{p',q'}^1$ lies in $J_{\leq q(S)}$. ◻ **Proposition 76**. *Let $t \geq 1$, $q \in M(t)$. There exists $p \in M(t)$ such that the pair-graph $\Gamma_{\langle q, p \rangle}$ has no loops and such that the cardinality of the pair-set $\lvert{S_{\langle q,p\rangle}}\rvert$ is $t$.* After the proof, we will give an example of the algorithm. *Proof.* We will build $p$ inductively, adding one edge or defect at a time. Let $L \subset \underline{n}$ be the set of *live vertices*. Initially, set $L$ equal to the defects of $q$. Since we assume $t \geq 1$, $L$ is initially non-empty. We will also describe each edge of $q$ as either *available* or *unavailable*. Write $A$ for the set of available edges. Initially, all edges are available. At each stage of the algorithm, select a live vertex $i \in L \subset \underline{n}$. If there exists at least one $j \in \underline{n}$ such that - $j$ lies at one end of an available edge $e$ in $q$, and - no live vertex $i' \in L$ lies between $i$ and $j$ then select from among such $j$ one closest to $i$, and - add a connection in $p$ from $i$ to $j$, - replace $i$ in the set of live vertices by the vertex $k$ lying at the other end of $e$ from $j$, and - remove $e$ from the set of available edges. If no such $j$ exists, then add a defect at $i$, and remove $i$ from the set $L$ of live vertices. This algorithm terminates when the set of live vertices is empty. We must argue that after performing this algorithm all vertices have exactly one edge in $p$ (so that $p$ is a Brauer link state), that $p$ is planar (hence a Temperley-Lieb link state) and that the conditions of the proposition statement are satisfied. If at some stage at least one available edge remains, then it is automatic that it satisfies the criterion of the algorithm for some live vertex (i.e. one of possibly two closest to it). This means that the algorithm terminates precisely when all edges have become unavailable and all remaining live vertices have been converted to defects. For a given edge $e$ from $i$ to $j$ in $q$, we see that $e$ starts the algorithm available, and can receive a connection only while it is available. Upon receiving a connection (at $i$, say), the other end $j$ of $e$ then becomes a new live vertex, and must later either become a defect or be connected to something else. The algorithm will produce no further connections to $i$ or $j$, since $e$ is now unavailable. Since the algorithm terminates when all edges have become unavailable and all remaining live vertices have been converted to defects, it follows that in $p$, the ends $i$ and $j$ of $e$ either both have connections in $p$, or one has a connection and one has a defect. In particular, this says precisely that the resulting $p$ is a Brauer link state. 
By induction, the unavailable edges at each stage of the algorithm are precisely those which are connected by a sequence of edges in $\Gamma_{\langle q,p \rangle}$ to some defect of $q$. Also by induction, an unavailable edge must be connected at each stage by a sequence of edges to some live vertex, hence ultimately to some defect of $p$. Since all edges are ultimately unavailable, all edges of $q$, hence all vertices in $\underline{n}$, are ultimately connected to both a defect of $q$ and a defect of $p$. This gives that the cardinality of the pair set $\lvert{S_{\langle q,p\rangle}}\rvert$ is $t$, and that the pair graph $\Gamma_{\langle q,p \rangle}$ has no loops, as required. It remains to argue that $p$ is planar (Definition [Definition 21](#def:PlanarLS){reference-type="ref" reference="def:PlanarLS"}). We must argue for each edge $e$ from $i$ to $j$ (say $i<j$) in $p$: - no defects of $p$ lie in the interval $(i,j)$, and - no vertex in $(i,j)$ is connected to a vertex outside $(i,j)$. At each stage, the algorithm begins from a live vertex and selects a nearest vertex lying at the end of an available edge in $q$, with no live vertices in between. In particular, after adding this new edge $e$ to $p$, no available edges of $q$ have an end lying inside $e$. It follows that $(i,j)$ consists only of vertices which already had an edge in $p$ when $e$ was added. Such an edge $e'$ must necessarily go to another vertex in the interval $(i,j)$, because otherwise when $e'$ was added there was a closer available vertex, contradicting the definition of the algorithm. In total, this shows that the interval $(i,j)$ consists entirely of vertices connected by an edge to another vertex in $(i,j)$. In particular, no defects can be created in this interval because it contains no live vertices, and no live vertices can be created in it, all of its edges being unavailable. This establishes that $p$ is planar, and completes the proof. ◻ *Example 77*. We give an example of the algorithm with $n=16$; the link state $q$, the intermediate stages of the construction, and the resulting $p$ are shown in a sequence of figures omitted here. In those figures, live vertices are drawn in red, and unavailable edges (and the edges of $p$) are drawn in grey. Note that the algorithm involves a choice at each stage; one sequence of choices leads to a link state $p$ of the required form. **Proposition 78**. *Let $t \geq 1$, $q \in M(t)$. The left ideal $J_{\leq q}$ is principal and generated by an idempotent.* *Proof.* By Lemma [Lemma 72](#lem:TLLSOGenerating){reference-type="ref" reference="lem:TLLSOGenerating"}, the link state ordering on the Temperley-Lieb algebras is left generating. Since the Temperley-Lieb algebras are diagram-like (Example [Example 22](#ex:TL){reference-type="ref" reference="ex:TL"}), we may attempt to use Proposition [Proposition 69](#prop:Idempotents){reference-type="ref" reference="prop:Idempotents"}. To do so, we must construct $v \in W(t)$ such that, for some $\sigma \in G(t)$, $$\langle C_{q}, v \rangle_{\tau} = \begin{cases} 1 & \textrm{ if } \tau=\sigma, \textrm{ and} \\ 0 & \textrm{ otherwise.} \end{cases}$$ Since for all $t$, the group $G(t)$ in the naive-cellular datum is the trivial group (i.e. the naive-cellular structure on the Temperley-Lieb algebras is just Graham and Lehrer's cellular structure), this reduces to constructing $v$ such that $\langle C_{q}, v \rangle = 1$, where the bilinear form is Graham and Lehrer's (cf. Remark [Remark 45](#rmk:cellForm){reference-type="ref" reference="rmk:cellForm"}). 
By Lemma [Lemma 54](#lem:formInterpretation){reference-type="ref" reference="lem:formInterpretation"}, it suffices to take $v$ to be the element $C_p$ provided by Proposition [Proposition 76](#prop:wiggleWiggle){reference-type="ref" reference="prop:wiggleWiggle"}. The result follows. ◻ We are now ready to give a variant proof of Sroka's theorem [@Sroka]. It is not a completely new proof because the topological 'back-end' (in the guise of Theorem 1.7 of [@Boyde2]) is essentially unchanged, and the 'front-end' has just been dressed up in the language of cellular algebras, to permit a 'bilinear form' verification of the fact that the $\bigcap_{i \in S} K_i$ are principal ideals generated by idempotents. Sroka's result is as follows: **Theorem 79**. *For all commutative rings $R$, all $\delta \in R$, and $n \geq 0$, we have $$\mathrm{Tor}_q^{\mathrm{TL}_n(\delta)}(\mathbbm{1},\mathbbm{1}) \cong \begin{cases} R & q = 0 \textrm{ and} \\ 0 & 0 < q \leq \frac{n}{2}-2. \end{cases}$$ Furthermore, if $n$ is odd, then actually $\mathrm{Tor}_q^{\mathrm{TL}_n(\delta)}(\mathbbm{1},\mathbbm{1}) \cong 0$ for all $q > 0$.* *Proof.* By Lemma [Lemma 74](#lem:intersectionDescription){reference-type="ref" reference="lem:intersectionDescription"}, Lemma [Lemma 75](#lem:intersectionsToJ){reference-type="ref" reference="lem:intersectionsToJ"}, and Proposition [Proposition 78](#prop:TLIdempotents){reference-type="ref" reference="prop:TLIdempotents"}, the ideals $K_i$ form a principal idempotent cover [@Boyde2 Definition 1.6] of $I_{\leq n-1}$, and this cover has height $\frac{n}{2}-1$ if $n$ is even, and height $n-1$ if $n$ is odd. Then apply Theorem [Theorem 3](#oldThm){reference-type="ref" reference="oldThm"}, together with the observations that $\faktor{\mathrm{TL}_n(\delta)}{I_{\leq n-1}} \cong R$, and $$\mathrm{Tor}^R_q(\mathbbm{1},\mathbbm{1}) \cong \begin{cases} 0 & q > 0 \textrm{ and } \\ R & q = 0 \end{cases}$$ to complete the proof. ◻
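The following short sketch is not part of the original paper; it is an illustration, in Python, of the live-vertex algorithm from the proof of Proposition 76, included here because the step-by-step diagrams of Example 77 are omitted above. The encoding of a link state as a set of vertex pairs (with unmatched vertices treated as defects), the specific choices made at each stage, and all names in the code are our own assumptions, not notation from the paper.

```python
# Illustrative sketch (not from the paper) of the algorithm in Proposition 76.
# A link state on vertices 1..n is encoded as a set of frozenset pairs;
# unmatched vertices are defects.

def build_p(n, q_edges):
    """Construct a link state p for q as in the proof of Proposition 76."""
    q_edges = {frozenset(e) for e in q_edges}
    matched = {v for e in q_edges for v in e}
    live = sorted(v for v in range(1, n + 1) if v not in matched)  # defects of q
    available = set(q_edges)                                       # available edges of q
    p_edges, p_defects = set(), set()

    def strictly_between(i, j):
        lo, hi = sorted((i, j))
        return set(range(lo + 1, hi))

    while live:
        i = live[0]  # the proof allows any choice of live vertex
        # endpoints j of available edges with no live vertex strictly between i and j
        candidates = [j for e in available for j in e
                      if not (strictly_between(i, j) & set(live))]
        if candidates:
            j = min(candidates, key=lambda v: abs(v - i))   # a closest such endpoint
            e = next(e for e in available if j in e)
            k = next(v for v in e if v != j)                # the other end of e
            p_edges.add(frozenset({i, j}))
            available.remove(e)
            live.remove(i)
            live.append(k)                                  # k becomes live
            live.sort()
        else:
            p_defects.add(i)                                # i becomes a defect of p
            live.remove(i)
    return p_edges, p_defects

# Tiny example: n = 5, q has edges {1,2} and {3,4}, and a defect at 5.
print(build_p(5, [{1, 2}, {3, 4}]))
```

For the tiny example in the last line, the sketch returns the edges $\{4,5\}$ and $\{2,3\}$ together with a defect at $1$, in agreement with Proposition 76: the pair graph is a single path joining the defect of $q$ (at $5$) to the defect of $p$ (at $1$), so $\lvert{S_{\langle q,p\rangle}}\rvert = t = 1$ and there are no loops.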
arxiv_math
{ "id": "2310.00373", "title": "Homological stability and Graham-Lehrer cellular algebras", "authors": "Guy Boyde", "categories": "math.RT math.AT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | This paper explores the modulus (discrete $p$-modulus) of the family of edge covers on a discrete graph. This modulus is closely related to that of the larger family of fractional edge covers; the modulus of the latter family is guaranteed to approximate the modulus of the former within a multiplicative factor based on the length of the shortest odd cycle in the graph. The bounds on edge cover modulus can be computed efficiently using a duality result that relates the fractional edge covers to the family of stars. author: - Adriana Ortiz-Aquino and Nathan Albin bibliography: - my_bib.bib date: | Department of Mathematics, Kansas State University, Manhattan, KS 66506, USA. \ September 22, 2023 title: "Modulus of edge covers and stars[^1] " --- # Introduction {#sec:introduction} The discrete $p$-modulus is a versatile and informative tool for studying many families of objects on graphs [@albin2017modulus; @albin2015modulus]. For example, the modulus of all paths connecting two nodes in a graph is related to known graph theoretic quantities such as shortest path, max flow/min cut, and effective resistance [@albin2017modulus; @shakeri2016generalized]. It has also been shown that the modulus of all cycles can be used for clustering and community detection [@shakeri2017network] and that the modulus of all spanning trees can be used to describe a hierarchical decomposition of the graph [@albin2020spanning; @kottegoda2020spanning; @albin2021fairest]. In general, modulus can be adapted to any family of graph objects and will measure the richness of that family; larger families of objects tend to have larger modulus. This paper considers the modulus of edge covers and gives an approximation to its value using the modulus of the family of fractional edge covers. As described in Section [2](#sec:defs){reference-type="ref" reference="sec:defs"} in more detail, the modulus problem is a convex optimization problem with a number of constraints determined by the number of elements in the family of graph objects under consideration. Thus, the modulus of a combinatorially large family, like the family of edge covers, can be difficult to compute directly. Through the theory of Fulkerson duality, it has been shown that every family of objects has a corresponding dual family, whose modulus is closely related to the modulus of the original family [@fulkerson1968blocking; @albin2019blocking]. We prove in this paper that the dual family of fractional edge covers is the family of stars, which greatly reduces the number of constraints for the $p$-modulus problem. In this way, we can calculate the modulus of fractional edge covers using the modulus of stars, and then obtain a bound for the modulus of edge covers. A probabilistic interpretation for the $p$-modulus problem has also proven valuable in some cases [@albin2016prob; @albin2021convergence]. This allows us to reinterpret the modulus problem as an optimization problem related to random objects. Specifically, it has been shown that solving the modulus problem is equivalent to finding a probability mass function (pmf) on the family of objects that minimizes a function of certain expectations on the edges. The primary contributions of this paper are the following. - Section [2](#sec:defs){reference-type="ref" reference="sec:defs"} introduces a new equivalence relation of families of graph objects with respect to modulus. 
- Definition [Definition 4](#def:bfec){reference-type="ref" reference="def:bfec"} introduces the concept of basic fractional edge covers. - Lemma [Lemma 8](#lem:extreme-bfec){reference-type="ref" reference="lem:extreme-bfec"} characterizes the set of extreme points of fractional edge covers by relating them to basic fractional edge covers. - Theorem [Theorem 11](#thm:bounds){reference-type="ref" reference="thm:bounds"} provides an estimate of the modulus of edge covers using the modulus of fractional edge covers. - Lemma [Lemma 16](#lem:Bhat_fec){reference-type="ref" reference="lem:Bhat_fec"} shows that the dual blocking family of fractional edge covers is equivalent to the family of stars, which allows the modulus of fractional edge covers to be computed more efficiently. This paper is organized as follows. Section [2](#sec:defs){reference-type="ref" reference="sec:defs"} introduces definitions and notations. Section [3](#sec:mod_ec_fec){reference-type="ref" reference="sec:mod_ec_fec"} defines the modulus problem for the family of edge covers and the family of fractional edge covers. Section [4](#sec:fulkerson_duality){reference-type="ref" reference="sec:fulkerson_duality"} reviews the concepts of blocking duality and proves that the dual family of fractional edge covers is the family of stars. Section [5](#sec:star_mod){reference-type="ref" reference="sec:star_mod"} explores the star modulus problem, reviews the probabilistic interpretation of modulus, and demonstrates these concepts through examples on several standard graphs. Section [6](#sec:num-examples){reference-type="ref" reference="sec:num-examples"} explores some of the fundamental differences between edge cover modulus and fractional edge cover modulus through two additional examples. Section [7](#sec:discussion){reference-type="ref" reference="sec:discussion"} ends with a discussion of these concepts and future work. # Definitions and Notation {#sec:defs} #### Objects and usage. Let $G = (V,E,\sigma)$ be an undirected graph with vertex set $V$, edge set $E$, and a positive vector $\sigma\in\mathbb{R}^E_{>0}$ that assigns to each edge, $e\in E$, a positive weight, $\sigma(e)$. Let $\Gamma$ be a *family of objects* on $G$. The concept of *object* is very flexible. The present paper is focused primarily on three specific families: the families of stars, edge covers, and fractional edge covers. Each of these families is defined below. The family $\Gamma$ is associated with a nonnegative *usage matrix*, $\mathcal{N}\in\mathbb{R}_{\ge 0}^{\Gamma\times E}$, where $\mathcal{N}(\gamma,e)$ indicates the degree to which the object $\gamma\in\Gamma$ "uses" the edge $e\in E$. When $\Gamma$ consists of subsets of $E$, a natural choice for $\mathcal{N}$ is the indicator function $$\label{eq:natural-N} \mathcal{N}(\gamma,e) = \mathbbm{1}_{\gamma}(e) := \begin{cases} 1 & \text{if }e \in \gamma, \\ 0 & \text{otherwise}. \end{cases}$$ For the family of fractional edge covers, however, $\mathcal{N}$ will be allowed to take other real values. In this paper, we restrict our attention to families that are *nontrivial* in the sense that each row of $\mathcal{N}$ contains at least one nonzero entry. Note that we refer to $\mathcal{N}$ as a matrix, even if $\Gamma$ is an infinite set. 
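For concreteness, here is a small illustration of these definitions (our own example, not one appearing in the paper). Let $G$ be the unweighted triangle with vertices $a,b,c$ and edges $e_1=\{a,b\}$, $e_2=\{b,c\}$, $e_3=\{a,c\}$, and let $\Gamma$ consist of the two simple paths connecting $a$ to $c$, namely $\gamma_1=\{e_3\}$ and $\gamma_2=\{e_1,e_2\}$, with the natural usage [\[eq:natural-N\]](#eq:natural-N){reference-type="eqref" reference="eq:natural-N"}. Listing the rows in the order $\gamma_1,\gamma_2$ and the columns in the order $e_1,e_2,e_3$, the usage matrix is $$\mathcal{N} = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix},$$ so each row simply records which edges the corresponding object uses.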
The usage matrix provides a useful representation of the objects in $\Gamma$; by associating each $\gamma$ with the corresponding row vector, $\mathcal{N}(\gamma,\cdot)$, of the usage matrix, we may view a family of objects as a subset of the nonnegative vectors $\mathbb{R}^E_{\ge 0}$. Given a family of objects, $\Gamma\subseteq\mathbb{R}^E_{\ge 0}$, it is often useful to consider its *convex hull*, $\text{co}(\Gamma)$, as well as its *dominant*, $$\text{Dom}(\Gamma) = \text{co}(\Gamma) + \mathbb{R}^{E}_{\ge 0}.$$ #### Stars, edge covers, and fractional edge covers. To each vertex, $v\in V$, is associated the *star*, $\delta(v)\subset E$, comprising the set of edges incident to $v$. The family, $\Gamma_{\text{star}}$, of all stars in $G$ is endowed with the natural usage matrix [\[eq:natural-N\]](#eq:natural-N){reference-type="eqref" reference="eq:natural-N"}. Any subset of a star is called a *substar*. An *edge cover* of $G$ is a set of edges $C\subset E$ such that each vertex in $G$ is incident to at least one edge in $C$, that is, $|C\cap\delta(v)|\ge 1$ for every $v\in V$. (If $|C\cap\delta(v)|=1$ for every $v\in V$, the edge cover is called a *perfect matching*.) The family of all edge covers is denoted $\Gamma_{\text{ec}}$ and is also endowed with the natural usage matrix [\[eq:natural-N\]](#eq:natural-N){reference-type="eqref" reference="eq:natural-N"}. The concept of edge cover can be generalized as follows (see [@schrijver2003combinatorial]). Let $\gamma\in\mathbb{R}_{\ge 0}^E$ be a nonnegative vector on $E$. If $$\gamma(\delta(v)) := \sum_{e\in\delta(v)}\gamma(e)\ge 1\quad\text{for all }v\in V,$$ then $\gamma$ is called a *fractional edge cover*. Each such $\gamma$ can be considered to be an object on the graph $G$ with corresponding edge usage $$\mathcal{N}(\gamma,\cdot) := \gamma(\cdot),$$ yielding the (uncountably infinite) family of fractional edge covers, $\Gamma_{\text{fec}}$. Each edge cover, $\gamma\in\Gamma_{\text{ec}}$, can be associated with its incidence vector $\mathbbm{1}_\gamma$, which provides a natural inclusion $\Gamma_{\text{ec}}\subset\Gamma_{\text{fec}}$. (The reverse inclusion is only true on the trivial graph.) #### Densities and admissibility. A nonnegative vector, $\rho\in\mathbb{R}^E_{\ge 0}$, is called a *density*. Each density induces a measure of length, called the *$\rho$-length*, on the objects in $\Gamma$. Given a density $\rho$ and an object $\gamma\in\Gamma$, the $\rho$-length of $\gamma$, $\ell_\rho(\gamma)$, is defined as $$\ell_{\rho}(\gamma) := \sum_{e \in E} \mathcal{N}(\gamma, e)\rho(e)= \rho^T\gamma,$$ where the last equality arises from identifying $\gamma$ with its row in the matrix $\mathcal{N}$. A density, $\rho$, is called *admissible for $\Gamma$*, or simply *admissible*, if $\ell_{\rho}(\gamma) \ge 1$ for every $\gamma \in \Gamma$. The set of all admissible densities for a family $\Gamma$ is denoted as $$\text{Adm}(\Gamma) := \{ \rho \in \mathbb{R}_{\ge 0}^E : \ell_{\rho}(\gamma) \ge 1, \forall \gamma \in \Gamma\}.$$ **Lemma 1**. *Let $\Gamma_1$ and $\Gamma_2$ be two families of objects satisfying $\Gamma_1\subseteq\Gamma_2$. Then $\text{Adm}(\Gamma_2)\subseteq\text{Adm}(\Gamma_1)$.* *Proof.* This is evident from the definitions. Any density that is admissible for the larger family, $\Gamma_2$, is necessarily admissible for $\Gamma_1$. ◻ #### Energy, modulus, and extremal densities. 
For $1\le p\le \infty$, the *$p$-energy* of a density $\rho$ is defined as $$\mathcal{E}_{p,\sigma}(\rho) := \begin{cases} \sum\limits_{e\in E}\sigma(e)\rho(e)^p & \text{if }1\le p < \infty,\\ \max\limits_{e\in E}\sigma(e)\rho(e) & \text{if }p=\infty. \end{cases}$$ The *$p$-modulus* of $\Gamma$ is then defined as $$\label{eq:mod} \text{Mod}_{p,\sigma}(\Gamma) := \inf_{\rho \in \text{Adm}(\Gamma)} \mathcal{E}_{p,\sigma}(\rho).$$ If the graph is unweighted, we define $\sigma \equiv 1$ and adopt the simplified notation $\mathcal{E}_p(\rho)$ and $\text{Mod}_p(\Gamma)$. In the case that $\Gamma$ is finite, modulus can be expressed as a convex optimization problem of the form $$\label{eq:optimization_prob} \begin{split} \text{minimize} & \quad \mathcal{E}_{p,\sigma}(\rho) \\ \text{subject to} & \quad \rho \succeq 0 \\ & \quad \mathcal{N}\rho \succeq \mathbf{1}. \\ \end{split}$$ The notation $\succeq$ indicates elementwise comparison and $\mathbf{1}$ indicates the appropriately shaped vector of all ones. A density $\rho \in \text{Adm}(\Gamma)$ is said to be *extremal* if $\mathcal{E}_{p,\sigma}(\rho) = \text{Mod}_{p,\sigma}(\Gamma)$. The notation $\rho^*$ is commonly used to denote an extremal density. If $\Gamma$ is finite, then [@albin2017modulus Theorem 4.1] implies that an extremal density exists. Moreover, it is unique for $1<p<\infty$. A useful property of modulus comes from Proposition 3.4 in [@albin2017modulus], which is a direct consequence of Lemma [Lemma 1](#lem:adm-monotone){reference-type="ref" reference="lem:adm-monotone"}. **Proposition 2** (Monotonicity). *Let $\Gamma_1$ and $\Gamma_2$ be families of graph objects. If $\Gamma_1 \subseteq \Gamma_2$, then $\text{Mod}_{p,\sigma}(\Gamma_1) \le \text{Mod}_{p,\sigma}(\Gamma_2)$.* #### Equivalent families. Two families of objects, $\Gamma$ and $\Gamma'$ are called *equivalent* (in the sense of modulus) if $\text{Adm}(\Gamma)=\text{Adm}(\Gamma')$. We shall use the notation $\Gamma\simeq\Gamma'$ to indicate that the two families are equivalent in this sense. In light of [\[eq:mod\]](#eq:mod){reference-type="eqref" reference="eq:mod"}, this implies that $\text{Mod}_{p,\sigma}(\Gamma)=\text{Mod}_{p,\sigma}(\Gamma')$ for any choice of the parameter $p$ and weights $\sigma$; equivalent families are indistinguishable in the context of modulus. One straightforward example of equivalence comes from the following lemma. **Lemma 3**. *Let $\Gamma$ be a family of objects on $G$. Then $\Gamma\simeq\text{Dom}(\Gamma)$.* *Proof.* By definition, $\Gamma \subseteq \text{Dom}(\Gamma)$, so by Lemma [Lemma 1](#lem:adm-monotone){reference-type="ref" reference="lem:adm-monotone"}, $\text{Adm}(\text{Dom}(\Gamma)) \subseteq \text{Adm}(\Gamma)$. Let $\rho \in \text{Adm}(\Gamma)$ and let $\tilde{\gamma} \in \text{Dom}(\Gamma)$. Then there must exist a collection of objects, $\gamma_1,\gamma_2,\cdots,\gamma_r\in\Gamma$, a choice of weights $\mu_1, \mu_2, \ldots, \mu_r \ge 0$ summing to one, and a vector $\xi \in \mathbb{R}^E_{\ge 0}$ such that $$\tilde{\gamma} = \sum_{i=1}^r \mu_i \gamma_i + \xi.$$ Since $\rho$ and $\xi$ are nonnegative, and since $\rho$ is admissible for $\Gamma$, $$\rho^T \tilde{\gamma} = \sum_{i=1}^r \mu_i \rho^T \gamma_i + \rho^T \xi \ge 1.$$ Thus, $\rho \in \text{Adm}(\text{Dom}(\Gamma))$. ◻ For a given family $\Gamma$, consider the set of extreme points $\text{ext}(\text{Dom}(\Gamma))$. The extreme points are nonnegative vectors in $\mathbb{R}^E$ and, therefore, can be viewed as a family of objects. 
Lemma [Lemma 3](#lem:gamma-equiv-dom){reference-type="ref" reference="lem:gamma-equiv-dom"} implies that $\Gamma\simeq\text{ext}(\text{Dom}(\Gamma))$, since both families share the same dominant. As an application of this last equivalence, consider the family, $\Gamma'_{st}$, of all walks in $G$ connecting two distinct vertices $s$ and $t$ and the family, $\Gamma_{st}$, of all simple paths connecting $s$ and $t$. Then, since $\Gamma_{st} = \text{ext}(\text{Dom}(\Gamma'_{st}))$, it follows that $\Gamma_{st}\simeq\Gamma'_{st}$. This simply recovers the intuitive observation that a density is admissible for the family of $st$-walks if and only if it is admissible for the family of $st$-paths. # Modulus of edge covers and fractional edge covers {#sec:mod_ec_fec} We first calculate $\text{Mod}_2(\Gamma_{\text{ec}})$ for some common graphs. It is straightforward to verify that $\Gamma_{\text{ec}}$ is equivalent to the family of minimal edge covers, which can simplify some of the following computations. *Example 1* (Star Graph). Let $G = S_n$ be the unweighted star graph with $|V| = n\ge 3$ and $|E| = n-1$. Note that all edges of $S_n$ are *pendant edges*---they are incident on at least one vertex of degree one. It follows that there is a single edge cover: $\Gamma_{\text{ec}} = \{ E\}$. The symmetries of the graph and the uniqueness of the extremal density suggest that we restrict our search to constant densities. Since the single edge cover has $n-1$ edges, the density $\rho_0 \equiv \frac{1}{n-1}$ is admissible. This provides an upper bound on modulus: $$\text{Mod}_2(\Gamma_{\text{ec}}) \le \mathcal{E}_2(\rho_0) = \sum_{e \in E} \rho_0(e)^2 = (n-1) \cdot \frac{1}{(n-1)^2} = \frac{1}{n-1}.$$ The fact that equality holds is established later through the lower bound in Example [Example 7](#ex:star-star){reference-type="ref" reference="ex:star-star"}. *Example 2* (Cycle Graph). Let $G = C_n$ be the unweighted cycle graph with $|V| = n$ and $|E| = n$. Again, symmetry suggests that the extremal density is constant. To calculate the edge cover modulus, we need to consider the cases when $n$ is even or odd. - Let $n$ be even; then the smallest edge covers are the two perfect matchings of $C_n$, each with $\frac{n}{2}$ edges, and every edge cover contains at least $\frac{n}{2}$ edges. Thus, $\rho_0 \equiv \frac{2}{n}$ is admissible, and $$\text{Mod}_2(\Gamma_{\text{ec}}) \le \mathcal{E}_2(\rho_0) = \sum_{e \in E} \rho_0(e)^2 = n \cdot \frac{4}{n^2} = \frac{4}{n}.$$ - Let $n$ be odd; then $C_n$ has no perfect matching. In this case, there are $n$ minimal edge covers in $\Gamma_{\text{ec}}$, each with $\frac{n+1}{2}$ edges as in Figure [\[fig:odd\]](#fig:odd){reference-type="ref" reference="fig:odd"}, and every edge cover contains at least $\frac{n+1}{2}$ edges. So, $\rho_0 \equiv \frac{2}{n+1}$ is admissible and $$\text{Mod}_2(\Gamma_{\text{ec}}) \le \mathcal{E}_2(\rho_0) = \sum_{e \in E} \rho_0(e)^2 = n \cdot \frac{4}{(n+1)^2} = \frac{4n}{(n+1)^2}.$$ By making the argument by symmetry more precise, using the Symmetry Rule of [@albin2017modulus Section 5.3], it is possible to show that $\rho_0$ is extremal in both cases. For the even cycle, this can also be established by the lower bound found in Example [Example 8](#ex:cycle-star){reference-type="ref" reference="ex:cycle-star"}. *Example 3* (Complete Graph). Let $G = K_n$ be the unweighted complete graph with $|V| = n$ and $|E| = \frac{n(n-1)}{2}$. To calculate the modulus of edge covers, we again consider the cases when $n$ is even or odd. 
- If $n$ is even, then the smallest edge covers of $K_n$ are its perfect matchings, with $\frac{n}{2}$ edges, and every edge cover contains at least $\frac{n}{2}$ edges, so $\rho_0 \equiv \frac{2}{n}$ is admissible, and $$\text{Mod}_2(\Gamma_{\text{ec}}) \le \mathcal{E}_2(\rho_0) = \sum_{e \in E} \rho_0(e)^2 = \frac{n(n-1)}{2} \cdot \frac{4}{n^2} = \frac{2(n-1)}{n}.$$ - If $n$ is odd, then every edge cover of $K_n$ contains at least $\frac{n+1}{2}$ edges, and this bound is attained, so $\rho_0 \equiv \frac{2}{n+1}$ is admissible and $$\text{Mod}_2(\Gamma_{\text{ec}}) \le \mathcal{E}_2(\rho_0) = \sum_{e \in E} \rho_0(e)^2 = \frac{n(n-1)}{2} \cdot \frac{4}{(n+1)^2} = \frac{2n(n-1)}{(n+1)^2}.$$ As in the previous example, the Symmetry Rule can be used to show that these bounds are sharp: the $\rho_0$ are extremal in both cases. Numerical computation of the edge cover modulus, $\text{Mod}_{p,\sigma}(\Gamma_{\text{ec}})$, is challenging because each edge cover induces an inequality constraint in the optimization problem [\[eq:optimization_prob\]](#eq:optimization_prob){reference-type="eqref" reference="eq:optimization_prob"}. On general graphs, this leads to an exponentially large number of constraints. For example, when $n$ is even, the complete graph $K_n$ has $(n-1)!!$ smallest edge covers (namely, its perfect matchings). Even the task of enumerating all edge covers of a large graph quickly becomes computationally infeasible. One way to circumvent this combinatorial complexity is to introduce a relaxed problem. This can be accomplished using fractional edge covers. ## The structure of fractional edge covers Since the family $\Gamma_{\text{fec}}$ is uncountably infinite, the modulus problem [\[eq:optimization_prob\]](#eq:optimization_prob){reference-type="eqref" reference="eq:optimization_prob"} on this family cannot be immediately viewed as a standard convex optimization problem. However, it is possible to find an equivalent finite family (in the sense of Section [2](#sec:defs){reference-type="ref" reference="sec:defs"}), $\Gamma_{\overline{\text{fec}}}$. This comes from the observations that $\Gamma_{\text{fec}}$ is convex and *recessive*, in the sense that $\Gamma_{\text{fec}}=\text{Dom}(\Gamma_{\text{fec}})$, which implies that $\Gamma_{\text{fec}}\simeq\text{ext}(\Gamma_{\text{fec}})$. The extreme points of $\Gamma_{\text{fec}}$ have a relatively simple structure. The following definition, lemmas, and proofs were inspired by the work done in [@scheinerman2011fractional] with fractional perfect matchings. **Definition 4**. A vector $\gamma\in\mathbb{R}^{E}_{\ge 0}$ is called a *basic fractional edge cover* if - $\gamma$ is a fractional edge cover taking only values in $\{0,1/2,1\}$, - the support, $\mathop{\mathrm{supp}}\gamma$, is a vertex-disjoint union of odd cycles and substars, and - $\gamma(e)=1/2$ if and only if $e$ belongs to an odd cycle in $\mathop{\mathrm{supp}}\gamma$. The family of all basic fractional edge covers is denoted $\Gamma_{\overline{\text{fec}}}$. The remainder of this section is devoted to showing that the extreme points of $\Gamma_{\text{fec}}$ are basic fractional edge covers. **Lemma 5**. *If $\gamma\in\text{ext}(\Gamma_{\text{fec}})$ then $\mathop{\mathrm{supp}}\gamma$ contains no even cycles.* *Proof.* Suppose, to the contrary, that $C$ is an even cycle in $\mathop{\mathrm{supp}}\gamma$. Define $\tau\in\mathbb{R}^E$ to be a function that assigns $1$ and $-1$ alternately to the edges of $C$ (starting from an arbitrary edge), and that assigns 0 to all other edges of $G$. 
Note that, for any number $\alpha$ and any vertex $v$, $$(\gamma+\alpha\tau)(\delta(v)) = \gamma(\delta(v))+\alpha\tau(\delta(v)) = \gamma(\delta(v)) \ge 1.$$ Thus, for sufficiently small $|\alpha|$, $\gamma + \alpha \tau$ is non-negative and, therefore, a fractional edge cover. This implies that the extreme point $\gamma$ lies in an open line segment in $\Gamma_{\text{fec}}$, which is a contradiction. ◻ **Lemma 6**. *Let $\gamma\in\text{ext}(\Gamma_{\text{fec}})$. Then, any connected component of $\mathop{\mathrm{supp}}\gamma$ that contains a pendant edge is a substar.* *Proof.* Consider a connected component of $H:=\mathop{\mathrm{supp}}\gamma$ containing a pendant edge. We shall show that this component contains no path of length greater than 2, which implies that the component is a substar. Let $e = (u,v)$ be the pendant edge, with $\deg_H(u) = 1$. If $\deg_H(v) = 1$, then the component consists solely of the edge $e$ and we are done. Now, suppose $\deg_H(v) > 1$ and that there is a path of length greater than 2. Starting at node $u$, trace a maximum-length path in $H$. This path, which is assumed to have length at least 3, must either end at another pendant edge in $H$ or cannot be extended further without creating a cycle. We consider these two possibilities in turn. 1. Assume the path ends in another pendant edge, denoted $\{x,y\}$, with $\deg_H(y)=1$. Since the path has length at least $3$, $x\ne v$. Let $\tau\in\mathbb{R}^E$ be a function that assigns 0 to the two pendant edges, assigns 1 and $-1$ alternately to the other edges in the path, and assigns 0 to all other edges in $E$ (see Figure [\[fig:pendant_edge_proof\]](#fig:pendant_edge_proof){reference-type="ref" reference="fig:pendant_edge_proof"}). Again, if we show that for sufficiently small $|\alpha|$, $\gamma+\alpha\tau\in\Gamma_{\text{fec}}$, we arrive at a contradiction. There are two types of vertices to consider in this case. Consider the vertex $v$. The path in $H$ connecting $u$ to $y$ passes from $u$ to $v$ and then on to a third vertex $w$. (It is possible that $w=x$.) Since $\gamma\in\Gamma_{\text{fec}}$ and $\{u,v\}$ is the only edge of $\mathop{\mathrm{supp}}\gamma$ incident on $u$, we know that $\gamma(\{u,v\}) \geq 1$, and since $\{v,w\}\in\mathop{\mathrm{supp}}\gamma$, we know that $\gamma(\{v,w\})>0$. So, $$(\gamma+\alpha\tau)(\delta(v)) = \gamma(\delta(v))+\alpha\tau(\delta(v)) \ge \gamma(\{u,v\}) + \gamma(\{v,w\}) - |\alpha| \ge 1 + \gamma(\{v,w\}) - |\alpha|,$$ which is greater than or equal to 1 whenever $|\alpha| \le \gamma(\{v,w\})$. The argument at the vertex $x$ is similar. For all other vertices $z$, $\tau(\delta(z))=0$, so $(\gamma+\alpha\tau)(\delta(z)) = \gamma(\delta(z)) \ge 1$. This yields the desired contradiction. 2. Assume instead that the path ends in a cycle. That is, one can trace a path in $H$ starting from $u$, passing through $v$, and continuing until the path eventually circles back to repeat a vertex $w$. (It is possible that $w=v$.) From Lemma [Lemma 5](#lem:no-even-cycles){reference-type="ref" reference="lem:no-even-cycles"}, the cycle through $w$ must be odd. Let $\tau\in\mathbb{R}^E$ be a function, shown in Figure [\[fig:oddcycle_pendantedge\]](#fig:oddcycle_pendantedge){reference-type="ref" reference="fig:oddcycle_pendantedge"}, that assigns 0 to the pendant edge, assigns $\pm 1/2$ alternately on the odd cycle with the two edges through $w$ sharing the same value, and assigns 0 to all other edges in $E$. If $w\ne v$, then $\tau$ should alternately assign $\pm 1$ to the edges connecting $v$ and $w$ in such a way that $\tau(\delta(w))=0$. (The value of $\tau$ is set to zero on all other edges.)
It can be seen that this once again leads to a contradiction of the assumption that $\gamma$ is an extreme point. The function $\tau$ sums to zero on all stars other than $\delta(v)$, and $\gamma(\delta(v))>1$, which makes $\gamma + \alpha\tau\in\Gamma_{\text{fec}}$ for sufficiently small $|\alpha|$.  ◻ **Lemma 7**. *Let $\gamma\in\text{ext}(\Gamma_{\text{fec}})$. Then, any connected component of $\mathop{\mathrm{supp}}\gamma$ that does not contain a pendant edge is an odd cycle.* *Proof.* Let $H'$ be a connected component of $H := \mathop{\mathrm{supp}}\gamma$ that does not contain a pendant edge. Then $H'$ must contain at least one cycle, $C$. By Lemma [Lemma 5](#lem:no-even-cycles){reference-type="ref" reference="lem:no-even-cycles"}, any such cycle must be odd. As before, we proceed by contradiction. Suppose that $H'\setminus C\ne\emptyset$. Then $H'$ must contain a vertex $v$ such that $\deg_H(v) \geq 3$. That is, there must be an edge of $H$ that is incident to $v$ but does not lie in the cycle $C$. Consider extending this edge to a maximum-length path in $H$. Since the component is assumed to contain no pendant edges, this path must eventually create a cycle. There are a few cases to consider. One possibility is that the path leaving $v$ returns to a different vertex $w$ in $C$, producing a "snowman" as shown in Figure [\[fig:snowman\]](#fig:snowman){reference-type="ref" reference="fig:snowman"}. The vertices $v$ and $w$ are connected by two paths in $C$, and since $C$ is an odd cycle, one of these paths has an even number of edges while the other has an odd number. Joining the path that leaves $C$ at $v$ and returns at $w$ to each of these two paths in turn produces two cycles whose lengths have opposite parities; the even one lies in $\mathop{\mathrm{supp}}\gamma$, which contradicts Lemma [Lemma 5](#lem:no-even-cycles){reference-type="ref" reference="lem:no-even-cycles"}. So, the path leaving $C$ from $v$ does not return to any other vertex of $C$ (and does not end in a pendant edge). Thus, $H$ must contain a subgraph $K$ with 2 (necessarily odd) cycles connected by a path (possibly of length 0) as in Figure [\[fig:oddcyclecases\]](#fig:oddcyclecases){reference-type="ref" reference="fig:oddcyclecases"}. Let $\tau\in\mathbb{R}^E$ be a function that assigns 0 to edges not in $K$, $\pm 1$ alternately on the path connecting the two cycles of $K$, and $\pm 1/2$ alternately around the cycles of $K$ so that $\tau$ sums to zero on all stars of $G$. (See Figure [\[fig:odd_cycle_barbell\]](#fig:odd_cycle_barbell){reference-type="ref" reference="fig:odd_cycle_barbell"} and Figure [\[fig:bowtie\]](#fig:bowtie){reference-type="ref" reference="fig:bowtie"} for examples.) This again yields a contradiction, since $\gamma+\alpha\tau\in\Gamma_{\text{fec}}$ for sufficiently small $|\alpha|$.  ◻ **Lemma 8**. *Every extreme point in $\text{ext}(\Gamma_{\text{fec}})$ is a basic fractional edge cover.* *Proof.* Lemmas [Lemma 6](#lem:component-substars){reference-type="ref" reference="lem:component-substars"} and [Lemma 7](#lem:component-cycles){reference-type="ref" reference="lem:component-cycles"} show that each connected component of $H:=\mathop{\mathrm{supp}}\gamma$ is either a substar or an odd cycle in $G$. To complete the proof, we must show that $\gamma$ only takes values in $\{0,1/2,1\}$, with the value $1/2$ occurring exactly on the odd cycles of $H$. First, consider a substar component, $S$, of $H$. All edges in this case are pendant edges and, therefore, $\gamma$ must be at least 1 on each edge.
On the other hand, suppose $\gamma(e')>1$ for some $e'\in S$, and define $$\tilde{\gamma}(e) = \begin{cases} 1 & \text{if }e=e',\\ \gamma(e) &\text{otherwise}. \end{cases}$$ Then $\tilde{\gamma}\in\Gamma_{\text{fec}}$ and $\tilde{\gamma}\preceq\gamma$. Since $\Gamma_{\text{fec}}$ is recessive, this implies that $\gamma$ lies on the relative interior of a ray in $\Gamma_{\text{fec}}$ emanating from $\tilde{\gamma}$ and, therefore, that $\gamma$ cannot be an extreme point. Next, consider a component, $C$, of $H$ comprising an odd cycle. We wish to show that $\gamma(e)=1/2$ on all edges of $C$. We begin by observing that $\gamma(\delta(v))=1$ for every vertex $v$ in the cycle. Suppose to the contrary that $\gamma(\delta(v))>1$ for some vertex $v$, and consider the vector $\tau\in\mathbb{R}^E$, supported on $C$, alternately taking the values $\pm 1$ around the cycle in such a way that $+1$ is assigned to both edges incident on $v$. Then, $\gamma+\alpha\tau\in\Gamma_{\text{fec}}$ for sufficiently small $|\alpha|$, contradicting the extremality of $\gamma$. Next, we argue that if $\gamma(\delta(v))=1$ for all $v\in C$ then $\gamma(e)=1/2$ for all edges in $C$. To see this, define $$\tilde{\gamma}(e) = \begin{cases} 1/2 &\text{if }e\in C,\\ \gamma(e)&\text{otherwise}. \end{cases}$$ Then, $\tilde{\gamma}\in\Gamma_{\text{fec}}$ and $\tilde{\gamma}(\delta(v))=1$ for every vertex $v\in C$. Define $\tau:=\gamma-\tilde{\gamma}$. Then $\mathop{\mathrm{supp}}\tau\subseteq C$ and $\tau(\delta(v))=0$ for every $v\in C$. Choose an adjacent pair of edges $e_1,e_2\in C$. Then $\tau(e_2)=-\tau(e_1)$. Continuing around the cycle we find that the next edge, $e_3$, must satisfy $\tau(e_3)=-\tau(e_2)=\tau(e_1)$ and so on. Since the cycle is odd, the last edge we cross, $e_r$, that completes the cycle must have $\tau(e_r)=\tau(e_1)$. But, since $\tau$ must sum to zero on the star including these two edges, $\tau(e_r)=\tau(e_1)=0$, implying that $\tau=0$. Thus, $\gamma=\tilde{\gamma}$, implying that $\gamma(e)=1/2$ on all edges $e\in C$. ◻ **Theorem 9**. *The families $\Gamma_{\text{fec}}$ and $\Gamma_{\overline{\text{fec}}}$ are equivalent; $\Gamma_{\text{fec}}\simeq\Gamma_{\overline{\text{fec}}}$.* *Proof.* Lemma [Lemma 8](#lem:extreme-bfec){reference-type="ref" reference="lem:extreme-bfec"} implies that $\text{ext}(\Gamma_{\text{fec}})\subseteq\Gamma_{\overline{\text{fec}}}\subseteq\Gamma_{\text{fec}}$, which shows that $\text{Adm}(\Gamma_{\text{fec}})\subseteq\text{Adm}(\Gamma_{\overline{\text{fec}}})\subseteq\text{Adm}(\text{ext}(\Gamma_{\text{fec}}))=\text{Adm}(\Gamma_{\text{fec}})$. ◻ Figures [\[fig:W5fec\]](#fig:W5fec){reference-type="ref" reference="fig:W5fec"} and [\[fig:W6fec\]](#fig:W6fec){reference-type="ref" reference="fig:W6fec"} demonstrate the implication of Theorem [Theorem 9](#thm:fec-bfec-equiv){reference-type="ref" reference="thm:fec-bfec-equiv"}. These figures show all extreme points (up to rotation and reflection) of $\Gamma_{\text{fec}}$ for the wheel graphs $W_5$ and $W_6$ respectively. Note that (as guaranteed by the theorem) all are basic fractional edge covers. Any density that is admissible for one of these sets is admissible for all fractional edge covers on the corresponding graph. **Corollary 10**. *If $G$ is a bipartite graph, then $\Gamma_{\text{fec}}\simeq\Gamma_{\text{ec}}$.* *Proof.* Let $\gamma\in\Gamma_{\overline{\text{fec}}}$. Since a bipartite graph has no odd cycles, $\gamma$ must take values in $\{0,1\}$. 
Moreover, $\gamma(\delta(v))\ge 1$ for every $v\in V$ and, therefore, $\gamma$ is (the incidence vector of) an edge cover. ◻ ## Computing $\text{Mod}_{p}(\Gamma_{\text{fec}})$ We now revisit the graphs we studied at the beginning of this section and calculate the unweighted 2-modulus of fractional edge covers. By Theorem [Theorem 9](#thm:fec-bfec-equiv){reference-type="ref" reference="thm:fec-bfec-equiv"}, we have that $\Gamma_{\text{fec}} \simeq\Gamma_{\overline{\text{fec}}}$. Therefore, calculating $\text{Mod}_2(\Gamma_{\text{fec}})$ is equivalent to calculating $\text{Mod}_2(\Gamma_{\overline{\text{fec}}})$. *Example 4* (Star Graph). Let $G = S_n$ be the unweighted star graph. Note that $S_n$ is a bipartite graph, so by Corollary [Corollary 10](#cor:bipartite-equiv){reference-type="ref" reference="cor:bipartite-equiv"}, $\Gamma_{\text{ec}} \simeq\Gamma_{\text{fec}}$ and $$\text{Mod}_2(\Gamma_{\text{fec}}) = \text{Mod}_2(\Gamma_{\text{ec}}) = \frac{1}{n-1}.$$ *Example 5* (Cycle Graph). Let $G = C_n$ be the unweighted cycle graph. To calculate the fractional edge cover modulus, we again consider the cases when $n$ is even or odd. - If $n$ is even, then the graph is bipartite. Again, by Corollary [Corollary 10](#cor:bipartite-equiv){reference-type="ref" reference="cor:bipartite-equiv"}, $\Gamma_{\text{ec}} \simeq\Gamma_{\text{fec}}$ and $$\text{Mod}_2(\Gamma_{\text{fec}}) = \text{Mod}_2(\Gamma_{\text{ec}}) = \frac{4}{n}.$$ - If $n$ is odd, then $\Gamma_{\overline{\text{fec}}}$ is the union of all the minimal edge covers and the constant vector $\gamma \equiv 1/2$. It is straightforward to verify that the constant density $\rho_0 \equiv \frac{2}{n}$ is admissible for this family. Thus, $$\text{Mod}_2(\Gamma) \le \mathcal{E}_2(\rho_0) = \sum_{e \in E} \rho_0(e)^2 = n \cdot \frac{4}{n^2} = \frac{4}{n}.$$ The corresponding lower bound is established in Example [Example 8](#ex:cycle-star){reference-type="ref" reference="ex:cycle-star"} of Section [5](#sec:star_mod){reference-type="ref" reference="sec:star_mod"}. *Example 6* (Complete Graph). Let $G = K_n$ be the unweighted complete graph with $|V| = n$ and $|E| = \frac{n(n-1)}{2}$. In this example, the number of basic fractional edge covers grows quickly with $n$, and is difficult to visualize. Nevertheless, we can show that $\rho_0 = \frac{2}{n}$ is the extremal density and that the modulus is $$\text{Mod}_2(\Gamma_{\text{fec}}) = \frac{2(n-1)}{n}.$$ This is proved in Example [Example 9](#ex:complete-star){reference-type="ref" reference="ex:complete-star"} in Section [5](#sec:star_mod){reference-type="ref" reference="sec:star_mod"}. ## Bounds on the modulus of edge covers One reason that fractional edge covers are interesting in the context of modulus comes from the following theorem. **Theorem 11**. *Let $k$ be the length of the shortest odd cycle in $G$. Then for all $p\in[1,\infty)$ $$\left( \frac{k}{k+1} \right)^p \text{Mod}_{p,\sigma}(\Gamma_{\text{fec}}) \leq \text{Mod}_{p,\sigma}(\Gamma_{\text{ec}}) \leq \text{Mod}_{p,\sigma}(\Gamma_{\text{fec}}).$$ In particular, if $G$ contains a triangle, then $$\left( \frac{3}{4} \right)^p \text{Mod}_{p,\sigma}(\Gamma_{\text{fec}}) \leq \text{Mod}_{p,\sigma}(\Gamma_{\text{ec}}) \leq \text{Mod}_{p,\sigma}(\Gamma_{\text{fec}}).$$ If $G$ is bipartite then $\text{Mod}_{p,\sigma}(\Gamma_{\text{ec}}) = \text{Mod}_{p,\sigma}(\Gamma_{\text{fec}})$.* The second inequality in these bounds follows from the inclusion $\Gamma_{\text{ec}} \subseteq \Gamma_{\text{fec}}$. 
The other bound follows from a more careful look at odd cycles. The key estimate is the following. **Lemma 12**. *Let $C\subset E$ be an odd cycle with length $|C|=k=2\ell+1$, let $\Gamma_{\text{ec}}(C)$ be the set of edge covers for $C$, and let $\rho\in\mathbb{R}^C_{\ge 0}$. Then $$\frac{1}{2}\sum_{e\in C}\rho(e) \ge \frac{k}{k+1}\min_{\gamma\in\Gamma_{\text{ec}}(C)}\ell_\rho(\gamma).$$* *Proof.* Since $\rho$ is non-negative, the minimum value on the right must be attained by a minimal edge cover. A minimal edge cover for an odd cycle must look like a rotation of Figure [\[fig:odd\]](#fig:odd){reference-type="ref" reference="fig:odd"}, with all vertices but one incident on exactly one edge. There are $k$ such edge covers, which we may enumerate $\{\gamma_1,\gamma_2,\ldots,\gamma_k\}$. Moreover, each edge of $C$ lies in exactly $\ell+1$ of these minimal edge covers, which implies that $$\frac{1}{k+1}\sum_{i=1}^k\gamma_i = \frac{1}{2(\ell+1)}\sum_{i=1}^k\gamma_i = \frac{1}{2}\mathbf{1}.$$ From this, it follows that $$\frac{1}{2}\sum_{e\in C}\rho(e) =\frac{1}{2}\rho^T\mathbf{1} = \frac{1}{k+1}\sum_{i=1}^k\ell_\rho(\gamma_i) \ge \frac{k}{k+1}\min_{\gamma\in\Gamma_{\text{ec}}(C)}\ell_\rho(\gamma).$$ ◻ **Lemma 13**. *Let $\rho \in \text{Adm}(\Gamma_{\text{ec}})$. If the graph, $G$, contains no odd cycles shorter than $k=2\ell + 1$, then $\displaystyle \frac{k+1}{k}\rho \in \text{Adm}(\Gamma_{\overline{\text{fec}}})$.* *Proof.* Let $\rho \in \text{Adm}(\Gamma_{\text{ec}})$ and let $\gamma \in \Gamma_{\overline{\text{fec}}}$. From Definition [Definition 4](#def:bfec){reference-type="ref" reference="def:bfec"}, we know that the support of $\gamma$ is a vertex-disjoint union of substars, $\mathscr{S} = \{S_1, \ldots, S_t\}$, and odd cycles, $\mathscr{C} =\{C_1, \ldots, C_r\}$. Moreover, $\gamma$ takes the value $1$ on each substar edge and $1/2$ on each cycle edge. From $\gamma$, we construct an edge cover, $\tilde{\gamma}\in\Gamma_{\text{ec}}$, as follows. For edges $e\notin\mathop{\mathrm{supp}}\gamma$, we set $\tilde{\gamma}(e)=0$. Similarly, we set $\tilde{\gamma}(e)=\gamma(e)=1$ for each edge $e\in\bigcup_{i=1}^tS_i$. The remaining edges lie on the disjoint union of the odd cycles in $\mathscr{C}$. On each of these odd cycles, we choose $\tilde{\gamma}$ to be the (incidence vector of the) minimal edge cover that has the smallest $\rho$-length. By construction, $\tilde{\gamma}$ is an edge cover for $G$.
By admissibility, then, it follows that $$\label{eq:gamma-tilde-adm} 1 \le \ell_\rho(\tilde{\gamma}) = \sum_{C\in\mathscr{C}}\sum_{e\in C}\tilde{\gamma}(e)\rho(e) + \sum_{S\in\mathscr{S}}\sum_{e\in S}\rho(e).$$ Now consider the $\rho$-length of the basic fractional edge cover $\gamma$, $$\label{eq:rho-len-gamma} \ell_\rho(\gamma) = \sum_{C\in\mathscr{C}}\sum_{e\in C}\frac{1}{2}\rho(e) + \sum_{S\in\mathscr{S}}\sum_{e\in S}\rho(e).$$ By the construction of $\tilde{\gamma}$, Lemma [Lemma 12](#lem:rho-len-odd-cycle){reference-type="ref" reference="lem:rho-len-odd-cycle"} implies that for every $C\in\mathscr{C}$, $$\label{eq:cycle-compare} \frac{1}{2}\sum_{e\in C}\rho(e) \ge \frac{|C|}{|C|+1}\sum_{e\in C}\tilde{\gamma}(e)\rho(e) \ge \frac{k}{k+1}\sum_{e\in C}\tilde{\gamma}(e)\rho(e).$$ Combining [\[eq:gamma-tilde-adm\]](#eq:gamma-tilde-adm){reference-type="eqref" reference="eq:gamma-tilde-adm"}--[\[eq:cycle-compare\]](#eq:cycle-compare){reference-type="eqref" reference="eq:cycle-compare"} shows that $$\ell_\rho(\gamma) \ge \frac{k}{k+1}.$$ Since $\gamma\in\Gamma_{\overline{\text{fec}}}$ was arbitrary, it follows that $\frac{k+1}{k}\rho\in\text{Adm}(\Gamma_{\overline{\text{fec}}})$. ◻ *Proof of Theorem [Theorem 11](#thm:bounds){reference-type="ref" reference="thm:bounds"}.* If $G$ is bipartite, then the result follows from Corollary [Corollary 10](#cor:bipartite-equiv){reference-type="ref" reference="cor:bipartite-equiv"}. Otherwise, as stated earlier, the second inequality follows from the inclusion $\Gamma_{\text{ec}}\subseteq\Gamma_{\text{fec}}$. The first inequality is a consequence of Lemma [Lemma 13](#lem:almost-admissible){reference-type="ref" reference="lem:almost-admissible"}. To see this, let $\rho^*$ be an extremal density for $\text{Mod}_{p,\sigma}(\Gamma_{\text{ec}})$. By the lemma, $\rho = \frac{k+1}{k}\rho^*\in\text{Adm}(\Gamma_{\overline{\text{fec}}})=\text{Adm}(\Gamma_{\text{fec}})$. Thus, we have $$\text{Mod}_{p,\sigma}(\Gamma_{\text{fec}}) \le \mathcal{E}_{p,\sigma}(\rho) = \left(\frac{k+1}{k}\right)^p\mathcal{E}_{p,\sigma}(\rho^*) = \left(\frac{k+1}{k}\right)^p\text{Mod}_{p,\sigma}(\Gamma_{\text{ec}}).$$ ◻ *Remark 1*. A similar theorem is true for the case $p=\infty$. Using the same proof technique, one finds that $$\frac{k}{k+1} \text{Mod}_{\infty, \sigma}(\Gamma_{\text{fec}}) \le \text{Mod}_{\infty, \sigma}(\Gamma_{\text{ec}}) \le \text{Mod}_{\infty, \sigma}(\Gamma_{\text{fec}}).$$ # Fulkerson Duality {#sec:fulkerson_duality} The theory of Fulkerson Duality applied to modulus was developed in [@albin2019blocking]. If $\Gamma$ is a finite family, then the admissible set, $\text{Adm}(\Gamma)$, has finitely many faces and finitely many extreme points. Since $\text{Adm}(\Gamma)$ is a recessive closed convex set, it can be written as the dominant of its extreme points $$\text{Adm}(\Gamma) = \text{Dom}(\text{ext}(\text{Adm}(\Gamma))).$$ We define $$\hat{\Gamma} := \text{ext}( \text{Adm}(\Gamma)) = \{ \hat{\gamma}_1, \ldots, \hat{\gamma}_r\} \subseteq \mathbb{R}^E_{\ge 0}$$ to be the *Fulkerson blocker of $\Gamma$*. These extreme points can be thought of as another family of graph objects with usage matrix $\hat{\mathcal{N}} \in \mathbb{R}^{\hat{\Gamma} \times E}_{\ge 0}$. This construction provides a duality among families of objects due to the fact that $$\hat{\hat{\Gamma}} = \text{ext}(\text{Dom}(\Gamma)) \simeq\Gamma.$$ The relationship between $\Gamma$ and $\hat{\Gamma}$ in terms of modulus is given by the following theorem.
**Theorem 14** (Theorem 4 in [@albin2019blocking]). *Let $G = (V,E)$ be a graph and let $\Gamma$ be a non-trivial finite family of objects on $G$ with Fulkerson blocker $\hat{\Gamma}$. Let the exponent $1 < p < \infty$ be given, with $q := p/(p-1)$ its Hölder conjugate. For any set of weights $\sigma \in \mathbb{R}_{>0}^E$, define the dual set of weights $\hat{\sigma}$ as $\hat{\sigma}(e) := \sigma(e)^{-\frac{q}{p}}$, for all $e \in E$. Then, $$\label{eq:mod_reciprocal} \text{Mod}_{p,\sigma}(\Gamma)^{1/p}\text{Mod}_{q,\hat{\sigma}}(\hat{\Gamma})^{1/q} = 1.$$* *Moreover, the optimal $\rho^* \in \text{Adm}(\Gamma)$ and $\eta^* \in \text{Adm}(\hat{\Gamma})$ are unique and are related as follows: $$\eta^*(e) = \frac{\sigma(e) \rho^*(e)^{p-1}}{\text{Mod}_{p,\sigma}(\Gamma)} \qquad \forall e \in E.$$* The relationship between the moduli of the families when $p=1$ and $p=\infty$ is described by the following theorem. **Theorem 15** (Theorem 5 in [@albin2019blocking]). *Under the assumptions of Theorem [Theorem 14](#thm:mod_reciprocals){reference-type="ref" reference="thm:mod_reciprocals"}, $$\text{Mod}_{1,\sigma}(\Gamma)\text{Mod}_{\infty,\sigma^{-1}} (\hat{\Gamma}) = 1,$$ where $\sigma^{-1}(e) = \sigma(e)^{-1}$.* The relationship between dual families of objects has several useful implications. Most important in the present setting is the fact that upper bounds on the modulus of the dual family provide lower bounds for the original (primal) family. In particular, the following lemma shows that the family of stars can be used to provide lower bounds for the modulus of fractional edge covers. **Lemma 16**. *The families of stars and fractional edge covers on $G$ are dual in the sense that $$\hat{\Gamma}_{\text{star}}\simeq\Gamma_{\text{fec}}.$$* *Proof.* Consider a vector $\gamma\in\mathbb{R}^E_{\ge 0}$. Then, $\gamma\in\Gamma_{\text{fec}}$ if and only if $$\sum_{e\in\delta(v)}\gamma(e) \ge 1\quad \text{ for every $v$ in $V$},$$ which is equivalent to saying that $\gamma\in\text{Adm}(\Gamma_{\text{star}})$. By Lemma [Lemma 3](#lem:gamma-equiv-dom){reference-type="ref" reference="lem:gamma-equiv-dom"}, then, $$\hat{\Gamma}_\text{star} = \text{ext}(\text{Adm}(\Gamma_{\text{star}})) \simeq\text{Adm}(\Gamma_{\text{star}}) = \Gamma_{\text{fec}}.$$ ◻ # Star modulus {#sec:star_mod} Lemma [Lemma 16](#lem:Bhat_fec){reference-type="ref" reference="lem:Bhat_fec"} implies that Theorems [Theorem 14](#thm:mod_reciprocals){reference-type="ref" reference="thm:mod_reciprocals"} and [Theorem 15](#thm:mod_reciprocals-1-inf){reference-type="ref" reference="thm:mod_reciprocals-1-inf"} hold with $\Gamma=\Gamma_{\text{star}}$ and $\hat{\Gamma}=\Gamma_{\text{fec}}$. This means that the modulus and extremal densities for fractional edge covers can be understood through the modulus of stars. Calculating the star modulus turns out to be computationally simpler than calculating the modulus of fractional edge covers, owing to the smaller number of constraints in the minimization problem. Specifically, the number of stars in a graph is equal to the number of vertices $|V|$, whereas the family of basic fractional edge covers in a graph is at least as big as the family of minimal edge covers. In this section, we prove simple results for the modulus of stars and study examples of well-known graphs. The following lemma states a basic estimate for the star modulus by restating a result from [@albin2017modulus], along with a lower bound. **Lemma 17**.
*Let $G = (V,E,\sigma)$ be a finite graph and let $\Gamma$ be the family of all stars in $G$. Let $\delta(G) := \min_{v \in V} \deg(v)$. Then $$\frac{\sigma_{\text{min}}}{\delta(G)^p} \le \text{Mod}_{p,\sigma}(\Gamma) \le \frac{\sigma(E)}{\delta(G)^p}$$ where $\sigma_{\text{min}} = \min\limits_{e \in E} \sigma(e)$ and $\sigma(E) = \sum\limits_{e \in E} \sigma(e)$.* *Proof.* Define $\rho_0 \equiv \frac{1}{\delta(G)}$. Then $\rho_0$ is admissible, since for every $v \in V$ $$\ell_{\rho_0}(\delta(v)) = \sum_{e \in \delta(v)} \rho_0(e) = \frac{\deg(v)}{\delta(G)} \ge 1,$$ where the last inequality is true since every star has at least $\delta(G)$ edges in it. So, $$\text{Mod}_{p,\sigma}(\Gamma) \le \mathcal{E}_{p,\sigma}(\rho_0) = \sum_{e \in E} \sigma(e) \rho_0(e)^p = \frac{\sigma(E)}{\delta(G)^p}.$$ Let $v_0 \in V$ be a node of smallest degree ($\deg(v_0) = \delta(G)$) and let $\rho \in \text{Adm}(\Gamma)$. Then, $$1 \le \ell_{\rho}(\delta(v_0)) = \sum_{e \in \delta(v_0)} \rho(e) \le \max_{e\in E} \rho(e) \sum_{e \in \delta(v_0)} 1 = \max_{e \in E} \rho(e) \delta(G).$$ This implies that there exists an edge $e_0$ such that $\rho(e_0) \ge \frac{1}{\delta(G)}$ and $$\mathcal{E}_{p,\sigma} (\rho) = \sum_{e \in E} \sigma(e) \rho(e)^p \ge \sigma(e_0)\rho(e_0)^p = \frac{\sigma(e_0)}{\delta(G)^p} \ge \frac{\sigma_{\text{min}}}{\delta(G)^p}.$$ Taking the infimum over all $\rho \in \text{Adm}(\Gamma)$, we get the result $$\text{Mod}_{p,\sigma} (\Gamma) \ge \frac{\sigma_{\text{min}}}{\delta(G)^p}.$$ ◻ *Example 7* (Star Graph). Let $G = S_n$ be the unweighted star graph. The density $\rho_0 \equiv 1$ is admissible for $\Gamma_{\text{star}}$. So, $$\text{Mod}_2(\Gamma_{\text{star}}) \le \mathcal{E}_2(\rho_0) = \sum_{e \in E} \rho_0(e)^2 = (n-1) \cdot 1^2 = n-1.$$ Duality and Example [Example 4](#ex:star-fec){reference-type="ref" reference="ex:star-fec"} imply that $\text{Mod}_2(\Gamma_{\text{star}})=\text{Mod}_2(\Gamma_{\text{fec}})^{-1}=n-1$, showing that this choice of $\rho_0$ is extremal. **Lemma 18**. *Let $G$ be an unweighted $d$-regular graph and let $\Gamma$ be the family of all stars in $G$. Then, for $1<p<\infty$, the density $\rho \equiv \frac{1}{d}$ is extremal and $\text{Mod}_p(\Gamma) = \frac{|E|}{d^p}$.* The proof of Lemma [Lemma 18](#lem:d-reg-mod){reference-type="ref" reference="lem:d-reg-mod"} is given in Section [5.1](#sec:prob-interp){reference-type="ref" reference="sec:prob-interp"} once the probabilistic interpretation of modulus has been developed. *Example 8* (Cycle Graphs). Let $G = C_n$ be the unweighted cycle graph. Since $G$ is a $2$-regular graph, by Lemma [Lemma 18](#lem:d-reg-mod){reference-type="ref" reference="lem:d-reg-mod"}, $\rho \equiv \frac{1}{2}$ is extremal and $$\text{Mod}_2(\Gamma_{\text{star}}) = \frac{n}{2^2} = \frac{n}{4}.$$ *Example 9* (Complete Graph). Let $G = K_n$ be the unweighted complete graph. Since $G$ is an $(n-1)$-regular graph, by Lemma [Lemma 18](#lem:d-reg-mod){reference-type="ref" reference="lem:d-reg-mod"}, $\rho \equiv \frac{1}{n-1}$ is extremal and $$\text{Mod}_2(\Gamma_{\text{star}}) = \frac{\frac{n(n-1)}{2}}{(n-1)^2} = \frac{n}{2(n-1)}.$$ By duality, Examples [Example 8](#ex:cycle-star){reference-type="ref" reference="ex:cycle-star"} and [Example 9](#ex:complete-star){reference-type="ref" reference="ex:complete-star"} complete Examples [Example 5](#ex:cycle-fec){reference-type="ref" reference="ex:cycle-fec"} and [Example 6](#ex:complete-fec){reference-type="ref" reference="ex:complete-fec"}.
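The closed-form values above are easy to check numerically. The following Python sketch is our own illustration, not the reference implementation discussed later in Section [6.1](#sec:num-methods){reference-type="ref" reference="sec:num-methods"}; the function name `star_modulus_p2` and the use of CVXPY's default solver are assumptions made only for this example. It forms the star usage matrix, which is simply the unoriented vertex-edge incidence matrix, and solves the resulting quadratic program.

```python
# A minimal sketch, assuming NetworkX and CVXPY are available: compute
# Mod_2(Gamma_star) for an unweighted graph by solving the quadratic program
#   minimize sum_e rho(e)^2  subject to  N rho >= 1, rho >= 0,
# where N is the |V| x |E| star usage matrix (the unoriented incidence matrix).
import cvxpy as cp
import networkx as nx

def star_modulus_p2(G):
    N = nx.incidence_matrix(G).toarray()   # row v is the indicator of delta(v)
    rho = cp.Variable(G.number_of_edges(), nonneg=True)
    problem = cp.Problem(cp.Minimize(cp.sum_squares(rho)), [N @ rho >= 1])
    problem.solve()
    return problem.value

n = 7
print(star_modulus_p2(nx.star_graph(n - 1)), n - 1)              # Example 7
print(star_modulus_p2(nx.cycle_graph(n)), n / 4)                 # Example 8
print(star_modulus_p2(nx.complete_graph(n)), n / (2 * (n - 1)))  # Example 9
```

By Theorem [Theorem 14](#thm:mod_reciprocals){reference-type="ref" reference="thm:mod_reciprocals"} and Lemma [Lemma 16](#lem:Bhat_fec){reference-type="ref" reference="lem:Bhat_fec"}, the reciprocal of each printed value is the corresponding unweighted fractional edge cover modulus.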
## Probabilistic Interpretation {#sec:prob-interp} The optimization problem in [\[eq:optimization_prob\]](#eq:optimization_prob){reference-type="eqref" reference="eq:optimization_prob"} is a convex problem for which, for $1 < p < \infty$, the minimizer is unique. A Lagrangian dual problem was developed in [@albin2015modulus] and endowed with a probabilistic interpretation in [@albin2016prob]. We review some of the concepts of this probabilistic interpretation and apply it to the families of stars and edge covers. Let $\mathcal{P}(\Gamma)\subset\mathbb{R}^{\Gamma}_{\ge 0}$ represent the set of probability mass functions (pmfs) on the set $\Gamma$. That is, $\mu \in \mathcal{P}(\Gamma)$ if and only if $\mu$ is a nonnegative vector with $\mu^T \mathbf{1} = 1$. For a given $\mu$, we can define a random variable $\underline{\gamma}$ with distribution given by $\mu: \mathbb{P}_{\mu} (\underline{\gamma} = \gamma) = \mu(\gamma)$. For an edge $e \in E$, the value $\mathcal{N}(\underline{\gamma}, e)$ is a random variable as well and we denote its expectation as $\mathbb{E}_{\mu}[\mathcal{N}(\underline{\gamma}, e)]$. The probabilistic interpretation is given by the following theorem. **Theorem 19** (Theorem 2 in [@albin2019blocking]). *Let $G=(V,E)$ be a finite graph with edge weights $\sigma$, and let $\Gamma$ be a non-trivial finite family of objects on $G$ with usage matrix $\mathcal{N}$. Then, for any $1<p<\infty$, letting $q := p/(p-1)$ be the conjugate exponent to $p$, we have $$\label{eq:mod_prob} \text{Mod}_{p,\sigma}(\Gamma)^{-\frac{1}{p}} = \left( \min_{\mu \in \mathcal{P}(\Gamma)} \sum_{e\in E} \sigma(e)^{-\frac{q}{p}} \mathbb{E}_{\mu}[\mathcal{N}(\underline{\gamma}, e)]^q \right)^{\frac{1}{q}}.$$* *Moreover, $\mu \in \mathcal{P}(\Gamma)$ is optimal for the right-hand side of [\[eq:mod_prob\]](#eq:mod_prob){reference-type="eqref" reference="eq:mod_prob"} if and only if $$\mathbb{E}_{\mu}[\mathcal{N}(\underline{\gamma}, e)] = \frac{\sigma(e) \rho^*(e)^{\frac{p}{q}}}{\text{Mod}_{p,\sigma}(\Gamma)} \qquad \forall e \in E.$$ where $\rho^*$ is the unique extremal density for $\text{Mod}_{p,\sigma}(\Gamma)$.* As stated in [@albin2019blocking], the probabilistic interpretation is particularly informative when $p=2$, $\sigma \equiv 1$, and $\Gamma$ is a collection of subsets of edges, so that the usage matrix $\mathcal{N}$ is as defined in [\[eq:natural-N\]](#eq:natural-N){reference-type="eqref" reference="eq:natural-N"}. In this case, the duality relation from Theorem [Theorem 19](#thm:prob_int){reference-type="ref" reference="thm:prob_int"} can be written as $$\label{eq:prob-interp-2-norm} \text{Mod}_2(\Gamma)^{-1} = \min_{\mu \in \mathcal{P}(\Gamma)} \mathbb{E}_{\mu} | \underline{\gamma} \cap \underline{\gamma'}|$$ where $\underline{\gamma}$ and $\underline{\gamma}'$ are two independent random variables chosen according to the pmf $\mu$, and $| \underline{\gamma} \cap \underline{\gamma'}|$ is their *overlap*, which is also a random variable. This implies that computing the 2-modulus is equivalent to finding a pmf $\mu$ that minimizes the expected overlap of two independent, identically distributed random objects. The expectations that appear in this section have special forms in the case of $\Gamma_{\text{star}}$. 
In particular, if $e=\{x,y\}\in E$, then $$\label{eq:expect-star} \mathbb{E}_\mu[\mathcal{N}(\underline{\gamma},e)] = \mu(\delta(x)) + \mu(\delta(y)).$$ Moreover, the expected overlap can be written as $$\label{eq:exp-overlap} \mathbb{E}_{\mu} | \underline{\gamma} \cap \underline{\gamma'}| = \sum_{v,v' \in V} | \delta(v) \cap \delta(v')| \mu(\delta(v)) \mu(\delta(v')) = \sum_{v \in V} \deg(v) \mu(\delta(v))^2 + \sum_{v \in V} \sum_{v' \sim v} \mu(\delta(v)) \mu(\delta(v')),$$ which expresses the expected overlap as the sum of two terms. In the second term, the notation $v'\sim v$ in the inner summation indicates that the sum is taken over all neighbors $v'$ of $v$. Since the expected overlap is being minimized, the first term suggests that stars with higher degrees, i.e., stars with more edges, will be assigned smaller $\mu$ values than stars with a smaller number of edges. The second term penalizes assigning large probabilities to neighboring vertices simultaneously. In the case of $\Gamma_{\text{star}}$, selecting the uniform pmf yields a simple lower bound on the modulus. **Lemma 20**. *Let $\Gamma=\Gamma_{\text{star}}$ and $1<p<\infty$. Then, $$\text{Mod}_{p,\sigma}(\Gamma) \ge \left(\frac{|V|}{2}\right)^p \left(\sum_{e\in E}\sigma(e)^{-\frac{q}{p}}\right)^{-\frac{p}{q}}.$$ In particular, if $\sigma\equiv 1$, then $$\label{eq:prob-lower-bound-unweighted} \text{Mod}_p(\Gamma) \ge \left(\frac{|V|}{2|E|^{\frac{1}{q}}}\right)^p.$$* *Proof.* Define $\mu$ to be the uniform pmf on $\Gamma$. That is, $\mu(\delta(v)) = \frac{1}{|V|}$ for every $v \in V$. Then, the expected edge usage for every edge $e$ is $$\mathbb{E}_{\mu}[\mathcal{N}(\underline{\gamma}, e)] = \frac{2}{|V|}.$$ Substituting this into equation [\[eq:mod_prob\]](#eq:mod_prob){reference-type="eqref" reference="eq:mod_prob"} we get $$\begin{aligned} \text{Mod}_{p,\sigma}(\Gamma)^{-\frac{1}{p}} & \le \left( \sum_{e\in E} \sigma(e)^{-\frac{q}{p}} \left( \frac{2}{|V|}\right)^q \right)^{\frac{1}{q}} \\ & = \frac{2}{|V|} \left( \sum_{e\in E} \sigma(e)^{-\frac{q}{p}} \right)^{\frac{1}{q}} \\ \text{Mod}_{p,\sigma}(\Gamma) & \ge \left( \frac{2}{|V|} \right)^{-p} \left(\sum_{e\in E}\sigma(e)^{-\frac{q}{p}}\right)^{-\frac{p}{q}}.\end{aligned}$$ which gives the desired lower bound on the modulus. Letting $\sigma \equiv 1$ yields the result from equation [\[eq:prob-lower-bound-unweighted\]](#eq:prob-lower-bound-unweighted){reference-type="eqref" reference="eq:prob-lower-bound-unweighted"}. ◻ This allows us to prove Lemma [Lemma 18](#lem:d-reg-mod){reference-type="ref" reference="lem:d-reg-mod"}. *Proof of Lemma [Lemma 18](#lem:d-reg-mod){reference-type="ref" reference="lem:d-reg-mod"}.* Lemma [Lemma 17](#lem:simple-bound-mod-star){reference-type="ref" reference="lem:simple-bound-mod-star"} provides an upper bound. The lower bound follows from Lemma [Lemma 20](#lem:simple-bound-prob){reference-type="ref" reference="lem:simple-bound-prob"}. Indeed, since $G$ is assumed to be $d$-regular, $|V| = 2|E|/d$. Substituting into [\[eq:prob-lower-bound-unweighted\]](#eq:prob-lower-bound-unweighted){reference-type="eqref" reference="eq:prob-lower-bound-unweighted"} yields the bound $$\text{Mod}_p(\Gamma) \ge \left(\frac{|E|^{1-\frac{1}{q}}}{d}\right)^p = \frac{|E|}{d^p}.$$ ◻ The following examples show optimal pmf's for several types of graphs. These examples highlight the balance of terms in [\[eq:exp-overlap\]](#eq:exp-overlap){reference-type="eqref" reference="eq:exp-overlap"}. *Example 10*.
Let $G = S_n$ be the star graph and define $\mu = 0$ on the center star and $\mu = \frac{1}{n-1}$ on the remaining $n-1$ stars. Then $\mu \in \mathcal{P}(\Gamma_{\text{star}})$. Moreover, for any edge $e \in E$, the expected edge usage is $$\mathbb{E}_{\mu}[\mathcal{N}(\underline{\gamma}, e)] = \frac{1}{n-1},$$ since each edge sees exactly one of the stars of degree one. Using [\[eq:mod_prob\]](#eq:mod_prob){reference-type="eqref" reference="eq:mod_prob"}, we obtain $$\text{Mod}_2(\Gamma_{\text{star}})^{-1} \le \sum_{e\in E}\left(\frac{1}{n-1}\right)^2 = (n-1)\left( \frac{1}{n-1}\right)^2 = \frac{1}{n-1}.$$ From Example [Example 7](#ex:star-star){reference-type="ref" reference="ex:star-star"} we know that $\text{Mod}_2(\Gamma_{\text{star}}) = n-1$, so this pmf is optimal. *Example 11*. Let $G = C_n$ be the cycle graph. By choosing the uniform pmf as in the proof of Lemma [Lemma 20](#lem:simple-bound-prob){reference-type="ref" reference="lem:simple-bound-prob"}, we obtain the lower bound $$\text{Mod}_2(\Gamma_{\text{star}}) \ge \frac{n}{4}.$$ From Example [Example 8](#ex:cycle-star){reference-type="ref" reference="ex:cycle-star"} we know that $\text{Mod}_2(\Gamma_{\text{star}}) = \frac{n}{4}$, so this pmf is optimal. *Example 12*. Let $G = K_n$ be the complete graph. As in the previous example, the uniform pmf yields a lower bound, $$\text{Mod}_2(\Gamma_{\text{star}}) \ge \frac{n}{2(n-1)},$$ which coincides with the modulus as computed in Example [Example 9](#ex:complete-star){reference-type="ref" reference="ex:complete-star"}, showing that the uniform pmf is optimal. ## More Examples In the following examples we compute $\text{Mod}_2(\Gamma_{\text{star}})$ on several unweighted, undirected graphs. *Example 13* (Path Graphs). Let $G = P_n$ be the unweighted path graph with $|V| = n\ge 3$ and $|E| = n-1$. - For $n=3$, the density $\rho_0 \equiv 1$ is admissible for $\Gamma_{\text{star}}$. So, $$\text{Mod}_2(\Gamma_{\text{star}}) \le 2.$$ For the lower bound, define $\mu(\delta(v)) = \frac{1}{2}$ on the two nodes with degree 1 and $\mu(\delta(u)) = 0$ on the node with degree 2. Using [\[eq:mod_prob\]](#eq:mod_prob){reference-type="eqref" reference="eq:mod_prob"}, $$\begin{aligned} \text{Mod}_2(\Gamma_{\text{star}})^{-1} & \le 2 \left( \frac{1}{2} \right)^2 = \frac{1}{2}. \end{aligned}$$ Thus we have that $\text{Mod}_2(\Gamma_{\text{star}}) = 2$, $\rho_0$ is the extremal density, and $\mu$ is an optimal pmf. - For $n=4$, let $\rho_0$ be defined as in Figure [\[fig:P4_rho0\]](#fig:P4_rho0){reference-type="ref" reference="fig:P4_rho0"}. One can check that this density is admissible for $\Gamma_{\text{star}}$. So, $$\text{Mod}_2(\Gamma_{\text{star}}) \le 2.$$ For the lower bound, similar to the $n=3$ case, define $\mu(\delta(v)) = \frac{1}{2}$ on the nodes with degree 1 and $\mu(\delta(u)) = 0$ for the nodes with degree 2. The expected edge usage is $\frac{1}{2}$ on the two outer edges and $0$ on the inner edge. By [\[eq:mod_prob\]](#eq:mod_prob){reference-type="eqref" reference="eq:mod_prob"}, $$\begin{aligned} \text{Mod}_2(\Gamma_{\text{star}})^{-1} & \le 2 \left( \frac{1}{2} \right)^2 = \frac{1}{2}. \end{aligned}$$ Thus we have that $\text{Mod}_2(\Gamma_{\text{star}}) = 2$, $\rho_0$ is the extremal density, and $\mu$ is an optimal pmf. - For $n \ge 5$ and $n$ odd, define $\rho_0$ as $$\rho_0(e) = \begin{cases} 1 & \text{if $e$ is a pendant edge}, \\ \frac{1}{2} & \text{ otherwise}.
\end{cases}$$ This density is admissible for $\Gamma_{\text{star}}$, so $$\text{Mod}_2(\Gamma_{\text{star}}) \le \mathcal{E}_2(\rho_0)= 2\cdot 1^2 + (n-3)\cdot \left(\frac{1}{2}\right)^2 = \frac{n+5}{4}.$$ For the lower bound, define $\mu = \frac{4}{n+5}$ on the stars with degree 1. On the stars with degree 2, assign $\mu$ to be 0 and $\frac{2}{n+5}$ alternately. (See Figure [\[fig:mu-path\]](#fig:mu-path){reference-type="ref" reference="fig:mu-path"}.) One can verify that this is a pmf on the stars. Since no two stars with positive probability share an edge, [\[eq:prob-interp-2-norm\]](#eq:prob-interp-2-norm){reference-type="eqref" reference="eq:prob-interp-2-norm"} and [\[eq:exp-overlap\]](#eq:exp-overlap){reference-type="eqref" reference="eq:exp-overlap"} show that $$\begin{aligned} \text{Mod}_2(\Gamma_{\text{star}})^{-1} & \le 2 \left( \frac{4}{n+5} \right)^2 + \frac{n-3}{2} \cdot 2 \left( \frac{2}{n+5} \right)^2 = \frac{4n+20}{(n+5)^2} = \frac{4}{n+5}. \end{aligned}$$ Thus we have that $\text{Mod}_2(\Gamma_{\text{star}}) = \frac{n+5}{4}$, $\rho_0$ is the extremal density, and $\mu$ is an optimal pmf. - For $n \ge 6$ and even, define $\rho_0$ to be 1 on the pendant edges, and alternating between the values $\rho_1$ and $\rho_2$ on the remaining edges, where $$\rho_1 = \frac{n-4}{2n-6} \qquad \rho_2 = \frac{n-2}{2n-6}.$$ (See Figures [\[fig:P6_rho0\]](#fig:P6_rho0){reference-type="ref" reference="fig:P6_rho0"} and [\[fig:P8_rho0\]](#fig:P8_rho0){reference-type="ref" reference="fig:P8_rho0"}.) Note this density is admissible for $\Gamma_{\text{star}}$. Thus, $$\text{Mod}_2(\Gamma_{\text{star}}) \le 2 \cdot 1^2 + \left(\frac{n}{2}-1\right) \left( \frac{n-4}{2n-6}\right)^2 + \left( \frac{n}{2}-2\right) \left( \frac{n-2}{2n-6} \right)^2 = \frac{n^2+2n-16}{2\left(2n-6\right)}.$$ To obtain a lower bound, first enumerate the first half of the vertices $v_1, v_2, \ldots, v_k$, where $k = \frac{n}{2}$. For the stars centered at $v_i$, $1 \le i \le k$, define $\mu$ as follows: $$\mu(\delta(v_i)) = \begin{cases} \frac{2(2n-6)}{n^2+2n-16}, & i = 1 \\ 0, & i = 2 \\ \frac{2n-2(i+1)}{n^2+2n-16}, & 2 < i \le k, i \text{ odd} \\ \frac{2(i-2)}{n^2+2n-16}, & 2< i \le k, i \text{ even} \\ \end{cases}$$ Once the first $k$ stars have been assigned a $\mu$ value, assign the values of $\mu$ to the following $k$ vertices by mirroring the values: $\mu(\delta(v_{n+1-i}))=\mu(\delta(v_i))$ for $i=1,2,\ldots,k$. A straightforward (but long) calculation shows that the corresponding lower bound on modulus agrees with the upper bound above. *Example 14* (Wheel Graphs). Let $G = W_n$ be the wheel graph with $|V| = n\ge 4$ and $|E| = 2(n-1)$. Specifically, $G$ has a special center node $v \in V$ with $\deg(v) = n-1$, and every other node $u \neq v$ has $\deg(u) = 3$. - For $4 \le n \le 5$, define $\rho_0$ as $$\rho_0(e) = \begin{cases} \frac{1}{n-1}, & \text{if $e$ is incident to $v$} \\ \frac{n-2}{2(n-1)}, & \text{otherwise}. \end{cases}$$ See Figure [\[fig:wheel-extremal-density\]](#fig:wheel-extremal-density){reference-type="ref" reference="fig:wheel-extremal-density"}. Note that $\rho_0$ is admissible for $\Gamma_{\text{star}}$ and $$\text{Mod}_2(\Gamma_{\text{star}}) \le (n-1) \left(\frac{1}{n-1}\right)^2 + (n-1) \left( \frac{n-2}{2(n-1)}\right)^2 = \frac{4+(n-2)^2}{4(n-1)}.$$ To obtain a lower bound, define $\mu \in \mathcal{P}(\Gamma_{\text{star}})$ as $\mu(\delta(v)) = \frac{6-n}{4+(n-2)^2}$ and define $\mu(\delta(u)) = \frac{n-2}{4+(n-2)^2}$ for $u \neq v$.
Using [\[eq:expect-star\]](#eq:expect-star){reference-type="eqref" reference="eq:expect-star"} and Theorem [Theorem 19](#thm:prob_int){reference-type="ref" reference="thm:prob_int"}, $$\begin{aligned} \text{Mod}_2(\Gamma_{\text{star}})^{-1} & \le (n-1)\left(\frac{6-n}{4+(n-2)^2}\right)^2 + 5(n-1) \left( \frac{n-2}{4+(n-2)^2} \right)^2 \\ &+ 2 (n-1) \left(\frac{6-n}{4+(n-2)^2}\right) \left( \frac{n-2}{4+(n-2)^2} \right) \\ & = \frac{4(n-1)(n^2-4n+8)}{(4+(n-2)^2)^2} = \frac{4(n-1)}{4+(n-2)^2}. \\ \end{aligned}$$ Thus we have that $\text{Mod}_2(\Gamma_{\text{star}}) = \frac{4+(n-2)^2}{4(n-1)}$, $\rho_0$ is the extremal density, and $\mu$ is an optimal pmf. - For $n \ge 6$, define $\rho_0$ as $$\rho_0(e) = \begin{cases} \frac{1}{5}, & \text{if $e$ is incident to $v$} \\ \frac{2}{5}, & \text{otherwise}. \end{cases}$$ Again, note that $\rho_0$ is admissible for $\Gamma_{\text{star}}$ and $$\text{Mod}_2(\Gamma_{\text{star}}) \le (n-1) \left(\frac{1}{5}\right)^2 + (n-1) \left( \frac{2}{5}\right)^2 = \frac{n-1}{5}.$$ To obtain a lower bound, define $\mu(\delta(v)) = 0$ and $\mu(\delta(u)) = \frac{1}{n-1}$ for $u \neq v$. Using [\[eq:expect-star\]](#eq:expect-star){reference-type="eqref" reference="eq:expect-star"} and Theorem [Theorem 19](#thm:prob_int){reference-type="ref" reference="thm:prob_int"}, $$\begin{aligned} \text{Mod}_2(\Gamma_{\text{star}})^{-1} & \le 3(n-1)\left(\frac{1}{n-1}\right)^2 + 2(n-1)\left(\frac{1}{n-1}\right)^2 = \frac{5}{n-1}. \end{aligned}$$ Thus we have that $\text{Mod}_2(\Gamma_{\text{star}}) = \frac{n-1}{5}$, $\rho_0$ is the extremal density, and $\mu$ is an optimal pmf. # Comparing $\Gamma_{\text{ec}}$ and $\Gamma_{\text{fec}}$ {#sec:num-examples} For an unweighted graph, $G$, we can use the results from Lemma [Lemma 16](#lem:Bhat_fec){reference-type="ref" reference="lem:Bhat_fec"} and Theorem [Theorem 14](#thm:mod_reciprocals){reference-type="ref" reference="thm:mod_reciprocals"} to restate the upper and lower bounds relating $\text{Mod}_{2}(\Gamma_{\text{ec}})$, $\text{Mod}_2(\Gamma_{\text{fec}})$ and $\text{Mod}_{2}(\Gamma_{\text{star}})$ as $$\label{eq:bounds_ec_star} \left( \frac{k}{k+1} \right)^2 \leq \text{Mod}_{2}(\Gamma_{\text{ec}})\text{Mod}_{2}(\Gamma_{\text{fec}})^{-1} = \text{Mod}_{2}(\Gamma_{\text{ec}})\text{Mod}_{2}(\Gamma_{\text{star}}) \leq 1,$$ where $k$ is the length of the shortest odd cycle in $G$. (If $G$ is bipartite, then $\Gamma_{\text{fec}}\simeq\Gamma_{\text{ec}}$, so the moduli of the two families are equal.) In this section, we present two examples that explore the tightness of the bounds in [\[eq:bounds_ec_star\]](#eq:bounds_ec_star){reference-type="eqref" reference="eq:bounds_ec_star"} on nonbipartite graphs. We consider the complete graphs, $K_n$, for which we have explicit formulas for the moduli, and the $n$-barbell graphs, for which we use numerical approximation to compute the moduli. ## Numerical methods {#sec:num-methods} One natural way to solve the modulus problem numerically is to apply a convex optimization solver to [\[eq:optimization_prob\]](#eq:optimization_prob){reference-type="eqref" reference="eq:optimization_prob"}. For sufficiently small families, $\Gamma$, it is not difficult to form the full usage matrix $\mathcal{N}\in\mathbb{R}^{\Gamma\times E}$. For example, the usage matrix for $\Gamma_{\text{star}}$ is a $V\times E$ matrix. For larger families (e.g., $\Gamma_{\text{ec}}$ and $\Gamma_{\overline{\text{fec}}}$), it is often not feasible even to enumerate all constraints.
A possible option for such cases is the *basic algorithm* described in [@albin2017modulus]. This algorithm proceeds by maintaining a relatively small set of active constraints, thus eliminating the need to fully construct $\mathcal{N}$. Instead, the algorithm iteratively improves an estimate of the optimal density. In each round of the iteration, the algorithm requires a subroutine `shortest(\rho)` that produces a $\rho$-shortest object in $\Gamma$, that is $$\gamma = \texttt{shortest}(\rho) \Longrightarrow \forall \gamma': \ell_{\rho}(\gamma) \le \ell_{\rho}(\gamma').$$ As long as an efficient implementation of `shortest` exists for $\Gamma$, the basic algorithm can be used to numerically approximate modulus. For example, Dijkstra's algorithm can be used to compute the modulus of $st$-paths, and Kruskal's algorithm can be used to compute the modulus of spanning trees. This method can also be applied to the families in the present paper. #### Stars. For $\Gamma_{\text{star}}$, it is possible to fully compute $\mathcal{N}$ (there is one row per vertex). There is also an efficient `shortest` method; one simply loops over all vertices to find the one with minimum $\rho$-degree. #### Edge covers. For the family of edge covers, there is a minimum weight edge cover (MWEC) algorithm given by Schrijver in [@schrijver2003combinatorial]. This algorithm reduces the minimum weight edge cover problem to the minimum weight perfect matching (MWPM) problem, for which there are several polynomial-time algorithms [@kuhn1955hungarian; @edmonds1965paths; @micali1980v]. #### Fractional edge covers. The authors are not aware of an efficient method for computing the minimum weight fractional edge cover. (Considering the method described above for the MWEC problem, it is possible that an analogous reduction to the minimum weight fractional perfect matching problem exists.) Fortunately, it is possible to compute the modulus of this family anyway, using Theorem [Theorem 14](#thm:mod_reciprocals){reference-type="ref" reference="thm:mod_reciprocals"} and Lemma [Lemma 16](#lem:Bhat_fec){reference-type="ref" reference="lem:Bhat_fec"}. This allows us to repurpose the star modulus code to compute the modulus of fractional edge covers. #### Implementation details. For the computations presented in this paper, we used the Python implementation of the basic algorithm found in [@modbook]. The graphs are represented using NetworkX [@SciPyProceedings_11], NumPy [@harris2020array], and SciPy [@2020SciPy-NMeth], and the convex optimization problem is solved numerically using CVXPY [@diamond2016cvxpy; @agrawal2018rewriting]. When calculating the modulus of stars, it is efficient to compute the full usage matrix $\mathcal{N}$ using the NetworkX built-in function `incidence_matrix`. Since the family of edge covers tends to be large, it is not feasible to generate $\mathcal{N}$. Instead, we use the basic algorithm and implement the `shortest` subroutine with Schrijver's MWEC algorithm, using the NetworkX function `min_weight_matching` to solve the MWPM problem. It is also generally infeasible to generate the full usage matrix for fractional edge covers. For fractional edge covers, we calculate the modulus by using the results from Section [4](#sec:fulkerson_duality){reference-type="ref" reference="sec:fulkerson_duality"} and the code that computes the modulus of stars.
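For concreteness, the following sketch shows the shape of such a computation. It is not the code of [@modbook]: the loop below is a simplified, unweighted, $p=2$ version of the basic algorithm with a pluggable `shortest` oracle, and the star oracle, the tolerance `tol`, and the function names are our own choices made only for illustration.

```python
# A hedged sketch of the basic algorithm (unweighted, p = 2): keep a small set of
# active constraints, solve the restricted problem, and ask a `shortest` oracle
# for a violated constraint until the current density is admissible.
import cvxpy as cp
import networkx as nx
import numpy as np

def basic_algorithm(G, shortest, tol=1e-6, max_iter=10_000):
    m = G.number_of_edges()
    rho = np.zeros(m)
    active = []                                  # usage rows of constraints found so far
    for _ in range(max_iter):
        usage, length = shortest(rho)
        if length >= 1 - tol:                    # every object has rho-length >= 1
            return float(np.sum(rho ** 2))       # energy of the admissible density
        active.append(usage)
        x = cp.Variable(m, nonneg=True)
        cp.Problem(cp.Minimize(cp.sum_squares(x)),
                   [np.vstack(active) @ x >= 1]).solve()
        rho = x.value
    raise RuntimeError("basic algorithm did not converge")

def star_oracle(G):
    """Oracle for Gamma_star: return the star of a vertex with minimum rho-degree."""
    idx = {frozenset(e): i for i, e in enumerate(G.edges)}
    def shortest(rho):
        v = min(G.nodes, key=lambda u: sum(rho[idx[frozenset(e)]] for e in G.edges(u)))
        usage = np.zeros(G.number_of_edges())
        for e in G.edges(v):
            usage[idx[frozenset(e)]] = 1.0
        return usage, float(usage @ rho)
    return shortest

G = nx.complete_graph(6)
mod_star = basic_algorithm(G, star_oracle(G))
print(mod_star, 1.0 / mod_star)   # approx. Mod_2(stars) = 0.6 and Mod_2(fec) = 1/0.6
```

Swapping in an oracle built on Schrijver's MWEC reduction (via `min_weight_matching`) would give the edge cover modulus in the same loop, while the fractional edge cover modulus comes for free from the reciprocal, as described above.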
## Complete graphs The modulus values for $K_n$ were found in Examples [Example 3](#ex:complete-ec){reference-type="ref" reference="ex:complete-ec"}, [Example 6](#ex:complete-fec){reference-type="ref" reference="ex:complete-fec"}, and [Example 9](#ex:complete-star){reference-type="ref" reference="ex:complete-star"}. Summarizing these examples, we found that $$\text{Mod}_2(\Gamma_{\text{ec}}) = \begin{cases} \frac{2(n-1)}{n}\quad\text{ if $n$ is even,}\\ \frac{2n(n-1)}{(n+1)^2}\quad\text{ if $n$ is odd}, \end{cases} \quad\text{and}\quad \text{Mod}_2(\Gamma_{\text{fec}}) = \frac{2(n-1)}{n} = \text{Mod}_2(\Gamma_{\text{star}})^{-1}.$$ In the case of edge covers, the value of the modulus splits into two cases depending on whether $n$ is even or odd. When $n$ is even, the minimal edge covers of the graph are also perfect matchings; therefore, every node is covered by exactly one edge. When $n$ is odd, a minimal edge cover will have one node covered by 2 edges, and the rest covered by exactly one edge. Therefore, there will always be a "heavier" node in the edge cover. The modulus of fractional edge covers does not appear to have this same dependence on the evenness or oddness of $n$; there is a single formula for all cases. This suggests that, when $n$ is odd, the fractional edge covers using odd cycles become important. For this class of graphs, we have $$\text{Mod}_2(\Gamma_{\text{ec}})\text{Mod}_2(\Gamma_{\text{fec}})^{-1} = \begin{cases} 1 & \text{if $n$ is even},\\ \frac{n^2}{(n+1)^2}&\text{if $n$ is odd}. \end{cases}$$ Since these graphs contain triangles, [\[eq:bounds_ec_star\]](#eq:bounds_ec_star){reference-type="eqref" reference="eq:bounds_ec_star"} shows that the ratio of edge cover modulus to fractional edge cover modulus is bounded from below by $\frac{9}{16}$. The actual ratio approaches $1$ in the limit as $n\to\infty$, showing that the lower bound is not tight. ![The value of $\text{Mod}_2(\Gamma_{\text{ec}})/\text{Mod}_2(\Gamma_{\text{fec}})$ as a function of $n$ for the $n$-barbell graph. The lines represent the upper and lower bounds in equation [\[eq:bounds_ec_star\]](#eq:bounds_ec_star){reference-type="eqref" reference="eq:bounds_ec_star"}.](Images/modulus-ratio-barbell.pdf){#fig:modulus-ratio-barbell} ## Barbell graphs In the case of the $n$-barbell graphs (two disjoint copies of $K_n$ connected by a single edge), we see a similar behavior. Using the modulus code described in Section [6.1](#sec:num-methods){reference-type="ref" reference="sec:num-methods"}, we computed $\text{Mod}_2(\Gamma_{\text{ec}})$ and $\text{Mod}_2(\Gamma_{\text{fec}})$ on the $n$-barbell graphs for $n$ ranging from 3 to 30. The ratio of these two moduli is plotted in Figure [1](#fig:modulus-ratio-barbell){reference-type="ref" reference="fig:modulus-ratio-barbell"}. As in the complete graph example, we observe two types of behavior depending on the parity of $n$. In both cases, it appears that the ratio of moduli approaches $1$ as $n\to\infty$. However, the convergence is much faster for even $n$ than for odd. Again, this suggests that the odd cycles play an important role in $\text{Mod}(\Gamma_{\text{fec}})$ for odd $n$. The difference can be further understood by considering the family of minimal edge covers on the $n$-barbell. When $n$ is even, no minimal edge cover uses the "bridge" of the $n$-barbell; these edge covers are built independently from edge covers of the two copies of $K_n$. When $n$ is odd, however, there are two categories of edge cover. Those that avoid the bridge contain $n+1$ edges.
Those that use the bridge are perfect matchings using $n$ edges. From the point of view of the probabilistic interpretation [\[eq:prob-interp-2-norm\]](#eq:prob-interp-2-norm){reference-type="eqref" reference="eq:prob-interp-2-norm"}, the modulus will balance between these two types of edge cover. Choosing the smaller edge covers is beneficial since it tends to reduce the overlap with other edge covers; however, since these edge covers share a common edge (the bridge), it is also beneficial to choose the larger edge covers at times. An optimal pmf will balance these two competing preferences. This can be seen in Figure [2](#fig:barbells){reference-type="ref" reference="fig:barbells"}. When $n$ is odd, the bridge is more likely to appear than any other edge in a random edge cover chosen by an optimal pmf (Figure [2](#fig:barbells){reference-type="ref" reference="fig:barbells"} top left). On the other hand, when $n$ is even, the optimal edge usage probability of the bridge is zero (Figure [2](#fig:barbells){reference-type="ref" reference="fig:barbells"} top right). For fractional edge covers, the bridge always has the lowest edge usage probability, followed by adjacent edges, and then all other edges. Figure [3](#fig:bridge-exp){reference-type="ref" reference="fig:bridge-exp"} compares the optimal expected edge usage, $\mathbb{E}_{\mu^*}[\mathcal{N}(\underline{\gamma}, b)]$, of the bridge, $b$, for both the edge cover and fractional edge cover modulus. While the expected usage for the fractional edge cover modulus follows a single smooth curve $$\mathbb{E}_{\mu^*}[\mathcal{N}(\underline{\gamma}, b)] = \frac{2n+2}{n^2-n+4},$$ the expected edge usage for edge cover modulus oscillates between two behaviors: $$\mathbb{E}_{\mu^*}[\mathcal{N}(\underline{\gamma}, b)] = \begin{cases} 0 & \text{if $n$ is even},\\ \frac{n-2}{n^2-n-1}&\text{if $n$ is odd}. \end{cases}$$ (There is a special case for $n=3$.) From this example, we see that fractional edge cover modulus approximates edge cover modulus for large $n$. (Again, the lower bound in [\[eq:bounds_ec_star\]](#eq:bounds_ec_star){reference-type="eqref" reference="eq:bounds_ec_star"} is overly pessimistic.) However, for smaller $n$, the additional flexibility in $\Gamma_{\text{fec}}$, arising from the ability to use odd cycles, fails to capture the parity-dependence of the edge cover modulus. ![$n$-Barbell Graphs with $n=7$ and $n=8$. The edges are colored using the expected edge usage, $\mathbb{E}_{\mu^*}[\mathcal{N}(\underline{\gamma}, e)] = \rho^*(e)/\text{Mod}_2(\Gamma)$, for $\Gamma=\Gamma_{\text{ec}}$ (top row) and $\Gamma=\Gamma_{\text{fec}}$ (bottom row). Lighter colors represent smaller values.](Images/n-barbell-graphs.pdf){#fig:barbells width="6in"} ![The optimal expected edge usage $\mathbb{E}_{\mu^*}[\mathcal{N}(\underline{\gamma}, b)]$ on the $y$-axis and the value of $n$ on the $x$-axis, where $b$ is the bridge connecting the two copies of $K_n$ in the $n$-barbell graph.](Images/expected-edge-usage-bridge.pdf){#fig:bridge-exp width="4.2in"} # Discussion {#sec:discussion} As mentioned in the introduction, one of the main motivations for studying the modulus of edge covers is to develop a deeper understanding of what properties of the underlying graph structure this modulus can expose. In this paper, we have laid the theoretical groundwork for the study of edge cover modulus.
Moreover, by connecting it to the modulus of fractional edge covers and, ultimately, to that of stars, we have made it computationally feasible to approximate edge cover modulus on large graphs. In addition to deepening the connection between edge cover modulus and graph structure, there are several other interesting research directions worth pursuing. One path involves further developing the relationship between the moduli of edge covers and fractional edge covers. For bipartite graphs, these two families are equivalent (in the sense of modulus), which gives a starting point. Moreover, if a graph only has very long odd cycles (a sort of "nearly-bipartite" property), then the bounds established in Theorem [Theorem 11](#thm:bounds){reference-type="ref" reference="thm:bounds"} show that fractional edge cover modulus is a good approximation of edge cover modulus. However, the examples in Section [6](#sec:num-examples){reference-type="ref" reference="sec:num-examples"} show that even for graphs containing triangles, the two moduli can be close. In those examples, we saw that the approximation gets asymptotically better for larger graphs; the main barrier for the smaller examples seems to be a switching in the behavior of the edge cover modulus depending on whether or not the graph contains perfect matchings. The ability to use half-weight odd cycles appears to give the fractional edge covers enough added flexibility to avoid this switching behavior. Another question that arises naturally is that of the dual family to $\Gamma_{\text{ec}}$. As shown in this paper, $\hat{\Gamma}_{\text{fec}}\simeq\Gamma_{\text{star}}$, which leads to a computationally efficient method for computing the modulus of $\Gamma_{\text{fec}}$. Computing the modulus of edge covers tends to be slower because of the need to repeatedly construct minimum edge covers as part of the basic algorithm. The family of edge covers may have a simpler dual family that could similarly aid in efficiently computing edge cover modulus. Finally, we hope that a better understanding of the properties of the edge cover modulus will lead to similar insights into the modulus of related (and more complex) families, particularly the families of maximal matchings and perfect matchings. Both of these families have related relaxed (fractional) families that may be useful in their study, just as the fractional edge covers can be used to understand edge covers. [^1]: This material is based upon work supported by the National Science Foundation under Grant No. 2154032.
arxiv_math
{ "id": "2309.13180", "title": "Modulus of edge covers and stars", "authors": "Adriana Ortiz-Aquino and Nathan Albin", "categories": "math.CO", "license": "http://creativecommons.org/licenses/by-nc-nd/4.0/" }
--- abstract: | In this paper, we investigate the minimum number of triangles, denoted by $t(n,k)$, in $n$-vertex $k$-regular graphs, where $n$ is an odd integer and $k$ is an even integer. The well-known Andrásfai-Erdős-Sós Theorem has established that $t(n,k)>0$ if $k>\frac{2n}{5}$. In a striking work, Lo has provided the exact value of $t(n,k)$ for sufficiently large $n$, given that $\frac{2n}{5}+\frac{12\sqrt{n}}{5}<k<\frac{n}{2}$. Here, we bridge the gap between the aforementioned results by determining the precise value of $t(n,k)$ in the entire range $\frac{2n}{5}<k<\frac{n}{2}$. This confirms a conjecture of Cambie, de Verclos, and Kang. author: - Jialin He$^1$ - Xinmin Hou$^{2,4}$ - Jie Ma$^{3,4}$ - Tianying Xie$^{5}$ title: Counting triangles in regular graphs --- # Introduction We investigate the number of triangles guaranteed in regular graphs with edge density below one half. To be precise, for positive integers $n$ and $k$, we define $t(n,k)$ to be the minimum number of triangles over all $k$-regular graphs on $n$ vertices. In the case when $n$ is even, it is easy to see that $t(n,k)=0$ if and only if $k\leq \frac{n}{2}$, which can be derived from Mantel's classic theorem [@M07] and the existence of $n$-vertex $k$-regular bipartite graphs for any $k\leq \frac{n}{2}$. In this paper, we mainly focus on the case when $n$ is odd.[^1] A special case of a celebrated theorem of Andrásfai, Erdős and Sós [@AES] states that every $n$-vertex graph with minimum degree greater than $\frac{2n}{5}$ is either bipartite or contains a triangle. Since there exist no bipartite regular graphs with an odd number of vertices, this result implies that in the case when $n$ is odd, we have $t(n,k)>0$ if $k>\frac{2n}{5}$.[^2] It is natural to ask for the exact value of $t(n,k)$ in the range $\frac{2n}{5}<k\leq \frac{n}{2}$. The following theorem proved by Lo [@Lo09] offers a comprehensive understanding for nearly all integers $k$ within this range when the odd integer $n$ is sufficiently large. **Theorem 1** (Lo, [@Lo09]). *For every odd integer $n\ge 10^7$ and even integer $k$ with $\frac{2n}{5}+\frac{12\sqrt{n}}{5}< k< \frac{n}{2}$, $$t(n,k)=\frac k 4(3k-n-1).$$ Moreover, the extremal graphs for $t(n,k)$ must belong to the family ${\mathcal{G}}(n,k)$ given by Definition [Definition 1](#dfn_fig){reference-type="ref" reference="dfn_fig"}.* Very recently, Cambie, de Verclos, and Kang [@CdVK] have revisited this problem by investigating the concept of regular Turán numbers. One of their main results states that $t(n,k)\geq \frac{n^2}{300}$ for all odd $n$ and even $k$ with $k>\frac {2n}{5}.$ They also highlighted the following conjecture (see Conjecture 20 in [@CdVK]). **Conjecture 2** (Cambie, de Verclos and Kang, [@CdVK]). *For every odd integer $n$ and even integer $k$ with $\frac{2n}{5}< k< \frac{n}{2}$, $t(n,k)=\frac k 4(3k-n-1).$ Moreover, the extremal graphs must belong to ${\mathcal{G}}(n,k)$.* The main contribution of this paper is to establish this conjecture for sufficiently large integers $n$. **Theorem 3**. *For every odd integer $n\ge 10^9$ and even integer $k$ with $\frac{2n}{5}< k< \frac{n}{2}$, $t(n,k)=\frac k 4(3k-n-1).$ Moreover, the extremal graphs must belong to ${\mathcal{G}}(n,k)$.* Our proof of Theorem [Theorem 3](#Thm: Main){reference-type="ref" reference="Thm: Main"} employs the approach utilized in [@Lo09], incorporating several innovative technical ideas. For a brief overview of the proof, we direct readers to the initial portions of both Section 3 and Section 4. 
As a direct corollary of Theorem [Theorem 3](#Thm: Main){reference-type="ref" reference="Thm: Main"}, we are able to improve the aforementioned lower bound of $t(n,k)$ by Cambie et al. as follows. **Corollary 4**. *For every odd integer $n\ge 10^9$ and even integer $k>\frac{2n}{5}$, we have $t(n,k)\ge \frac{(n+1)^2}{50}$, where the equality holds if and only if $k=\frac{2(n+1)}{5}$ and $n+1$ is divisible by $5$.* The rest of the paper is organized as follows. Section 2 provides some preliminaries and introduces the graph family ${\mathcal{G}}(n,k)$. Section 3 focuses on reducing the proof of Theorem [Theorem 3](#Thm: Main){reference-type="ref" reference="Thm: Main"} to the key intermediate result, namely Theorem [Theorem 5](#Thm: Main Reduced){reference-type="ref" reference="Thm: Main Reduced"}. The proof of Corollary [Corollary 4](#Cor: Improvement of t(n,k)){reference-type="ref" reference="Cor: Improvement of t(n,k)"} will also be presented at the end of Section 3. Section 4 presents the complete proof of Theorem [Theorem 5](#Thm: Main Reduced){reference-type="ref" reference="Thm: Main Reduced"}. The concluding section offers some related remarks. # Preliminaries We use standard graph notation throughout the paper. Let $G$ be a graph. We use $\Delta(G)$ to denote the maximum degree of $G$. For a subset $U\subseteq V(G)$, the subgraph of $G$ induced by the vertex set $U$ is denoted by $G[U]$, while the subgraph obtained from $G$ by deleting all vertices in $U$ is expressed by $G\backslash U$. Let $N_{G}(U)=\bigcap_{v\in U} N_{G}(v)$ be the set consisting of common neighbors of all vertices of $U$ in $G$. For a subset of edges $A\subseteq E(G)$, we denote by $G\setminus A$ the subgraph of $G$ obtained by deleting all edges in $A$. Suppose that $U,W\subset V(G)$ are disjoint. We denote by $E_{G}(U,W)$ the set of edges of $G$ between $U$ and $W$ and let $e_{G}(U,W)=|E_{G}(U,W)|$. For a vertex $v\in V(G)$ and a subset $X\subseteq V(G),$ we define $d_X(v)=|N_G(v)\cap X|.$ For an integer $k\geq 1$, a *$k$-factor* of $G$ is a spanning $k$-regular subgraph of $G$. We use *$T(G)$* to denote the number of triangles in $G$. For a vertex $v\in V(G)$ and an edge $e\in E(G)$, we use *$T(v)$* and *$T(e)$* to denote the number of triangles containing the vertex $v$ and the edge $e$ respectively. For a positive integer $m$, we write $[m]$ for the set $\{1,2,...,m\}$. We often drop the above subscripts when they are clear from context, and omit floors and ceilings whenever they are not critical. ![image](1.png){height="8cm" width="18cm"} Let $n$ be an odd integer and $k$ be an even integer satisfying $\frac{2n}{5}< k< \frac{n}{2}$. We now define an important family of $n$-vertex $k$-regular graphs ${\mathcal{G}}(n,k)$ as follows (see Figure 1). **Definition 1**. *Let $G$ be a complete bipartite graph with parts $X, Y$, each of size $\frac{n-1}{2}$. Let $A\subseteq X$, $B\subseteq Y$ be two subsets of size $\frac{k}{2}.$ The family ${\mathcal{G}}(n,k)$ consists of all $n$-vertex graphs obtained by the following procedures: adding a new vertex $v$ to $G$, joining $v$ to all vertices of $A$ and $B$, removing the edges of a $\frac{n-2k+1}{2}$-factor from the induced subgraph $G[A\cup B]$, and removing the edges of a $\frac{n-2k-1}{2}$-factor from the induced subgraph $G[(X\setminus A)\cup (Y\setminus B)]$.* It is easy to see that any graph $G\in {\mathcal{G}}(n,k)$ is $k$-regular. 
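As a quick sanity check of Definition [Definition 1](#dfn_fig){reference-type="ref" reference="dfn_fig"}, the following short Python sketch (our own illustration, not part of the paper; it assumes the `networkx` package, uses the small admissible parameters $n=25$, $k=12$ of our choosing, and takes circulant factors for the two factor deletions) builds one member of ${\mathcal{G}}(n,k)$ and verifies both the $k$-regularity and the triangle count discussed next.

```python
import networkx as nx

# Small admissible instance (our choice): n odd, k even, 2n/5 = 10 < k < n/2 = 12.5.
n, k = 25, 12
half = (n - 1) // 2                                  # |X| = |Y| = (n-1)/2 = 12
X = [f"x{i}" for i in range(half)]
Y = [f"y{i}" for i in range(half)]
A, B = X[:k // 2], Y[:k // 2]                        # |A| = |B| = k/2 = 6

G = nx.Graph()
G.add_edges_from((x, y) for x in X for y in Y)       # complete bipartite graph on X, Y
G.add_edges_from(("v", u) for u in A + B)            # join the new vertex v to A and B

def remove_factor(P, Q, t):
    # Remove a t-factor of the complete bipartite graph between P and Q (circulant choice).
    for j in range(t):
        G.remove_edges_from((P[i], Q[(i + j) % len(P)]) for i in range(len(P)))

remove_factor(A, B, (n - 2 * k + 1) // 2)                    # here a 1-factor
remove_factor(X[k // 2:], Y[k // 2:], (n - 2 * k - 1) // 2)  # here a 0-factor

assert all(deg == k for _, deg in G.degree())                # G is k-regular
assert sum(nx.triangles(G).values()) // 3 == k * (3 * k - n - 1) // 4   # 30 triangles
```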
Moreover, all triangles in $G$ contain the vertex $v$, and every neighbor of $v$ is contained in exactly $\frac k 2-\frac{n-2k+1}{2}=\frac{3k-n-1}{2}$ triangles. Thus, the number of triangles in any graph $G\in {\mathcal{G}}(n,k)$ is equal to $\frac k 4(3k-n-1).$ # Proofs of Theorem [Theorem 3](#Thm: Main){reference-type="ref" reference="Thm: Main"} and Corollary [Corollary 4](#Cor: Improvement of t(n,k)){reference-type="ref" reference="Cor: Improvement of t(n,k)"} {#proofs-of-theorem-thm-main-and-corollary-cor-improvement-of-tnk} In this section, we prove our main result -- Theorem [Theorem 3](#Thm: Main){reference-type="ref" reference="Thm: Main"}. We make use of Lo's approach [@Lo09], where he proved that for any odd integer $n\geq 10^7$ and even $k$ satisfying $\frac{2n}{5}+\frac{12\sqrt{n}}{5}< k< \frac{n}{2}$, let $G$ be an $n$-vertex $k$-regular graph with the minimum number of triangles, then $G\in {\mathcal{G}}(n,k)$. A key intermediate step in Lo's proof is to show, by using Andrásfai-Erdős-Sós Theorem [@AES], that such $G$ can be made bipartite by deleting $O(k^{\frac 3 2})$ edges. In our proof, we present a novel insight wherein, by employing a more thorough structural analysis instead of Andrásfai-Erdős-Sós Theorem, we establish that, when the broader condition $\frac{2n}{5} < k < \frac{n}{2}$ holds, one can also make the graph $G$ bipartite by removing $O(k^{\frac{3}{2}})$ edges. To be formal, we prove the following. **Theorem 5**. *Let $G$ be an $n$-vertex $k$-regular graph, where $n\ge 10^9$ is odd and $\frac{2n}{5}< k< \frac{n}{2}$ is even. If $T(G)\le \frac k 4(3k-n-1)$, then $G$ can be made bipartite by deleting at most $\frac {15}{2} k^{\frac 3 2}$ edges.* In the remaining of this section, we will demonstrate the proof of Theorem [Theorem 3](#Thm: Main){reference-type="ref" reference="Thm: Main"} while assuming the validity of Theorem [Theorem 5](#Thm: Main Reduced){reference-type="ref" reference="Thm: Main Reduced"}. We also need a useful lemma proved by Lo (see Lemma 3.2 in [@Lo09]). For a graph $H$, let $\phi(H)$ be the sum of squares of degrees of all vertices of $H.$ **Lemma 6** (Lo [@Lo09]). *Let $r>4500$ be a positive integer, and let $H$ be an $n$-vertex graph with $\Delta(H)\le r$ and $e(H)=\beta r$ for $1\le \beta < \frac{6}{5}$. Then $\phi(H)\le (\beta^2-2\beta+2)r^2 + (5\beta-4)r$. Furthermore, equality holds if and only if $H$ has a vertex $u$ of degree $r$, a vertex $v\in N(u)$ with degree $(\beta-1)r+1$ and $n-r-1$ isolated vertices.* To ensure completeness, we provide a comprehensive proof, adapted from [@Lo09], illustrating the reduction of Theorem [Theorem 3](#Thm: Main){reference-type="ref" reference="Thm: Main"} to Theorem [Theorem 5](#Thm: Main Reduced){reference-type="ref" reference="Thm: Main Reduced"}, as follows. **Proof of Theorem [Theorem 3](#Thm: Main){reference-type="ref" reference="Thm: Main"} (Assuming Theorem [Theorem 5](#Thm: Main Reduced){reference-type="ref" reference="Thm: Main Reduced"}).** The proof presented here has been adjusted from Lo's work [@Lo09]. Let $G$ be an $n$-vertex $k$-regular graph, where $n\ge 10^9$ is odd and $\frac{2n}{5}< k< \frac{n}{2}$ is even, such that $T(G)$ is minimum. By the construction of ${\mathcal{G}}(n,k),$ we see that $T(G)\le \frac k 4(3k-n-1)$. First partition $V(G)=X\cup Y$ with $e(X,Y)$ maximal. 
Without loss of generality, suppose that $|X|>|Y|.$ Define $e(X)=\frac 1 2\beta k.$ By Theorem [Theorem 5](#Thm: Main Reduced){reference-type="ref" reference="Thm: Main Reduced"}, $G$ can be made bipartite by deleting at most $\frac{15}{2} k^{\frac 3 2}$ edges. Since $e(X)+e(Y)$ is minimum, we have that $e(X)\le \frac{15}{2} k^{\frac 3 2}$, which implies that $\beta\le 15\sqrt k.$ Since $2e(X)=k|X|-e(X,Y)\geq k(|X|-|Y|)$, we get that $\beta\ge |X|-|Y|\ge 1.$ For any edge $uv\in E(G[X]),$ $T(uv)\ge d_Y(u)+d_Y(v)-|Y|= 2k-|Y|-(d_X(u)+d_X(v)),$ where the right hand side only counts the number of triangles $uvy$ with $y\in Y$. Adding up, we have $$\begin{aligned} \label{Equ: lower bound of T(G)} T(G)\ge e(X)(2k-|Y|)-\phi(G[X])=\frac 1 2\beta k(2k-|Y|)-\phi(G[X]).\end{aligned}$$ Using $T(G)\le \frac k 4(3k-n-1)$, $k>\frac{2n}{5},$ $|Y|\le \frac{n-1}{2}$ and $\beta\ge 1$, we can get the following lower bound of $\phi(G[X])$ (see equation (4.3) in [@Lo09]) $$\begin{aligned} \label{Equ: lower bound of phi(X)} \phi(G[X])\ge \frac 1 8 (3\beta-1) k^2.\end{aligned}$$ In order to apply Lemma [Lemma 6](#Lem: Lo's lemma){reference-type="ref" reference="Lem: Lo's lemma"} with $H:=G[X]$, we now proceed to show that $\beta<\frac 6 5.$ Let $S$ be the set of vertices $x\in X$ with $d_X(x)\ge \alpha\sqrt{\beta k}$, where $\alpha\le \frac 1 2\sqrt{k/\beta}$ is some real number to be chosen later. Then, $|S|\le 2e(X)/\alpha\sqrt{\beta k}= \sqrt{\beta k}/\alpha$. Also $\Sigma:=\sum_{u\in S}d_X(u)\le e(X)+\binom{|S|}{2}\le (\frac 1 2+\frac 1 {2\alpha^2})\beta k.$ By the maximality of $e(X,Y)$, we have $d_X(x)\leq \frac{k}{2}$ and $d_Y(y)\le \frac k 2$ for all $x\in X$ and $y\in Y$. Hence, we have $$\begin{aligned} \phi(G[X])&=\sum_{u\in S}d_X(u)^2+\sum_{v\in X\setminus S}d_X(v)^2 \le\frac k 2\cdot\Sigma+\alpha\sqrt{\beta k}\cdot\left(2e(X)-\Sigma\right)\\ &\le \frac 1 2 \left(\frac 1 2+\frac 1 {2\alpha^2}\right)\beta k^2+\alpha\left(\frac 1 2-\frac 1 {2\alpha^2}\right)(\beta k)^{\frac{3}{2}} \le \left(\frac 1 4+\frac 1 {4\alpha^2}\right)\beta k^2+\frac \alpha 2 (\beta k)^{\frac{3}{2}},\end{aligned}$$ where the second-to-last inequality holds since $\alpha\le \frac 1 2\sqrt{k/\beta}$ and $\Sigma\leq (\frac 1 2+\frac 1 {2\alpha^2})\beta k$. Recall $\beta\le 15\sqrt k$ and $k>\frac{2n}{5}\geq 2^{24}$. So we have $(k/\beta)^{1/6}<\frac 1 2\sqrt{k/\beta}.$ Thus, we can set $\alpha=(k/\beta)^{1/6}$ to get that $$\phi(G[X])\le \frac{1}{4}\beta k^2+\frac 1 4 \beta^{\frac 4 3}k^{\frac 5 3}+\frac 1 2 \beta^{\frac 4 3}k^{\frac 5 3}=\frac{1}{4}\beta k^2+\frac 3 4 \beta^{\frac 4 3}k^{\frac 5 3}.$$ Combining with the inequality [\[Equ: lower bound of phi(X)\]](#Equ: lower bound of phi(X)){reference-type="eqref" reference="Equ: lower bound of phi(X)"}, we get that $$\beta-1-6\beta^{\frac 4 3}k^{-\frac{1}{3}}\le 0.$$ Let $f(\beta)=\beta-1-6\beta^{\frac 4 3}k^{-\frac{1}{3}}.$ To prove $\beta< \frac 6 5$, it is enough to show that $f(\beta)>0$ when $\beta \in[\frac 6 5, 15\sqrt k]$. Since $f''(\beta)=-\frac 8 3 \beta^{-\frac 2 3}k^{-\frac{1}{3}}<0$, the function $f$ is concave, so we only need to check the cases when $\beta= \frac 6 5$ and $\beta=15\sqrt k.$ If $\beta= \frac 6 5$, we have $f(\beta)=\frac 1 5-6\cdot (\frac 6 5)^{\frac{4}{3}}\cdot k^{-1/3}>0,$ as $k>2^{24}$. If $\beta= 15\sqrt k$, we have $f(\beta)=15\sqrt k-1-6\cdot 15^{\frac 4 3}\cdot k^{\frac 1 3}>0,$ as $k> 2^{24}$. This completes the proof that $\beta< \frac 6 5$. Note that $\frac 6 5>\beta\ge |X|-|Y|\ge 1$. So $|X|-|Y|$ is an integer which must be $1$. 
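As a small numerical aside (ours, not part of the argument), one can spot-check the two endpoint evaluations of $f$ used above at the threshold value $k=2^{24}$:

```python
# Spot-check (not a proof) of f(beta) = beta - 1 - 6*beta^(4/3)*k^(-1/3)
# at the two endpoints used above, for the threshold value k = 2**24.
k = 2 ** 24
f = lambda beta: beta - 1 - 6 * beta ** (4 / 3) * k ** (-1 / 3)
assert f(6 / 5) > 0            # approximately 0.17
assert f(15 * k ** 0.5) > 0    # approximately 4.6e3
```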
By applying Lemma [Lemma 6](#Lem: Lo's lemma){reference-type="ref" reference="Lem: Lo's lemma"} with $H:=G[X]$ and $r:=\frac k 2$, we conclude that $\phi(G[X])\le \frac 1 4(\beta^2-2\beta+2)k^2 + \frac 1 2(5\beta-4)k.$ Then, by [\[Equ: lower bound of T(G)\]](#Equ: lower bound of T(G)){reference-type="eqref" reference="Equ: lower bound of T(G)"}, we have $$\begin{aligned} \label{Eq:T(G)>k/4(3k-n-1)} \nonumber T(G)&\ge \frac 1 2\beta k (2k-|Y|)-\phi(G[X]) \ge \frac 1 2\beta k \left(2k-\frac{n-1}{2}\right)-\frac 1 4(\beta^2-2\beta+2)k^2 - \frac 1 2(5\beta-4)k\\ &=\frac k 4 \Big( 4\beta k-(\beta^2-2\beta+2)k-\beta(n-1)-2(5\beta-4)\Big) \ge \frac{k}{4}(3k-n-1),\end{aligned}$$ where the last inequality holds because $g(\beta):=4\beta k-(\beta^2-2\beta+2)k-\beta(n-1)-2(5\beta-4)\ge g(1)=3k-n-1$, which follows from the fact $g'(\beta)=6k-2\beta k-(n-1)-10>0$ (as $\beta<\frac 6 5$, $n<\frac {5k}{2}$ and $k>2^{24}$). Thus, we get the equality in [\[Eq:T(G)\>k/4(3k-n-1)\]](#Eq:T(G)>k/4(3k-n-1)){reference-type="eqref" reference="Eq:T(G)>k/4(3k-n-1)"}, from which we can easily drive that $\beta=1$, $|Y|=\frac{n-1}{2}$, $|X|=\frac{n+1}{2}$, and $e(G[Y])=0$. Moreover, by Lemma [Lemma 6](#Lem: Lo's lemma){reference-type="ref" reference="Lem: Lo's lemma"}, $G[X]$ consists of a star centered at $v$ with $\frac k 2$ edges and $\frac{n-k-1}{2}$ isolated vertices. By [\[Equ: lower bound of T(G)\]](#Equ: lower bound of T(G)){reference-type="eqref" reference="Equ: lower bound of T(G)"}, any edge $vw$ in $G[X]$ is contained in exactly $2k-|Y|-(d_X(v)+d_X(w))=2k-\frac {n-1}{2}-(\frac k 2+1)=\frac{3k-n-1}{2}$ triangles. Taking into account all of this information, it becomes evident that $G\in {\mathcal{G}}(n,k)$, which completes the proof of Theorem [Theorem 3](#Thm: Main){reference-type="ref" reference="Thm: Main"}. ------------------------------------------------------------------------ To conclude this section, we give the proof of Corollary [Corollary 4](#Cor: Improvement of t(n,k)){reference-type="ref" reference="Cor: Improvement of t(n,k)"}. **Proof of Corollary [Corollary 4](#Cor: Improvement of t(n,k)){reference-type="ref" reference="Cor: Improvement of t(n,k)"}.** Let $n\ge 10^9$ be odd and $k>\frac{2n}{5}$ be even. Note that in fact, $k\ge \frac{2(n+1)}{5}.$ By Theorem [Theorem 3](#Thm: Main){reference-type="ref" reference="Thm: Main"}, when $\frac{2n}{5}< k< \frac{n}{2}$, we have $t(n,k)=\frac k 4(3k-n-1)$, which is an increasing function with variable $k$. Thus in this range, we can get that $t(n,k)\ge \frac{n+1}{10}(3\cdot \frac{2n+2}{5}-n-1)=\frac{(n+1)^2}{50},$ where the equality holds if and only if $k=\frac{2(n+1)}{5}$ and $n+1$ is divisible by $5$. Now consider $k\ge \frac n 2.$ Since $n$ is odd, we see $k\ge \frac{n+1}{2}.$ Let $G$ be any $n$-vertex $k$-regular graph. Using the classic Moon-Moser inequality [@MM65], we have $T(G)\ge \frac{4e(G)}{3}(\frac{e(G)}{n}-\frac{n}{4})$. Since $e(G)=\frac{kn}{2},$ it gives that $T(G)\ge \frac{2kn}{3}(\frac{k}{2}-\frac{n}{4})> \frac{n^2}{3}(\frac{n+1}{4}-\frac{n}{4})=\frac{n^2}{12}>\frac{(n+1)^2}{50}$, which completes the proof of Corollary [Corollary 4](#Cor: Improvement of t(n,k)){reference-type="ref" reference="Cor: Improvement of t(n,k)"}. 
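The equality case in the computation above can also be confirmed symbolically; the following short check (our own verification sketch, assuming `sympy`) confirms that $\frac k 4(3k-n-1)$ equals $\frac{(n+1)^2}{50}$ identically when $k=\frac{2(n+1)}{5}$.

```python
import sympy as sp

n = sp.symbols('n')
k = sp.Rational(2, 5) * (n + 1)               # the equality case k = 2(n+1)/5
t = k * (3 * k - n - 1) / 4                   # the value of t(n,k) from Theorem 3
assert sp.expand(t - (n + 1) ** 2 / 50) == 0  # identically equal to (n+1)^2/50
```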
------------------------------------------------------------------------ # Proof of Theorem [Theorem 5](#Thm: Main Reduced){reference-type="ref" reference="Thm: Main Reduced"} {#proof-of-theorem-thm-main-reduced} This section is devoted to the proof of Theorem [Theorem 5](#Thm: Main Reduced){reference-type="ref" reference="Thm: Main Reduced"}. Throughout this section, we assume that $n\ge 10^9$ is odd and $k$ is even satisfying $\frac{2n}{5}< k< \frac{n}{2}$, and $G$ is an $n$-vertex $k$-regular graph with $$\label{Equ: T(G)<k/4(3k-n-1)} T(G)\le \frac k 4(3k-n-1).$$ Our goal is to show that $G$ can be made bipartite by deleting at most $\frac {15}{2} k^{\frac 3 2}$ edges. Call an edge $e\in E(G)$ ***heavy*** if $T(e)\ge \frac{3k-n-1}{3}.$ Let $H$ be the spanning subgraph of $G$ whose edge set consists of all heavy edges of $G$. Let $$U=\{u\in V(H):d_H(u)\ge \frac 3 2 \sqrt{k}\}\ \ \text{and} \ \ G'=G\setminus E(H).$$ Building upon the approach used in [@Lo09], it is crucial to show that $G'$ is "close" to be a bipartite graph. To make the first step, we have the following proposition proved in [@Lo09]. **Proposition 7**. *$G'$ is triangle-free.* *Proof.* It is enough to show that any triangle in $G$ contains at least one heavy edge. Let $T$ be any triangle in $G$ with edges $e_1,e_2,e_3$. For $i\in\{0,1,2,3\}$, let $m_i$ denote the number of vertices in $G$ with exact $i$ neighbors in $T$. By double-counting, we have $\sum_{i=0}^3 m_i=n,$ and $\sum_{i=0}^3 im_i=3k,$ which implies that $T(e_1)+T(e_2)+T(e_3)=m_2+3m_3\geq m_2+2m_3=3k-n+m_0\ge 3k-n.$ Hence at least one of the edges of $T$ is heavy. This proves that $G'=G\setminus E(H)$ is triangle-free. ◻ The next proposition is important in our proof, which says that $G'\setminus U$ does not contain five-cycles. **Proposition 8**. *$G'\setminus U$ is $C_5$-free.* The remaining proof is structured into two subsections. In Subsection [4.1](#Subsec:reduction){reference-type="ref" reference="Subsec:reduction"}, we demonstrate how the proof of Proposition [Proposition 8](#Pro: G'-U is C5-free){reference-type="ref" reference="Pro: G'-U is C5-free"} leads to the derivation of Theorem [Theorem 5](#Thm: Main Reduced){reference-type="ref" reference="Thm: Main Reduced"}. In Subsection [4.2](#Subsec: Proof of key claim){reference-type="ref" reference="Subsec: Proof of key claim"}, we complete the proof of Proposition [Proposition 8](#Pro: G'-U is C5-free){reference-type="ref" reference="Pro: G'-U is C5-free"}. ## Reducing to Proposition [Proposition 8](#Pro: G'-U is C5-free){reference-type="ref" reference="Pro: G'-U is C5-free"} {#Subsec:reduction} **Proof of Theorem [Theorem 5](#Thm: Main Reduced){reference-type="ref" reference="Thm: Main Reduced"} (Assuming Proposition [Proposition 8](#Pro: G'-U is C5-free){reference-type="ref" reference="Pro: G'-U is C5-free"}).** First, by double-counting the number of pairs of edges contained in triangles, and the definition of $H$, we can get that $$T(G)=\frac 1 3\sum_{e\in E(G)}T(e)\ge \frac 1 3\sum_{e\in E(H)}T(e)\ge \frac 1 3 \cdot \frac{3k-n-1}{3}\cdot|E(H)|.$$ Thus, by [\[Equ: T(G)\<k/4(3k-n-1)\]](#Equ: T(G)<k/4(3k-n-1)){reference-type="eqref" reference="Equ: T(G)<k/4(3k-n-1)"}, we have $|E(H)|\le \frac 9 4 k.$ Then, by the definition of $U$, we have that $$|U|\le \frac{2|E(H)|}{3\sqrt k/2}\le \frac{9k/2}{3\sqrt k/2}= 3\sqrt k.$$ Next, we claim that $G'\setminus U$ is bipartite. Suppose not. 
Then there is an odd cycle $C_{2\ell+1}$ with minimum length $2\ell+1$ in $G'\setminus U.$ By Propositions [Proposition 7](#Pro: G' is triangle-free){reference-type="ref" reference="Pro: G' is triangle-free"} and [Proposition 8](#Pro: G'-U is C5-free){reference-type="ref" reference="Pro: G'-U is C5-free"}, we have $\ell\ge 3.$ Let $V(C_{2\ell+1})=\{v_1,v_2,...v_{2\ell+1}\}.$ By Proposition [Proposition 7](#Pro: G' is triangle-free){reference-type="ref" reference="Pro: G' is triangle-free"}, we have $N_{G'}(v_1)\cap N_{G'}(v_2)=\emptyset.$ By the minimality of $C_{2\ell+1},$ we can conclude that $N_{G'}(v_1)\cap N_{G'}(v_{\ell+2})\subseteq U$ and $N_{G'}(v_2)\cap N_{G'}(v_{\ell+2})\subseteq U.$ Otherwise, one can find a cycle in $G'\setminus U$ with odd length $t\in\{\ell+2,\ell+3\}$; since $\ell+3<2\ell+1$ when $\ell\ge3,$ this contradicts our choice of $C_{2\ell+1}$. For any vertex $v\in V(G')\setminus U$, $d_H(v)\le \frac 3 2 \sqrt k$, which also says that $|N_{G'}(v)|\ge k-\frac 3 2 \sqrt k.$ Since $|U|\le 3\sqrt k$, we can get that $$n\ge |N_{G'}(v_1)\cup N_{G'}(v_2)\cup N_{G'}(v_{\ell+2})|\ge |N_{G'}(v_1)|+| N_{G'}(v_2)|+|N_{G'}(v_{\ell+2})|-|U|\ge 3(k-\frac 3 2 \sqrt k)-3\sqrt k,$$ which implies that $n\ge 3k-\frac {15}{2}\sqrt k>\frac 5 2 k>n$ (as $k>\frac{2}{5}n\geq 10^8$). This contradiction proves the claim. Therefore, $G'$ can be made bipartite by deleting all edges with at least one endpoint in $U$, which is at most $\binom{|U|}{2}+|U|(n-|U|)\le n|U|-\frac 1 2 |U|^2\le \frac 5 2 k |U|-\frac 1 2|U|^2.$ Since $\frac 5 2 k |U|-\frac 1 2|U|^2$ is monotonically increasing with respect to $|U|$ when $|U|\le 3\sqrt k$, $G'$ can be made bipartite by deleting at most $\frac 5 2 k |U|-\frac 1 2|U|^2\le \frac {15} 2 k^{\frac 3 2}-\frac 9 2 k$ edges. Note that $G'=G\setminus E(H)$ and $|E(H)|\le \frac 9 4 k$. So $G$ can be made bipartite by deleting at most $(\frac {15} 2 k^{\frac 3 2}-\frac 9 2 k)+|E(H)|< \frac {15} 2 k^{\frac 3 2}$ edges, completing the proof of Theorem [Theorem 5](#Thm: Main Reduced){reference-type="ref" reference="Thm: Main Reduced"}. ------------------------------------------------------------------------ ## Proof of Proposition [Proposition 8](#Pro: G'-U is C5-free){reference-type="ref" reference="Pro: G'-U is C5-free"} {#Subsec: Proof of key claim} We begin with an outline of the proof for Proposition [Proposition 8](#Pro: G'-U is C5-free){reference-type="ref" reference="Pro: G'-U is C5-free"}. Our approach primarily relies on stability-based reasoning. To begin, we assume, for the sake of contradiction, that $G'\setminus U$ contains a $C_5$. We then establish that $G'$ closely resembles a balanced blow-up of $C_5$. Furthermore, by analyzing the addition of edges from $H$ back into $G$, we observe an excess of triangles in $G$. This contradicts our initial assumption that $T(G)\leq \frac{k}{4}(3k-n-1)$. Throughout the proof, all indices will be modulo 5. Suppose on the contrary that there is a $C_5$, say $u_1u_2u_3u_4u_5u_1$, in $G'\setminus U$. For $i\in [5]$, we define $N^1_{i}=N_{G'}(u_{i-1})\cap N_{G'}(u_{i+1})$. By Proposition [Proposition 7](#Pro: G' is triangle-free){reference-type="ref" reference="Pro: G' is triangle-free"}, $G'$ is triangle-free, so $N_{G'}(u_{i})\cap N_{G'}(u_{i+1})=\emptyset$. 
Then, for $i\in[5]$, we can get that $$n\ge |N_{G'}(u_{i-1})\cup N_{G'}(u_{i})\cup N_{G'}(u_{i+1})|\ge d_{G'}(u_{i-1})+d_{G'}(u_i)+d_{G'}(u_{i+1})-|N^1_{i}|,$$ which implies that $$\label{Equ: lower bound of Ni} |N^1_i|\ge \sum_{j=i-1}^{i+1}d_{G'}(u_j)-n.$$ Since $G'$ is triangle-free, for different $i,j\in[5],$ we also have $N^1_{i}\cap N^1_{j}=\emptyset.$ Thus, by [\[Equ: lower bound of Ni\]](#Equ: lower bound of Ni){reference-type="eqref" reference="Equ: lower bound of Ni"}, we have $$\begin{aligned} \label{Equ: bound of union of Nis} n\ge \left|\bigcup_{i=1}^5 N^1_{i}\right|=\sum_{i=1}^5|N^1_i|\ge 3\sum_{i=1}^5d_{G'}(u_i)-5n,\end{aligned}$$ By [\[Equ: lower bound of Ni\]](#Equ: lower bound of Ni){reference-type="eqref" reference="Equ: lower bound of Ni"} again (summing over indices $[5]\backslash \{i\}$), we get $n-|N^1_i|\geq \Big(3\sum_{i=1}^5d_{G'}(u_i)-\sum_{j=i-1}^{i+1}d_{G'}(u_j)\Big)-4n,$ which implies that $|N^1_i|\le 5n-3\sum_{j=1}^5d_{G'}(u_j)+\sum_{j=i-1}^{i+1}d_{G'}(u_j).$ Therefore, for any $i\in[5],$ we have $$\label{Equ: bounds of Ni} \sum_{j=i-1}^{i+1}d_{G'}(u_j)-n\le |N^1_i|\le 5n-3\sum_{j=1}^5d_{G'}(u_j)+\sum_{j=i-1}^{i+1}d_{G'}(u_j)$$ Note that $d_{G'}(u)\ge k-\frac{3}{2}\sqrt k$ holds for any $u\in V(G')\setminus U$. Hence we have $\sum_{j=1}^5d_{G'}(u_j)\ge 5(k-\frac{3}{2}\sqrt k).$ Now we choose $v_1v_2v_3v_4v_5v_1$ to be a $C_5$ in $G'$ such that $$\sum_{i=1}^5d_{G'}(v_i)=\max \left\{\sum_{j=1}^5d_{G'}(w_i):G'[\{w_1,w_2,w_3,w_4,w_5\}] \hbox{ contains a } C_5\right\}.$$ Clearly the above cycle $u_1u_2u_3u_4u_5u_1$ is also a $C_5$ in $G'$. So we have $$\begin{aligned} \label{Equ:dG'(vi)>k-3/2sqrt k} \sum_{i=1}^5d_{G'}(v_i)\ge \sum_{i=1}^5d_{G'}(u_i) \ge 5\left(k-\frac{3}{2}\sqrt{k}\right).\end{aligned}$$ We partition the vertex set $V(G)$ using the information from the cycle $v_1v_2v_3v_4v_5v_1$ in the following (see Figure 2). For $i\in[5]$, let $a_i:=d_H(v_i)=k-d_{G'}(v_i)$, $$a:=\sum_{i=1}^5 a_i=5k-\sum_{i=1}^5d_{G'}(v_i), ~~ N_{i}:=N_{G'}(v_{i-1})\cap N_{G'}(v_{i+1})~~ \text{and}~~ Z:=V(G)\setminus (\cup_{i=1}^5N_i).$$ Note that [\[Equ: lower bound of Ni\]](#Equ: lower bound of Ni){reference-type="eqref" reference="Equ: lower bound of Ni"}, [\[Equ: bound of union of Nis\]](#Equ: bound of union of Nis){reference-type="eqref" reference="Equ: bound of union of Nis"} and [\[Equ: bounds of Ni\]](#Equ: bounds of Ni){reference-type="eqref" reference="Equ: bounds of Ni"} also hold for $N_i$ and $v_i.$ ![image](2.png){height="8cm" width="15cm"} Since $\sum_{i=1}^5d_{G'}(v_i)=5k-a$, by [\[Equ: bound of union of Nis\]](#Equ: bound of union of Nis){reference-type="eqref" reference="Equ: bound of union of Nis"}, we have that $$\label{Equ: bound of k} n\ge \sum_{i=1}^5|N_i|\ge 3(5k-a)-5n = 15k-5n-3a,$$ which implies that $\frac{2}{5}n < k \le \frac{2}{5}n+\frac{a}{5}.$ From now on, we define $b:=\frac{5k}{2}-n.$ Since $k$ is even, $b$ is an integer satisfying that $1\le b\le \frac{a}{2}.$ By [\[Equ:dG\'(vi)\>k-3/2sqrt k\]](#Equ:dG'(vi)>k-3/2sqrt k){reference-type="eqref" reference="Equ:dG'(vi)>k-3/2sqrt k"}, we also see that $2\leq a\leq \frac{15\sqrt{k}}{2}$. Next, we show that $a$ can be bounded from above by an absolute constant. We first point out that for any $i\in[5]$ and any vertex $w\in N_i$, it holds $d_{G'}(w)\le d_{G'}(v_i)$ (equivalently, $d_{H}(w)\ge d_{H}(v_i)=a_i$). 
Otherwise, $G'[\{v_1,v_2,v_3,v_4,v_5,w\}-\{v_i\}]$ also contains a $C_5$ with $\sum_{i=1}^5d_{G'}(v_i)+d_{G'}(w)-d_{G'}(v_i)> \sum_{i=1}^5d_{G'}(v_i)$, a contradiction to the definition of $v_1v_2v_3v_4v_5v_1$. Also note that $\sum_{i=1}^5\sum_{w\in N_i}d_{H}(w)\le 2|E(H)|\le \frac 9 2 k$. Using [\[Equ: bounds of Ni\]](#Equ: bounds of Ni){reference-type="eqref" reference="Equ: bounds of Ni"} (for $N_i$) and the facts that $n<\frac 5 2 k$ and $a\le \frac{15\sqrt{k}}{2},$ we have that $$\begin{aligned} \frac 9 2 k &\ge \sum_{i=1}^5\sum_{w\in N_i}d_{H}(w)\ge \sum_{i=1}^5a_i|N_i|\ge \sum_{i=1}^5a_i\Big(3k-\sum_{j=i-1}^{i+1}a_j-n\Big)\\ &\ge \frac{k}{2}\sum_{i=1}^5a_i-\sum_{i=1}^5\sum_{j=i-1}^{i+1}a_ia_j\ge \frac{ak}{2}-a^2\ge \frac{ak}{2}-\frac{225}{4}k,\end{aligned}$$ which implies that $a\le 121$. Therefore in the rest of the proof, it suffices to consider the cases $$\label{Equ:a<121} 2\le a\le 121.$$ ### Refining the structure To deal with the remaining cases, we establish more inequalities and accurate estimations to enhance the graph structure. Let $i\in [5]$. First, by substituting $a$ and $b$ and using the fact $3k-a\leq \sum_{j=i-1}^{i+1}d_{G'}(v_j)\leq 3k$, we can refine the inequality [\[Equ: bounds of Ni\]](#Equ: bounds of Ni){reference-type="eqref" reference="Equ: bounds of Ni"} and obtain the following new inequality $$\label{Equ:new bounds of N_i} \frac{k}{2}-a+b=3k-a-n\le |N_i|\le 5n-3(5k-a)+3k=\frac{k}{2}+3a-5b.$$ Next, we bound $|Z|$ in terms of $a$ and $b$. Note that $G'$ is triangle-free. So any vertex of $Z$ is adjacent to at most one vertex of $\{v_1,v_2,v_3,v_4,v_5\}$ in $G'$. Thus, $|Z|\ge \sum_{i=1}^5(d_{G'}(v_i)-|N_{i-1}|-|N_{i+1}|)=5k-a-2\sum_{i=1}^5|N_i|=5k-a-2(n-|Z|),$ which implies that $|Z|\le a-2b$. On the other hand, for any $i\in [5]$, $|N_{i-1}|+|N_{i+1}|\le d_{G'}(v_i)$. Thus, $\sum_{i=1}^5|N_i|\le (\sum_{i=1}^5d_{G'}(v_i))/2=(5k-a)/2$, and $|Z|=n-\sum_{i=1}^5|N_i|\ge (\frac{5}{2}k-b)-\frac{5k-a}{2}=\frac{a}{2}-b$. Summarizing, we have $$\label{Equ: bounds of Z} \frac{a}{2}-b\le |Z|\le a-2b.$$ As we discussed at the beginning of this proof, we want to demonstrate that when adding the edges of $H$ back into $G$, there will be many triangles in $G$, thus contradicting [\[Equ: T(G)\<k/4(3k-n-1)\]](#Equ: T(G)<k/4(3k-n-1)){reference-type="eqref" reference="Equ: T(G)<k/4(3k-n-1)"} For this purpose, it is important to classify the edges $e\in E(H)$ and estimate the value of $T(e)$. Set $\alpha:=\frac 1 {14}$ and let $$A=\{v\in V(G):d_{H}(v)\le \alpha k\}\ \ \text{and}\ \ B=V(G)\setminus A.$$ Since $|E(H)|\le \frac 9 4 k$, we have that $|B|\le \frac{9k/2}{\alpha k}\le \frac{9}{2\alpha}=63$. Let $E^0:=E(H)\setminus E(H[Z])$ and it is straightforward to see that $E^0=E_p\cup E_q\cup E_r\cup E_s,$ where $E_p=\{xy\in E(H): x, y\in N_i \hbox{ for some }i\in[5]\}$, $E_q=\{xy\in E(H): x\in N_i \hbox{ and } y\in N_{i+1} \hbox{ for some } i\in [5] \}$, $E_r=\{xy\in E(H): x\in N_i,\ y\in N_j \hbox{ and } |i-j|=2 \hbox{ for some } i,j\in[5] \}$ and $E_s=\{xy\in E(H): x\in Z \hbox{ and } y\in N_i \hbox{ for some } i\in[5]\}$. Let $E^0=E^1\cup E^2\cup E^3,$ where $E^1=E^0\cap E(H[A])$, $E^2=E^0\cap E(H[A,B])$ and $E^3=E^0\cap E(H[B])$. The next claim shows that most edges of $E^1$ are contained in many different triangles. **Claim 1**. *For $\lambda\in\{p,q,r,s\},$ let $E_\lambda^1=E_\lambda\cap E^1$. 
Then, we have* - *for any edge $x_1y_1\in E_p^1$, $|N_{G'}(x_1)\cap N_{G'}(y_1)|\ge \frac 6 7 k-840$;* - *$E_q^1=\emptyset$;* - *for any edge $x_3y_3\in E_r^1$, $|N_{G'}(x_3)\cap N_{G'}(y_3)|\ge \frac 5 {14} k -1400$;* - *for any edge $x_4y_4\in E_s^1,$ $|N_{G'}(x_4)\cap N_{G'}(y_4)|\geq \frac{5}{14}k-1400.$* *Proof.* For the first statement, let $x_1y_1\in E_p^1$. We may assume that $x_1,y_1\in N_1$, then the common neighbors of $x_1$ and $y_1$ in $G'$ must belong to $N_2\cup N_5\cup Z$, otherwise there is a triangle in $G'$ containing $v_2$ or $v_5$. Thus, by [\[Equ:new bounds of N_i\]](#Equ:new bounds of N_i){reference-type="eqref" reference="Equ:new bounds of N_i"}, [\[Equ: bounds of Z\]](#Equ: bounds of Z){reference-type="eqref" reference="Equ: bounds of Z"} and the definition of $A$, we have that $|N_{G'}(x_1)\cap N_{G'}(y_1)|\ge d_{G'}(x_1)+d_{G'}(y_1)-|N_2|-|N_5|-|Z|\ge 2(1-\alpha)k-2(\frac k 2+3a-5b)-(a-2b)=(1-2\alpha)k-7a+12b\ge \frac 6 7 k-840,$ the last inequality holds since $a\le 121$ and $b\ge 1.$ For the second statement, suppose on the contrary that there is an edge $x_2y_2\in E_q^1$. If $x_2y_2z_2$ is a triangle in $G$ and $z_2\in \bigcup_{i=1}^5N_i$, then $z_2$ must belong to $N_{H}(x_2)\cup N_H(y_2)$, otherwise there exist $w\in\{x_2,y_2\}$ and some $i\in[5]$ such that $v_iwz_2$ is a triangle in $G'$. Thus, by [\[Equ: bounds of Z\]](#Equ: bounds of Z){reference-type="eqref" reference="Equ: bounds of Z"} and the definition of $A$, the number of triangles in $G$ containing $x_2y_2$ is at most $d_{H}(x_2)+d_{H}(y_2)+|Z|\le 2\alpha k+a-2b\le \frac k 7+121 < \frac {3k-n-1} 3$, where the last inequality holds since $n=\frac 5 2 k-b$ and $k\ge 10^8,$ which contradicts that $x_2y_2\in E(H)$ is heavy. Hence, $E_q^1=\emptyset$. For the third statement, let $x_3y_3\in E_r^1$. We may assume that $x_3\in N_1$ and $y_3\in N_3$, then the common neighbors of $x_3$ and $y_3$ in $G'$ must belong to $N_2\cup Z$. Note $N_{G'}(x_3)\subseteq N_2\cup N_5\cup Z,$ which implies that $N_{G'}(x_3)\setminus N_5\subseteq N_2\cup Z.$ Similarly, $N_{G'}(y_3)\setminus N_4\subseteq N_2\cup Z.$ Thus, by [\[Equ:new bounds of N_i\]](#Equ:new bounds of N_i){reference-type="eqref" reference="Equ:new bounds of N_i"}, [\[Equ: bounds of Z\]](#Equ: bounds of Z){reference-type="eqref" reference="Equ: bounds of Z"} and the definition of $A$, we have that $|N_{G'}(x_3)\cap N_{G'}(y_3)|\ge (d_{G'}(x_3)-|N_5|)+(d_{G'}(y_3)-|N_4|)-|N_2|-|Z|\ge 2(1-\alpha) k-3(\frac k 2+3a-5b)-(a-2b)=\frac k 2-2\alpha k-10a+17b\ge \frac 5 {14} k -1400$. For the last statement, let $x_4y_4\in E_s^1$ with $x_4\in Z$. We first claim that there is an integer $i\in [5]$ such that $|N_{G'}(x_4)\cap N_i|, |N_{G'}(x_4)\cap N_{i+2}|\geq \frac 3 7 k-410$ and $|N_{G'}(x_4)\cap N_j|\le 27$ for $j\in[5]\setminus\{i,i+2\}$. Indeed, for any vertex $z\in Z$ and any integer $i\in [5]$, since $G'$ is triangle free, there is no edge between $N_{G'}(z)\cap N_i$ and $N_{G'}(z)\cap N_{i+1}$ in $G'$. 
Thus, any vertex $v\in N_{G'}(z)\cap N_{i}$ has degree at most $(|N_{i-1}|-|N_{G'}(z)\cap N_{i-1}|)+(|N_{i+1}|-|N_{G'}(z)\cap N_{i+1}|)+|Z|$ in $G'$, which is equivalent to $d_H(v)\ge k-(|N_{i-1}|+|N_{i+1}|+|Z|-|N_{G'}(z)\cap N_{i+1}|-|N_{G'}(z)\cap N_{i-1}|).$ Therefore, we can conclude that for any $z\in Z,$ $$\label{Equ: bound of neighborhood of z in Ni} |N_{G'}(z)\cap N_i|\le \frac{2|E(H)|}{k-(|N_{i-1}|+|N_{i+1}|+|Z|-|N_{G'}(z)\cap N_{i+1}|-|N_{G'}(z)\cap N_{i-1}|)}.$$ Since $d_{G'}(x_4)\ge k-\alpha k$, without loss of generality, we may assume that $|N_{G'}(x_4)\cap N_1|\ge (k-\alpha k-|Z|)/5\ge (k-\alpha k-a+2b)/5$. By [\[Equ:new bounds of N_i\]](#Equ:new bounds of N_i){reference-type="eqref" reference="Equ:new bounds of N_i"} and [\[Equ: bounds of Z\]](#Equ: bounds of Z){reference-type="eqref" reference="Equ: bounds of Z"}, since $k\ge 10^8$ and $a\le 121$, we have $$|N_{G'}(x_4)\cap N_2|\le \frac{9k/2}{k-2(\frac k 2 +3a-5b)-(a-2b)+(k-\alpha k-a+2b)/5}=\frac{45k}{\frac {13} 7 k-72a+124b}\le 27.$$ Similarly, we have $|N_{G'}(x_4)\cap N_5|\le 27$. Furthermore, without loss of generality, we may assume that $|N_{G'}(x_4)\cap N_3|\ge (k-\alpha k-|N_1|-|Z|-54)/2\ge (\frac k 2-\alpha k-4a+7b-54)/2$. Then, by using [\[Equ: bound of neighborhood of z in Ni\]](#Equ: bound of neighborhood of z in Ni){reference-type="eqref" reference="Equ: bound of neighborhood of z in Ni"} again, we have $|N_{G'}(x_4)\cap N_4|\le 27$. Therefore, by [\[Equ:new bounds of N_i\]](#Equ:new bounds of N_i){reference-type="eqref" reference="Equ:new bounds of N_i"} and [\[Equ: bounds of Z\]](#Equ: bounds of Z){reference-type="eqref" reference="Equ: bounds of Z"}, since $a\le 121$ and $b\ge 1$, we can get that $$|N_{G'}(x_4)\cap N_1|\ge k-\alpha k -|N_3|-|Z|-81\ge \frac k 2-\alpha k- 4a+7b-81\ge \frac 3 7 k-558.$$ Similarly, we can also get that $|N_{G'}(x_4)\cap N_3|\ge\frac 3 7 k-558$. Thus, we have proved that most of the vertices of $N_{G'}(x_4)$ are contained in $N_1$ and $N_3.$ Next, we show that $x_4$ and $y_4$ have many common neighbors in $G'.$ First, we claim that $y_4\notin (N_1\cup N_3)$. By symmetry, we only need to show that $y_4\notin N_1.$ Suppose on the contrary that $y_4\in N_1$, then the third vertex of any triangle in $G$ containing $x_4y_4$ must belong to $(N_{G}(x_4)\cap (N_2\cup N_5\cup Z))\cup (N_{G}(y_4)\cap (N_1\cup N_3\cup N_4))$. It is easy to see that $|N_{G}(x_4)\cap (N_2\cup N_5\cup Z)|\le d_{H}(x_4)+|N_{G'}(x_4)\cap N_2|+|N_{G'}(x_4)\cap N_5|+|Z|\le \alpha k +a-2b+54$ and $|N_{G}(y_4)\cap (N_1\cup N_3\cup N_4)|\le d_{H}(y_4)\le \alpha k$. Thus the number of triangles containing edge $x_4y_4$ is at most $2\alpha k +a-2b+54\le \frac k 7 +175<\frac {3k-n-1} 3$, where the last inequality holds since $n=\frac 5 2 k-b$ and $k\ge 10^8,$ which contradicts that $x_4y_4\in E(H)$ is heavy. If $y_4\in N_2$, then $N_{G'}(y_4)\subseteq N_1\cup N_3\cup Z$. By [\[Equ:new bounds of N_i\]](#Equ:new bounds of N_i){reference-type="eqref" reference="Equ:new bounds of N_i"} and [\[Equ: bounds of Z\]](#Equ: bounds of Z){reference-type="eqref" reference="Equ: bounds of Z"}, $$\begin{aligned} |N_{G'}(x_4)\cap N_{G'}(y_4)|&\ge |N_{G'}(x_4)\cap (N_1\cup N_3\cup Z)|+d_{G'}(y_4)-|N_1|-|N_3|-|Z|\\ &\ge 2(\frac 3 7 k-558)+\frac {13}{14}k-2(\frac k 2+3a-5b)-(a-2b)\ge \frac {11}{14}k-1951\ge \frac{5}{14}k-1400. \end{aligned}$$ If $y_4\in N_4$, then $N_{G'}(y_4)\subseteq N_3\cup N_5\cup Z$. 
By [\[Equ:new bounds of N_i\]](#Equ:new bounds of N_i){reference-type="eqref" reference="Equ:new bounds of N_i"} and [\[Equ: bounds of Z\]](#Equ: bounds of Z){reference-type="eqref" reference="Equ: bounds of Z"}, $$\begin{aligned} |N_{G'}(x_4)\cap N_{G'}(y_4)|&\ge |N_{G'}(x_4)\cap (N_3\cup N_5\cup Z)|+d_{G'}(y_4)-|N_3|-|N_5|-|Z|\\ &\ge \frac 3 7 k-558+\frac {13}{14}k-2(\frac k 2+3a-5b)-(a-2b)\ge \frac{5}{14}k-1400. \end{aligned}$$ Similarly, if $y_4\in N_5$, then $N_{G'}(y_4)\subseteq N_1\cup N_4\cup Z$ and we can show that $|N_{G'}(x_4)\cap N_{G'}(y_4)|\ge \frac{5}{14}k-1400$. This completes the proof of Claim [Claim 1](#Clm: numebr of triangles){reference-type="ref" reference="Clm: numebr of triangles"}. ◻ ### Reducing to the cases $a\in \{2,3\}$ Having the refined structure, we now proceed to complete the proof of Proposition [Proposition 8](#Pro: G'-U is C5-free){reference-type="ref" reference="Pro: G'-U is C5-free"}. By double-counting the edges of $E(H[B])\setminus E(H[Z]),$ we have that $|E^2|+2|E^3|=\sum_{i=1}^5\sum_{w\in B\cap N_i}d_{H}(w)+\sum_{z\in Z\cap B}|N_{H}(z)\cap (\bigcup_{i=1}^5N_i)|.$ Since $E^0=E(H)\setminus E(H[Z]),$ we have that $$\label{Equ:2|E^0|} \begin{split} 2(|E^1|+|E^2|&+|E^3|)=2|E^0|=\sum_{i=1}^{5}\sum_{w\in N_i}d_{H}(w)+\sum_{z\in Z}\Big|N_{H}(z)\cap (\bigcup_{i=1}^5N_i)\Big|=\sum_{i=1}^5\sum_{w\in A\cap N_i}d_{H}(w)\\ &+\sum_{i=1}^5\sum_{w\in B\cap N_i}d_{H}(w)+\sum_{z\in Z}\Big|N_{H}(z)\cap (\bigcup_{i=1}^5N_i)\Big|\ge \sum_{i=1}^5\sum_{w\in A\cap N_i}d_{H}(w)+|E^2|+2|E^3|. \end{split}$$ Recall that any vertex $w\in N_i$ satisfies that $d_{H}(w)\ge d_{H}(v_i)=a_i$. By [\[Equ:new bounds of N_i\]](#Equ:new bounds of N_i){reference-type="eqref" reference="Equ:new bounds of N_i"}, we have $|N_i|\ge \frac k 2 -a.$ Combining with [\[Equ:2\|E\^0\|\]](#Equ:2|E^0|){reference-type="eqref" reference="Equ:2|E^0|"} and $|B|\le 63,$ we have $$\label{Equ: bound of sum of E^1 and E^2} 2|E^1|+|E^2|\ge \sum_{i=1}^5\sum_{w\in A\cap N_i}d_{H}(w)\ge \sum_{i=1}^5a_i|N_i\cap A|\ge \sum_{i=1}^5a_i(|N_i|-|B|)\ge \frac{ak}{2}-a^2-63a.$$ Let $T_2$ be set of triangles containing at least one edge in $E^2$. Let $M$ be the number of pairs $(e,T)$ satisfying that $T\in T_2$ contains $e\in E^2$. By the definition of $H$, we get $M\ge \frac{3k-n-1}{3}|E^2|.$ Since any triangle in $G$ contains at most two edges in $E(H[A,B])$, we get $M\le 2|T_2|.$ Hence, $$\label{Equ:T2} |T_2|\ge \frac{M}{2}\ge \frac{3k-n-1}{6}|E^2|\ge \frac{k}{12}|E^2|.$$ Since $G'=G\setminus E(H)$ is triangle-free, all the triangles we have considered in Claim [Claim 1](#Clm: numebr of triangles){reference-type="ref" reference="Clm: numebr of triangles"} are disjoint from each other and disjoint from the triangles in $T_2.$ Then, using Claim [Claim 1](#Clm: numebr of triangles){reference-type="ref" reference="Clm: numebr of triangles"} (note that $|E_q^1|=0$), [\[Equ: bound of sum of E\^1 and E\^2\]](#Equ: bound of sum of E^1 and E^2){reference-type="eqref" reference="Equ: bound of sum of E^1 and E^2"} and [\[Equ:T2\]](#Equ:T2){reference-type="eqref" reference="Equ:T2"}, we can derive that $$\begin{aligned} \label{Equ: lower bound of number of triangles for a>4} \nonumber T(G) &\ge \left(\frac 6 7 k-840\right)|E_p^1|+\left(\frac {5}{14} k-1400\right)|E_r^1|+\left(\frac {5}{14} k-1400\right)|E_s^1|+|T_2|\\ &\ge\frac {k}{6}\Big(|E_p^1|+|E_r^1|+|E_s^1|\Big)+ \frac{k}{12}|E^2| =\frac{k}{12}\Big(2|E^1|+|E^2|\Big) \ge \frac{a}{24}k^2-\frac{a^2+63a}{12}k. \end{aligned}$$ Suppose that $a\geq 4$. 
Then by [\[Equ:a\<121\]](#Equ:a<121){reference-type="eqref" reference="Equ:a<121"}, we have $4\le a\le 121$. Since $k\ge 10^8$, $n=\frac{5}{2}k-b$ and $b\le \frac{a}{2}$, it is easy to see that using [\[Equ: lower bound of number of triangles for a\>4\]](#Equ: lower bound of number of triangles for a>4){reference-type="eqref" reference="Equ: lower bound of number of triangles for a>4"}, we have $T(G)\ge \frac 1 6 k^2-\frac{67}{3}k>\frac{1}{8}k^2+\frac{b-1}{4}k=\frac k 4(3k-n-1),$ a contradiction to our assumption [\[Equ: T(G)\<k/4(3k-n-1)\]](#Equ: T(G)<k/4(3k-n-1)){reference-type="eqref" reference="Equ: T(G)<k/4(3k-n-1)"}. Hence, we only need to examine the remaining two cases of $a=2$ and $a=3$, which will be addressed in the subsequent subsections. ### The case when $a=3$ In this case, since $1\le b\le \frac a 2$, we get that $b=1$ and thus $n=\frac 5 2 k-1.$ Then, by [\[Equ: bounds of Z\]](#Equ: bounds of Z){reference-type="eqref" reference="Equ: bounds of Z"}, $\frac 1 2\le |Z|\le 1,$ which implies that $|Z|=1$. Let $Z=\{z\}.$ By [\[Equ:new bounds of N_i\]](#Equ:new bounds of N_i){reference-type="eqref" reference="Equ:new bounds of N_i"}, we have that $\frac k 2-2\le |N_i|\le \frac k 2+4$ for $i\in [5]$. We first claim that $|B|\ge 2.$ Indeed, if $|B|\le 1$, then $|E^2|\le k$. By [\[Equ: bound of sum of E\^1 and E\^2\]](#Equ: bound of sum of E^1 and E^2){reference-type="eqref" reference="Equ: bound of sum of E^1 and E^2"}, we have $2|E^1|+|E^2|\ge \frac{ak}{2}-a^2-63a\ge\frac {3}{2}k-200,$ which implies that $|E^1|\ge \frac{k}{4}-100.$ Then, since $k\ge 10^8,$ Claim [Claim 1](#Clm: numebr of triangles){reference-type="ref" reference="Clm: numebr of triangles"} implies that $$\begin{aligned} \label{Equ: lower bound of number of triangles when a=3} T(G) &\ge \left(\frac 6 7 k-840\right)|E_p^1|+\left(\frac {5}{14} k-1400\right)|E_r^1|+\left(\frac {5}{14} k-1400\right)|E_s^1|+\frac{k}{12}|E^2|\\ &\ge \left(\frac{5}{14}k-1400\right)|E^1|+\frac{k}{12}|E^2| \ge \left(\frac{5}{14}k-1400\right)\left(\frac{k}{4}-100\right)+\frac{k^2}{12} >\frac{1}{8}k^2=\frac{k(3k-n-1)}{4}, \end{aligned}$$ which is a contradiction to our assumption [\[Equ: T(G)\<k/4(3k-n-1)\]](#Equ: T(G)<k/4(3k-n-1)){reference-type="eqref" reference="Equ: T(G)<k/4(3k-n-1)"}. Thus, we may assume that $|B|\ge 2$. Then, since $|Z|=1$, we can get that $|B\cap N_i|\ge 1$ for some integer $i\in [5]$. Without loss of generality, we may assume that there is a vertex $w_1\in B\cap N_1$. Next, we show that $|N_{H}(w_1)\cap (N_1\cup N_3\cup N_4)|\ge \frac{k}{28}.$ Indeed, if $|N_{H}(w_1)\cap (N_1\cup N_3\cup N_4)|< \frac{k}{28}$, then $|N_{H}(w_1)\cap (N_2\cup N_5)|\ge d_{H}(w_1)-\frac{k}{28}-1\ge \frac{k}{28}-1>|B|$. We may choose a vertex $w_2\in N_{H}(w_1)\cap N_2\cap A$. Then, since $k\ge 10^8,$ the number of triangles in $G$ containing $w_1w_2$ is at most $$|N_{H}(w_1)\cap (N_1\cup N_3\cup N_4)|+d_{H}(w_2)+|Z|<\frac{k}{28}+\frac{k}{14}+1< \frac{k}{6}=\frac{3k-n-1}{3},$$ which is a contradiction to $w_1w_2\in E(H)$ is heavy. We denote the set of vertex in $(N_2\cup N_5)\cap A$ that are not adjacent to $w_1$ in $G$ by $S.$ In the following, we prove that $|S|$ is big and the degree of most vertices of $S$ in $H$ is larger than $v_2$ or $v_5.$ In this way, we can improve the bound of [\[Equ: bound of sum of E\^1 and E\^2\]](#Equ: bound of sum of E^1 and E^2){reference-type="eqref" reference="Equ: bound of sum of E^1 and E^2"} and get a contradiction. 
Since $|N_{H}(w_1)\cap (N_1\cup N_3\cup N_4)|\ge \frac {k}{28}$, $|N_G(w_1)\cap (N_2\cup N_5)|=|N_{G'}(w_1)\cap (N_2\cup N_5)|+|N_H(w_1)\cap (N_2\cup N_5)|\le k-|N_{H}(w_1)\cap (N_1\cup N_3\cup N_4)|\le k-\frac {k}{28}$. Thus, we get that $$\label{Equ:lower bound of S} |S|\ge|N_2|+|N_5|-|N_G(w_1)\cap (N_2\cup N_5)|-|B|\ge 2(\frac k 2-2)-k+\frac {k}{28}-63\ge \frac {k}{28}-67.$$ Next, we analyze the connection between these vertices and $z$. If no vertex $v_i$ with $i\in[5]$ is contained in $N_{G'}(z)$, then, similar to [\[Equ: lower bound of Ni\]](#Equ: lower bound of Ni){reference-type="eqref" reference="Equ: lower bound of Ni"}, we can get $|N_i|\ge d_{G'}(v_{i-1})+d_{G'}(v_i)+d_{G'}(v_{i+1})-n+1,$ which implies that $n-1=\sum_{i=1}^5|N_i|\ge 3\sum_{i=1}^5d_{G'}(v_i)-5n+5=3(5k-3)-5n+5,$ and thus we have $n\ge \frac 5 2 k-\frac 1 2,$ which contradicts $n=\frac 5 2 k-1.$ Thus, we may assume that $zv_{j}\in E(G')$ for some $j\in[5].$ According to our definition of $N_i,$ for $i\neq j$, $zv_i\notin E(G').$ Then, similar to [\[Equ: bound of union of Nis\]](#Equ: bound of union of Nis){reference-type="eqref" reference="Equ: bound of union of Nis"}, we can get that $n-1=\sum_{i=1}^5|N_i|\ge 3\sum_{i=1}^5d_{G'}(v_i)-5n+2=3(5k-3)-5n+2,$ and thus we have $n\ge \frac 5 2 k-1$; since $n=\frac 5 2 k-1$, equality must hold throughout. Therefore, we can get $N_{G'}(v_j)=N_{j-1}\cup N_{j+1}\cup \{z\}$ and for any $w\in N_j,$ $N_{G'}(w)\subseteq N_{j-1}\cup N_{j+1}\cup \{z\}.$ For $i\in [5],$ $i\neq j,$ $N_{G'}(v_i)=N_{i-1}\cup N_{i+1}$ and for any $w\in N_i,$ $N_{G'}(w)\subseteq N_{i-1}\cup N_{i+1}\cup \{z\}.$ First note that $N_{G'}(z)\cap (N_{j-1}\cup N_{j+1})=\emptyset,$ as otherwise we would get a triangle in $G'.$ Let $Z_j=N_j\setminus (N_{G'}(z)\cup B),$ $Z_{j-2}=(N_{G'}(z)\cap N_{j-2})\setminus B$ and $Z_{j+2}=(N_{G'}(z)\cap N_{j+2})\setminus B.$ Now, we claim that $|Z_j|,|Z_{j-2}|, |Z_{j+2}| \le \frac{k}{112}.$ Indeed, if $|Z_j|>\frac{k}{112},$ then, since $zv_j\in E(G'),$ comparing the vertices of $Z_j$ with $v_j,$ we can get $\sum_{i=1}^5\sum_{w\in A\cap N_i}d_{G'}(w)\le \sum_{i=1}^5(k-a_i)|N_i\cap A|-\frac{k}{112}.$ By [\[Equ: bound of sum of E\^1 and E\^2\]](#Equ: bound of sum of E^1 and E^2){reference-type="eqref" reference="Equ: bound of sum of E^1 and E^2"}, this implies that $$2|E^1|+|E^2|\ge \sum_{i=1}^5\sum_{w\in A\cap N_i}d_{H}(w)\ge \sum_{i=1}^5a_i|N_i\cap A|+\frac{k}{112}\ge 3\Big(\frac k 2 -65\Big)+\frac{k}{112}\ge\frac{169}{112}k-200>\frac{3}{2}k,$$ and by [\[Equ: lower bound of number of triangles for a\>4\]](#Equ: lower bound of number of triangles for a>4){reference-type="eqref" reference="Equ: lower bound of number of triangles for a>4"}, since $k\ge 10^8,$ we can get that the number of triangles in $G$ is at least $$T(G)\ge \frac{k}{12}\Big(2|E^1|+|E^2|\Big)> \frac{k}{12}\cdot\frac{3}{2}k=\frac{1}{8}k^2=\frac{k(3k-n-1)}{4},$$ which contradicts our assumption [\[Equ: T(G)\<k/4(3k-n-1)\]](#Equ: T(G)<k/4(3k-n-1)){reference-type="eqref" reference="Equ: T(G)<k/4(3k-n-1)"}. This shows that $|Z_j|\le \frac{k}{112}.$ We denote the property that $2|E^1|+|E^2|>\frac{3}{2}k$ by $(\star).$ By the analysis above, once we establish $(\star),$ we are done. The proofs of the remaining two cases are similar to the above. 
If $|Z_{j-2}|> \frac{k}{112}$ and $|Z_{j+2}|=0,$ then we can get that $\sum_{i=1}^5\sum_{w\in A\cap N_i}d_{G'}(w)\le \sum_{i=1}^5(k-a_i)|N_i\cap A|+|Z_{j-2}|-2|Z_{j-2}|\le \sum_{i=1}^5(k-a_i)|N_i\cap A|-\frac{k}{112}.$ If $|Z_{j-2}|> \frac{k}{112}$ and $|Z_{j+2}|\neq 0,$ then we can get that $\sum_{i=1}^5\sum_{w\in A\cap N_i}d_{G'}(w)\le \sum_{i=1}^5(k-a_i)|N_i\cap A|+|Z_{j-2}|+|Z_{j+2}|-2|Z_{j-2}||Z_{j+2}|\le \sum_{i=1}^5(k-a_i)|N_i\cap A|-\frac{k}{112}+1.$ Doing the above process again, we will get a contradiction. Therefore, we have $|Z_{j-2}|,|Z_{j+2}|\le \frac{k}{112}.$ Now, recall that $w_1\in B\cap N_1$, $S$ is the set of vertex in $(N_2\cup N_5)\cap A$ that are not adjacent to $w_1$ in $G$ and $zv_{j}\in E(G')$ for some $j\in[5].$ By symmetry, it suffices to consider $j\in \{1,2,3\}.$ If $j=1,$ since all the vertices of $S$ are not adjacent to $w_1$ in $G$, comparing these vertices with $v_2$ or $v_5,$ by [\[Equ:lower bound of S\]](#Equ:lower bound of S){reference-type="eqref" reference="Equ:lower bound of S"}, we can get that $$2|E^1|+|E^2|\ge \sum_{i=1}^5\sum_{w\in A\cap N_i}d_{H}(w)\ge \sum_{i=1}^5a_i|N_i\cap A|+|S|> 3\Big(\frac k 2 -65\Big)+\frac{k}{112}>\frac{3}{2}k,$$ again we get $(\star)$. If $j=2,$ let $S'=S\setminus (Z_2\cup Z_5).$ Since $|Z_2|,|Z_5|\le \frac{k}{112},$ we have $|S'|\ge|S|-2\cdot\frac{k}{112}\ge \frac{k}{56}-67.$ And for $v\in S'\cap N_2,$ $v$ is not adjacent to $z$. For $v\in S'\cap N_5,$ $v$ is not adjacent to $z$ and $w_1$. Comparing these vertices with $v_2$ or $v_5,$ we can get that $$2|E^1|+|E^2|\ge \sum_{i=1}^5\sum_{w\in A\cap N_i}d_{H}(w)\ge \sum_{i=1}^5a_i|N_i\cap A|+|S'|> 3\Big(\frac k 2 -65\Big)+\frac{k}{112}>\frac{3}{2}k,$$ again we get $(\star)$. If $j=3,$ since $G'$ is triangle-free and $N_{G'}(v_3)=N_2\cup N_4\cup\{z\},$ $N_{G'}(z)\cap N_2=N_{G'}(z)\cap N_4=\emptyset.$ Let $S''=S\setminus Z_{5},$ since $|Z_5|\le \frac{k}{112},$ we have $|S''|\ge|S|-\frac{k}{112}\ge \frac{3}{112}k-67.$ And for any vertex $v\in S''$, $v$ is not adjacent to $z$ and $w_1$. Comparing these vertices with $v_2$ or $v_5,$ we can get that $$2|E^1|+|E^2|\ge \sum_{i=1}^5\sum_{w\in A\cap N_i}d_{H}(w)\ge \sum_{i=1}^5a_i|N_i\cap A|+|S''|> 3\Big(\frac k 2 -65\Big)+\frac{k}{112}>\frac{3}{2}k,$$ again we get $(\star)$. Therefore, we have completed the proof when $a=3.$ ### The case when $a=2$ In this case, since $1\le b\le \frac a 2$, we get that $b=1$ and thus $n=\frac 5 2 k-1.$ By [\[Equ: bounds of Z\]](#Equ: bounds of Z){reference-type="eqref" reference="Equ: bounds of Z"}, we have $|Z|=0$. Then, $V(G)=\bigcup_{i=1}^5N_i$. By [\[Equ:new bounds of N_i\]](#Equ:new bounds of N_i){reference-type="eqref" reference="Equ:new bounds of N_i"}, we have that $\frac k 2-1\le |N_i|\le \frac k 2+1$ for $i\in[5]$. Note that the equality holds in [\[Equ: bound of union of Nis\]](#Equ: bound of union of Nis){reference-type="eqref" reference="Equ: bound of union of Nis"}, which implies that $N_{G'}(v_i)=N_{i-1}\cup N_{i+1}$ holds for $i\in [5].$ Moreover, for any $w\in N_i,$ $N_{G'}(w)\subseteq N_{i-1}\cup N_{i+1}.$ Let $\mathcal{E}=\cup_{i=1}^5 E(H[N_i,N_{i+1}]).$ We define $H_1=H\setminus \mathcal{E}$ and $G_1=G'\cup\mathcal{E}.$ By our definition, $G_1$ is also triangle-free. Similarly, we classify the edges of $H_1$ (in fact, here we can think of $H_1$ and $G_1$ as the previous $H$ and $G'$ respectively). Let $A_1=\{v\in V(G):d_{H_1}(v)\le \frac k {14}\}$ and $B_1=V(G)-A_1$. Since $|E(H_1)|\le|E(H)|\le \frac 9 4 k$, we have $|B_1|\le 63$. 
Let $F^0=F_p\cup F_r,$ where $F_p=\{xy\in E(H_1): x, y\in N_i \hbox{ for some }i\in[5]\}$ and $F_r=\{xy\in E(H_1): x\in N_i,\ y\in N_j \hbox{ and } |i-j|=2 \hbox{ for some } i,j\in[5] \}.$ Let $F^1=F^0\cap E(H_1[A_1]),$ $F^2=F^0\cap E(H_1[A_1,B_1])$ and $F^3=F^0\cap E(H_1[B_1])$. Note that $F^0=E(H_1)$. Thus, we have $$\begin{split} 2(|F^1|+|F^2|+|F^3|)&=2|F^0| =\sum_{i=1}^{5}\sum_{w\in N_i}d_{H_1}(w) =\sum_{i=1}^5\left(\sum_{w\in A_1\cap N_i}d_{H_1}(w)+\sum_{w\in B_1\cap N_i}d_{H_1}(w)\right), \end{split}$$ and $|F^2|+2|F^3|=\sum_{i=1}^5\sum_{w\in B_1\cap N_i}d_{H_1}(w),$ which implies that $$\label{Equ1: bound of E1 for case a=2} 2|F^1|+|F^2|=\sum_{i=1}^5\sum_{w\in A_1\cap N_i}d_{H_1}(w).$$ For any integer $i\in [5]$ and any vertex of $w\in N_i\cap B_1$, since $|N_{i-1}|+|N_{i+1}|=k-a_i$, there are at least $|N_{i-1}|+|N_{i+1}|-(k-d_{H_1}(w))-|B_1|=d_{H_1}(w)-a_i-|B_1|$ vertices of $(N_{i-1}\cup N_{i+1})\cap A_1$ are not adjacent to $w$ in $G$. Thus, for $i\in [5]$, summing up the number of vertices of $(N_{i-1}\cup N_{i+1})\cap A_1$ that are not adjacent to $w$ with $w\in N_i\cap B_1$, we can get that $$\label{Euq2: bound of E1 for case a=2} \sum_{i=1}^5\sum_{w\in A_1\cap N_i}(d_{H_1}(w)-a_i)\geq \sum_{i=1}^5\sum_{w\in B_1\cap N_i}(d_{H_1}(w)-a_i-|B_1|).$$ By [\[Equ1: bound of E1 for case a=2\]](#Equ1: bound of E1 for case a=2){reference-type="eqref" reference="Equ1: bound of E1 for case a=2"} and [\[Euq2: bound of E1 for case a=2\]](#Euq2: bound of E1 for case a=2){reference-type="eqref" reference="Euq2: bound of E1 for case a=2"}, since $|N_i|\ge \frac k 2-1$ and $|B_1|\le 63,$ we can get that $$\begin{aligned} 2|F^1| &=\sum_{i=1}^5\sum_{w\in A_1\cap N_i}d_{H_1}(w)-|F^2| \ge \sum_{i=1}^5 a_i|N_i\cap A_1|+ \sum_{i=1}^5\sum_{w\in A_1\cap N_i}(d_{H_1}(w)-a_i)-\sum_{w\in B_1}d_{H_1}(w)\\ &\ge 2\Big(\frac k 2-1-|B_1|\Big)- \sum_{i=1}^5\sum_{w\in B_1\cap N_i}(a_i+|B_1|) \ge k-128-|B_1|(2+|B_1|) \ge k-4400. \end{aligned}$$ Let $F_{p}^1=F_p\cap F^1$ and $F_{r}^1=F_r\cap F^1.$ We can obtain the same conclusion as for the Claim [Claim 1](#Clm: numebr of triangles){reference-type="ref" reference="Clm: numebr of triangles"} (replace $G'$ and $H$ with $G_1$ and $H_1$ respectively). Since $G_1$ is triangle-free and $E(G_1)=(E(G)\setminus E(H)) \cup \mathcal{E},$ all the triangles we considered are disjoint from each other. Using Claim [Claim 1](#Clm: numebr of triangles){reference-type="ref" reference="Clm: numebr of triangles"} and $k>10^8$, we have $$T(G)\ge \left(\frac {5}{14}k-1400\right)\left(\frac{k}{2}-2200\right)>\frac{k^2}{8}=\frac{k(3k-n-1)}{4},$$ which is a contradiction to our assumption [\[Equ: T(G)\<k/4(3k-n-1)\]](#Equ: T(G)<k/4(3k-n-1)){reference-type="eqref" reference="Equ: T(G)<k/4(3k-n-1)"}. We have completed the proof of Proposition [Proposition 8](#Pro: G'-U is C5-free){reference-type="ref" reference="Pro: G'-U is C5-free"}. ------------------------------------------------------------------------ # Concluding remarks Our main result establishes a tight lower bound for the number of triangles in $n$-vertex $k$-regular graphs under certain conditions (specifically, when $n$ is odd and sufficiently large, and $k$ is an even integer satisfying $\frac{2n}{5}<k<\frac{n}{2}$). The same problem was also investigated by Lo in [@Lo10; @Lo12] for the case of $k\geq \frac{n}{2}$. For related problems concerning the count of cliques $K_{r}$ in regular graphs or graphs with a bounded minimum degree, we recommend referring to [@Lo12]. 
It is also interesting to explore the minimum number $t_H(n,k)$ of copies of a general graph $H$ in an $n$-vertex $k$-regular graph. Results of Cambie, de Verclos, and Kang [@CdVK] indicate that there is a distinction based on whether the chromatic number $\chi(H)=3$ or not (see Theorems 2, 13, and 14 in [@CdVK]). For all odd cycles $C_{2\ell-1}$ where $\ell \geq 2$, Cambie et al. [@CdVK] obtained the following tight result: $$\text{If $n$ is even and $k>\frac{n}{2}$ or $n$ is odd and $k>\frac{2n}{2\ell+1}$, then } t_{C_{2\ell-1}}(n,k)>0.$$ Assuming $n$ is odd and $k$ is even, satisfying $\frac{2n}{2\ell+1}<k<\frac{n}{2}$, it would be of great interest to determine the exact value of $t_{C_{2\ell-1}}(n,k)$. This endeavor to generalize Theorem [Theorem 3](#Thm: Main){reference-type="ref" reference="Thm: Main"} appears to be challenging. 99 B. Andrásfai, P. Erdős and V. T. Sós, , , **8**, (1974), 205--218. S. Cambie, R. de Joannis de Verclos and R. J. Kang, , , **16**, (2022), 1--19. A. S. L. Lo, , , **18**, (2009), 435--440. A. S. L. Lo, , A. S. L. Lo, , , **21**, (2012), 457--482. W. Mantel, , 10, (1907), 60--61. J. W. Moon and L. Moser, , , **3**, (1965), 23--28. [^1]: If $n$ is odd, it only makes sense to consider when $k$ is even. This is because there exist no regular graphs with odd number of vertices and odd degree. [^2]: In fact, for infinitely many odd integers $n$, we can state this as $t(n,k)>0$ if and only if $k>\frac{2n}{5}$.
arxiv_math
{ "id": "2309.02993", "title": "Counting triangles in regular graphs", "authors": "Jialin He, Xinmin Hou, Jie Ma and Tianying Xie", "categories": "math.CO", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | A $(d-1)$-dimensional simplicial complex $\Delta$ is balanced if its graph $G(\Delta)$ is $d$-colorable. Klee and Novik obtained the balanced lower bound theorem for balanced normal $(d-1)$-pseudomanifolds $\Delta$ with $d\geq3$ by showing that the subgraph of $G(\Delta)$ induced by the vertices colored in $T$ is rigid in $\mathbb{R}^3$ for any $3$-set of colors $T$. We show that the same rigidity result, and thus the balanced lower bound theorem, holds for balanced minimal $(d-1)$-cycle complexes with $d \geq 3$. Motivated by Stanley's work on colored systems of parameters for the Stanley-Reisner ring of balanced simplicial complexes, we further investigate the infinitesimal rigidity of non-generic realizations of balanced, and more broadly $\bm{a}$-balanced, simplicial complexes. Among other results, we show that for $d \geq 4$, a balanced homology $(d-1)$-manifold can be realized as an infinitesimally rigid framework in $\mathbb{R}^d$ such that each vertex of color $i$ lies on the $i$th coordinate axis. author: - "Ryoshun Oba[^1]" bibliography: - myreference.bib title: Rigidity of Balanced Minimal Cycle Complexes --- # Introduction Barnette's lower bound theorem [@Bar] asserts that the boundary complex $\Delta$ of a simplicial $d$-polytope satisfies the inequality $$\label{eq:LBT} f_1(\Delta) \geq df_0(\Delta) -\binom{d+1}{2},$$ where $f_i(\Delta)$ denotes the number of $i$-dimensional faces of $\Delta$. Subsequently this was generalized to pseudomanifolds [@Fog; @Kal]. Several variants of the lower bound theorem have been investigated (e.g. [@GKN; @St1]). In this paper we discuss the balanced lower bound theorem due to Goff, Klee, Novik [@GKN] and Klee and Novik [@KN]. A $(d-1)$-dimensional simplicial complex $\Delta$ is *balanced* (or *completely balanced*) if its graph $G(\Delta)$ is $d$-colorable. The balanced lower bound theorem [@KN] asserts that, for $d \geq 3$, every balanced normal $(d-1)$-pseudomanifold $\Delta$ satisfies the inequality $$\label{eq:balLBT} 2h_2(\Delta) \geq (d-1)h_1(\Delta),$$ where $h_1(\Delta)=f_0(\Delta)-d$, $h_2(\Delta)=f_1(\Delta)-(d-1)f_0(\Delta)+\binom{d}{2}$. A connection between the lower bound theorem and rigidity theory was pointed out by Kalai [@Kal] (see also [@Gro]). Kalai [@Kal] noted that the inequality ([\[eq:LBT\]](#eq:LBT){reference-type="ref" reference="eq:LBT"}) follows immediately from the (generic) rigidity of the graph of $\Delta$ in $\mathbb{R}^d$. Goff, Klee, Novik [@GKN] pointed out that the inequality ([\[eq:balLBT\]](#eq:balLBT){reference-type="ref" reference="eq:balLBT"}) holds if $G(\Delta)[\kappa^{-1}(T)]$ is rigid in $\mathbb{R}^3$ for every $3$-set $T \subseteq [d]:=\{1,\ldots,d\}$, where $G[W]$ denotes the subgraph of $G$ induced by $W$ and $\kappa$ is the proper $d$-coloring of $G(\Delta)$. Klee and Novik [@KN] showed this rigidity result, and thus the balanced lower bound theorem, for balanced normal $(d-1)$-pseudomanifolds for $d\geq 3$. This rigidity result for balanced normal pseudomanifolds [@KN Lemma 3.5] is based on the rigidity theorem of pseudomanifolds by Fogelsanger [@Fog]. Fogelsanger introduced a superclass of pseudomanifolds, called minimal cycle complexes, and showed that minimal cycle complexes admit a decomposition into minimal cycle complexes in such a way that the decomposition behaves nicely with respect to edge contractions. 
He used this decomposition together with vertex splitting [@Whi] and gluing to show that the graph of any minimal $(d-1)$-cycle complex is rigid in $\mathbb{R}^d$ for $d\geq3$. A remarkable aspect of this proof is that the induction works within a single dimension, so it applies to classes of simplicial complexes that are not closed under taking links. Fogelsanger's idea has recently been used to show the global rigidity of pseudomanifolds [@CJT] and the $\mathbb{Z}_2$-symmetric rigidity of $\mathbb{Z}_2$-symmetric pseudomanifolds [@CJT2].

In this paper, we give a new application of Fogelsanger's idea to prove rigidity results for balanced simplicial complexes. We generalize the rigidity result [@KN Lemma 3.5] of balanced normal pseudomanifolds to balanced minimal cycle complexes (Theorem [Theorem 8](#thm:rank-selcted){reference-type="ref" reference="thm:rank-selcted"}). By an argument of Goff, Klee, and Novik [@GKN], this immediately implies the balanced lower bound theorem for minimal cycle complexes (Corollary [Corollary 11](#cor:balanced_LBT){reference-type="ref" reference="cor:balanced_LBT"}).

Balanced simplicial complexes, and more broadly $\bm{a}$-balanced simplicial complexes, have also been studied in the theory of Stanley-Reisner rings [@Sta]. For $\bm{a}=(a_1,\ldots,a_m) \in \mathbb{Z}_{>0}^m$ with $\sum_{i=1}^m a_i=d$, a $(d-1)$-dimensional simplicial complex $\Delta$ on the vertex set $V(\Delta)$ is *$\bm{a}$-balanced* if there is a map $\kappa:V(\Delta) \rightarrow [m]$ satisfying $|F \cap \kappa^{-1}(i)|\leq a_i$ for any $F \in \Delta$ and $i \in [m]$. We call such a map $\kappa$ an *$\bm{a}$-coloring* of $\Delta$. Given a simplicial complex $\Delta$, the *Stanley-Reisner ring* of $\Delta$ is $\mathbb{R}[\Delta]:=\mathbb{R}[x_v:v \in V(\Delta)]/I_\Delta$, where $I_\Delta$ is the ideal generated by monomials $\prod_{v \in G} x_v$ over all $G \not\in \Delta$. For $\bm{a} \in \mathbb{Z}_{>0}^m$ and an $\bm{a}$-balanced simplicial complex $\Delta$ with an $\bm{a}$-coloring $\kappa$, one can define an $\mathbb{N}^m$-graded algebra structure on $\mathbb{R}[\Delta]$ by $\deg x_v=\bm{e}_{\kappa(v)}$, where $\bm{e}_i \in \mathbb{N}^m$ denotes the $i$th unit coordinate vector. Stanley [@Sta] showed that, for an $\bm{a}$-balanced $(d-1)$-dimensional simplicial complex $\Delta$, $\mathbb{R}[\Delta]$ has a system of parameters $\theta_1,\ldots,\theta_d$ such that exactly $a_i$ of the $\theta_j$'s are of degree $\bm{e}_i$ for each $i \in [m]$. Such a system of parameters is called an *$\bm{a}$-colored s.o.p.* for $\mathbb{R}[\Delta]$. Cook et al. [@3-polytope] showed that if $\Delta$ is a balanced simplicial $2$-sphere, there is a $(1,1,1)$-colored s.o.p. $\Theta=(\theta_1,\theta_2,\theta_3)$ for $\mathbb{R}[\Delta]$ and a linear form $\omega \in \mathbb{R}[\Delta]_1$ such that the multiplication map $(\times \omega):(\mathbb{R}[\Delta]/\Theta\mathbb{R}[\Delta])_1 \rightarrow (\mathbb{R}[\Delta]/\Theta\mathbb{R}[\Delta])_2$ is bijective. We conjecture that the statement of Cook et al. [@3-polytope] holds for a larger class of simplicial complexes and any $\bm{a} \in \mathbb{Z}_{>0}^m$, with a few exceptions, as follows.

**Conjecture 1**. For $\bm{a} \in \mathbb{Z}_{>0}^m$ with $d=\sum_{i=1}^m a_i \geq 3$ and $\bm{a}\neq (d-1,1),(1,d-1)$, let $\Delta$ be an $\bm{a}$-balanced minimal $(d-1)$-cycle complex. Then there is an $\bm{a}$-colored s.o.p.
$\Theta=(\theta_1,\ldots,\theta_d)$ for $\mathbb{R}[\Delta]$ and a linear form $\omega \in \mathbb{R}[\Delta]_1$ such that the multiplication map $(\times \omega):(\mathbb{R}[\Delta]/\Theta\mathbb{R}[\Delta])_1 \rightarrow (\mathbb{R}[\Delta]/\Theta\mathbb{R}[\Delta])_2$ is injective. A correspondence between Stanley-Reisner ring theory and (skeletal) rigidity theory has been noted in [@Lee; @TWW]. In this correspondence, under the normalization $\omega=\sum_{v \in V(\Delta)} x_v$, a linear system of parameters $(\theta_1,\ldots,\theta_d)$ for $\mathbb{R}[\Delta]$ is identified with a point configuration $p:V(\Delta) \rightarrow \mathbb{R}^d$ through $\theta_i=\sum_{v\in V(\Delta)} p(v)_i x_v$ for $i \in [d]$ (see Corollary [Corollary 15](#cor:inj){reference-type="ref" reference="cor:inj"}). Thus Conjecture [Conjecture 1](#conj:wl){reference-type="ref" reference="conj:wl"} is equivalently formulated as the problem of finding an infinitesimally rigid realization of $\bm{a}$-balanced minimal cycle complexes with non-generic point configurations. Let $G$ be a graph, $\kappa:V(G) \rightarrow [m]$ a map, and $\bm{a} \in \mathbb{Z}_{>0}^m$ an integer vector with $\sum_{i=1}^m a_i=d$. We consider $\mathbb{R}^d$ as the direct product of $\mathbb{R}^{a_i}$ over all $i =1,\ldots,m$. Then each $x \in \mathbb{R}^d$ is denoted by $x=(x_1,\ldots,x_m)$ with $x_i \in \mathbb{R}^{a_i}$. For $i =1,\ldots,m$, let $H_i:=\{x=(x_1,\ldots,x_m): x_j=0~ (j \neq i) \} \subseteq \mathbb{R}^d$. We say that a point configuration $p:V(G) \rightarrow \mathbb{R}^d$ is *$(\kappa,\bm{a})$-sparse* if $p(v) \in H_{\kappa(v)}$ for all $v\in V(G)$, and $G$ is *$(\kappa,\bm{a})$-sparse rigid* if $(G,p)$ is infinitesimally rigid in $\mathbb{R}^d$ for some $(\kappa,\bm{a})$-sparse point configuration $p$. For example, if $\bm{a}=(1,\ldots,1)$ and $\kappa$ is a $d$-coloring, then a $(\kappa,\bm{a})$-sparse point configuration is the one such that each vertex of color $i$ is realized on the $i$-th coordinate axis. Conjecture [Conjecture 1](#conj:wl){reference-type="ref" reference="conj:wl"} can be restated as follows. **Conjecture 2**. For $\bm{a} \in \mathbb{Z}_{>0}^m$ with $d=\sum_{i=1}^m a_i \geq 3$ and $\bm{a}\neq (d-1,1),(1,d-1)$, let $\Delta$ be an $\bm{a}$-balanced minimal $(d-1)$-cycle complex and $\kappa$ be an $\bm{a}$-coloring of $\Delta$. Then $G(\Delta)$ is $(\kappa,\bm{a})$-sparse rigid. We will explain the equivalence of Conjecture [Conjecture 1](#conj:wl){reference-type="ref" reference="conj:wl"} and Conjecture [Conjecture 2](#conj:main){reference-type="ref" reference="conj:main"} in Section [5](#sec:5){reference-type="ref" reference="sec:5"}. The assumption on $\bm{a}$ in Conjecture [Conjecture 2](#conj:main){reference-type="ref" reference="conj:main"} is necessary. Cook et al. [@3-polytope] pointed out that there is a $(2,1)$-balanced simplicial $2$-sphere $\Delta$ and its $(2,1)$-coloring $\kappa$ such that $G(\Delta)$ is not $(\kappa,\bm{a})$-sparse rigid. As we will see in Example [Example 27](#eg:(d-1,1)){reference-type="ref" reference="eg:(d-1,1)"}, the construction of Cook et al. [@3-polytope] can be extended to higher dimension. In this paper we confirm Conjecture [Conjecture 2](#conj:main){reference-type="ref" reference="conj:main"} for the following cases. In Theorem [Theorem 17](#thm:a-bal){reference-type="ref" reference="thm:a-bal"}, we prove that Conjecture [Conjecture 2](#conj:main){reference-type="ref" reference="conj:main"} holds if $a_i \geq 2$ for all $i \in [m]$ using Fogelsanger's idea. 
In Theorem [Theorem 23](#thm:C_d){reference-type="ref" reference="thm:C_d"}, we then apply the standard coning argument and show that Conjecture [Conjecture 2](#conj:main){reference-type="ref" reference="conj:main"} holds if $d \geq 4$ and $\Delta$ is a homology $(d-1)$-manifold. The base cases in the coning argument are Gluck's theorem [@Glu], the result of Cook et al. [@3-polytope Theorem 1.1], and Theorem [Theorem 17](#thm:a-bal){reference-type="ref" reference="thm:a-bal"} for $\bm{a}=(2,2)$.

The paper is organized as follows. In Section 2, preliminaries on simplicial complexes and rigidity are given. In Section 3, we summarize the basic properties of the Fogelsanger decomposition for minimal cycle complexes. In Section 4, we prove the rigidity of the subgraphs of $G(\Delta)$ induced by sets of colors and derive the balanced lower bound theorem for a balanced minimal cycle complex $\Delta$. In Section 5, we discuss the equivalence of Conjecture [Conjecture 1](#conj:wl){reference-type="ref" reference="conj:wl"} and Conjecture [Conjecture 2](#conj:main){reference-type="ref" reference="conj:main"}. In Section 6, we prove Conjecture [Conjecture 2](#conj:main){reference-type="ref" reference="conj:main"} in the case when $a_i \geq 2$ for all $i$. In Section 7, we prove Conjecture [Conjecture 2](#conj:main){reference-type="ref" reference="conj:main"} for homology manifolds. In Section 8, we give further observations related to Conjecture [Conjecture 2](#conj:main){reference-type="ref" reference="conj:main"}.

# Preliminaries

## Graphs and simplicial complexes

Throughout the paper, we only consider simple graphs and use the following basic notation. For a graph $G$, the vertex set and the edge set are denoted by $V(G)$ and $E(G)$. For $X \subseteq V(G)$, $G[X]=(X,E[X])$ denotes the induced subgraph of $G$ by $X$. For $v \in V(G)$, $N_G(v)$ denotes the set of vertices adjacent to $v$ in $G$. Given a graph $G$ and an edge $uv \in E(G)$, we write $G/uv$ for the simple graph obtained from $G$ by contracting $v$ onto $u$. More precisely, $V(G/uv)= V(G)-v$ and $E(G/uv)= E[V(G)-v] \cup \{uw: w\in N_G(v)-u\}$.

A *simplicial complex* $\Delta$ is a finite collection of finite sets such that if $F \in \Delta$ and $G \subseteq F$, then $G \in \Delta$. Elements of $\Delta$ are called *faces* of $\Delta$. The *dimension* of a face $F \in \Delta$ is $\dim F:=|F|-1$, and a face of dimension $i$ is called an *$i$-face*. The dimension of $\Delta$ is $\dim \Delta:=\max\{\dim F: F \in \Delta\}$. A *facet* of $\Delta$ is a maximal face under inclusion, and $\Delta$ is *pure* if all facets have the same dimension. For a finite collection $\mathcal{S}$ of finite sets, the simplicial complex *spanned by $\mathcal{S}$* is $\langle \mathcal{S} \rangle:=\{G \subseteq F:F \in \mathcal{S}\}$. The vertex set $V(\Delta)$ (resp. the edge set $E(\Delta)$) of $\Delta$ is the set of all $0$-faces (resp. $1$-faces). $G(\Delta)=(V(\Delta),E(\Delta))$ is called the *graph* (or *$1$-skeleton*) of $\Delta$. The *link* of a face $F \in \Delta$ is $\mathop{\mathrm{lk}}_\Delta(F)=\{G \in \Delta:F\cap G=\emptyset,F \cup G \in \Delta\}$. The (closed) *star* of a face $F \in \Delta$ is $\mathop{\mathrm{st}}_\Delta(F)=\{G \in \Delta:F \cup G \in \Delta\}$. For $W \subseteq V(\Delta)$, define the *restriction* of $\Delta$ to $W$ to be $\Delta[W]:=\{F \in \Delta:F \subseteq W\}$. Let $\Delta_1, \Delta_2$ be pure simplicial complexes with $V(\Delta_1) \cap V(\Delta_2) = \emptyset$ and $\dim \Delta_1 = \dim \Delta_2$.
Let $F_1$ and $F_2$ be facets of $\Delta_1$ and $\Delta_2$, respectively, and let $\gamma:F_1 \rightarrow F_2$ be a bijection. The *connected sum* of $\Delta_1$ and $\Delta_2$, denoted as $\Delta_1 \#_\gamma \Delta_2$ (or simply $\Delta_1 \# \Delta_2$), is the simplicial complex obtained by identifying each vertex $v \in F_1$ with $\gamma(v) \in F_2$ and removing the facet corresponding to $F_1$ (which has been identified with $F_2$). A $(d-1)$-sphere on $n (\geq d+1)$ vertices written as an $(n-d)$-fold connected sum of the boundary complex of a $d$-simplex is called a *stacked $(d-1)$-sphere*. We say that a simplicial complex $\Delta$ is a *simplicial $(d-1)$-sphere* if its geometric realization is homeomorphic to $\mathbb{S}^{d-1}$. A simplicial complex $\Delta$ is a *homology $(d-1)$-manifold* over $\bm{k}$ if $\tilde{H}_{*}(\mathop{\mathrm{lk}}_\Delta(F);\bm{k})\cong\tilde{H}_{*}(\mathbb{S}^{d-|F|-1};\bm{k})$ for every nonempty face $F \in \Delta$. Here, $\tilde{H}_*(\Delta;\bm{k})$ denotes the reduced simplicial homology group of $\Delta$ with coefficients in $\bm{k}$. A pure $(d-1)$-dimensional simplicial complex $\Delta$ is *strongly connected* if for every pair of facets $F$ and $G$ of $\Delta$, there is a sequence of facets $F=F_0,F_1,\ldots,F_m=G$ such that $|F_{i-1} \cap F_{i}|=d-1$ for $i \in [m]$. A *$(d-1)$-pseudomanifold* is a strongly connected pure $(d-1)$-dimensional simplicial complex such that every $(d-2)$-face is contained in exactly two facets. A $(d-1)$-pseudomanifold is *normal* if the link of each face of dimension at most $d-3$ is connected. The class of normal pseudomanifolds is closed under taking links [@BD].

Let $\Delta$ be a $(d-1)$-dimensional simplicial complex. The *$f$-vector* of $\Delta$ is an integer vector $f(\Delta):=(f_{-1}(\Delta),\ldots,f_{d-1}(\Delta))$, where $f_i(\Delta)$ is the number of $i$-faces of $\Delta$. The *$h$-vector* of $\Delta$ is an integer vector $h(\Delta):=(h_0(\Delta),\ldots,h_d(\Delta))$ whose entries are given by $$h_j(\Delta):= \sum_{i=0}^j (-1)^{j-i} \binom{d-i}{d-j} f_{i-1}(\Delta).$$

## Rigidity

A *framework* in $\mathbb{R}^d$ is a pair $(G,p)$ of a graph $G$ and a map $p:V(G) \rightarrow \mathbb{R}^d$. The map $p$ is called a *point configuration*. An *infinitesimal motion* of a framework $(G,p)$ in $\mathbb{R}^d$ is a map $\dot{p}:V(G) \rightarrow \mathbb{R}^d$ satisfying $$\label{eq:inf} (p(i)-p(j))\cdot (\dot{p}(i)-\dot{p}(j))=0 \qquad ij \in E(G).$$ An infinitesimal motion $\dot{p}$ defined by $\dot{p}(i)=Sp(i)+t$ $(i \in V(G))$ for a skew-symmetric matrix $S$ and $t \in \mathbb{R}^d$ is said to be *trivial*. A framework $(G,p)$ is *infinitesimally rigid in $\mathbb{R}^d$* if every infinitesimal motion of $(G,p)$ is trivial. A graph $G$ is *rigid in $\mathbb{R}^d$* if $(G,p)$ is infinitesimally rigid in $\mathbb{R}^d$ for some $p:V(G) \rightarrow \mathbb{R}^d$. The matrix $R(G,p) \in \mathbb{R}^{|E(G)| \times d|V(G)|}$ representing ([\[eq:inf\]](#eq:inf){reference-type="ref" reference="eq:inf"}) is called the *rigidity matrix* of $(G,p)$. More concretely, in $R(G,p)$, $d$ consecutive columns are associated to each vertex, and the row associated to $ij \in E(G)$ has $p(j)-p(i)$ in the columns of vertex $i$, $p(i)-p(j)$ in the columns of vertex $j$, and zeros elsewhere: $$\bordermatrix{ & & i & & j & \cr & \cdots 0 \cdots & p(j)-p(i) & \cdots 0 \cdots & p(i)-p(j) & \cdots 0 \cdots \cr},$$ where $p(i), p(j)$ are considered as row vectors. An element in the left kernel of $R(G,p)$ is called an *equilibrium stress* (or an affine $2$-stress) of $(G,p)$. We call the left kernel of $R(G,p)$ the *stress space* of $(G,p)$. If $p(V(G))$ affinely spans a subspace of dimension at least $d-1$, then the framework $(G,p)$ is infinitesimally rigid if and only if $\rank R(G,p)=d|V(G)|-\binom{d+1}{2}$. To produce infinitesimally rigid frameworks from smaller ones, the gluing lemma and the vertex splitting lemma are useful.

**Lemma 3** (Gluing Lemma). Let $(G,p)$ be a framework in $\mathbb{R}^d$. Suppose that $G_1, G_2$ are subgraphs of $G$ with $G=G_1 \cup G_2$ such that $(G_i,p|_{V(G_i)})$ is infinitesimally rigid in $\mathbb{R}^d$ for $i=1,2$ and $p(V(G_1) \cap V(G_2))$ affinely spans a subspace of dimension at least $d-1$. Then $(G,p)$ is infinitesimally rigid in $\mathbb{R}^d$.

**Lemma 4** (Vertex Splitting Lemma [@Whi]). Let $G$ be a graph and $uv \in E(G)$ be an edge satisfying $|N_G(u) \cap N_G(v)| \geq d-1$. Let $C$ be a $(d-1)$-element subset of $N_G(u) \cap N_G(v)$.
Suppose that there is $p:V(G/uv) \rightarrow \mathbb{R}^d$ such that $(G/uv,p)$ is infinitesimally rigid in $\mathbb{R}^d$ and $\{p(w)-p(u):w \in C\}$ is linearly independent. Let $z \in \mathbb{R}^d$ be a vector not in the linear span of $\{p(w)-p(u):w \in C\}$. Then there is $t \in \mathbb{R}$ such that for the extension $p':V(G) \rightarrow \mathbb{R}^d$ of $p$ defined by $p'(v)=p(u)+tz$, $(G,p')$ is infinitesimally rigid in $\mathbb{R}^d$. Coning is a way to produce an infinitesimally rigid framework from that in one less dimension. The *cone graph* $G * \{v\}$ of a graph $G$ is obtained from $G$ by adding a new vertex $v$ and edges from $v$ to all the original vertices in $G$. **Lemma 5** (Cone Lemma [@Whi2]). Let $G$ be a graph and $G * \{v\}$ be its cone graph. Let $p:V(G) \cup \{v\} \rightarrow \mathbb{R}^{d+1}$ be a point configuration of cone graph such that $p(v) \neq p(u)$ for all $u \in V(G)$. Let $H \cong \mathbb{R}^d$ be a hyperplane in $\mathbb{R}^{d+1}$ not passing through $p(v)$ and not parallel to the vectors $\{p(u)-p(v):u \in V(G)\}$. Define a projection $p_H:V(G) \rightarrow \mathbb{R}^d$ of $p$ by letting $p_H(u)$ be the intersection point of $H$ and the line passing through $p(v)$ and $p(u)$. Then $(G * \{v\},p)$ is infinitesimally rigid in $\mathbb{R}^{d+1}$ if and only if $(G,p_H)$ is infinitesimally rigid in $\mathbb{R}^d$. # Decomposition of minimal cycle complexes {#sec:3} Let $\Delta$ be a simplicial complex and $uv \in E(\Delta)$. We define a simplicial complex $\Delta/uv$ by $$\Delta/uv=\{F \in \Delta: v \not \in F\} \cup \{F-v+u: F \in \mathop{\mathrm{st}}_\Delta(v)\}.$$ The operation $\Delta \rightarrow \Delta/uv$ is called an *edge contraction* of $uv$ (onto $u$). We remark that we do not allow multiple copies of the same faces, so for $G \in \mathop{\mathrm{lk}}_\Delta(u)\cap\mathop{\mathrm{lk}}_\Delta(v)$, faces $G+u$ and $G+v$ of $\Delta$ are identified under the edge contraction of $uv$. Let $\Gamma$ be an abelian group and let $\Delta$ be a simplicial complex on the vertex set $[n]$. An *$i$-chain* is a $\Gamma$-coefficient formal sum of $i$-faces. For an $i$-chain $c=\sum_{F}a_F F$, define an $(i-1)$-chain $\partial c$ by $\partial c := \sum_{F} a_F \partial F$, where for $F=\{x_1,\ldots,x_{i+1}\}$ ($x_1< \cdots <x_{i+1}$), $\partial F:=\sum_{j=1}^{i+1} (-1)^j F \setminus \{x_j\}$. An *$i$-cycle* is an $i$-chain $c$ satisfying $\partial c=0$. An $i$-chain $c'=\sum_{F} b_F F$ is a *subchain* of an $i$-chain $c=\sum_{F} a_F F$ if for each $i$-face $F$, $b_F=a_F$ or $b_F=0$ holds. An $i$-cycle $c$ is a *minimal $i$-cycle* if its only subchains which are $i$-cycles are $c$ and $0$. The *support* of an $i$-chain $c=\sum_{F}a_F F$ is $\mathop{\mathrm{supp}}c := \{F:a_F \neq 0\} \subseteq \binom{[n]}{i+1}$, where $\binom{[n]}{i+1}$ is the set of all $(i+1)$-subsets of $[n]$. A $(d-1)$-dimensional simplicial complex $\Delta$ is called a *minimal $(d-1)$-cycle complex* (over $\Gamma$) if (i) $\Delta=\langle \{F\} \rangle$ for a $d$-set $F$ or (ii) $\Delta=\langle \mathop{\mathrm{supp}}c \rangle$ for a nonzero minimal $(d-1)$-cycle $c$. If the dimension is clear from the context, it is simply called a minimal cycle complex. We remark that although minimal cycle complexes are originally defined as simplicial complexes satisfying (ii), we include the case (i) to make the presentation simple. It is possible to unify (i) and (ii) by using multicomplex formulation as in [@CJT]. A minimal cycle complex satisfying (i) (resp. (ii)) is said to be *trivial* (resp. 
*nontrivial*). A minimal cycle complex is always pure. Minimal cycle complexes may have singular points, so the class of minimal cycle complexes is not closed under taking links. Pseudomanifolds are minimal cycle complexes over $\mathbb{Z}_2$ [@CJT]. $2$-CM complexes over a field $\bm{k}$ are minimal cycle complexes over a free $\bm{k}$-module of finite rank [@Nev]. Other examples of minimal cycle complexes arise in the context of simplicial matroids. A *$(d-1)$-simplicial matroid* on a subset $\mathcal{E}$ of $\binom{[n]}{d}$ over a field $\bm{k}$ is a matroid such that $\{F_1,\ldots,F_k\} \subseteq \mathcal{E}$ is independent if and only if $\{\partial F_1,\ldots, \partial F_k\}$ is linearly independent. A simplicial complex spanned by a circuit of a $(d-1)$-simplicial matroid over $\bm{k}$ is a minimal $(d-1)$-cycle complex over $\bm{k}$. A matroid on a ground set $E$ is *connected* if for any $e,f \in E$, there is a circuit containing both $e$ and $f$. A pure $(d-1)$-dimensional simplicial complex whose facets define a connected $(d-1)$-simplicial matroid over $\bm{k}$ is a minimal $(d-1)$-cycle complex over a free $\bm{k}$-module of finite rank. Thus a simplicial complex having a convex ear decomposition [@Cha] is a minimal cycle complex. We list basic properties of minimal cycle complexes.

**Lemma 6**. For a nontrivial minimal $(d-1)$-cycle complex $\Delta$, the following hold:

- (i) $\Delta$ is strongly connected.
- (ii) Every $(d-2)$-face of $\Delta$ is contained in at least two facets of $\Delta$. In particular, $|V(\Delta)| \geq d+1$.

*Proof.* Let $c$ be a minimal $(d-1)$-cycle satisfying $\Delta=\langle \mathop{\mathrm{supp}}c \rangle$.

(i) Suppose that $\Delta$ is not strongly connected. Then there is a nonempty proper subset $\mathcal{S}$ of $\mathop{\mathrm{supp}}c$ such that no pair of $F \in \mathcal{S}$ and $G \in \mathop{\mathrm{supp}}c \setminus \mathcal{S}$ shares a $(d-2)$-face. Then $c$ restricted to $\mathcal{S}$ is a nonzero $(d-1)$-cycle and a proper subchain of $c$, which contradicts the minimality of $c$.

(ii) Suppose that a $(d-2)$-face $T$ of $\Delta$ is contained in only one facet $F$ of $\Delta$. Then $(\partial c)_T=\pm c_F \neq 0$, which contradicts the fact that $c$ is a $(d-1)$-cycle. ◻

Fogelsanger [@Fog] pointed out that a minimal $(d-1)$-cycle can be decomposed into minimal $(d-1)$-cycles in such a way that the decomposition behaves nicely with respect to edge contractions. Below we summarize, in terms of minimal cycle complexes, the properties of this decomposition that are needed in the later discussion. See [@CJT; @Fog] for more details.

**Lemma 7** (Fogelsanger [@Fog]). Suppose that $\Delta$ is a nontrivial minimal $(d-1)$-cycle complex and $uv \in E(\Delta)$. Then there exists a sequence of nontrivial minimal $(d-1)$-cycle complexes $\Delta_1^+,\ldots,\Delta_m^+$ satisfying the following properties:

- (a) For each $i$, $uv \in E(\Delta_i^+)$. Equivalently, there is a facet in $\Delta_i^+$ containing both $u$ and $v$.
- (b) For each $i$ and each facet $F$ of $\Delta_i^+$ with $F \not\in\Delta$, $u$ and $v$ are contained in $F$, and $F-u$ and $F-v$ are $(d-2)$-faces of $\Delta_i^+ \cap \Delta$.
- (c) For each $i$, $\Delta_i^+/uv$ is a minimal $(d-1)$-cycle complex.
- (d) $G(\Delta)=\bigcup_{i=1}^m G(\Delta_i^+)$.
- (e) For each $i \geq 2$, $(\bigcup_{j <i} \Delta_j^+)$ and $\Delta_i^+$ share at least one facet.
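The following small Python sketch is an illustrative aside added here (it is not part of the original development, and the helper names `z2_boundary` and `contract` are ad hoc). It verifies that the sum of the facets of the boundary complex of a $3$-simplex is a $\mathbb{Z}_2$ $2$-cycle, in line with the fact that pseudomanifolds are minimal cycle complexes over $\mathbb{Z}_2$, and it performs the edge contraction $\Delta/uv$ defined at the beginning of this section.

```python
from itertools import combinations

# Z/2Z boundary of the chain summing the given facets (all coefficients 1):
# a codimension-one face survives iff it lies in an odd number of facets.
def z2_boundary(facets):
    count = {}
    for F in facets:
        for G in combinations(sorted(F), len(F) - 1):
            count[G] = count.get(G, 0) + 1
    return {G for G, c in count.items() if c % 2 == 1}

# Edge contraction Delta/uv (contract v onto u), as defined above: keep the faces
# avoiding v and replace v by u in the faces containing v (identical images merge).
def contract(faces, u, v):
    return {frozenset((F - {v}) | {u}) if v in F else frozenset(F) for F in faces}

# Boundary complex of the 3-simplex on {1,2,3,4}: a 2-pseudomanifold.
facets = [frozenset(F) for F in combinations({1, 2, 3, 4}, 3)]
print(z2_boundary(facets))          # set(): the sum of the facets is a Z/2Z 2-cycle

faces = {frozenset(G) for F in facets for r in range(1, 4) for G in combinations(F, r)}
contracted = contract(faces, 1, 2)  # contracting the edge {1,2}
print(max(len(F) for F in contracted))  # 3: the contraction is again 2-dimensional
```

Here every $1$-face of the boundary of the $3$-simplex lies in exactly two facets, so the $\mathbb{Z}_2$ boundary of the facet sum vanishes; this is the simplest instance of the objects to which the above decomposition is applied.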
We remark that, as in Lemma [Lemma 7](#lem:Fog_dec){reference-type="ref" reference="lem:Fog_dec"} (b), for the minimal cycle complexes $\Delta_1^+,\ldots,\Delta_m^+$ given in Lemma [Lemma 7](#lem:Fog_dec){reference-type="ref" reference="lem:Fog_dec"}, $\Delta_i^+$ may have a face not in $\Delta$. We also remark that the decomposition given in Lemma [Lemma 7](#lem:Fog_dec){reference-type="ref" reference="lem:Fog_dec"} is not unique. # Rigidity of rank-selected subcomplexes We use the notation $[d]:=\{1,\ldots,d\}$. A $(d-1)$-dimensional simplicial complex $\Delta$ is *balanced* (or *completely balanced*) if $G(\Delta)$ is $d$-colorable, or equivalently, there is a map $\kappa:V(\Delta) \rightarrow [d]$ such that for any face $F \in \Delta$, $|F \cap \kappa^{-1}(i)| \leq 1$ for $i \in [d]$. Such a coloring is called a *proper coloring* of $\Delta$. A proper coloring of a strongly connected simplicial complex is unique up to the permutation of colors and is called *the proper coloring* of $\Delta$. The boundary complex of a $d$-dimensional *cross-polytope* $\mathcal{C}_d^*:=\mathop{\mathrm{conv}}\{\pm\bm{e}_i:i \in [d]\}$, where $\bm{e}_i$ is the $i$th unit coordinate vector in $\mathbb{R}^d$, is an example of a balanced $(d-1)$-simplicial complex. Let $\Delta$ be a balanced, strongly connected, $(d-1)$-dimensional simplicial complex and $\kappa$ be the proper coloring of $\Delta$. For $T \subseteq [d]$, the *$T$-rank selected subcomplex* of $\Delta$ is $\Delta_T:=\Delta[\kappa^{-1}(T)]$. Klee and Novik [@KN] showed that $G(\Delta_T)$ is rigid in $\mathbb{R}^{|T|}$ for any $T\subseteq[d]$ with $|T| \geq 3$ if $\Delta$ is a balanced normal $(d-1)$-pseudomanifold with $d \geq 3$. We extend this result to the class of minimal $(d-1)$-cycle complexes as follows. **Theorem 8**. For $d\geq 3$, let $\Delta$ be a balanced minimal $(d-1)$-cycle complex. Then for $T \subseteq [d]$ with $|T| \geq 3$, $G(\Delta_T)$ is rigid in $\mathbb{R}^{|T|}$. For a pure simplicial complex $\Delta$, we say that a vertex subset $U \subseteq V(\Delta)$ is *$(\geq k)$-transversal* if $|U \cap F| \geq k$ for every facet $F \in \Delta$. Since, for a proper coloring $\kappa$ of a pure simplicial complex $\Delta$ and $T \subseteq [d]$, $\kappa^{-1}(T) \subseteq V(\Delta)$ is $(\geq |T|)$-transversal, Theorem [Theorem 8](#thm:rank-selcted){reference-type="ref" reference="thm:rank-selcted"} is a corollary of the following lemma. **Lemma 9**. For $d \geq k \geq 3$, let $\Delta$ be a minimal $(d-1)$-cycle complex and $U \subseteq V(\Delta)$ be a $(\geq k)$-transversal set of $\Delta$. Then $G(\Delta[U])$ is rigid in $\mathbb{R}^{k}$. *Proof.* We prove the statement by the induction on $|V(\Delta)|$. If $\Delta$ is a trivial $(d-1)$-cycle complex, $G(\Delta)$ is a complete graph. Hence, $G(\Delta[U])=G(\Delta)[U]$ is a complete graph, which is rigid in $\mathbb{R}^{k}$. Suppose that $\Delta$ is a nontrivial minimal $(d-1)$-cycle complex. Pick $u,v \in U$ with $uv \in E(\Delta)$. Such $u,v$ always exist by $k \geq 3$. Let $\Delta_1^+,\ldots,\Delta_m^+$ be the nontrivial minimal $(d-1)$-cycle complexes given in Lemma [Lemma 7](#lem:Fog_dec){reference-type="ref" reference="lem:Fog_dec"} with respect to $\Delta$ and $uv$. For each $i$, let $U_i:=U \cap V(\Delta_i^+)$. For each facet $F$ of $\Delta_i^+ \cap \Delta$, we have $|F \cap U_i| \geq k$. 
For each facet $F \in \Delta_i^+ \setminus \Delta$, as $F$ contains $u$ and $F-u$ is a $(d-2)$-face of $\Delta$ by Lemma [Lemma 7](#lem:Fog_dec){reference-type="ref" reference="lem:Fog_dec"} (b), we have $|F \cap U_i|=|(F-u) \cap U_i|+1 \geq k$. Hence $U_i$ is a $(\geq k)$-transversal set of $\Delta_i^+$. **Claim 10**. $|N_{G(\Delta_i^+)}(u) \cap N_{G(\Delta_i^+)}(v) \cap U_i| \geq k-1$ holds for each $i$. *Proof of claim.* Let $F$ be a facet of $\Delta_i^+$ containing both $u$ and $v$. Such a facet always exists by Lemma [Lemma 7](#lem:Fog_dec){reference-type="ref" reference="lem:Fog_dec"}(a). As $U_i$ is a $(\geq k)$-transversal set of $\Delta_i^+$, we have $|F \cap U_i| \geq k$. Thus we have $|(F-u-v) \cap U_i| \geq k-2$. If $|(F-u-v) \cap U_i| \geq k-1$, the claim follows as $(F-u-v)\cap U_i$ is included in $N_{G(\Delta_i^+)}(u) \cap N_{G(\Delta_i^+)}(v)$. If $|(F-u-v) \cap U_i|=k-2$, by $k \geq 3$, we can pick $w \in (F-u-v) \cap U_i$. By Lemma [Lemma 6](#lem:basic){reference-type="ref" reference="lem:basic"}(ii), there is another facet $G \in \Delta_i^+$ different from $F$ which includes $F-w$. Since $|G \cap U_i| \geq k$, the unique element $x \in G \setminus F$ must be in $U_i$. Now $(F -u - v +x)\cap U_i$ is the desired subset. ◻ For each $i$, since $u,v \in U_i$, $U_i-v$ is a $(\geq k)$-transversal set of $\Delta_i^+/uv$ and $G(\Delta_i^+[U_i])/uv=G((\Delta_i^+/uv)[U_i-v])$. Also $\Delta_i^+/uv$ is a minimal $(d-1)$-cycle complex by Lemma [Lemma 7](#lem:Fog_dec){reference-type="ref" reference="lem:Fog_dec"} (c). Hence, by the induction hypothesis, $G((\Delta_i^+/uv)[U_i-v])$ is rigid in $\mathbb{R}^k$. By Lemma [Lemma 4](#lem:vertex splitting){reference-type="ref" reference="lem:vertex splitting"} and Claim [Claim 10](#claim:chap4){reference-type="ref" reference="claim:chap4"}, $G(\Delta_i^+[U_i])$ is rigid in $\mathbb{R}^k$. Now the rigidity of $G(\Delta[U])=\bigcup_{i=1}^m G(\Delta_i^+[U_i])$ follows from Lemma [Lemma 3](#lem:gluing){reference-type="ref" reference="lem:gluing"} and Lemma [Lemma 7](#lem:Fog_dec){reference-type="ref" reference="lem:Fog_dec"} (d), (e). ◻ The balanced analogue of the lower bound theorem [@Bar] has been investigated in [@GKN; @KN]. For positive integers $n,d$ with $n$ divisible by $d$, a *stacked cross-polytopal $(d-1)$-sphere* on $n$ vertices is the connected sum of $\frac{n}{d}-1$ copies of the boundary complex of the cross-polytope $\mathcal{C}_d^*$. A stacked cross-polytopal $(d-1)$-sphere $\Delta$ satisfies $2h_2(\Delta)=(d-1)h_1(\Delta)$, and the balanced lower bound theorem [@KN] asserts that $2h_2(\Delta)\geq(d-1)h_1(\Delta)$ holds for any balanced normal $(d-1)$-pseudomanifolds with $d \geq 3$. Goff, Klee, Novik [@GKN] showed that for a balanced pure simplicial complex $\Delta$, this inequality follows from the rigidity of $G(\Delta_T)$ in $\mathbb{R}^3$ for every $3$-set $T \subseteq [d]$. Hence Theorem [Theorem 8](#thm:rank-selcted){reference-type="ref" reference="thm:rank-selcted"} implies the following generalization of balanced lower bound theorem to balanced minimal cycle complexes. **Corollary 11**. Let $\Delta$ be a balanced minimal $(d-1)$-cycle complex with $d \geq3$. Then $2h_2(\Delta) \geq (d-1)h_1(\Delta)$. **Remark 12**. Characterizing simplicial complexes achieving the tight equality in the lower bound theorem is also a well-studied problem. 
Klee and Novik [@KN] showed that for a balanced normal $(d-1)$-pseudomanifold $\Delta$ with $d \geq 4$, $2h_2(\Delta) = (d-1)h_1(\Delta)$ holds if and only if $\Delta$ is a stacked cross-polytopal $(d-1)$-sphere. It is interesting to know when the equality $2h_2(\Delta) = (d-1)h_1(\Delta)$ occurs for a balanced minimal $(d-1)$-cycle complex $\Delta$. # Stanley-Reisner ring and rigidity {#sec:5} For an $\mathbb{N}$-graded algebra $A$, the $i$th homogeneous component is denoted as $A_i$. An element of $A_1$ is called a linear form. For a sequence of linear forms $\Theta=(\theta_1,\ldots,\theta_d)$ of $\mathbb{R}[\Delta]$, the ideal of $\mathbb{R}[\Delta]$ generated by $\theta_1,\ldots,\theta_d$ is denoted as $\Theta \mathbb{R}[\Delta]$. For a $(d-1)$-dimensional simplicial complex $\Delta$, a sequence of $d$ linear forms $\theta_1,\ldots,\theta_d \in \mathbb{R}[\Delta]_1$ is called a *linear system of parameters* (*l.s.o.p.* for short) for $\mathbb{R}[\Delta]$ if $\dim_{\mathbb{R}} \mathbb{R}[\Delta]/\Theta\mathbb{R}[\Delta] < \infty$. For Stanley-Reisner ring, the following criterion of an l.s.o.p. is known (see [@Sta]). **Lemma 13**. Let $\Delta$ be a $(d-1)$-dimensional simplicial complex and $\theta_1,\ldots,\theta_d \in \mathbb{R}[\Delta]_1$ be linear forms. Define $p:V(\Delta) \rightarrow \mathbb{R}^d$ by the relation $\theta_i=\sum_{v \in V(\Delta)}p(v)_ix_v$ for $i \in [d]$. Then $\theta_1,\ldots,\theta_d$ is an l.s.o.p. for $\mathbb{R}[\Delta]$ if and only if $\{p(v):v \in F\}$ is linearly independent for every $F \in \Delta$. The connection between Stanley-Reisner ring theory and rigidity theory was pointed out by Lee [@Lee]. Lee [@Lee] defined the notion of linear and affine $r$-stresses of a simplicial complex $\Delta$ and proved that, for a sequence of linear forms $\Theta=(\theta_1,\ldots,\theta_d)$ and a linear form $\omega=\sum_{v \in V(\Delta)} x_v$, $\left(\mathbb{R}[\Delta]/\Theta \mathbb{R}[\Delta]\right)_r$ (resp. $\left(\mathbb{R}[\Delta]/(\Theta,\omega) \mathbb{R}[\Delta]\right)_r$) is linearly isomorphic to the space of linear (resp. affine) $r$-stresses of $\Delta$. For the remaining argument, the case of $r=1,2$ is related, which is summarized as below. **Lemma 14** (Lee [@Lee]). Let $\Delta$ be a $(d-1)$-dimensional simplicial complex. Let $\Theta=(\theta_1,\ldots,\theta_d)$ be a sequence of linear forms of $\mathbb{R}[\Delta]$. Let $\omega=\sum_{v \in V(\Delta)}x_v$ and define $p:V(\Delta) \rightarrow \mathbb{R}^d$ by the relation $\theta_i=\sum_{v \in V(\Delta)}p(v)_ix_v$ for $i \in [d]$. The followings hold: - $(\mathbb{R}[\Delta]/\Theta \mathbb{R}[\Delta])_1$ is linearly isomorphic to $\{t \in \mathbb{R}^{V(\Delta)}:\sum_{v \in V(\Delta)}t_v p(v)=0\}$ . - $\left(\mathbb{R}[\Delta]/(\Theta,\omega) \mathbb{R}[\Delta]\right)_2$ is linearly isomorphic to $\ker R(G(\Delta),p)^\top$. - $(\mathbb{R}[\Delta]/\Theta \mathbb{R}[\Delta])_2$ is linearly isomorphic to $\ker R(G(\Delta)*\{u\},p')^\top$, where $G(\Delta)*\{u\}$ is the cone graph of $G(\Delta)$ and $p':V(\Delta)\cup\{u\} \rightarrow \mathbb{R}^d$ is the extension of $p$ defined by $p'(u)=0$. We have the following corollary. Although the result is already known, we include the proof for completeness. **Corollary 15**. Let $\Delta$ be a strongly connected $(d-1)$-dimensional simplicial complex. Let $\Theta=(\theta_1,\ldots,\theta_d)$ be an l.s.o.p. for $\mathbb{R}[\Delta]$. 
Let $\omega=\sum_{v \in V(\Delta)}x_v$ and define $p:V(\Delta) \rightarrow \mathbb{R}^d$ by the relation $\theta_i=\sum_{v \in V(\Delta)}p(v)_ix_v$ for $i \in [d]$. Then the multiplication map $(\times \omega):\left(\mathbb{R}[\Delta]/\Theta \mathbb{R}[\Delta]\right)_1 \rightarrow \left(\mathbb{R}[\Delta]/\Theta \mathbb{R}[\Delta]\right)_2$ is injective if and only if $(G(\Delta),p)$ is infinitesimally rigid in $\mathbb{R}^d$. *Proof.* Since $\Theta$ is an l.s.o.p. for $\mathbb{R}[\Delta]$, $p(V(\Delta))$ linearly spans $\mathbb{R}^d$ by Lemma [Lemma 13](#lem:Green-Kleitman){reference-type="ref" reference="lem:Green-Kleitman"}. Hence by Lemma [Lemma 14](#lem:Lee){reference-type="ref" reference="lem:Lee"} (i), $\dim \left(\mathbb{R}[\Delta]/\Theta \mathbb{R}[\Delta]\right)_1 = f_0(\Delta)-d=h_1(\Delta)$. By Lemma [Lemma 14](#lem:Lee){reference-type="ref" reference="lem:Lee"} (ii), $\mathop{\mathrm{Coker}}(\times \omega) \cong \left(\mathbb{R}[\Delta]/(\Theta,\omega) \mathbb{R}[\Delta]\right)_2$ is linearly isomorphic to $\ker R(G(\Delta),p)^\top$. Hence $\dim \mathop{\mathrm{Coker}}(\times \omega)=f_1(\Delta)-df_0(\Delta)+\binom{d+1}{2}(=h_2(\Delta)-h_1(\Delta))$ if and only if $(G(\Delta),p)$ is infinitesimally rigid in $\mathbb{R}^d$. Thus, the statement follows if $\dim (\mathbb{R}[\Delta]/\Theta \mathbb{R}[\Delta])_2=h_2(\Delta)$ always holds. To see this, let $G(\Delta) * \{u\}$ be the cone graph of $G(\Delta)$ and let $p':V(\Delta) \cup \{u\} \rightarrow \mathbb{R}^d$ be the point configuration as in Lemma [Lemma 14](#lem:Lee){reference-type="ref" reference="lem:Lee"} (iii). By Lemma [Lemma 13](#lem:Green-Kleitman){reference-type="ref" reference="lem:Green-Kleitman"}, for each facet $F$ of $\Delta$, $F + u$ is a clique in $G(\Delta) * \{u\}$ and $p'(F + u)$ affinely spans $\mathbb{R}^d$, so $(G(\Delta)[F+u],p'|_{F+u})$ is infinitesimally rigid in $\mathbb{R}^d$. Again by Lemma [Lemma 13](#lem:Green-Kleitman){reference-type="ref" reference="lem:Green-Kleitman"}, for facets $F$ and $G$ of $\Delta$ with $|F \cap G|=d-1$, $p'((F\cap G)+u)$ affinely spans a $(d-1)$-dimensional subspace. As $\Delta$ is strongly connected, one can order the facets of $\Delta$ as $F_1,\ldots,F_m$ in such a way that, for each $i \geq 2$, there is $j <i$ satisfying $|F_i \cap F_j|=d-1$. Hence by the repeated application of Lemma [Lemma 3](#lem:gluing){reference-type="ref" reference="lem:gluing"}, $(G(\Delta)*\{u\},p')$ is infinitesimally rigid in $\mathbb{R}^d$. Therefore by Lemma [Lemma 14](#lem:Lee){reference-type="ref" reference="lem:Lee"}(iii), we get $\dim (\mathbb{R}[\Delta]/\Theta \mathbb{R}[\Delta])_2 =\dim \ker R(G(\Delta)*\{u\},p')^\top = (f_1(\Delta)+f_0(\Delta))-d(f_0(\Delta)+1) + \binom{d+1}{2}=h_2(\Delta)$ as desired. ◻ Let us recall the definition of $\bm{a}$-balancedness. For $\bm{a} \in \mathbb{Z}_{>0}^m$ with $\sum_{i=1}^m a_i=d$, a $(d-1)$-dimensional simplicial complex $\Delta$ on the vertex set $V(\Delta)$ is *$\bm{a}$-balanced* if there is a map $\kappa:V(\Delta) \rightarrow [m]$ satisfying $|F \cap \kappa^{-1}(i)|\leq a_i$ for any $F \in \Delta$. We call such a map $\kappa$ an *$\bm{a}$-coloring* of $\Delta$. If $\Delta$ is pure, this condition is equivalent to $|F \cap \kappa^{-1}(i)| = a_i$ for any facet $F$ of $\Delta$. Stanley [@Sta] showed that an $\bm{a}$-balanced simplicial complex $\Delta$ admits a special type of l.s.o.p. for $\mathbb{R}[\Delta]$. We remark that we always use the term "linear forms" to mean degree one forms in the usual $\mathbb{N}$-grading. 
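As a concrete illustration of Lemma [Lemma 13](#lem:Green-Kleitman){reference-type="ref" reference="lem:Green-Kleitman"} (a minimal sketch added for exposition, not part of the original argument), the following Python snippet checks the l.s.o.p. criterion for the boundary complex of the $3$-dimensional cross-polytope with a point configuration that places each vertex of color $i$ on the $i$th coordinate axis, i.e., a $(\kappa,(1,1,1))$-sparse configuration in the sense of the introduction. Since every face of a simplicial complex is contained in a facet, it suffices to test the facets.

```python
import numpy as np
from itertools import product

# Vertices of the boundary of the 3-dimensional cross-polytope: (color i, sign s),
# standing for the point s*e_i.  Facets choose one sign for each of the 3 colors.
vertices = [(i, s) for i in range(3) for s in (+1, -1)]
facets = [[(i, signs[i]) for i in range(3)] for signs in product((+1, -1), repeat=3)]

# A (kappa,(1,1,1))-sparse configuration: the vertex (i, s) goes to (s * t_{i,s}) e_i
# for some nonzero scalars t_{i,s}.
rng = np.random.default_rng(0)
p = {v: v[1] * (1.0 + rng.random()) * np.eye(3)[v[0]] for v in vertices}

# Lemma 13: the linear forms theta_i = sum_v p(v)_i x_v form an l.s.o.p. iff
# {p(v) : v in F} is linearly independent for every face F; test the facets.
is_lsop = all(np.linalg.matrix_rank(np.array([p[v] for v in F])) == len(F)
              for F in facets)
print(is_lsop)  # True: this sparse configuration yields a (1,1,1)-colored s.o.p.
```

Every facet picks one vertex from each coordinate axis with a nonzero coordinate, so the test succeeds and the associated linear forms give a colored s.o.p.; this is the simplest instance of the correspondence discussed in this section.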
**Proposition 16** (Stanley [@Sta]). Let $\Delta$ be an $\bm{a}$-balanced $(d-1)$-dimensional simplicial complex for $\bm{a} \in \mathbb{Z}_{>0}^m$ and $\kappa:V(\Delta)\rightarrow [m]$ be an $\bm{a}$-coloring of $\Delta$. Make $\mathbb{R}[\Delta]$ into an $\mathbb{N}^m$-graded algebra by defining $\deg x_v = \bm{e}_{\kappa(v)} \in \mathbb{N}^m$. Then $\mathbb{R}[\Delta]$ has an l.s.o.p. $\Theta=(\theta_1,\ldots,\theta_d)$ such that each $\theta_i$ is homogeneous in $\mathbb{N}^m$-grading and exactly $a_i$ elements among $\Theta$ are of degree $\bm{e}_i$ for each $i \in [m]$. An l.s.o.p. satisfying the property in Proposition [Proposition 16](#prop:a-bal-lsop){reference-type="ref" reference="prop:a-bal-lsop"} is called an *$\bm{a}$-colored s.o.p.* We now prove the equivalence of Conjecture [Conjecture 1](#conj:wl){reference-type="ref" reference="conj:wl"} and Conjecture [Conjecture 2](#conj:main){reference-type="ref" reference="conj:main"}. We recall the definition of $(\kappa,\bm{a})$-sparse rigidity. Let $G$ be a graph, $\kappa:V(G) \rightarrow [m]$ a map, and $\bm{a} \in \mathbb{Z}_{>0}^m$ an integer vector with $\sum_{i=1}^m a_i=d$. Let $H_i:=0 \times \cdots \times \mathbb{R}^{a_i} \times \cdots \times 0 \subseteq \mathbb{R}^d=\prod_{i=1}^m \mathbb{R}^{a_i}$ for $i \in [m]$. We say that a point configuration $p:V(G) \rightarrow \mathbb{R}^d$ is *$(\kappa,\bm{a})$-sparse* if $p(v) \in H_{\kappa(v)}$ for all $v\in V(G)$, and $G$ is *$(\kappa,\bm{a})$-sparse rigid* if $(G,p)$ is infinitesimally rigid in $\mathbb{R}^d$ for some $(\kappa,\bm{a})$-sparse point configuration $p$. Note that by Lemma [Lemma 13](#lem:Green-Kleitman){reference-type="ref" reference="lem:Green-Kleitman"}, under the identification $\theta_i=\sum_{v \in V(\Delta)} p(v)_i x(v)$ for $i \in [d]$, $\theta_1,\ldots,\theta_d$ is an $\bm{a}$-colored s.o.p. for $\mathbb{R}[\Delta]$ if and only if $p:V(\Delta) \rightarrow \mathbb{R}^d$ is $(\kappa,\bm{a})$-sparse and $\{p(v):v \in F\}$ is linearly independent for every $F \in \Delta$. *Proof of equivalence of Conjecture [Conjecture 1](#conj:wl){reference-type="ref" reference="conj:wl"} and Conjecture [Conjecture 2](#conj:main){reference-type="ref" reference="conj:main"}.* If Conjecture [Conjecture 1](#conj:wl){reference-type="ref" reference="conj:wl"} holds, for any generic choice of $\omega$, $(\times \omega):\left(\mathbb{R}[\Delta]/\Theta \mathbb{R}[\Delta]\right)_1 \rightarrow \left(\mathbb{R}[\Delta]/\Theta \mathbb{R}[\Delta]\right)_2$ is injective. So, in Conjecture [Conjecture 1](#conj:wl){reference-type="ref" reference="conj:wl"}, we may add extra constraints that $\omega=\sum_{v \in V(\Delta)}a_v x_v$ with $a_v \neq 0$ for all $v \in V(\Delta)$. Moreover, by setting $a_vx_v$ to $x_v$, we may further suppose $\omega=\sum_{v \in V(\Delta)}x_v$ in the statement of Conjecture [Conjecture 1](#conj:wl){reference-type="ref" reference="conj:wl"}. Now, the equivalence of Conjecture [Conjecture 1](#conj:wl){reference-type="ref" reference="conj:wl"} and Conjecture [Conjecture 2](#conj:main){reference-type="ref" reference="conj:main"} follows from Corollary [Corollary 15](#cor:inj){reference-type="ref" reference="cor:inj"}. ◻ # $(\kappa,\bm{a})$-sparse rigidity of minimal cycle complexes for $\bm{a} \in \mathbb{Z}_{\geq 2}^m$ In this section we shall verify Conjecture [Conjecture 2](#conj:main){reference-type="ref" reference="conj:main"} in the case when $a_i \geq 2$ for all $i$ as follows. **Theorem 17**. 
Let $\bm{a}=(a_1,\ldots,a_m) \in \mathbb{Z}_{>0}^m$ be a positive integer vector with $a_i \geq 2$ for all $i \in [m]$ and $d:=\sum_{i=1}^m a_i \geq 3$. Let $\Delta$ be an $\bm{a}$-balanced minimal $(d-1)$-cycle complex with an $\bm{a}$-coloring $\kappa$. Then $G(\Delta)$ is $(\kappa,\bm{a})$-sparse rigid.

The proof of Theorem [Theorem 17](#thm:a-bal){reference-type="ref" reference="thm:a-bal"} consists of several lemmas. To state the lemmas, we introduce $L$-sparse rigidity as a generalization of $(\kappa,\bm{a})$-sparse rigidity. For $x=(x_1,\ldots,x_d) \in \mathbb{R}^d$, we write $\mathop{\mathrm{supp}}x:=\{i \in [d]:x_i \neq 0\}$. Let $G$ be a graph and $L:V(G) \rightarrow 2^{[d]}$ be a map. A point configuration $p:V(G) \rightarrow \mathbb{R}^d$ is *$L$-sparse* if $\mathop{\mathrm{supp}}p(v) \subseteq L(v)$ for every $v \in V(G)$. We say that an $L$-sparse point configuration $p:V(G) \rightarrow \mathbb{R}^d$ is *generic* over $\mathbb{Q}$ if $\{p(v)_j: v \in V(G), j \in L(v)\}$ is algebraically independent over $\mathbb{Q}$. A graph $G$ is *$L$-sparse rigid* if $(G,p)$ is infinitesimally rigid in $\mathbb{R}^d$ for some $L$-sparse point configuration $p$. Note that for $\kappa:V(G) \rightarrow [m]$ and $\bm{a} \in \mathbb{Z}_{>0}^m$ with $d=\sum_{i=1}^m a_i$, $(\kappa,\bm{a})$-sparse rigidity coincides with $L_{\kappa,\bm{a}}$-sparse rigidity, where $L_{\kappa,\bm{a}}:V(G) \rightarrow 2^{[d]}$ is defined by partitioning $[d]$ into disjoint sets $I_1, \ldots, I_m$ with $|I_i|=a_i$ and letting $L_{\kappa,\bm{a}}(v)=I_{\kappa(v)}$. A $(\kappa,\bm{a})$-sparse point configuration $p$ is said to be *generic* if $p$ is generic as an $L_{\kappa,\bm{a}}$-sparse point configuration.

Eftekhari et al. [@EJNSTW] addressed the special setting of $L$-sparse rigidity in which, given $X \subseteq V(G)$, $L(v)=[d-1]$ for $v \in X$ and $L(v)=[d]$ for $v \not\in X$, and they gave a combinatorial characterization when $d=2$. Cook et al. [@3-polytope] gave a combinatorial characterization for $(\kappa,\bm{a})$-sparse rigidity of maximal planar graphs $G$ with $\bm{a}=(2,1)$ and $\kappa:V(G) \rightarrow \{1,2\}$ in which $\kappa^{-1}(2)$ is a stable set in $G$. A set of points $X \subseteq \mathbb{R}^d$ is *affinely independent* if the dimension of the affine span of $X$ is $|X|-1$.

**Lemma 18**. Let $G$ be a graph and $U \subseteq V(G)$ be a subset of vertices of size at most $d+1$. Let $L:V(G) \rightarrow 2^{[d]}$ be a map, and $p:V(G) \rightarrow \mathbb{R}^d$ be a generic $L$-sparse point configuration. Then $p(U)$ is affinely independent if and only if $|\bigcup_{v \in W} L(v)|+1 \geq |W|$ for every subset $W \subseteq U$.

*Proof.* Let $k=|U|$ and $U=\{v_1,\ldots,v_k\}$. Let $A \in \mathbb{R}^{(d+1)\times k}$ be the matrix defined by $$\begin{aligned} A= \begin{bmatrix} p(v_1) & \cdots & p(v_k) \\ 1 & \cdots & 1 \\ \end{bmatrix}.\end{aligned}$$ Then $p(U)$ is affinely independent if and only if $\rank A=k$. Consider the bipartite graph $G(A)$ between $U$ and $[d+1]$ such that there is an edge between $v \in U$ and $i \in [d+1]$ if and only if $A_{i,v} \neq 0$. For $R \subseteq [d+1]$ with $|R|=k$, let $P_R$ be the determinant of the submatrix of $A$ whose rows are indexed by $R$. Then $$\label{eq:submat} P_R=\sum_{\sigma} s_\sigma \prod_{j=1}^k A_{\sigma(v_j),v_j},$$ where the sum is taken over all perfect matchings $\sigma$ of $G(A)[U \sqcup R]$ and $s_\sigma=\pm 1$.
When $P_R$ is considered as a polynomial of $\{p(v)_i:v \in U,i \in L(v)\}$, each monomial appears at most once in the summand of the right hand side of ([\[eq:submat\]](#eq:submat){reference-type="ref" reference="eq:submat"}). Since $p$ is a generic $L$-sparse point configuration, $P_R \neq 0$ if and only if there is a perfect matching in $G(A)[U \sqcup R]$. Hence $\rank A=k$ if and only if there is a size $|U|$ matching in the bipartite graph $G(A)$, which is equivalent to the condition given in the statement by Hall's theorem. ◻ Let $G$ be a graph and $L:V(G) \rightarrow 2^{[d]}$ be a map. For $U \subseteq V(G)$, consider the following "Hall condition" (H): - $|\bigcup_{v \in W} L(v)|+1 \geq |W|$ holds for every subset $W \subseteq U$. **Lemma 19**. Let $G$ be a graph and $uv \in E(G)$ be an edge satisfying $|N_G(u) \cap N_G(v)| \geq d-1$. Let $L:V(G) \rightarrow 2^{[d]}$ be a map satisfying $L(u)=L(v)$, and define $L'$ as the restriction of $L$ to $V(G/uv)$. Suppose that there is a $(d-1)$-set $C \subseteq N_G(u) \cap N_G(v)$ such that $C+u+v$ satisfies the condition (H) with respect to $G$ and $L$. Then $G$ is $L$-sparse rigid if $G/uv$ is $L'$-sparse rigid. *Proof.* Suppose that $G/uv$ is $L'$-sparse rigid. Let $p:V(G/uv) \rightarrow \mathbb{R}^d$ be a generic $L'$-sparse point configuration. Then $(G/uv,p)$ is infinitesimally rigid in $\mathbb{R}^d$. By Lemma [Lemma 18](#lem:gen_pos){reference-type="ref" reference="lem:gen_pos"}, the condition (H) of $C+u+v$ implies that the set of points $\{p(w) : w\in C \cup \{u\}\}$ is affinely independent. Hence $\{p(w)-p(u): w \in C\}$ is linearly independent. Among vectors $z \in \mathbb{R}^{d}$ with $\mathop{\mathrm{supp}}(z)=L(v)$, pick a generic one $z$. Then, by Lemma [Lemma 18](#lem:gen_pos){reference-type="ref" reference="lem:gen_pos"}, the condition (H) of $C+u+v$ implies that $\{p(w):w\in C \cup \{u\} \} \cup \{p(u)+z\}$ is affinely independent. Hence $z$ is not in the linear span of $\{p(w)-p(u):w\in C\}$. Thus by Lemma [Lemma 4](#lem:vertex splitting){reference-type="ref" reference="lem:vertex splitting"}, there is an extension $p'$ of $p$ such that $p'(v)=p(u)+tz$ for some $t \in \mathbb{R}$ such that $(G,p')$ is infinitesimally rigid. As $p'$ is an $L$-sparse point configuration, $G$ is $L$-rigid. ◻ As a special case, we have the following corollary for $(\kappa,\bm{a})$-sparse rigidity. **Corollary 20**. Let $G$ be a graph and $uv \in E(G)$ be an edge satisfying $|N_G(u) \cap N_G(v)| \geq d-1$. Let $\kappa:V(G) \rightarrow [m]$ be a map satisfying $\kappa(u)=\kappa(v)$, and let $\kappa'$ be the restriction of $\kappa$ to $V(G/uv)$. For $U \subseteq V(G)$, define $t_\kappa(U):=(|U \cap \kappa^{-1}(i)|)_i \in \mathbb{Z}_{\geq 0}^{m}$. For $\bm{a} \in \mathbb{Z}_{> 0}^m$, suppose that there is a $(d-1)$-set $C \subseteq N_G(u) \cap N_G(v)$ such that $t_\kappa(C+u+v)=\bm{a}+\bm{e}_j$ for some $j \in [m]$, where $\bm{e}_j$ denotes the $j$th unit coordinate vector. Then $G$ is $(\kappa,\bm{a})$-sparse rigid if $G/uv$ is $(\kappa',\bm{a})$-sparse rigid. *Proof.* One can easily check that in the setting of $(\kappa,\bm{a})$-sparse rigidity, $(d+1)$-set $U$ satisfies the condition (H) if and only if $t_\kappa(U)=\bm{a}+\bm{e}_j$ for some $j \in [m]$. Now the statement follows from Lemma [Lemma 19](#lem:hall){reference-type="ref" reference="lem:hall"}. ◻ *Proof of Theorem [Theorem 17](#thm:a-bal){reference-type="ref" reference="thm:a-bal"}.* We prove the statement by the induction on $|V(\Delta)|$. 
When $\Delta$ is trivial, $G(\Delta)=K_d$, a complete graph on $d$ vertices. By Lemma [Lemma 18](#lem:gen_pos){reference-type="ref" reference="lem:gen_pos"}, $p(V(G))$ spans $(d-1)$-dimensional affine space for any generic $(\kappa,\bm{a})$-sparse $p$. Hence $G(\Delta)$ is $(\kappa,\bm{a})$-sparse rigid. Suppose that $\Delta$ is nontrivial. By $a_1 \geq 2$, we can pick $uv \in E(G)$ with $\kappa(u)=\kappa(v)=1$. Let $\Delta_1^+,\ldots,\Delta_t^+$ be the minimal $(d-1)$-cycle complexes given in Lemma [Lemma 7](#lem:Fog_dec){reference-type="ref" reference="lem:Fog_dec"} with respect to $\Delta$ and $uv$. For $U \subseteq V(G)$, define the type of $U \subseteq V(G)$ by $t_\kappa(U):=(|U \cap \kappa^{-1}(i)|)_i \in \mathbb{Z}_{\geq 0}^{m}$. **Claim 21**. For each $i$, there is a $(d-1)$-set $C \subseteq N_{G(\Delta_i^+)}(u) \cap N_{G(\Delta_i^+)}(v)$ such that $t_\kappa(C+u+v)=\bm{a}+\bm{e}_k$ for some $k \in [m]$. *Proof of claim.* Let $F^*$ be a facet of $\Delta_i^+$ containing $u$ and $v$, which exists by Lemma [Lemma 7](#lem:Fog_dec){reference-type="ref" reference="lem:Fog_dec"} (a). By Lemma [Lemma 6](#lem:basic){reference-type="ref" reference="lem:basic"} (ii), for every $w \in F^* - u -v$, there exists $x (\neq w)$ such that $F^*-w+x$ is a facet of $\Delta_i^+$. Then $C:=F^*-u-v+x$ is included in the common neighborhood of $u$ and $v$ in $G(\Delta_i^+)$. We show that for an appropriate choice of $w$, $t_\kappa(C+u+v)=\bm{a}+\bm{e}_k$ holds for some $k \in [m]$. By Lemma [Lemma 7](#lem:Fog_dec){reference-type="ref" reference="lem:Fog_dec"} (b), for every facet $F$ of $\Delta_i^+$ containing $u$ and $v$, $F-v$ is a $(d-2)$-face of $\Delta$, and thus $t_\kappa(F-v)=\bm{a} -\bm{e}_j$ for some $j \in [m]$. Hence, we have the following property ($\star$): > ($\star$) for every facet $F$ of $\Delta_i^+$ containing $u$ and $v$, $t_\kappa(F) = \bm{a}-\bm{e}_j+\bm{e}_1$ for some $j \in [m]$. By property ($\star$) of $F^*$, $t_\kappa(F^*) = \bm{a}-\bm{e}_j+\bm{e}_1$ for some $j \in [m]$. If $j=1$, for any choice of $w$, we have $t_\kappa(C+u+v)=\bm{a}+\bm{e}_{\kappa(x)}$ as desired. If $j \neq 1$, pick $w$ from $F^* \cap \kappa^{-1}(j)$, which is not empty by $a_j \geq 2$. Since $F^*-w+x$ also satisfies ($\star$), it follows that $\kappa(x)=j$. Hence we have $t_\kappa(C+u+v)=\bm{a}+\bm{e}_1$ as desired. ◻ Let $\kappa_i$ (resp. $\kappa_i'$) be the restriction of $\kappa$ to $V(\Delta_i^+)$ (resp. $V(\Delta_i^+/uv)$). Then $\Delta_i^+/uv$ is $\bm{a}$-balanced and $\kappa_i'$ is an $\bm{a}$-coloring of $\Delta_i^+/uv$. $\Delta_i^+/uv$ is also a minimal $(d-1)$-cycle complex by Lemma [Lemma 7](#lem:Fog_dec){reference-type="ref" reference="lem:Fog_dec"} (c). Thus by induction hypothesis, $G(\Delta_i^+/uv)$ is $(\kappa_i',\bm{a})$-sparse rigid. By Corollary [Corollary 20](#cor:hall){reference-type="ref" reference="cor:hall"} and Claim [Claim 21](#claim:a-bal){reference-type="ref" reference="claim:a-bal"}, $G(\Delta_i^+)$ is $(\kappa_i,\bm{a})$-sparse rigid for each $i$. By Lemma [Lemma 18](#lem:gen_pos){reference-type="ref" reference="lem:gen_pos"} and the property ($\star$), for a facet $F$ of $\Delta_i^+$ and a generic $(\kappa,\bm{a})$-sparse point configuration $p$, $p(F)$ spans a $(d-1)$-dimensional affine subspace. 
Hence by Lemma [Lemma 7](#lem:Fog_dec){reference-type="ref" reference="lem:Fog_dec"} (d) and (e), we can deduce that $G(\Delta)=\bigcup_{i=1}^t G(\Delta_i^+)$ is $(\kappa,\bm{a})$-sparse rigid by repeated application of Lemma [Lemma 3](#lem:gluing){reference-type="ref" reference="lem:gluing"}. ◻

# $(\kappa,\bm{a})$-sparse rigidity of homology manifolds {#sec:7}

In this section, we prove that Conjecture [Conjecture 2](#conj:main){reference-type="ref" reference="conj:main"} holds for any $\bm{a}\neq(d-1,1),(1,d-1)$ if $d \geq 4$ and $\Delta$ is a homology $(d-1)$-manifold. Conjecture [Conjecture 2](#conj:main){reference-type="ref" reference="conj:main"} was verified for balanced simplicial $2$-spheres by Cook et al. [@3-polytope].

**Theorem 22** (Cook et al. [@3-polytope]). For a balanced simplicial $2$-sphere $\Delta$ with a proper $3$-coloring $\kappa$, $G(\Delta)$ is $(\kappa,(1,1,1))$-sparse rigid.

Kalai [@Kal] defined a class $\mathcal{C}_d$ of $(d-1)$-dimensional pseudomanifolds for $d\geq 3$ as follows: $\mathcal{C}_3$ is the class of simplicial $2$-spheres, and for $d\geq 4$, a $(d-1)$-dimensional pseudomanifold $\Delta$ belongs to $\mathcal{C}_d$ if $\mathop{\mathrm{lk}}_\Delta(v) \in \mathcal{C}_{d-1}$ for every $v \in V(\Delta)$. For $d\geq 4$, $\mathcal{C}_d$ includes all homology $(d-1)$-manifolds (over any field). Kalai [@Kal] showed that if $\Delta \in \mathcal{C}_d$ ($d \geq 3$), then $G(\Delta)$ is rigid in $\mathbb{R}^d$. In the case of $(\kappa,\bm{a})$-sparse rigidity, we have the following theorem.

**Theorem 23**. For $d \geq 3$, let $\bm{a}=(a_1,\ldots,a_m) \in \mathbb{Z}_{>0}^m$ be a positive integer vector with $\sum_{i=1}^m a_i=d$. Let $\Delta$ be an $\bm{a}$-balanced pseudomanifold satisfying $\Delta \in \mathcal{C}_d$, and let $\kappa$ be an $\bm{a}$-coloring of $\Delta$. If $\bm{a}\neq (d-1,1), (1,d-1)$, then $G(\Delta)$ is $(\kappa,\bm{a})$-sparse rigid.

In the proof of Theorem [Theorem 23](#thm:C_d){reference-type="ref" reference="thm:C_d"}, we use a cone lemma for $(\kappa,\bm{a})$-sparse rigidity, which we first prove in the generality of $L$-sparse rigidity.

**Lemma 24**. Let $G$ be a graph and $G*\{v\}$ be its cone graph. Let $L:V(G) \cup \{v\} \rightarrow 2^{[d+1]}$ be a map with $L(v) \neq \emptyset$. Suppose that there is $i \in L(v)$ such that, for each $u \in V(G)$, either $i \not\in L(u)$ or $|L(v)\setminus L(u)| \leq 1$ holds. Define $L':V(G) \rightarrow 2^{[d+1]\setminus\{i\}}$ by $L'(u):=(L(u) \cup L(v)) \setminus \{i\}$ if $i \in L(u)$ and $L'(u):=L(u)$ otherwise, and identify $2^{[d+1]\setminus\{i\}}$ with $2^{[d]}$. Then $G$ is $L'$-sparse rigid if and only if $G * \{v\}$ is $L$-sparse rigid.

*Proof.* Identify $H:=\{x \in \mathbb{R}^{d+1}:x_i=0\}$ with $\mathbb{R}^d$. Suppose that $G * \{v\}$ is $L$-sparse rigid. Let $p:V(G) \cup \{v\} \rightarrow \mathbb{R}^{d+1}$ be a generic $L$-sparse configuration. Then $(G * \{v\},p)$ is infinitesimally rigid in $\mathbb{R}^{d+1}$. As $i \in L(v)$, we have $p(v) \not\in H$, and $p(u)-p(v)$ is not parallel to $H$ for any $u \in V(G)$. For each $u \in V(G)$, let $p_H(u)$ be the intersection of $H$ and the line passing through $p(u)$ and $p(v)$. By Lemma [Lemma 5](#lem:coning){reference-type="ref" reference="lem:coning"}, $(G,p_H)$ is infinitesimally rigid in $\mathbb{R}^d$. By the definition of $L'$, $p_H$ is $L'$-sparse. Thus $G$ is $L'$-sparse rigid. To see the other direction, suppose that $G$ is $L'$-sparse rigid.
There is $q:V(G) \rightarrow \mathbb{R}^d \cong H$ such that $(G,q)$ is infinitesimally rigid in $\mathbb{R}^d$ and $q$ is $L'$-sparse and $\mathop{\mathrm{supp}}q(u)=L'(u)$ for all $u \in V(G)$. We define $p:V \cup \{v\} \rightarrow \mathbb{R}^{d+1}$ as follows. Let $p(v) \in \mathbb{R}^{d+1}$ be a point such that $\mathop{\mathrm{supp}}(p(v))=L(v)$ and $p(v)_j \neq q(u)_j$ for any $j \in L(v)$ and $u \in V(G)$. For $u \in V(G)$, if $i \not \in L(u)$ or $L(u) \subseteq L(v)$, let $p(u):=q(u)$. For $u \in V(G)$, if $i \in L(u)$ and $L(u) \not\subseteq L(v)$, by the assumption on $L$, we must have $L(v) \setminus L(u)=\{j\}$ for some $j (\neq i)$. For such $u$, as $q(u)_j\neq0,p(v)_j$, there is a unique real number $t (\neq 0,1) \in \mathbb{R}$ such that $(tq(u)+(1-t)p(v))_j=0$, so let $p(u):=tq(u)+(1-t)p(v)$. One can see that $p$ is $L$-sparse. By Lemma [Lemma 5](#lem:coning){reference-type="ref" reference="lem:coning"}$, (G*\{v\},p)$ is infinitesimally rigid in $\mathbb{R}^{d+1}$. Hence $G*\{v\}$ is $L$-sparse rigid. ◻ We obtain the following corollary of Lemma [Lemma 24](#lem:list-cone){reference-type="ref" reference="lem:list-cone"} for $(\kappa,\bm{a})$-sparse rigidity. **Corollary 25**. Let $G$ be a graph, $\kappa:V(G) \rightarrow [m]$ a map, and $\bm{a} \in \mathbb{Z}_{>0}^m$ a positive integer vector. Let $G*\{v\}$ be the cone graph of $G$. Suppose that $G$ is $(\kappa,\bm{a})$-sparse rigid. We have the followings: - For the extension $\kappa':V(G) \cup \{v\} \rightarrow [m+1]$ of $\kappa$ defined by $\kappa'(v)=m+1$, $G*\{v\}$ is $(\kappa',(\bm{a},1))$-sparse rigid. - For the extension $\kappa':V(G) \cup \{v\} \rightarrow [m]$ of $\kappa$ defined by $\kappa'(v)=i$ for some $i \in [m]$, $G*\{v\}$ is $(\kappa',\bm{a}+\bm{e}_i)$-sparse rigid. *Proof of Theorem [Theorem 23](#thm:C_d){reference-type="ref" reference="thm:C_d"}.* If $m=1$, the statement is the result of Kalai [@Kal]. If $m=2$, the statement follows from Theorem [Theorem 17](#thm:a-bal){reference-type="ref" reference="thm:a-bal"}. We prove the statement for $m \geq 3$ by the induction on $d$. If $d=3$, we have $\bm{a}=(1,1,1)$, and the statement follows from Theorem [Theorem 22](#thm:3-poly){reference-type="ref" reference="thm:3-poly"}. Suppose that $d \geq 4$ and $\Delta \in \mathcal{C}_d$ is $\bm{a}$-balanced. Let $\kappa$ be an $\bm{a}$-coloring of $\Delta$. Define $U \subseteq V(G)$ as follows: if $a_j=1$ for all $j$, let $U:=\kappa^{-1}(\{1,2\})$, and if $a_j \geq 2$ for some $j \in [m]$, let $U:=\kappa^{-1}(j)$. **Claim 26**. For each $v \in U$, $G(\mathop{\mathrm{st}}_\Delta(v))$ is $(\kappa|_{V(\mathop{\mathrm{st}}_\Delta(v))},\bm{a})$-sparse rigid. *Proof of claim.* First consider the case in which $a_j=1$ for all $j$. In this case $m=d \geq 4$. Let $\kappa':V(\mathop{\mathrm{lk}}_\Delta(v)) \rightarrow [m-1]$ be the restriction of $\kappa$ to $V(\mathop{\mathrm{lk}}_\Delta(v))$, where $[m-1]$ is identified with $[m]\setminus\{\kappa(v)\}$. Then $\kappa'$ is a $\bm{b}$-coloring of $\mathop{\mathrm{lk}}_\Delta(v)$, where $\bm{b}=(1,\ldots,1) \in \mathbb{Z}^{m-1}$. As $\mathop{\mathrm{lk}}_\Delta(v) \in \mathcal{C}_{d-1}$, by the induction hypothesis, $G(\mathop{\mathrm{lk}}_\Delta(v))$ is $(\kappa',\bm{b})$-sparse rigid. Thus by Corollary [Corollary 25](#cor:cone-sparse){reference-type="ref" reference="cor:cone-sparse"} (i), the claim follows. Now consider the case in which $a_j\geq2$ for some $j$. 
Then $\kappa|_{V(\mathop{\mathrm{lk}}_\Delta(v))}$ is an $(\bm{a}-\bm{e}_j)$-coloring of $\mathop{\mathrm{lk}}_\Delta(v)$, so $G(\mathop{\mathrm{lk}}_\Delta(v))$ is $(\kappa|_{V(\mathop{\mathrm{lk}}_\Delta(v))},\bm{a}-\bm{e}_j)$-sparse rigid by the induction hypothesis. Thus by Corollary [Corollary 25](#cor:cone-sparse){reference-type="ref" reference="cor:cone-sparse"} (ii), the claim follows. ◻ By definition of $U$, $|U \cap F| \geq 2$ for every facet $F$ of $\Delta$. As $\Delta$ is a pseudomanifold, $\Delta$ is strongly connected. Hence $G(\Delta)[U]$ is connected. Thus, the vertices of $U$ can be ordered as $v_1,\ldots,v_t$ in such a way that $\bigcup_{j<i} \mathop{\mathrm{st}}_\Delta(v_j)$ and $\mathop{\mathrm{st}}_\Delta(v_i)$ share at least one facet for each $i\geq 2$. By Lemma [Lemma 18](#lem:gen_pos){reference-type="ref" reference="lem:gen_pos"}, for any generic $(\kappa,\bm{a})$-sparse configuration $p:V(\Delta)\rightarrow \mathbb{R}^d$ and any facet $F$ of $\Delta$, $p(F)$ spans a $(d-1)$-dimensional affine subspace. Hence, by Lemma [Lemma 3](#lem:gluing){reference-type="ref" reference="lem:gluing"} and Claim [Claim 26](#claim:star){reference-type="ref" reference="claim:star"}, $G(\Delta)=\bigcup_{v \in U} G(\mathop{\mathrm{st}}_\Delta(v))$ is $(\kappa,\bm{a})$-sparse rigid. ◻ The assumption $\bm{a}\neq(d-1,1),(1,d-1)$ in Theorem [Theorem 23](#thm:C_d){reference-type="ref" reference="thm:C_d"} is necessary. Cook et al. [@3-polytope] showed that there is a $(2,1)$-balanced simplicial $2$-sphere $\Delta$ and a $(2,1)$-coloring $\kappa$ of $\Delta$ such that $G(\Delta)$ is not $(\kappa,(2,1))$-sparse rigid. Based on the construction given in [@3-polytope], for each $d\geq 3$, we give a $(d-1,1)$-balanced simplicial $(d-1)$-sphere $\Delta$ and a $(d-1,1)$-coloring $\kappa$ of $\Delta$ such that $G(\Delta)$ is not $(\kappa,(d-1,1))$-sparse rigid. Moreover, $\dim \ker R(G(\Delta),p)-\binom{d+1}{2}$ can be arbitrarily large for any $(\kappa,(d-1,1))$-sparse point configuration $p$. **Example 27**. Let $\Gamma$ be a stacked simplicial $(d-1)$-sphere. Let $\Delta$ be the simplicial $(d-1)$-sphere obtained from $\Gamma$ by subdividing all facets of $\Gamma$. Then $\Delta$ is a stacked $(d-1)$-sphere with $f_0(\Gamma)+f_{d-1}(\Gamma)$ vertices. Define $\kappa:V(\Delta) \rightarrow \{1,2\}$ by $\kappa(v)=1$ if $v \in V(\Gamma)$ and $\kappa(v)=2$ otherwise. Then $\kappa$ is a $(d-1,1)$-coloring of $\Delta$. Let $p:V(\Delta) \rightarrow \mathbb{R}^d$ be a $(\kappa,(d-1,1))$-sparse point configuration of $G(\Delta)$. As $\Delta$ is a stacked sphere, we have $\dim \ker R(G(\Delta),p)^\top = \dim \ker R(G(\Delta),p)-\binom{d+1}{2}$. The subframework $(G(\Gamma),p|_{V(\Gamma)})$ of $(G(\Delta),p)$ is in a $(d-1)$-dimensional affine subspace and has $d f_0(\Gamma)-\binom{d+1}{2}$ edges. Thus the dimension of the linear space of the equilibrium stresses of $(G(\Gamma),p|_{V(\Gamma)})$ is at least $d f_0(\Gamma)-\binom{d+1}{2} - ((d-1)f_0(\Gamma)-\binom{d}{2})=f_0(\Gamma)-d$. Hence $\dim \ker R(G(\Delta),p)-\binom{d+1}{2}$ can be arbitrarily large by setting $f_0(\Gamma)$ large enough. # Further observations on $L$-sparse rigidity We can prove further results similar to Theorem [Theorem 17](#thm:a-bal){reference-type="ref" reference="thm:a-bal"}. The following proposition treats the case in which $\Delta$ satisfies a weaker condition than $\bm{a}$-balancedness. **Proposition 28**.
For $m \geq 1$, let $\bm{a}=(a_1,\ldots,a_m) \in \mathbb{Z}_{> 0}^m$ be a positive integer vector with $a_1 \geq 4$ and $a_i \geq 2$ for all $i \in [m]$. Let $\Delta$ be a minimal $(d-1)$-cycle complex, where $d=\sum_{i=1}^m a_i$. Let $\kappa:V(\Delta) \rightarrow [m]$ be a map such that, for any facet $F$ of $\Delta$, $\left(|F \cap \kappa^{-1}(i)|\right)_i=\bm{a}+\bm{e}_j-\bm{e}_1 \in \mathbb{Z}_{> 0}^m$ holds for some $j \in [m]$. Then, $G(\Delta)$ is $(\kappa,\bm{a})$-sparse rigid. *Proof.* The proof is similar to that of Theorem [Theorem 17](#thm:a-bal){reference-type="ref" reference="thm:a-bal"}. Define a type $t_\kappa(U)$ of $U \subseteq V(\Delta_i^+)$ as $(|U \cap \kappa^{-1}(i)|)_i \in \mathbb{Z}_{\geq 0}^m$. Note that by Lemma [Lemma 18](#lem:gen_pos){reference-type="ref" reference="lem:gen_pos"}, for a generic $(\kappa,\bm{a})$-sparse point configuration $p:V(\Delta)\rightarrow \mathbb{R}^d$ and $U \subseteq V(\Delta)$ with $|U|=d+1$, $p(U)$ is affinely independent if and only if $t_\kappa(U)=\bm{a}+\bm{e}_j$ for some $j \in [m]$. The proof is done by the induction on $|V(\Delta)|$. As a base case, if $\Delta$ is a trivial minimal $(d-1)$-cycle complex, $G(\Delta)=K_d$ and by Lemma [Lemma 18](#lem:gen_pos){reference-type="ref" reference="lem:gen_pos"}, for a generic $(\kappa,\bm{a})$-sparse point configuration $p:V(\Delta)\rightarrow \mathbb{R}^d$, $p(V(\Delta))$ spans a $(d-1)$-dimensional affine space. Hence $G(\Delta)$ is $(\kappa,\bm{a})$-sparse rigid. Suppose that $\Delta$ is a nontrivial minimal $(d-1)$-cycle complex. Pick $u,v \in \kappa^{-1}(1)$ with $uv \in E(G)$, which exist by $a_1 \geq2$. Let $\Delta_1^+,\ldots,\Delta_t^+$ be the nontrivial minimal $(d-1)$-cycle complexes given in Lemma [Lemma 7](#lem:Fog_dec){reference-type="ref" reference="lem:Fog_dec"} with respect to $\Delta$ and $uv$. **Claim 29**. For each $i$, there is a $(d-1)$-set $C \subseteq N_{G(\Delta_i^+)}(u) \cap N_{G(\Delta_i^+)}(v)$ such that $t_\kappa(C+u+v)=\bm{a}+\bm{e}_k$ for some $k \in [m]$. *Proof of claim.* Pick a facet $F^* \in \Delta_i^+$ containing $u$ and $v$. For every $w \in F^*-u-v$, there is $x (\neq w)$ such that $F^*-w+x$ is a facet of $\Delta_i^+$. We show that for an appropriate choice of $w \in F^*-u-v$, $C:=F^*+x-u-v$ satisfies $t_\kappa(C+u+v)=\bm{a}+\bm{e}_j$ for some $j \in [m]$. By Lemma [Lemma 7](#lem:Fog_dec){reference-type="ref" reference="lem:Fog_dec"} (b), for every facet $F$ of $\Delta_i^+$ containing $v$, $t_\kappa(F-v)=\bm{b}-\bm{e}_j$ for some $j \in [m]$ and some $\bm{b} \in \{\bm{a}+\bm{e}_k-\bm{e}_1: k \in [m]\}$. Hence, we have the following: > ($\star$) for every facet $F$ of $\Delta_i^+$ containing $v$, $t_\kappa(F)=\bm{a}+\bm{e}_k-\bm{e}_j$ for some $k,j \in [m]$. By the property ($\star$) of $F^*$, $t_\kappa(F^*)=\bm{a}+\bm{e}_k-\bm{e}_j$ for some $k,j \in [m]$. If $k=j$, for any choice of $w$, we have $t_\kappa(C+u+v)=\bm{a}+\bm{e}_{\kappa(x)}$ as desired. Thus assume that $k \neq j$. If $j=1$, pick $w \in F^*-u-v$ with $\kappa(w)=1$. Note that such $w$ exists by $a_1 \geq 4$. Then for $F^*-w+x$ to satisfy $(\star)$, $\kappa(x)$ must be $1$, and we get $t_\kappa(C+u+v)=\bm{a}+\bm{e}_k$ as desired. If $j \neq 1$, by $a_j \geq 2$, we can pick $w \in F-u-v$ with $\kappa(w)=j$. Then from the property $(\star)$ of $F^*-w+x$, we have $\kappa(x)=j$. Thus we get $t_\kappa(C+u+v)=\bm{a}+\bm{e}_k$ as desired. ◻ The restriction of $\kappa$ to $V(\Delta_i^+)$ (resp. $V(\Delta_i^+/uv)$) satisfies the assumption for $\Delta_i^+$ (resp. $\Delta_i^+/uv$). 
Hence by the same argument as the proof of Theorem [Theorem 17](#thm:a-bal){reference-type="ref" reference="thm:a-bal"}, from Claim [Claim 29](#claim:nonbal){reference-type="ref" reference="claim:nonbal"}, we can deduce that $G(\Delta)$ is $(\kappa,\bm{a})$-sparse rigid by Lemma [Lemma 3](#lem:gluing){reference-type="ref" reference="lem:gluing"}, Lemma [Lemma 4](#lem:vertex splitting){reference-type="ref" reference="lem:vertex splitting"}, Lemma [Lemma 7](#lem:Fog_dec){reference-type="ref" reference="lem:Fog_dec"} (d) and (e). ◻ The following proposition asserts that the assumption $a_i \geq 2$ for all $i$ in Theorem [Theorem 17](#thm:a-bal){reference-type="ref" reference="thm:a-bal"} can be weakened to $a_1 \geq 2$ (and no assumption on $a_i$ for $i \neq 1$) if vertices in $\kappa^{-1}(1)$ are allowed to have full support. **Proposition 30**. Let $d > a \geq 2$ be integers and let $\bm{b} \in \mathbb{Z}_{>0}^m$ be an integer vector with $d-a=\sum_{i=1}^m b_i$. Let $\Delta$ be a minimal $(d-1)$-cycle complex and $X \subseteq V(\Delta)$ be a $(\geq a)$-transversal set of $\Delta$. Let $\kappa:V(\Delta) \setminus X \rightarrow [m]$ be a map satisfying $|F \cap \kappa^{-1}(i)| \leq b_i$ for every $F \in \Delta$ and $i \in [m]$. Let $I_1,\ldots,I_m$ be disjoint subsets of $[d]$ with $|I_i|=b_i$ for $i \in [m]$ and define a map $L^*:V(\Delta) \rightarrow 2^{[d]}$ by $L^*(v)=[d]$ if $v \in X$ and $L^*(v)=I_{\kappa(v)}$ if $v\in V(\Delta) \setminus X$. Then $G(\Delta)$ is $L^*$-sparse rigid. *Proof.* For $U \subseteq V(\Delta)$, let $t(U):=(|U \cap X|,|U \cap \kappa^{-1}(1)|, \ldots,|U \cap \kappa^{-1}(m)|) \in \mathbb{Z}_{\geq 0}^{m+1}$. The assumption on $X$ and $\kappa$ is that, for each facet $F$ of $\Delta$, $t(F)=(a+\sum_{i=1}^m d_i,\bm{b}-\bm{d})$ for some $\bm{d} \in \mathbb{Z}_{\geq 0}^m$ with $\bm{d} \leq \bm{b}$. One can check that a $(d+1)$-set $U \subseteq V(\Delta)$ satisfies the condition (H) for $L^*$ if $t(U)=(a+\sum_{i=1}^m d_i,\bm{b}-\bm{d})+\bm{e}_j$ for some $j \in [m+1]$ and some $\bm{d} \in \mathbb{Z}_{\geq 0}^m$ with $\bm{d} \leq \bm{b}$. The proof is done by the induction on $|V(\Delta)|$. When $\Delta$ is trivial, the statement easily follows. Suppose that $\Delta$ is a nontrivial minimal $(d-1)$-cycle complex. Pick $u,v \in X$ with $uv \in E(\Delta)$, which exist by $a \geq 2$. Let $\Delta_1^+,\ldots,\Delta_t^+$ be the nontrivial minimal $(d-1)$-cycle complexes given in Lemma [Lemma 7](#lem:Fog_dec){reference-type="ref" reference="lem:Fog_dec"} with respect to $\Delta$ and $uv$. **Claim 31**. For each $i$, there exists $C \in N_{G(\Delta_i^+)}(u) \cap N_{G(\Delta_i^+)}(v)$ such that $C+u+v$ satisfies the condition (H) for $L^*$. *Proof of claim.* For each facet $F$ of $\Delta_i^+$ containing $v$, we have $t(F)=(a+\sum_{i=1}^m d_i,\bm{b}-\bm{d})$ for some $\bm{d} \in \mathbb{Z}_{\geq 0}^m$ with $\bm{d} \leq \bm{b}$. Pick $F^* \in \Delta_i^+$ with $u,v \in F^*$, and then pick $w \in F^*-u-v$. Then there is $x (\neq w)$ such that $F^*+x-w$ is a facet of $\Delta_i^+$. Now $C:=F^*+x-u-v$ is the desired set. ◻ By the similar argument to the proof of Theorem [Theorem 17](#thm:a-bal){reference-type="ref" reference="thm:a-bal"}, the $L^*$-sparse rigidity of $G(\Delta)$ is deduced. ◻ #### Acknowledgement I would like to thank my advisor Shin-ichi Tanigawa for helpful discussions and helpful comments on the presentation. 
[^1]: Department of Mathematical Informatics, Graduate School of Information Science and Technology, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, 113-8656, Tokyo Japan. Email: `ryoshun_oba@mist.i.u-tokyo.ac.jp`
--- abstract: | The Ising $p$-spin glass and the random $k$-SAT models exhibit symmetric multi Overlap Gap Property ($m$-OGP), an intricate geometrical property which is a rigorous barrier against many important classes of algorithms. We establish that for both models, the symmetric $m$-OGP undergoes a sharp phase transition and we identify the phase transition point for each model: for any $m\in\mathbb{N}$, there exists $\gamma_m$ (depending on the model) such that the model exhibits $m$-OGP for $\gamma>\gamma_m$ and the $m$-OGP is provably absent for $\gamma<\gamma_m$, both with high probability, where $\gamma$ is some natural parameter of the model. Our results for the Ising $p$-spin glass model are valid for all large enough $p$ that remain constant as the number $n$ of spins grows, $p=O(1)$; our results for the random $k$-SAT are valid for all $k$ that grow mildly in the number $n$ of Boolean variables, $k=\Omega(\ln n)$. To the best of our knowledge, these are the first sharp phase transition results regarding the $m$-OGP. Our proofs are based on an application of the second moment method combined with a concentration property regarding a suitable random variable. While a standard application of the second moment method fails, we circumvent this issue by leveraging an elegant argument of Frieze [@frieze1990independence] together with concentration. author: - "Eren C. Kızıldağ[^1]" bibliography: - bibliography.bib title: | Sharp Phase Transition for Multi Overlap Gap Property in\ Ising $p$-Spin Glass and Random $k$-SAT Models --- # Introduction We focus on the Ising pure $p$-spin glass and the random $k$-SAT models. Both of these models known to exhibit the multi Overlap Gap Property ($m$-OGP), an intricate geometrical property which is a rigorous barrier against important classes of algorithms (see below). Our main contribution is to show that for both models, the $m$-OGP undergoes a sharp phase transition and to identify the phase transition point. ## Ising Pure $p$-Spin Glass Model For a fixed integer $p\ge 2$ and an order-$p$ tensor $\boldsymbol{J}=(J_{i_1,\dots,i_p}:1\le i_1,\dots,i_p\le n)$ with i.i.d. standard normal entries, $J_{i_1,\dots,i_p}\sim {\bf \mathcal{N}}(0,1)$, the $p$-spin Hamiltonian is given by $$\label{eq:Hamiltonian-p-spin} H_{n,p}(\boldsymbol{\sigma})=n^{-\frac{p+1}{2}}\left\langle\boldsymbol{J},\boldsymbol{\boldsymbol{\sigma}}^{\otimes p}\right\rangle = n^{-\frac{p+1}{2}}\sum_{1\le i_1,\dots,i_p\le n}J_{i_1,\dots,i_p}\boldsymbol{\sigma}_{i_1}\cdots\boldsymbol{\sigma}_{i_p}.$$ The $p$-spin model was introduced by Derrida [@derrida1980random]. By allowing for $p$-body interactions, it generalizes the Sherrington-Kirkpatrick (SK) model [@sherrington1975solvable] corresponding to $p=2$ in [\[eq:Hamiltonian-p-spin\]](#eq:Hamiltonian-p-spin){reference-type="eqref" reference="eq:Hamiltonian-p-spin"}. Our focus is on the *Ising* case, where the configuration space is the discrete hypercube, $\boldsymbol{\sigma}\in\{-1,1\}^n$. It is worth noting that the *spherical* case where the configuration space is the hypersphere of radius $\sqrt{n}$ (i.e. $\|\boldsymbol{\sigma}\|_2=\sqrt{n}$) is also studied extensively in the literature. A key quantity regarding spin glass models is the *free energy*: $$\label{eq:Free-energy} F(\beta)\triangleq \lim_{n\to\infty}\frac{\ln Z_\beta}{\beta},\quad\text{where}\quad Z_\beta = \sum_{\boldsymbol{\sigma}\in\Sigma_n}e^{\beta n H_{n,p}(\boldsymbol{\sigma})},$$ where $\beta>0$ is the *inverse temperature*. 
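For concreteness, the following minimal NumPy sketch evaluates the Hamiltonian [\[eq:Hamiltonian-p-spin\]](#eq:Hamiltonian-p-spin){reference-type="eqref" reference="eq:Hamiltonian-p-spin"} and the finite-$n$ partition function $Z_\beta$ from [\[eq:Free-energy\]](#eq:Free-energy){reference-type="eqref" reference="eq:Free-energy"} by brute-force enumeration of $\Sigma_n$. The tiny sizes $n=8$, $p=3$, the inverse temperature $\beta=1$, the random seed, and the contraction order are illustrative choices only; they are not part of the model or of our results.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def p_spin_hamiltonian(J, sigma):
    """H_{n,p}(sigma) = n^{-(p+1)/2} * <J, sigma^{(tensor p)}> for sigma in {-1,1}^n."""
    p, n = J.ndim, sigma.shape[0]
    val = J
    for _ in range(p):                       # contract the disorder tensor with sigma, one index at a time
        val = np.tensordot(val, sigma, axes=([0], [0]))
    return float(val) * n ** (-(p + 1) / 2)

n, p, beta = 8, 3, 1.0                       # toy sizes, for illustration only
J = rng.standard_normal((n,) * p)            # i.i.d. N(0,1) entries J_{i_1,...,i_p}

# Brute-force enumeration of the 2^n spin configurations (feasible only for very small n).
energies = np.array([p_spin_hamiltonian(J, np.array(s, dtype=float))
                     for s in itertools.product([-1, 1], repeat=n)])
Z_beta = np.exp(beta * n * energies).sum()   # Z_beta = sum_sigma exp(beta * n * H(sigma))
print("finite-n ground state  max_sigma H =", energies.max())
print("ln Z_beta =", float(np.log(Z_beta)))
```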
The limiting free energy [\[eq:Free-energy\]](#eq:Free-energy){reference-type="eqref" reference="eq:Free-energy"} for the SK model was predicted in a celebrated work of Parisi [@parisi1979infinite] using the non-rigorous replica method. Following the work of Guerra [@guerra2003broken], Talagrand rigorously verified Parisi's prediction in a breakthrough paper [@talagrand2006parisi]; his proof in fact extends to all even $p$. For general $p$, Parisi formula was verified by Panchenko [@panchenko2014parisi], following the work of Aizenman-Sims-Starr [@aizenman2003extended]. The existence of the limit in [\[eq:Free-energy\]](#eq:Free-energy){reference-type="eqref" reference="eq:Free-energy"} was shown earlier by Guerra and Toninelli [@guerra2002thermodynamic] using the so-called interpolation method. Far more is known rigorously regarding the $p$-spin glass models, including the ultrametricity of the asymptotic Gibbs measure [@panchenko2013parisi], the TAP equations [@auffinger2019thouless] and the TAP free energy [@subag2018free; @chen2023generalized], a certain phase diagram [@toninelli2002almeida; @auffinger2015properties; @jagannath2017some], shattering [@arous2021shattering; @alaoui2023shattering; @gamarnik2023shattering], and algorithmic results (see below). Furthermore, more general spin glass models have in fact been studied in the literature. A particularly relevant case is the *vector spin models* where the states are tuples of points in $\mathbb{R}^n$ (as opposed to points in $\mathbb{R}^n$) with a prescribed overlap, see [@panchenko2018free; @ko2020free] and the references therein. For further pointers to relevant literature on spin glasses, we refer the reader to [@talagrand2010mean; @talagrand2011mean; @panchenko2013sherrington]. #### Finding a Near-Ground State Efficiently Define the *ground-state energy* of the $p$-spin model by $$\label{eq:Ground-En} {\mathsf{H^*}}\triangleq \lim_{n\to\infty}\max_{\boldsymbol{\sigma}\in\Sigma_n} H_{n,p}(\boldsymbol{\sigma}),$$ which can be recovered as a zero temperature limit of the Parisi formula[^2], ${\mathsf{H^*}}=\lim_{\beta\to\infty}F(\beta)/\beta$. Equipped with the value of ${\mathsf{H^*}}$, a natural algorithmic question is to find a near ground-state efficiently. That is, given a $\boldsymbol{J}\in(\mathbb{R}^n)^{\otimes p}$ and an $\epsilon>0$, find in polynomial time a $\boldsymbol{\sigma}_{\rm ALG}\in\Sigma_n$ for which $H_{n,p}(\boldsymbol{\sigma}_{\rm ALG})\ge (1-\epsilon){\mathsf{H^*}}$, ideally with high probability (w.h.p.). For the SK model ($p=2$), this problem was solved by Montanari [@montanari2019FOCS], who devised an Approximate Message Passing (AMP) type algorithm. Montanari's algorithm was inspired by an algorithm of Subag [@subag2021following] regarding spherical mixed $p$-spin model; it is contingent on an unproven (though widely believed) hypothesis that the SK model does not exhibit the OGP. More recent work [@el2021optimization; @sellke2021optimizing] extended Montanari's algorithm to Ising mixed $p$-spin models. For any $\epsilon>0$, these algorithms also return a $\boldsymbol{\sigma}_{\rm ALG}$ for which $H_{n,p}(\boldsymbol{\sigma}_{\rm ALG})\ge (1-\epsilon){\mathsf{H^*}}$ w.h.p., provided that the underlying mixed $p$-spin model does not exhibit the OGP. It is known, however, that the Ising $p$-spin model exhibits the OGP for $p\ge 4$ [@chen2019suboptimality Theorem 3], and that the OGP is a rigorous barrier against AMP type algorithms [@gamarnikjagannath2021overlap]. 
In particular, [@gamarnikjagannath2021overlap] showed that for any even $p\ge 4$, there exists a $\bar{\mu}$ such that any AMP type algorithm fails to find a $\boldsymbol{\sigma}_{\rm ALG}$ with $H_{n,p}(\boldsymbol{\sigma}_{\rm ALG})\ge {\mathsf{H^*}}-\bar{\mu}$. This algorithmic lower bound was later extended to low-degree polynomials [@gamarnik2020low] and to low-depth Boolean circuits [@gamarnik2021circuit]. The classical OGP [@chen2019suboptimality] as well as the aforementioned lower bounds however do not match the best known algorithmic threshold. More recently, Huang and Sellke [@huang2021tight; @huang2023algorithmic] designed a sophisticated version of the OGP consisting of an ultrametric tree of solutions and subsequently obtained tight hardness guarantees against Lipschitz algorithms. ## Random $k$-SAT Model Given Boolean variables $x_1,\dots,x_n$, let a $k$-clause $\mathcal{C}$ be a disjunction of $k$ variables $\{y_1,\dots,y_k\}\subset\{x_1,\dots,x_n,\bar{x}_1,\dots,\bar{x}_n\}$, $\mathcal{C}=y_1 \vee \cdots \vee y_k$. A random $k$-SAT formula $\Phi$ is a conjunction of $M$ $k$-clauses, where each of its $kM$ literals are sampled independently and uniformly from $\{x_1,\dots,x_n,\bar{x}_1,\dots,\bar{x}_n\}$: $\Phi = \mathcal{C}_1\wedge \cdots \wedge \mathcal{C}_M$.[^3] In what follows, we focus on the regime $M=\Theta(n)$ and $n\to\infty$, and refer to $\alpha\triangleq M/n$ as the clause density. Arguably the most natural question regarding the random $k$-SAT is *satisfiability*: when does a satisfying assignment (i.e. a $\boldsymbol{\sigma}\in\{0,1\}^n$ with $\Phi(\boldsymbol{\sigma})=1$) exist? For fixed $k\in \mathbb{N}$, [@franco1983probabilistic] showed, using a simple first moment argument, that a random $k$-SAT formula is w.h.p. unsatisfiable if $\alpha\ge 2^k\ln 2$. This was later refined in [@kirousis1998approximating], where it was shown that a random $k$-SAT formula is w.h.p. unsatisfiable if $\alpha\ge 2^k\ln 2-\frac12(\ln 2+1)+o_k(1)$. Here, $o_k(1)$ term tends to $0$ as $k\to\infty$. On the positive side, [@chvatalreed] showed that a simple algorithm---called unit clause---finds a satisfying assignment w.h.p. if $\alpha<2^k/k$; see also the earlier work [@ming1990probabilistic] showing that the same algorithm succeeds with constant probability. The asymptotic gap between $2^k/k$ and $\Omega_k(2^k)$ was substantially narrowed by Achlioptas and Moore [@achlioptas2002asymptotic]. Using a non-constructive argument based on the second moment method, they showed that a random $k$-SAT formula is w.h.p. satisfiable if $\alpha \le 2^{k-1}\ln 2-O_k(1)$. This was later refined by Coja-Oghlan and and Panagiotou [@cojapanagio]; they showed that a random $k$-SAT formula is w.h.p. satisfiable if $\alpha\le 2^k\ln 2-\frac12(\ln 2+1)+o_k(1)$, nearly matching the lower bound by [@kirousis1998approximating] mentioned above up to additive $o_k(1)$ terms. The breakthrough work by Ding, Sly, and Sun [@ding2015proof] confirmed, for large $k$, that the satisfiability threshold is the value predicted by [@mezard2002analytic] within this $o_k(1)$ range: for any large $k$, there is a threshold $\alpha_*(k)$ such that a random $k$-SAT formula is satisfiable for $\alpha<\alpha_*(k)$ and unsatisfiable for $\alpha>\alpha_*(k)$, both w.h.p. as $n\to\infty$. For a more elaborate discussion on the random $k$-SAT, see the introduction of [@bresler2021algorithmic] and the survey [@achlioptas2009random]. See also [@frieze2005random] for references regarding random $2$-SAT and random $3$-SAT. 
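As a concrete illustration of the model just described, the short sketch below samples a random $k$-SAT formula with $M=\alpha n$ clauses, each of whose $k$ literals is drawn independently and uniformly from $\{x_1,\dots,x_n,\bar{x}_1,\dots,\bar{x}_n\}$, and evaluates a candidate assignment. The array representation of clauses, the random seed, and the particular values of $n$, $k$, and $\alpha$ are implementation choices made here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_formula(n, k, alpha):
    """M = alpha*n clauses; literal j of clause i is variable var[i, j], negated iff sign[i, j] == 0."""
    M = int(alpha * n)
    var = rng.integers(0, n, size=(M, k))    # which variable each literal mentions
    sign = rng.integers(0, 2, size=(M, k))   # 1: positive literal x_i, 0: negated literal bar{x}_i
    return var, sign

def literal_values(var, sign, sigma):
    """Truth value (0/1) of every literal under the assignment sigma in {0,1}^n."""
    return np.where(sign == 1, sigma[var], 1 - sigma[var])

def is_satisfying(var, sign, sigma):
    return bool(literal_values(var, sign, sigma).any(axis=1).all())

def violated_fraction(var, sign, sigma):
    return float((literal_values(var, sign, sigma).sum(axis=1) == 0).mean())

n, k = 200, 6
alpha = 0.5 * (2 ** k) * np.log(2)           # illustrative density: half the first-moment bound 2^k ln 2
var, sign = sample_formula(n, k, alpha)
sigma = rng.integers(0, 2, size=n)           # a uniformly random assignment

print("random assignment satisfies the formula:", is_satisfying(var, sign, sigma))
# For any fixed assignment, each clause is violated with probability exactly 2^{-k}:
print("violated fraction:", violated_fraction(var, sign, sigma), " vs  2^-k =", 2.0 ** -k)
```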
#### Finding a Satisfying Assignment Efficiently Equipped with the existence of satisfying assignments for $\alpha=O_k(2^k)$, a natural question is algorithmic: find such an assignment efficiently. As we mentioned earlier, a very simple algorithm called the *unit clause* finds in polynomial time a satisfying assignment for $\alpha\le 2^k/k$. Coja-Oghlan [@coja2010better] later devised a polynomial-time algorithm finding a satisfying assignment provided $\alpha\le (1+o_k(1))2^k\ln k/k$. No efficient algorithm is known beyond this value. Interestingly, the value $(1+o_k(1))2^k\ln k/k$ also marks the onset of *clustering* in the solution space of random $k$-SAT, as predicted by Krzakala et al. [@krzakala2007gibbs] and verified rigorously by Achlioptas and Coja-Oghlan [@achlioptas2008algorithmic]. Beyond $\alpha\ge (1+o_k(1))2^k\ln k/k$, the solution space of random $k$-SAT breaks down into exponentially many well-separated *sub-dominant* clusters: the clusters are $\Omega(n)$ apart from each other and fraction of solutions contained in each cluster is exponentially small. In light of the these, the value $(1+o_k(1))2^k\ln k/k$ was naturally conjectured to be the algorithmic threshold for the random $k$-SAT[^4]. Namely, it is conjectured that no polynomial-time algorithm that finds (w.h.p.) a satisfying assignment exists above this threshold. While this conjecture still remains open, lower bounds against particular classes of algorithms were obtained. Using the $m$-OGP, Gamarnik and Sudan [@gamarnik2017performance] showed that *sequential local algorithms* fail to find a satisfying assignment for a variant of the random $k$-SAT model known as the random Not-All-Equal-$k$-SAT (NAE-$k$-SAT) for $\alpha\ge 2^{k-1}\log^2 k/k$. Their result rules out algorithms with moderately growing number of message passing iterations. Hetterich [@hetterich2016analysing] proved that *Survey Propagation Guided Decimation* algorithm fails for $\alpha\ge (1+o_k(1))2^k\log k/k$ even when the number of iterations is unbounded; Coja-Oglan, Haqshenas, and Hetterich [@coja2017walksat] showed that *Walksat* stalls when $\alpha\ge C\cdot 2^k\log^2 k/k$, where $C>0$ is some absolute constant. More recently, Bresler and Huang [@bresler2021algorithmic] showed that *low-degree polynomials*---a powerful framework that captures many of the algorithms mentioned above as well as other popular classes of algorithms---fail to find a satisfying assignment for the random $k$-SAT when $\alpha\ge\kappa^*(1+o_k(1))2^k\ln k/k$, where $\kappa^*\approx 4.911$. Their argument is a based on a novel and asymmetric variant of the $m$-OGP (see below). ## Algorithmic Barriers and the Overlap Gap Property In light of prior discussion, both the Ising $p$-spin and the random $k$-SAT models exhibit a *statistical-to-computational gap* (`SCG`): known efficient algorithms perform strictly worse than the existential guarantee. While the standard complexity theory is often not helpful for such average-case models[^5], various frameworks for understanding `SCG`s and giving 'rigorous evidence' of hardness have emerged. For a review of these frameworks, we refer the reader to excellent surveys [@kunisky2022notes; @gamarnik2021overlap; @gamarnik2022disordered]. One such framework is guided through insights from statistical physics and based on the intricate geometrical properties of the solution space. 
#### Overlap Gap Property (OGP) As we mentioned above, prior work [@achlioptas2008algorithmic] as well as [@mezard2005clustering; @achlioptas2006solution] discovered a link between the intricate geometry and algorithmic hardness for certain random constraint satisfaction problems (CSPs): the onset of clustering roughly coincides with the point above which no efficient algorithm is known. While this link is intriguing, the papers mentioned above do not obtain formal algorithmic hardness results. The first formal hardness results through the intricate geometry were obtained by Gamarnik and Sudan [@gamarnik2014limits] by leveraging the OGP framework. Informally, the OGP asserts the non-existence of a certain cluster of solutions; we refer to this cluster as a 'forbidden structure'. The OGP framework has been instrumental in obtaining (nearly sharp) algorithmic lower bounds for many average-case models, including random graphs [@gamarnik2014limits; @gamarnik2017; @rahman2017local; @gamarnik2020low; @wein2020optimal], random CSPs [@gamarnik2017performance; @bresler2021algorithmic], spin glasses [@chen2019suboptimality; @gamarnik2020low; @gamarnikjagannath2021overlap; @huang2021tight; @huang2023algorithmic], binary perceptron [@gamarnik2022algorithms], and number balancing problem [@gamarnik2021algorithmic; @gamarnik2023geometric]. For a survey on OGP, see [@gamarnik2021overlap]. #### Multi OGP ($m$-OGP) The focus of [@gamarnik2014limits] introducing the OGP framework is on the algorithmic problem of finding a large independent set in sparse random graphs on $n$ vertices with average degree $d$. The largest independent set of this model is asymptotically of size $2\frac{\log d}{d}n$ [@frieze1992independence; @bayati2010combinatorial] in double limit, $n\to\infty$ followed by $d\to\infty$. On the other hand, the best known efficient algorithm finds an independent set that is only of size $\frac{\log d}{d}n$, highlighting an `SCG`. For this model, [@gamarnik2014limits; @gamarnik2017] showed that any pair of independent sets of size greater than $(1+1/\sqrt{2})\frac{\log d}{d}n$ exhibits the OGP: either they have a substantial overlap or they are nearly disjoint. Consequently, they showed that *local algorithms* fail to find an independent set of size $(1+1/\sqrt{2})\frac{\log d}{d}n$. Note that this hardness guarantee is off from the best known algorithmic guarantee by an additional factor of $1/\sqrt{2}$. Rahman and Virág [@rahman2017local] removed this additional factor and obtained a sharp hardness guarantee: they showed the presence of a version of OGP involving many independent sets all the way down to the algorithmically achievable $\frac{\log d}{d}n$ value. This approach is referred to as the multi OGP ($m$-OGP); it has been instrumental in obtaining (nearly) sharp algorithmic hardness results for many other average-case models. Our particular focus is on a symmetric version of the $m$-OGP introduced by Gamarnik and Sudan [@gamarnik2017performance]; this version of the OGP asserts the nonexistence of $m$-tuples of nearly equidistant solutions at a certain prescribed distance. Using symmetric $m$-OGP, nearly sharp algorithmic lower bounds were obtained for the random NAE-$k$-SAT problem [@gamarnik2017performance] and for the binary perceptron [@gamarnik2022algorithms]. Similar forbidden structures were also considered for obtaining (non-tight) lower bounds for the number balancing problem [@gamarnik2021algorithmic; @gamarnik2023geometric]. 
While it is not our main focus here, it is worth mentioning that more sophisticated and asymmetric versions of the OGP have emerged recently. These variants consider rather intricate overlap patterns, where the $i{\rm th}$ solution has intermediate overlap with the first $i-1$ solutions for $2\le i\le m$. Using such asymmetric forbidden patterns, sharp algorithmic lower bounds for random graphs [@wein2020optimal] and the random $k$-SAT [@bresler2021algorithmic] were obtained recently; both of these guarantees are against the class of low-degree polynomials. Even more recently, Huang and Sellke [@huang2021tight] designed a very intricate forbidden structure consisting of an ultrametric tree of solutions. Dubbed the *branching OGP*, this version of the OGP was crucial in obtaining tight hardness guarantees for the $p$-spin model, see [@huang2021tight; @huang2023algorithmic]. ## Main Results and the Absence of the OGP Both the Ising $p$-spin glass and the random $k$-SAT models are known to exhibit the symmetric $m$-OGP; this is established through a standard application of the *first moment method*. More concretely, for any $m\in\mathbb{N}$ and any $\gamma>1/\sqrt{m}$, the Ising $p$-spin model exhibits symmetric $m$-OGP above energy level $\gamma\sqrt{2\ln 2}$ for all large $p$, see [@gamarnik2023shattering Theorem 2.11][^6]. For the random $k$-SAT, for any $m\in\mathbb{N}$ and $\gamma>1/m$, the model exhibits symmetric $m$-OGP above constraint density $\gamma 2^k\ln 2$ for all large $k$, see [@gamarnik2017performance] or Theorem [Theorem 11](#thm:m-ogp-k-sat){reference-type="ref" reference="thm:m-ogp-k-sat"} below. Our main goal in this paper is to 'complement' these results. Our first main result establishes the absence of the symmetric $m$-OGP in the Ising $p$-spin model. **Theorem 1**. *(Informal, see Theorem [Theorem 6](#thm:m-ogp-tight){reference-type="ref" reference="thm:m-ogp-tight"})[\[thm:absent-informal-spin\]]{#thm:absent-informal-spin label="thm:absent-informal-spin"} Fix any $m\in\mathbb{N}$ and any $\gamma<1/\sqrt{m}$. Then for all large $p$, the symmetric $m$-OGP is provably absent in the Ising $p$-spin model below the energy level $\gamma\sqrt{2\ln 2}$.* Our second main result establishes a similar result for the random $k$-SAT. **Theorem 2**. *(Informal, see Theorem [Theorem 12](#thm:m-ogp-k-sat-absent){reference-type="ref" reference="thm:m-ogp-k-sat-absent"})[\[thm:absent-informal-sat\]]{#thm:absent-informal-sat label="thm:absent-informal-sat"} Fix any $m\in\mathbb{N}$ and any $\gamma<1/m$. Then for $k=\Omega(\ln n)$, the symmetric $m$-OGP is provably absent in the random $k$-SAT model below the density $\gamma 2^k\ln 2$.* Combined with the corresponding $m$-OGP results, Theorems [\[thm:absent-informal-spin\]](#thm:absent-informal-spin){reference-type="ref" reference="thm:absent-informal-spin"}-[\[thm:absent-informal-sat\]](#thm:absent-informal-sat){reference-type="ref" reference="thm:absent-informal-sat"} yield a sharp phase transition. **Theorem 3**. *(Informal, see Theorems [Theorem 9](#thm:m-ogp-sharp-PT){reference-type="ref" reference="thm:m-ogp-sharp-PT"} and [Theorem 14](#thm:m-ogp-sharp-PT-sat){reference-type="ref" reference="thm:m-ogp-sharp-PT-sat"}) For any $m\in\mathbb{N}$, the symmetric $m$-OGP undergoes a sharp phase transition both for the Ising $p$-spin and for the random $k$-SAT models.* In particular, the phase transition points are $1/\sqrt{m}$ and $1/m$ for the Ising $p$-spin model and the random $k$-SAT, respectively.
We highlight that the phase transition points are strictly monotonic in $m$. Our result for the random $k$-SAT requires $k$ to grow (albeit mildly) in $n$, $k=\Omega(\ln n)$ for technical reasons. For constant $k$, we establish in Theorem [Theorem 16](#thm:m-ogp-k-sat-absent-constant-k){reference-type="ref" reference="thm:m-ogp-k-sat-absent-constant-k"} a weaker guarantee regarding the absence of the $m$-OGP for the set of assignments violating a small fraction of clauses. Proofs of Theorems [\[thm:absent-informal-spin\]](#thm:absent-informal-spin){reference-type="ref" reference="thm:absent-informal-spin"}-[\[thm:absent-informal-sat\]](#thm:absent-informal-sat){reference-type="ref" reference="thm:absent-informal-sat"} crucially rely on an involved application of the *second moment method*. The direct application of the second moment method fails for Theorems [\[thm:absent-informal-spin\]](#thm:absent-informal-spin){reference-type="ref" reference="thm:absent-informal-spin"}-[\[thm:absent-informal-sat\]](#thm:absent-informal-sat){reference-type="ref" reference="thm:absent-informal-sat"}: it yields the existence of a certain $m$-tuple of solutions with probability exponentially small in $n$ with an improving exponent as $p,k\to\infty$. Nevertheless, combined with a concentration property applied to a suitable random variable, the second moment method can be 'repaired' using a very elegant argument due to Frieze [@frieze1990independence]. #### Absence of OGP We further zoom in on the link between the OGP and algorithmic hardness. As we discussed earlier, the OGP suggests algorithmic hardness and formally implies hardness against certain classes of algorithms. A natural question is whether the converse statement---i.e. algorithmic hardness implies presence of the OGP---is true. We thus ask an equivalent question: *does the absence of the OGP imply that the problem is algorithmically tractable*? Our Theorems [Theorem 6](#thm:m-ogp-tight){reference-type="ref" reference="thm:m-ogp-tight"} and [Theorem 12](#thm:m-ogp-k-sat-absent){reference-type="ref" reference="thm:m-ogp-k-sat-absent"} suggest that, strictly speaking, this may not necessarily be the case; we now elaborate on this. For fixed $m\in\mathbb{N}$, Theorems [Theorem 6](#thm:m-ogp-tight){reference-type="ref" reference="thm:m-ogp-tight"} and [Theorem 12](#thm:m-ogp-k-sat-absent){reference-type="ref" reference="thm:m-ogp-k-sat-absent"} establish the absence of the symmetric $m$-OGP regarding $m$-tuples for the Ising $p$-spin glass and the random $k$-SAT models, respectively. Notice though that in light of prior discussion, the range of parameters where the $m$-OGP is provably absent is well within the algorithmically hard phase of these models. Taken together, these suggest that the absence of a certain variant of the $m$-OGP (for fixed $m\in\mathbb{N}$) may not necessarily imply that the model is algorithmically tractable. Namely, it is possible that the model is still algorithmically hard, and that the hard phase may be captured by considering a more sophisticated forbidden structure[^7]. Our results are an initial attempt at understanding the aforementioned question; we believe this question and its refinements merit further investigation. #### Notation For any set $A$, let $|A|$ denotes its cardinality. For any event $E$, denote by $\mathbbm{1}\{E\}$ its indicator. For any $n\in\mathbb{N}$, let $\Sigma_n\triangleq \{-1,1\}^n$. 
For any $\boldsymbol{\sigma},\boldsymbol{\sigma}'\in\Sigma_n$ or $\boldsymbol{\sigma},\boldsymbol{\sigma}'\in\{0,1\}^n$, let $d_H(\boldsymbol{\sigma},\boldsymbol{\sigma}')=\sum_{1\le i\le n}\mathbbm{1}\{\boldsymbol{\sigma}(i)\ne \boldsymbol{\sigma}'(i)\}$. For any $\boldsymbol{\sigma},\boldsymbol{\sigma}'\in\Sigma_n$, let $\left\langle\boldsymbol{\sigma},\boldsymbol{\sigma}'\right\rangle = \sum_{1\le i\le n}\boldsymbol{\sigma}(i)\boldsymbol{\sigma}'(i)=n-2d_H(\boldsymbol{\sigma},\boldsymbol{\sigma}')$. For any $r\in\mathbb{R}^+$, denote the logarithm and exponential functions with base $r$ by $\log_r(\cdot)$ and $\exp_r(\cdot)$, respectively; when $r=e$, we denote the former by $\ln(\cdot)$ and the latter by $\exp(\cdot)$. For $p\in[0,1]$, let $h(p)$ denote the entropy of a Bernoulli variable with parameter $p$, i.e. $h(p)= -p\log_2 p -(1-p)\log_2(1-p)$. Given any $\boldsymbol{\mu}\in\mathbb{R}^n$ and positive semidefinite $\boldsymbol{\Sigma}\in\mathbb{R}^{n\times n}$, denote by ${\bf \mathcal{N}}(\boldsymbol{\mu},\boldsymbol{\Sigma})$ the multivariate normal distribution on $\mathbb{R}^n$ with mean $\boldsymbol{\mu}$ and covariance $\boldsymbol{\Sigma}$. For any matrix $\mathcal{M}$, $\|\mathcal{M}\|_F$, $\|\mathcal{M}\|_2$, and $|\mathcal{M}|$ respectively denote its Frobenius norm, spectral norm, and determinant. Throughout the paper, we employ standard asymptotic notation, e.g. $\Theta(\cdot),O(\cdot),o(\cdot),\Omega(\cdot)$, and $\omega(\cdot)$, where the underlying asymptotics is often w.r.t. $n\to\infty$. Asymptotics other than $n\to\infty$ are distinguished with a subscript, e.g. $\Omega_k(\cdot)$. All floor/ceiling operators are omitted for simplicity. #### Paper Organization The rest of the paper is organized as follows. We provide our results regarding the $p$-spin model in Section [2](#sec:SPIN){reference-type="ref" reference="sec:SPIN"} and the $k$-SAT model in Section [3](#sec:SAT){reference-type="ref" reference="sec:SAT"}. We address the case of constant $k$ in Section [3.1](#set:CONSTANT){reference-type="ref" reference="set:CONSTANT"}. Complete proofs of all our results are provided in Section [4](#sec:PFs){reference-type="ref" reference="sec:PFs"}. # Sharp Phase Transition for the $m$-OGP in Ising $p$-Spin Glass Model {#sec:SPIN} In this section, we establish a sharp phase transition for the multi Overlap Gap Property ($m$-OGP) for the Ising $p$-spin model. We first formalize the set of $m$-tuples under consideration. **Definition 4**. *Let $m\in\mathbb{N}$, $0<\gamma<1$, and $0<\eta<\xi<1$. Denote by $S(\gamma,m,\xi,\eta)$ the set of all $m$-tuples $\boldsymbol{\sigma}^{(t)}\in\Sigma_n,1\le t\le m$, that satisfy the following:* - ***$\gamma$-Optimality**: For any $1\le t\le m$, $H(\boldsymbol{\sigma}^{(t)})\ge \gamma\sqrt{2\ln 2}$.* - ***Overlap Constraint:** For any $1\le t<\ell \le m$, $n^{-1}\left\langle\boldsymbol{\sigma}^{(t)},\boldsymbol{\sigma}^{(\ell)}\right\rangle\in[\xi-\eta,\xi]$.* Definition [Definition 4](#def:m-ogp){reference-type="ref" reference="def:m-ogp"} regards $m$-tuples of near-optimal solutions whose (pairwise) overlaps are constrained; parameter $\gamma$ quantifies the near-optimality and $\xi,\eta$ control the region of overlaps. Together with Gamarnik and Jagannath, we establish in [@gamarnik2023shattering] the following. **Theorem 5**.
*[@gamarnik2023shattering Theorem 2.11][\[thm:m-ogp\]]{#thm:m-ogp label="thm:m-ogp"} For any $m\in\mathbb{N}$ and any $\gamma>1/\sqrt{m}$, there exists $0<\eta<\xi<1$ and a $P^*\in\mathbb{N}$ such that the following holds. Fix any $p\ge P^*$. Then, as $n\to\infty$ $$\mathbb{P}\bigl[S(\gamma,m,\xi,\eta)\ne\varnothing\bigr]\le e^{-\Theta(n)}.$$* That is, for any $m\in\mathbb{N}$ and $\gamma>1/\sqrt{m}$, the Ising pure $p$-spin model exhibits $m$-OGP above $\gamma\sqrt{2\ln 2}$ for all large enough $p$. Our first main result complements Theorem [\[thm:m-ogp\]](#thm:m-ogp){reference-type="ref" reference="thm:m-ogp"} by addressing the case $\gamma<1/\sqrt{m}$. **Theorem 6**. *For any $m\in \mathbb{N}$, $\gamma<1/\sqrt{m}$, and $0<\eta<\xi<1$, there exists a $P^*\in \mathbb{N}$ such that the following holds. Fix any $p\ge P^*$. Then, as $n\to\infty$ $$\mathbb{P}\bigl[S(\gamma,m,\xi,\eta)\ne\varnothing\bigr]\ge 1-e^{-\Theta(n)}.$$* See Section [4.2](#sec:pf-m-ogp-tight){reference-type="ref" reference="sec:pf-m-ogp-tight"} for the proof. Taken together, Theorems [\[thm:m-ogp\]](#thm:m-ogp){reference-type="ref" reference="thm:m-ogp"} and [Theorem 6](#thm:m-ogp-tight){reference-type="ref" reference="thm:m-ogp-tight"} collectively yield that (for large enough $p$) the value $\sqrt{2\ln 2/m}$ is tight: for any $m\in\mathbb{N}$ and sufficiently large $p$, the onset of $m$-OGP is $\sqrt{2\ln 2/m}$. In Theorem [Theorem 9](#thm:m-ogp-sharp-PT){reference-type="ref" reference="thm:m-ogp-sharp-PT"}, we make this more precise and establish a sharp phase transition. ## A Sharp Phase Transition {#a-sharp-phase-transition .unnumbered} We define the notion of *admissible* values of $\gamma$. **Definition 7**. *Fix an $m\in\mathbb{N}$. A value $\gamma>0$ is called $m$-admissible if for any $0<\eta<\xi<1$ there exists a $P^*(m,\gamma,\eta,\xi)\in\mathbb{N}$ such that for every fixed $p\ge P^*(m,\gamma,\eta,\xi)$, $$\lim_{n\to\infty}\mathbb{P}\bigl[S(\gamma,m,\xi,\eta)\ne\varnothing\bigr] =1.$$ Similarly, $\gamma$ is called $m$-inadmissible if there exists $0<\eta<\xi<1$, $\eta$ sufficiently small, and a $\hat{P}\in \mathbb{N}$ such that for every fixed $p\ge \hat{P}$, $$\lim_{n\to\infty}\mathbb{P}\bigl[S(\gamma,m,\xi,\eta)\ne\varnothing\bigr] =0.$$* We now formalize the sharp phase transition we investigate. **Definition 8**. *Fix $m\in\mathbb{N}$. We say that the $m$-OGP exhibits a sharp phase transition at value $\gamma_m$ if $$\sup\{\gamma>0:\gamma\text{ is $m$-admissible}\} = \gamma_m=\inf\{\gamma>0:\gamma\text{ is $m$-inadmissible}\}.$$* Our main result is the following sharp phase transition. **Theorem 9**. *For any $m\in\mathbb{N}$, the $m$-OGP for the Ising $p$-spin glass model exhibits a sharp phase transition in the sense of Definition [Definition 8](#def:sharp-PT){reference-type="ref" reference="def:sharp-PT"} at $\gamma_m=1/\sqrt{m}$.* The proof of Theorem [Theorem 9](#thm:m-ogp-sharp-PT){reference-type="ref" reference="thm:m-ogp-sharp-PT"} is identical to that of Theorem [Theorem 14](#thm:m-ogp-sharp-PT-sat){reference-type="ref" reference="thm:m-ogp-sharp-PT-sat"} below, leveraging Theorems [\[thm:m-ogp\]](#thm:m-ogp){reference-type="ref" reference="thm:m-ogp"} and [Theorem 6](#thm:m-ogp-tight){reference-type="ref" reference="thm:m-ogp-tight"} instead, and is omitted. We highlight that the phase transition point $\gamma_m$ is strictly monotonic in $m$. # Sharp Phase Transition for the $m$-OGP in Random $k$-SAT Model {#sec:SAT} Our next focus is on the random $k$-SAT model. We formalize the set of $m$-tuples we consider. **Definition 10**.
*Let $k\in \mathbb{N}$, $\gamma\in (0,1)$, $m\in \mathbb{N}$, and $0<\eta<\beta<1$. Denote by $\mathcal{S}(\gamma,m,\beta,\eta)$ the set of all $m$-tuples $\boldsymbol{\sigma}^{(t)}\in\{0,1\}^n$, $1\le t\le m$, that satisfy the following.* - ***Satisfiability:** For any $1\le t\le m$, $\Phi(\boldsymbol{\sigma}^{(t)})=1$, where $\Phi(\cdot)$ is a conjunction of $n\alpha_\gamma$ independent $k$-clauses, where $\alpha_\gamma = \gamma 2^k\ln 2$.* - ***Pairwise Distance:** For any $1\le t<\ell \le m$, $n^{-1}d_H(\boldsymbol{\sigma}^{(t)},\boldsymbol{\sigma}^{(\ell)})\in[\beta-\eta,\beta]$.* Definition [Definition 10](#def:m-ogp-k-sat){reference-type="ref" reference="def:m-ogp-k-sat"} regards $m$-tuples of satisfying solutions whose pairwise distances are constrained. The parameter $\alpha_\gamma$ is the clause density: the formula $\Phi$ consists of $n\alpha_\gamma$ independent $k$-clauses. Parameters $\beta$ and $\eta$ collectively control the region of overlaps. We first establish the following $m$-OGP result. **Theorem 11**. *For any $m\in\mathbb{N}$ and any $\gamma>1/m$, there exists $0<\eta<\beta\le\frac12$ and $K^*\triangleq K^*(\gamma,m,\beta,\eta)\in \mathbb{N}$ such that the following holds. Fix any $k\ge K^*$. Then, as $n\to\infty$ $$\mathbb{P}\bigl[\mathcal{S}(\gamma,m,\beta,\eta)\ne\varnothing\bigr]\le e^{-\Theta(n)}.$$* Theorem [Theorem 11](#thm:m-ogp-k-sat){reference-type="ref" reference="thm:m-ogp-k-sat"} is based on the first moment method; it follows immediately from the arguments of [@gamarnik2017performance]. We provide a proof in Section [4.4](#sec:pf-ogp-k-sat){reference-type="ref" reference="sec:pf-ogp-k-sat"} for completeness. Theorem [Theorem 11](#thm:m-ogp-k-sat){reference-type="ref" reference="thm:m-ogp-k-sat"} shows that for any $m\in\mathbb{N}$ and any $\gamma>1/m$, the random $k$-SAT model exhibits $m$-OGP at clause density $\gamma 2^k\ln 2$. Our next result complements Theorem [Theorem 11](#thm:m-ogp-k-sat){reference-type="ref" reference="thm:m-ogp-k-sat"} by addressing the regime $\gamma<1/m$. **Theorem 12**. *For any $m\in \mathbb{N}$, $\gamma<1/m$ and $0<\eta<\beta\le \frac12$, there exists a constant $C(\gamma,m,\beta,\eta)>0$ such that the following holds. Fix any $k\ge C(\gamma,m,\beta,\eta)\ln n$. Then as $n\to\infty$, $$\mathbb{P}\bigl[\mathcal{S}(\gamma,m,\beta,\eta)\ne\varnothing\bigr]\ge 1-e^{-\Theta(n)}.$$* The proof of Theorem [Theorem 12](#thm:m-ogp-k-sat-absent){reference-type="ref" reference="thm:m-ogp-k-sat-absent"} is based on the second moment method, see Section [4.5](#sec:pf-ogp-absent-k-sat){reference-type="ref" reference="sec:pf-ogp-absent-k-sat"} for details. Theorem [Theorem 12](#thm:m-ogp-k-sat-absent){reference-type="ref" reference="thm:m-ogp-k-sat-absent"} yields that for any $m\in\mathbb{N}$, any $\alpha<\frac{2^k\ln 2}{m}$ and any $k$ that grows mildly in $n$, $k=\Omega(\ln n)$, the $m$-OGP is absent: nearly equidistant $m$-tuples of satisfying solutions (at all pairwise distances) exist. Taken together, Theorems [Theorem 11](#thm:m-ogp-k-sat){reference-type="ref" reference="thm:m-ogp-k-sat"} and [Theorem 12](#thm:m-ogp-k-sat-absent){reference-type="ref" reference="thm:m-ogp-k-sat-absent"} collectively yield that for $k=\Omega(\ln n)$, the onset of $m$-OGP is at density $\frac{2^k\ln 2}{m}$. 
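As a toy illustration of Definition [Definition 10](#def:m-ogp-k-sat){reference-type="ref" reference="def:m-ogp-k-sat"} (and not of the asymptotic statements above, which concern large $k$ and $n$), the sketch below enumerates the satisfying assignments of a very small random formula at an illustrative density below $2^k\ln 2/m$ and searches for an $m$-tuple whose normalized pairwise Hamming distances all lie in $[\beta-\eta,\beta]$. All numerical values, the subsampling cap, and the brute-force search are arbitrary choices made only to keep the example runnable.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

n, k, m = 10, 3, 3
beta, eta = 0.5, 0.1
gamma = 0.2                                   # gamma < 1/m, i.e. density below (2^k ln 2)/m
M = int(gamma * (2 ** k) * np.log(2) * n)     # number of clauses, n * alpha_gamma

var = rng.integers(0, n, size=(M, k))         # clause variables
sign = rng.integers(0, 2, size=(M, k))        # 1: positive literal, 0: negated literal

def satisfies(sigma):
    lit = np.where(sign == 1, sigma[var], 1 - sigma[var])
    return bool(lit.any(axis=1).all())

# Enumerate all 2^n assignments and keep the satisfying ones (toy sizes only).
sols = [np.array(bits) for bits in itertools.product([0, 1], repeat=n)
        if satisfies(np.array(bits))]
sols = sols[:60]                              # subsample so the m-tuple search below stays cheap

def in_window(a, b):
    d = np.sum(a != b) / n                    # normalized Hamming distance
    return beta - eta <= d <= beta

found = any(all(in_window(a, b) for a, b in itertools.combinations(combo, 2))
            for combo in itertools.combinations(sols, m))
print(len(sols), "satisfying assignments inspected;",
      "m-tuple with all pairwise distances in [beta-eta, beta]:", found)
```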
In the next section, we leverage Theorems [Theorem 11](#thm:m-ogp-k-sat){reference-type="ref" reference="thm:m-ogp-k-sat"} and [Theorem 12](#thm:m-ogp-k-sat-absent){reference-type="ref" reference="thm:m-ogp-k-sat-absent"} to establish a sharp phase transition analogous to Theorem [Theorem 9](#thm:m-ogp-sharp-PT){reference-type="ref" reference="thm:m-ogp-sharp-PT"}. We note that, unlike our results for the $p$-spin glass model, which are valid for sufficiently large $p$, Theorem [Theorem 12](#thm:m-ogp-k-sat-absent){reference-type="ref" reference="thm:m-ogp-k-sat-absent"} holds for $k=\Omega(\ln n)$. Consequently, the corresponding phase transition also holds for $k=\Omega(\ln n)$. Later in Theorem [Theorem 16](#thm:m-ogp-k-sat-absent-constant-k){reference-type="ref" reference="thm:m-ogp-k-sat-absent-constant-k"}, we provide a weaker version of Theorem [Theorem 12](#thm:m-ogp-k-sat-absent){reference-type="ref" reference="thm:m-ogp-k-sat-absent"} for $m$-tuples of nearly satisfying assignments that holds for all sufficiently large $k=O(1)$. Admittedly, the regime $k=\Omega(\ln n)$ does not truly correspond to the most interesting case $M=\Theta(n)$, where $M$ is the number of clauses. Nevertheless, it is worth mentioning that certain earlier developments in the random $k$-SAT literature, particularly those regarding the satisfiability threshold, were also established in the same regime $k=\Omega(\ln n)$, see e.g. [@frieze2005random; @coja2008random; @liu2012note]. We lastly remark on the condition $\beta\le \frac12$ in Theorem [Theorem 12](#thm:m-ogp-k-sat-absent){reference-type="ref" reference="thm:m-ogp-k-sat-absent"}. Note that for Theorem [Theorem 12](#thm:m-ogp-k-sat-absent){reference-type="ref" reference="thm:m-ogp-k-sat-absent"} to be nonvacuous, there must exist $m$-tuples $\boldsymbol{\sigma}^{(1)},\dots,\boldsymbol{\sigma}^{(m)}\in\{0,1\}^n$ for which, for all small enough $\eta>0$, $n^{-1}d_H(\boldsymbol{\sigma}^{(i)},\boldsymbol{\sigma}^{(j)})\in[\beta-\eta,\beta]$ for $1\le i<j\le m$; our proof in fact extends to any $\beta$ for which such $m$-tuples exist. While a simple application of the probabilistic method, see in particular Lemma [Lemma 23](#lemma:F-NON-EMPTY){reference-type="ref" reference="lemma:F-NON-EMPTY"}, shows[^8] that such $m$-tuples exist for $\beta\le \frac12$, it is not clear whether such $m$-tuples exist for $\beta>\frac12$. For this reason we focus solely on $\beta\le \frac12$ and leave the case $\beta>\frac12$ as an interesting direction for future work. ## A Sharp Phase Transition {#a-sharp-phase-transition-1 .unnumbered} Similar to the Ising $p$-spin model, we define the notion of *admissible* values of $\gamma$ by modifying Definition [Definition 7](#def:adm-exp){reference-type="ref" reference="def:adm-exp"}. **Definition 13**. *Fix an $m\in\mathbb{N}$.
A value $\gamma>0$ is called $m$-admissible if for any $0<\eta<\beta<1$ there exists a $C(\gamma,m,\beta,\eta)\in\mathbb{N}$ such that for every $k\ge C(\gamma,m,\beta,\eta)\ln n$, $$\lim_{n\to\infty}\mathbb{P}\bigl[\mathcal{S}(\gamma,m,\beta,\eta)\ne\varnothing\bigr] =1.$$ Similarly, $\gamma$ is called $m$-inadmissible if there exists $0<\eta<\beta<1$, $\eta$ sufficiently small, and a $C(\gamma,m,\beta,\eta)\in\mathbb{N}$ such that for every $k\ge C(\gamma,m,\beta,\eta)\ln n$, $$\lim_{n\to\infty}\mathbb{P}\bigl[\mathcal{S}(\gamma,m,\beta,\eta)\ne\varnothing\bigr] =0.$$* Equipped with Definition [Definition 13](#def:adm-gamma-sat){reference-type="ref" reference="def:adm-gamma-sat"}, Definition [Definition 8](#def:sharp-PT){reference-type="ref" reference="def:sharp-PT"} remains the same. Our next main result is as follows. **Theorem 14**. *For any $m\in\mathbb{N}$, the $m$-OGP for the random $k$-SAT model exhibits a sharp phase transition in the sense of Definition [Definition 8](#def:sharp-PT){reference-type="ref" reference="def:sharp-PT"} at $\gamma_m=1/m$.* Theorem [Theorem 14](#thm:m-ogp-sharp-PT-sat){reference-type="ref" reference="thm:m-ogp-sharp-PT-sat"} is a direct consequence of Theorems [Theorem 11](#thm:m-ogp-k-sat){reference-type="ref" reference="thm:m-ogp-k-sat"} and [Theorem 12](#thm:m-ogp-k-sat-absent){reference-type="ref" reference="thm:m-ogp-k-sat-absent"}; see Section [4.3](#pf:PT){reference-type="ref" reference="pf:PT"} for its proof. We highlight that the phase transition point $\gamma_m$ is, again, strictly monotonic in $m$. ## Case of Constant $k$, $k=O(1)$ {#set:CONSTANT} While Theorem [Theorem 11](#thm:m-ogp-k-sat){reference-type="ref" reference="thm:m-ogp-k-sat"} is valid for all large enough $k$, Theorem [Theorem 12](#thm:m-ogp-k-sat-absent){reference-type="ref" reference="thm:m-ogp-k-sat-absent"} is valid only for $k$ that grows (albeit mildly) in $n$, $k=\Omega(\ln n)$. A far more interesting regime is when $k$ remains constant as $n\to\infty$. In this regime, the 'vanilla' second moment method fails: it yields $\mathbb{P}[\mathcal{S}(\gamma,m,\beta,\eta)\ne\varnothing]\ge \exp\bigl(-nc^k\bigr)$ for some $c\triangleq c(\gamma,\beta,\eta)\in(0,1)$. Notice that while the probability guarantee is exponentially small in $n$, the exponent improves as $k$ grows. This is an instance where the unsuccessful second moment method can be circumvented by considering an appropriate random variable satisfying a concentration property and using a trick of Frieze [@frieze1990independence]. ### An Auxiliary Objective Function: Number of Violated Constraints {#an-auxiliary-objective-function-number-of-violated-constraints .unnumbered} Given a random $k$-SAT formula $\Phi$ which is a conjunction of $M$ independent $k$-clauses $\mathcal{C}_1,\dots,\mathcal{C}_M$ and a truth assignment $\boldsymbol{\sigma}\in\{0,1\}^n$, define by $\mathcal{L}(\boldsymbol{\sigma})$ the number of clauses violated by $\boldsymbol{\sigma}$: $$\mathcal{L}(\boldsymbol{\sigma}) = \sum_{1\le i\le M}\mathbbm{1}\bigl\{\mathcal{C}_i(\boldsymbol{\sigma})=0\bigr\}.$$ In particular, $\Phi(\boldsymbol{\sigma})=1$ iff $\mathcal{L}(\boldsymbol{\sigma})=0$. Equipped with this, we modify Definition [Definition 10](#def:m-ogp-k-sat){reference-type="ref" reference="def:m-ogp-k-sat"}. **Definition 15**. *Let $k\in \mathbb{N}$, $\gamma\in (0,1)$, $m\in \mathbb{N}$, $0<\eta<\beta<1$, and $\kappa\in[0,1]$. 
Denote by $\mathcal{S}(\gamma,m,\beta,\eta,\kappa)$ the set of all $m$-tuples $\boldsymbol{\sigma}^{(t)}\in\{0,1\}^n$, $1\le t\le m$, that satisfy the following.* - ***Near-Optimality:** For any $1\le i\le m$, $M^{-1}\mathcal{L}(\boldsymbol{\sigma}^{(i)})\le \kappa$, where $M=n\alpha_\gamma$ for $\alpha_\gamma=\gamma 2^k\ln 2$.* - ***Overlap Constraint:** For any $1\le t<\ell \le m$, $n^{-1}d_H(\boldsymbol{\sigma}^{(t)},\boldsymbol{\sigma}^{(\ell)})\in[\beta-\eta,\beta]$.* That is, $\mathcal{S}(\gamma,m,\beta,\eta,\kappa)$ is the set of all nearly equidistant $m$-tuples of truth assignments such that for every $\bigl(\boldsymbol{\sigma}^{(1)},\dots,\boldsymbol{\sigma}^{(m)}\bigr)\in \mathcal{S}(\gamma,m,\beta,\eta,\kappa)$ and any $i$, the fraction of clauses violated by $\boldsymbol{\sigma}^{(i)}$ is at most $\kappa$. In particular, $\mathcal{S}(\gamma,m,\beta,\eta)=\mathcal{S}(\gamma,m,\beta,\eta,0)$. With these, we are ready to state our next main result. **Theorem 16**. *For any $m\in \mathbb{N}$, $\gamma<1/m$, and $0<\eta<\beta<1$, there exist constants $C^*>0$ and $K^*\in\mathbb{N}$ such that the following holds. Fix any $k\ge K^*$. Then as $n\to\infty$, $$\mathbb{P}\bigl[\mathcal{S}\bigl(\gamma,m,\beta,\eta,C^*2^{-k/2}\bigr)\ne\varnothing\bigr]\ge 1-e^{-\Theta(n)}.$$* See below for the proof sketch and Section [4.6](#sec:pf-thm:m-ogp-k-sat-absent-constant-k){reference-type="ref" reference="sec:pf-thm:m-ogp-k-sat-absent-constant-k"} for the proof. Theorem [Theorem 16](#thm:m-ogp-k-sat-absent-constant-k){reference-type="ref" reference="thm:m-ogp-k-sat-absent-constant-k"} asserts that for any $m\in\mathbb{N}$, $\gamma<1/m$ and $0<\eta<\beta<1$, there exist (w.h.p.) $m$-tuples $(\boldsymbol{\sigma}^{(1)},\dots,\boldsymbol{\sigma}^{(m)})$ with $n^{-1}d_H(\boldsymbol{\sigma}^{(i)},\boldsymbol{\sigma}^{(j)})\in[\beta-\eta,\beta]$ such that for each $1\le i\le m$, the number of clauses satisfied by $\boldsymbol{\sigma}^{(i)}$ is at least $(1-O(2^{-k/2}))M$. So, below density $\frac{2^k\ln 2}{m}$, there exist $m$-tuples of nearly equidistant and nearly satisfying assignments. #### Proof Sketch for Theorem [Theorem 16](#thm:m-ogp-k-sat-absent-constant-k){reference-type="ref" reference="thm:m-ogp-k-sat-absent-constant-k"} {#proof-sketch-for-theorem-thmm-ogp-k-sat-absent-constant-k} Define the random variable $$\label{eq:Z-beta-eta} Z_{\beta,\eta}\triangleq \max_{\substack{\boldsymbol{\sigma}^{(1)},\dots,\boldsymbol{\sigma}^{(m)}\in\{0,1\}^n \\ n^{-1}d_H(\boldsymbol{\sigma}^{(i)},\boldsymbol{\sigma}^{(j)})\in[\beta-\eta,\beta]}}\, \min_{1\le i\le m} \widehat{\mathcal{L}}(\boldsymbol{\sigma}^{(i)}),\quad\text{where}\quad \widehat{\mathcal{L}}(\boldsymbol{\sigma}) = M-\mathcal{L}(\boldsymbol{\sigma}).$$ Note that $\mathcal{S}(\gamma,m,\beta,\eta,\kappa)\ne\varnothing$ iff $Z_{\beta,\eta}\ge (1-\kappa)M$. Furthermore, as a simple application of Azuma's inequality, one can establish that $Z_{\beta,\eta}$ concentrates around its mean. Theorem [Theorem 16](#thm:m-ogp-k-sat-absent-constant-k){reference-type="ref" reference="thm:m-ogp-k-sat-absent-constant-k"} is based on this concentration property coupled with the second moment estimate per Theorem [Theorem 12](#thm:m-ogp-k-sat-absent){reference-type="ref" reference="thm:m-ogp-k-sat-absent"}. # Proofs {#sec:PFs} ## Auxiliary Results Below, we collect several auxiliary results. The first one is a simple counting estimate, see [@galvin2014three Theorem 3.1] for a proof. **Lemma 17**. *Fix $\alpha\le 1/2$.
Then, for all $n$, $$\sum_{i\le \alpha n}\binom{n}{i}\le 2^{nh(\alpha)}.$$* We employ the *second moment method* via the Paley-Zygmund inequality, recorded below. **Lemma 18**. *Let $Z$ be a random variable with $\mathbb{P}[Z\ge 0]=1$ and ${\rm Var}(Z)<\infty$. Then, for any $\theta\in[0,1]$, $$\mathbb{P}[Z>\theta\mathbb{E}[Z]]\ge (1-\theta)^2\frac{\mathbb{E}[Z]^2}{\mathbb{E}[Z^2]}.$$* For a proof, see [@alon2016probabilistic]. The next result is a tail bound for multivariate normal random vectors. It is originally due to Savage [@savage1962mills]; the version below is reproduced verbatim from [@hashorva2003multivariate; @hashorva2005asymptotics]. **Theorem 19**. *Let $\boldsymbol{X}\in\mathbb{R}^d$ be a centered multivariate normal random vector with non-singular covariance matrix $\Sigma\in\mathbb{R}^{d\times d}$ and $\boldsymbol{t}\in\mathbb{R}^d$ be a fixed threshold. Suppose that $\Sigma^{-1}\boldsymbol{t}>\boldsymbol{0}$ entrywise. Then, $$1-\left\langle 1/(\Sigma^{-1}\boldsymbol{t}),\Sigma^{-1}(1/(\Sigma^{-1}\boldsymbol{t}))\right\rangle \le \frac{\mathbb{P}[\boldsymbol{X}\ge \boldsymbol{t}]}{\varphi_{\boldsymbol{X}}(\boldsymbol{t})\prod_{i\le d}\left\langle e_i,\Sigma^{-1}\boldsymbol{t}\right\rangle}\le 1,$$ where $e_i\in\mathbb{R}^d$ is the $i{\rm th}$ unit vector and $\varphi_{\boldsymbol{X}}(\boldsymbol{t})$ is the multivariate normal density evaluated at $\boldsymbol{t}$: $$\varphi_{\boldsymbol{X}}(\boldsymbol{t}) = (2\pi)^{-d/2}|\Sigma|^{-1/2}\exp\left(-\frac{\boldsymbol{t}^T\Sigma^{-1}\boldsymbol{t}}{2}\right)\in\mathbb{R}^+.$$* We employ Theorem [Theorem 19](#thm:multiv-tail){reference-type="ref" reference="thm:multiv-tail"} when the coordinates of $X$ are nearly independent, i.e. $\Sigma$ is close to the identity. The next auxiliary result we record is Slepian's Lemma [@slepian1962one]. **Lemma 20**. *Let $X=(X_1,\dots,X_n)\in\mathbb{R}^n$ and $Y=(Y_1,\dots,Y_n)\in\mathbb{R}^n$ be two multivariate normal random vectors with (a) $\mathbb{E}[X_i]=\mathbb{E}[Y_i]=0$ and $\mathbb{E}[X_i^2]=\mathbb{E}[Y_i^2]$ for every $1\le i\le n$ and (b) $\mathbb{E}[X_iX_j]\le\mathbb{E}[Y_iY_j]$ for $1\le i<j\le n$. Fix any $c_1,\dots,c_n\in\mathbb{R}$. Then, $$\mathbb{P}\bigl[X_i\ge c_i,\forall i\bigr]\le \mathbb{P}\bigl[Y_i\ge c_i,\forall i\bigr].$$* The next auxiliary result regards the random $k$-SAT model. **Lemma 21**. *Let $\boldsymbol{\sigma},\boldsymbol{\sigma}'\in\{0,1\}^n$ with $d_H(\boldsymbol{\sigma},\boldsymbol{\sigma}')\ge \beta n$ for some $\beta\in(0,1)$. Suppose the clause $\mathcal{C}$ is a disjunction of $k$ literals $y_1,\dots,y_k$, that is $\mathcal{C} = y_1\vee y_2 \vee \cdots \vee y_k$, where $y_i$ are drawn independently and uniformly at random from $\{x_1,\dots,x_n,\bar{x}_1,\dots,\bar{x}_n\}$. Then, $$\mathbb{P}\bigl[\mathcal{C}(\boldsymbol{\sigma})=\mathcal{C}(\boldsymbol{\sigma}') = 0\bigr] \le 2^{-k}(1-\beta)^k.$$* *Proof of Lemma [Lemma 21](#lemma:random-k-sat-prob){reference-type="ref" reference="lemma:random-k-sat-prob"}.* Fix $\boldsymbol{\sigma},\boldsymbol{\sigma}'\in\{0,1\}^n$ with $d_H(\boldsymbol{\sigma},\boldsymbol{\sigma}')\ge\beta n$, and let $I=\{i\in[n]:\boldsymbol{\sigma}(i)\ne \boldsymbol{\sigma}'(i)\}$, so that $|I|\ge \beta n$. Note that if there is an $i\in I$ such that $x_i\in \{y_1,\dots,y_k\}$ or $\bar{x}_i\in\{y_1,\dots,y_k\}$, then at least one of $\mathcal{C}(\boldsymbol{\sigma})$ or $\mathcal{C}(\boldsymbol{\sigma}')$ is satisfied.
Consequently, $\mathcal{C}(\boldsymbol{\sigma})=\mathcal{C}(\boldsymbol{\sigma}')=0$ is possible only if $\mathcal{C}$ consists of literals $x_i$/$\bar{x}_i$ for which $i\in [n]\setminus I$. Denote the latter event by $\mathcal{E}_g$. So, $$\begin{aligned} \mathbb{P}\bigl[\mathcal{C}(\boldsymbol{\sigma})=\mathcal{C}(\boldsymbol{\sigma}') = 0\bigr] &= \mathbb{P}\bigl[\mathcal{C}(\boldsymbol{\sigma})=\mathcal{C}(\boldsymbol{\sigma}') = 0\mid \mathcal{E}_g\bigr] \mathbb{P}[\mathcal{E}_g] \\ &=2^{-k}\cdot \frac{(n-d_H(\boldsymbol{\sigma},\boldsymbol{\sigma}'))^k}{n^k} \\ &\le 2^{-k}(1-\beta)^k. \qedhere \end{aligned}$$ ◻

The last auxiliary result is the Wielandt-Hoffman inequality [@hoffman1953variation] (see also [@horn2012matrix Corollary 6.3.8]).

**Theorem 22**. *Let $A,A+E\in\mathbb{R}^{n\times n}$ be two symmetric matrices with eigenvalues $$\lambda_1(A)\ge \cdots\ge \lambda_n(A)\quad\text{and}\quad \lambda_1(A+E)\ge \cdots\ge \lambda_n(A+E).$$ Then, $$\sum_{1\le i\le n}\left(\lambda_i(A+E)-\lambda_i(A)\right)^2 \le \|E\|_F^2.$$*

## Proof of Theorem [Theorem 6](#thm:m-ogp-tight){reference-type="ref" reference="thm:m-ogp-tight"} {#sec:pf-m-ogp-tight}

We first observe that if $0<\eta'<\eta<\xi$ then $S(\gamma,m,\xi,\eta')\subseteq S(\gamma,m,\xi,\eta)$. In particular, $S(\gamma,m,\xi,\eta')\ne\varnothing\Rightarrow S(\gamma,m,\xi,\eta)\ne\varnothing$. For this reason, we assume in the sequel that $\eta$ is sufficiently small.

#### Existence of $m$-Tuples with Constrained Overlaps

For any $m\in\mathbb{N}$, $0<\eta<\xi<1$, let $$\label{eq:overlap-set} \mathcal{F}(m,\xi,\eta)\triangleq \left\{\left(\boldsymbol{\sigma}^{(1)},\dots,\boldsymbol{\sigma}^{(m)}\right):\xi-\eta\le n^{-1}\left\langle\boldsymbol{\sigma}^{(i)},\boldsymbol{\sigma}^{(j)}\right\rangle\le \xi,1\le i<j\le m\right\}.$$ We first show that $\mathcal{F}(m,\xi,\eta)\ne\varnothing$ for all large enough $n$.

**Lemma 23**. *For any $m\in\mathbb{N}$, $0<\eta<\xi<1$, there exists $N_0\in \mathbb{N}$ such that $\mathcal{F}(m,\xi,\eta)\ne\varnothing$ for $n\ge N_0$.*

*Proof of Lemma [Lemma 23](#lemma:F-NON-EMPTY){reference-type="ref" reference="lemma:F-NON-EMPTY"}.* The argument below is reproduced from [@gamarnik2021algorithmic Theorem 2.5]. Our approach is through the *probabilistic method* [@alon2016probabilistic]. Choose $\delta>0$ such that $\xi-\eta<\xi-2\delta<\xi$ and $\xi-2\delta>0$, and let $p^*\in(0,1)$ satisfy $$(1-2p^*)^2 = \xi - \delta.$$ As $\xi-\delta\in(0,1)$, such a $p^*$ indeed exists. We now assign the coordinates of $\boldsymbol{\sigma}^{(i)}$ randomly. Specifically, let $\boldsymbol{\sigma}^{(i)}(k)$ be i.i.d. for $1\le i\le m$ and $1\le k\le n$ with distribution $$\mathbb{P}\bigl[\boldsymbol{\sigma}^{(i)}(k) = 1\bigr] = p^*\quad\text{and}\quad \mathbb{P}\bigl[\boldsymbol{\sigma}^{(i)}(k) = -1\bigr]=1-p^*.$$ Observe that for fixed $1\le i<j\le m$, the random variables $\boldsymbol{\sigma}^{(i)}(k)\boldsymbol{\sigma}^{(j)}(k),1\le k\le n$, are i.i.d. with $$\mathbb{E}[\boldsymbol{\sigma}^{(i)}(k)\boldsymbol{\sigma}^{(j)}(k)] = (p^*)^2 + (1-p^*)^2 -2p^*(1-p^*) = (1-2p^*)^2 = \xi -\delta.$$ In particular, $\left\langle\boldsymbol{\sigma}^{(i)},\boldsymbol{\sigma}^{(j)}\right\rangle$ is a sum of i.i.d. $\pm 1$-valued random variables with $n^{-1}\mathbb{E}[\left\langle\boldsymbol{\sigma}^{(i)},\boldsymbol{\sigma}^{(j)}\right\rangle] = \xi-\delta$.
Define next the sequence of events $$\mathcal{E}_{ij} = \{n^{-1}\left\langle\boldsymbol{\sigma}^{(i)},\boldsymbol{\sigma}^{(j)}\right\rangle\in [\xi-2\delta,\xi]\},\quad 1\le i<j\le m.$$ Using standard concentration results for sums of i.i.d. sub-Gaussian random variables (see e.g. [@vershynin2018high]), we obtain that there is a $C>0$ such that for any $\delta>0$ and $n$, $$\mathbb{P}\left[\left|\frac1n\sum_{1\le k\le n}\boldsymbol{\sigma}^{(i)}(k)\boldsymbol{\sigma}^{(j)}(k)-(\xi-\delta)\right|\le \delta\right]\ge 1-\exp\bigl(-Cn\delta^2\bigr).$$ So, $$\mathbb{P}\left[\bigcap_{1\le i<j\le m}\mathcal{E}_{ij}\right]\ge 1-\binom{m}{2}e^{-Cn\delta^2}>0,$$ for $n$ large enough. This establishes the existence of $\boldsymbol{\sigma}^{(i)}\in\Sigma_n,1\le i\le m$, with $n^{-1}\left\langle\boldsymbol{\sigma}^{(i)},\boldsymbol{\sigma}^{(j)}\right\rangle\in [\xi-2\delta,\xi]\subset[\xi-\eta,\xi]$ for $1\le i<j\le m$, and completes the proof. ◻

Equipped with Lemma [Lemma 23](#lemma:F-NON-EMPTY){reference-type="ref" reference="lemma:F-NON-EMPTY"}, fix a $\boldsymbol{\sigma}^*\in\Sigma_n$ and let $$\label{eq:L_term} L :=L(m,\xi,\eta) = \left|\left\{(\boldsymbol{\sigma}^{(i)}:i\le m):\boldsymbol{\sigma}^{(1)}=\boldsymbol{\sigma}^*,n^{-1}\left\langle\boldsymbol{\sigma}^{(i)},\boldsymbol{\sigma}^{(j)}\right\rangle\in[\xi-\eta,\xi],1\le i<j\le m\right\}\right|.$$ From symmetry, (a) $L$ is also the number of $m$-tuples $(\boldsymbol{\sigma}^{(i)}:i\le m)\in\mathcal{F}(m,\xi,\eta)$ with $k{\rm th}$ coordinate fixed at $\boldsymbol{\sigma}^*$, $\boldsymbol{\sigma}^{(k)}=\boldsymbol{\sigma}^*$, for any arbitrary $k$, and (b) $|\mathcal{F}(m,\xi,\eta)| =2^n\cdot L$, so $L\ge 1$ for large $n$ by Lemma [Lemma 23](#lemma:F-NON-EMPTY){reference-type="ref" reference="lemma:F-NON-EMPTY"}.

#### An Auxiliary Random Variable

Next, define $$\label{eq:auxil-rv-T} T_{m,\xi,\eta}\triangleq \max_{(\boldsymbol{\sigma}^{(i)}:i\le m)\in\mathcal{F}(m,\xi,\eta)} \min_{1\le j\le m}H(\boldsymbol{\sigma}^{(j)}).$$ We establish a concentration property for $T_{m,\xi,\eta}$.

**Lemma 24**. *For any $t\ge 0$, $$\mathbb{P}\bigl[T_{m,\xi,\eta}-\mathbb{E}[T_{m,\xi,\eta}]\ge t\bigr] \le \exp\left(-\frac{t^2 n}{2}\right).$$*

*Proof of Lemma [Lemma 24](#lemma:T-concen){reference-type="ref" reference="lemma:T-concen"}.* Throughout the proof, we make the dependence on the disorder $\boldsymbol{J}$ explicit. Given $\boldsymbol{J}\in(\mathbb{R}^n)^{\otimes p}$, we view $T_{m,\xi,\eta}$ as a map $T_{m,\xi,\eta}(\boldsymbol{J}):\mathbb{R}^{n^p}\to \mathbb{R}$ and establish that it is Lipschitz (w.r.t. the $\ell_2$ norm): $$\label{eq:T-is-Lip} \bigl|T_{m,\xi,\eta}(\boldsymbol{J}) -T_{m,\xi,\eta}(\boldsymbol{J}') \bigr|\le n^{-\frac12}\|\boldsymbol{J}-\boldsymbol{J}'\|_2.$$ Lemma [Lemma 24](#lemma:T-concen){reference-type="ref" reference="lemma:T-concen"} then follows from standard concentration results for Lipschitz functions of Gaussian random variables (see e.g. [@wainwright2019high Theorem 2.26]).
To that end, fix any $\boldsymbol{\sigma}$ and recall that $$H(\boldsymbol{\sigma},\boldsymbol{J}) = n^{-\frac{p+1}{2}}\left\langle\boldsymbol{J},\boldsymbol{\sigma}^{\otimes p}\right\rangle.$$ Using the Cauchy-Schwarz inequality and the fact $\|\boldsymbol{\sigma}^{\otimes p}\|_2=n^{p/2}$ for any $\boldsymbol{\sigma}\in\Sigma_n$, we obtain $$\label{eq:ham-lip} \bigl|H(\boldsymbol{\sigma},\boldsymbol{J}) - H(\boldsymbol{\sigma},\boldsymbol{J}')\bigr| \le n^{-\frac{p+1}{2}}\|\boldsymbol{J}-\boldsymbol{J}'\|_2 \|\boldsymbol{\sigma}^{\otimes p}\|_2 = n^{-\frac{p+1}{2}}\cdot n^{\frac{p}{2}}\|\boldsymbol{J}-\boldsymbol{J}'\|_2 = n^{-\frac12}\|\boldsymbol{J}-\boldsymbol{J}'\|_2.$$ Now, fix any $\Xi = (\boldsymbol{\sigma}^{(i)}:i\le m)\in\mathcal{F}(m,\xi,\eta)$ and let $L(\Xi,\boldsymbol{J}) = \min_{1\le j\le m}H(\boldsymbol{\sigma}^{(j)},\boldsymbol{J})$. We have $$L(\Xi,\boldsymbol{J}) \le H(\boldsymbol{\sigma}^{(j)},\boldsymbol{J}) \le H(\boldsymbol{\sigma}^{(j)},\boldsymbol{J}') +n^{-\frac12}\|\boldsymbol{J}-\boldsymbol{J}'\|_2,$$ where we used [\[eq:ham-lip\]](#eq:ham-lip){reference-type="eqref" reference="eq:ham-lip"} in the last step. Taking a minimum over $1\le j\le m$ and exchanging $\boldsymbol{J}$ with $\boldsymbol{J}'$ afterwards, we obtain $$\label{eq:L-lip} \bigl| L(\Xi,\boldsymbol{J}) - L(\Xi,\boldsymbol{J}')\bigr| \le n^{-\frac12}\|\boldsymbol{J}-\boldsymbol{J}'\|_2.$$ Using [\[eq:L-lip\]](#eq:L-lip){reference-type="eqref" reference="eq:L-lip"}, we have $$\begin{aligned} L(\Xi,\boldsymbol{J})&\le L(\Xi,\boldsymbol{J}')+n^{-\frac12}\|\boldsymbol{J}-\boldsymbol{J}'\|_2 \\ &\le \max_{\Xi\in\mathcal{F}(m,\xi,\eta)}L(\Xi,\boldsymbol{J}')+n^{-\frac12}\|\boldsymbol{J}-\boldsymbol{J}'\|_2\\ &= T_{m,\xi,\eta}(\boldsymbol{J}') + n^{-\frac12}\|\boldsymbol{J}-\boldsymbol{J}'\|_2.\end{aligned}$$ Taking a maximum over $\Xi\in\mathcal{F}(m,\xi,\eta)$ and reversing the roles of $\boldsymbol{J}$ and $\boldsymbol{J}'$, we establish [\[eq:T-is-Lip\]](#eq:T-is-Lip){reference-type="eqref" reference="eq:T-is-Lip"}. ◻

#### Bounds on Moments of $|S(\gamma,m,\xi,\eta)|$

We establish a lower bound on the first moment of $|S(\gamma,m,\xi,\eta)|$.

**Proposition 25**. *For any $m\in\mathbb{N}$, $\gamma<1/\sqrt{m}$ and $0<\eta<\xi<1$, $$\mathbb{E}\bigl[|S(\gamma,m,\xi,\eta)|\bigr]\ge L\cdot \exp_2\bigl(n-m\gamma^2 n +o(n)\bigr),$$ for $L$ in [\[eq:L_term\]](#eq:L_term){reference-type="eqref" reference="eq:L_term"}. In particular, $\mathbb{E}\bigl[|S(\gamma,m,\xi,\eta)|\bigr]=\exp(\Theta(n))$ for $\gamma<1/\sqrt{m}$.*

*Proof of Proposition [Proposition 25](#prop:S_1st_Mom){reference-type="ref" reference="prop:S_1st_Mom"}.* We have $$\label{eq:1ST_M_Exp} |S(\gamma,m,\xi,\eta)| = \sum_{(\boldsymbol{\sigma}^{(i)}:i\le m)\in\mathcal{F}(m,\xi,\eta)}\mathbbm{1}\left\{H(\boldsymbol{\sigma}^{(i)})\ge \gamma\sqrt{2\ln 2},\forall i\right\}.$$ Now, fix any $(\boldsymbol{\sigma}^{(i)}:i\le m)\in\mathcal{F}(m,\xi,\eta)$ and let $Z_i=\sqrt{n}H(\boldsymbol{\sigma}^{(i)}),1\le i\le m$. We have $Z_i\sim {\bf \mathcal{N}}(0,1)$ for $1\le i\le m$. Furthermore, a simple calculation shows $$\mathbb{E}[Z_iZ_j] = \left(n^{-1}\left\langle\boldsymbol{\sigma}^{(i)},\boldsymbol{\sigma}^{(j)}\right\rangle\right)^p \in [(\xi-\eta)^p,\xi^p].$$ Let now $Z_i'\sim {\bf \mathcal{N}}(0,1)$ be i.i.d.
Using the fact $\xi-\eta>0$, we apply Slepian's Lemma, Lemma [Lemma 20](#lemma:Slep){reference-type="ref" reference="lemma:Slep"}, to the random vectors $(Z_1,\dots,Z_m)$ and $(Z_1',\dots,Z_m')$ to obtain $$\label{eq:1st--mom-p-bd} \mathbb{P}\bigl[H(\boldsymbol{\sigma}^{(i)})\ge \gamma\sqrt{2\ln 2},\forall i\bigr] \ge \mathbb{P}\bigl[Z_i'\ge \gamma\sqrt{2n \ln 2},\forall i\bigr]=(\mathbb{P}[Z_1'\ge \gamma\sqrt{2n \ln 2}])^m \ge \exp_2\bigl(-nm\gamma^2+o(n)\bigr),$$ where we used the well-known Gaussian tail bound $$\mathbb{P}[{\bf \mathcal{N}}(0,1)\ge x]\ge \frac{\exp(-x^2/2)}{\sqrt{2\pi}}\left(\frac1x-\frac{1}{x^3}\right)$$ with $x=\gamma\sqrt{2n\ln 2}$. We now combine [\[eq:1ST_M\_Exp\]](#eq:1ST_M_Exp){reference-type="eqref" reference="eq:1ST_M_Exp"}, [\[eq:1st\--mom-p-bd\]](#eq:1st--mom-p-bd){reference-type="eqref" reference="eq:1st--mom-p-bd"} and the fact $|\mathcal{F}(m,\xi,\eta)| = 2^n\cdot L$ per the discussion below [\[eq:L_term\]](#eq:L_term){reference-type="eqref" reference="eq:L_term"}, and establish Proposition [Proposition 25](#prop:S_1st_Mom){reference-type="ref" reference="prop:S_1st_Mom"} via linearity of expectation. ◻

We next establish the following proposition using the *second moment method*.

**Proposition 26**. *For any $m\in\mathbb{N}$, $0<\gamma<1/\sqrt{m}$ and $0<\eta<\xi<1$, there exists a $P^*\in \mathbb{N}$ and a function $\Delta_p\triangleq\Delta_p(m,\gamma,\xi)>0$, depending only on $p,m,\gamma,\xi$, with the property that $\Delta_p\to 0$ as $p\to\infty$ (for fixed $m,\gamma,\xi$), such that the following holds. Fix any $p\ge P^*$. Then, as $n\to\infty$, $$\mathbb{P}\bigl[\bigl|S(\gamma,m,\xi,\eta)\bigr|\ge 1\bigr]\ge \exp\left(-\frac{2mn\gamma^2 \Delta_p}{1+\Delta_p}+o(n)\right).$$*

*Proof of Proposition [Proposition 26](#prop:2nd-moment){reference-type="ref" reference="prop:2nd-moment"}.* Fix $m\in\mathbb{N}$, $\gamma<1/\sqrt{m}$, $0<\eta<\xi<1$, where $\eta$ is as small as needed (see the remark at the beginning of this subsection). For convenience, let $$\label{eq:COUNT-RV} M \triangleq |S(\gamma,m,\xi,\eta)| = \sum_{\Xi \in\mathcal{F}(m,\xi,\eta)} \mathbbm{1}\bigl\{H(\boldsymbol{\sigma}^{(i)})\ge \gamma\sqrt{2\ln 2}, 1\le i\le m\bigr\},$$ where $$\label{eq:XI_Term} \Xi = (\boldsymbol{\sigma}^{(i)}:1\le i\le m)\in\mathcal{F}(m,\xi,\eta).$$ Writing $\bar{\Xi}=(\bar{\boldsymbol{\sigma}}^{(i)}:1\le i\le m)\in\mathcal{F}(m,\xi,\eta)$ for a second such tuple, we have $$\label{eq:2nd_Mom_Exp} \mathbb{E}[M^2] = \sum_{\Xi,\bar{\Xi}\in\mathcal{F}(m,\xi,\eta)}\mathbb{P}\bigl[H(\boldsymbol{\sigma}^{(i)})\ge \gamma\sqrt{2\ln 2},H(\bar{\boldsymbol{\sigma}}^{(i)})\ge \gamma\sqrt{2\ln 2}, 1\le i\le m\bigr].$$ Since $\gamma<1/\sqrt{m}$, there exists an $\epsilon^*\in(0,\frac12)$ such that $$\label{eq:Eps-star} h_b(\epsilon^*)<1-m\gamma^2.$$ Our goal is to carefully upper bound [\[eq:2nd_Mom_Exp\]](#eq:2nd_Mom_Exp){reference-type="eqref" reference="eq:2nd_Mom_Exp"} using a crucial overcounting idea developed in [@gamarnik2021algorithmic].
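For illustration only (the numerical values below are chosen for concreteness and play no role in the argument), take $m=2$ and $\gamma=0.5<1/\sqrt{2}$: then $1-m\gamma^2=0.5$, while $h_b(0.1)\approx 0.469<0.5$, so $\epsilon^*=0.1$ is an admissible choice in [\[eq:Eps-star\]](#eq:Eps-star){reference-type="eqref" reference="eq:Eps-star"}.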
#### Overcounting

For $\epsilon^*$ in [\[eq:Eps-star\]](#eq:Eps-star){reference-type="eqref" reference="eq:Eps-star"}, let $$\label{eq:Sigma-ij} \Sigma_{ij}(\epsilon^*)\triangleq \sum_{\substack{\Xi,\bar{\Xi}\in\mathcal{F}(m,\xi,\eta) \\ n^{-1}d_H(\boldsymbol{\sigma}^{(i)},\bar{\boldsymbol{\sigma}}^{(j)})\in[0,\epsilon^*]\cup[1-\epsilon^*,1]}} \mathbb{P}\bigl[H(\boldsymbol{\sigma}^{(i)})\ge \gamma\sqrt{2\ln 2},H(\bar{\boldsymbol{\sigma}}^{(i)})\ge \gamma\sqrt{2\ln 2}, 1\le i\le m\bigr]$$ and $$\label{eq:Sigma_D} \Sigma_d \triangleq \sum_{\substack{\Xi,\bar{\Xi}\in\mathcal{F}(m,\xi,\eta) \\ n^{-1}d_H(\boldsymbol{\sigma}^{(i)},\bar{\boldsymbol{\sigma}}^{(j)})\in[\epsilon^*,1-\epsilon^*],\forall i,j}} \mathbb{P}\bigl[H(\boldsymbol{\sigma}^{(i)})\ge \gamma\sqrt{2\ln 2},H(\bar{\boldsymbol{\sigma}}^{(i)})\ge \gamma\sqrt{2\ln 2}, 1\le i\le m\bigr].$$ Observe the bound following from overcounting $m$-tuples: $$\label{eq:overcounting} \mathbb{E}[M^2] \le \sum_{1\le i,j\le m}\Sigma_{ij}(\epsilon^*) + \Sigma_d.$$

#### The $\Sigma_{ij}(\epsilon^*)$ Term

We now show $\Sigma_{ij}(\epsilon^*)/\mathbb{E}[M]^2 = \exp(-\Theta(n))$. To that end, we establish the following.

**Lemma 27**. *Suppose $m\in\mathbb{N}$ and $0<\eta<\xi<1$, where $\eta$ is small enough. Then, for $\Xi$ in [\[eq:XI_Term\]](#eq:XI_Term){reference-type="eqref" reference="eq:XI_Term"}, $$\sup_{\Xi\in\mathcal{F}(m,\xi,\eta)} \mathbb{P}\bigl[H(\boldsymbol{\sigma}^{(i)})\ge \gamma\sqrt{2\ln 2},1\le i\le m\bigr]\le \exp_2\left(-n\frac{m\gamma^2}{1+2mp\xi^p}+O(\log_2 n)\right).$$*

*Proof of Lemma [Lemma 27](#lemma:prob_term){reference-type="ref" reference="lemma:prob_term"}.* Lemma [Lemma 27](#lemma:prob_term){reference-type="ref" reference="lemma:prob_term"} follows by combining [@gamarnik2023shattering Lemma 3.13(d)] and [@gamarnik2023shattering Equations (93),(94),(95)] through a similar reasoning leading to [@gamarnik2023shattering Equation (97)]. ◻

Define $$S_{ij}(\epsilon^*) \triangleq \bigl\{(\Xi,\bar{\Xi})\in\mathcal{F}(m,\xi,\eta)\times \mathcal{F}(m,\xi,\eta):n^{-1}d_H(\boldsymbol{\sigma}^{(i)},\bar{\boldsymbol{\sigma}}^{(j)})\in[0,\epsilon^*]\cup[1-\epsilon^*,1]\bigr\}.$$ Using Lemma [Lemma 17](#lemma:COUN){reference-type="ref" reference="lemma:COUN"} and the remark following [\[eq:L_term\]](#eq:L_term){reference-type="eqref" reference="eq:L_term"}, we obtain $$\label{eq:S_ij_eps-star} |S_{ij}(\epsilon^*)| \le \bigl(2^n \cdot L\bigr)\cdot \left(\sum_{k\in [0,n\epsilon^*]\cup [n(1-\epsilon^*),n]}\binom{n}{k}\cdot L\right) \le 2^{n+1+nh_b(\epsilon^*)}L^2.$$ Furthermore, for any $(\Xi,\bar{\Xi})\in\mathcal{F}(m,\xi,\eta)\times \mathcal{F}(m,\xi,\eta)$, we have $$\begin{aligned} \mathbb{P}\bigl[H(\boldsymbol{\sigma}^{(i)})\ge \gamma\sqrt{2\ln 2},H(\bar{\boldsymbol{\sigma}}^{(i)})\ge \gamma\sqrt{2\ln 2}, 1\le i\le m\bigr]&\le \mathbb{P}\bigl[H(\boldsymbol{\sigma}^{(i)})\ge \gamma\sqrt{2\ln 2}, 1\le i\le m\bigr]\nonumber \\ &\le \exp_2\left(-n\frac{m\gamma^2}{1+2mp\xi^p}+O(\log_2 n)\right)\label{eq:Sigma-ij-eps-star-prob},\end{aligned}$$ where we used Lemma [Lemma 27](#lemma:prob_term){reference-type="ref" reference="lemma:prob_term"} to deduce [\[eq:Sigma-ij-eps-star-prob\]](#eq:Sigma-ij-eps-star-prob){reference-type="eqref" reference="eq:Sigma-ij-eps-star-prob"}.
Combining [\[eq:S_ij_eps-star\]](#eq:S_ij_eps-star){reference-type="eqref" reference="eq:S_ij_eps-star"} and [\[eq:Sigma-ij-eps-star-prob\]](#eq:Sigma-ij-eps-star-prob){reference-type="eqref" reference="eq:Sigma-ij-eps-star-prob"}, we upper bound $\Sigma_{ij}(\epsilon^*)$ in [\[eq:Sigma-ij\]](#eq:Sigma-ij){reference-type="eqref" reference="eq:Sigma-ij"}: $$\label{eq:UPPER__BD_S_IJ} \Sigma_{ij}(\epsilon^*) \le L^2 \exp_2\left(n+1+nh_b(\epsilon^*) -n\frac{m\gamma^2}{1+2mp\xi^p}+O(\log_2 n)\right).$$ Furthermore, Proposition [Proposition 25](#prop:S_1st_Mom){reference-type="ref" reference="prop:S_1st_Mom"} yields $$\label{eq:1ST_MOM_Sq} \mathbb{E}[M]^2 \ge L^2 \exp_2\bigl(2n-2m\gamma^2 n+o(n)\bigr).$$ Now, recall $\epsilon^*$ from [\[eq:Eps-star\]](#eq:Eps-star){reference-type="eqref" reference="eq:Eps-star"} and choose $P_1^*$ such that $$\label{eq:P_1-STERKA} 1-h_b(\epsilon^*) - m\gamma^2 - \frac{2m^2\gamma^2 p\xi^p}{1+2mp\xi^p}>0,\quad\text{for all}\quad p\ge P_1^*.$$ Such a $P_1^*$ indeed exists since $1-h_b(\epsilon^*)-m\gamma^2>0$ and $2m^2\gamma^2 p\xi^p/(1+2mp\xi^p)\to 0$ as $p\to\infty$ for fixed $m,\gamma$ and $\xi\in(0,1)$. Equipped with these, we now combine [\[eq:UPPER\_\_BD_S\_IJ\]](#eq:UPPER__BD_S_IJ){reference-type="eqref" reference="eq:UPPER__BD_S_IJ"} and [\[eq:1ST_MOM_Sq\]](#eq:1ST_MOM_Sq){reference-type="eqref" reference="eq:1ST_MOM_Sq"} and obtain $$\begin{aligned} \frac{\Sigma_{ij}(\epsilon^*)}{\mathbb{E}[M]^2} &\le \exp_2\left(-n+nh_b(\epsilon^*) +2m\gamma^2 n - \frac{m\gamma^2 n}{1+2mp\xi^p}+o(n)\right) \nonumber \\ &=\exp_2\left(-n\left(1-h_b(\epsilon^*)-m\gamma^2-\frac{2m^2\gamma^2 p\xi^p}{1+2mp\xi^p}\right)+o(n)\right)\nonumber \\ &=\exp_2\bigl(-\Theta(n)\bigr)\label{eq:S-ij-eps-over-exp-small}\end{aligned}$$ for any $p\ge P_1^*$ via [\[eq:P_1-STERKA\]](#eq:P_1-STERKA){reference-type="eqref" reference="eq:P_1-STERKA"}. Since $m=O(1)$ and the indices $1\le i,j\le m$ are arbitrary, we thus obtain $$\label{eq:WEAK_TERM} \frac{1}{\mathbb{E}[M]^2}\sum_{1\le i,j\le m}\Sigma_{ij}(\epsilon^*) \le 2^{-\Theta(n)}, \quad \text{for}\quad p\ge P_1^*.$$

#### The $\Sigma_d$ Term

We now study the $\Sigma_d$ term in [\[eq:Sigma_D\]](#eq:Sigma_D){reference-type="eqref" reference="eq:Sigma_D"}. Let $$\label{eq:S_d_eps-star} S_d(\epsilon^*) \triangleq \bigl\{(\Xi,\bar{\Xi})\in\mathcal{F}(m,\xi,\eta)\times \mathcal{F}(m,\xi,\eta):n^{-1}d_H(\boldsymbol{\sigma}^{(i)},\bar{\boldsymbol{\sigma}}^{(j)})\in[\epsilon^*,1-\epsilon^*],1\le i,j\le m\bigr\}.$$ We crudely upper bound $|S_d(\epsilon^*)|$ by $$\label{eq:COUNT-Sigma_D} \bigl|S_d(\epsilon^*)\bigr|\le |\mathcal{F}(m,\xi,\eta)|^2.$$ We next establish the key probability estimate.

**Lemma 28**. *For $m\in\mathbb{N}$, $0<\eta<\xi<1$, and $\epsilon^*$ in [\[eq:Eps-star\]](#eq:Eps-star){reference-type="eqref" reference="eq:Eps-star"}, there is a $P_2^*\in \mathbb{N}$ such that the following holds. Fix any $p\ge P_2^*$. Then, $$\sup_{(\Xi,\bar{\Xi})\in S_d(\epsilon^*)}\mathbb{P}\left[\min_{1\le i\le m}H(\boldsymbol{\sigma}^{(i)})\ge \gamma\sqrt{2\ln 2},\min_{1\le i\le m}H(\bar{\boldsymbol{\sigma}}^{(i)})\ge \gamma\sqrt{2\ln 2}\right]\le \exp_2\left(-\frac{2m\gamma^2 n}{1+\Delta_p}+O(\log_2 n)\right),$$ where $$\Delta_p = m\sqrt{2\xi^{2p}+2(1-2\epsilon^*)^{2p}}.$$*

*Proof of Lemma [Lemma 28](#lemma:key-prob){reference-type="ref" reference="lemma:key-prob"}.* Fix $(\Xi,\bar{\Xi})\in S_d(\epsilon^*)$ with $\Xi=(\boldsymbol{\sigma}^{(i)}:i\le m)$ and $\bar{\Xi}=(\bar{\boldsymbol{\sigma}}^{(i)}:i\le m)$.
From [\[eq:S_d\_eps-star\]](#eq:S_d_eps-star){reference-type="eqref" reference="eq:S_d_eps-star"}, $$\label{eq:Overlap_intra} \frac1n\left\langle\boldsymbol{\sigma}^{(i)},\bar{\boldsymbol{\sigma}}^{(j)}\right\rangle \in[-(1-2\epsilon^*),1-2\epsilon^*],\quad 1\le i,j\le m.$$ Moreover, as $\Xi,\bar{\Xi}\in\mathcal{F}(m,\xi,\eta)$, we also have for $1\le i<j\le m$ $$\label{eq:Overlap_inter} \frac1n\left\langle\boldsymbol{\sigma}^{(i)},\boldsymbol{\sigma}^{(j)}\right\rangle\in[\xi-\eta,\xi]\quad\text{and}\quad \frac1n\left\langle\bar{\boldsymbol{\sigma}}^{(i)},\bar{\boldsymbol{\sigma}}^{(j)}\right\rangle\in[\xi-\eta,\xi].$$ Set $Z_i = \sqrt{n}H(\boldsymbol{\sigma}^{(i)}),1\le i\le m$ and $\bar{Z}_i=\sqrt{n}H(\bar{\boldsymbol{\sigma}}^{(i)}),1\le i\le m$, so that $Z_i,\bar{Z}_i\sim {\bf \mathcal{N}}(0,1)$. We proceed by applying Theorem [Theorem 19](#thm:multiv-tail){reference-type="ref" reference="thm:multiv-tail"} to the random vector $\boldsymbol{V}\in\mathbb{R}^{2m}$ with threshold $\boldsymbol{t}\in\mathbb{R}^{2m}$, where $$\label{eq:V_and_t} \boldsymbol{V}=(Z_1,\dots,Z_m,\bar{Z}_1,\dots,\bar{Z}_m)\in\mathbb{R}^{2m}\quad\text{and}\quad\boldsymbol{t} =(\gamma\sqrt{2n\ln 2})\mathbf{1}\in\mathbb{R}^{2m}.$$ To that end, we study the covariance matrix $A(\Xi,\bar{\Xi})\in\mathbb{R}^{2m\times 2m}$ of $\boldsymbol{V}$, admitting the following form: $$\label{eq:A-Xi-bar-Xi} A(\Xi,\bar{\Xi}) = \displaystyle\begin{pmatrix} B(\Xi) & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep}& \hat{E} \\ \hline \hat{E}^T& \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep}& B(\bar{\Xi}) \end{pmatrix}\in\mathbb{R}^{2m\times 2m}.$$ The matrices $B(\Xi),B(\bar{\Xi}),\hat{E}\in\mathbb{R}^{m\times m}$ satisfy the following properties.

- For $1\le i<j\le m$, $$\bigl(B(\Xi)\bigr)_{ij} = \bigl(B(\Xi)\bigr)_{ji} = \left(n^{-1}\left\langle\boldsymbol{\sigma}^{(i)},\boldsymbol{\sigma}^{(j)}\right\rangle\right)^p \in [(\xi-\eta)^p,\xi^p]$$ and $$\bigl(B(\bar{\Xi})\bigr)_{ij} = \bigl(B(\bar{\Xi})\bigr)_{ji} = \left(n^{-1}\left\langle\bar{\boldsymbol{\sigma}}^{(i)},\bar{\boldsymbol{\sigma}}^{(j)}\right\rangle\right)^p \in [(\xi-\eta)^p,\xi^p].$$ Moreover, for $1\le i\le m$, $$\bigl(B(\Xi)\bigr)_{ii} = \bigl(B(\bar{\Xi})\bigr)_{ii} = 1.$$

- For $1\le i,j\le m$, $$\hat{E}_{ij} =\left(n^{-1}\left\langle\boldsymbol{\sigma}^{(i)},\bar{\boldsymbol{\sigma}}^{(j)}\right\rangle\right)^p \Rightarrow |\hat{E}_{ij}| \le (1-2\epsilon^*)^p.$$

Equipped with these, we write $A(\Xi,\bar{\Xi})$ in [\[eq:A-Xi-bar-Xi\]](#eq:A-Xi-bar-Xi){reference-type="eqref" reference="eq:A-Xi-bar-Xi"} as $$\label{eq:Id-decompose} A(\Xi,\bar{\Xi}) = I_{2m} + E\in\mathbb{R}^{2m\times 2m},$$ where $$\label{eq:E-Frob_norm} \|E\|_F \le m\sqrt{2\xi^{2p}+2(1-2\epsilon^*)^{2p}} \triangleq \Delta_p.$$ Note that as $m=O(1)$, $\xi\in(0,1)$ and $\epsilon^*\in(0,\frac12)$, $\Delta_p\to 0$ as $p\to\infty$. Before applying Theorem [Theorem 19](#thm:multiv-tail){reference-type="ref" reference="thm:multiv-tail"}, we must verify $\bigl(A(\Xi,\bar{\Xi})^{-1}\boldsymbol{t}\bigr)_i>0$ for $1\le i\le 2m$.
We have the following chain of inequalities: $$\begin{aligned} \bigl\|A(\Xi,\bar{\Xi})^{-1}\boldsymbol{t} -\boldsymbol{t} \bigr\|_2 &\le \|\boldsymbol{t}\|_2\cdot \bigl\|\bigl(I_{2m}+E\bigr)^{-1}-I_{2m}\bigr\|_2 \label{eq:CSE}\\ &\le \gamma\sqrt{4mn\ln 2}\cdot \left\|\sum_{k\ge 1}(-E)^k\right\|_F\label{eq:matrix-Taylor} \\ &\le \gamma\sqrt{4mn\ln 2}\left(\sum_{k\ge 1}\|E\|_F^k\right)\label{eq:T_ineq} \\ &\le \gamma\sqrt{4mn\ln 2}\frac{\Delta_p}{1-\Delta_p}\label{eq:Geo_Series}. \end{aligned}$$ Here, [\[eq:CSE\]](#eq:CSE){reference-type="eqref" reference="eq:CSE"} follows from the definition of the operator norm; [\[eq:matrix-Taylor\]](#eq:matrix-Taylor){reference-type="eqref" reference="eq:matrix-Taylor"} follows from the fact $\boldsymbol{t}=(\gamma\sqrt{2n\ln 2})\mathbf{1}\in\mathbb{R}^{2m}$, the Neumann series $(I_{2m}+E)^{-1}=I_{2m}-E+E^2-\cdots$ valid for $\|E\|_2<1$, and the bound $\|\cdot\|_2\le\|\cdot\|_F$; [\[eq:T_ineq\]](#eq:T_ineq){reference-type="eqref" reference="eq:T_ineq"} follows from the triangle inequality; and [\[eq:Geo_Series\]](#eq:Geo_Series){reference-type="eqref" reference="eq:Geo_Series"} follows from [\[eq:E-Frob_norm\]](#eq:E-Frob_norm){reference-type="eqref" reference="eq:E-Frob_norm"} and the geometric series summation. Using [\[eq:Geo_Series\]](#eq:Geo_Series){reference-type="eqref" reference="eq:Geo_Series"}, we have $$\bigl|\bigl(A(\Xi,\bar{\Xi})^{-1}\boldsymbol{t}\bigr)_i - \gamma\sqrt{2n \ln 2}\bigr| \le \bigl\|A(\Xi,\bar{\Xi})^{-1}\boldsymbol{t} - \boldsymbol{t}\bigr\|_2 \le \gamma\sqrt{4mn\ln 2}\frac{\Delta_p}{1-\Delta_p}.$$ In particular, there is a $P_2^*\in\mathbb{N}$ such that for $p\ge P_2^*$, we have $$\bigl(A(\Xi,\bar{\Xi})^{-1}\boldsymbol{t}\bigr)_i\ge \gamma\sqrt{2n\ln 2}\left(1-\frac{\Delta_p\sqrt{2m}}{1-\Delta_p}\right)>0,\quad 1\le i\le 2m,$$ where we used the fact $\Delta_p\to 0$ as $p\to \infty$. In particular, Theorem [Theorem 19](#thm:multiv-tail){reference-type="ref" reference="thm:multiv-tail"} is applicable, provided $p\ge P_2^*$. Below, the value of the constant $P_2^*\in\mathbb{N}$ changes from line to line.

#### Controlling the Spectrum of $A(\Xi,\bar{\Xi})$

Using [\[eq:Id-decompose\]](#eq:Id-decompose){reference-type="eqref" reference="eq:Id-decompose"}, we control the spectrum of $A(\Xi,\bar{\Xi})$. Let $\mu_1\ge\cdots\ge \mu_{2m}$ be the eigenvalues of $A(\Xi,\bar{\Xi})=I_{2m}+E$ per [\[eq:Id-decompose\]](#eq:Id-decompose){reference-type="eqref" reference="eq:Id-decompose"}. Since the eigenvalues of $I_{2m}$ are all unity, we apply Theorem [Theorem 22](#thm:wiehoff){reference-type="ref" reference="thm:wiehoff"} and use [\[eq:E-Frob_norm\]](#eq:E-Frob_norm){reference-type="eqref" reference="eq:E-Frob_norm"} to obtain $$\label{eq:Spec_of_A} \sum_{1\le i\le 2m}\bigl(\mu_i-1\bigr)^2 \le \|E\|_F^2 \Rightarrow \mu_i\in\bigl[1-\Delta_p,1+\Delta_p\bigr],\quad\text{for}\quad 1\le i\le 2m.$$ Note that for $p$ large enough, $\mu_{2m}>0$, so $A(\Xi,\bar{\Xi})$ is positive definite. Using [\[eq:Spec_of_A\]](#eq:Spec_of_A){reference-type="eqref" reference="eq:Spec_of_A"} we obtain $$\label{eq:A-det-bd} |A(\Xi,\bar{\Xi})| = \prod_{1\le i\le 2m}\mu_i \ge (1-\Delta_p)^{2m}.$$ Next, we control certain quantities before applying Theorem [Theorem 19](#thm:multiv-tail){reference-type="ref" reference="thm:multiv-tail"}.
Diagonalize $A(\Xi,\bar{\Xi})^{-1}$ as $$A(\Xi,\bar{\Xi})^{-1} = Q(\Xi,\bar{\Xi})^T \Lambda(\Xi,\bar{\Xi}) Q(\Xi,\bar{\Xi}),$$ where $$\label{eq:Q_normal} Q(\Xi,\bar{\Xi})\in\mathbb{R}^{2m\times 2m}\quad\text{with}\quad Q(\Xi,\bar{\Xi})^T Q(\Xi,\bar{\Xi}) = Q(\Xi,\bar{\Xi}) Q(\Xi,\bar{\Xi})^T =I_{2m}$$ and $$\Lambda(\Xi,\bar{\Xi}) = {\rm diag}\bigl(\mu_i^{-1}:1\le i\le 2m\bigr)\in \mathbb{R}^{2m\times 2m}.$$ Set $\boldsymbol{v}_{\rm aux}=Q(\Xi,\bar{\Xi})\mathbf{1}\in\mathbb{R}^{2m}$. Using [\[eq:Q_normal\]](#eq:Q_normal){reference-type="eqref" reference="eq:Q_normal"}, we obtain $\|\boldsymbol{v}_{\rm aux}\|_2^2=2m$. With this, we have $$\begin{aligned} \boldsymbol{t}^T A(\Xi,\bar{\Xi})^{-1}\boldsymbol{t} &=(\gamma^2 \cdot 2n\ln 2)(\boldsymbol{v}_{\rm aux})^T \Lambda(\Xi,\bar{\Xi})\boldsymbol{v}_{\rm aux} =(\gamma^2 \cdot 2n\ln 2) \sum_{1\le i\le 2m}\mu_i^{-1}\bigl(\boldsymbol{v}_{\rm aux}\bigr)_i^2 \ge \frac{4mn\gamma^2 \ln 2}{1+\Delta_p}, \end{aligned}$$ where we used the fact $\mu_i^{-1}\ge (1+\Delta_p)^{-1}$ per [\[eq:Spec_of_A\]](#eq:Spec_of_A){reference-type="eqref" reference="eq:Spec_of_A"} and $\|\boldsymbol{v}_{\rm aux}\|_2^2 = 2m$ in the last step. So, $$\label{eq:EXP_TERM} \exp\left(-\frac{\boldsymbol{t}^TA(\Xi,\bar{\Xi})^{-1}\boldsymbol{t}}{2}\right) \le \exp_2\left(-\frac{2mn\gamma^2}{1+\Delta_p}\right).$$ Lastly, fix any $1\le i\le 2m$ and let $e_i$ be the $i{\rm th}$ unit vector in $\mathbb{R}^{2m}$. We have, $$\begin{aligned} \bigl|\left\langle e_i,A(\Xi,\bar{\Xi})^{-1}\boldsymbol{t}\right\rangle\bigr|&\le \|e_i\|_2 \cdot \|A(\Xi,\bar{\Xi})^{-1}\|_2 \cdot \|\boldsymbol{t}\|_2\label{eq:CSE-2} \\ &\le \frac{2\gamma\sqrt{mn \ln 2}}{1-\Delta_p}\label{eq:combine_norm_spec}, \end{aligned}$$ where [\[eq:CSE-2\]](#eq:CSE-2){reference-type="eqref" reference="eq:CSE-2"} follows by applying the Cauchy-Schwarz inequality together with the operator norm bound, and [\[eq:combine_norm_spec\]](#eq:combine_norm_spec){reference-type="eqref" reference="eq:combine_norm_spec"} follows by combining the facts $\|e_i\|_2=1$ and $\|\boldsymbol{t}\|_2 = 2\gamma\sqrt{mn\ln 2}$ with [\[eq:Spec_of_A\]](#eq:Spec_of_A){reference-type="eqref" reference="eq:Spec_of_A"}.
Using the fact that $A(\Xi,\bar{\Xi})^{-1}\boldsymbol{t}$ is entrywise positive for large $p$ as established earlier, we obtain $$\label{eq:pi_of_proj} \prod_{1\le i\le 2m} \left\langle e_i,A(\Xi,\bar{\Xi})^{-1}\boldsymbol{t}\right\rangle \le n^m\cdot \left(\frac{2\gamma\sqrt{m\ln 2}}{1-\Delta_p}\right)^{2m}.$$

#### Applying Theorem [Theorem 19](#thm:multiv-tail){reference-type="ref" reference="thm:multiv-tail"} {#applying-theorem-thmmultiv-tail}

We apply Theorem [Theorem 19](#thm:multiv-tail){reference-type="ref" reference="thm:multiv-tail"} to $\boldsymbol{V}\in\mathbb{R}^{2m}$ in [\[eq:V_and_t\]](#eq:V_and_t){reference-type="eqref" reference="eq:V_and_t"} with $\boldsymbol{t}\in\mathbb{R}^{2m}$ to obtain $$\begin{aligned} &\mathbb{P}\left[\min_{1\le i\le m}H(\boldsymbol{\sigma}^{(i)})\ge \gamma\sqrt{2\ln 2},\min_{1\le i\le m}H(\bar{\boldsymbol{\sigma}}^{(i)})\ge \gamma\sqrt{2\ln 2}\right] \nonumber\\ &\le (2\pi)^{-m} |A(\Xi,\bar{\Xi})|^{-1/2} \exp\left(-\frac{\boldsymbol{t}^TA(\Xi,\bar{\Xi})^{-1}\boldsymbol{t}}{2}\right) \prod_{1\le i\le 2m} \left\langle e_i,A(\Xi,\bar{\Xi})^{-1}\boldsymbol{t}\right\rangle \nonumber \\ &=\exp_2\left(-\frac{2mn\gamma^2}{1+\Delta_p} +O(\log_2 n)\right)\label{eq:prob_combine_everything} \end{aligned}$$ where [\[eq:prob_combine_everything\]](#eq:prob_combine_everything){reference-type="eqref" reference="eq:prob_combine_everything"} follows by combining [\[eq:A-det-bd\]](#eq:A-det-bd){reference-type="eqref" reference="eq:A-det-bd"}, [\[eq:EXP_TERM\]](#eq:EXP_TERM){reference-type="eqref" reference="eq:EXP_TERM"} and [\[eq:pi_of_proj\]](#eq:pi_of_proj){reference-type="eqref" reference="eq:pi_of_proj"} and recalling that as $m,\Delta_p=O(1)$, we have $$(1-\Delta_p)^{-m}, \left(\frac{2\gamma\sqrt{m\ln 2}}{1-\Delta_p}\right)^{2m} =O(1)\quad\text{and}\quad \log_2(n^m) = O(m\log_2 n)=O(\log_2 n).$$ Observe that both the $O(\log_2 n)$ term and $\Delta_p$ are independent of $(\Xi,\bar{\Xi})$. Taking a supremum over all such pairs, we deduce $$\sup_{(\Xi,\bar{\Xi})\in S_d(\epsilon^*)}\mathbb{P}\left[\min_{1\le i\le m}H(\boldsymbol{\sigma}^{(i)})\ge \gamma\sqrt{2\ln 2},\min_{1\le i\le m}H(\bar{\boldsymbol{\sigma}}^{(i)})\ge \gamma\sqrt{2\ln 2}\right]\le \exp_2\left(-\frac{2m\gamma^2 n}{1+\Delta_p}+O(\log_2 n)\right)$$ and establish Lemma [Lemma 28](#lemma:key-prob){reference-type="ref" reference="lemma:key-prob"}.
◻

We combine Lemma [Lemma 28](#lemma:key-prob){reference-type="ref" reference="lemma:key-prob"} with the counting estimate [\[eq:COUNT-Sigma_D\]](#eq:COUNT-Sigma_D){reference-type="eqref" reference="eq:COUNT-Sigma_D"} to obtain an upper bound on $\Sigma_d$ in [\[eq:Sigma_D\]](#eq:Sigma_D){reference-type="eqref" reference="eq:Sigma_D"}: $$\label{eq:Sigma_d_up} \Sigma_d \le |\mathcal{F}(m,\xi,\eta)|^2 \exp_2\left(-\frac{2mn\gamma^2}{1+\Delta_p}+O(\log_2 n)\right).$$ Using [\[eq:Sigma_d\_up\]](#eq:Sigma_d_up){reference-type="eqref" reference="eq:Sigma_d_up"} and [\[eq:1ST_MOM_Sq\]](#eq:1ST_MOM_Sq){reference-type="eqref" reference="eq:1ST_MOM_Sq"} and recalling $|\mathcal{F}(m,\xi,\eta)|=2^n\cdot L$ per [\[eq:L_term\]](#eq:L_term){reference-type="eqref" reference="eq:L_term"}, we obtain that for $p\ge P_2^*$, $$\label{eq:STRONG_TERM} \frac{\Sigma_d}{\mathbb{E}[M]^2}\le \exp_2\left(\frac{2mn\gamma^2 \Delta_p}{1+\Delta_p}+o(n)\right).$$ We now apply the Paley-Zygmund inequality, Lemma [Lemma 18](#lemma:PZ){reference-type="ref" reference="lemma:PZ"}, to the random variable $M$ with $\theta=0$: for $p$ large, $$\mathbb{P}\bigl[M\ge 1\bigr] \ge \frac{\mathbb{E}[M]^2}{\mathbb{E}[M^2]}\ge \frac{1}{\exp\bigl(-\Theta(n)\bigr) + \exp_2\left(\frac{2mn\gamma^2 \Delta_p}{1+\Delta_p}+o(n)\right)}\ge \exp_2\left(-\frac{2mn\gamma^2 \Delta_p}{1+\Delta_p}+o(n)\right),$$ where we combined [\[eq:overcounting\]](#eq:overcounting){reference-type="eqref" reference="eq:overcounting"}, [\[eq:WEAK_TERM\]](#eq:WEAK_TERM){reference-type="eqref" reference="eq:WEAK_TERM"}, and [\[eq:STRONG_TERM\]](#eq:STRONG_TERM){reference-type="eqref" reference="eq:STRONG_TERM"} in the penultimate step. Lastly, note that $\Delta_p$ is defined in [\[eq:E-Frob_norm\]](#eq:E-Frob_norm){reference-type="eqref" reference="eq:E-Frob_norm"}; it depends only on $p,m,\xi$ and $\epsilon^*$. As $\epsilon^*$ per [\[eq:Eps-star\]](#eq:Eps-star){reference-type="eqref" reference="eq:Eps-star"} is a function of $m,\gamma$ only, we find that $\Delta_p$ is indeed a function of $p,m,\gamma$, and $\xi$ only. With this, we finish the proof of Proposition [Proposition 26](#prop:2nd-moment){reference-type="ref" reference="prop:2nd-moment"}. ◻

#### Putting Everything Together

Equipped with Lemma [Lemma 24](#lemma:T-concen){reference-type="ref" reference="lemma:T-concen"} and Proposition [Proposition 26](#prop:2nd-moment){reference-type="ref" reference="prop:2nd-moment"}, we now show how to establish Theorem [Theorem 6](#thm:m-ogp-tight){reference-type="ref" reference="thm:m-ogp-tight"}. Fix a $\gamma'$ such that $\gamma<\gamma'<1/\sqrt{m}$. Using Proposition [Proposition 26](#prop:2nd-moment){reference-type="ref" reference="prop:2nd-moment"} with $\gamma'$, we obtain that there is a $c\in(0,1)$ and a $P^*\in\mathbb{N}$ such that for any fixed $p\ge P^*$, $$\label{eq:2nd-mom-helps} \mathbb{P}\bigl[\bigl|S(\gamma',m,\xi,\eta)\bigr|\ge 1\bigr]\ge \exp\bigl(-nc^p\bigr),$$ as $n\to\infty$. Observe that for $T_{m,\xi,\eta}$ in [\[eq:auxil-rv-T\]](#eq:auxil-rv-T){reference-type="eqref" reference="eq:auxil-rv-T"}, $$\label{eq:crucial} \bigl\{\bigl|S(\gamma',m,\xi,\eta)\bigr|\ge 1\bigr\}= \bigl\{T_{m,\xi,\eta}\ge \gamma'\sqrt{2\ln 2}\bigr\}.$$ Next, note that for any fixed $\epsilon>0$, there is a $P_\epsilon\in \mathbb{N}$ such that for every $p\ge P_\epsilon$ $$\frac{\epsilon^2}{2}>\frac{2m(\gamma')^2 \Delta_p}{1+\Delta_p}$$ since $\Delta_p\to 0$ as $p\to\infty$. In the remainder, fix any $p\ge \max\{P_\epsilon,P^*\}$.
We then have $$\exp\left(-\frac{2mn(\gamma')^2 \Delta_p}{1+\Delta_p}+o(n)\right)\ge \exp\left(-\frac{n\epsilon^2}{2}\right)$$ for all large enough $n$. Combining Lemma [Lemma 24](#lemma:T-concen){reference-type="ref" reference="lemma:T-concen"}, Proposition [Proposition 26](#prop:2nd-moment){reference-type="ref" reference="prop:2nd-moment"}, and the observation [\[eq:crucial\]](#eq:crucial){reference-type="eqref" reference="eq:crucial"}, we obtain $$\label{eq:CHAIN} \mathbb{P}\bigl[T_{m,\xi,\eta}\ge \gamma'\sqrt{2\ln 2}\bigr] \ge \exp\left(-\frac{2mn(\gamma')^2 \Delta_p}{1+\Delta_p}+o(n)\right)\ge \exp\left(-\frac{n\epsilon^2}{2}\right) \ge \mathbb{P}\bigl[T_{m,\xi,\eta}\ge \mathbb{E}[T_{m,\xi,\eta}]+\epsilon\bigr].$$ In particular, $$\label{eq:E-LB} \mathbb{E}[T_{m,\xi,\eta}]\ge \gamma'\sqrt{2\ln 2}-\epsilon.$$ Lastly, we combine the lower-tail analogue of Lemma [Lemma 24](#lemma:T-concen){reference-type="ref" reference="lemma:T-concen"} (with $t=\epsilon$; the same Gaussian concentration inequality for Lipschitz functions also yields $\mathbb{P}\bigl[T_{m,\xi,\eta}\le \mathbb{E}[T_{m,\xi,\eta}]-t\bigr]\le \exp(-t^2n/2)$) and [\[eq:E-LB\]](#eq:E-LB){reference-type="eqref" reference="eq:E-LB"} to obtain that w.p. at least $1-\exp(-\Theta(n))$, $$T_{m,\xi,\eta}\ge \gamma'\sqrt{2\ln 2}-2\epsilon.$$ Provided $\epsilon<(\gamma'-\gamma)\sqrt{\ln 2/2}$, we obtain that $$T_{m,\xi,\eta}\ge \gamma\sqrt{2\ln 2}$$ w.p. at least $1-\exp(-\Theta(n))$. This, together with the analogue of [\[eq:crucial\]](#eq:crucial){reference-type="eqref" reference="eq:crucial"} with $\gamma$ in place of $\gamma'$, establishes Theorem [Theorem 6](#thm:m-ogp-tight){reference-type="ref" reference="thm:m-ogp-tight"}.

## Proof of Theorem [Theorem 9](#thm:m-ogp-sharp-PT){reference-type="ref" reference="thm:m-ogp-sharp-PT"} {#pf:PT}

Fix $m\in\mathbb{N}$. Using Theorem [\[thm:m-ogp\]](#thm:m-ogp){reference-type="ref" reference="thm:m-ogp"}, we obtain that any $\gamma>1/\sqrt{m}$ is $m$-inadmissible in the sense of Definition [Definition 7](#def:adm-exp){reference-type="ref" reference="def:adm-exp"}. In particular, $$\inf\{\gamma>0:\gamma\text{ is $m$-inadmissible}\} = 1/\sqrt{m}.$$ Similarly, Theorem [Theorem 6](#thm:m-ogp-tight){reference-type="ref" reference="thm:m-ogp-tight"} shows that any $\gamma<1/\sqrt{m}$ is $m$-admissible. Taken together, we find $$\sup\{\gamma>0:\gamma\text{ is $m$-admissible}\} = 1/\sqrt{m}.$$ Recalling Definition [Definition 8](#def:sharp-PT){reference-type="ref" reference="def:sharp-PT"}, we establish Theorem [Theorem 9](#thm:m-ogp-sharp-PT){reference-type="ref" reference="thm:m-ogp-sharp-PT"} by combining the above two displays.

## Proof of Theorem [Theorem 11](#thm:m-ogp-k-sat){reference-type="ref" reference="thm:m-ogp-k-sat"} {#sec:pf-ogp-k-sat}

Our proof is based on the *first moment method*. Fix $m\in\mathbb{N}$, $0<\eta<\beta<\frac12$ and let $$\label{eq:S-beta} \widehat{\mathcal{F}}(m,\beta,\eta)=\bigl\{(\boldsymbol{\sigma}^{(1)},\dots,\boldsymbol{\sigma}^{(m)}):n^{-1}d_H(\boldsymbol{\sigma}^{(t)},\boldsymbol{\sigma}^{(\ell)})\in[\beta-\eta,\beta],1\le t<\ell\le m\bigr\},$$ where $\boldsymbol{\sigma}^{(t)}\in\{0,1\}^n$. Fix a $\gamma>1/m$. Define $$\label{eq:rv-N} N = \bigl|\mathcal{S}(\gamma,m,\beta,\eta)\bigr| = \sum_{(\boldsymbol{\sigma}^{(t)}:t\in[m])\in\widehat{\mathcal{F}}(m,\beta,\eta)} \mathbbm{1}\bigl\{\Phi(\boldsymbol{\sigma}^{(t)}) = 1,\forall t\in[m]\bigr\}.$$ In what follows, we establish that $\mathbb{E}[N]=e^{-\Theta(n)}$ for a suitable choice of $\beta$ and $\eta$. The conclusion then follows by Markov's inequality: $\mathbb{P}[N\ge 1]\le \mathbb{E}[N]$.

#### Cardinality Estimate

We first bound $|\widehat{\mathcal{F}}(m,\beta,\eta)|$. Note that there are $2^n$ choices for $\boldsymbol{\sigma}^{(1)}$.
Having fixed a $\boldsymbol{\sigma}^{(1)}$, the number of choices for any $\boldsymbol{\sigma}^{(t)}$, $2\le t\le m$, is at most $\sum_{(\beta-\eta)n\le i\le \beta n}\binom{n}{i}$. Provided $\beta<\frac12$, which will be satisfied by our eventual choice of parameters, we have $$\sum_{(\beta-\eta)n\le i\le \beta n}\binom{n}{i}\le \binom{n}{\beta n}n^{O(1)}.$$ So, $$\begin{aligned} \label{eq:card-S} |\widehat{\mathcal{F}}(m,\beta,\eta)| &\le 2^n \left(\binom{n}{\beta n}n^{O(1)}\right)^{m-1}\le \exp\Bigl(n\ln 2 + n(m-1)\ln 2 \cdot h_b(\beta) + O(\ln n)\Bigr),\end{aligned}$$ where we used Stirling's approximation, $\binom{n}{\beta n} = \exp_2\bigl(nh_b(\beta)+O(\ln n)\bigr)$.

#### Probability Estimate

We next study the probability term. Fix a $(\boldsymbol{\sigma}^{(1)},\dots,\boldsymbol{\sigma}^{(m)})\in\widehat{\mathcal{F}}(m,\beta,\eta)$. Let $\Phi$ consist of independent $k$-clauses $\mathcal{C}_i$, $1\le i\le M$, where $M=\alpha_\gamma n$ for $\alpha_\gamma = \gamma 2^k\ln 2$. We have $$\begin{aligned} \mathbb{P}\bigl[\Phi(\boldsymbol{\sigma}^{(t)})=1,\forall t\in[m]\bigr] &=\mathbb{P}\bigl[\mathcal{C}_1(\boldsymbol{\sigma}^{(t)})=1,\forall t\in[m]\bigr]^{\alpha_\gamma n} \nonumber\\ &=\Bigl(1-\mathbb{P}\bigl[\cup_{1\le t\le m}\{\mathcal{C}_1(\boldsymbol{\sigma}^{(t)})=0\}\bigr]\Bigr)^{\alpha_\gamma n}\nonumber\\ &\le \Bigl(1-m\cdot 2^{-k}+\binom{m}{2}(1-\beta+\eta)^k 2^{-k}\Bigr)^{\alpha_\gamma n}\label{eq:prob-term-ksat},\end{aligned}$$ where [\[eq:prob-term-ksat\]](#eq:prob-term-ksat){reference-type="eqref" reference="eq:prob-term-ksat"} is obtained by using the fact that for events $\mathcal{E}_1,\dots,\mathcal{E}_m$, $$\mathbb{P}\bigl[\cup_{1\le i\le m}\mathcal{E}_i\bigr]\ge \sum_{1\le i\le m}\mathbb{P}[\mathcal{E}_i] - \sum_{1\le i<j\le m}\mathbb{P}\bigl[\mathcal{E}_i\cap\mathcal{E}_j\bigr],$$ and Lemma [Lemma 21](#lemma:random-k-sat-prob){reference-type="ref" reference="lemma:random-k-sat-prob"}.

#### Combining Everything

We combine [\[eq:card-S\]](#eq:card-S){reference-type="eqref" reference="eq:card-S"} and [\[eq:prob-term-ksat\]](#eq:prob-term-ksat){reference-type="eqref" reference="eq:prob-term-ksat"} to arrive at $$\begin{aligned} \mathbb{E}[N]\le \exp\left(n\ln 2 + n(m-1)\ln 2\cdot h_b(\beta) +\alpha_\gamma n \ln\left(1-m\cdot 2^{-k}+\binom{m}{2}(1-\beta+\eta)^k 2^{-k}\right)+O(\ln n)\right).\end{aligned}$$ Next, a Taylor expansion yields $\ln(1-x)=-x+O(x^2)$ as $x\to 0$. Thus, $$\ln\left(1-m\cdot 2^{-k} + \binom{m}{2}(1-\beta+\eta)^k 2^{-k}\right) = -m\cdot 2^{-k} + \binom{m}{2}(1-\beta+\eta)^k 2^{-k} +O_k\left(m^2 2^{-2k}\right),$$ using the fact that $m=O_k(1)$. We now insert $\alpha_\gamma =\gamma 2^k\ln 2$ (for a $\gamma>1/m$) to obtain $$\begin{aligned} \mathbb{E}[N]&\le \exp\left(n\ln 2\underbrace{\left(1+(m-1)h_b(\beta)-m\gamma+\gamma\binom{m}{2}(1-\beta+\eta)^k +O_k\left(m^2 2^{-k}\right) \right)}_{\triangleq\varphi(k,\gamma,m,\beta,\eta)}+O(\ln n)\right)\\ &=\exp\Bigl(n\cdot \ln 2 \cdot \varphi(k,\gamma,m,\beta,\eta)+O(\ln n)\Bigr).\end{aligned}$$ Since $\gamma > \frac1m$, we have that $\gamma m = 1+\delta$ for some $\delta>0$. With this, choose $\beta^*<1/2$ such that $(m-1)h_b(\beta^*)\le \delta/4$. Set $\eta^* = \beta^*/2$.
Finally, choose $K^*$ sufficiently large so that, for $k\ge K^*$ (and the aforementioned choice of $m,\gamma,\beta^*$), $$\gamma \binom{m}{2} (1-\beta^*+\eta^*)^k + O_k\left(m^2 2^{-k}\right) \le \frac{\delta}{4}.$$ With these, we obtain $$\varphi(k,\gamma,m,\beta^*,\eta^*)\le -\frac{\delta}{2},\quad \forall k\ge K^*$$ and therefore $$\mathbb{E}[N]\le \exp\left(-\frac{\delta\ln 2}{2}n+O(\ln n)\right) = \exp\bigl(-\Theta(n)\bigr).$$ This completes the proof of Theorem [Theorem 11](#thm:m-ogp-k-sat){reference-type="ref" reference="thm:m-ogp-k-sat"}.

## Proof of Theorem [Theorem 12](#thm:m-ogp-k-sat-absent){reference-type="ref" reference="thm:m-ogp-k-sat-absent"} {#sec:pf-ogp-absent-k-sat}

The proof is based on the *second moment method*. To that end, recall the Paley-Zygmund inequality from Lemma [Lemma 18](#lemma:PZ){reference-type="ref" reference="lemma:PZ"}, which asserts that for any non-negative integer-valued random variable $N$, $$\label{eq:paley} \mathbb{P}\bigl[N\ge 1\bigr]\ge \mathbb{E}[N]^2/\mathbb{E}[N^2].$$ Now, recall $\widehat{\mathcal{F}}(m,\beta,\eta)$ from [\[eq:S-beta\]](#eq:S-beta){reference-type="eqref" reference="eq:S-beta"} and let $N$ be the random variable introduced in [\[eq:rv-N\]](#eq:rv-N){reference-type="eqref" reference="eq:rv-N"}. Fix a $\boldsymbol{\sigma}\in\{0,1\}^n$ and define $$L_{\boldsymbol{\sigma}}=\Bigl|\bigl\{(\boldsymbol{\sigma}^{(1)},\dots,\boldsymbol{\sigma}^{(m)})\in \widehat{\mathcal{F}}(m,\beta,\eta):\boldsymbol{\sigma}^{(1)}=\boldsymbol{\sigma}\bigr\}\Bigr|.$$ As $L_{\boldsymbol{\sigma}}$ is independent of $\boldsymbol{\sigma}$, we set $L\triangleq L_{\boldsymbol{\sigma}}$ and obtain that $|\widehat{\mathcal{F}}(m,\beta,\eta)|=2^n\cdot L$. Next, fix a $\gamma<1/m$ and let $\alpha_\gamma = \gamma 2^k\ln 2$.

#### Lower Bounding $\mathbb{E}[N]$

We lower bound $\mathbb{E}[N]$. Observe that $$\begin{aligned} \mathbb{P}\bigl[\Phi(\boldsymbol{\sigma}^{(t)})=1,\forall t\in[m]\bigr] &=\mathbb{P}\bigl[\mathcal{C}_1(\boldsymbol{\sigma}^{(t)})=1,\forall t\in[m]\bigr]^{\alpha_\gamma n} \nonumber\\ &=\Bigl(1-\mathbb{P}\bigl[\cup_{1\le t\le m}\{\mathcal{C}_1(\boldsymbol{\sigma}^{(t)})=0\}\bigr]\Bigr)^{\alpha_\gamma n}\nonumber\\ &\ge \Bigl(1-m\cdot 2^{-k}\Bigr)^{\alpha_\gamma n},\end{aligned}$$ where we took a union bound over $1\le t\le m$ to arrive at the last line. Consequently, $$\label{eq:lower-bd-N} \mathbb{E}[N] \ge 2^n\cdot L\left(1-m\cdot 2^{-k}\right)^{\alpha_\gamma n}.$$

#### Upper Bounding $\mathbb{E}[N^2]$

Recall that $\gamma<1/m$. Fix an $\epsilon>0$ such that $$\label{eq:choice-of-eps} -\delta\triangleq -1 + h_b(\epsilon)+m\gamma<0,$$ where $h_b(p)=-p\log_2 p -(1-p)\log_2(1-p)$ is the binary entropy function (with logarithm base $2$). Such an $\epsilon>0$ indeed exists as $\gamma<\frac1m$ and $h_b(\cdot)$ is continuous with $\lim_{p\to 0}h_b(p)=0$.
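For illustration only (these values are not used elsewhere), if $m=3$ and $\gamma=0.3<1/m$, then $1-m\gamma=0.1$, and since $h_b(0.01)\approx 0.081<0.1$, the choice $\epsilon=0.01$ satisfies [\[eq:choice-of-eps\]](#eq:choice-of-eps){reference-type="eqref" reference="eq:choice-of-eps"} with $\delta\approx 0.019$.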
Next, define the family $S_{ij}(\epsilon)$, $1\le i,j\le m$, of sets $$\label{eq:S-ij-eps} S_{ij}(\epsilon) = \Bigl\{(\Xi,\bar{\Xi})\in \widehat{\mathcal{F}}(m,\beta,\eta)\times\widehat{\mathcal{F}}(m,\beta,\eta):d_H(\boldsymbol{\sigma}^{(i)},\overline{\boldsymbol{\sigma}}^{(j)})\le \epsilon n\Bigr\},$$ where $\Xi,\bar{\Xi}\in\widehat{\mathcal{F}}(m,\beta,\eta)$ are such that $$\label{eq:bs-sigma} \Xi=\bigl(\boldsymbol{\sigma}^{(i)}:1\le i\le m\bigr)\quad\text{and}\quad \bar{\Xi}=\bigl(\overline{\boldsymbol{\sigma}}^{(i)}:1\le i\le m\bigr).$$ Furthermore, let $$\label{eq:F-eps} \mathcal{F}(\epsilon) = \bigl(\widehat{\mathcal{F}}(m,\beta,\eta)\times\widehat{\mathcal{F}}(m,\beta,\eta)\bigr)\setminus \bigcup_{1\le i,j\le m}S_{ij}(\epsilon).$$ Observe that for every $(\Xi,\bar{\Xi})\in \mathcal{F}(\epsilon)$ and every $1\le i,j\le m$, $$d_H\bigl(\boldsymbol{\sigma}^{(i)},\overline{\boldsymbol{\sigma}}^{(j)}\bigr)>\epsilon n.$$ We next upper bound $\mathbb{E}[N^2]$. Observe that $$\begin{aligned} \mathbb{E}[N^2] &= \sum_{(\Xi,\bar{\Xi})\in \widehat{\mathcal{F}}(m,\beta,\eta)\times \widehat{\mathcal{F}}(m,\beta,\eta)} \mathbb{P}\Bigl[\Phi(\boldsymbol{\sigma}^{(i)})=\Phi(\overline{\boldsymbol{\sigma}}^{(i)})=1,\forall i\in[m]\Bigr] \nonumber\\ &\le \underbrace{\sum_{1\le i,j\le m}\sum_{(\Xi,\bar{\Xi})\in S_{ij}(\epsilon)} \mathbb{P}\Bigl[\Phi(\boldsymbol{\sigma}^{(i)})=\Phi(\overline{\boldsymbol{\sigma}}^{(i)})=1,\forall i\in[m]\Bigr]}_{\triangleq A_\epsilon} + \underbrace{\sum_{(\Xi,\bar{\Xi})\in \mathcal{F}(\epsilon)} \mathbb{P}\Bigl[\Phi(\boldsymbol{\sigma}^{(i)})=\Phi(\overline{\boldsymbol{\sigma}}^{(i)})=1,\forall i\in[m]\Bigr]}_{\triangleq B_\epsilon},\label{eq:A-eps-B-eps-term}\end{aligned}$$ where the inequality accounts for the fact that a pair $(\Xi,\bar{\Xi})$ may belong to several of the sets $S_{ij}(\epsilon)$.

### The Term $A_\epsilon/\mathbb{E}[N]^2$ {#the-term-a_epsilonmathbben2 .unnumbered}

We next show that the dominant contribution to the upper bound in [\[eq:A-eps-B-eps-term\]](#eq:A-eps-B-eps-term){reference-type="eqref" reference="eq:A-eps-B-eps-term"} comes from $B_\epsilon$. That is, we claim $$\frac{A_\epsilon}{\mathbb{E}[N]^2}=\exp(-\Theta(n)).$$ To that end, we first upper bound $$\label{eq:2nd-mom-prob} \max_{\Xi,\bar{\Xi} \in \widehat{\mathcal{F}}(m,\beta,\eta)}\mathbb{P}\Bigl[\Phi(\boldsymbol{\sigma}^{(i)})=\Phi(\overline{\boldsymbol{\sigma}}^{(i)})=1,\forall i\in[m]\Bigr],$$ where $\Xi$ and $\overline{\Xi}$ are as in [\[eq:bs-sigma\]](#eq:bs-sigma){reference-type="eqref" reference="eq:bs-sigma"}.

**Lemma 29**. *$$\begin{aligned} \max_{\Xi,\bar{\Xi} \in \widehat{\mathcal{F}}(m,\beta,\eta)}\mathbb{P}\Bigl[\Phi(\boldsymbol{\sigma}^{(i)})=\Phi(\overline{\boldsymbol{\sigma}}^{(i)})=1,\forall i\in[m]\Bigr] \le \left(1-m\cdot 2^{-k}+2m(m-1)\cdot 2^{-k}\left(1-\frac{\beta-\eta}{2}\right)^k\right)^{\alpha_\gamma n}.\end{aligned}$$*

*Proof of Lemma [Lemma 29](#lemma:2nd-mom-pb){reference-type="ref" reference="lemma:2nd-mom-pb"}.* Fix $\Xi,\bar{\Xi}\in\widehat{\mathcal{F}}(m,\beta,\eta)$. Define the sets $$\label{eq:index-set} I_i\triangleq \left\{j\in[m]:d_H\bigl(\boldsymbol{\sigma}^{(i)},\overline{\boldsymbol{\sigma}}^{(j)}\bigr)<\frac{(\beta-\eta) n}{2}\right\},\quad 1\le i\le m.$$ We claim that $|I_i|\le 1$ for all $1\le i\le m$. Indeed, suppose that this is false for some $i$. That is, there exist $1\le j<\ell\le m$ such that $d_H\bigl(\boldsymbol{\sigma}^{(i)},\overline{\boldsymbol{\sigma}}^{(j)}\bigr),d_H\bigl(\boldsymbol{\sigma}^{(i)},\overline{\boldsymbol{\sigma}}^{(\ell)}\bigr)<\frac{(\beta-\eta) n}{2}$.
Using the triangle inequality, $$(\beta-\eta) n \le d_H\bigl(\overline{\boldsymbol{\sigma}}^{(j)},\overline{\boldsymbol{\sigma}}^{(\ell)}\bigr)\le d_H\bigl(\boldsymbol{\sigma}^{(i)},\overline{\boldsymbol{\sigma}}^{(j)}\bigr)+d_H\bigl(\boldsymbol{\sigma}^{(i)},\overline{\boldsymbol{\sigma}}^{(\ell)}\bigr)<(\beta-\eta) n,$$ a contradiction. Thus, $|I_i|\le 1, \forall i\in[m]$. For $1\le i\le m$, introduce the events $\mathcal{E}_i = \bigl\{\mathcal{C}_1(\boldsymbol{\sigma}^{(i)})=0\bigr\}$, $\overline{\mathcal{E}}_i =\bigl\{\mathcal{C}_1(\overline{\boldsymbol{\sigma}}^{(i)})=0\bigr\}$. We have $$\label{eq:2-to-k} \mathbb{P}\bigl[\mathcal{E}_i\bigr] = \mathbb{P}\bigl[\overline{\mathcal{E}}_i\bigr] = 2^{-k},\quad 1\le i\le m.$$ Next, we have the following chain: $$\begin{aligned} &\mathbb{P}\Bigl[\Phi(\boldsymbol{\sigma}^{(i)})=\Phi(\overline{\boldsymbol{\sigma}}^{(i)})=1,\forall i\in[m]\Bigr] \nonumber \\ &=\mathbb{P}\Bigl[\mathcal{C}_1(\boldsymbol{\sigma}^{(i)})=\mathcal{C}_1(\overline{\boldsymbol{\sigma}}^{(i)})=1,1\le i\le m\Bigr]^{\alpha_\gamma n}\label{eq:clauses-independent} \\ &=\left(1-\mathbb{P}\left[\bigcup_{i\le m}(\mathcal{E}_i\cup \overline{\mathcal{E}}_i)\right]\right)^{\alpha_\gamma n}\label{eq:complement}\\ &\le \left(1-2m\cdot 2^{-k}+\sum_{1\le i,j\le m}\mathbb{P}\bigl[\mathcal{E}_i\cap \overline{\mathcal{E}}_j\bigr]+\sum_{1\le i<j\le m}\mathbb{P}\bigl[\mathcal{E}_i\cap \mathcal{E}_j\bigr]+\sum_{1\le i<j\le m}\mathbb{P}\bigl[\overline{\mathcal{E}}_i\cap \overline{\mathcal{E}}_j\bigr]\right)^{\alpha_\gamma n}\label{eq:son}.\end{aligned}$$ We now justify the lines above. [\[eq:clauses-independent\]](#eq:clauses-independent){reference-type="eqref" reference="eq:clauses-independent"} follows from the fact that $\mathcal{C}_i$, $1\le i\le \alpha_\gamma n$, are independent. [\[eq:complement\]](#eq:complement){reference-type="eqref" reference="eq:complement"} follows by observing that $$\left\{\mathcal{C}_1(\boldsymbol{\sigma}^{(i)}) = \mathcal{C}_1(\overline{\boldsymbol{\sigma}}^{(i)})=1,\forall i\in[m]\right\} = \bigcap_{i\le m}\bigl(\mathcal{E}_i^c\cap \overline{\mathcal{E}}_i^c\bigr).$$ Finally, [\[eq:son\]](#eq:son){reference-type="eqref" reference="eq:son"} is obtained by combining [\[eq:2-to-k\]](#eq:2-to-k){reference-type="eqref" reference="eq:2-to-k"} with the inequality $$\begin{aligned} & \mathbb{P}\bigl[\cup_{i\le m}(\mathcal{E}_i\cup\overline{\mathcal{E}}_i)\bigr]\\ &\ge \sum_{1\le i\le m}\bigl(\mathbb{P}[\mathcal{E}_i]+\mathbb{P}[\overline{\mathcal{E}}_i]\bigr)-\sum_{1\le i,j\le m}\mathbb{P}\bigl[\mathcal{E}_i\cap \overline{\mathcal{E}}_j\bigr]-\sum_{1\le i<j\le m}\mathbb{P}\bigl[\mathcal{E}_i\cap \mathcal{E}_j\bigr]-\sum_{1\le i<j\le m}\mathbb{P}\bigl[\overline{\mathcal{E}}_i\cap \overline{\mathcal{E}}_j\bigr]. \end{aligned}$$ Applying Lemma [Lemma 21](#lemma:random-k-sat-prob){reference-type="ref" reference="lemma:random-k-sat-prob"}, we obtain that for any $1\le i<j\le m$, $$\label{eq:ei-ej} \mathbb{P}\bigl[\mathcal{E}_i\cap\mathcal{E}_j\bigr] \le 2^{-k}\cdot (1-\beta+\eta)^k,$$ and the same bound holds for $\mathbb{P}\bigl[\overline{\mathcal{E}}_i\cap\overline{\mathcal{E}}_j\bigr]$. Now fix $1\le i,j\le m$.
Note that if $j\notin I_i$ for $I_i$ in [\[eq:index-set\]](#eq:index-set){reference-type="eqref" reference="eq:index-set"}, then $d_H(\boldsymbol{\sigma}^{(i)},\overline{\boldsymbol{\sigma}}^{(j)})\ge (\beta-\eta) n/2$ and consequently $$\mathbb{P}\bigl[\mathcal{E}_i\cap \overline{\mathcal{E}}_j\bigr]\le 2^{-k}\cdot \left(1-\frac{\beta-\eta}{2}\right)^k,$$ again by Lemma [Lemma 21](#lemma:random-k-sat-prob){reference-type="ref" reference="lemma:random-k-sat-prob"}. On the other hand, if $j\in I_i$, then we trivially have $$\mathbb{P}\bigl[\mathcal{E}_i\cap \overline{\mathcal{E}}_j\bigr]\le \mathbb{P}\bigl[\mathcal{E}_i\bigr]=2^{-k}.$$ Recall that $|I_i|\le 1$. Thus for any $1\le i\le m$ $$\sum_{1\le j\le m}\mathbb{P}\bigl[\mathcal{E}_i\cap \overline{\mathcal{E}}_j\bigr] \le 2^{-k} + (m-1)2^{-k}\left(1-\frac{\beta-\eta}{2}\right)^k,$$ so that $$\label{eq:ei-bar-ej} \sum_{1\le i,j\le m}\mathbb{P}\bigl[\mathcal{E}_i\cap \bar{\mathcal{E}}_j\bigr] \le m\cdot 2^{-k}+m(m-1)2^{-k}\left(1-\frac{\beta-\eta}{2}\right)^k.$$ Combining [\[eq:son\]](#eq:son){reference-type="eqref" reference="eq:son"} with [\[eq:ei-ej\]](#eq:ei-ej){reference-type="eqref" reference="eq:ei-ej"} and [\[eq:ei-bar-ej\]](#eq:ei-bar-ej){reference-type="eqref" reference="eq:ei-bar-ej"}, and using $$\left(1-\beta+\eta\right)^k\le \left(1-\frac{\beta-\eta}{2}\right)^k$$ valid for all $0<\eta<\beta<1$, we establish Lemma [Lemma 29](#lemma:2nd-mom-pb){reference-type="ref" reference="lemma:2nd-mom-pb"}. ◻

Using Lemma [Lemma 29](#lemma:2nd-mom-pb){reference-type="ref" reference="lemma:2nd-mom-pb"}, we arrive at $$\begin{aligned} A_\epsilon &\le m^2 2^n L \left(\sum_{0\le s\le \epsilon n}\binom{n}{s} \right)\cdot L \left(1-m\cdot 2^{-k}+2m(m-1)\cdot 2^{-k}\left(1-\frac{\beta-\eta}{2}\right)^k\right)^{\alpha_\gamma n} \nonumber\\ &=m^2 2^n L^2 \binom{n}{\epsilon n}n^{O(1)} \left(1-m\cdot 2^{-k}+2m(m-1)\cdot 2^{-k}\left(1-\frac{\beta-\eta}{2}\right)^k\right)^{\alpha_\gamma n}.\label{eq:A-eps}\end{aligned}$$ Combining [\[eq:A-eps\]](#eq:A-eps){reference-type="eqref" reference="eq:A-eps"} with [\[eq:lower-bd-N\]](#eq:lower-bd-N){reference-type="eqref" reference="eq:lower-bd-N"}, we obtain $$\begin{aligned} \frac{A_\epsilon}{\mathbb{E}[N]^2} &\le \frac{m^2 2^n L^2 \binom{n}{\epsilon n} n^{O(1)} \Bigl(1-m\cdot 2^{-k}+2m(m-1)\cdot 2^{-k}\left(1-\frac{\beta-\eta}{2}\right)^k\Bigr)^{\alpha_\gamma n}}{2^{2n}L^2 \Bigl(1-m\cdot 2^{-k}\Bigr)^{2\alpha_\gamma n}}\nonumber\\ &=\exp\Bigl(-n\ln 2 + n \ln 2\cdot h_b(\epsilon)+ O(\ln n)\Bigr)\frac{\exp\Bigl(\alpha_\gamma n\ln\Bigl(1-m\cdot 2^{-k}+2m(m-1)\cdot 2^{-k}\left(1-\frac{\beta-\eta}{2}\right)^k\Bigr)\Bigr)}{\exp\Bigl(2\alpha_\gamma n\ln\Bigl(1-m\cdot 2^{-k}\Bigr)\Bigr)}\label{eq:A-eps-inter}.\end{aligned}$$ We next employ the Taylor series $\ln(1-x) = -x + O(x^2)$, valid as $x\to 0$. Note that $\beta,\eta$ and $m$ are fixed.
So, for all sufficiently large $k$, specifically when $$\left(1-\frac{\beta-\eta}{2}\right)^k<\frac{1}{2(m-1)},$$ we have $$\begin{aligned} &\ln\left(1-m\cdot 2^{-k}+2m(m-1)\cdot 2^{-k}\left(1-\frac{\beta-\eta}{2}\right)^k\right)\nonumber\\ &=-m\cdot 2^{-k} +2m(m-1)2^{-k}\left(1-\frac{\beta-\eta}{2}\right)^k +O_k\left(m^2 2^{-2k}\right)\label{tay-1}.\end{aligned}$$ Similarly, $$\begin{aligned} \label{tay-2} \ln\Bigl(1-m\cdot 2^{-k}\Bigr) = -m\cdot 2^{-k} + O_k\left(m^2 2^{-2k}\right).\end{aligned}$$ Recall now that $$\alpha_\gamma = \gamma \cdot 2^k\cdot \ln 2,\quad\text{where}\quad \gamma<\frac1m.$$ Inserting [\[tay-1\]](#tay-1){reference-type="eqref" reference="tay-1"} and [\[tay-2\]](#tay-2){reference-type="eqref" reference="tay-2"} into [\[eq:A-eps-inter\]](#eq:A-eps-inter){reference-type="eqref" reference="eq:A-eps-inter"}, we obtain $$\begin{aligned} \frac{A_\epsilon}{\mathbb{E}[N]^2} \le \exp\Bigl(n\ln 2\Bigl(-1+h_b(\epsilon) +m\gamma +2m(m-1)(1-(\beta-\eta)/2)^k \gamma +O_k\left(m^2 2^{-k}\right)\Bigr)+O(\ln n)\Bigr).\end{aligned}$$ From [\[eq:choice-of-eps\]](#eq:choice-of-eps){reference-type="eqref" reference="eq:choice-of-eps"}, we have $-\delta = -1+h_b(\epsilon)+m\gamma <0$. Now, set $K^*$ such that $$\max\left\{2m(m-1)\left(1-\frac{\beta-\eta}{2}\right)^k\gamma\,,\,O_k\left(m^2 2^{-k}\right)\right\} \le \frac{\delta}{4}$$ for all $k\ge K^*$. With this choice of $K^*$, we arrive at the conclusion $$\label{eq:A-subdominant} \frac{A_\epsilon}{\mathbb{E}[N]^2}\le \exp\left(-\frac{\delta \ln 2}{2}n +O(\ln n)\right) = \exp\bigl(-\Theta(n)\bigr),$$ for every $k\ge K^*$.

### The Term $B_\epsilon/\mathbb{E}[N]^2$ {#the-term-b_epsilonmathbben2 .unnumbered}

For $(\Xi,\bar{\Xi})\in\mathcal{F}(\epsilon)$, we control the probability term by modifying Lemma [Lemma 29](#lemma:2nd-mom-pb){reference-type="ref" reference="lemma:2nd-mom-pb"}.

**Lemma 30**. *$$\begin{aligned} &\max_{(\Xi,\bar{\Xi}) \in \mathcal{F}(\epsilon)}\mathbb{P}\bigl[\Phi(\boldsymbol{\sigma}^{(i)})=\Phi(\overline{\boldsymbol{\sigma}}^{(i)})=1,\forall i\in[m]\bigr]\\ &\le \Bigl(1-2m\cdot 2^{-k}+m(m-1)\cdot 2^{-k}\left(1-\beta+\eta\right)^k+m^2(1-\epsilon)^k 2^{-k}\Bigr)^{\alpha_\gamma n}.\end{aligned}$$*

*Proof of Lemma [Lemma 30](#lemma:2nd-mom-pb2){reference-type="ref" reference="lemma:2nd-mom-pb2"}.* The proof is very similar to that of Lemma [Lemma 29](#lemma:2nd-mom-pb){reference-type="ref" reference="lemma:2nd-mom-pb"}; we only point out the necessary modification. Note that since $(\Xi,\bar{\Xi})\in\mathcal{F}(\epsilon)$, we have $$d_H\bigl(\boldsymbol{\sigma}^{(i)},\overline{\boldsymbol{\sigma}}^{(j)}\bigr)>\epsilon n,\quad \forall i,j\in[m].$$ With this, we apply Lemma [Lemma 21](#lemma:random-k-sat-prob){reference-type="ref" reference="lemma:random-k-sat-prob"} to conclude $$\mathbb{P}\bigl[\mathcal{E}_i\cap \overline{\mathcal{E}}_j\bigr] \le (1-\epsilon)^k 2^{-k},$$ where $\mathcal{E}_i$ and $\overline{\mathcal{E}}_j$ are the events appearing in [\[eq:son\]](#eq:son){reference-type="eqref" reference="eq:son"}. This settles Lemma [Lemma 30](#lemma:2nd-mom-pb2){reference-type="ref" reference="lemma:2nd-mom-pb2"}. ◻

Equipped with Lemma [Lemma 30](#lemma:2nd-mom-pb2){reference-type="ref" reference="lemma:2nd-mom-pb2"}, we now control $B_\epsilon/\mathbb{E}[N]^2$.
Note that $$\begin{aligned} B_\epsilon &= \sum_{(\Xi,\bar{\Xi})\in\mathcal{F}(\epsilon)}\mathbb{P}\Bigl[\Phi(\boldsymbol{\sigma}^{(i)}) = \Phi(\overline{\boldsymbol{\sigma}}^{(i)})=1,\forall i\in[m]\Bigr] \\ &\le 2^{2n}L^2\Bigl(1-2m\cdot 2^{-k}+m(m-1)\cdot 2^{-k}\left(1-\beta+\eta\right)^k+m^2(1-\epsilon)^k 2^{-k}\Bigr)^{\alpha_\gamma n},\end{aligned}$$ where we used the trivial upper bound $$|\mathcal{F}(\epsilon)|\le |\widehat{\mathcal{F}}(m,\beta,\eta)\times \widehat{\mathcal{F}}(m,\beta,\eta)|=2^{2n}L^2$$ and Lemma [Lemma 30](#lemma:2nd-mom-pb2){reference-type="ref" reference="lemma:2nd-mom-pb2"}. Using [\[eq:lower-bd-N\]](#eq:lower-bd-N){reference-type="eqref" reference="eq:lower-bd-N"}, we arrive at $$\label{eq:B-eps-over-E^2} \frac{B_\epsilon}{\mathbb{E}[N]^2}\le \frac{\Bigl(1-2m\cdot 2^{-k}+m(m-1)\cdot 2^{-k}\left(1-\beta+\eta\right)^k+m^2(1-\epsilon)^k 2^{-k}\Bigr)^{\alpha_\gamma n}}{\Bigl(1-m\cdot 2^{-k}\Bigr)^{2\alpha_\gamma n}}.$$

#### Choice of $k$

We set $k=\Omega(\log n)$, i.e. $k\ge C\ln n$ for some $C>0$ to be tuned, and study each term appearing in [\[eq:B-eps-over-E\^2\]](#eq:B-eps-over-E^2){reference-type="eqref" reference="eq:B-eps-over-E^2"}. First, using $\alpha_\gamma = \gamma \cdot 2^k\ln 2$ and the Taylor series $\ln(1-x)=-x+O(x^2)$ as $x\to 0$, $$\begin{aligned} \left(1-m\cdot 2^{-k}\right)^{2\alpha_\gamma n}&=\exp\Bigl(2\alpha_\gamma n\ln\Bigl(1-m\cdot 2^{-k}\Bigr)\Bigr)\nonumber\\ &=\exp\Bigl(n\ln 2\cdot 2^{k+1}\cdot \gamma\Bigl(-m\cdot 2^{-k}+O(2^{-2k})\Bigr)\Bigr)\nonumber\\ &=\exp\Bigl(n\ln 2\Bigl(-2m\gamma +O(2^{-k}) \Bigr)\Bigr).\label{eq:den}\end{aligned}$$ Second, $$\begin{aligned} &\Bigl(1-2m\cdot 2^{-k}+m(m-1)\cdot 2^{-k}\left(1-\beta+\eta\right)^k+m^2(1-\epsilon)^k 2^{-k}\Bigr)^{\alpha_\gamma n}\nonumber\\ &=\exp\Bigl(\alpha_\gamma n\ln\Bigl(1-2m\cdot 2^{-k}+m(m-1)\cdot 2^{-k}\left(1-\beta+\eta\right)^k+m^2(1-\epsilon)^k 2^{-k}\Bigr)\Bigr)\nonumber \\ &=\exp\Bigl(n\ln 2\cdot 2^k\cdot\gamma\Bigl(-2m\cdot 2^{-k}+m(m-1)\cdot 2^{-k}\left(1-\beta+\eta\right)^k+m^2(1-\epsilon)^k 2^{-k}+O(2^{-2k})\Bigr)\Bigr)\nonumber\\ &=\exp\Bigl(n\ln 2\Bigl(-2m\gamma+\gamma m(m-1)\cdot\left(1-\beta+\eta\right)^k+\gamma m^2(1-\epsilon)^k +O(2^{-k})\Bigr)\Bigr).\label{eq:num}\end{aligned}$$ We combine [\[eq:den\]](#eq:den){reference-type="eqref" reference="eq:den"} and [\[eq:num\]](#eq:num){reference-type="eqref" reference="eq:num"} to upper bound [\[eq:B-eps-over-E\^2\]](#eq:B-eps-over-E^2){reference-type="eqref" reference="eq:B-eps-over-E^2"}: $$\frac{B_\epsilon}{\mathbb{E}[N]^2}\le \exp\Bigl(\gamma \ln 2 \cdot m(m-1)\cdot n(1-\beta+\eta)^k + \gamma\ln 2\cdot m^2 \cdot n(1-\epsilon)^k + O(2^{-k}n)\Bigr).$$ Let $k\ge C\ln n$. Note that $$O\left(2^{-k}n\right)=O\left(e^{-C\ln 2\ln n}e^{\ln n}\right) = O\left(e^{\ln n(1-C\ln 2)}\right)=o_n(1),$$ provided $C>1/\ln 2$. Next, $$n(1-\beta+\eta)^k = \exp\Bigl(\ln n-k\ln\frac{1}{1-\beta+\eta}\Bigr) \le \exp\Bigl(\ln n\Bigl(1-C\ln \frac{1}{1-\beta+\eta}\Bigr)\Bigr) = o_n(1)$$ if $C>1/\ln\frac{1}{1-\beta+\eta}$. Finally, $$n(1-\epsilon)^k = \exp\Bigl(\ln n-k\ln\frac{1}{1-\epsilon}\Bigr) \le \exp\Bigl(\ln n\Bigl(1-C\ln \frac{1}{1-\epsilon}\Bigr)\Bigr) = o_n(1)$$ if $C>1/\ln\frac{1}{1-\epsilon}$. Thus, choosing $C$ such that $$C>\max\left\{\frac{1}{\ln 2},\frac{1}{\ln\frac{1}{1-\beta+\eta}},\frac{1}{\ln\frac{1}{1-\epsilon}}\right\}$$ we have that $$\label{eq:B-eps-over-N-sq} \frac{B_\epsilon}{\mathbb{E}[N]^2}\le \exp\bigl(o_n(1)\bigr) = 1+o_n(1),$$ as $\gamma,m=O_n(1)$.
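For illustration of this last constraint only (the displayed values are chosen for concreteness; $\beta,\eta,\epsilon$ must of course also satisfy the earlier requirements of the proof), with $\beta=0.4$, $\eta=0.1$ and $\epsilon=0.05$ one gets $1/\ln 2\approx 1.44$, $1/\ln\frac{1}{1-\beta+\eta}=1/\ln(10/7)\approx 2.80$ and $1/\ln\frac{1}{1-\epsilon}=1/\ln(20/19)\approx 19.50$, so any $C>19.50$, say $C=20$, suffices.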
#### Combining Everything Using [\[eq:A-eps-B-eps-term\]](#eq:A-eps-B-eps-term){reference-type="eqref" reference="eq:A-eps-B-eps-term"}, we have $$\mathbb{E}[N^2]\le A_\epsilon+B_\epsilon.$$ Using now [\[eq:A-subdominant\]](#eq:A-subdominant){reference-type="eqref" reference="eq:A-subdominant"}, [\[eq:B-eps-over-N-sq\]](#eq:B-eps-over-N-sq){reference-type="eqref" reference="eq:B-eps-over-N-sq"}, and the Paley-Zygmund inequality (Lemma [Lemma 18](#lemma:PZ){reference-type="ref" reference="lemma:PZ"}), we obtain $$\mathbb{P}[N\ge 1]\ge \frac{\mathbb{E}[N]^2}{\mathbb{E}[N^2]} \ge \frac{1}{A_\epsilon/\mathbb{E}[N]^2 + B_\epsilon/\mathbb{E}[N]^2} = \frac{1}{1+o_n(1) + \exp(-\Theta(n))} = 1-o_n(1).$$ This concludes the proof of Theorem [Theorem 12](#thm:m-ogp-k-sat-absent){reference-type="ref" reference="thm:m-ogp-k-sat-absent"}. ## Proof of Theorem [Theorem 16](#thm:m-ogp-k-sat-absent-constant-k){reference-type="ref" reference="thm:m-ogp-k-sat-absent-constant-k"} {#sec:pf-thm:m-ogp-k-sat-absent-constant-k} Fix $\gamma<1/m$, $0<\eta<\beta<1$ and recall $Z_{\beta,\eta}$ from [\[eq:Z-beta-eta\]](#eq:Z-beta-eta){reference-type="eqref" reference="eq:Z-beta-eta"}. We first establish a concentration property. **Proposition 31**. *Let $M=\alpha_\gamma n$, where $\alpha_\gamma = \gamma 2^k\ln 2$. Then, for any $t_0\ge 0$, $$\mathbb{P}\bigl[\bigl|Z_{\beta,\eta}-\mathbb{E}[Z_{\beta,\eta}]\bigr|\ge t_0\bigr]\le 2\exp\left(-\frac{t_0^2}{M}\right).$$* *Proof of Proposition [Proposition 31](#prop:azuma-for-Z){reference-type="ref" reference="prop:azuma-for-Z"}.* We view $Z\triangleq Z_{\beta,\eta}$ as a function of clauses: $Z=Z(\mathcal{C}_1,\dots,\mathcal{C}_M)$. Note the following Lipschitz property: if $\mathcal{C}_i'$ is obtained by independently resampling the $i{\rm th}$ clause, then $$\bigl|Z(\mathcal{C}_1,\dots,\mathcal{C}_M) - Z(\mathcal{C}_1,\dots,\mathcal{C}_{i-1},\mathcal{C}_i',\mathcal{C}_{i+1},\dots,\mathcal{C}_M)\bigr|\le 1.$$ Proposition [Proposition 31](#prop:azuma-for-Z){reference-type="ref" reference="prop:azuma-for-Z"} then follows immediately from Azuma's inequality. ◻ Let $N=|\mathcal{S}(\gamma,m,\beta,\eta,0)|$. Let $\epsilon$ be an arbitrary value such that $\gamma m+h_b(\epsilon)-1<0$. Inspecting the proof of Theorem [Theorem 12](#thm:m-ogp-k-sat-absent){reference-type="ref" reference="thm:m-ogp-k-sat-absent"}, we obtain that there exists a $K^*$ such that $$\label{eq:2nd-mom-to-repair} \mathbb{P}\bigl[N\ge 1\bigr]\ge \exp\bigl(-2\alpha_\gamma nm^2 T^k\bigr),$$ for all $k\ge K^*$, where $T$ is any number that satisfies $$T\ge \max\left\{\frac{1-\beta+\eta}{2},\frac{1-\epsilon}{2}\right\}.$$ Next, observe that for $M=n\alpha_\gamma$, where $\alpha_\gamma=\gamma 2^k\ln 2$, $\bigl\{N\ge 1\bigr\} = \bigl\{Z_{\beta,\eta} =M\bigr\}$. Now, observe that $$\label{eq:choice-of-t-star} \frac{(t^*)^2}{M}\ge 2\alpha_\gamma nm^2 T^k,\quad\text{for}\quad t^* = n\cdot\alpha_\gamma \sqrt{3}m T^{k/2}.$$ Combining [\[eq:2nd-mom-to-repair\]](#eq:2nd-mom-to-repair){reference-type="eqref" reference="eq:2nd-mom-to-repair"} and [\[eq:choice-of-t-star\]](#eq:choice-of-t-star){reference-type="eqref" reference="eq:choice-of-t-star"} and applying Proposition [Proposition 31](#prop:azuma-for-Z){reference-type="ref" reference="prop:azuma-for-Z"} with $t_0=t^*$, we arrive at $$\mathbb{P}[Z_{\beta,\eta}=M]=\mathbb{P}[N\ge 1]\ge \exp\bigl(-2\alpha_\gamma nm^2 T^k\bigr)\ge \exp\left(-\frac{(t^*)^2}{M}\right)\ge \mathbb{P}[Z_{\beta,\eta}\ge \mathbb{E}[Z_{\beta,\eta}]+t^*].$$ In particular, $\mathbb{E}[Z_{\beta,\eta}]\ge M-t^*$.
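Proposition [Proposition 31](#prop:azuma-for-Z){reference-type="ref" reference="prop:azuma-for-Z"} is a standard bounded-difference (Azuma--McDiarmid) estimate. Before completing the proof, we record a purely illustrative aside: the Python sketch below replaces $Z_{\beta,\eta}$ (whose definition involves the structured families introduced earlier) with the simpler clause-Lipschitz statistic "number of clauses satisfied by one fixed assignment", which has the same bounded-difference property, and compares an empirical deviation tail with the bound $2\exp(-t_0^2/M)$.

```python
import math, random

# Stand-in experiment: Z = number of clauses satisfied by one fixed assignment.
# This is NOT the Z_{beta,eta} of the paper, but it shares the property used in the
# proof of Proposition 31: resampling a single clause changes Z by at most 1.
random.seed(0)
n, k, M, trials = 100, 4, 1000, 2000
sigma = [random.choice((True, False)) for _ in range(n)]

def sample_Z():
    Z = 0
    for _ in range(M):
        satisfied = False
        for _ in range(k):
            i = random.randrange(n)           # variable index (repetitions allowed)
            negated = random.random() < 0.5   # sign of the literal
            if sigma[i] != negated:
                satisfied = True
        Z += satisfied
    return Z

samples = [sample_Z() for _ in range(trials)]
mean = sum(samples) / trials
for t in (20, 40, 60):
    empirical = sum(abs(z - mean) >= t for z in samples) / trials
    bound = 2 * math.exp(-t * t / M)
    print(f"t={t}: empirical tail {empirical:.4f} <= bound {bound:.4f}")
```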
We next apply Proposition [Proposition 31](#prop:azuma-for-Z){reference-type="ref" reference="prop:azuma-for-Z"} once again, this time with $t_0=\sqrt{\alpha_\gamma}n$ to obtain that w.p. at least $1-\exp(-\Theta(n))$, $$Z_{\beta,\eta}\ge \mathbb{E}[Z_{\beta,\eta}]-\sqrt{\alpha_\gamma}n\ge n\alpha_\gamma -n\cdot\alpha_\gamma\sqrt{3}mT^{k/2} - n\sqrt{\alpha_\gamma}\ge n\alpha_\gamma - n\cdot 2^{k/2}\sqrt{\gamma\ln 3}$$ for all large enough $k,n$. Lastly, we observe $$\bigl\{Z_{\beta,\eta}\ge n\alpha_\gamma -n\cdot 2^{k/2}\sqrt{\gamma\ln 3}\bigr\} = \Bigl\{\mathcal{S}(\gamma,m,\beta,\eta,C2^{-k/2})\ne\varnothing\Bigr\},\quad\text{where}\quad C =\gamma^{-\frac12}\frac{\sqrt{\ln 3}}{\ln 2},$$ which completes the proof of Theorem [Theorem 16](#thm:m-ogp-k-sat-absent-constant-k){reference-type="ref" reference="thm:m-ogp-k-sat-absent-constant-k"}. ### Acknowledgments {#acknowledgments .unnumbered} The author is grateful to David Gamarnik for his guidance and many helpful discussions during the initial stages of this work. The author wishes to thank Aukosh Jagannath for many valuable discussions regarding spin glasses and Brice Huang for proofreading the draft and suggesting valuable comments. The research of the author is supported by Columbia University with the Distinguished Postdoctoral Fellowship in Statistics. [^1]: Department of Statistics, Columbia University; e-mail: `eck2170@columbia.edu` [^2]: See also [@auffingerchen] for a generalized Parisi formula for ${\mathsf{H^*}}$ without needing to take the $\beta\to\infty$ limit. [^3]: This model allows clauses to have repeated literals, or to simultaneously contain a literal $x_i$ and its negation $\overline{x}_i$. However, we focus on the regime $k=\Omega(\ln n)$, hence this would not be the case for most clauses as $n\to\infty$. We consider this model to keep our calculations simple; this is in fact customary in the literature, see e.g. [@frieze2005random; @coja2008random; @bresler2021algorithmic]. [^4]: It is worth mentioning that clustering does not necessarily imply algorithmic hardness. For instance, the binary perceptron model exhibits an extreme form of clustering at all densities, including the 'easy' regime where polynomial-time algorithms exist. For details, see [@gamarnik2022algorithms; @perkins2021frozen; @abbe2021binary; @abbe2021proof]. [^5]: See [@ajtai1996generating; @boix2021average; @GK-SK-AAP] for several exceptions. [^6]: It is worth mentioning that the value $\sqrt{2\ln 2}$ is the ground-state energy for the Random Energy Model, which is the formal $p\to\infty$ limit of the $p$-spin model. [^7]: In fact, as mentioned earlier this was already done for the $p$-spin model in [@huang2021tight]. [^8]: While Lemma [Lemma 23](#lemma:F-NON-EMPTY){reference-type="ref" reference="lemma:F-NON-EMPTY"} concerns $\boldsymbol{\sigma}\in\{-1,1\}^n$, a straightforward modification of it extends to $\{0,1\}^n$ for $\beta\le \frac12$.
arxiv_math
{ "id": "2309.09913", "title": "Sharp Phase Transition for Multi Overlap Gap Property in Ising $p$-Spin\n Glass and Random $k$-SAT Models", "authors": "Eren C. K{\\i}z{\\i}lda\\u{g}", "categories": "math.PR cs.CC cs.DS math-ph math.MP", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We simplify Vétois' Obata-type argument and use it to identify a closed interval $I_n$, $n \geq 3$, containing zero such that if $a \in I_n$ and $(M^n,g)$ is a closed conformally Einstein manifold with nonnegative scalar curvature and $Q_4 + a\sigma_2$ constant, then it is Einstein. We also relax the scalar curvature assumption to the nonnegativity of the Yamabe constant under a more restrictive assumption on $a$. Our results allow us to compute many Yamabe-type constants and prove sharp Sobolev inequalities on closed Einstein manifolds with nonnegative scalar curvature. In particular, we show that closed locally symmetric Einstein four-manifolds with nonnegative scalar curvature extremize the functional determinant of the conformal Laplacian, partially answering a question of Branson and Ørsted. address: | 109 McAllister Building\ Penn State University\ University Park, PA 16802\ USA author: - Jeffrey S. Case bibliography: - bib.bib title: The Obata--Vétois argument and its applications --- # Introduction {#sec:intro} Obata [@Obata1971] proved that every closed conformally Einstein manifold $(M^n,g)$ with constant scalar curvature is Einstein. There are two key points to his argument. First, the constant scalar curvature assumption implies that $E := P - \frac{1}{n}Jg$ is divergence-free, where $P$ is the Schouten tensor and $J$ is its trace. Second, the conformally Einstein assumption implies that there is a constant $\lambda \in \mathbb{R}$ and a positive function $u \in C^\infty(M)$ such that $$\label{eqn:conformally-einstein} \nabla^2u = -uP + \frac{1}{2}u^{-1}\left( \lvert\nabla u\rvert^2 + \lambda \right)g ;$$ i.e. $P^{u^{-2}g} = \frac{\lambda}{2}u^{-2}g$. Integration by parts yields $$\label{eqn:obata} \int_M u\lvert E\rvert^2 \mathop{\mathrm{dV}}= -\int_M \langle E, \nabla^2u \rangle\mathop{\mathrm{dV}}= 0 ,$$ and hence $E=0$. That is, $g$ is Einstein. It is natural to ask whether a similar statement holds for other scalar Riemannian invariants. The most satisfying such result was recently given by Vétois [@Vetois2022]: Any closed conformally Einstein manifold with positive Yamabe constant and constant fourth-order $Q$-curvature is itself Einstein. Earlier partial results were given under the additional assumption of local conformal flatness: Gursky [@Gursky1997] classified the critical points of certain functional determinants in the conformal class of the round four-sphere, and Viaclovsky [@Viaclovsky2000] classified locally conformally flat, conformally Einstein metrics of constant $\sigma_k$-curvature in the elliptic $k$-cones. Outside the setting of scalar Riemannian invariants, there are also Obata-type results for the Boundary Yamabe Problem [@Escobar1988] and for the CR Yamabe Problem [@JerisonLee1988]. Vétois' proof is in the spirit of Obata's argument, but focused directly on positive solutions of the PDE $L_4u = \lambda u^p$, $p \in \bigl( 1, \frac{2n}{n-4} \bigr]$, and its four-dimensional analogue, where $L_4$ is the Paneitz operator [@Paneitz1983]. When $p = \frac{2n}{n-4}$, solutions of this PDE give rise to metrics of constant $Q$-curvature. The benefit of his approach is that it classifies solutions of the subcritical problem (cf. [@Bidaut-VeronVeron1991; @GidasSpruck1981]). 
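Before discussing the drawbacks of this approach, we record a quick numerical sanity check of the pointwise algebra behind Equation [\[eqn:obata\]](#eqn:obata){reference-type="eqref" reference="eqn:obata"}: when $\nabla^2u$ has the form [\[eqn:conformally-einstein\]](#eqn:conformally-einstein){reference-type="eqref" reference="eqn:conformally-einstein"}, the trace-free tensor $E$ pairs trivially with the metric term, so $\langle E,\nabla^2u\rangle=-u\lvert E\rvert^2$ pointwise. The Python sketch below (with randomly generated data at a single point; it is of course no substitute for the integration by parts) confirms this.

```python
import numpy as np

# Pointwise check of the step behind the Obata identity above: in an orthonormal
# frame at a point (so the metric is the identity matrix), if E is the trace-free
# part of P and Hess(u) = -u*P + c*Id for any scalar c, then <E, Hess(u)> = -u*|E|^2,
# because E is trace-free and hence orthogonal to multiples of the metric.
rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
P = (A + A.T) / 2                       # a symmetric "Schouten tensor" at a point
J = np.trace(P)
E = P - (J / n) * np.eye(n)             # trace-free part
u, c = 1.7, 0.3                         # arbitrary positive u and arbitrary scalar c
hess = -u * P + c * np.eye(n)
lhs = np.sum(E * hess)                  # <E, Hess(u)>
rhs = -u * np.sum(E * E)                # -u*|E|^2
print(abs(lhs - rhs) < 1e-12)           # True
```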
The downsides of his approach are that it relies heavily on the fact that the $Q$-curvature prescription problem, unlike the $\sigma_2$-curvature prescription problem, is *semilinear*, and that finding the correct analogue of Equation [\[eqn:obata\]](#eqn:obata){reference-type="eqref" reference="eqn:obata"} requires solving an undetermined coefficients problem with twelve unknowns. The first goal of this article is to give a streamlined version of Vétois' argument which both generalizes to certain fully nonlinear problems and requires solving an undetermined coefficients problem with only three unknowns. To that end, given a Riemannian manifold $(M^n,g)$ and constants $a,b \in \mathbb{R}$, define $$\label{eqn:defn-Ia} I_{a,b} := Q + a\sigma_2 + b\lvert W \rvert^2 .$$ Here $$\begin{aligned} Q & := -\Delta J - 2\lvert P\rvert^2 + \frac{n}{2}J^2 , \\ \sigma_2 & := \frac{1}{2}\left( J^2 - \lvert P\rvert^2 \right) , \\ \lvert W\rvert^2 & := W_{abcd}W^{abcd} ,\end{aligned}$$ are the $Q$-curvature, the $\sigma_2$-curvature, and the squared length of the Weyl tensor, with the convention $-\Delta \geq 0$. The significance of the family $I_{a,b}$ comes from the consideration of natural Riemannian scalar invariants which are variational within conformal class. Let $I$ be a natural scalar Riemannian invariant[^1] which is homogeneous of degree $-4$; i.e. $I^{c^2g}=c^{-4}I^g$ for any constant $c>0$ and any metric $g$. Then $I$ is in the span of $\{ \Delta J, J^2, \lvert P\rvert^2, \lvert W\rvert^2 \}$. Such an element is **conformally variational** if there is a natural Riemannian functional[^2] $\mathcal{F}\colon [g] \to \mathbb{R}$ such that $$\left. \frac{d}{dt} \right|_{t=0} \mathcal{F}(e^{2tu}\widehat{g}) = \int_M uI^{\widehat{g}} \mathop{\mathrm{dV}}_{\widehat{g}}$$ for any $\widehat{g}\in [g]$ and any $u \in C^\infty(M)$. We call $\mathcal{F}$ a **conformal primitive** of $I$. The subspace of conformally variational invariants which are homogeneous of degree $-4$ is three-dimensional [@BransonOrsted1988; @CaseLinYuan2016] and spanned by $\{ Q, \sigma_2, \lvert W\rvert^2 \}$. Since $\sigma_2$ and $\lvert W\rvert^2$ depend on at most second-order derivatives of the conformal factor, the $I_{a,b}$-curvatures thus represent all such conformally variational invariants which are fourth-order in the conformal factor. Our generalization of Vétois' argument applies to the invariants $I_{a,0}$ with $a$ suitably close to zero: **Theorem 1**. *Let $(M^n,g)$, $n \geq 3$, be a closed conformally Einstein manifold with nonnegative scalar curvature. Suppose additionally that there is an $$\label{eqn:a-range} a \in \left[ \frac{n^2-7n+8 - \sqrt{n^4+2n^3-3n^2}}{2(n-1)} , \frac{n^2-7n+8 + \sqrt{n^4+2n^3-3n^2}}{2(n-1)} \right]$$ such that $I_{a,0}$ is constant; if $J^g=0$, then assume also that $a$ is in the interior of this interval. Then $(M^n,g)$ is Einstein.* Note that $a=0$ satisfies Condition [\[eqn:a-range\]](#eqn:a-range){reference-type="eqref" reference="eqn:a-range"}. We do not know whether this condition is optimal. It would be interesting to know whether the proof of [Theorem 1](#main-thm){reference-type="ref" reference="main-thm"} can be modified to allow $b \not= 0$ for manifolds which are conformal to a locally symmetric Einstein manifold. The requirement in [Theorem 1](#main-thm){reference-type="ref" reference="main-thm"} that $g$ have nonnegative scalar curvature is somewhat unsatisfying. 
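Before describing how this hypothesis can be relaxed, we record the endpoints of the interval in Condition [\[eqn:a-range\]](#eqn:a-range){reference-type="eqref" reference="eqn:a-range"} for small $n$; the short Python sketch below is illustrative only and confirms that $a=0$ always lies in the interior of the interval.

```python
import math

# Endpoints of the interval in Condition (a-range) of Theorem 1, for small n.
for n in range(3, 11):
    disc = math.sqrt(n**4 + 2 * n**3 - 3 * n**2)
    lo = (n**2 - 7 * n + 8 - disc) / (2 * (n - 1))
    hi = (n**2 - 7 * n + 8 + disc) / (2 * (n - 1))
    assert lo < 0 < hi                  # a = 0 is always admissible
    print(f"n={n}: a in [{lo:.4f}, {hi:.4f}]")
```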
One can relax this to an assumption on the Yamabe constant if one further restricts Condition [\[eqn:a-range\]](#eqn:a-range){reference-type="eqref" reference="eqn:a-range"}. **Theorem 2**. *Let $(M^n,g)$, $n \geq 3$, be a conformally Einstein manifold with nonnegative Yamabe constant. Suppose additionally that there is an $$\label{eqn:restricted-a-range} a \in \{ 0 \} \cup \left[ \frac{n^2-7n+8 - \sqrt{n^4+2n^3-3n^2}}{2(n-1)} , -\frac{2(n-2)}{n-1} \right]$$ such that $I_{a,0}$ is constant; if $n=3$, then assume also that $I_{a,0}\geq0$. Then $(M^n,g)$ is Einstein.* The case $a=0$ of [Theorem 2](#main-thm-q){reference-type="ref" reference="main-thm-q"} is the main result of Vétois [@Vetois2022]. Direct computation shows that in dimension four, Condition [\[eqn:restricted-a-range\]](#eqn:restricted-a-range){reference-type="eqref" reference="eqn:restricted-a-range"} reduces to $$a \in \left[ -\frac{2+2\sqrt{21}}{3} , -\frac{4}{3} \right] ,$$ and hence [Theorem 2](#main-thm-q){reference-type="ref" reference="main-thm-q"} recovers Gursky's result [@Gursky1997] while also removing the locally conformally flat assumption. Neither [Theorem 1](#main-thm){reference-type="ref" reference="main-thm"} nor [Theorem 2](#main-thm-q){reference-type="ref" reference="main-thm-q"} recovers Viaclovsky's result [@Viaclovsky2000]. In both cases of [Theorem 2](#main-thm-q){reference-type="ref" reference="main-thm-q"}, one first shows that the assumptions force $g$ to have nonnegative scalar curvature, then one applies [Theorem 1](#main-thm){reference-type="ref" reference="main-thm"}. The two cases are distinguished by their proof: Vétois' case $a=0$ is proven using the semilinearity of the constant $Q$-curvature equation and an adaptation of an argument of Gursky and Malchiodi [@GurskyMalchiodi2014]. For the other case, one observes that $I_{a,0} \geq 0$, and hence that $L_2J \geq 0$, where $L_2$ denotes the conformal Laplacian, and then applies an observation of Gursky [@Gursky1998]. We do not know if Condition [\[eqn:restricted-a-range\]](#eqn:restricted-a-range){reference-type="eqref" reference="eqn:restricted-a-range"} is optimal. However, it cannot be completely removed: Gursky and Malchiodi [@GurskyMalchiodi2012] gave an explicit example of a linear combination $I_{a,0}$ for which there is a locally conformally flat metric on $S^4$ with $I_{a,0}$ constant but which is not Einstein. We expect that the strategy used to prove [Theorem 1](#main-thm){reference-type="ref" reference="main-thm"} can be adapted to other settings, and to that end we give a heuristic outline of its proof. The key insight, implicit in Vétois' argument, is that Obata's argument *only* requires finding a natural Riemannian symmetric $(0,2)$-tensor $T$ such that if $(M^n,g)$ has constant $I_{a,0}$-curvature, then $$\label{eqn:obata-strategy} \int_M \left( \langle\delta T, du \rangle- u\langle T,P \rangle+ \frac{1}{2}u^{-1}\left( \lvert\nabla u\rvert^2 + \lambda \right)\mathop{\mathrm{tr}}T \right) \mathop{\mathrm{dV}}\geq 0$$ with equality if and only if $g$ is Einstein. In particular, unlike previous Obata-type results for specific linear combinations of the $Q$- and $\sigma_2$-curvatures [@Gursky1997; @Viaclovsky2000; @ChangGurskyYang2003b], it is *not* necessary to choose $T$ to be trace- and divergence-free. It is, however, advantageous for $T$ to be zero when evaluated at an Einstein metric. The heuristic leading to a suitable tensor $T$ is as follows: Since $Q$ is homogeneous of degree $-4$, one should ask the same of $T$. 
Modulo multiples of the metric, the space of such natural Riemannian symmetric $(0,2)$-tensors is six-dimensional and spanned by $\{ JP, P^2, \nabla^2J, B, \check{W}, WP \}$, where $\nabla^2$ is the Hessian, $B$ is the Bach tensor, and $\check{W}$ and $WP$ are nontrivial partial contractions of $W \otimes W$ and $W \otimes P$. The divergences of all but $JP$ and $\nabla^2J$ algebraically depend on $\nabla P$ or $W$, which are difficult to use to verify Estimate [\[eqn:obata-strategy\]](#eqn:obata-strategy){reference-type="eqref" reference="eqn:obata-strategy"}. Therefore we restrict our attention to $JP$, $\nabla^2J$, and suitable multiples of the metric. Both $JP - \frac{1}{n}J^2g$ and $\nabla^2J - (\Delta J)g$ vanish at Einstein metrics. Additionally, there is a constant $c$ such that $JP - cI_{a,0}g$ vanishes at Einstein metrics. One then searches for a linear combination $T$ of these three tensors which satisfies Estimate [\[eqn:obata-strategy\]](#eqn:obata-strategy){reference-type="eqref" reference="eqn:obata-strategy"}. The desired tensor is (proportional to) $$\begin{gathered} T := -\frac{2(n^2+2n-4+(n-1)a)}{n-1}JP + \frac{2n}{n-1}\nabla^2J \\ + \left( - \frac{2n}{n-1}\Delta J + \frac{n^3+n^2-4+(n^2-1)a}{n(n-1)}J^2g - 2I_{a,0}g\right)g .\end{gathered}$$ Recall another well-known result of Obata [@Obata1971]: If $(M^n,g)$ is a closed Einstein manifold and if $u \in C^\infty(M)$ is such that $e^{2u}g$ is Einstein, then either $u$ is constant or both $(M^n,g)$ and $(M^n,e^{2u}g)$ are homothetic to a round sphere. Thus [\[main-thm,main-thm-q\]](#main-thm,main-thm-q){reference-type="ref" reference="main-thm,main-thm-q"} are essentially uniqueness results. The second goal of this article is to use Vétois' result to compute various Yamabe-type constants on closed Einstein manifolds with nonnegative scalar curvature. This relies on the fact [@GurskyMalchiodi2014; @HangYang2004; @HangYang2016t; @ChangYang1995] that minimizers of the $Q$-Yamabe Problem exist on such manifolds, and hence are known by [Theorem 2](#main-thm-q){reference-type="ref" reference="main-thm-q"}. A convexity argument, first used by Branson, Chang, and Yang [@BransonChangYang1992] to study extremals of functional determinants, allows us to compute the $I_{a,b}$-Yamabe constant for a wide range of values $a$ and $b$. We also use conformal invariance to express these as sharp Sobolev- and Onofri-type inequalities on closed Einstein manifolds with nonnegative scalar curvature. Due to dimensional differences in the Sobolev embedding theorem, we separately discuss the cases $n \geq 5$, $n=3$, and $n=4$. On manifolds of dimension at least five, the $I_{a,b}$-Yamabe constant is defined by minimizing the volume-normalized total $I_{a,b}$-curvature. We compute this constant for Einstein manifolds with nonnegative scalar curvature. **Theorem 3**. *Let $(M^n,g)$, $n \geq 5$, be a closed Riemannian manifold such that $\mathop{\mathrm{Ric}}= (n-1)\lambda g \geq 0$. Pick constants $a \in [-4,0]$ and $b \leq 0$; if $b<0$, assume additionally that $\lvert W\rvert^2$ is constant. 
Set $$\label{eqn:yamabe-dim5+-constant} C_{g,a,b} := \left( \frac{n(n^2-4)}{8}\lambda^2 + \frac{n(n-1)}{8}a\lambda^2 + b\lvert W\rvert^2 \right)\mathop{\mathrm{Vol}}(M)^{\frac{4}{n}} .$$ Then $$\label{eqn:yamabe-dim5+} \inf_{\widehat{g}\in [g]} \left\{ \int_M I_{a,b}^{\widehat{g}} \mathop{\mathrm{dV}}_{\widehat{g}} \mathrel{}:\mathrel{}\mathop{\mathrm{Vol}}_{\widehat{g}}(M) = 1 \right \} = C_{g,a,b} .$$ Moreover, $\widehat{g}\in [g]$ is extremal if and only if it is Einstein.* In particular, [Theorem 3](#yamabe-dim5+){reference-type="ref" reference="yamabe-dim5+"} applies to all closed locally symmetric Einstein manifolds with nonnegative scalar curvature. Since $I_{a,b}$ is conformally variational, there is a natural formally self-adjoint conformally covariant polydifferential operator whose constant term is a multiple of the $I_{a,b}$-curvature [@CaseLinYuan2018b]. This operator allows us to express Equation [\[eqn:yamabe-dim5+\]](#eqn:yamabe-dim5+){reference-type="eqref" reference="eqn:yamabe-dim5+"} as a sharp functional inequality. **Corollary 4**. *Let $(M^n,g)$, $n \geq 5$, be a closed Riemannian manifold such that $\mathop{\mathrm{Ric}}= (n-1)\lambda g \geq 0$. Pick constants $a \in [-4,0]$ and $b \leq 0$; if $b<0$, assume additionally that $\lvert W\rvert^2$ is constant. Then $$\begin{aligned} \MoveEqLeft \frac{n-4}{2}C_{g,a,b}\lVert u \rVert_{\frac{4n}{n-4}}^2 \leq \int_M \Biggl( (\Delta u^2)^2 - \frac{16}{(n-4)^2}a\lvert\nabla u\rvert^4 - \frac{4}{n-4}a\lvert\nabla u\rvert^2\Delta u^2 \\ & \quad + \left( \frac{n^2-2n-4}{2} + \frac{n-1}{2}a\right)\lambda\lvert\nabla u^2\rvert^2 \\ & \quad + \biggl( \frac{\Gamma\bigl(\frac{n+4}{2}\bigr)}{\Gamma\bigl(\frac{n-4}{2}\bigr)}\lambda^2 + \frac{n(n-1)(n-4)}{16}a\lambda^2 + \frac{n-4}{2}b\lvert W\rvert^2 \biggr) u^4 \Biggr) \mathop{\mathrm{dV}} \end{aligned}$$ for all $u \in C^\infty(M)$, with equality if and only if either $u$ is constant or $(M^n,g)$ is homothetic to the round $n$-sphere and $u(\xi) = a( 1 + \xi \cdot \zeta )^{-\frac{n-4}{4}}$ for some constant $a \in \mathbb{R}$ and some point $\zeta \in B_1(0)$.* The final conclusion of [Corollary 4](#sobolev-dim5+){reference-type="ref" reference="sobolev-dim5+"} uses the homothety to regard $(M^n,g)$ as the round sphere $S^n = \partial B_1(0)$ where $B_1(0) \subset \mathbb{R}^{n+1}$ is the unit ball. In the special case $a=b=0$, taking $v=u^2$ in [Corollary 4](#sobolev-dim5+){reference-type="ref" reference="sobolev-dim5+"} yields the sharp Sobolev inequality $$\begin{gathered} \int_M \left( (\Delta v)^2 + \frac{n^2-2n-4}{2}\lambda\lvert\nabla v\rvert^2 + \frac{\Gamma\bigl(\frac{n+4}{2}\bigr)}{\Gamma\bigl(\frac{n-4}{2}\bigr)}\lambda^2 v^2 \right) \mathop{\mathrm{dV}}\\ \geq \frac{\Gamma\bigl(\frac{n+4}{2}\bigr)}{\Gamma\bigl(\frac{n-4}{2}\bigr)}\lambda^2\mathop{\mathrm{Vol}}_g(M)^{\frac{4}{n}}\lVert v \rVert_{\frac{2n}{n-4}}^2\end{gathered}$$ on closed Einstein manifolds $(M^n,g)$, $n \geq 5$, with $\mathop{\mathrm{Ric}}= (n-1)\lambda g \geq 0$. The $Q$-Yamabe Problem in dimension three has been studied by Hang and Yang [@HangYang2004; @HangYang2016t]. While this problem still involves minimizing the normalized Paneitz energy, it is now equivalent to *maximizing* the volume-normalized total $Q$-curvature. This affects the three-dimensional analogue of [Theorem 3](#yamabe-dim5+){reference-type="ref" reference="yamabe-dim5+"}. **Theorem 5**. *Let $(M^3,g)$ be a closed Riemannian three-manifold such that $\mathop{\mathrm{Ric}}= 2\lambda g \geq 0$. Pick a constant $a \in [-4,0]$. 
Set $$\label{eqn:yamabe-dim3-constant} C_{g,a} := \left( \frac{15}{8}\lambda^2 + \frac{3}{4}a\lambda^2 \right)\mathop{\mathrm{Vol}}(M)^{\frac{4}{3}} .$$ Then $$\sup_{\widehat{g}\in [g]} \left\{ \int_M I_{a,0}^{\widehat{g}} \mathop{\mathrm{dV}}_{\widehat{g}} \mathrel{}:\mathrel{}\mathop{\mathrm{Vol}}_{\widehat{g}}(M) = 1 \right \} = C_{g,a} .$$ Moreover, $\widehat{g}\in [g]$ is extremal if and only if it is Einstein.* The parameter $b$ is irrelevant in [Theorem 5](#yamabe-dim3){reference-type="ref" reference="yamabe-dim3"} because the Weyl tensor vanishes in dimension three. Analogous to dimension at least five, gives rise to a functional inequality on closed Einstein three-manifolds with nonnegative scalar curvature. **Corollary 6**. *Let $(M^3,g)$ be a closed Riemannian three-manifold such that $\mathop{\mathrm{Ric}}= 2\lambda g \geq 0$. Pick a constant $a \in [-4,0]$. Then $$\begin{gathered} -\frac{1}{2}C_{g,a}\lVert u \rVert_{}^2 \leq \int_M \biggl( (\Delta u^2)^2 - 16a\lvert\nabla u\rvert^4 + 4\lvert\nabla u\rvert^2\Delta u^2 \\ + \left( a - \frac{1}{2}\right)\lambda\lvert\nabla u^2\rvert^2 - \frac{3(5+2a)}{16}\lambda^2 u^4 \biggr) \mathop{\mathrm{dV}}_g \end{gathered}$$ for all $u \in C^\infty(M)$, with equality if and only if either $u$ is constant or $(M^3,g)$ is homothetic to the round three-sphere and $u(\xi) = a( 1 + \xi \cdot \zeta )^{\frac{1}{4}}$ for some constant $a \in \mathbb{R}$ and some point $\zeta \in B_1(0)$.* The special feature of the cases $n\not=4$ is that $\int I_{a,b} \mathop{\mathrm{dV}}$ is a conformal primitive for the $I_{a,b}$-curvature. This is not true in dimension four. Instead, the functionals $$\begin{aligned} I(u) & := 4\int_M \lvert W\rvert^2u \mathop{\mathrm{dV}}- \left( \int_M \lvert W\rvert^2 \mathop{\mathrm{dV}}\right) \log \fint e^{4u}\mathop{\mathrm{dV}}, \\ II(u) & := \int_M uL_4u \mathop{\mathrm{dV}}+ 2\int_M Qu \mathop{\mathrm{dV}}- \frac{1}{2}\left( \int_M Q \mathop{\mathrm{dV}}\right) \log \fint e^{4u} \mathop{\mathrm{dV}}, \\ III(u) & := \int_M \left( 12\left(\Delta u + \lvert\nabla u\rvert^2\right)^2 - 4R\lvert\nabla u\rvert^2 - 4u\Delta R \right) \mathop{\mathrm{dV}},\end{aligned}$$ can be used to produce conformal primitives[^3]. Indeed, given $\gamma_1,\gamma_2,\gamma_3 \in \mathbb{R}$, the functional $\mathcal{F}_{\gamma_1,\gamma_2,\gamma_3} \colon C^\infty(M) \to \mathbb{R}$, $$F_{\gamma_1,\gamma_2,\gamma_3} := \gamma_1 I + \gamma_2 II + \gamma_3 III ,$$ has the property [@ChangYang1995] that $u \in C^\infty(M)$ is a critical point of $F_{\gamma_1,\gamma_2,\gamma_3}$ if and only if $I_{\gamma_1,\gamma_2,\gamma_3}^{e^{2u}g}$ is constant, where $$I_{\gamma_1,\gamma_2,\gamma_3} := \gamma_1 \lvert W \rvert^2 + \left( \frac{\gamma_2}{2} + 6\gamma_3 \right)Q - 24\gamma_3 \sigma_2 .$$ The labeling $I$, $II$, and $III$ was introduced by Chang and Yang [@ChangYang1995] in their study of the existence of extremal metrics for functional determinants on closed conformal four-manifolds. 
More precisely, Branson and Ørsted [@BransonOrsted1991b] showed that if $A^g$ is an integer power of a natural, formally self-adjoint, conformally covariant differential operator on a closed Riemannian four-manifold $(M^n,g)$ for which $\ker A^g \subseteq \mathbb{R}$, then there are constants $\gamma_1,\gamma_2,\gamma_3 \in \mathbb{R}$ such that $$\label{eqn:defn-Fa} F_A(u) := 720\pi^2\log\frac{\det A^{g_u}}{\det A^g} = F_{\gamma_1,\gamma_2,\gamma_3}(u)$$ for all $u \in C^\infty(M)$ such that $g_u := e^{2u}g$ has $\mathop{\mathrm{Vol}}_{g_u}(M)=\mathop{\mathrm{Vol}}_g(M)$, where $\log \det A^g$ is defined via zeta regularization. Denote by $(\gamma_1,\gamma_2,\gamma_3)_A$ the constants in Equation [\[eqn:defn-Fa\]](#eqn:defn-Fa){reference-type="eqref" reference="eqn:defn-Fa"} determined by $A$. It is known [@BransonOrsted1991b; @Branson1996] that $$\begin{aligned} (\gamma_1,\gamma_2,\gamma_3)_{L_2} & = \left( \frac{1}{8}, -\frac{1}{2}, -\frac{1}{12} \right) , \\ (\gamma_1,\gamma_2,\gamma_3)_{\slashed{\nabla}^2} & = \left( \frac{7}{16}, -\frac{11}{2}, -\frac{7}{24} \right) , \\ (\gamma_1,\gamma_2,\gamma_3)_{L_4} & = \left( -\frac{1}{4}, -14, \frac{8}{3} \right) , \end{aligned}$$ where $\slashed{\nabla}^2$ is the square of the Dirac operator. One motivation for this framework is the observation of Osgood, Phillips, and Sarnak [@OsgoodPhillipsSarnak1988] that closed Riemannian surfaces with constant Gauss curvature extremize the functional determinant of the conformal Laplacian within their conformal class. Branson, Chang, and Yang [@BransonChangYang1992] proved this conclusion for the round four-sphere. The techniques used to prove [\[sobolev-dim5+,sobolev-dim3\]](#sobolev-dim5+,sobolev-dim3){reference-type="ref" reference="sobolev-dim5+,sobolev-dim3"} allow us to prove this conclusion for closed locally symmetric Einstein manifolds with nonnegative scalar curvature, partially answering a question of Branson and Ørsted [@BransonOrsted1991b]\*p. 676. More generally: **Theorem 7**. *Let $(M^4,g)$ be a closed Riemannian four-manifold such that $\mathop{\mathrm{Ric}}= 3\lambda g \geq 0$. Let $\gamma_1,\gamma_2,\gamma_3 \in \mathbb{R}$ be such that $\gamma_1\leq 0$ and $\gamma_2,\gamma_3 \geq 0$; if $\gamma_1<0$, then assume additionally that $\lvert W\rvert^2$ is constant. Then $$\inf \left\{ F_{\gamma_1,\gamma_2,\gamma_3}(u) \mathrel{}:\mathrel{}u \in C^\infty(M) \right\} = 0 .$$ Moreover, if $\max\{\gamma_2,\gamma_3\}>0$, then $u$ is extremal if and only if $e^{2u}g$ is Einstein.* applies to $S^4$, $\mathbb{C}P^2$, $S^2 \times S^2$, $T^4$, and their quotients, and hence generalizes the aforementioned result of Branson, Chang, and Yang. This article is organized as follows: In [2](#sec:geometry){reference-type="ref" reference="sec:geometry"} we compute the relevant divergences and set up the undetermined coefficients problem. In [3](#sec:proof){reference-type="ref" reference="sec:proof"} we prove [Theorem 1](#main-thm){reference-type="ref" reference="main-thm"}. In [4](#sec:operators){reference-type="ref" reference="sec:operators"} we discuss sufficient conditions for the nonnegativity of $I_{a,b}$ and $J$, and use these to prove [Theorem 2](#main-thm-q){reference-type="ref" reference="main-thm-q"}. 
In [5](#sec:applications){reference-type="ref" reference="sec:applications"} we discuss the relationship between the total $I_{a,b}$-curvature and conformally invariant energy functionals, and then we prove [\[yamabe-dim5+,yamabe-dim3,functional-determinant-extremal\]](#yamabe-dim5+,yamabe-dim3,functional-determinant-extremal){reference-type="ref" reference="yamabe-dim5+,yamabe-dim3,functional-determinant-extremal"} and their corollaries. # Some Riemannian identities {#sec:geometry} Written in terms of the trace-free part $E$ of the Schouten tensor, the $I_{a,0}$-curvature [\[eqn:defn-Ia\]](#eqn:defn-Ia){reference-type="eqref" reference="eqn:defn-Ia"} is $$\label{eqn:Ia-via-E} I_{a,0} = -\Delta J - \frac{a+4}{2}\lvert E\rvert^2 + \frac{n^2-4+(n-1)a}{2n}J^2 .$$ As stated in the introduction, our first task is to compute the divergences of all natural symmetric $(0,2)$-tensors which are homogeneous of degree $-4$ and which vanish when computed at Einstein metrics. We also restrict our attention to those tensors for which the divergence is algebraically determined by the Schouten tensor $P$ and the gradient $\nabla J$ of its trace. This leaves a three-dimensional space of tensors. **Lemma 8**. *Let $(M^n,g)$, $n \geq 3$, be a Riemannian manifold. Suppose that $a \in \mathbb{R}$ is such that $I_{a,0}$ is constant. Then $$\begin{aligned} \delta\left( JP - \frac{1}{n}J^2g \right) & = E(\nabla J) + \frac{n-1}{n}J\nabla J , \\ \delta\left( (n^2-4+(n-1)a)JP - 2I_{a,0}g \right) & = (n^2-4+(n-1)a)\left( E(\nabla J) + \frac{n+1}{n}J\nabla J \right) , \\ \delta\left( \nabla^2J - \Delta Jg \right) & = (n-2)E(\nabla J) + \frac{2(n-1)}{n}J\nabla J . \end{aligned}$$* *Proof.* The contracted second Bianchi identity implies that $\delta P = \nabla J$. The Ricci identity implies that if $u \in C^\infty(M)$, then $$\delta\nabla^2u = \nabla\Delta u + \mathop{\mathrm{Ric}}(\nabla u) .$$ The conclusion readily follows. ◻ Pairing these formulas against $\nabla u$ yields integral formulas involving the Hessian of $u$. These formulas are particularly relevant when $u$ is an Einstein scale [\[eqn:conformally-einstein\]](#eqn:conformally-einstein){reference-type="eqref" reference="eqn:conformally-einstein"}: **Lemma 9**. *Let $(M^n,g)$, $n \geq 3$, be a closed Riemannian manifold. Suppose that $a \in \mathbb{R}$ is such that $I_{a,0}$ is constant. Suppose additionally that the metric $\widehat{g}:= u^{-2}g$ satisfies $P^{\widehat{g}} = \frac{\lambda}{2}\widehat{g}$. Then $$\begin{aligned} \label{eqn:tf-JP} 0 & = \int_M \left( -uJ\lvert E\rvert^2 + E(\nabla J,\nabla u) + \frac{n-1}{n}\langle J\nabla J,\nabla u\rangle\right) \mathop{\mathrm{dV}}, \\ \label{eqn:JP-Q} 0 & = \int_M \biggl( (n^2-4+(n-1)a)\Bigl( E(\nabla J,\nabla u) + \frac{(n+1)}{n}\langle J\nabla J,\nabla u\rangle\Bigr) \\ \notag & \qquad\qquad - 2uJ\Delta J - n(n+a) uJ\lvert E\rvert^2 + nu^{-1}\left(\lvert\nabla u\rvert^2 + \lambda\right)\Delta J \\ \notag & \qquad\qquad + \frac{n(a+4)}{2}u^{-1}\left(\lvert\nabla u\rvert^2 + \lambda\right)\lvert E\rvert^2\biggr) \mathop{\mathrm{dV}}, \\ \label{eqn:nablaJ} 0 & = \int_M \Bigl( (n-1)E(\nabla J,\nabla u) + \frac{n-1}{n}\langle J\nabla J,\nabla u\rangle\\ \notag & \qquad\qquad - \frac{n-1}{2}u^{-1}\left(\lvert\nabla u\rvert^2 + \lambda\right)\Delta J \Bigr) \mathop{\mathrm{dV}}. 
\end{aligned}$$* *Proof.* Writing the equation $P^{\widehat{g}} = \frac{\lambda}{2}\widehat{g}$ in terms of $u$ yields Equation [\[eqn:conformally-einstein\]](#eqn:conformally-einstein){reference-type="eqref" reference="eqn:conformally-einstein"}. Since $M$ is closed, integration by parts then implies that if $T \in \Gamma(S^2T^\ast M)$, then $$\label{eqn:pre-ibp} 0 = \int_M \left( \langle\delta T, \nabla u\rangle- u\langle T,P \rangle+ \frac{1}{2}u^{-1}\left(\lvert\nabla u\rvert^2 + \lambda\right) \mathop{\mathrm{tr}}T \right) \mathop{\mathrm{dV}}.$$ The conclusions now follow from [Lemma 8](#divergences){reference-type="ref" reference="divergences"}: Using the formula for $\delta\bigl( JP - \frac{1}{n}J^2g\bigr)$ in Equation [\[eqn:pre-ibp\]](#eqn:pre-ibp){reference-type="eqref" reference="eqn:pre-ibp"} yields Equation [\[eqn:tf-JP\]](#eqn:tf-JP){reference-type="eqref" reference="eqn:tf-JP"}. Combining the formula for $\delta\bigl( (n^2-4+(n-1)a)JP - 2I_{a,0}g \bigr)$ in Equation [\[eqn:pre-ibp\]](#eqn:pre-ibp){reference-type="eqref" reference="eqn:pre-ibp"} with Equation [\[eqn:Ia-via-E\]](#eqn:Ia-via-E){reference-type="eqref" reference="eqn:Ia-via-E"} yields Equation [\[eqn:JP-Q\]](#eqn:JP-Q){reference-type="eqref" reference="eqn:JP-Q"}. Inserting the formula for $\delta\bigl( \nabla^2J - \Delta Jg \bigr)$ into Equation [\[eqn:pre-ibp\]](#eqn:pre-ibp){reference-type="eqref" reference="eqn:pre-ibp"} and using the identity $$-\int_M \left( u\langle P,\nabla^2J \rangle- uJ\Delta J\right) \mathop{\mathrm{dV}}= \int_M \left( E(\nabla J,\nabla u) - \frac{n-1}{n}\langle J\nabla J,\nabla u\rangle\right) \mathop{\mathrm{dV}}$$ yields Equation [\[eqn:nablaJ\]](#eqn:nablaJ){reference-type="eqref" reference="eqn:nablaJ"}. ◻ # Proofs of [Theorem 1](#main-thm){reference-type="ref" reference="main-thm"} {#sec:proof} The basic idea of the proof of [Theorem 1](#main-thm){reference-type="ref" reference="main-thm"} is to find a linear combination of the three identities of [Lemma 9](#ibp){reference-type="ref" reference="ibp"} for which the contributions of $\Delta J$ are zero. To that end, we first cancel the term involving $u^{-1}(\lvert\nabla u\rvert^2+\lambda)\Delta J$ and integrate the term $uJ\Delta J$ by parts: **Lemma 10**. *Let $(M^n,g)$, $n \geq 3$, be a closed Riemannian manifold. Suppose that $a \in \mathbb{R}$ is such that $I_{a,0}$ is constant. Suppose additionally that the metric $\widehat{g}:= u^{-2}g$ satisfies $P^{\widehat{g}} = \frac{\lambda}{2}\widehat{g}$. Then $$\label{eqn:cancel-scalar-Delta} \begin{split} 0 & = \int_M \biggl( 2u\lvert\nabla J\rvert^2 - n(n+a)uJ\lvert E\rvert^2 + (n^2+2n-4+(n-1)a)E(\nabla J,\nabla u) \\ & \qquad\qquad + \frac{n^3+n^2-4+(n^2-1)a}{n}\langle J\nabla J, \nabla u\rangle\\ & \qquad\qquad + \frac{n(a+4)}{2}u^{-1}\left(\lvert\nabla u\rvert^2+\lambda\right)\lvert E\rvert^2 \biggr) \mathop{\mathrm{dV}}. \end{split}$$* *Proof.* Adding Equation [\[eqn:JP-Q\]](#eqn:JP-Q){reference-type="eqref" reference="eqn:JP-Q"} to $\frac{2n}{n-1}$ times Equation [\[eqn:nablaJ\]](#eqn:nablaJ){reference-type="eqref" reference="eqn:nablaJ"} yields $$\begin{aligned} 0 & = \int_M \biggl( (n^2+2n-4+(n-1)a)E(\nabla J,\nabla u) - n(n+a)uJ\lvert E\rvert^2 \\ & \qquad - 2uJ\Delta J + \frac{n^3+n^2-2n-4+(n^2-1)a}{n}\langle J\nabla J,\nabla u\rangle\\ & \qquad + \frac{n(a+4)}{2}u^{-1}(\lvert\nabla u\rvert^2 + \lambda)\lvert E\rvert^2 \biggr) \mathop{\mathrm{dV}}. 
\end{aligned}$$ The final conclusion follows from the identity $$\int_M uJ\Delta J \mathop{\mathrm{dV}}= -\int_M \left( u\lvert\nabla J\rvert^2 + \langle J\nabla J,\nabla u \rangle\right) \mathop{\mathrm{dV}}. \qedhere$$ ◻ We now cancel the term involving $\langle J\nabla J,\nabla u\rangle$. **Lemma 11**. *Let $(M^n,g)$, $n \geq 3$, be a closed Riemannian manifold. Suppose that $a \in \mathbb{R}$ is such that $I_{a,0}$ is constant. Suppose additionally that the metric $\widehat{g}:= u^{-2}g$ satisfies $P^{\widehat{g}} = \frac{\lambda}{2}\widehat{g}$. Then $$\begin{aligned} 0 & = \int_M \biggl( 2u\lvert\nabla J\rvert^2 - \frac{2(3n-4+(n-1)a)}{n-1}E(\nabla J,\nabla u) \\ & \qquad\qquad + \frac{2n^2-4+(n-1)a}{n-1}uJ\lvert E\rvert^2 + \frac{n(a+4)}{2}u^{-1}(\lvert\nabla u\rvert^2+\lambda)\lvert E\rvert^2 \biggr) \mathop{\mathrm{dV}}. \end{aligned}$$* *Proof.* Subtract $\frac{n^3+n^2-4+(n^2-1)a}{n-1}$ times Equation [\[eqn:tf-JP\]](#eqn:tf-JP){reference-type="eqref" reference="eqn:tf-JP"} from Equation [\[eqn:cancel-scalar-Delta\]](#eqn:cancel-scalar-Delta){reference-type="eqref" reference="eqn:cancel-scalar-Delta"}. ◻ We are now ready to prove our first analogue of the Obata--Vétois Theorem: *Proof of [Theorem 1](#main-thm){reference-type="ref" reference="main-thm"}.* Denote $$C(n,a) := 4n^3 - 17n^2 + 28n - 16 + (n-1)(n^2-7n+8)a - (n-1)^2a^2 .$$ Observe that Condition [\[eqn:a-range\]](#eqn:a-range){reference-type="eqref" reference="eqn:a-range"} is equivalent to the assumption $C(n,a) \geq 0$. Direct computation gives $C(n,-4) < 0$ and $C(n,0)>0$. Since $a$ satisfies Condition [\[eqn:a-range\]](#eqn:a-range){reference-type="eqref" reference="eqn:a-range"}, we deduce that $a>-4$, and hence $2n^2-4+(n-1)a > 0$. Let $\widehat{g}= u^{-2}g$ be such that $P^{\widehat{g}} = \frac{\lambda}{2}\widehat{g}$. Since $n \geq 3$, the contracted second Bianchi identity implies that $\lambda$ is constant. Since $g$ has nonnegative scalar curvature, the Yamabe constant of $(M^n,g)$ is nonnegative. Thus $\lambda\geq0$, and $\lambda>0$ if $J \not= 0$. Now, the Cauchy--Schwarz Inequality implies that $$\begin{gathered} -\frac{2(3n-4+(n-1)a)}{n-1}E(\nabla J,\nabla u) \geq -2u\lvert\nabla J\rvert^2 \\ - \frac{(3n-4+(n-1)a)^2}{2(n-1)^2}u^{-1}\lvert E\rvert^2\lvert\nabla u\rvert^2 . \end{gathered}$$ Combining this with [Lemma 11](#cancel-nabla-J2){reference-type="ref" reference="cancel-nabla-J2"} yields $$\begin{gathered} 0 \geq \int_M \biggl( \frac{2n^2-4+(n-1)a}{n-1}uJ\lvert E\rvert^2 + \frac{n(a+4)}{2}\lambda u^{-1}\lvert E\rvert^2 \\ + \frac{C(n,a)}{2(n-1)^2}u^{-1}\lvert\nabla u\rvert^2\lvert E\rvert^2 \biggr) \mathop{\mathrm{dV}}. \end{gathered}$$ If $\lambda>0$, then $E=0$. If instead $\lambda=0$, then $\lvert E\rvert^2\lvert\nabla u\rvert^2=0$. Suppose that $p \in M$ is such that $E_p \not= 0$. Then $\nabla u=0$ in a neighborhood of $p$. Equation [\[eqn:conformally-einstein\]](#eqn:conformally-einstein){reference-type="eqref" reference="eqn:conformally-einstein"} implies that $E_p=0$, a contradiction. Thus again $E=0$. Therefore $g$ is Einstein. ◻ # Proof of [Theorem 2](#main-thm-q){reference-type="ref" reference="main-thm-q"} {#sec:operators} The first step in proving [Theorem 2](#main-thm-q){reference-type="ref" reference="main-thm-q"} is to give sufficient conditions for the $I_{a,0}$-curvature to be nonnegative under the hypotheses of [Theorem 1](#main-thm){reference-type="ref" reference="main-thm"}. When $a\not=0$, we do this with the help of the $I_{-4,0}$-Yamabe constant (cf.
[@BransonChangYang1992]\*Lemma 5.4). **Lemma 12**. *Let $(M^n,g)$, $n \geq 3$, be a closed Riemannian manifold such that $\mathop{\mathrm{Ric}}= (n-1)\lambda g \geq 0$. Then $$\inf_{\widehat{g}\in [g]} \left\{ \int_M (J^{\widehat{g}})^2 \mathop{\mathrm{dV}}_{\widehat{g}} \mathrel{}:\mathrel{}\mathop{\mathrm{Vol}}_{\widehat{g}}(M) = 1 \right\} = J^2\mathop{\mathrm{Vol}}(M)^{\frac{4}{n}} .$$ Moreover, $\widehat{g}$ is extremal if and only if it is Einstein. In particular, $$\begin{aligned} \inf_{\widehat{g}\in [g]} \left\{ \int_M I_{-4,0}^{\widehat{g}} \mathop{\mathrm{dV}}_{\widehat{g}} \mathrel{}:\mathrel{}\mathop{\mathrm{Vol}}_{\widehat{g}}(M) = 1 \right\} & = \frac{n-4}{2}J^2\mathop{\mathrm{Vol}}(M)^{\frac{4}{n}} , && \text{if $n \geq 5$} , \\ \sup_{\widehat{g}\in [g]} \left\{ \int_M I_{-4,0}^{\widehat{g}} \mathop{\mathrm{dV}}_{\widehat{g}} \mathrel{}:\mathrel{}\mathop{\mathrm{Vol}}_{\widehat{g}}(M) = 1 \right\} & = -\frac{1}{2}J^2\mathop{\mathrm{Vol}}(M)^{\frac{4}{3}} , && \text{if $n = 3$} , \end{aligned}$$ and $\widehat{g}$ is extremal if and only if it is Einstein.* *Proof.* Let $\widehat{g}\in [g]$ be such that $\mathop{\mathrm{Vol}}_{\widehat{g}}(M) = 1$. The resolution of the Yamabe Problem [@Aubin1976; @Schoen1984; @LeeParker1987; @Trudinger1968] and Obata's Theorem [@Obata1971]\*Proposition 6.2 imply that $$J\mathop{\mathrm{Vol}}(M)^{\frac{2}{n}} \leq \int_M J^{\widehat{g}} \mathop{\mathrm{dV}}_{\widehat{g}}$$ with equality if and only if $\widehat{g}$ is Einstein. Since $J \geq 0$, squaring both sides and applying Hölder's inequality yields $$J^2\mathop{\mathrm{Vol}}(M)^{\frac{4}{n}} \leq \left( \int_M J^{\widehat{g}} \mathop{\mathrm{dV}}_{\widehat{g}} \right)^2 \leq \int_M (J^{\widehat{g}})^2 \mathop{\mathrm{dV}}_{\widehat{g}}$$ with equality if and only if $\widehat{g}$ is Einstein. The final conclusion uses the identity $$\int_M I_{-4,0}^{\widehat{g}} \mathop{\mathrm{dV}}_{\widehat{g}} = \frac{n-4}{2}\int_M (J^{\widehat{g}})^2 \mathop{\mathrm{dV}}_{\widehat{g}} . \qedhere$$ ◻ leads to sufficient conditions for $I_{a,0}$ to be nonnegative. **Proposition 13**. *Let $(M^n,g)$, $n \geq 4$, be a conformally Einstein manifold with nonnegative Yamabe constant. Suppose additionally that $a \geq -4$ is such that $I_{a,0}$ is constant; if $n \geq 5$, then assume also that $a \leq 0$. Then $I_{a,0} \geq 0$.* *Proof.* Let $g_0 \in [g]$ be such that $\mathop{\mathrm{Ric}}_{g_0} = (n-1)\lambda g_0$. Then $\lambda$ is a nonnegative constant. Moreover (cf. [@Gover2006q]\*Theorem 1.2), $$\label{eqn:factorization} \begin{split} Q^{g_0} & = \frac{n(n^2-4)}{8}\lambda^2 , \\ L_4^{g_0} & = \left( -\Delta_{g_0} + \frac{n(n-2)}{4}\lambda \right)\left( -\Delta_{g_0} + \frac{(n+2)(n-4)}{4}\lambda \right) . 
\end{split}$$ In particular, if $n \geq 5$, then writing $g = u^{\frac{4}{n-4}}g_0$ and using the conformal transformation law [@Paneitz1983]\*Theorem 1 for the Paneitz operator yields $$\label{eqn:paneitz-estimate} \int_M Q \mathop{\mathrm{dV}}= \frac{2}{n-4}\int_M uL_4^{g_0}u \mathop{\mathrm{dV}}_{g_0} \geq \frac{n(n^2-4)}{8}\lambda^2\int_M u^2 \mathop{\mathrm{dV}}_{g_0} \geq 0 .$$ Now write $$I_{a,0} = \frac{a+4}{4}Q - \frac{a}{4}I_{-4,0} .$$ If $n=4$, then $I_{-4,0} = -\Delta J$, and hence the conformal invariance of the total $Q$-curvature [@ChangYang1995] yields $$I_{a,0} \mathop{\mathrm{Vol}}(M) = \int_M I_{a,0}\mathop{\mathrm{dV}}= \frac{a+4}{4}\int_M Q^{g_0}\mathop{\mathrm{dV}}_{g_0} \geq 0 .$$ If instead $n \geq 5$, then combining Inequality [\[eqn:paneitz-estimate\]](#eqn:paneitz-estimate){reference-type="eqref" reference="eqn:paneitz-estimate"} with [Lemma 12](#DJ-yamabe){reference-type="ref" reference="DJ-yamabe"} yields $$I_{a,0}\mathop{\mathrm{Vol}}(M) = \int_M I_{a,0} \mathop{\mathrm{dV}}\geq -\frac{a}{4}\int_M I_{-4,0} \mathop{\mathrm{dV}}\geq 0 .$$ In either case we conclude that $I_{a,0} \geq 0$. ◻ Removing the assumption on the sign of the scalar curvature from [Theorem 1](#main-thm){reference-type="ref" reference="main-thm"} requires a generalization of an observation of Gursky [@Gursky1998]\*Lemma 1.2. **Lemma 14**. *Let $(M^n,g)$, $n \geq 3$, be a closed Riemannian manifold with $L_2J \geq 0$.* 1. *If $L_2 > 0$, then $J > 0$.* 2. *If $L_2 \geq 0$ and $\ker L_2$ is nonempty, then $J=0$.* *Proof.* Let $\lambda \geq 0$ be the first eigenvalue of $L_2$. A standard variational argument implies that there is a positive $u \in C^\infty(M)$ such that $L_2u = \lambda u$. Set $F := \frac{J}{u}$. Direct computation gives $$\Delta F + 2\langle\nabla F,\nabla\ln u\rangle= -u^{-1}L_2J + \lambda F \leq \lambda F .$$ The conclusion now follows from the Strong Maximum Principle. ◻ We now prove our second version of the Obata--Vétois Theorem: *Proof of [Theorem 2](#main-thm-q){reference-type="ref" reference="main-thm-q"}.* Let $g_0 \in [g]$ be such that $\mathop{\mathrm{Ric}}_{g_0} = (n-1)\lambda g_0$. [Case 1]{.ul}: $a=0$. We first show that $Q \geq 0$ with equality if and only if $\lambda=0$. Write $g = u^{\frac{4}{n-4}}g_0$ if $n\not=4$ and $g = e^{2u}g_0$ if $n=4$. The conformal transformation law for the $Q$-curvature [@Branson1995]\*p. 3679 yields $$\label{eqn:transformation} \begin{aligned} u^{\frac{n+4}{n-4}}Q & = \frac{2}{n-4}L_4^{g_0}(u) , && \text{if $n\not=4$} , \\ e^{4u}Q & = Q^{g_0} + L_4^{g_0}(u) , && \text{if $n=4$} . \end{aligned}$$ Since $Q$ is constant, integrating Equation [\[eqn:transformation\]](#eqn:transformation){reference-type="eqref" reference="eqn:transformation"} with respect to $\mathop{\mathrm{dV}}_{g_0}$ and applying Equation [\[eqn:factorization\]](#eqn:factorization){reference-type="eqref" reference="eqn:factorization"} yields $Q \geq 0$ with equality if and only if $\lambda=0$. We now show that $g$ is Einstein. If $\lambda=0$, then Equations [\[eqn:factorization\]](#eqn:factorization){reference-type="eqref" reference="eqn:factorization"} and [\[eqn:transformation\]](#eqn:transformation){reference-type="eqref" reference="eqn:transformation"} yield $u \in \ker L_4^{g_0} = \ker \Delta_{g_0}^2$. Thus $u$ is constant, and hence $g$ is Einstein. If instead $\lambda>0$, then a maximum principle argument [@Vetois2022]\*Theorem 2.3 implies that $J > 0$. 
We then conclude from [Theorem 1](#main-thm){reference-type="ref" reference="main-thm"} that $g$ is Einstein. [Case 2]{.ul}: $a \not= 0$. We first observe that $I_{a,0} \geq 0$. If $n=3$, then this is true by assumption. Otherwise it follows from [Proposition 13](#sign){reference-type="ref" reference="sign"}. We now show that $g$ is Einstein. We deduce from Equation [\[eqn:Ia-via-E\]](#eqn:Ia-via-E){reference-type="eqref" reference="eqn:Ia-via-E"} that $$0 \leq I_{a,0} = L_2J - \frac{a+4}{2}\lvert E\rvert^2 + \frac{2(n-2)+(n-1)a}{2n}J^2 \leq L_2J .$$ now yields $J^g \geq 0$. implies that $g$ is Einstein. ◻ # Applications {#sec:applications} Case, Lin, and Yuan [@CaseLinYuan2018b] showed that to each conformally variational scalar Riemannian invariant one can associate a formally self-adjoint, conformally covariant polydifferential operator in a way which generalizes the relationship between the $Q$-curvatures and the GJMS operators [@Branson1995]. The operators associated to the $I_{a,b}$-curvatures yield the relationship between the $I_{a,b}$-Yamabe constants and sharp Sobolev constants needed to derive [\[sobolev-dim3,sobolev-dim5+\]](#sobolev-dim3,sobolev-dim5+){reference-type="ref" reference="sobolev-dim3,sobolev-dim5+"}. **Proposition 15**. *Let $(M^n,g)$, $n \not= 4$, be a closed Riemannian manifold such that $\mathop{\mathrm{Ric}}= (n-1)\lambda g$. Fix $a,b \in \mathbb{R}$. Given $u \in C^\infty(M)$, set $g_u := u^{\frac{8}{n-4}}g$. Then $$\begin{aligned} \MoveEqLeft \frac{n-4}{2}\int_M I_{a,b}^{g_u} \mathop{\mathrm{dV}}_{g_u} = \int_M \Biggl( (\Delta u^2)^2 - \frac{16}{(n-4)^2}a\lvert\nabla u\rvert^4 - \frac{4}{n-4}a\lvert\nabla u\rvert^2\Delta u^2 \\ & \quad + \left( \frac{n^2-2n-4}{2} + \frac{n-1}{2}a\right)\lambda\lvert\nabla u^2\rvert^2 \\ & \quad + \frac{n-4}{2}\biggl( \frac{n(n^2-4)}{8}\lambda^2 + \frac{n(n-1)}{8}a\lambda^2 + b\lvert W\rvert^2\biggr) u^4 \Biggr) \mathop{\mathrm{dV}}. \end{aligned}$$* *Proof.* First, the conformal covariance of the Weyl tensor immediately gives $$\int_M \lvert W^{g_u} \rvert_{g_u}^2 \mathop{\mathrm{dV}}_{g_u} = \int_M \lvert W \rvert^2u^4 \mathop{\mathrm{dV}}.$$ Second, the conformal transformation law [@Paneitz1983]\*Theorem 1 for the Paneitz operator implies that $$\frac{n-4}{2}\int_M Q^{g_u} \mathop{\mathrm{dV}}_{g_u} = \int_M u^2L_4(u^2) \mathop{\mathrm{dV}}.$$ Third, the conformal covariance [@Case2019fl]\*Theorem 2.1 and Remark 2.2 of the operator $$\begin{gathered} L_{\sigma_2}(u) := \frac{1}{2}\delta\left( \lvert\nabla u\rvert^2 \, du \right) - \frac{n-4}{16}\left( u\Delta\lvert\nabla u\rvert^2 - \delta\bigl( (\Delta u^2) \, du \bigr) \right) \\ - \frac{1}{2}\left( \frac{n-4}{4} \right)^2u\delta\left( (Jg - P)(\nabla u^2) \right) + \left( \frac{n-4}{4} \right)^3\sigma_2u^3 \end{gathered}$$ implies that $$\left( \frac{n-4}{4} \right)^3 \int_M \sigma_2^{g_u} \mathop{\mathrm{dV}}_{g_u} = \int_M u L_{\sigma_2}(u) \mathop{\mathrm{dV}}.$$ Combining these formulas with the fact $P = \frac{\lambda}{2}g$ yields the desired conclusion. ◻ Computing the $Q$-Yamabe constant, and hence the $I_{a,b}$-Yamabe constants, requires separately considering the cases $n\geq 5$, $n=3$, and $n=4$. ## The case of dimension at least five {#subsec:dim5} An existence result of Gursky and Malchiodi [@GurskyMalchiodi2014] allows us to compute the $Q$-Yamabe constant of a closed Einstein manifold in dimension at least five. **Proposition 16**. *Let $(M^n,g)$, $n \geq 5$, be a closed Riemannian manifold with $\mathop{\mathrm{Ric}}= (n-1)\lambda g \geq 0$. 
Then $$\label{eqn:q-constant-5} \inf_{\widehat{g}\in [g]} \left\{ \int_M Q^{\widehat{g}} \mathop{\mathrm{dV}}_{\widehat{g}} \mathrel{}:\mathrel{}\mathop{\mathrm{Vol}}_{\widehat{g}}(M) = 1 \right\} = \frac{\Gamma\bigl(\frac{n+4}{2}\bigr)}{\Gamma\bigl(\frac{n-4}{2}\bigr)}\lambda^2\mathop{\mathrm{Vol}}(M)^{\frac{4}{n}} .$$ Moreover, $\widehat{g}$ is extremal if and only if it is Einstein.* *Proof.* If $\lambda=0$, then $L_4^g=\Delta^2$. Therefore $$\begin{gathered} \inf_{\widehat{g}\in [g]} \left\{ \int_M Q^{\widehat{g}} \mathop{\mathrm{dV}}_{\widehat{g}} \mathrel{}:\mathrel{}\mathop{\mathrm{Vol}}_{\widehat{g}}(M) = 1 \right\} \\ = \inf_{0<u\in C^\infty(M)} \left\{ \frac{2}{n-4}\int_M uL_4u \, \mathop{\mathrm{dV}}\mathrel{}:\mathrel{}\int_M u^{\frac{2n}{n-4}}\mathop{\mathrm{dV}}= 1 \right\} = 0 . \end{gathered}$$ Moreover, $u$ extremizes $\int uL_4u \mathop{\mathrm{dV}}$ if and only if $u$ is constant. If instead $\lambda>0$, then $g$ has positive scalar curvature and positive $Q$-curvature. Hence [@GurskyMalchiodi2014]\*p. 2140 there is a metric $\widetilde{g}= e^{2u}g$ such that $\mathop{\mathrm{Vol}}_{\widetilde{g}}(M)=1$ and $$Q^{\widetilde{g}} = \inf_{\widehat{g}\in [g]} \left\{ \int_M Q^{\widehat{g}} \mathop{\mathrm{dV}}_{\widehat{g}} \mathrel{}:\mathrel{}\mathop{\mathrm{Vol}}_{\widehat{g}}(M) = 1 \right\} .$$ [Theorem 2](#main-thm-q){reference-type="ref" reference="main-thm-q"} implies that $\widetilde{g}$ is Einstein. Therefore [@Obata1962]\*Theorem A there is a diffeomorphism $\Phi \colon M \to M$ such that $\Phi^\ast\widetilde{g}= cg$ for some constant $c>0$. Equation [\[eqn:q-constant-5\]](#eqn:q-constant-5){reference-type="eqref" reference="eqn:q-constant-5"} readily follows. ◻ We also need to compute the $\lvert W\rvert^2$-Yamabe constant for certain manifolds. **Lemma 17**. *Let $(M^n,g)$, $n \geq 3$, be a closed Riemannian manifold such that $\lvert W\rvert^2$ is constant. Then $$\sup_{\widehat{g}\in [g]} \left\{ \int_M \lvert W^{\widehat{g}} \rvert^2_{\widehat{g}} \mathop{\mathrm{dV}}_{\widehat{g}} \mathrel{}:\mathrel{}\mathop{\mathrm{Vol}}_{\widehat{g}}(M) = 1 \right\} = \lvert W \rvert^2 \mathop{\mathrm{Vol}}(M)^{\frac{4}{n}} .$$ Moreover, $\widehat{g}$ is extremal if and only if $W=0$ or $\widehat{g}$ is homothetic to $g$.* *Proof.* Let $\widehat{g}\in [g]$ be such that $\mathop{\mathrm{Vol}}_{\widehat{g}}(M)=1$. Define $u \in C^\infty(M)$ by $\widehat{g}= e^{2u}g$. Then $\int e^{nu}\mathop{\mathrm{dV}}= 1$. Combining these observations with Hölder's inequality yields $$\int_M \lvert W^{\widehat{g}} \rvert_{\widehat{g}}^2 \mathop{\mathrm{dV}}_{\widehat{g}} = \lvert W\rvert^2\int_M e^{(n-4)u}\mathop{\mathrm{dV}}\leq \lvert W\rvert^2 \mathop{\mathrm{Vol}}(M)^{\frac{4}{n}}$$ with equality if and only if $W=0$ or $u$ is constant. ◻ Since the minimizers of the $Q$-, $I_{-4,0}$-, and $(-\lvert W\rvert^2)$-Yamabe constants are the same on Einstein manifolds with $\lvert W\rvert^2$ constant, we can compute the Yamabe-type constants of their convex combinations.
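The bookkeeping behind this remark, which the proofs below use repeatedly, is the decomposition $I_{a,b}=\frac{a+4}{4}Q-\frac{a}{4}I_{-4,0}+b\lvert W\rvert^2$. The following sketch verifies only this formal algebra, treating $Q$, $\sigma_2$, and $\lvert W\rvert^2$ as independent symbols.

```python
from sympy import symbols, Rational, simplify

# Formal check of the decomposition I_{a,b} = ((a+4)/4) Q - (a/4) I_{-4,0} + b |W|^2,
# where I_{-4,0} = Q - 4 sigma_2.  Here Q, sigma_2, |W|^2 are treated as independent
# symbols, so this verifies only the bookkeeping, not any geometric statement.
a, b, Q, s2, W2 = symbols('a b Q sigma2 W2')
I_ab = Q + a * s2 + b * W2
I_m4 = Q - 4 * s2
decomp = Rational(1, 4) * (a + 4) * Q - Rational(1, 4) * a * I_m4 + b * W2
print(simplify(I_ab - decomp))   # 0
# For a in [-4, 0] the coefficients (a+4)/4 and -a/4 are nonnegative, and b <= 0
# multiplies |W|^2, which is why the three extremal problems can be combined.
```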
*Proof of [Theorem 3](#yamabe-dim5+){reference-type="ref" reference="yamabe-dim5+"}.* Note that $$I_{a,b} = \frac{a+4}{4}Q - \frac{a}{4}I_{-4,0} + b\lvert W\rvert^2 .$$ Since $-4 \leq a \leq 0$ and $b\leq 0$, we immediately deduce from [\[W-yamabe,DJ-yamabe,q-constant-5\]](#W-yamabe,DJ-yamabe,q-constant-5){reference-type="ref" reference="W-yamabe,DJ-yamabe,q-constant-5"} that $$\inf_{\widehat{g}\in [g]} \left\{ \int_M I_{a,b}^{\widehat{g}} \mathop{\mathrm{dV}}_{\widehat{g}} \mathrel{}:\mathrel{}\mathop{\mathrm{Vol}}_{\widehat{g}}(M) = 1 \right\} = I_{a,b} \mathop{\mathrm{Vol}}(M)^{\frac{4}{n}} ,$$ and moreover, that $\widehat{g}$ is extremal if and only if it is Einstein. The explicit value [\[eqn:yamabe-dim5+-constant\]](#eqn:yamabe-dim5+-constant){reference-type="eqref" reference="eqn:yamabe-dim5+-constant"} follows by direct computation. ◻ Writing the total $I_{a,b}$-curvature with respect to a background Einstein metric yields our functional inequality. *Proof of [Corollary 4](#sobolev-dim5+){reference-type="ref" reference="sobolev-dim5+"}.* Combine [Theorem 3](#yamabe-dim5+){reference-type="ref" reference="yamabe-dim5+"} with [Proposition 15](#energy){reference-type="ref" reference="energy"}. ◻ ## The case of dimension three {#subsec:dim3} An existence result of Hang and Yang [@HangYang2016t] allows us to compute the $Q$-Yamabe constant of a closed Einstein three-manifold (cf. [@HangYang2004; @YangZhu2004; @ChoiXu2009]). To that end, recall that a closed three-manifold $(M^3,g)$ satisfies **Condition (NN)** if $\int uL_4u \mathop{\mathrm{dV}}\geq 0$ for every $u \in C^\infty(M)$ such that $u^{-1}(\{0\})\not=\emptyset$. **Proposition 18**. *Let $(M^3,g)$ be a closed Riemannian three-manifold with $\mathop{\mathrm{Ric}}= 2\lambda g \geq 0$. Then $$\label{eqn:q-constant-3} \sup_{\widehat{g}\in [g]} \left\{ \int_M Q^{\widehat{g}} \mathop{\mathrm{dV}}_{\widehat{g}} \mathrel{}:\mathrel{}\mathop{\mathrm{Vol}}_{\widehat{g}}(M) = 1 \right\} = \frac{15}{8}\lambda^2\mathop{\mathrm{Vol}}(M)^{\frac{4}{3}} .$$ Moreover, $\widehat{g}$ is extremal if and only if it is Einstein.* *Proof.* If $\lambda=0$, then the Paneitz operator is $L_4=\Delta^2$. Therefore $$\begin{gathered} \sup_{\widehat{g}\in [g]} \left\{ \int_M Q^{\widehat{g}} \mathop{\mathrm{dV}}_{\widehat{g}} \mathrel{}:\mathrel{}\mathop{\mathrm{Vol}}_{\widehat{g}}(M) = 1 \right\} \\ = \inf_{0<u\in C^\infty(M)} \left\{ 2\int_M (\Delta u)^2 \, \mathop{\mathrm{dV}}\mathrel{}:\mathrel{}\int_M u^{-6}\mathop{\mathrm{dV}}= 1 \right\} = 0 . \end{gathered}$$ Moreover, $u$ extremizes $\int uL_4u \mathop{\mathrm{dV}}$ if and only if it is constant. If instead $\lambda>0$, then $(M^3,g)$ is a finite quotient of the round three-sphere. Since the round three-sphere satisfies Condition (NN) [@HangYang2004]\*Corollary 7.1, we conclude by a straightforward covering argument that $(M^3,g)$ also satisfies Condition (NN). Therefore there is [@HangYang2016t]\*Theorem 1.2 a metric $\widetilde{g}= e^{2u}g$ such that $$Q^{\widetilde{g}} = \sup_{\widehat{g}\in [g]} \left\{ \int_M Q^{\widehat{g}} \mathop{\mathrm{dV}}_{\widehat{g}} \mathrel{}:\mathrel{}\mathop{\mathrm{Vol}}_{\widehat{g}}(M) = 1 \right\} \geq \frac{15}{8}\lambda^2\mathop{\mathrm{Vol}}(M)^{\frac{4}{3}} .$$ implies that $\widetilde{g}$ is Einstein. Equation [\[eqn:q-constant-3\]](#eqn:q-constant-3){reference-type="eqref" reference="eqn:q-constant-3"} now follows as in the proof of [Proposition 16](#q-constant-5){reference-type="ref" reference="q-constant-5"}. 
◻ Since the maximizers of the $Q$- and $I_{-4,0}$-Yamabe constants are the same, the same is true of their convex combinations. *Proof of [Theorem 5](#yamabe-dim3){reference-type="ref" reference="yamabe-dim3"}.* Note that $$I_{a,0} = \frac{a+4}{4}Q - \frac{a}{4}I_{-4,0} .$$ Since $a \in [-4,0]$, we deduce from [\[DJ-yamabe,q-constant-3\]](#DJ-yamabe,q-constant-3){reference-type="ref" reference="DJ-yamabe,q-constant-3"} that $$\sup_{\widehat{g}\in [g]} \left\{ \int_M I_{a,0}^{\widehat{g}} \mathop{\mathrm{dV}}_{\widehat{g}} \mathrel{}:\mathrel{}\mathop{\mathrm{Vol}}_{\widehat{g}}(M) = 1 \right\} = I_{a,0} \mathop{\mathrm{Vol}}(M)^{\frac{4}{3}} ,$$ and moreover, that $\widehat{g}$ is extremal if and only if it is Einstein. The explicit value [\[eqn:yamabe-dim3-constant\]](#eqn:yamabe-dim3-constant){reference-type="eqref" reference="eqn:yamabe-dim3-constant"} follows by direct computation. ◻ Writing the total $I_{a,b}$-curvature with respect to a background Einstein metric yields our functional inequality. *Proof of [Corollary 6](#sobolev-dim3){reference-type="ref" reference="sobolev-dim3"}.* Combine [Theorem 5](#yamabe-dim3){reference-type="ref" reference="yamabe-dim3"} with [Proposition 15](#energy){reference-type="ref" reference="energy"}. ◻ ## The case of dimension four {#subsec:dim4} In dimension four we directly compute the infimum of the $II$-functional. **Proposition 19**. *Let $(M^4,g)$ be a closed Riemannian four-manifold with $\mathop{\mathrm{Ric}}= 3\lambda g \geq 0$. Then $$\inf \left\{ II(u) \mathrel{}:\mathrel{}u \in C^\infty(M) \right\} = 0 .$$ Moreover, $u$ is extremal if and only if $e^{2u}g$ is Einstein.* *Proof.* We directly compute that $Q = 6\lambda^2$ and $L_4=-\Delta(-\Delta + 2\lambda)$. Suppose first that $\lambda = 0$. Then $$II(u) = \int_M uL_4u \mathop{\mathrm{dV}}\geq 0$$ with equality if and only if $u$ is constant. Suppose now that $\lambda > 0$. Clearly $L_4 \geq 0$ with $\ker L_4$ equal to the constant functions. A result of Gursky [@Gursky1999]\*Theorem B implies that $\int Q \mathop{\mathrm{dV}}\leq 16\pi^2$ with equality if and only if $(M^4,g)$ is conformal to the round four-sphere. Existence results of Beckner [@Beckner1993]\*Theorem 1 and of Chang and Yang [@ChangYang1995]\*Theorem 1.2 yield a minimizer $v \in C^\infty(M)$ for $II$, and necessarily $e^{2v}g$ has constant $Q$-curvature. We deduce from [Theorem 2](#main-thm-q){reference-type="ref" reference="main-thm-q"} that $e^{2v}g$ is Einstein and hence---applying conformal invariance [@Beckner1993]\*Theorem 1 if $(M^4,g)$ is conformally equivalent to $(S^4,g)$---that $II(v)=0$. ◻ The classification of extremals of functional determinants again follows from a convexity argument. *Proof of [Theorem 7](#functional-determinant-extremal){reference-type="ref" reference="functional-determinant-extremal"}.* Recall [@ChangYang1995]\*p. 173 that $$III(u) = 12\left[ \int_M (J^{g_u})^2 \mathop{\mathrm{dV}}_{g_u} - \int_M (J^g)^2 \mathop{\mathrm{dV}}_{g} \right] ,$$ where $g_u := e^{2u}g$. [Lemma 12](#DJ-yamabe){reference-type="ref" reference="DJ-yamabe"} thus gives $III(u) \geq 0$ with equality if and only if $g_u$ is Einstein. Since $\lvert W\rvert^2$ is constant, we deduce from Jensen's inequality that $I(u) \leq 0$ with equality if and only if $u$ is constant. Combining these observations with [Proposition 19](#q-constant-4){reference-type="ref" reference="q-constant-4"} yields the final conclusion.
◻ # Acknowledgements {#acknowledgements .unnumbered} I would like to thank Matthew Gursky for helpful discussions about Vétois' argument and critical points of the functional determinant. I would also like to thank Yueh-Ju Lin for helpful discussions about higher-dimensional applications. This work was partially supported by the Simons Foundation (Grant \#524601), and by the Simons Foundation and the Mathematisches Forschungsinstitut Oberwolfach via a Simons Visiting Professorship. [^1]: A **natural scalar Riemannian invariant** is a linear combination of complete contractions of tensor products of $g$, its inverse, and covariant derivatives of its Riemann curvature tensor. [^2]: A functional $\mathcal{F}\colon [g] \to \mathbb{R}$ is **natural** if $\mathcal{F}(\Phi^\ast g) = \mathcal{F}(g)$ for any diffeomorphism $\Phi$ of $M$. [^3]: These are the volume-normalized versions of the conformal primitives for $\lvert W\rvert^2$, $Q$, and $-\Delta J$.
arxiv_math
{ "id": "2309.12431", "title": "The Obata-V\\'etois argument and its applications", "authors": "Jeffrey S. Case", "categories": "math.DG math.AP", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | In this paper we study the $s$-th symbolic powers of the edge ideals of complete graphs. In particular, we provide a criterion for finding an Eliahou-Kervaire splitting on these ideals, and use the splitting to provide a description for the graded Betti numbers. We also discuss the symbolic powers and graded Betti numbers of edge ideals of parallelizations of finite simple graphs. address: - | Department of Mathematics\ University of Manitoba\ 520 Machray Hall\ 186 Dysart Road\ Winnipeg, MB\ Canada R3T 2N2 - | Department of Mathematics and Economics\ Virginia State University\ 1 Hayden Drive\ Petersburg, VA\ USA 23806 - | Department of Mathematics\ University of Manitoba\ 420 Machray Hall\ 186 Dysart Road\ Winnipeg, MB\ Canada R3T 2N2 - | Department of Statistics\ University of Manitoba\ 330 Machray Hall\ 186 Dysart Road\ Winnipeg, MB\ Canada R3T 2N2 author: - Susan M. Cooper - Sergio Da Silva - Max Gutkin - Tessa Reimer bibliography: - Library.bib title: Splittings for symbolic powers of edge ideals of complete graphs --- # Introduction Edge ideals of finite simple graphs provide a rich collection of ideals for which many aspects of commutative algebra can be reduced to a combinatorial description (see [@Adam13]). For example, a primary decomposition of the edge ideal can be described using the facets of a simplicial complex via Stanley-Reisner theory. Graph-theoretic concepts such as graph colourings and vertex covers are also frequently used. It is natural to generalize this theory to other ideals which have similar combinatorial descriptions. Working with powers of square-free monomial ideals is one such direction (see [@Herzog-Hibi] for example). Another natural extension of this viewpoint is to consider symbolic powers of edge ideals of graphs. This would serve a two-fold purpose of retaining some of the combinatorics enjoyed by edge ideals while also shedding light on properties of symbolic powers. Indeed, symbolic powers have played a central role in many problems, yet their invariants are quite challenging to determine even when restricted to the family of square-free monomial ideals. The Waldschmidt constant and resurgence number for certain cases were studied in [@GHOS18], and questions about the symbolic Rees algebra were considered in [@Bahiano04]. In this article we study the graded Betti numbers for symbolic powers of edge ideals. In particular, with a focus on complete graphs, we highlight the fruitfulness of using Eliahou-Kervaire splittings as a way to obtain information about the graded Betti numbers of a symbolic power of an ideal. The symbolic power of a monomial ideal $I$ has a convenient description in terms of its primary decomposition, and is easier to describe compared to more general homogeneous ideals. Let $I\subset R = k[x_1, \ldots, x_m]$ be a monomial ideal, where $k$ is a field. Suppose that $I=I_1\cap\dots\cap I_r$ is a primary decomposition for $I$. Given a maximal associated prime ideal $Q$ of $I$ and a positive integer $s$, we define $$I_{\subseteq Q} = \bigcap_{\sqrt{I_\ell}\subseteq Q}I_\ell$$ so that the $s$-th **symbolic power** of $I$ is $$I^{(s)} = \bigcap_{Q\in\mathrm{maxAss}(I)}I_{\subseteq Q}^s,$$ where $\sqrt{I_\ell} = \{r \in R \mid r^n \in I_\ell \,\,\, \text{for some positive integer $n$}\}$ denotes the radical of $I_\ell$ and $\mathrm{maxAss}(I)$ denotes the set of associated primes of $I$ that are maximal with respect to inclusion. 
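As a quick computational illustration of this definition (a minimal sketch of ours, not part of the arguments below): when $I$ is a square-free monomial ideal whose primary components are generated by subsets of the variables, a monomial lies in $I_\ell^s$ precisely when its exponents over the variables of $I_\ell$ sum to at least $s$; this criterion is recorded below as Lemma [Lemma 6](#SymbolicGens){reference-type="ref" reference="SymbolicGens"}. The following Python snippet uses it to list the minimal monomial generators of $I^{(s)}$ for the edge ideal of the triangle $K_3$, whose associated primes $\langle x_1,x_2\rangle$, $\langle x_1,x_3\rangle$, $\langle x_2,x_3\rangle$ are all maximal with respect to inclusion, so that $I^{(s)}$ is simply the intersection of their $s$-th powers.

```python
from itertools import product

def symbolic_power_generators(primes, num_vars, s):
    """Minimal monomial generators of the intersection of P^s over the given
    monomial primes P (each a set of 0-based variable indices).  A monomial
    x^a lies in P^s exactly when its exponents over P sum to at least s."""
    # Exponents above s are never needed in a *minimal* generator.
    candidates = [a for a in product(range(s + 1), repeat=num_vars)
                  if all(sum(a[i] for i in P) >= s for P in primes)]
    # Keep only the exponent vectors that are minimal under divisibility.
    return sorted(a for a in candidates
                  if not any(b != a and all(b[i] <= a[i] for i in range(num_vars))
                             for b in candidates))

# Edge ideal of K_3: associated primes <x1,x2>, <x1,x3>, <x2,x3>.
primes_K3 = [{0, 1}, {0, 2}, {1, 2}]
for s in (2, 3):
    gens = symbolic_power_generators(primes_K3, 3, s)
    print(f"s = {s}:", ["".join(f"x{i+1}^{e}" for i, e in enumerate(a) if e)
                        for a in gens])
```

For $s=2$ this prints the four generators $x_1^2x_2^2$, $x_1^2x_3^2$, $x_2^2x_3^2$, $x_1x_2x_3$, and for $s=3$ the six generators of $I(K_3)^{(3)}$ used later in the paper, in agreement with Proposition [Proposition 7](#minGens){reference-type="ref" reference="minGens"} below.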
This definition doesn't depend on the primary decomposition since $I_{\subseteq Q} = R\cap IR_Q$ (see [@CEHH]). In Sections [3](#complete){reference-type="ref" reference="complete"} and [4](#parallel){reference-type="ref" reference="parallel"} we provide a description of the minimal monomial generating set for $I(G)^{(s)}$ when $G$ is a complete graph or a parallelization of a finite simple graph. These descriptions are necessary to study Eliahou-Kervaire splittings for symbolic powers of edge ideals, which in turn allow us to compute graded Betti numbers recursively. Theorem [Theorem 10](#further splittings){reference-type="ref" reference="further splittings"} provides a criterion for defining an Eliahou-Kervaire splitting in this context. A similar technique is employed for a specific family of monomial ideals found in [@Valla05]. Having a method to determine the graded Betti numbers of symbolic powers of edge ideals allows one to readily obtain information of related invariants of interest. We demonstrate this for the minimum socle degree in Section [3.4](#socle){reference-type="ref" reference="socle"}. It is known that symbolic powers of edge ideals of complete graphs are Cohen-Macaulay and have dimension 1 (see Lemma 3.15 for the relevant citations). Projectively these are fat points, and this implies that Section [3](#complete){reference-type="ref" reference="complete"} is studying zero-dimensional arithmetically Cohen-Macaulay schemes, which complements the content found in [@Tohaneanu-VanTuyl]. Finally, a discussion about the graded Betti numbers for graph parallelizations of finite simple graphs can be found in Section [4](#parallel){reference-type="ref" reference="parallel"}. The definition of a graph parallelization is given there, but the utility of this construction comes from the ability to define families of graphs with properties that are related to the original graph (which can be useful for finding examples or counter-examples). The arguments in this article work over any field $k$. While there exist some subtleties when working over fields of positive characteristic (for instance, [@FHV08 Example 4.2] illustrates a Betti splitting which fails to be a splitting except in characteristic 2), our arguments rely on a particular splitting map which is characteristic-free. # Preliminaries There is a rich theory involving edge ideals and their combinatorial properties. We refer the reader to [@Adam13] for a thorough overview of this theory. Throughout the paper, $G$ will denote an undirected finite simple graph with vertex set $V(G)=\{x_1,\ldots,x_m\}$ and edge set $E(G)$. Recall that $G$ is **simple** if it does not have multiple edges between vertices and if it does not have any loops at vertices. Let $k$ be a field and $R=k[x_1,\ldots,x_m]$. The **edge ideal** of $G$ is defined as the square-free monomial ideal $I(G)=\langle x_ix_j: \{x_i,x_j\}\in E(G)\rangle \subset R$. We mainly focus on complete graphs in this paper. Recall that the **complete graph** on $m$ vertices, denoted $K_m$, is the simple undirected graph for which every pair of distinct vertices is connected by exactly one unique edge. In the sections that follow, we will also consider subgraphs of graphs. An **induced subgraph** $H$ of $G$ is a graph with vertex set $V(H)\subseteq V(G)$ and edge set $E(H)=E(G)\cap[V(H)]^2$. That is, if $u,v\in V(H)$, then $u$ and $v$ are adjacent in $H$ if and only if they are adjacent in $G$. 
For example, the complete graph $K_3$ is an induced subgraph of $K_4$, whereas the subgraph with 4 vertices but no edges is not. Determining invariants of symbolic powers of homogeneous ideals in the polynomial ring can be quite challenging. Recall from the introduction that we can define the $s$-th symbolic power of a monomial ideal $I$ in terms of a primary decomposition of $I$. When $I$ is the edge ideal of a graph $G$, this becomes tractable via vertex covers of $G$. A subset $W \subseteq V(G)$ is said to be a **vertex cover** if $W \cap e \neq \emptyset$ for all $e \in E(G)$. A vertex cover $W$ is a **minimal vertex cover** if no proper subset of $W$ is a vertex cover of $G$. We have the following useful connection between vertex covers and $I(G)$. **Lemma 1** ([@Adam13 Theorem 1.34, Corollary 1.35]). *Let $W_1, \ldots, W_t$ be the minimal vertex covers of a graph $G$. If we set $\langle W_i \rangle = \langle x_j \mid x_j \in W_i \rangle$, then $I(G) = \langle W_1 \rangle \cap \cdots \cap \langle W_t \rangle$ is the minimal primary decomposition of $I(G)$.* One of the main goals of this paper is to investigate the use of a splitting technique due to Eliahou and Kervaire (in particular, a splitting for the symbolic powers of $I(G)$) to reduce the determination of graded Betti numbers to an induction on much simpler base cases. We denote the set of minimal monomial generators of a monomial ideal $I\subset R$ (which are unique) by $\mathcal{G}(I)$, and recall the following definition from [@FHV08]. **Definition 2**. *Let $I, J$ and $K$ be monomial ideals of $R = k[x_1,\ldots,x_m]$ such that $\mathcal{G}(I)$ is the disjoint union of $\mathcal{G}(J)$, $\mathcal{G}(K)$. We call $I=J+K$ an **Eliahou-Kervaire splitting** (or **E-K splitting**) if there exists a splitting function $\mathcal{G}(J\cap K)\longrightarrow \mathcal{G}(J)\times \mathcal{G}(K)$ sending $w\mapsto (\phi(w), \varphi(w))$ such that:* 1. *$w=\mathrm{lcm}(\phi(w),\varphi(w))$; and* 2. *for every subset $S\subset \mathcal{G}(J \cap K)$, both $\mathrm{lcm}(\phi(S))$ and $\mathrm{lcm}(\varphi(S))$ strictly divide $\mathrm{lcm}(S)$.* One benefit of having an Eliahou-Kervaire splitting is the ability to compute the graded Betti numbers of $I$ in terms of the graded Betti numbers for $J$ and $K$. Recall that the **$i,j$-th graded Betti number** of $I$ is by definition $\beta_{i,j}(I) = \dim_k\text{Tor}_i(k,I)_j$. That is, $\beta_{i,j}(I)$ is the number of copies of $R(-j)$ appearing in the $i$-th module of the graded minimal free resolution of $I$: $$0 \rightarrow \bigoplus_j R(-j)^{\beta_{\ell, j}(I)} \rightarrow \cdots \rightarrow \bigoplus_j R(-j)^{\beta_{1, j}(I)} \rightarrow \bigoplus_j R(-j)^{\beta_{0, j}(I)} \rightarrow I \rightarrow 0,$$ where $R(-j)$ is the polynomial ring $R$ shifted by degree $j$. **Lemma 3** ([@Fatabbi01 Proposition 3.2]). *Let $I, J$ and $K$ be monomial ideals of $k[x_1,\ldots,x_m]$ such that $I=J+K$ is an Eliahou-Kervaire splitting. Then $$\beta_{i,j}(I)=\beta_{i,j}(J)+\beta_{i,j}(K)+\beta_{i-1,j}(J\cap K),$$ for all $i\in\mathbb{N}$ and multidegrees $j$.* **Remark 4**. *The original version of this result was proved for total Betti numbers in [@EK90 Proposition 3.1] over an arbitrary field. 
The proof of Lemma [Lemma 3](#EK_Betti){reference-type="ref" reference="EK_Betti"} is also valid over any field, even if the author of [@Fatabbi01] assumes that the field is algebraically closed (which is needed later in [@Fatabbi01]).* All E-K splittings are examples of Betti splittings, which are a choice of monomial ideals $I=J+K$ where $\mathcal{G}(I) = \mathcal{G}(J) \sqcup\mathcal{G}(K)$ which also satisfy the graded Betti number equality from Lemma [Lemma 3](#EK_Betti){reference-type="ref" reference="EK_Betti"}. The distinction will not be important for us since all of the splittings in the sections that follow are E-K splittings. See [@FHV08] for more information on Betti splittings. We finish with a simple auxiliary lemma, which we leave as an exercise. **Lemma 5**. *If $I\subset k[x_1,\ldots,x_m]$ is a monomial ideal, then $\beta_{i,j}(x_{\ell}I)=\beta_{i,j-1}(I)$ for all $i,j\geq 1$, $1\leq \ell \leq m$.* # Symbolic Powers Of Edge Ideals Of Complete Graphs {#complete} We denote the complete graph on $m$ vertices by $K_m$ and label its vertices by $x_1, \ldots, x_m$. Fix $R = k[x_1, \ldots, x_m]$. Our main goal is to determine the graded Betti numbers for the symbolic powers of the edge ideal of $K_m$. In order to make use of the Eliahou-Kervaire splitting technique, we need a convenient description for the minimal monomial generators of symbolic powers of $I(K_m)$. The following lemma will simplify this task. **Lemma 6** ([@Wald15 Lemma 2.6]). *Let $I \subset R = k[x_1,\ldots,x_m]$ be a square-free monomial ideal with minimal primary decomposition $I=P_1\cap\hspace{0.25mm}\cdots\hspace{0.25mm}\cap P_n$ with $P_{\ell}=\langle x_{j_1},\ldots,x_{j_{\alpha_{\ell}}}\rangle$ for $\ell=1,\ldots,n$. Then $x_1^{a_1}\cdots x_m^{a_m}\in I^{(s)}$ if and only if $a_{j_1}+\cdots +a_{j_{\alpha_{\ell}}}\geq s$ for $\ell=1,\ldots,n$.* We now determine the minimal monomial generating set $\mathcal{G}(I(K_m)^{(s)})$ for the $s$-th symbolic power of $I(K_m) \subset R$. **Proposition 7**. *If $I=I(K_m) \subset R$ and $s \geq 2$, then $I^{(s)}$ has minimal monomial generating set $$\mathcal{L} :=\{x_1^{a_1}\cdots x_m^{a_m}: \exists \,\, 1 \leq i \leq m \,\, \text{with} \,\, \sum_{j \not = i}a_j = s, a_i = \max_{j \not = i} \{a_j\}\}.$$ That is, $\mathcal{G}(I^{(s)}) = \mathcal{L}$.* *Proof.* Let $x=x_1^{a_1}\cdots x_m^{a_m}\in \mathcal{L}$ where $a_i\geq 0$ for each $1 \leq i \leq m$. Without loss of generality, suppose that $a_1+\cdots +a_{m-1}=s$ and $a_m=\max\{a_1,\ldots,a_{m-1}\}$. Note that any choice of $m-1$ vertices of $K_m$ defines a minimal vertex cover for $K_m$, and so by Lemma [Lemma 1](#vertexcovers){reference-type="ref" reference="vertexcovers"} we can write $I = \bigcap_{i=1}^m \langle x_1,\ldots,\hat{x_i},\ldots,x_m\rangle$, which is the minimal primary decomposition for $I$. Then, by Lemma [Lemma 6](#SymbolicGens){reference-type="ref" reference="SymbolicGens"}, $x\in I^{(s)}$ since every subset of $\{a_1,\ldots,a_m\}$ of size $m-1$ sums to a value larger than or equal to $s$. Conversely, suppose that $x=x_1^{a_1}\cdots x_m^{a_m}\in I^{(s)}$. Without loss of generality, we may assume that $a_m=\max\{a_1,\ldots,a_m\}$ so that $a_1+\cdots+a_{m-1}\geq s$ by Lemma [Lemma 6](#SymbolicGens){reference-type="ref" reference="SymbolicGens"}. Then $x$ is clearly in the ideal $\langle \mathcal{L} \rangle$, proving that $\mathcal{L}$ is a generating set for $I^{(s)}$. 
To see that $\mathcal{L} = \mathcal{G}(I^{(s)})$, suppose $x=x_1^{a_1}\cdots x_m^{a_m}$ and $y=x_1^{b_1}\cdots x_m^{b_m}$ are both monomials in $\mathcal{L}$ such that $x$ divides $y$ (i.e. $b_t \geq a_t$ for $t=1,\ldots,m$). Suppose $1\leq i,j\leq m$ are such that $$a_1+\cdots +a_m-a_i = b_1+\cdots +b_m-b_j=s \,\, \text{with} \,\, a_i=\max\{a_1,\ldots, a_m\}, \,\, b_j=\max\{b_1,\ldots,b_m\}.$$ Then $b_j\geq b_i\geq a_i$ and we can write $b_j -a_i = c \geq 0$. Thus, $$b_1+\cdots +b_m = s+b_j = s+a_i+c = a_1+\cdots+a_m+c.$$ Therefore, $$c = \sum_{\ell=1}^m (b_{\ell} -a_{\ell}) = (b_j-a_i) + (b_i-a_j) + \sum_{\ell \neq i,j}^m (b_{\ell} -a_{\ell}) \implies (b_i-a_j)+ \sum_{\ell \neq i,j}^m (b_{\ell} -a_{\ell}) = 0.$$ Note that since $a_i=\max\{a_1,\ldots,a_m\}$, we have that $a_j\leq a_i\leq b_i$ and so $b_i-a_j\geq 0$. Then each term in the previous equation is non-negative showing that $a_{\ell}=b_{\ell}$ for $\ell \neq i,j$ and $b_i=a_j$. Since $b_j = \max\{b_1, \ldots, b_m\}$, there must be some $t \not = j$ such that $b_t = b_j$ (by the definition of $\mathcal{L}$). Now if $b_j \not = b_i$, then $t \not = i$ and $a_t = b_t = b_j > b_i \geq a_i=\max\{a_1,\ldots, a_m\}$, a contradiction. Thus, $b_j = b_i = a_j$. We conclude that $x=y$, as required. ◻ ## E-K Splittings We are now in a position to use splittings to determine the graded Betti numbers of symbolic powers of the edge ideal of a complete graph $K_m$. As before, we let $R= k[x_1,\ldots,x_m]$. Observe that if we set $G=K_m$ and fix $0\leq r \leq m$, then we can view $H=K_r$ as an induced subgraph of $K_m$ where $V(K_r) = \{x_1,\ldots, x_r\}$ (here $K_0$ is the null subgraph and $I(K_0)$ is the zero ideal). **Definition 8**. *Let $G = K_m$ and $H = K_r$ for some fixed $0 \leq r \leq m$. If $s \geq 2$ is an integer and $r \not = m$, then $$I_{H,s} = \langle w\in \mathcal{G}(I(G)^{(s)}):x_i\nmid w, i=r+1,\ldots,m \rangle \,\,\, \text{and} \,\,\, I_{G\setminus H,s} = I(G)^{(s)}\cap \langle \prod_{j=r+1}^m x_j\rangle.$$ By convention, if $r=m$, then we define $I_{H,s} = I_{G\setminus H,s} = I(G)^{(s)}$.* In general, if $w=x_1^{a_1}\cdots x_m^{a_m}\in \mathcal{G}(I(K_m)^{(s)})$, then $w\in \mathcal{G}(I_{H,s})$ if $a_{r+1}=\cdots=a_m=0$ and $w\in \mathcal{G}( I_{G\setminus H,s})$ if $a_i\neq 0$ for all $i\in\{r+1,\ldots, m\}$. Also, observe that $I_{H,s}$ can be viewed as the extension of $I(H)^{(s)} \subset k[x_1, \ldots, x_r]$ to the ring $k[x_1, \ldots, x_n]$. **Lemma 9**. *If $m \geq 3, s\geq 2, G=K_m$ and $H=K_{m-1}$, then $I_{H,s}\cap I_{G\setminus H,s} = x_mI_{H,s}$.* *Proof.* Note first that $I_{H,s}$ is an ideal contained in $I(G)^{(s)}$, and so $I_{H,s}\cap I(G)^{(s)}=I_{H,s}$.\ Thus, $I_{H,s}\cap I_{G\setminus H,s} = I_{H,s}\cap I(G)^{(s)}\cap \langle x_m\rangle = I_{H,s}\cap \langle x_m \rangle$. Since no generator of $I_{H,s}$ is divisible by $x_m$, it is clear that $I_{H,s}\cap \langle x_m \rangle=x_m I_{H,s}$. ◻ We now define an E-K splitting for ideals of the form $I_{K_m\setminus K_r,s}$. The ideal $I(K_m)^{(s)}$ is a special case, and an E-K splitting for it will follow as a corollary of the next theorem. These results are needed as part of the induction for the next section. **Theorem 10**. *Let $m \geq 3, s\geq 2$ and $r \in \{1, \ldots, m \mid r \not = m-s-1\}$. 
If $G=K_m, H=K_{r}$, $$L_1=\langle w\in \mathcal{G}( I_{G\setminus H,s} ): x_{r}\mid w\rangle = I_{G\setminus H,s} \cap \langle x_{r}\rangle, \,\,\, \text{ and } \,\,\, L_2=\langle w\in \mathcal{G}( I_{G\setminus H,s} ):x_{r}\nmid w\rangle,$$ then $I_{G\setminus H,s}=L_1+L_2$ is an Eliahou-Kervaire splitting.* *Proof.* We first note that the minimal monomial generating set of $L_2$ is given by all monomials $w= x_1^{a_1}\cdots x_m^{a_m}\in\mathcal{G}(I(G)^{(s)})$ such that $a_r=0$ and every exponent from $\{a_{r+1},\dots,a_m\}$ is nonzero. To construct a splitting function that sends $w\in \mathcal{G}(L_1\cap L_2)$ to $(\phi(w),\varphi(w))\in \mathcal{G}(L_1)\times \mathcal{G}(L_2)$, we need to define two functions, $\varphi : \mathcal{G}(L_1\cap L_2)\rightarrow \mathcal{G}(L_2)$ and $\phi :\mathcal{G}(L_1\cap L_2)\rightarrow \mathcal{G}(L_1)$. We start with the function $\varphi$. First notice that by a similar argument to Lemma [Lemma 9](#intersection){reference-type="ref" reference="intersection"}, $L_1\cap L_2 = x_rL_2$ and each $w\in \mathcal{G}(L_1\cap L_2)$ can be written uniquely as $w=x_rv$, where $v\in \mathcal{G}(L_2)$. Therefore, we define $\varphi : \mathcal{G}(L_1\cap L_2)\rightarrow \mathcal{G}(L_2)$ by $w=x_rv\mapsto v.$ Next we define $\phi :\mathcal{G}(L_1\cap L_2)\rightarrow \mathcal{G}(L_1)$. Given $w=x_rv\in \mathcal{G}(L_1\cap L_2)$, let $a_i$ denote the exponent of $x_i$ in $w$ for $1\leq i\leq m$. Note that $a_r=1$. Let $j \in \{1, \ldots, m\} \setminus \{r\}$ be the smallest index such that $$a_1+\cdots +a_m - a_r -a_j=s, \hspace{2mm} \text{ and } \hspace{2mm} a_j=a_{max}\coloneqq \max (\{a_1,\ldots, a_m\}\setminus \{a_r,a_j\}).$$ Let $t \in \{1,\ldots, m\} \setminus \{j,r\}$ be the smallest index such that $a_t=a_{max}$. The indices $j$ and $t$ exist by Proposition [Proposition 7](#minGens){reference-type="ref" reference="minGens"}. Observe that $j$ and $t$ are the two smallest indices in $\{1, \ldots, m\} \setminus \{r\}$ such that $a_t = a_j = a_{max}$ and that $j < t$. Define $\phi:\mathcal{G}(L_1\cap L_2)\rightarrow \mathcal{G}(L_1)$ by $$w=x_rv=x_1^{a_1}\cdots x_m^{a_m}\mapsto \begin{cases} \cfrac{w}{x_j} & \text{ if } \exists \hspace{0.9mm} \ell\neq j, t, r \text{ with } a_{\ell} = a_{max},\\ \cfrac{w}{x_tx_j} & \text{ otherwise.} \end{cases}$$ We need to show that $\phi$ does in fact map into $\mathcal{G}(L_1)$. If $a_t$ is the only value in $\{a_1,\ldots,a_m\}\setminus \{a_j,a_r\}$ such that $a_t=a_{max}$, then let $A'=\{x_1,\ldots,x_m\}\setminus \{x_t\}$ and for each $1\leq i\leq m$, let $a_i'$ denote the exponent of $x_i$ in $\phi(w)$. Note that $a_j'=a_j-1, a_t'=a_t-1, a_r'=a_r=1$ and $a_l'=a_l$ for all $l\neq j,t$. Then $$\begin{aligned} \sum_{x_i\in A'}a_i'=\sum_{i=1}^m a_i'- a_t'= \deg(\phi(w))-(a_t-1) & =\deg(w)-2-(a_{max}-1)\\ &=\sum_{i=1}^m a_i -1 - a_{max}\\ &=\sum_{i=1}^m a_i -a_r-a_j=s.\end{aligned}$$ Also, $$a_{max}'\coloneqq\max(\{a_1',\ldots,a_m'\}\setminus \{a_t'\})=a_j'= a_{max}-1 \ \ \text{and} \ \ a_t'=a_t-1=a_{max}-1=a_{max}'.$$ Thus $\phi(w)\in \mathcal{G}(I(G)^{(s)})$. To show that $\phi(w) \in \mathcal{G}(I_{G \setminus H, s})$, it suffices to verify that $\prod_{i=r+1}^mx_i$ divides $\phi(w)$. To this end, assume that $\prod_{i=r+1}^mx_i$ does not divide $\phi(w)$. Recall that $w = x_rv$ for some $v \in \mathcal{G}(L_2)$. Since $\mathcal{G}(L_2) \subset \mathcal{G}(I_{G \setminus H,s})$, we know that $\prod_{i=r+1}^m x_i$ divides $v$. Thus, $\prod_{i=r+1}^mx_i$ divides $w$. 
Since $\phi(w) = w/(x_jx_t)$ and $\prod_{i=r+1}^mx_i$ does not divide $\phi(w)$, it must be that at least one of $j$ or $t$ is greater than or equal to $r+1$ and $a_j = a_t = 1$. Since $a_j=a_t = a_{max}$, this implies that $w$ is a square-free monomial. Hence, $$\deg(w) = a_1 + \cdots + a_m = s+2$$ and so $\phi(w)$ has degree $s$. This is a contradiction since $\phi(w) \in \mathcal{G}(I(G)^{(s)})$ which consists of monomials of degree $s+1$ and higher. We conclude that $\prod_{i=r+1}^m x_i$ divides $\phi(w)$, and thus $\phi(w) \in \mathcal{G}(I_{G \setminus H, s})$. Furthermore, since $x_r\mid \phi(w)$, we cannot have $\phi(w)\in \mathcal{G}(L_2)$. Thus, $\phi(w) \in \mathcal{G}(L_1)$. Similarly, if $a_t$ is not the only value in $\{a_1,\ldots, a_m\}\setminus \{a_j,a_r\}$ such that $a_t=a_{max}$, then define $A'$ and each $a_i'$ as before. Note that $a_j'=a_j-1, a_r=a_r'=1$ and $a_l'=a_l$ for all $l\neq j$. Then $$\sum_{x_i\in A'} a_i'=\sum_{i=1}^ma_i'-a_t'=\deg(\phi(w))-a_t=\deg(w)-1-a_{max}=\sum_{i=1}^ma_i-a_r-a_j=s.$$ Also, $$a_{max}'\coloneqq \max (\{a_1',\ldots,a_m'\}\setminus \{a_t'\})=a_{max} \ \ \ \text{and} \ \ \ a_t'=a_t=a_{max}=a_{max}'.$$ Thus, $\phi(w)\in \mathcal{G}(I(G)^{(s)})$. We need to show that $\phi(w) = w/x_j \in \mathcal{G}(I_{G \setminus H, s})$. Again, it suffices to verify that $\prod_{i=r+1}^mx_i$ divides $\phi(w)$. Arguing by contradiction, suppose that $\prod_{i=r+1}^m x_i$ does not divide $\phi(w)$. As above, $w = x_rv$ for some $v \in \mathcal{G}(L_2)$ and $\prod_{i=r+1}^m x_i$ divides $v$ and hence $w$. Since $\phi(w) = w/x_j$, it follows that $j \geq r+1$ and $a_j = 1$. Thus, since $t > j$ and $a_t = a_{max} = a_j$, we have $t > r+1$ and $1 = a_t = a_j = a_{max}$. This implies that $w$ is a square-free monomial. Further, since the exponents in $\phi(w)$ of the variables in $A'$ sum to $s$ and $a_t'=a_t=1$, $\deg(\phi(w))=s+1$. By the choice of $j$ and $t$, this implies that for all $1 \leq i < r$ we have $a_i \not = a_{max}$. Since $w$ is square-free, we must have $a_i = 0$ for $1 \leq i < r$. But then $w$ is square-free and divisible by $x_r$ and $\prod_{i=r+1}^mx_i$, and so $w = \prod_{i=r}^m x_i$. Thus $\deg(\phi(w))=(m-r+1)-1=m-r$, and so $$m-r=s+1 \implies m-s-1 = r,$$ a contradiction to the assumption that $m-s-1 \not = r$. We conclude that $\prod_{i=r+1}^mx_i$ divides $\phi(w)$, and so $\phi(w) \in \mathcal{G}(I_{G \setminus H,s})$. Again, since $x_r\mid \phi(w)$, we have that $\phi(w)$ is not in $\mathcal{G}(L_2)$ and thus $\phi(w) \in \mathcal{G}(L_1)$. We now show that these maps define an E-K splitting. It is easy to verify the first condition by checking that $\mathrm{lcm}(\phi(w),\varphi(w))=w$ when $w\in \mathcal{G}(L_1\cap L_2)$. To check the second condition, we need to verify that for every subset $S \subset \mathcal{G}(J \cap K)$, both $\mathrm{lcm}(\phi(S))$ and $\mathrm{lcm}(\varphi(S))$ strictly divide $\mathrm{lcm}(S)$. To this end, let $S\subset \mathcal{G}(L_1\cap L_2)$. Clearly, $\mathrm{lcm}(\phi(S))$ and $\mathrm{lcm}(\varphi(S))$ both divide $\mathrm{lcm}(S)$. Since $x_r\mid \mathrm{lcm}(S)$ and $x_r\nmid \mathrm{lcm}(\varphi(S))$, we cannot have $\mathrm{lcm}(\varphi(S))=\mathrm{lcm}(S)$, so $\mathrm{lcm}(\varphi(S))$ strictly divides $\mathrm{lcm}(S)$, as required. To show $\mathrm{lcm}(\phi(S))\neq \mathrm{lcm}(S)$, let $a$ be the maximum exponent of any variable in any monomial in $S$. Let $i_0$ be the smallest index such that $x_{i_0}^a$ divides at least one $w\in S$. By definition of $\phi$, $x_{i_0}^a\nmid \phi(w)$. 
If there exists any $w'\in S$ such that $x_{i_0}^a\mid \phi(w')$, then either there exists an exponent in $w'$ that is larger than $a$, or there exists an index $i_1<i_0$ such that $x_{i_1}^a\mid w'$. Both are clear contradictions. Thus, $x_{i_0}^a\nmid \mathrm{lcm}(\phi(S))$, yet it clearly divides $\mathrm{lcm}(S)$, showing that $\mathrm{lcm}(\phi(S))\neq \mathrm{lcm}(S)$ and completing the proof. ◻ **Example 11**. *The assumption that $r \not = m-s-1$ is necessary in Theorem [Theorem 10](#further splittings){reference-type="ref" reference="further splittings"}. For example, consider $m=5, s=2$ and $r=2$. By definition, $x_3x_4x_5 \in \mathcal{G}(L_2)$, and so $w = x_2x_3x_4x_5 \in \mathcal{G}(L_1 \cap L_2)$. Here, $a_{max} = 1$. The smallest index $j \in \{1, \ldots, 5\} \setminus \{2\}$ such that $a_j = a_{max}$ is 3. The smallest index $t \in \{1, \ldots, 5\} \setminus \{2,3\}$ such that $a_t = a_{max}$ is 4. Note that $a_5 = a_{max}$ and $5 \not = j, t, r$. Thus, $w/x_j = w/x_3 = x_2x_4x_5$ is not in $\mathcal{G}(I_{G \setminus H,s})$ as it is not divisible by $x_3x_4x_5$.* **Corollary 12**. *If $m \geq 3, s\geq 2, G=K_m$ and $H=K_{m-1}$, then $I(G)^{(s)} = I_{H,s} + I_{G\setminus H,s}$ is an E-K splitting.* *Proof.* By definition, $I(K_m)^{(s)} = I(G)^{(s)} = I_{K_m \setminus K_m,s}$. Therefore, by Theorem [Theorem 10](#further splittings){reference-type="ref" reference="further splittings"} with $r=m$, we have that $I_{K_m\setminus K_m, s} = L_1 + L_2$ is an E-K splitting where $$L_1 = I_{K_m \setminus K_m,s} \cap \langle x_m \rangle = I_{K_m \setminus K_{m-1}, s} \ \ \text{and} \ \ L_2 = I_{K_{m-1} \setminus K_{m-1},s} = I(K_{m-1})^{(s)} = I_{K_{m-1},s}.$$ ◻ **Remark 13**. *With these results, if $m - s - 1 < 0$, then we may repeatedly apply the splitting defined in Theorem [Theorem 10](#further splittings){reference-type="ref" reference="further splittings"} to obtain a recursion for the graded Betti numbers of $I(G)^{(s)}$. In fact, it is an important observation that $L_1$ from Theorem [Theorem 10](#further splittings){reference-type="ref" reference="further splittings"} is actually of the form $I_{K_{m}\setminus K_{r-1},s}$, since $I_{K_{m}\setminus K_{r-1},s} = I_{K_{m}\setminus K_r,s}\cap\langle x_r \rangle$, which allows the theorem to be iteratively applied to each subsequent $L_1$. Each step of this iteration applies Theorem [Theorem 10](#further splittings){reference-type="ref" reference="further splittings"} to $I_{K_m\setminus K_{r},s}$ for decreasing $r$, and terminates with $I_{K_m\setminus K_0,s}$.* ## Graded Betti Numbers Of Symbolic Powers Of $I(K_2)$ and $I(K_3)$ We now use splittings and induction to determine the graded Betti numbers for the $s$-th symbolic powers of the edge ideal of $K_3$. We begin with the following observation. **Lemma 14**. *If $I = I(K_2) \subset R = k[x_1,x_2]$ and $s \geq 2$, then $\beta_{1,2s}(I_{K_2,s})=1$ and $\beta_{i,j}(I_{K_2,s})=0$ for $(i,j) \not = (1, 2s)$.* *Proof.* Notice that $I^{(s)} = \langle x_1^sx_2^s\rangle = I^s$. It is straightforward to see that a minimal graded free resolution of $R/I^{(s)}$ (over any field $k$) is given by $0 \rightarrow R(-2s) \rightarrow R \rightarrow R/I^{(s)}\rightarrow 0$. ◻ **Theorem 15**. 
*If $i, j\in\mathbb{Z}^+$ and $s \geq 1$, then we have: $$\beta_{1,\frac{3s}{2}}(I(K_3)^{(s)}) = 1; \,\,\, \beta_{2, \frac{3s+3}{2}}(I(K_3)^{(s)}) = 2; \,\,\, \beta_{1,j}(I(K_3)^{(s)}) = 3 \, \,\,\, \text{if $\frac{3s+1}{2} \leq j \leq 2s$};$$ $$\beta_{2,j}(I(K_3)^{(s)}) = 3 \, \,\,\, \text{if $\frac{3s+4}{2} \leq j \leq 2s+1$}; \,\,\, \text{and} \,\,\,\, \beta_{i,j}(I(K_3)^{(s)}) = 0 \,\,\,\, \text{otherwise}.$$* *Proof.* We induct on $s$. The result is clear for $s=1$ and $s=2$ via a computation using Macaulay2. Fix $s > 2$ and suppose the function holds for all positive integers $s' < s$. By Corollary [Corollary 12](#splitting){reference-type="ref" reference="splitting"}, there is an E-K splitting of $I(K_3)^{(s)} = I_{K_2,s}+I_{K_3\setminus K_2,s}$, and by Lemma [Lemma 3](#EK_Betti){reference-type="ref" reference="EK_Betti"} we can write $$\beta_{i,j}(I(K_3)^{(s)})=\beta_{i,j}(I_{K_2,s})+\beta_{i,j}(I_{K_3\setminus K_2,s})+\beta_{i-1,j}(I_{K_2,s}\cap I_{K_3\setminus K_2,s}).$$ Using Lemma [Lemma 9](#intersection){reference-type="ref" reference="intersection"}, we know that $I_{K_2,s}\cap I_{K_3\setminus K_2,s} = x_3I_{K_2,s}$, and by Lemma [Lemma 5](#recursive){reference-type="ref" reference="recursive"}, $\beta_{i-1,j}(x_3I_{K_2,s}) = \beta_{i-1,j-1}(I_{K_2,s})$. Therefore, $$\beta_{i,j}(I(K_3)^{(s)})=\beta_{i,j}(I_{K_2,s})+\beta_{i,j}(I_{K_3\setminus K_2,s})+\beta_{i-1,j-1}(I_{K_2,s}).$$ We now write $\beta_{i,j}(I_{K_3\setminus K_2,s})$ in terms of the known graded Betti numbers coming from the induction. Let $L_1$ and $L_2$ be as in Theorem [Theorem 10](#further splittings){reference-type="ref" reference="further splittings"} with $m=3$ and $r=2$. Then $I_{K_3\setminus K_2,s} = L_1+L_2$ is an E-K splitting, and therefore $$\beta_{i,j}(I_{K_3\setminus K_2,s})=\beta_{i,j}(L_1)+\beta_{i,j}(L_2)+\beta_{i-1,j}(L_1\cap L_2).$$ From the proof of Theorem [Theorem 10](#further splittings){reference-type="ref" reference="further splittings"}, $L_1\cap L_2 = x_2L_2$ so that $\beta_{i-1,j}(L_1\cap L_2) = \beta_{i-1,j-1}(L_2)$. By a change of coordinates, we have that $\beta_{i,j}(L_2)=\beta_{i,j}(I_{K_2\setminus K_1,s})=\beta_{i,j}(I_{K_2,s})$. Therefore $$\beta_{i,j}(I_{K_3\setminus K_2,s})=\beta_{i,j}(L_1)+\beta_{i,j}(I_{K_2,s})+\beta_{i-1,j-1}(I_{K_2,s}).$$ It remains to show that $\beta_{i,j}(L_1)$ can be computed using what is known from the induction. We require one more E-K splitting. By definition, $L_1 = I_{K_3\setminus K_1,s}$, so we can apply Theorem [Theorem 10](#further splittings){reference-type="ref" reference="further splittings"} one more time (using $m=3$ and $r=1$) to get a splitting $L_1 = L_1'+L_2'$, where $L_1' = I(K_3)^{(s)}\cap\langle x_1x_2x_3\rangle$. This yields $$\beta_{i,j}(L_1)=\beta_{i,j}(L_1')+\beta_{i,j}(L_2')+\beta_{i-1,j}(L_1'\cap L_2').$$ Using the same observations as before, notice that $\beta_{i,j}(L_2') = \beta_{i,j}(I_{K_2\setminus K_0,s})=\beta_{i,j}(I_{K_2,s})$ and $\beta_{i-1,j}(L_1'\cap L_2') = \beta_{i-1,j-1}(I_{K_2,s})$. It is not difficult to see that $L_1' = x_1x_2x_3I(K_3)^{(s-2)}$ so that $\beta_{i,j}(L_1') = \beta_{i,j-3}(I(K_3)^{(s-2)})$. Overall we have shown that $$\beta_{i,j}(I(K_3)^{(s)})=3\beta_{i,j}(I_{K_2,s})+3\beta_{i-1,j-1}(I_{K_2,s})+\beta_{i,j-3}(I(K_3)^{(s-2)}).$$ By Lemma [Lemma 14](#K2){reference-type="ref" reference="K2"}, we know that $\beta_{i,j}(I_{K_2,s})=1$ if $i=1$, $j=2s$ and $0$ otherwise. The result follows by induction. ◻ **Remark 16**. 
*We relied on the reduction to $L_1' = x_1x_2x_3I(K_3)^{(s-2)}$ in the proof of Theorem [Theorem 15](#K3){reference-type="ref" reference="K3"}, and this is why 3 splittings are needed in the proof. See Lemma [Lemma 17](#prod_int){reference-type="ref" reference="prod_int"} for a generalization.* ## Graded Betti Numbers Of Symbolic Powers Of $I(K_m)$ In General In Theorem [Theorem 15](#K3){reference-type="ref" reference="K3"}, we used the fact that $I(K_3)^{(s)}\cap\langle x_1x_2x_3\rangle = x_1x_2x_3I(K_3)^{(s-2)}$. The reader might wonder why we actually needed 3 splittings in the theorem. Perhaps the induction could be completed using just 2 splittings, for example, requiring a similar identification for $L_1= I(K_3)^{(s)}\cap\langle x_2x_3\rangle$. A look at the generators however shows that we need all of the variables present in the second ideal in the intersection to make such an identification. For example, we know that $\mathcal{G}(I(K_3)^{(3)})= \{x_1^3x_2^3, x_1^2x_2^2x_3, x_1^2x_2x_3^2, x_1x_2^2x_3^2,x_1^3x_3^3,x_2^3x_3^3\}$. If we look at the terms for which $x_2x_3$ can be factored out, we notice that the generator $x_1^2x_2^2x_3$ can be factored as $x_2x_3(x_1^2x_2)$, but $x_1^2x_2$ is not a minimal generator for $I(K_2)^{(s)}$ or $I(K_3)^{(s)}$ with any choice of $s$. We avoid this issue by only considering generators where each variable $x_i$ can be factored out. In particular, when computing graded Betti numbers for $I(K_m)^{(s)}$, one will need to use $m$ E-K splittings to reduce to the case $I_{K_m\setminus K_0,s}$ and achieve a similar result to Theorem [Theorem 15](#K3){reference-type="ref" reference="K3"}. **Lemma 17**. *We have $I_{K_m\setminus K_0,s}= I(K_m)^{(s)} \cap \langle x_1 \cdots x_m \rangle = x_1 \cdots x_m I(K_m)^{(s-m+1)}$ when $s \geq m \geq 2$.* *Proof.* This follows from the observation that if $x_1^{a_1}\cdots x_m^{a_m} \in \mathcal{G}(I(K_m)^{(s)})\cap \langle x_1\cdots x_m\rangle$, then $x_1^{a_1-1}\cdots x_m^{a_m-1} \in \mathcal{G}(I(K_m)^{(s-m+1)})$. ◻ The underlying ideas in the proof of Theorem [Theorem 15](#K3){reference-type="ref" reference="K3"} can now be generalized to obtain formulae for the graded Betti numbers for the $s$-th symbolic powers of the edge ideal of $K_m$. However, as $m$ increases, so does the complexity in writing the formulae. For the sake of concreteness, we provide only the formulae for $m = 4$ below. **Theorem 18**. *If $i, j \in \mathbb{Z}^+$ and $s \geq 4$, then we have: $$\beta_{i,j}(I(K_4)^{(s)}) = 6 \,\,\,\, \text{if $i=1$, $j=2s$ or $i=3$, $j=2s+2$}; \,\,\, \beta_{2,2s+1}(I(K_4)^{(s)}) = 12; \,\,\, and$$ $$\beta_{i,j}(I(K_4)^{(s)}) = \beta_{i,j-4}(I(K_4)^{(s-3)}) + 4\beta_{i-1,j-4}(I(K_3)^{(s-2)}) + 4\beta_{i,j-3}(I(K_3)^{(s-2)}) \,\,\,\, \text{otherwise}.$$* While the statement of the formulae for these graded Betti numbers seems cumbersome, they are a direct result of an inductive computation as in Theorem [Theorem 15](#K3){reference-type="ref" reference="K3"}. **Remark 19**. *Given any fixed $m>3$, we are able to inductively compute $\beta_{i,j}(I(K_m)^{(s)})$ for any $s>m-1$. If $s<m$, then at some point in the induction process $r=m-s-1$ and we cannot reduce the computation any further, and are left with finitely many Betti numbers to manually compute by other means.* ## Minimum Socle Degree Of Symbolic Powers Of Edge Ideals Of Complete Graphs {#socle} Having formulae for the graded Betti numbers of the symbolic powers for edge ideals of complete graphs gives us information on related invariants of the ideals. 
For example, we can obtain the minimum socle degrees which we now discuss. In the following, the dimension of a homogeneous ideal $I \subseteq R=k[x_1, \ldots, x_m]$ is the Krull dimension of $R/I$. If $A=\oplus_{i\geq 0} A_i$ is a graded Artinian $k$-algebra, then $\mathfrak{m} = \oplus_{i\geq 1} A_i$ is a maximal ideal and we call the ideal quotient $\mathrm{Socle}(A) = 0:\mathfrak{m} = \{r \in A \mid r\mathfrak{m} = 0\}$ the **socle** of $A$. It is a finite-dimensional graded $k$-vector space, and we write $\mathrm{Socle}(A) = \oplus_{i=0}k(-a_i)$ where $a_i\in\mathbb{Z}^+$ are the **socle degrees** of $A$. The **minimum socle degree** of $A$ is the minimum of the $a_i$. If $I$ is a homogeneous ideal of $R$ such that $R/I$ is Cohen-Macaulay, then we can find a maximal regular sequence $f_1,\ldots,f_n$ of $R/I$ where $f_i$ is a homogeneous polynomial of degree 1 and $n = \dim(R/I)$. Let $\bar{I} = I+\langle f_1,\ldots,f_n \rangle$. In this situation, the Artinian reduction $A=R/\bar{I}$ is $0$-dimensional, and hence Artinian. It is well-known that the socle degrees of $A$ are related to the back twists at the end of a minimal resolution of $R/I$. More precisely, let us write the graded minimal free resolution for $R/I$ as $$0\rightarrow F_{m-1} = \bigoplus_i R(-a_i) \rightarrow \cdots \rightarrow F_1\rightarrow R\rightarrow R/I\rightarrow 0.$$ The last module in the free resolution for $R/\bar{I}$ is $F_{m-1}(-n)$. Since it is in position $m+n-1$ of the free resolution, the socle degrees of $A$ are $s_i= (a_i +n) - (m+n-1) = a_i -(m-1)$ by [@Kustin-Ulrich Lemma 1.3]. With a slight abuse of notation, we will say that socle degrees of $A$ are the socle degrees of $R/I$. In particular, the minimum socle degree of $R/I$ is just $\min_{i}\{s_i\}$. This is similar to the setup in [@Tohaneanu] and [@Tohaneanu-VanTuyl] except for the labelling of indices. **Lemma 20**. *If $I=I(K_m)\subset R =k[x_1,\ldots,x_m]$, then $R/I^{(s)}$ is Cohen-Macaulay of dimension 1.* *Proof.* The Cohen-Macaulay property follows from [@RTY12 Theorem 3.6] and [@Varbaro Theorem 2.1]. It suffices to show that $R/I$ has dimension 1 since $I^{(s)}$ and $I$ have the same height. Each vertex cover of $K_m$ involves exactly $m-1$ vertices and defines a primary component in the primary decomposition of $I$ by Lemma [Lemma 1](#vertexcovers){reference-type="ref" reference="vertexcovers"}. Since $R/I$ is Cohen-Macaulay, all of the associated primes of $I$ must have the same height, so it suffices to compute the height of just one of these ideals. One such ideal is $\langle x_1,\ldots,x_{m-1}\rangle$ which has height $m-1$ (so $R/I$ has dimension 1), proving the result. ◻ As a consequence, we can apply our technique to determine the minimum socle degree of the $s$-th symbolic power of an edge ideal of any complete graph for any $s \geq 2$. For example, the minimum socle degree of $R/I(K_2)^{(s)} = 2s-1$ and the minimum socle degree of $R/I(K_3)^{(3)} = 4$ since, by Theorem [Theorem 15](#K3){reference-type="ref" reference="K3"}, $\beta_{2,6}(I(K_3)^{(3)}) = 2, \beta_{1,5}(I(K_3)^{(3)}) = \beta_{1,6}(I(K_3)^{(3)}) = \beta_{2,7}(I(K_3)^{(3)}) = 3$ and $\beta_{i,j}(I(K_3)^{(3)}) = 0$ otherwise. # Parallelizations {#parallel} It is natural to ask if we can determine the graded Betti numbers of edge ideals of graphs obtained by certain graph operations. One such operation is called a graph parallelization, which we now turn our attention to. 
The notion of a graph parallelization appears in [@MMV11 Section 2] in the discussion about polarizations and depolarizations of monomial ideals. We continue to work with undirected finite simple graphs. **Definition 21**. *Let $G$ be a graph with vertex set $\{x_1, \dots, x_m\}$ and fix $\alpha = (\alpha_1, \dots, \alpha_m) \in (\mathbb{Z}^+)^m$. The **parallelization** of $G$ by $\alpha$, denoted $G^\alpha$, is the graph with vertex set $V(G^\alpha) = \{x_{1,1}, \dots, x_{1,\alpha_1}, \dots, x_{m,1},\dots, x_{m,\alpha_m}\}$ and edge set $E(G^\alpha) = \{\{x_{i,t},x_{j,\ell}\}|\{x_i,x_j\} \in E(G)\}$.* For example, the graph for $K_3^{(3,1,1)}$ is obtained by duplicating $x_1$ to get the vertices $x_{1,1}, x_{1,2}$ and $x_{1,3}$ and adding edges between these vertices and $x_{2,1}$ and $x_{3,1}$. In particular, when $\alpha = (1,\ldots, 1)$, we recover the original graph so that $G^{(1,\ldots, 1)}$ is the same as $G$ (we are identifying $x_{i,1}=x_i$ in general). We will denote the set of vertices of $G^\alpha$ corresponding to the vertex $x_i \in V(G)$ by $V_i$. These are called the **duplications** of $x_i$. The next lemma shows that all minimal vertex covers for $G^\alpha$ come from minimal vertex covers for $G$ replaced by the appropriate duplications $V_i$. Recall that the **open neighbourhood** of a given set $V'$ of vertices in a graph $G$, denoted $N(V')$, is the set of all vertices which are adjacent to vertices in $V'$, not including the vertices in $V'$ themselves. **Lemma 22**. *Let $G$ be a graph on $m$ vertices and fix $\alpha \in (\mathbb{Z^+})^{m}$. Then any minimal vertex cover for $G^\alpha$ has the form $\{x_{i_1,1},\dots, x_{i_1,\alpha_{i_1}},\dots x_{i_{r},1} \dots, x_{i_{r},\alpha_{i_r}}\}$ where $\{x_{i_1}, \dots, x_{i_{r}}\}$ is a minimal vertex cover of $G$.* *Proof.* Let $S$ be a minimal vertex cover of $G$ and, without loss of generality, suppose that $S = \{x_1, \dots, x_n\}$. Let us define $S' = \{x_{1,1}, \dots, x_{1,\alpha_1}, \dots, x_{n,1}, \dots, x_{n,\alpha_n}\}$ as the set of vertices of $G^\alpha$ obtained by replacing each vertex $x_i$ in $S$ by all of its duplicates $V_i$ in $G^\alpha$. We first show that $S'$ is a minimal vertex cover of $G^\alpha$. Let $\{x_{i,t},x_{j,\ell}\}$ be any edge of $G^\alpha$. By definition, $\{x_i,x_j\} \in E(G)$. Since $S$ is a minimal vertex cover of $G$, at least one of $x_i$ or $x_j$ is in $S$. Without loss of generality, let us suppose that $x_i \in S$. Then $V_i\subset S'$, and so $S'$ contains a vertex from the edge $\{x_{i,t},x_{j,\ell}\}$. That is, $S'$ is a vertex cover of $G^\alpha$. To see that $S'$ is minimal, suppose that there exists $x_{i,t} \in S'$ such that $S'\backslash\{x_{i,t}\}$ is a vertex cover of $G^\alpha$. Then necessarily $N(x_{i,t}) \subseteq S'$. By definition of a parallelization, $N(V_i) = N(x_{i,t})$. Hence, if $e \in E(G^\alpha)$ contains some vertex $x_{i,\ell}$, then $S'$ contains a vertex of $e$ other than $x_{i,\ell}$. That is, $S' \backslash \{V_i\}$ is a vertex cover of $G^\alpha$. However, since $N(V_i) \subseteq S'$, by construction of $S'$ it follows that $N(x_i) \subseteq S$. Then $S\backslash \{x_i\}$ is a vertex cover of $G$, contradicting the minimality of $S$. It remains to show that these are the only minimal vertex covers of $G^\alpha$. Let $C '$ be a minimal vertex cover of $G^\alpha$. 
Note that for any $x_{i,t} \in C'$, there exists an edge $e \in E(G^\alpha)$ such that $e=\{x_{i,t},x_{j,\ell}\}$ (since $C'$ is a minimal vertex cover, $x_{i,t}$ cannot be an isolated vertex). However, $\{x_{i,t},x_{j,\ell}\} \in E(G^\alpha)$ if and only if $\{x_i,x_j\} \in E(G)$. So for all $x \in V_i, y \in V_j, \{x,y\} \in E(G^\alpha)$. Furthermore, if there exists some $V_i$ such that $V_i \nsubseteq C'$, then by the above, $N(V_i) \subseteq C'$. Hence, since $C'$ is minimal, $V_i \cap C' = \emptyset$ and so either $V_i \subseteq C'$ or $V_j \subseteq C'$. Thus $C' = V_1 \cup \dots \cup V_t$ for some labelling of the partite sets. Since $G$ is an induced subgraph of $G^\alpha$, $C'$ being a minimal vertex cover implies that $C = \{x_1, \dots, x_t\}= V(G)\cap C'$ is a vertex cover of $G$, and if $C$ were not minimal then the minimal vertex cover contained in $C$ would correspond to a minimal vertex cover of $G^{\alpha}$ properly contained in $C'$, a contradiction. ◻ **Proposition 23**. *Let $G$ be a graph with vertex set $\{x_1, \dots, x_m\}$ and edge ideal $I=I(G)$. Fix $\alpha = (\alpha_1, \dots, \alpha_m) \in (\mathbb{Z^+})^m$ and denote the edge ideal of $G^\alpha$ by $I_{\alpha}$. Then $$\mathcal{G}(I_{\alpha}^{(s)}) = \{x_{1,1}^{e_{1,1}}\cdots x_{1,\alpha_1}^{e_{1,\alpha_1}}\cdots x_{m,1}^{e_{m,1}}\cdots x_{m,\alpha_m}^{e_{m,\alpha_m}} | \sum_{j=1}^{\alpha_i}e_{i,j} = a_i, \text{ where $x_1^{a_1}\cdots x_m^{a_m}\in \mathcal{G}(I^{(s)})$}\}.$$* *Proof.* The argument is similar to that of Proposition [Proposition 7](#minGens){reference-type="ref" reference="minGens"}. We use Lemma [Lemma 6](#SymbolicGens){reference-type="ref" reference="SymbolicGens"} and Lemma [Lemma 22](#minVertexCov){reference-type="ref" reference="minVertexCov"} to show that the set is generating. Minimality follows from the fact that $x_1^{a_1}\cdots x_m^{a_m}\in \mathcal{G}(I^{(s)})$ and the description of minimal vertex covers. ◻ ## Graded Betti Numbers Of Parallelizations The graded Betti numbers of parallelizations for complete graphs can be bounded below using the splittings from Corollary [Corollary 12](#splitting){reference-type="ref" reference="splitting"}. This however is a special case of the next result. **Proposition 24**. *If $G$ is a finite simple graph on $m$ vertices, $s \geq 2$, and $\alpha \in (\mathbb{Z^+})^{m}$, then for all $i, j \geq 1$, $\beta_{i,j}(I(G^\alpha)^{(s)}) \geq \beta_{i,j}(I(G)^{(s)})$.* *Proof.* Since we can view $G$ as an induced subgraph of $G^\alpha$, the result follows as a direct consequence of [@GHOS18 Lemma 4.4] (which is a generalization of the work in [@HH15] for edge ideals). ◻ Proposition [Proposition 24](#parallel_bound){reference-type="ref" reference="parallel_bound"} naturally leads one to try to determine what classes of ideals are obtained by parallelization of graphs $G$ where the Betti numbers of $I(G)^{(s)}$ are known. We illustrate this useful direction with complete $n$-partite graphs. Recall that the **complete $n$-partite graph** with partite sets of size $a_1, \ldots, a_n$, denoted $K_{a_1,\ldots,a_n}$, is the graph with vertex set $V = \{x_{1,1},\ldots,x_{1,a_1},\dots,x_{n,1},\ldots,x_{n,a_n}\}$ and edge set $E = \{\{x_{i,j},x_{\ell,m}\}\:|\: i \neq \ell\}$. In addition, recall that an **independent set** of vertices of a graph $G$ is a set of vertices in which no two vertices are adjacent. **Corollary 25**. 
*If $K_{a_1, \ldots, a_n}$ is a complete $n$-partite graph, then for all $s \geq 2$ and $i,j \geq 1$ we have the bound $$\beta_{i,j}(I(K_{a_1, \ldots, a_n})^{(s)}) \geq \beta_{i,j}(I(K_n)^{(s)}).$$* *Proof.* The result follows by noticing that $K_{a_1, \ldots, a_n} = K_n^{(a_1,\ldots, a_n)}$. To see this, observe that the duplicates of each vertex in $K_n^{(a_1, \ldots, a_n)}$ form an independent set, and any two of these independent sets have all possible edges between them by definition of a parallelization applied to a complete graph. ◻ As a consequence, the results of Section 3 combined with Corollary [Corollary 25](#complete n partite){reference-type="ref" reference="complete n partite"} yields explicit lower bounds for the graded Betti numbers of symbolic powers of edge ideals of complete $n$-partite graphs. Not surprisingly, the bound of Proposition [Proposition 24](#parallel_bound){reference-type="ref" reference="parallel_bound"} is not very effective as the entries in $\alpha$ grow. A better lower bound would not only depend on $I(G)^{(s)}$, but also on $\alpha$. As a motivational example, consider $I(K_3)^{(2)} = \langle \epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4 \rangle=\langle x_1^2x_2^2, x_1^2x_3^2,x_2^2x_3^2, x_1x_2x_3 \rangle$. One possible relation on these generators is $\sigma= x_3\epsilon_1 - x_1x_2\epsilon_4=0$. Now let $\alpha = (2,2,2)$ and consider $I(K_3^\alpha)^{(2)}$. Then the relation demonstrated by $\sigma$ also holds if we substitute any of the variable duplications. One possible choice is using $x_{3,2}$ instead of $x_{3,1}=x_3$. Now if $\epsilon_4' = x_1x_2x_{3,2}$, then $\sigma' = x_{3,2}\epsilon_1 - x_1x_2\epsilon_4'=0$ also (where $x_1 = x_{1,1}$ and $x_2 = x_{2,1}$). The same would hold true if we replaced any of the variables $x_i$ by one of their duplications $x_{i,j}$. In this way we get groupings of relations corresponding to some choice of the duplicated variables, which is also true for higher syzygies. We see this more generally. In particular, if $\sigma$ is a syzygy for the $i$-th step of the minimal graded free resolution of $I(G)^{(s)}$, then it is also a syzygy for the same step of the minimal graded free resolution of $I(G^\alpha)^{(s)}$, and substituting any variable in $\sigma$ with any of its duplicates also yields a syzygy. We can expect $\prod_i \alpha_i$ many blocks of such relations, and so we might guess that $\beta_{i,j}(I(G^\alpha)^{(s)}) \geq (\prod_i \alpha_i)\beta_{i,j}(I(G)^{(s)})$. Another lower bound would instead use blocks of relations coming from disjoint copies of $G$ in $G^{\alpha}$ (that is copies of $G$ which partition the vertex set of $G^{\alpha}$). There are exactly $\operatorname{min}\{\alpha_i\}$ many of these, leading to a weaker bound, but likely one which is easier to prove. Instead, we will conjecture that the stronger bound holds. This bound works over any field since any potential cancellation in positive characteristic would occur both before and after any substitution by duplications. **Conjecture 26**. *Let $G$ be a graph on $m$ vertices and let $\alpha = (\alpha_1, \dots, \alpha_m) \in (\mathbb Z^+)^m$. For all $i,j \in \mathbb{Z}^+$ and $s \geq 2$, we have $\beta_{i,j}(I(G^\alpha)^{(s)}) \geq (\prod_i\alpha_i)\beta_{i,j}(I(G)^{(s)})$.* One might worry that the relations might not be part of a minimal generating set for the syzygy. 
However, we know from Gröbner theory that there are at least $\binom{\prod_i\alpha_i}{2}$ relations which generate the syzygy, and the conjecture simply asks that a fraction of these are part of a minimal generating set. In particular, we know how to find generators for syzygies using $S$-polynomials (for example by Schreyer's Theorem). Information about $S$-polynomials and computing syzygies can be found in [@Eisenbud]. To prove the conjecture, one would need to show that the $S$-polynomials corresponding to each choice of duplication are part of a minimal generating set.
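To close, here is a small computational sanity check (a Python sketch of ours; it is not needed for any of the arguments above). It computes $\mathcal{G}(I(K_3^{(2,1,1)})^{(2)})$ in two independent ways: directly from the minimal vertex covers of the parallelization (Lemma [Lemma 22](#minVertexCov){reference-type="ref" reference="minVertexCov"} combined with Lemma [Lemma 6](#SymbolicGens){reference-type="ref" reference="SymbolicGens"}), and by distributing the exponents of the generators of $I(K_3)^{(2)}$ over the duplicated variables as in Proposition 23. The two computations should agree, returning nine minimal generators, and enumerations of this kind are a convenient way to experiment with the counting heuristic behind Conjecture 26.

```python
from itertools import product

def min_gens(primes, n, s):
    """Minimal generators of the intersection of the s-th powers of the given
    monomial primes (sets of 0-based variable indices), via the exponent-sum
    membership test followed by a divisibility filter."""
    cand = [a for a in product(range(s + 1), repeat=n)
            if all(sum(a[i] for i in P) >= s for P in primes)]
    return {a for a in cand
            if not any(b != a and all(b[i] <= a[i] for i in range(n)) for b in cand)}

def distributions(total, slots):
    """All ways to write `total` as an ordered sum of `slots` nonnegative integers."""
    if slots == 1:
        return [(total,)]
    return [(k,) + rest for k in range(total + 1)
            for rest in distributions(total - k, slots - 1)]

# G = K_3 with duplication vector alpha = (2, 1, 1); the variables of G^alpha
# are ordered x_{1,1}, x_{1,2}, x_{2,1}, x_{3,1}.
alpha, s = (2, 1, 1), 2
blocks = [(0, 1), (2,), (3,)]        # duplication classes V_1, V_2, V_3

# (a) Directly: the minimal vertex covers of G^alpha are unions of two
#     duplication classes, giving the primes below.
covers = [set(blocks[0] + blocks[1]), set(blocks[0] + blocks[2]),
          set(blocks[1] + blocks[2])]
direct = min_gens(covers, 4, s)

# (b) Via the parallelization description: distribute each exponent a_i of a
#     generator of I(K_3)^{(s)} over the alpha_i duplicates of x_i.
predicted = set()
for a in min_gens([{0, 1}, {0, 2}, {1, 2}], 3, s):
    for parts in product(*(distributions(a[i], alpha[i]) for i in range(3))):
        predicted.add(tuple(e for block in parts for e in block))

print(direct == predicted, len(direct))   # expected output: True 9
```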
arxiv_math
{ "id": "2309.13017", "title": "Splittings for symbolic powers of edge ideals of complete graphs", "authors": "Susan M. Cooper, Sergio Da Silva, Max Gutkin, and Tessa Reimer", "categories": "math.AC math.CO", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | This paper examines the optimal velocity follow-the-leader dynamics, a microscopic traffic model, and explores different aspects of the dynamical model, with particular emphasis on collision analysis. More precisely, we present a rigorous boundary-layer analysis of the model, which provides a careful understanding of the behavior of the dynamics in trade-off with the singularity of the model at collision; this understanding is essential for the controllability of the system. author: - "Hossein Nick Zinat Matin $^{1}$ and Maria Laura Delle Monache $^{1}$ [^1]" bibliography: - reference.bib date: . title: "**Near Collision and Controllability Analysis of Nonlinear Optimal Velocity Follow-the-Leader Dynamical Model In Traffic Flow**" --- # Introduction and Related Works The emergence of autonomous driving technologies such as adaptive cruise control and self-driving systems has created new theoretical challenges in the modeling and analysis of the governing dynamics of traffic flow. Traffic flow dynamics has been a widely studied research area for decades, with literature devoted to various models based on macroscopic, mesoscopic, and microscopic descriptions of traffic flow [@treiber2013traffic]. The microscopic class of dynamics considers individual vehicles and their interaction. The earliest car following models date back to the works of [@pipes1953operational; @newell1961nonlinear; @gazis1959car; @chandler1958traffic]. Nonlinear follow-the-leader dynamics can be traced back to [@herman1959single] and [@gazis1961nonlinear] among others. The celebrated Optimal Velocity (OV) dynamical model was introduced and analyzed in [@bando1995dynamical; @bando1994structure; @bando1998analysis] and in numerous subsequent studies. In this paper, we consider the Optimal Velocity Follow-the-Leader (OVFL) dynamical model which is shown to possess favorable properties both from a practical and theoretical point of view [@tordeux2014collision; @stern2018dissipation; @nick2022near; @matin2020nonlinear; @matin2019ConvergenceRate; @dellemonache2019pardalos]. The optimal velocity part of the OVFL model, with a positive coefficient, defines a target velocity based on the distance between each vehicle and its preceding one; by comparing the target velocity with the current one, the OV term encourages acceleration or deceleration accordingly. The follow-the-leader term describes the force that tries to match the vehicle's velocity with the preceding one. As the instantaneous relaxation time (i.e. $(x_{n-1} - x_n)/\beta$ in [\[E:initial_model\]](#E:initial_model){reference-type="eqref" reference="E:initial_model"}) decreases, a singularity occurs at collision. Understanding the interaction between such a singularity and the behavior of OVFL dynamics near collision is the main focus of this paper. Stability of platoons of vehicles following [\[E:initial_model\]](#E:initial_model){reference-type="eqref" reference="E:initial_model"} has been studied from various points of view, such as string stability [@chandler1958traffic; @bando1998analysis; @kometani1958stability; @wilson2011car; @gunter2020commercially; @giammarino2021traffic; @piu2022stability]. Collision analysis has been addressed from different standpoints in prior works. In a simulation-based study, [@davis2003modifications] investigates the likelihood of collision as a consequence of drivers' reaction time. A Lyapunov-based analysis in a neighborhood of the equilibrium point has been carried out in [@davis2014nonlinear; @tordeux2014collision]. 
Nonlinear stability analysis and collision avoidance based on a safe distance are studied in [@magnetti2021nonlinear] for the OV model. **Focus and Contribution.** In contrast to the stability-based analysis of OVFL dynamics, in this paper, we are interested in the analysis of collisions (e.g., in a platoon of connected autonomous vehicles governed by such a dynamical model). In other words, our main focus is on understanding the interplay between the behavior of the OVFL dynamics and the singularity introduced in [\[E:initial_model\]](#E:initial_model){reference-type="eqref" reference="E:initial_model"} at collision, through a careful and mathematically rigorous investigation. Our boundary-layer analysis results are strongly dependent on the initial values, which allows us to study the effect of the singularity when the vehicles are in a near-collision region. Such analytical understanding is crucial in analyzing the behavior of the system in real-world conditions such as in the presence of noise and perturbation. In such conditions, sooner or later any physical system will be pushed into various states. Therefore it is necessary and insightful to understand the deterministic behavior of the system in the proximity of critical states. As a consequence of our analysis, we show that collisions in the system do not happen and hence the system is well-posed. In addition, our analysis applies to multiple vehicles, which extends the results of [@nick2022near]. The organization of this paper is as follows. We start by introducing the dynamical model. Then, we prove some essential properties of the dynamics between the first two vehicles, which will be used in the analysis of the following vehicles. Finally, we study the behavior of the trajectories of the other vehicles with respect to that of the first two and prove the main result of the paper. # Mathematical Model We consider $N+1$ vehicles, and each vehicle $n = 0,1, \cdots, N$ has position $x_n$ and velocity $y_n = \dot x_n$ such that $x_N < x_{N-1} < \cdots< x_0$ (see Figure [1](#fig:direction){reference-type="ref" reference="fig:direction"}). ![The first illustration shows the position and direction of the vehicles. The second illustration depicts the relative position $X_n = x_0 - x_n$.](Figures/position.png){#fig:direction width="3.3in"} We assume that the first vehicle is moving with a constant velocity $\bar v$; i.e. $\dot x_0(t) = y_0(t) = \bar v$ with the initial value $(x_\circ, y_\circ)$. The OVFL model for $n \ge 1$ can be presented in the form of $$\label{E:initial_model} \begin{cases} \dot x_n(t) = y_n(t) \\ \dot y_n(t) = \alpha \left\{V(x_{n -1} - x_n) - y_n(t) \right\}+ \beta \frac{y_{n -1} - y_n }{(x_{n -1} - x_n)^2} \\ (x_n(0), y_n(0)) = (x_{n, \circ}, y_{n, \circ}) \end{cases}$$ where $V$ is a monotonically increasing, bounded, and Lipschitz continuous function. In this paper, we consider $$\label{E:V_function} V(x) \overset{\textbf{def}}{=}\tanh(x - 2) - \tanh(-2);$$ as illustrated in Figure [2](#fig:Vplot){reference-type="ref" reference="fig:Vplot"}. ![Function $V$ in [\[E:V_function\]](#E:V_function){reference-type="eqref" reference="E:V_function"}.](Figures/VPlot.png){#fig:Vplot width="3in"} We define $V(\infty) = 1.96$ as the scaled maximum possible speed. It should also be noted that $\Delta x_n = x_{n-1} - x_n = l$, where $l$ is the length of the vehicles, is interpreted as a collision between vehicles $n-1$ and $n$. 
In this paper, without loss of generality, we drop the constant $l$ and consider $\Delta x_n = 0$ as a collision. Therefore, $$\label{E:V_infty} y_n(t) \in [0, V(\infty)), \quad n \in \left\{ 0, \cdots, N \right\}, \quad t \ge 0.$$ For simplicity of the following analysis and interpretation of the results, we define a Galilean change of variables in [\[E:initial_model\]](#E:initial_model){reference-type="eqref" reference="E:initial_model"} $$\begin{split} X_n(t) \overset{\textbf{def}}{=}x_0(t) - x_n(t), \quad n =1, \cdots, N \\ Y_n(t) \overset{\textbf{def}}{=}y_0(t) - y_n(t) = \bar v - y_n(t), \quad n = 1 , \cdots, N \end{split}$$ Consequently, the dynamics of [\[E:initial_model\]](#E:initial_model){reference-type="eqref" reference="E:initial_model"} can be rewritten as $$\label{E:main_dynamics} \begin{cases} \dot X_n(t) = Y_n(t) \\ \dot Y_n(t) = -\alpha \left\{V( X_n- X_{n -1}) + Y_n(t) - \bar v \right\}-\beta \frac{Y_n -Y_{n -1}}{ (X_n-X_{n -1})^2} \\ (X_n(0), Y_n(0)) = (X_{n, \circ}, Y_{n, \circ}) \end{cases}$$ for $n \in \left\{ 1, \cdots, N \right\}$ with the convention that $X_0 = Y_0 = 0$. It should be noted that in [\[E:main_dynamics\]](#E:main_dynamics){reference-type="eqref" reference="E:main_dynamics"} we have that $X_N > \cdots> X_1 > X_0 = 0$; see Figure [1](#fig:direction){reference-type="ref" reference="fig:direction"}. In addition, following [\[E:V_infty\]](#E:V_infty){reference-type="eqref" reference="E:V_infty"}, we have that $$\label{E:speed_limit} Y_n(t) \in (\bar v - V(\infty), \bar v], \quad t \ge 0.$$ This is particularly important in choosing the initial values of the dynamics. The dynamical system [\[E:main_dynamics\]](#E:main_dynamics){reference-type="eqref" reference="E:main_dynamics"} has a unique equilibrium solution when all the vehicles are equidistantly located and moving with the same velocity [@bando1998analysis; @piu2022stability]. Mathematically, for each $n \in \left\{ 1, \cdots, N \right\}$ $$(X_n^\infty , Y_n^\infty) = (n V^{-1}(\bar v) , 0)= (n X_\infty, 0),$$ where $$X_\infty = V^{-1}(\bar v)=2 + \tanh^{-1}(\bar v + \tanh(-2)).$$ # Dynamics of the First Two Vehicles {#S:two_vechicles} The behavior of the dynamics propagates from the leading vehicles to the following ones. Therefore, we need to start by understanding the interaction between the first two vehicles. ## Hamiltonian and Boundedness of Solution In this section, we consider $N =1$ in [\[E:main_dynamics\]](#E:main_dynamics){reference-type="eqref" reference="E:main_dynamics"}, the dynamics between the first two vehicles. Following [@nick2022near], we first recall a few properties of the dynamics of $(X_1(t), Y_1(t))$. The main properties of the dynamics of $(X_1(t), Y_1(t))$ can be obtained by defining the Hamiltonian function $$\label{E:Hamiltonian} H(x, y) \overset{\textbf{def}}{=}\frac 12 y^2 + P(x) \qquad (x, y) \in \mathbb{R}^2$$ with the potential function $$\label{eq:potential} P(x) \overset{\textbf{def}}{=}\alpha \int_{x'=X_\infty }^x \left\{V \left(x' \right) - \bar v \right\}dx'; \qquad x \in \mathbb{R}.$$ The graphical depictions of [\[E:Hamiltonian\]](#E:Hamiltonian){reference-type="eqref" reference="E:Hamiltonian"} and [\[eq:potential\]](#eq:potential){reference-type="eqref" reference="eq:potential"} are shown in Figure [4](#fig:geometry){reference-type="ref" reference="fig:geometry"}. 
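For the specific choice [\[E:V_function\]](#E:V_function){reference-type="eqref" reference="E:V_function"} of $V$, the potential [\[eq:potential\]](#eq:potential){reference-type="eqref" reference="eq:potential"} can also be written in closed form; the following expression is obtained by a direct integration and is recorded here only for the reader's convenience: $$P(x) = \alpha\Big[\ln\cosh(x-2) - \ln\cosh(X_\infty-2) - \big(\bar v + \tanh(-2)\big)(x - X_\infty)\Big].$$ In particular, $P(x)$ grows like $\alpha\left\{1 - \bar v + \tanh(2)\right\}x$ as $x\nearrow\infty$, and this slope is positive precisely because $\bar v < V(\infty) = 1 + \tanh(2)$; this makes explicit the monotonicity and growth properties of $P$ used below.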
Since $$P'(x) = \alpha \left\{V (x) - \bar v \right\}, \qquad x>0$$ we can write the dynamics [\[E:main_dynamics\]](#E:main_dynamics){reference-type="eqref" reference="E:main_dynamics"} for $n =1$ as a *damped Hamiltonian system* $$\label{E:main_deterministic} \begin{aligned} \dot X_1(t)&=Y_1(t)=\frac{\partial H}{\partial y}(X_1(t), Y_1(t)) \\ \dot Y_1(t) &= -P'(X_1(t))-\alpha Y_1(t) -\beta\frac{Y_1(t)}{X_1^2(t)} \\ &= -\frac{\partial H}{\partial x}(X_1(t), Y_1(t))-\left\{\alpha + \frac{\beta}{X_1^2(t)}\right\}Y_1(t) \end{aligned}$$ for $t\ge 0$. ![Potential $P$ [\[eq:potential\]](#eq:potential){reference-type="eqref" reference="eq:potential"} and Hamiltonian $H$ [\[E:Hamiltonian\]](#E:Hamiltonian){reference-type="eqref" reference="E:Hamiltonian"} along the trajectory $(X_1(t), Y_1(t))$.](Figures/P_Plot.png "fig:"){#fig:geometry width="0.8\\columnwidth"} ![Potential $P$ [\[eq:potential\]](#eq:potential){reference-type="eqref" reference="eq:potential"} and Hamiltonian $H$ [\[E:Hamiltonian\]](#E:Hamiltonian){reference-type="eqref" reference="E:Hamiltonian"} along the trajectory $(X_1(t), Y_1(t))$.](Figures/H_Plot.png "fig:"){#fig:geometry width="0.8\\columnwidth"} **Remark 1**. *The function $V$ is strictly increasing, which implies that $V(x)-\bar v <0$ if $x < X_\infty$ and $V(x)-\bar v > 0$ if $x>X_\infty$. Thus $P$ is increasing on the interval $(X_\infty,\infty)$, decreasing on the interval $(0,X_\infty)$, and its minimum is located at $X_\infty$. Furthermore $\lim_{x\nearrow \infty}P(x)=\infty$.* Using $H(x, y)$, we can show that the solution $(X_1, Y_1)$ is bounded. In particular, we have that $$\label{E:energyLevel} \begin{split} \dot H(X_1(t), Y_1(t))&= \dot X_1(t) \frac{\partial H}{\partial x}(X_1(t), Y_1(t)) \\ & \qquad \qquad + \dot Y_1(t) \frac{\partial H}{\partial y}(X_1(t), Y_1(t))\\ &= -\left\{\alpha + \frac{\beta}{X_1(t)^2}\right\}Y_1(t)^2 \le 0 \end{split}$$ See the illustration of the function $H(X_1(t), Y_1(t))$ in Figure [4](#fig:geometry){reference-type="ref" reference="fig:geometry"}. This can be interpreted as decreasing energy in the system. Now, if we set $$\label{E:hcircdef} h_\circ \overset{\textbf{def}}{=}H(X_{1,\circ},Y_{1, \circ}),$$ then from the definition of $H$, we have that $$\tfrac 12 Y_1^2(t) \le H(X_1(t), Y_1(t)) \le h_\circ$$ where the second inequality follows from [\[E:energyLevel\]](#E:energyLevel){reference-type="eqref" reference="E:energyLevel"} and [\[E:hcircdef\]](#E:hcircdef){reference-type="eqref" reference="E:hcircdef"}. Therefore, $$\label{E:Y_bound} \lvert Y_1(t) \rvert \le \bar y\overset{\textbf{def}}{=}\sqrt{2 h_\circ}, \quad t \ge 0.$$ Similarly, using the increasing behavior of $P$ on $(X_\infty, \infty)$, we can show that $$\label{E:X1_bound} X_1(t) \le \bar x \overset{\textbf{def}}{=}h_\circ, \quad t \ge 0.$$ Finally, it is shown that the solution $X_1(t)$ never hits zero, which can be interpreted as the absence of a collision between the first two vehicles. In particular, $$\label{E:X1_LB} X_1(t) \ge \delta_1, \quad t \ge 0$$ where the lower bound $\delta_1 = \delta_1(X_{1, \circ}, Y_{1, \circ})$ depends on the initial values of the system. Figure [6](#fig:4-5){reference-type="ref" reference="fig:4-5"} shows the trajectory of the dynamical model for the first two vehicles. ![Trajectory of the first two vehicles. The top plot shows the trajectories of $t \mapsto X_1(t)$ and $t \mapsto Y_1(t)$ separately.
The bottom plot shows the orbit of these dynamics for $\alpha = 2$, $\beta = 1$, $\bar v = 0.8$, $(X_{1, \circ}, Y_{1, \circ}) = (0.5, -0.7)$.](Figures/4.png "fig:"){#fig:4-5 width=".8\\columnwidth"} ![Trajectory of the first two vehicles. The top plot shows the trajectories of $t \mapsto X_1(t)$ and $t \mapsto Y_1(t)$ separately. The bottom plot shows the orbit of these dynamics for $\alpha = 2$, $\beta = 1$, $\bar v = 0.8$, $(X_{1, \circ}, Y_{1, \circ}) = (0.5, -0.7)$.](Figures/5.png "fig:"){#fig:4-5 width=".8\\columnwidth"} Now, we need to develop some more properties of the trajectory of the flow $(X_1(t), Y_1(t))$. **Remark 2**. *Using [\[E:Y_bound\]](#E:Y_bound){reference-type="eqref" reference="E:Y_bound"}, [\[E:X1_bound\]](#E:X1_bound){reference-type="eqref" reference="E:X1_bound"}, and [\[E:X1_LB\]](#E:X1_LB){reference-type="eqref" reference="E:X1_LB"}, the set $(0, \bar x] \times [-\bar y, \bar y]$ is positively invariant with respect to the flow $(X_1(t), Y_1(t))$ and has a compact closure. Furthermore, [\[E:energyLevel\]](#E:energyLevel){reference-type="eqref" reference="E:energyLevel"} precludes a periodic orbit. Therefore, an application of the Poincaré-Bendixson theorem [@teschl2012ordinary Section 7.3] suggests that the equilibrium solution $(X_\infty, 0)$ is globally asymptotically stable.* **Remark 3** (**Initial Data**). *In this paper, we are mainly concerned with the boundary-layer analysis of the system near collision (a situation that can happen, for instance, as a result of an instantaneous perturbation in the system, such as sudden braking of the leading vehicles, which then propagates). In other words, we are interested in the case in which the distance between the corresponding consecutive vehicles becomes relatively small. In particular, we consider initial values with $\Delta X_{n, \circ} = X_{n, \circ} - X_{n-1, \circ} < X_\infty$, for the respective $n \in \left\{ 1, \cdots, N \right\}$.* *In addition, suppose that $Y_{1, \circ} <0$. Fix a time $T>0$. Since $X_{1, \circ} <X_\infty$, the dynamics of [\[E:main_dynamics\]](#E:main_dynamics){reference-type="eqref" reference="E:main_dynamics"} for $N =1$ suggest that $\dot Y_1(t) >0$ for $t\in (0, \varepsilon_\circ)$, some neighborhood of time zero. On the other hand, since $X_{1, \circ}$ is relatively small, the dominant term in the dynamics of $\dot Y_1$ in [\[E:main_dynamics\]](#E:main_dynamics){reference-type="eqref" reference="E:main_dynamics"} is $-\nicefrac{\beta Y_1(t)}{(X_1(t))^2}$ for $t \in (0, \varepsilon_\circ)$. Hence, for sufficiently large $\beta$, $Y_1(\bar t) >0$ for some $\bar t <T$ (see Figure [6](#fig:4-5){reference-type="ref" reference="fig:4-5"}). Therefore, in this paper, without sacrificing any generality, it is sufficient to consider $Y_{1, \circ} >0$; otherwise, the same analysis follows after shifting the initial time to $\bar t$.* *Moreover, this assumption will not affect the generality of our follower vehicles' analysis in the next section. More precisely, let $N = 2$. The interaction between two consecutive vehicles depends on their relative speed, i.e. $Y_2 - Y_1$ (rather than merely the relative velocity $Y_1$ of the leading vehicle), which will be analyzed in its full generality. In particular, as we will see, the most interesting case for the purpose of our boundary layer analysis will be $Y_n - Y_{n-1} <0$, $n \ge 2$, which implies that the following vehicle is moving faster than the leading one. This can potentially result in a collision.
We will discuss this case in detail in the next section.* ## Controlling the Behavior of the Dynamics by Controlling the Parameters In this section, we study the behavior of the trajectory of $t \mapsto Y_1(t)$ for $(X_{1, \circ}, Y_{1, \circ})\in (0, X_\infty] \times \mathbb{R}_+$. We define $$\label{E:equil_time} \mathcal{T}_\infty \overset{\textbf{def}}{=}\inf \left\{ t \ge 0: X_1(t) = X_\infty \right\}$$ as the first time at which the trajectory $t \mapsto X_1(t)$, starting from the initial data $(X_{1, \circ}, Y_{1, \circ})$, reaches $X_\infty$. The change of variables $u \overset{\textbf{def}}{=}x - X_\infty$ and $v \overset{\textbf{def}}{=}y$ helps us standardize the stability analysis by translating the equilibrium point to the origin. The Hamiltonian can be rewritten as $$\label{E:new_hamiltonian}\begin{split} H(u , v) &= \tfrac 12 v^2 + \tilde P(u) \\ & \overset{\textbf{def}}{=}\tfrac 12 v^2 + \alpha \int_0^u \left\{V(u' + X_\infty) - \bar v \right\}du' \\ \frac{dH}{dt}(u(t), v(t)) &= - \left(\alpha + \frac{\beta}{(u(t) + X_\infty)^2} \right) v^2(t). \end{split}$$ The main result of this section expresses that by controlling the parameters $\alpha$ and $\beta$ we can control the behavior of the trajectory $t \mapsto Y_1(t)$. In particular, **Theorem 4**. *Starting from $(X_{1, \circ}, Y_{1, \circ})\in (0, X_\infty] \times \mathbb{R}_+$, for sufficiently large values of $\alpha$ and $\beta$, we have that $$\limsup_{t \nearrow \mathcal{T}_\infty } v(t) = 0.$$* This implies that by controlling $\alpha$ and $\beta$, the flow $Y_1(t)$ will be absorbed into the equilibrium point as $t \nearrow \mathcal{T}_\infty$. We postpone the proof of this theorem until some preliminary results are established. The following lemmas explain the behavior of the trajectory $t \mapsto Y_1(t)$ around the equilibrium point. Figure [8](#fig:9-10){reference-type="ref" reference="fig:9-10"} is provided as a graphical aid to the proofs. **Lemma 5**. *The set $U_1 \overset{\textbf{def}}{=}\left\{ (x, y): x < X_\infty, y\in \mathbb{R}_+ \right\}$ is invariant with respect to the trajectory $[0, \mathcal{T}_\infty) \ni t \mapsto (X_1(t), Y_1(t))$. In other words, if $(X_{1,\circ}, Y_{1, \circ}) \in U_1$, then $Y_1(t) >0$ for $t \in [0, \mathcal{T}_\infty)$.* *Proof.* We prove the result by contradiction. Suppose that there exists a time $t_\circ < \mathcal{T}_\infty$ such that $Y_1(t_\circ) = 0$, and let $t_\circ$ be the first such time. By the continuity of the solution, we must have $\dot Y_1(t_\circ) \le 0$. On the other hand, $$\begin{split} \dot Y_1(t_\circ) &= - \alpha \left\{V(X_1(t_\circ)) - \bar v + Y_1(t_\circ) \right\}- \beta \frac{Y_1(t_\circ)}{(X_1(t_\circ))^2}\\ & = -\alpha \left\{V(X_1(t_\circ)) - \bar v \right\}>0 \end{split}$$ where the last inequality holds since $X_1(t_\circ) < X_\infty$ and $V$ is strictly increasing. But this is a contradiction and hence the result follows (see Figure [8](#fig:9-10){reference-type="ref" reference="fig:9-10"}). ![The behavior of the trajectory for the different cases of initial values $(X_{1, \circ}, Y_{1, \circ})$.](Figures/9.png "fig:"){#fig:9-10 width="3.5in"} ![The behavior of the trajectory for the different cases of initial values $(X_{1, \circ}, Y_{1, \circ})$.](Figures/21.png "fig:"){#fig:9-10 width="3.5in"} ◻ Although in our setting $(X_{1, \circ}, Y_{1, \circ}) \in (0, X_\infty] \times \mathbb{R}_+$, we need to understand the behavior of the dynamics when the initial data is located in other positions around the equilibrium point. Analogously to $U_1$, and following the decomposition depicted in Figure [8](#fig:9-10){reference-type="ref" reference="fig:9-10"}, we write $U_2 \overset{\textbf{def}}{=}\left\{ (x, y): x > X_\infty, y\in \mathbb{R}_+ \right\}$, $L_1 \overset{\textbf{def}}{=}\left\{ (x, y): x < X_\infty, y\in \mathbb{R}_- \right\}$, and $L_2 \overset{\textbf{def}}{=}\left\{ (x, y): x > X_\infty, y\in \mathbb{R}_- \right\}$ for the remaining regions around the equilibrium point. **Lemma 6**. *Suppose $(X_{1, \circ}, Y_{1, \circ}) \in L_1$.
Then there exists a time $t_\circ \in (0, \mathcal{T}_\infty)$ such that $Y_1(t)>0$ for $t \in (t_\circ, \mathcal{T}_\infty)$.* *Proof.* Since $Y_{1, \circ} <0$, the distance $X_1$ is decreasing. On the other hand, since no collision happens in the dynamics of $X_1$ (see [\[E:X1_LB\]](#E:X1_LB){reference-type="eqref" reference="E:X1_LB"}), there must be a time $t_\circ \in (0, \mathcal{T}_\infty)$ such that $Y_1(t_\circ) = 0$ and $\dot Y_1(t_\circ) >0$. Then from Lemma [Lemma 5](#T:stay_positive){reference-type="ref" reference="T:stay_positive"}, $Y_1(t) >0$ for $t \in (t_\circ, \mathcal{T}_\infty)$. ◻ As a result of Lemma [Lemma 5](#T:stay_positive){reference-type="ref" reference="T:stay_positive"}, either $Y_1(\mathcal{T}_\infty) =0$ or we have the following: **Lemma 7**. *Suppose $(X_{1, \circ}, Y_{1, \circ}) \in U_2$. Then, there is a time $t_\circ < \mathcal{T}_\infty$ such that $Y_1(t_\circ) = 0$ and $\dot Y_1(t_\circ) <0$.* *Proof.* Similar to the previous lemmas, using the dynamics of $(X_1, Y_1)$, we have that $V(X_1) - \bar v >0$ since $X_{1, \circ} >X_\infty$, and therefore $\dot Y_1 <0$. Hence the claim follows. ◻ **Lemma 8**. *Suppose $(X_{1, \circ}, Y_{1, \circ}) \in L_2$. Then, $Y_1(t) <0$ for $t \in [0,\mathcal{T}_\infty)$.* *Proof.* The proof is by contradiction, similar to the case of Lemma [Lemma 5](#T:stay_positive){reference-type="ref" reference="T:stay_positive"}. ◻ Therefore, if $(X_{1, \circ}, Y_{1, \circ}) \in L_2$, then either $Y_1(\mathcal{T}_\infty) = 0$ or the case of Lemma [Lemma 6](#T:negative_hit_zero){reference-type="ref" reference="T:negative_hit_zero"} will be revisited. All of these cases repeat around the equilibrium point until convergence happens. Employing the results of Lemma [Lemma 5](#T:stay_positive){reference-type="ref" reference="T:stay_positive"}-[Lemma 8](#T:final_lem){reference-type="ref" reference="T:final_lem"} and properties of the Hamiltonian [\[E:Hamiltonian\]](#E:Hamiltonian){reference-type="eqref" reference="E:Hamiltonian"}, we show that the rate of convergence of the trajectory $(X_1(t), Y_1(t))$ to the equilibrium point can be controlled by controlling the parameters $\alpha$ and $\beta$. The following inequalities are the cornerstone of proving Theorem [Theorem 4](#T:exp_stable_convergence){reference-type="ref" reference="T:exp_stable_convergence"}. **Lemma 9**. *Let us define the domain $\mathcal C \overset{\textbf{def}}{=}(\delta_1 - X_\infty , \bar x - X_\infty) \times \mathbb{R}$ (see [\[E:X1_bound\]](#E:X1_bound){reference-type="eqref" reference="E:X1_bound"} and [\[E:X1_LB\]](#E:X1_LB){reference-type="eqref" reference="E:X1_LB"} and the definition of $u,v$ before [\[E:new_hamiltonian\]](#E:new_hamiltonian){reference-type="eqref" reference="E:new_hamiltonian"}), which contains the equilibrium point. There exist constants $\underline k$ and $\overline k$ such that $$\underline k \left\| U \right\|^2 \le H(u, v) \le \overline k \left\| U \right\|^2$$ where $U = (u, v)^\mathsf T\in \mathcal C$.* *Proof.* The right-hand-side inequality follows from the Lipschitz continuity of the function $V$ (for this choice of $V$, $V'(x) = \operatorname{sech}^2(x-2) \le 1$) and the fact that $\bar v = V(X_\infty)$, taking $\overline k \overset{\textbf{def}}{=}\max \left\{ \tfrac 12, \alpha \right\}$.
To see the left-hand side of the inequality, we note that the function $\tilde P$ (as in [\[E:new_hamiltonian\]](#E:new_hamiltonian){reference-type="eqref" reference="E:new_hamiltonian"}) is convex (see the illustration of $P$ in Figure [4](#fig:geometry){reference-type="ref" reference="fig:geometry"} and consider that the equilibrium point is shifted to the origin) and, in addition, over the closure $\bar{\mathcal C}$ of the domain we have $$\label{E:strong-convex-constant} \tilde P''(u) = \alpha V'(u + X_\infty) \ge \alpha k_{\eqref{E:strong-convex-constant}} >0$$ for some constant $k_{\eqref{E:strong-convex-constant}}>0$. Therefore, $\tilde P$ is strongly convex on $\mathcal C$, which implies that $$\tilde P(u) \ge \tilde P(0) + \tilde P'(0) u + \tfrac {\alpha k_{\eqref{E:strong-convex-constant}}}{2} u^2.$$ Since $\tilde P(0) = \tilde P'(0) = 0$, we have that $$\label{E:strong_convex} H(u, v) = \tfrac 12 v^2 + \tilde P(u) \ge \tfrac 12 v^2 + \tfrac {\alpha k_{\eqref{E:strong-convex-constant}}}{2} u^2 \ge \underline k \left\| U \right\|^2$$ for $\underline k \overset{\textbf{def}}{=}\min \left\{ \tfrac 12, \tfrac {\alpha k_{\eqref{E:strong-convex-constant}}}{2} \right\}$. This completes the proof. ◻ For the proof of Theorem [Theorem 4](#T:exp_stable_convergence){reference-type="ref" reference="T:exp_stable_convergence"}, with a slight abuse of notation, we consider $\mathcal{T}_\infty$ as in [\[E:equil_time\]](#E:equil_time){reference-type="eqref" reference="E:equil_time"} to denote the time at which the trajectory $t \mapsto u(t)$ reaches the origin (which is the equilibrium point here). *Proof of Theorem [Theorem 4](#T:exp_stable_convergence){reference-type="ref" reference="T:exp_stable_convergence"}.* Let us fix $\varepsilon>0$ such that $\mathop{\mathrm{Ball}}((0,0)^\mathsf T, \varepsilon)$ is contained in the region of attraction (for exponential stability) of the origin in the linearized model; see Remark [Remark 2](#R:global_asym_stab){reference-type="ref" reference="R:global_asym_stab"}. We recall Lemma [Lemma 5](#T:stay_positive){reference-type="ref" reference="T:stay_positive"}-[Lemma 8](#T:final_lem){reference-type="ref" reference="T:final_lem"} and we define $$\label{E:Teps} \mathcal{T}_\infty^\varepsilon\overset{\textbf{def}}{=}\inf \left\{ t < \mathcal{T}_\infty: \lvert u(t) \rvert < \varepsilon/3 \right\}.$$ If for some values of $\alpha$ and $\beta$ $$v(t) < \varepsilon, \quad t \in [\mathcal{T}_\infty^\varepsilon, \mathcal{T}_\infty),$$ i.e., the trajectory is already in the domain of attraction, then the claim follows by the exponential convergence of the linearized problem. Suppose on the contrary that for all values of $\alpha$ and $\beta$, $v(t) \ge \varepsilon$ on $[\mathcal{T}_\infty^\varepsilon, \mathcal{T}_\infty)$. Then over the domain $\mathcal C$ $$\begin{split} \left(\alpha + \frac{\beta}{(u(t) + X_\infty)^2} \right)v^2(t) &\ge \tfrac 12 \left(\alpha + \frac{\beta}{\bar x^2}\right) v^2(t)\\ & \qquad \qquad + \tfrac 12 \left(\alpha + \frac{\beta}{\bar x^2}\right) \varepsilon^2 \\ &\ge \tfrac 12 \left(\alpha + \frac{\beta}{\bar x^2} \right) \left\| (u(t), v(t))^\mathsf T \right\|^2 \end{split}$$ for $t \in [\mathcal{T}_\infty^\varepsilon, \mathcal{T}_\infty)$, where the last inequality is by [\[E:Teps\]](#E:Teps){reference-type="eqref" reference="E:Teps"}.
Using [\[E:new_hamiltonian\]](#E:new_hamiltonian){reference-type="eqref" reference="E:new_hamiltonian"}, we have that $$\label{E:dotH_bound_asym}\begin{split} \dot H(u(t) , v(t)) & \le - K_{\eqref{E:dotH_bound_asym}}\left\| (u(t), v(t))^\mathsf T \right\|^2, \quad t \in [\mathcal{T}_\infty^\varepsilon, \mathcal{T}_\infty), \end{split}$$ where $K_{\eqref{E:dotH_bound_asym}} \overset{\textbf{def}}{=}\tfrac 12 \left(\alpha + \frac{\beta}{\bar x^2}\right)$. Using Lemma [Lemma 9](#T:H_bound_expstab_condition){reference-type="ref" reference="T:H_bound_expstab_condition"} and [\[E:dotH_bound_asym\]](#E:dotH_bound_asym){reference-type="eqref" reference="E:dotH_bound_asym"}, we can write $$\dot H(u(t), v(t)) \le - \frac{K_{\eqref{E:dotH_bound_asym}}}{\overline k} H(u(t), v(t)),\quad t \in [\mathcal{T}_\infty^\varepsilon, \mathcal{T}_\infty).$$ Using Gronwall's inequality, we get $$H(u(t), v(t)) \le H(u(\mathcal{T}_\infty^\varepsilon), v(\mathcal{T}_\infty^\varepsilon)) \exp\left\{ - \frac{K_{\eqref{E:dotH_bound_asym}}}{\overline k} (t - \mathcal{T}_\infty^\varepsilon) \right\},$$ for $t \in [\mathcal{T}_\infty^\varepsilon, \mathcal{T}_\infty)$. Once more, using Lemma [Lemma 9](#T:H_bound_expstab_condition){reference-type="ref" reference="T:H_bound_expstab_condition"}, we will have that $$\begin{split} \left\| (u(t), v(t))^\mathsf T \right\|^2 & \le \frac{1}{\underline k} H(u(t), v(t)) \\ & \le \frac{1}{\underline k} H(u(\mathcal{T}_\infty^\varepsilon), v(\mathcal{T}_\infty^\varepsilon)) \exp\left\{ - \frac{K_{\eqref{E:dotH_bound_asym}}}{\overline k} (t - \mathcal{T}_\infty^\varepsilon) \right\}\\ & \le \frac{ h_\circ}{\underline k} \exp\left\{ - \frac{K_{\eqref{E:dotH_bound_asym}}}{\overline k} (t - \mathcal{T}_\infty^\varepsilon) \right\}, \quad t \in [\mathcal{T}_\infty^\varepsilon, \mathcal{T}_\infty), \end{split}$$ where the last inequality is from [\[E:energyLevel\]](#E:energyLevel){reference-type="eqref" reference="E:energyLevel"} and [\[E:hcircdef\]](#E:hcircdef){reference-type="eqref" reference="E:hcircdef"}. But comparing $K_{\eqref{E:dotH_bound_asym}}$ and $\overline k$ shows that for sufficiently large values of $\alpha$ and $\beta$, $\left\| (u(t), v(t))^\mathsf T \right\| <\varepsilon$ for some $t\in (\mathcal{T}_\infty^\varepsilon, \mathcal{T}_\infty)$, which contradicts our initial assumption. Therefore, the statement of the theorem follows (see Figure [10](#fig:alpha_beta_effect){reference-type="ref" reference="fig:alpha_beta_effect"}). ![The first figure is for $\alpha = 3$, $\beta = 2$, and the trajectory converges to the rest point. The second figure corresponds to $\alpha = \beta = 1$; for such relatively small $\alpha$ and $\beta$, the cases of Lemma [Lemma 7](#T:U_2_case){reference-type="ref" reference="T:U_2_case"}-[Lemma 8](#T:final_lem){reference-type="ref" reference="T:final_lem"} occur. The red curves zoom into the behavior of the trajectories near the equilibrium point. Other parameters are $\bar v = 1.3$, $(X_{1, \circ}, Y_{1, \circ})=(0.5, 1)$.](Figures/14.png "fig:"){#fig:alpha_beta_effect width="0.9\\columnwidth"} ![The first figure is for $\alpha = 3$, $\beta = 2$, and the trajectory converges to the rest point. The second figure corresponds to $\alpha = \beta = 1$; for such relatively small $\alpha$ and $\beta$, the cases of Lemma [Lemma 7](#T:U_2_case){reference-type="ref" reference="T:U_2_case"}-[Lemma 8](#T:final_lem){reference-type="ref" reference="T:final_lem"} occur. The red curves zoom into the behavior of the trajectories near the equilibrium point.
Other parameters are $\bar v = 1.3$, $(X_{1, \circ}, Y_{1, \circ})=(0.5, 1)$.](Figures/16.png "fig:"){#fig:alpha_beta_effect width="0.90\\columnwidth"} ◻ # Dynamics of Other Vehicles The analysis of the properties of the dynamics of interaction between vehicles $n$ and $n-1$ for $n \ge 2$ requires an in-depth understanding of the interaction between vehicles $n-1$ and $n-2$ (the leading vehicles). In this section, using the results of Section [3](#S:two_vechicles){reference-type="ref" reference="S:two_vechicles"}, we will consider the interaction between vehicles $n$ and $n -1$ for $n= 2$ (i.e., the interaction between the second and third vehicles in the platoon). For the purpose of our analysis (in particular collision analysis), we need to work with the *difference* flow $(X_2- X_1,Y_2 - Y_1)$ rather than the flow $(X_2, Y_2)$. In particular, $X_2 - X_1 = 0$ means a collision. Therefore, it is reasonable to introduce the change of variables $$\begin{split} \xi_1 \overset{\textbf{def}}{=}X_1, \quad \xi_2 \overset{\textbf{def}}{=}X_2 - X_1 \\ \zeta_1 \overset{\textbf{def}}{=}Y_1, \quad \zeta_2 \overset{\textbf{def}}{=}Y_2 - Y_1, \end{split}$$ and the difference dynamical model of [\[E:main_dynamics\]](#E:main_dynamics){reference-type="eqref" reference="E:main_dynamics"} then reads $$\label{E:differece_dynamics} \begin{cases} \dot \xi_1 = \zeta_1 \\ \dot \zeta_1 = -\alpha \left\{V(\xi_1) - \bar v \right\}- \alpha \zeta_1 - \beta \frac{\zeta_1}{(\xi_1)^2} \\ \dot \xi_2 = \zeta_2 \\ \dot \zeta_2 = - \alpha \left\{V(\xi_2) - V(\xi_1) \right\}- \alpha \zeta_2 - \beta \left(\frac{\zeta_2}{(\xi_2)^2} - \frac{\zeta_1}{(\xi_1)^2} \right)\\ (\xi_1(0), \zeta_1(0)) = (\xi_{1, \circ}, \zeta_{1, \circ }) = (X_{1, \circ}, Y_{1, \circ})\\ (\xi_2(0), \zeta_2(0))= (\xi_{2, \circ}, \zeta_{2, \circ}) = (X_{2, \circ} - X_{1, \circ }, Y_{2, \circ } - Y_{1, \circ }). \end{cases}$$ First, we look at the existence of the solution of the dynamics of [\[E:differece_dynamics\]](#E:differece_dynamics){reference-type="eqref" reference="E:differece_dynamics"}. We define the state space $$\label{E:new_domain} \mathcal D \overset{\textbf{def}}{=}\left((0, \infty)\times \mathbb{R}\right)^2 \ni (\xi_{1, \circ}, \zeta_{1, \circ}, \xi_{2, \circ}, \zeta_{2, \circ})$$ and the flow $\Xi \overset{\textbf{def}}{=}(\xi_1, \zeta_1, \xi_2, \zeta_2)$. From the abstract theory of dynamical systems [@teschl2012ordinary], the solution of [\[E:differece_dynamics\]](#E:differece_dynamics){reference-type="eqref" reference="E:differece_dynamics"} exists on a maximal interval $[0, \mathcal{T}^c)$, for some $\mathcal{T}^c >0$. It should be noted that if $\mathcal{T}^c = \infty$ then the system is globally well-behaved and consequently no collision occurs. In what follows, we first show that if $\mathcal{T}^c < \infty$, i.e., if the solution does not exist for all times, then $\mathcal{T}^c$ must represent the time at which a collision happens in the system; i.e., $\xi_2(t) \to 0$ as $t \to \mathcal{T}^c$. **Assumption.** We suppose that $$\label{E:finite_collision_time} \mathcal{T}^c < \infty.$$ We start with the implications of [\[E:finite_collision_time\]](#E:finite_collision_time){reference-type="eqref" reference="E:finite_collision_time"}. As $t \nearrow \mathcal{T}^c$, the flow $\Xi$ either grows unbounded or approaches the boundary $\partial \mathcal D$. We recall that under the conditions of Theorem [Theorem 4](#T:exp_stable_convergence){reference-type="ref" reference="T:exp_stable_convergence"}, $\zeta_1$ vanishes at $\mathcal{T}_\infty$.
Therefore, if $\mathcal{T}^c \ge \mathcal{T}_\infty$ then for $t \in [\mathcal{T}_\infty, \mathcal{T}^c)$ the dynamics of $\dot \zeta_2$ in [\[E:differece_dynamics\]](#E:differece_dynamics){reference-type="eqref" reference="E:differece_dynamics"} will be the same as the dynamics of $\dot \zeta_1$, and so the solution exists for all $t \ge \mathcal{T}_\infty$; i.e., $\mathcal{T}^c = \infty$. On the other hand, $\zeta_2 \nearrow \infty$, which would imply $\xi_2 \nearrow \infty$, is prohibited since, by the properties of the function $V$ and the boundedness of $\xi_1$ and $\zeta_1$, this would imply $\dot \zeta_2 \to -\infty$, which is a contradiction. Finally, $\zeta_2 \searrow -\infty$ implies that $\xi_2 \searrow - \infty$ by the third equation of the dynamical system [\[E:differece_dynamics\]](#E:differece_dynamics){reference-type="eqref" reference="E:differece_dynamics"}, which is not admissible in the domain $\mathcal D$. Putting it all together, for $\mathcal{T}^c < \mathcal{T}_\infty$, by Lemma [Lemma 5](#T:stay_positive){reference-type="ref" reference="T:stay_positive"}-[Lemma 8](#T:final_lem){reference-type="ref" reference="T:final_lem"} and since $\xi_1(t) \neq 0$ for $t < \mathcal{T}_\infty$, from [\[E:new_domain\]](#E:new_domain){reference-type="eqref" reference="E:new_domain"} we must have $$\label{E:collition_time_limit} \lim_{t \nearrow \mathcal{T}^c} \xi_2(t) = 0,$$ or, in other words, if assumption [\[E:finite_collision_time\]](#E:finite_collision_time){reference-type="eqref" reference="E:finite_collision_time"} holds, then $\mathcal{T}^c$ must be the collision time. Let us look at the implications of the finite collision time. Under such an assumption, there exists a time $$\check t \overset{\textbf{def}}{=}\sup \left\{ t < \mathcal{T}^c: \xi_2(t) > \tfrac 12 \delta_2 \right\} ,$$ where we set $$\label{E:delta_2} \delta_2 \overset{\textbf{def}}{=}\min \left\{ \xi_{2, \circ }, \delta_1 \right\},$$ and $\delta_1 < X_\infty$ is defined in [\[E:X1_LB\]](#E:X1_LB){reference-type="eqref" reference="E:X1_LB"}. In other words, if the collision time is finite, then there must be a time $\check t$ after which $\xi_2(t) \le \tfrac 12 \delta_2$, i.e., for $t \in [\check t, \mathcal{T}^c)$. We study the behavior of $\dot \zeta_2$ in this region. The next result shows that in this region, $\zeta_2(t) <0$; i.e. the follower vehicle is moving faster than the leading one. **Lemma 10**. *The set $\mathcal U \overset{\textbf{def}}{=}\left\{ (x, y): x \in (0, \tfrac 12 \delta_2), y \in \mathbb{R}_+ \right\}$ is invariant with respect to the trajectory $t \in [\check t, \mathcal{T}^c) \mapsto (\xi_2(t) , \zeta_2(t))$. In other words, if $\zeta_2(t) > 0$ for some $t \in [\check t, \mathcal{T}^c)$, then it must remain positive.* *Proof.* Figure [11](#fig:region){reference-type="ref" reference="fig:region"} illustrates the proof argument. Suppose on the contrary that $\zeta_2$ becomes negative at some $t \in [\check t, \mathcal{T}^c)$ after being positive. This implies, by continuity of the solution, that there exists a time $t_\circ$ such that $\zeta_2(t_\circ) =0$ and $\dot \zeta_2(t_\circ) <0$. But using the dynamics of $\zeta_2$ in [\[E:differece_dynamics\]](#E:differece_dynamics){reference-type="eqref" reference="E:differece_dynamics"} as well as [\[E:delta_2\]](#E:delta_2){reference-type="eqref" reference="E:delta_2"}, we must have that $$\dot \zeta_2(t_\circ) = - \alpha \left\{V(\xi_2(t_\circ)) - V(\xi_1(t_\circ)) \right\}+ \beta \frac{\zeta_1(t_\circ)}{(\xi_1(t_\circ))^2} >0,$$ which is a contradiction.
![Illustration of the proof argument.](Figures/17.png "fig:"){#fig:region width="2in"} ◻ **Remark 11**. *Lemma [Lemma 10](#T:invar_set){reference-type="ref" reference="T:invar_set"} along with the assumption [\[E:finite_collision_time\]](#E:finite_collision_time){reference-type="eqref" reference="E:finite_collision_time"} shows in particular that $\zeta_2(t) <0$ for $t \in [\check t, \mathcal{T}^c)$.* Next, we show that while $\xi_2$ is small and $\zeta_2 <0$ on $[\check t, \mathcal{T}^c)$, relative deceleration is encouraged, which is the only mechanism that can prevent the collision. In other words, if this deceleration is not strong enough, the collision will happen. **Lemma 12**. *For $t \in [\check t, \mathcal{T}^c)$, we have that $$\dot \zeta_2(t) > 0.$$* *Proof.* Let us consider the dynamics of $\dot \zeta_2$ in [\[E:differece_dynamics\]](#E:differece_dynamics){reference-type="eqref" reference="E:differece_dynamics"}. Then, [\[E:delta_2\]](#E:delta_2){reference-type="eqref" reference="E:delta_2"} implies that $- \alpha \left\{V(\xi_2) - V(\xi_1) \right\}>0$, since $\xi_2 \le \tfrac 12 \delta_2 < \delta_1 \le \xi_1$ on this interval. Remark [Remark 11](#R:zeta_2_sign){reference-type="ref" reference="R:zeta_2_sign"} shows that $- \alpha \zeta_2 >0$. Finally, $\zeta_2 < 0$ together with $\zeta_1 >0$ implies that the last term is also positive, and this completes the proof. ◻ Now, we are ready to show that our assumption [\[E:finite_collision_time\]](#E:finite_collision_time){reference-type="eqref" reference="E:finite_collision_time"} and its implications (the previous results) lead to a contradiction, i.e., the collision cannot happen in finite time. **Proposition 13**. *Under the conditions of Theorem [Theorem 4](#T:exp_stable_convergence){reference-type="ref" reference="T:exp_stable_convergence"}, $\mathcal T^c = \infty$. In other words, a collision does not happen in the dynamical model [\[E:differece_dynamics\]](#E:differece_dynamics){reference-type="eqref" reference="E:differece_dynamics"}.* *Proof.* Using [\[E:collition_time_limit\]](#E:collition_time_limit){reference-type="eqref" reference="E:collition_time_limit"}, the idea of the proof is to show that $\xi_2(t)$ remains bounded away from zero for $t \in [\check t, \mathcal{T}^c)$, which contradicts [\[E:collition_time_limit\]](#E:collition_time_limit){reference-type="eqref" reference="E:collition_time_limit"} and hence implies the statement of the proposition. The result of Lemma [Lemma 12](#T:positive_accel){reference-type="ref" reference="T:positive_accel"} (monotonicity of $\zeta_2(t)$) suggests that, given the trajectory of $t \mapsto (\xi_1(t), \zeta_1(t))$, we should be able to locally write $$\label{E:Psi_dynamics} \xi_2(t) = \psi(\zeta_2(t)), \quad \text{for $t \in [\check t, \mathcal T^c)$},$$ for some function $\psi$, which will be constructed below. Employing [\[E:differece_dynamics\]](#E:differece_dynamics){reference-type="eqref" reference="E:differece_dynamics"}, we have that $$\label{E:transit} \begin{split} \zeta_2 = \dot \xi_2 &= \psi'(\zeta_2) \dot \zeta_2 \\ & = \psi'(\zeta_2) \Bigg \{ - \alpha \left\{V(\psi(\zeta_2)) - V(\xi_1) \right\}- \alpha \zeta_2 \\ &\qquad \qquad \qquad - \beta \left(\frac{\zeta_2}{(\psi(\zeta_2))^2} - \frac{\zeta_1}{(\xi_1)^2} \right) \Bigg\}. \end{split}$$ Furthermore, thanks to its strictly monotone behavior, the function $\zeta_2:[\check t, \mathcal T^c) \to [\check \zeta, \hat \zeta)$, where $\check \zeta \overset{\textbf{def}}{=}\zeta_2(\check t)$ and $\hat \zeta \overset{\textbf{def}}{=}\lim_{t \nearrow \mathcal T^c}\zeta_2(t)$, is a diffeomorphism. Let $\theta \overset{\textbf{def}}{=}\zeta_2^{-1}$, the inverse function of $\zeta_2$.
Then, the function $\zeta_1$ on $[\check t, \mathcal T^c )$ can be represented as the smooth function $\zeta_1 \circ \theta$ on $[\check \zeta, \hat \zeta)$ if $\mathcal T^c < \mathcal T_\infty$, and as zero otherwise. A similar argument holds true for $\xi_1 \circ \theta$.\ Let us now formalize the construction of $\psi$ by extending the function $\theta$ smoothly to the domain $(- \infty, 0)$ and defining the function $$\mathbf g(\zeta, \psi) \overset{\textbf{def}}{=}\frac{-\zeta}{\alpha \left\{V(\psi) - V(\xi_1(\theta(\zeta))) \right\}+ \alpha \zeta + \beta \left(\frac{\zeta}{\psi^2} - \frac{\zeta_1(\theta(\zeta))}{(\xi_1(\theta(\zeta)))^2} \right)}$$ for $(\zeta, \psi) \in (- \infty, 0) \times (0 , \delta_2)$, and $(\zeta_1, \xi_1) \in (0, \bar y) \times(\delta_1, X_\infty)$. Therefore, using [\[E:transit\]](#E:transit){reference-type="eqref" reference="E:transit"}, the dynamical model can be presented by $$\label{E:phi_psi_dynamics} \begin{cases} \psi'(\zeta)= \mathbf g(\zeta, \psi(\zeta)) \\ \psi(\Check \zeta) = \xi_2(\check t) \end{cases}$$ where $\check \zeta \overset{\textbf{def}}{=}\zeta_2(\check t)$ and $\xi_2(\check t) = \tfrac 12 \delta_2$. Through this construction, the dynamics [\[E:phi_psi_dynamics\]](#E:phi_psi_dynamics){reference-type="eqref" reference="E:phi_psi_dynamics"} are well-defined and have a maximal interval of existence $(\mu_-, \mu_+) \subset (-\infty, 0)$ which contains the initial value $\check \zeta$. The construction [\[E:phi_psi_dynamics\]](#E:phi_psi_dynamics){reference-type="eqref" reference="E:phi_psi_dynamics"} creates a barrier dynamics, through comparison with which we can show that $\xi_2$ stays bounded away from zero on $[\check t, \mathcal{T}^c)$ (see [\[E:Psi_dynamics\]](#E:Psi_dynamics){reference-type="eqref" reference="E:Psi_dynamics"}). **Theorem 14**. *We have that $$\inf_{\bar \zeta \in {[\check \zeta, \mu_+)}} \psi(\bar \zeta) >0.$$* *Proof.* Let us consider the definition of $\psi'(\zeta)$ in [\[E:phi_psi_dynamics\]](#E:phi_psi_dynamics){reference-type="eqref" reference="E:phi_psi_dynamics"}. We recall that $\zeta_1 >0$ for $t \in [\check t, \mathcal{T}^c)$, and by construction $\psi(\zeta) < \delta_2 \le \delta_1 \le \xi_1$. This implies that $$\psi'(\zeta) < 0, \quad \zeta \in [\check \zeta, \mu_+).$$ Dividing both sides of [\[E:phi_psi_dynamics\]](#E:phi_psi_dynamics){reference-type="eqref" reference="E:phi_psi_dynamics"} by $\psi^2(\zeta)$, we have $$\begin{split} & \frac{\psi'(\zeta)}{\psi^2(\zeta)} = \\ & (- \zeta) \Bigg\{\alpha \psi^2(\zeta) \left\{V(\psi(\zeta)) - V(\xi_1(\theta(\zeta))) \right\}\\ & \quad + \alpha \zeta \psi^2(\zeta) + \beta \left(\zeta - \zeta_1(\theta(\zeta)) \frac{\psi^2(\zeta)}{(\xi_1(\theta(\zeta)))^2} \right) \Bigg\}^{-1}. \end{split}$$ The right-hand side of this equation is in fact $$\label{E:psi_bound} \text{RHS} = \frac{- \zeta}{\text{Negative Terms + $\beta \zeta$}} > \frac{- \zeta}{\beta \zeta} = - \frac 1 \beta$$ on $[\check \zeta, \mu_+)$.
In addition, we define $$\mathcal R(u) \overset{\textbf{def}}{=}\left\{(\tfrac 12 \delta_2)^{-1} + \frac 1\beta (u - \check \zeta) \right\}^{-1}, \quad u \in [\check \zeta, 0).$$ Then, $$\label{E:mathcal R_relations} \mathcal R(\check \zeta) = \tfrac 12 \delta_2 = \psi(\check \zeta), \quad \frac{\dot{\mathcal R}(u)}{\mathcal R^2(u)} = -\frac 1\beta , \quad u \in [\check \zeta, 0).$$ To compare $\mathcal R(\cdot)$ and $\psi(\cdot)$, which will help us show the lower bound for $\psi$, we define $$\begin{split} \Lambda (u) &\overset{\textbf{def}}{=}\left(\frac{1}{\psi(u)} - \frac{1}{\mathcal R(u)}\right)^+ \\ & = \left(\frac{1}{\psi(u)} - \frac{1}{\mathcal R(u)}\right) \textcolor{black}{\mathbf{1}}_ {\left\{\nicefrac{1}{\psi(u)} > \nicefrac{1}{\mathcal R(u)} \right\}}\\ \end{split}$$ for $u \in [\check \zeta, \mu_+)$. Therefore, if $\psi(u) < \mathcal R(u)$, we get $$\label{E:Lambda_negative_growth} \dot \Lambda(u) = \frac{\dot{\mathcal R}(u)}{\mathcal R^2(u)} - \frac{\psi'(u)}{\psi^2(u)} = - \frac 1 \beta - \frac{\psi'(u)}{\psi^2(u)} \le 0,$$ and the last inequality is by [\[E:psi_bound\]](#E:psi_bound){reference-type="eqref" reference="E:psi_bound"}. But then from [\[E:mathcal R_relations\]](#E:mathcal R_relations){reference-type="eqref" reference="E:mathcal R_relations"} we have that $\Lambda(\check \zeta) = 0$, and hence [\[E:Lambda_negative_growth\]](#E:Lambda_negative_growth){reference-type="eqref" reference="E:Lambda_negative_growth"} implies that $\Lambda(u) \le 0$ for $u \in [\check \zeta, \mu_+)$, which is a contradiction with the assumption $\psi(u) < \mathcal R(u)$. Therefore, $$\inf_{u \in [\check \zeta, \mu_+)} \psi(u) \ge \inf_{u \in [\check \zeta, \mu_+)} \mathcal R(u) \ge \left\{(\tfrac 12 \delta_2)^{-1} - \frac {\check \zeta}{\beta} \right\}^{-1}.$$ This completes the proof. ◻ We conclude the proof of Proposition [Proposition 13](#T:collison){reference-type="ref" reference="T:collison"} by showing that $\mu_+ = \hat \zeta = \zeta_2(\mathcal T^c)$. In other words, we show that the result of Theorem [Theorem 14](#T:collision_avoid){reference-type="ref" reference="T:collision_avoid"} holds on all of $[\check t, \mathcal T^c)$, which in turn implies that $\mathcal T^c = \infty$. To do so, as mentioned before, $\zeta_2: [\check t, \mathcal T^c) \to [\check \zeta, \hat \zeta)$ is a homeomorphism. Therefore, we should have $$\theta \bigl\vert_{[\check \zeta, \hat \zeta)} \left([\check \zeta , \mu_+) \cap [\check \zeta, \hat \zeta) \right) = [\check t, \mathcal T'),$$ for some $\mathcal T' \le \mathcal T^c$, and we recall that $\theta = \zeta_2^{-1}$. From [\[E:phi_psi_dynamics\]](#E:phi_psi_dynamics){reference-type="eqref" reference="E:phi_psi_dynamics"}, $(\psi(\zeta), \zeta)$ solves the dynamics of [\[E:differece_dynamics\]](#E:differece_dynamics){reference-type="eqref" reference="E:differece_dynamics"} and hence must coincide with $(\xi_2, \zeta_2)$ on $[\check t, \mathcal T')$. Let us assume that $\mathcal T' < \mathcal T^c$. In this case, by the extensibility theorem for dynamical systems, $(\xi_2(\mathcal T'-), \zeta_2(\mathcal T'-))$ should hit the boundary of $(0, \tfrac 12 \delta_2) \times (-\infty, 0)$, which is precluded by the fact that $\zeta_2 <0$ and $\dot \zeta_2 >0$ on $[\check t, \mathcal{T}')$. Therefore, $\mathcal T' = \mathcal T^c$.
However, in this case, we must have that $$\lim_{t \nearrow \mathcal T^c} \psi(\zeta_2(t)) = \lim_{t \nearrow \mathcal T^c} \xi_2(t) = 0,$$ which is prohibited by Theorem [Theorem 14](#T:collision_avoid){reference-type="ref" reference="T:collision_avoid"}. This contradicts our main assumption [\[E:finite_collision_time\]](#E:finite_collision_time){reference-type="eqref" reference="E:finite_collision_time"}. ◻ To summarize, Proposition [Proposition 13](#T:collison){reference-type="ref" reference="T:collison"} states that no collision happens. Figure [14](#fig:18-20){reference-type="ref" reference="fig:18-20"} illustrates the interaction of three vehicles in the form of the difference dynamics. ![Illustration of the trajectory and orbit of the dynamics between the first two and the second two vehicles. The red curve in the last illustration shows the behavior of the orbit near the rest point.](Figures/18.png "fig:"){#fig:18-20 width="2.5in"} ![Illustration of the trajectory and orbit of the dynamics between the first two and the second two vehicles. The red curve in the last illustration shows the behavior of the orbit near the rest point.](Figures/19.png "fig:"){#fig:18-20 width="2.35in"} ![Illustration of the trajectory and orbit of the dynamics between the first two and the second two vehicles. The red curve in the last illustration shows the behavior of the orbit near the rest point.](Figures/20.png "fig:"){#fig:18-20 width="0.83\\columnwidth"} # Conclusions and Future Work In this paper, we presented a rigorous boundary layer analysis of the OVFL dynamical model near collision. Such an analysis provides an in-depth understanding of the behavior of the dynamics (e.g., in a platoon of connected autonomous vehicles), especially when the system is forced out of equilibrium (for instance, as a result of some perturbation in the system). Understanding the interaction between the singularity and the behavior of the dynamics near collision is fundamental both from a theoretical standpoint and for designing efficient systems, such as adaptive cruise control. This paper can be extended on several fronts. The theory can benefit from a broader definition of the Hamiltonian, which serves as a Lyapunov-type function to explain the boundedness and stability of the equilibrium solution. Building on this, further analysis is required to generalize the results in a rigorous way. [^1]: $^{1}$ Hossein Nick Zinat Matin and Maria Laura Delle Monache are with the Department of Civil and Environmental Engineering, University of California, Berkeley, `h-matin@berkeley.edu, mldellemonache@berkeley.edu`
--- abstract: | We study algebraic varieties associated with the camera resectioning problem. We characterize these resectioning varieties' multigraded vanishing ideals using Gröbner basis techniques. As an application, we derive and re-interpret celebrated results in geometric computer vision related to camera-point duality. We also clarify some relationships between the classical problems of optimal resectioning and triangulation, state a conjectural formula for the Euclidean distance degree of the resectioning variety, and discuss how this conjecture relates to the recently-resolved multiview conjecture. author: - Erin Connelly, Timothy Duff, Jessie Loucks-Tavitas bibliography: - refs.bib title: Algebra and Geometry of Camera Resectioning --- # Introduction {#sec:intro} The *dramatis personae* of the classical pinhole camera model are a full-rank $3\times 4$ matrix $A$ representing a camera, a $4\times 1$ matrix $q$ representing a world point, and a $3\times 1$ matrix $p$ representing its projection into an image. Image formation may be understood via the projective-linear map $$\label{eq:cam} \begin{split} A : \mathbf P^3 &\dashrightarrow \mathbf P^2 \\ q &\mapsto A q, \end{split}$$ and we write $A q \sim p$ if these two vectors represent the same point in $\mathbf P^2.$ The *center* of the camera $A$ is the unique point where the map [\[eq:cam\]](#eq:cam){reference-type="eqref" reference="eq:cam"} is undefined. The pinhole camera, despite its simplicity, remains a good model of physical cameras. This explains its importance in modern computer vision applications such as structure-from-motion (SfM) and Simultaneous Localization and Mapping (SLAM). On the other hand, classical problems associated with 3D reconstruction have been studied long before the advent of computers, and the role played by algebraic methods in their solution has long been apparent. For instance, Hesse in 1863 formulated the problem of constructing two homographic configurations of 7 lines in space, each prescribed to pass through a configuration of 7 points in a plane [@7pt-Hesse]. Hesse's reduction of this problem to computing the roots of a cubic equation may be understood as an early instance of the so-called 7 point algorithm. Similarly, Grunert's 1841 "3D Pothenot problem\" [@Grunert-1841] is known nowadays as the perspective 3-point (P3P) problem, and his general strategy reducing the problem to a quartic equation remains in use today. In recent years, the name *algebraic vision* [@https://doi.org/10.48550/arxiv.2210.11443] has been coined to describe a body of interdisciplinary research in which notions from algebra and vision flow freely. To date, algebraic vision has largely focused on problems which we refer to as the *full reconstruction problem* and *triangulation*. In the full reconstruction problem, we are given a collection of image points $\tilde{p}_{1 1}, \ldots , \tilde{p}_{m n}$, and our task is to recover a set of cameras $A_1, \ldots , A_m$ and world points $q_1, \ldots , q_n$ that is consistent with these observations. Hesse's solution treats the "minimal\" case $(m,n) = (2,7)$. Today, there are many works which solve analogous minimal problems which can be used effectively in SfM pipelines (see eg. [@DBLP:conf/cvpr/LarssonOAWKP18; @PLMP; @PL1P; @kukelova2008automatic; @larsson2017efficient].) In triangulation, we are given not only image points, but also the cameras that produced them, $\bar{A}_1, \ldots , \bar{A}_m$. We need only recover one or more unknown world points. 
Already for $m=2,$ an exact solution to this problem will typically not exist, due to the fact that the lines in $\mathbf P^3$ projecting to generic image points under $\bar{A}_1, \bar{A}_2$ will be skew. Nevertheless, algebraic methods have led to a wealth of knowledge about the triangulation problem. For example, the *multiview ideal* associated to $\bar{A}_1, \ldots , \bar{A}_m$ gives rise to a complete set of algebraic constraints on *any* $m$-tuple of image points they produce. There is a considerable literature related to multiview ideals  [@idealMultiview; @Hilb; @DBLP:conf/iccv/FaugerasM95; @HA97]. collects some important previous results. Often regarded as being "dual\" to triangulation is the problem of camera *resectioning*. Here, we assume $n$ image points are given along with the configuration of world points $\bar{\mathbf{q}} = (\bar{q}_1, \ldots , \bar{q}_n) \in (\mathbf P^3)^n$ from which they were produced by a single unknown camera $A.$ Grunert's 1841 paper gives a minimal solution for $(m,n) = (1,3)$ under the assumption that $A$ is *Euclidean*. Without this assumption, $A$ is a general $3\times 4$ matrix, and we need $n\ge 6$. ## Results and Organization {#subsec:results-organization} In this paper, we aim to bring the general resectioning problem up-to-speed with the latest developments in algebraic vision. In , after recalling some previous results about multiview varieties, we state our first main result, . This characterizes a complete set of algebraic constraints for the resectioning problem, under the genericity assumption that no four of the given world points are coplanar. These constraints are given by $k$-linear polynomials for $6\le k \le 12$ which generate the *resectioning ideal* $I( \Gamma_{\bar{\mathbf{q}} , \mathbf{p}}^{m,n})$ (). Our work is a natural continuation of recent work by Agarwal et al. [@agarwal2022atlas], and we resolve three of its open questions. For instance,  resolves [@agarwal2022atlas §8.1, Q4] for generic $\bar{\mathbf{q}}$ by determining a universal Gröbner basis for $I( \Gamma_{\bar{\mathbf{q}} , \mathbf{p}}^{m,n}).$ We note that resectioning ideals have several pleasant properties from the point of view of commutative algebra: namely, for generic $\bar{\mathbf{q}} \in \left( \mathbf P^3 \right)^n$, 1. For fixed $m$ and $n,$ resectioning ideals are homogeneous with respect to a natural $\mathbf Z^{mn}$-grading, and have the same $\mathbf Z^{mn}$-graded Hilbert function as long as no four points are coplanar. implies that this Hilbert function may be obtained by specializing a combinatorial formula of Li [@Li-IMRN Theorem 1.1], based on the inclusion-exclusion rule. Our ideal-theoretic result also considerably strengthens Li's set-theoretic description, and reduces the degrees of the equations that are needed. 2. The multidegrees of resectioning ideals are always equal to $1.$ A geometric explanation of this phenomenon follows along the lines explained in [@breiding2022line §4]. See also [@CCF23 Theorem 4.2] for an explanation using multigraded Rees algebras. 3. For any monomial order $<$, the initial ideal $\operatorname{in}_< (I(\Gamma_{\bar{\mathbf{q}}, \mathbf{p}}^{m,n}))$ and the multigraded generic initial ideal $\operatorname{gin}_< (I(\Gamma_{\bar{\mathbf{q}}, \mathbf{p}}^{m,n}))$, although not equal as in the case of multiview ideals [@Hilb], are both radical.
In particular, $I(\Gamma_{\bar{\mathbf{q}}, \mathbf{p}}^{m,n})$ belongs to the class of *Cartwright-Sturmfels ideals*, recently surveyed by Conca, De Negri, and Gorla [@conca-survey]. Our first basic insight is that the projection of a point $q\in \mathbf P^3$ under a pinhole camera $A : \mathbf P^3 \dashrightarrow \mathbf P^2$ may be viewed as the projection of a point $\mathop{\mathrm{vec}}(A) \in \mathbf P^{11}$ under what we call a "hypercamera\" $Q : \mathbf P^{11} \dashrightarrow \mathbf P^2.$ This is reminiscent, and in fact a generalization, of a well-studied principle in computer vision known as *Carlsson-Weinshall duality* [@DBLP:journals/ijcv/CarlssonW98]. This is the subject of . Our  develops a reduced analogue of the "atlas\" of algebraic varieties proposed in [@agarwal2022atlas]. This addresses [@agarwal2022atlas §8.2, Q2]. The *reduced joint image* and its dual, recently studied by Trager, Ponce, and Hebert, are two members of this atlas. Carlsson-Weinshall duality amounts to a simple linear isomorphism between these two varieties. In  and , we explain how our perspective unifies previous approaches to resectioning constraints in the computer vision literature [@DBLP:journals/ijcv/CarlssonW98; @DBLP:conf/cvpr/TragerHP19; @quan; @DBLP:conf/eccv/SchaffalitzkyZHT00], which can all be obtained from the ideal $I(\Gamma_{\bar{\mathbf{q}} , \mathbf{p}}^{m,n})$ by specialization. $$% "left" of triptych \begin{tikzpicture}[scale = 1/3] %%%% camera centers \coordinate (C1) at (-5, 0); \coordinate (C2) at (-5, -3); %%%% world points \coordinate (Q1) at (4, 2); \coordinate (Q2) at (7, 0); \coordinate (Q3) at (4, -3); %%%blue plane \begin{scope} \filldraw[draw = c1, fill = c1!30, name path global = Plane1][shift={(-3.75,-2.5)}] (0, 0) -- (0, 4.5) -- (3, 5.5) -- (3, 1) -- cycle; \end{scope} %%% green plane \begin{scope} \filldraw[draw = c2, fill = c2!30, name path global = Plane2][shift={(-0.25, -4)}] (0, 0) -- (0, 4.5) -- (3, 5.5) -- (3, 1) -- cycle; \end{scope} %%%%%% intersections with planes \path[name path = image1] (-3, 1) -- (-2, -1) -- (-1,0) -- cycle ; %%% plane 1 (red) \path[name path = image2] (0.5,0.25) -- (0.5,-3) -- (2, -1.25) -- cycle; %%% plane 2 (green) %%%%%% imaging lines (invisible) \path [name path=C1--Q1] (C1) -- (Q1); \path [name path=C1--Q2] (C1) -- (Q2); \path [name path=C1--Q3] (C1) -- (Q3); \path [name path=C2--Q1] (C2) -- (Q1); \path [name path=C2--Q2] (C2) -- (Q2); \path [name path=C2--Q3] (C2) -- (Q3); %%%%%% plane boundaries \path[name intersections={of=Plane1 and C1--Q1, by = int1}]; \coordinate (P1L1) at (intersection-1); \path[name intersections={of=Plane1 and C1--Q2, by = int1}]; \coordinate (P2L1) at (intersection-1); \path[name intersections={of=Plane1 and C1--Q3, by = int1}]; \coordinate (P3L1) at (intersection-1); \path[name intersections={of=Plane2 and C2--Q1, by = int1}]; \coordinate (P1L2) at (intersection-1); \path[name intersections={of=Plane2 and C2--Q2, by = int1}]; \coordinate (P2L2) at (intersection-1); \path[name intersections={of=Plane2 and C2--Q3, by = int1}]; \coordinate (P3L2) at (intersection-1); \path[name intersections = {of=Plane2 and C1--Q2, by = int1}]; \coordinate (Q2C1first) at (intersection-1); \coordinate (Q2C1sec) at (intersection-2); %%%%%% image point coordinates \path [name intersections={of = C1--Q1 and image1, by = imagepoint}]; \coordinate (P11) at (intersection-1); \path [name intersections={of = C1--Q2 and image1, by = imagepoint}]; \coordinate (P12) at (intersection-2); \path [name intersections={of = C1--Q3 and 
\end{tikzpicture}$$

*(Figure: three panels showing two camera centers $C_1, C_2$ imaging the world points $Q_1, Q_2, Q_3$ onto two separate image planes, onto a single shared image plane, and onto one image plane per world point, respectively.)*

in  shows that reduced resectioning varieties for generic point configurations are scheme-theoretically cut out by bilinear forms. This stands in stark contrast to the high-degree polynomials in , whose proof we complete in . Finally, in , we address [@agarwal2022atlas §8.1, Q6] by investigating the *Euclidean distance degree* of the resectioning variety in affine pixel coordinates. This is a number that quantifies the algebraic complexity of a natural Euclidean distance optimization formulation of the camera resectioning problem. Our main contribution, based on evidence supplied by computational experiments, is , giving a formula for this quantity as a cubic polynomial in $n.$ The statement is analogous to, and inspired by, the *multiview conjecture*, recently resolved by Maxim, Rodriguez, and Wang [@MRW20]. We conclude with a short discussion in .

## Notation and conventions {#subsec:notation}

Our notation largely follows that established in [@agarwal2022atlas]. Our basic algebro-geometric objects are affine and projective varieties over the field of complex numbers $\mathbf C.$ The symbol $\mathbf P^n$ denotes complex $n$-dimensional projective space, which we may also identify with the projectivization $\mathbf P(V)$ of any $(n+1)$-dimensional complex vector space $V$. As in the introduction, known quantities will usually be designated with a bar $\bar{\bullet}.$ This bar is also used to denote the *Zariski closure* of a set: its usage will be clear from the context. If we wish to emphasize that given quantities in certain scenarios may be "noisy\" due to deviations from the pinhole model or erroneous measurements, we instead use $\widetilde{\bullet }.$

# Resectioning vs Triangulation {#sec:res-triang}

Let us recall a "universal\" version of the imaging map [\[eq:cam\]](#eq:cam){reference-type="eqref" reference="eq:cam"}. This is the map which sends $m$ cameras $A_1, \ldots , A_m \in \mathbf P\left( \operatorname{Hom}_\mathbf C(\mathbf C^4, \mathbf C^3) \right) \cong \mathbf P^{11}$ and $n$ points $q_1, \ldots , q_n \in \mathbf P^3$ to $mn$ points in $\mathbf P^2.$ The graph of this rational map is an incidence correspondence, dubbed the *image formation correspondence* in [@agarwal2022atlas], $$\label{eq:image-formation-correspondence} \Gamma_{\mathbf{A}, \mathbf{q}, \mathbf{p}}^{m,n} = \overline{\{ (\mathbf{A}, \mathbf{q}, \mathbf{p}) \in (\mathbf P^{11})^m \times (\mathbf P^3)^n \times (\mathbf P^2)^{mn} \mid A_i q_j \sim p_{i j} \quad \forall i \in [m], \, j \in [n] \}}.$$ Given a generic camera arrangement $\bar{\mathbf{A}} = (\bar{A}_1, \ldots , \bar{A}_m) \in \left( \mathbf P^{11} \right)^m,$ one may also consider the associated *multiview variety*.
In the notation of [@agarwal2022atlas], this may be defined as $$\label{eq:multiview-variety} \Gamma_{\bar{\mathbf{A}} , \mathbf{p}}^{m,n} = \{ \mathbf{p} \in (\mathbf P^2)^{mn} \mid (\bar{\mathbf{A}}, \mathbf{q}, \mathbf{p}) \in \Gamma_{\mathbf{A}, \mathbf{q}, \mathbf{p}}^{m,n} \text{ for some } \mathbf{q}\in \left( \mathbf P^3 \right)^n \}.$$ Multiview varieties and their vanishing ideals are well-understood objects. Our present study of camera resectioning is based on the following definition, which parallels [\[eq:multiview-variety\]](#eq:multiview-variety){reference-type="eqref" reference="eq:multiview-variety"} in that the roles of cameras and 3D points are switched.

**Definition 1**. The $m$-camera *resectioning variety* associated to a given point arrangement $\bar{\mathbf{q}}\in \left(\mathbf P^3 \right)^n$ is the multiprojective variety $$\label{eq:resect-definition} \Gamma_{\bar{\mathbf{q}} , \mathbf{p}}^{m,n} = \left\{ \mathbf{p}\in \left( \mathbf P^2 \right)^{mn} \mid (\mathbf{A}, \bar{\mathbf{q}}, \mathbf{p}) \in \Gamma_{\mathbf{A}, \mathbf{q}, \mathbf{p}}^{m,n} \text{ for some } \mathbf{A}\in \left( \mathbf P^{11} \right)^m\right\}.$$ The vanishing ideal $I(\Gamma_{\bar{\mathbf{q}}, \mathbf{p}}^{m,n})$ is the *resectioning ideal* of $\bar{\mathbf{q}}.$

**Remark 2**. It turns out that $\Gamma_{\bar{\mathbf{q}}, \mathbf{p}}^{m,n} = (\mathbf P^2)^{mn}$ if and only if $n < 6,$ assuming $\bar{\mathbf{q}} \in \left( \mathbf P^3 \right)^n$ is sufficiently generic. Thus we assume $n\ge 6$ throughout this section.

To better explain the analogy between resectioning and triangulation, we collect several previous results about the multiview ideals $I(\Gamma_{\bar{\mathbf{A}} , \mathbf{p}}^{m,n})$ in  below. Our first main result, , involves certain multilinear *focal polynomials* which belong to the resectioning ideal $I( \Gamma_{\bar{\mathbf{q}}, \mathbf{p}}^{m,n})$. These are structurally very similar to the classically-known focal polynomials belonging to $I( \Gamma_{\bar{\mathbf{A}}, \mathbf{p}}^{m,n})$. We briefly recall a derivation of these constraints.

Suppose we are given a camera arrangement $\bar{\mathbf{A}} \in \left( \mathbf P^{11}\right)^m$. Consider a generic point $$(\bar{\mathbf{A}}, \mathbf{q}, \mathbf{p}) = (\bar{A}_1, \ldots , \bar{A}_m , q_1, \ldots , q_n, p_{1 1}, \ldots , p_{m n}) \in \Gamma_{\mathbf{A}, \mathbf{q}, \mathbf{p}}^{m,n}.$$ Fixing representatives for this point in homogeneous coordinates, there exist nonzero scalars $\lambda_{1 1} , \ldots , \lambda_{m n} \in \mathbf C$ which satisfy the equations $$\label{eq:multiview-in-iamge-R-lambda} \bar{A}_i q_j = \lambda_{i j} p_{i j} , \quad 1 \le i \le m, \, 1 \le j \le n.$$ From these conditions, one may obtain certain multilinear polynomials in $\bar{A}_i, p_{i j}$ alone, known in various sources as $k$-focals or $k$-multilinearities. Specifically, for each $j=1, \ldots , n$ and any subset $\sigma = \{ \sigma_1, \ldots , \sigma_k \} \subset [m]$ of size $\ge 2$, the matrix $$\label{eq:A-foca} \begin{bmatrix} \bar{A}_{\sigma_1} & p_{\sigma_1 j} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ \bar A_{\sigma_k} & 0 & \cdots & p_{\sigma_k j} \end{bmatrix}$$ must be rank-deficient. The maximal $(4+ k) \times (4+ k)$ minors of these matrices are the *$k$-focals* associated with the camera arrangement $\bar{\mathbf{A}}.$ In , we collect several previous results which make the relationship between $\Gamma_{\bar{\mathbf{A}}, \mathbf{p}}^{m,n}$ and the $k$-focals more precise.
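Before stating these results, we pause for a minimal numerical sketch (Python/NumPy, synthetic data; an illustration added here, not part of the original derivation, and the variable names are ours) of the rank-deficiency of [\[eq:A-foca\]](#eq:A-foca){reference-type="eqref" reference="eq:A-foca"} in the simplest case $k=2$: for two cameras and one world point, the $6\times 6$ focal matrix is singular because $(q_j, -\lambda_{\sigma_1 j}, -\lambda_{\sigma_2 j})$ lies in its kernel.

```python
import numpy as np

rng = np.random.default_rng(0)

# two random 3x4 cameras and one world point, all in homogeneous coordinates
A1 = rng.standard_normal((3, 4))
A2 = rng.standard_normal((3, 4))
q = rng.standard_normal(4)

# images of q under both cameras (defined only up to scale)
p1, p2 = A1 @ q, A2 @ q

# 2-focal matrix (k = 2): it is singular because (q, -lambda_1, -lambda_2)
# lies in its kernel, here with lambda_1 = lambda_2 = 1
F = np.zeros((6, 6))
F[:3, :4], F[:3, 4] = A1, p1
F[3:, :4], F[3:, 5] = A2, p2

print(np.linalg.matrix_rank(F))   # 5, i.e. rank-deficient
print(abs(np.linalg.det(F)))      # numerically ~ 0
```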
The results below impose progressively stronger genericity assumptions on the camera arrangement $\bar{\mathbf{A}}.$

**Theorem 3**. Let $\bar{\mathbf{A}} = (\bar{A}_1,\ldots , \bar{A}_m)$, for $m\ge 2,$ be a fixed camera arrangement.

1. [@Hilb Theorem 2.1] If all maximal $4\times 4$ minors of the matrix $\left[ \hspace{-.4em} \begin{array}{c|c|c} \bar{A}_1^T & \cdots & \bar{A}_m^T \end{array} \hspace{-.4em}\right]$ are nonzero, then the $k$-focals for $k\in \{ 2,3,4 \}$ form a universal Gröbner basis for $I(\Gamma_{\bar{\mathbf{A}} , \mathbf{p}}^{m,n}).$

2. [@idealMultiview Theorem 3.7] If $\bar{\mathbf{A}}$ is such that the camera centers are distinct, then the $k$-focals for $k \in \{ 2, 3 \}$ generate the vanishing ideal $I (\Gamma_{\bar{\mathbf{A}} , \mathbf{p}}^{m,n}).$

3. [@idealMultiview Theorem 5.6] If $\bar{\mathbf{A}}$ is such that the camera centers are distinct and do not lie in a common plane, then the $2$-focals determine $\Gamma_{\bar{\mathbf{A}} , \mathbf{p}}^{m,n}$ as a subscheme of $\left(\mathbf P^2 \right)^{mn}$.

Turning now to camera resectioning, suppose we are instead given $\bar{\mathbf{q}} \in \left( \mathbf P^3\right)^n$. Similar to [\[eq:multiview-in-iamge-R-lambda\]](#eq:multiview-in-iamge-R-lambda){reference-type="eqref" reference="eq:multiview-in-iamge-R-lambda"}, we wish to obtain conditions involving only $\bar{q}_j$ and $p_{i j}$ from $$\label{eq:in-iamge-R-lambda} A_i \bar{q}_j = \lambda_{i j} p_{i j} , \quad 1 \le i \le m, \, 1 \le j \le n.$$ To obtain these conditions, we may apply a well-known identity involving the matrix Kronecker product, denoted $\otimes$, and the vectorization operator $\mathop{\mathrm{vec}}(\bullet)$, which stacks the columns of a matrix vertically.

**Proposition 4** (See e.g. [@horn-topics p. 252, Exercise 22]). For any $M \in \mathbf C^{q \times r},$ $N \in \mathbf C^{r \times s}$, $$\label{eq:amazing-identity} \mathop{\mathrm{vec}}\left(M N\right) = \left(I_{s \times s} \otimes M\right) \mathop{\mathrm{vec}}\left(N\right),$$ where $I_{s \times s}\in \mathbf C^{s \times s}$ is the identity matrix.

We apply this identity with $M = \bar{q}_j^\top$ and $N = A_i^\top$. For the $3\times 12$ matrix $I_{3 \times 3} \otimes \bar{q}_j^\top$, we introduce the notation $$\label{eq:q-check} \bar{Q}_j := I_{3 \times 3} \otimes \bar{q}_j^\top = \begin{bmatrix} \bar{q}_j^\top & 0 & 0 \\ 0 & \bar{q}_j^\top & 0\\ 0 & 0 & \bar{q}_j^\top \end{bmatrix}.$$ Combining [\[eq:in-iamge-R-lambda\]](#eq:in-iamge-R-lambda){reference-type="eqref" reference="eq:in-iamge-R-lambda"} and , we deduce that $$\bar{Q}_j \mathop{\mathrm{vec}}\left(A_i^\top \right) = \lambda_{i j} p_{i j}, \quad 1 \le i \le m, \, 1 \le j \le n.$$ Equivalently, for each $i=1, \ldots , m$ we have $$\begin{bmatrix} \bar Q_1 & p_{i 1} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ \bar Q_n & 0& \cdots & p_{i n} \end{bmatrix} \begin{bmatrix} \mathop{\mathrm{vec}}(A_i^\top) \\ -\lambda_{i 1} \\ \vdots \\ -\lambda_{i n} \end{bmatrix} = \begin{bmatrix} 0\\ \vdots \\ 0 \end{bmatrix}.$$ Thus, if $\mathbf{p}\in \Gamma_{\bar{\mathbf{q}} , \mathbf{p}}^{m,n}$, then we have the rank constraints $$\label{eq:rank-constraint} \mathop{\mathrm{rank}}\begin{bmatrix} \bar Q_1& p_{i 1} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ \bar Q_n & 0 & \cdots & p_{i n} \end{bmatrix} < 12 + n.$$ We observe that this rank constraint is equivalent to the vanishing of all maximal $(12 + n) \times (12 + n)$ minors.
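Dually, the rank constraint [\[eq:rank-constraint\]](#eq:rank-constraint){reference-type="eqref" reference="eq:rank-constraint"} can be checked on synthetic data. The following minimal NumPy sketch (illustrative only, not part of the original text; variable names are ours) builds the $3n \times (12+n)$ matrix for $n=6$ points and a single camera and confirms the rank drop; the kernel vector is $(\mathop{\mathrm{vec}}(A^\top), -\lambda_1, \ldots, -\lambda_n)$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

A = rng.standard_normal((3, 4))      # a single camera (m = 1)
Q = rng.standard_normal((n, 4))      # n = 6 world points, one per row
P = Q @ A.T                          # row j is the image A q_j (up to scale)

# stacked matrix from the rank constraint: size 3n x (12 + n)
M = np.zeros((3 * n, 12 + n))
for j in range(n):
    M[3*j:3*j+3, :12] = np.kron(np.eye(3), Q[j])   # \bar Q_j = I_3 (x) q_j^T
    M[3*j:3*j+3, 12 + j] = P[j]

# vec(A^T) stacks the columns of A^T, i.e. the rows of A
v = np.concatenate([A.reshape(-1), -np.ones(n)])   # kernel vector (all lambda_j = 1)

print(np.allclose(M @ v, 0))                       # True
print(np.linalg.matrix_rank(M))                    # 17, i.e. < 12 + n = 18
```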
The maximal minors in [\[eq:rank-constraint\]](#eq:rank-constraint){reference-type="eqref" reference="eq:rank-constraint"} are homogeneous polynomials in the entries of each $\bar Q_j$ and $p_{i j}$; indeed, for any nonzero scalars $c_1, \ldots , c_n, c_1' , \ldots , c_n',$ $$\begin{aligned} \mathop{\mathrm{rank}} \begin{bmatrix} c_1 \bar Q_1& c_1 ' p_{i 1} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ c_n \bar Q_n & 0 & \cdots & c_n ' p_{i n} \end{bmatrix} &= \mathop{\mathrm{rank}} \begin{bmatrix} \bar Q_1& c_1^{-1} c_1' p_{i 1} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ \bar Q_n & 0 & \cdots & c_n^{-1} c_n' p_{i n} \end{bmatrix} \nonumber \\ &= \mathop{\mathrm{rank}} \begin{bmatrix} \bar Q_1& p_{i 1} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ \bar Q_n & 0 & \cdots & p_{i n} \label{eq:rescale} \end{bmatrix} .\end{aligned}$$

One may of course consider such rank constraints not only for $3\times 12$ matrices of the form [\[eq:q-check\]](#eq:q-check){reference-type="eqref" reference="eq:q-check"}, but for any given arrangement of surjective linear maps, $$\bar B_j : \mathbf P^{11} \dashrightarrow \mathbf P^2, \quad j=1, \ldots , n,$$ represented by generic $3\times 12$ matrices. To prevent confusion with cameras $A_i,$ we refer to each $\bar{B}_j$ as a *hypercamera*. We denote a general arrangement of hypercameras by $\bar{\mathbf{B}} = (\bar{B}_1, \ldots , \bar{B}_n) \in \left( \mathbf P^{35} \right)^n$. However, we instead write $\bar{\mathbf{Q}}$ to denote the special hypercamera arrangement associated to a point arrangement $\bar{\mathbf{q}} \in \left( \mathbf P^3 \right)^n$ by the rule [\[eq:q-check\]](#eq:q-check){reference-type="eqref" reference="eq:q-check"}. Let us also note that rank constraints analogous to [\[eq:rank-constraint\]](#eq:rank-constraint){reference-type="eqref" reference="eq:rank-constraint"} hold for any subset of at least $6$ world points and their corresponding images. This motivates the following definition, as well as the statement of our first result.

**Definition 5**. Fix a hypercamera arrangement $\bar{\mathbf{B}} = (\bar{B}_1, \ldots , \bar{B}_n)\in \left(\mathbf P^{35}\right)^n$. For any set $\{ \sigma_1, \ldots , \sigma_k \} \subset [n]$ of size $k\ge 6$ and an index $i\in [m]$, a *$k$-focal* polynomial is any maximal $(12+k ) \times (12+k)$ minor of the $3k \times (12 + k)$ matrix $$\label{eq:k-focal-matrix} \begin{bmatrix} \bar{B}_{\sigma_1} & p_{i \sigma_1} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ \bar{B}_{\sigma_k} & 0 & \cdots & p_{i \sigma_k} \end{bmatrix} .$$ From context, it will be clear whether "focals\" refers to the polynomials in  or their triangulation counterparts. The ideal in $\mathbf C[\mathbf{p}]$ generated by all $k$-focals, $6\leq k\leq n$, is the $m$-camera *focal ideal* ${I}_{m} (\bar{\mathbf{B}}).$ For a given point arrangement $\bar{\mathbf{q}}$, we define its focal ideal ${I}_{m} (\bar{\mathbf{q}})$ to be the focal ideal ${I}_{m} (\bar{\mathbf{Q}})$ for the associated hypercamera arrangement $\bar{\mathbf{Q}}$.

**Theorem 6**. Let $m,n \ge 1$ be integers. For any point arrangement $\bar{\mathbf{q}}\in (\mathbf P^3)^n$ such that no four points are coplanar, we have $$I (\Gamma_{\bar{\mathbf{q}} , \mathbf{p}}^{m,n}) = {I}_{m} (\bar{\mathbf{q}} ),$$ and the set of all $k$-focals for $6\le k \le 12$ forms a universal Gröbner basis for this ideal.

is the resectioning analogue of  part (1). Directly adapting the proof of this result is not straightforward. This is because $\bar{\mathbf{Q}}$ is a very special hypercamera arrangement.
Nevertheless, the noncoplanarity hypothesis in  ensures that $\bar{\mathbf{Q}}$ is generic enough for Gröbner basis arguments to be applied. In the setting of triangulation, we note that the range of interesting focals $2 \le k \le 4$ is much smaller than in , and in this setting the $k$-focals correspond to well-understood objects in multiview geometry---namely, fundamental matrices, trifocal tensors, and quadrifocal tensors [@HZ04 cf. Ch. 17]. It would seem that the $k$-focals for resectioning are less well-understood. Nevertheless, in , , we observe that they do specialize to "dual\" multiview constraints appearing in the literature. As a warm-up, we establish a set-theoretic variant of . By analogy with [\[eq:multiview-variety\]](#eq:multiview-variety){reference-type="eqref" reference="eq:multiview-variety"}, let us define for any hypercamera arrangement $\bar{\mathbf{B}} \in \left(\mathbb{P} ^{35}\right)^{n}$ the variety $\Gamma_{\bar{\mathbf{B}}, \mathbf{p}}^{n,m}$ to be the closed image of the associated imaging map $\left( \mathbb{P}^{11} \right)^m \dashrightarrow \left( \mathbf P^2 \right)^{mn}.$ In other words, $\Gamma_{\bar{\mathbf{B}}, \mathbf{p}}^{n,m}$ is a hypercamera version of the multiview variety. When $\bar{\mathbf{B}}=\bar{\mathbf{Q}},$ we have the following result. **Proposition 7**. Fix $\bar{\mathbf{q}} = (\bar{q}_1, \dots, \bar{q}_n) \in (\mathbf P^3)^n$ with no four $\bar{q}_i$ coplanar. Then $$\Gamma_{\bar{\mathbf{Q}}, \mathbf{p}}^{n,m} = \Gamma_{\bar{\mathbf{q}} , \mathbf{p}}^{m,n} = \operatorname{V}({I}_{m} (\bar{\mathbf{q}})).$$ *Proof.* It is relatively straightforward to prove the inclusions $$\Gamma_{\bar{\mathbf{Q}}, \mathbf{p}}^{n,m} \subset \Gamma_{\bar{\mathbf{q}} , \mathbf{p}}^{m,n} \subset \operatorname{V}({I}_{m} (\bar{\mathbf{q}})),$$ so we focus on the harder inclusion $\operatorname{V}({I}_{m} (\bar{\mathbf{q}})) \subset \Gamma_{\bar{\mathbf{Q}}, \mathbf{p}}^{n,m}$. This is also where we need the noncoplanarity assumption. Consider any point $$\mathbf{p}\in \operatorname{V}({I}_{m} (\bar{\mathbf{q}})).$$ We will construct a sequence of points $(\mathbf{p}^{(k)}) \in \Gamma_{\bar{\mathbf{Q}}, \mathbf{p}}^{n,m}$ converging to $\mathbf{p}.$ To simplify notation in what follows, we consider the case $m=1.$ When $m>1,$ the same construction applies component-wise. We write $p_i$ in place of $p_{1 i},$ so that the kernel of the matrix $$\begin{bmatrix} \bar Q_1 & p_{1} & \cdots & 0\\ \vdots & \ddots & \vdots \\ \bar Q_n & 0& \cdots & p_{n} \end{bmatrix}$$ contains a point $v = [ v_{1} : \cdots : v_{12+n}] \in \mathbf P^{11 + n}$. Let us fix homogeneous coordinates for $p_{1}, \ldots , p_{n}, \bar{q}_1, \ldots , \bar{q}_n, v$. We define $$A = \begin{bmatrix} v_{1} & \cdots & v_{4} \\ \vdots & \ddots & \vdots \\ v_{9} & \cdots & v_{12} \end{bmatrix}, \quad \lambda_{j} = -v_{12+j}.$$ Let us first observe that the matrix $A$ is nonzero, for otherwise we would have $$\lambda_j p_j = A \bar{q}_j = \bar{Q}_j \mathop{\mathrm{vec}}(A^\top) = 0 \quad \Rightarrow \quad \lambda_j = 0$$ for all $j$, contradicting the fact that $v\ne 0.$ Next, observe that at most three of the $\lambda_j$ can be zero: otherwise, four of the points $\bar{q}_j$ would lie in some plane containing the kernel of $A$, contradicting our hypothesis that $\bar{\mathbf{q}}$ is noncoplanar. It follows that we can find a nonzero $3\times 4$ matrix $A'$ with $A' \bar{q}_j = p_j$ for each $j$ with $\lambda_j=0$. 
Fix such a matrix $A'.$ We now construct the desired sequence $(\mathbf{p}^{(k)})_{k\ge 1}$. Set $$\begin{aligned} A^{(k)} &= A + (1/k) A', \\ p_j^{(k)} &= A^{(k)} \bar{q}_j, \quad j=1, \ldots , n.\end{aligned}$$ To show convergence, note that when $\lambda_j = 0$ we have $$p_j^{(k)} = (1/k) A' \bar{q}_j \sim A' \bar{q}_j = p_j.$$ For $\lambda_j \ne 0$ we attain $p_j$ in the limit as $k\to \infty$, since $$p_j^{(k)} = A \bar{q}_j + (1/k) A' \bar{q}_j \to A \bar{q}_j \sim p_j.$$ In summary, we have found for any point $\mathbf{p}\in \operatorname{V}({I}_{m} (\bar{\mathbf{q}}))$ a sequence of points $(\mathbf{p}^{(k)}) \in \Gamma_{\bar{\mathbf{Q}}, \mathbf{p}}^{n,m}$ converging to $\mathbf{p}.$ Since $\Gamma_{\bar{\mathbf{Q}}, \mathbf{p}}^{n,m}$ is closed in the Euclidean topology, we deduce the needed inclusion: $\operatorname{V}({I}_{m} (\bar{\mathbf{q}})) \subset \Gamma_{\bar{\mathbf{Q}}, \mathbf{p}}^{n,m}$. ◻

We conclude this section with the simplest interesting example of a resectioning variety.

**Example 8**. For $m=1$ camera and a generic point arrangement $\bar{\mathbf{q}} = (\bar{q}_1, \ldots , \bar{q}_6) \in \left( \mathbf P^3 \right)^6,$ the resectioning variety $\Gamma_{\bar{\mathbf{q}} , \mathbf{p}}^{1,6}$ is a hypersurface in $\left( \mathbf P^{2} \right)^6.$ Applying a suitable permutation to the rows of the $6$-focal matrix, this hypersurface has an $18 \times 18$ determinantal representation, $$\label{eq:det-hypersurface} \det \begin{bmatrix} \bar{q}_1^T & & & p_1 [1] & \\ \vdots & & & & \ddots & \\ \bar{q}_6^T & & & & & p_6 [1] \\ & \bar{q}_1^T& & p_1 [2] & \\ & \vdots & & & \ddots & \\ & \bar{q}_6^T & & & & p_6 [2] \\ & & \bar{q}_1^T & p_1 [3] & \\ & & \vdots & & \ddots & \\ & & \bar{q}_6^T & & & p_6 [3] \end{bmatrix} =0.$$ Out of the $3^6=729$ possible terms of a sextilinear form on $\left( \mathbf P^2 \right)^6$, the special structure of the $6$-focal determinant dictates that only $\binom{6}{2} \binom{4}{2} = 90$ can be nonzero. On the other hand,  below shows that applying a general linear change of coordinates to [\[eq:det-hypersurface\]](#eq:det-hypersurface){reference-type="eqref" reference="eq:det-hypersurface"} has the effect that all $729$ possible terms become nonzero. This highlights an important distinction between resectioning and multiview ideals: the initial ideal for generic data $\bar{\mathbf{q}}$ is not the same as the $\mathbb{Z}^6$-graded Borel-fixed generic initial ideal (cf. [@Hilb §3].) Letting $<$ denote the lexicographic order with $p_6[3] < p_6 [2] < p_6 [1] < p_5 [3] < \cdots <p_1[1],$ we have $$\label{eq:in-not-gin} \begin{split} \operatorname{in}_< ( I (\Gamma_{\bar{\mathbf{q}}, \mathbf{p}}^{1,6} )) = \langle p_{1} [1] p_{2} [1] p_{3} [2] p_{4} [2] p_{5} [3] p_{6} [3] \rangle , \\ \operatorname{gin}_< ( I (\Gamma_{\bar{\mathbf{q}}, \mathbf{p}}^{1,6} )) = \langle p_{1} [1] p_{2} [1] p_{3} [1] p_{4} [1] p_{5} [1] p_{6} [1] \rangle . \end{split}$$ Interestingly, [\[eq:det-hypersurface\]](#eq:det-hypersurface){reference-type="eqref" reference="eq:det-hypersurface"} also has several smaller determinantal representations. Many of these may be obtained from [\[eq:det-hypersurface\]](#eq:det-hypersurface){reference-type="eqref" reference="eq:det-hypersurface"} using Schur complements.
For example, we have the $12\times 12$ determinantal representation $$\label{eq:12-det} \left( p_1 [3] \cdots p_6[3] \right)^{-1} \det \begin{bmatrix} p_1[3]\bar{q}_1^T & & - p_1 [1] \bar{q}_1^T \\ \vdots & & \vdots & \\ p_6[3] \bar{q}_6^T & & - p_6 [1] \bar{q}_6^T \\ & p_1 [3]\bar{q}_1^T & - p_1 [2] \bar{q}_1^T \\ & \vdots & \vdots & \\ & p_6[3]\bar{q}_6^T & - p_6 [2] \bar{q}_6^T \end{bmatrix} =0.$$ The $6$-focal determinant also has a $6\times 6$ determinantal representation, which specializes to [\[eq:dual-fundamental-1\]](#eq:dual-fundamental-1){reference-type="eqref" reference="eq:dual-fundamental-1"} below after fixing $\bar{q}_1, \ldots , \bar{q}_4, p_1, \ldots , p_4.$ This is the classical form of the constraint appearing in works such as [@quan; @DBLP:journals/ijcv/CarlssonW98]. Finally, we note that Schaffalitzky et al. [@DBLP:conf/eccv/SchaffalitzkyZHT00] derive a $3\times 3$ determinantal constraint relating 3D points and their 2D projections that is linear in a distinguished image point $p_6$.  implies that their determinant is a multiple of the $6$-focal determinant. Notably, Schaffalitzky et al. use their constraint to solve the minimal problem of reconstructing $6$ points from $3$ views. Earlier works, e.g. [@DBLP:journals/ijcv/CarlssonW98], had already observed that this problem is equivalent to the classical $7$ point problem in $2$ views. This equivalence follows from the principle of Carlsson-Weinshall duality, which we revisit in the next section.

# Carlsson-Weinshall duality revisited  [\[sec:CWduality\]]{#sec:CWduality label="sec:CWduality"}

Recall the image formation variety $\Gamma_{\mathbf{A}, \mathbf{q}, \mathbf{p}}^{m,n}$ from [\[eq:image-formation-correspondence\]](#eq:image-formation-correspondence){reference-type="eqref" reference="eq:image-formation-correspondence"}. Previous work of Agarwal et al. [@agarwal2022atlas] explains how the problems of reconstruction, triangulation, and resectioning may all be understood in terms of slicing and projection operations on this variety. The relationships between the varieties produced by these operations are summarized in a diagram designated as an *atlas* for the pinhole camera. One striking feature of the atlas's appearance is the apparently symmetric roles of cameras in $\mathbf P^{11}$ and world points in $\mathbf P^3.$ A simple explanation for this phenomenon is as follows: for a given camera center $c\in \mathbf P^{3},$ world point $q \in \mathbf P^{3},$ and image plane $L \in \mathop{\mathrm{Gr}}(\mathbf P^2, \mathbf P^3),$ we obtain the same projected point on $L$ whether we project $c$ through $q$ or project $q$ through $c$. If we want to express this symmetry in terms of camera matrices instead of camera centers, one approach is to introduce coordinates on the image plane. Indeed, there are an additional $\dim \mathop{\mathrm{PGL}}_3 = 8 = 11 - 3$ degrees of freedom in choosing projective coordinates on $L$. A particular choice of coordinates leads directly to the framework of *Carlsson-Weinshall (CW) duality* from the multiview geometry literature. In this section, we point out that several world-to-image point constraints which were previously discovered using CW duality arise naturally as specializations of our focal constraints. We also show in  that Carlsson-Weinshall duality gives rise to a rational quotient of the image formation correspondence, and develop a *reduced* version of the atlas that better explains the symmetry between cameras and world points; see .
A direct application of the focal constraints described in  arises naturally in the setting of Carlsson-Weinshall (CW) duality. In the eponymous authors' celebrated work, CW duality is described as the notion that "*problems of \[resectioning\] and \[triangulation\] from image data are\... dual in the sense that they can be solved with the same algorithm depending on the number of \[world\] points and cameras*" [@DBLP:journals/ijcv/CarlssonW98]. In this section, we develop CW duality in the context of a *reduced atlas*, analogous to that of [@agarwal2022atlas], which makes the symmetry between cameras and 3D points evident. explain how nodes in this atlas arise as rational quotients of their non-reduced counterparts. The latter result also includes an analogue of : the reduced resectioning variety is cut out scheme-theoretically by bilinear forms for a sufficiently generic point configuration. **Remark 9**. Recent work by Trager, Hebert, and Ponce [@DBLP:conf/cvpr/TragerHP19] demonstrates that the exact coordinates of the camera centers and world points are not essential features of CW duality, contrary to the original setup. For simplicity, we state the main results of this section with respect to the conventional projective frame defined in [\[eq:standard-position\]](#eq:standard-position){reference-type="eqref" reference="eq:standard-position"}. ## Geometric formulation {#subsec:geometric-formulation} For $m$ cameras and $n$ world points, we define the *reduced image formation correspondence* to be the variety $$\label{eq:reduced-image-formation-correspondence} \mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{m,n} = \overline{\{ (\mathbf{a}, \mathbf{q}, \mathbf{p}) \in (\mathbf P^3)^m \times (\mathbf P^3)^n \times (\mathbf P^2)^{mn} \mid A(a_i) \cdot q_j \sim p_{i j} \quad \forall i \in [m], \, j \in [n] \}},$$ where for $a_i = [a_{i 1} : a_{i 2} : a_{i 3} : a_{i 4} ] \in \mathbf P^3$ we define $$\label{eq:Aq} A( a_i ) = \begin{bmatrix} a_{i 1} & 0 & 0 & a_{i 4}\\ 0 & a_{i 2} & 0 & a_{i 4}\\ 0 & 0 & a_{i 3} & a_{i 4} \end{bmatrix}.$$ When $A(a_i)$ is of full rank, we call it the *reduced camera matrix* associated to the point $a_i.$ The center of a reduced camera matrix $A(a_i)$ is $\mathcal C(a_i)$, where $\mathcal C$ is the quadratic Cremona involution $$\begin{aligned} \mathcal{C} : \mathbf P^3 &\dashrightarrow \mathbf P^3 \nonumber\\ [a_1 : a_2 : a_3 : a_4 ] &\mapsto [1/a_1 : 1/a_2 : 1/a_3 : - 1 / a_4]. \label{eq:cremona}\end{aligned}$$ Note that $\mathcal C(a_i)$ is defined exactly when at most one $a_{ij}$ is zero, or equivalently, when $A(a_i)$ is a full-rank camera matrix with a well-defined center. The key observation of Carlsson-Weinshall duality is expressed by the symmetric roles of a 3D point $q_j$ and a reduced camera $A(a_i)$ in image formation: $$\label{eq:cw-flip} A (a_i) q_j = A(q_j) a_i \quad \forall i=1, \ldots, m, \, \, j=1, \ldots , n.$$ The special form of the reduced camera matrix arises from fixing a projective basis in each image and a partial projective basis in the world. We adopt the notation of [@HZ04 Ch. 16]: $$\begin{aligned} E_1 = [1 : 0: 0 : 0], &\quad e_1 = [1: 0 : 0], \nonumber \\ E_2 = [0 : 1: 0 : 0], &\quad e_2 = [0: 1 : 0], \nonumber \\ E_3 = [0 : 0: 1 : 0], &\quad e_3 = [0: 0 : 1], \nonumber \\ E_4 = [0 : 0: 0 : 1], &\quad e_4 = [1: 1 : 1], \nonumber \\ E_5 = [1 : 1: 1 : 1]. 
\label{eq:standard-position}\end{aligned}$$ The four points $E_1, \ldots, E_4\in \mathbf P^3$ are said to span a *reference tetrahedron* in $\mathbf P^3,$ while $e_1,\ldots , e_4\in \mathbf P^2$ form a projective frame for the image plane $\mathbf P^2.$ The geometry relating these points and the Cremona transformation $\mathcal{C}$ can be appreciated in . Given a camera of the form $A = A(a), a \in \mathbf P^3$, we have that $A(a) E_i = e_i$ for $i = 1, \dots, 4$. The converse is true as well; that is, a camera matrix takes the reduced form [\[eq:Aq\]](#eq:Aq){reference-type="eqref" reference="eq:Aq"} if and only if it sends $E_i$ to $e_i$ for each $i=1, \ldots ,4.$

As we will soon demonstrate, there is a rational group action by an algebraic group $\mathcal G_m$ on $\Gamma_{\mathbf{A}, \mathbf{q}, \mathbf{p}}^{m,n+4}$ for which each $\mathcal G_m$-orbit, where defined, contains a unique element of $\mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{m,n}$. That is, $\mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{m,n}$ can be thought of as a kind of quotient of the general image formation correspondence. makes this precise using the notion of a *rational quotient.* In this reduced setting, the roles of camera centers and world points are manifestly symmetric: a point $(\mathbf{a}, \mathbf{q}, \mathbf{p}) \in \mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{m,n}$ is also a point in $\mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{n,m}$ after swapping the $\mathbf{a}$ and $\mathbf{q}$ factors. By this observation, we then get the isomorphism $\mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{m,n} \simeq \mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{n,m}$. Just as a point in $\Gamma_{\mathbf{A}, \mathbf{q}, \mathbf{p}}^{m,n}$ can be thought of as a configuration of cameras and points, a point in $\mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{m,n}$ can be thought of as such a configuration up to certain coordinate changes. More precisely, points in $\mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{m,n}$ correspond to orbits in $\Gamma_{\mathbf{A}, \mathbf{q}, \mathbf{p}}^{m,n}$ under the action of a group $\mathcal{G}_m$ consisting of coordinate changes in the world and each of the $m$ images. Up to this group action, we may assume the image planes $L_1, \ldots , L_m \in \mathop{\mathrm{Gr}}(\mathbf P^2, \mathbf P^3)$ are all equal, i.e. $L_1=\ldots=L_m$. This explains the center image in .

We now transition into a formal treatment of the notions described above. Define $\mathcal G_m = (\mathop{\mathrm{PGL}}_3)^m \times \operatorname{Stab}_{\, \mathop{\mathrm{PGL}}_4} \left(E_5 \right)$, an algebraic group of dimension $8m + 12$ which acts rationally on $\Gamma_{\mathbf{A}, \mathbf{q}, \mathbf{p}}^{m,n}$ as follows: $$\begin{aligned} \mathcal G_m \times \Gamma_{\mathbf{A}, \mathbf{q}, \mathbf{p}}^{m,n} &\dashrightarrow \Gamma_{\mathbf{A}, \mathbf{q}, \mathbf{p}}^{m,n} \nonumber \\ (T_1, \ldots , T_m, S) &\cdot (A_1, \ldots , A_m, q_1, \ldots q_n, p_{1 1}, \ldots , p_{m n} ) \label{eq:act-Gm}\\ &= (T_1 A_1 S^{-1}, \ldots , T_m A_m S^{-1}, S q_1 , \ldots , S q_n, T_1 p_{1 1}, \ldots , T_m p_{m n}). \nonumber \end{aligned}$$ To formalize the intuition that $\mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{m,n}$ is a quotient of $\Gamma_{\mathbf{A}, \mathbf{q}, \mathbf{p}}^{m,n}$ by $\mathcal{G}_m$, we recall the definition of a *rational quotient* as follows.

**Definition 10**. (cf. [@Dolgachev §6.2].)
Let $X$ and $Y$ be irreducible algebraic varieties and $G$ an algebraic group acting rationally on $X.$ We say $Y$ is a *rational quotient* of $X$ by $G$, and write $X / G \cong_{\textbf{Bir}} Y$, if $Y$ is a model for the field of $G$-invariant rational functions on $X$: that is, if there exists an isomorphism $\mathbf C(Y) \cong \mathbf C(X)^G.$

A classical result due to Rosenlicht states that rational quotients always exist over any algebraically closed field (cf. [@Dolgachev Theorem 6.2].) The following simple lemma provides sufficient conditions for recognizing a particular class of rational quotients in which the action yields a birational equivalence of $X$ with $G \times Y.$

**Lemma 11**. Let $G$ be an algebraic group acting rationally on a variety $X.$ For a subvariety $Y \subset X$, we have $X / G \cong_{\textbf{Bir}} Y$ if there exists a rational map $$\begin{aligned} \mu_G : X &\dashrightarrow G \\ x &\mapsto \mu_G (x)\end{aligned}$$ such that $\mu_G (y) = \operatorname{id}_G$ for all $y$ in a dense open subset of $Y,$ and such that $\mu_G( g \cdot x) = \mu_G ( x) \, g^{-1}$ for all $(g,x)$ in a dense open subset of $G\times X.$ Moreover, these assumptions imply that $$\begin{aligned} X &\dashrightarrow G \times Y\\ x &\mapsto (\mu_G (x), \mu_G (x) \cdot x) \end{aligned}$$ is a birational equivalence, with a rational inverse given by $$\begin{aligned} G \times Y &\dashrightarrow X\\ (g, y) &\mapsto g^{-1} \cdot y.\end{aligned}$$

*Proof.* A function $f\in \mathbf C(Y)$ pulls back to a function $h \in \mathbf C(X)$ defined by $h(x) = f(\mu_G (x) \cdot x)$ on $X$. Our assumptions imply $h$ is $G$-invariant, since $$h(g\cdot x ) = f( \mu_G (g\cdot x) \cdot (g\cdot x) ) = f\left( (\mu_G (x) g^{-1}) \cdot (g\cdot x) \right) = h(x).$$ Let us write $\varphi^* : \mathbf C(Y) \to \mathbf C(X)^G$ for the resulting map $f \mapsto h.$ Since $Y \subset X,$ we also have the restriction map $\iota^*: \mathbf C(X)^G \to \mathbf C(Y).$ We show that $\varphi^*$ and $\iota^*$ are mutual inverses. Taking any function $f \in \mathbf C(Y)$ and $y\in Y$ in its domain of definition, we calculate $$\iota^* \varphi^* f (y) = f ( \mu_G (y) \cdot y ) = f(y) .$$ Similarly, for any fixed $f \in \mathbf C(X)^G,$ the values $f(x)$ and $f( \mu_G (x) \cdot x)$ are defined for a dense open subset of $x \in X,$ for which we compute $$\varphi^* \iota^* f (x) = f ( \mu_G (x) \cdot x ) = f(x).$$ This proves $X / G \cong_{\textbf{Bir}} Y.$ The birational equivalence of $G \times Y$ and $X$ follows similarly. ◻

**Theorem 12** (CW duality). For any $m,n \geq 0$, we have a birational equivalence of varieties $$\Gamma_{\mathbf{A}, \mathbf{q}, \mathbf{p}}^{m,n+4} \cong_{\textrm{Bir}} \mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{m,n} \times \mathcal{G}_m ,$$ which yields the following commutative diagram (in which each arrow labeled $\sim$ is a birational or biregular isomorphism).
$$\begin{tikzcd}[ampersand replacement=\&] \Gamma_{\mathbf{A}, \mathbf{q}, \mathbf{p}}^{m,n+4} \times (\mathop{\mathrm{PGL}}_3)^n \arrow[r, dashed, "\sim"] \arrow[d, dashed, "\rotatebox{90}{$\sim$}"] \& \Gamma_{\mathbf{A}, \mathbf{q}, \mathbf{p}}^{n,m+4} \times (\mathop{\mathrm{PGL}}_3)^m \arrow[d, "\rotatebox{90}{$\sim$}", dashed]\\ (\mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{m,n} \times \mathcal G_m) \times (\mathop{\mathrm{PGL}}_3)^n \ar[dashed, r, "\sim"] \ar[d] \& (\mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{n,m} \times \mathcal G_n) \times (\mathop{\mathrm{PGL}}_3)^m \ar[d] \\ \mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{m,n} \ar[r, "\sim"] \& \mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{n,m} \end{tikzcd}$$ This diagram has the following additional properties: 1. If $\nu_{m,n}$ denotes any of the horizontal maps, we have $\nu_{n,m} \circ \nu_{m,n} = \operatorname{id}$ wherever both maps are defined. 2. The vertical maps express the reduced image formation variety as a rational quotient of the image formation correspondence, $$\label{eq:rat-quot} \Gamma_{\mathbf{A}, \mathbf{q}, \mathbf{p}}^{m,n+4} / \mathcal{G}_m \cong_{\textbf{Bir}} \mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{m,n}.$$ 3. The duality between the problems of exact resectioning and triangulation may be expressed in terms of this commutative diagram and certain projections: eg., for the bottom row, if $\pi_{\mathbf{a}}', \pi_{\mathbf{q}}'$ denote the projections from $\mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}$ that forget the $\mathbf{a}$ and $\mathbf{q}$ factors, then the diagram below commutes. *Proof.* We begin by constructing the maps that yield the rational quotient [\[eq:rat-quot\]](#eq:rat-quot){reference-type="eqref" reference="eq:rat-quot"}. This part follows by applying  with $X = \Gamma_{\mathbf{A}, \mathbf{q}, \mathbf{p}}^{m,n+4}$, $Y = \mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{m,n}$, and $G = \mathcal{G}_m.$ To obtain the inclusion $\mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{m,n} \subset \Gamma_{\mathbf{A}, \mathbf{q}, \mathbf{p}}^{m,n+4}$ we define $$\begin{aligned} \iota : \mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{m,n} &\to \Gamma_{\mathbf{A}, \mathbf{q}, \mathbf{p}}^{m,n+4}\\ (a_1, \ldots , a_m, q_1, \ldots , q_n , p_{1 1}, \ldots , p_{m n}) &\mapsto \\ (A(a_1), \ldots , A(a_m), E_1, E_2, E_3, E_4, &\, q_1, \ldots , q_n , e_1, e_2, e_3, e_4, p_{1 1}, \ldots , p_{m n}).\end{aligned}$$ To construct the map $\mu_{\mathcal{G}_m} : \Gamma_{\mathbf{A}, \mathbf{q}, \mathbf{p}}^{m,n+4} \dashrightarrow \mathcal{G}_m$, consider first the map $$\begin{aligned} S : (\mathbf P^3)^4 &\dashrightarrow \operatorname{Stab}_{\mathop{\mathrm{PGL}}_4}(E_5) \\ ( q_1, \ldots , q_4) &\mapsto \bigg( \left[\hspace{-.2em} \begin{array}{c|c|c|c} q_1 & q_2 & q_3 & q_4 \end{array}\hspace{-.2em} \right] \cdot \mathop{\mathrm{diag}}\left( [5 2 3 4]_\mathbf{q}, [1 5 3 4]_\mathbf{q}, [1 2 5 4]_\mathbf{q}, [1 2 3 5]_\mathbf{q}\right) \bigg)^{-1} ,\end{aligned}$$ where each $[5 2 3 4]_\mathbf{q}, \ldots , [1 2 3 5]_\mathbf{q}$ is the determinant of a matrix obtained by replacing $q_1, \ldots , q_4$ with $E_5$ in the $4\times 4$ matrix whose columns are $q_1, \ldots , q_4.$ We verify that $S(q_1,\ldots , q_4)$ is well-defined and contained in $\operatorname{Stab}_{\mathop{\mathrm{PGL}}_4}(E_5)$ using linear algebra. 
To ease notation, we write $Q = \left[\hspace{-.2em} \begin{array}{c|c|c|c} q_1 & q_2 & q_3 & q_4 \end{array}\hspace{-.2em} \right]$ and $D = \mathop{\mathrm{diag}}\left( [5 2 3 4]_\mathbf{q}, [1 5 3 4]_\mathbf{q}, [1 2 5 4]_\mathbf{q}, [1 2 3 5]_\mathbf{q}\right)$. Rescaling any of the $q_1, \ldots , q_4$ rescales the matrix product $Q D$ by a common scalar, so $S$ is well-defined projectively. Using Cramer's rule, we calculate that $$\begin{aligned} S(q_1, \ldots , q_4) E_5 &= D^{-1} \cdot Q^{-1} E_5 \\ &= D^{-1} [ [5 2 3 4]_\mathbf{q}: [1 5 3 4]_\mathbf{q}: [1 2 5 4]_\mathbf{q}: [1 2 3 5]_\mathbf{q}]\\ &= E_5.\end{aligned}$$ An analogous calculation can be used to verify that for any $S_0 \in \operatorname{Stab}_{\mathop{\mathrm{PGL}}_4}(E_5)$ we have $$\label{eq:T-act} S( S_0 q_1, \ldots , S_0 q_4) = S(q_1, \ldots ,q_4) \, S_0^{-1}.$$ Similar to our definition of $S$ above, we may define a map $$\begin{aligned} T : (\mathbf P^2)^4 &\dashrightarrow \mathop{\mathrm{PGL}}_3 \\ ( p_1, \ldots , p_4) &\mapsto \bigg( \left[\hspace{-.2em} \begin{array}{c|c|c} p_1 & p_2 & p_3 \end{array}\hspace{-.2em} \right] \cdot \mathop{\mathrm{diag}}\left( [4 2 3]_\mathbf{p}, [1 4 3]_\mathbf{p}, [1 2 4]_\mathbf{p}\right) \bigg)^{-1},\end{aligned}$$ but we replace $p_1, \ldots ,p_3$ by $p_4$ (rather than $e_4$) when forming the expressions $[4 2 3]_\mathbf{p}, \ldots , [1 2 4]_\mathbf{p}.$ Once again, for any $T_0 \in \mathop{\mathrm{PGL}}_3$ we have $$\label{eq:S-act} T( T_0 p_1, \ldots , T_0 p_4) = T(p_1, \ldots ,p_4) \, T_0^{-1}.$$ Finally, we define $$\begin{aligned} \mu_{\mathcal{G}_m} : \Gamma_{\mathbf{A}, \mathbf{q}, \mathbf{p}}^{m,n+4} &\dashrightarrow \mathcal{G}_m \\ (A_1, \ldots ,A_m, q_1, \ldots , q_{n+4} , p_{1 1} , \ldots , p_{m (n+4)}) &\mapsto \\ (T(p_{1 1}, &\ldots , p_{1 4}), \, \ldots ,\, T(p_{m 1}, \ldots , p_{m 4}) , \, S(q_1, \ldots , q_4)).\end{aligned}$$ We check that the two assumptions of  are satisfied. The map $\mu_{\mathcal{G}_m}$ equals the identity element of $\mathcal{G}_m$ at every point of $\iota(\mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{m,n})$ since $T(e_1, e_2, e_3, e_4)$ and $S(E_1, E_2, E_3, E_4)$ both equal the identity. Similarly, for sufficiently generic $g\in \mathcal{G}_m$ and $x\in \Gamma_{\mathbf{A}, \mathbf{q}, \mathbf{p}}^{m,n+4}$ the identity $\mu_{\mathcal{G}_m} ( g\cdot x) = \mu_{\mathcal{G}_m} (x) \, g^{-1}$ follows from [\[eq:T-act\]](#eq:T-act){reference-type="eqref" reference="eq:T-act"} and [\[eq:S-act\]](#eq:S-act){reference-type="eqref" reference="eq:S-act"}. Thus, we may conclude from  that we have the rational quotient [\[eq:rat-quot\]](#eq:rat-quot){reference-type="eqref" reference="eq:rat-quot"}, giving property 2 in the statement of the theorem. Moreover, the lemma implies that $\Gamma_{\mathbf{A}, \mathbf{q}, \mathbf{p}}^{m,n+4}$ is birationally equivalent to $\mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{m,n} \times \mathcal{G}_m$, which allows us to define the vertical maps in the main diagram. To complete the diagram, it suffices to define the bottom-most map, which is $$\begin{aligned} \nu_{m,n} : \mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{m,n} &\to \mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{n,m} \\ (a_1,\ldots , a_m , q_1, \ldots , q_n, p_{1 1} , \ldots , p_{m n}) &\mapsto (q_1,\ldots , q_n , a_1, \ldots , a_m, p_{1 1} , \ldots , p_{n m}), \end{aligned}$$ where the image coordinates are transposed, i.e. the entry of the target tuple indexed by $(j,i)$ is $p_{i j}$. Now, to show that $\nu_{m,n}$ is an isomorphism, we use the symmetric equations [\[eq:cw-flip\]](#eq:cw-flip){reference-type="eqref" reference="eq:cw-flip"}. The remaining parts of the theorem now follow easily.
◻

The reduced image formation correspondence $\mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{m,n}$ sits at the center of the *reduced atlas* depicted in . Following [@agarwal2022atlas], we may define the remaining entities in this figure using slices and projections of the reduced image formation correspondence. For instance, the varieties $\mathrm{P}_{\mathbf{a}, \mathbf{p}}^{m,n}$ and $\mathrm{P}_{\mathbf{q}, \mathbf{p}}^{m,n}$ are defined, respectively, as the images of $\mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{m,n}$ under the coordinate projections $\pi_{\mathbf{q}}' : \mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{m,n} \to \left( \mathbf P^3 \right)^m \times \left( \mathbf P^2 \right)^{mn}$ and $\pi_{\mathbf{a}} ' : \mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{m,n} \to \left( \mathbf P^3 \right)^n \times \left( \mathbf P^2 \right)^{mn}$ appearing in . Slicing the variety $\mathrm{P}_{\mathbf{q}, \mathbf{p}}^{m,n}$ with the coordinate planes defined by $\mathbf{q}= \bar{\mathbf{q}},$ we obtain the *reduced resectioning variety* $\mathrm{P}_{\bar{\mathbf{q}}, \mathbf{p}}^{m,n}.$ Up to Zariski closure, this is the *dual reduced joint image* introduced in [@DBLP:conf/cvpr/TragerHP19]. above and  below illustrate the correspondence between these varieties and their non-reduced counterparts in [@agarwal2022atlas], and provide an explanation for the vertical symmetry present in both atlases.

## Algebraic consequences {#subsec:algebraic-consequences}

Readers familiar with multiview geometry will no doubt wonder how the focal constraints of  relate to various "dual multiview constraints\", derived by Carlsson, Weinshall, and others. All of these previously-studied constraints may be interpreted as polynomials vanishing on $\mathrm{P}_{\bar{\mathbf{q}}, \mathbf{p}}^{m,n}.$ Specializing the $6$-focal constraints to $\mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{m,n},$ we obtain $$\label{eq:det-dual-fundamental} \det \begin{bmatrix} I \otimes E_1^\top & e_1\\ I \otimes E_2^\top & & e_2 \\ I \otimes E_3^\top & & & e_3 \\ I \otimes E_4^\top & & && e_4\\ \bar{Q}_{j_1} & & & & & p_{i j_1}\\ \bar{Q}_{j_2} & & & & & & p_{i j_2} \end{bmatrix} = 0, \quad \forall i = 1, \ldots , m, \, \, 1\le j_1 < j_2 \le n.$$ Permuting the rows, [\[eq:det-dual-fundamental\]](#eq:det-dual-fundamental){reference-type="eqref" reference="eq:det-dual-fundamental"} implies that $$\begin{aligned} &\det \left[ \begin{array}{c|cc} I_{12 \times 12} & \begin{array}{cccc} E_1 & & & E_4\\ & E_2 & & E_4\\ & & E_3 & E_4 \end{array} \\[.9em] \hline \vspace{.9em} \begin{array}{c} \bar{Q}_{j_1}\\ \bar{Q}_{j_2} \end{array} & & \begin{array}{cc} p_{i j_1} & \\ & p_{i j_2} \end{array} \end{array} \right] = 0\nonumber ,\end{aligned}$$ and taking the Schur complement, we find $$\begin{aligned} &\det \left( \left[\begin{array}{ccc} 0_{3 \times 4} &p_{i j_1} & \\ 0_{3\times 4} & & p_{i j_2} \end{array}\right] - \left[\begin{array}{c} \bar{Q}_{j_1}\\ \bar{Q}_{j_2} \end{array}\right] \, \left[ \begin{array}{cccc} E_1 & & & E_4\\ & E_2 & & E_4\\ & & E_3 & E_4 \end{array} \right] \right) = \nonumber \\[0.8em] &\det \left[\begin{array}{ccc} A(\bar{q}_{j_1}) & p_{i j_1} & \\ A(\bar{q}_{j_2}) & & p_{i j_2} \end{array}\right] = 0. \label{eq:dual-fundamental-1} \end{aligned}$$ is a bilinear form in $p_{i j_1}$ and $p_{i j_2},$ which may be represented by Carlsson and Weinshall's $3\times 3$ *dual fundamental matrix* (cf. [@DBLP:journals/ijcv/CarlssonW98 eq.
18]), $$\label{eq:dual-fundamental-2} \left[ \begin{smallmatrix} 0 & \bar{q}_{j_1} [2] \bar{q}_{j_2} [1] \det \left[\begin{smallmatrix} \bar{q}_{j_1} [4] & \bar{q}_{j_1} [3] \\ \bar{q}_{j_2} [4] & \bar{q}_{j_2} [3] \end{smallmatrix}\right] & \bar{q}_{j_1} [3] \bar{q}_{j_2} [1] \det \left[\begin{smallmatrix} \bar{q}_{j_1} [2] & \bar{q}_{j_1} [4] \\ \bar{q}_{j_2} [2] & \bar{q}_{j_2} [4] \end{smallmatrix}\right] \\ \bar{q}_{j_1} [1] \bar{q}_{j_2} [2] \det \left[\begin{smallmatrix} \bar{q}_{j_1} [3] & \bar{q}_{j_1} [4] \\ \bar{q}_{j_2} [3] & \bar{q}_{j_2} [4] \end{smallmatrix}\right] & 0 & \bar{q}_{j_1} [3] \bar{q}_{j_2} [2] \det \left[\begin{smallmatrix} \bar{q}_{j_1} [4] & \bar{q}_{j_1} [1] \\ \bar{q}_{j_2} [4] & \bar{q}_{j_2} [1] \end{smallmatrix}\right] \\ \bar{q}_{j_1} [1] \bar{q}_{j_2} [3] \det \left[\begin{smallmatrix} \bar{q}_{j_1} [4] & \bar{q}_{j_1} [2] \\ \bar{q}_{j_2} [4] & \bar{q}_{j_2} [2] \end{smallmatrix}\right] & \bar{q}_{j_1} [2] \bar{q}_{j_2} [3] \det \left[\begin{smallmatrix} \bar{q}_{j_1} [1] & \bar{q}_{j_1} [4] \\ \bar{q}_{j_2} [1] & \bar{q}_{j_2} [4] \end{smallmatrix}\right] & 0 \end{smallmatrix} \right].$$ This construction yields a total of $m \binom{n}{2}$ bilinear equations vanishing on the reduced joint image $\mathrm{P}_{\bar{\mathbf{q}}, \mathbf{p}}^{m,n}.$ A similar application of this Schur complement trick to suitably-chosen $7$- and $8$-focals leads to the *dual trifocal and quadrifocal tensors* (cf. [@DBLP:journals/ijcv/CarlssonW98 §6.3--6.4], [@DBLP:conf/cvpr/TragerHP19 §3, 4]). For a sufficiently generic point configuration $\bar{\mathbf{q}} \in \left( \mathbf P^3 \right)^n,$ it turns out that the reduced $2$-focals [\[eq:dual-fundamental-1\]](#eq:dual-fundamental-1){reference-type="eqref" reference="eq:dual-fundamental-1"} determine $\mathrm{P}_{\bar{\mathbf{q}}, \mathbf{p}}^{m,n}$ as a subscheme of $\left(\mathbf P^2\right)^{mn}.$ states precise genericity conditions such that this occurs. Thus, while equations needed to cut out $\Gamma_{\mathbf{A}, \mathbf{q}, \mathbf{p}}^{m,n}$ in a strong sense have very high degree, only bilinear equations are needed to cut out its quotient $\mathrm{P}_{\mathbf{a}, \mathbf{q}, \mathbf{p}}^{m,n}$ in a weaker sense. The essential insight is, via Carlsson-Weinshall duality, that $\mathrm{P}_{\bar{\mathbf{q}}, \mathbf{p}}^{m,n}$ is simply the direct product of "ordinary\" multiview varieties, $$\label{eq:iso-joint-image} \mathrm{P}_{\bar{\mathbf{q}}, \mathbf{p}}^{m,n} = \Gamma_{A(\bar{\mathbf{q}}), \mathbf{p}}^{n,m} \cong \Gamma_{A(\bar{\mathbf{q}}), \mathbf{p}}^{n,1} \times \cdots \times \Gamma_{A(\bar{\mathbf{q}}), \mathbf{p}}^{n,1} \quad \text{ where } A(\bar{\mathbf{q}}) = \left(A(\bar{q}_1), \ldots , A(\bar{q}_n) \right).$$ A previous result, namely part (3) of , states that the multiview variety of a sufficiently generic camera arrangement is cut out by the bilinear forms in its vanishing ideal. 
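As a concrete sanity check on the specialization above, the following minimal Python/NumPy sketch (synthetic data; an illustration added here, not part of the original text, with helper names of our own choosing) verifies the Carlsson-Weinshall symmetry [\[eq:cw-flip\]](#eq:cw-flip){reference-type="eqref" reference="eq:cw-flip"}, the Cremona formula [\[eq:cremona\]](#eq:cremona){reference-type="eqref" reference="eq:cremona"} for the center of a reduced camera, and the vanishing of the bilinear constraint [\[eq:dual-fundamental-1\]](#eq:dual-fundamental-1){reference-type="eqref" reference="eq:dual-fundamental-1"} on image data produced by a reduced camera.

```python
import numpy as np

def A(a):
    """Reduced camera matrix A(a) of a point a = [a1:a2:a3:a4]."""
    a1, a2, a3, a4 = a
    return np.array([[a1, 0., 0., a4],
                     [0., a2, 0., a4],
                     [0., 0., a3, a4]])

rng = np.random.default_rng(2)
a = rng.standard_normal(4)                         # parameters of a reduced camera
q1, q2 = rng.standard_normal(4), rng.standard_normal(4)

# Carlsson-Weinshall flip: A(a) q = A(q) a
print(np.allclose(A(a) @ q1, A(q1) @ a))           # True

# the center of A(a) is the Cremona image of a
c = np.array([1/a[0], 1/a[1], 1/a[2], -1/a[3]])
print(np.allclose(A(a) @ c, 0))                    # True

# reduced 2-focal: det [ A(q1)  p1  0 ; A(q2)  0  p2 ] = 0 on consistent data,
# because (a, -1, -1) lies in the kernel by the flip identity above
p1, p2 = A(a) @ q1, A(a) @ q2
F = np.zeros((6, 6))
F[:3, :4], F[:3, 4] = A(q1), p1
F[3:, :4], F[3:, 5] = A(q2), p2
print(abs(np.linalg.det(F)))                       # numerically ~ 0
```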
The Cremona transformation $\mathcal{C}$ allows us to translate the genericity conditions on the cameras $A(\bar{\mathbf{q}})$ from part (3) of  into conditions on the point arrangement $\bar{\mathbf{q}} \in \left( \mathbf P^3 \right)^n.$ A $4$-nodal cubic surface in $\mathbf P^3$ containing the points $E_1,\ldots E_4$ is given by an equation of the form $$\label{eq:cayley-cubic} a_1 x_2 x_3 x_4 + a_2 x_1 x_3 x_4 + a_3 x_1 x_2 x_4 + a_4 x_1 x_2 x_3 = 0, \quad [a_1 : a_2 : a_3 : a_4] \in \mathbf P^3.$$ If all $a_i$ are nonzero, then such a surface is projectively equivalent to *Cayley's nodal cubic surface*, for which $a_1=a_2=a_3=a_4=1$ and the points $E_1,\ldots , E_4$ comprise the singular locus. We also allow degenerate cases where one or more $a_i=0$ in [\[eq:cayley-cubic\]](#eq:cayley-cubic){reference-type="eqref" reference="eq:cayley-cubic"}, in which case the surface degenerates to the union of a plane and a quadric, or the union of three planes.

**Theorem 13**. Fix $n$ distinct points $\bar q_1, \dots, \bar q_{n} \in \mathbf P^3 \setminus \{ E_1, E_2, E_3, E_4\}$, $n \geq 2$, such that no four $\bar q_j$ lie on a common 4-nodal cubic surface through $E_1,\ldots E_4.$ Write $$\bar{\mathbf{q}} = (\bar{q}_1, \ldots , \bar{q}_{n}, E_1, \ldots , E_4),$$ and $\bar{\mathbf{q}}'$ for the sub-arrangement of $\bar{\mathbf{q}}$ obtained by deleting the $E_1, \ldots , E_4$. We have a birational equivalence of varieties $$\Gamma_{\bar{\mathbf{q}} , \mathbf{p}}^{m,n+4} \simeq_{\text{\bf Bir}} \mathrm{P}_{\bar{\mathbf{q}} ', \mathbf{p}}^{m,n} \times (\mathop{\mathrm{PGL}}_3)^m,$$ which realizes the reduced resectioning variety $\mathrm{P}_{\bar{\mathbf{q}} ', \mathbf{p}}^{m,n}$ as a rational quotient of $\Gamma_{\bar{\mathbf{q}} , \mathbf{p}}^{m,n+4}$ by $(\mathop{\mathrm{PGL}}_3)^m$. Additionally, $\mathrm{P}_{\bar{\mathbf{q}} ', \mathbf{p}}^{m,n}$ is cut out scheme-theoretically by the $m \binom{n}{2}$ bilinear equations [\[eq:dual-fundamental-1\]](#eq:dual-fundamental-1){reference-type="eqref" reference="eq:dual-fundamental-1"}.

*Proof.* The statements involving rational quotients follow similarly as in . Under the isomorphism [\[eq:iso-joint-image\]](#eq:iso-joint-image){reference-type="eqref" reference="eq:iso-joint-image"}, the bilinear constraints in the theorem statement are the usual $2$-focals vanishing on the multiview variety. We recall from part (3) of  that the $2$-focals cut out the multiview variety scheme-theoretically whenever the camera centers are distinct and do not lie on a common plane. Now, since $\bar{q}_i$ is not in the span of any three $E_j,$ the center of the camera $A(\bar{q}_i)$ is given by the Cremona transformation $\mathcal{C} (\bar{q}_i)$. Since $\mathcal{C}$ maps any plane in $\mathbf P^3$ to a $4$-nodal cubic surface, and vice-versa, we are done. ◻

From the practitioner's point of view, the genericity assumptions of , as well as the implicit assumption that we can fix four fiducial 3D points and their images to the standard positions [\[eq:standard-position\]](#eq:standard-position){reference-type="eqref" reference="eq:standard-position"}, may be quite reasonable. This is supported by the experiments of [@DBLP:conf/cvpr/TragerHP19 §5], suggesting some potential uses of Carlsson-Weinshall duality in SfM settings.

# Proof of  {#sec:ideals}

Our proof of  follows the general strategy used in the proof of [@agarwal2022atlas Theorem 3.2], but requires some nontrivial modifications.

**Remark 14**. Unlike triangulation, resectioning is an interesting problem even for $m=1$ camera.
In fact, most of the work needed to prove  involves the special case $m=1$. As in the proof of , we fix $m=1$ and write $p_i$ in place of $p_{1 i}.$ We also write ${I}_{}\left( \bullet \right)$ in place of ${I}_{1} \left( \bullet \right),$ and $\Gamma_{\bar{\mathbf{q}} , \mathbf{p}}$ instead of $\Gamma_{\bar{\mathbf{q}} , \mathbf{p}}^{1,n}.$ Finally, let us recall the variety $\Gamma_{\bar{\mathbf{Q}}, \mathbf{p}}^{n,m} \subset \left( \mathbf P^2 \right)^{mn}$ introduced in . In place of $\Gamma_{\bar{\mathbf{Q}}, \mathbf{p}}^{n,1},$ we simply write $\Gamma_{\bar{\mathbf{Q}}, \mathbf{p}}.$

## Proof outline and preliminary facts {#subsec:outline}

To begin, we describe our proof strategy at a high level. The main steps of our proof can be understood via the diagram in Figure [\[fig:proof-schematic\]](#fig:proof-schematic){reference-type="ref" reference="fig:proof-schematic"}, with each of the steps (1)--(4) explained below.

1. **Coordinate change to obtain a generic hypercamera arrangement from the structured one.** For $\bar q_1, \dots, \bar q_n \in \mathbf P^3$ with no four coplanar, we establish in  that there exist $3 \times 3$ invertible matrices $H_1, \dots, H_n$ such that the transformed hypercamera arrangement $$\bar{\mathbf{B}} := (H_1 \bar Q_{1} , \ldots, H_n \bar Q_{n})$$ is *minor-generic* in the sense of . Applying the coordinate change $\mathbf H = (H_1, \dots, H_n)$ to $(\mathbf P^2)^n$ reduces the study of ${I}_{}(\bar{\mathbf{q}})$ for the structured arrangement $\bar\mathbf{Q}$ to that of ${I}_{}(\bar{\mathbf{B}})$ for the generic $\bar{\mathbf{B}}.$

2. **Specialize back to the structured arrangement.** If we apply the inverse of the coordinate change $\mathbf H^{-1} = (H_1^{-1}, \dots, H_n^{-1})$ from step (1) to $(\mathbf P^2)^n$, we can specialize from $I(\Gamma_{\bar \mathbf{B}, \mathbf{p}})$ to $I(\Gamma_{\bar{\mathbf{q}} , \mathbf{p}})$: $$\bar B_j (A) = p_j \quad \iff \quad H_j^{-1} \bar B_j (A) = H_j^{-1} p_j \quad \iff \quad \bar Q_{j}(A) = H_j^{-1} p_j.$$

3. **Show equality of focal and vanishing ideals in the generic case.** By the previous two steps, it is sufficient to show ${I}_{}(\bar \mathbf{B}) = I(\Gamma_{\bar\mathbf{B}, \mathbf{p}})$. We establish this using Gröbner bases, as described in .

4. **Show equality of focal and vanishing ideals in the structured case.** Combine steps (1)--(3).

For the first step in the proof outline, we need the following definition.

**Definition 15**. We say the hypercamera arrangement $\bar{\mathbf{B}} = (\bar{B}_1, \ldots , \bar{B}_n) \in \left( \mathbf P^{35} \right)^n$ is *minor-generic* if all $12 \times 12$ minors of the $12 \times 3n$ matrix $\left( \bar{B}_1^\top \mid \cdots \mid \bar{B}_n^\top \right)$ are nonzero. This is a direct analogue of the genericity condition in , part (1).

We also need the following result. Let $\mathbf F$ be a field, and consider $s$ matrices $A_1, \ldots , A_s \in \mathbf F^{M\times N}$. We say $A_1, \ldots , A_s$ are *rowspan-uniform* if, for any subset $S\subset [s]$ of size at least $N/M,$ we have $$\label{eq:rowspan} \displaystyle\sum_{i\in S} \mathop{\mathrm{rowspan}}(A_i) = \mathbf F^N.$$

**Lemma 16**. If $A_1, \ldots , A_s \in \mathbf F^{M\times N}$ are rowspan-uniform, then there exists a dense Zariski-open set of matrices $(H_1, \ldots , H_s) \in \mathop{\mathrm{GL}}(\mathbf F^M)^s$ such that the maximal $N\times N$ minors of the $sM \times N$ matrix $$\label{eq:stacked-transformed-matrix} \left( \begin{array}{c} H_1 A_1 \\ \hline \vdots \\ \hline H_s A_s \end{array} \right)$$ are all nonzero.
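In the resectioning setting we will take $(M,N) = (3,12)$ and $A_j = \bar Q_j$. The following minimal Python/NumPy sketch (synthetic data; an illustration added here, not a proof, with helper names of our own) shows why the coordinate changes $H_j$ matter: for the structured arrangement $\bar{\mathbf{Q}}$ itself, certain $12\times 12$ minors of $(\bar Q_1^\top \mid \cdots \mid \bar Q_n^\top)$ vanish identically because of the block structure in [\[eq:q-check\]](#eq:q-check){reference-type="eqref" reference="eq:q-check"}, whereas after applying random $H_j \in \mathop{\mathrm{GL}}_3$ the same minors become nonzero.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
pts = rng.standard_normal((n, 4))      # random points: almost surely no four coplanar

def Qbar(q):
    return np.kron(np.eye(3), q)       # \bar Q_j = I_3 (x) q^T, a 3 x 12 matrix

# stacked 12 x 3n matrices ( B_1^T | ... | B_n^T ), before and after the H_j
S_struct = np.hstack([Qbar(q).T for q in pts])
H = [rng.standard_normal((3, 3)) for _ in range(n)]
S_generic = np.hstack([(H[j] @ Qbar(pts[j])).T for j in range(n)])

# choose 12 columns: the first two columns of each of the six blocks.
# For the structured arrangement these columns are supported on only 8 of the
# 12 rows, so the corresponding minor is identically zero; after the generic
# coordinate changes H_j it is (generically) nonzero.
cols = [3*j + i for j in range(n) for i in (0, 1)]
print(np.linalg.det(S_struct[:, cols]))    # 0
print(np.linalg.det(S_generic[:, cols]))   # nonzero
```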
We leave the proof of this result to . This result is a direct generalization of [@idealMultiview Lemma 3.6], in the setting of triangulation. In our setting of resectioning, we take $(M,N) = (3, 12),$ and deduce that we can transform the arrangement $\bar{\mathbf{Q}}$ for suitably generic $\bar{\mathbf{q}}$ to a minor-generic arrangement $\bar{\mathbf{B}}$ using the following result. **Lemma 17**. Suppose that $\bar{\mathbf{q}}\in (\mathbf P^3)^n$ is a point arrangement such that no four points are coplanar. Then $\bar{\mathbf{Q}}$ is rowspan-uniform. *Proof.* For any subset $S \subset [n]$ of size at least $4$, we must show $$\sum_{j \in S} \mathop{\mathrm{rowspan}}(\bar Q_j) = \mathbf C^{12} .$$ Noting the compatible direct-sum decompositions $$\begin{aligned} \mathbf C^{12} &\simeq \mathbf C^4 \oplus \mathbf C^4 \oplus \mathbf C^4 ,\\ \mathop{\mathrm{rowspan}}(\bar{Q}_j ) &\simeq \mathop{\mathrm{rowspan}}(\bar{q}_j^\top) \oplus \mathop{\mathrm{rowspan}}(\bar{q}_j^\top) \oplus \mathop{\mathrm{rowspan}}(\bar{q}_j^\top) ,\end{aligned}$$ it suffices to observe that any set of four elements from the set $\{ \bar{q}_j \}_{j \in S}$ span $\mathbb{P}^3,$ from our assumption that such a set is noncoplanar. ◻ gives us a geometric interpretation of when $\bar{\mathbf{Q}}$ is minor-generic. Algebraically, this condition is precisely what we need to obtain the Gröbner basis of  via a standard specialization argument. This is the focus of the next subsection. ## Gröbner basis tools {#subsec:gb-tools} To realize ${I}_{}(\bar{\mathbf{q}})$ as the specialization of an ideal that is independent of $\bar{\mathbf{q}}$, we could replace the arrangement $\bar{\mathbf{Q}}$ with $\mathbf{B}= (B_1, \ldots , B_n)$, where $$\label{eq:Q-matrix} B_i = \left[ \begin{smallmatrix} B_i [1,1] & B_i [1,2] & B_i [1,3] & B_i [1,4] & B_i [1,5] & B_i [1,6] & B_i [1,7] & B_i [1,8] & B_i [1,9] & B_i [1,10] & B_i [1,11] & B_i [1,12] \\ B_i [2,1] & B_i [2,2] & B_i [2,3] & B_i [2,4] & B_i [2,5] & B_i [2,6] & B_i [2,7] & B_i [2,8] & B_i [2,9] & B_i [2,10] & B_i [2,11] & B_i [2,12] \\ B_i [3,1] & B_i [3,2] & B_i [3,3] & B_i [3,4] & B_i [3,5] & B_i [3,6] & B_i [3,7] & B_i [3,8] & B_i [3,9] & B_i [3,10] & B_i [3,11] & B_i [3,12] \end{smallmatrix}\right],$$ thereby introducing $36 n$ new indeterminates. Alternatively, we could replace $\bar{\mathbf{Q}}$ with the symbolic arrangement $\mathbf{B}^\star = (B_1^\star, \ldots , B_n^\star)$, where $$\label{eq:Qvee-matrix} B_i^\star = \left[ \begin{smallmatrix} B_i^\star [1,1] & B_i^\star [1,2] & B_i^\star [1,3] & B_i^\star [1,4] & 0 & 0 &0 &0 & 0 & 0 & 0 &0 \\ 0 & 0 &0 &0 & B_i^\star [2,1] & B_i^\star [2,2] & B_i^\star [2,3] & B_i^\star [2,4] & 0 & 0 & 0 & 0\\ 0 & 0 &0 &0 & 0 & 0 & 0 &0 & B_i [3,1]^\star & B_i^\star [3,2] & B_i^\star [3,3] & B_i^\star [3,4] \end{smallmatrix}\right],$$ for a total of $12n$ new indeterminates. 
For either $\mathbf{B}$ or $\mathbf{B}^\star$ and for each $k$ with $6\le k \le n$, we are interested in the determinants of the matrices $$\begin{aligned} (\mathbf{B}\, | \, \mathbf{p}) [\mathbf{r} ]_\sigma&= \begin{bmatrix} B_{\sigma_1}[\mathbf{r}_{1}, :] &p_{\sigma_1}[\mathbf{r}_{1}] &0&\dots &0\\B_{\sigma_2}[\mathbf{r}_{2}, :] & 0 & p_{\sigma_2}[\mathbf{r}_{2}]&\ddots &0 \\ \vdots & \vdots &\ddots&\ddots &\vdots \\B_{\sigma_k}[\mathbf{r}_{k}, :] &0&\dots&0&p_{\sigma_k}[\mathbf{r}_{k}] \end{bmatrix}, \label{eq:Q-focal-matrix} \\[0.5em] (\mathbf{B}^\star \, | \, \mathbf{p}) [\mathbf{r} ]_\sigma&= \begin{bmatrix} B_{\sigma_1}^\star[\mathbf{r}_{1}, :] &p_{\sigma_1}[\mathbf{r}_{1}] &0&\dots &0\\B_{\sigma_2}^\star[\mathbf{r}_{2}, :] & 0 & p_{\sigma_2}[\mathbf{r}_{2}]&\ddots &0 \\ \vdots & \vdots &\ddots&\ddots &\vdots \\B_{\sigma_k}^\star[\mathbf{r}_{k}, :] &0&\dots&0&p_{\sigma_k}[\mathbf{r}_{k}] \end{bmatrix},\label{eq:Qvee-focal-matrix} \\ \text{where } \mathbf{r} = (\mathbf{r}_{1}, \ldots , \mathbf{r}_{k}), \quad \mathbf{r}_{i} &\subset [3], \nonumber \\ \# \mathbf{r}_{1} + \cdots + \# \mathbf{r}_{k} &= 12 + k , \nonumber \\ \sigma= \{ \sigma_1, \ldots , \sigma_k \} &\subset [n]. \nonumber \end{aligned}$$ Upon specializing $\mathbf{B}\to \bar{\mathbf{Q}}$, or $\mathbf{B}^\star \to \bar{\mathbf{Q}}$, the respective determinants of [\[eq:Q-focal-matrix\]](#eq:Q-focal-matrix){reference-type="eqref" reference="eq:Q-focal-matrix"} or [\[eq:Qvee-focal-matrix\]](#eq:Qvee-focal-matrix){reference-type="eqref" reference="eq:Qvee-focal-matrix"} specialize to the same $k$-focal.

**Proposition 18**. Equip either ring $\mathbf C[ \mathbf{B}, \mathbf{p}]$ or $\mathbf C[\mathbf{B}^\star , \mathbf{p}]$ with the $\mathbf Z^{2n}$-grading defined on generators by $\deg (B_i [j,k] ) = \deg (B_i^\star [j,k] ) = e_i$ and $\deg (p_{i} [j]) = e_{n+i}.$ The polynomial $\det (\mathbf{B}\, | \, \mathbf{p}) [\mathbf{r} ]_\sigma$ is homogeneous of multidegree $$\label{eq:multidegree} \sum_{i=1}^k (\# \mathbf{r}_i - 1)e_{\sigma_i} + e_{n + \sigma_i}.$$ The same is true for $\det (\mathbf{B}^\star \, | \, \mathbf{p}) [\mathbf{r} ]_\sigma,$ provided it is nonzero. When $k>12,$ we have $\mathbf{r}_j = \{ l \}$ for some $j\in [k]$, $l \in [3],$ and hence $$\begin{aligned} \det (\mathbf{B}\, | \, \mathbf{p}) [\mathbf{r} ]_\sigma&= p_{\sigma_j} [l] \cdot \det (\mathbf{B}\, | \, \mathbf{p}) [\mathbf{r}_1, \ldots , \widehat{\mathbf{r}_j}, \ldots , \mathbf{r}_k ]_{\sigma\setminus \{ \sigma_j \}} , \label{eq:focal-factor}\\ \det (\mathbf{B}^\star \, | \, \mathbf{p}) [\mathbf{r} ]_\sigma&= p_{\sigma_j} [l] \cdot \det (\mathbf{B}^\star \, | \, \mathbf{p}) [\mathbf{r}_1, \ldots , \widehat{\mathbf{r}_j}, \ldots , \mathbf{r}_k ]_{\sigma\setminus \{ \sigma_j \}} .\nonumber \end{aligned}$$

*Proof.* The argument is nearly identical to [@agarwal2022atlas Proposition 3.3]. The multidegree formula [\[eq:multidegree\]](#eq:multidegree){reference-type="eqref" reference="eq:multidegree"} follows from a calculation analogous to that already given in [\[eq:rescale\]](#eq:rescale){reference-type="eqref" reference="eq:rescale"}. Since $12 = \displaystyle\sum_{i=1}^k (\# \mathbf{r}_{i} - 1),$ it follows that at most $12$ of the sets $\mathbf{r}_{i}$ can contain more than one element. If some $\mathbf{r}_{j}$ is a singleton, the factorization [\[eq:focal-factor\]](#eq:focal-factor){reference-type="eqref" reference="eq:focal-factor"} follows by Laplace expansion. ◻

We now define four auxiliary ideals.

**Definition 19**.
The ideals ${I}_{6..12} (\mathbf{B}), {I}_{m}(\mathbf{B})\subset \mathbf C[\mathbf{B}, \mathbf{p}]$ are generated by all determinants of [\[eq:Q-focal-matrix\]](#eq:Q-focal-matrix){reference-type="eqref" reference="eq:Q-focal-matrix"} for $k$ in the ranges $6\le k \le 12$ and $k=m,$ respectively. Similarly, ${I}_{6..12} ( \mathbf{B}^\star), {I}_{m} (\mathbf{B}^\star )\subset \mathbf C[\mathbf{B}^\star , \mathbf{p}]$ are generated by all determinants of [\[eq:Qvee-focal-matrix\]](#eq:Qvee-focal-matrix){reference-type="eqref" reference="eq:Qvee-focal-matrix"}. Each of the four auxiliary ideals in  is useful for different reasons. For example, ${I}_{m}(\mathbf{B})$ and ${I}_{m} (\mathbf{B}^\star )$ are the ideals of maximal minors of a *sparse generic matrix* whose nonzero entries are distinct indeterminates. Thus, ${I}_{m}(\mathbf{B})$ and ${I}_{m} (\mathbf{B}^\star )$ belong to the class of *sparse determinantal ideals*, whose structure has been analyzed in several previous works [@giusti; @boocher]. Most relevant to our work is the result of [@boocher], which directly implies that the $m$-focals form a universal Gröbner basis for either of these ideals. For the other two ideals ${I}_{6..12} (\mathbf{B})$, ${I}_{6..12} ( \mathbf{B}^\star)$, we do not know whether or not the focals form universal Gröbner bases. However,  shows that they *do* form Gröbner bases for a class of product orders that allows us to make the necessary specialization argument. We recall that a *product order* on $\mathbf C[\mathbf{B} , \mathbf{p} ]$ with $\mathbf{B} < \mathbf{p}$ is a monomial order defined by comparing monomials first with some fixed monomial order in $\mathbf{p}$, then breaking any ties with some other monomial order in $\mathbf{B}.$

**Proposition 20**. 1. The set $G = \{ \det (\mathbf{B}\, | \, \mathbf{p}) [\mathbf{r}]_\sigma \mid 6\le \# \sigma \le 12\}$ forms a Gröbner basis for the ideal ${I}_{6..12} (\mathbf{B})$ for any product order with $\mathbf{B}< \mathbf{p}.$ 2. The set $G^\star = \{ \det (\mathbf{B}^\star \, | \, \mathbf{p}) [\mathbf{r}]_\sigma \mid 6\le \# \sigma \le 12\}$ forms a Gröbner basis for the ideal ${I}_{6..12} ( \mathbf{B}^\star)$ for any product order with $\mathbf{B}^\star < \mathbf{p}.$

We provide a proof in . A specialization argument applied to the two parts of  gives, respectively, the two parts of  below.

**Lemma 21**. For any monomial order on $\mathbf C[\mathbf{p}],$ the following hold:

1. Let $\bar{\mathbf{B}}$ be a specialization of $\mathbf{B}$ which is minor-generic. Then the specialized $6-12$ focals form a Gröbner basis for the ideal they generate.

2. Let $\bar{\mathbf{Q}}$ be a specialization of $\mathbf{B}^\star$ derived from a point arrangement $\bar{\mathbf{q}}\in \left( \mathbf P^3 \right)^n$ with no four points coplanar. Then the specialized $6-12$ focals form a Gröbner basis for the ideal they generate.

*Proof.* For part (1), let $<$ be any product order with $\mathbf{B}<\mathbf{p}$. Writing any element of $G$ as a sum of its $\mathbf{p}$-monomials with coefficients in $\mathbf C[\mathbf{B}]$, say with $\mathbf{p}^{\alpha_1}<\cdots<\mathbf{p}^{\alpha_k}$, we have $$\text{in}_< \left(\sum_{i=1}^{k}g_{\alpha_i}(\mathbf{B})\,\mathbf{p}^{\alpha_i}\right)=\text{in}_<(g_{\alpha_k}(\mathbf{B}))\,\mathbf{p}^{\alpha_k}.$$ Standard specialization results for Gröbner bases with respect to product orders [@CLO Theorem 2, §4.7] imply that the $\bar{\mathbf{B}}$-specialized $6-12$ focals form a Gröbner basis if each leading coefficient $g_{\alpha_k}(\bar{\mathbf{B}})$ is nonzero. Each of these coefficients is a $12\times 12$ minor of $(\bar{B}_1^\top \mid \cdots \mid \bar{B}_n^\top)$. By minor-genericity, none of these coefficients vanish.
Similarly, for part (2), consider any product order $<$ with $\mathbf{B}^*<\mathbf{p}$. In this case, the nonzero coefficients $g_{\alpha_k}(\mathbf{B}^\star)$ are always products of three $4\times 4$ determinants, $$\label{eq:det-prod} \displaystyle\prod_{i=1}^3 \det \begin{bmatrix} B_{j_{i, 1}}^\ast [i,1] & B_{j_{i, 1}}^\ast [i,2] & B_{j_{i, 1}}^\ast [i, 3] & B_{j_{i, 1}}^\ast [i,4] \\ B_{j_{i, 2}}^\ast [i,1] & B_{j_{i, 2}}^\ast [i,2] & B_{j_{i, 2}}^\ast [i, 3] & B_{j_{i, 2}}^\ast [i,4] \\ B_{j_{i, 3}}^\ast [i,1] & B_{j_{i, 3}}^\ast [i,2] & B_{j_{i, 3}}^\ast [i, 3] & B_{j_{i, 3}}^\ast [i,4] \\ B_{j_{i, 4}}^\ast [i,1] & B_{j_{i, 4}}^\ast [i,2] & B_{j_{i, 4}}^\ast [i, 3] & B_{j_{i, 4}}^\ast [i,4] \end{bmatrix}.$$ Our noncoplanarity assumption implies that the $\mathbf{B}^\star \to \bar{\mathbf{Q}}$ specialization of [\[eq:det-prod\]](#eq:det-prod){reference-type="eqref" reference="eq:det-prod"} is nonzero. ◻ Finally, we have the following result on the vanishing ideal of $\Gamma_{\bar{\mathbf{B}}, \mathbf{p}}$ when $\bar{\mathbf{B}}$ is a minor-generic hypercamera arrangement. See  for the proof. **Proposition 22**. For a minor-generic hypercamera arrangement $\bar{\mathbf{B}}$, we have that $$I(\Gamma_{\bar{\mathbf{B}}, \mathbf{p}}) = {I}_{}(\bar{\mathbf{B}}).$$ ## Completing the proof {#subsec:proof-main-ideals} Using the results of the previous sections, we may complete the proof of , following the overall structure presented in . We first prove the statement for $m=1$ camera. Let $\bar{\mathbf{q}} \in \left(\mathbf P^3 \right)^m$ be a point arrangement with no four points coplanar. then implies that the hypercamera arrangement $\bar{\mathbf{Q}}$ is rowspan-uniform, and thus  implies there exist coordinate changes in the images, $\mathbf{H} = (H_1, \ldots , H_m ) \in (\mathop{\mathrm{PGL}}_3)^m$, such that the arrangement $\bar{\mathbf{B}} = (H_1 \bar Q_{1}, \ldots , H_m \bar Q_{m})$ is minor-generic. Noting $$\label{eq:focal-transform} \begin{pmatrix} \bar Q_{1} & p_1 & &\\ \vdots & & \ddots &\\ \bar Q_{n} & & & p_n \end{pmatrix} = \begin{pmatrix} H_1^{-1} & &\\ &\ddots &\\ & & H_n^{-1} \end{pmatrix} \begin{pmatrix} \bar{B}_1 & H_1 p_1 & &\\ \vdots & & \ddots &\\ \bar{B}_n & & & H_n p_n \end{pmatrix},$$ we define the isomorphism of multigraded rings $$\begin{aligned} L_{\mathbf{H}} : \mathbf C[\mathbf{p}] &\to \mathbf C[\mathbf{p}] \\ p_i &\to H_i p_i,\end{aligned}$$ and observe that $${I}_{}(\bar{\mathbf{q}}) = L_{\mathbf{H}} ({I}_{}(\bar{\mathbf{B}})) .$$ To see this, take any focal $f \in {I}_{}(\bar{\mathbf{q}})$. Just as in the proof of , corresponding minor of the focal matrix on the left of  may written as a $\mathbb{C}$-linear combination of focals for the arrangement $\bar{\mathbf{B}}.$ Hence the inclusion ${I}_{}(\bar{\mathbf{q}})\subset L_{\mathbf{H}} ({I}_{}(\bar{\mathbf{B}}))$ holds, and the reverse follows similarly. Thus, we have $$\begin{aligned} {I}_{}(\bar{\mathbf{q}}) &= L_{\mathbf{H}} ({I}_{}(\bar{\mathbf{B}})) \\ &= L_{\mathbf{H}} ( I(\Gamma_{\bar{\mathbf{B}}, \mathbf{p}})) \tag{\Cref{prop:vanishing-ideal-minor-generic}}\\ &= I(\Gamma_{\bar{\mathbf{Q}}, \mathbf{p}}) \\ &= I(\Gamma_{\bar{\mathbf{q}}, \mathbf{p}}) \tag{\Cref{prop:set-theoretic}}. \end{aligned}$$ Thus the focals generate $I(\Gamma_{\bar{\mathbf{q}}, \mathbf{p}})$. Moreover,  part (2) implies that they form a universal Gröbner basis, which completes the proof when $m=1$. 
Finally, if $m>1,$ it suffices to observe that $\Gamma_{\bar{\mathbf{q}}, \mathbf{p}}^{m,n}$ is the direct product of varieties $\Gamma_{\bar{\mathbf{q}}, \mathbf{p}}$, and hence the vanishing ideals sum. Moreover, two $k$-focals corresponding to different factors have disjoint support in $\mathbf C[\mathbf{p}],$ so their S-polynomials reduce to zero for any term order, and we may conclude that the focals form a universal Gröbner basis for any number of cameras. # Optimal single-camera resectioning {#sec:resectioning} The results of  express a duality principle for the exact versions of the camera resectioning and triangulation problems. A consequence of this duality is that, in a certain sense, resectioning and triangulation are equivalent problems. However, we should be mindful that this equivalence holds in an idealized setting which assumes that the pinhole camera is exact and there is no measurement noise. In practice, neither of these assumptions hold. In this section, we fix a generic point arrangement $\bar{\mathbf{q}}$ and consider $\Gamma_{\bar{\mathbf{q}}, \mathbf{p}}^{1,n} \subset (\mathbf P^2)^n$ intersected with the affine chart where $p_i[3]\ne 0$ for all $1\le i \le n.$ We denote this affine variety by $X_{\bar{\mathbf{q}}, n}$. In other words, for a point arrangement $\bar{\mathbf{q}}\in \left( \mathbf P^3\right)^n$ such that no four points are coplanar, the affine variety $X_{\bar{\mathbf{q}}, n}$ is, by , equal to the closed image of the rational map $$\begin{aligned} \psi_{\bar{\mathbf{q}}, n} : \mathbf P^{11} &\dashrightarrow \mathbf C^{2n} \nonumber \\ A &\mapsto \left( \displaystyle\frac{A[1, :] \bar{q}_1}{ A[3, :] \bar{q}_1}, \displaystyle\frac{A[2, :] \bar{q}_1}{ A[3, :] \bar{q}_1}, \ldots , \displaystyle\frac{A[1, :] \bar{q}_n}{ A[3, :] \bar{q}_n}, \displaystyle\frac{A[2, :] \bar{q}_n}{ A[3, :] \bar{q}_n} \right). \label{eq:psi}\end{aligned}$$ In the resectioning problem, we are given world points $\bar{\mathbf{q}} = (\bar{q}_1, \ldots , \bar{q}_n)\in (\mathbf P^3)^n$ and pixel values of $n$ corresponding image points, $(\tilde{u}_i, \tilde{v}_i)$ for $i=1, \ldots , n.$ We denote the vector of image measurement data by $\tilde{d}_{uv} = (\tilde{u}_1, \ldots , \tilde{v}_n) \in \mathbf C^{2n}$. In practice, $\tilde{d}_{uv}$ and $\bar{\mathbf{q}}$ are both defined over the real numbers. Our task is to recover a camera $A$ such that $$\label{eq:exact-resection} \psi_{\bar{\mathbf{q}}, n} (A) = \tilde{d}_{uv}.$$ In an idealized setting, the pinhole model is exact and there is no measurement noise. Hence, we can recover $A$ by computing the kernel of the $n$-focal matrix, and we expect a unique solution as soon as $n\ge 6.$ This is the basis of the so-called "5.5-point\" minimal solver. In practice, the pinhole model is *not* exact and there *is* measurement noise. Thus, for $n\ge 6,$ we should expect $\tilde{d}_{uv} \notin X_{\bar{\mathbf{q}}, n}$, meaning that no solution to [\[eq:exact-resection\]](#eq:exact-resection){reference-type="eqref" reference="eq:exact-resection"} can exist. However, we can still consider the following optimization problem: $$\label{eq:opt-resection} L_{\tilde{d}_{uv}}(u_1, v_1, \ldots , u_n, v_n) = \displaystyle\sum_{i=1}^n (u_i - \tilde{u}_i)^2 + (v_i - \tilde{v}_i)^2 \quad \text{s.t.} \quad (u_1, \ldots , v_n) \in X_{\bar{\mathbf{q}}, n}.$$ This is essentially the formulation of the optimal resectioning problem that is used in Hartley and Zisserman's classic text [@HZ04]\[§7.2\]. 
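Before analyzing the algebraic complexity of this problem, we remark that [\[eq:opt-resection\]](#eq:opt-resection){reference-type="eqref" reference="eq:opt-resection"} can be attacked locally by pulling the objective back through the parametrization [\[eq:psi\]](#eq:psi){reference-type="eqref" reference="eq:psi"} and running a nonlinear least-squares solver on the entries of the camera. The following minimal Python sketch, with synthetic data and helper names of our own choosing, illustrates this local approach; it returns a critical point of the squared-error objective $L_{\tilde{d}_{uv}}$, not necessarily its global minimizer.

```python
import numpy as np
from scipy.optimize import least_squares

def psi(A, Q):
    # psi_{q,n} from eq. (psi): pinhole projections (u_i, v_i) of the rows of Q under camera A
    P = Q @ A.T                        # n x 3, homogeneous image coordinates
    return (P[:, :2] / P[:, 2:3]).ravel()

# synthetic data: n = 6 homogenized world points, a reference camera, and noisy pixel values
rng = np.random.default_rng(0)
Q = np.hstack([rng.standard_normal((6, 3)), np.ones((6, 1))])
A_true = rng.standard_normal((3, 4))
d_tilde = psi(A_true, Q) + 1e-3 * rng.standard_normal(12)

# local minimization of the squared error over cameras A, i.e. over the parametrization
# of X_{q,6} rather than over its implicit focal equations
res = least_squares(lambda a: psi(a.reshape(3, 4), Q) - d_tilde,
                    x0=(A_true + 0.01 * rng.standard_normal((3, 4))).ravel())
A_hat = res.x.reshape(3, 4)
print(2 * res.cost)                    # value of the squared-error objective at the local solution
```

Here scipy's default trust-region solver is used; a Levenberg-Marquardt variant (`method="lm"`) would serve the same purpose.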
The only minor difference, implicit in Hartley and Zisserman's formulation, is that our feasible set $X_{\bar{\mathbf{q}}, n}$ differs from theirs by a set of measure zero picked up through Zariski closures. Similar formulations, which make the camera matrix explicit, appear in other sources, e.g., in work of Cifuentes [@cifuentes2021convex Example 6.5], who studied sums-of-squares relaxations of this problem. Hartley and Zisserman refer to the squared Euclidean loss function $L_{\tilde{d}_{uv}}$ as the *geometric error*, and suggest using local methods like Levenberg-Marquardt to optimize it. Here, we address the complexity of computing the *global minimum* of [\[eq:opt-resection\]](#eq:opt-resection){reference-type="eqref" reference="eq:opt-resection"}.

We recall the notion of the *Euclidean distance degree* of an affine variety [@draisma2016euclidean §2]. For $X_{\bar{\mathbf{q}}, n},$ we denote this quantity by $\mathop{\mathrm{ED}}(X_{\bar{\mathbf{q}}, n})$. Given a generic data point $\tilde{d}_{uv}$, this is the number of critical points of the squared Euclidean loss $L_{\tilde{d}_{uv}}$ restricted to the smooth locus of $X_{\bar{\mathbf{q}}, n}$.

**Conjecture 23**. For all $n\ge 6$ and generic $\bar{\mathbf{q}}\in (\mathbf P^3)^n,$ we have $$\label{eq:ED-formula-conjecture} \mathop{\mathrm{ED}}(X_{\bar{\mathbf{q}}, n}) = (80/3) n^3 - 368 n^2 + (5068/3) n - 2580.$$

We return to , to verify the simplest case of this conjecture.

**Example 24**. Consider the resectioning hypersurface $H(u_1, \ldots , v_6) =0$, i.e. [\[eq:det-hypersurface\]](#eq:det-hypersurface){reference-type="eqref" reference="eq:det-hypersurface"} in the chart $$\label{eq:ex-chart} p_1[3] = p_2 [3] = p_3[3] = p_4[3] = p_5[3] = p_6[3] = 1.$$ The affine variety $X_{\bar{\mathbf{q}}, 6} \subset \mathbf C^{12}$ is in fact the cone over a projective variety in $\mathbf P^{11}.$ This can be seen from the determinantal representation of $H$ in [\[eq:12-det\]](#eq:12-det){reference-type="eqref" reference="eq:12-det"}. It follows that $X_{\bar{\mathbf{q}}, n}$ is singular. More precisely, the singular locus of $X_{\bar{\mathbf{q}}, n}$ has dimension $9.$ Working over the finite field $\mathbb{F} = \mathbb{Z}_{32003}$, we may verify  with symbolic computation using the computer algebra system Macaulay2 [@M2]. To do so, we draw an $\mathbb{F}$-valued point configuration $\bar{\mathbf{q}}\in \left( \mathbf P^3 \right)^6$ and a data vector $\tilde{d}_{uv} \in \mathbb{F}^{12}$ uniformly at random. The critical points of [\[eq:opt-resection\]](#eq:opt-resection){reference-type="eqref" reference="eq:opt-resection"} correspond to points $(u_1, \ldots, v_6) \in X_{\bar{\mathbf{q}}, 6}$ such that $$\label{eq:rank-deficient-ED} \mathop{\mathrm{rank}}\begin{bmatrix} u_1 - \tilde{u}_1 & \cdots & v_6 - \tilde{v}_6 \\ \frac{\partial H}{\partial u_1} & \cdots & \frac{\partial H}{\partial v_6} \end{bmatrix} \le 1.$$ To remove the singular points on $X_{\bar{\mathbf{q}}, 6}$ which cause rank-deficiency in [\[eq:rank-deficient-ED\]](#eq:rank-deficient-ED){reference-type="eqref" reference="eq:rank-deficient-ED"}, it is sufficient to take the ideal generated by the $2\times 2$ minors of this matrix and $H(u_1, \ldots , v_6)$ and compute its ideal quotient with respect to the ideal $\langle \frac{\partial H}{\partial u_1}, \frac{\partial H}{\partial v_1} \rangle$.
The result of this operation is a zero-dimensional ideal of degree $68.$ Moreover, we may compute that the vanishing locus of this ideal consists of $68$ distinct, nonsingular points on $X_{\bar{\mathbf{q}}, 6}.$ The number $68$ may be seen as quantifying the intrinsic algebraic difficulty of solving the constrained optimization problem [\[eq:opt-resection\]](#eq:opt-resection){reference-type="eqref" reference="eq:opt-resection"}. This is further reinforced by heuristically computing the Galois/monodromy group of this problem, as in [@galvis1], which reveals the full symmetric group $S_{68}.$ Our conjectural formula [\[eq:ED-formula-conjecture\]](#eq:ED-formula-conjecture){reference-type="eqref" reference="eq:ED-formula-conjecture"} is reminiscent of recent results characterizing the Euclidean distance degree of the *affine multiview variety* $X_{\bar{\mathbf{A}}, m}$. This can be defined by taking analagous affine charts on the multiview variety $\Gamma_{\bar{\mathbf{A}} , \mathbf{p}}^{m,1}$. Using a topological formula for the ED-degree of a smooth variety, Maxim, Rodriguez, and Wang [@MRW20] proved $$\label{eq:ED-formula-multiview} \mathop{\mathrm{ED}}(X_{\bar{\mathbf{A}}, m}) = (9/2) m^3 - (21/2) m^2 + 8 m - 4.$$ We compare this formula with ours in . We confirmed the entries of this table using numerical monodromy heuristics [@monodromy1], using both the implementations provided in Macaulay2 [@M2] and Julia [@HCJL]. For these computations, it is advantageous to use the rational parametrization [\[eq:psi\]](#eq:psi){reference-type="eqref" reference="eq:psi"} instead of the implicit focal constraints in . A surprising aspect of  is that $\mathop{\mathrm{ED}}(X_{\bar{\mathbf{q}}, n})$ is a polynomial of degree 3 in $n.$ On the other hand, if we were to apply the methods of [@MRW20] to computing the affine ED-degree of the variety $\Gamma_{\bar{\mathbf{B}}, \mathbf{q}}^{n,1}$ associated to a *generic* hypercamera arrangement $\bar{\mathbf{B}} \in \left( \mathbf P^{11} \right)$, this would give instead a polynomial of degree $11.$ This highlights some special properties of the hypercamera arrangement $\bar{\mathbf{Q}}$, and provides contrast with the results of previous sections. One explanation for this contrast is the fact that the projective coordinate changes used in  do not preserve the Euclidean distance. For similar reasons, the affine ED degree of the reduced resectioning variety, which is the same as $\mathop{\mathrm{ED}}(X_{\bar{\mathbf{A}}, m})$, appears to be unrelated to that of the general resectioning variety. $m$ / $n$ $\mathop{\mathrm{ED}}(X_{\bar{\mathbf{A}}, m})$ $\mathop{\mathrm{ED}}(X_{\bar{\mathbf{q}}, n})$ ----------- ------------------------------------------------- ------------------------------------------------- 2 6 --- 3 47 --- 4 148 --- 5 336 --- 6 638 68 7 1081 360 8 1692 1036 9 2498 2256 10 3526 4180 11 4803 6968 12 6356 10780 13 8212 15776 14 10398 22116 15 12941 29960 : Euclidean distance degrees for optimal triangulation from $m$ generic cameras (middle column) and optimal resectioning from $n$ 3D points (right.) We close this section by noting one immediate obstacle to proving . 
As already seen in , the variety $X_{\bar{\mathbf{q}}, n}$ for generic data $\bar{\mathbf{q}}$ is *not smooth* for any $n \ge 6.$ This contrasts with the case of $X_{\bar{\mathbf{A}}, m}$, which is smooth for a sufficiently generic arrangement of $m\ge 3$ cameras $\bar{\mathbf{A}}.$ Thus, to prove [\[eq:ED-formula-conjecture\]](#eq:ED-formula-conjecture){reference-type="eqref" reference="eq:ED-formula-conjecture"} with similar techniques, the basic Euler characteristic formulas valid in the smooth case would need to be replaced by their singular counterparts involving Euler obstruction functions, e.g., [@eulerobs Theorem 1.3].

# Conclusion {#sec:conclusion}

In summary, our work takes several first steps in studying the resectioning problem for general projective cameras from the algebro-geometric perspective, with a focus on Gröbner bases, Carlsson-Weinshall duality, and Euclidean distance optimization. Our discoveries provide many parallels with the already well-studied multiview ideals associated with the triangulation problem. Still, many open questions remain. In this paper, we considered resectioning in the setting of general projective cameras. Returning to the classical P3P problem [@Grunert-1841], it would be worthwhile to carry out a parallel study in the setting of *Euclidean cameras*, as proposed in [@agarwal2022atlas §8.3, Q1]. In view of  parts (2)--(3), it is natural to ask: are all $k$-focals for $6\le k \le 12$ needed to generate $I_m (\bar{\mathbf{q}})$ under the noncoplanarity assumption of ? What can we say about $I_m (\bar{\mathbf{q}})$ if this noncoplanarity assumption is relaxed? Using the reduced atlas developed in  to answer more of the open questions in [@agarwal2022atlas §8] is yet another interesting avenue to pursue. Our focus on resectioning for linear maps $\mathbb{P}^3 \dashrightarrow \mathbb{P}^2$ was motivated by computer vision. However, it would make just as much sense to study resectioning varieties in the context of general projections $\mathbb{P}^N \dashrightarrow \mathbb{P}^M$ [@Li-IMRN], or even matrix multiplication maps as in [@agarwal2022atlas §8.3, Q3]. Finally, we offer  as a challenge in Euclidean distance degree computation.

# Acknowledgements {#acknowledgements .unnumbered}

The authors thank Sameer Agarwal, Max Lieblich, and Rekha Thomas for many helpful conversations and suggestions. TD also thanks Laurentiu Maxim, Jose Rodriguez, and Felix Rydell for helpful discussions related to , and acknowledges support from an NSF Mathematical Sciences Postdoctoral Research Fellowship (DMS-2103310).

# Miscellaneous Proofs {#appendix:proofs}

First, we prove , justifying the coordinate change $\mathbf{H}$ used to prove .

*Proof of .* [\[proof-make-minor-generic\]]{#proof-make-minor-generic label="proof-make-minor-generic"} Consider some maximal minor of the matrix [\[eq:stacked-transformed-matrix\]](#eq:stacked-transformed-matrix){reference-type="eqref" reference="eq:stacked-transformed-matrix"}. We fix the set of indices $\{ \sigma_1, \ldots , \sigma_k \},$ where $1\le \sigma_1 < \sigma_2 < \cdots < \sigma_k \le s$, such that at least one row is taken from the submatrix $H_{\sigma_j} A_{\sigma_j}$ when forming this minor, and let $1 \le i_{j 1}, \ldots , i_{j r_j} \le N$ index the rows that are taken from this submatrix.
We compute this minor using the multilinearity of the determinant: $$\begin{aligned} \det \left( \begin{array}{c} H_{\sigma_1} A_{\sigma_1} [i_{1 1}, :] \\ \hline \vdots \\ \hline H_{\sigma_k} A_{\sigma_k} [i_{k r_k}, :] \end{array} \right) &= \det \left( \begin{array}{c} \displaystyle\sum_{l=1}^M H_{\sigma_1} [i_{1 1}, \ell ] A_{\sigma_1} [ \ell , : ] \\ \hline \vdots \\ \hline \displaystyle\sum_{l=1}^M H_{\sigma_k} [i_{k r_k}, \ell ] A_{\sigma_k} [ \ell , : ] \end{array} \right) \\ &= \displaystyle\sum_{1 \le \ell_1, \ldots , \ell_M \le M} \det \left( \begin{array}{c} A_{\sigma_1} [\ell_{1}, :] \\ \hline \vdots \\ \hline A_{\sigma_k} [\ell_{M}, :] \end{array} \right) \cdot \left(H_{\sigma_1} [i_{1 1}, \ell_{1}] \cdots H_{\sigma_k} [i_{k r_k}, \ell_{M}] \right).\end{aligned}$$ We think of this minor as a polynomial in the entries of $(H_1, \ldots , H_s).$ Our assumption of rowspan-uniformity implies that $A_{\sigma_1} [\ell_1, :], \ldots , A_{\sigma_k} [\ell_M, :]$ form a basis of $\mathbf F^N$ for some choice of indices in the sum above, and hence one of the coefficients of this polynomial is nonzero. Thus, the equation $$\label{eq:transformed-minor-zero} \det \left( \begin{array}{c} H_{\sigma_1} A_{\sigma_1} [i_{1 1}, :] \\ \hline \vdots \\ \hline H_{\sigma_k} A_{\sigma_k} [i_{k r_k}, :] \end{array} \right) = 0$$ defines a hypersurface in the affine space of all $k$-tuples of $M\times M$ matrices $(H_1, \ldots , H_s).$ Taking the union over all such hypersurfaces and those defined by $\det H_i=0$ gives us a proper Zariski-closed set $Z$ in this affine space. The complement of $Z$ is an open set which satisfies the desired conclusion. ◻ Next, to prove , we recall [@agarwal2022atlas Definition 3.6]. **Definition 25**. Consider a polynomial $f\in \mathbf C[\mathbf{B}, \mathbf{p}_{\sigma_1}, \ldots , \mathbf{p}_{\sigma_k}]$ which is homogeneous of degree $1$ in each group of variables $\mathbf{p}_{\sigma_i} = \{ p_{\sigma_i} [1], p_{\sigma_i} [2], p_{\sigma_i} [3] \} .$ We say $f$ is *well-supported* with respect to $\mathbf{p}_{\sigma_1}, \ldots , \mathbf{p}_{\sigma_k}$ if, for every choice of variables $p_{\sigma_1} [i_1]\in \mathbf{p}_{\sigma_1} , \ldots , p_{\sigma_k} [i_k]\in \mathbf{p}_{\sigma_k}$ such that each $p_i [i_j]$ appears in some term of $f$, the monomial $\displaystyle\prod_{j=1}^k p_{\sigma_j} [i_j]$ also appears in $f$ (with nonzero coefficient in $\mathbf C[\mathbf{B}]$.) More intuitively, $f$ is well-supported if its monomial support in $\mathbf{p}$ is as large as possible given its variable support, or if it has a dense coefficient tensor in $\mathbf C[ \mathbf{B}].$ For a product order with $\mathbf{B}< \mathbf{p}$, well-supportedness implies that the leading terms of $k$-focals depend only on the relative orderings of variables within each of the groups $\mathbf{p}_1, \ldots , \mathbf{p}_n$ and the ordering on $\mathbf{B}$ (cf. [@agarwal2022atlas Lemma 3.8].) When $6\le k\le m,$ an argument using Laplace expansion can be used to show that the $k$-focal $\det (\mathbf{B}\, | \, \mathbf{p}) [\mathbf{r} ]_\sigma$ is well-supported with respect to $\mathbf{p}_{\sigma_1}, \ldots , \mathbf{p}_{\sigma_k}.$ In contrast, the $k$-focal $\det (\mathbf{B}^\star \, | \, \mathbf{p}) [\mathbf{r} ]_\sigma$ is *not* well-supported with respect to $\mathbf{p}_{\sigma_1}, \ldots , \mathbf{p}_{\sigma_k},$ since the nonzero coefficients of $\mathbf{p}$-monomials must have the form [\[eq:det-prod\]](#eq:det-prod){reference-type="eqref" reference="eq:det-prod"}. 
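Well-supportedness in the sense of Definition 25 is easy to test mechanically on small examples. The following Python/sympy sketch, an illustration of our own whose toy polynomials are not taken from the text, checks the condition for polynomials that are multilinear in two groups of variables.

```python
import itertools
import sympy as sp

def is_well_supported(f, groups):
    # groups: tuples of variables; f is assumed homogeneous of degree 1 in each group
    gens = [v for g in groups for v in g]
    poly = sp.Poly(sp.expand(f), *gens)
    monoms = set(poly.monoms())
    # variables of each group that actually occur in some term of f
    occurring = [[v for v in g if poly.degree(v) > 0] for g in groups]
    for choice in itertools.product(*occurring):
        exponent = tuple(1 if v in choice else 0 for v in gens)
        if exponent not in monoms:       # the product of the chosen variables must appear in f
            return False
    return True

x1, x2, y1, y2 = sp.symbols("x1 x2 y1 y2")
groups = [(x1, x2), (y1, y2)]
print(is_well_supported(x1*y1 + 2*x1*y2, groups))        # True: only x1 occurs from the first group
print(is_well_supported(x1*y1 + x1*y2 + x2*y1, groups))  # False: the monomial x2*y2 is missing
```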
*Proof of .* For part (1), we construct an ascending chain of ideals $$J_0 \subset J_1 \subset \cdots \subset J_n,$$ where $J_k = \langle G_k \rangle$, and $G_k$ is an inductively defined Gröbner basis with respect to the appropriate class of product orders. We take $G_0$ to be the set of $m$-focals, so that $J_0$ is a sparse determinantal ideal. As previously noted, [@boocher Proposition 5.4] implies that $G_0$ is a *universal* Gröbner basis. Having defined $G_k$ for some $k\ge 0,$ we define $G_{k+1}$ to be the set consisting of all polynomials $g$ such that either $g\in G_k$ and is not divisible by any entry of $p_{k+1}$, or such that $g\notin G_k$ with $p_{k+1}[l] \cdot g \in G_k$ for some $l\in [3].$ implies that each $G_k$ may be obtained from $G_0$ by dividing out any entry of the matrices $p_1, \ldots , p_k$ from any $m$-focal containing it as a factor. When $k=n,$ we obtain $G=G_n$ as the set of all $k$-focals for $6\le k \le 12.$ We claim that each $G_k$ is a Gröbner basis for the appropriate class of product orders, and moreover that $J_k$ can be expressed in terms of ideal quotients as $$\label{eq:Jk-sat} J_k = J_{k-1} : \langle p_k [1] \rangle = J_{k-1} : \langle p_k [2] \rangle = J_{k-1} : \langle p_k [3] \rangle .$$ The elements of each $G_k$ are well-supported, and hence by [@agarwal2022atlas Corollary 3.9] the Gröbner basis property for product orders is preserved.

Having established part (1), we can prove part (2) via an argument used in the proof of  [@boocher Proposition 5.4]. If $<$ is any product order with $\mathbf{B}^\star<\mathbf{p}$ then we can extend this to a product order with $\mathbf{B}<\mathbf{p}$ where the entries of $\mathbf{B}$ which are zero in $\mathbf{B^\star}$ are weighted last. Let $f\in{I}_{6..12} ( \mathbf{B}^\star)$ be nonzero, so that $f=\sum c_{\sigma , \mathbf{r}} \det (\mathbf{B}^\star \, | \, \mathbf{p}) [\mathbf{r}]_\sigma$ for some coefficients $c_{\sigma , \mathbf{r}} \in \mathbf C[\mathbf{B}^\star , \mathbf{p}] \subset \mathbf C[\mathbf{B}, \mathbf{p}].$ Consider the lifted polynomial $$\bar{f}=\sum c_{\sigma , \mathbf{r}}(\mathbf{B}, \mathbf{p}) \, \det (\mathbf{B}\, | \, \mathbf{p}) [\mathbf{r}]_\sigma \in {I}_{6..12} (\mathbf{B}).$$ Our chosen weighting implies that $\text{in}(f)=\text{in}(\bar f)$. Part (1) implies $\text{in}(\bar f)$ is divisible by the leading monomial $\bar m_{\sigma , \mathbf{r}}=\text{in}(\det (\mathbf{B}\, | \, \mathbf{p}) [\mathbf{r}]_\sigma)$ corresponding to some summand above. It follows that $\text{in}(f)$ is divisible by $m_{\sigma , \mathbf{r}} =\text{in}(\det (\mathbf{B}^\star \, | \, \mathbf{p}) [\mathbf{r}]_\sigma)$. This gives part (2). ◻

*Proof of .* Having shown the set-theoretic statement in , it is enough to show that the focal ideal is radical and saturated with respect to the irrelevant ideal of $\left( \mathbf P^2 \right)^n.$ Radicality follows from , since the initial ideal is squarefree. For saturatedness, we note that, in the notation of the previous proof, the focal ideal $I (\bar{\mathbf{B}})$ is the specialization of the ideal $J_m.$ Using [\[eq:Jk-sat\]](#eq:Jk-sat){reference-type="eqref" reference="eq:Jk-sat"} and the fact that the specialization $\mathbf{B}\to \bar{\mathbf{B}}$ preserves the Gröbner basis property, it follows that $I (\bar{\mathbf{B}}) : \langle p_k [i] \rangle = I (\bar{\mathbf{B}})$ for all $k$ and $i.$ This in turn implies saturatedness with respect to the irrelevant ideal. ◻ [Department of Mathematics, University of Washington]{.smallcaps} *Email address*: E.
Connelly `erin96@uw.edu` *Email address*: T. Duff `timduff@uw.edu` *Email address*: J. Loucks-Tavitas `jaloucks@uw.edu`
--- author: - | Chaowen Zhang\ Department of Mathematics,\ China University of Mining and Technology,\ Xuzhou, 221116, Jiang Su, P. R. China title: Representations of the restricted Lie superalgebra $p(n)$ --- Mathematics Subject Classification 2010: 17B10; 17B45; 17B50 # Introduction Let $F$ be an algebraically closed field of characteristic $p>2$, and let ${\mathfrak g}$ be the restricted Lie superalgebra $p(n)$ over $F$. In this paper, we first study the center of a quotient of the restricted enveloping superalgebra of ${\mathfrak g}$, then apply the obtained results to investigate the restricted representations of ${\mathfrak g}$. For $n\geq 2$, the Lie superalgebra ${\mathfrak g}$ over $F$ consists of block matrices of the form $$\begin{pmatrix} a& b\\ c&-a^t \end{pmatrix}$$ where $a$ is an arbitrary $n\times n$ matrix, $a^t$ denotes the transpose of $a$, $b$ is a symmetric $n\times n$ matrix and $c$ is a skew-symmetric $n\times n$ matrix ([@kk; @kv; @rsyz; @sv]). It is clear that ${\mathfrak g}_{\bar 0}\cong \mathfrak{gl}(n)$ and hence ${\mathfrak g}_{\bar{0}}'\cong \mathfrak{sl}(n)$ ([@kv; @sv]). The Lie superalgebra ${\mathfrak g}$ has the $\mathbb{Z}$-gradings ${\mathfrak g}={\mathfrak g}_{-1}\oplus {\mathfrak g}_{\bar 0}\oplus {\mathfrak g}_1$, where ${\mathfrak g}_{-1}$ (resp. ${\mathfrak g}_1$) consists of above matrices with $a=0$ and $b=0$ (resp. $a=0$ and $c=0$). According to [@kk; @sv], the universal enveloping superalgera of the Lie superalgebra $p(n)$ over $\mathbb C$ has a nontrivial radical. Applying a similar idea, we define a nontrivial nilpotent ideal $I$ for the restricted enveloping superalgebra $u({\mathfrak g})$. Then the study of simple $u({\mathfrak g})$-modules is reduced to that of the quotient superalgebra $\bar{\mathfrak u}=u({\mathfrak g})/I$. The paper is organized as follows. Section 2 gives the preliminaries. In Section 3 we define the quotient superalgebra $\bar{\mathfrak u}$ and study its center. Using the results from this section we study restricted representations of ${\mathfrak g}$ in Section 4. # Preliminaries A Lie superalgebra $L=L_{\bar 0}\oplus L_{\bar 1}$ is called restricted if $L_{\bar 0}$ is a restricted Lie algebra and if $L_{\bar{1}}$ is a restricted $L_{\bar 0}$-module under the adjoint action. Define the p-mapping $[p]$ on ${\mathfrak g}_{\bar{0}}$ to be the $p$-th power for each matrix in ${\mathfrak g}_{\bar{0}}$. Then ${\mathfrak g}_{\bar{0}}$ becomes a restricted Lie algebra and ${\mathfrak g}$ is a restricted Lie superalgebra. Let $L=L_{\bar{0}}\oplus L_{\bar{1}}$ be a restricted Lie superalgebra, let $U(L)$ be its universal enveloping algebra and let $M=M_{\bar{0}}\oplus M_{\bar 1}$ be a $L$-module. We say that $M$ admits a $p$-character $\chi\in L_{\bar{0}}^*$ if $(x^p-x^{[p]}-\chi^p(x)\cdot 1)M=0$ for all $x\in L_{\bar 0}$, where $x^p$ denotes the $p$-power of $x$ in $U(L)$. According to [@zc1 Lemma 2.1], each simple $L$-module admits a unique $p$-character. Let $\chi\in{\mathfrak g}_{\bar{0}}^*$ and let $I_{\chi}$ be the (super) ideal of $U({\mathfrak g})$ generated by elements $x^p-x^{[p]}-\chi(x)^p$, $x\in{\mathfrak g}_{\bar{0}}$. The superalgebra $u_{\chi}({\mathfrak g})=U({\mathfrak g})/I_{\chi}$ is called the $\chi$-reduced enveloping algebra of ${\mathfrak g}$. If $\chi=0$, the superalgebra $u_{\chi}({\mathfrak g})$, referred to as the reduced enveloping algebra of $\mathfrak g$, is simply denoted $u({\mathfrak g})$. 
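Before proceeding, we note that the block description of ${\mathfrak g}=p(n)$ recalled in the Introduction is easy to verify numerically. The following short Python sketch, an illustration of our own carried out over $\mathbb R$ rather than over $F$, checks for random homogeneous elements that the commutator of an even element with an odd element, and the anticommutator of two odd elements (the superbracket of homogeneous elements being the commutator or the anticommutator according to parity), again satisfy the defining conditions $b=b^t$ and $c=-c^t$.

```python
import numpy as np

n = 3
rng = np.random.default_rng(1)

def even_elt(a):
    # element of g_0bar: block matrix diag(a, -a^T)
    z = np.zeros((n, n))
    return np.block([[a, z], [z, -a.T]])

def odd_elt(b, c):
    # odd element: off-diagonal blocks, b symmetric (g_1) and c skew-symmetric (g_{-1})
    z = np.zeros((n, n))
    return np.block([[z, b], [c, z]])

def in_p(x, tol=1e-10):
    # check the defining block conditions of p(n)
    a, b = x[:n, :n], x[:n, n:]
    c, d = x[n:, :n], x[n:, n:]
    return (np.allclose(d, -a.T, atol=tol) and
            np.allclose(b, b.T, atol=tol) and
            np.allclose(c, -c.T, atol=tol))

a = rng.standard_normal((n, n))
b = rng.standard_normal((n, n)); b = b + b.T      # symmetric
c = rng.standard_normal((n, n)); c = c - c.T      # skew-symmetric
X, Y = even_elt(a), odd_elt(b, c)

print(in_p(X @ Y - Y @ X))   # [even, odd] stays in p(n)
print(in_p(Y @ Y + Y @ Y))   # {odd, odd} lands in the even part, hence in p(n)
```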
The $\chi$-reduced enveloping algebra $u_{\chi}({\mathfrak g}_{\bar{0}})$ of the Lie algebra ${\mathfrak g}_{\bar{0}}$ is defined similarly (see [@sf 5.3]). In particular, $u_{\chi}({\mathfrak g}_{\bar{0}})$ may be viewed as a subalgebra of $u_{\chi}({\mathfrak g})$.

Let $\mathfrak h$ be the maximal torus of ${\mathfrak g}_{\bar{0}}$ consisting of diagonal matrices. Then $\mathfrak h$ has a basis $$h_1:=e_{11}-e_{n+1,n+1}, \dots, h_n:=e_{nn}-e_{2n,2n}.$$ Let $\epsilon_1,\dots, \epsilon_n\in \mathfrak h^*$ be the basis defined by $\epsilon_i(h_j)=\delta_{ij}$. Take the natural basis of ${\mathfrak g}$ as follows. For $1\leq i<j\leq n$, let $y_{ij}:=e_{i+n, j}-e_{j+n,i}\in {\mathfrak g}_{-1}$; for $1\leq i\leq j\leq n$, let $z_{ij}=e_{i,j+n}+e_{i+n,j}\in {\mathfrak g}_1$; for $1\leq i,j\leq n$, let $\tilde e_{ij}=e_{ij}-e_{j+n,i+n}\in {\mathfrak g}_{\bar{0}}$. These are root vectors of ${\mathfrak g}$ and the roots are of the following form. $$\begin{aligned} \text{Roots\ of}\ & {\mathfrak g}_{\bar{0}}:&& \epsilon_i-\epsilon_j,&&& 1\leq i\neq j\leq n;\\ \text{Roots\ of}\ & {\mathfrak g}_{-1}:&& -\epsilon_i-\epsilon_j,&&& 1\leq i<j\leq n;\\ \text{Roots\ of}\ & {\mathfrak g}_1:&& \epsilon_i+\epsilon_j,&&& 1\leq i\leq j\leq n. \end{aligned}$$ Note that $\text{dim}{\mathfrak g}_{-1}=d:={1\over 2}n(n-1)$ and $\text{dim}{\mathfrak g}_1={1\over 2}n(n+1)$.

Denote by $G$ the linear algebraic group $GL_n(F)$. Let $$\rho:\ GL_n(F)\longrightarrow GL_{2n}(F)$$ be defined by $\rho (a)=\text{diag} (a, a^{-1,t})$. Then $\rho$ is a monomorphism of algebraic groups ([@hu 7.4]). Under the action $$Ad\rho(a) x=\rho(a)x\rho (a)^{-1},\ a\in G, x\in {\mathfrak g},$$ the Lie superalgebra ${\mathfrak g}$ becomes a (rational) $G$-module (see [@hu Corollary 10.3]). Then $U({\mathfrak g})$ is also a $G$-module. Let $g\in G$ and let $$x=\begin{pmatrix}0&b\\ 0& 0 \end{pmatrix}\in {\mathfrak g}_1.$$ Then we get $$Ad\rho(g) x=\begin{pmatrix}0&gbg^t\\ 0& 0 \end{pmatrix}\in {\mathfrak g}_1.$$ Hence ${\mathfrak g}_1$ is a $G$-submodule of ${\mathfrak g}$ by [@jj1 2.9(2), Part I]. Similarly ${\mathfrak g}_{-1}$ is also a $G$-submodule of ${\mathfrak g}$.

Let ${\mathfrak g}^+={\mathfrak g}_{\bar{0}}\oplus {\mathfrak g}_1$. According to [@zc1 Lemma 2.4], each simple $U({\mathfrak g}^+)$-module is annihilated by ${\mathfrak g}_1$. Let $\chi\in {\mathfrak g}^*_{\bar{0}}$, and let $M$ be a simple $u_{\chi}({\mathfrak g}_{\bar{0}})$-module viewed as a $u_{\chi}({\mathfrak g}^+)$-module by letting ${\mathfrak g}_1\cdot M=0$. Define the Kac module $$K_{\chi}(M)=u_{\chi}({\mathfrak g})\otimes _{u_{\chi}({\mathfrak g}^+)}M.$$ Since $M$ is finite-dimensional ([@sf Theorem 5.24]), so is $K_{\chi}(M)$. If $\chi=0$, then we denote $K_{\chi}(M)$ simply by $K(M)$. We have the ${\mathfrak g}_{-1}$-module isomorphism $K_{\chi}(M)\cong \wedge ({\mathfrak g}_{-1})\otimes_F M$, where $\wedge ({\mathfrak g}_{-1})$ is the exterior algebra of ${\mathfrak g}_{-1}$. Let $v_1,\dots, v_d$ be a basis of ${\mathfrak g}_{-1}$. Set $$\wedge^k({\mathfrak g}_{-1})=\langle v_{i_1}\cdots v_{i_k}\in \wedge ({\mathfrak g}_{-1})|1\leq i_1< \cdots< i_k\leq d\rangle.$$ Denote $\wedge_{\bar{0}}({\mathfrak g}_{-1})=\sum_{k\ \text{even}}\wedge^k({\mathfrak g}_{-1})$ and $\wedge_{\bar{1}}({\mathfrak g}_{-1})=\sum_{k\ \text{odd}}\wedge^k({\mathfrak g}_{-1})$. Then $\wedge ({\mathfrak g}_{-1})$ is $\mathbb Z_2$-graded with $\wedge({\mathfrak g}_{-1})=\wedge_{\bar{0}}({\mathfrak g}_{-1})\oplus \wedge_{\bar{1}}({\mathfrak g}_{-1})$.
It follows that $K_{\chi}(M)$ is $\mathbb Z_2$-graded with $$K_{\chi}(M)_{\bar i}=\wedge_{\bar i} ({\mathfrak g}_{-1})\otimes_{F} M, \ i=0,1.$$ **Lemma 1**. *Let $\chi\in {\mathfrak g}_{\bar{0}}^*$ and let $\mathfrak m=\mathfrak m_{\bar{0}}\oplus \mathfrak m_{\bar{1}}$ be a simple $u_{\chi}({\mathfrak g})$-module. Then there is an (even) epimorphism $K_{\chi}(M)\longrightarrow \mathfrak m$ with $M\subseteq \mathfrak m_{\bar{0}}$ a simple $u_{\chi}({\mathfrak g}_{\bar{0}})$-submodule.* *Proof.* Let $N\subseteq \mathfrak m_{\bar{0}}$ be a simple $u_{\chi}({\mathfrak g}_{\bar{0}})$-submodule. Then $\wedge^k ({\mathfrak g}_1) N\subseteq \mathfrak m_{\bar{0}}$ if $k$ is even, and $\wedge^k ({\mathfrak g}_1)N\subseteq \mathfrak m_{\bar{1}}$ if $k$ is odd. It is clear that each $\wedge^k ({\mathfrak g}_1) N$ is a $u_{\chi}({\mathfrak g}_{\bar{0}})$-submodule of $\mathfrak m$. Let $k$ be the largest such that $\wedge^k ({\mathfrak g}_1) N\neq 0$, and let $M\subseteq \wedge^k ({\mathfrak g}_1)N$ be a simple $u_{\chi}({\mathfrak g}_{\bar{0}})$-submodule. Then we have ${\mathfrak g}_1 M=0$. We may renumber the $\mathbb Z_2$-gradings of $\mathfrak m$ if necessary, so we always assume $M\subseteq \mathfrak m_{\bar{0}}$. Then since $\mathfrak m=\mathfrak m_{\bar{0}}\oplus \mathfrak m_{\bar{1}}$ is simple, we obtain an (even) epimorphism from $K_{\chi}(M)$ to $\mathfrak m$. ◻ Let $\mathfrak n_0^+$ (resp. $\mathfrak n_0^-$) be the Lie subalgebra of ${\mathfrak g}_{\bar{0}}$ spanned by positive root vectors $$\tilde e_{ij}=e_{ij}-e_{j+n,i+n}, \ i<j\ (\text{resp.} \ i>j).$$ **Definition 2**. *Let $\mathfrak m=\mathfrak m_{\bar{0}}\oplus \mathfrak m_{\bar{1}}$ be a ${\mathfrak g}$-module. A nonzero (homogeneous) vector $v\in \mathfrak m$ is *maximal* of weight $\mu=\mu_1\epsilon_1+\cdots +\mu_n\epsilon_n\in \mathfrak h^*$ if $(\mathfrak n_0^++{\mathfrak g}_1)\cdot v=0$ and if $h_i\cdot v=\mu_iv$, $i=1,\dots, n$.* If $\chi (\mathfrak n_0^+)=0$, then each simple $u_{\chi}({\mathfrak g})$-module has a maximal vector (see [@zc1 Section 2]). For $\mu=\sum^n_{i=1}\mu_i\epsilon_i\in \mathfrak h^*$, set $\delta_{\mu}=:\Pi_{i<j}(\mu_i-\mu_j+j-i-1)\in F$. In the following proposition, we assume $\chi(\mathfrak n_0^+)=0$ and let $M$ be a simple $u_{\chi}({\mathfrak g}_{\bar{0}})$-module generated by a maximal vector of weight $\mu\in\mathfrak h^*$. **Proposition 3**. *If $\delta_{\mu}\neq 0$, then $K_{\chi}(M)$ is simple.* *Proof.* For an arbitrarily fixed order of the basis vectors $y_{ij}$'s of ${\mathfrak g}_{-1}$, set $Y=\Pi_{i<j}y_{ij}\in \wedge^d ({\mathfrak g}_{-1})$. Let $\mathfrak m\subseteq K_{\chi}(M)$ be a simple $u_{\chi}({\mathfrak g})$-submodule. We claim that $Y\otimes M\subseteq \mathfrak m$. To see this, take a nonzero vector $$x=\sum_{i_1,\dots, i_k} y_{i_1,\dots, i_k}\otimes m_{i_1,\dots, i_k}\in \mathfrak m,$$ where each $y_{i_1,\dots, i_k}$ is a monomial in $\wedge^k({\mathfrak g}_{-1})$ and $m_{i_1,\dots, i_k}\in M$. By applying appropriate $y_{ij}$'s to $x$, we obtain $Y\otimes m\in \mathfrak m$ for some nonzero element $m\in M$. Since $FY$ is a $1$-dimensional $\text{ad}_{{\mathfrak g}_{\bar{0}}}$-module, and since $M$ is a simple $u_{\chi}({\mathfrak g}_{\bar{0}})$-module, we obtain $Y\otimes M\subseteq \mathfrak m$. Set $Z=\Pi_{i<j}z_{ij}$, where the $z_{ij}$'s are in an arbitrary fixed order. Let $v_{\mu}\in M$ be the maximal vector of the weight $\mu$. 
Applying an argument similar to that used in the proof of [@kv Lemma 3.1], we have $$ZY\otimes v_{\mu}=\pm \delta_\mu (1\otimes v_{\mu})\in \mathfrak m.$$ Since $\delta_{\mu}\neq 0$, we have $1\otimes v_{\mu}\in \mathfrak m$, and hence $\mathfrak m=K_{\chi}(M)$, implying that $K_{\chi}(M)$ is simple. ◻

# The quotient algebra $\bar {\mathfrak u}$

## A nilpotent (super) ideal of $u({\mathfrak g})$

Since ${\mathfrak g}$ is a restricted Lie superalgebra, the element $x^p-x^{[p]}$ is central in $U({\mathfrak g})$ for every $x\in {\mathfrak g}_{\bar{0}}$. By [@sf Theorem 5.1.2], these elements generate a polynomial algebra $\mathcal O$ inside $U({\mathfrak g}_{\bar{0}})\subseteq U({\mathfrak g})$. Let $x\in{\mathfrak g}_{\bar{0}}$. Since $x^{[p]}$ is the $p$-th power of the matrix $x$, we have $gx^{[p]}g^{-1}=(gxg^{-1})^{[p]}$ for every $g\in G$. Therefore, $\mathcal O$ is $G$-invariant. Then the (super) ideal $I_0$ defined in Section 2 is a $G$-submodule of $U({\mathfrak g})$. It follows that $u({\mathfrak g})$ is a $G$-module. In the following we assume $p>3$.

Let $x_1,\dots, x_t$ be a basis of ${\mathfrak g}_{\bar{0}}$, $y_1,\dots, y_d$ be a basis of ${\mathfrak g}_{-1}$ and $z_1, \dots, z_l$ be a basis of ${\mathfrak g}_1$. By [@bmpz Theorem 2.5] $u({\mathfrak g})$ has a basis of the form $$y^{\delta}x^sz^{\delta'}:=y_1^{\delta_1}\cdots y_d^{\delta_d}x_1^{s_1}\cdots x_t^{s_t}z_1^{\delta'_1}\cdots z_l^{\delta'_l},\ \delta_i,\delta'_i=0,1,\ 0\leq s_j\leq p-1.$$ It follows that $$u({\mathfrak g})\cong \wedge ({\mathfrak g}_{-1})\otimes u({\mathfrak g}_{\bar{0}})\otimes \wedge ({\mathfrak g}_1).$$ If $y=y_{i_1}\cdots y_{i_q}\in \wedge^q ({\mathfrak g}_{-1})$ or $z=z_{i_1}\cdots z_{i_q}\in\wedge^q ({\mathfrak g}_1)$, we call $q$ the length of $y$ or $z$, and write $l(y)=q$ or $l(z)=q$. The following lemma is proved in [@kk] for the Lie superalgebra $p(n)$ over $\mathbb C$. Applying almost verbatim the proof there, we obtain the following conclusion in $u({\mathfrak g})$.

**Lemma 4**. *(cf. [@kk Lemma 2.3(b)]) Let $y\in \wedge({\mathfrak g}_{-1})$ and $z\in \wedge({\mathfrak g}_1)$ be two monomials. Then $zy=\sum_{\alpha} u_{\alpha}y_{\alpha}z_{\alpha}$, where $u_{\alpha}\in u({\mathfrak g}_{\bar{0}})$, $y_{\alpha}\in \wedge ({\mathfrak g}_{-1})$ and $z_{\alpha}\in\wedge ({\mathfrak g}_1)$ such that $l(y_{\alpha})-l(z_{\alpha})=l(y)-l(z)$.*

Let $u({\mathfrak g})_k$ be spanned by the above basis elements of $u({\mathfrak g})$ with $\sum_j{\delta_j'}-\sum_i{\delta_i}=k$. Then we get $$u({\mathfrak g})=\oplus _{k=-d}^{l}u({\mathfrak g})_k.$$ By Lemma 3.1, we obtain $u({\mathfrak g})_iu({\mathfrak g})_j\subseteq u({\mathfrak g})_{i+j}$, so that $u({\mathfrak g})$ is a $\mathbb Z$-graded algebra. It is easy to see that each $u({\mathfrak g})_k$ is a $G$-submodule. Let $I$ denote the (super) ideal of $u({\mathfrak g})$ generated by $\wedge^k ({\mathfrak g}_1)$ with $k>d$. Then $I$ is spanned by basis elements $y^{\delta}x^sz^{\delta'}$ with $\sum \delta'_j\geq d+1$. It follows that $I$ is a $G$-submodule of $u({\mathfrak g})$ such that $$I\subseteq \sum_{k\geq 1}u({\mathfrak g})_k.$$ Applying Lemma 3.1 we have $I^s\subseteq \sum_{k\geq s}u({\mathfrak g})_k$ for any $s\geq 1$, implying that $I$ is nilpotent.

We say that $\mu\in\mathfrak h^*$ is compatible with $\chi\in{\mathfrak g}_{\bar{0}}^*$ if $\mu (h)^p-\mu (h)=\chi (h)^p$ for all $h\in \mathfrak h$. Set $\mathfrak b_0:=\mathfrak h+\mathfrak n_0^+\subseteq {\mathfrak g}_{\bar{0}}$.
Let $\chi\in{\mathfrak g}_{\bar{0}}^*$ be such that $\chi(\mathfrak n^+_0)=0$ and let $\mu$ be compatible with $\chi$. Let $Fv_{\mu}$ be the 1-dimensional $\mathfrak b_0 +{\mathfrak g}_1$-module of weight $\mu\in \mathfrak h^*$ and annihilated by $\mathfrak n_0^++{\mathfrak g}_1$. Define the baby Verma (super) module $$Z_{\chi}(\mu)=u_{\chi}({\mathfrak g})\otimes _{u_{\chi}(\mathfrak b_0 +{\mathfrak g}_1)} Fv_{\mu}.$$ If $\chi=0$, we denote it simply by $Z(\mu)$. The following lemma follows immediately from Lemma 3.1. **Lemma 5**. *(1) Let $K_{\chi}(M)$ be a Kac module. Then $I K_{\chi}(M)=0$.* *(2) Let $Z_{\chi}(\mu)$ be a baby Verma module. Then $I Z_{\chi}(\mu)=0$.* ## The definition of $\bar {\mathfrak u}$ Define the quotient superalgebra $\bar{\mathfrak u}=u({\mathfrak g})/I$. From Lemma 3.2 we see that each Kac module $K(M)$, each baby Verma module $Z(\mu)$ or each simple module of $u({\mathfrak g})$ may be viewed as a $\bar{\mathfrak u}$-module. From above discussions we have $\bar{\mathfrak u}=\oplus_{k=-d}^d \bar{\mathfrak u}_k$, where each $\bar{\mathfrak u}_k$ has a basis $$y^{\delta}x^s z^{\delta'},\ \sum_j \delta'_j-\sum_i\delta_i=k.$$ Set $$\mathfrak a=\langle y^{\delta}x^s z^{\delta'}|\ \sum_j \delta'_j=\sum_i\delta_i>0\rangle.$$ Then we have $\bar{\mathfrak u}_0=u({\mathfrak g}_{\bar{0}})\oplus \mathfrak a$. It is clear that $\mathfrak a$, $u({\mathfrak g}_{\bar{0}})$ and each $\bar{\mathfrak u}_k$ are all $G$-submodules of $\bar{\mathfrak u}$. ## The center of $\bar{\mathfrak u}$ Recall the imbedding $\rho$ of $G$ into $GL_{2n}(F)$. We have $$\rho(G)=\{\text{diag}(a, a^{-1,t})|a\in GL_n( F)\}.$$ The differential $\bar\rho$: $\mathfrak{gl}(n,F)\longrightarrow \mathfrak{gl}(2n, F)$ is a Lie algebra homomorphism. Using [@hu 5.4] we obtain $$\bar\rho (a)=\begin{pmatrix}a&0\\ 0& -a^t \end{pmatrix}\in {\mathfrak g}_{\bar{0}}, \ \ a\in \mathfrak{gl} (n,F).$$ Define a bilinear form $\theta: {\mathfrak g}_{\bar{0}}\times {\mathfrak g}_{\bar{0}}\longrightarrow F$ by letting $\theta (x, y)=\text{tr} (ab)$ $$\text{for}\ x=\text{diag} (a, -a^t),\ y=\text{diag} (b, -b^t)\in {\mathfrak g}_{\bar{0}}.$$ Then it is easy to see that $\theta$ is symmetric, non-degenerate and $G$-invariant. Besides, the restriction of $\theta$ to $\mathfrak h$ is also non-degenerate. For each $\lambda\in\mathfrak h^*$, let $h_{\lambda}\in\mathfrak h$ be the unique element such that $\theta (h_{\lambda}, h)=\lambda(h)$ for all $h\in\mathfrak h$. It is easy to show that $h_{\epsilon_i-\epsilon_j}=h_i-h_j$ for any $i\neq j$. Note that $G$ satisfies the assumptions (H1)-(H3) in [@jj 6.3]. Recall the adjoint action $Ad\rho$ of $G$ on the Lie superalgebra ${\mathfrak g}$. **Lemma 6**. *The differential of $Ad\rho$ is $ad\bar\rho: \mathfrak{gl}(n, F)\longrightarrow \mathfrak{gl}({\mathfrak g})$ given by $$ad\bar \rho (x)(y)=[\bar\rho (x), y]=\bar \rho (x) y-y\bar\rho (x),\ \ x\in \mathfrak{gl}(n, F),\ y\in {\mathfrak g}.$$* *Proof.* Since $d(Ad \rho)=d(Ad)\bar\rho$ by [@hu 5.4], the lemma follows from [@hu Theorem 10.4]. ◻ Define $$\mathcal Z=\{u\in \bar {\mathfrak u}_{\bar{0}}|\ gu=u\ \text{for all}\ g\in G\ \text{and}\ [u, {\mathfrak g}_1]=0\}.$$ Since $\bar{\mathfrak u}$ is a $G$-module, we obtain from [@jj1 7.11(5),I] that $\mathcal Z\subseteq \{u\in\bar{\mathfrak u}_{\bar 0}| [x,u]=0 \ \text{for all}\ x\in {\mathfrak g}\}$. Thus, $\mathcal Z$ is a central subalgebra of of $\bar{\mathfrak u}$. Take the maximal torus $T=\{diag(t_1,\cdots t_n)|t_i\in F^*\}$ of $G$. 
We also use the notation $\epsilon_i$, $1\leq i\leq n$, to denote the natural basis of $X(T)$. Therefore the notation $\epsilon_i\in \mathfrak h^*$ is the differential of $\epsilon_i\in X(T)$ (see [@jj4 1.2]). Recall the root vectors of ${\mathfrak g}$ in Section 2. Then the $T$-weights of these root vectors (viewing $X(T)$ as a additive group) are $$\text{wt} (y_{ij})=\epsilon_i+\epsilon_j,\ \text{wt}(z_{ij})=-(\epsilon_i+\epsilon_j),\ \text{wt}(\tilde e_{ij})=\epsilon_i-\epsilon_j.$$ Set $\bar{\mathfrak u}^T=\{x\in \bar{\mathfrak u}|tx=x \ \text{for all}\ t\in T\}$. **Lemma 7**. *$\bar{\mathfrak u}^T\subseteq \bar{\mathfrak u}_0$.* *Proof.* Recall that $\bar {\mathfrak u}$ has a basis consisting of elements $y^{\delta}x^sz^{\delta'}$. Let $u\in \bar{\mathfrak u}^T$. Since each of these basis elements is a $T$-weight vector, it suffices to assume $u$ is such an element. Write $$u=\Pi_{i<j}y_{ij}^{\delta_{ij}}\Pi_{i\neq j}\tilde e^{s_{ij}}_{ij}f(h)\Pi_{i\leq j}z_{ij}^{\delta'_{ij}},\ \delta_{ij}, \delta'_{ij}=0,1,\ 0\leq s_{ij}\leq p-1, \ f(h)\in u(\mathfrak h).$$ Note that $\text{wt}(f(h))=0$. Since $tu=t^{\text{wt}(u)}u=u$ for all $t\in T$, we have $\text{wt}(u)=0$. Also, we have $$\begin{aligned} \text{wt}(\Pi_{i\neq j}\tilde{e}_{ij}^{s_{ij}})&=\sum_{i\neq j}s_{ij}(\epsilon_i-\epsilon_j)\\ &=\sum_{i<j}l_{ij}(\epsilon_i-\epsilon_j),\end{aligned}$$ where $l_{ij}=s_{ij}-s_{ji}$. It follows that $$\sum_{i<j}(\delta'_{ij}-\delta_{ij})(\epsilon_i+\epsilon_j)+\sum_{i=1}^n2\delta'_{ii}\epsilon_i+\sum_{i<j}l_{ij}(\epsilon_i-\epsilon_j)=0.$$ Since $\epsilon_1,\dots,\epsilon_n\in X(T)$ are linearly independent, the coefficient of each $\epsilon_i$ is $0$. Then we have $$\sum_{i<j}(\delta'_{ij}-\delta_{ij})+\sum_{k<i}(\delta'_{ki}-\delta_{ki})+2\delta'_{ii}+\sum_{i<j}l_{ij}-\sum_{k<i}l_{ki}=0.$$ The sum of all these $n$ equations gives $$2\sum_{i<j}\delta'_{ij}-2\sum_{i<j}\delta_{ij}+2\sum_{i=1}^n\delta'_{ii}=0,$$ and hence $\sum_{i\leq j}\delta'_{ij}=\sum_{i<j}\delta_{ij}$, implying that $u\in \bar{\mathfrak u}_0$. ◻ Immediately from the lemma, we have $\mathcal Z\subseteq \bar{\mathfrak u}_0$. ## The Harish-Chandra homomorphism on $\bar{\mathfrak u}$ Let $\Lambda_0=F_p\epsilon_1+\cdots +F_p\epsilon_n$ and let $u(\mathfrak h)$ be the restricted enveloping algebra of $\mathfrak h$ (see [@sf Section 5.3]). **Lemma 8**. *Let $f(h)\in u(\mathfrak h)$. If $f(h)(\mu)=0$ for all $\mu\in \Lambda_0$, then $f(h)=0$.* *Proof.* We proceed by induction on $n$. If $n=1$, then $f(h)$ is a polynomial of one variable $h_1$ such that $\text{deg} f\leq p-1$. Since the set $\{h_1(\mu)|\ \mu\in\Lambda_0\}\subseteq F$ contains $p$ elements, we obtain $f(h)=0$. If $n\geq 2$, then we may write $f(h)\in u(\mathfrak h)$ as $$\begin{aligned} &f(h_1,\dots, h_n)\\ &=f_0(h_1,\dots,h_{n-1})+f_1(h_1,\dots,h_{n-1})h_n+\cdots +f_{p-1}(h_1,\dots,h_{n-1})h_n^{p-1}.\end{aligned}$$ For each fixed $\mu\in\Lambda_0$, using the fact that $\mu+F_p\epsilon_n\subseteq \Lambda_0$ together with the conclusion for $n=1$, we obtain $$\begin{aligned} f_0(h_1(\mu),\dots,h_{n-1}(\mu)) &=f_1(h_1(\mu),\dots,h_{n-1}(\mu))\\ &=\cdots \\ &=f_{p-1}(h_1(\mu),\dots, h_{n-1}(\mu))\\ &=0.\end{aligned}$$ Then the induction hypotheses yields $f_i(h_1,\dots, h_{p-1})=0$ for all $i$, and hence $f(h)=0$. ◻ Denote by $\mathfrak n^+$(resp. $\mathfrak n^-$) the Lie subalgebra of ${\mathfrak g}$ spanned by positive (resp. negative) roots. 
We fix an order $$f_{\alpha_1},\dots, f_{\alpha_l}, h_1,\dots, h_n, e_{\beta_1}\dots, e_{\beta_m},$$ where $\alpha_1,\dots, \alpha_l$ (resp. $\beta_1,\dots \beta_m$) are all the positive (resp. negative) roots of ${\mathfrak g}$. From Section 3.2, $\bar {\mathfrak u}$ has a basis of the form $$f_{\alpha_1}^{d_1}\cdots f_{\alpha_l}^{d_l}h_1^{s_1}\cdots h_n^{s_n}e_{\beta_1}^{d_1'}\cdots e_{\beta_m}^{d_m'},\ 0\leq s_k\leq p-1, \sum_{\beta_i\ \text{is odd}}d_i'\leq d,$$ where $d_i=0,1$ (resp. $d_i'=0,1$) if $\alpha_i$ (resp. $\beta_i$) is odd and $0\leq d_i\leq p-1$ (resp. $0\leq d_i'\leq p-1$) if $\alpha_i$ (resp. $\beta_i$) is even. Then $\bar{\mathfrak u}^T$ is spanned by these vectors with $$\sum_{i=1}^l d_i\alpha_i=\sum_{j=1}^m d'_j\beta_j.$$ Denote by $\mathfrak n^-\bar{\mathfrak u}$ (resp. $\bar{\mathfrak u}\mathfrak n^+$) the set spanned by these vectors with $\sum _id_i>0$ (resp. $\sum_i d_i'>0$). The following conclusion is analogous to [@jd Lemma 7.4.2], **Lemma 9**. *Set $\mathfrak l=:\bar{\mathfrak u}\mathfrak n^+\cap \bar {\mathfrak u}^T$.* *(1) $\mathfrak l=\mathfrak n^- \bar{\mathfrak u}\cap \bar{\mathfrak u}^T$, so that $\mathfrak l\triangleleft \bar{\mathfrak u}^T$;* *(2) $\bar{\mathfrak u}^T= u(\mathfrak h)\oplus \mathfrak l$.* *Proof.* It suffices to prove (2). It is clear that $\bar{\mathfrak u}^T=u(\mathfrak h)+ \mathfrak l$. Let $x\in u(\mathfrak h)\cap \mathfrak l$. Then we have $x= u_0(h)$ for some $u_0(h)\in u(\mathfrak h)$ and $x\in\bar{\mathfrak u}\mathfrak n^+$. Applying $x$ to the maximal vector $v_{\mu}\in Z(\mu)$ for all $\mu\in \Lambda_0$. Then we get $u_0(h) v_{\mu}=0$, and hence $u_0(h)(\mu)=0$ for all $\mu\in \Lambda_0$. It follows that $x=u_0(h)=0$, which completes the proof. ◻ Define the Harish-Chandra homomorphism $h: \bar{\mathfrak u}^T\longrightarrow u(\mathfrak h)$ to be the projection with kernel $\mathfrak l$. Recall that $\bar{\mathfrak u}_0=u({\mathfrak g}_{\bar{0}})\oplus \mathfrak a$. It is easy to see that $\bar{\mathfrak u}^T=\bar{\mathfrak u}_0^T=u({\mathfrak g}_{\bar{0}})^T\oplus {\mathfrak a}^T$. Then the restriction of $h$ to $u({\mathfrak g}_{\bar{0}})^T$ and $u({\mathfrak g}'_{\bar{0}})^T$ define analogous homomorphisms on $u({\mathfrak g}_{\bar{0}})$ and $u({\mathfrak g}'_{\bar{0}})$. Let $$\Phi_0=\{\epsilon_i-\epsilon_j| 1\leq i\neq j\leq n\}\ (\text{resp.} \ \Phi^+_0=\{\epsilon_i-\epsilon_j| 1\leq i< j\leq n\})$$ denote the root system (resp. positive roots) of ${\mathfrak g}_{\bar{0}}$. We denote the Weyl group of $G$ with respect to $T$ by $W$. This group is generated by $s_{\alpha}, \alpha\in \Phi^+_0\subseteq X(T)$. The action of each $s_{\alpha}$ on $\mathfrak h^*$ is given by $s_{\alpha}(\lambda)=\lambda-\lambda(h_{\alpha})\alpha$. Since $p>2$, each $s_{\alpha}$ has order $2$. Since the elements $$h_{\epsilon_1-\epsilon_2}=h_1-h_2, \dots, h_{\epsilon_{n-1}-\epsilon_n}=h_{n-1}-h_n$$ are linearly independent, we can find $\rho \in\mathfrak h^*$ such that $$\rho (h_{\epsilon_i-\epsilon_{i+1}})=1\ \text{for}\ 1\leq i\leq n-1.$$ The dot action on $\mathfrak h^*$ of any $w\in W$ is defined by $w\cdot \lambda=w(\lambda+\rho)-\rho$. In particular, we have $s_{\alpha}\cdot \lambda=s_{\alpha}\lambda-\alpha$ for all simple $\alpha$. Since the $s_{\alpha}$ with $\alpha$ simple generate $W$, the dot action is independent of the choice of $\rho$ (see [@jj 9.2]). Since the bilinear form $\theta_{|\mathfrak h}$ is $W$-invariant and since $W$ permutes $\Phi_0$ ([@hu1 Lemma 10.4 C], we obtain that $W$ permutes $\{h_{\alpha}| \alpha\in \Phi_0\}$. 
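In the $\epsilon$-coordinates each reflection $s_{\epsilon_i-\epsilon_j}$ simply interchanges two components of a weight, so the dot action is easy to experiment with numerically. The short Python sketch below, an illustration of our own with the obvious coordinate conventions, checks that the dot action of each $s_{\alpha}$ squares to the identity, that $s_{\alpha}\cdot \lambda=s_{\alpha}\lambda-\alpha$ for a simple root $\alpha$ (as noted above), and that the dot action does not depend on the admissible choice of $\rho$.

```python
import numpy as np

n = 4
rho = np.arange(n - 1, -1, -1.0)          # one admissible choice: rho_i - rho_{i+1} = 1

def s(i, j, lam):
    # reflection s_{eps_i - eps_j} in eps-coordinates: swap entries i and j
    out = lam.copy()
    out[[i, j]] = out[[j, i]]
    return out

def dot(i, j, lam, r=rho):
    # dot action  s_alpha . lam = s_alpha(lam + rho) - rho
    return s(i, j, lam + r) - r

rng = np.random.default_rng(2)
lam = rng.integers(0, 7, size=n).astype(float)

# the dot action of a reflection is an involution
print(np.allclose(dot(0, 2, dot(0, 2, lam)), lam))
# for a simple root alpha = eps_2 - eps_3:  s_alpha . lam = s_alpha(lam) - alpha
alpha = np.zeros(n); alpha[1], alpha[2] = 1.0, -1.0
print(np.allclose(dot(1, 2, lam), s(1, 2, lam) - alpha))
# independence of the admissible choice of rho: a constant shift of rho changes nothing
print(np.allclose(dot(0, 3, lam), dot(0, 3, lam, r=rho + 5.0)))
```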
We may identify $U(\mathfrak h)$ with the symmetric algebra $S(\mathfrak h)$, and hence with the algebra of polynomial functions on $\mathfrak h^*$. Thus, the dot action on $\mathfrak h^*$ yields also a dot action on $U(\mathfrak h)$ defined by $(w\cdot f) (\lambda)=f(w^{-1}\cdot \lambda)$ for $f\in U(\mathfrak h)$. In particular, we have $s_{\alpha}\cdot h=s_{\alpha}(h)-\alpha(h) 1$ if $\alpha$ is a simple root ([@jj 9.2]). It is easy to see that $W$ stabilizes $\Lambda_0\subseteq \mathfrak h^*$. Then $W$ also acts on $\Lambda_0$ by the dot action. We have $s_{\alpha}(h)=h-\alpha(h)h_{\alpha}$ for all simple roots $\alpha$, implying that $$s_{\alpha}(h_i)^p-s_{\alpha}(h_i)\in \mathcal O\cap U(\mathfrak h)$$ for all $i$. Then $W$ acts also on the restricted enveloping algebra $u(\mathfrak h)$. Since $$(s_{\alpha}\cdot h_i)^p-s_{\alpha}\cdot h_i=s_{\alpha}(h_i)^p-s_{\alpha}(h_i)\in \mathcal O$$ for all $i$, $W$ also acts on $u(\mathfrak h)$ by the dot action. Following [@sv], define an element $\Theta\in U(\mathfrak h)$ by $$\begin{aligned} \Theta (\mu)& =\Pi_{\alpha\in\Phi_0} ((\alpha, \mu+\rho)-1)\\ &=\Pi_{\i\neq j} ((h_i-h_j)(\mu +\rho)-1)\ \text{for}\ \mu\in\mathfrak h^*.\end{aligned}$$ Since $W$ permute $\Phi_0$, we have $w\cdot \Theta =\Theta$ for all $w\in W$, and hence $\Theta\in U(\mathfrak h)^{W\cdot}$. Note that $$\text{deg}\Theta=|\Phi_0|=2|\Phi_0^+|=2d.$$ Assume $p>2d$ in the following. Denote also by $\Theta$ its image in $u(\mathfrak h)$. Then we have $\Theta\in u(\mathfrak h)^{W\cdot}$. Let $\alpha\in\Phi^+_0$ and let $x_{\alpha}$ be a root vector such that $[x_{\alpha}, x_{-\alpha}]=h_{\alpha}$. We may simply let $x_{\alpha}={\tilde e}_{ij}$ and $x_{-\alpha}={\tilde e}_{ji}$ for $\alpha=\epsilon_i-\epsilon_j\in \Phi^+_0$. Note that $ad^3 x_{\alpha}({\mathfrak g})=0$. With the assumption $p>3$, we may define an automorphism of ${\mathfrak g}$ by letting $$\tilde s_{\alpha}=(\text{exp}\ ad x_{\alpha})(\text{exp}\ ad x_{-\alpha})(\text{exp} \ ad x_{\alpha}).$$ By [@jd 1.10.19], we have ${\tilde s_{\alpha}}|_{\mathfrak h}=s_{\alpha}$. Thus, each element of $W$ may be viewed as an automorphism of ${\mathfrak g}$. By essentially a similar argument as that for [@hu1 2.3 (\*)], we obtain $g\in G$ such that $$\begin{aligned} \tilde s_{\alpha}(x^{[p]})&= \rho(g) x^{[p]}\rho(g)^{-1}\\ & =(\rho(g)x\rho(g)^{-1})^{[p]}\\ &=(\tilde s_{\alpha}x)^{[p]}\ \text{for all}\ x\in {\mathfrak g}_{\bar{0}}.\end{aligned}$$ Then $\tilde s_{\alpha}$ can be extended to an automorphism of $u({\mathfrak g})$. It is easy to see that $\tilde s_{\alpha}({\mathfrak g}_{\pm 1})\subseteq {\mathfrak g}_{\pm 1}$. Then each $w\in W$ can also be extended to an automorphism of $\bar{\mathfrak u}$. **Lemma 10**. *$$h(\mathcal Z)\subseteq u(\mathfrak h)^{W\cdot}.$$* *Proof.* Let $\mu\in \Lambda_0\subseteq \mathfrak h^*$, and let $v_{\mu}$ be the maximal vector in $Z(\mu)$. The Lie superalgebra ${\mathfrak g}$ has a triangular decomposition $${\mathfrak g}=({\mathfrak g}_{-1}+\mathfrak n^-)+\mathfrak h +(\mathfrak n^+ +{\mathfrak g}_1).$$ Let $x_{\alpha}$ be a root vector for $\alpha\in\Phi^+_0$. 
Note that $x_{-\alpha}^{p-1}\otimes v_{\mu}\in Z(\mu)$ is a maximal vector for the new triangular decomposition $$\begin{aligned}{\mathfrak g}=& \tilde s_{\alpha}({\mathfrak g}_{-1}+\mathfrak n^-)+\mathfrak h +\tilde s_{\alpha}(\mathfrak n^+ +{\mathfrak g}_1)\\ =& ({\mathfrak g}_{-1}+\tilde s_{\alpha}(\mathfrak n^-))+ \mathfrak h +(\tilde s_{\alpha}(\mathfrak n^+)+{\mathfrak g}_1).\end{aligned}$$ Therefore, $Z(\mu)$ is isomorphic to the baby Verma module $Z(\mu-(p-1)\alpha)$ with respect to the new triangular decomposition. Let $u\in \mathcal Z$. By Lemma 3.6, we may write $u=u_1+h(u)$ such that $u_1\in\bar{\mathfrak u}\mathfrak n^+$. Since $\mathcal Z\subseteq \bar{\mathfrak u}^G,$ using the discussion preceding the lemma we have $$u=\tilde s_{\alpha}(u)= \tilde s_{\alpha}(u_1) +s_{\alpha}(h(u)).$$ Since $\tilde s_{\alpha}(u_1)\in \bar{\mathfrak u}\tilde s_{\alpha}(\mathfrak n^+)$, the righthand side is a linear combination of a basis of $\bar{\mathfrak u}$ using the new triangular decomposition. We now use Jantzen's argument (see [@jj 9.5]). Applying $u$ to $x_{-\alpha}^{p-1}\otimes v_{\mu}$, we have $$u(x_{-\alpha}^{p-1}\otimes v_{\mu})=h(u)(\mu) x_{-\alpha}^{p-1}\otimes v_{\mu}$$ and $$u(x_{-\alpha}^{p-1}\otimes v_{\mu})=\tilde s_{\alpha}(u)(x_{-\alpha}^{p-1}\otimes v_{\mu})$$$$=s_{\alpha}(h(u))(\mu +\alpha)(x_{-\alpha}^{p-1}\otimes v_{\mu})= h(u)(s_{\alpha}\cdot \mu)(x_{-\alpha}^{p-1}\otimes v_{\mu}),$$ implying that $s_{\alpha}\cdot h(u)=h(u)$, and hence $h(u)\in u(\mathfrak h)^{W\cdot}$, since $\alpha$ is arbitrary. This completes the proof. ◻ **Lemma 11**. *Let $\mu\in\Lambda_0$ be such that $\mu_{n-1}=\mu_n$ and let $h(z)\in h(\mathcal Z)$. Then $$h(z)(\mu+t(\epsilon_{n-1}+\epsilon_n))=h(z)(\mu)$$ for any $t\in F_p$.* *Proof.* Let $v_{\mu}\in Z(\mu)$ be the maximal vector of weight $\mu$. Recall the root vector $\tilde e_{n.n-1}=e_{n,n-1}-e_{2n-1,2n}\in{\mathfrak g}_{\bar{0}}$. Since $\mu_{n-1}=\mu_n$, $\tilde e_{n,n-1}v_{\mu}$ is a maximal vector of weight $\mu-(\epsilon_{n-1}-\epsilon_n)$. Then we obtain a homomorphism of $\bar{\mathfrak u}$-modules $\varphi: Z(\mu-(\epsilon_{n-1}-\epsilon_n))\longrightarrow Z(\mu)$. Since $v_{\mu}\notin \text{Im} \varphi$, we have $\text{Im}\varphi\subsetneq Z(\mu)$. Recall the root vector for $-\epsilon_n-\epsilon_{n-1}$ is $y_{n-1,n}=e_{2n-1,n}-e_{2n,n-1}\in {\mathfrak g}_{-1}$. A straightforward computation shows that, the image of the vector $y_{n-1,n}v_{\mu}$ in $Z(\mu)/\text{Im}\varphi$ is a maximal vector of weight $\mu-(\epsilon_{n-1}+\epsilon_n)$. This implies that $$h(z)(\mu)=h(z)(\mu-(\epsilon_{n-1}+\epsilon_n))$$ and hence $h(z)(\mu)=h(z)(\mu+t(\epsilon_{n-1}+\epsilon_n))$ for all $t\in F_p$. ◻ **Lemma 12**. *Let $\mu\in\Lambda_0$ be such that $(\mu+\rho, \epsilon_i-\epsilon_j)=1$ for $i\neq j$. Let $h(z)\in h(\mathcal Z)$. Then $h(z)(\mu+t(\epsilon_i+\epsilon_j))=h(z)(\mu)$ for any $t\in F_p$.* *Proof.* Let $\alpha=\epsilon_s-\epsilon_t, \ s<t$. Then $$s_{\alpha}(\epsilon_k)=\begin{cases} \epsilon_k, &\text{if $k\neq s,t$}\\ \epsilon_s, &\text{if $k=t$}\\ \epsilon_t, &\text{if $k=s$}.\end{cases}$$ So we can find $w\in W$ satisfying $w(\epsilon_i-\epsilon_j)=\epsilon_{n-1}-\epsilon_n$ and $w(\epsilon_i+\epsilon_j)=\epsilon_{n-1}+\epsilon_n$. By assumption we obtain $(w(\mu+\rho), \epsilon_{n-1}-\epsilon_n)=1$ or, equivalently, $$(w\cdot\mu, \epsilon_{n-1}-\epsilon_n)=0.$$ Using the previous lemma we have $$h(z)(w\cdot \mu)=h(z)(w\cdot \mu+t(\epsilon_{n-1}+\epsilon_n))$$ for all $t\in F_p$. 
Then we get from Lemma 3.7 that $$\begin{aligned} h(z)(\mu)&=h(z)(w\cdot \mu+t(\epsilon_{n-1}+\epsilon_n))\\&=h(z)(w\cdot (\mu+t(\epsilon_i+\epsilon_j)))\\ &=h(z)(\mu+t(\epsilon_i+\epsilon_j))\end{aligned}$$ for all $t\in F_p$, as required. ◻

For $i\neq j$, set $s_{ij}=\{\mu\in\Lambda_0|(\mu+\rho, \epsilon_i-\epsilon_j)=1\}$.

**Lemma 13**. *Each $h(z)\in h(\mathcal Z)$ is constant on $\cup_{i\neq j}s_{ij}$.*

*Proof.* (cf. [@sv Lemma 4.4]). By the proof of the above lemma, we see that, for each $s_{ij}$ and each $\mu\in s_{ij}$, there is $w\in W$ such that $w\cdot \mu\in s_{n-1,n}$. Since $h(z)\in u(\mathfrak h)^{W\cdot}$, it suffices to show that $h(z)$ is constant on $s_{12}$. This is immediate from Lemma 3.8 when $n=2$. Suppose $n>2$. Let $\mu=\sum_{i=1}^n\mu_i\epsilon_i\in s_{12}$. Set $$\lambda=\mu+(\mu_3-\mu_1)(\epsilon_1+\epsilon_2)\ \text{and}\ \nu=\lambda+(\mu_1-\mu_3+1)(\epsilon_2+\epsilon_3).$$ Then we have $\lambda\in s_{23}$. By Lemma 3.9 we have $$h(z)(\mu)=h(z)(\lambda)=h(z)(\nu).$$ Let $w=s_{\epsilon_1-\epsilon_2}s_{\epsilon_1-\epsilon_3}\in W$. By a short computation we see that $w\cdot \nu=\mu+2\epsilon_3$. Then we have $$\begin{aligned}h(z)(\mu)&=h(z)(\nu)\\ &=h(z)(w\cdot \nu)\\&=h(z)(\mu+2\epsilon_3),\end{aligned}$$ and hence $h(z)(\mu)=h(z)(\mu +2k\epsilon_3)$ for all $k\in F_p$. Since $p>2$, we get $h(z)(\mu)=h(z)(\mu +t\epsilon_3)$ for all $t\in F_p$. Similarly, we have $$h(z)(\mu)=h(z)(\mu+t\epsilon_i)$$ for all $i>2$ and all $t\in F_p$. Therefore, $h(z)$ is constant on $s_{12}$. ◻

**Lemma 14**. *Let $h(z)\in h(\mathcal Z)$ be such that $h(z)|_{s_{n-1,n}}=0$. Then $(h_{n-1}-h_n)|h(z)$ in $u(\mathfrak h)$.*

*Proof.* Write $$h(z)=\sum _{0\leq i,j\leq p-1}c_{ij}h_{n-1}^ih_n^j,$$ where each $c_{ij}\in u(\mathfrak h)$ is a polynomial in the variables $h_1,\dots, h_{n-2}$. In particular, if $n=2$, each $c_{ij}$ is just a scalar. By assumption, we have $h(z)(\mu)=0$ if $\mu\in \Lambda_0$ with $\mu_{n-1}=\mu_n$. Note that $$h(z)=\sum_{s=0}^{p-1}(\sum_{i+j\equiv s} c_{ij}h_{n-1}^ih_n^j)$$ and $k^{i+j}=k^s$ for all $k\in F_p$ if $i+j\equiv s$. Then we have for any $\mu\in s_{n-1,n}$ that $$\begin{aligned} h(z)(\mu)&=\sum_{s=0}^{p-1}(\sum_{i+j\equiv s}c_{ij}(\mu)\mu_n^{i+j})\\&=\sum_{s=0}^{p-1}(\sum_{i+j\equiv s}c_{ij}(\mu))\mu_n^s\\&=0,\end{aligned}$$ implying that $\sum_{i+j\equiv s}c_{ij}(\mu_1,\dots,\mu_{n-2})=0$ for all $s\in F_p$. It follows from Lemma 3.5 that $\sum_{i+j\equiv s}c_{ij}=0\in u(\mathfrak h)$ for all $s$. Since $h_n^s=h_n^{p+s}\in u(\mathfrak h)$, we obtain $$\begin{aligned}\sum_{i+j\equiv s}c_{ij}h_{n-1}^ih_n^j&=\sum_{i+j\equiv s}c_{ij}h_{n-1}^ih_n^j-\sum_{i+j\equiv s}c_{ij}h_n^s\\ &=\sum_{i+j=s}c_{ij}(h_{n-1}^ih_n^j-h_n^s)+\sum_{i+j=p+s}c_{ij}(h_{n-1}^ih_n^j-h_n^{p+s})\\ &=\sum_{i+j=s}c_{ij}(h_{n-1}^i-h_n^i)h_n^{s-i}+\sum_{i+j=p+s}c_{ij}(h_{n-1}^i-h_n^i)h_n^{p+s-i},\end{aligned}$$ which gives $(h_{n-1}-h_n)|h(z)$, as required. ◻

For $i\neq j$, define $\Theta_{ij}\in u(\mathfrak h)$ by $$\Theta_{ij}(\mu)=(h_i-h_j)(\mu+\rho)-1=(\mu+\rho,\epsilon_i-\epsilon_j)-1,\ \mu\in \Lambda_0.$$ Then a short computation shows that $\Theta_{ij}=h_i-h_j+j-i-1$.

**Corollary 15**. *Let $i\neq j$ and let $h(z)\in h(\mathcal Z)$ be such that $h(z)|_{s_{ij}}=0$. Then $\Theta_{ij}|h(z)$ in $u(\mathfrak h)$.*

*Proof.* There is $w\in W$ such that $w(\epsilon_i-\epsilon_j)=\epsilon_{n-1}-\epsilon_n$. Let $\mu\in s_{ij}$.
Since $(\mu+\rho, \epsilon_i-\epsilon_j)=1$, so that $(w(\mu+\rho),w(\epsilon_i-\epsilon_j))=1$, we have $(w\cdot\mu+\rho, \epsilon_{n-1}-\epsilon_n)=1$, and hence $w\cdot \mu\in s_{n-1,n}$. Therefore, we get $w\cdot s_{ij}=s_{n-1,n}$. Meanwhile, since $$\begin{aligned} \Theta_{ij}(\mu)&=(\mu+\rho,\epsilon_i-\epsilon_j)-1\\&=(w\cdot \mu+\rho, \epsilon_{n-1}-\epsilon_n)-1\\&=\Theta_{n-1,n}(w\cdot \mu)\\&=(w^{-1}\cdot \Theta_{n-1,n})(\mu)\ \text{for all} \ \mu \in \Lambda_0,\end{aligned}$$ we have $\Theta_{ij}=w^{-1}\cdot \Theta_{n-1,n}$. Since $w\cdot h(z)=h(z)$ and since $w^{-1}\cdot \lambda\in s_{ij}$ for $\lambda\in s_{n-1,n}$, we have by assumption that $$h(z)(\lambda)=h(z)(w^{-1}\cdot \lambda)=0.$$ Note that $\Theta_{n-1,n}=h_{n-1}-h_n$. Then Lemma 3.11 gives $\Theta_{n-1,n}|h(z)$, and hence $w^{-1}\cdot\Theta_{n-1,n}|w^{-1}\cdot h(z)$, implying that $\Theta_{ij}|h(z)$, as required. ◻ Let $z_i=h_i^p-h_i$ for $1\leq i\leq n$. By [@sf Theorem 5.1.2], $$\mathcal O'=:F[z_1,\dots, z_n]\subseteq U(\mathfrak h)$$ is a polynomial ring. Clearly we see that $\mathcal O'=\mathcal O'_{F_p}\otimes_{F_p}F$, where $\mathcal O'_{F_p}=F_p[z_1,\dots, z_n]$. Then we have $$u(\mathfrak h)=U(\mathfrak h)/\mathcal O'U(\mathfrak h)=U(\mathfrak h_{F_p})\otimes_{F_p}F/(\mathcal O'_{F_p}U(\mathfrak h_{F_p}))\otimes_{F_p}F\cong u(\mathfrak h_{F_p})\otimes_{F_p} F.$$ **Lemma 16**. *$u(\mathfrak h)^{W\cdot}=u(\mathfrak h_{F_p})^{W\cdot}\otimes_{F_p}F$.* *Proof.* Let $\{a_i\}_{i\in I}$ be a basis of the $F_p$-vector space $F$, and let $$x=\sum f_i(h)\otimes a_i\in (u(\mathfrak h_{F_p})\otimes F)^{W\cdot}.$$ Since $w\cdot x=x$ for every $w\in W$, we get $\sum _i (w\cdot f_i(h)-f_i(h))\otimes a_i=0$ for all $w$. Substituting $h$ by each $\lambda\in\Lambda_0$, we get, for each $i$, $(w\cdot f_i(h)-f_i(h))(\lambda)=0$ for all $\lambda\in\Lambda_0$. Then Lemma 3.5 yields $w\cdot f_i(h)-f_i(h)=0$ for all $w\in W$, and hence $f_i(h)\in u(\mathfrak h_{F_p})^{W\cdot}$, as desired. ◻ **Theorem 17**. *Assume $p>2d$. Then $h(\mathcal Z)\subseteq \Theta u(\mathfrak h)^W+ F$.* *Proof.* Let $h(z)\in h(\mathcal Z)$. By Lemma 3.10, there is $k\in F$ such that $(h(z)-k)|_{s_{ij}}=0\ \text{for all}\ i\neq j.$ Then Corollary 3.12 gives $\Theta_{ij}|(h(z)-k)$ in $u(\mathfrak h)$ for all $i\neq j$. It follows that $\Theta|(h(z)-k)^{2d}$ and hence $\Theta|(h(z)-k)^p$ in $u(\mathfrak h)$. Assume $(h(z)-k)^p=\Theta q(h)$ for some $q(h)\in u(\mathfrak h)$. Since $(h(z)-k)^p, \Theta\in u(\mathfrak h)^{W\cdot}$, we have $(h(z)-k)^p=\Theta\ (w\cdot q(h))$ for every $w\in W$. Since $p>2d=n(n-1)\geq n$, $|W|=n!\neq 0$. Let $$\tilde{q}(h)={1\over |W|}\sum_{w\in W} w\cdot q(h).$$ Then we have $\tilde{q}(h)\in u(\mathfrak h)^{W\cdot}$ and also $(h(z)-k)^p=\Theta\tilde{q}(h)$. Thus, we may assume that $q(h)\in u(\mathfrak h)^{W\cdot}$. Let $\{a_i\}_{i\in I}$ be a basis of the $F_p$-vector space $F$ as above. Since $F$ is algebraically closed, the set $\{a_i^p\}_{i\in I}$ is also a basis. In fact, for each $a\in F$, we have $a^{1/p}=\sum_i \lambda_i a_i$ for $\lambda_i\in F_p$, so that $a=\sum \lambda_i a_i^p$. The linear independence of the set $\{a_i^p\}$ is immediate. By the lemma above, we have $h(z)-k=\sum_i h_i(z)\otimes a_i$, where $h_i(z)\in u(\mathfrak h_{F_p})^{W\cdot}$. Then $$(h(z)-k)^p=\sum_i h_i(z)^p\otimes a_i^p=\sum_i h_i(z)\otimes a_i^p.$$ Assume $q(h)=\sum_i q_i(h)\otimes a_i^p$ with $q_i(h)\in u(\mathfrak h_{F_p})^{W\cdot}$. 
Then we get $$\sum_i(h_i(z)-\Theta q_i(h))\otimes a_i^p=0.$$ By a similar argument as that used in the proof of the above lemma, we have $h_i(z)=\Theta q_i(h)$ for all $i$, implying that $\Theta|(h(z)-k)$. Let $h(z)-k=\Theta q'(h)$. By the discussion above we may assume $q'(h)\in u(\mathfrak h)^{W\cdot}$. This completes the proof. ◻

## The image of $h$

In this subsection, we construct some central elements in $\bar{\mathfrak u}$. Set $\wedge^{\leq d} ({\mathfrak g}_1)=:\sum_{i=0}^d\wedge ^i({\mathfrak g}_1)$. From Section 3.2, we obtain an isomorphism $\bar{\mathfrak u}\cong \wedge^{\leq d}({\mathfrak g}_1)\otimes u({\mathfrak g}_{\bar{0}})\otimes \wedge ({\mathfrak g}_{-1})$ as vector spaces. Then $\bar{\mathfrak u}$ has a basis of the form $$z^{\delta'}x^sy^{\delta},\ z^{\delta'}\in \wedge^{\leq d} ({\mathfrak g}_1),\ x^s\in u({\mathfrak g}_{\bar{0}}), \ y^{\delta}\in\wedge ({\mathfrak g}_{-1}).$$ Define a projection $(,)_0$: $\bar {\mathfrak u}\longrightarrow u({\mathfrak g}_{\bar{0}})$ by $$(z^{\delta'}x^sy^{\delta})_0=\begin{cases} x^s, &\text{if $z^{\delta'}=y^{\delta}=1$}\\ 0, &\text{otherwise.}\end{cases}$$ Recall the elements $Y$ and $Z$ defined earlier.

**Lemma 18**. *Assume $p>2d$. Then $(Y\wedge^d({\mathfrak g}_1))_0\neq 0$.*

*Proof.* Let $L(\mu)$ be the restricted simple ${\mathfrak g}_{\bar{0}}$-module of highest weight $\mu\in\Lambda_0$. Note that $\delta_{\mu}=(\Pi_{i<j}\Theta_{ij})(\mu)$ for all $\mu\in \Lambda_0$. Since $p>n(n-1)$, the degree of each $h_i$ in the polynomial $\Pi_{i<j}\Theta_{ij}$ is less than $p$. By Lemma 3.5, there is $\mu\in\Lambda_0$ such that $(\Pi_{i<j}\Theta_{ij})(\mu)\neq 0$. It then follows from Proposition 2.3 that $K(L({\mu}))$ is simple as a $u({\mathfrak g})$-module. Let $v_{\mu}\in L(\mu)$ be a maximal vector. Write $YZ$ as a linear combination of the above basis for $\bar {\mathfrak u}$. Then we have in $K(L(\mu))$ that $$(YZ)_0Yv_{\mu}= YZYv_{\mu}=\delta_{\mu}Yv_{\mu}\neq 0,$$ implying that $(YZ)_0\neq 0$. ◻

Since $\wedge ({\mathfrak g}_{-1})$, $\wedge^{\leq d} ({\mathfrak g}_1)$ and $u({\mathfrak g}_{\bar{0}})$ are $G$-submodules of $u({\mathfrak g})$, it is easy to see that $(,)_0$ is a homomorphism of $G$-modules. Let $u^d({\mathfrak g}_{\bar{0}})=\langle x_1\cdots x_k\in u({\mathfrak g}_{\bar{0}})| x_1,\dots, x_k\in{\mathfrak g}_{\bar{0}},\ k\leq d\rangle$. Clearly $u^d({\mathfrak g}_{\bar{0}})$ is a $G$-submodule of $u({\mathfrak g})$. Define a linear map $\varphi: \wedge^d ({\mathfrak g}_1)\longrightarrow u^d({\mathfrak g}_{\bar{0}})$ by letting $\varphi(x)=(Yx)_0$ for $x\in \wedge^d({\mathfrak g}_1)$.

**Lemma 19**. *$\varphi$ is a homomorphism of $G'$-modules.*

*Proof.* Recall that $G$ acts on ${\mathfrak g}$ by the composition of the adjoint action with $\rho$. A short computation shows that, for $c\in F$, $s<t$, $$Ad\rho(1+ce_{ij}) y_{st}=\begin{cases}& y_{st}-cy_{jt}, \ \text{if $i=s$}\\ &y_{st},\ \ \text{otherwise.}\end{cases}$$ Then we get $(1+ce_{ij})\cdot Y=Y$. Since $G'=SL_n(F)$ is generated by all $1+ce_{ij}$, $i\neq j$ ([@ja Lemma 6.7.1]), we have $g\cdot Y=Y$ for all $g\in G'$. For any $x\in\wedge^d({\mathfrak g}_1)$ and $g\in G'$, we then have $$\begin{aligned}\varphi(g\cdot x)&=(Y\rho(g)x\rho(g)^{-1})_0\\ &=(\rho(g)(Yx)\rho(g^{-1}))_0\\&=(g\cdot(Yx))_0\\&=g\cdot(Yx)_0\\&=g\cdot\varphi(x),\end{aligned}$$ as required. ◻

### A simple ${\mathfrak g}_{\bar{0}}$-module of typical weight

In the following we study a simple ${\mathfrak g}_{\bar{0}}$-module used later to construct central elements in $\bar{\mathfrak u}$.

**Definition 20**.
*A weight $\lambda\in \Lambda_0$ is *typical* if $\Theta(\lambda)\neq 0$ and *atypical* otherwise.*

Example. Let $\lambda=(n-1)\epsilon_1+(n-2)\epsilon_2+\cdots +\epsilon_{n-1}\in \Lambda_0$. If $p>2n-1$, then the weight $\lambda$ is typical. Note that if $\lambda$ is typical, then $\delta_{\lambda}\neq 0$.

Denote by ${\mathfrak g}_{\mathbb C}$ the Lie superalgebra $p(n)$ over $\mathbb C$. Let $V(\tilde\lambda)$ be the simple ${\mathfrak g}_{\bar{0}, \mathbb C}$-module with highest weight $\tilde\lambda=(n-1)\epsilon_1+(n-2)\epsilon_2+\cdots +\epsilon_{n-1}\in \mathfrak h^*$. Since $\tilde\lambda$ is dominant integral, $V(\tilde\lambda)$ is finite dimensional (see [@hu1 Theorem 21.2]). Assume $\Phi_0^+=\{\alpha_1,\dots, \alpha_d\}$, where $\alpha_i=\epsilon_i-\epsilon_{i+1}$ for $1\leq i\leq n-1$. Then $\beta=\alpha_1+\cdots +\alpha_{n-1}\in\Phi^+_0$ is the longest root. For each $\alpha=\epsilon_i-\epsilon_j\in \Phi^+_0$, let $e_{\alpha}=\tilde e_{ij}, f_{\alpha}=\tilde e_{ji}$ and $h_{\alpha}=h_i-h_j$. Let $v^+\in V(\tilde\lambda)$ be the maximal vector. Then $V(\tilde\lambda)$ is spanned by vectors of the form $f_{\alpha_1}^{s_1}\cdots f_{\alpha_d}^{s_d}v^+$, $s_i\geq 0$. Let $\mathfrak{sl}^{\beta}_2=\langle e_{\beta}, f_{\beta}, h_{\beta}\rangle \subseteq {\mathfrak g}_{\bar{0}, \mathbb C}$. By [@hu1 Theorem 6.3], $V(\tilde\lambda)$ is completely reducible as an $\mathfrak{sl}^{\beta}_2$-module. Moreover, each simple summand has maximal weight $m\geq 0$ and has dimension $m+1$ ([@hu1 Theorem 7.2]). We claim that $\tilde{\lambda}(h_{\beta})$ is the largest of all these weights. In fact, note that all weights (with respect to $\mathfrak h_{\mathbb C}=\langle h_1,\cdots, h_n\rangle$) in $V(\tilde\lambda)$ are of the form $$\tilde\lambda-\sum_{i=1}^{n-1} {k_i}\alpha_i,\ k_i\geq 0.$$ Since $\alpha_i=\epsilon_i-\epsilon_{i+1}$ for all $i$ and $h_{\beta}=h_1-h_n$, we have $\alpha_i(h_{\beta})=\delta_{i1}+\delta_{i+1,n}\geq 0$. Then the claim follows. Since $\tilde\lambda(h_{\beta})=n-1$, it follows from [@hu1 Theorem 7.2] that every simple summand of the $\mathfrak{sl}^{\beta}_{2}$-module $V(\tilde\lambda)$ has dimension $\leq n$. Thus, we obtain that $f_{\beta}^nV(\tilde\lambda)=0$, and similarly $e_{\beta}^n V(\tilde\lambda)=0$.

**Lemma 21**. *Let $\alpha\in \Phi_0^+$. Then $e_{\alpha}^nV(\tilde\lambda)=f_{\alpha}^nV(\tilde\lambda)=0$.*

*Proof.* Note that $\Phi_0^+=\{\epsilon_i-\epsilon_j| 1\leq i<j\leq n\}$. For each $\alpha\in \Phi_0^+$, define $$x_{\alpha}(t)=\text{exp}(tad e_{\alpha}),\ t\in \mathbb C.$$ Then $x_{\alpha}(t)$ is an automorphism of ${\mathfrak g}_{\bar{0}, \mathbb C}$. Set $\tilde \sigma_{\alpha}=x_{\alpha}(1)x_{-\alpha}(-1)x_{\alpha}(1)$. According to [@cr Lemma 6.4.4] and [@cr Proposition 6.4.2], we have $\tilde \sigma_{\alpha}(h_{\beta})=h_{\sigma_{\alpha}(\beta)}$ and $\tilde \sigma_{\alpha}(e_{\beta})=\pm e_{\sigma_{\alpha}(\beta)}$ for any $\beta\in \Phi_0$, where $\sigma_{\alpha}$ is the reflection defined by $\alpha$ (see [@hu1 Section 9.1]). For each $\alpha\in \Phi_0^+$, we have a triangular decomposition of ${\mathfrak g}_{\bar{0}, \mathbb C}$ with respect to $\alpha$: $${\mathfrak g}_{\bar{0}, \mathbb C}=\tilde \sigma_{\alpha}(\mathfrak n_0^+)+\mathfrak h+\tilde \sigma_{\alpha}(\mathfrak n_0^-).$$ The corresponding positive root system (resp. base) is $\sigma_{\alpha}(\Phi_0^+)$ (resp. $\sigma_{\alpha}(\Delta)$).
Since $e_{\alpha}$ acts nilpotently on $V(\tilde\lambda)$, we may define an automorphism of the vector space $V(\tilde\lambda)$ by (see [@jj2 Chapter 8] or [@hu1 21.25]) $$\sigma_{\alpha,V}=\text{exp}(e_{\alpha,V})\text{exp}(-e_{-\alpha,V})\text{exp}(e_{\alpha,V}),$$ where $e_{\alpha,V}$ denotes the endomorphism of $V(\tilde\lambda)$ given by the action of $e_{\alpha}$. Then for $x\in{\mathfrak g}_{\bar{0}}$ and $v\in V(\tilde\lambda)$, we have $$\sigma_{\alpha, V}(xv)=\tilde \sigma_{\alpha}(x) \sigma_{\alpha,V}(v).$$ This follows from [@cr Lemma 4.5.2] (for the case $\alpha$ is simple, see also [@jj2 Chapter 8]). Let $\mathfrak b=\mathfrak h+\mathfrak n_0^+$. We claim that $\sigma_{\alpha,V} (v^+)\in V(\tilde\lambda)$ is the maximal vector with respect to the Borel subalgebra $\tilde\sigma_{\alpha}(\mathfrak b)$. In fact, we have $$\tilde\sigma_{\alpha}(x)\sigma_{\alpha,V} (v^+)=\sigma_{\alpha,V}(xv^+)=0$$ for all $x\in \mathfrak n_0^+$ and $$\begin{aligned} h\sigma_{\alpha,V}(v^+)&=\sigma_{\alpha,V}(\tilde\sigma_{\alpha}^{-1}(h)v^+)\\ &=\tilde{\lambda}(\tilde\sigma_{\alpha}^{-1}(h))\sigma_{\alpha,V}(v^+)\\ &=(\sigma_{\alpha} \tilde\lambda)(h)\sigma_{\alpha,V}(v^+),\end{aligned}$$ for $h\in\mathfrak h$. Then the claim follows. In addition, we see that the highest weight of $V(\tilde\lambda)$ is $\sigma_{\alpha} \tilde\lambda$. For each $\alpha=\epsilon_i-\epsilon_j\in \Phi_0^+$, let $\alpha_1=\epsilon_1-\epsilon_i$ and $\alpha_2=\epsilon_j-\epsilon_n$. We let $\sigma_{\alpha_2}=1$ if $j=n$ and $\sigma_{\alpha_1}=1$ if $i=1$. Then we have $\sigma_{\alpha_2}\sigma_{\alpha_1}(\beta)=\alpha$, where $\beta=\epsilon_1-\epsilon_n$ is the longest root. By above discussion, the highest weight of $V(\tilde\lambda)$ with respect to the Borel subalgebra $\tilde\sigma_{\alpha_2}\tilde\sigma_{\alpha_1}(\mathfrak b)$ is $\sigma_{\alpha_2}\sigma_{\alpha_1}\tilde\lambda$. Meanwhile, the longest root in the positive system $\sigma_{\alpha_2}\sigma_{\alpha_1}(\Phi_0^+)$ is $\alpha$. Decompose $V(\tilde\lambda)$ as a direct sum of simple $\mathfrak{sl}^{\alpha}_2$-submodules. By a similar argument as above, we obtain that the largest weight of these simple summands is $$\begin{aligned}(\sigma_{\alpha_2}\sigma_{\alpha_1}\tilde\lambda)(h_{\alpha}) &=(\sigma_{\alpha_2}\sigma_{\alpha_1}\tilde\lambda)(h_{\sigma_{\alpha_2}\sigma_{\alpha_1}(\epsilon_1-\epsilon_n)})\\ &=(\sigma_{\alpha_2}\sigma_{\alpha_1}\tilde\lambda)(\tilde\sigma_{\alpha_2}\tilde\sigma_{\alpha_1}(h_{\epsilon_1-\epsilon_n}))\\ &=\tilde{\lambda}(h_{\epsilon_1-\epsilon_n})\\&=n-1.\end{aligned}$$ This implies that $f_{\alpha}^nV(\tilde\lambda)=0$ and similarly $e_{\alpha}^nV(\tilde\lambda)=0$. ◻ Set $d_1=n^2(n-1)$. Let $$U^{d_1}({\mathfrak g}_{\bar{0},\mathbb C})=\langle x_1\cdots x_k\in U({\mathfrak g}_{\bar{0}, \mathbb C})| x_1,\dots, x_k\in {\mathfrak g}_{\bar{0}}, k\leq d_1\rangle.$$ With the assumption $p>d_1$, the notation $u^{d_1}({\mathfrak g}_{\bar{0}})$($\subseteq u({\mathfrak g})$) can be defined similarly. **Lemma 22**. *Let $\vartheta: U({\mathfrak g}_{\bar{0}, \mathbb C})\longrightarrow \text{End}_{\mathbb C} V(\tilde\lambda)$ be the representation afforded by the ${\mathfrak g}_{\bar{0}, \mathbb C}$-module $V(\tilde\lambda)$. Then $\vartheta (U^{d_1}({\mathfrak g}_{\bar{0}, \mathbb C}))=\text{End}_{\mathbb C} V(\tilde\lambda)$.* *Proof.* Set $\text{ann}(V(\tilde\lambda))=\{u\in U({\mathfrak g}_{\bar{0}, \mathbb C})|u V(\tilde\lambda)=0\}$. Then $\text{ann} V(\tilde\lambda)$ is an ideal of $U({\mathfrak g}_{\bar{0}})$. 
Moreover, $\bar U({\mathfrak g}_{\bar{0}, \mathbb C})=U({\mathfrak g}_{\bar{0}, \mathbb C})/\text{ann} (V(\tilde\lambda))$ is primitive and $V(\tilde\lambda)$ is a faithful $\bar U({\mathfrak g}_{\bar{0}, \mathbb C})$-module. By [@hunw Theorem 1.12, p. 420], $\vartheta(\bar U({\mathfrak g}_{\bar{0},\mathbb C}))$ is dense in $\text{End}_{\mathbb C}V(\tilde\lambda)$, implying that $\vartheta(U({\mathfrak g}_{\bar{0}, \mathbb C}))=\text{End}_{\mathbb C} V(\tilde\lambda)$. From the proof of above lemma, we see that, for $\alpha\in\Phi_0^+$, each simple $\mathfrak{sl}^{\alpha}_2$-submodule of $V(\tilde\lambda)$ has dimension $\leq n$. Then $\vartheta(h_{\alpha})$ satisfying either $$(\vartheta(h_{\alpha})-(n-1))(\vartheta(h_{\alpha})-(n-3))\cdots (\vartheta(h_{\alpha})+(n-1))=0$$ or $$(\vartheta(h_{\alpha})-(n-2))(\vartheta(h_{\alpha})-(n-4))\cdots (\vartheta(h_{\alpha})+(n-2))=0$$ or both. Hence $\vartheta(U(\mathfrak h))$ is spanned by the images of $h_1^{k_1}\cdots h_n^{k_n}, 0\leq k_i\leq n-1$. It follows that $\text{End}_{\mathbb C} V(\tilde\lambda)$ is spanned by the images under $\vartheta$ of the elements $$e_{\alpha_1}^{s_1}\cdots e_{\alpha_d}^{s_d}f_{\alpha_1}^{t_1}\cdots f_{\alpha_d}^{t_d}h_1^{k_1}\cdots h_n^{k_n},\ 0\leq s_i,t_i,k_i\leq n-1.$$ All these elements are contained in $U^{d_1}({\mathfrak g}_{\bar{0}})$, then the lemma follows. ◻ Let ${\mathfrak g}_{\bar{0}, \mathbb Z}$ be the $\mathbb Z$-span of the basis vectors $\tilde{e}_{ij}$ of ${\mathfrak g}_{\bar{0}, \mathbb C}$. Let $\mathcal U_{\mathbb Z}$ and $\mathcal U'_{\mathbb Z}$ denote respectively the Kostant $\mathbb Z$-form of $U({\mathfrak g}_{\bar{0}, \mathbb C})$ and $U({\mathfrak g}'_{\bar{0}, \mathbb C})$ (see [@hu1 26]). Let $\hbar=\sum_{i=1}^n h_i\in{\mathfrak g}_{\bar{0}, \mathbb Z}$. Then $[\hbar,\ {\mathfrak g}_{\bar{0},\mathbb C}]=0$. Let $\mathcal U(\hbar)_{\mathbb Z}$ be the $\mathbb Z$-submodule of $\mathcal U_{\mathbb Z}$ generated by all $\binom{\hbar}{a}, a\geq 0$. Then we have $\mathcal U_{\mathbb Z}=\mathcal U'_{\mathbb Z}\otimes_{\mathbb Z}\mathcal U(\hbar)_{\mathbb Z}$. Let ${\bar V}(\lambda)=\mathcal U_{\mathbb Z}v^+\otimes F$. Then ${\bar V}(\lambda)$ is a rational $G$-module (hence restricted ${\mathfrak g}_{\bar{0}}$-module). Since $p>n$, the maximal vector $v^+\otimes 1\in {\bar V}(\lambda)$ has the restricted weight $\lambda=\tilde\lambda\otimes 1\in \Lambda_0$. Clearly we have $\bar V(\lambda)=\mathcal U'_{\mathbb Z}v^+\otimes F$. By [@hu2 Proposition 1.2], the ${\mathfrak g}_{\bar{0}}'$-module $\bar V(\lambda)$ has a unique maximal submodule. It follows that $\bar V(\lambda)$, as a ${\mathfrak g}_{\bar{0}}$-module, also has a unique maximal submodule. Therefore, $\bar V(\lambda)$ has a unique quotient $L(\lambda)$ simple as a restricted ${\mathfrak g}_{\bar{0}}$-module (hence as a $G$-module). **Theorem 23**. *Let $\vartheta$ be the representation of $u({\mathfrak g}_{\bar{0}})$ afforded by ${\mathfrak g}_{\bar{0}}$-module $L(\lambda)$. Then $\vartheta(u^{d_1}({\mathfrak g}_{\bar{0}}))=\text{End}_F L(\lambda)$.* *Proof.* By Lemma 3.18 we have $e_{\alpha}^n\bar V(\lambda)=f_{\alpha}^n\bar V(\lambda)=0$ and hence $e_{\alpha}^nL(\lambda)=f_{\alpha}^nL(\lambda)=0$ for all $\alpha\in \Phi_0^+$. From the proof of Lemma 3.19, we see that $L(\lambda)$ is annihilated by $$\Pi_{k=-(n-1)}^{n-1}(\vartheta(h_{\alpha})-k)$$ for each $\alpha\in \Phi_0^+$. Then the theorem follows from a similar argument as that used in the proof of Lemma 3.19. 
◻ ### An $\text{ad}_{{\mathfrak g}'_{\bar{0}}}$-submodule of $u^{d_1}({\mathfrak g})$ In the following, we continue making the preparation for constructing central elements in $\bar{\mathfrak u}$. Let $U({\mathfrak g}_Q)$ be the universal enveloping superalgebra of the Lie superalgebra $p(n)$ over $Q$. Set $$V_Q=:U^{d_1}({\mathfrak g}_{\bar{0}, Q})+\wedge^d({\mathfrak g}_{1, Q}).$$ Then $V_Q$ is an $\text{ad}_{{\mathfrak g}'_{\bar{0}, Q}}$-submodule of $U^{d_1}({\mathfrak g}_Q)$. Since ${\mathfrak g}_{\bar{0}, Q}'\cong \mathfrak{sl}(n, Q)$ is semisimple, we have by [@hu1 Theorem 6.3] that $V_Q=V_1\oplus V_2\oplus \cdots \oplus V_s$, where each $V_i$ is a simple ${\mathfrak g}'_{\bar{0}, Q}$-module. **Lemma 24**. *Each $V_i$ is generated by a maximal vector $v_{\lambda}\in V_i$ of dominant integral weight $\lambda$.* *Proof.* Since $\mathbb C$ is algebraically closed and since ${\mathfrak g}'_{\bar{0}}$ is semisimple, $V_i\otimes_Q\mathbb C$ is a direct sum of simple ${\mathfrak g}'_{\bar{0}, \mathbb C}$-modules each of the form $V(\lambda)$ with $\lambda$ dominant integral (see [@hu1 21.2]). Let $V(\lambda)$ be one of the summands, let $v_{\lambda}\in V(\lambda)$ be the maximal vector and let $\{\xi_i| i\in I\}$ be a basis of $\mathbb C$ over $Q$. Write $v_{\lambda}=\sum_{j=1}^t \xi_jv_j$, $v_j\in V_i$. If $t=1$, we let $v_1=v_{\lambda}\in V_i$. Then $U({\mathfrak g}'_{\bar{0}, Q})v_{\lambda}\subseteq V_i$ is a ${\mathfrak g}'_{\bar{0}, Q}$-submodule of $V_i$, forcing $V_i=U({\mathfrak g}'_{\bar{0}, Q})v_{\lambda}$, as required. If $t>1$, then for all $\alpha\in \Phi_0^+$, we have $e_{\alpha}v_{\lambda}=0$ and hence $\sum^t_{j=1}\xi_j(e_{\alpha}v_j)=0$. Applying every $$f\in \text{Hom}_Q(V_i,Q)\subseteq \text{Hom}_{\mathbb C}(V_i\otimes_Q \mathbb C, \mathbb C)$$ to this equation, we have $\sum_{j=1}^t \xi_jf(e_{\alpha}v_j)=0$, and hence $f(e_{\alpha}v_j)=0$ for each $j$. Therefore, for each $j$, we have $e_{\alpha}v_j=0$ for all $\alpha\in\Phi_0^+$. Similarly, we obtain $h_kv_j=\lambda(h_k)v_j$ for all $k=1,\dots, n$. It follows that each $v_j$ is a maximal vector of weight $\lambda$. So that $U({\mathfrak g}'_{\bar{0}, Q})v_j\subseteq V_i$ is a ${\mathfrak g}'_{\bar{0}, \mathbb C}$-submodule, implying that $V_i=U({\mathfrak g}'_{\bar{0}, Q})v_j$, as required. ◻ Fix a PBW basis of $U({\mathfrak g}_{\mathbb C})$ consisting of monomials of the natural basis of ${\mathfrak g}_{\mathbb C}$. Let $U({\mathfrak g}_{\mathbb Z})$ denote the $\mathbb Z$-span of this basis, and let $V_{\mathbb Z}$ be the $\mathbb Z$-span of the basis vectors contained in $V_Q$. By the lemma above, each $V_i$ may be written as $V(\lambda_i)$ as in [@hu1 21.1]. In addition, we may assume $v_{\lambda_i}=\sum_{i=1}^r c_iv_i\in V_{\mathbb Z}$ with each $v_i$ a PBW basis vector, each $c_i\in\mathbb Z$ and $(c_1,c_2,\dots, c_r)=1$. Then $\mathcal U'_{\mathbb Z}v_{\lambda_i}$ is an admissible lattice in $V(\lambda_i)$. According to the proof of [@hu1 Theorem 27.1], $\mathcal U'_{\mathbb Z}v_{\lambda_i}$ is a free $\mathbb Z$-module with $\mathbb Z$-rank $\text{dim}V(\lambda_i)$. Moreover, we have $\mathcal U'_{\mathbb Z}v_{\lambda_i}\subseteq V_{\mathbb Z}$ by [@hu1 Corollary 26.3]. Set $\bar V(\lambda_i)=\mathcal U'_{\mathbb Z}v_{\lambda_i}\otimes F$. This is a rational $G'$-module and hence a restricted ${\mathfrak g}'_{\bar{0}}$-module (see [@hu2]). Then the inclusion $\mathcal U'_{\mathbb Z}v_{\lambda_i}\subseteq V_{\mathbb Z}$ induces a $G'$-module homomorphism from $\bar V(\lambda_i)$ to $V=:V_{\mathbb Z}\otimes F$. 
Remark: Since $p>2$, we have $U({\mathfrak g}_{\mathbb Z})\otimes F\cong U({\mathfrak g})$, which gives $V\cong U^{d_1}({\mathfrak g}_{\bar{0}})\oplus \wedge ^d({\mathfrak g}_1)$. Note that $\mathcal U'_{\mathbb Z}v_{\lambda_1}\oplus \cdots \oplus\mathcal U'_{\mathbb Z}v_{\lambda_s}\subseteq V_{\mathbb Z}$ is a free $\mathbb Z$-module of $\mathbb Z$-rank $$\sum_{i=1}^s\text{dim} V(\lambda_i)=\text{dim} V_Q.$$ For each $V(\lambda_i)$, there is a contravariant bilinear form whose determinant on $\mathcal U'_{\mathbb Z}v_{\lambda_i}$ is an integer $D_{\lambda_i}$ ([@jj2]). The induced bilinear form on $\bar V(\lambda_i)$ has the unique maximal $G'$-submodule as its radical. Assume $(C1)$: $p\nmid \Pi^s_{i=1} D_{\lambda_i}$. Then the contravariant form on each $\bar V(\lambda_i)$ is nondegenerate, and hence each $\bar V(\lambda_i)$ is a simple ${\mathfrak g}_{\bar{0}}'$-module (and a simple $G'$-module). Therefore, we obtain a completely reducible $G'$-submodule $\bar V(\lambda_1)+ \cdots + \bar V(\lambda_s)$ of $V$. Let $v_1,\dots, v_l$ denote the fixed PBW basis of $V_{\mathbb Z}$. Then we have $$(v_{\lambda_1}, \dots, v_{\lambda_s})=(v_1,\dots, v_l)A_{l\times s},\qquad (*)$$ where $A_{l\times s}$ is an integral matrix. For each $v\in V_{\mathbb Z}$, denote $v\otimes 1\in V$ by $\bar v$. Denote the image of $A_{l\times s}$ under the canonical map $\mathbb Z^{l\times s}\longrightarrow F_p^{l\times s}$ by $\bar A_{l\times s}$. It follows that $$(\bar v_{\lambda_1}, \dots, \bar v_{\lambda_s})=(\bar v_1,\dots, \bar v_l)\bar A_{l\times s}.$$ Since $v_{\lambda_1}, \dots, v_{\lambda_s}$ are linearly independent, there is a submatrix $A'_{s\times s}$ in $A_{l\times s}$ with $\text{det} A'=m\neq 0$. Assume $(C1'): p>|m|$. Then we have $\text{rank}(\bar A_{l\times s})=s$, and hence the elements $\bar v_{\lambda_1}, \dots, \bar v_{\lambda_s}\in V$ are linearly independent.

**Corollary 25**. *With assumptions $(C1)$ and $(C1')$, we have $$\bar V(\lambda_1)\oplus \cdots \oplus \bar V(\lambda_s)=V.$$*

*Proof.* Since both sides have the same dimension, it suffices to show that $\bar V(\lambda_1)+\cdots +\bar V(\lambda_s)\subseteq V$ is a direct sum. Assume $x_1+\cdots +x_s=0$, $x_i\in \bar V(\lambda_i)$, $1\leq i\leq s$. Since each $\bar V(\lambda_i)$ is a simple $G'$-module, we may assume that each $x_i$ is a weight vector with respect to the maximal torus $T\cap G'$. Suppose that there is $i$, $1\leq i\leq s$, such that $x_i\neq 0$. If $\text{wt} (x_i)<\lambda_i$, then there is $\alpha\in \{\epsilon_1-\epsilon_2,\dots, \epsilon_{n-1}-\epsilon_n\}$ such that $e_{\alpha} x_i\neq 0$. Clearly we have $\text{wt} (x_i)<\text{wt} (e_{\alpha}x_i)\leq \lambda_i$. Since each $\bar V(\lambda_i)$ has finitely many nonzero weight spaces, by repeated applications of appropriate $e_{\alpha}$'s we have $$x_1'+\cdots +x_s'=0, \ x'_i\in \bar V(\lambda_i),$$ where the $x'_i$'s are not all zero and each $x'_i$ is a multiple of $\bar v_{\lambda_i}$, contrary to the fact that $\bar v_{\lambda_1}, \dots, \bar v_{\lambda_s}$ are linearly independent. Thus, we conclude that $x_i=0$ for all $1\leq i\leq s$, which completes the proof. ◻

With the assumption $p>d_1$, we have an isomorphism of rational $G$-modules (hence of restricted ${\mathfrak g}_{\bar{0}}$-modules) $U^{d_1}({\mathfrak g}_{\bar{0}})\cong u^{d_1}({\mathfrak g}_{\bar{0}})$, and hence $V\cong u^{d_1}({\mathfrak g}_{\bar{0}})\oplus \wedge ^d({\mathfrak g}_1)$.
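For orientation, it may help to record how the numerical constants introduced so far compare; this is only a small bookkeeping remark of ours, not an additional assumption: $$d=|\Phi_0^+|=\frac{n(n-1)}{2},\qquad 2d=|\Phi_0|=n(n-1),\qquad d_1=n^2(n-1)=n\cdot 2d,$$ so that, for instance, $n=3$ gives $d=3$, $2d=6$ and $d_1=18$; in particular $d_1>2d$ for every $n\ge 2$.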
Together with assumptions $(C1)$ and $(C1')$, we obtain an isomorphism of rational $G'$-modules $$\bar V(\lambda_1)\oplus \cdots \oplus \bar V(\lambda_s)\cong u^{d_1}({\mathfrak g}_{\bar{0}})\oplus \wedge ^d({\mathfrak g}_1).$$ Therefore, both $u^{d_1}({\mathfrak g}_{\bar{0}})$ and $\wedge ^d({\mathfrak g}_1)$ are completely reducible as restricted $\text{ad}_{{\mathfrak g}'_{\bar{0}}}$-modules. In the remainder of the paper we assume the following conditions: (H1) $(C1)$ and $(C1')$; (H2) $p>d_1$; (H3) $p>\text{dim}\wedge^d ({\mathfrak g}_1)=C^d_{n^2}$. Under these assumptions, since $d_1>2d$, the condition on $p$ in Theorem 3.14 is satisfied.

### Central elements in $\bar{\mathfrak u}$

Recall the weight $\lambda=(n-1)\epsilon_1+\cdots +\epsilon_{n-1}\in\Lambda_0$. From 3.5.1 we see that $\lambda$ is typical, since $p>d_1=(n-1)n^2\geq n^2\geq 2n-1$. Let $\vartheta$ be the representation of $u({\mathfrak g}_{\bar{0}})$ afforded by the simple $G$-module $L(\lambda)$ in 3.5.1. According to Theorem 3.20, $\vartheta$ is an epimorphism from $u({\mathfrak g}_{\bar{0}})$ to $\text{End}_F L(\lambda)$. This induces an isomorphism of algebras $$u({\mathfrak g}_{\bar{0}})/\text{ker} \vartheta\cong \text{End}_F L(\lambda).$$ Denote the restriction of $\vartheta$ to $u^{d_1}({\mathfrak g}_{\bar{0}})$ by $\vartheta_1$. Then $\text{ker}\vartheta_1=\text{ker}\vartheta\cap u^{d_1}({\mathfrak g}_{\bar{0}})$, which is easily seen to be an $\text{Ad}G$-submodule of $u^{d_1}({\mathfrak g}_{\bar{0}})$. By Theorem 3.20 we obtain an isomorphism of $G$-modules $$u^{d_1}({\mathfrak g}_{\bar{0}})/\text{ker} \vartheta_1\cong \text{End}_F L(\lambda).$$ Recall the mapping $\varphi$. Let $v_{\lambda}\in L(\lambda)$ be the maximal vector. Since $\lambda$ is typical, we have $$(YZ)v_{\lambda}=(YZ)_0v_{\lambda}=\delta_{\lambda}v_{\lambda}\neq 0,$$ implying that $\text{Im}\varphi\not\subseteq \text{ker}\vartheta$. Since $\text{Im}\varphi\subseteq u^d({\mathfrak g}_{\bar{0}})$ and $d<d_1$, we get $\text{Im}\varphi \not\subseteq\text{ker}\vartheta_1$. Since $u^{d_1}({\mathfrak g}_{\bar{0}})$ is a completely reducible ${\mathfrak g}_{\bar{0}}'$-module, so is $\text{Im}\varphi$. It follows that $L(\mu)\not\subseteq\text{ker}\vartheta_1$ for some simple summand $L(\mu)$ of $\text{Im}\varphi$. Since $L(\mu)$ is simple, we have $L(\mu)\cap \text{ker}\vartheta_1=\{0\}$. Since $u^{d_1}({\mathfrak g}_{\bar{0}})$ is completely reducible, we have $$u^{d_1}({\mathfrak g}_{\bar{0}})=(L(\mu) +\text{ker}\vartheta_1)\oplus W$$ for some $G'$-submodule $W$. Let $T_1$ be the toral subgroup of $G$ defined by $$T_1=\{\text{diag}(t,\dots, t)|t\in F^*\}.$$ Then we have $G=G'\times T_1$. Since $T_1$ acts trivially on $u({\mathfrak g}_{\bar{0}})$, every $G'$-submodule of $u({\mathfrak g}_{\bar{0}})$ is naturally a $G$-submodule. Then we have a $G$-module isomorphism $$u^{d_1}({\mathfrak g}_{\bar{0}})/\text{ker}\vartheta_1\cong L(\mu)+W\cong \text{End}_F L(\lambda).$$ Since the trace form on $\text{End}_F L(\lambda)$ is $G$-invariant, nondegenerate and symmetric, so is the induced form on $L(\mu)+W$. Since $\wedge^d({\mathfrak g}_1)$ is completely reducible, we have an isomorphism of $u({\mathfrak g}_{\bar{0}}')$-modules (also $G'$-modules) $$\text{ker}\varphi \oplus \text{Im}\varphi\cong \wedge^d({\mathfrak g}_1).$$ Let $Z_1,\dots, Z_r$ be a basis of the simple $u({\mathfrak g}'_{\bar{0}})$-submodule of $\wedge^d({\mathfrak g}_1)$ whose image under $\varphi$ is $L(\mu)$.
Then $(YZ_1)_0, \dots, (YZ_r)_0$ is a basis of $L(\mu)$, which we extend to a basis of $L(\mu)+W$: $$(YZ_1)_0, \dots, (YZ_r)_0, w_1,\dots, w_t.$$ Let $(YZ_1)_0^{\lor}, \dots, (YZ_r)_0^{\lor}, w_1^{\lor},\dots, w_t^{\lor}$ be the dual basis. Set $$W^{\perp}=\{x\in L(\mu)+W| (x,W)=0\}.$$ Since $W$ is a $G$-submodule of $u({\mathfrak g}_{\bar{0}})$, so is $W^{\perp}$. By [@ja Section 6.1], we have $$\text{dim} W^{\perp}=\text{dim} (L(\mu)+W)-\text{dim} W=\text{dim}L(\mu)=r,$$ implying that $W^{\perp}=\langle (YZ_1)_0^{\lor},\dots, (YZ_r)_0^{\lor}\rangle$. For $g\in G'$, assume $$g(Z_1,\dots, Z_r)^t=(b^g_{ij})_{r\times r} (Z_1,\dots, Z_r)^t$$ and $$g((YZ_1)_0^{\lor}, \dots, (YZ_r)_0^{\lor})^t=(c_{ij}^g)_{r\times r} ((YZ_1)_0^{\lor}, \dots, (YZ_r)_0^{\lor})^t.$$ Then since $Y$ is a $G'$-invariant, we have $$g(YZ_1,\dots, YZ_r)^t=(b^g_{ij})_{r\times r} (YZ_1,\dots, YZ_r)^t.$$ Since the mapping $(,)_0$ is a $G$-module homomorphism, we obtain $$g((YZ_1)_0, \dots, (YZ_r)_0)^t=(b_{ij}^g)_{r\times r} ((YZ_1)_0,\dots, (YZ_r)_0)^t.$$ Then the $G$-invariance of the bilinear form implies that $(b_{ij}^g)_{r\times r}^t=(c_{ij}^g)_{r\times r}^{-1}.$ Set $\omega=\sum_{i=1}^r Z_i(YZ_i)_0^{\lor}\in u({\mathfrak g})$. The following conclusion follows from a standard argument in linear algebra.

**Lemma 26**. *For each $g\in G'$, we have $g\omega=\omega$.*

Then by [@jj1 7.11(5), Part I], we have $[x, \omega]=0$ for all $x\in {\mathfrak g}_{\bar{0}}'$.

**Lemma 27**. *$(Y\omega)_0\neq 0$.*

*Proof.* Since $(Y\omega)_0=\sum_{i=1}^r (YZ_i)_0(YZ_i)_0^{\lor}$, we have $$\text{tr}\, \vartheta(Y\omega)_0=\sum_{i=1}^r \text{tr}\, \vartheta((YZ_i)_0)\vartheta((YZ_i)_0^{\lor})=\sum_{i=1}^r((YZ_i)_0, (YZ_i)_0^{\lor})=r\neq 0,$$ where the last inequality follows from the assumption $p>\text{dim}\wedge^d({\mathfrak g}_1)\geq r$. It follows that $(Y\omega)_0\neq 0$. ◻

Let $M=M_{\bar{0}}\oplus M_{\bar{1}}$ be a restricted ${\mathfrak g}$-module. If both $M_{\bar{0}}$ and $M_{\bar{1}}$ are rational $T$-modules such that the $\mathfrak h$-action is induced from the differential of the $T$-action, and if in addition $t(xm)=\text{Ad}(t)(x)(tm)$ for all $t\in T$, $x\in {\mathfrak g}_{\bar{0}}$ and $m\in M$, then we call $M$ a $u({\mathfrak g})-T$-module ([@jj4 1.8]). For example, $u({\mathfrak g})$ and $\bar{\mathfrak u}$ are both $u({\mathfrak g})-T$-modules. Fix an order of the root vectors $y_{ij}$ of ${\mathfrak g}_{-1}$ in the following lemma.

**Lemma 28**. *(1) Let $S=\Pi_{i<j}\text{ad} y_{ij} (\omega)\in u({\mathfrak g})$ and let $\bar S$ be its image in $\bar {\mathfrak u}$. Then $\bar S\in \mathcal Z$ and $h(\bar S)=k\Theta$ for some $k\in F^*$.*

*(2) For $z\in u({\mathfrak g}_{\bar{0}})^G$, let $S_z=\Pi_{i<j} \text{ad} y_{ij} (z\omega)\in u({\mathfrak g})$ and let $\bar S_z$ be its image in $\bar{\mathfrak u}$. Then $\bar S_z\in \mathcal Z$ and $h(\bar S_z)=h(\bar S)h(z)$.*

*Proof.* (1) Since $F Y$ is a 1-dimensional $\text{ad}_{{\mathfrak g}_{\bar{0}}}$-module, $Y\otimes_F M$ is a restricted simple ${\mathfrak g}_{\bar{0}}$-module, for each restricted simple ${\mathfrak g}_{\bar{0}}$-module $M$. Then the mapping $Y\otimes_F -$ defines a bijection on the set of all restricted simple ${\mathfrak g}_{\bar{0}}$-modules. Now let $L(\nu)$ be a restricted simple ${\mathfrak g}_{\bar{0}}$-module such that $Y\otimes_F L(\nu)$ is isomorphic to $L(\lambda)$ above. By the proof of Lemma 3.15, we have $(Y\omega)_0 Y\otimes m\neq 0$ for some $m\in L(\nu)$.
Applying Proposition 2.3 to $K(L(\nu))$, we have $$(Y\omega)_0Y\otimes m=Y\omega Y\otimes m=SY\otimes m=\bar SY\otimes m,$$ and hence $\bar S\neq 0$. Since $\bar{\omega}\in \bar{\mathfrak u}_d$, ${\mathfrak g}_1\bar{\omega}=\bar{\omega}{\mathfrak g}_1=0$. We also have $g\bar{\omega}=\bar{\omega}$ for all $g\in G'$. Since $(Y\omega)_0$ is a $G$-invariant, it follows that $\bar{\omega}$ has $T$-weight $-\sigma$, where $\sigma$ is the $T$-weight of $Y$. Let $V\subseteq \bar{\mathfrak u}$ be the $\text{ad}_{{\mathfrak g}}$-submodule generated by $\bar{\omega}$, which is also a $G$-submodule. Since $\bar S\neq 0$, we have $$V\cong u({\mathfrak g})\otimes_{ u({\mathfrak g}_{\bar{0}}+{\mathfrak g}_1)}F\bar{\omega}=K(F\bar{\omega}).$$ Clearly $V$ is a $u({\mathfrak g})-T$-module. Since $V\cong \wedge ({\mathfrak g}_{-1})\otimes F\bar\omega$, each element of $V$ may be written as $\sum_i u_i \bar\omega$ with $u_i\in \wedge ^i({\mathfrak g}_{-1})$. Let $M\subseteq V$ be a simple $u({\mathfrak g})-T$-submodule and let $\sum_i u_i \bar\omega\in M$ be a nonzero $T$-weight vector. By applying appropriate $y_{ij}$'s to this vector, we obtain $Y\bar\omega=\bar S\in M$. Since $M$ is simple, we obtain $$M=\bar{\mathfrak u}\cdot\bar S=\wedge^{\leq d}({\mathfrak g}_1)\cdot\bar S.$$ Let $\wedge({\mathfrak g}_1)^+=:\sum_{i\geq 1} \wedge^i({\mathfrak g}_1)$. Then a short computation shows that $\wedge({\mathfrak g}_1)^+\bar S$ is a $u({\mathfrak g})-T$-submodule of $M$. By comparing the $T$-weights we see that $\bar S\notin \wedge({\mathfrak g}_1)^+\bar S$, implying that $M=F\bar S$, and hence $[\bar S, {\mathfrak g}_{\bar 1}]=0$. From the proof of Lemma 3.16, we see that $Y$ is a $G'$-invariant. It follows that $\bar S$ is also a $G'$-invariant. This gives $\bar S\in\mathcal Z$, since the $T$-weight of $\bar S$ is 0. Note that $\bar S\in u^{2d}({\mathfrak g})$. Then we have by Theorem 3.14 that $h(\bar S)=k\Theta +k'$ for some $k,k'\in F$. Since $\bar S$ annihilates the trivial ${\mathfrak g}$-module, in view of the fact that $\Theta (0)=0$, we have $k'=h(\bar S)(0)=0$. From above we have $\bar S K(L(\nu))\neq 0$, implying that $h(\bar S)(\nu)=k\Theta (\nu)\neq 0$ and hence $k\neq 0$.

\(2\) By a similar argument as above we have $\bar S_z\in \mathcal Z$, for $z\in u({\mathfrak g}_{\bar{0}})^G$. Let $\mu\in \Lambda_0$ and let $v_{\mu}\in K(L(\mu))$ be the maximal vector of weight $\mu$. Since $FY$ is an $\text{ad}_{{\mathfrak g}_{\bar{0}}}$-module, we have $Yz=(z+c(z))Y$ for some $c(z)\in F$, and hence $$\begin{aligned} h(\bar S_z)(\mu) Y\otimes v_{\mu}&=S_zY\otimes v_{\mu}\\ &=Yz\omega Y\otimes v_{\mu}\\&=(z+c(z))Y\omega Y\otimes v_{\mu}\\& =(z+c(z))\bar SY\otimes v_{\mu}\\&=\bar S(z+c(z))Y\otimes v_{\mu}\\&=\bar SY\otimes zv_{\mu}\\&=h(z)(\mu)\bar SY\otimes v_{\mu}\\& =h(z)(\mu)h(\bar S)(\mu)Y\otimes v_{\mu},\end{aligned}$$ implying that $h(\bar S_z)(\mu)=h(z)(\mu)h(\bar S)(\mu)$ for all $\mu\in\Lambda_0$, and hence $h(\bar S_z)=h(z)h(\bar S)$. ◻

Immediately, we have the following conclusion.

**Corollary 29**. *$$h(\mathcal Z)=F+\Theta u(\mathfrak h)^{W\cdot}.$$*

# Representations of $\bar{\mathfrak u}$

For each $\mu\in\Lambda_0$, we define an algebra homomorphism $\chi_{\mu}: \mathcal Z\longrightarrow F$ by $$\chi_{\mu}(z)=h(z)(\mu),\ z\in\mathcal Z.$$ Two weights $\mu, \nu\in \Lambda_0$ are said to be linked, denoted $\mu\sim \nu$, if there is $\sigma\in W$ such that $\mu+\rho=\sigma (\nu+\rho)$ ([@hu2]).
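As a quick illustration of the linkage relation (this small example is ours; it only uses the dual form $s_{\alpha}\cdot\lambda=s_{\alpha}(\lambda)-\alpha$, for a simple root $\alpha$, of the identity recorded earlier), take $n=2$ and $\alpha=\epsilon_1-\epsilon_2$: $$s_{\alpha}\cdot(\mu_1\epsilon_1+\mu_2\epsilon_2)=s_{\alpha}(\mu_1\epsilon_1+\mu_2\epsilon_2)-\alpha=(\mu_2-1)\epsilon_1+(\mu_1+1)\epsilon_2,$$ so for $n=2$ the linkage class of $\mu=\mu_1\epsilon_1+\mu_2\epsilon_2\in\Lambda_0$ is $\{\mu,\ (\mu_2-1)\epsilon_1+(\mu_1+1)\epsilon_2\}$, with the coefficients read in $F_p$.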
Let $U({\mathfrak g}'_{\bar{0}})$ be the universal enveloping algebra of ${\mathfrak g}_{\bar{0}}'$ over $F$, and let $$U({\mathfrak g}'_{\bar{0}})^{G'}=\{u\in U({\mathfrak g}_{\bar{0}}')| g\cdot u=u\ \text{for all}\ g\in G'\}.$$ Set $\mathfrak h_1=\mathfrak h\cap {\mathfrak g}'_{\bar{0}}$. According to [@jj 9.1-9.4], there is an algebra isomorphism $$\pi: U({\mathfrak g}'_{\bar{0}})^{G'}\longrightarrow U(\mathfrak h_1)^{W\cdot}$$ with kernel $\mathfrak n_0^-U({\mathfrak g}_{\bar{0}}')+U({\mathfrak g}_{\bar{0}}')\mathfrak n_0^+$. For each $\lambda\in\mathfrak h_1^*$, define an algebra homomorphism $\text{cen}_{\lambda}: U({\mathfrak g}_{\bar{0}}')^{G'}\longrightarrow F$ by $\text{cen}_{\lambda}(u)=\pi (u)(\lambda)$, $u\in U({\mathfrak g}_{\bar{0}}')^{G'}$ ([@jj 9.4]). Let $\xi: U(\mathfrak h_1)\longrightarrow u(\mathfrak h_1)$ be the canonical homomorphism. By [@hu2 Lemma 3.3], the restriction $\xi: U(\mathfrak h_1)^{W\cdot}\longrightarrow u(\mathfrak h_1)^{W\cdot}$ is an epimorphism. It follows that $\xi\pi: U({\mathfrak g}'_{\bar{0}})^{G'}\longrightarrow u(\mathfrak h_1)^{W\cdot}$ is also an epimorphism. Remark: The assumption on $p$ in [@hu2 Lemma 3.3] is satisfied in our case here, since $|W|=n!$. For each $\lambda=\lambda_1\epsilon_1+\cdots +\lambda_n\epsilon_n\in\Lambda_0$. we have $$h_i^p(\lambda)-h_i(\lambda)=\lambda_i^p-\lambda_i=0\ \text{for}\ 1\leq i\leq n.$$ Then we get $f(h)(\lambda)=\xi(f(h))(\lambda)$ for all $f(h)\in U(\mathfrak h)$. In particular, if $\lambda\in\Lambda_0\cap \mathfrak h_1^*$, then we get $$\text{cen}_{\lambda}(u)=\pi(u)(\lambda)=\xi\pi (u)(\lambda)$$ for all $u\in U({\mathfrak g}_{\bar{0}}')^{G'}$. **Theorem 30**. *Let $\mu, \nu\in \Lambda_0$. Then $\chi_{\mu}=\chi_{\nu}$ if and only if both $\mu$ and $\nu$ are atypical or both are typical with $\mu \sim \nu$.* *Proof.* Suppose $\chi_{\mu}=\chi_{\nu}$. Then we have $h(z)(\mu)=h(z)(\nu)$ for all $z\in\mathcal Z$. Write $$h(z)=\Theta f(h)+c, \ f(h)\in u(\mathfrak h)^{W\cdot},\ c\in F.$$ Then we have $\Theta (\mu)f(h)(\mu)=\Theta(\nu)f(h)(\nu)$ for all $f(h)\in u(\mathfrak h)^{W\cdot}$. If $\mu$ is atypical, then we obtain $\Theta (\nu)f(h)(\nu)=0$ for all $f(h)\in u(\mathfrak h)^{W\cdot}$. Let $f(h)=1$. Then we get $\Theta(\nu)=0$, and hence $\nu$ is atypical. Suppose both $\mu$ and $\nu$ are typical. Let $h(z)=\Theta$. Then we get $\Theta(\mu)=\Theta(\nu)$ and hence $f(h)(\mu)=f(h)(\nu)$ for all $f(h)\in u(\mathfrak h)^{W\cdot}$. Since $u(\mathfrak h_1)^{W\cdot}\subseteq u(\mathfrak h)^{W\cdot}$, we obtain $f(h)(\mu)=f(h)(\nu)$ for all $f(h)\in u(\mathfrak h_1)^{W\cdot}$. It follows that $$\text{cen}_{\mu}(u)=\xi\pi(u)(\mu)=\xi\pi(u)(\nu)=\text{cen}_{\nu}(u)$$ for all $u\in U({\mathfrak g}'_{\bar{0}})^{G'}$, and hence $\text{cen}_{\mu}=\text{cen}_{\nu}$, and whence $\mu_{|\mathfrak h_1}=(w\cdot \nu)_{|\mathfrak h_1}$ for some $w\in W$, by [@jj Corollary 9.4]. Since $\hbar=\sum_{i=1}^nh_i\in u(\mathfrak h)^{W\cdot}$, we also have $\hbar (\mu)=\hbar (\nu)$, implying that $\mu(\hbar)=(w\cdot\nu)(\hbar)$, and hence $\mu=w\cdot \nu$. If both $\mu$ and $\nu$ are atypical, then Corollary 3.26 gives $\chi_{\mu}=\chi_{\nu}$. Assume both $\mu$ and $\nu$ are typical and $\mu\sim \nu$. For each $z\in \mathcal Z$, we have by Lemma 3.7 that $h(z)\in u(\mathfrak h)^{W\cdot}$, implying that $\chi_{\mu}(z)=\chi_{\nu}(z)$. ◻ For $\mu\in \Lambda_0$, let $\bar{\mathfrak u}_{\mu}$ denote the superalgebra $\bar{\mathfrak u}/\bar{\mathfrak u}\text{ker}\chi_{\mu}$. 
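To make the definition of $\bar{\mathfrak u}_{\mu}$ more concrete, note (this reformulation is ours and only combines the above corollary $h(\mathcal Z)=F+\Theta u(\mathfrak h)^{W\cdot}$ with the definition of $\chi_{\mu}$) that $$\chi_{\mu}(z)=c+\Theta(\mu)\,q(h)(\mu)\qquad\text{whenever}\qquad h(z)=c+\Theta q(h),\ c\in F,\ q(h)\in u(\mathfrak h)^{W\cdot}.$$ In particular $\chi_{\mu}(z)=c$ for every atypical $\mu$, so all atypical weights determine one and the same central character, hence one and the same quotient $\bar{\mathfrak u}_{\mu}$.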
Let $$\chi^0_{\mu}: u({\mathfrak g}_{\bar{0}})^G\longrightarrow F$$ be the homomorphism defined by $\chi^0_{\mu}(z)=h(z)(\mu), \ z\in u({\mathfrak g}_{\bar{0}})^G$. Define the quotient algebra $u^0_{\mu}=: u({\mathfrak g}_{\bar{0}})/u({\mathfrak g}_{\bar{0}})\text{ker}\chi^0_{\mu}$. Note: It is easy to see that $\hbar$ is an element of $u({\mathfrak g}_{\bar{0}})^G$, but not necessarily in $\mathcal Z$. **Definition 31**. *If $M=M_{\bar{0}}\oplus M_{\bar{1}}$ is a $\bar{\mathfrak u}_{\mu}$-module and also a $u({\mathfrak g})-T$-module, then we call $M$ a $\bar{\mathfrak u}_{\mu}-T$-module.* Example. For $\mu\in\Lambda_0$, the baby Verma module $Z(\mu)$ is a $\bar{\mathfrak u}_{\mu}-T$-module. A $u^0_{\mu}-T$-module can be defined similarly. **Theorem 32**. *Assume $\mu$ is typical.* *(1) If $N$ is a simple $u^0_{\mu}-T$-module, then $K(N)$ is a simple $\bar{\mathfrak u}_{\mu}-T$-module.* *(2) If $M$ is a simple $\bar{\mathfrak u}_{\mu}-T$-module, then $M^{{\mathfrak g}_1}=\{m\in M|xm=0\ \text{for all}\ x\in{\mathfrak g}_1\}$ is a simple $u^0_{\mu}-T$-module.* *Proof.* (1) By definition $N$ is a restricted simple ${\mathfrak g}_{\bar{0}}$-module annihilated by $\text{ker}\chi^0_{\mu}$. Let $v_{\lambda}\in N$ be the maximal vector of weight $\lambda$. For each $z\in u({\mathfrak g}_{\bar{0}})^G$, we have $$zv_{\lambda}=h(z)v_{\lambda}=h(z)(\lambda)v_{\lambda}=\chi^0_{\lambda}(z)v_{\lambda}$$ and $$zv_{\lambda}=(z-\chi^0_{\mu}(z))v_{\lambda}+\chi^0_{\mu}(z)v_{\lambda}=\chi^0_{\mu}(z)v_{\lambda},$$ since $z-\chi^0_{\mu}(z)\in\text{ker}\chi^0_{\mu}$. Therefore, we get $\chi^0_{\lambda}=\chi^0_{\mu}$. Applying [@hu2 Theorem 3.1], we get $\lambda_{|\mathfrak h_1}\sim \mu_{|\mathfrak h_1}$, and hence $\lambda_{|\mathfrak h_1}=(\sigma\cdot \mu)_{|\mathfrak h_1}$ for some $\sigma\in W$. By [@hu2 Lemma 3.2, 3.3], the Harish-Chandra morphism $h: u({\mathfrak g}_{\bar{0}}')^{G'}\longrightarrow u(\mathfrak h_1)^{W\cdot}$ is an epimorphism. Note that $$u({\mathfrak g}_{\bar{0}})^G\cong u({\mathfrak g}_{\bar{0}}')^{G'}\otimes u(F\hbar)$$ and $h(\hbar)=\hbar\in u(\mathfrak h)^{W\cdot}$. Then $h: u({\mathfrak g}_{\bar{0}})^{G}\longrightarrow u(\mathfrak h)^{W\cdot}$ is also an epimorphism. From $\chi^0_{\lambda}(\hbar)=\chi^0_{\mu}(\hbar)$, we obtain $\hbar (\lambda)=\hbar (\mu)$, which gives $\lambda(\hbar)=(\sigma\cdot \mu)(\hbar)$, since $\hbar \in u(\mathfrak h)^{W\cdot}$. Thus, we obtain $\lambda\sim \sigma\cdot \mu$. Then $\lambda$ is also typical by Theorem 4.1. Hence $K(N)$ is simple as $u({\mathfrak g})$-module by Proposition 2.3, and also as $\bar{\mathfrak u}$-module by Lemma 3.2. Let the $T$-action on $K(N)$ be induced from that of $N$. It remains to show that $K(N)$ is annihilated by $\text{ker}\chi_{\mu}$. To see this, let $z\in \text{ker}\chi_{\mu}$. Then we have $$\begin{aligned} zv_{\lambda}&=h(z)v_{\lambda}\\&=h(z)(\lambda)v_{\lambda}\\&=h(z)(\mu)v_{\lambda}\\&=\chi_{\mu}(z)v_{\lambda}\\&=0,\end{aligned}$$ and hence $zK(N)=0$. \(2\) Let $M=M_{\bar{0}}\oplus M_{\bar{1}}$ and let $m\in M_{\bar{0}}\cup M_{\bar{1}}$ be a nonzero $T$-weight vector. Set $M'=\wedge ({\mathfrak g}_1)m\subseteq M$. Since $\wedge^i({\mathfrak g}_1)=0$ in $\bar{\mathfrak u}$ is for $i>d$, we have $$M'=\wedge^{\leq d} ({\mathfrak g}_1)m.$$ Let $i_0$ be the largest integer such that $\wedge^{i_0}({\mathfrak g}_1)m\neq 0$. Then we have $0\leq i_0\leq d$ and ${\mathfrak g}_1 \wedge^{i_0}({\mathfrak g}_1)m=0$. Therefore we have $M^{{\mathfrak g}_1}\neq 0$. Clearly $M^{{\mathfrak g}_1}$ is $\mathbb Z_2$-graded. 
It is easy to see that $M^{{\mathfrak g}_1}$ is a $T$-submodule and a ${\mathfrak g}_{\bar{0}}$-submodule of $M$. Since $M$ is a restricted ${\mathfrak g}_{\bar{0}}$-module, so is $M^{{\mathfrak g}_1}$. There is no loss of generality in assuming that $M^{{\mathfrak g}_1}_{\bar{0}}\neq 0$. Let $L(\lambda)\subseteq M^{{\mathfrak g}_1}_{\bar{0}}$ be a simple $u({\mathfrak g}_{\bar{0}})$-submodule of highest weight $\lambda$. Then $L(\lambda)$ is a $u({\mathfrak g}_{\bar{0}})-T$-submodule. The inclusion $L(\lambda)\subseteq M_{\bar{0}}$ induces a $u({\mathfrak g})$-module homomorphism from $K(L(\lambda))$ onto $M$. Since $M$ is a $\bar{\mathfrak u}_{\mu}$-module, by applying $z\in \mathcal Z$ to the maximal vector $v_{\lambda}\in L(\lambda)$ we obtain $\chi_{\lambda}=\chi_{\mu}$, implying that $\lambda\sim \mu$ by Theorem 4.1. It follows that $\lambda$ is also typical, hence $K(L(\lambda))$ is simple, and therefore $M\cong K(L(\lambda))$. We simply write $M=K(L(\lambda))$. Then $L(\lambda)\subseteq M^{{\mathfrak g}_1}$. We claim that $M^{{\mathfrak g}_1}= L(\lambda)$. To see this, note that $$M=L(\lambda)\oplus {\mathfrak g}_{-1}M.$$ Let $N=M^{{\mathfrak g}_1}\cap {\mathfrak g}_{-1}M$. Then we have $$M^{{\mathfrak g}_1}=L(\lambda)+N.$$ Let $\tilde N\subseteq M$ be the ${\mathfrak g}$-submodule generated by $N$. By the definition of $K(L(\lambda))$ we have $\tilde N\subseteq {\mathfrak g}_{-1}M$, and hence $\tilde N\cap L(\lambda)=0$. Since $M$ is simple, we have $\tilde N=0$ and hence $N=0$. So the claim follows. It remains to show that $L(\lambda)$ is annihilated by $\text{ker}\chi^0_{\mu}$. Let $z\in u({\mathfrak g}_{\bar{0}})^G$. Since $\lambda\sim \mu$, we have $$\begin{aligned} zv_{\lambda}&=h(z)(\lambda)v_{\lambda}\\ &=h(z)(\mu)v_{\lambda}\\ &=\chi^0_{\mu}(z)v_{\lambda}.\end{aligned}$$ Then $z$ acts on $L(\lambda)$ as multiplication by $\chi^0_{\mu}(z)$ or, equivalently, $L(\lambda)$ is annihilated by $\text{ker}\chi^0_{\mu}$. It follows that $L(\lambda)$ is a simple $u^0_{\mu}-T$-module. ◻

# References

Yu. A. Bahturin, A. A. Mikhalev, V. M. Petrogradsky and M. V. Zaicev, *Infinite dimensional Lie superalgebras,* De Gruyter Expositions in Mathematics **7**, Walter de Gruyter, 1992.
R. W. Carter, *Simple groups of Lie type,* Pure and Applied Mathematics, Vol. XXXVIII, John Wiley & Sons Ltd, 1972.
J. Dixmier, *Enveloping algebras,* Graduate Studies in Mathematics, Vol. 11, Amer. Math. Soc., 1996.
J. E. Humphreys, *Introduction to Lie algebras and representation theory,* GTM **9**, Springer-Verlag, 1972.
J. E. Humphreys, *Linear algebraic groups,* GTM **21**, Springer-Verlag, 1981.
J. E. Humphreys, *Modular representations of classical Lie algebras and semisimple groups,* J. Algebra **19** (1971), 51-79.
T. W. Hungerford, *Algebra,* GTM **73**, Springer-Verlag, 1974.
N. Jacobson, *Basic algebra I,* 2nd Edition, W. H. Freeman and Company, New York, 1985.
J. C. Jantzen, *Representations of Lie algebras in prime characteristic,* Notes by I. Gordon.
J. C. Jantzen, *Representations of Algebraic Groups,* 2nd Ed., Math. Surv. and Mono. **107**, Amer. Math. Soc., 2003.
J. C. Jantzen, *Lectures on Quantum Groups,* GSM **6**, Amer. Math. Soc., 1995.
J. C. Jantzen, *Darstellungen halbeinfacher Gruppen und kontravariante Formen,* J. Reine Angew. Math. **290** (1977), 117-141.
J. C. Jantzen, *Subregular nilpotent representations of $\mathfrak{sl}_n$ and $\mathfrak{so}_{2n+1}$,* Math. Proc. Camb. Phil. Soc. **126** (1999), 223-257.
E. Kirkman and J. Kuzmanovich, *Minimal prime ideals in enveloping algebras of Lie superalgebras,* Proc. Amer. Math. Soc. **124** (1996), 1693-1702.
V. G. Kac, *Lie superalgebras,* Adv. Math. **26** (1977), 8-96.
Y. Ren, B. Shu, F. Yang and A. Zhang, *Modular representations of strange classical Lie superalgebras and the first Kac-Weisfeiler conjecture,* arXiv:2307.00483v1.
V. Serganova, *On representations of Lie superalgebra $p(n)$,* J. Algebra **258** (2002), 615-630.
H. Strade and R. Farnsteiner, *Modular Lie algebras and their representations,* Pure and Appl. Math. **116**, Marcel Dekker, New York, 1988.
C. Zhang, *On the simple modules for the restricted Lie superalgebra $sl(n|1)$,* J. Pure and Applied Algebra **213** (2009), 756-765.
---
abstract: |
  We show topological genericity for the set of functions in the space $\bigcap\limits_{p<1}H^p$ on the open unit disc such that the sequences of Taylor coefficients of the function and of all derivatives of the function are unbounded. Results of a similar nature are valid when the space $\bigcap\limits_{p<1}H^p$ is replaced by $H^p$ $(0<p<1)$ and by localized versions of such spaces. Looking at the smaller space $A(\mathbb{D}) \subseteq H^{\infty}$, we show topological genericity for the set of functions in $A(\mathbb{D})$ such that the sequences of Taylor coefficients of the function and of all its derivatives are outside of $\ell^1$. We also show topological genericity for the set of functions in the space $\bigcap\limits_{p<1}h^p$ whose harmonic conjugate does not belong to any $h^q$ $(q>0)$.
author:
- C. Pandis
title: "**Some results of topological genericity**"
---

*Keywords and phrases*: Hardy space $H^p$, Harmonic Hardy space $h^p$, Disc Algebra, Baire's theorem, Topological genericity, generic property.

*AMS classification numbers*: 30H10

# Introduction {#sec1}

If there exists an object with a "bad" property, then a general principle is that there are many such objects and their set is big in various senses. In this paper we study such properties. It is well known that for a function $f$ in $H^1$ its Taylor coefficients tend to zero, due to the Riemann-Lebesgue theorem. The space $\bigcap\limits_{p<1}H^p$, which is very close to $H^1$, has a diametrically opposite property; a generic function of this space has Taylor coefficients that are unbounded. More generally, a generic function of $\bigcap\limits_{p<1}H^p$ has derivatives with unbounded Taylor coefficients. These results can also be generalised to the local spaces $H^p_{[A,B]}$ and $\bigcap\limits_{p<1}H^p_{[A,B]}$. Taking a step further, we show that a generic function in the disc algebra $A(\mathbb{D})$, together with all its derivatives, has Taylor coefficients outside of $\ell^1$. In the second part we prove a generic result concerning the harmonic Hardy spaces $h^p$. M. Riesz proved that if $1<p<\infty$ the space $h^p$ is "self-conjugate" in the sense that if $u\in h^p$ then the same is true for the harmonic conjugate $\tilde{u}$. This is not true for $h^1$, but here Kolmogorov's theorem provides a substitute: if $u$ is in $h^1$, then $\widetilde{u}$ is in $h^p$ for all $p<1$. We show that a generic function $u\in \bigcap\limits_{p<1}h^p$ satisfies $$\sup_{0<r<1}\dfrac{1}{2\pi}\int_{-\pi}^{\pi}|\widetilde{u}(re^{i\theta })|^q d\theta = +\infty \quad \text{for all} \quad q>0.$$

# Preliminaries {#sec:2}

Let $\mathbb{D} =\{z\in\mathbb{C}:|z|<1\}$ be the open unit disc. A holomorphic function $f:\mathbb{D}\rightarrow \mathbb{C}$ belongs to the Hardy space $H^p$ $(0<p<+\infty)$ if $\displaystyle\sup_{0<r<1}\dfrac{1}{2\pi}\int^{2\pi}_0|f(re^{i\theta })|^pd\theta <+\infty$. It belongs to the Hardy space $H^\infty$ if $\displaystyle\sup_{|z|<1}|f(z)|<+\infty$. The space $H^\infty$ endowed with the supremum norm on $\mathbb{D}$ is a Banach space, but polynomials are not dense in this space. For $1\leq p<+\infty$ the space $H^p$ endowed with the norm $$\|f\|_p=\sup_{0<r<1}\bigg\{\frac{1}{2\pi}\int^{2\pi}_0|f(re^{i\theta })|^pd\theta \bigg\}^{1/p}$$ is also a Banach space.
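Before introducing metrics, we recall a standard example (classical, and not one of the results of this paper) showing that the spaces $H^p$ with $p<1$ are genuinely larger than $H^1$. For $f(z)=\dfrac{1}{1-z}$ one has $$\sup_{0<r<1}\frac{1}{2\pi}\int^{2\pi}_0\frac{d\theta }{|1-re^{i\theta }|^{p}}<+\infty \ \ \text{for every } 0<p<1, \qquad\text{while}\qquad \frac{1}{2\pi}\int^{2\pi}_0\frac{d\theta }{|1-re^{i\theta }|}\xrightarrow[r\rightarrow 1^-]{}+\infty ,$$ since $c\,|\theta |\le |1-re^{i\theta }|\le (1-r)+|\theta |$ for $|\theta |\le\pi$ and $0<r<1$, with an absolute constant $c>0$. Hence $f\in\bigcap\limits_{p<1}H^p$ but $f\notin H^1$.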
For $f,g\in H^p$ the corresponding distance is $$d_p(f,g)=\sup_{0<r<1}\bigg\{\frac{1}{2\pi}\int^{2\pi}_0|f(re^{i\theta })-g(re^{i\theta })|^p d\theta \bigg\}^{1/p}, \ \ 1\le p<+\infty.$$ For $0<p<1$ we endow $H^p$ with the metric $$d_p(f,g)=\sup_{0<r<1}\frac{1}{2\pi}\int^{2\pi}_0|f(re^{i\theta })-g(re^{i\theta })|^pd\theta , \ \ f,g\in H^p, \ \ 0<p<1$$ and then $H^p$ becomes a topological vector space endowed with a translation invariant metric under which it is complete (an $F$-space). For $0<p<+\infty$ polynomials are dense in $H^p$. Also, convergence in $H^p$, $0<p\le+\infty$, implies uniform convergence on each compact subset of $\mathbb{D}$. For $a,b\in(0,+\infty]$, $a<b$, we have $H^b\subset H^a$ and the injection map is continuous. Jensen's inequality implies that the map $$a\;\rightarrow\;\sup_{0<r<1}\Big\{\frac{1}{2\pi}\int^{2\pi}_0|f(re^{i\theta })|^ad\theta \Big\}^{1/a}$$ is increasing. Obviously we also have $$\sup_{0<r<1}\Big\{\frac{1}{2\pi}\int^{2\pi}_0|f(re^{i\theta })|^pd\theta \Big\}^{1/p}\le \sup_{|z|<1}|f(z)|.$$ Next, for $0<a\le+\infty$ we consider the intersection $\bigcap\limits_{p<a}H^p$. Convergence in this space is equivalent to convergence in all spaces $H^p$, $p<a$. Equivalently, we consider a strictly increasing sequence $p_n$ converging to $a$ and the metric in $\bigcap\limits_{p<a}H^p$ is defined by $$d(f,g)=\sum^\infty_{n=1}\frac{1}{2^n}\frac{d_{p_n}(f,g)}{1+d_{p_n}(f,g)}, \ \ f,g\in\bigcap_{p<a}H^p.$$ This space is also complete, in fact an $F$-space. Obviously convergence in $\bigcap\limits_{p<a}H^p$ implies uniform convergence on each compact subset of $\mathbb{D}$.

**Proposition 1**. *Polynomials are dense in $\bigcap\limits_{p<a}H^p$, for every $a$, $0<a\le+\infty$.*

For the proof it suffices, for $f\in\bigcap\limits_{p<a}H^p$, to control $d_{p_n}(f,P)$ for $n=1,\ldots,N$, for any finite $N$. Because of the monotonicity of the map $p\;\rightarrow\;\displaystyle\sup_{0<r<1}\Big\{\dfrac{1}{2\pi}\int^{2\pi}_0|f(re^{i\theta })-P(re^{i\theta })|^pd\theta \Big\}^{1/p}$ it suffices to control $d_{p_N}(f,P)$. But this is possible, because polynomials are dense in $H^{p_N}$ since $p_N<+\infty$. Thus Proposition 1 holds.

Next we present localized versions of the previous spaces. Let $0<p<+\infty$ and $A,B\in\mathbb{R}$, $A<B$. Then a holomorphic function $f:\mathbb{D} \rightarrow \mathbb{C}$ belongs to $H^p_{[A,B]}$ if $\displaystyle\sup_{0<r<1}\int^B_A|f(re^{i\theta })|^p\dfrac{d\theta }{B-A}<+\infty$ and to $H^\infty_{[A,B]}$ if $\displaystyle\sup_{0<r<1}\displaystyle\sup_{A\le\theta \le B}|f(re^{i\theta })|<+\infty$. Because of the monotonicity of the function $$a\;\rightarrow\;\displaystyle\sup_{0<r<1}\Big\{\int^B_A|f(re^{i\theta })|^a\dfrac{d\theta }{B-A}\Big\}^{1/a}$$ it follows that $H^b_{[A,B]}\subset H^a_{[A,B]}$ for $0<a<b\le+\infty$.
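A simple example (again classical, and not part of the results of this paper) illustrates that the localized spaces are strictly larger than the global ones when the arc omits a boundary point. Let $$f(z)=\exp\Big(\frac{1+z}{1-z}\Big),\qquad\text{so that}\qquad |f(re^{i\theta })|=\exp\Big(\frac{1-r^2}{|1-re^{i\theta }|^2}\Big).$$ If the closed arc $\{e^{i\theta }:A\le\theta \le B\}$ does not contain the point $1$, then $|1-re^{i\theta }|$ is bounded below for $A\le\theta \le B$, $0<r<1$, so $f\in H^\infty_{[A,B]}\subset H^p_{[A,B]}$ for every $p$; on the other hand, $f$ belongs to no $H^p$, $0<p<+\infty$.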
Convergence in $H^p_{[A,B]}$ of a sequence $f_n$ towards $f$, where $f_n,f\in H^p_{[A,B]}$, is equivalent to uniform convergence on all compact subsets of $\mathbb{D}$ together with $$\sup_{0<r<1}\bigg\{\int^B_A|f_n(re^{i\theta })-f(re^{i\theta })|^p\frac{d\theta }{B-A}\bigg\}^{1/p} \xrightarrow{n\;\rightarrow\;+\infty}{}0 \ \ \mbox{for} \ \ 0<p<+\infty \ \ \mbox{and}$$ $$\sup_{0<r<1}\sup_{A\le\theta \le B}|f_n(re^{i\theta })-f(re^{i\theta })|\xrightarrow{n\;\rightarrow\;+\infty}{}0 \ \ \mbox{for} \ \ p=+\infty.$$ The metric giving this topology in $H^p_{[A,B]}$ is defined by $$\begin{aligned} d_{p,[A,B]}(f,g)=&\sup_{0<r<1}\bigg\{\int^B_A|f(re^{i\theta })-g(re^{i\theta })|^p\cdot \frac{d\theta }{B-A}\bigg\}^{1/p}\\ &+\sum^\infty_{n=2}\frac{1}{2^n}\frac{\displaystyle\sup_{|z|\le1-\frac{1}{n}}|f(z)-g(z)|} {1+\displaystyle\sup_{|z|\le1-\frac{1}{n}}|f(z)-g(z)|} \ \ \mbox{for} \ \ 1\le p<+\infty\end{aligned}$$ $$\begin{aligned} d_{p,[A,B]}(f,g)=&\sup_{0<r<1}\int^B_A|f(re^{i\theta })-g(re^{i\theta })|^p\frac{d\theta }{B-A} \\ &+\sum^\infty_{n=2}\frac{1}{2^n}\frac{\displaystyle\sup_{|z|\le1-\frac{1}{n}}|f(z)-g(z)|} {1+\displaystyle\sup_{|z|\le1-\frac{1}{n}}|f(z)-g(z)|} \ \ \mbox{for} \ \ 0<p<1 \ \ \mbox{and}\end{aligned}$$ $$\begin{aligned} d_{\infty,[A,B]}(f,g)=&\sup_{0<r<1}\sup_{A\le\theta \le B}|f(re^{i\theta })-g(re^{i\theta })| \\ &+\sum^\infty_{n=2}\frac{1}{2^n}\frac{\displaystyle\sup_{|z|\le1-\frac{1}{n}}|f(z)-g(z)|} {1+\displaystyle\sup_{|z|\le1-\frac{1}{n}}|f(z)-g(z)|} \ \ \mbox{for} \ \ p=+\infty.\end{aligned}$$ Obviously, convergence in $H^p_{[A,B]}$ implies uniform convergence on all compact subsets of $\mathbb{D}$. Also for $0<a<b\le+\infty$ the injection map $H^b_{[A,B]}\subset H^a_{[A,B]}$ is continuous. Finally, these spaces are complete, in fact $F$-spaces, and $H^p_{[A,B]}=H^p$ when $B-A\ge 2\pi$; in general $H^p\subset H^p_{[A,B]}$, provided $A<B$. Let $0<a\le+\infty$. Convergence in the space $\bigcap\limits_{p<a}H^p_{[A,B]}$ is equivalent to convergence in all $H^p_{[A,B]}$ for $p<a$. A metric in $\bigcap\limits_{p<a}H^p_{[A,B]}$ compatible with this topology is given by $$d(f,g)=\sum^\infty_{n=1}\frac{1}{2^n}\frac{d_{p_n,[A,B]}(f,g)}{1+d_{p_n,[A,B]}(f,g)}$$ where $p_n$ is any strictly increasing sequence converging to $a$. This space is complete, in fact an $F$-space. Obviously convergence in $\bigcap\limits_{p<a}H^p_{[A,B]}$ implies uniform convergence on all compact subsets of $\mathbb{D}$.

A function $f: \overline{\mathbb{D}} \rightarrow \mathbb{C}$ belongs to the disc algebra $A(\mathbb{D})$ if it is holomorphic on the open unit disc and continuous on the closed unit disc; equivalently, $A(\mathbb{D})= H^{\infty}(\mathbb{D})\cap C(\overline{\mathbb{D}})$. Endowed with the uniform norm, $A(\mathbb{D})$ becomes a Banach space (in fact a Banach algebra). Obviously convergence in $A(\mathbb{D})$ implies uniform convergence on $\overline{\mathbb{D}}$, which in turn implies uniform convergence on compact subsets of $\mathbb{D}$. Polynomials are dense in $A(\mathbb{D})$.

A harmonic function $u:\mathbb{D}\rightarrow\mathbb{R}$ belongs to $h^p$ if $\displaystyle\sup_{0<r<1}\dfrac{1}{2\pi}\int^{\pi}_{-\pi}|u(re^{i\theta })|^pd\theta <+\infty$. For $p\geq 1$ the space $h^p$ endowed with the norm $$\|u\|_p =\displaystyle\sup_{0<r<1}\bigg\{\dfrac{1}{2\pi}\int^{\pi}_{-\pi}|u(re^{i\theta })|^pd\theta \bigg\}^{1/p}$$ is a Banach space.
For $0<p<1$ we endow $h^p$ with the translation invariant metric $$d_p(u,v)= \sup_{0<r<1}\dfrac{1}{2\pi}\int^{\pi}_{-\pi}|u(re^{i\theta })-v(re^{i\theta })|^pd\theta .$$ In either case metric convergence implies uniform convergence on compact subsets of $\mathbb{D}$, so even if $0<p<1$, the space $h^p$ is complete. For $a,b\in(0,+\infty]$, $a<b$ we have $h^b\subset h^a$ as before [@8]. The harmonic conjugate is only defined up to an additive constant; working on the unit disc, it is customary to require that its value at zero is zero [@2]. Next, for $0<a < 1$ we consider the intersection $\bigcap\limits_{p<a}h^p$. Convergence in this space is equivalent to convergence in all spaces $h^p$, $p<a$. Equivalently, we consider a strictly increasing sequence $p_n$ converging to $a$ and the metric in $\bigcap\limits_{p<a}h^p$ is defined by $$d(u,v)=\sum^\infty_{n=1}\frac{1}{2^n}\frac{d_{p_n}(u,v)}{1+d_{p_n}(u,v)}, \ \ u,v\in\bigcap_{p<a}h^p.$$ This space is also complete. Obviously convergence in $\bigcap\limits_{p<a}h^p$ implies uniform convergence on each compact subset of $\mathbb{D}$. We now mention a theorem of Borel and Carathéodory [@9], which shows that an analytic function may be bounded by its real part and which will be of use later in the paper. **Theorem 1**. *Let a function $f$ be analytic on a closed disc of radius $R$, centered at the origin. Suppose that $r < R$. Then we have the following inequality: $$\sup_{|z|\leq r }|f(z)| \leq \dfrac{2r}{R-r} \sup_{|z|\leq R}\mathfrak{R}(f(z)) + \dfrac{R+r}{R-r}|f(0)|$$* In order to show that the sets of these functions are "big" we prove a slightly changed variation of a result of M. Siskaki ([@4]), which was introduced later by V. Nestoridis and E. Thirios ([@6]). **Proposition 2** (V. Nestoridis, E. Thirios variation). *Let $V$ be a topological vector space over the field $\mathbb{R}$ or $\mathbb{C}$. Let $X$ be a non-empty set and $\mathbb{C}^X$ the set of complex functions defined on $X$. Let $T:V\;\rightarrow\;\mathbb{C}^X$ be such that* *1) For every $x\in X$ the function $V\ni f\;\rightarrow\;T(f)(x)\in\mathbb{C}$ is continuous.* *2) $|T(f-g)(x)|\le|T(f)(x)|+|T(g)(x)|$ for all $f,g\in V$ and $x\in X$.* *3) For every $f\in V$, if $T(f)$ is unbounded on $X$, then there is a sequence $(\lambda _n)_n$ of numbers in $\mathbb{R}$ or in $\mathbb{C}$, respectively, with $\lambda _n\;\rightarrow\;0$ as $n\;\rightarrow\;\infty$ such that $T(\lambda _nf)$ is unbounded on $X$ for every $n\ge1$.* *We set $S=\{f\in V:T(f)$ is unbounded on $X\}$. Then, either $S=\emptyset$ or $S$ is a $G_\delta$ and dense subset of $V$.* For the needs of this paper we notice that condition 2 of this variation can be replaced as follows. **Proposition 3**. *Let $V$ be a topological vector space over the field $\mathbb{R}$ or $\mathbb{C}$. Let $X$ be a non-empty set and $\mathbb{C}^X$ the set of complex functions defined on $X$. Let $T:V\;\rightarrow\;\mathbb{C}^X$ be such that* *1) For every $x\in X$ the function $V\ni f\;\rightarrow\;T(f)(x)\in\mathbb{C}$ is continuous.* *2) $|T(f-g)(x)|\le M(|T(f)(x)|+|T(g)(x)|)$ for all $f,g\in V$, for some $M>0$ and $x\in X$.* *3) For every $f\in V$, if $T(f)$ is unbounded on $X$, then there is a sequence $(\lambda _n)_n$ of numbers in $\mathbb{R}$ or in $\mathbb{C}$, respectively, with $\lambda _n\;\rightarrow\;0$ as $n\;\rightarrow\;\infty$ such that $T(\lambda _nf)$ is unbounded on $X$ for every $n\ge1$.* *We set $S=\{f\in V:T(f)$ is unbounded on $X\}$.
Then, either $S=\emptyset$ or $S$ is a $G_\delta$ and dense subset of $V$.* **Proof 1**. The proof that $S$ is a $G_\delta$ is omitted, because it follows simply from condition 1 and is similar to the proof in [@4]. Suppose $S\neq\emptyset$ and let $f\in S$. Thus, $T(f)$ is unbounded on $X$, and we let $(\lambda _n)_n$ be the sequence provided for $f$ by condition 3. If $S$ is not dense, then there exists $g\in V$ such that $g\notin\overline{S}$. Then $T(g)$ is bounded on $X$ by a constant $M_1$. Since $V$ is a topological vector space, it holds that $g+\lambda _nf\;\rightarrow\;g$ as $n\;\rightarrow\;+\infty$. According to our assumptions we have $$\begin{aligned} |T(\lambda _nf)(x)|=|T(g+\lambda _nf-g)(x)|&\le M|T(g+\lambda _nf)(x)|+M|T(g)(x)| \\ &\le M|T(g+\lambda _nf)(x)|+MM_1\end{aligned}$$ for all $x\in X$, where $M_1<+\infty$ and $M>0$ is independent of $x$.\
Since $T(\lambda _nf)$ is unbounded on $X$ for every $n\ge1$, it follows that $T(g+\lambda _nf)$ is unbounded on $X$ for every $n\ge1$, and it is deduced that $g$ belongs to the closure of $S$, which is a contradiction. $\quad\blacksquare$ # The Results {#sec:3} **Terminology 1**. *From now on we refer to $\beta(f)$ as the sequence of Taylor coefficients of a holomorphic function $f$ on the unit disc: $$\beta(f) = ( \beta_k (f) )_{k=0}^{\infty}$$* **Theorem 1**. *The set $\mathcal{A} = \big \{ f \in \bigcap\limits_{p<1}H^p : \beta(f) \notin \ell^{\infty} \big \}$ is a $G_{\delta}$ and dense subset of $\bigcap\limits_{p<1}H^p$.* **Proof 2**. Let $f(z)= \dfrac{1}{1-z}\log\bigg(\dfrac{1}{1-z}\bigg)$, $z \in \mathbb{D}$. Then $f$ belongs to $H^p$ for all $0<p<1$ and its Taylor coefficients are unbounded [@10], and thus $\mathcal{A} \neq \emptyset$. We now notice that if $P$ is a polynomial and $f \in \mathcal{A}$ then $f+P \in \mathcal{A}$. Since the set of polynomials is dense in $\bigcap\limits_{p<1}H^p$, it follows that the set $\{f+P:P$  polynomial$\}$ is dense in $\bigcap\limits_{p<1}H^p$. Since the last set is contained in $\mathcal{A}$, it follows that $\mathcal{A}$ is dense in $\bigcap\limits_{p<1}H^p$. In order to show that $\mathcal{A}$ is $G_{\delta}$ it suffices to prove that $\bigcap\limits_{p<1}H^p \setminus \mathcal{A}$ is a denumerable union of closed subsets of $\bigcap\limits_{p<1}H^p$. For $M$ a natural number we consider the set $$\Omega_M= \bigg \{f \in \bigcap\limits_{p<1}H^p : |\beta_n(f)| \leq M \;\; , \forall n \in \mathbb{N} \bigg \}$$ Then $\bigcap\limits_{p<1}H^p \setminus \mathcal{A}= \bigcup\limits_{M} \Omega_M$. We verify that each set $\Omega_M$ is a closed subset of $\bigcap\limits_{p<1}H^p$. Indeed, let $f_k \in \Omega_M$ be a sequence converging in $\bigcap\limits_{p<1}H^p$ to some $f \in \bigcap\limits_{p<1}H^p$. Then $f_k$ converges uniformly to $f$ on compacta of $\mathbb{D}$, which implies $\beta_n(f_k) \xrightarrow[k\to\infty]{}\beta_n(f)$ for all $n \in \mathbb{N}$. Hence $|\beta_n(f)|=\lim_{k \to \infty}|\beta_n(f_k)| \leq M$, and thus $f\in \Omega_M$. We have shown that $\bigcap\limits_{p<1}H^p \setminus \mathcal{A}$ is $F_{\sigma}$, which completes the proof. $\blacksquare$ **Theorem 2**. *The sets $\mathcal{C}_n = \big \{ f \in \bigcap\limits_{p<1}H^p : \beta(f^{(n)}) \notin \ell^{\infty} \big \}$, $(n \in \mathbb{N})$ are $G_{\delta}$ and dense subsets of $\bigcap\limits_{p<1}H^p$.* **Proof 3**. It is obvious that $\beta_k(f^{(n)})=(k+n)(k+(n-1))\cdots(k+1)\beta_{k+n}(f)$. We notice that if $f \in \bigcap\limits_{p<1}H^p$ is such that the sequence $\beta(f)$ is unbounded, then the same is true for $\beta(f^{(n)})$, and so $\mathcal{A} \subseteq \mathcal{C}_n$ for all $n \in \mathbb{N}$.
Since $\mathcal{A}$ is a non-empty and dense subset of $\bigcap\limits_{p<1}H^p$, so is $\mathcal{C}_n$ for all $n \in \mathbb{N}$. We fix a natural number $n$ and proceed to show that $\bigcap\limits_{p<1}H^p \setminus \mathcal{C}_n$ is a denumerable union of closed subsets of $\bigcap\limits_{p<1}H^p$. As before, for $M$ a natural number we consider the set $$\Omega_{M}^n= \bigg \{f \in\bigcap\limits_{p<1}H^p : |\beta_k(f^{(n)})| \leq M \;\; , \forall k \in \mathbb{N} \bigg \}$$ and notice that $\bigcap\limits_{p<1}H^p \setminus \mathcal{C}_n=\bigcup\limits_{M} \Omega_{M}^n$. The set $\Omega_{M}^n$ is closed; indeed, let $f_l \in \Omega_{M}^n$ be a sequence converging in $\bigcap\limits_{p<1}H^p$ to some $f \in \bigcap\limits_{p<1}H^p$. Then $f_l \rightarrow f$ uniformly on compacta of $\mathbb{D}$, which implies, by the Weierstrass theorem, that $f_l^{(n)}$ converges uniformly to $f^{(n)}$ on compacta of $\mathbb{D}$, for all $n \in \mathbb{N}$, and thus $\beta_k(f_l^{(n)}) \xrightarrow[l\to\infty]{}\beta_k(f^{(n)})$ for all $k \in \mathbb{N}$. Since $|\beta_k(f^{(n)})|=\lim_{l \to \infty}|\beta_k(f_{l}^{(n)})| \leq M$ for all $k\in \mathbb{N}$, we obtain $f \in \Omega_{M}^n$. We have shown that $\bigcap\limits_{p<1}H^p \setminus \mathcal{C}_n$ is $F_{\sigma}$, which completes the proof. $\blacksquare$ **Remark 3**. The above results hold true if we replace the space $\bigcap\limits_{p<1}H^p$ with each of the spaces $H^p$ for $0<p<1$, or even with their local spaces $H^p_{[A,B]}$ for $0<p<1$ and $\bigcap\limits_{p<1}H^p_{[A,B]}$, since $\bigcap\limits_{p<1}H^p \subseteq H^p \subseteq H^p_{[A,B]}$ and so $\dfrac{1}{1-z}\log\bigg(\dfrac{1}{1-z}\bigg) \in H^p_{[A,B]}$ for all $0<p<1$. **Remark 4**. In contrast with the situation of the Taylor coefficients of the generic function $f$ and its derivative, the same is not true for the primitive $F(f)$ of the function. We see that if $f \in \bigcap\limits_{p<1}H^p$ then $\beta(F(f))$ is bounded; this is immediate by using a variation of a known theorem asserting that if $f\in \bigcap\limits_{p<1}H^p$ then $F(f) \in \bigcap\limits_{p< +\infty}H^p \subseteq H^1$ [@1], [@6]. Here it would be interesting to mention a result of V. Nestoridis [@7]: for a generic function $f$ in $H^1$ the sequence $\beta(F(f))$ is outside any $\ell^p$ space smaller than $\ell^1$, i.e. with $0<p<1$; thus, $\beta(F(f)) \in\ell^1\setminus \Big(\displaystyle\bigcup_{0<p<1}\ell^p\Big)$ holds generically for $f$ in $H^1$. $\blacksquare$ We now notice that for a generic function $f\in A(\mathbb{D}) \subseteq H^{\infty}$ the weaker condition $\beta(f)\notin \ell^1$ holds. A direct consequence of Hardy's inequality [@1] is that if $f'\in H^1$ then $\beta(f) \in \ell^1$. In particular $\beta(f) \in \ell^1$ if $f$ is a conformal mapping of the unit disc onto a Jordan domain with rectifiable boundary [@1]. Golubev in [@11] gave the first example of a conformal mapping of the unit disc $\mathbb{D}$ onto a Jordan domain which carries a boundary set of measure zero onto a set of positive measure on the nonrectifiable boundary of the image domain. **Theorem 5**. *The set $\mathcal{E} = \big \{ f \in A(\mathbb{D}) : \beta(f) \notin \ell^{1} \big \}$ is a $G_{\delta}$ and dense subset of $A(\mathbb{D})$.* **Proof 4**. As mentioned before, $\mathcal{E} \neq\emptyset$; let $g \in \mathcal{E}$. To prove that $\mathcal{E}$ is a dense subset of $A(\mathbb{D})$ we will show that the interior of $A(\mathbb{D})\setminus \mathcal{E}$ is void.
Assume $f\in (A(\mathbb{D})\setminus \mathcal{E})^{\mathrm{o}}$. Then $$f +\frac{1}{n}g \xrightarrow[n \;\rightarrow\;\infty]{\|\cdot \|_{\infty}} f$$ and $f$ lies in the interior of $A(\mathbb{D})\setminus \mathcal{E}$. It follows that for some $n_0\in \{1,2,\ldots \}$ the function $f+\frac{1}{n_0}g$ belongs to $A(\mathbb{D})\setminus \mathcal{E}$, that is, $\beta(f+\frac{1}{n_0}g) \in \ell^1$. Since also $\beta(f)\in \ell^1$, this implies that $\beta(\frac{1}{n_0}g) \in \ell^1$, which is a contradiction. Thus $(A(\mathbb{D})\setminus \mathcal{E})^{\mathrm{o}}=\emptyset$. In order to show that $\mathcal{E}$ is a $G_\delta$ it suffices to prove that $A(\mathbb{D})\setminus\mathcal{E}$ is a denumerable union of closed subsets of $A(\mathbb{D})$. For $M$ and $N$ natural numbers we consider the set $$\Omega_{M,N}=\bigg\{f\in A(\mathbb{D}) :f(z)=\sum^\infty_{n=0}\beta_n(f)z^n, \;\;\sum^N_{n=0}|\beta_n(f)|\le M\bigg\}.$$ Then $A(\mathbb{D}) \setminus \mathcal{E}= \displaystyle\bigcup_M\Big[\displaystyle\bigcap_N \Omega_{M,N }\Big]$. We verify that each set $\Omega_{M,N}$ is a closed subset of $A(\mathbb{D})$. Indeed, let $f_k\in \Omega_{M,N}$ be a sequence converging in $A(\mathbb{D})$ to some $f\in A(\mathbb{D})$. Then $f_k$ converges uniformly on compacta of $\mathbb{D}$ to $f$, which implies $\beta_n(f_k)\xrightarrow[k\;\rightarrow\;\infty]{}\beta_n(f)$ for every $n=0,1,2,\ldots$ . Since $\sum\limits^N_{n=0}|\beta_n(f_k)|\le M$ for all $k$, it follows $\sum\limits^N_{n=0}|\beta_n(f)|\le M$; that is, $f\in \Omega_{M,N}$ and the set $\Omega_{M,N}$ is closed in $A(\mathbb{D})$. The same holds for the intersections $\displaystyle\bigcap_N \Omega_{M,N}$, and their denumerable union $A(\mathbb{D})\setminus\mathcal{E}$ is an $F_\sigma$. The proof is complete. $\blacksquare$ **Theorem 6**. *The sets $\mathcal{R}_n =\big \{ f \in A(\mathbb{D}) : \beta(f^{(n)}) \notin \ell^{1} \big \}$ $(n>1)$ are $G_{\delta}$ and dense subsets of $A(\mathbb{D})$.* **Proof 5**. Since $\beta_k(f^{(n)})=(k+n)(k+(n-1))\cdots(k+1)\beta_{k+n}(f)$, it is immediate that $\mathcal{E}\subset \mathcal{R}_n$ for $n>1$, and since $\mathcal{E}$ is a dense subset of $A(\mathbb{D})$, so is $\mathcal{R}_n$. As before, for $M$ and $N$ natural numbers we consider the set $$\Omega_{M,N}^n=\bigg\{f\in A(\mathbb{D}) :f(z)=\sum^\infty_{k=0}\beta_k(f)z^k, \;\;\sum^N_{k=0}|\beta_k(f^{(n)})|\le M\bigg\}.$$ Then $A(\mathbb{D}) \setminus \mathcal{R}_n= \displaystyle\bigcup_M\Big[\displaystyle\bigcap_N \Omega_{M,N }^n \Big]$. Now we show that the set $A(\mathbb{D}) \setminus \mathcal{R}_n$ is $F_{\sigma}$. Let $f_l\in \Omega_{M,N}^n$ be a sequence converging in $A(\mathbb{D})$ to some $f\in A(\mathbb{D})$. Then $f_l$ converges uniformly on compacta of $\mathbb{D}$ to $f$; again by the Weierstrass theorem we obtain $\beta_k(f_{l}^{(n)})\xrightarrow[l\;\rightarrow\;\infty]{}\beta_k(f^{(n)})$. As before, the set $\Omega_{M,N}^n$ is closed. $\blacksquare$ We now use Baire's category theorem and the variation of Siskaki's proposition to prove that for a generic function $u$ in $\bigcap\limits_{p<1}h^p$ its harmonic conjugate $\widetilde{u}$ is outside of any harmonic Hardy space $h^q$ for $q>0$. We mention here again that the harmonic conjugate is normalized, i.e., its value at zero is zero. **Definition 1**. *We denote by ${\mit\Lambda}_q$, ($q>0$) the set of all functions $u\in \bigcap\limits_{p<1}h^p$ such that the harmonic conjugate $\widetilde{u} \notin h^q$.* **Proposition 4**. *The sets ${\mit\Lambda}_q$ for $0<q<1$ are $G_{\delta}$ and dense subsets of $\bigcap\limits_{p<1}h^p$.* **Proof 7**.
Let $$f(z)=u(z)+iv(z)=\sum_{n=1}^{+\infty}\epsilon_n \dfrac{z^{2^n}}{1-z^{2^{n+1}}} ,\quad \epsilon_n \in \{+1,-1 \}.$$ Then for every choice of signs $\epsilon_n$, $u \in h^p$ for all $p<1$, while for almost every sequence of signs, $f(z)$ has a radial limit on no set of positive measure. In particular, some choice of the $\epsilon_n$ gives a function $f$ which is not even of Nevanlinna class $N$, but whose real part belongs to $h^p$ for all $p<1$ [@1]; thus ${\mit\Lambda}_q \neq \emptyset$ for all $q>0$. We consider the operator $T: \bigcap\limits_{p<1}h^p \times (0,1) \;\rightarrow\;\mathbb{C}$ given by $$T(u,r)= \frac{1}{ 2\pi}\int_{-\pi}^{\pi}|\widetilde{u}(re^{i\theta })|^q d\theta ,\quad u \in \bigcap\limits_{p<1}h^p , \quad r\in (0,1) , \, \quad \text{for} \quad 0<q<1$$ and notice that ${\mit\Lambda}_q= \{ u \in \bigcap\limits_{p<1}h^p : \displaystyle\sup_{r\in (0,1)}|T(u,r)|=+\infty \}$. We now verify the conditions of the variation of M. Siskaki's result. 1. Since $\widetilde{u+v}=\widetilde{u}+\widetilde{v}$ and $\widetilde{\lambda u}=\lambda\widetilde{u}$, by using the triangle inequality and the well-known inequality $|a \pm b|^q \leq 2^q(|a|^q + |b|^q)$ it is immediate that $$|T(u-v,r)| \leq 2^q( |T(u,r)|+|T(v,r)|)$$ and $$|T(\lambda u,r)|= |\lambda|^q|T(u,r)|.$$ 2. For all $r\in (0,1)$, $T( \cdot ,r )$ is continuous: **Claim 1**. *Let $(u_m)_{m=1}^{\infty}$ be a sequence of real-valued harmonic functions defined on the unit disc $\mathbb{D}$ such that $u_m \rightarrow u$ uniformly on compacta of $\mathbb{D}$. Then $\widetilde{u_m} \rightarrow \widetilde{u}$ uniformly on compacta of $\mathbb{D}$.* **Proof of Claim 1**. *Let $(f_m)_{m=1}^{\infty}$ be the sequence of holomorphic functions on the unit disc $\mathbb{D}$ given by $f_m=u_m+i\widetilde{u_m}$.* *We observe that $f_m(0) = u_m(0) \xrightarrow[m\to\infty]{}u(0) = f(0)$.* *Let $K$ be a compact subset of the unit disc $\mathbb{D}$; then there exists $0<r<1$ such that $K$ is contained in the open disc $D(0,r)$.* *We now pick $R= \dfrac{1+r}{2}$, so that $D(0,r) \subset D(0,R)\subset \mathbb{D}$, and we can apply the Borel-Carathéodory theorem to the holomorphic function $$f_m-f=u_m-u+i(\widetilde{u_m}-\widetilde{u})$$ and obtain that $$\sup_{|z|\leq r }|f_m(z)-f(z)| \leq \dfrac{2r}{R-r} \sup_{|z|\leq R}|u_m(z)-u(z)| + \dfrac{R+r}{R-r}|u_m(0)-u(0)|$$* *Since the disc $|z| \leq R$ is compact and we know that $u_m\rightarrow u$ uniformly on compact subsets of $\mathbb{D}$, we obtain $$\sup_{z\in K}|f_m(z)-f(z)| \leq \sup_{|z| \leq r }|f_m(z)-f(z)| \xrightarrow[m \to \infty]{} 0.$$ It is now immediate that $\displaystyle\sup_{z\in K}|\widetilde{u_m}(z) -\widetilde{u}(z)|\xrightarrow[m\to \infty]{}0$. $\blacksquare$* Now let $(u_m)_{m=1}^{\infty}$ be a sequence in $\bigcap\limits_{p<1}h^p$ such that $u_m \rightarrow u$ in $\bigcap\limits_{p<1}h^p$. Since convergence in $\bigcap\limits_{p<1}h^p$ implies uniform convergence on compacta of $\mathbb{D}$, combined with the above claim we obtain that $\widetilde{u_m} \rightarrow \widetilde{u}$ uniformly on compacta of $\mathbb{D}$ and so $T(u_m,r) \rightarrow T(u,r)$ for all $r\in(0,1).$ 3. Condition 3 is easy to check: if $T(u,\cdot)$ is unbounded on $(0,1)$, take $\lambda_n=1/n$; then $T(\lambda_n u,r)= n^{-q}\, T(u,r)$, so $T(\lambda_n u,\cdot)$ is unbounded on $(0,1)$ for every $n\ge1$. **Theorem 7**. *The set $\mathcal{L} = \big \{ u \in \bigcap\limits_{p<1}h^p : \widetilde{u}\notin h^q, \, \text{for all} \ q>0 \big \}$ is a $G_{\delta}$ and dense subset of $\bigcap\limits_{p<1}h^p$.* **Proof 6**.
By the monotonicity of the harmonic Hardy spaces, the assertion that $\widetilde{u} \notin h^q$ for every $q>0$ is equivalent to the assertion that $\widetilde{u} \notin h^{ \frac{1}{n}}$ for every $n\geq 2$; hence the set $\mathcal{L}$ is equal to the denumerable intersection over $n\geq 2$ of the sets ${\mit\Lambda}_{ \frac{1}{n}}$. According to Proposition 3.1 these sets are $G_{\delta}$ and dense in $\bigcap\limits_{p<1}h^p$. Baire's theorem yields the result. $\blacksquare$ **Remark 8**. We can replace the space $\bigcap\limits_{p<1}h^p$ with each of the spaces $h^p$ for $0<p<1$. **Remark 9**. One could avoid the use of the variation of M. Siskaki's lemma introduced in this paper by noticing that for arbitrary positive numbers $a$ and $b$ and $0<p<1$ it holds that $(a+b)^p \leq a^p+b^p$, by elementary calculus [@1]. **Remark 10**. A direct approach to the continuity of the operator, without the use of the claim mentioned above, was suggested by V. Nestoridis and A. Siskakis and goes as follows: Let $0<p<1$ and $0<r<1$ and let $(u_m)_{m=1}^{\infty}$ be a sequence such that $u_m \rightarrow u$ in $\bigcap\limits_{p<1}h^p$. Since convergence in $\bigcap\limits_{p<1}h^p$ implies convergence in each $h^p$ for $p<1$, and so uniform convergence on compacta of $\mathbb{D}$, we get that $u_m \rightarrow u$ uniformly on the disc $r\mathbb{D}$. It is immediate that $u_m \rightarrow u$ in $h^2(r\mathbb{D})$, but in $h^2$ the map sending $u$ to its conjugate $\widetilde{u}$ is an isometry, so $\widetilde{u_m} \rightarrow \widetilde{u}$ in $h^2(r\mathbb{D})$, and thus $\widetilde{u_m} \rightarrow \widetilde{u}$ on compacta of $r\mathbb{D}$. For every compact $K\subseteq \mathbb{D}$ we can find $0<s<1$ such that $K\subseteq s\mathbb{D}$ and so the result follows. $\blacksquare$ **Acknowledgements**. The author would like to express his gratitude towards V. Nestoridis for his interest in this work and his always useful comments and remarks. The author would also like to thank A. Siskakis for the helpful communication. P. L. Duren, Theory of $H^p$ spaces, Academic Press, New York and London, 1970. P. Koosis, Introduction to $H_p$ spaces, Cambridge University Press, 1980. J.-P. Kahane, Baire's category theorem and trigonometric series, J. Anal. Math. 80 (2000), 143--182. M. Siskaki, Boundedness of derivatives and anti-derivatives of holomorphic functions as a rare phenomenon, J. Math. Anal. Appl. 426 (2018), no. 2, 1073--1086; see also arXiv:1611.05386. A. Zygmund, Trigonometric series, 2nd edition, Cambridge University Press, Cambridge, 1979. V. Nestoridis and E. Thirios, Generic versions of a result in the theory of Hardy spaces, Results in Mathematics 77 (2022), no. 5, 210. V. Nestoridis, A generic result on the Hardy space $H^1$, Proceedings of a conference in memory of D. Gatzouras. J. H. Shapiro, Linear topological properties of the harmonic Hardy spaces $h^p$ for $0<p<1$, Duke Math. J. 43 (1976), 187--202. S. Lang, Complex Analysis, Fourth Edition, Springer, 1999. V. J. Smirnoff, Sur les valeurs limites des fonctions régulières à l'intérieur d'un cercle, Journal de la Société Phys.-Math. de Léningrade 2 (1929), no. 2, 22--37. V. Golubev, Sur la correspondance des frontières dans la représentation conforme, Mat. Sb. 32 (1924), 55--57 (in Russian). C. Pandis\ National and Kapodistrian University of Athens\ Department of Mathematics\ Panepistemiopolis, 157 84\ Athens,\ Greece\ e-mail: chrpandis\@gmail.com\
--- abstract: |
  We give some rates of convergence in the distances of Kolmogorov and Wasserstein for standardized martingales with differences having finite variances. For the Kolmogorov distance, we present some exact Berry-Esseen bounds for martingales, which generalize some Berry-Esseen bounds due to Bolthausen. For the Wasserstein distance, with Stein's method and Lindeberg's telescoping sum argument, the rates of convergence in martingale central limit theorems recover the classical rates for sums of i.i.d. random variables, and therefore they are believed to be optimal.
address:
- School of Mathematics and Statistics, Northeastern University at Qinhuangdao, Qinhuangdao, 066004, Hebei, China.
- School of Mathematical Sciences, Zhejiang University, Hangzhou 310058, China.
author:
- Xiequan Fan
- Zhonggen Su
title: Rates of convergence in the distances of Kolmogorov and Wasserstein for standardized martingales
---

Martingales; Central limit theorem; Rates of convergence; Berry-Esseen bounds; Stein's method; the Wasserstein distance

Primary 60G42; 60F05; Secondary 60E15

# Introduction

Let $\mathbf{X}=\{X_k, \mathcal{F}_k\}_{ k \geq 1}$ be a sequence of square integrable martingale differences defined on a probability space $(\Omega, \mathcal{F}, \mathbf{P})$, where $X_0=0$ and $\{\emptyset, \Omega\}=\mathcal{F}_0\subseteq ...\subseteq \mathcal{F}_k\subseteq ...\subseteq \mathcal{F}$. By the definition of martingale differences, we have $$\mathbf{E}[X_k|\mathcal{F}_{k-1}]=0, \quad k \geq 1, \ \ \textrm{a.s}.$$ Define $$S_0=0,\ \ \ S_k = X_1+X_2+...+X_k, \qquad k \geq 1.$$ Then $\{S_k, \mathcal{F}_k\}_{ k \geq 1 }$ is a martingale. Let $p \in (0,+\infty]$, and define $$||\mathbf{X}||_p=\max_{ k \geq 1 } ||X_k||_p,$$ where $||X_k||_p=(\mathbf{E}|X_k|^p)^{1/p}, p \in (0,+\infty),$ and $||X_k||_{\infty}=\inf\{\alpha:\mathbf{P}(|X_k| \leq \alpha)=1\}$. Let $n\geq 2.$ For brevity, for all $1\leq k \leq n,$ denote $$\begin{aligned} && \sigma_k^2:=\mathbf{E}[X_k^2|\mathcal{F}_{k-1}], \ \ \ \ \ V_0^2:=0, \ \ \ \ \ \ \ V_k^2:=\sum_{i=1}^k \sigma_i^2, \ \ \ \ \ \ \ \rho_{n,k } :=\sqrt{V_n^2- V_{k-1}^2}\, ,\\ && \bar{\sigma}_k^2:=\mathbf{E} X_k^2 , \ \ \ \ \ \ \ \ \ \ \ \ \ \, \ s_0^2:=0, \ \ \ \ \, \ \ \ \ s_k^2:=\sum_{i=1}^k \bar{\sigma}_i^2, \ \ \ \ \ \, \ \ \ \overline{\rho}_{n,k} :=\sqrt{s_n^2-s_{k-1}^2}\, , \\ && \upsilon_{n,k}:=\sqrt{\rho_{n,k}^2+a^2} ,\ \ \ \ \ \ \ \ \, \ \ \ \ \ \ \ \tau_{n,k} :=\sqrt{\overline{\rho}_{n,k}^2+a^2} . \end{aligned}$$ Throughout the paper, we assume that $s_n^2 \rightarrow \infty$ as $n \rightarrow \infty$ and that, without loss of generality, $s_n^2 \geq 2$ for all $n$. The standardized martingale is defined as $S_n/s_n.$ Denote by $\stackrel{\ \mathbf{P} \ }{\longrightarrow}$ convergence in probability. The central limit theorem (CLT) for martingales (see the monograph of Hall and Heyde [@HH80]) states that the conditional "Lindeberg condition\", that is, for each $\varepsilon>0,$ $$\begin{aligned} \frac{1}{s_n^2} \sum_{i=1 }^n \mathbf{E}[ X_i^2\mathbf{1}_{\{| X_i| \geq \varepsilon s_n \}} | \mathcal{F}_{i-1}] \stackrel{\ \mathbf{P} \ }{\longrightarrow} 0, \ \ \ \ \ n \rightarrow \infty,\end{aligned}$$ and the "conditional normalizing condition\" $V_n^2/s_n^2 \stackrel{\ \mathbf{P}\ }{\longrightarrow} 1$ together imply that the standardized martingale $S_n/s_n$ converges in distribution to the standard normal random variable. In this paper, we are interested in the convergence rates in the martingale CLT.
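To make the setting concrete, the following small simulation (not taken from the paper; the particular toy martingale, the sample sizes and the library calls are illustrative assumptions only) builds martingale differences with genuinely random conditional variances and checks numerically that $S_n/s_n$ is close to a standard normal law:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def sample_Sn(n, n_paths):
    """Toy martingale differences X_k = eps_k * (1 + 0.5 * eps_{k-1}) with i.i.d.
    Rademacher eps_k (eps_0 is an auxiliary seed variable).  Then E[X_k | F_{k-1}] = 0,
    the conditional variance sigma_k^2 = (1 + 0.5 * eps_{k-1})^2 is random, and
    V_n^2 / s_n^2 -> 1 by the law of large numbers, so the martingale CLT applies."""
    eps = rng.choice([-1.0, 1.0], size=(n_paths, n + 1))
    X = eps[:, 1:] * (1.0 + 0.5 * eps[:, :-1])
    return X.sum(axis=1)

n, n_paths = 2000, 20000
S_n = sample_Sn(n, n_paths)
s_n = np.sqrt(1.25 * n)                 # s_n^2 = sum_k E X_k^2 = 1.25 n for this toy model
ks = stats.kstest(S_n / s_n, "norm").statistic
print(f"s_n^2 = {s_n**2:.0f}, empirical Kolmogorov distance of S_n/s_n from N(0,1) ~ {ks:.3f}")
```

The reported distance mixes the true Berry-Esseen error with Monte Carlo noise of order $n_{\mathrm{paths}}^{-1/2}$, so it should only be read as a rough illustration of the convergence rates discussed below.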
The distances of Kolmogorov and Wasserstein are frequently used to describe the convergence rates in the martingale CLT. Denote the Kolmogorov distance between $S_n/s_n$ and the standard normal random variable by $$\textbf{K}(S_n/s_n)= \sup_{ x \in \mathbf{R}}\Big|\mathbf{P}(S_n/s_n \leq x)-\Phi \left( x\right) \Big| ,$$ where $\Phi \left( x\right)$ is the distribution function of the standard normal random variable. Recall the definition of the $1$-Wasserstein distance. The $1$-Wasserstein distance between two distributions $\mu$ and $\nu$ is defined as follows: $$\begin{aligned} W_1(\mu, \nu)= \sup \bigg \{ \Big|\mathbf{E}[f(X)] -\mathbf{E}[f(Y)] \Big|: \ (X, Y) \in \mathcal{L}(\mu, \nu),\ f\ \textrm{is $1$-Lipschitz} \bigg\} ,\end{aligned}$$ where $\mathcal{L}(\mu, \nu)$ is the collection of all pairs of random variables whose marginal distributions are $\mu$ and $\nu$ respectively. In particular, if $\mu_{S_n/s_n}$ is the distribution of $S_n/s_n$ and $\nu$ is the standard normal distribution, then we write $$\textbf{W}\left(S_n/s_n\right) =W_1(\mu_{S_n/s_n} , \nu ).$$ The relationship between the distances of Kolmogorov and Wasserstein-1 can be found in Ross [@R11] (see p. 218 therein). It is stated that $$\textbf{K}(S_n/s_n) \leq \Big( \frac{2}{ \pi } \Big)^{1/4} \sqrt{\textbf{W}\left(S_n/s_n\right)} .$$ See also inequality (3.1) in Dedecker, Merlevède and Rio [@DFR22]. The exact convergence rates for $\textbf{K}(S_n/s_n)$, usually termed Berry-Esseen bounds, have been established under various conditions. For instance, when $p\in (1, 2]$, Heyde and Brown [@HB70] (see also Theorem 3.10 in Hall and Heyde [@HH80]) have established the following Berry-Esseen bound: there exists a positive constant $C_p$ such that $$\label{sfdf} \mathbf{K}(S_n/s_n) \leq C_p \, N_n^{1/(2p+1) },$$ where $$\nonumber N_n = \sum_{i=1}^n \mathbf{E} |X_i/s_n|^{2p} + \mathbf{E} \big| V^2_n/s_n^2 -1\big|^p.$$ Later, Haeusler [@H88] presented an extension of ([\[sfdf\]](#sfdf){reference-type="ref" reference="sfdf"}) and proved that ([\[sfdf\]](#sfdf){reference-type="ref" reference="sfdf"}) holds for all $p\in (1, \infty)$. Moreover, Haeusler [@H88] also gave an example to show that his Berry-Esseen bound is the best possible for martingales with finite moments. When $\mathbf{E} | V_n^2/s_n^2 -1|^{p} \rightarrow 0$ for some $p \in [1, \infty],$ the convergence rate for $\mathbf{K}(S_n/s_n)$ has been established by Bolthausen [@2] (for $p=1, +\infty$), Mourrat [@3], El Machkouri and Ouchti [@EO07] and Fan [@4]. When $\sigma_k^2=\bar{\sigma}_k^2$ a.s., the Berry-Esseen bound of Haeusler [@H88] may not be the best possible. Earlier, Bolthausen [@2] obtained the following Berry-Esseen bound. If there exist three constants $\alpha,\ \beta$ and $\gamma$, with $0 < \alpha \leq \beta < +\infty$ and $0 < \gamma < +\infty$, such that $$\begin{aligned} \label{sdfd} \sigma_k^2=\bar{\sigma}_k^2\ \textrm{a.s.}, \ \ \ \ \alpha \leq \bar{\sigma}_k^2 \leq \beta \ \ \ \textrm{and} \ \ \ ||\mathbf{X}||_3 \leq \gamma,\end{aligned}$$ then there exists a constant $C(\alpha,\beta,\gamma)$ depending only on $\alpha,\ \beta$ and $\gamma$ such that $$\begin{aligned} \label{b} \mathbf{K}(S_n/s_n) \leq C(\alpha,\beta,\gamma) \ n^{-1/4}. \end{aligned}$$ Bolthausen [@2] also proved the following result: Assume that $V_n^2=s_n^2$ a.s. and that there exists a positive constant $\gamma$ such that $||\mathbf{X}||_{\infty} \leq \gamma$.
Then there exists a constant $C(\gamma)$ depending on $\gamma$ such that $$\begin{aligned} \label{a} \mathbf{K}(S_n/s_n) \leq C(\gamma) \frac{n \ln n}{s_n^3}. \end{aligned}$$ Moreover, Bolthausen [@2] gave some examples to illustrate that the Berry-Esseen bounds ([\[b\]](#b){reference-type="ref" reference="b"}) and ([\[a\]](#a){reference-type="ref" reference="a"}) are sharp when $s_n^2$ is of order $n$. The Berry-Esseen bound ([\[a\]](#a){reference-type="ref" reference="a"}) can be extended to a more general case. Recently, Fan [@4] proved that if $V_n^2=s_n^2$ a.s. and there exist two positive constants $\delta$ and $\gamma$ such that $$\begin{aligned} \label{BDer} \mathbf{E}[|X_k|^{2+\delta}|\mathcal{F}_{k-1}] \leq \gamma^{\delta} \mathbf{E}[ X_k ^2|\mathcal{F}_{k-1}]\ \ \ \textrm{a.s.},\end{aligned}$$ then for any $p \geq 1$, there exists a constant $C(p,\delta,\gamma)$ depending only on $\delta,\ \gamma$ and $p$ such that $$\begin{aligned} \label{gfdgs1} \mathbf{K}(S_n/s_n)\leq C(p,\delta,\gamma)\ \alpha_n , \end{aligned}$$ where $$\alpha_n = \left\{ \begin{array}{ll} \displaystyle s_n^{-\delta} , & \textrm{\ \ \ if $\delta \in (0,1)$,}\\ & \textrm{ }\\ \displaystyle s_n^{-1} \ln s_n , & \textrm{\ \ \ if $\delta \geq 1$.} \end{array} \right.$$ Fan [@4] also showed that the last Berry-Esseen bound is optimal under the stated condition. When $\delta=1$, an earlier result can be found in El Machkouri and Ouchti [@EO07], where they obtained a Berry-Esseen bound of order $s_n^{-1} \ln n$. When the randomness of $\sigma_k^2$ becomes small as $k$ increases, Bolthausen [@2] also gave the following Berry-Esseen bound for standardized martingales. Assume that $\sup_{k\geq 1}|| \mathbf{E}[ |X_k|^{3} | \mathcal{F}_{k-1} ] ||_\infty < \infty,$ that there exists a positive constant $\sigma$ such that $\lim_{k \rightarrow \infty}\bar{\sigma}_k^2 =\sigma^2$, and that there exists a positive constant $\alpha\in (0, 1)$ such that $$\mathbf{E} \big|\sigma_k^2-\overline{\sigma}_k^2 \big| = O( k^{-\alpha}).$$ Then $$\begin{aligned} \label{ghd} \mathbf{K}(S_n/s_n) =O \bigg( \displaystyle \frac{\ln n }{ n^{\alpha } } \bigg) .\end{aligned}$$ Despite the fact that the convergence rates for $\mathbf{K}(S_n/s_n)$ have been intensely studied, there are only a few results on the convergence rates for $\mathbf{W}(S_n/s_n)$. To the best of our knowledge, we are only aware of the work of Van Dung et al. [@5], Röllin [@1] and Dedecker, Merlevède and Rio [@8; @DFR22]. Van Dung et al. [@5] gave a generalization of ([\[a\]](#a){reference-type="ref" reference="a"}) and proved that ([\[a\]](#a){reference-type="ref" reference="a"}) also holds when $\mathbf{K}(S_n/s_n)$ is replaced by $\textbf{W}\big(S_{n} / s_{n}\big)$. Using Stein's method, Röllin [@1] proved the following result: If $V_{n}^{2}=s_n^2$ a.s. and $\mathbf{E}\left|X_{i}\right|^{3} < \infty$ for all $1\leq i \leq n$, then for any $a\geq0$, $$\begin{aligned} \label{r} \textbf{W}\big(S_{n} / s_{n}\big) \leq \frac{3}{s_{n}}\sum_{i=1}^{n} \mathbf{E}\frac{\left|X_{i}\right|^{3}}{ \rho_{n,i }^{2}+a^2}+\frac{2a}{s_{n}}.
\end{aligned}$$ When $\mathbf{E} \big|\sigma_k^2-\overline{\sigma}_k^2 \big|\rightarrow 0,$ Dedecker, Merlevède and Rio [@DFR22] recently also have established a convergence rate for $\textbf{W}(S_n/s_n),$ with finite moment of order $2.$ In this paper, we are interested in giving some generalizations of ([\[b\]](#b){reference-type="ref" reference="b"}), ([\[ghd\]](#ghd){reference-type="ref" reference="ghd"}) and ([\[r\]](#r){reference-type="ref" reference="r"}) for martingales with finite moments of order $2.$ From our results, we can recover many optimal convergence rates for standardized martingales with respect to the distances of Kolmogorov and Wasserstein. Moreover, applications of our results are also discussed. The paper is organized as follows. Our main results are stated and discussed in Section [2](#sec2){reference-type="ref" reference="sec2"}. An application of our theorems to branching processes in a random environment is given in Section [3](#seca){reference-type="ref" reference="seca"}. The proofs of theorems and their corollaries are deferred to Section [4](#sec4){reference-type="ref" reference="sec4"}. Throughout the paper, $C$ will denote a finite and positive constant. This constant may vary from place to place. For two sequences of positive numbers $\{a_n\}_{n\geq 1}$ and $\{b_n\}_{n\geq 1}$, write $a_n \asymp b_n$ if there exists a constant $C>0$ such that ${a_n}/{C}\leq b_n\leq C a_n$ for all sufficiently large $n$. We shall also use the notation $a_n \ll b_n$ to mean that there exists a positive constant $C$ not depending on $n$ such that $a_n \leq C b_n$. We also write $a_n \sim b_n$ if $\lim_{n\rightarrow \infty} a_n/b_n =1$. We denote by $N(0, \sigma^2)$ the normal distribution with mean $0$ and variance $\sigma^2$. $\mathcal{N}$ will designate a standard normal random variable, and we will denote by $\Phi(\cdot)$ the cumulative distribution function of a standard normal random variable. # Main results {#sec2} We have the following convergence rates for standardized martingales $S_n/s_n$ in the distances of Kolmogorov and Wasserstein. ## Convergence rates in the Kolmogorov distance We have the following Berry-Esseen bound for the standardized martingale $S_n/s_n$. **Theorem 1**. *Assume that $|| \textit{\textbf{X}} ||_2 < \infty$. Let $a\geq 1$. Then it holds $$\begin{aligned} \mathbf{K}(S_n/s_n) \ \ll \ \sum_{k=1}^n \Bigg( \frac{ \mathbf{E}\big[ |X_k|^3 \mathbf{1}_{\{|X_k|\leq \tau_{n,k+1}\}} \big] }{\tau_{n,k+1}^3}+ \frac{ \mathbf{E}\big[ X_k ^2 \mathbf{1}_{\{|X_k|> \tau_{n,k+1}\}} \big] + \mathbf{E} \big | \sigma_k^2 - \overline{\sigma}_k^2 \big| }{\tau_{n,k+1}^2} \Bigg) + \frac{a}{s_n }.\end{aligned}$$* By Theorem [Theorem 1](#th4.s1){reference-type="ref" reference="th4.s1"}, the following CLT for the standardized martingales $S_n/s_n$ holds. If there exists a constant $a\geq 1$ such that $$\begin{aligned} \label{gsdxvn} \sum_{k=1}^n \Bigg( \frac{ \mathbf{E}\big[ |X_k|^3 \mathbf{1}_{\{|X_k|\leq \tau_{n,k+1}\}} \big] }{\tau_{n,k+1}^3}+ \frac{ \mathbf{E}\big[ X_k ^2 \mathbf{1}_{\{|X_k|> \tau_{n,k+1}\}} \big] }{\tau_{n,k+1}^2} \Bigg) \rightarrow 0 \ \ \ \textrm{and} \ \ \ \ \sum_{k=1}^n \frac{ \mathbf{E} \big | \sigma_k^2 - \overline{\sigma}_k^2 \big| }{\tau_{n,k+1}^2} \rightarrow 0,\end{aligned}$$ then $S_n/s_n$ converges in distribution to the standard normal distribution as $n\rightarrow \infty$. 
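Both distances are straightforward to estimate from simulated data. The following sketch is illustrative only: the sample is a stand-in for Monte Carlo draws of $S_n/s_n$, and the empirical quantities only approximate the population distances; it also checks the inequality $\mathbf{K}(S_n/s_n)\le (2/\pi)^{1/4}\sqrt{\mathbf{W}(S_n/s_n)}$ quoted above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Placeholder for Monte Carlo draws of the standardized martingale S_n/s_n;
# a slightly mis-scaled Gaussian sample is used here purely for illustration.
sample = 1.02 * rng.standard_normal(50_000)

# Kolmogorov distance: sup_x |P(S_n/s_n <= x) - Phi(x)|, via the empirical CDF.
K = stats.kstest(sample, "norm").statistic

# Wasserstein-1 distance to N(0,1), approximated with a large reference normal sample.
W = stats.wasserstein_distance(sample, rng.standard_normal(200_000))

print(f"K ~ {K:.4f}, W ~ {W:.4f}, (2/pi)^(1/4)*sqrt(W) ~ {(2/np.pi) ** 0.25 * np.sqrt(W):.4f}")
```

For a genuinely martingale-generated sample one would replace `sample` by simulated values of $S_n/s_n$; the bound $\mathbf{K}\le(2/\pi)^{1/4}\sqrt{\mathbf{W}}$ then explains why a rate in the Wasserstein distance immediately yields a (typically weaker) rate in the Kolmogorov distance.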
For $|| \textit{\textbf{X}} ||_p < \infty, p\in (2, 3],$ a closely related result to Theorem [Theorem 1](#th4.s1){reference-type="ref" reference="th4.s1"} can be found in Dedecker, Merlevède and Rio [@DFR22], see Corollary 3.1 therein. The next corollary indicates that in certain cases, the Berry-Esseen bound in Theorem [Theorem 1](#th4.s1){reference-type="ref" reference="th4.s1"} is sharper than the one in Corollary 3.1 of Dedecker, Merlevède and Rio [@DFR22]. **Corollary 1**. *Assume that $|| \textit{\textbf{X}} ||_{2 } < \infty$, and that there exists a constant $\delta\in (0, 1]$ such that $$\mathbf{E} |X_k|^{2+\delta} \leq C \,\mathbf{E} X_k^2,\ \ \ 1\leq k \leq n.$$ If there exists a positive constant $\alpha$ such that $$\mathbf{E} \big|\sigma_k^2-\overline{\sigma}_k^2 \big| \leq C \, \frac{ \overline{\sigma}_k^2}{\, s_n^{\alpha} \, } ,\ \ \ 1\leq k \leq n,$$ then it holds $$\mathbf{K}(S_n/s_n) \ll \, \displaystyle \frac{ 1}{ s_n^{\delta /(1+ \delta) }}+ \frac{\ln s_n }{ s_n^{\alpha } } .$$* When $\sigma_k^2=\overline{\sigma}_k^2$ a.s., $\delta=1$ and $\mathbf{E} |X_k|^{3} \leq C \,\mathbf{E} X_k^2$ ($1\leq k \leq n$), Dedecker, Merlevède and Rio [@DFR22] obtained a Berry-Esseen bound of order $s_n^{-1/2} \sqrt{\ln s_n}$, see the remark after Corollary 3.1 in [@DFR22]. Now, Corollary [Corollary 1](#co2.5){reference-type="ref" reference="co2.5"} gives a Berry-Esseen bound of order $s_n^{-1/2},$ by ([\[b\]](#b){reference-type="ref" reference="b"}), which is sharp when $s_n^2\asymp n$. By Theorem [Theorem 1](#th4.s1){reference-type="ref" reference="th4.s1"}, we can obtain the following corollary, which generalizes inequality ([\[b\]](#b){reference-type="ref" reference="b"}). **Corollary 2**. *Assume that there exists a constant $\delta \in (0, 1]$ such that $||\mathbf{X}||_{2+\delta} < \infty$, and that $\inf_{k \geq 1}\bar{\sigma}_k^2 \geq\sigma^2$ a.s., with $\sigma$ a positive constant. If there exists a positive constant $\alpha \in (0, 1)$ such that $$\mathbf{E} \big|\sigma_k^2-\overline{\sigma}_k^2 \big| \ll k^{-\alpha } ,\ $$ then it holds for all $n\geq 2,$ $$\mathbf{K}(S_n/s_n) \ \ll \ \displaystyle \frac{ 1}{ n^{\delta /(2+2\delta) }}+ \frac{\ln n }{ n^{\alpha } } .$$* Clearly, when $\delta=1$ and $\sigma_k^2=\overline{\sigma}_k^2$ a.s., the last corollary reduces to the Berry-Esseen bound ([\[b\]](#b){reference-type="ref" reference="b"}). Thus, Corollary [Corollary 2](#th4.21){reference-type="ref" reference="th4.21"} can be regarded as a generalization of ([\[b\]](#b){reference-type="ref" reference="b"}), that is Theorem 1 of Bolthausen [@2]. For $\delta>0$ and $n \geq 20$, Dedecker, Merlevède and Rio [@DFR22] showed that there exists a finite sequence of martingale differences $(\Delta M_i, \mathcal{F}_i)_{1 \leq i \leq n}$ satisfying the following three conditions: 1. $\mathbf{E} [ \Delta M_i^2 | \mathcal{F}_{i-1} ] =1$ a.s., 2. $\sup_{1 \leq i \leq n}\mathbf{E} |\Delta M_i|^{2+\delta} \leq \mathbf{E} | \mathcal{N}|^{2+\delta} + 5^{\delta}$, 3. 
$\sup_{u \in \mathbf{R}} \Big| \mathbf{P}\Big( \sum_{i=1}^n \Delta M_i \leq u \sqrt{n}\Big) -\Phi (x)\Big| \geq 0.06 \, n^{-\delta/(2+2\delta)}.$ Under the conditions 1 and 2, by Corollary [Corollary 2](#th4.21){reference-type="ref" reference="th4.21"} with $\alpha=1/2$, we have for all $\delta\in (0, 1],$ $$\begin{aligned} \sup_{u \in \mathbf{R} } \bigg| \mathbf{P}\Big( \sum_{i=1}^n \Delta M_i \leq u \sqrt{n}\Big) - \Phi (u)\bigg| \ \ll \ \frac{ 1 }{ n^{\delta /(2+2\delta) }} .\end{aligned}$$ By point 3 and the last inequality, we find that the decaying rate $n^{-\delta /(2+2\delta) }$ in the Berry-Esseen bound of Corollary [Corollary 2](#th4.21){reference-type="ref" reference="th4.21"} is sharp. **Theorem 2**. *Assume $\sup_{k\geq 1}|| \mathbf{E}[ X_k ^2 | \mathcal{F}_{k-1} ] ||_\infty < \infty$, $\rho_{n,k}^2 \leq C\, \overline{\rho}^2_{n,k}$ for all $k$ and $$\begin{aligned} \label{fsdfzn} \lim_{a \rightarrow \infty} \sum_{k=1}^n\Bigg( \frac{ \big\| \mathbf{E} [ |X_k|^3 \mathbf{1}_{\{|X_k|\leq \tau_{n,k+1}\}} |\mathcal{F}_{k-1} ] \big\|_{\infty}}{\tau_{n,k+1}^3} + \frac{ \big\| \mathbf{E} [ X_k^2 \mathbf{1}_{\{|X_k|> \tau_{n,k+1}\}} |\mathcal{F}_{k-1} ] \big\|_{\infty}}{\tau_{n,k+1}^2} \Bigg) =0\end{aligned}$$ uniformly for $n \geq 2.$ Then it holds for all $a$ large enough, $$\begin{aligned} \mathbf{K}(S_n/s_n) \!&\ll&\! \frac{ 1 }{s_n} \sum_{k=1}^n \Bigg( \frac{ \big\| \mathbf{E} [ |X_k|^3 \mathbf{1}_{\{|X_k|\leq \tau_{n,k+1}\}} |\mathcal{F}_{k-1} ] \big\|_{\infty} }{\tau_{n,k+1}^2 } \, + \, \frac{ \big\|\mathbf{E} [ X_k ^2 \mathbf{1}_{\{|X_k|> \tau_{n,k+1}\}} |\mathcal{F}_{k-1} ]\big\|_{\infty} }{\tau_{n,k+1} } \Bigg) \nonumber \\ && \ + \ \sum_{k=1}^n \frac{ \mathbf{E} \big | \sigma_k^2 - \overline{\sigma}_k^2 \big| }{\tau_{n,k+1}^2 } + \frac{a}{s_n } .\end{aligned}$$* The condition ([\[fsdfzn\]](#fsdfzn){reference-type="ref" reference="fsdfzn"}) can be obtained if, for instance, $|| \mathbf{E}[ |X_k|^{2+\delta} | \mathcal{F}_{k-1} ] ||_\infty \leq C \, \overline{\sigma}_k^2$, $\delta>0$. Thus, we have the following corollary. **Corollary 3**. *Assume that $||\mathbf{X}||_2 < \infty$, and that there exists a positive constant $\delta$ such that $$|| \mathbf{E}[ |X_k|^{2+\delta} | \mathcal{F}_{k-1} ] ||_\infty \leq C \, \overline{\sigma}_k^2$$ and $\rho_{n,k}^2 \leq C\, \overline{\rho}^2_{n,k}$ for all $k$. If there exists a positive constant $\alpha$ such that for all $1\leq k \leq n,$ $$\mathbf{E} \big|\sigma_k^2-\overline{\sigma}_k^2 \big| \leq C \, \frac{ \overline{\sigma}_k^2}{\, s_n^{\alpha} \, } ,$$ then the following inequalities hold.* - *If $\delta \in (0, 1)$, then $$\begin{aligned} \mathbf{K}(S_n/s_n) \ll \displaystyle \frac{ 1}{ s_n^{\delta }}+ \frac{\ln s_n }{ s_n^{\alpha } } . \end{aligned}$$* - *If $\delta\geq 1$, then $$\begin{aligned} \label{rdffs} \mathbf{K}(S_n/s_n) \ll \displaystyle \frac{ \ln s_n }{ s_n }+ \frac{\ln s_n }{ s_n^{\alpha } } . \end{aligned}$$* When $s_n^2$ is in order of $n,$ the last corollary implies the following result, which shows that our results are optimal. **Corollary 4**. 
*Assume that $\inf_{k \geq 1}\bar{\sigma}_k^2 >0$, and that there exists a positive constant $\delta$ such that $$\sup_{k\geq 1}|| \mathbf{E}[ |X_k|^{2+\delta} | \mathcal{F}_{k-1} ] ||_\infty < \infty.$$ If there exists a positive constant $\alpha\in (0, 1)$ such that $$\mathbf{E} \big|\sigma_k^2-\overline{\sigma}_k^2 \big| = O( k^{-\alpha}),$$ then the following two inequalities hold.* - *If $\delta \in (0, 1)$, then it holds $$\begin{aligned} \label{rdfdf} \mathbf{K}(S_n/s_n) \ll \displaystyle \frac{ 1}{ n^{\delta/2 }}+ \frac{\ln n }{ n^{\alpha } } .\end{aligned}$$* - *If $\delta \geq 1$, then it holds $$\begin{aligned} \label{rdfdfs} \mathbf{K}(S_n/s_n) \ll \displaystyle \frac{ \ln n }{ \sqrt{n}\ }+ \frac{\ln n }{ n^{\alpha } } .\end{aligned}$$* When $\delta=1,$ Bolthausen [@2] obtained a result similar to ([\[rdfdfs\]](#rdfdfs){reference-type="ref" reference="rdfdfs"}), cf. Theorem 3 (a) therein. As Corollary [Corollary 4](#fsdvs){reference-type="ref" reference="fsdvs"} also includes the case $\delta \in (0, 1),$ Corollary [Corollary 4](#fsdvs){reference-type="ref" reference="fsdvs"} can be regarded as a generalization of Theorem 3 (a) of Bolthausen [@2]. The convergence rates $\frac{ 1}{ n^{\delta/2 }}$ and $\frac{ \ln n }{ \sqrt{n} }$ in the inequalities ([\[rdfdf\]](#rdfdf){reference-type="ref" reference="rdfdf"}) and ([\[rdfdfs\]](#rdfdfs){reference-type="ref" reference="rdfdfs"}), respectively, are the same as that for independent and identically distributed (i.i.d.) random variables, up to a factor $\ln n$. However, the factor $\ln n$ cannot be replaced by $1$. Indeed, by Example 2 of Bolthausen [@2], the convergence rate $\frac{\ln n }{ \sqrt{n}}$ in ([\[rdfdfs\]](#rdfdfs){reference-type="ref" reference="rdfdfs"}) is the best possible even for uniformly bounded martingale differences with $\sigma_k^2=\overline{\sigma}_k^2=1$ a.s. Thus the terms $\frac{ 1}{ n^{\delta/2 }}$ and $\frac{ \ln n }{ \sqrt{n} }$ in the inequalities ([\[rdfdf\]](#rdfdf){reference-type="ref" reference="rdfdf"}) and ([\[rdfdfs\]](#rdfdfs){reference-type="ref" reference="rdfdfs"}) are optimal. It is known that there exists a martingale such that $\|\sigma_k^2-\overline{\sigma}_k^2 \|_{\infty} = O( k^{-\alpha})$ and $\displaystyle \limsup _{n \rightarrow \infty} n^{\alpha } \mathbf{K}(S_n/s_n) > 0,$ see Example 4 of Bolthausen [@2]. Thus, the the terms $\frac{\ln n }{ n^{\alpha } }$ in Corollary [Corollary 2](#th4.21){reference-type="ref" reference="th4.21"}, the inequalities ([\[rdfdf\]](#rdfdf){reference-type="ref" reference="rdfdf"}) and ([\[rdfdfs\]](#rdfdfs){reference-type="ref" reference="rdfdfs"}) are optimal, up to a factor $\ln n$. **Remark 1**. *If the condition $\inf_{k \geq 1}\bar{\sigma}_k^2 >0$ is replaced by $\liminf_{k\rightarrow \infty}\bar{\sigma}_k^2 =\sigma^2$, with $\sigma$ a positive constant, then Corollaries [Corollary 2](#th4.21){reference-type="ref" reference="th4.21"} and [Corollary 4](#fsdvs){reference-type="ref" reference="fsdvs"} hold also. The proofs are similar.* ## Convergence rates in the Wasserstein distance When $V_n^2=s_n^2$ for some $n$, using Stein's method (see Chen, Goldstein and Shao [@7]) and Lindeberg's telescoping sum argument (see Bolthausen [@2]), we obtain the following convergence rates in the Wasserstein distance for standardized martingales $S_n/s_n$. **Theorem 3**. *Assume that $V_n^2=s_n^2$ for some $n$, and $||\mathbf{X}||_2 < \infty$. 
Then for any $a \geq 0$, it holds $$\begin{aligned} && \textbf{W}\big( S_n/s_n \big) \leq\frac{1}{s_n} \sum_{k=1}^n \Bigg(\mathbf{E}\bigg[\frac{(\sigma_k^2 + X_k^2)\mathbf{1}_{\{|X_k| \geq \upsilon_{n,k} \}} }{\upsilon_{n,k}} \bigg] +2\, \mathbf{E}\bigg[ \frac{ (\sigma_k^2 |X_k|+|X_k|^3)\mathbf{1}_{\{|X_k| <\upsilon_{n,k}\}}}{\upsilon_{n,k}^2}\bigg]\Bigg) +\frac{2a}{s_n}.\ \ \\end{aligned}$$* Notice that in the right hand side of the last inequality, the constants are explicit. This is an advantage of Stein's method. When the martingale differences have finite moments of order $2+\delta, \delta \in (0, 1],$ Theorem [Theorem 3](#theorem2.1){reference-type="ref" reference="theorem2.1"} implies the following corollary. **Corollary 5**. *Assume that $V_n^2=s_n^2$ a.s. for some $n$, and that there exists a constant $\delta \in (0, 1]$ such that $||\mathbf{X}||_{2+\delta} < \infty$. Then for any $a \geq 0$, it holds $$\begin{aligned} \textbf{W} \big( S_n/s_n \big) \leq \frac{6}{s_n} \sum_{k=1}^n \mathbf{E} \frac{ |X_k|^{2+\delta} }{ (\rho_k^2+a^2 )^{(1+ \delta)/2} } +\frac{ 2a}{s_n}.\end{aligned}$$* Clearly, when $\delta=1,$ Corollary [Corollary 5](#fdgs){reference-type="ref" reference="fdgs"} reduces to the result of Röllin [@1], up to a constant. When the conditional moments of order $2+\delta$ can be dominated by the conditional variances, Theorem [Theorem 3](#theorem2.1){reference-type="ref" reference="theorem2.1"} implies the following corollary, which extends the main result of Fan [@4] to the distance of Wasserstein. **Corollary 6**. *Assume that $V_n^2=s_n^2$ a.s. for some $n \geq 2$, and that there exists a positive constant $\gamma$ such that for all $1\leq k \leq n,$ $$\mathbf{E}[ |X_k|^{2+\delta} | \mathcal{F}_{k-1} ] \leq \gamma \, \mathbf{E}[ X_k^2 | \mathcal{F}_{k-1} ].$$ Then we have the following two inequalities.* - *If $\delta \in (0, 1)$, then $$\begin{aligned} \textbf{W} \big( S_n/s_n \big) \leq C \, s_n^{-\delta } . \end{aligned}$$* - *If $\delta \geq 1$, then $$\begin{aligned} \label{rffds} \textbf{W} \big( S_n/s_n \big) \leq C \, \frac{\ln s_n }{ s_n } . \end{aligned}$$* Röllin [@1] (cf. Corollary 2.3 therein) considered the case $\mathbf{E}[ |X_k|^3 | \mathcal{F}_{k-1} ] \leq \gamma \, \mathbf{E}[ X_k^2 | \mathcal{F}_{k-1} ]$ and obtained a convergence rate $\displaystyle C\, \frac{ \ln n}{ s_n }$. Because the condition $\mathbf{E}[ |X_k|^3 | \mathcal{F}_{k-1} ] \leq \gamma \, \mathbf{E}[ X_k^2 | \mathcal{F}_{k-1} ]$ implies $s_n^2 \leq n \gamma^2,$ we get $\displaystyle \frac{ \ln s_n}{ s_n } =O \Big(\frac{ \ln n}{ s_n } \Big )$. Thus ([\[rffds\]](#rffds){reference-type="ref" reference="rffds"}) slightly improved the result in Corollary 2.3 of [@1]. Moreover, Corollary [Corollary 6](#fdfff){reference-type="ref" reference="fdfff"} can be regarded as a generalization of Corollary 2.3 of Röllin [@1] to the case $\delta \in (0, 1)$. When the martingale differences satisfy the conditions for the Berry-Esseen bound ([\[b\]](#b){reference-type="ref" reference="b"}) in Bolthausen [@2], Corollary [Corollary 5](#fdgs){reference-type="ref" reference="fdgs"} implies the following result. **Corollary 7**. *Assume that $V_n^2=s_n^2$ a.s. for some $n \geq 2$ and that there exists two positive constants $\alpha$ and $\delta$ such that $\sigma_k^2 \geq \alpha$ a.s. for all $1\leq k \leq n$ and $||\mathbf{X}||_{2+\delta} < \infty.$ Then the following two inequalities hold.* - *If $\delta \in (0, 1)$, then $$\begin{aligned} \textbf{W} \big( S_n/s_n \big) \ll \frac{1}{ n^{\delta/2}}. 
\end{aligned}$$* - *If $\delta \geq 1$, then $$\begin{aligned} \label{rffs} \textbf{W} \big( S_n/s_n \big) \ll \frac{\ln n }{ \sqrt{n}\ } . \end{aligned}$$* The convergence rates in Corollary [Corollary 7](#fdffs){reference-type="ref" reference="fdffs"} are the same as that for i.i.d. random variables, up to the factor $\ln n$ in the bound ([\[rffs\]](#rffs){reference-type="ref" reference="rffs"}). Thus, up to the factor $\ln n$ for the case $\delta =1$, the convergence rates in Corollary [Corollary 7](#fdffs){reference-type="ref" reference="fdffs"} are optimal. Comparing the results of ([\[b\]](#b){reference-type="ref" reference="b"}) and Corollary [Corollary 7](#fdffs){reference-type="ref" reference="fdffs"}, we find an interesting conclusion: when $\delta=1$, the convergence rate for standardized martingales in the Wasserstein distance is much better than the one in the Kolmogorov distance. When $\mathbf{E} \big|\sigma_k^2-\overline{\sigma}_k^2 \big|\rightarrow 0\, (k\rightarrow \infty)$, that is the randomness for $\sigma_k^2$ tends to small as $k$ increasing, Dedecker, Merlevède and Rio [@DFR22] have established a convergence rate in the Wasserstein distance for standardized martingales $S_n/s_n$. With Stein's method, we can establish a result similar to that of Dedecker, Merlevède and Rio [@DFR22], but with explicit constants. We have the following result. **Theorem 4**. *Assume that $||\mathbf{X}||_2 < \infty$. For any $a \geq 0$, it holds $$\begin{aligned} \textbf{W} \big( S_n/s_n \big) \!\! &\leq&\! \! \frac{1}{s_n} \sum_{k=1}^n \Bigg(\frac{ \mathbf{E} \big|\sigma_k^2-\overline{\sigma}_k^2 \big| + \mathbf{E}[ X_k^2\mathbf{1}_{\{|X_k| \geq \tau_{n,k}\}}] }{\tau_{n,k}}\, \\ &&\ \ \ \ \ \ \ \ \ \ \ \ + \, \frac{ 3\,\overline{\sigma}_k^2 \mathbf{E} |X_k| + 2\,\mathbf{E}[|X_k|^3\mathbf{1}_{\{|X_k| <\tau_{n,k}\}} ]}{\tau_{n,k}^2 } \Bigg) +\frac{2a}{s_n}.\end{aligned}$$* Notice that $\lim_{n\rightarrow \infty} \sup_{k}\overline{\sigma}_k^2/s_n^2 =0$. As $s_n\geq2,$ we have for all $a \geq 1,$ $$\displaystyle \sum_{k=1}^n\frac{ \overline{\sigma}_k^2}{ \tau_{n,k}^2 } \mathbf{E} |X_k| \leq \sum_{k=1}^n\frac{ \overline{\sigma}_k^2/s_n^2}{ \overline{\rho}_{n,k}^2/s_n^2 +a^2/s_n^2 } \sqrt{ ||\mathbf{X}||_2} \leq C \ln s_n .$$ Applying the last line to Theorem [Theorem 4](#theorem3.2){reference-type="ref" reference="theorem3.2"} with $a \geq 1,$ we can deduce that as $s_n\rightarrow \infty,$ $$\begin{aligned} \textbf{W} \big( S_n/s_n \big) \ll \frac{1}{s_n} \sum_{k=1}^n \Bigg(\frac{ \mathbf{E} \big|\sigma_k^2-\overline{\sigma}_k^2 \big|+ \mathbf{E}\big[ X_k^2\mathbf{1}_{\{|X_k| \geq \tau_{n,k} \}} \big] }{\tau_{n,k} } \, + \, \frac{ \mathbf{E}\big[|X_k|^3\mathbf{1}_{\{|X_k| <\tau_{n,k} \}} \big ] }{ \tau_{n,k}^2 }\Bigg) + \, \frac{\ln s_n}{s_n} . \label{gsds}\end{aligned}$$ From the last inequality, we get the following convergence rate. **Corollary 8**. *Assume that $||\mathbf{X}||_2 < \infty$ and that $\inf_{k \geq 1}\bar{\sigma}_k^2 \geq\sigma^2$, with $\sigma$ a positive constant. 
If there exists a constant $\alpha \in (0, 1]$ such that $$\mathbf{E} \big|\sigma_k^2-\overline{\sigma}_k^2 \big| \ll k^{-\alpha},$$ then for any $a \geq 1$, it holds $$\begin{aligned} \textbf{W} \big( S_n/s_n \big) \ll \frac{1}{ \sqrt{n}} \sum_{k=1}^n \Bigg(\frac{ \mathbf{E}[ X_k^2\mathbf{1}_{\{|X_k| \geq\tau_{n,k}\}}] }{\tau_{n,k} } \, + \, \frac{ \mathbf{E}[|X_k|^3\mathbf{1}_{\{|X_k| <\tau_{n,k}\}} ] }{ \tau_{n,k}^2 }\Bigg) +\frac{\ln n}{ n^{\alpha} } + \frac{\ln n}{ \sqrt{n}\, } .\end{aligned}$$ Moreover, if $||\mathbf{X}||_{2+\delta} < \infty$, then the following two inequalities hold.* - *If $\delta \in (0, 1)$, then $$\begin{aligned} \textbf{W} \big( S_n/s_n \big) \ll \frac{1}{ n^{\delta/2 }} + \frac{\ln n}{ n^{\alpha} } .\end{aligned}$$* - *If $\delta \geq 1$, then $$\begin{aligned} \label{rdffs} \textbf{W} \big( S_n/s_n \big) \ll \frac{\ln n }{ \sqrt{n}\, } + \frac{\ln n}{ n^{\alpha} } .\end{aligned}$$* The convergence rates $\displaystyle \frac{1}{ n^{\delta/2 }}$ and $\displaystyle \frac{\ln n }{ \sqrt{n} \, }$ in Corollary [Corollary 8](#fsvg){reference-type="ref" reference="fsvg"} are the same as the classical rates for i.i.d. random variables, up to the factor $\ln n$. Thus, the convergence rates in Corollary [Corollary 8](#fsvg){reference-type="ref" reference="fsvg"} are optimal or almost optimal. # Application to branching processes in a random environment {#seca} The branching processes in a random environment can be described as follows. Let $\xi=\{\xi_n\}_{n\geq0}$ be a sequence of i.i.d. random variables, where $\xi_n$ stands for the random environment at generation $n.$ For each $n \in \mathbf{N}=\{0, 1,...\}$, the realization of $\xi_n$ corresponds to a probability law $\{ p_i(\xi_n): i \in \mathbf{N}\}.$ A branching process $\{Z_n\}_{n\geq 0}$ in the random environment $\xi$ can be defined as follows: $$Z_0=1,\ \ \ \ Z_{n+1}= \sum_{i=1}^{Z_n} X_{n,i}, \ \ \ \ \ n \geq 0,$$ where $X_{n,i}$ is the offspring number of the $i$-th individual in the generation $n.$ Moreover, given the environment $\xi,$ the random variables $\{X_{n,i}\}_{i\geq 1}$ are independent of each other with a common distribution law as follows: $$\label{ssfs} \mathbf{P}(X_{n,i} =k | \xi ) = p_k(\xi_n),\ \ \ \ \ \ \ k\in \mathbf{N},$$ and they are also independent of $\{Z_i\}_{1\leq i \leq n}.$ Denote by $\mathbf{P}_\xi$ the quenched law, i.e. the conditional probability when the environment $\xi$ is given, and by $\tau$ the law of the environment $\xi.$ Then $\mathbf{P}(dx, d\xi )=\mathbf{P}_\xi(dx)\tau(d\xi)$ is the total law of the process $\{Z_n\}_{n\geq0}$, called the annealed law. In the sequel, the expectations with respect to the quenched law and annealed law are denoted by $\mathbf{E}_\xi$ and $\mathbf{E}$, respectively. Denote by $m$ the average offspring number of an individual, usually termed the offspring mean. Clearly, it holds $$m=\mathbf{E}Z_1=\mathbf{E} X_{n,i} =\sum_{k=1}^\infty k \mathbf{E} p_k(\xi_{n}).$$ Thus, $m$ is not only the mean of $Z_1$, but also the mean of each $X_{n,i}$. Denote $$\begin{aligned} m_n:=\mathbf{E}_\xi X_{n,i}=\sum_{i=1}^{\infty} i \, p_i(\xi_n) ,\ \ \ n\geq 0, \ \ \ \ \ \mu= \mathbf{E} \ln m_0, \\ \ \ \ \ \nu^2= \mathbf{E}( \ln m_0 -\mu)^2 \ \ \ \ \ \ \ \ \ \ \ \ \textrm{and} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \tau^2=\mathbf{E} ( Z_1-m_0 )^2.\end{aligned}$$ Notice that $\{m_n\}_{n\geq 1}$ is a sequence of nonnegative i.i.d. random variables. The distribution of $m_n$ depends only on $\xi_n$, and $m_n$ is independent of $\{Z_i\}_{1\leq i \leq n}$.
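To fix ideas, the following small simulation sketch (not from the paper; the uniform environment, the shifted-Poisson offspring law and all numerical choices are assumptions made purely for illustration) generates one trajectory of such a process together with its quenched means $m_n$:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_bpre(n_gen, xi_low=0.5, xi_high=1.5):
    """Toy BPRE: environments xi_n ~ U[xi_low, xi_high] i.i.d.; given xi_n, each of the
    Z_n individuals has 1 + Poisson(xi_n) offspring (so p_0(xi_n) = 0).
    Returns the population sizes Z_0, ..., Z_{n_gen} and the quenched means m_n = 1 + xi_n."""
    Z = [1]
    m_quenched = []
    for _ in range(n_gen):
        xi = rng.uniform(xi_low, xi_high)           # random environment of this generation
        offspring = 1 + rng.poisson(xi, size=Z[-1])
        Z.append(int(offspring.sum()))
        m_quenched.append(1.0 + xi)                 # m_n = E_xi[X_{n,i}]
    return np.array(Z), np.array(m_quenched)

Z, m_quenched = simulate_bpre(12)
print("population sizes Z_0, ..., Z_12:", Z)
print("offspring mean m = E m_0 =", 2.0, " quenched means m_n:", np.round(m_quenched, 2))
```

The ratios $Z_{k+1}/Z_k$ of successive population sizes produced by such a simulation are exactly the Lotka-Nagaev ratios whose averages are studied in the next paragraphs.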
Clearly, it holds $m= \mathbf{E}m_n.$ The process $\{Z_n \}_{ n\geq 0}$ is called supercritical, critical or subcritical according as $\mu >0$, $\mu =0$ or $\mu <0$, respectively. Assume $$\begin{aligned} \label{defv1} p_0(\xi_0)=0 \ \ \ \ a.s.,\end{aligned}$$ which means each individual has at least one offspring. Thus the process $\{Z_n\}_{n\geq 0 }$ is supercritical. Denote by $v$ and $\sigma$ the standard deviations of $Z_1$ and $m_0$ respectively, that is $$\begin{aligned} \upsilon^2=\mathbf{E} (Z_1-m)^2,\ \ \ \ \ \ \sigma^2= \mathbf{E} (m_0-m)^2.\end{aligned}$$ To avoid triviality, assume that $0 < v, \sigma <\infty,$ which implies that $\mu, \nu^2$ and $\tau$ are all finite. The assumption $\sigma >0$ means that the random environment is not degenerate. For the degenerate case $\sigma =0$ (i.e. the Bienaymé-Galton-Watson process), Maaouia and Touati [@MT05] have established the CLT for the maximum likelihood estimator of $m$. Moreover, we also assume that $$\begin{aligned} \label{defv3} \mathbf{E} \frac{Z_1}{m_0} \ln^+ Z_1 < \infty.\end{aligned}$$ Denote $V_n= \frac{Z_n }{\Pi_{i=0}^{n-1}m_i},\ n \geq 1.$ The conditions ([\[defv1\]](#defv1){reference-type="ref" reference="defv1"}) and ([\[defv3\]](#defv3){reference-type="ref" reference="defv3"}) imply that the limit $V=\lim_{n\rightarrow \infty} V_n$ exists a.s., $V_n \rightarrow V$ in $\mathbf{L}^{1}$ and $\mathbf{P}(V >0)=\mathbf{P}(\lim_{n \rightarrow \infty} Z_n = \infty)=\lim_{n \rightarrow \infty} \mathbf{P}(Z_n >0) =1$ (see Athreya and Karlin [@AK71b] and Tanny [@T88]). We also need the following assumption: There exist two constants $p > 1$ and $\eta_0 \in (0, 1)$ such that $$\begin{aligned} \label{defv4} \mathbf{E} (\theta_0(p))^{\eta_0} < +\infty, \ \ \ \ \ \ \textrm{where} \ \ \ \theta_0(p)= \frac{Z_1^p}{m_0^p} .\end{aligned}$$ Under the last assumption, Grama, Liu and Miqueu [@GLP20] proved that there exists a positive constant $\alpha$ such that $$\begin{aligned} \label{defv55} \mathbf{E} V^{-\alpha} < \infty.\end{aligned}$$ A critical task in the statistical inference of branching processes in a random environment (BPRE) is to estimate the offspring mean $m.$ To this end, the Lotka-Nagaev [@L39; @N67] estimator $\frac{Z_{ k+1}}{Z_{ k}}$ plays an important role. Denote the average Lotka-Nagaev estimator by $\hat{m}_{n_0,n}$, that is $$\hat{m}_{n_0,n}=\frac{1}{n}\sum_{k=n_0}^{n_0+n-1} \frac{Z_{ k+1}}{Z_{ k}}.$$ For any $n\geq2$ and $n_0 \geq 0$, denote by $$\begin{aligned} \label{fdsds} S_{n_0,n} = \frac{ \sqrt{n}}{ \sigma \ } (\hat{m}_{n_0,n} -m )\end{aligned}$$ the normalized process for $\hat{m}_{n_0,n}$. Here, $n_0$ may depend on $n.$ We have the following Berry-Esseen bound for $S_{n_0,n}$ and $-S_{n_0,n}$. **Theorem 5**. *Assume that there exists a positive constant $\delta$ such that $$\nonumber \mathbf{E}| m_0 -m |^{2+\delta } + \mathbf{E}| Z_1 -m_0 |^{2+\delta } < \infty.$$* - *If $\delta \in (0, 1)$, then $$\begin{aligned} \label{rdffs1} \mathbf{K}( S_{n_0,n}) \ll \displaystyle \frac{ 1}{ n^{\delta/2}} \ \ \ \ \ \ \ \ and \ \ \ \ \ \ \ \textbf{W} ( S_{n_0,n}) \ll \displaystyle \frac{ 1}{ n^{\delta/2}}. \end{aligned}$$* - *If $\delta\geq 1$, then $$\begin{aligned} \label{rdffs2} \mathbf{K}( S_{n_0,n}) \ll \displaystyle \frac{ \ln n }{ \sqrt{n} } \ \ \ \ \ \ \ \ and \ \ \ \ \ \ \ \textbf{W}( S_{n_0,n}) \ll \displaystyle \frac{ \ln n }{ \sqrt{n} }.
\end{aligned}$$* *Moreover, ([\[rdffs1\]](#rdffs1){reference-type="ref" reference="rdffs1"}) and ([\[rdffs2\]](#rdffs2){reference-type="ref" reference="rdffs2"}) remain valid when $S_{n_0,n}$ is replaced by $-S_{n_0,n}$.* # Proofs of the main results {#sec4} For the sake of simplicity, in the sequel we denote $\rho_{n,k },$ $\overline{\rho}_{n,k}$, $\upsilon_{n,k}$ and $\tau_{n,k}$ by $\rho_{k },$ $\overline{\rho}_{k}$, $\upsilon_{k}$ and $\tau_{k}$, respectively. ## Proof of Theorem [Theorem 1](#th4.s1){reference-type="ref" reference="th4.s1"} {#proof-of-theorem-th4.s1} The proof of Theorem [Theorem 1](#th4.s1){reference-type="ref" reference="th4.s1"} is a refinement of Lemma 3.3 of Grama and Haeusler [@GH00]. Compared to the proof of Grama and Haeusler [@GH00], our proof does not need the assumption $\| V_n^2/ s_n^2-1 \|_{\infty} \rightarrow 0.$ In the proof of Theorem [Theorem 1](#th4.s1){reference-type="ref" reference="th4.s1"}, we make use of the following technical lemma of Bolthausen [@2] (cf.  Lemma 1 therein), which plays an important role. **Lemma 1**. *Let $X$ and $Y$ be two random variables. Then $$\mathbf{K}(X) \leq c_1 \mathbf{K}(X+Y) + c_2 \big\| \mathbf{E}\left[ Y^2|X\right] \big\| _\infty ^{1/2}.$$* Now we are in the position to prove Theorem [Theorem 1](#th4.s1){reference-type="ref" reference="th4.s1"}. For $a>0,$ define $s_{n+1}^2 = s_n^2.$ Thus, by the definition, we have $\tau_{1}=\sqrt{s_n^2+a^2}$ and $\tau_{n+1}= a$. For $u, x\in \mathbf{R}$ and $y > 0,$ denote $$\Phi _u(x,y)=\Phi \bigg( \frac{u-x}{ \sqrt{y} } \bigg) . \label{RA-7}$$ Let $\mathcal{N}$ be independent of the martingale $\textit{\textbf{X}}$. By Lemma [Lemma 1](#LEMMA-APX-1){reference-type="ref" reference="LEMMA-APX-1"}, for any $a>0,$ we can deduce that $$\begin{aligned} \mathbf{K}(S_n/s_n) &\leq& c_1 \, \mathbf{K}(S_n/s_n + a\mathcal{N}/s_n ) +c_2\, \frac{a}{s_n } \nonumber \\ &=& c_1\sup_{u \in \mathbf{R}}\Big| \mathbf{E} [\Phi _u(S_n/s_n, a^2 /s_n^2)]- \Phi (u)\Big| +c_2\, \frac{a}{s_n } \nonumber\\ &\leq& c_1\sup_{u \in \mathbf{R}}\Big| \mathbf{E} [\Phi _u(S_n/s_n, a^2 /s_n^2 )]-\mathbf{E} [\Phi _u(S_0/s_n , \tau^2_1/s_n^2)]\Big| \nonumber \\ && \ +\, c_1\sup_{u \in \mathbf{R}}\Big| \mathbf{E} [\Phi _u(S_0/s_n, \tau^2_1/s_n^2)]-\Phi (u)\Big| +c_2\, \frac{a}{s_n } \nonumber\\ &=& c_1\sup_{u \in \mathbf{R}}\Big| \mathbf{E} [\Phi _u(S_n/s_n, \tau_{n+1}^2 /s_n^2)]-\mathbf{E} [\Phi _u(S_0/s_n, \tau^2_1/s_n^2)]\Big| \ \nonumber \\ && +\, c_1\sup_{u \in \mathbf{R}}\bigg| \Phi \bigg(\frac{u}{\sqrt{a^2/s_n^2+ 1}} \bigg)-\Phi (u)\bigg| +c_2\, \frac{a}{s_n } . \label{dfafs}\end{aligned}$$ Notice that $a/s_n \rightarrow 0$ as $n\rightarrow \infty.$ It is easy to see that $$\begin{aligned} \bigg| \Phi \bigg(\frac{u}{\sqrt{a^2/s_n^2+ 1}} \bigg)-\Phi (u)\bigg| & \leq & c_3 \bigg|\frac{1}{\sqrt{a^2/s_n^2+ 1}} -1 \bigg| \nonumber \\ &\leq & c_4\frac{a}{s_n } .\end{aligned}$$ Therefore, by the last inequality and ([\[dfafs\]](#dfafs){reference-type="ref" reference="dfafs"}), we can deduce that $$\begin{aligned} \mathbf{K}(S_n/s_n) \, \leq \, c_1\,\sup_{u \in \mathbf{R}}\bigg| \mathbf{E} [\Phi _u(S_n/s_n, \tau_{n+1}^2 /s_n^2)]-\mathbf{E} [\Phi _u(S_0/s_n, \tau^2_1/s_n^2)]\bigg| +c_5\, \frac{a}{s_n } . \label{dsdff}\end{aligned}$$ In the sequel, we give an estimation for the term $\mathbf{E} [\Phi _u(S_n/s_n, \tau_{n+1}^2 /s_n^2)]-\mathbf{E} [\Phi _u(S_0/s_n, \tau^2_1/s_n^2)]$. 
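As a clarifying remark added here (it assumes, as the notation suggests, that $\mathcal{N}$ is a standard normal random variable), the first equality in ([\[dfafs\]](#dfafs){reference-type="ref" reference="dfafs"}) simply rewrites the smoothed Kolmogorov distance: since $\mathcal{N}$ is independent of $S_n$, $$\mathbf{E} [\Phi _u(S_n/s_n, a^2 /s_n^2)] = \mathbf{E}\Big[ \Phi \Big( \frac{u-S_n/s_n}{a/s_n} \Big)\Big] = \mathbf{P}\big( S_n/s_n + a\mathcal{N}/s_n \leq u \big),$$ so that $\sup_{u\in\mathbf{R}}\big| \mathbf{E} [\Phi _u(S_n/s_n, a^2/s_n^2)]-\Phi(u)\big| = \mathbf{K}(S_n/s_n + a\mathcal{N}/s_n)$.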
By a simple telescoping sum, we have $$\begin{aligned} && \mathbf{E} [\Phi _u(S_n/s_n, \tau^2_{n+1} /s_n^2)]-\mathbf{E} [\Phi _u(S_0/s_n, \tau^2_1/s_n^2)] \\ &&\ \ \ \ \ \ \ \ \ \ \ \ \ \ = \ \sum_{k=1}^n\mathbf{E} \Big[ \Phi _u(S_k/s_n, \tau^2_{k+1} /s_n^2)-\Phi _u(S_{k-1}/s_n, \tau^2_{k} /s_n^2) \Big] .\end{aligned}$$ Denote $$\xi_k= \frac{X_k}{s_n } , \ \ \ \ \ \ 1 \leq k \leq n.$$ Using the fact that $$\frac{\partial ^2}{\partial x^2}\Phi _u(x,y)=2\frac \partial {\partial y}\Phi _u(x,y)$$ and that $\frac{\partial ^2}{\partial x^2}\Phi _u(S_{k-1}/s_n, \tau^2_{k+1} /s_n^2)$ is measurable with respect to $\mathcal{F}_{k-1}$, we obtain the following equality $$\nonumber \mathbf{E} [\Phi _u(S_n/s_n, \tau^2_{n+1} /s_n^2)]-\mathbf{E} [\Phi _u(S_0/s_n, \tau^2_1/s_n^2)]\ =\ I_1+I_2-I_3,$$ where $$\begin{aligned} I_1\ &=&\ \sum_{k=1}^n \mathbf{E} \bigg[ \frac{}{} \Phi _u(S_k/s_n, \tau^2_{k+1} /s_n^2)-\Phi _u(S_{k-1}/s_n, \tau^2_{k+1} /s_n^2) \nonumber \\ && \ \ \ \ \ \ \ \ \ \ +\, \frac \partial {\partial x}\Phi _u(S_{k-1}/s_n, \tau^2_{k+1} /s_n^2)\xi_k-\frac 12\frac{\partial ^2}{\partial x^2}\Phi _u(S_{k-1}/s_n, \tau^2_{k+1} /s_n^2)\xi _k^2 \bigg] , \nonumber\\ I_2 \ &=&\ \frac 12 \sum_{k=1}^n \mathbf{E} \bigg[\frac{\partial ^2}{\partial x^2}\Phi _u(S_{k-1}/s_n, \tau^2_{k+1} /s_n^2)\Big( \mathbf{E} [ \xi _k^2 | \mathcal{F}_{k-1}] - \overline{\sigma}_k^2 /s_n^2 \Big) \bigg] ,\quad \quad \ \nonumber \\ I_3\ &=&\ \sum_{k=1}^n\mathbf{E} \bigg[ \Phi _u(S_{k-1}/s_n, \tau^2_{k} /s_n^2)-\Phi _u(S_{k-1}/s_n, \tau^2_{k-1} /s_n^2)-\frac \partial {\partial y}\Phi _u(S_{k-1}/s_n, \tau^2_{k-1} /s_n^2)\frac{ \overline{\sigma}_k^2 }{s_n^2} \bigg]. \nonumber\end{aligned}$$ From ([\[dsdff\]](#dsdff){reference-type="ref" reference="dsdff"}) and the definitions of $I_1, I_2$ and $I_3$, we can deduce that $$\begin{aligned} \mathbf{K}(S_n/s_n) \leq C\, \Big( |I_1|+|I_2|+|I_3| + \frac{a}{s_n } \Big).\end{aligned}$$ In the sequel, we give some estimates for the terms $|I_1|,$ $|I_2|$ and $|I_3|.$ **a)** Control of $|I_1|.$ It is easy to see that for all $1\leq k \leq n,$ $$\begin{aligned} && R_k:= \frac{}{} \Phi _u(S_k/s_n, \tau^2_{k+1} /s_n^2)-\Phi _u(S_{k-1}/s_n, \tau^2_{k+1} /s_n^2) \, +\, \frac \partial {\partial x}\Phi _u(S_{k-1}/s_n, \tau^2_{k+1} /s_n^2)\, \xi_k \nonumber \\ && \ \ \ \ \ \ \ \ \ \ -\frac 12\frac{\partial ^2}{\partial x^2}\Phi _u(S_{k-1}/s_n, \tau^2_{k+1} /s_n^2)\, \xi _k^2 \nonumber \\ && \ \ \ \ \, =\ \Phi \Big( H_{k-1} - \frac{ X_k}{\tau_{k+1}} \Big)-\Phi( H_{k-1}) +\, \Phi'(H_{k-1}) \frac{ X_k}{\tau_{k+1}} -\frac 12\Phi''(H_{k-1}) \Big(\frac{ X_k}{\tau_{k+1}}\Big)^2 , \nonumber\end{aligned}$$ where $$\displaystyle H_{k-1}= \frac{u-S_{k-1}/s_n}{\tau_{k+1} /s_n} .$$ When $|X_k|\leq \tau_{k+1}$, it is easy to see that $$\begin{aligned} \Big| \mathbf{E}[R_k \mathbf{1}_{ \{|X_k|\leq \tau_{k+1} \}} ] \Big| &\leq & \bigg| \mathbf{E}\Big[ \frac1{6} \Phi^{(3)} \Big( H_{k-1} - \vartheta \frac{ X_k}{\tau_{k+1}} \Big) \Big(\frac{ X_k}{\tau_{k+1}}\Big)^3 \textbf{1}_{\{|X_k|\leq \tau_{k+1}\}} \Big] \bigg|\nonumber \\ &\leq & \frac{C }{\tau_{k+1}^3} \mathbf{E}\big[ |X_k|^3 \textbf{1}_{\{|X_k|\leq \tau_{k+1}\}} \big], \label{dfs01}\end{aligned}$$ where $\vartheta$ stands for some values or random variables satisfying $0 \leq \vartheta \leq 1$, which may represent different values at different places. Next, we consider the case $|X_k|> \tau_{k+1}$. 
It is easy to see that for all $|\Delta x|>1 ,$ $$\begin{aligned} \bigg|\Phi(x+\Delta x)-\Phi(x) - \Phi'(x) \Delta x - \frac12 \Phi''(x) (\Delta x)^2 \bigg| \ \leq \ \frac{c_2}{ 2+ x^2 } \, |\Delta x|^{2 }.\end{aligned}$$ Therefore, we have $$\begin{aligned} \nonumber \Big| R_k \mathbf{1}_{ \{|X_k|> \tau_{k+1} \}} \Big| \leq G(H_{k-1}) \, \Big(\frac{ X_k}{\tau_{k+1}}\Big)^2\mathbf{1}_{ \{|X_k|> \tau_{k+1} \}} ,\end{aligned}$$ where $\displaystyle G(z)=\frac{c_2}{ 2+ z^{2} } .$ As $G$ is bounded, we deduce that $$\begin{aligned} \label{dfs02} \Big|\mathbf{E}[R_k \mathbf{1}_{ \{|X_k|> \tau_{k+1} \}} ] \Big| \leq \frac{C }{\tau_{k+1}^2} \mathbf{E}\big[ X_k ^2 \textbf{1}_{\{|X_k|> \tau_{k+1}\}} \big].\end{aligned}$$ Applying ([\[dfs01\]](#dfs01){reference-type="ref" reference="dfs01"}) and ([\[dfs02\]](#dfs02){reference-type="ref" reference="dfs02"}) to the estimation of $|I_1|,$ we get $$\begin{aligned} |I_1| &\leq& \sum_{k=1}^n \Big|\mathbf{E}R_k \Big| \leq \sum_{k=1}^n \Big|\mathbf{E}[R_k \mathbf{1}_{ \{|X_k|\leq \tau_{k+1} \}} ] \Big| + \sum_{k=1}^n \Big| \mathbf{E}[R_k \mathbf{1}_{ \{|X_k|>\tau_{k+1} \}} ] \Big| \nonumber \\ &\leq& C \,\sum_{k=1}^n \Bigg( \frac{ \mathbf{E}\big[ |X_k|^3 \textbf{1}_{\{|X_k|\leq \tau_{k+1}\}} \big] }{\tau_{k+1}^3}+ \frac{ \mathbf{E}\big[ X_k ^2 \textbf{1}_{\{|X_k|> \tau_{k+1}\}} \big] }{\tau_{k+1}^2} \Bigg) .\end{aligned}$$ **b)** Control of $|I_2|.$ For $|I_2|,$ we have $$\begin{aligned} \left| I_2\right| &\leq& \Bigg|\frac{1}{\tau_{k+1}^2 /s_n^2}\sum_{k=1}^n \mathbf{E} \bigg[ \varphi'(H_{k-1}) \Big(\mathbf{E} [ \xi _k^2 | \mathcal{F}_{k-1}] - \overline{\sigma}_k^2 /s_n^2 \Big) \bigg] \Bigg|\\ &\leq& C\,\sum_{k=1}^n \frac{1}{\tau_{k+1}^2 } \mathbf{E} \big | \sigma_k^2 - \overline{\sigma}_k^2 \big|,\end{aligned}$$ where $\varphi$ is the density function of the standard normal random variable. **c)** Control of $|I_3|.$ For $|I_3|,$ by a two-term Taylor's expansion, it follows that $$I_3=\frac{1}{8}\,\sum_{k=1}^n\mathbf{E} \Bigg[\frac 1{\tau_{k }^2/s_n^2-\vartheta \overline{\sigma}^2_k/s_n^2 } \varphi ^{\prime \prime \prime }\bigg( \frac{u-X_{k-1}/s_n}{\sqrt{ \tau_{k }^2/s_n^2-\vartheta \overline{\sigma}^2_k/s_n^2}\ } \bigg) \Big(\frac{\overline{\sigma}^2_k }{s_n^2 } \Big)^2\Bigg],$$ where $0 \leq \vartheta \leq 1$. As $\varphi^{\prime \prime \prime }$ is bounded, we obtain $$\begin{aligned} \label{fgfdd} \left|I_3\right| \ \leq \ C \, \sum_{k=1}^n\frac 1{ \tau_{k-1}^2 /s_n^2 } \ \Big(\frac{\overline{\sigma}^2_k }{s_n^2 } \Big)^2 .\end{aligned}$$ Applying the upper bounds of $|I_1|, |I_2|$ and $|I_3|$ to ([\[dsdff\]](#dsdff){reference-type="ref" reference="dsdff"}), we get $$\begin{aligned} \label{ds5sfs} \mathbf{K}(S_n/s_n) \! &\leq& \! 
C \bigg[ \,\sum_{k=1}^n \bigg( \frac{ \mathbf{E}\big[ |X_k|^3 \textbf{1}_{\{|X_k|\leq \tau_{k+1}\}} \big] }{\tau_{k+1}^3}+ \frac{ \mathbf{E}\big[ X_k ^2 \textbf{1}_{\{|X_k|> \tau_{k+1}\}} \big] \ +\ \mathbf{E} \big | \sigma_k^2 - \overline{\sigma}_k^2 \big| }{\tau_{k+1}^2} \nonumber \\ && \ \ \ \ \ \ \ + \ \frac 1{ \tau_{k-1}^2 /s_n^2 } \ \Big(\frac{\overline{\sigma}^2_k }{s_n^2 } \Big)^2 \bigg) + \frac{a}{s_n } \ \bigg].\end{aligned}$$ Notice that $\overline{\sigma}^2_k / s_n^2 \leq || \textit{\textbf{X}} ||_2^2/ s_n^2 \rightarrow 0$ as $n \rightarrow \infty.$ Thus, when $a\geq1,$ we have $$\begin{aligned} \sum_{k=1}^n \frac 1{ \tau_{k-1}^2 /s_n^2 } \ \Big(\frac{\overline{\sigma}^2_k }{s_n^2 } \Big)^2 \ &\leq & \ \sum_{k=1}^n \frac 1{ 1 - s_{k-1}^2/ s_n^2 +a^2/ s_n^2 } \ \frac{\overline{\sigma}^2_k }{s_n^2 } \ \frac{|| \textit{\textbf{X}} ||_2^2}{s_n^2 } \nonumber \\ &\leq & \ C \, \frac{|| \textit{\textbf{X}} ||_2^2}{s_n^2 } \ln \Big(2+ \frac{s_n^2}{a^2} \Big) \nonumber \\ &\leq & \ C \, \frac{a}{s_n }. \label{fsdh5}\end{aligned}$$ Applying the last inequality to ([\[ds5sfs\]](#ds5sfs){reference-type="ref" reference="ds5sfs"}), we get the desired inequality. 0◻ ## Proof of Corollary [Corollary 1](#co2.5){reference-type="ref" reference="co2.5"} {#proof-of-corollary-co2.5} Notice that $\overline{\sigma}^2_k / s_n^2 \leq || \textit{\textbf{X}} ||_2^2/ s_n^2 \rightarrow 0$ as $n \rightarrow \infty.$ Thus, we have $$\begin{aligned} \sum_{k=1}^n \frac{ \mathbf{E}\big[ |X_k|^3 \textbf{1}_{\{|X_k|\leq \tau_{k+1}\}} \big] }{\tau_{k+1}^3} &\leq& \sum_{k=1}^n\frac{ \mathbf{E} |X_k|^{2+ \delta} }{ \tau_{k+1}^{2+\delta} } \ \leq\ C\, \sum_{k=1}^n\frac{ \overline{\sigma}_k^2 }{ \tau_{k+1}^{2+\delta} } \nonumber\\ &=& \ C\,\sum_{k=1}^n\frac{ 1}{ (\tau_{k+1}/s_n)^{2+\delta} } \frac{\overline{\sigma}_k^2}{s_n^2} \frac{1}{s_n^{\delta}}\nonumber\\ &\leq& \ C\, \frac{1}{a^\delta } .\end{aligned}$$ Similarly, we have $$\begin{aligned} \sum_{k=1}^n \frac{ \mathbf{E}\big[ |X_k|^2 \textbf{1}_{\{|X_k|> \tau_{k+1}\}} \big] }{\tau_{k+1}^2} &\leq& \sum_{k=1}^n\frac{ \mathbf{E} |X_k|^{2+ \delta} }{ \tau_{k+1}^{2+\delta} } \ \leq\ C\, \frac{1}{a^\delta } .\end{aligned}$$ Using the condition $\mathbf{E} \big|\sigma_k^2-\overline{\sigma}_k^2 \big|\leq C \overline{\sigma}_k^2 s_n^{-\alpha},$ we get $$\begin{aligned} \label{une4.3} \sum_{k=1}^n\frac{ \mathbf{E} \big | \sigma_k^2 - \overline{\sigma}_k^2 \big| }{\tau_{k+1}^2}&\leq& C \sum_{k=1}^n\frac{1}{ \tau_{k+1}^{2 } } \frac{\overline{\sigma}_k^2}{s_n^{\alpha}} \ \leq\ C\, \frac{\ln(2+ s_n^2/a^2)}{s_n^{\alpha}} .\end{aligned}$$ Thus, by Theorem [Theorem 1](#th4.s1){reference-type="ref" reference="th4.s1"}, we deduce that for all $a\geq 1,$ $$\begin{aligned} \mathbf{K}(S_n/s_n) \ \leq \ C \bigg( \frac{1}{a^\delta } \ +\ \frac{\ln(2+ s_n^2/a^2)}{s_n^{\alpha}}\ + \ \frac{a}{s_n } \bigg) .\end{aligned}$$ Taking $a= s_n^{1/(1+\delta)}$ in the last inequality, we get $$\begin{aligned} \mathbf{K}(S_n/s_n) \ \leq \ C \, \bigg(\displaystyle \frac{ 1}{ s_n^{\delta /(1+ \delta) }}+ \frac{\ln s_n }{ s_n^{\alpha } } \bigg) ,\end{aligned}$$ which gives the desired inequality. 
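As a remark added for clarity (not part of the original argument), the choice $a=s_n^{1/(1+\delta)}$ above simply balances the two competing terms in the preceding bound: ignoring constants, $$\frac{1}{a^{\delta}} = \frac{a}{s_n} \quad \Longleftrightarrow \quad a = s_n^{1/(1+\delta)}, \qquad \textrm{in which case} \qquad \frac{1}{a^{\delta}} + \frac{a}{s_n} = \frac{2}{s_n^{\delta/(1+\delta)}},$$ while the middle term is then at most $C \ln s_n / s_n^{\alpha}$.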
## Proof of Corollary [Corollary 2](#th4.21){reference-type="ref" reference="th4.21"} {#proof-of-corollary-th4.21}

By the conditions of Corollary [Corollary 2](#th4.21){reference-type="ref" reference="th4.21"}, it is easy to see that $\sigma^2 \leq \bar{\sigma}_k^2 \leq ||\mathbf{X}||^2_{2+\delta}.$ By the fact that $\mathbf{E} \big|\sigma_k^2-\overline{\sigma}_k^2 \big|=O( k^{-\alpha}) \ (k\rightarrow \infty)$, we can deduce that for $\alpha \in (0, 1)$ and all $n\geq 2,$ $$\begin{aligned} \sum_{k=1}^n\frac{ \mathbf{E} \big | \sigma_k^2 - \overline{\sigma}_k^2 \big| }{\tau_{k+1}^2}&\leq& C\, \sum_{k=1}^n \frac{ 1 }{ n -k + 1 \ } k^{-\alpha} \\ &=& C\, \sum_{k=1}^n \frac{ 1 }{ 1 -k/n + 1/n \ } \frac{1}{ (k/n)^{\alpha}} \, \frac{1}{n} \, n^{ -\alpha} \\ &\leq& C\, \int_{0}^1 \frac{ 1 }{ 1 -x +1/n\ } \frac{1}{ x^{\alpha}} \,dx \, n^{ -\alpha} \\ &\leq& C\, \frac{\ln n }{ n^{\alpha } }.\end{aligned}$$ The remainder of the proof is similar to the proof of Corollary [Corollary 1](#co2.5){reference-type="ref" reference="co2.5"}.

## Proof of Theorem [Theorem 2](#th4.2){reference-type="ref" reference="th4.2"} {#proof-of-theorem-th4.2}

For the proof of Theorem [Theorem 2](#th4.2){reference-type="ref" reference="th4.2"}, we make use of the following technical lemma of Bolthausen (cf. Lemma 2 of [@2]).

**Lemma 2**. *Let $G(x)$ be an integrable function on $\mathbf{R}$ of bounded variation $||G||_V$, $X$ a random variable and $a,$ $b\neq 0$ real numbers. Then $$\mathbf{E}\Big[ \, G\Big( \frac{X+a}b\Big) \Big] \leq ||G||_V \, \mathbf{K}(X) + ||G||_1 \, |b|,$$ where $||G||_1$ is the $L_1(\mathbf{R})$ norm of $G(x).$*

It is easy to see that for all $1\leq k \leq n,$ $$\begin{aligned} && R_k =\ \Phi \Big( H_{k-1} - \frac{ X_k}{\tau_{k+1}} \Big)-\Phi( H_{k-1}) +\, \Phi'(H_{k-1}) \frac{ X_k}{\tau_{k+1}} -\frac 12\Phi''(H_{k-1}) \Big(\frac{ X_k}{\tau_{k+1}}\Big)^2 . \nonumber\end{aligned}$$ For the estimation of $|\mathbf{E} R_k|,$ we distinguish two cases as follows.

**a)** Case $|X_k|\leq \tau_{k+1}$. When $|X_k|\leq \tau_{k+1}$, it is easy to see that $$\begin{aligned} \Big| \mathbf{E}[R_k \mathbf{1}_{ \{|X_k|\leq \tau_{k+1} \}} ] \Big| &\leq & \bigg| \mathbf{E}\Big[ \frac1{6} \Phi^{(3)} \Big( H_{k-1} - \vartheta \frac{ X_k}{\tau_{k+1}} \Big) \Big(\frac{ X_k}{\tau_{k+1}}\Big)^3 \textbf{1}_{\{|X_k|\leq \tau_{k+1}\}} \Big] \bigg|\nonumber \\ &\leq & \frac{C }{\tau_{k+1}^3} \mathbf{E}\big[ H(H_{k-1}) |X_k|^3 \textbf{1}_{\{|X_k|\leq \tau_{k+1}\}} \big] \nonumber \\ &\leq& \frac{C }{\tau_{k+1}^3} \big\| \mathbf{E} [ |X_k|^3 \mathbf{1}_{\{|X_k|\leq \tau_{k+1}\}} |\mathcal{F}_{k-1} ] \big\|_{\infty} \mathbf{E}[ H(H_{k-1})], \label{dfssf01}\end{aligned}$$ where $H(x)=\sup_{|t|\leq 1} |\Phi^{(3)}(x+t) |$ and $\vartheta$ stands for some values or random variables satisfying $0 \leq \vartheta \leq 1$, which may represent different values at different places.
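Before Lemma [Lemma 2](#LEMMA-APX-2){reference-type="ref" reference="LEMMA-APX-2"} is applied in the next step, note (a remark added for completeness) that $H$ is an admissible choice of $G$ there: since $$\Phi^{(3)}(x) = \varphi''(x) = (x^2-1)\,\varphi(x),$$ with $\varphi$ the standard normal density, the envelope $H(x)=\sup_{|t|\leq 1} |\Phi^{(3)}(x+t)|$ is continuous, integrable and of bounded variation on $\mathbf{R}$, so that $||H||_1$ and $||H||_V$ are finite.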
By Lemma [Lemma 2](#LEMMA-APX-2){reference-type="ref" reference="LEMMA-APX-2"}, we have $$\begin{aligned} \mathbf{E}[ H(H_{k-1})] \leq C \, \mathbf{K}(S_{k-1}/s_n) + C \, |\tau_{k+1} /s_n |.\end{aligned}$$ By Lemma [Lemma 1](#LEMMA-APX-1){reference-type="ref" reference="LEMMA-APX-1"}, we deduce that for $a$ large enough, $$\begin{aligned} \mathbf{E}[ H(H_{k-1})] & \leq & C \, \mathbf{K}(S_{n}/s_n) + C \big\| \rho_{k} / s_n \big\| _\infty + C \, |\tau_{k+1} /s_n | \nonumber \\ & \leq & C \, \mathbf{K}(S_{n}/s_n) + C \, |\tau_{k+1} /s_n | , \nonumber\end{aligned}$$ where the last line using the fact that $\rho_{n,k}^2 \leq C\, \overline{\rho}^2_{n,k}$. Applying the last line to ([\[dfssf01\]](#dfssf01){reference-type="ref" reference="dfssf01"}), we get $$\begin{aligned} \Big| \mathbf{E}[R_k \mathbf{1}_{ \{|X_k|\leq \tau_{k+1} \}} ] \Big| &\leq & \frac{C \big\| \mathbf{E} [ |X_k|^3 \mathbf{1}_{\{|X_k|\leq \tau_{k+1}\}} |\mathcal{F}_{k-1} ] \big\|_{\infty} }{\tau_{k+1}^3} \mathbf{K}(S_{n}/s_n) \nonumber \\ && \ \ \ \ \ \ \ \ \ \ \ + \ \frac{C }{s_n\,} \frac{\big\| \mathbf{E} [ |X_k|^3 \mathbf{1}_{\{|X_k|\leq \tau_{k+1}\}} |\mathcal{F}_{k-1} ] \big\|_{\infty}}{\tau_{k+1}^2} . \label{dfssdd1}\end{aligned}$$ **b)** Case $|X_k|>\tau_{k+1}$. It is easy to see that for all $|\Delta x|>1 ,$ $$\begin{aligned} \bigg|\Phi(x+\Delta x)-\Phi(x) - \Phi'(x) \Delta x - \frac12 \Phi''(x) (\Delta x)^2 \bigg| \ \leq \ \frac{c_2}{ 2+ x^2 } \, |\Delta x|^{2 }.\end{aligned}$$ Therefore, we have $$\begin{aligned} \nonumber \Big| \mathbf{E}[R_k \mathbf{1}_{ \{|X_k|> \tau_{k+1} \}} ] \Big| \leq G(H_{k-1}) \, \Big(\frac{ X_k}{\tau_{k+1}}\Big)^2\mathbf{1}_{ \{|X_k|> \tau_{k+1} \}} ,\end{aligned}$$ where $\displaystyle G(z)=\frac{c_2}{ 2+ z^{2} } .$ As $H_{k-1}$ is $\mathcal{F}_{k-1}$-measurable, we deduce that $$\begin{aligned} \Big|\mathbf{E}[R_k \mathbf{1}_{ \{|X_k|> \tau_{k+1} \}} ] \Big| &\leq& \frac{C }{\tau_{k+1}^2} \mathbf{E}\big[ G(H_{k-1}) X_k ^2 \textbf{1}_{\{|X_k|> \tau_{k+1}\}} \big] \nonumber \\ &\leq& \frac{C }{\tau_{k+1}^2} \big\| \mathbf{E} [ X_k^2 \mathbf{1}_{\{|X_k|> \tau_{k+1}\}} |\mathcal{F}_{k-1} ] \big\|_{\infty} \mathbf{E}[ G(H_{k-1})].\label{dfgfs02}\end{aligned}$$ By an argument similar to the proof of ([\[dfssdd1\]](#dfssdd1){reference-type="ref" reference="dfssdd1"}), we deduce that $$\begin{aligned} \Big| \mathbf{E}[R_k \mathbf{1}_{ \{|X_k|> \tau_{k+1} \}} ] \Big| &\leq & \frac{C \big\| \mathbf{E} [ X_k^2 \mathbf{1}_{\{|X_k|> \tau_{k+1}\}} |\mathcal{F}_{k-1} ] \big\|_{\infty}}{\tau_{k+1}^2} \mathbf{K}(S_{n}/s_n) \nonumber \\ &&\ \ \ \ \ \ \ \ \ \ \ +\ \frac{C }{s_n\,} \frac{\big\| \mathbf{E} [ X_k^2 \mathbf{1}_{\{|X_k|> \tau_{k+1}\}} |\mathcal{F}_{k-1} ] \big\|_{\infty}}{\tau_{k+1} } \label{dfssdd2} .\end{aligned}$$ Applying ([\[dfssdd1\]](#dfssdd1){reference-type="ref" reference="dfssdd1"}) and ([\[dfssdd2\]](#dfssdd2){reference-type="ref" reference="dfssdd2"}) to the estimation of $|I_1|,$ we get $$\begin{aligned} |I_1| &\leq &\sum_{k=1}^n\big|\mathbf{E} R_k \big| \leq \sum_{k=1}^n \Big| \mathbf{E}[R_k \mathbf{1}_{ \{|X_k|\leq \tau_{k+1} \}} ] \Big| +\sum_{k=1}^n\Big| \mathbf{E}[R_k \mathbf{1}_{ \{|X_k|> \tau_{k+1} \}} ] \Big| \nonumber \\ &\leq& C \sum_{k=1}^n\bigg[ \frac{ \big\| \mathbf{E} [ |X_k|^3 \mathbf{1}_{\{|X_k|\leq \tau_{k+1}\}} |\mathcal{F}_{k-1} ] \big\|_{\infty}}{\tau_{k+1}^3} + \frac{ \big\| \mathbf{E} [ X_k^2 \mathbf{1}_{\{|X_k|> \tau_{k+1}\}} |\mathcal{F}_{k-1} ] \big\|_{\infty}}{\tau_{k+1}^2} \bigg] \mathbf{K}(S_{n}/s_n) \nonumber \\ && \ +\ \frac{C }{s_n\,} 
\sum_{k=1}^n\bigg[ \frac{ \big\| \mathbf{E} [ |X_k|^3 \mathbf{1}_{\{|X_k|\leq \tau_{k+1}\}} |\mathcal{F}_{k-1} ] \big\|_{\infty}}{\tau_{k+1}^2} + \frac{ \big\| \mathbf{E} [ X_k^2 \mathbf{1}_{\{|X_k|> \tau_{k+1}\}} |\mathcal{F}_{k-1} ] \big\|_{\infty}}{\tau_{k+1} } \bigg].\end{aligned}$$ By the controls of $|I_2|$ and $|I_3|$ in the proof of Theorem [Theorem 1](#th4.s1){reference-type="ref" reference="th4.s1"}, we deduce that $$\begin{aligned} \mathbf{K}(S_n/s_n) &\leq& C_* \sum_{k=1}^n\bigg[ \frac{ \big\| \mathbf{E} [ |X_k|^3 \mathbf{1}_{\{|X_k|\leq \tau_{k+1}\}} |\mathcal{F}_{k-1} ] \big\|_{\infty}}{\tau_{k+1}^3} + \frac{ \big\| \mathbf{E} [ X_k^2 \mathbf{1}_{\{|X_k|> \tau_{k+1}\}} |\mathcal{F}_{k-1} ] \big\|_{\infty}}{\tau_{k+1}^2} \bigg] \mathbf{K}(S_{n}/s_n) \nonumber \\ && \ +\ \frac{C }{s_n\,} \sum_{k=1}^n\bigg[ \frac{ \big\| \mathbf{E} [ |X_k|^3 \mathbf{1}_{\{|X_k|\leq \tau_{k+1}\}} |\mathcal{F}_{k-1} ] \big\|_{\infty}}{\tau_{k+1}^2} + \frac{ \big\| \mathbf{E} [ X_k^2 \mathbf{1}_{\{|X_k|> \tau_{k+1}\}} |\mathcal{F}_{k-1} ] \big\|_{\infty}}{\tau_{k+1} } \bigg]\\ && +\ C\, \Bigg[ \sum_{k=1}^n \frac{ \mathbf{E} \big | \sigma_k^2 - \overline{\sigma}_k^2 \big| }{\tau_{k+1}^2} \ + \ \frac 1{ \tau_{k-1}^2 /s_n^2 } \ \Big(\frac{\overline{\sigma}^2_k }{s_n^2 } \Big)^2 \ \Bigg] + \frac{a}{s_n }.\end{aligned}$$ By condition ([\[fsdfzn\]](#fsdfzn){reference-type="ref" reference="fsdfzn"}), for all $a$ large enough, we have $$\sum_{k=1}^n\bigg[ \frac{ \big\| \mathbf{E} [ |X_k|^3 \mathbf{1}_{\{|X_k|\leq \tau_{k+1}\}} |\mathcal{F}_{k-1} ] \big\|_{\infty}}{\tau_{k+1}^3} + \frac{ \big\| \mathbf{E} [ X_k^2 \mathbf{1}_{\{|X_k|> \tau_{k+1}\}} |\mathcal{F}_{k-1} ] \big\|_{\infty}}{\tau_{k+1}^2} \bigg] \leq \frac{1}{2C_*},$$ and, by ([\[fsdh5\]](#fsdh5){reference-type="ref" reference="fsdh5"}), $$\frac 1{ \tau_{k-1}^2 /s_n^2 } \ \Big(\frac{\overline{\sigma}^2_k }{s_n^2 } \Big)^2 \leq C \frac{a}{s_n }.$$ Therefore, it holds for all $a$ large enough, $$\begin{aligned} \mathbf{K}(S_n/s_n) &\leq& \frac{C }{s_n\,} \sum_{k=1}^n\bigg[ \frac{ \big\| \mathbf{E} [ |X_k|^3 \mathbf{1}_{\{|X_k|\leq \tau_{k+1}\}} |\mathcal{F}_{k-1} ] \big\|_{\infty}}{\tau_{k+1}^2} + \frac{ \big\| \mathbf{E} [ X_k^2 \mathbf{1}_{\{|X_k|> \tau_{k+1}\}} |\mathcal{F}_{k-1} ] \big\|_{\infty}}{\tau_{k+1} } \bigg]\\ && + \ \frac12 \mathbf{K}(S_n/s_n) \ +\ C\, \bigg[ \sum_{k=1}^n \frac{ \mathbf{E} \big | \sigma_k^2 - \overline{\sigma}_k^2 \big| }{\tau_{k+1}^2} + \frac{a}{s_n } \bigg] ,\end{aligned}$$ which implies the desired inequality. ## Proof of Corollary [Corollary 3](#fssdvs){reference-type="ref" reference="fssdvs"} {#proof-of-corollary-fssdvs} We only need to consider the case $\delta \in (0, 1].$ We first verify the condition ([\[fsdfzn\]](#fsdfzn){reference-type="ref" reference="fsdfzn"}). 
Notice that $\overline{\sigma}^2_k / s_n^2 \leq ||\mathbf{X}||_2/ s_n^2 \rightarrow 0$ as $n \rightarrow \infty.$ Thus, we have $$\begin{aligned} \sum_{k=1}^n \frac{ \big\| \mathbf{E} [ |X_k|^3 \mathbf{1}_{\{|X_k|\leq \tau_{k+1}\}} |\mathcal{F}_{k-1} ] \big\|_{\infty}}{\tau_{k+1}^3} &\leq& \sum_{k=1}^n\frac{\big\| \mathbf{E} [ |X_k|^{2+ \delta} |\mathcal{F}_{k-1} ] \big\|_{\infty} }{ \tau_{k+1}^{2+\delta} } \ \leq\ C\, \sum_{k=1}^n\frac{ \overline{\sigma}_k^2 }{ \tau_{k+1}^{2+\delta} } \nonumber\\ &\leq& C\, \sum_{k=1}^n\frac{ \overline{\sigma}_k^2 }{ (\tau_{k+1}/s_n)^{2+\delta/2} } \frac{ \overline{\sigma}_k^2 }{ s_n^2 } \frac{1}{s_n^{\delta/2}} \frac{ 1}{ a^{\delta/2} }\ \nonumber \\ &\leq& C\, (s_n/a)^{\delta/2} \frac{1}{s_n^{\delta/2}} \frac{ 1}{ a^{\delta/2} }\ \nonumber \\ &=& C\, \frac{1 }{a^\delta \,} \rightarrow 0.\end{aligned}$$ Similarly, we have $$\begin{aligned} \sum_{k=1}^n \frac{ \big\| \mathbf{E} [ X_k^2 \mathbf{1}_{\{|X_k|> \tau_{k+1}\}} |\mathcal{F}_{k-1} ] \big\|_{\infty} }{\tau_{k+1}^2} &\leq& \sum_{k=1}^n\frac{ \mathbf{E} |X_k|^{2+ \delta} }{ \tau_{k+1}^{2+\delta} } \ \leq\ C\, \frac{1 }{a^\delta\, } \rightarrow 0 .\end{aligned}$$ Thus the condition ([\[fsdfzn\]](#fsdfzn){reference-type="ref" reference="fsdfzn"}) is satisfied. Taking $a=s_n,$ we have for $\delta \in (0, 1)$, $$\begin{aligned} \sum_{k=1}^n \frac{ \big\| \mathbf{E} [ |X_k|^3 \mathbf{1}_{\{|X_k|\leq \tau_{k+1}\}} |\mathcal{F}_{k-1} ] \big\|_{\infty}}{\tau_{k+1}^2} &\leq& \sum_{k=1}^n\frac{\big\| \mathbf{E} [ |X_k|^{2+ \delta} |\mathcal{F}_{k-1} ] \big\|_{\infty} }{ \tau_{k+1}^{1+\delta} } \ \leq\ C\, \sum_{k=1}^n\frac{ \overline{\sigma}_k^2 }{ \tau_{k+1}^{1+\delta} } \nonumber\\ &=& \ C\,\sum_{k=1}^n\frac{ 1}{ (\tau_{k+1}/s_n)^{1+\delta} } \frac{\overline{\sigma}_k^2}{s_n^2} s_n^{1-\delta} \nonumber\\ &\leq& \ C\, s_n^{1-\delta} . \label{fdsds1}\end{aligned}$$ Similarly, we have for $\delta \in (0, 1)$ $$\begin{aligned} \sum_{k=1}^n \frac{ \big\| \mathbf{E} [ X_k^2 \mathbf{1}_{\{|X_k|> \tau_{k+1}\}} |\mathcal{F}_{k-1} ] \big\|_{\infty} }{\tau_{k+1} } &\leq& \sum_{k=1}^n\frac{\big\| \mathbf{E} [ |X_k|^{2+ \delta} |\mathcal{F}_{k-1} ] \big\|_{\infty} }{ \tau_{k+1}^{1+\delta} } \ \leq\ C\, s_n^{1-\delta} .\label{fdsds2}\end{aligned}$$ Applying the inequalities ([\[fdsds1\]](#fdsds1){reference-type="ref" reference="fdsds1"}) and ([\[fdsds2\]](#fdsds2){reference-type="ref" reference="fdsds2"}) to Theorem [Theorem 2](#th4.2){reference-type="ref" reference="th4.2"}, we obtain the desired inequality for $\delta \in (0, 1)$. For $\delta \geq 1,$ the inequality follows by a similar argument. ## Proof of Corollary [Corollary 4](#fsdvs){reference-type="ref" reference="fsdvs"} {#proof-of-corollary-fsdvs} The proof of Corollary [Corollary 4](#fsdvs){reference-type="ref" reference="fsdvs"} is similar to the proof of Corollary [Corollary 2](#th4.21){reference-type="ref" reference="th4.21"}. For the sake of simplicity, in the sequel we denote $\rho_{n,k },$ $\overline{\rho}_{n,k}$, $\upsilon_{n,k}$ and $\tau_{n,k}$ by $\rho_{k },$ $\overline{\rho}_{k}$, $\upsilon_{k}$ and $\tau_{k}$, respectively. ## Proof of Theorem [Theorem 3](#theorem2.1){reference-type="ref" reference="theorem2.1"} {#proof-of-theorem-theorem2.1} The proof is a modification of Röllin's argument. Let $Z', Z_1,..., Z_n$ be a sequence of independent standard normal random variables, and they are independent of $\mathbf{X}$. 
Define $$Z:=\sum_{i=1}^n \sigma_i Z_i, \qquad T_{k}:=\sum_{i=k}^n \sigma_i Z_i, \qquad k = 1, 2, ..., n.$$ By the definition of $Z$, $Z$ is a normal random variable with mean $0$ and variance $s_n^2$. As $\rho_k^2=s_n^2- V_k^2$ and $V_k^2$ is measurable with respect to $\mathcal{F}_{k-1}$, it holds $$\mathcal{L}(T_k| \mathcal{F}_{k-1}) \sim N(0,\rho_k^2),\$$ where $\mathcal{L}(\cdot | \mathcal{F}_{k-1})$ represents the conditional distribution on $\mathcal{F}_{k-1}$. Let $h$ be a fixed 1-Lipschitz continuous function. By the properties of Lipschitz continuous functions, we have $||h'||_{\infty} \leq 1$, where $||\cdot||_{\infty}$ denotes the supremum norm with respect to Lebesgue's measure. By the triangle inequality, it is easy to see that $$\Big|\mathbf{E}[h(S_n)-h(Z)] \Big| \leq \Big |\mathbf{E}[h(S_n+aZ')-h(Z+aZ')] \Big|+2a.$$ Define $S_0=T_{n+1}=0.$ Using telescoping sum representation, we have $$\mathbf{E}[h(S_n+aZ')-h(Z+aZ')] = \mathbf{E} \sum_{k=1}^n \mathbf{E}[R_k|\mathcal{F}_{k-1}],$$ where $$R_k = h(S_k+T_{k+1}+aZ') - h(S_{k-1}+T_k+aZ').$$ Let $g$ be the unique bounded solution of the following equation $$g'(x)-xg(x)=f(x)-\mathbf{E} f(Y),\ \ x \in \mathbf{R},$$ where $f(x) = h(tx+s)/t$, $s \in \mathbf{R}$, $t > 0$, $\mathcal{L}(Y) \sim N(0,1)$. Clearly, $f$ is a 1-Lipschitz continuous function. Chen, Goldstein and Shao [@7] proved that $$||g||_{\infty} \leq 2||f'||_{\infty}, \qquad ||g'||_{\infty} \leq \sqrt{\frac{2}{\pi}}||f'||_{\infty}, \qquad ||g''||_{\infty} \leq 2||f'||_{\infty}.$$ Set $f_{s,t}(w) := g((w - s)/t)$, where $w \in \mathbf{R}$. Then $f_{s,t}$ is also a Lipschitz function such that $$\label{gfdgd} t^2 f'_{s,t}(w)-(w-s)f_{s,t}(w)=h(w)-\mathbf{E}h(tY+s), \quad w \in \mathbf{R}.$$ As $g$ is bounded, we have $$\label{gsgs} ||f_{s,t}||_{\infty} \leq 2||h'||_{\infty}, \qquad ||f'_{s,t}||_{\infty} \leq \frac{||h'||_{\infty}}{t}, \qquad ||f''_{s,t}||_{\infty} \leq \frac{2||h'||_{\infty}}{t^2}.$$ By the definition of $f_{s,t}$, $f_{s,t}$, $f'_{s,t}$ and $f''_{s,t}$ can be regarded as measurable functions of $(s,t, w)$ from $\mathbf{R} \times \mathbf{R}^{+} \times \mathbf{R}$ to $\mathbf{R}$. Therefore, $f_{U,V}(W)$, $f'_{U,V}(W)$ and $f''_{U,V}(W)$ can be regarded as random functions for the random variables $U$, $V$ and $W$, where $V>0$. Denote $T'_k:=T_k+aZ'$, then $$\mathcal{L}(T'_k|\mathcal{F}_{k-1})\sim N(0,\rho_k^2+a^2).$$ Notice that $S_{k-1}$ and $\sqrt{\rho_k^2+a^2}$ are $\mathcal{F}_{k-1}$-measurable. Thus $$\begin{aligned} \mathbf{E}[R_k|\mathcal{F}_{k-1}]&=&\mathbf{E}[h(S_k+T_{k+1}+aZ')-h(S_{k-1}+T_k+aZ')|\mathcal{F}_{k-1}]\\ &=&\mathbf{E}[h(S_k+T'_{k+1})-h(S_{k-1}+T'_k)|\mathcal{F}_{k-1}]\\ &=&\mathbf{E}[h(S_k+T'_{k+1})-h(S_{k-1}+v_k Y)|\mathcal{F}_{k-1}]\\ &=& \mathbf{E}[v_k^{2} f'_{S_{k-1},v_k}(S_k+T'_{k+1})-(S_k+T'_{k+1}-S_{k-1}) f_{S_{k-1},v_k}(S_k+T'_{k+1})|\mathcal{F}_{k-1}].\end{aligned}$$ The last line follows by ([\[gfdgd\]](#gfdgd){reference-type="ref" reference="gfdgd"}). Since $v_k=\sqrt{\rho_k^2+a^2}$ and $\rho_k^2=\rho_{k+1}^2+\sigma_k^2$, we have $$\begin{aligned} \mathbf{E}[R_k|\mathcal{F}_{k-1}]&=& \mathbf{E}[(\rho_k^2+a^2) f'_{S_{k-1},v_k}(S_k+T'_{k+1})-(X_k+T'_{k+1}) f_{S_{k-1},v_k}(S_k+T'_{k+1})|\mathcal{F}_{k-1}] \nonumber \\ &=&\,\mathbf{E}[\sigma_k^2 f'_{S_{k-1},v_k}(S_k+T'_{k+1})-X_k f_{S_{k-1},v_k}(S_k+T'_{k+1})|\mathcal{F}_{k-1}] \label{rsffg}\\ &&\ +\,\mathbf{E}[(\rho_{k+1}^2+a^2) f'_{S_{k-1},v_k}(S_k+T'_{k+1}) - T'_{k+1} f_{S_{k-1},v_k}(S_k+T'_{k+1})|\mathcal{F}_{k-1}]. 
\nonumber \end{aligned}$$ Notice that $f_{s,t}$  is a Lipschitz function and $\mathcal{L}(T'_{k+1}|\mathcal{F}_{k-1}) \sim N(0,\rho_{k+1}^2+a^2)$. For any normal random variable $Y$ with $\mathcal{L}(Y) \sim N(0,\sigma^2)$, if $\mathbf{E}{g'(Y)}$ exists, then the function $g$ satisfies the equation $\mathbf{E}[\sigma^2 g'(Y)-Yg(Y)]=0.$ Therefore, it holds $$\mathbf{E}[(\rho_{k+1}^2+a^2) f'_{S_{k-1},v_k}(S_k+T'_{k+1}) - T'_{k+1} f_{S_{k-1},v_k}(S_k+T'_{k+1})|\mathcal{F}_{k-1}]=0 .$$ Applying the last equality to ([\[rsffg\]](#rsffg){reference-type="ref" reference="rsffg"}), we get $$\begin{aligned} \mathbf{E}[R_k|\mathcal{F}_{k-1}] = \mathbf{E}[\sigma_k^2 f'_{S_{k-1},v_k}(S_k+T'_{k+1}) - X_k f_{S_{k-1},v_k}(S_k+T'_{k+1})|\mathcal{F}_{k-1}]. \end{aligned}$$ Next, using Taylor's expansions for $f'_{S_{k-1},v_k}(S_k+T'_{k+1})$ and $f_{S_{k-1},v_k}(S_k+T'_{k+1})$, we deduce that $$\begin{aligned} \mathbf{E}[R_k|\mathcal{F}_{k-1}] &=& \mathbf{E}[ \sigma_k^2\mathbf{1}_{\{|X_k| < v_k\}} f'_{S_{k-1},v_k}(S_{k-1}+T'_{k+1})+ \sigma_k^2 X_k\mathbf{1}_{\{|X_k| < v_k\}} f''_{S_{k-1},v_k}(S_{k-1}+\theta_1 X_k +T'_{k+1}) \nonumber\\ && -X_k \mathbf{1}_{\{|X_k| < v_k\}}f_{S_{k-1},v_k}(S_{k-1}+T'_{k+1})-X_k^2\mathbf{1}_{\{|X_k| < v_k\}} f'_{S_{k-1},v_k}(S_{k-1} +T'_{k+1}) \nonumber \\ && -X_k^3\mathbf{1}_{\{|X_k| < v_k\}} f''_{S_{k-1},v_k}(S_{k-1}+\theta_2 X_k +T'_{k+1}) \ \nonumber \\ && + \sigma_k^2\mathbf{1}_{\{|X_k| \geq v_k\}} f'_{S_{k-1},v_k}(S_{k}+T'_{k+1})-X_k\mathbf{1}_{\{|X_k| \geq v_k\}} f_{S_{k-1},v_k}(S_{k-1}+T'_{k+1}) \nonumber \\ &&-X_k^2\mathbf{1}_{\{|X_k| \geq v_k\}} f'_{S_{k-1},v_k}(S_{k-1} +\theta_3 X_k+T'_{k+1}) |\mathcal{F}_{k-1}] \nonumber \\ &=& \mathbf{E}[ \sigma_k^2\mathbf{1}_{\{|X_k| < v_k\}} f'_{S_{k-1},v_k}(S_{k-1}+T'_{k+1})+ \sigma_k^2 X_k\mathbf{1}_{\{|X_k| < v_k\}} f''_{S_{k-1},v_k}(S_{k-1}+\theta_1 X_k +T'_{k+1}) \nonumber \\ && -X_k f_{S_{k-1},v_k}(S_{k-1}+T'_{k+1})-X_k^2\mathbf{1}_{\{|X_k| < v_k\}} f'_{S_{k-1},v_k}(S_{k-1} +T'_{k+1}) \nonumber \\ && -X_k^3\mathbf{1}_{\{|X_k| < v_k\}} f''_{S_{k-1},v_k}(S_{k-1}+\theta_2 X_k +T'_{k+1}) + \sigma_k^2\mathbf{1}_{\{|X_k| \geq v_k\}} f'_{S_{k-1},v_k}(S_{k}+T'_{k+1}) \ \nonumber \\ && -X_k^2\mathbf{1}_{\{|X_k| \geq v_k\}} f'_{S_{k-1},v_k}(S_{k-1} +\theta_3 X_k+T'_{k+1})\, |\mathcal{F}_{k-1}] \nonumber \\ &=& \mathbf{E}\Big[ \sigma_k^2 f'_{S_{k-1},v_k}(S_{k-1}+T'_{k+1})+ \sigma_k^2 X_k\mathbf{1}_{\{|X_k| < v_k\}} f''_{S_{k-1},v_k}(S_{k-1}+\theta_1 X_k +T'_{k+1}) \nonumber \\ && -X_k f_{S_{k-1},v_k}(S_{k-1}+T'_{k+1})-X_k^2 f'_{S_{k-1},v_k}(S_{k-1} +T'_{k+1}) \nonumber \\ && -X_k^3\mathbf{1}_{\{|X_k| < v_k\}} f''_{S_{k-1},v_k}(S_{k-1}+\theta_2 X_k +T'_{k+1}) \ \nonumber \\ && + \sigma_k^2\mathbf{1}_{\{|X_k| \geq v_k\}} \Big(f'_{S_{k-1},v_k}(S_{k}+T'_{k+1}) - f'_{S_{k-1},v_k}(S_{k-1}+T'_{k+1}) \Big) \nonumber \\ && -X_k^2\mathbf{1}_{\{|X_k| \geq v_k\}} \Big(f'_{S_{k-1},v_k}(S_{k-1} +\theta_3 X_k+T'_{k+1}) - f'_{S_{k-1},v_k}(S_{k-1}+T'_{k+1}) \Big) \ \Big |\mathcal{F}_{k-1} \Big] ,\nonumber \end{aligned}$$ where $0\leq\theta_1,\theta_2, \theta_3 \leq1$. 
By the last equality and ([\[gsgs\]](#gsgs){reference-type="ref" reference="gsgs"}), we deduce that $$\begin{aligned} \mathbf{E}[R_k|\mathcal{F}_{k-1}] & \leq & \frac{ \mathbf{E}[ \sigma_k^2\mathbf{1}_{\{|X_k| \geq v_k\}}|\mathcal{F}_{k-1}]+ \mathbf{E}[ X_k^2\mathbf{1}_{\{|X_k| \geq v_k\}}|\mathcal{F}_{k-1}]}{v_k} +2\frac{ \mathbf{E}[(\sigma_k^2 |X_k|+|X_k|^3)\mathbf{1}_{\{|X_k| <v_k\}} |\mathcal{F}_{k-1}]}{v_k^2} \nonumber \\ &=& \frac{ \mathbf{E}[(\sigma_k^2 +X_k^2)\mathbf{1}_{\{|X_k| \geq v_k\}}|\mathcal{F}_{k-1}]}{v_k} +2\frac{ \mathbf{E}[(\sigma_k^2 |X_k|+|X_k|^3)\mathbf{1}_{\{|X_k| <v_k\}} |\mathcal{F}_{k-1}]}{v_k^2} .\label{fsdfg} \end{aligned}$$ Thus, we have $$\begin{aligned} \textbf{W}\big(\mathcal{L}(S_n), \mathcal{L}( N(0, s_n^2) )\big) &\leq& |\mathbf{E}[h(S_n)-h(Z)]| \\ & \leq& \Big|\mathbf{E}[h(S_n+aZ')-h(Z+aZ')] \Big|+2a\\ & =& \Big|\mathbf{E} \sum_{k=1}^n \mathbf{E}[R_k|\mathcal{F}_{k-1}] \Big|+2a\\ & \leq& \sum_{k=1}^n \bigg(\mathbf{E}\bigg[\frac{(\sigma_k^2 + X_k^2)\mathbf{1}_{\{|X_k| \geq v_k\}} }{v_k}\bigg] +2 \mathbf{E}\bigg[\frac{(\sigma_k^2 |X_k|+|X_k|^3)\mathbf{1}_{\{|X_k| <v_k\}} }{v_k^2}\bigg]\bigg) + 2a. \end{aligned}$$ Normalized $S_n$, we get $$\begin{aligned} \textbf{W}\big( S_n/s_n \big) \leq \frac{1}{s_n} \sum_{k=1}^n \bigg(\mathbf{E}\bigg[\frac{(\sigma_k^2 + X_k^2)\mathbf{1}_{\{|X_k| \geq v_k\}} }{v_k}\bigg] +2 \mathbf{E}\bigg[\frac{(\sigma_k^2 |X_k|+|X_k|^3)\mathbf{1}_{\{|X_k| <v_k\}} }{v_k^2}\bigg]\bigg) +\frac{2a}{s_n}, \end{aligned}$$ which gives the desired inequality. ## Proof of Corollary [Corollary 5](#fdgs){reference-type="ref" reference="fdgs"} {#proof-of-corollary-fdgs} By Hölder's inequality and Jensen's inequality, we get $$\begin{aligned} \mathbf{E}\bigg[\frac{ \sigma_k^2 \mathbf{1}_{\{|X_k| \geq v_k\}} }{v_k}\bigg] \!\!&\leq&\!\! \mathbf{E} \frac{ \sigma_k^2 |X_k|^{\delta} }{v_k v_k\,\!\!^{\delta}} = \mathbf{E} \frac{ \sigma_k^2 |X_k|^{\delta} }{(\rho_k^2+a^2)^{(1+\delta)/2} } \nonumber\\ &\leq&\!\! \Big[\mathbf{E} \Big(\frac{ \sigma_k^2 }{\ (\rho_k^2+a^2)^{(1+\delta)/(2+\delta)} }\Big)^{(2+\delta)/2} \Big]^{2/(2+\delta)} \Big[\mathbf{E}\Big( \frac{ |X_k|^{\delta} }{ (\rho_k^2+a^2)^{ }\,\!\!^{\delta(1+\delta)/(4+2\delta)}} \Big)^{(2+\delta)/\delta}\Big]^{\delta/(2+\delta)} \nonumber \\ &\leq&\!\!\bigg[\mathbf{E} \frac{ |X_k|^{2+ \delta} }{ (\rho_k^2+a^2 )^{(1+ \delta)/2} } \bigg]^{2/(2+\delta)} \bigg[\mathbf{E}\frac{ |X_k|^{2+ \delta} }{ (\rho_k^2+a^2 )^{(1+ \delta)/2} }\bigg]^{\delta/(2+\delta)} \nonumber\\ &\leq&\!\! \mathbf{E} \frac{ |X_k|^{2+ \delta} }{ (\rho_k^2+a^2 )^{(1+ \delta)/2} } . 
\label{ines21}\end{aligned}$$ Similarly, we can prove that $$\begin{aligned} \mathbf{E}\bigg[\frac{ \sigma_k^2 |X_k| \mathbf{1}_{\{|X_k| <v_k\}} }{v_k^2}\bigg] &\leq& \mathbf{E} \frac{ \sigma_k^2 |X_k|^{\delta} v_k\,\!\!^{1-\delta}}{ \rho_k^2+a^2 } = \mathbf{E} \frac{ \sigma_k^2 |X_k|^{\delta} }{ (\rho_k^2+a^2)^{(1+\delta)/2} } \nonumber \\ &\leq& \mathbf{E} \frac{ |X_k|^{2+ \delta} }{ (\rho_k^2+a^2 )^{(1+ \delta)/2} } .\end{aligned}$$ The following two inequalities clearly hold: $$\begin{aligned} \mathbf{E}\bigg[\frac{ X_k^2 \mathbf{1}_{\{|X_k| \geq v_k\}} }{v_k}\bigg] \leq \mathbf{E} \frac{ |X_k|^{2+\delta} }{v_k v_k\,\!\!^{\delta}} = \mathbf{E} \frac{ |X_k|^{2+ \delta} }{ (\rho_k^2+a^2 )^{(1+ \delta)/2} },\ \ \ \end{aligned}$$ $$\begin{aligned} \mathbf{E}\bigg[\frac{ |X_k|^3 \mathbf{1}_{\{|X_k| < v_k\}} }{v_k^2}\bigg] \leq \mathbf{E} \frac{ |X_k|^{2+\delta} v_k\,\!^{1-\delta} }{v_k^2} = \mathbf{E} \frac{ |X_k|^{2+ \delta} }{ (\rho_k^2+a^2 )^{(1+ \delta)/2} } . \label{ines25} \end{aligned}$$ Applying the inequalities ([\[ines21\]](#ines21){reference-type="ref" reference="ines21"}) - ([\[ines25\]](#ines25){reference-type="ref" reference="ines25"}) to Theorem [Theorem 3](#theorem2.1){reference-type="ref" reference="theorem2.1"}, we obtain the first desired inequality: $$\begin{aligned} \textbf{W} \big( S_n/s_n \big) & \leq& \frac{6}{s_n} \sum_{k=1}^n \mathbf{E} \frac{ |X_k|^{2+\delta} }{ (\rho_k^2+a^2 )^{(1+ \delta)/2} } +\frac{ 2a}{s_n}.\end{aligned}$$ This completes the proof of Corollary [Corollary 5](#fdgs){reference-type="ref" reference="fdgs"}.

## Proof of Corollary [Corollary 6](#fdfff){reference-type="ref" reference="fdfff"} {#proof-of-corollary-fdfff}

By Jensen's inequality, $\mathbf{E}[ |X_k|^{2+\delta} | \mathcal{F}_{k-1} ] \leq \gamma \, \mathbf{E}[ X_k^2 | \mathcal{F}_{k-1} ]$ and $\mathbf{E}[ X_k^2 | \mathcal{F}_{k-1} ] = \sigma_k^2$ together imply that for all $1 \leq k \leq n,$ $$(\sigma_k^2)^{(2+\delta) /2 } \leq \mathbf{E}[ |X_k|^{2+\delta} | \mathcal{F}_{k-1} ] \leq \gamma \, \sigma_k^2 ,$$ which implies that $\sigma_k^2 \leq \gamma^{2/\delta}.$ Thus $\sigma_k^2/s_n^2 \rightarrow 0$ as $n \rightarrow \infty,$ uniformly in $1 \leq k \leq n.$ Notice that $\rho_{k }^2 = \sum_{i=k }^n \sigma_i^2.$ Taking $a=1,$ by the conditions in Corollary [Corollary 6](#fdfff){reference-type="ref" reference="fdfff"}, it holds for $\delta \in (0, 1)$, $$\begin{aligned} \sum_{k=1}^n \mathbf{E} \frac{ |X_k|^{2+\delta} }{ (\rho_k^2+1)^{(1+ \delta)/2} } & =& \mathbf{E} \sum_{k=1}^n \frac{ \mathbf{E}[ |X_k|^{2+\delta} | \mathcal{F}_{k-1} ] }{ (\rho_k^2 +1 )^{(1+ \delta)/2}} \\ & \leq & s_n^{1-\delta} \gamma \ \mathbf{E}\sum_{k=1}^n \frac{ \sigma_k^2/s_n^2 }{ (\rho_k^2/s_n^2 +1/s_n^2 )^{(1+ \delta)/2}} \\ & \leq & s_n^{1-\delta} \gamma\, \Big( \int_{0}^1\frac{1}{x^{(1+ \delta)/2}}dx + \frac{\gamma^2}{s_n^2} \Big) \\ &\leq& C_1\,s_n^{1-\delta} .\end{aligned}$$ Applying the last inequality to Corollary [Corollary 5](#fdgs){reference-type="ref" reference="fdgs"} with $a=1$, it holds for $\delta \in (0, 1)$, $$\begin{aligned} \textbf{W} \big( S_n/s_n \big) \leq C\, \frac{1}{ s_n^{\delta }}.
\end{aligned}$$ When $\delta =1$, we have $$\begin{aligned} \sum_{k=1}^n \mathbf{E} \frac{ |X_k|^{3} }{ \rho_k^2+1 } & =& \mathbf{E} \sum_{k=1}^n \frac{ \mathbf{E}[ |X_k|^3 | \mathcal{F}_{k-1} ] }{ \rho_k^2 +1 } \,\leq \, \gamma \, \mathbf{E}\sum_{k=1}^n \frac{ \sigma_k^2/s_n^2 }{ \rho_k^2/s_n^2 +1/s_n^2 } \, \leq \, C_1\, \ln s_n .\end{aligned}$$ Applying the last inequality to Corollary [Corollary 5](#fdgs){reference-type="ref" reference="fdgs"} and taking $a=1$, we get for $\delta =1$, $$\begin{aligned} \textbf{W} \big( S_n/s_n \big) \leq C\, \frac{\ln s_n}{ s_n }. \end{aligned}$$ This completes the proof of Corollary [Corollary 6](#fdfff){reference-type="ref" reference="fdfff"}. ## Proof of Corollary [Corollary 7](#fdffs){reference-type="ref" reference="fdffs"} {#proof-of-corollary-fdffs} We only give a proof for the case $\delta \in (0, 1)$. For the case $\delta=1$, the proof is similar. Taking $a=\alpha,$ by the conditions in Corollary [Corollary 7](#fdffs){reference-type="ref" reference="fdffs"}, we get $$\begin{aligned} 6 \sum_{k=1}^n \mathbf{E} \frac{ |X_k|^{2+\delta} }{ (\rho_k^2+\alpha )^{(1+ \delta)/2} } + 2 \alpha & \leq& \sum_{k=1}^n \frac{ 6}{ ((n-k) \alpha +\alpha )^{(1+ \delta)/2} } \mathbf{E} |X_k|^{2+\delta} + 2 \alpha \\ &\leq& C_1 \bigg( \sum_{k=1}^n \frac{ 1}{ ((n-k) + 1)^{(1+ \delta)/2} } + 1 \bigg) \\ &\leq& C_2 n^{(1-\delta)/2} .\end{aligned}$$ Again by the conditions in Corollary [Corollary 7](#fdffs){reference-type="ref" reference="fdffs"}, we have $s_n^2 \geq n\alpha$. Therefore, it holds $$\begin{aligned} \frac{6}{s_n} \sum_{k=1}^n \mathbf{E} \frac{ |X_k|^{2+\delta} }{ (\rho_k^2+a^2 )^{(1+ \delta)/2} } +\frac{ 2a}{s_n}& \leq& C_2 \frac{1}{\sqrt{n\alpha}} n^{(1-\delta)/2} \leq C_3 n^{ -\delta /2}.\end{aligned}$$ Applying the last inequality to Corollary [Corollary 5](#fdgs){reference-type="ref" reference="fdgs"}, we get the desired inequality. ## Proof of Theorem [Theorem 4](#theorem3.2){reference-type="ref" reference="theorem3.2"} {#proof-of-theorem-theorem3.2} The proof of Theorem [Theorem 4](#theorem3.2){reference-type="ref" reference="theorem3.2"} is similar to that of Theorem [Theorem 3](#theorem2.1){reference-type="ref" reference="theorem2.1"}, with $\sigma_k$ replaced by $\overline{\sigma}_k$. Similar to the proof of ([\[fsdfg\]](#fsdfg){reference-type="ref" reference="fsdfg"}), we obtain $$\begin{aligned} \Big|\mathbf{E}[R_k|\mathcal{F}_{k-1}] \Big| &\leq& \Big|\mathbf{E}[(X_k^2-\overline{\sigma}_k^2) f'_{S_{k-1}, \tau_k}(S_{k-1} +T'_{k+1})|\mathcal{F}_{k-1}] \Big| \\ && + \frac{\overline{\sigma}_k^2 \mathbf{E}[ \mathbf{1}_{\{|X_k| \geq \tau_k\}}|\mathcal{F}_{k-1}]}{\sqrt{\overline{\rho}_k^2+a^2}} + \frac{ \mathbf{E}[ X_k^2\mathbf{1}_{\{|X_k| \geq \tau_k\}}|\mathcal{F}_{k-1}]}{\sqrt{\overline{\rho}_k^2+a^2}} +2\frac{ \mathbf{E}[|X_k|^3\mathbf{1}_{\{|X_k| < \tau_k\}} |\mathcal{F}_{k-1}]}{\overline{\rho}_k^2+a^2} \\ &\leq& \frac{1}{\sqrt{\overline{\rho}_k^2+a^2}} \Big|\mathbf{E}[ X_k^2 |\mathcal{F}_{k-1}]-\overline{\sigma}_k^2 \Big| + \frac{ \mathbf{E}[(\overline{\sigma}_k^2 +X_k^2)\mathbf{1}_{\{|X_k| \geq \tau_k\}}|\mathcal{F}_{k-1}]}{\sqrt{\overline{\rho}_k^2+a^2}} \\ && +2\frac{ \mathbf{E}[(\overline{\sigma}_k^2 |X_k|+|X_k|^3)\mathbf{1}_{\{|X_k| < \tau_k\}} |\mathcal{F}_{k-1}]}{\overline{\rho}_k^2+a^2}. 
\end{aligned}$$ Then, it holds for any $a\geq0$, $$\begin{aligned} |\mathbf{E}[h(S_n)-h(Z)]| &\leq& |\mathbf{E}[h(S_n+aZ')-h(Z+aZ')]|+2a\\ & =& \ \Big|\mathbf{E} \sum_{k=1}^n \mathbf{E}[R_k|\mathcal{F}_{k-1}] \Big|+2a\\ & \leq& \ \sum_{k=1}^n \Bigg(\frac{ 1}{\sqrt{\overline{\rho}_k^2+a^2}}\bigg(\mathbf{E} \big|\mathbf{E}[ X_k^2 |\mathcal{F}_{k-1}]-\overline{\sigma}_k^2 \big| +\overline{\sigma}_k^2 \mathbf{E}[ \mathbf{1}_{\{|X_k| \geq \tau_k\}} ] + \mathbf{E}[ X_k^2\mathbf{1}_{\{|X_k| \geq \tau_k\}}]\bigg) \\ & & +2 \frac{\overline{\sigma}_k^2\mathbf{E}[ |X_k| \mathbf{1}_{\{|X_k| < \tau_k\}} ]+\mathbf{E}[|X_k|^3\mathbf{1}_{\{|X_k| < \tau_k\}} ]}{\overline{\rho}_k^2+a^2}\Bigg) + 2a, \end{aligned}$$ where $h$ can be any Lipschitz function such that $||h'||_{\infty} \leq 1$. Thus, we have $$\begin{aligned} && \textbf{W}\Big( \mathcal{L} (S_n), N(0, s_n^2) \Big) \\ &&\leq |\mathbf{E}[h(S_n)-h(Z)]| \leq \ \Big|\mathbf{E}[h(S_n+aZ')-h(Z+aZ')] \Big|+2a\\ & & \leq \ \sum_{k=1}^n \Bigg(\frac{ 1}{\sqrt{\overline{\rho}_k^2+a^2}}\bigg(\mathbf{E} \big|\mathbf{E}[ X_k^2 |\mathcal{F}_{k-1}]-\overline{\sigma}_k^2 \big| +\overline{\sigma}_k^2 \mathbf{E}[ \mathbf{1}_{\{|X_k| \geq \tau_k\}} ] + \mathbf{E}[ X_k^2\mathbf{1}_{\{|X_k| \geq \tau_k\}}]\bigg) \\ & & \ \ \ \ \ + \ 2 \frac{\overline{\sigma}_k^2\mathbf{E}[ |X_k| \mathbf{1}_{\{|X_k| < \tau_k\}} ]+\mathbf{E}[|X_k|^3\mathbf{1}_{\{|X_k| < \tau_k\}} ]}{\overline{\rho}_k^2+a^2}\Bigg) + 2a. \end{aligned}$$ Normalizing $S_n$ by $s_n$, we obtain $$\begin{aligned} \textbf{W} \big( S_n/s_n \big) &\leq& \frac{1}{s_n} \sum_{k=1}^n \Bigg(\frac{ 1}{\sqrt{\overline{\rho}_k^2+a^2}}\Big(\mathbf{E} \big|\mathbf{E}[ X_k^2 |\mathcal{F}_{k-1}]-\overline{\sigma}_k^2 \big| +\overline{\sigma}_k^2 \mathbf{E}[ \mathbf{1}_{\{|X_k| \geq \tau_k\}} ] + \mathbf{E}[ X_k^2\mathbf{1}_{\{|X_k| \geq \tau_k\}}]\Big) \\ &&\ +\ 2 \frac{\overline{\sigma}_k^2\mathbf{E}[ |X_k| \mathbf{1}_{\{|X_k| < \tau_k\}} ]+\mathbf{E}[|X_k|^3\mathbf{1}_{\{|X_k| < \tau_k\}} ]}{\overline{\rho}_k^2+a^2}\Bigg) +\frac{2a}{s_n}\\ & \leq& \frac{1}{s_n} \sum_{k=1}^n \Bigg(\frac{ 1}{ \tau_{ k}}\Big(\mathbf{E} \big|\mathbf{E}[ X_k^2 |\mathcal{F}_{k-1}]-\overline{\sigma}_k^2 \big| + \mathbf{E}[ X_k^2\mathbf{1}_{\{|X_k| \geq \tau_k\}}]\Big) \\ && \ + \ \frac{ 1}{ \tau_{k}^2 }\Big(3\,\overline{\sigma}_k^2 \mathbf{E} |X_k| + 2\,\mathbf{E}[|X_k|^3\mathbf{1}_{\{|X_k| < \tau_k\}} ] \Big)\Bigg) +\frac{2a}{s_n}, \end{aligned}$$ which gives the first desired inequality.

## Proof of Corollary [Corollary 8](#fsvg){reference-type="ref" reference="fsvg"} {#proof-of-corollary-fsvg}

By the fact that $\mathbf{E} \big|\mathbf{E}[ X_k^2 |\mathcal{F}_{k-1}]-\overline{\sigma}_k^2 \big|=O( k^{-\alpha}) \ (k\rightarrow \infty)$ and $s_n^2-s_k^2 \asymp n- k$ for any $n >k$, we can deduce that $$\begin{aligned} \sum_{k=1}^n \frac{ \mathbf{E} \big|\mathbf{E}[ X_k^2 |\mathcal{F}_{k-1}]-\overline{\sigma}_k^2 \big| }{ \tau_k } &\leq& C\, \sum_{k=1}^n \frac{ 1 }{ \sqrt{ s_n^2 -s_{k-1}^2 + a^2 } \ } k^{-\alpha} \\ &\leq& C\, \sum_{k=1}^n \frac{ 1 }{ \sqrt{ n -k + 1 } \ } k^{-\alpha} \\ &=& C\, \sum_{k=1}^n \frac{ 1 }{ \sqrt{ 1 -k/n + 1/n } \ } \frac{1}{ (k/n)^{ \alpha}} \, \frac{1}{n} \, n^{(1-2\alpha )/2} \\ &\leq& C\, \int_{0}^1 \frac{ 1 }{ \sqrt{ 1 -x } \ } \frac{1}{ x^{ \alpha}} \,dx \, n^{(1-2\alpha )/2} \\ &\leq& C\, n^{(1-2\alpha)/2}.\end{aligned}$$ Applying the last inequality to ([\[gsds\]](#gsds){reference-type="ref" reference="gsds"}), we get the first desired inequality.
Taking $a=1,$ by the condition in Corollary [Corollary 8](#fsvg){reference-type="ref" reference="fsvg"}, it holds for $\delta \in (0, 1)$, $$\begin{aligned} \frac{1}{ \sqrt{n}} \sum_{k=1}^n \frac{ \mathbf{E}[ X_k^2\mathbf{1}_{\{|X_k| \geq \tau_k\}}] }{ \tau_k } \, & \leq & \frac{1}{ \sqrt{n}}\sum_{k=1}^n \frac{\mathbf{E} |X_k|^{2+\delta} }{ (\overline{\rho}_k^2+1)^{(1+ \delta)/2} } \\ & \leq & \frac{ C}{ \sqrt{n}}\sum_{k=1}^n \frac{ 1 }{ (n-k+1 )^{(1+ \delta)/2}} \\ & \leq & \frac{ C}{ n^{\delta/2}}.\end{aligned}$$ Similarly, it holds for $\delta \in (0, 1)$, $$\begin{aligned} \frac{1}{ \sqrt{n}} \sum_{k=1}^n \, \frac{ \mathbf{E}[|X_k|^3\mathbf{1}_{\{|X_k| < \tau_k\}} ] }{ \tau_k\,\!^2 } & \leq & \frac{1}{ \sqrt{n}}\sum_{k=1}^n \frac{\mathbf{E} |X_k|^{2+\delta} }{ (\overline{\rho}_k^2+1)^{(1+ \delta)/2} } \ \leq \ \frac{ C}{ n^{\delta/2}}.\end{aligned}$$ Applying the foregoing results to Corollary [Corollary 8](#fsvg){reference-type="ref" reference="fsvg"}, we get for $\delta \in (0, 1)$, $$\begin{aligned} \nonumber \textbf{W} \big( S_n/s_n \big) \leq \frac{ C}{ n^{\delta/2}}. \end{aligned}$$ For $\delta =1$, Corollary [Corollary 8](#fsvg){reference-type="ref" reference="fsvg"} can be proved similarly. ## Proof of Theorem [Theorem 5](#th00s1){reference-type="ref" reference="th00s1"} {#proof-of-theorem-th00s1} Denote $c_\delta$ a positive constant depending on $\rho, \upsilon^2 , \sigma^2, \tau^2,$ $\mu$, $\nu, \alpha,$ $\mathbf{E} V^{-\alpha},$ $\mathbf{E}| Z_1 -m_0 |^{2+\delta }$ and $\mathbf{E} | m_0 -m |^{2+\delta }$. In particular, it does not depend on $n$ and $x$. Moreover, the exact values of $c_{\delta}$ may vary from line to line. **Lemma 3**. *Assume the conditions of Theorem [Theorem 5](#th00s1){reference-type="ref" reference="th00s1"}. It holds for all $0<x \leq \mu \sqrt{n} /\nu,$ $$\mathbf{P}\bigg( \Big| \frac{\ln Z_{n} - \mu n }{\sqrt{ n}\, \nu }\Big| \geq x \bigg) \leq c_{\delta}^{-1} \exp\bigg\{ - c_{\delta}\, x^2 \bigg\}.$$* *Proof*. It is easy to see that for all $x> 0,$ $$\begin{aligned} \label{dfsafh} \mathbf{P}\bigg( \Big| \frac{\ln Z_{n} - \mu n }{\sqrt{ n}\, \nu }\Big| \geq x \bigg) = I_1 +I_2,\end{aligned}$$ where $$I_1= \mathbf{P}\bigg( \frac{\ln Z_{n} - \mu n }{\sqrt{ n}\, \nu } \leq - x \bigg) \ \ \ \ \textrm{and} \ \ \ \ I_2 = \mathbf{P}\bigg( \frac{\ln Z_{n} - \mu n }{\sqrt{ n}\, \nu } \geq x \bigg).$$ Clearly, we have the following decomposition: $$\label{decopos} \ln Z_n = \sum_{i=1}^n X_i + \ln V_n,$$ where $X_i=\ln m_{i-1} (i\geq 1)$ are i.i.d. random variables depending only on the environment $\xi.$ Denote $\eta_{i}=(X_i-\mu)/\sqrt{n}\, \nu.$ Clearly, it holds for all $x >0,$ $$\begin{aligned} I_1 &=& \mathbf{P}\bigg( \sum_{i=1}^n \eta_{i} + \frac{\ln V_{n}}{ \sqrt{n}\, \nu} \leq - x \bigg ) \nonumber \\ &\leq& \mathbf{P}\bigg( \sum_{i=1}^n \eta_{i} \leq - \frac x 2 \bigg ) + \mathbf{P}\bigg( \frac{\ln V_{n}}{ \sqrt{n}\, \nu} \leq - \frac x 2 \bigg ) . \label{dasa01}\end{aligned}$$ The condition $\mathbf{E} | m_0 -m |^{2+\rho } < \infty$ implies that $\mathbf{E} e^{c_\rho\, |X_1| } < \infty$ for some $c_\rho > 0$. By Corollary 1.2 of Liu and Watbled [@LW09], it holds $$\mathbf{P}\bigg( \frac{1}{n}\sum_{i=1}^n X_i -\mu \leq - x \bigg ) \leq \left\{ \begin{array}{ll} \exp\Big\{ - c_\rho \, n \, x^2 \Big\},\ \ \ \ & \textrm{ $0< x\leq1$,}\\ & \\ \exp\Big\{ - c_\rho \, n \, x \Big\},\ \ \ \ & \textrm{ $x> 1$. 
} \end{array} \right.$$ The last inequality implies that for all $0 <x \leq \mu \sqrt{n} /\nu,$ $$\begin{aligned} \mathbf{P}\bigg( \sum_{i=1}^n \eta_{i} \leq - \frac x 2 \bigg ) \, \leq \, \exp\bigg\{ - c_\rho \, x^2 \bigg\}. \label{dasa02}\end{aligned}$$ By Markov's inequality, it is easy to see that for all $x>0,$ $$\begin{aligned} \mathbf{P}\bigg( \frac{\ln V_{n}}{ \sqrt{n}\, \nu} \leq - \frac x 2 \bigg ) = \mathbf{P}\bigg( V_{n}^{-\alpha } \geq \exp\Big\{\frac x 2 \sqrt{n} \nu \alpha \Big\} \bigg) \leq \exp\bigg\{ - \frac x 2 \sqrt{n} \nu \alpha \bigg \} \mathbf{E} V_{ n} ^{-\alpha } ,\nonumber\end{aligned}$$ where $\alpha$ is given by ([\[defv55\]](#defv55){reference-type="ref" reference="defv55"}). As $V_n \rightarrow V$ in $\mathbf{L}^{1} ,$ we have $V_n=\mathbf{E}[V| \mathcal{F}_n]$ a.s. Using Jensen's inequality, we can deduce that $$\begin{aligned} V_{ n} ^{-\alpha } =(\mathbf{E}[V | \mathcal{F}_{ n}])^{-\alpha } \leq \mathbf{E}[ V ^{-\alpha } | \mathcal{F}_{ n}].\end{aligned}$$ Taking expectations on both sides of the last inequality, by ([\[defv55\]](#defv55){reference-type="ref" reference="defv55"}), we get $\mathbf{E} V_{n} ^{-\alpha} \leq \mathbf{E} V ^{-\alpha } < \infty.$ Hence, we have for all $0 <x \leq \mu \sqrt{n} /\nu,$ $$\begin{aligned} \mathbf{P}\bigg( \frac{\ln V_{n}}{ \sqrt{n}\, \nu} \leq - \frac x 2 \bigg ) \leq c_\rho \, \exp\bigg\{ - \frac x 2 \sqrt{n} \nu \alpha \bigg \} \leq c_\rho \, \exp\bigg\{ - c_\rho ' x^2 \bigg\} . \label{dasa03}\end{aligned}$$ Combining ([\[dasa01\]](#dasa01){reference-type="ref" reference="dasa01"})-([\[dasa03\]](#dasa03){reference-type="ref" reference="dasa03"}) together, we get for all $0 <x \leq \mu \sqrt{n} /\nu,$ $$\begin{aligned} I_1 \leq c_\rho^{-1} \exp\Big\{ - c_\rho x^2 \Big\} .\nonumber\end{aligned}$$ Similarly, we can prove that the last bound holds also for $I_2.$ The desired inequality follows by ([\[dfsafh\]](#dfsafh){reference-type="ref" reference="dfsafh"}) and the upper bounds of $I_1$ and $I_2$. This completes the proof of Lemma [Lemma 3](#thsn20){reference-type="ref" reference="thsn20"}. Now, we are in position to prove Theorem [Theorem 5](#th00s1){reference-type="ref" reference="th00s1"}. Denote $$\begin{aligned} \label{gdfbg} \tilde{\xi}_{k+1}= \frac{Z_{ k+1}}{Z_{ k}} -m , \end{aligned}$$ $\mathfrak{F}_{n_0} =\{ \emptyset, \Omega \}$ and $\mathfrak{F}_{k+1}=\sigma \{ \xi_{i-1}, Z_{i}: n_0\leq i\leq k+1 \}$ for all $k\geq n_0$. Notice that $X_{k,i}$ is independent of $Z_k.$ Then it is easy to verify that $$\begin{aligned} \nonumber \mathbf{E}[ \tilde{\xi}_{k+1} |\mathfrak{F}_{k } ] = Z_{ k} ^{-1 } \mathbf{E}[ Z_{ k+1} -mZ_{ k} |\mathfrak{F}_{k } ] \ = \ Z_{ k} ^{-1 } \sum_{i=1}^{Z_k} \mathbf{E}[ X_{ k, i} -m |\mathfrak{F}_{k } ] = Z_{ k} ^{-1 } \sum_{i=1}^{Z_k} \mathbf{E}[ X_{ k, i} -m ] \ = \ 0.\end{aligned}$$ Thus $(\tilde{\xi}_k, \mathfrak{F}_k)_{k=n_0+1,...,n_0+n}$ is a finite sequence of martingale differences. 
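For orientation (this identity is immediate from ([\[gdfbg\]](#gdfbg){reference-type="ref" reference="gdfbg"}) and ([\[fdsds\]](#fdsds){reference-type="ref" reference="fdsds"}) and is recorded here for the reader's convenience), the normalized estimator is exactly a standardized sum of these martingale differences: $$S_{n_0,n} = \frac{\sqrt{n}}{\sigma} \big(\hat{m}_{n_0,n} -m \big) = \frac{1}{\sigma \sqrt{n}} \sum_{k=n_0}^{n_0+n-1} \tilde{\xi}_{k+1},$$ which is the form to which Corollaries [Corollary 4](#fsdvs){reference-type="ref" reference="fsdvs"} and [Corollary 8](#fsvg){reference-type="ref" reference="fsvg"} are applied at the end of the proof.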
Moreover, it is easy to see that $$\begin{aligned} \mathbf{E}[ \tilde{\xi}_{k+1}^2 |\mathfrak{F}_{k } ] &=& Z_{k}^{-2} \mathbf{E}[ ( Z_{ k+1} -mZ_{ k} )^2 |\mathfrak{F}_{k } ]= Z_{k}^{-2} \mathbf{E}\Big[ \Big( \sum_{i=1}^{Z_k} (X_{ k, i} -m) \Big)^2 \Big|\mathfrak{F}_{k } \Big]\nonumber \\ &=& Z_{k}^{-2} \mathbf{E}\Big[ \Big( \sum_{i=1}^{Z_k} (X_{ k, i} -m_k+ m_k -m) \Big)^2 \Big|\mathfrak{F}_{k } \Big] \nonumber \\ &=& Z_{k}^{-2} \Bigg(\mathbf{E}\Big[ \Big( \sum_{i=1}^{Z_k} (X_{ k, i} -m_k) \Big)^2 \Big |\mathfrak{F}_{k } \Big ] +\, \mathbf{E}\Big[ \Big( \sum_{i=1}^{Z_k} (m_k -m ) \Big)^2 \Big|\mathfrak{F}_{k } \Big] \nonumber\\ && +\, 2\ \mathbf{E}\Big[ \Big( \sum_{i=1}^{Z_k} (X_{ k, i} -m_k) \Big)\Big( \sum_{i=1}^{Z_k} (m_k -m ) \Big) \Big|\mathfrak{F}_{k } \Big]\Bigg).\nonumber\end{aligned}$$ Notice that conditionally on $\xi_n$, the random variables $(X_{n,i})_{i\geq1}$ are i.i.d. Thus, we can deduce that $$\begin{aligned} \mathbf{E}\Big[ \Big( \sum_{i=1}^{Z_k} (X_{ k, i} -m_k) \Big)^2 \Big |\mathfrak{F}_{k } \Big]&=& \mathbf{E}\Big[ \mathbf{E}\Big[ \Big( \sum_{i=1}^{Z_k} (X_{ k, i} -m_k) \Big)^2 \Big|\xi_k , \mathfrak{F}_{k } \Big] \Big| \mathfrak{F}_{k } \Big] \\ &=& \mathbf{E}\Big[ \sum_{i=1}^{Z_k} \mathbf{E}[ (X_{ k, i} -m_k)^2 | \xi_k , \mathfrak{F}_{k } ] \Big| \mathfrak{F}_{k } \Big] \ = \ \sum_{i=1}^{Z_k} \mathbf{E}[ (X_{ k, i} -m_k)^2 | \mathfrak{F}_{k } ]\\ &=& \sum_{i=1}^{Z_k} \mathbf{E} (X_{ k, i} -m_k)^2 \ = \ Z_k \tau^2.\end{aligned}$$ Similarly, we can prove that $$\begin{aligned} \mathbf{E}\Big[ \Big( \sum_{i=1}^{Z_k} (X_{ k, i} -m_k) \Big)\Big( \sum_{i=1}^{Z_k} (m_k -m ) \Big) \Big| \mathfrak{F}_{k } \Big] = 0\end{aligned}$$ and $$\begin{aligned} \mathbf{E}\Big[ \Big( \sum_{i=1}^{Z_k} (m_k -m ) \Big)^2 \Big|\mathfrak{F}_{k } \Big] = Z_k^2 \mathbf{E} (m_k -m ) ^2= Z_k^2 \mathbf{E} (m_0 -m ) ^2= Z_k^2\sigma^2 .\end{aligned}$$ Therefore, it holds $$\begin{aligned} \mathbf{E}[ \tilde{\xi}_{k+1}^2 |\mathfrak{F}_{k } ] = Z_k^{-1} \tau^2 + \sigma^2 \geq \sigma^2 .\end{aligned}$$ By the last line and the fact $Z_k \geq 1$ a.s. for all $k\geq1$, it is easy to see that $$\begin{aligned} \mathbf{E} \big|\mathbf{E}[ \tilde{\xi}_{k+1} ^{2 } |\mathfrak{F}_{k } ] - \mathbf{E} \tilde{\xi}_{k+1}^2 \big | &\leq&\, 2\,\tau^2 \mathbf{E} Z_k^{-1} \ \leq \ 2\,\tau^2 \Big( \mathbf{P}( Z_k \leq k )+ \frac1k \mathbf{P}( Z_k > k ) \Big) \nonumber \\ &\leq&\, 2\,\tau^2 \bigg( \mathbf{P}( Z_k \leq k )+ \frac1k \bigg). \label{fsdsl3}\end{aligned}$$ By Lemma [Lemma 3](#thsn20){reference-type="ref" reference="thsn20"}, we have $$\begin{aligned} \label{fsdsl4} \mathbf{P}( Z_k \leq k ) \leq \mathbf{P}\bigg( \frac{\ln Z_{k} - \mu k }{\sqrt{ k}\, \nu } \leq \frac{\ln k - \mu k }{\sqrt{ k}\, \nu } \bigg) \leq c_{\rho}^{-1} \exp\Big\{ - c_{\rho}\, k \Big\}.\end{aligned}$$ Applying ([\[fsdsl4\]](#fsdsl4){reference-type="ref" reference="fsdsl4"}) to ([\[fsdsl3\]](#fsdsl3){reference-type="ref" reference="fsdsl3"}), we get $$\begin{aligned} \mathbf{E} |\mathbf{E}[ \tilde{\xi}_{k+1} ^{2 } |\mathfrak{F}_{k } ] - \mathbf{E} \tilde{\xi}_{k+1}^2 | \ll \frac1k.\end{aligned}$$ Notice that $$\begin{aligned} \mathbf{E}[ |\tilde{\xi}_{k+1}|^{2+\delta} |\mathfrak{F}_{k } ] &=& Z_{k}^{-2-\delta } \mathbf{E}[ | Z_{ k+1} -mZ_{ k} |^{2+\delta } |\mathfrak{F}_{k } ] \nonumber \\ & = & Z_{k}^{-2-\delta } \mathbf{E}\bigg[ \Big| \sum_{i=1}^{Z_k} (X_{ k, i} -m) \Big|^{2+\delta } \bigg|\mathfrak{F}_{k } \bigg]. 
\label{ines2.8}\end{aligned}$$ By Minkowski's inequality, we have $$\begin{aligned} && \mathbf{E}\bigg[ \Big| \sum_{i=1}^{Z_k} (X_{ k, i} -m) \Big|^{2+\delta } \bigg|\mathfrak{F}_{k } \bigg] \leq \Bigg( \sum_{i=1}^{Z_k} \Big( \mathbf{E}[ | X_{ k, i} -m |^{2+\delta } |\mathfrak{F}_{k } ] \Big)^{\frac{1}{2+\delta} } \Bigg)^{2+\delta} \nonumber \\ &&=\Bigg( \sum_{i=1}^{Z_k} \Big( \mathbf{E}[ | X_{ k, i} -m_k + m_k -m |^{2+\delta } |\mathfrak{F}_{k } ] \Big)^{\frac{1}{2+\delta} } \Bigg)^{2+\delta} \nonumber\\ &&\leq Z_k^{2+\delta} 2^{1+\delta} \bigg( \mathbf{E}[ | X_{ k, i} -m_k |^{2+\delta } |\mathfrak{F}_{k } ] + \mathbf{E}[ | m_k -m |^{2+\delta } |\mathfrak{F}_{k } ] \bigg) \nonumber \\ && = 2^{1+\delta} Z_k^{2+\delta} \Big( \mathbf{E}| Z_1 -m _0 |^{2+\delta } + \mathbf{E} | m_0 -m |^{2+\delta } \Big) . \label{dfsds4}\end{aligned}$$ By ([\[dfsds4\]](#dfsds4){reference-type="ref" reference="dfsds4"}) and ([\[ines2.8\]](#ines2.8){reference-type="ref" reference="ines2.8"}), we get $$\sup_{k\geq n_0} \Big\| \mathbf{E}[ |\tilde{\xi}_{k+1}|^{2+\delta} |\mathfrak{F}_{k } ] \Big\|_\infty\leq 2^{1+\delta} \Big( \mathbf{E}| Z_1 -m _0 |^{2+\delta } + \mathbf{E} | m_0 -m |^{2+\delta } \Big) < \infty.$$ Applying Corollaries [Corollary 4](#fsdvs){reference-type="ref" reference="fsdvs"} and [Corollary 8](#fsvg){reference-type="ref" reference="fsvg"} to $(\tilde{\xi}_k, \mathfrak{F}_k)_{k=n_0+1,...,n_0+n}$, we obtain the desired results. This completes the proof of Theorem [Theorem 5](#th00s1){reference-type="ref" reference="th00s1"}.

# Acknowledgements {#acknowledgements .unnumbered}

The authors would like to thank Quansheng Liu for his helpful discussion on the harmonic moments for branching processes in a random environment. Fan was partially supported by the National Natural Science Foundation of China (Grant Nos. 11971063 and 12371155). Su was partially supported by the National Natural Science Foundation of China (Grant Nos. 12271457 and 11871425).

Athreya, K.B., Karlin, S. 1971. Branching processes with random environments, II: Limit theorems. Ann. Math. Statist. **42**(6): 1843--1858.
Bolthausen, E. 1982. Exact convergence rates in some martingale central limit theorems. Ann. Probab. **10**: 672--688.
Chen, L.H.Y., Goldstein, L., Shao, Q.M. 2011. Normal Approximation by Stein's Method. Springer, Heidelberg.
Dedecker, J., Merlevède, F., Rio, E. 2009. Rates of convergence for minimal distances in the central limit theorem under projective criteria. Electron. J. Probab. **14**: 978--1011.
Dedecker, J., Merlevède, F., Rio, E. 2022. Rates of convergence in the central limit theorem for martingales in the non-stationary setting. Ann. Inst. H. Poincaré Probab. Statist. **58**(2): 945--966.
El Machkouri, M., Ouchti, L. 2007. Exact convergence rates in the central limit theorem for a class of martingales. Bernoulli **13**(4): 981--999.
Fan, X. 2019. Exact rates of convergence in some martingale central limit theorems. J. Math. Anal. Appl. **469**: 1028--1044.
Grama, I., Haeusler, E. 2000. Large deviations for martingales via Cramér's method. Stochastic Process. Appl. **85**: 279--293.
Grama, I., Liu, Q., Miqueu, M. 2023. Asymptotics of the distribution and harmonic moments for a supercritical branching process in a random environment. Ann. Inst. H. Poincaré Probab. Statist., to appear.
Haeusler, E. 1988. On the rate of convergence in the central limit theorem for martingales with discrete and continuous time. Ann. Probab. **16**(1): 275--299.
Hall, P., Heyde, C.C. 1980. Martingale Limit Theory and its Applications. Academic Press, New York.
Heyde, C.C., Brown, B.M. 1970. On the departure from normality of a certain class of martingales. Ann. Math. Statist. **41**: 2161--2165.
Jin, X., Shen, T., Su, Z. 2023. Using Stein's method to analyze Euler-Maruyama approximations of regime-switching jump diffusion processes. J. Theoret. Probab. **36**: 1797--1828.
Liu, Q., Watbled, F. 2009. Exponential inequalities for martingales and asymptotic properties of the free energy of directed polymers in a random environment. Stochastic Process. Appl. **119**: 3101--3132.
Lotka, A. 1939. Théorie analytique des associations biologiques. Actualités Sci. Ind. **780**: 123--136.
Maaouia, F., Touati, A. 2005. Identification of multitype branching processes. Ann. Statist. **33**(6): 2655--2694.
Mourrat, J.C. 2013. On the rate of convergence in the martingale central limit theorem. Bernoulli **19**(2): 633--645.
Nagaev, S.V. 1967. On estimating the expected number of direct descendants of a particle in a branching process. Theory Probab. Appl. **12**: 314--320.
Ross, N. 2011. Fundamentals of Stein's method. Probab. Surveys **8**: 210--293.
Röllin, A. 2014. On quantitative bounds in the mean martingale central limit theorem. Statist. Probab. Lett. **138**: 171--176.
Tanny, D. 1988. A necessary and sufficient condition for a branching process in a random environment to grow like the product of its means. Stochastic Process. Appl. **28**(1): 123--139.
Van Dung, L., Son, T.C., Tien, N.D. 2014. $L_1$ bounds for some martingale central limit theorems. Lith. Math. J. **54**: 48--60.
arxiv_math
{ "id": "2309.08189", "title": "Rates of convergence in the distances of Kolmogorov and Wasserstein for\n standardized martingales", "authors": "Xiequan Fan, Zhonggen Su", "categories": "math.PR", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We consider particle systems described by moments of a phase-space density and propose a realizability-preserving numerical method to evolve a spectral two-moment model for particles interacting with a background fluid moving with nonrelativistic velocities. The system of nonlinear moment equations, with special relativistic corrections to $\mathcal{O}(v/c)$, expresses a balance between phase-space advection and collisions and includes velocity-dependent terms that account for spatial advection, Doppler shift, and angular aberration. This model is closely related to the one promoted by Lowrie et al. (2001; JQSRT, 69, 291-304) and similar to models currently used to study transport phenomena in large-scale simulations of astrophysical environments. The method is designed to preserve moment realizability, which guarantees that the moments correspond to a nonnegative phase-space density. The realizability-preserving scheme consists of the following key components: (i) a strong stability-preserving implicit-explicit (IMEX) time-integration method; (ii) a discontinuous Galerkin (DG) phase-space discretization with carefully constructed numerical fluxes; (iii) a realizability-preserving implicit collision update; and (iv) a realizability-enforcing limiter. In time integration, nonlinearity of the moment model necessitates solution of nonlinear equations, which we formulate as fixed-point problems and solve with tailored iterative solvers that preserve moment realizability with guaranteed convergence. We also analyze the simultaneous Eulerian-frame number and energy conservation properties of the semi-discrete DG scheme and propose an \"energy limiter\" that promotes Eulerian-frame energy conservation. Through numerical experiments, we demonstrate the accuracy and robustness of this DG-IMEX method and investigate its Eulerian-frame energy conservation properties. address: - Multiscale Methods and Dynamics Group, Oak Ridge National Laboratory, Oak Ridge, TN 37831 USA - Department of Physics and Astronomy, University of Tennessee Knoxville, TN 37996-1200 - Advanced Computing for Nuclear, Particles, and Astrophysics Group, Oak Ridge National Laboratory, Oak Ridge, TN 37831 USA author: - M. Paul Laiu - Eirik Endeve - J. Austin Harris - Zachary Elledge - Anthony Mezzacappa bibliography: - references.bib title: | DG-IMEX Method for a Two-Moment Model for\ Radiation Transport in the $\mathcal{O}(v/c)$ Limit --- Boltzmann equation, Radiation transport, Hyperbolic conservation laws, Discontinuous Galerkin, Implicit-Explicit # Introduction {#sec:intro} In this paper, we design and analyze a numerical method for solving a system of moment equations that model transport of neutral particles (e.g., photons, neutrons, or neutrinos) interacting with a background fluid moving with nonrelativistic velocities --- i.e., flows in which the ratio of the background flow velocity to the speed of light, $v/c$, is sufficiently small such that special relativistic corrections of order $(v/c)^{2}$ and higher can be neglected. Similar $\mathcal{O}(v/c)$ models have been used to study transport phenomena in astrophysical environments [@mihalasMihalas_1999], including neutrino transport in core-collapse supernovae (e.g., [@ramppJanka_2002; @just_etal_2015; @skinner_etal_2019; @bruenn_etal_2020; @mezzacappa_etal_2020]) and binary neutron star mergers (e.g., [@just_etal_2015b; @foucart_2023]). 
The numerical method is based on the discontinuous Galerkin (DG) phase-space discretization and an implicit-explicit (IMEX) method for time integration, and we pay particular attention to the preservation of certain physical bounds by the fully discrete scheme. The bound-preserving property is achieved by carefully considering the phase-space and temporal discretizations, as well as the formulation of associated iterative nonlinear solvers. Neutral particle transport in physical systems where the particle mean-free path may be similar to, or exceed, other characteristic length scales demands a kinetic description based on the distribution function $f(\boldsymbol{p},\boldsymbol{x},t)$, which is a phase-space density providing, at time $t$, the number of particles in an infinitesimal phase-space volume $d\boldsymbol{x}d\boldsymbol{p}$ centered around phase-space coordinates $\{\boldsymbol{p},\boldsymbol{x}\}$. Here, $\boldsymbol{p}$ and $\boldsymbol{x}$ are momentum- and position-space coordinates, respectively. The evolution of $f$ is governed by a kinetic equation that expresses a balance between phase-space advection and collisions (e.g., interparticle collisions and/or collisions with a background); see, e.g., [@chapmanCowling_1970; @mihalasMihalas_1999] for detailed expositions. In this paper, as a simplification, we consider the situation where particles described by a kinetic distribution function interact with an external background whose properties are prescribed and unaffected by $f$. The design of numerical methods to model transport of particles interacting with a moving fluid is complicated, in part, by the necessity to choose coordinates for discretization of momentum space. While relativistic kinetic theory provides the framework to freely specify momentum-space coordinates, the two most obvious reference frame choices, the Eulerian and comoving frames, come with distinct computational challenges (e.g., [@castor_1972; @buchler_1979; @buchler_1983; @mihalasMihalas_1999]). On the one hand, choosing momentum-space coordinates associated with an Eulerian observer eases the discretization of the phase-space advection problem at the expense of complicating the particle-fluid interaction kinematics and, for moment models, the closure procedure. On the other hand, choosing momentum-space coordinates associated with the comoving frame (or comoving observer) --- defined as the sequence of inertial frames whose velocity instantaneously coincides with the fluid velocity [@buchler_1983; @mihalasMihalas_1999] --- simplifies the description of particle-fluid interaction kinematics but at the expense of increased complexity in solving the phase-space advection problem numerically. Moreover, when particles equilibrate with the fluid, the distribution function becomes isotropic in the comoving frame, which simplifies the closure procedure for moment-based methods [@buchler_1983]. We also mention the mixed-frame approach (e.g., [@mihalasKlein_1982]), where the distribution function depends on Eulerian-frame momentum coordinates. Then, to evaluate comoving-frame emissivities and opacities at Eulerian-frame momentum coordinates, appropriate transformation laws and expansions to $\mathcal{O}(v/c)$ are applied (see Section 7.2 in [@mihalasMihalas_1999]). The mixed-frame approach attempts to combine the best of both coordinate choices but has difficulties with certain collision operators and does not generalize to the relativistic case. Nagakura et al. 
[@nagakura_etal_2014] combine both coordinate choices in a relativistic framework, using a discrete ordinates method, which requires mapping of numerical data between momentum space coordinate systems. This approach has yet to be applied to moment models. Our primary goal is to model neutrino transport in large-scale core-collapse supernova simulations, which require the inclusion of a wide range of neutrino--matter interactions --- with various kinematic forms (e.g., [@burrows_etal_2006; @janka_etal_2007; @janka_2012; @mezzacappa_etal_2020; @fischer_etal_2023]) --- which tend to dominate the overall computational cost. Therefore, we opt for relative simplicity in the collision term, adopt momentum-space coordinates associated with the comoving frame, and focus our effort here on the discretization of the phase-space advection problem. Because of the high computational cost associated with solving kinetic equations numerically in full dimensionality with sufficient phase-space resolution, dimension-reduction techniques are frequently employed. One commonly used method is to define and solve for a sequence of moments, instead of $f$ directly. Specifically, we employ spherical-polar momentum-space coordinates $(\varepsilon,\vartheta,\varphi)$ and integrate the distribution function against angular basis functions (depending on momentum-space angles $\omega=(\vartheta,\varphi)$) to obtain spectral, angular moments (depending on particle energy $\varepsilon$, and $\boldsymbol{x}$ and $t$) representing number densities, number fluxes, etc. The hierarchy of moment equations is obtained by taking corresponding moments of the kinetic equation. In this study, we consider a so-called two-moment model, where we solve for the zeroth (scalar) and first (vector) moments. The resulting system of moment equations, accurate to $\mathcal{O}(v/c)$, describes the evolution of the moments due to advection in phase-space (the left-hand side) and collisions with the background fluid (the right-hand side). Due to the choice of comoving-frame momentum coordinates, the left-hand side contains velocity-dependent terms that account for spatial advection, Doppler shift, and angular aberration. Moreover, the moment equations contain higher-order moments (rank-two and rank-three tensors) that must be expressed in terms of the lower-order moments to close the system of equations. Specifically, we consider an approximate, algebraic moment closure originating from the maximum-entropy closure proposed by Minerbo [@minerbo_1978] (see also [@cernohorskyBludman_1994; @just_etal_2015]). Related two-moment models have recently been used to model neutrino transport in core-collapse supernova simulations (e.g., [@just_etal_2015; @skinner_etal_2019]). In this paper, we consider a number-conservative two-moment model obtained by taking the flat spacetime, $\mathcal{O}(v/c)$ limit of general-relativistic moment models, e.g., from [@shibata_etal_2011; @cardall_etal_2013a; @mezzacappa_etal_2020]. We refer to the model as number-conservative because, in the absence of collisions, the zeroth moment equation is conservative for the correct $\mathcal{O}(v/c)$ Eulerian-frame number density. The model is closely related to the two-moment model promoted by Lowrie et al. [@lowrie_etal_2001]: With the assumption of one-dimensional, planar geometry, we obtain their equations by multiplying our equations with the particle energy $\varepsilon$. This two-moment model supports wave speeds that are bounded by the speed of light. 
It is also consistent, to $\mathcal{O}(v/c)$, with conservation laws for Eulerian-frame energy and momentum. *Key to this consistency is retention of certain $\mathcal{O}(v/c)$ terms in the time derivative of the moment equations*, which are often omitted (e.g., [@vaytet_etal_2011; @just_etal_2015; @skinner_etal_2019]). However, retention of these terms increases the computational complexity of the algorithm because the evolved moments become nonlinear functions of the primitive (comoving-frame) moments needed to evaluate closure relations, which then introduces nonlinear, iterative solves that contribute to increased computational costs. We use the DG method [@cockburnShu_2001] to discretize the moment equations. The choice of comoving-frame momentum coordinates results in advection-type terms along the energy dimension and four-dimensional divergence operators in the left-hand side of the moment equations. We use the DG method to discretize all four phase-space dimensions. DG methods have advantages for modeling particle transport because of their ability to capture the asymptotic diffusion limit with coarse meshes [@larsenMorel_1989; @adams_2001; @guermondKanschat_2010] without modification of numerical fluxes (as in, e.g., [@audit_etal_2002]), and we leverage this property here. Moreover, their variational formulation and flexibility with respect to test functions make them suitable for designing methods that conserve particle number and total energy *simultaneously* (e.g., [@ayuso_etal_2011; @cheng_etal_2013b]), which can be more difficult to achieve with, e.g., finite-difference or finite-volume methods. We use IMEX time stepping [@ascher_etal_1997; @pareschiRusso_2005] to integrate the ordinary differential equations resulting from the semi-discretization of the moment equations by the DG method. Following our prior works [@chu_etal_2019; @laiu_etal_2021], we integrate the phase-space advection problem explicitly and the collision term implicitly. However, different from our prior works, due to the additional $\mathcal{O}(v/c)$ terms in the time derivatives of the moment equations, the implicit part is nonlinear, even for the simplified collision term we consider here, and requires an iterative solution procedure, which we formulate in this paper. Given appropriate initial and boundary conditions, the solution to moment models with maximum-entropy closure is known to be *realizable*; i.e., the moment solution is consistent with a kinetic distribution $f$ that satisfies required physical bounds [@levermore_1996; @alldredge2019regularized]. For particle systems obeying Bose--Einstein or Maxwell--Boltzmann statistics, $f$ is nonnegative, whereas for particle systems obeying Fermi--Dirac statistics, $f\in[0,1]$. These bounds translate into constraints on the associated moments, and moments satisfying these constraints are referred to as "realizable" moments. Although moment realizability is preserved by continuous moment models, solving moment models numerically can result in unrealizable moments, which leads to ill-posedness of the closure procedure and can give unphysical results when coupling moment models to other physical models, such as fluid models. 
Therefore, maintaining moment realizability has been a key challenge in the design of numerical schemes for solving moment equations and has been explored in existing work from different perspectives, including development of realizability-preserving spatio-temporal discretizations [@olbrant2012realizability; @hauck2011high; @chu_etal_2019], design of realizability-enforcing limiters [@chu_etal_2019], and relaxation of the realizability constraints via regularization [@alldredge2019regularized]. While these existing approaches provide some essential components to construct a realizability-preserving scheme for the $\mathcal{O}(v/c)$ two-moment model considered in this work, they focus on models without any relativistic corrections and do not fully address the challenges of preserving moment realizability when relativistic corrections are included. The realizability-preserving numerical scheme proposed in this paper consists of the following key components. First, for time integration, we adopt a strong stability-preserving (SSP) IMEX method, which treats the advection terms explicitly and the collision term implicitly. This choice avoids excessive time-step restrictions in the highly collisional regime and gives explicit stage updates that can be expressed as a convex combination of multiple forward Euler steps, which is necessary for preserving realizability. Second, the DG method is equipped with tailored numerical fluxes, which, together with the SSP IMEX time integration method, maintains nonnegative cell-averaged number densities in the explicit update under a time-step restriction that takes the form of a hyperbolic-type Courant--Friedrichs--Lewy (CFL) condition. Third, the realizability-enforcing limiter proposed in [@chu_etal_2019] is used to recover pointwise realizable moments after each stage of the IMEX method. As discussed above, the moment closure procedure requires an iterative solver for nonlinear equations that convert evolved (conserved) moments to the primitive moments. To preserve realizability in this conversion process, we formulate the nonlinear equation as a fixed-point problem and apply an iterative solver analogous to the modified Richardson iteration (e.g., [@richardson1911ix; @saad2003iterative]) to ensure realizability in each iteration. We prove the global convergence property of this iterative solver in the $\mathcal{O}(v/c)$ regime. The convergence analysis is applicable to the maximum-entropy closure as well as its algebraic approximation. Finally, the nonlinear systems arising from the implicit step of the IMEX method can also be formulated as a fixed-point problem and solved in a similar fashion. The realizability-preserving and convergence analyses both carry through with minor modifications. With these components in hand, we prove that the proposed DG-IMEX scheme for solving the $\mathcal{O}(v/c)$ two-moment model indeed preserves moment realizability. The two-moment model we consider is number conservative and, in the continuum limit, consistent to $\mathcal{O}(v/c)$ with phase-space conservation laws for Eulerian-frame energy and momentum. Because the Eulerian-frame energy is not a primary evolved quantity of the model, but is instead obtained from a nontrivial combination of the evolved quantities, similar consistency with this conservation law is not guaranteed at the discrete level. In the context of finite-difference methods, Liebendörfer et al. 
[@liebendorfer_etal_2004] proposed a consistent discretization by carefully matching specific numerical flux terms in the finite-difference representation of the general-relativistic Boltzmann equation (see also [@muller_etal_2010] for an approach in the case of moment models). For the semi-discrete DG scheme proposed here, the numerical fluxes are tailored to maintain moment realizability, which limits the flexibility of following this procedure. However, the flexibility provided by the approximation spaces of the DG method can be helpful in this respect. For example, by testing with the particle energy $\varepsilon$, which is represented exactly by the DG approximation space with linear functions in the energy dimension, we obtain the two-moment model promoted in [@lowrie_etal_2001]. We further analyze the *simultaneous* Eulerian-frame number and energy conservation properties of the semi-discrete DG scheme, and point out that our DG approximation of the background velocity, which is allowed to be discontinuous, can impact the ability to achieve consistency with Eulerian-frame energy conservation to $\mathcal{O}(v/c)$. Moreover, we design an "energy limiter" that corrects for Eulerian-frame energy conservation violations introduced by the realizability-enforcing limiter mentioned above. Through numerical experiments, we observe that Eulerian-frame energy conservation violations grow as $(v/c)^{2}$, indicating the desired consistency for an $\mathcal{O}(v/c)$ method. The paper is organized as follows. The mathematical formulation of the two-moment model is presented in Section [2](#sec:model){reference-type="ref" reference="sec:model"}, while the closure procedure and wave propagation speeds supported by the resulting moment model are presented and discussed in Section [3](#sec:closure){reference-type="ref" reference="sec:closure"}. Section [4](#sec:discretization){reference-type="ref" reference="sec:discretization"} provides an overview of the numerical method, including the DG phase-space discretization, IMEX time discretization, and iterative solvers for the nonlinear systems arising from the conserved-to-primitive conversion problem and time-implicit evaluation of the collision term. Section [5](#sec:realizability_preservation){reference-type="ref" reference="sec:realizability_preservation"}, where the realizability-preserving property of the method is established, contains the main technical results of the paper. The simultaneous conservation of Eulerian-frame number and energy of the DG method is discussed in Section [6](#sec:conservation){reference-type="ref" reference="sec:conservation"}, where the energy limiter that corrects for Eulerian-frame energy conservation violations introduced by the realizability-enforcing limiter is also presented. The algorithms have been implemented in the toolkit for high-order neutrino radiation-hydrodynamics ([thornado]{.smallcaps}[^1]) and have been ported to utilize graphics processing units (GPUs). Our GPU programming model and implementation strategy is briefly discussed in Section [7](#sec:gpu){reference-type="ref" reference="sec:gpu"}. Results from numerical experiments demonstrating the robustness and accuracy of our method are presented in Section [8](#sec:numericalResults){reference-type="ref" reference="sec:numericalResults"}, where we also present GPU and multi-core performance results and highlight the relative computational cost of algorithmic components. 
Some technical proofs are given in [10](#sec:appendix){reference-type="ref" reference="sec:appendix"}. For the remainder of this paper we employ units in which the speed of light is unity ($c=1$). # Mathematical Model {#sec:model} We consider a kinetic model where we solve for angular moments of the distribution function $f\colon (\omega,\varepsilon,\boldsymbol{x},t)\in\mathbb{S}^{2}\times\mathbb{R}^{+}\times\mathbb{R}^{3}\times\mathbb{R}^{+}\to\mathbb{R}^{+}$, which gives the number of particles propagating in the direction $\omega\in\mathbb{S}^{2}:=\{\,\omega=(\vartheta,\varphi)~|~\vartheta\in[0,\pi], \varphi\in[0,2\pi)\,\}$, with energy $\varepsilon\in\mathbb{R}^{+}$, at position $\boldsymbol{x}\in\mathbb{R}^{3}$ and time $t\in\mathbb{R}^{+}$. We define angular moments of $f$ as $$\big\{\,\mathcal{D},\,\mathcal{I}^{i},\,\mathcal{K}^{ij},\,\mathcal{Q}^{ijk}\,\big\}(\varepsilon,\boldsymbol{x},t) =\frac{1}{4\pi}\int_{\mathbb{S}^{2}}f(\omega,\varepsilon,\boldsymbol{x},t)\,\big\{\,1,\,\ell^{i},\,\ell^{i}\ell^{j},\,\ell^{i}\ell^{j}\ell^{k}\,\big\}\,d\omega, \label{eq:angularMoments}$$ where $\ell^{i}(\omega)$ is the $i$th component of a unit vector parallel to the particle three-momentum $\boldsymbol{p}=\varepsilon\,\boldsymbol{\ell}$, and $d\omega=\sin\vartheta\,d\vartheta\,d\varphi$. We take $\boldsymbol{p}=\big(p^{1},p^{2},p^{3}\big)^{\intercal}$ to be the particle three-momentum, and $\varepsilon$ and $\omega$ the particle energy and direction in a spherical-polar momentum-space coordinate system associated with an observer instantaneously moving with the fluid three-velocity $\boldsymbol{v}$ (the comoving observer). This choice of momentum-space coordinates is commonly used to model particles interacting with a moving material, as it simplifies the particle--material interaction (collision) terms (see, e.g., [@buchler_1979; @mihalasMihalas_1999]). For simplicity, we will assume that the components of the three-velocity $v^{i}$ are given functions of position $\boldsymbol{x}$, *independent* of time $t$. In Eq. [\[eq:angularMoments\]](#eq:angularMoments){reference-type="eqref" reference="eq:angularMoments"}, $\mathcal{D}$ and $\mathcal{I}^{i}$ are the comoving-frame, spectral particle density and flux density components, respectively. Moment models that incorporate moving fluid effects are derived in the framework of relativistic kinetic theory [@lindquist_1966], and the moment model considered here is obtained from the general relativistic two-moment model from [@cardall_etal_2013a]. Specifically, we consider the number-conservative two-moment model presented in Section 4.7.3 in [@mezzacappa_etal_2020], after taking the limit of flat spacetime, specializing to Cartesian spatial coordinates, and retaining velocity-dependent terms to $\mathcal{O}(v)$. In this limit, the zeroth-moment equation is given by $$\begin{aligned} \partial_{t}{}\big(\,\mathcal{D}+v^{i}\,\mathcal{I}_{i}\,\big) +\partial_{i}{}\big(\,\mathcal{I}^{i}+v^{i}\,\mathcal{D}\,\big) -\frac{1}{\varepsilon^{2}}\partial_{\varepsilon}{}\big(\,\varepsilon^{3}\,\mathcal{K}^{i}_{\hspace{2pt}k}\,\partial_{i}{v^{k}}\,\big) =\chi\,\big(\,\mathcal{D}_{0}-\mathcal{D}\,\big), \label{eq:spectralNumberEquation}\end{aligned}$$ where $\partial_{t}{}=\partial/\partial t$, $\partial_{i}{}=\partial/\partial x^{i}$, and $\partial_{\varepsilon}{}=\partial/\partial\varepsilon$. We use the Einstein summation convention, where repeated latin indices run from $1$ to $3$. 
In flat spacetime, assuming Cartesian spatial coordinates, we can raise and lower indices on vectors and tensors with the Kronecker tensor; e.g., $\mathcal{I}_{i}=\delta_{ij}\mathcal{I}^{j}$. On the right-hand side of Eq. [\[eq:spectralNumberEquation\]](#eq:spectralNumberEquation){reference-type="eqref" reference="eq:spectralNumberEquation"}, $\chi\ge0$ is the absorption opacity, and $\mathcal{D}_{0}$ is the zeroth moment of an equilibrium distribution $f_{0}$. The corresponding first-moment equation is given by $$\begin{aligned} &\partial_{t}{}\big(\,\mathcal{I}_{j}+v^{i}\,\mathcal{K}_{ij}\,\big) +\partial_{i}{}\big(\,\mathcal{K}^{i}_{\hspace{2pt}j}+v^{i}\,\mathcal{I}_{j}\,\big) -\frac{1}{\varepsilon^{2}}\partial_{\varepsilon}{}\big(\,\varepsilon^{3}\,\mathcal{Q}^{i}_{\hspace{2pt}kj}\,\partial_{i}{v^{k}}\,\big) \nonumber \\ &\hspace{12pt} +\mathcal{I}^{i}\,\partial_{i}{v_{j}} - \mathcal{Q}^{i}_{\hspace{2pt}kj}\,\partial_{i}{v^{k}} =-\kappa\,\mathcal{I}_{j}, \label{eq:spectralNumberFluxEquation}\end{aligned}$$ where $\kappa=\chi+\sigma$ is the sum of the absorption opacity and the opacity due to elastic and isotropic scattering ($\sigma\ge0$). The two-moment model given by Eqs. [\[eq:spectralNumberEquation\]](#eq:spectralNumberEquation){reference-type="eqref" reference="eq:spectralNumberEquation"} and [\[eq:spectralNumberFluxEquation\]](#eq:spectralNumberFluxEquation){reference-type="eqref" reference="eq:spectralNumberFluxEquation"} correspond to the moment equations for number transport given by Just et al. [@just_etal_2015]; their Equations (9a) and (9b). (See also Eq. (125) in [@endeve_etal_2012] for the number-density equation.) The velocity-dependent terms in the spatial and energy derivatives in Eqs. [\[eq:spectralNumberEquation\]](#eq:spectralNumberEquation){reference-type="eqref" reference="eq:spectralNumberEquation"} and [\[eq:spectralNumberFluxEquation\]](#eq:spectralNumberFluxEquation){reference-type="eqref" reference="eq:spectralNumberFluxEquation"} account for spatial advection and Doppler shift between adjacent comoving observers, respectively, while the fourth and fifth terms on the left-hand side of Eq. [\[eq:spectralNumberFluxEquation\]](#eq:spectralNumberFluxEquation){reference-type="eqref" reference="eq:spectralNumberFluxEquation"} account for angular aberration between adjacent comoving observers (e.g., [@liebendorfer_etal_2004]). We point out that the velocity-dependent terms inside the time derivatives in Eqs. [\[eq:spectralNumberEquation\]](#eq:spectralNumberEquation){reference-type="eqref" reference="eq:spectralNumberEquation"} and [\[eq:spectralNumberFluxEquation\]](#eq:spectralNumberFluxEquation){reference-type="eqref" reference="eq:spectralNumberFluxEquation"} were dropped in [@just_etal_2015]. By retaining these terms, Eq. [\[eq:spectralNumberEquation\]](#eq:spectralNumberEquation){reference-type="eqref" reference="eq:spectralNumberEquation"} evolves the $\mathcal{O}(v)$ Eulerian-frame number density, and, as emphasized by Lowrie et al. [@lowrie_etal_2001], wave speeds remain bounded by the speed of light and the model is consistent with the correct $\mathcal{O}(v)$ Eulerian-frame energy and momentum equations. To elaborate on the latter, we define the "conserved\" moments that are evolved in Eqs. 
[\[eq:spectralNumberEquation\]](#eq:spectralNumberEquation){reference-type="eqref" reference="eq:spectralNumberEquation"} and [\[eq:spectralNumberFluxEquation\]](#eq:spectralNumberFluxEquation){reference-type="eqref" reference="eq:spectralNumberFluxEquation"} as $$\mathcal{N} := \mathcal{D}+v^{i}\,\mathcal{I}_{i} {\quad\text{and}\quad} \mathcal{G}_{j} := \mathcal{I}_{j}+v^{i}\,\mathcal{K}_{ij}, \label{eq:eachconservedMoments}$$ respectively. Here, $\mathcal{N}$ is the correct $\mathcal{O}(v)$ Eulerian-frame number density, and, in the absence of sources on the right-hand side, Eq. [\[eq:spectralNumberEquation\]](#eq:spectralNumberEquation){reference-type="eqref" reference="eq:spectralNumberEquation"} is a phase-space conservation law. The Eulerian-frame energy and momentum densities are related to $\mathcal{N}$ and $\mathcal{G}_{j}$ by $$\mathcal{E} = \varepsilon\, ( \mathcal{N} + v^{i} \,\mathcal{G}_{i} ) = \varepsilon \,(\mathcal{D} + \, 2 v^{i} \,\mathcal{I}_{i}) +\mathcal{O}(v^2) \label{eq:conservedEnergy}$$ and $$\mathcal{P}_{j} = \varepsilon \,( \mathcal{G}_{j} + v_{j} \,\mathcal{N}) =\varepsilon \,(\mathcal{I}_{j}+v^{i}\,\mathcal{K}_{ij} + v_{j} \,\mathcal{D}) +\mathcal{O}(v^2), \label{eq:conservedMomentum}$$ respectively. The following proposition gives the energy and momentum conservation properties of the two-moment model in Eqs. [\[eq:spectralNumberEquation\]](#eq:spectralNumberEquation){reference-type="eqref" reference="eq:spectralNumberEquation"}--[\[eq:spectralNumberFluxEquation\]](#eq:spectralNumberFluxEquation){reference-type="eqref" reference="eq:spectralNumberFluxEquation"}. **Proposition 1**. *The two-moment model given by Eqs. [\[eq:spectralNumberEquation\]](#eq:spectralNumberEquation){reference-type="eqref" reference="eq:spectralNumberEquation"}--[\[eq:spectralNumberFluxEquation\]](#eq:spectralNumberFluxEquation){reference-type="eqref" reference="eq:spectralNumberFluxEquation"} is, up to $\mathcal{O}(v)$, consistent with phase-space conservation laws for the energy density $\mathcal{E}$ and momentum density $\mathcal{P}_{j}$.* *Proof.* Multiplying Eq. [\[eq:spectralNumberEquation\]](#eq:spectralNumberEquation){reference-type="eqref" reference="eq:spectralNumberEquation"} by $\varepsilon$, contracting Eq. [\[eq:spectralNumberFluxEquation\]](#eq:spectralNumberFluxEquation){reference-type="eqref" reference="eq:spectralNumberFluxEquation"} with $\varepsilon\,v^{j}$, and summing the results gives the evolution equation for the energy density; multiplying Eq. [\[eq:spectralNumberFluxEquation\]](#eq:spectralNumberFluxEquation){reference-type="eqref" reference="eq:spectralNumberFluxEquation"} by $\varepsilon$, multiplying Eq. [\[eq:spectralNumberEquation\]](#eq:spectralNumberEquation){reference-type="eqref" reference="eq:spectralNumberEquation"} by $\varepsilon\,v_{j}$, and summing gives the evolution equation for the momentum density. These equations read, respectively, $$\begin{aligned} \partial_{t}{\mathcal{E}} + \partial_{i}{}\,{\mathcal{P}}^{i} -\frac{1}{\varepsilon^{2}}\partial_{\varepsilon}{} \big(\,\varepsilon^{4}\,{\mathcal{K}}_{\hspace{2pt}k}^{i}\,\partial_{i}{v^{k}}\,\big) =\varepsilon\,\chi\,\big(\,\mathcal{D}_{0}-\mathcal{D}\,\big) - \,\varepsilon\,\kappa\,v^{j}\,\mathcal{I}_{j} \label{eq:energyEquationEulerian} \end{aligned}$$ and $$\begin{aligned} \partial_{t}{{\mathcal{P}}_{j}} +\partial_{i}{}\,\mathcal{S}_{\hspace{2pt}j}^{i} -\frac{1}{\varepsilon^{2}}\partial_{\varepsilon}{} \big(\,\varepsilon^{4}\, {\mathcal{Q}}_{\hspace{2pt}kj}^{i}\,\partial_{i}{v^{k}}\,\big) = - \varepsilon\,\kappa\,\mathcal{I}_{j} + \varepsilon\,v_{j}\,\chi\,\big(\,\mathcal{D}_{0}-\mathcal{D}\,\big). \label{eq:momentumEquationEulerian} \end{aligned}$$ Here, all $\mathcal{O}(v^2)$ terms are dropped, and the momentum flux density is denoted as $\mathcal{S}^{ij}:= \varepsilon\,(\,\mathcal{K}^{ij}+\mathcal{I}^{i}\,v^{j} + v^{i}\,\mathcal{I}^{j}\,)$.
In the absence of sources on the right-hand side, Eqs. [\[eq:energyEquationEulerian\]](#eq:energyEquationEulerian){reference-type="eqref" reference="eq:energyEquationEulerian"} and [\[eq:momentumEquationEulerian\]](#eq:momentumEquationEulerian){reference-type="eqref" reference="eq:momentumEquationEulerian"} become phase-space conservation laws for $\mathcal{E}$ and $\mathcal{P}_{j}$, respectively. ◻ To close the two-moment model [\[eq:spectralNumberEquation\]](#eq:spectralNumberEquation){reference-type="eqref" reference="eq:spectralNumberEquation"}--[\[eq:spectralNumberFluxEquation\]](#eq:spectralNumberFluxEquation){reference-type="eqref" reference="eq:spectralNumberFluxEquation"}, the higher-order moments $\mathcal{K}^{ij}$ and $\mathcal{Q}^{ijk}$ must be specified. We will use an algebraic closure, which we discuss in more detail in Section [3](#sec:closure){reference-type="ref" reference="sec:closure"}. To this end, we write the second-order moments as $$\mathcal{K}^{ij} = \mathsf{k}^{ij}\,\mathcal{D},$$ where the symmetric variable Eddington tensor components are given by (e.g., [@levermore_1984]) $$\mathsf{k}^{ij} = \frac{1}{2}\,\Big[\,(1-\psi)\,\delta^{ij}+(3\psi-1)\,\hat{\mathsf{n}}^{i}\,\hat{\mathsf{n}}^{j}\,\Big], \label{eq:VariableEddingtonTensor}$$ where $\hat{\mathsf{n}}^{i}=\mathcal{I}^{i}/\mathcal{I}$ and $\mathcal{I}=\sqrt{\mathcal{I}_{i}\mathcal{I}^{i}}$. The expression given by Eq. [\[eq:VariableEddingtonTensor\]](#eq:VariableEddingtonTensor){reference-type="eqref" reference="eq:VariableEddingtonTensor"} satisfies the trace condition $\mathsf{k}^{i}_{\hspace{4pt}i}=\delta_{ij}\mathsf{k}^{ij}=1$ (cf. Eq. [\[eq:angularMoments\]](#eq:angularMoments){reference-type="eqref" reference="eq:angularMoments"}), and the Eddington factor can be obtained from $$\psi = \hat{\mathsf{n}}_{i}\,\hat{\mathsf{n}}_{j}\,\mathsf{k}^{ij} = \frac{\int_{\mathbb{S}^{2}}f\,(\hat{\mathsf{n}}_{i}\ell^{i})^{2}\,d\omega}{\int_{\mathbb{S}^{2}}f\,d\omega}.$$ Similarly, the third-order moments can be written as $$\mathcal{Q}^{ijk} = \mathsf{q}^{ijk}\,\mathcal{D},$$ where we define the symmetric "heat-flux" tensor (e.g., [@just_etal_2015]), $$\mathsf{q}^{ijk} = \frac{1}{2}\, \Big[\, (h-\zeta)\,\Big(\,\hat{\mathsf{n}}^{i}\,\delta^{jk}+\hat{\mathsf{n}}^{j}\,\delta^{ik} +\hat{\mathsf{n}}^{k}\,\delta^{ij}\,\Big)+(5\zeta-3h)\,\hat{\mathsf{n}}^{i}\,\hat{\mathsf{n}}^{j}\,\hat{\mathsf{n}}^{k} \,\Big], \label{eq:heatfluxTensor}$$ where $h=\mathcal{I}/\mathcal{D}$ is the flux factor. The expression in Eq. [\[eq:heatfluxTensor\]](#eq:heatfluxTensor){reference-type="eqref" reference="eq:heatfluxTensor"} satisfies the trace condition $\delta_{jk}\,\mathsf{q}^{ijk}=\mathsf{q}^{ij}_{\hspace{6pt}j}=\mathcal{I}^{i}/\mathcal{D}$, and the "heat-flux" factor can be obtained from $$\zeta = \hat{\mathsf{n}}_{i}\,\hat{\mathsf{n}}_{j}\,\hat{\mathsf{n}}_{k}\,\mathsf{q}^{ijk} = \frac{\int_{\mathbb{S}^{2}}f\,(\hat{\mathsf{n}}_{i}\ell^{i})^{3}\,d\omega}{\int_{\mathbb{S}^{2}}f\,d\omega}.$$ Eqs [\[eq:spectralNumberEquation\]](#eq:spectralNumberEquation){reference-type="eqref" reference="eq:spectralNumberEquation"} and [\[eq:spectralNumberFluxEquation\]](#eq:spectralNumberFluxEquation){reference-type="eqref" reference="eq:spectralNumberFluxEquation"} are closed by specifying the Eddington and heat-flux factors in terms of the "primitive" moments $\boldsymbol{\mathcal{M}}=\big(\,\mathcal{D},\,\boldsymbol{\mathcal{I}}\,\big)^{\intercal}$; i.e., $\psi=\psi(\boldsymbol{\mathcal{M}})$ and $\zeta=\zeta(\boldsymbol{\mathcal{M}})$. 
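As a concrete illustration of the closure algebra, the following minimal Python sketch assembles $\mathcal{K}^{ij}=\mathsf{k}^{ij}\,\mathcal{D}$ and $\mathcal{Q}^{ijk}=\mathsf{q}^{ijk}\,\mathcal{D}$ from the primitive moments according to Eqs. [\[eq:VariableEddingtonTensor\]](#eq:VariableEddingtonTensor){reference-type="eqref" reference="eq:VariableEddingtonTensor"} and [\[eq:heatfluxTensor\]](#eq:heatfluxTensor){reference-type="eqref" reference="eq:heatfluxTensor"}. The Eddington and heat-flux factors are supplied as callables `psi(h)` and `zeta(h)`; the specific algebraic forms quoted in the example below are those given in Section [3](#sec:closure){reference-type="ref" reference="sec:closure"}. The sketch is for illustration only and is not the implementation used in the numerical experiments.

```python
import numpy as np

def closure_tensors(D, I, psi, zeta):
    """Assemble K^{ij} = k^{ij} D and Q^{ijk} = q^{ijk} D from the primitive
    moments (D, I); psi(h) and zeta(h) are user-supplied closure factors."""
    I = np.asarray(I, dtype=float)
    Imag = np.linalg.norm(I)                       # |I| = sqrt(I_i I^i)
    h = Imag / D                                   # flux factor
    # unit vector n-hat; its value is irrelevant in the isotropic case I = 0
    n = I / Imag if Imag > 0.0 else np.array([1.0, 0.0, 0.0])
    d = np.eye(3)                                  # Kronecker delta
    p, z = psi(h), zeta(h)
    # Eddington tensor: k^{ij} = [ (1 - psi) d^{ij} + (3 psi - 1) n^i n^j ] / 2
    k = 0.5 * ((1.0 - p) * d + (3.0 * p - 1.0) * np.outer(n, n))
    # heat-flux tensor:
    # q^{ijk} = [ (h - zeta)(n^i d^{jk} + n^j d^{ik} + n^k d^{ij})
    #             + (5 zeta - 3 h) n^i n^j n^k ] / 2
    q = 0.5 * ((h - z) * (np.einsum('i,jk->ijk', n, d)
                          + np.einsum('j,ik->ijk', n, d)
                          + np.einsum('k,ij->ijk', n, d))
               + (5.0 * z - 3.0 * h) * np.einsum('i,j,k->ijk', n, n, n))
    return D * k, D * q

# Example with the algebraic factors quoted in Section 3:
psi_a  = lambda h: 1.0 / 3.0 + (2.0 / 15.0) * (3.0 * h**2 - h**3 + 3.0 * h**4)
zeta_a = lambda h: h * (45.0 + 10.0 * h - 12.0 * h**2 - 12.0 * h**3
                        + 38.0 * h**4 - 12.0 * h**5 + 18.0 * h**6) / 75.0
K, Q = closure_tensors(1.0, [0.4, 0.1, 0.0], psi_a, zeta_a)
assert abs(np.trace(K) - 1.0) < 1e-12              # trace condition k^i_i = 1
```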
Assuming a closure for the higher-order tensors, we define the vector of evolved moments, $$\boldsymbol{\mathcal{U}}(\boldsymbol{\mathcal{M}},\boldsymbol{v}) =\left[\begin{array}{c} \mathcal{N} \\ \mathcal{G}_{j} \end{array}\right] =\left[\begin{array}{c} \mathcal{D}+v^{i}\,\mathcal{I}_{i} \\ \mathcal{I}_{j}+v^{i}\,\mathcal{K}_{ij} \end{array}\right], \label{eq:conservedMoments}$$ the phase-space fluxes, $$\boldsymbol{\mathcal{F}}^{i}(\boldsymbol{\mathcal{U}},\boldsymbol{v}) =\left[\begin{array}{c} \mathcal{I}^{i}+v^{i}\,\mathcal{D} \\ \mathcal{K}^{i}_{\hspace{2pt}j}+v^{i}\,\mathcal{I}_{j} \end{array}\right] \quad\text{and}\quad \boldsymbol{\mathcal{F}}^{\varepsilon}(\boldsymbol{\mathcal{U}},\boldsymbol{v}) =-\left[\begin{array}{c} \mathcal{K}^{i}_{\hspace{2pt}k} \\ \mathcal{Q}^{i}_{\hspace{2pt}kj} \end{array}\right]\,\partial_{i}{v^{k}}, \label{eq:phaseSpaceFluxes}$$ and the sources, $$\boldsymbol{\mathcal{S}}(\boldsymbol{\mathcal{U}},\boldsymbol{v}) =\left[\begin{array}{c} 0 \\ \mathcal{Q}^{i}_{\hspace{2pt}kj}\,\partial_{i}{v^{k}} - \mathcal{I}^{i}\,\partial_{i}{v_{j}} \end{array}\right] \quad\text{and}\quad \boldsymbol{\mathcal{C}}(\boldsymbol{\mathcal{U}}) =\left[\begin{array}{c} \chi\,\big(\,\mathcal{D}_{0}-\mathcal{D}\,\big) \\ -\kappa\,\mathcal{I}_{j} \end{array}\right], \label{eq:sources}$$ so we can write the two-moment model in the compact form, $$\partial_{t}{\boldsymbol{\mathcal{U}}} +\frac{\partial }{\partial x^{i}}\Big(\boldsymbol{\mathcal{F}}^{i}(\boldsymbol{\mathcal{U}},\boldsymbol{v})\Big) +\frac{1}{\varepsilon^{2}}\frac{\partial }{\partial \varepsilon}\Big(\varepsilon^{3}\,\boldsymbol{\mathcal{F}}^{\varepsilon}(\boldsymbol{\mathcal{U}},\boldsymbol{v})\Big) =\boldsymbol{\mathcal{S}}(\boldsymbol{\mathcal{U}},\boldsymbol{v}) + \boldsymbol{\mathcal{C}}(\boldsymbol{\mathcal{U}}). \label{eq:twoMomentModelCompact}$$ Note that the collision term $\boldsymbol{\mathcal{C}}$ does not depend explicitly on the three-velocity $\boldsymbol{v}$. This is a consequence of choosing comoving-frame, momentum-space coordinates. The moment closure is defined in terms of the primitive moments $\boldsymbol{\mathcal{M}}$, while we will evolve the "conserved" moments $\boldsymbol{\mathcal{U}}=\big(\,\mathcal{N},\mathcal{G}_{j}\,\big)^{\intercal}$. The relation between the conserved and primitive moments can be written as $$\boldsymbol{\mathcal{U}} =\boldsymbol{\mathcal{L}}(\boldsymbol{\mathcal{M}},\boldsymbol{v})\,\boldsymbol{\mathcal{M}}, \label{eq:ConservedToPrimitive}$$ where $$\boldsymbol{\mathcal{L}}(\boldsymbol{\mathcal{M}},\boldsymbol{v}) =\left[\begin{array}{cccc} 1 & v^{1} & v^{2} & v^{3} \\ v^{k}\,\mathsf{k}_{k1}(\boldsymbol{\mathcal{M}}) & 1 & 0 & 0 \\ v^{k}\,\mathsf{k}_{k2}(\boldsymbol{\mathcal{M}}) & 0 & 1 & 0 \\ v^{k}\,\mathsf{k}_{k3}(\boldsymbol{\mathcal{M}}) & 0 & 0 & 1 \end{array}\right]. \label{eq:MatrixFormL}$$ When solving Eq. [\[eq:twoMomentModelCompact\]](#eq:twoMomentModelCompact){reference-type="eqref" reference="eq:twoMomentModelCompact"} numerically, it is necessary to convert between primitive and conserved moments. Computing the conserved moments from the primitive moments is straightforward, but obtaining the primitive moments from the conserved moments is nontrivial because, for a given nontrivial velocity $\boldsymbol{v}$, there is no closed-form expression for $\boldsymbol{\mathcal{M}}$ in terms of $\boldsymbol{\mathcal{U}}$, due to the nonlinear dependence $\mathsf{k}_{ij}(\boldsymbol{\mathcal{M}})$. 
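The primitive moments must therefore be recovered iteratively. As a minimal illustration of the structure of this inversion (not the realizability-preserving solver formulated later in the paper), the following Python sketch applies a simple Picard-type iteration to Eq. [\[eq:ConservedToPrimitive\]](#eq:ConservedToPrimitive){reference-type="eqref" reference="eq:ConservedToPrimitive"}, lagging the Eddington tensor; heuristically, for $|\boldsymbol{v}|\ll1$ the velocity-dependent terms are small corrections and the iteration converges rapidly. The Eddington factor `psi(h)` is again an assumed, user-supplied callable.

```python
import numpy as np

def primitive_from_conserved(N, G, v, psi, tol=1.0e-12, max_iter=200):
    """Recover M = (D, I_j) from U = (N, G_j) and a given velocity v, i.e.
    solve N = D + v^i I_i and G_j = I_j + v^i k_ij(M) D by Picard iteration."""
    G = np.asarray(G, dtype=float)
    v = np.asarray(v, dtype=float)
    D, I = float(N), G.copy()                      # initial guess: the v = 0 solution
    for _ in range(max_iter):
        Imag = np.linalg.norm(I)
        h = min(Imag / D, 1.0) if D > 0.0 else 0.0 # clamp guards intermediate iterates
        n = I / Imag if Imag > 0.0 else np.zeros(3)
        p = psi(h)
        k = 0.5 * ((1.0 - p) * np.eye(3) + (3.0 * p - 1.0) * np.outer(n, n))
        D_new = N - v @ I                          # from N   = D + v^i I_i
        I_new = G - (v @ k) * D_new                # from G_j = I_j + v^i k_ij D
        if abs(D_new - D) + np.linalg.norm(I_new - I) <= tol * max(abs(N), 1.0):
            return D_new, I_new
        D, I = D_new, I_new
    return D, I                                    # last iterate if not converged
```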
Thus, the primitive moments must be obtained through an iterative procedure, which we discuss in more detail later, where we will pay particular attention to maintaining physically-realizable moments throughout the iteration process. One is faced with a similar problem, e.g., when solving the relativistic Euler and magnetohydrodynamics equations (e.g., [@noble_etal_2006]). # Moment Closure {#sec:closure} We use the maximum-entropy closure of Minerbo [@minerbo_1978] to close the two-moment model. We let the admissible set of kinetic distribution functions be $$\mathfrak{R} := \left\{\, f ~|~ f\ge 0 \quad\text{and}\quad \frac{1}{4\pi}\int_{\mathbb{S}^{2}}f\,d\omega > 0 \,\right\},$$ which is then used to define moment realizability as below. [^2] **Definition 1**. *The moments $\boldsymbol{\mathcal{M}}=(\mathcal{D},\boldsymbol{\mathcal{I}})^{\intercal}$ are realizable if they can be obtained from a distribution function $f(\omega)\in\mathfrak{R}$. The set of all realizable moments $\mathcal{R}$ is $$\mathcal{R} := \big\{\,\boldsymbol{\mathcal{M}}=(\mathcal{D},\boldsymbol{\mathcal{I}})^{\intercal} ~|~ \mathcal{D} > 0 ~\text{and}~ \gamma(\boldsymbol{\mathcal{M}})=\mathcal{D}-\mathcal{I}\ge0\,\big\}, \label{eq:realizableSet}$$ where the function $\gamma(\boldsymbol{\mathcal{M}})$ is concave.* The Minerbo closure is based on the maximum-entropy principle, assuming an entropy functional of the form $s[f] = f\,\ln f - f$. The functional form of the distribution maximizing this entropy functional is, in this case, the Maxwell--Boltzmann distribution, $$f_{\mbox{\tiny \sc ME}}(\omega) = \exp\big(\alpha+\beta\,(\hat{\mathsf{n}}_{i}\ell^{i})\big), \label{eq:fME}$$ where $\alpha$ and $\beta$ are determined from the constraints, $$\mathcal{D}=\frac{1}{4\pi}\int_{\mathbb{S}^{2}}f_{\mbox{\tiny \sc ME}}(\omega)\,d\omega \quad\text{and}\quad \hat{\mathsf{n}}_{i}\,\mathcal{I}^{i}=\mathcal{I}=\frac{1}{4\pi}\int_{\mathbb{S}^{2}}f_{\mbox{\tiny \sc ME}}(\omega)\,(\hat{\mathsf{n}}_{i}\ell^{i})\,d\omega. \label{eq:closureConstraints}$$ (Note that $f_{\mbox{\tiny \sc ME}}\in\mathfrak{R}$.) Letting $\hat{\mathsf{n}}_{i}\ell^{i}=\mu$, we can write $f_{\mbox{\tiny \sc ME}}$ as a function of $\mu$ and perform a change of variable to write the integrals in Eq. [\[eq:closureConstraints\]](#eq:closureConstraints){reference-type="eqref" reference="eq:closureConstraints"} in terms of $\mu$, which allows us to evaluate the constraints in Eq. [\[eq:closureConstraints\]](#eq:closureConstraints){reference-type="eqref" reference="eq:closureConstraints"} analytically (cf. [@minerbo_1978]) and leads to $$\mathcal{D} =e^{\alpha}\,\sinh(\beta)/\beta \quad\text{and}\quad \mathcal{I} =e^{\alpha}\,\big(\,\beta\,\cosh(\beta)-\sinh(\beta)\,\big)/\beta^{2}.$$ The flux factor can then be written solely as a function of $\beta$; i.e., $h=\coth(\beta)-1/\beta=: L(\beta)$, where $L(\beta)$ is the Langevin function. Thus, for a given $h$, we can obtain $\beta(h)=L^{-1}(h)$. Note that $L(\beta)\in(-1,1)$, so that solutions for $\beta$ only exist for $h<1$ (i.e., for $\boldsymbol{\mathcal{M}}$ in the interior of $\mathcal{R}$). Using the maximum-entropy distribution in Eq. 
[\[eq:fME\]](#eq:fME){reference-type="eqref" reference="eq:fME"}, direct calculations give, for $h \in[0,1)$, $$\psi(h) = 1 - \frac{2\,h}{\beta(h)} \quad\text{and}\quad \zeta(h) = \coth(\beta(h))-3\psi(h)/\beta(h) \label{eq:psiZetaMinerbo}.$$ When $h=1$ (i.e., when $\boldsymbol{\mathcal{M}}$ is on the boundary of $\mathcal{R}$), it is known [@fialkow1991recursiveness] that, for the two-moment case considered here, the underlying kinetic distribution is a weighted Dirac delta function. In this case, the moment closure is given by the associated Eddington and heat-flux factors $\psi(1) = \zeta(1) = 1$. Instead of inverting the Langevin function for $\beta$, the Eddington and heat-flux factors, $\psi$ and $\zeta$, can be accurately approximated by polynomials in $h$. For $\psi$, the following polynomial approximation leads to a relative approximation error, $\delta\psi:=(\psi - \psi_{\mathsf{a}}) / \psi$, within $1\%$ [@cernohorskyBludman_1994]: $$\psi_{\mathsf{a}}(h) = \frac{1}{3} + \frac{2}{15}\,\big(\,3\,h^{2} - h^{3} + 3\,h^{4}\,\big). \label{eq:psiApproximate}$$ For $\zeta$, the following approximation, given by [@just_etal_2015], $$\zeta_{\mathsf{a}}(h) = h\,\big(\,45 + 10\,h - 12\,h^{2} - 12\,h^{3} + 38\,h^{4} - 12\,h^{5} + 18\,h^{6}\,\big) / 75, \label{eq:zetaApproximate}$$ has a relative approximation error, $\delta\zeta := (\zeta - \zeta_{\mathsf{a}}) / \zeta$, lower than $3\%$. In Figure [2](#fig:eddingtonFactors){reference-type="ref" reference="fig:eddingtonFactors"}, we plot the Eddington factor, $\psi$, the heat-flux factor, $\zeta$, and their polynomial approximations, $\psi_{\mathsf{a}}$ and $\zeta_{\mathsf{a}}$, and report the relative approximation error versus the flux factor, $h$.

![The left plot shows the values of the Eddington factor, $\psi$, the heat-flux factor, $\zeta$, and their polynomial approximations, $\psi_{\mathsf{a}}$ and $\zeta_{\mathsf{a}}$, versus the flux factor, $h$. The right plot illustrates the relative errors, $\delta\psi=(\psi - \psi_{\mathsf{a}}) / \psi$ and $\delta\zeta = (\zeta - \zeta_{\mathsf{a}}) / \zeta$, versus $h$.](MinerboClosure_Factors.png){#fig:eddingtonFactors width="50%"} ![Relative approximation errors, $\delta\psi$ and $\delta\zeta$, versus the flux factor, $h$.](MinerboClosure_FactorErrors.png){width="50%"}

It can be seen from Figure [2](#fig:eddingtonFactors){reference-type="ref" reference="fig:eddingtonFactors"} that $\psi_{\mathsf{a}}$ and $\zeta_{\mathsf{a}}$ are quite accurate polynomial approximations to the Eddington and heat-flux factors. Thus, the approximate closure is used in the numerical tests for the two-moment model reported in Section [8](#sec:numericalResults){reference-type="ref" reference="sec:numericalResults"}, in which the two-moment model is closed by plugging the algebraic expressions given in Eqs. [\[eq:psiApproximate\]](#eq:psiApproximate){reference-type="eqref" reference="eq:psiApproximate"} and [\[eq:zetaApproximate\]](#eq:zetaApproximate){reference-type="eqref" reference="eq:zetaApproximate"} into the Eddington and heat-flux tensors in Eqs. [\[eq:VariableEddingtonTensor\]](#eq:VariableEddingtonTensor){reference-type="eqref" reference="eq:VariableEddingtonTensor"} and [\[eq:heatfluxTensor\]](#eq:heatfluxTensor){reference-type="eqref" reference="eq:heatfluxTensor"}, respectively. Next we explore the wave propagation speeds of the moment system in Eq. [\[eq:twoMomentModelCompact\]](#eq:twoMomentModelCompact){reference-type="eqref" reference="eq:twoMomentModelCompact"} with the approximate Minerbo closure introduced above. To calculate the wave speed, we compute the maximum magnitude of the eigenvalues of the spatial-flux Jacobians with respect to the conserved moments, $(\partial_{\boldsymbol{\mathcal{U}}}\boldsymbol{\mathcal{F}}^{i})$, $i=1,2,3$.
Specifically, we compute the spatial-flux Jacobian by $$\Big(\frac{\partial \boldsymbol{\mathcal{F}}^{i}}{\partial \boldsymbol{\mathcal{U}}}\Big) =\Big(\frac{\partial \boldsymbol{\mathcal{F}}^{i}}{\partial \boldsymbol{\mathcal{M}}}\Big)\Big(\frac{\partial \boldsymbol{\mathcal{U}}}{\partial \boldsymbol{\mathcal{M}}}\Big)^{-1},$$ where $$\Big(\frac{\partial \boldsymbol{\mathcal{U}}}{\partial \boldsymbol{\mathcal{M}}}\Big)_{ij} =\left[\begin{array}{cc} 1 & v^{j} \\ v^{k}\Big[\Big(\frac{\partial \mathsf{k}_{ik}}{\partial \mathcal{D}}\Big)\,\mathcal{D}+\mathsf{k}_{ik}\Big] & \delta_{ij} + v^{k}\Big(\frac{\partial \mathsf{k}_{ik}}{\partial \mathcal{I}^{j}}\Big)\,\mathcal{D} \end{array}\right]$$ and $$\Big(\frac{\partial \boldsymbol{\mathcal{F}}^{i}}{\partial \boldsymbol{\mathcal{M}}}\Big) =\left[\begin{array}{cccc} v^{i} & \delta^{i1} & \delta^{i2} & \delta^{i3} \\ \Big(\frac{\partial \mathsf{k}^{i}_{\hspace{2pt}1}}{\partial \mathcal{D}}\Big)\,\mathcal{D}+\mathsf{k}^{i}_{\hspace{2pt}1} & \Big(\frac{\partial \mathsf{k}^{i}_{\hspace{2pt}1}}{\partial \mathcal{I}^{1}}\Big)\,\mathcal{D}+v^{i} & \Big(\frac{\partial \mathsf{k}^{i}_{\hspace{2pt}1}}{\partial \mathcal{I}^{2}}\Big)\,\mathcal{D} & \Big(\frac{\partial \mathsf{k}^{i}_{\hspace{2pt}1}}{\partial \mathcal{I}^{3}}\Big)\,\mathcal{D} \\ \Big(\frac{\partial \mathsf{k}^{i}_{\hspace{2pt}2}}{\partial \mathcal{D}}\Big)\,\mathcal{D}+\mathsf{k}^{i}_{\hspace{2pt}2} & \Big(\frac{\partial \mathsf{k}^{i}_{\hspace{2pt}2}}{\partial \mathcal{I}^{1}}\Big)\,\mathcal{D} & \Big(\frac{\partial \mathsf{k}^{i}_{\hspace{2pt}2}}{\partial \mathcal{I}^{2}}\Big)\,\mathcal{D}+v^{i} & \Big(\frac{\partial \mathsf{k}^{i}_{\hspace{2pt}2}}{\partial \mathcal{I}^{3}}\Big)\,\mathcal{D} \\ \Big(\frac{\partial \mathsf{k}^{i}_{\hspace{2pt}3}}{\partial \mathcal{D}}\Big)\,\mathcal{D}+\mathsf{k}^{i}_{\hspace{2pt}3} & \Big(\frac{\partial \mathsf{k}^{i}_{\hspace{2pt}3}}{\partial \mathcal{I}^{1}}\Big)\,\mathcal{D} & \Big(\frac{\partial \mathsf{k}^{i}_{\hspace{2pt}3}}{\partial \mathcal{I}^{2}}\Big)\,\mathcal{D} & \Big(\frac{\partial \mathsf{k}^{i}_{\hspace{2pt}3}}{\partial \mathcal{I}^{3}}\Big)\,\mathcal{D}+v^{i} \end{array}\right]$$ follow from the definitions given in Eqs. [\[eq:conservedMoments\]](#eq:conservedMoments){reference-type="eqref" reference="eq:conservedMoments"} and [\[eq:phaseSpaceFluxes\]](#eq:phaseSpaceFluxes){reference-type="eqref" reference="eq:phaseSpaceFluxes"}, respectively. With this expression, we are able to demonstrate the following proposition, which states that the maximum wave speed is bounded above by the speed of light in a one-dimensional setting. **Proposition 2**. *Suppose $\boldsymbol{v}=(v,0,0)$ and $\boldsymbol{\mathcal{I}}=(\mathcal{I},0,0)$, with $|v|\leq 1$, $|\mathcal{I}|\leq \mathcal{D}$, and $\mathcal{D}>0$. Let $\lambda_{\max}:=\max\big(|\lambda(\partial_{\boldsymbol{\mathcal{U}}}\boldsymbol{\mathcal{F}}^{1})|\big)$ denote the maximum magnitude of the spatial-flux Jacobian eigenvalues. Then $\lambda_{\max}\leq1$.* *Proof.* In this setting, the spatial-flux Jacobian reduces to a 2-by-2 matrix, because the entries associated with the $x^2$ and $x^3$ axes are all zeros. In addition, the only nonzero component of the Eddington tensor is $\mathsf{k}_{11}$, which takes the values of the (approximate) Eddington factor $\psi_{\mathsf{a}}$. 
Thus, the partial derivatives $\frac{\partial \mathsf{k}_{11}}{\partial \mathcal{D}}$ and $\frac{\partial \mathsf{k}^{1}_{\hspace{2pt}1}}{\partial \mathcal{I}^{1}}$ become $\frac{\partial \psi_{\mathsf{a}}}{\partial \mathcal{D}}$ and $\frac{\partial \psi_{\mathsf{a}}}{\partial \mathcal{I}^{1}}$, respectively. Evaluating these partial derivatives using the chain rule then leads to $$\label{eq:fluxJacobian} \Big(\frac{\partial \boldsymbol{\mathcal{F}}^{1}}{\partial \boldsymbol{\mathcal{U}}}\Big) = \frac{1}{ 1 - v^2 \psi_{\mathsf{a}} + v(1 + v h)\psi_{\mathsf{a}}^\prime} \left[\begin{array}{cc} v - v \psi_{\mathsf{a}} + v(v + h)\psi_{\mathsf{a}}^\prime & 1 - v^2\\ (1 - v^2) (\psi_{\mathsf{a}} - h \psi_{\mathsf{a}}^{\prime}) & v - v \psi_{\mathsf{a}} + (1 + v h)\psi_{\mathsf{a}}^\prime \end{array}\right],$$ where $\psi_{\mathsf{a}}^\prime$ denotes the derivative of the approximate Eddington factor, $\psi_{\mathsf{a}}$, in Eq. [\[eq:psiApproximate\]](#eq:psiApproximate){reference-type="eqref" reference="eq:psiApproximate"} with respect to the flux factor, $h$. To prove the claim, we need to show that the eigenvalues of $(\partial_{\boldsymbol{\mathcal{U}}}\boldsymbol{\mathcal{F}}^{1})$ are in $[-1,1]$. Since $\psi_{\mathsf{a}}$ and $\psi_{\mathsf{a}}^\prime$ are both one-dimensional polynomials in $h$, the proof of the claim is straightforward but tedious. Here we omit the detailed analysis and show in Figure [\[fig:WaveSpeed\]](#fig:WaveSpeed){reference-type="ref" reference="fig:WaveSpeed"} the computed values of $\lambda_{\max}$ for $v\in[0,1]$ and $h\in[0,1]$, which illustrates that $\lambda_{\max}$ is bounded from above by one. ◻    This result is an extension of the wave speed analysis in [@lowrie_etal_2001 Section 6.2], in which it is assumed that the Eddington factor is independent of the flux factor; i.e., that $\psi_{\mathsf{a}}^{\prime}=0$. **Remark 1**. *In the three-dimensional case, the magnitude of the eigenvalues of the spatial-flux Jacobian are bounded above by $1+\mathcal{O}(v^2)$, which we provide verification of in Figure [\[fig:WaveSpeedViolation\]](#fig:WaveSpeedViolation){reference-type="ref" reference="fig:WaveSpeedViolation"}. Although the upper bound can exceed unity, which implies that the wave speed of the two-moment model in Eq. [\[eq:twoMomentModelCompact\]](#eq:twoMomentModelCompact){reference-type="eqref" reference="eq:twoMomentModelCompact"} can become unphysical, it shows that including the velocity-dependent term in the time derivatives in Eqs. [\[eq:spectralNumberEquation\]](#eq:spectralNumberEquation){reference-type="eqref" reference="eq:spectralNumberEquation"} and [\[eq:spectralNumberFluxEquation\]](#eq:spectralNumberFluxEquation){reference-type="eqref" reference="eq:spectralNumberFluxEquation"} improves the maximum wave speed estimation from $1+\mathcal{O}(v)$ (see, e.g., discussions in [@lowrie_etal_2001]) to $1+\mathcal{O}(v^2)$. Note that in the design of the numerical flux discussed in Section [4.1](#sec:dgMethod){reference-type="ref" reference="sec:dgMethod"}, we use unity as the estimate for the maximum wave speed, which appears to be valid in the regimes for which the $\mathcal{O}(v)$ model is applicable. 
In particular, unphysical wave speeds are not observed for $v\leq0.25$, as shown in Figure [\[fig:WaveSpeedViolation\]](#fig:WaveSpeedViolation){reference-type="ref" reference="fig:WaveSpeedViolation"}, for which we do not yet have a theoretical explanation.* # Numerical Scheme {#sec:discretization} ## Discontinuous Galerkin Phase-Space Discretization {#sec:dgMethod} We use the DG method to discretize Eq. [\[eq:twoMomentModelCompact\]](#eq:twoMomentModelCompact){reference-type="eqref" reference="eq:twoMomentModelCompact"} in phase-space. To this end we divide the phase-space domain $D=D_{\varepsilon}\times D_{\boldsymbol{x}}$ into a disjoint union $\mathcal{T}$ of open elements $\boldsymbol{K}=K_{\varepsilon}\times\boldsymbol{K}_{\boldsymbol{x}}$, so that $D=\cup_{\boldsymbol{K}\in\mathcal{T}}\boldsymbol{K}$. Here, $D_{\varepsilon}$ is the energy domain and $D_{\boldsymbol{x}}$ is the $d_{\boldsymbol{x}}$-dimensional spatial domain, and $$\boldsymbol{K}_{\boldsymbol{x}} = \big\{\,\boldsymbol{x} \colon x^{i} \in K_{\boldsymbol{x}}^{i} := (x_{\mbox{\tiny\sc L}}^{i},x_{\mbox{\tiny\sc H}}^{i}) ~|~ i=1,\ldots,d_{\boldsymbol{x}}\,\big\} \quad\text{and}\quad K_{\varepsilon} := (\varepsilon_{\mbox{\tiny\sc L}},\varepsilon_{\mbox{\tiny\sc H}}),$$ where $x_{\mbox{\tiny\sc L}}^{i}$ ($x_{\mbox{\tiny\sc H}}^{i}$) is the low (high) boundary of the spatial element in the $i$th spatial dimension, and $\varepsilon_{\mbox{\tiny\sc L}}$ ($\varepsilon_{\mbox{\tiny\sc H}}$) is the low (high) boundary of the energy element. We also define $\tau(\varepsilon)=\varepsilon^{2}$ and denote the volume of a phase-space element by $$|\boldsymbol{K}| = \int_{\boldsymbol{K}}\tau\,d\varepsilon\,d\boldsymbol{x}, \quad\text{where}\quad d\boldsymbol{x} = \prod_{i=1}^{d_{\boldsymbol{x}}}dx^{i}.$$ The length of an element in the $i$th dimension is $|K_{\boldsymbol{x}}^{i}|=x_{\mbox{\tiny\sc H}}^{i}-x_{\mbox{\tiny\sc L}}^{i}$, and $|K_{\varepsilon}|=\varepsilon_{\mbox{\tiny\sc H}}-\varepsilon_{\mbox{\tiny\sc L}}$. We also define the phase-space surface element $\tilde{\boldsymbol{K}}^{i}=(\times_{j\ne i}K_{\boldsymbol{x}}^{j})\times K_{\varepsilon}$ and the spatial coordinates orthogonal to the $i$th spatial dimension $\tilde{\boldsymbol{x}}^{i}$, so that as a set $\boldsymbol{x}=\{\,x^{i},\tilde{\boldsymbol{x}}^{i}\,\}$. Finally, we let $\boldsymbol{z}=(\varepsilon,\boldsymbol{x})$ denote the phase-space coordinate, and define $d\boldsymbol{z}=d\varepsilon d\boldsymbol{x}$, $d\tilde{\boldsymbol{z}}^{i}=d\varepsilon d\tilde{\boldsymbol{x}}^{i}$, and let, again as a set, $\tilde{\boldsymbol{z}}^{i}=\{\varepsilon,\tilde{\boldsymbol{x}}^{i}\}$. On each element $\boldsymbol{K}$, we let the approximation space for the DG method be $$\mathbb{V}_{h}^{k}(\boldsymbol{K}) =\big\{\,\varphi_{h} \colon \varphi_{h}|_{\boldsymbol{K}}\in\mathbb{Q}^{k}(\boldsymbol{K}), \forall \boldsymbol{K}\in\mathcal{T}\,\big\}, \label{eq:approximationSpace}$$ where $\mathbb{Q}^{k}(\boldsymbol{K})$ is the phase-space tensor product of one-dimensional polynomials of maximal degree $k$. We will denote the approximation space on spatial elements as $\mathbb{V}_{h}^{k}(\boldsymbol{K}_{\boldsymbol{x}})$, which is defined as in Eq. [\[eq:approximationSpace\]](#eq:approximationSpace){reference-type="eqref" reference="eq:approximationSpace"}, where $\mathbb{Q}^{k}(\boldsymbol{K}_{\boldsymbol{x}})$ is the spatial tensor product of one-dimensional polynomials of maximal degree $k$. 
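To illustrate the tensor-product structure of $\mathbb{Q}^{k}$, the sketch below constructs a one-dimensional Lagrange basis on $k+1$ reference nodes and evaluates the corresponding tensor-product basis on a reference element; for brevity only two of the four phase-space dimensions (energy and $x^{1}$) are shown. The choice of Lagrange polynomials on Gauss--Legendre nodes is an assumption made here for illustration; any basis spanning $\mathbb{Q}^{k}(\boldsymbol{K})$ yields the same approximation space.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def lagrange_basis_1d(nodes):
    """Return the 1D Lagrange basis functions associated with the given nodes."""
    def make(j):
        def L_j(x):
            value = 1.0
            for m, x_m in enumerate(nodes):
                if m != j:
                    value *= (x - x_m) / (nodes[j] - x_m)
            return value
        return L_j
    return [make(j) for j in range(len(nodes))]

k = 2                                   # polynomial degree
nodes, _ = leggauss(k + 1)              # reference nodes on [-1, 1] (assumed choice)
L1d = lagrange_basis_1d(nodes)

def tensor_basis_2d(eta_eps, eta_x):
    """Evaluate the (k+1)^2 tensor-product basis of Q^k at a reference point."""
    return np.array([Le(eta_eps) * Lx(eta_x) for Le in L1d for Lx in L1d])

phi = tensor_basis_2d(0.3, -0.7)        # basis values at a reference point
assert abs(phi.sum() - 1.0) < 1e-12     # Lagrange bases form a partition of unity
```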
We will use $\mathbb{V}_{h}^{k}(\boldsymbol{K}_{\boldsymbol{x}})$ to approximate the fluid three-velocity $\boldsymbol{v}=(v^{1},v^{2},v^{3})$, which will be assumed to be a given function of $\boldsymbol{x}$. The semi-discrete DG problem is then to find $\boldsymbol{\mathcal{U}}_{h}\in\mathbb{V}_{h}^{k}(\boldsymbol{K})$, which approximates $\boldsymbol{\mathcal{U}}$ in Eq. [\[eq:twoMomentModelCompact\]](#eq:twoMomentModelCompact){reference-type="eqref" reference="eq:twoMomentModelCompact"}, such that $$\big(\,\partial_{t}{\boldsymbol{\mathcal{U}}_{h}},\varphi_{h}\,\big)_{\boldsymbol{K}} =\boldsymbol{\mathcal{B}}_{h}\big(\,\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h},\varphi_{h}\,\big)_{\boldsymbol{K}} + \big(\,\boldsymbol{\mathcal{C}}(\boldsymbol{\mathcal{U}}_{h}),\varphi_{h}\,\big)_{\boldsymbol{K}}, \label{eq:dgSemiDiscrete}$$ for all test functions $\varphi_{h}\in\mathbb{V}_{h}^{k}(\boldsymbol{K})$, $\boldsymbol{v}_{h}\in\mathbb{V}_{h}^{k}(\boldsymbol{K}_{\boldsymbol{x}})$, and all $\boldsymbol{K}\in\mathcal{T}$. In Eq. [\[eq:dgSemiDiscrete\]](#eq:dgSemiDiscrete){reference-type="eqref" reference="eq:dgSemiDiscrete"}, we have defined the inner product $$\big(\,a_{h},b_{h}\,\big)_{\boldsymbol{K}} =\int_{\boldsymbol{K}}a_{h}\,b_{h}\,\tau\,d\boldsymbol{z}, \quad a_{h},b_{h}\in\mathbb{V}_{h}^{k}(\boldsymbol{K})$$ and the phase-space advection operator $$\boldsymbol{\mathcal{B}}_{h}\big(\,\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h},\varphi_{h}\,\big)_{\boldsymbol{K}} =\boldsymbol{\mathcal{B}}_{h}^{\boldsymbol{x}}\big(\,\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h},\varphi_{h}\,\big)_{\boldsymbol{K}} +\boldsymbol{\mathcal{B}}_{h}^{\varepsilon}\big(\,\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h},\varphi_{h}\,\big)_{\boldsymbol{K}} +\big(\,\boldsymbol{\mathcal{S}}(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}),\varphi_{h}\,\big)_{\boldsymbol{K}}, \label{eq:bilinearFormAdvection}$$ where the contribution from position space fluxes is $$\begin{aligned} \boldsymbol{\mathcal{B}}_{h}^{\boldsymbol{x}}\big(\,\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h},\varphi_{h}\,\big)_{\boldsymbol{K}} &=-\sum_{i=1}^{d_{\boldsymbol{x}}}\int_{\tilde{\boldsymbol{K}}^{i}} \Big[\, \widehat{\boldsymbol{\mathcal{F}}^{i}}\big(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}\big)\,\varphi_{h}|_{x_{\mbox{\tiny\sc H}}^{i}} -\widehat{\boldsymbol{\mathcal{F}}^{i}}\big(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}\big)\,\varphi_{h}|_{x_{\mbox{\tiny\sc L}}^{i}} \,\Big]\,\tau\,d\tilde{\boldsymbol{z}}^{i} \nonumber \\ &\hspace{12pt} +\sum_{i=1}^{d_{\boldsymbol{x}}}\big(\,\boldsymbol{\mathcal{F}}^{i}(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}),\partial_{i}{\varphi_{h}}\,\big)_{\boldsymbol{K}} \label{eq:bilinearFormAdvectionPosition}\end{aligned}$$ and the contribution from energy space fluxes is $$\begin{aligned} \boldsymbol{\mathcal{B}}_{h}^{\varepsilon}\big(\,\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h},\varphi_{h}\,\big)_{\boldsymbol{K}} &=-\int_{\boldsymbol{K}_{\boldsymbol{x}}} \Big[\, \varepsilon^{3}\,\widehat{\boldsymbol{\mathcal{F}}^{\varepsilon}}\big(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}\big)\,\varphi_{h}|_{\varepsilon_{\mbox{\tiny\sc H}}} -\varepsilon^{3}\,\widehat{\boldsymbol{\mathcal{F}}^{\varepsilon}}\big(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}\big)\,\varphi_{h}|_{\varepsilon_{\mbox{\tiny\sc L}}} \,\Big]\,d\boldsymbol{x} \nonumber \\ &\hspace{12pt} 
+\big(\,\varepsilon\,\boldsymbol{\mathcal{F}}^{\varepsilon}(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}),\partial_{\varepsilon}{\varphi_{h}}\,\big)_{\boldsymbol{K}}. \label{eq:bilinearFormAdvectionEnergy}\end{aligned}$$ In Eq. [\[eq:bilinearFormAdvectionPosition\]](#eq:bilinearFormAdvectionPosition){reference-type="eqref" reference="eq:bilinearFormAdvectionPosition"}, $\widehat{\boldsymbol{\mathcal{F}}^{i}}\big(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}\big)$ is a numerical flux approximating the flux on the surface $\tilde{\boldsymbol{K}}^{i}$, which is evaluated using the global Lax--Friedrichs (LF) flux $$\widehat{\boldsymbol{\mathcal{F}}^{i}}\big(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}\big)|_{x^{i}} =\mathscr{F}_{\mbox{\tiny\sc LF}}^{i}\big(\boldsymbol{\mathcal{U}}_{h}(x^{i,-},\tilde{\boldsymbol{z}}^{i}),\boldsymbol{\mathcal{U}}_{h}(x^{i,+},\tilde{\boldsymbol{z}}^{i}),\hat{\boldsymbol{v}}(x^{i},\tilde{\boldsymbol{x}}^{i})\big), \label{eq:numericalFluxPosition}$$ where $x^{i,\mp}=\lim_{\delta\to0^{+}}x^{i}\mp\delta$ and where we write the global LF flux function as $$\mathscr{F}_{\mbox{\tiny\sc LF}}^{i}\big(\boldsymbol{\mathcal{U}}_{a},\boldsymbol{\mathcal{U}}_{b},\hat{\boldsymbol{v}}\big) =\frac{1}{2}\, \big(\, \boldsymbol{\mathcal{F}}^{i}(\boldsymbol{\mathcal{U}}_{a},\hat{\boldsymbol{v}})+\boldsymbol{\mathcal{F}}^{i}(\boldsymbol{\mathcal{U}}_{b},\hat{\boldsymbol{v}}) -\alpha^{i}\,(\,\boldsymbol{\mathcal{U}}_{b}[\hat{\boldsymbol{v}}^{i}]-\boldsymbol{\mathcal{U}}_{a}[\hat{\boldsymbol{v}}^{i}]\,) \,\big), \label{eq:globalLF}$$ where $\alpha^{i}$ is the largest (absolute) eigenvalue of the flux Jacobian $\partial\boldsymbol{\mathcal{F}}^{i}/\partial\boldsymbol{\mathcal{U}}$ over the entire domain, for which we simply set $\alpha^{i}=1$. [^3] The components of the fluid three-velocity at the element interface are computed as the average $$\hat{\boldsymbol{v}}(x^{i},\tilde{\boldsymbol{x}}^{i}) =\frac{1}{2}\big(\,\boldsymbol{v}_{h}(x^{i,-},\tilde{\boldsymbol{x}}^{i})+\boldsymbol{v}_{h}(x^{i,+},\tilde{\boldsymbol{x}}^{i})\,\big). \label{eq:faceVelocity}$$ Note that the three-velocity components can be discontinuous across element interfaces. **Remark 2**. *In the flux function in Eq. [\[eq:globalLF\]](#eq:globalLF){reference-type="eqref" reference="eq:globalLF"}, we have defined the dissipative term to be proportional to $(\,\boldsymbol{\mathcal{U}}_{b}[\hat{\boldsymbol{v}}^{i}]-\boldsymbol{\mathcal{U}}_{a}[\hat{\boldsymbol{v}}^{i}]\,)$, where $\hat{\boldsymbol{v}}^{i}=\big(\,\delta^{i1}\,\hat{v}^{1},\,\delta^{i2}\,\hat{v}^{2},\,\delta^{i3}\,\hat{v}^{3}\,\big)^{\intercal}$, as opposed to the standard LF flux where the dissipative term is proportional to $(\,\boldsymbol{\mathcal{U}}_{b}[\hat{\boldsymbol{v}}]-\boldsymbol{\mathcal{U}}_{a}[\hat{\boldsymbol{v}}]\,)$. We have found this to be necessary in order to improve the realizability-preserving property of the scheme in the multi-dimensional setting (see Section [5](#sec:realizability_preservation){reference-type="ref" reference="sec:realizability_preservation"}).* In order to compute the energy space fluxes $\boldsymbol{\mathcal{F}}^{\varepsilon}(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h})$ and the sources $\boldsymbol{\mathcal{S}}(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h})$, we need to approximate spatial derivatives of the three-velocity components within elements. 
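Before turning to those velocity-derivative approximations, the following minimal sketch illustrates the interface velocity average in Eq. [\[eq:faceVelocity\]](#eq:faceVelocity){reference-type="eqref" reference="eq:faceVelocity"} and the global LF flux in Eq. [\[eq:globalLF\]](#eq:globalLF){reference-type="eqref" reference="eq:globalLF"}, including the modified dissipation term of Remark [Remark 2](#rem:LF_flux){reference-type="ref" reference="rem:LF_flux"}. The callables `flux` and `conserved` are placeholders for the model-specific expressions $\boldsymbol{\mathcal{F}}^{i}(\cdot,\cdot)$ and $\boldsymbol{\mathcal{U}}[\cdot]$ defined earlier in the paper, and all function names are assumptions made for illustration.

```python
import numpy as np

def face_velocity(v_minus, v_plus):
    """Average of the left/right traces of the fluid three-velocity at an interface."""
    return 0.5 * (np.asarray(v_minus, float) + np.asarray(v_plus, float))

def lax_friedrichs_flux(U_a, U_b, v_hat, i, flux, conserved, alpha=1.0):
    """Global LF flux in spatial dimension i (0-based here).

    `flux(U, v, i)` and `conserved(U, v)` are placeholder callables returning the
    analytic flux F^i and the velocity-dependent conserved moments U[v]; alpha is
    the wave-speed estimate, set to unity as in the text."""
    v_hat = np.asarray(v_hat, float)
    v_hat_i = np.zeros_like(v_hat)
    v_hat_i[i] = v_hat[i]          # dissipation uses only the i-th component (Remark 2)
    return 0.5 * (flux(U_a, v_hat, i) + flux(U_b, v_hat, i)
                  - alpha * (conserved(U_b, v_hat_i) - conserved(U_a, v_hat_i)))
```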
We denote the derivative of the $i$th velocity component with respect to $x^{j}$ by $(\partial_{j}{v^{i}})_{h}\in\mathbb{V}_{h}^{k}(\boldsymbol{K}_{\boldsymbol{x}})$, and compute this by demanding that $$\int_{\boldsymbol{K}_{\boldsymbol{x}}}(\partial_{j}{v^{i}})_{h}\,\varphi_{h}\,d\boldsymbol{x} =\int_{\tilde{\boldsymbol{K}}_{\boldsymbol{x}}^{j}}\Big[\,\hat{v}^{i}\,\varphi_{h}|_{x_{\mbox{\tiny\sc H}}^{j}}-\hat{v}^{i}\,\varphi_{h}|_{x_{\mbox{\tiny\sc L}}^{j}}\,\Big]\,d\tilde{\boldsymbol{x}}^{j} -\int_{\boldsymbol{K}_{\boldsymbol{x}}}v_{h}^{i}\,\partial_{j}{\varphi_{h}}\,d\boldsymbol{x} \label{eq:velocityDerivatives}$$ holds for all $\varphi_{h}\in\mathbb{V}_{h}^{k}(\boldsymbol{K}_{\boldsymbol{x}})$ and all $\boldsymbol{K}_{\boldsymbol{x}}$, and where $\hat{v}^{i}(x^{j},\tilde{\boldsymbol{x}}^{j})$ is computed as in Eq. [\[eq:faceVelocity\]](#eq:faceVelocity){reference-type="eqref" reference="eq:faceVelocity"}. The energy space flux $\widehat{\boldsymbol{\mathcal{F}}^{\varepsilon}}\big(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}\big)$ in Eq. [\[eq:bilinearFormAdvectionEnergy\]](#eq:bilinearFormAdvectionEnergy){reference-type="eqref" reference="eq:bilinearFormAdvectionEnergy"} is also computed using an LF-type flux $$\widehat{\boldsymbol{\mathcal{F}}^{\varepsilon}}\big(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}\big)|_{\varepsilon} =\mathscr{F}_{\mbox{\tiny\sc LF}}^{\varepsilon}\big(\boldsymbol{\mathcal{U}}_{h}(\varepsilon^{-},\boldsymbol{x}),\boldsymbol{\mathcal{U}}_{h}(\varepsilon^{+},\boldsymbol{x}),\boldsymbol{v}_{h}(\boldsymbol{x})\big), \label{eq:numericalFluxEnergy}$$ where $\varepsilon^{\mp}=\lim_{\delta\to0^{+}}\varepsilon\mp\delta$, and we take the LF flux function to be given by $$\mathscr{F}_{\mbox{\tiny\sc LF}}^{\varepsilon}\big(\boldsymbol{\mathcal{U}}_{a},\boldsymbol{\mathcal{U}}_{b},\boldsymbol{v}_h\big) =\frac{1}{2}\, \big(\, \boldsymbol{\mathcal{F}}^{\varepsilon}(\boldsymbol{\mathcal{U}}_{a},\boldsymbol{v}_h)+\boldsymbol{\mathcal{F}}^{\varepsilon}(\boldsymbol{\mathcal{U}}_{b},\boldsymbol{v}_h) -\alpha^{\varepsilon}\,(\,\boldsymbol{\mathcal{M}}_{b}-\boldsymbol{\mathcal{M}}_{a}\,) \,\big), \label{eq:globalLFenergy}$$ where $\alpha^{\varepsilon}$ is an estimate of the largest absolute eigenvalue of the flux Jacobian $\partial\boldsymbol{\mathcal{F}}^{\varepsilon}/\partial\boldsymbol{\mathcal{U}}$. To estimate $\alpha^{\varepsilon}$ we consider the quadratic form $$Q(\boldsymbol{v}_{h}) =(-\partial_{j}{v^{i}})_{h}\,\ell_{i}\,\ell^{j}= \boldsymbol{\ell}^{\intercal}A(\boldsymbol{v}_{h})\,\boldsymbol{\ell}, \quad\text{where}\quad A_{ij}(\boldsymbol{v}_{h})=-\frac{1}{2}\big(\,(\partial_{i}{v^{j}})_{h}+(\partial_{j}{v^{i}})_{h}\big). \label{eq:quadraticForm}$$ It can be shown that $|Q(\boldsymbol{v}_{h})|\le\lambda_{A}$, where $\lambda_{A}$ is the largest absolute eigenvalue of the matrix $A$. (Since $A$ is symmetric, the eigenvalues are real.) Hence, we set $\alpha^{\varepsilon}=\lambda_{A}$. **Remark 3**. *In the energy space flux function in Eq. [\[eq:globalLFenergy\]](#eq:globalLFenergy){reference-type="eqref" reference="eq:globalLFenergy"}, the numerical dissipation term is given in terms of the primitive moments $\boldsymbol{\mathcal{M}}$ rather than the conserved moments $\boldsymbol{\mathcal{U}}$. This choice is motivated by the realizability analysis in Section [5.1.2](#sec:realizabilityEnergy){reference-type="ref" reference="sec:realizabilityEnergy"}.* **Remark 4**. 
*For simplicity we assume that the absorption and scattering opacities ($\chi$ and $\sigma$, respectively), appearing in the second term on the right-hand side of Eq. [\[eq:dgSemiDiscrete\]](#eq:dgSemiDiscrete){reference-type="eqref" reference="eq:dgSemiDiscrete"}, are constant within each phase-space element $\boldsymbol{K}$.* In this work, we consider the nodal DG scheme (see, e.g., [@hesthavenWarburton_2008] for an overview), which writes $\boldsymbol{\mathcal{U}}_{h}\in\mathbb{V}_{h}^{k}(\boldsymbol{K})$ as an expansion of tensor products of one-dimensional Lagrange polynomials of degrees up to $k$ in each element. As in [@laiu_etal_2021], we use the $(k+1)$-point Legendre--Gauss (LG) quadrature points (see, e.g., [@abramowitzStegun_1988]) as the interpolation points for the Lagrange polynomials. Following standard Ritz--Galerkin practice, we choose the test functions $\varphi_h$ to be identical to the trial functions, which are the tensor products of Lagrange polynomials used in the expansion of $\boldsymbol{\mathcal{U}}_{h}$, and evaluate the inner products $(\cdot,\cdot)_{\boldsymbol{K}}$ using the $(k+1)$-point LG quadrature rule. In the remainder of this paper, we denote the sets of the $(k+1)$-point LG quadrature points in an element $\boldsymbol{K}$ on $K_{\varepsilon}$ and $K_{\boldsymbol{x}}^{i}$ by $S_\varepsilon^{\boldsymbol{K}}:=\{\varepsilon_1,\dots,\varepsilon_{k+1}\}$ and $S_i^{\boldsymbol{K}}:=\{x^{i}_1,\dots,x^{i}_{k+1}\}$, respectively. Then the set of local DG nodes in element $\boldsymbol{K}$ is denoted as $$\label{eq:localNodes} \textstyle S^{\boldsymbol{K}}_{\otimes} := S_{\varepsilon}^{\boldsymbol{K}} \otimes \big( \bigotimes_{i=1}^{d_{\boldsymbol{x}}} S_i^{\boldsymbol{K}} \big) \:.$$ With this notation, the semidiscretized Eq. [\[eq:dgSemiDiscrete\]](#eq:dgSemiDiscrete){reference-type="eqref" reference="eq:dgSemiDiscrete"} can then be written as $$\label{eq:nodalSemiDiscrete} \partial_{t}{\boldsymbol{\mathcal{U}}_{\boldsymbol{k}}} =\boldsymbol{\mathsf{B}}\big(\,\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}\,\big)_{\boldsymbol{k}} + \boldsymbol{\mathsf{C}}(\boldsymbol{\mathcal{U}}_{\boldsymbol{k}})\:,\quad \forall \boldsymbol{K}\in\mathcal{T}\:,$$ where $\boldsymbol{\mathsf{B}}$ and $\boldsymbol{\mathsf{C}}$ denote the advection and collision operators acting on the collection of nodal values $\boldsymbol{\mathcal{U}}_{\boldsymbol{k}}(t) := \{ \boldsymbol{\mathcal{U}}_{h}(\varepsilon, \boldsymbol{x}, t)\colon (\varepsilon, \boldsymbol{x})\in S^{\boldsymbol{K}}_{\otimes} \}$. Here the subscript ${\boldsymbol{k}}$ denotes evaluation at the points in $S^{\boldsymbol{K}}_{\otimes}$. This nodal representation will be used in the following sections. To simplify the notation therein, we introduce a few auxiliary point sets in phase-space, which are needed in the realizability analysis in Sections [5.1.1](#sec:realizabilitySpatial){reference-type="ref" reference="sec:realizabilitySpatial"} and [5.1.2](#sec:realizabilityEnergy){reference-type="ref" reference="sec:realizabilityEnergy"}. In element $\boldsymbol{K}$, let $\widehat{S}_{\varepsilon}^{\boldsymbol{K}}:=\{\hat{\varepsilon}_1,\dots,\hat{\varepsilon}_{\hat{k}}\}$ and $\widehat{S}_{i}^{\boldsymbol{K}}:=\{\hat{x}^{i}_1,\dots,\hat{x}^{i}_{\hat{k}}\}$ denote the sets of quadrature points given by the $\hat{k}$-point Legendre--Gauss--Lobatto (LGL) quadrature rule (see, e.g., [@abramowitzStegun_1988]) on $K_{\varepsilon}$ and $K_{\boldsymbol{x}}^{i}$, respectively. 
Here $\hat{k}\geq\frac{k+5}{2}$ is chosen so that the quadrature integrates polynomials up to degree $k+2$ exactly, which is required in the analysis. In element $\boldsymbol{K}$, we define the auxiliary sets $\widehat{S}^{\boldsymbol{K}}_{\varepsilon,\otimes}$ and $\widehat{S}^{\boldsymbol{K}}_{i,\otimes}$, $i=1,\dots,d_{\boldsymbol{x}}$, as $$\label{eq:AuxSets} \textstyle \widehat{S}^{\boldsymbol{K}}_{\varepsilon,\otimes} := \widehat{S}_{\varepsilon}^{\boldsymbol{K}} \otimes \big(\bigotimes_{i=1}^{d_{\boldsymbol{x}}} S^{\boldsymbol{K}}_{i} \big) {\quad\text{and}\quad} \widehat{S}^{\boldsymbol{K}}_{i,\otimes} := S_{\varepsilon}^{\boldsymbol{K}} \otimes \big(\bigotimes_{j=1,j\neq i}^{d_{\boldsymbol{x}}} S^{\boldsymbol{K}}_{j} \big) \otimes \widehat{S}^{\boldsymbol{K}}_{i},$$ respectively. We denote the union of these auxiliary sets in element $\boldsymbol{K}$ as $$\label{eq:AuxSetUnion} \textstyle \widehat{S}^{\boldsymbol{K}}_{\otimes} := \widehat{S}^{\boldsymbol{K}}_{\varepsilon,\otimes} \cup \big( \bigcup_{i=1}^{d_{\boldsymbol{x}}} \widehat{S}^{\boldsymbol{K}}_{i,\otimes} \big)$$ and further denote the union of the auxiliary sets and the local DG nodes as $$\label{eq:AllSetUnion} \textstyle \widetilde{S}^{\boldsymbol{K}}_{\otimes} := S^{\boldsymbol{K}}_{\otimes} \cup \widehat{S}^{\boldsymbol{K}}_{\otimes}\:.$$ An illustration of the local point sets $S^{\boldsymbol{K}}_{\otimes}$, $\widehat{S}^{\boldsymbol{K}}_{1,\otimes}$, and $\widehat{S}^{\boldsymbol{K}}_{\varepsilon,\otimes}$ is given in Figure [3](#fig:localNodes){reference-type="ref" reference="fig:localNodes"}, in which the case $d_{\boldsymbol{x}}=1$ and $\boldsymbol{x}=x^1$ is considered. In this case, $\widehat{S}^{\boldsymbol{K}}_{i,\otimes}$ is simply $\widehat{S}^{\boldsymbol{K}}_{1,\otimes}:={S}^{\boldsymbol{K}}_{\varepsilon} \otimes \widehat{S}^{\boldsymbol{K}}_{1}$, as defined in Eq. [\[eq:AuxSets\]](#eq:AuxSets){reference-type="eqref" reference="eq:AuxSets"}. ![Illustration of the collection of DG nodes $S^{\boldsymbol{K}}_{\otimes}$ and the auxiliary phase-space point sets $\widehat{S}^{\boldsymbol{K}}_{1,\otimes}$ and $\widehat{S}^{\boldsymbol{K}}_{\varepsilon,\otimes}$ in an element $\boldsymbol{K}$ in a computational domain $\mathbb{R} \times \mathbb{R}^{+}$. These sets are defined in Eqs. [\[eq:localNodes\]](#eq:localNodes){reference-type="eqref" reference="eq:localNodes"} and [\[eq:AuxSets\]](#eq:AuxSets){reference-type="eqref" reference="eq:AuxSets"}, respectively. In this figure, $\widehat{S}^{\boldsymbol{K}}_{i,\otimes}$ reduces to $\widehat{S}^{\boldsymbol{K}}_{1,\otimes}$ since here $\boldsymbol{x}=x^{1}$ is considered.](LocalNodes.pdf){#fig:localNodes width="\\textwidth"} ## Time Integration {#sec:timeIntegration} We use IMEX methods to evolve the semi-discrete two-moment model in Eq. [\[eq:dgSemiDiscrete\]](#eq:dgSemiDiscrete){reference-type="eqref" reference="eq:dgSemiDiscrete"} forward in time, where the phase-space advection term is treated explicitly and the collision term is treated implicitly. 
The general $s$-stage IMEX scheme can then be written as [@ascher_etal_1997; @pareschiRusso_2005] $$\begin{aligned} \big(\,\boldsymbol{\mathcal{U}}_{h}^{(i)},\varphi_{h}\,\big)_{\boldsymbol{K}} &=\big(\,\boldsymbol{\mathcal{U}}_{h}^{n},\varphi_{h}\,\big)_{\boldsymbol{K}} \nonumber \\ &\hspace{12pt}\label{eq:s_IMEX_1} +\Delta t\sum_{j=1}^{i-1}\tilde{\alpha}_{ij}\,\boldsymbol{\mathcal{B}}_{h}\big(\,\boldsymbol{\mathcal{U}}_{h}^{(j)},\boldsymbol{v}_{h},\varphi_{h}\,\big)_{\boldsymbol{K}} +\Delta t\sum_{j=1}^{i}\alpha_{ij}\,\big(\,\boldsymbol{\mathcal{C}}(\boldsymbol{\mathcal{U}}_{h}^{(j)}),\varphi_{h}\,\big)_{\boldsymbol{K}}, \\ \big(\,\boldsymbol{\mathcal{U}}_{h}^{n+1},\varphi_{h}\,\big)_{\boldsymbol{K}} &=\big(\,\boldsymbol{\mathcal{U}}_{h}^{n},\varphi_{h}\,\big)_{\boldsymbol{K}} \nonumber \\ &\hspace{12pt}\label{eq:s_IMEX_2} +\Delta t\sum_{i=1}^{s}\tilde{w}_{i}\,\boldsymbol{\mathcal{B}}_{h}\big(\,\boldsymbol{\mathcal{U}}_{h}^{(i)},\boldsymbol{v}_{h},\varphi_{h}\,\big)_{\boldsymbol{K}} +\Delta t\sum_{i=1}^{s}w_{i}\,\big(\,\boldsymbol{\mathcal{C}}(\boldsymbol{\mathcal{U}}_{h}^{(i)}),\varphi_{h}\,\big)_{\boldsymbol{K}},\end{aligned}$$ for $i=1,\ldots,s$, all $\boldsymbol{K}\in\mathcal{T}$, and all $\varphi_{h}\in\mathbb{V}_{h}^{k}(\boldsymbol{K})$. Here the coefficients $\tilde{\alpha}_{ij}$, $\alpha_{ij}$, $\tilde{w}_{i}$, and $w_{i}$ are required to satisfy certain order conditions for achieving the desired accuracy of the IMEX scheme. In addition, to preserve realizability of the evolved moments, each stage in the IMEX update needs to be formulated as convex combinations of realizable terms, which results in additional restrictions on the choices of coefficients. We refer the readers to [@chu_etal_2019 Section 6] for details on the order and convex-invariant conditions on the coefficients in the IMEX scheme. ## Iterative Solvers for Nonlinear Systems {#sec:iterative_solvers} In this section, we introduce the iterative solvers for the nonlinear systems that occur in the evolution of the IMEX scheme in Eqs. [\[eq:s_IMEX_1\]](#eq:s_IMEX_1){reference-type="eqref" reference="eq:s_IMEX_1"}--[\[eq:s_IMEX_2\]](#eq:s_IMEX_2){reference-type="eqref" reference="eq:s_IMEX_2"}. In Section [4.3.1](#sec:moment_conversion){reference-type="ref" reference="sec:moment_conversion"}, we present the iterative solver for the conversion of conserved moments $\boldsymbol{\mathcal{U}}$ to primitive moments $\boldsymbol{\mathcal{M}}$. This moment conversion is required to evaluate the closures for the higher-order moments $\mathcal{K}^{ij}$ and $\mathcal{Q}^{ijk}$ at each stage of the IMEX scheme, since these closures are defined in terms of the primitive moments as discussed in Section [3](#sec:closure){reference-type="ref" reference="sec:closure"}. In Section [4.3.2](#sec:collision_solver){reference-type="ref" reference="sec:collision_solver"}, we discuss the solver for the nonlinear equations arising from the implicit update in the IMEX scheme. Even though the simplified collision term $\boldsymbol{\mathcal{C}}(\boldsymbol{\mathcal{U}})$ in Eq. [\[eq:sources\]](#eq:sources){reference-type="eqref" reference="eq:sources"} appears to be linear in terms of the primitive moments, the implicit system is still nonlinear because the IMEX scheme evolves the conserved moments. This nonlinear system formulation is also extendable to handle systems with collision terms that include more comprehensive physics; e.g., neutrino--electron scattering and thermal pair processes, as considered in [@laiu_etal_2021]. 
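To make the stage structure of Eqs. [\[eq:s_IMEX_1\]](#eq:s_IMEX_1){reference-type="eqref" reference="eq:s_IMEX_1"}--[\[eq:s_IMEX_2\]](#eq:s_IMEX_2){reference-type="eqref" reference="eq:s_IMEX_2"} concrete, and to indicate where the nonlinear solves discussed in this section enter, the following is a minimal sketch of one IMEX step in the nodal form of Eq. [\[eq:nodalSemiDiscrete\]](#eq:nodalSemiDiscrete){reference-type="eqref" reference="eq:nodalSemiDiscrete"}. The callables `advect`, `collide`, and `implicit_solve`, as well as the coefficient arrays, are assumptions standing in for the operators and IMEX tableaux described above; this is a sketch of the update structure only, not of the implementation used in this work.

```python
import numpy as np

def imex_step(U_n, dt, advect, collide, implicit_solve, At, A, wt, w):
    """One step of the s-stage IMEX scheme in nodal form.

    Placeholder callables: advect(U) evaluates the explicit phase-space advection
    operator, collide(U) the collision operator, and implicit_solve(rhs, c) returns
    the U satisfying U = rhs + c*collide(U) (returning rhs when c == 0).  At/A are
    the explicit/implicit stage coefficients (tilde-alpha/alpha) and wt/w the weights."""
    s = len(wt)
    B_vals, C_vals = [], []
    for i in range(s):
        rhs = np.array(U_n, float)
        for j in range(i):                        # previously computed stages
            rhs += dt * At[i][j] * B_vals[j] + dt * A[i][j] * C_vals[j]
        U_i = implicit_solve(rhs, dt * A[i][i])   # implicit stage solve (Section 4.3.2)
        B_vals.append(advect(U_i))
        C_vals.append(collide(U_i))
    U_next = np.array(U_n, float)
    for i in range(s):
        U_next += dt * (wt[i] * B_vals[i] + w[i] * C_vals[i])
    return U_next
```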
Under the nodal DG framework (see Eq. [\[eq:nodalSemiDiscrete\]](#eq:nodalSemiDiscrete){reference-type="eqref" reference="eq:nodalSemiDiscrete"}), each of these nonlinear systems can be formulated locally at each node in the phase-space element because there is no coupling between nodes in either the moment conversion or the collision solve. Therefore, the nonlinear systems considered in this section are written in terms of the nodal moments at a given phase-space node $\boldsymbol{z}\in S^{\boldsymbol{K}}_{\otimes}$, $\forall \boldsymbol{K}\in\mathcal{T}$, where $S^{\boldsymbol{K}}_{\otimes}$ is the set of DG nodes in element $\boldsymbol{K}$, as defined in Eq. [\[eq:localNodes\]](#eq:localNodes){reference-type="eqref" reference="eq:localNodes"}. For convenience, we drop the subscript from the nodal representation in this section, and note that such nonlinear systems must be solved at each $\boldsymbol{z}\in S^{\boldsymbol{K}}_{\otimes}$ and in each $\boldsymbol{K}\in\mathcal{T}$ to perform the moment conversion from $\boldsymbol{\mathcal{U}}$ to $\boldsymbol{\mathcal{M}}$ or the implicit steps in the IMEX scheme. ### Moment Conversion Solver {#sec:moment_conversion} For a given conserved moment $\boldsymbol{\mathcal{U}}\in\mathcal{R}$, finding a corresponding primitive moment $\boldsymbol{\mathcal{M}}\in\mathcal{R}$ that satisfies Eq. [\[eq:ConservedToPrimitive\]](#eq:ConservedToPrimitive){reference-type="eqref" reference="eq:ConservedToPrimitive"} requires solving a nonlinear system. A naive approach is to formulate Eq. [\[eq:ConservedToPrimitive\]](#eq:ConservedToPrimitive){reference-type="eqref" reference="eq:ConservedToPrimitive"} as a fixed-point problem $$\label{eq:fixed_pt} \boldsymbol{\mathcal{M}}= \left(\begin{array}{c} \mathcal{D}\\ \mathcal{I}_{j} \end{array} \right) = \left(\begin{array}{c} - v^{i}\mathcal{I}_{i} + \mathcal{N}\\ - v^{i}\mathsf{k}_{ij} \mathcal{D}+ \mathcal{G}_{j} \end{array}\right):=\tilde{\boldsymbol{\mathcal{H}}}_{\boldsymbol{\mathcal{U}}}(\boldsymbol{\mathcal{M}}) .$$ However, when standard fixed-point iteration, i.e., Picard iteration (see, e.g., [@hairer1993solving Section I.8]), is applied to solve Eq. [\[eq:fixed_pt\]](#eq:fixed_pt){reference-type="eqref" reference="eq:fixed_pt"}, this formulation does not guarantee that the resulting moments are realizable at each iteration, which, in turn, may result in failure to converge for problems in this form. To address these issues, we adopt the idea of Richardson iteration for solving linear systems (see, e.g., [@richardson1911ix] and [@saad2003iterative Section 13.2.1]) and reformulate the fixed-point problem in Eq. [\[eq:fixed_pt\]](#eq:fixed_pt){reference-type="eqref" reference="eq:fixed_pt"} as $$\label{eq:richardson_fixed_pt} \boldsymbol{\mathcal{M}}= \left(\begin{array}{c} \mathcal{D}\\ \mathcal{I}_{j} \end{array} \right) = \left(\begin{array}{c} \mathcal{D}\\ \mathcal{I}_j \end{array} \right) - \lambda \left(\begin{array}{c} \mathcal{D}+ v^{i}\mathcal{I}_{i} - \mathcal{N}\\ \mathcal{I}_{j} + v^{i}\mathsf{k}_{ij} \mathcal{D}- \mathcal{G}_{j} \end{array}\right):=\boldsymbol{\mathcal{H}}_{\boldsymbol{\mathcal{U}}}(\boldsymbol{\mathcal{M}}) ,$$ where $\lambda\in(0,1]$ is a constant. 
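A minimal sketch of the resulting Picard iteration, written directly from the damped formulation above, is given below. The callable `eddington_tensor` is an assumed placeholder for the closure evaluation of $\mathsf{k}_{ij}$, and the damping parameter `lam` is left as an input; the specific choice used in this work is discussed next.

```python
import numpy as np

def picard_moment_conversion(N, G, v, eddington_tensor, lam, tol=1e-12, max_iter=200):
    """Picard iteration for the damped fixed-point map H_U in the moment conversion.

    N (scalar) and G (3-vector) are the conserved moments, v is the fluid
    three-velocity, eddington_tensor(D, I) is a placeholder for the closure
    evaluation of k_ij, and lam is the damping parameter lambda in (0, 1]."""
    v, G = np.asarray(v, float), np.asarray(G, float)
    D, I = float(N), G.copy()                  # initial guess: M^[0] = (N, G)
    for _ in range(max_iter):
        k = eddington_tensor(D, I)             # k_ij from the closure
        res_D = D + v @ I - N                  # D + v^i I_i - N
        res_I = I + (v @ k) * D - G            # I_j + v^i k_ij D - G_j
        D_new, I_new = D - lam * res_D, I - lam * res_I
        if abs(D_new - D) + np.linalg.norm(I_new - I) <= tol * max(abs(D), 1.0):
            return D_new, I_new
        D, I = D_new, I_new
    return D, I
```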
Here we choose $\lambda = (1+v)^{-1}$, where $v := |\boldsymbol{v}| = \sqrt{v_{i}v^{i}}\,$, to guarantee the realizability-preserving and convergence properties of the Picard iteration method; i.e., $$\label{eq:Picard} \boldsymbol{\mathcal{M}}^{[k+1]} = \boldsymbol{\mathcal{H}}_{\boldsymbol{\mathcal{U}}}(\boldsymbol{\mathcal{M}}^{[k]}) .$$ The realizability-preserving and convergence properties of Eq. [\[eq:Picard\]](#eq:Picard){reference-type="eqref" reference="eq:Picard"} are stated and proved in Section [5.3](#sec:momentConversionRealizability){reference-type="ref" reference="sec:momentConversionRealizability"}. ### Collision Solver {#sec:collision_solver} The implicit steps in Eq. [\[eq:s_IMEX_1\]](#eq:s_IMEX_1){reference-type="eqref" reference="eq:s_IMEX_1"} require solving nonlinear systems to find the updated conserved moments. Similar to the implicit systems considered in [@laiu_etal_2021], these systems take the form $$\label{eq:ImplicitSystem} \boldsymbol{\mathcal{U}}= \boldsymbol{\mathcal{U}}^{{(\ast)}} + \Delta t_{\boldsymbol{\mathcal{C}}} \, \boldsymbol{\mathcal{C}}(\boldsymbol{\mathcal{U}})\:,$$ where $\boldsymbol{\mathcal{U}}^{{(\ast)}}$ denotes the known conserved moments from the explicit steps, $\boldsymbol{\mathcal{U}}$ denotes the unknown updated conserved moments driven by the implicit collision term $\boldsymbol{\mathcal{C}}(\boldsymbol{\mathcal{U}})$ defined in Eq. [\[eq:sources\]](#eq:sources){reference-type="eqref" reference="eq:sources"}, and $\Delta t_{\boldsymbol{\mathcal{C}}}$ denotes the effective time step size for the implicit system. Since the sources are expressed in terms of primitive moments, we solve Eq. [\[eq:ImplicitSystem\]](#eq:ImplicitSystem){reference-type="eqref" reference="eq:ImplicitSystem"} as a nonlinear fixed-point problem on the unknown primitive moments and use the primitive moment solution to compute the collision term $\boldsymbol{\mathcal{C}}(\boldsymbol{\mathcal{U}})$, which is then used to update the conserved moments $\boldsymbol{\mathcal{U}}$ in Eq. [\[eq:ImplicitSystem\]](#eq:ImplicitSystem){reference-type="eqref" reference="eq:ImplicitSystem"}. As in the moment conversion case discussed in Section [4.3.1](#sec:moment_conversion){reference-type="ref" reference="sec:moment_conversion"}, we apply the idea from Richardson iteration to Eq. [\[eq:ImplicitSystem\]](#eq:ImplicitSystem){reference-type="eqref" reference="eq:ImplicitSystem"} and formulate it as a fixed-point problem in terms of the primitive moments; i.e., $$\label{eq:collision_fixed_point0} \boldsymbol{\mathcal{M}}= \left(\begin{array}{c} \mathcal{D}\\ \mathcal{I}_j \end{array}\right) = \left(\begin{array}{c} \mathcal{D}\\ \mathcal{I}_j \end{array}\right) - \lambda \,\left(\begin{array}{c} \mathcal{D}+ v^{i}\mathcal{I}_{i} - \mathcal{N}^{{(\ast)}} - \Delta t_{\boldsymbol{\mathcal{C}}} \chi\,\big(\,\mathcal{D}_{0}-\mathcal{D}\,\big) \\ \mathcal{I}_j + v^{i}\mathsf{k}_{ij} \mathcal{D}- \mathcal{G}_j^{{(\ast)}} + \Delta t_{\boldsymbol{\mathcal{C}}} \kappa\,\mathcal{I}_{j} \end{array}\right) =: \tilde{\boldsymbol{\mathcal{Q}}}(\boldsymbol{\mathcal{M}}),$$ where $\mathcal{N}^{{(\ast)}}$ and $\boldsymbol{\mathcal{G}}^{{(\ast)}}$ denote the number density and number flux components of the given conserved moment $\boldsymbol{\mathcal{U}}^{{(\ast)}}$, respectively, and the constant $\lambda\in(0,1]$. 
Although this formulation is consistent with the one considered in Section [4.3.1](#sec:moment_conversion){reference-type="ref" reference="sec:moment_conversion"} when there are no collisions ($\chi = \kappa = 0$), it cannot guarantee that the realizability of moments is preserved when collisions are taken into account. To address this issue, we follow the approach taken in [@laiu_etal_2021] and reformulate the fixed-point problem as $$\label{eq:collision_fixed_point1} \boldsymbol{\mathcal{M}}= \left(\begin{array}{c} \mathcal{D}\\ \mathcal{I}_j \end{array}\right) = \Lambda\left(\begin{array}{c} (1-\lambda) \mathcal{D}+\lambda (- v^{i}\mathcal{I}_{i} + \mathcal{N}^{{(\ast)}} + \Delta t_{\boldsymbol{\mathcal{C}}} \chi\,\mathcal{D}_{0})\\ (1-\lambda) \mathcal{I}_j + \lambda (- v^{i}\mathsf{k}_{ij} \mathcal{D}+\mathcal{G}_j^{{(\ast)}}) \end{array}\right) =: \boldsymbol{\mathcal{Q}}(\boldsymbol{\mathcal{M}})\:,$$ where $\Lambda:=\textup{diag}(\mu_{\chi},\mu_{\kappa})$ with $\mu_{\chi} = (1+\lambda \, \Delta t_{\boldsymbol{\mathcal{C}}} \, \chi)^{-1}$ and $\mu_{\kappa} = (1+ \lambda \, \Delta t_{\boldsymbol{\mathcal{C}}} \, \kappa)^{-1}$. Applying Picard iteration to this fixed-point problem then leads to the iterative scheme $$\label{eq:collision_Picard} \boldsymbol{\mathcal{M}}^{[k+1]} = \boldsymbol{\mathcal{Q}}(\boldsymbol{\mathcal{M}}^{[k]})\:.$$ In Section [5.4](#sec:collision){reference-type="ref" reference="sec:collision"}, we prove the realizability-preserving and convergence properties of this iterative solver with $\lambda = (1+v)^{-1}$. # Realizability-Preserving Property {#sec:realizability_preservation} In this section, we show that, by imposing a proper time-step restriction and a realizability-enforcing limiter, the DG scheme with IMEX time integration given in Section [4](#sec:discretization){reference-type="ref" reference="sec:discretization"} preserves the realizability of both conserved and primitive moments. To this end, we focus on the analysis of a forward-backward Euler method for simplicity. The theoretical results can be extended to more general IMEX methods that are strong stability-preserving (SSP) with time-step sizes that depend only on the explicit part, such as the IMEX scheme implemented in the numerical tests reported in Section [8](#sec:numericalResults){reference-type="ref" reference="sec:numericalResults"}. Specifically, we analyze the realizability-preserving property of the following numerical scheme. $$\begin{aligned} \big(\,\widehat{\boldsymbol{\mathcal{U}}}_{h}^{n+\sfrac{1}{2}},\varphi_{h}\,\big)_{\boldsymbol{K}} &=\big(\,{\boldsymbol{\mathcal{U}}}_{h}^{n},\varphi_{h}\,\big)_{\boldsymbol{K}} + \Delta t\,\boldsymbol{\mathcal{B}}_{h}\big(\,{\boldsymbol{\mathcal{U}}}_{h}^{n},\boldsymbol{v}_{h},\varphi_{h}\,\big)_{\boldsymbol{K}}, \label{eq:imexForwardEuler}\\ {\boldsymbol{\mathcal{U}}}_{h}^{n+\sfrac{1}{2}} &=\, \texttt{RealizabilityLimiter}\,(\,\widehat{\boldsymbol{\mathcal{U}}}_{h}^{n+\sfrac{1}{2}}\,), \label{eq:realizabilityLimiter}\\ \big(\,\widehat{\boldsymbol{\mathcal{U}}}_{h}^{n+1},\varphi_{h}\,\big)_{\boldsymbol{K}} &=\big(\,{\boldsymbol{\mathcal{U}}}_{h}^{n+\sfrac{1}{2}},\varphi_{h}\,\big)_{\boldsymbol{K}} + \Delta t\,\big(\,\boldsymbol{\mathcal{C}}(\widehat{\boldsymbol{\mathcal{U}}}_{h}^{n+1}),\varphi_{h}\,\big)_{\boldsymbol{K}}, \label{eq:imexBackwardEuler}\\ {\boldsymbol{\mathcal{U}}}_{h}^{n+1} &=\, \texttt{RealizabilityLimiter}\,(\,\widehat{\boldsymbol{\mathcal{U}}}_{h}^{n+1}\,). 
\label{eq:realizabilityLimiter2}\end{aligned}$$ Here, $\texttt{RealizabilityLimiter}()$ denotes the realizability-enforcing limiter proposed in [@chu_etal_2019], the details of which are given in Section [5.2](#sec:realizabilityLimiter){reference-type="ref" reference="sec:realizabilityLimiter"} for completeness. Loosely speaking, the realizability-preserving property of the scheme [\[eq:imexForwardEuler\]](#eq:imexForwardEuler){reference-type="eqref" reference="eq:imexForwardEuler"}--[\[eq:realizabilityLimiter2\]](#eq:realizabilityLimiter2){reference-type="eqref" reference="eq:realizabilityLimiter2"} requires that, if the current moments $\boldsymbol{\mathcal{U}}_{h}^{n}$ are realizable, then the updated moments $\boldsymbol{\mathcal{U}}_{h}^{n+1}$ remain realizable. In the following paragraphs, we summarize the realizability-preserving properties proved in this section, where more detailed realizability results and conditions are described using the sets of phase-space points defined in Section [4.1](#sec:dgMethod){reference-type="ref" reference="sec:dgMethod"}. A key assumption in the realizability analysis is the exact closure assumption. **Assumption 1** (Exact closures). *The moment closures for closing the higher order moments $\mathcal{K}^{ij}$ and $\mathcal{Q}^{ijk}$ in Eq. [\[eq:imexForwardEuler\]](#eq:imexForwardEuler){reference-type="eqref" reference="eq:imexForwardEuler"} are exact, i.e., given lower order primitive moments $(\mathcal{D},\,\mathcal{I}^{i})$, the moments $(\mathcal{K}^{ij},\,\mathcal{Q}^{ijk})$ are computed such that $(\mathcal{D},\,\mathcal{I}^{i},\,\mathcal{K}^{ij},\,\mathcal{Q}^{ijk})$ satisfy Eq. [\[eq:angularMoments\]](#eq:angularMoments){reference-type="eqref" reference="eq:angularMoments"} for some nonnegative distribution $f$.* We note that Assumption [Assumption 1](#assum:exact_closure){reference-type="ref" reference="assum:exact_closure"} holds when the exact Minerbo closure is used, i.e., when the Eddington and heat-flux factors are given in Eq. [\[eq:psiZetaMinerbo\]](#eq:psiZetaMinerbo){reference-type="eqref" reference="eq:psiZetaMinerbo"} (as opposed to the approximation given in Eqs. [\[eq:psiApproximate\]](#eq:psiApproximate){reference-type="eqref" reference="eq:psiApproximate"}--[\[eq:zetaApproximate\]](#eq:zetaApproximate){reference-type="eqref" reference="eq:zetaApproximate"}). Evaluating the (exact or approximate) Eddington and heat-flux factors requires the flux factor $h\,(=\mathcal{I}/\mathcal{D})$ of the primitive moments $\boldsymbol{\mathcal{M}}=\big(\,\mathcal{D},\,\boldsymbol{\mathcal{I}}\,\big)^{\intercal}$. Since the numerical scheme Eqs. [\[eq:imexForwardEuler\]](#eq:imexForwardEuler){reference-type="eqref" reference="eq:imexForwardEuler"}--[\[eq:realizabilityLimiter2\]](#eq:realizabilityLimiter2){reference-type="eqref" reference="eq:realizabilityLimiter2"} evolves the conserved moments $\boldsymbol{\mathcal{U}}$, evaluating moment closures requires the conversion between conserved and primitive moments. In other words, given $\boldsymbol{\mathcal{U}}$ (or $\boldsymbol{\mathcal{M}}$), the solver needs to compute the associated $\boldsymbol{\mathcal{M}}$ (or $\boldsymbol{\mathcal{U}}$) that satisfies Eq. [\[eq:ConservedToPrimitive\]](#eq:ConservedToPrimitive){reference-type="eqref" reference="eq:ConservedToPrimitive"}. Under Assumption [Assumption 1](#assum:exact_closure){reference-type="ref" reference="assum:exact_closure"}, we state the main theoretical result of the realizability-preserving analysis for the scheme Eqs. 
[\[eq:imexForwardEuler\]](#eq:imexForwardEuler){reference-type="eqref" reference="eq:imexForwardEuler"}--[\[eq:realizabilityLimiter2\]](#eq:realizabilityLimiter2){reference-type="eqref" reference="eq:realizabilityLimiter2"} in Theorem [Theorem 1](#thm:realizability){reference-type="ref" reference="thm:realizability"}, where this scheme is shown to preserve realizability of moments $\boldsymbol{\mathcal{U}}_{h}$ on the point set $\widetilde{S}^{\boldsymbol{K}}_{\otimes}$ defined in Eq. [\[eq:AllSetUnion\]](#eq:AllSetUnion){reference-type="eqref" reference="eq:AllSetUnion"}, for all elements $\boldsymbol{K}\in\mathcal{T}$. **Theorem 1** (Realizability preservation). *Suppose (i) Assumption [Assumption 1](#assum:exact_closure){reference-type="ref" reference="assum:exact_closure"} holds, (ii) $v_h := |\boldsymbol{v}_h|<1$ for all $\boldsymbol{K}\in\mathcal{T}$, and (iii) the time step $\Delta t$ in Eq. [\[eq:imexForwardEuler\]](#eq:imexForwardEuler){reference-type="eqref" reference="eq:imexForwardEuler"} satisfies the hyperbolic-type time-step restriction $$\label{eq:timestepRestriction} \begin{alignedat}{2} \Delta t\leq \min\big\{&\Delta t_{\boldsymbol{x}}^{\min}\, , \,\Delta t_{\varepsilon}^{\min}\big\}\:,\text{\,with\,}\\ \Delta t_{\boldsymbol{x}}^{\min}:= \min_{\boldsymbol{K}\in\mathcal{T}}\, \min_i (1-v_h)\, {C^i} |K_{\boldsymbol{x}}^i|&{\quad\text{and}\quad} \Delta t_{\varepsilon}^{\min}:=\min_{\boldsymbol{K}\in\mathcal{T}}\,(1-v_h)\,{C^\varepsilon} |K_{\varepsilon}|/{\varepsilon_{\mbox{\tiny\sc H}}} \end{alignedat}$$ where $C^i$ and $C^\varepsilon$, which are independent of the size of elements in the discretization, are given in Eqs. [\[eq:CFLSpatial\]](#eq:CFLSpatial){reference-type="eqref" reference="eq:CFLSpatial"} and [\[eq:CFLEnergy\]](#eq:CFLEnergy){reference-type="eqref" reference="eq:CFLEnergy"}, respectively. Then the scheme [\[eq:imexForwardEuler\]](#eq:imexForwardEuler){reference-type="eqref" reference="eq:imexForwardEuler"}--[\[eq:realizabilityLimiter2\]](#eq:realizabilityLimiter2){reference-type="eqref" reference="eq:realizabilityLimiter2"} is realizability-preserving, i.e., $\boldsymbol{\mathcal{U}}_{h}^{n+1}\in\mathcal{R}$ on $\widetilde{S}^{\boldsymbol{K}}_{\otimes}$, $\forall \boldsymbol{K}\in\mathcal{T}$, provided that $\boldsymbol{\mathcal{U}}_{h}^{n}\in\mathcal{R}$ on $\widetilde{S}^{\boldsymbol{K}}_{\otimes}$, $\forall \boldsymbol{K}\in\mathcal{T}$.* Theorem [Theorem 1](#thm:realizability){reference-type="ref" reference="thm:realizability"} is a direct consequence of the following Propositions [Proposition 3](#prop:realizabilityExplicit){reference-type="ref" reference="prop:realizabilityExplicit"}, [Proposition 4](#prop:realizabilityLimiter){reference-type="ref" reference="prop:realizabilityLimiter"}, [Proposition 5](#prop:realizabilityMomentConversion){reference-type="ref" reference="prop:realizabilityMomentConversion"}, and [Proposition 6](#prop:realizabilityImplicit){reference-type="ref" reference="prop:realizabilityImplicit"}, which provide the realizability-preserving properties of the explicit update (Eq. [\[eq:imexForwardEuler\]](#eq:imexForwardEuler){reference-type="eqref" reference="eq:imexForwardEuler"}), the realizability-enforcing limiter (Eq. [\[eq:realizabilityLimiter\]](#eq:realizabilityLimiter){reference-type="eqref" reference="eq:realizabilityLimiter"}), the moment conversion (Eq. [\[eq:ConservedToPrimitive\]](#eq:ConservedToPrimitive){reference-type="eqref" reference="eq:ConservedToPrimitive"}), and the implicit update (Eq. 
[\[eq:imexBackwardEuler\]](#eq:imexBackwardEuler){reference-type="eqref" reference="eq:imexBackwardEuler"}), respectively. In these propositions, the notion of cell-averaged moments will be useful. Given $\boldsymbol{\mathcal{U}}_{h}^{n}$, the cell-averaged moments $\boldsymbol{\mathcal{U}}_{\boldsymbol{K}}:=(\mathcal{N}_{\boldsymbol{K}},\boldsymbol{\mathcal{G}}_{\boldsymbol{K}})$ are defined as $$\boldsymbol{\mathcal{U}}_{\boldsymbol{K}} = \big(\,\boldsymbol{\mathcal{U}}_{h}\,\big)_{\boldsymbol{K}}/|\boldsymbol{K}|. \label{eq:cellAverage}$$ **Proposition 3** (Explicit advection update). *Suppose (i) Assumption [Assumption 1](#assum:exact_closure){reference-type="ref" reference="assum:exact_closure"} holds, (ii) $v_h <1$ for all $\boldsymbol{K}\in\mathcal{T}$, and (iii) $\Delta t$ in Eq. [\[eq:imexForwardEuler\]](#eq:imexForwardEuler){reference-type="eqref" reference="eq:imexForwardEuler"} satisfies the restriction [\[eq:timestepRestriction\]](#eq:timestepRestriction){reference-type="eqref" reference="eq:timestepRestriction"}. Let $\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2}}:=(\widehat{\mathcal{N}}_{\boldsymbol{K}}^{n+\sfrac{1}{2}},\widehat{\boldsymbol{\mathcal{G}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2}})$ denote the element average of the moment $\widehat{\boldsymbol{\mathcal{U}}}_{h}^{n+\sfrac{1}{2}}$ (as defined in Eq. [\[eq:cellAverage\]](#eq:cellAverage){reference-type="eqref" reference="eq:cellAverage"}) updated by Eq. [\[eq:imexForwardEuler\]](#eq:imexForwardEuler){reference-type="eqref" reference="eq:imexForwardEuler"} from $\boldsymbol{\mathcal{U}}_{h}^{n}$. Then, it is guaranteed that, $\forall \boldsymbol{K}\in\mathcal{T}$, $\widehat{\mathcal{N}}_{\boldsymbol{K}}^{n+\sfrac{1}{2}}>0$, provided $\boldsymbol{\mathcal{U}}_{h}^{n}\in\mathcal{R}$ on $S^{\boldsymbol{K}}_{\otimes}$, $\forall \boldsymbol{K}\in\mathcal{T}$. Further, when a reduced one-dimensional planar geometry [^4] is considered, it is guaranteed that, $\forall \boldsymbol{K}\in\mathcal{T}$, $\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2}}\in\mathcal{R}$ when an additional time-step restriction [\[eq:sourceTimeStepRestriction\]](#eq:sourceTimeStepRestriction){reference-type="eqref" reference="eq:sourceTimeStepRestriction"} is satisfied.* **Proposition 4** (Realizability-enforcing limiter). *Suppose $\widehat{\mathcal{N}}_{\boldsymbol{K}}>0$ on element $\boldsymbol{K}$. Then applying the realizability-enforcing limiter given in Algorithm [\[algo:realizabilityLimiter\]](#algo:realizabilityLimiter){reference-type="ref" reference="algo:realizabilityLimiter"} (see Section [5.2](#sec:realizabilityLimiter){reference-type="ref" reference="sec:realizabilityLimiter"}) to the moments $\widehat{\boldsymbol{\mathcal{U}}}_{h}$ on $\boldsymbol{K}$ leads to realizable moments $\boldsymbol{\mathcal{U}}_{h}\in\mathcal{R}$ on $S^{\boldsymbol{K}}_{\otimes} \cup \widehat{S}^{\boldsymbol{K}}_{\otimes}$ in $\boldsymbol{K}$.* **Proposition 5** (Moment conversion). *Suppose that Assumption [Assumption 1](#assum:exact_closure){reference-type="ref" reference="assum:exact_closure"} holds and that $v_h<1$ on all $\boldsymbol{K}\in\mathcal{T}$. Then the conversion between conserved and primitive moments following the relation in Eq. [\[eq:ConservedToPrimitive\]](#eq:ConservedToPrimitive){reference-type="eqref" reference="eq:ConservedToPrimitive"} preserves realizability, i.e., for a pair of conserved and primitive moments $(\boldsymbol{\mathcal{U}},\boldsymbol{\mathcal{M}})$ satisfying Eq. 
[\[eq:ConservedToPrimitive\]](#eq:ConservedToPrimitive){reference-type="eqref" reference="eq:ConservedToPrimitive"}, $\boldsymbol{\mathcal{U}}\in\mathcal{R}$ if and only if $\boldsymbol{\mathcal{M}}\in\mathcal{R}$. Further, given $\boldsymbol{\mathcal{U}}\in\mathcal{R}$, the iterative solver [\[eq:Picard\]](#eq:Picard){reference-type="eqref" reference="eq:Picard"} in Section [4.3.1](#sec:moment_conversion){reference-type="ref" reference="sec:moment_conversion"} converges to the unique $\boldsymbol{\mathcal{M}}\in\mathcal{R}$ that satisfies Eq. [\[eq:ConservedToPrimitive\]](#eq:ConservedToPrimitive){reference-type="eqref" reference="eq:ConservedToPrimitive"}.* **Proposition 6** (Implicit collision solve). *Suppose Assumption [Assumption 1](#assum:exact_closure){reference-type="ref" reference="assum:exact_closure"} holds and $v_h<1$ on all $\boldsymbol{K}\in\mathcal{T}$. Let $\boldsymbol{\mathcal{U}}_{h}^{n+\sfrac{1}{2}}\in\mathcal{R}$ on $S^{\boldsymbol{K}}_{\otimes}$ for all $\boldsymbol{K} \in \mathcal{T}$. Then solving Eq. [\[eq:imexBackwardEuler\]](#eq:imexBackwardEuler){reference-type="eqref" reference="eq:imexBackwardEuler"} with the iterative solvers considered in Section [4.3](#sec:iterative_solvers){reference-type="ref" reference="sec:iterative_solvers"} gives $\widehat{\boldsymbol{\mathcal{U}}}_{h}^{n+1}\in\mathcal{R}$ on $S^{\boldsymbol{K}}_{\otimes}$ for all $\boldsymbol{K} \in \mathcal{T}$.* These propositions form a basis for the proof of Theorem [Theorem 1](#thm:realizability){reference-type="ref" reference="thm:realizability"}. Specifically, Proposition [Proposition 3](#prop:realizabilityExplicit){reference-type="ref" reference="prop:realizabilityExplicit"} guarantees that the updated moments $\widehat{\boldsymbol{\mathcal{U}}}_{h}^{n+\sfrac{1}{2}}$ from Eq. [\[eq:imexForwardEuler\]](#eq:imexForwardEuler){reference-type="eqref" reference="eq:imexForwardEuler"} have a positive cell-averaged density $\widehat{\mathcal{N}}_{\boldsymbol{K}}^{n+\sfrac{1}{2}}$ for each $\boldsymbol{K}\in\mathcal{T}$. It follows from Proposition [Proposition 4](#prop:realizabilityLimiter){reference-type="ref" reference="prop:realizabilityLimiter"} that the limited moments $\boldsymbol{\mathcal{U}}_{h}^{n+\sfrac{1}{2}}$ are realizable on ${S}^{\boldsymbol{K}}_{\otimes}$ for all $\boldsymbol{K}\in\mathcal{T}$. Solving Eq. [\[eq:imexBackwardEuler\]](#eq:imexBackwardEuler){reference-type="eqref" reference="eq:imexBackwardEuler"} on each nodal point in ${S}^{\boldsymbol{K}}_{\otimes}$ for all $\boldsymbol{K}\in\mathcal{T}$ gives the updated moment $\widehat{\boldsymbol{\mathcal{U}}}_{h}^{n+1}$, which is guaranteed to be realizable on ${S}^{\boldsymbol{K}}_{\otimes}$, $\forall \boldsymbol{K}\in\mathcal{T}$, by Proposition [Proposition 6](#prop:realizabilityImplicit){reference-type="ref" reference="prop:realizabilityImplicit"}. Applying the realizability-enforcing limiter again to $\widehat{\boldsymbol{\mathcal{U}}}_{h}^{n+1}$ on every $\boldsymbol{K}\in\mathcal{T}$ leads to $\boldsymbol{\mathcal{U}}_{h}^{n+1}$, which is realizable on $\widetilde{S}^{\boldsymbol{K}}_{\otimes}$, $\forall \boldsymbol{K}\in\mathcal{T}$, again from Proposition [Proposition 4](#prop:realizabilityLimiter){reference-type="ref" reference="prop:realizabilityLimiter"}. 
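The chain of propositions above mirrors the order of operations in Eqs. [\[eq:imexForwardEuler\]](#eq:imexForwardEuler){reference-type="eqref" reference="eq:imexForwardEuler"}--[\[eq:realizabilityLimiter2\]](#eq:realizabilityLimiter2){reference-type="eqref" reference="eq:realizabilityLimiter2"}, which can be summarized by the following schematic sketch of a single step. The callables `advect`, `collision_solve`, and `limiter` are assumed placeholders for the phase-space advection operator, the implicit solve of Section [4.3.2](#sec:collision_solver){reference-type="ref" reference="sec:collision_solver"}, and the limiter of Section [5.2](#sec:realizabilityLimiter){reference-type="ref" reference="sec:realizabilityLimiter"}.

```python
def fb_euler_step(U_n, dt, advect, collision_solve, limiter):
    """One step of the analyzed forward-backward Euler scheme with limiters.

    Placeholder callables: advect(U) evaluates the explicit phase-space advection
    operator, collision_solve(rhs, dt) returns the U satisfying U = rhs + dt*C(U),
    and limiter(U) applies the realizability-enforcing limiter."""
    U_half = limiter(U_n + dt * advect(U_n))   # explicit update, then limiter
    U_hat = collision_solve(U_half, dt)        # implicit collision update
    return limiter(U_hat)                      # final limiter application
```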
In Sections [5.1](#sec:streaming){reference-type="ref" reference="sec:streaming"}, [5.2](#sec:realizabilityLimiter){reference-type="ref" reference="sec:realizabilityLimiter"}, [5.3](#sec:momentConversionRealizability){reference-type="ref" reference="sec:momentConversionRealizability"}, and [5.4](#sec:collision){reference-type="ref" reference="sec:collision"}, we prove Propositions [Proposition 3](#prop:realizabilityExplicit){reference-type="ref" reference="prop:realizabilityExplicit"}, [Proposition 4](#prop:realizabilityLimiter){reference-type="ref" reference="prop:realizabilityLimiter"}, [Proposition 5](#prop:realizabilityMomentConversion){reference-type="ref" reference="prop:realizabilityMomentConversion"}, and [Proposition 6](#prop:realizabilityImplicit){reference-type="ref" reference="prop:realizabilityImplicit"}, respectively. These results together lead to the main realizability-preserving property of the numerical scheme Eqs. [\[eq:imexForwardEuler\]](#eq:imexForwardEuler){reference-type="eqref" reference="eq:imexForwardEuler"}--[\[eq:realizabilityLimiter2\]](#eq:realizabilityLimiter2){reference-type="eqref" reference="eq:realizabilityLimiter2"} given in Theorem [Theorem 1](#thm:realizability){reference-type="ref" reference="thm:realizability"}, under the exact closure assumption, Assumption [Assumption 1](#assum:exact_closure){reference-type="ref" reference="assum:exact_closure"}. In Section [5.5](#sec:approxClosure){reference-type="ref" reference="sec:approxClosure"}, we extend the realizability-preserving and convergence results in Propositions [Proposition 5](#prop:realizabilityMomentConversion){reference-type="ref" reference="prop:realizabilityMomentConversion"} and [Proposition 6](#prop:realizabilityImplicit){reference-type="ref" reference="prop:realizabilityImplicit"} to the case of evaluating the closure with the approximate Eddington factor $\psi_{\mathsf{a}}$ in Eq. [\[eq:psiApproximate\]](#eq:psiApproximate){reference-type="eqref" reference="eq:psiApproximate"}, which is often used in practice to reduce computational cost. ## Explicit Advection Update {#sec:streaming} In this section, we prove Proposition [Proposition 3](#prop:realizabilityExplicit){reference-type="ref" reference="prop:realizabilityExplicit"} by deriving the time-step restriction [\[eq:timestepRestriction\]](#eq:timestepRestriction){reference-type="eqref" reference="eq:timestepRestriction"} under which the updated cell-averaged number density satisfies $\widehat{\mathcal{N}}_{\boldsymbol{K}}^{n+\sfrac{1}{2}}>0$. In a one-dimensional planar geometry, we show that $\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2}}\in\mathcal{R}$ under an additional time-step restriction given in Eq. [\[eq:sourceTimeStepRestriction\]](#eq:sourceTimeStepRestriction){reference-type="eqref" reference="eq:sourceTimeStepRestriction"}. Since constant functions are in the approximation space $\mathbb{V}_{h}^{k}(\boldsymbol{K})$, we start by deriving the update formula for cell-averaged moments by setting $\varphi_{h}=1$ in Eq. 
[\[eq:imexForwardEuler\]](#eq:imexForwardEuler){reference-type="eqref" reference="eq:imexForwardEuler"}, which leads to $$\begin{aligned} \widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2}} &=\boldsymbol{\mathcal{U}}_{\boldsymbol{K}}^{n}+\frac{\Delta t}{|\boldsymbol{K}|}\,\boldsymbol{\mathcal{B}}_{h}\big(\,\boldsymbol{\mathcal{U}}_{h}^{n},\boldsymbol{v}_{h}\,\big)_{\boldsymbol{K}} \nonumber \\ &=\gamma^{\boldsymbol{x}}\,\Big\{\,\boldsymbol{\mathcal{U}}_{\boldsymbol{K}}^{n}+\frac{\Delta t}{\gamma^{\boldsymbol{x}}\,|\boldsymbol{K}|}\,\boldsymbol{\mathcal{B}}_{h}^{\boldsymbol{x}}\big(\,\boldsymbol{\mathcal{U}}_{h}^{n},\boldsymbol{v}_{h}\,\big)_{\boldsymbol{K}}\,\Big\} +\gamma^{\varepsilon}\,\Big\{\,\boldsymbol{\mathcal{U}}_{\boldsymbol{K}}^{n}+\frac{\Delta t}{\gamma^{\varepsilon}\,|\boldsymbol{K}|}\,\boldsymbol{\mathcal{B}}_{h}^{\varepsilon}\big(\,\boldsymbol{\mathcal{U}}_{h}^{n},\boldsymbol{v}_{h}\,\big)_{\boldsymbol{K}}\,\Big\} \nonumber \\ &\hspace{12pt} +\gamma^{\mathcal{S}}\,\Big\{\,\boldsymbol{\mathcal{U}}_{\boldsymbol{K}}^{n}+\frac{\Delta t}{\gamma^{\mathcal{S}}\,|\boldsymbol{K}|}\,\big(\,\boldsymbol{\mathcal{S}}(\boldsymbol{\mathcal{U}}_{h}^{n},\boldsymbol{v}_{h})\,\big)_{\boldsymbol{K}}\,\Big\}, \label{eq:fullCellAverageUpdateSplit} \\ &=:\gamma^{\boldsymbol{x}}\,\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\,\boldsymbol{x}} + \gamma^{\varepsilon}\,\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\,\varepsilon} + \gamma^{\mathcal{S}}\,\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\,\mathcal{S}}\:,\nonumber\end{aligned}$$ where we have defined $\gamma^{\boldsymbol{x}},\gamma^{\varepsilon},\gamma^{\mathcal{S}}>0$, satisfying $\gamma^{\boldsymbol{x}}+\gamma^{\varepsilon}+\gamma^{\mathcal{S}}=1$. In the following subsections, we show that, when $\boldsymbol{\mathcal{U}}_h^n\in\mathcal{R}$ on $\widehat{S}^{\boldsymbol{K}}_{\otimes}$ for all $\boldsymbol{K}\in\mathcal{T}$, $\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\boldsymbol{x}}$ and $\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\varepsilon}$ are realizable under time-step restrictions given in Eq. [\[eq:timestepRestriction\]](#eq:timestepRestriction){reference-type="eqref" reference="eq:timestepRestriction"} (Sections [5.1.1](#sec:realizabilitySpatial){reference-type="ref" reference="sec:realizabilitySpatial"} and [5.1.2](#sec:realizabilityEnergy){reference-type="ref" reference="sec:realizabilityEnergy"}) and that $\widehat{\mathcal{N}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\mathcal{S}}>0$ (Section [5.1.3](#sec:realizabilitySource){reference-type="ref" reference="sec:realizabilitySource"}) for all $\boldsymbol{K}\in\mathcal{T}$. Further, we show in Section [5.1.3](#sec:realizabilitySource){reference-type="ref" reference="sec:realizabilitySource"} that $\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\mathcal{S}}$ is realizable in one-dimensional, planar geometry under an additional time-step restriction given in Eq. [\[eq:sourceTimeStepRestriction\]](#eq:sourceTimeStepRestriction){reference-type="eqref" reference="eq:sourceTimeStepRestriction"}. 
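For reference, a minimal sketch of evaluating the time-step restriction in Eq. [\[eq:timestepRestriction\]](#eq:timestepRestriction){reference-type="eqref" reference="eq:timestepRestriction"} is given below. The per-element data structure and the example values are assumptions made for illustration, and `C_x` and `C_eps` correspond to the constants $C^{i}$ and $C^{\varepsilon}$.

```python
import numpy as np

def max_time_step(elements, C_x, C_eps):
    """Evaluate the hyperbolic-type time-step restriction over all elements.

    `elements` is an assumed list of dicts with keys 'v_max' (max |v_h| on the
    element), 'dx' (array of spatial widths |K_x^i|), 'deps' (|K_eps|), and
    'eps_hi' (eps_H); C_x collects the constants C^i and C_eps is C^eps."""
    dt_x = min((1.0 - e['v_max']) * np.min(np.asarray(C_x) * np.asarray(e['dx']))
               for e in elements)
    dt_eps = min((1.0 - e['v_max']) * C_eps * e['deps'] / e['eps_hi']
                 for e in elements)
    return min(dt_x, dt_eps)

# Hypothetical two-element mesh with d_x = 1.
elements = [dict(v_max=0.1, dx=[0.02], deps=0.5, eps_hi=2.0),
            dict(v_max=0.2, dx=[0.01], deps=0.5, eps_hi=4.0)]
print(max_time_step(elements, C_x=[0.1], C_eps=0.1))
```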
Since the realizable set $\mathcal{R}$ is convex and $\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2}}$ is written as a convex combination of $\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\boldsymbol{x}}$, $\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\varepsilon}$, and $\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\mathcal{S}}$ in Eq. [\[eq:fullCellAverageUpdateSplit\]](#eq:fullCellAverageUpdateSplit){reference-type="eqref" reference="eq:fullCellAverageUpdateSplit"}, we thus conclude that, under the time-step restrictions in Eqs. [\[eq:timestepRestriction\]](#eq:timestepRestriction){reference-type="eqref" reference="eq:timestepRestriction"} and [\[eq:sourceTimeStepRestriction\]](#eq:sourceTimeStepRestriction){reference-type="eqref" reference="eq:sourceTimeStepRestriction"}, (i) $\widehat{\mathcal{N}}_{\boldsymbol{K}}^{n+\sfrac{1}{2}}>0$ and (ii) $\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2}}\in\mathcal{R}$ in a planar geometry. ### Position Space Fluxes {#sec:realizabilitySpatial} For transport in position space we follow the approach in [@chu_etal_2019] and write $$\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\boldsymbol{x}} =\boldsymbol{\mathcal{U}}_{\boldsymbol{K}}^{n}+\frac{\Delta t_{\boldsymbol{x}}}{|\boldsymbol{K}|}\,\boldsymbol{\mathcal{B}}_{h}^{\boldsymbol{x}}\big(\,\boldsymbol{\mathcal{U}}_{h}^{n},\boldsymbol{v}_{h}\,\big)_{\boldsymbol{K}} \quad(\Delta t_{\boldsymbol{x}}=\Delta t/\gamma^{\boldsymbol{x}}).$$ To find sufficient conditions such that $\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\boldsymbol{x}}\in\mathcal{R}$, we define (cf. [@chu_etal_2019]) $$\Gamma^{i}\big[\boldsymbol{\mathcal{U}}_{h}^{n}\big](\tilde{\boldsymbol{z}}^{i}) =\frac{1}{|K_{\boldsymbol{x}}^{i}|} \Big[\, \int_{K_{\boldsymbol{x}}^{i}}\boldsymbol{\mathcal{U}}_{h}^{n}\,dx^{i} - \frac{\Delta t_{\boldsymbol{x}}}{\beta^{i}} \Big(\, \widehat{\boldsymbol{\mathcal{F}}^{i}}\big(\boldsymbol{\mathcal{U}}_{h}^{n},\boldsymbol{v}_{h}\big)|_{x_{\mbox{\tiny\sc H}}^{i}} -\widehat{\boldsymbol{\mathcal{F}}^{i}}\big(\boldsymbol{\mathcal{U}}_{h}^{n},\boldsymbol{v}_{h}\big)|_{x_{\mbox{\tiny\sc L}}^{i}} \,\Big) \,\Big], \label{eq:GammaSpatial}$$ so that $$\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\boldsymbol{x}} =\sum_{i=1}^{d_{\boldsymbol{x}}}\frac{\beta^{i}}{|\tilde{\boldsymbol{K}}^{i}|} \int_{\tilde{\boldsymbol{K}}^{i}}\Gamma^{i}\big[\boldsymbol{\mathcal{U}}_{h}^{n}\big](\tilde{\boldsymbol{z}}^{i})\,\tau\,d\tilde{\boldsymbol{z}}^{i}, \label{eq:cellAverageInTermsOfGammaSpatial}$$ where we have defined the set of positive constants $\{\beta^{i}\}_{i=1}^{d_{\boldsymbol{x}}}$ satisfying $\sum_{i=1}^{d_{\boldsymbol{x}}}\beta^{i}=1$. If a quadrature rule $\tilde{\boldsymbol{\mathcal{Q}}}^{i}:C^{0}(\tilde{\boldsymbol{K}}^{i})\to\mathbb{R}$ with positive weights, e.g., the tensor product of one-dimensional LG quadrature, is used to approximate the integral in Eq. 
[\[eq:cellAverageInTermsOfGammaSpatial\]](#eq:cellAverageInTermsOfGammaSpatial){reference-type="eqref" reference="eq:cellAverageInTermsOfGammaSpatial"}, it is sufficient to show that, under the assumptions in Proposition [Proposition 3](#prop:realizabilityExplicit){reference-type="ref" reference="prop:realizabilityExplicit"}, $\Gamma^{i}\big[\boldsymbol{\mathcal{U}}_{h}^{n}\big](\tilde{\boldsymbol{z}}^{i})\in\mathcal{R}$ holds for $\tilde{\boldsymbol{z}}^{i}\in\tilde{\boldsymbol{\mathcal{S}}}^{i}\subset\tilde{\boldsymbol{K}}^{i}$, where $\tilde{\boldsymbol{\mathcal{S}}}^{i}$ denotes the set of quadrature points given by $\tilde{\boldsymbol{\mathcal{Q}}}^{i}$. We prove this sufficient condition in the remainder of this subsection. Let $\hat{Q}^{i}:C^{0}(K_{\boldsymbol{x}}^{i})\to\mathbb{R}$ denote the $\hat{k}$-point LGL quadrature rule on $K_{\boldsymbol{x}}^{i}$ with points $\widehat{S}^{\boldsymbol{K}}_{i}=\{x_{\mbox{\tiny\sc L}}^{i}=\hat{x}_{1}^{i},\ldots,\hat{x}_{\hat{k}}^{i}=x_{\mbox{\tiny\sc H}}^{i}\}$ as defined in Section [4.1](#sec:dgMethod){reference-type="ref" reference="sec:dgMethod"} and strictly positive weights $\{\hat{w}_{q}\}_{q=1}^{\hat{k}}$, normalized such that $\sum_{q=1}^{\hat{k}}\hat{w}_{q}=1$. Since $\hat{k}\geq \frac{k+5}{2}$, this quadrature integrates $\boldsymbol{\mathcal{U}}_{h}^{n}$ exactly, and thus we have $$\int_{\boldsymbol{K}_{\boldsymbol{x}}^{i}}\boldsymbol{\mathcal{U}}_{h}^{n}(x^{i})\,dx^{i} = \hat{Q}^{i}\big[\boldsymbol{\mathcal{U}}_{h}^{n}\big] = |K_{\boldsymbol{x}}^{i}|\,\sum_{q=1}^{\hat{k}}\hat{w}_{q}\,\boldsymbol{\mathcal{U}}_{h}^{n}(\hat{x}_{q}^{i}), \label{eq:glQuadratureSpace}$$ where, for notational convenience, we have suppressed explicit dependence on $\tilde{\boldsymbol{z}}^{i}$ in writing $\boldsymbol{\mathcal{U}}_{h}^{n}(\hat{x}_{q}^{i},\tilde{\boldsymbol{z}}^{i})=\boldsymbol{\mathcal{U}}_{h}^{n}(\hat{x}_{q}^{i})$. Similarly, $\boldsymbol{\mathcal{U}}_{h}^{n}(x_{\mbox{\tiny\sc L}}^{i,\pm},\tilde{\boldsymbol{z}}^{i})=\boldsymbol{\mathcal{U}}_{h}^{n}(x_{\mbox{\tiny\sc L}}^{i,\pm})$ and $\boldsymbol{\mathcal{U}}_{h}^{n}(x_{\mbox{\tiny\sc H}}^{i,\pm},\tilde{\boldsymbol{z}}^{i})=\boldsymbol{\mathcal{U}}_{h}^{n}(x_{\mbox{\tiny\sc H}}^{i,\pm})$. Then, using the quadrature rule in Eq. [\[eq:glQuadratureSpace\]](#eq:glQuadratureSpace){reference-type="eqref" reference="eq:glQuadratureSpace"} and the LF flux in Eq. [\[eq:globalLF\]](#eq:globalLF){reference-type="eqref" reference="eq:globalLF"}, we can write Eq. 
[\[eq:GammaSpatial\]](#eq:GammaSpatial){reference-type="eqref" reference="eq:GammaSpatial"} as a convex combination $$\begin{aligned} &\Gamma^{i}\big[\boldsymbol{\mathcal{U}}_{h}^{n}\big](\tilde{\boldsymbol{z}}^{i}) \nonumber \\ &=\sum_{q=2}^{\hat{k}-1}\hat{w}_{q}\,\boldsymbol{\mathcal{U}}_{h}^{n}(\hat{x}_{q}^{i}) + \hat{w}_{1}\,\Phi_{1}^{i}\big[\,\boldsymbol{\mathcal{U}}_{h}^{n}(x_{\mbox{\tiny\sc L}}^{i,-}),\,\boldsymbol{\mathcal{U}}_{h}^{n}(x_{\mbox{\tiny\sc L}}^{i,+}),\,\hat{\boldsymbol{v}}(x_{\mbox{\tiny\sc L}}^{i})\,\big] + \hat{w}_{\hat{k}}\,\Phi_{\hat{k}}^{i}\big[\,\boldsymbol{\mathcal{U}}_{h}^{n}(x_{\mbox{\tiny\sc H}}^{i,-}),\,\boldsymbol{\mathcal{U}}_{h}^{n}(x_{\mbox{\tiny\sc H}}^{i,+}),\,\hat{\boldsymbol{v}}(x_{\mbox{\tiny\sc H}}^{i})\,\big], \label{eq:GammaSpatialConvex}\end{aligned}$$ where $$\begin{aligned} \Phi_{1}^{i}\big[\,\boldsymbol{\mathcal{U}}_{a},\boldsymbol{\mathcal{U}}_{b},\hat{\boldsymbol{v}}\,\big] &=\boldsymbol{\mathcal{U}}_{b}+\lambda_{\boldsymbol{x}}^{i}\,\mathscr{F}_{\mbox{\tiny\sc LF}}^{i}\big(\boldsymbol{\mathcal{U}}_{a},\boldsymbol{\mathcal{U}}_{b},\hat{\boldsymbol{v}}\big), \label{eq:phi1} \\ \Phi_{\hat{k}}^{i}\big[\,\boldsymbol{\mathcal{U}}_{a},\boldsymbol{\mathcal{U}}_{b},\hat{\boldsymbol{v}}\,\big] &=\boldsymbol{\mathcal{U}}_{a}-\lambda_{\boldsymbol{x}}^{i}\,\mathscr{F}_{\mbox{\tiny\sc LF}}^{i}\big(\boldsymbol{\mathcal{U}}_{a},\boldsymbol{\mathcal{U}}_{b},\hat{\boldsymbol{v}}\big), \label{eq:phiN}\end{aligned}$$ and $\lambda_{\boldsymbol{x}}^{i}=\Delta t_{\boldsymbol{x}}/(\beta^{i}\,\hat{w}_{\hat{k}}\,|K_{\boldsymbol{x}}^{i}|)$. Since Eq. [\[eq:GammaSpatialConvex\]](#eq:GammaSpatialConvex){reference-type="eqref" reference="eq:GammaSpatialConvex"} is a convex combination, it is sufficient to show the realizability of each term independently to obtain $\Gamma^{i}\big[\boldsymbol{\mathcal{U}}_{h}^{n}\big](\tilde{\boldsymbol{z}}^{i})\in\mathcal{R}$. For the first term on the right-hand side of Eq. [\[eq:GammaSpatialConvex\]](#eq:GammaSpatialConvex){reference-type="eqref" reference="eq:GammaSpatialConvex"}, it is sufficient that $\boldsymbol{\mathcal{U}}_{h}^{n}(\hat{x}_{q}^{i})\in\mathcal{R}$, which holds under the assumption that $\boldsymbol{\mathcal{U}}_{h}^{n}\in\mathcal{R}$ on $S^{\boldsymbol{K}}_{\otimes}$ for all $\boldsymbol{K}\in\mathcal{T}$. It remains to find conditions for which $\Phi_{1}^{i},\Phi_{\hat{k}}^{i}\in\mathcal{R}$, which we summarize in the following lemmas. **Lemma 1**. *Define $$\Theta_{\pm}^{i}(\,\boldsymbol{\mathcal{U}},\hat{\boldsymbol{v}}\,) = \boldsymbol{\mathcal{U}}[\hat{\boldsymbol{v}}^{i}] \pm \boldsymbol{\mathcal{F}}^{i}(\,\boldsymbol{\mathcal{U}},\hat{\boldsymbol{v}}\,),$$ where $\boldsymbol{\mathcal{U}}[\hat{\boldsymbol{v}}^{i}]$ and $\boldsymbol{\mathcal{F}}^{i}(\,\boldsymbol{\mathcal{U}},\hat{\boldsymbol{v}}\,)$ are defined as in Eqs. [\[eq:conservedMoments\]](#eq:conservedMoments){reference-type="eqref" reference="eq:conservedMoments"} and [\[eq:phaseSpaceFluxes\]](#eq:phaseSpaceFluxes){reference-type="eqref" reference="eq:phaseSpaceFluxes"}, respectively, and $\hat{\boldsymbol{v}}^{i}=\big(\,\delta^{i1}\,\hat{v}^{1},\,\delta^{i2}\,\hat{v}^{2},\,\delta^{i3}\,\hat{v}^{3}\,\big)^{\intercal}$ as defined in Remark [Remark 2](#rem:LF_flux){reference-type="ref" reference="rem:LF_flux"}. Suppose that $\boldsymbol{\mathcal{U}}\in\mathcal{R}$ and $\hat{v}=|\hat{\boldsymbol{v}}|<1$. 
Then $\Theta_{\pm}^{i}(\,\boldsymbol{\mathcal{U}},\hat{\boldsymbol{v}}\,)\in\mathcal{R}$.* **Proof.* The first component of $\Theta_{\pm}^{i}(\,\boldsymbol{\mathcal{U}},\hat{\boldsymbol{v}}\,)$ can be written as $$\frac{1}{4\pi}\int_{\mathbb{S}^{2}}(\,1 \pm \hat{v}^{i}\,)\,(\,1 \pm \ell^{i}\,)\,f\,d\omega,$$ while the remaining components can be written as $$\frac{1}{4\pi}\int_{\mathbb{S}^{2}}(\,1 \pm \hat{v}^{i}\,)\,(\,1 \pm \ell^{i}\,)\,f\,\ell_{j}\,d\omega, \quad (j=1,2,3).$$ Since $(\,1 \pm \hat{v}^{i}\,)\,(\,1 \pm \ell^{i}\,)\,f\in\mathfrak{R}$, the result follows. ◻* *[\[lem:realizableTheta\]]{#lem:realizableTheta label="lem:realizableTheta"}* **Lemma 2**. *Let $\Phi_{1}^{i}$ and $\Phi_{\hat{k}}^{i}$ be defined as in Eqs. [\[eq:phi1\]](#eq:phi1){reference-type="eqref" reference="eq:phi1"} and [\[eq:phiN\]](#eq:phiN){reference-type="eqref" reference="eq:phiN"}, respectively. Assume that the following holds* 1. *$\boldsymbol{\mathcal{U}}_{a},\boldsymbol{\mathcal{U}}_{b}\in\mathcal{R}$, defined as in Eq. [\[eq:conservedMoments\]](#eq:conservedMoments){reference-type="eqref" reference="eq:conservedMoments"} as the moments of distributions $f_{a},f_{b}\in\mathfrak{R}$.* 2. *The three-velocity in Eq. [\[eq:faceVelocity\]](#eq:faceVelocity){reference-type="eqref" reference="eq:faceVelocity"} satisfies $\hat{v}=|\hat{\boldsymbol{v}}|<1$.* 3. *The time step $\Delta t_{\boldsymbol{x}}$ is chosen such that $\lambda_{\boldsymbol{x}}^{i}\le(1-\hat{v})$.* *Then $\Phi_{1}^{i}\big[\,\boldsymbol{\mathcal{U}}_{a},\boldsymbol{\mathcal{U}}_{b},\hat{\boldsymbol{v}}\,\big],\Phi_{\hat{k}}^{i}\big[\,\boldsymbol{\mathcal{U}}_{a},\boldsymbol{\mathcal{U}}_{b},\hat{\boldsymbol{v}}\,\big]\in\mathcal{R}$.* **Proof.* Define $$\Theta_{0}^{i}(\,\boldsymbol{\mathcal{U}},\hat{\boldsymbol{v}}\,) =\frac{\boldsymbol{\mathcal{U}}[\hat{\boldsymbol{v}}]-\lambda_{\boldsymbol{x}}^{i}\,\boldsymbol{\mathcal{U}}[\hat{\boldsymbol{v}}^{i}]}{1-\lambda_{\boldsymbol{x}}^{i}}.$$ Then, using the LF flux in Eq. [\[eq:globalLF\]](#eq:globalLF){reference-type="eqref" reference="eq:globalLF"}, we can write $$\Phi_{1}^{i}\big[\,\boldsymbol{\mathcal{U}}_{a},\boldsymbol{\mathcal{U}}_{b},\hat{\boldsymbol{v}}\,\big] = (1-\lambda_{\boldsymbol{x}}^{i})\,\Theta_{0}^{i}(\,\boldsymbol{\mathcal{U}}_{b},\hat{\boldsymbol{v}}\,) +\frac{1}{2}\lambda_{\boldsymbol{x}}^{i}\,\Theta_{+}^{i}(\,\boldsymbol{\mathcal{U}}_{a},\hat{\boldsymbol{v}}\,) +\frac{1}{2}\lambda_{\boldsymbol{x}}^{i}\,\Theta_{+}^{i}(\,\boldsymbol{\mathcal{U}}_{b},\hat{\boldsymbol{v}}\,),$$ which is a convex combination for $\lambda_{\boldsymbol{x}}^{i}<1$. From assumptions (a) and (b) above, it follows from Lemma [\[lem:realizableTheta\]](#lem:realizableTheta){reference-type="ref" reference="lem:realizableTheta"} that $\Theta_{+}^{i}(\,\boldsymbol{\mathcal{U}}_{a},\hat{\boldsymbol{v}}\,),\Theta_{+}^{i}(\,\boldsymbol{\mathcal{U}}_{b},\hat{\boldsymbol{v}}\,)\in\mathcal{R}$. It remains to show that $\Theta_{0}^{i}(\,\boldsymbol{\mathcal{U}}_{b},\hat{\boldsymbol{v}}\,)\in\mathcal{R}$. 
The first component of $\Theta_{0}^{i}(\,\boldsymbol{\mathcal{U}}_{b},\hat{\boldsymbol{v}}\,)$ can be written as $$\frac{1}{4\pi}\int_{\mathbb{S}^{2}}\mathsf{f}(\omega)\,d\omega, \quad\text{where}\quad \mathsf{f}(\omega) = \frac{[(1-\hat{\boldsymbol{v}}\cdot\boldsymbol{\ell})-\lambda_{\boldsymbol{x}}^{i}\,(1+\hat{v}^{i}\,\ell^{i})]}{(1-\lambda_{\boldsymbol{x}}^{i})}\,f,$$ while the remaining components can be written as $$\frac{1}{4\pi}\int_{\mathbb{S}^{2}}\mathsf{f}(\omega)\,\ell_{j}(\omega)\,d\omega, \quad (j=1,2,3).$$ From assumptions (b) and (c), it follows that $\mathsf{f}\in\mathfrak{R}$, which implies $\Theta_{0}^{i}(\,\boldsymbol{\mathcal{U}}_{b},\hat{\boldsymbol{v}}\,)\in\mathcal{R}$. The proof for $\Phi_{\hat{k}}^{i}\big[\,\boldsymbol{\mathcal{U}}_{a},\boldsymbol{\mathcal{U}}_{b},\hat{\boldsymbol{v}}\,\big]$ is analogous and is omitted. ◻* *[\[lem:realizablePhi\]]{#lem:realizablePhi label="lem:realizablePhi"}* To this end, the results of Lemma [\[lem:realizablePhi\]](#lem:realizablePhi){reference-type="ref" reference="lem:realizablePhi"} lead to $\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\boldsymbol{x}}\in\mathcal{R}$ under the assumptions therein. It is straightforward to verify that these assumptions are fulfilled for each $\boldsymbol{K}\in\mathcal{T}$ when the assumptions in Proposition [Proposition 3](#prop:realizabilityExplicit){reference-type="ref" reference="prop:realizabilityExplicit"} hold. In particular, from Eq. [\[eq:faceVelocity\]](#eq:faceVelocity){reference-type="eqref" reference="eq:faceVelocity"}, it is clear that $\hat{v}<1$ is implied by $v_h<1$. Also, by defining $$\label{eq:CFLSpatial} C^{i}:=\gamma^{\boldsymbol{x}} \beta^{i} \hat{w}_{\hat{k}}\:,$$ the time-step restriction in Eq. [\[eq:timestepRestriction\]](#eq:timestepRestriction){reference-type="eqref" reference="eq:timestepRestriction"} guarantees $\lambda_{\boldsymbol{x}}^{i}\le(1-\hat{v})$ for all $\boldsymbol{K}\in\mathcal{T}$. Therefore, we have shown that, under the assumptions of Proposition [Proposition 3](#prop:realizabilityExplicit){reference-type="ref" reference="prop:realizabilityExplicit"}, $\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\boldsymbol{x}}\in\mathcal{R}$ for all $\boldsymbol{K}\in\mathcal{T}$. ### Energy Space Fluxes {#sec:realizabilityEnergy} For energy space advection, we define $$\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\varepsilon} =\boldsymbol{\mathcal{U}}_{\boldsymbol{K}}^{n}+\frac{\Delta t_{\varepsilon}}{|\boldsymbol{K}|}\,\boldsymbol{\mathcal{B}}_{h}^{\varepsilon}\big(\,\boldsymbol{\mathcal{U}}_{h}^{n},\boldsymbol{v}_{h}\,\big)_{\boldsymbol{K}} \quad (\Delta t_{\varepsilon}=\Delta t/\gamma^{\varepsilon}),$$ and seek to find sufficient conditions such that $\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\varepsilon}\in\mathcal{R}$. 
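Both here and in the spatial case of Section [5.1.1](#sec:realizabilitySpatial){reference-type="ref" reference="sec:realizabilitySpatial"}, the first step is to replace the cell integral by an LGL quadrature sum, which is exact for the DG polynomials. The following minimal sketch (in Python) checks this exactness numerically for $\hat{k}=3$, using the standard 3-point LGL rule on a mapped interval, which integrates polynomials of degree up to $2\hat{k}-3=3$ exactly; the node and weight values below are the standard ones, while the function names are purely illustrative.

```python
import numpy as np

# Minimal sketch: the 3-point Legendre-Gauss-Lobatto (LGL) rule reproduces the
# cell average of a cubic polynomial exactly, which is the exactness property
# invoked for Eq. [eq:glQuadratureSpace] and for the energy-space integral below.
# Standard 3-point LGL rule on the reference interval [-1, 1].
ref_nodes = np.array([-1.0, 0.0, 1.0])
ref_weights = np.array([1.0, 4.0, 1.0]) / 3.0      # weights sum to 2 on [-1, 1]

def cell_average_lgl(coeffs, x_lo, x_hi):
    """Cell average of a polynomial (coeffs in increasing degree) via 3-point LGL."""
    x_q = 0.5 * (x_hi + x_lo) + 0.5 * (x_hi - x_lo) * ref_nodes
    w_q = 0.5 * ref_weights                         # normalized so sum(w_q) = 1
    return np.dot(w_q, np.polyval(coeffs[::-1], x_q))

def cell_average_exact(coeffs, x_lo, x_hi):
    """Cell average computed from the antiderivative."""
    P = np.polyint(np.array(coeffs[::-1], dtype=float))
    return (np.polyval(P, x_hi) - np.polyval(P, x_lo)) / (x_hi - x_lo)

coeffs = [0.7, -1.2, 0.4, 2.5]                      # cubic: 0.7 - 1.2 x + 0.4 x^2 + 2.5 x^3
a, b = 0.3, 1.1
assert np.isclose(cell_average_lgl(coeffs, a, b), cell_average_exact(coeffs, a, b))
```

The same bookkeeping underlies Eq. [\[eq:glQuadratureSpace\]](#eq:glQuadratureSpace){reference-type="eqref" reference="eq:glQuadratureSpace"} above and the energy-space decomposition that follows, where the additional $\varepsilon^{2}$ weight raises the polynomial degree and motivates the requirement $\hat{k}\geq\frac{k+5}{2}$.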
We proceed in a fashion similar to that in Section [5.1.1](#sec:realizabilitySpatial){reference-type="ref" reference="sec:realizabilitySpatial"}, and define $$\Gamma^{\varepsilon}[\boldsymbol{\mathcal{U}}_{h}](\boldsymbol{x}) =\frac{1}{|K_{\varepsilon}|} \Big[\, \int_{K_{\varepsilon}}\boldsymbol{\mathcal{U}}_{h}\,\varepsilon^{2}d\varepsilon -\Delta t_{\varepsilon}\, \Big(\, \varepsilon^{3}\,\widehat{\boldsymbol{\mathcal{F}}^{\varepsilon}}\big(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}\big)|_{\varepsilon_{\mbox{\tiny\sc H}}} -\varepsilon^{3}\,\widehat{\boldsymbol{\mathcal{F}}^{\varepsilon}}\big(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}\big)|_{\varepsilon_{\mbox{\tiny\sc L}}} \,\Big) \,\Big]$$ so that $$\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\varepsilon} =\frac{1}{|\boldsymbol{K}_{\boldsymbol{x}}|}\int_{\boldsymbol{K}_{\boldsymbol{x}}}\Gamma^{\varepsilon}[\boldsymbol{\mathcal{U}}_{h}](\boldsymbol{x})\,d\boldsymbol{x}.$$ Evaluating the integrals in the energy dimension using the same $\hat{k}$-point LGL quadrature rule leads to $$\label{eq:GammaEnergyConvex} \Gamma^{\varepsilon}[\boldsymbol{\mathcal{U}}_{h}](\boldsymbol{x}) =\sum_{q=2}^{\hat{k}-1}\hat{w}_{q}\,\hat{\varepsilon}_{q}^{2}\,\boldsymbol{\mathcal{U}}_{h}(\hat{\varepsilon}_{q}) +\hat{w}_{1}\,\varepsilon_{\mbox{\tiny\sc L}}^{2}\,\Phi_{1}^{\varepsilon}\big[\,\boldsymbol{\mathcal{U}}_{h}(\varepsilon_{\mbox{\tiny\sc L}}^{-}),\boldsymbol{\mathcal{U}}_{h}(\varepsilon_{\mbox{\tiny\sc L}}^{+}),\boldsymbol{v}_{h}\,\big] +\hat{w}_{\hat{k}}\,\varepsilon_{\mbox{\tiny\sc H}}^{2}\,\Phi_{\hat{k}}^{\varepsilon}\big[\,\boldsymbol{\mathcal{U}}_{h}(\varepsilon_{\mbox{\tiny\sc H}}^{-}),\boldsymbol{\mathcal{U}}_{h}(\varepsilon_{\mbox{\tiny\sc H}}^{+}),\boldsymbol{v}_{h}\,\big]\:,$$ where the integral of the moments is exact when $\hat{k}\geq\frac{k+5}{2}$, i.e., $$\int_{K_{\varepsilon}}\boldsymbol{\mathcal{U}}_{h}(\varepsilon)\,\varepsilon^{2}d\varepsilon =\hat{Q}^{\varepsilon}\big[\boldsymbol{\mathcal{U}}_{h}\big] =|K_{\varepsilon}|\sum_{q=1}^{\hat{k}}\hat{w}_{q}\,\boldsymbol{\mathcal{U}}_{h}(\hat{\varepsilon}_{q})\,\hat{\varepsilon}_{q}^{2}\:.$$ Since $\Gamma^{\varepsilon}[\boldsymbol{\mathcal{U}}_{h}](\boldsymbol{x})$ is written as a convex combination in Eq. [\[eq:GammaEnergyConvex\]](#eq:GammaEnergyConvex){reference-type="eqref" reference="eq:GammaEnergyConvex"}, the realizability of each term on the right-hand side gives the realizability of $\Gamma^{\varepsilon}[\boldsymbol{\mathcal{U}}_{h}](\boldsymbol{x})$. 
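This term-by-term reduction uses only the convexity of the realizable set. The following minimal sketch (assuming, as in the proofs of this section, that membership in $\mathcal{R}$ amounts to $\mathcal{N}>0$ and $|\boldsymbol{\mathcal{G}}|\leq\mathcal{N}$; see Eq. [\[eq:realizableSet\]](#eq:realizableSet){reference-type="eqref" reference="eq:realizableSet"}) illustrates numerically that convex combinations of realizable moments remain realizable; all names are illustrative.

```python
import numpy as np

# Minimal sketch: the set R = {(N, G) : N > 0, |G| <= N} is convex, so convex
# combinations of realizable moments are again realizable. (This characterization
# of R is an assumption of the sketch; see Eq. [eq:realizableSet] in the text.)
rng = np.random.default_rng(0)

def is_realizable(N, G):
    """Check N > 0 and |G| <= N for a moment pair (N, G), G a 3-vector."""
    return N > 0.0 and np.linalg.norm(G) <= N + 1e-14

def random_realizable():
    """Draw a realizable moment: N > 0 and G = h * N * n_hat with h in [0, 1]."""
    N = rng.uniform(0.1, 2.0)
    n_hat = rng.normal(size=3)
    n_hat /= np.linalg.norm(n_hat)
    return N, rng.uniform(0.0, 1.0) * N * n_hat

for _ in range(1000):
    moments = [random_realizable() for _ in range(4)]
    w = rng.uniform(size=4)
    w /= w.sum()                          # positive weights summing to one
    N_mix = sum(wq * Nq for wq, (Nq, _) in zip(w, moments))
    G_mix = sum(wq * Gq for wq, (_, Gq) in zip(w, moments))
    assert is_realizable(N_mix, G_mix)
```

Convexity is exactly what allows Eqs. [\[eq:GammaSpatialConvex\]](#eq:GammaSpatialConvex){reference-type="eqref" reference="eq:GammaSpatialConvex"} and [\[eq:GammaEnergyConvex\]](#eq:GammaEnergyConvex){reference-type="eqref" reference="eq:GammaEnergyConvex"} to be treated one term at a time.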
Since $\boldsymbol{\mathcal{U}}_{h}(\hat{\varepsilon}_{q})\in\mathcal{R}$ for each $\hat{\varepsilon}_{q}$ under the assumption that $\boldsymbol{\mathcal{U}}_{h}\in\mathcal{R}$ on $\widehat{S}^{\boldsymbol{K}}_{\otimes}$ for all $\boldsymbol{K}\in\mathcal{T}$, we focus on proving realizability of $\Phi_{1}^{\varepsilon}$ and $\Phi_{\hat{k}}^{\varepsilon}$, which are defined as $$\begin{aligned} \Phi_{1}^{\varepsilon}\big[\,\boldsymbol{\mathcal{U}}_{a},\boldsymbol{\mathcal{U}}_{b},\boldsymbol{v}_h\,\big] &=\boldsymbol{\mathcal{U}}_{b} +\lambda_{\mbox{\tiny\sc L}}^{\varepsilon}\,\mathscr{F}_{\mbox{\tiny\sc LF}}^{\varepsilon}\big(\boldsymbol{\mathcal{U}}_{a},\boldsymbol{\mathcal{U}}_{b},\boldsymbol{v}_h\big) \label{eq:phi1Energy} \\ &=(1-\alpha^{\varepsilon}\lambda_{\mbox{\tiny\sc L}}^{\varepsilon})\,\Theta_{0,\mbox{\tiny\sc L}}^{\varepsilon}(\boldsymbol{\mathcal{U}}_{b},\boldsymbol{v}_h) +\frac{1}{2}\,\alpha^{\varepsilon}\lambda_{\mbox{\tiny\sc L}}^{\varepsilon}\,\Theta_{+}^{\varepsilon}(\boldsymbol{\mathcal{U}}_{a},\boldsymbol{v}_h) +\frac{1}{2}\,\alpha^{\varepsilon}\lambda_{\mbox{\tiny\sc L}}^{\varepsilon}\,\Theta_{+}^{\varepsilon}(\boldsymbol{\mathcal{U}}_{b},\boldsymbol{v}_h), \nonumber \\ \Phi_{\hat{k}}^{\varepsilon}\big[\,\boldsymbol{\mathcal{U}}_{a},\boldsymbol{\mathcal{U}}_{b},\boldsymbol{v}_h\,\big] &=\boldsymbol{\mathcal{U}}_{a} -\lambda_{\mbox{\tiny\sc H}}^{\varepsilon}\,\mathscr{F}_{\mbox{\tiny\sc LF}}^{\varepsilon}\big(\boldsymbol{\mathcal{U}}_{a},\boldsymbol{\mathcal{U}}_{b},\boldsymbol{v}_h\big) \label{eq:phiNEnergy} \\ &=(1-\alpha^{\varepsilon}\lambda_{\mbox{\tiny\sc H}}^{\varepsilon})\,\Theta_{0,\mbox{\tiny\sc H}}^{\varepsilon}(\boldsymbol{\mathcal{U}}_{a},\boldsymbol{v}_h) +\frac{1}{2}\,\alpha^{\varepsilon}\lambda_{\mbox{\tiny\sc H}}^{\varepsilon}\,\Theta_{-}^{\varepsilon}(\boldsymbol{\mathcal{U}}_{a},\boldsymbol{v}_h) +\frac{1}{2}\,\alpha^{\varepsilon}\lambda_{\mbox{\tiny\sc H}}^{\varepsilon}\,\Theta_{-}^{\varepsilon}(\boldsymbol{\mathcal{U}}_{b},\boldsymbol{v}_h), \nonumber\end{aligned}$$ where we used the definition of $\mathscr{F}_{\mbox{\tiny\sc LF}}^{\varepsilon}$ given in Eq. [\[eq:globalLFenergy\]](#eq:globalLFenergy){reference-type="eqref" reference="eq:globalLFenergy"} and defined $\lambda_{\mbox{\tiny\sc L}/\mbox{\tiny\sc H}}^{\varepsilon}=\varepsilon_{\mbox{\tiny\sc L}/\mbox{\tiny\sc H}}\,\Delta t_{\varepsilon}/(\hat{w}_{\hat{k}}\,|K_{\varepsilon}|)$, $$\label{eq:thetaEnergy} \Theta_{0,\mbox{\tiny\sc L}/\mbox{\tiny\sc H}}^{\varepsilon}(\boldsymbol{\mathcal{U}},\boldsymbol{v}_h) =\frac{\boldsymbol{\mathcal{U}}[\boldsymbol{v}_h]-\alpha^{\varepsilon}\lambda_{\mbox{\tiny\sc L}/\mbox{\tiny\sc H}}^{\varepsilon}\,\boldsymbol{\mathcal{M}}}{1-\alpha^{\varepsilon}\lambda_{\mbox{\tiny\sc L}/\mbox{\tiny\sc H}}^{\varepsilon}}, {\quad\text{and}\quad} \Theta_{\pm}^{\varepsilon}(\boldsymbol{\mathcal{U}},\boldsymbol{v}_h) =\boldsymbol{\mathcal{M}} \pm \frac{1}{\alpha^{\varepsilon}}\,\boldsymbol{\mathcal{F}}^{\varepsilon}(\boldsymbol{\mathcal{U}},\boldsymbol{v}_h) \:.$$ Similar to the approach in Section [5.1.1](#sec:realizabilitySpatial){reference-type="ref" reference="sec:realizabilitySpatial"}, the following two lemmas show realizability of $\Theta_{\pm}^{\varepsilon}$ and $\Theta_{0,\mbox{\tiny\sc L}/\mbox{\tiny\sc H}}^{\varepsilon}$. **Lemma 3**. *Let $\Theta_{\pm}^{\varepsilon}(\boldsymbol{\mathcal{U}},\boldsymbol{v}_h)$ be given as in Eq. [\[eq:thetaEnergy\]](#eq:thetaEnergy){reference-type="eqref" reference="eq:thetaEnergy"}. 
Assume that $\boldsymbol{\mathcal{U}}\in\mathcal{R}$. Then $\Theta_{\pm}^{\varepsilon}\in\mathcal{R}$.* **Proof.* The first component of $\Theta_{\pm}^{\varepsilon}$ can be written as $$\frac{1}{4\pi}\int_{\mathbb{S}^{2}}\mathsf{f}_{\pm}[\boldsymbol{v}_h,\alpha^{\varepsilon}](\omega)\,d\omega, \quad\text{where}\quad \mathsf{f}_{\pm}[\boldsymbol{v}_h,\alpha^{\varepsilon}](\omega) =\big(\,1\pm Q(\boldsymbol{v}_h)/\alpha^{\varepsilon}\,\big)\,f(\omega),$$ and where $Q(\boldsymbol{v}_h)$ is the quadratic form in Eq. [\[eq:quadraticForm\]](#eq:quadraticForm){reference-type="eqref" reference="eq:quadraticForm"}. Similarly, the remaining components of $\Theta_{\pm}^{\varepsilon}$ can be written as $$\frac{1}{4\pi}\int_{\mathbb{S}^{2}}\mathsf{f}_{\pm}[\boldsymbol{v}_h,\alpha^{\varepsilon}](\omega)\,\boldsymbol{\ell}(\omega)\,d\omega.$$ Since $|Q(\boldsymbol{v}_h)|/\alpha^{\varepsilon}\le1$, it follows that $\mathsf{f}_{\pm}[\boldsymbol{v}_h,\alpha^{\varepsilon}](\omega)\in\mathfrak{R}$ and $\Theta_{\pm}^{\varepsilon}\in\mathcal{R}$. ◻* **Lemma 4**. *Consider $\Theta_{0,\mbox{\tiny\sc L}/\mbox{\tiny\sc H}}^{\varepsilon}(\boldsymbol{\mathcal{U}},\boldsymbol{v}_h)$ as defined in Eq. [\[eq:thetaEnergy\]](#eq:thetaEnergy){reference-type="eqref" reference="eq:thetaEnergy"}. Assume that $\boldsymbol{\mathcal{U}},\boldsymbol{\mathcal{M}}\in\mathcal{R}$, $v_h=|\boldsymbol{v}_h|<1$, and $\eta_{\mbox{\tiny\sc L}/\mbox{\tiny\sc H}}^{\varepsilon}:=\alpha^{\varepsilon}\lambda_{\mbox{\tiny\sc L}/\mbox{\tiny\sc H}}^{\varepsilon}<(1-v_h)$. Then, $\Theta_{0,\mbox{\tiny\sc L}/\mbox{\tiny\sc H}}^{\varepsilon}(\boldsymbol{\mathcal{U}},\boldsymbol{v}_h)\in\mathcal{R}$.* **Proof.* The first component of $\Theta_{0,\mbox{\tiny\sc L}/\mbox{\tiny\sc H}}^{\varepsilon}(\boldsymbol{\mathcal{U}},\boldsymbol{v}_h)$ can be written as $$\frac{1}{4\pi}\int_{\mathbb{S}^{2}}\mathsf{f}[\boldsymbol{v}_h,\eta_{\mbox{\tiny\sc L}/\mbox{\tiny\sc H}}^{\varepsilon}](\omega)\,d\omega, \quad\text{where}\quad \mathsf{f}[\boldsymbol{v}_h,\eta_{\mbox{\tiny\sc L}/\mbox{\tiny\sc H}}^{\varepsilon}](\omega) = \frac{(1-\boldsymbol{v}_h\cdot\boldsymbol{\ell}-\eta_{\mbox{\tiny\sc L}/\mbox{\tiny\sc H}}^{\varepsilon})}{(1-\eta_{\mbox{\tiny\sc L}/\mbox{\tiny\sc H}}^{\varepsilon})}\,f(\omega).$$ The remaining components of $\Theta_{0,\mbox{\tiny\sc L}/\mbox{\tiny\sc H}}^{\varepsilon}(\boldsymbol{\mathcal{U}},\boldsymbol{v}_h)$ can be written as $$\frac{1}{4\pi}\int_{\mathbb{S}^{2}}\mathsf{f}[\boldsymbol{v}_h,\eta_{\mbox{\tiny\sc L}/\mbox{\tiny\sc H}}^{\varepsilon}](\omega)\,\boldsymbol{\ell}(\omega)\,d\omega.$$ Since $v_h<1$ and $\eta_{\mbox{\tiny\sc L}/\mbox{\tiny\sc H}}^{\varepsilon}<1-v_h$, we have $(1-\boldsymbol{v}_h\cdot\boldsymbol{\ell}-\eta_{\mbox{\tiny\sc L}/\mbox{\tiny\sc H}}^{\varepsilon})\ge(1-v_h)-\eta_{\mbox{\tiny\sc L}/\mbox{\tiny\sc H}}^{\varepsilon}>0$. This, together with $f\in\mathfrak{R}$, implies that $\mathsf{f}[\boldsymbol{v}_h,\eta_{\mbox{\tiny\sc L}/\mbox{\tiny\sc H}}^{\varepsilon}](\omega)\in\mathfrak{R}$ and $\Theta_{0,\mbox{\tiny\sc L}/\mbox{\tiny\sc H}}^{\varepsilon}(\boldsymbol{\mathcal{U}},\boldsymbol{v}_h)\in\mathcal{R}$. 
◻* Analogous to the spatial advection case, the assumptions in Lemma [Lemma 4](#lem:realizableTheta0Energy){reference-type="ref" reference="lem:realizableTheta0Energy"} are fulfilled for all $\boldsymbol{K}\in\mathcal{T}$ under assumptions of Proposition [Proposition 3](#prop:realizabilityExplicit){reference-type="ref" reference="prop:realizabilityExplicit"}, when $$\label{eq:CFLEnergy} C^{\varepsilon}:=\gamma^{\varepsilon} \alpha^{\varepsilon} \hat{w}_{\hat{k}}$$ is used in the time-step restriction [\[eq:timestepRestriction\]](#eq:timestepRestriction){reference-type="eqref" reference="eq:timestepRestriction"}. Under these assumptions, $\eta_{\mbox{\tiny\sc L}/\mbox{\tiny\sc H}}^{\varepsilon}:=\alpha^{\varepsilon}\lambda_{\mbox{\tiny\sc L}/\mbox{\tiny\sc H}}^{\varepsilon}<(1-v_h)\le 1$. Therefore $\Phi_{1}^{\varepsilon}$ and $\Phi_{\hat{k}}^{\varepsilon}$ are convex combinations of realizable terms, and are thus realizable. We have shown that, under the assumptions of Proposition [Proposition 3](#prop:realizabilityExplicit){reference-type="ref" reference="prop:realizabilityExplicit"}, $\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\varepsilon}\in\mathcal{R}$ for all $\boldsymbol{K}\in\mathcal{T}$. ### Sources {#sec:realizabilitySource} The last part of the explicit update involves the source term in the number flux equation. We define $$\begin{aligned} \widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\boldsymbol{\mathcal{S}}} &=\boldsymbol{\mathcal{U}}_{\boldsymbol{K}}^{n}+\frac{\Delta t_{\boldsymbol{\mathcal{S}}}}{|\boldsymbol{K}|}\big(\,\boldsymbol{\mathcal{S}}(\boldsymbol{\mathcal{U}}_{h}^{n},\boldsymbol{v}_{h})\,\big)_{\boldsymbol{K}} \quad(\Delta t_{\boldsymbol{\mathcal{S}}}=\Delta t/\gamma^{\boldsymbol{\mathcal{S}}}) \nonumber \\ &=\frac{1}{|\boldsymbol{K}|}\int_{\boldsymbol{K}}\Big[\,\boldsymbol{\mathcal{U}}_{h}^{n}+\Delta t_{\boldsymbol{\mathcal{S}}}\,\boldsymbol{\mathcal{S}}(\boldsymbol{\mathcal{U}}_{h}^{n},\boldsymbol{v}_{h})\,\Big]\,\tau\,d\boldsymbol{z}. \label{eq:source_update}\end{aligned}$$ From the definition of the source term $\boldsymbol{\mathcal{S}}$ in Eq. [\[eq:sources\]](#eq:sources){reference-type="eqref" reference="eq:sources"}, the number density is not affected in the source update. Thus we have $\widehat{\mathcal{N}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\boldsymbol{\mathcal{S}}} = {\mathcal{N}}_{\boldsymbol{K}}^{n} > 0$, which, together with the results obtained in Sections [5.1.1](#sec:realizabilitySpatial){reference-type="ref" reference="sec:realizabilitySpatial"} and [5.1.2](#sec:realizabilityEnergy){reference-type="ref" reference="sec:realizabilityEnergy"}, concludes the proof of the first claim in Proposition [Proposition 3](#prop:realizabilityExplicit){reference-type="ref" reference="prop:realizabilityExplicit"}. Ideally, one would expect to show that $\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\boldsymbol{\mathcal{S}}}\in\mathcal{R}$ under time-step restrictions similar to the ones in Sections [5.1.1](#sec:realizabilitySpatial){reference-type="ref" reference="sec:realizabilitySpatial"} and [5.1.2](#sec:realizabilityEnergy){reference-type="ref" reference="sec:realizabilityEnergy"}. Unfortunately, this is not true in the three-dimensional case considered in this paper. 
In the rest of this section, we will show that (i) realizability of $\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\boldsymbol{\mathcal{S}}}$ is preserved by the semi-discrete equation, i.e., without time discretization, and (ii) with the forward Euler discretization in Eq. [\[eq:imexForwardEuler\]](#eq:imexForwardEuler){reference-type="eqref" reference="eq:imexForwardEuler"}, $\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\boldsymbol{\mathcal{S}}}\in\mathcal{R}$ in a reduced, one-dimensional planar geometry. **Proposition 7** (Semi-discrete source update). *Given a quadrature rule $\boldsymbol{Q}:C^{0}(\boldsymbol{K})\to\mathbb{R}$ with positive weights and points given by the set $\boldsymbol{S}^{\boldsymbol{K}}_{\otimes}$, for all $\boldsymbol{z}\in\boldsymbol{S}^{\boldsymbol{K}}_{\otimes}\subset\boldsymbol{K}$ the solution $\boldsymbol{\mathcal{U}}_{h}(\boldsymbol{z},t)$ to the semi-discrete equation $$\label{eq:continuum_source_update} \partial_t \boldsymbol{\mathcal{U}}_{h}(\boldsymbol{z},t) = \boldsymbol{\mathcal{S}}(\boldsymbol{\mathcal{U}}_{h}(\boldsymbol{z},t),\boldsymbol{v}_{h}(\boldsymbol{z}))$$ remains in the realizable set $\mathcal{R}$ for all $t\geq t_0$, provided that $\boldsymbol{\mathcal{U}}_{h}(\boldsymbol{z},t_0)$ is realizable.* This semi-discrete equation is consistent with the source update portion in Eq. [\[eq:twoMomentModelCompact\]](#eq:twoMomentModelCompact){reference-type="eqref" reference="eq:twoMomentModelCompact"} and results in Eq. [\[eq:source_update\]](#eq:source_update){reference-type="eqref" reference="eq:source_update"} after applying forward Euler discretization and cell-averaging. *Proof.* To show that $\boldsymbol{\mathcal{U}}_{h}(\boldsymbol{z},t)\in\mathcal{R}$ for $t\geq t_0$, we first observe that since the first component of $\boldsymbol{\mathcal{S}}(\boldsymbol{\mathcal{U}},\boldsymbol{v})$ is zero (see Eq. [\[eq:sources\]](#eq:sources){reference-type="eqref" reference="eq:sources"}), the source update does not affect $\mathcal{N}_h$. Thus, showing $\boldsymbol{\mathcal{U}}_{h}(\boldsymbol{z},t)\in \mathcal{R}$ is equivalent to proving that $\mathcal{G}_h(\boldsymbol{z},t) \leq \mathcal{N}_h(\boldsymbol{z})$, where $\mathcal{G}_h(\boldsymbol{z},t) = |\boldsymbol{\mathcal{G}}_h(\boldsymbol{z},t)|$ with $\boldsymbol{\mathcal{G}}_h$ the number flux governed by Eq. [\[eq:continuum_source_update\]](#eq:continuum_source_update){reference-type="eqref" reference="eq:continuum_source_update"}. Due to the continuity of $\mathcal{G}_{h}(\boldsymbol{z},t)$ in time, it suffices to show that if $\mathcal{G}_h(\boldsymbol{z},\hat{t}) = \mathcal{N}_h(\boldsymbol{z})$ for some $\hat{t}\geq t_0$, then $\mathcal{G}_h(\boldsymbol{z},t) = \mathcal{N}_h(\boldsymbol{z})$ for all $t\geq\hat{t}$; that is, once the number flux magnitude reaches the number density it cannot grow further, and hence it never exceeds the number density. Indeed, the number flux portion of Eq. 
[\[eq:continuum_source_update\]](#eq:continuum_source_update){reference-type="eqref" reference="eq:continuum_source_update"} is given by $$\partial_t \mathcal{G}_{h,j} = \mathcal{Q}^{i}_{\hspace{2pt}kj}(\partial_{i}{v^{k}})_{h}-\mathcal{I}^{i}(\partial_{i}{v_{j}})_{h} =\frac{1}{4\pi}\int_{\mathbb{S}^{2}} \big(\,\ell^{i}(\omega)\ell_{k}(\omega)\ell_{j}(\omega)(\partial_{i}{v^{k}})_{h}-\ell^{i}(\omega)(\partial_{i}{v_{j}})_{h}\,\big) \,f(\omega,t)\,d\omega.$$ Suppose that $\mathcal{G}_h(\boldsymbol{z},\hat{t}) = \mathcal{N}_h(\boldsymbol{z})$ for some $\hat{t}\geq t_0$. Then it is known [@fialkow1991recursiveness] that the distribution function $f(\omega)$ takes the form of a Dirac delta function, i.e., $f(\omega) = c\, \delta(\omega-\hat{\omega})$ for some $c>0$ and $\hat{\omega}\in\mathbb{S}^2$. Therefore, at $t=\hat{t}$, we have $\mathcal{G}_h^j = c \, \big(\,1+v^{k}\ell_{k}(\hat{\omega})\,\big)\,\ell^{j}(\hat{\omega})$ and $$\partial_t \mathcal{G}_{h,j} =\frac{c}{4\pi} \big(\,\ell^{i}(\hat{\omega})\ell_{k}(\hat{\omega})\ell_{j}(\hat{\omega})(\partial_{i}{v^{k}})_{h}-\ell^{i}(\hat{\omega})(\partial_{i}{v_{j}})_{h}\,\big)\:.$$ Thus, $$\label{eq:continuumRealizability} \begin{alignedat}{2} \frac{1}{2} \partial_t (\mathcal{G}_{h})^2 &=\, \mathcal{G}_{h}^j \partial_t \mathcal{G}_{h,j}\\ &= \frac{c^2}{4\pi} \big(\,1+v^{k}\ell_{k}(\hat{\omega})\,\big)\,\ell^{j}(\hat{\omega}) \big(\,\ell^{i}(\hat{\omega})\ell_{k}(\hat{\omega})\ell_{j}(\hat{\omega})(\partial_{i}{v^{k}})_{h}-\ell^{i}(\hat{\omega})(\partial_{i}{v_{j}})_{h}\,\big) = 0\:, \end{alignedat}$$ where the fact $\ell^{i}\ell_{i}=1$ is used in the last equality. Eq. [\[eq:continuumRealizability\]](#eq:continuumRealizability){reference-type="eqref" reference="eq:continuumRealizability"} indicates that the number flux magnitude does not change once $\mathcal{G}_h(\boldsymbol{z},\hat{t}) = \mathcal{N}_h(\boldsymbol{z})$ for some $\hat{t}\geq t_0$ and implies that $\boldsymbol{\mathcal{U}}_{h}(\boldsymbol{z},t)\in\mathcal{R}$ for $t\geq t_0$. ◻ **Remark 5**. *The result in Eq. [\[eq:continuumRealizability\]](#eq:continuumRealizability){reference-type="eqref" reference="eq:continuumRealizability"} also explains why the discretized source update [\[eq:source_update\]](#eq:source_update){reference-type="eqref" reference="eq:source_update"} cannot guarantee realizability of the updated moments. Specifically, Eq. [\[eq:continuumRealizability\]](#eq:continuumRealizability){reference-type="eqref" reference="eq:continuumRealizability"} suggests that, for moments on the realizable boundary ($\mathcal{G}_h = \mathcal{N}_h$), the continuous source update [\[eq:continuum_source_update\]](#eq:continuum_source_update){reference-type="eqref" reference="eq:continuum_source_update"} moves the moments tangentially to the boundary of the realizable set. Once an explicit discretization is applied, e.g., Eq. [\[eq:source_update\]](#eq:source_update){reference-type="eqref" reference="eq:source_update"}, the update may result in unrealizable moments, regardless of the time-step size.* Next, we show that, in a one-dimensional planar geometry [@mihalasMihalas_1999 Section 6.5], the discretized source update Eq. [\[eq:source_update\]](#eq:source_update){reference-type="eqref" reference="eq:source_update"} preserves realizability of the moments when a time-step restriction is satisfied. 
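Before turning to that case, the mechanism described in Remark 5 can be illustrated with a minimal sketch that is deliberately unrelated to the moment model: for dynamics that move a state tangentially along the boundary of a convex set, a forward Euler step leaves the set for every positive step size. The rotation field below plays the role of the tangential source update; all names and values are purely illustrative.

```python
import numpy as np

# Illustrative sketch of the mechanism in Remark 5 (not the moment system itself):
# the ODE  dx/dt = -y,  dy/dt = x  moves (x, y) tangentially along the circle
# x^2 + y^2 = const, the analogue of the continuous source update moving moments
# tangentially to the boundary of the realizable set. One forward Euler step
# multiplies the radius by sqrt(1 + dt^2) > 1, so the discrete update exits the
# set {x^2 + y^2 <= 1} no matter how small dt is chosen.

def forward_euler_step(state, dt):
    x, y = state
    return np.array([x - dt * y, y + dt * x])

state = np.array([1.0, 0.0])   # starts exactly on the boundary of the unit disc
for dt in (1e-1, 1e-3, 1e-6):
    radius = np.linalg.norm(forward_euler_step(state, dt))
    print(f"dt = {dt:g}: radius after one step exceeds 1 by {radius - 1.0:.3e}")
```

This is also why the planar-geometry result below relies on the specific structure of the source term and on the time-step restriction in Eq. [\[eq:sourceTimeStepRestriction\]](#eq:sourceTimeStepRestriction){reference-type="eqref" reference="eq:sourceTimeStepRestriction"}, rather than on a tangency argument alone.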
In the planar geometry, the spatial fluxes are zero in two of the three spatial dimensions (e.g., $\partial_{x^2} f = \partial_{x^3}f = 0$) with the angular direction reduced from $\omega=(\vartheta, \varphi)\in\mathbb{S}^2$ to $\mu=\cos\,\vartheta\in[-1,1]$. In the remainder of this subsection, we use $x(=x^1)$ to denote the only spatial dimension that has nonzero fluxes, and use a scalar function $v$ to denote the velocity which varies only in the $x$ direction. Moreover, the primitive moments in the planar geometry are given by $$\big\{\,\mathcal{D},\,\mathcal{I},\,\mathcal{K},\,\mathcal{Q}\,\big\}(\varepsilon,{x},t) =\frac{1}{2}\int_{-1}^{1}f(\omega,\varepsilon,{x},t)\,\big\{\,1,\,\mu,\,\mu^{2},\,\mu^{3}\,\big\}\,d\mu, \label{eq:reducedAngularMoments}$$ and the conserved moments are $\mathcal{N} =\mathcal{D}+v\,\mathcal{I}$ and $\mathcal{G}=\mathcal{I}+v\,\mathcal{K}$. In this case, the semi-discrete source update Eq. [\[eq:continuum_source_update\]](#eq:continuum_source_update){reference-type="eqref" reference="eq:continuum_source_update"} reduces to $$\partial_t \mathcal{N}= 0\:,{\quad\text{and}\quad}\partial_t \mathcal{G}= \frac{1}{2} \int_{-1}^{1} \mu(\mu^2-1)(\partial_x v)_{h}\,f(\mu) d\mu\:.$$ The following proposition shows that the discretized version of this source update preserves moment realizability under a time step restriction. **Proposition 8**. *In the planar geometry, suppose Assumption [Assumption 1](#assum:exact_closure){reference-type="ref" reference="assum:exact_closure"} holds, $v_{h}<1$, and the time step satisfies $$\label{eq:sourceTimeStepRestriction} \Delta t\leq \frac{1}{2}\,\gamma^{\boldsymbol{\mathcal{S}}}\, \frac{1-v_{h}}{|(\partial_{x}{v})_{h}|}, \quad\bigg(\,\text{ i.e., }\, \Delta t_{\boldsymbol{\mathcal{S}}} \leq \frac{1}{2}\, \frac{1-v_{h}}{|(\partial_{x}{v})_{h}|}\bigg).$$ Then the discretized source update Eq. [\[eq:source_update\]](#eq:source_update){reference-type="eqref" reference="eq:source_update"} gives a realizable cell-averaged moment $\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\boldsymbol{\mathcal{S}}}$ for all $\boldsymbol{K}\in\mathcal{T}$, provided $\boldsymbol{\mathcal{U}}_{h}^{n} \in \mathcal{R}$ on all $\boldsymbol{K}\in\mathcal{T}$.* *Proof.* In this proof, we show the realizability of $\widehat{\boldsymbol{\mathcal{U}}}_{h}^{n+\sfrac{1}{2},\boldsymbol{\mathcal{S}}}:= \boldsymbol{\mathcal{U}}_{h}^{n}+\Delta t_{\boldsymbol{\mathcal{S}}}\,\boldsymbol{\mathcal{S}}(\boldsymbol{\mathcal{U}}_{h}^{n},\boldsymbol{v}_{h})$, which leads to the realizability of $\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2},\boldsymbol{\mathcal{S}}}$ when the element integral in Eq. [\[eq:source_update\]](#eq:source_update){reference-type="eqref" reference="eq:source_update"} is evaluated using quadrature rules with positive weights in both the spatial and energy dimensions. We start with denoting $\widehat{\boldsymbol{\mathcal{U}}}_{h}^{n+\sfrac{1}{2},\boldsymbol{\mathcal{S}}}=:(\widehat{\mathcal{N}}_{h}^{n+\sfrac{1}{2},\boldsymbol{\mathcal{S}}}, \widehat{\mathcal{G}}_{h}^{n+\sfrac{1}{2},\boldsymbol{\mathcal{S}}})$. In the planar geometry, the number density $\widehat{\mathcal{N}}_{h}^{n+\sfrac{1}{2},\boldsymbol{\mathcal{S}}}$ and number flux $\widehat{\mathcal{G}}_{h}^{n+\sfrac{1}{2},\boldsymbol{\mathcal{S}}}$ are both scalar-valued. From Assumption [Assumption 1](#assum:exact_closure){reference-type="ref" reference="assum:exact_closure"} and the definition of the source terms $\boldsymbol{\mathcal{S}}$ in Eq. 
[\[eq:sources\]](#eq:sources){reference-type="eqref" reference="eq:sources"}, we can write $$\begin{aligned} \widehat{\mathcal{N}}_{h}^{n+\sfrac{1}{2},\boldsymbol{\mathcal{S}}} &=\frac{1}{2}\int_{-1}^{1}\big(\,1+v_{h}\mu\,\big)\,f(\mu)\,d\mu,\\ \widehat{\mathcal{G}}_{h}^{n+\sfrac{1}{2},\boldsymbol{\mathcal{S}}} &=\frac{1}{2}\int_{-1}^{1} \big[\, \big(\,1+v_{h}\mu\,\big)\,\mu +\Delta t_{\boldsymbol{\mathcal{S}}}\,\big(\,\mu^3(\partial_{x}{v})_{h}-\mu(\partial_{x}{v})_{h}\,\big) \,\big]\,f(\mu)\,d\mu \\ &=\frac{1}{2}\int_{-1}^{1} \big(\,1+v_{h}\mu\,\big)\,f(\mu)\,\big[\, \mu -\Delta t_{\boldsymbol{\mathcal{S}}}\,(\partial_{x}{v})_{h}\mu\,\frac{1-\mu^2}{1+v_{h}\mu}\, \,\big]\,d\mu,\nonumber\end{aligned}$$ where $f\in\mathfrak{R}$. Since $v_{h}<1$ and $\mu\in[-1,1]$, it is clear that $\widehat{\mathcal{N}}_{h}^{n+\sfrac{1}{2},\boldsymbol{\mathcal{S}}}>0$. We next prove $\widehat{\mathcal{N}}_{h}^{n+\sfrac{1}{2},\boldsymbol{\mathcal{S}}} - |\widehat{\mathcal{G}}_{h}^{n+\sfrac{1}{2},\boldsymbol{\mathcal{S}}} |\geq0$ when $\Delta t_{\boldsymbol{\mathcal{S}}}$ satisfies Eq. [\[eq:sourceTimeStepRestriction\]](#eq:sourceTimeStepRestriction){reference-type="eqref" reference="eq:sourceTimeStepRestriction"}. By the Cauchy-Schwarz inequality, $$\begin{aligned} |\widehat{\mathcal{G}}_{h}^{n+\sfrac{1}{2},\boldsymbol{\mathcal{S}}}|^2 &\leq \frac{1}{4}\int_{-1}^{1} (1+v_{h}\mu)\,f(\mu)\,d\mu \int_{-1}^{1} (1+v_{h}\mu)\,f(\mu)\,\big[\, \mu -\Delta t_{\boldsymbol{\mathcal{S}}}\,(\partial_{x}{v})_{h}\mu\,\frac{1-\mu^2}{1+v_{h}\mu}\, \,\big]^2\,d\mu.\end{aligned}$$ We then show that, under Eq. [\[eq:sourceTimeStepRestriction\]](#eq:sourceTimeStepRestriction){reference-type="eqref" reference="eq:sourceTimeStepRestriction"}, $\big[\,\mu-\Delta t_{\boldsymbol{\mathcal{S}}}\,(\partial_{x}{v})_{h}\mu\,\frac{1-\mu^2}{1+v_{h}\mu}\,\big]^2 \leq 1$ for $\mu\in[-1,1]$. This inequality clearly holds when $\mu=\pm 1$ and $\mu=0$. We thus focus on the case when $\mu\in(-1,1)$ and $\mu\neq0$. Since $\Delta t_{\boldsymbol{\mathcal{S}}}>0$ and $\,\frac{1-\mu^2}{1+v_{h}\mu}\geq0$, the inequality holds when $$\Delta t_{\boldsymbol{\mathcal{S}}} \leq \frac{1+v_{h}\mu}{(\partial_{x}{v})_{h}\mu\,(1-\mu)}\,\text{ if }\, (\partial_{x}{v})_{h}\,\mu>0,\,\text{ and }\, \Delta t_{\boldsymbol{\mathcal{S}}} \leq \frac{1+v_{h}\mu}{(-(\partial_{x}{v})_{h}\mu) (1+\mu)}\,\text{ if }\, (\partial_{x}{v})_{h}\,\mu<0.$$ It is straightforward to verify that Eq. [\[eq:sourceTimeStepRestriction\]](#eq:sourceTimeStepRestriction){reference-type="eqref" reference="eq:sourceTimeStepRestriction"} gives a sufficient condition for the two time-step restrictions above. Combining the pointwise bound $\big[\,\mu-\Delta t_{\boldsymbol{\mathcal{S}}}\,(\partial_{x}{v})_{h}\mu\,\frac{1-\mu^2}{1+v_{h}\mu}\,\big]^2\leq1$ with the Cauchy-Schwarz estimate above yields $|\widehat{\mathcal{G}}_{h}^{n+\sfrac{1}{2},\boldsymbol{\mathcal{S}}}|^2\leq\big(\widehat{\mathcal{N}}_{h}^{n+\sfrac{1}{2},\boldsymbol{\mathcal{S}}}\big)^2$, which completes the proof. ◻ ## Realizability-enforcing Limiter {#sec:realizabilityLimiter} It has been shown in Proposition [Proposition 3](#prop:realizabilityExplicit){reference-type="ref" reference="prop:realizabilityExplicit"} that, when starting from realizable moments $\boldsymbol{\mathcal{U}}_{h}^{n}$, the explicit update in Eq. [\[eq:imexForwardEuler\]](#eq:imexForwardEuler){reference-type="eqref" reference="eq:imexForwardEuler"} is guaranteed to provide updated cell-averaged moments $\widehat{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}^{n+\sfrac{1}{2}}$ with number density $\widehat{\mathcal{N}}_{\boldsymbol{K}}^{n+\sfrac{1}{2}}>0$ for every $\boldsymbol{K}$ under a reasonable time-step restriction. In this section, we discuss how the realizability-enforcing limiter proposed in [@chu_etal_2019] is used here in Eq. 
[\[eq:realizabilityLimiter\]](#eq:realizabilityLimiter){reference-type="eqref" reference="eq:realizabilityLimiter"} to enforce realizability of moments $\boldsymbol{\mathcal{U}}_{h}$ at a point set $\widetilde{S}^{\boldsymbol{K}}_{\otimes}$ defined in Eq. [\[eq:AllSetUnion\]](#eq:AllSetUnion){reference-type="eqref" reference="eq:AllSetUnion"}, which covers all DG nodal points as well as the auxiliary points in element $\boldsymbol{K}$. In [@chu_etal_2019], the realizability-enforcing limiter was formulated following the approach considered in [@zhangShu_2010a; @zhangShu_2010b] for constructing bound-preserving limiters for high-order DG schemes. The limiter enforces moment realizability at each quadrature point in a DG element by relaxing unrealizable moments towards the realizable cell-averaged moments. Specifically, this limiter replaces unrealizable moments with their convex combinations with the cell-averaged moment, which preserves the Eulerian-frame particle number in each element (but not the energy; see Section [6.2](#sec:EnergyLimiter){reference-type="ref" reference="sec:EnergyLimiter"} for further discussions) when the same convex combination factors ($\theta^{\mathcal{N}}_{\boldsymbol{K}}$ and $\theta^{\boldsymbol{\mathcal{U}}}_{\boldsymbol{K}}$) are applied to all moments within the element. For completeness, the steps taken in this realizability-enforcing limiter are summarized in Algorithm [\[algo:realizabilityLimiter\]](#algo:realizabilityLimiter){reference-type="ref" reference="algo:realizabilityLimiter"}. We refer to [@chu_etal_2019] and references therein for detailed discussions. Algorithm [\[algo:realizabilityLimiter\]](#algo:realizabilityLimiter){reference-type="ref" reference="algo:realizabilityLimiter"} takes as inputs the discretized moments $\widehat{\boldsymbol{\mathcal{U}}}_h$ with $\widehat{\mathcal{N}}_{\boldsymbol{K}}>0$ for all $\boldsymbol{K}\in\mathcal{T}$, together with a parameter $0<\delta\ll 1$. As seen in Algorithm [\[algo:realizabilityLimiter\]](#algo:realizabilityLimiter){reference-type="ref" reference="algo:realizabilityLimiter"}, starting from discretized moments $\widehat{\boldsymbol{\mathcal{U}}}_{h}$ with positive cell-averaged number density $\widehat{\mathcal{N}}_{\boldsymbol{K}}$, the limiter enforces realizability of the resulting moments ${\boldsymbol{\mathcal{U}}}_{h}$ in the point set $\widetilde{S}^{\boldsymbol{K}}_{\otimes}$ by limiting toward the cell-averaged moments. The limiter is guaranteed to provide realizable outputs at the point set whenever the starting moments have a positive cell-averaged number density, and thus Proposition [Proposition 4](#prop:realizabilityLimiter){reference-type="ref" reference="prop:realizabilityLimiter"} holds. We note that, when approximate closures are considered, the explicit update may not result in moments with positive cell-averaged number density (since Assumption [Assumption 1](#assum:exact_closure){reference-type="ref" reference="assum:exact_closure"} does not hold). If a negative cell-averaged number density is observed in element $\boldsymbol{K}$, we set the moments in $\boldsymbol{K}$ to an isotropic moment with a small but positive number density and zero number flux. This safeguard affects the conservation property of the scheme; however, we do not observe a negative cell-averaged number density in any of the numerical experiments presented in Section [8](#sec:numericalResults){reference-type="ref" reference="sec:numericalResults"}. 
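To make the limiting procedure concrete, the following is a minimal Python sketch of a Zhang-Shu-type realizability-enforcing limiter for the set $\{\mathcal{N}>0,\ |\boldsymbol{\mathcal{G}}|\leq\mathcal{N}\}$: within each element, point values are damped toward the cell average, first the number density and then the full moment, with one factor per stage for the whole element. It is a simplified stand-in rather than the reference implementation of Algorithm [\[algo:realizabilityLimiter\]](#algo:realizabilityLimiter){reference-type="ref" reference="algo:realizabilityLimiter"}: the plain mean stands in for the quadrature-based cell average, all names are illustrative, and the bisection used for the second factor is just one convenient way to compute it.

```python
import numpy as np

def limit_element(N_pts, G_pts, delta=1e-13, iters=60):
    """
    Minimal sketch of a realizability-enforcing limiter on a single element for
    the set {N > 0, |G| <= N}.  N_pts: (Q,) number-density point values; G_pts:
    (Q, 3) number-flux point values.  The plain mean stands in for the
    quadrature-based cell average, which is assumed realizable (N_bar > delta,
    |G_bar| <= N_bar).  Point values are replaced by X_bar + theta * (X_q - X_bar),
    with one theta per stage shared by the whole element.
    """
    N_bar, G_bar = N_pts.mean(), G_pts.mean(axis=0)

    # Stage 1: enforce N >= delta by damping the density toward its cell average.
    N_min = N_pts.min()
    theta_N = 1.0 if N_min >= delta else (N_bar - delta) / (N_bar - N_min)
    N_lim = N_bar + theta_N * (N_pts - N_bar)

    # Stage 2: enforce |G| <= N at every point with one factor for the element,
    # found by bisection (theta = 0 recovers the realizable cell average).
    def realizable_everywhere(theta):
        N_t = N_bar + theta * (N_lim - N_bar)
        G_t = G_bar + theta * (G_pts - G_bar)
        return bool(np.all(np.linalg.norm(G_t, axis=1) <= N_t))

    theta_U = 1.0
    if not realizable_everywhere(theta_U):
        lo, hi = 0.0, 1.0
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if realizable_everywhere(mid) else (lo, mid)
        theta_U = lo
    return N_bar + theta_U * (N_lim - N_bar), G_bar + theta_U * (G_pts - G_bar)
```

In the scheme itself the element average is computed with the quadrature of Section [4.1](#sec:dgMethod){reference-type="ref" reference="sec:dgMethod"} rather than the plain mean used in this sketch, and the positivity of $\widehat{\mathcal{N}}_{\boldsymbol{K}}$ guaranteed by Proposition [Proposition 3](#prop:realizabilityExplicit){reference-type="ref" reference="prop:realizabilityExplicit"} is what makes the first stage well defined.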
## Conversion between Conserved and Primitive Moments {#sec:momentConversionRealizability} In this section, we prove Proposition [Proposition 5](#prop:realizabilityMomentConversion){reference-type="ref" reference="prop:realizabilityMomentConversion"} by showing that, under Assumption [Assumption 1](#assum:exact_closure){reference-type="ref" reference="assum:exact_closure"} and assuming $v_h<1$, (i) the conversion between conserved and primitive moments preserves realizability and (ii) the iterative solver in Eq. [\[eq:Picard\]](#eq:Picard){reference-type="eqref" reference="eq:Picard"} is guaranteed to converge to a unique $\boldsymbol{\mathcal{M}}\in\mathcal{R}$ that satisfies Eq. [\[eq:ConservedToPrimitive\]](#eq:ConservedToPrimitive){reference-type="eqref" reference="eq:ConservedToPrimitive"} given $\boldsymbol{\mathcal{U}}\in\mathcal{R}$. In the following two lemmas, we show that the realizability is preserved in the conversion between conserved and primitive moments. **Lemma 5**. *Suppose Assumption [Assumption 1](#assum:exact_closure){reference-type="ref" reference="assum:exact_closure"} holds and $v<1$. Let $\boldsymbol{\mathcal{U}}$ be given as in Eq. [\[eq:ConservedToPrimitive\]](#eq:ConservedToPrimitive){reference-type="eqref" reference="eq:ConservedToPrimitive"} with $\boldsymbol{\mathcal{M}}\in\mathcal{R}$, then $\boldsymbol{\mathcal{U}}\in\mathcal{R}$.* *Proof.* Let $f\in\mathfrak{R}$ be the underlying distribution for $\boldsymbol{\mathcal{M}}\in\mathcal{R}$. Then, from Eq. [\[eq:ConservedToPrimitive\]](#eq:ConservedToPrimitive){reference-type="eqref" reference="eq:ConservedToPrimitive"}, the components of $\boldsymbol{\mathcal{U}}$ can be written as $$\big(\mathcal{N}, \mathcal{G}_{j}\big)^{\intercal} = \frac{1}{4\pi}\int_{\mathbb{S}^{2}}\big(\,1+v^{i}\,\ell_{i}(\omega)\,\big)\,f(\omega)\,\big(1, \ell_{j}(\omega)\big)^{\intercal}\,d\omega:=\frac{1}{4\pi}\int_{\mathbb{S}^{2}}\mathsf{f}(\omega)\,\big(1, \ell_{j}(\omega)\big)^{\intercal}\,d\omega.$$ Since $f\in\mathfrak{R}$ and $v^{i}\,\ell_{i}\in(-1,1)$, it follows that $\mathsf{f}(\omega):=\big(\,1+v^{i}\,\ell_{i}(\omega)\,\big)\,f(\omega)\in\mathfrak{R}$ and thus $\boldsymbol{\mathcal{U}}\in\mathcal{R}$. ◻ **Lemma 6**. *Suppose Assumption [Assumption 1](#assum:exact_closure){reference-type="ref" reference="assum:exact_closure"} holds, $v<1$, and $\boldsymbol{\mathcal{U}}\in\mathcal{R}$. Then there exists some $\boldsymbol{\mathcal{M}}\in\mathcal{R}$ that satisfies Eq. [\[eq:ConservedToPrimitive\]](#eq:ConservedToPrimitive){reference-type="eqref" reference="eq:ConservedToPrimitive"}.* *Proof.* Let $\mathsf{f}\in\mathfrak{R}$ denote the underlying distribution for $\boldsymbol{\mathcal{U}}\in\mathcal{R}$. Then the components of $\boldsymbol{\mathcal{U}}$ can be written as $$\big(\mathcal{N}, \mathcal{G}_{j}\big)^{\intercal} = \frac{1}{4\pi}\int_{\mathbb{S}^{2}}\mathsf{f}(\omega)\,\big(1, \ell_{j}(\omega)\big)^{\intercal}\,d\omega.$$ Since $\mathsf{f}\in\mathfrak{R}$ and $v^{i}\,\ell_{i}\in(-1,1)$, it follows that $f(\omega):=\big(\,1+v^{i}\,\ell_{i}(\omega)\,\big)^{-1}\,\mathsf{f}(\omega)\in\mathfrak{R}$. Taking the moments of $f$ leads to $\boldsymbol{\mathcal{M}}\in\mathcal{R}$. Using the relation between $f$ and $\mathsf{f}$ it is then straightforward to verify that $\boldsymbol{\mathcal{M}}$ satisfies Eq. [\[eq:ConservedToPrimitive\]](#eq:ConservedToPrimitive){reference-type="eqref" reference="eq:ConservedToPrimitive"}. 
◻ Lemma [Lemma 6](#lem:BackwardRealizability){reference-type="ref" reference="lem:BackwardRealizability"} shows the existence of realizable primitive moments corresponding to given conserved moments. However, it does not provide guarantees on the convergence of the iterative solver we use to find the primitive moments. In the remainder of this subsection, we prove that the iterative solver in Eq. [\[eq:Picard\]](#eq:Picard){reference-type="eqref" reference="eq:Picard"} guarantees the convergence to a realizable moment $\boldsymbol{\mathcal{M}}$. To start, in the following lemma we show that realizability is guaranteed at each iteration of the solver in Eq. [\[eq:Picard\]](#eq:Picard){reference-type="eqref" reference="eq:Picard"}. **Lemma 7**. *Let $\boldsymbol{\mathcal{U}}\in\mathcal{R}$ and $\lambda \leq \frac{1}{1+v}$ in Eq. [\[eq:richardson_fixed_pt\]](#eq:richardson_fixed_pt){reference-type="eqref" reference="eq:richardson_fixed_pt"}. Then, the solver in Eq. [\[eq:Picard\]](#eq:Picard){reference-type="eqref" reference="eq:Picard"} guarantees that $\boldsymbol{\mathcal{M}}^{[k+1]}=(\mathcal{D}^{[k+1]},\boldsymbol{\mathcal{I}}^{[k+1]})^{\intercal}\in\mathcal{R}$, provided that $\boldsymbol{\mathcal{M}}^{[k]}=(\mathcal{D}^{[k]},\boldsymbol{\mathcal{I}}^{[k]})^{\intercal}\in\mathcal{R}$.* *Proof.* We write the iterative update in Eq. [\[eq:Picard\]](#eq:Picard){reference-type="eqref" reference="eq:Picard"} as $$\begin{alignedat}{2} \boldsymbol{\mathcal{M}}^{[k+1]} = \left( \begin{array}{c} \mathcal{D}^{[k+1]} \\ \mathcal{I}_{j}^{[k+1]} \end{array} \right) &= (1-\lambda) \left(\begin{array}{c} \mathcal{D}^{[k]} - \frac{\lambda}{1-\lambda} v^{i}\mathcal{I}_{i}^{[k]}\\ \mathcal{I}_{j}^{[k]} - \frac{\lambda}{1-\lambda} v^{i}\mathsf{k}_{ij}^{[k]}\mathcal{D}^{[k]} \end{array}\right) + \lambda \left( \begin{array}{c} \mathcal{N}\\ \mathcal{G}_{j} \end{array}\right)\\ &=: (1-\lambda)\,\widetilde{\boldsymbol{\mathcal{M}}}^{[k]} + \lambda\, {\boldsymbol{\mathcal{U}}}. \end{alignedat}$$ Since the realizable set $\mathcal{R}$ is convex and ${\boldsymbol{\mathcal{M}}}^{[k+1]}$ is a convex combination of $\widetilde{\boldsymbol{\mathcal{M}}}^{[k]}$ and $\boldsymbol{\mathcal{U}}\in\mathcal{R}$, it suffices to show that $\widetilde{\boldsymbol{\mathcal{M}}}^{[k]}\in\mathcal{R}$. We observe that the entries in $\widetilde{\boldsymbol{\mathcal{M}}}^{[k]}$ takes the exact same form as the ones on the right-hand side of Eq. [\[eq:ConservedToPrimitive\]](#eq:ConservedToPrimitive){reference-type="eqref" reference="eq:ConservedToPrimitive"}, except with $\boldsymbol{v}$ replaced by $-\frac{\lambda}{1-\lambda} \boldsymbol{v}$. It then follows from Lemma [Lemma 5](#lem:ForwardRealizability){reference-type="ref" reference="lem:ForwardRealizability"} that $\widetilde{\boldsymbol{\mathcal{M}}}^{[k]}\in\mathcal{R}$ if $\frac{\lambda}{1-\lambda} v \leq1$, i.e., $\lambda\leq\frac{1}{1+v}$. ◻ It is well-known that, when solving a fixed-point problem defined by a contraction operator, the Picard iteration converges to the unique fixed point (see, e.g., [@hairer1993solving]). We show below in Proposition [Proposition 9](#prop:contraction){reference-type="ref" reference="prop:contraction"} that the fixed-point operator $\boldsymbol{\mathcal{H}}_{\boldsymbol{\mathcal{U}}}$ defined in Eq. [\[eq:richardson_fixed_pt\]](#eq:richardson_fixed_pt){reference-type="eqref" reference="eq:richardson_fixed_pt"} is a contraction under mild assumptions on $v_h$, which thus guarantees the convergence of the iterative solver in Eq. 
[\[eq:Picard\]](#eq:Picard){reference-type="eqref" reference="eq:Picard"}. The proof of Proposition [Proposition 9](#prop:contraction){reference-type="ref" reference="prop:contraction"} uses results from the following two technical lemmas. **Lemma 8**. *For any $\boldsymbol{\mathcal{M}}\in\mathcal{R}$, $\| \partial_{\mathcal{D}}(v^{i}\mathsf{k}_{ij}\mathcal{D})\| \leq v$.* *Proof.* See [10.2](#sec:proof_of_dD){reference-type="ref" reference="sec:proof_of_dD"} for the proof. ◻ **Lemma 9**. *For any $\boldsymbol{\mathcal{M}}\in\mathcal{R}$, $\| \nabla_{\boldsymbol{\mathcal{I}}}(v^{i}\mathsf{k}_{ij}\mathcal{D}) \| \leq 2v$.* *Proof.* See [10.3](#sec:proof_of_dI){reference-type="ref" reference="sec:proof_of_dI"} for the proof. ◻ We now state and prove Proposition [Proposition 9](#prop:contraction){reference-type="ref" reference="prop:contraction"}. **Proposition 9**. *Suppose $v<\sqrt{2}-1$ and $\lambda\in(0,1]$. Then, $\boldsymbol{\mathcal{H}}_{\boldsymbol{\mathcal{U}}}$ defined in Eq. [\[eq:richardson_fixed_pt\]](#eq:richardson_fixed_pt){reference-type="eqref" reference="eq:richardson_fixed_pt"} is a contraction operator, i.e., there exists some $L<1$ such that $$\|\boldsymbol{\mathcal{H}}_{\boldsymbol{\mathcal{U}}}(\boldsymbol{\mathcal{M}}^{(1)}) - \boldsymbol{\mathcal{H}}_{\boldsymbol{\mathcal{U}}}(\boldsymbol{\mathcal{M}}^{(2)}) \| \leq L \| \boldsymbol{\mathcal{M}}^{(1)}- \boldsymbol{\mathcal{M}}^{(2)}\|\:, \quad \forall \boldsymbol{\mathcal{M}}^{(1)}, \boldsymbol{\mathcal{M}}^{(2)}\in \mathcal{R}\:.$$* *Proof.* First, for convenience, we denote $\Delta \mathcal{D}= \mathcal{D}^{(1)}- \mathcal{D}^{(2)}$ and $\Delta \mathcal{I}_j = \mathcal{I}^{(1)}_j - \mathcal{I}^{(2)}_j$. It then follows from the definition of $\boldsymbol{\mathcal{H}}_{\boldsymbol{\mathcal{U}}}$ and the triangle inequality that $$\|\boldsymbol{\mathcal{H}}_{\boldsymbol{\mathcal{U}}}(\boldsymbol{\mathcal{M}}^{(1)}) - \boldsymbol{\mathcal{H}}_{\boldsymbol{\mathcal{U}}}(\boldsymbol{\mathcal{M}}^{(2)})\| \leq (1-\lambda)\left\|\left(\begin{array}{l} \Delta \mathcal{D}\\ \Delta \mathcal{I}_j \end{array}\right)\right\| + \lambda \left\|\left(\begin{array}{c} v^{i}\,\Delta \mathcal{I}_i\\ v^{i}(\mathsf{k}^{(1)}_{ij}\mathcal{D}^{(1)}- \mathsf{k}^{(2)}_{ij}\mathcal{D}^{(2)}) \end{array}\right)\right\|$$ Thus, it suffices to show that, there exists some $\tilde{L}<1$ such that $$\label{eq:contraction_proof_1} \left\|\left(\begin{array}{c} v^{i}\,\Delta \mathcal{I}_i\\ v^{i}(\mathsf{k}^{(1)}_{ij}\mathcal{D}^{(1)}- \mathsf{k}^{(2)}_{ij}\mathcal{D}^{(2)}) \end{array}\right)\right\| \leq \tilde{L} \left\|\left(\begin{array}{l} \Delta \mathcal{D}\\ \Delta \mathcal{I}_j \end{array}\right)\right\|,\quad \forall \boldsymbol{\mathcal{M}}^{(1)}, \boldsymbol{\mathcal{M}}^{(2)}\in \mathcal{R}.$$ Lemmas [Lemma 8](#lemma:dD_term){reference-type="ref" reference="lemma:dD_term"} and [Lemma 9](#lemma:dI_term){reference-type="ref" reference="lemma:dI_term"} imply that the gradients of $v^{i}\mathsf{k}_{ij}\mathcal{D}$ in the $\mathcal{D}$ and $\boldsymbol{\mathcal{I}}$ directions are bounded. 
Thus, we have $$\begin{alignedat}{2} \| v^{i}(\mathsf{k}^{(1)}_{ij}\mathcal{D}^{(1)}- \mathsf{k}^{(2)}_{ij}\mathcal{D}^{(2)}) \| &\leq \| \partial_{\mathcal{D}}(v^{i}\mathsf{k}_{ij}\mathcal{D})\| \| \Delta \mathcal{D}\| + \| \nabla_{\boldsymbol{\mathcal{I}}}(v^{i}\mathsf{k}_{ij}\mathcal{D}) \| \| \Delta \mathcal{I}_j \|\\ &\leq v \| \Delta \mathcal{D}\| + 2v \| \Delta \mathcal{I}_j \|\:, \end{alignedat}$$ which leads to $$\label{eq:vkD_bound} \begin{alignedat}{2} \| v^{i}(\mathsf{k}^{(1)}_{ij}\mathcal{D}^{(1)}- \mathsf{k}^{(2)}_{ij}\mathcal{D}^{(2)}) \|^2 &\leq v^2 \| \Delta \mathcal{D}\|^2 + 4v^2 \| \Delta \mathcal{D}\| \|\Delta \mathcal{I}_j \| + 4v^2 \|\Delta \mathcal{I}_j \|^2\\ &\leq (3+2\sqrt{2}) v^2 \| \Delta \mathcal{D}\|^2 + (2+2\sqrt{2})v^2 \|\Delta \mathcal{I}_j \|^2\:, \end{alignedat}$$ where the second inequality follows from the inequality, $2ab \leq (\sqrt{2}+1) a^2 + (\sqrt{2}-1) b^2$, with $a=\sqrt{2}v\|\Delta\mathcal{D}\|$ and $b=\sqrt{2}v\|\Delta \mathcal{I}_j\|$. Taking the square of the left-hand side in Eq. [\[eq:contraction_proof_1\]](#eq:contraction_proof_1){reference-type="eqref" reference="eq:contraction_proof_1"} and applying the inequality in Eq. [\[eq:vkD_bound\]](#eq:vkD_bound){reference-type="eqref" reference="eq:vkD_bound"} gives $$\begin{alignedat}{2} \left\|\left(\begin{array}{c} v^{i}\,\Delta \mathcal{I}_i\\ v^{i}(\mathsf{k}^{(1)}_{ij}\mathcal{D}^{(1)}- \mathsf{k}^{(2)}_{ij}\mathcal{D}^{(2)}) \end{array}\right)\right\| ^2 &= v^2 \| \Delta \mathcal{I}_j \|^2 + \| v^{i}(\mathsf{k}^{(1)}_{ij}\mathcal{D}^{(1)}- \mathsf{k}^{(2)}_{ij}\mathcal{D}^{(2)}) \|^2\\ &\leq (3+2\sqrt{2}) v^2 (\| \Delta \mathcal{D}\|^2 + \| \Delta \mathcal{I}_j \|^2)\:. \end{alignedat}$$ Let $\tilde{L}:=\sqrt{3+2\sqrt{2}} \, v$, the claim then holds when $v < \big(\sqrt{3+2\sqrt{2}}\,\big)^{-1} = \sqrt{2}-1$. ◻ **Theorem 2**. *Suppose $v<\sqrt{2}-1$ and $\lambda \leq \frac{1}{1+v}$ in Eq. [\[eq:richardson_fixed_pt\]](#eq:richardson_fixed_pt){reference-type="eqref" reference="eq:richardson_fixed_pt"}. Then, for any given conserved moment $\boldsymbol{\mathcal{U}}=(\mathcal{N},\boldsymbol{\mathcal{G}})^{\intercal}\in\mathcal{R}$ and initial primitive moment $\boldsymbol{\mathcal{M}}^{[0]}\in\mathcal{R}$, the iterative solver in Eq. [\[eq:Picard\]](#eq:Picard){reference-type="eqref" reference="eq:Picard"} converges to the unique realizable primitive moment $\boldsymbol{\mathcal{M}}$ that satisfies Eq. [\[eq:ConservedToPrimitive\]](#eq:ConservedToPrimitive){reference-type="eqref" reference="eq:ConservedToPrimitive"} as $k\to\infty$.* *Proof.* This theorem is a direct consequence from the realizability-preserving property of the solver shown in Lemma [Lemma 7](#lem:solver_realizability){reference-type="ref" reference="lem:solver_realizability"} and the contraction property of $\boldsymbol{\mathcal{H}}_{\boldsymbol{\mathcal{U}}}$ proved in Proposition [Proposition 9](#prop:contraction){reference-type="ref" reference="prop:contraction"}. ◻ The results in Theorem [Theorem 2](#thm:convergence){reference-type="ref" reference="thm:convergence"} lead to the following corollary on the uniqueness of realizable primitive moments associated with realizable conserved moments. **Corollary 1**. *Suppose $v<\sqrt{2}-1$. For any conserved moment $\boldsymbol{\mathcal{U}}\in\mathcal{R}$, there exists a unique realizable primitive moment $\boldsymbol{\mathcal{M}}$ that satisfies Eq. 
[\[eq:ConservedToPrimitive\]](#eq:ConservedToPrimitive){reference-type="eqref" reference="eq:ConservedToPrimitive"}.* ## Implicit Collision Update {#sec:collision} In this section, we prove Proposition [Proposition 6](#prop:realizabilityImplicit){reference-type="ref" reference="prop:realizabilityImplicit"}, which states that, under Assumption [Assumption 1](#assum:exact_closure){reference-type="ref" reference="assum:exact_closure"}, the implicit update in Eq. [\[eq:imexBackwardEuler\]](#eq:imexBackwardEuler){reference-type="eqref" reference="eq:imexBackwardEuler"} preserves realizability when the iterative solver in Eq. [\[eq:collision_Picard\]](#eq:collision_Picard){reference-type="eqref" reference="eq:collision_Picard"} is used. We first show in the following lemma that realizability is preserved in each iteration when starting from a realizable moment. **Lemma 10**. *Let $\boldsymbol{\mathcal{U}}^{{(\ast)}}=(\mathcal{N}^{{(\ast)}},\boldsymbol{\mathcal{G}}^{{(\ast)}})^{\intercal}\in\mathcal{R}$, $\lambda \leq \frac{1}{1+v}$, and $\kappa\geq\chi\geq0$ in Eq. [\[eq:collision_fixed_point1\]](#eq:collision_fixed_point1){reference-type="eqref" reference="eq:collision_fixed_point1"}. Then, the solver in Eq. [\[eq:collision_Picard\]](#eq:collision_Picard){reference-type="eqref" reference="eq:collision_Picard"} guarantees that $\boldsymbol{\mathcal{M}}^{[k+1]}=(\mathcal{D}^{[k+1]},\boldsymbol{\mathcal{I}}^{[k+1]})^{\intercal}\in\mathcal{R}$, provided that $\boldsymbol{\mathcal{M}}^{[k]}=(\mathcal{D}^{[k]},\boldsymbol{\mathcal{I}}^{[k]})^{\intercal}\in\mathcal{R}$.* *Proof.* We follow the approach in the proof of Lemma [Lemma 7](#lem:solver_realizability){reference-type="ref" reference="lem:solver_realizability"} and write Eq. [\[eq:collision_Picard\]](#eq:collision_Picard){reference-type="eqref" reference="eq:collision_Picard"} as $$\begin{alignedat}{2} \boldsymbol{\mathcal{M}}^{[k+1]} &= (1-\lambda)\,\Lambda \left(\begin{array}{c} \mathcal{D}^{[k]} - \frac{\lambda}{1-\lambda} v^{i}\mathcal{I}_{i}^{[k]}\\ \mathcal{I}_{j}^{[k]} - \frac{\lambda}{1-\lambda} v^{i}\mathsf{k}_{ij}^{[k]}\mathcal{D}^{[k]} \end{array}\right) + \lambda\, \Lambda \left( \begin{array}{c} \mathcal{N}^{{(\ast)}}+ \Delta t\chi\,\mathcal{D}_{0}\\ \mathcal{G}_{j}^{{(\ast)}} \end{array}\right)\\ &=: (1-\lambda)\,\Lambda\,\tilde{\boldsymbol{\mathcal{M}}}^{[k]} + \lambda\,\Lambda\, \tilde{\boldsymbol{\mathcal{U}}}^{{(\ast)}}. \end{alignedat}$$ In the proof of Lemma [Lemma 7](#lem:solver_realizability){reference-type="ref" reference="lem:solver_realizability"}, we have shown that $\tilde{\boldsymbol{\mathcal{M}}}^{[k]}\in\mathcal{R}$ when $\lambda\leq\frac{1}{1+v}$. Also, it is clear that $\tilde{\boldsymbol{\mathcal{U}}}^{{(\ast)}}\in\mathcal{R}$ because ${\boldsymbol{\mathcal{U}}}^{{(\ast)}}\in\mathcal{R}$, $\chi\geq0$ and $\mathcal{D}_0\geq0$. Since $\kappa\geq\chi\geq0$, we have $\mu_{\chi}\geq\mu_{\kappa}\geq0$, implying that $\Lambda\,\boldsymbol{\mathcal{M}}\in\mathcal{R}$ for all $\boldsymbol{\mathcal{M}}\in\mathcal{R}$ based on the definition of $\mathcal{R}$ in Eq. [\[eq:realizableSet\]](#eq:realizableSet){reference-type="eqref" reference="eq:realizableSet"}. Therefore, $\Lambda\,\tilde{\boldsymbol{\mathcal{M}}}^{[k]}$ and $\Lambda\, \tilde{\boldsymbol{\mathcal{U}}}^{{(\ast)}}$ are both realizable, which, together with the convexity of $\mathcal{R}$, completes the proof. 
◻ Similar to the moment conversion problem considered in Section [5.3](#sec:momentConversionRealizability){reference-type="ref" reference="sec:momentConversionRealizability"}, we next show in the following proposition that the operator $\boldsymbol{\mathcal{Q}}$ in Eq. [\[eq:collision_fixed_point1\]](#eq:collision_fixed_point1){reference-type="eqref" reference="eq:collision_fixed_point1"} is a contraction, which implies convergence of the Picard iteration method in Eq. [\[eq:collision_Picard\]](#eq:collision_Picard){reference-type="eqref" reference="eq:collision_Picard"}. **Proposition 10**. *Suppose $v<\sqrt{2}-1$, $\lambda\in(0,1]$, and $\kappa\geq\chi\geq0$. Then, $\boldsymbol{\mathcal{Q}}$ is a contraction operator, i.e., there exists some $L<1$ such that $$\|\boldsymbol{\mathcal{Q}}(\boldsymbol{\mathcal{M}}^{(1)}) - \boldsymbol{\mathcal{Q}}(\boldsymbol{\mathcal{M}}^{(2)}) \| \leq L \| \boldsymbol{\mathcal{M}}^{(1)}- \boldsymbol{\mathcal{M}}^{(2)}\|\:, \quad \forall \boldsymbol{\mathcal{M}}^{(1)}, \boldsymbol{\mathcal{M}}^{(2)}\in \mathcal{R}\:.$$* *Proof.* From the definitions of $\boldsymbol{\mathcal{H}}_{\boldsymbol{\mathcal{U}}}$ and $\boldsymbol{\mathcal{Q}}$ in Eqs. [\[eq:richardson_fixed_pt\]](#eq:richardson_fixed_pt){reference-type="eqref" reference="eq:richardson_fixed_pt"} and [\[eq:collision_fixed_point1\]](#eq:collision_fixed_point1){reference-type="eqref" reference="eq:collision_fixed_point1"}, we observe that $$\begin{alignedat}{2} \|\boldsymbol{\mathcal{Q}}(\boldsymbol{\mathcal{M}}^{(1)}) - \boldsymbol{\mathcal{Q}}(\boldsymbol{\mathcal{M}}^{(2)}) \| &= \|\Lambda(\boldsymbol{\mathcal{H}}_{\boldsymbol{\mathcal{U}}}(\boldsymbol{\mathcal{M}}^{(1)}) - \boldsymbol{\mathcal{H}}_{\boldsymbol{\mathcal{U}}}(\boldsymbol{\mathcal{M}}^{(2)})) \|\\ &\leq \|\Lambda\|\|\boldsymbol{\mathcal{H}}_{\boldsymbol{\mathcal{U}}}(\boldsymbol{\mathcal{M}}^{(1)}) - \boldsymbol{\mathcal{H}}_{\boldsymbol{\mathcal{U}}}(\boldsymbol{\mathcal{M}}^{(2)}) \| \end{alignedat}$$ for all $\boldsymbol{\mathcal{M}}^{(1)}, \boldsymbol{\mathcal{M}}^{(2)}\in \mathcal{R}$. Since $\kappa\geq\chi\geq0$ and $\lambda>0$, we have $0\leq\mu_{\kappa}\leq \mu_{\chi} \leq 1$, i.e., $\|\Lambda\|\leq 1$. The claim is thus a direct consequence of Proposition [Proposition 9](#prop:contraction){reference-type="ref" reference="prop:contraction"}. ◻ **Theorem 3**. *Suppose $v<\sqrt{2}-1$, $\lambda \leq \frac{1}{1+v}$, and $\kappa\geq\chi\geq0$ in Eq. [\[eq:collision_fixed_point1\]](#eq:collision_fixed_point1){reference-type="eqref" reference="eq:collision_fixed_point1"}. Then, for any given conserved moment $\boldsymbol{\mathcal{U}}^{{(\ast)}}=(\mathcal{N}^{{(\ast)}},\boldsymbol{\mathcal{G}}^{{(\ast)}})^{\intercal}\in\mathcal{R}$ and initial primitive moment $\boldsymbol{\mathcal{M}}^{[0]}\in\mathcal{R}$, the iterative solver in Eq. [\[eq:collision_Picard\]](#eq:collision_Picard){reference-type="eqref" reference="eq:collision_Picard"} converges to a unique realizable primitive moment $\boldsymbol{\mathcal{M}}$ as $k\to\infty$. Further, the conserved moment $\boldsymbol{\mathcal{U}}$ associated to $\boldsymbol{\mathcal{M}}$ via Eq. [\[eq:ConservedToPrimitive\]](#eq:ConservedToPrimitive){reference-type="eqref" reference="eq:ConservedToPrimitive"} is also realizable and solves the implicit system in Eq. 
[\[eq:ImplicitSystem\]](#eq:ImplicitSystem){reference-type="eqref" reference="eq:ImplicitSystem"}.* *Proof.* The convergence to a unique $\boldsymbol{\mathcal{M}}\in\mathcal{R}$ is given by the realizability-preserving property in Lemma [Lemma 10](#lem:collision_solver_realizability){reference-type="ref" reference="lem:collision_solver_realizability"} and the contraction property in Proposition [Proposition 10](#prop:collision_contraction){reference-type="ref" reference="prop:collision_contraction"}. Realizability of $\boldsymbol{\mathcal{U}}$ follows from Lemma [Lemma 5](#lem:ForwardRealizability){reference-type="ref" reference="lem:ForwardRealizability"}, and the formulation of the fixed-point problem in Eq. [\[eq:collision_fixed_point1\]](#eq:collision_fixed_point1){reference-type="eqref" reference="eq:collision_fixed_point1"} guarantees that $\boldsymbol{\mathcal{U}}$ solves the implicit system in Eq. [\[eq:ImplicitSystem\]](#eq:ImplicitSystem){reference-type="eqref" reference="eq:ImplicitSystem"}. ◻ ## Extension to Approximate Moment Closures {#sec:approxClosure} In the earlier sections, we have shown the realizability-preserving property of the numerical scheme [\[eq:imexForwardEuler\]](#eq:imexForwardEuler){reference-type="eqref" reference="eq:imexForwardEuler"}--[\[eq:imexBackwardEuler\]](#eq:imexBackwardEuler){reference-type="eqref" reference="eq:imexBackwardEuler"} under Assumption [Assumption 1](#assum:exact_closure){reference-type="ref" reference="assum:exact_closure"}, in which the use of exact moment closures is assumed. As discussed in Sections [2](#sec:model){reference-type="ref" reference="sec:model"} and [3](#sec:closure){reference-type="ref" reference="sec:closure"}, the approximate Minerbo closure is often used in practice to reduce the computational cost, where the approximate Eddington factor $\psi_{\mathsf{a}}$ and heat-flux factor $\zeta_{\mathsf{a}}$, defined respectively in Eqs. [\[eq:psiApproximate\]](#eq:psiApproximate){reference-type="eqref" reference="eq:psiApproximate"} and [\[eq:zetaApproximate\]](#eq:zetaApproximate){reference-type="eqref" reference="eq:zetaApproximate"}, are considered. In this section, we show that the realizability-preserving and convergence analyses for the conserved-to-primitive moment conversion (Eq. [\[eq:ConservedToPrimitive\]](#eq:ConservedToPrimitive){reference-type="eqref" reference="eq:ConservedToPrimitive"}) and the implicit update (Eq. [\[eq:imexBackwardEuler\]](#eq:imexBackwardEuler){reference-type="eqref" reference="eq:imexBackwardEuler"}) given in Sections [5.3](#sec:momentConversionRealizability){reference-type="ref" reference="sec:momentConversionRealizability"} and [5.4](#sec:collision){reference-type="ref" reference="sec:collision"} can be extended to the case when the approximate Minerbo closure is used. When the approximate Eddington factor $\psi_{\mathsf{a}}$ in Eq. [\[eq:psiApproximate\]](#eq:psiApproximate){reference-type="eqref" reference="eq:psiApproximate"} is used, we replace Lemma [Lemma 5](#lem:ForwardRealizability){reference-type="ref" reference="lem:ForwardRealizability"} with the following lemma. **Lemma 11**. *Suppose $v<1$. Let $\boldsymbol{\mathcal{U}}$ be given as in Eq. 
[\[eq:ConservedToPrimitive\]](#eq:ConservedToPrimitive){reference-type="eqref" reference="eq:ConservedToPrimitive"} with $\boldsymbol{\mathcal{M}}\in\mathcal{R}$; then $\boldsymbol{\mathcal{U}}\in\mathcal{R}$.*

*Proof.* Since $\boldsymbol{\mathcal{M}}=\big(\mathcal{D},\boldsymbol{\mathcal{I}}\big)^{\intercal}\in\mathcal{R}$, we know that $\mathcal{D}>0$ and $\mathcal{D}-\mathcal{I}\geq0$. To show $\boldsymbol{\mathcal{U}}=\big(\mathcal{N},\boldsymbol{\mathcal{G}}\big)^{\intercal}\in\mathcal{R}$, we first prove $\mathcal{N}>0$. By definition, $$\mathcal{N} = \mathcal{D} + v^{i} \mathcal{I}_{i} \geq \mathcal{D} - v {\mathcal{I}} \geq (1-v)\mathcal{D} >0\:,$$ where the Cauchy--Schwarz inequality, the realizability bound $\mathcal{I}\leq\mathcal{D}$, and the assumption that $v<1$ are used. We next prove that $\mathcal{N}^2 - \mathcal{G}^2 \geq0$ with $\mathcal{G} := |\boldsymbol{\mathcal{G}}|$, which, since $\mathcal{N}>0$, implies $\mathcal{N} - \mathcal{G} \geq0$. Writing $\mathcal{N}^2$ and $\mathcal{G}^2$ in terms of the primitive moments leads to $$\begin{alignedat}{2} \mathcal{N}^2 &= \mathcal{D}^2 + 2 (v^{i} \mathcal{I}_{i}) \mathcal{D} + (v^{i} \mathcal{I}_{i})^2\:,\\ &= \mathcal{D}^2 \big( 1 + 2\, v^{i}\, \hat{n}_{i} h + (v^{i} \,\hat{n}_{i})^2 h^2 \big)\:,\\ \mathcal{G}^2 &= \mathcal{I}^2 + 2 \mathcal{I}^{j}(v^{i}\,\mathsf{k}_{ij})\mathcal{D} + (v_{\ell}\,\mathsf{k}^{\ell j})(v^{i}\,\mathsf{k}_{ij}) \mathcal{D}^2\:,\\ &= \mathcal{D}^2 \big( h^2 + 2\, \hat{n}^{j} \,v^{i}\,\mathsf{k}_{ij} h+ v_{\ell}\,\mathsf{k}^{\ell j}\,v^{i}\,\mathsf{k}_{ij}\, \big)\:. \end{alignedat} \label{eq:NormalizedNG}$$ Using the definition of $\mathsf{k}_{ij}$ in Eq. [\[eq:VariableEddingtonTensor\]](#eq:VariableEddingtonTensor){reference-type="eqref" reference="eq:VariableEddingtonTensor"} we obtain $$\begin{aligned} \hat{n}^{j} \,v^{i}\,\mathsf{k}_{ij} &= \hat{n}^{j} \,v^{i}\,\frac{1}{2} \big( (1-\psi_{\mathsf{a}}) \delta_{ij} + (3\psi_{\mathsf{a}}-1) \hat{n}_i \hat{n}_j\big) = \psi_{\mathsf{a}}\,v^{i}\,\hat{n}_i\:,\\ v_{\ell}\,\mathsf{k}^{\ell j}\,v^{i}\,\mathsf{k}_{ij} &= \frac{1}{4} \big((1-\psi_{\mathsf{a}})^2 v^2 + (1+\psi_{\mathsf{a}})(3\psi_{\mathsf{a}} - 1) (v_i\,\hat{n}^{i})^2 \big)\:. \end{aligned}$$ Plugging these terms into Eq. [\[eq:NormalizedNG\]](#eq:NormalizedNG){reference-type="eqref" reference="eq:NormalizedNG"}, denoting $s:= v^i \hat{n}_i$, and using the assumption that $v<1$ leads to a sufficient condition for $\mathcal{N}^2 - \mathcal{G}^2 \geq0$: $\forall s\in[-1,1]$ and $\forall h \in [0, 1]$, $$\big(1 - h^2 - \frac{1}{4} (1-\psi_{\mathsf{a}})^2 \big) + 2 (1-\psi_{\mathsf{a}})s h + \big( h^2- \frac{1}{4} (1+\psi_{\mathsf{a}})(3\psi_{\mathsf{a}} - 1) \big) s^2 \geq0\:.$$ From Lemma [Lemma 12](#lemma:polynomial_bounds){reference-type="ref" reference="lemma:polynomial_bounds"} (e), we have $1-\psi_{\mathsf{a}}\geq0$.
Thus, by applying the inequality $2sh\geq -1-s^2h^2$ to the second term above, it suffices to show that $$\big[\psi_{\mathsf{a}} - h^2 - \frac{1}{4} (1-\psi_{\mathsf{a}})^2 \big] +\big[ \psi_{\mathsf{a}} h^2- \frac{1}{4} (1+\psi_{\mathsf{a}})(3\psi_{\mathsf{a}} - 1) \big] s^2 \geq0\:.$$ Since $\psi_{\mathsf{a}} - h^2 - \frac{1}{4} (1-\psi_{\mathsf{a}})^2\geq0$ (Lemma [Lemma 12](#lemma:polynomial_bounds){reference-type="ref" reference="lemma:polynomial_bounds"} (f)) and $s^2\in[0,1]$, it further suffices to show that $$\big[\psi_{\mathsf{a}} - h^2 - \frac{1}{4} (1-\psi_{\mathsf{a}})^2 + \psi_{\mathsf{a}} h^2- \frac{1}{4} (1+\psi_{\mathsf{a}})(3\psi_{\mathsf{a}} - 1) \big] s^2 \geq0\:,$$ which simplifies to $$(\psi_{\mathsf{a}} - h^2)(1-\psi_{\mathsf{a}}) s^2\geq0\:.$$ With $h^2\leq\psi_{\mathsf{a}}\leq1$ from Lemma [Lemma 12](#lemma:polynomial_bounds){reference-type="ref" reference="lemma:polynomial_bounds"} (e), the proof is complete. ◻

Lemma [Lemma 11](#lemma:MtoU){reference-type="ref" reference="lemma:MtoU"} extends Lemma [Lemma 5](#lem:ForwardRealizability){reference-type="ref" reference="lem:ForwardRealizability"} by showing that the mapping from primitive moments to conserved moments via Eq. [\[eq:ConservedToPrimitive\]](#eq:ConservedToPrimitive){reference-type="eqref" reference="eq:ConservedToPrimitive"} preserves realizability even when the approximate Minerbo closure is used. However, there is no analogous extension of Lemma [Lemma 6](#lem:BackwardRealizability){reference-type="ref" reference="lem:BackwardRealizability"} to the case of approximate closures. To show that the map from conserved to primitive moments is realizability-preserving with the approximate closure, we verify that the analysis from Lemma [Lemma 7](#lem:solver_realizability){reference-type="ref" reference="lem:solver_realizability"} to Corollary [Corollary 1](#coro:UtoM){reference-type="ref" reference="coro:UtoM"} is still valid when the approximate closure is considered. Specifically, when $\psi$ is replaced by $\psi_{\mathsf{a}}$, the result of Lemma [Lemma 7](#lem:solver_realizability){reference-type="ref" reference="lem:solver_realizability"} can be obtained by invoking Lemma [Lemma 11](#lemma:MtoU){reference-type="ref" reference="lemma:MtoU"} rather than Lemma [Lemma 5](#lem:ForwardRealizability){reference-type="ref" reference="lem:ForwardRealizability"} in the proof; the results of Lemmas [Lemma 8](#lemma:dD_term){reference-type="ref" reference="lemma:dD_term"} and [Lemma 9](#lemma:dI_term){reference-type="ref" reference="lemma:dI_term"} hold since it is shown in [10.1](#sec:polynomial_bounds){reference-type="ref" reference="sec:polynomial_bounds"} that $\psi_{\mathsf{a}}$ also satisfies the required properties of $\psi$; and the remainder of the analysis stays identical to the exact closure case considered in Section [5.3](#sec:momentConversionRealizability){reference-type="ref" reference="sec:momentConversionRealizability"}. Therefore, we have shown that, when $\psi$ is replaced by $\psi_{\mathsf{a}}$, the iterative solver in Eq. [\[eq:Picard\]](#eq:Picard){reference-type="eqref" reference="eq:Picard"} converges to the unique, realizable primitive moment that satisfies Eq. [\[eq:ConservedToPrimitive\]](#eq:ConservedToPrimitive){reference-type="eqref" reference="eq:ConservedToPrimitive"} for the given conserved moment, which implies that the conserved-to-primitive moment map from Eq.
[\[eq:ConservedToPrimitive\]](#eq:ConservedToPrimitive){reference-type="eqref" reference="eq:ConservedToPrimitive"} still preserves realizability when $v<\sqrt{2}-1$. Further, the convergence and realizability-preserving properties of the iterative solver in Eq. [\[eq:collision_Picard\]](#eq:collision_Picard){reference-type="eqref" reference="eq:collision_Picard"} for the implicit system in Eq. [\[eq:collision_fixed_point1\]](#eq:collision_fixed_point1){reference-type="eqref" reference="eq:collision_fixed_point1"} given in Theorem [Theorem 3](#thm:collision_convergence){reference-type="ref" reference="thm:collision_convergence"} also hold in the approximate closure case, which follows by applying the same arguments to the analysis in Section [5.4](#sec:collision){reference-type="ref" reference="sec:collision"}.

# Conservation Property {#sec:conservation}

## Simultaneous Number and Energy Conservation of the DG Scheme {#sec:dgConservation}

It has been shown in Proposition [Proposition 1](#prop:EnergyandMomentumConservation){reference-type="ref" reference="prop:EnergyandMomentumConservation"} that the two-moment model in Eqs. [\[eq:spectralNumberEquation\]](#eq:spectralNumberEquation){reference-type="eqref" reference="eq:spectralNumberEquation"}--[\[eq:spectralNumberFluxEquation\]](#eq:spectralNumberFluxEquation){reference-type="eqref" reference="eq:spectralNumberFluxEquation"} conserves the Eulerian-frame energy up to $\mathcal{O}(v)$. In this section, we discuss the simultaneous Eulerian-frame number and energy conservation properties of the two-moment model with the discontinuous Galerkin phase-space discretization presented in Section [4.1](#sec:dgMethod){reference-type="ref" reference="sec:dgMethod"}. We are primarily concerned with consistency with the Eulerian-frame energy equation for the phase-space advection problem. For this reason, we consider the collisionless case. Eulerian-frame number conservation follows from the first component of the semi-discrete DG scheme in Eq.
[\[eq:dgSemiDiscrete\]](#eq:dgSemiDiscrete){reference-type="eqref" reference="eq:dgSemiDiscrete"} (treating the general case with $d_{\boldsymbol{x}}=3$), $$\begin{aligned} &\big(\,\partial_{t}\mathcal{N}_{h},\varphi_{h}\,\big)_{\boldsymbol{K}} =-\sum_{i=1}^{3}\int_{\tilde{\boldsymbol{K}}^{i}} \Big[\, \widehat{\mathcal{F}_{\mathcal{N}}^{i}}\big(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}\big)\,\varphi_{h}|_{x_{\mbox{\tiny\sc H}}^{i}} -\widehat{\mathcal{F}_{\mathcal{N}}^{i}}\big(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}\big)\,\varphi_{h}|_{x_{\mbox{\tiny\sc L}}^{i}} \,\Big]\,\tau\,d\tilde{\boldsymbol{z}}^{i} +\sum_{i=1}^{3}\big(\,\mathcal{F}_{\mathcal{N}}^{i}(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}),\partial_{i}{\varphi_{h}}\,\big)_{\boldsymbol{K}} \nonumber \\ &\hspace{36pt} -\int_{\boldsymbol{K}_{\boldsymbol{x}}} \Big[\, \varepsilon^{3}\,\widehat{\mathcal{F}_{\mathcal{N}}^{\varepsilon}}\big(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}\big)\,\varphi_{h}|_{\varepsilon_{\mbox{\tiny\sc H}}} -\varepsilon^{3}\,\widehat{\mathcal{F}_{\mathcal{N}}^{\varepsilon}}\big(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}\big)\,\varphi_{h}|_{\varepsilon_{\mbox{\tiny\sc L}}} \,\Big]\,d\boldsymbol{x} +\big(\,\varepsilon\,\mathcal{F}_{\mathcal{N}}^{\varepsilon}(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}),\partial_{\varepsilon}{\varphi_{h}}\,\big)_{\boldsymbol{K}}, \label{eq:dgSemiDiscreteNumber}\end{aligned}$$ where $\mathcal{F}_{\mathcal{N}}^{i}$ and $\mathcal{F}_{\mathcal{N}}^{\varepsilon}$ are the first components of the position and energy space fluxes, respectively, defined in Eq. [\[eq:phaseSpaceFluxes\]](#eq:phaseSpaceFluxes){reference-type="eqref" reference="eq:phaseSpaceFluxes"}, and $\widehat{\mathcal{F}_{\mathcal{N}}^{i}}$ and $\widehat{\mathcal{F}_{\mathcal{N}}^{\varepsilon}}$ are the corresponding numerical fluxes, defined in Eqs. [\[eq:numericalFluxPosition\]](#eq:numericalFluxPosition){reference-type="eqref" reference="eq:numericalFluxPosition"} and [\[eq:numericalFluxEnergy\]](#eq:numericalFluxEnergy){reference-type="eqref" reference="eq:numericalFluxEnergy"}. Setting $\varphi_h=1$ as the test function in Eq. [\[eq:dgSemiDiscreteNumber\]](#eq:dgSemiDiscreteNumber){reference-type="eqref" reference="eq:dgSemiDiscreteNumber"} results in the equation for the element-integrated Eulerian-frame number density. Note that the volume terms (the second and fourth terms) on the right-hand side of Eq. [\[eq:dgSemiDiscreteNumber\]](#eq:dgSemiDiscreteNumber){reference-type="eqref" reference="eq:dgSemiDiscreteNumber"} vanish when $\varphi_{h}=1$. Then, because the numerical fluxes $\widehat{\mathcal{F}_{\mathcal{N}}^{i}}$ and $\widehat{\mathcal{F}_{\mathcal{N}}^{\varepsilon}}$ are continuous on element interfaces, summation over all phase-space elements $\boldsymbol{K}\in D$ results in cancellation of all interior fluxes, and the resulting rate of change in the total Eulerian-frame particle number is only due to the flow of particles through the boundary of the domain $D$. That is, the DG scheme for the Eulerian-frame particle number is conservative by construction. As for Eulerian-frame energy conservation, similar to Eq.
[\[eq:energyEquationEulerian\]](#eq:energyEquationEulerian){reference-type="eqref" reference="eq:energyEquationEulerian"} in Proposition [Proposition 1](#prop:EnergyandMomentumConservation){reference-type="ref" reference="prop:EnergyandMomentumConservation"}, the element-integrated Eulerian-frame energy equation can be derived by adding the Eulerian-frame number equation in Eq. [\[eq:dgSemiDiscreteNumber\]](#eq:dgSemiDiscreteNumber){reference-type="eqref" reference="eq:dgSemiDiscreteNumber"}, with $\varphi_h = \varepsilon$, and the sum of the three number flux equations in Eq. [\[eq:dgSemiDiscrete\]](#eq:dgSemiDiscrete){reference-type="eqref" reference="eq:dgSemiDiscrete"} with test functions $\varphi_h = \varepsilon v_h^{j}$. To accommodate this choice of test functions, the approximation space $\mathbb{V}_h^k$ must include the piecewise linear function in the energy dimension, i.e., $k\geq1$. Let $\mathcal{E}_h:=\varepsilon\, ( \mathcal{N}_h + v^{j}_h \,{\mathcal{G}}_{h,j} )$ denote the discretized Eulerian-frame energy density. Then, the resulting equation for the element-integrated Eulerian-frame energy takes the form $$\begin{aligned} &\big(\,\partial_{t}\mathcal{E}_{h}\,\big)_{\boldsymbol{K}} := \big(\,\partial_{t}\mathcal{N}_{h},\varepsilon\,\big)_{\boldsymbol{K}} + \big(\,\partial_{t}\mathcal{G}_{j,h},\varepsilon v_{h}^{j}\,\big)_{\boldsymbol{K}} \nonumber \\ &=-\sum_{i=1}^{3}\int_{\tilde{\boldsymbol{K}}^{i}} \Big[\, \widehat{\mathcal{F}_{\mathcal{E}}^{i}}\big(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}\big)|_{x_{\mbox{\tiny\sc H}}^{i}} -\widehat{\mathcal{F}_{\mathcal{E}}^{i}}\big(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}\big)|_{x_{\mbox{\tiny\sc L}}^{i}} \,\Big]\,\tau\,d\tilde{\boldsymbol{z}}^{i} -\int_{\boldsymbol{K}_{\boldsymbol{x}}} \Big[\, \varepsilon^{3}\,\widehat{\mathcal{F}_{\mathcal{E}}^{\varepsilon}}\big(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}\big)|_{\varepsilon_{\mbox{\tiny\sc H}}} -\varepsilon^{3}\,\widehat{\mathcal{F}_{\mathcal{E}}^{\varepsilon}}\big(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}\big)|_{\varepsilon_{\mbox{\tiny\sc L}}} \,\Big]\,d\boldsymbol{x} \nonumber \\ &\hspace{36pt} +\big(\,\varepsilon\,\mathcal{F}_{\mathcal{N}}^{\varepsilon}(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h})\,\big)_{\boldsymbol{K}} +\sum_{i=1}^{3}\big(\,\mathcal{F}_{\mathcal{G}_{j}}^{i}(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}),\varepsilon\partial_{i}{v_{h}^{j}}\,\big)_{\boldsymbol{K}} +\mathcal{O}(v_{h}^{2}), \label{eq:dgSemiDiscreteEnergy}\end{aligned}$$ where we have defined the position space numerical fluxes, $$\widehat{\mathcal{F}_{\mathcal{E}}^{i}}\big(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}\big)|_{x_{\mbox{\tiny\sc H}/\mbox{\tiny\sc L}}^{i}} =\varepsilon\,\big[\,\widehat{\mathcal{F}_{\mathcal{N}}^{i}}\big(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}\big) + v_{h}^{j}\,\widehat{\mathcal{F}_{\mathcal{G}_{j}}^{i}}\big(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}\big)\,\big]|_{x_{\mbox{\tiny\sc H}/\mbox{\tiny\sc L}}^{i}}, \label{eq:numericalFluxEulerianEnergy}$$ the energy space numerical fluxes, $$\widehat{\mathcal{F}_{\mathcal{E}}^{\varepsilon}}\big(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}\big)|_{\varepsilon_{\mbox{\tiny\sc H}/\mbox{\tiny\sc L}}} =\varepsilon\,\widehat{\mathcal{F}_{\mathcal{N}}^{\varepsilon}}\big(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}\big)|_{\varepsilon_{\mbox{\tiny\sc H}/\mbox{\tiny\sc L}}},$$ and $v_h^{2}:=|\boldsymbol{v}_h|^{2}$. 
Here, $\mathcal{F}_{\mathcal{G}_{j}}^{i}$ and $\widehat{\mathcal{F}_{\mathcal{G}_{j}}^{i}}$ represent the fluxes and the corresponding numerical fluxes for the number flux equation, defined in Eqs. [\[eq:phaseSpaceFluxes\]](#eq:phaseSpaceFluxes){reference-type="eqref" reference="eq:phaseSpaceFluxes"} and [\[eq:numericalFluxPosition\]](#eq:numericalFluxPosition){reference-type="eqref" reference="eq:numericalFluxPosition"}, respectively. The third and fourth terms on the right-hand side of Eq. [\[eq:dgSemiDiscreteEnergy\]](#eq:dgSemiDiscreteEnergy){reference-type="eqref" reference="eq:dgSemiDiscreteEnergy"}, which emanate from the energy derivative term of the number equation and the spatial derivative of the number flux equations, respectively, can be written as $$\big(\,\varepsilon\,\mathcal{F}_{\mathcal{N}}^{\varepsilon}(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h})\,\big)_{\boldsymbol{K}} +\sum_{i=1}^{3}\big(\,\mathcal{F}_{\mathcal{G}_{j}}^{i}(\boldsymbol{\mathcal{U}}_{h},\boldsymbol{v}_{h}),\varepsilon\partial_{i}{v_{h}^{j}}\,\big)_{\boldsymbol{K}} =\int_{\boldsymbol{K}}\varepsilon\mathcal{K}^{i}_{\hspace{2pt}j}\,\big[\,\partial_{i}v_{h}^{j}-(\partial_{i}v^{j})_{h}\,\big]\,\tau d\boldsymbol{z}+ \mathcal{O}(v_{h}^{2}), \label{eq:cancellations}$$ where $(\partial_{i}v^{j})_h$ is the discretized velocity derivative that satisfies Eq. [\[eq:velocityDerivatives\]](#eq:velocityDerivatives){reference-type="eqref" reference="eq:velocityDerivatives"}. Provided (i) the numerical flux in Eq. [\[eq:numericalFluxEulerianEnergy\]](#eq:numericalFluxEulerianEnergy){reference-type="eqref" reference="eq:numericalFluxEulerianEnergy"} is uniquely defined on element interfaces and (ii) the first term on the right-hand side of Eq. [\[eq:cancellations\]](#eq:cancellations){reference-type="eqref" reference="eq:cancellations"} vanishes, Eq. [\[eq:dgSemiDiscreteEnergy\]](#eq:dgSemiDiscreteEnergy){reference-type="eqref" reference="eq:dgSemiDiscreteEnergy"} is, to $\mathcal{O}(v_{h}^{2})$, a phase-space conservation law for the element-integrated Eulerian-frame energy, in accordance with Proposition [Proposition 1](#prop:EnergyandMomentumConservation){reference-type="ref" reference="prop:EnergyandMomentumConservation"}. These requirements --- which generally require the discrete velocity $\boldsymbol{v}_h$ to be continuous across the elements --- are not satisfied exactly by the DG scheme proposed here. Since the components of $\boldsymbol{v}_h$ are represented by piecewise polynomials, which are discontinuous on element boundaries, the violation in Eulerian-frame energy conservation may be larger than what would be expected from $\mathcal{O}(v_h^2)$ contributions alone. We will investigate the simultaneous conservation of Eulerian-frame number and energy numerically in Section [8](#sec:numericalResults){reference-type="ref" reference="sec:numericalResults"}.

## Energy Limiter {#sec:EnergyLimiter}

In addition to the $\mathcal{O}(v^{2})$ violations of Eulerian-frame energy conservation inherent to the model and the potential violations from discontinuous $\boldsymbol{v}_h$ discussed in Section [6.1](#sec:dgConservation){reference-type="ref" reference="sec:dgConservation"}, the realizability-enforcing limiter introduced in Section [5.2](#sec:realizabilityLimiter){reference-type="ref" reference="sec:realizabilityLimiter"} can also affect the conservation of energy.
In fact, for small velocities (including $v=0$, when the total energy should be preserved to machine precision), the realizability-enforcing limiter is the dominant source of non-conservation of the Eulerian-frame energy. To improve Eulerian-frame energy conservation, we propose an "energy limiter" that corrects the change of energy induced by the realizability-enforcing limiter via a redistribution of particles between energy elements through a sweeping procedure. This approach maintains Eulerian-frame number and energy conservation across all energy elements for a given spatial element, at the expense of local number conservation in each energy element. The energy limiter does not correct for Eulerian-frame energy conservation violations inherent to the $\mathcal{O}(v)$ two-moment model or due to discontinuous $\boldsymbol{v}_h$ (see Section [6.1](#sec:dgConservation){reference-type="ref" reference="sec:dgConservation"}). To facilitate the discussion, we denote the element-integrated Eulerian-frame number and energy by $\textsf{N}$ and $\textsf{E}$, respectively. Given evolved moments $\boldsymbol{\mathcal{U}}_h=(\mathcal{N}_h,\boldsymbol{\mathcal{G}}_h)$ on element $\boldsymbol{K}$, the element-integrated number and energy can be computed by $$\begin{aligned} \textsf{N}_{\boldsymbol{K}} &= \int_{\boldsymbol{K}} {\mathcal{N}}_{h} \varepsilon^2 d\boldsymbol{z} :=|\boldsymbol{K}|\sum_{\boldsymbol{k}=1}^{|S_{\otimes}^{\boldsymbol{K}}|} w_{\boldsymbol{k}}^{(2)} {\mathcal{N}}_{\boldsymbol{k}}, \quad\text{and} \label{eq:ConservedNumber} \\ \textsf{E}_{\boldsymbol{K}} &= \int_{\boldsymbol{K}} ({\mathcal{N}}_{h} + v^j \mathcal{G}_{h,j})\varepsilon^3 d\boldsymbol{z} :=|\boldsymbol{K}|\sum_{\boldsymbol{k}=1}^{|S_{\otimes}^{\boldsymbol{K}}|} w_{\boldsymbol{k}}^{(3)} ({\mathcal{N}}_{\boldsymbol{k}} + v^j \mathcal{G}_{\boldsymbol{k},j}), \label{eq:ConservedEnergy}\end{aligned}$$ where $S_{\otimes}^{\boldsymbol{K}}$ denotes the set of local DG nodes as defined in Eq. [\[eq:localNodes\]](#eq:localNodes){reference-type="eqref" reference="eq:localNodes"}, $\mathcal{N}_{\boldsymbol{k}}$ and $\boldsymbol{\mathcal{G}}_{\boldsymbol{k}}$ denote the nodal values at points in $S_{\otimes}^{\boldsymbol{K}}$, and the weights $w_{\boldsymbol{k}}^{(2)}$ and $w_{\boldsymbol{k}}^{(3)}$ are given by the tensor product of the $(k+1)$-point one-dimensional LG quadrature rules introduced in Section [4.1](#sec:dgMethod){reference-type="ref" reference="sec:dgMethod"}, weighted by $\varepsilon^2$ and $\varepsilon^3$, respectively. Let $\widetilde{\boldsymbol{\mathcal{U}}}_{h}:=\texttt{RealizabilityLimiter}(\widehat{\boldsymbol{\mathcal{U}}}_{h})$ be the output of the realizability-enforcing limiter given a potentially non-realizable solution $\widehat{\boldsymbol{\mathcal{U}}}_{h}$, and let $(\widetilde{\textsf{N}}_{\boldsymbol{K}},\widetilde{\textsf{E}}_{\boldsymbol{K}})$ and $(\widehat{\textsf{N}}_{\boldsymbol{K}},\widehat{\textsf{E}}_{\boldsymbol{K}})$ denote the element-integrated number and energy, defined in Eqs. [\[eq:ConservedNumber\]](#eq:ConservedNumber){reference-type="eqref" reference="eq:ConservedNumber"} and [\[eq:ConservedEnergy\]](#eq:ConservedEnergy){reference-type="eqref" reference="eq:ConservedEnergy"}, for $\widetilde{\boldsymbol{\mathcal{U}}}_{h}$ and $\widehat{\boldsymbol{\mathcal{U}}}_{h}$, respectively.
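
As a concrete illustration of Eqs. [\[eq:ConservedNumber\]](#eq:ConservedNumber){reference-type="eqref" reference="eq:ConservedNumber"}--[\[eq:ConservedEnergy\]](#eq:ConservedEnergy){reference-type="eqref" reference="eq:ConservedEnergy"}, the following minimal Python sketch computes the element-integrated number and energy from nodal values; the array shapes and names are illustrative assumptions, and the quadrature weights are assumed to already carry the $\varepsilon^2$ and $\varepsilon^3$ factors, with the element volume passed in as `vol_K`.

```python
import numpy as np

def element_integrals(N_nodes, G_nodes, v_nodes, w2, w3, vol_K):
    """Element-integrated Eulerian-frame number and energy on one element:
    N_K = |K| * sum_k w2_k * N_k,
    E_K = |K| * sum_k w3_k * (N_k + v^j G_{k,j}).
    Assumed shapes: N_nodes, w2, w3 -> (n,); G_nodes, v_nodes -> (n, 3)."""
    v_dot_G = np.einsum('kj,kj->k', v_nodes, G_nodes)  # v^j G_j at each node
    N_K = vol_K * np.dot(w2, N_nodes)                  # element-integrated number
    E_K = vol_K * np.dot(w3, N_nodes + v_dot_G)        # element-integrated energy
    return N_K, E_K
```
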
As discussed in Section [5.2](#sec:realizabilityLimiter){reference-type="ref" reference="sec:realizabilityLimiter"}, the realizability-enforcing limiter gives a solution $\widetilde{\boldsymbol{\mathcal{U}}}_{h}$ that is realizable on $\widetilde{S}_{\otimes}^{\boldsymbol{K}}$ while maintaining number conservation in each element; i.e., $\widetilde{\textsf{N}}_{\boldsymbol{K}} = \widehat{\textsf{N}}_{\boldsymbol{K}}$. However, in part because of the additional factor of $\varepsilon$ in the definition of the element-integrated energy in Eq. [\[eq:ConservedEnergy\]](#eq:ConservedEnergy){reference-type="eqref" reference="eq:ConservedEnergy"}, the limiter results in energy changes (i.e., $\widetilde{\textsf{E}}_{\boldsymbol{K}} \neq \widehat{\textsf{E}}_{\boldsymbol{K}}$), which can lead to $\sum_{{\boldsymbol{K}}\in\mathcal{T}}\widetilde{\textsf{E}}_{\boldsymbol{K}} \neq \sum_{{\boldsymbol{K}}\in\mathcal{T}} \widehat{\textsf{E}}_{\boldsymbol{K}}$; i.e., a change in the global Eulerian-frame energy. The proposed energy limiter corrects Eulerian-frame energy conservation violations by redistributing particles via a sweeping procedure in the energy dimension to produce $\boldsymbol{\mathcal{U}}_{h}:=\texttt{EnergyLimiter}(\widetilde{\boldsymbol{\mathcal{U}}}_{h})$, as detailed in Algorithm [\[algo:energyLimiter\]](#algo:energyLimiter){reference-type="ref" reference="algo:energyLimiter"}. Here we let $\mathcal{T}_{\boldsymbol{x}}$ denote the collection of all spatial elements $\boldsymbol{K}_{\boldsymbol{x}}$ and let $\mathcal{T}_{\varepsilon}=\{{K}_{\varepsilon,n}\}_{n=1}^{N_\varepsilon}$ denote the collection of all energy elements ${K}_{\varepsilon}$ that cover the energy domain $D_\varepsilon$. For a given spatial element $\boldsymbol{K}_{\boldsymbol{x}}\in\mathcal{T}_{\boldsymbol{x}}$, the proposed energy limiter sweeps through elements $\boldsymbol{K}=K_{\varepsilon}\times \boldsymbol{K}_{\boldsymbol{x}}$ for all $K_{\varepsilon}\in\mathcal{T}_{\varepsilon}$ in a user-prescribed order to redistribute particles in a way that the number and energy are both conserved for the given spatial element, i.e., $$\sum_{K_{\varepsilon}\in\mathcal{T}_{\varepsilon}}\textsf{N}_{K_{\varepsilon}\times \boldsymbol{K}_{\boldsymbol{x}}} = \sum_{K_{\varepsilon}\in\mathcal{T}_{\varepsilon}} \widehat{\textsf{N}}_{K_{\varepsilon}\times \boldsymbol{K}_{\boldsymbol{x}}} {\quad\text{and}\quad} \sum_{K_{\varepsilon}\in\mathcal{T}_{\varepsilon}}{\textsf{E}}_{K_{\varepsilon}\times \boldsymbol{K}_{\boldsymbol{x}}} = \sum_{K_{\varepsilon}\in\mathcal{T}_{\varepsilon}} \widehat{\textsf{E}}_{K_{\varepsilon}\times \boldsymbol{K}_{\boldsymbol{x}}},$$ which then leads to global number and energy conservation, $\sum_{{\boldsymbol{K}}\in\mathcal{T}}\textsf{N}_{\boldsymbol{K}} = \sum_{{\boldsymbol{K}}\in\mathcal{T}} \widehat{\textsf{N}}_{\boldsymbol{K}}$ and $\sum_{{\boldsymbol{K}}\in\mathcal{T}}\textsf{E}_{\boldsymbol{K}} = \sum_{{\boldsymbol{K}}\in\mathcal{T}} \widehat{\textsf{E}}_{\boldsymbol{K}}$, by summing over all spatial elements. The sweeping procedure and the particle redistribution strategy used in the energy limiter are listed in Algorithm [\[algo:energyLimiter\]](#algo:energyLimiter){reference-type="ref" reference="algo:energyLimiter"}. Specifically, when the Eulerian-frame energy conservation violation $\delta\textsf{E}$ is nonzero, the energy limiter redistributes particles between elements in a pairwise manner to correct $\delta\textsf{E}$.
The pairwise redistribution strategy is detailed in Algorithm [\[algo:energyCorrection\]](#algo:energyCorrection){reference-type="ref" reference="algo:energyCorrection"}, where a pair of scaling coefficients $(\theta_{1},\theta_{2})$ is computed by solving a linear system that requires the sum of the scaled energies to correct $\delta \textsf{E}$ while preserving the sum of particles (see Line 3). When at least one of the coefficients ($\theta_{1}$ and $\theta_{2}$) is less than a prescribed threshold $\theta_{\min}>-1$, a damping factor $\gamma$ is applied so that $\theta_{1}>\theta_{\min}$ and $\theta_{2}>\theta_{\min}$, which preserves moment realizability. (The moment realizability property is invariant to scaling by a positive scalar.) When the linear system does not have a solution, or when $\min(\theta_{1},\theta_{2})<\theta_{\min}$, the output of Algorithm [\[algo:energyCorrection\]](#algo:energyCorrection){reference-type="ref" reference="algo:energyCorrection"} does not fully correct $\delta \textsf{E}$, and the remainder is propagated to the next pair of elements in the sweeping procedure. As shown in Algorithm [\[algo:energyLimiter\]](#algo:energyLimiter){reference-type="ref" reference="algo:energyLimiter"}, beginning on Line 18, a backward sweep will be launched after the forward sweep when $|\delta {\textsf{E}}|>\delta$; i.e., when the energy conservation violation is not fully corrected. Here $\delta>0$ is a user-specified tolerance on the energy conservation violation. In the implementation, we choose to omit the condition in Line 19 and perform the full backward sweep in order to improve the computational efficiency on GPUs. Moreover, to avoid numerical issues, we restrict the damping factor $\gamma$ in Algorithm [\[algo:energyCorrection\]](#algo:energyCorrection){reference-type="ref" reference="algo:energyCorrection"} such that the resulting corrected moment ${\boldsymbol{\mathcal{U}}}_{h}$ remains a strictly positive number density; i.e., $\mathcal{N}_h>0$. In the numerical results reported in Section [8](#sec:numericalResults){reference-type="ref" reference="sec:numericalResults"}, we choose $\theta_{\min}=-0.5$ and permute the energy elements in an ascending order based on the associated energy values. 
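
To illustrate the pairwise correction step, the following minimal Python sketch solves the $2\times2$ linear system of Algorithm [\[algo:energyCorrection\]](#algo:energyCorrection){reference-type="ref" reference="algo:energyCorrection"} for $(\theta_{1},\theta_{2})$ and applies a damping factor when a coefficient falls below $\theta_{\min}$; the particular damping rule, the singular-system fallback, and the convention of returning the uncorrected remainder of $\delta\textsf{E}$ are illustrative assumptions rather than a transcription of the actual listing.

```python
def pairwise_energy_correction(N1, E1, N2, E2, dE, theta_min=-0.5, safety=0.9):
    """Sketch of one pairwise correction: solve
         theta1*N1 + theta2*N2 = 0,
         theta1*E1 + theta2*E2 = -dE,
    damp (theta1, theta2) if either drops below theta_min (> -1), and
    return the coefficients plus the portion of dE left uncorrected."""
    det = N1 * E2 - N2 * E1
    if abs(det) < 1.0e-14:                      # no solution: pass dE along
        return 0.0, 0.0, dE
    theta1 = N2 * dE / det                      # Cramer's rule
    theta2 = -N1 * dE / det
    t_min = min(theta1, theta2)
    if t_min < theta_min:                       # illustrative damping rule:
        gamma = safety * theta_min / t_min      # rescale so min(theta) stays above theta_min
        theta1, theta2 = gamma * theta1, gamma * theta2
    dE_remaining = dE + (theta1 * E1 + theta2 * E2)   # zero when no damping is applied
    return theta1, theta2, dE_remaining
```
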
We observe that the forward and backward sweeping procedure is sufficient for correcting the energy conservation violations introduced by the realizability-enforcing limiter --- i.e., $\delta\textsf{E}\to0$ during the sweeping procedure --- and that the additional scaling introduced in this energy correction process has no noticeable adverse impact on the solution to the two-moment system.

(Algorithm [\[algo:energyLimiter\]](#algo:energyLimiter){reference-type="ref" reference="algo:energyLimiter"}, `EnergyLimiter`. **Inputs:** discretized moments before and after the realizability-enforcing limiter, i.e., $\widehat{\boldsymbol{\mathcal{U}}}_{h}$ and $\widetilde{\boldsymbol{\mathcal{U}}}_h$; a permutation of the energy elements, denoted as $K_{\varepsilon,n}$, $n=1,\dots, N_\varepsilon$. **Parameter:** $\delta$, the energy conservation violation tolerance. The listing initializes ${\boldsymbol{\mathcal{U}}}_{h}\leftarrow\widetilde{\boldsymbol{\mathcal{U}}}_{h}$ and performs the forward and, if needed, backward sweeps described above.)

(Algorithm [\[algo:energyCorrection\]](#algo:energyCorrection){reference-type="ref" reference="algo:energyCorrection"}, pairwise energy correction. **Inputs:** $\textsf{N}_{1}, \textsf{E}_{1}, \textsf{N}_{2}, \textsf{E}_{2}, \delta\textsf{E}$. **Parameter:** $\theta_{\min}>-1$. The scaling coefficients $(\theta_{1},\theta_{2})$ are computed by solving $$\left\{\begin{alignedat}{2} \theta_{1} {\textsf{N}}_{1} + \theta_{2} {\textsf{N}}_{2} & = 0\:\\ \theta_{1} {\textsf{E}}_{1} + \theta_{2} {\textsf{E}}_{2} & = -\delta \textsf{E}\: \end{alignedat}\right.$$)

# Implementation, Programming Models, and Portability {#sec:gpu}

The DG-IMEX method proposed here has been implemented in the toolkit for high-order neutrino radiation hydrodynamics ([thornado]{.smallcaps}). Here we briefly discuss some considerations in this process. Neutrino transport is only one component (along with, e.g., hydrodynamics, nuclear reaction kinetics, and gravity) of a broader, multiphysics simulation framework needed to model multiscale astrophysical systems, e.g., core-collapse supernova explosions. However, the number of evolved degrees of freedom is relatively high compared to other components. For example, simulations incorporating a two-moment model (four moments), evolving three independent neutrino flavors (six species), with 16 linear elements ($k=1$) to discretize the energy dimension evolve $4\times6\times16\times(k+1)=768$ degrees of freedom per spatial point. As such, spectral neutrino radiation transport represents the bulk of the computational load in such scientific applications. With this in mind, node-level performance and portability for heterogeneous computing systems are prioritized in the development of [thornado]{.smallcaps} as a collection of modular physics components that can be incorporated into distributed multiphysics simulation codes (e.g., [Flash-X]{.smallcaps} [@dubey_etal_2022]), which are equipped with native infrastructure for distributed parallelism. In particular, we target frameworks that utilize adaptive mesh refinement, where simulation data is mapped to smaller grid blocks of relatively even size. [thornado]{.smallcaps} uses a combination of compiler directives and optimized linear algebra libraries to accelerate all components of the DG-IMEX method. All of the solver components --- e.g., the computation of numerical fluxes, evaluation of phase-space divergences, and limiters --- are reduced to small, discrete kernels that can be executed either as collapsible, tightly-nested loops over phase-space dimensions or as basic linear algebra operations. In addition to optimizing many key metrics for GPU performance (e.g., occupancy, register pressure, and memory coalescence), this strategy naturally exposes vector-level parallelism, which also benefits performance on modern, multicore CPUs.
This is especially important when invoking iterative solvers, such as those described in Sections [4.3.1](#sec:moment_conversion){reference-type="ref" reference="sec:moment_conversion"} and [4.3.2](#sec:collision_solver){reference-type="ref" reference="sec:collision_solver"}, across many independent phase-space points. Since iteration counts can vary, assigning an equal number of phase-space points to each thread can lead to severe load imbalance among GPU threads. We address this problem by tracking the convergence of each point independently and removing converged points from the calculations in each kernel until all points have converged. Our portability strategy focuses on maintaining a single code base that can efficiently execute on different hardware architectures and software environments. [thornado]{.smallcaps} contains three distinct implementations of compiler directives that are managed with C preprocessor macros: traditional OpenMP (CPU multi-core), OpenMP offload (GPU), and OpenACC (GPU). We refer to code listings in [@laiu_etal_2020] for specific examples. Interfaces to optimized linear algebra routines are also written in a generic way for portability across different libraries. Currently, [thornado]{.smallcaps} has linear algebra interfaces supporting several LAPACK and BLAS [@laug] routines with GPU implementations from NVIDIA, AMD, Intel, and `MAGMA` [@MAGMA]. This approach hides the complexities of managing different interfaces in a single [thornado]{.smallcaps} module that can be easily used throughout the code. In addition to the individual routine interfaces, each linear algebra package requires specific attention to interoperability with the compiler directives to ensure correct synchronization when using multiple execution streams per device. This is managed during initialization with compiler directives and C preprocessor macros. We provide timing results and a breakdown of the computational cost associated with key solver components for one of the numerical examples in Section [8](#sec:numericalResults){reference-type="ref" reference="sec:numericalResults"}.

# Numerical Tests {#sec:numericalResults}

In this section, we demonstrate the performance of our implementation of the DG-IMEX method for solving the $\mathcal{O}(v)$ two-moment model. We consider problems with and without collisions. For problems with collisions, we use the IMEX scheme proposed in [@chu_etal_2019] (see also [@laiu_etal_2021] for details). For problems without collisions, we use the optimal second- and third-order accurate strong stability-preserving Runge--Kutta methods from [@shuOsher_1988], referred to as SSPRK2 and SSPRK3, respectively. For the tests in Sections [8.2](#sec:sineWaveStreaming){reference-type="ref" reference="sec:sineWaveStreaming"} and [8.3](#sec:gaussianDiffusion){reference-type="ref" reference="sec:gaussianDiffusion"}, unless stated otherwise, we set the time step to $\Delta t=0.3\times|K_{\boldsymbol{x}}^{1}|/(k+1)$, where $k$ is the polynomial degree. For the tests in Sections [8.4](#sec:streamingDopplerShift){reference-type="ref" reference="sec:streamingDopplerShift"}-[8.6](#sec:transparentVortex){reference-type="ref" reference="sec:transparentVortex"}, we enforce the time step restriction given in Theorem [Theorem 1](#thm:realizability){reference-type="ref" reference="thm:realizability"}.
Collisions tend to drive the distribution towards isotropy in the angular dimensions of momentum space (i.e., $|\boldsymbol{\mathcal{I}}|\to0$), which places the comoving-frame moments $\boldsymbol{\mathcal{M}}$ safely inside the realizable domain. Therefore, to emphasize the improved robustness resulting from our analysis, our main focus is on phase-space advection problems without collisions, where the moments evolve close to the boundary of the realizable domain.

## Moment Conversion Solver {#sec:momentConversion}

Solving the conserved-to-primitive moment conversion problem in Eq. [\[eq:ConservedToPrimitive\]](#eq:ConservedToPrimitive){reference-type="eqref" reference="eq:ConservedToPrimitive"} and the implicit system in Eq. [\[eq:ImplicitSystem\]](#eq:ImplicitSystem){reference-type="eqref" reference="eq:ImplicitSystem"} contributes the majority of the computational cost of the realizability-preserving scheme. In this section, we test the iterative solver for the moment conversion problem in Eq. [\[eq:ConservedToPrimitive\]](#eq:ConservedToPrimitive){reference-type="eqref" reference="eq:ConservedToPrimitive"} with various configurations, and the reported results provide guidance for selecting a solver configuration for this critical part of the algorithm. As discussed in Section [4.3.1](#sec:moment_conversion){reference-type="ref" reference="sec:moment_conversion"}, we formulate the moment conversion problem in Eq. [\[eq:ConservedToPrimitive\]](#eq:ConservedToPrimitive){reference-type="eqref" reference="eq:ConservedToPrimitive"} as a fixed-point problem on the primitive moments $\boldsymbol{\mathcal{M}}=(\mathcal{D},\boldsymbol{\mathcal{I}})^{\intercal}$ of the form stated in Eq. [\[eq:richardson_fixed_pt\]](#eq:richardson_fixed_pt){reference-type="eqref" reference="eq:richardson_fixed_pt"}. In Lemma [Lemma 7](#lem:solver_realizability){reference-type="ref" reference="lem:solver_realizability"}, we have shown that moment realizability is preserved in the iterative procedure when Eq. [\[eq:richardson_fixed_pt\]](#eq:richardson_fixed_pt){reference-type="eqref" reference="eq:richardson_fixed_pt"} is solved with the Picard iteration method in Eq. [\[eq:Picard\]](#eq:Picard){reference-type="eqref" reference="eq:Picard"} and $\lambda\leq(1+v)^{-1}$ in Eq. [\[eq:richardson_fixed_pt\]](#eq:richardson_fixed_pt){reference-type="eqref" reference="eq:richardson_fixed_pt"}. The convergence of the Picard iteration is guaranteed in Theorem [Theorem 2](#thm:convergence){reference-type="ref" reference="thm:convergence"} with the additional assumption that $v < \sqrt{2}-1$.

We first compare the iteration counts required for convergence of the Picard iteration solver and an Anderson acceleration (AA) solver, using two different choices for $\lambda$. The AA technique was first proposed in [@Anderson-1965] to accelerate the convergence of fixed-point iterations by accounting for the past iteration history to compute new iterates. Here we follow the formulation and implementation in [@Walker-Ni-2011; @laiu_etal_2021] and apply the AA solver to the moment conversion problem in Eq. [\[eq:richardson_fixed_pt\]](#eq:richardson_fixed_pt){reference-type="eqref" reference="eq:richardson_fixed_pt"}. In Figure [7](#fig:SolverIterCnt){reference-type="ref" reference="fig:SolverIterCnt"}, the iteration counts are reported for the two iterative solvers applied to solve Eq.
[\[eq:richardson_fixed_pt\]](#eq:richardson_fixed_pt){reference-type="eqref" reference="eq:richardson_fixed_pt"} at varying fluid speed $v:=|\boldsymbol{v}|$ and flux factor $h=|\boldsymbol{\mathcal{I}}|/\mathcal{D}$, with $\lambda$ chosen to be either the largest allowable value, i.e., $\lambda=(1+v)^{-1}$, or a more conservative value, $\lambda=0.5$. The AA solver uses the memory parameter $m=1$ (defined in [@laiu_etal_2021]), so that only information from the previous and current iterate is used. The stopping criterion for both solvers is given as $$\label{eq:stoppingCriteria} \|\boldsymbol{\mathcal{M}}^{[k]} - \boldsymbol{\mathcal{M}}^{[k-1]}\|\leq \texttt{tol}\, \|\boldsymbol{\mathcal{U}}\|,$$ where the norms are taken in the $L^2$ sense and the tolerance is $\texttt{tol}=10^{-8}$. For each choice of $(v,h)$ in Figure [7](#fig:SolverIterCnt){reference-type="ref" reference="fig:SolverIterCnt"}, the fixed-point problem is solved for 100 randomly generated $\boldsymbol{\mathcal{U}}\in\mathcal{R}$ (varying the direction of $\boldsymbol{v}$ and $\boldsymbol{\mathcal{I}}/\mathcal{D}$ randomly), and the average iteration counts over these 100 problems are recorded. In each test, the initial guess takes the form $\boldsymbol{\mathcal{M}}^{[0]} = \boldsymbol{\mathcal{U}}$. The results in Figure [7](#fig:SolverIterCnt){reference-type="ref" reference="fig:SolverIterCnt"} illustrate that, for both the Picard iteration and the AA solvers, choosing the parameter $\lambda$ to be the largest allowable value $(1+v)^{-1}$ indeed reduces the iteration counts relative to the more conservative choice $\lambda=0.5$, particularly in the low velocity regime. In addition, Figure [7](#fig:SolverIterCnt){reference-type="ref" reference="fig:SolverIterCnt"} shows that the AA solver consistently outperforms the Picard iteration method, and that the advantage of using AA grows as the velocity increases. We note that the realizability-preserving and convergence properties analyzed in Section [5.3](#sec:momentConversionRealizability){reference-type="ref" reference="sec:momentConversionRealizability"} are only applicable to the Picard iteration solver, and not to the AA solver.[^5] However, in the numerical results reported in Figure [7](#fig:SolverIterCnt){reference-type="ref" reference="fig:SolverIterCnt"}, we have not observed convergence failure by any of the solvers, even when the velocity is larger than the upper bound ($v=\sqrt{2}-1$; plotted as a red vertical line in each panel in Figure [7](#fig:SolverIterCnt){reference-type="ref" reference="fig:SolverIterCnt"}) required in the convergence analysis in Theorem [Theorem 2](#thm:convergence){reference-type="ref" reference="thm:convergence"}.
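
To make the tested configuration concrete, the following minimal Python sketch shows a Picard loop for the fixed-point problem in Eq. [\[eq:richardson_fixed_pt\]](#eq:richardson_fixed_pt){reference-type="eqref" reference="eq:richardson_fixed_pt"} with the stopping criterion in Eq. [\[eq:stoppingCriteria\]](#eq:stoppingCriteria){reference-type="eqref" reference="eq:stoppingCriteria"} and the initial guess $\boldsymbol{\mathcal{M}}^{[0]}=\boldsymbol{\mathcal{U}}$; the fixed-point map $\boldsymbol{\mathcal{H}}_{\boldsymbol{\mathcal{U}}}$, which depends on $\lambda$, is treated as a black box, and the interface and names are illustrative only, independent of the actual [thornado]{.smallcaps} implementation.

```python
import numpy as np

def picard_solve(H_U, U, tol=1.0e-8, max_iter=200):
    """Picard iteration for the fixed-point problem M = H_U(M), with the
    relative stopping criterion ||M^[k] - M^[k-1]|| <= tol * ||U||."""
    M = U.copy()                    # initial guess M^[0] = U
    for _ in range(max_iter):
        M_new = H_U(M)              # one Picard update
        if np.linalg.norm(M_new - M) <= tol * np.linalg.norm(U):
            break                   # converged
        M = M_new
    return M_new
```
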
In Figure [9](#fig:SolverIC){reference-type="ref" reference="fig:SolverIC"}, we show results from experimenting with two choices for the initial guess, $\boldsymbol{\mathcal{M}}^{[0]} = (\mathcal{N}, \boldsymbol{0})^{\intercal}$ and $\boldsymbol{\mathcal{M}}^{[0]} = \boldsymbol{\mathcal{U}}= (\mathcal{N}, \boldsymbol{\mathcal{G}})^{\intercal}$, for the AA solver with $\lambda = (1+v)^{-1}$, which is the best performing configuration shown in Figure [7](#fig:SolverIterCnt){reference-type="ref" reference="fig:SolverIterCnt"}. As shown in Figure [9](#fig:SolverIC){reference-type="ref" reference="fig:SolverIC"}, initializing with the conserved moment $\boldsymbol{\mathcal{U}}$ generally outperforms the isotropic initial condition $(\mathcal{N},\boldsymbol{0})$, except for the case when the flux factor $h=0$, for which the isotropic initial condition is exactly the primitive moment. Since we expect moments with $h=0$ to be rarely encountered in numerical simulations, adopting the AA solver with $\lambda = (1+v)^{-1}$ and initial guess $\boldsymbol{\mathcal{M}}^{[0]} = \boldsymbol{\mathcal{U}}$ appears to be the best choice. This conjecture is confirmed in the performance comparison reported in Section [8.7](#sec:performance){reference-type="ref" reference="sec:performance"}, where we observe a considerable improvement in terms of computational time by using the initial guess $\boldsymbol{\mathcal{M}}^{[0]} = \boldsymbol{\mathcal{U}}$.

![Iteration counts for the Picard iteration (top panels) and AA (bottom panels) solvers with modified Richardson iteration parameter $\lambda = (1+v)^{-1}$ and $\lambda=0.5$ (right and left columns, respectively), applied to the moment conversion problems with various velocity $v$ and flux factor $h$. The reported iteration counts are the average over 100 randomly generated moment conversion problems at each $(v,h)$, where the randomness is applied to the directions of $\boldsymbol{v}$ and $\boldsymbol{\mathcal{I}}/\mathcal{D}$. In each panel, the red vertical line indicates the upper velocity bound for guaranteed convergence, $v=\sqrt{2}-1$.](Picard_lambdaFixed.pdf){#fig:SolverIterCnt width="44%"}
![](Picard_lambdaAdapt.pdf){width="44%"}
![](AA_lambdaFixed.pdf){width="44%"}
![](AA_lambdaAdapt.pdf){width="44%"}

![Iteration counts for the AA solver with modified Richardson iteration parameter $\lambda = (1+v)^{-1}$ applied to the moment conversion problems with various velocity $v$ and flux factor $h$ for two different initial guesses. The reported iteration counts are the average over 100 randomly generated moment conversion problems at each $(v,h)$.](AA_lambdaAdapt_IC_N0.pdf){#fig:SolverIC width="44%"}
![](AA_lambdaAdapt_IC_NG.pdf){width="44%"}

## Sine Wave Streaming {#sec:sineWaveStreaming}

The first test we consider for evolving the two-moment system models free-streaming radiation through a background with a spatially (and temporally) constant velocity field in the $x^{1}$-direction. That is, we set $\chi=\sigma=0$, while $\boldsymbol{v}=[v,0,0]^{\intercal}$, with $v=0.1$. The purpose of this test is to verify (i) the correct radiation propagation speed in this idealized setting, and (ii) the expected order of accuracy of the implemented method. We consider a periodic one-dimensional unit spatial domain $D_{x^{1}}=[0,1]$. The initial number density and flux are set to $\mathcal{D}(x^{1},0)=\mathcal{D}_{0}(x^{1})=0.5+0.49\times\sin(2\pi x^{1})$ and $\mathcal{I}^{1}(x^{1},0)=\mathcal{D}_{0}(x^{1})$, respectively. Then, the flux factor is $h=1$, and the analytic solution is given by $\mathcal{D}(x^{1},t)=\mathcal{I}^{1}(x^{1},t)=\mathcal{D}_{0}(x^{1}-t)$; i.e., the initial profile propagates with unit speed, *independent* of $v$. (As noted by [@lowrie_etal_2001], dropping the velocity-dependent terms in the time derivatives of Eqs.
[\[eq:spectralNumberEquation\]](#eq:spectralNumberEquation){reference-type="eqref" reference="eq:spectralNumberEquation"} and [\[eq:spectralNumberFluxEquation\]](#eq:spectralNumberFluxEquation){reference-type="eqref" reference="eq:spectralNumberFluxEquation"}, as is done in [@just_etal_2015; @skinner_etal_2019], the propagation speed becomes $1+v$ for this test, which is unphysical.) Since the background velocity is constant, there is no coupling in the energy dimension. Therefore, this test is performed with a single energy. We run this test until $t=1$, when the initial profile has crossed the grid once before returning to its initial position. ![Error in the $L^{2}$ norm versus number of spatial elements $N$ for the Sine Wave Streaming test. Results obtained with second- and third-order schemes (using $k=1$ polynomials and SSPRK2 time stepping and $k=2$ polynomials and SSPRK3 time stepping, respectively), along with dotted reference lines proportional to $1/N^{k+1}$, are plotted using black and red lines, respectively.](SineWaveStreamingOrder.png){#fig:SinewaveStreamingOrder width="50%"} In Figure [10](#fig:SinewaveStreamingOrder){reference-type="ref" reference="fig:SinewaveStreamingOrder"}, we plot the error in the $L^2$ norm versus the number of spatial elements for the second-order scheme, using linear polynomials ($k=1$) and SSPRK2 time stepping, and the third-order scheme, using quadratic polynomials ($k=2$) and SSPRK3 time stepping. The results confirm the expected convergence rate to the exact solution. ## Gaussian Diffusion {#sec:gaussianDiffusion} The next test we consider, adopted from [@just_etal_2015], models the diffusion of particles through a medium moving with constant velocity in the $x^{1}$-direction. We consider a purely scattering medium and set $\sigma=3.2\times10^{3}$ and $\chi=0$, and let $\boldsymbol{v}=[v,0,0]^{\intercal}$, with $v=0.1$. We let the spatial domain be periodic, $D_{x^{1}}=[0,3]$, with initial conditions $\mathcal{D}_{0}(x^{1})=\exp[-(x^{1}-x_{0}^{1})^{2}/(4t_{0}\kappa_{\mathsf{D}})]$ and $\mathcal{I}_{0}^{1}(x^{1})=-\kappa_{\mathsf{D}}\partial_{x^{1}}\mathcal{D}_{0}$, where $\kappa_{\mathsf{D}}=(3\sigma)^{-1}$, and we set $x_{0}^{1}=1$ and $t_{0}=5$. Then, the evolution of the number density is approximately governed by the advection-diffusion equation $$\partial_{t}\mathcal{D}+\partial_{x^{1}}\big(\,\mathcal{D}v - \kappa_{\mathsf{D}}\partial_{x^{1}}\mathcal{D}\,\big) = 0, \label{eq:advectionDiffusion}$$ whose analytical solution is given by [@just_etal_2015] $$\mathcal{D}(x^{1},t) = \sqrt{\frac{t_{0}}{t_{0}+t}}\exp\Big\{\,-\frac{((x^{1}-v t)-x_{0}^{1})^2}{4(t_{0}+t)\kappa_{\mathsf{D}}}\,\Big\}. \label{eq:advectionDiffusionAnalytic}$$ Since there is no coupling in the energy dimension ($v$ is constant), we perform this test with a single energy. We use quadratic elements ($k=2$) and the IMEX time stepping scheme from [@chu_etal_2019], integrating the collision term implicitly. For this test, the time step is set to $\Delta t=C_{\rm CFL}\times |K_{\boldsymbol{x}}^{1}|$, where $C_{\rm CFL}$ is specified below. The purpose of this test is to investigate the performance of the DG-IMEX scheme in a regime where both advection and diffusion contribute to the evolution of the number density. For $t>0$, the Gaussian profile is advected with the flow, while the amplitude decreases and the width increases due to diffusion. 
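
For reference, the following minimal Python sketch (names are illustrative; parameter values are those quoted above) evaluates the approximate analytic solution in Eq. [\[eq:advectionDiffusionAnalytic\]](#eq:advectionDiffusionAnalytic){reference-type="eqref" reference="eq:advectionDiffusionAnalytic"} used for comparison with the numerical results.

```python
import numpy as np

# Gaussian diffusion test parameters (as given in the text).
sigma   = 3.2e3                    # scattering opacity
v       = 0.1                      # background velocity in the x^1-direction
kappa_D = 1.0 / (3.0 * sigma)      # diffusion coefficient
x0, t0  = 1.0, 5.0

def density_analytic(x1, t):
    """Approximate analytic number density, Eq. (advectionDiffusionAnalytic)."""
    return np.sqrt(t0 / (t0 + t)) * np.exp(
        -((x1 - v * t) - x0) ** 2 / (4.0 * (t0 + t) * kappa_D))

# The initial condition corresponds to t = 0; at t = 30 the amplitude has
# dropped by a factor sqrt(t0 / (t0 + 30)) = sqrt(5/35) ~ 0.378.
```
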
The left panel in Figure [12](#fig:GaussianDiffusion){reference-type="ref" reference="fig:GaussianDiffusion"} shows the number density versus $x^{1}-v t$ for various times as the Gaussian profile propagates once across the periodic domain and returns to its initial position at $t=30$. At this time the amplitude is reduced by a factor $\sqrt{5/35}\approx0.378$. For this simulation, the spatial domain is discretized using $96$ elements, so that the ratio of the element width to the mean-free path is $|K_{\boldsymbol{x}}^{1}|\sigma=10^{2}$. The numerical solution (open circles) agrees well with the expression in Eq. [\[eq:advectionDiffusionAnalytic\]](#eq:advectionDiffusionAnalytic){reference-type="eqref" reference="eq:advectionDiffusionAnalytic"} (solid lines). The right panel in Figure [12](#fig:GaussianDiffusion){reference-type="ref" reference="fig:GaussianDiffusion"} shows the error in the $L^2$ norm at $t=5$ versus the number of spatial elements $N$. Since the expression given by Eq. [\[eq:advectionDiffusionAnalytic\]](#eq:advectionDiffusionAnalytic){reference-type="eqref" reference="eq:advectionDiffusionAnalytic"} is not an exact solution to the $\mathcal{O}(v)$ two-moment model in the limit of high scattering opacity, we compare the numerical results to a high-resolution reference solution, computed with $8192$ elements. (We have confirmed that for fixed $N$ and $t$, and varying $v$, the difference between the numerical solution and the expression in Eq. [\[eq:advectionDiffusionAnalytic\]](#eq:advectionDiffusionAnalytic){reference-type="eqref" reference="eq:advectionDiffusionAnalytic"} increases as $\mathcal{O}(v^{2})$.) The solid black curve with squares shows the error obtained with the standard CFL number $C_{\rm CFL}:=C_{\rm CFL}^{0}=0.3/(k+1)$. For smaller $N$, the error falls off as $N^{-3}$ (see dashed black reference line), consistent with a third-order convergence rate, while for larger $N$ the convergence rate transitions to first order (see dotted black reference line). The reason for this change in convergence rate is that the IMEX scheme, taken from [@chu_etal_2019], is formally only first-order accurate. Spatial discretization errors dominate for small $N$, but since these errors decrease with the third-order rate, temporal errors become dominant for large $N$. To verify this, we also plot convergence results obtained after reducing the time step by a factor of $25$, $C_{\rm CFL}:=C_{\rm CFL}^{0}/25$; see the solid red curve with squares. For this case, the error decreases with the third-order rate for all $N$. We also show the error obtained with a second-order IMEX scheme (IMEXRKCB2 from [@cavaglieriBewley2015]), using the standard CFL number. Due to better temporal accuracy, the error decreases with the third-order rate, but the scheme does not satisfy the convex-invariant conditions delineated in [@chu_etal_2019] and can therefore not be guaranteed to maintain moment realizability by our analysis.
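
The convergence rates quoted above are estimated in the standard way from errors at successive resolutions, $p \approx \log(e_{N_{1}}/e_{N_{2}})/\log(N_{2}/N_{1})$; a short Python sketch is given below, where the error values are hypothetical and serve only to illustrate a third-order decay that flattens to first order once temporal errors dominate.

```python
import numpy as np

def observed_order(errors, Ns):
    """Observed convergence order between successive resolutions."""
    e = np.asarray(errors, dtype=float)
    N = np.asarray(Ns, dtype=float)
    return np.log(e[:-1] / e[1:]) / np.log(N[1:] / N[:-1])

Ns     = [16, 32, 64, 128, 256]                     # hypothetical element counts
errors = [1.0e-3, 1.3e-4, 1.9e-5, 7.0e-6, 3.3e-6]   # hypothetical L^2 errors
print(observed_order(errors, Ns))                   # ~ [2.9, 2.8, 1.4, 1.1]
```
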
![Results for the Gaussian diffusion test. The left panel shows the numerical solution (open circles) versus the shifted coordinate $x^{1}-v t$ for various times, compared with the analytic solution (solid lines) to the advection-diffusion equation in Eq. [\[eq:advectionDiffusion\]](#eq:advectionDiffusion){reference-type="eqref" reference="eq:advectionDiffusion"}. The right panel shows the error in the $L^{2}$ norm versus the number of elements $N$. The error is computed with respect to a high-resolution reference run with $N=8192$. Convergence results are shown for standard and reduced CFL number, using the first-order SSP-IMEX scheme from [@chu_etal_2019] (solid black and red, respectively; see text for details), and a second-order IMEX scheme from [@cavaglieriBewley2015] with standard CFL number (dotted blue). The dotted and dashed black reference lines are proportional to $N^{-1}$ and $N^{-3}$, respectively.](GaussianDiffusion1D.pdf){#fig:GaussianDiffusion width="44%"}
![Gaussian diffusion test, right panel: error in the $L^{2}$ norm versus the number of elements $N$ (see the preceding caption for details).](GaussianDiffusion1D_Error.pdf){#fig:GaussianDiffusion width="44%"}
## Streaming Doppler Shift {#sec:streamingDopplerShift}

This test, adopted from [@vaytet_etal_2011] (see also [@just_etal_2015; @skinner_etal_2019]), models the propagation of free-streaming radiation along the $x^{1}$-direction through a background with a spatially varying velocity field. Because the two-moment model adopts momentum-space coordinates associated with a comoving observer, the radiation energy spectra will be Doppler shifted. We consider a one-dimensional spatial domain $D_{x^{1}}=[0,10]$. Again, we set $\chi=\sigma=0$, while the velocity field is set to $\boldsymbol{v}=(v,0,0)^{\intercal}$, where $$v(x^{1}) = \left\{ \begin{array}{ll} 0, & x^{1}\in[0,2) \\ v_{\max} \times \sin^{2}[2\pi(x^{1}-2)/6], & x^{1}\in[2,3.5) \\ v_{\max}, & x^{1}\in[3.5,6.5) \\ v_{\max} \times \sin^{2}[2\pi(x^{1}-2)/6], & x^{1}\in[6.5,8) \\ 0, & x^{1}\in[8,10] \end{array} \right.,$$ and where we will vary $v_{\max}$. We set the energy domain to $D_{\varepsilon}=[0,50]$.
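For reference, the piecewise velocity profile above can be transcribed directly; the following Python sketch is illustrative only (the function and variable names are ours).

``` python
# Illustrative sketch: the piecewise background velocity of the streaming
# Doppler shift test, transcribed from the formula above.
import numpy as np

def velocity(x1, v_max):
    """Piecewise background velocity v(x^1) on D_x = [0, 10]."""
    x1 = np.asarray(x1, dtype=float)
    ramp = v_max * np.sin(2.0 * np.pi * (x1 - 2.0) / 6.0) ** 2
    v = np.zeros_like(x1)
    rising = (x1 >= 2.0) & (x1 < 3.5)
    plateau = (x1 >= 3.5) & (x1 < 6.5)
    falling = (x1 >= 6.5) & (x1 < 8.0)
    v[rising] = ramp[rising]      # ramps up from 0 to v_max on [2, 3.5)
    v[plateau] = v_max            # constant plateau on [3.5, 6.5)
    v[falling] = ramp[falling]    # ramps back down to 0 on [6.5, 8)
    return v

print(velocity(np.linspace(0.0, 10.0, 11), 0.1))
```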
In this test, we discretize the spatial and energy domains into $128$ and $32$ elements, respectively, and use quadratic elements ($k=2$) and SSPRK3 time stepping. In the computational domain, the moments are initially set to $\mathcal{D}=1\times10^{-40}$ and $\mathcal{I}^{1}=0$ for all $(x^{1},\varepsilon)\in D_{x^{1}}\times D_{\varepsilon}$. At the inner spatial boundary, we impose an incoming, forward-peaked radiation field with a Fermi-Dirac spectrum; i.e., we set $\mathcal{D}(\varepsilon,x^{1}=0)=1/[\exp(\varepsilon/3-3)+1]$ and $\mathcal{I}^{1}(\varepsilon,x^{1}=0)=0.999\times\mathcal{D}(\varepsilon,x^{1}=0)$, so that the flux factor $h\approx1$. (We impose outflow boundary conditions at $x^{1}=10$.) Then, for $t>0$, a radiation front propagates through the computational domain, and a steady state is established for $t\gtrsim10$, where the spectrum is Doppler-shifted according to the velocity field. From special relativistic considerations, similar to [@just_etal_2015], the analytical spectral number density in the steady state can be written as $$\mathcal{D}_{\rm A}=\frac{s^{2}}{\exp(s\varepsilon/3-3)+1}, \label{eq:dopplerSpectraSR}$$ where $s=\sqrt{(1+v)/(1-v)}$. The purpose of this test is to (i) compare steady state numerical solutions with the prediction from special relativity given by Eq. [\[eq:dopplerSpectraSR\]](#eq:dopplerSpectraSR){reference-type="eqref" reference="eq:dopplerSpectraSR"}, and (ii) investigate the simultaneous Eulerian-frame number and energy conservation properties of the method as the initial conditions are evolved to steady state. To reach an approximate steady state, we run all models until $t=20$.

### General Solution Characteristics
![Steady state solutions ($t=20$) for the streaming Doppler shift problem for various $v_{\max}\in\{0.0,0.1,0.2,0.4\}$. In the top panels, we plot spectra at $x^{1}=5$ ($\mathcal{D}\varepsilon^{2}$ versus $\varepsilon$; left) and the RMS energy, as defined in Eq. [\[eq:rmsEnergy\]](#eq:rmsEnergy){reference-type="eqref" reference="eq:rmsEnergy"}, versus position $x^{1}$ (right). In these panels, solid lines represent the analytic (A) solution from special relativistic considerations, given by Eq. [\[eq:dopplerSpectraSR\]](#eq:dopplerSpectraSR){reference-type="eqref" reference="eq:dopplerSpectraSR"}, while dotted lines represent the numerical (N) results. In all panels, black, red, blue, and magenta curves represent runs with $v_{\max}$ set to $0.0$, $0.1$, $0.2$, and $0.4$, respectively. In the bottom left panel, the Eulerian-frame (solid lines) and comoving-frame (dotted lines) number densities are plotted versus position. In the bottom right panel, the Eulerian-frame (solid lines) and comoving-frame (dotted lines) energy densities are plotted versus position.](StreamingDopplerShift_Paper_1_Spectra.pdf){#fig:StreamingDopplerShift_1 width="44%"}
![Streaming Doppler shift problem, top right panel: RMS energy versus position (see the preceding caption for details).](StreamingDopplerShift_Paper_1_RMS.pdf){#fig:StreamingDopplerShift_1 width="44%"}
![Streaming Doppler shift problem, bottom left panel: Eulerian-frame and comoving-frame number densities versus position (see the preceding caption).](StreamingDopplerShift_Paper_1_Number.pdf){#fig:StreamingDopplerShift_1 width="44%"}
![Streaming Doppler shift problem, bottom right panel: Eulerian-frame and comoving-frame energy densities versus position (see the preceding caption).](StreamingDopplerShift_Paper_1_Energy.pdf){#fig:StreamingDopplerShift_1 width="44%"}

Figure [16](#fig:StreamingDopplerShift_1){reference-type="ref" reference="fig:StreamingDopplerShift_1"} displays steady state solution characteristics for models where we have varied $v_{\max}\in\{0.0,0.1,0.2,0.4\}$. From the top left panel, we see that the $\mathcal{O}(v)$ spectra (dotted), which for $v_{\max}>0$ are redshifted relative to the case with $v_{\max}=0$, agree well with the analytic, special relativistic results (solid) for the lower values of $v_{\max}$, while the difference between the $\mathcal{O}(v)$ and the special relativistic results is larger when $v_{\max}=0.4$, which is to be expected since $\mathcal{O}(v^{2})$ terms are no longer negligible.
The top right panel shows the RMS energy, defined as $$\varepsilon_{\rm RMS} = \sqrt{\int_{D_{\varepsilon}}\mathcal{D}\varepsilon^{5}d\varepsilon/\int_{D_{\varepsilon}}\mathcal{D}\varepsilon^{3}d\varepsilon}, \label{eq:rmsEnergy}$$ versus position, computed from the numerical, $\mathcal{O}(v)$ solution and the analytic, special relativistic solution in Eq. [\[eq:dopplerSpectraSR\]](#eq:dopplerSpectraSR){reference-type="eqref" reference="eq:dopplerSpectraSR"}. Indeed, at $x^{1}=5$, the relative difference in $\varepsilon_{\rm RMS}$ between the $\mathcal{O}(v)$ and the special relativistic result for $v_{\max}=0.1$ is $4.8\times10^{-3}$, while it is $2.0\times10^{-2}$ and $9.1\times10^{-2}$ for $v_{\max}=0.2$ and $v_{\max}=0.4$, respectively. That is, the relative error in the RMS energy increases roughly as $\mathcal{O}(v^{2})$. The bottom panels of Figure [16](#fig:StreamingDopplerShift_1){reference-type="ref" reference="fig:StreamingDopplerShift_1"} show the Eulerian- and comoving-frame number densities ($N$ and $D$, respectively; left), and the Eulerian- and comoving-frame energy densities ($E$ and $J$, respectively; right) versus position. Here, $$\big\{\,N,\,D,\,E,\,J\,\big\}=4\pi\int_{D_{\varepsilon}}\big\{\,\mathcal{N},\,\mathcal{D},\,\mathcal{E},\,\mathcal{D}\varepsilon\,\big\}\varepsilon^{2}d\varepsilon,$$ where $\mathcal{N}$ and $\mathcal{E}$ are defined in Eqs. [\[eq:eachconservedMoments\]](#eq:eachconservedMoments){reference-type="eqref" reference="eq:eachconservedMoments"} and [\[eq:conservedEnergy\]](#eq:conservedEnergy){reference-type="eqref" reference="eq:conservedEnergy"}, respectively. Relative to where $v=0$, both $D$ and $J$ are lower in the region where $v>0$, which is expected from the redshifted spectra displayed in the top left panel in Figure [16](#fig:StreamingDopplerShift_1){reference-type="ref" reference="fig:StreamingDopplerShift_1"}. The Eulerian-frame quantities, $N$ and $E$, are practically unaffected by the velocity field, and remain relatively constant throughout the spatial domain. ### Simultaneous Number and Energy Conservation Next, we investigate the simultaneous number and energy conservation properties of the scheme as a function of time for $t\in[0,20]$. Here, a main challenge stems from the fact that, since the flux factor $h\approx1$, the moments evolve close to the boundary of the realizable set $\mathcal{R}$. With high-order polynomials ($k\ge1$), the solution can become non-realizable in one or more quadrature points in some elements, which then triggers the realizability-enforcing limiter discussed in Section [5.2](#sec:realizabilityLimiter){reference-type="ref" reference="sec:realizabilityLimiter"}. The realizability-enforcing limiter preserves the Eulerian-frame particle number, but not the Eulerian-frame energy, which is the reason for introducing the 'energy limiter' in Section [6.2](#sec:EnergyLimiter){reference-type="ref" reference="sec:EnergyLimiter"}. Here, we demonstrate the performance of the energy limiter, and its effect on the simultaneous number and energy conservation properties of the method. Recall from Proposition [Proposition 1](#prop:EnergyandMomentumConservation){reference-type="ref" reference="prop:EnergyandMomentumConservation"} that the $\mathcal{O}(v)$ two-moment model is conservative for the Eulerian-frame energy only to $\mathcal{O}(v^{2})$. 
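The quantities compared in the top panels can be reproduced from Eqs. [\[eq:dopplerSpectraSR\]](#eq:dopplerSpectraSR){reference-type="eqref" reference="eq:dopplerSpectraSR"} and [\[eq:rmsEnergy\]](#eq:rmsEnergy){reference-type="eqref" reference="eq:rmsEnergy"}; the sketch below is illustrative only and uses a simple trapezoid quadrature rather than the discretization employed by the scheme.

``` python
# Illustrative sketch: the special relativistic reference spectrum of
# Eq. (dopplerSpectraSR) and the RMS energy of Eq. (rmsEnergy), evaluated with
# a simple trapezoid rule (not the quadrature used by the scheme).
import numpy as np

eps = np.linspace(1.0e-3, 50.0, 1000)   # energy grid on D_eps = [0, 50]

def spectrum_SR(eps, v):
    s = np.sqrt((1.0 + v) / (1.0 - v))
    return s**2 / (np.exp(s * eps / 3.0 - 3.0) + 1.0)

def rms_energy(D, eps):
    num = np.trapz(D * eps**5, eps)
    den = np.trapz(D * eps**3, eps)
    return np.sqrt(num / den)

for v in (0.0, 0.1, 0.2, 0.4):
    print(v, rms_energy(spectrum_SR(eps, v), eps))
```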
In the context of the current test, the Eulerian-frame number density satisfies a conservation law of the form $$\partial_{t}N+\partial_{1}F_{N}^{1} = 0.$$ Integration over space $D_{x^{1}}$ and time $[0,t]$ gives $$\underbrace{\int_{D_{x^{1}}}[\,N(x^{1},t)-N(x^{1},0)]\,dx^{1}}_{\Delta N_{\rm int}(t)} + \underbrace{\int_{0}^{t}[F_{N}^{1}|_{x^{1}=10}-F_{N}^{1}|_{x^{1}=0}]\,d\tau}_{\Delta N_{\rm ext}(t)} = 0, \label{eq:numberBalance}$$ where $\Delta N_{\rm int}(t)$ and $\Delta N_{\rm ext}(t)$ represent the change in the total number of particles, from $t=0$ to $t$, *interior* and *exterior* to the domain $D_{x^{1}}$, respectively. Since there is no creation or destruction of particles, the sum vanishes. We can obtain a similar expression for the Eulerian-frame energy (with $E$ replacing $N$ in Eq. [\[eq:numberBalance\]](#eq:numberBalance){reference-type="eqref" reference="eq:numberBalance"}), but for the $\mathcal{O}(v)$ two-moment model considered here, by Proposition [Proposition 1](#prop:EnergyandMomentumConservation){reference-type="ref" reference="prop:EnergyandMomentumConservation"}, one would in general expect $$\Delta E_{\rm int} + \Delta E_{\rm ext} = \mathcal{O}(v^{2}), \label{eq:energyBalance}$$ at the continuous level. At the discrete level, with consistent discretization of the left-hand side of the two-moment model, Eulerian-frame energy violations of $\mathcal{O}(v^{2})$ should be considered optimal. (Recall the discussion on this issue specific to the DG scheme from Section [6.1](#sec:dgConservation){reference-type="ref" reference="sec:dgConservation"}.) For this test, given our chosen spatial resolution and use of quadratic elements, velocity jumps across elements are small, and we expect to observe near optimal Eulerian-frame energy conservation properties. However, the acceptable level of Eulerian-frame energy nonconservation is application dependent, and should be considered on a case-by-case basis.
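The bookkeeping behind Eq. [\[eq:numberBalance\]](#eq:numberBalance){reference-type="eqref" reference="eq:numberBalance"} can be summarized in a short post-processing sketch; the function below is schematic (names and inputs are placeholders, not part of the code used for the runs reported here).

``` python
# Schematic post-processing sketch of Eq. (numberBalance): accumulate the
# interior change of the Eulerian number density and the net flux through the
# boundaries, whose sum should vanish.  Inputs are placeholders.
import numpy as np

def number_balance(x, N_snapshots, t, F_in, F_out):
    """Return (DeltaN_int, DeltaN_ext) at each snapshot time t[i].

    x           : spatial grid
    N_snapshots : sequence of N(x) profiles, one per time in t
    F_in, F_out : Eulerian number fluxes F_N^1 at x^1 = 0 and x^1 = 10,
                  sampled at the times in t
    """
    t, F_in, F_out = map(np.asarray, (t, F_in, F_out))
    N0 = np.asarray(N_snapshots[0])
    dN_int = np.array([np.trapz(np.asarray(N) - N0, x) for N in N_snapshots])
    dN_ext = np.array([np.trapz(F_out[:i + 1] - F_in[:i + 1], t[:i + 1])
                       for i in range(len(t))])
    return dN_int, dN_ext   # dN_int + dN_ext should vanish to machine precision
```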
![Plot of Eulerian-frame number (left) and energy (right) balance versus time for the streaming Doppler shift problem for a run with $v_{\max}=0.2$. In the left panel, $\Delta N_{\rm int}$ (solid), $\Delta N_{\rm ext}$ (dash-dotted), and $\Delta N_{\rm int}+\Delta N_{\rm ext}$ (dotted), see Eq. [\[eq:numberBalance\]](#eq:numberBalance){reference-type="eqref" reference="eq:numberBalance"}, are plotted. Similarly, in the right panel, $\Delta E_{\rm int}$ (solid), $\Delta E_{\rm ext}$ (dash-dotted), and $\Delta E_{\rm int}+\Delta E_{\rm ext}$ (dotted), see Eq. [\[eq:energyBalance\]](#eq:energyBalance){reference-type="eqref" reference="eq:energyBalance"}, are plotted.](StreamingDopplerShift_Paper_2_NumberBalance.pdf){#fig:StreamingDopplerShift_Balance width="44%"}
![Streaming Doppler shift problem, right panel: Eulerian-frame energy balance versus time (see the preceding caption for details).](StreamingDopplerShift_Paper_2_EnergyBalance.pdf){#fig:StreamingDopplerShift_Balance width="44%"}
Figure [18](#fig:StreamingDopplerShift_Balance){reference-type="ref" reference="fig:StreamingDopplerShift_Balance"} plots the number and energy balance versus time for a run with $v_{\max}=0.2$. Initially, there are essentially no particles in the computational domain $D_{x^{1}}$, and the flux at the outer boundary is zero. For $t>0$, particles enter the domain through the inner boundary, and $\Delta N_{\rm int}$ begins to increase linearly with time, while $\Delta N_{\rm ext}$ decreases at the same rate, and the sum $\Delta N_{\rm int}+\Delta N_{\rm ext}$ remains zero to machine precision (see also Figure [22](#fig:StreamingDopplerShift_Change){reference-type="ref" reference="fig:StreamingDopplerShift_Change"}). Around $t=10$, the particles that entered the domain at $t=0$ reach the outer boundary, establishing a balance between particles entering and leaving the domain, and the system reaches a steady state where both $\Delta N_{\rm int}$ and $\Delta N_{\rm ext}$ remain unchanged. The evolution observed for the Eulerian-frame energy quantities ($\Delta E_{\rm int}$ and $\Delta E_{\rm ext}$) is similar to that for the particle number, and, on the scale of the ordinate on the right panel in Figure [18](#fig:StreamingDopplerShift_Balance){reference-type="ref" reference="fig:StreamingDopplerShift_Balance"}, the sum $\Delta E_{\rm int}+\Delta E_{\rm ext}$ remains close to zero.
![Results from the streaming Doppler shift problem for models with various values of $v_{\max}$. In the upper left panel the relative change in the Eulerian-frame particle number, defined as $(\Delta N_{\rm int}+\Delta N_{\rm ext})/\Delta N_{\rm int}(t=20)$, is plotted versus time. Similarly, in the upper right panel, the relative change in the Eulerian-frame energy, defined as $(\Delta E_{\rm int}+\Delta E_{\rm ext})/\Delta E_{\rm int}(t=20)$, is plotted. In these panels, solid lines represent results obtained with the fiducial algorithm with the energy limiter on, while dotted lines represent results obtained with the energy limiter turned off. In the lower left panel, the minimum (solid) and mean (dashed) value of the limiter parameter $\theta_{\boldsymbol{K}}^{\boldsymbol{\mathcal{U}}}$ (see Algorithm [\[algo:realizabilityLimiter\]](#algo:realizabilityLimiter){reference-type="ref" reference="algo:realizabilityLimiter"}) are plotted versus time for the fiducial models with the energy limiter on. Results with $v_{\max}$ set to $0.1$, $0.2$, and $0.3$ are plotted with red, blue, and magenta curves, respectively. In the lower right panel, the relative change in the Eulerian-frame energy at $t=20$ is plotted versus $v_{\max}$ for models where the energy limiter is on (solid lines) and off (dotted lines) (the dashed black reference line is proportional to $v_{\max}^{2}$).](StreamingDopplerShift_Paper_2_NumberChange.pdf){#fig:StreamingDopplerShift_Change width="44%"}
![Streaming Doppler shift problem, upper right panel: relative change in the Eulerian-frame energy versus time (see the preceding caption for details).](StreamingDopplerShift_Paper_2_EnergyChange.pdf){#fig:StreamingDopplerShift_Change width="44%"}
![Streaming Doppler shift problem, lower left panel: minimum and mean values of the limiter parameter $\theta_{\boldsymbol{K}}^{\boldsymbol{\mathcal{U}}}$ versus time (see the preceding caption).](StreamingDopplerShift_Paper_2_Theta2.pdf){#fig:StreamingDopplerShift_Change width="44%"}
![Streaming Doppler shift problem, lower right panel: relative change in the Eulerian-frame energy at $t=20$ versus $v_{\max}$ (see the preceding caption).](StreamingDopplerShift_Paper_2_EnergyChangeFinal.pdf){#fig:StreamingDopplerShift_Change width="44%"}
Figure [22](#fig:StreamingDopplerShift_Change){reference-type="ref" reference="fig:StreamingDopplerShift_Change"} shows select results from runs with $v_{\max}\in\{0.05,0.1,0.2,0.3,0.4\}$ and provides further details on the simultaneous number and energy conservation properties of the scheme when applied to the streaming Doppler shift problem. For comparison, we have also run all the models with the energy limiter turned off. The relative change in the total number of particles (top left panel) is at the level of machine precision for all values of $v_{\max}$ (with and without the energy limiter), which is expected since the two-moment model is formulated in number conservative form. As mentioned earlier, the moments evolve close to the boundary of the realizable set in this test, and the realizability-enforcing limiter is frequently invoked to enforce pointwise realizability in all elements. From the lower left panel in Figure [22](#fig:StreamingDopplerShift_Change){reference-type="ref" reference="fig:StreamingDopplerShift_Change"}, we observe that the minimum value of the limiter parameter $\theta_{\boldsymbol{K}}^{\boldsymbol{\mathcal{U}}}$ (solid lines) varies between $0$ and $0.1$ for $t\lesssim10$, while the average value of $\theta_{\boldsymbol{K}}^{\boldsymbol{\mathcal{U}}}$ (dashed lines) grows from about $0.8$ to $1$ during this time. For $t\le10$, a discontinuity in the moments, driven by the inner boundary condition, propagates through the domain and is mainly responsible for triggering the realizability-enforcing limiter. Recall that $\theta_{\boldsymbol{K}}^{\boldsymbol{\mathcal{U}}}=0$ implies full limiting where all the moments within an element are set to their respective cell average, while $\theta_{\boldsymbol{K}}^{\boldsymbol{\mathcal{U}}}=1$ implies no limiting.
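The limiting operation referred to here can be summarized schematically as a convex scaling of the nodal moments toward their cell averages; the sketch below is generic (the realizability conditions that determine $\theta_{\boldsymbol{K}}^{\boldsymbol{\mathcal{U}}}$, described in Section [5.2](#sec:realizabilityLimiter){reference-type="ref" reference="sec:realizabilityLimiter"}, are not reproduced here).

``` python
# Schematic of the limiting operation referred to above: nodal moment values in
# an element are scaled toward their cell averages by a factor theta in [0, 1],
# so theta = 0 replaces the moments by their cell averages and theta = 1 leaves
# them untouched.  (Generic convex scaling; the realizability conditions that
# determine theta are not reproduced here.)
import numpy as np

def limit_toward_cell_average(U_nodes, U_avg, theta):
    """U_nodes: nodal moment values in one element; U_avg: its cell average."""
    return theta * np.asarray(U_nodes) + (1.0 - theta) * U_avg
```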
After $t\approx10$, the average $\theta_{\boldsymbol{K}}^{\boldsymbol{\mathcal{U}}}$ value hovers around unity, but a few elements still require significant limiting when $t>10$, especially for the $v_{\max}=0.4$ model, with the minimum $\theta_{\boldsymbol{K}}^{\boldsymbol{\mathcal{U}}}$ still dropping down close to zero. Closer inspection reveals that a few elements around the location of the velocity gradients ($x^{1}\in[2,3.5)$ and $x^{1}\in[6.5,8)$) require limiting beyond $t=10$. The relative change in the Eulerian-frame energy is plotted in the top right panel of Figure [22](#fig:StreamingDopplerShift_Change){reference-type="ref" reference="fig:StreamingDopplerShift_Change"}, which reveals a significant improvement in conservation for the fiducial models with the energy limiter turned on (solid lines), when compared to models without the energy limiter (dotted lines). (We compute the relative change in number and energy by normalizing by interior values at $t=20$, when the system is in steady state, because the initial interior values are close to zero.) For the models without the energy limiter, the relative change in total energy immediately jumps to about $1\times10^{-5}$, and continues to grow for later times, while it remains relatively constant for the fiducial models. This implies that the realizability-enforcing limiter is the main driver of Eulerian-frame energy nonconservation for this test, and not the inherent nonconservation properties of the continuum $\mathcal{O}(v)$ two-moment model. (Velocity jumps across elements are small, and we infer from our results that their contribution to energy nonconservation is negligible.) The relative change in the Eulerian-frame energy at $t=20$ is plotted versus $v_{\max}$ in the lower right panel of Figure [22](#fig:StreamingDopplerShift_Change){reference-type="ref" reference="fig:StreamingDopplerShift_Change"}. For the models with the energy limiter turned off, the relative change is essentially independent of $v_{\max}$, while Eulerian-frame energy violations grow as $v_{\max}^{2}$ for the fiducial models. Note that the energy limiter only recovers energy conservation violations caused by the realizability-enforcing limiter. Energy conservation violations caused by the use of the $\mathcal{O}(v)$ two-moment model are unaffected by the energy limiter. Since we observe the $v_{\max}^{2}$ scaling when using the energy limiter, we posit that the DG discretization maintains consistency with the continuum model on this aspect. ## Transparent Shock {#sec:transparentShock} In this test, we investigate the performance of the method when the background velocity gradient is varied. We consider a one-dimensional spatial domain $D_{x^{1}}=[0,2]$, set the opacities $\chi=\sigma=0$, and the velocity field $\boldsymbol{v}=(v,0,0)^{\intercal}$, where $$v(x^{1}) = \frac{1}{2} v_{\max} \times\big[\,1-\tanh\big(\,(x^{1}-1)/H\,\big)\,\big]. \label{eq:TransparentShockVelocity}$$ We will vary both the velocity magnitude $v_{\max}$ and gradient, parametrized by the length scale $H$. The energy domain is again $D_{\varepsilon}=[0,50]$. We discretize the spatial and energy domains using $80$ and $32$ elements, respectively, and use quadratic elements ($k=2$) and SSPRK3 time stepping. Then, $\Delta x^{1}=0.025$, and $\Delta x^{1}/H=5/6$, $2.5$, and $25$, for $H=3\times10^{-2}$, $10^{-2}$, and $10^{-3}$, respectively. 
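For concreteness, the velocity profile in Eq. [\[eq:TransparentShockVelocity\]](#eq:TransparentShockVelocity){reference-type="eqref" reference="eq:TransparentShockVelocity"} and the quoted ratios $\Delta x^{1}/H$ can be reproduced with the following sketch (illustrative only; the names are ours).

``` python
# Illustrative sketch: the smoothed "shock" velocity profile of
# Eq. (TransparentShockVelocity) and the ratio of element width to shock width
# quoted above.
import numpy as np

def shock_velocity(x1, v_max, H):
    return 0.5 * v_max * (1.0 - np.tanh((np.asarray(x1, dtype=float) - 1.0) / H))

dx = 2.0 / 80   # 80 elements on [0, 2]
for H in (3.0e-2, 1.0e-2, 1.0e-3):
    # prints 5/6, 2.5, and 25: resolved, under-resolved, and discontinuous
    print(H, dx / H)
```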
We use the same boundary conditions as in the Doppler shift test, and the moments are initially set to $\mathcal{D}=1\times 10^{-8}$ and $\mathcal{I}^{1}=0$ for all $(x^{1},\varepsilon)\in D_{x^{1}}\times D_{\varepsilon}$. With the given initial and boundary conditions, the equations are integrated until $t=3$, when an approximate steady state has been established. Figure [24](#fig:TransparentShock_Profiles){reference-type="ref" reference="fig:TransparentShock_Profiles"} shows velocity profiles and comoving-frame and Eulerian-frame number densities versus position around the 'shock' ($x^{1}\in[0.9,1.1]$) for the different values of $H$ for the case with $v_{\max}=-0.1$. (The markers indicate locations of LG quadrature points in each spatial element.) For $H=3\times10^{-2}$, the shock is resolved by the spatial grid, for $H=10^{-2}$ it is under-resolved, while for $H=10^{-3}$, the velocity profile is discontinuous. The comoving-frame number densities increase across the velocity gradient because of the Doppler effect, increasing the particle energy measured by the comoving observer, who is moving towards the inner boundary. Beyond the shock, the values for the computed comoving-frame number densities (solid lines) are about $0.5~\%$ higher than the analytic values obtained using Eq. [\[eq:dopplerSpectraSR\]](#eq:dopplerSpectraSR){reference-type="eqref" reference="eq:dopplerSpectraSR"} (dashed lines), and this fact is independent of the value of $H$. The Eulerian-frame number densities, which should remain unaffected by the presence of the background velocity, are essentially constant across the shock. These results indicate that the method is able to capture Doppler shifts correctly, even when velocity gradients are large.
![Results from the transparent shock problem for $v_{\max}=-0.1$ and various values of the shock width parameter $H$, defined in Eq. [\[eq:TransparentShockVelocity\]](#eq:TransparentShockVelocity){reference-type="eqref" reference="eq:TransparentShockVelocity"}. In the left panel, velocity profiles are plotted versus position for $H=3\times10^{-2}$, $10^{-2}$, and $10^{-3}$ (black, red, and blue curves, respectively). In the right panel, the numerical and analytic (special relativistic) comoving-frame number densities (solid lines with markers and dashed lines, respectively), and the numerical Eulerian-frame number densities (dotted lines with markers) are plotted versus position. For the results displayed in the right panel, the line colors correspond to the velocity profile with the matching color plotted in the left panel.](TransparentShock_VelocityProfiles.pdf){#fig:TransparentShock_Profiles width="44%"}
![Transparent shock problem, right panel: comoving-frame and Eulerian-frame number densities versus position (see the preceding caption for details).](TransparentShock_NumberDensities.pdf){#fig:TransparentShock_Profiles width="44%"}
In Figure [26](#fig:TransparentShock_Energies){reference-type="ref" reference="fig:TransparentShock_Energies"}, the left panel displays the relative change in the Eulerian-frame energy, defined by the left-hand side of Eq. [\[eq:energyBalance\]](#eq:energyBalance){reference-type="eqref" reference="eq:energyBalance"}, versus $|v_{\max}|$ at $t=3$, for various values of $H$. The right panel displays the relative error in $\varepsilon_{\rm RMS}$. Both panels display results obtained with and without the energy limiter. (The relative change in the Eulerian-frame particle number, not shown, is at the level of machine precision for all models.) Considering the results obtained with the energy limiter active, in the left panel we observe that, for a given value of $H$, the relative change in the total energy increases with increasing $|v_{\max}|$, roughly proportional to $|v_{\max}|^{2}$. Also, for a given value of $|v_{\max}|$, the relative change in total energy increases with decreasing shock width $H$. For the models with the energy limiter turned off, the behavior is different in a large region of the $(v_{\max},H)$-space. With $H=3\times10^{-2}$ (dotted black line), the relative change in the Eulerian-frame energy at $t=3$ is around $2\times10^{-5}$, independent of $v_{\max}$. This can be attributed to the realizability-enforcing limiter.
For the models with $H=10^{-2}$ (dotted red line), the energy change is roughly constant until $|v_{\max}|=0.1$, when the relative energy change begins to increase with $|v_{\max}|$ in a manner similar to the models with the energy limiter activated (solid red line). The models with the steepest velocity gradient ($H=10^{-3}$; dotted blue line) follow the corresponding models with active energy limiter, and the relative change in the Eulerian-frame energy increases as $|v_{\max}|^{2}$ for all $|v_{\max}|$. From this we conclude that the energy limiter can help to recover the $\mathcal{O}(v^{2})$ Eulerian-frame energy conservation property of the $\mathcal{O}(v)$ two-moment model for small velocities and velocity gradients. The relative error in $\varepsilon_{\rm RMS}$, which increases as $|v_{\max}|^{2}$ for all models, is essentially unaffected by the energy limiter.

![Results for the transparent shock problem, plotted versus $|v_{\max}|$. The left panel displays the relative change in the Eulerian-frame energy at $t=3$ for models where the energy limiter is on (solid lines) and off (dotted lines). The right panel displays the absolute relative difference in $\varepsilon_{\rm RMS}$ with respect to the exact solution from special relativity at $x^{1}=2$. In both panels, black, red, and blue curves correspond to $H=3\times10^{-2}$, $10^{-2}$, and $10^{-3}$, respectively, and the dashed reference lines are proportional to $|v_{\max}|^{2}$.](TransparentShock_EnergyChangeFinal.pdf){#fig:TransparentShock_Energies width="45%"}
![Transparent shock problem, right panel: absolute relative difference in $\varepsilon_{\rm RMS}$ with respect to the special relativistic solution (see the preceding caption for details).](TransparentShock_RmsError.pdf){#fig:TransparentShock_Energies width="45%"}
](TransparentShock_RmsError.pdf){#fig:TransparentShock_Energies width="45%"} -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ## Transparent Vortex {#sec:transparentVortex} The final test, inspired by the test in Section 4.2.3 of [@just_etal_2015], considers evolution in a two-dimensional spatial domain $D_{x^{1}}\times D_{x^{2}}=[-5,5]\times[-5,5]$. We set the opacities $\chi=\sigma=0$, and the velocity field is given by $\boldsymbol{v}=[v^{1},v^{2},0]^{\intercal}$, where $$\begin{aligned} v^{1}(x^{1},x^{2}) &= - v_{\max} \, x^{2} \, \exp\big[\,(1-r^{2})/2\,\big], \\ v^{2}(x^{1},x^{2}) &= \hspace{8pt} v_{\max} \, x^{1} \, \exp\big[\,(1-r^{2})/2\,\big], \end{aligned}$$ [\[eq:vortexVelocityField\]]{#eq:vortexVelocityField label="eq:vortexVelocityField"} and $r=\sqrt{(x^{1})^{2}+(x^{2})^{2}}$. The energy domain is $D_{\varepsilon}=[0,50]$, and we discretize the spatial and energy domains using $48\times48$ and $32$ elements, respectively. We use quadratic elements ($k=2$) and SSPRK3 time stepping. The upper left panel in Figure [30](#fig:TransparentVortex){reference-type="ref" reference="fig:TransparentVortex"} shows the velocity field for the case with $v_{\max}=0.1$. The main purpose of this test is to investigate the robustness of the method in configurations where the radiation field propagates through a spatially variable velocity field with various relative angles between the radiation flux and velocity vectors. The moments are initially set to $\mathcal{D}=1\times 10^{-8}$ and $\mathcal{I}^{1}=\mathcal{I}^{2}=0$ for all $(x^{1},x^{2},\varepsilon)\in D_{x^{1}}\times D_{x^{2}}\times D_{\varepsilon}$. At the inner $x^{1}$ boundary, we impose an incoming, radiation field with a Fermi-Dirac spectrum: We set $\mathcal{D}(\varepsilon,x^{1}=-5,x^{2})=0.05/[\exp(\varepsilon/3-3)+1]$, $\mathcal{I}^{1}(\varepsilon,x^{1}=-5,x^{2})=0.95\times\mathcal{D}(\varepsilon,x^{1}=-5,x^{2})$, and $\mathcal{I}^{2}(\varepsilon,x^{1}=-5,x^{2})=0$, so that the flux factor is $h=0.95$. With these initial and boundary conditions, the moment equations are integrated until a steady state is reached ($t=20$). 
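To make the setup concrete, the following minimal Python sketch (an illustration only; the function names are ours and are not part of [thornado]{.smallcaps}) evaluates the velocity field of Eq. [\[eq:vortexVelocityField\]](#eq:vortexVelocityField){reference-type="eqref" reference="eq:vortexVelocityField"} and the boundary spectrum on a uniform grid.

```python
import numpy as np

def vortex_velocity(x1, x2, v_max=0.1):
    """Vortex velocity field: v^1 = -v_max*x2*exp[(1-r^2)/2], v^2 = +v_max*x1*exp[(1-r^2)/2]."""
    amp = np.exp(0.5 * (1.0 - (x1**2 + x2**2)))
    return -v_max * x2 * amp, v_max * x1 * amp

def boundary_spectrum(eps):
    """Incoming Fermi-Dirac spectrum imposed at x^1 = -5: D = 0.05 / [exp(eps/3 - 3) + 1]."""
    return 0.05 / (np.exp(eps / 3.0 - 3.0) + 1.0)

# Cell centers mimicking the 48x48 spatial element layout on [-5,5]^2.
x1 = np.linspace(-5.0, 5.0, 48, endpoint=False) + 5.0 / 48
x2 = np.linspace(-5.0, 5.0, 48, endpoint=False) + 5.0 / 48
X1, X2 = np.meshgrid(x1, x2, indexing="ij")
V1, V2 = vortex_velocity(X1, X2)

# Element edges of the 32-element energy grid on D_eps = [0,50];
# boundary moments with flux factor h = 0.95 along x^1.
eps = np.linspace(0.0, 50.0, 33)
D_in = boundary_spectrum(eps)
I1_in, I2_in = 0.95 * D_in, 0.0 * D_in

print("max |v| =", np.max(np.hypot(V1, V2)))  # close to v_max (the speed peaks at r = 1)
```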
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ![Results for the transparent vortex problem. In the upper left panel, the magnitude of the velocity, for the case with $v_{\max}=0.1$, is displayed in grayscale with velocity vectors overlaid. The black, magenta, red, and blue markers indicate spatial positions for which we plot numerical energy spectra in solid lines in the upper right panel, with line colors corresponding to the marker colors in the upper left panel. The dotted lines are analytic spectra obtained from special relativistic considerations using the local three-velocity. In the lower left panel, the relative change in Eulerian-frame energy is plotted versus time for models with $v_{\max}$ in Eq. [\[eq:vortexVelocityField\]](#eq:vortexVelocityField){reference-type="eqref" reference="eq:vortexVelocityField"} set to $0.01$ (black lines), $0.03$ (red lines), and $0.1$ (blue lines). Results obtained with and without the energy limiter are plotted using solid and dotted lines, respectively. The lower right panel plots the relative difference between the incoming and outgoing particle fluxes in the $x^{1}$-direction.](TransparentVortex_Velocity.pdf){#fig:TransparentVortex width="45%"} ![Results for the transparent vortex problem. 
In the upper left panel, the magnitude of the velocity, for the case with $v_{\max}=0.1$, is displayed in grayscale with velocity vectors overlaid. The black, magenta, red, and blue markers indicate spatial positions for which we plot numerical energy spectra in solid lines in the upper right panel, with line colors corresponding to the marker colors in the upper left panel. The dotted lines are analytic spectra obtained from special relativistic considerations using the local three-velocity. In the lower left panel, the relative change in Eulerian-frame energy is plotted versus time for models with $v_{\max}$ in Eq. [\[eq:vortexVelocityField\]](#eq:vortexVelocityField){reference-type="eqref" reference="eq:vortexVelocityField"} set to $0.01$ (black lines), $0.03$ (red lines), and $0.1$ (blue lines). Results obtained with and without the energy limiter are plotted using solid and dotted lines, respectively. The lower right panel plots the relative difference between the incoming and outgoing particle fluxes in the $x^{1}$-direction.](TransparentVortex_Spectra.pdf){#fig:TransparentVortex width="40%"} ![Results for the transparent vortex problem. In the upper left panel, the magnitude of the velocity, for the case with $v_{\max}=0.1$, is displayed in grayscale with velocity vectors overlaid. The black, magenta, red, and blue markers indicate spatial positions for which we plot numerical energy spectra in solid lines in the upper right panel, with line colors corresponding to the marker colors in the upper left panel. The dotted lines are analytic spectra obtained from special relativistic considerations using the local three-velocity. In the lower left panel, the relative change in Eulerian-frame energy is plotted versus time for models with $v_{\max}$ in Eq. [\[eq:vortexVelocityField\]](#eq:vortexVelocityField){reference-type="eqref" reference="eq:vortexVelocityField"} set to $0.01$ (black lines), $0.03$ (red lines), and $0.1$ (blue lines). Results obtained with and without the energy limiter are plotted using solid and dotted lines, respectively. The lower right panel plots the relative difference between the incoming and outgoing particle fluxes in the $x^{1}$-direction.](TransparentVortex_EnergyConservation.pdf){#fig:TransparentVortex width="42.5%"} ![Results for the transparent vortex problem. In the upper left panel, the magnitude of the velocity, for the case with $v_{\max}=0.1$, is displayed in grayscale with velocity vectors overlaid. The black, magenta, red, and blue markers indicate spatial positions for which we plot numerical energy spectra in solid lines in the upper right panel, with line colors corresponding to the marker colors in the upper left panel. The dotted lines are analytic spectra obtained from special relativistic considerations using the local three-velocity. In the lower left panel, the relative change in Eulerian-frame energy is plotted versus time for models with $v_{\max}$ in Eq. [\[eq:vortexVelocityField\]](#eq:vortexVelocityField){reference-type="eqref" reference="eq:vortexVelocityField"} set to $0.01$ (black lines), $0.03$ (red lines), and $0.1$ (blue lines). Results obtained with and without the energy limiter are plotted using solid and dotted lines, respectively. 
The lower right panel plots the relative difference between the incoming and outgoing particle fluxes in the $x^{1}$-direction.](TransparentVortex_Fluxes.pdf){#fig:TransparentVortex width="42.5%"}

------------------------------------------------------------------------ ------------------------------------------------------------------------

In the upper right panel in Figure [30](#fig:TransparentVortex){reference-type="ref" reference="fig:TransparentVortex"}, the solid lines represent numerical energy spectra at spatial locations indicated with solid markers of matching color in the upper left panel. At the location of the black marker, the velocity is close to zero, and thus the black line represents the spectrum of the incoming radiation. The red and blue markers are located where $\boldsymbol{v}=(v_{\max},0,0)^{\intercal}$ and $\boldsymbol{v}=(-v_{\max},0,0)^{\intercal}$, respectively, and the spectra at these locations are correspondingly red- and blue-shifted relative to the spectrum sampled at the location of the black marker. Analytic spectra at the locations of the black, red, and blue markers are plotted with dotted lines, which indicate good agreement between numerical and analytical solutions across all energies.
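To illustrate how such analytic estimates are obtained, the sketch below Doppler-shifts the boundary spectrum using the local three-velocity. It is a simplified estimate: we treat the radiation at the markers as forward-beamed along $x^{1}$ (the flux factor is $0.95$) and assume an energy-density-weighted definition of $\varepsilon_{\rm RMS}$; both are assumptions of this sketch rather than restatements of the implementation.

```python
import numpy as np

eps = np.linspace(0.0, 50.0, 2001)
D0 = 0.05 / (np.exp(eps / 3.0 - 3.0) + 1.0)  # boundary spectrum (sampled where v ~ 0)

def eps_rms(D, eps):
    """Energy-density-weighted RMS energy (assumed definition for this estimate)."""
    w = eps**3 * D  # energy density per unit eps: eps * D * eps^2 (grid spacing cancels)
    return np.sqrt(np.sum(eps**2 * w) / np.sum(w))

v = 0.1
gamma = 1.0 / np.sqrt(1.0 - v**2)
delta_red = gamma * (1.0 - v)    # fluid moving with the beam  -> comoving redshift
delta_blue = gamma * (1.0 + v)   # fluid moving against the beam -> comoving blueshift

e0 = eps_rms(D0, eps)
print("eps_RMS near the black marker ~", round(e0, 1))                 # about 15.6
print("Doppler estimate, red marker  ~", round(delta_red * e0, 1))     # about 14.1
print("Doppler estimate, blue marker ~", round(delta_blue * e0, 1))    # about 17.2
```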
At the locations of the black, red, and blue markers, we find that $\varepsilon_{\rm RMS}$ is approximately $15.6$, $14.2$, and $17.2$, respectively. At the location of the magenta marker, which is placed on the opposite side of the vortex (relative to the black marker), the velocity is again close to zero, and it is expected that the spectrum at this location agrees with the spectrum at the location of the black marker. Comparing the solid black and magenta lines in the upper right panel, we observe that the spectral number density is consistently higher at the location of the magenta marker (by a constant factor of about $1.07$). Comparing $\varepsilon_{\rm RMS}$ at the two locations, we find that the relative difference is less than $10^{-4}$. The lower left panel in Figure [30](#fig:TransparentVortex){reference-type="ref" reference="fig:TransparentVortex"} plots the relative change in the Eulerian-frame energy versus time for models with $v_{\max}\in\{0.01,0.03,0.1\}$. Results obtained with the energy limiter on are plotted with solid lines, while dotted lines correspond to results with the energy limiter turned off. For all models, the relative change in the Eulerian-frame energy is less than $10^{-4}$. For the models with $v_{\max}=0.1$, the relative change reaches the largest amplitudes for $t\in[4,7]$, when a radiation front, driven by the boundary condition at $x^{1}=-5$, propagates through the vortex. The model with the energy limiter recovers from this transient more effectively than the corresponding model with the energy limiter turned off. For smaller $v_{\max}$, the relative change in the Eulerian-frame energy is clearly much smaller when the energy limiter is active. These results demonstrate the contribution to Eulerian-frame energy nonconservation caused by the realizability-enforcing limiter. For both suites of models (energy limiter on or off), the relative change in the Eulerian-frame number (not shown) is at the level of machine precision for all models. The lower right panel in Figure [30](#fig:TransparentVortex){reference-type="ref" reference="fig:TransparentVortex"}, similar to Figure 6 (b) in [@just_etal_2015], shows, for $t=20$, the relative difference between the energy-integrated $x^{1}$-component of the number flux densities evaluated at the inner and outer boundaries of $D_{x^{1}}$, defined as $|I^{1}(5,x^{2})-I^{1}(-5,x^{2})|/I^{1}(-5,x^{2})$. As discussed by Just et al. [@just_etal_2015], this quantity should vanish for exact calculations, while errors of $\mathcal{O}(v^{2})$ are to be expected for the $\mathcal{O}(v)$ two-moment model. Comparing with their results, the curves plotted in our figure share similar features. Moreover, for $v_{\max}=0.01$, the maximum relative difference is $6.15\times10^{-4}$, for $v_{\max}=0.03$ it is $4.87\times10^{-3}$, while it is $5.68\times10^{-2}$ for $v_{\max}=0.1$; i.e., the maximum error grows roughly as $v_{\max}^{2}$. Although this relative difference between the number fluxes at the inner and outer boundaries in the $x^{1}$-direction grows with $v_{\max}$, we point out that, due to number conservation, the integrated number fluxes through the inner and outer boundaries balance each other. That is, in the steady state at $t=20$, $\int_{D_{x^{2}}}I^{1}(-5,x^{2})\,dx^{2}=\int_{D_{x^{2}}}I^{1}(5,x^{2})\,dx^{2}$. However, the distribution of particles along the $x^{2}$-direction becomes nonuniform in the wake of the vortex, while a uniform distribution is expected as $|\boldsymbol{v}|\to0$.
We illustrate this further in Figure [33](#fig:TransparentVortexII){reference-type="ref" reference="fig:TransparentVortexII"}. The left panel shows that, within the vortex ($r\lesssim2$), the comoving-frame number density is higher than the reference value $D_{0}$ for $x^{2}>0$, and lower than $D_{0}$ for $x^{2}<0$, which is consistent with the Doppler shift of the spectra in the respective regions. In the wake of the vortex, the comoving-frame number density is relatively higher in the region centered around $x^{2}=0$, while it is lower further away (compare red and blue regions for $x^{1}\gtrsim2$ in the left panel in Figure [33](#fig:TransparentVortexII){reference-type="ref" reference="fig:TransparentVortexII"}). The Eulerian-frame number density is relatively unaffected by the vortex for $x^{1}<0$, but exhibits a spatial distribution similar to the comoving-frame number density in the wake. In contrast, the spatial distribution of the RMS energy is more consistent with expectations: Within the vortex, $\varepsilon_{\rm RMS}>\varepsilon_{{\rm RMS},0}$ for $x^{2}>0$, while $\varepsilon_{\rm RMS}<\varepsilon_{{\rm RMS},0}$ for $x^{2}<0$. Moreover, the RMS energy returns to the reference value in the wake of the vortex, with almost uniform distribution along the $x^{2}$-direction. We do not have a complete theoretical explanation for the spatial distribution of the number densities in the wake of the vortex, but suspect that the two-moment approximation and the associated closure, which assumes that the radiation field is axisymmetric about a preferred direction in momentum space [@levermore_1984], is insufficient for capturing relativistic aberration effects. ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ![Results for the transparent vortex problem at $t=20$ for a model with $v_{\max}=0.1$. The left panel shows the relative deviation in comoving-frame number density from $D_{0}=D(x^{1}=-5,x^{2})$, $(D-D_{0})/D_{0}$, with vectors of the comoving-frame number flux $(I^{1}-I_{0}^{1},I^{2})^{\intercal}$ overlaid, where $I_{0}^{1}=I^{1}(x^{1}=-5,x^{2})$ is the first component of the comoving-frame number flux density at the inner boundary in the $x^{1}$-direction, which is subtracted to better illustrate the flow, since $|I^{2}|\ll|I^{1}|$ generally holds. Similarly, the middle panel shows the corresponding relative deviation in the Eulerian-frame number density $(N-N_{0})/N_{0}$, with vectors of the Eulerian-frame number flux $(F_{N}^{1}-F_{N,0}^{1},F_{N}^{2})^{\intercal}$. The right panel shows the relative deviation in the RMS energy, $(\varepsilon_{{\rm RMS},0}-\varepsilon_{\rm RMS})/\varepsilon_{{\rm RMS},0}$, where $\varepsilon_{{\rm RMS},0}=\varepsilon_{\rm RMS}(x^{1}=-5,x^{2})$.](TransparentVortex_LagrangianDensity.pdf){#fig:TransparentVortexII width="31%"} ![Results for the transparent vortex problem at $t=20$ for a model with $v_{\max}=0.1$. 
The left panel shows the relative deviation in comoving-frame number density from $D_{0}=D(x^{1}=-5,x^{2})$, $(D-D_{0})/D_{0}$, with vectors of the comoving-frame number flux $(I^{1}-I_{0}^{1},I^{2})^{\intercal}$ overlaid, where $I_{0}^{1}=I^{1}(x^{1}=-5,x^{2})$ is the first component of the comoving-frame number flux density at the inner boundary in the $x^{1}$-direction, which is subtracted to better illustrate the flow, since $|I^{2}|\ll|I^{1}|$ generally holds. Similarly, the middle panel shows the corresponding relative deviation in the Eulerian-frame number density $(N-N_{0})/N_{0}$, with vectors of the Eulerian-frame number flux $(F_{N}^{1}-F_{N,0}^{1},F_{N}^{2})^{\intercal}$. The right panel shows the relative deviation in the RMS energy, $(\varepsilon_{{\rm RMS},0}-\varepsilon_{\rm RMS})/\varepsilon_{{\rm RMS},0}$, where $\varepsilon_{{\rm RMS},0}=\varepsilon_{\rm RMS}(x^{1}=-5,x^{2})$.](TransparentVortex_EulerianDensity.pdf){#fig:TransparentVortexII width="31%"} ![Results for the transparent vortex problem at $t=20$ for a model with $v_{\max}=0.1$. The left panel shows the relative deviation in comoving-frame number density from $D_{0}=D(x^{1}=-5,x^{2})$, $(D-D_{0})/D_{0}$, with vectors of the comoving-frame number flux $(I^{1}-I_{0}^{1},I^{2})^{\intercal}$ overlaid, where $I_{0}^{1}=I^{1}(x^{1}=-5,x^{2})$ is the first component of the comoving-frame number flux density at the inner boundary in the $x^{1}$-direction, which is subtracted to better illustrate the flow, since $|I^{2}|\ll|I^{1}|$ generally holds. Similarly, the middle panel shows the corresponding relative deviation in the Eulerian-frame number density $(N-N_{0})/N_{0}$, with vectors of the Eulerian-frame number flux $(F_{N}^{1}-F_{N,0}^{1},F_{N}^{2})^{\intercal}$. The right panel shows the relative deviation in the RMS energy, $(\varepsilon_{{\rm RMS},0}-\varepsilon_{\rm RMS})/\varepsilon_{{\rm RMS},0}$, where $\varepsilon_{{\rm RMS},0}=\varepsilon_{\rm RMS}(x^{1}=-5,x^{2})$.](TransparentVortex_RMS.pdf){#fig:TransparentVortexII width="31%"} ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ## Performance Evaluation {#sec:performance} To demonstrate the GPU functionality and performance characteristics of the DG-IMEX method as implemented in [thornado]{.smallcaps}, we consider the Streaming Doppler Shift test, described in Section [8.4](#sec:streamingDopplerShift){reference-type="ref" reference="sec:streamingDopplerShift"}, with $v_{\max}=0.1$. To more accurately capture a production workload, the tests are performed in three spatial dimensions, with the number of elements similar to what would be used for a single process invoking [thornado]{.smallcaps} in a multiphysics simulation. The benchmark is run in two configurations, using tensor product polynomials of degree $k=1$ and $k=2$, respectively. The SSPRK2 time stepper is used for both configurations. For $k=1$, we use 16 energy elements and $96\times3\times3$ spatial elements, while 12 energy elements and $64\times2\times2$ spatial elements are used for $k=2$ --- thus keeping the total number of spatial degrees of freedom the same. Our goal is to provide a high-level demonstration of performance characteristics and the relative cost of main algorithmic components, while we defer a rigorous performance analysis to future work. The tests are performed on a single node of the Summit computer at the Oak Ridge Leadership Computing Facility (OLCF). 
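As a quick consistency check of these element counts (a minimal sketch; we assume $(k+1)^{3}$ nodes per spatial element for the tensor-product basis), the two configurations indeed carry the same number of spatial degrees of freedom:

```python
# (k+1)^3 nodes per spatial element for tensor-product polynomials of degree k.
for k, n_elements in ((1, 96 * 3 * 3), (2, 64 * 2 * 2)):
    spatial_dof = (k + 1) ** 3 * n_elements
    print(f"k={k}: spatial elements = {n_elements}, spatial DOF = {spatial_dof}")  # 6912 in both cases
```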
Each Summit node has 2 IBM POWER9 CPUs and 6 NVIDIA V100 GPUs, but here we limit our benchmarks to a single CPU or GPU. For the CPU runs, we use seven cores with one thread per core as this is the number of cores that would be available to one process if we divide the resources equally with one GPU per process. All runs use version 22.5 of the NVIDIA `nvfortran` compiler with standard `-O2` optimizations. Optimized linear algebra libraries are provided by IBM ESSL (v6.3.0) on the CPU and NVIDIA cuBLAS (v11.0.3) on the GPU. For the GPU runs, all computations are done on the GPU using OpenACC and libraries; the CPU process is only used to launch kernels and manage data transfer. In both cases, the salient metric is wall-time per time step (lower is better). Figure [35](#fig:SDS_walltime){reference-type="ref" reference="fig:SDS_walltime"} shows a breakdown of the relative cost associated with evaluating the major components of the explicit phase-space advection operator. The polynomial degree has little effect on the absolute wall-time, especially for the GPU runs. For the CPU runs, the relative cost of linear algebra (`MatMul`) is somewhat higher when $k=2$. As can be seen comparing the right and left panels, the initial guess in the conserved-to-primitive calculation can have a non-trivial impact on the total wall-time by reducing the total number of solver iterations. We measure a total speedup factor of 8--10 for the V100 relative to the multi-core CPU runs on the POWER9. Notably, the relative cost for linear algebra and limiters becomes negligible when using the GPU, and the majority of the computational cost is shifted to the iterative conserved-to-primitive calculations. We speculate that one approach to further improve the performance would be to combine the calculation of all of the primitive moments on the quadrature set $\widetilde{S}_{\otimes}^{\boldsymbol{K}}$, defined in Eq. [\[eq:AllSetUnion\]](#eq:AllSetUnion){reference-type="eqref" reference="eq:AllSetUnion"}, into a single kernel, rather than to calculate them separately for each evaluation of $\boldsymbol{\mathcal{F}}^{i}$ and $\boldsymbol{\mathcal{F}}^{\varepsilon}$, defined in Eqs. [\[eq:bilinearFormAdvectionPosition\]](#eq:bilinearFormAdvectionPosition){reference-type="eqref" reference="eq:bilinearFormAdvectionPosition"}--[\[eq:bilinearFormAdvectionEnergy\]](#eq:bilinearFormAdvectionEnergy){reference-type="eqref" reference="eq:bilinearFormAdvectionEnergy"}, which results in some duplicate evaluations. While these savings may be significant for the phase-space advection problem considered here, refactoring will be considered in the context of a more physics-complete implementation. With more realistic collision terms included, the relative cost of the explicit phase-space advection part is expected to be small (see, e.g., [@laiu_etal_2020; @laiu_etal_2021]). 
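For readers interested in how the breakdown in Figure [35](#fig:SDS_walltime){reference-type="ref" reference="fig:SDS_walltime"} is assembled, the sketch below normalizes a set of per-component timings and forms the CPU-to-GPU speedup. The numbers are hypothetical placeholders, chosen only to mirror the qualitative trends described above; they are not measured values.

```python
# Hypothetical per-step timings (seconds) for the explicit advection operator; illustrative only.
cpu = {"Flux": 0.40, "MatMul": 0.95, "Primitive": 1.10, "Limiters": 0.35, "Other": 0.20}
gpu = {"Flux": 0.05, "MatMul": 0.03, "Primitive": 0.22, "Limiters": 0.01, "Other": 0.02}

for name, timings in (("CPU", cpu), ("GPU", gpu)):
    total = sum(timings.values())
    shares = {component: round(t / total, 2) for component, t in timings.items()}
    print(name, "wall-time per step =", round(total, 2), "s, normalized breakdown:", shares)

# Speedup factor in the 8-10 range quoted in the text (for these placeholder numbers, ~9.1).
print("speedup (CPU/GPU) ~", round(sum(cpu.values()) / sum(gpu.values()), 1))
```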
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ![ Breakdown of normalized wall-time for components 
of the Streaming Doppler Shift test problem as implemented in [thornado]{.smallcaps}. The left panel uses an initial guess of $\boldsymbol{\mathcal{M}}^{[0]} = (\mathcal{N}, \boldsymbol{0})^{\intercal}$ in the conserved-to-primitive calculation, and the right panel uses $\boldsymbol{\mathcal{M}}^{[0]} = \boldsymbol{\mathcal{U}}= (\mathcal{N}, \boldsymbol{\mathcal{G}})^{\intercal}$. Absolute wall-clock times per time step are shown above each bar. `Flux` captures the calculation of fluxes $\boldsymbol{\mathcal{F}}^{i}$ and $\boldsymbol{\mathcal{F}}^{\varepsilon}$ in Eqs. [\[eq:bilinearFormAdvectionPosition\]](#eq:bilinearFormAdvectionPosition){reference-type="eqref" reference="eq:bilinearFormAdvectionPosition"}--[\[eq:bilinearFormAdvectionEnergy\]](#eq:bilinearFormAdvectionEnergy){reference-type="eqref" reference="eq:bilinearFormAdvectionEnergy"}. `MatMul` represents the matrix-matrix multiplications used throughout the explicit operator, e.g., to evaluate polynomials $\boldsymbol{\mathcal{U}}_{h}$ at quadrature points on all elements. `Primitive` captures the iterative conserved-to-primitive calculation described in Section [4.3.1](#sec:moment_conversion){reference-type="ref" reference="sec:moment_conversion"}. `Limiters` includes the application of the realizability-enforcing limiter described in Section [5.2](#sec:realizabilityLimiter){reference-type="ref" reference="sec:realizabilityLimiter"} and the energy limiter described in Section [6.2](#sec:EnergyLimiter){reference-type="ref" reference="sec:EnergyLimiter"}. `Other` is used to capture all remaining wall-time spent in the explicit step. ](walltime_sds_normalized_perstep_oldguess.pdf){#fig:SDS_walltime width="45%"} ![ Breakdown of normalized wall-time for components of the Streaming Doppler Shift test problem as implemented in [thornado]{.smallcaps}. The left panel uses an initial guess of $\boldsymbol{\mathcal{M}}^{[0]} = (\mathcal{N}, \boldsymbol{0})^{\intercal}$ in the conserved-to-primitive calculation, and the right panel uses $\boldsymbol{\mathcal{M}}^{[0]} = \boldsymbol{\mathcal{U}}= (\mathcal{N}, \boldsymbol{\mathcal{G}})^{\intercal}$. Absolute wall-clock times per time step are shown above each bar. `Flux` captures the calculation of fluxes $\boldsymbol{\mathcal{F}}^{i}$ and $\boldsymbol{\mathcal{F}}^{\varepsilon}$ in Eqs. [\[eq:bilinearFormAdvectionPosition\]](#eq:bilinearFormAdvectionPosition){reference-type="eqref" reference="eq:bilinearFormAdvectionPosition"}--[\[eq:bilinearFormAdvectionEnergy\]](#eq:bilinearFormAdvectionEnergy){reference-type="eqref" reference="eq:bilinearFormAdvectionEnergy"}. `MatMul` represents the matrix-matrix multiplications used throughout the explicit operator, e.g., to evaluate polynomials $\boldsymbol{\mathcal{U}}_{h}$ at quadrature points on all elements. `Primitive` captures the iterative conserved-to-primitive calculation described in Section [4.3.1](#sec:moment_conversion){reference-type="ref" reference="sec:moment_conversion"}. `Limiters` includes the application of the realizability-enforcing limiter described in Section [5.2](#sec:realizabilityLimiter){reference-type="ref" reference="sec:realizabilityLimiter"} and the energy limiter described in Section [6.2](#sec:EnergyLimiter){reference-type="ref" reference="sec:EnergyLimiter"}. `Other` is used to capture all remaining wall-time spent in the explicit step.
](walltime_sds_normalized_perstep_newguess.pdf){#fig:SDS_walltime width="45%"} ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 
------------------------------------------------------------------------

# Summary and Conclusions {#sec:summaryConclusions}

We have proposed and analyzed a realizability-preserving numerical method for evolving a spectral two-moment model for neutral particles interacting with a moving background fluid. This number-conservative moment model is based on comoving-frame momentum coordinates, includes special relativistic corrections to $\mathcal{O}(v)$, and, as a result, contains velocity-dependent terms accounting for spatial advection, Doppler shift, and angular aberration. The nonlinear two-moment model solves for comoving-frame angular moments, representing the number density and components of the number flux density, and is closed by expressing higher-order moments (rank-two and rank-three tensors) in terms of the evolved moments using the maximum entropy closure (both exact and approximate) due to Minerbo [@minerbo_1978]. The two-moment model is closely related to that promoted in [@lowrie_etal_2001], predicts wave speeds bounded by the speed of light (Proposition [Proposition 2](#prop:waveSpeed){reference-type="ref" reference="prop:waveSpeed"}), and is consistent, to $\mathcal{O}(v)$, with Eulerian-frame energy and momentum conservation (Proposition [Proposition 1](#prop:EnergyandMomentumConservation){reference-type="ref" reference="prop:EnergyandMomentumConservation"}). The numerical method is based on the DG method to discretize phase space and IMEX time stepping, where the phase-space advection part is integrated with explicit methods, and the collision term is integrated with implicit methods.
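To make the closure step concrete, the sketch below assembles the Eddington tensor $\mathsf{k}_{ij}=\tfrac{1}{2}\big[(1-\psi)\delta_{ij}+(3\psi-1)\hat{n}_{i}\hat{n}_{j}\big]$ from a primitive pair $(\mathcal{D},\boldsymbol{\mathcal{I}})$ with flux factor $h=|\boldsymbol{\mathcal{I}}|/\mathcal{D}$. The algebraic Eddington factor used here, $\psi_{\mathsf{a}}(h)=\tfrac{1}{3}+\tfrac{2h^{2}}{15}(3-h+3h^{2})$, is the commonly quoted low-order approximation to the Minerbo closure; it is an assumption of this sketch and stands in for the approximate closure defined earlier in the paper.

```python
import numpy as np

def psi_a(h):
    """Algebraic Eddington factor; assumed low-order approximation to the Minerbo closure."""
    return 1.0 / 3.0 + (2.0 * h**2 / 15.0) * (3.0 - h + 3.0 * h**2)

def eddington_tensor(D, I):
    """k_ij = 0.5*[(1 - psi) delta_ij + (3 psi - 1) n_i n_j], with h = |I|/D and n = I/|I|."""
    I = np.asarray(I, dtype=float)
    I_norm = np.linalg.norm(I)
    h = I_norm / D
    n = I / I_norm if I_norm > 0.0 else np.zeros(3)
    psi = psi_a(h)
    return 0.5 * ((1.0 - psi) * np.eye(3) + (3.0 * psi - 1.0) * np.outer(n, n))

# Example: moments with flux factor h = 0.95 along x^1 (as at the vortex inflow boundary).
D, I = 1.0, np.array([0.95, 0.0, 0.0])
k = eddington_tensor(D, I)
print("trace(k) =", np.trace(k))  # equals 1 for any flux factor
print("k_11     =", k[0, 0])      # tends to 1 (free streaming) as h -> 1
```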
The discretized spatial and energy derivative terms in the moment equations have been equipped with tailored numerical fluxes, which in the case of exact moment closure (Assumption [Assumption 1](#assum:exact_closure){reference-type="ref" reference="assum:exact_closure"}) allow us to derive explicit time-step restrictions that guarantee realizable cell-averaged moments due to these terms, and $\mathcal{N}>0$ overall. Unfortunately, a corresponding time-step restriction was not found for the source terms associated with phase-space advection in the number flux equation to guarantee the second moment realizability condition, in the sense of cell averages, for the evolved moments (i.e., $|\boldsymbol{\mathcal{G}}|\le\mathcal{N}$) in the general multidimensional case. However, an analysis in the semi-discrete setting revealed that the moments evolve tangentially to the boundary of the realizable domain when $|\boldsymbol{\mathcal{G}}|=\mathcal{N}$, and we found a sufficient time-step restriction to guarantee realizable cell averages in the one-dimensional, planar geometry case. Given a positive cell-averaged number density, a realizability-enforcing limiter is proposed to recover pointwise moment realizability in each element. Specific properties of the IMEX scheme (i.e., convex-invariance, as defined in [@chu_etal_2019]) extend the applicability of our results beyond the forward-backward Euler sequence analyzed in detail. Retention of specific $\mathcal{O}(v)$ terms in the time derivative of the two-moment system, motivated by the desire to maintain wave speeds bounded by the speed of light and consistency with Eulerian-frame energy and momentum conservation equations, results in increased computational complexity of the numerical scheme in two (related) ways. First, since the evolved moments are nonlinear functions of the primitive moments used to close the moment equations, a nonlinear system must be solved to recover primitive moments from evolved moments. Second, because the collision operators are formulated in terms of primitive moments, the implicit collision update requires the solution of a similar nonlinear system. For both cases, solution methods have been formulated as fixed-point problems, and we have proposed tailored fixed-point operators in Eqs. [\[eq:richardson_fixed_pt\]](#eq:richardson_fixed_pt){reference-type="eqref" reference="eq:richardson_fixed_pt"} and [\[eq:collision_fixed_point1\]](#eq:collision_fixed_point1){reference-type="eqref" reference="eq:collision_fixed_point1"}, for the primitive recovery and implicit collision solve, respectively. The fixed-point operators are designed to preserve moment realizability in each iteration (subject to mild conditions on the step size), and we have proven convergence for cases with exact *and* approximate moment closures, subject to the additional constraint $|\boldsymbol{v}|\le\sqrt{2}-1$, which is mild when considering the applicability of the $\mathcal{O}(v)$ model. Numerically, we did not observe convergence failures for the primitive recovery problem, even when violating the condition on the velocity, or when combining the algorithm with Anderson acceleration, which our analysis here did not consider. The proposed algorithm has been implemented and tested against a series of benchmark problems. Using two problems with a constant background velocity --- in the streaming and diffusion regimes, respectively --- we demonstrate the expected rate of error convergence in the $L^{2}$ norm. 
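The observed rates are obtained in the standard way from errors at successive resolutions; the following sketch uses hypothetical error values purely to illustrate the calculation.

```python
import math

# Hypothetical L2 errors on grids with N, 2N, and 4N elements (for illustration only).
errors = [1.6e-4, 2.1e-5, 2.7e-6]

# Observed order between successive refinements: p = log(e_coarse / e_fine) / log(2).
for e_coarse, e_fine in zip(errors, errors[1:]):
    print("observed order ~", round(math.log(e_coarse / e_fine) / math.log(2.0), 2))
```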
Additional tests with spatially varying (smooth and discontinuous) background velocity fields --- the Streaming Doppler Shift, Transparent Shock, and Transparent Vortex tests --- were used to document the robustness of the proposed algorithm and its qualitative accuracy with respect to special relativistic considerations (e.g., correct Doppler shifts) for sufficiently small background velocities. In these tests, the moments evolve close to the boundary of the realizable domain, and the realizability-enforcing limiter is frequently triggered to recover pointwise realizability from (guaranteed) realizable cell averages. Without this recovery procedure, the algorithm invariably fails on these challenging problems. We have analyzed the simultaneous Eulerian-frame number and energy conservation properties of the proposed method. While the DG method provides flexibility in the approximation spaces to capture conservation properties beyond those inherent to the model formulation (i.e., number conservation in the present setting), the approximation of the background velocity by piecewise polynomials from the DG approximation space, which accommodates discontinuities, can result in Eulerian-frame energy conservation errors that exceed the $\mathcal{O}(v^{2})$ scaling predicted by the continuum model. However, we found that the realizability-enforcing limiter is the main contributor to Eulerian-frame energy conservation violations when the background velocity field is smooth and its magnitude is within the range of applicability of an $\mathcal{O}(v)$ model. For this reason, an energy limiter is proposed to correct the conservation violations introduced by the realizability-enforcing limiter. This limiter trades local number conservation for number *and* energy conservation after integration over the phase-space energy dimension, and has no observed negative impact on solution accuracy, while improving Eulerian-frame energy conservation properties of the method. With the energy limiter active, we observe that energy conservation violations scale as $\mathcal{O}(v^{2})$, in accordance with the continuum model. We emphasize that the energy limiter introduces a rescaling of the moments, which does not impact moment realizability. However, the proposed strategy to promote Eulerian-frame energy conservation is not feasible without the realizability-preserving property. Our goal is to apply the proposed algorithm to model neutrino transport in core-collapse supernova simulations. Several extensions are needed to achieve this goal. First, the collision term must be extended to include a complete set of neutrino weak interactions, and the model extended to include coupling to dynamical equations for the background fluid. Second, because neutrinos are fermions, for which the Pauli exclusion principle implies an upper bound on the phase-space density and associated bounds on the moments, the analysis should be extended to apply to moment closures based on Fermi-Dirac statistics. Third, because special *and* general relativistic effects contribute to the dynamics in nontrivial ways, further development and analysis are required to design realizability-preserving methods for fully relativistic moment models. We believe the methodologies developed in this paper can be helpful in these endeavors, and hope to present progress on addressing these challenges in future work.
# Technical Proofs {#sec:appendix}

## Various Bounds for the Exact and Approximate Eddington Factors {#sec:polynomial_bounds}

In the following lemma, we list several bounds on functions dependent on the exact or approximate Eddington factors ($\psi$ or $\psi_{\mathsf{a}}$). These bounds are used in the proofs of Lemmas [Lemma 8](#lemma:dD_term){reference-type="ref" reference="lemma:dD_term"} and [Lemma 9](#lemma:dI_term){reference-type="ref" reference="lemma:dI_term"} in Sections [10.2](#sec:proof_of_dD){reference-type="ref" reference="sec:proof_of_dD"} and [10.3](#sec:proof_of_dI){reference-type="ref" reference="sec:proof_of_dI"}, respectively, as well as in the proof of Lemma [Lemma 11](#lemma:MtoU){reference-type="ref" reference="lemma:MtoU"} in Section [5.5](#sec:approxClosure){reference-type="ref" reference="sec:approxClosure"}.

**Lemma 12**. *Let $\psi$ be the Eddington factor in the exact Minerbo closure as given in Eq. [\[eq:psiZetaMinerbo\]](#eq:psiZetaMinerbo){reference-type="eqref" reference="eq:psiZetaMinerbo"} and let $$\label{eq:phi_def} \phi_1:= 3\psi-1-3\psi^\prime h\quad \text{and}\quad \phi_2:=(3\psi-1)h^{-1}\:.$$ Then, the following bounds hold when $h\in[0,1]$.*

- *(a) $-4 \leq \phi_1 \leq 0$,*
- *(b) $\phi_2^2 - \psi^\prime \phi_2\geq0$,*
- *(c) $3(\psi^\prime)^2 - 3 \psi^\prime \phi_2 +\phi_2^2 \geq0$,*
- *(d) $\partial_h(\phi_2^2 - \psi^\prime \phi_2 +(\psi^\prime)^2) >0$.*

*Moreover, let $\psi_{\mathsf{a}}$ be the approximate Eddington factor defined in Eq. [\[eq:psiApproximate\]](#eq:psiApproximate){reference-type="eqref" reference="eq:psiApproximate"} and let $$\phi_{\mathsf{a},1}:= 3\psi_{\mathsf{a}}-1-3\psi_{\mathsf{a}}^\prime h\quad \text{and}\quad \phi_{\mathsf{a},2}:=(3\psi_{\mathsf{a}}-1)h^{-1}\:.$$ Then the bounds [(a)--(d)]{.upright} hold when $(\psi,\phi_1,\phi_2)$ are replaced by $(\psi_{\mathsf{a}},\phi_{\mathsf{a},1},\phi_{\mathsf{a},2})$. In addition, the following two bounds hold for the approximate Eddington factor when $h\in[0,1)$.*

- *(e) $\psi_{\mathsf{a}} - h^2 - \frac{1}{4} (1-\psi_{\mathsf{a}})^2\geq0$,*
- *(f) $h^2\leq \psi_{\mathsf{a}} \leq 1$.*

Since both $\psi$ and $\psi_{\mathsf{a}}$ are one-dimensional functions defined for $h$ between 0 and 1, the proofs of these bounds are straightforward but rather tedious. Instead of giving rigorous proofs for these bounds, we plot the functions of interest in Figure [\[fig:psi_bounds\]](#fig:psi_bounds){reference-type="ref" reference="fig:psi_bounds"}, from which the bounds can be visually verified.

## Proof of Lemma [Lemma 8](#lemma:dD_term){reference-type="ref" reference="lemma:dD_term"} {#sec:proof_of_dD}

*Proof of Lemma [Lemma 8](#lemma:dD_term){reference-type="ref" reference="lemma:dD_term"}.* Using the definition of the closure terms $\mathsf{k}_{ij}$ in Eq.
[\[eq:VariableEddingtonTensor\]](#eq:VariableEddingtonTensor){reference-type="eqref" reference="eq:VariableEddingtonTensor"}, we have from chain rule that $$\begin{aligned} v^{i} {\partial_{\mathcal{D}} (\mathsf{k}_{ij} \mathcal{D})} &= v^{i} \left( \frac{1}{2} \big[3 \hat{n}_i\hat{n}_j - \delta_{ij}\big] \frac{\partial\psi}{\partial h} \frac{\partial h}{\partial\mathcal{D}} \mathcal{D} + \frac{1}{2} \big[(1-\psi) \delta_{ij} + (3\psi -1)\hat{n}_i\hat{n}_j\big] \right) \nonumber \\ &= v^{i} \left( - \frac{1}{2} \big[3 \hat{n}_i\hat{n}_j - \delta_{ij}\big] \psi^\prime h + \frac{1}{2} \big[(1-\psi) \delta_{ij} + (3\psi -1)\hat{n}_i\hat{n}_j\big] \right) \\ &= \frac{1}{2} (3\psi -1 - 3\psi^\prime h ) \big(v^{i} \hat{n}_i\hat{n}_j - \frac{1}{3} v_{j} \big) + \frac{1}{3} v_{j} = \frac{1}{2} \phi_1 \big(v^{i} \hat{n}_i\hat{n}_j - \frac{1}{3} v_{j} \big) + \frac{1}{3} v_{j} \:,\nonumber \end{aligned}$$ where $\phi_1:= (3\psi -1 - 3\psi^\prime h )$ as defined in Eq. [\[eq:phi_def\]](#eq:phi_def){reference-type="eqref" reference="eq:phi_def"}. Since $\|\partial_{\mathcal{D}} (v^{i} \mathsf{k}_{ij} \mathcal{D}) \|^2 = \sum_j \left( v^{i} {\partial_{\mathcal{D}} (\mathsf{k}_{ij} \mathcal{D})}\right)^2$, summing up the squares leads to $$\begin{aligned} \|\partial_{\mathcal{D}} (v^{i} \mathsf{k}_{ij} \mathcal{D}) \|^2 &= \frac{1}{4} \phi_1^2 \sum_j \left(v^{i} \hat{n}_i\hat{n}_j - \frac{1}{3} v_{j} \right)^2 + \frac{1}{3} \phi_1 v^{j}\left(v^{i} \hat{n}_i\hat{n}_j - \frac{1}{3} v_{j} \right) + \frac{1}{9} v^{j} v_{j}\nonumber \\ &= \left(\frac{\phi_1^2}{12} + \frac{\phi_1}{3} \right) (v^{i} \hat{n}_i)^2 + \left(\frac{\phi_1^2}{36} - \frac{\phi_1}{9} + \frac{1}{9}\right) v^{i} v_{i}\:.\end{aligned}$$ It follows from Lemma [Lemma 12](#lemma:polynomial_bounds){reference-type="ref" reference="lemma:polynomial_bounds"} (a) that $\phi_1(h)\in[-4,0]$ for $h\in[0,1]$. Therefore, $\left(\frac{\phi_1^2}{12} + \frac{\phi_1}{3} \right) = \frac{\phi_1}{12}\left(\phi_1 + 4 \right) \geq0$ and we have $$\|\partial_{\mathcal{D}} (v^{i} \mathsf{k}_{ij} \mathcal{D}) \|^2 \leq \left(\big(\frac{\phi_1^2}{12} + \frac{\phi_1}{3} \big) + \big(\frac{\phi_1^2}{36} - \frac{\phi_1}{9} + \frac{1}{9}\big)\right) v^{i} v_{i} = \frac{1}{9}\left( \phi_1 + 1\right)^2 v^{i} v_{i}\:.$$ Since $\phi_1\in[-4,0]$, $\frac{1}{9}\left( \phi_1 + 1\right)^2\leq1$, which concludes the proof. ◻ ## Proof of Lemma [Lemma 9](#lemma:dI_term){reference-type="ref" reference="lemma:dI_term"} {#sec:proof_of_dI} *Proof of Lemma [Lemma 9](#lemma:dI_term){reference-type="ref" reference="lemma:dI_term"}..* Using the definition of the closure terms $\mathsf{k}_{ij}$ in Eq. [\[eq:VariableEddingtonTensor\]](#eq:VariableEddingtonTensor){reference-type="eqref" reference="eq:VariableEddingtonTensor"}, we have from chain rule that $$v^{i} {\partial_{\mathcal{I}_{k}} (\mathsf{k}_{ij} \mathcal{D})} = \frac{1}{2}\,{\psi^\prime}\,\Big[\,3\,s\,\hat{n}_{j}-v_{j}\,\Big]\,\hat{n}_{k} +\frac{(3\,\psi-1)}{2h}\,\Big[\,v_{k}\,\hat{n}_{j}+s\delta_{jk}\,-2\,s\,\hat{n}_{j}\,\hat{n}_{k}\,\Big]\:.$$ To show $\|\nabla_{\boldsymbol{\mathcal{I}}}(v^{i}\mathsf{k}_{ij} \mathcal{D})\|\leq2v$, we prove in the following that $$\|\nabla_{\boldsymbol{\mathcal{I}}}(v^{i}\mathsf{k}_{ij} \mathcal{D})\boldsymbol{y}\|\leq2v y\:,\quad \forall\boldsymbol{y}=\big(y^1, y^2, y^3\big)^{\intercal},$$ where $y:=\sqrt{y^i y_i}\,$. Let $\phi_2:=(3\psi-1)h^{-1}$ as defined in Eq. [\[eq:phi_def\]](#eq:phi_def){reference-type="eqref" reference="eq:phi_def"}. 
Then, $$v^{i} {\partial_{\mathcal{I}_{k}} (\mathsf{k}_{ij} \mathcal{D})} y^{k}= \frac{1}{2}\,{\psi^\prime}\,\Big[\,3\,s\,\hat{n}_{j}-v_{j}\,\Big]\,(y^{k}\hat{n}_{k}) +\frac{1}{2}\phi_2\,\Big[\,\hat{n}_{j}(y^{k} v_{k})\,+s y_{j}\,-2\,s\,\hat{n}_{j}\,(y^{k}\hat{n}_{k})\,\Big]\:.$$ Summing up the squares leads to $$\begin{alignedat}{2} \|\nabla_{\boldsymbol{\mathcal{I}}}(v^{i}\mathsf{k}_{ij} \mathcal{D})\boldsymbol{y}\|^2 = &\sum_j \left( v^{i} {\partial_{\mathcal{I}_{k}} (\mathsf{k}_{ij} \mathcal{D})} y^{k}\right)^2 = \frac{1}{4} \phi_2^2 s^2 y^2 + \frac{1}{4}\phi_2^2 (y^{k} v_{k})^2 +\frac{1}{4}(\psi^\prime)^2 v^2 (y^{k} \hat{n}_{k})^2 \\ &\qquad+ \frac{1}{4}\Big[3(\psi^\prime)^2 - 2 \psi^\prime \phi_2 \Big]s^2 (y^{k} \hat{n}_{k})^2 + \frac{1}{2}\Big[\psi^\prime \phi_2 - \phi_2^2 \Big]s (y^{k} v_{k})(y^{k} \hat{n}_{k})\:. \end{alignedat}$$ Since $\phi_2^2 - \psi^\prime \phi_2\geq0$ (Lemma [Lemma 12](#lemma:polynomial_bounds){reference-type="ref" reference="lemma:polynomial_bounds"} (b)), we apply the inequality $-2ab\leq a^2 + b^2$ and obtain $$\frac{1}{2}\Big[\psi^\prime \phi_2 - \phi_2^2 \Big]s (y^{k} v_{k})(y^{k} \hat{n}_{k})\leq \frac{1}{4}\Big[\phi_2^2 - \psi^\prime \phi_2 \Big] (y^{k} v_{k})^2 + \frac{1}{4}\Big[\phi_2^2 - \psi^\prime \phi_2 \Big]s^2 (y^{k} \hat{n}_{k})^2\:.$$ Therefore, $$\begin{alignedat}{2} \|\nabla_{\boldsymbol{\mathcal{I}}}(v^{i}\mathsf{k}_{ij} \mathcal{D})\boldsymbol{y}\|^2 \leq& \,\frac{1}{4} \phi_2^2 s^2 y^2 + \frac{1}{4}\Big[2\phi_2^2 - \psi^\prime \phi_2 \Big](y^{k} v_{k})^2 +\frac{1}{4}(\psi^\prime)^2 v^2 (y^{k} \hat{n}_{k})^2 \\ &+ \frac{1}{4}\Big[3(\psi^\prime)^2 - 3 \psi^\prime \phi_2 +\phi_2^2\Big]s^2 (y^{k} \hat{n}_{k})^2\:. \end{alignedat}$$ Further, since $\phi_2^2 \geq0$, $(\psi^\prime)^2 \geq0$, $2\phi_2^2 - \psi^\prime \phi_2 \geq 0$ (Lemma [Lemma 12](#lemma:polynomial_bounds){reference-type="ref" reference="lemma:polynomial_bounds"} (b)), and $3(\psi^\prime)^2 - 3 \psi^\prime \phi_2 +\phi_2^2 \geq0$ (Lemma [Lemma 12](#lemma:polynomial_bounds){reference-type="ref" reference="lemma:polynomial_bounds"} (c)), we can take the upper bounds $s^2\leq v^2$, $(y^{k} v_{k})^2\leq (vy)^2$, and $(y^{k} \hat{n}_{k})^2\leq y^2$ to obtain $$\|\nabla_{\boldsymbol{\mathcal{I}}}(v^{i}\mathsf{k}_{ij} \mathcal{D})\boldsymbol{y}\|^2 \leq \, \Big[\phi_2^2 - \psi^\prime \phi_2 +(\psi^\prime)^2 \Big] {(vy)^2}\:.$$ It follows from Lemma [Lemma 12](#lemma:polynomial_bounds){reference-type="ref" reference="lemma:polynomial_bounds"} (d) that $\partial_h(\phi_2^2 - \psi^\prime \phi_2 +(\psi^\prime)^2) >0$, which implies $\max_{h\in[0,1]}\big[\phi_2^2 - \psi^\prime \phi_2 +(\psi^\prime)^2\big] = \phi_2^2(1) - \psi^\prime(1) \phi_2(1) +(\psi^\prime(1))^2 = 4$. Thus, $$\|\nabla_{\boldsymbol{\mathcal{I}}}(v^{i}\mathsf{k}_{ij} \mathcal{D})\boldsymbol{y}\|^2 \leq \, 4 {(vy)^2}\:,\quad \forall\boldsymbol{y}=\big(y^1, y^2, y^3\big)^{\intercal},$$ which proves the claim. ◻ [^1]: [www.github.com/endeve/thornado](https://github.com/endeve/thornado) [^2]: The admissible set $\mathfrak{R}$ and the realizable set $\mathcal{R}$ in this work are appropriate for particle systems obeying Bose--Einstein or Maxwell--Boltzmann statistics. The extension of this work to systems obeying Fermi--Dirac statistics, where $f$ is also bounded from above, is non-trivial and deferred to future work. 
[^3]: With this choice, at the expense of potentially increased numerical dissipation when the flux factor is small (see Figure [\[fig:WaveSpeed\]](#fig:WaveSpeed){reference-type="ref" reference="fig:WaveSpeed"}), the computation of flux Jacobian eigenvalues is avoided, and the realizability analysis is simplified.

[^4]: An example of this one-dimensional geometry is the reduced case of the full three-dimensional geometry when the fluxes in two of the three spatial dimensions are assumed to be zero. See Section [5.1.3](#sec:realizabilitySource){reference-type="ref" reference="sec:realizabilitySource"} for further discussions.

[^5]: The realizability-preserving and convergence properties of the AA solver require additional conditions such as boundedness of extrapolation coefficients, which we do not enforce in the implementation.
{ "id": "2309.04429", "title": "DG-IMEX Method for a Two-Moment Model for Radiation Transport in the\n $\\mathcal{O}(v/c)$ Limit", "authors": "M. Paul Laiu, Eirik Endeve, J. Austin Harris, Zachary Elledge, Anthony\n Mezzacappa", "categories": "math.NA astro-ph.IM cs.NA", "license": "http://creativecommons.org/licenses/by/4.0/" }
---
abstract: |
  In zero forcing, the focus is typically on finding the minimum cardinality of any zero forcing set in the graph; however, the number of cardinalities between $0$ and the number of vertices in the graph for which there are both zero forcing sets and sets that fail to be zero forcing sets is not well known. In this paper, we introduce the zero forcing span of a graph, which is the number of distinct cardinalities for which there are sets that are zero forcing sets and sets that are not. We introduce the span within the context of standard zero forcing and skew zero forcing as well as for standard zero forcing on directed graphs. We characterize graphs with high span and low span of each type, and also investigate graphs with special zero forcing polynomials.

  MSC2020: 05C50
author:
- Bonnie Jacob
title: The zero forcing span of a graph
---

Keywords: Zero forcing, Failed zero forcing, Zero forcing polynomial

# Introduction

Throughout this paper, we use $G$ to denote a finite, simple graph on $n=|V(G)|$ vertices with edge set $E(G)$. We use $D$ to denote a directed graph, or digraph, with vertex set $V(D)$ and arc set $E(D)$. As with our undirected graphs, hereafter simply "graphs," we do not allow loops or multiple arcs in our digraphs, though between a pair of vertices there may be an arc in each direction (from vertex $u$ to vertex $v$ and from vertex $v$ to vertex $u$, for example). When the graph is understood, we use $V$ in place of $V(D)$ or $V(G)$. Unless otherwise stated, we use $n=|V(G)|$ and call $|V(G)|$ the *order* of the graph. For any $v \in V(G)$, the *open neighborhood* of $v$, denoted $N(v)$, is the set of vertices adjacent to $v$. In a digraph, the *open in-neighborhood* of $v$, denoted $N^-(v)$, is the set of vertices from which there is an arc to $v$, and the *open out-neighborhood*, denoted $N^+(v)$, is the set of vertices to which there is an arc from $v$. A *neighbor* of $v$ is a vertex in the open neighborhood of $v$, with analogous definitions for in- and out-neighbor.

Zero forcing is a process that consists of designating a subset $S \subseteq V(G)$ as blue, and the remaining vertices as white. A color change rule of varying forms is then applied. If repeated applications of the color change rule result in all vertices eventually turning blue, the original set is called a *zero forcing set*. In this paper, we use three different color change rules, which we define here.

1. The *standard color change rule* (standard zero forcing): if any blue vertex has exactly one white neighbor, then the white neighbor becomes blue.

2. The *skew color change rule* (skew zero forcing): if any vertex (blue or white) has exactly one white neighbor, then the white neighbor becomes blue.

3. The *(standard) digraph color change rule* (standard zero forcing on digraphs): if any blue vertex has exactly one white out-neighbor, then the white out-neighbor becomes blue.

Under each of these rules, the minimum cardinality of any starting set of blue vertices that eventually results in the entire graph turning blue is called the *zero forcing number*, first formally introduced in [@aim2008zero] (the *skew zero forcing number* introduced in [@ima2010minimum], or the *digraph zero forcing number* introduced in [@berliner2013minimum], respectively), and is denoted $\operatorname{Z}(G)$ (or $\operatorname{Z^-}(G)$, or $\operatorname{Z}(D)$).
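The standard rule above is easy to apply mechanically. The following minimal sketch, which is not part of the paper, repeatedly applies the standard color change rule to a starting blue set and reports whether the whole graph is eventually forced; the graph is stored as a plain adjacency dictionary, and the vertex labels are arbitrary.

```python
# A minimal sketch of the standard color change rule: repeatedly, any blue
# vertex with exactly one white neighbor forces that neighbor to become blue.
def is_zero_forcing_set(graph, blue):
    """Return True if `blue` is a (standard) zero forcing set of `graph`."""
    blue = set(blue)
    changed = True
    while changed:
        changed = False
        for v in list(blue):
            white_nbrs = [u for u in graph[v] if u not in blue]
            if len(white_nbrs) == 1:           # the color change rule applies
                blue.add(white_nbrs[0])
                changed = True
    return len(blue) == len(graph)

# Example: the path P_4 with vertices 0-1-2-3.
P4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(is_zero_forcing_set(P4, {0}))   # True: an end vertex forces the whole path
print(is_zero_forcing_set(P4, {1}))   # False: vertex 1 keeps two white neighbors
```

The skew and digraph rules differ only in the forcing test: the skew rule also lets white vertices force, and the digraph rule counts white out-neighbors instead of white neighbors.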
The maximum cardinality of any starting set of blue vertices that never results in the entire graph turning blue is called the *failed zero forcing number* introduced in [@fetcie2015failed] (the *failed skew zero forcing number* introduced in [@ansill2016failed], or the *digraph failed zero forcing number* introduced in [@adams2021failed], respectively), and is denoted $\operatorname{F}(G)$ (or $\operatorname{F^-}(G)$, or $\operatorname{F}(D)$).

Given a graph $G$, the *zero forcing span* $\lambda(G)$ is the number of distinct cardinalities $k_1, k_2, \ldots, k_{\lambda(G)}$ such that for each $i$, $1 \leq i \leq \lambda(G)$, there exist sets $Z, F \subseteq V(G)$ with $|Z|=|F|=k_i$, where $Z$ is a zero forcing set and $F$ is not. We define $\lambda^-(G)$ analogously, but where $Z$ is a skew zero forcing set and $F$ is not, and $\lambda(D)$ as well, but where $D$ is a digraph and the digraph color change rule is applied. Note that $\lambda(G)$ measures how much larger than $\operatorname{Z}(G)$ a set must be to guarantee that it is a zero forcing set, without regard to which vertices are in the set.

# Motivation

The concept of zero forcing span has connections with several other problems in the literature. First, we look at the connection to linear algebra. With $G$ we associate a set of symmetric matrices denoted $\mathcal{S}(G)$. Number the vertices $1, 2, \ldots, n$ and define $\mathcal{S}(G)$ as follows. $$\mathcal{S}(G) = \left\{ A \in \mathbb{R}^{n \times n} \ : \ A^T = A \ \mbox{ and for } i \neq j, a_{ij} \neq 0 \mbox{ if and only if } ij \in E(G) \right\}.$$ Note that the diagonal is unconstrained. By applying [@aim2008zero Proposition 2.3], we find that the zero forcing span $\lambda(G)$ is the number of values $k$ such that both of the following conditions hold.

1. There exists a set $S \subseteq \{1, 2, \ldots, |V(G)|\}$ with $|S|=k$ such that for any $A \in \mathcal{S}(G)$, if $v \in \ker(A)$ with $v_i=0$ for all $i \in S$, then $v=0$, but

2. there exists $A \in \mathcal{S}(G)$ with $v \in \ker(A)$, $v \neq 0$, and some $k$ entries of $v$ that are all $0$.

We can also relate the zero forcing span to the *zero forcing polynomial* $\mathcal{Z}(G;x)=\sum_{i=1}^n z(G;i) x^i$, introduced in [@boyer2019zero], where $z(G;i)$ is the number of zero forcing sets of cardinality $i$. Then $\lambda(G)$ gives the number of terms where $0< z(G;i) <{ n \choose i }$, that is, the number of terms in the polynomial that are not simply $0$ or ${n \choose i}$ (the minimum or maximum possible for each coefficient in the polynomial). While not yet explicitly defined in the literature, we can define analogous zero forcing polynomials for variants of zero forcing. Let the *skew zero forcing polynomial* be $\mathcal{Z^-}(G;x)=\sum_{i=0}^n z^-(G;i) x^i$ where $z^-(G;i)$ is the number of skew zero forcing sets of cardinality $i$, and the *directed zero forcing polynomial* be $\mathcal{Z^D}(D;x)=\sum_{i=1}^n z^D(D;i) x^i$ where $z^D(D;i)$ is the number of zero forcing sets of cardinality $i$ in a digraph $D$. Note that, unlike standard zero forcing, the skew zero forcing polynomial may have $z^-(G;0) > 0$.
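Since all of the quantities above are determined by which cardinalities admit forcing sets and which admit non-forcing sets, they can be computed by exhaustive search. The sketch below is not part of the paper and is exponential in the order of the graph, so it is suitable only for very small examples; it computes the coefficients $z(G;i)$, $\operatorname{Z}(G)$, $\operatorname{F}(G)$, and the span under the standard rule. For the path $P_4$ it returns $\operatorname{Z}(P_4)=\operatorname{F}(P_4)=1$ and span $1$, consistent with the values for $P_4$ noted in Proposition [\[prop:span1\]](#prop:span1){reference-type="ref" reference="prop:span1"} below.

```python
# Brute-force computation of z(G;i), Z(G), F(G), and the zero forcing span
# under the standard color change rule (exponential; tiny graphs only).
from itertools import combinations
from math import comb

def forces(graph, blue):
    """Apply the standard color change rule until it stalls."""
    blue = set(blue)
    changed = True
    while changed:
        changed = False
        for v in list(blue):
            white = [u for u in graph[v] if u not in blue]
            if len(white) == 1:
                blue.add(white[0])
                changed = True
    return len(blue) == len(graph)

def span_data(graph):
    n = len(graph)
    V = list(graph)
    z = [sum(forces(graph, S) for S in combinations(V, i)) for i in range(n + 1)]
    Z = min(i for i in range(n + 1) if z[i] > 0)          # zero forcing number
    failed = [i for i in range(n + 1) if z[i] < comb(n, i)]
    F = max(failed) if failed else None                   # failed zero forcing number
    span = sum(1 for i in range(n + 1) if 0 < z[i] < comb(n, i))
    return Z, F, span, z

P4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(span_data(P4))   # (1, 1, 1, [0, 2, 6, 4, 1])
```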
[\[obs:formula\]]{#obs:formula label="obs:formula"}* Recall that $\operatorname{Z}(G)\geq 1$ for any undirected graph $G$. Also recall that $\operatorname{F}(G) \leq n-1$ and that $\operatorname{F}(G) \geq \operatorname{Z}(G)-1$. The same statements hold for a digraph $D$, which give us the following trivial bounds. **Observation 2**. *For a graph $G$ and digraph $D$, $0 \leq \lambda(G) \leq n-1$ and $0 \leq \lambda(D) \leq n-1$.* In the skew case, there exist graphs for which $\operatorname{Z^-}(G)=0$, implying the following observation. **Observation 3**. *$0 \leq \lambda^-(G) \leq n$ [\[obs:skew\]]{#obs:skew label="obs:skew"}* ### Characterizations of graphs with zero forcing span of $0$ **Lemma 4**. *The following are equivalent:* 1. *$\lambda(G)=0$* 2. *$\operatorname{F}(G) < \operatorname{Z}(G)$* 3. *$\operatorname{Z}(G) = \operatorname{F}(G) +1$* *The same equivalence holds if the graph $G$ is replaced by a digraph $D$. [\[lem:equivstandard\]]{#lem:equivstandard label="lem:equivstandard"}* *Proof.* By Observation [\[obs:formula\]](#obs:formula){reference-type="ref" reference="obs:formula"}, $\lambda(G)=0$ if and only if $\operatorname{Z}(G) = \operatorname{F}(G)+1$. Since $\operatorname{F}(G) \geq \operatorname{Z}(G)-1$ (as noted in [@fetcie2015failed] or by observing that any set of vertices must be a failed zero forcing set or a zero forcing set), the equivalence holds. 0◻ ◻ For skew zero forcing, since $\operatorname{F^-}(G)$ is not always defined, we have the following list of equivalences. The proof is identical to that of Lemma [\[lem:equivstandard\]](#lem:equivstandard){reference-type="ref" reference="lem:equivstandard"} but with the addition of the possibility that we may have $\operatorname{Z^-}(G)=0$, which is equivalent to $\operatorname{F^-}(G)$ being undefined. **Lemma 5**. *The following are equivalent:* 1. *$\lambda^-(G)=0$* 2. *$\operatorname{F^-}(G) < \operatorname{Z^-}(G)$ or $\operatorname{Z^-}(G)=0$.* 3. *$\operatorname{Z^-}(G) = \operatorname{F^-}(G) +1$ or $\operatorname{F^-}(G)$ is undefined.* *[\[lem:equivskew\]]{#lem:equivskew label="lem:equivskew"}* In [@fetcie2015failed], it was established that $\operatorname{F}(G)<\operatorname{Z}(G)$ if and only if $G=K_n$ or $G=\overline{K_n}$, leading to the following characterization of graphs with $\lambda(G)=0$. **Theorem 6**. *$\lambda(G)=0$ if and only if $G=K_n$ or $G=\overline{K_n}$ [\[prop:zero\]]{#prop:zero label="prop:zero"}* *Proof.* We have that $\operatorname{F}(G)=\operatorname{Z}(G)-1$ if and only if $G=K_n$ or $G=\overline{K_n}$ [@fetcie2015failed]. By Lemma [\[lem:equivstandard\]](#lem:equivstandard){reference-type="ref" reference="lem:equivstandard"}, the result follows. 0◻ ◻ For skew zero forcing, we need a few definitions to characterize graphs with $\lambda^-(G)=0$. In [@ansill2016failed], we established that $\operatorname{F^-}(G)<\operatorname{Z^-}(G)$ if and only if $G$ is an odd cycle or nonempty set of cycles intersecting in a single vertex, or a doubly extended bouquet-dipole, pictured in Figure [\[fig:skewzero\]](#fig:skewzero){reference-type="ref" reference="fig:skewzero"} on the left. **Definition 7**. 
*We call a graph $G$ a *doubly extended bouquet-dipole* if it consists of vertices $u$ and $v$ that are each on a nonempty set of odd cycles, where all other vertices on the cycles have degree two, and $u, v$ are joined by a path of even order that alternates between single even order paths whose internal vertices all have degree two, and multiple even order paths whose internal vertices all have degree two.* In addition, there exist graphs that have $\operatorname{Z^-}(G)=0$ and therefore $\operatorname{F^-}(G)$ is undefined, specifically two-set perfectly orderable graphs. **Definition 8**. *We say that a graph $G$ is *two-set perfectly orderable* if* 1. *$V(G)$ can be partitioned into ordered sets, $U = \left\{ u_1, u_2, \ldots, u_m \right\}$ and $W = \left\{ w_1, w_2, \ldots, w_m \right\}$ such that $u_i v \in E(G)$ only if $v=w_j$ where $j \leq i$, and* 2. *$u_i w_i \in E(G)$ for all $i \in \{1, 2, \ldots, m\}$.* This gives us the following characterization of graphs with $\lambda^-(G)=0$. **Theorem 9**. *$\lambda^-(G)=0$ if and only if $G$ is one of the following graphs.* 1. *$K_n$ [\[item:kn\]]{#item:kn label="item:kn"}* 2. *$\overline{K_n}$ [\[item:isolated\]]{#item:isolated label="item:isolated"}* 3. *a doubly extended bouquet-dipole graph. [\[item:doubly\]]{#item:doubly label="item:doubly"}* 4. *a collection of one or more odd cycles that intersect in exactly one vertex. [\[item:odd\]]{#item:odd label="item:odd"}* 5. *a two-set perfectly orderable graph. [\[item:perfectly\]]{#item:perfectly label="item:perfectly"}* *Proof.* In [@ansill2016failed] the graphs with $\operatorname{Z^-}(G) < \operatorname{F^-}(G)$ were characterized and are precisely Graphs [\[item:kn\]](#item:kn){reference-type="ref" reference="item:kn"} through [\[item:odd\]](#item:odd){reference-type="ref" reference="item:odd"}. Graphs with $\operatorname{Z^-}(G)=0$ and $\operatorname{F^-}(G)$ undefined were characterized in the same paper as Graph [\[item:perfectly\]](#item:perfectly){reference-type="ref" reference="item:perfectly"}. By Lemma [\[lem:equivskew\]](#lem:equivskew){reference-type="ref" reference="lem:equivskew"}, the results holds. 0◻ ◻ Since we've seen that two-set perfectly orderable graphs not only have $\lambda^-(G)=0$, but they also are precisely the graphs with $\operatorname{Z^-}(G)=0$, we have the following statement about their skew zero forcing polynomials. **Corollary 10**. *The only graphs with skew zero forcing polynomial $\mathcal{Z}^-(G;x)=\sum_{i=0}^n {n \choose i}x^i = (x+1)^n$ are two-set perfectly orderable graphs.* Finally, we characterize digraphs with $\lambda(D)=0$. **Theorem 11**. *A digraph $D$ has $\lambda(D)=0$ if and only if $D$ is one of the following.* 1. *a directed cycle. [\[characterizationdirectedcycle\]]{#characterizationdirectedcycle label="characterizationdirectedcycle"}* 2. *a regular tournament on 5 vertices. [\[tournament5\]]{#tournament5 label="tournament5"}* 3. *a digraph obtained from $K_n$ by removing the arcs of [\[characterizationremovecycles\]]{#characterizationremovecycles label="characterizationremovecycles"}* 1. *a collection of vertex-disjoint directed cycles each of length at least 3 that span $V$ ($n\geq 3$), [\[cyclesspan\]]{#cyclesspan label="cyclesspan"}* 2. *a collection of vertex-disjoint directed cycles each of length at least 3 that span $V \backslash \{v\}$ for some $v \in V$ ($n\geq 4$), or [\[cyclesspanbutone\]]{#cyclesspanbutone label="cyclesspanbutone"}* 3. 
*$vu$ for some $u, v \in V$ and a collection of vertex-disjoint directed cycles each of length at least 3 that span $V \backslash \{v\}$ ($n\geq 4$). [\[cyclesspanleaf\]]{#cyclesspanleaf label="cyclesspanleaf"}* 4. *a digraph obtained from $K_{n-1} \overrightarrow{\vee} \{v\}$ by removing the arcs of a collection of vertex-disjoint directed cycles each of length at least 3 that span $K_{n-1}$ ($n \geq 4$). [\[characterizationsinkcomplement\]]{#characterizationsinkcomplement label="characterizationsinkcomplement"}* 5. *$K_j \overrightarrow{\vee} \overline{K_{\ell}}$ where $j \geq 2$ and $\ell \geq 0$. [\[characterizationcomplete\]]{#characterizationcomplete label="characterizationcomplete"}* 6. *$\overline{K_n}$. [\[characterizationisolated\]]{#characterizationisolated label="characterizationisolated"}* *Proof.* In [@adams2021failed], the list of graphs in the statement of this theorem were shown to be the graphs that have $\operatorname{Z}(D) < \operatorname{F}(D)$. We then apply Lemma [\[lem:equivstandard\]](#lem:equivstandard){reference-type="ref" reference="lem:equivstandard"}.0◻ ◻ ### Comments on graphs with standard zero forcing span of 1 We provide here a list of graphs that have $\lambda(G)=1$. We have neither a characterization, nor any reason to believe the list of graphs is complete. First, we note an immediate but notable property of graphs with $\lambda(G)=1$, and introduce a few definitions. **Observation 12**. *$\lambda(G)=1$ if and only if $\operatorname{Z}(G)=\operatorname{F}(G)$.* Recall that a *module* $S \subseteq V(G)$ is a set of vertices such that for any vertex $u \notin S$, either $uv \in E(G)$ for all $v \in S$, or $uv \notin E(G)$ for any $v \in S$. In [@fetcie2015failed] it was shown that $\operatorname{F}(G)=n-2$ if and only if $G$ has a module of order 2. The *union* of graphs $G$ and $H$, denoted $G \cup H$, is the graph with vertex set $V(G) \cup V(H)$ and edge set $E(G) \cup E(H)$. The *join* $G \vee H$ is simply $G \cup H$ with the addition of an edge between $v_G$ and $v_H$ for every $v_G \in V(G)$ and every $v_H \in V(H)$. **Proposition 13**. *The following graphs have $\lambda(G)=1$.* 1. *a complete multipartite graph $K_{n_1, n_2, \ldots n_k}$ where $n_1 \geq 2$. [\[item:multipart\]]{#item:multipart label="item:multipart"}* 2. *$K_m \cup K_n$ where $m, n \geq 2$. [\[item:union\]]{#item:union label="item:union"}* 3. *$K_n \backslash M$ where $n \geq 3$ and $M$ is a nonempty matching. [\[item:matching\]]{#item:matching label="item:matching"}* 4. *$K_n \vee \overline{K_m}$ where $m \geq 2$. [\[item:completesplit\]]{#item:completesplit label="item:completesplit"}* 5. *a path on $4$ vertices. [\[item:pathcycle\]]{#item:pathcycle label="item:pathcycle"}* *[\[prop:span1\]]{#prop:span1 label="prop:span1"}* *Proof.* For the complete multipartite graph, Item [\[item:multipart\]](#item:multipart){reference-type="ref" reference="item:multipart"}, note that any pair of vertices in a single partite set forms a module of order $2$, so $F(G)=n-2$. For any $S\subseteq V(G)$ with $|S|\leq n-3$, $S$ is a failed zero forcing set since if two of the vertices in $V\backslash S$ are in a single partite set, then $S$ is a failed zero forcing set, and otherwise, each vertex of $V \backslash S$ is in a distinct partite set, and each vertex in $S$ is adjacent to at least two vertices in $V \backslash S$. Take $u \in V(P_1)$ where $P_1$ is a partite set with at least two vertices, and $v \in P_2$ where $P_2$ is any other partite set. 
Then $V \backslash \{u, v\}$ is a zero forcing set, giving us that $\operatorname{F}(G)=\operatorname{Z}(G)=n-2$, and $\lambda(G)=1$. For $K_m \cup K_n$, Item [\[item:union\]](#item:union){reference-type="ref" reference="item:union"}, note that any pair of vertices in the same component form a module of order $2$. Thus, $F(G) = n+m-2$. For $S$ to be a zero forcing set, it must contain $m-1$ vertices of $K_m$ and $n-1$ vertices of $K_n$, giving us that $\operatorname{Z}(G)=n+m-2$ as well. For Item [\[item:matching\]](#item:matching){reference-type="ref" reference="item:matching"}, $G=K_n \backslash M$ where $M$ is a matching, note that the endpoints of any edge in the matching form a module of order $2$, so $\operatorname{F}(G)=n-2$. However, $\operatorname{Z}(G)=n-2$ as well, since taking $S= V\backslash \{u, v\}$ for any $u, v \in V$ where $uv$ is not an edge in the matching forms a zero forcing set, and any set of $S'$ with $|S'| \leq n-3$ is not a zero forcing set since for any $v \in S'$, $v$ is adjacent to at least two vertices in $V \backslash S'$. For Item [\[item:completesplit\]](#item:completesplit){reference-type="ref" reference="item:completesplit"}, note that any pair of vertices in either $K_n$ or $\overline{K_m}$ forms a module of order two, giving us $\operatorname{F}(G)=n-2$. Since any set $S \subseteq V( K_n \vee \overline{K_m})$ with $|S| \leq n-3$ has at least two vertices missing from $K_n$ or $\overline{K_m}$, we have that $\operatorname{Z}(G) \geq n-2$. By picking a set $S'$ with $S'= V( K_n \vee \overline{K_m}) \backslash \{u,v\}$ where $u \in K_n$ and $v \in \overline{K_m}$ we see $\operatorname{Z}(G)=n-2=\operatorname{F}(G)$. For Item [\[item:pathcycle\]](#item:pathcycle){reference-type="ref" reference="item:pathcycle"}, note that $\operatorname{Z}(P_4)=\operatorname{F}(P_4)=1$. 0◻ ◻ ### Characterizations of graphs with high zero forcing spans We now characterize graphs and digraphs with high zero forcing spans. Specifically, we characterize graphs that have $\lambda(G) \geq n-3$ and digraphs that have $\lambda(D) \geq n-2$. For skew zero forcing, we show that $\lambda^-(G) \neq n$ for any graph $G$ and characterize graphs that have $\lambda^-(G)=n-1$. **Theorem 14**. *Graphs with $\lambda(G)=n-1$ or $\lambda(G)=n-2$ can be characterized as follows. $$\lambda(G)= \begin{cases} n-1 \mbox{ if and only if } G=K_1 \\ n-2 \mbox{ if and only if } G = P_{n-1} \cup K_1 \mbox{ or } G=P_3 \end{cases}$$* *Proof.* For $\lambda(G)=n-1$, we must have $\operatorname{F}(G)=n-1$ and $\operatorname{Z}(G)=1$. From [@fetcie2015failed], $\operatorname{F}(G)=n-1$ gives us that $G$ has an isolated vertex. It is well known that $\operatorname{Z}(G)=1$ if and only if $G$ is a path. Thus, $G=K_1$. If $\lambda(G)=n-2$, then either $\operatorname{F}(G)=n-1$ and $\operatorname{Z}(G)=2$, or $\operatorname{F}(G)=n-2$ and $\operatorname{Z}(G)=1$. In the first case, $\operatorname{F}(G)=n-1$ implies that $G$ has an isolated vertex. Since $G$ has an isolated vertex with $\operatorname{Z}(G)=2$, we must have that the other component of $G$ is a path, giving us a path and a single isolated vertex. Note that we can construct a failed zero forcing set of $G$ with $n-1$ vertices by taking all vertices but the isolated vertex, and a zero forcing set with $2$ vertices by taking one end vertex of the path along with the isolated vertex. 
If $\operatorname{F}(G)=n-2$ and $\operatorname{Z}(G)=1$, we have that $G$ must be a path, but from [@fetcie2015failed] that there are two pendant vertices since $G$ is a tree with $\operatorname{F}(G)=n-2$, giving us that $G=P_3$. Note the middle vertex forms a failed zero forcing set of maximum order, and the end vertex a zero forcing set of minimum order. 0◻ ◻ We pause here to recall definitions that are essential in some of the characterizations below. **Definition 15**. *We say that $G$ is a graph of *two parallel paths* if $G$ itself is not a path, and $V(G)$ can be partitioned into subsets $V_1$ and $V_2$ such that the subgraphs induced by $V_1$ and $V_2$ are paths, and $G$ can be drawn in the plane so that the paths induced by $V_1$ and $V_2$ are parallel line segments, and edges between $V_1$ and $V_2$ can be drawn as straight line segments that do not cross. Such a drawing is known as a *standard drawing*.* **Lemma 16**. *A graph $G$ that consists of two parallel paths has a module of order 2 if and only if any standard drawing of $G$ consists of one of the following: [\[lem:twoparallelmodule\]]{#lem:twoparallelmodule label="lem:twoparallelmodule"}* 1. *$P_1 \cup P_2$ or $P_1 \cup P_1$. [\[isolated\]]{#isolated label="isolated"}* 2. *$P_1=\{x\}$ and $P_k$ where $k \geq 3$, and [\[parallelp1\]]{#parallelp1 label="parallelp1"}* 1. *for some $u, v, w$ that form a subpath of $P_k$, $N(x)=\{u, w\}$ or $N(x)=\{u, v, w\}$, or* 2. *for an end vertex $v$ of $P_k$ with neighbor $w$, $N(x)=\{w\}$ or $N(x)=\{v, w\}$.* 3. *$P_2=uv$ and $P_k$ with $N(u) \cap V(P_k) =N(v) \cap V(P_k)$. [\[parallelp2\]]{#parallelp2 label="parallelp2"}* 4. *$P_3=uvw$ and $P_k$ where $N(u)=N(w)=\{v\}$ or $N(u)=N(w)=\{v, x\}$ for some $x$ on $P_k$. [\[parallelp3\]]{#parallelp3 label="parallelp3"}* 5. *$P_k = v_1v_2v_3v_4 \cdots v_k$ and $P_j = w_1w_2w_3\cdots w_j$ where $k, j \geq 2$, and the edges between $P_k$ and $P_j$ are one of $\left\{ v_{k-1}w_1, v_k w_2 \right\}$, $\left\{ v_{k-1}w_1, v_k w_2, v_{k-1}w_2\right\}$, or $\left\{ v_{k-1}w_1, v_k w_2,v_{k}w_1\right\}$. [\[parallelpk\]]{#parallelpk label="parallelpk"}* *Proof.* Note that in Item [\[isolated\]](#isolated){reference-type="ref" reference="isolated"}, if $G=P_1 \cup P_2$, then $V(P_2)$ forms a module of order 2; if $G=P_1 \cup P_1$ then $V(G)$ itself is a module of order 2. In Item [\[parallelp1\]](#parallelp1){reference-type="ref" reference="parallelp1"}, $\{v,x\}$ forms a module of order 2. For Item [\[parallelp2\]](#parallelp2){reference-type="ref" reference="parallelp2"}, $\{u,v\}$ forms a module of order 2. For Item [\[parallelp3\]](#parallelp3){reference-type="ref" reference="parallelp3"}, $\{u, w\}$ forms a module of order 2. For Item [\[parallelpk\]](#parallelpk){reference-type="ref" reference="parallelpk"}, $\{v_k, w_1\}$ forms a module of order 2. Now assume that $G$ is two parallel paths and has a module of order 2. We will show that $G$ is one of the graphs described in Items [\[isolated\]](#isolated){reference-type="ref" reference="isolated"} through [\[parallelpk\]](#parallelpk){reference-type="ref" reference="parallelpk"}. Consider a standard drawing of $G$. Call the two paths $P_k$ and $P_j$. First, assume that $k,j \geq 4$, and that $\{u, v\}$ is a module of order 2. Note that we cannot have that $u, v \in V(P_k)$ (without loss of generality) because then $u, v$ will have different neighbors along $P_k$. Hence, we must have that $u \in V(P_k)$ and $v \in V(P_j)$. 
If $u$ is an interval vertex in $P_k$, then $v$ is adjacent to both vertices that are adjacent to $u$ along $P_k$, and there is also an edge between $u$ and the vertex (or vertices) adjacent to $v$; this edge will cross one of the edges from $v$ to the neighbors of $u$ which contradicts the definition of parallel paths. Thus we must have that $u$ is an end vertex of $P_k$ and $v$ is an end vertex of $P_j$. Recall that we're considering a standard drawing, $P_k = v_1v_2v_3v_4 \cdots v_k$ and $P_j = w_1w_2w_3\cdots w_j$. Note that we cannot have that $u= v_1$ and $v = w_1$, or $u=v_k$ and $v=w_j$, since then the edge from $u$ to the neighbor of $v$ will cross the edge from $v$ to the neighbor of $u$. Thus, without loss of generality, we have $u=v_k$ and $v=w_1$. To satisfy $\{ u, v\}$ being a module of order $2$, we then have that $v_{k-1} v \in E(G)$ and $w_2 u \in E(G)$. That is, $v_{k-1}, v, w_2, u$ form a $C_4$. Note then that we may have $uv \in E(G)$ or $v_{k-1} w_2 \in E(G)$, but not both, since the edges would cross, giving us Item [\[parallelpk\]](#parallelpk){reference-type="ref" reference="parallelpk"}. We now consider $j = 3$, so $P_j= uvw$. Note that along $P_j$, $N(u)=N(w)=\{v\}$. For $\{u, w\}$ to form a module of order 2, we must have that $u$ and $w$ have the same neighbors in $P_k$ as well. Note that if $u$ and $w$ have more than one neighbor in $P_k$, then we will have crossed edges between $P_k$ and $P_j$. Hence, $u$ and $w$ have at most one neighbor in $P_k$, giving us Item [\[parallelp3\]](#parallelp3){reference-type="ref" reference="parallelp3"}. By the same arguments we made for the case when $j \geq 4$, the only other possibility for $j=3$ is if the graph satisfies Item [\[parallelpk\]](#parallelpk){reference-type="ref" reference="parallelpk"}. If $j=2$, let $P_2=uv$. For the case $k=1$, let $V(P_1)=\{x\}$. Note either $ux, vx \in E(G)$ or $ux, vx \notin E(G)$ satisfying Item [\[isolated\]](#isolated){reference-type="ref" reference="isolated"} or [\[parallelp2\]](#parallelp2){reference-type="ref" reference="parallelp2"}. If $k \geq 2$, note that for $\{u, v\}$ to be a module of order $2$, then they must have the same neighborhood in $V(P_k)$, satisfying Item [\[parallelp2\]](#parallelp2){reference-type="ref" reference="parallelp2"}. Note that by the definition of parallel paths, $|N(u)\cap V(P_k)| = |N(v) \cap V(P_k)| \in \{0, 1\}$, else edges from $u$ and $v$ to their neighbors in $P_k$ would cross. If $u,v$ do not form a module, then without loss of generality $\{u, w\}$ form a module of order $2$ for some $w \in V(P_j)$. By the same arguments for the cases $k, j \geq 4$, $G$ must satisfy Item [\[parallelpk\]](#parallelpk){reference-type="ref" reference="parallelpk"}. Finally, suppose $j=1$. If $k=1$, we have Item [\[isolated\]](#isolated){reference-type="ref" reference="isolated"}. If $k=2$, we have the same situation just described for $j=2$ and $k=1$. If $k =3$, we must have Item [\[parallelp3\]](#parallelp3){reference-type="ref" reference="parallelp3"} by the arguments for the case $j=3$. If $k=4$, note that no two vertices of $P_k$ can form a module of order 2. Thus we must have that $\{x, v\}$ form a module of order 2 where $\{x\}=V(P_j)$ and $v \in V(P_k)$. Then $x$ must be adjacent to $N(v)$, and may be adjacent to $v$ as well, giving us Item [\[parallelp1\]](#parallelp1){reference-type="ref" reference="parallelp1"}. 0◻ ◻ If $S \subseteq V(G)$, then we denote the subgraph of $G$ induced by $S$ by $G[S]$. 
We now characterize graphs with zero forcing span of $n-3$. **Theorem 17**. *$\lambda(G)=n-3$ if and only if $G$ is one of the following graphs.* 1. *two parallel paths with an additional $K_1$ component. [\[k1comp\]]{#k1comp label="k1comp"}* 2. *$P_4$ [\[p4\]]{#p4 label="p4"} or $P_5$.* 3. *two parallel paths such that any standard drawing has one of the following forms: [\[parallel\]]{#parallel label="parallel"}* 1. *$P_1=\{x\}$ and $P_k$ where $k \geq 3$, and* 1. *for some $u, v, w$ that form a subpath of $P_k$, $N(x)=\{u, w\}$ or $N(x)=\{u, v, w\}$, or* 2. *for an end vertex $v$ with neighbor $w$, $N(x)=\{w\}$ or $N(x)=\{v, w\}$.* 2. *$P_2=uv$ and $P_k$ with $N(u) \cap V(P_k) =N(v) \cap V(P_k)$, and if $k=1$, then $\{ux, vx\} \in E(G)$ where $x=V(P_1)$.* 3. *$P_3=uvw$ and $P_k$ where $N(u)=N(w)=\{v\}$ or $N(u)=N(w)=\{v, x\}$ for some $x$ on $P_k$.* 4. *$P_k = v_1v_2v_3v_4 \cdots v_k$ and $P_j = w_1w_2w_3\cdots w_j$ where $k, j \geq 2$, and the edges between $P_k$ and $P_j$ are one of $\left\{ v_{k-1}w_1, v_k w_2 \right\}$, $\left\{ v_{k-1}w_1, v_k w_2, v_{k-1}w_2\right\}$, or $\left\{ v_{k-1}w_1, v_k w_2,v_{k}w_1\right\}$.* *[\[prop:nminus3\]]{#prop:nminus3 label="prop:nminus3"}* *Proof.* Using Observation [\[obs:formula\]](#obs:formula){reference-type="ref" reference="obs:formula"}, we know that $\lambda(G)=n-3$ implies that (I) $\operatorname{F}(G)=n-1$ and $\operatorname{Z}(G)=3$, (II) $\operatorname{F}(G)=n-2$ and $\operatorname{Z}(G)=2$, or (III) $\operatorname{F}(G)=n-3$ and $\operatorname{Z}(G)=1$. For (I) we know that $\operatorname{F}(G)=n-1$ if and only if $G$ contains an isolated vertex [@fetcie2015failed], giving us that (I) is satisfied if and only if $G$ has an isolated vertex $v$ and $\operatorname{Z}(G[V(G)\backslash\{v\}])=2$. By [@ferrero2019rigid], $\operatorname{Z}(G[V(G)\backslash\{v\}])=2$ if and only if $G[V(G)\backslash\{v\}]$ is two parallel paths. Hence, we have that $\operatorname{F}(G)=n-1$ and $\operatorname{Z}(G)=3$ if and only if Item [\[k1comp\]](#k1comp){reference-type="ref" reference="k1comp"} holds. For (II), we know that $\operatorname{F}(G)=n-2$ if and only if $G$ has a module of order 2 [@fetcie2015failed] and no isolated vertices, and $\operatorname{Z}(G)=2$ if and only if $G$ is two parallel paths. Hence, (II) holds if and only if $G$ is two parallel paths with a module of order 2 and no isolated vertices. By Lemma [\[lem:twoparallelmodule\]](#lem:twoparallelmodule){reference-type="ref" reference="lem:twoparallelmodule"}, then (II) holds if and only if $G$ satisfies Item [\[parallel\]](#parallel){reference-type="ref" reference="parallel"}. For (III), $\operatorname{Z}(G)=1$ if and only if $G$ is a path. From [@fetcie2015failed], $\operatorname{F}(P_n)=n-3$ if and only if $n=4$ or $n=5$. Hence, we have that $\operatorname{F}(G)=n-3$ and $\operatorname{Z}(G)=1$ if and only if Item [\[p4\]](#p4){reference-type="ref" reference="p4"} holds.0◻ ◻ We now characterize graphs of highest possible skew zero forcing span, and improve the trivial bound from Observation [\[obs:skew\]](#obs:skew){reference-type="ref" reference="obs:skew"} to a tight bound. **Theorem 18**. *$\lambda^-(G)=n-1$ if and only if $G$ consists of a (possibly empty) two-set perfectly orderable graph and an isolated vertex. 
Also, $\lambda^-(G) \leq n-1$.* *Proof.* By Observation [\[obs:formula\]](#obs:formula){reference-type="ref" reference="obs:formula"}, $\lambda^-(G)=n-1$ implies that $\operatorname{F^-}(G)=n-1$ and $\operatorname{Z^-}(G)=1$, or $\operatorname{F^-}(G)=n-2$ and $\operatorname{Z^-}(G)=0$. If $\operatorname{Z^-}(G)=0$, then $\operatorname{F^-}(G)$ is undefined, meaning that the only feasible case is that $\operatorname{F^-}(G)=n-1$ and $\operatorname{Z^-}(G)=1$. From [@ansill2016failed], $\operatorname{F^-}(G)=n-1$ if and only if $G$ has an isolated vertex, $v$. Note that one possibility is that $V(G)=\{v\}$. Otherwise, since $\operatorname{Z^-}(G)=1$, and any zero forcing set must contain $v$, we have that $\operatorname{Z^-}(H) =0$ where $H$ is the subgraph induced by $V(G) \backslash \{v\}$. Thus, $H$ must be a two-set perfectly orderable graph. We noted above that $\lambda^-(G) \leq n$. If $\lambda^-(G)=n$, then $\operatorname{F^-}(G)=n-1$ and $\operatorname{Z^-}(G)=0$. This gives us that $G$ is a two-set perfectly orderable graph with an isolated vertex, which is a contradiction since no two-set perfectly orderable graph has an isolated vertex. Hence $\lambda^-(G) \leq n-1$. 0◻ ◻ Finally, we turn to high zero forcing spans of digraphs. First, we recall some definitions. A *source* $v \in V(D)$ has $N^-(v)=\varnothing$. A path $P= (v_1, v_2, \ldots, v_k)$ is *Hessenberg* if $E(D)$ does not contain any arc of the form $(v_i, v_j)$ with $j > i+1$. A simple digraph $D$ is a *digraph of two parallel Hessenberg paths* if $D$ is not a Hessenberg path, $V(D) = \left\{ u_1, \ldots u_r, v_1, \ldots, v_s\right\}$ where $(u_1, \ldots, u_r)$ and $(v_1, \ldots v_s)$ are nonempty Hessenberg paths, and there do not exist, $i, j, k, \ell$ with $i <j$ and $k<\ell$ such that $\left\{ (u_k, v_j), (v_i, k_{\ell})\right\} \subseteq E(D)$. That is, there are no pairs of forward crossing arcs. **Theorem 19**. *$\lambda(D)=n-1$ if and only if $D$ is a Hessenberg path $(v_1, v_2, \ldots, v_n)$ such that $v_1$ is a source.* *Proof.* From [@hogben2010minimum], $\operatorname{Z}(D) = 1$ if and only if $D$ is a Hessenberg path. From [@adams2021failed], $\operatorname{F}(D)=n-1$ if and only if $D$ has a source. Thus, $\lambda(D)=n-1$ if and only if $D$ is a Hessenberg path with a source. The only vertex in a Hessenberg path $(v_1, \ldots, v_n)$ that can be a source is $v_1$, since $v_{i-1} \in N^-(v_i)$ for any $v_i$ with $i >1$. Taking any Hessenberg path such that $v_1$ has no in-neighbors gives us a Hessenberg path with a source, $v_1$. 0◻ ◻ **Theorem 20**. *$\lambda(D)=n-2$ if and only if $D$ is one of* 1. *two parallel Hessenberg paths $(u_1, u_2, \ldots, u_k)$ and $(v_1, v_2, \ldots, v_j)$ such that $u_1$ or $v_1$ is a source, or [\[parallelHess\]]{#parallelHess label="parallelHess"}* 2. *a single Hessenberg path $(v_1, \ldots, v_n)$ with $N^-(v_1)=N^-(v_{i+1})=\{v_i\}$ for some $i$, $1<i<n$. [\[twoneighborHess\]]{#twoneighborHess label="twoneighborHess"}* *Proof.* Note that from Observation [\[obs:formula\]](#obs:formula){reference-type="ref" reference="obs:formula"}, $\lambda(D)=n-2$ if and only if either $\operatorname{Z}(D)=1$ and $\operatorname{F}(D)=n-2$, or $\operatorname{Z}(D)=2$ and $\operatorname{F}(D)=n-1$. From [@berliner2013minimum], $\operatorname{Z}(D)=1$ if and only if $D$ is a Hessenberg path, and $\operatorname{Z}(D)=2$ if and only if $D$ consists of two parallel Hessenberg paths. 
From [@adams2021failed], $\operatorname{F}(D)=n-1$ if and only if $D$ has a source, and $\operatorname{F}(D)=n-2$ if and only if $D$ has no source and there exist $u, v \in V(D)$ with $N^-(u)\backslash\{v\} = N^-(v)\backslash\{u\}$. Noting that only the first vertex in a Hessenberg path can be a source, we have $\lambda(D)=n-2$ if and only if either Item [\[parallelHess\]](#parallelHess){reference-type="ref" reference="parallelHess"} holds, or $D$ is a Hessenberg path with no source and there exist $u, v \in V(D)$ with $N^-(u)\backslash\{v\} = N^-(v)\backslash\{u\}$. By the definition of a Hessenberg path, $N^-(u)\backslash\{v\} = N^-(v)\backslash\{u\}$ is true if and only if $u=v_1$ and $v= v_{i+1}$ where $1<i<n$, and $N^-(v_1) = N^-(v_{i+1}) = \{ v_i \}$, Item [\[twoneighborHess\]](#twoneighborHess){reference-type="ref" reference="twoneighborHess"}. 0◻ ◻ The graph shown in Figure [\[fig:highdigraph\]](#fig:highdigraph){reference-type="ref" reference="fig:highdigraph"} is a Hessenberg path with $\lambda(D)=n-2$, since the blue vertices represent a failed zero forcing set $S$ with $|S|=n-2$, and $\{v_1\}$ forms a zero forcing set of order 1. # Zero forcing span characteristics for some graphs In this section, we look at the spans of trees, graphs with two or more components, and Cartesian products. **Proposition 21**. *For a tree $T$ on four or more vertices, $1 \leq \lambda(T) \leq n-3$ and these bounds are sharp.* *Proof.* For any tree $T$, by Theorems [\[prop:zero\]](#prop:zero){reference-type="ref" reference="prop:zero"} and [Theorem 14](#prop:high){reference-type="ref" reference="prop:high"}, $\lambda(T) \in \{0, n-2, n-1\}$ if and only if $T$ is $K_1, K_2$, or $P_3$. Thus, if $|V(T)| \geq 4$, we have $1 \leq \lambda(T) \leq n-3$. From Proposition [\[prop:span1\]](#prop:span1){reference-type="ref" reference="prop:span1"}, for the star $T=K_{m,1}$ with $m \geq 2$, we see that $\lambda(K_{m,1}) = 1$, showing sharpness of the lower bound. From Theorem [\[prop:nminus3\]](#prop:nminus3){reference-type="ref" reference="prop:nminus3"}, if $T$ consists of a path on at least two vertices with two pendant vertices on one of its end vertices, we see that $\lambda(T)=n-3$. Hence the bounds are sharp. 0◻ ◻ For a disconnected graph, we display a formula for its span in terms of the zero forcing numbers, orders, and failed zero forcing numbers of its components. **Proposition 22**. *If $G$ is a graph with at least two components: $G_1, G_2, \ldots, G_k$, then $$\lambda(G)=|V(G)| + \max_{1 \leq i \leq k} \left( \operatorname{F}(G_i) - |V(G_i)|\right) - \left(\sum_{i=1}^k \operatorname{Z}(G_i)\right) + 1$$* *Proof.* From [@fetcie2015failed], $\operatorname{F}(G) = |V(G)| + \max_{1 \leq i \leq k} \left( \operatorname{F}(G_i) - |V(G_i)|\right)$. For $S \subseteq V(G)$ to be a zero forcing set, $S \cap V(G_i)$ must be a zero forcing set for each $i$, $1 \leq i \leq k$, giving us that $\operatorname{Z}(G) = \sum_{i=1}^k \operatorname{Z}(G_i)$. Applying the formula from Observation [\[obs:formula\]](#obs:formula){reference-type="ref" reference="obs:formula"} completes the result. 0◻ ◻ We provide a bound on the zero forcing span of the Cartesian product of graphs $G$ and $H$, $G \square H$, in terms of the orders of $G$ and $H$ and their zero forcing and failed zero forcing numbers. **Proposition 23**. *Let $G \square H$ denote the Cartesian product of graphs $G$ and $H$. 
Then $$\lambda(G\square H) \geq \max\{ \operatorname{F}(G) |V(H)|, F(H)|V(G)|\} - \min \{\operatorname{Z}(G) |V(H)|, \operatorname{Z}(H)|V(G)|\} + 1$$* *Proof.* This follows from the bound on the failed zero forcing number of a Cartesian product [@fetcie2015failed] and the bound on the zero forcing number of a Cartesian product [@aim2008zero], together with Observation [\[obs:formula\]](#obs:formula){reference-type="ref" reference="obs:formula"}. 0◻ ◻ # Conclusion In this paper, we introduced the idea of zero forcing span, and characterized graphs with high and low values in the context of standard zero forcing for both undirected and directed graphs, as well as in skew zero forcing. Since the zero forcing span is intimately related to linear algebra and to zero forcing polynomials, further investigation of these relationships is a compelling direction, and could include further investigation of parameters studied here, or other variants including, for example, positive semidefinite zero forcing or power domination. 1 Alyssa Adams and Bonnie Jacob. Failed zero forcing and critical sets on directed graphs. , 81(3):367--387, 2021. AIM Minimum Rank-Special Graphs Work Group. Zero forcing sets and the minimum rank of graphs. , 428(7):1628--1648, 2008. AIM Minimum Rank-Special Graphs Work Group members: Francesco Barioli, Wayne Barrett, Steve Butler, Sebastian M. Cioabă, Dragos̆ Cvetković, Shaun M. Fallat, Chris Godsil, Willem Haemers, Leslie Hogben, Rana Mikkelson, Sivaram Narayan, Olga Pryporova, Irene Sciriha, Wasin So, Dragan Stevanović, Hein van der Holst, Kevin Vander Meulen, Amy Wangsness Wehe. Thomas Ansill, Bonnie Jacob, Jaime Penzellna, and Daniel Saavedra. Failed skew zero forcing on a graph. , 509:40--63, 2016. Adam Berliner, Minerva Catral, Leslie Hogben, My Huynh, Kelsey Lied, and Michael Young. Minimum rank, maximum nullity, and zero forcing number of simple digraphs. , 26:762, 2013. Kirk Boyer, Boris Brimkov, Sean English, Daniela Ferrero, Ariel Keller, Rachel Kirsch, Michael Phillips, and Carolyn Reinhart. The zero forcing polynomial of a graph. , 258:35--48, 2019. Daniela Ferrero, Mary Flagg, H. Tracy Hall, Leslie Hogben, Jephian C.-H. Lin, Seth A. Meyer, Shahla Nasserasr, and Bryan Shader. Rigid linkages and partial zero forcing. , 26(2), 2019. Katherine Fetcie, Bonnie Jacob, and Daniel Saavedra. The failed zero forcing number of a graph. , 8(1):99--117, 2015. Leslie Hogben. Minimum rank problems. , 432(8):1961--1974, 2010. IMA-ISU research group on minimum rank. Minimum rank of skew-symmetric matrices described by a graph. , 432(10):2457--2472, 2010. IMA-ISU research group members: Mary Allison, Elizabeth Bodine, Luz Maria DeAlba, Joyati Debnath, Laura DeLoss, Colin Garnett, Jason Grout, Leslie Hogben, Bokhee Im, Hana Kim, Reshmi Nair, Olga Pryporova, Kendrick Savage, Bryan Shader and Amy Wangsness Wehe.
{ "id": "2309.06234", "title": "The zero forcing span of a graph", "authors": "Bonnie Jacob", "categories": "math.CO", "license": "http://creativecommons.org/licenses/by-nc-nd/4.0/" }
--- abstract: | This paper proposes a strategy to solve the problems of the conventional s-version of finite element method (SFEM) fundamentally. Because SFEM can reasonably model an analytical domain by superimposing meshes with different spatial resolutions, it has intrinsic advantages of local high accuracy, low computation time, and simple meshing procedure. However, it has disadvantages such as accuracy of numerical integration and matrix singularity. Although several additional techniques have been proposed to mitigate these limitations, they are computationally expensive or ad-hoc, and detract from the method's strengths. To solve these issues, we propose a novel strategy called B-spline based SFEM. To improve the accuracy of numerical integration, we employed cubic B-spline basis functions with $C^2$-continuity across element boundaries as the global basis functions. To avoid matrix singularity, we applied different basis functions to different meshes. Specifically, we employed the Lagrange basis functions as local basis functions. The numerical results indicate that using the proposed method, numerical integration can be calculated with sufficient accuracy without any additional techniques used in conventional SFEM. Furthermore, the proposed method avoids matrix singularity and is superior to conventional methods in terms of convergence for solving linear equations. Therefore, the proposed method has the potential to reduce computation time while maintaining a comparable accuracy to conventional SFEM. author: - Nozomi Magome - Naoki Morita - Shigeki Kaneko - Naoto Mitsume bibliography: - bibliography_paperpile.bib title: Higher-continuity s-version of finite element method with B-spline functions --- s-version of finite element method ,mesh superposition method ,B-spline basis functions ,localized mesh refinement # Introduction The modeling of flow dynamics with moving boundaries and interfaces, including fluid--structure interactions [@ishihara2009two], free-surface flows [@queutey2007interface], and two-fluid flows [@qian2006free], plays a prominent role in many scientific and engineering fields. Depending on the nature of these problems, we can use an interface-tracking or interface-capturing method for their computation. In an interface-tracking method, such as arbitrary Lagrangian--Eulerian (ALE) schemes [@hirt1974arbitrary] and deforming-spatial-domain/stabilized space--time (DSD/SST) [@tezduyar1991stabilized; @tezduyar1992computation], as the interfaces move and the fluid domain changes its shape, the mesh moves to adjust to the shape change and follow the interfaces. Moving the fluid mesh to follow the interfaces enables us to control the mesh resolution across the entire domain, produce a high-resolution representation of the boundary layers, and obtain highly accurate solutions in such critical flow regions. As we move the mesh, if the element distortion exceeds the threshold for good accuracy, a remeshing (i.e., mesh regenerating) is performed. Although some advanced mesh update methods that aim to decrease the frequency of remeshing and sustain the good quality of elements near solid surfaces have been developed [@takizawa2020low; @tonon2021linear], these approaches incur additional non-negligible computation time and significant coding effort. In addition, even if an algorithm is robust to small interface movements, it may exhibit numerical instabilities when dealing with large deformations, movements, and contacts of interfaces [@sahin2009arbitrary]. 
Generally, an interface-capturing method is used to solve these problems of interface-tracking methods. In this approach, flow fields are represented by a fixed Eulerian mesh regardless of the change in interface. Although a fixed Eulerian mesh by itself cannot be used to model and simulate a complex moving geometry, combining it with interface representation approaches enables us to handle boundary conditions at interfaces. Previously developed approaches include the immersed boundary (IB) methods [@peskin1972flow], extended immersed boundary (EIB) method [@wang2004extended], immersed finite element (IFE) method [@zhang2004immersed], distributed Lagrange multiplier/fictitious domain (DLM/FD) method [@glowinski1999distributed], extended FEM (XFEM) [@wagner2001extended], finite cover method (FCM) [@terada2003finite], cut-cell methods with marker particles [@udaykumar1996elafint], and level-set methods [@dunne2006eulerian]. Owing to their advantages associated with mesh processing and robustness against moving boundaries, interface-capturing approaches have been employed in various applications that involve complex geometries and large boundary movements ranging from incompressible flows to turbulence flows [@kan2021numerical] and FSI phenomena [@souza2022multi; @kawakami2022fluid]. More details pertaining to these approaches can be found in several reviews [@osher2001level; @kim2019immersed; @huang2019recent]. However, it remains difficult to achieve locally high resolution using interface-capturing approaches. To obtain highly accurate solutions in critical flow domains such as boundary layers, localized fine meshes are required. If a uniform mesh is used, this requirement is inevitably extended to the entire computational domain, and the resulting mesh may exceed storage capacity. For this purpose, several adaptive mesh refinement schemes [@roma1999adaptive; @hartmann2008adaptive; @griffith2012immersed; @salih2019thin; @borker2019mesh; @aldlemy2020adaptive], overlapping schemes [@henshaw2008parallel; @massing2014stabilized; @bathe2017finite; @huang2021convergence], and hybrid schemes that merge concepts from capturing methods and ALE formulations [@gerstenberger2008enhancement] have been proposed. However, these refinement algorithms often fail to guarantee cell conformity and consistent interpolation of the adapted meshes, or are highly complex. By contrast, @fish1992s proposed the s-version of finite element method (SFEM), another localized mesh refinement approach. SFEM uses two-level FEM meshes - a global mesh and a local mesh - to model the target domain, where a fine local mesh(es) representing local features is superposed on the relatively coarse global mesh that represents the entire analytical domain. Variables in the mesh superposing region are given by the sum of those in the global and local meshes. Note that the local mesh can be inserted into an arbitrary part of the domain independently of the global mesh, so that the complex meshing procedure can be avoided. 
SFEM has been successfully applied to various engineering problems, such as stress analyses of laminated composites [@fish1992multi; @FISH1993363; @REDDY199321; @FISH1994135; @ANGIONI2011780; @ANGIONI2012559; @Chen2014-ro; @jiao2015adaptive; @Jiao2015-fo; @KUMAGAI2017136; @Sakata2020-dq], mesoscopic analyses of particulate composites[@Okada2004-el; @Okada2004-co], multiscale analyses of fibre-reinforced composites [@vorobiov2017mesh; @sakata2022mesh], concurrent multiscaling [@fish1993multiscale; @sun2018variant; @cheng2022multiscale], multiscale analyses of porous materials [@takano2003multi; @takano2004three; @kawagai2006image; @tsukino2015multiscale], dynamic analyses of transient problems [@yue2005adaptive; @yue2007adaptive], and shape and topology optimization problems [@wang2006moving]. Fracture mechanics problems, such as fatigue crack and dynamic crack propagation, are also major applications of SFEM [@fish1993adaptive; @FISH1994135; @lee2004combined; @okada2005fracture; @okada2007application; @fan2008rs; @nakasumi2008crack; @kikuchi2012crack; @kikuchi2014fatigue; @wada2014fatigue; @kikuchi2016crack; @xu2018study; @kishi2020dynamic; @cheng2022multiscale; @he2023strategy; @cheng2023application]. Furthermore, SFEM has been combined with XFEM [@lee2004combined; @nakasumi2008crack; @ANGIONI2011780; @ANGIONI2012559; @Jiao2015-fo] and phase field modeling [@cheng2022multiscale; @cheng2023application]. The approximation concept used in SFEM has been applied to the coupling of peridynamics and FEM [@sun2019superposition; @sun2022numerical; @sun2022parallel; @sun2023stabilized]. Despite its advantages, SFEM presents two challenges. The first challenge is the inaccuracy of numerical integration based on Gaussian quadrature, occurring when the global and local elements exhibit partial mutual superposition. The Lagrange basis functions used in the conventional SFEM have $C^0$-continuity, and their derivatives have discontinuities across element boundaries. Owing to their low continuity, if the integral domain contains the Lagrange element boundaries, the integrands are often discontinuous. As a result, the accuracy of the Gaussian quadrature degrades. To improve the accuracy, some approaches subdivide the integral domain into several subdomains [@FISH1993363; @FISH1994135; @Okada2004-el; @Okada2004-co; @okada2005fracture; @okada2007application; @he2023strategy], while others apply high-order Gaussian quadrature [@fish1992s; @lee2004combined; @nakasumi2008crack; @kishi2020dynamic]. However, all of them require large computation time. The second challenge inherent to SFEM is the singularity of the matrix. In the SFEM framework, two or more finite meshes are superposed, and the basis functions in said meshes are not guaranteed to be linearly independent of each other. If the basis functions in one mesh can be represented as a linear combination of those in other meshes, matrix singularity occurs. @ooya2009linear pointed out that once the problem arises, a linear equation solver using an iterative method, such as the conjugate gradient method, either fails to converge or is extremely slow to converge to the solution. Although some approaches have been proposed to solve this problem [@fish1992s; @FISH1994135; @ANGIONI2011780; @ANGIONI2012559; @yue2005adaptive; @yue2007adaptive; @fan2008rs; @nakasumi2008crack; @park2003efficient; @ooya2009linear], many of them are ad-hoc methods that represent slight modifications to the models, or incur additional computation time. 
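To make the first of these challenges concrete, the following self-contained sketch, which is not taken from this paper, applies Gauss--Legendre quadrature to a function that is only $C^0$ inside the integration cell; this mimics a global element whose interior contains a local element boundary, so the integrand has a kink. Quadrature over the whole cell converges slowly, whereas subdividing the cell at the kink, as in the subdomain-based remedies cited above, is exact up to round-off for piecewise-linear integrands. The kink location and the integrand are arbitrary illustrative choices.

```python
# Gauss-Legendre quadrature of a kinked (C^0) integrand: whole cell vs. split cell.
import numpy as np

def gauss_on_interval(f, a, b, n):
    """n-point Gauss-Legendre approximation of the integral of f over [a, b]."""
    xi, wi = np.polynomial.legendre.leggauss(n)     # nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * xi + 0.5 * (a + b)
    return 0.5 * (b - a) * np.sum(wi * f(x))

kink = 0.3
f = lambda x: np.maximum(0.0, x - kink)             # piecewise linear, kink at x = 0.3
exact = 0.5 * (1.0 - kink) ** 2                     # integral over [0, 1]

for n in (2, 4, 8, 16):
    whole = gauss_on_interval(f, 0.0, 1.0, n)       # kink inside the cell
    split = (gauss_on_interval(f, 0.0, kink, n)     # cell subdivided at the kink
             + gauss_on_interval(f, kink, 1.0, n))
    print(f"n={n:2d}  whole-cell error={abs(whole - exact):.2e}  "
          f"split-cell error={abs(split - exact):.2e}")
```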
More details on the difficulties of conventional SFEM are discussed in Section [3.2](#sec:difficulties_in_lagrange){reference-type="ref" reference="sec:difficulties_in_lagrange"}. In this study, we propose a new SFEM framework to avoid these problems fundamentally. To improve the accuracy of numerical integration, functions with high continuity across the element boundaries were used as global basis functions such that the integrands are smooth and continuous. Specifically, we applied cubic B-spline basis functions, which have $C^2$-continuity across the element boundaries, as the global basis functions. Furthermore, to address matrix singularity, we applied different types of functions as the basis functions in different meshes. Specifically, we employed Lagrange basis functions as the local basis functions. Note that unlike the conventional method, our framework does not require additional and computationally expensive techniques to address these issues. Thus, the proposed method is expected to achieve the same level of accuracy with less computation time than the conventional method. The remainder of this paper is organized as follows. Basic formulations and the concept of SFEM are presented in Section [2](#sec:formulation_of_s-fem){reference-type="ref" reference="sec:formulation_of_s-fem"}. The formulations and difficulties of conventional SFEM, and the formulations and the advantages of our proposed B-spline based SFEM method, are introduced in Section [3](#sec:bsfem){reference-type="ref" reference="sec:bsfem"}. The proposed method is verified in Section [4](#sec:verification){reference-type="ref" reference="sec:verification"}. Finally, the conclusions of this study are presented in Section [5](#sec:conclusions){reference-type="ref" reference="sec:conclusions"}. # Basic formulation of SFEM {#sec:formulation_of_s-fem} This section reviews the basic formulations and underlying concept of SFEM. The target problem of this study is Poisson's equation with Dirichlet boundary conditions given as follows: [\[eq:poisson\]]{#eq:poisson label="eq:poisson"} $$\begin{aligned} \Delta{u} + f &= 0 \quad \mathrm{in} \; \Omega, \\ u &= g \quad \mathrm{on} \; \Gamma_D, \label{eq:dirichlet} \end{aligned}$$ where $\Omega$ is the domain and $\Gamma$ is its boundary, consisting of $\Gamma_D = \Gamma$. The function $u : \Omega \rightarrow \mathbb{R}$ is a trial solution and the function $f : \Omega \rightarrow \mathbb{R}$ is given. Eq. [\[eq:dirichlet\]](#eq:dirichlet){reference-type="eqref" reference="eq:dirichlet"} represents the Dirichlet boundary conditions. We define the trial solution space $\mathcal{S}$ and test function space $\mathcal{V}$ as $$\mathcal{S} = \{ u \mid u \in H^1\left( \Omega \right), u|_{\Gamma_D} = g \}$$ and $$\mathcal{V} = \{ w \mid w \in H^1\left( \Omega \right), w|_{\Gamma_D} = 0 \},$$ respectively, where $H^1\left( \Omega \right)$ is the Sobolev space. The resulting weak form of the problem is: Given $f$ and $g$, find $u \in \mathcal{S}$ such that for all $w \in \mathcal{V}$, $$a_{\Omega} \left( w, u \right) = L_{\Omega} \left( w \right),$$ where $$a_{\Omega} \left( w, u \right) = \int_{\Omega} \nabla{w} \cdot \nabla{u} \; d \Omega,$$ and $$L_{\Omega} \left( w \right) = \int_{\Omega} w f \; d \Omega.$$ Here, $a_{\Omega} \left( \cdot, \cdot \right)$ is a bilinear form, and $L_{\Omega} \left( \cdot \right)$ is a linear functional. In the framework of SFEM [@fish1992s], the domain $\Omega$ is discretized by some finite element meshes defined with mutual independence. 
In many cases, as shown in Figure [1](#fig:sfem_meshes){reference-type="ref" reference="fig:sfem_meshes"}, one relatively coarse mesh may represent the entire domain $\Omega^{\mathrm{G}}$ that corresponds to domain $\Omega$, known as the global mesh. The local domain $\Omega^{\mathrm{L}}$ requiring high resolution is discretized by the finer local mesh, which is superimposed on the global mesh. The local domain $\Omega^{\mathrm{L}}$ is assumed to be included in the global domain $\Omega^{\mathrm{G}}$ as $\Omega^{\mathrm{L}}$ $\subseteq$ $\Omega^{\mathrm{G}}$. In the present work, although the local domain $\Omega^{\mathrm{L}}$ must not extend outside of the global domain $\Omega^{\mathrm{G}}$, the boundary of the local domain $\Omega^{\mathrm{L}}$ is permitted to overlap with the boundary of the global domain $\Omega^{\mathrm{G}}$. Because meshes are defined independently from each other without considering their mutual consistency, mesh generation can be simplified. Subscripts $\mathrm{G}$ and $\mathrm{L}$ respectively represent the quantities of the global and local meshes, as illustrated in Figure [1](#fig:sfem_meshes){reference-type="ref" reference="fig:sfem_meshes"}. ![Global and local meshes defined in SFEM.](pics/1_sfem_area.pdf){#fig:sfem_meshes width="9cm"} In the formulation of SFEM, the trial solution $u$ in different regions is defined as $$\begin{aligned} u = \begin{cases} {u^{\mathrm{G}} \quad \mathrm{in} \; \Omega^{\mathrm{G}} \setminus \Omega^{\mathrm{L}}},\\ {u^{\mathrm{G}} + u^{\mathrm{L}} \quad \mathrm{in} \; \Omega^{\mathrm{L}}}. \end{cases}\end{aligned}$$ To ensure continuity between the global and local meshes, the following Dirichlet boundary condition is imposed: $$u^{\mathrm{L}} = 0 \quad \mathrm{on} \; \Gamma^{\mathrm{GL}}, \label{eq:dirichlet_local}$$ where $\Gamma^{\mathrm{GL}}$ is the boundary of the local domain. Because the global domain $\Omega^{\mathrm{G}}$ corresponds to domain $\Omega$, the following Dirichlet boundary condition is imposed: $$u^{\mathrm{G}} = g \quad \mathrm{on} \; \Gamma_D.$$ Based on the Galerkin method, the test function is defined as $$\begin{aligned} w = \begin{cases} {w^{\mathrm{G}} \quad \mathrm{in} \; \Omega^{\mathrm{G}} \setminus \Omega^{\mathrm{L}}},\\ {w^{\mathrm{G}} + w^{\mathrm{L}} \quad \mathrm{in} \; \Omega^{\mathrm{L}}}. \end{cases}\end{aligned}$$ We therefore define the trial solution spaces $\mathcal{S}^{\mathrm{G}}$, $\mathcal{S}^{\mathrm{L}}$ and test function spaces $\mathcal{V}^{\mathrm{G}}$, $\mathcal{V}^{\mathrm{L}}$ as $$\begin{aligned} \mathcal{S}^{\mathrm{G}} &= \{ u^{\mathrm{G}} \mid u^{\mathrm{G}} \in H^1\left( \Omega^{\mathrm{G}} \right), u^{\mathrm{G}}|_{\Gamma_D} = g \},\\ \mathcal{S}^{\mathrm{L}} &= \{ u^{\mathrm{L}} \mid u^{\mathrm{L}} \in H^1\left( \Omega^{\mathrm{L}} \right), u^{\mathrm{L}}|_{\Gamma^{\mathrm{GL}}} = 0 \},\\ \mathcal{V}^{\mathrm{G}} &= \{ w^{\mathrm{G}} \mid w^{\mathrm{G}} \in H^1\left( \Omega^{\mathrm{G}} \right), w^{\mathrm{G}}|_{\Gamma_D} = 0 \},\\ \mathcal{V}^{\mathrm{L}} &= \{ w^{\mathrm{L}} \mid w^{\mathrm{L}} \in H^1\left( \Omega^{\mathrm{L}} \right), w^{\mathrm{L}}|_{\Gamma^{\mathrm{GL}}} = 0 \}.\end{aligned}$$ The resulting weak form of Eq. 
[\[eq:poisson\]](#eq:poisson){reference-type="eqref" reference="eq:poisson"} in the SFEM formulation is: Given $f$ and $g$, find $\left( u^{\mathrm{G}}, u^{\mathrm{L}} \right) \in \mathcal{S}^{\mathrm{G}} \times \mathcal{S}^{\mathrm{L}}$ such that for all $\left( w^{\mathrm{G}}, w^{\mathrm{L}} \right) \in \mathcal{V}^{\mathrm{G}} \times \mathcal{V}^{\mathrm{L}}$ $$a_{\Omega}^{'} \left( w, u \right) = L_{\Omega}^{'} \left( w \right), \label{eq:variational_form_sfem_1}$$ where $$\begin{aligned} a_{\Omega}^{'} \left( w, u \right) =& \: a_{\Omega^{\mathrm{G}} \setminus \Omega^{\mathrm{L}}} \left( w^{\mathrm{G}}, u^{\mathrm{G}} \right) + a_{\Omega^{\mathrm{L}}} \left( w^{\mathrm{G}} + w^{\mathrm{L}}, u^{\mathrm{G}} + u^{\mathrm{L}} \right),\label{eq:sfem_a_1}\\ L_{\Omega}^{'} \left( w \right) =& \: L_{\Omega^{\mathrm{G}} \setminus \Omega^{\mathrm{L}}} \left( w^{\mathrm{G}} \right) + L_{\Omega^{\mathrm{L}}} \left( w^{\mathrm{G}} + w^{\mathrm{L}} \right).\label{eq:sfem_L_1}\end{aligned}$$ Recalling the bilinearity of $a_{\Omega} \left( \cdot, \cdot \right)$ and linearity of $L_{\Omega} \left( \cdot \right)$, we can rewrite Eqs. [\[eq:sfem_a\_1\]](#eq:sfem_a_1){reference-type="eqref" reference="eq:sfem_a_1"} and [\[eq:sfem_L\_1\]](#eq:sfem_L_1){reference-type="eqref" reference="eq:sfem_L_1"} as $$\begin{split} a_{\Omega}^{'} \left( w, u \right) = \: &a_{\Omega^{\mathrm{G}}} \left( w^{\mathrm{G}}, u^{\mathrm{G}} \right) + a_{\Omega^{\mathrm{L}}} \left( w^{\mathrm{G}}, u^{\mathrm{L}} \right) \\ &+ a_{\Omega^{\mathrm{L}}} \left( w^{\mathrm{L}}, u^{\mathrm{G}} \right) + a_{\Omega^{\mathrm{L}}} \left( w^{\mathrm{L}}, u^{\mathrm{L}} \right), \end{split}$$ $$\begin{aligned} L' \left( w \right) = L_{\Omega^{\mathrm{G}}} \left( w^{\mathrm{G}} \right) + L_{\Omega^{\mathrm{L}}} \left( w^{\mathrm{L}} \right).\end{aligned}$$ To convert this weak problem statement into a coupled system of linear algebraic equations, we apply Galerkin's method and work in finite-dimensional subspaces $\left( \mathcal{S}^{\mathrm{G}} \right)^h \subset \mathcal{S}^{\mathrm{G}}$, $\left( \mathcal{S}^{\mathrm{L}} \right)^h \subset \mathcal{S}^{\mathrm{L}}$, $\left( \mathcal{V}^{\mathrm{G}} \right)^h \subset \mathcal{V}^{\mathrm{G}}$, and $\left( \mathcal{V}^{\mathrm{L}} \right)^h \subset \mathcal{V}^{\mathrm{L}}$. We have a given function $g^h \in \left( \mathcal{S}^{\mathrm{G}} \right)^h$ such that $g^h|_{\Gamma_D} = g$, and thus for every $\left( u^{\mathrm{G}} \right)^h \in \left( \mathcal{S}^{\mathrm{G}} \right)^h$ we have a unique decomposition $$\left( u^{\mathrm{G}} \right)^h = \left( v^{\mathrm{G}} \right)^h + g^h,$$ where $\left( v^{\mathrm{G}} \right)^h \in \left( \mathcal{V}^{\mathrm{G}} \right)^h$. Therefore the Galerkin approximation of Eq. 
[\[eq:variational_form_sfem_1\]](#eq:variational_form_sfem_1){reference-type="eqref" reference="eq:variational_form_sfem_1"} is: Find $\left( u^{\mathrm{G}} \right)^h = \left( v^{\mathrm{G}} \right)^h + g^h$ and $\left( u^{\mathrm{L}} \right)^h = \left( v^{\mathrm{L}} \right)^h$, where $\left( \left( v^{\mathrm{G}} \right)^h, \left( v^{\mathrm{L}} \right)^h \right) \in \left( \mathcal{V}^{\mathrm{G}} \right)^h \times \left( \mathcal{V}^{\mathrm{L}} \right)^h$, such that for all $\left( \left( w^{\mathrm{G}} \right)^h, \left( w^{\mathrm{L}} \right)^h \right) \in \left( \mathcal{V}^{\mathrm{G}} \right)^h \times \left( \mathcal{V}^{\mathrm{L}} \right)^h$ $$a_{\Omega}^{'} \left( w^h, v^h \right) = L_{\Omega}^{'} \left( w^h \right) - a_{\Omega}^{'} \left( w^h, g^h \right), \label{eq:variational_form_sfem_2}$$ where $$\begin{split} a_{\Omega}^{'} \left( w^h, v^h \right) = \: &a_{\Omega^{\mathrm{G}}} \left( \left( w^{\mathrm{G}} \right)^h, \left( v^{\mathrm{G}} \right)^h \right) + a_{\Omega^{\mathrm{L}}} \left( \left( w^{\mathrm{G}} \right)^h, \left( v^{\mathrm{L}} \right)^h \right) \\ &+ a_{\Omega^{\mathrm{L}}} \left( \left( w^{\mathrm{L}} \right)^h, \left( v^{\mathrm{G}} \right)^h \right) + a_{\Omega^{\mathrm{L}}} \left( \left( w^{\mathrm{L}} \right)^h, \left( v^{\mathrm{L}} \right)^h \right), \end{split}$$ $$\begin{aligned} L' \left( w \right) = L_{\Omega^{\mathrm{G}}} \left( \left( w^{\mathrm{G}} \right)^h \right) + L_{\Omega^{\mathrm{L}}} \left( \left( w^{\mathrm{L}} \right)^h \right),\end{aligned}$$ and $$\begin{aligned} a_{\Omega}^{'} \left( w^h, g^h \right) = a_{\Omega^{\mathrm{G}}} \left( \left( w^{\mathrm{G}} \right)^h, g^h \right) + a_{\Omega^{\mathrm{G}}} \left( \left( w^{\mathrm{L}} \right)^h, g^h \right).\end{aligned}$$ We define $\boldsymbol{\eta}^{\mathrm{G}}$ and $\boldsymbol{\eta}^{\mathrm{L}}$ to be the sets containing the indices of all global basis functions $N_A^{\mathrm{G}}, A \in \boldsymbol{\eta}^{\mathrm{G}}$ and local basis functions $N_A^{\mathrm{L}}, A \in \boldsymbol{\eta}^{\mathrm{L}}$, respectively. Similarly, we let $\boldsymbol{\eta}_D^{\mathrm{G}} \subset \boldsymbol{\eta}^{\mathrm{G}}$ be the set containing the indices of all of global basis functions that are non-zero on $\Gamma_D$. Thus, $\left( u^{\mathrm{G}} \right)^h$ and $\left( u^{\mathrm{L}} \right)^h$ can be expressed as $$\begin{aligned} \left( u^{\mathrm{G}} \right)^h &= \sum_{A \in \boldsymbol{\eta}^{\mathrm{G}} - \boldsymbol{\eta}_D^{\mathrm{G}}} N_A^{\mathrm{G}} d_A^{\mathrm{G}} + \sum_{B \in \boldsymbol{\eta}_D^{\mathrm{G}}} N_B^{\mathrm{G}} g_B = \sum_{A \in \boldsymbol{\eta}^{\mathrm{G}} - \boldsymbol{\eta}_D^{\mathrm{G}}} N_A^{\mathrm{G}} d_A^{\mathrm{G}} + g^h, \label{eq:interporation_uG} \\ \left( u^{\mathrm{L}} \right)^h &= \sum_{A \in \boldsymbol{\eta}^{\mathrm{L}}} N_A^{\mathrm{L}} d_A^{\mathrm{L}}. \label{eq:interporation_uL}\end{aligned}$$ Similarly, $\left( w^{\mathrm{G}} \right)^h$ and $\left( w^{\mathrm{L}} \right)^h$ can be expressed as $$\begin{aligned} \left( w^{\mathrm{G}} \right)^h &= \sum_{A \in \boldsymbol{\eta}^{\mathrm{G}} - \boldsymbol{\eta}_D^{\mathrm{G}}} N_A^{\mathrm{G}} e_A^{\mathrm{G}}, \label{eq:interporation_wG} \\ \left( w^{\mathrm{L}} \right)^h &= \sum_{A \in \boldsymbol{\eta}^{\mathrm{L}}} N_A^{\mathrm{L}} e_A^{\mathrm{L}}. \label{eq:interporation_wL}\end{aligned}$$ Inserting Eqs. 
[\[eq:interporation_uG\]](#eq:interporation_uG){reference-type="eqref" reference="eq:interporation_uG"}, [\[eq:interporation_uL\]](#eq:interporation_uL){reference-type="eqref" reference="eq:interporation_uL"}, [\[eq:interporation_wG\]](#eq:interporation_wG){reference-type="eqref" reference="eq:interporation_wG"}, and [\[eq:interporation_wL\]](#eq:interporation_wL){reference-type="eqref" reference="eq:interporation_wL"} into Eq. [\[eq:variational_form_sfem_2\]](#eq:variational_form_sfem_2){reference-type="eqref" reference="eq:variational_form_sfem_2"} yields $$\begin{split} \sum_{A \in \boldsymbol{\eta}^{\mathrm{G}} - \boldsymbol{\eta}_D^{\mathrm{G}}} e_A^{\mathrm{G}} \Biggl( \sum_{B \in \boldsymbol{\eta}^{\mathrm{G}} - \boldsymbol{\eta}_D^{\mathrm{G}}} a_{\Omega^{\mathrm{G}}} \left( N_A^{\mathrm{G}}, N_B^{\mathrm{G}} \right)d_B^{\mathrm{G}} + \sum_{C \in \boldsymbol{\eta}^{\mathrm{L}}} a_{\Omega^{\mathrm{L}}} \left( N_A^{\mathrm{G}}, N_C^{\mathrm{L}} \right)d_C^{\mathrm{L}} \Biggr. \\ \Biggl. - L_{\Omega^{\mathrm{G}}} \left( N_A^{\mathrm{G}} \right) + a_{\Omega^{\mathrm{G}}} \left( N_A^{\mathrm{G}}, g^h \right) \Biggr) \\ + \sum_{D \in \boldsymbol{\eta}^{\mathrm{L}}} e_D^{\mathrm{L}} \Biggl( \sum_{B \in \boldsymbol{\eta}^{\mathrm{G}} - \boldsymbol{\eta}_D^{\mathrm{G}}} a_{\Omega^{\mathrm{L}}} \left( N_D^{\mathrm{L}}, N_B^{\mathrm{G}} \right)d_B^{\mathrm{G}} + \sum_{C \in \boldsymbol{\eta}^{\mathrm{L}}} a_{\Omega^{\mathrm{L}}} \left( N_D^{\mathrm{L}}, N_C^{\mathrm{L}} \right)d_C^{\mathrm{L}} \Biggr. \\ \Biggl. - L_{\Omega^{\mathrm{L}}} \left( N_D^{\mathrm{L}} \right) + a_{\Omega^{\mathrm{G}}} \left( N_D^{\mathrm{L}}, g^h \right) \Biggr) = 0. \end{split}$$ As the $e_A^{\mathrm{G}}$'s and $e_D^{\mathrm{L}}$'s are arbitrary, the terms in parentheses must be identically zero. Thus, for $A \in \boldsymbol{\eta}^{\mathrm{G}} - \boldsymbol{\eta}_D^{\mathrm{G}}$ and $D \in \boldsymbol{\eta}^{\mathrm{L}}$, $$\begin{gathered} \sum_{B \in \boldsymbol{\eta}^{\mathrm{G}} - \boldsymbol{\eta}_D^{\mathrm{G}}} a_{\Omega^{\mathrm{G}}} \left( N_A^{\mathrm{G}}, N_B^{\mathrm{G}} \right)d_B^{\mathrm{G}} + \sum_{C \in \boldsymbol{\eta}^{\mathrm{L}}} a_{\Omega^{\mathrm{L}}} \left( N_A^{\mathrm{G}}, N_C^{\mathrm{L}} \right)d_C^{\mathrm{L}}\\ = L_{\Omega^{\mathrm{G}}} \left( N_A^{\mathrm{G}} \right) - a_{\Omega^{\mathrm{G}}} \left( N_A^{\mathrm{G}}, g^h \right), \label{eq:befor_matrix1}\end{gathered}$$ $$\begin{gathered} \sum_{B \in \boldsymbol{\eta}^{\mathrm{G}} - \boldsymbol{\eta}_D^{\mathrm{G}}} a_{\Omega^{\mathrm{L}}} \left( N_D^{\mathrm{L}}, N_B^{\mathrm{G}} \right)d_B^{\mathrm{G}} + \sum_{C \in \boldsymbol{\eta}^{\mathrm{L}}} a_{\Omega^{\mathrm{L}}} \left( N_D^{\mathrm{L}}, N_C^{\mathrm{L}} \right)d_C^{\mathrm{L}}\\ = L_{\Omega^{\mathrm{L}}} \left( N_D^{\mathrm{L}} \right) - a_{\Omega^{\mathrm{G}}} \left( N_D^{\mathrm{L}}, g^h \right). 
\label{eq:befor_matrix2}\end{gathered}$$ Proceeding to define $$\begin{aligned} K_{AB} &= a_{\Omega^{\mathrm{G}}} \left( N_A^{\mathrm{G}}, N_B^{\mathrm{G}} \right), \\ K_{AC} &= a_{\Omega^{\mathrm{L}}} \left( N_A^{\mathrm{G}}, N_C^{\mathrm{L}} \right), \\ K_{DB} &= a_{\Omega^{\mathrm{L}}} \left( N_D^{\mathrm{L}}, N_B^{\mathrm{G}} \right), \\ K_{DC} &= a_{\Omega^{\mathrm{L}}} \left( N_D^{\mathrm{L}}, N_C^{\mathrm{L}} \right), \\ F_A &= L_{\Omega^{\mathrm{G}}} \left( N_A^{\mathrm{G}} \right) - a_{\Omega^{\mathrm{G}}} \left( N_A^{\mathrm{G}}, g^h \right),\\ F_D &= L_{\Omega^{\mathrm{L}}} \left( N_D^{\mathrm{L}} \right) - a_{\Omega^{\mathrm{G}}} \left( N_D^{\mathrm{L}}, g^h \right),\\ \boldsymbol{K}^{\mathrm{GG}} &= \lbrack K_{AB} \rbrack, \\ \boldsymbol{K}^{\mathrm{GL}} &= \lbrack K_{AC} \rbrack, \\ \boldsymbol{K}^{\mathrm{LG}} &= \lbrack K_{DB} \rbrack, \\ \boldsymbol{K}^{\mathrm{LL}} &= \lbrack K_{DC} \rbrack, \\ \boldsymbol{F}^{\mathrm{G}} &= \lbrace F_A \rbrace, \\ \boldsymbol{F}^{\mathrm{L}} &= \lbrace F_D \rbrace, \\ \boldsymbol{d}^{\mathrm{G}} &= \lbrace d_B^{\mathrm{G}} \rbrace, \\ \boldsymbol{d}^{\mathrm{L}} &= \lbrace d_C^{\mathrm{L}} \rbrace,\end{aligned}$$ and $$\begin{aligned} \boldsymbol{K} &= \begin{bmatrix} \boldsymbol{K}^{\mathrm{GG}} & \boldsymbol{K}^{\mathrm{GL}} \\ \boldsymbol{K}^{\mathrm{LG}} & \boldsymbol{K}^{\mathrm{LL}} \\ \end{bmatrix}, \label{eq:submatrix_K} \\ \boldsymbol{F} &= \begin{bmatrix} \boldsymbol{F}^{\mathrm{G}} \\ \boldsymbol{F}^{\mathrm{L}} \\ \end{bmatrix}, \\ \boldsymbol{d} &= \begin{bmatrix} \boldsymbol{d}^{\mathrm{G}} \\ \boldsymbol{d}^{\mathrm{L}} \\ \end{bmatrix},\end{aligned}$$ for $A, B \in \boldsymbol{\eta}^{\mathrm{G}} - \boldsymbol{\eta}_D^{\mathrm{G}}$ and $C, D \in \boldsymbol{\eta}^{\mathrm{L}}$, we can rewrite Eqs. [\[eq:befor_matrix1\]](#eq:befor_matrix1){reference-type="eqref" reference="eq:befor_matrix1"} and [\[eq:befor_matrix2\]](#eq:befor_matrix2){reference-type="eqref" reference="eq:befor_matrix2"} as simultaneous linear equations: $$\begin{aligned} \boldsymbol{K} \boldsymbol{d} = \boldsymbol{F}, \label{eq:system_sfem}\end{aligned}$$ where $\boldsymbol{K}^{\mathrm{GG}}$ and $\boldsymbol{K}^{\mathrm{LL}}$ are the submatrices for the global and local meshes, respectively, and $\boldsymbol{K}^{\mathrm{GL}}$ and $\boldsymbol{K}^{\mathrm{LG}}$ are submatrices representing the relationship between said meshes. The primary challenges of standard SFEM are the difficulty in exact integration of the submatrices $\boldsymbol{K}^{\mathrm{GL}}$ and $\boldsymbol{K}^{\mathrm{LG}}$, and the singularity of the matrix $\boldsymbol{K}$. These topics are discussed in Section [3.2](#sec:difficulties_in_lagrange){reference-type="ref" reference="sec:difficulties_in_lagrange"}. # B-spline based s-version of finite element method (BSFEM) {#sec:bsfem} This section introduces the concept and formulations of our proposed method. Lagrange basis functions, and conventional SFEM problems using said functions, are defined in Sections [3.1](#sec:formulation_of_lagrange){reference-type="ref" reference="sec:formulation_of_lagrange"} and [3.2](#sec:difficulties_in_lagrange){reference-type="ref" reference="sec:difficulties_in_lagrange"}, respectively. Subsequently, B-spline basis functions and details of the proposed method are presented in Sections [3.3](#sec:formulation_of_b-spline){reference-type="ref" reference="sec:formulation_of_b-spline"} and [3.4](#sec:details_bsfem){reference-type="ref" reference="sec:details_bsfem"}, respectively. 
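
To make the block system in Eq. [\[eq:system_sfem\]](#eq:system_sfem){reference-type="eqref" reference="eq:system_sfem"} concrete before turning to specific basis functions, the following self-contained one-dimensional sketch assembles and solves $-u''=1$ on $(0,1)$ with homogeneous Dirichlet data (so the $g^h$ terms vanish), using linear hat functions on both meshes. The meshes, function names, and midpoint-rule integration are illustrative choices of this note, not the authors' implementation; the local mesh is deliberately chosen so that it does not align with the interior global node at $x=0.5$, which keeps $\boldsymbol{K}$ nonsingular (cf. Section [3.2](#sec:difficulties_in_lagrange){reference-type="ref" reference="sec:difficulties_in_lagrange"}).

```python
import numpy as np

def hat_val(nodes, i, x):
    """i-th piecewise-linear (hat) basis function on `nodes`, evaluated at points x."""
    v = np.zeros_like(x)
    if i > 0:
        m = (x >= nodes[i - 1]) & (x <= nodes[i])
        v[m] = (x[m] - nodes[i - 1]) / (nodes[i] - nodes[i - 1])
    if i < len(nodes) - 1:
        m = (x > nodes[i]) & (x <= nodes[i + 1])
        v[m] = (nodes[i + 1] - x[m]) / (nodes[i + 1] - nodes[i])
    return v

def hat_grad(nodes, i, x):
    """Derivative of the i-th hat function (evaluated away from its kinks)."""
    g = np.zeros_like(x)
    if i > 0:
        m = (x > nodes[i - 1]) & (x < nodes[i])
        g[m] = 1.0 / (nodes[i] - nodes[i - 1])
    if i < len(nodes) - 1:
        m = (x > nodes[i]) & (x < nodes[i + 1])
        g[m] = -1.0 / (nodes[i + 1] - nodes[i])
    return g

xg = np.linspace(0.0, 1.0, 5)      # global mesh on Omega^G = (0, 1)
xl = np.linspace(0.25, 0.75, 6)    # local mesh on Omega^L = (0.25, 0.75), not aligned with x = 0.5
act_g = [1, 2, 3]                  # u^G = 0 at x = 0, 1 (homogeneous data, so the g^h terms vanish)
act_l = [1, 2, 3, 4]               # u^L = 0 on Gamma^GL = {0.25, 0.75}

# The integrands are piecewise linear/constant, so cutting at every node of either mesh
# and applying the midpoint rule on each piece integrates them exactly.
cuts = np.union1d(xg, xl)
mid, w = 0.5 * (cuts[:-1] + cuts[1:]), np.diff(cuts)
in_L = (mid > 0.25) & (mid < 0.75)
full = np.ones_like(mid, dtype=bool)
f = np.ones_like(mid)              # right-hand side f = 1; exact solution u = x(1 - x)/2

def a(nA, iA, nB, iB, mask):       # stiffness entries over the masked subdomain
    return np.sum(w[mask] * hat_grad(nA, iA, mid[mask]) * hat_grad(nB, iB, mid[mask]))

KGG = np.array([[a(xg, A, xg, B, full) for B in act_g] for A in act_g])
KGL = np.array([[a(xg, A, xl, C, in_L) for C in act_l] for A in act_g])
KLL = np.array([[a(xl, D, xl, C, in_L) for C in act_l] for D in act_l])
FG = np.array([np.sum(w * f * hat_val(xg, A, mid)) for A in act_g])
FL = np.array([np.sum(w[in_L] * f[in_L] * hat_val(xl, D, mid[in_L])) for D in act_l])

K = np.block([[KGG, KGL], [KGL.T, KLL]])   # K^LG = (K^GL)^T by symmetry of the bilinear form
F = np.concatenate([FG, FL])
d = np.linalg.solve(K, F)

# Reconstruct u = u^G (+ u^L inside Omega^L) and compare with the exact solution.
xs = np.linspace(0.0, 1.0, 401)
uh = sum(d[k] * hat_val(xg, A, xs) for k, A in enumerate(act_g))
inside = (xs >= 0.25) & (xs <= 0.75)
uh[inside] += sum(d[len(act_g) + k] * hat_val(xl, D, xs[inside]) for k, D in enumerate(act_l))
print("max |u_h - u_exact|:", np.max(np.abs(uh - 0.5 * xs * (1.0 - xs))))
```

The printed error should be small, comparable to the $O((h^{\mathrm{G}})^2)$ interpolation error of the coarse mesh, since the combined space contains the global piecewise-linear space.
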
## Formulation of Lagrange basis functions {#sec:formulation_of_lagrange} In this section, we briefly summarize the basics of Lagrange basis functions used in conventional SFEM. Let us denote coordinates in the physical space and parent element by $\boldsymbol{x}$ and $\hat{\boldsymbol{\xi}}$, respectively. To use Gaussian quadrature for integration, the interval of the parent element is defined as $[-1, 1]^d$, where $d$ is the space dimension. In the parent element, the Lagrange interpolation formula in one dimension is given by $$l_i^p \left( \hat{\xi} \right) = \prod_{j=1, j \neq i}^{p+1} \frac{\hat{\xi} - \hat{\xi}_j}{\hat{\xi}_i - \hat{\xi}_j},$$ where $i=1, 2, \dotsc, p+1$, $p$ is the order of the polynomial, and the number of functions (or nodes) used in the parent element is $n = p+1$. Let $\hat{\xi}_i$ be the parent coordinate of node $i$ and $\hat{\xi}_1 = -1, \hat{\xi}_{p+1} = 1$. The $p$th order Lagrange basis functions consist of the $p$th order Lagrange polynomials $l_i^p$. As shown in Figure [2](#fig:lagrange_function){reference-type="ref" reference="fig:lagrange_function"}, Lagrange basis functions have several essential features. The most important feature noted in this study is that although each Lagrange basis function of order $p$ has $C^{p-1}$-continuous derivatives inside each element, it has $C^0$-continuity and its derivatives have discontinuity across their respective element boundaries. Furthermore, each Lagrange basis function satisfies the interpolation property; that is, $$N_i \left( \hat{\xi}_j \right) = \delta_{ij}.$$ ![Linear, quadratic and cubic Lagrange basis functions.](pics/2_lagrange_1_2_3order.pdf){#fig:lagrange_function width="12cm"} ## Difficulties in conventional SFEM using Lagrange basis functions {#sec:difficulties_in_lagrange} Two challenges with the conventional Lagrange-based SFEM are described in this section. The first is the inaccuracy of numerical integration based on Gaussian quadrature. The second is the singularity of the matrix, occurring if the basis functions in one mesh can be represented as a linear combination of the basis functions in other meshes. ### Inaccurate numerical integration for discontinuous functions The first challenge of conventional SFEM involves the numerical integration of the submatrices $\boldsymbol{K}^{\mathrm{GL}}$ and $\boldsymbol{K}^{\mathrm{LG}}$ in Eq. [\[eq:submatrix_K\]](#eq:submatrix_K){reference-type="eqref" reference="eq:submatrix_K"}. The accuracy of numerical integration deteriorates based on Gaussian quadrature when a local element is superimposed on several global elements as shown in Figure [3](#fig:discontinuous_integration){reference-type="ref" reference="fig:discontinuous_integration"}. In many cases, the integration of $\boldsymbol{K}^{\mathrm{GL}}$ and $\boldsymbol{K}^{\mathrm{LG}}$ is conducted by each element on the local mesh. Figure [3](#fig:discontinuous_integration){reference-type="ref" reference="fig:discontinuous_integration"} shows that a local element contains two different global elements $A, B$ and the boundaries between them. The integrands often contain the first derivatives of global basis functions, which are Lagrange basis functions in conventional SFEM. As mentioned in Section [3.1](#sec:formulation_of_lagrange){reference-type="ref" reference="sec:formulation_of_lagrange"}, the first derivatives of $p$th order Lagrange basis functions exhibit discontinuities across element boundaries. 
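
The derivative jump can be seen directly in one dimension. The following sketch (an illustration written for this note, not the authors' code) evaluates the global basis function attached to a node shared by two adjacent quadratic elements of unit size and prints its one-sided physical derivatives at that node:

```python
import numpy as np

def lagrange(nodes, i, xi):
    """i-th Lagrange basis polynomial on the parent-element nodes, evaluated at xi."""
    v = 1.0
    for j, xj in enumerate(nodes):
        if j != i:
            v *= (xi - xj) / (nodes[i] - xj)
    return v

def dlagrange(nodes, i, xi, h=1e-7):
    # central finite difference; exact up to rounding for these low-order polynomials
    return (lagrange(nodes, i, xi + h) - lagrange(nodes, i, xi - h)) / (2.0 * h)

parent = np.array([-1.0, 0.0, 1.0])   # quadratic (p = 2) parent-element nodes

# Two unit-size elements share the physical node x = 1: on the left element the shared-node
# basis is the parent function at xi = +1, on the right element it is the one at xi = -1.
# Chain rule: d(xi)/dx = 2 / h = 2 for unit-size elements.
left_limit = dlagrange(parent, 2, 1.0) * 2.0
right_limit = dlagrange(parent, 0, -1.0) * 2.0
print(left_limit, right_limit)   # ~ +3 and -3: the first derivative jumps across the boundary
```
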
The resulting integrands often become discontinuous for local elements located at the boundaries of global elements. Because the Gaussian quadrature scheme assumes that the integrands are smooth and continuous, the accuracy degrades. ![An example of mesh superimposition that causes inaccurate quadrature.](pics/3_sfem_discontinue_mesh.pdf){#fig:discontinuous_integration width="8cm"} For the exact integration of such submatrices, @FISH1993363 and @FISH1994135 subdivided the local element into several subdomains, each of which corresponds to a global element, and performed Gaussian quadrature separately in each subdomain. (see Figure [4](#fig:subdivision_technique){reference-type="ref" reference="fig:subdivision_technique"}(a)) However, this subdivision process requires a complex treatment of geometries and incurs substantial computation time. Furthermore, the boundaries of each subdomain must be precisely defined, and the resulting subdomains may be arbitrary polygons that require further subdivision into simpler shapes. The resulting complexity may be excessive when unstructured meshes are used. To reduce the computation time other studies have applied the following approximate quadrature schemes. As shown in Figure [4](#fig:subdivision_technique){reference-type="ref" reference="fig:subdivision_technique"}(b), @okada2007application divided local elements into equal-sized square domains in the parent element coordinate space to confine numerical error due to discontinuous variations of the integrands in subdomains that contain discontinuities. They also discussed the effect of this subdivision technique on accuracy [@okada2005fracture]. In other studies [@Okada2004-el; @Okada2004-co; @he2023strategy], local elements that contain the edges of the global mesh are divided recursively as shown in Figure [4](#fig:subdivision_technique){reference-type="ref" reference="fig:subdivision_technique"}(c). On the other hand, to integrate discontinuous functions without subdivision techniques, a high-order Gauss quadrature may be used [@fish1992s; @lee2004combined; @nakasumi2008crack; @kishi2020dynamic; @sawada2010high]. ![Examples of existing mesh-subdivision approaches for discontinuous integrands.](pics/4_subdivision_existing_work.pdf){#fig:subdivision_technique width="13cm"} Although more effective subdivision techniques have been devised, they still incur additional computation time, and the more complex the target problem, the greater the impact of deterioration in computational accuracy. ### Loss of solution uniqueness based on independency of basis functions It is well-established that because the basis functions in two or more finite element meshes are not guaranteed to be linearly independent of each other, uniqueness of the decomposing numerical solution $u=u^{\mathrm{G}} + u^{\mathrm{L}}$ is sometimes lost [@fish1992s; @FISH1994135; @ANGIONI2011780; @ANGIONI2012559; @yue2005adaptive; @yue2007adaptive; @fan2008rs; @nakasumi2008crack; @park2003efficient; @ooya2009linear]. In other words, if the basis functions in one mesh can be represented as a linear combination of those in other meshes, a singularity of the matrix $\boldsymbol{K}$ in Eq. [\[eq:system_sfem\]](#eq:system_sfem){reference-type="eqref" reference="eq:system_sfem"} occurs. This is often the case if there is a patch of local elements having entire boundaries aligned along the global element sides as shown in Figure [5](#fig:loss_uniqueness){reference-type="ref" reference="fig:loss_uniqueness"}. 
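
A minimal one-dimensional illustration of this rank deficiency (constructed for this note, not taken from the cited works): when the local mesh boundaries and an interior local node coincide with global nodes, the global hat function at the shared node, restricted to $\Omega^{\mathrm{L}}$, lies in the span of the local hats, and the assembled matrix $\boldsymbol{K}$ acquires a nontrivial kernel.

```python
import numpy as np

def hat_grad(nodes, i, x):
    """Derivative of the i-th hat function on `nodes` (evaluated away from the kinks)."""
    g = np.zeros_like(x)
    if i > 0:
        m = (x > nodes[i - 1]) & (x < nodes[i])
        g[m] = 1.0 / (nodes[i] - nodes[i - 1])
    if i < len(nodes) - 1:
        m = (x > nodes[i]) & (x < nodes[i + 1])
        g[m] = -1.0 / (nodes[i + 1] - nodes[i])
    return g

xg = np.array([0.0, 0.25, 0.5, 0.75, 1.0])       # global mesh on (0, 1)
xl = np.array([0.25, 0.375, 0.5, 0.625, 0.75])   # local mesh aligned with the global one
act_g, act_l = [1, 2, 3], [1, 2, 3]              # active hats (Dirichlet / Gamma^GL nodes removed)

# Derivative products are piecewise constant, so cutting at every node and sampling
# the midpoint of each piece integrates the stiffness entries exactly.
cuts = np.union1d(xg, xl)
mid, w = 0.5 * (cuts[:-1] + cuts[1:]), np.diff(cuts)
in_L = (mid > 0.25) & (mid < 0.75)
full = np.ones_like(mid, dtype=bool)

def a(nA, iA, nB, iB, mask):
    return np.sum(w[mask] * hat_grad(nA, iA, mid[mask]) * hat_grad(nB, iB, mid[mask]))

KGG = np.array([[a(xg, A, xg, B, full) for B in act_g] for A in act_g])
KGL = np.array([[a(xg, A, xl, C, in_L) for C in act_l] for A in act_g])
KLL = np.array([[a(xl, D, xl, C, in_L) for C in act_l] for D in act_l])
K = np.block([[KGG, KGL], [KGL.T, KLL]])

# Kernel vector: global hat at x = 0.5 minus its interpolant on the local mesh.
d = np.array([0.0, 1.0, 0.0, -0.5, -1.0, -0.5])
print("smallest singular value of K:", np.linalg.svd(K, compute_uv=False).min())  # ~ 0
print("max |K @ d|:", np.abs(K @ d).max())                                        # ~ 0: K is singular
```

Both printed quantities vanish to rounding error, i.e., the stacked global--local basis is linearly dependent for this aligned configuration; removing the local node at $x=0.5$ restores a nonsingular matrix.
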
Once the problem arises, a linear equation solver using an iterative method, such as the conjugate gradient method, either fails to converge to the solution, or exhibits extremely slow convergence. ![An example of mesh superimposition causing a loss of solution uniqueness.](pics/5_linear_independency.pdf){#fig:loss_uniqueness width="6cm"} To solve this problem, several prior studies have employed structured meshes for both global and local meshes, suppressing the degrees of freedom at the nodes in local meshes that coincide with those at the global mesh to eliminate redundant degrees of freedom in the former [@park2003efficient; @yue2005adaptive; @yue2007adaptive; @fan2008rs; @ANGIONI2011780; @ANGIONI2012559]. Other approaches eliminate equations with zero (or close to zero) pivots encountered in the course of factorizing the equations corresponding to the local meshes, thereby ensuring rank sufficiency in the case of unstructured mesh superimposes [@FISH1994135]. @ooya2009linear proposed an approach that systematically finds the linear dependencies of degrees of freedom, and suggested the possibility that the dependency results in an ill-conditioned matrix. @nakasumi2008crack discretized the global mesh aslant. However, many of these methods are ad-hoc approaches that slightly modify the underlying models, typically requiring additional computation time. Consequently, approaches that constrain the degrees of freedom detract from low computation time and simplicity in the meshing procedure of SFEM. ## Formulation of B-spline basis functions {#sec:formulation_of_b-spline} The following section summarizes B-spline basis functions. We note that these functions are employed to ensure a smooth global mesh discretization, which has significant benefits compared to Lagrange basis functions in the numerical integration of the submatrices $\boldsymbol{K}^{\mathrm{GL}}$ and $\boldsymbol{K}^{\mathrm{LG}}$. We first describe the basic framework. A knot vector in one dimension is a non-decreasing set of coordinates in the parameter space $\hat{\Omega}$ written as $\boldsymbol{\Xi} = \{ \xi_1, \xi_2, \dotsc, \xi_{n+p+1} \}$, where $\xi_i \in \mathbb{R}$ is the $i$th knot, $i$ is the knot index, $i=1, 2, \dotsc, n+p+1$, $p$ is the polynomial order, and $n$ is the number of B-spline basis functions. The knots partition the parametric space into elements. For a given knot vector, the B-spline basis functions are defined recursively starting with piecewise constants $\left( p=0 \right)$ $$\begin{aligned} N_{i,0} \left( \xi \right) = \begin{cases} {1 \quad \mathrm{if} \; \xi_{i} \leq \xi < \xi_{i+1} },\\ {0 \quad \mathrm{otherwise}}. \end{cases}\end{aligned}$$ For $p=1, 2, 3, \dotsc,$ these functions are defined by $$\begin{aligned} N_{i,p} \left( \xi \right) = \frac{\xi - \xi_i}{\xi_{i+p} - \xi_i} N_{i,p-1}\left( \xi \right) + \frac{\xi_{i+p+1} - \xi}{\xi_{i+p+1} - \xi_{i+1}} N_{i+1,p-1}\left( \xi \right),\end{aligned}$$ which is the Cox--de Boor recursion formula. The derivatives of the functions are represented in terms of B-spline lower-order bases. 
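
The recursion can be transcribed directly; a short sketch written for this note (the open cubic knot vector is chosen only for illustration, and production codes typically use more efficient evaluation schemes):

```python
import numpy as np

def bspline_basis(i, p, knots, xi):
    """Evaluate N_{i,p}(xi) by the Cox-de Boor recursion; 0/0 terms are treated as 0."""
    if p == 0:
        return 1.0 if knots[i] <= xi < knots[i + 1] else 0.0
    val = 0.0
    if knots[i + p] > knots[i]:
        val += (xi - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, knots, xi)
    if knots[i + p + 1] > knots[i + 1]:
        val += (knots[i + p + 1] - xi) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, knots, xi)
    return val

# Cubic (p = 3) basis on the open knot vector {0,0,0,0, 0.25, 0.5, 0.75, 1,1,1,1}: n = 7 functions.
p = 3
knots = np.array([0, 0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1, 1], dtype=float)
n = len(knots) - p - 1

xi = 0.4
vals = [bspline_basis(i, p, knots, xi) for i in range(n)]
print(vals)
print("partition of unity:", sum(vals))   # ~ 1.0 for xi in [0, 1)
```
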
For a given polynomial order $p$ and knot vector $\boldsymbol{\Xi}$, the derivative of the $i$th B-spline basis function is given by $$\begin{aligned} \frac{d}{d \xi} N_{i,p} \left( \xi \right) = \frac{p}{\xi_{i+p} - \xi_i} N_{i,p-1}\left( \xi \right) - \frac{p}{\xi_{i+p+1} - \xi_{i+1}} N_{i+1,p-1}\left( \xi \right).\end{aligned}$$ B-spline curves are constructed by taking a linear combination of B-spline basis functions: $$\boldsymbol{C}\left( \xi \right) = \sum_{i=1}^{n} N_{i, p} \left( \xi \right) \boldsymbol{B}_i,$$ where $\boldsymbol{B}_i \in \mathbb{R}^d$ are the control points. In higher dimensions, B-spline basis functions are constructed from tensor products similarly to Lagrange basis functions. Given additional knot vectors $\boldsymbol{\mathcal{H}} = \{ \eta_1, \eta_2, \dotsc, \eta_{m+q+1} \}$ and $\boldsymbol{\mathcal{Z}} = \{ \zeta_1, \zeta_2, \dotsc, \zeta_{l+r+1} \}$, B-spline volumes are defined as $$\boldsymbol{V}\left( \xi, \eta, \zeta \right) = \sum_{i=1}^{n} \sum_{j=1}^{m} \sum_{k=1}^{l} N_{i, p}\left( \xi \right) M_{j, q} \left( \eta \right) L_{k, r} \left( \zeta \right) \boldsymbol{B}_{i, j, k},$$ where $N_{i, p}\left( \xi \right)$, $M_{j, q} \left( \eta \right)$, and $L_{k, r} \left( \zeta \right)$ are univariate B-spline basis functions of orders $p$, $q$, and $r$, corresponding to knot vectors $\boldsymbol{\Xi}$, $\boldsymbol{\mathcal{H}}$, and $\boldsymbol{\mathcal{Z}}$, respectively. The $\boldsymbol{B}_{i, j, k}$'s form a control mesh that does not conform to the actual geometry. To ensure that the generated control mesh corresponds to the B-spline volume, we adopt a mesh generation method [@otoguro2017space] based on the projection of a mesh generated with existing techniques to a B-spline volume. For B-spline basis functions with $p=0$ and $p=1$, we obtain the same results as for standard piecewise constant and linear Lagrange basis functions, respectively. B-spline basis functions with $p \geq 2$, however, have several distinct features from their Lagrange-based counterparts. The most important feature noted in the present work is that B-spline basis functions have higher continuity across the element boundaries than Lagrange basis functions, as shown in Figure [6](#fig:b-spline_function){reference-type="ref" reference="fig:b-spline_function"}. The figure depicts quadratic and cubic B-spline basis functions for uniform knot vectors, which are assembled by knots that are equally-spaced in the parametric space. ![Quadratic and cubic B-spline basis functions for uniform knot vectors.](pics/6_B-spline_2_3order.pdf){#fig:b-spline_function width="12cm"} In general, $p$th order B-spline basis functions have $p-m_i$ continuous derivatives at knot $\xi_i$, where $m_i$ is the multiplicity of $\xi_i$ in the knot vector. In the present work, the multiplicity of all interior knots is defined as $m_i = 1$, where $p+2 \leq i \leq n$. Thus, $p$th order B-spline basis functions have $C^{p-1}$-continuity across the interior element boundaries. For numerical integration in B-spline meshes, Gaussian quadrature can be employed without any additional technique and is defined on each individual knot span in the parametric space. A knot vector is said to be open if its first and last knot values appear $p+1$ times. B-spline basis functions formed from open knot vectors are interpolatory at the ends of the parameter space interval $\lbrack \xi_1, \xi_{n+p+1} \rbrack$. In general, B-spline basis functions are not interpolatory at interior knots. 
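
The $C^{p-1}$-continuity at interior knots of multiplicity one, which is the property exploited by the proposed method, can also be checked numerically. A brief sketch using `scipy.interpolate.BSpline` (an assumed tool of this note, not part of the authors' implementation):

```python
import numpy as np
from scipy.interpolate import BSpline

p = 3
knots = np.array([0, 0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1, 1], dtype=float)  # open, interior multiplicity 1
n = len(knots) - p - 1

# Basis function N_{3,3}, whose support contains the interior knot xi = 0.5.
N = BSpline(knots, np.eye(n)[3], p)
eps = 1e-6
for nu in range(1, p + 1):
    dN = N.derivative(nu)
    print(f"order-{nu} derivative near xi = 0.5:", dN(0.5 - eps), dN(0.5 + eps))
# Orders 1 and 2 agree across the knot (C^2-continuity); the order-3 derivative jumps.
```
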
The present work employs open knot vectors. ## Proposed method: B-spline based s-version of finite element method (BSFEM) {#sec:details_bsfem} As shown in Section [3.2](#sec:difficulties_in_lagrange){reference-type="ref" reference="sec:difficulties_in_lagrange"}, the conventional SFEM has problems with accuracy and computation time of numerical integration, as well as matrix singularity resulting in poor convergence for solving linear equations. The former occurs because the global basis functions have low continuity across element boundaries. In conventional SFEM, Lagrange functions are used as both global and local basis functions. Because these functions have $C^0$-continuity, their derivatives exhibit discontinuities across element boundaries. Thus, the integrands often become discontinuous and the accuracy of Gaussian quadrature degrades when a local element contains several global elements. The latter problem arises because the global and local basis functions are not guaranteed to be linearly independent of each other. Although many existing studies addressed this problem by constraining the degrees of freedom, the approaches employed therein are ad-hoc or incur additional computation time. To solve the problem of accuracy, this study employs basis functions with high continuity across element boundaries as the global basis functions. Specifically, we define $p$th order B-spline basis functions ($p \geq 3$) as the global basis functions. The resulting integrands, including the first derivatives of basis functions, are smooth and continuous. Thus, Gaussian quadrature can be applied to the submatrices $\boldsymbol{K}^{\mathrm{GL}}$ and $\boldsymbol{K}^{\mathrm{LG}}$ without requiring additional techniques, ensuring accurate and efficient integration. To solve the latter problem, we apply different functions to the global and local basis functions to guarantee their mutual linear independence. Specifically, we employ B-spline and Lagrange functions as the global and local basis functions, respectively. This approach is a potentially versatile and fundamental method to guarantee the uniqueness of the numerical solution. As shown in Figure [7](#fig:sfem_mapping){reference-type="ref" reference="fig:sfem_mapping"}, when the integrands are evaluated on the quadrature points defined in the parent element corresponding to the local element for the calculation of $\boldsymbol{K}^{\mathrm{GL}}$ and $\boldsymbol{K}^{\mathrm{LG}}$, the locations of these points in the parent element corresponding to the global element must be identified. ![Schematic illustration of mapping between two meshes in SFEM.](pics/8_sfem_mapping.pdf){#fig:sfem_mapping width="10cm"} Because the problem of finding corresponding locations in the global mesh is nonlinear, an iterative procedure is generally required. When the shapes of the global elements are generated irregularly and the number of local elements increases, the mapping calculation may be rather inefficient and time-consuming. By contrast, we employ a structured mesh as the B-spline based global mesh, assuming the imposition of boundary conditions by interface capturing approaches [@peskin1972flow; @wang2004extended; @zhang2004immersed; @glowinski1999distributed; @wagner2001extended; @terada2003finite; @udaykumar1996elafint; @dunne2006eulerian]. Thereby, the mapping in our proposed framework can be explicitly obtained without using iterative procedures, allowing us to improve computational efficiency. 
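
For an axis-aligned, uniformly spaced structured global mesh, the explicit mapping amounts to an integer division followed by an affine rescaling. The following sketch (the function name and the parent-interval convention are assumptions of this note) returns, for a physical point of a local quadrature rule, the containing global knot span per axis and the corresponding parent coordinates in $[-1,1]^3$:

```python
import numpy as np

def global_parametric_coords(x, origin, h, n_elems):
    """
    Map the physical coordinates x of a local quadrature point to
    (knot-span indices, parent coordinates in [-1, 1]^3) of a structured global mesh
    with corner `origin`, uniform knot-span size `h`, and `n_elems` spans per axis.
    Assumes an axis-aligned, uniformly spaced global grid.
    """
    s = (np.asarray(x, dtype=float) - origin) / h                       # grid coordinates
    e = np.clip(np.floor(s).astype(int), 0, np.asarray(n_elems) - 1)    # containing span per axis
    xi = 2.0 * (s - e) - 1.0                                            # parent coordinates
    return e, xi

# Example: the [0, 2]^3 global domain of Section 4 with 12 knot spans per axis (h = 1/6).
e, xi = global_parametric_coords([0.43, 1.10, 0.07], origin=np.zeros(3), h=2.0 / 12, n_elems=12)
print(e, xi)
```

Because no Newton-type inversion of the geometric map is needed, the cost of locating quadrature points is independent of the number of local elements.
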
# Verification of the proposed method {#sec:verification} ## Target problem {#sec:target_problem} As described in Section [3](#sec:bsfem){reference-type="ref" reference="sec:bsfem"}, the conventional SFEM faces challenges in terms of numerical integration and independency of basis functions. To address these issues, we propose B-spline based SFEM, wherein B-spline and Lagrange basis functions are applied as the global and local basis functions, respectively. The proposed method employs $p$th order B-spline basis functions ($p=2, 3$) as global basis functions, and $q$th order Lagrange basis functions ($q=1, 2, 3$) as local basis functions. That is, six pairs of global and local basis functions are tested. In contrast, the conventional method employs $p$th order Lagrange basis functions ($p=1, 2, 3$) as global basis functions, and $q$th order Lagrange basis functions ($q=1, 2, 3$) as local basis functions. That is, nine pairs of global and local basis functions are tested. The target problem is the basic Poisson's equation with Dirichlet boundary conditions as expressed in Section [2](#sec:formulation_of_s-fem){reference-type="ref" reference="sec:formulation_of_s-fem"}. The domain for the analysis is defined as $[0, 2]^3$. The global and local meshes with their boundary conditions are located in $[0, 2]^3$ and $[0, 1]^3$, respectively, considering each element as a cube. An example of global and local meshes is illustrated in Figure [8](#fig:analysis_mesh){reference-type="ref" reference="fig:analysis_mesh"}. ![An example mesh model for SFEM.](pics/9_sfem_analysis_mesh.pdf){#fig:analysis_mesh width="12cm"} For the quantitative comparison, the same meshes were employed for the proposed and conventional methods in all verification tests. The element sizes $h^{\mathrm{G}}$ and $h^{\mathrm{L}}$ in the respective meshes were used as evaluation parameters. Each test was performed under two conditions: (A) $h^{\mathrm{G}}:h^{\mathrm{L}}=4:3$ and (B) $h^{\mathrm{G}}:h^{\mathrm{L}}=2:1$, as shown in Figure [8](#fig:analysis_mesh){reference-type="ref" reference="fig:analysis_mesh"}. The $n$-point Gaussian quadrature rule ensures the exact integration of a polynomial of degree $2n-1$ or lower. According to this rule, in FEM, using $p$th order basis functions, $p+1$-point Gaussian quadrature allows sufficiently accurate numerical integration. In SFEM, if polynomials of degree $p$ are applied to the basis functions of one mesh and those of degree $q$ are applied to the basis functions of the other mesh $(p \geq q)$, all numerical integrals can be solved accurately using the $p+1$-point Gaussian quadrature. In Case (A), however, the global element boundary is contained within the local element, and a discontinuous function is integrated. Therefore, we employ high-order Gaussian quadrature to avoid the integral discontinuities that occur in the conventional SFEM [@fish1992s; @lee2004combined; @nakasumi2008crack; @kishi2020dynamic; @sawada2010high]. In this study, we use the $p+8$-point Gaussian quadrature for Case (A) if polynomials of degree $p$ and $q$ are applied to the basis functions of two meshes, respectively $(p \geq q)$. We evaluated the change in the relative $L^2$ error norm when the order of the Gaussian quadrature varied by 1 from $p+1$-point to $p+10$-point for each pair of basis functions. These tests were conducted for cases with the coarsest mesh $(h^{\mathrm{G}}=0.166667)$, where the relative $L^2$ error norm was expected to be maximal. 
As a result, for all pairs of basis functions, the change in the relative $L^2$ error norm when moving from the $p+7$- or $p+9$-point rule to the $p+8$-point Gaussian quadrature was less than 5% of the overall value of the relative $L^2$ error norm. In other words, when the $p+8$-point Gaussian quadrature is used, the contribution of the integration of discontinuous functions to the overall relative $L^2$ error norm is sufficiently small, and the result is considered sufficiently accurate. We employed the same Gaussian quadrature for both the proposed and conventional methods to ensure a fair comparison. In Case (B), no global element boundary is contained within the local element and no integration of discontinuous functions occurs. Accordingly, we employ the $p+1$-point Gaussian quadrature without any additional techniques to improve accuracy. On the other hand, all global element boundaries in the local domain overlap with local element boundaries; thus, the basis functions of both meshes are mutually dependent in the conventional SFEM [@ooya2009linear]. Here, we do not use any additional approach to avoid matrix singularity for either the proposed or the conventional method. As described in Section [3.3](#sec:formulation_of_b-spline){reference-type="ref" reference="sec:formulation_of_b-spline"}, we define B-spline based meshes as structured meshes wherein every element is a cube. Open knot vectors are employed, and the multiplicity of all interior knots is defined as $1$. Hence, $p$th order B-spline basis functions have $C^{p-1}$-continuity across the element boundaries. To form control meshes corresponding to the B-spline volumes, we adopt the mesh generation method [@otoguro2017space]. In the present work, we employed (1) the relative $L^2$ error norm, (2) the number of iterations of the iterative method for solving the linear equations, and (3) the positive definiteness of the matrix as verification parameters. The manufactured solution approach [@roache1998verification] is employed so that a convergence study on the relative $L^2$ error norm can be performed. The exact solution is given as $$u = \sin{2 \pi x}\sin{2 \pi y}\sin{2 \pi z} + 10 \quad \mathrm{in} \; \Omega, \label{eq:manufactured_solution}$$ and the resulting Poisson's equation for verification is defined as $$\Delta{u} + 12 {\pi}^2 \sin{2 \pi x}\sin{2 \pi y}\sin{2 \pi z} = 0 \quad \mathrm{in} \; \Omega.$$ By applying Eq. [\[eq:manufactured_solution\]](#eq:manufactured_solution){reference-type="eqref" reference="eq:manufactured_solution"} to the problem domain boundary $\Gamma_D$ as the Dirichlet boundary condition, Eq. [\[eq:manufactured_solution\]](#eq:manufactured_solution){reference-type="eqref" reference="eq:manufactured_solution"} can be regarded as the exact solution of the problem.
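
The manufactured solution can be verified symbolically; a few lines using sympy (an assumption of this note, not the authors' toolchain) confirm that Eq. [\[eq:manufactured_solution\]](#eq:manufactured_solution){reference-type="eqref" reference="eq:manufactured_solution"} satisfies the stated Poisson equation:

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
u = sp.sin(2 * sp.pi * x) * sp.sin(2 * sp.pi * y) * sp.sin(2 * sp.pi * z) + 10
f = 12 * sp.pi**2 * sp.sin(2 * sp.pi * x) * sp.sin(2 * sp.pi * y) * sp.sin(2 * sp.pi * z)

laplacian_u = sp.diff(u, x, 2) + sp.diff(u, y, 2) + sp.diff(u, z, 2)
print(sp.simplify(laplacian_u + f))   # prints 0: u satisfies Delta(u) + f = 0
```
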
The relative $L^2$ error norm $\varepsilon_{L^2}$ in the analysis based on SFEM is expressed as $$\begin{aligned} \varepsilon_{L^2} &= \frac{ \sqrt{\int_{\Omega^{\mathrm{G}}} \left| u^h - u \right|^2 d \Omega} }{ \sqrt{\int_{\Omega^{\mathrm{G}}} \left| u \right|^2 d \Omega} } \nonumber \\ &= \frac{ \sqrt{ \int_{\Omega^{\mathrm{G}}\backslash\Omega^{\mathrm{L}}} \left| \left( u^{\mathrm{G}} \right)^h - u \right|^2 d \Omega + \int_{\Omega^{\mathrm{L}}} \left| \left( u^{\mathrm{G}} \right)^h + \left( u^{\mathrm{L}} \right)^h - u \right|^2 d \Omega }}{\sqrt{ \int_{\Omega^{\mathrm{G}}} \left| u \right|^2 d \Omega }}, \label{eq:sfem_error}\end{aligned}$$ where $u$ is the exact solution and $\left( u^{\mathrm{G}} \right)^h$ and $\left( u^{\mathrm{L}} \right)^h$ are the calculated solutions in global and local meshes, respectively. To simplify the integral calculations, both meshes are located so that the edges of the local domain are aligned along the global element boundaries, as shown in Figure [8](#fig:analysis_mesh){reference-type="ref" reference="fig:analysis_mesh"}. Integrations over $\Omega^{\mathrm{G}}$ and $\Omega^{\mathrm{G}}\backslash\Omega^{\mathrm{L}}$ are calculated based on the global mesh, whereas that over $\Omega^{\mathrm{L}}$ is calculated based on the local mesh. Krylov-subspace methods, such as the conjugate gradient method, are often used to solve linear systems. However, the conjugate gradient method can only be used to solve a symmetric positive definite matrix. In the conventional SFEM, the matrix [\[eq:submatrix_K\]](#eq:submatrix_K){reference-type="eqref" reference="eq:submatrix_K"} has been considered to be symmetric and positive definite in previous studies [@Okada2004-co]. However, no strict verification test or detailed discussion on the definiteness of the matrix in SFEM have been performed. In this study, we assessed the positive definiteness of the matrix in the proposed and conventional methods. One method for testing whether a symmetric matrix is positive definite is Cholesky factorization [@higham2009cholesky; @zhan1996computing]. As described by @higham2009cholesky, upon running the Cholesky factorization algorithm, a matrix is considered positive definite if the algorithm completes without encountering any negative or zero pivots, and not positive definite otherwise. In other words, if Cholesky factorization succeeds, all of the eigenvalues are positive, and the matrix is positive definite and regular. On the other hand, if Cholesky factorization fails, the matrix has at least one non-positive eigenvalue and is not positive definite. When the matrix has 0 eigenvalues, its determinant becomes 0 and the matrix is singular. When the matrix has negative eigenvalues, its determinant is not guaranteed to be 0 and the matrix is also not guaranteed to be singular. Loss of positive definiteness and matrix singularity are separate outcomes. We verified the number of iterations required to solve the linear equations because, in many large-scale and realistic problems, the time required to solve said equations accounts for a large portion of the overall computation time. In other words, reducing the number of iterations in solving linear equations significantly reduces the overall computation time. In the present work, we used the general-purpose linear equation solver library \"Monolithic non-overlapping / overlapping DDM based linear equation solver (monolis)\" [@monolis]. 
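
The two checks described above can be sketched in a few lines of Python (an illustration for this note; the computations reported in this paper use the monolis solver): a Cholesky-based positive-definiteness test and a conjugate gradient iteration with the diagonal scaling preconditioner, applied here to small stand-in matrices rather than assembled SFEM systems.

```python
import numpy as np

def is_positive_definite(K):
    """Cholesky-based test: success means all eigenvalues of the symmetric matrix are positive."""
    try:
        np.linalg.cholesky(K)
        return True
    except np.linalg.LinAlgError:
        return False

def jacobi_pcg(K, b, rtol=1e-10):
    """Conjugate gradient with diagonal-scaling (Jacobi) preconditioning; returns (x, iterations)."""
    Minv = 1.0 / np.diag(K)
    x = np.zeros_like(b)
    r = b - K @ x
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for k in range(1, len(b) + 1):          # at most n iterations in exact arithmetic
        Kp = K @ p
        alpha = rz / (p @ Kp)
        x += alpha * p
        r -= alpha * Kp
        if np.linalg.norm(r) <= rtol * np.linalg.norm(b):
            return x, k
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, len(b)                         # did not converge within n iterations

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
K_spd = A @ A.T + 200.0 * np.eye(200)        # stand-in for a well-behaved SFEM matrix
K_indef = np.array([[1.0, 2.0], [2.0, 1.0]]) # symmetric but indefinite (eigenvalues 3 and -1)

print(is_positive_definite(K_spd), is_positive_definite(K_indef))   # True ("Pass"), False ("Fail")
x, iters = jacobi_pcg(K_spd, rng.standard_normal(200))
print("CG iterations:", iters)
```
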
Table [1](#tab:sfem_solver){reference-type="ref" reference="tab:sfem_solver"} lists other conditions required to solve the linear equations.

------------------------------ --------------------------------
Linear solver                  conjugate gradient (CG) method
Preconditioner                 diagonal scaling method
Convergence criterion          $1.0 \times 10^{-10}$
Maximum number of iterations   degrees of freedom
------------------------------ --------------------------------

: Analytical conditions for matrix calculation.

[\[tab:sfem_solver\]]{#tab:sfem_solver label="tab:sfem_solver"}

The conjugate gradient method terminates in at most $n$ iterations, where $n$ corresponds to the degrees of freedom in the matrix, if no rounding errors are encountered [@hestenes1952methods]. If the method fails to converge in $n$ iterations, we conclude that the matrix is singular. To minimize the influence of rounding errors, we employed the diagonal scaling method as a preconditioner. Here, $n$ is the sum of the degrees of freedom of the global and local meshes. In this paper, we tested the positive definiteness of the matrix via Cholesky factorization, and matrix singularity via the convergence of the conjugate gradient method.

## Results {#sec:results}

In this verification, we discuss the differences in the overall trends of the results of the proposed and conventional methods with respect to the relative $L^2$ error norm, convergence of the conjugate gradient method, and positive definiteness of the matrix. In the following graphs (Figures [9](#fig:L2error_convergence_caseA){reference-type="ref" reference="fig:L2error_convergence_caseA"}, [10](#fig:L2error_convergence_caseB){reference-type="ref" reference="fig:L2error_convergence_caseB"}, [14](#fig:iteration_dof_caseA){reference-type="ref" reference="fig:iteration_dof_caseA"}, [15](#fig:iteration_dof_caseB){reference-type="ref" reference="fig:iteration_dof_caseB"}, [16](#fig:L2error_iteration_caseA){reference-type="ref" reference="fig:L2error_iteration_caseA"}, and [17](#fig:L2error_iteration_caseB){reference-type="ref" reference="fig:L2error_iteration_caseB"}), the results of the proposed method are shown in blue and those of the conventional method in orange. The convergence of the relative $L^2$ error norm defined in Eq. [\[eq:sfem_error\]](#eq:sfem_error){reference-type="eqref" reference="eq:sfem_error"} against the global element size $h^{\mathrm{G}}$ was evaluated for the proposed and conventional methods. The numerical results for Cases (A) and (B) are shown in Figures [9](#fig:L2error_convergence_caseA){reference-type="ref" reference="fig:L2error_convergence_caseA"} and [10](#fig:L2error_convergence_caseB){reference-type="ref" reference="fig:L2error_convergence_caseB"}, respectively.

![Convergence of relative $L^2$ error norm against global element size $h^{\mathrm{G}}$ for Case (A). \"(Failed)\" indicates that solutions did not converge in all cases except where calculation was not possible due to insufficient memory.](pics/L2error_convergence_4methods_ip7_caseA.pdf){#fig:L2error_convergence_caseA width="14cm"}

![Convergence of relative $L^2$ error norm against global element size $h^{\mathrm{G}}$ for Case (B).
\"(Failed)\" indicates that solutions did not converge in all cases except where calculation was not possible due to insufficient memory.](pics/L2error_convergence_4methods_ip7_caseB.pdf){#fig:L2error_convergence_caseB width="14cm"} The results show that the proposed method exhibits better error convergence in all cases when compared with the same order basis functions in the conventional method. Thus, the proposed method seems to show better accuracy than the conventional method for larger-scale analysis that requires more detailed meshes. Comparing the results of Cases (A) and (B) for each method, the errors are comparable in all cases. This indicates that the high-order Gaussian quadrature method can be used to calculate Case (A) of the conventional method, where the integration of discontinuous functions occurs, with sufficient accuracy. However, this approach incurs significant computation time, in line with other methods of improving accuracy which are more complex. In addition, both methods are most accurate when the global basis functions are third order, and there is little variability depending on the local basis functions. This seems to be a natural result because the global basis functions have a dominant impact on the relative $L^2$ error distribution over the entire domain, and this problem does not generate local regions that require high resolution. We note that when using the conventional method in most cases of (B), solutions did not converge even after the maximum number of iterations was reached, as shown in Table [1](#tab:sfem_solver){reference-type="ref" reference="tab:sfem_solver"}. These results are seemingly caused by matrix singularity. By contrast, using the proposed method, solutions converged in all cases of (A) and (B). These results indicate that the basis functions in the proposed method are guaranteed to be mutually linearly independent, thereby avoiding matrix singularity. To qualitatively assess the error due to the integration of discontinuous functions, the relative $L^2$ error distribution in the local domain was verified for three cases with different continuity of the global basis functions, where the ratio of global to local element sizes was extreme: $h_{\mathrm{G}}:h_{\mathrm{L}}= 40:3$. This test was performed under the following three cases: (1) cubic B-spline basis functions with $C^2$-continuity across element boundaries, (2) quadratic B-spline basis functions with $C^1$-continuity across element boundaries, and (3) cubic Lagrange basis functions with $C^0$-continuity across element boundaries are applied as global basis functions. In all cases, linear Lagrange functions were employed as local basis functions. To evaluate the error due to the integration of discontinuous functions, we applied the $p+1$-point Gaussian quadrature without any additional techniques to improve accuracy when $p$th and $q$th order basis functions were applied to two meshes, respectively $(p \geq q)$. The results are shown in Figure [13](#fig:error_distribution){reference-type="ref" reference="fig:error_distribution"}. ![Cross-sectional view (yz plane, x=0.33) of the relative $L^2$ error distribution in the local domain for the case of $h_\mathrm{G}:h_\mathrm{L}=40:3$. Global basis functions are (left:) cubic B-spline basis functions, (middle:) quadratic B-spline basis functions, and (right:) cubic Lagrange basis functions. 
The contours of error are shown in color (min: $1.2 \times 10^{-7}$, max: $5.4\times 10^{-4}$).](pics/error_distribution_proposed.png){#fig:error_distribution} ![Cross-sectional view (yz plane, x=0.33) of the relative $L^2$ error distribution in the local domain for the case of $h_\mathrm{G}:h_\mathrm{L}=40:3$. Global basis functions are (left:) cubic B-spline basis functions, (middle:) quadratic B-spline basis functions, and (right:) cubic Lagrange basis functions. The contours of error are shown in color (min: $1.2 \times 10^{-7}$, max: $5.4\times 10^{-4}$).](pics/error_distribution_proposed_2sbsp.png){#fig:error_distribution} ![Cross-sectional view (yz plane, x=0.33) of the relative $L^2$ error distribution in the local domain for the case of $h_\mathrm{G}:h_\mathrm{L}=40:3$. Global basis functions are (left:) cubic B-spline basis functions, (middle:) quadratic B-spline basis functions, and (right:) cubic Lagrange basis functions. The contours of error are shown in color (min: $1.2 \times 10^{-7}$, max: $5.4\times 10^{-4}$).](pics/error_distribution_conventional.png){#fig:error_distribution} The distribution of the relative $L^2$ error norm for each element in the local domain is shown by the colored contour, and the element boundaries of the global mesh are denoted by black lines. These results indicate that the case wherein the global basis functions have lower continuity produces significantly larger errors for local elements that are located across global element boundaries, and that the error distribution is discontinuous in proximity of said boundaries. In contrast, the proposed method, which uses cubic B-spline functions for the global basis, exhibits almost no such errors. That is, the proposed method allows sufficiently accurate computation using Gaussian quadrature without any additional and computationally expensive techniques to improve accuracy. Consequently, the proposed method can further reduce computation time at the same level of accuracy. Relationships between the number of iterations required for convergence and the degrees of freedom for Cases (A) and (B) are shown in Figures [14](#fig:iteration_dof_caseA){reference-type="ref" reference="fig:iteration_dof_caseA"} and [15](#fig:iteration_dof_caseB){reference-type="ref" reference="fig:iteration_dof_caseB"}, respectively. The results show that using the proposed method, the solutions converged at a small number of iterations in all cases of (A) and (B) even with large degrees of freedom. On the other hand, using the conventional method, the number of iterations increased significantly with degrees of freedom for both cases. Furthermore, many cases of (B) failed to converge using the conventional method, indicating matrix singularity. Therefore, not only does the conventional method fail to converge in some cases, but convergence is also slow and computationally intensive when solving simultaneous linear equations in almost all cases. On the other hand, the proposed method exhibits excellent convergence in all cases of (A) and (B). From these results, we conclude that the proposed method guarantees linear independence of basis functions, and thus, it has excellent convergence. Reducing the number of iterations to solve linear equations implies reducing the overall computation time. 
In addition, the proposed method does not require computationally expensive or ad-hoc techniques to improve convergence [@fish1992s; @FISH1994135; @ANGIONI2011780; @ANGIONI2012559; @yue2005adaptive; @yue2007adaptive; @fan2008rs; @nakasumi2008crack; @park2003efficient; @ooya2009linear], further reducing computation time and simplifying the meshing procedure of SFEM. ![Number of iterations until convergence against degrees of freedom for Case (A). \"(Failed)\" indicates that solutions did not converge in all cases except where calculation was not possible due to insufficient memory.](pics/iteration_dof_4methods_ip7_caseA.pdf){#fig:iteration_dof_caseA width="14cm"} ![Number of iterations until convergence against degrees of freedom for Case (B). \"(Failed)\" indicates that solutions did not converge in all cases except where calculation was not possible due to insufficient memory.](pics/iteration_dof_4methods_ip7_caseB.pdf){#fig:iteration_dof_caseB width="14cm"} Figures [14](#fig:iteration_dof_caseA){reference-type="ref" reference="fig:iteration_dof_caseA"} and [15](#fig:iteration_dof_caseB){reference-type="ref" reference="fig:iteration_dof_caseB"} show that the conventional method converged very slowly, even in cases where the solution converged and the matrix was not considered singular. For further verification of this problem, we focused on Case (A) and tested the positive definiteness of the matrices. Table [\[tab:definiteness_caseA\]](#tab:definiteness_caseA){reference-type="ref" reference="tab:definiteness_caseA"} lists the verification results of the positive definiteness of the matrices in the proposed and conventional SFEM for Case (A). We conclude that matrices for which the Cholesky factorization succeeds are positive definite, whereas those for which it fails are not. In the tables, \"Pass\" means that decomposition succeeded and \"Fail\" means that it failed. Blank columns indicate cases where calculation was not possible due to insufficient memory, which are discussed further. [\[tab:definiteness_caseA\]]{#tab:definiteness_caseA label="tab:definiteness_caseA"} The results show that the matrices in the proposed method were positive definite in all cases, whereas those in the conventional method were not positive definite in any of the cases for Case (A). As noted above, the conjugate gradient method and many other iterative methods can only be used on positive definite matrices. Thus, in conventional SFEM, not only can the matrix become singular, but it can also lose its positive definiteness. This result is a novel finding that contradicts the assumptions of existing studies, suggesting a possible cause of the poor convergence of the iterative approach when solving linear equations in the conventional method. Relationships between the relative $L^2$ error norm and the number of iterations until convergence for Cases (A) and (B) are shown in Figures [16](#fig:L2error_iteration_caseA){reference-type="ref" reference="fig:L2error_iteration_caseA"} and [17](#fig:L2error_iteration_caseB){reference-type="ref" reference="fig:L2error_iteration_caseB"}. ![Relative $L^2$ error norm against number of iterations until convergence for Case (A).
\"(Failed)\" indicates that solutions did not converge in all of its cases except cases where calculation was not possible due to insufficient memory.](pics/L2error_iteration_4methods_ip7_caseA.pdf){#fig:L2error_iteration_caseA width="14cm"} ![Relative $L^2$ error norm against number of iterations until convergence for Case (B). \"(Failed)\" indicates that solutions did not converge in all cases except where calculation was not possible due to insufficient memory.](pics/L2error_iteration_4methods_ip7_caseB.pdf){#fig:L2error_iteration_caseB width="14cm"} These results indicate that the proposed method requires far fewer iterations than the conventional method for the same relative $L^2$ error norm. Because the computation time for solving linear equations is a huge component of the overall computation time, the proposed method with excellent convergence results in much less computation time at the same level of accuracy. These results demonstrate that the proposed method achieves sufficient accuracy of numerical integration and excellent convergence without requiring additional computationally expensive or ad-hoc techniques. We therefore conclude that the proposed method is expected to achieve the same level of accuracy with less computation time than the conventional method, or vice versa. # Conclusions and future works {#sec:conclusions} This study proposed a B-spline based SFEM that fundamentally solved the challenges of the conventional SFEM in numerical integration and matrix singularity. There are two challenges with the conventional method. First, the inaccuracy of numerical integration in the term representing the interaction between the global and local meshes. This problem arises when the global basis functions have low continuity across the element boundaries. Second, because linear independence of the global and local basis functions is not guaranteed, the matrices become singular or nearly singular, and the convergence of solving simultaneous linear equations deteriorates. Thus, this study fundamentally solved these problems by applying cubic B-spline basis functions with $C^2$-continuity across element boundaries as global basis functions, while retaining Lagrangian basis functions as local basis functions. We verified the proposed and conventional SFEM using the relative $L^2$ error norm, number of iterations for solving linear equations, and positive definiteness of the matrix as parameters. Our results indicate that the proposed method can be computed with sufficient accuracy using Gaussian quadrature without requiring additional and computationally expensive techniques to improve accuracy. Furthermore, the conventional method was observed to require many iterations for convergence, and occasionally failed to converge even at the maximum number of iterations. In contrast, the proposed method exhibited convergence at significantly small numbers of iterations for the same problems. The relationship between excellent and poor convergence in the proposed and conventional methods and the positive definiteness of the matrix was identified. These results indicate that the proposed method guarantees linear independence of basis functions and has excellent convergence. Therefore, we concluded that the proposed method has potential to reduce computation time while maintaining accuracy. This study presents the first steps toward improving localized mesh refinement schemes in interface-capturing approaches for moving boundary problems. 
Future efforts will focus on introducing ALE schemes so that the moving local mesh can track boundary layers and their surrounding regions, on coupling with IB methods and other approaches to handle boundary conditions at interfaces, and on extending the method to unsteady nonlinear problems. # CRediT authorship contribution statement {#credit-authorship-contribution-statement .unnumbered} **Nozomi Magome:** Data curation, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing -- original draft. **Naoki Morita:** Methodology, Software, Supervision, Writing -- review & editing. **Shigeki Kaneko:** Methodology, Supervision, Writing -- review & editing. **Naoto Mitsume:** Conceptualization, Funding acquisition, Project administration, Resources, Supervision, Writing -- review & editing. # Declaration of competing interest {#declaration-of-competing-interest .unnumbered} The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. # Data availability {#data-availability .unnumbered} Data will be made available on request. # Acknowledgements {#acknowledgements .unnumbered} This work was supported by JST FOREST Grant Number JPMJFR215S and JSPS KAKENHI Grant Numbers 22H03601, 23H00475.
arxiv_math
{ "id": "2310.04011", "title": "Higher-continuity s-version of finite element method with B-spline\n functions", "authors": "Nozomi Magome, Naoki Morita, Shigeki Kaneko and Naoto Mitsume", "categories": "math.NA cs.NA", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We survey some recent developments on various notions of semipositivity for $(1,1)$-classes on complex manifolds, and discuss a number of open questions. address: Courant Institute of Mathematical Sciences, New York University, 251 Mercer St, New York, NY 10012 author: - Valentino Tosatti title: "Semipositive line bundles and $(1,1)$-classes" --- # Introduction Let $X^n$ be a compact Kähler manifold. A closed real $(1,1)$-form $\alpha$ on $X$ defines a cohomology class $$\label{quot} [\alpha]\in H^{1,1}(X,\mathbb{R})=\frac{\{\text{closed real }(1,1)\text{-forms}\}}{\{i \partial \overline{\partial}\varphi\ |\ \varphi\in C^\infty(X,\mathbb{R})\}},$$ in the finite-dimensional real vector space $H^{1,1}(X,\mathbb{R})$ of $(1,1)$-classes. Kodaira's $\partial\overline{\partial}$-Lemma shows that the natural map $H^{1,1}(X,\mathbb{R})\to H^2(X,\mathbb{R})$ to deRham cohomology is an injection. The simplest example of a $(1,1)$-class is $[\omega]$, where $\omega$ is a Kähler form on $X$, which we will refer to as a *Kähler class*. Every Kähler class is nonzero since $$[\omega]^n\cdot X=\int_X\omega^n=n!\, \mathrm{Vol}_g(X)>0,$$ where $g$ is the Hermitian metric associated to $\omega$. If we let $\omega$ vary among all Kähler forms on $X$, the corresponding Kähler classes $[\omega]$ define a cone $$\mathcal{C}=\{[\omega]\ |\ \omega \text{ K\"ahler form on }X\}\subset H^{1,1}(X,\mathbb{R}),$$ which is nonempty, open and convex, the Kähler cone of $X$, see e.g. [@To4 p.288]. We think of Kähler classes as "strictly positive" classes. If $L\to X$ is a holomorphic line bundle and $h$ is a smooth Hermitian metric on $L$, then the curvature form $R_h$ of $h$ is a closed real $(1,1)$-form on $X$ given locally by $$R_h=-\frac{i}{2\pi}\partial\overline{\partial}\log h,$$ and its cohomology class $$c_1(L)=[R_h]\in H^{1,1}(X,\mathbb{R}),$$ is easily seen to be independent of the choice of $h$, and is called the first Chern class of $L$. Its image in $H^2(X,\mathbb{R})$ lies in $N^1(X)$ (which is defined as the image of the change of scalars map $H^2(X,\mathbb{Z})\to H^2(X,\mathbb{R})$), and conversely every $(1,1)$-class in $N^1(X)$ is of the form $c_1(L)$ for some holomorphic line bundle $L$. A line bundle $L$ with $c_1(L)\in\mathcal{C}$ is called ample, and by the Kodaira embedding theorem sections of some positive tensor power of $L$ provide a projective embedding $X\subset\mathbb{P}^N$. The main object of interest in this survey are $(1,1)$-classes in the closure of the Kähler cone $\overline{\mathcal{C}}$, which are known as nef classes. Being limits of Kähler classes, nef classes can naively be thought of as "semipositive" classes. However, as we will see, this is not quite true literally. The main theme of this note will be to explore various notions of semipositivity for $(1,1)$-classes, as well as line bundles, and to draw relations between them, some proven and some conjectural. In particular, we will discuss the author's Conjectures [Conjecture 12](#c4){reference-type="ref" reference="c4"} and [Conjecture 17](#c0){reference-type="ref" reference="c0"}, as well as Conjecture [Conjecture 8](#c2){reference-type="ref" reference="c2"} of Filip and the author, and the new Question [Question 7](#bzp){reference-type="ref" reference="bzp"} and Conjecture [Conjecture 13](#c5){reference-type="ref" reference="c5"}. 
We will also mention a geometric application of these results to the study of families of Ricci-flat Kähler metrics, and in the last section we will consider the more general setting of (possibly non-Kähler) compact complex manifolds. ## Acknowledgments {#acknowledgments .unnumbered} The author thanks O. Das for comments on Question [Question 7](#bzp){reference-type="ref" reference="bzp"}, M. Sroka for discussions about Section [5](#nk){reference-type="ref" reference="nk"} and X. Yang for very useful comments. The author was partially supported by NSF grant DMS-2231783. It is a pleasure to dedicate this note to the memory of Professor Zhihua Chen, whose work on the Schwarz Lemma in complex geometry (including [@CCL; @CY]) had a big impact on the author's early work [@To0]. # Nef $(1,1)$-classes ## Nef and semipositive classes As in the Introduction, $X^n$ will be a compact Kähler manifold, and $\alpha$ a closed real $(1,1)$-form on $X$. Its cohomology class in $H^{1,1}(X,\mathbb{R})$ is denoted by $[\alpha]$. Recall that in the introduction we have defined the Kähler cone $\mathcal{C}\subset H^{1,1}(X,\mathbb{R})$, consisting of cohomology classes of Kähler forms, and defined the nef cone $\overline{\mathcal{C}}$ as the closure of $\mathcal{C}$. It is elementary to see (e.g. [@To4 Lemma 2.2]) that a $(1,1)$-class $[\alpha]$ is nef if and only if for every $\varepsilon>0$ there is $\varphi_\varepsilon\in C^\infty(X,\mathbb{R})$ such that $$\label{succa} \alpha+i \partial \overline{\partial}\varphi_\varepsilon\geqslant-\varepsilon\omega,$$ on $X$. It is immediate to see that this notion does not depend on the choice of reference Kähler form $\omega$. We will also say that a nef class $[\alpha]$ is nef and big if it satisfies $$[\alpha]^n\cdot X=\int_X\alpha^n>0.$$ We will say that a $(1,1)$-class $[\alpha]$ is semipositive if it contains a smooth semipositive representative, namely if there exists $\varphi\in C^\infty(X,\mathbb{R})$ such that $$\alpha+i \partial \overline{\partial}\varphi\geqslant 0,$$ on $X$. The convex cone of semipositive $(1,1)$-classes will be denoted by $\mathcal{S}\subset H^{1,1}(X,\mathbb{R})$, and we have the obvious inclusions $$\label{cones} \mathcal{C}\subset\mathcal{S}\subset\overline{\mathcal{C}}.$$ Intersecting these cones with $N^1(X)$ we obtain the corresponding notions for line bundles, so $L$ is called nef if $c_1(L)\in\overline{\mathcal{C}}$ and $L$ is called semipositive if $c_1(L)\in\mathcal{S}$. Explicitly, a holomorphic line bundle $L\to X$ is nef if for every $\varepsilon>0$ it admits a smooth Hermitian metric $h_\varepsilon$ whose curvature satisfies $$R_{h_\varepsilon}\geqslant-\varepsilon\omega,$$ on $X$, and $L$ is semipositive if it admits a smooth Hermitian metric $h$ with $$R_h\geqslant 0.$$ When $X$ is projective (or more generally Moishezon [@Pau]), a line bundle is nef if and only if $(L\cdot C)\geqslant 0$ for all curves $C\subset X$, and the latter is the usual definition in algebraic geometry. The notion of a semipositive line bundle is classical, see e.g. [@KO; @Fu]. Both Kobayashi-Ochiai [@KO P.519] and Fujita [@Fu Conjecture 4.14] asked whether on a projective manifold $X$ every nef line bundle is semipositive. This conjecture turned out to be false, as shown by Yau [@Y1 P.228]: **Theorem 1** (Yau [@Y1]). 
*There is a projective manifold $X$ with a nef line bundle $L$ which is not semipositive.* This counterexample is on a ruled surface $X=\mathbb{P}(E)$ where $E$ is a rank $2$ vector bundle over an elliptic curve $C$ which is a nontrivial extension of $\mathcal{O}_C$ by itself, and $L=\mathcal{O}_{\mathbb{P}(E)}(1)$ is the Serre line bundle. This same counterexample was later rediscovered by Demailly-Peternell-Schneider [@DPS], who also proved that $c_1(L)$ contains a unique closed positive current (see below for definitions). In particular, the inclusions in [\[cones\]](#cones){reference-type="eqref" reference="cones"} are all strict in general (this is clear for the first inclusion), and the cone $\mathcal{S}$ is neither open nor closed. In this example we have $\int_X c_1(L)^2=0$, i.e. $L$ is not big. Boucksom-Eyssidieux-Guedj-Zeriahi [@BEGZ Example 5.4] observed that by considering $\mathbb{P}(E\oplus A)$ where $A\to C$ is ample, with its Serre line bundle, one get examples of nef and big line bundles which are not semipositive in all dimensions $n\geqslant 3$. The question remained whether one could find an example of a nef and big but not semipositive line bundle on a surface, and in [@FT Problem 2.2] it was suggested that such an example should exist on a ruled surface constructed by Grauert [@Gr 8(d), p.365-366]. This expectation was realized by Koike [@Ko2], who showed that this is indeed the case: **Theorem 2** (Koike [@Ko2]). *The ruled surface $X$ constructed by Grauert admits a nef and big line bundle $L$ which is not semipositive.* # Semiample $(1,1)$-classes ## Semiample line bundles A holomorphic line bundle $L\to X$ over a compact Kähler manifold is called semiample if there is $m\geqslant 1$ such that $L^m$ is globally generated. In this case, global sections of $L^m$ define a holomorphic map $\Phi:X\to \mathbb{P}^N$ such that $\Phi^*\mathcal{O}(1)=L^m$, and so $\frac{1}{m}\Phi^*\omega_{\rm FS}$ is a smooth semipositive representative of $c_1(L)$. Thus, semiample line bundles are semipositive. The converse implication is false, for example by taking $X$ to be an elliptic curve and $L\in \mathrm{Pic}^0(X)$ a degree-zero non-torsion line bundle. Then $c_1(L)=0$ in $H^{1,1}(X,\mathbb{R})$, so $L$ is trivially semipositive, but $L$ is not semiample. A more interesting example is given in [@Laz Example 10.3.3], of a ruled surface with two nef and big line bundles $L,L'$ with $c_1(L)=c_1(L')$ and $L$ semiample but $L'$ not semiample (so in this case $L'$ is semipositive but not semiample). What these examples show is that being semiample is not a numerical notion, i.e. it does not only depend on $c_1(L)$. The following definition, which appears in [@LP], is then natural: **Definition 3**. A line bundle $L\to X$ over a compact Kähler manifold $X$ is called numerically semiample if there is a semiample line bundle $L'$ with $c_1(L)=c_1(L')$ in $H^{1,1}(X,\mathbb{R})$. From the exponential exact sequence, we see that if $b_1(X)=0$ then every numerically semiample line bundle is actually semiample. Indeed, since $c_1(L\otimes L'^*)=0$ in $H^2(X,\mathbb{R})$, it follows that $c_1(L^k\otimes (L'^*)^k)=0$ in $H^2(X,\mathbb{Z})$ for some $k\geqslant 1$. Since $X$ is Kähler, the assumption $b_1(X)=0$ implies $H^1(X,\mathcal{O})=0$, so the first Chern class map $\mathrm{Pic}(X)\to H^2(X,\mathbb{Z})$ is injective, and so $L\cong L'\otimes M$ where $M$ is a holomorphically torsion line bundle. Since $L'$ is semiample, so is $L'\otimes M$, proving our claim. 
We also clearly have that if $L$ is numerically semiample then $L$ is semipositive, and one can ask whether the converse holds. Unfortunately, we have **Theorem 4** (Koike [@Ko1]). *There is a projective manifold $X$ with a semipositive line bundle $L$ which is not numerically semiample.* The manifold $X$ and the line bundle $L$ come from a famous example of Zariski: $X$ is the blowup of $\mathbb{P}^2$ along $12$ general points on an elliptic curve $C\subset\mathbb{P}^2$, and the line bundle $L$ is the sum of the pullback of the hyperplane bundle $\mathcal{O}(1)$ on $\mathbb{P}^2$ and the strict transform of $C$. Zariski famously showed that $L$ is not semiample, and since $X$ is simply connected it follows that $L$ is also not numerically semiample, and Koike [@Ko1 Theorem 1.1] shows that $L$ is semipositive. Nevertheless, despite the fact that (numerical) semiampleness is not the same as semipositivity, showing that a line bundle is semiample remains the easiest way to show that it is semipositive. Thus, theorems that guarantee that a line bundle is semiample are very valuable, and the most prominent such result is the Kawamata-Shokurov base-point-free theorem in birational geometry (which we state here in its simplest form): **Theorem 5**. *Let $X$ be a projective manifold and $L\to X$ a nef line bundle such that $L^m\otimes K_X^{*}$ is nef and big for some $m\geqslant 1$. Then $L$ is semiample.* In particular, if $c_1(K_X)=0$ in $H^{1,1}(X,\mathbb{R})$ (i.e. $X$ is a projective Calabi-Yau manifold), then every nef and big line bundle is semiample, hence semipositive. ## Semiample $(1,1)$-classes Suppose $L\to X$ is a semiample line bundle. Then if $m\geqslant 1$ is sufficiently divisible, then global sections of $L^m$ define a holomorphic map $\Phi:X\to\mathbb{P}^N$ with image a normal projective variety $Y$, such that $L^m$ is the pullback of an ample line bundle on $Y$. Passing to a Stein factorization, we may also assume that $\Phi:X\to Y$ has connected fibers. Inspired by this, in [@FT] we posed the following definition: **Definition 6**. A $(1,1)$-class $[\alpha]$ on a compact Kähler manifold $X$ is called semiample if there is a surjective holomorphic map $f:X\to Y$ with connected fibers onto a normal compact Kähler analytic space such that $[\alpha]=f^*[\beta]$ for some Kähler class $[\beta]$ on $Y$. Clearly, if $L$ is numerically semiample then $c_1(L)$ is semiample, and we can ask about the converse: **Question 7**. *If $L\to X$ is a line bundle over a compact Kähler manifold such that $c_1(L)$ is semiample, does it follow that $L$ is numerically semiample?* At the moment, it is not clear to us whether the answer is affirmative, even if we assume that $X$ is projective. In general, if $c_1(L)$ is semiample then we have a surjective map with connected fibers $f:X\to Y$ with $c_1(L)=f^*[\beta]$, with $[\beta]$ a Kähler class on $Y$. We necessarily have $\dim Y\leqslant\dim X$. Let us make a few observations about Question [Question 7](#bzp){reference-type="ref" reference="bzp"}. If $\dim Y=0$, i.e. $Y$ is a point, then $c_1(L)=0$ in $H^{1,1}(X,\mathbb{R})$, so the answer to Question [Question 7](#bzp){reference-type="ref" reference="bzp"} is trivially affirmative in this case, by taking $L'=\mathcal{O}_X$. 
If $\dim Y=1$ then Question [Question 7](#bzp){reference-type="ref" reference="bzp"} again has an affirmative answer as follows: in this case $Y$ is a smooth compact Riemann surface, so there is an ample line bundle $A\to Y$ such that $[\beta]=\lambda c_1(A)$ for some $\lambda\in\mathbb{R}_{>0}$. Thus $$\label{useless} c_1(L)=\lambda c_1(f^*A).$$ The class $c_1(f^*A)$ lies in $H^2(X,\mathbb{Z})$, and if we denote by $[B]\in H_2(X,\mathbb{Z})$ its Poincaré dual, then intersecting [\[useless\]](#useless){reference-type="eqref" reference="useless"} with $[B]$ gives $$\lambda = c_1(L)\cdot [B]\in\mathbb{Z},$$ so we see that $\lambda\in\mathbb{N}_{>0}$. Thus, $$L':=f^*A^{\otimes\lambda},$$ is a semiample line bundle on $X$ with $c_1(L)=c_1(L')$, and so $L$ is numerically semiample. If $\dim Y=\dim X$ then $f$ is bimeromorphic and $L$ is nef and big, and so $X$ is Moishezon and Kähler, hence projective, and $Y$ is a Moishezon space. By a theorem of Nakamaye [@Nak], the augmented base locus $\mathbb{B}_+(L)$ equals the null locus $$\mathrm{Null}(L)=\bigcup_{\substack{V\subset X\\ (L^{\dim V}\cdot V)=0}}V.$$ From [@To Prop.2.5] it follows that $$\mathrm{Exc}(f)=\mathbb{B}_+(L)=\mathrm{Null}(L).$$ At this point we would expect that in fact $Y$ is projective, and that there is an ample line bundle $A\to Y$ such that $c_1(A)$ is proportional to $[\beta]$, which (by the argument above) would indeed show that $L$ is numerically semiample. ## Transcendental base-point-free conjecture Motivated by the base-point-free Theorem [Theorem 5](#bpf){reference-type="ref" reference="bpf"}, the author conjectured in [@To3 Question 5.5] a transcendental extension of this to $(1,1)$-classes on Calabi-Yau manifolds, which together with a later conjecture of Filip and the author [@FT Conjecture 1.2] reads: **Conjecture 8**. *Let $X$ be a compact Kähler manifold and $[\alpha]$ a nef $(1,1)$-class such that $\lambda[\alpha]-c_1(K_X)$ is nef and big for some $\lambda\in\mathbb{R}_{>0}$. Then $[\alpha]$ is semiample. In particular, if $c_1(K_X)=0$ in $H^{1,1}(X,\mathbb{R})$ (i.e. $X$ is a Calabi-Yau manifold), then every nef and big $(1,1)$-class is semiample, hence semipositive.* Conjecture [Conjecture 8](#c2){reference-type="ref" reference="c2"} is known for $n\leqslant 2$ thanks to work of Filip and the author [@FT], and the Calabi-Yau special case is also known when $n=3$ by work of Höring [@Ho]. The case when $n=3$ of Conjecture [Conjecture 8](#c2){reference-type="ref" reference="c2"} when $\lambda[\alpha]-c_1(K_X)$ is Kähler was proved by Zhang and the author [@TZ] when $[\alpha]$ is not big and by Höring [@Ho] when $[\alpha]$ is big. The assumption that $\lambda[\alpha]-c_1(K_X)$ is Kähler is actually quite natural, since it is equivalent to $[\alpha]$ being the limiting class of some solution of the Kähler-Ricci flow on $X$ with finite time singularity, see [@TZ]. More recently, Conjecture [Conjecture 8](#c2){reference-type="ref" reference="c2"} with $n=3$ was proved in general by Das-Hacon [@DH]. Of course, when $X$ is projective and $[\alpha]\in N^1(X)$ then Conjecture [Conjecture 8](#c2){reference-type="ref" reference="c2"} follows from Theorem [Theorem 5](#bpf){reference-type="ref" reference="bpf"}. 
In general, work of Collins and the author [@CT] shows that the non-Kähler locus $E_{nK}([\alpha])$ of a nef and big $(1,1)$-class $[\alpha]$ is a proper closed analytic subvariety of $X$, and when $[\alpha]$ is semiample then $E_{nK}([\alpha])$ should be equal to the exceptional locus of the map $f$ (this holds when $Y$ is smooth, by [@To Prop.2.5], and also in the case of semiample line bundles [@BCL Theorem A]). Thus, at least in the Calabi-Yau setting of Conjecture [Conjecture 8](#c2){reference-type="ref" reference="c2"}, one would hope to construct $f$ by "contracting" $E_{nK}([\alpha])$. In a nutshell, this is what is done in [@FT; @Ho] in low dimensions, and at this moment the best hope for extending this to all dimensions comes from recent progress in the Minimal Model Program for Kähler manifolds by Das, Hacon and coauthors, see e.g. [@DH; @DH2; @DHP]. ## Generalized abundance conjecture In the base-point-free Theorem [Theorem 5](#bpf){reference-type="ref" reference="bpf"}, the line bundle $L^m\otimes K_X^{*}$ is assumed to be nef and big. If the bigness assumption is dropped, Lazić-Peternell [@LP] recently proposed the following "generalized abundance conjecture", which again state in its simplest form: **Conjecture 9** (Lazić-Peternell [@LP]). *Let $X$ be a projective manifold with $K_X$ pseudoeffective, and $L\to X$ a nef line bundle such that $L^m\otimes K_X^{*}$ is nef for some $m\geqslant 1$. Then $L$ is numerically semiample.* This is known only when $n\leqslant 2$ by [@LP]. ## Nef $(1,1)$-classes on Calabi-Yau manifolds An important special case the generalized abundance Conjecture [Conjecture 9](#lp){reference-type="ref" reference="lp"}, which was in fact one of the main motivations for it, is the case when $X$ is Calabi-Yau, i.e. $X$ is a compact Kähler manifold with $c_1(K_X)=0$ in $H^{1,1}(X,\mathbb{R})$. In this case, the generalized abundance conjecture specializes to the following well-known conjecture: **Conjecture 10**. *If $X$ is a projective Calabi-Yau manifold and $L\to X$ is a nef line bundle, then $L$ is numerically semiample.* This conjecture is a classical result when $n=2$, but it remains open already for $n=3$, see e.g. [@LOP] for recent results and an overview. Going now from line bundles to general $(1,1)$-classes, recall that Conjecture [Conjecture 8](#c2){reference-type="ref" reference="c2"} predicts that every nef and big $(1,1)$-class on a Calabi-Yau manifold is semiample. One is then naturally led to wonder about the case of nef $(1,1)$-classes that are not big. Taking cue from Conjecture [Conjecture 10](#c3){reference-type="ref" reference="c3"}, one might expect that every nef $(1,1)$-class on a Calabi-Yau manifold must be semiample. This however fails completely: **Theorem 11** (Filip-T. [@FT; @FT2]). *There are $K3$ surfaces (both projective and non-projective) with a nef $(1,1)$-class which is not semipositive, hence not semiample.* These examples come from holomorphic dynamics, where certain $K3$ surfaces $X$ admit a holomorphic automorphism $T:X\to X$ whose iterates exhibit a chaotic behavior. In this case, Cantat [@Can] has constructed two nontrivial nef classes $[\alpha_{\pm}]$, which are not big, which are respectively expanded and contracted by the dynamics, and which contain only one closed positive current $\eta_{\pm}$ with Hölder continuous potentials. 
A dynamical rigidity theorem, proved by Cantat-Dupont [@CD] when $X$ is projective and by Filip and the author [@FT2] in general, shows that if $\eta_{\pm}$ is smooth (equivalently, if $[\alpha_{\pm}]$ is semipositive) then $X$ must be a Kummer $K3$ surface (and $T$ must come from an affine transformation of the corresponding torus). There are however plenty of examples of such $(X,T)$ which are not Kummer (including non-projective ones constructed by McMullen [@McM]), so on these examples the nef classes $[\alpha_{\pm}]$ are not semipositive. # Currents in nef $(1,1)$-classes ## Currents with bounded potentials Despite the failure of the direct generalization of Conjecture [Conjecture 10](#c3){reference-type="ref" reference="c3"} to $(1,1)$-classes, the author conjectured in [@To5 Conjecture 3.7] that the following weaker statement should be true: **Conjecture 12**. *If $X$ is a Calabi-Yau manifold and $[\alpha]$ is a nef $(1,1)$-class, then $[\alpha]$ contains a closed positive current with bounded potentials.* Here a closed positive current in the class $[\alpha]$ is a current of the form $\alpha+i \partial \overline{\partial}\varphi$ which is semipositive in the weak sense, where $\varphi$ is a quasi-psh function on $X$ (i.e. an usc $L^1$ function $\varphi:X\to \mathbb{R}\cup\{-\infty\}$ which in local charts is the sum of a smooth function and a plurisubharmonic function). If $\varphi$ is a bounded function then we say that the current has bounded potentials. Remarkably, Conjecture [Conjecture 12](#c4){reference-type="ref" reference="c4"} is open even when $n=2$. Even the weaker statement that $[\alpha]$ contains a closed positive current with vanishing Lelong numbers is open (when $[\alpha]$ is not big). In the opposite direction, one could also strengthen the conjecture by requiring that the current have continuous (and not just bounded) potentials. Inspired by the generalized abundance Conjecture [Conjecture 9](#lp){reference-type="ref" reference="lp"}, one can also pose the following more general conjecture, which can be compared with Conjecture [Conjecture 8](#c2){reference-type="ref" reference="c2"}: **Conjecture 13**. *Let $X$ be a compact Kähler manifold with $K_X$ pseudoeffective, and $[\alpha]$ a nef $(1,1)$-class such that $\lambda[\alpha]-c_1(K_X)$ is nef for some $\lambda\in\mathbb{R}_{>0}$. Then $[\alpha]$ contains a closed positive current with bounded potentials.* Let us now discuss in more detail Conjecture [Conjecture 12](#c4){reference-type="ref" reference="c4"} when $n=2$, so $X$ is a Calabi-Yau surface. By the Kodaira classification $X$ is finitely covered by a torus or a $K3$ surface. Since Conjecture [Conjecture 12](#c4){reference-type="ref" reference="c4"} is easily seen to hold on tori, and to behave well under finite covers, we can restrict to the case when $X$ is $K3$, and consider an arbitrary nef $(1,1)$-class $[\alpha]$. As mentioned earlier, if $[\alpha]$ is also big then $[\alpha]$ is semiample [@FT] and so Conjecture [Conjecture 12](#c4){reference-type="ref" reference="c4"} holds in this case. Similarly, if $[\alpha]$ is not big and $\mathbb{R}_{>0}[\alpha]\cap H^2(X,\mathbb{Q})\neq \emptyset$, then $[\alpha]$ is again semiample [@FT]. We are thus left with the case when $[\alpha]$ is nef, not big and "strongly irrational" in the sense that $\mathbb{R}_{>0}[\alpha]\cap H^2(X,\mathbb{Q})=\emptyset$. 
In this case it is expected that $[\alpha]$ contains a unique closed positive current, and this is indeed proved in many cases by Sibony-Soldatenkov-Verbitsky [@SSV], see also [@FT3 Theorem 4.3.1]. For such strongly irrational classes, the current state of Conjecture [Conjecture 13](#c5){reference-type="ref" reference="c5"} is given by: **Theorem 14** (Filip-T. [@FT3]). *If $X$ is a projective $K3$ surface with no $(-2)$-curves and Picard rank at least $3$, and $[\alpha]$ a nef $(1,1)$-class which is not big, strongly irrational, and $[\alpha]\in N^1(X)\otimes\mathbb{R}\subset H^{1,1}(X,\mathbb{R})$, then $[\alpha]$ contains a unique closed positive current, and this current has continuous potentials.* This result uses crucially the "Kawamata-Morrison Cone Conjecture" on projective Calabi-Yau manifolds, which is known for $K3$ surfaces [@St] but open in dimensions $3$ and higher. New ideas will be needed in order to settle Conjecture [Conjecture 12](#c4){reference-type="ref" reference="c4"} completely when $n=2$. ## Families of Monge-Ampère equations There is a direct link between Conjectures [Conjecture 8](#c2){reference-type="ref" reference="c2"} and [Conjecture 12](#c4){reference-type="ref" reference="c4"} and regularity of solutions of certain complex Monge-Ampère equations. More precisely, let $X^n$ be a compact Kähler manifold with a nef $(1,1)$-class $[\alpha]$. For every $\varepsilon>0$ the class $[\alpha]+\varepsilon[\omega]$ is Kähler, so by Yau's Theorem [@Y] we can solve the family of complex Monge-Ampère equations $$\omega_\varepsilon:=\alpha+\varepsilon\omega+i \partial \overline{\partial}\varphi_\varepsilon>0, \quad \omega_\varepsilon^n=c_\varepsilon e^F\omega^n,$$ where $F\in C^\infty(X,\mathbb{R})$ is given, $c_\varepsilon\in\mathbb{R}_{>0}$ is given by $$c_\varepsilon=\frac{\int_X(\alpha+\varepsilon\omega)^n}{\int_X e^F\omega^n},$$ and $\varphi_\varepsilon$ is normalized by $\sup_X\varphi_\varepsilon=0$. For example, if $X$ is Calabi-Yau, then with a suitable choice of $F$ the metrics $\omega_\varepsilon$ are Ricci-flat Kähler, see the survey [@To5] for a detailed discussion of this case. Going back to our general setting, we have: **Theorem 15**. *In this setting, the nef class $[\alpha]$ contains a closed positive current with bounded potentials if and only if there is $C>0$ such that for all $\varepsilon>0$ small we have $$\label{esti} \sup_X|\varphi_\varepsilon|\leqslant C.$$* *Proof.* First assume that [\[esti\]](#esti){reference-type="eqref" reference="esti"} holds. Then from weak compactness of closed positive currents with bounded cohomology classes we see that there is a sequence $\varepsilon_i\to 0$ such that $\varphi_{\varepsilon_i}$ converge in $L^1(X)$ to a quasi-psh function $\varphi_0$ which satisfies $\alpha+i \partial \overline{\partial}\varphi_0\geqslant 0$ weakly, i.e. $\alpha+i \partial \overline{\partial}\varphi_0$ is a closed positive current in the class $[\alpha]$. From the uniform estimate [\[esti\]](#esti){reference-type="eqref" reference="esti"} and standard properties of quasi-psh functions, it follows that $\varphi_0$ is bounded, as desired. Now assume conversely that there is a bounded quasi-psh function $\varphi_0$ with $\alpha+i \partial \overline{\partial}\varphi_0\geqslant 0$, which we can again normalize by $\sup_X\varphi_0=0$. 
Let $$u_0(x)=\sup\{\eta(x)\ |\ \alpha+i \partial \overline{\partial}\eta\geqslant 0, \eta\leqslant 0\},$$ be the standard envelope, then we have $\alpha+i \partial \overline{\partial}u_0\geqslant 0$ weakly, and $\varphi_0\leqslant u_0\leqslant 0$, so $u_0$ is bounded as well. Given $\varepsilon>0$ we consider similarly the envelope $$u_\varepsilon(x)=\sup\{\eta(x)\ |\ \alpha+\varepsilon\omega+i \partial \overline{\partial}\eta\geqslant 0, \eta\leqslant 0\},$$ which are easily seen to decrease pointwise to $u_0$ as $\varepsilon\to 0$. Since $u_0$ is bounded, we thus see that there is $C>0$ such that for all $\varepsilon>0$ we have $$\label{kz1} -C\leqslant u_0\leqslant u_\varepsilon\leqslant 0.$$ Then a key result of [@BEGZ] (when $[\alpha]$ is big) and [@FGS] (when $[\alpha]$ is not big, with a recent new proof in [@GPTW]) shows that there is $C>0$ such that for all $\varepsilon>0$ small we have $$\sup_X|\varphi_\varepsilon-u_\varepsilon|\leqslant C,$$ and combining this with [\[kz1\]](#kz1){reference-type="eqref" reference="kz1"} proves [\[esti\]](#esti){reference-type="eqref" reference="esti"}. ◻ Thus, Conjecture [Conjecture 12](#c4){reference-type="ref" reference="c4"} can be restated in terms of a uniform $L^\infty$ estimate for the potentials of Ricci-flat Kähler metrics $\omega_\varepsilon$ whose cohomology class converges to $[\alpha]$ as $\varepsilon\to 0$. From this viewpoint, when $[\alpha]$ is semiample the Ricci-flat metrics $\omega_\varepsilon$ behave very nicely: **Theorem 16** (T. [@To2], Collins-T. [@CT], Hein-T. [@HT]). *Let $X$ be a Calabi-Yau manifold, $[\alpha]$ a semiample class and $\omega_\varepsilon=\alpha+\varepsilon\omega+i \partial \overline{\partial}\varphi_\varepsilon$ the Ricci-flat Kähler metric cohomologous to $[\alpha]+\varepsilon[\omega]$. Then there is a closed analytic subvariety $S\subset X$ such that for every $k\in\mathbb{N}$ and every $K\Subset X\backslash S$ there is $C>0$ such that for all $\varepsilon>0$ small we have $$\|\varphi_\varepsilon\|_{C^k(K,\omega)}\leqslant C.$$* Recall that since $[\alpha]$ is semiample, it is of the form $[\alpha]=f^*[\beta]$ for some map $f:X\to Y$ as before and $[\beta]$ Kähler on $Y$. When $[\alpha]$ is semiample and big, Theorem [Theorem 16](#kk){reference-type="ref" reference="kk"} was shown in [@To2; @CT], and $S$ in this case can be taken to be $\mathrm{Exc}(f)=\mathrm{Null}([\alpha])$. The much harder case when $[\alpha]$ is semiample but not big was recently settled in [@HT], and $S$ here can be taken to be the singular fibers of $f$. # The non-Kähler case {#nk} In this last section we expand our setting and consider a general compact complex manifold $X^n$, equipped with a Hermitian metric $\omega$, without any Kählerity assumption. The space of $(1,1)$-classes $H^{1,1}(X,\mathbb{R})$ is defined exactly as in [\[quot\]](#quot){reference-type="eqref" reference="quot"}. When $X$ is non-Kähler (i.e. it does not admit any Kähler metric) then its Kähler cone $\mathcal{C}\subset H^{1,1}(X,\mathbb{R})$ is empty by definition. 
However, following [@DPS], one can still meaningfully define a cone $\mathcal{N}\subset H^{1,1}(X,\mathbb{R})$ of nef $(1,1)$-classes (not equal to the closure $\mathcal{C}$ in general), by imitating the characterization in [\[succa\]](#succa){reference-type="eqref" reference="succa"}, namely given a closed real $(1,1)$-form $\alpha$ on $X$ we will say that $[\alpha]\in\mathcal{N}$ whenever given any $\varepsilon>0$ there is $\varphi_\varepsilon\in C^\infty(X,\mathbb{R})$ such that [\[succa\]](#succa){reference-type="eqref" reference="succa"} holds on $X$. A very basic question that we raised in 2016 (see [@To (3.2)]) is the following: **Conjecture 17**. *Let $[\alpha]\in\mathcal{N}$, and $V\subset X$ be a closed irreducible analytic subvariety of dimension $k>0$. Then we have $$\int_V\alpha^k\geqslant 0.$$* It is not hard to see that Conjecture [Conjecture 17](#c0){reference-type="ref" reference="c0"} is equivalent to the following: **Conjecture 18**. *Let $[\alpha]\in\mathcal{N}$. Then we have $$\int_X\alpha^n\geqslant 0.$$* *Proof of the equivalence of Conjectures [Conjecture 17](#c0){reference-type="ref" reference="c0"} and [Conjecture 18](#c1){reference-type="ref" reference="c1"}.* It is clear that Conjecture [Conjecture 17](#c0){reference-type="ref" reference="c0"} implies Conjecture [Conjecture 18](#c1){reference-type="ref" reference="c1"}. Conversely, assume Conjecture [Conjecture 18](#c1){reference-type="ref" reference="c1"} and let $V^k\subset X$ be a closed irreducible subvariety. By Hironaka's resolution of singularities of analytic spaces, we can find a modification $\mu:\tilde{X}\to X$ with $\tilde{X}$ a compact complex manifold, such that $\mu$ is an isomorphism over the generic point of $V$ and the strict transform $\tilde{V}$ of $V$ via $\mu$ is smooth. Then $\tilde{V}$ is also $k$-dimensional and the class $[\mu^*\alpha]$ is nef on $\tilde{X}$: indeed, given a Hermitian metric $\tilde{\omega}$ on $\tilde{X}$, there is a constant $C>0$ such that $$\mu^*\omega\leqslant C\tilde{\omega},$$ on $\tilde{X}$. Given $\varepsilon>0$ let $\varphi_\varepsilon\in C^\infty(X,\mathbb{R})$ be a smooth function such that $$\alpha+i \partial \overline{\partial}\varphi_\varepsilon\geqslant-\varepsilon\omega,$$ on $X$. Pulling back via $\mu$ we get $$\mu^*\alpha+i \partial \overline{\partial}(\varphi_\varepsilon\circ\mu)\geqslant-\varepsilon\mu^*\omega\geqslant-C\varepsilon\tilde{\omega},$$ and so $[\mu^*\alpha]$ is nef. Since $\mu|_{\tilde{V}}$ is generically an isomorphism with its image, it follows that $$\int_V\alpha^k=\int_{\tilde{V}}\mu^*\alpha^k.$$ Now the restricted class $[\mu^*\alpha|_{\tilde{V}}]\in H^{1,1}(\tilde{V},\mathbb{R})$ is immediately seen to be nef, and so by Conjecture [Conjecture 18](#c1){reference-type="ref" reference="c1"} we have $$\int_{\tilde{V}}\mu^*\alpha^k\geqslant 0,$$ as desired. ◻ We observe also that Conjecture [Conjecture 18](#c1){reference-type="ref" reference="c1"} is an immediate consequence of a stronger conjecture proposed by Kołodziej and the author in [@KT Conjecture 1.2], which is the conjectural formula $$\int_X\alpha^n=\inf_{u\in C^\infty(X,\mathbb{R})}\int_{X(\alpha+i \partial \overline{\partial}u,0)}(\alpha+i \partial \overline{\partial}u)^n,$$ where $X(\alpha+i \partial \overline{\partial}u,0)\subset X$ is the subset of points $x\in X$ where $(\alpha+i \partial \overline{\partial}u)(x)\geqslant 0$, since on this set we clearly have $(\alpha+i \partial \overline{\partial}u)^n\geqslant 0$. 
We have the following easy partial results towards Conjecture [Conjecture 18](#c1){reference-type="ref" reference="c1"}: **Proposition 19**. *Conjecture [Conjecture 18](#c1){reference-type="ref" reference="c1"} holds in all of the following cases:* - *If $X$ admits a Hermitian metric $\omega$ with $\partial\overline{\partial}\omega=0=\partial\overline{\partial}(\omega^2)$. In particular, if $X$ is Kähler.* - *If $X$ is in Fujiki's class $\mathcal{C}$. In particular, if $[\alpha]$ is big, in the sense that it contains a Kähler current.* - *If $\dim X\leqslant 2$.* - *If $[\alpha]$ contains a closed positive current $\alpha+i \partial \overline{\partial}\varphi\geqslant 0$ with $\varphi$ bounded.* - *If $X$ admits a Hermitian metric $\omega$ with $$v_+(\omega):=\sup\left\{\int_X (\omega+i \partial \overline{\partial}\varphi)^n\ \bigg|\ \varphi\in {\rm PSH}(X,\omega)\cap L^\infty(X)\right\}<\infty.$$* The quantity $v_+(\omega)$ in (e) was recently introduced by Guedj-Lu [@GL]. *Proof.* (a) Let $\varphi_\varepsilon$ be smooth functions such that $\alpha+i \partial \overline{\partial}\varphi_\varepsilon\geqslant-\varepsilon\omega$ on $X$. A well-known direct calculation shows that we have $\partial\overline{\partial}(\omega^k)=0$ for all $1\leqslant k\leqslant n-1$. We can then integrate by parts as usual and obtain that $$0\leqslant\int_X(\alpha+\varepsilon\omega+i \partial \overline{\partial}\varphi_\varepsilon)^n=\int_X(\alpha+\varepsilon\omega)^n,$$ and letting $\varepsilon\to 0$ concludes the proof. \(b\) $X$ in class $\mathcal{C}$ means that we can find a modification $\mu:\tilde{X}\to X$ with $\tilde{X}$ a compact Kähler manifold. As shown earlier, the pullback class $[\mu^*\alpha]$ is nef and $$\int_X\alpha^n=\int_{\tilde{X}}\mu^*\alpha^n.$$ Since $\tilde{X}$ is Kähler, we conclude using part (a). It is also known [@DP] that if $[\alpha]$ is big then $X$ is in class $\mathcal{C}$. \(c\) The case when $\dim X=1$ is elementary. If $\dim X=2$ then by a classical theorem of Gauduchon [@Ga], $X$ admits a Hermitian metric $\omega$ with $\partial\overline{\partial}\omega=0$, so part (a) applies. \(d\) By Demailly's regularization theorem [@Dem2] applied to the bounded function $\varphi$, we can find smooth functions $\varphi_\varepsilon$ on $X$ which decrease pointwise to $\varphi$ as $\varepsilon\to 0$, and such that $$\alpha+i \partial \overline{\partial}\varphi_\varepsilon\geqslant-\varepsilon\omega.$$ Since $\varphi$ is bounded, it follows that $$\|\varphi_\varepsilon\|_{L^\infty(X)}\leqslant C,$$ for all $\varepsilon>0$. Then a standard Chern-Levine-Nirenberg type argument in [@KT Claim, p.997] shows that $$\label{useless2} 0\leqslant\lim_{\varepsilon\to 0}\int_X(\alpha+\varepsilon\omega+i \partial \overline{\partial}\varphi_\varepsilon)^n=\int_X\alpha^n,$$ as desired. \(e\) For $\varepsilon>0$ choose smooth functions $\varphi_\varepsilon$ as in (a), then the proof of [@GL Theorem 4.12] shows that the assumption $v_+(\omega)<\infty$ implies that [\[useless2\]](#useless2){reference-type="eqref" reference="useless2"} again holds. ◻ Despite these partial results, Conjecture [Conjecture 18](#c1){reference-type="ref" reference="c1"} remains open even when $n=3$. The special case when $[\alpha]=c_1(L)$ for a holomorphic line bundle $L$ might be more approachable, for example using Demailly's holomorphic Morse inequalities [@Dem], but so far it remains open as well. 
In fact, despite the fact that failure of Conjecture [Conjecture 18](#c1){reference-type="ref" reference="c1"} would be rather strange indeed, the author suspects that perhaps counterexamples do exist, and it might be desirable to do some explicit computations in the hope of finding one. 99 S. Boucksom, S. Cacciola, A.F. Lopez, *Augmented base loci and restricted volumes on normal varieties*, Math. Z. **278** (2014), no. 3-4, 979--985. S. Boucksom, P. Eyssidieux, V. Guedj, A. Zeriahi, *Monge-Ampère equations in big cohomology classes*, Acta Math. **205** (2010), no. 2, 199--262. S. Cantat, *Dynamique des automorphismes des surfaces $K3$*, Acta Math. **187** (2001), no. 1, 1--57. S. Cantat, C. Dupont, *Automorphisms of surfaces: Kummer rigidity and measure of maximal entropy*, J. Eur. Math. Soc. (JEMS) **22** (2020), no. 4, 1289--1351. Z. Chen, S.-Y. Cheng, Q. Lu, *On the Schwarz Lemma for complete Kähler manifolds*, Sci. Sinica **22** (1979), no. 11, 1238--1247. Z. Chen, H. Yang, *Estimation of the upper bound on the Levi form of the distance function on Hermitian manifolds and some of its applications*, Acta Math. Sinica **27** (1984), no. 5, 631--643. T.C. Collins, V. Tosatti, *Kähler currents and null loci*, Invent. Math. **202** (2015), no.3, 1167--1198. O. Das, C. Hacon, *The log minimal model program for Kähler $3$-folds*, preprint, arXiv:2009.05924. O. Das, C. Hacon, *On the minimal model program for Kähler $3$-folds*, preprint, arXiv:2306.11708. O. Das, C. Hacon, M. Păun, *On the $4$-dimensional minimal model program for Kähler varieties*, preprint, arXiv:2205.12205. J.-P. Demailly, *Champs magnétiques et inégalités de Morse pour la $d''$-cohomologie*, Ann. Inst. Fourier (Grenoble) **35** (1985), no. 4, 185--229. J.-P. Demailly, *Regularization of closed positive currents and intersection theory*, J. Algebraic Geom. **1** (1992), no. 3, 361--409. J.-P. Demailly, M. Păun, *Numerical characterization of the Kähler cone of a compact Kähler manifold*, Ann. of Math., **159** (2004), no. 3, 1247--1274. J.-P. Demailly, T. Peternell, M. Schneider, *Compact complex manifolds with numerically effective tangent bundles*, J. Algebraic Geom. **3** (1994), no. 2, 295--345. S. Filip, V. Tosatti, *Smooth and rough positive currents*, Ann. Inst. Fourier (Grenoble) **68** (2018), no.7, 2981--2999. S. Filip, V. Tosatti, *Kummer rigidity for $K3$ surface automorphisms via Ricci-flat metrics*, Amer. J. Math. **143** (2021), no. 5, 1431--1462. S. Filip, V. Tosatti, *Canonical currents and heights for $K3$ surfaces*, Camb. J. Math. **11** (2023), no.3, 699--794. X. Fu, B. Guo, J. Song, *Geometric estimates for complex Monge-Ampère equations*, J. Reine Angew. Math. **765** (2020), 69--99. T. Fujita, *Semipositive line bundles*, J. Fac. Sci. Univ. Tokyo Sect. IA Math. **30** (1983), no. 2, 353--378. P. Gauduchon, *Le théorème de l'excentricité nulle*, C. R. Acad. Sci. Paris Sér. A-B **285** (1977), no. 5, A387--A390. H. Grauert, *Über Modifikationen und exzeptionelle analytische Mengen*, Math. Ann. **146** (1962), 331--368. V. Guedj, H.C. Lu, *Quasi-plurisubharmonic envelopes 2: bounds on Monge-Ampère volumes*, Alg. Geom. 9 (2022), no. 6, 688--713. B. Guo, D.H. Phong, F. Tong, C. Wang, *On $L^\infty$ estimates for Monge-Ampère and Hessian equations on nef classes*, to appear in Anal. PDE. H.-J. Hein, V. Tosatti, *Smooth asymptotics for collapsing Calabi-Yau metrics*, preprint, arXiv:2102.03978. A. Höring, *Adjoint $(1,1)$-classes on threefolds*, Izv. Math. **85** (2021), no. 4, 823--830. A. Höring, T. 
Peternell, *Minimal models for Kähler threefolds*, Invent. Math. **203** (2016), no. 1, 217--264. S. Kobayashi, T. Ochiai, *On complex manifolds with positive tangent bundles*, J. Math. Soc. Japan **22** (1970), 499--525. T. Koike, *On minimal singular metrics of certain class of line bundles whose section ring is not finitely generated*, Ann. Inst. Fourier (Grenoble) **65** (2015), no. 5, 1953--1967. T. Koike, *Linearization of transition functions of a semi-positive line bundle along a certain submanifold*, Ann. Inst. Fourier (Grenoble) **71** (2021), no. 5, 2237--2271. S. Kołodziej, V. Tosatti, *Morse-type integrals on non-Kähler manifolds*, Pure Appl. Math. Q. **17** (2021), no.3, 991--1004. R. Lazarsfeld, *Positivity in algebraic geometry II*, Springer 2004. V. Lazić, K. Oguiso, T. Peternell, *Nef line bundles on Calabi-Yau threefolds, I*, Int. Math. Res. Not. IMRN 2020, no. 19, 6070--6119. V. Lazić, T. Peternell, *On generalised abundance, I*, Publ. Res. Inst. Math. Sci. **56** (2020), no. 2, 353--389. C.T. McMullen, *Dynamics on $K3$ surfaces: Salem numbers and Siegel disks*, J. Reine Angew. Math. **545** (2002), 201--233. M. Nakamaye, *Stable base loci of linear series*, Math. Ann. **318** (2000), no. 4, 837--847. M. Păun, *Sur l'effectivité numérique des images inverses de fibrés en droites*, Math. Ann. **310** (1998), no. 3, 411--421. N. Sibony, A. Soldatenkov, M. Verbitsky, *Rigid currents on compact hyperkähler manifolds*, preprint, arXiv:2303.11362. H. Sterk, *Finiteness results for algebraic $K3$ surfaces*, Math. Z. **189** (1985), 507--513. V. Tosatti, *A general Schwarz Lemma for almost-Hermitian manifolds*, Comm. Anal. Geom. **15** (2007), no.5, 1063--1086. V. Tosatti, *Limits of Calabi-Yau metrics when the Kähler class degenerates*, J. Eur. Math. Soc. (JEMS) **11** (2009), no.4, 755-776. V. Tosatti, *Calabi-Yau manifolds and their degenerations*, Ann. N.Y. Acad. Sci. **1260** (2012), 8--13. V. Tosatti, *Nakamaye's Theorem on complex manifolds*, in *Algebraic geometry: Salt Lake City 2015*, 633--655, Proc. Sympos. Pure Math., 97.1, Amer. Math. Soc., Providence, RI, 2018. V. Tosatti, *KAWA lecture notes on the Kähler-Ricci flow*, Ann. Fac. Sci. Toulouse Math. **27** (2018), no. 2, 285--376. V. Tosatti, *Collapsing Calabi-Yau manifolds*, Surveys in Differential Geometry **23** (2018), 305--337, International Press, 2020. V. Tosatti, Y. Zhang, *Finite time collapsing of the Kähler-Ricci flow on threefolds*, Ann. Sc. Norm. Super. Pisa Cl. Sci. **18** (2018), no.1, 105--118. S.-T. Yau, *On the curvature of compact Hermitian manifolds*, Invent. Math. **25** (1974), 213--239. S.-T. Yau, *On the Ricci curvature of a compact Kähler manifold and the complex Monge-Ampère equation, I*, Comm. Pure Appl. Math. **31** (1978), no.3, 339--411.
arxiv_math
{ "id": "2309.00580", "title": "Semipositive line bundles and (1,1)-classes", "authors": "Valentino Tosatti", "categories": "math.CV math.AG", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | In this paper, we introduce an improved version of the fifth-order weighted essentially non-oscillatory (WENO) shock-capturing scheme by incorporating deep learning techniques. The established WENO algorithm is improved by training a compact neural network to adjust the smoothness indicators within the WENO scheme. This modification enhances the accuracy of the numerical results, particularly near abrupt shocks. Unlike previous deep learning-based methods, no additional post-processing steps are necessary for maintaining consistency. We demonstrate the superiority of our new approach using several examples from the literature for the two-dimensional Euler equations of gas dynamics. Through intensive study of these test problems, which involve various shocks and rarefaction waves, the new technique is shown to outperform traditional fifth-order WENO schemes, especially in cases where the numerical solutions exhibit excessive diffusion or overshoot around shocks. address: - $^{\dagger}$ Chair of Applied and Computational Mathematics, Bergische Universität Wuppertal, Gaußstrasse 20, Wuppertal, 42119, Germany - $^{\ddagger}$ Division of Applied Mathematics, Brown University, 182 George Street, Providence, RI, 02912, USA author: - Tatiana Kossaczká$^{{\dagger},*}$, Ameya D. Jagtap$^{\ddagger}$, Matthias Ehrhardt$^{\dagger}$ bibliography: - cas-refs.bib title: "Deep smoothness WENO scheme for two-dimensional hyperbolic conservation laws: A deep learning approach for learning smoothness indicators" --- Weighted essentially non-oscillatory method ,Hyperbolic conservation laws ,Smoothness indicators ,Deep Learning ,Neural Networks 65M06 ,68T05 ,76M20 # Introduction {#sec:S1} It has long been a challenge to adequately simulate complex flow problems using numerical methods. Recently, this has been further improved using machine learning techniques. As an example, in [@RAISSI2019686; @jagtap2022physics; @jagtap2022deep], the concept of physics-informed neural networks (PINNs) for the solution of complex fluid flow problems was proposed, which seamlessly combines the data and the mathematical models; see [@RAISSI2019686; @jagtap2020conservative; @jagtap2020extended; @shukla2021parallel; @PENWARDEN2023112464; @de2022error] for more details. Similarly, a new method using a U-Net-like convolutional neural network (CNN) along with established finite difference discretization techniques was proposed to learn approximate solutions for the NSE without the need for parameterization [@grimm2023learning]. Also, recently, a framework called *local transfer function analysis* (LTA) for optimizing numerical methods for convection problems using a graph neural network (GNN) was proposed [@drozda2023learning]. The work [@mao2020physics] investigated the use of PINNs to approximate the hyperbolic Euler equations of gas dynamics. The Euler equations and initial and boundary conditions are used to create a loss function that solves scenarios with smooth solutions and those with discontinuities. Next, in [@jagtap2020conservative], a novel approach, called *conservative PINNs*, for solving nonlinear conservation laws, such as the compressible Euler equations, was presented. In the recent paper [@van2023accelerating], another novel approach has been proposed where machine learning improves finite-difference-based approximations of PDEs while maintaining high-order convergence through node refinement. This research area is also the context of our work. 
Recently, improvements to the standard finite difference methods (FDMs) have been developed [@kossaczka2023]. By adding a small convolutional neural network, the numerical solutions of standard PDEs are improved, while the convergence and consistency properties of the original methods are preserved. We aim to further improve modern FDMs, such as WENO schemes, for nonlinear hyperbolic systems using machine learning. For this type of PDE, it is known that discontinuities (shocks) can occur despite initial smoothness, which makes specialized numerical methods mandatory. Therefore, the focus of our attention is on the behavior of numerical solutions in the vicinity of shocks. To better frame our current work, let us very briefly sketch the historical development of WENO schemes. Crandall and Majda [@Crandall] introduced *monotone schemes* in 1980 that maintain stability and satisfy entropy conditions, but are at most first-order accurate due to Godunov's theorem. Next, *shock-capturing schemes* were developed to accurately handle shocks and gradients without excessive diffusion [@Harten]. The *essentially non-oscillatory (ENO) schemes* [@Harten87] stood out, achieving high accuracy in smooth regions and effective shock resolution; their weighted variants (WENO) rely on smoothness indicators, e.g. [@jiang1996; @shu1998]. Extensions such as the *Hermite WENO (HWENO)* schemes [@qiu2004hermite; @qiu2005hermite] and *hybrid methods* [@pirozzoli2002conservative; @hill2004hybrid] were introduced for higher accuracy and efficiency. A gas-kinetic theory-based KWENO scheme was proposed in [@jagtap2020kinetic] for hyperbolic conservation laws. Moreover, further modifications of the WENO scheme have been developed, e.g. [@Henrick2005; @Borges2008; @Castro2011; @Ha2013; @zhu2016new; @rathan2020l1]. Neural networks have been used both to approximate solutions of PDEs directly and to improve numerical methods for PDEs. While the data-driven approach is promising for improving modern numerical methods, it is always important to maintain a balance between new data-driven components and the established mathematical structure of the basic numerical scheme, which is grounded in physical principles; for hyperbolic problems, for example, the resulting hybrid scheme should in any case remain conservative. We have maintained this balance, and next, we will briefly describe our approach. Recent approaches to the numerical solution of PDEs include neural network-based WENO methods that modify coefficients and smoothness indicators of established state-of-the-art numerical methods to further improve these schemes, especially near shocks. However, some methods achieve only first-order accuracy [@Stevens2020]. In this paper, we present a new approach called \"WENO-DS\", a deep learning-based extension of the family of WENO methods, and extend it to solving a general two-dimensional system of hyperbolic conservation laws $$\label{eq:HCL} U_t + F(U)_x + G(U)_y = 0.$$ To this end, we modify the smoothness indicators of the WENO schemes using a small neural network, maintaining high accuracy in smooth regions and reducing diffusion and overshoots (oscillatory behavior) near shocks. The resulting machine learning-enhanced WENO scheme combines accuracy and improved qualitative behavior for both smooth and discontinuous solutions. The paper is organized as follows. In Section [2](#sec:S2){reference-type="ref" reference="sec:S2"}, we introduce two underlying WENO schemes and explain the basic ideas, such as the smoothness indicators, on a 1D conservation law. 
In Section [3](#sec:S3){reference-type="ref" reference="sec:S3"}, we present our method for improving these schemes using a deep learning approach to modify the smoothness indicators accordingly. This novel idea does not destroy the basic structure of the WENO schemes, such as the conservative property, and qualitatively improves the solution near shocks with only small additional computational costs. In this section, we also elaborate on implementation aspects, such as adaptive activation functions, the design of the small network, and the training procedure. In Section [4](#sec:S4){reference-type="ref" reference="sec:S4"}, we briefly describe our application example of the 2D Euler equations of gas dynamics. Subsequently, in Section [5](#sec:S5){reference-type="ref" reference="sec:S5"} we present in detail the numerical results with a wide range of test configurations. Finally, in Section [6](#sec:S6){reference-type="ref" reference="sec:S6"} we conclude our work and give a brief overview of future research directions. # The WENO scheme {#sec:S2} We first introduce the standard fifth-order WENO scheme for solving one-dimensional hyperbolic conservation laws $$\label{eq:HCL_1D} u_t + f(u)_x = 0,$$ as developed by Jiang and Shu [@jiang1996; @shu1998]. For this purpose, we consider the uniform grid defined by the points $x_i = x_0+i\Delta x$ with cell boundaries $x_{i+\frac{1}{2}} = x_i+\frac{\Delta x}{2}$, $i = 0,\ldots,I$. The semi-discrete formulation of [\[eq:HCL_1D\]](#eq:HCL_1D){reference-type="eqref" reference="eq:HCL_1D"} can be written as $$\label{eq:hypsemi} \frac{\text{d}u_i(t)}{\text{d}t} = -\frac{1}{\Delta x}\bigl(\hat{f}_{i+\frac{1}{2}}-\hat{f}_{i-\frac{1}{2}}\bigr),$$ where $u_i(t)$ approximates $u(x_i,t)$ pointwise and $\hat{f}$ is a numerical approximation of the flux function $f$, i.e. $\hat{f}_{i+\frac{1}{2}}$ and $\hat{f}_{i-\frac{1}{2}}$ are numerical flux approximations at the cell boundaries $x_{i+\frac{1}{2}}$ and $x_{i-\frac{1}{2}}$, respectively. The numerical flux $\hat{f}_{i+\frac{1}{2}}$ is chosen such that for all sufficiently smooth $u$ $$\frac{1}{\Delta x} \Bigl( \hat{f}_{i+\frac{1}{2}} - \hat{f}_{i-\frac{1}{2}} \Bigr) = \bigl(f(u)\bigr)_{x}\big\vert_{x=x_i} + O(\Delta x^5),$$ with fifth-order of accuracy. Defining a function $h$ implicitly by $$\label{eq:int} f\bigl(u(x)\bigr) = \frac{1}{\Delta x} \int_{x-\frac{\Delta x}{2}}^{x+\frac{\Delta x}{2}} h(\xi)\,d\xi,$$ we obtain $$\label{eq:approx} f'\bigl(u(x_i)\bigr) = \frac{1}{\Delta x}\bigl(h_{i+\frac{1}{2}} - h_{i-\frac{1}{2}}\bigr), \qquad h_{i\pm\frac{1}{2}} = h(x_{i\pm\frac{1}{2}}),$$ where $h_{i\pm\frac{1}{2}}$ approximates the numerical flux $\hat{f}_{\pm\frac{1}{2}}$ with the fifth-order of accuracy in the sense that $$\hat{f}_{i\pm\frac{1}{2}} = h_{i\pm\frac{1}{2}} + O(\Delta x^5).$$ This procedure results in a *conservative* numerical scheme. To ensure numerical stability, the *flux splitting method* is applied. 
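As a minimal illustration of the conservative semi-discrete form [\[eq:hypsemi\]](#eq:hypsemi){reference-type="eqref" reference="eq:hypsemi"} and of the flux splitting introduced next, we include the following Python sketch. It is not part of the original formulation: the array layout, the function names and the small driver at the bottom are our own choices, and the splitting parameter $\alpha$ is supplied by the caller as in Remark 1 below.

```python
import numpy as np

def lax_friedrichs_split(f_of_u, u, alpha):
    """Flux splitting f(u) = f^+(u) + f^-(u) with
    f^{+/-}(u) = (f(u) +/- alpha*u) / 2, where alpha = max_u |f'(u)|
    is supplied by the caller (cf. Remark 1)."""
    return 0.5 * (f_of_u + alpha * u), 0.5 * (f_of_u - alpha * u)

def conservative_rhs(fhat, dx):
    """Semi-discrete right-hand side du_i/dt = -(fhat_{i+1/2} - fhat_{i-1/2}) / dx.
    Here fhat[i] stores the interface flux fhat_{i-1/2} for i = 0,...,N, so the
    returned array has one entry per grid point i = 0,...,N-1."""
    return -(fhat[1:] - fhat[:-1]) / dx

if __name__ == "__main__":
    # Telescoping-sum check of conservation: the total discrete "mass"
    # changes only through the two boundary fluxes.
    fhat = np.sin(np.linspace(0.0, 1.0, 11))
    dx = 0.1
    rhs = conservative_rhs(fhat, dx)
    assert np.isclose(dx * rhs.sum(), -(fhat[-1] - fhat[0]))
```

The precise definition of the split fluxes $f^\pm$ and of the interface values $\hat{f}^\pm_{i\pm\frac{1}{2}}$ entering such a conservative update is given next.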
We therefore write the flux in the form $$\label{eq:fluxsplit} f(u) = f^+(u) + f^-(u),\quad \text{where}\quad \frac{\text{d}f^+(u)}{\text{d}u}\ge0\quad \text{and}\quad \frac{\text{d}f^-(u)}{\text{d}u}\le0.$$ The numerical flux $\hat{f}_{i\pm\frac{1}{2}}$ is then given by $\hat{f}_{i\pm\frac{1}{2}} = \hat{f}_{i\pm\frac{1}{2}}^+ + \hat{f}_{i\pm\frac{1}{2}}^-$ and we get the final approximation $$\label{eq:hypsemi_splitting} \frac{\text{d}u_i}{\text{d}t}= -\frac{1}{\Delta x}\biggl[\Bigl(\hat{f}_{i+\frac{1}{2}}^+-\hat{f}_{i-\frac{1}{2}}^+\Bigr)+\Bigl(\hat{f}_{i+\frac{1}{2}}^--\hat{f}_{i-\frac{1}{2}}^-\Bigr)\biggr].$$ **Remark 1**. *In our implementation, we use the Lax-Friedrichs flux splitting $$\label{eq:LF_flux_splitting} f^\pm(u) = \frac{1}{2}\bigl(f(u)\pm\alpha u\bigr),$$ with $\alpha = \max\limits_u|f'(u)|$.* ## The fifth order WENO scheme {#sec:S2.1} First, we consider the construction of $\hat{f}^+_{i+\frac{1}{2}}$ and drop the superscript $^+$ for simplicity. For this approximation a 5-point stencil $$\label{eq:stencil_plus} S(i)=\{x_{i-2},\dots,x_{i+2} \}$$ is used. The main idea of the fifth-order WENO scheme is to divide this stencil [\[eq:stencil_plus\]](#eq:stencil_plus){reference-type="eqref" reference="eq:stencil_plus"} into three candidate substencils, which are given by $$\label{eq:substencils} S^m(i)=\{x_{i+m-2}, x_{i+m-1}, x_{i+m} \}, \quad m=0,1,2.$$ The numerical fluxes $\hat{f}^m(x_{i+\frac{1}{2}}) = \hat{f}^m_{i+\frac{1}{2}} = h_{i+\frac{1}{2}}+ O(\Delta x^3)$ are then calculated for each of the small substencils [\[eq:substencils\]](#eq:substencils){reference-type="eqref" reference="eq:substencils"}. Let $\hat{f}^m(x)$ be the polynomial approximation of $h(x)$ on each of the substencils [\[eq:substencils\]](#eq:substencils){reference-type="eqref" reference="eq:substencils"}. By evaluation of these polynomials at $x = x_{i+\frac{1}{2}}$ the following explicit formulas can be obtained [@shu1998] $$\label{eq:sub_fluxes} \begin{split} \hat{f}^0_{i + \frac{1}{2}} &= \frac{2f(u_{i-2})-7f(u_{i-1})+11f(u_{i})}{6}, \\ \hat{f}^1_{i + \frac{1}{2}} &= \frac{-f(u_{i-1})+5f(u_{i})+2f(u_{i+1})}{6}, \\ \hat{f}^2_{i + \frac{1}{2}} &= \frac{2f(u_{i})+5f(u_{i+1})-f(u_{i+2})}{6}, \end{split}$$ where the value of a function $f$ at $u(x_i)$ is indicated by $f(u_i)=f(u(x_i))$. Then, we obtain a final approximation on a big stencil [\[eq:stencil_plus\]](#eq:stencil_plus){reference-type="eqref" reference="eq:stencil_plus"} as a linear combination of the fluxes [\[eq:sub_fluxes\]](#eq:sub_fluxes){reference-type="eqref" reference="eq:sub_fluxes"} $$\hat{f}_{i + \frac{1}{2}} = \sum_{m=0}^2 d_m \hat{f}^m_{i+\frac{1}{2}},$$ where the coefficients $d_m$ are the linear weights, which would form the upstream fifth-order central scheme for the 5-point stencil and their values are $$d_0=\frac{1}{10}, \quad d_1=\frac{6}{10}, \quad d_2=\frac{3}{10}.$$ As described in [@jiang1996; @shu1998], the linear weights can be replaced by *nonlinear weights* $\omega_m^{JS}$, $m=0,1,2$, such that $$\label{eq:omega_flux} \hat{f}_{i + \frac{1}{2}} = \sum_{m=0}^2 \omega_m^{JS}\hat{f}^m_{i + \frac{1}{2}},$$ with $$\label{eq:omegas} \omega_m^{JS} = \frac{\alpha_m^{JS}}{\sum_{i=0}^2 \alpha_i^{JS}}, \quad \text{ where } \quad \alpha_m^{JS} = \frac{d_m}{ (\epsilon + \beta_m)^2 }.$$ The parameter $\beta_m$ is crucial for deciding which substencils to include in the final flux approximation. 
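To make the reconstruction above concrete, the following Python sketch evaluates the candidate fluxes [\[eq:sub_fluxes\]](#eq:sub_fluxes){reference-type="eqref" reference="eq:sub_fluxes"} and the nonlinear weights [\[eq:omegas\]](#eq:omegas){reference-type="eqref" reference="eq:omegas"} at a single interface. It is only a sketch under the stated formulas: the function names are ours, and the smoothness indicators are passed in as a given array `beta`.

```python
import numpy as np

D = np.array([0.1, 0.6, 0.3])  # ideal linear weights d_0, d_1, d_2

def candidate_fluxes(fm2, fm1, f0, fp1, fp2):
    """Third-order candidate fluxes fhat^m_{i+1/2} of eq. (sub_fluxes),
    built from the point values f(u_{i-2}), ..., f(u_{i+2})."""
    fh0 = (2.0 * fm2 - 7.0 * fm1 + 11.0 * f0) / 6.0
    fh1 = (-fm1 + 5.0 * f0 + 2.0 * fp1) / 6.0
    fh2 = (2.0 * f0 + 5.0 * fp1 - fp2) / 6.0
    return np.array([fh0, fh1, fh2])

def weno_js_weights(beta, eps=1.0e-6):
    """Nonlinear Jiang-Shu weights omega^JS_m of eq. (omegas)."""
    alpha = D / (eps + beta) ** 2
    return alpha / alpha.sum()

def weno5_flux(stencil_fluxes, beta):
    """WENO flux fhat_{i+1/2} = sum_m omega^JS_m * fhat^m_{i+1/2}."""
    return float(np.dot(weno_js_weights(beta), candidate_fluxes(*stencil_fluxes)))
```

In this sketch the array `beta` holds the values of the parameter $\beta_m$ for the three substencils; its role and explicit form are discussed in the following subsection.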
It is referred to as *smoothness indicator* and its main role is to reduce or remove the contribution of the substencil $S^m$, which contains the discontinuity. In this case, the corresponding nonlinear weight $\omega_m^{JS}$ becomes smaller. For smooth parts of the solution, the indicators are designed to come closer to zero, so that the nonlinear weights $\omega_m^{JS}$ come closer to the ideal weights $d_m$. We will further analyze the smoothness indicators in the next section. The parameter $\epsilon$ is used to prevent the denominator from becoming zero. In all our experiments, we set the value of $\epsilon$ to $10^{-6}$. ## Smoothness indicators {#S:2.2} In [@jiang1996], the smoothness indicators have been developed as: $$\label{eq:betas} \beta_m = \sum_{q=1}^2 \Delta x^{2q-1} \int_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}} \Bigl(\frac{\text{d}^{q}\hat{f}^m(x)}{\text{d}x^{q}}\Bigr)^2\,dx,$$ with $\hat{f}^m(x)$ being the polynomial approximation in each of three substencils. Their explicit form corresponding to the flux approximation $\hat{f}_{i+\frac{1}{2}}$ can be obtained as $$\label{eq:betas_explicit} \begin{split} \beta_0 &= \frac{13}{12} \bigl(f(u_{i-2})-2f(u_{i-1})+f(u_{i})\bigr)^2 +\frac{1}{4}\bigl(f(u_{i-2})-4f(u_{i-1})+3f(u_{i})\bigr)^2 , \\ \beta_1 &= \frac{13}{12} \bigl(f(u_{i-1})-2f(u_{i})+f(u_{i+1})\bigr)^2 +\frac{1}{4}\bigl(-f(u_{i-1})+f(u_{i+1})\bigr)^2, \\ \beta_2 &= \frac{13}{12} \bigl(f(u_{i})-2f(u_{i+1})+f(u_{i+2})\bigr)^2 +\frac{1}{4}\bigl(3f(u_{i})-4f(u_{i+1})+f(u_{i+2})\bigr)^2. \end{split}$$ **Remark 2**. *As mentioned before, we only considered the construction of the numerical flux $\hat{f}^+_{i+\frac{1}{2}}$. For the numerical approximation of the flux $\hat{f}^+_{i-\frac{1}{2}}$ we can use formulas [\[eq:sub_fluxes\]](#eq:sub_fluxes){reference-type="eqref" reference="eq:sub_fluxes"}--[\[eq:omegas\]](#eq:omegas){reference-type="eqref" reference="eq:omegas"} and [\[eq:betas_explicit\]](#eq:betas_explicit){reference-type="eqref" reference="eq:betas_explicit"} and shift each index by $-1$.* The negative part of the flux splitting can be obtained using symmetry (see, e.g., [@wang2007]), and we briefly summarize the formulas for $\hat{f}_{i+\frac{1}{2}}^-$ and omit the superscript $^-$: $$\begin{split} \hat{f}^0_{i + \frac{1}{2}} &= \frac{11f(u_{i+1})-7f(u_{i+2})+2f(u_{i+3})}{6}, \\ \hat{f}^1_{i + \frac{1}{2}} &= \frac{2f(u_{i})+5f(u_{i+1})-f(u_{i+2})}{6}, \\ \hat{f}^2_{i + \frac{1}{2}} &= \frac{-f(u_{i-1})+5f(u_{i})+2f(u_{i+1})}{6}, \end{split}$$ where the weights $\omega_m^{JS}$ are computed as in [\[eq:omegas\]](#eq:omegas){reference-type="eqref" reference="eq:omegas"} using the smoothness indicators given by $$\label{eq:betas_explicit_minus} \begin{split} \beta_0 &= \frac{13}{12} \big(f(u_{i+1})-2f(u_{i+2})+f(u_{i+3})\big)^2 +\frac{1}{4}\big(3f(u_{i+1})-4f(u_{i+2})+f(u_{i+3})\big)^2 , \\ \beta_1 &= \frac{13}{12} \big(f(u_{i})-2f(u_{i+1})+f(u_{i+2})\big)^2 +\frac{1}{4}\big(f(u_{i})-f(u_{i+2})\big)^2, \\ \beta_2 &= \frac{13}{12} \big(f(u_{i-1})-2f(u_{i})+f(u_{i+1})\big)^2 +\frac{1}{4}\big(f(u_{i-1})-4f(u_{i})+3f(u_{i+1})\big)^2. \end{split}$$ In the next section, where the deep learning algorithm will be introduced, this will help to understand how the improved smoothness indicators will be constructed. ## The WENO-Z scheme {#sec:S2.3} Borges et al. 
[@Borges2008] pointed out that the classical WENO-JS scheme described in previous sections loses its fifth-order accuracy at critical points where $f'(u)=0$, and proposed new nonlinear weights defined by $$\label{eq:WENOZ} \omega_m^Z = \frac{\alpha^Z_m}{\sum\limits_{i=0}^2\alpha^Z_i}, \quad \text{ where } \quad \alpha^Z_m = d_m \biggl[ 1+ \Bigl(\frac{\tau_5}{\beta_m + \epsilon} \Bigr)^2 \biggr]$$ and $$\label{eq:tau} \tau_5 = |\beta_0 - \beta_2|$$ is a new global smoothness indicator. # Deep smoothness WENO scheme {#sec:S3} In [@kossaczka2021; @kossaczka2022; @kossaczka2022deep] the new WENO-DS scheme based on the improvement of the smoothness indicators was developed. The smoothness indicators $\beta_m$, $m=0,1,2$, are multiplied by the perturbations $\delta_m$, which are the outputs of the respective neural network algorithm. The new smoothness indicators are denoted by $\beta_m^{DS}$: $$\label{eq:betas_DS} \beta_m^{DS} = \beta_m (\delta_m + C), \qquad m=0,1,2,$$ where $C$ is a constant that ensures the consistency and accuracy of the new method. In all our experiments we set $C=0.1$. For more details and corresponding theoretical proofs of accuracy and consistency, we refer to [@kossaczka2021; @kossaczka2022]. Note that the formulation of the new smoothness indicators $\beta_m^{DS}$ as a multiplication of the original ones with the perturbations $\delta_m$ is very favorable. In a case where the original smoothness indicator converges to zero, the improved $\beta_m^{DS}$ behaves in the same way. On the other hand, if a substencil $S^m$ contains a discontinuity, the perturbation $\delta_m$ can improve the original smoothness indicator so that the final scheme exhibits better numerical approximations. Moreover, the theoretical convergence properties are not lost, see [@kossaczka2021; @kossaczka2022]. In [@kossaczka2021] the algorithm was successfully applied to one-dimensional benchmark examples such as the Burgers' equation, the Buckley-Leverett equation, the one-dimensional Euler system, and the two-dimensional Burgers' equation. In [@kossaczka2022], the algorithm was extended to nonlinear degenerate parabolic equations and further applied to computational finance problems in [@kossaczka2022deep]. The theoretical order of convergence was demonstrated on smooth solutions, and large numerical improvements were obtained when comparing the WENO-DS method with the original WENO methods. ## Preservation of the conservative property for the WENO-DS scheme {#sec:S3.1} However, the multipliers introduced for the smoothness indicators in [@kossaczka2021] were cell-based (not interface-based). This means that although the high numerical accuracy was theoretically demonstrated and numerically confirmed, the guarantee of the conservative property was lost. As stated in [@kossaczka2022], the conservative property can be easily recovered by defining the multipliers such that $$\label{eq:betas_DS_cons} \begin{aligned} \beta_{m,i+\frac{1}{2}}^{DS} &= \beta_{m,i+\frac{1}{2}} (\delta_{m,i+\frac{1}{2}} + C), \\ \beta_{m,i-\frac{1}{2}}^{DS} &= \beta_{m,i-\frac{1}{2}} (\delta_{m,i-\frac{1}{2}} + C), \end{aligned}$$ with $$\label{eq:deltas} \delta_{0,i+\frac{3}{2}} = \delta_{1,i+\frac{1}{2}} = \delta_{2,i-\frac{1}{2}}, \quad i=0, \ldots, N.$$ This makes the multipliers depend on the location of the substencils corresponding to $\beta_{m,i+\frac{1}{2}}$ and $\beta_{m,i-\frac{1}{2}}$. A minimal sketch of how these quantities fit together is given below.
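To make the interplay of the preceding formulas concrete, the following minimal NumPy sketch assembles the flux approximation $\hat{f}^+_{i+\frac{1}{2}}$ from five stencil values. The function and variable names are ours; whether the JS- or Z-type weight formula is combined with the modified indicators, and how the multipliers $\delta_m$ are produced by the neural network, follows the cited works and is only indicated schematically here.

```python
import numpy as np

EPS = 1e-6   # parameter epsilon from the nonlinear weights
C = 0.1      # constant C from the WENO-DS indicators


def weno5_flux_plus(f, delta=None, variant="Z"):
    """Reconstruct the numerical flux f^+_{i+1/2} from the five stencil
    values f = (f(u_{i-2}), ..., f(u_{i+2})).

    If `delta` contains the three multipliers (delta_0, delta_1, delta_2)
    assigned to this interface (respecting the sharing rule eq:deltas),
    the WENO-DS indicators beta_m * (delta_m + C) are used."""
    fm2, fm1, f0, fp1, fp2 = f

    # candidate fluxes on the three substencils, eq:sub_fluxes
    fhat = np.array([(2*fm2 - 7*fm1 + 11*f0) / 6.0,
                     (-fm1 + 5*f0 + 2*fp1) / 6.0,
                     (2*f0 + 5*fp1 - fp2) / 6.0])

    # Jiang-Shu smoothness indicators, eq:betas_explicit
    beta = np.array([
        13/12*(fm2 - 2*fm1 + f0)**2 + 1/4*(fm2 - 4*fm1 + 3*f0)**2,
        13/12*(fm1 - 2*f0 + fp1)**2 + 1/4*(fp1 - fm1)**2,
        13/12*(f0 - 2*fp1 + fp2)**2 + 1/4*(3*f0 - 4*fp1 + fp2)**2])

    if delta is not None:                       # WENO-DS modification
        beta = beta * (np.asarray(delta) + C)   # beta_m^DS = beta_m (delta_m + C)

    d = np.array([0.1, 0.6, 0.3])               # linear weights d_m
    if variant == "JS":                         # eq:omegas
        alpha = d / (EPS + beta)**2
    else:                                       # WENO-Z weights, eq:WENOZ with eq:tau
        tau5 = abs(beta[0] - beta[2])
        alpha = d * (1.0 + (tau5 / (beta + EPS))**2)
    omega = alpha / alpha.sum()

    return float(omega @ fhat)
```

The negative flux contribution $\hat{f}^-_{i+\frac{1}{2}}$ is then obtained analogously from the mirrored formulas given above.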
The constraint [\[eq:deltas\]](#eq:deltas){reference-type="eqref" reference="eq:deltas"} ensures that the values $\hat{f}^\pm_{i-\frac{1}{2}}$ can be obtained from the values $\hat{f}^\pm_{i+\frac{1}{2}}$ by simple index shifting and that the conservative property is preserved. ## Structure of neural network {#sec:S3.2} To ensure the consistency of the numerical method, a convolutional neural network (CNN) is used. This is crucial to ensure the spatial invariance of the resulting numerical method. This means that the multipliers $\delta_m$ are independent of their position in the spatial grid and only depend on the solution itself. Let us formulate the CNN as a function $H(\cdot):\mathbb{R}^{2k+1} \to \mathbb{R}$, where $2k+1$ is the size of the receptive field of the CNN: $$\label{eq:CNN_func} H\bigl(\bar{f}(\bar{u}_{i})\bigr) = {\rm CNN} \bigl(\bar{f}(\bar{u}_{i})\bigr).$$ As an input we define a vector $$\label{eq:cnn_input} \begin{split} \bar{f}(\bar{u}_i) &= \bigl(f(u(x_{i-k})), f(u(x_{i-k+1})), \ldots, f(u(x_{i+k}))\bigr), \\ \bar{u}_i &= \bar{u}(\bar{x}_i)=\bigl(u(x_{i-k}), u(x_{i-k+1}), \ldots, u(x_{i+k})\bigr). \end{split}$$ Figure [1](#fig:Deltas_stencils){reference-type="ref" reference="fig:Deltas_stencils"} shows the values from which the multipliers $\delta_m$, $m=0,1,2$ are constructed, assuming $2k+1=3$ for the receptive field. In this case, the values used to compute the original smoothness indicators are also used to compute the multipliers $\delta_m$, $m=0,1,2$ (see equations [\[eq:betas_explicit\]](#eq:betas_explicit){reference-type="eqref" reference="eq:betas_explicit"} and [\[eq:betas_explicit_minus\]](#eq:betas_explicit_minus){reference-type="eqref" reference="eq:betas_explicit_minus"}). If we enlarge the receptive field of the CNN, we also enlarge the stencil for computing the multipliers $\delta_m$, $m=0,1,2$. In this way, the smoothness indicators are basically computed from a wider stencil, which can lead to better numerical approximations. In this case, we just need to supply additional boundary values before feeding the input [\[eq:cnn_input\]](#eq:cnn_input){reference-type="eqref" reference="eq:cnn_input"} to the CNN. ![The substencils used for computation of multipliers $\delta_m$, $m=0,1,2$ corresponding to the flux approximations $\hat{f}^\pm_{i\pm\frac{1}{2}}$, assuming that the receptive field of the CNN satisfies $2k+1=3$.](Deltas_stencils.JPG){#fig:Deltas_stencils width="1\\linewidth"} As we are improving the existing numerical scheme and adding a neural network part to it, it is important that the new numerical scheme remains computationally efficient. The neural network part added to the numerical scheme could be computationally expensive. However, we propose to use only a small CNN, which would not have such high computational costs. The detailed structure of the CNN can be found in Section [4.1](#sec:S4.1){reference-type="ref" reference="sec:S4.1"}. It was pointed out in [@kossaczka2021] that better numerical results were obtained using two different neural networks for the positive and negative parts of the flux. We experimentally found that we can avoid using more neural networks and use only one CNN. On the other hand, we can achieve better results by using a superior training procedure and adaptive activation functions. More details will be discussed in the next subsections. For convergence and consistency of the numerical scheme, all hidden layers of the CNN must be differentiable functions, and the activation function in the last CNN layer must be bounded from below [@kossaczka2022]. A minimal sketch of such a network is given below.
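To fix ideas, a minimal PyTorch-style sketch of such a small network could look as follows. The layer sizes and the hidden activation are illustrative placeholders rather than the exact architecture (which is given in Section [4.1](#sec:S4.1){reference-type="ref" reference="sec:S4.1"}); only the structural requirements discussed above are reflected: a per-point output obtained by convolutions (spatial invariance), a total receptive field of $2k+1=3$, and a final activation that is bounded from below.

```python
import torch
import torch.nn as nn


class SmoothnessCNN(nn.Module):
    """Sketch of a small CNN producing one multiplier value per grid point.

    The per-interface multipliers (delta_0, delta_1, delta_2) can then be
    read off as three consecutive entries of the output, which is one way
    to realize the sharing rule eq:deltas."""

    def __init__(self, in_channels=1, hidden_channels=8):
        super().__init__()
        self.net = nn.Sequential(
            # kernel size 3 followed by kernel size 1: total receptive field 3
            nn.Conv1d(in_channels, hidden_channels, kernel_size=3, padding=1),
            nn.ELU(),
            nn.Conv1d(hidden_channels, 1, kernel_size=1),
            nn.Softplus(),   # differentiable and bounded from below
        )

    def forward(self, f_bar):
        # f_bar: tensor of shape (batch, in_channels, number of grid points)
        return self.net(f_bar)
```

The adaptive activation functions with trainable slope parameters, described in the next subsection, would replace the fixed ELU and softplus used in this sketch.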
Experimentally, we found that the use of a *softplus* activation function in the last CNN layer is more effective and gives better numerical results compared to e.g. the *sigmoid* used in [@kossaczka2021]. ### Adaptive activation functions We can make the training more effective and get better numerical results by using *adaptive activation functions* [@jagtap2020adaptive; @jagtap2020locally; @jagtap2022deepR]. The activation function is one of the most important hyperparameters in neural network architectures. The purpose of this hyperparameter is to introduce nonlinearity into the prediction. There are many activation functions proposed in the literature; see the comprehensive survey [@jagtap2023important] for more details. However, there is no basic rule for the choice of the activation function. This is the motivation to use an adaptive activation function that can adapt to the problem at hand. In this work, we used global adaptive activation functions [@jagtap2020adaptive], where the additional slope parameter is introduced in the activation function as follows. For the ELU activation function, we train the additional parameter $\alpha$: $$\text{ELU}(x) = \begin{cases} x, \qquad &\text{if} \quad x>0, \\ \alpha (\exp(x)-1), \qquad &\text{if} \quad x \leq 0 \end{cases}$$ and we denote the adaptive ELU as *aELU*. For the softplus activation function, we train the additional parameter $\beta$: $$\text{Softplus}(x) = \frac{1}{\beta} \log(1 + \exp(\beta x))$$ and we denote the adaptive softplus as *aSoftplus*. ## Training procedure {#sec:S3.3} In this section, we describe how the training procedure for WENO-DS is carried out. We have experimented with different training procedures and found that following the procedure described in [@kossaczka2022] gives the best numerical results. First, we have to create the data set. For this purpose we compute the reference solutions using the WENO-Z method on a fine grid of $I\times J = 400\times400$ space points up to the given final time $T$, where $t_n$ represents the time points, $n=0,\ldots,N$. More details on the construction of the reference solutions are given in Sections [5.1](#sec:S5.1){reference-type="ref" reference="sec:S5.1"}, [5.2](#sec:S5.2){reference-type="ref" reference="sec:S5.2"}, [5.3](#sec:S5.3){reference-type="ref" reference="sec:S5.3"}. During training, we compute the numerical solutions on a grid of $I\times J = 100\times100$ space points. At the beginning of the training we randomly select a problem from the data set and perform a single time step to get to the time $t_{n+1}$, using the CNN to predict the multipliers $\delta_m$. However, by performing a single time step on a coarse grid, we do not match the time step size of the fine precomputed solutions, as the adaptive time step size is used. So we simply take the closest reference solution from the data set, use it as an initial condition, and do another small time step to get a reference solution at time $t_{n+1}$. Then we compute the loss and its gradient with respect to the weights of the CNN. We then decide whether to proceed to the next time step of the current problem or to choose another problem from our data set and run a time step of that problem. The probability of choosing a new problem has to be determined at the beginning of the training session and we set it to $\varphi = 0.5$ in our experiments. We set the maximum number of opened problems to $150$. A schematic sketch of this selection procedure is given below.
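The following schematic sketch illustrates the selection logic just described; the data structures and the helpers `open_problem` and `advance_one_step` (one coarse time step with the current CNN, comparison with the closest reference solution, loss and optimizer update) are placeholders of our choosing, not the actual implementation.

```python
import random

PHI = 0.5        # probability of opening a new problem
MAX_OPEN = 150   # maximum number of simultaneously opened problems


def training_loop(data_set, n_training_steps, open_problem, advance_one_step):
    """Schematic problem-selection loop for the WENO-DS training."""
    opened = []
    for step in range(n_training_steps):
        open_new = (not opened) or (random.random() < PHI and len(opened) < MAX_OPEN)
        if open_new:
            # start a randomly selected problem at its initial time
            problem = open_problem(random.choice(data_set))
            opened.append(problem)
        else:
            # otherwise advance a problem chosen uniformly among the opened ones
            problem = random.choice(opened)
        # one time step of the chosen problem: CNN prediction, loss against
        # the closest reference solution, gradient step on the CNN weights
        advance_one_step(problem)
```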
We remember all opened problems, and if no new problem is opened (with probability $1-\varphi$), or if the maximum number of opened problems is reached, we execute the next time step of a problem uniformly chosen from the set of already opened problems. Keeping the solution from the previous time step as initial data, we repeat the same procedure until we reach the maximum number of training steps. This training procedure gives us a great opportunity to mix solutions with different initial data and at different time points, which makes the training more effective. ### Optimizer and the optimal learning rate To train the network, we used a gradient-based optimizer, namely a variant of stochastic gradient descent, the Adam optimizer [@kingma2014adam]. The learning rate is another important hyperparameter to choose. A larger learning rate may miss the local minima, and a smaller learning rate may require a large number of iterations to reach convergence. Therefore, it is important to find a near-optimal learning rate. In this work, we use a learning rate of $0.001$ to update the weights of the CNN. This near-optimal learning rate was found through experiments. ### Loss function In this work, the loss function consists of the data mismatch term between the solution predicted by the network and the reference solution. For the loss function, we use the mean squared error loss as follows: $$\label{eq:loss_L2} LOSS_{\rm MSE}(u) = \frac{1}{I} \sum_{i=0}^I (u_i - u_i^{\rm ref})^2,$$ where $u_i$ is a numerical approximation of $u(x_i)$ and $u_i^{\rm ref}$ is the corresponding reference solution. The $L_2$ norm-based loss function has the advantage of stronger gradients with respect to $u_i$, resulting in faster training. However, in our examples, we use the $L_1$ norm as the main error measure, which is more typical for measuring errors for hyperbolic conservation laws. Thus, for validation during training, we use the metric $$\label{eq:loss_L1} L_1(u) = \frac{1}{I} \sum_{i=0}^I |u_i - u_i^{\rm ref}|.$$ # Application of our approach to the 2D Euler equations {#sec:S4} We consider the two-dimensional Euler equations of gas dynamics in the form [\[eq:HCL\]](#eq:HCL){reference-type="eqref" reference="eq:HCL"} with $$\label{eq:Euler_HCL} U=\begin{pmatrix} \rho \\ \rho u \\\rho v \\ E \end{pmatrix} \qquad F(U)=\begin{pmatrix} \rho u \\ \rho u^2 + p \\\rho u v \\ u (E + p) \end{pmatrix} \qquad G(U)=\begin{pmatrix} \rho v \\\rho u v \\ \rho v^2 + p \\ v (E + p) \end{pmatrix} \qquad$$ for polytropic gas. Here, the variable $\rho$ is the density, $u$ the $x$-velocity component, $v$ the $y$-velocity component, $E$ the total energy and $p$ the pressure. Further, it holds $$p = (\gamma - 1)\Bigl[E-\frac{\rho}{2}(u^2+v^2)\Bigr],$$ where $\gamma$ denotes the ratio of the specific heats and we will use $\gamma \in (1.1, 1.67)$ in this paper. We consider the spatial domain $[0,1]\times[0,1]$ and solve the Riemann problem with the following initial condition $$\label{eq:Riemann_IC} (\rho,u, v,p) = \begin{cases} (\rho_1,u_1, v_1,p_1) \quad x > 0.5 \quad \text{and} \quad y > 0.5,\\ (\rho_2,u_2, v_2,p_2) \quad x < 0.5 \quad \text{and} \quad y > 0.5,\\ (\rho_3,u_3, v_3,p_3) \quad x < 0.5 \quad \text{and} \quad y < 0.5,\\ (\rho_4,u_4, v_4,p_4) \quad x > 0.5 \quad \text{and} \quad y < 0.5.\\ \end{cases}$$ The combination of four elementary planar waves is used to define the classification of the Riemann problem; for concreteness, a short sketch of how such a quadrant initial condition is assembled in the conserved variables is given below.
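The following minimal sketch (with names of our choosing) builds such a quadrant initial condition in the conserved variables $U=(\rho,\rho u,\rho v,E)$ on a uniform grid of the unit square, using the pressure relation above to recover the total energy.

```python
import numpy as np


def riemann_quadrant_ic(states, I=100, J=100, gamma=1.4):
    """Build U = (rho, rho*u, rho*v, E) on [0,1]x[0,1] from four quadrant
    states; states[q] = (rho, u, v, p) with the quadrants numbered as in the
    initial condition above (quadrant 1: x > 0.5 and y > 0.5, and so on)."""
    x = (np.arange(I) + 0.5) / I                 # cell centres
    y = (np.arange(J) + 0.5) / J
    X, Y = np.meshgrid(x, y, indexing="ij")

    quadrant = np.where(X > 0.5,
                        np.where(Y > 0.5, 1, 4),
                        np.where(Y > 0.5, 2, 3))

    U = np.zeros((4, I, J))
    for q, (rho, u, v, p) in states.items():
        mask = quadrant == q
        E = p / (gamma - 1.0) + 0.5 * rho * (u**2 + v**2)  # inverts the pressure relation
        U[0][mask], U[1][mask], U[2][mask], U[3][mask] = rho, rho * u, rho * v, E
    return U

# usage: states = {1: (rho_1, u_1, v_1, p_1), ..., 4: (rho_4, u_4, v_4, p_4)}
```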
A detailed study of these configurations has been done in [@schulz1993classification; @schulz1993; @chang1995; @chang1999; @kurganov2002; @zhang1990] and there are 19 different possible configurations for polytropic gas. These are defined by three types of elementary waves, namely a backward rarefaction wave $\overleftarrow{R}$, a backward shock wave $\overleftarrow{S}$, a forward rarefaction wave $\overrightarrow{R}$, a forward shock wave $\overrightarrow{S}$ and a contact discontinuity $J^{\pm}$, where the superscript $\pm$ refers to negative and positive contacts. To obtain the WENO approximations in the two-dimensional example, we apply the procedure described in Section [2](#sec:S2){reference-type="ref" reference="sec:S2"} using the dimension-by-dimension principle. Thus we obtain the flux approximations for [\[eq:HCL\]](#eq:HCL){reference-type="eqref" reference="eq:HCL"} as $$\begin{split} \frac{1}{\Delta x} \bigl(\hat{f}_{i+\frac{1}{2}} - \hat{f}_{i-\frac{1}{2}} \bigr) &= \bigl(F(U)\bigr)_{x}\big\vert_{(x_i,y_j)} + O\bigl(\Delta x^5\bigr), \\ \frac{1}{\Delta y} \bigl( \hat{g}_{i+\frac{1}{2}} - \hat{g}_{i-\frac{1}{2}} \bigr) &= \bigl(G(U)\bigr)_{y}\big\vert_{(x_i,y_j)} + O\bigl(\Delta y^5\bigr), \end{split}$$ with the uniform grid defined by the nodes $(x_i,y_j)$, $\Delta x = x_{i+1}-x_i$, $\Delta y = y_{j+1}-y_j$, $i = 0,\ldots,I$, $j = 0,\ldots,J$. In our examples, we proceed with the implementation of the Euler system using characteristic decomposition. This means that we first project the solution and the flux onto the characteristic fields using left eigenvectors. Then we apply the Lax-Friedrichs flux splitting [\[eq:LF_flux_splitting\]](#eq:LF_flux_splitting){reference-type="eqref" reference="eq:LF_flux_splitting"} for each component of the characteristic variables. These values are fed into the CNN and the enhanced smoothness indicators are computed. After obtaining the final WENO approximation, the projection back to physical space is done using the right eigenvectors, see [@shu1992] for more details on this procedure. ## Size of the neural network {#sec:S4.1} In our paper, we considered different structures of neural networks and carried out numerous experiments with them. First, we used a rather simple CNN with only two layers and a receptive field of width $3$. The structure is shown in Figure [2](#fig:structure_1){reference-type="ref" reference="fig:structure_1"}. The advantage of this is its computational efficiency. Second, we used a CNN with the same number of layers, but we increased the number of channels and made the receptive field wider. The structure is shown in Figure [3](#fig:structure_2){reference-type="ref" reference="fig:structure_2"}. Finally, we used only a receptive field of width $3$, but added one more layer and used a more complex neural network, as shown in Figure [4](#fig:structure_3){reference-type="ref" reference="fig:structure_3"}. Each of these neural networks gave interesting results and we summarize them in Section [5](#sec:S5){reference-type="ref" reference="sec:S5"}. 
![Two hidden layers, lower number of channels, receptive field of size $3$.](structure_1.pdf){#fig:structure_1 width="\\textwidth"} ![Two hidden layers, higher number of channels, receptive field of size $5$.](structure_4.pdf){#fig:structure_2 width="\\textwidth"} ![Three hidden layers, higher number of channels, receptive field of size $3$.](structure_5.pdf){#fig:structure_3 width="\\textwidth"} As can be seen, we have $4$ input channels in the first hidden layer and $4$ output channels in the last hidden layer in each CNN. These represent the dimension of the solution $U$ from [\[eq:Euler_HCL\]](#eq:Euler_HCL){reference-type="eqref" reference="eq:Euler_HCL"}. In this way, the neural network also takes in information from other variables, which can be useful for improving the numerical solution. The input $\Bar{F}(\Bar{U})$, respectively $\Bar{G}(\Bar{U})$ represents the numerical approximation after the projection using left eigenvectors and after applying the flux splitting method. We also have to adapt the loss function from [\[eq:loss_L2\]](#eq:loss_L2){reference-type="eqref" reference="eq:loss_L2"} and use it for training $$\label{eq:Euler_loss} \begin{split} LOSS_{\rm MSE}(\rho, u, v, p) &= LOSS_{\rm MSE}(\rho) + LOSS_{\rm MSE}(u) + LOSS_{\rm MSE}(v) + LOSS_{\rm MSE}(p) \end{split}$$ and for the validation during training from [\[eq:loss_L1\]](#eq:loss_L1){reference-type="eqref" reference="eq:loss_L1"} $$\label{eq:Euler_validation} L_1(\rho, u, v, p) = L_1(\rho) + L_1(u) + L_1(v) + L_1(p).$$ When we plot the error on validation problems, we rescale the values for each validation problem to be in the interval $[0,1]$ using the relationship $$\label{eq:Euler_validation_adjusted} L^*_1(\rho, u, v, p) = \frac{L^l_1(\rho, u, v, p)}{\max_l(L^l_1(\rho, u, v, p))}, \qquad l=0,\dots,L,$$ where $L$ denotes the total number of training steps. ## Construction of the data set for the CNN training procedure {#sec:S4.2} For each of the 19 configurations of the Riemann problem, the specific relations must be satisfied by the initial data and the symmetry properties of the solution. We present the formulas given in [@schulz1993] and create the data sets for the CNN training according to these formulas. We define $$\label{eq:relations_phi_psi} \Phi_{lr} := \frac{2\sqrt{\gamma}}{\gamma - 1} \Big(\sqrt{\frac{p_l}{\rho_l}} - \sqrt{\frac{p_r}{\rho_r}}\Big), \quad \Psi_{lr}^2 := \frac{(p_l-p_r)(\rho_l-\rho_r)}{\rho_l \rho_r}, \quad (\Psi_{lr}>0)$$ and $$\label{eq:relations_pi} \Pi_{lr} := \Big( \frac{p_l}{p_r} + \frac{(\gamma-1)}{(\gamma +1)}\Big) \Big/ \Big(1+\frac{(\gamma-1)}{(\gamma+1)}\frac{p_l}{p_r}\Big).$$ In Sections [5.1](#sec:S5.1){reference-type="ref" reference="sec:S5.1"}, [5.2](#sec:S5.2){reference-type="ref" reference="sec:S5.2"}, [5.3](#sec:S5.3){reference-type="ref" reference="sec:S5.3"} we list the specific relations for given examples that are sufficient to uniquely define the solution. Following these relations, we randomly generate the initial data and construct our data sets. # Numerical results {#sec:S5} To demonstrate the efficiency of the proposed method, in this section, we present the numerical results obtained with the WENO-DS method after the CNN training procedure. 
Note that the CNN training procedure only needs to be performed once as *offline* training for each of the examples presented in Sections [5.1](#sec:S5.1){reference-type="ref" reference="sec:S5.1"}, [5.2](#sec:S5.2){reference-type="ref" reference="sec:S5.2"}, [5.3](#sec:S5.3){reference-type="ref" reference="sec:S5.3"}. No additional training was performed for the examples in Section [5.4](#sec:S5.4){reference-type="ref" reference="sec:S5.4"} as we show the results using the same trained CNN from the previous examples. In Section [5.5](#sec:S5.5){reference-type="ref" reference="sec:S5.5"} we perform two more trainings with larger CNN and illustrate the results. Further details can be found in the respective sections. For the following system of ordinary differential equations (ODEs) $$\frac{\text{d}U(t)}{\text{d}t}= L(U),$$ we use a third-order *total variation diminishing* (TVD) Runge-Kutta method [@jiang1996] given by $$\label{eq:runge_kutta} \begin{split} U^{(1)} &= U^n + \Delta t \,L(U^n), \\ U^{(2)} &= \frac{3}{4}U^n + \frac{1}{4}U^{(1)} + \frac{1}{4}\Delta t\, L(U^{(1)}), \\ U^{n+1} &= \frac{1}{3}U^n + \frac{2}{3}U^{(2)} + \frac{2}{3}\Delta t\,L(U^{(2)}), \end{split}$$ where $U^n$ is the numerical solution at the time step $n$. For the scheme [\[eq:runge_kutta\]](#eq:runge_kutta){reference-type="eqref" reference="eq:runge_kutta"} we use an adaptive step size $$\Delta t = 0.6 \min\Big(\frac{\Delta x}{a}, \frac{\Delta y}{a}\Big),$$ with $$a = \max_{\substack{i=0,\ldots,I\\ j=0,\ldots,J}}(|\lambda^+_{i,j}|, |\lambda^-_{i,j}|) \quad \lambda^\pm = V \pm c, \quad V = \sqrt{u^2+v^2} \quad c^2= \gamma\,\frac{p}{\rho},$$ where $u$, $v$ are the velocities and $c$ is the local speed of sound. In the sequel we enumerate the different configurations of initial conditions according to [@kurganov2002]. ## Configuration 2 {#sec:S5.1} This is the configuration with four rarefaction waves: $\overrightarrow{R}_{21}$, $\overleftarrow{R}_{32}$, $\overleftarrow{R}_{34}$, $\overrightarrow{R}_{41}$. The detailed analysis was done in [@zhang1990; @schulz1993] and we have to satisfy the following relations for this case: $$\label{eq:relations_R2} \begin{split} &u_2-u_1 = \Phi_{21}, \quad u_4-u_3 = \Phi_{34}, \quad u_3=u_2, \quad u_4=u_1, \\ &v_4-v_1 = \Phi_{41}, \quad v_2-v_3 = \Phi_{32}, \quad v_2=v_1, \quad v_3=v_4 \end{split}$$ with the compatibility conditions $\Phi_{21} = -\Phi_{34}$ and $\Phi_{41}=-\Phi_{32}$. Moreover, for a polytropic gas the equations $$\label{eq:relations_R2_poly} \rho_l/\rho_r = (p_l/p_r)^{1/\gamma} \quad \text{for} \quad (l,r) \in \{(2,1),(3,4),(3,2),(4,1)\}$$ have to be included. Furthermore, we have $\rho_2=\rho_4$, $\rho_1 = \rho_3$, $p_1=p_3$, $p_2=p_4$, $u_2-u_1=v_4-v_1$ and $u_4-u_3=v_2-v_3$. We use for creating of the data set the values $$\label{eq:par_range_R2} \begin{split} \rho_1 \in \mathcal{U}[0.7,2], \quad \rho_2 &\in \mathcal{U}[0.5,\rho_1], \quad p_1 \in \mathcal{U}[0.2,1.5], \\ \quad u_1 \in \mathcal{U}[-1,1], \quad &v_1 = u_1, \quad \gamma \in (1.1, 1.67) \end{split}$$ and for the other values we use the relations [\[eq:relations_R2\]](#eq:relations_R2){reference-type="eqref" reference="eq:relations_R2"}, [\[eq:relations_R2_poly\]](#eq:relations_R2_poly){reference-type="eqref" reference="eq:relations_R2_poly"} with [\[eq:relations_phi_psi\]](#eq:relations_phi_psi){reference-type="eqref" reference="eq:relations_phi_psi"}. 
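As an illustration of this construction, a minimal sketch of sampling one Configuration 2 state could look as follows; the uniform draw of $\gamma$ and the inline helper for $\Phi_{lr}$ are our illustrative choices, while the remaining states follow from the relations just listed.

```python
import numpy as np


def sample_configuration_2(rng=np.random.default_rng()):
    """Sample one Configuration 2 initial state (four rarefaction waves)."""
    gamma = rng.uniform(1.1, 1.67)          # drawn uniformly here for illustration
    rho1 = rng.uniform(0.7, 2.0)
    rho2 = rng.uniform(0.5, rho1)
    p1 = rng.uniform(0.2, 1.5)
    u1 = rng.uniform(-1.0, 1.0)
    v1 = u1

    # polytropic relation rho_2/rho_1 = (p_2/p_1)^(1/gamma)  =>  p_2
    p2 = p1 * (rho2 / rho1)**gamma
    # symmetries of the configuration: rho_3 = rho_1, rho_4 = rho_2, p_3 = p_1, p_4 = p_2
    rho3, rho4, p3, p4 = rho1, rho2, p1, p2

    def Phi(pl, rl, pr, rr):                # Phi_{lr} from eq:relations_phi_psi
        return 2*np.sqrt(gamma)/(gamma - 1)*(np.sqrt(pl/rl) - np.sqrt(pr/rr))

    u2 = u1 + Phi(p2, rho2, p1, rho1)       # u_2 - u_1 = Phi_21
    u3, u4 = u2, u1
    v4 = v1 + Phi(p4, rho4, p1, rho1)       # v_4 - v_1 = Phi_41
    v2, v3 = v1, v4

    return {1: (rho1, u1, v1, p1), 2: (rho2, u2, v2, p2),
            3: (rho3, u3, v3, p3), 4: (rho4, u4, v4, p4)}
```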
We also compute the reference solutions using the WENO-Z method on a grid $I\times J = 400\times400$ space points up to the final time $T \in \mathcal{U}[0.1,0.2]$ and create the data set consisting of $50$ reference solutions. For training, we use the training procedure described in Section [3.3](#sec:S3.3){reference-type="ref" reference="sec:S3.3"}. First, we use the simplest neural network structure shown in Figure [2](#fig:structure_1){reference-type="ref" reference="fig:structure_1"} and perform the training for the total number of $4000$ training steps. We plot the evolution of the $L^*_1$ error [\[eq:Euler_validation_adjusted\]](#eq:Euler_validation_adjusted){reference-type="eqref" reference="eq:Euler_validation_adjusted"} for the validation problems in Figure [5](#fig:validation_Riemann_2){reference-type="ref" reference="fig:validation_Riemann_2"}. Note that these problems were not included in the training data, and the initial conditions of these problems were generated analogously to the construction of the training data set. For these problems, we measured the error every $100$ training steps and at a randomly chosen final time $T$. We select the final model based on the evolution of the error of the validation set. We see that the error decreases up to a certain point for all problems and then starts to increase for some problems. Longer training would lead to overfitting of the training data. Finally, we choose the final model from the $2800$ training step and present the results using this model. ![The values [\[eq:Euler_validation_adjusted\]](#eq:Euler_validation_adjusted){reference-type="eqref" reference="eq:Euler_validation_adjusted"} for different validation problems evaluated each $100$ training steps.](validation_Riemann_2.pdf){#fig:validation_Riemann_2 width="70%"} As a test problem we use the problem from [@kurganov2002] with $\gamma = 1.4$, $T=0.2$ and the initial condition $$\label{eq:Riemann_IC_2} (\rho,u, v,p) = \begin{cases} (1, 0, 0, 1) \quad &x > 0.5 \quad \text{and} \quad y > 0.5,\\ (0.5197,-0.7259, 0, 0.4) \quad &x < 0.5 \quad \text{and} \quad y > 0.5,\\ (1, -0.7259, -0.7259, 1) \quad &x < 0.5 \quad \text{and} \quad y < 0.5,\\ (0.5197, 0, -0.7259, 0.4) \quad &x > 0.5 \quad \text{and} \quad y < 0.5.\\ \end{cases}$$ The results are shown in Table [\[tab:Riemann_2\]](#tab:Riemann_2){reference-type="ref" reference="tab:Riemann_2"}. As can be seen, we achieve a significant error improvement for all four variables and for different discretizations. It should be noted that we trained only with the discretization $100 \times 100$ space points and did not retrain the neural network for different discretizations. We refer to the error of the WENO-Z method divided by the error of WENO-DS (rounded to 2 decimal points) as the 'ratio'. The density contour plots are shown in Figure [\[fig:Riemann_2\]](#fig:Riemann_2){reference-type="ref" reference="fig:Riemann_2"} and the absolute pointwise errors for the density are shown in Figure [\[fig:Riemann_2\_error\]](#fig:Riemann_2_error){reference-type="ref" reference="fig:Riemann_2_error"}. 
![WENO-DS](R2_WENODS.pdf){#fig:R2_WENODS width="\\linewidth"} ![WENO-Z](R2_WENOZ.pdf){#fig:R2_WENOZ width="\\linewidth"} ![reference solution](R2_reference.pdf){#fig:R2_reference width="\\linewidth"} ![WENO-DS](R2_WENODS_error.pdf){#fig:R2_WENODS_error width="\\linewidth"} ![WENO-Z](R2_WENOZ_error.pdf){#fig:R2_WENOZ_error width="\\linewidth"} Finally, we want to compare the computational cost of WENO-DS compared to the original WENO scheme in solving the problem shown in Figure [11](#fig:Riemann_2_cost){reference-type="ref" reference="fig:Riemann_2_cost"}. Using a logarithmic scale, we plot the computation time against the $L_1$ error averaged over the four variables $\rho$, $u$, $v$, $p$. ![Comparison of computational cost against $L_1$-error of the solution of the Riemann problem with the initial condition [\[eq:Riemann_IC_2\]](#eq:Riemann_IC_2){reference-type="eqref" reference="eq:Riemann_IC_2"}.](Riemann_2_cost.pdf){#fig:Riemann_2_cost width="50%"} It should be noted that if we were to test the method on another unseen test problem using the initial data from the previously described range, we would obtain very similar error improvements in those cases. ## Configuration 3 {#sec:S5.2} This is the configuration with four shock waves: $\overleftarrow{S}_{21}$, $\overleftarrow{S}_{32}$, $\overleftarrow{S}_{34}$, $\overleftarrow{S}_{41}$. According to [@schulz1993], in this case we have the following equations that must be satisfied: $$\label{eq:relations_R3} \begin{split} &u_2-u_1 = \Psi_{21}, \quad u_3-u_4 = \Psi_{34}, \quad u_3=u_2, \quad u_4=u_1, \\ &v_4-v_1 = \Psi_{41}, \quad v_3-v_2 = \Psi_{32}, \quad v_2=v_1, \quad v_3=v_4 \end{split}$$ and for polytropic gas the equations $$\label{eq:relations_R3_poly} \rho_l/\rho_r = \Pi_{lr} \quad \text{for} \quad (l,r) \in \{(2,1),(3,4),(3,2),(4,1)\}$$ are added. This gives the compatibility conditions $\Psi_{21}=\Psi_{34}$ and $\Psi_{41}=\Psi_{32}$. Furthermore, we have $\rho_2=\rho_4$, $p_2=p_4$ and $u_2-u_1=v_4-v_1$. In this case, we use them for creating the data set values $$\label{eq:par_range_R3} \begin{split} \rho_1 \in \mathcal{U}[1, 2], \quad \rho_2 \in & \mathcal{U}[0.5, 1], \quad p_1 \in \mathcal{U}[1, 2], \\ \quad u_1 \in \mathcal{U}[-0.25,0.25], \quad &v_1 = u_1, \quad \gamma \in (1.1, 1.67) \end{split}$$ and for the other values we use the relations [\[eq:relations_R3\]](#eq:relations_R3){reference-type="eqref" reference="eq:relations_R3"}, [\[eq:relations_R3_poly\]](#eq:relations_R3_poly){reference-type="eqref" reference="eq:relations_R3_poly"} with [\[eq:relations_phi_psi\]](#eq:relations_phi_psi){reference-type="eqref" reference="eq:relations_phi_psi"} and [\[eq:relations_pi\]](#eq:relations_pi){reference-type="eqref" reference="eq:relations_pi"}. Similar to the previous example, we compute the reference solutions using the WENO-Z method on a grid $I\times J = 400\times400$ space points up to the final time $T\in\mathcal{U}[0.1,0.3]$ and create the data set consisting of $50$ reference solutions. We proceed with training as described in the previous section, using the same neural network structure as shown in Figure [2](#fig:structure_1){reference-type="ref" reference="fig:structure_1"}. Again, we train only on the discretization $I \times J = 100 \times 100$ space steps. 
We run the training for $4000$ training steps and plot the evolution of the validation metrics [\[eq:Euler_validation_adjusted\]](#eq:Euler_validation_adjusted){reference-type="eqref" reference="eq:Euler_validation_adjusted"} for the validation problems in Figure [12](#fig:validation_Riemann_3){reference-type="ref" reference="fig:validation_Riemann_3"}. We measured the error every $100$ training steps and at a randomly chosen final time $T$. Based on this, we choose the final model from training step $3200$ and present the results for the test problem with $\gamma = 1.4$, $T=0.3$, and initial condition [@kurganov2002] $$\label{eq:Riemann_IC_3} (\rho,u, v,p) = \begin{cases} (1.5, 0, 0, 1.5) \quad &x > 0.5 \quad \text{and} \quad y > 0.5,\\ (0.5323, 1.206, 0, 0.3) \quad &x < 0.5 \quad \text{and} \quad y > 0.5,\\ (0.138, 1.206, 1.206, 0.029) \quad &x < 0.5 \quad \text{and} \quad y < 0.5,\\ (0.5323, 0, 1.206, 0.3) \quad &x > 0.5 \quad \text{and} \quad y < 0.5.\\ \end{cases}$$ ![The values [\[eq:Euler_validation_adjusted\]](#eq:Euler_validation_adjusted){reference-type="eqref" reference="eq:Euler_validation_adjusted"} for different validation problems evaluated each $100$ training steps.](validation_Riemann_3.pdf){#fig:validation_Riemann_3 width="70%"} We compare the results in Table [\[tab:Riemann_3\]](#tab:Riemann_3){reference-type="ref" reference="tab:Riemann_3"}. As can be seen, we achieve a large error improvement for all discretizations listed. The density contour plots can be found in Figure [\[fig:Riemann_3\]](#fig:Riemann_3){reference-type="ref" reference="fig:Riemann_3"} and the absolute pointwise errors for the density in Figure [\[fig:Riemann_3\_error\]](#fig:Riemann_3_error){reference-type="ref" reference="fig:Riemann_3_error"}. Here it can be seen that the error of WENO-DS is significantly lower in the areas of the shock contacts. ![WENO-DS](R3_WENODS.pdf){#fig:R3_WENODS width="\\linewidth"} ![WENO-Z](R3_WENOZ.pdf){#fig:R3_WENOZ width="\\linewidth"} ![reference solution](R3_reference.pdf){#fig:R3_reference width="\\linewidth"} ![WENO-DS](R3_WENODS_error.pdf){#fig:R3_WENODS_error width="\\linewidth"} ![WENO-Z](R3_WENOZ_error.pdf){#fig:R3_WENOZ_error width="\\linewidth"} We also compare the weights $\omega^Z_m$, $m = 0,1,2$ [\[eq:WENOZ\]](#eq:WENOZ){reference-type="eqref" reference="eq:WENOZ"} and the updated weights $\omega^{DS}_m$, $m = 0,1,2$ with the improved smoothness indicators [\[eq:betas_DS_cons\]](#eq:betas_DS_cons){reference-type="eqref" reference="eq:betas_DS_cons"}. We plot these weights, corresponding to the positive part of a flux $\hat{f}^+$ from the flux splitting, using WENO-Z and WENO-DS for the previous test problem at the final time $T=0.3$. Since we apply the dimension-by-dimension principle, we present the weights only for the approximation of the flux $F(U)$. For the approximations of the flux $G(U)$, we could obtain these weights in this example using symmetry. As can be seen, WENO-DS is much better at localizing the shock from the other direction as well, which has a significant impact on error improvement.
![WENO-DS, $\omega^{DS}_0$](R3_WENODS_omega_0.pdf){#fig:R3_WENODS_omega_0 width="\\textwidth"} ![WENO-DS, $\omega^{DS}_1$](R3_WENODS_omega_1.pdf){#fig:R3_WENODS_omega_1 width="\\textwidth"} ![WENO-DS, $\omega^{DS}_2$](R3_WENODS_omega_2.pdf){#fig:R3_WENODS_omega_2 width="\\textwidth"} ![WENO-Z, $\omega^{Z}_0$](R3_WENOZ_omega_0.pdf){#fig:R3_WENOZ_omega_0 width="\\textwidth"} ![WENO-Z, $\omega^{Z}_1$](R3_WENOZ_omega_1.pdf){#fig:R3_WENOZ_omega_1 width="\\textwidth"} ![WENO-Z, $\omega^{Z}_2$](R3_WENOZ_omega_2.pdf){#fig:R3_WENOZ_omega_2 width="\\textwidth"} Finally, let us compare the computational cost of WENO-DS for this problem; the comparison is shown in Figure [24](#fig:Riemann_3_cost){reference-type="ref" reference="fig:Riemann_3_cost"}. We see that, for a comparable error, WENO-Z is much more computationally intensive than WENO-DS. Again, if we tested the method on unseen problems with the same initial configuration, we would get analogous significant error improvements. ![Comparison of computational cost against $L_1$-error of the solution of the Riemann problem with the initial condition [\[eq:Riemann_IC_3\]](#eq:Riemann_IC_3){reference-type="eqref" reference="eq:Riemann_IC_3"}.](Riemann_3_cost.pdf){#fig:Riemann_3_cost width="50%"} ## Configuration 16 {#sec:S5.3} This is the configuration with a combination of a rarefaction wave, a shock wave and contact discontinuities: $\overleftarrow{R}_{21}$, $J^-_{32}$, $J^+_{34}$, $\overrightarrow{S}_{41}$. As shown in [@schulz1993], the following relations must hold for this case $$\label{eq:relations_R16} \begin{split} u_1-u_2 = &\Phi_{21}, \quad u_3=u_4=u_1, \\ v_4-v_1 = \Psi_{41}, \quad &v_3=v_2 = v_1, \quad p_1 < p_2 = p_3 = p_4 \end{split}$$ and for polytropic gas we add the equation [\[eq:relations_R2_poly\]](#eq:relations_R2_poly){reference-type="eqref" reference="eq:relations_R2_poly"} for a rarefaction and [\[eq:relations_R3_poly\]](#eq:relations_R3_poly){reference-type="eqref" reference="eq:relations_R3_poly"} for a shock wave between the $l$th and $r$th quadrants. For our data set we use the values $$\label{eq:par_range_R16} \begin{split} \rho_4 \in \mathcal{U}[1, 2], \quad \rho_3 \in \mathcal{U}[0.5, \rho_4], &\quad p_1 \in \mathcal{U}[0.3,1], \quad p_2 \in \mathcal{U}[1,1.5], \\ \quad u_1 \in \mathcal{U}[-0.25,0.25], \quad &v_1 = u_1, \quad \gamma \in (1.1, 1.67). \end{split}$$ To compute the data set consisting of $50$ reference solutions, we use the WENO-Z method on a grid $I \times J = 400 \times400$ space points up to the final time $T\in\mathcal{U}[0.1,0.2]$. We train the CNN with the structure shown in Figure [2](#fig:structure_1){reference-type="ref" reference="fig:structure_1"} as in the previous examples on the discretization $I \times J = 100 \times 100$ space steps for a total of $2000$ training steps. We show the evolution of the validation metrics [\[eq:Euler_validation_adjusted\]](#eq:Euler_validation_adjusted){reference-type="eqref" reference="eq:Euler_validation_adjusted"} in Figure [25](#fig:validation_Riemann_16){reference-type="ref" reference="fig:validation_Riemann_16"} and choose the model from training step $1900$.
![The values [\[eq:Euler_validation_adjusted\]](#eq:Euler_validation_adjusted){reference-type="eqref" reference="eq:Euler_validation_adjusted"} for different validation problems evaluated each $100$ training steps.](validation_Riemann_16.pdf){#fig:validation_Riemann_16 width="70%"} We test the trained WENO-DS on a test problem [@kurganov2002] with $\gamma = 1.4$, $T=0.2$ and the initial condition $$\label{eq:Riemann_IC_16} (\rho,u, v,p) = \begin{cases} (0.5313, 0.1, 0.1, 0.4) \quad &x > 0.5 \quad \text{and} \quad y > 0.5,\\ (1.0222, -0.6179, 0.1, 1) \quad &x < 0.5 \quad \text{and} \quad y > 0.5,\\ (0.8, 0.1, 0.1, 1) \quad &x < 0.5 \quad \text{and} \quad y < 0.5,\\ (1, 0.1, 0.8276, 1) \quad &x > 0.5 \quad \text{and} \quad y < 0.5.\\ \end{cases}$$ We compare the results in Table [\[tab:Riemann_16\]](#tab:Riemann_16){reference-type="ref" reference="tab:Riemann_16"} and the density contour plots can be found in Figure [\[fig:Riemann_16\]](#fig:Riemann_16){reference-type="ref" reference="fig:Riemann_16"}. As can be seen, WENO-DS outperforms WENO-Z and has smaller $L_1$ errors in all cases. In addition, we plot the absolute pointwise errors for the density solution and show them in Figure [\[fig:Riemann_16_error\]](#fig:Riemann_16_error){reference-type="ref" reference="fig:Riemann_16_error"}. For another unseen test problem with the same initial configurations, we would again obtain analogous significant error improvements. ![WENO-DS](R16_WENODS.pdf){#fig:R16_WENODS width="\\linewidth"} ![WENO-Z](R16_WENOZ.pdf){#fig:R16_WENOZ width="\\linewidth"} ![reference solution](R16_reference.pdf){#fig:R16_reference width="\\linewidth"} ![WENO-DS](R16_WENODS_error.pdf){#fig:R16_WENODS_error width="\\linewidth"} ![WENO-Z](R16_WENOZ_error.pdf){#fig:R16_WENOZ_error width="\\linewidth"} ## Configuration 11 and Configuration 19 {#sec:S5.4} In the previous sections, we trained three WENO-DS methods for three different types of configurations. We denote by WENO-DS (C2), WENO-DS (C3), and WENO-DS (C16) the methods from Sections [5.1](#sec:S5.1){reference-type="ref" reference="sec:S5.1"}, [5.2](#sec:S5.2){reference-type="ref" reference="sec:S5.2"}, and [5.3](#sec:S5.3){reference-type="ref" reference="sec:S5.3"}, respectively. In this section, we test these methods on the unseen problems containing the combination of rarefaction wave, shock wave, and contact discontinuities. 
First, we consider Configuration 11 ($\overleftarrow{S}_{21}$, $J^+_{32}$, $J^+_{34}$, $\overleftarrow{S}_{41}$) with the test problem with $\gamma = 1.4$, $T=0.3$, and the initial condition [@kurganov2002] $$\label{eq:Riemann_IC_11} (\rho,u, v,p) = \begin{cases} (1, 0.1, 0, 1) \quad &x > 0.5 \quad \text{and} \quad y > 0.5,\\ (0.5313, 0.8276, 0, 0.4) \quad &x < 0.5 \quad \text{and} \quad y > 0.5,\\ (0.8, 0.1, 0, 0.4) \quad &x < 0.5 \quad \text{and} \quad y < 0.5,\\ (0.5313, 0.1, 0.7276, 0.4) \quad &x > 0.5 \quad \text{and} \quad y < 0.5.\\ \end{cases}$$ Second, we test the models on the configuration ($J^+_{21}$, $\overleftarrow{S}_{32}$, $J^-_{34}$, $\overrightarrow{R}_{41}$) with the test problem with $\gamma = 1.4$, $T=0.3$ and the initial condition [@kurganov2002] $$\label{eq:Riemann_IC_19} (\rho,u, v,p) = \begin{cases} (1, 0, 0.3, 1) \quad &x > 0.5 \quad \text{and} \quad y > 0.5,\\ (2, 0, -0.3, 1) \quad &x < 0.5 \quad \text{and} \quad y > 0.5,\\ (1.0625, 0, 0.2145, 0.4) \quad &x < 0.5 \quad \text{and} \quad y < 0.5,\\ (0.5197, 0, -0.4259, 0.4) \quad &x > 0.5 \quad \text{and} \quad y < 0.5.\\ \end{cases}$$ We summarize the results in Tables [\[tab:Riemann_11\]](#tab:Riemann_11){reference-type="ref" reference="tab:Riemann_11"} and [\[tab:Riemann_19\]](#tab:Riemann_19){reference-type="ref" reference="tab:Riemann_19"}. As can be seen, the method trained on problems containing only rarefaction waves has the worst ability to generalize to unseen problems. On the other hand, by using methods trained on problems containing shocks or a combination of contact discontinuities, rarefaction, and shock waves, we obtain the error improvements even on unseen problems with different initial configurations. We would like to emphasize that the test problems in this section are far from the problems included in the training and validation sets. This is not only due to the choice of initial data, but also to the combination of rarefaction, shock waves and their direction, and positive and negative contact discontinuities. ## Bigger CNN and ability to generalize on unseen configurations {#sec:S5.5} As can be seen from the previous Section [5.4](#sec:S5.4){reference-type="ref" reference="sec:S5.4"}, the models trained using the data from Section [5.2](#sec:S5.2){reference-type="ref" reference="sec:S5.2"} and Section [5.3](#sec:S5.3){reference-type="ref" reference="sec:S5.3"} are able to generalize very well to unseen problems. The WENO-DS method is able to properly localize the shocks and discontinuities, leading to a better numerical solution. Let us now increase the size of the CNN and use the structures shown in Figures [3](#fig:structure_2){reference-type="ref" reference="fig:structure_2"}, increasing the size of the receptive field and the number of channels, and Figure [4](#fig:structure_3){reference-type="ref" reference="fig:structure_3"}, increasing the number of channels and adding another CNN layer. Experimentally, we found that only increasing the size of the receptive field and the number of channels leads to similar results as described in the previous sections. In addition, increasing the receptive field makes the WENO-DS computationally more expensive. This is because we need to prepare wider inputs for the CNN, which also need to be projected onto the characteristic fields using left eigenvectors, and the matrix multiplications are more expensive here. 
On the other hand, if we use the CNN structure described in Figure [4](#fig:structure_3){reference-type="ref" reference="fig:structure_3"}, we obtain a trained WENO-DS method that provides a much better numerical solution even for unseen problems with significantly different initial configurations. Let us now train the method on two data sets. First, we use the data set from Section [5.2](#sec:S5.2){reference-type="ref" reference="sec:S5.2"}, train the CNN, and denote the final method as WENO-DS (C3c). Second, we train the CNN on the data set from Section [5.3](#sec:S5.3){reference-type="ref" reference="sec:S5.3"} and denote the final method as WENO-DS (C16c). We test the methods on an even wider range of configurations and compare the results in Tables [\[tab:Riemann_C3c\]](#tab:Riemann_C3c){reference-type="ref" reference="tab:Riemann_C3c"} and [\[tab:Riemann_C16c\]](#tab:Riemann_C16c){reference-type="ref" reference="tab:Riemann_C16c"}. We use boldface to indicate the configuration on which the method was actually trained. With the number of configurations listed in the tables, we cover a wide range of possible combinations of contact discontinuities, rarefaction and shock waves. For all of them we use the test examples from the literature, see, e.g. [@kurganov2002]. We cover the case of four contact discontinuities with Configuration 6: $J^-_{21}$, $J^-_{32}$, $J^-_{34}$, $J^-_{41}$, the case of two contact discontinuities and two rarefaction waves with Configuration 8: $\overleftarrow{R}_{21}$, $J^-_{32}$, $J^-_{34}$, $\overleftarrow{R}_{41}$, and the case of two shock waves and two contact discontinuities with Configuration 14: $J^+_{21}$, $\overleftarrow{S}_{32}$, $J^-_{34}$, $\overleftarrow{S}_{41}$ and Configuration 11 from Section [5.4](#sec:S5.4){reference-type="ref" reference="sec:S5.4"}. Finally, the combination of contact discontinuities, rarefaction, and shock waves is covered by Configuration 18: $J^+_{21}$, $\overleftarrow{S}_{32}$, $J^+_{34}$, $\overrightarrow{R}_{41}$, and Configuration 19 from Section [5.4](#sec:S5.4){reference-type="ref" reference="sec:S5.4"}. As one can see, we obtain significant error improvements with both methods. Comparing both methods, even better results are obtained when the CNN was trained on a data set from Section [5.2](#sec:S5.2){reference-type="ref" reference="sec:S5.2"} on a configuration with four shock waves. Compared to Table [\[tab:Riemann_3\]](#tab:Riemann_3){reference-type="ref" reference="tab:Riemann_3"}, the improvement for Configuration 3 is smaller but still significant. However, the method is able to generalize much better to unknown configurations. For example, for Configuration 14, we obtain an average improvement ratio of $1.30$ over all four variables. In addition, we use WENO-DS (C3c) to illustrate the density contour plots and absolute pointwise errors in Figures [\[fig:Riemann_6\]](#fig:Riemann_6){reference-type="ref" reference="fig:Riemann_6"}, [\[fig:Riemann_8\]](#fig:Riemann_8){reference-type="ref" reference="fig:Riemann_8"}, and [\[fig:Riemann_19\]](#fig:Riemann_19){reference-type="ref" reference="fig:Riemann_19"}. Here we also show the difference from Configuration 3, on which the model was actually trained. The WENO-DS (C3c) method achieves large error improvements not only for problems from the same configuration, but also for problems from significantly different configurations. Since we used a larger CNN, the question is: what is the actual computational cost of these improvements?
We illustrate the computational costs in Figure [\[fig:Riemann_all_costs\]](#fig:Riemann_all_costs){reference-type="ref" reference="fig:Riemann_all_costs"}. As can be seen from the shift of the red dots to the right, the method involves larger computational costs. However, it is still more effective or not worse than the original method in most cases. We would like to emphasize that here we are comparing results with significantly different initial problems than those on which the method was actually trained. Machine learning models are generally not expected to give much better results on unseen problems. ![WENO-DS](R6_WENODS.pdf){#fig:R6_WENODS width="\\linewidth"} ![WENO-Z](R6_WENOZ.pdf){#fig:R6_WENOZ width="\\linewidth"} ![reference solution](R6_reference.pdf){#fig:R6_reference width="\\linewidth"} ![WENO-DS](R6_WENODS_error.pdf){width="\\linewidth"} ![WENO-Z](R6_WENOZ_error.pdf){width="\\linewidth"} ![WENO-DS](R8_WENODS.pdf){#fig:R8_WENODS width="\\linewidth"} ![WENO-Z](R8_WENOZ.pdf){#fig:R8_WENOZ width="\\linewidth"} ![reference solution](R8_reference.pdf){#fig:R8_reference width="\\linewidth"} ![WENO-DS](R8_WENODS_error.pdf){width="\\linewidth"} ![WENO-Z](R8_WENOZ_error.pdf){width="\\linewidth"} ![WENO-DS](R19_WENODS.pdf){#fig:R19_WENODS width="\\linewidth"} ![WENO-Z](R19_WENOZ.pdf){#fig:R19_WENOZ width="\\linewidth"} ![reference solution](R19_reference.pdf){#fig:R19_reference width="\\linewidth"} ![WENO-DS](R19_WENODS_error.pdf){width="\\linewidth"} ![WENO-Z](R19_WENOZ_error.pdf){width="\\linewidth"} ![Configuration 3](Riemann_3c_cost.pdf){#fig:Riemann_3c_cost width="\\textwidth"} ![Configuration 6](Riemann_6_cost.pdf){#fig:Riemann_6_cost width="\\textwidth"} ![Configuration 8](Riemann_8_cost.pdf){#fig:Riemann_8_cost width="\\textwidth"} ![Configuration 11](Riemann_11_cost.pdf){#fig:Riemann_11_cost width="\\textwidth"} ![Configuration 14](Riemann_14_cost.pdf){#fig:Riemann_14_cost width="\\textwidth"} ![Configuration 18](Riemann_18_cost.pdf){#fig:Riemann_18_cost width="\\textwidth"} ![Configuration 19](Riemann_19_cost.pdf){#fig:Riemann_19_cost width="\\textwidth"} # Conclusion {#sec:S6} In this paper, we introduced a novel approach, WENO-DS, which leverages the power of deep learning to enhance the performance of the well-established Weighted Essentially Non-Oscillatory (WENO) scheme in the context of solving hyperbolic conservation laws, particularly exemplified by the two-dimensional Euler equations of gas dynamics. By seamlessly integrating deep learning techniques into the WENO algorithm, we have successfully improved the accuracy of numerical solutions, particularly in regions near abrupt shocks. Unlike previous attempts at incorporating deep learning into numerical methods, this approach stands out by eliminating the need for additional post-processing steps, ensuring consistency throughout. This study demonstrates the superiority of the WENO-DS approach through an extensive examination of various test problems, including scenarios featuring shocks and rarefaction waves. The results consistently showcase the newfound capabilities of the approach, outperforming traditional fifth-order WENO schemes, especially when dealing with challenges like excessive diffusion or overshooting around shocks. The introduction of machine learning into the realm of solving partial differential equations (PDEs) has brought about promising improvements in numerical methods. 
However, it is crucial to strike a balance between these data-driven insights and the foundational mathematical principles underpinning the numerical scheme. This study successfully maintains this equilibrium, building upon the physical principles of the Euler equations while incorporating deep learning enhancements. In summary, the WENO-DS approach represents a significant advancement in the field of numerical methods for hyperbolic conservation laws, where the incorporation of deep learning techniques has not only enhanced the accuracy but also improved the qualitative behavior of solutions, both in smooth regions and near discontinuities. This research paves the way for future developments in the intersection of traditional numerical methods and machine learning, offering a promising direction for further advancements in solving complex PDEs like the Euler equations.
--- abstract: |  We consider semiclassical Schrödinger operators acting in $L^2(\mathbb{R}^d)$ with $d\geq3$. For these operators we establish sharp spectral asymptotics without full regularity. For the counting function we assume the potential is locally integrable and that the negative part of the potential minus a constant is once differentiable and the derivative is Hölder continuous with parameter $\mu\geq1/2$. Moreover, we also consider sharp Riesz means of order $\gamma$ with $\gamma\in(0,1]$. Here we assume the potential is locally integrable and that the negative part of the potential minus a constant is twice differentiable and the second derivative is Hölder continuous with a parameter $\mu$ that depends on $\gamma$. author: - Søren Mikkelsen bibliography: - Bib_paperB.bib title: Sharp semiclassical spectral asymptotics for Schrödinger operators with non-smooth potentials --- # Introduction Consider a semiclassical Schrödinger operator $H_\hbar= - \hbar^2\Delta + V$ acting in $L^2(\mathbb{R}^d)$, where $-\Delta$ is the positive Laplacian and $V$ is the potential. For the Schrödinger operator $H_\hbar$ the Weyl law states that $$\label{EQ:weyl_general} \mathop{\mathrm{Tr}}\big[\boldsymbol{1}_{(-\infty,0]}(H_\hbar)\big] = \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}} \boldsymbol{1}_{(-\infty,0]}(p^2+V(x)) \, dpdx + o(\hbar^{-d}),$$ where $\boldsymbol{1}_{\Omega}(t)$ is the characteristic function of the set $\Omega$. It has recently been proven by Frank [@frank2022weyls] that [\[EQ:weyl_general\]](#EQ:weyl_general){reference-type="eqref" reference="EQ:weyl_general"} is valid under the condition that $d\geq3$, $V\in L^1_{loc}(\mathbb{R}^d)$ and $V_{-}\in L^{\frac{d}{2}}(\mathbb{R}^d)$, where $V_{-} = \max(0,-V)$. These conditions are the minimal conditions such that both sides of the equality are well defined and finite. For a brief historical description of the developments in establishing [\[EQ:weyl_general\]](#EQ:weyl_general){reference-type="eqref" reference="EQ:weyl_general"} under minimal assumptions, see the introduction of [@frank2022weyls]. Under additional assumptions on the potential $V$ it was established by Helffer and Robert in [@MR724029] that $$\label{EQ:weyl_general_1} \mathop{\mathrm{Tr}}\big[\boldsymbol{1}_{(-\infty,0]}(H_\hbar)\big] = \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}} \boldsymbol{1}_{(-\infty,0]}(p^2+V(x)) \, dpdx + \mathcal{O}(\hbar^{1-d})$$ for all $\hbar\in(0,\hbar_0]$, $\hbar_0$ sufficiently small. They proved this under the conditions that $V\in C^\infty(\mathbb{R}^d)$ satisfies some regularity condition at infinity and that $V(x)\geq c>0$ for all $x \in \Omega^c$, where $\Omega\subset \mathbb{R}^d$ is some open bounded set. Moreover, they assumed a non-critical condition on the energy surface $\{(x,p)\in \mathbb{R}^{2d} \,|\, p^2 + V(x) =0\}$. The non-critical condition can afterwards be removed, see e.g. [@MR1343781]. The error estimate in [\[EQ:weyl_general_1\]](#EQ:weyl_general_1){reference-type="eqref" reference="EQ:weyl_general_1"} is the best generic error estimate one can obtain. As an example, one can consider the operator $H_\hbar= - \hbar^2\Delta + x^2 - \lambda$ for some $\lambda>0$. For this operator we can explicitly find all eigenvalues and check by hand that [\[EQ:weyl_general_1\]](#EQ:weyl_general_1){reference-type="eqref" reference="EQ:weyl_general_1"} is valid with an explicit error of order $\hbar^{1-d}$.
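To briefly sketch this standard computation: the eigenvalues of $-\hbar^2\Delta + x^2$ in $L^2(\mathbb{R}^d)$ are $\hbar(2|n|+d)$ with $n\in\mathbb{N}_0^d$ and $|n|=n_1+\dots+n_d$, so that $$\mathop{\mathrm{Tr}}\big[\boldsymbol{1}_{(-\infty,0]}(H_\hbar)\big] = \#\Big\{ n\in\mathbb{N}_0^d \,\Big|\, |n| \leq \frac{\lambda}{2\hbar} - \frac{d}{2} \Big\} = \frac{1}{d!}\Big(\frac{\lambda}{2\hbar}\Big)^d + \mathcal{O}(\hbar^{1-d}),$$ while the phase space integral equals the volume of a ball of radius $\sqrt{\lambda}$ in $\mathbb{R}^{2d}$ divided by $(2\pi\hbar)^d$, that is $$\frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}} \boldsymbol{1}_{(-\infty,0]}(p^2+x^2-\lambda) \, dpdx = \frac{\pi^d\lambda^d}{d!\,(2\pi\hbar)^d} = \frac{1}{d!}\Big(\frac{\lambda}{2\hbar}\Big)^d.$$ Since the counting function jumps by terms of size comparable to $\hbar^{1-d}$ whenever $\frac{\lambda}{2\hbar}-\frac{d}{2}$ passes an integer, the error term in [\[EQ:weyl_general_1\]](#EQ:weyl_general_1){reference-type="eqref" reference="EQ:weyl_general_1"} cannot, in general, be improved beyond the order $\hbar^{1-d}$.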
When comparing the two results in dimensions $d\geq3$, a natural question arises: Is the formula [\[EQ:weyl_general_1\]](#EQ:weyl_general_1){reference-type="eqref" reference="EQ:weyl_general_1"} valid under less smoothness? Could it even be valid for all $V$ satisfying the assumptions of the result by Frank? A positive answer to the last part of the question seems currently out of reach, and to the author's knowledge there does not yet exist a counterexample. However, for the first part of the question we will give positive answers. We will in fact not just consider the Weyl law but also Riesz means. That is, for $\gamma\in[0,1]$ we will consider traces of the form $$\label{traces_to_consider} \mathop{\mathrm{Tr}}\big[g_\gamma(H_{\hbar})\big],$$ where the function $g_\gamma$ is given by $$g_\gamma(t) = \begin{cases} \boldsymbol{1}_{(-\infty,0]}(t) &\gamma=0 \\ (t)_{-}^\gamma &\gamma\in(0,1]. \end{cases}$$ Frank also considered traces of the form [\[traces_to_consider\]](#traces_to_consider){reference-type="eqref" reference="traces_to_consider"} in [@frank2022weyls]. Helffer and Robert only considered Weyl asymptotics in [@MR724029], but proved the sharp estimate for Riesz means in [@MR1061661]. For later comparison and use we recall the exact statement of the results obtained by Frank in [@frank2022weyls]. **Theorem 1**. *Let $\gamma\geq1/2$ if $d=1$, $\gamma>0$ if $d=2$ and $\gamma\geq0$ if $d\geq3$. Let $\Omega\subset\mathbb{R}^d$ be an open set and let $V\in L^1_{loc}(\Omega)$ with $V_{-}\in L^{\gamma+d/2}(\Omega)$. Then $$\mathop{\mathrm{Tr}}\big[g_\gamma(H_\hbar)\big] = \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g_\gamma(p^2+V(x)) \,dx dp +o(\hbar^{-d})$$ as $\hbar\rightarrow 0$, where $H_\hbar = -\hbar^2\Delta +V(x)$ is considered in $L^2(\Omega)$ with Dirichlet boundary conditions.* One thing to observe here is that this theorem is also valid when the operator is considered on a bounded domain. We will only discuss the case where the domain is the whole space $\mathbb{R}^d$, $d\geq3$. For results on sharp Weyl laws without full regularity and on bounded domains we refer the reader to the works by Ivrii [@MR1974451] and [@ivrii2019microlocal1 Vol 1]. We will set up some notation and recall a definition before we give the assumptions for our main theorem and state it. **Definition 2**. Let $f:\mathbb{R}^d\mapsto\mathbb{R}$ be a measurable function. For each $\nu\in\mathbb{R}$ we define the set $$\Omega_{\nu,f} \coloneqq \big\{ x\in\mathbb{R}^d \,| \, f(x)<\nu \big\}.$$ **Definition 3**. For $k$ in $\mathbb{N}_0$ and $\mu$ in $[0,1]$ and $\Omega\subset\mathbb{R}^d$ open we denote by $C^{k,\mu}(\Omega)$ the subspace of $C^{k}(\Omega)$ defined by $$\begin{aligned} C^{k,\mu}(\Omega) = \big\{ f\in C^{k}(\Omega) \, \big| \, \exists C>0&: |\partial_x^{\alpha} f(x) - \partial_x^{\alpha} f(y) | \leq C |x-y|^{\mu} \\ & \forall \alpha \in\mathbb{N}_0^d \text{ with } \left| \alpha \right|=k \text{ and } \forall x,y\in\Omega \big\}. \end{aligned}$$ These definitions are here to clarify notation. We are now ready to state our assumptions on the potential $V$. **Assumption 4**. Let $V\in L^1_{loc}(\mathbb{R}^d)$ be a real function. Suppose there exist numbers $\nu>0$, $k\in\mathbb{N}_0$ and $\mu\in[0,1]$ such that the set $\Omega_{4\nu,V}$ is open and bounded and $V\in C^{k,\mu}(\Omega_{4\nu,V})$. With our assumptions on the potential $V$ in place we can now state the main theorem. **Theorem 5**. *Let $H_{\hbar} = -\hbar^2 \Delta +V$ be a Schrödinger operator acting in $L^2(\mathbb{R}^d)$ and let $\gamma\in[0,1]$.
If $\gamma=0$ we assume $d\geq3$ and if $\gamma\in(0,1]$ we assume $d\geq4$. Suppose that $V$ satisfies Assumption [Assumption 4](#Assumption:local_potential){reference-type="ref" reference="Assumption:local_potential"} with the number $\nu>0$ and $k=1$, $\mu\geq\frac{1}{2}$ if $\gamma=0$, and $k=2$, $\mu\geq \max(\frac{3}{2}\gamma - \frac{1}{2},0)$ if $\gamma>0$. Then it holds that $$\label{EQ:THM:Local_two_derivative} \Big|\mathop{\mathrm{Tr}}\big[g_\gamma(H_\hbar)\big] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g_\gamma(p^2+V(x)) \,dx dp \Big| \leq C \hbar^{1+\gamma-d}$$ for all $\hbar$ sufficiently small. The constant $C$ depends on the number $\nu$ and the potential $V$.* When comparing the assumptions for our main theorem and Theorem [Theorem 1](#Thm:Frank){reference-type="ref" reference="Thm:Frank"} we have that in both we assume the potential to be in $L^1_{loc}(\mathbb{R}^d)$. But in Theorem [Theorem 1](#Thm:Frank){reference-type="ref" reference="Thm:Frank"} the additional assumptions on the potential are on the negative part of $V$, whereas we need to assume regularity for the negative part of $V-4\nu$ for some $\nu>0$. One could have hoped to only have an assumption on the negative part of $V$. However, this does not seem obtainable with the methods we use here. Firstly, the way we prove the theorem requires us to have control of the potential just outside the classically allowed region ($\{x\in\mathbb{R}^d \, | \, V(x) \leq 0\}$). Secondly, we have that the constant in [\[EQ:THM:Local_two_derivative\]](#EQ:THM:Local_two_derivative){reference-type="eqref" reference="EQ:THM:Local_two_derivative"} will diverge to infinity as $\nu$ tends to zero. Hence, we cannot hope to do an approximation argument. The assumptions on dimensions are needed to ensure integrability of some integrals. In the case of Theorem [Theorem 1](#Thm:Frank){reference-type="ref" reference="Thm:Frank"} there are counterexamples to Weyl asymptotics for $V\in L^{\frac{d}{2}}(\mathbb{R}^d)$ for $d=1,2$; for details see [@MR1399202; @MR1491550]. This is not the first work considering sharp Weyl laws without full regularity. The first results in a semiclassical setting were obtained by Ivrii in [@MR1807155], where he also considered higher order differential operators acting in $L^2(M)$, where $M$ is a compact manifold without boundary. In this work the coefficients are assumed to be differentiable with a Hölder continuous first derivative. This was a generalisation of works by Zielinski, who had previously obtained sharp Weyl laws in high-energy asymptotics in [@MR1736710; @MR1612880; @MR1620550; @MR1635856]. The results by Ivrii were generalised by Bronstein and Ivrii in [@MR1974450], where they reduced the assumptions further by assuming the first derivative to have modulus of continuity $\mathcal{O}(|\log(x-y)|^{-1})$, and then again by Ivrii in [@MR1974451] to also include boundaries and to remove the non-critical condition. The non-critical condition, used in cases without full regularity, for a semiclassical pseudo-differential operator $\mathop{\mathrm{Op_\hbar^w}}(a)$ is $$\label{EQ:non-critical_int} |\nabla_p a(x,p)|\geq c >0 \qquad\text{for all $(x,p)\in a^{-1}(\{0\})$}.$$ In [@MR2105486] Zielinski considers the semiclassical setting with differential operators acting in $L^2(\mathbb{R}^d)$ and proves an optimal Weyl law under the assumption that all coefficients are once differentiable with a Hölder continuous derivative. Moreover, it is assumed that the coefficients and the derivatives are bounded.
In [@MR2105486] it is remarked that it should be possible to consider unbounded coefficients in a framework of tempered variation models. This was generalised by the author in [@mikkelsen2022optimal] to allow for the coefficients to be unbounded. Moreover, more general operators were also considered in [@mikkelsen2022optimal]. Both of these works assumed a non-critical condition [\[EQ:non-critical_int\]](#EQ:non-critical_int){reference-type="eqref" reference="EQ:non-critical_int"}. This assumption makes the results of [@mikkelsen2022optimal] and [@MR2105486] not valid for Schrödinger operators, since the assumption is equivalent to assuming that $$\label{EQ:non-critical_int_2} |V(x)|\geq c >0 \qquad\text{for all $x\in \mathbb{R}^d$}.$$ The author recently established sharp local spectral asymptotics for magnetic Schrödinger operators in [@mikkelsen2023sharp]. The techniques used to establish those results will be crucial for the results obtained here. The regularity assumptions we make here are weaker than the regularity assumptions made in [@mikkelsen2023sharp]. The results obtained by Bronstein and Ivrii [@MR1974450] and Ivrii [@MR1974451; @MR1807155] do assume less regularity than we do in the present work. However, the techniques used in these works seem to not translate well to a non-compact setting. The methods we use to establish Theorem [Theorem 5](#Thm:Main){reference-type="ref" reference="Thm:Main"} can also be used in cases where we have less regularity than we assume in the statement of the theorem. However, if we assume less regularity, we cannot obtain sharp remainder estimates. The results we can obtain are stated in the following two theorems. **Theorem 6**. *Let $H_{\hbar} = -\hbar^2 \Delta +V$ be a Schrödinger operator acting in $L^2(\mathbb{R}^d)$ with $d\geq3$. Suppose that $V$ satisfies Assumption [Assumption 4](#Assumption:local_potential){reference-type="ref" reference="Assumption:local_potential"} with the number $\nu>0$, $k=1$ and $0\leq\mu\leq1$. Then it holds that $$\Big|\mathop{\mathrm{Tr}}\big[g_0(H_\hbar)\big] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g_0(p^2+V(x)) \,dx dp \Big| \leq C \hbar^{\kappa-d}$$ for all $\hbar$ sufficiently small, where $\kappa = \min[\frac{2}{3}(1+\mu),1]$. The constant $C$ depends on the number $\nu$ and the potential $V$.* One can see that for $\mu\geq\frac{1}{2}$ we are in the setting of Theorem [Theorem 5](#Thm:Main){reference-type="ref" reference="Thm:Main"} and recover the sharp estimate. For the cases where $\mu<\frac{1}{2}$ we cannot currently obtain the optimal error. However, the "worst" error we can obtain is $\hbar^{\frac{2}{3}-d}$. This is still a significant improvement over the estimate $\hbar^{-d}$. Moreover, since a globally Lipschitz function is almost everywhere differentiable we can with these methods obtain the error $\hbar^{\frac{2}{3}-d}$ when the potential $V$ satisfies Assumption [Assumption 4](#Assumption:local_potential){reference-type="ref" reference="Assumption:local_potential"} with the number $\nu>0$, $k=0$ and $\mu=1$. The author believes that sharp estimates should also hold in this case. **Theorem 7**. *Let $H_{\hbar} = -\hbar^2 \Delta +V$ be a Schrödinger operator acting in $L^2(\mathbb{R}^d)$ with $d\geq4$ and let $\gamma\in(0,1]$. Suppose that $V$ satisfies Assumption [Assumption 4](#Assumption:local_potential){reference-type="ref" reference="Assumption:local_potential"} with the numbers $\nu>0$, $k=2$ and $0\leq \mu\leq1$.
Then it holds that $$\Big|\mathop{\mathrm{Tr}}\big[g_\gamma(H_\hbar)\big] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g_\gamma(p^2+V(x)) \,dx dp \Big| \leq C \hbar^{\kappa-d}$$ for all $\hbar$ sufficiently small, where $\kappa = \min[\frac{2}{3}(2+\mu),1+\gamma]$. The constant $C$ depends on the number $\nu$ and the potential $V$.* Again we have that for $\mu\geq \max(\frac{3}{2}\gamma - \frac{1}{2},0)$ we recover the sharp estimates from Theorem [Theorem 5](#Thm:Main){reference-type="ref" reference="Thm:Main"}. Considering the result obtained here, for the case $\gamma=1$ we have sharp error terms under a $C^3$ assumption, and even under a $C^2$ assumption we obtain an error of the form $\hbar^{\frac{4}{3}-d}$. The current paper is structured as follows. In Section [2](#SEC:pre){reference-type="ref" reference="SEC:pre"} we specify our notation and construct approximating/framing operators. Inspired by these framing operators we define operators that locally behave as rough Schrödinger operators in Section [3](#SEC:aux){reference-type="ref" reference="SEC:aux"}. For these operators we establish a sharp Weyl law at the end of the section. This result relies heavily on the results obtained in [@mikkelsen2023sharp]. In Section [4](#SEC:proof_main){reference-type="ref" reference="SEC:proof_main"} we first establish a result on localisations of the traces and a comparison of phase-space integrals. We end the section with a proof of the main theorems. ## Acknowledgement {#acknowledgement .unnumbered} The author is grateful to the Leverhulme Trust for their support via Research Project Grant 2020-037. # Preliminaries {#SEC:pre} We will for an operator $A$ acting in a Hilbert space $\mathscr{H}$ denote the operator norm by $\lVert A \rVert_{\mathrm{op}}$ and the trace norm by $\lVert A \rVert_1$. Moreover, we will in the following use the convention that $\mathbb{N}$ is the strictly positive integers and $\mathbb{N}_0=\mathbb{N}\cup\{0\}$. Next we will describe the operators we are working with. Under Assumption [Assumption 4](#Assumption:local_potential){reference-type="ref" reference="Assumption:local_potential"} we can define the operator $H_\hbar=-\hbar^2\Delta + V$ as the Friedrichs extension of the quadratic form given by $$\mathfrak{h}[f,g] = \int_{\mathbb{R}^d} \hbar^2\sum_{i=1}^d \partial_{x_i}f(x) \overline{\partial_{x_i}g(x)} + V(x)f(x)\overline{g(x)}\;dx, \qquad f,g \in \mathcal{D}(\mathfrak{h}),$$ where $$\mathcal{D}(\mathfrak{h}) = \left\{ f\in L^2(\mathbb{R}^d) | \int_{\mathbb{R}^d} \left| p \right|^2 \left| \hat{f}(p) \right|^2 \;dp<\infty \text{ and } \int_{\mathbb{R}^d} \left| V(x) \right|\left| f(x) \right|^2 \;dx <\infty \right\}.$$ In this setup the Friedrichs extension will be unique and self-adjoint, see e.g. [@MR0493420]. We will in our analysis use the Helffer-Sjöstrand formula. Before we state it we will recall a definition of an almost analytic extension. **Definition 8** (Almost analytic extension). For $f\in C_0^\infty(\mathbb{R})$ we call a function $\tilde{f} \in C_0^\infty(\mathbb{C})$ an almost analytic extension if it has the properties $$\begin{aligned} |\bar{\partial} \tilde{f}(z)| &\leq C_n |\mathop{\mathrm{\mathrm{Im}}}(z)|^n, \qquad \text{for all $n\in\mathbb{N}_0$} \\ \tilde{f}(t)&=f(t) \qquad \text{for all $t\in\mathbb{R}$}, \end{aligned}$$ where $\bar{\partial} = \frac12 (\partial_x +i\partial_y)$. For how to construct the almost analytic extension for a given $f\in C_0^\infty(\mathbb{R})$ see e.g. [@MR2952218; @MR1735654].
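To illustrate the definition, we sketch a standard construction, given here only up to a fixed finite order $N$; the cited references obtain the bound for all orders simultaneously by summing over $n$ with suitable cutoffs. For $f\in C_0^\infty(\mathbb{R})$ and $\sigma\in C_0^\infty(\mathbb{R})$ with $\sigma\equiv1$ near $0$ set $$\tilde{f}_N(x+iy) = \Big(\sum_{n=0}^{N}\frac{f^{(n)}(x)}{n!}(iy)^{n}\Big)\sigma(y).$$ Then $\tilde{f}_N(t)=f(t)$ for all $t\in\mathbb{R}$, and on the set where $\sigma\equiv1$ the sum telescopes under $\bar{\partial}$, giving $$\bar{\partial}\tilde{f}_N(x+iy) = \frac{1}{2}\,\frac{f^{(N+1)}(x)}{N!}(iy)^{N},$$ so that $|\bar{\partial}\tilde{f}_N(z)|\leq C_N|\mathop{\mathrm{\mathrm{Im}}}(z)|^{N}$ there; the terms containing $\sigma'$ are supported where $|\mathop{\mathrm{\mathrm{Im}}}(z)|$ is bounded from below and hence satisfy the same bound trivially.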
The following theorem is a simplified version of a theorem in [@MR1349825]. **Theorem 9** (The Helffer-Sjöstrand formula). *Let $H$ be a self-adjoint operator acting on a Hilbert space $\mathscr{H}$ and $f$ a function from $C_0^\infty(\mathbb{R})$. Then the bounded operator $f(H)$ is given by the equation $$f(H) =- \frac{1}{\pi} \int_\mathbb{C}\bar{\partial }\tilde{f}(z) (z-H)^{-1} \, L(dz),$$ where $L(dz)=dxdy$ is the Lebesgue measure on $\mathbb{C}$ and $\tilde{f}$ is an almost analytic extension of $f$.* ## Construction of framing operators and auxiliary asymptotics The crucial part in this construction is Proposition [Proposition 10](#PRO:smoothning_of_func){reference-type="ref" reference="PRO:smoothning_of_func"}, for which a proof can be found in either [@MR1974450 Proposition 1.1] or [@ivrii2019microlocal1 Proposition 4.A.2]. **Proposition 10**. *Let $f$ be in $C^{k,\mu}(\mathbb{R}^d)$ for a $\mu$ in $[0,1]$. Then for every $\varepsilon >0$ there exists a function $f_\varepsilon$ in $C^\infty(\mathbb{R}^d)$ such that $$\label{EQ:pro.smoothning_of_func_bounds} \begin{aligned} \left| \partial_x^\alpha f_\varepsilon(x) -\partial_x^\alpha f(x) \right| \leq{}& C_\alpha \varepsilon^{k+\mu-\left| \alpha \right|} \qquad \left| \alpha \right|\leq k, \\ \left| \partial_x^\alpha f_\varepsilon(x) \right| \leq{}& C_\alpha \varepsilon^{k+\mu-\left| \alpha \right|} \qquad \left| \alpha \right|\geq k+1, \end{aligned}$$ where $C_\alpha$ is independent of $\varepsilon$, but depends on $f$ for all $\alpha\in \mathbb{N}_0^d$.* **Lemma 11**. *Let $H_{\hbar} = -\hbar^2 \Delta +V$ be a Schrödinger operator acting in $L^2(\mathbb{R}^d)$ and suppose that $V$ satisfies Assumption [Assumption 4](#Assumption:local_potential){reference-type="ref" reference="Assumption:local_potential"} with the numbers $(\nu,k,\mu)$. Then for all $\varepsilon>0$ there exist two framing operators $H_{\hbar,\varepsilon}^{\pm}$ such that $$\label{LEEQ:framing_operators} H_{\hbar,\varepsilon}^{-} \leq H_\hbar \leq H_{\hbar,\varepsilon}^{+}$$ in the sense of quadratic forms. The operators $H_{\hbar,\varepsilon}^{\pm}$ are explicitly given by $H_{\hbar,\varepsilon}^{\pm} = -\hbar^2 \Delta +V_\varepsilon^{\pm}$, where $$V_\varepsilon^{\pm}(x) = V^1_\varepsilon(x) +V^2(x) \pm C\varepsilon^{k+\mu},$$ where the function $V^1_\varepsilon(x)$ is the smooth function from Proposition [Proposition 10](#PRO:smoothning_of_func){reference-type="ref" reference="PRO:smoothning_of_func"} associated to $V^1=V\varphi$ and $V^2=V(1-\varphi)$. The function $\varphi$ is chosen such that $\varphi\in C_0^\infty(\mathbb{R}^d)$ with $\varphi(x)=1$ for all $x\in \Omega_{3\nu,V}$ and $\mathop{\mathrm{supp}}(\varphi)\subset \Omega_{4\nu,V}$. Moreover, for all $\varepsilon>0$ sufficiently small there exists a $\tilde{\nu}>0$ such that $$\label{LEEQ:framing_operators2} \Omega_{4\tilde{\nu},V^{+}_\varepsilon} \cap \mathop{\mathrm{supp}}(V^2) = \emptyset \quad\text{and}\quad \Omega_{4\tilde{\nu},V^{-}_\varepsilon} \cap \mathop{\mathrm{supp}}(V^2) = \emptyset.$$* *Proof.* We start by letting $\varphi$ be as given in the statement of the lemma and set $V^1=V\varphi$ and $V^2=V(1-\varphi)$. By assumption we have that $V^1\in C^{k,\mu}_0(\mathbb{R}^d)$.
Hence for all $\varepsilon>0$ we get from Proposition [Proposition 10](#PRO:smoothning_of_func){reference-type="ref" reference="PRO:smoothning_of_func"} the existence of $V^1_\varepsilon(x)$ such that [\[EQ:pro.smoothning_of_func_bounds\]](#EQ:pro.smoothning_of_func_bounds){reference-type="eqref" reference="EQ:pro.smoothning_of_func_bounds"} is satisfied with $f$ replaced by $V^1$. We now let $$H_{\hbar,\varepsilon}= -\hbar^2\Delta + V^1_\varepsilon + V^2.$$ This operator is well defined and selfadjoint since both potentials are in $L^1_{loc}(\mathbb{R}^d)$. Moreover, $H_{\hbar,\varepsilon}$ and $H_{\hbar}$ have the same domain. For $f\in \mathcal{D}[H_\hbar]$ we then have that $$\label{EQ:framing_operators} \begin{aligned} \big|\langle H_\hbar f, f \rangle - \langle H_{\hbar,\varepsilon} f, f \rangle\big| &=\big| \langle (V^1 - V^1_\varepsilon) f, f \rangle \big| \\ &\leq \lVert V^1 - V^1_\varepsilon \rVert_{L^\infty(\mathbb{R}^d)} \lVert f \rVert^2_{L^2(\mathbb{R}^d)} \leq c\varepsilon^{k+\mu} \lVert f \rVert^2_{L^2(\mathbb{R}^d)}. \end{aligned}$$ Choosing the constant $C$ sufficiently large, we get from [\[EQ:framing_operators\]](#EQ:framing_operators){reference-type="eqref" reference="EQ:framing_operators"} that, letting $H_{\hbar,\varepsilon}^{\pm} = -\hbar^2 \Delta +V_\varepsilon^{\pm}$ with $V_\varepsilon^{\pm}(x) = V^1_\varepsilon(x) +V^2(x) \pm C\varepsilon^{k+\mu}$, the relation [\[LEEQ:framing_operators\]](#LEEQ:framing_operators){reference-type="eqref" reference="LEEQ:framing_operators"} is satisfied with this choice of operators. What remains is to establish [\[LEEQ:framing_operators2\]](#LEEQ:framing_operators2){reference-type="eqref" reference="LEEQ:framing_operators2"}. We have by construction that $$\lVert V-V_\varepsilon^{\pm}\rVert_{L^\infty(\mathbb{R}^d)} \leq C \varepsilon^{k+\mu}.$$ Hence if we choose $\tilde{\nu}\leq\frac{\nu}{2}$ and $\varepsilon$ sufficiently small we can ensure that $\Omega_{4\tilde{\nu},V^{+}_\varepsilon}\subset \Omega_{3\nu,V}$ and $\Omega_{4\tilde{\nu},V^{-}_\varepsilon}\subset \Omega_{3\nu,V}$. Since we have that $\mathop{\mathrm{supp}}(V^2) \subset \Omega_{3\nu,V}^c$ by construction it follows that with such a choice of $\tilde{\nu}$ and for $\varepsilon$ sufficiently small [\[LEEQ:framing_operators2\]](#LEEQ:framing_operators2){reference-type="eqref" reference="LEEQ:framing_operators2"} is true. This concludes the proof. ◻ *Remark 12*. We will in what follows for $\varepsilon >0$ call a potential $V_\varepsilon\in C_0^\infty(\mathbb{R}^d)$ a rough potential of regularity $\tau\geq0$ if $$\begin{aligned} \sup_{x\in\mathbb{R}^d} \big| \partial_x^\alpha V_\varepsilon(x)\big| \leq C_\alpha \varepsilon^{\min(0,\tau - |\alpha|)} \quad\text{for all $\alpha\in\mathbb{N}_0^d$} \end{aligned}$$ where the constants $C_\alpha$ are independent of $\varepsilon$. Moreover, we denote by a rough Schrödinger operator of regularity $\tau\geq0$ an operator of the form $$H_{\hbar,\varepsilon}=-\hbar^2\Delta + V + V_\varepsilon,$$ where $V\in L^1_{loc}(\mathbb{R}^d)$ and $V_\varepsilon$ is a rough potential of regularity $\tau$. *Remark 13*.
Assume we are in the setting of Lemma [Lemma 11](#LE:framing_operators){reference-type="ref" reference="LE:framing_operators"}; then it follows from Theorem [Theorem 1](#Thm:Frank){reference-type="ref" reference="Thm:Frank"} that there exists a constant $C>0$ such that $$\mathop{\mathrm{Tr}}\big[g_\gamma(H_{\hbar,\varepsilon}^{+} )\big] \leq \mathop{\mathrm{Tr}}\big[g_\gamma(H_\hbar )\big] \leq \mathop{\mathrm{Tr}}\big[g_\gamma(H_{\hbar,\varepsilon}^{-} )\big] \leq C\hbar^{-d}$$ for $\hbar>0$, $\varepsilon>0$ sufficiently small. The constant $C$ only depends on the dimension, the set $\Omega_{4\nu,V}$ and $\min(V)$. The first two inequalities follow from the min-max principle. For the third inequality we can choose a potential $V^{min}$ such that $$V^{min}(x) = \begin{cases} \min(V) - 1 &\text{if $x\in \Omega_{4\nu,V}$} \\ 0 & \text{if $x\notin \Omega_{4\nu,V}$}. \end{cases}$$ Then when we consider the operator $H_\hbar^{min}=-\hbar^2 \Delta + V^{min}$, defined as a Friedrichs extension of the associated form, we have that $$H_\hbar^{min}\leq H_{\hbar,\varepsilon}^{-}$$ in the sense of quadratic forms. Hence using the min-max principle and Theorem [Theorem 1](#Thm:Frank){reference-type="ref" reference="Thm:Frank"} we obtain that $$\begin{aligned} \mathop{\mathrm{Tr}}\big[g_\gamma(H_{\hbar,\varepsilon}^{-} )\big] \leq {}& \mathop{\mathrm{Tr}}\big[g_\gamma(H_\hbar^{min} )\big] \\ \leq {}& \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g_\gamma(p^2+V^{min}(x)) \,dx dp + \tilde{C} \hbar^{-d} \leq C\hbar^{-d}, \end{aligned}$$ where the constant $C$ only depends on the dimension, the set $\Omega_{4\nu,V}$ and $\min(V)$. # Auxiliary results and model problem {#SEC:aux} Inspired by the form of the framing operators we will make the following assumption. This assumption is essentially the assumption that appears in [@MR1343781] but with a rough potential and no magnetic field. **Assumption 14**. Let $\mathcal{H}_{\hbar,\varepsilon}$ be an operator acting in $L^2(\mathbb{R}^d)$, where $\hbar,\varepsilon>0$. Suppose that 1. [\[G.L.1.1\]]{#G.L.1.1 label="G.L.1.1"} $\mathcal{H}_{\hbar,\varepsilon}$ is self-adjoint and lower semibounded. 2. [\[G.L.1.2\]]{#G.L.1.2 label="G.L.1.2"} Suppose there exists an open set $\Omega\subset\mathbb{R}^d$ and a rough potential $V_\varepsilon\in C^{\infty}_0(\mathbb{R}^d)$ of regularity $\tau\geq0$ such that $C_0^\infty(\Omega)\subset\mathcal{D}(\mathcal{H}_{\hbar,\varepsilon})$ and $$\mathcal{H}_{\hbar,\varepsilon} \varphi = H_{\hbar,\varepsilon}\varphi \quad\text{for all $\varphi\in C_0^\infty(\Omega)$},$$ where $H_{\hbar,\varepsilon}= -\hbar^2\Delta + V_\varepsilon$. For these operators we will establish our model problem. The first auxiliary result we will need was established in [@mikkelsen2023sharp], where it is Lemma 4.6. It is almost the full model problem, except that we consider only the operator $H_{\hbar,\varepsilon}$ and not the general operator $\mathcal{H}_{\hbar,\varepsilon}$. **Lemma 15**. *Let $\gamma\in[0,1]$ and $H_{\hbar,\varepsilon} = -\hbar^2\Delta +V_\varepsilon$ be a rough Schrödinger operator acting in $L^2(\mathbb{R}^d)$ of regularity $\tau\geq1$ if $\gamma=0$ and regularity $\tau\geq2$ if $\gamma>0$ with $\hbar\in(0,\hbar_0]$, $\hbar_0$ sufficiently small. Assume that $V_\varepsilon \in C_0^\infty(\mathbb{R}^d)$ and there exists a $\delta\in(0,1]$ such that $\varepsilon\geq\hbar^{1-\delta}$.
Suppose there is an open set $\Omega \subset \mathop{\mathrm{supp}}(V_\varepsilon)$ and a $c>0$ such that $$|V_\varepsilon(x)| +\hbar^{\frac{2}{3}} \geq c \qquad\text{for all $x\in \Omega$}.$$ Then for $\varphi\in C_0^\infty(\Omega)$ it holds that $$\Big|\mathop{\mathrm{Tr}}\big[\varphi g_\gamma(H_{\hbar,\varepsilon})\big] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g_\gamma(p^2+V_\varepsilon(x))\varphi(x) \,dx dp \Big| \leq C \hbar^{1+\gamma-d},$$ where the constant $C$ depends only on the dimension and the numbers $\lVert \partial^\alpha \varphi \rVert_{L^\infty(\mathbb{R}^d)}$ and $\varepsilon^{-\min(0,\tau-|\alpha|)}\lVert \partial^\alpha V_\varepsilon \rVert_{L^\infty(\Omega)}$ for all $\alpha\in \mathbb{N}_0^d$.* What remains in order for us to be able to prove our model problem is to establish that $\mathop{\mathrm{Tr}}\big[\varphi g_\gamma(H_{\hbar,\varepsilon})\big]$ and $\mathop{\mathrm{Tr}}\big[\varphi g_\gamma(\mathcal{H}_{\hbar,\varepsilon})\big]$ are close. To do this we will need some additional notation and results. *Remark 16*. In order to prove Lemma [Lemma 15](#LE:Aux_weyl_law){reference-type="ref" reference="LE:Aux_weyl_law"} as done in [@mikkelsen2023sharp] one needs to understand the Schrödinger propagator $e^{i\hbar^{-1}t H_{\hbar,\varepsilon}}$ associated to $H_{\hbar,\varepsilon}$. Under the assumptions of the lemma we can find an operator with an explicit kernel that locally approximates $e^{i\hbar^{-1}t H_{\hbar,\varepsilon}}$ in a suitable sense. This local construction is only valid for times of order $\hbar^{1-\frac{\delta}{2}}$. But if we locally have a non-critical condition the approximation can be extended to a small time interval $[-T_0,T_0]$, where $T_0$ is independent of $\hbar$. For further details see [@mikkelsen2022optimal]. In the following we will refer to this remark and the number $T_0$. *Remark 17*. Let $T\in(0,T_0]$ and $\hat{\chi}\in C_0^\infty((-T,T))$ be a real valued function such that $\hat{\chi}(s)=\hat{\chi}(-s)$ and $\hat{\chi}(s)=1$ for all $s\in(-\frac{T}{2},\frac{T}{2})$. Here $T_0$ is the number from Remark [Remark 16](#RE:propagator){reference-type="ref" reference="RE:propagator"}. Define $$\chi_1(t) = \frac{1}{2\pi} \int_{\mathbb{R}} \hat{\chi}(s) e^{ist} \,ds.$$ We assume that $\chi_1(t)\geq 0$ for all $t\in\mathbb{R}$ and there exist $T_1\in(0,T)$ and $c>0$ such that $\chi_1(t)\geq c$ for all $t\in[-T_1,T_1]$. We can guarantee these assumptions by (possibly) replacing $\hat{\chi}$ by $\hat{\chi}*\hat{\chi}$. We will by $\chi_\hbar(t)$ denote the function $$\chi_\hbar(t) = \tfrac{1}{\hbar} \chi_1(\tfrac{t}{\hbar}).$$ Moreover for any function $g\in L^1_{loc}(\mathbb{R})$ we will use the notation $$g^{(\hbar)}(t) =g*\chi_\hbar(t) = \int_\mathbb{R}g(s) \chi_\hbar(t-s)\,ds.$$ Before we proceed we will just recall the following classes of functions. These classes first appeared in [@MR1272980]. **Definition 18**. A function $g\in C^\infty(\mathbb{R}\setminus\{0\})$ is said to belong to the class $C^{\infty,\gamma}(\mathbb{R})$, $\gamma\in[0,1]$, if $g\in C(\mathbb{R})$ for $\gamma>0$ and if for some constants $C >0$ and $r>0$ it holds that $$\begin{aligned} g(t) &= 0, \qquad\text{for all $t\geq C$} \\ |\partial_t^m g(t)| &\leq C_m |t|^r, \qquad\text{for all $m\in\mathbb{N}_0$ and $t\leq -C$} \\ |\partial_t^m g(t)| &\leq \begin{cases} C_m & \text{if $\gamma=0,1$} \\ C_m|t|^{\gamma-m} &\text{if $\gamma\in(0,1)$} \end{cases}, \qquad\text{for all $m\in\mathbb{N}$ and $t\in [ -C,C]\setminus\{0\}$}. 
\end{aligned}$$ A function $g$ is said to belong to $C^{\infty,\gamma}_0(\mathbb{R})$ if $g\in C^{\infty,\gamma}(\mathbb{R})$ and $g$ has compact support. With this notation set up we recall the following Tauberian type result, which can be found in [@MR1343781], where it is Proposition 2.8. **Proposition 19**. *Let $A$ be a selfadjoint operator acting in a Hilbert space $\mathscr{H}$ and $g\in C_{0}^{\infty,\gamma}(\mathbb{R})$. Let $\chi_1$ be defined as in Remark [Remark 17](#RE:mollyfier_def){reference-type="ref" reference="RE:mollyfier_def"}. If for a Hilbert-Schmidt operator $B$ $$\label{EQ:PRO:Tauberian_1} \sup_{t\in \mathcal{D}(\delta)} \lVert B^{*}\chi_\hbar (A-t) B \rVert_1 \leq Z(\hbar),$$ where $\mathcal{D}(\delta) = \{ t \in\mathbb{R}\,|\, \mathop{\mathrm{dist}}(\mathop{\mathrm{supp}}(g),t)\leq \delta \}$, $Z$ is some positive function and $\delta$ a strictly positive number, then it holds that $$\lVert B^{*}(g(A)-g^{(\hbar)}(A)) B \rVert_1 \leq C \hbar^{1+\gamma} Z(\hbar) + C_{N}' \hbar^N \lVert B^{*}B \rVert_1 \quad\text{for all $N\in\mathbb{N}$},$$ where the constants $C$ and $C_{N}'$ depend on the number $\delta$ and the functions $g$ and $\chi_1$ only.* The following two lemmas can be found in [@mikkelsen2023sharp], where they are Lemma 4.3 and Lemma 4.5, respectively. In the first lemma stated here we have included an additional estimate, which is established as part of the proof of Lemma 4.3 in [@mikkelsen2023sharp]. **Lemma 20**. *Let $\mathcal{H}_{\hbar,\varepsilon}$ be an operator acting in $L^2(\mathbb{R}^d)$ which satisfies Assumption [Assumption 14](#Assumption:local_potential_1){reference-type="ref" reference="Assumption:local_potential_1"} with the open set $\Omega$ and let $H_{\hbar,\varepsilon} = -\hbar^2 \Delta +V_\varepsilon$ be the associated rough Schrödinger operator of regularity $\tau\geq1$. Assume that $\hbar\in(0,\hbar_0]$, with $\hbar_0$ sufficiently small. Then for $f\in C_0^\infty(\mathbb{R})$ and $\varphi\in C_0^\infty(\Omega)$ we have for any $N\in\mathbb{N}_0$ that $$\begin{aligned} \lVert \varphi[f(\mathcal{H}_{\hbar,\varepsilon})-f(H_{\hbar,\varepsilon})] \rVert_1 &\leq C_N \hbar^N, \label{EQLE:Comparision_Loc_infty_1} \\ \big \lVert \varphi [(z-\mathcal{H}_{\hbar,\varepsilon})^{-1}-(z-H_{\hbar,\varepsilon})^{-1}] \big\rVert_1 &\leq C_N \frac{\langle z \rangle^{N+\frac{d+1}{2}} \hbar^{2N-d}}{|\mathop{\mathrm{\mathrm{Im}}}(z)|^{2N+2}} , \qquad\text{$z\in \mathbb{C}\setminus\mathbb{R}$} \label{EQLE:Comparision_Loc_infty_2} \shortintertext{and} \lVert \varphi f(\mathcal{H}_{\hbar,\varepsilon}) \rVert_1&\leq C \hbar^{-d}.\label{EQLE:Comparision_Loc_infty_3}\end{aligned}$$ The constant $C_N$ depends only on the numbers $N$, $\lVert f \rVert_{L^\infty(\mathbb{R})}$, $\lVert \partial^\alpha\varphi \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}_0^d$ and the constant $c$.* **Lemma 21**. *Let $H_{\hbar,\varepsilon} = -\hbar^2\Delta +V_\varepsilon$ be a rough Schrödinger operator acting in $L^2(\mathbb{R}^d)$ of regularity $\tau\geq1$ with $\hbar\in(0,\hbar_0]$, $\hbar_0$ sufficiently small. Assume that $V_\varepsilon \in C_0^\infty(\mathbb{R}^d)$ and there exists a $\delta\in(0,1]$ such that $\varepsilon\geq\hbar^{1-\delta}$.
Suppose there is an open set $\Omega \subset \mathop{\mathrm{supp}}(V_\varepsilon)$ and a $c>0$ such that $$|V_\varepsilon(x)| +\hbar^{\frac{2}{3}} \geq c \qquad\text{for all $x\in \Omega$}.$$ Let $\chi_\hbar(t)$ be the function from Remark [Remark 17](#RE:mollyfier_def){reference-type="ref" reference="RE:mollyfier_def"}, $f\in C_0^\infty(\mathbb{R})$ and $\varphi\in C_0^\infty(\Omega)$; then it holds for $s\in\mathbb{R}$ that $$\lVert \varphi f(H_{\hbar,\varepsilon}) \chi_\hbar(H_{\hbar,\varepsilon}-s)f(H_{\hbar,\varepsilon}) \varphi \rVert_1 \leq C \hbar^{-d}.$$ The constant $C$ depends only on the dimension and the numbers $\lVert f \rVert_{L^\infty(\mathbb{R})}$, $\lVert \partial^\alpha\varphi \rVert_{L^\infty(\mathbb{R})}$ and $\varepsilon^{-\min(0,\tau-|\alpha|)}\lVert \partial^\alpha V_\varepsilon \rVert_{L^\infty(\Omega)}$ for all $\alpha\in\mathbb{N}_0^d$.* The following two lemmas are similar to Lemma 4.7 and Lemma 4.8 from [@mikkelsen2023sharp]. However, due to some differences in the assumptions and the proofs we will give the proofs here. **Lemma 22**. *Let $\mathcal{H}_{\hbar,\varepsilon}$ be an operator acting in $L^2(\mathbb{R}^d)$ satisfying Assumption [Assumption 14](#Assumption:local_potential_1){reference-type="ref" reference="Assumption:local_potential_1"} with the open set $\Omega$ and let $H_{\hbar,\varepsilon} = -\hbar^2 \Delta +V_\varepsilon$ be the associated rough Schrödinger operator of regularity $\tau\geq1$. Assume that $\hbar\in(0,\hbar_0]$, with $\hbar_0$ sufficiently small and there exists a $\delta\in(0,1]$ such that $\varepsilon\geq\hbar^{1-\delta}$. Moreover, let $\chi_\hbar(t)$ be the function from Remark [Remark 17](#RE:mollyfier_def){reference-type="ref" reference="RE:mollyfier_def"}, $f\in C_0^\infty(\mathbb{R})$ and $\varphi\in C_0^\infty(\Omega)$; then it holds for $s\in\mathbb{R}$ and $N\in\mathbb{N}$ that $$\label{EQLE:Func_moll_com_model_reg} \lVert \varphi f(\mathcal{H}_{\hbar,\varepsilon}) \chi_\hbar(\mathcal{H}_{\hbar,\varepsilon}-s)f(\mathcal{H}_{\hbar,\varepsilon}) \varphi -\varphi f(H_{\hbar,\varepsilon}) \chi_\hbar(H_{\hbar,\varepsilon}-s)f(H_{\hbar,\varepsilon}) \varphi \rVert_1 \leq C_N \hbar^N.$$ Moreover, suppose there exists some $c>0$ such that $$|V_\varepsilon(x)| +\hbar^{\frac{2}{3}} \geq c \qquad\text{for all $x\in \Omega$}.$$ Then it holds that $$\label{LEEQ:Func_moll_com_model_reg} \lVert \varphi f(\mathcal{H}_{\hbar,\varepsilon}) \chi_\hbar(\mathcal{H}_{\hbar,\varepsilon}-s)f(\mathcal{H}_{\hbar,\varepsilon}) \varphi \rVert_1 \leq C \hbar^{-d}.$$ The constant $C_N$ depends only on the dimension and the numbers $\lVert f \rVert_{L^\infty(\mathbb{R})}$, $\lVert \partial^\alpha\varphi \rVert_{L^\infty(\mathbb{R})}$ and $\varepsilon^{-\min(0,\tau-|\alpha|)}\lVert \partial^\alpha V_\varepsilon \rVert_{L^\infty(\Omega)}$ for all $\alpha\in\mathbb{N}_0^d$.* *Proof.* We start by making the following estimate $$\label{EQ:Func_moll_com_model_reg_1} \begin{aligned} \MoveEqLeft \lVert \varphi f(\mathcal{H}_{\hbar,\varepsilon}) \chi_\hbar(\mathcal{H}_{\hbar,\varepsilon}-s)f(\mathcal{H}_{\hbar,\varepsilon}) \varphi -\varphi f(H_{\hbar,\varepsilon}) \chi_\hbar(H_{\hbar,\varepsilon}-s)f(H_{\hbar,\varepsilon}) \varphi \rVert_1 \\ \leq{}& C\hbar^{-d} \lVert \varphi f(\mathcal{H}_{\hbar,\varepsilon}) \chi_\hbar(\mathcal{H}_{\hbar,\varepsilon}-s) -\varphi f(H_{\hbar,\varepsilon}) \chi_\hbar(H_{\hbar,\varepsilon}-s) \rVert_{\mathrm{op}} + C\hbar^{-1} 
\lVert f(\mathcal{H}_{\hbar,\varepsilon}) \varphi -f(H_{\hbar,\varepsilon}) \varphi \rVert_1 \\ \leq{}& C\hbar^{-d}\lVert \varphi f(\mathcal{H}_{\hbar,\varepsilon}) \chi_\hbar(\mathcal{H}_{\hbar,\varepsilon}-s) -\varphi f(H_{\hbar,\varepsilon}) \chi_\hbar(H_{\hbar,\varepsilon}-s) \rVert_{\mathrm{op}} + C \hbar^N, \end{aligned}$$ where we in the first inequality have added and subtracted the term $\varphi f(H_{\hbar,\varepsilon}) \chi_\hbar(H_{\hbar,\varepsilon}-s)f(\mathcal{H}_{\hbar,\varepsilon}) \varphi$, used the triangle inequality, used estimate [\[EQLE:Comparision_Loc_infty_3\]](#EQLE:Comparision_Loc_infty_3){reference-type="eqref" reference="EQLE:Comparision_Loc_infty_3"} from Lemma [Lemma 20](#LE:Comparision_Loc_infty){reference-type="ref" reference="LE:Comparision_Loc_infty"} and that $\sup_{t\in\mathbb{R}}\chi_\hbar(t) \leq C\hbar^{-1}$. In the second inequality we have used estimate [\[EQLE:Comparision_Loc_infty_1\]](#EQLE:Comparision_Loc_infty_1){reference-type="eqref" reference="EQLE:Comparision_Loc_infty_1"} from Lemma [Lemma 20](#LE:Comparision_Loc_infty){reference-type="ref" reference="LE:Comparision_Loc_infty"}. We observe that $\chi_\hbar(z-s)$ is the inverse Fourier transform of a smooth compactly supported function. Hence it is holomorphic in $z$. Using the Helffer-Sjöstrand formula (Theorem [Theorem 9](#THM:Helffer-Sjostrand){reference-type="ref" reference="THM:Helffer-Sjostrand"}) we get that $$\label{EQ:Func_moll_com_model_reg_2} \begin{aligned} \MoveEqLeft \varphi f(\mathcal{H}_{\hbar,\varepsilon}) \chi_\hbar(\mathcal{H}_{\hbar,\varepsilon}-s) -\varphi f(H_{\hbar,\varepsilon}) \chi_\hbar(H_{\hbar,\varepsilon}-s) \\ &=- \frac{1}{\pi} \int_\mathbb{C}\bar{\partial }\tilde{f}(z) \chi_\hbar(z-s) \varphi [(z-\mathcal{H}_{\hbar,\varepsilon})^{-1}-(z-H_{\hbar,\varepsilon})^{-1}] \, L(dz), \end{aligned}$$ where $\tilde{f}$ is an almost analytic extension of $f$. Estimate [\[EQLE:Comparision_Loc_infty_2\]](#EQLE:Comparision_Loc_infty_2){reference-type="eqref" reference="EQLE:Comparision_Loc_infty_2"} of Lemma [Lemma 20](#LE:Comparision_Loc_infty){reference-type="ref" reference="LE:Comparision_Loc_infty"} gives us that $$\label{EQ:Func_moll_com_model_reg_3} \big \lVert \varphi [(z-\mathcal{H}_{\hbar,\varepsilon})^{-1}-(z-H_{\hbar,\varepsilon})^{-1}] \big\rVert_{\mathrm{op}} \leq C_N \frac{\langle z \rangle^{N+\frac{d+1}{2}} \hbar^{2N-d}}{|\mathop{\mathrm{\mathrm{Im}}}(z)|^{2N+2}},$$ since the trace norm dominates the operator norm. Combining [\[EQ:Func_moll_com_model_reg_2\]](#EQ:Func_moll_com_model_reg_2){reference-type="eqref" reference="EQ:Func_moll_com_model_reg_2"}, [\[EQ:Func_moll_com_model_reg_3\]](#EQ:Func_moll_com_model_reg_3){reference-type="eqref" reference="EQ:Func_moll_com_model_reg_3"} and using the properties of $\tilde{f}$ and $\chi_\hbar$ we obtain that $$\label{EQ:Func_moll_com_model_reg_4} \begin{aligned} \lVert \varphi f(\mathcal{H}_{\hbar,\varepsilon}) \chi_\hbar(\mathcal{H}_{\hbar,\varepsilon}-s) -\varphi f(H_{\hbar,\varepsilon}) \chi_\hbar(H_{\hbar,\varepsilon}-s) \rVert_{\mathrm{op}} \leq C_N\hbar^N, \end{aligned}$$ where $C_N$ depends on the dimension, the number $N$ and the functions $f$, $\varphi$. 
Combining the estimates in [\[EQ:Func_moll_com_model_reg_1\]](#EQ:Func_moll_com_model_reg_1){reference-type="eqref" reference="EQ:Func_moll_com_model_reg_1"} and [\[EQ:Func_moll_com_model_reg_4\]](#EQ:Func_moll_com_model_reg_4){reference-type="eqref" reference="EQ:Func_moll_com_model_reg_4"} we obtain that $$\label{EQ:Func_moll_com_model_reg_5} \begin{aligned} \lVert \varphi f(\mathcal{H}_{\hbar,\varepsilon}) \chi_\hbar(\mathcal{H}_{\hbar,\varepsilon}-s)f(\mathcal{H}_{\hbar,\varepsilon}) \varphi -\varphi f(H_{\hbar,\varepsilon}) \chi_\hbar(H_{\hbar,\varepsilon}-s)f(H_{\hbar,\varepsilon}) \varphi \rVert_1 \leq C \hbar^N. \end{aligned}$$ This concludes the first part of the proof. By combining the estimate in [\[EQLE:Func_moll_com_model_reg\]](#EQLE:Func_moll_com_model_reg){reference-type="eqref" reference="EQLE:Func_moll_com_model_reg"} with Lemma [Lemma 21](#LE:assump_est_func_loc){reference-type="ref" reference="LE:assump_est_func_loc"} we obtain the estimate [\[LEEQ:Func_moll_com_model_reg\]](#LEEQ:Func_moll_com_model_reg){reference-type="eqref" reference="LEEQ:Func_moll_com_model_reg"}. This concludes the proof. ◻ **Lemma 23**. *Let $\gamma\in[0,1]$ and $\mathcal{H}_{\hbar,\varepsilon}$ be an operator acting in $L^2(\mathbb{R}^d)$. Suppose $\mathcal{H}_{\hbar,\varepsilon}$ satisfies Assumption [Assumption 14](#Assumption:local_potential_1){reference-type="ref" reference="Assumption:local_potential_1"} with the open set $\Omega$ and let $H_{\hbar,\varepsilon} = -\hbar^2 \Delta +V_\varepsilon$ be the associated rough Schrödinger operator of regularity $\tau\geq1$ if $\gamma=0$ and $\tau\geq2$ if $\gamma>0$. Assume that $\hbar\in(0,\hbar_0]$, with $\hbar_0$ sufficiently small and there exists a $\delta\in(0,1]$ such that $\varepsilon\geq\hbar^{1-\delta}$. Moreover, suppose there exists some $c>0$ such that $$|V_\varepsilon(x)| +\hbar^{\frac{2}{3}} \geq c \qquad\text{for all $x\in \Omega$}.$$ Then for $\varphi\in C_0^\infty(\Omega)$ it holds that $$\Big|\mathop{\mathrm{Tr}}\big[\varphi g_\gamma(\mathcal{H}_{\hbar,\varepsilon})\big] -\mathop{\mathrm{Tr}}\big[\varphi g_\gamma(H_{\hbar,\varepsilon})\big] \Big| \leq C \hbar^{1+\gamma-d} + C_N' \hbar^{N}.$$ The constants $C$ and $C_N'$ depend on the dimension and the numbers $\lVert f \rVert_{L^\infty(\mathbb{R})}$, $\lVert \partial^\alpha\varphi \rVert_{L^\infty(\mathbb{R})}$ and $\varepsilon^{-\min(0,\tau-|\alpha|)}\lVert \partial^\alpha V_\varepsilon \rVert_{L^\infty(\Omega)}$ for all $\alpha\in\mathbb{N}_0^d$.* *Proof.* Since both operators are lower semi-bounded we may assume that $g_\gamma$ is compactly supported. Let $f\in C_0^\infty(\mathbb{R})$ such that $f(t)g_\gamma(t)= g_\gamma(t)$ for all $t\in\mathbb{R}$ and let $\varphi_1\in C_0^\infty(\Omega)$ such that $\varphi(x)\varphi_1(x) = \varphi(x)$ for all $x\in\mathbb{R}^d$. Moreover, let $\chi_\hbar(t)$ be the function from Remark [Remark 17](#RE:mollyfier_def){reference-type="ref" reference="RE:mollyfier_def"} and set $g_\gamma^{(\hbar)}(t) = g_\gamma*\chi_\hbar(t)$. 
With this notation set up we have that $$\label{EQ:trace_com_model_reg_1} \begin{aligned} \MoveEqLeft \Big|\mathop{\mathrm{Tr}}\big[\varphi g_\gamma(\mathcal{H}_{\hbar,\varepsilon})\big] -\mathop{\mathrm{Tr}}\big[\varphi g_\gamma(H_{\hbar,\varepsilon})\big] \Big| \\ \leq {}& \lVert \varphi \varphi_1f(\mathcal{H}_{\hbar,\varepsilon}) (g_\gamma(\mathcal{H}_{\hbar,\varepsilon}) - g_\gamma^{(\hbar)}(\mathcal{H}_{\hbar,\varepsilon}))f(\mathcal{H}_{\hbar,\varepsilon})\varphi_1 \rVert_1 \\ &+\lVert \varphi \varphi_1 f(H_{\hbar,\varepsilon}) (g_\gamma(H_{\hbar,\varepsilon})-g_\gamma^{(\hbar)}(H_{\hbar,\varepsilon}))f(H_{\hbar,\varepsilon})\varphi_1 \rVert_1 + \lVert \varphi \rVert_{L^\infty(\mathbb{R}^d)} \int_{\mathbb{R}} g_\gamma(s) \,ds \\ &\times \sup_{s\in\mathbb{R}} \lVert\varphi \varphi_1 f(\mathcal{H}_{\hbar,\varepsilon}) \chi_\hbar(\mathcal{H}_{\hbar,\varepsilon}-s)f(\mathcal{H}_{\hbar,\varepsilon}) \varphi_1 -\varphi_1 f(H_{\hbar,\varepsilon}) \chi_\hbar(H_{\hbar,\varepsilon}-s)f(H_{\hbar,\varepsilon}) \varphi_1\rVert_1. \end{aligned}$$ Lemma [Lemma 21](#LE:assump_est_func_loc){reference-type="ref" reference="LE:assump_est_func_loc"} and Lemma [Lemma 22](#LE:Func_moll_com_model_reg){reference-type="ref" reference="LE:Func_moll_com_model_reg"} give us that the assumptions of Proposition [Proposition 19](#PRO:Tauberian){reference-type="ref" reference="PRO:Tauberian"} are fulfilled with $B$ equal to $\varphi_1 f(\mathcal{H}_{\hbar,\varepsilon})$ and $\varphi_1 f(H_{\hbar,\varepsilon})$, respectively. Hence we have that $$\label{EQ:trace_com_model_reg_2} \begin{aligned} \lVert \varphi \varphi_1f(\mathcal{H}_{\hbar,\varepsilon}) (g_\gamma(\mathcal{H}_{\hbar,\varepsilon}) - g_\gamma^{(\hbar)}(\mathcal{H}_{\hbar,\varepsilon}))f(\mathcal{H}_{\hbar,\varepsilon})\varphi_1 \rVert_1 \leq C \hbar^{1+\gamma-d} \end{aligned}$$ and $$\label{EQ:trace_com_model_reg_3} \begin{aligned} \lVert \varphi \varphi_1 f(H_{\hbar,\varepsilon}) (g_\gamma(H_{\hbar,\varepsilon})-g_\gamma^{(\hbar)}(H_{\hbar,\varepsilon}))f(H_{\hbar,\varepsilon})\varphi_1 \rVert_1 \leq C \hbar^{1+\gamma-d}. \end{aligned}$$ From applying Lemma [Lemma 22](#LE:Func_moll_com_model_reg){reference-type="ref" reference="LE:Func_moll_com_model_reg"} we get for all $N\in\mathbb{N}$ that $$\label{EQ:trace_com_model_reg_4} \begin{aligned} \MoveEqLeft \sup_{s\in\mathbb{R}} \lVert\varphi \varphi_1 f(\mathcal{H}_{\hbar,\varepsilon}) \chi_\hbar(\mathcal{H}_{\hbar,\varepsilon}-s)f(\mathcal{H}_{\hbar,\varepsilon}) \varphi_1 -\varphi_1 f(H_{\hbar,\varepsilon}) \chi_\hbar(H_{\hbar,\varepsilon}-s)f(H_{\hbar,\varepsilon}) \varphi_1\rVert_1 \\ &\leq C_N\hbar^N. \end{aligned}$$ Finally, combining the estimates in [\[EQ:trace_com_model_reg_1\]](#EQ:trace_com_model_reg_1){reference-type="eqref" reference="EQ:trace_com_model_reg_1"}, [\[EQ:trace_com_model_reg_2\]](#EQ:trace_com_model_reg_2){reference-type="eqref" reference="EQ:trace_com_model_reg_2"}, [\[EQ:trace_com_model_reg_3\]](#EQ:trace_com_model_reg_3){reference-type="eqref" reference="EQ:trace_com_model_reg_3"} and [\[EQ:trace_com_model_reg_4\]](#EQ:trace_com_model_reg_4){reference-type="eqref" reference="EQ:trace_com_model_reg_4"} we obtain the desired estimate. This concludes the proof. ◻ For operators that satisfy Assumption [Assumption 14](#Assumption:local_potential_1){reference-type="ref" reference="Assumption:local_potential_1"} we can establish the following model theorem. The proof of the theorem is similar to the proof of Theorem 5.1 in [@mikkelsen2023sharp]. **Theorem 24**. 
*Let $\gamma\in[0,1]$ and $\mathcal{H}_{\hbar,\varepsilon}$ be an operator acting in $L^2(\mathbb{R}^d)$. Suppose $\mathcal{H}_{\hbar,\varepsilon}$ satisfies Assumption [Assumption 14](#Assumption:local_potential_1){reference-type="ref" reference="Assumption:local_potential_1"} with the open set $\Omega$ and let $H_{\hbar,\varepsilon} = -\hbar^2 \Delta +V_\varepsilon$ be the associated rough Schrödinger operator of regularity $\tau\geq1$ if $\gamma=0$ and $\tau\geq2$ if $\gamma>0$. Assume that $\hbar\in(0,\hbar_0]$, with $\hbar_0$ sufficiently small and there exists a $\delta\in(0,1]$ such that $\varepsilon\geq\hbar^{1-\delta}$. Moreover, suppose there exists some $c>0$ such that $$\label{THM:model_prob_global_Non_crit} |V_\varepsilon(x)| +\hbar^{\frac{2}{3}} \geq c \qquad\text{for all $x\in\Omega$}.$$ Then for any $\varphi\in C_0^\infty(\Omega)$ it holds that $$\Big|\mathop{\mathrm{Tr}}\big[\varphi g_\gamma(\mathcal{H}_{\hbar,\varepsilon})\big] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g_\gamma(p^2+V_\varepsilon(x)) \varphi(x) \,dx dp \Big| \leq C \hbar^{1+\gamma-d},$$ where the constant $C$ depends only on the dimension and the numbers $\lVert \partial^\alpha \varphi \rVert_{L^\infty(\mathbb{R}^d)}$ and $\varepsilon^{-\min(0,\tau-|\alpha|)}\lVert \partial^\alpha V_\varepsilon \rVert_{L^\infty(\Omega)}$ for all $\alpha\in\mathbb{N}_0^d$.* *Proof.* Firstly observe that under the assumptions of this theorem we have that $\mathcal{H}_{\hbar,\varepsilon}$ and $H_{\hbar,\varepsilon}$ satisfy the assumptions of Lemma [Lemma 23](#LE:trace_com_model_reg){reference-type="ref" reference="LE:trace_com_model_reg"}. Moreover, we have that $H_{\hbar,\varepsilon}$ satisfies the assumptions of Lemma [Lemma 15](#LE:Aux_weyl_law){reference-type="ref" reference="LE:Aux_weyl_law"}. Hence from applying Lemma [Lemma 23](#LE:trace_com_model_reg){reference-type="ref" reference="LE:trace_com_model_reg"} and Lemma [Lemma 15](#LE:Aux_weyl_law){reference-type="ref" reference="LE:Aux_weyl_law"} we obtain that $$\begin{aligned} \MoveEqLeft \Big|\mathop{\mathrm{Tr}}\big[\varphi g_\gamma(\mathcal{H}_{\hbar,\varepsilon})\big] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g_\gamma(p^2+V_\varepsilon(x)) \varphi(x) \,dx dp \Big| \\ \leq{}& \Big|\mathop{\mathrm{Tr}}\big[\varphi g_\gamma(\mathcal{H}_{\hbar,\varepsilon})\big]- \mathop{\mathrm{Tr}}\big[\varphi g_\gamma(H_{\hbar,\varepsilon})\big] \Big| + \Big|\mathop{\mathrm{Tr}}\big[\varphi g_\gamma(H_{\hbar,\varepsilon})\big] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g_\gamma(p^2+V_\varepsilon(x)) \varphi(x) \,dx dp \Big| \\ \leq{}& C \hbar^{1+\gamma-d}. \end{aligned}$$ This concludes the proof. ◻ # Towards a proof of the main theorem {#SEC:proof_main} At the end of this section we will prove our main theorem. But before we do this we will first prove some lemmas that we will need in the proof. The first is a lemma that allows us to localise the trace we consider. The second one is a comparison of phase-space integrals. ## Localisation of traces and comparison of phase-space integrals Before we state the lemma on localisation of the trace we recall the following Agmon type estimate from [@MR4182014], where it is Lemma A.1. **Lemma 25**. 
*Let $H_\hbar=-\hbar^2\Delta +V$ be a Schrödinger operator acting in $L^2(\mathbb{R}^d)$, where $V$ is in $L^1_{loc}(\mathbb{R}^d)$, and suppose that there exist a $\nu>0$ and an open bounded set $U$ such that $$\label{EQ:Agmon_type_lem} V(x) \geq {\nu} \quad \text{when } x\in U^c.$$ Let $d(x) = \mathop{\mathrm{dist}}(x,U_a)$, where $$U_a = \{ x\in\mathbb{R}^d \, |\, \mathop{\mathrm{dist}}(x,U)<a\}$$ and let $\psi$ be a normalised solution to the equation $$H_\hbar\psi=E\psi,$$ with $E<\nu/4$. Then there exists a $C>0$ such that $$\lVert e^{\delta \hbar^{-1} d } \psi \rVert_{L^2(\mathbb{R}^d)} \leq C,$$ for $\delta=\tfrac{\sqrt{\nu}}{8}$. The constant $C$ depends on $a$ and is uniform in $V$, $\nu$ and $U$ satisfying [\[EQ:Agmon_type_lem\]](#EQ:Agmon_type_lem){reference-type="eqref" reference="EQ:Agmon_type_lem"}.* In the formulation of the lemma we have presented here, we consider $U_a$ for $a>0$ and not just $U_1$ as in [@MR4182014]. There is no difference in the proof. However, one thing to remark is that the constant $C$ will diverge to infinity as $a$ tends to $0$. Moreover, we have in the statement highlighted the uniformity of the constant in the potential $V$, the number $\nu$ and the set $U$. That the constant is indeed uniform in these follows directly from the proof given in [@MR4182014]. **Lemma 26**. *Let $\gamma\in[0,1]$ and $H_\hbar=-\hbar^2\Delta +V$ be a Schrödinger operator acting in $L^2(\mathbb{R}^d)$, where $V$ is in $L^1_{loc}(\mathbb{R}^d)$, and suppose that there exist a $\nu>0$ and an open bounded set $U$ such that $V(x)\boldsymbol{1}_U(x)\in L^{\gamma+\frac{d}{2}}(\mathbb{R}^d)$ and $$\label{EQLE:localise_trace} V(x) \geq {\nu} \quad \text{when } x\in U^c.$$ Fix $a>0$ and let $\varphi\in C_0^\infty(\mathbb{R}^d)$ such that $\varphi(x)=1$ for all $x\in U_a$, where $$U_a = \{ x\in\mathbb{R}^d \, |\, \mathop{\mathrm{dist}}(x,U)<a\}.$$ Then for every $N\in\mathbb{N}$ it holds that $$\mathop{\mathrm{Tr}}\big[g_\gamma(H_\hbar)\big] = \mathop{\mathrm{Tr}}\big[g_\gamma(H_\hbar)\varphi\big] + C_N \hbar^N,$$ where the constant $C_N$ depends on $a$ and is uniform in $V$, $\nu$ and $U$ satisfying [\[EQLE:localise_trace\]](#EQLE:localise_trace){reference-type="eqref" reference="EQLE:localise_trace"}.* *Proof.* Using linearity of the trace we have that $$\label{EQ:localise_trace} \mathop{\mathrm{Tr}}\big[g_\gamma( H_{\hbar}) \big]= \mathop{\mathrm{Tr}}\big[g_\gamma( H_{\hbar})\varphi \big] + \mathop{\mathrm{Tr}}\big[g_\gamma( H_{\hbar})(1-\varphi) \big].$$ For the second term on the right hand side of [\[EQ:localise_trace\]](#EQ:localise_trace){reference-type="eqref" reference="EQ:localise_trace"} we calculate the trace in an orthonormal basis of eigenfunctions $\psi_n$ of $H_{\hbar}$ with eigenvalues $E_n$: $$\label{EQ:localise_trace_2} \mathop{\mathrm{Tr}}\big[g_\gamma( H_{\hbar})(1-\varphi) \big] = \sum_{E_n\leq0} \langle g_\gamma(H_{\hbar})(1-\varphi) \psi_n , \psi_n \rangle = \sum_{E_n\leq0} g_\gamma(E_n) \lVert \sqrt{1-\varphi} \psi_n \rVert_{L^2(\mathbb{R}^d)}^2.$$ To estimate the $L^2$-norms we let $d(x) = \mathop{\mathrm{dist}}(x,U_{\frac12a})$. For all $x\in \mathop{\mathrm{supp}}(1-\varphi)$ we have that $d(x)>0$ since $\varphi(x)=1$ for all $x\in U_a$. 
We get from Lemma [Lemma 25](#LE:Agmon_type_lem){reference-type="ref" reference="LE:Agmon_type_lem"} that there exists a constant $C$ depending on $a$ such that for all normalised eigenfunctions $\psi_n$ with eigenvalue less than $\frac{\nu}{4}$ we have the estimate $$\label{EQ:localise_trace_3} \lVert e^{\tilde{\delta} \hbar^{-1} d } \psi_n \rVert_{L^2(\mathbb{R}^d)} \leq C,$$ where $\tilde{\delta}=\frac{\sqrt{\nu}}{8}$ and $C$ is uniform in $V$, $\nu$ and $U$ satisfying [\[EQLE:localise_trace\]](#EQLE:localise_trace){reference-type="eqref" reference="EQLE:localise_trace"}. Using this estimate and the observations made for $d(x)$ we get for all norms in [\[EQ:localise_trace_2\]](#EQ:localise_trace_2){reference-type="eqref" reference="EQ:localise_trace_2"} and all $N\in\mathbb{N}$ the estimate $$\label{EQ:localise_trace_4} \begin{aligned} \lVert \sqrt{1-\varphi} \psi_n \rVert_{L^2(\mathbb{R}^d)}^2 &\leq \lVert \sqrt{1-\varphi} e^{-\tilde{\delta} \hbar^{-1} d } \rVert_{L^\infty(\mathbb{R}^d)}^2 \lVert e^{\tilde{\delta} \hbar^{-1} d } \psi_n \rVert_{L^2(\mathbb{R}^d)}^2 \\ &\leq C \big\lVert \sqrt{1-\varphi} \big( \tfrac{\hbar}{\tilde{\delta} d}\big)^N \big(\tfrac{\tilde{\delta} d}{\hbar}\big)^N e^{-\tilde{\delta} \hbar^{-1} d }\big\rVert_{L^\infty(\mathbb{R}^d)}^2 \leq C_N \hbar^{2N}. \end{aligned}$$ Combining [\[EQ:localise_trace_2\]](#EQ:localise_trace_2){reference-type="eqref" reference="EQ:localise_trace_2"} with the estimate obtained in [\[EQ:localise_trace_4\]](#EQ:localise_trace_4){reference-type="eqref" reference="EQ:localise_trace_4"} we get for all $N\in\mathbb{N}$ that $$\label{EQ:localise_trace_5} \begin{aligned} \mathop{\mathrm{Tr}}\big[g_\gamma( H_{\hbar})(1-\varphi) \big] &\leq C_N \hbar^{2N} \sum_{E_n\leq0} g_\gamma(E_n) = C_N \hbar^{2N} \mathop{\mathrm{Tr}}\big[g_\gamma( H_{\hbar})\big] \leq \tilde{C}_N \hbar^{2N-d}, \end{aligned}$$ where we in the last estimate have used Theorem [Theorem 1](#Thm:Frank){reference-type="ref" reference="Thm:Frank"}. Combining [\[EQ:localise_trace\]](#EQ:localise_trace){reference-type="eqref" reference="EQ:localise_trace"} and [\[EQ:localise_trace_5\]](#EQ:localise_trace_5){reference-type="eqref" reference="EQ:localise_trace_5"} we obtain the desired estimate. ◻ *Remark 27*. When we apply the above lemma we need to ensure that the constant is the same for the two cases we consider. To ensure this we use Theorem [Theorem 1](#Thm:Frank){reference-type="ref" reference="Thm:Frank"} as described in Remark [Remark 13](#RE:use_of_thm_1){reference-type="ref" reference="RE:use_of_thm_1"} at the end of the proof. The next lemma is a result on comparing phase-space integrals. Similar estimates are obtained with different methods in [@mikkelsen2022optimal]. There they are parts of larger proofs and not stated as an independent lemma. The following lemma is taken from [@mikkelsen2023sharp], where it is Lemma 5.1. **Lemma 28**. *Suppose $\Omega\subset\mathbb{R}^d$ is an open set and let $\varphi\in C_0^\infty(\Omega)$. Moreover, let $\varepsilon>0$, $\hbar\in(0,\hbar_0]$ and $V,V_\varepsilon\in L^1_{loc}(\mathbb{R}^d)\cap C(\Omega)$. 
Suppose that $$\label{EQLE:comparison_phase_space_int} \lVert V-V_\varepsilon \rVert_{L^\infty(\Omega)}\leq c\varepsilon^{k+\mu}.$$ Then for $\gamma\in[0,1]$ and $\varepsilon$ sufficiently small it holds that $$\label{LEEQ:Loc_mod_prob_5} \begin{aligned} \Big| \int_{\mathbb{R}^{2d}} [g_\gamma(p^2+V_\varepsilon(x))-g_\gamma(p^2+V(x))]\varphi(x) \,dx dp \Big| \leq C\varepsilon^{k+\mu}, \end{aligned}$$ where the constant $C$ depends on the dimension and the numbers $\gamma$ and $c$ in [\[EQLE:comparison_phase_space_int\]](#EQLE:comparison_phase_space_int){reference-type="eqref" reference="EQLE:comparison_phase_space_int"}.* ## Proof of main theorem The proof of the main theorem is based on a multi-scale argument. Before we prove the main theorem by using these techniques we will recall the following crucial lemma. **Lemma 29**. *Let $\Omega\subset\mathbb{R}^d$ be an open set and let $l$ be a function in $C^1(\bar{\Omega})$ such that $l>0$ on $\bar{\Omega}$ and assume that there exists $\rho$ in $(0,1)$ such that $$\left| \nabla_x l(x) \right| \leq \rho,$$ for all $x$ in $\Omega$.* *Then* 1. *There exists a sequence $\{x_k\}_{k=0}^\infty$ in $\Omega$ such that the open balls $B(x_k,l(x_k))$ form a covering of $\Omega$. Furthermore, there exists a constant $N_\rho$, depending only on the constant $\rho$, such that the intersection of more than $N_\rho$ balls is empty.* 2. *One can choose a sequence $\{\varphi_k\}_{k=0}^\infty$ such that $\varphi_k \in C_0^\infty(B(x_k,l(x_k)))$ for all $k$ in $\mathbb{N}$. Moreover, for all multiindices $\alpha$ and all $k$ in $\mathbb{N}$ $$\left| \partial_x^\alpha \varphi_k(x) \right|\leq C_\alpha l(x_k)^{-{\left| \alpha \right|}},$$ and $$\sum_{k=1}^\infty \varphi_k(x) = 1,$$ for all $x$ in $\Omega$.* This lemma is taken from [@MR1343781] where it is Lemma 5.4. The proof is analogous to the proof of [@MR1996773 Theorem 1.4.10]. We are now ready to prove the main theorem. *Proof of Theorem [Theorem 5](#Thm:Main){reference-type="ref" reference="Thm:Main"}.* Let $H_{\hbar,\varepsilon}^{\pm}$ be the two framing operators constructed in Lemma [Lemma 11](#LE:framing_operators){reference-type="ref" reference="LE:framing_operators"}, where we choose $\varepsilon=\hbar^{1-\delta}$. For $\gamma=0$ we choose $\delta=\frac{\mu}{1+\mu}$ and if $\gamma>0$ we choose $\delta=\frac{1+\mu-\gamma}{2+\mu}$. Note that our assumptions on $\mu$ will in all cases ensure that $\delta\geq\frac13$. Moreover, we get that $$\label{EQ:proof_main_0} \begin{aligned} \varepsilon^{1+\mu} &= \hbar &\text{$\gamma=0$} \\ \varepsilon^{2+\mu} &= \hbar^{1+\gamma} &\text{$\gamma>0$}. \end{aligned}$$ Since we have that $H_{\hbar,\varepsilon}^{-} \leq H_\hbar \leq H_{\hbar,\varepsilon}^{+}$ in the sense of quadratic forms, it follows from the min-max theorem that $$\label{EQ:proof_main_1} \mathop{\mathrm{Tr}}\big[g_\gamma( H_{\hbar,\varepsilon}^{+}) \big] \leq \mathop{\mathrm{Tr}}\big[g_\gamma( H_{\hbar}) \big] \leq \mathop{\mathrm{Tr}}\big[g_\gamma( H_{\hbar,\varepsilon}^{-}) \big].$$ The aim is now to obtain spectral asymptotics for $\mathop{\mathrm{Tr}}\big[g_\gamma( H_{\hbar,\varepsilon}^{+}) \big]$ and $\mathop{\mathrm{Tr}}\big[g_\gamma( H_{\hbar,\varepsilon}^{-}) \big]$. The arguments will be analogous so we will drop the superscript $\pm$ for the operator $H_{\hbar,\varepsilon}^{\pm}$ in what follows. Let $\varphi\in C_0^\infty(\mathbb{R}^d)$ with $\varphi(x)=1$ for all $x\in \Omega_{\tilde{\nu},V_\varepsilon}$ and $\mathop{\mathrm{supp}}(\varphi)\subset \Omega_{2\tilde{\nu},V_\varepsilon}$. 
Then applying Lemma [Lemma 26](#LE:localise_trace){reference-type="ref" reference="LE:localise_trace"} we obtain for all $N\in\mathbb{N}$ that $$\label{EQ:proof_main_2} \mathop{\mathrm{Tr}}\big[g_\gamma( H_{\hbar,\varepsilon}) \big]= \mathop{\mathrm{Tr}}\big[g_\gamma( H_{\hbar,\varepsilon})\varphi \big] +C_N\hbar^N.$$ For the terms $\mathop{\mathrm{Tr}}\big[g_\gamma( H_{\hbar,\varepsilon})\varphi \big]$ we use a multiscale argument so that we can locally apply Theorem [Theorem 24](#THM:Loc_mod_prob){reference-type="ref" reference="THM:Loc_mod_prob"}. Recall that from Lemma [Lemma 11](#LE:framing_operators){reference-type="ref" reference="LE:framing_operators"} we have that $$V_\varepsilon(x) = V^1_\varepsilon(x) +V^2(x) \pm C\varepsilon^{\tau+\mu},$$ where $\mathop{\mathrm{supp}}(V^2)\cap\Omega_{4\tilde{\nu},V_\varepsilon}^C =\emptyset$ and $V^1_\varepsilon\in C_0^\infty(\mathbb{R}^d)$. We let $\varphi_1\in C_0^\infty(\mathbb{R}^d)$ be such that $\varphi_1(x) = 1$ for all $x\in\Omega_{2\tilde{\nu},V_\varepsilon}$ and $\mathop{\mathrm{supp}}(\varphi_1)\subset \Omega_{4\tilde{\nu},V_\varepsilon}$. With this function we have that $$\varphi_1(x)V_\varepsilon^{\pm}(x) = \varphi_1(x)(V^1_\varepsilon(x) \pm C\varepsilon^{\tau+\mu}).$$ Note that with these assumptions on $\varphi_1(x)$ we have that $\varphi_1(x)\varphi(x)=\varphi(x)$ for all $x\in\mathbb{R}^d$. This observation ensures that the localisation function $l(x)$ defined below is positive on the set $\mathop{\mathrm{supp}}(\varphi)$. Before we define our localisation functions we remark that due to the continuity of $V_\varepsilon$ on $\Omega_{4\tilde{\nu},V_\varepsilon}$ there exists a number $\epsilon>0$ such that $$\mathop{\mathrm{dist}}(\mathop{\mathrm{supp}}(\varphi), \Omega_{2\tilde{\nu},V_\varepsilon}^{c}) >\epsilon.$$ The number $\epsilon$ is important for our localisation functions, as we need to ensure that their supports are contained in the set $\Omega_{2\tilde{\nu},V_\varepsilon}$. We let $$l(x) = A^{-1}\sqrt{ |\varphi_1(x)V_\varepsilon(x)|^2 + \hbar^\frac{4}{3}} \quad\text{and}\quad f(x)=\sqrt{l(x)},$$ where we choose $A >0$ sufficiently large such that $$\label{EQ:proof_main_3} l(x) \leq \frac{\epsilon}{9} \quad\text{and}\quad |\nabla l(x)|\leq \rho <\frac{1}{8}$$ for all $x\in\overline{\mathop{\mathrm{supp}}(\varphi)}$. Note that due to our assumptions on $V_\varepsilon$ we can choose $A$ independently of $\hbar$, uniformly for $\hbar\in(0,\hbar_0]$. Moreover, we have that $$\label{EQ:proof_main_4} |\varphi_1(x)V_\varepsilon(x)| \leq A l(x)$$ for all $x\in\mathbb{R}^d$. By Lemma [Lemma 29](#LE:partition_lemma){reference-type="ref" reference="LE:partition_lemma"}, applied with the set $\mathop{\mathrm{supp}}(\varphi)$ and the function $l(x)$, there exists a sequence $\{x_k\}_{k=1}^\infty$ in $\mathop{\mathrm{supp}}(\varphi)$ such that $\mathop{\mathrm{supp}}(\varphi) \subset \cup_{k\in\mathbb{N}} B(x_k,l(x_k))$ and there exists a constant $N_{\frac{1}{8}}$ such that at most $N_{\frac{1}{8}}$ of the sets $B(x_k,l(x_k))$ can have a non-empty intersection.
Moreover, there exists a sequence $\{\varphi_{k}\}_{k=1}^\infty$ such that $\varphi_k\in C_0^\infty(B(x_k,l(x_k)))$, $$\label{EQ:proof_main_5} \big| \partial_x^\alpha \varphi_k(x) \big| \leq C_\alpha l(x_k)^{-|\alpha|} \qquad\text{for all $\alpha\in\mathbb{N}_0^d$},$$ and $$\sum_{k=1}^\infty \varphi_k(x) =1 \qquad\text{for all $x\in\mathop{\mathrm{supp}}(\varphi)$}.$$ We have that $\cup_{k\in\mathbb{N}} B(x_k,l(x_k))$ is an open covering of $\mathop{\mathrm{supp}}(\varphi)$ and since this set is compact there exists a finite subset $\mathcal{I}'\subset \mathbb{N}$ such that $$\mathop{\mathrm{supp}}(\varphi) \subset \bigcup_{k\in\mathcal{I}'} B(x_k,l(x_k)).$$ In order to ensure that we have a finite partition of unity over the set $\mathop{\mathrm{supp}}(\varphi)$ we define the set $$\mathcal{I} = \bigcup_{j\in\mathcal I'} \big\{ k\in\mathbb{N}\,|\, B(x_k,l(x_k))\cap B(x_j,l(x_j)) \neq \emptyset \big\}.$$ Then we have that $\mathcal{I}$ is still finite since at most $N_{\frac{1}{8}}$ balls can have non-empty intersection. Moreover, we have that $$\sum_{k\in\mathcal{I}} \varphi_k(x) =1 \qquad\text{for all $x\in\mathop{\mathrm{supp}}(\varphi)$}.$$ From this we get the following identity $$\label{EQ:proof_main_6} \mathop{\mathrm{Tr}}\big[\varphi \boldsymbol{1}_{ (-\infty,0]}(H_{\hbar,\varepsilon})\big] = \sum_{k\in\mathcal{I}} \mathop{\mathrm{Tr}}\big[\varphi_k \varphi \boldsymbol{1}_{ (-\infty,0]}(H_{\hbar,\varepsilon})\big] ,$$ where we have used linearity of the trace. For the remaining part of the proof we will use the following notation $$l_k=l(x_k), \quad f_k=f(x_k), \quad h_k = \frac{\hbar}{l_kf_k} \quad\text{and}\quad \varepsilon_k = h_k^{1-\delta}.$$ We have that $h_k$ is uniformly bounded from above since $$l(x)f(x) = A^{-\frac32}(\left| \varphi_1(x)V_\varepsilon(x) \right|^2+\hbar^{\frac43})^{\frac34} \geq A^{-\frac32} \hbar,$$ for all $x$. Moreover, since by assumption $\delta\geq \frac13$ and $l_k=f_k^2$, we obtain that $$\label{EQ:proof_main_7} l_k \varepsilon^{-1} \leq \varepsilon_k^{-1}.$$ We define the two unitary operators $U_l$ and $T_z$ by $$U_l f(x) = l^{\frac{d}{2}} f( l x) \quad\text{and}\quad T_zf(x)=f(x+z) \qquad\text{for $f\in L^2(\mathbb{R}^d)$}.$$ Moreover we set $$\begin{aligned} \tilde{H}_{\varepsilon,h_k} = f_k^{-2} (T_{x_k} U_{l_k}) H_{\hbar,\varepsilon} (T_{x_k} U_{l_k})^{*} = - h_k^2 \Delta +\tilde{V}_\varepsilon(x), \end{aligned}$$ where $\tilde{V}_\varepsilon(x)=f_k^{-2} V_\varepsilon(l_kx+x_k)$. We now need to establish that this rescaled operator satisfies the assumptions of Theorem [Theorem 24](#THM:Loc_mod_prob){reference-type="ref" reference="THM:Loc_mod_prob"} with $h_k$, $\varepsilon_k$ and the set $B(0,8)$. To establish this we first observe that by [\[EQ:proof_main_3\]](#EQ:proof_main_3){reference-type="eqref" reference="EQ:proof_main_3"} we have $$\label{EQ:Rough_weyl_asymptotics_3.5} (1-8\rho) l_k \leq l(x) \leq (1+8\rho) l_k \qquad\text{for all $x \in B(x_k,8l_k)$}.$$ We start by verifying that the operator $\tilde{H}_{\varepsilon,h_k}$ satisfies Assumption [Assumption 14](#Assumption:local_potential_1){reference-type="ref" reference="Assumption:local_potential_1"}. From Lemma [Lemma 11](#LE:framing_operators){reference-type="ref" reference="LE:framing_operators"} it follows that the operator $\tilde{H}_{\varepsilon,h_k}$ is lower semibounded and selfadjoint.
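As a brief aside (this computation is not spelled out in the source, but it follows at once from the definitions just given), the parameter $h_k$ is exactly the semiclassical parameter produced by the rescaling: translations commute with the Laplacian and $U_{l}(-\Delta)U_{l}^{*}=-l^{-2}\Delta$ for the convention $U_lf(x)=l^{\frac d2}f(lx)$, so the kinetic part transforms as $$f_k^{-2}\,(T_{x_k} U_{l_k})(-\hbar^2\Delta)(T_{x_k} U_{l_k})^{*} = -\frac{\hbar^{2}}{f_k^{2}l_k^{2}}\,\Delta = -h_k^{2}\Delta, \qquad h_k=\frac{\hbar}{l_kf_k},$$ which is precisely the Laplacian term in $\tilde{H}_{\varepsilon,h_k}$ above.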
By our choice of $\varphi_1$ we have that $\tilde{H}_{\varepsilon,h_k}$ satisfies part two of Assumption [Assumption 14](#Assumption:local_potential_1){reference-type="ref" reference="Assumption:local_potential_1"} with the set $B(0,8)$ and the potential $$\label{EQ:proof_main_8} \widetilde{\varphi_1V}_\varepsilon(x) = \varphi_1(l_kx+x_k) f_k^{-2}V_\varepsilon(l_kx+x_k),$$ where by [\[EQ:proof_main_8\]](#EQ:proof_main_8){reference-type="eqref" reference="EQ:proof_main_8"} we have that $\widetilde{\varphi_1V}_\varepsilon(x)\in C_0^\infty(\mathbb{R}^d)$. What remains to verify is the non-critical condition [\[THM:model_prob_global_Non_crit\]](#THM:model_prob_global_Non_crit){reference-type="eqref" reference="THM:model_prob_global_Non_crit"}. Using [\[EQ:proof_main_4\]](#EQ:proof_main_4){reference-type="eqref" reference="EQ:proof_main_4"} we have for $x$ in $B(0,8)$ that $$\begin{aligned} \left| \widetilde{\varphi_1V}_\varepsilon(x) \right| + h_k^{\frac{2}{3}} &= f_k^{-2} \left| \varphi_1V_\varepsilon(l_kx+x_k) \right| + (\tfrac{\hbar}{f_k l_k})^{\frac{2}{3}} =l_k^{-1}( \left| \varphi_1V_\varepsilon(l_kx+x_k) \right| +\hbar^{\frac23}) \\ &\geq l_k^{-1} A l(l_k x+x_k) \geq (1-8\rho) A. \end{aligned}$$ Hence we have obtained the non-critical condition on $B(0,8)$, and all assumptions of Theorem [Theorem 24](#THM:Loc_mod_prob){reference-type="ref" reference="THM:Loc_mod_prob"} are fulfilled. Before applying it we verify that the numbers on which the constant from Theorem [Theorem 24](#THM:Loc_mod_prob){reference-type="ref" reference="THM:Loc_mod_prob"} depends are independent of $k$ and $\hbar$. First, for the norm estimate of the potential we have that $$\lVert \widetilde{\varphi_1V}_\varepsilon \rVert_{L^\infty(B(0,8))} = \sup_{x\in B(0,8)} \big| \varphi_1(l_kx+x_k) f_k^{-2}V_\varepsilon(l_kx+x_k)\big| \leq (1+8\rho)A,$$ where we have used [\[EQ:proof_main_4\]](#EQ:proof_main_4){reference-type="eqref" reference="EQ:proof_main_4"} and [\[EQ:Rough_weyl_asymptotics_3.5\]](#EQ:Rough_weyl_asymptotics_3.5){reference-type="eqref" reference="EQ:Rough_weyl_asymptotics_3.5"}. When considering the derivatives we have for $\alpha\in\mathbb{N}_0^d$ with $|\alpha|\geq1$ that $$\begin{aligned} \MoveEqLeft \varepsilon_k^{-\min(0,\tau-|\alpha|)}\lVert \partial^\alpha\widetilde{\varphi_1V}_\varepsilon \rVert_{L^\infty(\mathbb{R}^d)} \\ &\leq \varepsilon_k^{-\min(0,\tau-|\alpha|)} f_k^{-2} l_k^{|\alpha|} \varepsilon^{\min(0,\tau-|\alpha|)} \sup_{x\in\mathbb{R}^d} \sum_{\beta\leq \alpha } \binom{\alpha}{\beta} \big| (\partial^{\alpha-\beta}\varphi_1)(\partial^\beta V_\varepsilon)(l_kx+x_k) \big| \\ & \leq C_\alpha , \end{aligned}$$ where $C_\alpha$ is independent of $k$ and $\hbar$. In this estimate we have used the definition of $\varepsilon_k$ and $f_k$, [\[EQ:proof_main_7\]](#EQ:proof_main_7){reference-type="eqref" reference="EQ:proof_main_7"} and Proposition [Proposition 10](#PRO:smoothning_of_func){reference-type="ref" reference="PRO:smoothning_of_func"}. Hence all these estimates are independent of $\hbar$ and $k$. The last numbers we check are $\lVert \partial_x^\alpha \widetilde{\varphi_k\varphi} \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}_0^d$, where $\widetilde{\varphi_k\varphi}=(T_{x_k} U_{l_k})\varphi_k\varphi(T_{x_k} U_{l_k})^{*}$.
Here, by the construction of $\varphi_k$ in [\[EQ:proof_main_5\]](#EQ:proof_main_5){reference-type="eqref" reference="EQ:proof_main_5"}, we have for all $\alpha\in\mathbb{N}_0^d$ $$\begin{aligned} \lVert \partial_x^\alpha \widetilde{\varphi_k\varphi} \rVert_{L^\infty(\mathbb{R}^d)} &=\sup_{x\in\mathbb{R}^d} \left| l_k^{\left| \alpha \right|} \sum_{\beta\leq \alpha} {\binom{\alpha}{\beta}} (\partial_x^{\beta}\varphi_k)(l_k x+x_k) (\partial_x^{\alpha - \beta}\varphi)(l_kx+x_k) \right| \\ &\leq C_\alpha \sup_{x\in\mathbb{R}^d} \sum_{\beta\leq \alpha} {\binom{\alpha}{\beta}} l_k^{\left| \alpha-\beta \right| }\left| (\partial_x^{\alpha - \beta} \varphi)(l_kx+x_k) \right| \leq \widetilde{C}_\alpha. \end{aligned}$$ With this we have established that all numbers on which the constant from Theorem [Theorem 24](#THM:Loc_mod_prob){reference-type="ref" reference="THM:Loc_mod_prob"} depends are independent of $\hbar$ and $k$. Applying Theorem [Theorem 24](#THM:Loc_mod_prob){reference-type="ref" reference="THM:Loc_mod_prob"} we get that $$\label{EQ:proof_main_9} \begin{aligned} \MoveEqLeft \big| \mathop{\mathrm{Tr}}\big[\varphi g_\gamma (H_{\hbar,\varepsilon}) \big] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g_\gamma( p^2+V(x))\varphi(x) \,dx dp \big| \\ \leq {}& \sum_{k\in\mathcal{I}}\big| \mathop{\mathrm{Tr}}\big[\varphi_k\varphi g_\gamma (H_{\hbar,\varepsilon} ) \big] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g_\gamma( p^2+V(x))\varphi_k\varphi(x) \,dx dp \big| \\ \leq {} & \sum_{k\in\mathcal{I}} f_k^{2\gamma} \big|\mathop{\mathrm{Tr}}\big[ g_\gamma (\tilde{H}_{\varepsilon,h_k} ) \widetilde{\varphi_k\varphi} \big] - \frac{1}{(2\pi h_k)^d} \int_{\mathbb{R}^{2d}} g_\gamma( p^2+\tilde{V}(x))\widetilde{\varphi_k\varphi}(x) \,dx dp \big| \\ \leq {} & \sum_{k\in\mathcal{I}} f_k^{2\gamma} \big|\mathop{\mathrm{Tr}}\big[ g_\gamma (\tilde{H}_{\varepsilon,h_k} ) \widetilde{\varphi_k\varphi} \big] - \frac{1}{(2\pi h_k)^d} \int_{\mathbb{R}^{2d}} g_\gamma( p^2+\widetilde{\varphi_1 V_\varepsilon}(x))\widetilde{\varphi_k\varphi}(x) \,dx dp \big| \\ &+ \sum_{k\in\mathcal{I}} \frac{ f_k^{2\gamma}}{(2\pi h_k)^d} \Big|\int_{\mathbb{R}^{2d}} \big[ g_\gamma( p^2+\widetilde{\varphi_1 V_\varepsilon}(x))\widetilde{\varphi_k\varphi}(x) -g_\gamma( p^2+\tilde{V}(x)) \big]\widetilde{\varphi_k\varphi}(x) \,dx dp \Big| \\ \leq {} & C \sum_{k\in\mathcal{I}} \frac{f_k^{2\gamma} }{h_k^{d}} \Big[ h_k^{1+\gamma} + \Big|\int_{\mathbb{R}^{2d}} \big[ g_\gamma( p^2+\widetilde{\varphi_1 V_\varepsilon}(x))\widetilde{\varphi_k\varphi}(x) -g_\gamma( p^2+\tilde{V}(x)) \big]\widetilde{\varphi_k\varphi}(x) \,dx dp \Big|\Big] \end{aligned}$$ To estimate the remaining integrals we will use Lemma [Lemma 28](#LE:comparison_phase_space_int){reference-type="ref" reference="LE:comparison_phase_space_int"}. Combining this lemma with [\[EQ:proof_main_0\]](#EQ:proof_main_0){reference-type="eqref" reference="EQ:proof_main_0"} we obtain that $$\label{EQ:proof_main_10} \begin{aligned} \MoveEqLeft \Big|\int_{\mathbb{R}^{2d}} \big[ g_\gamma( p^2+\widetilde{\varphi_1 V_\varepsilon}(x))\widetilde{\varphi_k\varphi}(x) -g_\gamma( p^2+\tilde{V}(x)) \big]\widetilde{\varphi_k\varphi}(x) \,dx dp \Big| \leq C \hbar^{1+\gamma} \leq C h_k^{1+\gamma} .
\end{aligned}$$ Hence, combining [\[EQ:proof_main_9\]](#EQ:proof_main_9){reference-type="eqref" reference="EQ:proof_main_9"} and [\[EQ:proof_main_10\]](#EQ:proof_main_10){reference-type="eqref" reference="EQ:proof_main_10"}, we obtain that $$\label{EQ:proof_main_11} \begin{aligned} \big| \mathop{\mathrm{Tr}}\big[\varphi g_\gamma (H_{\hbar,\varepsilon}) \big] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g_\gamma( p^2+V(x))\varphi(x) \,dx dp \big| \leq C \sum_{k\in\mathcal{I}} f_k^{2\gamma} h_k^{1+\gamma-d} . \end{aligned}$$ Considering the sum over $k$ on the right-hand side of [\[EQ:proof_main_11\]](#EQ:proof_main_11){reference-type="eqref" reference="EQ:proof_main_11"} and using [\[EQ:Rough_weyl_asymptotics_3.5\]](#EQ:Rough_weyl_asymptotics_3.5){reference-type="eqref" reference="EQ:Rough_weyl_asymptotics_3.5"}, we have that $$\label{EQ:proof_main_12} \begin{aligned} \sum_{k\in\mathcal{I}} C h_k^{1+\gamma-d}f_k^{2\gamma} &= \sum_{k\in\mathcal{I}} \tilde{C} \hbar^{1+\gamma-d} \int_{B(x_k,l_k)} l_k^{-d} f_k^{2\gamma}(l_kf_k)^{d-1-\gamma} \,dx \\ & = \sum_{k\in\mathcal{I}} \tilde{C} \hbar^{1+\gamma-d} \int_{B(x_k,l_k)} l_k^{\gamma-d} l_k^{\frac{3d-3-3\gamma}{2}} \,dx \\ &\leq \sum_{k\in\mathcal{I}} \hat{C} \hbar^{1+\gamma-d} \int_{B(x_k,l_k)} l(x)^{\frac{d -3 -\gamma}{2}}\,dx \leq C \hbar^{1+\gamma-d}, \end{aligned}$$ where in the last inequality we have used that $\mathop{\mathrm{supp}}(\varphi)\subset\Omega_{2\tilde{\nu},V_\varepsilon}$ and that $\Omega_{2\tilde{\nu},V_\varepsilon}$ is assumed to be compact. This ensures that the constant obtained in the last inequality is finite. From combining the estimates and identities in [\[EQ:proof_main_1\]](#EQ:proof_main_1){reference-type="eqref" reference="EQ:proof_main_1"}, [\[EQ:proof_main_2\]](#EQ:proof_main_2){reference-type="eqref" reference="EQ:proof_main_2"}, [\[EQ:proof_main_11\]](#EQ:proof_main_11){reference-type="eqref" reference="EQ:proof_main_11"} and [\[EQ:proof_main_12\]](#EQ:proof_main_12){reference-type="eqref" reference="EQ:proof_main_12"} we obtain that $$\Big|\mathop{\mathrm{Tr}}\big[\boldsymbol{1}_{ (-\infty,0]}(H_{\hbar,\varepsilon})\big] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}\boldsymbol{1}_{(-\infty,0]}(p^2+V_\varepsilon(x))\,dx dp \Big| \leq C \hbar^{1-d}$$ for all $\hbar\in(0,\hbar_0]$. This concludes the proof. ◻ *Proof of Theorem [Theorem 6](#Thm:Main_2){reference-type="ref" reference="Thm:Main_2"} and Theorem [Theorem 7](#Thm:Main_3){reference-type="ref" reference="Thm:Main_3"}.* The proofs are almost analogous to the proof just given for Theorem [Theorem 5](#Thm:Main){reference-type="ref" reference="Thm:Main"}. The difference is that here $\delta$ is always chosen to be $\frac{1}{3}$ in the scaling $\varepsilon=\hbar^{1-\delta}$ of the framing operators $H_{\hbar,\varepsilon}^{\pm}$. After this choice the remainder of the proof is identical. This concludes the proof. ◻
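As a purely illustrative aside (not part of the argument above), the $\gamma=0$ case of these asymptotics is easy to test numerically in one dimension: one counts the non-positive eigenvalues of a discretised $-\hbar^2\frac{d^2}{dx^2}+V$ and compares with the phase-space volume $\frac{1}{2\pi\hbar}\big|\{(x,p):p^2+V(x)\leq 0\}\big|$. The potential, box size and grid below are assumptions made only for this sketch and are unrelated to the class of potentials considered in the theorems.

```python
import numpy as np

# Assumed 1-D model potential (not the V of the text): V(x) = x^2 - 1,
# so the classically allowed region {p^2 + V(x) <= 0} is compact.
def weyl_check(hbar, L=5.0, n=2000):
    x = np.linspace(-L, L, n)
    dx = x[1] - x[0]
    V = x**2 - 1.0
    # Dirichlet finite-difference discretisation of -hbar^2 d^2/dx^2 + V(x)
    kinetic = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) * hbar**2 / dx**2
    H = kinetic + np.diag(V)
    n_numeric = int(np.sum(np.linalg.eigvalsh(H) <= 0.0))
    # (2*pi*hbar)^{-1} |{p^2 + V <= 0}| = (pi*hbar)^{-1} * int sqrt(max(-V, 0)) dx
    n_weyl = np.sum(np.sqrt(np.maximum(-V, 0.0))) * dx / (np.pi * hbar)
    return n_numeric, n_weyl

for hbar in (0.2, 0.1, 0.05):
    print(hbar, *weyl_check(hbar))   # the two counts agree to leading order as hbar -> 0
```

For $V(x)=x^2-1$ the comparison can even be done by hand: the eigenvalues are $\hbar(2k+1)-1$, giving roughly $1/(2\hbar)$ non-positive eigenvalues, which is exactly the phase-space prediction.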
--- abstract: | Surface tension at cavity walls can play havoc with the mechanical properties of perforated soft solids when the cavities are filled with a fluid. This study is an investigation of the macroscopic elastic properties of elastomers embedding spherical cavities filled with a pressurized liquid in the presence of surface tension, starting with the linearization of the fully nonlinear model and ending with the enhancement properties of the linearized model when many such liquid filled cavities are present. address: - Departamento de Ecuaciones Diferenciales y Análisis Numérico, Universidad de Sevilla, Campus de Reina Mercedes, 41012 Sevilla, Spain - Flatiron Institute, 162 Fifth Avenue, New York, NY10010, USA - Department of Civil and Environmental Engineering, University of Illinois at Urbana--Champaign, IL 61801, USA - Dipartimento di Matematica, Università di Pavia, Via Ferrata 5, 27100 Pavia, Italy author: - Juan Casado Díaz - Gilles A. Francfort - Oscar Lopez-Pamies - Maria Giovanna Mora title: "Liquid Filled Elastomers: From Linearization to Elastic Enhancement" --- # Introduction The study of the mechanics of interfaces in the continuum has a long and rich history with origins dating back to the classical works of Young [@Young1805] and Laplace [@Laplace1806] on interfaces between fluids in the early 1800's and of Gibbs [@Gibbs1928] on the more general case of interfaces between solids and fluids in the 1870's. Yet it was only in 1975 that complete descriptions of the kinematics, the concept of interfacial stress, and the balance of linear and angular momenta of bodies containing interfaces were properly formulated, even when specialized to the basic case of elastic interfaces [@Gurtin75a; @Gurtin75b]. The results remained abstract at the time, most certainly because of the technical difficulties in measuring and tailoring mechanical and physical properties of interfaces. In the early 2000's, the onset of new synthesis and characterization tools reinvigorated the study of interfaces in soft matter. In this context, elastomers filled with liquid --- as opposed to solid --- inclusions are a recent trend in the soft matter community because they exhibit remarkable mechanical and physical properties; see e.g. [@Syleetal15; @LDLP17; @Yunetal19]. In particular, the interfacial physics in these soft material systems can be actively tailored to enhance or impede deformability. While the addition of liquid inclusions should increase the macroscopic deformability of the material, the behavior of the solid/liquid interfaces, if negligible when the inclusions are "large", may counteract this increase and lead to stiffening when the inclusions become sufficiently "small". As a first step in our understanding of this paradigm, a recent contribution by one of us [@GLP] derives the governing equations that describe the mechanical response of a hyperelastic solid filled with initially spherical inclusions made of a pressurized hyperelastic fluid when the solid/fluid interface is hyperelastic and possesses an initial surface tension. Arguably, this corresponds to the most basic type of elastomer filled with liquid inclusions. From a mechanics standpoint, the main objectives of this work are twofold. First, we derive the linearization of the governing equations put forth in [@GLP] in the limit of small deformations. Second, within that linearized setting, we derive the homogenization limit of a periodic distribution of liquid inclusions as the period gets smaller. 
Formal derivations of both results were proposed in [@GLP; @GLLP]. Our analysis corroborates those, although even an attentive reader may be at pains to check that the results are identical because of differential geometric intricacies. From a mathematical standpoint, given an elastic energy density $W$ and an interaction surface term $\mathscr J$ on the boundary of a ball $B_a\subset {\Omega}$ that will be detailed below, we propose to linearize the energy $$\mathscr E_\varepsilon(y)=\int_{{\Omega}\setminus \overline B_a}W(\nabla y)\, dx + \mathscr J(y) -\varepsilon\int_{{\Omega}\setminus \overline B_a}f\cdot y\, dx$$ when the external load (here $\varepsilon f$) is indeed of order $\varepsilon$, $\varepsilon$ being a small parameter. This is by now a classical problem that was first handled in [@DMNP] for finite elasticity by computing the $\Gamma(L^2)$-limit of $$\mathscr I_\varepsilon(u):=\frac1{\varepsilon^2}\int_{\Omega}W(I+\varepsilon\nabla u)\; dx$$ for a standard elastic energy density $W$ and showing the $L^2$-compactness of almost minimizers of $v\mapsto\mathscr I_\varepsilon(v)-\int_{\Omega}f\cdot v\; dx.$ The celebrated rigidity result of [@FJM] plays a pivotal role in the analysis. Since [@DMNP], a similar linearization process has been implemented in a variety of settings generally assuming that the relevant forces were of order $\varepsilon$ and rescaling the energy accordingly as above. In that spirit, the work which is closest to the current investigation is [@MR] where live pressure loads are applied to an elastic body. The current setting introduces a new feature in the analysis, namely a pre-stress due to the liquid inclusions. This in turn changes the order of the various contributions and results in a breakdown of the $\Gamma$-convergence process. We were unfortunately unable to complete that process without the addition of a vanishing higher order contribution to the energy. This is because, barring the presence of such an additional term, we cannot hope to get a bound on the $L^2$-norm of the tangential gradient of the (rescaled) field along the boundary --- that is, the solid/liquid interface --- of the liquid cavity (a sphere). Consequently, we are adding an appropriately vanishing second order term to our energy (see [\[eq.reg\]](#eq.reg){reference-type="eqref" reference="eq.reg"}, [\[eta_e\]](#eta_e){reference-type="eqref" reference="eta_e"}). A similar technique was recently used in [@ADMLP] to handle the linearization of a multi-well elastic energy. Our first result is the linearization Theorem [Theorem 5](#thm){reference-type="ref" reference="thm"} which gives rise from a P.D.E. standpoint to a highly non trivial set of equations both on the solid part of the domain and on the boundary of the liquid inclusions, that is, along the solid/liquid interface (see Remark [Remark 8](#rem.PDE){reference-type="ref" reference="rem.PDE"}). Our second result, Theorem [Theorem 11](#thm.hom){reference-type="ref" reference="thm.hom"}, is a periodic homogenization statement on the linearized system with an appropriate rescaling of the surface tension on each inclusion. The resulting homogenized behavior ends up being purely elastic. However, the expression for the homogenized Hooke's law incorporates a memory of the presence of surface tension on the solid/liquid interfaces. 
This is achieved through a somewhat intricate unfolding of the oscillating fields which heavily draws on the specific spectral properties of spherical harmonics and, to our knowledge is the first time a homogenization process is performed on a system that couples bulk and surface P.D.E.'s. The homogenization result promotes elastic enhancement as detailed in Subsection [4.2](#sec.enhance){reference-type="ref" reference="sec.enhance"}. Technical hurdles prevent the derivation of a general enhancement result so we have to illustrate its occurrence on a uniaxial strain and for an isotropic base material; see Proposition [Proposition 14](#prop.enhance){reference-type="ref" reference="prop.enhance"}. We are confident that enhancement can always be achieved for large enough surface tensions, although currently defeated in our attempts to prove such a general statement. Now for a few mathematical prerequisites. We recall that, for any $C^1$-manifold $M$ embedded in an open set ${\Omega}\subset\mathbb{R}^3$ and any smooth field $u:{\Omega}\to \mathbb{R}^3$, the tangential gradient $\nabla_\tau u$ of $u$ at $x\in M$ is defined as $$\nabla_\tau u(x)= \nabla u(x)-\nabla u(x)\nu(x)\otimes\nu(x)$$ where $\nu(x)$ is the unit normal to $M$ at $x$. Similarly, the tangential divergence of $u$ at $x\in M$ is defined as $$\mathop{\mathrm{div}}_\tau u(x)=\mathop{\mathrm{div}}u(x)-(\nabla u)^T(x)\nu(x)\cdot\nu(x).$$ Note that, for any smooth vector field $v$ on a smooth oriented manifold $M$ with normal unit vector $\nu$, $$\label{eq.tau-nabla-div} \nabla_\tau(\mathop{\mathrm{div}}_\tau v)= (I-\nu\otimes\nu)\mathop{\mathrm{div}}_\tau (\nabla_\tau v)^T-\nabla_\tau \nu(\nabla_\tau v)^T\nu,$$ where the tangential divergence of a tensor-valued function $S$ is defined as $\mathop{\mathrm{div}}_\tau S\cdot e=\mathop{\mathrm{div}}_\tau(S^Te)$ for any vector $e$ (see [@CRMSP Lemma 2.6]). We also recall a few useful algebraic identities. In the following $\mathbb{M}^{3{\times}3}$ stands for the space of $(3\times 3)$-matrices with $I$ as the identity matrix and $\mathbb{M}^{3{\times}3}_{\rm sym}$ for the subspace of symmetric $(3\times 3)$-matrices. For any $A,B\in\mathbb{M}^{3{\times}3}$ with $\det A\neq0$ one has $$\label{eq.alg1} \mathop{\mathrm{cof}}(A+B)=\mathop{\mathrm{cof}}A +\mathop{\mathrm{cof}}B +\frac1{\det A} \big( (\mathop{\mathrm{cof}}A\cdot B)\mathop{\mathrm{cof}}A - (\mathop{\mathrm{cof}}A) B^T(\mathop{\mathrm{cof}}A) \big)$$ (see, e.g, [@PGVC Proposition 1.6]). Also Cayley-Hamilton Theorem implies that, for any $A\in\mathbb{M}^{3{\times}3}$, $$\frac12\big( (\mathop{\mathrm{tr}}A)^2-\mathop{\mathrm{tr}}A^2\big)A-(\mathop{\mathrm{tr}}A)A^2+A^3=(\det A) I$$ so that $$\label{eq.alg2} \mathop{\mathrm{cof}}A= \frac12\big( (\mathop{\mathrm{tr}}A)^2-\mathop{\mathrm{tr}}A^2\big)I-(\mathop{\mathrm{tr}}A)A^T+(A^T)^2.$$ from which we also obtain that $$\label{eq.alg3} \mathop{\mathrm{tr}}(\mathop{\mathrm{cof}}A)=\frac12\big((\mathop{\mathrm{tr}}A)^2- \mathop{\mathrm{tr}}A^2\big).$$ Finally, if $\{\tau_1,\tau_2,\nu\}$ form an orthonormal basis of vectors with $\tau_1\times\tau_2=\nu$, $$\label{eq.alg4}(\mathop{\mathrm{cof}}A)\nu=A\tau_1\times A\tau_2 \quad \mbox{ for every }A\in\mathbb{M}^{3{\times}3}.$$ Notationwise, we denote by $B_r$ the open ball of center $0$ and radius $r>0$ and by ${\rm \bf{id}}$ the identity mapping (${\rm \bf{id}}(x)=x$). 
Also, throughout $\{\vec{e}_i, i=1,2,3\}$ is the canonical orthonormal basis of $\mathbb{R}^3$ and ${\vec{e}_r}$ is the unit normal at the point under consideration on any sphere. We always omit the target space when writing a norm, so, for example, if $u:{\Omega}\to \mathbb{R}^3$, $\|u\|_{L^2({\Omega})}$ is the $L^2({\Omega};\mathbb{R}^3)$-norm of $u$. We say that a sequence $(a_\varepsilon)\subset\mathbb{R}$ indexed by $\varepsilon\searrow0$ is of the order of $\varepsilon^\alpha$ (and write $a_\varepsilon\simeq \varepsilon^\alpha$) if there exist two positive constants $c,c'$ such that $c\varepsilon^\alpha\le a_\varepsilon\le c'\varepsilon^\alpha$ for all $\varepsilon$. The symbol $\subset\!\subset$ means "compactly contained in" and the symbol $\sharp$ stands for "periodic". The characteristic function of a set $A$ is denoted by $\chi_A$. Finally, "$\cdot$" stands for the Euclidean inner product on $\mathbb{R}^3$ as well as for the Frobenius inner product on matrices, i.e. $A\cdot B:= {\rm tr }(B^TA)$ for $A,B\in \mathbb{M}^{3{\times}3}$. # The nonlinear formulation ## Setting Let ${\Omega}\subset\mathbb{R}^3$ be a Lipschitz bounded domain containing a closed ball $\overline B_a$ with $a>0$. The elastomer occupying ${\Omega}\setminus \overline B_a$ is assumed to possess an elastic energy density $W:\mathbb{M}^{3{\times}3}\to[0,+\infty]$ that satisfies what are by now the usual assumptions of geometrically non-linear elasticity theory, namely, - $W(F)=W(RF)$ for every $F\in\mathbb{M}^{3{\times}3}$ and $R\in SO(3)$, - $W(I)=0$, $W(F)= +\infty$ if $\det F\le 0$, - $W(F)\geq c\mathop{\mathrm{dist}}^2(F, SO(3))$ for every $F\in\mathbb{M}^{3{\times}3}$, for some $c>0$, - $W$ is $C^2$ in a neighborhood of $SO(3)$ and $\displaystyle\mathbb A:={\partial^2 W}/{\partial F^2}(I)$, - $W(F)\to +\infty$ as $\det F\to 0^+$. **Remark 1**. The assumptions on $W$ imply that the quadratic form $F\mapsto Q(F):=\mathbb AF\cdot F$ is positive definite on symmetric matrices. ¶ The ball $B_a$ is filled with a compressible pressurized liquid. The initial Cauchy pressure is $p$. We take the internal energy density of the liquid to be $$\label{eq.en-fl} W_{f\!\ell}(F):= \frac{\lambda_{f\!\ell}}2 (\det F-1)^2-p\det F, \quad\lambda_{f\!\ell}>0.$$ In other words, the liquid is assumed to be an elastic fluid with bulk modulus $\lambda_{f\!\ell}$. This corresponds to a first Piola-Kirchhoff stress of the form $$\label{eq.PK}\displaystyle P=\frac{\partial W_{f\!\ell}}{\partial F}(F)= (-p+\lambda_{f\!\ell}(\det F -1))\mathop{\mathrm{cof}}F,$$ hence to a Cauchy stress of the form $$\Sigma=(-p +\lambda_{f\!\ell}(\det F-1))I.$$ Because of the presence of interfacial forces at the solid/liquid interface $\partial B_a$, the interface $\partial B_a$ exerts a normal force per unit surface area $-\gamma \kappa(y(x))\nu_{y(x)}$ on the liquid. In this last expression, $\gamma\geq 0$ stands for the surface tension of the solid/liquid interface, an intrinsic property of the interface at hand, while $\kappa(y(x)), x\in \partial B_a$, is the mean curvature at the image of $x$ under the deformation $y: {\Omega}\to\mathbb{R}^3$, and $\nu_{y(x)}$ is the exterior normal to $y(B_a)$ at $y(x)$.
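As a quick sanity check of [\[eq.PK\]](#eq.PK){reference-type="eqref" reference="eq.PK"} (an aside, not part of the analysis), one can verify numerically that $(-p+\lambda_{f\!\ell}(\det F -1))\mathop{\mathrm{cof}}F$ coincides with the entrywise finite-difference gradient of $W_{f\!\ell}$, which is just Jacobi's formula $\partial(\det F)/\partial F=\mathop{\mathrm{cof}}F$ at work. The parameter values and the test matrix below are arbitrary choices made only for this sketch.

```python
import numpy as np

# Illustration only: check P = (-p + lam*(det F - 1)) * cof(F) against the
# finite-difference gradient of W_fl(F) = 0.5*lam*(det F - 1)^2 - p*det F.
lam, p = 2.0, 0.3                                   # assumed values

def W_fl(F):
    J = np.linalg.det(F)
    return 0.5 * lam * (J - 1.0) ** 2 - p * J

def cof(F):
    return np.linalg.det(F) * np.linalg.inv(F).T

F = np.eye(3) + 0.1 * np.random.default_rng(0).standard_normal((3, 3))
P = (-p + lam * (np.linalg.det(F) - 1.0)) * cof(F)

h = 1e-6
P_fd = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        dF = np.zeros((3, 3)); dF[i, j] = h
        P_fd[i, j] = (W_fl(F + dF) - W_fl(F - dF)) / (2.0 * h)

print(np.max(np.abs(P - P_fd)))                     # tiny: the two expressions coincide
```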
We assume that, in the absence of any external loading process, the initial liquid pressure $p$ is so that the hydrostatic pressure $-p{\vec{e}_r}$ equilibrates the surface tension without deformation of the ball so that, $y\equiv{\rm \bf{id}}$, $\kappa\equiv -2/a$ and $$\label{eq.eqm} p=2\gamma/a.$$ The contribution to the energy of the surface tension is $\gamma \mathcal{H}^2(\partial y(B_a))$. Indeed, assuming that $y$ is a $C^2$-diffeomorphism on $\overline{\Omega}$ so that, in particular, $y({\Omega})$ is open, and $\partial y(B_a)$ is a $C^2$-manifold, and considering a deformation of the form $\Phi^\varepsilon(y)=y+\varepsilon z(y)$ with $z$ smooth and compactly supported in $y({\Omega})$, we get (see [@AFP Theorems 7.31, 7.34]) $$\label{eq.der-area} \frac{\partial}{\partial\varepsilon}\mathcal{H}^2(\partial y(B_a))|_{\varepsilon=0}=\int_{\partial y(B_a)} \mathop{\mathrm{div}}_\tau z \,d\mathcal{H}^2=- \int_{\partial y(B_a)}\kappa(y(x))\nu_{y(x)}\cdot z\, d\mathcal{H}^2.$$ Now, since $y$ is one to one and $\partial y(B_a)=y(\partial B_a)$, the area formula yields $$\label{eq.area} \mathcal{H}^2(\partial y(B_a))=\int_{\partial B_a} |\mathop{\mathrm{cof}}\nabla y{\vec{e}_r}|\, d\mathcal{H}^2$$ and that contribution becomes $$\label{eq.st-cont} \gamma \int_{\partial B_a} |\mathop{\mathrm{cof}}\nabla y{\vec{e}_r}| \,d\mathcal{H}^2.$$ Further, the fluid must be in equilibrium; thus $\mathop{\mathrm{div}}P=0$ in $B_a$ with $P$ defined by [\[eq.PK\]](#eq.PK){reference-type="eqref" reference="eq.PK"}. Explicit computations yield $$0=\mathop{\mathrm{div}}P=\lambda_{f\!\ell}(\mathop{\mathrm{cof}}\nabla y)^T\nabla(\det\nabla y).$$ Since $\det\nabla y>0$, the matrix $(\mathop{\mathrm{cof}}\nabla y)^T$ is invertible, hence $\det \nabla y \equiv {\rm cst}$ in $B_a$ and the constant must be $|y(B_a)|/|B_a|$. So, recalling [\[eq.en-fl\]](#eq.en-fl){reference-type="eqref" reference="eq.en-fl"}, the contribution of the fluid to the energy is $$\label{eq.fl-cont} -p |y(B_a)|+ \frac{\lambda_{f\!\ell}}2 |B_a| \left(\frac{|y(B_a)|}{|B_a|}-1\right)^2.$$ In view of [\[eq.st-cont\]](#eq.st-cont){reference-type="eqref" reference="eq.st-cont"}, [\[eq.fl-cont\]](#eq.fl-cont){reference-type="eqref" reference="eq.fl-cont"}, we conclude that, for $y$ a $C^2$-diffeomorphism on $\overline{\Omega}$ and in the presence of a body load applied to the solid part of ${\Omega}$ and of density $\varepsilon f$ with $\varepsilon>0$ and $f:{{\Omega}\setminus \overline B_a}\to\mathbb{R}^3$, the total energy is given by $$\label{eq.tot-en} \mathscr E_\varepsilon(y):=\int_{{\Omega}\setminus \overline B_a}W(\nabla y)\, dx + \mathscr J(y) -\varepsilon\int_{{\Omega}\setminus \overline B_a}f\cdot y\, dx$$ where $$\label{eq.tot-en2} \mathscr J(y):= \gamma \int_{\partial B_a} |(\mathop{\mathrm{cof}}\nabla y){\vec{e}_r}|\,d\mathcal{H}^2+ \frac{\lambda_{f\!\ell}}2 |B_a| \left(\frac{|y(B_a)|}{|B_a|}-1\right)^2-p |y(B_a)|.$$ Note that we can always write $$|y(B_a)|=\int_{B_a}\det\nabla y(x)\, dx=\frac13\int_{\partial B_a} \mathop{\mathrm{cof}}\nabla y {\vec{e}_r}\cdot y\, d\mathcal{H}^2.$$ **Remark 2**. From a mechanical standpoint the imposition of a (dead) body load acting only on the solid part of the domain ${{\Omega}\setminus \overline B_a}$ might seem unrealistic. It would certainly be more appropriate to impose surface loads on $\partial{\Omega}\setminus\overline\Gamma$, where $\Gamma$ is the Dirichlet part of the boundary. 
Of course surface loads are generally live loads and this would add an additional level of complexity which we do not want to address here. The interested reader is directed to [@MR] where a study of linearization in the presence of pressure loads is undertaken. ¶ ## Existence of a minimizer in the absence of external loads As seen in the previous Subsection, relation [\[eq.eqm\]](#eq.eqm){reference-type="eqref" reference="eq.eqm"} ensures that the pressure in the fluid $B_a$ is balanced by the surface tension on $\partial B_a$ in the initial configuration. Consequently, in the absence of external loadings, no loads are applied to the solid part of ${\Omega}$, that is to ${\Omega}\setminus\overline B_a$. Thus the identity mapping ${\rm \bf{id}}$ should in essence be a "stationary point\" for $\mathscr J$, this independently of the values of $\gamma$ or $\lambda_{f\!\ell}$. Of course, such will only be true for smooth variations because of the determinant constraint. In the context of [\[eq.tot-en\]](#eq.tot-en){reference-type="eqref" reference="eq.tot-en"}, we investigate if and when the identity mapping ${\rm \bf{id}}$ is an energetic minimizer when $f\equiv 0$. Formally, the isoperimetric inequality implies that $$\mathcal{H}^2(\partial y(B_a))\ge (36\pi)^{1/3} |y(B_a)|^{2/3}=:C|y(B_a)|^{2/3}$$ so that, in view of [\[eq.area\]](#eq.area){reference-type="eqref" reference="eq.area"}, $$\mathscr J(y)\ge \gamma C|y(B_a)|^{2/3}+\frac{\lambda_{f\!\ell}}2 |B_a| \left(\frac{|y(B_a)|}{|B_a|}-1\right)^2-p |y(B_a)|.$$ Set $$\Phi(t):= \gamma Ct^{2/3}+\frac{\lambda_{f\!\ell}}2 |B_a| \left(\frac{t}{|B_a|}-1\right)^2-p t\quad \text{ for } t\ge 0,$$ and note that $$\Phi(0)= \frac23 \pi \lambda_{f\!\ell}a^3>0, \quad \Phi'(0)=+\infty, \quad \lim_{t\to+\infty} \Phi(t)=\lim_{t\to+\infty} \Phi'(t)=+\infty,$$ and $\Phi''(t)<0$ for $t<t^*$, $\Phi''(t)>0$ for $t>t^*$ with $$t^*= \left(\frac{9\lambda_{f\!\ell}}{2\gamma C |B_a|}\right)^{-3/4}.$$ Now, recalling [\[eq.eqm\]](#eq.eqm){reference-type="eqref" reference="eq.eqm"}, $\Phi'(t^*)=((2^{11}/3^3)\lambda_{f\!\ell})^{1/4}(\gamma /a)^{3/4}-(\lambda_{f\!\ell}+2\gamma /a)$. The maximum in $\gamma/a$ of the previous expression is attained at $\gamma/ a= (3/2)\lambda_{f\!\ell}$ and it is $0$, so that $\Phi'(t^*)<0$ except when $\gamma=(3/2)\lambda_{f\!\ell}a$ in which case $\Phi'(t^*)=0$. In view of the already established properties of $\Phi$, this means that, for $\gamma\ne (3/2)\lambda_{f\!\ell}a$, $\Phi$ has a unique maximizer at some point $t'<t^*$ and a unique minimizer at some point $t''>t^*$. Now, $\Phi'(|B_a|)=0$ and $\Phi''(|B_a|)=a^{-3}/\pi(3\lambda_{f\!\ell}/4-\gamma a^{-1}/2)$, which is positive if and only if $\gamma< (3/2)\lambda_{f\!\ell}a$. In that case, $|B_a|$ must be a minimizer and it is unique. For $\gamma=(3/2)\lambda_{f\!\ell}a$ we have $\Phi'(t)\ge 0$ for $t>0$ and thus $\Phi$ is increasing. Further $\Phi(|B_a|)>\Phi(0)$, so $|B_a|$ is not a minimizer of $\Phi$. Since the elastic energy $\int_{{\Omega}\setminus\overline B_a}W(\nabla y) dx$ is strictly positive for $y\ne Rx+c$ we conclude that, provided that $\gamma< (3/2)\lambda_{f\!\ell}a$, $\mathscr E_\varepsilon$ is only minimized at $y={\rm \bf{id}}$, and also possibly at $y=Rx+c$ for $R\in SO(3)$ and $c\in\mathbb{R}^3$, if we have imposed no boundary conditions on $y$. In conclusion we have obtained the following **Lemma 3**. 
*In the absence of external loadings and under assumption [\[eq.eqm\]](#eq.eqm){reference-type="eqref" reference="eq.eqm"}, ${\rm \bf{id}}$ is the unique minimizer of $\mathscr E_\varepsilon$ (possibly up to rotations and translations) provided that $$\label{eq.cond-min} \gamma< \frac32\lambda_{f\!\ell}a \quad(\mbox{or equivalently } p< 3\lambda_{f\!\ell}).$$ Further $\mathscr E_\varepsilon({\rm \bf{id}})=\displaystyle\frac43\pi\gamma a^2$.* **Remark 4**. Condition ([\[eq.cond-min\]](#eq.cond-min){reference-type="ref" reference="eq.cond-min"}) is always satisfied by incompressible liquids, i.e. when $\lambda_{f\!\ell}=+\infty$.¶ From now onward, we restrict the setting to that for which *both* [\[eq.eqm\]](#eq.eqm){reference-type="eqref" reference="eq.eqm"} and [\[eq.cond-min\]](#eq.cond-min){reference-type="eqref" reference="eq.cond-min"} hold true. # Linearization in the presence of a higher order regularization {#sec.lin} As mentioned in the introduction, the linearization process cannot succeed without the addition of a regularizing term. For simplicity we will assume henceforth that the domain ${\Omega}$ is clamped on a part $\Gamma$ of its boundary, so that we can apply Poincaré's inequality. We assume $\Gamma$ to be a non-empty subset of $\partial{\Omega}$, open in the relative topology of $\partial{\Omega}$ and such that ${\rm cap}_2(\overline\Gamma\setminus\Gamma)=0$ (we refer to [@EG Section 4.7] for the notion of $2$-capacity). This last condition is needed to ensure a suitable density result in the proof of the $\Gamma$-limsup inequality. Let $p>3$ and $$W^{2,p}_\Gamma({\Omega};\mathbb{R}^3):=\{u\in W^{2,p}({\Omega};\mathbb{R}^3): \ u=0\mbox{ on }\Gamma\}.$$ For a given load $f\in L^2({{\Omega}\setminus \overline B_a};\mathbb{R}^3)$ we consider the regularized functional $\mathcal{F}_\varepsilon$ defined on $C^2_{loc}({{\Omega}};\mathbb{R}^3)\cap W^{2,p}_\Gamma({\Omega};\mathbb{R}^3)$ as $$\label{eq.reg} \mathcal{F}_\varepsilon(u):=\frac1{\varepsilon^2}\Big(\mathscr E_\varepsilon({\rm \bf{id}}+\varepsilon u)-\mathscr E_\varepsilon({\rm \bf{id}})\Big) +\eta_\varepsilon\int_{{\Omega}\setminus \overline B_a}|\nabla^2 u|^p\, dx,$$ where $$\label{eta_e} \varepsilon^{p/3}\le\eta_\varepsilon\stackrel{\varepsilon}{\to}0.$$ We define $$\label{def.A} \mathcal{A}:=\{u\in H^1({{\Omega}\setminus \overline B_a};\mathbb{R}^3): \ u\cdot{\vec{e}_r}\in H^1(\partial B_a) \text{ and }u=0 \text{ on }\Gamma\}$$ and the functional on $\mathcal{A}$ $$\begin{gathered} \mathcal{F}(u):=\frac12 \int_{{\Omega}\setminus \overline B_a}Q(\mathbf Eu)\, dx +\frac\gamma2\int_{\partial B_a}|\nabla_\tau (u\cdot{\vec{e}_r})|^2\, d\mathcal{H}^2 -\frac{\gamma}{a^2}\int_{\partial B_a} |u\cdot{\vec{e}_r}|^2 \, d\mathcal{H}^2 \\ +\frac{\lambda_{f\!\ell}}{2|B_a|} \left(\int_{\partial B_a} u\cdot{\vec{e}_r}\,d\mathcal{H}^2\right)^2 -\int_{{\Omega}\setminus \overline B_a}f\cdot u\, dx\end{gathered}$$ with $\mathbf Eu:=1/2(\nabla u+\nabla u^T)$ and $Q$ defined as in Remark [Remark 1](#rm.positdef){reference-type="ref" reference="rm.positdef"}. Our first main result is the following compactness and convergence theorem. **Theorem 5**. *Assume that [\[eq.eqm\]](#eq.eqm){reference-type="eqref" reference="eq.eqm"} and [\[eq.cond-min\]](#eq.cond-min){reference-type="eqref" reference="eq.cond-min"} hold true. 
Let $(u^\varepsilon)$ be a sequence in $C^2_{loc}({{\Omega}};\mathbb{R}^3)\cap W^{2,p}_\Gamma({\Omega};\mathbb{R}^3)$ such that $$\label{min-seq} \mathcal{F}_\varepsilon(u^\varepsilon)\leq C.$$ Then there exists $u\in\mathcal{A}$ such that, up to subsequences, $u^\varepsilon$ converge to $u$ weakly in $H^1({{\Omega}\setminus \overline B_a};\mathbb{R}^3)$ and $u^\varepsilon\cdot{\vec{e}_r}\rightharpoonup u\cdot{\vec{e}_r}$ weakly in $H^1(\partial B_a)$.* *Further, the $\Gamma$-limit of $\mathcal{F}_\varepsilon$ in the strong $L^2({\Omega}\setminus\overline B_a;\mathbb{R}^3)$ topology is precisely $\mathcal{F}$.* **Remark 6**. Condition [\[min-seq\]](#min-seq){reference-type="eqref" reference="min-seq"} is clearly satisfied by any minimizing sequence $(u^\varepsilon)$ of $\mathcal{F}_\varepsilon$ since $\mathcal{F}_\varepsilon(0)=0$.¶ **Remark 7**. By [\[PW-sphere\]](#PW-sphere){reference-type="eqref" reference="PW-sphere"} and [\[eq.cond-min\]](#eq.cond-min){reference-type="eqref" reference="eq.cond-min"} we have $$\frac\gamma2\int_{\partial B_a}|\nabla_\tau (u\cdot{\vec{e}_r})|^2\, d\mathcal{H}^2 -\frac{\gamma}{a^2}\int_{\partial B_a} |u\cdot{\vec{e}_r}|^2 \, d\mathcal{H}^2 +\frac{\lambda_{f\!\ell}}{2|B_a|} \left(\int_{\partial B_a} u\cdot{\vec{e}_r}\,d\mathcal{H}^2\right)^2\ge 0$$ for any $u\in \mathcal{A}$. Furthermore, again by [\[PW-sphere\]](#PW-sphere){reference-type="eqref" reference="PW-sphere"} and [\[eq.cond-min\]](#eq.cond-min){reference-type="eqref" reference="eq.cond-min"}, for every $\delta>0$ small enough the following coercivity property holds: $$\begin{gathered} \frac\gamma2\int_{\partial B_a}|\nabla_\tau (u\cdot{\vec{e}_r})|^2\, d\mathcal{H}^2 -\frac{\gamma}{a^2}\int_{\partial B_a} |u\cdot{\vec{e}_r}|^2 \, d\mathcal{H}^2 +\frac{\lambda_{f\!\ell}}{2|B_a|} \left(\int_{\partial B_a} u\cdot{\vec{e}_r}\,d\mathcal{H}^2\right)^2 \\ \ge \delta \int_{\partial B_a}|\nabla_\tau (u\cdot{\vec{e}_r})|^2\, d\mathcal{H}^2 - 2\frac{\delta}{a^2} \int_{\partial B_a} |u\cdot{\vec{e}_r}|^2 \, d\mathcal{H}^2\end{gathered}$$ for any $u\in \mathcal{A}$. ¶ *Proof of Theorem [Theorem 5](#thm){reference-type="ref" reference="thm"}.* We split the proof into three steps. By [\[min-seq\]](#min-seq){reference-type="eqref" reference="min-seq"} we deduce that $$\begin{gathered} \frac1{\varepsilon^2} \left(\int_{{\Omega}\setminus \overline B_a}W(I+\varepsilon\nabla u^\varepsilon)\; dx+ \mathscr J({\rm \bf{id}}+\varepsilon u^\varepsilon)-\frac43\pi\gamma a^2\right) +\eta_\varepsilon\int_{{\Omega}\setminus \overline B_a}|\nabla^2 u^\varepsilon|^p\, dx \\ \leq \int_{{\Omega}\setminus \overline B_a}f\cdot u^\varepsilon\, dx +C \leq \|f\|_{L^2({{\Omega}\setminus \overline B_a})}\|u^\varepsilon\|_{L^2({{\Omega}\setminus \overline B_a})}+C. 
\label{bound-I-v}\end{gathered}$$ Since $\mathscr J({\rm \bf{id}}+\varepsilon u^\varepsilon)\geq \frac43\pi\gamma a^2$ by Lemma [Lemma 3](#lem.id){reference-type="ref" reference="lem.id"}, for some possibly larger $C>0$ we have $$\int_{{\Omega}\setminus \overline B_a}W(I+\varepsilon\nabla u^\varepsilon)\, dx\leq C\varepsilon^2(\|u^\varepsilon\|_{L^2({{\Omega}\setminus \overline B_a})}+1).$$ We now apply the rigidity estimate [@FJM Theorem 3.1] and conclude that there exists a constant $R^\varepsilon\in SO(3)$ such that $$\label{rigidity} \|I+\varepsilon\nabla u^\varepsilon-R^\varepsilon\|^2_{L^2({{\Omega}\setminus \overline B_a})}\le C\varepsilon^2(\|u^\varepsilon\|_{L^2({{\Omega}\setminus \overline B_a})}+1).$$ Let $\xi^\varepsilon$ be the mean of the function ${\rm \bf{id}}+\varepsilon u^\varepsilon-R^\varepsilon x$ on ${{\Omega}\setminus \overline B_a}$. By Poincaré-Wirtinger's inequality we obtain $$\|{\rm \bf{id}}+\varepsilon u^\varepsilon-R^\varepsilon x-\xi^\varepsilon\|_{H^1({{\Omega}\setminus \overline B_a})}^2\leq C\varepsilon^2(\|u^\varepsilon\|_{L^2({{\Omega}\setminus \overline B_a})}+1).$$ Since $u^\varepsilon=0$ on $\Gamma$, the continuity of the trace operator yields $$\|{\rm \bf{id}}-R^\varepsilon x-\xi^\varepsilon\|_{L^2(\Gamma)}^2\leq C\|{\rm \bf{id}}+\varepsilon u^\varepsilon-R^\varepsilon x-\xi^\varepsilon\|_{H^1({{\Omega}\setminus \overline B_a})}^2\leq C\varepsilon^2(\|u^\varepsilon\|_{L^2({{\Omega}\setminus \overline B_a})}+1).$$ By [@DMNP Lemma 3.3] (see also the proof of [@DMNP Proposition 3.4]) this implies that $$|I-R^\varepsilon|\leq C\varepsilon^2(\|u^\varepsilon\|_{L^2({{\Omega}\setminus \overline B_a})}+1).$$ Consequently, by [\[rigidity\]](#rigidity){reference-type="eqref" reference="rigidity"} and the boundary condition on $\Gamma$, we conclude that $$\label{eq.H1bound} (u^\varepsilon) \text{ is bounded in } H^1({{\Omega}\setminus \overline B_a};\mathbb{R}^3).$$ Therefore, by [\[bound-I-v\]](#bound-I-v){reference-type="eqref" reference="bound-I-v"} we obtain $$\label{bound-J-v} \mathscr J({\rm \bf{id}}+\varepsilon u^\varepsilon)-\frac43\pi\gamma a^2 \leq C\varepsilon^2$$ and $$\eta_\varepsilon\int_{{\Omega}\setminus \overline B_a}|\nabla^2 u^\varepsilon|^p\, dx \leq C.$$ By Sobolev embedding we deduce that $$\|\nabla u^\varepsilon\|_{C^0(\overline\Omega\setminus B_a)} \leq C\big( \|\nabla u^\varepsilon\|_{L^2}+ \|\nabla^2 u^\varepsilon\|_{L^p}\big) \leq C + C\left(\frac{1}{\eta_\varepsilon}\right)^{\frac1p}.$$ Thus, by [\[eta_e\]](#eta_e){reference-type="eqref" reference="eta_e"}, $$\label{bound-ho-v} \varepsilon\|\nabla u^\varepsilon\|_{C^0(\overline\Omega\setminus B_a)} \leq C\left(\frac{\varepsilon^{p}}{\eta_\varepsilon}\right)^{\frac 1p}=:C\sigma_\varepsilon\to0.$$ In all that follows the $O(\sigma_\varepsilon^3)$ terms should be understood as quantities whose norm in $C^0(\overline\Omega\setminus B_a)$ is of order $\sigma_\varepsilon^k$ with $k\ge 3$. In other words, because of estimate [\[bound-ho-v\]](#bound-ho-v){reference-type="eqref" reference="bound-ho-v"}, we can neglect in the $\Gamma$-convergence process all terms that are more than quadratic in $\varepsilon\nabla u^\varepsilon$. 
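The exact algebraic identities [\[eq.alg1\]](#eq.alg1){reference-type="eqref" reference="eq.alg1"}--[\[eq.alg3\]](#eq.alg3){reference-type="eqref" reference="eq.alg3"} recalled in the Introduction drive the expansions carried out next. As an aside (not part of the proof), they are easy to confirm numerically on randomly generated matrices; the following sketch is included only as a sanity check.

```python
import numpy as np

# Aside: numerical check of the exact 3x3 identities (eq.alg1)-(eq.alg3);
# "." is the Frobenius inner product A.B = tr(B^T A).
rng = np.random.default_rng(1)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

def cof(M):
    return np.linalg.det(M) * np.linalg.inv(M).T    # well defined since det(M) != 0 here

frob = np.tensordot(cof(A), B)                      # cof(A) . B
lhs1 = cof(A + B)
rhs1 = cof(A) + cof(B) + (frob * cof(A) - cof(A) @ B.T @ cof(A)) / np.linalg.det(A)

rhs2 = 0.5 * (np.trace(A) ** 2 - np.trace(A @ A)) * np.eye(3) \
    - np.trace(A) * A.T + A.T @ A.T                 # right-hand side of (eq.alg2)
rhs3 = 0.5 * (np.trace(A) ** 2 - np.trace(A @ A))   # right-hand side of (eq.alg3)

print(np.max(np.abs(lhs1 - rhs1)))                  # machine-precision agreement
print(np.max(np.abs(cof(A) - rhs2)))                # machine-precision agreement
print(abs(np.trace(cof(A)) - rhs3))                 # machine-precision agreement
```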
Using [\[eq.alg1\]](#eq.alg1){reference-type="eqref" reference="eq.alg1"} we have that $$\label{eq.det-MG} \det (I+\varepsilon\nabla u^\varepsilon)= 1+\varepsilon\mathop{\mathrm{div}}u^\varepsilon+\varepsilon^2\mathop{\mathrm{tr}}(\mathop{\mathrm{cof}}\nabla u^\varepsilon)+\varepsilon^3\det\nabla u^\varepsilon,$$ while $$\label{eq.cof-MG} \mathop{\mathrm{cof}}(I+\varepsilon\nabla u^\varepsilon)= I+ \varepsilon\big((\mathop{\mathrm{div}}u^\varepsilon) I-(\nabla u^\varepsilon)^T\big)+ \varepsilon^2\mathop{\mathrm{cof}}\nabla u^\varepsilon.$$ From [\[eq.alg2\]](#eq.alg2){reference-type="eqref" reference="eq.alg2"} and [\[eq.alg3\]](#eq.alg3){reference-type="eqref" reference="eq.alg3"} we can further write that $$\begin{gathered} \label{eq.cof-MGbis} \mathop{\mathrm{cof}}(I+\varepsilon\nabla u^\varepsilon) = I+ \varepsilon\big((\mathop{\mathrm{div}}u^\varepsilon) I-(\nabla u^\varepsilon)^T\big) \\ + \varepsilon^2 \big(\mathop{\mathrm{tr}}(\mathop{\mathrm{cof}}\nabla u^\varepsilon)I-\mathop{\mathrm{div}}u^\varepsilon(\nabla u^\varepsilon)^T +(\nabla u^\varepsilon)^T(\nabla u^\varepsilon)^T\big).\end{gathered}$$ By [\[eq.cof-MG\]](#eq.cof-MG){reference-type="eqref" reference="eq.cof-MG"} we obtain $$\begin{aligned} |\mathop{\mathrm{cof}}(I+\varepsilon\nabla u^\varepsilon) {\vec{e}_r}|^2 & = & 1+2\varepsilon\big(\mathop{\mathrm{div}}u^\varepsilon-(\nabla u^\varepsilon)^T{\vec{e}_r}\cdot{\vec{e}_r}\big) \nonumber \\ && {}+ \varepsilon^2\big( |(\mathop{\mathrm{div}}u^\varepsilon){\vec{e}_r}-(\nabla u^\varepsilon)^T{\vec{e}_r}|^2+ 2\mathop{\mathrm{cof}}\nabla u_\varepsilon{\vec{e}_r}\cdot{\vec{e}_r}\big) \nonumber \\ && {} +O(\sigma_\varepsilon^3). \label{fist}\end{aligned}$$ The $\varepsilon$-term in [\[fist\]](#fist){reference-type="eqref" reference="fist"} reads as $2\varepsilon\mathop{\mathrm{div}}_\tau u^\varepsilon$, while for the $\varepsilon^2$-term we use that $$\label{eq.form-tau} (\mathop{\mathrm{div}}u^\varepsilon){\vec{e}_r}-(\nabla u^\varepsilon)^T{\vec{e}_r}= (\mathop{\mathrm{div}}_\tau u^\varepsilon){\vec{e}_r}-(\nabla_\tau u^\varepsilon)^T{\vec{e}_r}$$ so that $$|(\mathop{\mathrm{div}}u^\varepsilon){\vec{e}_r}-(\nabla u^\varepsilon)^T{\vec{e}_r}|^2= |(\mathop{\mathrm{div}}_\tau u^\varepsilon){\vec{e}_r}-(\nabla_\tau u^\varepsilon)^T{\vec{e}_r}|^2 =(\mathop{\mathrm{div}}_\tau u^\varepsilon)^2+ |(\nabla_\tau u^\varepsilon)^T{\vec{e}_r}|^2.$$ Finally, from [\[eq.alg4\]](#eq.alg4){reference-type="eqref" reference="eq.alg4"}, we deduce that $\mathop{\mathrm{cof}}\nabla u_\varepsilon{\vec{e}_r}=\mathop{\mathrm{cof}}\nabla_\tau u_\varepsilon{\vec{e}_r}$. 
Thus, $$\begin{aligned} |\mathop{\mathrm{cof}}(I+\varepsilon\nabla u^\varepsilon) {\vec{e}_r}|^2 & = & 1+2\varepsilon\mathop{\mathrm{div}}_\tau u^\varepsilon \\ & & + \varepsilon^2\big( |(\nabla_\tau u^\varepsilon)^T{\vec{e}_r}|^2 +(\mathop{\mathrm{div}}_\tau u^\varepsilon)^2 +2\mathop{\mathrm{cof}}\nabla_\tau u_\varepsilon{\vec{e}_r}\cdot{\vec{e}_r}\big) +O(\sigma_\varepsilon^3).\end{aligned}$$ Using the expansion $\sqrt{1+x}=1+\frac12 x-\frac18 x^2+O(x^3)$, we conclude that $$\label{eq.|cof|}|\mathop{\mathrm{cof}}\nabla y^\varepsilon{\vec{e}_r}|=1+\varepsilon\mathop{\mathrm{div}}_\tau u^\varepsilon + \frac{\varepsilon^2}2\big( |(\nabla_\tau u^\varepsilon)^T{\vec{e}_r}|^2 +2\mathop{\mathrm{cof}}\nabla_\tau u_\varepsilon{\vec{e}_r}\cdot{\vec{e}_r}\big) +O(\sigma_\varepsilon^3).$$ From [\[eq.alg2\]](#eq.alg2){reference-type="eqref" reference="eq.alg2"} and [\[eq.alg3\]](#eq.alg3){reference-type="eqref" reference="eq.alg3"}, we have that $$\begin{gathered} 2\mathop{\mathrm{cof}}\nabla_\tau u_\varepsilon{\vec{e}_r}\cdot{\vec{e}_r}=2\mathop{\mathrm{tr}}(\mathop{\mathrm{cof}}\nabla_\tau u^\varepsilon) -(\mathop{\mathrm{div}}_\tau u^\varepsilon)(\nabla_\tau u^\varepsilon)^T{\vec{e}_r}\cdot{\vec{e}_r}\\+(\nabla_\tau u^\varepsilon)^T(\nabla_\tau u^\varepsilon)^T{\vec{e}_r}\cdot{\vec{e}_r}.\end{gathered}$$ But, since $\nabla_\tau u^\varepsilon{\vec{e}_r}=0$, $$-(\mathop{\mathrm{div}}_\tau u^\varepsilon)(\nabla_\tau u^\varepsilon)^T{\vec{e}_r}\cdot{\vec{e}_r}+(\nabla_\tau u^\varepsilon)^T(\nabla_\tau u^\varepsilon)^T{\vec{e}_r}\cdot{\vec{e}_r}=0,$$ so that, using [\[eq.alg3\]](#eq.alg3){reference-type="eqref" reference="eq.alg3"} once again, $$\label{eq.cofuerer} 2\mathop{\mathrm{cof}}\nabla_\tau u_\varepsilon{\vec{e}_r}\cdot{\vec{e}_r}=2\mathop{\mathrm{tr}}(\mathop{\mathrm{cof}}\nabla_\tau u^\varepsilon)= (\mathop{\mathrm{div}}_\tau u^\varepsilon)^2-(\nabla_\tau u^\varepsilon)^T\cdot\nabla_\tau u^\varepsilon.$$ Hence [\[eq.\|cof\|\]](#eq.|cof|){reference-type="eqref" reference="eq.|cof|"} finally reads as $$\label{eq.|cof|bis} |\mathop{\mathrm{cof}}\nabla y^\varepsilon{\vec{e}_r}|=1+\varepsilon\mathop{\mathrm{div}}_\tau u^\varepsilon+\frac{\varepsilon^2}2\big( |(\nabla_\tau u^\varepsilon)^T{\vec{e}_r}|^2 +(\mathop{\mathrm{div}}_\tau u^\varepsilon)^2-(\nabla_\tau u^\varepsilon)^T\cdot\nabla_\tau u^\varepsilon\big)+O(\sigma_\varepsilon^3).$$ With the help of [\[eq.det-MG\]](#eq.det-MG){reference-type="eqref" reference="eq.det-MG"} and [\[eq.\|cof\|bis\]](#eq.|cof|bis){reference-type="eqref" reference="eq.|cof|bis"} we get from [\[eq.tot-en2\]](#eq.tot-en2){reference-type="eqref" reference="eq.tot-en2"} $$\begin{gathered} \label{eq.lin-en} \mathscr J({\rm \bf{id}}+\varepsilon u^\varepsilon) = \gamma \mathcal{H}^2(\partial B_a)-p|B_a|+\varepsilon\int_{\partial B_a}\big(\gamma\mathop{\mathrm{div}}_\tau u^\varepsilon-p u^\varepsilon\cdot{\vec{e}_r}\big)\, d\mathcal{H}^2 \\ +\varepsilon^2 \left( \int_{\partial B_a} \frac\gamma2\left( |(\nabla_\tau u^\varepsilon)^T{\vec{e}_r}|^2 +(\mathop{\mathrm{div}}_\tau u^\varepsilon)^2-(\nabla_\tau u^\varepsilon)^T\cdot\nabla_\tau u^\varepsilon\right) \, d\mathcal{H}^2\right. \\ +\frac{\lambda_{f\!\ell}}{2|B_a|} \left(\int_{\partial B_a} u^\varepsilon\cdot{\vec{e}_r}\, d\mathcal{H}^2\right)^2 \left. 
{} -p\int_B\mathop{\mathrm{tr}}(\mathop{\mathrm{cof}}\nabla u^\varepsilon)\, dx\right)+O(\sigma_\varepsilon^3).\end{gathered}$$ In view of [\[eq.eqm\]](#eq.eqm){reference-type="eqref" reference="eq.eqm"}, the constant term in [\[eq.lin-en\]](#eq.lin-en){reference-type="eqref" reference="eq.lin-en"} is $4/3\pi\gamma a^2$ while the linear term disappears upon invoking the second equality in [\[eq.der-area\]](#eq.der-area){reference-type="eqref" reference="eq.der-area"} for the sphere $\partial B_a$. Note also that, with the help of [\[eq.alg3\]](#eq.alg3){reference-type="eqref" reference="eq.alg3"} once again, $$\mathop{\mathrm{tr}}(\mathop{\mathrm{cof}}\nabla u^\varepsilon)=\frac12\big((\mathop{\mathrm{div}}u^\varepsilon)^2- \mathop{\mathrm{tr}}[(\nabla u^\varepsilon)^2]\big)=\frac12\big(\mathop{\mathrm{div}}(\mathop{\mathrm{div}}u^\varepsilon u^\varepsilon)-\mathop{\mathrm{div}}(\nabla u^\varepsilon u^\varepsilon)\big),$$ so the last term in [\[eq.lin-en\]](#eq.lin-en){reference-type="eqref" reference="eq.lin-en"} can be written as the boundary term $$-\frac{p}2\int_{\partial B_a} u^\varepsilon\cdot(\mathop{\mathrm{div}}u^\varepsilon{\vec{e}_r}- (\nabla u^\varepsilon)^T {\vec{e}_r})\ d\mathcal{H}^2$$ or still, in view of [\[eq.form-tau\]](#eq.form-tau){reference-type="eqref" reference="eq.form-tau"}, as $$-\frac{p}2 \int_{\partial B_a}\big(\mathop{\mathrm{div}}_\tau u^\varepsilon u^\varepsilon\cdot {\vec{e}_r}- u^\varepsilon\cdot (\nabla_\tau u^\varepsilon)^T{\vec{e}_r}\big)\, d\mathcal{H}^2.$$ Summing up, [\[eq.lin-en\]](#eq.lin-en){reference-type="eqref" reference="eq.lin-en"} also reads as $$\begin{aligned} \lefteqn{\mathscr J({\rm \bf{id}}+\varepsilon u^\varepsilon) -\mathscr J({\rm \bf{id}})} \nonumber \\ & = & \varepsilon^2 \left( \int_{\partial B_a} \frac\gamma2\left( |(\nabla_\tau u^\varepsilon)^T{\vec{e}_r}|^2 +(\mathop{\mathrm{div}}_\tau u^\varepsilon)^2-(\nabla_\tau u^\varepsilon)^T\cdot\nabla_\tau u^\varepsilon\right) \, d\mathcal{H}^2\right. \nonumber \\ & & {} +\frac{\lambda_{f\!\ell}}{2|B_a|} \left(\int_{\partial B_a} u^\varepsilon\cdot{\vec{e}_r}\, d\mathcal{H}^2\right)^2 \nonumber \\ & & \left. {} -\frac{p}2 \int_{\partial B_a}\big(\mathop{\mathrm{div}}_\tau u^\varepsilon u^\varepsilon\cdot {\vec{e}_r}- u^\varepsilon\cdot (\nabla_\tau u^\varepsilon)^T{\vec{e}_r}\big)\, d\mathcal{H}^2\right)+O(\sigma_\varepsilon^3). 
\label{eq.lin-en-bis}\end{aligned}$$ Appealing to [\[eq.tau-nabla-div\]](#eq.tau-nabla-div){reference-type="eqref" reference="eq.tau-nabla-div"}, we obtain, after some algebraic manipulations, $$\begin{gathered} (\mathop{\mathrm{div}}_\tau u^\varepsilon)^2-(\nabla_\tau u^\varepsilon)^T:\nabla_\tau u^\varepsilon= \mathop{\mathrm{div}}_\tau(\mathop{\mathrm{div}}_\tau u^\varepsilon u^\varepsilon- \nabla_\tau u^\varepsilon u^\varepsilon) \\ {}+(\mathop{\mathrm{div}}_\tau(\nabla_\tau u^\varepsilon)^T\cdot{\vec{e}_r})(u^\varepsilon\cdot{\vec{e}_r}) +(\nabla_\tau u^\varepsilon)^T{\vec{e}_r}\cdot (\nabla_\tau{\vec{e}_r})^Tu^\varepsilon.\end{gathered}$$ Therefore, using the second equality in [\[eq.der-area\]](#eq.der-area){reference-type="eqref" reference="eq.der-area"} and [\[eq.eqm\]](#eq.eqm){reference-type="eqref" reference="eq.eqm"}, [\[eq.lin-en-bis\]](#eq.lin-en-bis){reference-type="eqref" reference="eq.lin-en-bis"} reduces to $$\begin{gathered} \label{e2-red} \mathscr J({\rm \bf{id}}+\varepsilon u^\varepsilon)-\mathscr J({\rm \bf{id}})=\\ \varepsilon^2\left(\int_{\partial B_a} \frac\gamma2\left( |(\nabla_\tau u^\varepsilon)^T{\vec{e}_r}|^2 +(\mathop{\mathrm{div}}_\tau(\nabla_\tau u^\varepsilon)^T\cdot{\vec{e}_r})(u^\varepsilon\cdot{\vec{e}_r}) +(\nabla_\tau u^\varepsilon)^T{\vec{e}_r}\cdot (\nabla_\tau{\vec{e}_r})^Tu^\varepsilon \right) d\mathcal{H}^2\right. \\ {}+\frac{\lambda_{f\!\ell}}{2|B_a|} \left(\int_{\partial B_a} u^\varepsilon\cdot{\vec{e}_r}\, d\mathcal{H}^2\right)^2\left.\vphantom{\int_{\partial B_a}}\right)+O(\sigma_\varepsilon^3).\end{gathered}$$ We now write $u^\varepsilon=v^\varepsilon+\varphi^\varepsilon{\vec{e}_r}$, where $v^\varepsilon\cdot{\vec{e}_r}=0$ and $\varphi^\varepsilon=u^\varepsilon\cdot{\vec{e}_r}$. By differentiating we have $$(\nabla_\tau v^\varepsilon)^T{\vec{e}_r}= - (\nabla_\tau {\vec{e}_r})^Tv^\varepsilon= -\frac1a v^\varepsilon$$ since $\nabla_\tau {\vec{e}_r}=1/a(I-{\vec{e}_r}\otimes{\vec{e}_r})$. Thus $$\label{eq.trans1} (\nabla_\tau u^\varepsilon)^T{\vec{e}_r}= \nabla_\tau \varphi^\varepsilon- (\nabla_\tau {\vec{e}_r})^Tv^\varepsilon= \nabla_\tau \varphi^\varepsilon-\frac1a v^\varepsilon.$$ Therefore, $$\begin{aligned} (\nabla_\tau u^\varepsilon)^T{\vec{e}_r}\cdot (\nabla_\tau{\vec{e}_r})^Tu^\varepsilon& = & (\nabla_\tau u^\varepsilon)^T{\vec{e}_r}\cdot (\nabla_\tau{\vec{e}_r})^Tv^\varepsilon \nonumber \\ & = & \label{eq.trans3} (\nabla_\tau u^\varepsilon)^T{\vec{e}_r}\cdot(\frac1a v^\varepsilon)=\frac1a \nabla_\tau \varphi^\varepsilon\cdot v^\varepsilon-\frac1{a^2} |v^\varepsilon|^2,\end{aligned}$$ while, using that $\nabla_\tau u^\varepsilon{\vec{e}_r}=0$ and that $\nabla_\tau \varphi^\varepsilon\cdot{\vec{e}_r}=0$, $$\begin{aligned} \mathop{\mathrm{div}}_\tau(\nabla_\tau u^\varepsilon)^T\cdot{\vec{e}_r}& = & \mathop{\mathrm{div}}_\tau(\nabla_\tau u^\varepsilon{\vec{e}_r})-(\nabla_\tau u^\varepsilon)^T:\nabla_\tau{\vec{e}_r} \nonumber \\ & = & -(\nabla_\tau u^\varepsilon)^T\cdot\nabla_\tau{\vec{e}_r}\nonumber \\ & = & -\frac 1 a\mathop{\mathrm{div}}_\tau u^\varepsilon= -\frac1a \mathop{\mathrm{div}}_\tau v^\varepsilon-\frac2{a^2}\varphi^\varepsilon. 
\label{eq.trans2}\end{aligned}$$ Combining [\[eq.trans1\]](#eq.trans1){reference-type="eqref" reference="eq.trans1"}--[\[eq.trans2\]](#eq.trans2){reference-type="eqref" reference="eq.trans2"} together, expression [\[e2-red\]](#e2-red){reference-type="eqref" reference="e2-red"} can again be rewritten as $$\begin{gathered} \label{eq.Jfinal} \mathscr J({\rm \bf{id}}+\varepsilon u^\varepsilon)-\mathscr J({\rm \bf{id}})= \\ \varepsilon^2\left(\int_{\partial B_a} \frac\gamma2\left( \big|\nabla_\tau \varphi^\varepsilon-\frac1a v^\varepsilon\big|^2 -\frac1a \varphi^\varepsilon\mathop{\mathrm{div}}_\tau v^\varepsilon-\frac2{a^2}|\varphi^\varepsilon|^2 +\frac1a \nabla_\tau \varphi^\varepsilon\cdot v^\varepsilon-\frac1{a^2} |v^\varepsilon|^2 \right) d\mathcal{H}^2\right. \\ +\frac{\lambda_{f\!\ell}}{2|B_a|} \left(\int_{\partial B_a} \varphi^\varepsilon\, d\mathcal{H}^2\right)^2 \left. \vphantom{\int_{\partial B_a}}\right)+O(\sigma_\varepsilon^3).\end{gathered}$$ Integrating by parts the second term in the first integral above yields, in view of the second equality in [\[eq.der-area\]](#eq.der-area){reference-type="eqref" reference="eq.der-area"}, $$\begin{aligned} \int_{\partial B_a} \varphi^\varepsilon\mathop{\mathrm{div}}_\tau v^\varepsilon\, d\mathcal{H}^2& = & - \int_{\partial B_a} \nabla_\tau\varphi^\varepsilon\cdot v^\varepsilon\, d\mathcal{H}^2+ \frac2a \int_{\partial B_a} \varphi^\varepsilon v^\varepsilon\cdot{\vec{e}_r}\, d\mathcal{H}^2 \\ & = & - \int_{\partial B_a} \nabla_\tau\varphi^\varepsilon\cdot v^\varepsilon\, d\mathcal{H}^2.\end{aligned}$$ We finally conclude that expression [\[eq.Jfinal\]](#eq.Jfinal){reference-type="eqref" reference="eq.Jfinal"} is given by $$\begin{gathered} \label{eq.JFinal} \mathscr J({\rm \bf{id}}+\varepsilon u^\varepsilon)-\mathscr J({\rm \bf{id}})= \mathscr J({\rm \bf{id}}+\varepsilon u^\varepsilon)- \frac4{3}\pi\gamma a^2 \\ =\varepsilon^2\left(\frac\gamma2\int_{\partial B_a}|\nabla_\tau \varphi^\varepsilon|^2\, d\mathcal{H}^2 -\frac{\gamma}{a^2}\int_{\partial B_a} |\varphi^\varepsilon|^2 \, d\mathcal{H}^2+\frac{\lambda_{f\!\ell}}{2|B_a|} \left(\int_{\partial B_a} \varphi^\varepsilon\,d\mathcal{H}^2\right)^2\right)+O(\sigma_\varepsilon^3),\end{gathered}$$ where we recall that $\varphi^\varepsilon=u^\varepsilon\cdot {\vec{e}_r}$. By [\[bound-J-v\]](#bound-J-v){reference-type="eqref" reference="bound-J-v"}, [\[eta_e\]](#eta_e){reference-type="eqref" reference="eta_e"} and the definition [\[bound-ho-v\]](#bound-ho-v){reference-type="eqref" reference="bound-ho-v"} of $\sigma_\varepsilon$, we deduce that $$\begin{aligned} \frac\gamma2\int_{\partial B_a}|\nabla_\tau (u^\varepsilon\cdot{\vec{e}_r})|^2\, d\mathcal{H}^2 & \leq & \frac{\gamma}{a^2}\int_{\partial B_a} |u^\varepsilon\cdot{\vec{e}_r}|^2 \, d\mathcal{H}^2 -\frac{\lambda_{f\!\ell}}{2|B_a|} \left(\int_{\partial B_a} u^\varepsilon\cdot{\vec{e}_r}\,d\mathcal{H}^2\right)^2 \\ && {}+ C+ \frac1{\varepsilon^2}O(\sigma_\varepsilon^3) \\ & \leq &C+ C\|u^\varepsilon\|_{L^2(\partial B_a)}^2+C \left(\frac{\varepsilon^{p/3}}{\eta_\varepsilon}\right)^{\frac 3p}\le C\|u^\varepsilon\|_{L^2(\partial B_a)}^2+C .\end{aligned}$$ Since $(u_\varepsilon)$ is bounded in $H^1({{\Omega}\setminus \overline B_a};\mathbb{R}^3)$, its trace on $\partial B_a$ is bounded in $L^2(\partial B_a;\mathbb{R}^3)$. 
Therefore, the inequality above implies that $$\label{eq.boundtang}(\nabla_\tau (u^\varepsilon\cdot{\vec{e}_r})) \mbox{ is bounded in }L^2(\partial B_a;\mathbb{R}^3).$$ At the expense of extracting a subsequence, $u_\varepsilon\rightharpoonup u$ weakly in $H^1({{\Omega}\setminus \overline B_a};\mathbb{R}^3)$, $u_\varepsilon\to u$ strongly in $L^2(\partial B_a;\mathbb{R}^3)$, and $\nabla_\tau (u^\varepsilon\cdot{\vec{e}_r})\rightharpoonup\nabla_\tau (u\cdot{\vec{e}_r})$ weakly in $L^2(\partial B_a;\mathbb{R}^3)$, for some $u\in \mathcal{A}$, which completes the first part of the proof of Theorem [Theorem 5](#thm){reference-type="ref" reference="thm"}. Assume that $u^\varepsilon\to u$ strongly in $L^2({\Omega};\mathbb{R}^3)$ and, also, without loss of generality that $\mathcal{F}_\varepsilon(u^\varepsilon)\le C$ and $\liminf \mathcal{F}_\varepsilon(u^\varepsilon)=\lim \mathcal{F}_\varepsilon(u^\varepsilon)$. The first step of this proof guarantees that [\[eq.H1bound\]](#eq.H1bound){reference-type="eqref" reference="eq.H1bound"}, [\[bound-ho-v\]](#bound-ho-v){reference-type="eqref" reference="bound-ho-v"}, [\[eq.JFinal\]](#eq.JFinal){reference-type="eqref" reference="eq.JFinal"}, and [\[eq.boundtang\]](#eq.boundtang){reference-type="eqref" reference="eq.boundtang"} hold. Therefore, by [\[bound-ho-v\]](#bound-ho-v){reference-type="eqref" reference="bound-ho-v"} and lower semicontinuity, we have that, up to a subsequence, $$\begin{gathered} \label{liminf1} \liminf_{\varepsilon\to0}\frac1{\varepsilon^2}\Big( \mathscr J({\rm \bf{id}}+\varepsilon u^\varepsilon) -\frac4{3}\pi\gamma a^2 \Big) \\ \geq \frac\gamma2\int_{\partial B_a}|\nabla_\tau (u\cdot{\vec{e}_r})|^2\, d\mathcal{H}^2 -\frac{\gamma}{a^2}\int_{\partial B_a} (u\cdot{\vec{e}_r})^2 \, d\mathcal{H}^2 +\frac{\lambda_{f\!\ell}}{2|B_a|} \left(\int_{\partial B_a} u\cdot{\vec{e}_r}\,d\mathcal{H}^2\right)^2.\end{gathered}$$ Moreover, $$\label{liminf2} \liminf_{\varepsilon\to0} \left(\eta_\varepsilon\int_{{\Omega}\setminus \overline B_a}|\nabla^2 u^\varepsilon|^p\, dx -\int_{{\Omega}\setminus \overline B_a}f\cdot u^\varepsilon\, dx \right) \geq -\int_{{\Omega}\setminus \overline B_a}f\cdot u\, dx.$$ Finally, as in [@DMNP], $$\liminf_{\varepsilon\to0}\frac1{\varepsilon^2}\int_{{\Omega}\setminus \overline B_a}W(I+\varepsilon\nabla u^\varepsilon)\, dx \geq \frac12 \int_{{\Omega}\setminus \overline B_a}Q(\mathbf Eu)\, dx.$$ Note however that, in view of [\[bound-ho-v\]](#bound-ho-v){reference-type="eqref" reference="bound-ho-v"} and since the quadratic form $Q$ is positive definite (see Remark [Remark 1](#rm.positdef){reference-type="ref" reference="rm.positdef"}), the above liminf is trivial in our setting. 
Together with [\[liminf1\]](#liminf1){reference-type="eqref" reference="liminf1"} and [\[liminf2\]](#liminf2){reference-type="eqref" reference="liminf2"}, this proves that $$\lim_{\varepsilon\to0}\mathcal{F}_\varepsilon(u^\varepsilon)\ge \mathcal{F}(u).$$ We have to show that for every $u\in \mathcal{A}$ (see [\[def.A\]](#def.A){reference-type="eqref" reference="def.A"} for the definition of $\mathcal{A}$) there exists a sequence $(u^\varepsilon)\subset C^2_{loc}({\Omega};\mathbb{R}^3)\cap W^{2,p}_\Gamma({\Omega};\mathbb{R}^3)$ such that $u^\varepsilon$ converge to $u$ strongly in $H^1({{\Omega}\setminus \overline B_a};\mathbb{R}^3)$, $u^\varepsilon\cdot{\vec{e}_r}\to u\cdot{\vec{e}_r}$ strongly in $H^1(\partial B_a)$, and $$\label{liminf} \lim_{\varepsilon\to0}\mathcal{F}_\varepsilon(u^\varepsilon) = \mathcal{F}(u).$$ If $u\in C^2_{loc}(\Omega\setminus B_a;\mathbb{R}^3)\cap W^{2,p}_\Gamma({\Omega};\mathbb{R}^3)$, the constant sequence $u^\varepsilon:=\tilde u$, where $\tilde u$ is any $C^2_{loc}$ extension of $u$ to ${\Omega}$, has all the desired properties. Indeed, since $p>3$, by Sobolev embedding $\nabla u$ is uniformly bounded in $\overline\Omega\setminus B_a$, so that by Taylor expansion $$\lim_{\varepsilon\to0}\frac1{\varepsilon^2}\int_{{\Omega}\setminus \overline B_a}W(I+\varepsilon\nabla u)\, dx = \frac12 \int_{{\Omega}\setminus \overline B_a}Q(\nabla u)\, dx$$ and moreover [\[eq.JFinal\]](#eq.JFinal){reference-type="eqref" reference="eq.JFinal"} holds with $O(\varepsilon^3)$ in place of $O(\sigma_\varepsilon^3)$. Therefore, $$\begin{gathered} \lim_{\varepsilon\to0}\frac1{\varepsilon^2}\Big( \mathscr J({\rm \bf{id}}+\varepsilon u^\varepsilon) -\frac4{3}\pi\gamma a^2 \Big) \\ = \frac\gamma2\int_{\partial B_a}|\nabla_\tau (u\cdot{\vec{e}_r})|^2\, d\mathcal{H}^2 -\frac{\gamma}{a^2}\int_{\partial B_a} |u\cdot{\vec{e}_r}|^2 \, d\mathcal{H}^2 +\frac{\lambda_{f\!\ell}}{2|B_a|} \left(\int_{\partial B_a} u\cdot{\vec{e}_r}\,d\mathcal{H}^2\right)^2.\end{gathered}$$ Finally, since $\eta_\varepsilon\to 0$ by [\[eta_e\]](#eta_e){reference-type="eqref" reference="eta_e"}, $$\eta_\varepsilon\int_{{\Omega}\setminus \overline B_a}|\nabla^2 u|^p\, dx -\int_{{\Omega}\setminus \overline B_a}f\cdot u\, dx \to -\int_{{\Omega}\setminus \overline B_a}f\cdot u\, dx.$$ Let now $u\in \mathcal{A}$. By [@ADMLP Lemmas A.1 and A.2] and the assumptions on $\Gamma$ there exists a sequence $(v_n)\subset C^\infty(\overline{\Omega}\setminus B_a;\mathbb{R}^3)$ such that $v_n=0$ on $\Gamma$ and $v_n$ converge to $u$ strongly in $H^1({{\Omega}\setminus \overline B_a};\mathbb{R}^3)$. Let $\eta>0$ be such that $\overline B_{a+\eta}\subset{\Omega}$. Consider a sequence $(\psi_n) \subset C^\infty(\partial B_a)$ that approximates $u\cdot{\vec{e}_r}$ strongly in $H^1(\partial B_a)$ (see e.g. [@H]). Let $U:=B_{a+\eta}\setminus\overline B_a$. Consider the solution $w_n$ of the following system: $$\begin{cases} -\Delta w_n = -\Delta (v_n\cdot{\vec{e}_r}) \quad \mbox{ in } U,\\[2mm] w_n= \psi_n \mbox{ on }\partial B_a,\quad w_n= v_n\cdot{\vec{e}_r}\mbox{ on }\partial B_{a+\eta}. \end{cases}$$ By elliptic regularity we have that $w_n\in C^\infty(\overline U)$. Moreover, $$w_n\to w \quad \mbox{ strongly in } H^1(U),$$ where $w$ is the solution to $$\begin{cases} -\Delta w= -\Delta (u\cdot{\vec{e}_r}) \quad \mbox{ in } U,\\[2mm] w= u\cdot{\vec{e}_r}\quad\mbox{ on }\partial U, \end{cases}$$ so that $w=u\cdot{\vec{e}_r}$. Further, by construction $w_n$ converges strongly to $u\cdot{\vec{e}_r}$ in $H^1(\partial B_a)$. 
Let $\varphi\in C^\infty_c(B_{a+\eta})$ be a cut-off function such that $\varphi=1$ on $B_{a+\eta/2}$. Define $$u_n:=\varphi (w_n- v_n\cdot{\vec{e}_r}) {\vec{e}_r}+ v_n.$$ Then $u_n\in C^\infty(\overline{\Omega}\setminus B_a;\mathbb{R}^3)$ for every $n$, $u_n\to u$ strongly in $H^1({{\Omega}\setminus \overline B_a};\mathbb{R}^3)$, and $u_n\cdot{\vec{e}_r}\to u\cdot{\vec{e}_r}$ strongly in $H^1(\partial B_a)$. Since we clearly have that $\mathcal{F}(u_n)\to \mathcal{F}(u)$, the result is achieved through a diagonalization process. ◻ **Remark 8** (The linearized system). The $\Gamma$-convergence and compactness result of Theorem [Theorem 5](#thm){reference-type="ref" reference="thm"} immediately implies the existence of a minimizer $u$ for $\mathcal{F}$ on $\mathcal{A}$. Simple variations and use of the second equality in [\[eq.der-area\]](#eq.der-area){reference-type="eqref" reference="eq.der-area"} then demonstrate that $u$ satisfies the following set of equations: $$\label{eq.PDEs} \begin{cases} -\mathop{\mathrm{div}}(\mathbb A\mathbf Eu)=f \quad \mbox{ in }{{\Omega}\setminus \overline B_a}, \\[2mm] \displaystyle\gamma\Delta_\tau(u\cdot{\vec{e}_r})+2\frac\gamma {a^2} u\cdot{\vec{e}_r}+\big((\mathbb A\mathbf Eu) {\vec{e}_r}\big)_r-\frac{3\lambda_{f\!\ell}}{4\pi a^3} \int_{\partial B_a} u\cdot{\vec{e}_r}\; d\mathcal{H}^2=0 \quad \mbox{ on }\partial B_a,\\[3mm] (\mathbb A\mathbf Eu){\vec{e}_r}\parallel {\vec{e}_r}\quad \mbox{ on }\partial B_a,\\[2mm] u=0 \mbox{ on }\Gamma,\quad (\mathbb A\mathbf Eu)\nu_{\partial{\Omega}} =0 \mbox{ on }\partial{\Omega}\setminus \overline\Gamma, \end{cases}$$ where, again, $\mathbf Eu=1/2(\nabla u+\nabla u^T)$ and $\Delta_\tau$ is the Laplace-Beltrami operator, defined by $\Delta_\tau\varphi=\mathop{\mathrm{div}}_\tau(\nabla_\tau\varphi)$, on $\partial B_a$. Note that $(\mathbb A\mathbf Eu){\vec{e}_r}$ is an element of $H^{-1/2}(\partial B_a;\mathbb{R}^3)$, so the notation $(\mathbb A\mathbf Eu){\vec{e}_r}\parallel {\vec{e}_r}$ means that it only acts on the radial component of elements of $H^{1/2}(\partial B_a;\mathbb{R}^3)$, while $\big((\mathbb A\mathbf Eu) {\vec{e}_r}\big)_r$ is defined through the following equality: $$\label{def.dual} \langle\big((\mathbb A\mathbf Eu) {\vec{e}_r}\big)_r,v\cdot {\vec{e}_r}\rangle:=\langle (\mathbb A\mathbf Eu) {\vec{e}_r},v\rangle_{H^{-1/2}(\partial B_a)\times H^{1/2}(\partial B_a)}$$ for every $v\in H^{1/2}(\partial B_a;\mathbb{R}^3)$. Note that [\[eq.PDEs\]](#eq.PDEs){reference-type="eqref" reference="eq.PDEs"} has a unique solution since the associated quadratic form, that is, $$\begin{gathered} \frac12 \int_{{\Omega}\setminus \overline B_a}Q(\mathbf Eu)\, dx +\frac\gamma2\int_{\partial B_a}|\nabla_\tau (u\cdot{\vec{e}_r})|^2\, d\mathcal{H}^2 -\frac{\gamma}{a^2}\int_{\partial B_a} |u\cdot{\vec{e}_r}|^2 \, d\mathcal{H}^2 \\ +\frac{\lambda_{f\!\ell}}{2|B_a|} \left(\int_{\partial B_a} u\cdot{\vec{e}_r}\,d\mathcal{H}^2\right)^2\end{gathered}$$ is coercive on $\mathcal{A}$ in view of the positive definiteness of $Q$ (see Remark [Remark 1](#rm.positdef){reference-type="ref" reference="rm.positdef"}) and of Remark [Remark 7](#rm.posit){reference-type="ref" reference="rm.posit"}. So, uniqueness and existence in $\mathcal{A}$ -- which we have already secured, thanks to the $\Gamma$-convergence process -- can be obtained directly through Lax-Milgram lemma. This is, to our knowledge, the first time that a linearization process produces an interfacial PDE, this at the expense of introducing a vanishing regularization. 
The set of PDEs ([\[eq.PDEs\]](#eq.PDEs){reference-type="ref" reference="eq.PDEs"}) is precisely that derived formally in [@GLP] when specialized to solid/liquid interfaces with constant surface tension $\gamma$.¶

**Remark 9**. Testing [\[eq.PDEs\]](#eq.PDEs){reference-type="eqref" reference="eq.PDEs"} by ${\vec{e}_r}$, we obtain with the help of [\[eq.der-area\]](#eq.der-area){reference-type="eqref" reference="eq.der-area"} that $$\langle \big((\mathbb A\mathbf Eu) {\vec{e}_r}\big)_r,1\rangle= \left(3\frac{\lambda_{f\!\ell}}{a}-2\frac\gamma{a^2}\right)\int_{\partial B_a} u\cdot{\vec{e}_r}\; d\mathcal{H}^2.$$ ¶

**Remark 10**. The above result equally applies to domains containing any finite number of liquid inclusions.¶

# The linearized problem in the presence of many inclusions {#sec.hom}

## Homogenization.

In this subsection, we propose to investigate the limit (macroscopic) behavior of a linearized solid filled with many periodically distributed liquid inclusions pressurized at the same pressure. Note that the periodicity assumption is not essential; we adopt it below for the sake of brevity. The setting is as follows. The domain ${\Omega}$ of the previous sections is subject to the same loading $f$ (defined as an element of $L^2({\Omega};\mathbb{R}^3)$ this time) and to the same boundary conditions as before (see the beginning of Section [3](#sec.lin){reference-type="ref" reference="sec.lin"}). We cover ${\Omega}$ with identical disjoint cubes $Y^i_\varepsilon:=\varepsilon i+\varepsilon Y$ for $i\in\mathbb{Z}^3$, $Y:=[-1/2,1/2)^3$, each containing an identical centered spherical inclusion $B^i_{\varepsilon a}:=\varepsilon i+\varepsilon B_a$ with $a<1/2$, filled with a liquid pressurized at the pressure $\varepsilon p$. Let $I_\varepsilon$ denote the set of centers $i\in\mathbb{Z}^3$ such that $Y^i_\varepsilon\subset\Omega$ and ${\rm dist}(Y^i_\varepsilon,\partial\Omega)\geq\varepsilon$. Note that $\#(I_\varepsilon)\simeq1/\varepsilon^3$.
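As a purely illustrative aside (not used in the analysis), the following minimal Python sketch enumerates $I_\varepsilon$ for the hypothetical sample domain $\Omega=(0,1)^3$ and checks that $\varepsilon^3\,\#(I_\varepsilon)$ approaches $|\Omega|$, consistent with $\#(I_\varepsilon)\simeq 1/\varepsilon^3$; the domain, the values of $\varepsilon$, and the function name are choices made for this sketch only.

```python
# Illustration only: enumerate I_eps for the hypothetical domain Omega = (0,1)^3.
# A center i in Z^3 is admissible when the cube Y^i_eps = eps*i + eps*[-1/2,1/2)^3
# is contained in Omega and lies at distance at least eps from the boundary of Omega.
import itertools

def centers_I_eps(eps):
    centers = []
    n = int(1.0 / eps) + 2                   # search range, large enough for (0,1)^3
    for i in itertools.product(range(-n, n + 1), repeat=3):
        lo = [eps * (k - 0.5) for k in i]    # lower corner of Y^i_eps
        hi = [eps * (k + 0.5) for k in i]    # upper corner of Y^i_eps
        inside = all(l >= 0.0 for l in lo) and all(h <= 1.0 for h in hi)
        far = all(l >= eps for l in lo) and all(h <= 1.0 - eps for h in hi)
        if inside and far:
            centers.append(i)
    return centers

for eps in (0.2, 0.1, 0.05, 0.025):
    count = len(centers_I_eps(eps))
    print(f"eps = {eps:5.3f}   #(I_eps) = {count:6d}   eps^3 * #(I_eps) = {eps**3 * count:.3f}")
```

In this sketch the last column approaches $|\Omega|=1$ only at rate $O(\varepsilon)$, since the layer of discarded cubes near $\partial\Omega$ has thickness of order $\varepsilon$.
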
We define the following domains $${\Omega}^\varepsilon:={\Omega}\setminus\left(\cup_{i\in I_\varepsilon} \overline B_{\varepsilon a}^i\right), \quad\omega^\varepsilon:=\cup_{i\in I_\varepsilon}(Y^i_\varepsilon\setminus \overline B^i_{\varepsilon a}), \quad\tilde\Omega^\varepsilon:=\cup_{i\in I_\varepsilon}Y^i_\varepsilon.$$ As an immediate corollary of the results in Remark [Remark 8](#rem.PDE){reference-type="ref" reference="rem.PDE"}, the solution $u^\varepsilon$ to the system $$\label{eq.eps-sys} \begin{cases} -\mathop{\mathrm{div}}(\mathbb A\mathbf Eu^\varepsilon)=f \quad \mbox{ in }{\Omega}^\varepsilon,\\[2mm] \displaystyle\varepsilon\gamma\Delta_\tau(u^\varepsilon\cdot{\vec{e}_r})+2\frac{\gamma} {\varepsilon a^2} u^\varepsilon\cdot{\vec{e}_r}+((\mathbb A\mathbf Eu^\varepsilon){\vec{e}_r})_r \\\qquad\qquad\qquad\qquad\displaystyle -\frac{3\lambda_{f\!\ell}}{4\pi \varepsilon^3a^3} \int_{\partial B^i_{\varepsilon a}} u^\varepsilon\cdot{\vec{e}_r}\; d\mathcal{H}^2=0 \quad\mbox{ on }\partial B^i_{\varepsilon a}, \ i\in I_\varepsilon, \\[3mm] (\mathbb A\mathbf Eu^\varepsilon){\vec{e}_r}\parallel {\vec{e}_r}\quad \mbox{ on }\partial B^i_{\varepsilon a}, \ i\in I_\varepsilon, \\[2mm] u^\varepsilon=0 \mbox{ on }\Gamma,\quad (\mathbb A\mathbf Eu^\varepsilon)\nu_{\partial{\Omega}} =0 \mbox{ on }\partial{\Omega}\setminus \overline\Gamma \end{cases}$$ exists and is unique in the class $$\label{def.Ae} \mathcal{A}^\varepsilon:=\left\{u\in H^1({\Omega}^\varepsilon;\mathbb{R}^3): \ u\cdot{\vec{e}_r}\in H^1(\partial B^i_{\varepsilon a}) \text{ for every } i\in I_\varepsilon \text{ and }u=0 \text{ on } \Gamma\right\}.$$ Note that the various powers of $\varepsilon$ in [\[eq.eps-sys\]](#eq.eps-sys){reference-type="eqref" reference="eq.eps-sys"} correspond to the rescaling of both $p$ and $\gamma$ by $\varepsilon$, the natural scaling if one wishes to conserve both [\[eq.eqm\]](#eq.eqm){reference-type="eqref" reference="eq.eqm"} and [\[eq.cond-min\]](#eq.cond-min){reference-type="eqref" reference="eq.cond-min"}. The solution $u^\varepsilon$ is then the (unique) minimizer in $\mathcal{A}^\varepsilon$ of the functional $$\label{eq.def-fe} \mathcal{F}^\varepsilon(v):=\frac12 \int_{{\Omega}^\varepsilon} Q(\mathbf Ev)\, dx +\sum_{i\in I_\varepsilon} \mathcal V^\varepsilon_i(v) -\int_{{\Omega}^\varepsilon}f\cdot v\, dx,$$ where $$\begin{gathered} \label{Vei-var-0} \mathcal V^\varepsilon_i(v):=\frac{\gamma\varepsilon}2\int_{\partial B^i_{\varepsilon a}}|\nabla_\tau (v\cdot{\vec{e}_r})|^2\, d\mathcal{H}^2 {} -\frac{\gamma}{\varepsilon a^2}\int_{\partial B^i_{\varepsilon a}} (v\cdot{\vec{e}_r})^2 \, d\mathcal{H}^2 \\ {}+\frac{\lambda_{f\!\ell}}{2\varepsilon^3|B_a|} \left(\int_{\partial B^i_{\varepsilon a}} v\cdot{\vec{e}_r}\,d\mathcal{H}^2\right)^2\end{gathered}$$ for $i\in I_\varepsilon$.
Note that by [\[identity\]](#identity){reference-type="eqref" reference="identity"} with $K={\lambda_{f\!\ell}a^2/ (2\gamma |B_a|)}$ we can rewrite $$\begin{gathered} \label{Vei-var} \mathcal V^\varepsilon_i(v)=\frac{\gamma\varepsilon}2\int_{\partial B^i_{\varepsilon a}}|\nabla_\tau (P^2_{i,\varepsilon a}(v\cdot {\vec{e}_r}))|^2\,d{\mathcal H}^2 - \frac{\gamma}{\varepsilon a^2}\int_{\partial B^i_{\varepsilon a}}|P^2_{i,\varepsilon a}(v\cdot {\vec{e}_r}) |^2\,d{\mathcal H}^2 \\ {} + \frac1{4\pi\varepsilon^3 a^3}\Big(\frac{3\lambda_{f\!\ell}}2-\frac\gamma a\Big)\left(\int_{\partial B^i_{\varepsilon a}}v\cdot {\vec{e}_r}\,d{\mathcal H}^2\right)^2,\end{gathered}$$ where $P^2_{i,\varepsilon a}$ is the orthogonal projection in $L^2(\partial B^i_{\varepsilon a})$ onto the orthogonal space to affine functions, see Appendix A. From the minimality of $u^\varepsilon$ we have $$\begin{aligned} 0=\mathcal{F}^\varepsilon(0) &\ge & \mathcal{F}^\varepsilon(u^\varepsilon) \nonumber \\ & \ge & \frac12 \int_{{\Omega}^\varepsilon} Q(\mathbf Eu^\varepsilon)\, dx + \sum_{i\in I_\varepsilon}\left\{\frac{\gamma\varepsilon}3\int_{\partial B^i_{\varepsilon a}}|\nabla_\tau (P^2_{i,\varepsilon a}(u^\varepsilon\cdot{\vec{e}_r}))|^2\, d\mathcal{H}^2+\right. \nonumber \\ & &\left. {\frac1 {4\pi\varepsilon^3 a^3}}\Big(\frac{3\lambda_{f\!\ell}}2-\frac\gamma a\Big) \left(\int_{\partial B^i_{\varepsilon a}} u^\varepsilon\cdot{\vec{e}_r}\,d\mathcal{H}^2\right)^2\right\}-\int_{{\Omega}^\varepsilon}f\cdot u^\varepsilon\, dx, \label{eq.est.feue}\end{aligned}$$ where we used [\[Vei-var\]](#Vei-var){reference-type="eqref" reference="Vei-var"} and the coercivity estimate [\[coer\]](#coer){reference-type="eqref" reference="coer"} in Appendix A. By [\[eq.cond-min\]](#eq.cond-min){reference-type="eqref" reference="eq.cond-min"} and because of the positive definite character of $Q$ we deduce that $$\label{eq.bd-ue} \|\mathbf Eu^\varepsilon\|^2_{L^2({\Omega}^\varepsilon)}\le C \|u^\varepsilon\|_{L^2({\Omega}^\varepsilon)}.$$ Appealing to [@OSY Theorem 4.2], there exists a linear extension operator $$R^\varepsilon: H^1({\Omega}^\varepsilon;\mathbb{R}^3)\to H^1({\Omega};\mathbb{R}^3)$$ such that, for some constant $C$ independent of $\varepsilon$, $$\begin{array}{c} \|R^\varepsilon u\|_{H^1({\Omega})} \le C\|u\|_{H^1({\Omega}^\varepsilon)}, \\[2mm] \|\mathbf ER^\varepsilon u\|_{L^2({\Omega})} \le C\|\mathbf Eu\|_{L^2({\Omega}^\varepsilon)} \end{array}$$ for every $u\in H^1({\Omega}^\varepsilon;\mathbb{R}^3)$. 
Because of this result we actually obtain, in lieu of [\[eq.bd-ue\]](#eq.bd-ue){reference-type="eqref" reference="eq.bd-ue"}, that $$\|\mathbf ER^\varepsilon u^\varepsilon\|^2_{L^2({\Omega})}\le C \|R^\varepsilon u^\varepsilon\|_{L^2({\Omega})},$$ so that, using Korn and Poincaré-Korn inequalities on ${\Omega}$, we conclude that $$\label{eq.bd-ue-b} \|R^\varepsilon u^\varepsilon\|_{H^1({\Omega})}\le C.$$ Further, by [\[eq.est.feue\]](#eq.est.feue){reference-type="eqref" reference="eq.est.feue"} and [\[eq.bd-ue-b\]](#eq.bd-ue-b){reference-type="eqref" reference="eq.bd-ue-b"} we deduce that $$\label{eq.bd-ue-b2} \sum_{i\in I_\varepsilon}\left(\int_{\partial B^i_{\varepsilon a}} u^\varepsilon\cdot{\vec{e}_r}\,d\mathcal{H}^2\right)^2\le C\varepsilon^{3}$$ and $$\label{eq.bd-nabue} \varepsilon\sum_{i\in I_\varepsilon} \int_{\partial B^i_{\varepsilon a}}|\nabla_\tau (P^2_{i,\varepsilon a}(u^\varepsilon\cdot{\vec{e}_r}))|^2\, d\mathcal{H}^2\le C.$$ Let $$H^1_\Gamma({\Omega};\mathbb{R}^3):=\{u\in H^1({\Omega};\mathbb{R}^3): \ u=0\mbox{ on }\Gamma\}.$$ We propose to establish the following homogenization result. **Theorem 11**. *The unique solution $u^\varepsilon$ to [\[eq.eps-sys\]](#eq.eps-sys){reference-type="eqref" reference="eq.eps-sys"} can be extended to a function $R^\varepsilon u^\varepsilon\in H^1({\Omega};\mathbb{R}^3)$ such that $$R^\varepsilon u^\varepsilon\rightharpoonup u \quad \mbox{ weakly in } H^1({\Omega};\mathbb{R}^3),$$ where $u$ is the unique solution in $H^1_\Gamma({\Omega};\mathbb{R}^3)$ of $$\label{eq.hom-pb} \begin{cases} -\mathop{\mathrm{div}}(\mathbb A_{\rm hom}\mathbf Eu)=(1-|B_a|)f \quad \mbox{ in }{\Omega},\\[2mm] u=0 \mbox{ on }\Gamma, \quad (\mathbb A_{\rm hom}\mathbf Eu)\nu_{\partial{\Omega}} =0 \mbox{ on }\partial{\Omega}\setminus \overline\Gamma, \end{cases}$$ with $$\label{eq.def-Ah} \mathbb A_{\rm hom}F\cdot F := 2\mathcal{F}_{per}(F, \lambda_F)$$ for every $F\in \mathbb{M}^{3{\times}3}_{\rm sym}$ and $\lambda_F$ denotes the unique minimizer of $\mathcal{F}_{per}(F,\cdot)$ defined in [\[eq.cell-pb\]](#eq.cell-pb){reference-type="eqref" reference="eq.cell-pb"} below.* *Furthermore, the following corrector results hold: $$\label{eq.resuco} \lim_\varepsilon\int_{\omega\cap\omega^\varepsilon}\left|\mathbf E_xu^\varepsilon- \mathbf E_x u- \frac1{\varepsilon^3}\sum_{i,j=1}^{3}\left(\int_{Y^{\kappa(x/\varepsilon)}_{\varepsilon}}(\mathbf E_x u)_{ij}(z)\,dz\right) \mathbf E_y\lambda_{ij}(x/\varepsilon)\right|^2 dx=0$$ for any $\omega\subset\subset{\Omega}$, and $$\begin{gathered} \label{eq.resuco2} \lim_\varepsilon\varepsilon\sum_{i\in I_\varepsilon}\int_{\partial B^i_{\varepsilon a}}\Bigg|\nabla_\tau (P^2_{i,\varepsilon a}( u^\varepsilon\cdot {\vec{e}_r})) \\[2mm] \left.-\frac1{\varepsilon^3}\nabla_\tau \Bigg(P^2_{a}\Bigg(\int_{Y^i_\varepsilon}\Big(\nabla_xu(z) \, y +\sum_{j,k=1}^{3}(\mathbf E_x u)_{jk}(z)\lambda_{jk}(y)\Big)dz\cdot {\vec{e}_r}\Bigg)\Bigg)\!\Big|_{y=x/\varepsilon}\right|^2d\mathcal{H}^2=0,\end{gathered}$$ where $P^2_a$ is the orthogonal projection in $L^2(\partial B_a)$ onto the orthogonal space to affine functions on $\partial B_a$ while $P^2_{i,\varepsilon a}$ is the orthogonal projection in $L^2(\partial B^i_{\varepsilon a})$ onto the orthogonal space to affine functions on $\partial B^i_{\varepsilon a}$ (see [\[proj-e\]](#proj-e){reference-type="eqref" reference="proj-e"} in Appendix A) and $$\label{eq.def-chiij} \lambda_{ij}:=\lambda_{F_{ij}}$$ with $(F_{ij})_{kh}=1/2(\delta_{ik}\delta_{jh}+\delta_{ih}\delta_{jk})$.* In Theorem [Theorem 
11](#thm.hom){reference-type="ref" reference="thm.hom"} above, the cell problem is given by $$\min\left\{ \mathcal{F}_{per}(F,\psi): \ \psi \in {\mathscr X}\right\},$$ where $$\label{eq.defXcell} {\mathscr X}:=\{\psi\in H^1_\sharp(Y\setminus\overline B_a;\mathbb{R}^3): \psi\cdot{\vec{e}_r}\in H^1(\partial B_a)\}$$ and $$\begin{gathered} \label{eq.cell-pb} \mathcal{F}_{per}(F,\psi):= \frac12 \int_{Y\setminus\overline B_a} Q(\mathbf E\psi+F)\, dy +\frac\gamma2\int_{\partial B_a}|\nabla_\tau P^2_a ((\psi+Fy)\cdot{\vec{e}_r})|^2\, d\mathcal{H}^2 \\ -\frac{\gamma}{a^2}\int_{\partial B_a} |P^2_a ((\psi+Fy)\cdot{\vec{e}_r})|^2 \, d\mathcal{H}^2 \\ +\frac{1}{2|B_a|}\left(\lambda_{f\!\ell}-\frac{2\gamma}{3a}\right) \left(\int_{\partial B_a} (\psi+Fy)\cdot{\vec{e}_r}\,d\mathcal{H}^2\right)^2\end{gathered}$$ and $P^2_a$ is the orthogonal projection in $L^2(\partial B_a)$ onto the orthogonal space to affine functions on $\partial B_a$ (see [\[proj\]](#proj){reference-type="eqref" reference="proj"} in Appendix A).

**Remark 12**. As can easily be seen by reproducing the computations leading to [\[identity\]](#identity){reference-type="eqref" reference="identity"} in Appendix A, an equivalent expression for $\mathcal{F}_{per}$ defined in [\[eq.cell-pb\]](#eq.cell-pb){reference-type="eqref" reference="eq.cell-pb"} above is $$\begin{gathered} \mathcal{F}_{per}(F,\psi)= \frac12 \int_{Y\setminus\overline B_a} Q(\mathbf E\psi+F)\, dy +\frac\gamma2\int_{\partial B_a}|\nabla_\tau ((\psi+Fy)\cdot{\vec{e}_r})|^2\, d\mathcal{H}^2 \\[2mm]-\frac{\gamma}{a^2}\int_{\partial B_a} ((\psi+Fy)\cdot{\vec{e}_r})^2 \, d\mathcal{H}^2 +\frac{\lambda_{f\!\ell}}{2|B_a|} \left(\int_{\partial B_a} (\psi+Fy)\cdot{\vec{e}_r}\,d\mathcal{H}^2\right)^2.\end{gathered}$$ In that form it is clear that an argument analogous to that used in Remark [Remark 7](#rm.posit){reference-type="ref" reference="rm.posit"} would demonstrate the existence and uniqueness of $\lambda_F$. We also note that, because of Remark [Remark 7](#rm.posit){reference-type="ref" reference="rm.posit"} and of [\[eq.cond-min\]](#eq.cond-min){reference-type="eqref" reference="eq.cond-min"}, $$\mathbb A_{\rm hom}\mbox{ defined in \eqref{eq.def-Ah} is positive definite.}$$ ¶

**Remark 13**. Since $$\nabla u- \frac1{\varepsilon^3}\int_{Y^{\kappa(x/\varepsilon)}_{\varepsilon}}\nabla u(z)dz\stackrel{\varepsilon}{\longrightarrow} 0\mbox{ strongly in }L^2(\Omega;\mathbb{M}^{3{\times}3}),$$ a simpler expression for the corrector result [\[eq.resuco\]](#eq.resuco){reference-type="eqref" reference="eq.resuco"} can be obtained, namely $$\begin{gathered} \lim_\varepsilon\Bigg\{\int_{\omega\cap\omega^\varepsilon}\left|\mathbf E_xu^\varepsilon- \mathbf E_x u- \sum_{i,j=1}^{3}(\mathbf E_x u)_{ij} \mathbf E_y\lambda_{ij}(x/\varepsilon)\right|^2 dx \\ +\varepsilon\sum_{i\in I_\varepsilon}\int_{\partial B^i_{\varepsilon a}}\Bigg|\nabla_\tau (P^2_{i,\varepsilon a}( u^\varepsilon\cdot {\vec{e}_r})) \\[2mm] - \nabla_\tau\Bigg(P^2_{a}\left(\Big(\nabla_xu \, y +\sum_{j,k=1}^{3}(\mathbf E_x u)_{jk}\lambda_{jk}(y)\Big)\cdot {\vec{e}_r}\right)\Bigg)\!\Big|_{y=x/\varepsilon}\Bigg|^2d\mathcal{H}^2 \Bigg\}=0,\end{gathered}$$ provided that either $\mathbf E_y \lambda_{ij}\in L^\infty(Y;\mathbb{M}^{3{\times}3}_{\rm sym})$ and $\nabla_\tau\lambda_{ij}\in L^\infty(\partial B_a;\mathbb{R}^3)$ for all $i,j \in \{1,2,3\}$, or $u$ turns out to be sufficiently smooth.
While the regularity of $u$ will hinge, in particular, on the regularity of $f$, the $L^\infty$ regularity of $\mathbf E_y\lambda_{ij}$ and of $\nabla_\tau\lambda_{ij}$ might be true, but we confess a lack of determination in the matter. ¶ We now proceed with the proof of Theorem [Theorem 11](#thm.hom){reference-type="ref" reference="thm.hom"}. *Proof of Theorem [Theorem 11](#thm.hom){reference-type="ref" reference="thm.hom"}.* We define the unfolding of $u^\varepsilon$ adapting the ideas in [@ADH; @CDG] (see also [@Cas1]). Define $\kappa:\mathbb{R}^3\to \mathbb{Z}^3$ so that $$x\in Y^{\kappa(x)}_1,$$ that is, $\kappa(x)$ provides the center $i\in \mathbb{Z}^3$ of the cube $Y^i_1:=i+Y$ containing $x$. In particular, $$x\in Y^i_\varepsilon\quad \text{ if and only if } \quad i=\kappa\Big({\frac x \varepsilon}\Big)$$ for every $x\in \mathbb{R}^3$. We define the unfolding ${\hat u}^\varepsilon:\tilde\Omega^\varepsilon\times (Y\setminus \overline B_a)\to \mathbb{R}^3$ as $${\hat u}^\varepsilon(x,y)=u^\varepsilon\Big(\varepsilon\kappa\Big({\frac x\varepsilon}\Big)+\varepsilon y\Big).$$ Observe that, for $x\in Y^i_\varepsilon$, ${\hat u}^\varepsilon$ does not depend on $x$, while as a function of $y$, it just comes from $u^\varepsilon$ by the change of variables $\displaystyle y=(x-\varepsilon i)/\varepsilon$, which transforms $Y^i_\varepsilon\setminus {\overline B}^i_{\varepsilon a}$ into $Y\setminus \overline B_a$. Using the definition of ${\hat u}^\varepsilon$ and recalling that $\omega^\varepsilon=\cup_{i\in I_\varepsilon}(Y^i_\varepsilon\setminus \overline B^i_{\varepsilon a})$, we have $$\label{eq.unfol-grad} \begin{array}{l}\displaystyle\int_{\omega^\varepsilon} |\nabla u^\varepsilon(x)|^2\,dx=\sum_{i\in I_\varepsilon}\int_{Y^i_\varepsilon\setminus {\overline B}^i_{\varepsilon a}}|\nabla u^\varepsilon(x)|^2\,dx=\varepsilon^3\sum_{i\in I_\varepsilon}\int_{Y\setminus \overline B_a}|\nabla u^\varepsilon(\varepsilon i +\varepsilon y)|^2\,dy \\[3mm] \displaystyle = \frac{1}{\varepsilon^2}\sum_{i\in I_\varepsilon}\int_{Y^i_\varepsilon}\int_{Y\setminus \overline B_a}|\nabla_y {\hat u}^\varepsilon(x,y)|^2\,dy\,dx= \frac1{\varepsilon^2}\int_{\tilde\Omega^\varepsilon}\int_{Y\setminus \overline B_a}|\nabla_y {\hat u}^\varepsilon(x,y)|^2\,dy\,dx. \end{array}$$ Analogously, we obtain that $$\begin{aligned} \label{eq.unfol-ex} \int_{\omega^\varepsilon} Q(\mathbf Eu^\varepsilon)\,dx & = & \frac1{\varepsilon^2}\int_{\tilde\Omega^\varepsilon}\int_{Y\setminus \overline B_a}Q(\mathbf E_y {\hat u}^\varepsilon(x,y))\,dy\,dx \\ \sum_{i\in I_\varepsilon}\!\int_{\partial B^i_{\varepsilon a}}\!|P^2_{i,\varepsilon a}(u^\varepsilon\cdot {\vec{e}_r}) |^2d\mathcal{H}^2\!\!& = &\!\! \frac1{\varepsilon}\int_{\tilde\Omega^\varepsilon}\int_{\partial B_a}\!|P^2_a ({\hat u}^\varepsilon(x,y)\cdot{\vec{e}_r}(y))|^2\,d\mathcal{H}^2\!(y)dx \\ \sum_{i\in I_\varepsilon}\left(\int_{\partial B^i_{\varepsilon a}}u^\varepsilon\cdot{\vec{e}_r}\, d\mathcal{H}^2\right)^2\!\! & = & \varepsilon\int_{\tilde\Omega^\varepsilon}\!\!\left(\int_{\partial B_a} {\hat u}^\varepsilon(x,y)\cdot{\vec{e}_r}(y)\, d\mathcal{H}^2(y)\right)^2\!\!dx,\end{aligned}$$ where ${\vec{e}_r}(y):=y/{|y|}$ and $P^2_a ({\hat u}^\varepsilon(x,\cdot)\cdot{\vec{e}_r})$ denotes the orthogonal projection in $L^2(\partial B_a)$ of the function $y\mapsto {\hat u}^\varepsilon(x,y)\cdot{\vec{e}_r}(y)$ onto the orthogonal space to affine functions on $\partial B_a$. 
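As a brief aside, and purely for illustration (nothing in this sketch is used in the proof), the cube-index map $\kappa$ and the change of variables behind the unfolding can be rendered in a few lines of Python; the value of $\varepsilon$ and the sample points below are arbitrary choices made for this sketch.

```python
# Illustration only: the cube-index map kappa and the unfolding change of variables
# behind u_hat^eps(x, y) = u^eps(eps*kappa(x/eps) + eps*y).
# kappa(x) is the unique i in Z^3 with x in i + Y, Y = [-1/2,1/2)^3, i.e. floor(x + 1/2).
import math
import random

def kappa(x):
    return tuple(math.floor(xk + 0.5) for xk in x)

eps = 0.1
random.seed(0)
for _ in range(5):
    x = [random.uniform(-1.0, 1.0) for _ in range(3)]   # a sample point in R^3
    t = [xk / eps for xk in x]
    i = kappa(t)                                  # x lies in the cube Y^i_eps
    y = [tk - ik for tk, ik in zip(t, i)]         # local variable y = (x - eps*i)/eps
    assert all(-0.5 <= yk < 0.5 for yk in y)      # y belongs to the reference cell Y
    print("x =", [round(v, 3) for v in x], " i =", i, " y =", [round(v, 3) for v in y])
```
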
Moreover, $$\label{eq.unfol-ex2} \sum_{i\in I_\varepsilon}\int_{\partial B^i_{\varepsilon a}}|\nabla_\tau (P^2_{i,\varepsilon a}(u^\varepsilon\cdot{\vec{e}_r}))|^2\,d\mathcal{H}^2=\frac1{\varepsilon^3}\int_{\tilde\Omega^\varepsilon}\int_{\partial B_a}|\nabla_{\tau,y} (P^2_a ({\hat u}^\varepsilon(x,\cdot)\cdot{\vec{e}_r}))|^2\,d\mathcal{H}^2(y)\, dx.$$ In view of [\[eq.bd-ue-b\]](#eq.bd-ue-b){reference-type="eqref" reference="eq.bd-ue-b"}--[\[eq.bd-nabue\]](#eq.bd-nabue){reference-type="eqref" reference="eq.bd-nabue"}, we conclude in particular that $$\begin{gathered} \frac1{\varepsilon^2} \int_{\tilde\Omega^\varepsilon}\int_{Y\setminus \overline B_a}|\nabla_y {\hat u}^\varepsilon(x,y)|^2\,dy\,dx +\frac1{\varepsilon^2} \int_{\tilde\Omega^\varepsilon}\int_{\partial B_a}|\nabla_{\tau,y} (P^2_a ({\hat u}^\varepsilon(x,\cdot)\cdot{\vec{e}_r}))|^2\,d\mathcal{H}^2(y)\, dx \\ +\frac1{\varepsilon^2} \int_{\tilde\Omega^\varepsilon}\left(\int_{\partial B_a} {\hat u}^\varepsilon(x,y)\cdot{\vec{e}_r}(y)\, d\mathcal{H}^2(y)\right)^2\!\!dx \le C.\end{gathered}$$ Let now $$\begin{aligned} {\hat w}^\varepsilon(x,y) & := & \frac1\varepsilon{\hat u}^\varepsilon(x,y)- \frac1\varepsilon\fint_{Y\setminus \overline B_a}{\hat u}^\varepsilon(x,z)\, dz \nonumber \\ & = & \frac1\varepsilon{\hat u}^\varepsilon(x,y) - \frac1\varepsilon\fint_{Y^{\kappa(\frac x\varepsilon)}_{\varepsilon}\setminus \overline B^{\kappa(\frac x\varepsilon)}_{\varepsilon a}}u^\varepsilon(z)\, dz. \label{eq.unfol}\end{aligned}$$ By the previous bounds and Poincaré-Wirtinger's inequality applied to $Y\setminus \overline B_{a}$, a (not relabeled) subsequence of ${\hat w}^\varepsilon$ satisfies $$\label{eq.lim.uxy} {\hat w}^\varepsilon\rightharpoonup \hat w \quad \mbox{ weakly in } L^2(\omega; H^1(Y\setminus\overline B_a;\mathbb{R}^3))$$ and $$\label{eq.lim.uxy2} P^2_a ({\hat w}^\varepsilon\cdot{\vec{e}_r}) \rightharpoonup P^2_a (\hat w\cdot{\vec{e}_r}) \quad \mbox{ weakly in } L^2(\omega; H^1(\partial B_a))$$ for any open set $\omega\subset\subset {\Omega}$ and for some $\hat w\in L^2_{\rm loc}({\Omega};H^1(Y\setminus\overline B_a;\mathbb{R}^3))$ such that $\hat w\cdot{\vec{e}_r}\in L^2_{\rm loc}({\Omega}; H^1(\partial B_a))$. Further, in view of [\[eq.bd-ue-b\]](#eq.bd-ue-b){reference-type="eqref" reference="eq.bd-ue-b"}, we can assume that $$\label{eq.conv=Rue} R^\varepsilon u^\varepsilon\rightharpoonup u \quad \mbox{ weakly in } H^1({\Omega}; \mathbb{R}^3).$$ Take $\vec{e}_1$ as the first vector of the canonical basis in $\mathbb R^3$ and note that the definition of $\hat u_\varepsilon$ implies that $$\hat u_\varepsilon\Big(x+\varepsilon\vec e_1,-{\frac1 2},y_2,y_3\Big)=\hat u_\varepsilon\Big(x,{\frac 1 2 },y_2,y_3\Big)$$ for a.e. $x\in \tilde\Omega^\varepsilon$ and a.e. $(y_2,y_3)\in (-1/2,1/2)^2$. Thus the definition [\[eq.unfol\]](#eq.unfol){reference-type="eqref" reference="eq.unfol"} of ${\hat w}^\varepsilon$ implies in turn that $$\begin{gathered} \label{eq.interm10} \hat w_\varepsilon\Big(x+\varepsilon\vec e_1,-{\frac 1 2 },y_2,y_3\Big)-\hat w_\varepsilon\Big(x,{\frac 1 2 },y_2,y_3\Big) \\ =-\fint_{Y^{\kappa(\frac x\varepsilon)}_{\varepsilon}\setminus \overline B^{\kappa(\frac x\varepsilon)}_{\varepsilon a}}\frac{R^\varepsilon u^\varepsilon(z+\varepsilon\vec{e}_1)-R^\varepsilon u^\varepsilon(z)}\varepsilon\, dz\end{gathered}$$ for a.e. $x\in \tilde\Omega^\varepsilon$ and a.e. $(y_2,y_3)\in (-1/2,1/2)^2$. 
Thus, passing to the limit, we get $$\label{eq.interm1} \hat w\Big(x,-{\frac 1 2 },y_2,y_3\Big)-\hat w\Big(x,{\frac 1 2},y_2,y_3\Big)=-\frac{\partial u}{\partial x_1}(x).$$ Indeed, taking $\varphi\in C^\infty_c(\Omega;\mathbb{R}^3)$ and integrating [\[eq.interm10\]](#eq.interm10){reference-type="eqref" reference="eq.interm10"} over the support $K$ of $\varphi$ we get, for $\varepsilon$ small enough, $$\begin{aligned} \lefteqn{\int_K \Big(\hat w_\varepsilon\Big(x+\varepsilon\vec{e}_1,-{\frac 1 2 },y_2,y_3\Big)-\hat w_\varepsilon\Big(x,{\frac 1 2 },y_2,y_3\Big)\Big)\varphi(x)\, dx} \\ & = & \int_{\tilde\Omega^\varepsilon} \Big(\hat w_\varepsilon\Big(x+\varepsilon\vec{e}_1,-{\frac 1 2 },y_2,y_3\Big)-\hat w_\varepsilon\Big(x,{\frac 1 2 },y_2,y_3\Big)\Big)\varphi(x)\, dx \\ & = & - \sum_{i\in I_\varepsilon} \int_{Y^i_{\varepsilon}}\fint_{Y^i_{\varepsilon}\setminus \overline B^i_{\varepsilon a}}\frac{R^\varepsilon u^\varepsilon(z+\varepsilon\vec{e}_1)-R^\varepsilon u^\varepsilon(z)}\varepsilon\varphi(x) \,dz\,dx \\ & = & - \sum_{i\in I_\varepsilon} \int_{Y^i_{\varepsilon}}\fint_{Y^i_{\varepsilon}\setminus \overline B^i_{\varepsilon a}}\frac{R^\varepsilon u^\varepsilon(z+\varepsilon\vec{e}_1)-R^\varepsilon u^\varepsilon(z)}\varepsilon\varphi(z) \,dz\,dx +O(\varepsilon) \\ & = & -\frac{1}{1-|B_a|}\int_{\omega^\varepsilon}\frac{R^\varepsilon u^\varepsilon(z+\varepsilon\vec{e}_1)-R^\varepsilon u^\varepsilon(z)}\varepsilon\varphi(z) \,dz +O(\varepsilon) \\ & = & -\frac{1}{1-|B_a|}\int_{\omega^\varepsilon}\frac{\varphi(z-\varepsilon\vec{e}_1)- \varphi(z)}\varepsilon R^\varepsilon u^\varepsilon(z) \,dz +O(\varepsilon).\end{aligned}$$ Since by periodicity $\chi_{\omega^\varepsilon}\rightharpoonup 1-|B_a|$ weakly$^*$ in $L^\infty(\omega)$ for every $\omega\subset\subset\Omega$ and $R^\varepsilon u^\varepsilon\to u$ strongly in $L^2(\Omega;\mathbb{R}^3)$ by [\[eq.conv=Rue\]](#eq.conv=Rue){reference-type="eqref" reference="eq.conv=Rue"} and Rellich Theorem, this yields [\[eq.interm1\]](#eq.interm1){reference-type="eqref" reference="eq.interm1"}. Reasoning analogously with respect to the other vectors of the canonical basis, we conclude that $$\label{eq.def-u_1} \hat u_1(x,y):=\hat w(x,y)-\nabla u(x)y \in L^2_{\rm loc}({\Omega};H^1_\sharp (Y\setminus \overline B_a;\mathbb{R}^3)).$$ We now consider $v\in C^\infty(\overline\Omega;\mathbb{R}^3)$ with $v=0$ on $\Gamma$ and $v_1\in C^1_c(\Omega;C^1_\sharp(Y\setminus \overline B_a;\mathbb{R}^3))$. 
Define $v^\varepsilon$ by $$\label{eq.ve} v^\varepsilon(x)=v(x)+\varepsilon v_1\Big(x,{\frac x\varepsilon}\Big).$$ By minimality we have $$\label{eq.uemi1} {\mathcal F}^\varepsilon(u^\varepsilon) \leq {\mathcal F}^\varepsilon(v^\varepsilon).$$ By definition of ${\mathcal F}^\varepsilon$ and [\[Vei-var\]](#Vei-var){reference-type="eqref" reference="Vei-var"}, we have $$\begin{aligned} {\mathcal F}^\varepsilon(v^\varepsilon) & = & \frac12 \int_{\Omega^\varepsilon} Q(\mathbf Ev^\varepsilon)\,dx +\sum_{i\in I_\varepsilon} \frac{\gamma \varepsilon}{2}\int_{\partial B^i_{\varepsilon a}}|\nabla_\tau (P^2_{i,\varepsilon a}(v^\varepsilon\cdot {\vec{e}_r}))|^2\,d{\mathcal H}^2 \nonumber \\ && {} - \sum_{i\in I_\varepsilon} \frac{\gamma}{\varepsilon a^2}\int_{\partial B^i_{\varepsilon a}}|P^2_{i,\varepsilon a}(v^\varepsilon\cdot {\vec{e}_r}) |^2\,d{\mathcal H}^2 \nonumber \\ && {}+ \sum_{i\in I_\varepsilon} \frac1{4\pi\varepsilon^3 a^3} \Big(\frac{3\lambda_{f\!\ell}}2-\frac\gamma a\Big)\left(\int_{\partial B^i_{\varepsilon a}}v^\varepsilon\cdot {\vec{e}_r}\,d{\mathcal H}^2\right)^2 \nonumber \\ && {} -\int_{\Omega^\varepsilon}f\cdot v^\varepsilon\, dx. \label{eq.terFev} \end{aligned}$$ Let us pass to the limit in the different terms of the right-hand side of [\[eq.terFev\]](#eq.terFev){reference-type="eqref" reference="eq.terFev"}. The first term yields with obvious notation $$\begin{aligned} \frac12 \int_{\Omega^\varepsilon} Q(\mathbf Ev^\varepsilon)\,dx & = & \frac12 \int_{\Omega^\varepsilon} Q(\mathbf E_xv(x)+\mathbf E_yv_1(x,x/\varepsilon))\,dx+O(\varepsilon) \nonumber \\ & = & \frac12 \sum_{i\in I_\varepsilon} \int_{Y^i_{\varepsilon}\setminus \overline B^i_{\varepsilon a}}Q(\mathbf E_xv(\varepsilon i)+\mathbf E_yv_1(\varepsilon i,x/\varepsilon))\,dx+O(\varepsilon) \nonumber \\ & = & \frac12 \sum_{i\in I_\varepsilon} \varepsilon^3\int_{Y\setminus \overline B_a}Q(\mathbf E_xv(\varepsilon i)+\mathbf E_yv_1(\varepsilon i,y))\,dy+O(\varepsilon) \nonumber \\ & = & \frac12 \int_{\tilde\Omega^\varepsilon} \int_{Y\setminus B_a}Q(\mathbf E_xv(x)+\mathbf E_yv_1(x,y))\,dy\,dx+O(\varepsilon). \label{eq.terFev1} \end{aligned}$$ For the terms in [\[eq.terFev\]](#eq.terFev){reference-type="eqref" reference="eq.terFev"} on the boundary of the balls $B^i_{\varepsilon a}$ we write $$v^\varepsilon(x)= v(\varepsilon i)+\nabla_x v(\varepsilon i)(x-\varepsilon i)+\varepsilon v_1(\varepsilon i, x/\varepsilon)+\omega^i_\varepsilon(x)$$ for $x\in\partial B^i_{\varepsilon a}$, where $\|\omega^i_\varepsilon\|_{C^0(\partial B^i_{\varepsilon a})}=O(\varepsilon^2)$ and $\|\omega^i_\varepsilon\|_{C^1(\partial B^i_{\varepsilon a})}=O(\varepsilon)$. Using that $P^2_{i,\varepsilon a}(v(\varepsilon i)\cdot{\vec{e}_r})=0$, we obtain for the second term in [\[eq.terFev\]](#eq.terFev){reference-type="eqref" reference="eq.terFev"} $$\begin{aligned} \lefteqn{\frac{\gamma \varepsilon}2\sum_{i\in I_\varepsilon}\int_{\partial B^i_{\varepsilon a}}|\nabla_\tau (P^2_{i,\varepsilon a}(v^\varepsilon\cdot {\vec{e}_r}))|^2\,d\mathcal{H}^2} \nonumber \\ & =& \frac{\gamma \varepsilon^2}2\sum_{i\in I_\varepsilon}\int_{\partial B^i_{\varepsilon a}}\big|\nabla_{\tau,x}(P^2_{i,\varepsilon a}(a\nabla_x v(\varepsilon i){\vec{e}_r}\cdot {\vec{e}_r}+v_1(\varepsilon i,x/\varepsilon)\cdot {\vec{e}_r})) \big|^2\,d\mathcal{H}^2+O(\varepsilon) \nonumber \\ & = & \frac\gamma{2}\int_{\tilde\Omega^\varepsilon}\int_{\partial B_a} \big|\nabla_{\tau,y}(P^2_a (\nabla_x v\, y \cdot {\vec{e}_r}+ v_1\cdot{\vec{e}_r})) \big|^2\,d\mathcal{H}^2(y)\, dx+O(\varepsilon). 
\label{eq.terFev2} \end{aligned}$$ Arguing in a similar way, the third term can be written as $$\begin{aligned} \lefteqn{\frac{\gamma}{\varepsilon a^2}\sum_{i\in I_\varepsilon}\int_{\partial B^i_{\varepsilon a}}|P^2_{i,\varepsilon a}(v^\varepsilon\cdot {\vec{e}_r}) |^2\,d\mathcal{H}^2} \nonumber \\ & = & \frac{\gamma}{ a^2}\sum_{i\in I_\varepsilon}\int_{\partial B^i_{\varepsilon a}}\Big|P^2_{i,\varepsilon a}(a\nabla_x v(\varepsilon i){\vec{e}_r}\cdot {\vec{e}_r}+v_1(\varepsilon i, x/\varepsilon)\cdot{\vec{e}_r}) \Big|^2\,d\mathcal{H}^2+O(\varepsilon) \nonumber \\ & = & \frac{\gamma}{a^2}\int_{\tilde\Omega^\varepsilon}\int_{\partial B_a}\big|P^2_a (\nabla_x v\, y\cdot {\vec{e}_r}+v_1\cdot{\vec{e}_r}) \big|^2\,d\mathcal{H}^2(y)\, dx+O(\varepsilon). \label{eq.terFev3}\end{aligned}$$ For the fourth term we get $$\begin{aligned} \lefteqn{\frac1{4\pi\varepsilon^3 a^3}\sum_{i\in I_\varepsilon}\left(\int_{\partial B^i_{\varepsilon a}}v^\varepsilon\cdot {\vec{e}_r}\,d\mathcal{H}^2\right)^2} \nonumber \\ & = &\frac1{4\pi\varepsilon a^3}\sum_{i\in I_\varepsilon}\left(\int_{\partial B^i_{\varepsilon a}}(a\nabla_x v(\varepsilon i){\vec{e}_r}\cdot{\vec{e}_r}+ v_1(\varepsilon i,x/\varepsilon)\cdot {\vec{e}_r}) \, d\mathcal{H}^2\right)^2+O(\varepsilon) \nonumber \\ & = &\frac1{4\pi a^3}\int_{\tilde\Omega^\varepsilon}\left(\int_{\partial B_a}\big(\nabla_x v(x) y\cdot {\vec{e}_r}+v_1(x,y)\cdot {\vec{e}_r}\big)\,d\mathcal{H}^2(y)\right)^2\!\! dx+O(\varepsilon). \label{eq.terFev4}\end{aligned}$$ Finally, since by periodicity $\chi_{\omega^\varepsilon\cap\omega}\rightharpoonup (1-|B_a|)\chi_\omega$ weakly in $L^2({\Omega})$ for every $\omega\subset\subset\Omega$, it is easily concluded, upon letting $\omega\nearrow{\Omega}$, that $$\label{eq.terFev5} \int_{\Omega^\varepsilon}f\cdot v^\varepsilon\, dx=\int_{\Omega^\varepsilon}f\cdot v \,dx+O(\varepsilon)\longrightarrow\big(1-|B_a|\big)\int_\Omega f\cdot v\,dx.$$ Collecting [\[eq.terFev1\]](#eq.terFev1){reference-type="eqref" reference="eq.terFev1"}--[\[eq.terFev5\]](#eq.terFev5){reference-type="eqref" reference="eq.terFev5"} and letting $\varepsilon$ tend to $0$, we finally obtain that, for $v^\varepsilon$ as in [\[eq.ve\]](#eq.ve){reference-type="eqref" reference="eq.ve"}, $$\begin{aligned} \lim_\varepsilon{\mathcal F}^\varepsilon(v^\varepsilon) &= & \frac12 \int_\Omega\int_{Y\setminus B_a} Q(\mathbf E_xv(x)+\mathbf E_yv_1(x,y))\,dy\,dx \nonumber \\ && {}+\frac\gamma{2}\int_\Omega\int_{\partial B_a} \big|\nabla_{\tau,y}(P^2_a ( \nabla_x v(x)\, y\cdot {\vec{e}_r}+v_1(x,y)\cdot{\vec{e}_r})) \big|^2\, d\mathcal{H}^2(y)\, dx \nonumber \\ && {}-\frac\gamma{a^2}\int_{{\Omega}}\int_{\partial B_a}\big|P^2_a (\nabla_x v(x)\, y\cdot {\vec{e}_r}+v_1(x,y)\cdot{\vec{e}_r})\big|^2\,d\mathcal{H}^2(y)\,dx \nonumber \\ && {}+\frac1{4\pi a^3}\Big(\frac{3\lambda_{f\!\ell}}2-\frac\gamma a\Big)\int_{{\Omega}}\left(\int_{\partial B_a}\big(\nabla_x v(x) y\cdot {\vec{e}_r}+v_1(x,y)\cdot {\vec{e}_r}\big)\,d\mathcal{H}^2(y)\right)^2\!\! dx \nonumber \\ && {}-(1-|B_a|\big)\int_\Omega f(x)\cdot v(x)\,dx. 
\label{eq.limv}\end{aligned}$$ On the other hand, recalling [\[Vei-var\]](#Vei-var){reference-type="eqref" reference="Vei-var"} and making use of [\[eq.unfol-ex\]](#eq.unfol-ex){reference-type="eqref" reference="eq.unfol-ex"}--[\[eq.unfol-ex2\]](#eq.unfol-ex2){reference-type="eqref" reference="eq.unfol-ex2"} and of the definition [\[eq.unfol\]](#eq.unfol){reference-type="eqref" reference="eq.unfol"} of $\hat w^\varepsilon$, we have that for $\omega\subset\subset{\Omega}$ and $\varepsilon$ small enough, $$\begin{aligned} {\mathcal F}^\varepsilon(u^\varepsilon)& \ge & \frac12 \int_{\omega}\int_{Y\setminus \overline B_a}Q(\mathbf E_y {\hat w}^\varepsilon(x,y))\,dy\,dx \nonumber \\ && {}+ \frac\gamma2 \int_{\omega}\int_{\partial B_a}|\nabla_{\tau,y}(P^2_a ({\hat w}^\varepsilon(x,y)\cdot{\vec{e}_r}(y)))|^2\,d\mathcal{H}^2(y)\, dx \nonumber \\ && {}- \frac\gamma{a^2}\int_{\omega}\int_{\partial B_a}|P^2_a ({\hat w}^\varepsilon(x,y)\cdot {\vec{e}_r}(y))|^2\,d\mathcal{H}^2(y)\,dx \nonumber \\ && {} + \frac1{4\pi a^3} \Big(\frac{3\lambda_{f\!\ell}}2-\frac\gamma a\Big)\int_{\omega}\left(\int_{\partial B_a}{\hat w}^\varepsilon(x,y)\cdot{\vec{e}_r}(y)\,d\mathcal{H}^2(y)\right)^2\!\!dx-\int_{{\Omega}^\varepsilon} f\cdot u^\varepsilon\, dx \nonumber \\ & =: & \mathcal{G}({\hat w}^\varepsilon)-\int_{{\Omega}^\varepsilon} f\cdot u^\varepsilon\, dx, \label{eq.>liminf}\end{aligned}$$ where we also used that for a.e. $x$ $$\begin{gathered} \frac\gamma2 \int_{\partial B_a}|\nabla_{\tau,y}(P^2_a ({\hat w}^\varepsilon(x,\cdot)\cdot{\vec{e}_r}))|^2\,d\mathcal{H}^2- \frac\gamma{a^2}\int_{\partial B_a}|P^2_a ({\hat w}^\varepsilon(x,\cdot)\cdot {\vec{e}_r})|^2\,d\mathcal{H}^2 \\ + \frac1{4\pi a^3} \Big(\frac{3\lambda_{f\!\ell}}2-\frac\gamma a\Big)\left(\int_{\partial B_a}{\hat w}^\varepsilon(x,y)\cdot{\vec{e}_r}(y)\,d\mathcal{H}^2(y)\right)^2\geq0\end{gathered}$$ by [\[ecdt3-0\]](#ecdt3-0){reference-type="eqref" reference="ecdt3-0"} with $\varepsilon=1$, and [\[eq.cond-min\]](#eq.cond-min){reference-type="eqref" reference="eq.cond-min"}. Now, the inequality above and the positive definite character of $Q$ imply that the quadratic functional $\mathcal{G}$ defined in [\[eq.\>liminf\]](#eq.>liminf){reference-type="eqref" reference="eq.>liminf"} is non-negative, hence convex on the space of functions ${\mathscr X}_\omega$ with $$\label{eq.def-X} \begin{array}{rcl} {\mathscr X}_\omega &:= & \big\{ w\in L^2(\omega; H^1(Y\setminus\overline B_a;\mathbb{R}^3)): \ P^2_a (w\cdot{\vec{e}_r})\in L^2(\omega; H^1(\partial B_a))\big\} \smallskip \\ & = & \big\{ w\in L^2(\omega; H^1(Y\setminus\overline B_a;\mathbb{R}^3)): \ w\cdot{\vec{e}_r}\in L^2(\omega; H^1(\partial B_a))\big\} . 
\end{array}$$ Thus, in view of convergences [\[eq.lim.uxy\]](#eq.lim.uxy){reference-type="eqref" reference="eq.lim.uxy"}--[\[eq.lim.uxy2\]](#eq.lim.uxy2){reference-type="eqref" reference="eq.lim.uxy2"} and of [\[eq.def-u_1\]](#eq.def-u_1){reference-type="eqref" reference="eq.def-u_1"}, weak lower semicontinuity yields $$\begin{aligned} \lefteqn{\liminf_\varepsilon{\mathcal F}^\varepsilon(u^\varepsilon)} \nonumber \\ & \geq & \frac12 \int_{\omega} \int_{Y\setminus B_a}Q(\mathbf E_xu(x)+\mathbf E_y\hat u_1(x,y))\, dy\, dx \nonumber \\ && {} + \frac{\gamma}{2}\int_{\omega} \int_{\partial B_a} |\nabla_{\tau,y}P^2_a (\nabla_x u(x)\, y \cdot {\vec{e}_r}(y) + \hat u_1(x,y)\cdot {\vec{e}_r}(y))|^2\,d\mathcal{H}^2(y)\, dx \nonumber \\ && {} - \frac\gamma{a^2}\int_{\omega}\int_{\partial B_a} |P^2_a (\nabla_x u(x)\, y\cdot {\vec{e}_r}(y) + \hat u_1(x,y)\cdot {\vec{e}_r}(y))|^2\,d\mathcal{H}^2(y)\, dx \nonumber \\ && {} + \frac1{4\pi a^3} \Big(\frac{3\lambda_{f\!\ell}}2-\frac\gamma a\Big) \int_{\omega}\left(\int_{\partial B_a} (\nabla_x u(x)y\cdot{\vec{e}_r}(y) +\hat u_1(x,y)\cdot {\vec{e}_r}(y) )\, d\mathcal{H}^2(y)\right)^2\!\!dx \nonumber \\ && {}- \limsup_\varepsilon\int_{\Omega^\varepsilon}f\cdot u^\varepsilon\,dx. \label{eq.terFev6}\end{aligned}$$ Now, for any $\omega_\eta\subset\subset{\Omega}$ with $|{\Omega}\setminus\omega_\eta|\le \eta$, we may write $$\int_{{\Omega}^\varepsilon\cap\,\omega_\eta}f\cdot u^\varepsilon\, dx=\int_{{\Omega}^\varepsilon\cap\,\omega_\eta}f\cdot R^\varepsilon u^\varepsilon\, dx=\int_{\omega_\eta}\chi_{Y\setminus \overline B_a}(x/\varepsilon)f\cdot R^\varepsilon u^\varepsilon\, dx.$$ Therefore, in view of [\[eq.conv=Rue\]](#eq.conv=Rue){reference-type="eqref" reference="eq.conv=Rue"} and Rellich's Theorem we have $$\int_{{\Omega}^\varepsilon\cap\,\omega_\eta}f\cdot u^\varepsilon\, dx \longrightarrow (1-|B_a|)\int_{\omega_\eta} fu\,dx.$$ Since $$\left|\int_{{\Omega}^\varepsilon\setminus\omega_\eta}f\cdot u^\varepsilon dx\right|\le C\|f\|_{L_2({\Omega}\setminus \omega_\eta)}\longrightarrow 0$$ as $\eta\to0$, we deduce that $$\label{eq.limfu} \lim_\varepsilon\int_{{\Omega}^\varepsilon}f\cdot u^\varepsilon\,dx=(1-|B_a|)\int_{{\Omega}} fu\,dx.$$ By [\[eq.limfu\]](#eq.limfu){reference-type="eqref" reference="eq.limfu"} and by letting $\omega\nearrow{\Omega}$ in [\[eq.terFev6\]](#eq.terFev6){reference-type="eqref" reference="eq.terFev6"}, we conclude that $\hat u_1\in L^2({\Omega};H^1(Y\setminus\overline B_a;\mathbb{R}^3))$ and $P^2_a (\hat u_1\cdot{\vec{e}_r})\in L^2({\Omega}; H^1(\partial B_a;\mathbb{R}^3))$ (and not only locally as in [\[eq.def-u_1\]](#eq.def-u_1){reference-type="eqref" reference="eq.def-u_1"} and as implied by [\[eq.lim.uxy\]](#eq.lim.uxy){reference-type="eqref" reference="eq.lim.uxy"}) and that $$\begin{aligned} \lefteqn{\liminf_\varepsilon{\mathcal F}^\varepsilon(u^\varepsilon)} \nonumber \\ & \geq & \frac12 \int_{\Omega} \int_{Y\setminus B_a}Q(\mathbf E_xu(x)+ \mathbf E_y\hat u_1(x,y))\, dy\, dx \nonumber \\ && {} + \frac{\gamma}{2}\int_{\Omega} \int_{\partial B_a} |\nabla_{\tau,y}P^2_a (\nabla_x u(x)\, y \cdot {\vec{e}_r}+ \hat u_1(x,y)\cdot {\vec{e}_r}(y))|^2\,d\mathcal{H}^2(y)\, dx \nonumber \\ && {} - \frac\gamma{a^2}\int_{\Omega}\int_{\partial B_a} |P^2_a (\nabla_x u(x)\, y\cdot {\vec{e}_r}+ \hat u_1(x,y)\cdot {\vec{e}_r}(y))|^2\,d\mathcal{H}^2(y)\, dx \nonumber \\ && {} + \frac1{4\pi a^3} \Big(\frac{3\lambda_{f\!\ell}}2-\frac\gamma a\Big) \int_{\Omega}\left(\int_{\partial B_a} (\nabla_x u(x)y\cdot{\vec{e}_r}(y) + \hat u_1(x,y)\cdot 
{\vec{e}_r}(y) )\, d\mathcal{H}^2(y)\right)^2\!\!dx \nonumber \\ && {}- (1-|B_a|)\int_{{\Omega}}f(x)\cdot u(x)\,dx. \label{eq.terFev7}\end{aligned}$$ By [@ADMLP Lemmas A.1 and A.2] and the assumptions on $\Gamma$ the set $\{v\in C^\infty(\overline\Omega;\mathbb{R}^3): v=0 \mbox{ on }\Gamma\}$ is dense in $H^1_\Gamma({\Omega};\mathbb{R}^3)$. Further, $C^1_c(\Omega;C^1_\sharp(Y\setminus \overline B_a;\mathbb{R}^3))$ is, of course, dense in $L^2({\Omega}; H^1_\sharp(Y\setminus\overline B_a;\mathbb{R}^3))$ but also in ${\mathscr X}_\Omega$ defined in [\[eq.def-X\]](#eq.def-X){reference-type="eqref" reference="eq.def-X"} because $C^1_\sharp(Y\setminus \overline B_a;\mathbb{R}^3)$ is dense in ${\mathscr X}$ defined in [\[eq.defXcell\]](#eq.defXcell){reference-type="eqref" reference="eq.defXcell"}. Indeed, take $\psi\in {\mathscr X}$ and $\psi_n\in C^\infty_\sharp(Y\setminus \overline B_a;\mathbb{R}^3)$, $g_n\in C^\infty(\partial B_a)$ converging strongly to $\psi$ and $\psi\cdot{\vec{e}_r}$ in $H^1_\sharp(Y\setminus\overline B_a;\mathbb{R}^3)$ and $H^1(\partial B_a)$, respectively. Solve, with periodic boundary conditions on $\partial Y$, $$\begin{cases} \Delta v_n=\Delta (\psi_n\cdot{\vec{e}_r}) \mbox{ in } Y\setminus \overline B_a,\\[2mm] v_n=g_n \mbox{ on }\partial B_a,\end{cases}$$ so that $v_n\in C^\infty_\sharp(Y\setminus\overline B_a)$ converges to $\psi\cdot{\vec{e}_r}$ strongly in $H^1(Y\setminus \overline B_a)$ and ${v_n}|_{\partial B_a}$ converges to $\psi\cdot{\vec{e}_r}$ strongly in $H^1(\partial B_a)$. Let $\eta>0$ be such that $a+\eta<1/2$ and let $\varphi\in C^\infty_c(B_{a+\eta})$ be a cut-off function such that $\varphi=1$ on $B_{a+\eta/2}$. Define $$\zeta_n:=\varphi (v_n- \psi_n\cdot{\vec{e}_r}) {\vec{e}_r}+\psi_n.$$ Clearly, $\zeta_n\in C^\infty_\sharp(Y\setminus\overline B_a;\mathbb{R}^3)$ for every $n$, $\zeta_n\to \psi$ strongly in $H^1_\sharp(Y\setminus\overline B_a;\mathbb{R}^3)$, and $\zeta_n\cdot{\vec{e}_r}\to \psi\cdot{\vec{e}_r}$ strongly in $H^1(\partial B_a)$. In view of [\[eq.uemi1\]](#eq.uemi1){reference-type="eqref" reference="eq.uemi1"}, [\[eq.limv\]](#eq.limv){reference-type="eqref" reference="eq.limv"}, and [\[eq.terFev7\]](#eq.terFev7){reference-type="eqref" reference="eq.terFev7"}, these density results establish that $(u,\hat u_1)$ is a solution of the problem $$\begin{gathered} \label{eq.pbmili} \min\left\{\int_\Omega\mathcal{F}_{per}(\mathbf E_x v(x),v_1)\, dx-(1-|B_a|)\int_\Omega f\cdot v\,dx:\right.\\[2mm] \left. (v,v_1)\in H^1_\Gamma(\Omega;\mathbb{R}^3)\times {\mathscr X}_\Omega\right\}\end{gathered}$$ with $\mathcal{F}_{per}$ defined in [\[eq.cell-pb\]](#eq.cell-pb){reference-type="eqref" reference="eq.cell-pb"} and ${\mathscr X}_\Omega$ in [\[eq.def-X\]](#eq.def-X){reference-type="eqref" reference="eq.def-X"}. Further the minimizing pair $(u,\hat u_1)$ is unique by Remark [Remark 12](#rk.eqform){reference-type="ref" reference="rk.eqform"}, which implies the uniqueness of $\nabla_x u \, y+\hat u_1 \in L^2(\Omega;H^1_\sharp (Y\setminus \overline B_a;\mathbb{R}^3))$, hence of $(u,\hat u_1)$ in $H^1_\Gamma(\Omega;\mathbb{R}^3)\times L^2(\Omega;H^1_\sharp (Y\setminus \overline B_a;\mathbb{R}^3))$. In particular, convergence of $({\hat w}^\varepsilon)$ holds along the whole sequence and not only along a suitable subsequence. 
Moreover, by taking the $\limsup$ instead of the $\liminf$ in [\[eq.terFev6\]](#eq.terFev6){reference-type="eqref" reference="eq.terFev6"}, we actually get from [\[eq.limv\]](#eq.limv){reference-type="eqref" reference="eq.limv"} and [\[eq.\>liminf\]](#eq.>liminf){reference-type="eqref" reference="eq.>liminf"}, together with the already mentioned density argument, that, for any $\omega\subset\subset{\Omega}$, as $\varepsilon\to 0$, $$\label{eq.conv-xy1} \mathbf E_y {\hat w}^\varepsilon\to \mathbf E_x u+\mathbf E_y \hat u_1 \mbox{ strongly in }L^2(\omega;L^2(Y\setminus\overline B_a;\mathbb{M}^{3{\times}3}_{\rm sym}))$$ and $$\label{eq.conv-xy2} \nabla_{\tau,y}(P^2_a ({\hat w}^\varepsilon(x,\cdot)\cdot{\vec{e}_r})) \to \nabla_{\tau,y}(P^2_a (\nabla_x u\, y\cdot {\vec{e}_r}+\hat u_1\cdot {\vec{e}_r})) \mbox{ strongly in }L^2(\omega;L^2(\partial B_a;\mathbb{R}^3)).$$ Note that, in view of [\[eq.pbmili\]](#eq.pbmili){reference-type="eqref" reference="eq.pbmili"} and of the definition of $\lambda_F$ as the minimizer of [\[eq.cell-pb\]](#eq.cell-pb){reference-type="eqref" reference="eq.cell-pb"} for any $F\in \mathbb{M}^{3{\times}3}_{\rm sym}$, $$\label{eq.exp-u1} \hat u_1(x,\cdot)=\sum_{i,j=1}^{3}(\mathbf E_x u(x))_{ij} \lambda_{ij},$$ where $\lambda_{ij}$ is defined in [\[eq.def-chiij\]](#eq.def-chiij){reference-type="eqref" reference="eq.def-chiij"}. Observe next that [\[eq.pbmili\]](#eq.pbmili){reference-type="eqref" reference="eq.pbmili"} also reads as $$\min\left\{\frac12 \int_\Omega\mathbb A_{\rm hom}\mathbf Ev(x)\cdot \mathbf Ev(x)\, dx-(1-|B_a|)\int_\Omega f\cdot v\,dx: \ v\in H^1_\Gamma(\Omega;\mathbb{R}^3)\right\}$$ with $\mathbb A_{\rm hom}$ defined in [\[eq.def-Ah\]](#eq.def-Ah){reference-type="eqref" reference="eq.def-Ah"}, which, together with [\[eq.exp-u1\]](#eq.exp-u1){reference-type="eqref" reference="eq.exp-u1"}, delivers [\[eq.hom-pb\]](#eq.hom-pb){reference-type="eqref" reference="eq.hom-pb"}. Finally, convergences [\[eq.conv-xy1\]](#eq.conv-xy1){reference-type="eqref" reference="eq.conv-xy1"} and [\[eq.conv-xy2\]](#eq.conv-xy2){reference-type="eqref" reference="eq.conv-xy2"} above can in turn be rewritten in terms of $u^\varepsilon$ as follows. First, for $\omega\subset\!\subset \Omega$, consider the set $I^\omega_\varepsilon\subset I_\varepsilon$ of indices $i$ such that $Y^i_\varepsilon\subset \omega$ and set $\tilde\omega^\varepsilon:=\cup_{i\in I^\omega_\varepsilon}(Y^i_\varepsilon\setminus {\overline B}^i_{\varepsilon a})$.
Then, $$\begin{aligned} \lefteqn{\int_{\tilde\omega^\varepsilon}\left|\mathbf E_xu^\varepsilon(x)-\frac1{\varepsilon^3}\int_{Y^{\kappa(x/\varepsilon)}_{\varepsilon}}\left(\mathbf E_x u(z)+\mathbf E_y\hat u_1(z, x/\varepsilon)\right)dz\right|^2 dx} \nonumber \\ & = & \sum_{i\in I^\omega_\varepsilon}\int_{Y^i_\varepsilon\setminus {\overline B}^i_{\varepsilon a}}\left| \frac1{\varepsilon^3}\int_{Y^i_\varepsilon}\left(\mathbf E_x u^\varepsilon(x)- \mathbf E_x u(z)-\mathbf E_y\hat u_1(z,x/\varepsilon)\right)dz \right|^2 dx \nonumber \\ & \le & \frac1{\varepsilon^3}\sum_{i\in I^\omega_\varepsilon}\int_{Y^i_\varepsilon\setminus {\overline B}^i_{\varepsilon a}}\int_{Y^i_\varepsilon}\left|\mathbf E_x u^\varepsilon(x)- \mathbf E_x u(z)-\mathbf E_y\hat u_1(z,x/\varepsilon)\right|^2dz dx \nonumber \\ & \le & \int_{\omega}\int_{Y\setminus\overline B_a}\left|1/\varepsilon\mathbf E_y{\hat u}^\varepsilon(x,y)-\mathbf E_x u(x)-\mathbf E_y\hat u_1(x,y)\right|^2 dydx \nonumber \\ & = & \int_{\omega}\int_{Y\setminus\overline B_a}\left| \mathbf E_y{\hat w}^\varepsilon(x,y)- \mathbf E_x u(x)-\mathbf E_y\hat u_1(x,y)\right|^2 dydx \stackrel{\varepsilon}{\to} 0 \label{eq.conv-cor-0}\end{aligned}$$ where we have used [\[eq.unfol\]](#eq.unfol){reference-type="eqref" reference="eq.unfol"} and [\[eq.conv-xy1\]](#eq.conv-xy1){reference-type="eqref" reference="eq.conv-xy1"}. Since $$\mathbf E_x u- \frac1{\varepsilon^3}\int_{Y^{\kappa(x/\varepsilon)}_{\varepsilon}}\mathbf E_x u(z)dz\stackrel{\varepsilon}{\longrightarrow} 0\quad \mbox{ strongly in }L^2(\Omega;\mathbb{M}^{3{\times}3}),$$ we conclude from [\[eq.conv-cor-0\]](#eq.conv-cor-0){reference-type="eqref" reference="eq.conv-cor-0"} that $$\int_{\tilde\omega^\varepsilon}\left|\mathbf E_xu^\varepsilon(x)-\mathbf E_x u(x)-\frac1{\varepsilon^3}\int_{Y^{\kappa(x/\varepsilon)}_{\varepsilon}}\mathbf E_y\hat u_1(z, x/\varepsilon)dz\right|^2 dx \stackrel{\varepsilon}{\longrightarrow} 0,$$ hence, in view of [\[eq.exp-u1\]](#eq.exp-u1){reference-type="eqref" reference="eq.exp-u1"}, that $$\label{eq.strong1} \int_{\tilde\omega^\varepsilon}\left|\mathbf E_xu^\varepsilon- \mathbf E_x u- \frac1{\varepsilon^3}\sum_{i,j=1}^{3}\left(\int_{Y^{\kappa(x/\varepsilon)}_{\varepsilon}}(\mathbf E_x u)_{ij}(z)\,dz\right) \mathbf E_y\lambda_{ij}(x/\varepsilon)\right|^2 dx \stackrel{\varepsilon}{\longrightarrow} 0.$$ Similarly, in the notation of the statement of Theorem [Theorem 11](#thm.hom){reference-type="ref" reference="thm.hom"}, one can obtain $$\begin{gathered} \label{eq.strong2} \varepsilon\sum_{i\in I^\omega_\varepsilon}\int_{\partial B^i_{\varepsilon a}}\Bigg|\nabla_\tau (P^2_{i,\varepsilon a}( u^\varepsilon\cdot {\vec{e}_r})) \\[2mm] \left.-\frac1{\varepsilon^3}\nabla_\tau \Bigg(P^2_{a}\Bigg(\int_{Y^i_\varepsilon}\Big(\nabla_xu(z) \, y +\sum_{j,k=1}^{3}(\mathbf E_x u)_{jk}(z)\lambda_{jk}(y)\Big)dz\cdot {\vec{e}_r}\Bigg)\Bigg)\!\Big|_{y=x/\varepsilon}\right|^2d\mathcal{H}^2\\ \stackrel{\varepsilon}{\longrightarrow} 0.\end{gathered}$$ Equations [\[eq.strong1\]](#eq.strong1){reference-type="eqref" reference="eq.strong1"} and [\[eq.strong2\]](#eq.strong2){reference-type="eqref" reference="eq.strong2"} yield [\[eq.resuco\]](#eq.resuco){reference-type="eqref" reference="eq.resuco"} and [\[eq.resuco2\]](#eq.resuco2){reference-type="eqref" reference="eq.resuco2"}.
Indeed, we can replace $\tilde\omega^\varepsilon$ and $I^\omega_\varepsilon$ by $\omega\cap\omega^\varepsilon$ and $I_\varepsilon$ in [\[eq.strong1\]](#eq.strong1){reference-type="eqref" reference="eq.strong1"} and [\[eq.strong2\]](#eq.strong2){reference-type="eqref" reference="eq.strong2"}, respectively, upon choosing, for any $\omega \subset\!\subset\Omega$, another set $\omega'\subset\!\subset\Omega$ containing all $Y^i_\varepsilon$'s that intersect $\omega$. In view of [\[eq.conv=Rue\]](#eq.conv=Rue){reference-type="eqref" reference="eq.conv=Rue"}, the proof of Theorem [Theorem 11](#thm.hom){reference-type="ref" reference="thm.hom"} is complete. ◻ ## Elastic enhancement. {#sec.enhance} In this last subsection, we compare the homogenized behavior obtained in the first subsection with that of the elastomer without the inclusions. We establish in a specialized setting the following counterintuitive result: a large enough surface tension will produce elastic enhancement (stronger elasticities) in spite of the lack of resistance to shear in the fluid inclusions. To that effect we propose to compare between $1/2 \mathbb AF\cdot F$ (the elastic energy for a given constant strain $F$ associated with the original material occupying the whole volume) to $1/2 \mathbb A_{\rm hom}F\cdot F=\mathcal{F}_{per}(F, \lambda_F)$. This will be done through a "dualization\" process. First we recall the expression [\[eq.cell-pb\]](#eq.cell-pb){reference-type="eqref" reference="eq.cell-pb"} for $\mathcal{F}_{per}$ as well as [\[ecdt3-0\]](#ecdt3-0){reference-type="eqref" reference="ecdt3-0"} in Appendix A. We obtain the following inequality for every $F\in\mathbb{M}^{3{\times}3}_{\rm sym}$ and $\psi\in {\mathscr X}$: $$\begin{gathered} \mathcal{F}_{per}(F,\psi)\ge \frac12 \int_{Y\setminus\overline B_a} Q(\mathbf E\psi+F)\, dy +\frac\gamma3\int_{\partial B_a}|\nabla_\tau P^2_a ((\psi+Fy)\cdot{\vec{e}_r})|^2\, d\mathcal{H}^2 \\ +\frac{1}{2|B_a|}\left(\lambda_{f\!\ell}-\frac{2\gamma}{3a}\right) \left(\int_{\partial B_a} (\psi+Fy)\cdot{\vec{e}_r}\,d\mathcal{H}^2\right)^2.\end{gathered}$$ Taking a supremum over triplets $(\sigma,\xi,t)\in L^2(Y\setminus\overline B_a;\mathbb{M}^{3{\times}3}_{\rm sym})\times L^2(\partial B_a;\mathbb{R}^3)\times\mathbb{R}$ with $$\label{eq.cond-sigma} \begin{cases}\mathop{\mathrm{div}}\sigma=0 \mbox{ in } Y\setminus\overline B_a,\\[2mm] \sigma\nu \mbox{ anti-periodic on }\partial Y,\quad \sigma{\vec{e}_r}\parallel{\vec{e}_r}\mbox{ on }\partial B_a,\end{cases}$$ quadratic duality, integration by parts and [\[eq.der-area\]](#eq.der-area){reference-type="eqref" reference="eq.der-area"} imply that $$\begin{gathered} \mathcal{F}_{per}(F,\psi)\ge\sup_{\sigma,\xi,t}\left\{\left(\int_{Y\setminus\overline B_a}\sigma\, dy\right)\cdot F+\int_{Y\setminus\overline B_a} \sigma\cdot \mathbf E\psi\, dy \right.\\ +\frac{2\gamma}3 \int_{\partial B_a}\xi\cdot\nabla_\tau P^2_a ((\psi+Fy)\cdot{\vec{e}_r})\, d\mathcal{H}^2+\frac{1}{|B_a|}\left(\lambda_{f\!\ell}-\frac{2\gamma}{3a}\right) t\left(\int_{\partial B_a} (\psi+Fy)\cdot{\vec{e}_r}\,d\mathcal{H}^2\right)\\\left.-\frac12 \int_{Y\setminus\overline B_a} Q^{-1}(\sigma)\, dy- \frac\gamma3 \int_{\partial B_a}|\xi|^2\, d\mathcal{H}^2-\frac{1}{2|B_a|}\left(\lambda_{f\!\ell}-\frac{2\gamma}{3a}\right)t^2\right\} \\ ={\sup_{\sigma,\xi,t}}\left\{\left(\int_{Y\setminus\overline B_a}\sigma\, dy+\int_{\partial B_a}(\sigma{\vec{e}_r})\otimes y\, d\mathcal{H}^2\right)\cdot F-\int_{\partial B_a}(\sigma{\vec{e}_r}\cdot{\vec{e}_r}) 
((\psi+Fy)\cdot{\vec{e}_r})\,d\mathcal{H}^2\right.\\ \left. +\frac{2\gamma}{3} \int_{\partial B_a}\left(-\mathop{\mathrm{div}}_\tau\xi+\frac2a\xi\cdot{\vec{e}_r}\right) P^2_a ((\psi+Fy)\cdot{\vec{e}_r})\, d\mathcal{H}^2\right.\\ +\frac{1}{|B_a|}\left(\lambda_{f\!\ell}-\frac{2\gamma}{3a}\right) t\left(\int_{\partial B_a} (\psi+Fy)\cdot{\vec{e}_r}\,d\mathcal{H}^2\right)-\frac12 \int_{Y\setminus\overline B_a} Q^{-1}(\sigma)\, dy\\\left. -\frac\gamma3 \int_{\partial B_a}|\xi|^2\, d\mathcal{H}^2-\frac{1}{2|B_a|}\left(\lambda_{f\!\ell}-\frac{2\gamma}{3a}\right)t^2\right\}.\end{gathered}$$ In the right handside of the equality above, the term $\int_{\partial B_a}\mathop{\mathrm{div}}_\tau\xi\; P^2_a((\psi+Fy)\cdot{\vec{e}_r})\,d\mathcal{H}^2$ should be understood as a duality product between $H^1(\partial B_a)$ and its dual. Taking the infimum in $\psi$ in the previous inequality and using that $\inf_\psi\sup_{\sigma,\xi,t}\ge\sup_{\sigma,\xi,t}\inf_\psi$ yields $$\begin{gathered} \label{eq.ineq-bd1} \frac12 \mathbb A_{\rm hom}F\cdot F\ge\sup_{\sigma,\xi,t}\inf_\psi\left\{\left(\int_{Y\setminus\overline B_a}\sigma\, dy+\int_{\partial B_a}(\sigma{\vec{e}_r})\otimes y\, d\mathcal{H}^2\right)\cdot F\right.\\ \left. -\int_{\partial B_a}\!\!(\sigma{\vec{e}_r}\cdot{\vec{e}_r})((\psi+Fy)\cdot{\vec{e}_r})\, d\mathcal{H}^2+ \frac{2\gamma}{3} \int_{\partial B_a}\!\!\left(\!\!-\mathop{\mathrm{div}}_\tau\xi+\frac2a\xi\cdot{\vec{e}_r}\right) P^2_a ((\psi+Fy)\cdot{\vec{e}_r})\, d\mathcal{H}^2\right.\\ +\frac{1}{|B_a|}\left(\lambda_{f\!\ell}-\frac{2\gamma}{3a}\right) t\left(\int_{\partial B_a} (\psi+Fy)\cdot{\vec{e}_r}\,d\mathcal{H}^2\right)-\frac12 \int_{Y\setminus\overline B_a} Q^{-1}(\sigma)\, dy\\\left. -\frac\gamma3 \int_{\partial B_a}|\xi|^2\, d\mathcal{H}^2-\frac{1}{2|B_a|}\left(\lambda_{f\!\ell}-\frac{2\gamma}{3a}\right)t^2\right\}.\end{gathered}$$ Given $(\sigma,\psi,t)$ the infimum at the right hand-side in [\[eq.ineq-bd1\]](#eq.ineq-bd1){reference-type="eqref" reference="eq.ineq-bd1"} is $-\infty$ unless the part that is linear in $\psi$ vanishes. Therefore, the supremum can be restricted to those $(\sigma,\psi,t)$ for which this linear term is zero. This is, in particular, the case if $\xi$ and $t$ are such that $$\label{eq.cond-psi-tau} \begin{cases}\displaystyle \int_{\partial B_a} (-\mathop{\mathrm{div}}_\tau\xi+\frac2a\xi\cdot{\vec{e}_r}) y\, d\mathcal{H}^2=0,\\[3mm]\displaystyle \frac{2\gamma}3( -\mathop{\mathrm{div}}_\tau\xi+\frac2a\xi\cdot{\vec{e}_r}) -\sigma{\vec{e}_r}\cdot{\vec{e}_r}+\frac{1}{|B_a|}\left(\lambda_{f\!\ell}-\frac{2\gamma}{3a}\right) t=0 \mbox{ on }\partial B_a. \end{cases}$$ We do not know how to optimally exploit [\[eq.ineq-bd1\]](#eq.ineq-bd1){reference-type="eqref" reference="eq.ineq-bd1"} with the restrictions [\[eq.cond-sigma\]](#eq.cond-sigma){reference-type="eqref" reference="eq.cond-sigma"}, [\[eq.cond-psi-tau\]](#eq.cond-psi-tau){reference-type="eqref" reference="eq.cond-psi-tau"} on $(\sigma,\psi,t)$ as a possible way to demonstrate enhancement for general $\mathbb A$'s or $F$'s. We propose instead to illustrate enhancement in the specific case of an isotropic elastomer, i.e., $\mathbb A_{ijkh}:= \lambda \delta_{ij} \delta_{kh}+\mu (\delta_{ik} \delta_{jh}+\delta_{ih} \delta_{jk}),$ where $\lambda,\mu$ stand for the Lamé constants, and for an axisymmetric shear strain $F=F(f)$ with $$F(f):=-\frac{f}2(\vec{e}_1\otimes\vec{e}_1+\vec{e}_2\otimes\vec{e}_2)+f\vec{e}_3\otimes\vec{e}_3.$$ Then $1/2\mathbb AF(f)\cdot F(f)$ is $3/2\mu f^2$. 
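For the reader's convenience, here is the short computation behind this last value. For the isotropic tensor $\mathbb A$ above and any $F\in \mathbb{M}^{3{\times}3}_{\rm sym}$, $$\mathbb AF\cdot F=\lambda\,({\rm tr}\,F)^2+2\mu\, F\cdot F,$$ and the axisymmetric shear strain satisfies ${\rm tr}\,F(f)=-\tfrac f2-\tfrac f2+f=0$ together with $F(f)\cdot F(f)=\tfrac{f^2}4+\tfrac{f^2}4+f^2=\tfrac32 f^2$, so that $\tfrac12\mathbb AF(f)\cdot F(f)=\mu\,F(f)\cdot F(f)=\tfrac32\mu f^2$.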
Furthermore we will do so in the dilute limit, that is when $a\searrow 0$. We thus restrict $\sigma$, $\xi$, and $t$ to be of the form $$\begin{aligned} &\sigma(y)= \begin{cases} \sigma_{ij}(y)\vec{e}_i\otimes\vec{e}_j & \mbox{ in }S_{b}=\{y: a<|y|<b\}, \vspace{0.15cm}\\ \overline{\sigma}=\overline{\sigma}_{11}\left(\vec{e}_1\otimes\vec{e}_1+\vec{e}_2\otimes\vec{e}_2\right)+\overline{\sigma}_{33}\vec{e}_3\otimes\vec{e}_3 & \mbox{ in }Y\setminus \overline{S}_{b}, \end{cases} \nonumber\\ &\xi(y)=\beta_7\left(-\frac{y_1 y_3^2}{a^3}\vec{e}_1-\frac{y_2 y_3^2}{a^3}\vec{e}_2+\left(\frac{y_3}{a}-\frac{y_3^3}{a^3}\right)\vec{e}_3\right),\nonumber\\ &t=\beta_8, \label{S-xi-t}\end{aligned}$$ with components $$\begin{aligned} \sigma_{11}(y)&=\alpha_1+\alpha_2 y_1^2+\alpha_3 y_3^2+\alpha_4 y_1^2 y_3^2,& \sigma_{12}(y)&=\sigma_{21}(y)=\alpha_2 y_1 y_2+\alpha_4 y_1 y_2 y_3^2, \nonumber\\ \sigma_{13}(y)&=\sigma_{31}(y)=\alpha_5 y_1 y_3+\alpha_4 y_1 y_3^3,& \sigma_{22}(y)&=\alpha_1+\alpha_2 y_2^2+\alpha_3 y_3^2+\alpha_4 y_2^2 y_3^2,\nonumber\\ \sigma_{23}(y)&=\sigma_{32}(y)=\alpha_5 y_2 y_3+\alpha_4 y_2 y_3^3,& \sigma_{33}(y)&=\alpha_6+\alpha_7 y_3^2+\alpha_4 y_3^4,\label{sij}\end{aligned}$$ where $\alpha_{1}$ through $\alpha_{7}$ (which are functions of $|y|$) and $\beta_{7}$ and $\beta_{8}$ (which are constants) are spelled out in Appendix B, and where $\overline{\sigma}_{11}$, $\overline{\sigma}_{33}$ are two arbitrary constants. It can be checked that the fields $\sigma, \xi, t$ defined above satisfy [\[eq.cond-sigma\]](#eq.cond-sigma){reference-type="eqref" reference="eq.cond-sigma"} and [\[eq.cond-psi-tau\]](#eq.cond-psi-tau){reference-type="eqref" reference="eq.cond-psi-tau"}. The first equality in [\[S-xi-t\]](#S-xi-t){reference-type="eqref" reference="S-xi-t"} corresponds to the stress field in a spherical shell of inner radius $a$ and outer radius $b$ made of an elastic material with elasticity $\mathbb A$ containing a liquid with bulk modulus $\lambda_{f\!\ell}$; the solid/liquid interface $r=a$ is endowed with a surface tension $\gamma$. The outer boundary $r=b$ is subject to the affine traction $\overline\sigma{\vec{e}_r}$. The choices for $\xi$ and $t$ in [\[S-xi-t\]](#S-xi-t){reference-type="eqref" reference="S-xi-t"} correspond to the traction fields at the interface of the liquid inclusion with the spherical shell in the same problem. Note that the resulting displacement field on the outer boundary of the spherical shell is of the form $F(\bar f) y$ for some $\bar f$, which motivates our choice of $\sigma$. The second equality in [\[S-xi-t\]](#S-xi-t){reference-type="eqref" reference="S-xi-t"} corresponds to an affine extension of the stress field in the complement of $B_{b}$ in the unit cell $Y$. Rather cumbersome but straightforward calculations ensue. First, we use [\[S-xi-t\]](#S-xi-t){reference-type="eqref" reference="S-xi-t"} in [\[eq.ineq-bd1\]](#eq.ineq-bd1){reference-type="eqref" reference="eq.ineq-bd1"}, so that the terms that are linear in $\psi+Fy$ cancel out. The result is a concave polynomial of degree two in $\overline{\sigma}_{11}$ and $\overline{\sigma}_{33}$. We compute its maximum in $\overline{\sigma}_{11}$ and $\overline{\sigma}_{33}$. Next we go to the dilute case, letting $\theta:=4\pi a^3/3$ tend to $0$. 
We then obtain the following fully explicit bound: $$\begin{gathered} \label{eq.ineq-bd3} \frac12 \mathbb A_{\rm hom}F\cdot F\geq \frac{3\mu}{2}f^2\left(1+ \underbrace{\frac{15\mu(\lambda+2\mu) (\gamma/(2\mu a)-1)}{14\mu +9 \lambda+(34 \mu+15\lambda)\gamma/(2\mu a)}}\ \theta\right)+O(\theta^2).\\ (*)\hskip5.28cm\end{gathered}$$ If $\gamma/\mu>2a$, expression $(*)$ in [\[eq.ineq-bd3\]](#eq.ineq-bd3){reference-type="eqref" reference="eq.ineq-bd3"} will be positive. We conclude that the following holds true. **Proposition 14**. *Consider an isotropic elastomer with Lamé coefficients $\lambda, \mu$. If $\gamma/\mu >2a$, then, in the dilute limit $\theta\searrow 0$ ($\theta$ being the volume fraction of the fluid filled cavities), enhancement will occur for axisymmetric shear strains of the form $F=-f/2(\vec{e}_1\otimes\vec{e}_1+\vec{e}_2\otimes\vec{e}_2)+f\vec{e}_3\otimes\vec{e}_3$, that is, $$\frac12\mathbb A_{\rm hom}F\cdot F>\frac12\mathbb AF\cdot F=\frac{3\mu}{2}f^2.$$* The presence of liquid inclusions leads to a stiffer elasticity than that of the elastomer, in spite of the fact that the inclusions have zero shear resistance. **Remark 15**. A similar computation could be performed for uniaxial strains of the form $f\vec e\otimes\vec e$ with $|\vec e\,|=1$. In that case enhancement can also be achieved but at the expense of choosing both $\gamma$ large with respect to $\mu$ *and* $\lambda_{f\!\ell}$ much larger than $\gamma$. For large $\gamma$'s and still larger $\lambda_{f\!\ell}$, we expect enhancement for general elasticities and strains, but the technicalities involved in deriving a useful lower bound for the homogenized energy remain intractable at present. ¶ **Remark 16**. In the case of rigid inclusions the homogenized tensor is given by $$\begin{gathered} \frac12\mathbb A^{\rm rigid}F\cdot F\\[2mm] =\min\left\{\frac12 \int_{Y\setminus\overline B_a} Q(\mathbf E\psi+F)\, dx: \ \psi \in H^1_\sharp(Y\setminus\overline B_a;\mathbb{R}^3),\ \psi=-Fy \mbox{ on }\partial B_a\right\}. \end{gathered}$$ In view of the formula [\[eq.def-Ah\]](#eq.def-Ah){reference-type="eqref" reference="eq.def-Ah"} for $\mathbb A_{\rm hom}$ in the present setting, $$\frac12 \mathbb A^{\rm rigid}F\cdot F > \frac12\mathbb A_{\rm hom}F\cdot F,$$ independently of the values of $\gamma$ and $\lambda_{f\!\ell}$. Thus, while surface tension on many small liquid inclusions can surprisingly enhance elasticity, it cannot compete with rigid inclusion, as expected. ¶ # Acknowledgements {#acknowledgements .unnumbered} Support for this work by the National Science Foundation through the Grant DMREF--1922371 is gratefully acknowledged by GAF and OLP. JCD acknowledges support by the project PID2020-116809GB-I00 of the Ministerio de Ciencia e Innovación. MGM acknowledges support from MIUR--PRIN 2017. MGM is a member of GNAMPA--INdAM. # Appendix A {#appendix-a .unnumbered} In this appendix we review some variants of Poincaré's inequality on the boundary of the unit sphere, that are instrumental in the proofs of our main results. 
Consider the spectral decomposition of $H^1(\partial B_1)$ associated with the eigenvalues $\mu_\ell$, $\ell\geq 0$, of the Laplace-Beltrami operator on $\partial B_1$, i.e., the solutions in $H^1(\partial B_1)$ of $$\int_{\partial B_1} \nabla_\tau \varphi\cdot\nabla_\tau \psi\,d\mathcal{H}^2=\mu_\ell\int_{\partial B_1} \varphi\psi\,d\mathcal{H}^2\quad \text{ for every } \psi\in H^1(\partial B_1).$$ It is well known that $\mu_\ell=\ell(\ell+1)$, $\ell\geq0$ and that the space $V^\ell$ of eigenvectors relative to $\mu_\ell$ has dimension $2\ell+1$ (see [@EF Proposition 4.5]). We are especially interested in the space $V^0$, which is given by constant functions, and in the space $V^1$, which is given by the restrictions to $\partial B_1$ of the linear functions in $\mathbb R^3$. Denote by $P^0$, $P^1$ the orthogonal projections in $L^2(\partial B_1)$ onto the spaces $V^0$ and $V^1$, respectively. Setting $$\label{proj} P^2:=I-P^0-P^1, \quad V^2:= (V^0+V^1)^\bot,$$ and recalling that $\mu_1=2$, $\mu_2=6$, we have $$\|\varphi\|_{L^2(\partial B_1)}^2=\|P^0\varphi\|^2_{L^2(\partial B_1)}+\|P^1\varphi\|^2_{L^2(\partial B_1)}+\|P^2 \varphi\|^2_{L^2(\partial B_1)},$$ $$\|\nabla_\tau\varphi\|_{L^2(\partial B_1)}^2=\|\nabla_\tau P^1\varphi\|^2_{L^2(\partial B_1)}+\|\nabla_\tau P^2 \varphi\|^2_{L^2(\partial B_1)},$$ and $$\begin{aligned} \label{PWC} \int_{\partial B_1}|\nabla_\tau P^1\varphi|^2\,d\mathcal{H}^2& = & 2\int_{\partial B_1} |P^1 \varphi|^2\,d\mathcal{H}^2, \\\label{PWC2} \int_{\partial B_1}|\nabla_\tau P^2 \varphi|^2\,d\mathcal{H}^2& \geq & 6\int_{\partial B_1} |P^2 \varphi|^2\,d\mathcal{H}^2\end{aligned}$$ for every $\varphi\in H^1(\partial B_1)$. Combining the previous results, we have $$\begin{aligned} \int_{\partial B_1}|\varphi|^2\, d\mathcal{H}^2& \leq & \|P^0\varphi\|^2_{L^2(\partial B_1)} + \frac12 \int_{\partial B_1}|\nabla_\tau P^1\varphi|^2\,d\mathcal{H}^2+\frac16 \int_{\partial B_1}|\nabla_\tau P^2 \varphi|^2\,d\mathcal{H}^2 \\ & \leq & \frac1{4\pi}\left(\int_{\partial B_1}\varphi\, d\mathcal{H}^2\right)^2 +\frac12 \int_{\partial B_1}|\nabla_\tau \varphi|^2\, d\mathcal{H}^2,\end{aligned}$$ where we used that $$P^0(\varphi)=\frac{1}{4\pi}\int_{\partial B_1}\varphi\,d\mathcal{H}^2.$$ A simple scaling argument shows that $$\label{PW-sphere} \int_{\partial B_r}|\varphi|^2\, d\mathcal{H}^2\leq \frac{r^2}2\int_{\partial B_r}|\nabla_\tau \varphi|^2\, d\mathcal{H}^2 + \frac1{4\pi r^2}\left(\int_{\partial B_r}\varphi\, d\mathcal{H}^2\right)^2$$ for every $\varphi\in H^1(\partial B_r)$ and $r>0$. In Section [4](#sec.hom){reference-type="ref" reference="sec.hom"} we need a refinement of [\[PW-sphere\]](#PW-sphere){reference-type="eqref" reference="PW-sphere"} established in what follows.
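Before turning to that refinement, the following minimal Python sketch gives a numerical sanity check of [\[PW-sphere\]](#PW-sphere){reference-type="eqref" reference="PW-sphere"} for $r=1$; it is not needed for any of the proofs. The test function $\varphi(y)=y_3^2$ and the midpoint quadrature grid are our own illustrative choices, and we use the elementary closed form $|\nabla_\tau (y_3^2)|^2=4y_3^2(1-y_3^2)$ on $\partial B_1$.

```python
import numpy as np

# Midpoint quadrature on the unit sphere in spherical coordinates
# (theta = polar angle, phi = azimuthal angle).
n_t, n_p = 400, 400
theta = (np.arange(n_t) + 0.5) * np.pi / n_t
phi = (np.arange(n_p) + 0.5) * 2 * np.pi / n_p
T, _ = np.meshgrid(theta, phi, indexing="ij")
w = np.sin(T) * (np.pi / n_t) * (2 * np.pi / n_p)  # surface measure weights

# Test function y_3^2 (with y_3 = cos(theta)) and its tangential gradient squared.
y3 = np.cos(T)
f = y3**2
grad2 = 4 * y3**2 * (1 - y3**2)

lhs = np.sum(f**2 * w)                                           # exact value 4*pi/5
rhs = 0.5 * np.sum(grad2 * w) + np.sum(f * w) ** 2 / (4 * np.pi)  # 16*pi/15 + 4*pi/9
print(lhs <= rhs, lhs, rhs)
```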
Using the change of variables $x=\varepsilon i+ \varepsilon a y$, which transforms $\partial B_1$ into $\partial B^i_{\varepsilon a}$, we deduce that $H^1(\partial B^i_{\varepsilon a})$ decomposes as the orthogonal sum of $V_{i,\varepsilon a}^\ell=\{\psi((x-\varepsilon i)/\varepsilon a):\ \psi\in V^\ell\}.$ Denoting by $$\label{proj-e}P^\ell_{i,\varepsilon a} \mbox{ the orthogonal projection of }L^2(\partial B^i_{\varepsilon a}) \mbox{ onto }V_{i,\varepsilon a}^\ell,$$ we have, as above, $$\|\varphi\|_{L^2(\partial B^i_{\varepsilon a})}^2=\|P^0_{i,\varepsilon a} \varphi\|^2_{L^2(\partial B^i_{\varepsilon a})}+\|P^1_{i,\varepsilon a} \varphi\|^2_{L^2(\partial B^i_{\varepsilon a})}+\|P^2_{i,\varepsilon a} \varphi\|^2_{L^2(\partial B^i_{\varepsilon a})},$$ $$\|\nabla_\tau \varphi\|_{L^2(\partial B^i_{\varepsilon a})^3}^2=\|\nabla_\tau P^1_{i,\varepsilon a}\varphi\|^2_{L^2(\partial B^i_{\varepsilon a})}+\|\nabla_\tau P^2_{i,\varepsilon a} \varphi\|^2_{L^2(\partial B^i_{\varepsilon a})},$$ and $$\label{ecdt3} \varepsilon^2 a^2\int_{\partial B^i_{\varepsilon a}}|\nabla_\tau P^1_{i,\varepsilon a} \varphi |^2\,d\mathcal{H}^2= 2\int_{\partial B^i_{\varepsilon a}} |P^1_{i,\varepsilon a} u|^2\, d\mathcal{H}^2$$ $$\label{ecdt3-0} \varepsilon^2 a^2\int_{\partial B^i_{\varepsilon a}}|\nabla_\tau P^2_{i,\varepsilon a} \varphi |^2\, d\mathcal{H}^2\geq 6\int_{\partial B^i_{\varepsilon a}} |P^2_{i,\varepsilon a} \varphi|^2\, d\mathcal{H}^2$$ for every $\varphi\in H^1(\partial B^i_{\varepsilon a})$. Moreover, $$\label{ecdt4} P^0_{i,\varepsilon a} u={\frac 1 {4\pi \varepsilon^2 a^2}}\int_{\partial B^i_{\varepsilon a}}u\,d\mathcal{H}^2.$$ For $K>0$ let us define $$\mathcal V^i_{\varepsilon,K}(\varphi):=\frac{a^2 \varepsilon^2}{2}\int_{\partial B^i_{\varepsilon a}}|\nabla_\tau \varphi|^2\,d\mathcal{H}^2+\frac{K}{\varepsilon^2}\left(\int_{\partial B^i_{\varepsilon a}}\varphi\,d\mathcal{H}^2\right)^2-\int_{\partial B^i_{\varepsilon a}}|\varphi|^2\,d\mathcal{H}^2$$ for every $\varphi\in H^1(\partial B^i_{\varepsilon a})$. 
By [\[ecdt3\]](#ecdt3){reference-type="eqref" reference="ecdt3"} and [\[ecdt4\]](#ecdt4){reference-type="eqref" reference="ecdt4"} we deduce that $$\begin{aligned} \mathcal V^i_{\varepsilon,K}(\varphi) & = & \frac{a^2 \varepsilon^2}{2}\int_{\partial B^i_{\varepsilon a}}|\nabla_\tau P^1_{i,\varepsilon a} \varphi |^2\,d\mathcal{H}^2 +\frac{a^2 \varepsilon^2}{2}\int_{\partial B^i_{\varepsilon a}}|\nabla_\tau P^2_{i,\varepsilon a} \varphi |^2\,d\mathcal{H}^2 \nonumber \\ && {}+ \frac{K}{\varepsilon^2}\left(\int_{\partial B^i_{\varepsilon a}}\varphi\,d\mathcal{H}^2\right)^2 -\frac1{4\pi a^2\varepsilon^2}\left(\int_{\partial B^i_{\varepsilon a}}\varphi \, d\mathcal{H}^2\right)^2 \nonumber \\ && {} -\int_{\partial B^i_{\varepsilon a}}|P^1_{i,\varepsilon a} \varphi|^2\,d\mathcal{H}^2-\int_{\partial B^i_{\varepsilon a}}|P^2_{i,\varepsilon a} \varphi|^2\,d\mathcal{H}^2 \nonumber\\ & = & \frac{a^2 \varepsilon^2}{2}\int_{\partial B^i_{\varepsilon a}}|\nabla_\tau P^2_{i,\varepsilon a} \varphi |^2\,d\mathcal{H}^2 +\frac1{\varepsilon^2}\Big(K - \frac1{4\pi a^2}\Big)\left(\int_{\partial B^i_{\varepsilon a}} \varphi \,d\mathcal{H}^2\right)^2 \nonumber \\ && {}-\int_{\partial B^i_{\varepsilon a}}|P^2_{i,\varepsilon a} \varphi|^2 \, d\mathcal{H}^2, \label{identity}\end{aligned}$$ hence by [\[ecdt3-0\]](#ecdt3-0){reference-type="eqref" reference="ecdt3-0"} $$\mathcal V^i_{\varepsilon,K}(\varphi) \geq \frac{a^2\varepsilon^2}{3}\int_{\partial B^i_{\varepsilon a}}|\nabla_\tau P^2_{i,\varepsilon a} \varphi|^2\,d\mathcal{H}^2 +\frac1{\varepsilon^2}\Big(K- \frac1{4\pi a^2}\Big)\left(\int_{\partial B^i_{\varepsilon a}}\varphi\,d\mathcal{H}^2\right)^2.$$ Thus, in our setting (see [\[Vei-var-0\]](#Vei-var-0){reference-type="eqref" reference="Vei-var-0"}), the following coercivity estimate holds upon taking $K={\lambda_{f\!\ell}a^2/ (2\gamma |B_a|)}$, $$\begin{gathered} \label{coer} \frac{\gamma\varepsilon}2\int_{\partial B^i_{\varepsilon a}}|\nabla_\tau (v\cdot{\vec{e}_r})|^2\, d\mathcal{H}^2 -\frac{\gamma}{\varepsilon a^2}\int_{\partial B^i_{\varepsilon a}} (v\cdot{\vec{e}_r})^2 \, d\mathcal{H}^2 +\frac{\lambda_{f\!\ell}}{2\varepsilon^3|B_a|} \left(\int_{\partial B^i_{\varepsilon a}} v\cdot{\vec{e}_r}\,d\mathcal{H}^2\right)^2\\[2mm] \ge \frac{\gamma \varepsilon}3 \int_{\partial B^i_{\varepsilon a}}|\nabla_\tau (P^2_{i,\varepsilon a}(v\cdot{\vec{e}_r}))|^2\, d\mathcal{H}^2+ \frac1{4\pi a^3\varepsilon^3}\left(\frac32\lambda_{f\!\ell}- \frac{\gamma}{a}\right)\left(\int_{\partial B^i_{\varepsilon a}} v\cdot{\vec{e}_r}\,d\mathcal{H}^2\right)^2.\end{gathered}$$ # Appendix B {#appendix-b .unnumbered} The functions $\alpha_{1}$ through $\alpha_{7}$ in the components ([\[sij\]](#sij){reference-type="ref" reference="sij"}) and the constants $\beta_7$ and $\beta_8$ in relations ([\[S-xi-t\]](#S-xi-t){reference-type="ref" reference="S-xi-t"}) were computed using Wolfram Mathematica software. 
They read as $$\begin{aligned} \alpha_1=&\frac{15 \beta_1 \mu \lambda |y|^2}{\mu+\lambda}-2 \beta_2 \mu+\beta_5 (2 \mu+3 \lambda)+\frac{2 \beta_6 \mu-\frac{10 \beta_3 \mu^2}{\mu+\lambda}}{|y|^3}+\frac{3 \beta_4 \mu}{|y|^5},\\ \alpha_2=&-\frac{12 \beta_1 \mu \lambda}{\mu+\lambda}+\frac{3 \mu \left(\frac{2 \beta_3 (5 \mu+3 \lambda)}{\mu+\lambda}-2 \beta_6\right)}{|y|^5}-\frac{15 \beta_4 \mu}{|y|^7},\\ \alpha_3=&-\frac{3 \beta_1 \mu (14 \mu+25 \lambda)}{\mu+\lambda}+\frac{18 \beta_3 \mu^2}{|y|^5 (\mu+\lambda)}-\frac{15 \beta_4 \mu}{|y|^7},\\ \alpha_4=&\frac{105 \beta_4 \mu}{|y|^9}-\frac{90 \beta_3 \mu}{|y|^7},\\ \alpha_5=&\frac{6 \beta_1 \mu \lambda}{\mu+\lambda}+\frac{\frac{6 \beta_3 \mu (5 \mu+6 \lambda)}{\mu+\lambda}-6 \beta_6 \mu}{|y|^5}-\frac{45 \beta_4 \mu}{|y|^7},\\ \alpha_6=&\frac{3 \beta_1 \mu |y|^2 (14 \mu+15 \lambda)}{\mu+\lambda}+4 \beta_2 \mu+\beta_5 (2 \mu+3 \lambda)+\frac{\frac{2 \beta_3 \mu^2}{\mu+\lambda}+2 \beta_6 \mu}{|y|^3}+\frac{9 \beta_4 \mu}{|y|^5},\\ \alpha_7=&-\frac{3 \beta_1 \mu (14 \mu+17 \lambda)}{\mu+\lambda}-\frac{6 \mu (\beta_6 (\mu+\lambda)-\beta_3 (8 \mu+9 \lambda))}{|y|^5 (\mu+\lambda)}-\frac{90 \beta_4 \mu}{|y|^7},\end{aligned}$$ and $$\begin{aligned} {\beta_7=6 \beta_2+\frac{18 a^7 \beta_1 \lambda+6 a^2 \beta_3 (5 \mu+3 \lambda)-9 \beta_4 (\mu+\lambda)}{a^5 (\mu+\lambda)},\,\;\,\;\beta_8=3 \left(\frac{\beta_6}{a^3}+\beta_5\right)|B_a|}\end{aligned}$$ with $$\begin{aligned} \beta_1=&k_1(\overline{\sigma}_{11}-\overline{\sigma}_{33}),\,\beta_2=k_2(\overline{\sigma}_{11}-\overline{\sigma}_{33}),\,\beta_3=k_3(\overline{\sigma}_{11}-\overline{\sigma}_{33}), \,\beta_4=k_4(\overline{\sigma}_{11}-\overline{\sigma}_{33}),\\ \beta_5=&k_5(2\overline{\sigma}_{11}+\overline{\sigma}_{33}),\,\beta_6=k_6(2\overline{\sigma}_{11}+\overline{\sigma}_{33}),\end{aligned}$$ where $k_1$ through $k_{6}$ are coefficients explicitly known in terms of $\lambda$, $\mu$, $\lambda_{f\!\ell}$, $\gamma$, $a$, and $b$. They read as $$\begin{aligned} k_1=&-20 a^3 b^3 K^{-1} (\mu+\lambda) \left[2 a^3 \mu (\mu+\lambda)+a^2 \gamma \mu-2 a b^2 \mu (\mu+\lambda)+b^2 \gamma (\mu+\lambda)\right],\\ k_2=&\frac{1}{6} b^3 K^{-1} \left[50 a^8 \mu \left(28 \mu^2+56 \mu \lambda+27 \lambda^2\right)+200 a^7 \gamma \left(7 \mu^2+11 \mu \lambda+3 \lambda^2\right)-\right.\\ & 1008 a^6 b^2 \mu (\mu+\lambda)^2-504 a^5 b^2 \gamma \mu (\mu+\lambda)-2 a b^7 \mu (14 \mu+9 \lambda) (14 \mu+19 \lambda)-\\ &\left.b^7 \gamma (34 \mu+15 \lambda) (14 \mu+19 \lambda)\right],\\ k_3=&\frac{5}{6} a^3 b^3 K^{-1} (\mu+\lambda) \left[2 a^8 \mu (14 \mu+19 \lambda)-8 a^7 \gamma (7 \mu+5 \lambda)-2 a b^7 \mu (14 \mu+19 \lambda)+\right.\\ &\left. b^7 \gamma (14 \mu+19 \lambda)\right],\\ k_4=&a^5 b^5 K^{-1} \left[2 a^6 \mu (\mu+\lambda) (14 \mu+19 \lambda)-8 a^5 \gamma (\mu+\lambda) (7 \mu+5 \lambda)-\right.\\ &\left. 
2 a b^5 \mu (\mu+\lambda) (14 \mu+19 \lambda)-b^5 \gamma \mu (14 \mu+19 \lambda)\right],\end{aligned}$$ $$\begin{aligned} k_5=&\frac{b^3 (2 \gamma-4 a \mu-3 a \lambda_{f\!\ell})}{12 a^4 \mu (2 \mu+3 \lambda-3 \lambda_{f\!\ell})+24 a^3 \gamma \mu-3 a b^3 (2 \mu+3 \lambda) (4 \mu+3 \lambda_{f\!\ell})+6 b^3 \gamma (2 \mu+3 \lambda)},\\ k_6=&\frac{-a^3 b^3 [2\gamma+a (2 \mu+3 \lambda-3 \lambda_{f\!\ell})]}{3 \left(4 a^4 \mu (2 \mu+3 \lambda-3 \lambda_{f\!\ell})+8 a^3 \gamma \mu-a b^3 (2 \mu+3 \lambda) (4 \mu+3 \lambda_{f\!\ell})+2 b^3 \gamma (2 \mu+3 \lambda)\right)},\\\end{aligned}$$ where $$\begin{aligned} K=&\mu \left[2 a^{11} \mu (14 \mu+9 \lambda) (14 \mu+19 \lambda)-8 a^{10} \gamma (7 \mu+5 \lambda) (14 \mu+9 \lambda)-\right.\\ &50 a^8 b^3 \mu \left(28 \mu^2+56 \mu \lambda+27 \lambda^2\right)-200 a^7 b^3 \gamma \left(7 \mu^2+11 \mu \lambda+3 \lambda^2\right)+\\ &2016 a^6 b^5 \mu (\mu+\lambda)^2+1008 a^5 b^5 \gamma \mu (\mu+\lambda)-50 a^4 b^7 \mu \left(28 \mu^2+56 \mu \lambda+27 \lambda^2\right)+\\ & 25 a^3 b^7 \gamma \left(28 \mu^2+56 \mu \lambda+27 \lambda^2\right)+2 a b^{10} \mu (14 \mu+9 \lambda) (14 \mu+19 \lambda)+\\ &\left. b^{10} \gamma (34 \mu+15 \lambda) (14 \mu+19 \lambda)\right].\end{aligned}$$ 99 R. Alicandro, G. Dal Maso, G. Lazzaroni and M. Palombaro. Derivation of a linearised elasticity model from singularly perturbed multiwell energy functionals. *Arch. Rat. Mech. Anal.* **230**, 1--45 (2018). L. Ambrosio, N.  Fusco, N. and D.  Pallara. *Functions of Bounded Variation and Free Discontinuity Problems*, Oxford University Press, Oxford (2000). T.  Arbogast, J. Douglas, U. Hornung. Derivation of the double porosity model of single phase flow via homogenization theory. *SIAM J. Math. Anal.* **21**, 823--836 (1990). J. Casado-Dı́az. Two-scale convergence for nonlinear Dirichlet problems in perforated domains. *Proc. Roy. Soc. Edinburgh* **130**-A, 249--276 (2000). D. Cioranescu, A. Damlamian, G. Griso. *The Periodic Unfolding Method*, Springer, Singapore, Series in Contemporary Mathematics, Vol. 3 (2019). A. Chicco-Ruiz, P. Morin and M. Sebastian Pauletti. The shape derivative of the Gauss curvature. *Rev. Un. Mat. Argentina* **59**, 311--337 (2018). G. Dal Maso, M. Negri and D. Percivale. Linearized elasticity as $\Gamma$-limit of finite elasticity. *Set-Valued Anal.* **10**, 165--183 (2002). C. Efthimiou, C. Frye. *Spherical Harmonics in $p$ Dimensions*. World Scientific Publishing, Singapore, 2014. L.C. Evans, R.F. Gariepy. *Measure Theory and Fine Properties of Functions*. Studies in Advanced Mathematics, CRC Press, Boca Raton, 1992. G. Friesecke, R. James and S. Müller. A theorem on geometric rigidity and the derivation of nonlinear plate theory from three-dimensional elasticity. *Com. Pure Appl. Math.* **LV**, 1461--1506 (2002). K. Ghosh, O. Lopez-Pamies. Elastomers filled with liquid inclusions: Theory, numerical implementation, and some basic results. *J. Mech Phys. Solids* **166**, 104930 (2022). K. Ghosh, V. Lefèvre and O. Lopez-Pamies. Homogenization of elastomers filled with liquid inclusions: The small-deformation limit. *Journal of Elasticity*, published online (2023). J. W. Gibbs. *The Collected Works of J. W. Gibbs*, Vol. 1, Section III (1928). M. E. Gurtin, A. I. Murdoch. A continuum theory of elastic material surfaces. *Arch. Ration. Mech. Anal.* **57**, 291--323. M. E. Gurtin, A. I. Murdoch. Addenda to our paper a continuum theory of elastic material surfaces. *Arch. Ration. Mech. Anal.* **59**, 1--2 (1975). E. Hebey. 
*Sobolev Spaces on Riemannian Manifolds*. Lecture Notes in Mathematics, Springer Berlin, Heidelberg, 1996. P. S. Laplace. Traité de Mécanique Céleste, Volume 4, Supplément au dixième livre du Traité de Mécanique Céleste, pp. 1--79 (1806). V. Lefèvre, K. Danas, and O. Lopez-Pamies. A general result for the magnetoelastic response of isotropic suspensions of iron and ferrofluid particles in rubber, with applications to spherical and cylindrical specimens. *J. Mech. Phys. Solids* **107**, 343--364 (2017). C. Maor, M.G. Mora. Reference configurations versus optimal rotations: a derivation of linear elasticity from finite elasticity for all traction forces. *J. Nonlinear Sci.* **31**, 62 (2021). M.G. Mora, F. Riva. Pressure live loads and the variational derivation of linear elasticity. *Proc. Royal Soc. Edinburgh A*, online, 1--36. O.A. Oleinik, A.S. Shamaev, G.A. Yosifian. *Mathematical Problems in Elasticity and Homogenization*, North-Holland Publishing Co., Amsterdam, 1992. P. Podio-Guidugli, G. Vergara Caffarelli. Surface interaction potentials in elasticity. *Arch. Ration. Mech. Anal.* **109**, 343--383 (1990). R.W. Style, R. Boltyanskiy, A. Benjamin, K.E. Jensen, H.P. Foote, J.S. Wettlaufer and E.R. Dufresne. Stiffening solids with liquid inclusions. *Nature Physics* **11**, 82--87 (2015). T. Young. III. An essay on the cohesion of fluids. *Phil. Trans. R. Soc.* **95**, 65--87 (1805). G. Yun, S.Y. Tang, S. Sun, D. Yuan, Q. Zhao, L. Deng, S. Yan, H. Du, M.D. Dickey and W. Li. Liquid metal-filled magnetorheological elastomer with positive piezoconductivity. *Nature Communications* **10**, 1300 (2019).
arxiv_math
{ "id": "2309.03630", "title": "Liquid Filled Elastomers: From Linearization to Elastic Enhancement", "authors": "Juan Casado D\\`iaz, Gilles A. Francfort, Oscar Lopez-Pamies, Maria\n Giovanna Mora", "categories": "math.AP", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We consider the monomer-dimer model, whose realisations are spanning sub-graphs of a given graph such that every vertex has degree zero or one. The measure depends on a parameter, the monomer activity, which rewards the total number of monomers. We consider general correlation functions including monomer-monomer correlations and dimer-dimer covariances. We show that these correlations decay exponentially fast with the distance if the monomer activity is strictly positive. Our result improves a previous upper bound from van den Berg and is of interest due to its relation to truncated spin-spin correlations in classical spin systems. Our proof is based on the cluster expansion technique. author: - "Alexandra Quitmann[^1]" date: title: A note on the monomer-dimer model --- # Introduction This note considers the *monomer-dimer model* [@HeilmannLieb], whose realisations are spanning sub-graphs of a given graph such that every vertex has degree zero or one. Vertices with degree zero are referred to as *monomers* and pairs of vertices connected by an edge are referred to as *dimers*. The measure depends on a parameter, the monomer activity $\rho \geq 0$, which controls the total number of monomers. In case of zero monomer activity no monomers are present and we obtain the classical *dimer model*. By superimposing two independent realisations of the monomer-dimer model we obtain a configuration of the *double monomer-dimer model*. This model can be viewed as a random walk loop soup whose configurations are collections of self-avoiding and mutually self-avoiding paths which might be closed or open, see e.g. Figure [3](#Fig:monomerdimer){reference-type="ref" reference="Fig:monomerdimer"}. If the monomer activity is zero, all paths are closed and the double monomer-dimer model reduces to the double dimer model. The (double) monomer-dimer model shares an intriguing similarity with the Spin $O(N)$ model, namely they both have a probabilistic reformulation as a particular random path model [@LeesTaggi]. In this representation, the external magnetic field of the Spin $O(N)$ model plays the same role as the monomer activity in the monomer-dimer model and the two models only differ in the weight that is assigned to the number of visits of (open and closed) paths at the vertices. The qualitative behaviour of the random path model, however, is expected to not depend on the choice of such weight function. In this note we study the rate of decay of *monomer-monomer correlations* when the monomer activity is non-zero. Through the random path representation, this question is closely related to an open problem in the Spin $O(N)$ model. Here, it is known that spin-spin correlations decay exponentially fast with the distance between the vertices when the external magnetic field is non-zero. The constant of decay in the exponent is known to be $O(h)$ as $h \to 0$ for $N=1,2,3$ [@Frohlich] and $O(h^2)$ for any $N \in \mathbb{N}$ [@LeesTaggi]. It is however conjectured that the constant decays as $O(\sqrt{h})$ when $h \to 0$ for any integer value of $N$. The same behaviour is expected to occur in the monomer-dimer model. Our main result shows that for any strictly positive value of the monomer activity $\rho$, monomer-monomer correlations decay exponentially fast with the graph distance between the vertices. For large enough values of $\rho$, this result is derived using a cluster expansion. 
Applying similar analytic tools as in [@Frohlich] we can then extend this result to small values of $\rho$ and further show that the constant of decay is of order at least $O(\rho)$ as $\rho \to 0$. It should be emphasized that our result only holds for non-zero values of the monomer activity and the behaviour of the dimer model, i.e., the monomer-dimer model at zero monomer activity, is strongly different. In dimension $d=2$ the monomer-monomer correlation admits polynomial decay with the distance between the monomers [@Dubedat; @Fisher], while in any dimension $d \geq 3$ it exhibits long-range order [@Taggi]. Our result also holds for more general correlation functions including the *dimer-dimer covariance* as special case. It is known that this covariance decays exponentially in the distance between the edges with a constant of order $O(\rho^2)$ in the limit as $\rho \to 0$. More precisely, in [@vandenBergSteif] it is shown that the dimer-dimer covariance is upper bounded by the probability of observing a path in the double monomer-dimer model that connects these two edges. The exponential decay of such probability then follows from [@vandenBerg] based on an argument with disagreement percolation. We remark that for non-zero monomer activity, the connection probability behaves differently, namely it stays uniformly positive in any dimension $d \geq 3$ [@QuitmannTaggi]. In this note, we improve the existing bound due to [@vandenBerg; @vandenBergSteif] by showing that the constant decays as $O(\rho)$ in the limit $\rho \to 0$. ![The first two figures show monomer-dimer configurations $\omega, \omega^\prime \in \Omega_K$, where $K \subset \mathbb{Z}^2$. The third figure shows their superposition resulting in a collection of open and closed paths.](Figure_MonomerDimer1.pdf "fig:"){#Fig:monomerdimer} ![The first two figures show monomer-dimer configurations $\omega, \omega^\prime \in \Omega_K$, where $K \subset \mathbb{Z}^2$. The third figure shows their superposition resulting in a collection of open and closed paths.](Figure_MonomerDimer2.pdf "fig:"){#Fig:monomerdimer} ![The first two figures show monomer-dimer configurations $\omega, \omega^\prime \in \Omega_K$, where $K \subset \mathbb{Z}^2$. The third figure shows their superposition resulting in a collection of open and closed paths.](Figure_MonomerDimer1Dimer2.pdf "fig:"){#Fig:monomerdimer} It is further interesting to compare the double monomer-dimer model with the *monomer double-dimer model* [@QuitmannTaggi]. The configurations in both models are superpositions of two independently sampled monomer-dimer configurations. The difference, however, is that in the former model the two monomer-dimer configurations might have different monomer sets, while in the latter model the monomer sets have to be identical. Consequently, the paths in the double monomer-dimer model might be open, while in the monomer double-dimer model all paths are closed. In the discussion above we have seen that the double monomer-dimer model behaves drastically different if the monomer activity changes from small, but strictly positive values to zero. This, however, is not the case in the monomer double-dimer model, i.e., in the system where all the paths are closed [@BetzTaggi; @QuitmannTaggiQuattropaniForien; @QuitmannTaggi1; @QuitmannTaggi]. In particular, exponential decay of the connection probabilities only occurs for large enough values of the monomer activity. # Model and main result Consider a finite undirected graph $G = (V,E)$. 
A dimer configuration (or perfect matching) of $G$ is a subset $d \subset E$ of edges such that every vertex $v \in V$ is an element of precisely one edge. We let $D_G$ be the set of all dimer configurations in $G$. Given a set $A \subset V$, we let $G_A$ be the subgraph of $G$ with vertex set $V \setminus A$ and with edge-set consisting of all the edges in $E$ which do not touch any vertex in $A$. We let $D_G(A)$ be the set of dimer configurations in $G_A$. In this note, we concentrate on the $d$-dimensional cubic lattice. We denote by $\mathbb{G}=(\mathbb{V},\mathbb{E})$ the graph with vertex set $\mathbb{V}=\mathbb{Z}^d$ and with edge set $\mathbb{E}=\{\{x,y\} \, : x,y \in \mathbb{Z}^d, \, d(x,y)=1\}$, where $d(x,y)$ denotes the graph distance between $x$ and $y$. We denote by $G_K=(K,E_K)$ the graph with vertex set $K \subset \mathbb{Z}^d$ and with edge set $E_K := \{\{x,y\} \in \mathbb{E}: \, x,y \in K \} \subset \mathbb{E}$. Given $K \subset \mathbb{Z}^d$, the configuration space of the monomer-dimer model in $G_K$ is denoted by $\Omega_K$ and it corresponds to the set of tuples $\omega = (M, d)$ such that $M \subset K$ and $d \in D_{G_K}(M)$. We refer to the first and second elements of the tuple $\omega$ as a set of monomers and a set of dimers, respectively. We let ${\mathcal M }:\Omega_K \to K$ and ${\mathcal D }: \Omega_K \to E_K$ be the random variables defined by $\mathcal{M}(\omega) := M$ and ${\mathcal D }(\omega):=d$ for each $\omega = (M, d) \in \Omega_K$. We define a probability measure on $\Omega_K$, $$\label{eq:defmonomerdimermeasure} \forall \omega \in \Omega_K \qquad \mathbb{P}_{K,\rho}(\omega) := \frac{\rho^{|\mathcal{M}(\omega)|}}{\mathbb{Z}_{K,\rho}},$$ where $\rho \geq 0$ is the parameter of the model (monomer activity) and $\mathbb{Z}_{K,\rho}$ is the normalizing constant (partition function). We are interested in correlations between sets of monomers. For any $\rho \geq 0$ and any $A \subset K$ we set $$C_{K,\rho}(A):= \mathbb{Z}_{K \setminus A, \rho}.$$ In other words, $C_{K,\rho}(A)$ corresponds to the weight of all monomer-dimer configurations in $G_K$ with fixed monomers at all vertices in $A$. For any $A,B \subset K$ with $A \cap B=\emptyset$, we then introduce the correlation function $$U_{K,\rho}(A,B):= \frac{C_{K,\rho}(A \cup B)}{C_{K,\rho}(\emptyset)}- \frac{C_{K,\rho}(A)}{C_{K,\rho}(\emptyset)} \, \frac{C_{K,\rho}(B)}{C_{K,\rho}(\emptyset)}.$$ We further set $$U_\rho(A,B):= \lim_{K \uparrow \mathbb{Z}^d} U_{K,\rho}(A,B),$$ where the limit is in the sense of van Hove. Its existence follows from [@Gruber Theorem 10] since our monomer-dimer model is a special case of the polymer systems studied in [@Gruber]. If $A,B \in \mathbb{E}$, then the correlation function reduces to the dimer-dimer covariance, namely $$U_{K,\rho}(A,B) = \mathbb{P}_{K,\rho}\big(A,B\in {\mathcal D }\big) - \mathbb{P}_{K,\rho}\big(A \in {\mathcal D }\big) \, \mathbb{P}_{K,\rho}\big(B \in {\mathcal D }\big).$$ #### Monomer correlations, paths and $O(N)$ spin systems. We now briefly explain the relation between monomer-monomer and spin-spin correlations. Consider the Spin $O(N)$ model with $N \in \mathbb{N}$ at inverse temperature $\beta \geq 0$ and external magnetic field $h \geq 0$. In [@LeesTaggi Proposition 2.3] it is shown that the spin-spin correlation at $x,y \in K \subset \mathbb{V}$ is identical to the two-point function $\mathbb{G}_{G_K,N,\beta,h}(x,y)$ that is defined in a particular model of random paths.
The measure of this model depends on a function $\mathcal{U}:\mathbb{N}_0 \to \mathbb{R}_{\geq 0}$ which controls the number of visits of paths at the vertices. If we consider a different choice of the function $\mathcal{U}$, namely if we set $\tilde{\mathcal{U}}(r):=1$ for any $r \in \mathbb{N}_0$, then we have that for $N=2$, $\rho=h$ and any $\beta \geq 0$, $$\frac{C_{K,\rho}(\{x\} \cup \{y\})}{C_{K,\rho}(\emptyset)}= \tilde{\mathbb{G}}_{G_K,N,\beta,h}(\{x,y\}),$$ where $\tilde{\mathbb{G}}_{G_K,N,\beta,h}(x,y)$ is defined as the function $\mathbb{G}_{G_K,N,\beta,h}(x,y)$, but with the choice of $\tilde{\mathcal{U}}$ instead of $\mathcal{U}$. We now present our main theorem. It states that correlation functions between sets of monomers decay exponentially fast with the distance between their vertices. For any $A, B \subset \mathbb{V}$ we set $$d(A,B):= \min_{\substack{(u,v): \, u \in A, \, v \in B}} d(u,v).$$ **Theorem 1**. *For any $d \geq 1$ and $\rho>0$, there exist $c,c^\prime \in (0,\infty)$ such that for any non-empty $A, B \subset \mathbb{V}$ with $A \cap B=\emptyset$, $$\label{eq:maintheorem} U_\rho(A,B) \leq c^\prime \, e^{-c \, d(A,B) },$$ where $c=c(\rho) = \tilde{c} \, \rho$ if $\rho$ is sufficiently small and where $\tilde{c} \geq \frac{2}{\ln(2(a+1)) \, (a+1)}$ with $a=e \, \sqrt{e \, |A| \, (4d-1)}$.* **Remark 2**. Concerning the decay of dimer-dimer covariances, our theorem above improves the result of [@vandenBerg; @vandenBergSteif] which states that $c=O(\rho^2)$ in the limit as $\rho \to 0$. We remark, however, that it is still an open problem to show that $c =O(\sqrt{\rho})$. # Proof of Theorem [Theorem 1](#theorem:maintheorem){reference-type="ref" reference="theorem:maintheorem"} {#proof-of-theorem-theoremmaintheorem} In this section we present the proof of Theorem [Theorem 1](#theorem:maintheorem){reference-type="ref" reference="theorem:maintheorem"}. We will first prove exponential decay of monomer correlations in the regime of large $\rho$ using a cluster expansion. Exponential decay for small $\rho$ will then follow by applying an analytic theorem, see Theorem [Theorem 4](#theorem:Guerra){reference-type="ref" reference="theorem:Guerra"} below. **Proposition 3**. *For any $d \geq 1$, any $K \subset \mathbb{Z}^d$, any non-empty $A,B \subset \mathbb{V}$ with $A \cap B =\emptyset$ and any $\rho \geq \sqrt{e \, |A| \, (4d-1)} \, e$, it holds that $$\label{eq: exponentialdecaycluster} 0 \leq U_{K,\rho}(A,B) \leq e^{-2 \, d(A,B) \,+1}.$$* *Proof.* Fix $d \geq 1$, $K \subset \mathbb{Z}^d$ and two non-empty sets $A,B \subset \mathbb{V}_L$ with $A \cap B = \emptyset$. Let $\rho \geq \sqrt{e \, |A| \, (4d-1)} \, e$. To begin, we rewrite the partition function $\mathbb{Z}_{K, \rho}$ using a cluster expansion. First, we note that $$\begin{aligned} \mathbb{Z}_{K, \rho} % = \sum\limits_{(M, d ) \in {\Omega}} \rho^{|M|} & = \sum_{n=0}^{|K|/2} \sum_{\substack{(M, d ) \in {\Omega_K} : \\ |d|=n}} \rho^{|K|-2|d|} \\ & = \rho^{|K|} \bigg(1 + \sum_{n \geq 1} \frac{\rho^{-2n}}{n!} \sum_{(\gamma_1,\dots,\gamma_n) \in E_K^n} \prod_{1 \leq i < j \leq n } \big(1+ \zeta(\gamma_i,\gamma_j)\big)\bigg), \end{aligned}$$ where for $\gamma,\gamma^\prime \in E_K$, $\zeta(\gamma,\gamma^\prime):= \mathbbm{1}_{\{\gamma \cap \gamma^\prime = \emptyset\}} -1$. We denote by $\mathcal{G}_n$ the set of all (unoriented) connected graphs with vertex set $\mathcal{V}_n=\{1,\dots,n\}$. 
We introduce the Ursell functions $\varphi$ on finite ordered sequences $(\gamma_1,\dots,\gamma_m) \in E_K^m$, which are defined by $$\varphi(\gamma_1,\dots,\gamma_m) := \begin{cases} 1 & \text{ if } m=1, \\ \frac{1}{m!} \sum\limits_{G \in {\mathcal G }_m} \prod\limits_{\{i,j\} \in G} \zeta(\gamma_i,\gamma_j) & \text{ if } m \geq 2, \end{cases}$$ where the product is over all edges in $G$. For any $\gamma^* \in E_K$, using that $\rho \geq \sqrt{e \, (4d-1)}$, it holds that $$\sum_{\gamma \in E_K} \rho^{-2} e \, |\zeta(\gamma,\gamma^*)| \leq (4d-1) \, \rho^{-2} e \leq 1.$$ By cluster expansion [@FriedliVelenik Theorem 5.4 ] it thus holds that $$\label{eq:clusterexpansionZ} \mathbb{Z}_{K, \rho} = \rho^{|K|} \, \exp\bigg( \sum\limits_{m \geq 1} \sum\limits_{(\gamma_1,\dots,\gamma_m) \in E_K^m} \varphi(\gamma_1,\dots,\gamma_m) \, \rho^{-2m} \bigg),$$ where combined sum and integrals converge absolutely. Furthermore, for any $\gamma_1 \in E_K$, we have that $$\label{eq:upperbound} 1 + \sum\limits_{n \geq 2} n \sum\limits_{(\gamma_2,\dots,\gamma_n) \in E_K^{n-1}} |\varphi(\gamma_1,\gamma_2, \dots, \gamma_n)| \, \rho^{-2(n-1)} \leq e.$$ Using a similar cluster expansion as above, we further obtain that for any $A \subset K$, $$\label{eq:clusterexpansionG} \mathbb{Z}_{K\setminus A, \rho} = \rho^{|K|-|A|} \, \exp\bigg( \sum\limits_{m \geq 1} \sum\limits_{(\gamma_1,\dots,\gamma_m) \in E_{K \setminus A}^m} \varphi(\gamma_1,\dots,\gamma_m) \,\rho^{-2m} \bigg).$$ For $m \in \mathbb{N}$ and $A \subset K$, let $C_A^m$ denote the set of ordered sequences $\boldsymbol{\gamma}=(\gamma_1,\dots,\gamma_m) \in E_K^m$ such that there exists $i \in [m]$ and $x \in A$ such that $x$ is an endpoint of $\gamma_i$. Fix now two disjoint subsets $A, B \subset K$. From [\[eq:clusterexpansionZ\]](#eq:clusterexpansionZ){reference-type="eqref" reference="eq:clusterexpansionZ"} and [\[eq:clusterexpansionG\]](#eq:clusterexpansionG){reference-type="eqref" reference="eq:clusterexpansionG"} we deduce that, $$\label{eq:twopoint} \begin{aligned} \frac{C_{K,\rho}(A \cup B)}{C_{K,\rho}(\emptyset)} & = \frac{1}{\rho^{|A|}} \, \exp\bigg(- \sum\limits_{m \geq 1} \sum\limits_{(\gamma_1,\dots,\gamma_m) \in C_{A}^m} \varphi(\gamma_1,\dots,\gamma_m) \, \rho^{-2m} \bigg) \\ & \qquad \times \frac{1}{\rho^{|B|}} \, \exp\bigg(- \sum\limits_{m \geq 1} \sum\limits_{(\gamma_1,\dots,\gamma_m) \in C_{B}^m} \varphi(\gamma_1,\dots,\gamma_m) \, \rho^{-2m} \bigg) \\ & \qquad \qquad \times \exp\bigg(\sum\limits_{m \geq 1} \sum\limits_{(\gamma_1,\dots,\gamma_m) \in C_{A}^m \cap C_{B}^m} \varphi(\gamma_1,\dots,\gamma_m) \, \rho^{-2m}\bigg) \\ & = \frac{C_{K,\rho}(A)}{C_{K,\rho}(\emptyset)} \, \frac{C_{K,\rho}(B)}{C_{K,\rho}(\emptyset)} \, \exp\bigg(\sum\limits_{m \geq 1} \, \rho^{-2m} \, \sum\limits_{(\gamma_1,\dots,\gamma_m) \in C_{A}^m \cap C_{B}^m} \varphi(\gamma_1,\dots,\gamma_m) \bigg). \end{aligned}$$ Now observe that for any $(\gamma_1,\dots,\gamma_m) \in C_{A}^m \cap C_{B}^m$, $\varphi(\gamma_1,\dots,\gamma_m) \neq 0$ only if the graph $G$, which is obtained from $(\gamma_1,\dots,\gamma_m)$ by drawing an edge between $i$ and $j$ whenever $\zeta(\gamma_i,\gamma_j) \neq 0$, is connected. Stated differently, $\varphi(\gamma_1,\dots,\gamma_m) \neq 0$ only if there exists at least one path connecting a vertex of the set $A$ to a vertex of the set $B$. In particular, it is necessary that $m \geq d(A,B)$. 
Thus, $$\begin{aligned} \label{eq:application} & \sum\limits_{m \geq 1} \rho^{-2m} \, \sum\limits_{(\gamma_1,\dots,\gamma_m) \in C_{A}^m \cap C_{B}^m} |\varphi(\gamma_1,\dots,\gamma_m)| \\ %& \leq \sum\limits_{m \geq d_L(A,B)} e^{-2m} \sum\limits_{(\gamma_1,\dots,\gamma_m) \in C_{e}^m} |\varphi(\gamma_1,\dots,\gamma_m)| \prod_{i=1}^m \Big(\frac{\rho}{e}\Big)^{-2} \\ & \leq e^{-2 \, d(A,B)} \, \sum\limits_{m \geq 1} m \, \sum\limits_{\substack{(\gamma_1,\dots,\gamma_m) \in E_K^m: \\ \gamma_1 \cap A \neq \emptyset}} |\varphi(\gamma_1,\dots,\gamma_m)| \, \Big(\frac{\rho}{e}\Big)^{-2m} \\ & = e^{-2 \, d(A,B)} \, \Big(\frac{\rho}{e}\Big)^{-2} \, \sum\limits_{\gamma_1 \in C_{A}^1} \bigg(1+\sum\limits_{m \geq 2} m \, \sum\limits_{(\gamma_2,\dots,\gamma_m) \in E_K^{m-1}} |\varphi(\gamma_1,\dots,\gamma_m)| \, \Big(\frac{\rho}{e}\Big)^{-2(m-1)} \bigg)\\ & \leq e^{-2 \, d(A,B)} \, e^3 \, \rho^{-2} \, |A| \, 2d \leq e^{-2 \, d(A,B)}, \end{aligned}$$ where in the last two steps we used [\[eq:upperbound\]](#eq:upperbound){reference-type="eqref" reference="eq:upperbound"} and that $\rho \geq \sqrt{e \, |A| \,(4d-1)} \, e$. From [\[eq:twopoint\]](#eq:twopoint){reference-type="eqref" reference="eq:twopoint"} and [\[eq:application\]](#eq:application){reference-type="eqref" reference="eq:application"}, we thus obtain that $$U_{K,\rho}(A,B) \leq \frac{C_{K,\rho}(A)}{C_{K,\rho}(\emptyset)} \, \frac{C_{K,\rho}(B)}{C_{K,\rho}(\emptyset)} \, \Big(e^{e^{-2 \, d(A,B)}} -1 \Big) \leq e^{-2 \, d(A,B) \, +1}.$$ ◻ We are now ready to prove Theorem [Theorem 1](#theorem:maintheorem){reference-type="ref" reference="theorem:maintheorem"}. The proof is based on Proposition [Proposition 3](#prop:exponentialdecaycluster){reference-type="ref" reference="prop:exponentialdecaycluster"} above and on the following analytic theorem. **Theorem 4** ([@Guerra Theorem A.3 and Theorem A.6]). *If $f:\mathbb{C} \to \mathbb{R}\cup \{\infty\}$ is analytic and $f(x) \leq 1$ in the region $\{z \, | \, Re(z)>0\}$ and if $f(x) \leq e^{-b}$ for $a \leq x \leq a+2$, $a,b \geq 0$, then for $0 < x < a$, $$f(x) \leq e^{-c \, x} ,$$ where $c = \frac{b}{\ln(2(a+1)) \, (a+1)}$.* *Proof of Theorem [Theorem 1](#theorem:maintheorem){reference-type="ref" reference="theorem:maintheorem"}.* We fix two disjoint subsets $A, B \subset \mathbb{V}$. To begin, we note that by [@Gruber Theorem 10] for all $\rho>0$, the function $U_\rho(A,B)$ is an analytic function of $\rho$ on $\mathbb{R}^+:= \{x \in \mathbb{R}: \, x >0\}$. Using Proposition [Proposition 3](#prop:exponentialdecaycluster){reference-type="ref" reference="prop:exponentialdecaycluster"} and applying Theorem [Theorem 4](#theorem:Guerra){reference-type="ref" reference="theorem:Guerra"} to the function $f_{A,B}(\rho):=\frac{1}{e} \, U_\rho(A,B)$ concludes the proof of the theorem. ◻ # Acknowledgements {#acknowledgements .unnumbered} The author thanks the German Research Foundation through the international research training group 2544 and through the priority program SPP2265 (project number 444084038) for financial support. 99 V. Betz; L. Taggi. Scaling limit of ballistic self-avoiding walk interacting with spatial random permutations. *Electoron. J. Probab.* **24** (2019). J. Dubédat. Dimers and families of Cauchy-Riemann operators I. *J. Amer. Soc.* **28**, no. 4, 1063-1167 (2015). M. E. Fisher; J. Sephenson. Statistical Mechanics of Dimers on a Plane Lattice. II. Dimer Correlations and Monomers. *Phys. Rev.* **132**, 1411-1431 (1963). R. Kenyon. Conformal invariance of loops in the double dimer model. 
*Commun. Math. Phys.* **326**, no. 2, 477-497 (2014). N. Forien; M. Quattropani; A. Quitmann; L. Taggi. Coexistence, enhancements and short loops in random walk loop soups. ArXiv Preprint 2306.12102 (2023). S. Friedli; Y. Velenik. Statistical mechanics of lattice systems: a concrete mathematical introduction. *Cambridge University Press, Cambridge* (2018). J. Fröhlich; P.-F. Rodriguez. Some applications of the Lee-Yang theorem. *J. Math. Phys.* **53** (2012). C. Gruber; H. Kunz. General properties of polymer systems. *Commun. Math. Phys.* **22**, 133-161 (1971). F. Guerra; L. Rosen; B. Simon. Correlation inequalities and the mass gap in $P(\varphi)_2$. III. Mass gap for a class of strongly coupled theories with nonzero external field. *Commun. Math. Phys.* **41**, 19-32 (1975). O. J. Heilmann; E. H. Lieb. Theory of monomer-dimer systems. *Commun. Math. Phys.* **25**, 190-232 (1972). B. Lees; L. Taggi. Exponential decay of transverse correlations for $O(N)$ spin systems and related models. *Probab. Theory Relat. Fields* **180**, no. 3-4, 1099-1133 (2021). A. Quitmann; L. Taggi. Macroscopic loops in the Bose gas, Spin $O(N)$ and related models. *Commun. Math. Phys.* **400**, 2081-2136 (2023). A. Quitmann; L. Taggi. Macroscopic loops in the $3d$ double-dimer model. *Electron. Commun. Probab.* **28** (2023). L. Taggi. Uniformly positive correlations in the dimer model and macroscopic interacting self-avoiding walk in $\mathbb{Z}^d$, $d \geq 3$. *Comm. Pure Appl. Math.* **75**, no. 6, 1183-1236 (2022). J. van den Berg. On the absence of phase transition in the monomer-dimer model. *Perplexing Problems in Probability*, 185-195, Progr. Probab., 44, *Birkhäuser Boston, Boston* (1999). J. van den Berg; J. E. Steif. Percolation and the hard-core lattice gas model. *Stochastic Process. Appl.* **49**, no. 2, 179-197 (1994). [^1]: Weierstrass Institute for Applied Analysis and Stochastics, Berlin, Germany. Email: alexandra.quitmann\@wias-berlin.de
arxiv_math
{ "id": "2309.13632", "title": "A note on the monomer-dimer model", "authors": "Alexandra Quitmann", "categories": "math.PR", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- author: - "Xuehua Li[^1]" - "Cairong Chen[^2]" bibliography: - cjcmsample.bib title: "**A generalization of the Newton-based matrix splitting iteration method for generalized absolute value equations**" --- > **Abstract:** A generalization of the Newton-based matrix splitting iteration method (GNMS) for solving the generalized absolute value equations (GAVEs) is proposed. Under mild conditions, the GNMS method converges to the unique solution of the GAVEs. Moreover, we can obtain a few weaker convergence conditions for some existing methods. Numerical results verify the effectiveness of the proposed method.\ > **Keywords:** Generalized absolute value equations; Matrix splitting; Generalized Newton-based method; Convergence. # Introduction {#sec:intro} Consider the system of generalized absolute value equations (GAVEs) $$\label{eq:gave} Ax - B|x| - c = 0,$$ where $A, B\in \mathbb{R}^{n\times n}$ and $c\in \mathbb{R}^n$ are given, and $x\in \mathbb{R}^n$ is unknown with $|x| = (|x_1|,~|x_2|,~\cdots,~|x_n|)^\top$. If the matrix $B$ is the identity matrix, GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"} turns into the system of absolute value equations (AVEs) $$\label{eq:ave} Ax - |x| - c = 0.$$ To our knowledge, GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"} was formally introduced by Rohn in $2004$ [@rohn2004]. Over the past two decades, GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"} and AVEs [\[eq:ave\]](#eq:ave){reference-type="eqref" reference="eq:ave"} have received considerable attention in the optimization community. The main reason is that GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"} and AVEs [\[eq:ave\]](#eq:ave){reference-type="eqref" reference="eq:ave"} are equivalent to the linear complementarity problem [@mang2007; @mame2006; @huhu2010; @prok2009], which has wide applications in engineering, science and economics [@copa1992]. As shown in [@mang2007], solving GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"} is NP-hard. In addition, if GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"} is solvable, checking whether it has a unique solution or multiple solutions is NP-complete [@prok2009]. Nevertheless, some researchers focused on constructing conditions under which the GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"} has a unique solution for any $c\in \mathbb{R}^n$ [@rohf2014; @mezz2020; @wuli2020; @wush2021; @rohn2009; @love2013]. Furthermore, the bounds for the solutions of GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"} were studied in [@hlad2018]. When GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"} is solvable, considerable research effort has been, and is still, put into finding efficient algorithms for computing an approximate solution of it. For instance, by separating the differential and non-differential parts of GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"}, Wang, Cao and Chen [@wacc2019] proposed the modified Newton-type (MN) iteration method, which is described in Algorithm [\[alg:mn\]](#alg:mn){reference-type="ref" reference="alg:mn"}. Particularly, if $\Omega$ is the zero matrix, the MN iteration [\[eq:mn\]](#eq:mn){reference-type="eqref" reference="eq:mn"} reduces to the Picard iteration method contained in [@rohf2014].
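To make the preceding description concrete, the following minimal Python sketch implements the MN fixed-point iteration $x^{k+1}=(A + \Omega)^{-1}(\Omega x^k+B|x^k| + c)$ of Algorithm [\[alg:mn\]](#alg:mn){reference-type="ref" reference="alg:mn"} on a small synthetic GAVEs. The particular choice $\Omega = I$, the random test matrices and the stopping rule are our own illustrative assumptions and are not taken from the cited works.

```python
import numpy as np

def mn_iteration(A, B, c, Omega, x0, tol=1e-10, max_iter=500):
    """MN iteration: x^{k+1} = (A + Omega)^{-1} (Omega x^k + B|x^k| + c)."""
    x = x0.copy()
    M = A + Omega
    for k in range(1, max_iter + 1):
        x_new = np.linalg.solve(M, Omega @ x + B @ np.abs(x) + c)
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x_new)):
            return x_new, k
        x = x_new
    return x, max_iter

# Small synthetic test problem with a known solution: A is built to dominate B,
# so that Ax - B|x| = c has a unique solution and the iteration converges.
rng = np.random.default_rng(0)
n = 100
B = rng.uniform(-1.0, 1.0, (n, n))
A = B + B.T + 4.0 * n * np.eye(n)
x_true = rng.uniform(-1.0, 1.0, n)
c = A @ x_true - B @ np.abs(x_true)

x, iters = mn_iteration(A, B, c, Omega=np.eye(n), x0=np.zeros(n))
print(iters, np.linalg.norm(x - x_true))
```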
Whereafter, by using the matrix splitting technique, Zhou, Wu and Li [@zhwl2021] established the Newton-based matrix splitting (NMS) iteration method (see Algorithm [\[alg:nms\]](#alg:nms){reference-type="ref" reference="alg:nms"}), which covers the MN method (by setting $\bar{M} = A$ and $\bar{N} = 0$), the shift splitting MN (SSMN) iteration method (see Algorithm [\[alg:ssmn\]](#alg:ssmn){reference-type="ref" reference="alg:ssmn"}) [@liyi2021] (by setting $\bar{M}=\frac{1}{2}(A + \tilde{\Omega}),\bar{N}=\frac{1}{2}( \tilde{\Omega}-A)$ and $\Omega=0$) and the relaxed MN iteration method (see Algorithm [\[alg:rmn\]](#alg:rmn){reference-type="ref" reference="alg:rmn"}) [@shzh2023] (by setting $\bar{M} = \theta A$ and $\bar{N}=(\theta - 1) A$). More recently, Zhao and Shao proposed the relaxed NMS (RNMS) iteration method [@zhsh2023], which is shown in Algorithm [\[alg:rnms\]](#alg:rnms){reference-type="ref" reference="alg:rnms"}. On the one hand, as mentioned in [@zhsh2023], if $\theta = 1$, $\hat{M} = \bar{M}$, $\hat{N} = \bar{N}$ and $\hat{\Omega} = \Omega$, the RNMS method is simplified to the NMS method. On the other hand, if $\bar{M} = \hat{\Omega} + \theta \hat{M}$, $\bar{N} = \hat{\Omega} + (\theta -1) \hat{M} + \hat{N}$ and $\Omega =0$, the NMS method is reduced to the RNMS method. In this sense, we can conclude that the NMS method is equivalent to the RNMS method. For more numerical algorithms, one can refer to [@lild2022; @soso2023; @jizh2013; @tazh2019; @cyhm2023; @acha2018; @alct2023; @lilw2018] and the references therein. **Algorithm 1**. *[\[alg:mn\]]{#alg:mn label="alg:mn"} Assume that $\Omega\in \mathbb{R}^{n\times n}$ is a given matrix such that $A + \Omega$ is nonsingular. Given an initial vector $x^0\in \mathbb{R}^{n}$, for $k=0,1,2,\cdots$ until the iteration sequence $\{x^k\}^\infty_{k=0}$ is convergent, compute $$\label{eq:mn} x^{k+1}=(A + \Omega)^{-1}(\Omega x^k+B|x^k| + c).$$* **Algorithm 2**. *[\[alg:nms\]]{#alg:nms label="alg:nms"} Assume that $x^0\in \mathbb{R}^{n}$ is an arbitrary initial vector. Let $A = \bar{M}-\bar{N}$ and $\Omega\in \mathbb{R}^{n\times n}$ be a given matrix such that $\bar{M} + \Omega$ is nonsingular. For $k=0,1,2,\cdots$ until the iteration sequence $\{x^k\}^\infty_{k=0}$ is convergent, compute $$\label{eq:nms} x^{k+1}=(\bar{M} + \Omega)^{-1}\left[(\bar{N}+\Omega) x^k+B|x^k| + c\right].$$* **Algorithm 3**. *[\[alg:ssmn\]]{#alg:ssmn label="alg:ssmn"} Assume that $x^0\in \mathbb{R}^{n}$ is an arbitrary initial vector. Let $\tilde{\Omega}\in \mathbb{R}^{n\times n}$ be a given matrix such that $A + \tilde{\Omega}$ is nonsingular. For $k=0,1,2,\cdots$ until the iteration sequence $\{x^k\}^\infty_{k=0}$ is convergent, compute $$\label{eq:ssmn} x^{k+1}=(A + \tilde{\Omega})^{-1}\left[(\tilde{\Omega}-A) x^k+2B|x^k| + 2c\right].$$* **Algorithm 4**. *[\[alg:rmn\]]{#alg:rmn label="alg:rmn"} Assume that $x^0\in \mathbb{R}^{n}$ is an arbitrary initial vector. Choose $\Omega\in \mathbb{R}^{n\times n}$ and $\theta\ge 0$ such that $\theta A + \Omega$ is nonsingular. For $k=0,1,2,\cdots$ until the iteration sequence $\{x^k\}^\infty_{k=0}$ is convergent, compute $$\label{eq:rmn} x^{k+1}=(\theta A + \Omega)^{-1}\left[\Omega x^k+(\theta -1) A x^k + B|x^k| + c\right].$$* **Algorithm 5**. *[\[alg:rnms\]]{#alg:rnms label="alg:rnms"} Assume that $x^0\in \mathbb{R}^{n}$ is an arbitrary initial vector. Let $A = \hat{M}-\hat{N}$ and $\hat{\Omega}\in \mathbb{R}^{n\times n}$ be a given matrix such that $\theta \hat{M} + \hat{\Omega}$ $(\theta \ge 0)$ is nonsingular. 
For $k=0,1,2,\cdots$ until the iteration sequence $\{x^k\}^\infty_{k=0}$ is convergent, compute $$\label{eq:rnms} x^{k+1}=(\theta \hat{M} + \hat{\Omega})^{-1}\left[(\hat{\Omega} + (\theta -1) \hat{M} + \hat{N}) x^k+B|x^k| + c\right].$$*

This paper is devoted to developing a generalization of the NMS (GNMS) iteration method. To this end, the matrix splitting technique, the relaxation technique and the variable transformation $Qy = |x|$ (where $Q$ is a nonsingular matrix) are exploited. The convergence of the proposed method is studied and numerical examples are given to demonstrate the efficiency of the GNMS method.

The paper is organized as follows. In Section [2](#sec:prel){reference-type="ref" reference="sec:prel"}, we present notations and some useful lemmas. In Section [3](#sec:main){reference-type="ref" reference="sec:main"}, we propose the GNMS iterative method for solving the GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"} and a number of its special cases. In addition, the convergence analysis of the proposed method is discussed in detail. In Section [4](#sec:Numericalexample){reference-type="ref" reference="sec:Numericalexample"}, we give numerical results to show the effectiveness of our method. Finally, we give a conclusion for this paper in Section [5](#sec:Conclusions){reference-type="ref" reference="sec:Conclusions"}.

# Preliminaries {#sec:prel}

We recall some basic definitions and results that will be used later in this paper.

**Notation.** In this paper, let $\mathbb{R}^{n\times n}$ be the set of all $n \times n$ real matrices and $\mathbb{R}^{n}= \mathbb{R}^{n\times 1}$. $|U|\in\mathbb{R}^{m\times n}$ denotes the componentwise absolute value of the matrix $U$. $I$ denotes the identity matrix with suitable dimensions. $\Vert U\Vert$ denotes the $2$-norm of $U\in \mathbb{R}^{m\times n}$, which is defined by the formula $\Vert U\Vert= \max\{\Vert Ux\Vert: x\in\mathbb{R}^n,~\Vert x\Vert=1\}$, where $\Vert x\Vert$ is the $2$-norm of the vector $x$. For any matrices $U=(u_{ij})$ and $V=(v_{ij})\in\mathbb{R}^{n\times n}$, $U\leq V$ means that $u_{ij}\leq v_{ij}$ for any $i,~j=1,2,\cdots,n$. $\rho(U)$ denotes the spectral radius of $U$. A matrix $U$ is positive semi-definite if $\langle x,Ux\rangle\geq0$ for any nonzero vector $x\in\mathbb{R}^n$. For any matrix $U\in\mathbb{R}^{n\times n}$, $U = U_1-U_2$ is called a splitting of $U$ if $U_1$ is nonsingular.

**Lemma 1**. *[@young1971 Lemma 2.1] If $s$ and $q$ are real, then both roots of the quadratic equation $$x^2 - sx + q = 0$$ are less than one in modulus if and only if $$|q| < 1\quad \text{and} \quad |s| < 1 + q.$$*

**Lemma 2**. *[@harj1985] For $x,~y \in \mathbb{R}^n$, the following results hold:*

- *$\Vert\vert x\vert-\vert y\vert\Vert \leq \Vert x - y \Vert$;*

- *if $0 \leq x \leq y$, then $\Vert x\Vert\leq \Vert y\Vert$;*

- *if $x \leq y$ and $U \geq 0$, then $Ux \leq Uy$.*

**Lemma 3**. *[@ys2003 Theorem 1.5] For $U\in \mathbb{R}^{n\times n}$, the series $\sum\limits_{k=0}^{\infty}U^k$ converges if and only if $\rho(U)<1$, and we have $\sum\limits_{k=0}^{\infty}U^k=(I-U)^{-1}$ whenever it converges.*

**Lemma 4**. *[@serre2002 Corollary 4.4.1] For $U\in \mathbb{R}^{n\times n}$, $\lim\limits_{k \to +\infty}U^k=0$ if and only if $\rho(U)<1.$*

**Lemma 5**. *[@harj1985 Exercise 8.1.16] For any matrices $U,~V \in\mathbb{R}^{n\times n}$, if $0 \leq U \leq V$, then $\Vert U\Vert \leq \Vert V\Vert$.*

**Lemma 6**.
*[@serre2002 Proposition 4.1.6] For any $U\in\mathbb{R}^{n\times n}$ and the induced norm $\Vert \cdot\Vert$ on $\mathbb{R}^n$, we have $\rho(U)\leq\Vert U\Vert.$*

# The GNMS method and its convergence analysis {#sec:main}

Let $|x|=Qy$[^3] with $Q\in \mathbb{R}^{n\times n}$ being invertible. Then the GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"} can be rewritten as $$\label{eq:two} \begin{cases} Qy - |x| = 0 ,\\ Ax - BQy = c. \end{cases}$$ By splitting the matrices $A$ and $Q$ as $$A= M-N \quad \text{and}\quad Q = Q_1-Q_2$$ with $M$ and $Q_1$ being nonsingular, the two-by-two nonlinear system [\[eq:two\]](#eq:two){reference-type="eqref" reference="eq:two"} is equivalent to $$\label{eq:etwo} \begin{cases} Q_1 y= Q_1 y -\tau Q_1y + \tau Q_2y + \tau |x|,\\ Mx = Nx + BQ_1 y- BQ_2 y + c, \end{cases}$$ in which $\tau$ is a positive constant. According to [\[eq:etwo\]](#eq:etwo){reference-type="eqref" reference="eq:etwo"}, we can develop the following iteration method for solving the GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"}.

**Algorithm 6**. *[\[alg:gnms\]]{#alg:gnms label="alg:gnms"} Let $Q\in \mathbb{R}^{n\times n}$ be a nonsingular matrix and $A=M-N$, $Q = Q_1-Q_2$ with $M$ and $Q_1$ being invertible. Given initial vectors $x^0\in \mathbb{R}^{n}$ and $y^0\in \mathbb{R}^{n}$, for $k=0,1,2,\cdots$ until the iteration sequence $\{(x^k,y^k)\}^\infty_{k=0}$ is convergent, compute $$\label{eq:gnms} \begin{cases} y^{k+1}=(1-\tau)y^k+\tau{Q_1}^{-1}\left(Q_2y^k+ |x^k|\right),\\ x^{k+1}=M^{-1}(Nx^k+BQ_1y^{k+1}-BQ_2y^k + c), \end{cases}$$ where the relaxation parameter $\tau>0$.*

Unlike the methods proposed in [@wacc2019; @zhwl2021; @shzh2023; @liyi2021; @zhsh2023], the iteration scheme [\[eq:gnms\]](#eq:gnms){reference-type="eqref" reference="eq:gnms"} updates $y$ first and then $x$. On the one hand, the iteration scheme [\[eq:gnms\]](#eq:gnms){reference-type="eqref" reference="eq:gnms"} reduces to the NMS iteration [\[eq:nms\]](#eq:nms){reference-type="eqref" reference="eq:nms"} whenever $Q=Q_1 = I, Q_2=0, M = \bar{M} + \Omega, N = \bar{N} + \Omega$ and $\tau = 1$. On the other hand, the iteration scheme [\[eq:gnms\]](#eq:gnms){reference-type="eqref" reference="eq:gnms"} in general cannot be rewritten in the form of the NMS iteration [\[eq:nms\]](#eq:nms){reference-type="eqref" reference="eq:nms"}. Indeed, the iteration scheme [\[eq:gnms\]](#eq:gnms){reference-type="eqref" reference="eq:gnms"} can be reformulated as $$\label{eq:rgnms} \begin{cases} y^{k+1}=(1-\tau)y^k+\tau{Q_1}^{-1}Q_2y^k+\tau{Q_1}^{-1}|x^k|,\\ x^{k+1}=M^{-1}(Nx^k+ (1-\tau)BQy^{k} + \tau B|x^k| + c), \end{cases}$$ which cannot be reduced to [\[eq:nms\]](#eq:nms){reference-type="eqref" reference="eq:nms"} provided that $\tau \neq 1$. This is why the proposed method is called the GNMS iteration method. According to the statements in Section [1](#sec:intro){reference-type="ref" reference="sec:intro"}, by appropriately choosing $M,N,Q_1,Q_2$ and $\tau$, it is easy to see that the GNMS iteration method also includes the Picard iteration method, the MN method, the SSMN method, the RNMS method and their relaxed versions (if they exist) as special cases.

Now we are in a position to show the convergence of the GNMS iterative method. Denote $$\label{eq:nota} \alpha=\Vert Q_1^{-1}Q_2\Vert,~\beta=\Vert Q_1^{-1}\Vert,~\gamma=\Vert M^{-1}N\Vert,~\mu=\Vert M^{-1}BQ_1\Vert,~\nu=\Vert M^{-1}BQ_2\Vert.$$ Then we have the following convergence theorem.

**Theorem 1**.
*Let $A = M - N$ and $Q = Q_1 - Q_2$ with $M$ and $Q_1$ being nonsingular matrices. If $$\label{eq:con1} \big\vert \gamma\vert 1-\tau\vert+\tau(\gamma\alpha-\beta \nu)\big\vert < 1\quad \text{and}\quad \tau(\mu\beta+\beta \nu) < (\gamma-1)(\vert 1-\tau\vert+\tau\alpha-1),$$ then the GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"} has a unique solution $x^*$ and the sequence $\{(x^k,~y^k)\}^\infty_{k=0}$ generated by [\[eq:gnms\]](#eq:gnms){reference-type="eqref" reference="eq:gnms"} converges to $(x^*,y^* = Q^{-1}|x^*|)$.*

*Proof.* It follows from [\[eq:gnms\]](#eq:gnms){reference-type="eqref" reference="eq:gnms"} that $$\label{eq:gnmsm} \begin{cases} y^{k}=(1-\tau)y^{k-1}+\tau{Q_1}^{-1}\left(Q_2y^{k-1}+ |x^{k-1}|\right),\\ x^{k}=M^{-1}(Nx^{k-1} +BQ_1y^{k}-BQ_2y^{k-1} + c). \end{cases}$$ Subtracting [\[eq:gnmsm\]](#eq:gnmsm){reference-type="eqref" reference="eq:gnmsm"} from [\[eq:gnms\]](#eq:gnms){reference-type="eqref" reference="eq:gnms"}, we have $$\label{eq:mm} \begin{cases} y^{k+1} - y^{k} = (1-\tau)(y^k - y^{k-1})+\tau{Q_1}^{-1}\left[Q_2(y^k - y^{k-1}) +(|x^k| - |x^{k-1}|)\right],\\ x^{k+1} - x^{k} = M^{-1}\left[N(x^k - x^{k-1}) +BQ_1(y^{k+1} - y^{k}) -BQ_2(y^k - y^{k-1})\right], \end{cases}$$ from which, together with Lemma [Lemma 2](#lemma:2){reference-type="ref" reference="lemma:2"} (a) and [\[eq:nota\]](#eq:nota){reference-type="eqref" reference="eq:nota"}, we have $$\label{neq:ss} \begin{aligned} \Vert y^{k+1}-y^k\Vert& \leq\vert 1-\tau\vert\Vert y^k - y^{k-1} \Vert+\tau\alpha\Vert y^k - y^{k-1}\Vert+\tau \beta\Vert x^k - x^{k-1}\Vert,\\ \Vert x^{k+1} - x^k\Vert& \leq\gamma \Vert x^k - x^{k-1}\Vert +\mu \Vert y^{k+1} - y^k \Vert+\nu \Vert y^k - y^{k-1}\Vert, \end{aligned}$$ from which we have $$\label{eq:wss} \begin{bmatrix} 1 & 0\\ -\mu & 1 \end{bmatrix} \begin{bmatrix} \Vert y^{k+1} - y^k\Vert\\ \Vert x^{k+1} -x^k\Vert \end{bmatrix} \leq \begin{bmatrix} \vert1-\tau\vert+\tau\alpha & \tau\beta\\ \nu & \gamma \end{bmatrix} \begin{bmatrix} \Vert y^k - y^{k-1}\Vert\\ \Vert x^k - x^{k-1}\Vert \end{bmatrix}.$$ Multiplying both sides of [\[eq:wss\]](#eq:wss){reference-type="eqref" reference="eq:wss"} from the left by the nonnegative matrix $P = \begin{bmatrix} 1 & 0\\ \mu & 1 \end{bmatrix}$ and using Lemma [Lemma 2](#lemma:2){reference-type="ref" reference="lemma:2"} (c), we have $$\label{neq:ssnorm} \begin{bmatrix} \Vert y^{k+1} - y^k\Vert\\ \Vert x^{k+1} - x^k\Vert \end{bmatrix} \leq W \begin{bmatrix} \Vert y^k - y^{k-1}\Vert\\ \Vert x^k - x^{k-1}\Vert \end{bmatrix},$$ where $$W= \begin{bmatrix} \vert1-\tau\vert+\tau\alpha & \tau\beta\\ (\vert1-\tau\vert+\tau\alpha)\mu+\nu & \tau\beta \mu+\gamma \end{bmatrix}.$$ For each $m\geq1$, if $\rho(W)<1$, it follows from [\[neq:ssnorm\]](#neq:ssnorm){reference-type="eqref" reference="neq:ssnorm"}, Lemma [Lemma 3](#lemma:4){reference-type="ref" reference="lemma:4"} and Lemma [Lemma 4](#lem:conv){reference-type="ref" reference="lem:conv"} that $$\begin{aligned} \label{neq:cauchy} \begin{bmatrix} \Vert y^{k+m}-y^k \Vert\\ \Vert x^{k+m}-x^k \Vert \end{bmatrix} &= \begin{bmatrix} \Big\Vert \sum\limits_{j=0}^{m-1} (y^{k+j+1}-y^{k+j}) \Big\Vert\\ \Big\Vert\sum\limits_{j=0}^{m-1} (x^{k+j+1}-x^{k+j}) \Big\Vert \end{bmatrix} \leq \begin{bmatrix} \sum\limits_{j=0}^{\infty}\Vert (y^{k+j+1}-y^{k+j}) \Vert\\ \sum\limits_{j=0}^{\infty}\Vert (x^{k+j+1}-x^{k+j}) \Vert \end{bmatrix}\nonumber\\ &\leq \sum_{j=0}^{\infty} W^{j+1} \begin{bmatrix} \Vert y^k-y^{k-1}\Vert\\ \Vert x^k-x^{k-1}\Vert \end{bmatrix} = (I-W)^{-1}W \begin{bmatrix} \Vert y^k-y^{k-1}\Vert\\ \Vert x^k-x^{k-1}\Vert \end{bmatrix}\nonumber\\ &\leq(I-W)^{-1}W^k \begin{bmatrix} \Vert y^1-y^0\Vert\\ \Vert x^1-x^0\Vert \end{bmatrix}\rightarrow \begin{bmatrix} 0\\ 0 \end{bmatrix} (\text{as} \quad k\rightarrow \infty). \end{aligned}$$ Thus, both $\{y^k\}^\infty_{k=0}$ and $\{x^k\}^\infty_{k=0}$ are Cauchy sequences whenever $\rho(W)<1$. Then, from [@harj1985 Theorem 5.4.10], $\{x^k\}^\infty_{k=0}$ and $\{y^k\}^\infty_{k=0}$ are convergent. Let $\lim_{k\rightarrow\infty}y^k = y^*$ and $\lim_{k\rightarrow\infty}x^k = x^*$. Then it follows from [\[eq:gnms\]](#eq:gnms){reference-type="eqref" reference="eq:gnms"} that $$\label{eq:lgnms} \begin{cases} y^*=(1-\tau)y^*+\tau{Q_1}^{-1}\left(Q_2y^*+ |x^*|\right),\\ x^*=M^{-1}(Nx^*+BQ_1y^*-BQ_2y^* + c), \end{cases}$$ which implies that $$\begin{cases} Qy^*= |x^*|,\\ Ax^* - B|x^*| - c = 0, \end{cases}$$ that is, $x^*$ is a solution to the GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"}. In the following, we will prove that $\rho(W)<1$ if [\[eq:con1\]](#eq:con1){reference-type="eqref" reference="eq:con1"} holds. For simplicity, we rewrite the matrix $W$ as $$W= \begin{bmatrix} f & g\\ f\mu+\nu & g\mu+\gamma\\ \end{bmatrix},$$ where $$\label{eq:fg} f=\vert1-\tau\vert+\tau\alpha \quad \text{and} \quad g=\tau\beta.$$ Suppose that $\lambda$ is an eigenvalue of $W$. Then $$\label{eq:det} \det(\lambda I - W) = \det \begin{bmatrix} \lambda-f & -g\\ -f\mu-\nu & \lambda-(g\mu+\gamma) \end{bmatrix} =0,$$ from which we have $$\lambda^2-(f+\mu g+\gamma)\lambda+\gamma f-g\nu=0.$$ It then follows from Lemma [Lemma 1](#lemma:1){reference-type="ref" reference="lemma:1"} that $|\lambda|<1$ if and only if $$\vert \gamma f-g\nu\vert<1\quad \text{and} \quad f+\mu g+\gamma<1+\gamma f-g\nu,$$ that is, $$\big\vert \gamma\vert 1-\tau\vert+\tau(\gamma\alpha-\beta \nu)\big\vert < 1\quad \text{and} \quad \tau(\mu\beta+\beta \nu) < (\gamma-1)(\vert 1-\tau\vert+\tau\alpha-1),$$ which is exactly [\[eq:con1\]](#eq:con1){reference-type="eqref" reference="eq:con1"}. Finally, we prove the unique solvability. On the contrary, suppose that $x^*$ and $\bar{x}^*$ are two different solutions of the GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"}. Then we have $$\begin{aligned} \Vert y^*-\bar{y}^*\Vert&\leq\vert1-\tau\vert\Vert y^*-\bar{y}^*\Vert+\tau\alpha\Vert y^*-\bar{y}^*\Vert+\tau\beta\Vert x^*-\bar{x}^*\Vert,\label{3.9a}\\ \Vert x^*-\bar{x}^*\Vert&\leq \gamma\Vert x^*-\bar{x}^*\Vert+\mu\Vert y^*-\bar{y}^*\Vert+\nu\Vert y^*-\bar{y}^*\Vert \label{3.9b}, \end{aligned}$$ where $y^* = Q^{-1}|x^*|$ and $\bar{y}^* = Q^{-1}|\bar{x}^*|$.
Note that it can be deduced from [\[eq:con1\]](#eq:con1){reference-type="eqref" reference="eq:con1"} that $\gamma<1$ and $|1-\tau|+\tau\alpha<1$. It then follows from [\[3.9a\]](#3.9a){reference-type="eqref" reference="3.9a"} and [\[3.9b\]](#3.9b){reference-type="eqref" reference="3.9b"} that $$\begin{aligned} \Vert y^*-\bar{y}^*\Vert&\leq\dfrac{\tau\beta}{1-(|1-\tau|+\tau\alpha)}\Vert x^*-\bar{x}^*\Vert,\label{3.10a}\\ \Vert x^*-\bar{x}^*\Vert&\leq\dfrac{\mu+\nu}{1-\gamma}\Vert y^*-\bar{y}^*\Vert.\label{3.10b} \end{aligned}$$ It follows from [\[3.9a\]](#3.9a){reference-type="eqref" reference="3.9a"}, [\[3.10b\]](#3.10b){reference-type="eqref" reference="3.10b"} and the second inequality of [\[eq:con1\]](#eq:con1){reference-type="eqref" reference="eq:con1"} that $$\begin{aligned} \label{neq: y*} \Vert y^*-\bar{y}^*\Vert&\leq&\vert1-\tau\vert\Vert y^*-\bar{y}^*\Vert+\tau\alpha\Vert y^*-\bar{y}^*\Vert+\dfrac{\tau\beta(\mu+\nu)}{1-\gamma}\Vert y^*-\bar{y}^*\Vert\nonumber\\ &<&\vert1-\tau\vert\Vert y^*-\bar{y}^*\Vert+\tau\alpha\Vert y^*-\bar{y}^*\Vert+[1-(|1-\tau|+\tau\alpha)]\Vert y^*-\bar{y}^*\Vert\nonumber\\ &=&\Vert y^*-\bar{y}^*\Vert,\end{aligned}$$ which leads to a contradiction whenever $y^*\neq \bar{y}^*$ (since $x^*\neq \bar{x}^*$). Hence, we have $x^*=\bar{x}^*$. ◻

**Corollary 1**. *If $$\label{eq:con2} \vert \gamma\alpha-\beta \nu\vert<\gamma<1, \quad \beta(\mu+\nu)<(\gamma-1)(\alpha-1), \quad 0<\tau <\dfrac{2(1-\gamma)}{\beta(\mu+\nu)-(\gamma-1)(\alpha+1)},$$ then the GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"} has a unique solution $x^*$ and the sequence $\{(x^k,~y^k)\}^\infty_{k=0}$ generated by [\[eq:gnms\]](#eq:gnms){reference-type="eqref" reference="eq:gnms"} converges to $(x^*,y^* = Q^{-1}|x^*|)$.*

*Proof.* In order to prove this corollary, it suffices to prove that [\[eq:con2\]](#eq:con2){reference-type="eqref" reference="eq:con2"} implies [\[eq:con1\]](#eq:con1){reference-type="eqref" reference="eq:con1"}. The proof is divided into two cases.

- **Case I**: We first consider $0 < \tau\leq 1$. It can be inferred from the first two inequalities of [\[eq:con2\]](#eq:con2){reference-type="eqref" reference="eq:con2"} that $$\dfrac{1-\gamma}{\gamma\alpha-\beta\nu-\gamma}<0\quad \text{and}\quad 1<\dfrac{-\gamma-1}{\gamma\alpha-\beta \nu-\gamma},$$ from which we have $$\label{neq:tau} \dfrac{1-\gamma}{\gamma\alpha-\beta\nu-\gamma}<\tau<\dfrac{-\gamma-1}{\gamma\alpha-\beta \nu-\gamma}.$$ Multiplying both sides of [\[neq:tau\]](#neq:tau){reference-type="eqref" reference="neq:tau"} by $\gamma\alpha-\beta\nu-\gamma$ (which is negative, so the inequalities are reversed), we get $$-\gamma-1< \tau(\gamma\alpha-\beta\nu)-\tau\gamma <1-\gamma.$$ For the right inequality, we have $$\tau(\gamma\alpha-\beta \nu)+(1-\tau)\gamma<1.$$ For the left inequality, we have $$\tau(\gamma\alpha-\beta \nu)+(1-\tau)\gamma>-1.$$ In conclusion, we have $$\label{eq:ncond1} \big\vert\tau(\gamma\alpha-\beta \nu)+\vert1-\tau\vert \gamma\big\vert<1.$$ Furthermore, $$\begin{aligned} \nonumber \tau(\mu\beta+\beta \nu) - (\gamma-1)(\vert 1-\tau\vert+\tau\alpha-1)&=\tau(\mu\beta+\beta \nu)- (\gamma-1)( 1-\tau+\tau\alpha-1)\\\nonumber &=\tau(\mu\beta+\beta \nu) -\tau(\gamma-1)(\alpha-1)\\\label{eq:ncond2} &<0. \end{aligned}$$ Obviously, [\[eq:ncond1\]](#eq:ncond1){reference-type="eqref" reference="eq:ncond1"} and [\[eq:ncond2\]](#eq:ncond2){reference-type="eqref" reference="eq:ncond2"} imply [\[eq:con1\]](#eq:con1){reference-type="eqref" reference="eq:con1"}.
- **Case II**: We consider $1<\tau <\dfrac{2(1-\gamma)}{\beta(\mu+\nu)-(\gamma-1)(\alpha+1)}$, from which and the first two inequalities of [\[eq:con2\]](#eq:con2){reference-type="eqref" reference="eq:con2"} we have $$\dfrac{\gamma-1}{\gamma\alpha-\beta \nu+\gamma}<\tau,$$ which implies that $$\label{neq:-1} \tau(\gamma\alpha-\beta \nu)+(\tau-1)\gamma>-1.$$ In addition, since $\beta(\mu+\nu)-(\gamma-1)(\alpha+1)>0$ and $\gamma\alpha-\beta \nu+\gamma >0$, it follows that $$\tau<\dfrac{2(1-\gamma)}{\beta(\mu+\nu)-(\gamma-1)(\alpha+1)}<\dfrac{\gamma+1}{\gamma\alpha-\beta \nu+\gamma},$$ which implies that $$\label{neq:1} \tau(\gamma\alpha-\beta \nu)+(\tau-1)\gamma<1.$$ Combining [\[neq:-1\]](#neq:-1){reference-type="eqref" reference="neq:-1"} and [\[neq:1\]](#neq:1){reference-type="eqref" reference="neq:1"}, we have $$\label{eq:ncond3} \big\vert\tau(\gamma\alpha-\beta \nu)+\vert1-\tau\vert \gamma\big\vert<1.$$ In addition, it follows from $\tau <\dfrac{2(1-\gamma)}{\beta(\mu+\nu)-(\gamma-1)(\alpha+1)}$ that $$\begin{aligned} \nonumber \tau\beta(\mu+\nu)&<2(1-\gamma)+\tau(\gamma-1)(\alpha+1)\\\nonumber &=(\gamma-1)(\tau\alpha+\tau-2)\\\nonumber &=(\gamma-1)(\tau-1+\tau\alpha-1)\\\label{eq:ncond4} &=(\gamma-1)(\vert1-\tau\vert +\tau\alpha-1).\end{aligned}$$ It is easy to see that [\[eq:ncond3\]](#eq:ncond3){reference-type="eqref" reference="eq:ncond3"} and [\[eq:ncond4\]](#eq:ncond4){reference-type="eqref" reference="eq:ncond4"} imply [\[eq:con1\]](#eq:con1){reference-type="eqref" reference="eq:con1"}.

The proof is completed by summarizing the results of Case I and Case II. ◻

If $M = A + \Omega$, $N = \Omega$, where $\Omega$ is a semi-definite matrix, $Q = Q_1 = I$ and $\tau = 1$, then the GNMS method reduces to the MN method and we have the following corollary.

**Corollary 2**. *Let $A+\Omega$ be nonsingular and $$\label{eq:mnconv} \Vert(A+\Omega)^{-1}\Omega\Vert+\Vert(A+\Omega)^{-1}B\Vert < 1.$$ Then, the GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"} has a unique solution $x^*$ and the MN iteration method [\[eq:mn\]](#eq:mn){reference-type="eqref" reference="eq:mn"} converges to the unique solution.*

*Proof.* In this case, the condition [\[eq:con2\]](#eq:con2){reference-type="eqref" reference="eq:con2"} reduces to $\Vert(A+\Omega)^{-1}\Omega\Vert+\Vert(A+\Omega)^{-1}B\Vert < 1$. Then the results follow from Corollary [Corollary 1](#cor:conv){reference-type="ref" reference="cor:conv"}. ◻

**Remark 1**. *In [@wacc2019 Theorem 3.1], the authors show that the MN iteration method [\[eq:mn\]](#eq:mn){reference-type="eqref" reference="eq:mn"} converges linearly to a solution of the GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"} if $$\label{eq:wc} \|(A+\Omega)^{-1}\|(\|\Omega\| + \|B\|)<1.$$ However, the unique solvability of the GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"} under the condition [\[eq:wc\]](#eq:wc){reference-type="eqref" reference="eq:wc"} was not explored in [@wacc2019]. In addition, [\[eq:wc\]](#eq:wc){reference-type="eqref" reference="eq:wc"} implies [\[eq:mnconv\]](#eq:mnconv){reference-type="eqref" reference="eq:mnconv"} but the converse is generally not true.
Hence, the condition [\[eq:mnconv\]](#eq:mnconv){reference-type="eqref" reference="eq:mnconv"} is weaker than [\[eq:wc\]](#eq:wc){reference-type="eqref" reference="eq:wc"} and we can conclude that the GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"} is uniquely solvable under [\[eq:wc\]](#eq:wc){reference-type="eqref" reference="eq:wc"}.*

If $\Omega=0$, then the MN method reduces to the Picard method [@rohf2014] and we get the following Corollary [Corollary 3](#cor:pi){reference-type="ref" reference="cor:pi"}.

**Corollary 3**. *Let $A$ be nonsingular and $\Vert A^{-1}B\Vert< 1$. Then, the Picard iterative method converges to the unique solution $x^*$ of the GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"}.*

**Remark 2**. *In [@rohf2014 Theorem 2], the authors showed that the Picard iteration method converges to the unique solution of the GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"} if $\rho(|A^{-1}B|)<1$. Clearly, when $A^{-1}B\geq0$, $\rho(|A^{-1}B|)\leq\Vert A^{-1}B\Vert< 1$, but the converse is generally not true. However, when $A^{-1}B \ngeq 0$, the following example shows that neither of the conditions $\rho(|A^{-1}B|)< 1$ and $\Vert A^{-1}B\Vert< 1$ implies the other.*

***Example 1**. *

1. **Consider $A= \begin{bmatrix} 1 & 0.5\\ 3 & 0.25 \end{bmatrix}$ and $B= \begin{bmatrix} 1 & 0\\ 2.1 & 1 \end{bmatrix}$. It follows that $A^{-1}B= \begin{bmatrix} 0.64 & 0.4\\ 0.72 & -0.8 \end{bmatrix}\ngeq0$, $\Vert A^{-1}B\Vert=1.0910>1$, and $\rho(\vert A^{-1}B\vert)=0.9780<1$.**

2. **When $A= \begin{bmatrix} 3 & 0\\ 0 & 3 \end{bmatrix}$ and $B= \begin{bmatrix} -2 & 1\\ 1 & 2 \end{bmatrix}$, we have $A^{-1}B=\dfrac{1}{3} \begin{bmatrix} -2 & 1\\ 1 & 2 \end{bmatrix}\ngeq0$, $\Vert A^{-1}B\Vert=0.7454<1$, and $\rho(\vert A^{-1}B\vert)=1$.**

If $M = \bar{M}+\Omega$, $N = \bar{N}+\Omega$, where $\Omega$ is a given matrix, $Q = Q_1 = I$ and $\tau = 1$, then the GNMS method reduces to the NMS method and Corollary [Corollary 4](#cor:nms){reference-type="ref" reference="cor:nms"} can be obtained.

**Corollary 4**. *If $\bar{M}+\Omega$ is nonsingular and $$\label{eq:nmsconv} \Vert{(\bar{M}+\Omega)}^{-1}(\bar{N}+\Omega)\Vert+\Vert{(\bar{M}+\Omega)}^{-1}B\Vert <1,$$ then the NMS method converges linearly to the unique solution $x^*$ of the GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"}.*

*Proof.* Under these circumstances, we can obtain [\[eq:nmsconv\]](#eq:nmsconv){reference-type="eqref" reference="eq:nmsconv"} from [\[eq:con2\]](#eq:con2){reference-type="eqref" reference="eq:con2"}. Hence, under [\[eq:nmsconv\]](#eq:nmsconv){reference-type="eqref" reference="eq:nmsconv"} we can conclude that the system [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"} is uniquely solvable and the NMS method converges to the unique solution $x^*$. ◻

**Remark 3**. *In [@zhwl2021 Theorem 4.1], Zhou et al. show that the NMS iteration method (Algorithm [\[alg:nms\]](#alg:nms){reference-type="ref" reference="alg:nms"}) converges to a solution of the GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"} if $$\label{eq:zw} \Vert{(\bar{M}+\Omega)}^{-1}\Vert(\Vert\bar{N}+\Omega\Vert+\Vert B\Vert) <1.$$ As can be seen, [\[eq:zw\]](#eq:zw){reference-type="eqref" reference="eq:zw"} implies [\[eq:nmsconv\]](#eq:nmsconv){reference-type="eqref" reference="eq:nmsconv"} but the converse is generally not true.
Thus the condition [\[eq:nmsconv\]](#eq:nmsconv){reference-type="eqref" reference="eq:nmsconv"} is weaker than [\[eq:zw\]](#eq:zw){reference-type="eqref" reference="eq:zw"} and we can conclude that the GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"} is uniquely solvable under the condition [\[eq:zw\]](#eq:zw){reference-type="eqref" reference="eq:zw"}.*

If $M = \theta\hat{M}+\hat{\Omega}$, $N =\hat{\Omega}+(\theta-1)\hat{M}+\hat{N}$, where $\hat{\Omega}$ is a given matrix, $Q = Q_1 = I$ and $\tau = 1$, then the GNMS method reduces to the RNMS method and the following corollary can be obtained.

**Corollary 5**. *If $\theta\hat{M}+\hat{\Omega}$ is nonsingular and $$\label{eq:rnmsconv} \Vert{(\theta\hat{M}+\hat{\Omega})}^{-1}(\hat{\Omega}+(\theta-1)\hat{M}+\hat{N})\Vert+\Vert{(\theta\hat{M}+\hat{\Omega})}^{-1}B\Vert <1,$$ then the RNMS method (Algorithm [\[alg:rnms\]](#alg:rnms){reference-type="ref" reference="alg:rnms"}) converges linearly to the unique solution $x^*$ of the GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"}.*

**Remark 4**. *In [@zhsh2023 Theorem 3.1], Zhao and Shao show that the RNMS iteration method (Algorithm [\[alg:rnms\]](#alg:rnms){reference-type="ref" reference="alg:rnms"}) converges to a solution of the GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"} if $$\label{eq:ws} \Vert{(\theta\hat{M}+\hat{\Omega})}^{-1}\Vert(\Vert\hat{\Omega}+(\theta-1)\hat{M}+\hat{N}\Vert+\Vert B\Vert) <1.$$ It is easy to see that [\[eq:ws\]](#eq:ws){reference-type="eqref" reference="eq:ws"} implies [\[eq:rnmsconv\]](#eq:rnmsconv){reference-type="eqref" reference="eq:rnmsconv"}, but [\[eq:rnmsconv\]](#eq:rnmsconv){reference-type="eqref" reference="eq:rnmsconv"} does not generally imply [\[eq:ws\]](#eq:ws){reference-type="eqref" reference="eq:ws"}. Thus [\[eq:rnmsconv\]](#eq:rnmsconv){reference-type="eqref" reference="eq:rnmsconv"} is weaker than [\[eq:ws\]](#eq:ws){reference-type="eqref" reference="eq:ws"}. By Corollary [Corollary 1](#cor:conv){reference-type="ref" reference="cor:conv"}, we can conclude the unique solvability of the GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"} when [\[eq:ws\]](#eq:ws){reference-type="eqref" reference="eq:ws"} holds.*

# Numerical example {#sec:Numericalexample}

In this section, we utilize an example to demonstrate the effectiveness of the proposed method for solving the GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"}. All experiments were run on a personal computer with a $3.20$ GHz central processing unit (Intel(R) Core(TM) i$5$-$11320$H), $16$GB of memory and the Windows $11$ operating system, using MATLAB R$2021$a. Eight algorithms will be tested. 1. GNMS: the Algorithm [\[alg:gnms\]](#alg:gnms){reference-type="ref" reference="alg:gnms"} with $Q_1 = 10I$, $Q_2 = 0.5I$, $M = D-\frac{3}{4}L$ and $N = \frac{1}{4}L + U$, where $D$, $-L$ and $-U$ are the diagonal part, the strictly lower-triangular and the strictly upper-triangular parts of $A$, respectively. 2. MN: the Algorithm [\[alg:mn\]](#alg:mn){reference-type="ref" reference="alg:mn"}. 3. Picard: the Picard iteration [@rohf2014] $$x^{k+1} = A^{-1} (B|x^k| + c).$$ 4. FPI: the fixed point iteration $$\begin{cases} x^{k+1}=A^{-1}\left( By^k + c\right),\\ y^{k+1}=(1-\tau)y^k + \tau |x^{k+1}|, \end{cases}$$ which arises from [@ke2020]. 5. NMS: the Algorithm [\[alg:nms\]](#alg:nms){reference-type="ref" reference="alg:nms"} with $\bar{M} = M$. 6.
NGS: the Algorithm [\[alg:nms\]](#alg:nms){reference-type="ref" reference="alg:nms"} with $\bar{M} = D - L$ and $\bar{N} = U$. 7. RMS: the relaxed-based matrix splitting iteration [@soso2023] $$\begin{cases} x^{k+1}=S^{-1}\left(T x^k+ By^k + c\right),\\ y^{k+1}=(1-\tau)y^k + \tau |x^{k+1}| \end{cases}$$ with $S=M$ and $T = N$. 8. SSMN: the Algorithm [\[alg:ssmn\]](#alg:ssmn){reference-type="ref" reference="alg:ssmn"}. **Example 2**. *Consider the GAVEs [\[eq:gave\]](#eq:gave){reference-type="eqref" reference="eq:gave"} with $A=\tilde{A}+\dfrac{1}{5}I$ and $$B= \left( \begin{array}{cccccccccccc} S_2 & -I & -I &-I &-I & 0 & 0 &0 & 0 & 0 & 0 & 0 \\ -I & S_2 & -I &-I &-I & -I & 0 &0 & 0 & 0 & 0 & 0\\ -I & -I & S_2 &-I &-I & -I & -I &0 & 0 & 0 & 0 & 0\\ -I & -I & -I &S_2 &-I & -I & -I &-I & 0 & 0 & 0 & 0\\ -I & -I & -I &-I &S_2 & -I & -I &-I & -I & 0 & 0 & 0\\ 0 & -I & -I &-I &-I & S_2 & -I &-I & -I & -I & 0 & 0\\ \ddots&\ddots&\ddots&\ddots&\ddots&\ddots&\ddots &\ddots&\ddots&\ddots&\ddots&\ddots\\ 0 & 0&0&-I & -I &-I &-I & S_2 & -I &-I & -I & -I \\ 0 & 0&0&0&-I & -I &-I &-I & S_2 & -I &-I & -I \\ 0 & 0&0&0&0&-I & -I &-I &-I & S_2 & -I &-I \\ 0 & 0&0&0&0&0&-I & -I &-I &-I & S_2 & -I \\ 0 & 0&0&0&0&0&0&-I & -I &-I &-I & S_2 \end{array} \right) \in\mathbb{R}^{n\times n},$$ where $$\tilde{A}=\small \left( \begin{array}{cccccccccccc} S_1 & -1.5I & -0.5I &-1.5I &-0.5I & 0 & 0 &0 & 0 & 0 & 0 & 0 \\ -1.5I & S_1 & -1.5I &-0.5I &-1.5I & -0.5I & 0 &0 & 0 & 0 & 0 & 0\\ -0.5I & -1.5I & S_1 &-1.5I &-0.5I & -1.5I & -0.5I &0 & 0 & 0 & 0 & 0\\ -1.5I & -0.5I & -1.5I &S_1 &-1.5I & -0.5I & -1.5I &-0.5I & 0 & 0 & 0 & 0\\ -0.5I & -1.5I & -0.5I &-1.5I &S_1 & -1.5I & -0.5I &-1.5I & -0.5I & 0 & 0 & 0\\ 0 & -0.5I & -1.5I &-0.5I &-1.5I & S_1 & -1.5I &-0.5I & -1.5I & -0.5I & 0 & 0\\ \ddots&\ddots&\ddots&\ddots&\ddots&\ddots&\ddots &\ddots&\ddots&\ddots&\ddots&\ddots\\ 0 & 0&0&-0.5I & -1.5I &-0.5I &-1.5I & S_1 & -1.5I &-0.5I & -1.5I & -0.5I \\ 0 & 0&0&0&-0.5I & -1.5I &-0.5I &-1.5I & S_1 & -1.5I &-0.5I & -1.5I \\ 0 & 0&0&0&0&-0.5I & -1.5I &-0.5I &-1.5I & S_1 & -1.5I &-0.5I \\ 0 & 0&0&0&0&0&-0.5I & -1.5I &-0.5I &-1.5I & S_1 & -1.5I \\ 0 & 0&0&0&0&0&0&-0.5I & -1.5I &-0.5I &-1.5I & S_1 \end{array} \right),$$* *$$S_1= \left( \begin{array}{cccccccccc} 36 & -1.5 & -0.5 &-1.5 & 0 &0 & 0 & 0 & 0 & 0 \\ -1.5 & 36 & -1.5 &-0.5 &-1.5 & 0 &0 & 0 & 0 & 0 \\ -0.5 & -1.5 & 36 &-1.5 &-0.5 & -1.5 & 0 & 0 & 0 & 0\\ -1.5 & -0.5 & -1.5 &36 &-1.5 & -0.5 & -1.5 & 0 & 0 & 0\\ 0 & -1.5 & -0.5 &-1.5 &36 & -1.5 & -0.5 &-1.5 & 0 & 0\\ 0 & 0 & -1.5 &-0.5 &-1.5 & 36 & -1.5 &-0.5 & -1.5 & 0\\
\ddots&\ddots&\ddots&\ddots&\ddots&\ddots&\ddots &\ddots&\ddots&\ddots\\ 0&0&0&0 & -1.5 &-0.5 &-1.5 & 36 & -1.5 &-0.5 \\ 0&0&0&0&0& -1.5 &-0.5 &-1.5 &36 & -1.5 \\ 0&0&0&0&0&0& -1.5 &-0.5 &-1.5 & 36 \end{array} \right) \in\mathbb{R}^{m\times m},$$* *$$S_2= \left( \begin{array}{cccccccccc} 3 & -1 & -1 &-1 & 0 &0 & 0 & 0 & 0 & 0 \\ -1 & 3 & -1 &-1 &-1 & 0 &0 & 0 & 0 & 0 \\ -1 & -1 & 3 &-1 &-1 & -1 & 0 & 0 & 0 & 0\\ -1 & -1 & -1 &3 &-1 & -1 & -1 & 0 & 0 & 0\\ 0 & -1 & -1 &-1 &3 & -1 & -1 &-1 & 0 & 0\\ 0 & 0 & -1 &-1 &-1 & 3 & -1 &-1 & -1 & 0\\ \ddots&\ddots&\ddots&\ddots&\ddots&\ddots&\ddots &\ddots&\ddots&\ddots\\ 0&0&0&0 & -1 &-1 &-1 & 3 & -1 &-1 \\ 0&0&0&0&0& -1 &-1 &-1 &3 & -1 \\ 0&0&0&0&0&0& -1 &-1 &-1 & 3 \end{array} \right) \in\mathbb{R}^{m\times m},$$* *and $c=Ax^*-B|x^*|$ with $x^*=\left(\dfrac{1}{2},1,\dfrac{1}{2},1,\dots,\dfrac{1}{2}, 1\right)^\top$.* *In this example, we choose $x_0=(-1,0,-1,0,\dots,~-1,0)^\top$, $y_0= c$. For every $n$, we run each method ten times and the average IT (the number of iteration), the average CPU (the elapsed CPU time in seconds) and the average RES are reported, where $${\rm RES} = \frac{\|Ax^k - B|x^k| -c\|}{\|c\|}.$$ Once ${\rm RES} \le 10^{-8}$, the experiment is terminated. Numerical results are shown in Table [1](#table1){reference-type="ref" reference="table1"}[^4], from which we can see that the proposed method is the best one within the tested methods, both in terms of IT and CPU.* *[\[table1\]]{#table1 label="table1"}* *Method* *$m$* *$60$* *$80$* *$90$* *$100$* *$110$* ---------- ------------------------------------------- -------------- -------------- -------------- -------------- -------------- *GNMS* *$\tau_{opt}$* *$1.00$* *$1.00$* *$1.00$* *$1.00$* *$1.00$* *IT* *$8$* *$8$* *$8$* *$8$* *$8$* *CPU* *$0.0018$* *$0.0033$* *$0.0044$* *$0.0071$* *$0.0087$* *RES* *4.1370e-09* *3.1608e-09* *2.8363e-09* *2.5773e-09* *2.3658e-09* *MN* *$\Omega = 2*diag(A)$* *IT* *$47$* *$47$* *$47$* *$47$* *$47$* *CPU* *$0.3717$* *$0.7608$* *$1.0522$* *$1.4283$* *$1.7924$* *RES* *7.5124e-09* *7.0945e-09* *6.9526e-09* *6.8380e-09* *6.7435e-09* *$\Omega = \frac{1}{2}*diag(A)$* *IT* *$16$* *$16$* *$16$* *$16$* *$16$* *CPU* *$0.1268$* *$0.2544$* *$0.3548$* *$0.4895$* *$0.6081$* *RES* *7.2195e-09* *5.9416e-09* *5.5203e-09* *5.1856e-09* *4.9137e-09* *Picard* *IT* *$26$* *$26$* *$26$* *$26$* *$26$* *CPU* *$0.2003$* *$0.4274$* *$0.5773$* *$0.7880$* *$0.9853$* *RES* *6.9693e-09* *8.2848e-09* *8.7217e-09* *9.0704e-09* *9.3553e-09* *FPI* *$\tau_{opt}$* *$0.8$* *$0.8$* *$0.79$* *$0.79$* *$0.79$* *IT* *$17$* *$17$* *$17$* *$17$* *$17$* *CPU* *$0.1329$* *$0.2729$* *$0.3863$* *$0.5147$* *$0.6506$* *RES* *9.2742e-09* *8.4833e-09* *9.7848e-09* *9.3634e-09* *8.9942e-09* *NMS* *$\Omega = 2*diag(A)$* *IT* *$52$* *$52$* *$52$* *$52$* *$52$* *CPU* *$0.0079$* *$0.0164$* *$0.0252$* *$0.0416$* *$0.0493$* *RES* *7.8099e-09* *7.7233e-09* *7.6941e-09* *7.6705e-09* *7.6512e-09* *$\Omega = \dfrac{1}{2}*diag(A)$* *IT* *$19$* *$19$* *$19$* *$19$* *$19$* *CPU* *$0.0032$* *$0.0066$* *$0.0090$* *$0.0162$* *$0.0193$* *RES* *5.6173e-09* *5.1661e-09* *5.0093e-09* *4.8812e-09* *4.7744e-09* *NGS* *$\Omega = 2*diag(A)$* *IT* *$51$* *$51$* *$51$* *$51$* *$51$* *CPU* *$0.0071$* *$0.0142$* *$0.0202$* *$0.0324$* *$0.0423$* *RES* *7.6531e-09* *7.5154e-09* *7.4677e-09* *7.4300e-09* *7.3991e-09* *$\Omega = \dfrac{1}{2}*diag(A)$* *IT* *$18$* *$18$* *$18$* *$18$* *$18$* *CPU* *$0.0029$* *$0.0057$* *$0.0074$* *$0.0137$* *$0.0160$* *RES* *8.0587e-09* *7.0044e-09* *6.6319e-09* *6.3241e-09* *6.0648e-09* *RMS* 
*$\tau_{opt}$* *$0.99$* *$0.99$* *$0.99$* *$0.99$* *$0.99$* *IT* *$12$* *$12$* *$12$* *$12$* *$12$* *CPU* *$0.0022$* *$0.0046$* *$0.0061$* *$0.0093$* *$0.0115$* *RES* *3.4193e-09* *2.7439e-09* *2.5157e-09* *2.3315e-09* *2.1795e-09* *SSMN* *$\tilde{\Omega} = 2*diag(A)$* *IT* *$18$* *$18$* *$18$* *$18$* *$18$* *CPU* *$0.1443$* *$0.2917$* *$0.4124$* *$0.5657$* *$0.7034$* *RES* *5.0798e-09* *4.5585e-09* *4.3772e-09* *4.2288e-09* *4.1049e-09* *$\tilde{\Omega} = \dfrac{1}{2}*diag(A)$* *IT* *$39$* *$39$* *$39$* *$39$* *$39$* *CPU* *$0.3126$* *$0.6474$* *$0.8856$* *$1.2022$* *$1.5239$* *RES* *7.7547e-09* *8.7984e-09* *9.1439e-09* *9.4195e-09* *9.6445e-09* : *Numerical results for Example [Example 2](#exam1){reference-type="ref" reference="exam1"}.* # Conclusions {#sec:Conclusions} A generalization of the Newton-based matrix splitting iteration method (GNMS) for solving GAVEs is proposed, which include some existing methods as special cases. Under mild conditions, the GNMS method converges to the unique solution of the GAVEs and a few weaker convergence conditions for some existing methods are obtained. Numerical results illustrate that GNMS can be superior to some existing methods in our setting. 99 M. Achache, N. Hazzam. Solving absolute value equations via complementarity and interior-point methods, *J. Nonlinear Funct. Anal.*, 2018: 1--10, 2018. J. H. Alcantara, J.-S. Chen, M. K. Tam. Method of alternating projections for the general absolute value equation, *J. Fixed Point Theory Appl.*, 25: 39, 2023. C.-R. Chen, D.-M. Yu, D.-R Han. Exact and inexact Douglas-Rachford splitting methods for solving large-scale sparse absolute value equations, *IMA J. Numer. Anal.*, 43: 1036--1060, 2023. R. W. Cottle, J.-S. Pang. R. E. Stone. The Linear Complementarity Problem, *Academic Press, New York.*, 1992. M. Hladík. Bounds for the solutions of absolute value equations, *Comput. Optim. Appl.*, 69: 243--266, 2018. S.-L. Hu, Z.-H. Huang. A note on absolute value equations, *Optim. Lett.*, 4: 417--424, 2010. X.-Q. Jiang, Y. Zhang. A smoothing-type algorithm for absolute value equations, *J. Ind. Manag. Optim.*, 9: 789--798, 2013. Y.-F. Ke. The new iteration algorithm for absolute value equation, *Appl. Math. Lett.*, 99: 105-990, 2020. Y.-Y. Lian, C.-X. Li, S.-L. Wu. Weaker convergent results of the generalized Newton method for the generalized absolute value equations, *J. Comput. Appl. Math.*, 338: 221--226, 2018. X. Li, X.-X. Yin. A new modified Newton-type iteration method for solving generalized absolute value equations, *arXiv:2103.09452v3*, 2021. [ https://doi.org/10.48550/arXiv.2103.09452]( https://doi.org/10.48550/arXiv.2103.09452). X. Li, Y.-X. Li, Y. Dou. Shift-splitting fixed point iteration method for solving generalized absolute value equations, *Numer. Algor.*, 2022. <https://doi.org/10.1007/s11075-022-01435-3>. Roger A. Horn, Charles R. Johnson. Matrix analysis (Second edition). *Cambridge University Press, New York.*,1985. T. Lotfi, H. Veiseh. A note on unique solvability of the absolute value equation, *J. Linear Topological Algebra*, 2: 77--81, 2013. O. L. Mangasarian, R. R. Meyer. Absolute value equations, *Linear Algebra Appl.*, 419: 359--367, 2006. O. L. Mangasarian. Absolute value programming, *Comput. Optim. Appl.*, 36: 43--53, 2007. F. Mezzadri. On the solution of general absolute value equations, *Appl. Math. Lett.*, 107: 106462, 2020. O. Prokopyev. On equivalent reformulations for absolute value equations, *Comput. Optim. Appl.*, 44: 363--372, 2009. J. Rohn. 
A theorem of the alternatives for the equation $Ax + B|x| = b$, *Linear Multilinear Algebra*, 52: 421--426, 2004. J. Rohn. On unique solvability of the absolute value equation, *Optim. Lett.*, 3: 603--606, 2009. J. Rohn, V. Hooshyarbakhsh, R. Farhadsefat. An iterative method for solving absolute value equations and sufficient conditions for unique solvability, Optim. Lett., 8: 35--44, 2014. D. Serre. Matrices: Theory and Applications, *Springer, New York.*, 2002. X.-H. Shao, W.-C. Zhao. Relaxed modified Newton-based iteration method for generalized absolute value equations, *AIMS Math.*, 8: 4714--4725, 2023. J. Song, Y.-Z. Song. Relaxed-based matrix splitting methods for solving absolute value equations, *Comp. Appl. Math.*, 42: 19, 2023. J.-Y. Tang, J.-C. Zhou. A quadratically convergent descent method for the absolute value equation $Ax + B|x| = b$, *Oper. Res. Lett.*, 47: 229--234, 2019. A. Wang, Y. Cao, J.-X. Chen. Modified Newton-type iteration methods for generalized absolute value equations, *J. Optim. Theory. Appl.*, 181: 216--230, 2019. S.-L. Wu, C.-X. Li. A note on unique solvability of the absolute value equation, *Optim. Lett.*, 14: 1957--1960, 2020. S.-L. Wu, S.-Q. Shen. On the unique solution of the generalized absolute value equation, *Optim. Lett.*, 15: 2017--2024, 2021. Y. Saad. Iterative Methods for Sparse Linear Systems (Second edition), *SIAM*, USA, 2003. D.-M. Young. Iterative Solution of Large Linear Systems, *Academic Press, New York*, 1971. D.-M. Yu, C.-R. Chen, D.-R. Han. A modified fixed point iteration method for solving the system of absolute value equations, *Optimization*, 71: 449-461, 2022. H.-Y. Zhou, S.-L. Wu, C.-X. Li. Newton-based matrix splitting method for generalized absolute value equation, *J. Comput. Appl. Math.*, 394: 113578, 2021. J.-L. Zhang, G.-F. Zhang, Z.-Z. Liang. A modified generalized SOR-like method for solving an absolute value equation, *Linear Multilinear Algebra*, 2022. [DOI: 10.1080/03081087.2022.2066614](DOI: 10.1080/03081087.2022.2066614). W.-C. Zhao, X.-H. Shao. New matrix splitting iteration method for generalized absolute value equations, *AIMS Math.*, 8(5): 10558-10578, 2023. [^1]: Email address: 3222714384\@qq.com. [^2]: Corresponding author. Supported by the Natural Science Foundation of Fujian Province (Grand No. 2021J01661). Email address: cairongchen\@fjnu.edu.cn. [^3]: This variable transformation was formally introduced in the modified fixed point iteration method for solving AVEs [\[eq:ave\]](#eq:ave){reference-type="eqref" reference="eq:ave"} [@yuch2022] and further used in [@zhzl2023]. [^4]: *In the table, $\tau_{opt}$ is the numerical optimal iteration parameter, which is selected from $[0:0.01:2]$ and is the first one to reach the minimal number of iteration of the method.*
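For readers who want to experiment with the comparison above on a small scale, the following is a minimal NumPy sketch of the GNMS iteration [\[eq:gnms\]](#eq:gnms){reference-type="eqref" reference="eq:gnms"}; it is only illustrative and is not the MATLAB code used for the experiments. The splittings $M = D-\frac{3}{4}L$, $N = \frac{1}{4}L + U$, $Q_1 = 10I$ and $Q_2 = 0.5I$ mirror the choices listed for the GNMS method above, while the function names, the dense solves and the zero starting vectors are our own assumptions. The helper `satisfies_con1` simply evaluates the quantities $\alpha,\beta,\gamma,\mu,\nu$ from [\[eq:nota\]](#eq:nota){reference-type="eqref" reference="eq:nota"} and tests the sufficient condition [\[eq:con1\]](#eq:con1){reference-type="eqref" reference="eq:con1"} of Theorem 1.

```python
import numpy as np

def build_splittings(A):
    """Splittings used for GNMS in Section 4: A = M - N with M = D - (3/4)L,
    N = (1/4)L + U, and Q = Q1 - Q2 with Q1 = 10*I, Q2 = 0.5*I, where D, -L, -U
    are the diagonal, strictly lower and strictly upper triangular parts of A."""
    n = A.shape[0]
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)                 # so that A = D - L - U
    U = -np.triu(A, 1)
    M = D - 0.75 * L
    N = 0.25 * L + U                    # then M - N = D - L - U = A
    Q1 = 10.0 * np.eye(n)
    Q2 = 0.5 * np.eye(n)
    return M, N, Q1, Q2

def gnms(A, B, c, tau=1.0, tol=1e-8, max_iter=1000):
    """GNMS iteration (Algorithm 6): update y first, then x."""
    M, N, Q1, Q2 = build_splittings(A)
    x = np.zeros_like(c, dtype=float)
    y = np.zeros_like(c, dtype=float)
    norm_c = np.linalg.norm(c) or 1.0
    res = np.inf
    for k in range(max_iter):
        y_new = (1.0 - tau) * y + tau * np.linalg.solve(Q1, Q2 @ y + np.abs(x))
        x = np.linalg.solve(M, N @ x + B @ (Q1 @ y_new) - B @ (Q2 @ y) + c)
        y = y_new
        res = np.linalg.norm(A @ x - B @ np.abs(x) - c) / norm_c   # RES as above
        if res <= tol:
            break
    return x, k + 1, res

def satisfies_con1(A, B, tau=1.0):
    """Evaluate alpha, beta, gamma, mu, nu and test the sufficient condition of Theorem 1."""
    M, N, Q1, Q2 = build_splittings(A)
    inv = np.linalg.inv
    alpha = np.linalg.norm(inv(Q1) @ Q2, 2)
    beta = np.linalg.norm(inv(Q1), 2)
    gamma = np.linalg.norm(inv(M) @ N, 2)
    mu = np.linalg.norm(inv(M) @ B @ Q1, 2)
    nu = np.linalg.norm(inv(M) @ B @ Q2, 2)
    cond1 = abs(gamma * abs(1.0 - tau) + tau * (gamma * alpha - beta * nu)) < 1.0
    cond2 = tau * (mu * beta + beta * nu) < (gamma - 1.0) * (abs(1.0 - tau) + tau * alpha - 1.0)
    return cond1 and cond2
```

Since `satisfies_con1` forms explicit inverses to evaluate the spectral norms, it is intended only for moderate problem sizes; the iteration itself relies solely on linear solves with $M$ and $Q_1$.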
arxiv_math
{ "id": "2309.09520", "title": "A generalization of the Newton-based matrix splitting iteration method\n for generalized absolute value equations", "authors": "Xuehua Li, Cairong Chen", "categories": "math.NA cs.NA math.OC", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | Let $K/\mathbb{Q}$ be a cyclic extension of number fields with Galois group $G$. We study the ideal classes of primes $\mathfrak{p}$ of $K$ of residue degree bigger than one in the class group of $K$. In particular, we explore such extensions $K/\mathbb{Q}$ for which there exists an integer $f>1$ such that the ideal classes of primes $\mathfrak{p}$ of $K$ of residue degree $f$ generate the full class group of $K$. It is shown that there are many such fields. These results are used to obtain information on the class group of $K$, such as the rank of the $\ell$-torsion of the class group, factors of the class number, fields with class groups of certain exponents, and even the structure of the class group in some cases. Moreover, such $f$ can be used to construct annihilators of the class groups. address: - Indian Institute of Science Education and Research, Berhampur, India. - Indian Institute of Science Education and Research, Berhampur, India. author: - Prem Prakash Pandey, Mahesh Kumar Ram title: Primes of higher degree ---

# Introduction

For any Galois extension $K/F$ of number fields, we use the notation $G_{K/F}$ for the Galois group of the extension. When $F=\mathbb{Q}$, this will be abbreviated to $G_K$. For any prime ideal $\mathfrak{P}$ of $K$, the degree $[\mathbb{O}_K/\mathfrak{P}: \mathbb{O}_F/ \mathfrak{p}]$ is called the residue degree of the ideal $\mathfrak{P}$ for the extension $K/F$. Here $\mathbb{O}_K, \mathbb{O}_F$ denote the rings of integers of $K$ and $F$ respectively, and $\mathfrak{p}=\mathfrak{P} \cap \mathbb{O}_F$. We shall use the notation $f(\mathfrak{P}|\mathfrak{p})$ for this residue degree. All the prime ideals appearing in the article are assumed to be unramified unless explicitly stated otherwise.

For any odd prime $\ell$, Kummer showed that the class group of $\mathbb{Q}(\zeta_{\ell})$ is generated by classes of prime ideals of $\mathbb{Q}(\zeta_{\ell})$ of residue degree one. For a general number field $K$, as a consequence of class field theory, one knows that each ideal class in the class group $C\ell(K)$ of $K$ contains infinitely many prime ideals of residue degree one. In [@PPP19], the first author explored the following problem:\
**Problem :** Determine extensions $K/F$ for which there exists an integer $f>1$ such that the class group $C\ell(K)$ is generated by prime ideals of residue degree $f$?\
In relation to this question the following set was introduced. $$\mathbf{R}_{K/F}=\{f \in \mathbb{N}: C\ell(K) \mbox{ is generated by primes of residue degree }f\}.$$ Here the residue degree means the residue degree for the extension $K/F$. The cases where $\mathbf{R}_{K/F}=\{1\}$ are referred to as trivial cases. In [@PPP19], a criterion for an integer $f$ to be in $\mathbf{R}_{K/F}$ was obtained. Under the assumption that $K/F$ is cyclic and $C\ell(F)$ is trivial, it was shown that any $f\in \mathbf{R}_{K/F}$ can be used to construct an annihilator in $\mathbb{Z}[G_{K/F}]$ for the class group of $K$. However, not a single example of an extension $K/F$ with non-trivial $\mathbf{R}_{K/F}$ was exhibited there. In [@NMMRPP], the authors showed that for many cyclotomic and real cyclotomic fields whose class numbers are prime, the class group is generated by classes of primes of a fixed residue degree bigger than one. These were the first examples of extensions $K/F$ with non-trivial $\mathbf{R}_{K/F}$.
These were used to obtain some consequences about the class numbers of intermediate fields.\
This article aims to generalize the works of [@PPP19; @NMMRPP] in three directions: to show that there are many extensions $K/F$ with non-trivial $\mathbf{R}_{K/F}$, to obtain the best result on annihilators of class groups available from this study, and to obtain many strong consequences for class groups of cyclic extensions of $\mathbb{Q}$. We remark that we obtain these consequences by studying primes of higher degree. For most of this paper we use $K$ for a number field which is a cyclic extension of $\mathbb{Q}$ of degree $n$ (with the only exceptions occurring in Theorem [Theorem 3](#AT){reference-type="ref" reference="AT"} and Section 4). We shall use $C\ell(K)$, $h_K$ to denote the class group of $K$ and the class number of $K$ respectively. We use $H(K)$ to denote the Hilbert class field of $K$. For any group $G$ we use $Aut(G)$ to denote the group of automorphisms of $G$. Our first result in the direction of non-trivial $\mathbf{R}_{K/\mathbb{Q}}$ is the following.

**Theorem 1**. *Let $K$ be a number field which is a cyclic extension of $\mathbb{Q}$ and let $G_K$ be the Galois group of $K/\mathbb{Q}$. Assume that $n$ and $h_K$ are relatively prime. Suppose that there exists a non-trivial element $\sigma \in G_K$ such that any homomorphism $\psi : G_K \longrightarrow Aut(C\ell(K))$ maps $\sigma$ to the identity automorphism. Then every divisor of the order of $\sigma$ is in $\mathbf{R}_{K/\mathbb{Q}}$.*

Before proceeding further, we remark that Theorem [Theorem 1](#Main){reference-type="ref" reference="Main"} generalises Kummer's result mentioned in the first paragraph, for those cyclic extensions $K/\mathbb{Q}$ for which the degree $[K:\mathbb{Q}]$ is relatively prime to the class number $h_K$. Theorem [Theorem 1](#Main){reference-type="ref" reference="Main"} is a quite general result for obtaining cyclic extensions $K/\mathbb{Q}$ for which $\mathbf{R}_{K/\mathbb{Q}}$ is non-trivial. If there exists a divisor $f$ of $n$ which is relatively prime to $|Aut(C\ell(K))|$ then there is an element $\sigma \in G_K$ of order $f$ which satisfies the hypothesis of Theorem [Theorem 1](#Main){reference-type="ref" reference="Main"}. We can obtain extensions $K/\mathbb{Q}$ with non-trivial $\mathbf{R}_{K/\mathbb{Q}}$ in more general scenarios as well. In this regard we have the following result.

**Theorem 2**. *Let $K/\mathbb{Q}$ be a cyclic extension of number fields and let $G_K$ denote its Galois group. Suppose $C\ell(K) \cong \mathbb{Z}/n_1\mathbb{Z}\oplus \dots \oplus \mathbb{Z}/n_t\mathbb{Z}$, and the class number $h_K$ is relatively prime to the degree $n$ of the extension $K/\mathbb{Q}$. Let $p$ be a prime dividing $n$ with $p^{e_1}$ being the highest power of $p$ dividing $n$, and let $p^{e_2}$ be the highest power of $p$ in $|Aut(C\ell(K))|$ with $e_2 \geq 0$. If $e_1>e_2$ then $p^i \in \mathbf{R}_{K/\mathbb{Q}}$ for all $i \in \{0, 1, \dots, e_1-e_2\}$.*

Before proceeding further, we remark that the above results help in exhibiting extensions $K/\mathbb{Q}$ for which $\mathbf{R}_{K/\mathbb{Q}}$ is non-trivial. For such extensions, in most cases it is possible to choose intermediate fields $F$ such that $\mathbf{R}_{K/F}$ is non-trivial.\
Now we turn to the second goal of the article, that is, the construction of annihilators of class groups. Annihilators of class groups are important objects in algebraic number theory [@FT88; @JT81; @YFB04; @RS08; @BBM14; @JSBT18; @CGRC21].
It was established in [@PPP19] that every element in $\mathbf{R}_{K/F}$ can be used to construct annihilators of $C\ell(K)$ provided the Galois group $G_{K/F}$ is cyclic and the class number $h_F=1$. The annihilators of [@PPP19] were extended to more general extensions $K/F$ in [@NMMRPP]. The method of [@NMMRPP] yields annihilators corresponding to non-trivial elements in $\mathbf{R}_{K/F}$ even for non-abelian extensions. In this regard we obtain a stronger result than the one mentioned in [@NMMRPP]. For this part of the discussion, we emphasize that $K/F$ is any extension of number fields (possibly non-abelian). Let $f \in \mathbf{R}_{K/F}$, and let $H_f^1, \dots, H_f^r$ denote all the cyclic subgroups of $G_{K/F}$ of order $f$. Put $H_f=\cap_{i=1}^r H_f^i$ and let $T=\{\sigma_1, \dots, \sigma_t\}$ denote a set of coset representatives of $G_{K/F}/H_f$. Define $$\label{ann} \theta_f=\sum_{i=1}^t \sigma_i.$$ The following theorem generalizes Theorem 1.2 of [@NMMRPP].

**Theorem 3**. *If the class group $C\ell(F)$ is trivial then the element $\theta_f \in \mathbb{Z}[G_{K/F}]$ annihilates the class group $C\ell(K)$.*

Lastly, we mention some consequences of the study of primes of higher degree for class groups. For example, Kummer wanted to prove that the cubic subfield of $\mathbb{Q}(\zeta_{163})$ cannot have class number $2$. Using the $\mathbb{Z}[G_{\mathbb{Q}(\zeta_{163})}]$ structure of the class group of $\mathbb{Q}(\zeta_{163})$ this can be proved (see [@CG21]). In a private communication Prof. Greither informed us that this can be proved for many other cyclic extensions. We prove a much more general result in this direction. For any integer $m$, let $\phi(m)$ denote the value of the Euler totient function at $m$.

**Theorem 4**. *Let $n$ and $m$ be two relatively prime integers bigger than $1$. Assume that $n$ is relatively prime to $\phi(m)$. There is no cyclic extension $K/\mathbb{Q}$ of degree $n$ whose class group is cyclic of order $m$.*

Theorem [Theorem 4](#A1){reference-type="ref" reference="A1"} shows that there is some constraint on the prime factors of the class number of a cyclic extension $K/\mathbb{Q}$. On the other hand, if we know the factors of the class number, primes of higher degree can be used to obtain the rank of class groups. For this let $n=q_1^{e_1} \ldots q_r^{e_r}$ be the prime factorization of $n$. For any prime number $\ell$ not dividing $n$, let $s_i$ denote the multiplicative order of $\ell$ modulo $q_i$. Put $s=\min \{s_1, \ldots, s_r\}$. Let $C\ell(K)[\ell]$ denote the $\ell$-torsion of the class group $C\ell(K)$. That is, $$C\ell(K)[\ell]=\{[\mathfrak{a}] \in C\ell(K): [\mathfrak{a}]^{\ell}=1\}.$$ Then we prove the following theorem.

**Theorem 5**. *Let $K/\mathbb{Q}$ be a cyclic extension of degree $n=q_1^{e_1} \ldots q_r^{e_r}$ and $\ell$ be any prime number not dividing $n$. If $\ell$ divides $h_K$ then $C\ell(K)[\ell]$ has a subgroup isomorphic to $(\mathbb{Z}/\ell\mathbb{Z})^s$.*

We remark that the $\ell$-torsion subgroups $C\ell(K)[\ell]$ are very important (see [@HV06; @EV07; @PCW20; @PCW21]) and a lot of results have been obtained on them in the last twenty years or so (see [@LB05; @HV06; @EV07; @EPW17; @BSTTTZ20; @PCW20; @WJ21; @KP22]). Our result is purely based on the study of primes of higher degree. Lastly, we mention one more result on class groups obtained from our study of primes of higher degree.

**Theorem 6**. *Consider the cyclotomic extension $\mathbb{Q}(\zeta_{\ell})$ for some prime $\ell$.
Assume that $(\ell-1)$ is relatively prime to the order of $C\ell(\mathbb{Q}(\zeta_{\ell}))$. Let $f$ be a divisor of $(\ell-1)$ which is relatively prime to both $(\ell-1)/f$ and $|Aut(C\ell(\mathbb{Q}(\zeta_{\ell})))|$. Then the exponent of the class group of the subfield of degree $f$ is at most $(\ell-1)/f$.*

In Section 2 we recall some results from group theory and some results from class field theory which will be useful. Section 3 contains a proof of Theorem [Theorem 1](#Main){reference-type="ref" reference="Main"} and Theorem [Theorem 2](#main){reference-type="ref" reference="main"}. Section 4 contains some more extensions $K/\mathbb{Q}$ (non-cyclic) with non-trivial $\mathbf{R}_{K/\mathbb{Q}}$. In Section 5 we prove Theorem [Theorem 3](#AT){reference-type="ref" reference="AT"}. Section 6 contains a proof of the consequences Theorem [Theorem 4](#A1){reference-type="ref" reference="A1"}, Theorem [Theorem 5](#A11){reference-type="ref" reference="A11"} and Theorem [Theorem 6](#A3){reference-type="ref" reference="A3"}. In the same section we mention some more consequences of the study of primes of higher degree. The applications mentioned here are not exhaustive and some more such applications will appear in the second author's thesis. The last section ends with explicit examples of extensions $K/\mathbb{Q}$ for which $\mathbf{R}_{K/\mathbb{Q}}$ is non-trivial. The second author's thesis will list many more examples produced by this method.

# Preliminaries

We begin with a Galois extension $L/F$ of number fields. For any unramified prime ideal $\mathfrak{P}$ of $L$, the Frobenius automorphism of $\mathfrak{P}$ is the unique element $\sigma$ of the decomposition group $D_{\mathfrak{P}}$ satisfying $$\sigma (x) \equiv x^{N(\mathfrak{p})} \pmod {\mathfrak{P}}.$$ Here $\mathfrak{p}=\mathfrak{P} \cap \mathbf{O}_F$ and $N(\mathfrak{p})$ is the norm for the extension $F/\mathbb{Q}$. The Frobenius automorphism of $\mathfrak{P}$ will be denoted by $\left(\frac{\mathfrak{P}}{L/F}\right)$, and by $\left(\frac{\mathfrak{p}}{L/F}\right)$ in case $G_{L/F}$ is abelian. We recall the following result from [@GJ96].

**Lemma 7**. *Let $L/F$ be a Galois extension of number fields and $E$ be an intermediate field. For any unramified prime ideal $\mathfrak{P}$ of $L$, we have $$\left( \frac{\mathfrak{P}}{L/F} \right)^{f(\mathbf{p}|\mathfrak{p})}=\left( \frac{\mathfrak{P}}{L/E}\right).$$ Here $\mathbf{p}$ and $\mathfrak{p}$ are the prime ideals below $\mathfrak{P}$ in $E$ and $F$ respectively.*

Let $H(L)$ denote the Hilbert class field of the number field $L$. The following result is an important consequence of class field theory [@LW91].

**Lemma 8**. *The Galois group $G_{H(L)/L}$ is isomorphic to the class group $C\ell(L)$. The isomorphism is induced by the Artin map $$[\mathfrak{P}] \longmapsto \left( \frac{\mathfrak{P}}{H(L)/L} \right).$$*

The groups $G_{H(L)/L}$ and $C\ell(L)$ can be used in place of each other, up to isomorphism, and the justification is offered by Lemma [Lemma 8](#HCF){reference-type="ref" reference="HCF"}. Next, we recall the $\check{C}$ebotarev density theorem. For any $\sigma \in G_{L/F}$ we consider the set $P_{L/F}(\sigma)$ of prime ideals $\mathfrak{p}$ of $F$ such that there is a prime $\mathfrak{P}$ of $L$ above $\mathfrak{p}$ satisfying $$\sigma =\left( \frac{\mathfrak{P}}{L/F} \right).$$ We also use $C_{\sigma}$ to denote the conjugacy class of $\sigma$ in $G_{L/F}$. Now we state the $\check{C}$ebotarev density theorem [@JN99].

**Theorem 9**.
*Let $L/F$ be a Galois extension with Galois group $G_{L/F}$. For every $\sigma \in G_{L/F}$, the density of the set $P_{L/F}(\sigma)$ is positive and equals to $\frac{|C_{\sigma}|}{[L:F]}$.* The next lemma can be proved elementarily [@PPP19]. **Lemma 10**. *The extension $H(L)/F$ is Galois.* For any Galois extension of number fields $L/F$, there is following exact sequence $$1 \longrightarrow G_{H(L)/L} \longrightarrow G_{H(L)/F} \longrightarrow G_{L/F} \longrightarrow 1.$$ We need the following theorem of Wyman [@BW73] (also see [@GCMR88]). **Theorem 11**. *If $K/\mathbb{Q}$ is a cyclic extension then the sequence $$1 \longrightarrow G_{H(K)/K} \longrightarrow G_{H(K)} \longrightarrow G_K \longrightarrow 1$$ splits. Consequently, there exists a homomorphism $\psi : G_K \longrightarrow Aut(G_{H(K)/K})$ such that $$G_{H(K)} \cong G_{H(K)/K} \rtimes_{\psi} G_K.$$* Now we state some results on groups. The following lemma can be easily proved. **Lemma 12**. *Let $H$ be an abelian group of order $n$. Assume that $n=r.s$ for relatively prime integers $r$ and $s$. If $H_1$ and $H_2$ are subgroups of orders $r$ and $s$ respectively then the set $H_2$ is a complete set of coset representatives of $H/H_1$.* Now we recall some results on the automorphism group of finite abelian groups (see [@CHDR07; @JP04; @AR1907]). We begin with the following result on automorphism group of finite abelian $p-$groups. Let $H$ be a finite abelian $p-$group and $$\label{41} H \cong \mathbb{Z}/p^{e_1}\mathbb{Z}\oplus \dots \oplus \mathbb{Z}/p^{e_t}\mathbb{Z}.$$ Define $$d_i=\max \{j: e_j=e_i\} \mbox{ and } c_i=\min \{j: e_j=e_i\}.$$ **Proposition 13**. *Let $H$ be as in ([\[41\]](#41){reference-type="ref" reference="41"}) and $Aut(H)$ denote the group of automorphisms of $H$. Then $$\label{42} |Aut(H)|=\prod_{i=1}^t \left(p^{d_i}-p^{(i-1)}\right) \prod_{i=1}^t \left(p^{e_i}\right)^{(t-d_i)} \prod_{i=1}^t \left( p^{(e_i-1)}\right)^{(t-c_i+1)}.$$* The next lemma is elementary and facilitates us with the automorphism group of any abelian group. **Lemma 14**. *Let $G_1$ and $G_2$ be finite abelian groups of relatively prime orders. Then $$Aut(G_1 \oplus G_2) \cong Aut(G_1) \oplus Aut(G_2).$$* Next, we prove an elementary lemma about the order of certain elements in a semidirect product of groups. **Lemma 15**. *Let $G_1, G_2$ be two finite groups and $\psi :G_2 \longrightarrow Aut(G_1)$ be a group homomorphism. Let $g_1 \in G_1, g_2 \in G_2$ be of orders $n_1$ and $n_2$ respectively. If $\psi(g_2)$ is identity automorphism then the element $(g_1, g_2)$ in the semidirect product $G_1 \rtimes_{\psi} G_2$ has order $lcm~ (n_1, n_2)$.* *Proof.* Since $\psi(g_2)$ is identity automorphism of $G_1$, we have $\psi(g_2)(g)=g$ for all $g\in G_1$. In particular, $\psi(g_2)(g_1^i)=g_1^i$ for any integer $i$. As a result, we have $$(g_1,g_2)^2=(g_1 \psi(g_2)(g_1), g_2^2)=(g_1^2,g_2^2).$$ Proceeding inductively yields $(g_1, g_2)^n=(g_1^n, g_2^n)$. The Lemma follows at once. 
◻ # Proofs of Theorem [Theorem 1](#Main){reference-type="ref" reference="Main"} and Theorem [Theorem 2](#main){reference-type="ref" reference="main"} {#proofs-of-theorem-main-and-theorem-main} *Proof.* (Theorem [Theorem 1](#Main){reference-type="ref" reference="Main"}) As $K/\mathbb{Q}$ is a cyclic extension, from Theorem [Theorem 11](#splitting){reference-type="ref" reference="splitting"} it follows that there is a homomorphism $\psi : G_K \longrightarrow Aut(G_{H(K)/K})$ inducing an isomorphism $$\label{MTE1} \Psi : G_{H(K)} \longrightarrow G_{H(K)/K} \rtimes_{\psi} G_K.$$ Since $C\ell(K)$ is an abelian group and $C\ell(K) \cong G_{H(K)/K}$, we have $$G_{H(K)/K} \cong \mathbb{Z}/n_1\mathbb{Z}\oplus \dots \oplus \mathbb{Z}/n_t\mathbb{Z}$$ for some integers $n_1, \dots, n_t$. Let $\sigma_1, \dots, \sigma_t \in G_{H(K)/K}$ be such that order of $\sigma_i$ is $n_i$ for each $i$ and $$G_{H(K)/K}=\left\{\prod_{i=1}^t \sigma_i^{j_i}: 1 \leq j_i \leq n_i , \forall i\right\}.$$ We are given that $\psi(\sigma)=Id$. Now suppose the order of $\sigma$ equals to $f$. Then $f$ is relatively prime to each $n_i$, hence we have $$\label{MTE2} G_{H(K)/K}=\left\{\prod_{i=1}^t \sigma_i^{fj_i}: 1 \leq j_i \leq n_i , \forall i\right\}.$$ From Lemma [Lemma 15](#SPL){reference-type="ref" reference="SPL"}, the element $(\sigma_i, \sigma)$ have orders $n_i.f$ for each $i$. Because of the isomorphism in ([\[MTE1\]](#MTE1){reference-type="ref" reference="MTE1"}), there are elements $\tau_i \in G_{H(K)}$ such that $\Psi(\tau_i)=(\sigma_i, \sigma)$. Applying the $\check{C}$ebotarev density theorem we obtain unramified prime ideals $\mathfrak{P}_i$ of the field $H(K)$ such that $$\label{MTE3} \left(\frac{\mathfrak{P}_i}{H(K)/\mathbb{Q}} \right)= \tau_i, \mbox{ for each }i=1, \dots, t.$$ We let $\mathfrak{p}_i$ and $p_i$ denote the primes below $\mathfrak{P}_i$ of the fields $K$ and $\mathbb{Q}$ respectively. We have the following identities $$f(\mathfrak{P}_i|p_i)= {\mathrm{ord}}{~\tau_i}= n_i.f \mbox{ and } f(\mathfrak{P}_i|p_i)=f(\mathfrak{P}_i|\mathfrak{p}_i).f(\mathfrak{p}_i|p_i).$$ Note that $f(\mathfrak{P}_i|\mathfrak{p}_i)$ is a divisor of $[H(K):K]$ and $f(\mathfrak{p}_i|p_i)$ is a divisor of $n$. From this we obtain the following $$\label{MTE4} f(\mathfrak{P}_i|\mathfrak{p}_i)= n_i \mbox{ and } f(\mathfrak{p}_i|p_i)=f.$$ Put $$\left(\frac{\mathfrak{P}_i}{H(K)/K} \right)= \tau_i^{'}, \mbox{ for each }i=1, \dots, t.$$ Then from Lemma [Lemma 7](#FR){reference-type="ref" reference="FR"}, we get $\tau_i^f=\tau_i^{'}$ for each $i$. Thus ${\mathrm{ord}}{~\tau_i'}=n_i$. We claim that the Galois group $G_{H(K)/K}$ is generated by $\tau_1', \dots, \tau_t'$.\ We have $$\Psi(\tau_i'^{j_i})=\Psi(\tau_i^{fj_i})=(\sigma_i, \sigma)^{fj_i}.$$ Using Lemma [Lemma 15](#SPL){reference-type="ref" reference="SPL"}, we see that $$\label{MTE5} \Psi(\tau_i'^{j_i})=(\sigma_i^{fj_i}, 1).$$ Since $G_{H(K)/K}$ embeds inside $G_{H(K)/K} \rtimes_{\psi} Gal(K/Q)$ via the map $g \mapsto (g,1)$ and $\Psi$ is an isomorphism, our claim follows from ([\[MTE2\]](#MTE2){reference-type="ref" reference="MTE2"}) and ([\[MTE5\]](#MTE5){reference-type="ref" reference="MTE5"}).\ As the extension $H(K)/K$ is abelian we have $$\left(\frac{\mathfrak{p}_i}{H(K)/K} \right)=\left(\frac{\mathfrak{P}_i}{H(K)/K} \right)=\tau_i'.$$ Using Lemma [Lemma 8](#HCF){reference-type="ref" reference="HCF"}, we conclude that $C\ell(K)$ is generated by the classes of prime ideals $\mathfrak{p}_i$. 
This shows that $f \in \mathbf{R}_{K/\mathbb{Q}}$.\ For any divisor $f'$ of $f$, working with $\sigma^{f/f'}$ in place of $\sigma$, we obtain $f' \in \mathbf{R}_{K/\mathbb{Q}}$. This finishes the proof of Theorem [Theorem 1](#Main){reference-type="ref" reference="Main"}.\  ◻ To generate examples of number fields $K$ with non-trivial $\mathbf{R}_{K/\mathbb{Q}}$ we state a weaker version of Theorem [Theorem 1](#Main){reference-type="ref" reference="Main"} which can be readily applied. This can be easily derived from Theorem [Theorem 1](#Main){reference-type="ref" reference="Main"}. **Theorem 16**. *Let $K/\mathbb{Q}$ be cyclic with Galois group $G_K$. Assume that $n$ and $h_K$ are relatively prime. Let $f$ be a divisor of $n$ which is relatively prime to $|Aut(C\ell(K))|$. Then every divisor of $f$ is in $\mathbf{R}_{K/\mathbb{Q}}$.* Along the lines of the proof of Theorem [Theorem 1](#Main){reference-type="ref" reference="Main"}, we can prove the following generalization of Theorem [Theorem 1](#Main){reference-type="ref" reference="Main"}. **Theorem 17**. *Let $K/\mathbb{Q}$ be cyclic with Galois group $G_K$. Let $$C\ell(K) \cong \mathbb{Z}/n_1\mathbb{Z}\oplus \dots \oplus \mathbb{Z}/n_t\mathbb{Z}$$ be the decomposition of the class group into the invariant factors with $n_1|n_2| \dots |n_t$. Let $\sigma \in G_K$ be a non-trivial element of order $f$ which is relatively prime to $n_t$ and\ (1) for any homomorphism $\psi : G_K \longrightarrow Aut(H(K))$ we have $\psi(\sigma)=Id$,\ (2) the factor $\frac{n}{f}=q$ is a prime relatively prime to $f$.\ Then the subgroup of $C\ell(K)$ generated by prime ideals of residue degree $f$ contains a subgroup isomorphic to $$\mathbb{Z}/n_1^{'}\mathbb{Z}\oplus \dots \oplus \mathbb{Z}/n_t^{'}\mathbb{Z},$$ where $n_i^{'}=\frac{n_i}{q^{r_i}}$, with $q^{r_i}$ being the highest power of $q$ dividing $n_i$.* One can obtain more generalizations, some such generalizations will appear in second author's thesis.\ Now we give the proof of Theorem [Theorem 2](#main){reference-type="ref" reference="main"}. *Proof.* (Theorem [Theorem 2](#main){reference-type="ref" reference="main"}) The Galois group $G_K$ is cyclic and $p^{e_1}$ divides $n$, so there exist an element $\sigma \in G_K$ of order $p^{e_1}$. For any homomorphism $\psi : G_K \longrightarrow Aut(C\ell(K))$, the order of the element $\psi(\sigma)$ is a divisor of $p^{e_1}$. Since $p^{e_2}$ is the highest power of $p$ which divides $|Aut(C\ell(K))|$, we conclude that the order of $\psi(\sigma)$ is at most $p^{e_2}$. In particular, $$\psi(\sigma)^{p^{e_2}}=Id.$$ Thus for $\sigma'=\sigma^{p^{e_2}}$, we have $$\psi(\sigma')=\psi(\sigma^{p^{e_2}})=\psi(\sigma)^{p^{e_2}}=Id.$$ Since the order of $\sigma'$ is $p^{e_1-e_2}$, the theorem follows from Theorem [Theorem 1](#Main){reference-type="ref" reference="Main"}. ◻ # More families of extensions $K/\mathbb{Q}$ with non-trivial $\mathbf{R}_{K/\mathbb{Q}}$ In this section we do not require the extension $K/\mathbb{Q}$ to be cyclic. We need the following analogue of Theorem [Theorem 11](#splitting){reference-type="ref" reference="splitting"}. **Proposition 18**. *Let $K/\mathbb{Q}$ be a Galois extension of number fields of prime power degree, say $n=q^t$ for some prime $q$. Assume that $q$ does not divide the class number $h_K$. Then the sequence $$\label{43} 1 \longrightarrow C\ell(K) \longrightarrow G_{H(K)} \longrightarrow G_K \longrightarrow 1$$ splits. 
In particular, we have $$\label{44} G_{H(K)} \cong C\ell(K) \rtimes G_K.$$* *Proof.* It is enough to find a subfield $L$ of $H(K)$ such that $K \cap L=\mathbb{Q}$ and $H(K)=KL$. Let $T$ be a Sylow $q-$subgroup of $G_{H(K)}$ and $L$ be the fixed field of $T$. As $[H(K):K]=h_K$ is relatively prime to $q$, we see that $$[H(K):L]=|T|=q^t.$$ From this, we immediately obtain $$[L:\mathbb{Q}]=\frac{[H(K):\mathbb{Q}]}{q^t}=\frac{[H(K):\mathbb{Q}]}{n}=h_K.$$ Since $[L:\mathbb{Q}]$ and $n$ are relatively prime, we obtain at once $$K \cap L=\mathbb{Q}\mbox{ and } [KL:\mathbb{Q}]=n[L:\mathbb{Q}]=[H(K):\mathbb{Q}].$$ Now the proposition follows. ◻ Now we are in a position to prove the main theorem of this section. **Theorem 19**. *Let $K/\mathbb{Q}$ be a Galois extension of degree $q^t$ for some $t>1$ and an odd prime $q$. Assume that the class number factors as $h_K=p_1^{e_1} \dots p_u^{e^u}$ with distinct primes $p_i$. If $q$ does not divide $|Aut(C\ell(K))|$ and is relatively prime to $h_K$ then $q \in \mathbf{R}_{K/\mathbb{Q}}$.* *Proof.* From Proposition [Proposition 18](#Splitting-4){reference-type="ref" reference="Splitting-4"} it follows that there is a homomorphism $$\psi: G_K \longrightarrow Aut(C\ell(K))$$ inducing an isomorphism $$\Psi: G_{H(K)} \longrightarrow C\ell(K) \rtimes_{\psi} G_K.$$ Let $\sigma \in G_K$ be an element of order $q$. As $q$ does not divide $|Aut(C\ell(K))|$, we must have $\psi(\sigma)=Id$.\ Now rest of the proof proceeds along the same line as that of Theorem [Theorem 1](#Main){reference-type="ref" reference="Main"}. ◻ Before stating our next Theorem, we recall the following lemma from elementary group theory. **Lemma 20**. *Let $n=q_1^{e_1}q_2^{e_2}\cdots q_r^{e_r}$ be an integer, where $q_i$'s are distinct odd primes, and $e_i\text{'s}\geq 1$ for all $i\in\{1,2,\dots,r\}$. Then* 1. *$Aut(\frac{\mathbb{Z}}{n\mathbb{Z}})\cong \prod\limits_{i=1}^r\frac{\mathbb{Z}}{q_i^{e_i-1}(q_i-1)\mathbb{Z}}$* 2. *the number of elements of order $2$ in $Aut(\frac{\mathbb{Z}}{n\mathbb{Z}})$ is $2^r-1$.* Another lemma along the same lines is the following. **Lemma 21**. *Let $G_1$ and $G_2$ be finite abelian groups such that $G_1\cong (\frac{\mathbb{Z}}{2\mathbb{Z}})^{t_1}$, and the total number of elements of order $2$ in the group $G_2$ is $2^{t_2}$, where $t_1$ and $t_2$ are positive integers such that $t_1>t_2\geq 1$. Then, for any homomorphism $\psi: G_1\rightarrow G_2$, the kernel of $\psi$ has an element of order $2$.* *Proof.* For any element $\sigma$ of $(\frac{\mathbb{Z}}{2\mathbb{Z}})^{t_1}$, order of $\psi(\sigma)$ is either $1$ or $2$. Since $G_2$ has $2^{t_2}$ elements of order $2$, and $2^{t_1}>2^{t_2}+1$, it follows that there is a non-trivial element in the kernel of $\psi$. ◻ As an analogue to Theorem [Theorem 19](#T41){reference-type="ref" reference="T41"} for the case $q=2$ we prove the following theorem. **Theorem 22**. *Let $K/\mathbb{Q}$ be a Galois extension of degree $2^{t_1}$ such that $G_K \cong (\frac{\mathbb{Z}}{2\mathbb{Z}})^{t_1}$, where $t_1\geq 2$ is an integer. Assume that the class group $C\ell(K)$ of $K$ is cyclic of odd order, and let $h_k=p_1^{e_1} \dots p_{t_2}^{e_{t_2}}$. 
If $t_1 >t_2$, then $2\in\mathbf{R}_{K/\mathbb{Q}}.$* *Proof.* From Proposition [Proposition 18](#Splitting-4){reference-type="ref" reference="Splitting-4"} it follows that there is a homomorphism $$\psi: G_K \longrightarrow Aut(C\ell(K))$$ inducing an isomorphism $$\Psi: G_{H(K)} \longrightarrow C\ell(K) \rtimes_{\psi} G_K.$$ From Lemma [Lemma 20](#L43){reference-type="ref" reference="L43"} and Lemma [Lemma 21](#L44){reference-type="ref" reference="L44"}, we find that there exists an element $\sigma$ in $G_K$ of order $2$ such that $\psi(\sigma)=Id$.\ Now rest of the proof proceeds along the same line as that of Theorem [Theorem 1](#Main){reference-type="ref" reference="Main"}. ◻ The examples in (Theorem $5.2$, [@NMMRPP]) can be considered as an example of Theorem [Theorem 22](#T45){reference-type="ref" reference="T45"} for $t_1=2$ and $h_L=3$. # Proof of Theorem [Theorem 3](#AT){reference-type="ref" reference="AT"} {#proof-of-theorem-at} We need the following result about groups. **Lemma 23**. *Let $H_2$ be a subgroup of $H_1$ and $H_1$ be a subgroup of a finite group $H$. If $T$ is a set of representatives of all the left cosets of $H_2$ in $H$, then there exist sets $T_1$ and $T_2$ of representatives of all the left cosets of $H_2$ in $H_1$ and $H_1$ in $H$ respectively such that for each $g \in T$ there is a unique pair $(g', g^{''}) \in T_1 \times T_2$ and an element $h \in H_2$ satisfying $$\label{L2e1} g=g^{''}g'h$$ and vice-versa.* *Proof.* Let $T=\{z_1, \dots, z_t\}$. Since $T$ is a set of representatives of the set of left cosets of $H_2$ in $H$, $$\label{Pe11} H= \bigsqcup_{i=1}^t z_iH_2.$$ Let $T_1$ be the set of all $z_i's$ such that $z_iH_2 \subset H_1$. We consider the largest subset $T_2' \subset T$ with the property $$z_i \not \in H_1 \mbox{ holds for all but one } z_i \in T_2'.$$ Let $T_2$ be the largest subset of $T_2'$ such that for any two distinct elements $z_i, z_j \in T_2$ we have $z_iH_1 \not = z_jH_1$.\ From the definition of $T_1$ and ([\[Pe11\]](#Pe11){reference-type="ref" reference="Pe11"}) it follows that $T_1$ is a set of representatives for the set of left cosets of $H_2$ in $H_1$. That is, $$\label{Pe12} H_1=\bigsqcup_{z_i \in T_1}z_iH_2.$$ By definition, there is exactly one $y_i\in T_2'$ such that $y_iH_1=H_1$. Clearly the element $y_i \in T_2$. Let $g\in H$ be such that $gH_1 \not = H_1$. Now, $gH_2=z_iH_2$ for some $z_i \in T$. As $H_2 \subset H_1$ and $gH_1 \neq H_1$, we must have $z_iH_1 \neq H_1$. Thus $z_i \in T_2'$. From the definition of $T_2$, there exists unique $z_j \in T_2$ such that $z_iH_1=z_jH_1$. It is readily seen that $gH_1=z_jH_1$. From this, we conclude that $$\label{Pe13} H=\bigsqcup_{z_j \in T_2}z_jH_1.$$ From ([\[Pe12\]](#Pe12){reference-type="ref" reference="Pe12"}) and ([\[Pe13\]](#Pe13){reference-type="ref" reference="Pe13"}) it follows that $$\label{Pe14} H=\bigsqcup_{\substack{z_i \in T_1}\ z_j \in T_2}z_jz_iH_2.$$ Now, the assertion ([\[L2e1\]](#L2e1){reference-type="ref" reference="L2e1"}) follows from ([\[Pe11\]](#Pe11){reference-type="ref" reference="Pe11"}) and ([\[Pe14\]](#Pe14){reference-type="ref" reference="Pe14"}). This proves the lemma. ◻ *Proof.* (Proof of Theorem [Theorem 3](#AT){reference-type="ref" reference="AT"}) Let $\wp$ be an unramified prime ideal of $L$ of residue degree $f$. We let $D_f$ denote the decomposition subgroup of $G_{L/F}$ for the prime ideal $\wp$. Let $H_f^1, \ldots, H_f^r$ denote all the cyclic subgroups $G_{L/F}$ of order $f$. Then $D_f=H_f^i$ for some $i$. 
If we put $H_f=\bigcap_{i=1}^r H_f^i$, then there exists a set $T=\{\sigma_1, \dots, \sigma_t\}$ of representatives of left cosets of $H_f$ in $G_{L/F}$ such that $$\label{P1e1} \theta_f=\sum_{\sigma_i \in T} \sigma_i.$$ From Lemma [Lemma 23](#L2){reference-type="ref" reference="L2"}, there exist subsets $T_1, T_2$ of $T$ such that $T_1$ is a set of representatives of left cosets of $H_f$ in $D_f$ and $T_2$ is a set of representatives of left cosets of $D_f$ in $G_{L/F}$. Moreover, there are elements $\tau_{ij}$ in $H_f$ such that $$\label{P1e2} \theta_f=\sum_{\sigma_i' \in T_1, \sigma_j^{''} \in T_2} \sigma_j^{''} \sigma_i' \tau_{ij}.$$ As a result, $$\label{P1e3} \wp^{\theta_f}=\prod_{\sigma_i' \in T_1, \sigma_j^{''} \in T_2} \sigma_j^{''} \sigma_i' \tau_{ij}(\wp).$$ Since $H_f \subset D_f$, we have $\tau_{ij}(\wp)=\wp$. Consequently, $$\label{P1e4} \wp^{\theta_f}=\prod_{\sigma_j^{''} \in T_2} \prod_{\sigma_i' \in T_1} \sigma_j^{''} \sigma_i' (\wp).$$ For each $\sigma_i' \in T_1$, we have $\sigma_i'(\wp)=\wp$. As a result, we obtain $$\label{P1e5} \wp^{\theta_f}=\prod_{\sigma_j^{''} \in T_2} \sigma_j^{''}(\wp^{t_1}), \mbox{ where } t_1=|T_1|.$$ If $\mathfrak{p}$ is the prime ideal of $F$ lying below $\wp$ then, from the factorization theorem of Dedekind, we see that $$\label{P1e6} \mathfrak{p} \mathcal{O}_L=\prod_{\sigma_j^{''} \in T_2} \sigma_j^{''}(\wp).$$ From equations ([\[P1e5\]](#P1e5){reference-type="ref" reference="P1e5"}) and ([\[P1e6\]](#P1e6){reference-type="ref" reference="P1e6"}) we obtain $$\wp^{\theta_f}=(\mathfrak{p}\mathcal{O}_L)^{t_1}.$$ Since $h_F=1$, the ideals $\mathfrak{p}$ and $\mathfrak{p}\mathcal{O}_L$ must be principal. From this we conclude that $\wp^{\theta_f}$ is principal. As the class group $C\ell(L)$ is generated by unramified prime ideals of residue degree $f$, the theorem follows. ◻

# Applications

In this section we prove Theorem [Theorem 4](#A1){reference-type="ref" reference="A1"}, Theorem [Theorem 5](#A11){reference-type="ref" reference="A11"} and Theorem [Theorem 6](#A3){reference-type="ref" reference="A3"}. We begin with the proof of Theorem [Theorem 4](#A1){reference-type="ref" reference="A1"}. *Proof.* (Theorem [Theorem 4](#A1){reference-type="ref" reference="A1"}) If possible, let $K/\mathbb{Q}$ be a cyclic extension such that the class group $C\ell(K)$ is cyclic of order $m$. Then $|Aut(C\ell(K))|=\phi(m)$. From Theorem [Theorem 16](#CMain){reference-type="ref" reference="CMain"}, we see that $n \in \mathbf{R}_{K/\mathbb{Q}}$. This means the class group $C\ell(K)$ is generated by prime ideals of residue degree $n$. As $n=[K:\mathbb{Q}]$, a prime ideal of $K$ of residue degree $n$ lies over an inert rational prime $p$ and equals $p\mathcal{O}_K$, hence is principal. Consequently the class group $C\ell(K)$ must be trivial. This proves Theorem [Theorem 4](#A1){reference-type="ref" reference="A1"}. ◻ Now we prove Theorem [Theorem 5](#A11){reference-type="ref" reference="A11"}. We prove Theorem [Theorem 5](#A11){reference-type="ref" reference="A11"} for $n=q$ an odd prime. With this assumption the arguments can be easily conveyed. Let $s$ be the multiplicative order of $\ell$ modulo $q$. Recall that $G_{H(K)/K} \cong H_K$. Suppose $h_K=\ell^t m$ with $\ell \nmid m, t>0$. Let $H$ denote the $\ell$-Sylow subgroup of $C\ell(K)$, then there are integers $n_1, \ldots, n_r$ such that $$\label{ea1} H \cong (\mathbb{Z}/\ell^{n_1}\mathbb{Z}) \oplus \ldots \oplus (\mathbb{Z}/\ell^{n_r}\mathbb{Z}).$$ Next, let $M$ be the unique subgroup of $C\ell(K)$ of order $m$ and let $L$ denote the fixed field of $M$ for the extension $H(K)/K$.
Then $L/K$ is Galois and $G_L/K \cong H$. **Proposition 24**. *The extension $L/\mathbb{Q}$ is Galois.* *Proof.* Suppose the extension $L/\mathbb{Q}$ is not Galois. This means that there exists an embedding $\sigma: L\rightarrow \Bar{\mathbb{Q}}$ such that $\sigma(L)\neq L$. Put $L_\sigma=\sigma(L)$. Clearly, we have $[L: \mathbb{Q}]=[L_\sigma: \mathbb{Q}]=\ell^t[K:\mathbb{Q}]$ and $[L: K]=[L_\sigma: K]=\ell^t$. From this it follows that $$[L\cap L_\sigma :K]= \ell^i \;\text{for some}\; 0\leq i< t.$$ Since $L/K$ is Galois, it follows that $$[L L_\sigma :K]= \ell^{(2t-i)}.$$ From this we get that $\ell^{(t+1)}|h_K$. This contradiction proves the proposition. ◻ Now, we have the following short exact sequence\ $$\label{ea2} 1 \longrightarrow G_{L/K} \longrightarrow G_L \longrightarrow G_K \longrightarrow 1.$$ Analogous to Theorem [Theorem 11](#splitting){reference-type="ref" reference="splitting"}, the exact sequence ([\[ea2\]](#ea2){reference-type="ref" reference="ea2"}) splits. Consequently, one has $$\label{ea21} G_L \cong G_{L/K} \rtimes G_K.$$ From ([\[ea1\]](#ea1){reference-type="ref" reference="ea1"}), it follows that $$C\ell(K)[\ell] \cong \left(\mathbb{Z}/\ell\mathbb{Z}\right)^r.$$ We need to show that $r\geq s$. Suppose this is not true. From Proposition [Proposition 13](#CAAG){reference-type="ref" reference="CAAG"} it follows that $|Aut(G_{L/K})|$ is relatively prime to $q$. Thus if $\sigma$ is a generator of $G_K$, then for any homomorphism $\psi : G_K \longrightarrow Aut(G_{L/K})$ we see that $\psi(\sigma)=Id$. Now we consider the following set $$\label{e3} \mathbf{R}_{K/\mathbb{Q}}[\ell]=\{f \in \mathbb{N}: C\ell(K)[\ell] \mbox{ is generated by primes of residue degree }f\}.$$ Note that $n \in \mathbf{R}_{K/\mathbb{Q}}[\ell]$ implies that $C\ell(K)[\ell]$ is trivial. Thus, showing $n \in \mathbf{R}_{K/\mathbb{Q}}[\ell]$ will complete the proof. This follows from the next theorem. **Theorem 25**. *Let $\tau \in G_K$ be an element of order $f$ and assume that for any homomorphism $\psi: G_K\rightarrow \text{Aut}(G_{L/K})$ we have $\psi(\tau)=Id$. Then each divisor of $f$ is an element of $\mathbf{R}_{K/\mathbb{Q}}[\ell]$.* *Proof.* Let $L$ be the field, defined as above, such that $G_{L/K} \cong H$. From Proposition [Proposition 24](#Pa3){reference-type="ref" reference="Pa3"} it follows that the extension $L/\mathbb{Q}$ is a Galois extension. From ([\[ea21\]](#ea21){reference-type="ref" reference="ea21"}) we obtain a homomorphism $\psi :G_K \rightarrow\text{Aut}(G_{L/K})$ inducing an isomorphism $$\Psi: G_L \rightarrow G_{L/K }\rtimes_{\psi} G_K.$$ We choose elements $\sigma_1, \sigma_2, \dots,\sigma_r$ form $G_{L/K}$ such that the order of $\sigma_i$ is $\ell^{n_i}$ for all $i\in \{1,2,\dots, r\}$ and $$G_{L/K}=\{\prod\limits_{i=1}^{r}\sigma_i^{j_i}: j_i\in \mathbb{Z}\;\text{for each}\;i\}.$$ As $f$ and $q$ are relatively prime, it follows that $$\label{ea4} G_{L/K}= \{\prod\limits_{i=1}^{r}\sigma_i^{fj_i}: j_i\in \mathbb{Z}\;\text{for each}\;i\}.$$ Given that $\psi(\tau)=Id$, we see that the order of $(\sigma_i, \tau)$ in $G{L/K}\rtimes_{\psi} G_K$ is $\ell^{n_i}f$. Since $\Psi$ is an isomorphism, there exist elements $\tau_i$'s in $G_L$ of order $\ell^{n_i}f$ such that $\Psi(\tau_i)= (\sigma_i, \tau)$. 
Applying $\Check{C}$ebotarev density theorem for each $\tau_i$, we obtain prime ideals $\mathfrak{P}_i$ of $L$ lying above the prime $p_i$ such that $$\left(\frac{\mathfrak{P}_i}{L/\mathbb{Q}}\right)=\tau_i\; \text{for each}\; i.$$ Clearly $f(\mathfrak{P}_i|p_i)= \ell^{n_i}f$ for each $i$. Suppose $\mathfrak{p}_i=\mathfrak{P}_i\cap \mathcal{O}_K$, then $$f(\mathfrak{P}_i|p_i)=f(\mathfrak{P}_i|\mathfrak{p}_i) \times f(\mathfrak{p}_i|p_i) .$$ As $\ell$ is relatively prime to $n$ and $f(\mathfrak{p}_i|p_i)|n$, we conclude that $$f(\mathfrak{P}_i|\mathfrak{p}_i)=\ell^{n_i} \mbox{ and } f(\mathfrak{p}_i|p_i)=f.$$ If $$\left(\frac{\mathfrak{p}_i}{L/K}\right)=\tau'_i\; \text{for each}\; i.$$ Then we have $\tau_i^f=\tau'_i$ and order of $\tau'_i=\ell^{n_i}$ for each $i$. Observe that $\tau'_i$'s generate $G_{L/K}$. Thus, by the Artin isomorphism theorem the ideal classes of $\mathfrak{p}_i$'s generate the group $H$. Since $H$ contains $C\ell(K)[\ell]$, it shows that $f\in \mathbf{R}_{K/\mathbb{Q}}[\ell]$.\ Let $f'$ be a positive divisor of $f$ and $f=af'$. Working with $\tau'=\tau^a$ in the above proof shows that $f'\in \mathbf{R}_{K/\mathbb{Q}}[\ell]$. ◻ This completes the proof of Theorem [Theorem 5](#A11){reference-type="ref" reference="A11"} when $n=q$ is a prime. Similar arguments apply in the general case too. Next we prove Theorem [Theorem 6](#A3){reference-type="ref" reference="A3"}. *Proof.* (Theorem [Theorem 6](#A3){reference-type="ref" reference="A3"}) Let $\sigma \in \text{Gal}(\mathbb{Q}(\zeta_{\ell})/\mathbb{Q})$ be an element of order $f$. As $f$ is relatively prime to $|Aut(C\ell(\mathbb{Q}(\zeta_{\ell})))|$, for any homomorphism $\psi : \text{Gal}(Q(\zeta_{\ell})/\mathbb{Q}) \longrightarrow Aut(C\ell(\mathbb{Q}(\zeta_{\ell})))$ we must have $\psi(\sigma)=Id$. From Theorem [Theorem 1](#Main){reference-type="ref" reference="Main"}, we obtain $f \in \mathbf{R}_{\mathbb{Q}(\zeta_{\ell})/\mathbb{Q}}$.\ Let $\theta_f$ be the annihilator as defined in ([\[ann\]](#ann){reference-type="ref" reference="ann"}). The Galois group $\text{Gal}(\mathbb{Q}(\zeta_{\ell})/\mathbb{Q})$ has unique subgroup of orders $f$ and $(\ell-1)/f$, say, $H_1$ and $H_2$ respectively. By Lemma [Lemma 12](#el){reference-type="ref" reference="el"}, we see that $$\label{51} \theta_f =\sum_{\sigma \in H_2} \sigma.$$ Let $F$ be the subfield of $\mathbb{Q}(\zeta_{\ell})$ of degree $f$. Then $\text{Gal}(\mathbb{Q}(\zeta_{\ell})/F)=H_2$. Let $\mathfrak{P}$ be an unramified prime ideal of $\mathbb{Q}(\zeta_{\ell})$ and $\mathfrak{p}$ be the prime ideal of $F$ below $\mathfrak{P}$. From ([\[51\]](#51){reference-type="ref" reference="51"}) it follows that $$\theta_f(\mathfrak{P})=\mathfrak{p}^t \mathbb{Z}[\zeta_{\ell}],$$ where $t=f(\mathfrak{P}|\mathfrak{p})$ is a divisor of $(\ell -1)/f$. Since $\theta_f$ is an annihilator, the ideal $\mathfrak{p}^t \mathbb{Z}[\zeta_{\ell}]$ must be principal. From the hypothesis, $t$ is relatively prime to $C\ell(\mathbb{Q}(\zeta_{\ell}))$. Hence $\mathfrak{p}\mathbb{Z}[\zeta_{\ell}]$ must be principal. Taking norm, we obtain $$N_{\mathbb{Q}(\zeta_{\ell})/F}(\mathfrak{p}\mathbb{Z}[\zeta_{\ell}])=\mathfrak{p}^{(\ell-1)/f}$$ is principal. Thus the exponent of the class group $H_F$ is at most $(\ell-1)/f$. ◻ Looking at the table of class numbers of maximal real cyclotomic fields in [@LW91], it looks like that the class numbers $C\ell(\mathbb{Q}(\zeta_{\ell}))^{+}$ are a power of $2$ with good frequency. 
Our Theorem [Theorem 6](#A3){reference-type="ref" reference="A3"} can be used to conclude that, in many of such cases, the class group $$C\ell(\mathbb{Q}(\zeta_{\ell}))^{+} \cong \left(\frac{\mathbb{Z}}{2\mathbb{Z}}\right)^s,$$ for some integer $s$. Here, by many of the cases we mean when $(\ell-1)/2$ is odd, $C\ell(\mathbb{Q}(\zeta_{\ell}))^{-}$ is odd and $(\ell-1)/2$ is relatively prime to $\phi(C\ell(\mathbb{Q}(\zeta_{\ell}))^{-})$.\ In the remaining part of this section we give two more consequences of our study. There is good overlap between proof of Theorem [Theorem 6](#A3){reference-type="ref" reference="A3"} and the following theorem, still we provide details. **Theorem 26**. *Let $K/\mathbb{Q}$ be a cyclic extension of degree $6$ and $F$ be the intermediate field of degree $3$. Assume that $h_K$ is relatively prime to $6$, and $3$ does not divide $|Aut(C\ell(K))|$. Then the exponent of the class group $H_F$ is at most $2$. Moreover, there exists an integer $s \geq 0$ such that $H_F \cong (\mathbb{Z}/2\mathbb{Z})^s$.* *Proof.* (Theorem [Theorem 26](#A2){reference-type="ref" reference="A2"}) Using Theorem [Theorem 16](#CMain){reference-type="ref" reference="CMain"}, we see that $3 \in \mathbf{R}_{K/\mathbb{Q}}$. Consider the corresponding annihilator $\theta_3$ as defined in ([\[ann\]](#ann){reference-type="ref" reference="ann"}). The Galois group $G_K$ has unique subgroups of orders $3$ and $2$, say, $H_1$ and $H_2$ respectively. We observe that $\text{Gal}(K/F) \cong H_2$. From Lemma [Lemma 12](#el){reference-type="ref" reference="el"}, we see that $$\theta_3=\sum_{\sigma \in H_2} \sigma.$$ Let $\mathfrak{P}$ be a prime ideal of $K$ and $\mathfrak{p}$ denote the prime of $F$ below $\mathfrak{P}$. If $f(\mathfrak{P}|\mathfrak{p})=t$ then $t|2$, and from the Dedekind theorem we obtain $$\theta_3(\mathfrak{P})=\mathfrak{p}^t \mathbb{O}_K.$$ Using Theorem [Theorem 3](#AT){reference-type="ref" reference="AT"}, we see that $\mathfrak{p}^t \mathbb{O}_K$ is principal. As $t|2$ and $2$ does not divide $h_K$ we conclude that $\mathfrak{p} \mathbb{O}_K$ is principal. Now, taking norm gives $$N_{K/F}(\mathfrak{p} \mathbb{O}_K)=\mathfrak{p}^2$$ is principal. Thus for any prime $\mathfrak{p}$ of $F$ the ideal $\mathfrak{p}^2$ is principal. Hence the exponent of $H_F$ is a divisor of $2$. This proves the Theorem. ◻ **Theorem 27**. *Let $K$ be a number field with class number bigger than one. If $H(K)$ denotes the Hilbert class field of $K$ then the extension $H(K)/\mathbb{Q}$ is never cyclic.* *Proof.* If possible, suppose $H(K)/\mathbb{Q}$ is cyclic and the Galois group $G_{H(K)}$ is generated by $\sigma$. From Theorem [Theorem 9](#CDT){reference-type="ref" reference="CDT"}, there is a prime ideal $\mathfrak{P}$ of $H(K)$ such that $$\label{e61} \left( \frac{\mathfrak{P}}{H(K)/\mathbb{Q}} \right) =\sigma.$$ Let $\mathfrak{p}=\mathfrak{P} \cap \mathbb{O}_K$ and $p=\mathfrak{P} \cap \mathbb{Z}$. Then $$\label{e62} f(\mathfrak{P}|p)=[H(K):\mathbb{Q}] \mbox{ and }f(\mathfrak{p}|p)=n.$$ From this it follows that the ideal $\mathfrak{p}$ is a principal ideal. 
On the other hand, by Lemma [Lemma 7](#FR){reference-type="ref" reference="FR"} we have $$\label{e63} \left( \frac{\mathfrak{p}}{H(K)/K} \right)=\left( \frac{\mathfrak{P}}{H(K)/K} \right)=\left( \frac{\mathfrak{P}}{H(K)/\mathbb{Q}} \right)^{f(\mathfrak{p}|p)}=\sigma^{n}.$$ Since $\sigma^{n}$ generates the Galois group $G_{H(K)/K}$, from Lemma [Lemma 8](#HCF){reference-type="ref" reference="HCF"} it follows that the class group $C\ell(K)$ is generated by prime ideal $\mathfrak{p}$. This implies that $C\ell(K)$ is trivial. This contradiction establishes the Theorem. ◻ We end this section with the remark that the consequences mentioned here are by no means exhaustive. Some more consequences will appear in second author's thesis. # Explicit Examples We illustrate few examples explicitly, mainly among cyclotomic fields. There are many examples, we illustrate few explicitly and record some in a table. For the value of class numbers we used the tables given in [@LW91]\ 1. Consider $K=\mathbb{Q}(\zeta_{163}^{+})$. Then $h_K=4$, and from Proposition [Proposition 13](#CAAG){reference-type="ref" reference="CAAG"} we obtain $|Aut(C\ell(K))|$ divides $6$. Using Theorem [Theorem 2](#main){reference-type="ref" reference="main"} we see that $3, 9, 27 \in \mathbf{R}_{\mathbb{Q}(\zeta_{163}^{+})/\mathbb{Q}}$. . Consider $K=\mathbb{Q}(\zeta_{191}^{+})$. Then $h_K=11$. Using Theorem [Theorem 16](#CMain){reference-type="ref" reference="CMain"} we see that $19 \in \mathbf{R}_{\mathbb{Q}(\zeta_{191}^{+})/\mathbb{Q}}$. . Consider $K=\mathbb{Q}(\zeta_{313}^{+})$. Then $h_K=7$. Using Theorem [Theorem 16](#CMain){reference-type="ref" reference="CMain"} we see that $13 \in \mathbf{R}_{\mathbb{Q}(\zeta_{313}^{+})/\mathbb{Q}}$. . Consider $K=\mathbb{Q}(\zeta_{457}^{+})$. Then $h_K=5$. Using Theorem [Theorem 16](#CMain){reference-type="ref" reference="CMain"} we see that $3, 19, 57 \in \mathbf{R}_{\mathbb{Q}(\zeta_{457}^{+})/\mathbb{Q}}$. . Consider $K=\mathbb{Q}(\zeta_{457}^{+})$. Then $h_K=5$. Using Theorem [Theorem 16](#CMain){reference-type="ref" reference="CMain"} we see that $3, 19, 57 \in \mathbf{R}_{\mathbb{Q}(\zeta_{457}^{+})/\mathbb{Q}}$. . Consider $K=\mathbb{Q}(\zeta_{547}^{+})$. Then $h_K=4$, and from Proposition [Proposition 13](#CAAG){reference-type="ref" reference="CAAG"} we obtain $|Aut(C\ell(K))|$ divides $6$. Using Theorem [Theorem 16](#CMain){reference-type="ref" reference="CMain"} we see that $7, 13 \in \mathbf{R}_{\mathbb{Q}(\zeta_{547}^{+})/\mathbb{Q}}$. . Take $K=\mathbb{Q}(\zeta_{1399}^{+})$. Then $h_K=4$, and from Proposition [Proposition 13](#CAAG){reference-type="ref" reference="CAAG"} we obtain $|Aut(C\ell(K))|$ divides $6$. Using Theorem [Theorem 16](#CMain){reference-type="ref" reference="CMain"} we get $233 \in \mathbf{R}_{\mathbb{Q}(\zeta_{1399}^{+})/\mathbb{Q}}$. . Take $K=\mathbb{Q}(\zeta_{1459}^{+})$. Then $h_K=247=13\times 19$. Using Theorem [Theorem 2](#main){reference-type="ref" reference="main"} we obtain $3, 9 \in \mathbf{R}_{\mathbb{Q}(\zeta_{1459}^{+})/\mathbb{Q}}$. . Suppose $K=\mathbb{Q}(\zeta_{1699}^{+})$. Then $h_K=4$, and from Proposition [Proposition 13](#CAAG){reference-type="ref" reference="CAAG"} we find that $|Aut(C\ell(K))|$ divides $6$. Using Theorem [Theorem 16](#CMain){reference-type="ref" reference="CMain"} we get $283 \in \mathbf{R}_{\mathbb{Q}(\zeta_{1699}^{+})/\mathbb{Q}}$. . Take $K=\mathbb{Q}(\zeta_{1879}^{+})$. Then $h_K=4$, and from Proposition [Proposition 13](#CAAG){reference-type="ref" reference="CAAG"} we obtain $|Aut(C\ell(K))|$ divides $6$. 
Using Theorem [Theorem 16](#CMain){reference-type="ref" reference="CMain"} we see that $313 \in \mathbf{R}_{\mathbb{Q}(\zeta_{1879}^{+})/\mathbb{Q}}$. The class number of $\mathbb{Q}(\zeta_{96})$ is $9$. From Proposition [Proposition 18](#Splitting-4){reference-type="ref" reference="Splitting-4"}, we see that ([\[44\]](#44){reference-type="ref" reference="44"}) holds true. The group $Aut(C\ell(K))$ has either $6$ elements or $48$ elements. In either case, a $2$-subgroup of $Aut(C\ell(K))$ has at most $16$ elements. On the other hand, $\text{Gal}(\mathbb{Q}(\zeta_{96})/\mathbb{Q})$ is a group of order $32$ all of whose elements have order $2^r$ for some $r$. Thus the kernel of any homomorphism $\psi : \text{Gal}(\mathbb{Q}(\zeta_{96})/\mathbb{Q}) \longrightarrow Aut(C\ell(K))$ is non-trivial and therefore contains an element of order $2$. Now proceeding as done in the proof of Theorem [Theorem 1](#Main){reference-type="ref" reference="Main"}, we can show that $2 \in \mathbf{R}_{\mathbb{Q}(\zeta_{96})/\mathbb{Q}}$.

In the table below, we list some primes $\ell$, the corresponding class numbers $C\ell(\mathbb{Q}(\zeta_{\ell}^{+}))$ of real cyclotomic fields, and elements in $\mathbf{R}_{\mathbb{Q}(\zeta_{\ell}^{+})/\mathbb{Q}}$ which we could obtain from our study. In most of the examples recorded here, it turns out that the class number is prime (even though this was not necessary for the application of our results). There are many more examples possible; we do not try to list them all. This table is given just as an illustration of the outcomes of our study. Many more examples will appear in the second author's thesis.

| $\ell$ | $\lvert C\ell(\mathbb{Q}(\zeta_{\ell}^{+}))\rvert$ | $\phi(\lvert C\ell(\mathbb{Q}(\zeta_{\ell}^{+}))\rvert)$ | $\mathbf{R}_{\mathbb{Q}(\zeta_{\ell}^{+})/\mathbb{Q}}$ |
|------|-----|-----|----------------------------|
|      | 11  | 10  | 3, 7, 9, 21, 63            |
| 761  | 3   | 2   | 5, 19, 95                  |
| 821  | 11  | 10  | 41                         |
| 829  | 47  | 46  | 3, 9                       |
| 857  | 5   | 4   | 107                        |
| 953  | 71  | 70  | 17                         |
| 977  | 5   | 4   | 61                         |
| 1063 | 13  | 12  | 59                         |
| 1069 | 7   | 6   | 89                         |
| 1093 | 5   | 4   | 3, 7, 13, 21, 39, 91, 273  |
| 1229 | 3   | 2   | 307                        |
| 1231 | 211 | 210 | 41                         |
| 1373 | 3   | 2   | 7, 49, 343                 |
| 1381 | 7   | 6   | 5, 23, 115                 |
| 1429 | 5   | 4   | 3, 7, 17, 21, 51, 119, 357 |
| 1567 | 7   | 6   | 29                         |
| 1601 | 7   | 6   | 5, 25                      |
| 1697 | 17  | 16  | 53                         |
| 1831 | 7   | 6   | 5, 61, 305                 |
| 1861 | 11  | 10  | 3, 31, 93                  |
| 1901 | 3   | 2   | 5, 19, 25, 95, 475         |
| 1987 | 7   | 6   | 331                        |
| 2029 | 7   | 6   | 13, 169                    |
| 2113 | 37  | 36  | 11                         |
| 2213 | 3   | 2   | 7, 79, 553                 |
| 2351 | 11  | 10  | 47                         |
| 2381 | 11  | 10  | 7, 17, 119                 |

# Concluding remarks

Computing the class numbers $C\ell(\mathbb{Q}(\zeta_{\ell}))^{+}$ of the real cyclotomic fields $\mathbb{Q}(\zeta_{\ell}^{+})$ is an arduous task [@LF82; @LW91]. In a remarkable work [@RS03], Schoof proposed a subgroup of $C\ell(\mathbb{Q}(\zeta_{\ell}))^{+}$ whose cardinality $\tilde{C\ell}(\mathbb{Q}(\zeta_{\ell}))$ can easily be computed. Conjecturally this subgroup equals the class group $C\ell(\mathbb{Q}(\zeta_{\ell}^{+}))$. We wonder if our study can be used to lend credence to this conjecture. Using this conjectural equality we can obtain elements $f$ in $\mathbf{R}_{\mathbb{Q}(\zeta_{\ell}^{+})/\mathbb{Q}}$. Now these $f$'s can be used to get information on class groups of intermediate fields. Many times, the class groups of these intermediate fields are easier to study. A match between the actual computation and the prediction from our study should work as a support to the above-mentioned conjecture of Schoof.

We believe that the study of primes of higher degree has good potential.
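As a small aside, the divisibility checks behind the Theorem [Theorem 16](#CMain){reference-type="ref" reference="CMain"}-type entries of the table above are easy to automate. The following sketch is only an illustration and is not part of the arguments of this paper: it assumes that $C\ell(\mathbb{Q}(\zeta_{\ell}^{+}))$ is cyclic of the tabulated order $h$ (automatic for the rows where $h$ is prime), so that $|Aut(C\ell(\mathbb{Q}(\zeta_{\ell}^{+})))|=\phi(h)$, and it lists the divisors $f>1$ of $(\ell-1)/2$ which are relatively prime to $\phi(h)$, as in Theorem [Theorem 16](#CMain){reference-type="ref" reference="CMain"}.

```python
from math import gcd

def euler_phi(m):
    # Euler's totient by trial division; sufficient for table-sized inputs.
    result, rest, p = m, m, 2
    while p * p <= rest:
        if rest % p == 0:
            while rest % p == 0:
                rest //= p
            result -= result // p
        p += 1
    if rest > 1:
        result -= result // rest
    return result

def residue_degrees(ell, h):
    # Divisors f > 1 of n = (ell-1)/2 allowed by Theorem 16, assuming the
    # class group of Q(zeta_ell)^+ is cyclic of order h, so |Aut| = phi(h).
    n = (ell - 1) // 2                      # degree of Q(zeta_ell)^+ over Q
    if gcd(n, h) != 1:                      # hypothesis of Theorem 16
        return []
    aut = euler_phi(h)
    return [f for f in range(2, n + 1) if n % f == 0 and gcd(f, aut) == 1]

# Two sample rows of the table above:
print(residue_degrees(761, 3))     # [5, 19, 95]
print(residue_degrees(1093, 5))    # [3, 7, 13, 21, 39, 91, 273]
```

For every row of the table displayed with its value of $\ell$, the output of this check agrees with the last column.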
It is desirable to obtain some analytic method to ensure some elements $f>1$ in $\mathbf{R}_{L/F}$ along the lines of $1\in \mathbf{R}_{L/F}$. 100 Bhargava, M., Shankar, A., Taniguchi, T., Thorne, F., Tsimerman, J., Zhao, Y.: Bounds on $2-$torsion in class groups of number fields and integral points on elliptic curves. *J. Amer. Math. Soc.* **33** (2020), no. 4, 1087-1099. Y. F. Bilu. (after Mihailescu). Seminaire Bourbaki, Vol. 2002-2003, Exp. 909, Asterisque, **294** (2004), 1--26. Y. F. Bilu, Y. Bugeaud and M. Mignotte. Springer, 2014. Gary Cornell, Michael Rosen. . Journal of Number Theory, **28**, 152-158 (1988). Ellenberg, J. S., Venkatesh, A.: Reflection principles and bounds for class group torsion. *Int. Math. Res. Not.*, IMRN 2007, no. 1, Art. ID rnm002, 18pp. Ellenberg, J.S.,Pierce, L. B. , Wood, M. M.: On $\ell-$torsion in class groups of number fields. *Algebra Number Theory* **11** (2017), no. 8, 1739-1778. Cornelius Greither. . Jahresbericht der Deutschen Mathematiker-Vereinigung, **123** (2021), 153-181. C. Greither, R. Kucera. . Manuscripta Math., **166** (2021), no. 1-2, 277--86. Helfgott, H., A.,Venkatesh, A.: Integral points on elliptic curves and $3$-torsion in class groups. *J. Amer. Math. Soc.* **19** (2006), no. 3, 527-550. C. J. Hillar, D. L. Rhea. . American Mathematical Monthly, **114** (2007), no.10, 917-923. G. J. Janusz. , second edition, . Volume **7**, American Math. Society, 1996. Koymans, P., Pagano, C.: On the distribution of $Cl(K)[\ell^{\infty}]$ for degree $\ell$ cyclic fields. *J. Eur. Math. Soc.* **24** (2022), no. 4, 1189-1283. Linden, F. van der Math. Comp., **39** (1982), 693-707. Nimish K. Mahapatra, Prem Prakash Pandey, Mahesh K. Ram. , submitted. http://arxiv.org/abs/2209.00328 Jürgen Neukirch. . Volume 322. Springer-Verlag, Berlin, 1999. J.-M. Pan, The order of the automorphism group of finite abelian group. *J. Yunnan Univ. Nat. Sci.* **26** (2004), 370-372. Prem Prakash Pandey. . J. Ramanujan Math. Soc., **34** (2019), 143--150. Pierce, L., B.: The $3-$part of class numbers of quadratic fields. *J. London Math. Soc. (2)* **71** (2005), no. 3, 579-598. Pierce, L., B., Turnage-Butterbaugh, C., L., Wood, M., M.: An effective Chebotarev density theorem for families of number fields, with an application to $\ell-$torsion in class groups. *Invent. Math.* **219** (2020), no. 2, 701-778. Pierce, L., B., Turnage-Butterbaugh, C., L., Wood, M., M.:On a conjecture for $\ell-$torsion in class groups of number fields: from the perspective of moments. *Mathematical Research Letters*, Volume 28 (2021), no. 2, 575-621. A. Ranum, The group of classes of congruent matrices with application of the group of isomoprhisms of any abelian group. *Trans. Amer. Math. Soc.* **8** (1907) 71-91. J. W. Sands, Brett A. Tangedal. . Math. Comp., **87** (2018), 2937--2953. R. Schoof. . Math. Comp., **72** (2003), 913--937. R. Schoof. . Universitext, Springer, 2008. J. Tate. . Seminar on number theory, Exp. No. 24, 16 pp., 1980-81 F. Thaine. . Ann. of Math., **128** (1988), 1--18. Wang, J.: Pointwise bound for $\ell-$torsion in class groups: elementary abelain extensions. *J. Reine Angew. Math.* **773** (2021), 129-151. L. C. Washington. , Second Edition, Springer, 1991. B. F. Wyman. . Scripta Math., **29** (1973), 141-149.
--- abstract: | Following the techniques and notions we defined in our previous article, we define the notion of twisted differential operator of finite radius. We show that this notion is independent of the choice of the endomorphisms. In the case of $q$-coordinates we obtain an equivalence between $\eta^\dagger$-convergent modules of finite type endowed with an integrable connection and the modules of finite type endowed with an action by $\underline{q}$-differential operators of the same radius. author: - Pierre Houédry bibliography: - biblio.bib date: 5 june 2023 title: Twisted calculus in several variables --- # Introduction {#introduction .unnumbered} In [@moi] we defined some objects and developed some techniques to investigate twisted differential operators in several variables. We now wish to apply those technique to study the phenomenon of $p$-adic confluence, following the work of André-Di Vizio and Pulita. However, our framework is more general. We consider a Tate ring $A$, it is a Huber ring admiting an invertible toplogically nilpotent element. It is used to defined a submultiplicative norm on our ring. We assume that our ring is endowed with classical and symmetrical $\underline{\sigma}$-coordinates so that the objects defined in our previous paper are well defined. For a positive real number $\eta$ close enough to $1$, we define the notion of $\eta^\dagger$-convergent $\text{D}^{(\infty)}_{\underline{\sigma}}$-module by requiring that the twisted Taylor series has a radius of convergence at least equal to $\eta$. In a similar manner to [@GLQ4], we introduce the notion of $\eta$-convergent twisted differential operator (and $\eta^\dagger$-convergent by going to the limit) and construct two rings $\text{D}_{\underline{\sigma}}^{(\eta)}$ and $\text{D}_{\underline{\sigma}}^{(\eta^\dagger)}$.We then show an equivalence between the category of $\text{D}_{\underline{\sigma}}^{(\eta^\dagger)}$-modules that are of finite type over $A$ and the category of $\eta^\dagger$-convergent $\text{D}^{(\infty)}_{\underline{\sigma}}$-module of finite type over $A$. If there exists some others $R$-linear continuous endomorphisms $\underline{\tau}=(\tau_1,\ldots,\tau_d)$ of $A$ that commute such that the coordinates $\underline{x}$ are also classical and symmetrical $\underline{\tau}$-coordinates we have for $\eta$ close enough to $1$, an isomorphism $\text{D}_{\underline{\sigma}}^{(\eta)} \simeq \text{D}_{\underline{\tau}}^{(\eta)}.$ As an application we can look at the case of $\underline{q}$-coordinates: there exists some elements $\underline{q}=\lbrace q_1,\ldots,q_d \rbrace$ of $R$ such that $\underline{x}$ are classical and symmetrical $\underline{q}$-coordinates, $$\forall i=1,\ldots,d; ~ \forall n \in \mathbb{N}, ~ (n)_{q_i} \in R^\times,$$ and $A$ is $\eta^\dagger$-convergent then, we have the following equivalence of categories $$\nabla_{\underline{\sigma}}^{\text{Int}}\text{-}\text{Mod}_{\text{tf}}^{(\eta^\dagger)}(A/R) \simeq \text{D}_{\underline{\sigma}}^{(\eta^\dagger)}\text{-}\text{Mod}_{\text{tf}}(A/R),$$ between the category of $\eta$-convergent finite module over $A$ endowed with an integrable twisted connection $\nabla_{\underline{\sigma}}^{\text{Int}}\text{-}\text{Mod}_{\text{tf}}^{(\eta^\dagger)}(A/R)$ and the category $\text{D}_{\underline{\sigma}}^{(\eta^\dagger)}\text{-}\text{Mod}_{\text{tf}}(A/R)$ of $\text{D}_{\underline{\sigma}}^{(\eta^\dagger)}$-modules of finite type over $A$. We recover a result in the spirit of [@ADV04] and [@pulita]. 
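To fix ideas before the general constructions, consider the simplest one-variable case (this is only an orientation sketch, under the extra assumptions that $d=1$, that $\sigma$ acts on functions of the coordinate $x$ by $\sigma(f)(x)=f(qx)$ for some $q \in R^{\times}$, and that $(q-1)x$ is invertible; none of these assumptions is imposed in the body of the text). The identity $\sigma(f)=f+(\sigma(x)-x)\partial_{\sigma}(f)$ characterizing classical coordinates then reads $$\partial_{\sigma}(f)(x)=\frac{f(qx)-f(x)}{(q-1)x}, \qquad \partial_{\sigma}(x^{n})=\frac{q^{n}-1}{q-1}\,x^{n-1}=(n)_{q}\,x^{n-1},$$ i.e. $\partial_{\sigma}$ is the Jackson $q$-derivative. The invertibility of the $q$-analogues $(n)_{q_i}$ required above is what allows one to pass between the iterated twisted derivatives and their divided-power counterparts.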
# Twisted algebras ## Huber rings **Definition 1**. A topological ring $A$ is said to be a *Huber ring* if there exists an open subring $A_0$ of $A$ and a finitely generated ideal $I_0$ of $A_0$ such that $\lbrace I_0^n\rbrace_{n \in \mathbb{N}}$ is a fundamental system of neighborhoods of 0 in $A_0$. The subring $A_0$ is called a *ring of definition* of $A$ and the ideal $I_0$ is called an *ideal of definition* of $A$. A *couple of definition* $(A_0,I_0)$ is the couple given by a ring of definition and an ideal of definition. In the case where we can choose $A$ as a ring of definition, the ring is said to be *adic*. Moreover, if $\lbrace 0 \rbrace$ is an ideal of definition, the ring $A$ is said to be *discrete*. **Examples 1**. 1. Let $A$ be a Huber ring. For every natural integer $n$, the ring $A[X_1,\ldots,X_n]$ can be endowed with a structure of Huber ring,that induces the topology on $A$, with $A_0[X_1,\ldots,X_n]$ as a ring of definition and $I_0A_0[X_1,\ldots,X_n]$ as an ideal of definition. 2. Let $K$ be a *non-archimedean field*: by that we mean a topological field for a non-trivial non-archimedean absolute value $|\cdot|$. There exists an element $\pi$ of $K$ such that $0<|\pi|<1$. The field $K$ is a Huber ring: the set $\left\lbrace |x|\leq 1 \right\rbrace$ is a ring of definition and $(\pi)$ is an ideal of definition. 3. Let $d$ be a natural integer. In the previous situation, we consider the following ring $$A=\left\lbrace \sum_{\underline{k}\in \mathbb{Z}^d} a_{\underline{k}}\underline{X}^{\underline{k}},a_{\underline{k}} \in K, |a_{\underline{k}}|\prod_{i=1}^d \max (1, |\pi|^{k_i}) \rightarrow 0 \text{ when } \underline{k} \rightarrow +\infty \right\rbrace ,$$ endowed with the topology by the Gauss norm $$\left\lVert \sum_{\underline{k} \in \mathbb{Z}^d} a_{\underline{k}} \underline{X}^{\underline{k}} \right\rVert = \max \left\lbrace \sup_{|\underline{k}| \geq 0} |a_{\underline{k}}|, \sup_{|\underline{k}|<0} |\pi|^{|\underline{k}|}|a_{\underline{k}}| \right\rbrace.$$ With this topology, this ring is a Huber ring. A couple of definition is given by the following subring and ideal $$\begin{aligned} A_0=\left\lbrace \sum_{\underline{k} \in \mathbb{Z}^d} a_{\underline{k}} \underline{X}^{\underline{k}} \in A \text{ such that } |a_{\underline{k}}| \leq 1 \text{ if } |\underline{k}| \geq 0, ~|\pi^{|\underline{k}|}a_{\underline{k}}| \leq 1 \text { if } |\underline{k}|<0 \right\rbrace, \\ I_0=\left\lbrace \sum_{\underline{k} \in \mathbb{Z}^d} a_{\underline{k}} \underline{X}^{\underline{k}} \in A \text{ such } |a_{\underline{k}}| < 1 \text{ if } |\underline{k}| \geq 0, ~|\pi^{|\underline{k}|}a_{\underline{k}}| < 1 \text { si } |\underline{k}|<0 \right\rbrace.\end{aligned}$$ **Definition 2**. Let $A$ and $B$ be two Huber rings. A *morphism of Huber rings* $u: A \rightarrow B$ is a morphism of rings which is continuous. This is equivalent to require that there exists a ring of definition $A_0$ of $A$ (resp. $B_0$ of $B$) and an ideal of definition $I_0 \subset A_0$ (resp. $J_0\subset B_0$) such that $u(A_0) \subset B_0$ and $u(I_0) \subset J_0$. In the case where the ideal generated by $u(I_0)$ is an ideal of definition of $B$, the morphism is said to be *adic*. **Definition 3**. If $R \rightarrow A$ is a morphism of Huber rings, $A$ is said to be a *Huber $R$-algebra*. If the morphism is adic $A$ is said to be a *$R$-adic alegbra*. **Remarks 1**. 1. A $R$-adic algebra is not necessarily an adic ring. 2. Let $R$ be a Huber ring with $(A_0,I_0)$ as a couple of definition. 
If $M$ is a $A$-module of finite type, we can endow it with the natural topology, that is the universal topology for the continuous $A$-linear morphisms $M\rightarrow N$ into topological $A$-modules. In that case, every $A_0$-module $M_0$ of $M$ that generates $M$ is necessarily open in $M$ and the topology of $M_0$ is the $I_0$-adic topology. **Definition 4**. A *Tate ring* $A$ is a Huber ring such that there exists a topologically nilpotent invertible element $\pi$ of $A$. **Proposition 5**. *The topology of a Huber ring can always be defined by a submultiplicative semi-norm. In the case where the ring is Tate, the topology can be defined by a submultiplicative norm.* *Proof.* We refer the reader to [@moi Proposition 1.8]. ◻ **Definition 6**. A submultiplicative norm $\lVert ~\rVert$ over a ring $A$ is said to be *contractive* with respect to an endomorphism $\varphi$ of $A$ if $\forall x \in A, ~ \lVert \varphi (x) \rVert \leq \lVert x \rVert$. **Definition 7**. Let $R$ be a Huber ring, $A$ a $R$-adic algebra and $d\in \mathbb{N}$. The $R$-algebra $A$ is said to be *twisted of order $d$* if it is endowed with a sequence of $R$-linear continuous endomorphisms $\underline{\sigma}=\left(\sigma_{A,1},\ldots,\sigma_{A,d} \right)$ that commute. Set $P_A= A \otimes_R A$. We extend $\sigma_i$ to $P_A$ by setting $\sigma_i(a \otimes b)=\sigma_i(a) \otimes b$. If no confusion can arise, we will simply write $\sigma_i$ instead $\sigma_{A,i}$. The data of the ring and the set of endomorphisms will be denoted under the form of a couple $(A,\underline{\sigma})$. ## Twisted principal parts In what follows we consider a Huber ring $R$ and a twisted $R$-adic algebra $(A,\underline{\sigma})$ of order $d$. We also consider some elements $\underline{x}=(x_1,\ldots,x_d)$ of $A$ on which we will later put various conditions. We will use the same notation $\sigma_i$ for the endomorphism of the ring of polynomials $A\left[\underline{\xi}\right]$ such that for every $i,j \in \lbrace 1,\ldots,d \rbrace$ $$\sigma_i(\xi_j)= \begin{cases} \xi_i+x_i-\sigma_i(x_i),& \text{if } j=i\\ \xi_j, & \text{otherwise.} \end{cases}$$ **Definition 8**. For every natural integer $n$, the $A$-module $P_{A, (n)_{\underline{\sigma}, \underline{x}}}$ of *twisted principal parts of order $n$* is the $A$-module defined by $$P_{A, (n)_{\underline{\sigma},\underline{x}}}:=A[\underline{\xi}] \Big/ \left(\underline{\xi}^{(\underline{k})_{\underline{\sigma},\underline{x}}} \textrm{ such that } |\underline{k}|=n+1\right)$$ where for $\underline{k} \in \mathbb{N}^d$ $$\underline{\xi}^{(\underline{k})_{\underline{\sigma},\underline{x}}}:=\xi_1^{(k_1)_{\sigma_1}}\ldots\xi_d^{(k_d)_{\sigma_d}} \textrm{ and } \xi_i^{(k_i)_{\sigma_i}}:=\xi_i \sigma_i(\xi_i) \ldots \sigma_i^{k_i-1} (\xi_i).$$ This gives $$\underline{\xi}^{(\underline{k})_{\underline{\sigma},\underline{x}}} = \prod_{i=1}^d \prod_{j=0}^{k_i-1} \left(\xi_i+x_i-\sigma_i^j(x_i)\right). \label{twist}$$ In the case where $d=1$, this is the module defined by Le Stum and Quirós in [@LSQ3 Definition 1.5]. **Remark 1**. As remarked in the examples following the definition [Definition 1](#anneauxdehuber){reference-type="ref" reference="anneauxdehuber"}, the ring of polynomials $A[\underline{\xi}]$ can be endowed with a structure of Huber rings inducing the topology on $A$. The ring $P_{A, (n)_{\underline{\sigma}, \underline{x}}}$ endowed with the quotient topology is also a Huber ring. Here and subsequently we will put this topology on it. **Proposition 9**. 
*For every natural integer $n$, the composed morphism $$A[\underline{\xi}]_{\leq n} \hookrightarrow A[\underline{\xi}]\twoheadrightarrow P_{A, (n)_{\underline{\sigma}, \underline{x}}}$$ is an isomorphism.* *Proof.* We refer the reader to [@moi Proposition 4.4] ◻ **Definition 10**. The elements $x_1,\ldots,x_d$ are *$\underline{\sigma}$-coordinates* over $A$ if, for every natural integer $n$, there exists an unique adic morphism of $R$-algebras $$\begin{aligned} \Theta_{A,(n)_{\underline{\sigma},\underline{x}}} \colon A &\to P_{A, (n)_{\underline{\sigma},\underline{x}}} \\ x_i &\mapsto x_i+\xi_i \end{aligned}$$ such that the composition of the projection on $A$ with $\Theta_{A,(n)_{\underline{\sigma},\underline{x}}}$ is the identity. The morphism $\Theta_{A,(n)_{\underline{\sigma},\underline{x}}}$ is called the *$n$-th Taylor morphism* of $A$ with respect to $\underline{\sigma}$ and $\underline{x}$. **Remark 2**. Let $\underline{q}=( q_1,\ldots,q_d )$ be elements of $R$. In the situation where $$\forall i,j \in \lbrace 1,\ldots,d\rbrace, \sigma_i(x_j)= \begin{cases} q_ix_i & \text{ if } i=j\\ x_j & \text{ otherwise,} \end{cases}$$ and $\underline{x}$ are $\underline{\sigma}$-coordinates, we will say that they are $\underline{q}$-coordinates to emphasize the link with the elements $q_1,\ldots,q_d$. In the case $d=1$, this is the situation described in [@ADV04]. Let $\underline{x}$ be $\underline{\sigma}$-coordinates over $A$. Consider for a natural integer $n$ the morphism $$\begin{aligned} \tilde{\Theta}_{A,(n)_{\underline{\sigma},\underline{x}}}:&P_A \rightarrow P_{A,(n)_{\underline{\sigma},\underline{x}}} \\ & a \otimes b \mapsto a \Theta_{A,(n)_{\underline{\sigma},\underline{x}}}(b) \end{aligned}$$ that extends $\Theta_{A,(n)_{\underline{\sigma},\underline{x}}}$ to $P_A$ by $A$-linearity. For $i=1,\ldots,d,$ $$\tilde{\Theta}_{A,(n)_{\underline{\sigma},\underline{x}}}(1\otimes x_i - x_i \otimes 1)= \xi_i.$$ The proposition [Proposition 9](#surjection){reference-type="ref" reference="surjection"} ensures that this morphism is surjective. Denote $$I_{A}^{(n+1)_{{\underline{\sigma},\underline{x}}}}:=\textrm{ker}\left( \tilde{\Theta}_{A,(n)_{\underline{\sigma},\underline{x}}}\right).$$ By the first isomorphism theorem $$P_A/I_{A}^{{(n+1)_{\underline{\sigma},\underline{x}}}} \simeq P_{A,(n)_{\underline{\sigma},\underline{x}}}.$$ ## Derivations We keep the hypothesis of the previous subsection: $R$ is a Huber ring and $(A,\underline{\sigma})$ is a twisted $R$-adic algebra of order $d$ (definition [Definition 7](#defor){reference-type="ref" reference="defor"}) Here and subsequently we assume that there exist some elements $\underline{x}=(x_1,\ldots,x_d)$ of $A$ that are $\underline{\sigma}$-coordinates (definition [Definition 10](#coor){reference-type="ref" reference="coor"}). **Definition 11**. The $A$-module of *twisted differential forms* of $A$ over $R$ is $$\Omega^1_{A, \underline{\sigma}}:=I_{A}^{(1)_{\underline{\sigma}}}/I_{A}^{(2)_{\underline{\sigma}}}.$$ **Definition 12**. Let $1 \leq i \leq d$. We recall from [@SQ2] that a *$\sigma_i$-derivation* $D$ of $A$ is a $R$-linear morphism from $A$ into a $A$-module $M$ that verifies the twisted Leibniz rule: $$\forall x,y \in A, ~ D(xy)=xD(y)+\sigma_i(y)D(x).$$ **Remark 3**. Denote by $(e_i)_{i=1,\ldots,d}:=((1\otimes x_i - x_i \otimes 1)^*)_{i=1,\ldots,d}$ the basis of $\textrm{Hom}_A\left(\Omega^1_{A, \underline{\sigma}},A\right)$ dual to the basis $(1\otimes x_i - x_i \otimes 1)_{i=1,\ldots,d}$ of $\Omega^1_{A, \underline{\sigma}}$. 
Set $$\text{d}:A \rightarrow \Omega^1_{A, \underline{\sigma}}, ~ f \mapsto \Theta_{A,(1)_{\underline{\sigma}}}(f) - f.$$ This map permits to define $$\forall i\in \lbrace 1\ldots,d \rbrace,~ \partial_{\underline{\sigma},i}:=e_i \circ \text{d}$$ We will denote by $\text{Der}_{\underline{\sigma}}(A,A)$ the $A$-module generated by $\partial_{\underline{\sigma},1}\ldots,\partial_{\underline{\sigma},d}$. This module is free over $A$. **Definition 13**. The $\underline{\sigma}$-coordinates $\underline{x}$ are *classical* if $$\forall i=1 \ldots,d,~ \forall f \in A, ~\sigma_i(f)=f+(\sigma_i(x_i)-x_i)\partial_{\underline{\sigma},i}(f).$$ **Proposition 14**. *The $\underline{\sigma}$-coordinates $\underline{x}$ are classical coordinates if and only if $\forall i\in \lbrace 1,\ldots,d \rbrace,$ $\partial_{\underline{\sigma},i}$ is a $\sigma_i$-derivation.* *Proof.* We refer the reader to [@moi Proposition 7.4]. ◻ **Definition 15**. The $\underline{\sigma}$-coordinates $\underline{x}$ are said to be *symmetrical* if $$\forall i \in \lbrace 1, \ldots,d \rbrace, \sigma_i(x_i) \in R[x_i] \text{ and }$$ $$\forall n \in \mathbb{N}, \forall f \in A, ~\delta_{n,m}\left(\Theta_{A,(n+m)_{\underline{\sigma}}}(f)\right)=1 \otimes'\Theta_{A,(m)_{\underline{\sigma}}}(f).$$ Where $\otimes'$ means that the action on the left is given by $\Theta_{A,(n)_{\underline{\sigma}}}$. **Proposition 16**. *Assume that the coordinates are symmetrical. Set $$\begin{aligned} \delta \colon P_{A} &\rightarrow P_{A} \otimes_A P_{A} \\ a\otimes b &\mapsto a \otimes 1 \otimes 1 \otimes b \end{aligned}$$ then, $$\delta\left(I_{A}^{(n+m+1)_{\underline{\sigma}}}\right) \subset I_{A}^{(n+1)_{\underline{\sigma}}} \otimes P_{A} + P_{A} \otimes I_{A}^{(m+1)_{\underline{\sigma}}}.$$* *Proof.* We refer the reader to [@moi Proposition 7.11]. ◻ ## Twisted connection We keep the hypothesis of the previous subsection: $R$ is a Huber ring and $(A,\underline{\sigma})$ is a twisted $R$-adic algebra of order $d$ (definition [Definition 7](#defor){reference-type="ref" reference="defor"}). The exists some elements $\underline{x}=(x_1,\ldots,x_d)$ of $A$ that are $\underline{\sigma}$-coordinates (definition [Definition 10](#coor){reference-type="ref" reference="coor"}). **Definition 17**. When the coordinates $\underline{x}$ are classical, a *twisted connection* on a $A$-module $M$ is the data of a map $$\nabla_{\underline{\sigma}}:M \rightarrow M \otimes_A \Omega^1_{\underline{\sigma}}$$ verifying that the map $$\Theta_{M}: M \rightarrow M \otimes_A P_{A, (1)_{\underline{\sigma}}},~ s \rightarrow s \otimes 1 + \nabla_{\underline{\sigma}}(s)$$ is $\Theta_{A,(1)_{\underline{\sigma}}}$-linear : $$\forall f \in A, ~\forall s \in M, \Theta_M(fs)=\Theta_{A,(1)_{\underline{\sigma}}}(f)\Theta_M(s).$$ **Remark 4**. For $n \in \mathbb{N}^*$, set $\Omega^n_{\underline{\sigma}}= \bigwedge^n \Omega^1_{\underline{\sigma}}.$ A twisted connection $\nabla_{\underline{\sigma}}$ on a $A$-module $M$ gives a sequence of $A$-linear maps $$\nabla_{n,\underline{\sigma}}: M \otimes_A \Omega^n_{\underline{\sigma}} \rightarrow M \otimes_A \Omega^{n+1}_{\underline{\sigma}}$$ by setting $\nabla_{n,\underline{\sigma}}(m \otimes w)=m \otimes \text{d}(w)+\nabla_{\underline{\sigma}}(m)\wedge (-1)^nw$. **Definition 18**. A twisted connection $\nabla_{\underline{\sigma}}$ on a $A$-module $M$ is said to be *integrable* if $\nabla_{2,\underline{\sigma}} \circ \nabla_{\underline{\sigma}}=0.$ **Remark 5**. 
On a $A$-module $M$, it is possible to construct the de Rham complex $$\text{DR}(A)=\left[ M \rightarrow M \otimes_A \Omega^1_{\underline{\sigma}} \rightarrow M \otimes_A \Omega^2_{\underline{\sigma}} \rightarrow \dots \right].$$ The de Rham cohomology is the cohomology of this complex. We denote by $\nabla_{\underline{\sigma}}$-$\text{Mod}(A)$ the set of $A$-modules endowed with a twisted connection and by $\nabla_{\underline{\sigma}}^{\text{Int}}$-$\text{Mod}(A)$ the set of $A$-modules endowed with an integrable twisted connection. **Proposition 19**. *When the coordinates $\underline{x}$ are classical, there exists an equivalence of categories $$\nabla_{\underline{\sigma}} \emph{-Mod}(A) \simeq \emph{T}_{A,\underline{\sigma}}\emph{-Mod}(A).$$* *Proof.* We refer the reader to [@moi Proposition 7.14]. ◻ ## Twisted differential operators We keep the hypothesis of the previous subsection: $R$ is a Huber ring, $(A,\underline{\sigma})$ is a twisted $R$-adic algebra of order $d$ (definition [Definition 7](#defor){reference-type="ref" reference="defor"}) and there exist some elements $\underline{x}=(x_1,\ldots,x_d)$ of $A$ that are $\underline{\sigma}$-coordinates (definition [Definition 10](#coor){reference-type="ref" reference="coor"}). In this section we will also assume that the coordinates $x_1,\ldots,x_d$ are symmetrical (definition [Definition 15](#sym){reference-type="ref" reference="sym"}). This will be useful to define the composition of twisted differential operators. **Definition 20**. Let $M$ and $N$ be $A$-modules. A *twisted differential operator of order at most $n$* is a $R$-linear morphism $\phi:M \rightarrow N$ which $A$-linearization $\tilde{\phi}$ factorizes throught $P_{A,(n)_{\underline{\sigma}}}$. This gives the diagram $$\xymatrix{ P_{A} \otimes_A M \ar[r]^-{\sim} \ar[rd] \ar@/^1pc/[rr]^{\tilde{\phi}} & A \otimes_R M \ar[r] & N \\ & P_{A,(n)_{\underline{\sigma}}} \otimes_A' M \ar[ru] & }$$ where $\otimes_A'$ designates that the action on the left is given by $\Theta_{A,(n)_{\underline{\sigma}}}$. This condition can be translated by the fact that $\tilde{\phi}$ is zero over $I_{A}^{(n+1)_{\underline{\sigma}}}$. We denote by $\text{Diff}_{n, \underline{\sigma}}(M,N)$ the set of twisted differential operators of order at most $n$. By definition $\text{Diff}_{n, \underline{\sigma}}(M,N) \simeq \text{Hom}_A(P_{A,(n)_{\underline{\sigma}}}\otimes_A' M,N)$. In the case where $M=N$ we will simply write $\text{Diff}_{n, \underline{\sigma}}(M)$. We also set $$\text{Diff}_{\underline{\sigma}}(M,N)=\varinjlim_n \text{Hom}_A\left(P_{A,(n)_{\underline{\sigma}}}\otimes_A' M,N\right) \text{ and } \text{D}^{(\infty)}_{A,\underline{\sigma}}=\text{Diff}_{\underline{\sigma}}(A).$$ Recall that for an element $q$ of $R$ and every natural integer $n$,we define the $q$-analogue of $n$ as $$(n)_q:=1+q+\ldots+q^{n-1}.$$ It verifies $\lim\limits_{q \mapsto 1} (n)_q=n.$ In a similar manner, we can define the $q$-analogue of the factorial of $n$ by setting: $$(n)_q!:=(2)_q \ldots (n)_q.$$ **Remarks 2**. 1. A standard basis of $\text{Diff}_{n,\underline{\sigma}}(A)$ is given by $\partial^{[\underline{k}]}_{\underline{\sigma}}$ dual of the image of $\underline{\xi}^{(\underline{k})_{\underline{\sigma}}}$ in $P_{A,(n)_{\underline{\sigma}}}$ for $\underline{k} \in \mathbb{N}^d$ such that $|\underline{k}|\leq n$. 2. 
Let $\underline{q}=\lbrace q_1,\ldots,q_d \rbrace$ be elements of $R$. In the case where $x_1,\ldots,x_d$ are $\underline{q}$-coordinates, by corollary 6.2 of [@LSQ3], we have $$\forall i \in \lbrace 1,\ldots,d \rbrace, ~ \forall k \in \mathbb{N}, ~ \forall z \in A, \partial^k_{q_i}(z)=(k)^!_{q_i} \partial_{q_i}^{[k]}(z).$$ For $\underline{k}=(k_1,\ldots,k_d)$, set $$\partial_{\underline{\sigma}}^{\underline{k}}:=\partial_{\underline{\sigma},1}^{k_1} \circ \ldots \circ \partial_{\underline{\sigma},d}^{k_d}.$$ Thus, by the previous relation, $$\label{qderiv} \partial_{\underline{\sigma}}^{\underline{k}}=(k_1)_{q_1}^!\ldots(k_d)_{q_d}^!\partial_{\underline{\sigma}}^{[\underline{k}]}.$$ **Proposition 21**. *If there exist some elements $\underline{q}=\lbrace q_1,\ldots,q_d \rbrace$ of $R$ such that $\underline{x}$ are classical $\underline{q}$-coordinates and $$\forall i=1,\ldots,d, ~ \forall n \in \mathbb{N}, ~ (n)_{q_i} \in R^\times,$$ then there exists an equivalence of categories $$\nabla_{\underline{\sigma}}^{\emph{Int}}\text{-}\emph{Mod}(A) \simeq \emph{D}_{A,\underline{\sigma}}^{(\infty)}\text{-}\emph{Mod}.$$* *Proof.* We refer the reader to [@moi Proposition 8.5]. ◻ # Twisted calculus From now on we make the following assumptions: $R$ is a Tate ring and $(A,\underline{\sigma})$ is a complete noetherian twisted $R$-adic algebra of order $d$ (definition [Definition 7](#defor){reference-type="ref" reference="defor"}); in particular $A$ is a Tate ring and is therefore endowed with an ultrametric norm. There exist some elements $\underline{x}=(x_1,\ldots,x_d)$ of $A$ that are classical and symmetrical $\underline{\sigma}$-coordinates (definitions [Definition 10](#coor){reference-type="ref" reference="coor"}, [Definition 13](#classique){reference-type="ref" reference="classique"} and [Definition 15](#sym){reference-type="ref" reference="sym"}). We fix an ultrametric norm $\lVert ~\rVert$ that defines the topology on $A$. We will assume that this norm is contractive (definition [Definition 6](#contractante){reference-type="ref" reference="contractante"}) with respect to the endomorphisms $\sigma_1,\ldots,\sigma_d$. **Remarks 3**. 1. If $A$ is a ring endowed with a submultiplicative norm $\lVert ~\rVert$, then every $A$-module of finite type $M$ can be endowed with a semi-norm. If we choose some generators $(e_1,\ldots,e_m )$ of $M$, we can define the semi-norm on $M$ as follows: $$\forall x \in M, ~\lVert x \rVert_M= \inf_{x=\sum_{i=1}^m a_i e_i} \left\lbrace \sum_{i=1}^m \lVert a_i \rVert \right\rbrace.$$ We refer the reader to section 2 of [@kedliu] for more details. 2. In what follows it will be necessary to assume that finite modules over $A$ are complete, for some limits to exist. This is the case under the hypotheses we are working with. Another option would have been to require that there exists a noetherian ring of definition $A_0$ of $A$ such that $A$ is of finite presentation over $A_0$, or that the module $M$ is flat of finite presentation over $A$. ## Twisted differential principal parts of finite radius **Definition 22**. The *$\underline{x}$-radius* of $\underline{\sigma}=\left\lbrace\sigma_1,\ldots,\sigma_d\right\rbrace$ is $$\rho(\underline{\sigma})=\max_{1 \leq i \leq d} \lVert x_i - \sigma_i(x_i) \rVert.$$ **Lemma 23**. 
*If we set $\underline{\sigma}^n=\left\lbrace \sigma_1^n,\ldots,\sigma_d^n \right\rbrace$ then $$\rho(\underline{\sigma}^n) \leq \rho(\underline{\sigma}).$$* *Proof.* By induction on $n$, $$\begin{aligned} \sup_{1 \leq i \leq d} \lVert x_i - \sigma_i^{n+1}(x_i)\rVert & \leq \max \left\lbrace \sup_{1 \leq i \leq d} \lVert x_i - \sigma_i^{n}(x_i)\rVert, \sup_{1 \leq i \leq d} \lVert \sigma_i^n(x_i-\sigma_i(x_i))\rVert \right\rbrace \\ & \leq \max \left\lbrace \rho( \underline{\sigma}^n), \rho(\underline{\sigma }) \right\rbrace = \rho(\underline{\sigma}). \end{aligned}$$ The last inequality is obtained using the fact that the norm is contractive with respect to the endomorphisms $\sigma_1,\ldots,\sigma_d$. ◻ Let $\eta \in \mathbb{R}$ be such that $0<\eta<1$. Set $$A \left\lbrace \underline{\xi}/\eta \right\rbrace = \left\lbrace \sum\limits_{\underline{n} \in \mathbb{N}^d} z_{\underline{n}} \underline{\xi}^{\underline{n}}, ~z_{\underline{n}} \in A \text{ and } \lVert z_{\underline{n}} \rVert \eta^{|\underline{n}|} \rightarrow 0 \text{ when } |\underline{n}| \rightarrow \infty \right\rbrace.$$ This is a Banach algebra for the sup norm $$\left\lVert \sum\limits_{\underline{n} \in \mathbb{N}^d} z_{\underline{n}} \underline{\xi}^{\underline{n}} \right\rVert_\eta = \max \lVert z_{\underline{n}} \rVert \eta^{|\underline{n}|}.$$ We refer the reader to section 2.2 of [@kedliu] for more details. This ring is a Tate ring. **Lemma 24**. *If $\eta \geq \rho(\underline{\sigma})$ then the $A$-linear map $$\underline{\xi}^{\underline{n}} \mapsto \underline{\xi}^{(\underline{n})_{\underline{\sigma}}}$$ is an isometric automorphism of the $A$-module $A \left\lbrace \underline{\xi}/\eta \right\rbrace$.* *Proof.* Recall that for $\underline{k}\in\mathbb{N}^d$, $$\underline{\xi}^{(\underline{k})_{\underline{\sigma}}} = \prod_{i=1}^d \prod_{j=0}^{k_i-1} (\xi_i+x_i-\sigma_i^j(x_i))=\xi_1^{k_1}\ldots\xi_d^{k_d}+f_{\underline{k}},$$ with $f_{\underline{k}} \in A[\underline{\xi}]_{< |\underline{k}|}$. By lemma [Lemma 23](#3.4){reference-type="ref" reference="3.4"}, for every natural number $j$ and every $1 \leq i \leq d$, $\lVert x_i - \sigma_i^j(x_i) \rVert \leq \eta$. It follows that $\lVert f_{\underline{k}} \rVert_\eta \leq \eta^{\left(\sum k_i\right)} = \eta^{|\underline{k}|}$ and that $\left\lVert \underline{\xi}^{(\underline{k})_{\underline{\sigma}}} \right\rVert_\eta= \eta^{|\underline{k}|}$. Hence, the unique $A$-linear endomorphism of $A[\underline{\xi}]$ that sends $\underline{\xi}^{\underline{k}}$ to $\underline{\xi}^{(\underline{k})_{\underline{\sigma}}}$ is an isometry that preserves the degree. Therefore, it extends uniquely to an isometry of $A \left\lbrace \underline{\xi}/\eta \right\rbrace$ onto itself. ◻ **Definition 25**. An *orthogonal Schauder basis* of a normed $A$-module $M$ is a family $\left\lbrace s_n \right\rbrace_{n \in \mathbb{N}}$ of elements of $M$ such that every element $s$ of $M$ can be uniquely written as $$s=\sum_{n=0}^{\infty}z_ns_n$$ with $z_n \in A$ and $\lVert s \rVert = \sup \left\lbrace \lVert z_n \rVert \lVert s_n \rVert \right\rbrace.$ **Proposition 26**. *Let $M$ be a normed $A$-module and $\left\lbrace s_n \right\rbrace_{n \in \mathbb{N}}$ be a Schauder basis of $M$. 
If $N$ is an $A$-module of finite type, then every element $f$ of $N \otimes_A M$ can be written in the form $$f=\sum f_i \otimes s_i$$ where $\lbrace f_i \rbrace_{i \in \mathbb{N}}$ is a family of elements of $N$.* *Proof.* By hypothesis, for some natural number $n$ there exists a surjective morphism $$A^n \twoheadrightarrow N.$$ We deduce a surjective morphism $$M^n \simeq (A \otimes_A M)^n \simeq (A^n \otimes_A M) \twoheadrightarrow N \otimes_A M.$$ This concludes the proof. ◻ **Proposition 27**. *When $\eta \geq \rho (\underline{\sigma})$, $\left\lbrace \underline{\xi}^{(\underline{n})_{\underline{\sigma}}}\right\rbrace_{\underline{n}\in \mathbb{N}^d}$ is an orthogonal Schauder basis for the $A$-module $A \left\lbrace \underline{\xi}/\eta \right\rbrace$.* *Proof.* We conclude from lemma [Lemma 24](#bla){reference-type="ref" reference="bla"} and the fact that the family $\left\lbrace \underline{\xi}^{\underline{k}} \right\rbrace_{\underline{k} \in \mathbb{N}^d}$ is an orthogonal Schauder basis of $A \left\lbrace \underline{\xi}/\eta \right\rbrace$. ◻ **Proposition 28**. *If $\eta \geq \rho (\underline{\sigma})$, then, for all $n \in \mathbb{N}$, there exists an isomorphism of $A$-algebras $$A \left\lbrace \underline{\xi}/ \eta \right\rbrace / \left(\underline{\xi}^{(\underline{k})_{\underline{\sigma}}} \emph{ with } \underline{k}\in \mathbb{N}^d \emph{ such that } |\underline{k}|=n+1 \right) \simeq P_{A,(n)_{\underline{\sigma}}}.$$* *Proof.* By proposition [Proposition 9](#surjection){reference-type="ref" reference="surjection"} $$P_{A,(n)_{\underline{\sigma}}}=A[\underline{\xi}]/ \left(\underline{\xi}^{(\underline{k})_{\underline{\sigma}}} \text{ with } \underline{k}\in \mathbb{N}^d \text{ such that } |\underline{k}|=n+1 \right) \simeq A[\underline{\xi}]_{\leq n}.$$ Set $I_n:=\left(\underline{\xi}^{(\underline{k})_{\underline{\sigma}}} \text{ with } \underline{k}\in \mathbb{N}^d \text{ such that } |\underline{k}|=n+1 \right)$, the ideal generated by these elements in $A[\underline{\xi}]$ and in $A \left\lbrace \underline{\xi}/ \eta \right\rbrace$ respectively. We can conclude by considering the map below $$A[\underline{\xi}]_{\leq n} \simeq A[\underline{\xi}]/I_n \rightarrow A \left\lbrace \underline{\xi}/ \eta \right\rbrace /I_n$$ which is bijective by proposition [Proposition 27](#Schauder){reference-type="ref" reference="Schauder"}. ◻ **Corollary 29**. *If $\eta \geq \rho (\underline{\sigma})$, then for every $A$-module $M$ of finite type there exists a canonical injection $$M \otimes_A A \left\lbrace \underline{\xi}/ \eta \right\rbrace \rightarrow M \otimes_A \widehat{\widehat{P}}_{\underline{\sigma}}.$$* *Proof.* By proposition [Proposition 28](#pr){reference-type="ref" reference="pr"}, for all $n \in \mathbb{N}$ there exists a map $$A\left\lbrace \underline{\xi}/\eta \right\rbrace\rightarrow A\left\lbrace \underline{\xi}/\eta \right\rbrace/ \left(\underline{\xi}^{(\underline{k})_{\underline{\sigma}}},~|\underline{k}|=n+1 \right) \simeq P_{A,(n)_{\underline{\sigma}}}.$$ Going to the limit and tensoring by $M$ we obtain a map $M \otimes_A A \left\lbrace \underline{\xi}/ \eta \right\rbrace \rightarrow M \otimes_A \widehat{\widehat{P}}_{\underline{\sigma}}$. It remains to show that this map is injective. 
This amounts to the following statement: $$\forall \sum s_i \otimes f_i \in M \otimes_A A \left\lbrace \underline{\xi}/ \eta \right\rbrace, ~\forall \underline{n} \in \mathbb{N}^d, \sum s_i \otimes f_i = 0 \text{ mod } \underline{\xi}^{(\underline{n})_{\underline{\sigma}}} \Rightarrow \sum s_i \otimes f_i=0.$$ This follows directly from the fact that $\left\lbrace \underline{\xi}^{(\underline{n})_{\underline{\sigma}}}\right\rbrace_{\underline{n} \in \mathbb{N}^d}$ is a Schauder basis of $A\left\lbrace \underline{\xi}/\eta \right\rbrace$ and proposition [Proposition 26](#tenseurschauder){reference-type="ref" reference="tenseurschauder"}. ◻ ## Radius of convergence **Definition 30**. Let $M$ be a $\text{D}^{(\infty)}_{A,\underline{\sigma}}$-module of finite type over $A$ and let $\eta \in \mathbb{R}$. 1. Let $s \in M$. Then: 1. $s$ is *$\eta$-convergent* if $$\left\lVert \partial_{\underline{\sigma}}^{[\underline{k}]} (s) \right\rVert \eta^{|\underline{k}|} \rightarrow 0 \text{ when } |\underline{k}| \rightarrow \infty,$$ 2. the *radius of convergence* of $s$ is $$\text{Rad}(s)=\sup \left\lbrace \eta, ~s \text{ is } \eta\text{-convergent} \right\rbrace,$$ 3. $s$ is *$\eta^\dagger$-convergent* if $\text{Rad}(s) \geq \eta$. 2. $M$ is *$\eta$-convergent* if all elements of $M$ are, 3. the *radius of convergence* of $M$ is $$\text{Rad}(M)=\inf_{s \in M} \text{Rad}(s),$$ 4. $M$ is *$\eta^\dagger$-convergent* if all elements of $M$ are. **Remarks 4**. 1. We have the following formula: $$\text{Rad}(M)=\inf_{s \in M} \limsup_{\underline{k} \to \infty} \left\lVert \partial^{[\underline{k}]}_{\underline{\sigma}}(s) \right\rVert ^{-\frac{1}{|\underline{k}|}}.$$ 2. If $s$ (resp. $M$) is $\eta$-convergent then $s$ (resp. $M$) is $\eta^\dagger$-convergent. **Lemma 31**. *Let $M$ be a $\emph{D}^{(\infty)}_{A,\underline{\sigma}}$-module of finite type over $A$ and $\eta \geq \rho (\underline{\sigma})$. An element $s$ of $M$ is $\eta$-convergent if and only if its twisted Taylor series $$\widehat{\theta}(s)=\sum_{\underline{k} \in \mathbb{N}^d} \partial^{[\underline{k}]}_{\underline{\sigma}}(s) \otimes \underline{\xi}^{(\underline{k})_{\underline{\sigma}}}$$ is an element of $M \otimes_A A \left\lbrace \underline{\xi}/ \eta \right\rbrace \subset M \otimes_A \widehat{\widehat{P}}_{\underline{\sigma}}$.* *Proof.* By proposition [Proposition 27](#Schauder){reference-type="ref" reference="Schauder"}, $\left(\underline{\xi}^{(\underline{n})_{\underline{\sigma}}}\right)_{\underline{n} \in \mathbb{N}^d}$ is a Schauder basis of $A \left\lbrace \underline{\xi}/ \eta \right\rbrace$. We conclude by using proposition [Proposition 26](#tenseurschauder){reference-type="ref" reference="tenseurschauder"} again. ◻ **Proposition 32**. *Let $M$ be a $\emph{D}^{(\infty)}_{A,\underline{\sigma}}$-module of finite type over $A$ and $\eta \geq \rho (\underline{\sigma})$. The module $M$ is $\eta$-convergent if and only if its twisted Taylor series factors as follows: $$\xymatrix{ M \ar[r]^-{\widehat{\theta}} \ar[rd]_-{\theta_\eta} & M \otimes_A \widehat{\widehat{P}}_{\underline{\sigma}} \\ & M \otimes_A A\left\lbrace \underline{\xi}/\eta \right\rbrace\ar[u] }$$* *Proof.* This can be deduced directly from lemma [Lemma 31](#découle){reference-type="ref" reference="découle"}. ◻ **Definition 33**. The map $\theta_\eta$ is called the *twisted Taylor map of radius $\eta$*. **Proposition 34**. 
*$\eta$-convergence of $\emph{D}^{(\infty)}_{A,\underline{\sigma}}$-modules of finite type over $A$ is stable under quotients and subobjects, provided these are themselves of finite type over $A$.* *Proof.* The proof is identical to the one of [@GLQ4 Proposition 3.5]. ◻ ## Twisted differential operators of finite radius **Definition 35**. The $A$-module structure of $A\left\lbrace \underline{\xi}/\eta \right\rbrace$ induced by the twisted Taylor map of radius $\eta$ is called the *right structure*. We will write $A\left\lbrace \underline{\xi}/\eta \right\rbrace\otimes'-$ to indicate that we use the right structure on the left-hand side of the tensor product: $$f \otimes' zs=\theta_\eta(z)f\otimes' s.$$ **Remark 6**. If $A$ is $\eta$-convergent, it is possible to linearize the twisted Taylor map of radius $\eta$ to obtain an $A$-linear map $$\tilde{\theta}_\eta: P_{A} \rightarrow A\left\lbrace \underline{\xi}/\eta \right\rbrace$$ which induces the commutative diagram below $$\label{diagram} \xymatrix{ & & A \ar[dl]_-\theta \ar[dr]^-{\widehat{\theta}} \ar[d]^-{\theta_\eta} \\ A[\underline{\xi}] \ar[r] & P_{A} \ar[r]_-{\tilde{\theta}_\eta} & A\left\lbrace \underline{\xi}/\eta \right\rbrace\ar[r] & \widehat{\widehat{P}}_{\underline{\sigma}} }$$ where $\theta$ is defined as $$\theta: A \rightarrow P_{A}, ~ f \mapsto 1 \otimes f.$$ **Lemma 36**. *Let $M$ and $N$ be two $A$-modules of finite type. The map $$M \rightarrow A\left\lbrace \underline{\xi}/\eta \right\rbrace\otimes'_A M, ~s \mapsto 1 \otimes' s$$ induces a $P_{A}$-linear injective map $$\emph{Hom}_{A-\emph{cont}}(A\left\lbrace \underline{\xi}/\eta \right\rbrace\otimes'_A M,N) \rightarrow \emph{Hom}_{R-\emph{cont}}(M,N).$$* *Proof.* By the diagram [\[diagram\]](#diagram){reference-type="ref" reference="diagram"} the inclusion of $A[\underline{\xi}]$ in $A\left\lbrace \underline{\xi}/\eta \right\rbrace$ factors as $$A[\underline{\xi}] \rightarrow P_{A} \xrightarrow{\tilde{\theta}_\eta} A\left\lbrace \underline{\xi}/\eta \right\rbrace.$$ The image of $\tilde{\theta}_\eta$ is dense in $A\left\lbrace \underline{\xi}/\eta \right\rbrace$. Moreover, for an $A$-linear continuous morphism $\psi: A\left\lbrace \underline{\xi}/\eta \right\rbrace\otimes'_A M \rightarrow N$, the following diagram is commutative by construction $$\xymatrix{ A \otimes_R M \ar[r] & P_{A} \otimes'_A M \ar[r]^-{\tilde{\theta}_\eta \otimes' \text{Id}} & A\left\lbrace \underline{\xi}/\eta \right\rbrace\otimes'_A M \ar[r]^-{\psi} & N \\ & & M \ar[ul]^-{i} \ar[u] \ar[ur]_-{\bar{\psi}} }$$ where $\bar{\psi}:=\psi \circ (\tilde{\theta }_\eta \otimes' \text{Id}) \circ i$ and $$\xymatrix@R=0cm{ i: M \ar[r] & P_{A} \otimes'_A M \\ ~~ s \ar@{|->}[r] & 1 \otimes s. }$$ Hence, by density of the image of $\tilde{\theta}_\eta$ in $A\left\lbrace \underline{\xi}/\eta \right\rbrace$, the map $$\text{Hom}_{A-\text{cont}}(A\left\lbrace \underline{\xi}/\eta \right\rbrace\otimes'_A M,N) \rightarrow \text{Hom}_{R-\text{cont}}(M,N), ~ \psi \mapsto \psi \circ (\tilde{\theta }_\eta \otimes' \text{Id}) \circ i$$ is injective. ◻ **Definition 37**. Let $M$ and $N$ be two $A$-modules of finite type. An $R$-linear map $\varphi: M \rightarrow N$ is called a *twisted differential operator of radius $\eta$* if it extends to a continuous $A$-linear map $\tilde{\varphi}_{\eta}:A\left\lbrace \underline{\xi}/\eta \right\rbrace\otimes'_A M \rightarrow N$ called its $\eta$-linearization. We will denote by $\text{Diff}^{(\eta)}_{\underline{\sigma}}(M,N)$ the set of twisted differential operators of radius $\eta$. **Remark 7**. 
Let $M$ and $N$ be two $A$-modules of finite type. The set $\text{Hom}_{R-\text{cont}}(M,N)$ of $R$-linear continuous morphisms from $M$ to $N$ is a $P_{A}$-module for the action defined by $$\forall \psi \in \text{Hom}_{R-\text{cont}}(M,N), ~\forall a,b \in A, ~\forall x \in M, ~\left((a\otimes b)\cdot \psi\right)(x)=a \psi(bx).$$ **Proposition 38**. *Let $M$ and $N$ be two $A$-modules of finite type. The set $\emph{Diff}^{(\eta)}_{\underline{\sigma}}(M,N)$ is a $P_{A}$-submodule of $\emph{Hom}_{R-\emph{cont}}(M,N)$ containing $\emph{Diff}_{\underline{\sigma}}^{(\infty)}(M,N)$ and $$\emph{Hom}_{A-\emph{cont}}(A\left\lbrace \underline{\xi}/\eta \right\rbrace\otimes'_A M,N) \simeq \emph{Diff}^{(\eta)}_{\underline{\sigma}}(M,N).$$* *Proof.* It is possible to define the action of $P_{A}$ over $\text{Hom}_{A-\text{cont}}(A\left\lbrace \underline{\xi}/\eta \right\rbrace\otimes'_A M,N)$. For an element $\varphi$ of $\text{Diff}^{(\eta)}_{\underline{\sigma}}(M,N)$, the diagram below is commutative $$\xymatrix{ A\left\lbrace \underline{\xi}/\eta \right\rbrace\otimes'_A M \ar@{->>}[d]_-\pi \ar[r]^-{\tilde{\varphi}_\eta} & N \\ M \ar[ur]_-{\varphi} }$$ where the canonical map $\pi$ is $P_{A}$-linear. Hence, $(a \otimes b) \cdot \varphi = \pi((a \otimes b) \cdot \tilde{\varphi}_\eta)$. This ensures that $\text{Diff}^{(\eta)}_{\underline{\sigma}}(M,N)$ is a $P_{A}$-submodule of $\text{Hom}_{R-\text{cont}}(M,N)$. It contains $\text{Diff}_{\underline{\sigma}}^{(\infty)}(M,N)$ because we have a surjection of $A\left\lbrace \underline{\xi}/\eta \right\rbrace$ onto $P_{A, (n)_{\underline{\sigma}}}$ for every natural number $n$. The isomorphism can be deduced directly from the definition. ◻ In what follows, we endow $A\left\lbrace \underline{\xi}/\eta \right\rbrace\widehat{\otimes}_A' A\left\lbrace \underline{\xi}/\eta \right\rbrace$ with the norm defined by $$\forall f \in A\left\lbrace \underline{\xi}/\eta \right\rbrace\widehat{\otimes}_A' A\left\lbrace \underline{\xi}/\eta \right\rbrace, ~\lVert f \rVert := \inf_{f=\sum a_i \otimes b_i} \max \lVert a_i \rVert_\eta \lVert b_i \rVert_\eta.$$ **Proposition 39**. 
*The map $\delta_\eta$ defined by $$\delta_\eta: A\left\lbrace \underline{\xi}/\eta \right\rbrace\rightarrow A\left\lbrace \underline{\xi}/\eta \right\rbrace\widehat{\otimes}_A' A\left\lbrace \underline{\xi}/\eta \right\rbrace, ~ \xi_i \mapsto \xi_i \otimes' 1 + 1 \otimes' \xi_i$$ makes the following diagram commute for all natural numbers $m'<m$ and $n'<n$: $$\xymatrix{ P_{A} \ar[r] \ar[d]^-{\delta} & A\left\lbrace \underline{\xi}/\eta \right\rbrace\ar[r] \ar[d]^-{\delta_\eta} & P_{A,(m+n)_{\underline{\sigma}}} \ar[d]^-{\delta_{m,n}} \ar[r] & P_{A,(m'+n')_{\underline{\sigma}}} \ar[d]^-{\delta_{m',n'}} \\ P_{A} \widehat{\otimes}_A' P_{A} \ar[r] & A\left\lbrace \underline{\xi}/\eta \right\rbrace\widehat{\otimes}_A' A\left\lbrace \underline{\xi}/\eta \right\rbrace\ar[r] & P_{A,(m)_{\underline{\sigma}}} \otimes'_A P_{A,(n)_{\underline{\sigma}}} \ar[r] & P_{A,(m')_{\underline{\sigma}}} \otimes'_A P_{A,(n')_{\underline{\sigma}}} }$$ Moreover, $\delta_\eta$ is a morphism of Huber $R$-algebras of norm $1$.* *Proof.* Going to the limit, we obtain the commutative diagram below $$\xymatrix{ A\left\lbrace \underline{\xi}/\eta \right\rbrace\ar[d] \ar[r] & \widehat{\widehat{P}}_{\underline{\sigma}} \ar[d] \\ A\left\lbrace \underline{\xi}/\eta \right\rbrace\widehat{\otimes}_A' A\left\lbrace \underline{\xi}/\eta \right\rbrace\ar[r]& \varprojlim P_{A,(m)_{\underline{\sigma}}} \otimes_A P_{A,(n)_{\underline{\sigma}}}}$$ The uniqueness and the fact that $\delta_\eta$ is a morphism of rings come from proposition [Proposition 16](#symcom){reference-type="ref" reference="symcom"} and the fact that the map $A\left\lbrace \underline{\xi}/\eta \right\rbrace\rightarrow \widehat{\widehat{P}}_{\underline{\sigma}}$ is injective. For the existence, it suffices to set $\delta_\eta (\xi_i)=1 \otimes' \xi_i + \xi_i \otimes' 1$ and to check that the map defined in this way makes the diagram commute. It is well defined and of norm $1$ because $\left\lVert \delta_\eta (\xi_i ) \right\rVert_\eta = \eta$. ◻ **Proposition 40**. *The $A$-module $\emph{Hom}_{A-\emph{cont}}(A\left\lbrace \underline{\xi}/\eta \right\rbrace,A)$ is a Banach $R$-algebra for the multiplication defined by $$\psi \phi: A\left\lbrace \underline{\xi}/\eta \right\rbrace\xrightarrow{\delta_\eta} A\left\lbrace \underline{\xi}/\eta \right\rbrace\widehat{\otimes}_A' A\left\lbrace \underline{\xi}/\eta \right\rbrace\xrightarrow{\emph{Id} \otimes' \phi} A\left\lbrace \underline{\xi}/\eta \right\rbrace\xrightarrow{\psi} A.$$* **Proposition 41**. *The twisted differential operators of radius $\eta$ are stable under composition and $\lVert \varphi \circ \psi \rVert_\eta \leq \lVert \varphi \rVert_\eta \lVert \psi \rVert_\eta$.* *Proof.* Let $\varphi:M \rightarrow N$ and $\psi:L \rightarrow M$ be two twisted differential operators of radius $\eta$. The diagram below commutes $$\xymatrix{ P_{A} \otimes_A' L \ar[r]^-\delta \ar[d] & P_{A} \widehat{\otimes}_A'P_{A} \otimes_A'L \ar[d] \ar[r]^-{\text{Id} \otimes' \tilde{\psi}} & P_{A} \otimes_A' M \ar[r]^-{\tilde{\phi}} \ar[d] & N \ar[d] \\ A\left\lbrace \underline{\xi}/\eta \right\rbrace\otimes_A' L \ar[r]^-{\delta_\eta} & A\left\lbrace \underline{\xi}/\eta \right\rbrace\widehat{\otimes}_A' A\left\lbrace \underline{\xi}/\eta \right\rbrace\otimes_A'L \ar[r]^-{\text{Id} \otimes' \tilde{\psi}_\eta} & A\left\lbrace \underline{\xi}/\eta \right\rbrace\otimes_A' M \ar[r]^-{\tilde{\phi}_\eta} & N }$$ where the top row is exactly $\widetilde{\phi \circ \psi}$. In particular, the twisted differential operators of radius $\eta$ are stable under composition. 
By submultiplicativity of the norm, the following inequality is satisfied $$\lVert \phi \circ \psi \rVert_\eta \leq \lVert \widetilde{(\phi \circ \psi )}_\eta \rVert \leq \lVert \tilde{\phi}_\eta \rVert \lVert \text{Id}\otimes \tilde{\psi}_\eta \rVert \lVert \delta_\eta \rVert = \lVert \tilde{\phi}_\eta \rVert \lVert \tilde{\psi}_\eta \rVert=\lVert \phi \rVert_\eta \lVert \psi \rVert_\eta. \qedhere$$ ◻ **Proposition 42**. *Let $M$ be an $A$-module of finite type and $\theta_\eta : M \rightarrow M \otimes_A A\left\lbrace \underline{\xi}/\eta \right\rbrace$ be an $A$-linear map with respect to the right structure of $A\left\lbrace \underline{\xi}/\eta \right\rbrace$. Then, $\theta_\eta$ is a twisted Taylor map of radius $\eta$ if and only if the diagram below is commutative $$\xymatrix{ M \ar[r]^-{\theta_\eta} \ar[d]^-{\theta_\eta} & M \otimes_A A\left\lbrace \underline{\xi}/\eta \right\rbrace\ar[d]^-{\theta_\eta \otimes' \emph{Id}} \\ M \otimes_A A\left\lbrace \underline{\xi}/\eta \right\rbrace\ar[r]^-{\emph{Id}\otimes \delta_\eta} & M \otimes_A A\left\lbrace \underline{\xi}/\eta \right\rbrace\widehat{\otimes}_A' A\left\lbrace \underline{\xi}/\eta \right\rbrace }$$* *Proof.* Assume the module $M$ is endowed with a twisted Taylor map of radius $\eta$. By proposition [Proposition 32](#3.3){reference-type="ref" reference="3.3"}, the twisted Taylor map factors through $M \otimes_A A\left\lbrace \underline{\xi}/\eta \right\rbrace$ and the map $\theta_\eta$. Proposition [Proposition 39](#4.5){reference-type="ref" reference="4.5"} implies the commutativity of the diagram and allows us to conclude. Conversely, we can use the same proposition and follow the same path. ◻ **Definition 43**. *The ring of twisted differential operators of radius $\eta$* is $$\text{D}^{(\eta)}_{\underline{\sigma}}:=\text{Diff}^{(\eta)}_{A,\underline{\sigma}}(A,A).$$ **Remark 8**. We endow $\text{D}^{(\eta)}_{\underline{\sigma}}$ with the norm defined by $$\lVert \varphi \rVert_\eta:=\lVert \tilde{\varphi}_\eta \rVert:= \sup_{f \neq 0} \frac{\lVert \tilde{\varphi}_\eta(f) \rVert}{\lVert f \rVert}.$$ **Corollary 44**. *There exists a canonical isomorphism of Banach $A$-modules $$\emph{D}^{(\eta)}_{\underline{\sigma}} \simeq \emph{Hom}_{A-\emph{cont}}(A\left\lbrace \underline{\xi}/\eta \right\rbrace,A).$$* **Remark 9**. For $n \in \mathbb{N}$, we denote by $K^{[n]}$ the submodule generated by the $\partial_{\underline{\sigma}}^{[\underline{k}]}$ such that $|\underline{k}| \geq n$ and set $$\widehat{\text{D}}_{\underline{\sigma}}^{(\infty ) }=\varprojlim \text{D}^{(\infty ) }_{A,\underline{\sigma}}/K^{[n]}.$$ In general this is not a ring; however, we have the following description: $$\widehat{\text{D}}_{\underline{\sigma}}^{(\infty ) }:=\left\lbrace \sum\limits_{\underline{k} \in \mathbb{N}^d} z_{\underline{k}} \partial^{[\underline{k}]}_{\underline{\sigma}}, ~ z_{\underline{k}} \in A \right\rbrace.$$ By duality, the diagram [\[diagram\]](#diagram){reference-type="ref" reference="diagram"} induces the following sequence of inclusions $$\text{D}^{(\infty ) }_{\underline{\sigma}} \rightarrow \text{D}^{(\eta)}_{\underline{\sigma}} \rightarrow \widehat{\text{D}}_{\underline{\sigma}}^{(\infty ) },$$ where the right map is explicitly given by $$\varphi \mapsto \sum_{\underline{k} \in \mathbb{N}^d}\tilde{\varphi}_\eta \left(\underline{\xi}^{(\underline{k})_{\underline{\sigma}}}\right)\partial^{[\underline{k}]}_{\underline{\sigma}}.$$ **Proposition 45**. 
*The injective map $\emph{D}^{(\eta)}_{A,\underline{\sigma}} \rightarrow \widehat{\emph{D}}^{(\infty)}_{A,\underline{\sigma}}$ induces an isometric isomorphism of Banach $A$-modules $$\emph{D}^{(\eta)}_{\underline{\sigma}} \rightarrow \left\lbrace \sum_{\underline{k} \in \mathbb{N}^d} z_{\underline{k}} \partial^{[\underline{k}]}_{\underline{\sigma}}, ~ \exists C >0, ~\forall\underline{k} \in \mathbb{N}^d, ~\lVert z_{\underline{k}} \rVert \leq C \eta^{|\underline{k}|} \right\rbrace$$ for the sup norm $$\left\lVert \sum_{\underline{k} \in \mathbb{N}^d} z_{\underline{k}} \partial^{[\underline{k}]}_{\underline{\sigma}}\right\rVert_\eta=\sup \left\lbrace \left\lVert z_{\underline{k}} \right\rVert / \eta^{|\underline{k}|} \right\rbrace.$$* *Proof.* Let $\varphi \in \text{D}^{(\eta)}_{\underline{\sigma}}$. Since $\tilde{\varphi}_\eta$ is continuous, for all $\underline{k} \in \mathbb{N}^d$, $$\left\lVert \tilde{\varphi}_\eta \left(\underline{\xi}^{(\underline{k})_{\underline{\sigma}}}\right) \right\rVert \leq \left\lVert \tilde{\varphi}_\eta \right\rVert \left\lVert \underline{\xi}^{(\underline{k})_{\underline{\sigma}}} \right\rVert = \left\lVert \tilde{\varphi}_\eta \right\rVert \eta^{|\underline{k}|}.$$ Hence the injection $\text{D}^{(\eta)}_{\underline{\sigma}} \rightarrow \widehat{\text{D}}^{(\infty)}_{\underline{\sigma}}$ has its image contained in the considered set and its norm is at most $1$. Conversely, let $\sum\limits_{\underline{k} \in \mathbb{N}^d} w_{\underline{k}} \partial^{[\underline{k}]}_{\underline{\sigma}}$ be an element of norm at most $C$ of the right-hand side. By proposition [Proposition 38](#dualité){reference-type="ref" reference="dualité"}, for every sequence $(z_{\underline{k}})_{\underline{k} \in \mathbb{N}^d}$ of elements of $A$ satisfying $\lVert z_{\underline{k}} \rVert \eta^{|\underline{k}|} \rightarrow 0$, there exists a unique $\varphi \in \text{D}^{(\eta)}_{\underline{\sigma}}$ such that $$\tilde{\varphi}_\eta\left( \sum_{\underline{k} \in \mathbb{N}^d} z_{\underline{k}} \underline{\xi}^{(\underline{k})_{\underline{\sigma}}} \right)=\sum_{\underline{k} \in \mathbb{N}^d} z_{\underline{k}}w_{\underline{k}} \in A,$$ which is well defined since $\lVert z_{\underline{k}} w_{\underline{k}} \rVert \leq C \lVert z_{\underline{k}} \rVert \eta^{|\underline{k}|} \rightarrow 0$. This also implies that $\lVert \varphi \rVert \leq C$. This defines an inverse map, which is indeed an isometry. ◻ **Proposition 46**. *The $\emph{D}_{\underline{\sigma}}^{(\infty)}$-module structure on an $\eta$-convergent module of finite type over $A$ extends canonically to a structure of $\emph{D}^{(\eta)}_{\underline{\sigma}}$-module.* *Proof.* Consider the $A\left\lbrace \underline{\xi}/\eta \right\rbrace$-linear map $$\xymatrix@R=0cm{ M \otimes_A A\left\lbrace \underline{\xi}/\eta \right\rbrace\ar[r]& \text{Hom}_A(\text{Hom}_A(A\left\lbrace \underline{\xi}/\eta \right\rbrace,A),M)\\ s \otimes f \ar@{|->}[r]& (\psi \mapsto \psi(f)s). }$$ The twisted Taylor map of radius $\eta$ can be linearized to get a map $A\left\lbrace \underline{\xi}/\eta \right\rbrace\otimes'M \rightarrow M \otimes A\left\lbrace \underline{\xi}/\eta \right\rbrace$. 
By composition we obtain an $A\left\lbrace \underline{\xi}/\eta \right\rbrace$-linear map $$A\left\lbrace \underline{\xi}/\eta \right\rbrace\otimes_A'M \rightarrow \text{Hom}_A(\text{D}^{(\eta)}_{\underline{\sigma}},M)$$ or, equivalently, an $A\left\lbrace \underline{\xi}/\eta \right\rbrace$-linear map $$\text{D}^{(\eta)}_{\underline{\sigma}} \rightarrow \text{Hom}_A(A\left\lbrace \underline{\xi}/\eta \right\rbrace\otimes'M,M)=\text{D}^{(\eta)}_{\underline{\sigma}}(M,M).$$ It is a morphism of rings by proposition [Proposition 42](#commu){reference-type="ref" reference="commu"}. By construction, this map is compatible with the map $$\text{D}^{(\infty)}_{\underline{\sigma}} \rightarrow \text{D}^{(\infty)}_{\underline{\sigma}}(M,M)$$ which gives the action of $\text{D}^{(\infty)}_{\underline{\sigma}}$ over $M$. ◻ **Proposition 47**. *If $M$ is a $\emph{D}^{(\eta)}_{\underline{\sigma}}$-module of finite type over $A$, then $M$ is $\eta^\dagger$-convergent.* *Proof.* We have to show that $M$ is $\eta'$-convergent for every $\eta' < \eta$. By lemma 4.1.2 of [@Berthelot], $M$ is a topological $\text{D}^{(\eta)}_{\underline{\sigma}}$-module. In other words, for every element $s$ of $M$, the map $$\text{D}^{(\eta)}_{\underline{\sigma}} \rightarrow M, ~\varphi \mapsto \varphi(s)$$ is continuous and $A$-linear. This means that there exists a constant $C$ such that for every $\varphi \in \text{D}^{(\eta)}_{\underline{\sigma}}$, $\lVert \varphi(s) \rVert \leq C \lVert \varphi \rVert \lVert s \rVert$. Hence, $\forall \underline{k} \in \mathbb{N}^d, ~ \lVert \partial^{[\underline{k}]}_{\underline{\sigma}}(s) \rVert \leq C \lVert s \rVert / \eta^{|\underline{k}|}$ and so $\lVert \partial^{[\underline{k}]}_{\underline{\sigma}}(s) \rVert \eta'^{|\underline{k}|} \leq C \lVert s \rVert \left(\eta'^{|\underline{k}|} / \eta^{|\underline{k}|}\right) \rightarrow 0$. We have shown that the module $M$ is $\eta'$-convergent for every $\eta' < \eta$. ◻ **Proposition 48**. *Let $\tau_1,\ldots,\tau_d$ be $R$-linear continuous endomorphisms of $A$ that commute. Assume that $\underline{x}$ are classical and symmetrical $\underline{\tau}$-coordinates and that $\eta \geq \rho(\underline{\tau})$. Then, if $A$ is $\eta$-convergent with respect to $\underline{\sigma}$, it is also $\eta$-convergent with respect to $\underline{\tau}$, with the same twisted Taylor morphism as for $\underline{\sigma}$.* *Proof.* We can consider the commutative diagram below $$\xymatrix{ & & A \ar[ld]_-\theta \ar[d]^-{\theta_\eta} \\ A[\underline{\xi}] \ar@{^{(}->}[r] \ar@{->>}[d] & P_{A} \ar@{->>}[d] \ar[r] & A\left\lbrace \underline{\xi}/\eta \right\rbrace\ar@{->>}[d] \\ A[\underline{\xi}]/(\underline{\xi}^{(\underline{k})_{\underline{\tau}}}; ~|\underline{k}|= n+1)\ar[r] \ar@/_2pc/[rr] & P_{A,(n)_{\underline{\tau}}} & A\left\lbrace \underline{\xi}/\eta \right\rbrace/ \left(\underline{\xi}^{(\underline{k})_{\underline{\tau}}}; ~|\underline{k}|= n+1\right) }$$ (where $\theta_\eta$ is the twisted Taylor map of radius $\eta$ with respect to $\underline{\sigma}$). The diagram being commutative, the composition of the vertical maps on the right does not depend on $\underline{\sigma}$. 
Moreover, since $\eta \geq \rho (\underline{\tau})$ by hypothesis, the map $$A[\underline{\xi}]/\left(\underline{\xi}^{(\underline{k})_{\underline{\tau}}}, ~|\underline{k}|= n+1\right) \rightarrow A\left\lbrace \underline{\xi}/\eta \right\rbrace/ \left(\underline{\xi}^{(\underline{k})_{\underline{\tau}}}, ~|\underline{k}|= n+1 \right)$$ is an isomorphism by proposition [Proposition 28](#pr){reference-type="ref" reference="pr"}. Going to the limit we obtain the diagram below $$\xymatrix{ & A \ar[d]^-{\theta_\eta} \ar[ld]_-\theta \\ P_{A} \ar[d] & A\left\lbrace \underline{\xi}/\eta \right\rbrace\ar[ld] \\ \widehat{\widehat{P}}_{\underline{\tau}}. }$$ By proposition [Proposition 32](#3.3){reference-type="ref" reference="3.3"} it follows that $A$ is $\eta$-convergent with respect to $\underline{\tau}$ and that $\theta_\eta$ is the twisted Taylor map of radius $\eta$ with respect to $\underline{\tau}$. ◻ **Proposition 49**. *The ring structure of $\emph{Hom}_{A-\emph{cont}}(A\left\lbrace \underline{\xi}/\eta \right\rbrace,A)$ does not depend on $\underline{\sigma}$.* *Proof.* This follows from the multiplication defined on the $A$-module $\text{Hom}_{A-\text{cont}}(A\left\lbrace \underline{\xi}/\eta \right\rbrace, A)$ in proposition [Proposition 40](#multiplicationf){reference-type="ref" reference="multiplicationf"} and from proposition [Proposition 48](#6.1){reference-type="ref" reference="6.1"}. ◻ **Theorem 50**. *Let $\tau_1,\ldots,\tau_d$ be $R$-linear continuous endomorphisms of $A$ that commute. Assume that $\underline{x}$ are classical and symmetrical $\underline{\tau}$-coordinates and that $\eta \geq \rho(\underline{\tau})$. Then, there exists an isometric $A$-linear isomorphism of $R$-algebras $$\emph{D}^{(\eta)}_{\underline{\sigma}} \simeq \emph{D}^{(\eta)}_{\underline{\tau}}$$ that depends only on $\underline{x}$.* *Proof.* This follows from the previous proposition and the fact that the isomorphism $$\text{D}^{(\eta)}_{\underline{\sigma}} \simeq \text{Hom}_{A-\text{cont}}(A\left\lbrace \underline{\xi}/\eta \right\rbrace,A)$$ is $A$-linear. ◻ In particular, it is possible to apply this theorem in the case where $$\forall i=1,\ldots,d,~\tau_i=\text{Id}_A.$$ **Corollary 51**. *If $\underline{x}$ are étale coordinates, then there exists an isometric $A$-linear isomorphism of $R$-algebras $$\emph{D}^{(\eta)}_{\underline{\sigma}} \simeq \emph{D}^{(\eta)}_{\underline{\emph{Id}}}$$ where $\emph{D}^{(\eta)}_{ \underline{\emph{Id}}}$ designates the ring of twisted differential operators of radius $\eta$ with $\underline{\emph{Id}}=\left( \emph{Id}_A,\ldots,\emph{Id}_A \right)$.* ## Confluence From now on we fix an $\eta > \rho(\underline{\sigma})$ such that $A$ is $\eta^\dagger$-convergent. **Definition 52**. The ring of *twisted differential operators of radius $\eta^\dagger$* is $$\text{D}_{\underline{\sigma}}^{(\eta^\dagger)}=\varinjlim_{\eta'<\eta} \text{D}_{\underline{\sigma}}^{(\eta')}.$$ **Proposition 53**. *The category of $\emph{D}_{\underline{\sigma}}^{(\eta^\dagger)}$-modules of finite type over $A$ is equivalent to the subcategory of $\eta^\dagger$-convergent $\emph{D}^{(\infty)}_{\underline{\sigma}}$-modules of finite type over $A$.* *Proof.* It is enough to apply propositions [Proposition 46](#5.5){reference-type="ref" reference="5.5"} and [Proposition 47](#5.6){reference-type="ref" reference="5.6"}. ◻ **Theorem 54**. *Let $\tau_1,\ldots,\tau_d$ be $R$-linear continuous endomorphisms of $A$ such that $\underline{x}$ are also $\underline{\tau}$-coordinates. 
If $\eta > \rho(\underline{\tau})$, then the category of $\emph{D}^{(\infty)}_{\underline{\sigma}}$-modules of finite type over $A$ that are $\eta^\dagger$-convergent with respect to $\underline{\sigma}$ is equivalent to the category of $\emph{D}^{(\infty)}_{\underline{\tau}}$-modules of finite type over $A$ that are $\eta^\dagger$-convergent with respect to $\underline{\tau}$.* *Proof.* It is enough to apply theorem [Theorem 50](#6.3){reference-type="ref" reference="6.3"} and proposition [Proposition 53](#7.2){reference-type="ref" reference="7.2"}. ◻ As in [@ADV04], where the $q$-analogue of the exponential $\sum\limits_{n \geq 0} \frac{x^n}{(n)_q^!}$ is defined, we are interested in the case where the $q$-analogues of nonzero integers are invertible. **Definition 55**. When there exist some elements $\underline{q}=\lbrace q_1,\ldots,q_d \rbrace$ of $R$ such that $\underline{x}$ are classical $\underline{q}$-coordinates and $$\forall i=1,\ldots,d, ~ \forall n \in \mathbb{N}, ~ (n)_{q_i} \in R^\times,$$ an $A$-module $M$ of finite type endowed with a twisted connection $\nabla_{\underline{\sigma}}$ is said to be *$\eta^\dagger$-convergent* when its image in the category of $\text{D}_{\underline{\sigma}}^{(\infty)}$-modules (see proposition [Proposition 21](#equivate 3){reference-type="ref" reference="equivate 3"}) is $\eta^\dagger$-convergent. We denote by $\nabla_{\underline{\sigma}}^{\text{Int}}\text{-}\text{Mod}_{\text{tf}}^{(\eta^\dagger)}(A)$ the category of $A$-modules of finite type endowed with an integrable twisted connection that are $\eta^\dagger$-convergent, and by $\text{D}_{\underline{\sigma}}^{(\eta^\dagger)}\text{-}\text{Mod}_{\text{tf}}(A)$ the category of $\text{D}_{\underline{\sigma}}^{(\eta^\dagger)}$-modules of finite type over $A$. The cohomology of this category is the one given by $\text{Ext}_{\text{D}_{\underline{\sigma}}^{(\eta^\dagger)}}(A, \cdot)$. We also use the notation $$\text{SP}\left(\text{D}_{\underline{\sigma}}^{(\eta^\dagger)}\right)=\text{Hom}_{\text{D}_{\underline{\sigma}}^{(\eta^\dagger)}}\left(\text{DR}\left(\text{D}_{\underline{\sigma}}^{(\eta^\dagger)}\right),\text{D}_{\underline{\sigma}}^{(\eta^\dagger)}\right).$$ **Theorem 56**. *Assume there exist some elements $\underline{q}=\lbrace q_1,\ldots,q_d \rbrace$ of $R$ such that $\underline{x}$ are classical and symmetrical $\underline{q}$-coordinates and that the $q_i$-analogues of nonzero integers are invertible in $R$. If moreover $A$ is $\eta^\dagger$-convergent, then we have the following equivalence of categories $$\nabla_{\underline{\sigma}}^{\emph{Int}}\text{-}\emph{Mod}_{\emph{tf}}^{(\eta^\dagger)}(A) \simeq \emph{D}_{\underline{\sigma}}^{(\eta^\dagger)}\text{-}\emph{Mod}_{\emph{tf}}(A).$$ This equivalence is compatible with the cohomologies.* *Proof.* By proposition [Proposition 21](#equivate 3){reference-type="ref" reference="equivate 3"}, the category of finite $A$-modules endowed with an integrable connection $\nabla_{\underline{\sigma}}$ is equivalent to the category of $\text{D}_{A,\underline{\sigma}}^{(\infty)}$-modules of finite type over $A$; here we used the hypothesis on the elements $q_1,\ldots,q_d.$ The equivalence then follows directly from proposition [Proposition 53](#7.2){reference-type="ref" reference="7.2"}. It remains to prove that this equivalence is compatible with the cohomologies on both sides. The sequence $\partial_{\underline{\sigma},1}, \ldots, \partial_{\underline{\sigma},d}$ is a regular sequence of $\text{D}^{(\eta^\dagger)}_{\underline{\sigma}}$. By proposition 1.4.3 of [@Schap], $\text{SP}\left(\text{D}_{\underline{\sigma}}^{(\eta^\dagger)}\right)$ is a free resolution of $A$. 
Hence, $$\begin{aligned} \text{Ext}_{\text{D}_{\underline{\sigma}}^{(\eta^\dagger)}}(A, M)&=\text{Hom}_{\text{D}_{\underline{\sigma}}^{(\eta^\dagger)}}\left( \text{SP}(\text{D}_{\underline{\sigma}}^{(\eta^\dagger)}),M\right) \\ &=\text{Hom}_{\text{D}_{\underline{\sigma}}^{(\eta^\dagger)}}\left( \text{SP}(\text{D}_{\underline{\sigma}}^{(\eta^\dagger)}),\text{D}_{\underline{\sigma}}^{(\eta^\dagger)}\right) \otimes_{\text{D}_{\underline{\sigma}}^{(\eta^\dagger)}}M\\ &=\text{DR}(\text{D}_{\underline{\sigma}}^{(\eta^\dagger)})\otimes_{\text{D}_{\underline{\sigma}}^{(\eta^\dagger)}}M\\ &=\text{DR}(M). \end{aligned} \qedhere$$ ◻ We recover here a generalization in several variables of the result in [@GLQ4]. This theorem is in the spirit of [@ADV04] and [@pulita]. **Theorem 57**. *Let $K$ be a complete non-archimedean field of characteristic $0$. Let $A$ be a Huber $K$-algebra and $\underline{q}=(q_1,\ldots,q_d)$ be elements of $R$ such that for every $i=1,\ldots,d$ the $q_i$-analogues of nonzero integers are invertible. If $A$ is $\eta^\dagger$-convergent with respect to étale classical symmetrical $\underline{q}$-coordinates $\underline{x}=(x_1,\ldots,x_d)$, with $\eta\geq \rho(\underline{\sigma})$, then we have an equivalence of categories $$\nabla_{\underline{\sigma}}^{\emph{Int}}\text{-}\emph{Mod}_{\emph{tf}}^{(\eta^\dagger)}(A)\simeq \nabla_{\underline{\emph{Id}}}^{\emph{Int}}\text{-}\emph{Mod}_{\emph{tf}}^{(\eta^\dagger)}(A)$$ compatible with the cohomologies on both sides.* *Proof.* We are in a situation where we can apply Theorem [Theorem 56](#confluence){reference-type="ref" reference="confluence"}. It suffices to show that we have an isomorphism $$\text{D}^{(\eta^\dagger)}_{\underline{\sigma}} \simeq \text{D}^{(\eta^\dagger)}_{\underline{\text{Id}}}.$$ This follows from corollary [Corollary 51](#6.4){reference-type="ref" reference="6.4"}. ◻
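To conclude, here is a minimal one-variable sanity check of the objects involved (the specific choice of $A$, $\sigma$ and $q$ below is ours and only serves as an illustration; we assume that, in this situation, $x$ is a classical and symmetrical $q$-coordinate). Take $d=1$, let $A=K\left\lbrace x \right\rbrace$ be a Tate algebra over a complete non-archimedean field $K$ equipped with the Gauss norm, and set $\sigma(x)=qx$ with $q \in K$, $q\neq 1$, $\lvert 1-q \rvert <1$. Then, by definition [Definition 13](#classique){reference-type="ref" reference="classique"}, $$\partial_{\sigma}(f)=\frac{f(qx)-f(x)}{(q-1)x}, \qquad \partial_{\sigma}(x^{n})=(n)_{q}\, x^{n-1}, \qquad \rho(\sigma)=\lVert x-\sigma(x)\rVert=\lvert 1-q \rvert,$$ and the relation $\partial_{\sigma}^{n}=(n)_{q}^!\,\partial_{\sigma}^{[n]}$ recovers the usual divided $q$-derivatives. For $\eta \geq \lvert 1-q \rvert$, and provided $x$ is moreover an étale coordinate, corollary [Corollary 51](#6.4){reference-type="ref" reference="6.4"} identifies $\text{D}^{(\eta)}_{\sigma}$ with $\text{D}^{(\eta)}_{\text{Id}}$, and the theorem above then compares $\eta^\dagger$-convergent modules with twisted connection relative to $\sigma$ with $\eta^\dagger$-convergent modules with ordinary connection, in the spirit of the one-variable confluence results of [@ADV04] and [@pulita].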
--- abstract: | We prove that the polynomials counting locally free, absolutely indecomposable, rank 1 representations of quivers over rings of truncated power series have non-negative coefficients. This is a generalisation to higher depth of positivity for toric Kac polynomials. The proof goes by inductively contracting/deleting arrows of the quiver and is inspired from a previous work of Abdelgadir, Mellit and Rodriguez-Villegas on toric Kac polynomials. We also relate counts of absolutely indecomposable quiver representations in higher depth and counts of jets over fibres of quiver moment maps. This is expressed in a plethystic identity involving generating series of these counts. In rank 1, we prove a cohomological upgrade of this identity, by computing the compactly supported cohomology of jet spaces over preprojective stacks. This is reminiscent of PBW isomorphisms for preprojective cohomological Hall algebras. Finally, our plethystic identity allows us to prove two conjectures by Wyss on the asymptotic behaviour of both counts, when depth goes to infinity. author: - "Tanguy Vernet[^1]" bibliography: - 0D\_\_EPFL_R\_\_f\_\_rences_R\_\_f\_\_rences_th\_\_se.bib title: Positivity for toric Kac polynomials in higher depth --- # Introduction Given a quiver $Q$ and a dimension vector $\mathbf{d}\in\mathbb{Z}_{\geq0}^{Q_{0}}$, there is a polynomial $A_{Q,\mathbf{d}}$ which counts absolutely indecomposable representations of $Q$ over finite fields. These polynomials were introduced by Kac in [@Kac83] and have motivated considerable work in geometric representation theory since then. Of particular interest are conjectures [@Kac83 Conj. 1-2.] on the coefficients of $A_{Q,\mathbf{d}}$, later refined by Bozec and Schiffmann [@BS19a Conj. 1.3.]. These conjectures state that the coefficients of $A_{Q,\mathbf{d}}$ are non-negative and can be interpreted as (graded) dimensions of a Borcherds algebra. Their proofs largely rely on the study of quiver moment maps $\mu_{Q,\mathbf{d}}$ and their geometry. Kac's original conjectures were proved by studying Nakajima's quiver varieties [@Nak94; @Nak98] and representations of Kac-Moody algebras on their cohomology [@CBVB04; @Hau10; @HLRV13b]. More recently, the development of cohomological Hall algebras built from the preprojective stacks $\left[\mu_{Q,\mathbf{d}}^{-1}(0)/\mathop{\mathrm{GL}}(\mathbf{d})\right]$ [@SV13b; @RS17; @YZ18a; @YZ20; @SV20] led to the discovery of a suitable Borcherds algebra $\mathfrak{g}_{Q}$ categorifying Kac's polynomials - in other words, a proof of Bozec and Schiffmann's conjecture - along with new actions of $\mathfrak{g}_{Q}$ on the cohomology of Nakajima's quiver varieties [@DM20; @Dav20; @DHSM23]. In this paper, we are interested in generalisations of Kac's polynomials to quiver representations over truncated polynomial rings $\mathcal{O}_{\alpha}:=\mathbb{F}_{q}[t]/(t^{\alpha}),\ \alpha\geq1$. More precisely, we study the counts $A_{Q,\mathbf{r},\alpha}$ of locally free, absolutely indecomposable representations of $Q$ over $\mathcal{O}_{\alpha}$, in rank $\mathbf{r}\in\mathbb{Z}_{\geq0}^{Q_{0}}$. These are still quite mysterious; for instance, showing that $A_{Q,\mathbf{r},\alpha}$ is polynomial in $q$ is still an open problem for $\mathbf{r}>\underline{1}$. More is known about toric representations i.e. representations of rank $\mathbf{r}=\underline{1}$. 
These were recently studied by Hausel, Letellier, Rodriguez-Villegas [@HLRV18] and Wyss [@Wys17b], who established explicit polynomial formulas for $A_{Q,\underline{1},\alpha}$ and conjectured that $A_{Q,\underline{1},\alpha}$ has non-negative coefficients [@HLRV18 Rmk. 7.7.i.]. We prove this conjecture. We also investigate the relation of $A_{Q,\mathbf{r},\alpha}$ with quiver moment maps. Since we are working over $\mathcal{O}_{\alpha}$, a natural object to consider is the moment map $\mu_{Q,\mathbf{r},\alpha}$ induced by $\mu_{Q,\mathbf{r}}$ on jet spaces of quiver moduli. In [@Wys17b], Wyss computed the counts $\sharp_{\mathbb{F}_{q}}\mu_{Q,\mathbf{r},\alpha}^{-1}(0)$ for $\mathbf{r}=\underline{1}$, in the form of an Igusa local zeta function, and conjectured an *asymptotic* relation between $\sharp_{\mathbb{F}_{q}}\mu_{Q,\mathbf{r},\alpha}^{-1}(0)$ and $A_{Q,\mathbf{r},\alpha}$, when $\alpha$ goes to infinity. Wyss also conjectured that the limits of both counts have non-negative coefficients (see below for more details). In this paper, we find an identity between generating series encoding $\sharp_{\mathbb{F}_{q}}\mu_{Q,\mathbf{r},\alpha}^{-1}(0)$ and $A_{Q,\mathbf{r},\alpha}$, *for all* $\mathbf{r}$ and a *fixed* value of $\alpha$. This identity allows us to prove both of Wyss' conjectures. Regarding geometric representation theory, Geiss, Leclerc and Schröer recently explored representations of quivers over rings of truncated power series in a series of papers [@GLS17a; @GLS17b; @GLS16; @GLS18a; @GLS18b]. In particular, they use moduli of quiver representations in higher depth to build realisations of symmetrizable Kac-Moody algebras [@GLS16; @GLS18a]. Their constructions exploit nilpotent subvarieties of $\mu_{Q,\mathbf{r},\alpha}^{-1}(0)$, whose relations to cohomological Hall algebras are well understood when working over a field [@Hen22b]. However, as far as we know, no relations with $A_{Q,\mathbf{r},\alpha}$ have been found up to now. Instead, using a purity argument, we find that the compactly supported cohomology of $\left[\mu_{Q,\mathbf{r},\alpha}^{-1}(0)/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\right]$ categorifies $A_{Q,\mathbf{r},\alpha}$ when $\mathbf{r}\leq\underline{1}$ - see Theorem [Theorem 7](#Thm/IntroCohIntgr){reference-type="ref" reference="Thm/IntroCohIntgr"} below. This is reminiscent of the PBW isomorphism for preprojective cohomological Hall algebras established in [@Dav17a] and we regard this as evidence that the cohomology of jet spaces over preprojective stacks may also carry representation-theoretic structure. Let us describe these results in more details. #### Higher depth Kac polynomials and preprojective stacks {#higher-depth-kac-polynomials-and-preprojective-stacks .unnumbered} Our first results generalise the relation between Kac polynomials and counts of $\mathbb{F}_{q}$-points over $\mu_{Q,\mathbf{d}}^{-1}(0)$ to higher depth. The following formula was established by Mozgovoy [@Moz11a], using dimensional reduction:$$\sum_{\mathbf{d}\in\mathbb{N}^{Q_0}} \frac{\sharp_{\mathbb{F}_q}\mu_{Q,\mathbf{d}}^{-1}(0)}{\sharp_{\mathbb{F}_q}\mathop{\mathrm{GL}}(\mathbf{d})} \cdot q^{\langle\mathbf{d},\mathbf{d}\rangle}t^{\mathbf{d}} = \mathop{\mathrm{Exp}}_{q,t}\left( \sum_{\mathbf{d}\in\mathbb{N}^{Q_0}\setminus\{0\}} \frac{A_{Q,\mathbf{d}}}{1-q^{-1}}\cdot t^{\mathbf{d}} \right),$$ where $\langle\bullet,\bullet\rangle$ is the Euler form of $Q$. 
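As a quick sanity check of this formula (an illustrative computation of ours, not taken from the sources above), consider the quiver $Q$ with one vertex and one loop and the dimension vector $\mathbf{d}=1$. Then $\langle 1,1\rangle=0$, the moment map $\mu_{Q,1}$ on pairs $(a,a^*)\in\mathbb{A}^{2}$ is identically zero, and every one-dimensional representation of the loop is absolutely indecomposable and determined up to isomorphism by a scalar, so that $A_{Q,1}(q)=q$. The coefficients of $t$ on both sides then agree: $$\frac{\sharp_{\mathbb{F}_q}\mu_{Q,1}^{-1}(0)}{\sharp_{\mathbb{F}_q}\mathop{\mathrm{GL}}(1)}\cdot q^{\langle 1,1\rangle}=\frac{q^{2}}{q-1}=\frac{q}{1-q^{-1}}=\frac{A_{Q,1}(q)}{1-q^{-1}},$$ using that the coefficient of $t$ in $\mathop{\mathrm{Exp}}_{q,t}$ of a series without constant term equals the coefficient of $t$ in its argument.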
More precisely, the formula relates the stacky point count of $\left[\mu_{Q,\mathbf{d}}^{-1}(0)/\mathop{\mathrm{GL}}(\mathbf{d})\right]$ - the moduli stack of objects in a category of homological dimension 2 - to the count of objects in the category of representations of $Q$, which has homological dimension 1. Because $\mathop{\mathrm{Rep}}(Q)$ has dimension 1, the point count of $\left[\mu_{Q,\mathbf{d}}^{-1}(0)/\mathop{\mathrm{GL}}(\mathbf{d})\right]$ is directly related to the count of objects of $\mathop{\mathrm{Rep}}(Q)$ with endomorphisms. This is in turn related to Kac polynomials, using Krull-Schmidt decomposition and Galois descent. As was shown by Geiss, Leclerc and Schröer, the category of locally free representations of $Q$ over $\mathcal{O}_{\alpha}$ also has homological dimension 1. We show that the above argument can be generalised to that setting, which results in: **Theorem 1**. *Let $Q$ be a quiver and $\alpha\geq1$. Then: $$\sum_{\mathbf{r}\in\mathbb{N}^{Q_0}} \frac{\sharp_{\mathbb{F}_q}\mu_{Q,\mathbf{r},\alpha}^{-1}(0)}{\sharp_{\mathbb{F}_q}\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})} \cdot q^{\alpha\langle\mathbf{r},\mathbf{r}\rangle}t^{\mathbf{r}} = \mathop{\mathrm{Exp}}_{q,t}\left( \sum_{\mathbf{r}\in\mathbb{N}^{Q_0}\setminus\{0\}} \frac{A_{Q,\mathbf{r},\alpha}}{1-q^{-1}}\cdot t^{\mathbf{r}} \right).$$* One advantage of this formula is that it holds for any fixed $\alpha\geq1$. This strengthens the asymptotic connection between $A_{Q,\mathbf{r},\alpha}$ and $\sharp_{\mathbb{F}_{q}}\mu_{Q,\mathbf{r},\alpha}^{-1}(0)$ conjectured by Wyss in [@Wys17b §4.]. Let us recall how this works. When $\mathbf{r}=\underline{1}$ and $Q$ is 2-connected, Wyss showed that the following two sequences converge and that their limits are rational fractions in $q$:$$A_{Q}(q):= \underset{\alpha\rightarrow +\infty}{\lim} \left(q^{-\alpha (1-\langle\mathbf{r},\mathbf{r}\rangle)}\cdot A_{Q,\underline{1},\alpha}(q)\right),$$ $$B_{\mu_{Q}}(q):= \underset{\alpha\rightarrow +\infty}{\lim} \left(q^{-\alpha(2\sharp Q_1-\sharp Q_0+1)}\cdot\sharp_{\mathbb{F}_q}\mu_{Q,\underline{1},\alpha}^{-1}(0)\right).$$ He further conjectured that $A_{Q}$ and $B_{\mu_{Q}}$ are directly related. Indeed, this is implied by Theorem [Theorem 1](#Thm/IntroExpFmlKacPol){reference-type="ref" reference="Thm/IntroExpFmlKacPol"} : **Corollary 2**. *Let $Q$ be a 2-connected quiver. Then: $$\frac{B_{\mu_{Q}}(q)}{(1-q^{-1})^{\sharp Q_0}}=\frac{A_{Q}(q)}{1-q^{-1}}.$$* Another approach to obtain Kac polynomials from quiver moment maps uses generic fibers of $\mu_{Q,\mathbf{d}}$. This strategy led to the first proof of Kac's conjectures for indivisible dimension vectors by Crawley-Boevey and Van den Bergh [@CBVB04]. Their results were generalised more recently by Davison in [@Dav23a]. We prove a generalisation in higher depth of Crawley-Boevey and Van den Bergh's result. Given $\mathbf{r}\in\mathbb{N}^{Q_{0}}$, we say that $\lambda\in\mathbb{Z}^{Q_{0}}$ is generic with respect to $\mathbf{r}$ if $\lambda\cdot\mathbf{r}=0$ and $\lambda\cdot\mathbf{r}'\ne0$ for all $0<\mathbf{r}'<\mathbf{r}$. Such a $\lambda$ exists if, and only if, $\mathbf{r}$ is indivisible. **Theorem 3**. *Let $\mathbf{r}\in\mathbb{N}^{Q_{0}}$ be an indivisible rank vector and $\lambda\in\mathbb{Z}^{Q_{0}}$ be generic with respect to $\mathbf{r}$. 
Then: $$\frac{\sharp_{\mathbb{F}_q}\mu_{Q,\mathbf{r},\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)}{\sharp_{\mathbb{F}_q}\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})}=q^{-\alpha\langle\mathbf{r},\mathbf{r}\rangle}\cdot\frac{A_{Q,\mathbf{r},\alpha}(q)}{1-q^{-1}}.$$* #### Positivity results {#positivity-results .unnumbered} We also prove that counts of toric quiver representations in higher depth enjoy positivity properties, both in fixed depth $\alpha\geq1$ and asymptotically. The asymptotic result was also conjectured by Wyss in [@Wys17b §4.] and roughly states that the numerator of $B_{\mu_{Q}}$ (or equivalently, of $A_{Q}$ by Corollary [Corollary 2](#Cor/IntroRelAvsB){reference-type="ref" reference="Cor/IntroRelAvsB"}) has non-negative coefficients. Our approach to this conjecture relies on Wyss' explicit formula for $A_{Q}$:$$A_Q(q)= (1-q^{-1})^{b(Q)}\cdot \sum_{E_1\subsetneq E_2\ldots\subsetneq E_s=Q_1}\prod_{j=1}^{s-1}\frac{1}{q^{b(Q)-b(Q\restriction_{E_j})}-1}.$$ Here $b(Q)$ stands for the Betti number of the graph underlying $Q$ (see Section [\[Sect/Graph\]](#Sect/Graph){reference-type="ref" reference="Sect/Graph"} for details). We show that $A_{Q}$ is essentially the Hilbert series of the Stanley-Reisner ring of a Cohen-Macaulay simplicial complex (see Section [\[Sect/SRrings\]](#Sect/SRrings){reference-type="ref" reference="Sect/SRrings"} for background). This was inspired from [@MV22], where similar combinatorial formulas for local Igusa zeta functions are studied. We obtain the following: **Theorem 4**. *Let $Q$ be a 2-connected quiver. Consider the Stanley-Reisner ring $\mathbb{Q}[\Delta]$ associated to the order complex $\Delta$ of the poset $(\Pi(Q_{1})\setminus\{\emptyset,Q_{1}\},\subseteq)$. Then:* *$$A_Q(q)=\frac{(1-q^{-1})^{b(Q)}}{1-q^{-b(Q)}}\cdot\mathop{\mathrm{Hilb}}_{\Delta}\left(u_{E}=q^{-(b(Q)-b(Q\restriction_{E}))}\right).$$ and $\mathop{\mathrm{Hilb}}_{\Delta}\left(u_{E}=q^{-(b(Q)-b(Q\restriction_{E}))}\right)$ can be presented as a rational fraction whose numerator has non-negative coefficients.* On the other hand, assuming that $\alpha$ is finite and fixed, we also prove that $A_{Q,\mathbf{r},\alpha}$ has non-negative coefficients, when $\mathbf{r}=\underline{1}$. For toric Kac polynomials, there is a beautiful graph-theoretic proof of Kac's positivity conjecture involving Tutte polynomials [@AMRV22]. The key observation is that toric Kac polynomials can be computed recursively, using a contraction-deletion method on arrows of $Q$. In higher depth, this contraction-deletion argument must take into account valuations in the ring $\mathcal{O}_{\alpha}$ and the connection to Tutte polynomials is lost. Nevertheless, this is enough to prove: **Theorem 5**. *Let $\mathbf{r}=\underline{1}$. Then $A_{Q,\mathbf{r},\alpha}$ has non-negative coefficients.* Contraction-deletion of edges is also key to our computation of $\mathrm{H}_{\mathrm{c}}^{\bullet}\left(\left[\mu_{Q,\mathbf{r},\alpha}^{-1}(0)/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\right]\right)$, which we now explain. #### Cohomology of toric preprojective stacks in higher depth {#cohomology-of-toric-preprojective-stacks-in-higher-depth .unnumbered} A fruitful strategy to prove that Kac polynomials have non-negative coefficients is categorification i.e. extracting $A_{Q,\mathbf{d}}$ from the cohomology of pure algebraic varieties (or stacks). This was done with quiver varieties in [@CBVB04; @HLRV18] and preprojective stacks in [@Dav17a; @Dav18]. 
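Let us recall informally the mechanism behind this strategy (a standard statement with a toy example, not quoted from this paper; see Section [\[Sect/EqCoh\]](#Sect/EqCoh){reference-type="ref" reference="Sect/EqCoh"} for the precise statements we use): if $X$ has polynomial point count over finite fields and $\mathrm{H}_{\mathrm{c}}^{\bullet}(X)$ is pure, of Tate type and concentrated in even degrees, then the counting polynomial reads off graded dimensions of cohomology and therefore has non-negative coefficients, $$\sharp_{\mathbb{F}_q}X=\sum_{i}\dim\mathrm{H}_{\mathrm{c}}^{2i}(X)\, q^{i}, \qquad \text{e.g. } \sharp_{\mathbb{F}_q}\mathbb{P}^{1}=q+1=\dim\mathrm{H}_{\mathrm{c}}^{2}(\mathbb{P}^{1})\cdot q+\dim\mathrm{H}_{\mathrm{c}}^{0}(\mathbb{P}^{1}).$$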
When $\mathbf{r}=\underline{1}$, we show that this categorification also works in higher depth and compute the (compactly supported) cohomology of preprojective stacks over $\mathbb{K}=\mathbb{C}$. We obtain that the mixed Hodge structure of $\mathrm{H}_{\mathrm{c}}^{\bullet}\left(\left[\mu_{Q,\mathbf{r},\alpha}^{-1}(0)/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\right]\right)$ is pure and that its graded dimensions can be computed from $A_{Q,\mathbf{r}',\alpha},\ \mathbf{r}'\leq\underline{1}$. This is a cohomological upgrade of Theorems [Theorem 1](#Thm/IntroExpFmlKacPol){reference-type="ref" reference="Thm/IntroExpFmlKacPol"} and [Theorem 5](#Thm/IntroPosToricKacPol){reference-type="ref" reference="Thm/IntroPosToricKacPol"}. Our proof relies on a stratification of $\left[\mu_{Q,\mathbf{r},\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\right]$, based on the contraction-deletion algorithm mentioned above, which we further extend to $\left[\mu_{Q,\mathbf{r},\alpha}^{-1}(0)/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\right]$. This is the content of the following theorems (see Section [\[Sect/EqCoh\]](#Sect/EqCoh){reference-type="ref" reference="Sect/EqCoh"} for notations): **Theorem 6**. *Let $\mathbf{r}=\underline{1}$. Then:$$\mathrm{H}_{\mathrm{c}}^{\bullet}\left(\left[\mu_{Q,\mathbf{r},\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\right]\right) \simeq A_{Q,\mathbf{r},\alpha}(\mathbb{L})\otimes\mathbb{L}^{1-\alpha\langle\mathbf{r},\mathbf{r}\rangle}\otimes\mathrm{H}_{\mathrm{c}}^{\bullet}\left(\mathrm{B}\mathbb{G}_{\mathrm{m}}\right)$$In particular, $\mathrm{H}_{\mathrm{c}}^{\bullet}\left(\left[\mu_{Q,\mathbf{r},\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\right]\right)$ carries a pure Hodge structure.* **Theorem 7**. *Let $\mathbf{r}=\underline{1}$. Then:$$\mathrm{H}_{\mathrm{c}}^{\bullet}\left(\left[\mu_{Q,\mathbf{r},\alpha}^{-1}(0)/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\right]\right) \otimes\mathbb{L}^{\otimes\alpha\langle\mathbf{r},\mathbf{r}\rangle} \simeq \bigoplus_{Q_0=I_1\sqcup\ldots\sqcup I_s} \bigotimes_{j=1}^s \left( A_{Q\restriction_{I_j},\mathbf{r}\restriction_{I_j},\alpha}(\mathbb{L})\otimes\mathbb{L}\otimes \mathrm{H}_{\mathrm{c}}^{\bullet}(\mathrm{B}\mathbb{G}_m) \right).$$In particular, $\mathrm{H}_{\mathrm{c}}^{\bullet}\left(\left[\mu_{Q,\mathbf{r},\alpha}^{-1}(0)/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\right]\right)$ carries a pure Hodge structure.* Theorem [Theorem 7](#Thm/IntroCohIntgr){reference-type="ref" reference="Thm/IntroCohIntgr"} can be seen as an analog in higher depth of the PBW isomorphism for preprojective cohomological Hall algebras [@Dav17a], restricted to $\mathbf{r}=\underline{1}$. This is a key ingredient in Davison's reproof of Kac's positivity conjecture[^2] [@Dav18]. We take this result as a first piece of evidence that similar structures may exist in higher depth, which we hope to address in future work. #### Beyond the toric case {#beyond-the-toric-case .unnumbered} As mentioned above, beyond the case where $\mathbf{r}=\underline{1}$, the counting functions $A_{Q,\mathbf{r},\alpha}$ are largely unknown. One reason for this is that Kac and Stanley's computations of $A_{Q,\mathbf{d}}$ [@Kac83] (see also [@Hua00]) rely on a good understanding of conjugacy classes in $\mathop{\mathrm{GL}}(\mathbf{d})$. 
To the best of our knowledge, conjugacy classes of $\mathop{\mathrm{GL}}(r,\mathcal{O}_{\alpha})$ are not classified for $r\geq4$. When $r\leq3$, we exploit results of Avni, Klopsch, Onn and Voll [@AKOV16] to prove the following explicit formulas: **Proposition 8**. *Let $Q$ be the quiver with one vertex and $g\geq1$ loops. Then:$$\begin{split} A_{Q,2,\alpha}= & \frac{q^{2\alpha g-1}(q^{2g}-1)(q^{\alpha(2g-3)}-1)}{(q^2-1)(q^{2g-3}-1)}, \\ A_{Q,3,\alpha}= & \frac{q^{3\alpha g-2}(q^{2g}-1)(q^{2g-1}-1)}{(q^2-1)(q^3-1)(q^{2g-3}-1)(q^{6g-8}-1)(q^{4g-5}-1)} \\ & \cdot \left( q^{\alpha(6g-8)-1}(q^{6g-7}-1)(q^{2g}+1) -q^{\alpha(6g-8)+2g-4}(q^2-1)(q^{4g-3}+1) \right. \\ & \left. +q^{\alpha(2g-3)-1}(q^2+q+1)(q^{2g-1}-1)(q^{6g-8}-1)+(q+1)(q^{8g-10}-1)+q^{2g-4}(q^4+1)(q^{4g-5}-1) \right) . \end{split}$$In particular, $A_{Q,2,\alpha}$ and $A_{Q,3,\alpha}$ are polynomials and $A_{Q,2,\alpha}$ has non-negative coefficients.* Although it is not clear from the formula above that $A_{Q,3,\alpha}\in\mathbb{Z}_{\geq0}[q]$, we check with a computer that this is true for small values of $\alpha$ and $g$. This provides some evidence that $A_{Q,\mathbf{r},\alpha}$ is a polynomial with non-negative coefficients for all $\mathbf{r}\in\mathbb{Z}_{\geq0}^{Q_{0}}$ and $\alpha\geq1$. We record this in the following: **Conjecture 9**. *Let $Q$ be a quiver, $\mathbf{r}\in\mathbb{Z}_{\geq0}^{Q_{0}}$ and $\alpha\geq1$. Then $A_{Q,\mathbf{r},\alpha}\in\mathbb{Z}_{\geq0}[q]$ .* #### Plan of the paper {#plan-of-the-paper .unnumbered} In Section [\[Sect/QuiverRep\]](#Sect/QuiverRep){reference-type="ref" reference="Sect/QuiverRep"}, we introduce basic facts on locally free representations of quivers in higher depth, generalising well-known results over base fields, such as Krull-Schmidt decomposition and Galois descent techniques. We then set notations concerning plethystic formulas and graphs in Sections [\[Sect/Plethysm\]](#Sect/Plethysm){reference-type="ref" reference="Sect/Plethysm"} and [\[Sect/Graph\]](#Sect/Graph){reference-type="ref" reference="Sect/Graph"}. In Section [\[Sect/SRrings\]](#Sect/SRrings){reference-type="ref" reference="Sect/SRrings"}, we collect relevant facts on Stanley-Reisner rings and their Hilbert series, used in the proof of Theorem [Theorem 4](#Thm/IntroPositivityA){reference-type="ref" reference="Thm/IntroPositivityA"}. Our last prerequisites concern mixed Hodge structure on equivariant cohomology of algebraic varieties and their relation to point-counting over finite fields. We also collect computational tools for the proof of Theorem [Theorem 7](#Thm/IntroCohIntgr){reference-type="ref" reference="Thm/IntroCohIntgr"}. This is done in Section [\[Sect/EqCoh\]](#Sect/EqCoh){reference-type="ref" reference="Sect/EqCoh"}. We prove Theorems [Theorem 1](#Thm/IntroExpFmlKacPol){reference-type="ref" reference="Thm/IntroExpFmlKacPol"}, [Theorem 3](#Thm/IntroCountMomMapGenFib){reference-type="ref" reference="Thm/IntroCountMomMapGenFib"} in Section [\[Sect/KacPolvsMomMap\]](#Sect/KacPolvsMomMap){reference-type="ref" reference="Sect/KacPolvsMomMap"}, as well as Corollary [Corollary 2](#Cor/IntroRelAvsB){reference-type="ref" reference="Cor/IntroRelAvsB"}. Section [\[Section/PosPurity\]](#Section/PosPurity){reference-type="ref" reference="Section/PosPurity"} is devoted to the proof of our positivity and purity results in the toric setting. 
Theorem [Theorem 4](#Thm/IntroPositivityA){reference-type="ref" reference="Thm/IntroPositivityA"} is proved in Section [\[Sect/PosA\]](#Sect/PosA){reference-type="ref" reference="Sect/PosA"}, Theorem [Theorem 5](#Thm/IntroPosToricKacPol){reference-type="ref" reference="Thm/IntroPosToricKacPol"} in Section [\[Sect/PosToricKacPol\]](#Sect/PosToricKacPol){reference-type="ref" reference="Sect/PosToricKacPol"}, whereas Section [\[Sect/CohPreprojStack\]](#Sect/CohPreprojStack){reference-type="ref" reference="Sect/CohPreprojStack"} contains the proofs of Theorems [Theorem 6](#Thm/IntroCohMomMapGenFib){reference-type="ref" reference="Thm/IntroCohMomMapGenFib"} and [Theorem 7](#Thm/IntroCohIntgr){reference-type="ref" reference="Thm/IntroCohIntgr"}. Finally, we discuss higher-rank counts and Proposition [Proposition 8](#Prop/IntroKacPolLowRk){reference-type="ref" reference="Prop/IntroKacPolLowRk"} in Section [\[Sect/HigherRk\]](#Sect/HigherRk){reference-type="ref" reference="Sect/HigherRk"}. #### Acknowledgements {#acknowledgements .unnumbered} I would like to warmly thank Dimitri Wyss for his constant support throughout this project and Emmanuel Letellier for helpful discussions and for suggesting that I look into [@AMRV22]. I would also like to thank Ben Davison, Tamás Hausel, Anton Mellit, Olivier Schiffmann and Sebastian Schlegel-Mejia for helpful conversations. This work was supported by the Swiss National Science Foundation \[No. 196960\]. # Preliminaries ## Quiver representations in higher depth [\[Sect/QuiverRep\]]{#Sect/QuiverRep label="Sect/QuiverRep"} In this section, we set notations for locally free representations of quivers over a base ring (typically $\mathbb{F}_{q}[t]/(t^{\alpha})$) and their moduli. We also recall and adapt to our setting a few well-known results on endomorphism rings of quiver representations and their relation with quiver moment maps. #### Locally free representations of quivers {#locally-free-representations-of-quivers .unnumbered} A quiver is the datum $Q=(Q_{0},Q_{1},s,t)$ of a set of vertices $Q_{0}$, a set of arrows $Q_{1}$ and maps $s,t:Q_{1}\rightarrow Q_{0}$ which assign to an arrow $a\in Q_{1}$ its source $s(a)$ and target $t(a)$ respectively. A quiver may have parallel edges and loops. Consider a base ring $R$. A representation $M$ of $Q$ over $R$ is the datum of $R$-modules $M_{i},\ i\in Q_{0}$ and $R$-linear maps $f_{a}:M_{s(a)}\rightarrow M_{t(a)},\ a\in Q_{1}$. A morphism of representations $\varphi:M\rightarrow M'$ is the datum of $R$-linear maps $\varphi_{i}:M_{i}\rightarrow M'_{i}$ such that $f'_{a}\circ\varphi_{s(a)}=\varphi_{t(a)}\circ f_{a}$ for all $a\in Q_{1}$. A representation $M$ is called locally free if all $M_{i},\ i\in Q_{0}$ are free $R$-modules. If $M_{i}$ is finitely generated for every $i\in Q_{0}$, we call $\mathbf{r}:=(\mathop{\mathrm{rk}}(M_{i}))_{i\in Q_{0}}$ the rank vector of $M$. In this paper, we will always work with locally free representations of finite rank. Finally, we recall the definition of the Euler form of $Q$:$$\begin{array}{cccc} \langle\bullet,\bullet\rangle: & \mathbb{Z}^{Q_0}\times\mathbb{Z}^{Q_0} & \rightarrow & \mathbb{Z} \\ & (\mathbf{d},\mathbf{e}) & \mapsto & \sum_{i\in Q_0}d_ie_i-\sum_{a:i\rightarrow j}d_ie_j. \end{array}$$ #### Moduli of quiver representations in higher depth {#moduli-of-quiver-representations-in-higher-depth .unnumbered} Let us now fix a quiver $Q$ and set $R=\mathcal{O}_{\alpha}:=\mathbb{K}[t]/(t^{\alpha})$, where $\alpha\geq1$ and $\mathbb{K}$ is a field. 
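When $\mathbb{K}=\mathbb{F}_{q}$, the ring $\mathcal{O}_{\alpha}$ is finite and so is the group $\mathop{\mathrm{GL}}(r,\mathcal{O}_{\alpha})$ of invertible $r\times r$ matrices over it: reduction modulo $t$ is surjective onto $\mathop{\mathrm{GL}}(r,\mathbb{F}_{q})$ with kernel of cardinality $q^{(\alpha-1)r^{2}}$, so that $\sharp\mathop{\mathrm{GL}}(r,\mathcal{O}_{\alpha})=q^{\alpha r^{2}}\prod_{j=1}^{r}(1-q^{-j})$. The following minimal sketch (plain Python, brute force over tiny parameters; the encoding and helper names are ours and purely illustrative) checks this count for $q=2$, $r=2$, $\alpha=2$:

```python
from itertools import product

q, r, alpha = 2, 2, 2   # elements of O_alpha = F_2[t]/(t^2) are encoded as pairs (a0, a1)

def det_mod_t(M):
    # determinant of the reduction of M modulo t, over F_q (hard-coded for r = 2)
    a, b, c, d = M[0][0][0], M[0][1][0], M[1][0][0], M[1][1][0]
    return (a * d - b * c) % q

# A square matrix over the local ring O_alpha is invertible iff its reduction mod t is.
elements = list(product(range(q), repeat=alpha))          # all q^alpha elements of O_alpha
count = 0
for flat in product(elements, repeat=r * r):              # all r x r matrices over O_alpha
    M = [[flat[i * r + j] for j in range(r)] for i in range(r)]
    if det_mod_t(M) != 0:
        count += 1

expected = q ** (alpha * r * r)
for j in range(1, r + 1):
    expected *= 1 - q ** (-j)
print(count, round(expected))   # both equal 96
```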
Similarly to the case where the base ring is a field (i.e. $\alpha=1$), locally free representations of quivers of a given rank vector are parametrised by a quotient stack (see [@Rei08a §3] for the corresponding group action). Let us define:$$R(Q,\mathbf{r},\mathcal{O}_{\alpha}):= \prod_{\substack{a\in Q_1 \\ a:i\rightarrow j}} \mathop{\mathrm{Hom}}_{\mathcal{O}_{\alpha}}(\mathcal{O}_{\alpha}^{\oplus r_i},\mathcal{O}_{\alpha}^{\oplus r_j}),$$ $$\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha}):=\prod_{i\in Q_0}\mathop{\mathrm{GL}}(r_i,\mathcal{O}_{\alpha}),$$ which we view respectively as an algebraic variety and an algebraic group over $\mathbb{K}$. Then $\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})$ acts on $R(Q,\mathbf{r},\mathcal{O}_{\alpha})$ similarly to the case where $\alpha=1$ and the quotient stack $\left[R(Q,\mathbf{r},\mathcal{O}_{\alpha})/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\right]$ parametrises locally free representations of $Q$ of rank vector $\mathbf{r}$. *Remark 10*. One can easily check that $R(Q,\mathbf{r},\mathcal{O}_{\alpha})$ and $\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})$ are the $\alpha$-th jet spaces of $R(Q,\mathbf{r},\mathbb{K})$ and $\mathop{\mathrm{GL}}(\mathbf{r},\mathbb{K})$ respectively. The above action can be obtained by extending to jet spaces the classical action of $\mathop{\mathrm{GL}}(\mathbf{r},\mathbb{K})$ on $R(Q,\mathbf{r},\mathbb{K})$ (the construction of jet spaces is functorial - see [@CLNS18 Ch. 3]). #### Moment map {#moment-map .unnumbered} The action of $\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})$ on $R(Q,\mathbf{r},\mathcal{O}_{\alpha})$ also extends to its cotangent bundle and gives rise to a moment map $\mu_{Q,\mathbf{r},\alpha}:R(\overline{Q},\mathbf{r},\mathcal{O}_{\alpha})\rightarrow\mathfrak{gl}(\mathbf{r},\mathcal{O}_{\alpha})$, where $\overline{Q}$ is the double quiver of $Q$. Let us first clarify how $R(\overline{Q},\mathbf{r},\mathcal{O}_{\alpha})$ is identified to $\mathrm{T}^{*}R(Q,\mathbf{r},\mathcal{O}_{\alpha})$ - see also [@HWW18 §6.1.]. The ring $\mathcal{O}_{\alpha}$ admits the following non-degenerate, $\mathbb{K}$-linear pairing. For $a(t),b(t)\in\mathcal{O}_{\alpha}$, define:$$\left( a(t)=\sum_ka_k\cdot t^k \ ; \ b(t)=\sum_kb_k\cdot t^k \right) \mapsto a(t)* b(t):=\sum_ka_kb_{\alpha-1-k}.$$ $a(t)*b(t)$ can be seen as the trace of the $\mathbb{K}$-linear map $\pi\circ(a(t)\cdot b(t))\circ\iota$, where $a(t)\cdot b(t)$ stands for the multiplication map, while $\iota:\mathbb{K}\hookrightarrow\mathcal{O}_{\alpha}$ and $\pi:\mathcal{O}_{\alpha}\rightarrow\mathbb{K}$ are defined respectively by $\iota(1)=1$ and $\pi(t^{k})=\delta_{k,\alpha-1}$. Generalizing this, we can then identify the dual $\mathbb{K}$-vector space $\mathop{\mathrm{Hom}}_{\mathcal{O}_{\alpha}}(\mathcal{O}_{\alpha}^{\oplus n},\mathcal{O}_{\alpha}^{\oplus m})^{\vee}$ to $\mathop{\mathrm{Hom}}_{\mathcal{O}_{\alpha}}(\mathcal{O}_{\alpha}^{\oplus m},\mathcal{O}_{\alpha}^{\oplus n})$ using the following trace pairing. Let $\iota:\mathbb{K}^{\oplus n}\hookrightarrow\mathcal{O}_{\alpha}^{\oplus n}$ and $\pi:\mathcal{O}_{\alpha}^{\oplus n}\twoheadrightarrow\mathbb{K}^{\oplus n}$ be $\mathbb{K}$-linear maps which induce isomorphisms of $\mathcal{O}_{\alpha}$-modules $\mathbb{K}^{\oplus n}\otimes_{\mathbb{K}}\mathcal{O}_{\alpha}\simeq\mathcal{O}_{\alpha}^{\oplus n}$ and $t^{\alpha-1}\mathcal{O}_{\alpha}^{\oplus n}\hookrightarrow\mathcal{O}_{\alpha}^{\oplus n}\twoheadrightarrow\mathbb{K}^{\oplus n}$. 
Then define:$$\begin{array}{ccccc} \mathop{\mathrm{Hom}}_{\mathcal{O}_{\alpha}}(\mathcal{O}_{\alpha}^{\oplus n},\mathcal{O}_{\alpha}^{\oplus m}) & \times & \mathop{\mathrm{Hom}}_{\mathcal{O}_{\alpha}}(\mathcal{O}_{\alpha}^{\oplus m},\mathcal{O}_{\alpha}^{\oplus n}) & \rightarrow & \mathbb{K}\\ A & ; & B & \mapsto & A*B:=\mathop{\mathrm{Tr}}\left(\pi\circ A\circ B\circ\iota\right) \end{array} .$$ This can be shown to be independent from the choice of $\iota$ and $\pi$. More concretely, seeing $A$ (resp. $B$) as an $m\times n$ (resp. $n\times m$) matrix $A(t)$ (resp. $B(t)$) with coefficients in $\mathcal{O}_{\alpha}$, $A*B$ is the coefficient of $t^{\alpha-1}$ in $\mathop{\mathrm{Tr}}(A(t)\times B(t))$. Using the trace pairing, we obtain the following identification: $$\begin{split} \mathrm{T}^*R(Q,\mathbf{r},\mathcal{O}_{\alpha}) & = \prod_{\substack{a\in Q_1 \\ a:i\rightarrow j}} \left( \mathop{\mathrm{Hom}}_{\mathcal{O}_{\alpha}}(\mathcal{O}_{\alpha}^{\oplus r_i},\mathcal{O}_{\alpha}^{\oplus r_j}) \times \mathop{\mathrm{Hom}}_{\mathcal{O}_{\alpha}}(\mathcal{O}_{\alpha}^{\oplus r_i},\mathcal{O}_{\alpha}^{\oplus r_j})^{\vee} \right) \\ & \simeq \prod_{\substack{a\in Q_1 \\ a:i\rightarrow j}} \left( \mathop{\mathrm{Hom}}_{\mathcal{O}_{\alpha}}(\mathcal{O}_{\alpha}^{\oplus r_i},\mathcal{O}_{\alpha}^{\oplus r_j}) \times \mathop{\mathrm{Hom}}_{\mathcal{O}_{\alpha}}(\mathcal{O}_{\alpha}^{\oplus r_j},\mathcal{O}_{\alpha}^{\oplus r_i}) \right) = R(\overline{Q},\mathbf{r},\mathcal{O}_{\alpha}), \end{split}$$ where $\overline{Q}$ is the double quiver of $Q$ i.e. the quiver obtained by adding to $Q$ an arrow $a^{*}:j\rightarrow i$ for each arrow $Q_{1}\ni a:i\rightarrow j$. Likewise, using the trace pairing, we obtain an isomorphism $\mathfrak{gl}(\mathbf{r},\mathcal{O}_{\alpha})^{\vee}\simeq\mathfrak{gl}(\mathbf{r},\mathcal{O}_{\alpha})$. Under these identifications, the moment map $\mu_{Q,\mathbf{r},\alpha}$ has the following expression, which is similar to the case where $\alpha=1$. We denote a point of $R(\overline{Q},\mathbf{r},\mathcal{O}_{\alpha})$ by $(x_{a},y_{a})_{a\in Q_{1}}$, where $x_{a}\in\mathop{\mathrm{Hom}}_{\mathcal{O}_{\alpha}}(\mathcal{O}_{\alpha}^{\oplus r_{s(a)}},\mathcal{O}_{\alpha}^{\oplus r_{t(a)}})$ and $y_{a}\in\mathop{\mathrm{Hom}}_{\mathcal{O}_{\alpha}}(\mathcal{O}_{\alpha}^{\oplus r_{t(a)}},\mathcal{O}_{\alpha}^{\oplus r_{s(a)}})$. Then:$$\begin{array}{cccc} \mu_{Q,\mathbf{r},\alpha}: & R(\overline{Q},\mathbf{r},\mathcal{O}_{\alpha}) & \rightarrow & \mathfrak{gl}(\mathbf{r},\mathcal{O}_{\alpha}) \\ & (x_{a},y_{a})_{a\in Q_{1}} & \mapsto & \left(\sum_{t(a)=i}x_ay_a-\sum_{s(a)=i}y_ax_a\right)_{i\in Q_0} \end{array} .$$ Finally, we record the following proposition, which characterises fibres of the forgetful map $\mu_{Q,\mathbf{r},\alpha}^{-1}(0)\rightarrow R(Q,\mathbf{r},\mathcal{O}_{\alpha})$, $(x,y)\mapsto x$. The proof closely follows [@CBH98 Lem. 4.2.], so we simply indicate how to generalize the main arguments. **Proposition 11**. *Let $x\in R(Q,\mathbf{r},\mathcal{O}_{\alpha})$ and $M$ the corresponding representation of $Q$. 
Then there is an exact sequence: $$\begin{tikzcd}[ampersand replacement = \&, column sep=small, row sep=large] 0 \arrow[r] \& \mathop{\mathrm{Ext}}^1(M,M)^{\vee} \arrow[r] \& R(Q^{\mathrm{op}},\mathbf{r},\mathcal{O}_{\alpha}) \arrow[r, "{\mu_{Q,\mathbf{r},\alpha}(x,\bullet)}"] \&[4em] \mathfrak{gl}(\mathbf{r},\mathcal{O}_{\alpha}) \arrow[r] \& \mathop{\mathrm{End}}(M)^{\vee} \arrow[r] \& 0, \end{tikzcd}$$where $Q^{\mathrm{op}}$ is the quiver obtained from $Q$ by reversing all arrows and $\mathfrak{gl}(\mathbf{r},\mathcal{O}_{\alpha})\rightarrow\mathop{\mathrm{End}}(M)^{\vee}$ is the dual of the inclusion $\mathop{\mathrm{End}}(M)\hookrightarrow\mathfrak{gl}(\mathbf{r},\mathcal{O}_{\alpha})$ (using the identification $\mathfrak{gl}(\mathbf{r},\mathcal{O}_{\alpha})^{\vee}\simeq\mathfrak{gl}(\mathbf{r},\mathcal{O}_{\alpha})$).* *Proof.* Let $\mathcal{O}_{\alpha}Q:=\mathcal{O}_{\alpha}\otimes_{\mathbb{K}}\mathbb{K}Q$ be the path algebra of $Q$ with coefficients in $\mathcal{O}_{\alpha}$ (see [@CB92 §1]) and denote by $e_{i}\in\mathcal{O}_{\alpha}Q$ the lazy path starting and ending at vertex $i\in Q_{0}$. The category of representations of $Q$ over $\mathcal{O}_{\alpha}$ is equivalent to $\mathcal{O}_{\alpha}Q-\mathrm{Mod}$. By [@GLS17a Cor. 7.2.], $M$ admits the following projective resolution:$$0\rightarrow \bigoplus_{\substack{a\in Q_1 \\ a:i\rightarrow j}}\mathcal{O}_{\alpha}Q\cdot e_j \otimes_{\mathcal{O}_{\alpha}}M_i\overset{f}{\longrightarrow} \bigoplus_{i\in Q_0}\mathcal{O}_{\alpha}Q\cdot e_i\otimes_{\mathcal{O}_{\alpha}}M_i\overset{g}{\longrightarrow} M\rightarrow0,$$ where, for $\rho=a_{1}\ldots a_{s}$ a path in $Q$, $f(\rho e_{t(a)}\otimes m)=\rho ae_{s(a)}\otimes m-\rho e_{t(a)}\otimes f_{a}(m)$ and $g(\rho e_{i}\otimes m)=f_{\rho}(m)$ (here $f_{\rho}$ is a shorthand for $f_{a_{1}}\circ\ldots\circ f_{a_{s}}$). Indeed, in the notation of [@GLS17a], $\mathcal{O}_{\alpha}Q\simeq H(C,D,\Omega)$, where $C$ is the symmetric Cartan matrix associated to $Q$, $D=\alpha\cdot\mathrm{Id}$ and $\Omega$ is the orientation of $Q$. The wanted exact sequence is then obtained by applying the functor $\mathop{\mathrm{Hom}}_{\mathcal{O}_{\alpha}Q}(\bullet,M)$ and dualizing. ◻ #### Krull-Schmidt theorem and absolutely indecomposable representations {#krull-schmidt-theorem-and-absolutely-indecomposable-representations .unnumbered} Let us now assume that $\mathbb{K}=\mathbb{F}_{q}$. We collect some standard results on absolutely indecomposable representations and their rings of endomorphisms. We also check that standard Galois descent techniques apply to $\mathop{\mathrm{Rep}}_{\mathcal{O}_{\alpha}}(Q)^{\mathrm{l.f.}}$, the category of finite-rank, locally free representations of $Q$ over $\mathcal{O}_{\alpha}$, following [@Moz19]. We start with some definitions. Let $M$ be a locally free representation of $Q$ over $\mathcal{O}_{\alpha}$. $M$ is called indecomposable if it cannot be split into a direct sum of non-zero (locally free[^3]) subrepresentations. Given a field extension $\mathbb{K}'/\mathbb{K}$, we denote by $M_{\mathbb{K}'}=M\otimes_{\mathbb{K}}\mathbb{K}'$ the representation obtained from $M$ by base change i.e. $\left(M_{\mathbb{K}'}\right)_{i}=M_{i}\otimes_{\mathbb{K}}\mathbb{K}'$ for $i\in Q_{0}$ and $\left(f_{\mathbb{K}'}\right)_{a}=f_{a}\otimes\mathrm{id}_{\mathbb{K}'}$ for $a\in Q_{1}$. Let $\overline{\mathbb{K}}$ be an algebraic closure of $\mathbb{K}$. A locally free representation $M$ is called absolutely indecomposable if $M_{\overline{\mathbb{K}}}$ is indecomposable. 
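To fix ideas, here is a standard illustration (not used later), for the one-loop quiver and $\alpha=1$, i.e. $\mathcal{O}_{\alpha}=\mathbb{F}_{q}$: let $M=\mathbb{F}_{q}^{2}$ with the loop acting by the companion matrix of an irreducible quadratic $P\in\mathbb{F}_{q}[x]$. Then $$\mathop{\mathrm{End}}(M)\simeq\mathbb{F}_{q}[x]/(P)\simeq\mathbb{F}_{q^{2}}$$ is a field, so $M$ is indecomposable; on the other hand, $P$ has two distinct roots in $\mathbb{F}_{q^{2}}$, so $M_{\overline{\mathbb{K}}}$ splits into two eigenlines and $M$ is not absolutely indecomposable (in the notation introduced below, $\mathrm{top}(\mathop{\mathrm{End}}(M))=\mathbb{F}_{q^{2}}\neq\mathbb{F}_{q}$).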
In this paper, we use well-known properties of absolutely indecomposable representations, which rely on standard Galois descent arguments. These arguments were formalised by Mozgovoy in [@Moz19 §3] through the notion of a linear stack. In order to apply his results, we show that the assignment $\mathbb{K}'\mapsto\mathop{\mathrm{Rep}}_{\mathcal{O}_{\alpha}\otimes\mathbb{K}'}(Q)^{\mathrm{l.f.}}$ gives a linear stack on the small étale site over $\mathbb{K}$. We check that Galois descent holds for objects and morphisms. Descent for objects follows from the fact that $\left[R(Q,\mathbf{r},\mathcal{O}_{\alpha})/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\right]$ is an Artin stack. Note that $\left[R(Q,\mathbf{r},\mathcal{O}_{\alpha})/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\right](\mathbb{K}')$ is equivalent to the groupoid underlying $\mathop{\mathrm{Rep}}_{\mathcal{O}_{\alpha}\otimes\mathbb{K}'}(Q)^{\mathrm{l.f.}}$ because, by Lang's theorem, all principal $\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})$-bundles over $\mathop{\mathrm{Spec}}(\mathbb{K}')$ are trivial. Descent for morphisms follows from the following Galois-equivariant isomorphism: given a field extension $\mathbb{K}''/\mathbb{K}'$ and $M,N\in\mathop{\mathrm{Rep}}_{\mathcal{O}_{\alpha}\otimes\mathbb{K}'}(Q)^{\mathrm{l.f.}}$, $\mathop{\mathrm{Hom}}(M_{\mathbb{K}''},N_{\mathbb{K}''})\simeq\mathop{\mathrm{Hom}}(M,N)\otimes_{\mathbb{K}'}\mathbb{K}''$. Then we obtain $\mathop{\mathrm{Hom}}(M,N)\simeq\mathop{\mathrm{Hom}}(M_{\mathbb{K}''},N_{\mathbb{K}''})^{\mathrm{Gal}(\mathbb{K}''/\mathbb{K}')}$ i.e. Galois descent for morphisms. Lastly, we check that $\mathop{\mathrm{Rep}}_{\mathcal{O}_{\alpha}\otimes\mathbb{K}'}(Q)^{\mathrm{l.f.}}$ is Krull-Schmidt (see [@Moz19 §2.3.]): **Proposition 12**. *$\mathop{\mathrm{Rep}}_{\mathcal{O}_{\alpha}}(Q)^{\mathrm{l.f.}}$ is a Krull-Schmidt category i.e. any object splits into a direct sum of indecomposable objects and the ring of endomorphisms of an indecomposable object is local.* *Proof.* It is clear, by induction on rank vectors, that any locally free, finite-rank representation of $Q$ over $\mathcal{O}_{\alpha}$ can be decomposed into a direct sum of indecomposable representations (which are also locally free, as mentioned above). Let $M\in\mathop{\mathrm{Rep}}_{\mathcal{O}_{\alpha}}(Q)^{\mathrm{l.f.}}$ be indecomposable. We prove that any endomorphism of $M$ is either invertible or nilpotent. Then nilpotent elements form a maximal two-sided ideal, which is the unique maximal ideal of $\mathop{\mathrm{End}}(M)$. Let $\xi\in\mathop{\mathrm{End}}(M)$. Since $\xi$ commutes with the action of $t\in\mathcal{O}_{\alpha}$, its characteristic subspaces $M_{i}^{P},\ i\in Q_{0}$ ($P$ an irreducible polynomial over $\mathbb{F}_{q}$) are $\mathcal{O}_{\alpha}$-submodules, which are preserved by the action of arrows. Moreover, because $M_{i}=\bigoplus_{P}M_{i}^{P}$, these modules are free over $\mathcal{O}_{\alpha}$. We therefore obtain $M=\bigoplus_{P}M^{P}$ and since $M$ is indecomposable, $M=M^{P}$ for some $P$. Thus $\xi$ is either invertible ($P\ne X$) or nilpotent ($P=X$). ◻ Now, the following well-known properties follow from the general framework of [@Moz19]. Given $M\in\mathop{\mathrm{Rep}}_{\mathcal{O}_{\alpha}}(Q)^{\mathrm{l.f.}}$, its ring of endomorphisms $\mathop{\mathrm{End}}(M)$ is a finite-dimensional algebra. 
We call $\mathrm{Rad}(M)\subseteq\mathop{\mathrm{End}}(M)$ its Jacobson radical and $\mathrm{top}(\mathop{\mathrm{End}}(M)):=\mathop{\mathrm{End}}(M)/\mathrm{Rad}(M)$. **Proposition 13**. *Let $M\in\mathop{\mathrm{Rep}}_{\mathcal{O}_{\alpha}}(Q)^{\mathrm{l.f.}}$.* 1. *If $M$ is indecomposable, then $\mathrm{Rad}(M)$ consists of nilpotent elements.* 2. *$M$ is absolutely indecomposable if, and only if, $\mathrm{top}(\mathop{\mathrm{End}}(M))=\mathbb{K}$.* 3. *If $M$ is indecomposable and $\mathbf{r}$ is indivisible, then $M$ is absolutely indecomposable.* *Proof.* 1. follows from the proof of Proposition [Proposition 12](#Prop/Krull-Schmidt){reference-type="ref" reference="Prop/Krull-Schmidt"}, since $\mathrm{Rad}(M)$ is the maximal ideal of $\mathop{\mathrm{End}}(M)$. 2. is [@Moz19 Lemma 3.8.2.]. 3\. follows from [@Moz19 Thm. 3.9.1.]. Indeed, if $M$ is indecomposable, then $\mathrm{top}(\mathop{\mathrm{End}}(M))=\mathbb{K}'$ is a finite field extension of $\mathbb{K}$ and $M_{\mathbb{K}'}$ splits as a direct sum of $[\mathbb{K}':\mathbb{K}]$ Galois twists of some absolutely indecomposable representation over $\mathcal{O}_{\alpha}\otimes_{\mathbb{K}}\mathbb{K}'$, hence $[\mathbb{K}':\mathbb{K}]\vert\mathbf{r}$. If $\mathbf{r}$ is indivisible, this implies that $\mathrm{top}(\mathop{\mathrm{End}}(M))=\mathbb{K}$, hence $M$ is absolutely indecomposable by 2. ◻ ## Plethystic notations [\[Sect/Plethysm\]]{#Sect/Plethysm label="Sect/Plethysm"} In this section, we introduce the $\lambda$-ring formalism required to formulate Theorem [Theorem 1](#Thm/IntroExpFmlKacPol){reference-type="ref" reference="Thm/IntroExpFmlKacPol"}, again following [@Moz19]. In particular, we define the plethystic exponential and recall a well-known formula for counting objects with endomorphisms in $\mathop{\mathrm{Rep}}_{\mathcal{O}_{\alpha}}(Q)^{\mathrm{l.f.}}$. We assume the ground field to be $\mathbb{K}=\mathbb{F}_{q}$. Let us first define $\mathcal{V}$, the $\lambda$-ring of counting functions (see [@Moz19 §2.1-2.]). As a ring, $\mathcal{V}=\prod_{n\geq1}\mathbb{Q}$. Here are a few examples of counting functions. Given a quiver $Q$ and a rank vector $\mathbf{r}$, we define $A_{Q,\mathbf{r},\alpha}:=\left(A_{Q,\mathbf{r},\alpha}(q^{n})\right)_{n\geq1}\in\mathcal{V}$, where $A_{Q,\mathbf{r},\alpha}(q^{n})$ is the number of isomorphism classes of absolutely indecomposable, locally free representations of $Q$ over $\mathbb{F}_{q^{n}}[t]/(t^{\alpha})$, in rank $\mathbf{r}$. If $X$ is an algebraic variety over $\mathbb{F}_{q}$, then $\sharp X:=\left(\sharp_{\mathbb{F}_{q^{n}}}X\right)_{n\geq1}\in\mathcal{V}$ is also a counting function. A last, easy example is given by rational fractions: if $R\in\mathbb{Q}(T)$ does not vanish on powers of $q$, then we obtain a counting function $R(q):=\left(R(q^{n})\right)_{n\geq1}\in\mathcal{V}$. The $\lambda$-ring structure on $\mathcal{V}$ is given by the Adams operators $\psi_{m},\ m\geq1$, defined by: $$\psi_m(a)=(a_{mn})_{n\geq1},\text{ where } a=(a_n)_{n\geq1}.$$ Our main counting results are expressed in terms of power series. 
For instance, we consider the following power series, which encodes all counting functions $A_{Q,\mathbf{r},\alpha}$ at once: $$\sum_{\mathbf{r}\in\mathbb{N}^{Q_0}\setminus\{0\}}A_{Q,\mathbf{r},\alpha}\cdot t^{\mathbf{r}}\in\mathcal{V}[[t_i,\ i\in Q_0]].$$ The power series ring $\mathcal{V}[[t_{i},\ i\in Q_{0}]]$ can also be endowed with a $\lambda$-ring structure, given by the Adams operators: $$\psi_m\left(\sum_{\mathbf{r}\in\mathbb{N}^{Q_0}}f_{\mathbf{r}}\cdot t^{\mathbf{r}}\right):=\sum_{\mathbf{r}\in\mathbb{N}^{Q_0}}\psi_m(f_{\mathbf{r}})\cdot t^{m\mathbf{r}}.$$ Let $\mathcal{V}[[t_{i},\ i\in Q_{0}]]_{+}=\{f_{0}=0\}\subset\mathcal{V}[[t_{i},\ i\in Q_{0}]]$ be the augmentation ideal. The plethystic exponential is the operator $\mathop{\mathrm{Exp}}_{q,t}:\mathcal{V}[[t_{i},\ i\in Q_{0}]]_{+}\rightarrow\mathcal{V}[[t_{i},\ i\in Q_{0}]]$ defined by:$$\mathop{\mathrm{Exp}}_{q,t}(F)=\exp\left(\sum_{m\geq1}\frac{\psi_m(F)}{m}\right).$$ The following result is a direct application of [@Moz19 Thm. 4.6.], on the weighted volume of the stack of objects with endomorphisms (associated to a linear stack). **Proposition 14**. *Let $Q$ be a quiver and $\alpha\geq1$. Let us define:$$\mathop{\mathrm{vol}}_{\mathop{\mathrm{End}}}(\mathop{\mathrm{Rep}}_{\mathcal{O}_{\alpha}}(Q,\mathbf{r})^{\mathrm{l.f.}}) = \left( \sum_{[M]} \frac{\sharp\mathop{\mathrm{End}}(M)}{\sharp\mathop{\mathrm{Aut}}(M)} \right)_{n\geq1}\in\mathcal{V},$$where the sum runs over isomorphism classes of $\mathop{\mathrm{Rep}}_{\mathcal{O}_{\alpha}\otimes\mathbb{F}_{q^{n}}}(Q,\mathbf{r})^{\mathrm{l.f.}}$. Then:$$\sum_{\mathbf{r}\in\mathbb{N}^{Q_0}} \mathop{\mathrm{vol}}_{\mathop{\mathrm{End}}}(\mathop{\mathrm{Rep}}_{\mathcal{O}_{\alpha}}(Q,\mathbf{r})^{\mathrm{l.f.}})\cdot t^{\mathbf{r}} = \mathop{\mathrm{Exp}}_{q,t}\left(\sum_{\mathbf{r}\in\mathbb{N}^{Q_0}\setminus\{0\}}\frac{A_{Q,\mathbf{r},\alpha}}{1-q^{-1}}\cdot t^{\mathbf{r}}\right).$$* ## Graph-theoretic notations [\[Sect/Graph\]]{#Sect/Graph label="Sect/Graph"} In this section, we set notations for the operations on quivers which appear in the proofs of our main counting results. These include restricting to subquivers, contracting and deleting arrows. We also recall basic notions on Betti numbers of graphs and spanning trees. Let $Q$ be a quiver. We define subquivers of $Q$ obtained by restricting to a subset of vertices or arrows of $Q$. **Definition 15**. Let $I\subseteq Q_{0}$ be a subset of vertices and $J\subseteq Q_{1}$ be a subset of arrows of $Q$. 1. $Q\restriction_{I}$ is the quiver with set of vertices $I$ and set of arrows $Q_{1,I}:=\{a\in Q_{1}\ \vert\ s(a),t(a)\in I\}$. The source and target maps of $Q\restriction_{I}$ are obtained from those of $Q$ by restriction. 2. $Q\restriction_{J}$ is the quiver with set of vertices $Q_{0}$ and set of arrows $J$. The source and target maps of $Q\restriction_{J}$ are obtained from those of $Q$ by restriction. We also define quivers obtained from $Q$ by contracting or deleting an arrow. **Definition 16**. Let $a\in Q_{1}$ be an arrow of $Q$. Consider the equivalence relation $\sim_{a}$ on $Q_{0}$, whose equivalence classes are $\{s(a),t(a)\}$ and the singletons $\{i\},\ i\in Q_{0}\setminus\{s(a),t(a)\}$. Given a vertex $i\in Q_{0}$, we denote by $[i]$ the equivalence class of $i$ under $\sim_{a}$. 1. $Q/a$ is the quiver obtained from $Q$ by contracting $a$ i.e. its set of vertices is $Q_{0}/\sim_{a}$ and its set of arrows is $(Q/a)_{1}:=\{[s(b)]\rightarrow[t(b)],\ b\in Q_{1}\setminus\{a\}\}$. 
The source and target maps are obtained from those of $Q$ by composing with $Q_{0}\rightarrow Q_{0}/\sim_{a}$. 2. $Q\setminus a$ is the quiver obtained from $Q$ by deleting $a$ i.e. its set of vertices is $Q_{0}$ and its set of arrows is $(Q\setminus a)_{1}:=Q_{1}\setminus\{a\}$. The source and target maps are obtained from those of $Q$ by restriction. Likewise, given $\lambda\in\mathbb{Z}^{Q_{0}}$, we define: 1. $\lambda/a\in\mathbb{Z}^{(Q/a)_{0}}$ is given by $$(\lambda/a)_i= \left\{ \begin{array}{ll} \lambda_{s(a)}+\lambda_{t(a)} & ,\ i=[s(a)]=[t(a)] \\ \lambda_i & ,\ i\not\sim_a s(a),t(a) \end{array} \right. .$$ 2. $\lambda\setminus a\in\mathbb{Z}^{(Q\setminus a)_{0}}=\mathbb{Z}^{Q_{0}}$ is simply given by $\lambda$. We will also need a basic fact on the Betti number of a graph. Given a graph $\Gamma$ (not necessarily oriented) with $C$ connected components, $V$ vertices and $E$ edges, the Betti number of $\Gamma$ is $b(\Gamma):=C-V+E$. This should be thought of as the number of independent cycles of $\Gamma$. Given a quiver $Q$, we define its Betti number $b(Q)$ as the Betti number of the underlying graph. We also denote by $c(Q)$ the number of connected components of $Q$. Recall that a quiver $Q$ (or its underlying graph $\Gamma$) is said to be 2-connected if $\Gamma$ is connected and removing one edge from $\Gamma$ does not disconnect it. **Proposition 17**. *Let $Q$ be a quiver.* 1. *If $Q$ is 2-connected, then $b(Q)>0$.* 2. *Given $Q_{0}=I_{1}\sqcup\ldots\sqcup I_{s}$ a partition of the set of vertices and $Q'$ the quiver obtained from $Q$ by contracting the arrows of subquivers $Q\restriction_{I_{1}},\ldots,Q\restriction_{I_{s}}$, $b(Q)=b(Q')+b(Q\restriction_{I_{1}})+\ldots+b(Q\restriction_{I_{s}})$.* 3. *$Q$ is 2-connected if, and only if, for all partitions $Q_{0}=I_{1}\sqcup\ldots\sqcup I_{s}$, $b(Q)>b(Q\restriction_{I_{1}})+\ldots+b(Q\restriction_{I_{s}})$.* *Proof.* 1. By contraposition (and leaving out the trivial case where $Q$ is not connected), suppose that $b(Q)=0$ and $Q$ is connected. Then $\sharp Q_{1}=\sharp Q_{0}-1$, which is the minimum number of arrows required to get a connected quiver with $\sharp Q_{0}$ vertices. Hence removing one arrow disconnects $Q$ i.e. it is not 2-connected. 2\. This follows from a straightforward computation. Suppose that $Q$ (resp. $Q\restriction_{I_{k}}$) has $C$ (resp. $C_{k}$) connected components, $V$ (resp. $V_{k}$) vertices and $A$ (resp. $A_{k}$) arrows. Then $Q'$ has $C$ connected components, $C_{1}+\ldots+C_{s}$ vertices and $A-A_{1}-\ldots-A_{s}$ arrows. 3\. Suppose that $Q$ is 2-connected. Let $Q_{0}=I_{1}\sqcup\ldots\sqcup I_{s}$ be a partition of the set of vertices and $Q'$ be the quiver obtained from $Q$ by contracting the arrows of subquivers $Q\restriction_{I_{1}},\ldots,Q\restriction_{I_{s}}$. Then $Q'$ is also 2-connected and the desired inequality follows from 1. and 2. On the other hand, if $Q$ is not 2-connected, consider an arrow $a\in Q_{1}$ which disconnects $Q$ when removed. Then $Q\setminus a$ splits into two connected components with sets of vertices $I_{1},I_{2}$ and one can check that $b(Q)=b(Q\restriction_{I_{1}})+b(Q\restriction_{I_{2}})$. ◻ Finally, we will use labelled trees to index strata of the stack $\left[\mu_{Q,\mathbf{r},\alpha}^{-1}(0)/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\right]$, in order to compute its cohomology (when $\mathbf{r}=\underline{1}$). 
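The following minimal sketch (plain Python; the edge-list encoding and helper names are ours and purely illustrative) checks the additivity statement of Proposition 17 on a small quiver; the union-find pass also extracts a spanning forest of the underlying graph, the notion we turn to next:

```python
# Quiver encoded as a vertex list and an arrow list (source, target); ad-hoc encoding for this check.
Q0 = [0, 1, 2, 3]
Q1 = [(0, 1), (1, 0), (1, 2), (2, 3), (3, 2), (2, 2)]   # parallel arrows and a loop are allowed

def components(vertices, edges):
    """Connected components of the underlying graph: (vertex -> representative, spanning forest)."""
    rep = {v: v for v in vertices}
    def find(v):
        while rep[v] != v:
            v = rep[v]
        return v
    forest = []
    for (s, t) in edges:
        rs, rt = find(s), find(t)
        if rs != rt:
            rep[rs] = rt
            forest.append((s, t))      # arrows kept here form a spanning forest
    return {v: find(v) for v in vertices}, forest

def betti(vertices, edges):
    comp, _ = components(vertices, edges)
    return len(set(comp.values())) - len(vertices) + len(edges)   # b = C - V + E

# Proposition 17(2) for the partition Q0 = I1 u I2:
I = [{0, 1}, {2, 3}]
restricted = [[a for a in Q1 if a[0] in Ik and a[1] in Ik] for Ik in I]   # arrows of Q|_{I_k}
inner = [a for block in restricted for a in block]
comp, _ = components(Q0, inner)                                           # contraction map on vertices
Qprime0 = sorted(set(comp.values()))                                      # vertices of Q'
Qprime1 = [(comp[s], comp[t]) for (s, t) in Q1 if (s, t) not in inner]    # arrows of Q'

lhs = betti(Q0, Q1)
rhs = betti(Qprime0, Qprime1) + sum(betti(sorted(Ik), block) for Ik, block in zip(I, restricted))
print(lhs, rhs)   # both equal 3
```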
A spanning tree of a connected quiver $Q$ is a connected subquiver $Q\restriction_{J}$, where $J\subseteq Q_{1}$ has cardinality $\sharp Q_{0}-1$. In other words, $b(Q\restriction_{J})=0$, which means that $Q\restriction_{J}$ has no cycles i.e. it is a tree. **Definition 18**. A valued spanning tree $T$ of $Q$ is the datum of a spanning tree $Q\restriction_{J}$ ($J\subseteq Q_{1}$), along with a labeling $v:J\rightarrow\mathbb{Z}$. The above labeling will be used to keep track of valuations $\mathop{\mathrm{val}}(x_{a}),\ a\in J$ of a quiver representation $x\in R(Q,\underline{1},\mathcal{O}_{\alpha})$. ## Stanley-Reisner rings and their Hilbert series [\[Sect/SRrings\]]{#Sect/SRrings label="Sect/SRrings"} In this section, we recall some results on Stanley-Reisner rings of simplicial complexes and their Hilbert series. We focus on shellable and Cohen-Macaulay simplicial complexes, as their Hilbert series enjoy positivity properties that we exploit in our proof of Theorem [Theorem 4](#Thm/IntroPositivityA){reference-type="ref" reference="Thm/IntroPositivityA"}. More specifically, we state shellability results by Björner on order complexes of posets [@Bjo80]. We refer to [@Sta07] for more details on Hilbert series of simplicial complexes. Let us first define simplicial complexes and their Stanley-Reisner rings. A(n abstract) simplicial complex $\Delta$ on a vertex set $V=\{x_{1},\ldots,x_{n}\}$ is a collection of subsets of $V$ (called faces of $\Delta$) such that (i) for all $x\in V$, $\{x\}\in\Delta$ and (ii) for any $F\in\Delta$, $G\subseteq F\Rightarrow G\in\Delta$. The dimension of a face $F\in\Delta$ is $\sharp F-1$ and faces of maximal dimension are called facets. Given a field $\mathbb{K}$, the Stanley-Reisner ring $\mathbb{K}[\Delta]$ associated to $\Delta$ is:$$\mathbb{K}[x_1,\ldots,x_n]/I_{\Delta},$$ where $I_{\Delta}$ is the ideal generated by $x_{i_{1}}\ldots x_{i_{r}}$, for $i_{1}<\ldots<i_{r}$ and $\{i_{1},\ldots,i_{r}\}\notin\Delta$. Note that a $\mathbb{K}$-basis of $\mathbb{K}[\Delta]$ is given by monomials $x^{\mathbf{m}}=x_{1}^{m_{1}}\ldots x_{n}^{m_{n}}$ whose support (i.e. $\mathop{\mathrm{supp}}(x^{\mathbf{m}}):=\{1\leq i\leq n\ \vert\ m_{i}\ne0\}$) is a face of $\Delta$. Given a $\mathbb{Z}_{\geq0}$-grading of $\mathbb{K}[\Delta]$, one can consider its Hilbert series. We will consider $\mathbb{Z}_{\geq0}$-gradings for which the Hilbert series is a specialisation of the fine Hilbert series $\mathop{\mathrm{Hilb}}_{\Delta}\in\mathbb{K}[[u_{1},\ldots,u_{n}]]$, associated to the natural $\mathbb{Z}_{\geq0}^{n}$-grading of $\mathbb{K}[\Delta]$ given by $\deg(x_{1}^{m_{1}},\ldots x_{n}^{m_{n}})=(m_{1},\ldots,m_{n})$. The fine Hilbert series is defined as:$$\mathop{\mathrm{Hilb}}_{\Delta}(u):=\sum_{\mathbf{m}\in\mathbb{Z}_{\geq0}^{n}}\dim_{\mathbb{K}}\mathbb{K}[\Delta]_{\mathbf{m}}\cdot u^{\mathbf{m}} =\sum_{F\in\Delta}\prod_{x_i\in F}\frac{u_i}{1-u_i}.$$ By [@Sta07 Ch. I, Thm. 2.3.], there exists $\mathbf{n}\in\mathbb{Z}_{\geq0}^{n}$ and $P(u)\in\mathbb{Z}[u_{1},\ldots,u_{n}]$ such that:$$\mathop{\mathrm{Hilb}}_{\Delta}(u)=u^{\mathbf{n}}\cdot\frac{P(u)}{\prod_{i=1}^n(1-u_i)}.$$ We now discuss some properties of simplicial complexes which imply that, when specialised according to any $\mathbb{Z}_{\geq0}$-grading of $\mathbb{K}[\Delta]$, $\mathop{\mathrm{Hilb}}_{\Delta}$ admits a numerator with non-negative coefficients. A simplicial complex $\Delta$ is called Cohen-Macaulay if the graded ring $\mathbb{K}[\Delta]$ is Cohen-Macaulay (see [@Sta07 Ch. I, §5] for background). 
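To fix ideas, here is a small worked example (not needed for what follows). Take $\Delta$ to be the boundary of a triangle, i.e. the complex on $V=\{x_{1},x_{2},x_{3}\}$ whose facets are the three edges. Then $I_{\Delta}=(x_{1}x_{2}x_{3})$, $\mathbb{K}[\Delta]=\mathbb{K}[x_{1},x_{2},x_{3}]/(x_{1}x_{2}x_{3})$, and the face formula gives $$\mathop{\mathrm{Hilb}}_{\Delta}(u)=1+\sum_{i}\frac{u_{i}}{1-u_{i}}+\sum_{i<j}\frac{u_{i}u_{j}}{(1-u_{i})(1-u_{j})}=\frac{1-u_{1}u_{2}u_{3}}{\prod_{i=1}^{3}(1-u_{i})}.$$ Specialising to $u_{i}=q^{-1}$ yields $\frac{1-q^{-3}}{(1-q^{-1})^{3}}=\frac{1+q^{-1}+q^{-2}}{(1-q^{-1})^{2}}$, a rational fraction whose numerator has non-negative coefficients; this complex is shellable, hence Cohen-Macaulay, and the positivity is an instance of the general results recalled below.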
Consider the $\mathbb{Z}_{\geq0}$-grading of $\mathbb{K}[\Delta]$ given by $\deg(x_{i})=d_{i}$. Then the Cohen-Macaulay property implies that the specialised Hilbert series can be presented as a rational fraction whose numerator has non-negative coefficients: **Proposition 19**. *[@Sta07 Ch. I, §5.2. - Thm. 5.10.] [\[Prop/CMHilb\]]{#Prop/CMHilb label="Prop/CMHilb"}* *In the above setting, if $\Delta$ is Cohen-Macaulay, then there exist $Q(q)\in\mathbb{Z}_{\geq0}[q]$ and $e_{1},\ldots,e_{s}\geq1$ such that:$$\mathop{\mathrm{Hilb}}_{\Delta}(q^{d_1},\ldots,q^{d_n})=\frac{Q(q)}{\prod_{i=1}^s(1-q^{e_i})}.$$* *Remark 20*. Note that, in Proposition [\[Prop/CMHilb\]](#Prop/CMHilb){reference-type="ref" reference="Prop/CMHilb"}, $Q(q)$ may not be a specialisation of $P(u)$. Indeed, such a presentation of $\mathop{\mathrm{Hilb}}_{\Delta}$ as a rational fraction depends on the choice of some elements in $\mathbb{K}[\Delta]$. In the first presentation (with numerator $P(u)$), we choose generators $x_{i},\ 1\leq i\leq n$. In the second presentation (with numerator $Q(q)$), we consider a regular sequence of homogeneous elements (of degrees $e_{i},\ 1\leq i\leq s$) in $\mathbb{K}[\Delta]$. It may be the case that generators $x_{i},\ 1\leq i\leq n$ do not form a regular sequence (the depth of $\mathbb{K}[\Delta]$ may even be smaller than $n$). See [@Sta07 Ch. I, §5] for the relation between $Q(q)$ and the choice of a regular sequence. While the Cohen-Macaulay condition has an abstract algebraic formulation in terms of $\mathbb{K}[\Delta]$, there is a combinatorial condition on $\Delta$ itself which implies that $\Delta$ is Cohen-Macaulay. A simplicial complex $\Delta$ is called shellable if: 1. $\Delta$ is pure i.e. its facets all have the same dimension $d$; 2. Facets of $\Delta$ can be ordered (let us call them $F_{1},\ldots,F_{s}$) in such a way that for $2\leq i\leq s$, $F_{i}\cap\left(\bigcup_{j=1}^{i-1}F_{j}\right)$ is a nonempty union of faces of dimension $d-1$. The sequence of facets $F_{1},\ldots,F_{s}$ is called a shelling of $\Delta$. **Proposition 21**. *[@Sta07 Ch. III, Thm. 2.5.] [\[Prop/ShellCM\]]{#Prop/ShellCM label="Prop/ShellCM"}* *If $\Delta$ is shellable, then it is Cohen-Macaulay.* There is a particular class of simplicial complexes with well-studied shellings: order complexes. Given a poset $\Pi$, the associated order complex $\mathcal{O}(\Pi)$ has vertex set $\Pi$ and its faces are chains $\{x_{1}<x_{2}<\ldots<x_{r}\}\subseteq\Pi$. In [@Bjo80], Björner studies shellings of order complexes through the stronger notion of lexicographic shellability. We recall some of his results on order complexes of lattices. Recall that a lattice is a poset in which any pair $x,y\in\Pi$ admits a greatest lower bound $x\wedge y$ and a least upper bound $x\vee y$ (see [@Bir73 Ch. I.] for background). A lattice is called modular if, for all $x,y,z\in\Pi$ such that $x\leq z$, $x\vee(y\wedge z)=(x\vee y)\wedge z$. A lattice is called graded if there exists a strictly increasing function $\rho:\Pi\rightarrow\mathbb{Z}_{\geq0}$ such that for $x,y\in\Pi$, $y$ covers[^4] $x\ \Rightarrow\ \rho(y)=\rho(x)+1$. A modular lattice is graded (see [@Bir73 Ch II, §8]). *Example 22*. Given a set $S$, let us call $\Pi(S)$ the poset of subsets of $S$, ordered by inclusion. $\Pi(S)$ is a modular lattice. Indeed, for $A,B\in\Pi(S)$, $A\wedge B=A\cap B$ and $A\vee B=A\cup B$; moreover, if $A\subseteq C$, then $A\cup(B\cap C)=(A\cup B)\cap C$. Note that $\Pi(S)$ is graded by cardinality $\rho:A\mapsto\sharp A$. 
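Continuing Example 22: for $S=\{1,2,3\}$, the order complex of the proper part $\Pi(S)\setminus\{\emptyset,S\}$ (the complex appearing in Theorem 4 when $\sharp Q_{1}=3$) has the six proper non-empty subsets as vertices and the six inclusions of a singleton into a pair as edges, i.e. it is a hexagon. A minimal sketch (plain Python, brute force; the helper names are ours and purely illustrative) computes its coarse Hilbert series $\mathop{\mathrm{Hilb}}_{\Delta}(t,\ldots,t)$ and confirms that its numerator, here $1+4t+t^{2}$, has non-negative coefficients, as the shellability results collected below predict:

```python
from itertools import combinations

S = (1, 2, 3)
# Vertices: proper non-empty subsets of S; faces of the order complex: chains under inclusion.
vertices = [frozenset(c) for k in range(1, len(S)) for c in combinations(S, k)]

def is_chain(face):
    return all(a < b or b < a for a, b in combinations(face, 2))    # pairwise comparable

faces = [F for k in range(len(vertices) + 1)
         for F in combinations(vertices, k) if is_chain(F)]
d = max(len(F) for F in faces)                                       # d = 2, so dim(Delta) = 1
f = [sum(1 for F in faces if len(F) == k) for k in range(d + 1)]
print(f)                                                             # [1, 6, 6]

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# Coarse Hilbert series: Hilb(t,...,t) = sum_k f_{k-1} t^k / (1-t)^k = numerator / (1-t)^d.
numerator = [0] * (d + 1)
for k, fk in enumerate(f):
    term = [0] * k + [fk]                  # f_{k-1} * t^k
    for _ in range(d - k):
        term = poly_mul(term, [1, -1])     # multiply by (1 - t)
    for i, c in enumerate(term):
        numerator[i] += c
print(numerator)                            # [1, 4, 1]: non-negative, as predicted
```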
We collect the following results from [@Bjo80]: **Proposition 23**. *Let $L$ be a finite lattice. If $L$ is graded by $\rho$, define, for $S\subset\mathbb{Z}_{\geq0}$, $L_{S}:=\{x\in L\ \vert\ \rho(x)\in S\}$. Then:* 1. *[@Bjo80 Thm. 3.1.] If $L$ is modular, then $\mathcal{O}(L)$ is shellable.* 2. *[@Bjo80 Thm. 4.1.] If $\mathcal{O}(L)$ is shellable, then $\mathcal{O}(L_{S})$ is shellable for any $S\subset\mathbb{Z}_{\geq0}$.* ## Equivariant cohomology and mixed Hodge structures [\[Sect/EqCoh\]]{#Sect/EqCoh label="Sect/EqCoh"} In this section, we recall elementary facts on $\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})$-equivariant cohomology of complex algebraic varieties and prove some computational lemmas. We also recall the connection between Hodge theory of algebraic varieties and point-counting over finite fields, as discussed for instance in [@HRV08 Appendix]. #### Mixed Hodge structures on $\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})$-equivariant cohomology {#mixed-hodge-structures-on-mathopmathrmglmathbfrmathcalo_alpha-equivariant-cohomology .unnumbered} A mixed Hodge structure on a $\mathbb{Q}$-vector space $V$ is the datum of an increasing filtration $W_{\bullet}\subseteq V$ (the weight filtration) and a decreasing filtration $F^{\bullet}\subseteq V_{\mathbb{C}}$ (the Hodge filtration) satisfying certain compatibility conditions (see [@PS08 Ch. 3] for background). In particular, for any $n\in\mathbb{Z}$, there is a splitting $\mathrm{gr}_{n}^{W}(V)_{\mathbb{C}}=\bigoplus_{p+q=n}V^{p,q}$ such that $F^{p}\mathrm{gr}_{n}^{W}(V)_{\mathbb{C}}=\bigoplus_{p'\geq p}V^{p',n-p'}$. By definition, the Hodge numbers of $V$ are $h^{p,q}:=\dim_{\mathbb{C}}V^{p,q}$. A mixed Hodge structure is called pure of weight $n$ if $\mathrm{gr}_{n'}^{W}(V)_{\mathbb{C}}=0$ for $n'\ne n$. Given two vector spaces $V_{1}$, $V_{2}$ endowed with mixed Hodge structures, the tensor product $V_{1}\otimes V_{2}$ also carries a mixed Hodge structure. In this paper, we will work with $\mathbb{Z}$-graded mixed Hodge structures (in all examples below, the grading will keep track of cohomological degree). A graded mixed Hodge structure $V^{\bullet}$ is called pure of weight $n$ if $V^{k}$ is pure of weight $n+k$ for all $k\in\mathbb{Z}$. Given two $\mathbb{Z}$-graded mixed Hodge structures $V_{1}^{\bullet}$, $V_{2}^{\bullet}$, we define the tensor product $(V_{1}\otimes V_{2})^{\bullet}$ as follows:$$(V_{1}\otimes V_{2})^k=\bigoplus_{k_1+k_2=k}V_1^{k_1}\otimes V_2^{k_2}.$$We let $\mathbb{Q}(n)$ be the pure Hodge structure of dimension 1 and type $(-n,-n)$, concentrated in degree 0. We also denote by $\mathbb{L}$ the graded mixed Hodge structure on $\mathrm{H}_{\mathrm{c}}^{\bullet}(\mathbb{A}^{1},\mathbb{Q})$ (see [@PS08 Ch. 4.]). It is concentrated in degree $2$ and $\mathrm{H}_{\mathrm{c}}^{2}(\mathbb{A}^{1},\mathbb{Q})$ is pure of type $(1,1)$. In other words, $\mathbb{L}=\mathbb{Q}(-1)[-2]$. Following [@Dav17a], we say that a graded mixed Hodge structure $V^{\bullet}$ is of Tate type if there exist $a_{m,n}\in\mathbb{Z}_{\geq0},\ m,n\in\mathbb{Z}$ such that $V^{\bullet}=\bigoplus_{m,n}\left(\mathbb{L}^{\otimes n}[m]\right)^{\oplus a_{m,n}}$. A natural source of graded mixed Hodge structures is the (equivariant, compactly supported) cohomology of algebraic varieties over $\mathbb{C}$. 
Given an algebraic variety $X$ acted on by a linear algebraic group $G$, the compactly supported cohomology of the quotient stack $[X/G]$ can be defined using approximations of the Borel construction $(X\times E_{G})/G$ by algebraic varieties. We will follow [@DM20 §2.2.] for the construction of $\mathrm{H}_{\mathrm{c}}^{\bullet}\left([X/G]\right)$. This involves building a suitable geometric quotient by the group $G$, which we now discuss in the case where $G=\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})$. For $r\geq1$ and $N\geq1$ sufficiently large, we consider the open subset $U_{r,N,\alpha}\subseteq\mathop{\mathrm{Hom}}_{\mathcal{O}_{\alpha}}(\mathcal{O}_{\alpha}^{\oplus r},\mathcal{O}_{\alpha}^{\oplus N})$ of injective morphisms. Viewing a point $M\in\mathop{\mathrm{Hom}}_{\mathcal{O}_{\alpha}}(\mathcal{O}_{\alpha}^{\oplus r},\mathcal{O}_{\alpha}^{\oplus N})$ as an $N\times r$ matrix with coefficients in $\mathcal{O}_{\alpha}$, $U_{r,N,\alpha}$ is defined by the condition that not all $r\times r$ minors of $M$ are zero modulo $t$. Given a subset $\Delta$ of $r$ rows, we denote by $U_{\Delta,\alpha}\subseteq U_{r,N,\alpha}$ the open subset of matrices for which the minor associated to $\Delta$ is non-zero modulo $t$. $U_{r,N,\alpha}$ is endowed with a free action of $\mathop{\mathrm{GL}}(r,\mathcal{O}_{\alpha})$ by right multiplication, for which the $U_{\Delta,\alpha}$ form a $\mathop{\mathrm{GL}}(r,\mathcal{O}_{\alpha})$-invariant open cover. We argue that this action has a geometric quotient (in the sense of [@MFK94 Def. 0.6.]), although the usual methods of invariant theory are not available ($\mathop{\mathrm{GL}}(r,\mathcal{O}_{\alpha})$ being non-reductive). **Lemma 24**. *The variety $U_{r,N,\alpha}$ admits a geometric quotient $U_{r,N,\alpha}\rightarrow\mathrm{Gr}_{r,N,\alpha}$, where $\mathrm{Gr}_{r,N,\alpha}$ is the $\alpha$-th jet space of the grassmannian $\mathrm{Gr}_{r,N}$.* *Proof.* When $\alpha=0$, $\mathrm{Gr}_{r,N}$ is the geometric quotient $U_{r,N,0}/\mathop{\mathrm{GL}}(r,\mathbb{C})$, which can be defined using geometric invariant theory. The quotient morphism $q:U_{r,N}=U_{r,N,0}\rightarrow\mathrm{Gr}_{r,N}$ is a $\mathop{\mathrm{GL}}(r,\mathbb{C})$-principal bundle, which is trivial over the open subsets $V_{\Delta}=q(U_{\Delta})\subseteq\mathrm{Gr}_{r,N}$. $\mathrm{Gr}_{r,N}$ can thus be constructed by glueing the open subsets $V_{\Delta}$. Since the construction of jet spaces is functorial and compatible with open immersions, $\mathrm{Gr}_{r,N,\alpha}$ can be obtained by glueing the jet spaces $V_{\Delta,\alpha}$. We also obtain a principal $\mathop{\mathrm{GL}}(r,\mathcal{O}_{\alpha})$-bundle $U_{r,N,\alpha}\rightarrow\mathrm{Gr}_{r,N,\alpha}$, which is trivial over the open subsets $V_{\Delta,\alpha}$. This shows that $U_{r,N,\alpha}\rightarrow\mathrm{Gr}_{r,N,\alpha}$ is a geometric quotient. ◻ Now, let $G=\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})$ and let $X$ be a $G$-variety over $\mathbb{C}$. Then $U_{\mathbf{r},N,\alpha}:=\prod_{i\in Q_{0}}U_{r_{i},N,\alpha}$ also admits a geometric quotient under the action of $G$. Following [@DM20 §2.2.], we define:$$\mathrm{H}_{\mathrm{c}}^{k}([X/G]):=\mathrm{H}_{\mathrm{c}}^{k+2\dim(U_{\mathbf{r},N,\alpha})}\left(X\times^{G}U_{\mathbf{r},N,\alpha}\right)\otimes\mathbb{Q}(\dim(U_{\mathbf{r},N,\alpha})),$$ for $N$ large enough. This is shown to be independent of the choice of $U_{\mathbf{r},N,\alpha}$, as long as a certain codimension assumption is satisfied. 
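For later bookkeeping, we record the following elementary dimension count: $\mathop{\mathrm{Hom}}_{\mathcal{O}_{\alpha}}(\mathcal{O}_{\alpha}^{\oplus r},\mathcal{O}_{\alpha}^{\oplus N})\simeq\mathbb{A}^{\alpha rN}$ as a $\mathbb{K}$-variety, so $\dim(U_{r,N,\alpha})=\alpha rN$, while $\dim\mathop{\mathrm{GL}}(r,\mathcal{O}_{\alpha})=\alpha r^{2}$ and hence $\dim(\mathrm{Gr}_{r,N,\alpha})=\alpha r(N-r)$. In particular $$\dim(U_{r,N,\alpha})=\dim(U_{r,N,\alpha-1})+rN,$$ an identity used in the proof of Lemma 26 below.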
The variety $X\times^{G}U_{\mathbf{r},N,\alpha}$ is the geometric quotient of $X\times U_{\mathbf{r},N,\alpha}$ under the diagonal action. This quotient is well-defined by [@EG98 Prop. 23] (note that we are in the case where the principal bundle $U_{\mathbf{r},N,\alpha}\rightarrow U_{\mathbf{r},N,\alpha}/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})$ is Zariski-locally trivial). #### Computational lemmas {#computational-lemmas .unnumbered} We now collect some lemmas on equivariant cohomology, which will prove useful in computing $\mathrm{H}_{\mathrm{c}}^{\bullet}([\mu_{Q,\mathbf{r},\alpha}^{-1}(0)/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})])$. Unless specified otherwise, we assume that $G=\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})$. We use mixed Hodge modules in some proofs - see [@Sai89; @Sai90] for background. **Lemma 25**. *Suppose that $f:X\rightarrow Y$ is a $G$-equivariant (Zariski-locally trivial) affine fibration of dimension $d$. Then:* *$$\mathrm{H}_{\mathrm{c}}^{\bullet}([X/G]) \simeq \mathbb{L}^{\otimes d} \otimes \mathrm{H}_{\mathrm{c}}^{\bullet}([Y/G]).$$* *Proof.* For an affine fibration of algebraic varieties $f:X\rightarrow Y$, the result can be deduced from the fact that the adjunction morphism $\underline{\mathbb{Q}}_{Y}\otimes\mathbb{L}_{Y}^{\otimes d}\rightarrow f_{!}\underline{\mathbb{Q}}_{X}$ of (complexes of) mixed Hodge modules is an isomorphism. This in turn can be checked for the underlying complexes of sheaves on trivializing opens of $f$, using base change and the projection formula. In the equivariant setting, one can check, using the construction of $X\times^{G}U_{\mathbf{r},N,\alpha}$ from [@EG98 Prop. 23], that $f$ induces an affine fibration $X\times^{G}U_{\mathbf{r},N,\alpha}\rightarrow Y\times^{G}U_{\mathbf{r},N,\alpha}$ of dimension $d$. ◻ **Lemma 26**. *Let $X$ be a $\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})$-variety such that the normal subgroup $K_{\mathbf{r},\alpha}=\mathop{\mathrm{Ker}}(\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\twoheadrightarrow\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha-1}))$ acts trivially on $X$. Then:* *$$\mathrm{H}_{\mathrm{c}}^{\bullet}([X/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})]) \simeq \mathbb{L}^{\otimes (-\mathbf{r}\cdot\mathbf{r})} \otimes \mathrm{H}_{\mathrm{c}}^{\bullet}([X/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha-1})]).$$* *Proof.* For simplicity, assume that $Q$ has only one vertex. We claim that, for $N$ large enough, the geometric quotient $U'_{r,N,\alpha}:=U_{r,N,\alpha}/K_{r,\alpha}$ is well defined and that the projection $U_{r,N,\alpha}\rightarrow U_{r,N,\alpha-1}$ induces an affine fibration $$X\times^{\mathop{\mathrm{GL}}(r,\mathcal{O}_{\alpha})}U_{r,N,\alpha}\simeq X\times^{\mathop{\mathrm{GL}}(r,\mathcal{O}_{\alpha-1})}U'_{r,N,\alpha}\rightarrow X\times^{\mathop{\mathrm{GL}}(r,\mathcal{O}_{\alpha-1})}U_{r,N,\alpha-1}$$ of dimension $r(N-r)$. Then, since $\dim(U_{r,N,\alpha})=\dim(U_{r,N,\alpha-1})+rN$, we obtain by Lemma [Lemma 25](#Lem/AffFib){reference-type="ref" reference="Lem/AffFib"}:$$\mathrm{H}_{\mathrm{c}}^{\bullet}(X\times^{\mathop{\mathrm{GL}}(r,\mathcal{O}_{\alpha})}U_{r,N,\alpha}) \otimes \mathbb{L}^{-\dim(U_{r,N,\alpha})} \simeq \mathrm{H}_{\mathrm{c}}^{\bullet}(X\times^{\mathop{\mathrm{GL}}(r,\mathcal{O}_{\alpha-1})}U_{r,N,\alpha-1}) \otimes \mathbb{L}^{-\dim(U_{r,N,\alpha-1})} \otimes \mathbb{L}^{-r^2},$$ which yields the results when $N$ goes to infinity. Let us now sketch a proof of the claims. 
First, we construct the geometric quotient $U'_{r,N,\alpha}$ by glueing quotients of the open subsets $U_{\Delta,\alpha}$ as above. Given a set $\Delta$ of $r$ rows, consider the subset $U'_{\Delta,\alpha}\subset U_{\Delta,\alpha}$ of matrices $M(t)=\sum_{k}M_{k}\cdot t^{k}$ such that the rows of $M_{\alpha-1}$ indexed by $\Delta$ are zero. Then the action of $K_{r,\alpha}$ induces an isomorphism $U_{\Delta,\alpha}\simeq U'_{\Delta,\alpha}\times K_{r,\alpha}$. One can check that the varieties $U'_{\Delta,\alpha}$ glue, which yields a geometric quotient $U'_{r,N,\alpha}$ as in the proof of Lemma [Lemma 24](#Lem/BorelApprox){reference-type="ref" reference="Lem/BorelApprox"}. Finally, we briefly describe the affine fibration $X\times^{\mathop{\mathrm{GL}}(r,\mathcal{O}_{\alpha-1})}U'_{r,N,\alpha}\rightarrow X\times^{\mathop{\mathrm{GL}}(r,\mathcal{O}_{\alpha-1})}U_{r,N,\alpha-1}$. The isomorphisms $U'_{\Delta,\alpha}\simeq U_{\Delta,\alpha-1}\times\mathbb{A}^{r(N-r)}$ glue to a $\mathop{\mathrm{GL}}(r,\mathcal{O}_{\alpha-1})$-equivariant affine fibration of dimension $r(N-r)$. One can then check that the following diagram $$\begin{tikzcd}[ampersand replacement=\&] X\times U'_{r,N,\alpha} \ar[r]\ar[d] \& X\times^{\mathop{\mathrm{GL}}(r,\mathcal{O}_{\alpha-1})}U'_{r,N,\alpha} \ar[d] \\ X\times U_{r,N,\alpha-1} \ar[r] \& X\times^{\mathop{\mathrm{GL}}(r,\mathcal{O}_{\alpha-1})}U_{r,N,\alpha-1} \end{tikzcd}$$ restricts to $$\begin{tikzcd}[ampersand replacement=\&] X\times V_{\Delta,\alpha-1}\times\mathop{\mathrm{GL}}(r,\mathcal{O}_{\alpha-1})\times\mathbb{A}^{r(N-r)} \ar[r]\ar[d] \& X\times V_{\Delta,\alpha-1}\times\mathbb{A}^{r(N-r)} \ar[d] \\ X\times V_{\Delta,\alpha-1}\times\mathop{\mathrm{GL}}(r,\mathcal{O}_{\alpha-1}) \ar[r] \& X\times V_{\Delta,\alpha-1} \end{tikzcd}$$ over $X\times V_{\Delta,\alpha-1}$ (see the proof of Lemma [Lemma 24](#Lem/BorelApprox){reference-type="ref" reference="Lem/BorelApprox"} for the definition of $V_{\Delta,\alpha-1}$). ◻ **Lemma 27**. *Let $n\geq1$ and $G=\mathbb{G}_{\mathrm{m}}^{n}(\mathcal{O}_{\alpha})$. Let $X$ be a $G$-variety. Then:$$\mathrm{H}_{\mathrm{c}}^{\bullet}([(X\times^{\mathbb{G}_{\mathrm{m}}^{n}(\mathcal{O}_{\alpha})}\mathbb{G}_{\mathrm{m}}^{n+1}(\mathcal{O}_{\alpha}))/\mathbb{G}_{\mathrm{m}}^{n+1}(\mathcal{O}_{\alpha})]) \simeq \mathrm{H}_{\mathrm{c}}^{\bullet}([X/\mathbb{G}_{\mathrm{m}}^{n}(\mathcal{O}_{\alpha})]).$$* *Proof.* Let us call $U_{n,N,\alpha}$ the variety $U_{\mathbf{r},N,\alpha}$ for $\mathbf{r}=\underline{1}$. From the isomorphism$$\left( X\times^{\mathbb{G}_{\mathrm{m}}^{n}(\mathcal{O}_{\alpha})}\mathbb{G}_{\mathrm{m}}^{n+1}(\mathcal{O}_{\alpha}) \right) \times^{\mathbb{G}_{\mathrm{m}}^{n+1}(\mathcal{O}_{\alpha})}U_{n+1,N,\alpha} \simeq \left( X\times^{\mathbb{G}_{\mathrm{m}}^{n}(\mathcal{O}_{\alpha})}U_{n,N,\alpha} \right) \times U_{1,N,\alpha},$$ we obtain:$$\begin{split} \mathrm{H}_{\mathrm{c}}^{\bullet}((X\times^{\mathbb{G}_{\mathrm{m}}^{n}(\mathcal{O}_{\alpha})}\mathbb{G}_{\mathrm{m}}^{n+1}(\mathcal{O}_{\alpha}))\times^{\mathbb{G}_{\mathrm{m}}^{n+1}(\mathcal{O}_{\alpha})}U_{n+1,N,\alpha}) \otimes \mathbb{L}^{-\dim(U_{n+1,N,\alpha})} & \\ \simeq \left(\mathrm{H}_{\mathrm{c}}^{\bullet}(X\times^{\mathbb{G}_{\mathrm{m}}^{n}(\mathcal{O}_{\alpha})}U_{n,N,\alpha}) \otimes \mathbb{L}^{-\dim(U_{n,N,\alpha})} \right) \otimes \left( \mathrm{H}_{\mathrm{c}}^{\bullet}(U_{1,N,\alpha})\otimes\mathbb{L}^{-\dim(U_{1,N,\alpha})} \right) . 
& \end{split}$$ Let us examine $\mathrm{H}_{\mathrm{c}}^{\bullet}(U_{1,N,\alpha})\otimes\mathbb{L}^{-\dim(U_{1,N,\alpha})}$. Using the long exact sequence in compactly supported cohomology associated to the open-closed decomposition $\mathop{\mathrm{Hom}}_{\mathcal{O}_{\alpha}}(\mathcal{O}_{\alpha},\mathcal{O}_{\alpha}^{\oplus N})=U_{1,N,\alpha}\sqcup\left(\mathop{\mathrm{Hom}}_{\mathcal{O}_{\alpha}}(\mathcal{O}_{\alpha},\mathcal{O}_{\alpha}^{\oplus N})\setminus U_{1,N,\alpha}\right)$ (see [@PS08 §5.5.2.]), one can check that: (i) $\mathrm{H}_{\mathrm{c}}^{\bullet}(U_{1,N,\alpha})\otimes\mathbb{L}^{-\dim(U_{1,N,\alpha})}$ is concentrated in nonpositive degrees (ii) its graded piece in degree 0 is $\mathbb{Q}$ (with trivial mixed Hodge structure) (iii) it vanishes in cohomological degrees $-2N$ to $-1$. Since $\mathrm{H}_{\mathrm{c}}^{\bullet}(X\times^{\mathbb{G}_{\mathrm{m}}^{n}(\mathcal{O}_{\alpha})}U_{n,N,\alpha})\otimes\mathbb{L}^{-\dim(U_{n,N,\alpha})}$ is supported in cohomological degree at most $\dim\left[X/\mathbb{G}_{\mathrm{m}}^{n}(\mathcal{O}_{\alpha})\right]$, we get $\mathrm{H}_{\mathrm{c}}^{j}([(X\times^{\mathbb{G}_{\mathrm{m}}^{n}(\mathcal{O}_{\alpha})}\mathbb{G}_{\mathrm{m}}^{n+1}(\mathcal{O}_{\alpha})/\mathbb{G}_{\mathrm{m}}^{n+1}(\mathcal{O}_{\alpha})])\simeq\mathrm{H}_{\mathrm{c}}^{j}([X/\mathbb{G}_{\mathrm{m}}^{n}(\mathcal{O}_{\alpha})])$ for any given $j$, by taking $N$ large enough. ◻ **Lemma 28**. *Let $X$ be a $\mathop{\mathrm{GL}}(\mathbf{r}_{1},\mathcal{O}_{\alpha})$-variety and $Y$ be a $\mathop{\mathrm{GL}}(\mathbf{r}_{2},\mathcal{O}_{\alpha})$-variety. Then:* *$$\mathrm{H}_{\mathrm{c}}^{\bullet}([(X\times Y)/(\mathop{\mathrm{GL}}(\mathbf{r}_{1},\mathcal{O}_{\alpha})\times\mathop{\mathrm{GL}}(\mathbf{r}_{2},\mathcal{O}_{\alpha}))]) \simeq H_{\mathrm{c}}^{\bullet}([X/\mathop{\mathrm{GL}}(\mathbf{r}_{1},\mathcal{O}_{\alpha})]) \otimes H_{\mathrm{c}}^{\bullet}([Y/\mathop{\mathrm{GL}}(\mathbf{r}_{2},\mathcal{O}_{\alpha})]).$$* *Proof.* The isomorphism can be checked directly in each cohomological degree - taking $N$ large enough - from the isomorphism$$(X\times Y)\times^{\mathop{\mathrm{GL}}(\mathbf{r}_{1},\mathcal{O}_{\alpha})\times\mathop{\mathrm{GL}}(\mathbf{r}_{2},\mathcal{O}_{\alpha})}(U_{\mathbf{r}_1,N,\alpha}\times U_{\mathbf{r}_2,N,\alpha}) \simeq (X\times^{\mathop{\mathrm{GL}}(\mathbf{r}_{1},\mathcal{O}_{\alpha})}U_{\mathbf{r}_1,N,\alpha}) \times (Y\times^{\mathop{\mathrm{GL}}(\mathbf{r}_{2},\mathcal{O}_{\alpha})}U_{\mathbf{r}_2,N,\alpha})$$ and the Künneth isomorphism for compactly supported cohomology (which is compatible with mixed Hodge structures - see [@CLNS18 Ch. 2, §3.2.]). ◻ **Lemma 29**. *Let $X$ be a $G$-variety, $Z\subseteq X$ a $G$-invariant closed subvariety and $U=X\setminus Z$. 
Then there is a long exact sequence of mixed Hodge structures:* *$$\ldots\rightarrow \mathrm{H}_{\mathrm{c}}^{j-1}([Z/G])\rightarrow \mathrm{H}_{\mathrm{c}}^j([U/G])\rightarrow \mathrm{H}_{\mathrm{c}}^j([X/G])\rightarrow \mathrm{H}_{\mathrm{c}}^j([Z/G])\rightarrow \mathrm{H}_{\mathrm{c}}^{j+1}([U/G]) \rightarrow \ldots$$ Moreover, if both $\mathrm{H}_{\mathrm{c}}^{\bullet}([U/G])$ and $\mathrm{H}_{\mathrm{c}}^{\bullet}([Z/G])$ are pure, then $\mathrm{H}_{\mathrm{c}}^{\bullet}([X/G])$ is also pure.* *Proof.* The long exact sequence can be derived from the following long exact sequence of mixed Hodge structures (see [@PS08 §5.5.2.]), taking $N$ large enough for each cohomological step: $$\ldots\rightarrow \mathrm{H}_{\mathrm{c}}^{j-1}(Z\times^GU_{\mathbf{r},N,\alpha})\rightarrow \mathrm{H}_{\mathrm{c}}^j(U\times^GU_{\mathbf{r},N,\alpha})\rightarrow \mathrm{H}_{\mathrm{c}}^j(X\times^GU_{\mathbf{r},N,\alpha})\rightarrow \mathrm{H}_{\mathrm{c}}^j(Z\times^GU_{\mathbf{r},N,\alpha})\rightarrow \mathrm{H}_{\mathrm{c}}^{j+1}(U\times^GU_{\mathbf{r},N,\alpha}) \rightarrow \ldots$$ If $\mathrm{H}_{\mathrm{c}}^{\bullet}([U/G])$ and $\mathrm{H}_{\mathrm{c}}^{\bullet}([Z/G])$ are pure, then for all $j\in\mathbb{Z}$, $\mathrm{H}_{\mathrm{c}}^{j}([U/G])$ and $\mathrm{H}_{\mathrm{c}}^{j}([Z/G])$ are pure of weight $j$. This implies that the connecting morphisms of the long exact sequence vanish and the short exact sequences obtained in each cohomological degree split (see [@PS08 Cor. 2.12.]). Therefore, $\mathrm{H}_{\mathrm{c}}^{j}([X/G])\simeq\mathrm{H}_{\mathrm{c}}^{j}([U/G])\oplus\mathrm{H}_{\mathrm{c}}^{j}([Z/G])$ is pure of weight $j$, which proves the claim. ◻ #### E-series and counts of $\mathbb{F}_{q}$-points {#e-series-and-counts-of-mathbbf_q-points .unnumbered} Finally, we recall some results relating Hodge numbers of an algebraic variety to counts of its points over finite fields. These results rely on a theorem by Katz [@HRV08 Appendix, Thm. 6.1.2.]. Throughout, we assume that $G=\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})$. Let $X$ be a complex $G$-variety. The E-series of $[X/G]$ is defined as:$$E([X/G];x,y):=\sum_{p,q\in\mathbb{Z}}\left(\sum_{k\in\mathbb{Z}}(-1)^k\cdot h^{p,q}\left(\mathrm{H}_{\mathrm{c}}^{k}([X/G])\right)\right)\cdot x^py^q\in\mathbb{Z}((x^{-1},y^{-1})).$$ Let us briefly justify that the E-series is a well-defined formal Laurent series in $x^{-1},y^{-1}$. For a fixed pair $(p,q)$, $h^{p,q}\left(\mathrm{H}_{\mathrm{c}}^{k}([X/G])\right)\ne0$ only if $p+q\leq k$ (see [@PS08 Ch. 5, Prop. 5.54.]). As $\mathrm{H}_{\mathrm{c}}^{\bullet}([X/G])$ is supported in degree at most $\dim\left([X/G]\right)$, the coefficient of $x^{p}y^{q}$ boils down to a finite sum. Moreover, one can check from [@Del74 Thm. 8.2.4.] and [@PS08 Ch. 5, Def. 5.52.] that $\sum_{k}(-1)^{k}\cdot h^{p,q}\left(\mathrm{H}_{\mathrm{c}}^{k}([X/G])\right)\ne0$ only if $p,q\leq\dim\left([X/G]\right)$, so we indeed obtain a Laurent series. A similar reasoning shows that the E-series of a variety $X$ is actually a polynomial in $x,y$. Given a complex $G$-variety $X$, we may count points over finite fields of some spreading-out of $X$ i.e. an $R$-scheme $\mathcal{X}$, where $R\subseteq\mathbb{C}$ is a finitely generated $\mathbb{Z}$-algebra and $\mathcal{X}\otimes_{R}\mathbb{C}\simeq X$. Such a ring $R$ admits ring homomorphisms $R\rightarrow\mathbb{F}_{q}$ for finite fields of large enough characteristic (see for instance [@CLNS18 Ch. 1, Lem. 2.2.6.]). 
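A minimal example to keep in mind (a toy computation, with $\alpha=1$, one vertex and $\mathbf{r}=\underline{1}$): for $X=\mathrm{pt}$ and $G=\mathbb{G}_{\mathrm{m}}=\mathop{\mathrm{GL}}(1,\mathcal{O}_{1})$ one finds $$E(\mathrm{B}\mathbb{G}_{\mathrm{m}};x,y)=\sum_{k\geq1}(xy)^{-k}=\frac{1}{xy-1},$$ which matches the stacky point count $\sharp\mathrm{B}\mathbb{G}_{\mathrm{m}}(\mathbb{F}_{q})=1/\sharp\mathbb{G}_{\mathrm{m}}(\mathbb{F}_{q})=1/(q-1)$ under $xy=q$. This is the pattern formalised by the notion of polynomial count and by Proposition 30 below.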
Following [@HRV08 Appendix], we call $X$ polynomial-count if there exists a spreading-out $\mathcal{X}$ and a polynomial $P\in\mathbb{Q}[T]$ such that for any ring homomorphism $\varphi:R\rightarrow\mathbb{F}_{q}$, $\sharp\mathcal{X}_{\varphi}(\mathbb{F}_{q^{r}})=P(q^{r})$ (for all $r\geq1$). Note that $G$ is polynomial-count, with counting polynomial $P_{G}(T)=\prod_{i\in Q_{0}}T^{\alpha r_{i}^{2}}(1-T^{-1})\ldots(1-T^{-r_{i}})$. The following proposition is a straightforward generalisation of [@HRV08 Thm. 6.1.2.]. **Proposition 30**. *If $X$ is polynomial-count, with counting polynomial $P_{X}$, then:$$E([X/G];x,y)=\frac{P_X(xy)}{P_G(xy)}.$$* *Proof.* By construction,$$E([X/G];x,y)= \underset{N\rightarrow +\infty}{\lim} \left( \frac{E\left( X\times^GU_{\mathbf{r},N,\alpha};x,y\right)}{(xy)^{\dim(U_{\mathbf{r},N,\alpha})}} \right) .$$ Arguing as in the proof of Lemma [Lemma 27](#Lem/GrpChg){reference-type="ref" reference="Lem/GrpChg"}, one can check that $U_{\mathbf{r},N,\alpha}$ is polynomial-count and that $\frac{P_{U_{\mathbf{r},N,\alpha}}(T)}{T^{\dim(U_{\mathbf{r},N,\alpha})}}$ is a power series in $T^{-1}$, congruent to $1$ modulo $T^{-\sum_{i}{N \choose r_{i}}}$. Now, by [@HRV08 Thm. 6.1.2.] and from the construction of $X\times^{G}U_{\mathbf{r},N,\alpha}$, we obtain:$$\frac{E\left( X\times^GU_{\mathbf{r},N,\alpha};x,y\right)}{(xy)^{\dim(U_{\mathbf{r},N,\alpha})}} = \frac{P_{U_{\mathbf{r},N,\alpha}}(xy)}{(xy)^{\dim(U_{\mathbf{r},N,\alpha})}} \cdot \frac{P_X(xy)}{P_G(xy)} \underset{N\rightarrow +\infty}{\longrightarrow} \frac{P_X(xy)}{P_G(xy)}.$$ ◻ Finally, if $X$ is polynomial-count and $\mathrm{H}_{\mathrm{c}}^{\bullet}([X/G])$ is pure, then it follows from Proposition [Proposition 30](#Prop/E-seriesVSCountF_q){reference-type="ref" reference="Prop/E-seriesVSCountF_q"} that $\mathrm{H}_{\mathrm{c}}^{\bullet}([X/G])$ is of Tate type, concentrated in even degrees and:$$E([X/G];x,y)=\sum_{k\in\mathbb{Z}}\dim\left(\mathrm{H}_{\mathrm{c}}^{2k}([X/G])\right)\cdot (xy)^k.$$ In other words, $\mathrm{H}_{\mathrm{c}}^{\bullet}([X/G])$ is determined by its E-series, as $\mathrm{H}_{\mathrm{c}}^{\bullet}([X/G])=P_{X}(\mathbb{L})\otimes F_{G}(\mathbb{L})$, where for a Laurent series $F(T)=\sum_{i}a_{i}\cdot T^{i}\in\mathbb{Z}_{\geq0}((T^{-1}))$, we define $F(\mathbb{L}):=\bigoplus_{i}\left(\mathbb{L}^{\otimes i}\right)^{\oplus a_{i}}$. Note that $\mathrm{H}_{\mathrm{c}}^{\bullet}(\mathrm{B}\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha}))=\mathrm{H}_{\mathrm{c}}^{\bullet}([\mathrm{pt}/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})])$ is pure, with E-series $F_{G}(xy)=\prod_{i\in Q_{0}}\frac{(xy)^{-\alpha r_{i}^{2}}}{(1-(xy)^{-1})\ldots(1-(xy)^{-r_{i}})}$. Purity accounts for the fact that many counting polynomials have non-negative coefficients. For instance, Theorem [Theorem 5](#Thm/IntroPosToricKacPol){reference-type="ref" reference="Thm/IntroPosToricKacPol"} admits a cohomological upgrade in the form of Theorem [Theorem 7](#Thm/IntroCohIntgr){reference-type="ref" reference="Thm/IntroCohIntgr"}. # Jet spaces of quiver moment maps and absolutely indecomposable representations [\[Sect/KacPolvsMomMap\]]{#Sect/KacPolvsMomMap label="Sect/KacPolvsMomMap"} In this section, we prove several results relating counts of jets over fibers of quiver moment maps and counts of absolutely indecomposable, locally free representations of quivers over $\mathbb{F}_{q}[t]/(t^{\alpha})$.
These results are analogous to the now well-understood relation between Kac polynomials and counts of points on fibers of quiver moment maps; see [@CBVB04; @Moz11a; @Dav23a]. Throughout this section, $\mathbb{K}=\mathbb{F}_{q}$. Let $Q$ be a quiver. Our first result concerns counts of jets over zero-fibers of quiver moment maps. We obtain a formula in the $\lambda$-ring $\mathcal{V}[[t_{i},\ i\in Q_{0}]]$, which directly generalises [@Moz11a Thm. 5.1.], with a similar proof. **Theorem 31**. *Let $Q$ be a quiver and $\alpha\geq1$. Then: $$\sum_{\mathbf{r}\in\mathbb{N}^{Q_0}} \frac{\sharp\mu_{Q,\mathbf{r},\alpha}^{-1}(0)}{\sharp\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})} \cdot q^{\alpha\langle\mathbf{r},\mathbf{r}\rangle}t^{\mathbf{r}} = \mathop{\mathrm{Exp}}_{q,t}\left( \sum_{\mathbf{r}\in\mathbb{N}^{Q_0}\setminus\{0\}} \frac{A_{Q,\mathbf{r},\alpha}}{1-q^{-1}}\cdot t^{\mathbf{r}} \right).$$* *Proof.* By Proposition [Proposition 11](#Prop/MomMapExactSeq){reference-type="ref" reference="Prop/MomMapExactSeq"}, for a given $x\in R(Q,\mathbf{r},\mathcal{O}_{\alpha})$ corresponding to a locally free representation $M$, $\sharp\{y\in R(Q^{\mathrm{op}},\mathbf{r},\mathcal{O}_{\alpha})\ \vert\ \mu_{Q,\mathbf{r},\alpha}(x,y)=0\}=\sharp\mathop{\mathrm{Ext}}^{1}(M,M)$. Summing over all isomorphism classes $[M]$ of locally free representations in rank $\mathbf{r}$, we obtain:$$\frac{\sharp\mu_{Q,\mathbf{r},\alpha}^{-1}(0)}{\sharp\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})} = \sum_{[M]}\frac{\sharp\mathop{\mathrm{Ext}}^{1}(M,M)}{\sharp\mathop{\mathrm{Aut}}(M)}.$$ Using Proposition [Proposition 11](#Prop/MomMapExactSeq){reference-type="ref" reference="Prop/MomMapExactSeq"} again, we obtain that $\sharp\mathop{\mathrm{Ext}}^{1}(M,M)=q^{-\alpha\langle\mathbf{r},\mathbf{r}\rangle}\cdot\sharp\mathop{\mathrm{End}}(M)$ and so:$$q^{\alpha\langle\mathbf{r},\mathbf{r}\rangle}\cdot\frac{\sharp\mu_{Q,\mathbf{r},\alpha}^{-1}(0)}{\sharp\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})} = \sum_{[M]}\frac{\sharp\mathop{\mathrm{End}}(M)}{\sharp\mathop{\mathrm{Aut}}(M)}=\mathop{\mathrm{vol}}_{\mathop{\mathrm{End}}}(\mathop{\mathrm{Rep}}_{\mathcal{O}_{\alpha}}(Q,\mathbf{r})^{\mathrm{l.f.}}).$$ Thus the formula follows from Proposition [Proposition 14](#Prop/ExpFml){reference-type="ref" reference="Prop/ExpFml"}. ◻ As a consequence, we prove that the asymptotic count of jets over $\mu_{Q,\mathbf{r}}^{-1}(0)$ is related to the asymptotic count of absolutely indecomposable, locally free representations in rank $\mathbf{r}$, when $\mathbf{r}=\underline{1}$. This solves a conjecture of Wyss [@Wys17b Conj. 4.37.]. Recall the definition of these asymptotic counts (limits in $\mathcal{V}$ are computed coordinate-wise):$$A_{Q}:= \underset{\alpha\rightarrow +\infty}{\lim} \left(q^{-\alpha b(Q)}\cdot A_{Q,\underline{1},\alpha}\right),$$ $$B_{\mu_{Q}}:= \underset{\alpha\rightarrow +\infty}{\lim} \left(q^{-\alpha(2\sharp Q_1-\sharp Q_0+1)}\cdot\sharp\mu_{Q,\underline{1},\alpha}^{-1}(0)\right).$$ Then we show: **Corollary 32**. *Let $Q$ be a 2-connected quiver. Then: $$\frac{B_{\mu_{Q}}}{(1-q^{-1})^{\sharp Q_0}}=\frac{A_{Q}}{1-q^{-1}}.$$* *Proof.* Let us spell out Theorem [Theorem 31](#Thm/ExpFmlKacPol){reference-type="ref" reference="Thm/ExpFmlKacPol"} for $\mathbf{r}=\underline{1}$.
Then:$$q^{-\alpha(b(Q)+\sharp Q_0-\langle\mathbf{r},\mathbf{r}\rangle)} \cdot \frac{\sharp\mu_{Q,\underline{1},\alpha}^{-1}(0)}{\prod_{i\in Q_0}(1-q^{-1})} = q^{-\alpha b(Q)}\cdot\sum_{Q_0=I_1\sqcup\ldots\sqcup I_s}\prod_{j=1}^s\frac{A_{Q,\underline{1}\restriction_{I_j},\alpha}}{1-q^{-1}}.$$ Note that $b(Q)+\sharp Q_{0}-\langle\mathbf{r},\mathbf{r}\rangle=2\sharp Q_{1}-\sharp Q_{0}+1$. Moreover, for $I\subseteq Q_{0}$, $A_{Q,\underline{1}_{I},\alpha}$ is a polynomial of degree $\alpha b(Q\restriction_{I})$ (see [@Wys17b Cor. 4.35.]). By Proposition [Proposition 17](#Prop/BettiNb){reference-type="ref" reference="Prop/BettiNb"}, only the term corresponding to $I_{1}=Q_{0}$ contributes to the limit in the right-hand side. Therefore we conclude:$$\frac{B_{\mu_{Q}}}{(1-q^{-1})^{\sharp Q_0}}=\frac{A_{Q}}{1-q^{-1}}.$$ ◻ *Remark 33*. Wyss' conjecture actually relates the numerators of $A_{Q}$ and $B_{\mu_{Q}}$. However, the conjecture does not hold as stated. The following quiver is a counter-example (regardless of orientation):$$\begin{tikzcd}[ampersand replacement=\&] \bullet \ar[rr, dash, bend left] \& \& \bullet \ar[dl, dash, bend left]\ar[dl, dash, bend right] \\ \& \bullet \ar[ul, dash]\ar[ul, dash, bend left]\ar[ul, dash, bend right] \& \end{tikzcd} .$$ Finally, we generalise Crawley-Boevey and Van den Bergh's result relating Kac polynomials and counts of $\mathbb{F}_{q}$-points on the representation variety of a *deformed* preprojective algebra. Recall that $\lambda\in\mathbb{Z}^{Q_{0}}$ is called generic with respect to a rank vector $\mathbf{r}\in\mathbb{N}^{Q_{0}}$ if $\lambda\cdot\mathbf{r}=0$ and $\lambda\cdot\mathbf{r}'\ne0$ for all $0<\mathbf{r}'<\mathbf{r}$. **Theorem 34**. *Let $\mathbf{r}\in\mathbb{N}^{Q_{0}}$ be an indivisible rank vector and $\lambda\in\mathbb{Z}^{Q_{0}}$ be generic with respect to $\mathbf{r}$. Suppose that $\mathbb{F}_{q}$ has characteristic larger than $\sum_{i}\vert\lambda_{i}\vert\cdot r_{i}$. Then: $$\frac{\sharp\mu_{Q,\mathbf{r},\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)}{\sharp\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})} =q^{-\alpha\langle\mathbf{r},\mathbf{r}\rangle}\cdot\frac{A_{Q,\mathbf{r},\alpha}}{1-q^{-1}}.$$* *Proof.* Consider the projection map $\pi:\mu_{Q,\mathbf{r},\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)\subseteq R(\overline{Q},\mathbf{r},\mathcal{O}_{\alpha})\rightarrow R(Q,\mathbf{r},\mathcal{O}_{\alpha})$. We claim that, when $\lambda$ is generic, the image of $\pi$ coincides with the constructible subset of absolutely indecomposable representations. Moreover, given $x$ an $\mathbb{F}_{q}$-point of $R(Q,\mathbf{r},\mathcal{O}_{\alpha})$ corresponding to a representation $M$, then $\pi^{-1}(x)$ has cardinality $\sharp\mathop{\mathrm{Ext}}^{1}(M,M)$ by Proposition [Proposition 11](#Prop/MomMapExactSeq){reference-type="ref" reference="Prop/MomMapExactSeq"}. 
Therefore, we can compute $\sharp\mu_{Q,\mathbf{r},\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)$ by summing over isomorphism classes $[M]$ of absolutely indecomposable representations in rank $\mathbf{r}$:$$\frac{\sharp\mu_{Q,\mathbf{r},\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)}{\sharp\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})} = \sum_{[M]}\frac{\sharp\mathop{\mathrm{Ext}}^1(M,M)}{\sharp\mathop{\mathrm{Aut}}(M)}.$$ Moreover, for $M$ absolutely indecomposable, $\sharp\mathop{\mathrm{Aut}}(M)=\frac{q-1}{q}\cdot\sharp\mathop{\mathrm{End}}(M)=q^{\alpha\langle\mathbf{r},\mathbf{r}\rangle}\cdot\frac{q-1}{q}\cdot\sharp\mathop{\mathrm{Ext}}^{1}(M,M)$ by Propositions [Proposition 11](#Prop/MomMapExactSeq){reference-type="ref" reference="Prop/MomMapExactSeq"} and [Proposition 13](#Prop/EndRings){reference-type="ref" reference="Prop/EndRings"}. This yields:$$\frac{\sharp\mu_{Q,\mathbf{r},\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)}{\sharp\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})} =q^{-\alpha\langle\mathbf{r},\mathbf{r}\rangle}\cdot\frac{A_{Q,\mathbf{r},\alpha}}{1-q^{-1}}.$$ Let us now prove the claim. We actually prove the following stronger fact, by analogy with [@CB01a Thm. 3.3.]: given an $\mathbb{F}_{q}$-point $x$ of $R(Q,\mathbf{r},\mathcal{O}_{\alpha})$ corresponding to a representation $M$, $x$ admits a lift in $\mu_{Q,\mathbf{r},\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)$ if, and only if, any direct summand of $M$ of rank $\mathbf{r}'\leq\mathbf{r}$ satisfies $\lambda\cdot\mathbf{r}'=0$. Indeed, if $(x,y)$ is an $\mathbb{F}_{q}$-point of $\mu_{Q,\mathbf{r},\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)$ such that $\pi(x,y)=x$, then by Proposition [Proposition 11](#Prop/MomMapExactSeq){reference-type="ref" reference="Prop/MomMapExactSeq"}, $\mu_{Q,\mathbf{r},\alpha}(x,y)=t^{\alpha-1}\cdot\lambda$ lies in the kernel of $\mathfrak{gl}(\mathbf{r},\mathcal{O}_{\alpha})\rightarrow\mathop{\mathrm{End}}(M)^{\vee}$. Therefore, pairing $\mu_{Q,\mathbf{r},\alpha}(x,y)=t^{\alpha-1}\cdot\lambda$ with the projection onto a direct summand of $M$ of rank $\mathbf{r}'$ yields $\mathbb{F}_{q}\ni\lambda\cdot\mathbf{r}'=0$. This gives $\mathbb{Z}\ni\lambda\cdot\mathbf{r}'=0$, due to the bound on the characteristic of $\mathbb{F}_{q}$. Conversely, suppose that any direct summand of $M$, of rank $\mathbf{r}'$ say, satisfies $\lambda\cdot\mathbf{r}'=0$. It is enough to show that, for any indecomposable direct summand of $M$ (corresponding to an $\mathbb{F}_{q}$-point $x'$ of $R(Q,\mathbf{r}',\mathcal{O}_{\alpha})$), $x'$ admits a lift in $\mu_{Q,\mathbf{r}',\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)$. Thus we assume that $M$ is indecomposable. Switching to $\overline{\mathbb{F}_{q}}$, Proposition [Proposition 13](#Prop/EndRings){reference-type="ref" reference="Prop/EndRings"} implies that any $\xi\in\mathop{\mathrm{End}}(M)$ is the sum of a scalar $\xi_{0}\cdot\mathop{\mathrm{Id}}$, with $\xi_{0}\in\overline{\mathbb{F}_{q}}$, and a nilpotent endomorphism, i.e. $\xi$ is the sum of $\xi_{0}\cdot\mathop{\mathrm{Id}}$ and a tuple of nilpotent matrices modulo $t$ (note that, over $\mathcal{O}_{\alpha}$, not all nilpotent elements are traceless). Therefore, for all $\xi\in\mathop{\mathrm{End}}(M)$, we get $\xi*(t^{\alpha-1}\lambda\cdot\mathop{\mathrm{Id}})=\xi_{0}t^{\alpha-1}\lambda\cdot\mathbf{r}=0$, which implies, by Proposition [Proposition 11](#Prop/MomMapExactSeq){reference-type="ref" reference="Prop/MomMapExactSeq"}, that $x$ lifts to $\mu_{Q,\mathbf{r},\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)$.
◻ # Positivity and purity in the toric setting [\[Section/PosPurity\]]{#Section/PosPurity label="Section/PosPurity"} In this section, we prove positivity results for counts of absolutely indecomposable toric representations of quivers in higher depth. We first prove positivity for asymptotic counts of quiver representations, as conjectured in [@Wys17b §4]. We then proceed with the proof of Theorem [Theorem 5](#Thm/IntroPosToricKacPol){reference-type="ref" reference="Thm/IntroPosToricKacPol"} in fixed depth $\alpha\geq1$. Finally, we prove a cohomological upgrade of Theorem [Theorem 31](#Thm/ExpFmlKacPol){reference-type="ref" reference="Thm/ExpFmlKacPol"} for rank vectors $\mathbf{r}\leq1$, in the spirit of [@Dav17a Thm. A.]. ## Proof of Wyss' first conjecture [\[Sect/PosA\]]{#Sect/PosA label="Sect/PosA"} Let $Q$ be a quiver and $\mathbb{K}=\mathbb{F}_{q}$. In [@Wys17b §4], Wyss studied the asymptotic count of locally free, absolutely indecomposable representations of $Q$ over $\mathcal{O}_{\alpha}$ in rank $\mathbf{r}=\underline{1}$:$$A_Q:= \underset{\alpha\rightarrow +\infty}{\lim} \left(q^{-\alpha b(Q)}\cdot A_{Q,\underline{1},\alpha}(q)\right).$$ The above sequence converges if, and only if, $Q$ is 2-connected and the limit is an explicit rational fraction in $q$ (see [@Wys17b Cor. 4.35.]):$$A_Q= (1-q^{-1})^{b(Q)}\cdot \sum_{E_1\subsetneq E_2\ldots\subsetneq E_s=Q_1}\prod_{j=1}^{s-1}\frac{1}{q^{b(Q)-b(Q\restriction_{E_j})}-1}.$$ The appearance of chains of subsets of $Q_{1}$ is reminiscent of the order complex of $\Pi(Q_{1})$ and indeed, we show that $A_{Q}$ is essentially the Hilbert series of the Stanley Reisner ring associated to $\Delta:=\mathcal{O}(\Pi(Q_{1}))$, suitably specialised. This approach was inspired from [@MV22 Prop. 1.9.]. **Theorem 35**. *Let $Q$ be a 2-connected quiver. Consider the Stanley-Reisner ring $\mathbb{Q}[\Delta]$ associated to the order complex $\Delta$ of the poset $\Pi(Q_{1})\setminus\{\emptyset,Q_{1}\}$. Then:* *$$A_Q(q)=\frac{(1-q^{-1})^{b(Q)}}{1-q^{-b(Q)}}\cdot\mathop{\mathrm{Hilb}}_{\Delta}\left(u_E=q^{-(b(Q)-b(Q\restriction_E))}\right).$$ Moreover, $\mathop{\mathrm{Hilb}}_{\Delta}\left(u_{E}=q^{-(b(Q)-b(Q\restriction_{E}))}\right)$ can be written as a rational fraction whose numerator has non-negative coefficients.* *Proof.* The above formula for $A_{Q}$ can be rewritten as:$$\begin{split} A_Q & = (1-q^{-1})^{b(Q)}\cdot\sum_{\emptyset\subsetneq E_1\subsetneq E_2\ldots\subsetneq E_{s-1}\subsetneq E_s=Q_1}\prod_{j=1}^{s-1}\frac{1}{q^{b(Q)-b(Q\restriction_{E_j})}-1}\times\left\{ 1+\frac{1}{q^{b(Q)}-1}\right\} \\ & = \frac{(1-q^{-1})^{b(Q)}}{1-q^{-b(Q)}}\cdot \sum_{\emptyset\subsetneq E_1\subsetneq E_2\ldots\subsetneq E_{s-1}\subsetneq E_s=Q_1}\prod_{j=1}^{s-1}\frac{(q^{-1})^{b(Q)-b(Q\restriction_{E_j})}}{1-(q^{-1})^{b(Q)-b(Q\restriction_{E_j})}} \\ & = \frac{(1-q^{-1})^{b(Q)}}{1-q^{-b(Q)}}\cdot\mathop{\mathrm{Hilb}}_{\Delta}\left(u_E=q^{-(b(Q)-b(Q\restriction_E))}\right). \end{split}$$ Moreover, the order complex of $\Pi(Q_{1})\setminus\{\emptyset,Q_{1}\}$ is shellable, by Proposition [Proposition 23](#Prop/LattShell){reference-type="ref" reference="Prop/LattShell"} (see also Example [Example 22](#Exmp/OrderCpx){reference-type="ref" reference="Exmp/OrderCpx"}), so it is Cohen-Macaulay, by Proposition [\[Prop/ShellCM\]](#Prop/ShellCM){reference-type="ref" reference="Prop/ShellCM"}. Therefore, up to substituting $q^{-1}$ for $q$, the result follows directly from Proposition [\[Prop/CMHilb\]](#Prop/CMHilb){reference-type="ref" reference="Prop/CMHilb"}. 
◻ *Remark 36*. Wyss' initial conjecture concerns the numerator of $B_{\mu_{Q}}(q)$, which is explicitly defined in [@Wys17b §4.6.]. Using Corollary [Corollary 32](#Cor/RelAvsB){reference-type="ref" reference="Cor/RelAvsB"} and Theorem [Theorem 35](#Thm/PositivityA){reference-type="ref" reference="Thm/PositivityA"}, one can show that this conjecture is true. We leave out the detailed computations to avoid introducing too many additional notations. As the reader may have noticed, $A_{Q}(q)$ can be related more directly to the order complex of $\Pi(Q_{1})\setminus\{Q_{1}\}$ (instead of $\Pi(Q_{1})\setminus\{\emptyset,Q_{1}\}$), which is also shellable. We chose to work with $\Pi(Q_{1})\setminus\{\emptyset,Q_{1}\}$ in order to extract a factor $(1-q^{-b(Q)})$ from the Hilbert series, which turns out to be useful in the proof of [@Wys17b Conj. 4.32.]. ## Positivity of toric Kac polynomials [\[Sect/PosToricKacPol\]]{#Sect/PosToricKacPol label="Sect/PosToricKacPol"} We now turn to the proof of Theorem [Theorem 5](#Thm/IntroPosToricKacPol){reference-type="ref" reference="Thm/IntroPosToricKacPol"}. Recall from [@Wys17b Prop. 4.34.] that $A_{Q,\underline{1},\alpha}$ enjoys the following explicit formula:$$A_{Q,\underline{1},\alpha}= \sum_{\substack{E_1\subseteq\ldots\subseteq E_{\alpha}\subseteq Q_1 \\ c(Q\restriction_{E_{\alpha}})=1}} (q-1)^{b(Q\restriction_{E_{\alpha}})}q^{\sum_{k=1}^{\alpha-1}b(Q\restriction_{E_{k}})}.$$ Set $\mathbf{r}=\underline{1}$, $\mathbb{K}=\mathbb{F}_{q}$ and call $R(Q,\mathcal{O}_{\alpha}):=R(Q,\underline{1},\mathcal{O}_{\alpha})$ for short. Let $R(Q,\mathcal{O}_{\alpha})_{\mathrm{ind.}}\subseteq R(Q,\mathcal{O}_{\alpha})$ be the constructible subset of (absolutely) indecomposable representations[^5]. Suppose also that $Q$ is connected, so that $R(Q,\mathcal{O}_{\alpha})_{\mathrm{ind.}}\ne\emptyset$. The above formula is obtained by counting $\left(\mathcal{O}_{\alpha}^{\times}\right)^{Q_{0}}$-orbits (of $\mathbb{F}_{q}$-points) along a stratification of $R(Q,\mathcal{O}_{\alpha})$, which is defined by prescribing valuations $\mathop{\mathrm{val}}(x_{a}),\ a\in Q_{1}$. The datum of valuations $\mathop{\mathrm{val}}(x_{a})$ is encoded in the subsets $E_{1}\subseteq\ldots\subseteq E_{\alpha}\subseteq Q_{1}$ and there are $(q-1)^{b(Q\restriction_{E_{\alpha}})}q^{\sum_{k=1}^{\alpha-1}b(Q\restriction_{E_{k}})}$ orbits in the stratum associated to $E_{1}\subseteq\ldots\subseteq E_{\alpha}\subseteq Q_{1}$. As the formula shows, the polynomial counting orbits in a given stratum does not necessarily have non-negative coefficients. To fix this, we consider a coarser stratification of $R(Q,\mathcal{O}_{\alpha})$ based on valued spanning trees and inspired from [@AMRV22]. Let us describe the algorithm we use to assign a valued spanning tree to $x\in R(Q,\mathcal{O}_{\alpha})$. #### Contraction-deletion algorithm {#contraction-deletion-algorithm .unnumbered} Fix a total ordering of $Q_{1}$. Let $x\in R(Q,\mathcal{O}_{\alpha})_{\mathrm{ind}}$. We build a valued spanning tree of $Q$, called $T_{x}$, by following the algorithm below: 1. If $Q_{1}$ contains at least one non-loop arrow $a$ such that $x_{a}\in\mathcal{O}_{\alpha}^{\times}$, call $a_{0}\in Q_{1}$ the largest such arrow and apply step 1 to the induced representation $x'\in R(Q/a_{0},\mathcal{O}_{\alpha})$. Otherwise, apply step 2 to $x$. 2. If $Q_{1}$ contains at least one loop, call $a_{0}\in Q_{1}$ the largest loop and apply step 2 to the induced representation $x'\in R(Q\setminus a_{0},\mathcal{O}_{\alpha})$. 
Otherwise, apply step 3 to $x$. 3. If $Q_{1}\ne\emptyset$ and $\mathop{\mathrm{val}}(x_{a})>0$ for all $a\in Q_{1}$, apply step 1 to the representation $x'\in R(Q,\mathcal{O}_{\alpha-1})$ induced by the isomorphism of $\mathcal{O}_{\alpha}$-modules $\mathcal{O}_{\alpha-1}\simeq t\mathcal{O}_{\alpha}$. Note that the algorithm terminates in step 3, when $Q$ is a one-vertex quiver with no loops. Indeed, since $x\in R(Q,\mathcal{O}_{\alpha})_{\mathrm{ind}}$, the restriction of $Q$ to $\{a\in Q_{1}\ \vert\ x_{a}\ne0\}\subseteq Q_{1}$ is connected and the algorithm ends up contracting exactly the edges of a spanning tree of $Q$, which we call $T_{x}$. Note that the assumption $\mathop{\mathrm{val}}(x_{a})>0$ in step 3 is necessarily satisfied for all $a\in Q_{1}$, as non-loop arrows (resp. loops) such that $x_{a}\in\mathcal{O}_{\alpha}^{\times}$ are contracted (resp. deleted) in step 1 (resp. step 2). We define the valued spanning tree associated to $x$ as $T_{x}$, with labeling $v:a\mapsto\mathop{\mathrm{val}}(x_{a})$. *Example 37*. Let us illustrate the above algorithm on the following quiver:$$\begin{tikzcd}[ampersand replacement=\&] \bullet \arrow[loop, distance=2em, in=125, out=55, "6" description] \arrow[rr, bend left, "5" description] \& \& \bullet \arrow[ll, bend left, "4" description] \arrow[ld, bend left, "1" description] \\ \& \bullet \arrow[lu, bend left, "3" description] \arrow[loop, distance=2em, in=305, out=235, "2" description] \& \end{tikzcd}$$ and representation $x=(x_{1},x_{2},x_{3},x_{4},x_{5},x_{6})=(t,t^{2},t^{2},1,t,1)$. Here, the order on $Q_{1}$ is the natural order on the integers.$$\begin{tabular}{>{\centering\arraybackslash}p{4cm} >{\centering\arraybackslash}p{4cm} >{\centering\arraybackslash}p{4cm} >{\centering\arraybackslash}p{4cm} >{\centering\arraybackslash}p{4cm}} \begin{tikzcd}[ampersand replacement=\&] \bullet \arrow[loop, distance=2em, in=125, out=55, "1", swap] \arrow[rr, bend left, "t"] \& \& \bullet \arrow[ll, bend left, "1", blue] \arrow[ld, bend left, "t"] \\ \& \bullet \arrow[lu, bend left, "t^2"] \arrow[loop, distance=2em, in=305, out=235, "t^2", swap] \& \end{tikzcd} & \begin{tikzcd}[ampersand replacement=\&] \bullet \arrow[loop, distance=2em, in=35, out=325, "t", swap] \arrow[loop, distance=2em, in=145, out=215, "1", blue] \arrow[d, bend left, "t"] \\ \bullet \arrow[u, bend left, "t^2"] \arrow[loop, distance=2em, in=305, out=235, "t^2", swap] \end{tikzcd} & \begin{tikzcd}[ampersand replacement=\&] \bullet \arrow[loop, distance=2em, in=35, out=325, "t", swap, blue] \arrow[loop, distance=2em, in=145, out=215, phantom] \arrow[d, bend left, "t"] \\ \bullet \arrow[u, bend left, "t^2"] \arrow[loop, distance=2em, in=305, out=235, "t^2", swap] \end{tikzcd} & \begin{tikzcd}[ampersand replacement=\&] \bullet \arrow[d, bend left, "t"] \\ \bullet \arrow[u, bend left, "t^2"] \arrow[loop, distance=2em, in=305, out=235, "t^2", swap, blue] \end{tikzcd} \\ \text{Step 1} & \text{Step 2} & \text{Step 2} & \text{Step 2} \\ \begin{tikzcd}[ampersand replacement=\&] \bullet \arrow[d, bend left, "t", blue] \\ \bullet \arrow[u, bend left, "t^2", blue] \end{tikzcd} & \begin{tikzcd}[ampersand replacement=\&] \bullet \arrow[d, bend left, "1", blue] \\ \bullet \arrow[u, bend left, "t"] \end{tikzcd} & \begin{tikzcd}[ampersand replacement=\&] \bullet \arrow[loop, distance=2em, in=305, out=235, "t", swap, blue] \end{tikzcd} & \begin{tikzcd}[ampersand replacement=\&] \bullet \end{tikzcd} \\ \text{Step 3} & \text{Step 1} & \text{Step 2} & \text{Stop} \end{tabular}$$ Given a valued 
spanning tree $T$, we define $R(Q,\mathcal{O}_{\alpha})_{T}:=\{x\in R(Q,\mathcal{O}_{\alpha})\ \vert\ T_{x}=T\}\subseteq R(Q,\mathcal{O}_{\alpha})_{\mathrm{ind.}}$. Since $T_{x}$ only depends on $\mathop{\mathrm{val}}(x_{a}),\ a\in Q_{1}$, $R(Q,\mathcal{O}_{\alpha})_{T}$ is an $\left(\mathcal{O}_{\alpha}^{\times}\right)^{Q_{0}}$-invariant constructible subset of $R(Q,\mathcal{O}_{\alpha})_{\mathrm{ind.}}$. Let us describe the strata $R(Q,\mathcal{O}_{\alpha})_{T}$ in more detail. Let $T$ be a valued spanning tree with valuation $v_{T}$, let $x\in R(Q,\mathcal{O}_{\alpha})_{T}$ and let $a\in Q_{1}$ be a non-loop arrow. If $a\in T$, then $\mathop{\mathrm{val}}(x_{a})=v_{T}(a)$ by construction. Suppose now that $a\notin T$. Then there exists a unique (unoriented) path in $T$ joining the end vertices of $a$. Let us denote by $T_{a}\subseteq Q_{1}$ the corresponding set of edges and by $v_{T_{a}}$ the largest valuation of an edge of $T_{a}$. When running the contraction-deletion algorithm, $a$ is contracted into a loop exactly when the smallest edge of $T_{a}$ with valuation $v_{T_{a}}$ is contracted, i.e. when the last edge of $T_{a}$ is contracted. Let us call $e_{T_{a}}$ this edge. This means that $\mathop{\mathrm{val}}(x_{a})\geq v_{T_{a}}$, for otherwise $a$ would have been contracted before $e_{T_{a}}$. Moreover, if $a>e_{T_{a}}$, then $\mathop{\mathrm{val}}(x_{a})>v_{T_{a}}$, since otherwise $a$ would again have been contracted before $e_{T_{a}}$. One can check that these inequalities imply $x\in R(Q,\mathcal{O}_{\alpha})_{T}$, so:$$R(Q,\mathcal{O}_{\alpha})_{T} = \left\{ x\in R(Q,\mathcal{O}_{\alpha})\ \left\vert \begin{array}{ll} \mathop{\mathrm{val}}(x_{a})=v_{T}(a), & a\in T \\ \mathop{\mathrm{val}}(x_{a})\geq v_{T_{a}}+\delta_{a>e_{T_{a}}}, & a\not\in T \end{array} \right. \right\} .$$ Let us call $A_{Q,T,\alpha}$ the number of orbits (of $\mathbb{F}_{q}$-points in $R(Q,\mathcal{O}_{\alpha})_{\mathrm{ind.}}$) lying in $R(Q,\mathcal{O}_{\alpha})_{T}$. Then, summing over all valued spanning trees of $Q$, we obtain:$$A_{Q,\underline{1},\alpha}=\sum_TA_{Q,T,\alpha}.$$ The advantage of considering coarser strata is that $A_{Q,T,\alpha}$ can be computed recursively as in [@AMRV22], following the contraction-deletion algorithm: **Proposition 38**. *Let $T$ be a valued spanning tree of $Q$. Denote by $Q',T'$ the quiver and valued spanning tree obtained from $Q,T$ by running the contraction-deletion algorithm for an arbitrary $x\in R(Q,\mathcal{O}_{\alpha})_{T}$ and stopping after the first occurrence of step 3. Consider:* - *$n_{1}$ the number of loops of $Q$;* - *$n_{2}$ the number of non-loop arrows $a\in Q_{1}$ which get contracted into loops during step 1 and satisfy $a<e_{T_{a}}$;* - *$n_{3}$ the number of non-loop arrows $a\in Q_{1}$ which get contracted into loops during step 1 and satisfy $a>e_{T_{a}}$.* *Then:$$A_{Q,T,\alpha}=q^{\alpha n_1 + \alpha n_2 + (\alpha-1)n_3}\cdot A_{Q',T',\alpha-1}.$$* *Proof.* Throughout the proof, we will be considering orbits of $\mathbb{F}_{q}$-rational points. Let $a\in Q_{1}$ be the first arrow of $T$ which gets contracted during step 1.
Then the $\left(\mathcal{O}_{\alpha}^{\times}\right)^{Q_{0}}$-orbits of $R(Q,\mathcal{O}_{\alpha})_{T}$ are in one-to-one correspondence with $\left(\mathcal{O}_{\alpha}^{\times}\right)^{(Q/a)_{0}}$-orbits of $\{x\in R(Q,\mathcal{O}_{\alpha})_{T}\ \vert\ x_{a}=1\}$, where the factor $\mathcal{O}_{\alpha}^{\times}\subseteq\left(\mathcal{O}_{\alpha}^{\times}\right)^{Q_{0}}$ corresponding to $[s(a)]=[t(a)]\in(Q/a)_{0}$ acts diagonally on the modules $\mathcal{O}_{\alpha}$ located at vertices $s(a)$ and $t(a)$. Repeating this reasoning for all arrows $a_{1},\ldots,a_{s}$ of $T$ which get contracted (in step 1) during the construction of $Q'$, we obtain that the $\left(\mathcal{O}_{\alpha}^{\times}\right)^{Q_{0}}$-orbits of $R(Q,\mathcal{O}_{\alpha})_{T}$ are in one-to-one correspondence with $\left(\mathcal{O}_{\alpha}^{\times}\right)^{Q'_{0}}$-orbits of:$$\{x\in R(Q,\mathcal{O}_{\alpha})_{T}\ \vert\ x_{a_1}=\ldots=x_{a_s}=1\} \simeq R(Q',\mathcal{O}_{\alpha})_{T'}\times\mathcal{O}_{\alpha}^{\oplus n_1}\times\mathcal{O}_{\alpha}^{\oplus n_2}\times\left(t\mathcal{O}_{\alpha}\right)^{\oplus n_3} .$$ The above isomorphism is $\left(\mathcal{O}_{\alpha}^{\times}\right)^{Q'_{0}}$-equivariant, with trivial action on the last three factors of the right-hand side. Here $T'$ is obtained from $T$ by contracting $a_{1},\ldots,a_{s}$ and leaving valuations unchanged. By abuse of notation, let us also call $T'$ the valued spanning tree obtained by contracting $a_{1},\ldots,a_{s}$ and decreasing valuations by 1. Then we obtain as claimed:$$A_{Q,T,\alpha}=q^{\alpha n_1 + \alpha n_2 + (\alpha-1)n_3}\cdot A_{Q',T',\alpha-1}.$$ ◻ The contraction-deletion algorithm terminates in step 3, when $Q'$ is a one-vertex quiver without arrows. Then $A_{Q',\bullet,\alpha'}(q)=1$ and it follows from Proposition [Proposition 38](#Prop/KacPolContDel){reference-type="ref" reference="Prop/KacPolContDel"} that$$A_{Q,T,\alpha}(q)=q^{n_T},\quad\text{where } n_T:=\sum_{a\not\in T}(\alpha-v_{T_a}-\delta_{a>e_{T_a}}).$$ This yields the following: **Theorem 39**. *Let $\mathbf{r}=\underline{1}$. Then $A_{Q,\mathbf{r},\alpha}$ has non-negative coefficients.* *Remark 40*. In [@AMRV22], toric Kac polynomials are shown to be specialisations of Tutte polynomials. This can be deduced from a recursive relation satisfied by $A_{Q,\underline{1},1}$ under contraction-deletion of edges, for which Tutte polynomials are universal. This relation relies on the fact that, for $x\in R(Q,\mathbb{F}_{q})$ and $a\in Q_{1}$, either $x_{a}$ is invertible or $x_{a}=0$. This is no longer the case for $\alpha>1$ and $A_{Q,\underline{1},\alpha}$ cannot be obtained as a specialisation of the Tutte polynomial of $Q$ (see also [@HLRV18 Prop. 7.16.]). ## Purity of toric preprojective stacks in higher depth [\[Sect/CohPreprojStack\]]{#Sect/CohPreprojStack label="Sect/CohPreprojStack"} In this section, we upgrade Theorems [Theorem 34](#Thm/CountMomMapGenFib){reference-type="ref" reference="Thm/CountMomMapGenFib"} and [Theorem 39](#Thm/PositivityToricKacPol){reference-type="ref" reference="Thm/PositivityToricKacPol"} to a cohomological result. Set $\mathbb{K}=\mathbb{C}$. We show by direct computation that the compactly supported cohomology of $\left[\mu_{Q,\mathbf{r},\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\right]$ ($\mathbf{r}=\underline{1}$) is pure and contains a pure Hodge structure with E-polynomial $A_{Q,\mathbf{r},\alpha}$.
This leads us to compute the compactly supported cohomology of $\left[\mu_{Q,\mathbf{r},\alpha}^{-1}(0)/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\right]$, which is also pure and satisfies a formula analogous to cohomological integrality in [@Dav17a]. We assume throughout that $Q$ is connected (the non-connected case is analogous, by treating connected components separately). Set $\mathbf{r}=\underline{1}$ and write $\mu_{Q,\alpha}:=\mu_{Q,\mathbf{r},\alpha}$ for short. We compute $\mathrm{H}_{\mathrm{c}}^{\bullet}\left(\left[\mu_{Q,\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\right]\right)$ using a $\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})$-equivariant stratification, where strata are labeled by valued spanning trees. Let $\pi:\mu_{Q,\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)\subseteq R(\overline{Q},\mathcal{O}_{\alpha})\rightarrow R(Q,\mathcal{O}_{\alpha})$ be the projection $(x,y)\mapsto x$. For $T$ a valued spanning tree of $Q$, define the stratum $\mu_{Q,\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)_{T}:=\pi^{-1}\left(R(Q,\mathcal{O}_{\alpha})_{T}\right)$. We use the term "stratification" in a loose sense: the set of spanning trees of $Q$ can be endowed with a partial order such that, for any valued spanning tree $T$,$$\bigsqcup_{T'\leq T}\mu_{Q,\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)_{T'}\subseteq\mu_{Q,\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)$$ is closed. One can then compute the (compactly supported) cohomology of $\left[\mu_{Q,\mathbf{r},\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\right]$ from the cohomology of the strata using successive open-closed decompositions and Lemma [Lemma 29](#Lem/Strat){reference-type="ref" reference="Lem/Strat"}. The following proposition gives us the partial order on strata that we are looking for: **Proposition 41**. *The following closed subset of $R(Q,\mathcal{O}_{\alpha})_{\mathrm{ind}}$ is a union of strata $R(Q,\mathcal{O}_{\alpha})_{T'}$:* *$$R(Q,\mathcal{O}_{\alpha})_{\leq T} := \left\{ x\in R(Q,\mathcal{O}_{\alpha})\ \left\vert \begin{array}{ll} \mathop{\mathrm{val}}(x_{a})\geq v_{T}(a), & a\in T \\ \mathop{\mathrm{val}}(x_{a})\geq v_{T_{a}}+\delta_{a>e_{T_{a}}}, & a\not\in T \end{array} \right. \right\} .$$* *Proof.* Let $T'$ be a valued spanning tree of $Q$ and suppose that $R(Q,\mathcal{O}_{\alpha})_{\leq T}\cap R(Q,\mathcal{O}_{\alpha})_{T'}\ne\emptyset$. We claim that for any $a\in Q_{1}$, $v_{T'_{a}}+\delta_{a>e_{T'_{a}}}\geq v_{T_{a}}+\delta_{a>e_{T_{a}}}$, hence $R(Q,\mathcal{O}_{\alpha})_{T'}\subseteq R(Q,\mathcal{O}_{\alpha})_{\leq T}$. Note that when $a$ belongs to $T$, we have: $T_{a}=Q\restriction_{\{a\}}$, $v_{T_{a}}=v_{T}(a)$, $e_{T_{a}}=a$ and the right-hand side simplifies to $v_{T}(a)$. The same goes for the left-hand side when $a$ belongs to $T'$. We write $a\in T$ for short when $a$ is an arrow of $T$. Let us prove the claim. Take $x\in R(Q,\mathcal{O}_{\alpha})_{\leq T}\cap R(Q,\mathcal{O}_{\alpha})_{T'}$; since $T_{a}\subseteq\cup_{b\in T'_{a}}T_{b}$, we obtain:$$v_{T'_{a}}=\max_{b\in T'_{a}}\{v_{T'}(b)\}=\max_{b\in T'_{a}}\{\mathop{\mathrm{val}}(x_{b})\}\geq\max_{b\in T'_{a}}\{v_{T_{b}}\}\geq v_{T_{a}}.$$ The second equality holds because $x\in R(Q,\mathcal{O}_{\alpha})_{T'}$, while $\mathop{\mathrm{val}}(x_{b})\geq v_{T_{b}}$ because $x\in R(Q,\mathcal{O}_{\alpha})_{\leq T}$. We need to further show that, if $e_{T_{a}}<a\leq e_{T'_{a}}$, then $v_{T'_{a}}>v_{T_{a}}$.
Suppose $e_{T_{a}}<a\leq e_{T'_{a}}$ and consider some $b\in T'_{a}$ such that $e_{T_{a}}\in T_{b}$. Then:$$v_{T'_a}\geq v_{T'}(b)=\mathop{\mathrm{val}}(x_b)\geq v_{T_b}+\delta_{b>e_{T_b}}\geq v_T(e_{T_a})=v_{T_a}.$$ We argue that either the leftmost or the rightmost inequality is strict. Indeed, if $v_{T'_{a}}=v_{T'}(b)$ and $v_{T_{b}}=v_{T_{a}}$, then $b\geq e_{T'_{a}}$ (as $v_{T'}(b)=v_{T'}(e_{T'_{a}})$) and $e_{T_{a}}\geq e_{T_{b}}$ (as $v_{T}(e_{T_{a}})=v_{T}(e_{T_{b}})$). Since we assumed that $e_{T_{a}}<a\leq e_{T'_{a}}$, we obtain $b>e_{T_{b}}$ and the rightmost inequality is strict. This concludes the proof. ◻ The partial order on valued spanning trees is then given by $T'\leq T$ if, and only if, $R(Q,\mathcal{O}_{\alpha})_{T'}\subseteq R(Q,\mathcal{O}_{\alpha})_{\leq T}$. Let us now compute the compactly supported cohomology of $\left[\mu_{Q,\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)_{T}/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\right]$. **Proposition 42**. *Let $T$ be a valued spanning tree of $Q$. Denote by $Q',T'$ the quiver and valued spanning tree obtained from $Q,T$ by running the contraction-deletion algorithm for an arbitrary $x\in R(Q,\mathcal{O}_{\alpha})_{T}$ and stopping after the first occurrence of step 3. Let $\lambda'\in\mathbb{Z}^{Q'_{0}}$ be the vector obtained from $\lambda$ by applying the corresponding contractions. Consider:* - *$n_{1}$ the number of loops of $Q$;* - *$n_{2}$ the number of non-loop arrows $a\in Q_{1}$ which get contracted into loops during step 1 and satisfy $a<e_{T_{a}}$;* - *$n_{3}$ the number of non-loop arrows $a\in Q_{1}$ which get contracted into loops during step 1 and satisfy $a>e_{T_{a}}$.* *Then:$$\mathrm{H}_{\mathrm{c}}^{\bullet}\left(\left[\mu_{Q,\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)_T/\left(\mathcal{O}_{\alpha}^{\times}\right)^{Q_0}\right]\right) \simeq \mathrm{H}_{\mathrm{c}}^{\bullet}\left(\left[\mu_{Q',\alpha-1}^{-1}(t^{\alpha-2}\cdot\lambda')_{T'}/\left(\mathcal{O}_{\alpha-1}^{\times}\right)^{Q'_0}\right]\right) \otimes\mathbb{L}^{\otimes\left(2\alpha n_1+2\alpha n_2+(2\alpha-1)n_3+\sharp Q'_1-\sharp Q'_0\right)}.$$* *Proof.* We claim that there is an $\left(\mathcal{O}_{\alpha}^{\times}\right)^{Q_{0}}$-equivariant isomorphism:$$\Psi: \mu_{Q,\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)_T \simeq \left( \mu_{Q',\alpha-1}^{-1}(t^{\alpha-2}\cdot\lambda')_{T'} \times^{\left(\mathcal{O}_{\alpha}^{\times}\right)^{Q'_0}}\left(\mathcal{O}_{\alpha}^{\times}\right)^{Q_0} \right) \times\mathbb{A}^{2\alpha n_1}\times\mathbb{A}^{2\alpha n_2+(2\alpha-1)n_3}\times\mathbb{A}^{\sharp Q'_1}.$$ Let us call: - $a_{1},\ldots,a_{s}\in Q_{1}$ the arrows of $T$ which get contracted (in step 1) during the construction of $Q'$; - $a'_{1},\ldots,a'_{n_{1}}\in Q_{1}$ the loops of $Q$; - $a''_{1},\ldots,a''_{n_{2}}\in Q_{1}$ the non-loop arrows of $Q$ which get contracted into loops (in step 1) and deleted (in step 2) during the construction of $Q'$ and which satisfy $a<e_{T_{a}}$; - $b''_{1},\ldots,b''_{n_{3}}\in Q_{1}$ the non-loop arrows of $Q$ which get contracted into loops (in step 1) and deleted (in step 2) during the construction of $Q'$ and which satisfy $a>e_{T_{a}}$. Let $(x,y)\in\mu_{Q,\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)_{T}$. One can check that $(x,y)$ determines a point $(x',y')\in\mu_{Q',\alpha}^{-1}(t^{\alpha-1}\cdot\lambda')_{T'}$ such that, for all $a\in Q'_{1}$, $x'_{a}\in t\mathcal{O}_{\alpha}$.
We describe the components of $\Psi(x,y)$: - Note that $\mu_{Q',\alpha-1}^{-1}(t^{\alpha-2}\cdot\lambda')_{T'}\times^{\left(\mathcal{O}_{\alpha}^{\times}\right)^{Q'_{0}}}\left(\mathcal{O}_{\alpha}^{\times}\right)^{Q_{0}}\simeq\mu_{Q',\alpha-1}^{-1}(t^{\alpha-2}\cdot\lambda')_{T'}\times\left(\mathcal{O}_{\alpha}^{\times}\right)^{s}$. The component of $\Psi(x,y)$ along $\mu_{Q',\alpha-1}^{-1}(t^{\alpha-2}\cdot\lambda')_{T'}$ is induced by $(x',y')$ via the morphisms of $\mathcal{O}_{\alpha}$-modules $t\mathcal{O}_{\alpha}\simeq\mathcal{O}_{\alpha-1}$ (for the $x$-coordinate) and $\mathcal{O}_{\alpha}\twoheadrightarrow\mathcal{O}_{\alpha-1}$ (for the $y$-coordinate). The component along $\left(\mathcal{O}_{\alpha}^{\times}\right)^{s}$ is $(x_{a_{t}})_{1\leq t\leq s}$; - $\Psi(x,y)_{\mathbb{A}^{2\alpha n_{1}}}=(x_{a'_{n}},y_{a'_{n}})_{1\leq n\leq n_{1}}$; note that the moment map equation imposes no conditions on $(x_{a'_{n}},y_{a'_{n}})$, as $\mathbf{r}=\underline{1}$; - $\Psi(x,y)_{\mathbb{A}^{2\alpha n_{2}+(2\alpha-1)n_{3}}}=\left((x_{a''_{n}},y_{a''_{n}})_{1\leq n\leq n_{2}},(x_{b''_{n}},y_{b''_{n}})_{1\leq n\leq n_{3}}\right)$; note that $x_{b''_{n}}\in t\mathcal{O}_{\alpha}$ by assumption; moreover the coordinates $(x_{a''_{n}},y_{a''_{n}})_{1\leq n\leq n_{2}},(x_{b''_{n}},y_{b''_{n}})_{1\leq n\leq n_{3}}$ may be chosen freely and determine $x_{a_{1}}y_{a_{1}},\ldots,x_{a_{s}}y_{a_{s}}$ through the moment map equation; - Write $y'_{a}=\sum_{k}y'_{a,k}\cdot t^{k}\in\mathcal{O}_{\alpha}$ for $a\in Q'_{1}$; then $\Psi(x,y)_{\mathbb{A}^{\sharp Q'_{1}}}=(y'_{a,\alpha-1})_{a\in Q'_{1}}$. Consequently, $\Psi(x,y)$ contains all the coordinates of $(x,y)$ except for $y_{a_{1}},\ldots,y_{a_{s}}$. These are determined from $\Psi(x,y)$ using the moment map equation. The action of $\left(\mathcal{O}_{\alpha}^{\times}\right)^{Q_{0}}$ on $\mu_{Q',\alpha-1}^{-1}(t^{\alpha-2}\cdot\lambda')_{T'}\times\left(\mathcal{O}_{\alpha}^{\times}\right)^{s}$ is induced by a choice of splitting $\left(\mathcal{O}_{\alpha}^{\times}\right)^{Q_{0}}\simeq\left(\mathcal{O}_{\alpha}^{\times}\right)^{Q'_{0}}\times\left(\mathcal{O}_{\alpha}^{\times}\right)^{s}$ and makes the morphism $\mu_{Q,\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)_{T}\rightarrow\mu_{Q',\alpha-1}^{-1}(t^{\alpha-2}\cdot\lambda')_{T'}\times^{\left(\mathcal{O}_{\alpha}^{\times}\right)^{Q'_{0}}}\left(\mathcal{O}_{\alpha}^{\times}\right)^{Q_{0}}$ equivariant for $\left(\mathcal{O}_{\alpha}^{\times}\right)^{Q_{0}}$. The $\left(\mathcal{O}_{\alpha}^{\times}\right)^{Q_{0}}$-action on the remaining components of the right-hand side is transferred from the action on the left-hand side.
Using the isomorphism $\Psi$, Lemmas [Lemma 25](#Lem/AffFib){reference-type="ref" reference="Lem/AffFib"} and [Lemma 27](#Lem/GrpChg){reference-type="ref" reference="Lem/GrpChg"} yield:$$\mathrm{H}_{\mathrm{c}}^{\bullet}\left(\left[\mu_{Q,\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)_T/\left(\mathcal{O}_{\alpha}^{\times}\right)^{Q_0}\right]\right) \simeq \mathrm{H}_{\mathrm{c}}^{\bullet}\left(\left[\mu_{Q',\alpha-1}^{-1}(t^{\alpha-2}\cdot\lambda')_{T'}/\left(\mathcal{O}_{\alpha}^{\times}\right)^{Q'_0}\right]\right) \otimes\mathbb{L}^{\otimes\left(2\alpha n_1+2\alpha n_2+(2\alpha-1)n_3+\sharp Q'_1\right)}.$$ Since the action of $\left(\mathcal{O}_{\alpha}^{\times}\right)^{Q'_{0}}$ on $\mu_{Q',\alpha-1}^{-1}(t^{\alpha-2}\cdot\lambda')_{T'}$ factors through the quotient group $\left(\mathcal{O}_{\alpha-1}^{\times}\right)^{Q'_{0}}$, Lemma [Lemma 26](#Lem/DepthChg){reference-type="ref" reference="Lem/DepthChg"} gives the desired formula. ◻ When the contraction-deletion algorithm terminates, the computation finishes with $\mathrm{H}_{\mathrm{c}}^{\bullet}\left(\left[\mathrm{pt}/\mathcal{O}_{\alpha'}^{\times}\right]\right)\simeq\mathbb{L}^{\otimes-\alpha'}\otimes\mathrm{H}_{\mathrm{c}}^{\bullet}\left(\mathrm{B}\mathbb{G}_{\mathrm{m}}\right)$ , which is pure. This shows that $\mathrm{H}_{\mathrm{c}}^{\bullet}\left(\left[\mu_{Q,\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)_{T}/\left(\mathcal{O}_{\alpha}^{\times}\right)^{Q_{0}}\right]\right)$ is pure, of Tate type, which allows us to conclude using Lemma [Lemma 29](#Lem/Strat){reference-type="ref" reference="Lem/Strat"} and Theorem [Theorem 34](#Thm/CountMomMapGenFib){reference-type="ref" reference="Thm/CountMomMapGenFib"}: **Theorem 43**. *Let $\mathbf{r}=\underline{1}$. then:$$\mathrm{H}_{\mathrm{c}}^{\bullet}\left(\left[\mu_{Q,\mathbf{r},\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\right]\right) \simeq A_{Q,\mathbf{r},\alpha}(\mathbb{L})\otimes\mathbb{L}^{1-\alpha\langle\mathbf{r},\mathbf{r}\rangle}\otimes\mathrm{H}_{\mathrm{c}}^{\bullet}\left(\mathrm{B}\mathbb{G}_{\mathrm{m}}\right)$$In particular, $\mathrm{H}_{\mathrm{c}}^{\bullet}\left(\left[\mu_{Q,\mathbf{r},\alpha}^{-1}(t^{\alpha-1}\cdot\lambda)/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\right]\right)$ carries a pure Hodge structure.* There is more: as can be seen from the proof of Proposition [Proposition 41](#Prop/Strata){reference-type="ref" reference="Prop/Strata"}, the parameter $\lambda$ plays no role in computing the cohomology of the strata. If we replace $\lambda$ with 0, we may compute $\mathrm{H}_{\mathrm{c}}^{\bullet}\left(\left[\mu_{Q,\mathbf{r},\alpha}^{-1}(0)/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\right]\right)$ by pulling back a stratification of $R(Q,\mathcal{O}_{\alpha})$ instead of $R(Q,\mathcal{O}_{\alpha})_{\mathrm{ind.}}$. For a general $x\in R(Q,\mathcal{O}_{\alpha})$ the restriction of $Q$ to $\{a\in Q_{1}\ \vert\ x_{a}\ne0\}$ may not be connected, so we should index strata of $R(Q,\mathcal{O}_{\alpha})$ by (i) partitions $Q_{0}=I_{1}\sqcup\ldots\sqcup I_{s}$ and (ii) valued spanning trees for $Q\restriction_{I_{1}},\ldots,Q\restriction_{I_{s}}$. 
Therefore, given such a collection $(I=(I_{1},\ldots,I_{s}),T=(T_{1},\ldots,T_{s}))$, we define:$$R(Q,\mathcal{O}_{\alpha})_{(I,T)}:= \left\{ x\in R(Q,\mathcal{O}_{\alpha})\ \left\vert \begin{array}{l} \forall 1\leq t\leq s,\ (x_a)_{a\in Q_{1,I_t}}\in R(Q\restriction_{I_t},\mathcal{O}_{\alpha})_{T_t} \\ \forall a\not\in\bigcup_{t=1}^sQ_{1,I_t},\ x_a=0 \end{array} \right. \right\} .$$ The same proof as Proposition [Proposition 41](#Prop/Strata){reference-type="ref" reference="Prop/Strata"} shows that $\mathrm{H}_{\mathrm{c}}^{\bullet}\left(\left[\mu_{Q,\alpha}^{-1}(0)_{(I,T)}/\left(\mathcal{O}_{\alpha}^{\times}\right)^{Q_{0}}\right]\right)$ is pure, of Tate type for all $(I,T)$. Combined with Theorem [Theorem 31](#Thm/ExpFmlKacPol){reference-type="ref" reference="Thm/ExpFmlKacPol"}, this shows: **Theorem 44**. *Let $\mathbf{r}=\underline{1}$. Then:* *$$\mathrm{H}_{\mathrm{c}}^{\bullet}\left(\left[\mu_{Q,\mathbf{r},\alpha}^{-1}(0)/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\right]\right) \otimes\mathbb{L}^{\otimes\alpha\langle\mathbf{r},\mathbf{r}\rangle} \simeq \bigoplus_{Q_0=I_1\sqcup\ldots\sqcup I_s} \bigotimes_{j=1}^s \left( A_{Q\restriction_{I_j},\mathbf{r}\restriction_{I_j},\alpha}(\mathbb{L})\otimes\mathbb{L}\otimes \mathrm{H}_{\mathrm{c}}^{\bullet}(\mathrm{B}\mathbb{G}_m) \right).$$* *In particular, $\mathrm{H}_{\mathrm{c}}^{\bullet}\left(\left[\mu_{Q,\mathbf{r},\alpha}^{-1}(0)/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\right]\right)$ carries a pure Hodge structure.* *Remark 45*. Theorem [Theorem 44](#Thm/CohIntgr){reference-type="ref" reference="Thm/CohIntgr"} may be interpreted as a higher-depth analogue of a PBW theorem for preprojective cohomological Hall algebras [@Dav17a], restricted to $\mathbf{r}\leq\underline{1}$. Let us call $\mathrm{Sym}$ the operator on $\mathbb{Z}\times\mathbb{Z}_{\geq0}^{Q_{0}}$-graded mixed Hodge structures which categorifies the plethystic exponential (see [@DM20 §3.2]). Then Theorem [Theorem 44](#Thm/CohIntgr){reference-type="ref" reference="Thm/CohIntgr"} can be stated as follows:$$\bigoplus_{\mathbf{r}\leq\underline{1}}\mathrm{H}_{\mathrm{c}}^{\bullet}\left(\left[\mu_{Q,\mathbf{r},\alpha}^{-1}(0)/\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\right]\right)\otimes\mathbb{L}^{\otimes\alpha\langle\mathbf{r},\mathbf{r}\rangle} \simeq \mathrm{Sym} \left. \left( \bigoplus_{\mathbf{r}>0}A_{Q,\mathbf{r},\alpha}(\mathbb{L})\otimes\mathbb{L}\otimes \mathrm{H}_{\mathrm{c}}^{\bullet}(\mathrm{B}\mathbb{G}_m) \right) \right\vert_{\mathbf{r}\leq\underline{1}}.$$ *Remark 46*. Note that, while in [@Dav17a; @Dav18], positivity of Kac polynomials is deduced from a cohomological PBW isomorphism, here the proof of Theorem [Theorem 44](#Thm/CohIntgr){reference-type="ref" reference="Thm/CohIntgr"} *relies* on positivity for $A_{Q,\mathbf{r},\alpha}$, in order to make sense of the graded pure Hodge structure $A_{Q,\mathbf{r},\alpha}(\mathbb{L})$. In [@Dav17a], the existence of graded mixed Hodge structures $\mathrm{BPS}_{Q,\mathbf{d}}^{\vee},\ \mathbf{d}\in\mathbb{Z}_{\geq0}^{Q_{0}}$ satisfying$$\bigoplus_{\mathbf{d}\geq0} \mathrm{H}_{\mathrm{c}}^{\bullet}\left(\left[\mu_{Q,\mathbf{d}}^{-1}(0)/\mathop{\mathrm{GL}}(\mathbf{d})\right]\right)\otimes\mathbb{L}^{\otimes\langle\mathbf{d},\mathbf{d}\rangle} \simeq \mathrm{Sym} \left( \bigoplus_{\mathbf{d}\geq0}\mathrm{BPS}_{Q,\mathbf{d}}^{\vee}\otimes\mathbb{L}\otimes \mathrm{H}_{\mathrm{c}}^{\bullet}(\mathrm{B}\mathbb{G}_m) \right)$$ is proved using cohomological Donaldson-Thomas theory.
Purity of $\mathrm{H}_{\mathrm{c}}^{\bullet}\left(\left[\mu_{Q,\mathbf{d}}^{-1}(0)/\mathop{\mathrm{GL}}(\mathbf{d})\right]\right)$ then implies purity of $\mathrm{BPS}_{Q,\mathbf{d}}^{\vee}$, which in turn shows that $A_{Q,\mathbf{d}}$ has non-negative coefficients. In our setting, however, the isomorphism of Theorem [Theorem 44](#Thm/CohIntgr){reference-type="ref" reference="Thm/CohIntgr"} is deduced from the fact that both sides are pure, of Tate type and have the same E-polynomial by Theorem [Theorem 31](#Thm/ExpFmlKacPol){reference-type="ref" reference="Thm/ExpFmlKacPol"}. The well-definedness of $A_{Q,\mathbf{r},\alpha}(\mathbb{L})$ relies on Theorem [Theorem 39](#Thm/PositivityToricKacPol){reference-type="ref" reference="Thm/PositivityToricKacPol"}, since we do not know by other means that there exists a graded pure Hodge structure analogous to $\mathrm{BPS}_{Q,\mathbf{d}}^{\vee}$. # Beyond the toric setting [\[Sect/HigherRk\]]{#Sect/HigherRk label="Sect/HigherRk"} In this last section, we collect some computations of $A_{Q,\mathbf{r},\alpha}$ for larger rank vectors. We follow the approach of Kac, Stanley and later Hua, and use Burnside's lemma to compute $A_{Q,\mathbf{r},\alpha}$ [@Kac83; @Hua00]. The computation turns out to be rather involved, already for low ranks and simple quivers, as a good knowledge of conjugacy classes of $\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})$ is required. We exploit results of Avni, Onn, Prasad and Vaserstein and of Avni, Klopsch, Onn and Voll [@AOPV09; @AKOV16] to obtain closed formulas for $g$-loop quivers in ranks 2 and 3. This involves solving a linear recurrence relation over $\alpha\geq1$. Throughout this section, we assume that $\mathbb{K}=\mathbb{F}_{q}$. Let us first apply Burnside's lemma. Let $M_{Q,\mathbf{r},\alpha}\in\mathcal{V}$ denote the number of isomorphism classes of locally free quiver representations over $\mathcal{O}_{\alpha}$, in rank $\mathbf{r}$. This is the orbit count for the action $\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})\circlearrowleft R(Q,\mathbf{r},\mathcal{O}_{\alpha})$ and so we obtain:$$M_{Q,\mathbf{r},\alpha}=\sum_{[\gamma]\in\mathrm{Cl}(\mathbf{r},\mathcal{O}_{\alpha})}\frac{\sharp R(Q,\mathbf{r},\mathcal{O}_{\alpha})^{\gamma}}{\sharp Z_{\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})}(\gamma)},$$ where the sum runs over the set $\mathrm{Cl}(\mathbf{r},\mathcal{O}_{\alpha})$ of conjugacy classes of $\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})$, $R(Q,\mathbf{r},\mathcal{O}_{\alpha})^{\gamma}$ is the set of points fixed by $\gamma$ and $Z_{\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})}(\gamma)$ is the centraliser of $\gamma$ in $\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})$. Then we can recover $A_{Q,\mathbf{r},\alpha}$ from $M_{Q,\mathbf{r},\alpha}$ using the following formula [@Moz19 Thm. 1.2.]:$$\sum_{\mathbf{r}\in\mathbb{N}^{Q_0}}M_{Q,\mathbf{r},\alpha}\cdot t^{\mathbf{r}}=\mathop{\mathrm{Exp}}_{q,t}\left(\sum_{\mathbf{r}\in\mathbb{N}^{Q_0}\setminus\{0\}}A_{Q,\mathbf{r},\alpha}\cdot t^{\mathbf{r}}\right).$$ Conjugacy classes of $\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})$ are completely classified in ranks 2 and 3 [@AOPV09; @AKOV16]. However, as noted in [@AOPV09 §4], computing the space of intertwiners between two distinct conjugacy classes proves intractable. Therefore, we reduce to the case of a quiver $Q$ with one vertex and $g$ loops.
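Before turning to conjugacy classes, note that for very small parameters the orbit count $M_{Q,\mathbf{r},\alpha}$ can be checked directly. The following is a minimal brute-force sketch (not the method used below) which enumerates $\mathop{\mathrm{GL}}(2,\mathcal{O}_{\alpha})$ for the $g$-loop quiver in rank 2 and applies Burnside's lemma; the parameters $p=2$, $\alpha=2$, $g=1$ are an arbitrary illustrative choice.

```python
import itertools

# Brute-force Burnside count of M_{Q,2,alpha} for the g-loop quiver over
# O_alpha = F_p[t]/(t^alpha).  Illustrative only: tiny p, alpha, g.
p, alpha, g = 2, 2, 1

# Elements of O_alpha as coefficient tuples (c_0, ..., c_{alpha-1}).
O = list(itertools.product(range(p), repeat=alpha))

def o_add(a, b):
    return tuple((x + y) % p for x, y in zip(a, b))

def o_mul(a, b):
    c = [0] * alpha
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            if i + j < alpha:
                c[i + j] = (c[i + j] + x * y) % p
    return tuple(c)

def is_unit(a):
    return a[0] != 0  # invertible in the local ring iff the constant term is nonzero

# 2x2 matrices over O_alpha as row-major 4-tuples (m11, m12, m21, m22).
def m_mul(A, B):
    a, b, c, d = A
    e, f, gg, h = B
    return (o_add(o_mul(a, e), o_mul(b, gg)), o_add(o_mul(a, f), o_mul(b, h)),
            o_add(o_mul(c, e), o_mul(d, gg)), o_add(o_mul(c, f), o_mul(d, h)))

def det(A):
    a, b, c, d = A
    minus_bc = tuple((-x) % p for x in o_mul(b, c))
    return o_add(o_mul(a, d), minus_bc)

mats = list(itertools.product(O, repeat=4))
GL = [A for A in mats if is_unit(det(A))]
reps = list(itertools.product(mats, repeat=g))  # R(Q, 2, O_alpha) = gl_2(O_alpha)^g

# Burnside: #orbits = (1/|GL|) * sum over gamma of #fixed points,
# where gamma fixes (X_1,...,X_g) iff it commutes with every X_i.
total = sum(
    sum(1 for X in reps if all(m_mul(gamma, Xi) == m_mul(Xi, gamma) for Xi in X))
    for gamma in GL
)
assert total % len(GL) == 0
print("M_{Q,2,alpha} for p=%d, alpha=%d, g=%d:" % (p, alpha, g), total // len(GL))
```

Beyond such tiny parameters this enumeration is hopeless, which is why we work with conjugacy classes and their branching rules instead.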
One could then compute the centralisers in $\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})$ and $\mathfrak{gl}(\mathbf{r},\mathcal{O}_{\alpha})$ for representatives of all conjugacy classes in $\mathop{\mathrm{GL}}(\mathbf{r},\mathcal{O}_{\alpha})$. However, we prefer to compute summands in Burnside's formula by induction on $\alpha$, using the branching rules for conjugacy classes described in [@AOPV09; @AKOV16]. Let us give details on this when $r=2$. Recall from [@AOPV09 §2] that matrices in $\mathfrak{gl}(2,\mathcal{O}_{\alpha})$ are either scalar (we call these of type I) or conjugate to a unique matrix of the form $$d\cdot\mathrm{I}_2+t^j\cdot \left( \begin{array}{cc} 0 & a_0 \\ 1 & a_1 \end{array} \right),$$ where $0\leq j\leq\alpha-1$, $d\in\bigoplus_{l=0}^{j-1}\mathbb{F}_{q}\cdot t^{l}$ and $a_{0},a_{1}\in\mathcal{O}_{\alpha-j}$ (we call these of type II). We split the set of conjugacy classes in $\mathop{\mathrm{GL}}(2,\mathcal{O}_{\alpha})$ in four disjoint subsets $\mathrm{Cl}(2,\mathcal{O}_{\alpha})=\mathrm{Cl}(2,\mathcal{O}_{\alpha})_{\mathrm{I}}\sqcup\mathrm{Cl}(2,\mathcal{O}_{\alpha})_{\mathrm{II_{1}}}\sqcup\mathrm{Cl}(2,\mathcal{O}_{\alpha})_{\mathrm{II_{2}}}\sqcup\mathrm{Cl}(2,\mathcal{O}_{\alpha})_{\mathrm{II_{3}}}$ and compute the sums$$S_{\bullet,\alpha}:=\sum_{[\gamma]\in\mathrm{Cl}(2,\mathcal{O}_{\alpha})_{\bullet}}\frac{\sharp R(Q,2,\mathcal{O}_{\alpha})^{\gamma}}{\sharp Z_{\mathop{\mathrm{GL}}(2,\mathcal{O}_{\alpha})}(\gamma)}$$ recursively, for $\bullet\in\{\mathrm{I},\mathrm{II}_{1},\mathrm{II}_{2},\mathrm{II}_{3}\}$. The subsets $\mathrm{Cl}(2,\mathcal{O}_{\alpha})_{\bullet}$ are defined as follows: - $\mathrm{Cl}(2,\mathcal{O}_{\alpha})_{\mathrm{I}}$ consists of conjugacy classes of scalar matrices; - $\mathrm{Cl}(2,\mathcal{O}_{\alpha})_{\mathrm{II}_{1}}$ consists of conjugacy classes of type II, where $T^{2}-a_{1}\cdot T-a_{0}$ is split in $\mathbb{F}_{q}[T]$ and has two distinct simple roots; - $\mathrm{Cl}(2,\mathcal{O}_{\alpha})_{\mathrm{II}_{2}}$ consists of conjugacy classes of type II, where $T^{2}-a_{1}\cdot T-a_{0}$ is split in $\mathbb{F}_{q}[T]$ and has a double root; - $\mathrm{Cl}(2,\mathcal{O}_{\alpha})_{\mathrm{II}_{3}}$ consists of conjugacy classes of type II, where $T^{2}-a_{1}\cdot T-a_{0}$ is irreducible in $\mathbb{F}_{q}[T]$. Conjugacy classes belonging to the same subset $\mathrm{Cl}(2,\mathcal{O}_{\alpha})_{\bullet}$ have similar centralisers in $\mathop{\mathrm{GL}}(2,\mathcal{O}_{\alpha})$ and $\mathfrak{gl}(2,\mathcal{O}_{\alpha})$. Let $\mathop{\mathrm{GL}}^{1}(2,\mathcal{O}_{\alpha})\subseteq\mathop{\mathrm{GL}}(2,\mathcal{O}_{\alpha})$ be the kernel of reduction modulo $t$ i.e. $\mathop{\mathrm{GL}}^{1}(2,\mathcal{O}_{\alpha})=\mathrm{I}_{2}+t\cdot\mathfrak{gl}(2,\mathcal{O}_{\alpha})$. Following [@AKOV16 §2.1.], for $\sigma\in\{\mathrm{I},\mathrm{II}_{1},\mathrm{II}_{2},\mathrm{II}_{3}\}$ and $\gamma\in\mathop{\mathrm{GL}}(2,\mathcal{O}_{\alpha})$ of type $\sigma$, we call $\Vert\sigma\Vert$ the cardinality of $Z_{\mathop{\mathrm{GL}}(2,\mathcal{O}_{\alpha})}(\gamma)/\left(Z_{\mathop{\mathrm{GL}}(2,\mathcal{O}_{\alpha})}(\gamma)\cap\mathop{\mathrm{GL}}^{1}(2,\mathcal{O}_{\alpha})\right)$ and $\dim(\sigma)$ its dimension as an algebraic group. These do not depend on the choice of $\gamma$. Consider $\gamma\in\mathop{\mathrm{GL}}(2,\mathcal{O}_{\alpha+1})$, of type $\sigma$, and its reduction $\overline{\gamma}\in\mathop{\mathrm{GL}}(2,\mathcal{O}_{\alpha})$, of type $\tau$. Then, arguing as in the proof of [@AKOV16 Prop. 
2.5.], we easily obtain:$$\frac{\sharp Z_{\mathop{\mathrm{GL}}(2,\mathcal{O}_{\alpha+1})}(\gamma)}{\sharp Z_{\mathop{\mathrm{GL}}(2,\mathcal{O}_{\alpha})}(\overline{\gamma})} = q^{\dim(\tau)}\cdot\frac{\Vert\sigma\Vert}{\Vert\tau\Vert},$$ $$\frac{\sharp Z_{\mathfrak{gl}(2,\mathcal{O}_{\alpha+1})}(\gamma)}{\sharp Z_{\mathfrak{gl}(2,\mathcal{O}_{\alpha})}(\overline{\gamma})} = q^{\dim(\sigma)}.$$ Moreover, one can deduce the following branching rules from the description of conjugacy classes in $\mathop{\mathrm{GL}}(2,\mathcal{O}_{\alpha})$. Let $a_{\tau,\sigma}(q)$ be the number of conjugacy classes of type $\sigma$ in $\mathop{\mathrm{GL}}(2,\mathcal{O}_{\alpha+1})$ whose reduction in $\mathop{\mathrm{GL}}(2,\mathcal{O}_{\alpha})$ is of type $\tau$. Then: $$\left(a_{\tau,\sigma}(q)\right)_{\substack{\sigma\in\{\mathrm{I},\mathrm{II}_{1},\mathrm{II}_{2},\mathrm{II}_{3}\} \\ \tau\in\{\mathrm{I},\mathrm{II}_{1},\mathrm{II}_{2},\mathrm{II}_{3}\}}} = \left( \begin{array}{cccc} q & 0 & 0 & 0 \\ \frac{q(q-1)}{2} & q^2 & 0 & 0 \\ q & 0 & q^2 & 0 \\ \frac{q(q-1)}{2} & 0 & 0 & q^2 \end{array} \right) ,$$ where rows are labeled by $\sigma$ and columns are labeled by $\tau$. Putting everything together, we obtain:$$\left( \begin{array}{c} S_{\mathrm{I},\alpha+1} \\ S_{\mathrm{II}_1,\alpha+1} \\ S_{\mathrm{II}_2,\alpha+1} \\ S_{\mathrm{II}_3,\alpha+1} \end{array} \right) = \left( \begin{array}{cccc} q^{4g-3} & 0 & 0 & 0 \\ \frac{1}{2}q^{2g-2}(q-1)(q+1) & q^{2g} & 0 & 0 \\ q^{2g-3}(q-1)(q+1) & 0 & q^{2g} & 0 \\ \frac{1}{2}q^{2g-2}(q-1)^2 & 0 & 0 & q^{2g} \end{array} \right) \times \left( \begin{array}{c} S_{\mathrm{I},\alpha} \\ S_{\mathrm{II}_1,\alpha} \\ S_{\mathrm{II}_2,\alpha} \\ S_{\mathrm{II}_3,\alpha} \end{array} \right) \ ;\ \left( \begin{array}{c} S_{\mathrm{I},1} \\ S_{\mathrm{II}_1,1} \\ S_{\mathrm{II}_2,1} \\ S_{\mathrm{II}_3,1} \end{array} \right) = \left( \begin{array}{c} \frac{q^{4g}}{q(q-1)(q+1)} \\ \frac{q^{2g}(q-2)}{2(q-1)} \\ q^{2g-1} \\ \frac{q^{2g+1}}{2(q+1)} \end{array} \right) ,$$ where the $(\sigma,\tau)$-entry of the transition matrix is $q^{g\dim(\sigma)-\dim(\tau)}\cdot\frac{\Vert\tau\Vert}{\Vert\sigma\Vert}\cdot a_{\tau,\sigma}(q)$.
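As an illustration of the computer-assisted step, the following minimal SymPy sketch simply re-encodes the recursion and initial data displayed above (for the sample value $g=2$, with $q$ kept symbolic) and iterates it; by Burnside's lemma, the sum of the four entries at depth $\alpha$ is $M_{Q,2,\alpha}$. Passing from these sums to the closed formula below requires, in addition, diagonalising the transition matrix and inverting the plethystic exponential from the previous formula, which this sketch does not attempt.

```python
import sympy as sp

# Iterate the rank-2 recursion for the g-loop quiver, with q symbolic.
q = sp.symbols('q')
g = 2  # sample value

M = sp.Matrix([
    [q**(4*g - 3), 0, 0, 0],
    [sp.Rational(1, 2)*q**(2*g - 2)*(q - 1)*(q + 1), q**(2*g), 0, 0],
    [q**(2*g - 3)*(q - 1)*(q + 1), 0, q**(2*g), 0],
    [sp.Rational(1, 2)*q**(2*g - 2)*(q - 1)**2, 0, 0, q**(2*g)],
])

# Initial data S_{I,1}, S_{II_1,1}, S_{II_2,1}, S_{II_3,1}.
S = sp.Matrix([
    q**(4*g) / (q*(q - 1)*(q + 1)),
    q**(2*g)*(q - 2) / (2*(q - 1)),
    q**(2*g - 1),
    q**(2*g + 1) / (2*(q + 1)),
])

for alpha in range(1, 5):
    # The sum of the four entries is the Burnside count M_{Q,2,alpha}.
    print(alpha, sp.simplify(sum(S)))
    S = M * S
```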
This matrix can easily be diagonalised and with the help of a computer, we finally get:$$A_{Q,2,\alpha}= \frac{q^{2\alpha g-1}(q^{2g}-1)(q^{\alpha(2g-3)}-1)}{(q^2-1)(q^{2g-3}-1)}.$$ In rank $r=3$, we split $\mathrm{Cl}(3,\mathcal{O}_{\alpha})$ into ten types $\mathcal{G},\mathcal{L},\mathcal{J},\mathcal{T}_{1},\mathcal{T}_{2},\mathcal{T}_{3},\mathcal{M},\mathcal{N},\mathcal{K}_{0},\mathcal{K}_{\infty}$, following [@AKOV16 §2.2.], and we obtain the following recursive relation:$$\begin{array}{l} \left( \begin{array}{c} S_{\mathcal{G},\alpha+1} \\ S_{\mathcal{L},\alpha+1} \\ S_{\mathcal{J},\alpha+1} \\ S_{\mathcal{T}_{1},\alpha+1} \\ S_{\mathcal{T}_{2},\alpha+1} \\ S_{\mathcal{T}_{3},\alpha+1} \\ S_{\mathcal{M},\alpha+1} \\ S_{\mathcal{N},\alpha+1} \\ S_{\mathcal{K}_0,\alpha+1} \\ S_{\mathcal{K}_{\infty},\alpha+1} \end{array} \right) = M \times \left( \begin{array}{c} S_{\mathcal{G},\alpha} \\ S_{\mathcal{L},\alpha} \\ S_{\mathcal{J},\alpha} \\ S_{\mathcal{T}_{1},\alpha} \\ S_{\mathcal{T}_{2},\alpha} \\ S_{\mathcal{T}_{3},\alpha} \\ S_{\mathcal{M},\alpha} \\ S_{\mathcal{N},\alpha} \\ S_{\mathcal{K}_0,\alpha} \\ S_{\mathcal{K}_{\infty},\alpha} \end{array} \right) \ ; \ \left( \begin{array}{c} S_{\mathcal{G},1} \\ S_{\mathcal{L},1} \\ S_{\mathcal{J},1} \\ S_{\mathcal{T}_{1},1} \\ S_{\mathcal{T}_{2},1} \\ S_{\mathcal{T}_{3},1} \\ S_{\mathcal{M},1} \\ S_{\mathcal{N},1} \\ S_{\mathcal{K}_0,1} \\ S_{\mathcal{K}_{\infty},1} \end{array} \right) = \left( \begin{array}{c} \frac{q^{9g-3}}{(q^2-1)(q^3-1)} \\ \frac{q^{5g-1}(q-2)}{(q-1)(q^2-1)} \\ \frac{q^{5g-3}}{q-1} \\ \frac{q^{3g}(q-2)(q-3)}{6(q-1)^2} \\ \frac{q^{3g+1}}{2(q+1)} \\ \frac{q^{3g+1}(q^2-1)}{3(q^3-1)} \\ \frac{q^{3g-1}(q-2)}{q-1} \\ q^{3g-2} \\ 0 \\ 0 \end{array} \right) \text{, where:} \\ \\ M= \left( \begin{array}{cccccccccc} q^{9g-8} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ q^{5g-6}(q^3-1) & q^{5g-3} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \frac{q^{5g-8}(q^2-1)(q^3-1)}{q-1} & 0 & q^{5g-3} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \frac{q^{3g-5}(q-2)(q^2-1)(q^3-1)}{6(q-1)} & \frac{q^{3g-2}(q^2-1)}{2} & 0 & q^{3g} & 0 & 0 & 0 & 0 & 0 & 0 \\ \frac{q^{3g-4}(q-1)(q^3-1)}{2} & \frac{q^{3g-2}(q-1)^2}{2} & 0 & 0 & q^{3g} & 0 & 0 & 0 & 0 & 0 \\ \frac{q^{3g-5}(q-1)(q^2-1)^2}{3} & 0 & 0 & 0 & 0 & q^{3g} & 0 & 0 & 0 & 0 \\ q^{3g-6}(q^2-1)(q^3-1) & q^{3g-3}(q^2-1) & q^{3g-1}(q-1) & 0 & 0 & 0 & q^{3g} & 0 & 0 & 0 \\ q^{3g-7}(q^2-1)(q^3-1) & 0 & q^{3g-3}(q-1)^2 & 0 & 0 & 0 & 0 & q^{3g} & 0 & 0 \\ 0 & 0 & q^{3g-3}(q-1) & 0 & 0 & 0 & 0 & 0 & q^{3g} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & q^{3g} \end{array} \right) . \end{array}$$ Again, the transition matrix $M$ can be diagonalised and we get the following closed formula. **Proposition 47**. *Let $Q$ be the quiver with one vertex and $g\geq1$ loops. Then:$$\begin{split} A_{Q,2,\alpha}= & \frac{q^{2\alpha g-1}(q^{2g}-1)(q^{\alpha(2g-3)}-1)}{(q^2-1)(q^{2g-3}-1)}, \\ A_{Q,3,\alpha}= & \frac{q^{3\alpha g-2}(q^{2g}-1)(q^{2g-1}-1)}{(q^2-1)(q^3-1)(q^{2g-3}-1)(q^{6g-8}-1)(q^{4g-5}-1)} \\ & \cdot \left( q^{\alpha(6g-8)-1}(q^{6g-7}-1)(q^{2g}+1) -q^{\alpha(6g-8)+2g-4}(q^2-1)(q^{4g-3}+1) \right. \\ & \left. +q^{\alpha(2g-3)-1}(q^2+q+1)(q^{2g-1}-1)(q^{6g-8}-1)+(q+1)(q^{8g-10}-1)+q^{2g-4}(q^4+1)(q^{4g-5}-1) \right) . \end{split}$$In particular, $A_{Q,2,\alpha}$ and $A_{Q,3,\alpha}$ are polynomials and $A_{Q,2,\alpha}$ has non-negative coefficients.* Unfortunately, it is not obvious from the above expression that $A_{Q,3,\alpha}$ is a polynomial in $q$ with non-negative coefficients. 
But we can still compute $A_{Q,3,\alpha}$ for small values of $g$ and $\alpha$ and check that this is indeed the case. We write $A_{g,3,\alpha}:=A_{Q,3,\alpha}$ when $Q$ is the $g$-loop quiver. Using a computer, we get: $$\begin{array}{l} g=1: \\ \begin{split} A_{1,3,1} & = q \\ A_{1,3,2} & = q^4 + q^3 + 2 q^2 \\ A_{1,3,3} & = q^7 + q^6 + 3 q^5 + 2 q^4 + 2 q^3 \\ A_{1,3,4} & = q^{10} + q^9 + 3 q^8 + 3 q^7 + 4 q^6 + 2 q^5 + 2 q^4 \\ A_{1,3,5} & = q^{13} + q^{12} + 3 q^{11} + 3 q^{10} + 5 q^9 + 4 q^8 + 4 q^7 + 2 q^6 + 2 q^5 \end{split} \\ \\ g=2: \\ \begin{split} A_{2,3,1} & = q^{10} + q^8 + q^7 + q^6 + q^5 + q^4 \\ A_{2,3,2} & = q^{20} + q^{18} + 2q^{17} + 3q^{16} + 3q^{15} + 4q^{14} + 3q^{13} + 3q^{12} + 2q^{11} + 2q^{10} \\ A_{2,3,3} & = q^{30} + q^{28} + 2q^{27} + 3q^{26} + 3q^{25} + 5q^{24} + 5q^{23} + 7q^{22} + 6q^{21} + 7q^{20} + 5q^{19} + 4q^{18} + 3q^{17} + 2q^{16} \\ A_{2,3,4} & = q^{40} + q^{38} + 2q^{37} + 3q^{36} + 3q^{35} + 5q^{34} + 5q^{33} + 7q^{32} + 7q^{31} \\ & + 9q^{30} + 9q^{29} + 10q^{28} + 9q^{27} + 9q^{26} + 6q^{25} + 5q^{24} + 3q^{23} + 2q^{22} \\ A_{2,3,5} & = q^{50} + q^{48} + 2q^{47} + 3q^{46} + 3q^{45} + 5q^{44} + 5q^{43} + 7q^{42} + 7q^{41} + 9q^{40} + 9q^{39} + 11q^{38} \\ & + 11q^{37} + 13q^{36} + 12q^{35} + 13q^{34} + 11q^{33} + 10q^{32} + 7q^{31} + 5q^{30} + 3q^{29} + 2q^{28} \end{split} \\ \\ g=3: \\ \begin{split} A_{3,3,1} & = q^{19} + q^{17} + q^{16} + q^{15} + q^{14} + 2q^{13} + q^{12} + 2q^{11} + 2q^{10} + q^{9} + q^{8} + q^{7} \\ A_{3,3,2} & = q^{38} + q^{36} + q^{35} + q^{34} + q^{33} + 2q^{32} + 2q^{31} + 3q^{30} + 4q^{29} + 4q^{28} + 4q^{27} \\ & + 5q^{26} + 4q^{25} + 4q^{24} + 4q^{23} + 5q^{22} + 3q^{21} + 4q^{20} + 3q^{19} + 2q^{18} + q^{17} + q^{16} \\ A_{3,3,3} & = q^{57} + q^{55} + q^{54} + q^{53} + q^{52} + 2q^{51} + 2q^{50} + 3q^{49} + 4q^{48} + 4q^{47} + 4q^{46} + 5q^{45} + 4q^{44} + 5q^{43} + 5q^{42} + 7q^{41} + 6q^{40} + \\ & 8q^{39} + 8q^{38} + 8q^{37} + 7q^{36} + 8q^{35} + 7q^{34} + 6q^{33} + 6q^{32} + 6q^{31} + 4q^{30} + 4q^{29} + 3q^{28} + 2q^{27} + q^{26} + q^{25} \\ A_{3,3,4} & = q^{76} + q^{74} + q^{73} + q^{72} + q^{71} + 2q^{70} + 2q^{69} + 3q^{68} + 4q^{67} + 4q^{66} + 4q^{65} + 5q^{64} + 4q^{63} + 5q^{62} + 5q^{61} + 7q^{60} \\ & + 6q^{59} + 8q^{58} + 8q^{57} + 8q^{56} + 8q^{55} + 9q^{54} + 9q^{53} + 9q^{52} + 10q^{51} + 11q^{50} + 10q^{49} + 11q^{48} + 11q^{47} + 11q^{46} \\ & + 9q^{45} + 10q^{44} + 8q^{43} + 7q^{42} + 6q^{41} + 6q^{40} + 4q^{39} + 4q^{38} + 3q^{37} + 2q^{36} + q^{35} + q^{34} \\ A_{3,3,5} & = q^{95} + q^{93} + q^{92} + q^{91} + q^{90} + 2q^{89} + 2q^{88} + 3q^{87} + 4q^{86} + 4q^{85} + 4q^{84} + 5q^{83} + 4q^{82} + 5q^{81} + 5q^{80} + 7q^{79} \\ & + 6q^{78} + 8q^{77} + 8q^{76} + 8q^{75} + 8q^{74} + 9q^{73} + 9q^{72} + 9q^{71} + 10q^{70} + 11q^{69} + 10q^{68} + 12q^{67} + 12q^{66} + 13q^{65} + 12q^{64} \\ & + 14q^{63} + 13q^{62} + 13q^{61} + 13q^{60} + 14q^{59} + 13q^{58} + 13q^{57} + 13q^{56} + 12q^{55} + 10q^{54} + 10q^{53} + 8q^{52} + 7q^{51} + 6q^{50} \\ & + 6q^{49} + 4q^{48} + 4q^{47} + 3q^{46} + 2q^{45} + q^{44} + q^{43} \end{split} \end{array}$$ From the data above, it seems reasonable to expect that $A_{Q,\mathbf{r},\alpha}$ has non-negative coefficients for any quiver and any rank vector. We hope to address this conjecture in future work: **Conjecture 48**. *Let $Q$ be a quiver, $\mathbf{r}\in\mathbb{Z}_{\geq0}^{Q_{0}}$ and $\alpha\geq1$.
Then $A_{Q,\mathbf{r},\alpha}\in\mathbb{Z}_{\geq0}[q]$.* [^1]: École Polytechnique Fédérale de Lausanne, Chair of Arithmetic Geometry (ARG) [^2]: However, in our setting, Theorem [Theorem 5](#Thm/IntroPosToricKacPol){reference-type="ref" reference="Thm/IntroPosToricKacPol"} is not a consequence of Theorem [Theorem 7](#Thm/IntroCohIntgr){reference-type="ref" reference="Thm/IntroCohIntgr"}; see Remark [Remark 46](#Rmk/CohUpgrade=000026Positivity){reference-type="ref" reference="Rmk/CohUpgrade=000026Positivity"}. [^3]: Note that a direct summand of a locally free representation is locally free (this can be seen from the classification of finitely generated $\mathbb{K}[t]$-modules). [^4]: In a poset $\Pi$, $y$ is said to cover $x$ if $x\leq y$ and there exists no $z\in\Pi$ such that $x<z<y$. [^5]: By Proposition [Proposition 13](#Prop/EndRings){reference-type="ref" reference="Prop/EndRings"}, a locally free representation of rank $\underline{1}$ is indecomposable if, and only if, it is absolutely indecomposable.
--- abstract: | Consider a centered smooth Gaussian random field $\{X(t), t\in T \}$ with a general (nonconstant) variance function. In this work, we demonstrate that as $u \to \infty$, the excursion probability ${\mathbb P}\{\sup_{t\in T} X(t) \geq u\}$ can be accurately approximated by ${\mathbb E}\{\chi(A_u)\}$, with an error that decays at a super-exponential rate. Here, $A_u = \{t\in T: X(t)\geq u\}$ represents the excursion set above $u$, and ${\mathbb E}\{\chi(A_u)\}$ is the expectation of its Euler characteristic $\chi(A_u)$. This result substantiates the expected Euler characteristic heuristic for a broad class of smooth Gaussian random fields with diverse covariance structures. In addition, we employ the Laplace method to derive explicit approximations to the excursion probabilities. author: - | Dan Cheng\ Arizona State University title: The expected Euler characteristic approximation to excursion probabilities of smooth Gaussian random fields with general variance functions --- # Introduction Let $X = \{X(t),\, t\in T \}$ represent a real-valued Gaussian random field defined on the probability space $(\Omega, \mathcal{F}, {\mathbb P})$, where $T$ denotes the parameter space. The study of excursion probabilities, denoted as ${\mathbb P}\{\sup_{t\in T} X(t) \geq u \}$, is a classical and fundamental problem in both probability and statistics. It finds extensive applications across numerous domains, including $p$-value computations, risk control, and extreme event analysis. In the field of statistics, excursion probabilities play a critical role in tasks such as controlling family-wise error rates [@TaylorW07; @TaylorW08], constructing confidence bands [@Sun:2001], and detecting signals in noisy data [@Siegmund:1995; @TaylorW07]. However, except in a few special cases, computing the exact values of these probabilities is essentially impossible. To address this challenge, many researchers have developed various methods for precise approximations of ${\mathbb P}\{\sup_{t\in T} X(t) \geq u \}$. These methods include the double sum method [@Piterbarg:1996], the tube method [@Sun93] and the Rice method [@AzaisD02; @AzaisW09]. For comprehensive theoretical insights and related applications, we refer readers to the survey by @Adler00 and the monographs by @Piterbarg:1996, @Adler:2007, and @AzaisW09, as well as the references therein. In recent years, the expected Euler characteristic (EEC) method has emerged as a powerful tool for approximating excursion probabilities. This method, originating from the works of @TTA05 and @Adler:2007, provides the following approximation: $$\label{Equation:mean EC approximation error} {\mathbb P}\bigg\{\sup_{t\in T} X(t) \geq u \bigg\} = {\mathbb E}\{\chi(A_u)\} + {\rm error}, \quad {\rm as} \ u\to \infty,$$ where $\chi(A_u)$ represents the Euler characteristic of the excursion set $A_u = \{t\in T: X(t)\geq u\}$. This approximation [\[Equation:mean EC approximation error\]](#Equation:mean EC approximation error){reference-type="eqref" reference="Equation:mean EC approximation error"} is highly elegant and accurate, primarily due to the fact that the principal term ${\mathbb E}\{\chi(A_u)\}$ is computable and the error term decays exponentially faster than the major component. However, it is essential to note that this method assumes a Gaussian field with constant variance, limiting its applicability in various scenarios.
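As a concrete, purely illustrative sanity check of the approximation [\[Equation:mean EC approximation error\]](#Equation:mean EC approximation error){reference-type="eqref" reference="Equation:mean EC approximation error"} in the simplest constant-variance setting, the sketch below compares a Monte Carlo estimate of the excursion probability with the classical one-dimensional EEC (Rice) formula ${\mathbb E}\{\chi(A_u)\} = \Psi(u) + \frac{L\sqrt{\lambda_2}}{2\pi}\, e^{-u^2/2}$ for a unit-variance stationary process on $[0,L]$ with second spectral moment $\lambda_2$, where $\Psi$ denotes the standard normal tail probability. The test field (a random trigonometric polynomial) and all numerical choices are assumptions made only for this illustration.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Toy smooth stationary field on [0, L]: a random trigonometric polynomial
# X(t) = sum_k a_k (xi_k cos(k t) + eta_k sin(k t)) with unit variance.
freqs = np.arange(1, 6)
a = np.ones(len(freqs)) / math.sqrt(len(freqs))   # sum of a_k^2 equals 1
lam2 = float(np.sum((freqs * a) ** 2))            # second spectral moment

L, u = 2.0, 2.5
t = np.linspace(0.0, L, 1001)
cos_kt, sin_kt = np.cos(np.outer(freqs, t)), np.sin(np.outer(freqs, t))

def mc_excursion_prob(n_rep=100_000, batch=5_000):
    """Monte Carlo estimate of P(sup_{[0,L]} X >= u), with the sup taken over a fine grid."""
    hits = 0
    for _ in range(n_rep // batch):
        xi = rng.standard_normal((batch, len(freqs))) * a
        eta = rng.standard_normal((batch, len(freqs))) * a
        X = xi @ cos_kt + eta @ sin_kt
        hits += int(np.count_nonzero(X.max(axis=1) >= u))
    return hits / n_rep

# Classical 1D EEC (Rice) formula for a unit-variance stationary process:
# E{chi(A_u)} = Psi(u) + (L / (2 pi)) * sqrt(lam2) * exp(-u^2 / 2).
Psi = 0.5 * math.erfc(u / math.sqrt(2.0))
eec = Psi + L / (2.0 * math.pi) * math.sqrt(lam2) * math.exp(-u ** 2 / 2.0)

print(f"Monte Carlo  P(sup X >= {u}) ~ {mc_excursion_prob():.5f}")
print(f"EEC (Rice)   approximation   ~ {eec:.5f}")
```

For moderately large $u$ the two numbers should already be close, and the agreement sharpens as $u$ grows, which is exactly the behavior that the super-exponentially small error term quantifies.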
In this paper, we extend the EEC method to accommodate smooth Gaussian random fields with general (nonconstant) variance functions. Our main objective is to demonstrate that the EEC approximation [\[Equation:mean EC approximation error\]](#Equation:mean EC approximation error){reference-type="eqref" reference="Equation:mean EC approximation error"} remains valid under these conditions, with the error term exhibiting super-exponential decay. For a precise description of our findings, please refer to Theorem [Theorem 2](#Thm:MEC approximation je){reference-type="ref" reference="Thm:MEC approximation je"} below. Our derived approximation result shows that the maximum variance of $X(t)$, denoted by $\sigma_T^2$ (see [\[eq:R\]](#eq:R){reference-type="eqref" reference="eq:R"} below), plays a pivotal role in both ${\mathbb E}\{\chi(A_u)\}$ and the super-exponentially small error. In our analysis, we observe that the points where $\sigma_T^2$ is attained make the most substantial contributions to ${\mathbb E}\{\chi(A_u)\}$. Building on this observation, we establish two simpler approximations: one in Theorem [Theorem 3](#Thm:MEC approximation je2){reference-type="ref" reference="Thm:MEC approximation je2"}, which incorporates boundary conditions on nonzero derivatives of the variance function over points where $\sigma_T^2$ is attained, and another in Theorem [Theorem 4](#Thm:MEC approximation){reference-type="ref" reference="Thm:MEC approximation"}, assuming only a single point attains $\sigma_T^2$. In general, the EEC approximation can be expressed as an integral using the Kac-Rice formula, as outlined in [\[eq:EEC\]](#eq:EEC){reference-type="eqref" reference="eq:EEC"} in Theorem [Theorem 2](#Thm:MEC approximation je){reference-type="ref" reference="Thm:MEC approximation je"}. While [@TTA05; @Adler:2007] provided an elegant expression for ${\mathbb E}\{\chi(A_u)\}$ termed the Gaussian kinematic formula, this expression heavily relies on the assumption of unit variance, which simplifies the calculation. In our case, where the variance function of $X(t)$ varies across $T$, deriving an explicit expression for ${\mathbb E}\{\chi(A_u)\}$ becomes challenging. Instead, we apply the Laplace method to extract the term with the leading order of $u$ from the integral, leaving a remaining error that is ${\mathbb E}\{\chi(A_u)\}o(1/u)$. For a more detailed explanation, we offer specific calculations in Sections [8](#sec:unique-max){reference-type="ref" reference="sec:unique-max"} and [9](#sec:example){reference-type="ref" reference="sec:example"}. To intuitively grasp the EEC approximation, one can roughly consider the major term as $g(u)e^{-u^2/(2\sigma_T^2)}$, while the error term diminishes as $o(e^{-u^2/(2\sigma_T^2) - \alpha u^2})$, where $g(u)$ is a polynomial in $u$, and $\alpha>0$ is a constant. The structure of this paper is as follows: We begin by introducing the notations and assumptions in Section [2](#sec:notation){reference-type="ref" reference="sec:notation"}. In Section [3](#sec:main){reference-type="ref" reference="sec:main"}, we present our main results, including Theorems [Theorem 2](#Thm:MEC approximation je){reference-type="ref" reference="Thm:MEC approximation je"}, [Theorem 3](#Thm:MEC approximation je2){reference-type="ref" reference="Thm:MEC approximation je2"}, and [Theorem 4](#Thm:MEC approximation){reference-type="ref" reference="Thm:MEC approximation"}. 
To understand our approach, we outline the main ideas in Section [4](#sec:sketch){reference-type="ref" reference="sec:sketch"} and delve into the analysis of super-exponentially small errors in Sections [5](#sec:small){reference-type="ref" reference="sec:small"} and [6](#sec:diff){reference-type="ref" reference="sec:diff"}. The proofs of our main results are given in Section [7](#sec:proof){reference-type="ref" reference="sec:proof"}. In Section [8](#sec:unique-max){reference-type="ref" reference="sec:unique-max"}, we apply the Laplace method to derive explicit approximations (Theorems [Theorem 14](#thm:unique-max-boundary){reference-type="ref" reference="thm:unique-max-boundary"} and [Theorem 15](#thm:unique-max){reference-type="ref" reference="thm:unique-max"}) for cases where a unique maximum point of the variance is present. In Section [9](#sec:example){reference-type="ref" reference="sec:example"}, we present several examples that illustrate the evaluation of the EEC and the subsequent approximation of excursion probabilities. # Notations and assumptions {#sec:notation} Let $\{X(t),\, t\in T\}$ be a real-valued and centered Gaussian random field, where $T$ is a compact rectangle in ${\mathbb R}^N$. We define $$\label{eq:R} \begin{split} \nu(t) = \sigma_t^2 = {\rm Var}(X(t)) \quad \text{\rm and } \quad \sup_{t \in T} \nu(t) =\sigma_T^2. \end{split}$$ Here, $\nu(\cdot)$ represents the variance function of the field and $\sigma_T^2$ is the maximum variance over $T$. For a function $f(\cdot) \in C^2({\mathbb R}^N)$ and $t\in {\mathbb R}^N$, we introduce the following notation for derivatives: $$\label{Eq:notatoin-diff} \begin{split} f_i (t)&=\frac{\partial f(t)}{\partial t_i}, \quad f_{ij}(t)=\frac{\partial^2 f(t)}{\partial t_i\partial t_j}, \quad \forall i, j=1, \ldots, N;\\ \nabla f(t) &= (f_1(t), \ldots , f_N(t))^{T}, \quad \nabla^2 f(t) = \left(f_{ij}(t)\right)_{ i, j = 1, \ldots, N}. \end{split}$$ Let $B \prec 0$ (negative definite) and $B \preceq 0$ (negative semi-definite) denote that a symmetric matrix $B$ has all negative or nonpositive eigenvalues, respectively. Additionally, we use ${\rm Cov}(\xi_1, \xi_2)$ and ${\rm Corr}(\xi_1, \xi_2)$ to represent the covariance and correlation between two random variables $\xi_1$ and $\xi_2$. The density of the standard normal distribution is denoted by $\phi(x)$, and its tail probability is $\Psi(x) = \int_x^\infty \phi(y)dy$. Let ${\mathbb S}^j$ be the $j$-dimensional unit sphere. Consider the domain $T=\prod^N_{i=1}[a_i, b_i]$, where $-\infty< a_i<b_i<\infty$. Following the notation established by Adler and Taylor in [@Adler:2007], we decompose $T$ into the union of its interior and its lower-dimensional faces. This decomposition forms the basis for calculating the Euler characteristic of the excursion set $A_u$, as elaborated in Section [3](#sec:main){reference-type="ref" reference="sec:main"}. Each face $K$ of dimension $k$ is defined by fixing a subset $\tau(K) \subset \{1, \ldots, N\}$ of size $k$ and a subset $\varepsilon(K) = \{\varepsilon_j, j\notin \tau(K)\} \subset \{0, 1\}^{N-k}$ of size $N-k$ so that $$\begin{split} K= \{ t=(t_1, \ldots, t_N) \in T: \, &a_j< t_j <b_j \ {\rm if} \ j\in \tau(K), \\ &t_j = (1-\varepsilon_j)a_j + \varepsilon_{j}b_{j} \ {\rm if} \ j\notin \tau(K) \}. \end{split}$$ Denote by $\partial_k T$ the collection of all $k$-dimensional faces in $T$.
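As an aside, the $(\tau(K),\varepsilon(K))$ bookkeeping above is easy to enumerate mechanically; the following small sketch (an illustrative helper, not taken from the paper) lists the faces of a rectangle and recovers the count $\sum_{k=0}^{N}\binom{N}{k}2^{N-k}=3^{N}$.

```python
from itertools import combinations, product

def faces(a, b):
    """Enumerate the faces K of T = prod_i [a_i, b_i], encoded as (k, tau, eps).

    tau is the tuple of free coordinates (so dim K = len(tau) = k), and eps
    fixes each remaining coordinate j to a_j (eps[j] = 0) or b_j (eps[j] = 1).
    """
    N = len(a)
    for k in range(N + 1):
        for tau in combinations(range(N), k):
            fixed = [j for j in range(N) if j not in tau]
            for eps in product((0, 1), repeat=len(fixed)):
                yield k, tau, dict(zip(fixed, eps))

a, b = (0.0, 0.0, 0.0), (1.0, 1.0, 1.0)      # the unit cube, N = 3
counts = {}
for k, tau, eps in faces(a, b):
    counts[k] = counts.get(k, 0) + 1
print(counts)   # {0: 8, 1: 12, 2: 6, 3: 1}, i.e. 3**3 = 27 faces in total
```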
The interior of $T$ is designated as $\overset{\circ}{T}=\partial_N T$, while the boundary of $T$ is formulated as $\partial T = \cup^{N-1}_{k=0}\cup _{K\in \partial_k T} K$. This allows us to partition $T$ in the following manner: $$T= \bigcup_{k=0}^N \partial_k T = \bigcup_{k=0}^N\bigcup_{K\in \partial_k T}K.$$ For each $t\in T$, let $$\label{eq:Sigma} \begin{split} \nabla X_{|K}(t) &= (X_{i_1} (t), \ldots, X_{i_k} (t))^T_{i_1, \ldots, i_k \in \tau(K)}, \quad \nabla^2 X_{|K}(t) = (X_{mn}(t))_{m, n \in \tau(K)},\\ \Sigma(t)&= {\mathbb E}\{X(t)\nabla^2X(t)\} = ({\mathbb E}\{X(t)X_{ij}(t)\})_{1\leq i,j\leq N}, \\ \Sigma_K (t)&={\mathbb E}\{X(t)\nabla^2X_{|K}(t)\} = ({\mathbb E}\{X(t)X_{ij}(t)\})_{i,j\in \tau(K)},\\ \Lambda(t) &= {\rm Cov}(\nabla X(t)) = ({\mathbb E}\{X_i(t)X_j(t)\})_{1\leq i,j\leq N},\\ \Lambda_K(t) &= {\rm Cov}(\nabla X_{|K}(t))=({\mathbb E}\{X_i(t)X_j(t)\})_{i,j\in \tau(K)}. \end{split}$$ For each $K\in \partial_k T$, we define the *number of extended outward maxima above $u$ on face $K$* as $$\begin{split} M_u^E (K) & := \# \{ t\in K: X(t)\geq u, \nabla X_{|K}(t)=0, \nabla^2 X_{|K}(t)\prec 0, \varepsilon^*_jX_j(t) \geq 0, \forall j\notin \tau(K) \}, \end{split}$$ where $\varepsilon^*_j=2\varepsilon_j-1$, and define the *number of local maxima above $u$ on face $K$* as $$\begin{split} M_u (K) & := \# \{ t\in K: X(t)\geq u, \nabla X_{|K}(t)=0, \nabla^2 X_{|K}(t)\prec 0\}. \end{split}$$ Clearly, $M_u^E (K) \le M_u (K)$. For each $t\in T$ with $\nu(t)=\sigma_T^2$, we define the index set $\mathcal{I}(t) = \{ \ell: \nu_\ell(t)=0\}$ representing the directions along which the partial derivatives of $\nu(t)$ vanish. If $t\in K\in \partial_k T$ with $\nu(t)=\sigma_T^2$, then we have $\tau(K) \subset \mathcal{I}(t)$ since $\nu_\ell(t) =0$ for all $\ell \in \tau(K)$. It is worth noting that since $\nu_i(t) = 2{\mathbb E}\{X_{i}(t)X(t)\}$, we can also express this index set as $\mathcal{I}(t) = \{ \ell: {\mathbb E}\{X(t)X_\ell(t)\}=0\}$. Our analytical framework relies on the following conditions for smoothness (**H**1) and regularity (**H**2), in addition to curvature conditions (**H**3) or $({\bf H}3')$. - $({\bf H}1)$ $X\in C^2({\mathbb R}^N)$ almost surely and the second derivatives satisfy the *uniform mean-square Hölder condition*: there exist constants $C, \delta>0$ such that $$\begin{split} {\mathbb E}(X_{ij}(t)-X_{ij}(t'))^2 &\leq C \|t-t'\|^{2\delta}, \quad \forall t,t'\in T,\ i, j= 1, \ldots, N. \end{split}$$ - $({\bf H}2)$ For every pair $(t, t')\in T^2$ with $t\neq t'$, the Gaussian vector $$\begin{split} \big(&X(t), \nabla X(t), X_{ij}(t), X(t'), \nabla X(t'), X_{ij}(t'), 1\leq i\leq j\leq N\big) \end{split}$$ is non-degenerate. - $({\bf H}3)$ For every $t\in K\in \partial_k T$, $0\le k\le N-2$, such that $\nu(t)=\sigma_T^2$ and $\mathcal{I}(t)$ contains at least two indices, we have $$\label{eq:Hessian_Sigma} ({\mathbb E}\{X(t)X_{ij}(t)\})_{i,j\in \mathcal{I}(t) }\prec 0.$$ - $({\bf H}3')$ For every $t\in K\in \partial_k T$, $0\le k\le N-2$, such that $\nu(t)=\sigma_T^2$ and $\mathcal{I}(t)$ contains at least two indices, we have $$\label{eq:Hessian_I} \left(\nu_{ij}(t)\right)_{i,j\in \mathcal{I}(t) }\preceq 0.$$ Conditions (**H**3) and $({\bf H}3')$ involve the behavior of the variance function $\nu(t)$ at critical points, and they are closely related, as shown in Proposition [Proposition 1](#prop:H3){reference-type="ref" reference="prop:H3"} below. Here we provide some additional insights into $({\bf H}3')$.
Despite its initially technical appearance, $({\bf H}3')$ is in fact a mild condition that specifically applies to lower-dimensional boundary points $t$ where $\nu(t)=\sigma_T^2$. In essence, it indicates that the variance function should possess a negative semi-definite Hessian matrix at these boundary critical points where $\nu(t)=\sigma_T^2$ while concurrently exhibiting at least two zero partial derivatives. For example, in the 1D case, since $\mathcal{I}(t)$ contains at most one index, there is no need to check $({\bf H}3')$. Similarly, in the 2D case, we only need to check $({\bf H}3')$ or [\[eq:Hessian_I\]](#eq:Hessian_I){reference-type="eqref" reference="eq:Hessian_I"} when $\sigma_T^2$ is achieved at corner points $t\in \partial_0 T$ with $\mathcal{I}(t)=\{1, 2\}$. Moreover, if the variance function $\nu(t)$ demonstrates strict monotonicity in all directions across ${\mathbb R}^N$, then $\mathcal{I}(t) = \emptyset$ and there is no need to verify $({\bf H}3')$. **Proposition 1**. *The condition $({\bf H}3')$ implies $({\bf H}3)$. In addition, $({\bf H}3)$ implies that $$\label{eq:prop:H3} ({\mathbb E}\{X(t)X_{ij}(t)\})_{i,j\in \mathcal{I}(t) }\prec 0, \quad \forall t\in T \text{ with } \nu(t)=\sigma_T^2.$$* *Proof.* Taking the second derivative on both sides of $\nu(t)={\mathbb E}\{X(t)^2\}$, we obtain $\nu_{ij}(t)/2 = {\mathbb E}\{X(t)X_{ij}(t)\} + {\mathbb E}\{X_i(t)X_j(t)\}$, implying $$\label{eq:negdef} ({\mathbb E}\{X(t)X_{ij}(t)\})_{i, j \in \mathcal{I}(t)} = \frac{1}{2}(\nu_{ij}(t))_{i, j \in\mathcal{I}(t)} - ({\mathbb E}\{X_i(t)X_j(t)\})_{i, j \in \mathcal{I}(t)}.$$ Note that, as a covariance matrix, $({\mathbb E}\{X_i(t)X_j(t)\})_{i, j \in \mathcal{I}(t)}$ is positive definite by $({\bf H}2)$. Therefore, [\[eq:Hessian_I\]](#eq:Hessian_I){reference-type="eqref" reference="eq:Hessian_I"} implies [\[eq:Hessian_Sigma\]](#eq:Hessian_Sigma){reference-type="eqref" reference="eq:Hessian_Sigma"}, or equivalently $({\bf H}3')$ implies $({\bf H}3)$. Next we demonstrate that $({\bf H}3)$ implies [\[eq:prop:H3\]](#eq:prop:H3){reference-type="eqref" reference="eq:prop:H3"}. It suffices to show [\[eq:Hessian_Sigma\]](#eq:Hessian_Sigma){reference-type="eqref" reference="eq:Hessian_Sigma"} for $k= N-1$ and $k=N$, and for the case that $\mathcal{I}(t)$ contains at most one index, which complement those cases in $({\bf H}3)$. \(i\) If $k = N$, then $t$ becomes a maximum point of $\nu$ within the interior of $T$ and $\mathcal{I}(t) = \tau(K) =\{1, \cdots, N\}$, implying [\[eq:Hessian_I\]](#eq:Hessian_I){reference-type="eqref" reference="eq:Hessian_I"}, and hence [\[eq:Hessian_Sigma\]](#eq:Hessian_Sigma){reference-type="eqref" reference="eq:Hessian_Sigma"} holds by [\[eq:negdef\]](#eq:negdef){reference-type="eqref" reference="eq:negdef"}. \(ii\) For $k=N-1$, we consider two scenarios. If $\mathcal{I}(t) = \tau(K)$, then $t$ becomes a maximum point of $\nu$ restricted on $K$, hence [\[eq:Hessian_Sigma\]](#eq:Hessian_Sigma){reference-type="eqref" reference="eq:Hessian_Sigma"} is satisfied as discussed above. 
If $\mathcal{I}(t) =\{1, \cdots, N\}$, then it follows from Taylor's formula that $$\nu (t') =\nu(t) + \frac{1}{2}(t'-t)^T \nabla^2 \nu(t) (t'-t)+o(\|t'-t\|^2), \quad t'\in T.$$ Notice that $\{(t'-t)/\|t'-t\|: t'\in T\}$ contains a closed half-space of directions in ${\mathbb R}^N$ since $t\in K \in \partial_{N-1}T$. Together with the fact that $\nu(t)=\sigma_T^2$ and the symmetry of the quadratic form $v\mapsto v^T\nabla^2\nu(t)v$, this shows that $\nabla^2 \nu(t)$ cannot have any positive eigenvalue, and thus [\[eq:Hessian_I\]](#eq:Hessian_I){reference-type="eqref" reference="eq:Hessian_I"} and hence [\[eq:Hessian_Sigma\]](#eq:Hessian_Sigma){reference-type="eqref" reference="eq:Hessian_Sigma"} hold. \(iii\) Finally, it is evident from the one-dimensional Taylor formula that [\[eq:Hessian_I\]](#eq:Hessian_I){reference-type="eqref" reference="eq:Hessian_I"} is valid when $\mathcal{I}(t)$ contains only one index. ◻ The condition [\[eq:prop:H3\]](#eq:prop:H3){reference-type="eqref" reference="eq:prop:H3"} established in Proposition [Proposition 1](#prop:H3){reference-type="ref" reference="prop:H3"} serves as the fundamental requirement for our main results, as demonstrated in Theorems [Theorem 2](#Thm:MEC approximation je){reference-type="ref" reference="Thm:MEC approximation je"}, [Theorem 3](#Thm:MEC approximation je2){reference-type="ref" reference="Thm:MEC approximation je2"} and [Theorem 4](#Thm:MEC approximation){reference-type="ref" reference="Thm:MEC approximation"} below. As seen from Proposition [Proposition 1](#prop:H3){reference-type="ref" reference="prop:H3"}, we can simplify [\[eq:prop:H3\]](#eq:prop:H3){reference-type="eqref" reference="eq:prop:H3"} to condition $({\bf H}3)$. Thus our main results will be presented under the assumption of condition $({\bf H}3)$. Furthermore, it is worth highlighting that, in practical applications, verifying $({\bf H}3')$ can often be a more straightforward process. This condition directly pertains to the variance function $\nu(t)$, making it easier to assess. Thus, Proposition [Proposition 1](#prop:H3){reference-type="ref" reference="prop:H3"} provides the flexibility to check $({\bf H}3')$ instead of $({\bf H}3)$. This insight simplifies the verification procedure, enhancing the practical applicability of our results. # Main results {#sec:main} Here, we present our main results, Theorems [Theorem 2](#Thm:MEC approximation je){reference-type="ref" reference="Thm:MEC approximation je"}, [Theorem 3](#Thm:MEC approximation je2){reference-type="ref" reference="Thm:MEC approximation je2"} and [Theorem 4](#Thm:MEC approximation){reference-type="ref" reference="Thm:MEC approximation"}, whose proofs are given in Section [7](#sec:proof){reference-type="ref" reference="sec:proof"}. Define the *number of extended outward critical points of index $i$ above level $u$ on face $K$* as $$\begin{split} \mu_i(K) & := \# \{ t\in K: X(t)\geq u, \nabla X_{|K}(t)=0, \text{index} (\nabla^2 X_{|K}(t))=i, \\ & \qquad \qquad \qquad \varepsilon^*_jX_j(t) \geq 0 \ {\rm for \ all}\ j\notin \tau(K) \}. \end{split}$$ Recall that $\varepsilon^*_j=2\varepsilon_j-1$ and the index of a matrix is defined as the number of its negative eigenvalues. It is evident that $\mu_k(K)=M_u^E(K)$ for $K\in \partial_k T$. It follows from (**H**1), (**H**2) and the Morse theorem (see Corollary 9.3.5 or pages 211--212 in @Adler:2007) that the Euler characteristic of the excursion set $A_u$ can be represented as $$\label{eq:Euler} \begin{split} \chi(A_u)&= \sum^N_{k=0}\sum_{K\in \partial_k T}(-1)^k\sum^k_{i=0} (-1)^i \mu_i(K).
\end{split}$$ Now we state the following general result on the EEC approximation for the excursion probability. **Theorem 2**. *Let $\{X(t),\, t\in T\}$ be a centered Gaussian random field satisfying $({\bf H}1)$, $({\bf H}2)$ and $({\bf H}3)$. Then there exists a constant $\alpha>0$ such that as $u\to \infty$, $$\label{eq:EEC} \begin{split} &\quad {\mathbb P}\left\{\sup_{t\in T} X(t) \geq u \right\}\\ &=\sum^N_{k=0}\sum_{K\in \partial_k T}(-1)^k\int_K {\mathbb E}\big\{{\rm det}\nabla^2 X_{|K}(t)\mathbbm{1}_{\{X(t)\geq u, \ \varepsilon^*_\ell X_\ell(t) \geq 0 \ {\rm for \ all}\ \ell\notin \tau(K)\}} \big| \nabla X_{|K}(t)=0\big\}\\ &\quad \times p_{\nabla X_{|K}(t)}(0)dt +o\left( \exp \left\{ -\frac{u^2}{2\sigma_T^2} -\alpha u^2 \right\}\right)\\ &= {\mathbb E}\{\chi(A_u)\}+o\left( \exp \left\{ -\frac{u^2}{2\sigma_T^2} -\alpha u^2 \right\}\right). \end{split}$$* In general, computing the EEC approximation ${\mathbb E}\{\chi(A_u)\}$ is a challenging task because it involves conditional expectations over the joint covariance of the Gaussian field and its Hessian, given zero gradient, which vary across $T$. However, one can apply the Laplace method to extract the term with the largest order of $u$ from ${\mathbb E}\{\chi(A_u)\}$ such that the remaining error is $o(1/u){\mathbb E}\{\chi(A_u)\}$. Examples demonstrating the Laplace method are presented in Section [9](#sec:example){reference-type="ref" reference="sec:example"}. It is important to note that in the expression [\[eq:EEC\]](#eq:EEC){reference-type="eqref" reference="eq:EEC"}, when $k=0$, all terms involving $\nabla X_{|K}(t)$ and $\nabla^2 X_{|K}(t)$ vanish. Consequently, if $k=0$, we treat the integral in [\[eq:EEC\]](#eq:EEC){reference-type="eqref" reference="eq:EEC"} as the usual Gaussian tail probabilities. This notation is also adopted in the results presented in Theorems [Theorem 3](#Thm:MEC approximation je2){reference-type="ref" reference="Thm:MEC approximation je2"} and [Theorem 4](#Thm:MEC approximation){reference-type="ref" reference="Thm:MEC approximation"} below. The proof of Theorem [Theorem 2](#Thm:MEC approximation je){reference-type="ref" reference="Thm:MEC approximation je"} reveals that points where the maximum variance $\sigma_T^2$ is attained make the most significant contribution to ${\mathbb E}\{\chi(A_u)\}$. Therefore, in many cases, the general EEC approximation ${\mathbb E}\{\chi(A_u)\}$ can be simplified. The following result is based on the boundary condition [\[Eq:boundary\]](#Eq:boundary){reference-type="eqref" reference="Eq:boundary"} and is applicable at boundary points where nonzero partial derivatives of the variance function occur when $\sigma_T^2$ is reached. **Theorem 3**. *Let $\{X(t),\, t\in T\}$ be a centered Gaussian random field satisfying $({\bf H}1)$, $({\bf H}2)$ and the following boundary condition $$\label{Eq:boundary} \begin{split} \Big\{t\in J:\, \nu(t)=\sigma_T^2, \prod_{i\notin \tau(J)}\nu_i(t)=0\Big\} = \emptyset, \quad \forall \text{ face } J\subset T. \end{split}$$ Then there exists a constant $\alpha>0$ such that as $u\to \infty$, $$\begin{split} {\mathbb P}\left\{\sup_{t\in T} X(t) \geq u \right\} &=\sum^N_{k=0}\sum_{K\in \partial_k T}(-1)^{k}\int_K {\mathbb E}\big\{{\rm det}\nabla^2 X_{|K}(t) \mathbbm{1}_{\{X(t)\geq u\}}\big|\nabla X_{|K}(t)=0\big\}\\ &\quad \times p_{\nabla X_{|K}(t)}(0)dt +o\left( \exp \left\{ -\frac{u^2}{2\sigma_T^2} -\alpha u^2 \right\}\right). 
\end{split}$$* In other words, the boundary condition [\[Eq:boundary\]](#Eq:boundary){reference-type="eqref" reference="Eq:boundary"} indicates that, for any point $t\in J$ attaining the maximum variance $\sigma_T^2$, there must be $\nu_i(t)\neq 0$ for all $i\notin \tau(J)$. In particular, as an important property, we observe that [\[Eq:boundary\]](#Eq:boundary){reference-type="eqref" reference="Eq:boundary"} implies the condition $({\bf H}3')$ and hence $({\bf H}3)$. The following result provides an asymptotic approximation for the special case where the variance function attains its maximum $\sigma_T^2$ only at a unique point. **Theorem 4**. *Let $\{X(t),\, t\in T\}$ be a centered Gaussian random field satisfying $({\bf H}1)$, $({\bf H}2)$ and $({\bf H}3)$. Suppose $\nu(t)$ attains its maximum $\sigma_T^2$ only at a single point $t^*\in K$, where $K\in \partial_k T$ with $k\ge 0$. Then there exists a constant $\alpha>0$ such that as $u\to \infty$, $$\begin{split} &\quad {\mathbb P}\left\{\sup_{t\in T} X(t) \geq u \right\}\\ &=\sum_{J}(-1)^{{\rm dim}(J)}\int_J {\mathbb E}\big\{{\rm det}\nabla^2 X_{|J}(t)\mathbbm{1}_{\{X(t)\geq u, \ \varepsilon^*_\ell X_\ell(t) \geq 0 \ {\rm for \ all}\ \ell\in \mathcal{I}(t^*)\setminus \tau(J)\}} \big| \nabla X_{|J}(t)=0\big\}\\ &\quad \times p_{\nabla X_{|J}(t)}(0)dt +o\left( \exp \left\{ -\frac{u^2}{2\sigma_T^2} -\alpha u^2 \right\}\right), \end{split}$$ where the sum is taken over all faces $J$ of $T$ such that $t^*\in \bar{J}$ and $\tau(J)\subset \mathcal{I}(t^*)$.* Employing the Laplace method, we will provide refined explicit approximation results in Section [8](#sec:unique-max){reference-type="ref" reference="sec:unique-max"} under the assumptions in Theorem [Theorem 4](#Thm:MEC approximation){reference-type="ref" reference="Thm:MEC approximation"}. Furthermore, we demonstrate several examples that illustrate the evaluation of approximating excursion probabilities in Section [9](#sec:example){reference-type="ref" reference="sec:example"}. # Outline of the proofs {#sec:sketch} Here we show the main idea for proving the main results above. Let $f$ be a smooth real-valued function, then $\sup_{t\in T} f(t) \geq u$ if and only if there exists at least one extended outward local maximum above $u$ on some face of $T$. Thus, under conditions $({\bf H}1)$ and $({\bf H}2)$, the following relation holds for each $u\in {\mathbb R}$: $$\label{Eq:maxima-faces} \begin{split} \left\{\sup_{t\in T} X(t) \geq u \right\} = \bigcup_{k=0}^N \bigcup_{K\in \partial_k T} \{M_u^E (K) \geq 1\} \quad {\rm a.s.} \end{split}$$ This implies that the probability of the supremum of the Gaussian random field exceeding $u$ is equal to the probability that there exists at least one extended outward local maximum above $u$ on some face $K$ of $T$. Therefore, we obtain the following upper bound for the excursion probability: $$\label{Ineq:upperbound je} \begin{split} {\mathbb P}\left\{\sup_{t\in T} X(t) \geq u\right\}\leq \sum_{k=0}^N\sum_{K\in \partial_k T} {\mathbb P}\{M_u^E (K) \geq 1\}\leq \sum_{k=0}^N\sum_{K\in \partial_k T} {\mathbb E}\{M_u^E (K) \}. \end{split}$$ On the other hand, notice that $$\begin{split} {\mathbb E}\{M_u^E (K) \} - {\mathbb P}\{M_u^E (K) \geq 1\} &= \sum_{i=1}^\infty (i-1){\mathbb P}\{M_u^E (K)=i\}\\ & \leq \sum_{i=1}^\infty i(i-1){\mathbb P}\{M_u^E (K)=i\}= {\mathbb E}\{M_u^E (K)[M_u^E (K)-1] \} \end{split}$$ and $$\begin{split} {\mathbb P}\{M_u^E (K) \geq 1, M_u^E (K') \geq 1\}\le {\mathbb E}\{M_u^E (K) M_u^E (K') \}. 
\end{split}$$ Applying the Bonferroni inequality to [\[Eq:maxima-faces\]](#Eq:maxima-faces){reference-type="eqref" reference="Eq:maxima-faces"} and combining these two inequalities, we obtain the following lower bound for the excursion probability: $$\label{Ineq:lowerbound je} \begin{split} &\quad {\mathbb P}\left\{\sup_{t\in T} X(t) \geq u \right\} \\ &\ge \sum_{k=0}^N\sum_{K\in \partial_k T} {\mathbb P}\{M_u^E (K) \geq 1\} - \sum_{K\neq K'} {\mathbb P}\{M_u^E (K) \geq 1, M_u^E (K') \geq 1\} \\ &\ge \sum_{k=0}^N\sum_{K\in \partial_k T} \left({\mathbb E}\{M_u^E (K) \} - {\mathbb E}\{M_u^E (K)[M_u^E (K)-1] \} \right) - \sum_{K\neq K'} {\mathbb E}\{M_u^E (K) M_u^E (K') \}, \end{split}$$ where the last sum is taken over all possible pairs of different faces $(K, K')$. **Remark**[\[remark:M_u\]]{#remark:M_u label="remark:M_u"}. Note that, by the same arguments as above, the expectations of the number of extended outward maxima $M_u^E(\cdot)$ in both [\[Ineq:upperbound je\]](#Ineq:upperbound je){reference-type="eqref" reference="Ineq:upperbound je"} and [\[Ineq:lowerbound je\]](#Ineq:lowerbound je){reference-type="eqref" reference="Ineq:lowerbound je"} can be replaced by the expectations of the number of local maxima $M_u(\cdot)$. ◻ We call a function $h(u)$ *super-exponentially small* \[when compared with the excursion probability ${\mathbb P}\{\sup_{t\in T} X(t) \geq u \}$ or ${\mathbb E}\{\chi(A_u)\}$\], if there exists a constant $\alpha >0$ such that $h(u) = o(e^{- u^2/(2\sigma_T^2)-\alpha u^2})$ as $u \to \infty$. The main idea for proving the EEC approximation Theorem [Theorem 2](#Thm:MEC approximation je){reference-type="ref" reference="Thm:MEC approximation je"} consists of the following two steps: (i) show that all terms in the lower bound in ([\[Ineq:lowerbound je\]](#Ineq:lowerbound je){reference-type="ref" reference="Ineq:lowerbound je"}), except for the leading sum, which coincides with the upper bound in ([\[Ineq:upperbound je\]](#Ineq:upperbound je){reference-type="ref" reference="Ineq:upperbound je"}), are super-exponentially small; and (ii) demonstrate that the difference between the upper bound in ([\[Ineq:upperbound je\]](#Ineq:upperbound je){reference-type="ref" reference="Ineq:upperbound je"}) and ${\mathbb E}\{\chi(A_u)\}$ is also super-exponentially small. The proofs for Theorems [Theorem 3](#Thm:MEC approximation je2){reference-type="ref" reference="Thm:MEC approximation je2"} and [Theorem 4](#Thm:MEC approximation){reference-type="ref" reference="Thm:MEC approximation"} follow the same ideas, aiming to establish super-exponential smallness for the terms involved in the lower bounds, as well as for the difference between the upper bound and the EEC. # Estimation of super-exponential smallness for terms in the lower bound {#sec:small} ## Factorial moments We first state the following result, which is a modified version (restricted to a face $K$) of Lemma 4 in @Piterbarg96, characterizing the decay rate of factorial moments of the number of critical points exceeding a high level for Gaussian fields. **Lemma 5**. *Assume $({\bf H}1)$ and $({\bf H}2)$.
Then there exists a positive constant $C$ such that for any $\varepsilon >0$ one can find a number $\varepsilon_1 >0$ such that for any $K\in\partial_k T$, $$\begin{split} {\mathbb E}\{M_u (K)(M_u(K)- 1)\}\leq C u^{2k+1} \exp\bigg\{-\frac{u^2}{2\beta_K^2 +\varepsilon}\bigg\} + C u^{4k+2}\exp\bigg\{-\frac{u^2}{2\sigma_K^2 -\varepsilon_1}\bigg\}, \end{split}$$ where $$\beta_K^2 = \sup_{t\in K} \sup_{e\in {\mathbb S}^{k-1}} {\rm Var} (X(t)|\nabla X_{|K}(t), \nabla^2 X_{|K}(t)e), \quad \sigma_K^2=\sup_{t\in K} {\rm Var} (X(t)).$$* The following result shows that the factorial moments in [\[Ineq:lowerbound je\]](#Ineq:lowerbound je){reference-type="eqref" reference="Ineq:lowerbound je"} are super-exponentially small under our assumptions. **Proposition 6**. *Let $\{X(t),\, t\in T\}$ be a centered Gaussian random field satisfying $({\bf H}1)$, $({\bf H}2)$ and $({\bf H}3)$. Then there exists $\alpha>0$ such that as $u\to \infty$, $$\sum_{k=0}^N\sum_{K\in\partial_k T} {\mathbb E}\{M_u (K)(M_u (K) -1) \} = o\left(e^{ - u^2/(2\sigma_T^2) -\alpha u^2 }\right).$$* *Proof.* Due to Lemma [Lemma 5](#lemma:Piterbarg){reference-type="ref" reference="lemma:Piterbarg"}, it suffices to show that for each $K\in\partial_k T$, $\beta_K^2<\sigma_T^2$, which is equivalent to ${\rm Var} (X(t)|\nabla X_{|K}(t), \nabla^2 X_{|K}(t)e) <\sigma_T^2$ for all $t \in \bar{K} = K \cup \partial K$ and $e\in {\mathbb S}^{k-1}$. Suppose ${\rm Var} (X(t)|\nabla X_{|K}(t), \nabla^2 X_{|K}(t)e) =\sigma_T^2$ for some $t \in K$, then $$\sigma_T^2={\rm Var} (X(t)|\nabla X_{|K}(t), \nabla^2 X_{|K}(t)e) \leq {\rm Var} (X(t)|\nabla^2 X_{|K}(t)e) \leq {\rm Var} (X(t)) \leq \sigma_T^2.$$ Note that $$\begin{split} \quad {\rm Var} (X(t)|\nabla^2 X_{|K}(t)e) = {\rm Var} (X(t))\Leftrightarrow {\mathbb E}\{X(t)(\nabla^2 X_{|K}(t)e)\}=0 \Leftrightarrow \Sigma_K(t)e=0. \end{split}$$ But $t$ is a point with $\nu(t)=\sigma_T^2$, thus $\Sigma_K(t)\prec 0$ by Proposition [Proposition 1](#prop:H3){reference-type="ref" reference="prop:H3"}, implying $\Sigma_K(t)e \neq 0$ for all $e\in {\mathbb S}^{k-1}$ and causing a contradiction. On the other hand, suppose ${\rm Var} (X(t)|\nabla X_{|K}(t), \nabla^2 X_{|K}(t)e) =\sigma_T^2$ for some $t \in \partial K$, then ${\rm Var} (X(t)|\nabla X_{|K}(t)) =\sigma_T^2$ and hence $\nu_i(t)=0$ for all $i\in \tau (K)$, implying $\Sigma_K(t)\prec 0$ by Proposition [Proposition 1](#prop:H3){reference-type="ref" reference="prop:H3"}. Similarly to the previous arguments, this will lead to a contradiction. The proof is completed. ◻ ## Non-adjacent faces For two sets $D, D' \subset {\mathbb R}^N$, let $d(D,D')=\inf\{\|t-t'\|: t\in D, t'\in D'\}$ denote their distance. The following result demonstrates that the last two sums involving the joint moment of two non-adjacent faces in [\[Ineq:lowerbound je\]](#Ineq:lowerbound je){reference-type="eqref" reference="Ineq:lowerbound je"} are super-exponentially small. **Proposition 7**. *Let $\{X(t),\, t\in T\}$ be a centered Gaussian random field satisfying $({\bf H}1)$ and $({\bf H}2)$. Then there exists $\alpha>0$ such that as $u\to \infty$, $$\label{Eq:crossing terms disjoint sets} \begin{split} {\mathbb E}\{M_u (K) M_u (K')\} &= o\left( \exp \left\{ -\frac{u^2}{2\sigma_T^2} -\alpha u^2 \right\}\right), \end{split}$$ where $K$ and $K'$ are different faces of $T$ with $d(K,K')>0$.* *Proof.* Consider first the case where ${\rm dim}(K) =k \geq 1$ and ${\rm dim}(K') =k' \geq 1$. 
By the Kac-Rice formula for high moments [@Adler:2007], we have $$\label{Eq:disjoint faces} \begin{split} &\quad {\mathbb E}\{M_u (K) M_u (K') \}\\ &= \int_{K} dt \int_{K'} dt'\, {\mathbb E}\big \{ |{\rm det} \nabla^2 X_{|K}(t) | |{\rm det} \nabla^2 X_{|K'}(t') | \mathbbm{1}_{\{X(t)\geq u, X(t')\geq u\}}\\ &\quad \times \mathbbm{1}_{\{\nabla^2 X_{|K}(t) \prec 0, \, \nabla^2 X_{|K'}(t') \prec 0\}} \big|\nabla X_{|K}(t)=0, \nabla X_{|K'}(t')=0 \big\}p_{\nabla X_{|K}(t), \nabla X_{|K'} (t')}(0,0) \\ &\leq \int_{K} dt \int_{K'} dt' \int_u^\infty dx \int_u^\infty dx' \, p_{X(t), X(t')}(x,x')p_{\nabla X_{|K}(t), \nabla X_{|K'} (t')}(0,0|X(t)=x, X(t')=x')\\ &\quad \times {\mathbb E}\big \{ |{\rm det} \nabla^2 X_{|K}(t) | |{\rm det} \nabla^2 X_{|K'}(t') |\big | X(t)=x, X(t')=x', \nabla X_{|K}(t)=0, \nabla X_{|K'}(t')=0 \} . \end{split}$$ Notice that the following two inequalities hold: for constants $a_{i_1}$ and $b_{i_2}$, $$\begin{split} &\prod_{i_1=1}^k |a_{i_1}| \prod_{i_2=1}^{k'} |b_{i_2}| \leq \frac{\sum_{i_1=1}^k |a_{i_1}|^{k+k'} + \sum_{i_2=1}^{k'} |b_{i_2}|^{k+k'}}{k+k'}; \end{split}$$ and for any Gaussian variable $\xi$ and positive integer $m$, by Jensen's inequality, $$\begin{split} {\mathbb E}|\xi|^m \leq {\mathbb E}(|{\mathbb E}\xi|+|\xi-{\mathbb E}\xi|)^m &\leq 2^{m-1} (|{\mathbb E}\xi|^m + {\mathbb E}|\xi-{\mathbb E}\xi|^m)= 2^{m-1} (|{\mathbb E}\xi|^m + B_m({\rm Var}(\xi))^{m/2}), \end{split}$$ where $B_m$ is some constant depending only on $m$. Combining these two inequalities with the well-known conditional formula for Gaussian variables, we obtain that there exist positive constants $C_1$ and $N_1$ such that for sufficiently large $x$ and $x'$, $$\label{Eq:disjoint faces 2} \begin{split} \sup_{t\in K, t'\in K'}&{\mathbb E}\big \{ |{\rm det} \nabla^2 X_{|K}(t) | |{\rm det} \nabla^2 X_{|K'}(t') | \big| X(t)=x, X(t')=x',\nabla X_{|K}(t)=0, \nabla X_{|K'}(t')=0 \big\}\\ & \leq C_1+(xx')^{N_1}. \end{split}$$ Further, there exists $C_2>0$ such that $$\label{Eq:disjoint faces 3} \begin{split} &\quad \sup_{t\in K, t'\in K'} p_{\nabla X_{|K}(t), \nabla X_{|K'} (t')}(0,0|X(t)=x, X(t')=x')\\ & \leq \sup_{t\in K, t'\in K'}(2\pi)^{-(k+k')/2}[{\rm det Cov} (\nabla X_{|K}(t), \nabla X_{|K'} (t') | X(t)=x, X(t')=x')]^{-1/2} \\ & \leq C_2. \end{split}$$ Plugging [\[Eq:disjoint faces 2\]](#Eq:disjoint faces 2){reference-type="eqref" reference="Eq:disjoint faces 2"} and [\[Eq:disjoint faces 3\]](#Eq:disjoint faces 3){reference-type="eqref" reference="Eq:disjoint faces 3"} into [\[Eq:disjoint faces\]](#Eq:disjoint faces){reference-type="eqref" reference="Eq:disjoint faces"}, we obtain that there exists $C_3$ such that, for $u$ large enough, $$\label{Eq:disjoint faces 4} \begin{split} {\mathbb E}\{M_u (K) M_u (K')\} &\le C_3\sup_{t\in K, t'\in K'} {\mathbb E}\{(C_1+|X(t)X(t')|^{N_1}) \mathbbm{1}_{\{X(t) \geq u, X(t') \geq u\}} \}\\ &\le C_3\sup_{t\in K, t'\in K'} {\mathbb E}\{(C_1+(X(t)+X(t'))^{2N_1}) \mathbbm{1}_{\{[X(t)+X(t')]/2 \geq u\}} \}\\ &\le C_3 \exp\left( -\frac{u^2}{(1+\rho)\sigma_T^2} + \varepsilon u^2\right), \end{split}$$ where $\varepsilon$ is any positive number and $\rho=\sup_{t\in K, t'\in K'}{\rm Corr}[X(t), X'(t)]<1$ due to $({\bf H}2)$. The case when one of the dimensions of $K$ and $K'$ is zero can be proved similarly. ◻ ## Adjacent faces The following result shows that the last two sums involving the joint moment of two adjacent faces in [\[Ineq:lowerbound je\]](#Ineq:lowerbound je){reference-type="eqref" reference="Ineq:lowerbound je"} are super-exponentially small. 
**Proposition 8**. *Let $\{X(t),\, t\in T\}$ be a centered Gaussian random field satisfying $({\bf H}1)$, $({\bf H}2)$ and $({\bf H}3)$. Then there exists $\alpha>0$ such that as $u\to \infty$, $$\label{Eq:crossing terms je} \begin{split} {\mathbb E}\{M_u^E (K) M_u^E (K')\} &= o\left( \exp \left\{ -\frac{u^2}{2\sigma_T^2} -\alpha u^2 \right\}\right), \end{split}$$ where $K$ and $K'$ are different faces of $T$ with $d(K,K')=0$.* *Proof.* Let $I:= \bar{K}\cap \bar{K'}$, which is nonempty since $d(K,K')=0$. To simplify notation, let us assume without loss of generality: $$\begin{split} \tau(K) &= \{1, \ldots, m, m+1, \ldots, k\},\\ \tau(K')&= \{1, \ldots, m, k+1, \ldots, k+k'-m\}, \end{split}$$ where $0 \leq m \leq k \leq k' \leq N$ and $k'\geq 1$. If $k=0$, we conventionally consider $\tau(K)= \emptyset$. Under this assumption, $K\in \partial_k T$, $K'\in \partial_{k'} T$, $\text{dim} (I) =m$, and all elements in $\varepsilon(K)$ and $\varepsilon(K')$ are 1. We first consider the case when $k\geq 1$ and $l\ge 1$. By the Kac-Rice formula, $$\label{Eq:cross term je} \begin{split} &\quad {\mathbb E}\{M_u^E (K) M_u^E (K') \}\\ &\leq \int_{K} dt\int_{K'} dt'\int_u^\infty dx \int_u^\infty dx' \int_0^\infty dz_{k+1} \cdots \int_0^\infty dz_{k+k'-m} \int_0^\infty dw_{m+1} \cdots \int_0^\infty dw_{k}\\ &\quad {\mathbb E}\big \{ |\text{det} \nabla^2 X_{|K}(t) ||\text{det} \nabla^2 X_{|K'}(t') | \big| X(t)=x, X(t')=x', \nabla X_{|K}(t)=0, X_{k+1}(t)=z_{k+1}, \\ & \quad \ldots, X_{k+k'-m}(t)=z_{k+k'-m}, \nabla X_{|K'}(t')=0, X_{m+1}(t')=w_{m+1}, \ldots, X_{k}(t')=w_{k} \big\}\\ & \quad \times p_{t,t'}(x,x',0, z_{k+1}, \ldots,z_{k+k'-m}, 0, w_{m+1}, \ldots, w_{k})\\ &:= \int \int \int_{K\times K'} A(t,t',u)\,dtdt', \end{split}$$ where $p_{t,t'}(x,x',0, z_{k+1}, \ldots,z_{k+k'-m}, 0, w_{m+1}, \ldots, w_{k})$ is the density of the joint distribution of the variables involved in the given condition. We define $$\label{Eq:M0} \begin{split} \mathcal{M}_0:=\{t\in I :\, \nu(t)=\sigma_T^2, \, \nu_i(t)=0, \ \forall i=1,\ldots, k+k'-m\}, \end{split}$$ and consider two cases for $\mathcal{M}_0$. **Case (i): $\bm{\mathcal{M}_0 = \emptyset}$.** Under this case, since $I$ is a compact set, by the uniform continuity of conditional variance, there exist constants $\varepsilon_1, \delta_1>0$ such that $$\label{Eq:super-exponentially small} \begin{split} &\sup_{t\in B(I, \delta_1), \, t'\in B'(I, \delta_1)} {\rm Var} ( X(t) |\nabla X_{|K}(t), \nabla X_{|K'} (t'))\leq \sigma_T^2 - \varepsilon_1, \end{split}$$ where $B(I, \delta_1)=\{t\in K: d(t, I)\le \delta_1\}$ and $B'(I, \delta_1)=\{t'\in K': d(t', I)\le \delta_1\}$. By partitioning $K\times K'$ into $B(I, \delta_1) \times B'(I, \delta_1)$ and $(K\times K')\backslash (B(I, \delta_1) \times B'(I, \delta_1))$ and applying the Kac-Rice formula, we obtain $$\label{Eq:Mu3} \begin{split} &\quad {\mathbb E}\{M_u (K) M_u (K') \}\\ &\leq \int_{(K\times K')\backslash (B(I, \delta_1) \times B'(I, \delta_1))} dt dt' \, p_{\nabla X_{|K}(t), \nabla X_{|K'} (t')}(0,0)\\ &\quad \times {\mathbb E}\big\{ |{\rm det} \nabla^2 X_{|K}(t) | |{\rm det} \nabla^2 X_{|K'}(t') | \mathbbm{1}_{\{X(t)\geq u, X(t')\geq u\}}\big| \nabla X_{|K}(t)=0, \nabla X_{|K'}(t')=0 \big\} \\ &+ \int_{B(I, \delta_1) \times B'(I, \delta_1)} dt dt'\, p_{\nabla X_{|K}(t), \nabla X_{|K'} (t')}(0,0)\\ &\quad \times {\mathbb E}\big\{ |{\rm det} \nabla^2 X_{|K}(t) | |{\rm det} \nabla^2 X_{|K'}(t') | \mathbbm{1}_{\{X(t)\geq u, X(t')\geq u\}}\big| \nabla X_{|K}(t)=0, \nabla X_{|K'}(t')=0 \big\} \\ &:=I_1(u) + I_2(u). 
\end{split}$$ Note that $$\begin{split} (K\times K') \backslash (B(I, \delta_1) \times B'(I, \delta_1)) &= \Big( (K\backslash B(I, \delta_1)) \times B'(I, \delta_1) \Big) \bigcup \Big( B(I, \delta_1)\times (K\backslash B(I, \delta_1)) \Big) \\ & \quad \bigcup \Big( (K\backslash B(I, \delta_1))\times (K\backslash B(I, \delta_1)) \Big), \end{split}$$ where each product on the right hand side consists of two sets with a positive distance. It then follows from Proposition [Proposition 7](#Lem:cross terms disjoint sets){reference-type="ref" reference="Lem:cross terms disjoint sets"} that $I_1(u)$ is super-exponentially small. On the other hand, since $\mathbbm{1}_{\{X(t)\geq u, X(t')\geq u\}}\le \mathbbm{1}_{\{[X(t)+ X(t')]/2\geq u\}}$, one has $$\label{Eq:I2} \begin{split} I_2(u)&\le\int_{B(I, \delta_1) \times B'(I, \delta_1)} dt dt' \int_u^\infty dx\,p_{\frac{X(t)+X(t')}{2}}(x|\nabla X_{|K}(t)=0, \nabla X_{|K'}(t')=0) \\ &\quad \times {\mathbb E}\big\{ |{\rm det} \nabla^2 X_{|K}(t) | |{\rm det} \nabla^2 X_{|K'}(t') | \big| [X(t)+ X(t')]/2=x,\\ &\qquad \nabla X_{|K}(t)=0, \nabla X_{|K'}(t')=0 \big\}p_{\nabla X_{|K}(t), \nabla X_{|K'} (t')}(0,0). \end{split}$$ Combining this with [\[Eq:super-exponentially small\]](#Eq:super-exponentially small){reference-type="eqref" reference="Eq:super-exponentially small"}, we conclude that $I_2(u)$ and hence ${\mathbb E}\{M_u^E (X,K) M_u^E (X,K')\}$ are super-exponentially small. **Case (ii): $\bm{\mathcal{M}_0 \neq \emptyset}$.** Let $$\begin{split} B(\mathcal{M}_0,\delta_2):=\{(t,t')\in K\times K' : d(t,\mathcal{M}_0)\vee d(t',\mathcal{M}_0)\le \delta_2 \}, \end{split}$$ where $\delta_2$ is a small positive number to be specified. Note that, by the definitions of $\mathcal{M}_0$ and $B(\mathcal{M}_0,\delta_2)$, there exists $\varepsilon_2>0$ such that $$\label{Eq:delta2} \begin{split} \sup_{(t,t')\in (K\times K') \backslash B(\mathcal{M}_0,\delta_2)} &{\rm Var} ( [X(t)+X(t')]/2 |\nabla X_{|K}(t), \nabla X_{|K'} (t'))\leq \sigma_T^2 - \varepsilon_2. \end{split}$$ Similarly to [\[Eq:I2\]](#Eq:I2){reference-type="eqref" reference="Eq:I2"}, we obtain that $\int_{(K\times K') \backslash B(\mathcal{M}_0,\delta_2)}A(t,t',u)dtdt'$ is super-exponentially small. It suffices to show below that $\int _{B(\mathcal{M}_0,\delta_2)} A(t,t',u)\,dtdt'$ is super-exponentially small. Due to $({\bf H}3)$ and Proposition [Proposition 1](#prop:H3){reference-type="ref" reference="prop:H3"}, we can choose $\delta_2$ small enough such that for all $(t,t')\in B(\mathcal{M}_0,\delta_2)$, $$\begin{split} \Lambda_{K\cup K'}(t)&:=-{\mathbb E}\{X(t)\nabla^2 X_{|K\cup K'}(t)\}=-({\mathbb E}\{X(t) X_{ij}(t)\})_{i,j=1,\ldots, k+k'-m} \end{split}$$ are positive definite. Let $\{e_1, e_2, \ldots, e_N\}$ be the standard orthonormal basis of ${\mathbb R}^N$. For $t\in K$ and $t'\in K'$, let $e_{t, t'}=(t'-t)/\|t'-t\|$ and $\alpha_i(t, t')= {\langle}e_i, \Lambda_{K\cup K'}(t)e_{t,t'}\rangle$. 
Then $$\label{Eq:orthnormal basis decomp je} \Lambda_{K\cup K'}(t)e_{t,t'}=\sum_{i=1}^N {\langle}e_i, \Lambda_{K\cup K'}(t)e_{t,t'}\rangle e_i = \sum_{i=1}^N \alpha_i(t, t') e_i$$ and there exists $\alpha_0 >0$ such that for all $(t,t')\in B(\mathcal{M}_0,\delta_2)$, $$\label{Eq:posdef je} {\langle}e_{t,t'}, \Lambda_{K\cup K'}(t)e_{t,t'} \rangle\geq \alpha_0.$$ Since all elements in $\varepsilon(K)$ and $\varepsilon(K')$ are 1, we may write $$\begin{split} t &= (t_1, \ldots, t_m, t_{m+1}, \ldots, t_k, b_{k+1}, \ldots, b_{k+k'-m}, 0, \ldots, 0),\\ t' &= (t'_1, \ldots, t'_m, b_{m+1}, \ldots, b_k, t'_{k+1}, \ldots, t'_{k+k'-m}, 0, \ldots, 0), \end{split}$$ where $t_i \in (a_i, b_i)$ for $i\in \tau(K)$ and $t'_j \in (a_j, b_j)$ for $j \in \tau(K')$. Therefore, $$\label{Eq:e_k je} \begin{split} {\langle}e_i , e_{t,t'}\rangle&\geq 0, \quad \forall \ m+1 \leq i \leq k,\\ {\langle}e_i , e_{t,t'}\rangle&\leq 0, \quad \forall \ k+1\leq i \leq k+k'-m,\\ {\langle}e_i , e_{t,t'}\rangle&= 0, \quad \forall \ k+k'-m< i \leq N. \end{split}$$ Let $$\label{Eq:D_k je} \begin{split} D_i &= \{ (t,t')\in B(\mathcal{M}_0,\delta_2): \alpha_i (t,t')\geq \beta_i \}, \quad \text{if}\ m+1 \leq i \leq k,\\ D_i &= \{ (t,t')\in B(\mathcal{M}_0,\delta_2): \alpha_i (t,t')\leq -\beta_i \}, \quad \text{if}\ k+1\leq i \leq k+k'-m,\\ D_0 &= \bigg\{ (t,t')\in B(\mathcal{M}_0,\delta_2): \sum_{i=1}^m \alpha_i (t,t'){\langle}e_i , e_{t,t'}\rangle\geq \beta_0\bigg \}, \end{split}$$ where $\beta_0, \beta_1,\ldots, \beta_{k+k'-m}$ are positive constants such that $\beta_0 + \sum_{i=m+1}^{k+k'-m} \beta_i < \alpha_0$. It follows from ([\[Eq:e_k je\]](#Eq:e_k je){reference-type="ref" reference="Eq:e_k je"}) and ([\[Eq:D_k je\]](#Eq:D_k je){reference-type="ref" reference="Eq:D_k je"}) that, if $(t,s)$ does not belong to any of $D_0, D_{m+1}, \ldots, D_{k+k'-m}$, then by ([\[Eq:orthnormal basis decomp je\]](#Eq:orthnormal basis decomp je){reference-type="ref" reference="Eq:orthnormal basis decomp je"}), $${\langle}\Lambda_{K\cup K'}(t)e_{t,t'}, e_{t,t'}\rangle= \sum_{i=1}^N \alpha_i(t,t') {\langle}e_i , e_{t,t'}\rangle\leq \beta_0 + \sum_{i=m+1}^{k+k'-m} \beta_i < \alpha_0,$$ which contradicts ([\[Eq:posdef je\]](#Eq:posdef je){reference-type="ref" reference="Eq:posdef je"}). Thus $D_0\cup \cup_{i=m+1}^{k+k'-m} D_i$ is a covering of $B(\mathcal{M}_0,\delta_2)$. By ([\[Eq:cross term je\]](#Eq:cross term je){reference-type="ref" reference="Eq:cross term je"}), $$\begin{split} {\mathbb E}&\{M_u^E (K) M_u^E (K')\} \leq \int_{D_0} A(t,t',u)\,dtdt' + \sum_{i=m+1}^{k+k'-m} \int_{D_i}A(t,t',u)\,dtdt'. \end{split}$$ By the Kac-Rice metatheorem and the fact $\mathbbm{1}_{\{X(t)\geq u, Y(s)\geq u\}}\le \mathbbm{1}_{\{X(t)\geq u\}}$, we obtain $$\label{Eq:D0} \begin{split} &\quad \int_{D_0}A(t,t',u)\,dtdt'\\ &\leq \int_{D_0} dtdt' \int_u^\infty dx \, p_{\nabla X_{|K}(t), \nabla X_{|K'} (t')}(0,0) p_{X(t)}(x|\nabla X_{|K}(t)=0, \nabla X_{|K'} (t')=0)\\ &\quad \times {\mathbb E}\big\{ |\text{det} \nabla^2 X_{|K}(t) ||\text{det} \nabla^2 X_{|K'}(t') | \big| X(t)=x, \nabla X_{|K}(t)=0, \nabla X_{|K'}(t')=0 \big\}, \end{split}$$ and that for $i= m+1, \ldots, k$, $$\label{Eq:Di} \begin{split} &\quad \int_{D_i}A(t,t',u)\,dtdt'\\ &\le \int_{D_i} dtdt' \int_u^\infty dx \int_0^\infty dw_i \, p_{X(t), \nabla X_{|K}(t), X_i(t'), \nabla X_{|K'} (t')}(x,0,w_i,0)\\ &\quad \times {\mathbb E}\big \{ |\text{det} \nabla^2 X_{|K}(t)| |\text{det} \nabla^2 X_{|K'}(t')| \big| X(t)=x, \nabla X_{|K}(t)=0, X_i(t')=w_i, \nabla X_{|K'} (t')=0 \big\}. 
\end{split}$$ Comparing [\[Eq:D0\]](#Eq:D0){reference-type="eqref" reference="Eq:D0"} and [\[Eq:Di\]](#Eq:Di){reference-type="eqref" reference="Eq:Di"} with Eqs. (4.33) and (4.36) respectively in the proof of Theorem 4.8 in @ChengXiao2014, one can employ the same reasoning therein to show that ${\rm Var}(X(t)|\nabla X_{|K}(t), \nabla X_{|K'} (t'))<\sigma_T^2$ uniformly on $D_0$ and ${\mathbb P}(X(t)>u, X_i(t')>0|\nabla X_{|K}(t)=0, \nabla X_{|K'} (t')=0)=o(e^{-u^2/(2\sigma_T^2)-\alpha u^2})$ uniformly on $D_i$, and deduce that $\int_{D_0}A(t,t',u)\,dtdt'$ and $\int_{D_i} A(t,t',u)\,dtdt'$ ($i= m+1, \ldots, k$) are super-exponentially small. It is similar to show that $\int_{D_i} A(t,t',u)\,dtdt'$ are super-exponentially small for $i=k+1, \ldots, k+k'-m$. For the case $k=0$ or $l=0$, the argument is even simpler when applying the Kac-Rice formula; the details are omitted here. The proof is finished. ◻ In the proof of Proposition [Proposition 8](#Lem:cross terms je){reference-type="ref" reference="Lem:cross terms je"}, we have shown in [\[Eq:Mu3\]](#Eq:Mu3){reference-type="eqref" reference="Eq:Mu3"} that, if $\mathcal{M}_0 = \emptyset$, then the moment ${\mathbb E}\{M_u (X,K) M_u (X,K')\}$ is super-exponentially small. It is important to note that, the boundary condition [\[Eq:boundary\]](#Eq:boundary){reference-type="eqref" reference="Eq:boundary"} implies (and generalizes) the condition $\mathcal{M}_0 = \emptyset$, yielding the following result. **Proposition 9**. *Let $\{X(t),\, t\in T\}$ be a centered Gaussian random field satisfying $({\bf H}1)$, $({\bf H}2)$ and the boundary condition [\[Eq:boundary\]](#Eq:boundary){reference-type="eqref" reference="Eq:boundary"}. Then there exists $\alpha>0$ such that as $u\to \infty$, $$\begin{split} {\mathbb E}\{M_u (K) M_u (K')\} &= o\Big(\exp \Big\{ -\frac{u^2}{2\sigma_T^2} -\alpha u^2 \Big\}\Big), \end{split}$$ where $K$ and $K'$ are adjacent faces of $T$.* # Estimation of the difference between EEC and the upper bound {#sec:diff} In this section, we demonstrate that the difference between ${\mathbb E}\{\chi(A_u)\}$ and the expected number of extended outward local maxima, i.e. the upper bound in [\[Ineq:upperbound je\]](#Ineq:upperbound je){reference-type="eqref" reference="Ineq:upperbound je"}, is super-exponentially small. **Proposition 10**. *Let $\{X(t),\, t\in T\}$ be a centered Gaussian random field satisfying $({\bf H}1)$, $({\bf H}2)$ and $({\bf H}3)$. Then there exists $\alpha>0$ such that for any $K\in \partial_k T$ with $k\ge 0$, as $u\to \infty$, $$\label{Eq:simplify-major-term} \begin{split} {\mathbb E}\{M_u^E (K)\} &=(-1)^{k}\int_K {\mathbb E}\big\{{\rm det}\nabla^2 X_{|K}(t)\mathbbm{1}_{\{X(t)\geq u, \ \varepsilon^*_\ell X_\ell(t) \geq 0 \ {\rm for \ all}\ \ell\notin \tau(K)\}} \big| \nabla X_{|K}(t)=0\big\}\\ &\quad \times p_{\nabla X_{|K}(t)}(0) dt +o\left( \exp \left\{ -\frac{u^2}{2\sigma_T^2} -\alpha u^2 \right\}\right)\\ &= (-1)^{k} {\mathbb E}\bigg\{\bigg(\sum^k_{i=0} (-1)^i \mu_i(K)\bigg)\bigg\} + o\left( \exp \left\{ -\frac{u^2}{2\sigma_T^2} -\alpha u^2 \right\}\right). 
\end{split}$$* *Proof.* The second equality in [\[Eq:simplify-major-term\]](#Eq:simplify-major-term){reference-type="eqref" reference="Eq:simplify-major-term"} arises from the application of the Kac-Rice formula: $$\begin{split} &\quad {\mathbb E}\bigg\{\bigg(\sum^k_{i=0} (-1)^i \mu_i(K)\bigg)\bigg\} \\ &=\sum^k_{i=0} (-1)^i\int_K {\mathbb E}\big\{|{\rm det}\nabla^2 X_{|K}(t)|\mathbbm{1}_{\{\text{index} (\nabla^2 X_{|K}(t))=i\}} \\ &\quad \times \mathbbm{1}_{\{X(t)\geq u, \ \varepsilon^*_\ell X_\ell(t) \geq 0 \ {\rm for \ all}\ \ell\notin \tau(K)\}} \big| \nabla X_{|K}(t)=0\big\}p_{\nabla X_{|K}(t)}(0)\, dt\\ &=\int_K {\mathbb E}\big\{{\rm det}\nabla^2 X_{|K}(t)\mathbbm{1}_{\{X(t)\geq u, \ \varepsilon^*_\ell X_\ell(t) \geq 0 \ {\rm for \ all}\ \ell\notin \tau(K)\}} \big| \nabla X_{|K}(t)=0\big\} p_{\nabla X_{|K}(t)}(0) \, dt. \end{split}$$ To prove the first approximation in [\[Eq:simplify-major-term\]](#Eq:simplify-major-term){reference-type="eqref" reference="Eq:simplify-major-term"} and convey the main idea, we start with the case when the face $K$ represents the interior of $T$. **Case (i): $\bm{k=N}$.** By the Kac-Rice formula, we have $$\begin{split} &\quad {\mathbb E}\{M_u^E (K)\}\\ &=\int_K p_{\nabla X(t)}(0) dt \int_u^\infty dx\, p_{X(t)}(x|\nabla X(t)=0){\mathbb E}\big\{{\rm det}\nabla^2 X(t)\mathbbm{1}_{\{\nabla^2 X(t)\prec 0\}} \big| X(t)=x, \nabla X(t)=0\big\}\\ &:=\int_K p_{\nabla X(t)}(0) dt \int_u^\infty A(t,x)dx. \end{split}$$ Let $$\label{Eq:O-Udelta} \begin{split} \mathcal{M}_1&=\{t\in \bar{K}=T: \nu(t)=\sigma_T^2, \ \nabla \nu(t)=2{\mathbb E}\{X(t)\nabla X(t)\}=0 \},\\ B(\mathcal{M}_1, \delta_1)&=\left\{t\in K: d\left(t, \mathcal{M}_1\right)\le\delta_1\right \}, \end{split}$$ where $\delta_1$ is a small positive number to be specified. Then, we only need to estimate $$\label{Eq:A1} \begin{split} \int_{B(\mathcal{M}_1, \delta_1)} p_{\nabla X(t)}(0) dt \int_u^\infty A(t,x)dx, \end{split}$$ since the integral above with $B(\mathcal{M}_1, \delta_1)$ replaced by $K\backslash B(\mathcal{M}_1, \delta_1)$ becomes super-exponentially small due to the fact $$\sup_{t\in K\backslash B(\mathcal{M}_1, \delta_1)} {\rm Var}(X(t) | \nabla X(t)=0) < \sigma_T^2.$$ Notice that, by Proposition [Proposition 1](#prop:H3){reference-type="ref" reference="prop:H3"}, ${\mathbb E}\{X(t)\nabla^2 X(t)\} \prec 0$ for all $t\in \mathcal{M}_1$. Thus there exists $\delta_1$ small enough such that ${\mathbb E}\{X(t)\nabla^2 X(t)\}\prec 0$ for all $t\in B(\mathcal{M}_1, \delta_1)$. In particular, let $\lambda_0$ be the largest eigenvalue of ${\mathbb E}\{X(t) \nabla^2 X(t)\}$ over $B(\mathcal{M}_1, \delta_1)$, then $\lambda_0<0$ by the uniform continuity. Also note that ${\mathbb E}\{X(t)\nabla X(t)\}$ tends to 0 as $\delta_1\to 0$. Therefore, as $\delta_1\to 0$, $$\begin{split} &\quad {\mathbb E}\{X_{ij}(t)|X(t)=x, \nabla X(t)=0\}\\ &=({\mathbb E}\{X_{ij}(t)X(t)\}, {\mathbb E}\{X_{ij}(t)X_1(t)\}, \ldots, {\mathbb E}\{X_{ij}(t)X_N(t)\}) \left[ {\rm Cov}(X(t), \nabla X(t)) \right]^{-1} (x, 0, \ldots, 0)^T\\ &=\frac{{\mathbb E}\{X_{ij}(t)X(t)\} x}{\sigma_T^2}(1+o(1)). \end{split}$$ Thus, for all $x\ge u$ and $t\in B(\mathcal{M}_1, \delta_1)$ with $\delta_1$ small enough, $$\begin{split} \Sigma_1(t,x) &:={\mathbb E}\{\nabla^2 X(t)|X(t)=x, \nabla X(t)=0\}\prec 0. \end{split}$$ Let $\Delta_1(t,x)=\nabla^2 X(t) - \Sigma_1(t,x)$. 
We have $$\label{Eq:int_A} \begin{split} \int_u^\infty A(t,x)dx &=\int_u^\infty p_{X(t)}(x|\nabla X(t)=0){\mathbb E}\big\{{\rm det}(\Delta_1(t,x)+ \Sigma_1(t,x))\\ &\quad \times \mathbbm{1}_{\{\Delta_1(t,x)+ \Sigma_1(t,x)\prec 0\}} \big| X(t)=x, \nabla X(t)=0\big\}\, dx\\ &:=\int_u^\infty p_{X(t)}(x|\nabla X(t)=0)E(t,x)\, dx. \end{split}$$ Note that the following is a centered Gaussian random matrix not depending on $x$: $$\begin{split} \Omega(t)&=(\Omega_{ij}(t))_{1\le i,j\le N}=(\Delta_1(t,x) | X(t)=x, \nabla X(t)=0). \end{split}$$ Let $h_t(v)$ denote the density of the Gaussian vector $((\Omega_{ij}^X(t))_{1\le i\le j\le N}$ with $v=(v_{ij})_{1\le i\le j\le N}\in {\mathbb R}^{N(N+1)/2}$. Then $$\label{Eq:E(t,s,xy)} \begin{split} E(t,x) &= {\mathbb E}\big\{{\rm det}(\Omega(t)+ \Sigma_1(t,x)) \mathbbm{1}_{\{\Omega(t)+ \Sigma_1(t,x)\prec 0\}} \big\}\\ &= \int_{v:\, (v_{ij})+\Sigma_1(t,x)\prec 0} {\rm det}((v_{ij})+ \Sigma_1(t,x)) h_t(v)dv, \end{split}$$ where $(v_{ij})$ is the abbreviations of the matrix $v=(v_{ij})_{1\le i, j\le N}$. There exists a constant $c>0$ such that for $\delta_1$ small enough and all $t\in B(\mathcal{M}_1, \delta_1)$, and $x\ge u$, we have $$(v_{ij})+ \Sigma_1(t,x) \prec 0, \quad \forall \|(v_{ij})\|:=\Big(\sum_{i,j=1}^N v_{ij}^2\Big)^{1/2}<cu.$$ This implies $\{v:\, (v_{ij})+\Sigma_1(t,x)\not\prec 0\} \subset \{v:\, \|(v_{ij})\|\ge cu\}$. Consequently, the integral in [\[Eq:E(t,s,xy)\]](#Eq:E(t,s,xy)){reference-type="eqref" reference="Eq:E(t,s,xy)"} with the domain of integration replaced by $\{v:\, (v_{ij})+\Sigma_1(t,x)\not\prec 0\}$ is $o(e^{-\alpha'u^2})$ uniformly for all $t\in B(\mathcal{M}_1, \delta_1)$, where $\alpha'$ is a positive constant. As a result, we conclude that, uniformly for all $t\in B(\mathcal{M}_1, \delta_1)$ and $x\ge u$, $$\begin{split} E(t,x) = \int_{{\mathbb R}^{N(N+1)/2}} {\rm det}((v_{ij})+ \Sigma_1(t,x))h_t(v)dv + o(e^{-\alpha'u^2}). \end{split}$$ By substituting this result into [\[Eq:int_A\]](#Eq:int_A){reference-type="eqref" reference="Eq:int_A"}, we observe that the indicator function $\mathbbm{1}_{\{\nabla^2 X(t)\prec 0\}}$ in [\[Eq:A1\]](#Eq:A1){reference-type="eqref" reference="Eq:A1"} can be eliminated, causing only a super-exponentially small error. Thus, for sufficiently large $u$, there exists $\alpha>0$ such that $$\begin{split} {\mathbb E}\{M_u^E (K)\} &=\int_K p_{\nabla X(t)}(0) dt \int_u^\infty {\mathbb E}\{{\rm det}\nabla^2 X(t)|X(t)=x, \nabla X(t)=0\} \\ &\quad \times p_{X(t)}(x|\nabla X(t)=0)dx +o\Big( \exp \Big\{ -\frac{u^2}{2\sigma_T^2} -\alpha u^2 \Big\}\Big). \end{split}$$ **Case (ii): $\bm{k,l\ge 0}$.** It is worth noting that when $k=0$, the terms in [\[Eq:simplify-major-term\]](#Eq:simplify-major-term){reference-type="eqref" reference="Eq:simplify-major-term"} related to the Hessian will vanish, simplifying the proof. Therefore, without loss of generality, let $k\ge 1$, $\tau(K)= \{1, \cdots, k\}$ and assume all the elements in $\varepsilon(K)$ are 1. By the Kac-Rice formula, $$\begin{split} {\mathbb E}\{M_u^E (K)\} &=(-1)^k\int_K p_{\nabla X_{|K}(t)}(0) dt \int_u^\infty p_{X(t)}\big(x\big |\nabla X_{|K}(t)=0\big){\mathbb E}\big\{{\rm det}\nabla^2 X_{|K}(t) \\ &\quad \times \mathbbm{1}_{\{\nabla^2 X_{|K}(t)\prec 0\}}\mathbbm{1}_{\{X_{k+1}(t)>0,\ldots, X_N(t)>0\}} \big| X(t)=x, \nabla X_{|K}(t)=0\big\}dx\\ &:=(-1)^k\int_K p_{\nabla X_{|K}(t)}(0) dt \int_u^\infty A'(t,x)dx. 
\end{split}$$ Let $$\label{Eq:M2} \begin{split} \mathcal{M}_2&=\{t\in \bar{K}: \nu(t)=\sigma_T^2, \ \nabla \nu_{|K}(t)= 2{\mathbb E}\{X(t)\nabla X_{|K}(t)\}=0 \},\\ B(\mathcal{M}_2, \delta_2)&=\left\{t\in K: d\left(t, \mathcal{M}_2\right)\le\delta_2\right \}, \end{split}$$ where $\delta_2$ is another small positive number to be specified. Here, we only need to estimate $$\label{Eq:A'} \begin{split} \int_{B(\mathcal{M}_2, \delta_2)} p_{\nabla X_{|K}(t)}(0) dt \int_u^\infty A'(t,x)dx, \end{split}$$ since the integral above with $B(\mathcal{M}_2, \delta_2)$ replaced by $K\backslash B(\mathcal{M}_2, \delta_2)$ is super-exponentially small due to the fact $$\sup_{t\in K\backslash B(\mathcal{M}_2, \delta_2)} {\rm Var}(X(t) | \nabla X(t)=0) < \sigma_T^2.$$ On the other hand, following similar arguments in the proof for Case (i), we have that removing the indicator functions $\mathbbm{1}_{\{\nabla^2 X_{|K}(t)\prec 0\}}$ in [\[Eq:A\'\]](#Eq:A'){reference-type="eqref" reference="Eq:A'"} will only cause a super-exponentially small error. Combining these results, we conclude that the first approximation in [\[Eq:simplify-major-term\]](#Eq:simplify-major-term){reference-type="eqref" reference="Eq:simplify-major-term"} holds, thus completing the proof. ◻ From the proof of Proposition [Proposition 10](#Prop:simlify the high moment je){reference-type="ref" reference="Prop:simlify the high moment je"}, it is evident that the same line of reasoning and arguments can be readily extended to ${\mathbb E}\{M_u (X,K)\}$, leading to the following result. **Proposition 11**. *Let $\{X(t),\, t\in T\}$ be a centered Gaussian random field satisfying $({\bf H}1)$, $({\bf H}2)$ and $({\bf H}3)$. Then there exists a constant $\alpha>0$ such that for any $K\in \partial_k T$, as $u\to \infty$, $$\begin{split} {\mathbb E}\{M_u (K)\} &=(-1)^k\int_K {\mathbb E}\big\{{\rm det}\nabla^2 X_{|K}(t)\mathbbm{1}_{\{X(t)\geq u\}}\big|\nabla X_{|K}(t)=0\big\}p_{\nabla X_{|K}(t)}(0) dt\\ &\quad +o\left( \exp \left\{ -\frac{u^2}{2\sigma_T^2} -\alpha u^2 \right\}\right). \end{split}$$* # Proofs of the main results {#sec:proof} *Proof of Theorem [Theorem 2](#Thm:MEC approximation je){reference-type="ref" reference="Thm:MEC approximation je"}.* By Propositions [Proposition 6](#Lem:factorial moments je){reference-type="ref" reference="Lem:factorial moments je"}, [Proposition 7](#Lem:cross terms disjoint sets){reference-type="ref" reference="Lem:cross terms disjoint sets"} and [Proposition 8](#Lem:cross terms je){reference-type="ref" reference="Lem:cross terms je"}, together with the fact $M_u^E (K)\le M_u (K)$, we obtain that the factorial moments and the last two sums in [\[Ineq:lowerbound je\]](#Ineq:lowerbound je){reference-type="eqref" reference="Ineq:lowerbound je"} are super-exponentially small. Therefore, from [\[Ineq:upperbound je\]](#Ineq:upperbound je){reference-type="eqref" reference="Ineq:upperbound je"} and [\[Ineq:lowerbound je\]](#Ineq:lowerbound je){reference-type="eqref" reference="Ineq:lowerbound je"}, it follows that there exists a constant $\alpha>0$ such that as $u\to \infty$, $$\begin{split} {\mathbb P}\left\{\sup_{t\in T} X(t) \geq u\right\}= \sum_{k=0}^N\sum_{K\in \partial_k T} {\mathbb E}\{M_u^E (K) \}+ o\left( \exp \left\{ -\frac{u^2}{2\sigma_T^2} -\alpha u^2 \right\}\right). \end{split}$$ This desired result follows as an immediate consequence of Proposition [Proposition 10](#Prop:simlify the high moment je){reference-type="ref" reference="Prop:simlify the high moment je"}. 
◻ *Proof of Theorem [Theorem 3](#Thm:MEC approximation je2){reference-type="ref" reference="Thm:MEC approximation je2"}.* Remark [\[remark:M_u\]](#remark:M_u){reference-type="ref" reference="remark:M_u"} indicates that both inequalities [\[Ineq:upperbound je\]](#Ineq:upperbound je){reference-type="eqref" reference="Ineq:upperbound je"} and [\[Ineq:lowerbound je\]](#Ineq:lowerbound je){reference-type="eqref" reference="Ineq:lowerbound je"} hold with $M_u^E(\cdot)$ replaced by $M_u(\cdot)$. Therefore, the corresponding factorial moments and the last two sums in [\[Ineq:lowerbound je\]](#Ineq:lowerbound je){reference-type="eqref" reference="Ineq:lowerbound je"} with $M_u^E(\cdot)$ replaced by $M_u(\cdot)$ are super-exponentially small by Propositions [Proposition 6](#Lem:factorial moments je){reference-type="ref" reference="Lem:factorial moments je"}, [Proposition 7](#Lem:cross terms disjoint sets){reference-type="ref" reference="Lem:cross terms disjoint sets"} and [Proposition 9](#Lem:cross terms je2){reference-type="ref" reference="Lem:cross terms je2"}. Consequently, there exists a constant $\alpha>0$ such that as $u\to \infty$, $$\begin{split} {\mathbb P}\left\{\sup_{t\in T} X(t) \geq u\right\} = \sum_{k=0}^N\sum_{K\in \partial_k T} {\mathbb E}\{M_u (K)\}+ o\left( \exp \left\{ -\frac{u^2}{2\sigma_T^2} -\alpha u^2 \right\}\right). \end{split}$$ The desired result follows directly from Proposition [Proposition 11](#Prop:simlify the high moment je2){reference-type="ref" reference="Prop:simlify the high moment je2"}. ◻ *Proof of Theorem [Theorem 4](#Thm:MEC approximation){reference-type="ref" reference="Thm:MEC approximation"}.* Note that, in the proof of Theorem [Theorem 2](#Thm:MEC approximation je){reference-type="ref" reference="Thm:MEC approximation je"}, we have seen that the points in $\mathcal{M}_2$ defined in [\[Eq:M2\]](#Eq:M2){reference-type="eqref" reference="Eq:M2"} make major contribution to the excursion probability. That is, with up to a super-exponentially small error, we can focus only on those faces, say $J$, whose closure $\bar{J}$ contains the unique point $t^*$ with $\nu(t^*)=\sigma_T^2$ and satisfying $\tau(J)\subset \mathcal{I}(t^*)$ (i.e., the partial derivatives of $\nu$ are 0 at $t^*$ restricted on $J$). To formalize this concept, we define a set of faces $T^*$ as follows: $$\begin{split} T^* &=\{J\in \partial_k T: t^*\in \bar{J}, \, \tau(J)\subset \mathcal{I}(t^*), \, k=0, \ldots, N\}. \end{split}$$ For each $J\in T^*$, let $$\begin{split} M_u^{E^*} (J) & := \# \{ t\in J: X(t)\geq u, \nabla X_{|J}(t)=0, \nabla^2 X_{|J}(t)\prec 0, \\ & \qquad \qquad \qquad \varepsilon^*_jX_j(t) \geq 0 \ {\rm for \ all}\ j\in \mathcal{I}(t^*)\setminus \tau(J) \}. \end{split}$$ Note that, both inequalities [\[Ineq:upperbound je\]](#Ineq:upperbound je){reference-type="eqref" reference="Ineq:upperbound je"} and [\[Ineq:lowerbound je\]](#Ineq:lowerbound je){reference-type="eqref" reference="Ineq:lowerbound je"} remain valid when we replace $M_u^E(J)$ with $M_u^{E^*}(J)$ for faces $J$ belonging to $T^*$, and replace $M_u^E(J)$ with $M_u(J)$ otherwise. 
Employing reasoning analogous to that used in the derivation of Theorems [Theorem 2](#Thm:MEC approximation je){reference-type="ref" reference="Thm:MEC approximation je"} and [Theorem 3](#Thm:MEC approximation je2){reference-type="ref" reference="Thm:MEC approximation je2"}, we obtain that there exists $\alpha>0$ such that as $u\to \infty$, $$\begin{split} {\mathbb P}\left\{\sup_{t\in T} X(t) \geq u\right\}= \sum_{J\in T^*} {\mathbb E}\{M_u^{E^*} (J) \}+ o\left( \exp \left\{ -\frac{u^2}{2\sigma_T^2} -\alpha u^2 \right\}\right). \end{split}$$ The desired result is then deduced from Proposition [Proposition 10](#Prop:simlify the high moment je){reference-type="ref" reference="Prop:simlify the high moment je"}. ◻ # Gaussian fields with a unique maximum point of the variance {#sec:unique-max} In this section, we delve deeper into EEC approximations when the variance function $\nu(t)$ reaches its maximum value $\sigma_T^2$ at a solitary point $t^*$. While Theorem [Theorem 4](#Thm:MEC approximation){reference-type="ref" reference="Thm:MEC approximation"} provides an implicit formula for such scenarios, our objective here is to obtain explicit formulae by employing integral approximation techniques based on the Kac-Rice formula. To facilitate this process, we begin by presenting some auxiliary results related to the Laplace method for integral approximations. ## Auxiliary lemmas on Laplace approximation The following two lemmas state results on the Laplace approximation method. Lemma [Lemma 12](#Lem:Laplace method 1){reference-type="ref" reference="Lem:Laplace method 1"} can be found in many books on the approximations of integrals; here we refer to @Wong2001. Lemma [Lemma 13](#Lem:Laplace method 2){reference-type="ref" reference="Lem:Laplace method 2"} can be derived by following arguments similar to those in the proof of the Laplace method for the case of boundary points in [@Wong2001]. **Lemma 12**. *\[Laplace method for interior points\] Let $t_0$ be an interior point of $T$. Suppose the following conditions hold: (i) $g(t) \in C(T)$ and $g(t_0) \neq 0$; (ii) $h(t) \in C^2(T)$ and attains its minimum only at $t_0$; and (iii) $\nabla^2 h(t_0)$ is positive definite. Then as $u \to \infty$, $$\int_T g(t) e^{-uh(t)} dt =\frac{(2\pi)^{N/2}}{u^{N/2} ({\rm det} \nabla^2 h(t_0))^{1/2}} g(t_0) e^{-uh(t_0)} (1+ o(1)).$$* **Lemma 13**. *\[Laplace method for boundary points\] Let $t_0\in K \in \partial_k T$ with $0\leq k\leq N-1$. Suppose that conditions (i), (ii) and (iii) in Lemma [Lemma 12](#Lem:Laplace method 1){reference-type="ref" reference="Lem:Laplace method 1"} hold, and additionally $\nabla h(t_0)=0$. 
Then as $u \to \infty$, $$\begin{split} \int_T g(t) e^{-uh(t)} dt &=\frac{(2\pi)^{N/2} {\mathbb P}\{Z_{i_\ell}\varepsilon_{i_\ell}^*>0, \forall i_\ell \notin \tau(K)\}}{u^{N/2} ({\rm det} \nabla^2 h(t_0))^{1/2}}g(t_0) e^{-uh(t_0)} (1+ o(1)), \end{split}$$ where $(Z_{i_1}, \ldots, Z_{i_{N-k}})$ is a centered $(N-k)$-dimensional Gaussian vector with covariance matrix $(h_{i_\ell i_{\ell'}}(t_0))_{i_\ell,i_{\ell'}\notin \tau(K)}$ and $\tau(K)$ and $\varepsilon_{i_\ell}^*$ are defined in Section [2](#sec:notation){reference-type="ref" reference="sec:notation"}.* ## Gaussian fields satisfying the boundary condition [\[Eq:boundary\]](#Eq:boundary){reference-type="eqref" reference="Eq:boundary"} For $t\in T$, we define the following notation for conditional variances: $$\label{eq:La} \begin{split} \tilde{\nu}_{|K}(t) = {\rm Var}(X(t)|\nabla X_{|K}(t)=0), \quad \tilde{\nu}(t) = {\rm Var}(X(t)|\nabla X(t)=0). \end{split}$$ The following result provides explicit approximations to the excursion probabilities when the maximum of the variance is reached only at a single point and the boundary condition [\[Eq:boundary\]](#Eq:boundary){reference-type="eqref" reference="Eq:boundary"} is satisfied. **Theorem 14**. *Let $\{X(t),\, t\in T\}$ be a centered Gaussian random field satisfying $({\bf H}1)$ and $({\bf H}2)$. Suppose $\nu$ attains its maximum $\sigma_T^2$ only at $t^* \in K \in \partial_k T$, $\nu_i(t^*)\neq 0$ for all $i\notin \tau(K)$, and $\nabla^2 \nu_{|K}(t^*)\prec 0$. Then, as $u\to \infty$, $$\label{eq:interior1} \begin{split} {\mathbb P}\left\{\sup_{t\in T} X(t) \geq u\right\} &= \Psi\left(\frac{u}{\sigma_T}\right) +o\left( \exp \left\{ -\frac{u^2}{2\sigma_T^2} -\alpha u^2 \right\}\right) \text{\rm for some $\alpha>0$,}\quad \text{ if } k=0,\\ {\mathbb P}\left\{\sup_{t\in T} X(t) \geq u \right\} &= \sqrt{\frac{{\rm det}(\Sigma_K(t^*))}{{\rm det}(\Lambda_K(t^*) + \Sigma_K(t^*))}}\Psi\left(\frac{u}{\sigma_T}\right)(1+o(1)), \quad \text{ if } k\ge 1, \end{split}$$ where $\Lambda_K(t^*)$ and $\Sigma_K(t^*)$ are defined in [\[eq:Sigma\]](#eq:Sigma){reference-type="eqref" reference="eq:Sigma"}.* *Proof.* If $k=0$, then $\nu_i(t^*)\neq 0$ for all $i\ge 1$, and hence $\mathcal{I}(t^*)=\emptyset$. The first line of [\[eq:interior1\]](#eq:interior1){reference-type="eqref" reference="eq:interior1"} follows from Theorem [Theorem 4](#Thm:MEC approximation){reference-type="ref" reference="Thm:MEC approximation"}, which yields $$\begin{split} {\mathbb P}\left\{\sup_{t\in T} X(t) \geq u\right\} = {\mathbb P}\{X(t^*)\geq u\} +o\left( \exp \left\{ -\frac{u^2}{2\sigma_T^2} -\alpha u^2 \right\}\right). \end{split}$$ Now, let us consider the case when $k\ge 1$. Note that the assumption on partial derivatives of $\nu(t)$ implies $\mathcal{I}(t^*)=\tau(K)$. By Theorem [Theorem 4](#Thm:MEC approximation){reference-type="ref" reference="Thm:MEC approximation"}, we have $$\label{eq:EP=k1} \begin{split} {\mathbb P}\left\{\sup_{t\in T} X(t) \geq u \right\} = (-1)^k I(u, K) + o\left( \exp \left\{ -\frac{u^2}{2\sigma_T^2} -\alpha u^2 \right\}\right), \end{split}$$ where $$\begin{split} I(u,K)& =\int_K {\mathbb E}\big\{{\rm det}\nabla^2 X_{|K}(t)\mathbbm{1}_{\{X(t)\geq u\}} \big| \nabla X_{|K}(t)=0\big\} p_{\nabla X_{|K}(t)}(0)dt \\ &=\int_u^\infty \int_K \frac{{(2\pi)^{-(k+1)/2}} }{\sqrt{\tilde{\nu}_{|K}(t){\rm det}\left(\Lambda_K(t)\right)}} {\mathbb E}\big\{{\rm det}\nabla^2 X_{|K}(t)\big|X(t)=x, \nabla X_{|K}(t)=0\big\} e^{-\frac{x^2}{2\tilde{\nu}_{|K}(t)}}dtdx. 
\end{split}$$ Applying the Laplace method in Lemma [Lemma 12](#Lem:Laplace method 1){reference-type="ref" reference="Lem:Laplace method 1"} with $$\begin{split} g(t) &= \frac{1}{\sqrt{\tilde{\nu}_{|K}(t){\rm det}\left(\Lambda_K(t)\right)}} {\mathbb E}\big\{{\rm det}\nabla^2 X_{|K}(t)\big|X(t)=x, \nabla X_{|K}(t)=0\big\},\\ h(t) &= \frac{1}{{2\tilde{\nu}_{|K}(t)}}, \quad u = x^2, \end{split}$$ and noting that the Hessian matrix of $1/(2\tilde{\nu}_{|K}(t))$ evaluated at $t^*$ is $$\label{eq:Theta-Hessian} -\frac{1}{2\tilde{\nu}_{|K}^2(t^*)} \left(\tilde{\nu}_{ij}(t^*)\right)_{i, j \in \tau(K)}= -\frac{1}{2\sigma_T^4} \nabla^2 \tilde{\nu}_{|K}(t^*)\succ 0,$$ we obtain $$\label{eq:I(u,k)2} \begin{split} I(u,K) &= \frac{(2\sigma_T^4)^{k/2}}{\sqrt{2\pi\sigma_T^2{\rm det}\left(\Lambda_K(t^*)\right)}\sqrt{|{\rm det}\nabla^2 \tilde{\nu}_{|K}(t^*)|}} I(u) (1+o(1)), \end{split}$$ where $$\label{eq:Q} \begin{split} I(u)&=\int^\infty_u {\mathbb E}\big\{{\rm det} \nabla^2 X_{|K}(t^*) \big|X(t^*)=x, \nabla X_{|K}(t^*)=0 \big\} x^{-k}e^{-\frac{x^2}{2\sigma_T^2}}\, dx\\ & ={\rm det}(\Sigma_K(t^*))\int^\infty_u {\mathbb E}\big\{{\rm det} (Q\nabla^2 X_{|K}(t^*)Q) \big|X(t^*)=x, \nabla X_{|K}(t^*)=0 \big\} x^{-k}e^{-\frac{x^2}{2\sigma_T^2}} dx. \end{split}$$ Here, noting that $\Sigma_K(t^*)={\mathbb E}\{X(t^*)\nabla^2 X_{|K}(t^*)\}\prec 0$ by Proposition [Proposition 1](#prop:H3){reference-type="ref" reference="prop:H3"}, we let $Q$ in [\[eq:Q\]](#eq:Q){reference-type="eqref" reference="eq:Q"} be a $k \times k$ positive definite matrix such that $Q(-\Sigma_K(t^*))Q = I_k$, where $I_k$ is the size-$k$ identity matrix. Then $${\mathbb E}\{X(t^*)(Q \nabla^2 X_{|K}(t^*) Q)\} = Q\Sigma_K(t^*)Q = -I_k.$$ Since ${\mathbb E}\{X(t^*)\nabla X_{|K}(t^*)\}=0$ due to $\nabla \nu_{|K}(t^*)=0$, we have $$\begin{split} {\mathbb E}\big\{Q \nabla^2 X_{|K}(t^*) Q \big|X(t^*)=x, \nabla X_{|K}(t^*)=0 \big\} = -\frac{x}{\sigma_T^2}I_k. \end{split}$$ One can write $$\begin{split} {\mathbb E}\big\{{\rm det} (Q\nabla^2 X_{|K}(t^*)Q) \big|X(t^*)=x, \nabla X_{|K}(t^*)=0 \big\} = {\mathbb E}\{{\rm det} (\Delta(t^*) - (x/\sigma_T^2)I_k) \}, \end{split}$$ where $\Delta(t^*)$ is a centered Gaussian random matrix with covariance independent of $x$. According to the Laplace expansion of the determinant, ${\mathbb E}\{{\rm det} (\Delta(t^*) - (x/\sigma_T^2)I_k) \}$ is a polynomial in $x$ with the highest-order term being $(-1)^k\sigma_T^{-2k}x^k$. Plugging this into [\[eq:Q\]](#eq:Q){reference-type="eqref" reference="eq:Q"} and [\[eq:I(u,k)2\]](#eq:I(u,k)2){reference-type="eqref" reference="eq:I(u,k)2"}, we obtain $$I(u,K) = \frac{(-1)^k2^{k/2}|{\rm det}(\Sigma_K(t^*))|}{\sqrt{{\rm det}(\Lambda_K(t^*))}\sqrt{|{\rm det}\left(\nabla^2 \tilde{\nu}_{|K}(t^*)\right)|}}\Psi\left(\frac{u}{\sigma_T}\right)(1+o(1)).$$ Finally, noting that $$\tilde{\nu}_{|K}(t) = {\mathbb E}\{X(t)^2\} - {\mathbb E}\{X(t)\nabla X_{|K}(t)\}^T \Lambda_K^{-1}(t){\mathbb E}\{X(t)\nabla X_{|K}(t)\},$$ we have $$\label{eq:nu''} \begin{split} \nabla^2 \tilde{\nu}_{|K}(t^*) &= 2[\Lambda_K(t^*) + \Sigma_K(t^*)] - 2[\Lambda_K(t^*) + \Sigma_K(t^*)]\Lambda_K^{-1}(t^*)[\Lambda_K(t^*) + \Sigma_K(t^*)]\\ &= -2\Sigma_K(t^*)[I_k + \Lambda_K^{-1}(t^*)\Sigma_K(t^*)]. 
\end{split}$$ Therefore, $$I(u,K) = (-1)^k\sqrt{\frac{{\rm det}(\Sigma_K(t^*))}{{\rm det}(\Lambda_K(t^*) + \Sigma_K(t^*))}}\Psi\left(\frac{u}{\sigma_T}\right)(1+o(1)),$$ where $\Sigma_K(t^*)\prec 0$ by Proposition [Proposition 1](#prop:H3){reference-type="ref" reference="prop:H3"} and $\Lambda_K(t^*) + \Sigma_K(t^*)=\nabla^2 \nu_{|K}(t^*)/2\prec 0$ by assumption. Plugging this into [\[eq:EP=k1\]](#eq:EP=k1){reference-type="eqref" reference="eq:EP=k1"} yields the desired result. ◻ Now we apply Theorem [Theorem 14](#thm:unique-max-boundary){reference-type="ref" reference="thm:unique-max-boundary"} to the 1D case when $T=[a, b]$. If $t^*=a$ or $t^*=b$, then it is a direct application of the first line in [\[eq:interior1\]](#eq:interior1){reference-type="eqref" reference="eq:interior1"}. If $t^* \in (a, b)$, then it follows from [\[eq:interior1\]](#eq:interior1){reference-type="eqref" reference="eq:interior1"} that $${\mathbb P}\left\{\sup_{t\in [a,b]} X(t) \geq u \right\} = \sqrt{\frac{{\mathbb E}\{X(t^*)X''(t^*)\}}{{\rm Var}(X'(t^*))+{\mathbb E}\{X(t^*)X''(t^*)\}}}\Psi\left(\frac{u}{\sigma_T}\right)(1+o(1)).$$ ## Gaussian fields not satisfying the boundary condition [\[Eq:boundary\]](#Eq:boundary){reference-type="eqref" reference="Eq:boundary"} We consider here the other case, when $\nu_i(t^*)= 0$ for some $i\notin \tau(K)$. For a symmetric matrix $B=(B_{ij})_{1\le i,j\le N}$, we call $(B_{ij})_{i,j\in \mathcal{I}}$ the matrix $B$ with indices restricted on $\mathcal{I}$. **Theorem 15**. *Let $\{X(t),\, t\in T\}$ be a centered Gaussian random field satisfying $({\bf H}1)$ and $({\bf H}2)$. Suppose $\nu$ attains its maximum $\sigma_T^2$ only at $t^* \in K \in \partial_k T$ such that $\mathcal{I}(t^*)\setminus \tau(K)$ contains $m\ge 1$ indices and $(\nu_{ii'}(t^*))_{i,i'\in \mathcal{I}(t^*)}\prec 0$. Then, as $u\to \infty$, $$\label{eq:interior2} \begin{split} &\quad {\mathbb P}\left\{\sup_{t\in T} X(t) \geq u\right\}\\ &= \sum_{J} \sqrt{\frac{{\rm det}(\Sigma_J(t^*))}{{\rm det}(\Lambda_J(t^*) + \Sigma_J(t^*))}}{\mathbb P}\{(Z_{J_1'}, \ldots, Z_{J_{j-k}'})\in E'(J)\}\\ &\quad \times {\mathbb P}\big\{(X_{J_1}(t^*), \ldots, X_{J_{k+m-j}}(t^*))\in E(J) \big| \nabla X_{|J}(t^*)=0\big\}\Psi\left(\frac{u}{\sigma_T}\right)(1+o(1)), \end{split}$$ where the sum is taken over all faces $J$ such that $t^*\in \bar{J}$ and $\tau(J)\subset \mathcal{I}(t^*)$, $j={\rm dim}(J)$, $$\begin{split} &(J_1, \ldots, J_{k+m-j})=\mathcal{I}(t^*)\setminus \tau(J), \quad (J_1', \ldots, J_{j-k}')=\tau(J)\setminus \tau(K), \\ &E(J)=\{(y_{J_1}, \ldots, y_{J_{k+m-j}})\in {\mathbb R}^{k+m-j}: \varepsilon^*_{J_\ell}(J) y_{J_\ell} \geq 0, \, \forall \ell= 1, \ldots, k+m-j\},\\ &E'(J)=\{(y_{J_1'}, \ldots, y_{J_{j-k}'})\in {\mathbb R}^{j-k}: \varepsilon^*_{J_\ell'}(K) y_{J_\ell'} \geq 0, \, \forall \ell= 1, \ldots, j-k\}, \end{split}$$ $\varepsilon^*_{J_\ell}(J)$ and $\varepsilon^*_{J_\ell'}(K)$ are the $\varepsilon^*$ numbers for faces $J$ and $K$ respectively, $(Z_{J_1'}, \ldots, Z_{J_{j-k}'})$ is a centered Gaussian vector having covariance matrix $\Sigma(t^*) + \Sigma(t^*)\Lambda^{-1}(t^*)\Sigma(t^*)$ with indices restricted on $\tau(J)\setminus \tau(K)$, and $\Lambda_J(t^*)$ and $\Sigma_J(t^*)$ are defined in [\[eq:Sigma\]](#eq:Sigma){reference-type="eqref" reference="eq:Sigma"}. 
In particular, for $k=0$, the term inside the sum in [\[eq:interior2\]](#eq:interior2){reference-type="eqref" reference="eq:interior2"} with $J=K=\{t^*\}$ is given by $${\mathbb P}\{(X_{J_1}(t^*), \ldots, X_{J_m}(t^*))\in E(J) \}\Psi\left(\frac{u}{\sigma_T}\right).$$* *Proof.* We first prove the case when $k\ge 1$. By Theorem [Theorem 4](#Thm:MEC approximation){reference-type="ref" reference="Thm:MEC approximation"}, we have $$\label{eq:EP=k11} \begin{split} {\mathbb P}\left\{\sup_{t\in T} X(t) \geq u\right\} &= \sum_{J}(-1)^j I(u,J) +o\left( \exp \left\{ -\frac{u^2}{2\sigma_T^2} -\alpha u^2 \right\}\right), \end{split}$$ where $j={\rm dim}(J)$, the sum is taken over all faces $J$ such that $t^*\in \bar{J}$ and $\tau(J)\subset \mathcal{I}(t^*)$, and $$\begin{split} I(u,J)& =\int_J {\mathbb E}\big\{{\rm det}\nabla^2 X_{|J}(t)\mathbbm{1}_{\{X(t)\geq u\}} \mathbbm{1}_{\{\varepsilon^*_\ell X_\ell(t) \geq 0, \, \forall \ell\in \mathcal{I}(t^*)\setminus \tau(J)\}}\big| \nabla X_{|J}(t)=0\big\} p_{\nabla X_{|J}(t)}(0)dt \\ &= \int_u^\infty \int_J \frac{{(2\pi)^{-(j+1)/2}} }{\sqrt{\tilde{\nu}_{|J}(t){\rm det}\left(\Lambda_J(t)\right)}} {\mathbb E}\big\{{\rm det}\nabla^2 X_{|K}(t)\mathbbm{1}_{\{\varepsilon^*_\ell X_\ell(t) \geq 0, \, \forall \ell\in \mathcal{I}(t^*)\setminus \tau(J)\}}\big|X(t)=x, \\ &\quad \nabla X_{|J}(t)=0\big\} e^{-\frac{x^2}{2\tilde{\nu}_{|J}(t)}}dtdx. \end{split}$$ Applying the Laplace method in Lemma [Lemma 13](#Lem:Laplace method 2){reference-type="ref" reference="Lem:Laplace method 2"} with $$\begin{split} g(t) &= \frac{1}{\sqrt{\tilde{\nu}_{|J}(t){\rm det}\left(\Lambda_J(t)\right)}} {\mathbb E}\big\{{\rm det}\nabla^2 X_{|J}(t)\mathbbm{1}_{\{\varepsilon^*_\ell X_\ell(t) \geq 0, \, \forall \ell\in \mathcal{I}(t^*)\setminus \tau(J)\}}\big|X(t)=x, \nabla X_{|J}(t)=0\big\},\\ h(t) &= \frac{1}{{2\tilde{\nu}_{|J}(t)}}, \quad u = x^2, \end{split}$$ we obtain $$\begin{split} I(u,J) &= \frac{(2\sigma_T^4)^{j/2}{\mathbb P}\{(Z_{J_1'}, \ldots, Z_{J_{j-k}'})\in E'(J)\}}{\sqrt{2\pi\sigma_T^2{\rm det}\left(\Lambda_J(t^*)\right)}\sqrt{|{\rm det}\nabla^2 \tilde{\nu}_{|J}(t^*)|}} I(u) (1+o(1)), \end{split}$$ where $(Z_{J_1'}, \ldots, Z_{J_{j-k}'})$ is a centered $(j-k)$-dimensional Gaussian vector having covariance matrix $\nabla^2 h(t^*)$ with indices restricted on $\tau(J)\setminus \tau(K)$, and $$\label{eq:Q1} \begin{split} I(u)&=\int^\infty_u {\mathbb E}\big\{{\rm det} \nabla^2 X_{|J}(t^*) \mathbbm{1}_{\{\varepsilon^*_\ell X_\ell(t^*) \geq 0, \, \forall \ell\in \mathcal{I}(t^*)\setminus \tau(J)\}} \big|X(t^*)=x, \nabla X_{|J}(t^*)=0 \big\} x^{-j}e^{-\frac{x^2}{2\sigma_T^2}}\, dx\\ & ={\rm det}(\Sigma_J(t^*))\int^\infty_u \int_{E(J)} {\mathbb E}\big\{{\rm det} (Q\nabla^2 X_{|J}(t^*)Q) \big|X(t^*)=x, \nabla X_{|J}(t^*)=0, X_{J_1}(t^*)=y_{J_1}, \\ &\quad \ldots, X_{J_{k+m-j}}(t^*)=y_{J_{k+m-j}} \big\} p(y_{J_1}, \ldots, y_{J_{k+m-j}}|x,0)x^{-j}e^{-\frac{x^2}{2\sigma_T^2}} dy_{J_1}\cdots dy_{J_{k+m-j}}dx. \end{split}$$ Here $p(y_{J_1}, \ldots, y_{J_{k+m-j}}|x,0)$ is the pdf of $(X_{J_1}(t^*), \ldots, X_{J_{k+m-j}}(t^*) | X(t^*)=x, \nabla X_{|J}(t^*)=0)$, and $Q$ is a $j \times j$ positive definite matrix such that $Q(-\Sigma_J(t^*))Q = I_j$. 
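As a computational side note (not from the paper), a positive definite matrix $Q$ with $Q(-\Sigma_J(t^*))Q = I_j$ can be taken to be the inverse symmetric square root of $-\Sigma_J(t^*)$, obtained from an eigendecomposition; the matrix used in the sketch below is a random toy stand-in.

```python
# Constructing a positive definite Q with Q (-Sigma) Q = I as the inverse
# symmetric square root of -Sigma.  The matrix Sigma below is a toy choice.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
Sigma = -(A @ A.T + 3.0 * np.eye(3))              # toy negative definite matrix
w, U = np.linalg.eigh(-Sigma)                     # -Sigma is positive definite
Q = U @ np.diag(w ** -0.5) @ U.T                  # Q = (-Sigma)^{-1/2}
print(np.allclose(Q @ (-Sigma) @ Q, np.eye(3)))   # True
```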
Then, similarly to the arguments in the proof of Theorem [Theorem 14](#thm:unique-max-boundary){reference-type="ref" reference="thm:unique-max-boundary"}, one can write the last expectation in [\[eq:Q1\]](#eq:Q1){reference-type="eqref" reference="eq:Q1"} as $$\begin{split} {\mathbb E}\{{\rm det} (\Delta(t^*, y_{J_1}, \ldots, y_{J_{k+m-j}}) - (x/\sigma_T^2)I_j) \}, \end{split}$$ where $\Delta(t^*, y_{J_1}, \ldots, y_{J_{k+m-j}})$ is a centered Gaussian random matrix with covariance independent of $x$, and hence the highest-order term in $x$ is $(-1)^jx^j/\sigma_T^{2j}$. Noting that ${\mathbb E}\{X(t^*)X_i(t^*)\}=0$ for all $i\in \mathcal{I}(t^*)$ and following arguments similar to those in the proof of Theorem [Theorem 14](#thm:unique-max-boundary){reference-type="ref" reference="thm:unique-max-boundary"}, we obtain $$\begin{split} I(u,J) &= (-1)^j\sqrt{\frac{{\rm det}(\Sigma_J(t^*))}{{\rm det}(\Lambda_J(t^*) + \Sigma_J(t^*))}}{\mathbb P}\{(Z_{J_1'}, \ldots, Z_{J_{j-k}'})\in E'(J)\}\\ &\quad \times {\mathbb P}\{(X_{J_1}(t^*), \ldots, X_{J_{k+m-j}}(t^*))\in E(J) | \nabla X_{|J}(t^*)=0\}\Psi\left(\frac{u}{\sigma_T}\right)(1+o(1)), \end{split}$$ which yields the desired result together with [\[eq:EP=k11\]](#eq:EP=k11){reference-type="eqref" reference="eq:EP=k11"}. In particular, by [\[eq:nu\'\'\]](#eq:nu''){reference-type="eqref" reference="eq:nu''"}, one can treat $(Z_{J_1'}, \ldots, Z_{J_{j-k}'})$ as having covariance $\Sigma(t^*) + \Sigma(t^*)\Lambda^{-1}(t^*)\Sigma(t^*)$ with indices restricted on $\tau(J)\setminus \tau(K)$ while not changing the probability that it falls in $E'(J)$. Lastly, the case when $k=0$ can be shown similarly. ◻ Now we apply Theorem [Theorem 15](#thm:unique-max){reference-type="ref" reference="thm:unique-max"} to the 1D case when $T=[a, b]$. Without loss of generality, assume $t^*=b$ and $\nu'(t^*)=0$. Then it follows from Theorem [Theorem 15](#thm:unique-max){reference-type="ref" reference="thm:unique-max"} that $$\begin{split} &\quad {\mathbb P}\left\{\sup_{t\in [a,b]} X(t) \geq u \right\}\\ & = \left({\mathbb P}\{X'(t^*)>0\} + \sqrt{\frac{{\mathbb E}\{X(t^*)X''(t^*)\}}{{\rm Var}(X'(t^*))+{\mathbb E}\{X(t^*)X''(t^*)\}}}{\mathbb P}\{Z>0\} \right)\Psi\left(\frac{u}{\sigma_T}\right)(1+o(1))\\ & = \frac{1}{2}\left(1 + \sqrt{\frac{{\mathbb E}\{X(t^*)X''(t^*)\}}{{\rm Var}(X'(t^*))+{\mathbb E}\{X(t^*)X''(t^*)\}}} \right)\Psi\left(\frac{u}{\sigma_T}\right)(1+o(1)), \end{split}$$ where $Z$ is a centered Gaussian variable. Denote ${\mathbb R}_{+}^n=(0,\infty)^n$. To simplify the statement in Theorem [Theorem 15](#thm:unique-max){reference-type="ref" reference="thm:unique-max"}, we present below another version with less notation on faces. **Corollary 16**. *Let $\{X(t),\, t\in T\}$ be a centered Gaussian random field satisfying $({\bf H}1)$, $({\bf H}2)$ and $({\bf H}3)$. Suppose $\nu$ attains its maximum $\sigma_T^2$ only at $t^* \in K \in \partial_k T$ with $\tau(K)=\{1, \ldots, k\}$ such that $\mathcal{I}(t^*)= \{1,\ldots, k, k+1, \ldots, k+m\}$ with $m\ge 1$ and $\left(\nu_{ii'}(t^*)\right)_{1\le i,i'\le k+m}\prec 0$. 
Then, as $u\to \infty$, $$\label{eq:interior3} \begin{split} &\quad {\mathbb P}\left\{\sup_{t\in T} X(t) \geq u\right\}\\ &= \sum_{j=k}^{k+m}\sum_{J\in \partial_j T:\, t^*\in \bar{J} } \sqrt{\frac{{\rm det}(\Sigma_J(t^*))}{{\rm det}(\Lambda_J(t^*) + \Sigma_J(t^*))}}{\mathbb P}\{(Z_1, \ldots, Z_{j-k})\in {\mathbb R}_{+}^{j-k}\} \\ &\quad \times {\mathbb P}\big\{(X_{j+1}(t^*), \ldots, X_{k+m}(t^*))\in {\mathbb R}_{+}^{k+m-j} \big|\nabla X_{|J}(t^*)=0\big\} \Psi\left(\frac{u}{\sigma_T}\right)(1+o(1)), \end{split}$$ where $(Z_1, \ldots, Z_{j-k})$ is a centered Gaussian vector having covariance $\Sigma(t^*) + \Sigma(t^*)\Lambda^{-1}(t^*)\Sigma(t^*)$ with indices restricted on $\{k+1, \ldots, j\}$, and $\Lambda_J(t^*)$ and $\Sigma_J(t^*)$ are defined in [\[eq:Sigma\]](#eq:Sigma){reference-type="eqref" reference="eq:Sigma"}. In particular, for $k=0$, the term inside the sum in [\[eq:interior3\]](#eq:interior3){reference-type="eqref" reference="eq:interior3"} with $J=K=\{t^*\}$ is $${\mathbb P}\{(X_1(t^*), \ldots, X_m(t^*))\in {\mathbb R}_{+}^m \}\Psi\left(\frac{u}{\sigma_T}\right).$$* # Examples {#sec:example} Throughout this section, we consider a centered Gaussian random field $\{X(t),\, t\in T\}$ satisfying $({\bf H}1)$, $({\bf H}2)$ and $({\bf H}3)$, where $T=[a_1, b_1]\times[a_2, b_2] \subset {\mathbb R}^2$. ## Examples with a unique maximum point of the variance Suppose $\nu(t_1,t_2)$ attains the maximum $\sigma_T^2$ only at a single point $t^*=(t_1^*,t_2^*)$; and the assumptions in Theorems [Theorem 14](#thm:unique-max-boundary){reference-type="ref" reference="thm:unique-max-boundary"} or [Theorem 15](#thm:unique-max){reference-type="ref" reference="thm:unique-max"} are satisfied. **Case 1: $\bm {t^* = (b_1, b_2)}$ and $\bm {\nu_1(t^*)\nu_2(t^*)\neq 0}$.** It follows directly from Theorem [Theorem 14](#thm:unique-max-boundary){reference-type="ref" reference="thm:unique-max-boundary"} that $$\begin{split} {\mathbb P}\left\{\sup_{t\in T} X(t) \geq u\right\} = \Psi\left(\frac{u}{\sigma_T}\right) +o\left( \exp \left\{ -\frac{u^2}{2\sigma_T^2} -\alpha u^2 \right\}\right). \end{split}$$ **Case 2: $\bm {t^* = (b_1, b_2)}$, $\bm{\nu_1(t^*)= 0}$ and $\bm{\nu_2(t^*)\neq 0}$.** It follows from Corollary [Corollary 16](#cor:unique-max){reference-type="ref" reference="cor:unique-max"} that $$\begin{split} &\quad {\mathbb P}\left\{\sup_{t\in T} X(t) \geq u \right\}\\ & = \left({\mathbb P}\{X_1(t^*)>0\} + \sqrt{\frac{{\mathbb E}\{X(t^*)X_{11}(t^*)\}}{{\rm Var}(X_1(t^*))+{\mathbb E}\{X(t^*)X_{11}(t^*)\}}}{\mathbb P}\{Z>0\} \right)\Psi\left(\frac{u}{\sigma_T}\right)(1+o(1))\\ & = \frac{1}{2}\left(1 + \sqrt{\frac{{\mathbb E}\{X(t^*)X_{11}(t^*)\}}{{\rm Var}(X_1(t^*))+{\mathbb E}\{X(t^*)X_{11}(t^*)\}}} \right)\Psi\left(\frac{u}{\sigma_T}\right)(1+o(1)), \end{split}$$ where $Z$ is a centered Gaussian variable. 
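The moments entering the Case 2 constant can be evaluated directly from the covariance function $C(s,t)$, since ${\mathbb E}\{X(t^*)X_{11}(t^*)\}=\partial^2 C(s,t)/\partial t_1^2\,|_{s=t=t^*}$ and ${\rm Var}(X_1(t^*))=\partial^2 C(s,t)/\partial s_1\partial t_1\,|_{s=t=t^*}$. The following minimal sketch (not from the paper) computes them by central finite differences for a hypothetical toy covariance; we do not verify that this toy field satisfies $({\bf H}1)$-$({\bf H}3)$, the code only illustrates the computation.

```python
# Evaluating the Case 2 constant from a covariance function by finite differences.
# Toy covariance of X(t) = cos(t1 - b1)*(xi1*cos(t1) + xi2*sin(t1)) + t2*xi3,
# considered on a rectangle T = [a1, b1] x [a2, b2] with b1 - a1 < pi/2 and
# 0 < a2 < b2, so that the variance is maximized only at t* = (b1, b2).
import numpy as np
from scipy.stats import norm

b1, b2 = 1.0, 1.0
t_star = np.array([b1, b2])
e1 = np.array([1.0, 0.0])
h = 1e-3                                 # finite-difference step

def C(s, t):
    return np.cos(s[0] - b1) * np.cos(t[0] - b1) * np.cos(s[0] - t[0]) + s[1] * t[1]

sigma2_T = C(t_star, t_star)             # nu(t*) = sigma_T^2  (= 2 for this toy choice)
# E{X(t*) X_11(t*)} = d^2 C(s,t)/dt1^2 at s = t = t*  (= -2 here)
e_x_x11 = (C(t_star, t_star + h * e1) - 2.0 * sigma2_T + C(t_star, t_star - h * e1)) / h**2
# Var(X_1(t*)) = d^2 C(s,t)/ds1 dt1 at s = t = t*  (= 1 here)
var_x1 = (C(t_star + h * e1, t_star + h * e1) - C(t_star + h * e1, t_star - h * e1)
          - C(t_star - h * e1, t_star + h * e1) + C(t_star - h * e1, t_star - h * e1)) / (4.0 * h**2)

factor = 0.5 * (1.0 + np.sqrt(e_x_x11 / (var_x1 + e_x_x11)))
u = 5.0
print(f"sigma_T^2 = {sigma2_T:.4f}, E(X X_11) = {e_x_x11:.4f}, Var(X_1) = {var_x1:.4f}")
print(f"Case 2 approximation at u = {u}: {factor * norm.sf(u / np.sqrt(sigma2_T)):.3e}")
```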
**Case 3: $\bm {t^* = (b_1, b_2)}$ and $\bm{\nu_1(t^*)= \nu_2(t^*)= 0}$.** Applying Corollary [Corollary 16](#cor:unique-max){reference-type="ref" reference="cor:unique-max"} and noting the calculations in Case 2 above, we obtain $$\begin{split} &\quad {\mathbb P}\left\{\sup_{t\in T} X(t) \geq u \right\}\\ & = \Bigg({\mathbb P}\{X_1(t^*)>0, X_2(t^*)>0\} + \frac{1}{2}\sqrt{\frac{{\mathbb E}\{X(t^*)X_{11}(t^*)\}}{{\rm Var}(X_1(t^*))+{\mathbb E}\{X(t^*)X_{11}(t^*)\}}} \\ &\qquad + \frac{1}{2}\sqrt{\frac{{\mathbb E}\{X(t^*)X_{22}(t^*)\}}{{\rm Var}(X_2(t^*))+{\mathbb E}\{X(t^*)X_{22}(t^*)\}}}\\ &\qquad + {\mathbb P}\{Z_1>0, Z_2>0\} \sqrt{\frac{{\rm det}(\Sigma(t^*))}{{\rm det}(\Lambda(t^*) + \Sigma(t^*))}}\Bigg)\Psi\left(\frac{u}{\sigma_T}\right)(1+o(1)), \end{split}$$ where $(Z_1, Z_2)$ is a centered Gaussian vector with covariance $\Sigma(t^*) + \Sigma(t^*)\Lambda^{-1}(t^*)\Sigma(t^*)$. **Case 4: $\bm {t^* = (t_1^*, b_2)}$, where $\bm{t_1^*\in (a_1,b_1)}$ and $\bm{\nu_2(t^*)\neq 0}$.** It follows directly from Theorem [Theorem 14](#thm:unique-max-boundary){reference-type="ref" reference="thm:unique-max-boundary"} that $$\begin{split} {\mathbb P}\left\{\sup_{t\in T} X(t) \geq u\right\} = \sqrt{\frac{{\mathbb E}\{X(t^*)X_{11}(t^*)\}}{{\rm Var}(X_1(t^*))+{\mathbb E}\{X(t^*)X_{11}(t^*)\}}}\Psi\left(\frac{u}{\sigma_T}\right)(1+o(1)). \end{split}$$ **Case 5: $\bm {t^* = (t_1^*, b_2)}$, where $\bm{t_1^*\in (a_1,b_1)}$ and $\bm{\nu_2(t^*)= 0}$.** Applying Corollary [Corollary 16](#cor:unique-max){reference-type="ref" reference="cor:unique-max"} and noting the calculations in Case 2 above, we obtain $$\begin{split} &\quad {\mathbb P}\left\{\sup_{t\in T} X(t) \geq u \right\}\\ & = \frac{1}{2}\Bigg(\sqrt{\frac{{\mathbb E}\{X(t^*)X_{11}(t^*)\}}{{\rm Var}(X_1(t^*))+{\mathbb E}\{X(t^*)X_{11}(t^*)\}}} + \sqrt{\frac{{\rm det}(\Sigma(t^*))}{{\rm det}(\Lambda(t^*) + \Sigma(t^*))}}\Bigg)\Psi\left(\frac{u}{\sigma_T}\right)(1+o(1)). \end{split}$$ **Case 6: $\bm {a_1<t_1^*<b_1}$ and $\bm {a_2<t_2^*<b_2}$.** It follows directly from Theorem [Theorem 14](#thm:unique-max-boundary){reference-type="ref" reference="thm:unique-max-boundary"} that $$\begin{split} {\mathbb P}\left\{\sup_{t\in T} X(t) \geq u\right\} = \sqrt{\frac{{\rm det}(\Sigma(t^*))}{{\rm det}(\Lambda(t^*) + \Sigma(t^*))}}\Psi\left(\frac{u}{\sigma_T}\right)(1+o(1)). \end{split}$$ ## Examples with the maximum of the variance achieved on a line Consider the Gaussian random field $X(t)$ defined as: $$X(t)=\xi_1\cos t_1 + \xi_1'\sin t_1 + t_2(\xi_2\cos t_2 + \xi_2'\sin t_2),$$ where $t=(t_1, t_2)\in T=[a_1, b_1]\times [a_2, b_2]\subset (0,2\pi)^2$, and $\xi_1, \xi_1', \xi_2, \xi_2'$ are independent standard Gaussian random variables. This is a Gaussian random field on ${\mathbb R}^2$ generated from the cosine field, with an additional factor of $t_2$ in the vertical direction. The constraint that the parameter space lies within $(0, 2\pi)^2$ is imposed to prevent degeneracy of the derivatives. For this field, we have $\nu(t)=1+t_2^2$, which reaches the maximum $\sigma_T^2=1+b_2^2$ on the entire line segment $L:= \{(t_1, b_2): a_1\le t_1 \le b_1\}$. 
Furthermore, $$\nu_1(t)|_{t\in L}=0, \quad \nu_2(t)|_{t\in L} = 2b_2 >0, \quad \forall t\in L.$$ By employing reasoning similar to that in the proofs of Theorems [Theorem 2](#Thm:MEC approximation je){reference-type="ref" reference="Thm:MEC approximation je"} and [Theorem 3](#Thm:MEC approximation je2){reference-type="ref" reference="Thm:MEC approximation je2"}, we see that, in the EEC approximation ${\mathbb E}\{\chi(A_u)\}$, all integrals (derived from the Kac-Rice formula) over faces not contained within $\bar{L}$ are super-exponentially small. Thus, there exists $\alpha>0$ such that as $u\to \infty$, $$\label{eq:cosine} \begin{split} {\mathbb P}\left\{\sup_{t\in T} X(t) \geq u \right\} &={\mathbb P}\{X(a_1, b_2)\ge u, X_1(a_1, b_2)<0\} + {\mathbb P}\{X(b_1, b_2)\ge u, X_1(b_1, b_2)>0\} \\ &\quad + I(u) +o\left( \exp \left\{ -\frac{u^2}{2\sigma_T^2} -\alpha u^2 \right\}\right)\\ &= \Psi\left(\frac{u}{\sqrt{1+b_2^2}}\right) + I(u) +o\left( \exp \left\{ -\frac{u^2}{2\sigma_T^2} -\alpha u^2 \right\}\right), \end{split}$$ where $$\begin{split} I(u)&= -\int_{a_1}^{b_1}{\mathbb E}\big\{X_{11}(t_1, b_2) \mathbbm{1}_{\{X(t_1, b_2)\geq u\}}\big|X_1(t_1, b_2)=0\big\}p_{X_1(t_1, b_2)}(0)dt_1. \end{split}$$ Since $X_1(t_1, b_2)=-\xi_1\sin t_1 + \xi_1'\cos t_1$ and $X_{11}(t_1, b_2)=-\xi_1\cos t_1 - \xi_1'\sin t_1$, one has $$\begin{split} {\rm Cov}(X(t_1, b_2), X_1(t_1, b_2), X_{11}(t_1, b_2))&= \begin{pmatrix} 1+b_2^2 & 0 & -1\\ 0 & 1 & 0\\ -1 & 0 & 1 \end{pmatrix}, \end{split}$$ which does not depend on $t_1$. In particular, $X_1(t_1, b_2)$ is independent of both $X(t_1, b_2)$ and $X_{11}(t_1, b_2)$. Thus $$\begin{split} I(u)&= -\frac{b_1-a_1}{\sqrt{2\pi}}{\mathbb E}\big\{X_{11}(t_1, b_2) \mathbbm{1}_{\{X(t_1, b_2)\geq u\}}\big\}\\ &= -\frac{b_1-a_1}{\sqrt{2\pi}} \int_u^\infty {\mathbb E}\{X_{11}(t_1, b_2) | X(t_1, b_2)=x\}\frac{1}{\sqrt{1+b_2^2}}\phi\left(\frac{x}{\sqrt{1+b_2^2}}\right)dx\\ &= \frac{b_1-a_1}{\sqrt{2\pi}} \int_u^\infty \frac{x}{(1+b_2^2)^{3/2}}\phi\left(\frac{x}{\sqrt{1+b_2^2}}\right)dx\\ &= \frac{b_1-a_1}{\sqrt{2\pi(1+b_2^2)}}\phi\left(\frac{u}{\sqrt{1+b_2^2}}\right). \end{split}$$ Substituting this expression into [\[eq:cosine\]](#eq:cosine){reference-type="eqref" reference="eq:cosine"}, we arrive at the following refined approximation: $$\begin{split} {\mathbb P}\left\{\sup_{t\in T} X(t) \geq u \right\} = \Psi\left(\frac{u}{\sqrt{1+b_2^2}}\right) + \frac{b_1-a_1}{\sqrt{2\pi(1+b_2^2)}}\phi\left(\frac{u}{\sqrt{1+b_2^2}}\right) +o\left( \exp \left\{ -\frac{u^2}{2\sigma_T^2} -\alpha u^2 \right\}\right), \end{split}$$ which has a super-exponentially small error. 
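As a numerical illustration (not part of the paper), the refined approximation above can be checked by direct Monte Carlo simulation, since the field is a finite expansion in four independent standard Gaussian coefficients. The rectangle, threshold, grid resolution, and sample size below are arbitrary choices; the grid maximum slightly underestimates the continuous supremum, and the agreement improves as $u$ grows.

```python
# Monte Carlo check of the refined approximation for the cosine-type field above.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
a1, b1, a2, b2 = 1.0, 2.0, 1.0, 2.0      # T = [a1,b1] x [a2,b2] inside (0, 2*pi)^2
u, n_mc, n_grid = 4.5, 50_000, 80        # threshold, sample size, grid points per axis

t1 = np.linspace(a1, b1, n_grid)
t2 = np.linspace(a2, b2, n_grid)
T1, T2 = np.meshgrid(t1, t2, indexing="ij")
C1, S1 = np.cos(T1), np.sin(T1)          # basis functions of the expansion
C2, S2 = T2 * np.cos(T2), T2 * np.sin(T2)

count = 0
for _ in range(n_mc):
    xi = rng.standard_normal(4)
    X = xi[0] * C1 + xi[1] * S1 + xi[2] * C2 + xi[3] * S2
    count += X.max() >= u
empirical = count / n_mc

sigma_T = np.sqrt(1.0 + b2**2)
approx = norm.sf(u / sigma_T) + (b1 - a1) / np.sqrt(2 * np.pi * (1 + b2**2)) * norm.pdf(u / sigma_T)
print(f"Monte Carlo: {empirical:.4f}   refined approximation: {approx:.4f}")
```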
# Acknowledgments {#acknowledgments .unnumbered} The author acknowledges support from NSF Grants DMS-1902432 and DMS-2220523, as well as the Simons Foundation Collaboration Grant 854127. # References {#references .unnumbered} Adler, R. J. (2000). On excursion sets, tube formulas and maxima of random fields. *Ann. Appl. Probab.* **10**, 1--74. Adler, R. J. and Taylor, J. E. (2007). *Random Fields and Geometry*. Springer, New York. Azaïs, J. M. and Delmas, C. (2002). Asymptotic expansions for the distribution of the maximum of Gaussian random fields. *Extremes* **5**, 181--212. Azaïs, J. M. and Wschebor, M. (2009). *Level Sets and Extrema of Random Processes and Fields*. John Wiley & Sons, Hoboken, NJ. Cheng, D. and Xiao, Y. (2016). The mean Euler characteristic and excursion probability of Gaussian random fields with stationary increments. *Ann. Appl. Probab.* **26**, 722--759. Piterbarg, V. I. (1996). *Asymptotic Methods in the Theory of Gaussian Processes and Fields. Translations of Mathematical Monographs 148*. Amer. Math. Soc., Providence, RI. Piterbarg, V. I. (1996). Rice's method for large excursions of Gaussian random fields. Technical Report No. 478, Center for Stochastic Processes, Univ. North Carolina. Siegmund, D. and Worsley, K. J. (1995). Testing for a signal with unknown location and scale in a stationary Gaussian random field. *Ann. Statist.* **23**, 608--639. Sun, J. (1993). Tail probabilities of the maxima of Gaussian random fields. *Ann. Probab.* **21**, 34--71. Sun, J. (2001). Multiple comparisons for a large number of parameters. **43**, 627--643. Taylor, J. E. and Adler, R. J. (2003). Euler characteristics for Gaussian fields on manifolds. *Ann. Probab.* **31**, 533--563. Taylor, J. E., Takemura, A. and Adler, R. J. (2005). Validity of the expected Euler characteristic heuristic. *Ann. Probab.* **33**, 1362--1396. Taylor, J. E. and Worsley, K. J. (2007). Detecting sparse signals in random fields, with an application to brain mapping. *J. Amer. Statist. Assoc.* **102**, 913--928. Taylor, J. E. and Worsley, K. J. (2008). Random fields of multivariate test statistics, with applications to shape analysis. *Ann. Statist.* **36**, 1--27. Wong, R. (2001). *Asymptotic Approximations of Integrals*. SIAM, Philadelphia, PA. > ::: {.small} > [Dan Cheng]{.smallcaps}\ > School of Mathematical and Statistical Sciences\ > Arizona State University\ > 900 S Palm Walk\ > Tempe, AZ 85281, USA\ > E-mail: `cheng.stats@gmail.com` > :::
{ "id": "2309.05627", "title": "The expected Euler characteristic approximation to excursion\n probabilities of smooth Gaussian random fields with general variance\n functions", "authors": "Dan Cheng", "categories": "math.PR", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | The following is an improved version of Chapter 12 of my book [@Sm17]. Among others, we present a new unified approach to the Archimedean Positivstellensätze for quadratic modules and semirings in Section 12.4 and we add a number of results on Positivstellensätze for semirings and the corresponding moment problems.All references to formulas and to the bibliography of the book are retained. This version is essentially based on results from the recent paper [@SmS23]. We will also use a result from the book [@Sm20]. address: University of Leipzig, Mathematical Institute, Augustusplatz 10/11, D-04109 Leipzig, Germany author: - Konrad Schmüdgen title: "Chapter 12: The moment problem on compact semi-algebraic sets (revised version)" --- [^1] In this chapter we begin the study of the multidimensional moment problem. The passage to dimensions $d\geq 2$ brings new difficulties and unexpected phenomena. In Section 3.2 we derived solvability criteria of the moment problem on intervals in terms of positivity conditions. It seems to be natural to look for similar characterizations in higher dimensions as well. This leads us immediately into the realm of real algebraic geometry and to descriptions of positive polynomials on semi-algebraic sets. In this chapter we treat this approach for basic closed *compact* semi-algebraic subsets of $\mathbb{R}^d$. It turns out that for such sets there is a close interaction between the moment problem and real algebraic geometry. Generally speaking, combined with Haviland's theorem any denominator-free Positivstellensatz yields an existence result for the moment problem. We develop this connection in detail and give complete proofs of the corresponding Positivstellensätze. Basic notions and facts from real algebraic geometry that are needed for our treatment of the moment problem are collected in Section [1](#basicssemialgebraicsets){reference-type="ref" reference="basicssemialgebraicsets"}. Section [2](#localizing){reference-type="ref" reference="localizing"} contains general facts on localizing functionals and supports of representing measures. In Section [3](#momentproblemstrictpos){reference-type="ref" reference="momentproblemstrictpos"}, we prove our main existence result for the moment problem on compact semi-algebraic sets (Theorem [Theorem 29](#mpschm){reference-type="ref" reference="mpschm"}) and the corresponding Positivstellensatz for preorderings (Theorem [Theorem 28](#schmps){reference-type="ref" reference="schmps"}). In Section [4](#reparchimodiules){reference-type="ref" reference="reparchimodiules"} we derive a fundamental result, the Archimedean Positivstellensatz for quadratic modules and semirings (Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"}). In Section [5](#archimedeanpolynomials){reference-type="ref" reference="archimedeanpolynomials"}, we restate this theorem for the polynomial algebra $\mathbb{R}[x_1,\dotsc,x_d]$ and give applications to the moment problem (Theorems [Theorem 48](#archmedpsq){reference-type="ref" reference="archmedpsq"}, [Theorem 50](#archmedps){reference-type="ref" reference="archmedps"}, and [Theorem 51](#auxsemiring){reference-type="ref" reference="auxsemiring"}). Section [7](#polyhedron){reference-type="ref" reference="polyhedron"} contains a Positivstellensatz and its application to the moment problem (Theorem [Theorem 59](#prestel){reference-type="ref" reference="prestel"}) for semi-algebraic sets which are contained in compact polyhedra. 
In Section [8](#examplesmp){reference-type="ref" reference="examplesmp"}, we derive a number of classical results and examples on the moment problem for concrete compact sets. The results in Sections [3](#momentproblemstrictpos){reference-type="ref" reference="momentproblemstrictpos"}, [4](#reparchimodiules){reference-type="ref" reference="reparchimodiules"}, [5](#archimedeanpolynomials){reference-type="ref" reference="archimedeanpolynomials"}, [7](#polyhedron){reference-type="ref" reference="polyhedron"}, and [8](#examplesmp){reference-type="ref" reference="examplesmp"} are formulated in the language of real algebra, that is, in terms of preorderings, quadratic modules, or semirings. Apart from real algebraic geometry, the theory of self-adjoint Hilbert space operators is our main tool for the multidimensional moment problem. In Section [6](#Operator-theoreticappraochtothemomentprblem){reference-type="ref" reference="Operator-theoreticappraochtothemomentprblem"} we develop this method by studying the GNS construction and the multidimensional spectral theorem. This approach yields a short and elegant route to the Positivstellensatz and to the moment problem for Archimedean quadratic modules. Throughout this chapter, ${\sf{A}}$ denotes a **commutative real algebra with unit element** denoted by $1$. For notational simplicity we write $\lambda$ for $\lambda \cdot 1$, where $\lambda \in \mathbb{R}$. Recall that  $\sum {\sf{A}}^2$  is the set of finite sums $\sum_i a_i^2$ of squares of elements $a_i\in {\sf{A}}$. # Semi-algebraic sets and Positivstellensätze {#basicssemialgebraicsets} The following definition contains three basic notions which are needed in the sequel. **Definition 1**. A *quadratic module* of  ${\sf{A}}$ is a subset $Q$ of ${\sf{A}}$ such that $$\begin{aligned} \label{axiomquadmodule} Q + Q \subseteq Q,~~ 1 \in Q,~~ a^2 Q \subseteq Q~~{\rm for~all}~~a\in {\sf{A}}. \end{aligned}$$ A quadratic module $T$ is called a *preordering* if  $T\cdot T\subseteq T$.\ A *semiring* is a subset $S$ of ${\sf{A}}$ satisfying $$\begin{aligned} S + S \subseteq S,~~ S\cdot S \subseteq S,~~ \lambda \in S ~~{\rm for~all}~~\lambda\in \mathbb{R},\lambda \geq 0. \end{aligned}$$ In the literature "semirings" are also called "preprimes". The name "quadratic module" stems from the last condition in ([\[axiomquadmodule\]](#axiomquadmodule){reference-type="ref" reference="axiomquadmodule"}), which means that $Q$ is invariant under multiplication by squares. Setting $a=\sqrt{\lambda}$, this implies that $\lambda\cdot Q\subseteq Q$ for $\lambda \geq 0$. While semirings and preorderings are closed under multiplication, quadratic modules need not be. Semirings do not contain all squares in general. Clearly, a quadratic module is a preordering if and only if it is a semiring. In this book, we work mainly with quadratic modules and preorderings. *Example 2*. The subset  $S=\{ \sum_{j=0}^n a_jx^j:\, a_j\geq 0, n\in \mathbb{N}\}$  of  $\mathbb{R}[x]$  is a semiring, but not a quadratic module. Clearly, $Q=\sum \mathbb{R}_d[\underline{x}]^2+x_1\sum \mathbb{R}_d[\underline{x}]^2+x_2\sum \mathbb{R}_d[\underline{x}]^2$ is a quadratic module of $\mathbb{R}_d[\underline{x}], d\geq 2$, but $Q$ is neither a semiring nor a preordering. $\hfill \circ$ Obviously, $\sum {\sf{A}}^2$ is the smallest quadratic module of ${\sf{A}}$. Since ${\sf{A}}$ is commutative, $\sum {\sf{A}}^2$ is invariant under multiplication, so it is also the smallest preordering of ${\sf{A}}$. 
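As a quick computational illustration of Example 2 (a sketch, not part of the text), one can check with sympy that sums and products of polynomials with nonnegative coefficients again have nonnegative coefficients, while multiplying the element $1\in S$ by the square $(x-1)^2$ produces a negative coefficient; hence $S$ is a semiring but not a quadratic module.

```python
# Example 2, checked symbolically: S = polynomials with nonnegative coefficients.
import sympy as sp

x = sp.symbols('x')
p, q = 1 + 2*x + 3*x**2, x + x**3               # two elements of S
print(sp.Poly(p * q, x).all_coeffs())           # [3, 2, 4, 2, 1, 0]: S*S stays in S
print(sp.Poly((x - 1)**2 * 1, x).all_coeffs())  # [1, -2, 1]: a^2 * S leaves S
```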
Our guiding example for  ${\sf{A}}$  is the polynomial algebra  $\mathbb{R}_d[\underline{x}]:=\mathbb{R}[x_1,\dotsc,x_d]$. Let ${\sf f}=\{f_1,\dotsc,f_k\}$ be a finite subset of $\mathbb{R}_d[\underline{x}]$. The set $$\begin{aligned} \label{basicclosed} \mathcal{K}({\sf f})\equiv \mathcal{K}(f_1,\dotsc,f_k)=\{x\in \mathbb{R}^d: f_1(x)\geq 0,\dotsc,f_k(x)\geq 0\}\end{aligned}$$ is called the *basic closed semi-algebraic set associated with $\sf f$*. It is easily seen that $$\begin{aligned} \label{quadraticqf} Q({\sf f})\equiv Q(f_1,\dotsc,f_k)= \big\{\, \sigma_0+ f_1 \sigma_1+\dots+f_k \sigma_k : \,\sigma_0,\dotsc,\sigma_k\in \sum\mathbb{R}_d[\underline{x}]^2\big\}\end{aligned}$$ is the *quadratic module generated by the set* $\sf f$, $$\begin{aligned} \label{semiringf} S({\sf f}) \equiv S(f_1,\dotsc,f_k)= \bigg\{ \sum_{n_1,\dotsc,n_k=0}^r \alpha_{n_1,\dotsc,n_k} f_1^{n_1}\cdots f_k^{n_k}: \alpha_{n_1,\dotsc,n_k}\geq 0, r\in \mathbb{N}_0 \bigg\}\end{aligned}$$ is the *semiring generated by*  ${\sf f}$, and $$\begin{aligned} \label{preorderingtf} T({\sf f})\equiv T(f_1,\dotsc,f_k)=\bigg\{ \sum_{e=(e_1,\dotsc,e_k)\in \{0,1\}^k} f_1^{e_1}\cdots f_k^{e_k} \sigma_e:\, \sigma_e\in \sum\mathbb{R}_d[\underline{x}]^2 \, \bigg\}\end{aligned}$$ is the *preordering generated by the set $\sf f$*. These sets $\mathcal{K}({\sf f})$, $Q({\sf f})$, $S({\sf f})$, $T({\sf f})$ play a crucial role in this chapter and the next. **Definition 3**. A *cone* is a subset $C$ of ${\sf{A}}$ such that $$\begin{aligned} C+C\subseteq C~~~ {\rm and}~~~ \lambda \cdot C\subseteq C~~~ {\rm for}~~ \lambda\geq 0. \end{aligned}$$ A *unital cone* of $\sf{A}$ is a cone $C$ which contains the unit element of  $\sf{A}$.\ An *$S$-module* for a semiring $S$ is a unital cone $C$ such that $$\begin{aligned} \label{smodules} ac\in C~~~ {\rm for} ~~~ a\in S ~~ {\rm and}~~c\in C. \end{aligned}$$ Obviously, semirings, quadratic modules, and preorderings are unital cones. Setting $c=1$ in ([\[smodules\]](#smodules){reference-type="ref" reference="smodules"}) yields $a\in C$ for $a\in S$. Thus, $S\subseteq C$ for any $S$-module $C$. Each cone $C$ of ${\sf{A}}$ yields an ordering $\preceq$ on ${\sf{A}}$  by defining $$\begin{aligned} a \preceq b \quad {\rm if~ and~ only~ if}\quad b-a \in C.\end{aligned}$$ *Example 4*. Let $S$ be a semiring of $\sf{A}$ and $g_0:=1, g_1,\dotsc,g_r\in {\sf{A}}$, where $r\in \mathbb{N}$. Then $$\begin{aligned} C:=g_0 S+ g_1 S+\cdots+g_rS \end{aligned}$$ is the  *$S$-module of  $\sf{A}$ generated by $g_1,\dotsc,g_r$*. By the above definitions, all polynomials from $T({\sf f})$ are nonnegative on $\mathcal{K}({\sf f})$, but in general $T({\sf f})$ does not exhaust the nonnegative polynomials on $\mathcal{K}({\sf f})$. The following *Positivstellensatz of Krivine--Stengle* is a fundamental result of real algebraic geometry. It describes nonnegative, respectively positive, polynomials on $\mathcal{K}({\sf f})$ in terms of *quotients* of elements of the preordering $T({\sf f})$. **Theorem 5**. *Let $\mathcal{K}({\sf f})$ and $T({\sf f})$ be as above and let $g\in \mathbb{R}_d[\underline{x}]$. 
Then we have:* - *  (Positivstellensatz)  $g(x)>0$ for all  $x\in \mathcal{K}({\sf f})$  if and only if there exist polynomials $p,q\in T({\sf f})$ such that $pg=1+q$.* - *  (Nichtnegativstellensatz) $g(x)\geq 0$ for all $x\in \mathcal{K}({\sf f})$ if and only if there exist $p,q\in T({\sf f})$ and $m\in \mathbb{N}$ such that $pg=g^{2m}+q$.* - *  (Nullstellensatz) $g(x)= 0$ for $x\in \mathcal{K}({\sf f})$ if and only if $-g^{2n}\in T({\sf f})$ for some $n\in \mathbb{N}$.* - *  $\mathcal{K}({\sf f})$ is empty if and only if  $-1$ belongs to $T({\sf f}).$* *Proof.* See \[PD\] or \[Ms1\]. The original papers are \[Kv1\] and \[Ste1\]. ◻ All *"if\"* assertions are easily checked and it is not difficult to show that all four statements are equivalent, see e.g. \[Ms1\]. Standard proofs of Theorem [Theorem 5](#krivinestengle){reference-type="ref" reference="krivinestengle"} as given in \[PD\] or \[Ms1\] are based on the Tarski--Seidenberg transfer principle. Assertion (i) of Theorem [Theorem 5](#krivinestengle){reference-type="ref" reference="krivinestengle"} will play an essential role in the proof of Proposition [Proposition 26](#prearchcom){reference-type="ref" reference="prearchcom"} below. Now we turn to algebraic sets. For a subset $S$ of $\mathbb{R}_d[\underline{x}],$ the real zero set of $S$ is $$\begin{aligned} \mathcal{Z}(S)=\{x\in \mathbb{R}^d: f(x)=0 \quad {\rm for~ all}~~f\in S\}.\end{aligned}$$ A subset $V$ of $\mathbb{R}^d$ of the form $\mathcal{Z}(S)$ is called a *real algebraic set*. Hilbert's basis theorem \[CLO, p. 75\] implies that each real algebraic set is of the form $\mathcal{Z}(S)$ for some *finite* set $S=\{h_1,\dotsc,h_m\}$. In particular, each real algebraic set is a basic closed semi-algebraic set, because $\mathcal{K}(h_1,\dotsc,h_m,-h_1,\dotsc,-h_m)=\mathcal{Z}(S)$. Let $S$ be a subset of $\mathbb{R}_d[\underline{x}]$ and $V:=\mathcal{Z}(S)$ the corresponding real algebraic set. We denote by $\mathcal{I}$ the ideal of $\mathbb{R}_d[\underline{x}]$ generated by $S$ and by $\hat{\mathcal{I}}$ the ideal of $f\in \mathbb{R}_d[\underline{x}]$ which vanish on $V$. Clearly, $\mathcal{Z}(S)=\mathcal{Z}(\mathcal{I})$ and $\mathcal{I}\subseteq \hat{\mathcal{I}}$. In general, $\mathcal{I}\neq \hat{\mathcal{I}}$. (For instance, if $d=2$ and $S=\{x_1^2+x_2^2\}$, then $V=\{0\}$ and $x_1^2\in \hat{\mathcal{I}}$, but $x_1^2\notin\mathcal{I}$.) It can be shown \[BDRo, Theorem 4.1.4\] that  $\mathcal{I}=\hat{\mathcal{I}}$  if and only if  $\sum p_j^2\in \mathcal{I}$  for finitely many $p_j\in \mathbb{R}_d[\underline{x}]$ implies that $p_j\in \mathcal{I}$ for all $j$. An ideal that obeys this property is called *real*. In particular, $\hat{\mathcal{I}}$ is real. The ideal $\mathcal{I}$ generated by a single irreducible polynomial $h\in \mathbb{R}_d[\underline{x}]$ is real if and only if $h$ changes its sign on $\mathbb{R}^d$, that is, there are $x_0,x_1\in \mathbb{R}^d$ such that $h(x_0)h(x_1)<0$, see \[BCRo, Theorem 4.5.1\]. The quotient algebra $$\begin{aligned} \label{algregvar} \mathbb{R}[V]:=\mathbb{R}_d[\underline{x}]/\hat{\mathcal{I}}\end{aligned}$$ is called the algebra of *regular functions* on $V$. Since $\hat{\mathcal{I}}$ is real, it follows that $$\begin{aligned} \label{qminusq0} \sum \mathbb{R}[V]^2\cap \big(-\sum \mathbb{R}[V]^2\big) =\{0\}. \end{aligned}$$ *Example 6*. 
Let us assume that the set ${\sf f}$ is of the form $$\begin{aligned} {\sf f}=\{g_1,\dotsc,g_l,h_1,-h_1,\dotsc,h_m,-h_m\}.\end{aligned}$$ If ${\sf g}:=\{g_1,\dotsc,g_l\}$ and $\mathcal{I}$ denotes the ideal of $\mathbb{R}_d[\underline{x}]$ generated by $h_1,\dotsc,h_m$, then $$\begin{aligned} \label{quqadraticzero} \mathcal{K}({\sf f})=\mathcal{K}({\sf g})\cap \mathcal{Z}(\mathcal{I}),~~Q({\sf f})=Q({\sf g})+\mathcal{I}, ~~{\rm and}~~ T({\sf f})=T({\sf g})+\mathcal{I}. \end{aligned}$$ We prove ([\[quqadraticzero\]](#quqadraticzero){reference-type="ref" reference="quqadraticzero"}). The first equality of ([\[quqadraticzero\]](#quqadraticzero){reference-type="ref" reference="quqadraticzero"}) and the inclusions $Q({\sf f})\subseteq Q({\sf g})+\mathcal{I}$ and $T({\sf f})\subseteq T({\sf g})+\mathcal{I}$ are clear from the corresponding definitions. The identity $$\begin{aligned} ph_j=\frac{1}{4} [(p+1)^2h_j+(p-1)^2(-h_j)]\in Q({\sf f}),~~p\in \mathbb{R}_d[\underline{x}], \end{aligned}$$ implies that $\mathcal{I}\subseteq Q({\sf f})\subseteq T({\sf f})$. Hence $Q({\sf g})+\mathcal{I}\subseteq Q({\sf f})$ and $T({\sf g})+\mathcal{I}\subseteq T({\sf f}).$ $\hfill \circ$ Another important concept is introduced in the following definition. **Definition 7**. Let $C$ be a unital cone in ${\sf{A}}$. Define $$\begin{aligned} {\sf{A}}_b(C):=\{ a\in {\sf{A}}: {\rm there ~exists~ a}~~\lambda >0 ~~{\rm such ~that}~~ \lambda - a \in C~~{\rm and}~~\lambda +a\in C\}. \end{aligned}$$ We shall say that $C$ is *Archimedean* if  ${\sf{A}}_b(C)={\sf{A}}$, or equivalently, for every $a\in {\sf{A}}$ there exists a $\lambda>0$ such that $\lambda -a\in C.$ **Lemma 8**. *Let $Q$ be a quadratic module of  ${\sf{A}}$  and let $a\in {\sf{A}}$. Then $a\in {\sf{A}}_b(Q)$ if and only if  $\lambda^2 -a^2\in Q$ for some $\lambda >0$.* *Proof.* If $\lambda \pm a\in Q$ for $\lambda>0$, then $$\begin{aligned} \lambda^2 -a^2= \frac{1}{2\lambda}\big[(\lambda+a)^2(\lambda -a) +(\lambda -a)^2(\lambda + a)\big] \in Q.\end{aligned}$$ Conversely, if $\lambda^2-a^2 \in Q$ and $\lambda >0$, then $$\begin{aligned} \lambda\pm a=\frac{1}{2\lambda}\big[(\lambda^2-a^2) +(\lambda\pm a)^2 \big] \in Q. \end{aligned}$$ ◻ **Lemma 9**. *Suppose that $Q$ is a quadratic module or a semiring of ${\sf{A}}$.* - *  ${\sf{A}}_b(Q)$ is a unital subalgebra of ${\sf{A}}$.* - *   If the algebra ${\sf{A}}$ is generated by elements $a_1,\dotsc,a_n$, then $Q$ is Archimedean if and only if for each $a_i$ there exists a $\lambda_i>0$ such that $\lambda_i\pm a_i \in Q$.* *Proof.* (i): Clearly, sums and scalar multiples of elements of ${\sf{A}}_b(Q)$ are again in ${\sf{A}}_b(Q)$. It suffices to verify that this holds for the product of elements $a,b \in {\sf{A}}_b(Q)$. First we suppose that $Q$ is a quadratic module. By Lemma [Lemma 8](#boundedele1){reference-type="ref" reference="boundedele1"}, there are $\lambda_1 >0$ and $\lambda_2>0$ such that $\lambda_1^2-a^2$ and $\lambda_2^2-b^2$ are in $Q$. Then $$\begin{aligned} (\lambda_1\lambda_2)^2 - (ab)^2= \lambda_2^2(\lambda_1^2-a^2)+ a^2(\lambda_2^2-b^2) \in Q,\end{aligned}$$ so that $ab\in {\sf{A}}_b(Q)$ again by Lemma [Lemma 8](#boundedele1){reference-type="ref" reference="boundedele1"}. Now let $Q$ be a semiring. If $\lambda_1\pm a\in Q$ and $\lambda_2\pm b\in Q$, then $$\begin{aligned} \lambda_1\lambda_2\mp ab =\frac{1}{2}\big(( \lambda_1\pm a)(\lambda_2-b)+ (\lambda_1\mp a)(\lambda_2+b)\big) \in Q. \end{aligned}$$ \(ii\) follows at once from (i). 
◻ By Lemma [Lemma 9](#boundedele2){reference-type="ref" reference="boundedele2"}(ii), it suffices to check the Archimedean condition $\lambda \pm a\in Q$ for algebra generators. Often this simplifies proving that $Q$ is Archimedean. **Corollary 10**. *For a quadratic module $Q$ of  $\mathbb{R}_d[\underline{x}]$ the following are equivalent:* - *  $Q$ is Archimedean.* - *   There exists a number $\lambda >0$ such that $\lambda -\sum_{k=1}^d x_k^2\in Q$.* - *   For any $k=1,\dotsc,d$ there exists a $\lambda_k>0$ such that $\lambda_k-x_k^2\in Q$.* *Proof.* (i)$\to$(ii) is clear by definition. If $\lambda -\sum_{j=1}^d x_j^2\in Q$, then $$\begin{aligned} \lambda- x_k^2=\lambda -\sum\nolimits_j x_j^2~ +~ \sum\nolimits_{j\neq k}x_j^2 \in Q.\end{aligned}$$ This proves (ii)$\to$(iii). Finally, if (iii) holds, then $x_k\in {\sf{A}}_b(Q)$ by Lemma [Lemma 8](#boundedele1){reference-type="ref" reference="boundedele1"} and hence ${\sf{A}}_b(Q)={\sf{A}}$ by Lemma [Lemma 9](#boundedele2){reference-type="ref" reference="boundedele2"}(ii). Thus, (iii)$\to$(i). ◻ Note that $S=\mathbb{R}_+ \cdot 1$ is a semiring, so semirings could be rather "small". **Definition 11**. A semiring $S$ is called *generating* if  $A=S-S$. An Archimedean semiring is always generating, since $a=\lambda -(\lambda -a)$ for $a\in A$ and $\lambda \in \mathbb{R}$. **Corollary 12**. *If the quadratic module $Q({\sf f})$ of $\mathbb{R}_d[\underline{x}]$ is Archimedean, then the set $\mathcal{K}({\sf f})$ is compact.* *Proof.* By the respective definitions, polynomials of $Q({\sf f})$ are nonnegative on $\mathcal{K}({\sf f})$. Since $Q({\sf f})$ is Archimedean, $\lambda -\sum_{k=1}^d x_k^2\in Q({\sf f})$ for some $\lambda >0$ by Corollary [Corollary 10](#archirux){reference-type="ref" reference="archirux"}, so $\mathcal{K}({\sf f})$ is contained in the ball centered at the origin with radius $\sqrt{\lambda}$. ◻ The converse of Corollary [Corollary 12](#archicompact){reference-type="ref" reference="archicompact"} does not hold, as the following example shows. (However, it does hold for the preordering $T({\sf f})$ as shown by Proposition [Proposition 26](#prearchcom){reference-type="ref" reference="prearchcom"} below.) *Example 13*. Let $f_1=2x_1-1$, $f_2=2x_2-1$, $f_3=1-x_1x_2$. Then the set $\mathcal{K}({\sf f})$ is compact, but $Q({\sf f})$ is not Archimedean (see \[PD, p. 146\] for a proof). $\hfill \circ$ The following separation result will be used in Sections [4](#reparchimodiules){reference-type="ref" reference="reparchimodiules"} and [6](#Operator-theoreticappraochtothemomentprblem){reference-type="ref" reference="Operator-theoreticappraochtothemomentprblem"}. **Proposition 14**. *Let $C$ be an Archimedean unital cone of ${\sf{A}}$. If $a_0\in {\sf{A}}$ and $a_0\notin C$, there exists a $C$-positive linear functional $\varphi$ on ${\sf{A}}$ such that $\varphi(1)=1$ and $\varphi(a_0)\leq 0$. The functional $\varphi$ may be chosen as an extremal functional of the dual cone $$\begin{aligned} \label{cwedge} C^\wedge:=\{ L\in A^*: L(c)\geq 0~~~ \textup{for}~~ c\in C\, \}. \end{aligned}$$* *Proof.* Let $a\in {\sf{A}}$ and choose $\lambda >0$ such that  $\lambda \pm a\in C$. If  $0<\delta \leq\lambda^{-1}$,  then $\delta^{-1} \pm a\in C$ and hence $1\pm \delta a \in C$. Thus $1$ is an internal point of $C$ and an order unit for $C$. Therefore a separation theorem for convex sets (see e.g. 
Proposition C.5 in \[Sm20\]) applies, so there exists an extremal functional $\varphi$ of $C^\wedge$ such that $\varphi(1)=1$ and $\varphi(a_0)\leq 0$. (Without the extremality of $\varphi$ this result follows also from Eidelheit's separation Theorem A.27.) ◻ *Example 15*. Let ${\sf{A}}=\mathbb{R}_d[\underline{x}]$ and let $K$ be a closed subset of $\mathbb{R}^d$. If $C$ is the preordering ${\rm{Pos}} (K)$ of nonnegative polynomials on $K$, then ${\sf{A}}_b(C)$ is just the set of bounded polynomials on $K$. Hence $C$ is Archimedean if and only if $K$ is compact. $\hfill \circ$ Recall from Definition 1.13 that $\hat{{\sf{A}}}$ denotes the set of characters of the real algebra ${\sf{A}}$, that is, the set of unital algebra homomorphisms $\chi:\sf{A}\to \mathbb{R}$. For a subset $C$ of ${\sf{A}}$ we define $$\begin{aligned} \label{definionkq} \mathcal{K}(C):=\{ \chi\in \hat{{\sf{A}}}: \chi(c)\geq 0 ~~ {\rm for~all}~c\in C\}.\end{aligned}$$ *Example 16*. ${\sf{A}}=\mathbb{R}_d[\underline{x}]$\ Then $\hat{A}$ is the set of evaluations $\chi_t(p)=p(t), p\in {\sf{A}}$, at points of $\mathbb{R}^d$. As usual, we identify $\chi_t$ and $t$, so that $\hat{A}\cong \mathbb{R}^d$. Then, if $C$ is the quadratic module $Q({\sf f})$ defined by ([\[quadraticqf\]](#quadraticqf){reference-type="ref" reference="quadraticqf"}) or  $C$ is the semiring $S({\sf f})$ defined by ([\[semiringf\]](#semiringf){reference-type="ref" reference="semiringf"}) or  $C$ is the preordering $T({\sf f})$ defined by ([\[preorderingtf\]](#preorderingtf){reference-type="ref" reference="preorderingtf"}), the set $\mathcal{K}(C)$ is just the semi-algebraic set $\mathcal{K}({\sf f})$ given by ([\[basicclosed\]](#basicclosed){reference-type="ref" reference="basicclosed"}). $\circ$ Let $C$ be a quadratic module or a semiring. The set $C^{\rm{sat}}={\rm{Pos}}(\mathcal{K}(C))$ of all $f\in {\sf{A}}$ which are nonnegative on the set $\mathcal{K}(C)$ is obviously a preordering of ${\sf{A}}$ that contains $C$. Then $C$ is called *saturated* if  $C=C^{\rm{sat}}$, that is, if $C$ is equal to its *saturation*  $C^{\rm{sat}}$. Real algebraic geometry is treated in the books \[BCRo\], \[PD\], \[Ms1\]; a recent survey on positivity and sums of squares is given in \[Sr3\]. # Localizing functionals and supports of representing measures {#localizing} Haviland's Theorem 1.12 shows that there is a close link between positive polynomials and the moment problem. However, in order to apply this result, reasonable descriptions of positive, or at least of strictly positive, polynomials are needed. Recall that the moment problem for a functional $L$ on the interval $[a,b]$ is solvable if and only if $L(p^2+(x-a)(b-x)q^2)\geq 0$ for all $p,q \in \mathbb{R}[x]$. This condition means that two infinite Hankel matrices are positive semidefinite and this holds if and only if all principal minors of these matrices are nonnegative. In the multidimensional case we are trying to find similar solvability criteria. For this it is natural to consider sets that are defined by finitely many polynomial inequalities $f_1(x)\geq 0,\dotsc, f_k(x)\geq 0$. These are precisely the basic closed semi-algebraic sets $\mathcal{K}({\sf f})$, so we have entered the setup of real algebraic geometry. Let us fix a semi-algebraic set $\mathcal{K}({\sf f})$. Let $L$ be a $\mathcal{K}({\sf f})$-moment functional, that is, $L$ is of the form  $L(p)=L^\mu(p)\equiv \int p\,d\mu$ for $p \in \mathbb{R}_d[\underline{x}]$,  where $\mu$ is a Radon measure supported on $\mathcal{K}({\sf f})$.
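The one-dimensional criterion recalled above is easy to test in truncated form, and the same bookkeeping reappears below for localized functionals in several variables. The following sketch is an illustration only: it assumes that `numpy` is available, the truncation order `N` is a hypothetical choice, and it treats the Lebesgue measure on $[0,1]$, that is, $a=0$, $b=1$ and $(x-a)(b-x)=x(1-x)$. It builds finite sections of the two Hankel matrices behind the condition $L(p^2+(x-a)(b-x)q^2)\geq 0$ and confirms that they are positive semidefinite.

```python
import numpy as np

# Moments of the Lebesgue measure dmu = dx on [0, 1]:  s_n = L(x^n) = 1/(n + 1).
N = 6                                            # truncation order (hypothetical choice)
s = np.array([1.0 / (n + 1) for n in range(2 * N + 2)])

# Finite sections of the two Hankel matrices behind L(p^2 + x(1 - x) q^2) >= 0:
# H(s)_{i,j} = s_{i+j}  and  H(gs)_{i,j} = L(x(1 - x) x^{i+j}) = s_{i+j+1} - s_{i+j+2}.
H  = np.array([[s[i + j] for j in range(N)] for i in range(N)])
Hg = np.array([[s[i + j + 1] - s[i + j + 2] for j in range(N)] for i in range(N)])

print(np.linalg.eigvalsh(H).min() >= -1e-12)     # True: H(s)  is positive semidefinite
print(np.linalg.eigvalsh(Hg).min() >= -1e-12)    # True: H(gs) is positive semidefinite
```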
If $g\in \mathbb{R}_d[x]$ is nonnegative on $\mathcal{K}({\sf f})$, then obviously $$\begin{aligned} \label{hcondition} L(gp^2)\geq 0 \quad{\rm for~ all}\quad p \in \mathbb{R}_d[\underline{x}],\end{aligned}$$ so ([\[hcondition\]](#hcondition){reference-type="ref" reference="hcondition"}) is a  *necessary*  condition for $L$ being a $\mathcal{K}({\sf f})$-moment functional. The overall strategy in this chapter and the next is to solve the $\mathcal{K}({\sf f})$-moment problem by *finitely many sufficient* conditions of the form ([\[hcondition\]](#hcondition){reference-type="ref" reference="hcondition"}). That is, our aim is to "find\" nonnegative polynomials $g_1,\dotsc,g_m$  on $\mathcal{K}({\sf f})$ such that the following holds: *Each linear functional $L$ on $\mathbb{R}_d[\underline{x}]$ which satisfies condition ([\[hcondition\]](#hcondition){reference-type="ref" reference="hcondition"}) for $g=g_1,\dotsc,g_m$ and $g=1$ is a $\mathcal{K}({\sf f})$-moment functional.* (The polynomial $g=1$ is needed in order to ensure that $L$ itself is a positive functional.) In general it is not sufficient to take only the polynomials $f_j$ themselves as $g_j$. For our main results (Theorems [Theorem 29](#mpschm){reference-type="ref" reference="mpschm"} and 13.10), the positivity of the functional on the preordering $T({\sf f})$ is assumed. This means that condition ([\[hcondition\]](#hcondition){reference-type="ref" reference="hcondition"}) is required for *all* mixed products $g=f_1^{e_1}\cdots f_k^{e_k}$, where $e_j\in \{0,1\}$ for $j=1,\dotsc,k$. **Definition 17**. Let $L$ be a linear functional on $\mathbb{R}_d[\underline{x}]$ and let $g\in\mathbb{R}_d[\underline{x}]$. The linear functional $L_g$ on $\mathbb{R}_d[\underline{x}]$ defined by $L_g(p)=L(gp),\, p\in\mathbb{R}_d[\underline{x}]$, is called the *localization* of $L$ at $g$ or simply the *localized functional*. Condition ([\[hcondition\]](#hcondition){reference-type="ref" reference="hcondition"}) means the localized functional $L_g$ is a positive linear functional on $\mathbb{R}_d[\underline{x}].$ Further, if $L$ comes from a measure $\mu$ supported on $\mathcal{K}({\sf f})$ and $g$ is nonnegative on $\mathcal{K}({\sf f})$, then $$\begin{aligned} L_g(p)=L(gp)=\int_{\mathcal{K}({\sf f})} p(x)\, g(x)d\mu(x), ~~p\in\mathbb{R}_d[\underline{x}],\end{aligned}$$ that is, $L_g$ is given by the measure $\nu$ on $\mathcal{K}({\sf f})$ defined by $d\nu=g(x)d\mu$. Localized functionals will play an important role throughout our treatment. They are used to localize the support of the measure (see Propositions [Proposition 22](#cqfimpliessuppckf){reference-type="ref" reference="cqfimpliessuppckf"} and [Proposition 23](#zerosetsupport){reference-type="ref" reference="zerosetsupport"} and Theorem 14.25) or to derive determinacy criteria (see Theorem 14.12). Now we introduce two other objects associated with the functional $L$ and the polynomial $g$. Let $s=(s_\alpha)_{\alpha \in \mathbb{N}_0^d}$ be the $d$-sequence given by $s_\alpha=L(x^\alpha)$ and write $g=\sum_\gamma g_\gamma x^\gamma$. 
Then we define a $d$-sequence $g(E)s=((g(E)s)_\alpha)_{\alpha\in \mathbb{N}_0^d}$ by $$\begin{aligned} (g(E)s)_\alpha:=\sum\nolimits_\gamma ~ g_\gamma s_{\alpha+\gamma},~~\alpha\in \mathbb{N}_0^d,\end{aligned}$$ and an infinite matrix  $H(gs)=(H(gs)_{\alpha,\beta})_{\alpha,\beta\in \mathbb{N}_0^d}$  over  $\mathbb{N}_0^d\times \mathbb{N}_0^d$ with entries $$\begin{aligned} \label{localhaneklentries} H(gs)_{\alpha,\beta}:= \sum\nolimits_\gamma ~ g_\gamma s_{\alpha+\beta+\gamma},~~\alpha,\beta\in \mathbb{N}_0^d.\end{aligned}$$ Using these definitions for $p(x)=\sum_\alpha a_\alpha x^{\alpha}\in \mathbb{R}_d[\underline{x}]$ we compute $$\begin{aligned} \label{hcondition1} L_s(gp^2)=\sum_{\alpha,\beta,\gamma} a_\alpha a_\beta g_\gamma s_{\alpha+\beta+\gamma} =\sum_{\alpha,\beta} a_\alpha a_\beta (g(E)s)_{\alpha+\beta}=\sum_{\alpha,\beta}\, a_\alpha a_\beta H(gs)_{\alpha,\beta} .\end{aligned}$$ This shows that $g(E)s$ is the  $d$-sequence for the functional $L_g$ and $H(gs)$ is a Hankel matrix for the sequence $g(E)s$. The matrix $H(gs)$ is called the *localized Hankel matrix* of $s$ at $g$. **Proposition 18**. *Let  $Q({\sf g})$  be the quadratic module generated by the finite subset  ${\sf g}=\{g_1,\dotsc,g_m\}$ of  $\mathbb{R}_d[\underline{x}]$. Let $L$ be a linear functional on $\mathbb{R}_d[\underline{x}]$ and $s=(s_\alpha)_{\alpha \in \mathbb{N}_0^d}$ the $d$-sequence defined by  $s_\alpha=L(x^\alpha).$ Then the following are equivalent:* - *  $L$ is a $Q({\sf g})$-positive linear functional on  $\mathbb{R}_d[\underline{x}]$.* - *  $L, L_{g_1},\dotsc,L_{g_m}$ are positive linear functionals on  $\mathbb{R}_d[\underline{x}]$.* - *   $s, g_1(E)s,\dotsc,g_m(E)s$ are positive semidefinite $d$-sequences.* - *   $H(s),H(g_1s),\dotsc, H(g_ms)$ are positive semidefinite matrices.* *Proof.* The equivalence of (i) and (ii) is immediate from the definition ([\[quadraticqf\]](#quadraticqf){reference-type="ref" reference="quadraticqf"}) of the quadratic module $Q({\sf g})$ and Definition [Definition 17](#localizedfunctional){reference-type="ref" reference="localizedfunctional"} of the localized functionals $L_{g_j}$. By Proposition 2.7, a linear functional is positive if and only if the corresponding sequence is positive semidefinite, or equivalently, the Hankel matrix is positive semidefinite. By ([\[hcondition1\]](#hcondition1){reference-type="ref" reference="hcondition1"}) this gives the equivalence of (ii), (iii), and (iv). ◻ The solvability conditions in the existence theorems for the moment problem in this chapter and the next are given in the form (i) for some finitely generated quadratic module or preordering. This means that condition ([\[hcondition\]](#hcondition){reference-type="ref" reference="hcondition"}) is satisfied for finitely many polynomials $g$. Proposition [Proposition 18](#equvialentsolvmp){reference-type="ref" reference="equvialentsolvmp"} says there are various *equivalent* formulations of these solvability criteria: They can be expressed in the language of real algebraic geometry (in terms of quadratic modules, semirings or preorderings), of $*$-algebras (as positive functionals on $\mathbb{R}_d[\underline{x}]$), of matrices (by the positive semidefiniteness of Hankel matrices) or of sequences (by the positive semidefiniteness of sequences). The next proposition contains a useful criterion for localizing supports of representing measures. 
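Before turning to it, note that the equivalences of Proposition [Proposition 18](#equvialentsolvmp){reference-type="ref" reference="equvialentsolvmp"} are easy to check numerically on truncated data. The following sketch is an illustration only (the atomic measure, the truncation degree `n`, and the helper functions are hypothetical choices, and `numpy` is assumed to be available): it builds the matrices ([\[localhaneklentries\]](#localhaneklentries){reference-type="ref" reference="localhaneklentries"}) for $d=2$ and $f_1=1-x_1^2-x_2^2$ and verifies that the finite sections of $H(s)$ and $H(f_1s)$ are positive semidefinite.

```python
import itertools
import numpy as np

# A toy atomic measure mu = sum_i w_i * delta_{t_i} supported inside the unit disk,
# i.e. inside K(f) for f = 1 - x1^2 - x2^2 (points and weights are hypothetical).
points  = np.array([[0.0, 0.0], [0.5, 0.2], [-0.3, 0.6], [0.1, -0.7]])
weights = np.array([1.0, 0.5, 0.25, 0.25])

def moment(alpha):
    """s_alpha = L(x^alpha) = sum_i w_i * t_i^alpha."""
    return float(np.sum(weights * np.prod(points ** np.array(alpha), axis=1)))

f = {(0, 0): 1.0, (2, 0): -1.0, (0, 2): -1.0}    # f as {multi-index gamma: coefficient f_gamma}

n = 3                                            # index by monomials x^alpha with |alpha| <= n
idx = [a for a in itertools.product(range(n + 1), repeat=2) if sum(a) <= n]

def hankel(g):
    """Truncated localized Hankel matrix H(gs)_{alpha,beta} = sum_gamma g_gamma s_{alpha+beta+gamma}."""
    return np.array([[sum(c * moment((a[0] + b[0] + g0, a[1] + b[1] + g1))
                          for (g0, g1), c in g.items())
                      for b in idx] for a in idx])

H_s = hankel({(0, 0): 1.0})                      # ordinary Hankel matrix H(s)
H_fs = hankel(f)                                 # localized Hankel matrix H(fs)

print(np.linalg.eigvalsh(H_s).min() >= -1e-10)   # True
print(np.linalg.eigvalsh(H_fs).min() >= -1e-10)  # True, as Proposition 18 predicts
```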
We denote by $\mathcal{M}_+(\mathbb{R}^d)$ the set of Radon measures $\mu$ on $\mathbb{R}^d$ for which all moments are finite, or equivalently, $\int |p(x)|\, d\mu < \infty$  for all $p\in \mathbb{R}_d[\underline{x}]$. **Proposition 19**. *Let $\mu\in \mathcal{M}_+(\mathbb{R}^d)$ and let $s$ be the moment sequence of $\mu$. Further, let $g_j\in \mathbb{R}_d[\underline{x}]$ and $c_j\geq 0$ be given for $j=1,\dotsc,k.$ Set $$\begin{aligned} \label{defsetK} \mathcal{K}=\{ x\in \mathbb{R}^d: |g_j(x)|\leq c_j ~~~ \text{for} ~~j=1,\dotsc,k\}. \end{aligned}$$ Then we have  ${\rm supp} ~\mu \subseteq \mathcal{K}$ if and only if there exist constants $M_j>0$ such that $$\begin{aligned} \label{condtionL_my} L_s(g_j^{2n})\leq M_jc_j^{2n}~~~\text{for}~~ n\in \mathbb{N},~j=1,\dotsc,k. \end{aligned}$$* *Proof.* The only if part is obvious. We prove the if direction and slightly modify the argument used in the proof of Proposition 4.1. Let $t_0 \in \mathbb{R}^d\backslash \mathcal{K}$. Then there is an index  $j=1,\dotsc,k$ such that $|g_j(t_0)|>c_j$. Hence there exist a number $\lambda>c_j$ and a ball $U$ around $t_0$ such that $|g_j(t)|\geq \lambda$ for $t\in U$. For $n\in \mathbb{N}$ we then derive $$\begin{aligned} \lambda^{2n}\mu(U)\leq \int_U g_j(t)^{2n}\, d\mu(t)\leq \int_{\mathbb{R}^d} g_j(t)^{2n}\, d\mu(t)=L_s(g_j^{2n})\leq M_jc_j^{2n}. \end{aligned}$$ Since $\lambda>c_j$, this is only possible for all $n\in \mathbb{N}$ if  $\mu(U)=0$. Therefore, $t_0\notin{\rm supp} ~\mu$. This proves that ${\rm supp} ~\mu \subseteq \mathcal{K}.$ ◻ We state the special case $g_j(x)=x_j$ of Proposition [Proposition 19](#locquader){reference-type="ref" reference="locquader"} separately as **Corollary 20**. *Suppose $c_1>0,\dotsc,c_d>0$. A measure $\mu\in \mathcal{M}_+(\mathbb{R}^d)$ with moment sequence $s$ is supported on the $d$-dimensional interval  $[-c_1,c_1]\times\dots \times [-c_d,c_d]$  if and only if there are positive constants $M_j$ such that $$\begin{aligned} L_s(x_j^{2n})\equiv s_{(0,\dotsc,0,2n,0,\dotsc,0)}\leq M_jc_j^{2n}~~~~\text{for}~~n\in\mathbb{N},~j=1,\dotsc,d,\end{aligned}$$ where the entry $2n$ stands in the $j$-th place.* The following two propositions are basic results about the moment problem on *compact* sets. Both follow from Weierstrass' theorem on approximation of continuous functions by polynomials. **Proposition 21**. *If $\mu\in \mathcal{M}_+(\mathbb{R}^d)$ is supported on a compact set, then $\mu$ is determinate. In particular, if $K$ is a compact subset of $\mathbb{R}^d$, then each $K$-moment sequence, so each measure $\mu\in \mathcal{M}_+(\mathbb{R}^d)$ supported on $K$, is determinate.* *Proof.* Let $\nu\in \mathcal{M}_+(\mathbb{R}^d)$ be a measure having the same moments and so the same moment functional $L$ as $\mu$. Fix $h\in C_c(\mathbb{R}^d,\mathbb{R})$. We choose a compact $d$-dimensional interval $K$ containing the supports of $\mu$ and $h$. From Corollary [Corollary 20](#suppndiminterval){reference-type="ref" reference="suppndiminterval"} it follows that ${\rm{supp}}\, \nu\subseteq K$. By Weierstrass' theorem, there is a sequence $(p_n)_{n\in \mathbb{N}}$ of polynomials $p_n\in \mathbb{R}_d[\underline{x}]$ converging to $h$ uniformly on $K$. Passing to the limits in the equality $$\begin{aligned} \int_K p_n\, d\mu=L(p_n)=\int_K p_n\, d\nu \end{aligned}$$ we get $\int h\,d\mu =\int h \,d\nu$. Since this holds for all $h\in C_c(\mathbb{R}^d,\mathbb{R})$, we have $\mu=\nu$. ◻ **Proposition 22**. *Suppose that $\mu\in \mathcal{M}_+(\mathbb{R}^d)$ is supported on a compact set.
Let ${\sf f}=\{f_1,\dotsc,f_k\}$ be a finite subset of $\mathbb{R}_d[\underline{x}]$ and assume that the moment functional defined by $L^\mu(p) =\int p\, d\mu$, $p\in \mathbb{R}_d[\underline{x}]$, is $Q({\sf f})$-positive. Then ${\rm supp} ~\mu \subseteq \mathcal{K}({\sf f}).$* *Proof.* Suppose that $t_0 \in \mathbb{R}^d\backslash \mathcal{K}({\sf f})$. Then there exist a number $j\in \{1,\dotsc,k\}$, a ball $U$ with radius $\rho >0$ around $t_0$, and a number $\delta>0$ such that $f_j \leq-\delta$ on $2 U$. We define a continuous function $h$ on $\mathbb{R}^d$ by $h(t)=\sqrt{2\rho{-}|| t -t_0||}$  for  $|| t - t_0|| \leq 2\rho$  and $h(t)=0$ otherwise and take a compact $d$-dimensional interval $K$ containing $2 U$ and ${\rm supp} ~\mu$. By Weierstrass' theorem, there is a sequence of polynomials $p_n \in \mathbb{R}_d[\underline{x}]$ converging to $h$ uniformly on $K$. Then  $f_jp_n^2\to f_jh^2$  uniformly on $K$ and hence $$\begin{aligned} \label{estiamtefkU} \lim_{n}\, L^\mu(& f_j p_n^2) =\int_K (\lim_n\, f_j p_n^2)\, d\mu= \int_K~ f_j h^2\, d\mu = \int_{2 U} f_j(t)(2 \rho{-}|| t-t_0 ||)\, d\mu(t)\\ & \leq \int_{2U} -\delta(2 \rho{-}|| t-t_0 ||)\, d\mu\leq -\int_U\delta \rho\, d\mu(t)= -\delta \rho \mu(U) . \end{aligned}$$ Since $L^\mu$ is $Q({\sf f})$-positive, we have  $L^\mu(f_j p_n^2)\geq 0$. Therefore, $\mu(U)=0$ by ([\[estiamtefkU\]](#estiamtefkU){reference-type="ref" reference="estiamtefkU"}), so that $t_0\notin{\rm supp} ~\mu$. This proves that  ${\rm supp} ~\mu \subseteq \mathcal{K}({\sf f}).$ ◻ The assertions of Propositions [Proposition 21](#detcomacpcase){reference-type="ref" reference="detcomacpcase"} and [Proposition 22](#cqfimpliessuppckf){reference-type="ref" reference="cqfimpliessuppckf"} are no longer valid if the compactness assumptions are omitted. But the counterpart of Proposition [Proposition 22](#cqfimpliessuppckf){reference-type="ref" reference="cqfimpliessuppckf"} for zero sets of ideals holds without any compactness assumption. **Proposition 23**. *Let $\mu\in \mathcal{M}_+(\mathbb{R}^d)$ and let $\mathcal{I}$ be an ideal of $\mathbb{R}_d[\underline{x}]$. If the moment functional $L^\mu$ of  $\mu$ is $\mathcal{I}$-positive, then $L^\mu$ annihilates $\mathcal{I}$ and   ${\rm supp}\, \mu \subseteq \mathcal{Z}(\mathcal{I})$.\ (As usual, $\mathcal{Z}(\mathcal{I})=\{x\in \mathbb{R}^d:p(x)=0~~ \text{for}~ p\in \mathcal{I}\}$ is the zero set of $\mathcal{I}$.)* *Proof.* If $p\in \mathcal{I}$, then $-p\in \mathcal{I}$ and hence $L^\mu(\pm p)\geq 0$ by the $\mathcal{I}$-positivity of $L^\mu$, so that $L^\mu(p)=0$. That is, $L^\mu$ annihilates $\mathcal{I}$. Let $p\in \mathcal{I}$. Since $p^2\in \mathcal{I}$, we have $L^\mu(p^2)=\int p^2\, d\mu=0$. Therefore, from Proposition [\[zerosuppproposition\]](#zerosuppproposition){reference-type="ref" reference="zerosuppproposition"} it follows that ${\rm supp}\, \mu\subseteq \mathcal{Z}(p^2)=\mathcal{Z}(p)$. Thus, ${\rm supp}\, \mu \subseteq \mathcal{Z}(\mathcal{I})$. ◻ For a linear functional $L$ on $\mathbb{R}_d[\underline{x}]$ we define $$\begin{aligned} \mathcal{N}_+(L) :=\{ f\in {\rm{Pos}}(\mathbb{R}^d): L(f)=0\, \}. \end{aligned}$$ **Proposition 24**. *Let $L$ be a moment functional on $\mathbb{R}_d[\underline{x}]$, that is, $L=L^\mu$ for some $\mu\in \mathcal{M}_+(\mathbb{R}^d)$.
Then the ideal $\mathcal{I}_+(L)$ of $\mathbb{R}_d[\underline{x}]$ generated by $\mathcal{N}_+(L)$ is annihilated by $L$ and the support of each representing measure of $L$ is contained in $\mathcal{Z}(\mathcal{I}_+(L))$.* *Proof.* Let $\nu$ be an arbitrary representing measure of $L$. If $f\in \mathcal{N}_+(L)$, then we have  $L(f)=\int f(x)\, d\nu=0$. Since $f\in {\rm{Pos}}(\mathbb{R}^d)$, Proposition [\[zerosuppproposition\]](#zerosuppproposition){reference-type="ref" reference="zerosuppproposition"} applies and yields ${\rm supp}\, \nu\subseteq \mathcal{Z}(f)$. Hence ${\rm supp}\, \nu\subseteq \mathcal{Z}(\mathcal{N}_+(L))=\mathcal{Z}(\mathcal{I}_+(L)).$ In particular, the inclusion ${\rm supp}\, \nu\subseteq \mathcal{Z}(\mathcal{I}_+(L))$ implies that $L=L^\nu$ annihilates $\mathcal{I}_+(L).$ ◻ # The moment problem on compact semi-algebraic sets and the strict Positivstellensatz {#momentproblemstrictpos} The solutions of one-dimensional moment problems have been derived from descriptions of nonnegative polynomials as weighted sums of squares. The counterparts of the latter in the multidimensional case are the so-called "Positivstellensätze" of real algebraic geometry. In general these results require denominators (see Theorem [Theorem 5](#krivinestengle){reference-type="ref" reference="krivinestengle"}), so they do not yield reasonable criteria for solving moment problems. However, for *strictly positive* polynomials on *compact* semi-algebraic sets $\mathcal{K}({\sf f})$ there are *denominator free* Positivstellensätze (Theorems [Theorem 28](#schmps){reference-type="ref" reference="schmps"} and [Theorem 50](#archmedps){reference-type="ref" reference="archmedps"}) which provide solutions of moment problems. Even more, it turns out that there is a close interplay between this type of Positivstellensätze and moment problems on compact semi-algebraic sets, that is, existence results for the moment problem can be derived from Positivstellensätze and vice versa. We state the main technical steps of the proofs separately as Propositions [Proposition 25](#technkrivinestengle){reference-type="ref" reference="technkrivinestengle"}--[Proposition 27](#continuityLprop){reference-type="ref" reference="continuityLprop"}. Proposition [Proposition 27](#continuityLprop){reference-type="ref" reference="continuityLprop"} is also used in a crucial manner in the proof of Theorem 13.10 below. Suppose that ${\sf f}=\{f_1,\dotsc,f_k\}$ is a finite subset of $\mathbb{R}_d[\underline{x}]$. Let $B(\mathcal{K}({\sf f}))$ denote the algebra of all polynomials of $\mathbb{R}_d[\underline{x}]$ which are bounded on the set $\mathcal{K}({\sf f})$. **Proposition 25**. *Let $g\in B(\mathcal{K}({\sf f}))$ and $\lambda >0$. If  $\lambda^2>g(x)^2$  for all $x\in \mathcal{K}({\sf f})$, then there exists a $p \in T({\sf f})$ such that $$\begin{aligned} \label{p2n2} g^{2n} \preceq \lambda^{2n+2}p ~~~\text{for}~~~n\in \mathbb{N}. \end{aligned}$$* *Proof.* By the Krivine--Stengle Positivstellensatz (Theorem [Theorem 5](#krivinestengle){reference-type="ref" reference="krivinestengle"}(i)), applied to the positive polynomial $\lambda^2-g^2$ on $\mathcal{K}({\sf f})$, there exist polynomials $p,q \in T({\sf f})$ such that $$\begin{aligned} \label{stengle} p(\lambda^2-g^2) =1 +q. \end{aligned}$$ Since $q \in T({\sf f})$ and $T({\sf f})$ is a quadratic module, $g^{2n}(1+q) \in T({\sf f})$ for $n \in \mathbb{N}_0$.
Therefore, using ([\[stengle\]](#stengle){reference-type="ref" reference="stengle"}) we conclude that $$\begin{aligned} g^{2n+2}p = g^{2n} \lambda^2 p-g^{2n}(1+q) \preceq g^{2n} \lambda^2 p.\end{aligned}$$ By induction it follows that $$\begin{aligned} \label{p2n1} g^{2n} p \preceq \lambda^{2n} p. \end{aligned}$$ Since $g^{2n}(q+pg^2) \in T({\sf f})$, using first ([\[stengle\]](#stengle){reference-type="ref" reference="stengle"}) and then ([\[p2n1\]](#p2n1){reference-type="ref" reference="p2n1"}) we derive $$\begin{aligned} \hspace{0,7cm}g^{2n} \preceq g^{2n} + g^{2n}(q+pg^2) = g^{2n}(1+q+pg^2)=g^{2n}\lambda^2 p \preceq \lambda^{2n+2}p\,. \hspace{1,1cm} \Box \end{aligned}$$ ◻ **Proposition 26**. *If the set $\mathcal{K}({\sf f})$ is compact, then the associated preordering $T({\sf f})$ is Archimedean.* *Proof.* Put $g(x):=(1+x_1^2)\cdots(1+x_d^2)$. Since $g$ is bounded on the compact set $\mathcal{K}({\sf f})$, we have $\lambda^2 >g(x)^2$ on $\mathcal{K}({\sf f})$ for some $\lambda>0$. Therefore, by Proposition [Proposition 25](#technkrivinestengle){reference-type="ref" reference="technkrivinestengle"} there exists a $p \in T({\sf f})$ such that ([\[p2n2\]](#p2n2){reference-type="ref" reference="p2n2"}) holds. Further, for any multiindex $\alpha\in \mathbb{N}_0^d$, $|\alpha| \leq k$, $k \in \mathbb{N}$, we obtain $$\begin{aligned} \label{pk} \pm 2x^\alpha \preceq x^{2\alpha} + 1 \preceq \sum_{|\beta| \leq k} x^{2\beta} \preceq g^k. \end{aligned}$$ Hence there exist numbers $c >0$ and $k \in \mathbb{N}$ such that $p \preceq 2c g^k$. Combining the latter with $g^{2n} \preceq \lambda^{2n+2}p$ by ([\[p2n2\]](#p2n2){reference-type="ref" reference="p2n2"}), we get $g^{2k} \preceq \lambda^{2k+2} 2c g^k$ and so $$\begin{aligned} (g^k{-}\lambda^{2k+2}c)^2 \preceq (\lambda^{2k+2}c)^2{\cdot}1.\end{aligned}$$ Hence, by Lemma [Lemma 8](#boundedele1){reference-type="ref" reference="boundedele1"}, $g^k{-}\lambda^{2k+2}c \in {\sf{A}}_b(T({\sf f}))$ and so $g^k \in {\sf{A}}_b(T({\sf f}))$, where ${\sf{A}}:=\mathbb{R}_d[\underline{x}]$. Since $\pm x_j \preceq g^k$ by ([\[pk\]](#pk){reference-type="ref" reference="pk"}) and $g^k \in {\sf{A}}_b(T({\sf f}))$, we obtain $x_j \in {\sf{A}}_b(T({\sf f}))$ for $j=1,{\cdots},d$. Now from Lemma [Lemma 9](#boundedele2){reference-type="ref" reference="boundedele2"}(ii) it follows that ${\sf{A}}_b(T({\sf f}))= {\sf{A}}$. This means that $T({\sf f})$ is Archimedean. ◻ **Proposition 27**. *Suppose that $L$ is a  $T({\sf f})$-positive linear functional on $\mathbb{R}_d[\underline{x}]$.* - *    If  $g\in B(\mathcal{K}({\sf f}))$ and $\|g \|_\infty$ denotes the supremum of $|g|$ on $\mathcal{K}({\sf f}),$ then $$\begin{aligned} \label{conitnuityL} |L(g)|\leq L(1) ~\|g\|_\infty . \end{aligned}$$* - *  If  $g\in B(\mathcal{K}({\sf f}))$ and $g(x)\geq 0$ for $x\in \mathcal{K}({\sf f})$, then $L(g)\geq 0$.* *Proof.* (i): Fix $\varepsilon >0$ and put $\lambda:= \parallel g \parallel_\infty +\varepsilon$. We define a real sequence $s=(s_n)_{n\in \mathbb{N}_0}$ by $s_n:=L(g^n)$. Then $L_s(q(y))=L(q(g))$ for $q\in \mathbb{R}[y]$. For any $p\in \mathbb{R}[y]$, we have $p(g)^2\in \sum \mathbb{R}_d[\underline{x}]^2\subseteq T({\sf f})$ and hence $L_s(p(y)^2)=L(p(g)^2)\geq 0$, since $L$ is $T({\sf f})$-positive. Thus, by Hamburger's theorem 3.8, there exists a Radon measure $\nu$ on $\mathbb{R}$ such that $s_n= \int_\mathbb{R}t^n d\nu(t)$, $n \in \mathbb{N}_0$. For $\gamma >\lambda$ let $\chi_\gamma$ denote the characteristic function of the set $(-\infty,-\gamma]\cup[\gamma,+\infty)$.
Since $\lambda^2-g(x)^2>0$ on $\mathcal{K}({\sf f})$, we have $g^{2n} \preceq \lambda^{2n+2}p$  by equation ([\[p2n2\]](#p2n2){reference-type="ref" reference="p2n2"}) in Proposition [Proposition 25](#technkrivinestengle){reference-type="ref" reference="technkrivinestengle"}. Using the $T({\sf f})$-positivity of $L$ we derive $$\begin{aligned} \label{gamma2nlp} \gamma^{2n} \int_\mathbb{R}\chi_\gamma(t) ~d\nu(t) \leq \int_\mathbb{R}t^{2n} d\nu(t) =s_{2n}= L(g^{2n}) \leq \lambda^{2n+2} L(p) \end{aligned}$$ for all $n\in \mathbb{N}$. Since $\gamma >\lambda$, ([\[gamma2nlp\]](#gamma2nlp){reference-type="ref" reference="gamma2nlp"}) implies that $\int_\mathbb{R}\chi_\gamma(t)~d\nu(t) =0$. Therefore, ${\rm supp}~\nu \subseteq [-\lambda,\lambda]$. (The preceding argument has already been used in the proof of Proposition [Proposition 19](#locquader){reference-type="ref" reference="locquader"} to obtain a similar conclusion.) Therefore, applying the Cauchy--Schwarz inequality for $L$ we derive $$\begin{aligned} |L(g)|^2 &\leq L(1)L(g^2) = L(1) s_2= L(1)\int_{-\lambda}^\lambda~ t^2 ~d\nu(t)\\& \leq L(1)\nu(\mathbb{R})\lambda^2 = L(1)^2\lambda^2=L(1)^2(\parallel g \parallel_\infty + \varepsilon)^2. \end{aligned}$$ Letting $\varepsilon \to +0$, we get  $|L(g)| \leq L(1)\parallel g \parallel_\infty$. (ii): Since $g\geq 0$ on $\mathcal{K}({\sf f})$, we clearly have  $\|\,1\cdot \|g\|_\infty-2\,g\|_\infty = \|g\|_\infty.$ Using this equality and ([\[conitnuityL\]](#conitnuityL){reference-type="ref" reference="conitnuityL"}) we conclude that $$\begin{aligned} L(1) \|g\|_\infty - 2\,L(g)=L( 1 \cdot\|g\|_\infty - 2\,g) \leq L(1)\| 1\cdot \|g\|_\infty -2\,g\|_\infty =L(1)\|g\|_\infty , \end{aligned}$$ which in turn implies that  $L(g)\geq 0$. ◻ The following theorem is the *strict Positivstellensatz* for compact basic closed semi-algebraic sets $\mathcal{K}({\sf f}).$ **Theorem 28**. *Let ${\sf f}=\{f_1,\dotsc,f_k\}$ be a finite subset of $\mathbb{R}_d[\underline{x}]$ and let  $h\in \mathbb{R}_d[\underline{x}]$. If the set $\mathcal{K}({\sf f})$ is compact and $h(x) > 0$ for all $x \in \mathcal{K}({\sf f})$, then $h \in T({\sf f})$.* *Proof.* Assume to the contrary that $h$ is not in $T({\sf f})$. By Proposition [Proposition 26](#prearchcom){reference-type="ref" reference="prearchcom"}, $T({\sf f})$ is Archimedean. Therefore, by Proposition [Proposition 14](#eidelheit){reference-type="ref" reference="eidelheit"}, there exists a $T({\sf f})$-positive linear functional $L$ on ${\sf{A}}$ such that $L(1)=1$ and $L(h) \leq 0$. Since $h >0$ on the compact set $\mathcal{K}({\sf f})$, there is a positive number $\delta$ such that $h(x)-\delta> 0$ for all $x\in\mathcal{K}({\sf f})$. We extend the continuous function $\sqrt{h(x)-\delta}$  on $\mathcal{K}({\sf f})$ to a continuous function on some compact $d$-dimensional interval containing $\mathcal{K}({\sf f})$. Again by the classical Weierstrass theorem,  $\sqrt{h(x)-\delta}$  is the uniform limit on $\mathcal{K}({\sf f})$ of a sequence $(p_n)$ of polynomials $p_n \in \mathbb{R}_d[\underline{x}]$. Then  $p_n^2-h+\delta\to 0$ uniformly on $\mathcal{K}({\sf f})$, that is, $\lim_{n} \parallel p_n^2 -h +\delta \parallel_\infty =0$. Recall that $B(\mathcal{K}({\sf f}))=\mathbb{R}_d[\underline{x}]$, since $\mathcal{K}({\sf f})$ is compact. Hence  $\lim_{n} L(p_n^2-h+ \delta)=0$  by the inequality ([\[conitnuityL\]](#conitnuityL){reference-type="ref" reference="conitnuityL"}) in Proposition [Proposition 27](#continuityLprop){reference-type="ref" reference="continuityLprop"}(i).
But, since $L(p_n^2) \geq 0$, $L(h)\leq 0$, and $L(1)=1$, we have  $L(p_n^2-h+\delta)\geq \delta >0$ which is the desired contradiction. This completes the proof of the theorem. ◻ The next result gives a solution of the $\mathcal{K}({\sf f})$-moment problem for compact basic closed semi-algebraic sets. **Theorem 29**. *Let ${\sf f}=\{f_1,\dotsc,f_k\}$ be a finite subset of $\mathbb{R}_d[\underline{x}]$. If the set $\mathcal{K}({\sf f})$ is compact, then each $T({\sf f})$-positive linear functional $L$ on $\mathbb{R}_d[\underline{x}]$ is a $\mathcal{K}({\sf f})$-moment functional.* *Proof.* Since $\mathcal{K}({\sf f})$ is compact, $B(\mathcal{K}({\sf f}))=\mathbb{R}_d[\underline{x}]$. Therefore, it suffices to combine Proposition [Proposition 27](#continuityLprop){reference-type="ref" reference="continuityLprop"}(ii) with Haviland's Theorem 1.12. ◻ *Remark 30*. Theorem [Theorem 29](#mpschm){reference-type="ref" reference="mpschm"} was obtained from Proposition [Proposition 27](#continuityLprop){reference-type="ref" reference="continuityLprop"}(ii) and Haviland's Theorem 1.12. Alternatively, it can be derived from Proposition [Proposition 27](#continuityLprop){reference-type="ref" reference="continuityLprop"}(i) combined with Riesz' representation theorem. Let us sketch this proof. By ([\[conitnuityL\]](#conitnuityL){reference-type="ref" reference="conitnuityL"}), the functional $L$ on $\mathbb{R}_d[\underline{x}]$ is $\|\cdot\|_\infty$-continuous. Extending $L$ to $C(\mathcal{K}({\sf f}))$ by the Hahn--Banach theorem and applying Riesz' representation theorem for continuous linear functionals, $L$ is given by a signed Radon measure on $\mathcal{K}({\sf f})$. Setting $g=1$ in ([\[conitnuityL\]](#conitnuityL){reference-type="ref" reference="conitnuityL"}), it follows that $L$, hence the extended functional, has the norm $L(1)$. It is not difficult to show that this implies that the representing measure is positive. $\hfill \circ$ The shortest path to Theorems [Theorem 28](#schmps){reference-type="ref" reference="schmps"} and [Theorem 29](#mpschm){reference-type="ref" reference="mpschm"} is probably to use Proposition [Proposition 27](#continuityLprop){reference-type="ref" reference="continuityLprop"} as we have done. However, in order to emphasize the interaction between both theorems and so in fact between the moment problem and real algebraic geometry we now derive each of these theorems from the other. *Proof of Theorem [Theorem 29](#mpschm){reference-type="ref" reference="mpschm"} (assuming Theorem [Theorem 28](#schmps){reference-type="ref" reference="schmps"}):*\ Let $h\in \mathbb{R}_d[\underline{x}]$. If $h(x)>0$ on $\mathcal{K}({\sf f})$, then $h\in T({\sf f})$ by Theorem [Theorem 28](#schmps){reference-type="ref" reference="schmps"} and so $L(h)\geq 0$ by the assumption. Therefore $L$ is a $\mathcal{K}({\sf f})$-moment functional by the implication (ii)$\to$(iv) of Haviland's Theorem 1.12. $\hfill$ $\Box$ *Proof of Theorem [Theorem 28](#schmps){reference-type="ref" reference="schmps"} (assuming Theorem [Theorem 29](#mpschm){reference-type="ref" reference="mpschm"} and Proposition [Proposition 26](#prearchcom){reference-type="ref" reference="prearchcom"}):*\ Suppose $h\in \mathbb{R}_d[\underline{x}]$ and $h(x)>0$ on $\mathcal{K}({\sf f})$. Assume to the contrary that $h\notin T({\sf f})$.
Since the preordering $T({\sf f})$ is Archimedean by Proposition [Proposition 26](#prearchcom){reference-type="ref" reference="prearchcom"}, Proposition [Proposition 14](#eidelheit){reference-type="ref" reference="eidelheit"} applies, so there is a $T({\sf f})$-positive linear functional $L$ on $\mathbb{R}_d[\underline{x}]$ such that $L(1)=1$ and $L(h)\leq 0$. By Theorem [Theorem 29](#mpschm){reference-type="ref" reference="mpschm"}, $L$ is a $\mathcal{K}({\sf f})$-moment functional, that is, there is a measure $\mu\in \mathcal{M}_+(\mathcal{K}({\sf f}))$ such that $L(p)=\int_{\mathcal{K}({\sf f})} p\, d\mu$ for $p\in \mathbb{R}_d[\underline{x}]$. But $L(1)=\mu(\mathcal{K}({\sf f}))=1$ and $h>0$ on $\mathcal{K}({\sf f})$ imply that $L(h)>0$. This is a contradiction, since $L(h)\leq 0$. $\Box$ The preordering $T({\sf f})$  was defined as the sum of sets $f_1^{e_1}\cdots f_k^{e_k}\cdot\sum \mathbb{R}_d[\underline{x}]^2$. It is natural to ask whether or not all such sets with mixed products $f_1^{e_1}\cdots f_k^{e_k}$ are really needed. To formulate the corresponding result we put $l_k:=2^{k-1}$ and let $g_1,\dotsc,g_{l_k}$ denote the first $l_k$ polynomials of the following row of mixed products: $$\begin{aligned} f_1,\dotsc,f_k,f_1f_2,f_1f_3,\dotsc,f_1f_k,\dotsc,f_{k-1}f_k,f_1f_2f_3,\dotsc, f_{k-2}f_{k-1}f_k,\dotsc,f_1f_2\cdots f_k.\end{aligned}$$ Let $Q({\sf g})$ denote the quadratic module generated by $g_1,\dotsc,g_{l_k}$, that is, $$\begin{aligned} Q({\sf g}):=\sum\mathbb{R}_d[\underline{x}]^2+ g_1\sum\mathbb{R}_d[\underline{x}]^2+\dots+g_{l_k}\sum\mathbb{R}_d[\underline{x}]^2.\end{aligned}$$ The following result of T. Jacobi and A. Prestel \[JP\] sharpens Theorem [Theorem 28](#schmps){reference-type="ref" reference="schmps"}. **Theorem 31**. *If the set $\mathcal{K}({\sf f})$ is compact and $h\in \mathbb{R}_d[\underline{x}]$ satisfies $h(x)>0$ for all $x\in \mathcal{K}({\sf f})$, then $h\in Q({\sf g})$.* We do not prove Theorem [Theorem 31](#jacobiprestel){reference-type="ref" reference="jacobiprestel"}; for a proof of this result we refer to \[JP\]. If we take Theorem [Theorem 31](#jacobiprestel){reference-type="ref" reference="jacobiprestel"} for granted and combine it with Haviland's theorem 1.12 we obtain the following corollary. **Corollary 32**. *If the set $\mathcal{K}({\sf f})$ is compact and $L$ is a $Q({\sf g})$-positive linear functional on $\mathbb{R}_d[\underline{x}]$, then $L$ is a $\mathcal{K}({\sf f})$-moment functional.* We briefly discuss Theorem [Theorem 31](#jacobiprestel){reference-type="ref" reference="jacobiprestel"}. If $k=1$, then $Q({\sf g})=T({\sf f})$. However, for $k=2$, $$\begin{aligned} Q({\sf g})&=\sum \mathbb{R}_d[\underline{x}]^2+f_1\sum \mathbb{R}_d[\underline{x}]^2+f_2\sum \mathbb{R}_d[\underline{x}]^2,\end{aligned}$$ so $Q({\sf g})$ differs from the preordering $T({\sf f})$ by the summand $f_1f_2\sum \mathbb{R}_d[\underline{x}]^2$. If $k=3$, then $$\begin{aligned} Q({\sf g}) =\sum \mathbb{R}_d[\underline{x}]^2+&f_1\sum \mathbb{R}_d[\underline{x}]^2+ f_2\sum \mathbb{R}_d[\underline{x}]^2+f_3\sum \mathbb{R}_d[\underline{x}]^2+f_1f_2\sum \mathbb{R}_d[\underline{x}]^2 \,,\end{aligned}$$ that is, the sets $g\sum \mathbb{R}_d[\underline{x}]^2$ with $g=f_1f_3,f_2f_3,f_1f_2f_3$ do not enter into the definition of $Q({\sf g})$. For $k=4$, no products of three or four generators appear in the definition of $Q({\sf g})$.
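For small $k$ this bookkeeping is easy to make explicit. The following sketch (an illustration only; the helper name is a hypothetical choice) lists the generators $g_1,\dotsc,g_{l_k}$ of $Q({\sf g})$ for $k=2,3,4$ in the row order stated above:

```python
from itertools import combinations

def jacobi_prestel_generators(k):
    """First l_k = 2^(k-1) entries of the row of mixed products, ordered by the
    number of factors and then lexicographically (a sketch of the bookkeeping only)."""
    row = [c for r in range(1, k + 1) for c in combinations(range(1, k + 1), r)]
    return row[: 2 ** (k - 1)]

for k in (2, 3, 4):
    print(k, ["".join(f"f{i}" for i in c) for c in jacobi_prestel_generators(k)])
# 2 ['f1', 'f2']
# 3 ['f1', 'f2', 'f3', 'f1f2']
# 4 ['f1', 'f2', 'f3', 'f4', 'f1f2', 'f1f3', 'f1f4', 'f2f3']
```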
For large $k$, only a small portion of the mixed products occurs in $Q({\sf g})$, and Theorem [Theorem 31](#jacobiprestel){reference-type="ref" reference="jacobiprestel"} is an essential strengthening of Theorem [Theorem 28](#schmps){reference-type="ref" reference="schmps"}. The next corollary characterizes in terms of moment functionals when a Radon measure on a compact semi-algebraic set has a *bounded* density with respect to another Radon measure. A version for closed sets is stated in Exercise 14.11 below. **Corollary 33**. *Suppose that the semi-algebraic set $\mathcal{K}({\sf f})$ is compact. Let $\mu$ and $\nu$ be finite Radon measures on $\mathcal{K}({\sf f})$ and let $L^\mu$ and $L^\nu$ be the corresponding moment functionals on $\mathbb{R}_d[\underline{x}]$. There exists a function $\varphi \in L^\infty(\mathcal{K}({\sf f}),\mu)$, $\varphi(x)\geq 0$ $\mu$-a.e. on $\mathcal{K}({\sf f})$, such that  $d\nu=\varphi d\mu$  if and only if there is a constant $c> 0$ such that $$\begin{aligned} \label{boundedesti} L^\nu(g)\leq cL^\mu(g)\quad \text{for}~~~ g\in T({\sf f}). \end{aligned}$$* *Proof.* Choosing $c\geq \|\varphi\|_{ L^\infty(\mathcal{K}({\sf f}),\mu)}$, the necessity of ([\[boundedesti\]](#boundedesti){reference-type="ref" reference="boundedesti"}) is easily verified. To prove the converse we assume that ([\[boundedesti\]](#boundedesti){reference-type="ref" reference="boundedesti"}) holds. Then, by ([\[boundedesti\]](#boundedesti){reference-type="ref" reference="boundedesti"}),  $L:=cL^\mu-L^\nu$ is a $T({\sf f})$-positive linear functional on $\mathbb{R}_d[\underline{x}]$ and hence a $\mathcal{K}({\sf f})$-moment functional by Theorem [Theorem 29](#mpschm){reference-type="ref" reference="mpschm"}. Let $\tau$ be a representing measure of $L$, that is, $L=L^\tau$. Then we have $L^\tau+ L^\nu=cL^\mu$. Hence both $\tau+\nu$ and $c\mu$ are representing measures of the $\mathcal{K}({\sf f})$-moment functional $cL^\mu$. Since $\mathcal{K}({\sf f})$ is compact, $c\mu$ is determinate by Proposition [Proposition 21](#detcomacpcase){reference-type="ref" reference="detcomacpcase"}, so that $\tau+\nu=c\mu$. In particular, this implies that $\nu$ is absolutely continuous with respect to $\mu$. Therefore, by the Radon--Nikodym theorem A.3, $d\nu =\varphi d\mu$ for some function $\varphi \in L^1(\mathcal{K}({\sf f}),\mu)$, $\varphi(x)\geq 0$ $\mu$-a.e. on $\mathcal{K}({\sf f})$. Since $\tau+\nu=c\mu$, for each Borel subset $M$ of $\mathcal{K}({\sf f})$ we have $$\begin{aligned} \tau (M)=c\mu(M)-\nu(M)=\int_M (c-\varphi(x))d\mu\geq 0 . \end{aligned}$$ Therefore, $c-\varphi(x)\geq 0$  $\mu$-a.e., so that $\varphi\in L^\infty(\mathcal{K}({\sf f}),\mu)$ and $\|\varphi\|_{ L^\infty(\mathcal{K}({\sf f}),\mu)}\leq c.$ ◻ We close this section by restating Theorems [Theorem 28](#schmps){reference-type="ref" reference="schmps"} and [Theorem 29](#mpschm){reference-type="ref" reference="mpschm"} in the special case of compact real algebraic sets. **Corollary 34**.
*Suppose that $\mathcal{I}$ is an ideal of $\mathbb{R}_d[\underline{x}]$ such that the real algebraic set  $V:=\mathcal{Z}(\mathcal{I})=\{x\in \mathbb{R}^d:f(x)=0~~\text{for}~ f\in \mathcal{I}\}$ is compact.* - *   If $h\in \mathbb{R}_d[\underline{x}]$ satisfies $h(x)>0$ for all $x\in V$, then $h\in \sum \mathbb{R}_d[\underline{x}]^2 +\mathcal{I}$.* - *   If $p\in\mathbb{R}_d[\underline{x}]/ \mathcal{I}$ and $p(x)>0$ for all $x\in V$, then $p\in \sum (\mathbb{R}_d[\underline{x}]/ \mathcal{I})^2$.* - *   If $q\in \mathbb{R}[V]\equiv\mathbb{R}_d[\underline{x}]/ \hat{\mathcal{I}}$ and $q(x)>0$ for all $x\in V$, then $q\in \sum \mathbb{R}[V]^2$.* - *  Each positive linear functional on $\mathbb{R}_d[\underline{x}]$ which annihilates $\mathcal{I}$ is a $V$-moment functional.* *Proof.* Put $f_1=1, f_2=h_1,f_3=-h_1,\dotsc, f_{2m}=h_m,f_{2m+1}=-h_m$, where $h_1,\dotsc,h_m$ is a set of generators of $\mathcal{I}$. Then, by ([\[quqadraticzero\]](#quqadraticzero){reference-type="ref" reference="quqadraticzero"}), the preordering $T({\sf f})$ is  $\sum \mathbb{R}_d[\underline{x}]^2 +\mathcal{I}$  and the semi-algebraic set $\mathcal{K}({\sf f})$ is $V=\mathcal{Z}(\mathcal{I})$. Therefore, Theorem [Theorem 28](#schmps){reference-type="ref" reference="schmps"} yields (i). Since $\mathcal{I}\subseteq \hat{\mathcal{I}}$, (i) implies (ii) and (iii). Clearly, a linear functional on $\mathbb{R}_d[\underline{x}]$ is $T({\sf f})$-positive if it is positive and annihilates $\mathcal{I}$. Thus (iv) follows at once from Theorem [Theorem 29](#mpschm){reference-type="ref" reference="mpschm"}. ◻ *Example 35*. (*Moment problem on unit spheres*)\ Let $S^{d-1}=\{x\in \mathbb{R}^d:x_1^2+\dots+x_d^2=1\}$ be the unit sphere of $\mathbb{R}^d$. Then $S^{d-1}$ is the real algebraic set $\mathcal{Z}(\mathcal{I})$ for the ideal $\mathcal{I}$ generated by $h_1(x)=x_1^2+\dots+x_d^2-1.$ Suppose that $L$ is a linear functional on $\mathbb{R}_d[\underline{x}]$ such that $$\begin{aligned} L(p^2)\geq 0 \quad {\rm and}~~ L((x_1^2+\dots+x_d^2-1)p)=0 \quad {\rm for }~~~ p\in \mathbb{R}_d[\underline{x}]. \end{aligned}$$ Then it follows from Corollary [Corollary 34](#compactalgvar){reference-type="ref" reference="compactalgvar"}(iv) that $L$ is an  $S^{d-1}$-moment functional. Further, if $q\in \mathbb{R}[S^{d-1}]$ is strictly positive on $S^{d-1},$ that is, $q(x)>0$ for $x\in S^{d-1}$, then $q\in \sum\mathbb{R}[S^{d-1}]^2$ by Corollary [Corollary 34](#compactalgvar){reference-type="ref" reference="compactalgvar"}(iii). $\hfill \circ$ # The Archimedean Positivstellensatz for quadratic modules and semirings {#reparchimodiules} The main aim of this section is to derive a representation theorem for Archimedean semirings and Archimedean quadratic modules (Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"}) and its application to the moment problem (Corollary [Corollary 47](#archrepmp){reference-type="ref" reference="archrepmp"}). By means of the so-called dagger cones we show that to prove this general result it suffices to do so in the special cases of Archimedian semirings *or* of Archimedean quadratic modules. In this section we develop an approach based on semirings. At the end of Section [6](#Operator-theoreticappraochtothemomentprblem){reference-type="ref" reference="Operator-theoreticappraochtothemomentprblem"} we give a proof using quadratic modules and Hilbert space operators. Recall that $\sf{A}$ is a *commutative real unital algebra*. 
The *weak topology* on the dual $\sf{A}^*$ is the locally convex topology generated by the family of seminorms $f\to |f(a)|$, where $a\in {\sf{A}}$. Then, for each $a\in {\sf{A}}$, the function $f\mapsto f(a)$ is continuous on ${\sf{A}}^*$ in the weak topology. **Lemma 36**. *Suppose that $C$ is an Archimedean unital cone of $\sf{A}$. Then the set  $\mathcal{K}(C)=\{ \chi\in \hat{A}:\chi(a)\geq 0, a\in C\}$ is compact in the weak topology of $A^*$.* *Proof.* Since $C$ is Archimedean, for any $a\in A$ there exists a number $\lambda_a>0$ such that $\lambda_a-a\in C$ and $\lambda_a+a\in C$. Hence for $\chi\in \mathcal{K}(C)$ we have $\chi(\lambda_a-a)\geq 0$ and $\chi(\lambda_a+a)\geq 0$, so that $\chi(a)\in [-\lambda_a,\lambda_a]$. Thus there is an injection $\Phi$ of $\mathcal{K}(C)$ into the topological product space $$P:=\prod\nolimits_{a\in A} ~[-\lambda_a,\lambda_a]$$ given by $\Phi(\chi)= (\chi(a))_{a\in A}$. From the definitions of the corresponding topologies it follows that $\Phi$ is a homeomorphism of $\mathcal{K}(C)$, equipped with the weak topology, on the subspace $\Phi(\mathcal{K}(C))$ of $P$, equipped with the product topology. We show that the image $\Phi(\mathcal{K}(C))$ is closed in $P$. Indeed, suppose $(\Phi(\chi_i))_{i\in I}$ is a net from $\Phi(\mathcal{K}(C))$ which converges to $\varphi=(\varphi_a)_{a\in A}\in P$. Then, by the definition of the weak topology, $\lim_i \Phi(\chi_i)(a)=\lim_i \chi_i(a)=\varphi_a$ for all $a\in A$. Since for each $i$ the map $a\mapsto \chi_i(a)$ is a character that is nonnegative on $C$, so is $a\mapsto \varphi_a$. Hence there exists $\chi\in \mathcal{K}(C)$ such that $\varphi_a=\chi(a)$ for $a\in A$. Thus, $\varphi =\Phi(\chi)\in \Phi(\mathcal{K}(C))$. The product $P$ is a compact topological space by Tychonoff's theorem. Hence its closed subset $\Phi(\mathcal{K}(C))$ is also compact and so is $\mathcal{K}(C)$, because $\Phi$ is a homeomorphism of $\mathcal{K}(C)$ and $\Phi(\mathcal{K}(C))$. ◻ In our approach to the Archimedean Positivstellensatz we use the following notion. **Definition 37**. For a unital convex cone $C$ in $\sf{A}$ we define $$C^\dagger =\{ a\in {\sf{A}} : ~ a+\epsilon \in C~~ \textup{ for~ all }~~ \epsilon \in {(0,+\infty)} \}. \label{eq:ddagger}$$ Clearly, $C^\dagger$ is again a unital convex cone in $\sf{A}$. Since $1\in C$, we have $C\subseteq C^\dagger$. **Lemma 38**. *For each unital convex cone $C$ in $\sf{A}$, we have $\mathcal{K}(C)=\mathcal{K}(C^\dagger)$ and $(C^\dagger)^\dagger= C^\dagger$.* *Proof.* It is obvious that $\mathcal{K}(C^\dagger)\subseteq \mathcal{K}(C)$, because $C\subseteq C^\dagger$. Conversely, let $\chi\in \mathcal{K}(C)$. If $a\in C^\dagger$, then $a+\varepsilon \in C$ and hence $\chi(a+\varepsilon)\geq 0$ for all $\varepsilon >0$. Letting $\varepsilon \searrow 0$, we get $\chi(a)\geq 0$. Thus $\chi\in \mathcal{K}(C^\dagger)$. Clearly, $C^\dagger \subseteq (C^\dagger)^\dagger$. To verify the converse, let $a\in (C^\dagger)^\dagger$. Then $a+\varepsilon_1 \in C^\dagger$ and $a+\varepsilon_1+\varepsilon_2 \in C$ for $\varepsilon_1>0$, $\varepsilon_2>0$, so $a+\varepsilon\in C$ for all $\varepsilon >0$. Hence $a\in C^\dagger$. ◻ *Example 39*. Let ${\sf{A}}$ be a real algebra of bounded real-valued functions on a set $X$ which contains the constant functions.
Then $$\begin{aligned} C:=\{ f\in {\sf{A}}: f(x)>0~~~ \textup{for~ all} ~~ x\in X\} \end{aligned}$$ is an Archimedean preordering of $\sf{A}$ and $$\begin{aligned} \label{exacdagger} C^\dagger=\{f\in {\sf{A}}: f(x)\geq 0~~~ \textup{for~ all}~~x\in X\}. \end{aligned}$$ We verify formula ([\[exacdagger\]](#exacdagger){reference-type="ref" reference="exacdagger"}). If $f(x)\geq 0$ on $X$, then $f(x)+\varepsilon >0$ on $X$, hence $f+\varepsilon\in C$ for all $\varepsilon >0$, so that $f\in C^\dagger$. Conversely, if $f\in C^\dagger$, then $f+\varepsilon \in C$, hence $f(x)+\varepsilon >0$ on $X$ for all $\varepsilon >0$; letting $\varepsilon \searrow 0$, we get $f(x)\geq 0$ on $X$. This proves ([\[exacdagger\]](#exacdagger){reference-type="ref" reference="exacdagger"}). **Proposition 40**. *If $Q$ is an Archimedean quadratic module of ${\sf{A}}$, then $Q^\dagger$ is an Archimedean preordering of ${\sf{A}}$.* *Proof.* Clearly, $Q^\dagger$ is a unital convex cone of ${\sf{A}}$ that contains all squares. We only have to show that $Q^\dagger$ is closed under multiplication. Let $p,q\in Q$ and $\epsilon \in {(0,+\infty)}$ be given. We prove that $pq + \epsilon \in Q$. Because $Q$ is Archimedean, there exists a $\lambda >0$ such that $\lambda - p \in Q$. We recursively define a sequence $(r_k)_{k\in \mathbb{N}_0}$ of elements of ${\sf{A}}$ by  $r_0 := p/\lambda$ and  $r_{k+1} := 2 r_k - r_k^2$, $k\in \mathbb{N}_0$. Then we have $pq-\lambda qr_0=0$ and $$pq - 2^{-(k+1)}\lambda q r_{k+1}=(pq-2^{-k}\lambda qr_k ) +2^{-(k+1)}\lambda qr_k^2.$$ Therefore, since $q\in Q$ and $Q$ is a quadratic module, it follows by induction that $$\begin{aligned} \label{qrk} (pq - 2^{-k}\lambda q r_k) \in Q\, \quad {\rm for}~~k\in \mathbb{N}_0. \end{aligned}$$ Adding $2^{-(k+1)}\lambda (q+r_k)^2 \in Q$ we obtain  $pq + 2^{-(k+1)}\lambda (q^2 + r_k^2) \in Q$  for $k\in \mathbb{N}_0$. For sufficiently large $k\in \mathbb{N}_0$ we have  $\epsilon - 2^{-(k+1)}\lambda (q^2 + r_k^2) \in Q$ because $Q$ is Archimedean. (Here the required bound can be taken uniformly in $k$: since $1-r_{k+1}=(1-r_k)^2$, an induction based on Lemma [Lemma 8](#boundedele1){reference-type="ref" reference="boundedele1"} gives $r_k\in Q$ and $2-r_k\in Q$, hence $4-r_k^2\in Q$, for all $k\in \mathbb{N}_0$.) Adding  $pq + 2^{-(k+1)}\lambda (q^2 + (r_k)^2) \in Q$ by ([\[qrk\]](#qrk){reference-type="ref" reference="qrk"}) yields $(pq+\epsilon) \in Q$. Now let $r,s \in Q^\dagger$ and $\epsilon \in {(0,+\infty)}$. As $Q$ is Archimedean, there exists $\lambda >0$ such that $\lambda - (r+s) \in Q$.  Set $\delta := \sqrt{\lambda^2 + \epsilon}\, - \lambda$. Since $r,s\in Q^\dagger$, we have $r+\delta , s+\delta \in Q$ and $((r+\delta)(s+\delta)+ \delta\lambda)\in Q$, as shown in the preceding paragraph. Therefore, since $\delta^2 + 2\lambda \delta = \epsilon$, we obtain $$rs + \epsilon = \big((r+\delta)(s+\delta ) + \delta \lambda \big) + \delta\big(\lambda - (r+s) \big) \in Q .$$ Hence $rs\in Q^\dagger$. ◻ **Proposition 41**. *Suppose that $S$ is an Archimedean semiring of $\sf{A}$ and $C$ is an $S$-module. Then $C^\dagger$ is an Archimedean preordering of ${\sf{A}}$ and an $S^\dagger$-module. In particular, $S^\dagger$ is an Archimedean preordering.* *Proof.* Let $a \in S^\dagger$ and $c\in C^\dagger$. Then, by definition, $a + \delta \in S$ and $c + \delta \in C$ for all $\delta >0.$ Since $S$ is Archimedean, there exists a number $\lambda >0$ such that $\lambda - a \in S\subseteq C$ and $\lambda - c \in S\subseteq C$. Given $\epsilon \in {(0,+\infty)}$, we set $\delta :=-\lambda + \sqrt{\lambda^2+\epsilon}$. Then $\delta > 0$ and $\delta^2 + 2\delta \lambda = \epsilon$, so we obtain $$ac + \epsilon = (a + \delta)(c + \delta ) + \delta(\lambda-a) + \delta(\lambda -c) \in C .$$ Therefore, $ac \in C^\dagger$.
In particular, in the special case $C = S$ this shows that $S^\dagger$ is also a semiring. In the general case, it proves that $C^\dagger$ is an $S^\dagger$-module. Let $a\in {\sf{A}}$. The crucial step is to prove that $a^2 \in S^\dagger$. To prove this, let $\varepsilon>0$. Since  the polynomial $x^2+\varepsilon$ is positive for all $x\in [-1,1]$, by Bernstein's theorem (Proposition 3.4) there exist numbers $m\in \mathbb{N}$ and $a_{kl}\geq 0$ for $k,l=0,\dotsc,m$ such that $$\begin{aligned} \label{applbernstein} x^2+\varepsilon =\sum_{k,l=0}^m a_{kl} (1-x)^k(1+x)^l. \end{aligned}$$ Since the semiring $S$ is Archimedean, there exists a $\lambda >0$ such that $( \lambda + a)\in S$ and $(\lambda - a)\in S$. Then $(1 + a / \lambda)\in S$ and $(1 - a/\lambda)\in S$ and hence $(1 + a / \lambda)^n\in S$ and $(1 - a/\lambda)^n\in S$ for all $n\in \mathbb{N}_0$, because $S$ is a semiring. As usual, we set $(1 \pm a/\lambda)^0=1$. Therefore, using ([\[applbernstein\]](#applbernstein){reference-type="ref" reference="applbernstein"}) and the fact that $S$ is closed under multiplication, we find $$(a/\lambda)^2 + \varepsilon = \sum_{k,l = 0}^m a_{kl} (1-a/\lambda)^k (1+a/\lambda)^l \in S.$$ Hence $(a^2+\lambda^2 \varepsilon)\in S$. Since $\lambda$ depends only on $a$ and $\varepsilon >0$ was arbitrary, this implies that $a^2 \in S^\dagger$. Thus, $S^\dagger$ is a semiring which contains all squares, that is, $S^\dagger$ is a preordering. Since $S\subseteq C$ and hence $S^\dagger\subseteq C^\dagger$, $C^\dagger$ contains also all squares, so $C^\dagger$ is a quadratic module. Moreover, from $S\subseteq S^\dagger$ and $S \subseteq C \subseteq C^\dagger$ it follows that $C^\dagger$ and $S^\dagger$ are Archimedean because $S$ is Archimedean by assumption. Since $C^\dagger$ is an Archimedean quadratic module as we have proved, $(C^\dagger)^\dagger$ is an Archimedean preordering by Proposition [Proposition 40](#proposition:qmddagger){reference-type="ref" reference="proposition:qmddagger"}. By Lemma [Lemma 38](#daggersimple){reference-type="ref" reference="daggersimple"}, $(C^\dagger)^\dagger = C^\dagger$. ◻ *Remark 42*. For  $\varepsilon =\frac{1}{k-1}$, $k\in \mathbb{N}$, $k\geq 2$,  there is the following explicit form of the identity ([\[applbernstein\]](#applbernstein){reference-type="ref" reference="applbernstein"}): $$x^2 + \frac{1}{k-1}= \frac{1}{2^k k (k-1)} \sum_{\ell = 0}^{k} \binom{k}{\ell} (k-2\ell)^2 (1+x)^{k-\ell}(1-x)^{\ell}.$$ The following important result is the *Archimedean Positivstellensatz for quadratic modules and semirings*. **Theorem 43**. *Suppose that $C$ is an $S$-module of an Archimedean semiring $S$ or $C$ is an Archimedean quadratic module of the commutative unital real algebra ${\sf{A}}$. For any $a\in {\sf{A}}$, the following are equivalent:* 1. *$\chi(a)>0$ for all $\chi\in\mathcal{K}(C)$.* 2. *There exists $\epsilon \in {(0,+\infty)}$ such that $a \in \epsilon + C$.* The following simple fact is crucial for our proofs of Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"} given below. **Lemma 44**. *In the notation of Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"}, each of the conditions $(i)_C$ and $(ii)_C$ holds for $C$ if and only if it does for $C^\dagger$.* *Proof.* Since $\mathcal{K}(C)=\mathcal{K}(C^\dagger)$ by Lemma [Lemma 38](#daggersimple){reference-type="ref" reference="daggersimple"}, this is obvious for $(i)_C$. For $(ii)_C$, since $C\subseteq C^\dagger$, it suffices to verify that $(ii)_{C^\dagger}$ implies $(ii)_C$.
Indeed, if $a=2 \epsilon + c^\dagger$ with $\epsilon >0$ and $c^\dagger \in C^\dagger$, then by the definition of $C^\dagger$ we have $c:=c^\dagger+\epsilon\in C$, so that $a=\epsilon +c\in C$. Thus, $(ii)_C$ is equivalent to $(ii)_{C^\dagger}$. ◻ Before proving the theorem, we discuss this result with a couple of remarks. *Remark 45*. 1.) First we emphasize that in strong contrast to Theorem [Theorem 28](#schmps){reference-type="ref" reference="schmps"} the above Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"} does not require that $\sf{A}$ or $C$ or $S$ is finitely generated. 2.) Using the fact that the preordering $T({\sf f})$ is Archimedean (by Proposition [Proposition 26](#prearchcom){reference-type="ref" reference="prearchcom"}) it is clear that Theorem [Theorem 28](#schmps){reference-type="ref" reference="schmps"} follows directly from Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"}. In Section [3](#momentproblemstrictpos){reference-type="ref" reference="momentproblemstrictpos"} we have given an "elementary" proof of Theorem [Theorem 28](#schmps){reference-type="ref" reference="schmps"} which is based on Proposition [Proposition 27](#continuityLprop){reference-type="ref" reference="continuityLprop"}(i) and does not depend on Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"}. 3.) The proof of implication $(ii)_C\to (i)_C$ is very easy: Indeed, if $a=\epsilon +c$ with $c\in C$, then $\chi(a)=\epsilon\chi(1)+\chi(c)= \epsilon+\chi(c)\geq \epsilon >0$ for all $\chi\in \mathcal{K}(C)$. 4.) Since $1\in C$, $(ii)_C$ implies that $a\in C$. The stronger statement $a\in \epsilon +C$ is given in order to get an equivalence of conditions $(i)_C$ and $(ii)_C$. The main assertion of Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"} states that *the positivity (!) of the values $\chi(a)$ for all $C$-positive characters on ${\sf{A}}$ implies that $a$ belongs to $C$*. 5.) Recall that $C^\dagger$ is an Archimedean preordering by Propositions [Proposition 40](#proposition:qmddagger){reference-type="ref" reference="proposition:qmddagger"} and [Proposition 41](#archpre){reference-type="ref" reference="archpre"}. Therefore, by Lemma [Lemma 44](#ccdagger){reference-type="ref" reference="ccdagger"}, to prove Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"} it suffices to do so in the case when $C$ is an Archimedean preordering of ${\sf{A}}$. In particular, it is enough to show Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"} for Archimedean semirings or for Archimedean quadratic modules. In this section we prove Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"} for Archimedean semirings, while in Section [6](#Operator-theoreticappraochtothemomentprblem){reference-type="ref" reference="Operator-theoreticappraochtothemomentprblem"} we give an approach for Archimedean quadratic modules. *Proof of Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"} for Archimedean semirings:*\ The trivial implication $(ii)_C\to (i)_C$ was already noted in the preceding remark 3.). We suppose that $C$ is an Archimedean semiring of ${\sf{A}}$ and prove the main implication $(i)_C\to (ii)_C$. To this end, let $c\in {\sf{A}}$ be such that $c\notin C$. Then, by Proposition [Proposition 14](#eidelheit){reference-type="ref" reference="eidelheit"}, there exists an extremal (!)
functional $\varphi$ of $C^\wedge$ such that $\varphi(1)=1$ and $\varphi(c)\leq 0$. We prove that $\varphi\in \hat{{\sf{A}}}$, that is, $$\begin{aligned} \label{characterprop} \varphi(ab)=\varphi(a)\varphi(b)\quad \textup{for}~~ a,b\in {\sf{A}}.\end{aligned}$$ Let $a\in {\sf{A}}$. Since $C$ is Archimedean, there exists $\lambda>0$ such that $\lambda+a\in C$, so that $a=(\lambda + a)-\lambda\in C-C$. Thus, ${\sf{A}}=C-C$. Hence it suffices to verify ([\[characterprop\]](#characterprop){reference-type="ref" reference="characterprop"}) for $a\in C$ and similarly for $b\in C.$ Then $\varphi(a)\geq 0$, since $\varphi$ is $C$-positive. Case 1:  $\varphi (a)=0$.\ Let $b\in C$ and choose $\lambda>0$ such that $\lambda -b\in C$. Then $(\lambda-b)a\in C$ and $ab\in C$ (because $C$ is a semiring!), so that $\varphi((\lambda-b)a)= \lambda \varphi(a)-\varphi(ab)=-\varphi(ab)\geq 0$ and $\varphi (ab)\geq 0$. Hence $\varphi(ab)=0$, so that ([\[characterprop\]](#characterprop){reference-type="ref" reference="characterprop"}) holds. Case 2:  $\varphi(a)>0$.\ We choose $\lambda>0$ such that $(\lambda {-}a )\in C$ and $\varphi(\lambda{-}a)>0$. Because $C$ is a semiring, the functionals $\varphi_1(\cdot):=\varphi(a)^{-1}\varphi( a\cdot)$ and $\varphi_2(\cdot):=\varphi(\lambda -a)^{-1}\varphi((\lambda-a)\cdot)$ belong to the dual cone $C^\wedge$. They satisfy $$\begin{aligned} \varphi= \lambda^{-1}\varphi(a)\, \varphi_1 + \lambda^{-1}\varphi(\lambda -a)\, \varphi_2 ,\end{aligned}$$ so $\varphi$ is a convex combination of two functionals from $C^\wedge$. Since $\varphi$ is extremal, it follows that $\varphi_1=\varphi$ which gives ([\[characterprop\]](#characterprop){reference-type="ref" reference="characterprop"}). Summarizing both cases, we have shown that $\varphi\in \hat{{\sf{A}}}$. Recall that $\varphi(c)\leq 0.$ Now it is easy to prove that $(i)_C$ implies $(ii)_C$. Let $a\in {\sf{A}}$ be as in $(i)_C$. Then, since the function $\varphi\to \varphi (a)$ is continuous on the compact set $\mathcal{K}(C)$ in the weak topology (by Lemma [Lemma 36](#kccompact){reference-type="ref" reference="kccompact"}), there exists $\epsilon>0$ such that $c:=a-\epsilon$ also satisfies $\varphi(c)>0$ for all $\varphi\in \mathcal{K}(C)$. Therefore, by the preceding proof, $c\notin C$ cannot hold, so that $c\in C$. Hence $a=\epsilon + c\in \epsilon +C.$ $\hfill \Box$ **Corollary 46**. *Under the assumptions of Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"}, we have $$\begin{aligned} C^\dagger =\{a\in {\sf{A}}: \chi(a)\geq 0 ~ \textup{ for all } ~\chi\in \mathcal{K}(C)\, \}. \end{aligned}$$* *Proof.* If $\chi(a)\geq 0$ for $\chi\in \mathcal{K}(C)$, then for $\epsilon>0$ we have  $\chi(a+\epsilon )=\chi(a)+\epsilon >0$. Therefore, $a+\epsilon\in C$ by Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"}, so that $a\in C^\dagger$. Conversely, if $a\in C^\dagger$ and $\chi\in \mathcal{K}(C)$, then $a+\epsilon \in C$. Hence $\chi(a)+\epsilon=\chi(a+\epsilon ) \geq 0$ for all $\epsilon>0$. Letting $\epsilon\searrow 0$ yields $\chi(a)\geq 0$. ◻ The following is the main application of Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"} to the moment problem. **Corollary 47**. *Retain the assumptions of Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"}. Suppose that $L$ is a linear functional on ${\sf{A}}$ such that $L(c)\geq 0$ for all $c\in C$.
Then there exists a Radon measure $\mu$ on the compact topological space $\mathcal{K}(C)$ such that $$\begin{aligned} L(a)=\int_{\mathcal{K}(C)} \chi(a)~ d\mu(\chi)~~~ \textup{for}~~ a\in {\sf{A}}. \end{aligned}$$* *Proof.* Let $a\in {\sf{A}}$ be such that $\chi(a)\geq 0$ for $\chi\in \mathcal{K}(C)$. Then, for each $\epsilon >0$, $a+\epsilon$ satisfies $(i)_C$, so $a+\epsilon \in C$ by Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"}. Hence $L(a+\epsilon )=L(a)+\epsilon L(1)\geq 0.$ Letting $\epsilon\searrow 0$, we get $L(a)\geq 0$. Now the assertion follows from Proposition 1.9. ◻ # The Archimedean representation theorem for polynomial algebras {#archimedeanpolynomials} In this section we first restate Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"} and Corollary [Corollary 47](#archrepmp){reference-type="ref" reference="archrepmp"} in the special case when ${\sf{A}}$ is the polynomial algebra $\mathbb{R}_d[\underline{x}]$. We begin with the case of Archimedean quadratic modules. Assertion (i) of the following theorem is also called the *Archimedean Positivstellensatz*. **Theorem 48**. *Let ${\sf f}=\{f_1,\dotsc,f_k\}$ be a finite subset of $\mathbb{R}_d[\underline{x}]$. Suppose that the quadratic module $Q({\sf f})$ defined by ([\[quadraticqf\]](#quadraticqf){reference-type="ref" reference="quadraticqf"}) is Archimedean.* - *  If $h\in \mathbb{R}_d[\underline{x}]$ satisfies $h(x)> 0$ for all $x\in \mathcal{K}({\sf f})$, then $h\in Q({\sf f}).$* - *  Any $Q({\sf f})$-positive linear functional $L$ on  $\mathbb{R}_d[\underline{x}]$ is a $\mathcal{K}({\sf f})$-moment functional, that is, there exists a measure $\mu\in M_+(\mathbb{R}^d)$ supported on the compact set $\mathcal{K}({\sf f})$ such that  $L(f)=\int f(x)\, d\mu(x)$  for  $f\in \mathbb{R}_d[\underline{x}]$.* *Proof.* Set ${\sf{A}}=\mathbb{R}_d[\underline{x}]$ and $C=Q({\sf f})$. As noted in Example [Example 16](#ckcpoly){reference-type="ref" reference="ckcpoly"}, characters $\chi$ of ${\sf{A}}$ correspond to points $\chi_t\cong t$ of $\mathbb{R}^d$ and we have $\mathcal{K}(Q({\sf f}))=\mathcal{K}({\sf f})$ under this identification. Hence assertions (i) and (ii) follow at once from Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"} and Corollary [Corollary 47](#archrepmp){reference-type="ref" reference="archrepmp"}, respectively. ◻ Next we turn to modules for semirings. *Example 49*. Let ${\sf f}=\{f_1,\dotsc,f_k\}$ and ${\sf g}=\{g_0=1,g_1,\dotsc,g_r\}$ be finite subsets of $\mathbb{R}_d[\underline{x}]$, where $k\in \mathbb{N}, r\in \mathbb{N}_0$. Then $$\begin{aligned} \label{cfg} C({\sf f},{\sf g}):= g_0 S({\sf f})+ g_1 S({\sf f})+\cdots+ g_r S({\sf f}) \end{aligned}$$ is an  $S({\sf f})$-module for the semiring  $S({\sf f})$. Clearly, $\mathcal{K}(C({\sf f},{\sf g}))= \mathcal{K}({\sf f})\cap \mathcal{K}({\sf g})$. Note that in the special case $r=0$  the $S({\sf f})$-module $C({\sf f}, {\sf g})$ is just the semiring $S({\sf f})$ itself and $\mathcal{K}(C({\sf f},{\sf g}))= \mathcal{K}({\sf f}).$ **Theorem 50**. *Let ${\sf f}=\{f_1,\dotsc,f_k\}$ and ${\sf g}=\{g_0=1,g_1,\dotsc,g_r\}$ be subsets of $\mathbb{R}_d[\underline{x}]$, where $k\in \mathbb{N}, r\in \mathbb{N}_0$. Suppose that the semiring $S({\sf f})$ defined by ([\[semiringf\]](#semiringf){reference-type="ref" reference="semiringf"}) is Archimedean.
Let $C({\sf f},{\sf g})$ denote the $S({\sf f})$-module defined by ([\[cfg\]](#cfg){reference-type="ref" reference="cfg"}).* - *  If $h\in \mathbb{R}_d[\underline{x}]$ satisfies $h(x)> 0$ for all $x\in \mathcal{K}({\sf f})\cap\mathcal{K}({\sf g})$, then $h\in C({\sf f}, {\sf g}).$* - *  Suppose $L$ is a linear functional on  $\mathbb{R}_d[\underline{x}]$ such that $L(f)\geq 0$ for all $f\in C({\sf f}, {\sf g})$. Then $L$ is a $\mathcal{K}({\sf f})\cap \mathcal{K}( {\sf g})$--moment functional, that is, there is a measure $\mu\in M_+(\mathbb{R}^d)$ supported on the compact semi-algebraic set $\mathcal{K}({\sf f})\cap \mathcal{K}({\sf g})$ such that  $L(f)=\int f(x)\, d\mu(x)$  for all  $f\in \mathbb{R}_d[\underline{x}]$.* *Proof.* Combine Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"} and Corollary [Corollary 47](#archrepmp){reference-type="ref" reference="archrepmp"} with Example [Example 49](#exacfg){reference-type="ref" reference="exacfg"}. ◻ If $r=0$, then the $S({\sf f})$-module $C({\sf f}, {\sf g})$ coincides with the semiring $S({\sf f})$ and we have $\mathcal{K}(C({\sf f},{\sf g}))= \mathcal{K}({\sf f}).$ Then Theorem [Theorem 50](#archmedps){reference-type="ref" reference="archmedps"}(i) is the Archimedean Positivstellensatz for semirings in the special case of the polynomial algebra $\mathbb{R}_d[\underline{x}]$. The next theorem is an application of Theorem [Theorem 50](#archmedps){reference-type="ref" reference="archmedps"}. It sharpens Theorem [Theorem 28](#schmps){reference-type="ref" reference="schmps"} by representing positive polynomials on a compact semi-algebraic set by a certain subset of the corresponding preordering. **Theorem 51**. *Suppose ${\sf f}=\{f_1,\dotsc,f_r\}$, $r\in \mathbb{N}$, is a subset of $\mathbb{R}_d[\underline{x}]$ such that the semialgebraic set $\mathcal{K}({\sf f})$ is compact. Then there exist polynomials $p_1,\dotsc,p_s\in \mathbb{R}_d[\underline{x}]$, $s\in \mathbb{N},$ such that the semiring $S$ of $\mathbb{R}_d[\underline{x}]$ generated by $f_1,\dotsc,f_r, p_1^2, \dots,p_s^2$ is Archimedean.* *If $h\in \mathbb{R}_d[\underline{x}]$ satisfies $h(x)>0$ for all $x\in \mathcal{K}({\sf f})$, then $h$ is a finite sum of polynomials $$\begin{aligned} \label{specialform} \alpha\, f_1^{e_1}\cdots f_r^{e_r}~ f_1^{2n_1}\cdots f_r^{2n_r}\, p_1^{2k_1}\cdots p_s^{2k_s}, \end{aligned}$$ where $\alpha\geq 0$, $e_1,\dotsc,e_r\in \{0,1\}$, $n_1,\dotsc,n_r, k_1,\dotsc,k_s\in \mathbb{N}_0$.* *Further, each linear functional on $\mathbb{R}_d[\underline{x}]$ that is nonnegative on all polynomials [\[specialform\]](#specialform){reference-type="eqref" reference="specialform"} (with $\alpha=1$) is a $\mathcal{K}({\sf f})$-moment functional.* *Proof.* Since the set $\mathcal{K}({\sf f})$ is compact, there are numbers $\alpha_j>0, \beta_j>0$ such that $$\label{poshj} \alpha_j+x_j>0~~\textup{and}~~ \beta_j-x_j>0~~ \textup{for}~~ x\in \mathcal{K}(f_1,\dotsc,f_r),~j=1,\dotsc,d.$$ Therefore, by Theorem [Theorem 28](#schmps){reference-type="ref" reference="schmps"}, the polynomials $\alpha_j+x_j$ and $\beta_j-x_j$ are in the preordering $T(f_1,\dotsc,f_r)$. By the definition ([\[preorderingtf\]](#preorderingtf){reference-type="ref" reference="preorderingtf"}) of $T(f_1,\dotsc,f_r)$, this means that each polynomial $\alpha_j+ x_j$, $\beta_j-x_j$ is a finite sum of polynomials of the form $f_1^{e_1}\cdots f_r^{e_r}p^2$ with $p\in \mathbb{R}_d[\underline{x}]$ and $e_1,\dotsc,e_r\in \{0,1\}$.
Let $S$ denote the semiring generated by $f_1,\dotsc,f_r$ and all squares $p^2$ occurring in these representations of the polynomials $\alpha_j+ x_j,\ \beta_j-x_j$, where $j=1,\dotsc,d$. Then, by construction, $x_1,\dotsc,x_d$ belong to $\mathbb{R}_d[\underline{x}]_b(S)$, so $S$ is Archimedean by Lemma [Lemma 9](#boundedele2){reference-type="ref" reference="boundedele2"}. Since $f_1,\dotsc,f_r\in S$, $\mathcal{K}(S)$ is the set of point evaluations at points of $\mathcal{K}(f_1,\dotsc,f_r)$. By its construction, the semiring $S$ defined above is generated by the polynomials $f_1,\dotsc,f_r$, $p_1^2,\dotsc,p_s^2$. The Archimedean Positivstellensatz for semirings (Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"} or Theorem [Theorem 50](#archmedps){reference-type="ref" reference="archmedps"}) yields $h\in S$. This means that $h$ is a finite sum of terms [\[specialform\]](#specialform){reference-type="eqref" reference="specialform"}. By Haviland's theorem (Theorem 1.12) this implies the last assertion. ◻ In the above proof the polynomials $x_1,\dotsc,x_d$ can be replaced by any finite set of algebra generators of $\mathbb{R}_d[\underline{x}].$ Note that ([\[poshj\]](#poshj){reference-type="ref" reference="poshj"}) means that the set $\mathcal{K}({\sf f})$ is contained in the $d$-dimensional rectangle $[-\alpha_1,\beta_1]\times \dots \times [-\alpha_d,\beta_d]$. We illustrate the preceding result with an example. *Example 52*. Let $S$ denote the semiring of $\mathbb{R}_d[\underline{x}]$ generated by the polynomials $$\begin{aligned} f(x) := 1-x_1^2-\cdots-x_d^2,~~ g_{j,\pm}(x) := (1\pm x_j)^2,\; j=1,\dotsc,d. \end{aligned}$$ Obviously, $\mathcal{K}(S)$ is the closed unit ball $$\mathcal{K}(f)=\{ x\in \mathbb{R}^d:~ x_1^2+\cdots+x_d^2\leq 1\}.$$ Then, since $$d+1\pm 2x_k=(1-x_1^2-\cdots-x_d^2) +(1\pm x_k)^2+\frac{1}{2} \sum_{i=1, i\neq k}^d \big((1+x_i)^2+(1-x_i)^2\big)\in S,$$ for $k=1,\dotsc,d$,  Lemma [Lemma 9](#boundedele2){reference-type="ref" reference="boundedele2"} implies that $S$ is Archimedean. Therefore, by Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"} (or Theorem [Theorem 50](#archmedps){reference-type="ref" reference="archmedps"}), each polynomial $h\in \mathbb{R}_d[\underline{x}]$ that is positive at all points of the closed unit ball $\mathcal{K}(f)$ belongs to $S$. This means that $h$ is of the form $$\begin{gathered} h(x) =\sum_{n,k_i,\ell_i=0}^m \alpha_{n,k_1,\ell_1,\dotsc,k_d,\ell_d}f^{2n}(1-x_1)^{2k_1}(1+x_1)^{2\ell_1}\dotsm (1-x_d)^{2k_d}(1+x_d)^{2\ell_d}\\ + f\sum_{n,k_i,\ell_i=0}^m \beta_{n,k_1,\ell_1,\dotsc,k_d,\ell_d} f^{2n}(1-x_1)^{2k_1}(1+x_1)^{2\ell_1}\dotsm (1-x_d)^{2k_d}(1+x_d)^{2\ell_d}, \end{gathered}$$ where  $m\in \mathbb{N}_0$ and $\alpha_{n,k_1,\ell_1,\dotsc,k_d,\ell_d}\geq 0,~ \beta_{n,k_1,\ell_1,\dotsc,k_d,\ell_d}\geq 0$. This formula is a distinguished weighted sum of squares representation of the positive polynomial $h$. The Archimedean Positivstellensatz for quadratic modules (Theorem [Theorem 48](#archmedpsq){reference-type="ref" reference="archmedpsq"}) gives in this case the weaker assertion  $h(x)=\sigma_1+ f\sigma_2$, with $\sigma_1,\sigma_2\in \sum \mathbb{R}_d[\underline{x}]^2.$ # The operator-theoretic approach to the moment problem {#Operator-theoreticappraochtothemomentprblem} The spectral theory of self-adjoint operators in Hilbert space is well suited to the moment problem and provides powerful techniques for the study of this problem.
The technical tool that relates the multidimensional moment problem to Hilbert space operator theory is the *Gelfand--Naimark--Segal construction*, briefly the *GNS-construction*. We develop this construction first for a general $*$-algebra (see \[Sm4, Section 8.6\] or \[Sm20, Section 4.4\]) and then we specialize to the polynomial algebra. Suppose that ${\sf{A}}$ is a unital (real or complex) $*$-algebra. Let $\mathbb{K}=\mathbb{R}$ or $\mathbb{K}=\mathbb{C}$. **Definition 53**. Let $(\mathcal{D},\langle \cdot, \cdot \rangle)$ be a unitary space. A *$*$-representation* of ${\sf{A}}$ on $(\mathcal{D},\langle \cdot, \cdot \rangle)$ is an algebra homomorphism $\pi$ of ${\sf{A}}$ into the algebra $L(\mathcal{D})$ of linear operators mapping $\mathcal{D}$ into itself such that $\pi(1)\varphi=\varphi$ for $\varphi\in \mathcal{D}$ and $$\begin{aligned} \label{defstarrep} \langle \pi(a)\varphi,\psi \rangle =\langle \varphi ,\pi(a^*)\psi\rangle \quad {\rm for}\quad a\in {\sf{A}} ,~~\varphi, \psi \in \mathcal{D}. \end{aligned}$$ The unitary space $\mathcal{D}$ is called the *domain* of  $\pi$ and denoted by $\mathcal{D}(\pi)$. A vector $\varphi\in \mathcal{D}$ is called *algebraically cyclic*, briefly *a-cyclic*, for $\pi$ if  $\mathcal{D}=\pi({\sf{A}})\varphi$. Suppose that $L$ is a positive linear functional on ${\sf{A}}$, that is, $L$ is a linear functional such that $L(a^*a)\geq 0$ for $a\in {\sf{A}}$. Then, by Lemma 2.3, the Cauchy--Schwarz inequality holds: $$\begin{aligned} \label{cauchyschwarzineq} |L(a^*b)|^2 \leq L(a^*a)L(b^*b)\quad {\rm for}\quad a,b \in {\sf{A}}.\end{aligned}$$ **Lemma 54**. *$\mathcal{N}_L:=\{ a\in {\sf{A}}: L(a^*a)=0\}$ is a left ideal of the algebra ${\sf{A}}$.* *Proof.* Let $a,b\in \mathcal{N}_L$ and $x\in {\sf{A}}$. Using ([\[cauchyschwarzineq\]](#cauchyschwarzineq){reference-type="ref" reference="cauchyschwarzineq"}) we obtain $$\begin{aligned} |L((xa)^*xa)|^2 =|L((x^*xa)^*a)|^2\leq L((x^*xa)^*x^*xa)L(a^*a)=0, \end{aligned}$$ so that $xa\in \mathcal{N}_L$. Applying again ([\[cauchyschwarzineq\]](#cauchyschwarzineq){reference-type="ref" reference="cauchyschwarzineq"}) we get $L(a^*b)=L(b^*a)=0$. Hence $$\begin{aligned} L((a+b)^*(a+b)) =L(a^*a) +L(b^*b)+ L(a^*b)+L(b^*a)=0, \end{aligned}$$ so that $a+b\in \mathcal{N}_L$. Obviously, $\lambda a\in \mathcal{N}_L$ for $\lambda\in \mathbb{K}$. ◻ Hence there exist a well-defined scalar product $\langle \cdot,\cdot\rangle_L$ on the quotient vector space $\mathcal{D}_L{=}{\sf{A}}/ \mathcal{N}_L$ and a well-defined algebra homomorphism $\pi_L: {\sf{A}}{\to} L(\mathcal{D}_L)$ given by $$\begin{aligned} \label{defscalarGNS} \langle a+\mathcal{N}_L,b+\mathcal{N}_L\rangle_L =L(b^*a) ~~{\rm and}~~ \pi_L(a)(b+\mathcal{N}_L)=ab+\mathcal{N}_L,~~a,b \in {\sf{A}}.\end{aligned}$$ Let $\mathcal{H}_L$ denote the Hilbert space completion of the pre-Hilbert space $\mathcal{D}_L$. If no confusion can arise we write $\langle \cdot,\cdot\rangle$ for $\langle \cdot,\cdot\rangle_L$ and $a$ for $a+\mathcal{N}_L$. Then we have $\pi_L(a)b=ab$, in particular $\pi_L(1)a=a$, and $$\begin{aligned} \label{Lrep} \langle \pi_L(a)b,c\rangle = L(c^*ab)= L((a^*c)^*b)=\langle b,\pi_L(a^*)c \rangle \quad{\rm for}\quad a,b,c \in {\sf{A}}.\end{aligned}$$ Clearly, $\mathcal{D}_L=\pi_L({\sf{A}})1$. Thus, we have shown that *$\pi_L$ is a $*$-representation of ${\sf{A}}$ on the domain $\mathcal{D}(\pi_L)=\mathcal{D}_L$ and $1$ is an $\rm{a}$-cyclic vector for  $\pi_L$*.
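For a simple concrete illustration of this construction (a standard example, not needed in what follows), take ${\sf{A}}=\mathbb{C}[x]$ with the involution determined by $x^*=x$ and let $L(p)=\int_0^1 p(x)\,dx$. Then $L(p^*p)=\int_0^1 |p(x)|^2\,dx$, so $\mathcal{N}_L=\{0\}$, the domain $\mathcal{D}_L$ is the space of polynomials equipped with the inner product $$\langle p,q\rangle_L=L(q^*p)=\int_0^1 p(x)\overline{q(x)}\,dx,$$ the Hilbert space $\mathcal{H}_L$ is $L^2(0,1)$, and $\pi_L(p)$ acts on $\mathcal{D}_L$ as multiplication by the polynomial $p$.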
Further, we have $$\begin{aligned} \label{gnslfunctional} L(a)=\langle \pi_L(a)1,1 \rangle \quad {\rm for} \quad a\in {\sf{A}}.\end{aligned}$$ **Definition 55**. $\pi_L$ is called the  *GNS-representation* of ${\sf{A}}$ associated with $L$. We show that the GNS-representation is unique up to unitary equivalence. Let $\pi$ be another $*$-representation of ${\sf{A}}$ with a-cyclic vector $\varphi\in \mathcal{D}(\pi)$ on a dense domain $\mathcal{D}(\pi)$ of a Hilbert space $\mathcal{G}$ such that $L(a)=\langle \pi(a)\varphi, \varphi\rangle$ for all $a\in {\sf{A}}$. For $a\in {\sf{A}}$, $$\begin{aligned} \|\pi(a)\varphi\|^2=\langle\pi(a)\varphi,\pi(a)\varphi\rangle=\langle \pi(a^*a)\varphi,\varphi\rangle=L(a^*a)\end{aligned}$$ and similarly $\|\pi_L(a)1\|^2=L(a^*a)$. Hence there is an isometric linear map $U$ given by  $U(\pi(a)\varphi)=\pi_L(a)1, a\in {\sf{A}},$  of $\mathcal{D}(\pi)=\pi({\sf{A}})\varphi$  onto  $\mathcal{D}(\pi_L)=\pi_L({\sf{A}})1$. Since the domains $\mathcal{D}(\pi)$ and $\mathcal{D}(\pi_L)$ are dense in $\mathcal{G}$ and $\mathcal{H}_L$, respectively, $U$ extends by continuity to a unitary operator of $\mathcal{G}$ onto $\mathcal{H}_L$. For $a,b\in{\sf{A}}$ we derive $$\begin{aligned} U\pi(a)U^{-1}(\pi_L(b)1)=U\pi(a)\pi(b)\varphi =U\pi(ab)\varphi=\pi_L(ab)1=\pi_L(a)(\pi_L(b)1),\end{aligned}$$ that is,  $U\pi(a)U^{-1}\varphi=\pi_L(a)\varphi$  for $\varphi \in \mathcal{D}(\pi_L)$ and $a\in {\sf{A}}$. By definition, this means that the $*$-representations $\pi$ and $\pi_L$ are unitarily equivalent. Now we specialize the preceding to the $*$-algebra $\mathbb{C}_d[\underline{x}]\equiv \mathbb{C}[x_1,\dotsc,x_d]$ with involution determined by $(x_j)^*:=x_j$ for $j=1,\dotsc, d$. Suppose that $L$ is a positive linear functional on $\mathbb{C}_d[\underline{x}]$. Since $(x_j)^*=x_j$, it follows from ([\[Lrep\]](#Lrep){reference-type="ref" reference="Lrep"}) that $X_j:=\pi_L(x_j)$ is a symmetric operator on the domain $\mathcal{D}_L$. The operators $X_j$ and $X_k$ commute (because $x_j$ and $x_k$ commute in $\mathbb{C}_d[\underline{x}]$) and $X_j$ leaves the domain $\mathcal{D}_L$ invariant (because $x_j\mathbb{C}_d[\underline{x}]\subseteq \mathbb{C}_d[\underline{x}]$). That is, $(X_1,\dotsc,X_d)$ is a $d$-tuple of *pairwise commuting symmetric operators acting on the dense invariant domain*  $\mathcal{D}_L=\pi_L(\mathbb{C}_d[\underline{x}])1$  of the Hilbert space $\mathcal{H}_L$. Note that this $d$-tuple $(X_1,\dotsc,X_d)$ essentially depends on the given positive linear functional $L$. The next theorem is the crucial result of the operator approach to the multidimensional moment problem and it is the counterpart of Theorem 6.1. . It relates solutions of the moment problem to spectral measures of strongly commuting $d$-tuples $(A_1,\dotsc,A_d)$  of self-adjoint operators which extend our given $d$-tuple $(X_1,\dotsc,X_d)$. **Theorem 56**. *A positive linear functional $L$ on the $*$-algebra $\mathbb{C}_d[\underline{x}]$ is a moment functional if and only if there exists a $d$-tuple $(A_1,\dotsc,A_d)$ of strongly commuting self-adjoint operators $A_1,\dotsc,A_d$ acting on a Hilbert space $\mathcal{K}$ such that  $\mathcal{H}_L$ is a subspace of  $\mathcal{K}$ and $X_1\subseteq A_1,\dotsc, X_d \subseteq A_d$. 
If this is fulfilled and  $E_{(A_1,\dotsc,A_d)}$  denotes the spectral measure of the $d$-tuple  $(A_1,\dotsc,A_d)$, then  $\mu(\cdot)= \langle E_{(A_1,\dotsc,A_d)}(\cdot)1,1 \rangle_\mathcal{K}$  is a solution of the moment problem for $L$.* *Each solution of the moment problem for $L$ is of this form.* First we explain the notions occurring in this theorem (see \[Sm9, Chapter 5\] for the corresponding results and more details). A $d$-tuple $(A_1,\dotsc,A_d)$ of self-adjoint operators $A_1,\dotsc,A_d$ acting on a Hilbert space $\mathcal{K}$ is called *strongly commuting* if for all $k,l=1,\dotsc,d, k\neq l,$ the resolvents $(A_k-{\sf{i}} I)^{-1}$ and $(A_l-{\sf{i}} I)^{-1}$ commute, or equivalently, the spectral measures $E_{A_k}$ and $E_{A_l}$ commute (that is, $E_{A_k}(M)E_{A_l}(N)=E_{A_l}(N)E_{A_k}(M)$ for all Borel subsets $M,N$ of $\mathbb{R}$). (If the self-adjoint operators are bounded, strong commutativity and "usual" commutativity are equivalent.) The spectral theorem states that, for such a $d$-tuple, there exists a unique spectral measure $E_{(A_1,\dotsc,A_d)}$ on the Borel $\sigma$-algebra of $\mathbb{R}^d$ such that $$\begin{aligned} A_j=\int_{\mathbb{R}^d} \lambda_j~ dE_{(A_1,\dotsc,A_d)}(\lambda_1,\dotsc,\lambda_d),~j=1,\dotsc,d.\end{aligned}$$ The spectral measure $E_{(A_1,\dotsc,A_d)}$ is the product of the spectral measures $E_{A_1},\dotsc,E_{A_d}$. Therefore, if $M_1,\dotsc,M_d$ are Borel subsets of $\mathbb{R}$, then $$\begin{aligned} \label{jointspec} E_{(A_1,\dotsc,A_d)}(M_1\times\cdots \times M_d) =E_{A_1}(M_1)\cdots E_{A_d}(M_d).\end{aligned}$$ *Proof of Theorem [Theorem 56](#mp-spec){reference-type="ref" reference="mp-spec"}:* First assume that $L$ is a moment functional and let $\mu$ be a representing measure of $L$. It is well-known and easily checked by the preceding remarks that the multiplication operators $A_k$, $k=1,\dotsc,d$, by the coordinate functions $x_k$ form a $d$-tuple of strongly commuting self-adjoint operators on the Hilbert space $\mathcal{K}:=L^2(\mathbb{R}^d,\mu)$ such that $\mathcal{H}_L \subseteq \mathcal{K}$ and $X_k\subseteq A_k$ for $k=1,\dotsc,d$. The spectral measure $E:= E_{(A_1,\dotsc,A_d)}$ of this $d$-tuple acts by  $E(M)f=\chi_M \cdot f$, $f \in L^2(\mathbb{R}^d,\mu)$, where $\chi_M$ is the characteristic function of the Borel set $M\subseteq \mathbb{R}^d$. This implies that $\langle E(M)1,1 \rangle_\mathcal{K}=\mu(M)$. Thus, $\mu(\cdot)=\langle E( \cdot ) 1,1\rangle_\mathcal{K}$. Conversely, suppose that $(A_1,\dotsc,A_d)$ is such a $d$-tuple. By the multidimensional spectral theorem \[Sm9, Theorem 5.23\] this $d$-tuple has a joint spectral measure $E_{(A_1,\dotsc,A_d)}$. Put $\mu(\cdot):=\langle E_{(A_1,\dotsc,A_d)}(\cdot)1,1 \rangle_\mathcal{K}$. Let $p \in \mathbb{C}_d[\underline{x}]$. Since $X_k \subseteq A_k$, we have $$\begin{aligned} p(X_1,\dotsc,X_d) \subseteq p(A_1,\dotsc,A_d).\end{aligned}$$ Therefore, since the polynomial $1$ belongs to the domain of $p(X_1,\dotsc,X_d)$, it is also in the domain of $p(A_1,\dotsc,A_d)$. Then $$\begin{aligned} \int_{\mathbb{R}^d}& p(\lambda)~ d\mu(\lambda) = \int_{\mathbb{R}^d} p(\lambda)~ d\langle E_{(A_1,\dotsc,A_d)}(\lambda)1,1 \rangle_\mathcal{K}=\langle p(A_1,\dotsc,A_d) 1,1 \rangle_\mathcal{K}\\& = \langle p(X_1,\dotsc,X_d)1,1 \rangle = \langle \pi_L(p(x_1,\dotsc,x_d))1,1 \rangle =L(p(x_1,\dotsc,x_d)),\end{aligned}$$ where the second equality follows from the functional calculus and the last from ([\[gnslfunctional\]](#gnslfunctional){reference-type="ref" reference="gnslfunctional"}). This shows that $\mu$ is a solution of the moment problem for $L$.
$\hfill$ $\Box$ **Proposition 57**. *Suppose $Q$ is an Archimedean quadratic module of a commutative real unital algebra ${\sf{A}}$. Let $L_0$ be a $Q$-positive $\mathbb{R}$-linear functional on ${\sf{A}}$ and let $\pi_L$ be the GNS representation of its extension $L$ to a $\mathbb{C}$-linear functional on the complexification ${\sf{A}}_\mathbb{C}={\sf{A}}+\sf{i}{\sf{A}}$. Then all operators $\pi_L(a)$, $a\in {\sf{A}}_\mathbb{C}$, are bounded.* *Proof.* Since $\sum ({\sf{A}}_\mathbb{C})^2=\sum {\sf{A}}^2$ by Lemma 2.17(ii) and $\sum {\sf{A}}^2\subseteq Q$, $L$ is a positive linear functional on ${\sf{A}}_\mathbb{C}$, so the GNS representation $\pi_L$ is well-defined. It suffices to prove that $\pi_L(a)$ is bounded for $a\in {\sf{A}}$. Since $Q$ is Archimedean, $\lambda -a^2\in Q$ for some $\lambda >0$. Let $x\in {\sf{A}}_\mathbb{C}$. By Lemma 2.17(ii), $x^*x(\lambda- a^2)\in Q$ and hence  $L(x^*xa^2)=L_0(x^*xa^2)\leq \lambda L_0(x^*x)=\lambda L(x^*x)$, since $L_0$ is $Q$-positive. Then $$\begin{aligned} \|\pi_L(a)\pi_L(x)1\|^2&= \langle \pi_L(a)\pi_L(x)1,\pi_L(a)\pi_L(x)1\rangle=\langle\pi_L((ax)^*ax)1,1\rangle\\ &= L((ax)^*ax)=L(x^*xa^2)\leq \lambda L(x^*x)=\lambda \|\pi_L(x)1\|^2, \end{aligned}$$ where we used ([\[defstarrep\]](#defstarrep){reference-type="ref" reference="defstarrep"}) and ([\[gnslfunctional\]](#gnslfunctional){reference-type="ref" reference="gnslfunctional"}). That is, $\pi_L(a)$ is bounded on $\mathcal{D}(\pi_L)$. ◻ We now illustrate the power of the operator approach to moment problems by giving short proofs of Theorems [Theorem 43](#archpos){reference-type="ref" reference="archpos"} and [Theorem 48](#archmedpsq){reference-type="ref" reference="archmedpsq"}. From Remark [Remark 45](#remarkarchpos){reference-type="ref" reference="remarkarchpos"}, 5.), we recall that in order to prove Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"} in the general case it suffices to do this in the special case when $C$ is an Archimedean semiring *or* when $C$ is an Archimedean quadratic module. In Section [4](#reparchimodiules){reference-type="ref" reference="reparchimodiules"} we have given an approach based on semirings. Here we prove it for quadratic modules. *Proof of Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"} for Archimedean quadratic modules:* Suppose that $C$ is an Archimedean quadratic module of $\sf{A}$. As in the proof for semirings, the implication $(ii)_C\to (i)_C$ is trivial and it suffices to prove that $(i)_C$ implies $a\in C$ (otherwise replace $a$ by $a-\varepsilon$ for small $\varepsilon>0$). Assume to the contrary that $a$ satisfies $(i)_C$, but $a\notin C$. Since $C$ is Archimedean, by Proposition [Proposition 14](#eidelheit){reference-type="ref" reference="eidelheit"} there is a $C$-positive $\mathbb{R}$-linear functional $L_0$ on ${\sf{A}}$ such that $L_0(1)=1$ and $L_0(a)\leq 0$. Let $\pi_L$ be the GNS representation of its extension to a $\mathbb{C}$-linear (positive) functional $L$ on the unital commutative complex $*$-algebra ${\sf{A}}_\mathbb{C}$. Let $c\in C$. If $x\in {\sf{A}}_\mathbb{C}$, then $x^*xc\in C$ by Lemma 2.17(ii), so $L_0 (x^*xc)\geq 0$, and $$\begin{aligned} \label{pivarphicpositive} \langle \pi_L(c)\pi_L(x)1,\pi_L(x)1\rangle=L(x^*xc)=L_0 (x^*xc)\geq 0\end{aligned}$$ by ([\[gnslfunctional\]](#gnslfunctional){reference-type="ref" reference="gnslfunctional"}). This shows that the operator $\pi_L(c)$ is nonnegative.
For $b\in {\sf{A}}_\mathbb{C}$, the operator $\pi_L(b)$ is bounded by Proposition [Proposition 57](#archmiboundedop){reference-type="ref" reference="archmiboundedop"}. Let  $\overline{\pi_L(b)}$  denote its continuous extension to the Hilbert space  $\mathcal{H}_L$. These operators form a unital commutative $*$-algebra of bounded operators. Its completion $\mathcal{B}$ is a unital commutative $C^*$-algebra. Let $\chi$ be a character of $\mathcal{B}$. Then  $\tilde{\chi}(\cdot):=\chi(\, \overline{\pi_L(\cdot)}\,)$ is a character of ${\sf{A}}$. If $c\in C$, then  $\pi_L(c)\geq 0$  by ([\[pivarphicpositive\]](#pivarphicpositive){reference-type="ref" reference="pivarphicpositive"}) and so  $\overline{\pi_L(c)}\geq 0$. Hence $\tilde{\chi}$ is $C$-positive, that is, $\tilde{\chi}\in \mathcal{K}(C)$. Therefore, $\tilde{\chi}(a)=\chi(\overline{\pi_L(a)}\,)>0$ by $(i)_C$. Thus, if we realize $\mathcal{B}$ as a $C^*$-algebra of continuous functions on a compact Hausdorff space, the function corresponding to  $\overline{\pi_L(a)}$  is positive, so it has a positive minimum $\delta$. Then  $\overline{\pi_L(a)}\,\geq \delta \cdot I$  and hence $$\begin{aligned} 0 <\delta =\delta L(1)=\langle\delta 1,1\rangle\leq \langle \pi_L (a)1,1\rangle =L(a)=L_0(a)\leq 0,\end{aligned}$$ which is the desired contradiction. $\hfill$ $\Box$ *Proof of Theorem [Theorem 48](#archmedpsq){reference-type="ref" reference="archmedpsq"}(ii):* We extend $L$ to a $\mathbb{C}$-linear functional, denoted again by $L$, on $\mathbb{C}_d[\underline{x}]$ and consider the GNS representation $\pi_L$. By Proposition [Proposition 57](#archmiboundedop){reference-type="ref" reference="archmiboundedop"}, the symmetric operators $\pi_L(x_1),\dotsc, \pi_L(x_d)$ are bounded. Hence their continuous extensions to the whole Hilbert space $\mathcal{H}_L$ are pairwise commuting bounded self-adjoint operators $A_1,\dotsc,A_d$. Therefore, by Theorem [Theorem 56](#mp-spec){reference-type="ref" reference="mp-spec"}, if $E$ denotes the spectral measure of this $d$-tuple $(A_1,\dotsc,A_d)$, then $\mu(\cdot)= \langle E(\cdot)1,1 \rangle_{\mathcal{H}_L}$  is a solution of the moment problem for $L$. Since the operators $A_j$ are bounded, the spectral measure $E$, hence $\mu$, has compact support. (In fact,  ${\rm supp}~ E\subseteq [-\|A_1\|,\|A_1\|] \times \dots \times[-\|A_d\|,\|A_d\|]$.) Hence, since $L$ is $Q({\sf f})$-positive by assumption, Proposition [Proposition 22](#cqfimpliessuppckf){reference-type="ref" reference="cqfimpliessuppckf"} implies that ${\rm supp}\, \mu \subseteq \mathcal{K}({\sf f})$. This shows that $L$ is a $\mathcal{K}({\sf f})$-moment functional. $\hfill$ $\Box$ The preceding proof of Theorem [Theorem 48](#archmedpsq){reference-type="ref" reference="archmedpsq"}(ii) based on the spectral theorem is probably the most elegant approach to the moment problem for Archimedean quadratic modules. Next we derive Theorem [Theorem 48](#archmedpsq){reference-type="ref" reference="archmedpsq"}(i) from Theorem [Theorem 48](#archmedpsq){reference-type="ref" reference="archmedpsq"}(ii). We argue in the same manner as in the second proof of Theorem [Theorem 28](#schmps){reference-type="ref" reference="schmps"} in Section [3](#momentproblemstrictpos){reference-type="ref" reference="momentproblemstrictpos"}. Assume to the contrary that $h\notin Q({\sf f})$.
Since $Q({\sf f})$ is Archimedean, Proposition [Proposition 14](#eidelheit){reference-type="ref" reference="eidelheit"} and Theorem [Theorem 48](#archmedpsq){reference-type="ref" reference="archmedpsq"}(ii) apply to $Q({\sf f})$. By these results, there is a $Q({\sf f})$-positive linear functional $L$ on  $\mathbb{R}_d[\underline{x}]$ satisfying $L(1)=1$ and $L(h)\leq 0$, and this functional is a $\mathcal{K}({\sf f})$-moment functional. Then there is a measure $\mu \in M_+(\mathbb{R}^d)$ supported on $\mathcal{K}({\sf f})$ such that $L(p)=\int p\, d\mu$ for $p\in \mathbb{R}_d[\underline{x}]$. (Note that $\mathcal{K}({\sf f})$ is compact by Corollary [Corollary 12](#archicompact){reference-type="ref" reference="archicompact"}.) Again $h(x)>0$ on $\mathcal{K}({\sf f})$, $L(1)=1$, and $L(h)\leq 0$ lead to a contradiction. $\hfill$ $\Box$ # The moment problem for semi-algebraic sets contained in compact polyhedra {#polyhedron} Let $k\in \mathbb{N}$. Suppose that ${\sf f}=\{f_1,\dotsc,f_k\}$ is a set of linear polynomials of $\mathbb{R}_d[\underline{x}]$. By a linear polynomial we mean a polynomial of degree at most one. The semi-algebraic set $\mathcal{K}({\sf f})$ defined by the linear polynomials $f_1,\dotsc,f_k$ is called a *polyhedron*. Recall that $S({\sf f})$ is the semiring of $\mathbb{R}_d[\underline{x}]$ generated by $f_1,\dotsc,f_k$, that is, $S({\sf f})$ consists of all finite sums of terms $\alpha\, f_1^{n_1}\cdots f_k^{n_k}$, where $\alpha\geq 0$ and $n_1,\dotsc,n_k\in \mathbb{N}_0$. Further, let ${\sf g}=\{g_0=1, g_1,\dotsc,g_r\}$, where $r\in \mathbb{N}_0$, be a finite subset of $\mathbb{R}_d[\underline{x}]$. Recall that $C({\sf f},{\sf g}):= g_0\, S({\sf f})+ g_1 S({\sf f})+\cdots+ g_r S({\sf f})$ denotes the $S({\sf f})$-module considered in Example [Example 49](#exacfg){reference-type="ref" reference="exacfg"}, see ([\[cfg\]](#cfg){reference-type="ref" reference="cfg"}). The following lemma goes back to H. Minkowski. In the optimization literature it is called *Farkas' lemma*. We will use it in the proof of Theorem [Theorem 59](#prestel){reference-type="ref" reference="prestel"} below. **Lemma 58**. *Let $h, f_1,\dotsc,f_k$ be linear polynomials of $\mathbb{R}_d[\underline{x}]$ such that the set $\mathcal{K}({\sf f})$ is not empty. If $h(x)\geq 0$ on $\mathcal{K}({\sf f})$, there exist numbers $\lambda_0\geq 0,\dotsc,\lambda_k\geq 0$ such that $h=\lambda_0+\lambda_1f_1+\dots+\lambda_k f_k.$* *Proof.* Let $E$ be the vector space spanned by the polynomials $1,x_1,\dotsc,x_d$ and $C$ the cone in $E$ generated by $1,f_1,\dotsc,f_k$. It is easily shown that $C$ is closed in $E$. We have to prove that $h\in C$. Assume to the contrary that $h\notin C$. Then, by the separation of convex sets (Theorem A.26(ii)), there exists a $C$-positive linear functional $L$ on $E$ such that $L(h)<0$. In particular, $L(1)\geq 0$, because $1\in C$. Without loss of generality we can assume that $L(1)>0$. Indeed, if $L(1)=0$, we take a point $x_0$ of the non-empty (!) set $\mathcal{K}(\,{\sf f}\,)$ and replace $L$ by $L^\prime=L+\varepsilon l_{x_0}$, where $l_{x_0}$ denotes the point evaluation at $x_0$ on $E$. Then $L^\prime$ is $C$-positive as well and $L^\prime(h)<0$ for small $\varepsilon >0$. Define a point  $x:=L(1)^{-1}(L(x_1),\dotsc,L(x_d))\in \mathbb{R}^d$. Then $L(1)^{-1}L$ is the evaluation $l_x$ at the point $x$ for the polynomials $x_1,\dotsc,x_d$ and for $1$, hence on the whole vector space $E$.
Therefore, $f_j(x)=l_x(f_j)=L(1)^{-1}L(f_j)\geq 0$ for all $j$, so that $x\in \mathcal{K}(\, {\sf f}\, )$, and $h(x)=l_x(h)=L(1)^{-1}L(h)<0$. This contradicts the assumption. ◻ **Theorem 59**. *Let $k\in \mathbb{N}$, $r\in \mathbb{N}_0$. Let  ${\sf f}=\{f_1,\dotsc,f_k\}$ and ${\sf g}=\{g_0=1, g_1,\dotsc,g_r\}$ be subsets of $\mathbb{R}_d[\underline{x}]$  such that the polynomials $f_1,\dotsc,f_k$ are linear.  Suppose that the polyhedron  $\mathcal{K}(\, {\sf f}\, )$  is compact and nonempty.* - *  If $h\in \mathbb{R}_d[\underline{x}]$ satisfies $h(x)> 0$ for all $x\in \mathcal{K}({\sf f})\cap\mathcal{K}({\sf g})$, then $h\in C({\sf f}, {\sf g})$, that is, $h$ is a finite sum of polynomials $$\begin{aligned} \label{hsumof} \alpha g_j~ f_1^{n_1}\cdots f_k^{n_k}, ~~\textup{where}~~ \alpha\geq 0,~ j=0,\dotsc,r;~ n_1,\dotsc,n_k\in \mathbb{N}_0. \end{aligned}$$* - *     A linear functional $L$ on $\mathbb{R}_d[\underline{x}]$ is a $\mathcal{K}( {\sf f} )\cap\mathcal{K}({\sf g})$--moment functional if and only if $$\begin{aligned} \label{solvconsemiring} L( g_j\, f_1^{n_1}\cdots f_k^{n_k})\geq 0\, \quad \text{for~ all}~~~ j=0,\dotsc, r; n_1,\dotsc,n_k\in \mathbb{N}_0. \end{aligned}$$* *Proof.* First we show that the semiring  $S( {\sf f} )$ is Archimedean. Let $j\in \{1,\dotsc,d\}$. Since the set $\mathcal{K}(\, {\sf f}\, )$ is compact, there exists a $\lambda>0$ such that $\lambda\pm x_j> 0$ on $\mathcal{K}(\, {\sf f}\, )$. Hence, since $\mathcal{K}(\, {\sf f}\, )$ is nonempty, Lemma [Lemma 58](#minkow){reference-type="ref" reference="minkow"} implies that $(\lambda\pm x_j)\in S({\sf f})$. Hence $S( {\sf f})$ is Archimedean by Lemma [Lemma 9](#boundedele2){reference-type="ref" reference="boundedele2"}(ii). The only if part in (ii) is obvious. Since $S( {\sf f})$ is Archimedean, Theorem [Theorem 50](#archmedps){reference-type="ref" reference="archmedps"} applies to the $S( {\sf f})$-module $C({\sf f}, {\sf g})$ and gives the other assertions. Note that the requirements ([\[solvconsemiring\]](#solvconsemiring){reference-type="ref" reference="solvconsemiring"}) suffice, since each element of $C({\sf f}, {\sf g})$ is a finite sum of terms ([\[hsumof\]](#hsumof){reference-type="ref" reference="hsumof"}). ◻ We state the special case $r=0$ of a polyhedron $\mathcal{K}(\, {\sf f})$ separately as a corollary. Assertion (i) is called *Handelman's theorem*. **Corollary 60**. *Let $k\in \mathbb{N}$. Suppose that  ${\sf f}=\{f_1,\dotsc,f_k\}$ is a set of linear polynomials of $\mathbb{R}_d[\underline{x}]$ such that the polyhedron  $\mathcal{K}(\, {\sf f}\, )$  is compact and nonempty.* - *  If $h\in \mathbb{R}_d[\underline{x}]$ satisfies $h(x)> 0$ for all $x\in \mathcal{K}({\sf f})$, then $h\in S({\sf f}).$* - *     A linear functional $L$ on $\mathbb{R}_d[\underline{x}]$ is a $\mathcal{K}( {\sf f})$--moment functional if and only if $$\begin{aligned} \label{solvconsemiringcor} L( f_1^{n_1}\cdots f_k^{n_k})\geq 0\, \quad \text{for~ all}~~~ n_1,\dotsc,n_k\in \mathbb{N}_0. \end{aligned}$$* *Proof.* Set $r=0, g_0=1$ in Theorem [Theorem 59](#prestel){reference-type="ref" reference="prestel"} and note that $\mathcal{K}(C({\sf f},{\sf g}))= \mathcal{K}({\sf f}).$ ◻ # Examples and applications {#examplesmp} Throughout this section, ${\sf f}=\{f_1,\dotsc,f_k\}$ is a finite subset of $\mathbb{R}_d[\underline{x}]$ and $L$ denotes a linear functional on $\mathbb{R}_d[\underline{x}].$ If $L$ is a $\mathcal{K}({\sf f})$-moment functional, it is obviously $T({\sf f})$-positive, $Q({\sf f})$-positive, and $S({\sf f})$-positive.
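Indeed, writing $\mu$ for a representing measure of $L$ supported on $\mathcal{K}({\sf f})$, each element $q$ of $T({\sf f})$, $Q({\sf f})$, or $S({\sf f})$ is, by its very form, nonnegative on $\mathcal{K}({\sf f})$, so that $$L(q)=\int_{\mathcal{K}({\sf f})} q(x)\, d\mu(x)\geq 0.$$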
Theorems  [Theorem 29](#mpschm){reference-type="ref" reference="mpschm"}, [Theorem 48](#archmedpsq){reference-type="ref" reference="archmedpsq"}(ii), and [Theorem 59](#prestel){reference-type="ref" reference="prestel"}(ii) deal with the converse implication and are the main solvability criteria for the moment problem in this chapter. First we discuss Theorems [Theorem 29](#mpschm){reference-type="ref" reference="mpschm"} and [Theorem 48](#archmedpsq){reference-type="ref" reference="archmedpsq"}(ii). Theorem [Theorem 29](#mpschm){reference-type="ref" reference="mpschm"} applies to *each* compact semi-algebraic set $\mathcal{K}({\sf f})$ and implies that $L$ is a $\mathcal{K}({\sf f})$-moment functional if and only if it is $T({\sf f})$-positive. For Theorem [Theorem 48](#archmedpsq){reference-type="ref" reference="archmedpsq"}(ii) the compactness of the set $\mathcal{K}({\sf f})$ is not sufficient; it requires that the quadratic module $Q({\sf f})$ is Archimedean. In this case, $L$ is a $\mathcal{K}({\sf f})$-moment functional if and only if it is $Q({\sf f})$-positive. *Example 61*. Let us begin with a single polynomial $f\in \mathbb{R}_d[\underline{x}]$ for which the set $\mathcal{K}(f)=\{x\in \mathbb{R}^d:f(x)\geq 0\}$ is compact. (A simple example is the $d$-ellipsoid given by $f(x)=1-a_1x_1^2-\dots -a_dx_d^2$, where $a_1>0,\dotsc,a_d>0$.) Clearly, $T(f)=Q(f)$. Then, *$L$ is a $\mathcal{K}(f)$-moment functional if and only if it is $T(f)$-positive, or equivalently, if $L$ and $L_f$ are positive functionals on $\mathbb{R}_d[\underline{x}]$.* Now we add further polynomials $f_2,\dotsc,f_k$ and set ${\sf f}=\{f,f_2,\dotsc,f_k\}$. (For instance, one may take coordinate functions $x_l$ as some of the $f_j$.) Since $T(f)$ is Archimedean (by Proposition [Proposition 26](#prearchcom){reference-type="ref" reference="prearchcom"}, because $\mathcal{K}(f)$ is compact), so is the quadratic module $Q({\sf f})$. Therefore, *$L$ is a $\mathcal{K}({\sf f})$-moment functional if and only if it is $Q({\sf f})$-positive, or equivalently, if $L, L_f,L_{f_2}, \dots, L_{f_k}$ are positive functionals on $\mathbb{R}_d[\underline{x}]$.* $\hfill \circ$ *Example 62*. *($d$-dimensional compact interval  $[a_1,b_1]\times\dots \times [a_d,b_d]$)*\ Let $a_j,b_j\in \mathbb{R}$, $a_j< b_j,$ and set $f_{2j-1}:=b_j-x_j$, $f_{2j}:=x_j-a_j,$ for $j=1,\dotsc,d$. Then the semi-algebraic set $\mathcal{K}({\sf f})$ for   ${\sf f}:=\{f_1,\dotsc,f_{2d}\}$ is the $d$-dimensional interval $[a_1,b_1]\times\dots \times [a_d,b_d]$. Put $\lambda_j=|a_j|+|b_j|.$ Then $\lambda_j-x_j=f_{2j-1}+ \lambda_j-b_j$ and $\lambda_j+x_j=f_{2j}+\lambda_j+a_j$ are in $Q({\sf f})$, so each $x_j$ is a bounded element with respect to the quadratic module $Q({\sf f})$. Hence $Q({\sf f})$ is Archimedean by Lemma [Lemma 9](#boundedele2){reference-type="ref" reference="boundedele2"}(ii). Thus, *$L$ is a $\mathcal{K}({\sf f})$-moment functional if and only if it is $Q({\sf f})$-positive, or equivalently, if  $L_{f_1},L_{f_2}, \dots, L_{f_{2d}}$ are positive functionals, that is,* $$\begin{aligned} \label{solvn-diminter} L((b_j{-}x_j)p^2)\geq 0~~ \text{\textit{and}} ~~ L((x_j{-}a_j)p^2)\geq 0~~ \text{\textit{for}} ~~ j=1,\dotsc,d,\, p\in \mathbb{R}_d[\underline{x}]. \end{aligned}$$ Clearly, ([\[solvn-diminter\]](#solvn-diminter){reference-type="ref" reference="solvn-diminter"}) implies that  $L$  itself is positive, since $L=(b_1{-}a_1)^{-1}(L_{f_1}{+}L_{f_2})$. $\hfill \circ$ *Example 63*. *($1$-dimensional interval $[a,b]$)*\ Let $a<b$, $a,b\in \mathbb{R}$ and let $l,n\in \mathbb{N}$ be odd.
We set $f(x):=(b-x)^l(x-a)^n$. Then $\mathcal{K}( f)=[a,b]$  and  $T( f)=\sum \mathbb{R}[x]^2+f\sum \mathbb{R}[x]^2$. Hence, by Theorem [Theorem 29](#mpschm){reference-type="ref" reference="mpschm"}, *a linear functional $L$ on $\mathbb{R}[x]$ is an $[a,b]$-moment functional if and only if $L$ and $L_f$ are positive functionals on $\mathbb{R}[x]$*. This result extends Hausdorff's Theorem 3.13. It should be noted that this solvability criterion holds for arbitrary (!) odd numbers $l$ and $n$, while the equality ${\rm{Pos}}([a,b])=T(f)$ is only true if $l=n=1$, see Exercise 3.4 b. in Chapter 3. $\hfill \circ$ *Example 64*. (*Simplex in $\mathbb{R}^d, d\geq 2$*)\ Let $f_1=x_1,\dotsc,f_d=x_d, f_{d+1}=1- \sum_{i=1}^d x_i, k=d+1$. Clearly, $\mathcal{K}({\sf f})$ is the simplex $$\begin{aligned} K_d=\{ x\in \mathbb{R}^d: x_1\geq 0,\dotsc, x_d\geq 0,\, x_1+\dots+x_d\leq 1\, \}. \end{aligned}$$ Note that $1-x_j=f_{d+1}+\sum_{i\neq j} f_i$ and $1+x_j=1+f_j$. Therefore, $1\pm x_j\in Q({\sf f})$ and $1\pm x_j\in S({\sf f})$. Hence, by Lemma [Lemma 9](#boundedele2){reference-type="ref" reference="boundedele2"}(ii), the quadratic module $Q({\sf f})$ and the semiring $S({\sf f})$ are Archimedean. Therefore, Theorem [Theorem 48](#archmedpsq){reference-type="ref" reference="archmedpsq"} applies to $Q({\sf f})$ and Theorem [Theorem 59](#prestel){reference-type="ref" reference="prestel"} applies to $S({\sf f})$. We restate only the results on the moment problem. By Theorems [Theorem 48](#archmedpsq){reference-type="ref" reference="archmedpsq"}(ii) and [Theorem 59](#prestel){reference-type="ref" reference="prestel"}(ii), *$L$ is a  $K_d$--moment functional if and only if* $$\begin{aligned} L(x_ip^2)\geq 0, ~ i=1,\cdots,d,~~\text{\textit{and}}~~~ L((1- ( x_1+x_2+\dots+x_d))p^2)\geq 0 ~~~\text{\textit{for}}~~p \in \mathbb{R}_d[\underline{x}], \end{aligned}$$ *or equivalently,* $$\begin{aligned} \hspace{0,5cm}L(x_1^{n_1}\dots x_d^{n_d}(1- (x_1+\dots+x_d))^{n_{d+1}}) \geq 0 \quad \text{\textit{for}}~~~n_1,\dotsc,n_{d+1}\in \mathbb{N}_0.\hspace{0,5cm}\circ \end{aligned}$$ *Example 65*. (*Standard simplex $\Delta_d$ in $\mathbb{R}^d$*)\ Let   $f_1=x_1,\dotsc,\, f_d=x_d,\, f_{d+1}=1- \sum_{i=1}^d x_i,\, f_{d+2}=-f_{d+1},\, k=d+2.$ Then the semi-algebraic set $\mathcal{K}({\sf f})$ is the standard simplex $$\begin{aligned} \Delta_d=\{x\in \mathbb{R}^d: x_1\geq 0,\dotsc,x_d\geq 0, x_1+\dots+x_d=1\}. \end{aligned}$$ Let $S_0$ denote the set of polynomials of $\mathbb{R}_d[\underline{x}]$ with nonnegative coefficients and $\mathcal{I}$ the ideal generated by $1-(x_1+\dots +x_d)$. Then $S:=S_0+\mathcal{I}$ is a semiring of $\mathbb{R}_d[\underline{x}]$. Since $1\pm x_j\in S$, $S$ is Archimedean. The characters of $\mathbb{R}_d[\underline{x}]$ are the evaluations at points of $\mathbb{R}^d$. Obviously, $x\in \mathbb{R}^d$ gives an $S$-positive character if and only if $x\in \Delta_d$. Let $f\in \mathbb{R}_d[\underline{x}]$ be such that $f(x)>0$ on $\Delta_d$. Then, $f\in S$ by Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"}, so $$\begin{aligned} \label{polyaformoff} f(x)=g(x)+ h(x)(1-(x_1+\dots + x_d)),\quad {\rm where}~~~ g\in S_0, ~h\in \mathbb{R}_d[\underline{x}].
\end{aligned}$$ From Theorem [Theorem 59](#prestel){reference-type="ref" reference="prestel"}(ii) it follows that *$L$ is a $\Delta_d$-moment functional if and only if* $$\begin{aligned} L(x_1^{n_1}\dots x_d^{n_d})\geq 0,~ L(x_1^{n_1}\dots x_d^{n_d}(1{-}(x_1{+}\dots{+}x_d))^r)=0, ~~ n_1,\dotsc,n_d\in \mathbb{N}_0, r\in \mathbb{N}.\circ \end{aligned}$$ From the preceding example it is only a small step to derive an elegant proof of the following classical *theorem of G. Polya*. **Proposition 66**. *Suppose that $f\in \mathbb{R}_d[\underline{x}]$ is a homogeneous polynomial such that $f(x)>0$ for all  $x\in \mathbb{R}^d\backslash \{0\}$, $x_1\geq 0, \dots,x_d\geq 0$. Then there exists an $n\in \mathbb{N}$ such that all coefficients of the polynomial $(x_1+\dots+x_d)^n f(x)$ are nonnegative.* *Proof.* We use Example [Example 65](#Delta_dsimplex){reference-type="ref" reference="Delta_dsimplex"}. As noted therein, Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"} implies that $f$ is of the form ([\[polyaformoff\]](#polyaformoff){reference-type="ref" reference="polyaformoff"}). We replace in ([\[polyaformoff\]](#polyaformoff){reference-type="ref" reference="polyaformoff"}) each variable $x_j, j=1,\dotsc,d,$ by $x_j(\, \sum_{i=1}^d x_i)^{-1}$. Since $(1-\sum_j x_j (\sum_i x_i)^{-1}) =1- 1=0,$ the second summand in ([\[polyaformoff\]](#polyaformoff){reference-type="ref" reference="polyaformoff"}) vanishes after this substitution. Hence, because $f$ is homogeneous, ([\[polyaformoff\]](#polyaformoff){reference-type="ref" reference="polyaformoff"}) yields $$\begin{aligned} \label{polyaidy} \big(\, \sum\nolimits_i x_i\big)^{-m}f(x)=g\big(x_1 \big( \, \sum\nolimits_i x_i \big)^{-1},\dotsc,x_d\big(\, \sum\nolimits_i x_i\big)^{-1}\big), \end{aligned}$$ where $m=\deg(f)$. Since $g\in S_0$, $g(x)$ has only nonnegative coefficients. Therefore, after multiplying ([\[polyaidy\]](#polyaidy){reference-type="ref" reference="polyaidy"}) by $(\sum_i x_i)^{n+m}$ with $n$ sufficiently large to clear the denominators, we obtain the assertion. ◻ Finally, we mention two examples of polyhedra based on Corollary [Corollary 60](#prestelcor){reference-type="ref" reference="prestelcor"}(ii). *Example 67*. $[-1,1]^d$\ Let $k=2d$ and $f_1=1-x_1, f_2=1+x_1,\dotsc,f_{2d-1}=1-x_d,f_{2d}=1+x_d.$ Then $\mathcal{K}(\,{\sf f}\,)=[-1,1]^d$. Therefore, by Corollary [Corollary 60](#prestelcor){reference-type="ref" reference="prestelcor"}(ii), *a linear functional $L$ on $\mathbb{R}_d[\underline{x}]$ is a $[-1,1]^d$-moment functional if and only if* $$\begin{aligned} L( (1-x_1)^{n_1}(1+x_1)^{n_2}\dotsm (1-x_d)^{n_{2d-1}}(1+x_d)^{n_{2d}})&\geq 0& \text{for }n_1,\dotsc,n_{2d}\in \mathbb{N}_0. \hspace{0,2cm} \circ \end{aligned}$$ *Example 68*. (*Multidimensional  Hausdorff  moment  problem on $[0,1]^d$*)\ Set $f_1=x_1, f_2=1-x_1,\dotsc,f_{2d-1}=x_d,f_{2d}=1-x_d, k=2d$. Then $\mathcal{K}(\,{\sf f}\,)= [0,1]^d$. Let $s=(s_\mathfrak{n})_{\mathfrak{n}\in \mathbb{N}_0^d}$ be a multisequence. We define the shift $E_j$ of the $j$-th index by $$\begin{aligned} (E_js)_\mathfrak{m}=s_{(m_1,\dotsc,m_{j-1},m_j+1,m_{j+1},\dotsc,m_d)}, ~~~\mathfrak{m}\in \mathbb{N}_0^d. \end{aligned}$$ **Proposition 69**.
*The following five statements are equivalent:* - *  $s$ is a Hausdorff moment sequence on $[0,1]^d$.* - *  $L_s$ is a $[0,1]^d$-moment functional on $\mathbb{R}_d[\underline{x}].$* - *  $L_s( x_1^{m_1}(1-x_1)^{n_1}\cdots x_d^{m_d}(1-x_d)^{n_d})\geq 0$  for all  $\mathfrak{n},\mathfrak{m}\in \mathbb{N}_0^d$.* - *  $((I-E_1)^{n_1}\dots (I-E_d)^{n_d}s)_\mathfrak{m}\geq 0$  for all  $\mathfrak{n},\mathfrak{m}\in \mathbb{N}_0^d$.* - *   $$\begin{aligned} \sum_{\mathfrak{j}\in \mathbb{N}_0^d, \mathfrak{j}\leq \mathfrak{n}}~ (-1)^{|\mathfrak{j}|}\binom{n_1}{ j_1}\cdots \binom{n_d}{ j_d}\, s_{\mathfrak{m}+\mathfrak{j}}\geq 0\end{aligned}$$ for all $\mathfrak{n},\mathfrak{m}\in \mathbb{N}_0^d$. Here $|\mathfrak{j}|:=j_1+\dots+j_d$ and $\mathfrak{j}\leq \mathfrak{n}$ means that $j_i\leq n_i$ for $i=1,\dotsc,d$.* *Proof.* (i)$\leftrightarrow$(ii) holds by definition. Corollary [Corollary 60](#prestelcor){reference-type="ref" reference="prestelcor"}(ii) yields (ii)$\leftrightarrow$(iii). Let $\mathfrak{n}, \mathfrak{m}\in \mathbb{N}_0^d$. We repeat the computation from the proof of Theorem 3.15 and derive $$\begin{aligned} L_s( x_1^{m_1}(1-x_1)^{n_1}&\cdots x_d^{m_d}(1-x_d)^{n_d})= ((I-E_1)^{n_1}\dots (I-E_d)^{n_d}s)_\mathfrak{m}\\ &=\sum_{\mathfrak{j}\in \mathbb{N}_0^d, \mathfrak{j}\leq \mathfrak{n}}~ (-1)^{|\mathfrak{j}|}\binom{n_1}{ j_1}\cdots \binom{n_d}{ j_d} s_{\mathfrak{m}+\mathfrak{j}}. \end{aligned}$$ This identity implies the equivalence of conditions (iii)--(v). ◻ $\circ$ # Exercises -   Suppose that $Q$ is a quadratic module of a commutative real algebra ${\sf{A}}$. Show that $Q\cap(-Q)$ is an ideal of ${\sf{A}}$. This ideal is called the *support ideal* of $Q$. -   Let $K$ be a closed subset of $\mathbb{R}^d$. Show that ${\rm{Pos}}(K)$ is saturated. -   Formulate solvability criteria in terms of localized functionals and in terms of $d$-sequences for the following sets. -   Unit ball of $\mathbb{R}^d$. -   $\{x\in \mathbb{R}^d: x_1^2+\dots+x_d^2\leq r^2,~ x_1\geq 0,\dotsc, x_d\geq 0\}$. -   $\{(x_1,x_2,x_3,x_4)\in \mathbb{R}^4: x_1^2+x_2^2\leq 1,x_3^2+x_4^2\leq 1\}$. -   $\{(x_1,x_2,x_3)\in \mathbb{R}^3: x_1^2+x_2^2+x_3^2\leq 1, x_1+x_2+x_3\leq 1\}.$ -   $\{x\in \mathbb{R}^{2d}:x_1^2+x_2^2=1,\dotsc, x_{2d-1}^2+x_{2d}^2=1\}$. -   Decide whether or not the following quadratic modules $Q({\sf f})$ are Archimedean. -   $f_1=x_1,f_2=x_2, f_3=1-x_1x_2, f_4=4-x_1x_2$. -   $f_1=x_1,f_2=x_2, f_3=1-x_1-x_2.$ -   $f_1=x_1,f_2=x_2, f_3=1-x_1x_2$. -   Let $f_1,\dotsc,f_k,g_1,\dotsc,g_l\in \mathbb{R}_d[\underline{x}]$. Set ${\sf {g}}=(f_1,\dotsc,f_k,g_1,\dotsc,g_l)$, ${\sf {f}}=(f_1,\dotsc,f_k)$. Suppose that $Q({\sf f})$ is Archimedean. Show that each $Q({\sf g})$-positive linear functional $L$ is a determinate $\mathcal{K}({\sf g})$-moment functional. -   Formulate solvability criteria for the moment problem of the following semi-algebraic sets $\mathcal{K}({\sf f})$. -   $f_1=x_1^2+\dots+x_d^2, f_2=x_1,\dotsc, f_k=x_{k-1}$, where $2\leq k\leq d+1$. -   $f_1=x_1, f_2=2-x_1,f_3=x_2, f_4=2-x_2, f_5=x_1^2-x_2,$ where $d=2$. -   $f_1=x_1^2+x_2^2, f_2=ax_1+bx_2, f_3=x_2,$ where $d=2, a,b\in \mathbb{R}$. -   Let $d=2$, $f_1=1-x_1, f_2=1+x_1, f_3=1-x_2,f_4=1+x_2, f_5=1-x_1^2-x_2^2$  and ${\sf f}=(f_1,f_2,f_3,f_4,f_5).$ Describe the set $\mathcal{K}(\, {\sf f}\, )$ and use Theorem [Theorem 59](#prestel){reference-type="ref" reference="prestel"}(ii) to characterize $\mathcal{K}(\, {\sf f}\, )$-moment functionals.
-   Find a $d$-dimensional version of Exercise 7, where $d\geq 3.$ -   (*Tensor product of preorderings*)\ Let $n,k\in \mathbb{N}$. Suppose that ${\sf f}_1$ and ${\sf f}_2$ are finite subsets of $\mathbb{R}_n[\underline{x}]\equiv \mathbb{R}[x_1,\dotsc,x_n]$ and $\mathbb{R}_k[\underline{x}']\equiv \mathbb{R}[x_{n+1},\dotsc,x_{n+k}]$, respectively, such that the semi-algebraic sets $\mathcal{K}({\sf f}_1)$ of $\mathbb{R}^n$ and $\mathcal{K}({\sf f}_2)$ of $\mathbb{R}^k$ are compact. Define a subset $T$ of $\mathbb{R}[x_1,\dotsc,x_{n+k}]$ by $$\begin{aligned} T:= \Big\{ p(x,x')=\sum_{j=1}^r p_j(x)q_j(x'): ~p_1,\dotsc,p_r\in T({\sf f}_1),\, q_1,\dotsc,q_r\in T({\sf f}_2),\, r\in \mathbb{N}\Big\}. \end{aligned}$$ - Show that $T$ is an Archimedean semiring of $\mathbb{R}[x_1,\dotsc,x_{n+k}]$. - Give an example of ${\sf f}_1$ and ${\sf f}_2$ for which $T$ is not a preordering. - Let $p\in \mathbb{R}[x_1,\dotsc,x_{n+k}]$. Suppose $p(x,x')>0$ for all $x\in \mathcal{K}({\sf f}_1)$, $x'\in\mathcal{K}({\sf f}_2)$. Prove that $p\in T$. Hint: The preorderings $T({\sf f}_1)$ and $T({\sf f}_2)$ are Archimedean (Proposition [Proposition 26](#prearchcom){reference-type="ref" reference="prearchcom"}). Hence $f\otimes 1$ and $1\otimes g$ satisfy the Archimedean condition for $f\in T({\sf f}_1)$ and $g\in T({\sf f}_2)$. The semiring $T$ is generated by these elements, so $T$ is Archimedean. For b.) try $p=(x_1-x_{n+1})^2$. For c.), apply the Archimedean Positivstellensatz. -   (*Supporting polynomials of compact convex sets of $\mathbb{R}^d$*)\ Let $K$ be a non-empty compact convex subset of $\mathbb{R}^d$. By a *supporting polynomial* of $K$ at some point $t_0\in K$ we mean a polynomial $h\in \mathbb{R}_d[\underline{x}]$ of degree one such that $h(t_0)=0$ and $h(t)\geq 0$ for all $t\in K$. (In this case, $t_0$ is a boundary point of $K$.) Suppose that $H$ is a set of supporting polynomials at points of $K$ such that $$K=\{ t\in \mathbb{R}^d: h(t)\geq 0~~\textup{ for all}~~ h\in H \}.$$ - Prove that the semiring $S(H)$ of $\mathbb{R}_d[\underline{x}]$ generated by $H$ is Archimedean. - Let $f\in \mathbb{R}_d[\underline{x}]$ be such that $f(t)>0$ for all $t\in K$. Prove that $f\in S(H).$ -  Elaborate Exercise 10. for the unit disc $K=\{(x,y)\in \mathbb{R}^2: x^2+y^2\leq 1\}$ and $H:=\{ h_\theta:=1+x\, \cos(\theta)+y\, \sin(\theta): \theta\in [0,2\pi)\}$ or for appropriate subsets of $K$. -  (*Reznick's theorem* \[Re2\])\ Let $f\in \mathbb{R}_d[\underline{x}]$ be a homogeneous polynomial such that $f(x)> 0$ for $x\in \mathbb{R}^d$, $x\neq 0$. Prove that there exists an $n\in \mathbb{N}$ such that $(x_1^2+\dots+x_d^2)^nf(x)\in \sum \mathbb{R}_d[\underline{x}]^2$. Hint: Mimic the proof of Proposition [Proposition 66](#proofPolya){reference-type="ref" reference="proofPolya"}:  Let $T$ denote the preordering $\sum \mathbb{R}_d[\underline{x}]^2+\mathcal{I}$, where $\mathcal{I}$ is the ideal generated by the polynomial  $1-(x_1^2+\dots+ x_d^2)$. Show that $T$-positive characters correspond to points of the unit sphere, substitute $x_j(\sum_i x_i^2)^{-1/2}$ for $x_j$, apply Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"} to $T$, and clear denominators. # Notes The interplay between real algebraic geometry and the moment problem for compact semi-algebraic sets and the corresponding Theorems [Theorem 28](#schmps){reference-type="ref" reference="schmps"} and [Theorem 29](#mpschm){reference-type="ref" reference="mpschm"} were discovered by the author in \[Sm6\].
A small gap in the proof of \[Sm6, Corollary 3\] (observed by A. Prestel) was immediately repaired by the reasoning of the above proof of Proposition [Proposition 26](#prearchcom){reference-type="ref" reference="prearchcom"} (taken from \[Sm8, Proposition 18\]). The fact that the preordering is Archimedean in the compact case was first noted by T. Wörmann \[Wö\]. An algorithmic proof of Theorem [Theorem 28](#schmps){reference-type="ref" reference="schmps"} was developed by M. Schweighofer \[Sw1\], \[Sw2\]. The operator-theoretic proof of Theorem [Theorem 50](#archmedps){reference-type="ref" reference="archmedps"}(ii) given above is long known among operator theorists; it was used in \[Sm6\]. The operator-theoretic approach to the multidimensional moment theory was investigated in detail by F. Vasilescu \[Vs1\], \[Vs2\]. The Archimedean Positivstellensatz (Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"}) has a long history. It was proved in various versions by M.H. Stone \[Stn\], R.V. Kadison \[Kd\], J.-L. Krivine \[Kv1\], E. Becker and N. Schwartz \[BS\], M. Putinar \[Pu2\], and T. Jacobi \[Jc\]. The general version for quadratic modules is due to Jacobi \[Jc\], while the version for semirings was proved much earlier by Krivine \[Kr1\]. A more general version and a detailed discussion can be found in \[Ms1, Section 5.4\]. The unified approach to Theorem [Theorem 43](#archpos){reference-type="ref" reference="archpos"} in Section [4](#reparchimodiules){reference-type="ref" reference="reparchimodiules"} using the dagger cones is based on results obtained in the paper \[SmS23\]. Theorem [Theorem 51](#auxsemiring){reference-type="ref" reference="auxsemiring"} and Example [Example 52](#module){reference-type="ref" reference="module"} are also taken from \[SmS23\]. M. Putinar \[Pu2\] has proved that a finitely generated quadratic module $Q$ in $\mathbb{R}_d[\underline{x}]$ is Archimedean if (and only if) there exists a polynomial $f\in Q$ such that the set $\{x\in \mathbb{R}^d: f(x)\geq 0\}$ is compact. Corollary [Corollary 33](#mpwithboundeddensity){reference-type="ref" reference="mpwithboundeddensity"} and its non-compact version in Exercise 14.11 below are from \[Ls3\]. The moment problem with bounded densities is usually called the *Markov moment problem* or $L$-moment problem. In dimension one it goes back to A.A. Markov \[Mv1\], \[Mv2\], see \[AK\], \[Kr2\]. An interesting more recent work is \[DF\]. The multidimensional case was studied in \[Pu1\], \[Pu3\], \[Pu5\], \[Ls3\], \[Ls4\]. For compact polyhedra with nonempty interiors Corollary [Corollary 60](#prestelcor){reference-type="ref" reference="prestelcor"}(i) was proved by D. Handelman \[Hn\]. A special case was treated earlier by J.-L. Krivine \[Kv2\]. A related version can be found in \[Cs, Theorem 4\]. The general Theorem [Theorem 59](#prestel){reference-type="ref" reference="prestel"} is taken from \[SmS23\]; it is a slight generalization of \[PD, Theorem 5.4.6\]. Polya's theorem was proved in \[P\]. Polya's original proof is elementary; the elegant proof given in the text is from \[Wö\]. Proposition [Proposition 69](#multihausdorff){reference-type="ref" reference="multihausdorff"} is a classical result obtained in \[HS\]. It should be noted that Reznick's theorem \[Re2\] can be derived as an immediate consequence of Theorem [Theorem 28](#schmps){reference-type="ref" reference="schmps"}, see \[Sr3, 2.1.8\]. 
Reconstructing the shape of a subset of $\mathbb{R}^d$ from its moments with respect to the Lebesgue measure is another interesting topic, see e.g. \[GHPP\] and \[GLPR\].

Schmüdgen, K., *The Moment Problem*, Graduate Texts in Math. **277**, Springer, Cham, 2017.

Schmüdgen, K., *An Invitation to Unbounded $*$-Representations of $*$-Algebras on Hilbert Space*, Graduate Texts in Math. **285**, Springer, Cham, 2020.

\[SmS23\] Schmüdgen, K. and M. Schötz, Positivstellensätze for semirings, Mathematische Annalen 2023, https://doi.org/10.1007/s00208-023-02656-0.

[^1]: Acknowledgment: The author would like to thank Matthias Schötz for the fruitful cooperation.

---
abstract: |
  This paper aims to address two issues of integral equations for the scattering of time-harmonic electromagnetic waves by a perfect electric conductor with Lipschitz continuous boundary: resonant instability and dense discretization breakdown. The remedy to resonant instability is a combined field integral equation, and dense discretization breakdown is eliminated by means of operator preconditioning. The exterior traces of single and double layer potentials are complemented by their interior counterparts of a pure imaginary wave number. We derive the corresponding variational formulation in the natural trace space for electromagnetic fields and establish its well-posedness for all wave numbers. A Galerkin discretization scheme is employed using conforming edge boundary elements on dual meshes, which produces well-conditioned discrete linear systems of the variational formulation. Some numerical results are also provided to support the numerical analysis.
address: IDLab, Department of Information Technology, Ghent University - imec, 9000 Ghent, Belgium
author:
- Van Chien Le
- Kristof Cools
bibliography:
- abrv_ref.bib
title: A well-conditioned combined field integral equation for electromagnetic scattering
---

[^1]

# Introduction

This paper concerns the scattering of electromagnetic waves by a perfect electric conductor, which plays a fundamental role in computational electromagnetics. Let $\Omega\subset\mathbb R^3$ be an open bounded domain with a connected Lipschitz boundary $\Gamma:= \partial\Omega$. The exterior domain $\Omega^c := \mathbb R^3 \setminus \overline{\Omega}$ is filled by a homogeneous, isotropic, and linear material with permittivity $\epsilon$ and permeability $\mu$, which are both positive constants in $\Omega^c$. Electromagnetic waves propagating outside $\Omega$ are excited by a time-harmonic incident electric field $\boldsymbol{e}^i$ of angular frequency $\omega> 0$. Therefore, we can switch to a frequency-domain problem with unknown complex-valued spatial functions.

The scattered field $\boldsymbol{e}$ satisfies the following exterior Dirichlet boundary-value problem for the electric wave equation [@Colton1992 Section 6.4] $$\begin{aligned}
& \textbf{\textup{curl}}\, \textbf{\textup{curl}}\, \boldsymbol{e}- \kappa^2 \boldsymbol{e}= 0 && \text{in} \quad\Omega^c, \label{eq:wave} \\
& \boldsymbol{e}\times \textbf{n}= - \boldsymbol{e}^i \times \textbf{n}&& \text{on} \quad\Gamma, \label{eq:DBC}\end{aligned}$$ supplemented with the Silver-Müller radiation condition $$\label{eq:SilverMuller}
\lim_{r \to \infty} \int_{\partial B_r} \left\vert{\textbf{\textup{curl}}\, \boldsymbol{e}\times \textbf{n}+ i\kappa (\textbf{n}\times \boldsymbol{e}) \times \textbf{n}}\right\vert^2 \,\mathrm{d}s= 0.$$ Here, $\kappa = \omega\sqrt{\epsilon\mu} > 0$ is the wave number, $\textbf{n}$ is the unit normal vector on $\Gamma$ pointing from $\Omega$ to $\Omega^c$, and $B_r$ is a ball of radius $r > 0$ centered at $0$. We refer the reader to Rellich's lemma [@Cessenat1996; @Nedelec2001] for the existence and uniqueness of a solution to [\[eq:wave\]](#eq:wave){reference-type="eqref" reference="eq:wave"}-[\[eq:SilverMuller\]](#eq:SilverMuller){reference-type="eqref" reference="eq:SilverMuller"}.

Boundary integral equations have become the most popular method to solve electromagnetic scattering problems in unbounded domains.
Based on the integral representation formulas for solutions to Maxwell's equations, this method poses an alternative problem on the boundary of the domain, leading to discrete systems of much smaller size. Prominent examples are electric and magnetic field integral equations.

This paper aims to address two issues of boundary integral equations for electromagnetic scattering by perfectly conducting bodies with Lipschitz continuous boundaries: resonant instability and dense discretization breakdown. These issues arise when $\kappa^2$ is close to a resonant frequency (the former), or when the discrete problem involves a large number of unknowns (the latter); both manifest themselves in ill-conditioning of the discrete linear systems of equations. Resonant frequencies are Dirichlet or Neumann eigenvalues of the $\textbf{\textup{curl}}\, \textbf{\textup{curl}}$-operator inside $\Omega$, at which the standard boundary integral equations are not uniquely solvable. Among the approaches to overcoming the resonant instability (see, e.g., [@FK1998; @HL1996]), combined field integral equations (CFIEs) are by far the most popular. CFIEs owe their name to an appropriate combination of the single and double layer potentials. This class of integral equations for electromagnetic scattering was pioneered by Panich in [@Panich1965]. A regularized CFIE was then introduced by Kress in [@Kress1986]. Both formulations are only applicable for domains with sufficiently smooth boundaries (specifically, belonging to the class $\mathop{\mathrm{C}}^2$), which are not suitable for the numerical implementation of boundary element methods. With the advancement in numerical analysis of Maxwell's equations in Lipschitz domains (see, e.g., [@BC2001; @BC2001b; @BCS2002a; @BHV+2003]), some coercive CFIEs were proposed by Buffa and Hiptmair in [@BH2005], and by Steinbach and Windisch in [@SW2009], which are applicable for domains with Lipschitz continuous boundaries. Although CFIEs are well-posed at all frequencies, they may produce ill-conditioned matrices when a large number of discrete unknowns is involved, which makes the numerical resolution by means of iterative schemes extremely expensive. The typical approach to overcoming this challenge is to employ an algebraic preconditioner. Some preconditioners for CFIEs were presented in [@ABL2007; @Levadoux2008; @BEP+2009; @BT2014]. Both treatments primarily rely on the fact that the magnetic field integral operator on a sufficiently smooth domain is a Fredholm operator of second kind. Unfortunately, this special property is no longer valid for non-smooth Lipschitz domains.

In this paper, we propose a well-conditioned CFIE for electromagnetic scattering in Lipschitz domains. The exterior traces of the single and double layer potentials are complemented by their interior counterparts of a pure imaginary wave number. The uniqueness of a solution to the CFIE is established by means of the ellipticity of the electric field integral operator with a pure imaginary wave number. A generalized Gårding inequality is achieved with the aid of a Calderón projection formula. Therefore, the well-posedness of the CFIE at any wave number can be concluded by a Fredholm alternative argument. A matching Galerkin discretization scheme is then employed using $\textup{div}_\Gamma$-conforming boundary elements on a pair of dual meshes, which guarantees the well-conditioning of discrete linear systems.
This paper is organized as follows: the next section provides a concise summary of relevant function spaces and trace spaces, which are needed for the numerical analysis throughout the paper. Then, section [3](#sec:potentials){reference-type="ref" reference="sec:potentials"} aims to recall the crucial potentials and integral operators for electromagnetic scattering. Section [4](#sec:cfie){reference-type="ref" reference="sec:cfie"} is the main part where we propose the new CFIE, derive its variational formulation and prove its well-posedness. A Galerkin discretization scheme for the variational formulation is introduced in section [5](#sec:discretization){reference-type="ref" reference="sec:discretization"} with the aim of producing well-conditioned linear systems. Section [6](#sec:results){reference-type="ref" reference="sec:results"} is devoted to some numerical results which corroborate the stability and well-conditioning of the CFIE. We end with some conclusions and an outlook on future work in section [7](#sec:conclusions){reference-type="ref" reference="sec:conclusions"}.

# Traces and spaces {#sec:traces}

For any domain $\Omega\subseteq \mathbb R^3$, let $\mathop{\mathrm{H}}^s(\Omega)$ and $\mathop{\mathrm{\mathbf H}}^s(\Omega)$, with $s \geq 0$, be the Sobolev spaces of complex-valued scalar and vector functions on $\Omega$ equipped with the standard graph norms, where $\mathop{\mathrm{H}}^0(\Omega)$ and $\mathop{\mathrm{\mathbf H}}^0(\Omega)$ coincide with the Lebesgue spaces $\mathop{\mathrm{L}}^2(\Omega)$ and $\mathop{\mathrm{\mathbf L}}^2(\Omega)$. The following function spaces are the natural spaces for solutions of the electric wave equation [\[eq:wave\]](#eq:wave){reference-type="eqref" reference="eq:wave"} on bounded domains $$\begin{aligned}
\mathop{\mathrm{\mathbf H}}(\textbf{\textup{curl}}, \Omega) & := \left\lbrace{\boldsymbol{u}\in \mathop{\mathrm{\mathbf L}}^2(\Omega) : \textbf{\textup{curl}}\, \boldsymbol{u}\in \mathop{\mathrm{\mathbf L}}^2(\Omega)}\right\rbrace, \\
\mathop{\mathrm{\mathbf H}}(\textbf{\textup{curl}}^2, \Omega) & := \left\lbrace{\boldsymbol{u}\in \mathop{\mathrm{\mathbf H}}(\textbf{\textup{curl}}, \Omega) : \textbf{\textup{curl}}\,\textbf{\textup{curl}}\, \boldsymbol{u}\in \mathop{\mathrm{\mathbf L}}^2(\Omega)}\right\rbrace,\end{aligned}$$ which are respectively endowed with the norms $$\begin{aligned}
\left\lVert{\boldsymbol{u}}\right\rVert^2_{\mathop{\mathrm{\mathbf H}}(\textbf{\textup{curl}}, \Omega)} & := \left\lVert{\boldsymbol{u}}\right\rVert^2_{\mathop{\mathrm{\mathbf L}}^2(\Omega)} + \left\lVert{\textbf{\textup{curl}}\, \boldsymbol{u}}\right\rVert^2_{\mathop{\mathrm{\mathbf L}}^2(\Omega)}, \\
\left\lVert{\boldsymbol{u}}\right\rVert^2_{\mathop{\mathrm{\mathbf H}}(\textbf{\textup{curl}}^2, \Omega)} & := \left\lVert{\boldsymbol{u}}\right\rVert^2_{\mathop{\mathrm{\mathbf H}}(\textbf{\textup{curl}}, \Omega)} + \left\lVert{\textbf{\textup{curl}}\, \textbf{\textup{curl}}\, \boldsymbol{u}}\right\rVert^2_{\mathop{\mathrm{\mathbf L}}^2(\Omega)}.\end{aligned}$$ On unbounded domains $\Omega$, the space $\mathop{\mathrm{\mathbf H}}_\textup{loc}(\textbf{\textup{curl}}^2, \Omega)$ is defined as the set of all vector functions $\boldsymbol{u}$ such that $\varphi \boldsymbol{u}\in \mathop{\mathrm{\mathbf H}}(\textbf{\textup{curl}}^2, \Omega)$ for all compactly supported smooth scalar functions $\varphi \in \mathop{\mathrm{C}}^{\infty}(\mathbb R^3)$.

Next, we briefly introduce some trace spaces related to Maxwell's equations in a Lipschitz domain $\Omega$.
For more details, the reader is referred to the monographs [@BC2001; @BC2001b; @BCS2002a]. We define the continuous tangential trace operator $\gamma_D : \mathop{\mathrm{\mathbf H}}^1(\Omega) \to \mathop{\mathrm{\mathbf L}}^2_\textbf{t}(\Gamma)$ by $$\gamma_D : \boldsymbol{u}\mapsto \gamma(\boldsymbol{u}) \times \textbf{n},$$ where $\gamma: \mathop{\mathrm{\mathbf H}}^1(\Omega) \to \mathop{\mathrm{\mathbf L}}^2(\Gamma)$ is the standard trace operator. The range of $\gamma_D$ in $\mathop{\mathrm{\mathbf L}}^2_\textbf{t}(\Gamma)$ is denoted by $\mathop{\mathrm{\mathbf H}}^{1/2}_{\times}(\Gamma)$. Its dual space $\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\Gamma)$ is defined with respect to the anti-symmetric pairing $$\left\langle{\boldsymbol{u}, \boldsymbol{v}}\right\rangle_{\times, \Gamma} := \int_\Gamma(\boldsymbol{u}\times \textbf{n}) \cdot \boldsymbol{v}\,\mathrm{d}s, \qquad\quad\boldsymbol{u}, \boldsymbol{v}\in \mathop{\mathrm{\mathbf L}}^2_\textbf{t}(\Gamma) := \left\lbrace{\boldsymbol{u}\in \mathop{\mathrm{\mathbf L}}^2(\Gamma) : \boldsymbol{u}\cdot \textbf{n}= 0}\right\rbrace.$$ Let $s \in \{\frac{1}{2}, \frac{3}{2}\}$ and $\mathop{\mathrm{H}}^s(\Gamma)$ be the trace space of $\mathop{\mathrm{H}}^{s+1/2}(\Omega)$ (the definition of $\mathop{\mathrm{H}}^{3/2}(\Gamma)$ can be found in [@BCS2002a Proposition 3.4]). The dual space of $\mathop{\mathrm{H}}^s(\Gamma)$ with respect to the pivot $\mathop{\mathrm{L}}^2(\Gamma)$ is denoted by $\mathop{\mathrm{H}}^{-s}(\Gamma)$, and their natural duality pairing is denoted by $\langle\cdot, \cdot\rangle_{s, \Gamma}$. We adopt the definition of the operator $\textbf{\textup{curl}}_\Gamma: \mathop{\mathrm{H}}^{3/2}(\Gamma) \to \mathop{\mathrm{\mathbf H}}^{1/2}_{\times}(\Gamma)$ from [@BCS2002a] $$\textbf{\textup{curl}}_\Gamma\, \gamma(\varphi) = \gamma_D(\textbf{grad}\, \varphi), \qquad\qquad\forall\varphi \in \mathop{\mathrm{H}}^{2}(\Omega).$$ The surface divergence operator $\textup{div}_\Gamma: \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\Gamma) \to \mathop{\mathrm{H}}^{-3/2}(\Gamma)$ is then defined as the adjoint operator to $\textbf{\textup{curl}}_\Gamma$ $$\left\langle{\textup{div}_\Gamma\boldsymbol{u}, \varphi}\right\rangle_{3/2, \Gamma} = - \left\langle{\boldsymbol{u}, \textbf{\textup{curl}}_\Gamma\varphi}\right\rangle_{\times, \Gamma}, \qquad\quad\forall\boldsymbol{u}\in \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\Gamma), \ \forall\varphi \in \mathop{\mathrm{H}}^{3/2}(\Gamma).$$ Now, we introduce the space $$\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_{\Gamma}, \Gamma) := \left\lbrace{\boldsymbol{u}\in \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\Gamma) : \textup{div}_{\Gamma} \boldsymbol{u}\in \mathop{\mathrm{H}}^{-1/2}(\Gamma)}\right\rbrace,$$ equipped with the graph norm $$\left\lVert{\boldsymbol{u}}\right\rVert^2_{\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_{\Gamma}, \Gamma)} := \left\lVert{\boldsymbol{u}}\right\rVert^2_{\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\Gamma)} + \left\lVert{\textup{div}_\Gamma\boldsymbol{u}}\right\rVert^2_{\mathop{\mathrm{H}}^{-1/2}(\Gamma)}.$$ It is noteworthy that $\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_{\Gamma}, \Gamma)$ is the desired trace space for electromagnetic fields. An important property of the space $\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_{\Gamma}, \Gamma)$ is its self-duality, which was given in [@BC2001b] and [@BCS2002a Lemma 5.6]. 
**Theorem 1** (self-duality of the space $\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_{\Gamma}, \Gamma)$). *The pairing $\langle{\cdot, \cdot}\rangle_{\times, \Gamma}$ can be extended to a continuous bilinear form on $\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_{\Gamma}, \Gamma)$. Moreover, the space $\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_{\Gamma}, \Gamma)$ becomes its own dual with respect to $\langle{\cdot, \cdot}\rangle_{\times, \Gamma}$.* Finally, we introduce the trace for the energy space $\mathop{\mathrm{\mathbf H}}(\textbf{\textup{curl}}, \Omega)$. The next theorem presents the extension of the tangential trace operator $\gamma_D$ to $\mathop{\mathrm{\mathbf H}}(\textbf{\textup{curl}}, \Omega)$ and a related integration by parts formula, see [@BCS2002a Theorem 4.1] or [@CH2012 Theorem 2.3]. **Theorem 2** (integration by parts formula). *The tangential trace operator $\gamma_D$ can be extended to a continuous mapping from $\mathop{\mathrm{\mathbf H}}(\textbf{\textup{curl}}, \Omega)$ onto $\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_{\Gamma}, \Gamma)$, which possesses a continuous right inverse. In addition, the following integration by parts formula holds $$\label{eq:identity} \int_\Omega(\textbf{\textup{curl}}\, \boldsymbol{u}\cdot \boldsymbol{v}- \boldsymbol{u}\cdot \textbf{\textup{curl}}\, \boldsymbol{v}) \,\mathrm{d}\boldsymbol{x}= -\left\langle{\gamma_D \boldsymbol{u}, \gamma_D \boldsymbol{v}}\right\rangle_{\times, \Gamma}, \quad\forall\boldsymbol{u}, \boldsymbol{v}\in \mathop{\mathrm{\mathbf H}}(\textbf{\textup{curl}}, \Omega).$$* The traces of $\mathop{\mathrm{\mathbf H}}(\textbf{\textup{curl}}^2, \Omega)$ involve the Neumann trace operator $\gamma_N := \gamma_D \circ \textbf{\textup{curl}}$, which continuously maps $\mathop{\mathrm{\mathbf H}}(\textbf{\textup{curl}}^2, \Omega)$ onto $\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_{\Gamma}, \Gamma)$, see [@BH2005]. # Potentials and integral operators {#sec:potentials} In this section, we introduce the relevant potentials and boundary integral operators for electromagnetic scattering. In order to support the numerical analysis in the next sections, the potentials and integral operators are defined for wave number $\sigma$, where $\sigma= \kappa$ or $\sigma= i\kappa$ with $\kappa > 0$. The reader can find more details about the case of a pure imaginary wave number in [@Nedelec2001 Section 5.6.4], [@Hiptmair2003b Section 5.1] and [@SW2009]. For the sake of convenience, we restate here the Maxwell equation [\[eq:wave\]](#eq:wave){reference-type="eqref" reference="eq:wave"} and the Silver-Müller radiation condition [\[eq:SilverMuller\]](#eq:SilverMuller){reference-type="eqref" reference="eq:SilverMuller"}, which are now associated with the wave number $\sigma$ $$\begin{aligned} & \,\, \textbf{\textup{curl}}\, \textbf{\textup{curl}}\, \boldsymbol{u}- \sigma^2 \boldsymbol{u}= 0 \qquad\qquad\qquad\text{in} \quad\Omega\cup \Omega^c, \label{eq:wave1} \\ & \lim_{r \to \infty} \int_{\partial B_r} \left\vert{\textbf{\textup{curl}}\, \boldsymbol{u}\times \textbf{n}+ i\sigma(\textbf{n}\times \boldsymbol{u}) \times \textbf{n}}\right\vert^2 \,\mathrm{d}s= 0. 
\label{eq:SilverMuller1}\end{aligned}$$ Let $G_{\sigma}(\boldsymbol{x}, \boldsymbol{y})$ be the fundamental solution associated with the operator $\Delta + \sigma^2$ $$G_\sigma(\boldsymbol{x}, \boldsymbol{y}) := \dfrac{\exp(i\sigma\left\vert{\boldsymbol{x}- \boldsymbol{y}}\right\vert)}{4\pi \left\vert{\boldsymbol{x}- \boldsymbol{y}}\right\vert}, \qquad\qquad\quad\boldsymbol{x}\neq \boldsymbol{y}.$$ We use this kernel to define the scalar and vectorial single layer potentials $$\Psi^\sigma_V(\varphi)(\boldsymbol{x}) := \int_\Gamma\varphi(\boldsymbol{y}) G_\sigma(\boldsymbol{x}, \boldsymbol{y}) \,\mathrm{d}s(\boldsymbol{y}), \qquad\boldsymbol{\Psi}^\sigma_{\boldsymbol{A}}(\boldsymbol{u})(\boldsymbol{x}) := \int_\Gamma\boldsymbol{u}(\boldsymbol{y}) G_\sigma(\boldsymbol{x}, \boldsymbol{y}) \,\mathrm{d}s(\boldsymbol{y}),$$ with $\boldsymbol{x}\notin \Gamma$. For electromagnetic scattering, the following "Maxwell single and double layer potentials\" are essential $$\boldsymbol{\Psi}^\sigma_{SL}(\boldsymbol{u}) := \boldsymbol{\Psi}^\sigma_{\boldsymbol{A}} (\boldsymbol{u}) + \dfrac{1}{\sigma^2} \textbf{grad}\, \Psi^\sigma_V(\textup{div}_\Gamma\boldsymbol{u}), \qquad\quad\boldsymbol{\Psi}^\sigma_{DL}(\boldsymbol{u}) := \textbf{\textup{curl}}\, \boldsymbol{\Psi}^\sigma_{\boldsymbol{A}}(\boldsymbol{u}).$$ The potentials $\boldsymbol{\Psi}^\sigma_{SL}$ and $\boldsymbol{\Psi}^\sigma_{DL}$ are continuous mappings from $\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$ into $\mathop{\mathrm{\mathbf H}}_{\textup{loc}}(\textbf{\textup{curl}}^2, \Omega\cup \Omega^c)$, see [@Hiptmair2003b Theorem 17]. Moreover, for any $\boldsymbol{u}\in \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$, both $\boldsymbol{\Psi}^{\sigma}_{SL}(\boldsymbol{u})$ and $\boldsymbol{\Psi}^{\sigma}_{DL}(\boldsymbol{u})$ are solutions to the equation [\[eq:wave1\]](#eq:wave1){reference-type="eqref" reference="eq:wave1"} and fulfill the radiation condition [\[eq:SilverMuller1\]](#eq:SilverMuller1){reference-type="eqref" reference="eq:SilverMuller1"}. On the other hand, any solution $\boldsymbol{u}\in \mathop{\mathrm{\mathbf H}}_{\textup{loc}}(\textbf{\textup{curl}}^2, \Omega\cup \Omega^c)$ to [\[eq:wave1\]](#eq:wave1){reference-type="eqref" reference="eq:wave1"}-[\[eq:SilverMuller1\]](#eq:SilverMuller1){reference-type="eqref" reference="eq:SilverMuller1"} satisfies the following variant of the well-known Stratton-Chu representation formula (see, e.g., [@Colton1992 Theorem 6.2], [@Nedelec2001 Theorem 5.5.1] and [@Hiptmair2003b Theorem 22]) $$\label{eq:representation} \boldsymbol{u}(\boldsymbol{x}) = - \boldsymbol{\Psi}^{\sigma}_{DL}(\left[\gamma_D\right]_{\Gamma} \boldsymbol{u})(\boldsymbol{x}) - \boldsymbol{\Psi}^{\sigma}_{SL}(\left[\gamma_N\right]_{\Gamma} \boldsymbol{u}) (\boldsymbol{x}), \qquad\quad\boldsymbol{x}\in \Omega\cup \Omega^c,$$ where $\left[\gamma\right]_{\Gamma} := \gamma^+ - \gamma^-$ is the jump of some trace $\gamma$ across $\Gamma$, and the superscripts $-$ and $+$ designate traces onto $\Gamma$ from $\Omega$ and $\Omega^c$, respectively. 
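As a small aside, the following self-contained sketch (our own illustration, written for this exposition and not part of any analysis or boundary element code referenced in this paper) evaluates the scalar kernel $G_\sigma$ for a real and for a pure imaginary wave number. It also hints at a fact exploited below: both kernels are singular at $\boldsymbol{x}= \boldsymbol{y}$, but their difference $G_\kappa - G_{i\kappa}$ stays bounded, which is the reason why the integral operators built from this difference turn out to be compact.

```python
import numpy as np

def helmholtz_kernel(x, y, sigma):
    """Fundamental solution G_sigma(x, y) = exp(i*sigma*|x - y|) / (4*pi*|x - y|)."""
    r = np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    return np.exp(1j * sigma * r) / (4.0 * np.pi * r)

kappa = 2.0
x = np.array([0.3, -0.1, 0.7])

# Both kernels blow up like 1/(4*pi*r) as y approaches x, but their difference
# tends to the finite limit (i*kappa + kappa) / (4*pi).
for r in (1e-1, 1e-3, 1e-6):
    y = x + np.array([r, 0.0, 0.0])
    g_re = helmholtz_kernel(x, y, kappa)       # sigma = kappa: oscillatory kernel
    g_im = helmholtz_kernel(x, y, 1j * kappa)  # sigma = i*kappa: exponentially decaying (Yukawa) kernel
    print(f"r = {r:.0e}:  |G_kappa| = {abs(g_re):.3e}, "
          f"|G_ikappa| = {abs(g_im):.3e}, |difference| = {abs(g_re - g_im):.3e}")

print("limit of the difference as r -> 0:", abs((1j * kappa + kappa) / (4.0 * np.pi)))
```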
Next, please bear in mind the fact that $$\label{eq:sym1} \textbf{\textup{curl}}\circ \boldsymbol{\Psi}^\sigma_{SL} = \boldsymbol{\Psi}^\sigma_{DL}, \qquad\qquad\quad\textbf{\textup{curl}}\circ \boldsymbol{\Psi}^\sigma_{DL} = \sigma^2 \boldsymbol{\Psi}^\sigma_{SL},$$ which immediately implies $$\label{eq:sym2} \gamma^\pm_N \circ \boldsymbol{\Psi}^\sigma_{SL} = \gamma^\pm_D \boldsymbol{\Psi}^\sigma_{DL}, \qquad\qquad\gamma^\pm_N \circ \boldsymbol{\Psi}^\sigma_{DL} = \sigma^2 \gamma^\pm_D \circ \boldsymbol{\Psi}^\sigma_{SL}.$$ The "symmetric\" relations [\[eq:sym1\]](#eq:sym1){reference-type="eqref" reference="eq:sym1"} and [\[eq:sym2\]](#eq:sym2){reference-type="eqref" reference="eq:sym2"} convince us that only integral operators associated with the Maxwell single layer potential $\boldsymbol{\Psi}^\sigma_{SL}$ (or the Maxwell double layer potential $\boldsymbol{\Psi}^\sigma_{DL}$) are needed to be explicitly defined. Let $\left\lbrace{\gamma}\right\rbrace_\Gamma:= \frac{1}{2}(\gamma^+ + \gamma^-)$ be the average of exterior and interior traces $\gamma$ on the boundary $\Gamma$. Now, we are ready to define our desired boundary integral operators. **Theorem 3**. *For $\sigma= \kappa$ or $\sigma= i\kappa$ with $\kappa > 0$, the boundary integral operators $$\begin{aligned} & V_\sigma:= \left\lbrace{\gamma}\right\rbrace_\Gamma\circ \Psi^\sigma_V && : \mathop{\mathrm{H}}^{-1/2}(\Gamma) \to \mathop{\mathrm{H}}^{1/2}(\Gamma), \\ & A_\sigma:= \left\lbrace{\gamma_D}\right\rbrace_\Gamma\circ \boldsymbol{\Psi}^\sigma_{\boldsymbol{A}} && : \mathop{\mathrm{\mathbf H}}^{-1/2}_\times(\Gamma) \to \mathop{\mathrm{\mathbf H}}^{1/2}_\times(\Gamma), \\ & S_\sigma:= \left\lbrace{\gamma_D}\right\rbrace_\Gamma\circ \boldsymbol{\Psi}^\sigma_{SL} && : \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma) \to \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma), \\ & C_\sigma:= \left\lbrace{\gamma_N}\right\rbrace_\Gamma\circ \boldsymbol{\Psi}^\sigma_{SL} && : \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma) \to \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma), \end{aligned}$$ are well defined and continuous. Moreover, the following relation holds $$\label{eq:efio} S_\sigma= A_\sigma+ \dfrac{1}{\sigma^2} \textbf{\textup{curl}}_\Gamma\circ V_\sigma\circ \textup{div}_\Gamma.$$* *Proof.* The definitions of integral operators can be found in [@Hiptmair2003b Theorem 19]. The last assertion follows from the definition of the Maxwell single layer potential $\boldsymbol{\Psi}^\sigma_{SL}$ and the fact that $\textbf{\textup{curl}}_\Gamma\circ \gamma= \gamma_D \circ \textbf{grad}$. ◻ In order to infer the exterior and interior traces of the potentials, the following jump relations are crucial [@Hiptmair2003b Theorem 18] $$\left[\gamma\right]_\Gamma\circ \Psi^\sigma_V = 0, \qquad\quad\left[\gamma_D\right]_\Gamma\circ \boldsymbol{\Psi}^\sigma_{\boldsymbol{A}} = 0, \qquad\quad\left[\gamma_N\right]_\Gamma\circ \boldsymbol{\Psi}^\sigma_{SL} = -Id,$$ where $Id$ stands for the identity operator. Therefore, we have the Neumann traces of the Maxwell single layer potential as follows $$\gamma^+_N \circ \boldsymbol{\Psi}^\sigma_{SL} = -\dfrac{1}{2} Id + C_\sigma, \qquad\qquad\quad\gamma^-_N \circ \boldsymbol{\Psi}^\sigma_{SL} = \dfrac{1}{2} Id + C_\sigma.$$ The following lemma provides an auxiliary result for establishing the coercivity of the CFIE in the next section. **Lemma 1**. 
*For $\kappa > 0$, the following integral operators are compact $$\begin{aligned} & \delta V_\kappa := V_\kappa - V_{i\kappa} && : \mathop{\mathrm{H}}^{-1/2}(\Gamma) \to \mathop{\mathrm{H}}^{1/2}(\Gamma), \\ & \delta A_\kappa := A_\kappa - A_{i\kappa} && : \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\Gamma) \to \mathop{\mathrm{\mathbf H}}^{1/2}_{\times}(\Gamma), \\ & \delta C_\kappa := C_\kappa - C_{i\kappa} && : \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma) \to \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma). \end{aligned}$$* *Proof.* The assertion can be proven by following the same lines of the proof of [@Nedelec2001 Theorem 3.4.1] or [@BHV+2003 Theorem 3.12]. Here, it is noteworthy that the kernel $G_\kappa - G_{i\kappa}$ is regular. ◻ To end this section, we present here some useful properties of integral operators of pure imaginary wave number. These results are taken from [@Hiptmair2003b Theorem 21] and [@SW2009 Theorem 2.5]. **Theorem 4**. *For $\kappa > 0$, the operators $V_{i\kappa}$ and $A_{i\kappa}$ are self-adjoint with respect to the bilinear pairings $\langle{\cdot, \cdot}\rangle_{1/2, \Gamma}$ and $\langle{\cdot, \cdot}\rangle_{\times, \Gamma}$, respectively, i.e., $$\begin{aligned} & \left\langle{\psi, V_{i\kappa} \varphi}\right\rangle_{1/2, \Gamma} = \left\langle{\varphi, V_{i\kappa} \psi}\right\rangle_{1/2, \Gamma}, && \forall\psi, \varphi \in \mathop{\mathrm{H}}^{-1/2}(\Gamma), \\ & \left\langle{\boldsymbol{v}, A_{i\kappa} \boldsymbol{u}}\right\rangle_{\times, \Gamma} = \left\langle{\boldsymbol{u}, A_{i\kappa} \boldsymbol{v}}\right\rangle_{\times, \Gamma}, && \forall\boldsymbol{v}, \boldsymbol{u}\in \mathop{\mathrm{\mathbf H}}^{-1/2}_\times(\Gamma). \end{aligned}$$ In addition, there exists a positive constant $C$ depending only on $\Gamma$ such that $$\begin{aligned} \left\langle{\varphi, V_{i\kappa} \varphi}\right\rangle_{1/2, \Gamma} & \ge C \left\lVert{\varphi}\right\rVert^2_{\mathop{\mathrm{H}}^{-1/2}(\Gamma)}, && \forall\varphi \in \mathop{\mathrm{H}}^{-1/2}(\Gamma), \\ \left\langle{\boldsymbol{u}, A_{i\kappa} \boldsymbol{u}}\right\rangle_{\times, \Gamma} & \ge C \left\lVert{\boldsymbol{u}}\right\rVert^2_{\mathop{\mathrm{\mathbf H}}_\times^{-1/2}(\Gamma)}, && \forall\boldsymbol{u}\in \mathop{\mathrm{\mathbf H}}_\times^{-1/2}(\textup{div}_\Gamma, \Gamma). \end{aligned}$$* The following ellipticity and self-adjointness of the operator $S_{i\kappa}$ immediately follow from Theorem [Theorem 4](#thm:ellipticity){reference-type="ref" reference="thm:ellipticity"}. They will play a central role in further analysis. **Corollary 1**. 
*For $\kappa > 0$, the integral operator $S_{i\kappa}$ is $\mathop{\mathrm{\mathbf H}}_\times^{-1/2}(\textup{div}_\Gamma, \Gamma)$-elliptic and self-adjoint with respect to $\langle{\cdot, \cdot}\rangle_{\times, \Gamma}$, i.e., there exists a constant $C > 0$ depending only on $\Gamma$ such that, for all $\boldsymbol{u}, \boldsymbol{v}\in \mathop{\mathrm{\mathbf H}}^{-1/2}_\times(\textup{div}_\Gamma, \Gamma)$ $$\left\langle{\boldsymbol{v}, S_{i\kappa} \boldsymbol{u}}\right\rangle_{\times, \Gamma} = \left\langle{\boldsymbol{u}, S_{i\kappa} \boldsymbol{v}}\right\rangle_{\times, \Gamma}, \qquad\left\langle{\boldsymbol{u}, S_{i\kappa} \boldsymbol{u}}\right\rangle_{\times, \Gamma} \ge C \left\lVert{\boldsymbol{u}}\right\rVert^2_{\mathop{\mathrm{\mathbf H}}^{-1/2}_\times(\textup{div}_\Gamma, \Gamma)}.$$* # The combined field integral equation {#sec:cfie} In this section, we propose a new CFIE which yields a unique solution to [\[eq:wave\]](#eq:wave){reference-type="eqref" reference="eq:wave"}-[\[eq:SilverMuller\]](#eq:SilverMuller){reference-type="eqref" reference="eq:SilverMuller"} for any wave number $\kappa > 0$. A common strategy when combining the Maxwell single layer potential $\boldsymbol{\Psi}^\kappa_{SL}$ and the Maxwell double layer potential $\boldsymbol{\Psi}^\kappa_{DL}$ is to introduce a compact regularization that targets either $\boldsymbol{\Psi}^\kappa_{SL}$ or $\boldsymbol{\Psi}^\kappa_{DL}$ (see [@Kress1986] and [@BH2005]). In contrast to this approach, in the following formulation, the potentials $\boldsymbol{\Psi}^\kappa_{SL}$ and $\boldsymbol{\Psi}^\kappa_{DL}$ are respectively complemented by their counterparts $\boldsymbol{\Psi}^{i\kappa}_{SL}$ and $\boldsymbol{\Psi}^{i\kappa}_{DL}$ of pure imaginary wave number. This formulation is motivated by the Yukawa-Calderón CFIE investigated in some engineering papers, e.g., [@CDE+2002; @MBC+2020]. Those studies claimed the solvability and stability of discrete equations without providing a rigorous proof, particularly for Lipschitz domains. In this paper, we consider the following ansatz for the scattered electric field $$\label{eq:ansatz} \boldsymbol{e}= \left({i\eta \, \boldsymbol{\Psi}^{\kappa}_{SL} \circ \gamma_D^- \circ \boldsymbol{\Psi}^{i\kappa}_{SL} + \boldsymbol{\Psi}^{\kappa}_{DL} \circ \gamma_D^- \circ \boldsymbol{\Psi}^{i\kappa}_{DL}}\right) (\boldsymbol{\xi}),$$ where $\boldsymbol{\xi}\in \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$ and $\eta \in \mathbb R\setminus \{0\}$. 
Taking the exterior tangential trace $\gamma^+_D$ of [\[eq:ansatz\]](#eq:ansatz){reference-type="eqref" reference="eq:ansatz"} results in the CFIE $$\label{eq:cfie}
\mathcal L_{\kappa}(\boldsymbol{\xi}) = - \gamma_D^+ \boldsymbol{e}^i,$$ where the boundary integral operator is given by $$\label{eq:cfio}
\mathcal L_{\kappa} := i\eta \, S_{\kappa} \circ S_{i\kappa} + \left({-\dfrac{1}{2} Id+ C_{\kappa}}\right) \circ \left({\dfrac{1}{2} Id+ C_{i\kappa}}\right).$$ As $\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$ is its own dual with respect to $\langle \cdot, \cdot \rangle_{\times, \Gamma}$, the variational formulation of [\[eq:cfie\]](#eq:cfie){reference-type="eqref" reference="eq:cfie"} reads as: find $\boldsymbol{\xi}\in \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$ such that, for all $\boldsymbol{v}\in \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$ $$\label{eq:vf}
\left\langle{\boldsymbol{v}, \mathcal L_\kappa(\boldsymbol{\xi})}\right\rangle_{\times, \Gamma} = - \left\langle{\boldsymbol{v}, \gamma_D^+ \boldsymbol{e}^i}\right\rangle_{\times, \Gamma}.$$

**Remark 1**. *The CFIE [\[eq:cfie\]](#eq:cfie){reference-type="eqref" reference="eq:cfie"} is an indirect integral equation since the unknown $\boldsymbol{\xi}$ does not coincide with any physical quantity (see [@BEP+2009] and [@BBL+2015] for more details on direct and indirect integral equations). The corresponding direct formulation of [\[eq:cfie\]](#eq:cfie){reference-type="eqref" reference="eq:cfie"}, which is more commonly used by physicists and engineers, can be analogously analyzed.*

## Uniqueness

The most important property of CFIEs, which distinguishes them from the standard boundary integral equations, is the uniqueness of a solution for any wave number.

**Theorem 5** (Uniqueness). *For any $\eta \in \mathbb R\setminus \{0\}$ and any wave number $\kappa > 0$, the integral equation [\[eq:cfie\]](#eq:cfie){reference-type="eqref" reference="eq:cfie"} has at most one solution $\boldsymbol{\xi}\in \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$.*

*Proof.* Let $\boldsymbol{\xi}\in \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$ be a solution to the homogeneous equation $$\mathcal L_\kappa (\boldsymbol{\xi}) = 0.$$ It is clear that the scattered electric field $\boldsymbol{e}$ given by [\[eq:ansatz\]](#eq:ansatz){reference-type="eqref" reference="eq:ansatz"} is a solution to the exterior problem [\[eq:wave\]](#eq:wave){reference-type="eqref" reference="eq:wave"} and [\[eq:SilverMuller\]](#eq:SilverMuller){reference-type="eqref" reference="eq:SilverMuller"} with the homogeneous Dirichlet boundary condition $\boldsymbol{e}\times \textbf{n}= 0$. Referring to Rellich's lemma, we can conclude that $\boldsymbol{e}= 0$ in $\Omega^c$.
Therefore, the Stratton-Chu representation formula [\[eq:representation\]](#eq:representation){reference-type="eqref" reference="eq:representation"} can be invoked to get the interior traces $$\gamma_D^- \boldsymbol{e}= \left({ \gamma_D^- \circ \boldsymbol{\Psi}^{i\kappa}_{DL}}\right) (\boldsymbol{\xi}), \qquad\qquad\qquad\gamma_N^- \boldsymbol{e}= i \eta \left({\gamma_D^- \circ \boldsymbol{\Psi}^{i\kappa}_{SL}}\right) (\boldsymbol{\xi}).$$ On the one hand, the integration by parts formula [\[eq:identity\]](#eq:identity){reference-type="eqref" reference="eq:identity"} gives us that $$\left\langle{\gamma_D^- \boldsymbol{e}, \gamma_N^- \boldsymbol{e}}\right\rangle_{\times, \Gamma} = \int_\Omega\left({\kappa^2 \left\vert{\boldsymbol{e}(\boldsymbol{x})}\right\vert^2 - \left\vert{\textbf{\textup{curl}}\, \boldsymbol{e}(\boldsymbol{x})}\right\vert^2}\right) \,\mathrm{d}\boldsymbol{x}\in \mathbb R.$$ On the other hand $$\begin{aligned} \left\langle{\gamma_D^- \boldsymbol{e}, \gamma_N^- \boldsymbol{e}}\right\rangle_{\times, \Gamma} & = i\eta \left\langle{\left({\gamma_D^- \circ \boldsymbol{\Psi}^{i\kappa}_{DL}}\right) (\boldsymbol{\xi}), \left({\gamma_D^- \circ \boldsymbol{\Psi}^{i\kappa}_{SL}}\right) (\boldsymbol{\xi})}\right\rangle_{\times, \Gamma} \\ & = i\eta \int_\Omega\left({\kappa^2 \left\vert{\boldsymbol{\Psi}^{i\kappa}_{SL}(\boldsymbol{\xi})(\boldsymbol{x})}\right\vert^2 + \left\vert{\textbf{\textup{curl}}\, \boldsymbol{\Psi}^{i\kappa}_{SL}(\boldsymbol{\xi})(\boldsymbol{x})}\right\vert^2}\right) \,\mathrm{d}\boldsymbol{x}\in i\mathbb R. \end{aligned}$$ Please note that the real-valued potentials $\boldsymbol{\Psi}^{i\kappa}_{SL}$ and $\boldsymbol{\Psi}^{i\kappa}_{DL}$ are solutions to the equation [\[eq:wave1\]](#eq:wave1){reference-type="eqref" reference="eq:wave1"} fulfilling the radiation condition [\[eq:SilverMuller1\]](#eq:SilverMuller1){reference-type="eqref" reference="eq:SilverMuller1"} with $\sigma = i\kappa$. As a consequence, we can deduce that $$0 = \left\lVert{\boldsymbol{\Psi}^{i\kappa}_{SL}(\boldsymbol{\xi})}\right\rVert_{\mathop{\mathrm{\mathbf H}}(\textbf{\textup{curl}}, \Omega)} \ge C \left\lVert{S_{i\kappa}(\boldsymbol{\xi})}\right\rVert_{\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)},$$ for some positive constant $C$. This implies that $S_{i\kappa}(\boldsymbol{\xi}) = 0$ in $\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$. Finally, we invoke the $\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$-ellipticity of $S_{i\kappa}$ in Corollary [Corollary 1](#cor:ellipticity){reference-type="ref" reference="cor:ellipticity"} to conclude that $\boldsymbol{\xi}= 0$, or equivalently, the CFIE [\[eq:cfie\]](#eq:cfie){reference-type="eqref" reference="eq:cfie"} has at most one solution in $\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$. ◻ It is well-known that if the integral operator $\mathcal L_\kappa$ is a Fredholm operator of index $0$, then the injectivity of $\mathcal L_\kappa$ (i.e., Theorem [Theorem 5](#thm:uniqueness){reference-type="ref" reference="thm:uniqueness"}) also implies its surjectivity by a Fredholm alternative argument. Therefore, the next task is to prove that the operator $\mathcal L_\kappa$ satisfies a generalized Gårding inequality. ## Coercivity {#subsec:coercivity} The key tool to establish the coercivity of the operator $\mathcal L_\kappa$ is the Hodge decomposition for $\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$. 
Let the closed subspace $$\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma 0, \Gamma) := \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma) \cap \ker \left({\textup{div}_\Gamma}\right),$$ and $\mathop{\mathrm{\mathbf X}}(\Gamma)$ be its "$\mathop{\mathrm{\mathbf L}}^2_\textbf{t}(\Gamma)$-orthogonal\" complement defined in [@HS2003b Theorem 2.2]. **Lemma 2** (Hodge decomposition). *The embedding $\mathop{\mathrm{\mathbf X}}(\Gamma) \hookrightarrow\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$ is compact. Moreover, the following decomposition is direct and stable $$\label{eq:Hodge} \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma) = \mathop{\mathrm{\mathbf X}}(\Gamma) \oplus \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma 0, \Gamma).$$* *Proof.* See, e.g., [@BC2001b Theorem 5.1], [@BCS2002a Theorem 5.5] and [@HS2003b Theorem 2.2]. ◻ The following auxiliary result can be obtained by following the same lines of [@HS2003b Lemma 4.1]. **Lemma 3**. *For $\kappa > 0$, the operator $L : \mathop{\mathrm{\mathbf X}}(\Gamma) \to \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)^\prime$, defined by $L\boldsymbol{v}^\perp(\boldsymbol{u}) := \langle{\boldsymbol{v}^\perp, A_{i\kappa} \boldsymbol{u}}\rangle_{\times, \Gamma}$ for all $\boldsymbol{v}^\perp \in \mathop{\mathrm{\mathbf X}}(\Gamma), \boldsymbol{u}\in \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$, is compact.* We denote by $Z^\Gamma$ and $R^\Gamma$ the projection operators from $\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$ onto $\mathop{\mathrm{\mathbf X}}(\Gamma)$ and $\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma 0, \Gamma)$, respectively, which are associated with the Hodge decomposition [\[eq:Hodge\]](#eq:Hodge){reference-type="eqref" reference="eq:Hodge"}. It means that $\boldsymbol{u}= Z^\Gamma\boldsymbol{u}+ R^\Gamma\boldsymbol{u}$ with $Z^\Gamma\boldsymbol{u}\in \mathop{\mathrm{\mathbf X}}(\Gamma)$ and $R^\Gamma\boldsymbol{u}\in \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma 0, \Gamma)$ for all $\boldsymbol{u}\in \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$. Now, we are in the position to establish a generalized Gårding inequality for $\mathcal L_\kappa$. In what follows, we fix the coupling parameter $\eta \in \mathbb R\setminus \{0\}$ and keep it constant. The particular impact of this parameter on the CFIE will be discussed later. **Theorem 6** (coercivity). 
*For any wave number $\kappa > 0$, there exist a positive constant $C$, an isomorphism mapping $\Theta : \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma) \to \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$, and a compact sesquilinear form $c : \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma) \times \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma) \to \mathbb C$ such that $$\label{eq:Garding} \left\vert{\left\langle{\Theta \boldsymbol{u}, \mathcal L_\kappa \boldsymbol{u}}\right\rangle_{\times, \Gamma} + c(\boldsymbol{u}, \boldsymbol{u})}\right\vert \ge C \left\lVert{\boldsymbol{u}}\right\rVert^2_{\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)}, \qquad\forall\boldsymbol{u}\in \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma).$$* *Proof.* We rewrite the integral operator $\mathcal L_\kappa$ in the following form $$\label{eq:split} \mathcal L_\kappa = \widetilde{\mathcal L}_{\kappa} + c_1,$$ where $\widetilde{\mathcal L}_{\kappa} : \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma) \to \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$ is given by $$\widetilde{\mathcal L}_{\kappa} := i\eta \, \widetilde{S}_{i\kappa} \circ S_{i\kappa} + \left({-\dfrac{1}{2} Id + C_{i\kappa}}\right) \circ \left({\dfrac{1}{2} Id + C_{i\kappa}}\right),$$ and the operator $c_1: \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma) \to \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$ reads as $$c_1 := i\eta \left({\delta A_{\kappa} + \kappa^{-2} \textbf{\textup{curl}}_\Gamma\circ \delta V_{\kappa} \circ \textup{div}_\Gamma}\right) \circ S_{i\kappa} + \delta C_{\kappa} \circ \left({\dfrac{1}{2} Id + C_{i\kappa}}\right).$$ Here, the integral operators $S_{i\kappa}$ and $\widetilde{S}_{i\kappa}$ are defined by $$S_{i\kappa} = A_{i\kappa} - \kappa^{-2} \textbf{\textup{curl}}_\Gamma\circ V_{i\kappa} \circ \textup{div}_\Gamma, \qquad\quad\widetilde{S}_{i\kappa} = A_{i\kappa} + \kappa^{-2} \textbf{\textup{curl}}_\Gamma\circ V_{i\kappa} \circ \textup{div}_\Gamma.$$ The following Calderón projection formula can be deduced from the Stratton-Chu representation formula [\[eq:representation\]](#eq:representation){reference-type="eqref" reference="eq:representation"} $$\label{eq:Calderon} \left({-\dfrac{1}{2} Id + C_{i\kappa}}\right) \circ \left({\dfrac{1}{2} Id + C_{i\kappa}}\right) = \kappa^2 S_{i\kappa} \circ S_{i\kappa},$$ which implies that $$\widetilde{\mathcal L}_{\kappa} = \left({i\eta \, \widetilde{S}_{i\kappa} + \kappa^2 S_{i\kappa}}\right) \circ S_{i\kappa}.$$ Combining with the splitting [\[eq:split\]](#eq:split){reference-type="eqref" reference="eq:split"}, we arrive at the following formulation of $\mathcal L_\kappa$ $$\label{eq:cfio1} \mathcal L_{\kappa} = M_{i\kappa} \circ S_{i\kappa},$$ where the operator $M_{i\kappa} : \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma) \to \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$ is defined by $$M_{i\kappa} := i\eta \, \widetilde{S}_{i\kappa} + \kappa^2 S_{i\kappa} + c_1 \circ S^{-1}_{i\kappa} = \alpha A_{i\kappa} - \beta \kappa^{-2} \textbf{\textup{curl}}_\Gamma\circ V_{i\kappa} \circ \textup{div}_\Gamma+ c_1 \circ S^{-1}_{i\kappa}.$$ with the coefficients $\alpha := i\eta + \kappa^2$ and $\beta := \kappa^2 - i\eta$. 
From now on, we consider the formulation [\[eq:cfio1\]](#eq:cfio1){reference-type="eqref" reference="eq:cfio1"} of $\mathcal L_\kappa$ as an alternative to [\[eq:cfio\]](#eq:cfio){reference-type="eqref" reference="eq:cfio"}. Since the operator $S_{i\kappa}$ is $\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$-elliptic, we first target the operator $M_{i\kappa}$. The sesquilinear form $a : \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma) \times \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma) \to \mathbb C$ associated with $M_{i\kappa}$ reads as $$\begin{aligned} a(\boldsymbol{v}, \boldsymbol{u}) & := \alpha \left\langle{\boldsymbol{v}, A_{i\kappa} \boldsymbol{u}}\right\rangle_{\times, \Gamma} + \beta \kappa^{-2} \left\langle{\textup{div}_\Gamma\boldsymbol{v}, \left({V_{i\kappa} \circ \textup{div}_\Gamma}\right) \boldsymbol{u}}\right\rangle_{1/2, \Gamma} + \left\langle{\boldsymbol{v}, \left({c_1 \circ S^{-1}_{i\kappa}}\right) \boldsymbol{u}}\right\rangle_{\times, \Gamma}.\end{aligned}$$ By means of the projectors $Z^\Gamma$ and $R^\Gamma$, we are able to split $a$ as $a = a_0 + d$, where $$d(\boldsymbol{v}, \boldsymbol{u}) := (\alpha - \beta) \left\langle{Z^\Gamma\boldsymbol{v}, A_{i\kappa} \boldsymbol{u}}\right\rangle_{\times, \Gamma} + \left\langle{\boldsymbol{v}, \left({c_1 \circ S^{-1}_{i\kappa}}\right) \boldsymbol{u}}\right\rangle_{\times, \Gamma},$$ and the principal part $$a_0(\boldsymbol{v}, \boldsymbol{u}) := \left\langle{X^\Gamma\boldsymbol{v}, S_{i\kappa} \boldsymbol{u}}\right\rangle_{\times, \Gamma},$$ with $X^\Gamma:= \alpha R^\Gamma+ \beta Z^\Gamma$. Lemmas [Lemma 1](#lem:compact1){reference-type="ref" reference="lem:compact1"} and [Lemma 3](#lem:compact2){reference-type="ref" reference="lem:compact2"} reveal that $d$ is a compact perturbation of $a_0$. Moreover, the operator $X^\Gamma$ defines an isomorphism mapping on $\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$ with inverse $(X^\Gamma)^{-1} = \alpha^{-1} R^\Gamma+ \beta^{-1} Z^\Gamma$. In light of Corollary [Corollary 1](#cor:ellipticity){reference-type="ref" reference="cor:ellipticity"}, we can show that the sesquilinear form $a_0$ satisfies the inf-sup condition, i.e., for all $\boldsymbol{u}\in \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$ $$\label{eq:inf_sup} a_0\left({(X^\Gamma)^{-1} \boldsymbol{u}, \boldsymbol{u}}\right) = \left\langle{\boldsymbol{u}, S_{i\kappa} \boldsymbol{u}}\right\rangle_{\times, \Gamma} \ge C \left\lVert{\boldsymbol{u}}\right\rVert^2_{\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)}.$$ It means that $$a\left({(X^\Gamma)^{-1} \boldsymbol{u}, \boldsymbol{u}}\right) - d((X^\Gamma)^{-1} \boldsymbol{u}, \boldsymbol{u}) \ge C \left\lVert{\boldsymbol{u}}\right\rVert^2_{\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)}.$$ Finally, taking into account the fact that $S_{i\kappa}$ is $\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$-elliptic, we conclude that the operator $\mathcal L_\kappa$ satisfies the generalized Gårding inequality [\[eq:Garding\]](#eq:Garding){reference-type="eqref" reference="eq:Garding"}. 
◻ By a Fredholm alternative argument, the injectivity of $\mathcal L_\kappa$ (i.e., Theorem [Theorem 5](#thm:uniqueness){reference-type="ref" reference="thm:uniqueness"}) and the generalized Gårding inequality [\[eq:Garding\]](#eq:Garding){reference-type="eqref" reference="eq:Garding"} imply that the CFIE [\[eq:cfie\]](#eq:cfie){reference-type="eqref" reference="eq:cfie"} has a unique solution $\boldsymbol{\xi}\in \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$, see [@BCS2002 Proposition 3]. In particular, the following inf-sup condition holds for all $\boldsymbol{u}\in \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$ $$\label{eq:inf_sup1} \sup_{\boldsymbol{v}\in \mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)} \dfrac{\left\vert{\left\langle{\boldsymbol{v}, \mathcal L_\kappa \boldsymbol{u}}\right\rangle_{\times, \Gamma}}\right\vert}{\left\lVert{\boldsymbol{v}}\right\rVert_{\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)}} \ge C \left\lVert{\boldsymbol{u}}\right\rVert_{\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)},$$ for some constant $C > 0$ independent of $\boldsymbol{u}$. This inequality is needed for introducing a stable Galerkin discretization of the CFIE in the next section. **Remark 2**. *The CFIE formulation [\[eq:cfie\]](#eq:cfie){reference-type="eqref" reference="eq:cfie"} can be generalized when replacing the pure imaginary wave number $i\kappa$ by any wave number $i\kappa^\prime$ with $\kappa^\prime > 0$. The analysis can immediately be extended to this case.* # Galerkin discretization {#sec:discretization} This section aims to offer an appropriate Galerkin discretization scheme for the CFIE [\[eq:cfie\]](#eq:cfie){reference-type="eqref" reference="eq:cfie"}, which produces well-conditioned matrices when applied to large discrete problems. Let $(\Gamma_h)_{h > 0}$ be a family of shape-regular, quasi-uniform triangulations of the surface $\Gamma$, which only comprise flat triangles [@Ciarlet2002; @Monk2003]. The parameter $h$ stands for the meshwidth, which equals the length of the longest edge of triangulations. We denote by $\mathcal T_h$ and $\mathcal E_h$ the sets of all triangles and edges of $\Gamma_h$, respectively. On each triangle $T \in \mathcal T_h$, we equip the lowest-order triangular Raviart-Thomas space [@RT1977] $$\mathop{\mathrm{RT}}_0(T) := \left\lbrace{\boldsymbol{x}\mapsto \boldsymbol{a}+ b \boldsymbol{x}: \boldsymbol{a}\in \mathbb C^2, b \in \mathbb C}\right\rbrace.$$ This local space gives rise to the global $\textup{div}_\Gamma$-conforming boundary element space $$\mathop{\mathrm{\mathbf V}}_h := \left\lbrace{\boldsymbol{u}_h \in \mathop{\mathrm{\mathbf H}}^{-1/2}_\times(\textup{div}_\Gamma, \Gamma) : \left.\boldsymbol{u}_h\right|_T \in \mathop{\mathrm{RT}}_0(T), \, \forall T \in \mathcal T_h}\right\rbrace,$$ endowed with the edge degrees of freedom [@HS2003b] $$\phi_e(\boldsymbol{u}_h) := \int_e (\boldsymbol{u}_h \times \textbf{n}_j) \cdot \,\mathrm{d}s, \qquad\qquad\forall e \in \mathcal E_h,$$ where $\textbf{n}_j$ is the normal vector of a triangle $T$ in whose closure $e$ is contained. In order to develop a stable Galerkin discretization scheme, we draw on the idea of operator preconditioning from [@Hiptmair2006]. 
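Before recalling the operator-preconditioned form of $\mathcal L_\kappa$, the following minimal sketch may make the local space $\mathop{\mathrm{RT}}_0(T)$ more concrete. It is our own illustration on a hypothetical planar triangle (the coordinates, helper names and normalization are ours; in particular, the scaling may differ from the edge degrees of freedom $\phi_e$ used above): each local Raviart-Thomas function is an affine field $\boldsymbol{a}+ b\boldsymbol{x}$ with constant divergence, and it has a nonzero flux only across the edge it is attached to.

```python
import numpy as np

# A hypothetical flat triangle in the plane, vertices listed counterclockwise.
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.2, 0.8]])
area = 0.5 * abs((P[1, 0] - P[0, 0]) * (P[2, 1] - P[0, 1])
                 - (P[1, 1] - P[0, 1]) * (P[2, 0] - P[0, 0]))

def rt0_basis(x, i):
    """Raviart-Thomas function attached to the edge opposite vertex i:
    phi_i(x) = |e_i| / (2 |T|) * (x - p_i); its divergence is the constant |e_i| / |T|."""
    j, k = (i + 1) % 3, (i + 2) % 3
    edge_len = np.linalg.norm(P[k] - P[j])
    return edge_len / (2.0 * area) * (x - P[i])

def edge_flux(i, m, n_quad=100):
    """Flux of phi_i across the edge opposite vertex m (midpoint quadrature)."""
    j, k = (m + 1) % 3, (m + 2) % 3
    t = (np.arange(n_quad) + 0.5) / n_quad
    pts = P[j] + np.outer(t, P[k] - P[j])
    tangent = P[k] - P[j]
    normal = np.array([tangent[1], -tangent[0]])       # not normalized: |normal| = |e_m|
    normal *= np.sign(np.dot(normal, P[j] - P[m]))     # orient away from the opposite vertex
    return sum(np.dot(rt0_basis(p, i), normal) / n_quad for p in pts)

for i in range(3):
    fluxes = [edge_flux(i, m) for m in range(3)]
    # Only the entry m == i is nonzero; with this normalization it equals the length of edge e_i.
    print(f"phi_{i}: fluxes across the three edges = {np.round(fluxes, 12)}")
```

Summing the two local contributions of the triangles sharing an edge yields the global basis functions of $\mathop{\mathrm{\mathbf V}}_h$, one per edge of $\Gamma_h$.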
Let us recall that the operator $\mathcal L_\kappa$ has the following formulation $$\mathcal L_{\kappa} = M_{i\kappa} \circ S_{i\kappa},$$ where $$M_{i\kappa} = \alpha A_{i\kappa} - \beta \kappa^{-2} \textbf{\textup{curl}}_\Gamma\circ V_{i\kappa} \circ \textup{div}_\Gamma+ c_1 \circ S^{-1}_{i\kappa},$$ with $\alpha = i\eta + \kappa^2$ and $\beta = \kappa^2 - i\eta$, and the compact operator $$c_1 = i\eta \left({\delta A_{\kappa} + \kappa^{-2} \textbf{\textup{curl}}_\Gamma\circ \delta V_{\kappa} \circ \textup{div}_\Gamma}\right) \circ S_{i\kappa} + \delta C_{\kappa} \circ \left({\dfrac{1}{2} Id + C_{i\kappa}}\right).$$ Similar to section [4.2](#subsec:coercivity){reference-type="ref" reference="subsec:coercivity"}, we shall only examine the discrete inf-sup condition for the operator $M_{i\kappa}$. Again, the foremost task is to deal with the principal sesquilinear form $a_0 : \mathop{\mathrm{\mathbf H}}^{-1/2}_\times(\textup{div}_\Gamma, \Gamma) \times \mathop{\mathrm{\mathbf H}}^{-1/2}_\times(\textup{div}_\Gamma, \Gamma) \to \mathbb C$, which reads as $$a_0(\boldsymbol{v}, \boldsymbol{u}) = \left\langle{X^\Gamma\boldsymbol{v}, S_{i\kappa} \boldsymbol{u}}\right\rangle_{\times, \Gamma}.$$ It has been shown that $a_0$ satisfies the continuous inf-sup condition [\[eq:inf_sup\]](#eq:inf_sup){reference-type="eqref" reference="eq:inf_sup"}, which has been established by means of the continuous Hodge decomposition. The discrete inf-sup condition for the form $a_0$ can also be established by means of the discrete "$\mathop{\mathrm{\mathbf L}}^2_\textbf{t}(\Gamma)$-orthogonal\" Hodge decomposition $$\mathop{\mathrm{\mathbf V}}_h = \mathop{\mathrm{\mathbf X}}_h \oplus \mathop{\mathrm{\mathbf V}}^0_h,$$ where $\mathop{\mathrm{\mathbf V}}^0_h = \mathop{\mathrm{\mathbf V}}_h \cap \ker(\textup{div}_\Gamma) \subset\mathop{\mathrm{\mathbf H}}^{-1/2}_\times(\textup{div}_\Gamma 0, \Gamma)$. In general, $\mathop{\mathrm{\mathbf X}}_h \not\subset \mathop{\mathrm{\mathbf X}}(\Gamma)$. This means that, though the boundary element space $\mathop{\mathrm{\mathbf V}}_h$ is $\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$-conforming, the space $\mathop{\mathrm{\mathbf X}}_h$ represents only a nonconforming discretization of $\mathop{\mathrm{\mathbf X}}(\Gamma)$. A link between $\mathop{\mathrm{\mathbf X}}(\Gamma)$ and $\mathop{\mathrm{\mathbf X}}_h$ can be provided via the Hodge mapping $H_h : \mathop{\mathrm{\mathbf V}}_h \to \mathop{\mathrm{\mathbf X}}(\Gamma)$, which is defined by (cf. [@HT2000 Definition 4.1] or [@HS2003b Definition 6.1]) $$\textup{div}_\Gamma H_h \boldsymbol{u}_h = \textup{div}_\Gamma\boldsymbol{u}_h, \qquad\qquad\quad\forall\boldsymbol{u}_h \in \mathop{\mathrm{\mathbf V}}_h.$$ Thanks to [@HS2003b Lemma 6.2], there exists a real number $t \in \left(0, \frac{1}{2}\right]$ such that $$\label{eq:estimate} \left\lVert{\boldsymbol{u}_h - H_h \boldsymbol{u}_h}\right\rVert_{\mathop{\mathrm{\mathbf L}}^2(\Gamma)} \le C h^t \left\lVert{\textup{div}_\Gamma\boldsymbol{u}_h}\right\rVert_{\mathop{\mathrm{H}}^{-1/2}(\Gamma)}, \qquad\quad\forall\boldsymbol{u}_h \in \mathop{\mathrm{\mathbf X}}_h,$$ for some positive constant $C$ depending only on $t, \Gamma$ and $\Gamma_h$. Next, we denote by $Z_h^\Gamma$ and $R_h^\Gamma$ the Hodge decomposition operators from $\mathop{\mathrm{\mathbf V}}_h$ onto $\mathop{\mathrm{\mathbf X}}_h$ and $\mathop{\mathrm{\mathbf V}}^0_h$, respectively. 
Then, we can define a discrete analogue of $(X^\Gamma)^{-1}$ as follows $$(X^\Gamma_h)^{-1} := \alpha^{-1} R^\Gamma_h + \beta^{-1} Z^\Gamma_h,$$ which forms an isomorphism on $\mathop{\mathrm{\mathbf V}}_h$. For any $\boldsymbol{u}_h \in \mathop{\mathrm{\mathbf V}}_h$, it holds that $$\begin{aligned} \left({X^\Gamma\circ (X^\Gamma_h)^{-1}}\right) \boldsymbol{u}_h & = \left({\left({\alpha R^\Gamma+ \beta Z^\Gamma}\right) \circ \left({\alpha^{-1} R^\Gamma_h + \beta^{-1} Z^\Gamma_h}\right)}\right) \boldsymbol{u}_h \\ & = \boldsymbol{u}_h + \left({\alpha \beta^{-1} - 1}\right) \left({R^\Gamma\circ Z^\Gamma_h}\right) \boldsymbol{u}_h \\ & = \boldsymbol{u}_h + \left({\alpha \beta^{-1} - 1}\right) \left({R^\Gamma\circ \left({Id - H_h}\right) \circ Z^\Gamma_h}\right) \boldsymbol{u}_h.\end{aligned}$$ The last assertion involves the fact that $R^\Gamma\circ H_h = 0$. With the aid of [\[eq:estimate\]](#eq:estimate){reference-type="eqref" reference="eq:estimate"}, we can arrive at the following estimate $$\begin{aligned} a_0\left({(X^\Gamma_h)^{-1} \boldsymbol{u}_h, \boldsymbol{u}_h}\right) & = \left\langle{\left({X^\Gamma\circ (X^\Gamma_h)^{-1}}\right) \boldsymbol{u}_h, S_{i\kappa} \boldsymbol{u}_h}\right\rangle_{\times, \Gamma} \\ & = \left\langle{\boldsymbol{u}_h + \left({\alpha\beta^{-1} - 1}\right) \left({R^\Gamma\circ \left({Id - H_h}\right) \circ Z^\Gamma_h}\right) \boldsymbol{u}_h, S_{i\kappa} \boldsymbol{u}_h}\right\rangle_{\times, \Gamma} \\ & \ge \left\langle{\boldsymbol{u}_h, S_{i\kappa} \boldsymbol{u}_h}\right\rangle_{\times, \Gamma} - C \left\vert{\left\langle{\left({R^\Gamma\circ \left({Id - H_h}\right) \circ Z^\Gamma_h}\right) \boldsymbol{u}_h, S_{i\kappa} \boldsymbol{u}_h}\right\rangle_{\times, \Gamma}}\right\vert \\ & \ge \left({C_1 - C_2 h^t}\right) \left\lVert{\boldsymbol{u}_h}\right\rVert^2_{\mathop{\mathrm{\mathbf H}}^{-1/2}_\times(\textup{div}_\Gamma, \Gamma)},\end{aligned}$$ for some positive constants $C, C_1$ and $C_2$ independent of $\boldsymbol{u}_h$. This inequality implies that the sesquilinear form $a_0$ satisfies the discrete inf-sup condition for sufficiently small meshwidth $h < h_0$. To keep the paper concise, we refer the reader to [@HS2003b Section 7] for the treatment of compact perturbations of $a_0$. Thus, we can conclude the following uniform discrete inf-sup condition for $M_{i\kappa}$. **Lemma 4**. 
*For any wave number $\kappa > 0$ and any $h < h_0$, there exists a positive constant $C$ independent of $h$ such that the following discrete inf-sup condition is satisfied $$\sup_{\boldsymbol{v}_h \in \mathop{\mathrm{\mathbf V}}_h} \dfrac{\left\vert{\left\langle{\boldsymbol{v}_h, M_{i\kappa} \boldsymbol{u}_h}\right\rangle_{\times, \Gamma}}\right\vert}{\left\lVert{\boldsymbol{v}_h}\right\rVert_{\mathop{\mathrm{\mathbf H}}^{-1/2}_\times(\textup{div}_\Gamma, \Gamma)}} \ge C \left\lVert{\boldsymbol{u}_h}\right\rVert_{\mathop{\mathrm{\mathbf H}}^{-1/2}_\times(\textup{div}_\Gamma, \Gamma)}, \qquad\forall\boldsymbol{u}_h \in \mathop{\mathrm{\mathbf V}}_h.$$*

The next step towards a stable Galerkin discretization scheme is to search for a boundary element space $\mathop{\mathrm{\mathbf W}}_h \subset\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)$ such that $\dim \mathop{\mathrm{\mathbf W}}_h = \dim \mathop{\mathrm{\mathbf V}}_h$ and the duality pairing $\langle{\cdot, \cdot}\rangle_{\times, \Gamma}$ also satisfies the discrete inf-sup condition, i.e., $$\sup_{\boldsymbol{v}_h \in \mathop{\mathrm{\mathbf V}}_h} \dfrac{\left\vert{\left\langle{\boldsymbol{w}_h, \boldsymbol{v}_h}\right\rangle_{\times, \Gamma}}\right\vert}{\left\lVert{\boldsymbol{v}_h}\right\rVert_{\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)}} \ge C \left\lVert{\boldsymbol{w}_h}\right\rVert_{\mathop{\mathrm{\mathbf H}}^{-1/2}_{\times}(\textup{div}_\Gamma, \Gamma)}, \qquad\forall\boldsymbol{w}_h \in \mathop{\mathrm{\mathbf W}}_h.$$ As suggested in [@Hiptmair2006 Section 4], a suitable choice for $\mathop{\mathrm{\mathbf W}}_h$ is the $\textup{div}_\Gamma$-conforming boundary element space defined on the barycentric refinement of $\Gamma_h$, which is dual to $\mathop{\mathrm{\mathbf V}}_h$ in the sense of [@BC2007].

Let $\{\boldsymbol{v}_1, \boldsymbol{v}_2, \ldots, \boldsymbol{v}_N\}$ and $\{\boldsymbol{w}_1, \boldsymbol{w}_2, \ldots, \boldsymbol{w}_N\}$ be bases of $\mathop{\mathrm{\mathbf V}}_h$ and $\mathop{\mathrm{\mathbf W}}_h$, respectively, with $N := \dim \mathop{\mathrm{\mathbf V}}_h = \dim \mathop{\mathrm{\mathbf W}}_h$. We introduce the Galerkin matrices $\mathop{\mathrm{\mathbf M}}, \mathop{\mathrm{\mathbf S}}_w$ and $\mathop{\mathrm{\mathbf G}}$ associated with the operators $M_{i\kappa}, S_{i\kappa}$ and the duality pairing $\langle{\cdot, \cdot}\rangle_{\times, \Gamma}$ as follows $$\begin{aligned}
\left[\mathop{\mathrm{\mathbf M}}\right]_{mn} := \left\langle{\boldsymbol{v}_m, M_{i\kappa} \boldsymbol{v}_n}\right\rangle_{\times, \Gamma}, \quad \left[\mathop{\mathrm{\mathbf S}}_w\right]_{mn} := \left\langle{\boldsymbol{w}_m, S_{i\kappa} \boldsymbol{w}_n}\right\rangle_{\times, \Gamma}, \quad \left[\mathop{\mathrm{\mathbf G}}\right]_{mn} := \left\langle{\boldsymbol{w}_m, \boldsymbol{v}_n}\right\rangle_{\times, \Gamma},\end{aligned}$$ with $m, n = 1, 2, \ldots, N$. Thanks to [@CN2002] and [@Hiptmair2006 Theorem 2.1], the following condition number is uniformly bounded $$\label{eq:cond}
\mathop{\mathrm{cond}}\left({\mathop{\mathrm{\mathbf G}}^{-1}\mathop{\mathrm{\mathbf M}}\mathop{\mathrm{\mathbf G}}^{-\mathsf{T}}\mathop{\mathrm{\mathbf S}}_w}\right) \le C,$$ where $\mathop{\mathrm{cond}}(\cdot)$ stands for the spectral condition number of a square matrix, and $C$ is a positive constant independent of $h$.
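To see what the uniform bound [\[eq:cond\]](#eq:cond){reference-type="eqref" reference="eq:cond"} buys at the linear algebra level, one may look at the following deliberately simplified spectral caricature. It is not a boundary element computation: all matrices are synthetic stand-ins constructed by us, sharing one eigenbasis, with $\mathop{\mathrm{\mathbf M}}$ mimicking an operator whose symbol grows with the mode number, $\mathop{\mathrm{\mathbf S}}_w$ one whose symbol decays correspondingly, and $\mathop{\mathrm{\mathbf G}}$ a well-conditioned Gram matrix. The point is only the mechanism: the condition number of $\mathop{\mathrm{\mathbf M}}$ grows with the number of unknowns, while that of $\mathop{\mathrm{\mathbf G}}^{-1}\mathop{\mathrm{\mathbf M}}\mathop{\mathrm{\mathbf G}}^{-\mathsf{T}}\mathop{\mathrm{\mathbf S}}_w$ stays bounded.

```python
import numpy as np

def spectral_cond(A):
    """Spectral condition number: ratio of the largest to the smallest singular value."""
    s = np.linalg.svd(A, compute_uv=False)
    return s[0] / s[-1]

rng = np.random.default_rng(1)

for N in (64, 256, 1024):
    modes = np.arange(1, N + 1, dtype=float)
    Q, _ = np.linalg.qr(rng.standard_normal((N, N)))   # a fixed, arbitrary orthonormal basis
    M = Q @ np.diag(modes) @ Q.T                       # "symbol" ~ k: ill-conditioned as N grows
    S_w = Q @ np.diag(1.0 / modes) @ Q.T               # "symbol" ~ 1/k: opposite behaviour
    G = np.eye(N) + 0.05 * Q @ np.diag(1.0 / (1.0 + modes)) @ Q.T  # well-conditioned Gram matrix

    G_inv = np.linalg.inv(G)
    precond = G_inv @ M @ G_inv.T @ S_w                # the matrix appearing in eq. (cond)
    print(f"N = {N:5d}:  cond(M) = {spectral_cond(M):9.1f},  "
          f"cond(G^-1 M G^-T S_w) = {spectral_cond(precond):5.2f}")
```

In the actual discretization no such shared eigenbasis exists, of course; there the uniform bound rests on the discrete inf-sup conditions above and on [@Hiptmair2006 Theorem 2.1].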
It is well-known that this condition number is directly related to the number of iteration steps (in GMRES solvers for instance) required for the computation of the corresponding inverse matrix. However, the assembly of the matrix $\mathop{\mathrm{\mathbf M}}$ is not straightforward because it involves the inverse operator $S^{-1}_{i\kappa}$. To tackle this issue, we rewrite the operator $\mathcal L_\kappa$ in the following equivalent form $$\mathcal L_\kappa = i\eta \, S_{\kappa} \circ S_{i\kappa} + \kappa^2 S_{i\kappa} \circ S_{i\kappa} + \delta C_{\kappa} \circ \left({\dfrac{1}{2} Id + C_{i\kappa}}\right).$$ In order to employ a stable Galerkin discretization scheme which inherits the well-conditioning from [\[eq:cond\]](#eq:cond){reference-type="eqref" reference="eq:cond"}, the following Galerkin matrices are defined $$\begin{aligned} & \left[\mathop{\mathrm{\mathbf S}}_v\right]_{mn} := \left\langle{\boldsymbol{v}_m, S_{i\kappa} \boldsymbol{v}_n}\right\rangle_{\times, \Gamma}, && \left[\mathop{\mathrm{\mathbf Z}}\right]_{mn} := \left\langle{\boldsymbol{v}_m, S_{\kappa} \boldsymbol{v}_n}\right\rangle_{\times, \Gamma}, \\ & \left[\mathop{\mathrm{\mathbf C}}\right]_{mn} := \left\langle{\boldsymbol{v}_m, \delta C_{\kappa} \boldsymbol{v}_n}\right\rangle_{\times, \Gamma}, && \left[\mathop{\mathrm{\mathbf K}}\right]_{mn} := \left\langle{\boldsymbol{w}_m, \left({\dfrac{1}{2} Id + C_{i\kappa}}\right) \boldsymbol{w}_n}\right\rangle_{\times, \Gamma}.\end{aligned}$$ A discrete solution $\boldsymbol{\xi}_h \in \mathop{\mathrm{\mathbf W}}_h$ to the variational problem [\[eq:vf\]](#eq:vf){reference-type="eqref" reference="eq:vf"} can be represented as $$\boldsymbol{\xi}_h = \sum_{m = 1}^N \hat{\xi}_m \boldsymbol{w}_m.$$ Then, the expansion coefficient vector $[\hat{\boldsymbol{\xi}}_h]_m := \hat{\xi}_m$ is the solution to the following discrete matrix system $$\label{eq:discrete_cfie} \left({i\eta \mathop{\mathrm{\mathbf G}}^{-1}\mathop{\mathrm{\mathbf Z}}\mathop{\mathrm{\mathbf G}}^{-\mathsf{T}}\mathop{\mathrm{\mathbf S}}_w + \kappa^2 \mathop{\mathrm{\mathbf G}}^{-1}\mathop{\mathrm{\mathbf S}}_v\mathop{\mathrm{\mathbf G}}^{-\mathsf{T}}\mathop{\mathrm{\mathbf S}}_w + \mathop{\mathrm{\mathbf G}}^{-1}\mathop{\mathrm{\mathbf C}}\mathop{\mathrm{\mathbf G}}^{-\mathsf{T}}\mathop{\mathrm{\mathbf K}}}\right) \hat{\boldsymbol{\xi}}_h = \mathop{\mathrm{\mathbf G}}^{-1} \boldsymbol{b},$$ where the right hand side vector $\left[\boldsymbol{b}\right]_m := - \left\langle{\boldsymbol{v}_m, \gamma^+_D \boldsymbol{e}^i}\right\rangle_{\times, \Gamma}$. Finally, we end up with the following properties of the discrete linear system. **Theorem 7**. *For any wave number $\kappa > 0$ and any meshwidth $h < h_0$, the matrix system [\[eq:discrete_cfie\]](#eq:discrete_cfie){reference-type="eqref" reference="eq:discrete_cfie"} has a unique solution $\hat{\boldsymbol{\xi}}_h$. In addition, there exists a positive constant $C$ independent of $h$ such that $$\mathop{\mathrm{cond}}\left({i\eta \mathop{\mathrm{\mathbf G}}^{-1}\mathop{\mathrm{\mathbf Z}}\mathop{\mathrm{\mathbf G}}^{-\mathsf{T}}\mathop{\mathrm{\mathbf S}}_w + \kappa^2 \mathop{\mathrm{\mathbf G}}^{-1}\mathop{\mathrm{\mathbf S}}_v\mathop{\mathrm{\mathbf G}}^{-\mathsf{T}}\mathop{\mathrm{\mathbf S}}_w + \mathop{\mathrm{\mathbf G}}^{-1}\mathop{\mathrm{\mathbf C}}\mathop{\mathrm{\mathbf G}}^{-\mathsf{T}}\mathop{\mathrm{\mathbf K}}}\right) \le C.$$* **Remark 3**. 
*In the discrete setting, the Calderón formula [\[eq:Calderon\]](#eq:Calderon){reference-type="eqref" reference="eq:Calderon"} generally does not hold true. A Galerkin discretization of the operator $\mathcal L_\kappa$ using the formulation [\[eq:cfio\]](#eq:cfio){reference-type="eqref" reference="eq:cfio"} does not benefit from the discrete inf-sup condition established in Lemma [Lemma 4](#lem:inf_sup1){reference-type="ref" reference="lem:inf_sup1"}. Therefore, the solvability of the resulting linear systems and the stability of such a discretization require a separate examination. This, however, falls beyond the scope of this paper.* **Remark 4**. *The choice of the coupling parameter $\eta$ has a major impact on the spectral properties of the linear system [\[eq:discrete_cfie\]](#eq:discrete_cfie){reference-type="eqref" reference="eq:discrete_cfie"}, in particular on its condition number. There is no general theory concerning the optimal value of $\eta$ that yields the lowest condition number. Some discussions on this topic can be found in [@KS1983] and [@Kress1985].* # Numerical results {#sec:results} In this section, we perform some numerical experiments to evaluate the performance of the proposed CFIE and its Galerkin discretization. The scattering of electromagnetic waves by a sphere and by a unit cube is considered. Throughout all experiments, the coupling parameter $\eta = \kappa^2$ is chosen, based on a deduction from the optimal value of $\eta$ for the electromagnetic scattering by the unit sphere in [@Kress1985]. ## Sphere We start with the scattering by a perfectly conducting sphere of radius $1\mathrm{m}$, centered at the origin of the coordinate system. Excitation is provided by an incident plane wave with electric field $$\boldsymbol{e}^i(\boldsymbol{x}) = \hat{\boldsymbol{x}} \exp(i\kappa \, \hat{\boldsymbol{z}} \cdot \boldsymbol{x}),$$ where $\hat{\boldsymbol{x}}$ and $\hat{\boldsymbol{z}}$ stand for the unit vectors along the $x$-axis and $z$-axis, respectively. To demonstrate the solvability of the proposed CFIE, we choose $\kappa = 4.4934$, which is a resonant wave number associated with the sphere of radius $1\mathrm{m}$ (cf. [@Jin2010 Section 7.2]). The exact solution to the exterior Dirichlet boundary-value problem [\[eq:wave\]](#eq:wave){reference-type="eqref" reference="eq:wave"}-[\[eq:SilverMuller\]](#eq:SilverMuller){reference-type="eqref" reference="eq:SilverMuller"} can be derived via the famous Mie series, see, e.g., [@Monk2003 Section 9.5] and [@Jin2010 Section 7.4.3]. The scattered electric field $\boldsymbol{e}$ and the scattered magnetic field $\boldsymbol{h}:= \frac{1}{i\kappa} \textbf{\textup{curl}}\, \boldsymbol{e}$ are computed along the circle $$\boldsymbol{x}= 2 \left({\cos \theta, 0, \sin \theta}\right)^\mathsf{T}, \qquad\qquad\quad\theta \in [0, 2\pi).$$ The sphere is approximated by $4994$ plane triangles with $2499$ vertices, leading to $7491$ degrees of freedom. Figure [2](#fig:scatted_field){reference-type="ref" reference="fig:scatted_field"} shows that the numerical solution to the CFIE [\[eq:cfie\]](#eq:cfie){reference-type="eqref" reference="eq:cfie"} is in agreement with the solution constructed via the Mie series. ![The intensity of scattered electric field $\boldsymbol{e}$ (*left*) and scattered magnetic field $\boldsymbol{h}$ (*right*) for a sphere of radius $1\mathrm{m}$, excited by an incident plane wave with wave number $\kappa = 4.4934$. 
The scattered fields are computed via the Mie series and the proposed CFIE along the circle $\boldsymbol{x}= 2 \left({\cos \theta, 0, \sin \theta}\right)^\mathsf{T}$, with observation angle $\theta \in [0, 2\pi)$.](scatteredE.pdf){#fig:scatted_field width="\\textwidth"} ![The intensity of scattered electric field $\boldsymbol{e}$ (*left*) and scattered magnetic field $\boldsymbol{h}$ (*right*) for a sphere of radius $1\mathrm{m}$, excited by an incident plane wave with wave number $\kappa = 4.4934$. The scattered fields are computed via the Mie series and the proposed CFIE along the circle $\boldsymbol{x}= 2 \left({\cos \theta, 0, \sin \theta}\right)^\mathsf{T}$, with observation angle $\theta \in [0, 2\pi)$.](scatteredH.pdf){#fig:scatted_field width="\\textwidth"} Next, we validate the stability of the proposed Galerkin discretization scheme by examining the condition number of matrices arising from the discrete linear system [\[eq:discrete_cfie\]](#eq:discrete_cfie){reference-type="eqref" reference="eq:discrete_cfie"} of the CFIE with varying meshwidth $h$ and wave number $\kappa$. For the purpose of comparison, the performance of the standard electric field integral equation (EFIE) discretized by the linear Raviart-Thomas boundary elements is also demonstrated. In the first experiment, the wave number is fixed at $\kappa = \frac{2\pi}{3}$ (which is far away from resonant wave numbers), and the meshwidth varies from $0.055\mathrm{m}$ to $0.45\mathrm{m}$. Figure [4](#fig:sphere_cond){reference-type="ref" reference="fig:sphere_cond"} (*left*) shows that, whereas the condition number of matrices derived from the EFIE increases when the meshwidth decreases, that of the proposed CFIE remains stable. The second numerical experiment is performed for the fixed meshwidth $h = 0.15\mathrm{m}$, while the wave number varies from $\frac{4\pi}{3}$ to $\frac{5\pi}{3}$. This range contains two resonant wave numbers $\kappa = 4.4934$ and $\kappa = 4.9734$ (cf. [@Jin2010 Section 7.2]). As shown in Figure [4](#fig:sphere_cond){reference-type="ref" reference="fig:sphere_cond"} (*right*), the condition number of EFIE's discrete system grows dramatically when the wave number gets close to the resonant ones. In contrast, the condition number of the discrete linear system [\[eq:discrete_cfie\]](#eq:discrete_cfie){reference-type="eqref" reference="eq:discrete_cfie"} stays almost constant. ![Condition number of matrices arising from Galerkin discretizations of the EFIE and CFIE for a sphere of radius $1\mathrm{m}$. *Left:* the wave number is fixed at $\kappa = \frac{2\pi}{3}$, and the meshwidth is varying. *Right:* the meshwidth is fixed at $h = 0.15\mathrm{m}$, and the wave number is varying.](sphere_h.pdf){#fig:sphere_cond width="\\textwidth"} ![Condition number of matrices arising from Galerkin discretizations of the EFIE and CFIE for a sphere of radius $1\mathrm{m}$. *Left:* the wave number is fixed at $\kappa = \frac{2\pi}{3}$, and the meshwidth is varying. *Right:* the meshwidth is fixed at $h = 0.15\mathrm{m}$, and the wave number is varying.](sphere_kappa.pdf){#fig:sphere_cond width="\\textwidth"} ## Cube We now consider the scattering by a simple Lipschitz polyhedron: a cube of edge $1\mathrm{m}$. The resonant wave numbers associated with this domain are determined by $\sqrt{n}\pi$ for any positive integer $n$, see [@VanBladel2007 Section 10.4.2]. We investigate the performance of the proposed Galerkin discretization scheme for the unit cube by repeating two last experiments for the sphere. 
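As a quick sanity check on which of these resonances fall into a given scan interval, the small sketch below enumerates the wave numbers $\sqrt{n}\pi$ of the unit cube inside the range used in the cube experiment described next. The routine is only an illustration and is not part of the solver.

```python
import math

def cube_resonances(k_min, k_max):
    """Resonant wave numbers of the unit cube, kappa = sqrt(n) * pi (n = 1, 2, ...),
    restricted to the interval [k_min, k_max]."""
    res = []
    n = 1
    while math.sqrt(n) * math.pi <= k_max:
        k = math.sqrt(n) * math.pi
        if k >= k_min:
            res.append((n, k))
        n += 1
    return res

# Wave-number scan range used for the cube experiment below:
for n, k in cube_resonances(4.224, 5.785):
    print(f"n = {n}: kappa = {k:.4f}")
# -> n = 2: kappa = 4.4429 and n = 3: kappa = 5.4414
```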
Firstly, the wave number is fixed at $\kappa = \frac{\pi}{2}$, which is smaller than the smallest resonant wave number, and the meshwidth ranges from $0.04\mathrm{m}$ to $0.5\mathrm{m}$. Afterwards, the meshwidth is chosen as $h = 0.1\mathrm{m}$, and the wave number varies in the range from $4.224$ to $5.785$, which contains two resonant wave numbers $\sqrt{2} \pi \approx 4.4429$ and $\sqrt{3} \pi \approx 5.4414$. The condition numbers of matrices derived from discretizations of the standard EFIE and the proposed CFIE are depicted in Figure [6](#fig:cube_cond){reference-type="ref" reference="fig:cube_cond"}. The standard Galerkin discretization of the EFIE is unstable in the sense that its GMRES solvers require many more iteration steps to reach a desired accuracy of the solution when the wave number $\kappa$ is close to the resonant ones, or when the meshwidth $h$ decreases. In both cases, the discrete linear system [\[eq:discrete_cfie\]](#eq:discrete_cfie){reference-type="eqref" reference="eq:discrete_cfie"} remains stable. These numerical results corroborate the solvability of the CFIE [\[eq:cfie\]](#eq:cfie){reference-type="eqref" reference="eq:cfie"} for all wave numbers, as well as the well-conditioning of its Galerkin discretization regardless of the numerical resolution of the problem. ![Condition number of matrices arising from Galerkin discretizations of the EFIE and CFIE for a cube of edge $1\mathrm{m}$. *Left:* the wave number is fixed at $\kappa = \frac{\pi}{2}$, and the meshwidth is varying. *Right:* the meshwidth is fixed at $h = 0.1\mathrm{m}$, and the wave number is varying.](cube_h.pdf){#fig:cube_cond width="\\textwidth"} ![Condition number of matrices arising from Galerkin discretizations of the EFIE and CFIE for a cube of edge $1\mathrm{m}$. *Left:* the wave number is fixed at $\kappa = \frac{\pi}{2}$, and the meshwidth is varying. *Right:* the meshwidth is fixed at $h = 0.1\mathrm{m}$, and the wave number is varying.](cube_kappa.pdf){#fig:cube_cond width="\\textwidth"} # Conclusions {#sec:conclusions} In this contribution, we have introduced a boundary integral equation for electromagnetic scattering by a perfect electric conductor with Lipschitz continuous boundary. This equation yields a unique solution for all wave numbers. In addition, we have proposed a Galerkin discretization scheme for the integral equation, which produces well-conditioned linear systems regardless of the numerical resolution of the problem. In a forthcoming work, the convergence rate of the proposed discretization scheme should be investigated. To boost the robustness of the numerical scheme, particularly for complex geometries with corners, other pairs of dual meshes and the corresponding boundary elements should also be considered. [^1]: This work was funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 101001847).
--- abstract: | A new variable Eddington factor (VEF) model is presented for nonlinear problems of thermal radiative transfer (TRT). The VEF model is a data-driven one that acts on known (a-priori) radiation-diffusion solutions for material temperatures in the TRT problem. A linear auxiliary problem is constructed for the radiative transfer equation (RTE) with opacities and emission source evaluated at the known material temperatures. The solution to this RTE approximates the specific intensity distribution for the problem in all phase-space and time. It is applied as a shape function to define the Eddington tensor for the presented VEF model. The shape function computed via the auxiliary RTE problem will capture some degree of transport effects within the TRT problem. The VEF moment equations closed with this approximate Eddington tensor will thus carry with them these captured transport effects. In this study, the temperature data comes from multigroup $P_1$, $P_{1/3}$, and flux-limited diffusion radiative transfer (RT) models. The proposed VEF model can be interpreted as a transport-corrected diffusion reduced-order model. Numerical results are presented on the Fleck-Cummings test problem which models a supersonic wavefront of radiation. The presented VEF model is shown to reliably improve accuracy by 1-2 orders of magnitude compared to the considered radiation-diffusion model solutions to the TRT problem. address: - Computational Physics and Methods Group, Los Alamos National Laboratory, Los Alamos, NM - Department of Nuclear Engineering, North Carolina State University, Raleigh, NC - jmcoale\@lanl.gov - anistratov\@ncsu.edu author: - Joseph M. Coale - Dmitriy Y. Anistratov bibliography: - jmc-dya-tcdrom.bib title: A Variable Eddington Factor Model for Thermal Radiative Transfer with Closure based on Data-Driven Shape Function --- Boltzmann transport equation, variable Eddington tensor, radiation diffusion, model order reduction, nonlinear PDEs # Introduction The modeling and simulation of thermal radiation transport is an important consideration for many applications. Such phenomena can be found in a wide array of different fields, including: high-energy density (HED) physics, astrophysics, plasma physics, fire and combustion physics, and atmospheric and ocean sciences [@zel-1966; @mihalas-FRH-1984; @shu-astro; @thomas-stamnes-atm; @faghri-sunden-2008; @drake-hedp]. Multiphysical models of these phenomena comprise complex systems of partial differential equations (PDEs) which must be solved by means of numerical simulation. Some challenges are associated with the numerical simulation these systems. The involved equations are characterized by tight coupling, strong nonlinearities and multiscale behavior in space-time. Since radiative transfer (RT) is an important mechanism for energy redistribution in these phenomena, a photon transport model must be included in the systems of equations to model RT effects. The equations of RT are high-dimensional, their solution typically depending on 7 independent variables in 3D geometry. Thus along with the aforementioned challenges, a massive number of degrees of freedom (DoF) must be used in calculations to adequately describe the overall solution. For a simulation that discretizes each independent variable with an $x-$point grid, $x^7$ DoF are required, making it reasonable to reach trillions or quadrillions of DoF. 
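As a rough back-of-the-envelope illustration of this growth (not tied to any particular discretization or code), the unknown count for an $x$-point grid in each of the 7 phase-space variables is simply $x^7$:

```python
# Degrees of freedom for a 7-dimensional phase space (3 space, 2 angle,
# 1 frequency, 1 time) discretized with x points per independent variable.
for x in (10, 50, 100, 200):
    print(f"x = {x:4d}  ->  DoF = x**7 = {x**7:.3e}")
# x = 100 already gives 1e14 unknowns, i.e. on the order of a hundred trillion.
```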
The resulting computational load and memory occupation can become intractable for large simulations without the use of exascale computing resources. A common approach to develop models for simulation of RT phenomena in multiphysics problems is to apply methods of dimensionality reduction for the RT component. This will significantly reduce the required computational resources in exchange for some level of approximation and modeling error in the resulting solution. The goal for an RT model is to strike a balance between computational load and the desired fidelity of solutions. There exist many well-known RT models based on moment equations with approximate closures which have seen extensive use in applications, such as $M_N$ methods that use maximum entropy closure relations, the $P_1$ and $P_{1/3}$ models, and flux-limited diffusion models [@olson-auer-hall-2000; @morel-2000; @simmons-mihalas-2000; @hauck-2011; @hauck-2012; @m1-2017]. Variable Eddington factor (VEF) models make up another such class of RT models [@eddington-1926; @pomraning-1969; @l-p-1981]. VEF models are constructed by reformulating the radiation pressure tensor in terms of the Eddington tensor, which brings closure to the moment equations. There exist many ways to construct approximations of the Eddington tensor. Some commonly used VEF models apply Wilson, Kershaw and Levermore closures [@wilson-1970; @kershaw-1976; @levermore-1984; @levermore-1996; @m1-2017]. Numerical methods for solving the Boltzmann transport equation have been developed based on the first two angular moment equations with exact closure defined by the Eddington tensor [@gol'din-1964; @auer-mihalas-1970; @gol'din-1972; @PASE-1986; @winkler-norman-mihalas-85]. The anisotropic diffusion (AD) model has been developed for thermal radiative transfer (TRT) and other particle transport applications [@trahan-larsen-2009; @johnson-larsen-2011; @trahan-larsen-2011; @johnson-larsen-2011-2]. A diffusion equation is constructed which is closed by means of an AD coefficient. The AD coefficient is defined via the solution to an auxiliary transport problem which takes the form of an angular and spatially dependent shape function. The special shape function accounts for some degree of transport effects in the TRT problem. This yields the tensor-diffusion moment equations with transport-corrected AD coefficients. The AD model uses just one transport sweep to solve for the auxiliary function. A new approach based on data-driven reduced-order models (ROMs), which makes use of data-based methodologies for dimensionality reduction, has been gaining popularity in recent years. 
Data-driven models have been developed for (i) linear particle transport problems [@buchan-2015; @tencer-2017; @soucasse-2019; @behne-ragusa-morel-2019; @hardy-morel-ahrens-2019; @prince-ragusa-2019-1; @prince-ragusa-2019-2; @Dominesey-2019; @peng-mcclarren-frank-2019; @peng-mcclarren-tans2019; @pozulp-2019; @peng-mcclarren-frank-2020; @pozulp-brantley-palmer-vujic-2021; @peng-mcclarren-2021; @behne-ragusa-tano-2021; @choi-2021; @elhareef-wu-ma-2021; @peng-chen-cheng-li-2022; @hughes-buchan-2022; @huang-I-2022] (ii) nonlinear RT problems [@pinnau-schulze-2007; @qian-wang-song-pant-2015; @fagiano-2016; @alberti-palmer-2018; @jc-dya-m&c2019; @jc-dya-tans2019; @jc-dya-m&c2021; @girault-2021; @alberti-jqsrt-2022; @jc-dya-jqsrt2023; @dya-jc-m&c2023-2; @jmc-dya-arxiv2023], and (iii) various problems in nuclear reactor-physics [@cherezov-sanchez-joo-2018; @alberti-palmer-2019; @german-ragusa-2019-1; @german-ragusa-2019-2; @alberti-palmer-2020; @german-tano-ragusa-2021; @elzohery-roberts-2021-1; @elzohery-roberts-2021-2; @elzohery-roberts-2021-3; @phillips-2021; @Dominesey-2022]. The fundamental idea behind these ROMs is to leverage databases of solutions to their problems of interest (known a-priori) to develop some reduction in the dimensionality for their involved equations. By nature, these models are problem-dependent since they are formulated using chosen datasets, and this allows for higher levels of accuracy than displayed by other types of ROMs (within the regime of parameters covered by the datasets). In this paper we present a data-driven VEF model for the fundamental TRT problem. This kind of TRT problem models a supersonic flow of radiation through matter and neglectshydrodynamic motion of the underlying material and heat conduction [@Moore-2015]. Note that the TRT problem is characterized by the same fundamental challenges as the more general class of problems (e.g. radiation-hydrodynamics problems) and serves as a useful computational platform for the development of new models. There exist several pathways to obtain an approximate Eddington tensor by data-driven means if data on the RT solution for a subset of problem parameters can be obtained [@jc-dya-m&c2019; @jc-dya-m&c2021; @jc-dya-jqsrt2023; @dya-jc-m&c2023-2; @jmc-dya-arxiv2023]. For the model developed here, the Eddington tensor is computed from approximate solution data to the TRT problem generated with radiation-diffusion models. It has been previously shown that the solution of the Boltzmann transport equation computed with a scattering source term evaluated by a diffusion solution yields a sufficiently accurate shape function for estimation of the Eddington tensor in linear problems [@dya-ns-2012; @ns-dya-mla-2014]. An extension of this idea to the nonlinear TRT problem is to use the material temperatures evaluated with a radiation-diffusion model to compute the opacity and emission source in the radiative transfer equation (RTE). This step of the new model uses only one transport sweep to calculate a shape function accounting approximately for transport effects, and to generate the Eddington tensor for the TRT problem. The remainder of the paper is as follows. The TRT problem is described in Sec. [2](#sec:trt){reference-type="ref" reference="sec:trt"}, along with definitions for several classical moment-based RT models applied to generate data for computation of the auxiliary shape function. The new data-driven VEF model is formulated in Sec. [3](#sec:vef){reference-type="ref" reference="sec:vef"}. 
Numerical results are given in Sec. [4](#sec:results){reference-type="ref" reference="sec:results"}, followed by conclusions in Sec. [5](#sec:conclusion){reference-type="ref" reference="sec:conclusion"}. # Thermal Radiative Transfer and Models Based on Moment Equations {#sec:trt} The TRT problem is formulated by the multigroup RTE [\[bte\]]{#bte label="bte"} $$\begin{gathered} \frac{1}{c}\frac{\partial I_g}{\partial t} + \boldsymbol{\Omega}\cdot\boldsymbol{\nabla}I_g + \varkappa_g(T)I_g = \varkappa_g(T)B_g(T), \\ % I_g|_{\boldsymbol{r}\in\partial\Gamma}=I_g^\text{in}\ \ \text{for}\ \ \boldsymbol{\Omega}\cdot\boldsymbol{n}_\Gamma<0,\quad I_g|_{t=0}=I_g^0, \\ % \boldsymbol{r}\in\Gamma,\quad t> 0,\quad \boldsymbol{\Omega}\in\mathcal{S},\quad g=1,\dots,G, \nonumber \end{gathered}$$ and the material energy balance (MEB) equation $$\label{meb} \frac{\partial \varepsilon(T)}{\partial t} = {\sum_{g=1}^{G}\bigg(\int_{4\pi}I_gd\Omega - 4\pi B_g(T)\bigg)\varkappa_g(T)},\quad T|_{t=0}=T^0 \, ,$$ where $\boldsymbol{r}$ is spatial position, $t$ is time, $g$ is the frequency group index, $\Gamma$ is the spatial domain, $\partial\Gamma$ is the boundary surface of $\Gamma$, $\boldsymbol{n}_\Gamma$ is the unit outward normal to $\partial\Gamma$, $I_g(\boldsymbol{r},\boldsymbol{\Omega},t)$ is the group specific photon intensity, $T(\boldsymbol{r},t)$ is the material temperature, $\varkappa_{g}(\boldsymbol{r},t;T)$ is the group material opacity, $\varepsilon(\boldsymbol{r},t;T)$ is the material energy density, and $B_g(\boldsymbol{r},t;T)$ is the group Planckian function given by $$\label{eq:planck} B_g(T) = \frac{2}{c^2h^2}\int_{\nu_{g-1}}^{\nu_{g}} \frac{\nu^3 }{e^{\frac{\nu}{T}}-1} d\nu .$$ $c$ is the speed of light, $h$ is Planck's constant, $\nu$ is photon frequency. There are several TRT models which apply moment equations for the group radiation energy density $$\label{Eg} E_g = \frac{1}{c} \int_{4\pi} {I}_g \ d\Omega ,$$ and flux $$\label{Fg} \boldsymbol{F}_g = \int_{4\pi} \boldsymbol{\Omega} {I}_g \ d\Omega,$$ to approximate the RTE. The $P_1$ model is defined by the multigroup $P_1$ equations for radiative transfer, given by [\[eq:p1_eqs\]]{#eq:p1_eqs label="eq:p1_eqs"} $$\begin{gathered} \frac{\partial E_g}{\partial t} + \boldsymbol{\nabla}\cdot\boldsymbol{F}_g + c\varkappa_g(T)E_g = 4\pi\varkappa_g(T)B_g(T), \label{p1_ebal}\\ % \frac{1}{c}\frac{\partial \boldsymbol{F}_g}{\partial t} + \frac{c}{3}\boldsymbol{\nabla} E_g + \varkappa_g(T) \boldsymbol{F}_g = 0, \label{p1_m1}\\ % \boldsymbol{F}_g|_{\boldsymbol{r}\in\partial\Gamma} = \frac{1}{2}E_g|_{\boldsymbol{r}\in\partial\Gamma}+2{F}_g^\text{in},\quad \\ E_g|_{t=0}=E_g^0,\quad \boldsymbol{F}_g|_{t=0}=\boldsymbol{F}_g^0 . \end{gathered}$$ The hyperbolic time-dependent $P_1$ equations are derived from the RTE by taking its $0^\text{th}$ and $1^\text{st}$ angular moments. 
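For illustration, the group integral in Eq. [\[eq:planck\]](#eq:planck){reference-type="eqref" reference="eq:planck"} can be evaluated by straightforward numerical quadrature. The sketch below keeps $\nu$ and $T$ in the same energy units and treats the $\frac{2}{c^2h^2}$ prefactor as a user-supplied constant, since its value depends on the chosen unit system; the example group boundaries are taken from the group structure used later in the numerical tests.

```python
import numpy as np
from scipy.integrate import quad

def group_planck(T, nu_lo, nu_hi, prefactor=1.0):
    """Group Planckian B_g(T) = prefactor * int_{nu_lo}^{nu_hi} nu^3 / (exp(nu/T) - 1) dnu.

    T and the group boundaries are in the same energy units (e.g. keV);
    'prefactor' stands in for the 2/(c^2 h^2) factor in the adopted unit system.
    """
    def integrand(nu):
        # The integrand tends to 0 as nu -> 0; guard the endpoint explicitly.
        return 0.0 if nu == 0.0 else nu**3 / np.expm1(nu / T)
    val, _ = quad(integrand, nu_lo, nu_hi)
    return prefactor * val

# Example: emission of a T = 1 keV material into the first three groups of a grid.
edges = [0.0, 0.7075, 1.415, 2.123]   # group boundaries in keV
for g in range(len(edges) - 1):
    print(f"group {g + 1}: B_g = {group_planck(1.0, edges[g], edges[g + 1]):.4e}")
```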
Closure for the moment equations is formulated by defining the highest ($2^\text{nd}$) angular moment $$H_g = \int_{4\pi} \boldsymbol{\Omega}\otimes\boldsymbol{\Omega} {I}_g\ d\Omega$$ with the expansion $$I_g = \frac{1}{4 \pi} (cE_g + 3 \boldsymbol{\Omega}\cdot \boldsymbol{F}_g) \, .$$ This approximation yields $$H_g = \frac{c}{3} E_g \, .$$ The $P_{1/3}$ model for radiative transfer is formulated by the balance equation [\[p1_ebal\]](#p1_ebal){reference-type="eqref" reference="p1_ebal"} and the modified first moment equation given by [@olson-auer-hall-2000; @morel-2000] $$\label{p1/3-m1} \frac{1}{3c}\frac{\partial \boldsymbol{F}_g}{\partial t} + \frac{c}{3}\boldsymbol{\nabla} E_g + \varkappa_g(T) \boldsymbol{F}_g = 0 \, .$$ The factor $\frac{1}{3}$ at the time-derivative term in Eq. [\[p1/3-m1\]](#p1/3-m1){reference-type="eqref" reference="p1/3-m1"} produces the correct propagation speed of radiation in vacuum. The flux-limited diffusion (FLD) models are defined by the time-dependent multigroup diffusion equations [@l-p-1981; @krumholz-2007; @kuiper-2010; @a&a-2015; @tetsu-2016] [\[eq:diff\]]{#eq:diff label="eq:diff"} $$\begin{gathered} \label{eq:diff_eq} \frac{\partial E_g}{\partial t} + c\boldsymbol{\nabla}\cdot(-D_g\boldsymbol{\nabla} E_g) + c\varkappa_{g}(T)E_g = 4\pi\varkappa_{g}(T)B_g(T), \\ %%% \boldsymbol{n}_\Gamma\cdot(-cD_g\boldsymbol{\nabla} E_g)|_{\boldsymbol{r}\in\partial\Gamma}=\frac{1}{2}E_g|_{\boldsymbol{r}\in\partial\Gamma}+2{F}_g^\text{in},\quad E_g|_{t=0}=E_g^0, \end{gathered}$$ and $$\boldsymbol{F}_g = -cD_g\boldsymbol{\nabla} E_g \, ,$$ where $D_g$ is the group diffusion coefficient. In this model, the time derivative of the flux in the first moment equation is neglected. This leads to a parabolic time-dependent equation for $E_g$ with the diffusion coefficient defined by $$\label{diff-coef} D_g = \frac{1}{3\varkappa_{g}(T)} \, .$$ In general, the solution of the diffusion equation [\[eq:diff_eq\]](#eq:diff_eq){reference-type="eqref" reference="eq:diff_eq"} with $D_g$ defined by Eq. [\[diff-coef\]](#diff-coef){reference-type="eqref" reference="diff-coef"} does not satisfy the flux-limiting condition $$\label{f-limit} \frac{| \mathbf{F}_g|}{cE_g} \le 1 \, ,$$ stemming from the definitions of the radiation density and flux (Eqs. [\[Eg\]](#Eg){reference-type="eqref" reference="Eg"} and [\[Fg\]](#Fg){reference-type="eqref" reference="Fg"}). The FLD models introduce modifications of the diffusion coefficient to meet the condition in Eq. [\[f-limit\]](#f-limit){reference-type="eqref" reference="f-limit"}. In this study, we consider the coefficient proposed by E. Larsen [@olson-auer-hall-2000] $$D_g(T,E_g) = \bigg[ \ \big( 3\varkappa_{g}(T) \big)^2 + \bigg( \frac{1}{E_g}\boldsymbol{\nabla}E_g \bigg)^2 \ \bigg]^{-\frac{1}{2}}.$$ # Variable Eddington Factor Model for TRT with Diffusion-Based Shape Function {#sec:vef} The Variable Eddington factor method is defined by the balance equation (Eq. 
[\[p1_ebal\]](#p1_ebal){reference-type="eqref" reference="p1_ebal"}) and the first moment equation $$\label{lovef1} \frac{1}{c}\frac{\partial \boldsymbol{F}_g}{\partial t} +c\boldsymbol{\nabla} (\mathfrak{f}_gE_g) + \varkappa_g(T) \boldsymbol{F}_g = 0 \, ,$$ where closure is defined with $$H_g = c\mathfrak{f}_g[\tilde I] E_g,$$ by means of the Eddington tensor given as $$\label{approx-ET} \mathfrak{f}_g[\tilde I] = \frac{\int_{4\pi} \boldsymbol{\Omega}\otimes\boldsymbol{\Omega} {\tilde I}_g d\Omega}{\int_{4\pi} {\tilde I}_g d\Omega} \, .$$ Here $\tilde I_g$ is an approximation of the photon intensity. There exists a group of VEF models which use an approximation of the Eddington tensor defined via the first two moments of the photon intensity [@kershaw-1976; @minerbo-1978; @levermore-1984; @su-mgf-ewl-2001; @swesty-2009; @skinner-2013; @klassen-2014; @tetsu-2016; @Zhang-2013; @m1-2017]. To define the Eddington tensor, we formulate a model in which the material temperature distribution $\tilde{T}$ for a TRT problem is calculated with one of the radiation-diffusion models described in Sec. [2](#sec:trt){reference-type="ref" reference="sec:trt"}. A linear RTE is then defined by available $\tilde{T}(\boldsymbol{r},t)$ for $t\in[0,t^\text{end}]$ and $\boldsymbol{r}\in\Gamma$ [\[eq:tcd_bte\]]{#eq:tcd_bte label="eq:tcd_bte"} $$\begin{gathered} \frac{1}{c}\frac{\partial \tilde{I}_g}{\partial t} + \boldsymbol{\Omega} \cdot \boldsymbol{\nabla} \tilde{I}_g + \varkappa_{g}(\tilde{T})\tilde{I}_g = \varkappa_{g}(\tilde{T}) B_g(\tilde{T}), \label{eq:bte_fixsrc} \\ % \tilde{I}_g |_{\boldsymbol{r}\in\partial\Gamma} = I_g^\text{in}\ \ \text{for}\ \ \boldsymbol{n}_\Gamma\cdot\boldsymbol{\Omega}<0,\quad \tilde{I}_g |_{t=t_0} = I_g^0,\\ % \boldsymbol{r}\in\Gamma,\quad t\in[0,t^\text{end}],\quad \boldsymbol{\Omega}\in\mathcal{S},\quad g=1,\dots,G. \nonumber \end{gathered}$$ The solution of the auxiliary RTE problem [\[eq:tcd_bte\]](#eq:tcd_bte){reference-type="eqref" reference="eq:tcd_bte"} gives an approximate distribution of radiation intensities $\tilde{I}_g$ which accounts for the transport effects of the TRT problem and can be used as a shape function to compute the approximate Eddington tensor [\[approx-ET\]](#approx-ET){reference-type="eqref" reference="approx-ET"}. The boundary conditions for the VEF moment equations are defined in terms of $\tilde{I}_g$ as follows [@gol'din-1964; @gol'din-1972]: $$\boldsymbol{n}_\Gamma\cdot\boldsymbol{F}_g|_{\boldsymbol{r}\in\partial\Gamma}=cC_g[\tilde{I}_g](E_g|_{\boldsymbol{r}\in\partial\Gamma} - E^\text{in})+{F}_g^\text{in}, \label{eq:newbc}$$ where $$C_g[\tilde{I}_g] = \frac{\int_{\boldsymbol{n}_\Gamma\cdot\boldsymbol{\Omega}>0}\boldsymbol{\Omega}\tilde{I}_g\ d\Omega}{\int_{\boldsymbol{n}_\Gamma\cdot\boldsymbol{\Omega}>0}\tilde{I}_gd\Omega}.$$ The RTE [\[eq:tcd_bte\]](#eq:tcd_bte){reference-type="eqref" reference="eq:tcd_bte"} with the given function of temperature can be efficiently solved with a single transport sweep per time step. To solve Eqs. [\[eq:tcd_bte\]](#eq:tcd_bte){reference-type="eqref" reference="eq:tcd_bte"}, ray tracing techniques (also known as the method of long characteristics) are applied [@goldin-1960; @takeuchi-1969; @askew-1972; @brough-chudley-1980; @askew-roth-1982; @casmo-1993; @zika-adams-2000; @dya-jc-m&c2023]. In sum, the data-driven VEF model for TRT is constructed with: - Radiation-diffusion solution data for material temperatures $\tilde{T}$, - The RTE with opacity and Planckian source evaluated with $\tilde{T}$ (Eqs. 
[\[eq:tcd_bte\]](#eq:tcd_bte){reference-type="eqref" reference="eq:tcd_bte"}), - The VEF equations (Eqs. [\[p1_ebal\]](#p1_ebal){reference-type="eqref" reference="p1_ebal"} & [\[lovef1\]](#lovef1){reference-type="eqref" reference="lovef1"}), where the Eddington tensor is defined via Eq. [\[approx-ET\]](#approx-ET){reference-type="eqref" reference="approx-ET"} and boundary conditions given by Eq. [\[eq:newbc\]](#eq:newbc){reference-type="eqref" reference="eq:newbc"}. Hereafter we refer to this model as the data-driven VEF model (DD-VEF). Algorithm 1 (offline phase): Input $\{\boldsymbol{\tilde{T}}(t^n)\}_{n=1}^N$; Output $\{\ {\boldsymbol{\tilde{f}}}_g(t^n)\}_{n=1}^N$. Algorithm 2 (online phase): Input $\{{\boldsymbol{\tilde{f}}}_g(t^n)\}_{n=1}^N$; Output $\{\ \boldsymbol{{T}}(t^n),\ \boldsymbol{{E}}_g(t^n),\ \boldsymbol{{\mathcal{F}}}_g(t^n)\}_{n=1}^N$. The process of solving TRT problems with the DD-VEF model can be explained as a two-phase methodology, which is outlined in Algorithms [\[alg:off\]](#alg:off){reference-type="ref" reference="alg:off"} and [\[alg:on\]](#alg:on){reference-type="ref" reference="alg:on"}. The first (offline) phase, demonstrated by Algorithm [\[alg:off\]](#alg:off){reference-type="ref" reference="alg:off"}, represents the 'data-processing' operations to prepare the Eddington tensor closure data. The required input is an already known approximate material temperature distribution $\tilde{T}$ for the entire spatial and temporal interval of interest. If we define a simulation with $X$ spatial grid cells and $N$ time steps, then this input data is the set $\{\boldsymbol{\tilde{T}}(t^n)\}_{n=1}^N$ where $\boldsymbol{\tilde{T}}(t^n)\in\mathbb{R}^X$. At each $n^\text{th}$ time step, Eq. [\[eq:tcd_bte\]](#eq:tcd_bte){reference-type="eqref" reference="eq:tcd_bte"} is solved using $\boldsymbol{\tilde{T}}(t^n)$ for the vector of discrete radiation intensities in phase space $\boldsymbol{\tilde{I}}_g$, which gives rise to the approximate Eddington tensor on the discrete grid $\boldsymbol{\tilde{f}}_g(t^n)$ via Eq. [\[approx-ET\]](#approx-ET){reference-type="eqref" reference="approx-ET"}. The discrete Eddington tensor values at each time step can be collected and stored in a dataset for later use with the DD-VEF model. This process of preparing the Eddington tensor data is referred to as the offline phase because it must only be completed once per temperature distribution $\tilde{T}$, and the calculated Eddington tensor data can be stored away for later use. The second (online) phase is outlined in Algorithm [\[alg:on\]](#alg:on){reference-type="ref" reference="alg:on"} and represents the operations required to solve a given TRT problem with the DD-VEF model. Taking as input the Eddington tensor data calculated in Algorithm [\[alg:off\]](#alg:off){reference-type="ref" reference="alg:off"}, the DD-VEF equations are solved at each time step to generate vectors for temperature $\boldsymbol{{T}}(t^n)$, radiation energy densities $\boldsymbol{{E}}_g(t^n)$ and radiation fluxes $\boldsymbol{{\mathcal{F}}}_g(t^n)$ over all phase space. In this configuration of offline/online phases, only Algorithm [\[alg:on\]](#alg:on){reference-type="ref" reference="alg:on"} must be completed for any given TRT simulation, assuming Algorithm [\[alg:off\]](#alg:off){reference-type="ref" reference="alg:off"} was completed some time in the past to prepare the required datasets. It is important to note, however, that both phases can be combined to save on storage requirements. 
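To make the closure step of the offline phase concrete, the following minimal sketch evaluates the approximate Eddington tensor of Eq. [\[approx-ET\]](#approx-ET){reference-type="eqref" reference="approx-ET"} from discrete-ordinates intensities at a single spatial cell and frequency group. The quadrature directions, weights and intensities below are random placeholders, not the Abu-Shumays set and transport solution actually used in the numerical tests.

```python
import numpy as np

def eddington_tensor(intensity, omega, weights):
    """Approximate Eddington tensor f = (sum_m w_m Omega_m (x) Omega_m I_m) / (sum_m w_m I_m).

    intensity : (M,) nonnegative intensities tilde{I}_g at one cell, one group
    omega     : (M, 3) unit direction vectors Omega_m of the angular quadrature
    weights   : (M,) quadrature weights (summing to 4*pi)
    """
    num = np.einsum("m,mi,mj->ij", weights * intensity, omega, omega)
    den = np.sum(weights * intensity)
    return num / den

# Placeholder quadrature: random unit vectors with equal weights (not Abu-Shumays).
M = 144
rng = np.random.default_rng(1)
omega = rng.standard_normal((M, 3))
omega /= np.linalg.norm(omega, axis=1, keepdims=True)
weights = np.full(M, 4.0 * np.pi / M)

# An isotropic intensity gives an Eddington tensor close to (1/3) * identity,
# and its trace is exactly 1 by construction.
f_iso = eddington_tensor(np.ones(M), omega, weights)
print(np.round(f_iso, 3))
```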
In this case, given the input $\{\boldsymbol{\tilde{T}}(t^n)\}_{n=1}^N$, at each time step the approximate Eddington tensor is calculated and immediately used with the DD-VEF equations to generate the TRT solution for the time step. # Numerical Results {#sec:results} The DD-VEF model is analyzed with numerical testing on the classical Fleck-Cummings (F-C) test [@fleck-1971] in 2D Cartesian geometry. This test takes the form of a homogeneous square domain with sides 6 cm in length, whose material is defined with spectral opacity $$\varkappa_\nu = \frac{27}{\nu^3}(1-e^{-\nu/T}).$$ Here $\nu$ and $T$ are measured in KeV. The left boundary is subject to an isotropic, black-body radiation source at a temperature of $T^\text{in}=1$ KeV. All other boundaries are vacuum. The initial temperature of the domain is $T^0=1$ eV. The material energy density is a linear function of temperature, $\varepsilon=c_vT$, where $c_v=0.5917 a_R (T^\text{in})^3$. The problem is solved on the interval $t\in[0,6\, \text{ns}]$ with 300 uniform time steps $\Delta t = 2\times 10^{-2}$ ns. The phase space is discretized using a $20\times 20$ uniform orthogonal spatial grid, 17 frequency groups (see Table [\[tab:freq_grps\]](#tab:freq_grps){reference-type="ref" reference="tab:freq_grps"}) and 144 discrete directions. The Abu-Shumays angular quadrature set is used [@abu-shumays-2001]. The implicit backward-Euler time integration scheme is used to discretize all equations in time. The BTE is discretized in space with the method of conservative long characteristics [@dya-jc-m&c2023], and all low-order equations use a second-order finite-volumes scheme [@pg-dya-jcp-2020]. Note that the full-order model (FOM) for this TRT problem is formulated as the RTE coupled with the MEB. Three radiation diffusion models are considered to generate $\tilde{T}$: multigroup FLD, $P_1$ and $P_{1/3}$ (see Sec. [2](#sec:trt){reference-type="ref" reference="sec:trt"}). The physics embedded in $\tilde{T}$ will vary depending on which diffusion-type model is used in its computation. For instance, the FLD, $P_1$ and $P_{1/3}$ models may all produce different propagation speeds (and spectral distributions) of the radiation wavefront [@olson-auer-hall-2000]. These effects change how energy is redistributed in the F-C test and alter the distribution of material temperatures in space-time.

| $g$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| $\nu_{g}$ \[KeV\] | 0.7075 | 1.415 | 2.123 | 2.830 | 3.538 | 4.245 | 5.129 | 6.014 | 6.898 |

| $g$ | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 |
|---|---|---|---|---|---|---|---|---|
| $\nu_{g}$ \[KeV\] | 7.783 | 8.667 | 9.551 | 10.44 | 11.32 | 12.20 | 13.09 | $1\times 10^{7}$ |

Figure [\[fig:TE_2nrm-errs_roms\]](#fig:TE_2nrm-errs_roms){reference-type="ref" reference="fig:TE_2nrm-errs_roms"} plots relative errors (w.r.t. the FOM solution) for the material temperature and total radiation energy density calculated in the 2-norm over space at each instant of time in $t\in[0,6\, \text{ns}]$. Separate curves are shown for each considered diffusion model and for the DD-VEF model. In each case, the DD-VEF solution shows an increase in accuracy for $T$ and $E$ compared to the radiation diffusion solutions. The errors in $T$ and $E$ from the DD-VEF model are on the order of $10^{-3}$ for the whole interval of time, whereas the diffusion model errors are on the order of $10^{-2}$ for the majority of times. The DD-VEF model is seen to increase the accuracy of each diffusion model by roughly an order of magnitude. 
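For reference, the error measure used in these comparisons (a relative spatial 2-norm at each time level, taken w.r.t. the FOM solution) can be sketched as follows; the array names are placeholders for gridded temperature or energy-density fields, and any weighting by cell volumes is omitted in this illustration.

```python
import numpy as np

def relative_l2_error(approx, reference):
    """Relative spatial 2-norm error ||approx - reference||_2 / ||reference||_2
    at a single time level; both arrays hold cell values on the same grid."""
    return np.linalg.norm(approx - reference) / np.linalg.norm(reference)

# Error history over N time steps for, e.g., material temperature fields
# T_rom[n] and T_fom[n] of shape (20, 20) each (placeholder data here).
N = 4
rng = np.random.default_rng(2)
T_fom = [1.0 + rng.random((20, 20)) for _ in range(N)]
T_rom = [T + 1e-3 * rng.standard_normal(T.shape) for T in T_fom]
errors = [relative_l2_error(a, r) for a, r in zip(T_rom, T_fom)]
print(np.round(errors, 5))
```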
The FLD model possesses the highest accuracy of all tested diffusion ROMs, and the DD-VEF model with highest accuracy is the one applied to the FLD solution. Next we investigate the DD-VEF model's performance in capturing the radiation wavefront as it propagates through the spatial domain. Note that the F-C test mimics the class of supersonic radiation shock problems and experiments [@Moore-2015; @guymer-2015; @fryer-2016; @fryer-2020]. One measurement of importance in these experiments concerns the time it takes for the radiation wavefront to reach the radiation-drive-opposite side of the test material [@Moore-2015; @fryer-2016; @fryer-2020]. A measurement of accuracy based on this wavefront-arrival metric can be derived by comparing the TRT solution at the right boundary of the F-C test, where the radiation wavefront propagates towards. The boundary-averaged material temperatures and radiation energy density and flux are defined as $$\begin{gathered} \label{eq:ch4:rbnd_ints} \bar{F}_R = \frac{1}{L_R}\int_{0}^{L_R} \boldsymbol{e}_x\cdot\boldsymbol{F}(x_R,y)\ dy,\\[5pt] \bar{E}_R = \frac{1}{L_R}\int_{0}^{L_R} E(x_R,y)\ dy,\\[5pt] \bar{T}_R = \frac{1}{L_R}\int_{0}^{L_R} T(x_R,y)\ dy,\end{gathered}$$ where $L_R = x_R = 6\text{cm}$. The FOM solution for these three quantities vs time is plotted in Figure [\[fig:rbndvals_fom\]](#fig:rbndvals_fom){reference-type="ref" reference="fig:rbndvals_fom"}. If the DD-VEF model can reproduce these integral quantities to acceptable levels of accuracy, they can be said to correctly reproduce the shape and propagation speed of the radiation wavefront. This is especially important to investigate given that the considered radiation diffusion models are known to produce nonphysical effects [@olson-auer-hall-2000; @morel-2000; @simmons-mihalas-2000]. Figure [\[fig:rbndvals_roms\]](#fig:rbndvals_roms){reference-type="ref" reference="fig:rbndvals_roms"} plots the relative error of the diffusion and DD-VEF produced values of $\bar{F}_R$, $\bar{E}_R$ and $\bar{T}_R$. The relative errors for each quantity are decreased using the DD-VEF model by 1-2 orders of magnitude for most instances of time. There are several time intervals where the errors for a model 'spike' downwards and then come back up. These occur when there is a change of sign in the error and do not indicate that the solution is more accurate there than at other instants of time. The most dramatic increase in accuracy is for the FLD $\bar{F}_R$ by about 3 orders of magnitude. In fact, the FLD $\bar{F}_R$ is the least accurate and the DD-VEF model $\bar{F}_R$ using the FLD solution is the most accurate of the models shown. The explanation for this effect comes from the fact that the DD-VEF model only acts on the approximate material temperature it is given, and the FLD solution for $\bar{T}_R$ (and $T$ in general from Figure [\[fig:TE_2nrm-errs_roms\]](#fig:TE_2nrm-errs_roms){reference-type="ref" reference="fig:TE_2nrm-errs_roms"}) is the most accurate of the considered radiation diffusion models. Finally, we consider the spectrum of radiation present on the right boundary of the test domain. Figure [\[fig:Eg_spectrum\]](#fig:Eg_spectrum){reference-type="ref" reference="fig:Eg_spectrum"} plots the frequency spectrum of radiation energy densities for the F-C test obtained by FOM on two points of the right boundary face ($x=6$ cm). 
The spectrum of radiation present at the midpoint of the right boundary ($y=3$ cm) is shown on the left, and the radiation spectrum present at the corner of the test domain ($y=0$ cm) is displayed on the right. Select instants of time are plotted to demonstrate how the radiation spectrum evolves. The points on the graphs are located at the center of each discrete energy group on the frequency-axis, and the values they take on are the group-averaged radiation energy densities $\bar{E}_g={E_g}/{(\nu_{g} - \nu_{g-1})}$. The plots have been 'zoomed in' to the spectrum peak, which leaves off the final frequency group. However, this point does not deviate significantly from the position of the second to last frequency group. Figure [\[fig:Eg_spectrum_err_2norms\]](#fig:Eg_spectrum_err_2norms){reference-type="ref" reference="fig:Eg_spectrum_err_2norms"} plots the errors of each model in the radiation energy densities vs photon frequency at the midpoint of the right boundary. Errors have been collected in the relative *temporal* 2-norms $$\| x(t) \|_2^t=\bigg(\int_{0}^{t^\text{end}}x(t)^2dt\bigg)^{1/2},$$ w.r.t. the full-order solution. The DD-VEF model is demonstrated to improve upon low-frequency group errors by roughly an order of magnitude at each considered point in space. The increase in accuracy from the diffusion solutions significantly improves as frequency increases starting from roughly $\nu=3$ KeV. This is where the peak of (non-local) radiation emanating from the boundary drive should be located in frequency, as the Planckian spectrum peaks at $\nu=2.82T$. This makes sense, since the higher frequency groups are closer to the streaming regime and should be better approximated by the transport-effects correction provided within the VEF model. The same errors calculated in the $\infty-$norm are close to those shown in the 2-norm, indicating that these results are representative of the overall errors produced by these models in the radiation spectrum at all instants of time. Note that although the last frequency group has not been included in the plots, its error value is close to that of the last shown frequency group for all models. # Conclusion {#sec:conclusion} In this paper, a data-driven VEF model is introduced for nonlinear TRT problems. An approximate Eddington tensor is constructed with a transport correction method applied to radiation diffusion-based solutions to the TRT problem. Three multigroup diffusion models were considered: a FLD model and the $P_1$ and $P_{1/3}$ models. The DD-VEF model provided an increase in accuracy of 1-2 orders of magnitude in the total radiation energy density and material temperature when applied to each diffusion-based solution. The entire spectrum of radiation present at the test domain right boundary was improved upon as well. The most significant reduction in error from the diffusion solutions in the frequency spectrum was in the high-frequency range with strong transport effects. Possible future extensions of this DD-VEF model include parameterization via interpolation between diffusion solutions for a series of TRT problems, or the use of other approximate models for TRT in place of radiation-diffusion. # Acknowledgements Los Alamos Report LA-UR-23-31255. This research project was funded by the Sandia National Laboratory, Light Speed Grand Challenge, LDRD, Strong Shock Thrust. This work was supported by the U.S. Department of Energy through the Los Alamos National Laboratory. 
Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of U.S. Department of Energy (Contract No. 89233218CNA000001). The content of the information does not necessarily reflect the position or the policy of the federal government, and no official endorsement should be inferred.
--- author: - Otto Overkamp and Takashi Suzuki title: Chai's conjectures on base change conductors --- # Introduction Let $\mathop{\mathrm{\mathcal{O}}}_K$ be a Henselian discrete valuation ring with field of fractions $K$ and *perfect residue field $\kappa.$ Let $B$ be a semiabelian variety over $K.$ It is well-known that $B$ admits a Néron lft-model $\mathscr{B} \to \mathop{\mathrm{\operatorname{Spec}}}\mathop{\mathrm{\mathcal{O}}}_K,$ which is a smooth separated $\mathop{\mathrm{\mathcal{O}}}_K$-group scheme with generic fibre $B$ characterised by a universal property. We say that $B$ has *semiabelian reduction over $\mathop{\mathrm{\mathcal{O}}}_K$ if the identity component $\mathscr{B}^0$ of the Néron lft-model of $B$ is a semiabelian scheme. If $L$ is any finite separable extension of $K,$ we let $\mathop{\mathrm{\mathcal{O}}}_L$ be the integral closure of $\mathop{\mathrm{\mathcal{O}}}_K$ in $L.$ Note that $\mathop{\mathrm{\mathcal{O}}}_L$ is a discrete valuation ring finite and flat over $\mathop{\mathrm{\mathcal{O}}}_K$. By Grothendieck's semiabelian reduction theorem, there exists a finite Galois extension $L$ of $K$ such that $B_L$ has semiabelian reduction over $\mathop{\mathrm{\mathcal{O}}}_L.$ Choose such an $L$ and let $\mathscr{B}_L$ be the Néron lft-model of $B_L$ over $\mathop{\mathrm{\mathcal{O}}}_L.$ Following Chai [@Chai], we define the *base change conductor of $B$ by $$c(B):= \frac{1}{e_{L/K}} \ell_{\mathop{\mathrm{\mathcal{O}}}_L} (\mathrm{coker}(\mathop{\mathrm{\mathrm{Lie}}}\mathscr{B} \otimes_{\mathop{\mathrm{\mathcal{O}}}_K} \mathop{\mathrm{\mathcal{O}}}_L \to \mathop{\mathrm{\mathrm{Lie}}}\mathscr{B}_L)),$$ where $e_{L/K}$ denotes the ramification index of the extension $K\subseteq L$ and $\ell_{\mathop{\mathrm{\mathcal{O}}}_L}(-)$ denotes the length of an $\mathcal{O}_{L}$-module. This rational number measures the failure of $B$ to have semiabelian reduction over $\mathop{\mathrm{\mathcal{O}}}_K$ (or, equivalently, the failure of the connected component of the Néron lft-model of $B$ to commute with base change along the possibly ramified extension $\mathop{\mathrm{\mathcal{O}}}_K \subseteq \mathop{\mathrm{\mathcal{O}}}_L$). Moreover, $c(B)$ is independent of the choice of $L$.*** Now let $$\begin{aligned} 0\to T\to B \to A \to 0\label{introsec}\end{aligned}$$ be an exact sequence of semiabelian varieties over $K.$ Because the induced sequence $0\to \mathscr{T} \to \mathscr{B} \to \mathscr{A} \to 0$ of Néron lft-models is usually not exact, understanding the behaviour of the base change conductor in short exact sequences is a highly delicate problem. The following two questions have been proposed in the literature (see, e. g., [@Chai], [@CLN]): - Suppose $T$ is a torus and $A$ an Abelian variety in the sequence ([\[introsec\]](#introsec){reference-type="ref" reference="introsec"}) above. Does the equality $c(B)=c(T) + c(A)$ hold? - Does the equality $c(B)=c(T)+c(A)$ hold without any restrictions on $T,$ $B,$ and $A$? Question (i) above was answered affirmatively by Chai in the special cases where $K$ has characteristic zero or where $\kappa$ is finite [@Chai Theorem 4.1]; the first of those two cases was later re-proven by Cluckers-Loeser-Nicaise [@CLN Theorem 4.2.4] using different methods. This question has since been widely believed to have an affirmative answer in general, which has become known as *Chai's conjecture. 
Beyond the results just described, this conjecture is known in the following two cases:* - The semiabelian variety $B$ acquires semiabelian reduction over a finite *tamely ramified extension of $K,$ due to Halle-Nicaise [@HN Corollary 4.23], and* - we have $B=\mathop{\mathrm{\mathrm{Pic}}}^ 0_{C/K}$ for a proper curve $C\to \mathop{\mathrm{\operatorname{Spec}}}K,$ due to the first named author [@OvII]. The first main purpose of the present paper is to settle both questions (i) and (ii) in complete generality, without imposing any restrictions on the characteristic of $K$ or the semiabelian variety $B$. More precisely, we shall show that a slightly stronger statement than Chai's conjecture is true, whereas question (ii) has a negative answer: **Theorem 1**. *(i) Assume that $T$ is a torus in the sequence ([\[introsec\]](#introsec){reference-type="ref" reference="introsec"}) above. Then $$c(B)=c(T)+c(A).$$ (ii) If $T$ is not a torus in the sequence ([\[introsec\]](#introsec){reference-type="ref" reference="introsec"}), then $c(B) \not= c(T) + c(A)$ in general. In fact, there are such examples where $T,$ $B,$ and $A$ are all Abelian varieties, and examples where $T$ is an Abelian variety and $A$ a torus.* See Theorem [\[Chaithm\]](#Chaithm){reference-type="ref" reference="Chaithm"} for Statement (i) and Subsections [4.2](#consIsubs){reference-type="ref" reference="consIsubs"} and [4.3](#consIIsubs){reference-type="ref" reference="consIIsubs"} for Statement (ii). Our results have applications in several (seemingly unrelated) fields of mathematics: - It is well-known that every Abelian variety $Y$ over $K$ has a *rigid uniformisation $0\to \Lambda \to B \to Y \to 0,$ where $\Lambda$ is an étale $K$-lattice, $B$ a semiabelian variety, and the sequence is to be understood in the category of rigid analytic $K$-groups (see, e. g., Section 1 of [@BX] for more details). Moreover, $B$ fits into an exact sequence $0\to T \to B \to A \to 0$ where $T$ is a torus and $A$ an Abelian variety with potentially good reduction. By [@BX Theorem 2.3], we have $c(Y)=c(B),$ so by our results, calculating the base change conductor of $Y$ is reduced to calculating that of $T$ and that of $A.$* - If $K$ is the function field of a proper smooth curve $X$ over a finite field $\kappa = \mathop{\mathrm{\mathbf{F}}}_{q}$ (for this paragraph only), then by [@GS], the coherent Euler characteristic $\chi(X, \mathop{\mathrm{\mathrm{Lie}}}\mathscr{B})$ appears in a special value formula for the $L$-function of $B$. In the proof of [@GS Proposition 7.6], the affirmative answer for Question (i) for finite $\kappa$, in the form of the equality $$\chi(X, \mathop{\mathrm{\mathrm{Lie}}}\mathscr{B}) = \chi(X, \mathop{\mathrm{\mathrm{Lie}}}\mathscr{T}) + \chi(X, \mathop{\mathrm{\mathrm{Lie}}}\mathscr{A}),$$ is used to reduce the special value formula to those of the torus part $T$ and the abelian variety quotient $A$. It is easy to see that $L$-functions are multiplicative in exact sequences of semiabelian varieties. Therefore one might expect this kind of additivity of $\chi(X, \mathop{\mathrm{\mathrm{Lie}}}\mathrm{N\acute{e}ron})$ for any exact sequence of semiabelian varieties over $K$, so that $\chi(X, \mathop{\mathrm{\mathrm{Lie}}}\mathscr{B}^{\bullet})$ would be an invariant of any object $B^{\bullet}$ of the bounded derived category of semiabelian varieties over $K$ (or of mixed motives over $K$ even more ambitiously). 
However, our counterexamples to Question (ii) show that this is not the case: $\chi(X, \mathop{\mathrm{\mathrm{Lie}}}\mathscr{B}^{\bullet})$ is not well-defined (namely, not invariant under quasi-isomorphism). What is well-defined instead (under the assumption of the finiteness of Tate-Shafarevich groups) is the *ratio* of $q^{\chi(X, \mathop{\mathrm{\mathrm{Lie}}}\mathscr{B}^{\bullet})}$ to the Weil-étale Euler characteristic $\chi_{W}(X, \mathscr{B}^{0, \bullet})$. This will be discussed in detail elsewhere. - As explained in Theorem 4.2.1 of [@CLN], Chai's conjecture is equivalent to a Fubini property of certain motivic integrals, which was hitherto known only in the case where $K$ has characteristic 0. Our results show that this Fubini property holds without any restriction on the characteristic of $K.$ Finally, we shall extend Chai's result on the invariance of the base change conductor under duality of Abelian varieties to the equal characteristic case: **Theorem 2**. *Let $A$ be an Abelian variety over $K$ with dual $A^\vee.$ Then $c(A)=c(A^\vee).$ [\[intromainthm2\]]{#intromainthm2 label="intromainthm2"}* This is non-trivial since base change conductors for Abelian varieties are not isogeny invariant as shown in [@Chai (6.10)]. The Theorem is known if the characteristic of $K$ is equal to 0 [@Chai Theorem 6.7], but the proof given in *loc. cit. cannot easily be generalised to the positive characteristic case due to the existence of inseparable isogenies.* The same method of proof, when applied to tori instead, gives a new proof of the following result of Chai, Yu, and de Shalit [@CY Theorem on p. 367 and Theorem 12.1]: **Theorem 3**. *Let $T$ and $T'$ be tori over $K$ isogenous to each other. Then $c(T) = c(T')$. [\[intromainthm3\]]{#intromainthm3 label="intromainthm3"}* Chai, Yu, and de Shalit prove this first for mixed characteristic $K$ using Bégueri's duality theory [@Beg]. They then deduce the equal characteristic case from the mixed characteristic case by a Deligne-type approximation argument. Here we directly prove the equal characteristic case of the above theorem without reduction to the mixed characteristic case. As mentioned in [@CY], this theorem implies that $$c(T) = \frac{1}{2} a(X_{\ast}(T) \otimes_{\mathop{\mathrm{\mathbf{Z}}}} \mathop{\mathrm{\mathbf{Q}}})$$ for any torus $T / K$, where $a(-)$ denotes the Artin conductor. We shall also show that the assumption in Chai's conjecture that $\kappa$ be perfect cannot be dropped, and that the base change conductor of algebraic tori is no longer invariant under isogeny if the residue field is imperfect. In particular, the formula just stated does not extend to the case of imperfect residue fields. The methods we shall employ come from several different sources. Our approach for proving Chai's conjecture is, to some extent, inspired by the first named author's previous work on this topic [@OvII]. The proof of Chai's conjecture for Jacobians given there relies both on the methods presented in the work of Liu-Lorenzini-Raynaud [@LLR], as well as on the techniques for studying Néron models of Jacobians of singular curves introduced by the first named author [@OvI; @Ov]. We shall show in this article that the use of such techniques can, in fact, be circumvented, thus leading to a stronger result. The key results from [@LLR] we use here are the formulas [@LLR Theorems 2.1 (b) and 2.10] writing lengths of cohomology objects of complexes of Lie algebras as dimensions of certain algebraic groups over the residue field $\kappa$. 
[^1] The methods we shall use in order to construct the counterexamples mentioned in Theorem [Theorem 1](#intromainthm1){reference-type="ref" reference="intromainthm1"} (ii), as well as to prove Theorems [\[intromainthm2\]](#intromainthm2){reference-type="ref" reference="intromainthm2"} and [\[intromainthm3\]](#intromainthm3){reference-type="ref" reference="intromainthm3"}, are largely derived from the version of arithmetic duality theory developed by the second named author [@Suz]. This theory treats dualities for algebraic groups over $\kappa$ arising from those over $K$, and we will apply it to Liu-Lorenzini-Raynaud's algebraic groups mentioned above. Another ingredient for counterexamples is Tan-Trihan-Tsoi's estimates [@TTT] of the kernel of the restriction map $H^{1}(K, A) \to H^{1}(L, A)$ for ordinary elliptic curves $A$ with supersingular reduction and highly ramified extensions $L / K$. Throughout this paper (except for Subsection [4.4](#ImperfectResPara){reference-type="ref" reference="ImperfectResPara"}), we shall assume that $\mathop{\mathrm{\mathcal{O}}}_K$ is complete and that its residue field (henceforth to be denoted by $k$) is algebraically closed. This leads to no loss of generality because Néron lft-models commute with base change along the extension $\mathop{\mathrm{\mathcal{O}}}_K \subseteq \widehat{\mathop{\mathrm{\mathcal{O}}}_K^{\mathrm{sh}}}$ [@BLR Chapter 10.1, Proposition 3].\ \ $\mathbf{Acknowledgement.}$ The authors would like to express their gratitude to Ki-Seng Tan for suggesting that his joint paper [@TTT] with Trihan and Tsoi might be relevant for Construction I (see Proposition [Proposition 28](#ConIProp){reference-type="ref" reference="ConIProp"}) below. The first named author's research was conducted in the framework of the research training group *GRK 2240: Algebro-Geometric Methods in Algebra, Arithmetic and Topology*. # Chai's conjecture Throughout this section, we let $\mathop{\mathrm{\mathcal{O}}}_K$ be a complete discrete valuation ring with field of fractions $K,$ maximal ideal $\mathfrak{m},$ and algebraically closed residue field $k=\mathop{\mathrm{\mathcal{O}}}_K/\mathfrak{m}.$ ## Some cohomological invariants Let $$A^\bullet \colon \, 0 \to A^1 \to A^2 \to ... \to A^n \to 0$$ be a bounded complex of finitely generated free $\mathop{\mathrm{\mathcal{O}}}_K$-modules such that the induced complex $0\to A^1\otimes_{\mathop{\mathrm{\mathcal{O}}}_K}K \to ... \to A^n\otimes_{\mathop{\mathrm{\mathcal{O}}}_K}K\to 0$ is exact (i. e., $A^\bullet{\otimes_{\mathop{\mathrm{\mathcal{O}}}_K}^L}K\cong 0$ in $D^{b}(K)$). Then we obtain a morphism of $\mathop{\mathrm{\mathcal{O}}}_K$-modules $$\det A^\bullet \to \det A^\bullet{\otimes_{\mathop{\mathrm{\mathcal{O}}}_K}^L}K = K,$$ so there exists a unique integer $\gamma(A^\bullet)$ such that $\det A^\bullet = \mathfrak{m}^{\gamma(A^\bullet)}$ as an $\mathop{\mathrm{\mathcal{O}}}_K$-submodule of $K.$ Moreover, we define the *cohomological Euler characteristic of $A^\bullet$ as $$\chi(A^\bullet):=\sum_{i=1}^n (-1)^ i\ell_{\mathop{\mathrm{\mathcal{O}}}_K}(H^i(A ^ \bullet)).$$ Then we have* **Lemma 4**. *For each complex $A^\bullet$ as above, $\gamma(A^\bullet)=\chi(A^\bullet).$ [\[chigammalem\]]{#chigammalem label="chigammalem"}* *Proof.* If $n=1,$ both invariants vanish, so there is nothing to prove. 
If $n=2,$ we choose a basis $e_1,..., e_r$ of $A^2$ such that $\lambda_1e_1,..., \lambda_r e_r$ is a basis of $A^1$ for some $r\in \mathop{\mathrm{\mathbf{N}}}_0$ and suitable $\lambda_j\in \mathop{\mathrm{\mathcal{O}}}_K.$ Since $A^1\to A^2$ is injective, we have $\chi(A^\bullet)=v_K(\lambda_1) +...+ v_K(\lambda_r).$ Moreover, $\det A ^ \bullet$ is generated by $\lambda_1\cdot...\cdot \lambda_r\cdot(e_1\wedge ... \wedge e_r)\otimes (e_1\wedge ... \wedge e_r)^\vee$ as a submodule of $\det A^\bullet{\otimes_{\mathop{\mathrm{\mathcal{O}}}_K}^L}K=K,$ which implies the claim. In general, consider the two complexes $B^\bullet: \, 0\to A^1 \to ... \to A^{n-2} \to \ker(A^{n-1}\to A^n) \to 0$ and $C^\bullet: \, 0 \to \mathrm{im}(A^{n-1}\to A^n) \to A^n \to 0,$ declaring that $\mathrm{im}(A^{n-1}\to A^n)$ sit in degree $n-1.$ Then we have a (term-wise) exact sequence of complexes $$0 \to B^\bullet \to A^\bullet \to C^\bullet \to 0,$$ which induces a canonical isomorphism $$\det A^\bullet = (\det B^\bullet)\otimes_{\mathop{\mathrm{\mathcal{O}}}_K} \det C^\bullet.$$ This isomorphism shows that $\gamma(A^\bullet)=\gamma(B^\bullet) + \gamma(C^\bullet),$ and the long exact cohomology sequence induced by the short exact sequence of complexes shows that $\chi(A^\bullet)=\chi(B^\bullet) + \chi(C^\bullet).$ Hence the claim follows in general by induction. ◻

## Complexes of group schemes

In this paragraph, we shall recall the construction of another invariant which will play an important role in this article. Our discussion is based on [@LLR Section 2]. Let $$\mathscr{G}^\bullet: \, 0 \to \mathscr{G}^1 \to \mathscr{G}^2 \to ... \to \mathscr{G}^{n-1} \to \mathscr{G}^{n} \to 0$$ be a complex of separated group schemes of finite type over $\mathop{\mathrm{\mathcal{O}}}_K.$ Moreover, putting $G^i:=\mathscr{G}^i\times_{\mathop{\mathrm{\mathcal{O}}}_K} K$, we shall assume that the induced complex $$G^\bullet:\, 0\to G^1\to G^2 \to ... \to G^{n-1} \to G^n \to 0$$ is exact and consists of smooth algebraic $K$-groups. Denote by $\mathscr{G}^\bullet(\mathop{\mathrm{\mathcal{O}}}_K)$ the complex $0 \to \mathscr{G}^1(\mathop{\mathrm{\mathcal{O}}}_K) \to \mathscr{G}^2(\mathop{\mathrm{\mathcal{O}}}_K) \to ... \to \mathscr{G}^n(\mathop{\mathrm{\mathcal{O}}}_K)\to 0.$ We shall need the following

**Definition 5**. *Let $\mathop{\mathrm{\mathrm{Lie}}}\mathscr{G}^\bullet$ be the complex $0\to \mathop{\mathrm{\mathrm{Lie}}}\mathscr{G}^1 \to \mathop{\mathrm{\mathrm{Lie}}}\mathscr{G}^2\to ... \to\mathop{\mathrm{\mathrm{Lie}}}\mathscr{G}^n\to 0.$ Moreover, we put $$\gamma(\mathscr{G}^\bullet):=\gamma(\mathop{\mathrm{\mathrm{Lie}}}\mathscr{G}^\bullet),$$ and $$\chi_{\mathrm{Lie}}(\mathscr{G}^\bullet):=\chi(\mathop{\mathrm{\mathrm{Lie}}}\mathscr{G}^\bullet).$$*

Now recall (for example, from [@BLR Chapter 7.1, Theorem 5]) that any $\mathop{\mathrm{\mathcal{O}}}_K$-group scheme $\mathscr{K}$ of finite type with smooth generic fibre admits a *smoothening $$\psi\colon \widetilde{\mathscr{K}}\to \mathscr{K},$$ i. e., a smooth group scheme $\widetilde{\mathscr{K}}$ and a morphism $\psi$ as above such that, for any smooth $\mathop{\mathrm{\mathcal{O}}}_K$-scheme $T,$ any morphism $T\to \mathscr{K}$ factors uniquely via $\psi.$ This universal property guarantees in particular that the smoothening is functorial in $\mathscr{K},$ so that, given a complex $\mathscr{G}^\bullet$ as above, we can consider the induced complex $$\widetilde{\mathscr{G}}^\bullet : \, 0 \to \widetilde{\mathscr{G}}^1 \to \widetilde{\mathscr{G}}^2 \to ...
\to \widetilde{\mathscr{G}}^n\to 0.$$ The following Lemma is of central importance for the construction (and used implicitly throughout Section 2.5 of [@LLR]):* **Lemma 6**. *Let $\kappa$ be an algebraically closed *field and let $f\colon G\to H$ be a morphism of smooth algebraic groups over $\kappa.$ Then $(\mathrm{coker}\,f)(\kappa)=\mathrm{coker}\, f(\kappa),$ where $f(\kappa)\colon G(\kappa)\to H(\kappa)$ is the induced map on $\kappa$-points.** *Proof.* Let $H'$ be the scheme-theoretic image of $f,$ which is a closed algebraic subgroup of $H$ smooth over $\kappa.$ The sequence $0\to H'\to H\to \mathrm{coker}\, f\to 0$ is exact in the étale topology, so $(\mathrm{coker}\,f)(\kappa)=H(\kappa)/H'(\kappa).$ Now let $g\colon G\to H'$ be the induced morphism. For every $x\in H'(\kappa),$ the scheme $g^{-1}(x)$ is non-empty and of finite type over $\kappa,$ so it has a $\kappa$-point because $\kappa$ is algebraically closed. This shows that $H'(\kappa)=\mathrm{im} \, f(\kappa),$ so the claim follows. ◻ The Lemma fails if $\kappa$ is separably closed but imperfect, as shown by $f=[p]\colon \mathop{\mathrm{\mathbf{G}_m}}\to \mathop{\mathrm{\mathbf{G}_m}}$ with $p=\mathrm{char}\, k.$ Indeed, $\mathrm{coker}\, f=0$ in this case, but $\mathrm{coker}\, f(\kappa)=\kappa^\times/\kappa^{\times p},$ which is infinite. **Proposition 7**. *(cf. [@LLR p. 473]) *For $i=1,..., n,$ there exist canonical smooth group schemes $D_i$ of finite type over $k$ and isomorphisms $$H^ i (\mathscr{G}^\bullet(\mathop{\mathrm{\mathcal{O}}}_K)) \cong D_i(k)$$ of Abelian groups. [\[Diconstructprop\]]{#Diconstructprop label="Diconstructprop"}** *Proof.* We briefly recall the construction of the $D_j,$ referring the reader to *loc. cit. for more details. Let $\mathscr{K}$ and $\mathscr{K}'$ be the smoothenings of $\mathscr{G}_{i-1}$ and $\ker(\mathscr{G}_i \to \mathscr{G}_{i+1}),$ respectively. We obtain a morphism $\mathscr{K}\to \mathscr{K}'$ and a canonical isomorphism $$H^ i (\mathscr{G}^\bullet(\mathop{\mathrm{\mathcal{O}}}_K)) = \mathrm{coker}(\mathscr{K}(\mathop{\mathrm{\mathcal{O}}}_K) \to \mathscr{K}'(\mathop{\mathrm{\mathcal{O}}}_K)).$$ As in [@LLR p. 465], there exists a canonical diagram $$\begin{CD} \widehat{\mathscr{K}}@>>>\widehat{\mathscr{K}'}\\ @VVV@VVV\\ \mathscr{K}@>>>\mathscr{K}' \end{CD}$$ of smooth $\mathop{\mathrm{\mathcal{O}}}_K$-group schemes of finite type such that the vertical arrows are isomorphisms at the generic fibre and such that the map $\widehat{\mathscr{K}}\to \widehat{\mathscr{K}'}$ is smooth and surjective. Denote by $\mathop{\mathrm{\mathrm{Gr}}}_i$ the Greenberg functor of level $i,$ for any $i\geq 0$ (see, for example, [@BLR p. 276]). We obtain a diagram of smooth $k$-group schemes of finite type $$\begin{CD} \mathop{\mathrm{\mathrm{Gr}}}_i \widehat{\mathscr{K}}@>>>\mathop{\mathrm{\mathrm{Gr}}}_i\widehat{\mathscr{K}'} @>>>0\\ @VVV@VVV\\ \mathop{\mathrm{\mathrm{Gr}}}_i\mathscr{K}@>>>\mathop{\mathrm{\mathrm{Gr}}}_i\mathscr{K}'\\ @VVV@VVV\\ C@>>> C'' \end{CD}$$ in which $C$ and $C''$ are the obvious cokernels. The top left horizontal arrow is smooth and surjective (and hence surjective on $k$-points). 
By [@BLR Theorem 2.2(c) and Corollary 2.3], there exists $N\in \mathop{\mathrm{\mathbf{N}}}$ such that for all $i\geq N,$ the maps $\mathrm{coker}(\widehat{\mathscr{K}}(\mathop{\mathrm{\mathcal{O}}}_K) \to \mathscr{K}(\mathop{\mathrm{\mathcal{O}}}_K)) \to C(k)$ and $\mathrm{coker}(\widehat{\mathscr{K}'}(\mathop{\mathrm{\mathcal{O}}}_K) \to \mathscr{K}'(\mathop{\mathrm{\mathcal{O}}}_K)) \to C''(k)$ are isomorphisms. Now put $$D:=\mathrm{coker}(C\to C'').$$ Then the diagrams above show that the canonical map $$H^ i (\mathscr{G}^\bullet(\mathop{\mathrm{\mathcal{O}}}_K)) \to D(k)$$ is an isomorphism.* ◻

*Remark 8*. The groups $C$ and $C''$ in the proof of Proposition [\[Diconstructprop\]](#Diconstructprop){reference-type="ref" reference="Diconstructprop"} are independent of $i \ge N$. More precisely, with these groups denoted by $C_{i}$ and $C_{i}''$, the natural morphisms $C_{i} \to C_{i - 1}$ and $C_{i}'' \to C_{i - 1}''$ (denoted by $\operatorname{Coker}(g_{i}) \to \operatorname{Coker}(g_{i - 1})$ and $\operatorname{Coker}(g_{i}'') \to \operatorname{Coker}(g_{i - 1}'')$ in [@LLR p. 471]) are isomorphisms. The proof of this fact in [@LLR] refers to Lemma 2.6 (c) there, which says that these morphisms are isomorphisms *on $k$-valued points*: $C_{i}(k) \overset{\sim}{\to} C_{i - 1}(k)$ and $C_{i}''(k) \overset{\sim}{\to} C_{i - 1}''(k)$. A priori, this only implies that $C_{i} \to C_{i - 1}$ and $C_{i}'' \to C_{i - 1}''$ are surjective with finite infinitesimal kernel. To show that they are indeed isomorphisms, we use the following fact given in [@BGA Proposition 11.1]: If $\mathscr{G}$ is a smooth group scheme over $\mathcal{O}_{K}$, then the natural morphism $\operatorname{Gr}_{i} \mathscr{G} \to \operatorname{Gr}_{i - 1} \mathscr{G}$ is smooth for all $i$. Apply this to $\mathscr{G} = \mathscr{K}$. Together with the smoothness of $\operatorname{Gr}_{i - 1} \widehat{\mathscr{K}}$ over $k$, we know that the kernel of $C_{i} \to C_{i - 1}$ is smooth over $k$. Hence this kernel is zero for $i \ge N$. The same argument applies to $C_{i}'' \to C_{i - 1}''$. This shows that the group $D = \operatorname{coker}(C \to C'')$ is independent of $i \ge N$. From the proof of the proposition, we have $D \cong \operatorname{coker}( \operatorname{Gr}_{i} \mathscr{K} \to \operatorname{Gr}_{i} \mathscr{K}' )$ for $i \ge N$. Hence $D \cong \operatorname{coker}( \operatorname{Gr} \mathscr{K} \to \operatorname{Gr} \mathscr{K}' )$ as a (pro)algebraic group over $k$, where $\operatorname{Gr} = \varprojlim_{i} \operatorname{Gr}_{i}$ denotes the Greenberg transform of infinite level ([@BGA Section 14]).

Following [@LLR p. 473], we make the following

**Definition 9**. *We define the invariant $\chi_{\mathrm{points}}(\mathscr{G}^\bullet)$ as $$\chi_{\mathrm{points}}(\mathscr{G}^\bullet) := \sum_{i=1}^n (-1) ^ i \dim_k D_i.$$*

The following result is stated in [@LLR] (see *op. cit.*, Theorem 2.10) without proof; we shall provide the details because it plays a fundamental role in our argument:

**Theorem 10**. *For a complex $\mathscr{G}^\bullet$ as above, we have [\[Liepointsthm\]]{#Liepointsthm label="Liepointsthm"} $$\chi_{\mathrm{Lie}}(\widetilde{\mathscr{G}}^\bullet)=\chi_{\mathrm{points}}(\mathscr{G}^\bullet).$$*

*Proof.* Because the map $\widetilde{\mathscr{G}}^\bullet(\mathop{\mathrm{\mathcal{O}}}_K) \to \mathscr{G}^\bullet(\mathop{\mathrm{\mathcal{O}}}_K)$ is an isomorphism of complexes, we may assume without loss of generality that all the $\mathscr{G}^j$ are smooth. There is nothing to do if $n = 0$.
In general, we let $\mathscr{K}:=\ker(\mathscr{G}^{n-1}\to \mathscr{G}^n)$ and decompose $\mathscr{G}^\bullet$ into the complexes $$\mathscr{H}^\bullet_1: \, 0 \to \mathscr{G}^1 \to ... \to \mathscr{G}^{n-2} \to \mathscr{K} \to 0,$$ as well as $$\mathscr{H}_2^\bullet: \, 0\to \mathscr{K} \to \mathscr{G}^{n-1} \to \mathscr{G}^n\to 0,$$ declaring that $\mathscr{K}$ be placed in degree $n-2$ in $\mathscr{H}^\bullet_2.$ Since the functors $\Gamma(\mathop{\mathrm{\mathcal{O}}}_K,-)$ and $\mathop{\mathrm{\mathrm{Lie}}}-$ are left exact, we see the following: On the one hand, denoting by $D_j^{\mu}$ the group schemes constructed in Proposition [\[Diconstructprop\]](#Diconstructprop){reference-type="ref" reference="Diconstructprop"} for $\mathscr{H}^\bullet_{\mu}$ ($\mu=1,2$), and by $D_j$ those constructed for $\mathscr{G}^\bullet$, we find $D_i=D_i^1$ for $i\leq n-1,$ $D^2_{n-2}=D^2_{n-1}=0,$ and $D_j=D^2_j$ for $j\geq n.$ On the other hand, $H^{i}(\mathop{\mathrm{\mathrm{Lie}}}\mathscr{G}^\bullet) = H^{i}(\mathop{\mathrm{\mathrm{Lie}}}\mathscr{H}^\bullet_1)$ for $i \leq n-1,$ $H^{n-2}(\mathop{\mathrm{\mathrm{Lie}}}\mathscr{H}^\bullet_2)=H^{n-1}(\mathop{\mathrm{\mathrm{Lie}}}\mathscr{H}^\bullet_2)$=0, and $H^{j}(\mathop{\mathrm{\mathrm{Lie}}}\mathscr{H}^\bullet_2)=H^{j}(\mathop{\mathrm{\mathrm{Lie}}}\mathscr{G}^\bullet)$ for $j\geq n.$ Moreover, writing $\delta:=\ell_{\mathop{\mathrm{\mathcal{O}}}_K}(\mathrm{coker}(\mathop{\mathrm{\mathrm{Lie}}}\widetilde{\mathscr{K}}\to\mathop{\mathrm{\mathrm{Lie}}}\mathscr{K})),$ we see that $\chi_{\mathrm{Lie}}(\widetilde{\mathscr{H}}^\bullet_1) = \chi_{\mathrm{Lie}}(\mathscr{H}_1^\bullet)-(-1)^{n-1}\delta$ and $\chi_{\mathrm{Lie}}(\widetilde{\mathscr{H}}^\bullet_2) = \chi_{\mathrm{Lie}}(\mathscr{H}_2^\bullet)+(-1)^{n-1}\delta.$ This implies that $$\chi_{\mathrm{Lie}}(\mathscr{G}^\bullet)=\chi_{\mathrm{Lie}}(\widetilde{\mathscr{H}}_1^\bullet)+\chi_{\mathrm{Lie}}(\widetilde{\mathscr{H}}_2^\bullet),$$ as well as $$\chi_{\mathrm{points}}(\mathscr{G}^\bullet)=\chi_{\mathrm{points}}(\mathscr{H}_1^\bullet)+\chi_{\mathrm{points}}(\mathscr{H}_2^\bullet) =\chi_{\mathrm{points}}(\widetilde{\mathscr{H}}_1^\bullet)+\chi_{\mathrm{points}}(\widetilde{\mathscr{H}}_2^\bullet).$$ The equality $$\chi_{\mathrm{Lie}}(\widetilde{\mathscr{H}}_2^\bullet) = \chi_{\mathrm{points}}(\widetilde{\mathscr{H}}_2^\bullet)$$ is precisely [@LLR Theorem 2.1(b)]. Hence the statement for $n$ reduces to the statement for $n - 1$. By induction, we get the result. ◻ ## Complexes of Néron models Let $G^\bullet:\,0 \to T \to B \to A \to 0$ be an exact sequence of semiabelian varieties over $K$ and let $$\mathscr{G}^\bullet\colon \, 0 \to \mathscr{T} \to \mathscr{B}\to \mathscr{A} \to 0$$ be the associated complex of Néron lft-models over $\mathop{\mathrm{\mathcal{O}}}_K.$ Let $L$ be a finite separable extension of $K$ such that $T,$ $B,$ and $A$ all acquire semiabelian reduction over the integral closure $\mathop{\mathrm{\mathcal{O}}}_L$ of $\mathop{\mathrm{\mathcal{O}}}_K$ in $L,$ and let $\mathscr{T}_L,$ $\mathscr{B}_L,$ and $\mathscr{A}_L$ be the Néron lft-models of $T_L,$ $B_L,$ and $A_L,$ respectively. 
The induced complex of Néron lft-models will be called $\mathscr{G}_L^\bullet.$ Following Chai [@Chai], we define the *base change conductor of $B$ as $$c(B):=\frac{1}{[L:K]} \ell_{\mathop{\mathrm{\mathcal{O}}}_L}(\mathrm{coker}((\mathop{\mathrm{\mathrm{Lie}}}\mathscr{B})\otimes_{\mathop{\mathrm{\mathcal{O}}}_K}\mathop{\mathrm{\mathcal{O}}}_L \to \mathop{\mathrm{\mathrm{Lie}}}\mathscr{B}_L)),$$ and similarly for $T$ and $A.$ Then we have the following* **Conjecture 11**. *(Chai [@Chai]) *Assume that $T$ is a torus and that $A$ is an Abelian variety. Then $$c(B)=c(T)+c(A).$$** The main difficulty comes from the fact that Néron lft-models do not in general commute with ramified base change, and that the complex $\mathscr{G}^\bullet$ above is usually not exact. We shall see that, in general, the additivity of the base change conductor as in Chai's conjecture (without the hypotheses on $T$ and $A$) is equivalent to a growth condition on the invariant $\gamma(-)=\chi_{\mathrm{Lie}}(-)$, generalising [@CLN Proposition 4.1.1]: **Proposition 12**. *Let $G^\bullet$ be as above. Then we have $$c(B)-c(T)-c(A)=\gamma(\mathscr{G}^\bullet)-\frac{1}{[L:K]}\gamma(\mathscr{G}_L^\bullet)\in \frac{1}{[L:K]}\mathop{\mathrm{\mathbf{Z}}}.$$ [\[Chigammaprop\]]{#Chigammaprop label="Chigammaprop"} In particular, the following are equivalent:\ (i) $c(B)=c(T)+c(A),$\ (ii) $\gamma(\mathscr{G}_L^\bullet)=[L:K]\gamma(\mathscr{G}^\bullet),$ and\ (iii) $\chi_{\mathrm{Lie}}(\mathscr{G}_L^\bullet)=[L:K]\chi_{\mathrm{Lie}}(\mathscr{G}^\bullet)$* *Proof.* Let $\mathfrak{m}_L$ be the maximal ideal in $\mathop{\mathrm{\mathcal{O}}}_L,$ and write $d:=[L:K].$ By definition, we have $$\mathfrak{m}_L^{d(c(B)-c(T)-c(A))}\det \mathop{\mathrm{\mathrm{Lie}}}\mathscr{G}_L^\bullet = (\det \mathop{\mathrm{\mathrm{Lie}}}\mathscr{G}^\bullet)\otimes_{\mathop{\mathrm{\mathcal{O}}}_K}\mathop{\mathrm{\mathcal{O}}}_L = \mathfrak{m}_L^{d\gamma(\mathscr{G}^\bullet)}.$$ Moreover, again by definition, we have $$\det \mathop{\mathrm{\mathrm{Lie}}}\mathscr{G}_L^\bullet = \mathfrak{m}_L^{\gamma(\mathscr{G}_L^\bullet)}.$$ Plugging this into the first equation yields $$\mathfrak{m}_L^{d(c(B)-c(T)-c(A)) + \gamma(\mathscr{G}_L^{\bullet})} = \mathfrak{m}_L^{d\gamma(\mathscr{G}^\bullet)},$$ so $d(c(B)-c(T)-c(A)) =d\gamma(\mathscr{G}^\bullet) - \gamma(\mathscr{G}_L^{\bullet}),$ which implies the first claim. The equivalence between (i), (ii), and (iii) follows immediately using Lemma [\[chigammalem\]](#chigammalem){reference-type="ref" reference="chigammalem"}. ◻ *Remark 13*. If $T$ is a torus, $T_L$ is split because $T_L$ has semiabelian reduction and $k$ is algebraically closed. In particular, the complex $\mathscr{G}^\bullet_L$ is exact [@Chai Remark 4.8 (a)], so $\gamma(\mathscr{G}^\bullet_L)=0.$ Therefore we recover Proposition 4.1.1 from [@CLN], which says that $(i)$ is equivalent to $\gamma(\mathop{\mathrm{\mathrm{Lie}}}\mathscr{G}^\bullet)=0$ in this case. We have now assembled the tools required to prove Chai's conjecture. In fact, we shall prove a slightly stronger statement. For any semiabelian variety $B,$ we denote by $B^{\mathrm{b}}$ the quotient of $B$ by its maximal *split subtorus.* **Theorem 14**. *(Chai's conjecture) *Let $0\to T\to B\to A\to 0$ be an exact sequence of semiabelian varieties over $K$ and suppose that $T$ is a torus.[\[Chaithm\]]{#Chaithm label="Chaithm"} Then $$c(B)=c(T)+c(A).$$** *Proof.* We shall give two related (but independent) proofs. 
Let $\mathscr{G}^\bullet$ be the complex $0\to \mathscr{T}\to \mathscr{B}\to \mathscr{A}\to 0$ of Néron lft-models induced by the sequence from the Theorem. Moreover, let $T'$ be the kernel of the map $B^{\mathrm{b}}\to A^{\mathrm{b}}.$ By [@CLN Proposition 3.4.2 (1)], we obtain an exact sequence $$0\to T'\to B^{\mathrm{b}} \to A^{\mathrm{b}} \to 0,$$ as well as an isogeny $T^{\mathrm{b}}\to T'.$ Because $c(B)=c(B^{\mathrm{b}})$ for any semiabelian variety $B$ [@CLN Lemma 4.13], and because the base change conductor is invariant under isogeny for algebraic tori ([@CY Theorem on p. 367 and Theorem 12.1] or Theorem [Theorem 56](#0025){reference-type="ref" reference="0025"} below), we may assume that $T,$ $A,$ and $B$ contain no split subtorus, and hence that their Néron lft-models are of finite type [@BLR Chapter 10.2, Theorem 1]. Therefore we can apply our previous results to $\mathscr{G}^\bullet.$ Because $k$ is algebraically closed, we have $H^1(K, T)=0$ [@Chai Lemma 4.3], so the sequence $0\to T(K)\to B(K)\to A(K)\to 0$ is exact. However, this sequence is canonically identified with the complex $\mathscr{G}^\bullet(\mathop{\mathrm{\mathcal{O}}}_K)$ by the universal property of the Néron lft-model, so we find that $\chi_{\mathrm{points}}(\mathscr{G}^\bullet)=0.$ Theorem [\[Liepointsthm\]](#Liepointsthm){reference-type="ref" reference="Liepointsthm"} now shows that $\chi_{\mathrm{Lie}}(\mathscr{G}^\bullet)=0,$ which, in turn, implies that $$\gamma(\mathop{\mathrm{\mathrm{Lie}}}\mathscr{G}^\bullet)=0.$$ However, this is equivalent to the claim that $c(B)=c(T)+c(A)$ by Proposition [\[Chigammaprop\]](#Chigammaprop){reference-type="ref" reference="Chigammaprop"} and the Remark thereafter. Note that, if $A$ has no split subtorus (as is the case in Chai's original conjecture), then the sequence $0\to T^{\mathrm{b}}\to B^{\mathrm{b}}\to A^{\mathrm{b}}\to 0$ is exact [@CLN Proposition 3.4.2(2)], so we do not need to invoke the invariance of $c(-)$ under isogeny for tori in that case.

We shall now give a second argument, which allows us to circumvent the use of the isogeny invariance of $c(-)$ in all cases: Consider the commutative diagram $$\begin{CD} &&0&& 0 && 0\\ &&@VVV@VVV@VVV\\ 0@>>>\mathscr{T}^{0}(\mathop{\mathrm{\mathcal{O}}}_K)@>>>\mathscr{B}^{0}(\mathop{\mathrm{\mathcal{O}}}_K) @>>> \mathscr{A}^{0}(\mathop{\mathrm{\mathcal{O}}}_K)@>>>0\\ &&@VVV@VVV@VVV\\ 0@>>>\mathscr{T}(\mathop{\mathrm{\mathcal{O}}}_K)@>>>\mathscr{B}(\mathop{\mathrm{\mathcal{O}}}_K) @>>> \mathscr{A}(\mathop{\mathrm{\mathcal{O}}}_K)@>>>0\\ &&@VVV@VVV@VVV\\ 0@>>> \Phi_T @>>> \Phi_B@>>>\Phi_A @>>>0\\ &&@VVV@VVV@VVV\\ &&0&& 0 && 0, \end{CD}$$ where $\Phi_T,$ $\Phi_B,$ and $\Phi_A$ are the obvious cokernels. Denoting the first non-zero row of this diagram by $\mathscr{G}^{0, \bullet}(\mathop{\mathrm{\mathcal{O}}}_K)$ and the third by $\Phi^\bullet,$ we obtain the exact triangle $$\mathscr{G}^{0, \bullet}(\mathop{\mathrm{\mathcal{O}}}_K) \to \mathscr{G}^\bullet(\mathop{\mathrm{\mathcal{O}}}_K) \to \Phi^\bullet \to \mathscr{G}^{0, \bullet}(\mathop{\mathrm{\mathcal{O}}}_K)[1]$$ in the bounded derived category of Abelian groups. This follows from the fact that all columns of the diagram above are exact. However, as in the first proof, we see that the middle row in the diagram is exact, so it is quasi-isomorphic to the zero object. In particular, the morphism $$\Phi^\bullet \to \mathscr{G}^{0, \bullet}(\mathop{\mathrm{\mathcal{O}}}_K)[1]$$ is a quasi-isomorphism.
For $i=1,2,3,$ denote by $D_i$ the group schemes constructed in Proposition [\[Diconstructprop\]](#Diconstructprop){reference-type="ref" reference="Diconstructprop"} from the complex $\mathscr{G}^{0,\bullet}.$ By [@HN Proposition 3.5], the terms appearing in $\Phi^\bullet$ are finitely generated, and hence so are the $D_i(k)\cong H^ i(\Phi^\bullet[-1]).$ Since $k$ is algebraically closed, the groups $D^0_j(k)$ are $n$-divisible for any $n\in \mathop{\mathrm{\mathbf{N}}}$ invertible in $k.$ Since they are also finitely generated, we see that $D_j^0(k)$ is finite for all $j,$ so $D_j^0=0$ for all $j$ since $k$-points are dense in any smooth $k$-scheme. This shows that the $D_j$ are étale and hence $\dim_kD_j=0$ for all $j,$ which implies in particular that $\chi_{\mathrm{points}}(\mathscr{G}^{0,\bullet})=0.$ Because the map $\mathop{\mathrm{\mathrm{Lie}}}\mathscr{G}^{0,\bullet}\to \mathop{\mathrm{\mathrm{Lie}}}\mathscr{G}^\bullet$ is an isomorphism of complexes, we deduce that $\gamma(\mathop{\mathrm{\mathrm{Lie}}}\mathscr{G}^\bullet)=0$ and conclude as in the first proof. ◻ It is possible to generalise this result slightly, as follows: **Theorem 15**. *Let $0\to T\to B\to A \to 0$ be an exact sequence of semiabelian varieties over $K.$ Let $\mathscr{G}^\bullet$ be the induced complex of Néron lft-models over $\mathop{\mathrm{\mathcal{O}}}_K.$ [\[generalthm\]]{#generalthm label="generalthm"} Then the following are equivalent:\ (i) $\gamma(\mathscr{G}^\bullet)=0$\ (ii) the Abelian group $$\Delta:=\mathrm{coker}(B(K)\to A(K))=\ker(H ^ 1(K,T) \to H ^1 (K,B))$$ is finite.* *In particular, if $B(L) \to A(L)$ is surjective for all finite separable extensions $L$ of $K,$ then $c(B)=c(T)+c(A).$* *Proof.* Let $$\mathscr{G}^\bullet:\, 0\to\mathscr{T}\to\mathscr{B}\to\mathscr{A}\to 0$$ be the complex of Néron lft-models induced by the sequence from the Theorem. For $i=1,2,3,$ let $D_i$ be the canonical $k$-group scheme such that $H^i(\mathscr{G}^{0,\bullet}(\mathop{\mathrm{\mathcal{O}}}_K))=D_i(k)$ constructed in Proposition [\[Diconstructprop\]](#Diconstructprop){reference-type="ref" reference="Diconstructprop"}. We shall use the same notation as in the proof of Theorem [\[Chaithm\]](#Chaithm){reference-type="ref" reference="Chaithm"}. Since the sequence $$0\to T(K)\to B(K) \to A(K)$$ is exact, the long exact cohomology sequence induced by the exact triangle $\mathscr{G}^{0, \bullet}(\mathop{\mathrm{\mathcal{O}}}_K) \to \mathscr{G}^\bullet(\mathop{\mathrm{\mathcal{O}}}_K) \to \Phi^\bullet \to \mathscr{G}^{0, \bullet}(\mathop{\mathrm{\mathcal{O}}}_K)[1]$ gives us the canonical isomorphisms $H^1(\mathscr{G}^{0,\bullet}(\mathop{\mathrm{\mathcal{O}}}_K))=0$ and $H^2(\mathscr{G}^{0,\bullet}(\mathop{\mathrm{\mathcal{O}}}_K))=H^1(\Phi^\bullet),$ as well as a diagram $$\begin{CD}H^2(\Phi^\bullet)@>>> H^3(\mathscr{G}^{0,\bullet}(\mathop{\mathrm{\mathcal{O}}}_K))@>>> H^3(\mathscr{G}^\bullet(\mathop{\mathrm{\mathcal{O}}}_K))@>>> H^3(\Phi^\bullet)\\ &&@V{\cong}VV@VV{\cong}V\\ &&D_3(k)&& \Delta. 
\end{CD}$$ As in the proof of Theorem [\[Chaithm\]](#Chaithm){reference-type="ref" reference="Chaithm"}, we see that the $H^i(\Phi^\bullet)$ are finitely generated for all $i,$ so that (ii) is true if and only if $D_3(k)$ is finitely generated, which happens if and only if $\dim_k D_3=0.$ Moreover, since $D_1(k)=0$ and $D_2(k)$ is finitely generated, we see that $$\gamma(\mathscr{G}^\bullet)=\dim_k D_3.$$ These observations together imply the equivalence $(i) \iff (ii).$ The last claim now follows from Proposition [\[Chigammaprop\]](#Chigammaprop){reference-type="ref" reference="Chigammaprop"}. ◻ *Remark 16*. We shall see later that the first proof we gave above cannot be generalised to give another proof of Theorem [\[generalthm\]](#generalthm){reference-type="ref" reference="generalthm"}. The reason is that the invariance of $c(-)$ under isogeny does not generalise to all semiabelian varieties. # The ind-rational pro-étale site {#Indrat-Proet-Section} We will recall from [@Suz] some notation and facts about the ind-rational pro-étale site and ind- and pro-algebraic groups. As before, let $k$ be an algebraically closed (or, more generally, a perfect) field of characteristic $p > 0$. As in [@Suz Section 2.1], a $k$-algebra is said to be *rational* if it can be written as a finite product $k_{1}' \times \dots \times k_{n}'$, where each $k_{i}'$ is the perfection (direct limit along Frobenius) of a finitely generated field over $k$. A $k$-algebra is said to be *ind-rational* if it is a filtered direct limit of rational $k$-algebras. Define a Grothendieck site $\mathop{\mathrm{\operatorname{Spec}}}k^{\mathrm{indrat}}_{\mathrm{proet}}$ to be (the opposite of) the category of ind-rational $k$-algebras with $k$-algebra homomorphisms endowed with the pretopology where a covering $\{k' \to k_{i}'\}_{i}$ is a finite family of ind-étale homomorphisms such that $k' \to \prod_{i} k_{i}'$ is faithfully flat. Let $\operatorname{Ab}(k^{\mathrm{indrat}}_{\mathrm{proet}})$ be the category of sheaves of abelian groups on $\mathop{\mathrm{\operatorname{Spec}}}k^{\mathrm{indrat}}_{\mathrm{proet}}$. Let $\operatorname{\mathbf{Hom}}_{k^{\mathrm{indrat}}_{\mathrm{proet}}}$ be the sheaf-Hom functor for $\operatorname{Ab}(k^{\mathrm{indrat}}_{\mathrm{proet}})$ and $\operatorname{\mathbf{Ext}}^{n}_{k^{\mathrm{indrat}}_{\mathrm{proet}}}$ its $n$-th right derived functor. Let $\mathrm{Alg} / k$ be the category of perfections (inverse limits along Frobenius) of commutative algebraic groups over $k$ with $k$-group scheme morphisms. (Here an algebraic group over $k$ means a quasi-compact smooth group scheme over $k$.) Let $\mathrm{PAlg} / k$ be the pro-category of $\mathrm{Alg} / k$, $\mathrm{IAlg} / k$ the ind-category of $\mathrm{Alg} / k$ and $\mathrm{IPAlg} / k$ the ind-category of $\mathrm{PAlg} / k$. The endofunctors $(\;\cdot\;)^{0}$ (identity component) and $\pi_{0}$ (component group) on $\mathrm{Alg} / k$ naturally extend to endofunctors on $\mathrm{IPAlg} / k$. An object $G \in \mathrm{IPAlg} / k$ is said to be *connected* if $G^{0} \overset{\sim}{\to} G$. The natural functors $$\begin{CD} \mathrm{Alg} / k @>>> \mathrm{PAlg} / k @. \\ @VVV @VVV @. \\ \mathrm{IAlg} / k @>>> \mathrm{IPAlg} / k @>>> \operatorname{Ab}(k^{\mathrm{indrat}}_{\mathrm{proet}}) \end{CD}$$ are fully faithful exact by [@Suz Proposition (2.3.4)], where the right lower functor is the Yoneda functor. 
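To fix ideas, we record a few standard examples of these notions; they are included only for orientation and are not needed in what follows. The constant group scheme $\mathop{\mathrm{\mathbf{Z}}}/p^{n}$, the perfection of $\mathbf{G}_{\mathrm{a}}$ and the perfection of any Abelian variety over $k$ are objects of $\mathrm{Alg} / k$; the pro-object $$\varprojlim_{n \ge 1} W_{n}^{\mathrm{perf}},$$ formed from the perfections of the groups of $p$-typical Witt vectors of finite length, is an object of $\mathrm{PAlg} / k$; and $\mathop{\mathrm{\mathbf{Q}}}_{p}/\mathop{\mathrm{\mathbf{Z}}}_{p} = \varinjlim_{n \ge 1} \mathop{\mathrm{\mathbf{Z}}}/p^{n}$ is an object of $\mathrm{IAlg} / k$. Likewise, $k$ itself, the perfection of any finitely generated field over $k$ and an algebraic closure $\overline{k(t)}$ (a filtered union of such perfections) are ind-rational $k$-algebras, whereas $k[t]$ is not, since ind-rational $k$-algebras are perfect.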
If an object $G \in \mathrm{Alg} / k$ is the perfection of a unipotent group, then by [@Suz Proposition (2.4.1) (b)], the sheaf $\operatorname{\mathbf{Hom}}_{k^{\mathrm{indrat}}_{\mathrm{proet}}} (G, \mathop{\mathrm{\mathbf{Q}}}/ \mathop{\mathrm{\mathbf{Z}}})$ agrees with the Pontryagin dual of $\pi_{0}(G)$, the sheaf $\operatorname{\mathbf{Ext}}^{1}_{k^{\mathrm{indrat}}_{\mathrm{proet}}} (G, \mathop{\mathrm{\mathbf{Q}}}/ \mathop{\mathrm{\mathbf{Z}}})$ agrees with the Breen-Serre dual ([@Milne Chapter III, Lemma 0.13]) of $G^{0}$ and the sheaf $\operatorname{\mathbf{Ext}}^{n}_{k^{\mathrm{indrat}}_{\mathrm{proet}}} (G, \mathop{\mathrm{\mathbf{Q}}}/ \mathop{\mathrm{\mathbf{Z}}})$ vanishes for $n \ge 2$. The Breen-Serre dual of $G$ has the same dimension as $G$ by [@Beg Proposition 1.2.1]. Any object $G \in \mathrm{IAlg} / k$ commutes with filtered direct limits as a functor from the category of ind-rational $k$-algebras to the category of abelian groups. In other words, $G$ is locally of finite presentation as a sheaf; see the paragraphs after [@Suz Proposition (2.4.1)] for more details about this notion. In particular, $G$ is determined by values on rational $k$-algebras and hence by values on perfections of finitely generated fields over $k$, with $$G(k') \cong G(\overline{k'})^{\mathop{\mathrm{\operatorname{Gal}}}(\overline{k'} / k')}$$ functorially in perfect fields $k'$ over $k$. As before, let $K$ be a complete discrete valuation field with ring of integers $\mathcal{O}_{K}$ and residue field $k$. We have a canonical $W(k)$-algebra structure on $\mathcal{O}_{K}$, where $W$ denotes the functor of $p$-typical Witt vectors of infinite length. This structure $W(k) \to \mathcal{O}_{K}$ factors through $k$ if $K$ has equal characteristic. As in [@Suz Section 3.1], for any ind-rational $k$-algebra $k'$, define $$\begin{gathered} \mathbf{O}_{K}(k') = \varprojlim_{n \ge 1} \bigl( W(k') \otimes_{W(k)} \mathcal{O}_{K} / \mathfrak{m}^{n} \bigr), \\ \mathbf{K}(k') = \mathbf{O}_{K}(k') \otimes_{\mathcal{O}_{K}} K. \end{gathered}$$ If $k'$ is a (perfect) field, then the ring $\mathbf{O}_{K}(k')$ is a complete discrete valuation ring with residue field $k'$ and ramification index $1$ over $\mathcal{O}_{K}$ in the sense of [@BLR Section 3.6, Definition 1]. For any fppf sheaf of abelian groups $G$ on $\mathcal{O}_{K}$ and any integer $n$, define a sheaf $\mathbf{H}^{n}(\mathcal{O}_{K}, G)$ on $\mathop{\mathrm{\operatorname{Spec}}}k^{\mathrm{indrat}}_{\mathrm{proet}}$ to be the sheafification of the presheaf $$k' \mapsto H^{n}(\mathbf{O}_{K}(k'), G),$$ where $k'$ runs through ind-rational $k$-algebras. [^2] Set $\mathbf{\Gamma} = \mathbf{H}^{0}$. If $k'$ is an algebraically closed field extension of $k$, then $$\mathbf{H}^{n}(\mathcal{O}_{K}, G)(k') \cong H^{n}(\mathbf{O}_{K}(k'), G)$$ (that is, the sheafification is unnecessary for $k'$-valued points). Similarly, for any fppf sheaf of abelian groups $G$ on $K$, define a sheaf $\mathbf{H}^{n}(K, G)$ on $\mathop{\mathrm{\operatorname{Spec}}}k^{\mathrm{indrat}}_{\mathrm{proet}}$ to be the sheafification of the presheaf $$k' \mapsto H^{n}(\mathbf{K}(k'), G).$$ Set $\mathbf{\Gamma} = \mathbf{H}^{0}$. If $k'$ is an algebraically closed field extension of $k$, then $$\mathbf{H}^{n}(K, G)(k') \cong H^{n}(\mathbf{K}(k'), G).$$ We recall the following results: **Proposition 17** ([@Suz Proposition (3.4.3) (a)]). *Let $N$ be a commutative finite flat group scheme over $K$.* 1. *The sheaf $\mathbf{\Gamma}(K, N)$ is a finite étale group scheme over $k$.* 2. 
*The sheaf $\mathbf{H}^{1}(K, N)$ is an object of $\mathrm{IPAlg} / k$.* 3. *We have $\mathbf{H}^{n}(K, N) = 0$ for $n \ge 2$.* **Proposition 18** ([@Suz Propositions (3.4.2) (b) and (3.4.6)]). *Let $N$ be a commutative finite flat group scheme over $\mathcal{O}_{K}$.* 1. *The sheaf $\mathbf{\Gamma}(\mathcal{O}_{K}, N)$ is a finite étale group scheme over $k$.* 2. *The sheaf $\mathbf{H}^{1}(\mathcal{O}_{K}, N)$ is an object of $\mathrm{PAlg} / k$ that is connected.* 3. *The sheaf $\mathbf{H}^{1}(K, N) / \mathbf{H}^{1}(\mathcal{O}_{K}, N)$ is an object of $\mathrm{IAlg} / k$.* 4. *We have $\mathbf{H}^{n}(\mathcal{O}_{K}, N) = 0$ for $n \ge 2$.* **Proposition 19** ([@Suz Theorems (5.2.1.2) and (5.2.2.1)]). *Assume that $K$ has equal characteristic. Let $N$ be a commutative finite flat group scheme over $\mathcal{O}_{K}$ with Cartier dual $M$. Then there exist canonical isomorphisms $$\begin{gathered} \mathbf{H}^{1}(K, N)^{0} / \mathbf{H}^{1}(\mathcal{O}_{K}, N) \cong \operatorname{\mathbf{Ext}}^{1}_{k^{\mathrm{indrat}}_{\mathrm{proet}}} \bigl( \mathbf{H}^{1}(\mathcal{O}_{K}, M), \mathop{\mathrm{\mathbf{Q}}}/ \mathop{\mathrm{\mathbf{Z}}} \bigr), \\ \pi_{0}(\mathbf{H}^{1}(K, N)) \cong \operatorname{\mathbf{Hom}}_{k^{\mathrm{indrat}}_{\mathrm{proet}}} \bigl( \mathbf{\Gamma}(K, M), \mathop{\mathrm{\mathbf{Q}}}/ \mathop{\mathrm{\mathbf{Z}}} \bigr). \end{gathered}$$ In particular, $\pi_{0}(\mathbf{H}^{1}(K, N))$ is finite étale.* **Proposition 20** ([@Suz Proposition (3.4.3) (d)]). *Let $A$ be an Abelian variety over $K$.* 1. *The sheaf $\mathbf{\Gamma}(K, A)$ is an object of $\mathrm{PAlg} / k$. It is the perfection of the Greenberg transform of infinite level of the Néron model of $A$.* 2. *The sheaf $\mathbf{H}^{1}(K, A)$ is an object of $\mathrm{IAlg} / k$.* 3. *We have $\mathbf{H}^{n}(K, A) = 0$ for $n \ge 2$.* **Proposition 21** ([@Suz Theorem (7.2)]). *Let $A$ be an Abelian variety over $K$ with dual $B$. Then there exists a canonical isomorphism $$\mathbf{H}^{1}(K, A) \cong \operatorname{\mathbf{Ext}}^{1}_{k^{\mathrm{indrat}}_{\mathrm{proet}}} \bigl( \mathbf{\Gamma}(K, B), \mathop{\mathrm{\mathbf{Q}}}/ \mathop{\mathrm{\mathbf{Z}}} \bigr).$$* Below we give a few more. **Proposition 22**. *The statements of Proposition [Proposition 19](#FinDuality){reference-type="ref" reference="FinDuality"} remain true even if $K$ has mixed characteristic.* *Proof.* This is basically a translation of Bégueri's results [@Beg]. More precisely, in [@Beg Section 4.2], she gives $H^{1}(\mathcal{O}_{K}, N)$ a structure as the perfection of an algebraic group over $k$, which is defined as the cokernel of $\operatorname{Gr}^{\mathrm{perf}}(G^{1}) \to \operatorname{Gr}^{\mathrm{perf}}(G^{2})$, where $0 \to N \to G^{1} \to G^{2} \to 0$ is a resolution of $N$ by commutative smooth group schemes over $\mathcal{O}_{K}$ and $\operatorname{Gr}^{\mathrm{perf}}$ is the perfection of the Greenberg transform of infinite level. By [@Suz Proposition (3.4.2)] and its proof, we can see that this algebraic structure agrees with our $\mathbf{H}^{1}(\mathcal{O}_{K}, N)$. Similarly, in [@Beg Section 4.3], her algebraic structure for $H^{1}(K, N)$ is defined as the cokernel of $\operatorname{Gr}^{\mathrm{perf}}(\mathscr{T}^{1}) \to \operatorname{Gr}^{\mathrm{perf}}(\mathscr{T}^{2})$, where $0 \to N_{K} \to T^{1} \to T^{2} \to 0$ is a resolution of the generic fiber $N_{K}$ by tori and $\mathscr{T}^{i}$ is the Néron lft-model of $T^{i}$. By [@Suz Proposition (3.4.3) (e)], this agrees with our $\mathbf{H}^{1}(K, N)$. 
Then our statements correspond to [@Beg Proposition 6.1.2 and Theorem 6.3.2]. ◻ **Proposition 23**. *Let $N$ be a commutative finite flat group scheme over $\mathcal{O}_{K}$. Assume that its generic fiber $N_{K}$ is étale. Then $\mathbf{H}^{1}(\mathcal{O}_{K}, N)$ is an object of $\mathrm{Alg} / k$. Its dimension is the $\mathcal{O}_{K}$-length of the pullback along the zero section of $\Omega^{1}_{N / \mathcal{O}_{K}}$. In particular, $\mathbf{H}^{1}(\mathcal{O}_{K}, N)$ is zero if and only if $N$ is étale (over $\mathcal{O}_{K}$).* *Proof.* As in the proof of Proposition [Proposition 22](#FinDualMixed){reference-type="ref" reference="FinDualMixed"}, we can see that $\mathbf{H}^{1}(\mathcal{O}_{K}, N)$ agrees with the perfection of the algebraic structure on $H^{1}(\mathcal{O}_{K}, N)$ defined in [@BGA Section 16]. Therefore the result follows from [@BGA Theorem 16.3 and Proposition 16.6]. ◻ **Proposition 24**. *Let $N$ be a commutative finite flat group scheme over $\mathcal{O}_{K}$. Then $\mathbf{H}^{1}(K, N)^{0} / \mathbf{H}^{1}(\mathcal{O}_{K}, N)$ is a direct limit indexed by $\mathop{\mathrm{\mathbf{N}}}$ of perfections of connected unipotent algebraic groups over $k$ with injective transition morphisms.* *Proof.* Let $M$ be the Cartier dual of $N$. By [@BGA Proposition 16.1 (ii) and Lemma 16.2], the group $\mathbf{H}^{1}(\mathcal{O}_{K}, M)$ is an inverse limit $\varprojlim_{n \ge 1} G_{n}$ of perfections of connected unipotent algebraic groups over $k$ such that each transition morphism $\varphi_{n} \colon G_{n + 1} \to G_{n}$ is surjective with connected kernel. Hence $$\operatorname{\mathbf{Ext}}^{1}_{k^{\mathrm{indrat}}_{\mathrm{proet}}} \bigl( \mathbf{H}^{1}(\mathcal{O}_{K}, M), \mathop{\mathrm{\mathbf{Q}}}/ \mathop{\mathrm{\mathbf{Z}}} \bigr) \cong \varinjlim_{n \ge 1} \operatorname{\mathbf{Ext}}^{1}_{k^{\mathrm{indrat}}_{\mathrm{proet}}} (G_{n}, \mathop{\mathrm{\mathbf{Q}}}/ \mathop{\mathrm{\mathbf{Z}}})$$ by [@Suz Proposition (2.3.3) (c)]. Each term $\operatorname{\mathbf{Ext}}^{1}_{k^{\mathrm{indrat}}_{\mathrm{proet}}} (G_{n}, \mathop{\mathrm{\mathbf{Q}}}/ \mathop{\mathrm{\mathbf{Z}}})$ is the perfection of a connected unipotent algebraic group by [@Suz Proposition (2.4.1) (b)]. Let $G_{n}' = \mathop{\mathrm{\mathrm{ker}}}(\varphi_{n})$. Then we have an exact sequence $$0 \to \operatorname{\mathbf{Ext}}^{1}_{k^{\mathrm{indrat}}_{\mathrm{proet}}} (G_{n}, \mathop{\mathrm{\mathbf{Q}}}/ \mathop{\mathrm{\mathbf{Z}}}) \to \operatorname{\mathbf{Ext}}^{1}_{k^{\mathrm{indrat}}_{\mathrm{proet}}} (G_{n + 1}, \mathop{\mathrm{\mathbf{Q}}}/ \mathop{\mathrm{\mathbf{Z}}}) \to \operatorname{\mathbf{Ext}}^{1}_{k^{\mathrm{indrat}}_{\mathrm{proet}}} (G_{n}', \mathop{\mathrm{\mathbf{Q}}}/ \mathop{\mathrm{\mathbf{Z}}}) \to 0$$ by [@Suz Proposition (2.4.1) (a)]. Now the result follows from Propositions [Proposition 19](#FinDuality){reference-type="ref" reference="FinDuality"} and [Proposition 22](#FinDualMixed){reference-type="ref" reference="FinDualMixed"}. ◻ The following proposition requires $k$ to be algebraically closed. **Proposition 25**. *Let $N$ be a commutative finite flat group scheme over $\mathcal{O}_{K}$. Assume that its generic fiber is multiplicative. Then $H^{1}(K, N) / H^{1}(\mathcal{O}_{K}, N)$ is finite if and only if $N$ is multiplicative (over $\mathcal{O}_{K}$).* *Proof.* Let $M$ be the Cartier dual of $N$. 
Since $\pi_{0}(\mathbf{H}^{1}(K, N))$ is always finite étale by Proposition [Proposition 19](#FinDuality){reference-type="ref" reference="FinDuality"}, the finiteness of $H^{1}(K, N) / H^{1}(\mathcal{O}_{K}, N)$ is equivalent to the finiteness of $\bigl( \mathbf{H}^{1}(K, N)^{0} / \mathbf{H}^{1}(\mathcal{O}_{K}, N) \bigr)(k)$. By Proposition [Proposition 24](#LocCohStr){reference-type="ref" reference="LocCohStr"}, this in turn is equivalent to the vanishing of $\mathbf{H}^{1}(K, N)^{0} / \mathbf{H}^{1}(\mathcal{O}_{K}, N)$. By Propositions [Proposition 19](#FinDuality){reference-type="ref" reference="FinDuality"} and [Proposition 22](#FinDualMixed){reference-type="ref" reference="FinDualMixed"} and the connectedness of $\mathbf{H}^{1}(\mathcal{O}_{K}, M)$, this in turn is equivalent to $\mathbf{H}^{1}(\mathcal{O}_{K}, M) = 0$, which is equivalent to $M$ being étale by Proposition [Proposition 23](#EtCriterion){reference-type="ref" reference="EtCriterion"}. Cartier duality then gives the result. ◻ # Examples ## Additivity of $c(-)$ does not imply $\gamma=0$ Suppose $0\to T\to B \to A\to 0$ is an exact sequence of semiabelian varieties over $K,$ and denote by $$\mathscr{G}^\bullet:\; 0\to \mathscr{T}\to\mathscr{B}\to\mathscr{A}\to 0$$ the induced complex of Néron lft-models over $\mathop{\mathrm{\mathcal{O}}}_K.$ If $T$ is a torus, we have seen that Chai's conjecture is equivalent to the vanishing of $\gamma(\mathscr{G}^\bullet)$ (see the Remark after Proposition [\[Chigammaprop\]](#Chigammaprop){reference-type="ref" reference="Chigammaprop"}). Chai [@Chai], as well as Cluckers-Loeser-Nicaise [@CLN], have posed the question whether this still holds if we drop the assumption that $T$ be a torus. As pointed out on p. 907 of [@CLN], this latter claim is equivalent to the assertion that, if $T,$ $B,$ and $A$ all have semiabelian reduction, then $\gamma(\mathscr{G} ^ \bullet)=0.$ In this subsection, we shall construct a family of counterexamples to this claim. More precisely, we shall construct complete discrete valuation rings $\mathop{\mathrm{\mathcal{O}}}_K$ with algebraically closed residue field and exact sequences $0\to T \to B \to A \to 0$ of semiabelian varieties over $K$ with *semiabelian reduction over $\mathop{\mathrm{\mathcal{O}}}_K$ such that $\gamma(\mathscr{G}^\bullet)\not=0.$ Our construction will produce examples in arbitrary (mixed or equal) characteristic as long as the residue characteristic is positive. Note that the assumption on the reduction of our semiabelian varieties implies that all three base change conductors vanish, so the equality $c(B)=c(T)+c(A)$ is trivially verified.* We begin with a complete discrete valuation ring $\mathop{\mathrm{\mathcal{O}}}_K$ with algebraically closed residue field $k$ and write $p:=\mathrm{char}\; k.$ Let $\mathscr{F}$ be a finite flat commutative group scheme over $\mathop{\mathrm{\mathcal{O}}}_K$ with multiplicative generic fibre $\mathscr{F}_K$ and of order $p^n$ for some $n\in \mathop{\mathrm{\mathbf{N}}}.$ Moreover, we choose $\mathscr{F}$ such that it is not multiplicative globally. Note that, if $\mathrm{char}\; K >0,$ such an $\mathscr{F}$ always exists; for example one could take an anisotropic torus $T$ over $K$ and denote by $\mathscr{T}$ the Néron model of $T.$ For any $n>0,$ the scheme-theoretic closure of $T[p^n]$ in $\mathscr{T}$ has the desired properties. 
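For instance (only by way of illustration; any anisotropic torus will do), one may take $T$ to be the norm-one torus of a separable quadratic extension $M/K$ (such an extension is automatically totally ramified, since $k$ is algebraically closed): $$T=\ker\bigl(\mathrm{N}_{M/K}\colon \mathop{\mathrm{\mathrm{Res}}}_{M/K}\mathbf{G}_{\mathrm{m}}\longrightarrow \mathbf{G}_{\mathrm{m}}\bigr).$$ Its character lattice is $\mathop{\mathrm{\mathbf{Z}}}$, on which the non-trivial element of $\mathop{\mathrm{\operatorname{Gal}}}(M/K)$ acts by $-1$, so $T$ admits no non-trivial character over $K$ and is indeed anisotropic.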
In general, one can choose an elliptic curve $\mathscr{E}\to\mathop{\mathrm{\operatorname{Spec}}}\mathop{\mathrm{\mathcal{O}}}_K$ which is generically ordinary and has supersingular special fibre. Then $\mathscr{E}_K[p^n]$ has a multiplicative subgroup scheme, the scheme-theoretic closure of which in $\mathscr{E}$ again has the desired properties. By assumption, we can choose a closed immersion $\mathscr{F}_K\to T$ for some algebraic torus $T$ over $K.$ Moreover, by [@Milne Theorem A.6], there exists an Abelian scheme $\mathscr{A}\to\mathop{\mathrm{\operatorname{Spec}}}\mathop{\mathrm{\mathcal{O}}}_K$ and a closed immersion $\mathscr{F}\to\mathscr{A}.$ We obtain exact sequences $0 \to \mathscr{F}_K\to T\to T'\to 0$ over $K$ and $0\to \mathscr{F}\to \mathscr{A}\to \mathscr{A}' \to 0$ over $\mathop{\mathrm{\mathcal{O}}}_K,$ where $T'$ and $\mathscr{A}'$ are the obvious cokernels (both of which exist by [@An Théorème 4.C]). Now define the semiabelian variety $B$ such that it fits into the push-out diagram $$\begin{aligned} \begin{CD} \label{Diag1} 0 @>>> \mathscr{F}_K@>>>\mathscr{A}_K@>>>\mathscr{A}'_K@>>>0\\ &&@VVV@VVV@VV{=}V\\ 0@>>>T@>>>B@>>>\mathscr{A}'_K@>>>0. \end{CD}\end{aligned}$$ Because $H^1(K,T)=0=H^2(K,T)$ [@Chai Lemma 4.3], the map $H^1(K,B)\to H^1(K, \mathscr{A}'_K)$ is an isomorphism. Moreover, comparing the fppf-cohomology sequences induced by $0\to \mathscr{F}\to \mathscr{A}\to \mathscr{A}' \to 0$ and $0\to \mathscr{F}_K\to \mathscr{A}_K\to \mathscr{A}'_K \to 0$ yields an exact sequence $$0\to H^1(\mathop{\mathrm{\mathcal{O}}}_K, \mathscr{F})\to H^1(K, \mathscr{F}_K)\to H^1(K, \mathscr{A}_K) \to H^1(K, \mathscr{A}_K').$$ In particular, we find $$\begin{aligned} \ker (H^1(K,\mathscr{A}_K)\to H^1(K,B)) \nonumber &= \ker (H^1(K,\mathscr{A}_K)\to H^1(K,\mathscr{A}_K'))\\ \label{sequence2} &=H^1(K, \mathscr{F}_K)/H^1(\mathop{\mathrm{\mathcal{O}}}_K, \mathscr{F}).\end{aligned}$$ Now observe that, since all group schemes appearing in our construction are commutative, $B$ also fits into the push-out diagram $$\begin{aligned} \begin{CD} \label{Diag3} 0 @>>> \mathscr{F}_K@>>>T @>>>T'@>>>0\\ &&@VVV@VVV@VV{=}V\\ 0@>>>\mathscr{A}_K@>>>B@>>>T'@>>>0. \end{CD}\end{aligned}$$ Now we have

**Proposition 26**. *Let $\mathscr{G}^\bullet$ be the complex of Néron lft-models induced by the bottom row of diagram ([\[Diag3\]](#Diag3){reference-type="ref" reference="Diag3"}).
Then $\gamma(\mathscr{G}^\bullet)\not=0.$* *Proof.* Using the bottom sequence of diagram ([\[Diag3\]](#Diag3){reference-type="ref" reference="Diag3"}) as well as equation ([\[sequence2\]](#sequence2){reference-type="ref" reference="sequence2"}), we find that $$\begin{aligned} \mathrm{coker}(B(K)\to T'(K))&=\ker (H ^ 1(K,\mathscr{A}_K)\to H ^1(K,B))\\ &=H ^ 1(K, \mathscr{F}_K)/H ^ 1(\mathop{\mathrm{\mathcal{O}}}_K, \mathscr{F}),\end{aligned}$$ which is infinite by Proposition [Proposition 25](#MultCriterion){reference-type="ref" reference="MultCriterion"} and our choice of $\mathscr{F}.$ Using the same notation and reasoning as in the proof of Theorem [\[generalthm\]](#generalthm){reference-type="ref" reference="generalthm"}, we find that $\dim_k D_1=\dim_k D_2 = 0$ but $\dim_k D_3>0.$ In particular, if $\mathscr{G}^\bullet$ denotes the complex of Néron lft-models induced by the bottom row of diagram ([\[Diag3\]](#Diag3){reference-type="ref" reference="Diag3"}), then $\gamma(\mathscr{G}^\bullet)\not=0$ by Theorem [\[Liepointsthm\]](#Liepointsthm){reference-type="ref" reference="Liepointsthm"}. ◻ Finally, observe that the properties of $\mathscr{F},$ $\mathscr{A},$ $T,$ and the morphisms between those objects remain true if we replace $K$ by a finite extension. In particular, we may assume without loss of generality that $T$ is a *split torus. In this case, the bottom row of diagram ([\[Diag1\]](#Diag1){reference-type="ref" reference="Diag1"}) shows that $T,$ $B,$ and $\mathscr{A}'_K$ all have semiabelian reduction.* The following result, which we record as it might be of general interest, also shows that the step of taking a finite extension in the construction above cannot be removed: **Theorem 27**. *Assume that $\mathop{\mathrm{\mathcal{O}}}_K$ is of mixed characteristic and that $\langle p \rangle = \mathfrak{m}_K^e$ for some $e<p-1.$ Let $0\to T\to B\to A\to 0$ be an exact sequence of semiabelian varieties over $K$ with semiabelian reduction over $\mathop{\mathrm{\mathcal{O}}}_K.$ Then the induced complex $$\mathscr{G}^ \bullet: \, 0\to \mathscr{T}\to\mathscr{B}\to \mathscr{A}\to 0$$ is exact at $\mathscr{T}$ and $\mathscr{B},$ and the map $\mathscr{B}/\mathscr{T}\to\mathscr{A}$ is an open immersion. In particular, $\gamma(\mathscr{G}^ \bullet)=0.$* *Proof.* This result is proven in [@BLR Chapter 7.5, Theorem 4(ii)] in the case where $T,$ $B,$ and $A$ are all Abelian varieties. The same proof can be taken *mutatis mutandis as long as we can show that, for all $n\geq 0,$ the quasi-finite flat group schemes $\mathscr{T}^0[p^n]/\mathscr{T}^0[p^n]^ \mathrm{f}$ are generically constant, where $(-)^\mathrm{f}$ denotes the finite part of a quasi-finite scheme over a Henselian local base [@Stacks Tag 04GG (13)]. 
Note that, since $T$ has semiabelian reduction, there exist $d\in \mathop{\mathrm{\mathbf{N}}},$ an Abelian variety $E$ over $K$ with semiabelian reduction, and an exact sequence $$0\to \mathbf{G}^d_{\mathrm{m}}\to T\to E\to 0.$$ Let $\mathscr{E}$ be the Néron model of $E.$ Because the component group of the Néron lft-model of $\mathbf{G}^d_{\mathrm{m}}$ is torsion-free, we obtain an exact sequence $0\to \mathbf{G}^d_{\mathrm{m}}\to \mathscr{T}^0\to \mathscr{E}^0 \to 0$ over $\mathop{\mathrm{\mathcal{O}}}_K,$ which induces exact sequences $$0\to \boldsymbol{\mu}_{p^n}^d \to \mathscr{T}^0[p^n]\to \mathscr{E}^0[p^n] \to 0.$$ Taking the pullback of this sequence along the map $\mathscr{E}^0[p^n]^{\mathrm{f}} \to \mathscr{E}^0[p^n]$ shows that there are exact sequences $$0\to \boldsymbol{\mu}_{p^n}^d \to \mathscr{T}^0[p^n]^{\mathrm{f}}\to \mathscr{E}^0[p^n]^{\mathrm{f}} \to 0$$ for all $n.$ Now the snake lemma shows that the map $$\mathscr{T}^0[p^n]/\mathscr{T}^0[p^n]^{\mathrm{f}}\to \mathscr{E}^0[p^n]/\mathscr{E}^0[p^n]^{\mathrm{f}}$$ is an isomorphism, so the claim follows from Grothendieck's orthogonality theorem as in *loc. cit.** ◻

Finally, we shall construct counterexamples to the most natural generalisation of Chai's conjecture, which has been proposed (as a question) in [@Chai p. 733], as well as in [@CLN Question 2.4.1]: Given an appropriate choice of $\mathop{\mathrm{\mathcal{O}}}_K$, we shall show that there are exact sequences $$0 \to T\to B\to A\to 0$$ of semiabelian varieties over $K$ such that $$c(B)\not=c(T)+c(A).$$ In one of our counterexamples, the base ring will even satisfy both of the hypotheses whose disjunction Chai calls the *awkward assumption* [@Chai p. 733].

## Non-additivity in general: Construction I {#consIsubs}

Let $\mathop{\mathrm{\mathcal{O}}}_K$ be a complete discrete valuation ring with algebraically closed residue field $k.$ Suppose we can find the following objects:

- An Abelian variety $A$ over $K$ with good reduction over $\mathop{\mathrm{\mathcal{O}}}_K,$ and
- a finite Galois extension $L$ of $K$ such that the kernel of the map $H^1(K,A) \to H^1(L, A_L)$ is infinite.

Starting from these objects, we construct the exact sequence $$\begin{aligned} 0\to A \to \mathop{\mathrm{\mathrm{Res}}}_{L/K}A_L \to C \to 0, \label{Rseq}\end{aligned}$$ where $C$ is the cokernel of the closed immersion $A \to \mathop{\mathrm{\mathrm{Res}}}_{L/K}A_L.$

**Proposition 28**. *Under the hypotheses above, we have $c(A)=0$ and $$c(C) < c(\mathop{\mathrm{\mathrm{Res}}}_{L/K}A_L).$$*

*Proof.* Let $\mathscr{A}$ be the Néron model of $A$ over $\mathop{\mathrm{\mathcal{O}}}_K$ and let $\mathop{\mathrm{\mathcal{O}}}_L$ be the integral closure of $\mathop{\mathrm{\mathcal{O}}}_K$ in $L.$ Because $A$ has good reduction, we know that $\mathop{\mathrm{\mathrm{Res}}}_{\mathop{\mathrm{\mathcal{O}}}_L/\mathop{\mathrm{\mathcal{O}}}_K} \mathscr{A}_{\mathop{\mathrm{\mathcal{O}}}_L}$ is the Néron model of $\mathop{\mathrm{\mathrm{Res}}}_{L/K}A_L.$ Let $\mathscr{C}$ be the Néron model of $C$ over $\mathop{\mathrm{\mathcal{O}}}_K$ and let $$\mathscr{G}^\bullet: \, 0\to \mathscr{A} \to \mathop{\mathrm{\mathrm{Res}}}_{\mathop{\mathrm{\mathcal{O}}}_L/\mathop{\mathrm{\mathcal{O}}}_K} \mathscr{A}_{\mathop{\mathrm{\mathcal{O}}}_L} \to \mathscr{C} \to 0$$ be the induced complex of Néron models. Note that $c(A)=0$ because $A$ has good reduction. We shall give two proofs.
*First proof: Note that the sequence ([\[Rseq\]](#Rseq){reference-type="ref" reference="Rseq"}) becomes split over $L,$ so the induced sequence $\mathscr{G}_L^\bullet$ of Néron models over $\mathop{\mathrm{\mathcal{O}}}_L$ is exact. In particular, $\gamma(\mathscr{G}_L^\bullet)=0.$ Let $D_3$ be the group scheme constructed in Proposition [\[Diconstructprop\]](#Diconstructprop){reference-type="ref" reference="Diconstructprop"} such that $$H^3(\mathscr{G}^\bullet(\mathop{\mathrm{\mathcal{O}}}_K)) = D_3(k).$$ Using Proposition [\[Chigammaprop\]](#Chigammaprop){reference-type="ref" reference="Chigammaprop"}, Theorem [\[Liepointsthm\]](#Liepointsthm){reference-type="ref" reference="Liepointsthm"}, and the fact that $H^ i(\mathscr{G}^\bullet(\mathop{\mathrm{\mathcal{O}}}_K))=0$ for $i \not=3,$ we see that $$c(\mathop{\mathrm{\mathrm{Res}}}_{L/K} A) - c(C) = \gamma(\mathscr{G^\bullet})= \chi_{\mathrm{points}}(\mathscr{G}^\bullet) = \dim_k D_3.$$ Because $D_3$ is of finite type over $k$ but $D_3(k)$ is infinite by assumption, we must have $\dim_k D_3>0.$ This implies the claim.* *Second proof: Note that the map $$\mathscr{A} \to \mathop{\mathrm{\mathrm{Res}}}_{\mathop{\mathrm{\mathcal{O}}}_L/\mathop{\mathrm{\mathcal{O}}}_K} \mathscr{A}_{\mathop{\mathrm{\mathcal{O}}}_L}$$ is a closed immersion. The cokernel $\mathscr{Q}$ of this morphism (taken as an fppf-sheaf) is representable by a smooth group scheme of finite type over $\mathop{\mathrm{\mathcal{O}}}_K$ by [@An Théorème 4.C]. In particular, we obtain a canonical map $\mathscr{Q}\to\mathscr{C},$ which is generically an isomorphism. Since $\mathop{\mathrm{\mathcal{O}}}_K$ is strictly Henselian and the map $\mathop{\mathrm{\mathrm{Res}}}_{\mathop{\mathrm{\mathcal{O}}}_L/\mathop{\mathrm{\mathcal{O}}}_K} \mathscr{A}_{\mathop{\mathrm{\mathcal{O}}}_L} \to \mathscr{Q}$ is smooth, the morphism $$\mathop{\mathrm{\mathrm{Res}}}_{\mathop{\mathrm{\mathcal{O}}}_L/\mathop{\mathrm{\mathcal{O}}}_K} \mathscr{A}_{\mathop{\mathrm{\mathcal{O}}}_L}(\mathop{\mathrm{\mathcal{O}}}_K) \to \mathscr{Q}(\mathop{\mathrm{\mathcal{O}}}_K)$$ is surjective. By [@LLR Corollary 2.3], the map $\mathscr{Q}\to\mathscr{C}$ is the composition of finitely many dilatations in smooth algebraic subgroups of the special fibre. Observe that at least one of the centres of these dilatations must have positive codimension. Indeed, the map $\mathscr{Q}\to\mathscr{C}$ would otherwise be an open immersion, so that the cokernel of $\mathop{\mathrm{\mathrm{Res}}}_{L/K}A_L(K) \to C(K)$ would be finite, contradicting one of our assumptions. Using [@LLR Proposition 2.2 (b)], we deduce that the $\mathop{\mathrm{\mathcal{O}}}_K$-length $\ell$ of the cokernel of the map $$\mathop{\mathrm{\mathrm{Lie}}}\mathscr{Q}\to\mathop{\mathrm{\mathrm{Lie}}}\mathscr{C}$$ is positive. Once again, we denote by $\mathscr{G}_L^\bullet$ the sequence of Néron models induced by the base change of sequence ([\[Rseq\]](#Rseq){reference-type="ref" reference="Rseq"}) to $L,$ which is exact as we have already seen in the first proof. 
Let $q$ denote the $\mathop{\mathrm{\mathcal{O}}}_L$-length of the cokernel of the map $$(\mathop{\mathrm{\mathrm{Lie}}}\mathscr{Q})\otimes_{\mathop{\mathrm{\mathcal{O}}}_K}\mathop{\mathrm{\mathcal{O}}}_L \to \mathop{\mathrm{\mathrm{Lie}}}\mathscr{G}_L ^ 3.$$ The snake lemma shows that $$c(\mathop{\mathrm{\mathrm{Res}}}_{L/K} A_L) = \frac{q}{[L:K]} = c(C) + \ell,$$ which also implies the claim.* ◻ Now we will construct a pair $(A, L)$ satisfying the conditions stated at the beginning of the section in the equal characteristic case. Assume that $K$ is the completed maximal unramified extension of a complete discrete valuation field $K_{1}$ of equal characteristic with finite residue field $k_{1}$. Let $A$ be an ordinary elliptic curve over $K_{1}$ with good supersingular reduction. Denote its base change to $K$ by the same symbol $A$. Let $L_{1} / K_{1}$ be a totally ramified Galois extension of degree $p$. Let $L = L_{1} K = L_1 \otimes_{K_1} K.$ **Proposition 29**. *If the valuation of the discriminant of $L / K$ is large enough (for a fixed $A$), then the kernel of the map $H^{1}(K, A) \to H^{1}(L, A_{L})$ is infinite.* *Proof.* For any integer $n \ge 1$, let $k_{n}$ be the degree $n$ subextension of $k / k_{1}$. Let $K_{n}$ and $L_{n}$ be the corresponding unramified extensions of $K_{1}$ and $L_{1}$, respectively. Then the kernel of the map $H^{1}(K_{n}, A) \to H^{1}(K, A)$ is $H^{1}(\mathop{\mathrm{\operatorname{Gal}}}(k / k_{n}), A(K))$, which is zero by [@Milne Chapter I, Proposition 3.8] (see also the erratum on Milne's homepage about the proof of this proposition) since $A$ has good reduction. Hence it is enough to show that the kernel of the map $H^{1}(K_{n}, A) \to H^{1}(L_{n}, A_{L})$ has unbounded order as $n \to \infty$ if the discriminant of $L / K$ is large enough. This kernel is isomorphic to $H^{1}(G, A(L_{n}))$, where $G = \mathop{\mathrm{\operatorname{Gal}}}(L_{1} / K_{1})$. Denote the logarithm function to base $p$ by $\log_{p}$. We use the following result of Tan, Trihan and Tsoi [@TTT]: They construct, in [@TTT (43) and (44)], an explicit constant $C \in \mathop{\mathrm{\mathbf{Z}}}$ depending only on $A$, $L_{1} / K_{1}$ and not on $n$ such that $$\log_{p} |H^{1}(G, A(L_{n}))| \ge C n$$ for all $n \ge 1$. This constant $C$ is positive if the discriminant of $L / K$ is large enough by its explicit description (the number $\alpha_{v}$ written before [@TTT Theorem B] is less than $1$ if the valuation $(p - 1) f_{v}$ of the discriminant of $L / K$ is large enough and hence the number $\delta_{v}^{\mathrm{ss}}$ there is positive). This immediately implies the result. ◻ ## Non-additivity in general: Construction II {#consIIsubs} We shall now give a construction of a second family of similar examples, making use of arithmetic duality theory as set out in Section [3](#Indrat-Proet-Section){reference-type="ref" reference="Indrat-Proet-Section"}; see [@Suz] for more details. As before, $\mathop{\mathrm{\mathcal{O}}}_K$ denotes a complete discrete valuation ring with algebraically closed residue field $k.$ Suppose we can find an étale isogeny $E' \to E$ of Abelian varieties over $K$ such that $c(E) \not= c(E').$ Let $\mathscr{F}_K$ be the Cartier dual of the kernel of this isogeny, which is multiplicative by assumption. 
Moreover, let $\mathscr{E}'$ and $\mathscr{E}$ be the Néron models of $E'$ and $E,$ respectively, so that we have an exact sequence of group schemes $$0\to \mathscr{K} \to \mathscr{E}' \to \mathscr{E}$$ over $\mathop{\mathrm{\mathcal{O}}}_K,$ where $\mathscr{K}$ is the obvious kernel. Let $\mathscr{H}^\bullet: 0\to \mathscr{K} \to \mathscr{E}' \to \mathscr{E}\to 0$ be the associated complex, and let $D_{K,\mathscr{H}}$ be the $k$-group scheme constructed in Proposition [\[Diconstructprop\]](#Diconstructprop){reference-type="ref" reference="Diconstructprop"} such that $$H ^ 3 (\mathscr{H} ^ \bullet(\mathop{\mathrm{\mathcal{O}}}_K))=D_{K,\mathscr{H}}(k).$$ Finally, we embed $\mathscr{F}_K$ into an algebraic torus $T$ over $K$ and let $A$ and $A'$ be the dual Abelian varieties of $E$ and $E'$ respectively. If $T'$ denotes the quotient $T/\mathscr{F}_K,$ we obtain a commutative diagram $$\begin{aligned} \begin{CD} 0@>>> \mathscr{F}_K @>>> T @>>> T' @>>> 0\\ &&@VVV@VVV@VV{\mathrm{Id}}V\\ 0@>>> A @>>> B @>>> T' @>>> 0 \label{Diag5} \end{CD}\end{aligned}$$ with exact rows. Let $$\mathscr{G}^\bullet : \, 0\to \mathscr{A} \to \mathscr{B} \to \mathscr{T}' \to 0$$ be the complex of Néron lft-models induced by the bottom row of Diagram ([\[Diag5\]](#Diag5){reference-type="ref" reference="Diag5"}). Let $D_{K,\mathscr{G}}$ be the $k$-group scheme constructed in Proposition [\[Diconstructprop\]](#Diconstructprop){reference-type="ref" reference="Diconstructprop"} such that $$H^3(\mathscr{G}^{0,\bullet})=D_{K,\mathscr{G}}(k).$$ Let $L$ be a finite separable extension of $K$ with ring of integers $\mathop{\mathrm{\mathcal{O}}}_L$ such that $A,$ $B,$ and $T'$ all acquire semiabelian reduction over $L.$ Let $D_{L, \mathscr{G}}$ and $D_{L,\mathscr{H}}$ be the algebraic $k$-groups constructed analogously from the base changes of the relevant exact sequences of algebraic $K$-groups to $L.$ Just for the next Lemma, we drop the assumption that $c(E') \not=c(E).$ **Lemma 30**. *We have $c(E')=c(E)$ if and only if $\dim_k D_{L, \mathscr{H}}=[L:K]\dim_k D_{K, \mathscr{H}}.$ [\[dimlem\]]{#dimlem label="dimlem"}* *Proof.* The Lemma can be proven exactly as Proposition [\[Chigammaprop\]](#Chigammaprop){reference-type="ref" reference="Chigammaprop"}, using that $\mathop{\mathrm{\mathrm{Lie}}}\widetilde{\mathscr{K}}=0.$ ◻ Now we reimpose the condition that $c(E')\not=c(E).$ Our next goal will be to compare the dimensions of $D_{K,\mathscr{G}}$ and $D_{K,\mathscr{H}};$ in fact, we shall see that they are equal. This part of the argument will require results from arithmetic duality. We shall work with the site $\mathop{\mathrm{\operatorname{Spec}}}k^{\mathrm{indrat}}_{\mathrm{pro\acute{e}t}}$ introduced in Section [3](#Indrat-Proet-Section){reference-type="ref" reference="Indrat-Proet-Section"} above. **Proposition 31**. *There is a canonical isomorphism $$D_{K,\mathscr{G}}^{\mathrm{perf}}=\ker(\mathbf{H}^ 1(K,A)\to \mathbf{H}^ 1(K,A'))$$ of sheaves on $\mathop{\mathrm{\operatorname{Spec}}}k^{\mathrm{indrat}}_{\mathrm{pro\acute{e}t}}.$ [\[Dkerprop\]]{#Dkerprop label="Dkerprop"}* *Proof.* Let $k'$ be an algebraically closed extension of $k.$ Since the extension $\mathop{\mathrm{\mathcal{O}}}_K \subseteq \mathbf{O}_K(k')$ has ramification index 1, the construction of $D_{K,\mathscr{G}}$ commutes with base change along this extension. 
Since $B$ also fits into a diagram analogous to ([\[Diag1\]](#Diag1){reference-type="ref" reference="Diag1"}), we have $$\begin{aligned} D_{K,\mathscr{G}}^{\mathrm{perf}}(k') &= \ker (H^ 1(\mathbf{K}(k'),A)\to H^ 1(\mathbf{K}(k'),A')). \end{aligned}$$ If $k''$ is an arbitrary perfect extension of $k,$ we let $k'$ be an algebraic closure of $k''$ and observe that $$\begin{aligned} D_{K,\mathscr{G}}^{\mathrm{perf}}(k'') &= \ker (H^ 1(\mathbf{K}(k'),A)^{\mathop{\mathrm{\operatorname{Gal}}}(k'/k'')}\to H^ 1(\mathbf{K}(k'),A')^{\mathop{\mathrm{\operatorname{Gal}}}(k'/k'')})\\ &= \ker (\mathbf{H}^1(K,A)(k'')\to \mathbf{H}^1(K,A')(k'')). \end{aligned}$$ This implies the Proposition. ◻ **Lemma 32**. *There exists a morphism $$\mathbf{Ext}^1_{k^{\mathrm{indrat}}_{\mathrm{pro\acute{e}t}}}(D_{K,\mathscr{H}}^{\mathrm{perf}}, \mathop{\mathrm{\mathbf{Q}}}/\mathop{\mathrm{\mathbf{Z}}}) \to D_{K,\mathscr{G}}^{\mathrm{perf}}$$ which has finite kernel and cokernel.* *Proof.* From the exact sequence $0\to \mathscr{F}_K \to E' \to E \to 0$ we obtain an exact sequence $$\begin{aligned} 0\to \boldsymbol{\Gamma}(K, \mathscr{F}_K)\to \boldsymbol{\Gamma}(K, E')\overset{\delta}{\to} \boldsymbol{\Gamma}(K, E) \to D_{K,\mathscr{H}}^{\mathrm{perf}} \to 0\end{aligned}$$ in $\mathrm{PAlg} / k.$ Denote by $\Delta$ the image of $\delta.$ Then we have exact sequences $$\begin{aligned} \mathbf{Hom}_{k^{\mathrm{indrat}}_{\mathrm{pro\acute{e}t}}}(\pi_0(\Delta), \mathop{\mathrm{\mathbf{Q}}}/\mathop{\mathrm{\mathbf{Z}}}) &\to \mathbf{Ext}^1_{k^{\mathrm{indrat}}_{\mathrm{pro\acute{e}t}}}(D_{K,\mathscr{H}}^{\mathrm{perf}}, \mathop{\mathrm{\mathbf{Q}}}/\mathop{\mathrm{\mathbf{Z}}})\\ &\to \mathbf{Ext}^1_{k^{\mathrm{indrat}}_{\mathrm{pro\acute{e}t}}}(\boldsymbol{\Gamma}(K, E), \mathop{\mathrm{\mathbf{Q}}}/\mathop{\mathrm{\mathbf{Z}}}) \to \mathbf{Ext}^1_{k^{\mathrm{indrat}}_{\mathrm{pro\acute{e}t}}}(\Delta, \mathop{\mathrm{\mathbf{Q}}}/\mathop{\mathrm{\mathbf{Z}}}) \to 0\end{aligned}$$ and $$\begin{aligned} \mathbf{Hom}_{k^{\mathrm{indrat}}_{\mathrm{pro\acute{e}t}}}(\boldsymbol{\Gamma}(K, \mathscr{F}_K), \mathop{\mathrm{\mathbf{Q}}}/\mathop{\mathrm{\mathbf{Z}}}) &\to \mathbf{Ext}^1_{k^{\mathrm{indrat}}_{\mathrm{pro\acute{e}t}}}(\Delta, \mathop{\mathrm{\mathbf{Q}}}/\mathop{\mathrm{\mathbf{Z}}}) \to \mathbf{Ext}^1_{k^{\mathrm{indrat}}_{\mathrm{pro\acute{e}t}}}(\boldsymbol{\Gamma}(K, E'), \mathop{\mathrm{\mathbf{Q}}}/\mathop{\mathrm{\mathbf{Z}}}).\end{aligned}$$ Here we use [@Suz Proposition 2.4.1(a)]. Moreover, by Proposition [Proposition 21](#SuzDualityProp){reference-type="ref" reference="SuzDualityProp"}, we have a commutative diagram $$\begin{CD} \mathbf{Ext}^1_{k^{\mathrm{indrat}}_{\mathrm{pro\acute{e}t}}}(\boldsymbol{\Gamma}(K, E), \mathop{\mathrm{\mathbf{Q}}}/\mathop{\mathrm{\mathbf{Z}}})@>>>\mathbf{Ext}^1_{k^{\mathrm{indrat}}_{\mathrm{pro\acute{e}t}}}(\boldsymbol{\Gamma}(K, E'), \mathop{\mathrm{\mathbf{Q}}}/\mathop{\mathrm{\mathbf{Z}}})\\ @AAA@AAA\\ \mathbf{H}^ 1(K,A) @>>> \mathbf{H}^ 1(K,A') \end{CD}$$ with vertical isomorphisms. Hence, using Proposition [\[Dkerprop\]](#Dkerprop){reference-type="ref" reference="Dkerprop"}, all we must show is that the canonical map from $\mathbf{Ext}^1_{k^{\mathrm{indrat}}_{\mathrm{pro\acute{e}t}}}(D_{K,\mathscr{H}}^{\mathrm{perf}}, \mathop{\mathrm{\mathbf{Q}}}/\mathop{\mathrm{\mathbf{Z}}})$ into the kernel of the top row of the diagram has finite kernel and cokernel. 
However, this follows from the fact that $\pi_0(\Delta)$ and $\boldsymbol{\Gamma}(K, \mathscr{F}_K)$ are finite (see Proposition [Proposition 17](#SuzFinFlatProp){reference-type="ref" reference="SuzFinFlatProp"} for the second of those objects). ◻ **Corollary 33**. *We have $\dim_k D_{K,\mathscr{G}}=\dim_k D_{K,\mathscr{H}}.$ [\[dimcor\]]{#dimcor label="dimcor"}* *Proof.* This follows from the preceding Lemma together with the fact that the dimension is invariant under taking perfections as well as isogenies and Breen-Serre duals (see [@Beg Proposition 1.2.1] for the last claim). ◻ We are now ready to prove **Theorem 34**. *Let $0 \to A \to B\to T' \to 0$ be the bottom sequence from diagram ([\[Diag5\]](#Diag5){reference-type="ref" reference="Diag5"}), and assume that $c(E') \not=c(E).$ Then $c(B) \not = c(A) + c(T').$* *Proof.* Let $L$ be a finite separable extension of $K$ over which all semiabelian varieties in the sequence from the Theorem acquire semiabelian reduction. Corollary [\[dimcor\]](#dimcor){reference-type="ref" reference="dimcor"} and Lemma [\[dimlem\]](#dimlem){reference-type="ref" reference="dimlem"} together imply that $\dim_k D_{L, \mathscr{G}} \not = [L:K]\dim_k D_{K, \mathscr{G}},$ so the claim follows from Proposition [\[Chigammaprop\]](#Chigammaprop){reference-type="ref" reference="Chigammaprop"}. ◻ In particular, we obtain an example of non-additivity of $c(-)$ as soon as we can find an étale isogeny $E'\to E$ such that $c(E')\not=c(E).$ Such an example is constructed in [@Chai (6.10.1)] over the base field $\mathop{\mathrm{\mathbf{Q}}}_3.$ Note that this field is of characteristic 0 and has finite residue field, so it satisfies both parts of Chai's *awkward assumption* (op. cit., p. 733). ## Examples with imperfect residue field {#ImperfectResPara} In this subsection, we shall construct examples which show that the base change conductor is not additive in general in exact sequences $0\to T\to B\to A\to 0$ of semiabelian varieties if the residue field of $K$ is imperfect, even when $T$ is a torus. We shall begin by constructing such an example where $T,$ $B,$ and $A$ are all tori, which moreover shows that the base change conductor is not invariant under isogeny if the residue field is imperfect. By means of rigid geometry, we shall then use this exact sequence of tori to construct a counterexample to the statement of Chai's conjecture in this setting. ### Algebraic tori Let $\mathop{\mathrm{\mathcal{O}}}_K$ be a complete discrete valuation ring with separably closed *imperfect* residue field $\kappa$ of equal characteristic 2. Let $\pi$ be a uniformiser of $\mathop{\mathrm{\mathcal{O}}}_K$ and $c\in \mathop{\mathrm{\mathcal{O}}}_K$ be an element whose image in $\kappa$ is not contained in $\kappa^2.$ Consider the polynomial $$f_i(X):=X^2 + \pi ^ i X + c\in \mathop{\mathrm{\mathcal{O}}}_K[X]$$ for some $i \in \mathop{\mathrm{\mathbf{N}}}.$ The polynomial $f_i$ is clearly irreducible and separable over $K.$ **Lemma 35**. 
*Let $K_i$ be the splitting field of $f_i$ and let $T_i$ be the torus $(\mathop{\mathrm{\mathrm{Res}}}_{K_i/K}\mathop{\mathrm{\mathbf{G}_m}})/\mathop{\mathrm{\mathbf{G}_m}}.$ [\[piilem\]]{#piilem label="piilem"} Then $$c(T_i)=i.$$* *Proof.* Because the sequence of Néron lft-models induced by the exact sequence $0\to \mathop{\mathrm{\mathbf{G}_m}}\to \mathop{\mathrm{\mathrm{Res}}}_{K_i/K}\mathop{\mathrm{\mathbf{G}_m}}\to T_i \to 0$ is exact [@CY Lemma 11.2], we have $c(T_i)=c(\mathop{\mathrm{\mathrm{Res}}}_{K_i/K}\mathop{\mathrm{\mathbf{G}_m}}).$ One checks easily that the base change conductor of the induced torus is equal to one-half of the valuation of the discriminant of $K_i/K,$ so the claim is proven. ◻ Now let $L$ be the compositum of $K_i$ and $K_j$ for distinct $i,j \in \mathop{\mathrm{\mathbf{N}}}.$ Then $$\mathop{\mathrm{\operatorname{Gal}}}(L/K)\cong \mathop{\mathrm{\mathbf{Z}}}/2\mathop{\mathrm{\mathbf{Z}}}\times \mathop{\mathrm{\mathbf{Z}}}/2\mathop{\mathrm{\mathbf{Z}}}$$ (non-canonically). We fix such an isomorphism by choosing generators $\sigma_i, \sigma_j$ of $\mathop{\mathrm{\operatorname{Gal}}}(L/K)$ such that $\sigma_i$ fixes $K_j$ and $\sigma_j$ fixes $K_i.$ Let $T$ be the torus over $K$ whose character lattice is given by $\mathop{\mathrm{\mathbf{Z}}}^2$ with the Galois action $$\sigma_i \mapsto \begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix}, \hspace{0.5 in} \sigma_j \mapsto \begin{pmatrix} 1 & -1 \\ 0 & -1 \end{pmatrix}.$$ By construction, we have an exact sequence $$0 \to T_j\to T \to T_i \to 0$$ of algebraic tori. From now on, we shall put $i:=1$ and $j:=2.$ This will simplify later calculations. Note that we already know the base change conductors of $T_1$ and $T_2.$ In order to calculate that of $T,$ we choose roots $\alpha_i$ of $f_i$ for $i=1,2,$ and put $$\beta:= \pi \alpha_1 + \alpha_2 \in L.$$ A simple calculation shows that $$g(X):=X^2 + \pi^2X + \pi ^2 c + c$$ is the minimal polynomial of $\beta.$ Let $F:=K(\beta).$ We claim that the kernel of the canonical projection $\mathop{\mathrm{\mathbf{Z}}}[\mathop{\mathrm{\operatorname{Gal}}}(L/K)] \to \mathop{\mathrm{\mathbf{Z}}}[\mathop{\mathrm{\operatorname{Gal}}}(F/K)]$ is isomorphic to $X^\ast(T).$ Clearly, the elements $\mathrm{Id}_L - \sigma_1\sigma_2$ and $\sigma_1-\sigma_2$ lie in the kernel of this map; this follows from the fact that, by Galois theory, $F$ is the maximal subfield of $L$ on which $\sigma_1\sigma_2$ acts trivially. Moreover, these two elements generate a saturated sublattice of $\mathop{\mathrm{\mathbf{Z}}}[\mathop{\mathrm{\operatorname{Gal}}}(L/K)]$ of rank 2, which must therefore be the whole kernel. Putting $e_1:=1-\sigma_1+\sigma_2-\sigma_1\sigma_2$ and $e_2:=\sigma_1-\sigma_2,$ we see that this kernel and $X^\ast(T)$ are indeed isomorphic as integral $\mathop{\mathrm{\operatorname{Gal}}}(L/K)$-representations. We have, in particular, *resolved* $T$ by induced tori by constructing an exact sequence $$0\to \mathop{\mathrm{\mathrm{Res}}}_{F/K}\mathop{\mathrm{\mathbf{G}_m}}\to \mathop{\mathrm{\mathrm{Res}}}_{L/K}\mathop{\mathrm{\mathbf{G}_m}}\to T \to 0.$$ **Lemma 36**. 
*We have $c(\mathop{\mathrm{\mathrm{Res}}}_{L/K}\mathop{\mathrm{\mathbf{G}_m}})=6.$ [\[matrixlem\]]{#matrixlem label="matrixlem"}* *Proof.* Let us first show that $\mathop{\mathrm{\mathcal{O}}}_L=\mathop{\mathrm{\mathcal{O}}}_K[\alpha_1, \alpha_2].$ To see this, put $\gamma:=\alpha_1+\alpha_2.$ Then $\gamma$ is a root of the Eisenstein polynomial $X^2 + \pi^2 X + \pi \alpha_1+\pi^2\alpha_1$ over $K(\alpha_1).$ Because $f_1$ is irreducible after reduction modulo $\pi,$ we have $\mathop{\mathrm{\mathcal{O}}}_{K_1}=\mathop{\mathrm{\mathcal{O}}}_K[\alpha_1],$ so we obtain $$\mathop{\mathrm{\mathcal{O}}}_L=\mathop{\mathrm{\mathcal{O}}}_K[\alpha_1][\gamma]=\mathop{\mathrm{\mathcal{O}}}_K[\alpha_1, \alpha_2].$$ Put $\delta:=\alpha_1\alpha_2.$ The number $2c(\mathop{\mathrm{\mathrm{Res}}}_{L/K}\mathop{\mathrm{\mathbf{G}_m}})=e_{L/K}\,c(\mathop{\mathrm{\mathrm{Res}}}_{L/K}\mathop{\mathrm{\mathbf{G}_m}})$ is therefore equal to the $\mathop{\mathrm{\mathcal{O}}}_L$-length of the cokernel of the map $$\mathop{\mathrm{\mathcal{O}}}_L\otimes_{\mathop{\mathrm{\mathcal{O}}}_K} \mathop{\mathrm{\mathcal{O}}}_L \to \mathop{\mathrm{\mathcal{O}}}_L^4$$ given by the matrix $$\begin{pmatrix} 1 & \alpha_1 & \alpha_2 & \delta \\ 1 & \alpha_1 + \pi & \alpha_2 & \delta + \pi \alpha_2 \\ 1 & \alpha_1 & \alpha_2 + \pi^2 & \delta + \pi^2\alpha_1 \\ 1 & \alpha_1 + \pi & \alpha_2 + \pi ^2 & \delta + \pi ^2 \alpha_1 + \pi\alpha_2 + \pi^3 \end{pmatrix}$$ with respect to the basis $1\otimes 1, \alpha_1\otimes 1, \alpha_2 \otimes 1, \delta \otimes 1$ on the source and the standard basis on the target. Subtracting the first row from the other three and then the second and third from the fourth transforms this matrix into $$\begin{pmatrix} 1 & \alpha_1 & \alpha_2 & \delta \\ 0 & \pi & 0 & \pi \alpha_2 \\ 0 & 0 & \pi ^2 & \pi^2 \alpha_1 \\ 0 & 0 & 0 & \pi^3 \end{pmatrix}.$$ This shows that the cokernel has a composition series with successive quotients $\mathop{\mathrm{\mathcal{O}}}_L/\langle \pi \rangle,$ $\mathop{\mathrm{\mathcal{O}}}_L/\langle \pi^2 \rangle,$ and $\mathop{\mathrm{\mathcal{O}}}_L/\langle \pi^3 \rangle.$ The $\mathop{\mathrm{\mathcal{O}}}_L$-lengths of these modules are 2, 4, and 6, respectively (recall that $e_{L/K}=2,$ so $\pi$ has valuation 2 in $\mathop{\mathrm{\mathcal{O}}}_L$), so that the length of the cokernel is equal to $2+4+6=12=e_{L/K}\cdot 6.$ Hence $c(\mathop{\mathrm{\mathrm{Res}}}_{L/K}\mathop{\mathrm{\mathbf{G}_m}})$ is indeed equal to 6. ◻ **Corollary 37**. *Let $0\to T_2 \to T \to T_1 \to 0$ be the sequence of algebraic $K$-tori constructed above. [\[nonaddcor\]]{#nonaddcor label="nonaddcor"} Then $c(T) \not=c(T_2)+ c(T_1).$* *Proof.* We already know from Lemma [\[piilem\]](#piilem){reference-type="ref" reference="piilem"} that $c(T_1)=1$ and $c(T_2)=2.$ Moreover, the proof of the same Lemma shows that $c(\mathop{\mathrm{\mathrm{Res}}}_{F/K}\mathop{\mathrm{\mathbf{G}_m}})=2.$ Hence the resolution $$0\to \mathop{\mathrm{\mathrm{Res}}}_{F/K}\mathop{\mathrm{\mathbf{G}_m}}\to \mathop{\mathrm{\mathrm{Res}}}_{L/K}\mathop{\mathrm{\mathbf{G}_m}}\to T \to 0,$$ together with [@Chai Remark 4.8 (a)] and Lemma [\[matrixlem\]](#matrixlem){reference-type="ref" reference="matrixlem"} above shows that $c(T)= 6-2=4.$ ◻ *Remark 38*. Since $T$ is isogenous to $T_2 \times_K T_1,$ the Corollary also shows that the base change conductor of an algebraic torus is not invariant under isogeny if the residue field is imperfect. This shows in particular that the formula given by Chai, Yu, and de Shalit for the base change conductor of a torus $T$ in terms of the Galois module $X_\ast(T)\otimes_{\mathop{\mathrm{\mathbf{Z}}}}\mathop{\mathrm{\mathbf{Q}}}$ [@CY Theorem on p. 367 and Theorem 12.1] cannot be generalised to the case of imperfect residue fields. 
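Before moving on, it may be convenient to record the elementary verifications behind Lemma [\[piilem\]](#piilem){reference-type="ref" reference="piilem"}, the minimal polynomial $g$ of $\beta,$ and Corollary [\[nonaddcor\]](#nonaddcor){reference-type="ref" reference="nonaddcor"} in one place. The following display is only a verification sketch: it uses the relations $\alpha_i^2=\pi^i\alpha_i+c$ (from $f_i(\alpha_i)=0$ in characteristic 2), the equalities $\mathop{\mathrm{\mathcal{O}}}_{K_i}=\mathop{\mathrm{\mathcal{O}}}_K[\alpha_i]$ and $\mathop{\mathrm{\mathcal{O}}}_F=\mathop{\mathrm{\mathcal{O}}}_K[\beta]$ (both $f_i$ and $g$ remain irreducible modulo $\pi$ because the image of $c$ in $\kappa$ is not a square), and the fact that the base change conductor of an induced torus is one-half of the valuation of the discriminant. Writing $\sigma_1,\sigma_2$ for the matrices defining the Galois action on $X^\ast(T),$ we have $$\begin{aligned} &\sigma_1^2=\sigma_2^2=\mathrm{Id},\qquad \sigma_1\sigma_2=\sigma_2\sigma_1=-\mathrm{Id}, &&\text{so the matrices define an action of } \mathop{\mathrm{\operatorname{Gal}}}(L/K)\cong(\mathop{\mathrm{\mathbf{Z}}}/2\mathop{\mathrm{\mathbf{Z}}})^2,\\ &\operatorname{disc}(f_i)=\pi^{2i}-4c=\pi^{2i}, &&\text{so } c(T_i)=c(\mathop{\mathrm{\mathrm{Res}}}_{K_i/K}\mathop{\mathrm{\mathbf{G}_m}})=\tfrac{1}{2}\,v_K(\pi^{2i})=i,\\ &\beta^2=\pi^2\alpha_1^2+\alpha_2^2=\pi^2(\pi\alpha_1+c)+(\pi^2\alpha_2+c)=\pi^2\beta+\pi^2c+c, &&\text{so } g(\beta)=0,\\ &\operatorname{disc}(g)=\pi^4-4(\pi^2c+c)=\pi^4, &&\text{so } c(\mathop{\mathrm{\mathrm{Res}}}_{F/K}\mathop{\mathrm{\mathbf{G}_m}})=\tfrac{1}{2}\,v_K(\pi^4)=2,\\ &c(T)=c(\mathop{\mathrm{\mathrm{Res}}}_{L/K}\mathop{\mathrm{\mathbf{G}_m}})-c(\mathop{\mathrm{\mathrm{Res}}}_{F/K}\mathop{\mathrm{\mathbf{G}_m}})=6-2=4, &&\text{whereas } c(T_2)+c(T_1)=2+1=3.\end{aligned}$$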
### Raynaud extensions We shall now construct an exact sequence $0\to T_2 \to B \to E\to 0$ such that $T_2$ is the torus constructed in the preceding paragraph, $E$ is an elliptic curve over $K,$ and such that $c(B)\not=c(T_2) + c(E).$ Recall that the character lattice of an algebraic torus can be seen as an étale group scheme over $K$ which becomes isomorphic to a constant group scheme $\mathop{\mathrm{\mathbf{Z}}}^d$ for some $d\in \mathop{\mathrm{\mathbf{N}}}$ after a finite separable extension of $K.$ Such a group scheme will be called an *étale lattice* over $K.$ We have a canonical isomorphism $$X^\ast(T_1) \otimes_{\mathop{\mathrm{\mathbf{Z}}}} X^\ast(T_1) = \mathop{\mathrm{\mathbf{Z}}},$$ where $\mathop{\mathrm{\mathbf{Z}}}$ carries the trivial Galois action. Note that we also have an exact sequence $$0 \to T_1 \to T \to T_2 \to 0;$$ this can be seen directly from the definition of the Galois action on $X^\ast(T).$ In particular, we obtain a morphism $$X^\ast(T) \otimes_{\mathop{\mathrm{\mathbf{Z}}}} X^\ast(T_1) \to X^\ast(T_1) \otimes_{\mathop{\mathrm{\mathbf{Z}}}} X^\ast(T_1) = \mathop{\mathrm{\mathbf{Z}}}\to \mathop{\mathrm{\mathbf{G}_m}},$$ where the last map is given by $1\mapsto q$ for some $q\in K^\times$ with $v_K(q)>0.$ This map induces a commutative diagram $$\begin{CD} X^\ast(T_1) @>{=}>> X ^\ast(T_1)\\ @VVV@VVV\\ T @>>> T_1. \end{CD}$$ It is easy to see that the quotient $T_1/X^\ast(T_1)$ (taken as a rigid $K$-group) is equal to the rigid $K$-group associated with the quadratic twist $E$ of the Tate curve with period $q$ along the extension $K\subseteq K_1.$ To proceed, we shall need the following **Lemma 39**. *Let $0\to \mathscr{F} \to \mathscr{G} \to \mathscr{H} \to \mathscr{E}$ be a complex of smooth formal group schemes over $\mathrm{Spf} \, \mathop{\mathrm{\mathcal{O}}}_K$ which is exact as a complex of sheaves on the small formal smooth site of $\mathop{\mathrm{\mathcal{O}}}_K,$ and such that $\mathscr{E}$ is étale. Then the map $\mathscr{G}\to \mathscr{H}$ is smooth. [\[issmoothlem\]]{#issmoothlem label="issmoothlem"}* *Proof.* We may replace $\mathscr{H}$ by an open formal subgroup scheme without changing the conclusion of the Lemma. But since $\mathscr{E}$ is étale, its unit section is an open immersion, so we may assume without loss of generality that $\mathscr{E}=0.$ Then our assumption implies that the map $\mathscr{G}\to \mathscr{H}$ has sections locally in the smooth topology. Denoting by $\mathscr{G}_n$ and $\mathscr{H}_n$ the levels of $\mathscr{G}$ and $\mathscr{H},$ respectively, for $n\in \mathop{\mathrm{\mathbf{N}}},$ this implies that the maps $\mathop{\mathrm{\mathrm{Lie}}}\mathop{\mathrm{\mathscr{G}}}_n \to \mathop{\mathrm{\mathrm{Lie}}}\mathscr{H}_n$ are surjective for all such $n.$ Hence the claim follows. ◻ Now consider the exact sequence $$0 \to T_2 \to T/X ^ \ast(T_1) \to E\to 0$$ of sheaves on the small rigid smooth site induced by the sequence $0\to T_2 \to T \to T_1 \to 0.$ Then we have **Proposition 40**. *The rigid $K$-group $T/X ^ \ast(T_1)$ is the rigid $K$-group associated with a semiabelian variety $B$ over $K,$ and we have an exact sequence $$0 \to T_2 \to B \to E \to 0$$ of semiabelian varieties over $K.$ Moreover, we have $c(B)=c(T)$ and $c(E)=c(T_1).$ In particular, $c(B) \not= c(T_2) + c(E).$* *Proof.* The first two claims will be proven together. 
Note that, by Galois descent, we may replace $K$ by a finite Galois extension $L$ which splits $T_2$ and $X^\ast(T_1).$ It is well-known that there is a canonical isomorphism $E_L(L)=\mathrm{Ext}^1_L (E_L, \mathop{\mathrm{\mathbf{G}_m}})$ in the category of commutative $L$-group schemes. By the discussion on p. 1272 of [@Bosch], the analogous isomorphism exists with respect to the rigid category. This shows that the exact sequence $$0 \to T_2 \to T/X^\ast(T_1) \to T_1 / X^\ast (T_1) \to 0$$ algebraises uniquely. In order to see the second claim, we consider the exact sequence $$0 \to X^\ast(T_1) \to T \to B \to 0$$ of rigid analytic $K$-groups. By [@BX Proposition 4.5], we obtain an induced complex $$0 \to \widehat{\mathscr{N}} \to \widehat{\mathscr{T}} \to \widehat{\mathscr{B}} \to H ^ 1 (\mathop{\mathrm{\operatorname{Gal}}}(K\mathop{\mathrm{{^\mathrm{sep}}}}/K), X ^\ast(T_1))$$ of smooth formal algebraic groups over $\mathop{\mathrm{\operatorname{Spf}}}\mathop{\mathrm{\mathcal{O}}}_K$ which is exact when restricted to the small formal smooth site of $\mathop{\mathrm{\operatorname{Spf}}}\mathop{\mathrm{\mathcal{O}}}_K.$ Here, the first four objects are the formal Néron lft-models of $0,$ $X^\ast(T_1),$ $T,$ and $B,$ respectively [@BX Proposition 4.5]. The notation is justified by the fact that those formal group schemes are the formal completions of the usual Néron lft-models [@BS Theorem 6.2]. Because $\widehat{\mathscr{N}}$ is étale over $\mathop{\mathrm{\operatorname{Spf}}}\mathop{\mathrm{\mathcal{O}}}_K,$ Lemma [\[issmoothlem\]](#issmoothlem){reference-type="ref" reference="issmoothlem"} tells us that the maps $$\mathop{\mathrm{\mathrm{Lie}}}\mathscr{T} \otimes_{\mathop{\mathrm{\mathcal{O}}}_K} \mathop{\mathrm{\mathcal{O}}}_K/\mathfrak{m}_K^n \to \mathop{\mathrm{\mathrm{Lie}}}\mathscr{B} \otimes_{\mathop{\mathrm{\mathcal{O}}}_K} \mathop{\mathrm{\mathcal{O}}}_K/\mathfrak{m}_K^n$$ are isomorphisms for all $n\in \mathop{\mathrm{\mathbf{N}}}.$ Hence the map $\mathop{\mathrm{\mathrm{Lie}}}\mathscr{T}\to \mathop{\mathrm{\mathrm{Lie}}}\mathscr{B}$ is an isomorphism. Note, moreover, that this isomorphism is compatible with finite separable extensions of $K.$ This shows that $c(B)=c(T);$ the proof that $c(E)=c(T_1)$ is entirely analogous (alternatively, one could invoke [@Chai Proposition 5.1]). The final claim is now an immediate consequence of Corollary [\[nonaddcor\]](#nonaddcor){reference-type="ref" reference="nonaddcor"}. ◻ # Invariance under duality and isogeny In this section, we shall prove that the base change conductor is invariant under duality for Abelian varieties (Theorem [Theorem 55](#0023){reference-type="ref" reference="0023"}), answering a question of Chai [@Chai 8.2]. We shall also prove that the base change conductor is invariant under isogeny for tori (Theorem [Theorem 56](#0025){reference-type="ref" reference="0025"}), giving another proof for a result of Chai, Yu, and de Shalit [@CY Theorem on p. 367 and Theorem 12.1]. ## Dimension counting functions Let $D^{b}(\mathrm{IPAlg} / k)$ be the bounded derived category of $\mathrm{IPAlg} / k$. **Definition 41**. *Define full subcategories $\mathcal{D}$ (resp. $\mathcal{D}_{u}$) of $D^{b}(\mathrm{IPAlg} / k)$ to be the smallest full triangulated subcategory closed under isomorphisms containing perfections of connected (resp. connected unipotent) algebraic groups, pro-finite-étale groups and torsion étale groups.* Let $K_{0}(\mathcal{D})$ be the Grothendieck group of $\mathcal{D}$. **Proposition 42**. 
*There exists a unique group homomorphism $\chi \colon K_{0}(\mathcal{D}) \to \mathop{\mathrm{\mathbf{Z}}}$ that sends the perfection of a connected algebraic group to its dimension and kills pro-finite-étale groups and torsion étale groups.* *Proof.* The uniqueness is clear. For the existence, recall the universal covering functor $\mathrm{PAlg} / k \to \mathrm{PAlg} / k$, $G \mapsto \overline{G}$ from [@Ser60 Section 6.2, Definition 2], which is exact by [@Ser60 Section 10.3, Theorem 3]. It preserves the dimension of $G \in \mathrm{Alg} / k$ and kills pro-finite-étale groups. It extends to an exact endofunctor $G \mapsto \overline{G}$ on $\mathrm{IPAlg} / k$ and hence to a triangulated endofunctor on $D^{b}(\mathrm{IPAlg} / k)$, which kills torsion étale groups. Combining all these facts, it formally follows that for any $G \in \mathcal{D}$ and any $n \in \mathop{\mathrm{\mathbf{Z}}}$, the object $H^{n}(\overline{G})$ is the universal covering of a connected algebraic group. Therefore we have a well-defined integer $\sum_{n} (-1)^{n} \dim H^{n}(\overline{G})$ for each $G \in \mathcal{D}$. This assignment factors through $K_{0}(\mathcal{D})$. ◻ For any morphism $f$ in $D^{b}(\mathrm{IPAlg} / k)$ whose mapping cone belongs to $\mathcal{D}$, we define $\chi(f)$ to be the value of $\chi$ at a mapping cone of $f$. Let $D(k^{\mathrm{indrat}}_{\mathrm{proet}})$ be the derived category of sheaves of abelian groups on $\mathop{\mathrm{\operatorname{Spec}}}k^{\mathrm{indrat}}_{\mathrm{proet}}$. The fully faithful embedding $\mathrm{IPAlg} / k \hookrightarrow \mathrm{Ab}(k^{\mathrm{indrat}}_{\mathrm{proet}})$ induces a fully faithful embedding $D^{b}(\mathrm{IPAlg} / k) \hookrightarrow D(k^{\mathrm{indrat}}_{\mathrm{proet}})$ again by [@Suz Proposition (2.3.4)]. Let $G \mapsto G^{\mathrm{SD}}$ be the contravariant endofunctor $R \operatorname{\mathbf{Hom}} _{k^{\mathrm{indrat}}_{\mathrm{proet}}} (\;\cdot\;, \mathop{\mathrm{\mathbf{Z}}})$ on $D(k^{\mathrm{indrat}}_{\mathrm{proet}})$. **Proposition 43**. 1. *[\[0000\]]{#0000 label="0000"} The functor $\mathrm{SD}$ maps $\mathcal{D}$ to $\mathcal{D}_{u}$ and restricts to a contravariant autoequivalence on $\mathcal{D}_{u}$.* 2. *[\[0001\]]{#0001 label="0001"} For any $G \in \mathcal{D}_{u}$, we have $\chi(G) = \chi(G^{\mathrm{SD}})$.* *Proof.* [\[0000\]](#0000){reference-type="eqref" reference="0000"} is [@Suz Proposition (2.4.1) (b), (d)]. [\[0001\]](#0001){reference-type="eqref" reference="0001"} is [@Beg Proposition 1.2.1] (note that $(\mathbf{G}_{\mathrm{a}}^{\mathrm{perf}})^{\mathrm{SD}} \cong (\mathbf{G}_{\mathrm{a}}^{\mathrm{perf}})[-2]$). ◻ ## Base change conductors as dimensions **Proposition 44**. *Let $G$ be a semiabelian variety over $K$ with Néron lft-model $\mathscr{G}$. Let $L / K$ be a finite extension. Then $(\mathbf{\Gamma}(L, G) / \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{G}))^{0}$ is an object of $\mathrm{Alg} / k$. If $G$ has semiabelian reduction over $L$, then the dimension of this object is equal to the base change conductor $c(G)$ times $e_{L / K}$.* *Proof.* Let $\mathscr{G}_{L}$ be the Néron lft-model of $G \times_{K} L$ ($= G \times_{\mathop{\mathrm{\operatorname{Spec}}}K} \mathop{\mathrm{\operatorname{Spec}}}L$). Let $\mathscr{G}'$ be the kernel of the natural morphism $\mathscr{G}^{0} \times_{\mathcal{O}_{K}} \mathcal{O}_{L} \to \mathscr{G}_{L}^{0}$. 
Then the smooth algebraic group associated with the sequence $$0 \to \mathscr{G}' \to \mathscr{G}^{0} \times_{\mathcal{O}_{K}} \mathcal{O}_{L} \to \mathscr{G}_{L}^{0} \to 0$$ by Proposition [\[Diconstructprop\]](#Diconstructprop){reference-type="ref" reference="Diconstructprop"} is $\operatorname{Gr}(\mathscr{G}_{L}^{0}) / \operatorname{Gr}( \mathscr{G}^{0} \times_{\mathcal{O}_{K}} \mathcal{O}_{L} )$. Its perfection is $$\mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{G}_{L}^{0}) / \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{G}^{0}) \cong \mathbf{\Gamma}(L, G)^{0} / \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{G})^{0}.$$ If $G$ has semiabelian reduction over $L$, then its dimension is $e_{L / K} c(G)$ by Theorem [\[Liepointsthm\]](#Liepointsthm){reference-type="ref" reference="Liepointsthm"}. Since the natural morphism $$\mathbf{\Gamma}(L, G)^{0} / \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{G})^{0} \to (\mathbf{\Gamma}(L, G) / \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{G}))^{0}$$ is surjective with finite kernel, the result follows. ◻ **Proposition 45**. *Let $\varphi \colon G_1\to G_2$ be an isogeny of semiabelian varieties over $K$ and let $L/K$ be a finite Galois extension. For $i=1,2,$ let $\mathscr{G}_i$ be the Néron lft-model of $G_i$ over $\mathop{\mathrm{\mathcal{O}}}_K.$ Then the morphism $$(\mathbf{\Gamma}(L,G_{1,L})/\mathbf{\Gamma}(\mathop{\mathrm{\mathcal{O}}}_L,\mathscr{G}_{1,L})) ^ 0\to (\mathbf{\Gamma}(L,G_{2,L})/\mathbf{\Gamma}(\mathop{\mathrm{\mathcal{O}}}_L,\mathscr{G}_{2,L})) ^ 0$$ induces an isogeny on maximal semiabelian quotients. In particular, the mapping cone of this morphism (in $\mathcal{D}$) lies in $\mathcal{D}_u.$* *Proof.* Let $Q_1$ and $Q_2$ be the maximal semiabelian quotients of the left and right hand side, respectively. Let $d$ be the degree of the isogeny $\varphi$, so we have another isogeny $\psi\colon G_2 \to G_1$ such that $\psi\circ\varphi$ and $\varphi\circ \psi$ are both multiplication by $d.$ Then the compositions $Q_1 \to Q_2 \to Q_1$ and $Q_2 \to Q_1 \to Q_2$ are multiplication by $d,$ which is an isogeny as the $Q_j$ are semiabelian. Hence both maps $Q_1\to Q_2$ and $Q_2 \to Q_1$ are surjective and have finite kernel, as was claimed. In particular, both the kernel and the cokernel of $\varphi$ are extensions of a connected unipotent perfect algebraic group and a finite étale (perfect) algebraic group. ◻ **Definition 46**. *Let $G$ be a semiabelian variety over $K$ with Néron lft-model $\mathscr{G}$. Let $L / K$ be a finite extension. Define $c(G)_{L}'$ to be the dimension of $(\mathbf{\Gamma}(L, G) / \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{G}))^{0}$.* ## Pairings on Néron models Let $A$ and $B$ be Abelian varieties over $K$ dual to each other. Let $\mathscr{A}$ and $\mathscr{B}$ be their Néron models. Let $\otimes^{L} = \otimes^{L}_{\mathop{\mathrm{\mathbf{Z}}}}$ be the derived tensor product functor. As explained in [@Bosch Sections 4 and 5] or [@Milne Chapter III, Appendix C], the morphism $A \otimes^{L} B \to \mathop{\mathrm{\mathbf{G}_m}}[1]$ in $D(K_{\mathop{\mathrm{\mathrm{fppf}}}})$ ($= D(\mathop{\mathrm{\operatorname{Spec}}}K_{\mathop{\mathrm{\mathrm{fppf}}}})$) defined by the Poincaré biextension canonically extends to a morphism $$\label{0006} \mathscr{A} \otimes^{L} \mathscr{B}^{0} \to \mathop{\mathrm{\mathbf{G}_m}}[1]$$ in $D(\mathcal{O}_{K, \mathop{\mathrm{\mathrm{fppf}}}})$. For a morphism $X \to Y$ in an abelian category, we denote by $[X \to Y]$ the complex concentrated in degrees $-1$ and $0$. 
Its image in the derived category is also denoted by $[X \to Y]$. **Proposition 47**. *Let $u \colon A_{1} \to A_{2}$ be a morphism of Abelian varieties over $K$. Let $v \colon B_{2} \to B_{1}$ be the morphism induced on the duals. Then there exists a canonical morphism $$\label{0007} [A_{1} \to A_{2}][-1] \otimes^{L} [B_{2} \to B_{1}] \to \mathop{\mathrm{\mathbf{G}_m}}[1]$$ in $D(K_{\mathop{\mathrm{\mathrm{fppf}}}})$ such that the diagram $$\begin{CD} [A_{1} \to A_{2}][-1] @>>> A_{1} @>>> A_{2} \\ @VVV @VVV @VVV \\ R \operatorname{\mathbf{Hom}}_{K_{\mathop{\mathrm{\mathrm{fppf}}}}}([B_{2} \to B_{1}], \mathop{\mathrm{\mathbf{G}_m}}[1]) @>>> R \operatorname{\mathbf{Hom}}_{K_{\mathop{\mathrm{\mathrm{fppf}}}}}(B_{1}, \mathop{\mathrm{\mathbf{G}_m}}[1]) @>>> R \operatorname{\mathbf{Hom}}_{K_{\mathop{\mathrm{\mathrm{fppf}}}}}(B_{2}, \mathop{\mathrm{\mathbf{G}_m}}[1]) \end{CD}$$ is a morphism of exact triangles, where the middle and right vertical morphisms are the morphisms induced by the Poincaré biextensions.* *Proof.* For $i = 1, 2$, let $P_{i}$ be the (rigidified) Poincaré bundle on $A_{i} \times_{K} B_{i}$. Then the pullback of $P_{1}$ by the morphism $\mathrm{id} \times v \colon A_{1} \times B_{2} \to A_{1} \times B_{1}$ and the pullback of $P_{2}$ by the morphism $u \times \mathrm{id} \colon A_{1} \times B_{2} \to A_{2} \times B_{2}$ are canonically identified. Then the explicit construction of the morphisms $A_{i} \otimes^{L} B_{i} \to \mathop{\mathrm{\mathbf{G}_m}}[1]$ from $P_{i}$ in [@Gro72 Exposé VII, Theorems 3.2.5 and 3.6.4] gives a canonical choice of a left vertical morphism in the diagram that completes it into a morphism of exact triangles. ◻ **Proposition 48**. *Let $u \colon A_{1} \to A_{2}$ be a morphism of Abelian varieties over $K$. Let $v \colon B_{2} \to B_{1}$ be the morphism induced on the duals. Let $u \colon \mathscr{A}_{1} \to \mathscr{A}_{2}$ and $v \colon \mathscr{B}_{2} \to \mathscr{B}_{1}$ be the morphisms induced on the Néron models. Then the morphism [\[0007\]](#0007){reference-type="eqref" reference="0007"} canonically extends to a morphism $$\label{0012} [\mathscr{A}_{1} \to \mathscr{A}_{2}][-1] \otimes^{L} [\mathscr{B}_{2}^{0} \to \mathscr{B}_{1}^{0}] \to \mathop{\mathrm{\mathbf{G}_m}}[1]$$ in $D(\mathcal{O}_{K, \mathop{\mathrm{\mathrm{fppf}}}})$. The resulting diagram $$\label{0013} \begin{CD} [\mathscr{A}_{1} \to \mathscr{A}_{2}][-1] @>>> \mathscr{A}_{1} @>>> \mathscr{A}_{2} \\ @VVV @VVV @VVV \\ R \operatorname{\mathbf{Hom}} _{\mathcal{O}_{K, \mathop{\mathrm{\mathrm{fppf}}}}} ( [\mathscr{B}_{2}^{0} \to \mathscr{B}_{1}^{0}], \mathop{\mathrm{\mathbf{G}_m}}[1] ) @>>> R \operatorname{\mathbf{Hom}} _{\mathcal{O}_{K, \mathop{\mathrm{\mathrm{fppf}}}}} (\mathscr{B}_{1}^{0}, \mathop{\mathrm{\mathbf{G}_m}}[1]) @>>> R \operatorname{\mathbf{Hom}} _{\mathcal{O}_{K, \mathop{\mathrm{\mathrm{fppf}}}}} (\mathscr{B}_{2}^{0}, \mathop{\mathrm{\mathbf{G}_m}}[1]) \end{CD}$$ is a morphism of exact triangles.* *Proof.* Let $\mathscr{P}_{i}$ be the canonical extension of the Poincaré $\mathop{\mathrm{\mathbf{G}_m}}$-biextension on $A_{i} \times B_{i}$ to $\mathscr{A}_{i} \times \mathscr{B}_{i}^{0}$. 
Then the pullback of $\mathscr{P}_{1}$ by the morphism $\mathrm{id} \times v \colon \mathscr{A}_{1} \times \mathscr{B}_{2}^{0} \to \mathscr{A}_{1} \times \mathscr{B}_{1}^{0}$ and the pullback of $\mathscr{P}_{2}$ by the morphism $u \times \mathrm{id} \colon \mathscr{A}_{1} \times \mathscr{B}_{2}^{0} \to \mathscr{A}_{2} \times \mathscr{B}_{2}^{0}$ give two $\mathop{\mathrm{\mathbf{G}_m}}$-biextensions on $\mathscr{A}_{1} \times \mathscr{B}_{2}^{0}$. Their generic fibers are canonically identified by the proof of Proposition [Proposition 47](#BiextK){reference-type="ref" reference="BiextK"}. Therefore, by the fully faithfulness result of biextensions in [@Gro72 Exposé VIII, Theorem 7.1 (b)], we know that these two $\mathop{\mathrm{\mathbf{G}_m}}$-biextensions on $\mathscr{A}_{1} \times \mathscr{B}_{2}^{0}$ are canonically identified. By the same argument as the proof of Proposition [Proposition 47](#BiextK){reference-type="ref" reference="BiextK"}, this gives a canonical choice of a left vertical morphism in the diagram that completes it into a morphism of exact triangles. ◻ ## Dimensions of duality pairings Recall from [@Suz (5.2.1.1)] the canonical "trace" isomorphism $$\label{0008} R \mathbf{\Gamma}_{x}(\mathcal{O}_{K}, \mathop{\mathrm{\mathbf{G}_m}}) \cong \mathop{\mathrm{\mathbf{Z}}}[-1]$$ in $D(k^{\mathrm{indrat}}_{\mathrm{proet}})$, where $\mathbf{\Gamma}_{x}(\mathcal{O}_{K}, \;\cdot\;)$ denotes the kernel of the natural morphism $\mathbf{\Gamma}(\mathcal{O}_{K}, \;\cdot\;) \to \mathbf{\Gamma}(K, \;\cdot\;)$ [@Suz Section 3.3]. Let $A, B$ be Abelian varieties dual to each other and $\mathscr{A}, \mathscr{B}$ their Néron models. Let $L / K$ be a finite Galois extension. Then the morphism [\[0006\]](#0006){reference-type="eqref" reference="0006"} induces a morphism $$R \mathbf{\Gamma}_{x}(\mathcal{O}_{L}, \mathscr{A}) \otimes^{L} R \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}^{0}) \to R \mathbf{\Gamma}_{x}(\mathcal{O}_{L}, \mathop{\mathrm{\mathbf{G}_m}}[1]) \cong \mathop{\mathrm{\mathbf{Z}}}$$ and hence a morphism $$\label{0010} R \mathbf{\Gamma}_{x}(\mathcal{O}_{L}, \mathscr{A}) \to R \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}^{0})^{\mathrm{SD}}.$$ For $G \in \mathrm{IPAlg} / k$, define $G^{\mathrm{SD}'} = \operatorname{\mathbf{Ext}} ^{1}_{k^{\mathrm{indrat}}_{\mathrm{proet}}} (G, \mathop{\mathrm{\mathbf{Q}}}/ \mathop{\mathrm{\mathbf{Z}}})$. Using the calculations presented in the following subsections, we shall show that $c(A)-c(B)$ vanishes by showing that this quantity equals its own additive inverse. In particular, sign and indexing conventions are essential. **Proposition 49**. *Let $L / K$ be a finite Galois extension. Then the cohomology objects of $R \mathbf{\Gamma}_{x}(\mathcal{O}_{L}, \mathscr{A})$ are given by $$\begin{gathered} H^{1} \cong \frac{ \mathbf{\Gamma}(L, A) }{ \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{A}) }, \quad H^{2} \cong \mathbf{H}^{1}(L, A), \\ H^{n} = 0, \quad n \ne 1, 2. 
\end{gathered}$$ The cohomology objects of $R \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}^{0})^{\mathrm{SD}}$ are given by $$H^{2} \cong \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}^{0})^{\mathrm{SD}'}, \quad H^{n} = 0, \quad n \ne 2.$$ Under these isomorphisms, the morphism [\[0010\]](#0010){reference-type="eqref" reference="0010"} in $H^{2}$ is given by the composite $$\mathbf{H}^{1}(L, A) \cong \mathbf{\Gamma}(L, B)^{\mathrm{SD}'} \twoheadrightarrow \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}^{0})^{\mathrm{SD}'}$$ of the duality isomorphism and the natural morphism, the latter of which is surjective.* *Proof.* The exact triangle $$R \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{A}) \to R \mathbf{\Gamma}(L, A) \to R \mathbf{\Gamma}_{x}(\mathcal{O}_{L}, \mathscr{A})[1]$$ and $\mathbf{H}^{n}(\mathcal{O}_{L}, \mathscr{A}) = 0$ for $n \ge 1$ show the result for $R \mathbf{\Gamma}_{x}(\mathcal{O}_{L}, \mathscr{A})$. For $R \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}^{0})^{\mathrm{SD}}$, we have $$R \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}^{0}) \cong \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}^{0}) \cong \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B})^{0}$$ by [@Suz Proposition (3.4.2) (a)], which is connected. Hence the result follows from [@Suz Proposition (2.4.1) (a)]. We have a commutative diagram $$\begin{CD} R \mathbf{\Gamma}(L, A)[-1] @>>> R \mathbf{\Gamma}(L, B)^{\mathrm{SD}} \\ @VVV @VVV \\ R \mathbf{\Gamma}_{x}(\mathcal{O}_{L}, \mathscr{A}) @>>> R \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}^{0})^{\mathrm{SD}}, \end{CD}$$ which gives the description of the morphism on $H^{2}$. The surjectivity is the vanishing of $\operatorname{\mathbf{Ext}} ^{\ge 2}_{k^{\mathrm{indrat}}_{\mathrm{proet}}} (\;\cdot\;, \mathop{\mathrm{\mathbf{Q}}}/ \mathop{\mathrm{\mathbf{Z}}})$ for pro-algebraic groups [@Suz Proposition (2.4.1) (a)]. ◻ **Proposition 50**. *Let $A_{i}, B_{i}, \mathscr{A}_{i}, \mathscr{B}_{i}$ be as in Proposition [Proposition 48](#0011){reference-type="ref" reference="0011"}. Assume that the morphism $A_{1} \to A_{2}$ is an isogeny. Let $L / K$ be a finite Galois extension. Consider the morphism $$\label{0018} R \mathbf{\Gamma}_{x}( \mathcal{O}_{L}, [\mathscr{A}_{1} \to \mathscr{A}_{2}][-1] ) \to R \mathbf{\Gamma}( \mathcal{O}_{L}, [\mathscr{B}_{2}^{0} \to \mathscr{B}_{1}^{0}] )^{\mathrm{SD}}$$ induced by the morphism [\[0012\]](#0012){reference-type="eqref" reference="0012"}. Then its mapping cone belongs to $\mathcal{D}$. The value of $\chi$ at this cone is equal to $c(A_{1})_{L}' - c(A_{2})_{L}' - c(B_{1})_{L}' + c(B_{2})_{L}'$.* *Proof.* Denote the morphism in question as $C \to D$ and a choice of its mapping cone as $E$. We have a morphism of exact triangles $$\begin{CD} C @>>> R \mathbf{\Gamma}_{x}(\mathcal{O}_{L}, \mathscr{A}_{1}) @>>> R \mathbf{\Gamma}_{x}(\mathcal{O}_{L}, \mathscr{A}_{2}) \\ @VVV @VVV @VVV \\ D @>>> R \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}_{1}^{0})^{\mathrm{SD}} @>>> R \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}_{2}^{0})^{\mathrm{SD}} \end{CD}$$ by [\[0013\]](#0013){reference-type="eqref" reference="0013"} and [@Suz Proposition (3.3.7)]. 
Under the isomorphisms in Proposition [Proposition 49](#0014){reference-type="ref" reference="0014"}, the long exact sequence for the upper triangle can be written as $$\begin{CD} 0 @>>> H^{1}(C) @>>> \dfrac{ \mathbf{\Gamma}(L, A_{1}) }{ \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{A}_{1}) } @>>> \dfrac{ \mathbf{\Gamma}(L, A_{2}) }{ \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{A}_{2}) } \\ @>>> H^{2}(C) @>>> \mathbf{H}^{1}(L, A_{1}) @>>> \mathbf{H}^{1}(L, A_{2}) @>>> 0 \end{CD}$$ and $H^{n}(C) = 0$ for $n \ne 1, 2$, where the last surjectivity is the vanishing of $\mathbf{H}^{\ge 2}(L, \;\cdot\;)$ for finite flat group schemes [@Suz Proposition (3.4.3) (b)]. Let $H^{2}(C)'$ be the image of the morphism to $H^{2}(C)$ in this sequence and let $H^{2}(C)''$ be the image of the morphism from $H^{2}(C)$ in this sequence. The long exact sequence for the lower triangle reduces to an exact sequence $$0 \to H^{2}(D) \to \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}_{1}^{0})^{\mathrm{SD}'} \to \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}_{2}^{0})^{\mathrm{SD}'} \to 0$$ and $H^{n}(D) = 0$ for $n \ne 2$, where the last morphism is surjective because the kernel of $\mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}_{2}^{0}) \to \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}_{1}^{0})$ is contained in the kernel of $\mathbf{\Gamma}(L, B_{2}) \to \mathbf{\Gamma}(L, B_{1})$ (which is finite étale) and the vanishing of $\operatorname{\mathbf{Ext}} ^{\ge 2}_{k^{\mathrm{indrat}}_{\mathrm{proet}}} (\;\cdot\;, \mathop{\mathrm{\mathbf{Q}}}/ \mathop{\mathrm{\mathbf{Z}}})$ for pro-algebraic groups. Hence, with Proposition [Proposition 49](#0014){reference-type="ref" reference="0014"}, we have a commutative diagram with exact rows and columns $$\begin{CD} @. @. 0 @. 0 @. \\ @. @. @VVV @VVV @. \\ @. @. \left( \dfrac{ \mathbf{\Gamma}(L, B_{1}) }{ \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}_{1}^{0}) } \right)^{\mathrm{SD}'} @>>> \left( \dfrac{ \mathbf{\Gamma}(L, B_{2}) }{ \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}_{2}^{0}) } \right)^{\mathrm{SD}'} @. \\ @. @. @VVV @VVV @. \\ 0 @>>> H^{2}(C)'' @>>> \mathbf{H}^{1}(L, A_{1}) @>>> \mathbf{H}^{1}(L, A_{2}) @>>> 0 \\ @. @VVV @VVV @VVV @. \\ 0 @>>> H^{2}(D) @>>> \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}_{1}^{0})^{\mathrm{SD}'} @>>> \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}_{2}^{0})^{\mathrm{SD}'} @>>> 0 \\ @. @. @VVV @VVV @. \\ @. @. 0 @. 0. @. \end{CD}$$ The morphism $$\dfrac{ \mathbf{\Gamma}(L, B_{2}) }{ \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}_{2}^{0}) } \to \dfrac{ \mathbf{\Gamma}(L, B_{1}) }{ \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}_{1}^{0}) }$$ induces an isogeny on the semiabelian quotients by Proposition [Proposition 45](#IsogL){reference-type="ref" reference="IsogL"}. From these and Proposition [Proposition 43](#0016){reference-type="ref" reference="0016"}, we know that $H^{1}(C)$, $H^{2}(C)'$, $\mathop{\mathrm{\mathrm{ker}}}(H^{2}(C)'' \to H^{2}(D))^{0}$ and $\operatorname{coker}(H^{2}(C)'' \to H^{2}(D))^{0}$ are perfections of algebraic groups and $$\begin{gathered} \chi(H^{1}(C)) - \chi(H^{2}(C)') = c(A_{1})_{L}' - c(A_{2})_{L}', \\ \chi(H^{2}(C)'' \to H^{2}(D)) = - c(B_{1})_{L}' + c(B_{2})_{L}'. \end{gathered}$$ Therefore $\mathop{\mathrm{\mathrm{ker}}}(H^{n}(C) \to H^{n}(D))$ and $\operatorname{coker}(H^{n}(C) \to H^{n}(D))$ are objects of $\mathcal{D}$ for all $n$. Hence $E \in \mathcal{D}$. 
We have isomorphisms and an exact sequence $$\begin{gathered} H^{0}(E) \cong H^{1}(C), \\ 0 \to H^{2}(C)' \to H^{1}(E) \to \mathop{\mathrm{\mathrm{ker}}}(H^{2}(C)'' \to H^{2}(D)) \to 0, \\ H^{2}(E) \cong \operatorname{coker}(H^{2}(C)'' \to H^{2}(D)), \\ H^{n}(E) = 0, \quad n \ne 0, 1, 2. \end{gathered}$$ Applying $\chi$, we get the result. ◻ ## Invariance of isogeny conductors under duality Assume that $K$ has equal characteristic. Let $A_{1} \to A_{2}$ be an isogeny of Abelian varieties over $K.$ Let $B_{2} \to B_{1}$ be the dual isogeny. Let $\mathscr{A}_{1} \to \mathscr{A}_{2}$ and $\mathscr{B}_{2} \to \mathscr{B}_{1}$ be the morphisms induced on the Néron models. Let $L / K$ be a finite Galois extension. We shall show below that the equality $$c(A_{1})_{L}' - c(A_{2})_{L}' = c(B_{1})_{L}' - c(B_{2})_{L}'$$ holds. Choosing $L$ such that one (hence all) of the semiabelian varieties just mentioned have semiabelian reduction over $\mathop{\mathrm{\mathcal{O}}}_L$, and dividing by $e_{L/K}$ shows $c(A_1)-c(A_2)=c(B_1)-c(B_2).$ In particular, if $A_1$ and $A_2$ are dual to one another, this implies $c(A_1)=c(A_2).$ The kernel $F$ of the isogeny $A_1 \to A_2$ admits a canonical filtration $$0=F_0 \subset F_1 \subset F_2 \subset F_3 \subset F_4=F$$ whose associated graded object is the direct sum of an infinitesimal multiplicative group scheme, an infinitesimal unipotent group scheme, a finite étale group scheme of $p$-power order, and a finite étale group scheme of order not divisible by $p,$ respectively. This filtration induces a factorisation of the morphisms $A_1\to A_2$ and $B_2 \to B_1,$ so we may assume without loss of generality that $F$ itself belongs to one of the classes of finite $K$-group schemes just listed. Note that, if $F$ is étale of order invertible in $\mathop{\mathrm{\mathcal{O}}}_K,$ then the maps $\mathop{\mathrm{\mathrm{Lie}}}\mathscr{A}_1 \to \mathop{\mathrm{\mathrm{Lie}}}\mathscr{A}_2$ and $\mathop{\mathrm{\mathrm{Lie}}}\mathscr{B}_2 \to \mathop{\mathrm{\mathrm{Lie}}}\mathscr{B}_1$ are isomorphisms. In particular, we may assume that the isogeny $A_1 \to A_2$ has $p$-power degree. **Proposition 51**. *Assume that $A_{1} \to A_{2}$ has multiplicative (hence connected) kernel. Then $c(A_{1})_{L}' - c(A_{2})_{L}' = c(B_{1})_{L}' - c(B_{2})_{L}'$.* *Proof.* The morphism $\mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}_{2}^{0}) \to \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}_{1}^{0})$ has finite étale kernel and induces an isogeny on the semiabelian quotients. For each $i$, the group $\mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}_{i}^{0})$ is the perfection of the Greenberg transform of infinite level of $\mathscr{B}_{i}^{0}$. By assumption, the generic fiber of the morphism $\mathscr{B}_{2}^{0} \to \mathscr{B}_{1}^{0}$ is étale surjective. Therefore the cokernel of $\mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}_{2}^{0}) \to \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{B}_{1}^{0})$ is the perfection of a unipotent algebraic group whose dimension is the $\mathcal{O}_{L}$-length of the cokernel of $$\mathop{\mathrm{\mathrm{Lie}}}(\mathscr{B}_{2}^{0}) \otimes_{\mathcal{O}_{K}} \mathcal{O}_{L} \to \mathop{\mathrm{\mathrm{Lie}}}(\mathscr{B}_{1}^{0}) \otimes_{\mathcal{O}_{K}} \mathcal{O}_{L}$$ by [@LLR Theorem 2.1 (b)]. This $\mathcal{O}_{L}$-length divided by $e_{L / K}$ is independent of the choice of $L$. 
With Proposition [Proposition 43](#0016){reference-type="ref" reference="0016"}, we know that $R \mathbf{\Gamma}( \mathcal{O}_{L}, [\mathscr{B}_{2}^{0} \to \mathscr{B}_{1}^{0}] )^{\mathrm{SD}}$ is an object of $\mathcal{D}_{u}$ and the value of $\chi / e_{L / K}$ at this object is independent of the choice of $L$. For $R \mathbf{\Gamma}_{x}( \mathcal{O}_{L}, [\mathscr{A}_{1} \to \mathscr{A}_{2}][-1] )$, let $N$ be the kernel of $A_{1} \to A_{2}$. Let $\mathscr{N}$ be the schematic closure of $N$ in $\mathscr{A}_{1}$, which is finite flat over $\mathcal{O}_{K}$ since $N$ is infinitesimal. Let $\mathscr{A}_{2}'$ be the fppf quotient $\mathscr{A}_{1} / \mathscr{N}$, which is a smooth separated group scheme over $\mathcal{O}_{K}$ by [@An Théorème 4.C]. The term-wise exact sequence of complexes $$0 \to \mathscr{N} \to [\mathscr{A}_{1} \to \mathscr{A}_{2}][-1] \to [\mathscr{A}_{2}' \to \mathscr{A}_{2}][-1] \to 0$$ induces an exact triangle $$R \mathbf{\Gamma}_{x}(\mathcal{O}_{L}, \mathscr{N}) \to R \mathbf{\Gamma}_{x}( \mathcal{O}_{L}, [\mathscr{A}_{1} \to \mathscr{A}_{2}][-1] ) \to R \mathbf{\Gamma}_{x}( \mathcal{O}_{L}, [\mathscr{A}_{2}' \to \mathscr{A}_{2}][-1] ).$$ Let $\mathscr{M}$ be the Cartier dual of $\mathscr{N}$. We have a canonical isomorphism $$R \mathbf{\Gamma}_{x}(\mathcal{O}_{L}, \mathscr{N}) \cong R \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{M})^{\mathrm{SD}}[-1]$$ by [@Suz Theorem (5.2.1.2)]. Proposition [Proposition 23](#EtCriterion){reference-type="ref" reference="EtCriterion"} shows that these isomorphic objects are in $\mathcal{D}$ and that the dimension of $\mathbf{H}^{1}(\mathcal{O}_{L}, \mathscr{M})$ is the $\mathcal{O}_{L}$-length of the pullback of $\Omega^{1}_{\mathscr{M} \times_{\mathcal{O}_{K}} \mathcal{O}_{L} / \mathcal{O}_{L}}$ along the zero section. Hence the values of $\chi / e_{L / K}$ at $R \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{M})^{\mathrm{SD}}$ and hence at $R \mathbf{\Gamma}_{x}(\mathcal{O}_{L}, \mathscr{N})$ are independent of the choice of $L$. The morphism $\mathscr{A}_{2}' \to \mathscr{A}_{2}$ is an isomorphism over $K$. Hence $$R \mathbf{\Gamma}_{x}( \mathcal{O}_{L}, [\mathscr{A}_{2}' \to \mathscr{A}_{2}][-1] ) \cong R \mathbf{\Gamma}( \mathcal{O}_{L}, [\mathscr{A}_{2}' \to \mathscr{A}_{2}][-1] ).$$ By the same reasoning as the case of $R \mathbf{\Gamma}( \mathcal{O}_{L}, [\mathscr{B}_{2}^{0} \to \mathscr{B}_{1}^{0}] )$ above, this object is in $\mathcal{D}$ and the value of $\chi / e_{L / K}$ at this object is independent of the choice of $L$. With Proposition [Proposition 50](#0017){reference-type="ref" reference="0017"}, all these together imply that $c(A_{1})_{L}' - c(A_{2})_{L}' - c(B_{1})_{L}' + c(B_{2})_{L}'$ divided by $e_{L / K}$ is independent of the choice of $L$. Hence we may assume that $L = K$. But then every term becomes zero. ◻ **Proposition 52**. *Assume that $A_{1} \to A_{2}$ has étale kernel. Then $c(A_{1})_{L}' - c(A_{2})_{L}' = c(B_{1})_{L}' - c(B_{2})_{L}'$.* *Proof.* Let $N$ and $M$ be the kernels of $A_{1} \to A_{2}$ and $B_{2} \to B_{1}$, respectively. Let $\mathscr{N}$ and $\mathscr{M}$ be the schematic closures of $N$ in $\mathscr{A}_{1}$ and $M$ in $\mathscr{B}_{2}$, respectively. Then $\mathscr{N}$ is quasi-finite flat separated and $\mathscr{M}$ is finite flat. Let $\mathscr{A}_{2}'$ and $\mathscr{B}_{1}'$ be the fppf quotients $\mathscr{A}_{1} / \mathscr{N}$ and $\mathscr{B}_{2} / \mathscr{M}$, respectively, which are smooth separated group schemes. 
We have $\mathscr{M} \subset \mathscr{B}_{2}^{0}$ and $\mathscr{B}_{2}^{0} / \mathscr{M} \cong \mathscr{B}_{1}'^{0}$. By [\[0012\]](#0012){reference-type="eqref" reference="0012"}, we have natural morphisms $$\mathscr{N} \otimes^{L} \mathscr{M} \to [\mathscr{A}_{1} \to \mathscr{A}_{2}][-1] \otimes^{L} [\mathscr{B}_{2}^{0} \to \mathscr{B}_{1}^{0}][-1] \to \mathop{\mathrm{\mathbf{G}_m}}$$ in $D(\mathcal{O}_{K, \mathop{\mathrm{\mathrm{fppf}}}})$. Over the generic fibers, it gives the Cartier duality between $N$ and $M$. This fits in the commutative diagram $$\begin{CD} R \mathbf{\Gamma}_{x}(\mathcal{O}_{L}, \mathscr{N}) @>>> R \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{M})^{\mathrm{SD}}[-1] \\ @VVV @AAA \\ R \mathbf{\Gamma}_{x}( \mathcal{O}_{L}, [\mathscr{A}_{1} \to \mathscr{A}_{2}][-1] ) @>>> R \mathbf{\Gamma}( \mathcal{O}_{L}, [\mathscr{B}_{2}^{0} \to \mathscr{B}_{1}^{0}] )^{\mathrm{SD}} \end{CD}$$ along with the morphism [\[0018\]](#0018){reference-type="eqref" reference="0018"}. Let $\mathscr{N}'$ be the Cartier dual of $\mathscr{M}$. Then by [@Suz Theorem (5.2.1.2)], the upper horizontal morphism can be written as the morphism $$R \mathbf{\Gamma}_{x}(\mathcal{O}_{L}, \mathscr{N}) \to R \mathbf{\Gamma}_{x}(\mathcal{O}_{L}, \mathscr{N}')$$ induced by the morphism $\mathscr{N} \to \mathscr{N}'$. Since it is an isomorphism over the generic fibers, we have $$R \mathbf{\Gamma}_{x}(\mathcal{O}_{L}, [\mathscr{N} \to \mathscr{N}']) \cong R \mathbf{\Gamma}(\mathcal{O}_{L}, [\mathscr{N} \to \mathscr{N}']).$$ As in the proof of Proposition [Proposition 51](#0019){reference-type="ref" reference="0019"}, since the generic fiber of $\mathscr{N} \to \mathscr{N}'$ is an isomorphism of étale group schemes, we know that the value of $\chi / e_{L / K}$ at this object is independent of the choice of $L$. Also, the values of $\chi / e_{L / K}$ at $$R \mathbf{\Gamma}_{x}( \mathcal{O}_{L}, [\mathscr{A}_{2}' \to \mathscr{A}_{2}][-1] ) \quad \text{and} \quad R \mathbf{\Gamma}( \mathcal{O}_{L}, [\mathscr{B}_{1}'^{0} \to \mathscr{B}_{1}^{0}] )$$ are independent of the choice of $L$ by the same argument. Let $E$ be a mapping cone of the morphism [\[0018\]](#0018){reference-type="eqref" reference="0018"}, and let $D$ be a mapping cone of the map $R \mathbf{\Gamma}_{x}(\mathcal{O}_{L}, \mathscr{N})\to R \mathbf{\Gamma}( \mathcal{O}_{L}, [\mathscr{B}_{2}^{0} \to \mathscr{B}_{1}^{0}] )^{\mathrm{SD}}$ induced by the diagram above. The octahedral axiom shows that there are exact triangles $$R \mathbf{\Gamma}_{x}( \mathcal{O}_{L}, [\mathscr{A}_{2}' \to \mathscr{A}_{2}][-1]) \to D \to E$$ and $$D \to R \mathbf{\Gamma}(\mathcal{O}_{L}, [\mathscr{N} \to \mathscr{N}']) \to R \mathbf{\Gamma}( \mathcal{O}_{L},[\mathscr{B}_{1}'^{0} \to \mathscr{B}_{1}^{0}] )^{\mathrm{SD}}[1]$$ in $\mathcal{D}.$ Moreover, we observe that $R \mathbf{\Gamma}( \mathcal{O}_{L}, [\mathscr{B}_{1}'^{0} \to \mathscr{B}_{1}^{0}] )$ is contained in $\mathcal{D}_u.$ In particular, using Proposition [Proposition 43](#0016){reference-type="ref" reference="0016"}, we see that the value of $\chi/e_{L/K}$ at the class of $E$ in $K_0(\mathcal{D})$ is independent of the choice of $L$. Hence we may assume that $L = K$, and everything becomes zero. ◻ The proofs of the above two propositions also work for mixed characteristic $K$ with minor modifications. **Proposition 53**. *Assume that $A_{1} \to A_{2}$ has unipotent connected kernel. 
Then $c(A_{1})_{L}' - c(A_{2})_{L}' = c(B_{1})_{L}' - c(B_{2})_{L}'$.* *Proof.* Let $N$ and $M$ be the kernels of $A_{1} \to A_{2}$ and $B_{2} \to B_{1}$, respectively. Let $\mathscr{N}$ and $\mathscr{M}$ be the schematic closures of $N$ in $\mathscr{A}_{1}$ and $M$ in $\mathscr{B}_{2}$, respectively. Then both $\mathscr{N}$ and $\mathscr{M}$ are finite flat. Let $\mathscr{A}_{2}'$ and $\mathscr{B}_{1}'$ be the fppf quotients $\mathscr{A}_{1} / \mathscr{N}$ and $\mathscr{B}_{2} / \mathscr{M}$, respectively, which are smooth separated group schemes. Let $\mathscr{N}'$ be the Cartier dual of $\mathscr{M}$ and $\mathscr{M}'$ be the Cartier dual of $\mathscr{N}$. Arguing similarly to the proof of Proposition [Proposition 52](#0020){reference-type="ref" reference="0020"}, we are reduced to showing that the value of $\chi / e_{L / K}$ at $R \mathbf{\Gamma}(\mathcal{O}_{L}, [\mathscr{N} \to \mathscr{N}'])$ is independent of the choice of $L$. Let $G$ be the Weil restriction $\mathop{\mathrm{\mathrm{Res}}}_{\mathscr{M}' / \mathcal{O}_{K}} \mathop{\mathrm{\mathbf{G}_m}}$, which is a smooth affine group scheme over $\mathcal{O}_{K}$. Let $\mathscr{N} \hookrightarrow G$ be the natural inclusion. Set $H = G / \mathscr{N}$, which is a smooth affine group scheme over $\mathcal{O}_{K}$. Let $G'$ be the Weil restriction $\mathop{\mathrm{\mathrm{Res}}}_{\mathscr{M} / \mathcal{O}_{K}} \mathop{\mathrm{\mathbf{G}_m}}$. Let $\mathscr{N}' \hookrightarrow G'$ be the natural inclusion and set $H' = G' / \mathscr{N}'$. We have a commutative diagram with exact rows $$\begin{CD} 0 @>>> \mathscr{N} @>>> G @>>> H @>>> 0, \\ @. @VVV @VVV @VVV @. \\ 0 @>>> \mathscr{N}' @>>> G' @>>> H' @>>> 0. \end{CD}$$ (These are Bégueri's canonical smooth resolutions; see [@Milne Chapter III, Theorem A.5].) The vertical morphisms are isomorphisms on the generic fibers. The diagram induces an exact triangle $$R \mathbf{\Gamma}(\mathcal{O}_{L}, [\mathscr{N} \to \mathscr{N}']) \to \frac{ \mathbf{\Gamma}(\mathcal{O}_{L}, G') }{ \mathbf{\Gamma}(\mathcal{O}_{L}, G) } \to \frac{ \mathbf{\Gamma}(\mathcal{O}_{L}, H') }{ \mathbf{\Gamma}(\mathcal{O}_{L}, H) }.$$ As before, the values of $\chi / e_{L / K}$ at the second and third terms are well-defined and independent of the choice of $L$. Hence the same is true at the first term. ◻ **Proposition 54**. *We have $c(A_{1})_{L}' - c(A_{2})_{L}' = c(B_{1})_{L}' - c(B_{2})_{L}'$. In particular, we have $c(A_{1}) - c(A_{2}) = c(B_{1}) - c(B_{2})$.* *Proof.* This follows from Propositions [Proposition 50](#0017){reference-type="ref" reference="0017"}, [Proposition 51](#0019){reference-type="ref" reference="0019"}, [Proposition 52](#0020){reference-type="ref" reference="0020"} and [Proposition 53](#0021){reference-type="ref" reference="0021"}. ◻ **Theorem 55**. *The base change conductor is invariant under duality for Abelian varieties over $K$. More precisely, we have $c(A) = c(B)$ for any Abelian varieties $A$ and $B$ over $K$ dual to each other.* *Proof.* Apply Proposition [Proposition 54](#0022){reference-type="ref" reference="0022"} to any isogeny $A \to B$ to the dual. We obtain $c(A) - c(B) = c(B) - c(A)$. Thus $c(A) = c(B)$. ◻ ## Isogeny invariance for tori Assume that $K$ has equal characteristic. **Theorem 56** (cf. [@CY]). *The base change conductor for tori is isogeny invariant. More precisely, if $T_{1} \to T_{2}$ is an isogeny between tori over $K$, then $c(T_{1}) = c(T_{2})$.* *Proof.* Let $N$ be the kernel of $T_{1} \to T_{2}$. Let $\mathscr{T}_{i}$ be the Néron lft-model of $T_{i}$. 
Let $L / K$ be any finite Galois extension. Note first that the maximal étale quotient of $N$ has order invertible in $\mathop{\mathrm{\mathcal{O}}}_K$ because $N$ is multiplicative. In particular, we can factorise $T_1\to T_2$ into an isogeny with infinitesimal kernel and an isogeny of degree prime to $p.$ It clearly suffices to show invariance of $c(-)$ for the two isogenies separately. If the degree is invertible in $\mathop{\mathrm{\mathcal{O}}}_K,$ then the map $\mathop{\mathrm{\mathrm{Lie}}}\mathscr{T}_1 \to \mathop{\mathrm{\mathrm{Lie}}}\mathscr{T}_2$ is an isomorphism, and the same holds true after base change to $L$ (with Néron lft-models taken over $\mathop{\mathrm{\mathcal{O}}}_L$). Therefore we shall henceforth assume that $N$ is infinitesimal. We have $\mathbf{H}^{n}(L, T_{i}) = 0$ for all $n \ge 1$ by [@Suz Proposition (3.4.3) (e)]. Also we have $\mathbf{\Gamma}(L, N) = 0$ since $N$ is infinitesimal. Therefore we have a commutative diagram with exact rows and columns $$\begin{CD} @. 0 @. 0 \\ @. @VVV @VVV \\ 0 @>>> \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{T}_{1}) @>>> \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{T}_{2}) @>>> \dfrac{ \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{T}_{2}) }{ \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{T}_{1}) } @>>> 0 \\ @. @VVV @VVV @VVV \\ 0 @>>> \mathbf{\Gamma}(L, T_{1}) @>>> \mathbf{\Gamma}(L, T_{2}) @>>> \mathbf{H}^{1}(L, N) @>>> 0 \\ @. @VVV @VVV \\ @. \dfrac{ \mathbf{\Gamma}(L, T_{1}) }{ \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{T}_{1}) } @>>> \dfrac{ \mathbf{\Gamma}(L, T_{2}) }{ \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{T}_{2}) } \\ @. @VVV @VVV \\ @. 0 @. 0. \end{CD}$$ It follows by Proposition [Proposition 45](#IsogL){reference-type="ref" reference="IsogL"} that the morphism $$\label{0024} \frac{ \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{T}_{2}) }{ \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{T}_{1}) } \to \mathbf{H}^{1}(L, N)$$ has mapping cone in $\mathcal{D}_{u}$ and the value of $\chi$ at this cone is $- c(T_{1})_{L}' + c(T_{2})_{L}'$. Therefore it is enough to show that the value of $\chi / e_{L / K}$ at this cone is independent of the choice of $L$. Let $\mathscr{N}$ be the schematic closure of $N$ in $\mathscr{T}_{1}$, which is finite flat over $\mathcal{O}_{K}$. Let $\mathscr{T}_{2}'$ be the fppf quotient $\mathscr{T}_{1} / \mathscr{N}$, which is a smooth separated group scheme over $\mathcal{O}_{K}$. Consider the induced two morphisms $$\mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{T}_{1}) \to \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{T}_{2}') \to \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{T}_{2}).$$ The cokernel of the first morphism is $\mathbf{H}^{1}(\mathcal{O}_{L}, \mathscr{N})$ since $\mathbf{H}^{1}(\mathcal{O}_{L}, \mathscr{T}_{1}) = 0$ by [@Suz Proposition (3.4.2) (a)]. The kernel of the second morphism is zero since $\mathscr{T}_{2}' \to \mathscr{T}_{2}$ is an isomorphism over $L$. Therefore we have an exact sequence $$0 \to \mathbf{H}^{1}(\mathcal{O}_{L}, \mathscr{N}) \to \frac{ \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{T}_{2}) }{ \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{T}_{1}) } \to \frac{ \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{T}_{2}) }{ \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{T}_{2}') } \to 0.$$ Consider the resulting morphisms $$\mathbf{H}^{1}(\mathcal{O}_{L}, \mathscr{N}) \hookrightarrow \frac{ \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{T}_{2}) }{ \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{T}_{1}) } \to \mathbf{H}^{1}(L, N).$$ Their composite is injective by [@Suz Proposition (3.4.6)]. 
Hence the value of $\chi$ at the morphism [\[0024\]](#0024){reference-type="eqref" reference="0024"} is equal to $$- \dim \frac{ \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{T}_{2}) }{ \mathbf{\Gamma}(\mathcal{O}_{L}, \mathscr{T}_{2}') } + \dim \frac{ \mathbf{H}^{1}(L, N) }{ \mathbf{H}^{1}(\mathcal{O}_{L}, \mathscr{N}) }.$$ The first dimension divided by $e_{L / K}$ is independent of $L$ by [@LLR Theorem 2.1 (b)] as before. The second dimension divided by $e_{L / K}$ is independent of $L$ by Propositions [Proposition 19](#FinDuality){reference-type="ref" reference="FinDuality"} and [Proposition 23](#EtCriterion){reference-type="ref" reference="EtCriterion"}. Combining these two, we get the result. ◻ Again, a slight modification of this proof also works in the mixed characteristic case. 10 Anantharaman, S. *Schémas en groupes, espaces homogènes et espaces algébriques sur une base de dimension 1*. Bull. Soc. Math. France, Mémoire 33 (1973), pp. 5-79. Bégueri, L. *Dualité sur un corps local à corps résiduel algébriquement clos*. Mémoires de la S. M. F., 2e série, tome 4, 1980. Bertapelle, A., González-Avilés, C.-D. *The Greenberg functor revisited*. Eur. J. Math., 4(4):1340--1389, 2018. Bosch, S. *Component groups of abelian varieties and Grothendieck's duality conjecture*. Ann. inst. Fourier, tome 47, no 5, pp. 1257-1287, 1997. Bosch, S., Lütkebohmert, W., Raynaud, M. *Néron models*. Ergeb. Math. Grenzgeb., Springer-Verlag, Berlin, Heidelberg, 1990. Bosch, S., Schlöter, K. *Néron models in the setting of formal and rigid geometry*. Math Ann. 301, pp. 339-362, 1995. Bosch, S., Xarles, X. *Component groups of Néron models via rigid uniformization*. Math. Ann. 306, pp. 459-486, 1996. Chai, C.-L. *Néron models for semiabelian varieties: Congruence and change of base field*. Asian J. Math. 4:4, pp. 715-736, 2000. Chai, C.-L., Yu, J.-K. *Congruences of Néron models for tori and the Artin conductor*. Ann. of Math. 154, pp. 347-382, 2001. Cluckers, R., Loeser, F., Nicaise, J. *Chai's conjecture and Fubini properties of dimensional motivic integration*. Algebra Number Theory, Vol. 7, No. 4, pp. 893-915, 2013. Geisser, T. H., Suzuki, T. *Special values of $L$-functions of one-motives over function fields*. J. Reine Angew. Math., 793:281--304, 2022. *Groupes de monodromie en géométrie algébrique. I*. Lecture Notes in Mathematics, Vol. 288. Springer-Verlag, Berlin-New York, 1972. Séminaire de Géométrie Algébrique du Bois-Marie 1967--1969 (SGA 7 I), Dirigé par A. Grothendieck. Avec la collaboration de M. Raynaud et D. S. Rim. Halle, L. H., Nicaise, J. *Motivic zeta functions of abelian varieties, and the monodromy conjecture*. Adv. Math. 227, pp. 610-653, 2011. Liu, Q., Lorenzini, D., Raynaud, M., *Néron models, Lie algebras, and reduction of curves of genus one*. Invent. Math. 157, pp. 455-518, 2004. Liu, Q., Lorenzini, D., Raynaud, M., *Corrigendum to Néron models, Lie algebras, and reduction of curves of genus one and The Brauer group of a surface*. Invent. Math. 214, pp. 593-604, 2018. Milne, J. S., *Arithmetic Duality Theorems*. Perspectives in Math., Vol. 1, Academic Press, Inc., 1986. Erratum available at: `https://www.jmilne.org/math/Books/add/ADT2006.pdf` Overkamp, O. *Chai's conjecture for semiabelian Jacobians*. Preprint, 2022. Available at https://arxiv.org/abs/2212.05018. Overkamp, O. *Jumps and Motivic Invariants of Semiabelian Jacobians*. Int. Math. Res. Not., Issue 20, pp. 6437-6479, 2019. Overkamp, O. *On Jacobians of geometrically reduced curves and their Néron models*. Preprint, 2021. 
Serre, J.-P. *Groupes proalgébriques*. Inst. Hautes Études Sci. Publ. Math., (7):67, 1960. Authors of the Stacks project. *Stacks project*. Columbia University. Suzuki, T. *Grothendieck's pairing on Néron component groups: Galois descent from the semistable case*. Kyoto J. Math., Vol. 60, No. 2, pp. 593--716, 2020. Tan, K.-S., Trihan, F., Tsoi, K.-W. *The $\mu$-invariant change for abelian varieties over finite $p$-extensions of global fields*. Preprint: arXiv:2301.09073v1, 2023. [Mathematisches Institut der Heinrich-Heine-Universität Düsseldorf, Universitätsstr. 1, 40225 Düsseldorf, Germany]{.smallcaps}\ *E-mail address:* `otto.overkamp@uni-duesseldorf.de`\ \ [Department of Mathematics, Chuo University, 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551, Japan]{.smallcaps}\ *E-mail address:* `tsuzuki@gug.math.chuo-u.ac.jp` [^1]: Note that some errors in [@LLR] have been corrected in the corrigendum [@LLRII]. We only use results of [@LLR] unaffected by these errors. [^2]: The sheafification here is the pro-étale sheafification. The notation in [@Suz Section 3.3] is actually $\Tilde{\mathbf{H}}^{n}(\mathcal{O}_{K}, G)$. The notation $\mathbf{H}^{n}(\mathcal{O}_{K}, G)$ in [@Suz Section 3.3] is reserved for the étale sheafification of the same presheaf. But the étale sheafification is already a pro-étale sheaf under the "P-acyclicity" condition ([@Suz Section 2.4]), which is practically almost always satisfied ([@Suz Section 3.4]). In this paper, we exclusively use the pro-étale sheafified version.
--- abstract: | In this short note we show that the path-connected component of the identity of the derived subgroup of a compact Lie group consists just of commutators. We also discuss an application of our main result to the homotopy type of the classifying space for commutativity for a compact Lie group whose path-connected component of the identity is abelian. address: - Fakultät für Mathematik, Universität Bielefeld, D-33501 Bielefeld, Germany. - Centro de Investigación en Matemáticas, Unidad Mérida, PCTY, Carretera Sierra Papacal--Chuburná Puerto Km 5.5, Sierra Papacal, Mérida, YUC 97302, Mexico. - Centro de Investigación en Matemáticas, Unidad Mérida, PCTY, Carretera Sierra Papacal--Chuburná Puerto Km 5.5, Sierra Papacal, Mérida, YUC 97302, Mexico. author: - Juan Omar Gómez - Victor Torres-Castillo - Bernardo Villarreal title: On the commutator length of compact Lie groups --- # Introduction {#sec : Introduction .unnumbered} Let $G$ be a group, $[G,G]$ its derived subgroup and $x$ an element in $[G,G]$. The *commutator length of $x$* is defined as the minimum number of commutators needed to write $x$ as a product of commutators, and is denoted by $\mathrm{cl}(x)$. It is well known that $\mathrm{cl}(x)$ is not necessarily one. This invariant has been extensively studied in Lie groups, and there are some remarkable results that guarantee that in many cases any element in the derived subgroup is in fact a commutator (see [@Dok86], [@Got49], [@PW62]). For instance, from work of M. Goto [@Got49] it follows that in a compact connected Lie group $G$, the derived subgroup of $G$ consists just of commutators. But for non-compact connected Lie groups, the result no longer holds; for example, the negative identity matrix in $\mathrm{SL}_2(\mathbb{R})$ is not a commutator. In this short note we investigate this invariant for compact Lie groups that are not necessarily connected. Let $G_0$ denote the path-connected component of the identity in $G$. **Theorem 1**. *Let $G$ be a compact Lie group. Then any element in $[G,G]_0$ is a commutator, that is, $\mathrm{cl}(x)=1$ for every $x\in [G,G]_0$.* We would like to emphasize that in general the group $[G,G]_0$ is not the derived subgroup of a compact connected Lie group, as $G_0$ may not be semi-simple, but it does contain the derived subgroup $[G_0,G_0]$. Moreover, if $G$ is a compact Lie group and $\pi_0(G)$ contains elements of commutator length greater than 1, then so does $G$ (see Remark [Remark 11](#Remark sobre clG mayor que uno){reference-type="ref" reference="Remark sobre clG mayor que uno"}). Hence Theorem [Theorem 1](#main){reference-type="ref" reference="main"} is the *best* generalization of Goto's result in the compact setting. Our study of the commutator length of elements in $[G,G]_0$ was mainly motivated by some results in [@Villarreal2023] concerning the second homotopy group of the classifying space for commutativity $B(2,G)$ of a compact Lie group $G$. The space $B(2,G)$ is a variant of the classifying space of $G$ and was introduced in [@ACT12]. As in the theory of principal $G$-bundles, there is a universal bundle $E(2,G)\to B(2,G)$ in the sense of [@AG15 Theorem 2.2], where the total space $E(2,G)$ carries all the failure of $G$ to be commutative, up to homotopy. For instance, when $G$ is a compact Lie group, $E(2,G)$ is a contractible space if and only if $G$ is abelian (see [@AGV21]). It is then natural to try to describe the homotopy type of $E(2,G)$ for non-abelian $G$.
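Before turning to the application, it may help to see the single-commutator phenomenon behind Theorem [Theorem 1](#main){reference-type="ref" reference="main"} in the simplest compact connected example. The short script below is a sketch for illustration only and is not used in any proof; it assumes NumPy, and the helper names `random_su2` and `commutator` are ad hoc choices. It writes a randomly generated element of $\mathrm{SU}(2)$ as a single commutator by the same recipe as in Remark [Remark 9](#conmutadores en grupos de Lie semisimples compactos y conexos){reference-type="ref" reference="conmutadores en grupos de Lie semisimples compactos y conexos"} below: diagonalise the element and take a commutator of the Weyl element with a square root inside the maximal torus of diagonal matrices.

```python
# A minimal numerical sanity check (illustration only): every element of the compact
# connected group SU(2) is a single commutator.  We diagonalise g, then use the Weyl
# element n of the diagonal maximal torus, so that g is conjugate to [n, t].
import numpy as np

rng = np.random.default_rng(0)

def random_su2():
    """Random element of SU(2) via the unit-quaternion parametrisation."""
    x = rng.normal(size=4)
    a, b, c, d = x / np.linalg.norm(x)
    return np.array([[a + 1j * b, c + 1j * d],
                     [-c + 1j * d, a - 1j * b]])

def commutator(A, B):
    """The convention of this note: [A, B] = A^{-1} B^{-1} A B."""
    return np.linalg.inv(A) @ np.linalg.inv(B) @ A @ B

g = random_su2()
w, V = np.linalg.eig(g)                 # g = V diag(w) V^{-1}, w = (e^{i theta}, e^{-i theta})
n = np.array([[0.0, 1.0], [-1.0, 0.0]], dtype=complex)   # Weyl element: swaps the torus entries
t = np.diag([np.sqrt(w[0]), 1.0 / np.sqrt(w[0])])        # then [n, t] = diag(w[0], 1/w[0])
A = V @ n @ np.linalg.inv(V)
B = V @ t @ np.linalg.inv(V)
print(np.allclose(commutator(A, B), g))  # True: g is a single commutator
```

Of course, this only illustrates Goto's classical result for the connected group $\mathrm{SU}(2)$; the point of Theorem [Theorem 1](#main){reference-type="ref" reference="main"} is that the same conclusion persists for $[G,G]_0$ when $G$ is not connected.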
Our main application of Theorem [Theorem 1](#main){reference-type="ref" reference="main"} is the following. **Proposition 2**. *Let $G$ be a compact Lie group in which $G_0$ is abelian. Then up to homotopy, the derived subgroup $[G,G]$ splits off from the loop space $\Omega E(2,G)$.* **Acknowledgements:** This project was initiated at CIMAT Mérida's Algebraic Topology Seminar, in the modality of 'mesas de trabajo' that took place in the period January-June 2023. # Preliminaries In this note, we will adopt the following convention. Let $g,h$ be elements in a topological group $G$. We write $[g,h]$ to denote the commutator $g^{-1}h^{-1}gh$. As mentioned before we write $G_0$ to denote the path-connected component of the identity of $G$. Recall that the commutator subgroup of a compact Lie group is closed [@HofMor13 Theorem 6.11], and then coincides with the algebraic commutator subgroup. **Definition 3**. Let $G$ be a topological group, and let $g\in [G,G]$. We define the *commutator length* of $g$ as the minimum number of commutators $[g_1,h_1],\ldots,[g_n,h_n]$ such that $g=[g_1,h_1]\cdot\ldots\cdot[g_n,h_n]$, with $g_i,h_i\in G$, and we denote this number by $\mathrm{cl}(g)$. The *commutator length* $\mathrm{cl}(G)$ of $G$ is the supremum in ${\mathbb N}\cup\{\infty\}$ of the set $\{\mathrm{cl}(g)\mid g\in[G,G]\}$, and similarly the *connected commutator length* $\mathrm{cl}(G)_0$ of $G$ is the the supremum of $\{\mathrm{cl}(g)\mid g\in[G,G]_0\}$. Note that under this definition the commutator length of the identity element $1 \in G$ is $\mathrm{cl}(1) = 1$, hence if $G$ is an abelian group then $\mathrm{cl}(G)_0 = \mathrm{cl}(G) = 1$. **Remark 4**. There is no reason to expect the commutator length of an arbitrary group to be finite, we refer to [@Cal08] for some examples of groups with infinite commutator length. However, if $G$ is a compact Lie group, then $\mathrm{cl}(G)$ is finite. For example, it follows by Theorem [Theorem 1](#main){reference-type="ref" reference="main"} and the fact that $[G,G] = [G,G]_0[F,F]$ for some finite subgroup $F\subset G$ (see Remark [Remark 5](#reduccion de extension arbitraria a producto semidirecto){reference-type="ref" reference="reduccion de extension arbitraria a producto semidirecto"} bellow). It is worth highlighting that computing the commutator length of a group is not an easy task, not even for finite groups. In fact, there are some famous conjectures about this invariant. For instance, O. Ore [@Ore51] conjectured that finite simple groups have commutator length 1, and D. Dokovik [@Dok86] conjectured that simple real Lie groups have commutator length 1. **Remark 5**. Let $G$ be a compact Lie group. Recall that $G$ is the quotient of a semidirect product $\Gamma$ of a connected compact Lie group by a finite subgroup, that is, it has the form $G=(G_0\rtimes F)/H$ where $G_0$ denotes the connected component of the identity, $F$ is finite group and $H$ is a common finite subgroup that is central in $G_0$ but not necessarily in $F$ (see [@BorelSerre64 Lemma 5.1, footnote p. 152]). Let $p\colon \Gamma\to G$ the quotient map. Note that the image of $p|_{[\Gamma,\Gamma]}$ is precisely $[G,G]$. Moreover, it maps $[\Gamma,\Gamma]_0$ to $[G,G]_0$. In particular $\mathrm{cl}(G)\leq \mathrm{cl}(\Gamma)$ and $\mathrm{cl}(G)_0\leq \mathrm{cl}(\Gamma)_0$. # Extensions of finite groups by tori In this section we show that the connected commutator length of an extension of a finite group by a torus is at most 1. 
In the last section we will use Remark [Remark 5](#reduccion de extension arbitraria a producto semidirecto){reference-type="ref" reference="reduccion de extension arbitraria a producto semidirecto"} to extend this result to compact Lie groups. First we identify the connected component of the identity of $[G,G]$ when $G$ is a semidirect product of a finite group by a torus. **Lemma 6**. *Let $G=T\rtimes Q$ be a semidirect product of a torus $T$ by a finite group $Q$. Then $[G,G]_0$ agrees with $[Q,T]$.* *Proof.* We have that $$[G,G]=[Q,Q][Q,T][T,T]=[Q,Q][Q,T]\,.$$ Let $g\in [G,G]_0$. By the previous line, we can write $g$ as $rx$ for $r\in [Q,Q]$ and $x\in [Q,T]$. Note that $[G,G]_0\subseteq T$, and since $x\in [Q,T]\subseteq T$, we deduce that $r$ must lie in $T$, hence $r$ is trivial and the result follows. ◻ **Theorem 7**. *Let $G$ be an extension of a finite group $Q$ by a torus $T$. Then $$\mathrm{cl}(G)_0 = 1\,.$$* *Proof.* We will first prove the case when $G$ is a semidirect product. The general case will follow by Remark [Remark 5](#reduccion de extension arbitraria a producto semidirecto){reference-type="ref" reference="reduccion de extension arbitraria a producto semidirecto"}. Assume that $G$ a semidirect product of $Q$ by $T$. Let $TT$ denote the subset consisting of all torsion elements of $T$. Let $x=r[q,t]$ be an element in $[G,G]_0$ with $q\in Q$ and $t\in T$. We claim that if $r\in TT$, then there is a $t'\in T$ such that $x=[q,t']$. Moreover, it is enough to prove this claim for some choice of $t$. Indeed, for every $p\in Q$ we have a continuous group homomorphism $$[p,-]\colon T\to T\,,$$ hence the equality $r [q,t] = [q,t']$ implies $r = [q,t^{-1}t']$, and with this expression one can readily verify the claim for an arbitrary $t$. Let us assume then that $t$ has infinite order. Consider a 1-parameter subgroup $S$ of $[G,G]_0$ containing $r[q,t]$. Then $[q,t^N]=(r[q,t])^N$ is in $S$, where $N$ denotes the order of $r$. Since $[q,t^N]$ is of infinite order, it follows that $[q,-]$ must cover $S$ and the claim follows. On the other hand, let $m,n$ be non-negative integers with $m\leq n$. Define $$P(m,n)=\{[q_1,t_1]\cdot\ldots\cdot[q_n,t_n]\mid q_i\in Q, t_1,\dots t_m \in TT, t_{m+1},\ldots t_n\in T \}\,.$$ Note that $P(0,n)$ is a closed subset of $[G,G]_0$ for every $n\geq 1$. In fact, $P(0,|Q|)=[Q,T]=[G,G]_0$ by Lemma [Lemma 6](#lemma 1){reference-type="ref" reference="lemma 1"}. We have the following inclusion of sets $$P(|Q|-1,|Q|)\subset P(0,|Q|)=[G,G]_0\,.$$ Moreover, if $t\in TT$, then $[q,t]\in TT$ for any $q\in Q$. Therefore any element of $P(|Q|-1,|Q|)$ can be written as $r[q,t]$ for some $r\in TT$, $q\in Q$ and $t\in T$. Hence by the previous claim we obtain that $P(|Q|-1,|Q|)\subset P(0,1)$. Finally, note that the closure of $P(|Q|-1,|Q|)$ must be contained in $P(0,1)$ since the latter is closed. Recall that $TT$ is a dense subspace of $T$. A standard argument of dense subspaces and surjective continuous maps can be used to show that $P(|Q|-1,|Q|)$ is a dense subspace in $[G,G]_0$. Hence $P(0,1)=[G,G]_0$. ◻ # A homotopy splitting of the classifying space for commutativity for extensions of finite groups by tori {#aplicaciones} In this section we briefly describe an application and our main motivation to study the connected commutator length, $\mathrm{cl}(G)_0$, for extensions of finite groups by tori. 
A classical result in the homotopy theory of classifying spaces is that the loop space $\Omega BG$ of the classifying space $BG$ of a topological group $G$, is homotopy equivalent to $G$. It has been explored what the analogue result is for a variant of $BG$ called the classifying space for commutativity, denoted $B(2,G)$, which was introduced by A. Adem, F. Cohen and E. Torres-Giese. A model for $B(2,G)$ is the geometric realization of a simplicial space $B_\bullet (2,G)$, where $B_k(2,G)$ is the space of ordered commuting $k$-tuples of $G$ (see [@ACT12]). There is a natural map $B(2,G)\to BG$ induced by the inclusions $B_k(2,G)\hookrightarrow G^k$, along which the universal bundle $EG\to BG$ pulls back to a principal $G$-bundle $E(2,G)\to B(2,G)$. Then if $G$ is a Lie group there is a homotopy equivalence $$G\times \Omega E(2,G)\simeq \Omega B(2,G)$$ (see [@ACT12 Theorem 6.3 and Remark]). Hence to give a more precise answer on what the homotopy type of $\Omega B(2,G)$ is, we should further study the loop space $\Omega E(2,G)$. There is a map $\mathfrak{c}\colon E(2,G)\to B[G,G]$ that can be seen as a *higher* version of the algebraic commutator map $G\times G\to [G,G]$, which may be illustrated by the fact that the restriction of $\mathfrak{c}$ to the 1-skeleton filtration is the suspension of the reduced algebraic commutator $\Sigma G\wedge G\to \Sigma[G,G]$ (see [@AGV21 Section 3] for more details). The map $\mathfrak{c}$ is simply called commutator map. In [@AGV21 Question 21]) the authors asked if for every compact Lie group $G$, the looped commutator map $\Omega\mathfrak{c}\colon \Omega E(2,G)\to [G,G]$ splits, up to homotopy. Theorem [Theorem 7](#thm 1){reference-type="ref" reference="thm 1"} together with some previous work gives a positive answer for compact Lie groups in which the path connected component of the identity is abelian. The following corollary is a stronger version of what was stated in Proposition [Proposition 2](#prop: app){reference-type="ref" reference="prop: app"}. **Corollary 8**. *Let $G$ be an extension of a finite group by a torus. Then the looped commutator map $\Omega\mathfrak{c}\colon \Omega E(2,G) \to [G,G]$ splits, up to homotopy.* *Proof.* Under the hypothesis of the corollary, [@Villarreal2023 Corollary 3] states that the looped commutator map splits, up to homotopy, as long as $[G,G]_0$ consists of single commutators, but by Theorem [Theorem 7](#thm 1){reference-type="ref" reference="thm 1"} every extension of a finite group by a torus satisfies this condition, hence the result follows. ◻ # Proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"} {#proof-of-theorem-main} Let $G$ be a compact Lie group. Recall that there is an extension of groups $$1\to G_0\to G \to \pi_0(G)\to 1\,,$$ where $\pi_0(G)$ is the group of path-connected components of $G$, which is finite. **Remark 9**. Let $G$ be a compact connected semi-simple Lie group. As mentioned before, a classical result of Goto states that $\mathrm{cl}(G)=1$. Let $T$ be a maximal torus of $G$. Recall that a theorem of Cartan asserts that any element $g\in G$ can be written as $g = h^{-1}th$ for some $h\in G$ and some $t\in T$. Moreover, since $G$ is semi-simple, if $N_G(T)$ is the normalizer of $T$, then $[N_G(T),N_G(T)]_0=T$. By Theorem [Theorem 7](#thm 1){reference-type="ref" reference="thm 1"}, every element of $T$ can be written as a commutator $[n,t]$ for some $t\in T$ and $n\in N_G(T)$. 
Hence any element of $G$ has the form $h^{-1}[n,t]h$ with $h\in G$, $n\in N_G(T)$ and $t\in T$. Recall that Theorem [Theorem 1](#main){reference-type="ref" reference="main"} states that for a compact Lie group $G$, $\mathrm{cl}(x)=1$ for every $x\in [G,G]_0$. *Proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"}.* Let $F\subset G$ be a finite group as in Remark [Remark 5](#reduccion de extension arbitraria a producto semidirecto){reference-type="ref" reference="reduccion de extension arbitraria a producto semidirecto"}. By [@Villarreal2023 Lemma 5], we have a decomposition $$[G,G]_0 = [F,Z(G_0)_0][G_0,G_0]\,.$$ Let $x\in [G,G]_0$. We can write $x=zg$ with $z\in [F,Z(G_0)_0]$ and $g\in [G_0,G_0]$. Consider the group extension $$1\to Z(G_0)_0\to Z(G_0)_0 F\to F/(F\cap Z(G_0)_0)\to 1\,.$$ By Theorem [Theorem 7](#thm 1){reference-type="ref" reference="thm 1"} we can write $z$ as a commutator $[q,z^\prime]$, for some $q\in F$ and $z^\prime\in Z(G_0)_0$. By [@BM55] every automorphism of finite order of $[G_0,G_0]$ has an invariant maximal torus, that is, $q^{-1}Tq=T$ for a maximal torus $T\subset [G_0,G_0]$. Now since $[G_0,G_0]$ is a compact connected semi-simple Lie group and all maximal tori in $[G_0,G_0]$ are conjugate, by Remark [Remark 9](#conmutadores en grupos de Lie semisimples compactos y conexos){reference-type="ref" reference="conmutadores en grupos de Lie semisimples compactos y conexos"} we may further assume that $g=h^{-1}[n,t]h$ for some $h\in [G_0,G_0]$, $n\in N_{[G_0,G_0]}(T)$ and $t\in T$, where $N_{[G_0,G_0]}(T)$ is the normalizer of $T$. Moreover, since $z\in Z(G_0)$ we can write $x=h^{-1}[q,z^\prime][n,t]h$. Note that $$[q,z^\prime][n,t]\in [\langle q\rangle,Z(G_0)_0][N_{[G_0,G_0]}(T),N_{[G_0,G_0]}(T)]_0$$ which is a subgroup of the path-connected component of the commutator subgroup $[N_{G_0}(T_0)\langle q\rangle,N_{G_0}(T_0)\langle q\rangle]$, where $T_0=Z(G_0)_0T$ is a maximal torus of $G_0$. We can now consider the group extension $$1\to T_0\to N_{G_0}(T_{0}) \langle q\rangle\to Q\to 1\,,$$ where $Q$ is a finite group as $q$ leaves $N_{G_0}(T_0)$ invariant, as well. Invoking again Theorem [Theorem 7](#thm 1){reference-type="ref" reference="thm 1"}, the element $[q,z^\prime][n,t]$ is a commutator and hence so is $x$. ◻ We finish this note with a sufficient condition for compact Lie groups to have commutator length 1. **Corollary 10**. *Let $G$ be a compact Lie group. If the projection $G\to \pi_0(G)$ admits a section and $\pi_0(G)$ is abelian, then $\mathrm{cl}(G)=1$.* *Proof.* Note that under these hypotheses $[G,G]$ is connected. Thus $\mathrm{cl}(G)_0=\mathrm{cl}(G)$. Therefore the result follows by Theorem [Theorem 1](#main){reference-type="ref" reference="main"}. ◻ **Remark 11**. For a compact Lie group $G$, we have that $\mathrm{cl}(\pi_0(G))$ is a lower bound for $\mathrm{cl}(G)$. Hence we can construct compact Lie groups whose derived subgroups do not consist only of commutators, see [@Isa77] for an example of a family of finite groups with this property. ACGV21 Alejandro Adem, Frederick R. Cohen, and Enrique Torres Giese. Commuting elements, simplicial spaces and filtrations of classifying spaces. , 152(1):91--114, 2012. Alejandro Adem and José Manuel Gómez. A classifying space for commutativity in Lie groups. , 15(1):493--535, 2015. Omar Antolı́n-Camarena, Simon Gritschacher, and Bernardo Villarreal. Higher generation by abelian subgroups in lie groups. , pages 1--16, 2021. A. Borel and G.D. Mostow. 
On semi-simple automorphisms of Lie algebras. , 61(3):389--405, 1955. A. Borel and J.-P. Serre. Théorèmes de finitude en cohomologie galoisienne. , 39:111--164, 1964. Danny Calegari. What is$\ldots$ stable commutator length? , 55(9):1100--1101, 2008. Dragomir Z. Dokovic. . , 23(1):223 -- 228, 1986. Morikuni Goto. . , 1(3):270 -- 272, 1949. Karl H. Hofmann and Sidney A. Morris. . De Gruyter, Berlin, Boston, 2020. I. M. Isaacs. Commutators and the commutator subgroup. , 84(9):720--722, 1977. Oystein Ore. Some remarks on commutators. , 2(2):307--314, 1951. Samuel Pasiencier and Hsien-chung Wang. Commutators in a semi-simple Lie group. , 13:907--913, 1962. Bernardo Villarreal. On the second homotopy group of the classifying space for commutativity in Lie groups. , 217(3):Paper No. 51, 16, 2023.
--- abstract: | This note focuses on the properties of two blocks of elements of the probability mass function (pmf) of the Poisson distribution of order $k\ge2$. The first block is the elements for $n\in[1,k]$ and the second block is the elements for $n\in[k+1,2k]$. It is proved that elements in the first block form an "absolutely monotonic sequence" by which is meant that all the finite differences of the sequence are positive. Next, the properties of the elements in the second block are analyzed. It is shown that for sufficiently small $\lambda>0$, the sequence of elements for $n\in[k+1,2k]$ is strictly decreasing and also concave. The purpose of the analysis is to help determine a supremum value for $\lambda$, such that the pmf of the Poisson distribution of order $k\ge2$ decreases strictly for all $n \ge k$. A conjectured criterion for the supremum is given. Numerical calculations indicate it is the optimal bound, i.e. the supremum. In addition, a simple expression is proposed, based on numerical calculation, which is sufficient (but not necessary) and is a good approximation for the supremum. address: Convergent Computing Inc., P. O. Box 561, Shoreham, NY 11786, USA author: - S. R. Mane title: Convexity and monotonicity of the probability mass function of the Poisson distribution of order $k$ --- Poisson distribution of order $k$ ,probability mass function ,convexity ,monotonicity ,Compound Poisson distribution ,discrete distribution # [\[sec:intro\]]{#sec:intro label="sec:intro"} Introduction The Poisson distribution of order $k$ is a special case of a compound Poisson distribution introduced by Adelson [@Adelson1966]. The definition below is from [@KwonPhilippou], with slight changes of notation. **Definition 1**. *The Poisson distribution of order $k$ (where $k\ge1$ is an integer) and parameter $\lambda > 0$ is an integer-valued statistical distribution with the probability mass function (pmf) $$\label{eq:pmf_Poisson_order_k} f_k(n;\lambda) = e^{-k\lambda}\sum_{n_1+2n_2+\dots+kn_k=n} \frac{\lambda^{n_1+\dots+n_k}}{n_1!\dots n_k!} \,, \qquad n=0,1,2\dots$$* For $k=1$ it is the standard Poisson distribution. In a recent note [@Mane_Poisson_k_CC23_6], the author presented a graphical analysis of the structure of the pmf of the Poisson distribution of order $k\ge2$. For example, the pmf of the Poisson distribution of order $k\ge2$ can exhibit a maximum of *four* peaks simultaneously (although not all are modes, i.e. global maxima). Also in a recent note [@Mane_Poisson_k_CC23_7], the author presented analytical proofs of some of the properties of the structure of the pmf of the Poisson distribution of order $k$. This note focuses on the properties of two blocks of elements of the probability mass function of the Poisson distribution of order $k\ge2$. The first block is the elements for $n\in[1,k]$ and the second block is the elements for $n\in[k+1,2k]$. The first goal of this note is to prove that elements in the first block form an "absolutely monotonic sequence" by which is meant that all the finite differences of the sequence are positive. Next, we analyze the properties of the elements in the second block. It will be shown that for sufficiently small $\lambda>0$, the sequence of elements for $n\in[k+1,2k]$ is strictly decreasing and also concave. The purpose of the analysis is to help determine a supremum value for $\lambda$, such that the pmf of the Poisson distribution of order $k\ge2$ decreases strictly for all $n \ge k$. 
A conjectured criterion for the supremum is given in Conjecture [Conjecture 7](#conj:upbound_monotone_decrease){reference-type="ref" reference="conj:upbound_monotone_decrease"} below. Numerical calculations indicate it is the optimal bound, i.e. the supremum. We also propose a simple expression, based on numerical calculation, which is sufficient (but not necessary) and is a good approximation for the supremum. The structure of this paper is as follows. Sec. [\[sec:notation\]](#sec:notation){reference-type="ref" reference="sec:notation"} presents basic definitions and notation employed in this note. Sec. [\[sec:block1k\]](#sec:block1k){reference-type="ref" reference="sec:block1k"} proves the absolute monotonicity of the probability mass function for $n\in[1,k]$. Sec. [\[sec:block_kp1_2k\]](#sec:block_kp1_2k){reference-type="ref" reference="sec:block_kp1_2k"} analyses the properties of the probability mass function for $n\in[k+1,2k]$. A conjecture is proposed for the supremum value of $\lambda$, such that the pmf of the Poisson distribution of order $k$ decreases monotonically for all $n\ge k$. Sec. [\[sec:numest_lam_pk1k2\]](#sec:numest_lam_pk1k2){reference-type="ref" reference="sec:numest_lam_pk1k2"} proposes a simple expression, based on numerical calculation, which is sufficient (but not necessary) and is a good approximation for the supremum. Sec. [\[sec:conc\]](#sec:conc){reference-type="ref" reference="sec:conc"} concludes. # [\[sec:notation\]]{#sec:notation label="sec:notation"}Basic notation and definitions ## [\[sec:definitions\]]{#sec:definitions label="sec:definitions"}Definitions We work with $h_k(n;\lambda) = e^{k\lambda}f_k(n;\lambda)$ (see [@KwonPhilippou]) and refer to it as the "scaled pmf" below. For later reference we define the parameter $\kappa=k(k+1)/2$. From [@KwonPhilippou], define $r_k$ as the value of $\lambda$ such that $h_k(k;\lambda)=1$. Also for later reference, define $t_k$ as the value of $\lambda$ such that $h_k(k;\lambda)=2$ (see [@Mane_Poisson_k_CC23_7]). Define the upper bound $\lambda_k^{\rm dec}$ such that for $\lambda\in(0,\lambda_k^{\rm dec})$, the pmf of the Poisson distribution of order $k$ decreases strictly for all $n \ge k$. It was shown in [@Mane_Poisson_k_CC23_7] that $\lambda_k^{\rm dec} > 0$. For most of this note, we shall hold $k\ge2$ and $\lambda>0$ fixed and vary only the value of $n$. For brevity of the exposition, we adopt the notation by Kostadinova and Minkova [@KostadinovaMinkova2013] and write "$p_n$" in place of $h_k(n;\lambda)$ and omit explicit mention of $k$ and $\lambda$. Then $p_n$ is a polyomial in $\lambda$ of degree $n$ and for $n>0$ it has no constant term. The recurrence for $p_n$ is as follows (eq. (6) in [@Adelson1966], eq. (6) in [@KwonPhilippou], eq. (8) and Remark 3 in [@KostadinovaMinkova2013], eq. (2.3) in [@GeorghiouPhilippouSaghafi], and in all cases terms with negative indices are set to zero). $$\label{eq:KM_rechk} p_n = \frac{\lambda}{n} \,\sum_{j=1}^k jp_{n-j} \,.$$ Kostadinova and Minkova published the following expression for the pmf, in terms of combinatorial sums (Theorem 1 in [@KostadinovaMinkova2013], omitting the prefactor of $e^{-k\lambda}$). 
$$\label{eq:KM_Thm1} \begin{split} p_0 &= 1 \,, \\ p_n &= \sum_{j=1}^n \binom{n-1}{j-1}\,\frac{\lambda^j}{j!} \qquad (n=1,2,\dots,k) \,, \\ p_n &= \sum_{j=1}^n \binom{n-1}{j-1}\,\frac{\lambda^j}{j!} \;\;-\;\; \sum_{i=1}^\ell (-1)^{i-1}\,\frac{\lambda^i}{i!} \sum_{j=0}^{n-i(k+1)} \binom{n - i(k+1) +i-1}{j+i-1}\,\frac{\lambda^j}{j!} \\ &\qquad\qquad (n = \ell(k+1)+m,\, m=0,1,\dots,k,\, \ell=1,2,\dots,\infty) \,. \end{split}$$ Note the following. 1. The first element is just the single number $p_0=1$. 2. The block $n\in[1,k]$ has $k$ elements and is special in that there are no subtractions. 3. The tail piece for $n>k$ is composed of blocks of $k+1$ elements each, for $\ell=1,2,\dots$. 4. Numerical computations confirm eqs. [\[eq:KM_rechk\]](#eq:KM_rechk){reference-type="eqref" reference="eq:KM_rechk"} and [\[eq:KM_Thm1\]](#eq:KM_Thm1){reference-type="eqref" reference="eq:KM_Thm1"} yield equal values for $p_n$. ## [\[sec:validation\]]{#sec:validation label="sec:validation"}Validation check of combinatorial sums for the probability mass function Let us validate Kostadinova and Minkova's expression for the scaled pmf in eq. [\[eq:KM_Thm1\]](#eq:KM_Thm1){reference-type="eqref" reference="eq:KM_Thm1"} for $k=2$ and a few values of $n$. For $k=2$ the expression for $p_n$ is simple, for all $n\ge1$. $$\label{eq:pmf_k2} \begin{split} p_n &= \sum_{j=0}^{\lfloor(n/2)\rfloor} \frac{\lambda^{n-j}}{(n-2j)!j!} \\ &= \frac{\lambda^n}{n!} +\frac{\lambda^{n-1}}{(n-2)!1!} +\frac{\lambda^{n-2}}{(n-4)!2!} +\dots +\frac{\lambda^{n-\lfloor(n/2)\rfloor}}{\lfloor(n/2)\rfloor!} \,. \end{split}$$ The first few polynomials are as follows. [\[eq:poly_k2\]]{#eq:poly_k2 label="eq:poly_k2"} $$\begin{aligned} p_0 &= 1 \,, \\ p_1 &= \lambda \,, \\ p_2 &= \frac{\lambda^2}{2!} + \frac{\lambda}{0!1!} \,, \\ p_3 &= \frac{\lambda^3}{3!} + \frac{\lambda^2}{1!1!} \,, \\ p_4 &= \frac{\lambda^4}{4!} + \frac{\lambda^3}{2!1!} + \frac{\lambda^2}{0!2!} \,, \\ p_5 &= \frac{\lambda^5}{5!} + \frac{\lambda^4}{3!1!} + \frac{\lambda^3}{1!2!} \,, \\ p_6 &= \frac{\lambda^6}{6!} + \frac{\lambda^5}{4!1!} + \frac{\lambda^4}{2!2!} + \frac{\lambda^3}{0!3!} \,, \\ p_7 &= \frac{\lambda^7}{7!} + \frac{\lambda^6}{5!1!} + \frac{\lambda^5}{3!2!} + \frac{\lambda^4}{1!3!} \,, \\ p_8 &= \frac{\lambda^8}{8!} + \frac{\lambda^7}{6!1!} + \frac{\lambda^6}{4!2!} + \frac{\lambda^5}{2!3!} + \frac{\lambda^4}{0!4!} \,.\end{aligned}$$ Let us employ eq. [\[eq:KM_Thm1\]](#eq:KM_Thm1){reference-type="eqref" reference="eq:KM_Thm1"} and compare with the expressions in eq. [\[eq:poly_k2\]](#eq:poly_k2){reference-type="eqref" reference="eq:poly_k2"}. 1. Case $n=1$: $$\begin{split} p_1 &= \sum_{j=1}^1 \binom{1-1}{j-1}\,\frac{\lambda^j}{j!} = \lambda \,. \end{split}$$ 2. Case $n=2$: $$\begin{split} p_2 &= \sum_{j=1}^2 \binom{2-1}{j-1}\,\frac{\lambda^j}{j!} = \binom{1}{0}\lambda +\binom{1}{1}\frac{\lambda^2}{2!} = \frac{\lambda}{0!1!} +\frac{\lambda^2}{2!0!} \,. \end{split}$$ 3. Case $n=3$: $$\begin{split} p_3 &= \sum_{j=1}^3 \binom{2}{j-1}\,\frac{\lambda^j}{j!} \;-\; \sum_{i=1}^1 (-1)^{i-1}\,\frac{\lambda^i}{i!} \sum_{j=0}^{3-3i} \binom{3 - 3i +i-1}{j+i-1}\,\frac{\lambda^j}{j!} \\ &= \binom{2}{0}\lambda +\binom{2}{1}\frac{\lambda^2}{2!} +\binom{2}{2}\frac{\lambda^3}{3!} - \lambda \\ &= \frac{\lambda^2}{1!1!} +\frac{\lambda^3}{3!0!} \,. \end{split}$$ 4. 
Case $n=4$: $$\begin{split} p_4 &= \sum_{j=1}^4 \binom{3}{j-1}\,\frac{\lambda^j}{j!} \;-\; \sum_{i=1}^1 (-1)^{i-1}\,\frac{\lambda^i}{i!} \sum_{j=0}^{4-3i} \binom{4 - 3i +i-1}{j+i-1}\,\frac{\lambda^j}{j!} \\ &= \binom{3}{0}\lambda +\binom{3}{1}\frac{\lambda^2}{2!} +\binom{3}{2}\frac{\lambda^3}{3!} +\binom{3}{3}\frac{\lambda^4}{4!} -\lambda \sum_{j=0}^{1} \binom{1}{j}\,\frac{\lambda^j}{j!} \\ &= \lambda + \frac{3\lambda^2}{0!2!} +\frac{\lambda^3}{2!1!} +\frac{\lambda^4}{4!0!} -\lambda (1 + \lambda) \\ &= \frac{\lambda^2}{0!2!} +\frac{\lambda^3}{2!1!} +\frac{\lambda^4}{4!0!} \,. \end{split}$$ 5. Case $n=5$: $$\begin{split} p_5 &= \sum_{j=1}^5 \binom{4}{j-1}\,\frac{\lambda^j}{j!} \;-\; \sum_{i=1}^1 (-1)^{i-1}\,\frac{\lambda^i}{i!} \sum_{j=0}^{5-3i} \binom{5 - 3i +i-1}{j+i-1}\,\frac{\lambda^j}{j!} \\ &= \binom{4}{0}\lambda +\binom{4}{1}\frac{\lambda^2}{2!} +\binom{4}{2}\frac{\lambda^3}{3!} +\binom{4}{3}\frac{\lambda^4}{4!} +\binom{4}{4}\frac{\lambda^5}{5!} -\lambda \sum_{j=0}^{2} \binom{2}{j}\,\frac{\lambda^j}{j!} \\ &= \lambda + \frac{4\lambda^2}{2!} +\frac{6\lambda^3}{3!} +\frac{4\lambda^4}{4!} +\frac{\lambda^5}{5!} -\lambda \Bigl(1 + 2\lambda +\frac{\lambda^2}{2!}\Bigr) \\ &= \frac{\lambda^3}{1!2!} +\frac{\lambda^4}{3!1!} +\frac{\lambda^5}{5!0!} \,. \end{split}$$ 6. Case $n=6$: $$\begin{split} p_6 &= \sum_{j=1}^6 \binom{5}{j-1}\,\frac{\lambda^j}{j!} \;-\; \sum_{i=1}^2 (-1)^{i-1}\,\frac{\lambda^i}{i!} \sum_{j=0}^{6-3i} \binom{6 - 3i +i-1}{j+i-1}\,\frac{\lambda^j}{j!} \\ &= \binom{5}{0}\lambda +\binom{5}{1}\frac{\lambda^2}{2!} +\binom{5}{2}\frac{\lambda^3}{3!} +\binom{5}{3}\frac{\lambda^4}{4!} +\binom{5}{4}\frac{\lambda^5}{5!} +\binom{5}{5}\frac{\lambda^6}{6!} \\ &\quad -\lambda \sum_{j=0}^{3} \binom{3}{j}\,\frac{\lambda^j}{j!} +\frac{\lambda^2}{2!} \sum_{j=0}^{0} \binom{1}{j+1}\,\frac{\lambda^j}{j!} \\ &= \lambda + \frac{5\lambda^2}{2!} +\frac{10\lambda^3}{3!} +\frac{10\lambda^4}{4!} +\frac{5\lambda^5}{5!} +\frac{\lambda^6}{6!} \\ &\quad -\lambda \Bigl(1 + 3\lambda +\frac{3\lambda^2}{2!} +\frac{\lambda^3}{3!}\Bigr) +\frac{\lambda^2}{2!} \\ &= \frac{\lambda^3}{0!3!} +\frac{\lambda^4}{2!2!} +\frac{\lambda^5}{4!1!} +\frac{\lambda^6}{6!0!} \,. \end{split}$$ 7. Case $n=7$: $$\begin{split} p_7 &= \sum_{j=1}^7 \binom{6}{j-1}\,\frac{\lambda^j}{j!} \;-\; \sum_{i=1}^2 (-1)^{i-1}\,\frac{\lambda^i}{i!} \sum_{j=0}^{7-3i} \binom{7 - 3i +i-1}{j+i-1}\,\frac{\lambda^j}{j!} \\ &= \binom{6}{0}\lambda +\binom{6}{1}\frac{\lambda^2}{2!} +\binom{6}{2}\frac{\lambda^3}{3!} +\binom{6}{3}\frac{\lambda^4}{4!} +\binom{6}{4}\frac{\lambda^5}{5!} +\binom{6}{5}\frac{\lambda^6}{6!} +\binom{6}{6}\frac{\lambda^7}{7!} \\ &\quad -\lambda \sum_{j=0}^{4} \binom{4}{j}\,\frac{\lambda^j}{j!} +\frac{\lambda^2}{2!} \sum_{j=0}^{1} \binom{2}{j+1}\,\frac{\lambda^j}{j!} \\ &= \lambda + \frac{6\lambda^2}{2!} +\frac{15\lambda^3}{3!} +\frac{20\lambda^4}{4!} +\frac{15\lambda^5}{5!} +\frac{6\lambda^6}{6!} +\frac{\lambda^7}{7!} \\ &\quad -\lambda \Bigl(1 + 4\lambda +\frac{6\lambda^2}{2!} +\frac{4\lambda^3}{3!} +\frac{\lambda^4}{4!}\Bigr) +\frac{\lambda^2}{2!} (2 +\lambda) \\ &= \frac{\lambda^4}{1!3!} +\frac{\lambda^5}{3!2!} +\frac{\lambda^6}{5!1!} +\frac{\lambda^7}{7!0!} \,. \end{split}$$ 8. 
Case $n=8$: $$\begin{split} p_8 &= \sum_{j=1}^8 \binom{7}{j-1}\,\frac{\lambda^j}{j!} \;-\; \sum_{i=1}^2 (-1)^{i-1}\,\frac{\lambda^i}{i!} \sum_{j=0}^{8-3i} \binom{8 - 3i +i-1}{j+i-1}\,\frac{\lambda^j}{j!} \\ &= \binom{7}{0}\lambda +\binom{7}{1}\frac{\lambda^2}{2!} +\binom{7}{2}\frac{\lambda^3}{3!} +\binom{7}{3}\frac{\lambda^4}{4!} +\binom{7}{4}\frac{\lambda^5}{5!} +\binom{7}{5}\frac{\lambda^6}{6!} +\binom{7}{6}\frac{\lambda^7}{7!} +\binom{7}{7}\frac{\lambda^8}{8!} \\ &\quad -\lambda \sum_{j=0}^{5} \binom{5}{j}\,\frac{\lambda^j}{j!} +\frac{\lambda^2}{2!} \sum_{j=0}^{2} \binom{3}{j+1}\,\frac{\lambda^j}{j!} \\ &= \lambda + \frac{7\lambda^2}{2!} +\frac{21\lambda^3}{3!} +\frac{35\lambda^4}{4!} +\frac{35\lambda^5}{5!} +\frac{21\lambda^6}{6!} +\frac{7\lambda^7}{7!} +\frac{\lambda^8}{8!} \\ &\quad -\lambda \Bigl(1 + 5\lambda +\frac{10\lambda^2}{2!} +\frac{10\lambda^3}{3!} +\frac{5\lambda^4}{4!} +\frac{\lambda^5}{5!}\Bigr) +\frac{\lambda^2}{2!} \Bigl(3 +3\lambda +\frac{\lambda^2}{2!}\Bigr) \\ &= \frac{\lambda^4}{0!4!} +\frac{\lambda^5}{2!3!} +\frac{\lambda^6}{4!2!} +\frac{\lambda^7}{6!1!} +\frac{\lambda^8}{8!0!} \,. \end{split}$$ # [\[sec:block1k\]]{#sec:block1k label="sec:block1k"}Absolute monotonicity of the probability mass function for $n\in[1,k]$ It was proved in Lemma 1 in [@KwonPhilippou] that for fixed $k\ge2$ and all $\lambda>0$, the sequence $\{p_1,\dots,p_k\}$ is strictly increasing. The following expression is from eq. (3.3) in [@Mane_Poisson_k_CC23_7]. $$\label{eq:KP_incseq_pn} \lambda = p_1 < p_2 < \dots < p_k \,.$$ That is to say, $p_n - p_{n-1} > 0$ for all $n\in[2,k]$. We shall show that, in addition, *all* the finite differences are strictly positive for fixed $k\ge2$ and $\lambda>0$ and a suitable subinterval of $n\in[1,k]$ (see below). The finite differences are analogs of derivatives, but for discrete (integer) values of $n$. They are centered finite differences, for $m=1,2,\dots$ (with the formal definition $\Delta_0(n) = p_n$). $$\label{eq:Delta_m_def} \Delta_m(n) = \sum_{j=0}^m \binom{m}{j} (-1)^{j-1} p_{n-j} \,.$$ The first few examples are as follows. [\[eq:fd_1\_2\]]{#eq:fd_1_2 label="eq:fd_1_2"} $$\begin{aligned} \Delta_1(n) &= p_n - p_{n-1} \,, \\ \Delta_2(n) &= p_n - 2p_{n-1} + p_{n-2} \,.\end{aligned}$$ In this section, we require only the expression for $p_n$ for $n\in[1,k]$ in eq. [\[eq:KM_Thm1\]](#eq:KM_Thm1){reference-type="eqref" reference="eq:KM_Thm1"}. Let us examine a few simple cases, to clarify ideas. First recall the Pascal triangle recurrence for the binomial coefficients. $$\label{eq:binom_recurrence} \binom{n}{j} = \binom{n-1}{j} + \binom{n-1}{j-1} \,.$$ For the first finite difference $\Delta_1(n)$ we obtain $$\begin{split} p_n -p_{n-1} &= \biggl[\,\sum_{j=1}^n \binom{n-1}{j-1}\,\frac{\lambda^j}{j!}\,\biggr] - \biggl[\,\sum_{j=1}^{n-1} \binom{n-2}{j-1}\,\frac{\lambda^j}{j!}\,\biggr] \\ &= \frac{\lambda^n}{n!} + \sum_{j=1}^{n-1} \biggl[\,\binom{n-1}{j-1} - \binom{n-2}{j-1}\,\biggr]\,\frac{\lambda^j}{j!} %\\ %&= \frac{\lambda^n}{n!} + \sum_{j=2}^{n-1} \biggl[\,\binom{n-1}{j-1} - \binom{n-2}{j-1}\,\biggr]\,\frac{\lambda^j}{j!} \\ &= \frac{\lambda^n}{n!} + \sum_{j=2}^{n-1} \binom{n-2}{j-2}\,\frac{\lambda^j}{j!} \\ &= \sum_{j=2}^n \binom{n-2}{j-2}\,\frac{\lambda^j}{j!} \,. \end{split}$$ Hence $\Delta_1(n) > 0$. The lower limit of the sum was increased from $j=1$ to $j=2$ because the binomial coefficients both equal $1$ for $j=1$ and cancel to zero. We require $n\in[2,k]$ to justify the derivation. 
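A quick numerical cross-check of this closed form is sketched below (an illustration only; it is not used in the proofs). It computes $p_0,\dots,p_k$ from the recurrence in eq. [\[eq:KM_rechk\]](#eq:KM_rechk){reference-type="eqref" reference="eq:KM_rechk"} and compares the first differences on $n\in[2,k]$ with the sum just derived; the test values $k=7$ and $\lambda=0.9$ are arbitrary choices for this sketch.

```python
# Cross-check (illustration only): p_n from the recurrence
# n*p_n = lambda * (p_{n-1} + 2 p_{n-2} + ... + k p_{n-k}), dropping negative indices,
# compared against the closed form for Delta_1(n) on n in [2, k].
from math import comb, factorial

def scaled_pmf(k, lam, n_max):
    """Return [p_0, ..., p_{n_max}] for the scaled pmf h_k(n; lambda)."""
    p = [1.0]
    for n in range(1, n_max + 1):
        p.append(lam * sum(j * p[n - j] for j in range(1, k + 1) if n >= j) / n)
    return p

k, lam = 7, 0.9            # arbitrary test values for this sketch
p = scaled_pmf(k, lam, k)
for n in range(2, k + 1):
    delta1 = sum(comb(n - 2, j - 2) * lam**j / factorial(j) for j in range(2, n + 1))
    assert delta1 > 0 and abs((p[n] - p[n - 1]) - delta1) < 1e-12
print("Delta_1(n) matches the closed form and is positive for n = 2, ..., k")
```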
For the second finite difference $\Delta_2(n)$ we obtain $$\begin{split} p_n -2p_{n-1} +p_{n-2} &= (p_n -p_{n-1}) - (p_{n-1}-p_{n-2}) \\ &= \biggl[\,\sum_{j=2}^n \binom{n-2}{j-2}\,\frac{\lambda^j}{j!}\,\biggr] - \biggl[\,\sum_{j=2}^{n-1} \binom{n-3}{j-2}\,\frac{\lambda^j}{j!}\,\biggr] \\ &= \frac{\lambda^n}{n!} + \sum_{j=2}^{n-1} \biggl[\,\binom{n-2}{j-2} - \binom{n-3}{j-2}\,\biggr]\,\frac{\lambda^j}{j!} \\ &= \frac{\lambda^n}{n!} + \sum_{j=3}^{n-1} \binom{n-3}{j-3}\,\frac{\lambda^j}{j!} \\ &= \sum_{j=3}^n \binom{n-3}{j-3}\,\frac{\lambda^j}{j!} \,. \end{split}$$ Hence $\Delta_2(n) > 0$. The lower limit of the sum was increased from $j=2$ to $j=3$ because the binomial coefficients both equal $1$ for $j=2$ and cancel to zero. We require $n\in[3,k]$ to justify the derivation. For the $m^{th}$ finite difference one can discern the pattern and surmise the answer. **Proposition 2**. *For fixed $k\ge2$ and $\lambda>0$, the $m^{th}$ finite difference $\Delta_m(n)$ (see eq. [\[eq:Delta_m\_def\]](#eq:Delta_m_def){reference-type="eqref" reference="eq:Delta_m_def"}) is given as follows and is strictly positive for all $m\in[1,k-1]$ and all $n\in[m+1,k]$. $$\label{eq:monotone_m} \Delta_m(n) = \sum_{j=m+1}^n \binom{n-m-1}{j-m-1}\,\frac{\lambda^j}{j!} \;\; >\;\; 0\,.$$* *Proof.* We employ an induction argument on $m$. Assume eq. [\[eq:monotone_m\]](#eq:monotone_m){reference-type="eqref" reference="eq:monotone_m"} to be true for $m-1$ and $n\in[m,k]$ (where $k > m$ and $\lambda>0$ are fixed). Then for $n\in[m+1,k]$ (which is a nonempty interval because $k>m$) $$\begin{split} \sum_{j=0}^m \binom{m}{j} (-1)^{j-1} p_{n-j} &= \biggl[\,\sum_{j=m}^n \binom{n-m}{j-m}\,\frac{\lambda^j}{j!}\,\biggr] - \biggl[\,\sum_{j=m}^{n-1} \binom{n-m-1}{j-m}\,\frac{\lambda^j}{j!}\,\biggr] \\ &= \frac{\lambda^n}{n!} + \sum_{j=m}^{n-1} \biggl[\,\binom{n-m}{j-m} - \binom{n-m-1}{j-m}\,\biggr]\,\frac{\lambda^j}{j!} \\ &= \frac{\lambda^n}{n!} + \sum_{j=m+1}^{n-1} \binom{n-m-1}{j-m-1}\,\frac{\lambda^j}{j!} \\ &= \sum_{j=m+1}^n \binom{n-m-1}{j-m-1}\,\frac{\lambda^j}{j!} \,. \end{split}$$ We know the result to be true for $m=1$, hence the proof follows by induction on $m$. ◻ Feller [@Feller] defined a "completely monotonic function" $f(x)$ as one with the following property. **Definition 3**. *A function $f:(0,\infty)\to[0,\infty)$ is completely monotonic if $$(-1)^nf^{(n)}(x) \ge 0$$ for all $n=0,1,2\dots$.* Note the following: 1. Feller's work was in connection with probability distributions. 2. I suspect Feller had in mind the negative exponential $e^{-x}$ is completely monotonic for $x\in(0,\infty)$. 3. A function such as $e^x$ has all positive derivatives but is technically not completely monotonic. Nevertheless, the underlying idea for us is that all the derivatives are positive (or zero). We also replace derivatives with finite differences. **Remark 4**. *We can perhaps say that the sequence $\{p_1,\dots,p_k\}$ is "absolutely monotonic" for all finite differences $m=1,\dots,k-1$, for fixed $k\ge2$ and $\lambda>0$.* Note that the term "absolutely monotonic" is *not* standard mathematical terminology. # [\[sec:block_kp1_2k\]]{#sec:block_kp1_2k label="sec:block_kp1_2k"}Properties of the probability mass function for $n\in[k+1,2k]$ Based on numerical calculations, it was noted in [@Mane_Poisson_k_CC23_6] that for fixed $k\ge2$ and sufficiently small $\lambda>0$, the pmf of the Poisson distribution of order $k$ decreases monotonically for all $n \ge k$. 
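This observation is easy to reproduce directly from the recurrence in eq. [\[eq:KM_rechk\]](#eq:KM_rechk){reference-type="eqref" reference="eq:KM_rechk"}. The sketch below (illustration only) repeats the small recurrence helper so that it is self-contained; the choices $k=10$ with $\lambda=0.2$ and $\lambda=0.3$ match the discussion of Fig. [1](#fig:graph_hist_k10_nonmonotonic){reference-type="ref" reference="fig:graph_hist_k10_nonmonotonic"} below, and the cutoff $n_{\max}=60$ is an arbitrary choice.

```python
# Illustration of the quoted observation (not part of the proofs).
def scaled_pmf(k, lam, n_max):
    """h_k(n; lambda) for n = 0..n_max from n*p_n = lambda * sum_{j=1}^{k} j*p_{n-j}."""
    p = [1.0]
    for n in range(1, n_max + 1):
        p.append(lam * sum(j * p[n - j] for j in range(1, k + 1) if n >= j) / n)
    return p

def first_increase_from_k(k, lam, n_max=60):
    """Smallest n >= k with p_{n+1} >= p_n, or None if p_n decreases on [k, n_max]."""
    p = scaled_pmf(k, lam, n_max)
    return next((n for n in range(k, n_max) if p[n + 1] >= p[n]), None)

print(first_increase_from_k(10, 0.2))  # None: the pmf decreases for every tested n >= k
print(first_increase_from_k(10, 0.3))  # an index in [k+1, 2k]: the decrease fails in that block
```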
An analytical proof of this observation was published in [@Mane_Poisson_k_CC23_7]. The proof in [@Mane_Poisson_k_CC23_7] employed induction on $n$. **Proposition 5**. *(Restatement of Prop. (4.1) in [@Mane_Poisson_k_CC23_7]) For fixed $k\ge2$, and $n \ge 2k$, suppose that there exists a fixed $\lambda>0$ such that the block of $k+1$ contiguous elements $\{p_{n-k},p_{n-k+1},\dots,p_n\}$ form a strictly decreasing sequence $p_{n-k} > p_{n-k+1} > \dots > p_n$. Then $p_{n+1}-p_n < 0$, i.e. the sequence can be extended to include $p_n > p_{n+1}$.* Note the following. 1. It was shown in [@Mane_Poisson_k_CC23_7] that there exists a value $\lambda>0$ such that the block of elements $\{p_k,\dots,p_{2k}\}$ is a strictly decreasing sequence of $k+1$ elements. 2. It was also shown in [@KwonPhilippou] that $p_k > p_{k+1}$ for $0 < \lambda \le r_k$. Recall $r_k$ is the value of $\lambda$ such that $p_k=1$. 3. An improved upper bound on $\lambda$ (such that $p_k > p_{k+1}$) was derived in [@Mane_Poisson_k_CC23_7]. Recall $t_k$ is the value of $\lambda$ such that $p_k=2$. Then $p_k > p_{k+1}$ for $0 < \lambda \le t_k$. Note that $t_k$ is a sufficient but not necessary upper bound on the value of $\lambda$. Hence the conditions for $p_k > p_{k+1}$ have been analyzed in detail. Our focus here is on the block of $k$ elements $\{p_{k+1},\dots,p_{2k}\}$. As a simple starting exercise, consider $k=3$ (for $k\le2$ there are too few points). For $k=3$, the polynomials in the block $n\in[k+1,2k]$ are $$\begin{aligned} p_4 = h_3(4;\lambda) &= \frac{\lambda^4}{4!} +\frac{\lambda^3}{2!} +\frac{3\lambda^2}{2} \,, \\ p_5 = h_3(5;\lambda) &= \frac{\lambda^5}{5!} +\frac{\lambda^4}{3!} +\lambda^3 +\lambda^2 \,, \\ p_6 = h_3(6;\lambda) &= \frac{\lambda^6}{6!} +\frac{\lambda^5}{4!} +\frac{5\lambda^4}{12} +\frac{7\lambda^3}{6} +\frac{\lambda^2}{2} \,.\end{aligned}$$ The difference $p_5-p_4$ is $$\begin{split} p_5-p_4 &= \frac{\lambda^5}{120} +\frac{\lambda^4}{8} +\frac{\lambda^3}{2} -\frac{\lambda^2}{2} \\ &= \frac{\lambda^2}{2}\biggl(\frac{\lambda^3}{60} +\frac{\lambda^2}{4} +\lambda -1\biggr) \,. \end{split}$$ This is negative for $\lambda\in(0,\frac12)$. The positive real root for $p_5-p_4=0$ is $\lambda\simeq 0.82187688$. Next, the difference $p_6-p_5$ is $$\begin{split} p_6-p_5 &= \frac{\lambda^6}{6!} +\frac{\lambda^5}{30} +\frac{\lambda^4}{4} +\frac{\lambda^3}{6} -\frac{\lambda^2}{2} \\ &= \frac{\lambda^2}{2}\biggl(\frac{\lambda^4}{360} +\frac{\lambda^3}{15} +\frac{\lambda^2}{2} +\frac{\lambda}{3} -1\biggr) \,. \end{split}$$ This is negative for $\lambda\in(0,1)$. The positive real root for $p_6-p_5=0$ is $\lambda \simeq 1.061200075$. In general, $p_{n+1} < p_n$ for all $n\in[k,2k-1]$ if the first pair is decreasing: $p_{k+2} < p_{k+1}$. We shall prove this below, for all $k\ge2$. Next consider the convexity (second finite difference $\Delta_2(n)$), for $k=3$ and $n=6$: (We require $k\ge3$ because for $k=2$ there are too few points in the interval $n\in[k+1,2k]$.) $$\begin{split} p_6+p_4 -2p_5 &= \frac{\lambda^6}{720} +\frac{\lambda^5}{40} +\frac{7\lambda^4}{24} -\frac{\lambda^3}{3} \\ &= \lambda^3\biggl(\frac{\lambda^3}{720} +\frac{\lambda^2}{40} +\frac{\lambda}{8} -\frac13\biggr) \,. \end{split}$$ This is negative for small $\lambda>0$ (concave) but eventually positive (convex). The positive real root is $\lambda \simeq 1.88318444$. Hence, unlike the sequence $\{p_1,\dots,p_k\}$, the sequence $\{p_{k+1},\dots,p_{2k}\}$ is not always strictly increasing or decreasing, nor always convex or concave. Fig. 
[1](#fig:graph_hist_k10_nonmonotonic){reference-type="ref" reference="fig:graph_hist_k10_nonmonotonic"} displays a plot of the scaled pmf $h_k(n;\lambda)$ of the Poisson distribution of order $10$ and $\lambda=0.3$ (circles) and $\lambda=0.2$ (triangles). 1. For both $\lambda=0.2$ and $0.3$, the scaled pmf increases strictly for $n\in[1,k]$ (see Sec. [\[sec:block1k\]](#sec:block1k){reference-type="ref" reference="sec:block1k"}). 2. For $\lambda=0.2$, the scaled pmf decreases strictly for $n\in[k+1,2k]$ but for $\lambda=0.3$ it exhibits a local maximum (which is *not* a mode). 3. Also for both $\lambda=0.2$ and $0.3$, the scaled pmf is concave for $n\in[k+1,2k]$. 4. For both $\lambda=0.2$ and $0.3$, notice the large drop in value from $p_k$ to $p_{k+1}$. 5. There is also a noticeable drop in value from $p_{2k}$ to $p_{2k+1}$, although (much?) smaller in magnitude. 6. The smallest power of $\lambda$ in $p_n$ for $n\in[(i-1)k+1,ik]$ is $\lambda^i$, for $i=1,2,\dots$. Hence the smallest power of $\lambda$ changes from $\lambda$ in $p_k$ to $\lambda^2$ in $p_{k+1}$ and from $\lambda^2$ in $p_{2k}$ to $\lambda^3$ in $p_{2k+1}$. There are probably similar drops in value between $p_{ik}$ and $p_{ik+1}$ for all $i=1,2,\dots$. However, this cannot be a general rule because for sufficiently large values of $\lambda$, the pmf does not decrease monotonically for $n\ge k$. In the rest of this section, we quantify the behavior of the sequence $\{p_{k+1},\dots,p_{2k}\}$ in more detail. We employ the expression for $p_n$ by Kostadinova and Minkova in eq. [\[eq:KM_Thm1\]](#eq:KM_Thm1){reference-type="eqref" reference="eq:KM_Thm1"}. For $n\in[k+1,2k]$ the expression is $$\label{eq:KM_block_kp1_2k} \begin{split} p_n = \sum_{j=1}^n \binom{n-1}{j-1}\,\frac{\lambda^j}{j!} \;-\; \lambda \sum_{j=0}^{n-k-1} \binom{n -k-1}{j}\,\frac{\lambda^j}{j!} \,. \end{split}$$ Actually eq. [\[eq:KM_block_kp1_2k\]](#eq:KM_block_kp1_2k){reference-type="eqref" reference="eq:KM_block_kp1_2k"} is valid also for $n=2k+1$, but we treat only the block $n\in[k+1,2k]$ here. The lowest power of $\lambda$ in the block $n\in[k+1,2k]$ is $\lambda^2$. The first finite difference $\Delta_1(n)$ for $n\in[k+2,2k]$ is $$\begin{split} p_n - p_{n-1} &= \sum_{j=1}^n \binom{n-1}{j-1}\,\frac{\lambda^j}{j!} \;-\; \lambda \sum_{j=0}^{n-k-1} \binom{n-k-1}{j}\,\frac{\lambda^j}{j!} \\ &\qquad -\sum_{j=1}^{n-1} \binom{n-2}{j-1}\,\frac{\lambda^j}{j!} \;+\; \lambda \sum_{j=0}^{n-k-2} \binom{n-k-2}{j}\,\frac{\lambda^j}{j!} \\ &= \frac{\lambda^n}{n!} + \sum_{j=1}^{n-1} \biggl[\,\binom{n-1}{j-1} - \binom{n-2}{j-1}\,\biggr]\,\frac{\lambda^j}{j!} \\ &\qquad -\frac{\lambda^{n-k}}{(n-k-1)!} -\lambda\,\sum_{j=0}^{n-k-2} \biggl[\,\binom{n-k-1}{j} - \binom{n-k-2}{j}\,\biggr]\,\frac{\lambda^j}{j!} \\ &= \frac{\lambda^n}{n!} + \sum_{j=2}^{n-1} \binom{n-2}{j-2}\,\frac{\lambda^j}{j!} -\frac{\lambda^{n-k}}{(n-k-1)!} -\lambda\,\sum_{j=1}^{n-k-2} \binom{n-k-2}{j-1}\,\frac{\lambda^j}{j!} \\ &= \sum_{j=2}^n \binom{n-2}{j-2}\,\frac{\lambda^j}{j!} -\lambda\,\sum_{j=1}^{n-k-1} \binom{n-k-2}{j-1}\,\frac{\lambda^j}{j!} \,. 
\end{split}$$ Writing out the lowest powers of $\lambda$ yields
$$\begin{split} p_n -p_{n-1} &= \frac{\lambda^2}{2!} -\frac{\lambda^2}{1!} +(n-2)\frac{\lambda^3}{3!} -(n-k-2)\frac{\lambda^3}{2!} +\dots \\ &= -\frac{\lambda^2}{2} +(n-2-3n+3k+6)\frac{\lambda^3}{3!} +\dots \\ &= -\frac{\lambda^2}{2} +(3k+4-2n)\frac{\lambda^3}{3!} +\dots \end{split}$$
For fixed $k\ge2$ and sufficiently small $\lambda>0$, the sequence $\{p_{k+1},\dots,p_{2k}\}$ is *strictly decreasing* in $n$. (This is a simpler and better proof of the decreasing sequence property than that in [@Mane_Poisson_k_CC23_7].) The second difference $\Delta_2(n)$ is given by a similar induction argument to that for the block $n\in[1,k]$.
$$\begin{split} p_n -2p_{n-1} +p_{n-2} &= \sum_{j=3}^n \binom{n-3}{j-3}\,\frac{\lambda^j}{j!} -\lambda\,\sum_{j=2}^{n-k-1} \binom{n-k-3}{j-2}\,\frac{\lambda^j}{j!} \,. \end{split}$$
This is valid for all $n\in[k+3,2k]$. Writing out the lowest powers of $\lambda$ yields
$$\begin{split} p_n -2p_{n-1} +p_{n-2} &= \frac{\lambda^3}{3!} -\frac{\lambda^3}{2!} +(n-3)\frac{\lambda^4}{4!} -(n-k-3)\frac{\lambda^4}{3!} +\dots \\ &= -\frac{\lambda^3}{3} +(n-3-4n+4k+12)\frac{\lambda^4}{4!} +\dots \\ &= -\frac{\lambda^3}{3} +(4k+9-3n)\frac{\lambda^4}{4!} +\dots \end{split}$$
For fixed $k\ge2$ and sufficiently small $\lambda>0$, the sequence $\{p_{k+1},\dots,p_{2k}\}$ is *concave* in $n$. Hence for fixed $k\ge2$ and sufficiently small $\lambda>0$, if $p_{k+2} < p_{k+1}$, then $p_{n+1} < p_n$ for all $n\in[k+1,2k]$. 1. The concavity property suggests that if $p_{k+2} < p_{k+1}$, then $p_{n+1} < p_n$ for all $n\in[k+1,2k]$. 2. If, in addition, $p_{k+1} < p_k$, then the pmf decreases strictly for all $n \ge k$ (the proof was given in [@Mane_Poisson_k_CC23_7]). It was shown in [@Mane_Poisson_k_CC23_7] that $p_{k+1} < p_k$ if $\lambda \le t_k$, but this is a sufficient but not necessary upper bound on $\lambda$. It is still necessary to determine a quantitative upper bound for $\lambda$ such that $p_{n+1} < p_n$ for all $n\in[k+1,2k]$. The evidence suggests that the condition $p_{k+1}=p_{k+2}$ yields the supremum for $\lambda$.
**Definition 6**. *Let $\lambda_{k+1,k+2}$ denote the (unique) positive real root of the equation $p_{k+2}-p_{k+1}=0$.*
**Conjecture 7**. *For fixed $k\ge2$, the value of $\lambda_k^{\rm dec}$, the supremum for $\lambda$ such that the pmf of the Poisson distribution of order $k$ decreases strictly for all $n\ge k$, is given by the positive real root of the equation $p_{k+2}-p_{k+1}=0$, denoted by $\lambda_{k+1,k+2}$: $$\label{eq:conj_lambda_k_dec} \lambda_k^{\rm dec} = \lambda_{k+1,k+2} \,.$$ Then for fixed $k\ge2$ and $\lambda\in(0,\lambda_{k+1,k+2})$, $p_{n+1} < p_n$ for all $n\ge k$.*
**Remark 8**. *It was shown in [@Mane_Poisson_k_CC23_7] that $p_{k+1} < p_k$ if $\lambda \le t_k$, but this is a sufficient but not necessary bound. Recall $t_k$ is the value of $\lambda$ such that $p_k=2$. Numerical evidence suggests that $\lambda < \lambda_{k+1,k+2}$ is the applicable supremum to employ in Conjecture [Conjecture 7](#conj:upbound_monotone_decrease){reference-type="ref" reference="conj:upbound_monotone_decrease"}. For example, for $k=2$, $t_k$ is the positive real root of the equation $\frac12\lambda^2 +\lambda - 2 = 0$ and its value is $t_k=\sqrt{5}-1 \simeq 1.23606$. Also $\lambda_{k+1,k+2}$ is the positive real root of the equation $p_4-p_3=\frac{1}{24}\lambda^4 +\frac13\lambda^3 -\frac12\lambda^2 = 0$, that is, of $\lambda^2+8\lambda-12=0$, and its value is $\lambda_{k+1,k+2}=2(\sqrt{7}-2) \simeq 1.2915026$. Fig.
[2](#fig:graph_hist_k2_pk1k2){reference-type="ref" reference="fig:graph_hist_k2_pk1k2"} displays a plot of the scaled pmf $h_k(n;\lambda)$ of the Poisson distribution of order $2$ and $\lambda=\lambda_{k+1,k+2}=2(\sqrt{7}-2) \simeq 1.2915026$. Observe that $p_k>2$ but even so, $p_{k+1} < p_k$. Observe also that $p_{k+1}=p_{k+2}$ and the pmf decreases monotonically for $n \ge k$, except for the one case $p_{k+1}=p_{k+2}$.* **Remark 9**. *For $k=3$, the condition $p_{k+1}=p_{k+2}$ is attained for $\lambda = -5 +\sqrt[3]{55-10\sqrt{29}} +\sqrt[3]{55+10\sqrt{29}} \simeq 0.82187688$. Fig. [3](#fig:graph_hist_k3_pk1k2){reference-type="ref" reference="fig:graph_hist_k3_pk1k2"} displays a plot of the scaled pmf $h_k(n;\lambda)$ of the Poisson distribution of order $3$ and $\lambda=0.82187688$. Observe that $p_{k+1}=p_{k+2}$ and the pmf decreases monotonically for $n \ge k$, except for the one case $p_{k+1}=p_{k+2}$. Observe also that $p_{k+1} < p_k$ but $p_k<2$. This goes to show that the condition $p_k<2$, i.e. $\lambda \le t_k$, is not relevant to determine a supremum value for $\lambda$ for the pmf to decrease monotonically for all $n \ge k$.* **Remark 10**. *Numerical calculations indicate eq. [\[eq:conj_lambda_k\_dec\]](#eq:conj_lambda_k_dec){reference-type="eqref" reference="eq:conj_lambda_k_dec"} yields the supremum for $\lambda$ for the pmf of the Poisson distribution of order $k$ to decrease strictly for all $n\ge k$. It remains to determine a precise expression for the value of $\lambda_{k+1,k+2}$. By definition, it is the positive real root of a polynomial equation, but that is a polynomial equation of degree $k$.* We can obtain a gross *overestimate* for $\lambda_k^{\rm dec}$ as follows. For the pmf to be strictly decreasing for all $n\ge k$, we must have $p_n<p_{n-1}$ for all $n \ge k+1$. Process eq. [\[eq:KM_rechk\]](#eq:KM_rechk){reference-type="eqref" reference="eq:KM_rechk"} using $p_n < p_j$ for $j < n$: $$\label{eq:gross_upbound_monotone} \begin{split} np_n &= \lambda\,(p_{n-1} +2p_{n-2} +\dots +kp_{n-k}) \\ &> \lambda p_n \,(1+\dots +k) \\ &=\lambda\frac{k(k+1)}{2}\, p_n \\ &=\kappa\lambda\, p_n \,. \end{split}$$ Cancel $p_n$ to deduce $\kappa\lambda < n$. We require $n-k\ge k$ to justify eq. [\[eq:gross_upbound_monotone\]](#eq:gross_upbound_monotone){reference-type="eqref" reference="eq:gross_upbound_monotone"}, so the smallest applicable value of $n$ is $2k$. Hence $$\lambda < \frac{2k}{\kappa} = \frac{4}{k+1} \,.$$ For $k=3$, the right-hand size equals $1$. We saw in Remark [Remark 9](#rem:upbound_k3){reference-type="ref" reference="rem:upbound_k3"} above that the correct upper bound is lower than this. See Fig. [3](#fig:graph_hist_k3_pk1k2){reference-type="ref" reference="fig:graph_hist_k3_pk1k2"}. **Proposition 11**. *For fixed $k\ge2$, an upper bound for $\lambda_k^{\rm dec}$, for the pmf of the Poisson distribution of order $k$ to decrease strictly for all $n\ge k$, is given by $$\label{eq:lambda_dec_upbound} \lambda_k^{\rm dec} < \frac{4}{k+1} \,.$$ This is a necessary but not sufficient upper bound. This upper bound shows that the value of $\lambda_k^{\rm dec}$ must decrease at least as $O(1/k)$ as $k$ increases.* # [\[sec:numest_lam_pk1k2\]]{#sec:numest_lam_pk1k2 label="sec:numest_lam_pk1k2"}Numerical estimate of supremum for monotonic decrease of the pmf for $n\ge k$ The value of $\lambda_{k+1,k+2}$ was computed numerically for $2 \le k \le 40000$. The inverse value $\lambda_{k+1,k+2}^{-1}$ is fitted remarkably well by a straight line. Fig. 
[4](#fig:graph_invroot_pk1k2){reference-type="ref" reference="fig:graph_invroot_pk1k2"} displays a plot of $\lambda_{k+1,k+2}^{-1}$ (dotted line) for $2 \le k \le 40000$. The dashed line is the straight line fit $\alpha k+\beta = 0.442972564\,k -0.113086$ and is visually indistinguishable. The coefficients of the straight line were obtained via a series of regression fits and are given as follows [\[eq:fit_param_lam_pk1k2\]]{#eq:fit_param_lam_pk1k2 label="eq:fit_param_lam_pk1k2"} $$\begin{aligned} \alpha &= \frac49 - 0.0015 +3\cdot10^{-5} -2\cdot10^{-6} +1\cdot10^{-7} +2\cdot10^{-8} -9\cdot10^{-10} &=& \phantom{-}0.442972564 \,, \\ \beta &= -0.1131 +2\cdot10^{-5} -6\cdot10^{-6} &=& -0.113086 \,.\end{aligned}$$ Fig. [5](#fig:graph_invroot_diff_pk1k2){reference-type="ref" reference="fig:graph_invroot_diff_pk1k2"} displays a plot of the difference $\alpha k +\beta - \lambda_{k+1,k+2}^{-1}$ for $1000 \le k \le 40000$. Note the following. 1. The difference increases as the value of $k$ increases, which is to be expected from a numerical calculation. 2. The maximum difference is $0.000282326$ and the minimum difference is $-0.000278234$, at the right-hand edge $k=40000$. From Fig. [4](#fig:graph_invroot_pk1k2){reference-type="ref" reference="fig:graph_invroot_pk1k2"}, $\lambda_{k+1,k+2}^{-1} \simeq 17718$ for $k=40000$, which yields a relative accuracy of $1.58\cdot10^{-8}$. 3. The worst relative accuracy of the fit is $0.003$ at $k=3$. 4. Significantly, the difference is *symmetric* around zero, i.e. the fit is *unbiased*: it is neither systematically too high nor too low. One can conjecture from the values of $\alpha$ and $\beta$ in eq. [\[eq:fit_param_lam_pk1k2\]](#eq:fit_param_lam_pk1k2){reference-type="eqref" reference="eq:fit_param_lam_pk1k2"} that the exact values are really $\alpha=\frac49$ and $\beta=-\frac19$, and the residuals are due to numerical precision errors. Hence asymptotically $\lambda_{k+1,k+2} \simeq 9/(4k-1)$. The numerical evidence suggests this is not quite correct, although it is a good approximation. Table [1](#tb:lam_pk1k2){reference-type="ref" reference="tb:lam_pk1k2"} tabulates the values of $\lambda_{k+1,k+2}$ and $9/(4k-1)$ and the difference $\lambda_{k+1,k+2} - 9/(4k-1)$ for $k=2,\dots,10$ and higher values. Observe that $\lambda_{k+1,k+2} > 9/(4k-1)$ in all cases and the difference decreases as the value of $k$ increases. Numerical calculations indicate that $\lambda_{k+1,k+2} > 9/(4k-1)$ for all $k\in[2,40000]$. Numerical calculations moreover indicate that for all $k\in[2,40000]$, the pmf decreases monotonically for $0 < \lambda < 9/(4k-1)$. **Conjecture 12**. *For fixed $k\ge2$, the pmf of the Poisson distribution of order $k$ decreases monotonically for all $n\ge k$ for $$\label{eq:suff_not_nec_upbound_monotone_decrease} 0 < \lambda < \frac{9}{4k-1} \,.$$ The value $9/(4k-1)$ is a sufficient, but not necessary, upper bound on the value of $\lambda$. It is a good approximation to the true supremum $\lambda_k^{\rm dec}$ for $k\gg1$.* **Remark 13**. *Increasing the value of $\lambda$ to $9.05/(4k-1)$ results in a value which is too large. For fixed $k\ge2$ and $\lambda=9.05/(4k-1)$, the pmf of the Poisson distribution of order $k$ is *not* decreasing for all $n\ge k$ (for all tested values $k\ge2$). 
This suggests that $9/(4k-1)$ is a good approximation (sufficient but not necessary) for the true supremum $\lambda_k^{\rm dec}$.* # [\[sec:conc\]]{#sec:conc label="sec:conc"}Conclusion This note focused on the properties of two blocks of elements of the probability mass function (pmf) of the Poisson distribution of order $k\ge2$. The first block was the elements $p_n$ for $n\in[1,k]$ and the second block was the elements $p_n$ for $n\in[k+1,2k]$. The first major goal of this note was to prove that the elements $p_n$ for $n\in[1,k]$ form an "absolutely monotonic sequence" by which is meant that all the finite differences of the sequence are positive. The second major goal was to analyze the properties of the elements $p_n$ for $n\in[k+1,2k]$. It was shown that for sufficiently small $\lambda>0$, the sequence is strictly decreasing and also concave. The purpose of the analysis was to help determine a supremum value for $\lambda$, such that the pmf of the Poisson distribution of order $k\ge2$ decreases strictly for all $n \ge k$. A conjectured criterion for the supremum was given in Conjecture [Conjecture 7](#conj:upbound_monotone_decrease){reference-type="ref" reference="conj:upbound_monotone_decrease"}. Numerical calculations indicate it is the optimal bound, i.e. the supremum. In addition, a simple expression was proposed, based on numerical calculation, which is sufficient (but not necessary) and is a good approximation for the supremum. 99 R. M. Adelson, "Compound Poisson Distributions" *Operational Research Quarterly **17**, 73--75 (1966).* Y. Kwon and A.N. Philippou, "The Modes of the Poisson Distribution of Order 3 and 4" *Entropy **25**, 699 (2023).* S.R. Mane, "Structure of the probability mass function of the Poisson distribution of order $k$" *arXiv:2309.13493 \[math.PR\] (2023).* S.R. Mane, "Analytical proofs for the properties of the probability mass function of the Poisson distribution of order $k$" *arXiv:2310.00827 \[math.PR\] (2023).* K.Y. Kostadinova and L.D. Minkova, "On the Poisson process of order $k$" *Pliska Stud. Math. Bulgar. **22**, 117--128 (2013).* C. Georghiou, A.N. Philippou and A. Saghafi, "On the Modes of the Poisson Distribution of Order $k$" *Fibonacci Quarterly **51**, 44--48 (2013).* W. Feller, "An Introduction to Probability Theory and its Applications," Third Edition, Wiley, New York (1968). S.R. Mane, "Asymptotic results for the Poisson distribution of order $k$" *arXiv:2309.05190 \[math.PR\] (2023).* S.R. Mane, "First double mode of the Poisson distribution of order $k$" *arXiv:2309.09278 \[math.PR\] (2023).* A.N. Philippou, C. Georghiou and C. Philippou, "A generalized geometric distribution and some of its properties" *Stat. Prob. Lett. **1**, 171--175 (1983).* A.N. Philippou, "Poisson and compound Poisson distributions of order $k$ and some of their properties" *Journal of Soviet Mathematics **27**, 3294--3297 (1984).* J.A. Adell and P. Jodrá, "The median of the Poisson distribution" *Metrika **61**, 337--346 (2005).* A.N. 
Philippou, "a note on the modes of the poisson distribution of order $k$" *Fibonacci Quarterly **52**, 203--205 (2014).* $k$ $\lambda_{k+1,k+2}$ $9/(4k-1)$ $\lambda_{k+1,k+2} - 9/(4k-1)$ ------- ----------------------- ----------------------- -------------------------------- 2 1.291502622 1.285714286 0.005788336 3 0.821876885 0.818181818 0.003695066 4 0.602607787 0.6 0.002607787 5 0.475672588 0.473684211 0.001988378 6 0.392901337 0.391304348 0.001596989 7 0.334663355 0.333333333 0.001330022 8 0.29145995 0.290322581 0.001137369 9 0.258135147 0.257142857 0.00099229 10 0.231648581 0.230769231 0.00087935 100 0.022632529 0.022556391 $7.61377\cdot10^{-5}$ 1000 0.002258053 0.002250563 $7.48997\cdot10^{-6}$ 10000 0.000225753 0.000225006 $7.47754\cdot10^{-7}$ 20000 0.000112875 0.000112501 $3.73843\cdot10^{-7}$ 30000 $7.52498\cdot10^{-5}$ $7.50006\cdot10^{-5}$ $2.49221\cdot10^{-7}$ 40000 $5.64373\cdot10^{-5}$ $5.62504\cdot10^{-5}$ $1.86913\cdot10^{-7}$ : [\[tb:lam_pk1k2\]]{#tb:lam_pk1k2 label="tb:lam_pk1k2"} Tabulation of $\lambda_{k+1,k+2}$ and the approximation $9/(4k-1)$ and the difference, for $k=2,\dots,10$ and selected higher values. ![ [\[fig:graph_hist_k10_nonmonotonic\]]{#fig:graph_hist_k10_nonmonotonic label="fig:graph_hist_k10_nonmonotonic"} Plot of the scaled pmf $h_k(n;\lambda)$ of the Poisson distribution of order $10$ and $\lambda=0.3$ (circles) and $\lambda=0.2$ (triangles).](graph_hist_k10_nonmonotonic.pdf){#fig:graph_hist_k10_nonmonotonic width="75%"} ![ [\[fig:graph_hist_k2_pk1k2\]]{#fig:graph_hist_k2_pk1k2 label="fig:graph_hist_k2_pk1k2"} Plot of the scaled pmf $h_k(n;\lambda)$ of the Poisson distribution of order $2$ and $\lambda=1.2915026$.](graph_hist_k2_pk1k2.pdf){#fig:graph_hist_k2_pk1k2 width="75%"} ![ [\[fig:graph_hist_k3_pk1k2\]]{#fig:graph_hist_k3_pk1k2 label="fig:graph_hist_k3_pk1k2"} Plot of the scaled pmf $h_k(n;\lambda)$ of the Poisson distribution of order $3$ and $\lambda=0.82187688$.](graph_hist_k3_pk1k2.pdf){#fig:graph_hist_k3_pk1k2 width="75%"} ![ [\[fig:graph_invroot_pk1k2\]]{#fig:graph_invroot_pk1k2 label="fig:graph_invroot_pk1k2"} Plot of the inverse root $\lambda_{k+1,k+2}^{-1}$ (dotted line) and the fit function $0.442972564\,k -0.113086$ (dashed).](graph_invroot_pk1k2.pdf){#fig:graph_invroot_pk1k2 width="75%"} ![ [\[fig:graph_invroot_diff_pk1k2\]]{#fig:graph_invroot_diff_pk1k2 label="fig:graph_invroot_diff_pk1k2"} Plot of the difference $0.442972564\,k -0.113086 - \lambda_{k+1,k+2}^{-1}$ for $1000 \le k \le 40000$.](graph_invroot_diff_pk1k2.pdf){#fig:graph_invroot_diff_pk1k2 width="75%"}
{ "id": "2310.05671", "title": "Convexity and monotonicity of the probability mass function of the\n Poisson distribution of order $k$", "authors": "S. R. Mane", "categories": "math.PR", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We introduce a new debiasing framework for high-dimensional linear regression that bypasses the restrictions on covariate distributions imposed by modern debiasing technology. We study the prevalent setting where the number of features and samples are both large and comparable. In this context, state-of-the-art debiasing technology uses a degrees-of-freedom correction to remove shrinkage bias of regularized estimators and conduct inference. However, this method requires that the observed samples are i.i.d., the covariates follow a mean zero Gaussian distribution, and reliable covariance matrix estimates for observed features are available. This approach struggles when (i) covariates are non-Gaussian with heavy tails or asymmetric distributions, (ii) rows of the design exhibit heterogeneity or dependencies, and (iii) reliable feature covariance estimates are lacking. To address these, we develop a new strategy where the debiasing correction is a rescaled gradient descent step (suitably initialized) with step size determined by the spectrum of the sample covariance matrix. Unlike prior work, we assume that eigenvectors of this matrix are uniform draws from the orthogonal group. We show this assumption remains valid in diverse situations where traditional debiasing fails, including designs with complex row-column dependencies, heavy tails, asymmetric properties, and latent low-rank structures. We establish asymptotic normality of our proposed estimator (centered and scaled) under various convergence notions. Moreover, we develop a consistent estimator for its asymptotic variance. Lastly, we introduce a debiased Principal Component Regression (PCR) technique using our Spectrum-Aware approach. In varied simulations and real data experiments, we observe that our method outperforms degrees-of-freedom debiasing by a margin. bibliography: - ref.bib title: "Spectrum-Aware Adjustment: A New Debiasing Framework with Applications to Principal Components Regression" --- # Introduction Regularized estimators constitute a basic staple of high-dimensional regression. These estimators incur a regularization bias, and characterizing this bias is imperative for accurate uncertainty quantification. This motivated debiased versions of these estimators [@zhang2014confidence; @javanmard2018debiasing; @van2014asymptotically] that remain unbiased asymptotically around the signal of interest. To describe debiasing, consider the setting of a canonical linear model where one observes a sample of size $n$ satisfying $$\mathbf{y}=\mathbf{X}\bm{\beta}^\star+\bm{\varepsilon}.$$ Here $\mathbf{y}\in \mathbb{R}^n$ denotes the vector of outcomes, $\mathbf{X}\in \mathbb{R}^{n \times p}$ the design matrix, $\bm{\beta}^\star\in \mathbb{R}^p$ the unknown coefficient vector, and $\bm{\varepsilon}$ the unknown noise vector. Suppose $\boldsymbol{\hat{\bm{\beta}}}$ denotes the estimator obtained by minimizing $\mathcal{L}(\bm{\cdot} \; ;\mathbf{X},\mathbf{y}):\mathbb{R}^p \mapsto \mathbb{R}_+$ given by $$\label{deflasso} \mathcal{L}(\bm{\beta};\mathbf{X},\mathbf{y}):=\frac{1}{2}\|\mathbf{y}-\mathbf{X}\bm{\beta}\|^2+\sum_{i=1}^p h\left(\beta_i\right), \qquad \bm{\beta} \in \mathbb{R}^p,$$ where $h: \mathbb{R} \mapsto [0,+\infty)$ is some convex penalty function. Commonly used penalties include the ridge $h(b)=\lambda b^2, \lambda>0$, the Lasso $h(b)=\lambda |b|, \lambda>0$, the Elastic Net $h(b)=\lambda_1 |b|+\lambda_2 b^2, \lambda_1,\lambda_2>0$, etc. 
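For concreteness, the following is a minimal proximal-gradient sketch (an illustration, not the implementation used in this paper) of how one might compute $\hat{\bm{\beta}}$ in [\[deflasso\]](#deflasso){reference-type="eqref" reference="deflasso"} for the Elastic Net penalty, written here in the form $h(b)=\lambda_1|b|+\tfrac{\lambda_2}{2}b^2$ used later in Example 8; the step size and iteration count are illustrative choices.

```python
import numpy as np

def prox_elastic_net(x, v, lam1, lam2):
    # proximal map of v*h with h(b) = lam1*|b| + (lam2/2)*b^2:
    # soft-threshold at lam1*v, then shrink by 1/(1 + lam2*v)
    return np.sign(x) * np.maximum(np.abs(x) - lam1 * v, 0.0) / (1.0 + lam2 * v)

def fit_regularized(X, y, lam1=1.0, lam2=0.1, n_iter=5000):
    """Minimize 0.5*||y - X b||^2 + sum_i h(b_i) by proximal gradient descent."""
    p = X.shape[1]
    step = 1.0 / (np.linalg.norm(X, 2) ** 2)   # 1/L for the quadratic part
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)
        beta = prox_elastic_net(beta - step * grad, step, lam1, lam2)
    return beta
```

The debiasing corrections discussed next can then be applied on top of such an estimate.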
The debiased version of $\boldsymbol{\hat{\bm{\beta}}}$ takes the form $$\label{eq:debiased} \hat{\bm{\beta}}^u=\hat{\bm{\beta}}+\frac{1}{\widehat{\mathsf{adj}}} \boldsymbol{M} \mathbf{X}^{\top}(\mathbf{y}-\mathbf{X}\hat{\boldsymbol{\beta}}),$$ for suitable choices of $\boldsymbol{M} \in\mathbb{R}^{p\times p}$ and adjustment coefficient $\widehat{\mathsf{adj}}>0$[^1]. At a high level, one expects that the debiasing term $\frac{1}{\widehat{\mathsf{adj}}} \boldsymbol{M} \mathbf{X}^{\top}(\mathbf{y}-\mathbf{X}\hat{\boldsymbol{\beta}})$ will compensate for the regularization bias and lead to asymptotic normality in entries of $\hat{\bm{\beta}}^u- \bm{\beta}^\star$, whereby one can develop associated inference procedures. In the classical low-dimensional regime ($n\to \infty, p$ fixed), [\[eq:debiased\]](#eq:debiased){reference-type="eqref" reference="eq:debiased"} reduces to the well-known one-step estimator. In this case, one may establish asymptotic normality [@van2000asymptotic Theorem 5.45] with $\boldsymbol{M}$ equal to $(\mathbf{X}^\top\mathbf{X})^{-1}$ and $\widehat{\mathsf{adj}}=1$ (i.e. no adjustment). Early works in high dimensions [@van2014asymptotically; @zhang2014confidence; @javanmard2018debiasing] demonstrated that the Lasso can be debiased by taking $\boldsymbol{M}$ as suitable "high-dimensional" substitutes of $(\mathbf{X}^\top \mathbf{X})^{-1}$. These works focused on ultra-high dimensions ($p \gg n$) and required no adjustment, i.e. $\widehat{\mathsf{adj}}=1$, as long as $\boldsymbol{\beta}^{\star}$ is sufficiently sparse. Recently, [@deshpande2018accurate; @deshpande2023online; @khamaru2021near] extended this machinery to adaptive linear estimation, online debiasing, and parameter estimation in sparse high-dimensional time series models. Later works uncovered that an adjustment $\widehat{\mathsf{adj}}<1$ is necessary for relaxing sparsity assumptions on $\boldsymbol{\beta}^{\star}$ or debiasing general regularized estimators in high dimensions. For instance, [@javanmard2014hypothesis; @bellec2022biasing] established that for the Lasso, the adjustment equation should be $\widehat{\mathsf{adj}}=1-\hat{s}/n$ with $\boldsymbol{M}=\bm{\Sigma}^{-1}$, where $\hat{s}$ denotes the number of non-zero entries in the Lasso estimate $\hat{\bm{\beta}}$. This correction term was named the degrees-of-freedom (DF) adjustment since it corresponds to the degrees-of-freedom of the regularized estimator $\hat{\bm{\beta}}$. Later [@bellec2019biasing; @celentano2020lasso] studied this framework further. Recent works [@celentano2021cad; @celentano2023challenges] extended degrees-of-freedom debiasing to semi-supervised settings and missing data models. ![ Histograms of $(\hat{\tau}_*^{-1/2}(\hat{{\beta}}^u_i-{\beta}^\star_i))_{i=1}^p$ comparing [@bellec2019biasing] with ours, where $\hat{\bm{\beta}}^u$ is the debiased Elastic-Net estimator with tuning parameters $\lambda_1=1, \lambda_2=0.1$. The first row uses the Gaussian-based formula [@bellec2019biasing] with their suggested choice of $\hat{\tau}_*$. This is also known as the degrees-of-freedom correction (DF). The second row uses our Spectrum-Aware formula [\[de-based\]](#de-based){reference-type="eqref" reference="de-based"} with our proposal for $\hat{\tau}_*$. The signal entries are iid draws from $N(-20,1)+0.06\cdot N(10,1)+0.7\cdot \delta_0$ where $\delta_0$ is the Dirac delta function at $0$. Thereafter, the signal is fixed.
The columns correspond to five right-rotationally invariant designs of size $n=500, p=1000$: (i) $\mathsf{MatrixNormal}$: $\mathbf{X}\sim N(\rm{0},\mathbf{\Sigma}^{\mathrm{(col)}}\otimes \mathbf{\Sigma}^{\mathrm{(row)}})$ where $\mathbf{\Sigma}^{\mathrm{(col)}}_{ij}=0.5^{|i-j|},\forall i,j\in [n]$ and $\bm{\Sigma}^{\mathrm{(row)}}\sim \mathsf{InverseWishart}\qty(\mathbf{I}_p, 1.1\cdot p)$ (see for notation); (ii) $\mathsf{Spiked}$: $\mathbf{X}=\alpha \cdot \mathbf{V}\mathbf{W}^\top +n^{-1}N(\rm{0}, \mathbf{I}_n \otimes \mathbf{I}_p)$ where $\alpha=10$ and $\mathbf{V}, \mathbf{W}$ are drawn randomly from Haar matrices of dimensions $n,p$, and then we retain $m=50$ columns; (iii) $\mathsf{LNN}$: $\mathbf{X}=\mathbf{X}_1\cdot \mathbf{X}_2 \cdot \mathbf{X}_3 \cdot \mathbf{X}_4$ where the $\mathbf{X}_i \in \mathbb{R}^{n_i\times p_i}$ have iid entries from $N(\rm{0},1)$. Here, $n_1=n$ and $p_4=p$ whereas $p_1=n_2, p_2=n_3, p_3=n_4$ are sampled uniformly from $n,...,p$; (iv) $\mathsf{VAR}$: $\mathbf{X}_{i,\bullet}=\sum_{k=1}^{\tau\wedge (i-1)}\alpha_k \mathbf{X}_{i-k,\bullet}+\bm{\varepsilon}_i$ where $\mathbf{X}_{i,\bullet}$ denotes the $i$-th row of $\mathbf{X}$, $\bm{\varepsilon}_i \sim N(\mathbf{0}, \mathbf{\Sigma})$ with $\mathbf{\Sigma} \sim \mathsf{InverseWishart}(\mathbf{I}_p, 1.1\cdot p)$. We set $\tau=3,\alpha=\qty(0.4, 0.08, 0.04)$, $\mathbf{X}_{1,\bullet}=0$; (v) $\mathsf{Mult}$-$\mathsf{t}$: rows of $\mathbf{X}$ are sampled iid from the multivariate-t distribution $\mathsf{Mult}$-$\mathsf{t}(3, \mathbf{I}_p)$ (see for notation). All designs are re-scaled so that the average of the eigenvalues of $\mathbf{X}^\top \mathbf{X}$ is 1. The solid black curve indicates a normal density fitted to the blue histograms whereas the dotted black line indicates the empirical mean corresponding to the histogram. See the corresponding QQ plot in .](image_va/panel.png){#fig1 width="\\textwidth"} Debiasing with degrees-of-freedom adjustment provided a novel viewpoint and overcame limitations with non-adjusted debiasing procedures. However, this proposal relied on i.i.d. data with covariates $\mathbf{X}_i\sim \mathcal{N}(\boldsymbol{0},\boldsymbol{\Sigma})$, where $\boldsymbol{\Sigma}$ is known or a reliable estimate is available. These assumptions place three serious constraints on degrees-of-freedom debiasing: (i) it cannot accommodate non-Gaussian data such as those with heavy tails or asymmetric distributions; (ii) it fails to cover settings with potential heterogeneity or dependencies across rows of the design; (iii) it is unclear how one should choose $\boldsymbol{M}$ when accurate estimates for $\boldsymbol{\Sigma}$ are unavailable, which might often be the case in high dimensions. These constraints cripple the broader applicability of such debiasing formulas. We exemplify this issue via Figure [1](#fig1){reference-type="ref" reference="fig1"} (and later figures in the manuscript), where we consider the following design distributions: (i) $\mathbf{X}$ drawn from a matrix normal distribution with both row and column correlations, (ii) $\mathbf{X}$ containing latent structure, (iii) $\mathbf{X}$ formed by a product of multiple random matrices (see [@hanin2020products; @hanin2021non] for connections to linear neural networks (LNN)), (iv) rows of $\mathbf{X}$ drawn from a vector time series, (v) rows of $\mathbf{X}$ drawn independently from a multivariate t-distribution. The figures plot the histogram of the empirical distribution of $\hat{\bm{\beta}}^u- \bm{\beta}^{\star}$ scaled by an estimate of its standard deviation.
On comparing with the overlaid standard Gaussian density, we observe that debiasing with degrees-of-freedom adjustment fails at once. To emphasize the challenge in these settings, we point to the readers that (i)-(iv) involve non-i.i.d. designs whereas (i),(iv),(v) involve heavy-tailed designs. In this paper, we propose a new debiasing formula that resolves this issue and performs accurate debiasing for *all* of the aforementioned settings. We develop our formula using the insight that a debiasing approach that works for a wide variety of settings should exploit the hidden spectral information in the data. To achieve this, we explore an alternative path for modeling the randomness in the design. Instead of the rows-i.i.d.-Gaussian assumption, we require that the singular value decomposition of $\mathbf{X}$ satisfies certain natural structure that allows for dependent samples and potentially heavy-tailed distributions. Specifically, we assume that $\mathbf{X}$ is right rotationally invariant. We present the formal assumption in Definition [Definition 1](#def:Rotinv){reference-type="ref" reference="def:Rotinv"} and discuss why it is natural for debiasing in Section [1.2](#RIDint){reference-type="ref" reference="RIDint"}. But we emphasize that this assumption covers a broad class of designs, many of which fall outside the purview of the Gaussian designs considered in prior literature. In particular, it covers designs (i)-(v) discussed in the preceding paragraph. We propose that under this assumption, one should choose $\boldsymbol{M}=\mathbf{I}_p$ and obtain $\widehat{\mathsf{adj}}$ by solving the equation $$\label{eq:adjspectrum} \frac{1}{p} \mathlarger{\mathlarger{\sum}}_{i=1}^p \frac{1}{\left(d_i^2-\widehat{\mathsf{adj}}\right)\left(\frac{1}{p} \sum_{j=1}^p \qty(\widehat{\mathsf{adj}}+h^{\prime \prime}\left(\hat{{\beta}}_j\right))^{-1} \right)+1}=1,$$ where $\{d_i^2\}_{1 \leq i \leq p}$ denote the eigenvalues of $\mathbf{X}^{\top}\mathbf{X}$ ($h^{\prime \prime}$ extended by $+\infty$ at non-differentiable points, c.f. Lemma [Lemma 7](#Extend){reference-type="ref" reference="Extend"}). We call the solution $\widehat{\mathsf{adj}}$ of [\[eq:adjspectrum\]](#eq:adjspectrum){reference-type="eqref" reference="eq:adjspectrum"} Spectrum-Aware adjustment since it depends on the eigenvalues of $\mathbf{X}^\top \mathbf{X}$. We name the corresponding debiasing procedure Spectrum-Aware debiasing (in short, SA debiasing). Figure [1](#fig1){reference-type="ref" reference="fig1"} demonstrates the power of our approach---it provides accurate debiasing across rather varied settings. We comment that unlike degrees-of-freedom debiasing, we do not require an estimate of $\boldsymbol{\Sigma}$, yet we can tackle correlations among features (c.f. Figure [1](#fig1){reference-type="ref" reference="fig1"}). Despite the strengths of SA debiasing we observe that it falls short when $\mathbf{X}$ contains outlier eigenvalues and/or the signal aligns with some eigenvectors of $\mathbf{X}$. Since these issues occur commonly in practice (c.f. Figure [5](#figPCRC){reference-type="ref" reference="figPCRC"}), we introduce an enhanced procedure that integrates classical principal component regression (PCR) ideas with SA debiasing. In this approach, we employ PCR to handle the outlier eigenvalues while using a combination of PCR and SA debiasing to estimate the parts of the signal that do not align with an eigenvector. 
We observe that this hybrid PCR-Spectrum-Aware approach works exceptionally well in challenging settings where both these issues are present. We summarize our main contributions below. - We establish that our proposed debiasing formula is well-defined, that is, [\[eq:adjspectrum\]](#eq:adjspectrum){reference-type="eqref" reference="eq:adjspectrum"} admits a unique solution (Proposition [Proposition 12](#COR){reference-type="ref" reference="COR"}). Then we establish that $\hat{\bm{\beta}}^u- \bm{\beta}^\star$, with this choice of $\widehat{\mathsf{adj}}$, converges to a mean-zero Gaussian with some variance $\tau_{\star}$ in a Wasserstein-2 sense (Theorem [Theorem 1](#NEIGMAIN){reference-type="ref" reference="NEIGMAIN"}; Wasserstein-2 convergence notion introduced in Definition [Definition 2](#def:Wass){reference-type="ref" reference="def:Wass"}). Under an exchangeability assumption on $\bm{\beta}^\star$, we strengthen this result to convergence guarantees on finite-dimensional marginals of $\hat{\bm{\beta}}^u-\bm{\beta}^{\star}$ (Corollary [Corollary 17](#sgoods){reference-type="ref" reference="sgoods"}). - We develop a consistent estimator for $\tau_*$ (Theorem [Theorem 1](#NEIGMAIN){reference-type="ref" reference="NEIGMAIN"}) by means of new algorithmic insights and new proof techniques that can be of independent interest in the context of vector approximate message passing algorithms [@rangan2019vector; @schniter2016vector; @fletcher2018plug] (details in Section [Theorem 4](#neig){reference-type="ref" reference="neig"}). - To establish the aforementioned points, we impose two strong assumptions: (a) the signal $\bm{\beta}^\star$ is independent of $\mathbf{X}$ and cannot strongly align with any subspace spanned by a small number of eigenvectors of $\mathbf{X}^{\top}\mathbf{X}$; (b) $\mathbf{X}^{\top}\mathbf{X}$ does not contain an outlier eigenvalue. To mitigate these, we develop a PCR-Spectrum-Aware debiasing approach (Section [4](#section:pcar){reference-type="ref" reference="section:pcar"}) that applies when these assumptions are violated. We prove asymptotic normality results for this approach, analogous to points (i)-(ii) above (Theorem [Theorem 2](#PCRTHM){reference-type="ref" reference="PCRTHM"}). - We demonstrate the utility of our debiasing formula in the context of hypothesis testing and confidence interval construction with explicit guarantees on quantities such as the false positive rate, false coverage proportion, etc. (Sections [3.4](#sec:infvanilla){reference-type="ref" reference="sec:infvanilla"} and [4.4](#section:inference){reference-type="ref" reference="section:inference"}). - As a by-product, our PCR-Spectrum-Aware approach introduces the first methodology for debiasing the classical PCR estimator (Theorem [Theorem 2](#PCRTHM){reference-type="ref" reference="PCRTHM"}), which would otherwise exhibit a shrinkage bias due to omission of low-variance principal components. We view this as a contribution in and of itself to the PCR literature, since inference following PCR is under-explored despite the widespread usage of PCR. - As a further by-product, we rigorously characterize the risk of regularized estimators under right rotationally invariant designs, a class much larger than Gaussian designs (Theorem [Theorem 3](#thm:empmain){reference-type="ref" reference="thm:empmain"}). One should compare this theorem to [@bayati2011lasso], which developed a risk characterization of the Lasso in high dimensions under Gaussian design matrices.
- Finally, we demonstrate the applicability of our Spectrum-Aware approach across a wide variety of covariate distributions, ranging from settings with heightened levels of correlation or heterogeneity among the rows or a combination thereof (Figure [4](#figPCRA){reference-type="ref" reference="figPCRA"}), to diverse real data (Figure [5](#figPCRC){reference-type="ref" reference="figPCRC"}). We observe that PCR-Spectrum-Aware debiasing demonstrates superior performance across the board. In the remaining Introduction, we walk the readers through some important discussion points, before we delve into our main results. In Section [1.1](#section:intuitridge){reference-type="ref" reference="section:intuitridge"}, we provide some intuition for our Spectrum-Aware construction using the example of the ridge estimator, since it admits a closed-form and is simple to study. In Section [1.2](#RIDint){reference-type="ref" reference="RIDint"}, we discuss right rotationally invariant designs and related recent literature, with the goal of motivating why this assumption is natural for debiasing, and emphasizing its degree of generality compared to prior Gaussian assumptions. In Section [1.3](#sec:debiasedPCR){reference-type="ref" reference="sec:debiasedPCR"}, we discuss the importance of PCR-Spectrum-Aware debiasing in the context of the PCR literature. ## Intuition via ridge estimator {#section:intuitridge} To motivate SA debiasing, let us focus on the simple instance of a ridge estimator that admits the closed-form $$\label{betaridge} \hat{\bm{\beta}}=\left(\mathbf{X}^{\top} \mathbf{X}+\lambda_2 \mathbf{I}_p\right)^{-1} \mathbf{X}^{\top}\mathbf{y}, \quad \lambda_2>0.$$ Recall that we seek a debiased estimator of the form $\hat{\bm{\beta}}^u=\hat{\bm{\beta}}+\widehat{\mathsf{adj}}^{-1}\mathbf{X}^\top (\mathbf{y}-\mathbf{X}\hat{\bm{\beta}})$. Suppose we plug this in [\[betaridge\]](#betaridge){reference-type="eqref" reference="betaridge"}, leaving $\widehat{\mathsf{adj}}$ unspecified for the moment. If we denote the singular value decomposition of $\mathbf{X}$ to be $\mathbf{Q}^{\top}\mathbf{D}\mathbf{O}$, we obtain that $$\label{matrixcenterD} \mathbb{E}[\hat{\bm{\beta}}^u\mid \mathbf{X}, \bm{\beta}^\star]=\underbrace{\left[\left(1+\frac{\lambda_2}{\widehat{\mathsf{adj}}}\right) \sum_{i=1}^p\left(\frac{d_i^2}{d_i^2+\lambda_2}\right) \mathbf{o}_i \mathbf{o}_i^{\top}\right]}_{=:\mathbf{V}} \bm{\beta}^\star,$$ where $\mathbf{o}_i^{\top}\in \mathbb{R}^{p}$ denotes the $i$-th row of $\mathbf{O}$ and recall that $d_i^2$ denotes the eigenvalues of $\mathbf{X}^\top\mathbf{X}$. For $\hat{\bm{\beta}}^u$ to be debiased, it appears necessary to choose $\widehat{\mathsf{adj}}$ such that one centers $\mathbf{V}$ around the identity matrix $\mathbf{I}_p$. We thus choose $\widehat{\mathsf{adj}}$ to be solution to the equation $$\label{ideq} \left(1+\frac{\lambda_2}{\widehat{\mathsf{adj}}}\right) \frac{1}{p} \sum_{i=1}^p \frac{d_i^2}{d_i^2+\lambda_2}=1.$$ This choice guarantees that the average of the eigenvalues of $\mathbf{V}$ equals 1. Solving for $\widehat{\mathsf{adj}}$, we obtain $$\label{eq:choiceadj} \widehat{\mathsf{adj}}=\left(\qty(\frac{1}{p} \sum_{i=1}^p \frac{\lambda_2 d_i^2}{d_i^2+\lambda_2})^{-1}-\frac{1}{\lambda_2}\right)^{-1}.$$ This is precisely our Spectrum-Aware adjustment formula for the ridge estimator! However, the above argument was conditional on both $\mathbf{X}$ and $\boldsymbol{\beta}^{\star}$. 
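Before turning to the unconditional calculation, a quick numerical sanity check is instructive. The sketch below assumes only that the eigenvalues $d_i^2$ of $\mathbf{X}^\top\mathbf{X}$ and the tuning parameter $\lambda_2$ are available (a synthetic spectrum is used purely for illustration) and verifies that the closed form [\[eq:choiceadj\]](#eq:choiceadj){reference-type="eqref" reference="eq:choiceadj"} indeed solves [\[ideq\]](#ideq){reference-type="eqref" reference="ideq"}.

```python
import numpy as np

def ridge_sa_adjustment(d2, lam2):
    """Spectrum-Aware adjustment for ridge, eq. (choiceadj)."""
    d2 = np.asarray(d2, dtype=float)
    s = np.mean(lam2 * d2 / (d2 + lam2))
    return 1.0 / (1.0 / s - 1.0 / lam2)

rng = np.random.default_rng(0)
d2 = rng.uniform(0.1, 3.0, size=1000)   # stand-in spectrum, for illustration only
lam2 = 0.5
adj = ridge_sa_adjustment(d2, lam2)
# left-hand side of eq. (ideq); should equal 1 up to round-off
print((1.0 + lam2 / adj) * np.mean(d2 / (d2 + lam2)))
```

Only the spectrum enters the formula, so the same check goes through verbatim for the eigenvalues of any observed design.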
Since we operate under random designs, let us investigate the case where one calculates [\[matrixcenterD\]](#matrixcenterD){reference-type="eqref" reference="matrixcenterD"} without conditioning on $\mathbf{X}$. To this end, we observe that $$\label{matrixcenter} \begin{aligned} \mathbb{E}\left[\hat{\bm{\beta}}^u\mid \bm{\beta}^\star\right]&=\mathbb{E} \underbrace{\left[\left(1+\frac{\lambda_2}{\widehat{\mathsf{adj}}}\right) \sum_{i=1}^p\left(\frac{d_i^2}{d_i^2+\lambda_2}\right) \mathbf{o}_i \mathbf{o}_i^{\top}\right]}_{=:\mathbf{V}} \bm{\beta}^\star\stackrel{(\star)}{=}\bm{\beta}^\star, \end{aligned}$$ where $(\star)$ holds if we choose $\widehat{\mathsf{adj}}$ using [\[eq:choiceadj\]](#eq:choiceadj){reference-type="eqref" reference="eq:choiceadj"} and in addition if $$\label{eq:conditionono} \mathbb{E}\left(\mathbf{o}_i \mathbf{o}_i^{\top}\right)=\frac{1}{p}\cdot \mathbf{I}_p.$$ Now, condition [\[eq:conditionono\]](#eq:conditionono){reference-type="eqref" reference="eq:conditionono"} does not hold in general. However, if $\mathbf{O}$ satisfies the following assumption, [\[eq:conditionono\]](#eq:conditionono){reference-type="eqref" reference="eq:conditionono"} and therefore $(\star)$ holds. **Assumption 1**. $\mathbf{O}$ is drawn uniformly at random from the set of all orthogonal matrices of dimension $p$ (this is the orthogonal group of dimension $p$ that we denote as $\mathbb{O}(p)$), in other words, $\mathbf{O}$ is drawn from the Haar measure on $\mathbb{O}(p)$. We operate under this assumption since it ensures $(\star)$ holds, and our Spectrum-Aware adjustment turns out to be the correct debiasing strategy in this setting, as the aforementioned argument explains. Meanwhile, the degrees-of-freedom adjustment [@bellec2019biasing] yields the correction factor $$\breve{\mathsf{adj}}=1-n^{-1} \operatorname{Tr} \bigg(\mathbf{X}\left(\mathbf{X}^{\top} \mathbf{X}+\lambda_2 \mathbf{I}_p\right)^{-1} \mathbf{X}^{\top}\bigg)=1-\frac{1}{n} \sum_{i=1}^p \frac{d_i^2}{d_i^2+\lambda_2}.$$ Notably, $\widehat{\mathsf{adj}}$ and $\breve{\mathsf{adj}}$ may be quite different. Unlike $\widehat{\mathsf{adj}}$, $\breve{\mathsf{adj}}$ may not center the spectrum of $\mathbf{V}$, and does not yield $\mathbb{E}(\hat{\bm{\beta}}^u\mid \bm{\beta}^\star)=\bm{\beta}^\star$ in general. But, they coincide asymptotically and $\breve{\mathsf{adj}}$ would provide accurate debiasing if one assumes that the empirical distribution of $\left(d_i^2\right)_{i=1}^p$ converges weakly to the Marchenko-Pastur law (cf. ), a property that many design matrices do not satisfy. We provide examples of such designs in Figure [1](#fig1){reference-type="ref" reference="fig1"}. In contrast, $\widehat{\mathsf{adj}}$ is applicable under much broader settings as it accounts for the *actual spectrum* of $\mathbf{X}^\top \mathbf{X}$. shows the clear strength of our approach over degrees-of-freedom debiasing. ## Right rotationally invariant designs {#RIDint} Roughly speaking, Assumption [Assumption 1](#ass:onO){reference-type="ref" reference="ass:onO"} lands $\mathbf{X}$ in the class of right rotationally invariant designs (Definition [Definition 1](#def:Rotinv){reference-type="ref" reference="def:Rotinv"}). Our arguments in the preceding section indicate that right rotational invariance is a more fundamental assumption for debiasing than prior Gaussian/sub-Gaussian assumptions in the literature. 
Since this assumption preserves the spectral information of $\mathbf{X}^{\top}\mathbf{X}$, we expect methods developed under it to exhibit improved robustness when applied to designs that may not satisfy the right rotational invariance property or designs observed in real data. We demonstrate this via Figure [5](#figPCRC){reference-type="ref" reference="figPCRC"}, where we conduct experiments with our PCR-Spectrum-Aware debiasing on designs arising from six real datasets spanning image data, financial data, socio-economic data, and so forth. Indeed, varied research communities have realized the strength of such designs, as demonstrated by the recent surge of literature in this space [@takeda2006analysis; @tulino2013support; @takeuchi2019rigorous; @dudeja2019rigorous; @rangan2019vector; @zhong2021approximate; @schniter2016vector; @barbier2018mutual; @opper2001tractable; @opper2005expectation; @ma2017orthogonal; @pandit2020inference; @takeuchi2021bayes; @liu2022memory; @xu2023capacity; @fan2021replica; @li2022random; @rush2015capacity]. In particular, [@dudeja2022spectral] established that properties of high-dimensional systems proven under such designs continue to hold for a broad class of designs (including nearly deterministic designs as observed in compressed sensing [@donoho2009observed]) as long as they satisfy certain spectral properties. In fact, the universality class for such designs is far broader than that for Gaussians, suggesting that these may serve as a solid prototype for modeling high-dimensional phenomena arising in non-Gaussian data. Despite such exciting developments, hardly any results exist when it comes to debiasing or inference under such designs (with the exception of [@takahashi2018statistical], which we discuss later). This paper develops this important theory and methodology. Despite the generality of right rotationally invariant designs, studying them poses significant challenges. For starters, analogs of the leave-one-out approach [@mezard1987spin; @mezard2009information; @bean2013optimal; @el2013robust; @zdeborova2016statistical; @el2018impact; @sur2019likelihood; @sur2019modern; @chen2021spectral; @jiang2022new] and Stein's method [@stein1981estimation; @chatterjee2010spin; @bellec2019biasing; @bellec2021second; @bellec2022biasing; @anastasiou2023stein], both of which form fundamental proof techniques for Gaussian designs, are non-existent or under-developed for this more general class. To mitigate this issue, we resort to an algorithmic proof strategy that the senior authors' earlier work and that of others have used in the context of Gaussian designs. To study $\hat{\bm{\beta}}^u$, we observe that it depends on the regularized estimator $\boldsymbol{\hat{\beta}}$. However, $\boldsymbol{\hat{\beta}}$ does not admit a closed form in general, so studying it directly turns out to be difficult. To circumvent this, we introduce a surrogate estimator that approximates $\boldsymbol{\hat{\beta}}$ in a suitable high-dimensional sense. We establish refined properties of these surrogates and use their closeness to our estimator of interest to infer properties about the latter. Prior literature has invoked this algorithmic proof strategy for Gaussian designs under the name of approximate message passing theory [@donoho2009message; @bayati2011lasso; @sur2019likelihood; @sur2019modern; @zhao2022asymptotic].
In case of right rotationally invariant designs, we create the surrogate estimators using vector approximate message passing (VAMP) algorithms [@rangan2019vector] (see details in and [8.1](#section:DIST){reference-type="ref" reference="section:DIST"}). However, unlike the Gaussian case, proving that these surrogates approximate $\boldsymbol{\hat{\beta}}$ well presents deep challenges. We overcome this via developing novel properties of VAMP algorithms that can be of independent interest to the signal processing [@takeda2006analysis], probability [@tulino2013support], statistical physics [@takeuchi2019rigorous], information and coding theory [@rangan2019vector; @pandit2020inference; @xu2023capacity] communities that seek to study models with right rotationally invariant designs in the context of myriad other problems. Among literature related to right rotationally invariant designs, two prior works are the most relevant for us. Of these, [@gerbelot2022asymptotic] initiated a study of the risk of $\boldsymbol{\hat{\beta}}$ under right rotationally invariant designs using the VAMP machinery. However, their characterization is partially heuristic, meaning that they assume certain critical exchange of limits is allowed and that limits of certain fundamental quantities exist. The former assumption may often not hold, and the latter is unverifiable without a proof (see Remark [Remark 24](#rem:cedric){reference-type="ref" reference="rem:cedric"} for further details). As a by-product of our work on debiasing, we provide a complete rigorous characterization of the risk of regularized estimators under right rotationally invariant designs () without these unverifiable assumptions. The second relevant work is [@takahashi2018statistical] that conjectures a population level version of a debiasing formula for the Lasso using non-rigorous statistical physics tools. To be specific, they conjecture a debiasing formula that involves unknown parameters related to the underlying limiting spectral distribution of the sample covariance matrix. This formula does not provide an estimator that can be calculated from the observed data. In contrast, we develop a complete data-driven pipeline for debiasing and develop a consistent estimator for its asymptotic variance. ## De-biased Principal Component Regression (PCR) {#sec:debiasedPCR} After describing SA debiasing, we identify two common scenarios that remain challenging: (i) a small subset of eigenvectors of $\mathbf{X}^\top \mathbf{X}$ align strongly with the true signal (referred to as the alignment PCs); (ii) top few eigenvalues are significantly separated from the bulk of the spectrum (referred to as the outlier PCs). The presence of these two types of PCs breaks crucial assumptions of degrees-of-freedom and Spectrum-Aware debiasing, and has important practical implications. For instance, many real-world designs contain a latent structure where a small number of PCs dominate the rest. These dominant PCs can distort normality of the debiased estimator significantly, depending on how they align with the signal vector. Motivated by these issues, we develop the debiased Principal Component Regression, leveraging SA debiasing as a sub-routine. The classical PCR estimator, when transformed back to the original basis, exhibits a shrinkage bias [@farebrother1978class; @frank1993statistical; @george1996multiple; @bickel2006regularization; @druilhet2008shrinkage; @jolliffe2016principal]. 
This bias arises from the loss of the portion of the signal that aligns with the subspace spanned by the discarded PCs. Our approach involves "re-purposing" the discarded PCs to form a new regression problem, where the new signal corresponds to the lost segment. We then form a debiased estimator for this lost segment using SA debiasing. We combine the resulting estimator ("complement PCR estimator") with the classical PCR estimator ("alignment PCR estimator") to form a debiased PCR estimator for the original signal. This combination of PCR ideas with SA debiasing proves effective in handling scenarios (i) and (ii). As a result, one observes substantial improvement compared to prior debiasing strategies across an array of real-data designs, in addition to challenging designs exhibiting extremely strong covariance and heterogeneity. Beyond addressing these two specific challenges, our work also contributes to an extensive and growing body of work on PCR [@jolliffe1982note; @hubert2003robust; @bair2006prediction; @fan2016projected; @agarwal2019robustness; @howley2006effect; @agarwal2020principal; @he2022large; @zhang2022envelopes; @hucker2022note; @silin2022canonical; @tahir2023robust]. Most recent works focus on improving or characterizing the predictive performance of PCR. Although the shrinkage bias of PCR estimators is well-understood, limited prior work explores how one should remove this bias in high dimensions and the potential benefits of such debiasing in statistical inference. To the best of our knowledge, our work provides the first approach with formal high-dimensional guarantees on debiasing the classical PCR estimator. We organize the rest of the paper as follows. In Section [2](#sec:assumption){reference-type="ref" reference="sec:assumption"}, we introduce our assumptions and preliminaries. In Sections [3](#subsectionmrdeb){reference-type="ref" reference="subsectionmrdeb"} and [4](#section:pcar){reference-type="ref" reference="section:pcar"}, we introduce our Spectrum-Aware and PCR-Spectrum-Aware methods with formal guarantees. In Section [5](#section:asympchar){reference-type="ref" reference="section:asympchar"}, we present our proof outline and technical novelties. Finally, in Section [6](#sec:conc){reference-type="ref" reference="sec:conc"}, we conclude with potential directions for future work. # Assumptions and Preliminaries {#sec:assumption} In this section, we introduce the assumptions and preliminaries that we require in the sequel. ## Design matrix, signal and noise We first formally define right-rotationally invariant designs. **Definition 1** (Right-rotationally invariant designs). Consider the singular value decomposition $\mathbf{X}=\mathbf{Q}^{\top} \mathbf{D}\mathbf{O}$ where $\mathbf{Q}\in \mathbb{R}^{n \times n}$ and $\mathbf{O}\in \mathbb{R}^{p \times p}$ are orthogonal and $\mathbf{D}\in \mathbb{R}^{n \times p}$ is diagonal and nonzero. We say a design matrix $\mathbf{X}\in \mathbb{R}^{n\times p}$ is right-rotationally invariant if $\mathbf{Q},\mathbf{D}$ are deterministic, and $\mathbf{O}$ is uniformly distributed on the orthogonal group. We work in a high-dimensional regime where $p$ and $n(p)$ both diverge and $n(p)/p \to \delta \in (0,+\infty)$.
Thus we consider a sequence of problem instances $\qty{\mathbf{y}(p), \mathbf{X}(p), \bm{\beta}^\star(p), \bm{\varepsilon}(p)}_{p\ge 1}$ such that $\mathbf{y}(p), \bm{\varepsilon}(p)\in \mathbb{R}^{n(p)}, \mathbf{X}(p) \in \mathbb{R}^{n(p) \times p}, \bm{\beta}^\star(p) \in \mathbb{R}^p$ and $\mathbf{y}(p)=\mathbf{X}(p) \bm{\beta}^\star(p)+\bm{\varepsilon}(p)$. In the sequel, we drop the dependence on $p$ whenever it is clear from context. For a vector $\boldsymbol{v} \in \mathbb{R}^p$, we call its empirical distribution to be the probability distribution that puts equal mass $1/p$ to each coordinate of the vector. Some of our convergence results will be in terms of empirical distributions of sequences of random vectors. Specifically, we will use the notion of Wasserstein-2 convergence frequently so we introduce this next. **Definition 2** (Convergence of empirical distribution under Wasserstein-2 distance). For a matrix $\left(\mathbf{v}_{1}, \ldots, \mathbf{v}_{k}\right)=$ $\left(v_{i, 1}, \ldots, v_{i, k}\right)_{i=1}^{n} \in \mathbb{R}^{n \times k}$ and a random vector $\left(\mathsf{V}_{1}, \ldots, \mathsf{V}_{k}\right)$, we write $$\left(\mathbf{v}_{1}, \ldots, \mathbf{v}_{k}\right) \stackrel{W_2}{\rightarrow}\left(\mathsf{V}_{1}, \ldots, \mathsf{V}_{k}\right)$$ to mean that the empirical distribution of the columns of $\left(\mathbf{v}_{1}, \ldots, \mathbf{v}_{k}\right)$ converge to $(\mathsf{V}_1,\ldots,\mathsf{V}_k)$ in Wasserstein-$2$ distance. This means that for any continuous function $f: \mathbb{R}^{k} \rightarrow \mathbb{R}$ satisfying $$\label{eq:wasslip} \left|f\left(\mathbf{v}_{1}, \ldots, \mathbf{v}_{k}\right)\right| \leq C\left(1+\left\|\left(\mathbf{v}_{1}, \ldots, \mathbf{v}_{k}\right)\right\|^{2}\right)$$ for some $C>0$, we have $$\lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} f\left(v_{i, 1}, \ldots, v_{i, k}\right)=\mathbb{E}\left[f\left(\mathsf{V}_{1}, \ldots, \mathsf{V}_{k}\right)\right],$$ where $\mathbb{E}\left[\left\|\left(\mathsf{V}_1, \ldots, \mathsf{V}_k\right)\right\|^2\right]<\infty$. See in for a review of the properties of the Wasserstein-2 convergence. **Assumption 2** (Measurement matrix). We assume that $\mathbf{X}\in \mathbb{R}^{n\times p}$ is right rotationally invariant () and independent of $\bm{\varepsilon}$. For the eigenvalues, we assume that as $n,p \rightarrow \infty$, $$\label{Dconve} \mathbf{d}:=\mathbf{D}^\top \bm{1}_{n \times 1} \stackrel{W_2}{\to} \mathsf{D},$$ where $\mathsf{D}^2$ has non-zero mean with compact support $\operatorname{supp}(\mathsf{D}^2) \subseteq [0,\infty)$. We denote $d_-:=\min(x:x \in \operatorname{supp}(\mathsf{D}^2))$. Furthermore, we assume that as $p\to \infty$, $$\label{eqtwe} d_+:=\lim_{p\to \infty}\max_{i\in [p]} d_i^2< +\infty .$$ **Remark 3**. The constraint [\[eqtwe\]](#eqtwe){reference-type="eqref" reference="eqtwe"} states that $\mathbf{X}^\top \mathbf{X}$ has bounded operator norm. It has important practical implications. It prevents the occurrence of outlier eigenvalues, where a few prominent eigenvalues of $\mathbf{X}^\top \mathbf{X}$ deviate significantly from the main bulk of the spectrum. We work with Assumption [Assumption 2](#AssumpD){reference-type="ref" reference="AssumpD"} for part of the sequel, in particular, Section [3](#subsectionmrdeb){reference-type="ref" reference="subsectionmrdeb"}. But later in Section [4](#section:pcar){reference-type="ref" reference="section:pcar"}, we relax restriction [\[eqtwe\]](#eqtwe){reference-type="eqref" reference="eqtwe"}. 
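For simulation purposes, a design satisfying Definition [Definition 1](#def:Rotinv){reference-type="ref" reference="def:Rotinv"} (and, when the supplied spectrum is bounded, Assumption [Assumption 2](#AssumpD){reference-type="ref" reference="AssumpD"}) can be generated directly from its singular value decomposition. The sketch below is an illustration rather than part of the methodology of this paper: it fixes $\mathbf{Q}=\mathbf{I}_n$ for simplicity and draws $\mathbf{O}$ from the Haar measure via the standard QR construction.

```python
import numpy as np

def haar_orthogonal(p, rng):
    # QR of a Gaussian matrix; the sign correction makes the factor Haar-distributed
    Z = rng.standard_normal((p, p))
    Q, R = np.linalg.qr(Z)
    return Q * np.sign(np.diag(R))

def right_rot_invariant_design(n, p, sq_spectrum, rng=None):
    """Return X = Q^T D O with O ~ Haar on O(p), Q = I_n, and the nonzero singular
    values of X equal to the square roots of the supplied eigenvalues of X^T X."""
    rng = np.random.default_rng() if rng is None else rng
    sq_spectrum = np.asarray(sq_spectrum, dtype=float)
    r = min(n, p, len(sq_spectrum))
    D = np.zeros((n, p))
    D[np.arange(r), np.arange(r)] = np.sqrt(sq_spectrum[:r])
    return D @ haar_orthogonal(p, rng)
```

Rescaling the output so that the average eigenvalue of $\mathbf{X}^\top\mathbf{X}$ equals one, as done for the designs in Figure [1](#fig1){reference-type="ref" reference="fig1"}, is then a one-line post-processing step.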
Since our debiasing procedure relies on the spectrum of $\mathbf{X}^{\top}\mathbf{X}$, analyzing its properties requires a thorough understanding of $\mathsf{D}$ (from [\[Dconve\]](#Dconve){reference-type="eqref" reference="Dconve"}), the limit of the empirical spectral distribution of $\mathbf{X}^{\top}\mathbf{X}$. Often these properties can be expressed using two important quantities: the Cauchy transform and the R-transform. We define these next. For technical reasons, we will define these transforms corresponding to the law of $-\mathsf{D}^2$. **Definition 4** (Cauchy- and R-transform). Under Assumption [Assumption 2](#AssumpD){reference-type="ref" reference="AssumpD"}, let $G:(-d_-,\infty) \to (0,\infty)$ and $R:(0,G(-d_-)) \to (-\infty,0)$ be the Cauchy- and R-transforms of the law of $-\mathsf{D}^2$, defined as $$\label{eq:CauchyR} G(z)=\mathbb{E}\left[\frac{1}{z+\mathsf{D}^2}\right], \qquad R(z)=G^{-1}(z)-\frac{1}{z},$$ where $G^{-1}(\cdot)$ is the inverse function of $G(\cdot)$. See properties and well-definedness of these in . We set $G(-d_-)=\lim_{z \to -d_-} G(z)$. We next move to discussing our assumptions on the signal. **Assumption 3** (Signal and noise). We assume throughout that $\bm{\varepsilon}\sim N(0,\mathbf{I}_n)$. We require that $\bm{\beta}^\star$ is either deterministic or independent of $\mathbf{O},\bm{\varepsilon}$. In the former case, we assume that $\bm{\beta}^\star\stackrel{W_2}{\to} \mathsf{B}^\star$ where $\mathsf{B}^\star$ is a random variable with finite variance. In the latter case, we assume the same convergence holds almost surely. **Remark 5**. The independence condition between $\bm{\beta}^\star$ and $\mathbf{O}$, along with the condition that $\mathbf{O}$ is uniformly drawn from the orthogonal group, enforces that $\bm{\beta}^\star$ cannot strongly align with a small number of eigenvectors of $\mathbf{X}^{\top}\mathbf{X}$. Once again, we require these assumptions in Section [3](#subsectionmrdeb){reference-type="ref" reference="subsectionmrdeb"} but we relax these later in Section [4](#section:pcar){reference-type="ref" reference="section:pcar"}. **Remark 6**. We believe the assumption on the noise can be relaxed in many settings. For instance, if we assume $\mathbf{Q}$ (from Definition [Definition 1](#def:Rotinv){reference-type="ref" reference="def:Rotinv"}) to be uniformly distributed on the orthogonal group independent of $\mathbf{O}$ and $\bm{\beta}^\star$, one may work with the relaxed assumption that $\bm{\varepsilon}\stackrel{W_2}{\to}\mathsf{E}$ for any random variable $\mathsf{E}$ with mean 0 and variance 1. This encompasses many noise distributions beyond Gaussians. Even without such an assumption on $\mathbf{Q}$, allowing for sub-Gaussian noise distributions should be feasible by invoking universality results. However, in this paper, we prefer to focus on fundamentally breaking the i.i.d.-samples and Gaussianity assumptions on $\mathbf{X}$ made in prior works. In this light, we work with the simpler Gaussian assumption on the noise. In the next subsection, we describe the penalty functions that we work with. ## Penalty function As observed in the vast majority of literature on high-dimensional regularized regression, the proximal map of the penalty function plays a crucial role in understanding properties of $\boldsymbol{\hat{\beta}}$. We introduce this function next. Let the proximal map associated to $h$ be $$\forall v>0,\; x \in \mathbb{R}, \quad \operatorname{Prox}_{v h}(x) \equiv \underset{y \in \mathbb{R}}{\arg \min }\left\{h(y)+\frac{1}{2 v}(y-{x})^2\right\}.$$ **Assumption 4** (Penalty function).
We assume that $h: \mathbb{R} \mapsto[0,+\infty)$ is proper, closed and satisfies that for some $c_0\ge 0, \forall x,y\in \mathbb{R}, t\in [0,1]$, $$\label{eq:strong} h(t\cdot x+(1-t)\cdot y) \leq t\cdot h(x)+(1-t)\cdot h(y)-\frac{1}{2} c_0\cdot t(1-t)\cdot (x-y)^2.$$ Here, $c_0=0$ is equivalent to assuming $h$ is convex and $c_0>0$ to assuming $h$ is $c_0$-strongly convex. Furthermore, we assume that $h(x)$ is twice continuously differentiable except for a finite set $\mathfrak{D}$ of points, and that $h^{\prime \prime}(x)$ and $\operatorname{Prox}_{v h}^{\prime}(x)$ have been extended at their respective undefined points using below. **Lemma 7** (Extension at non-differentiable points). *Fix any $v>0$. Under , $x\mapsto \operatorname{Prox}_{v h}(x)$ is continuously differentiable at all but a finite set $\mathcal{C}$ of points. Extending functions $x\mapsto h^{\prime \prime}(x)$ and $x\mapsto \operatorname{Prox}_{v h}^{\prime}(x)$ on $\mathfrak{D}$ and $\mathcal{C}$ by $+\infty$ and $0$ respectively, we have that for all $x\in \mathbb{R}$, $$\label{eq:Jacprox} \operatorname{Prox}_{v h}^{\prime}(x)=\frac{1}{1+v h^{\prime \prime}\left(\operatorname{Prox}_{v h}(x)\right)}\in \qty[0,\frac{1}{1+vc_0}], \quad h^{\prime\prime}(x) \in [c_0, +\infty].$$ After the extension, for any $w>0$, $x\mapsto \frac{1}{w+h^{\prime \prime}\left(\operatorname{Prox}_{v h}(x)\right)}$ is piecewise continuous with finitely many discontinuity points on which it takes value $0$.* We defer the proof to . We considered performing this extension since our debiasing formula involves the second derivative of $h(\cdot)$. The extension allows us to handle cases where the second derivative may not exist everywhere. As an example, we compute the extension for the elastic net penalty and demonstrate the form our debiasing formula takes on plugging in this extended version of $h(\cdot).$ **Example 8** (Elastic Net penalty). Consider the elastic-net penalty $$\label{elaspen} h(x)=\lambda_1|x|+\frac{\lambda_2}{2} x^2, \lambda_1\ge 0, \lambda_2\ge 0.$$ This is twice continuously differentiable except at $x=0$ (i.e. $\mathfrak{D}=\qty{0}$). Fix any $v>0$. Its $\operatorname{Prox}_{v h}(x)=\frac{1}{1+\lambda_2 v} \operatorname{ST}_{\lambda_1 v}\left(x\right)$ is continuously differentiable except at $x=$ $\pm \lambda_1 v$. Here, $\mathrm{ST}_{\lambda v}(x):=\operatorname{sgn}(x)(|x|-\lambda v)_{+}$ is the soft-thresholding function. Per Lemma [Lemma 7](#Extend){reference-type="ref" reference="Extend"}, the extended $h^{\prime \prime}, \operatorname{Prox}_{vh}'$ are $$h^{\prime \prime}(x)=\left\{\begin{array}{c}+\infty, \text { if } x=0 \\ \lambda_2, \text { otherwise }\end{array}\right., \quad \operatorname{Prox}_{vh}'(x)=\frac{1}{1+\lambda_2 v} \mathbb{I}\left(|x|>\lambda_1 v\right)$$ respectively, so that [\[eq:Jacprox\]](#eq:Jacprox){reference-type="eqref" reference="eq:Jacprox"} holds for all $x\in \mathbb{R}$. Note also that for any $w>0, x\mapsto \frac{1}{1+wh^{\prime \prime}\left(\operatorname{Prox}_{v h}(x)\right)}=\frac{1}{1+\lambda_2 w} \mathbb{I}(|x|>\lambda_1 v)$ is piecewise continuous and takes value 0 on both of its discontinuity points. 
It follows that our adjustment [\[eq:adjspectrum\]](#eq:adjspectrum){reference-type="eqref" reference="eq:adjspectrum"} can be written as $$\label{gammasolveaELAST} \frac{1}{p} \mathlarger{\mathlarger{\sum}}_{i=1}^p \frac{1}{\left(d_i^2\widehat{\mathsf{adj}}^{-1}-1 \right)\left(\frac{\hat{s}}{p} \qty(1+\widehat{\mathsf{adj}}^{-1}\lambda_2)^{-1} \right)+1}=1,$$ where $\hat{s}=\left|\left\{j: \hat{{\beta}}_j\neq 0\right\}\right|$. As a sanity check, if one sets $\lambda_2=0$ and solves the population version of the above equation $$\label{gammasolveaELASTLas} \mathbb{E}\frac{1}{\left(\mathsf{D}^2 \widehat{\mathsf{adj}}^{-1}-1 \right)\cdot \frac{\hat{s}}{p}+1}=1$$ with $\mathsf{D}^2$ drawn from the Marchenko-Pastur law, then one recovers the well-known degrees-of-freedom adjustment for the Lasso: $\widehat{\mathsf{adj}}=1-\hat{s}/n.$ The following assumption is analogous to [@bellec2019biasing Assumption 3.1] for the Gaussian design: we require either $h$ to be strongly-convex or $\mathbf{X}^\top \mathbf{X}$ to be non-singular with smallest eigenvalues bounded away from $0$. **Assumption 5**. Either $c_0>0$ or $\lim_{p\to\infty} \min_{i\in p}(d_i^2)\ge c_1$ for some constant $c_1>0$. ## Fixed-point equation Recall our discussion from Section [1.2](#RIDint){reference-type="ref" reference="RIDint"}. We study the regularized estimator $\boldsymbol{\hat{\beta}}$ by introducing a more tractable surrogate $\boldsymbol{\hat{\beta}}^t$. Later we will see that we construct this surrogate using an iterative algorithmic scheme known as a vector approximate message passing algorithm (VAMP) [@rangan2019vector]. Thus to study the surrogate, one needs to study the VAMP algorithm carefully. It transpires that one can describe the properties of this algorithm using a system of fixed point equations in four variables. We use $\gamma_*, \eta_*, \tau_{*}, \tau_{**}, \in(0,+\infty)$ to denote these variables, and define the system here: [\[fp\]]{#fp label="fp"} $$\begin{aligned} & \frac{\gamma_*}{\eta_*}=\mathbb{E}\operatorname{Prox}_{\gamma_{*}^{-1} h}^{\prime}\left(\mathsf{B}^\star+\sqrt{\tau_{*}} \mathsf{Z}\right) \label{RCa}\\ & \tau_{**}=\frac{\eta_*^2}{\left(\eta_*-\gamma_*\right)^2}\left[\mathbb{E}\left(\operatorname{Prox}_{\gamma_*^{-1} h}\left(\mathsf{B}^\star+\sqrt{\tau_{*}} \mathsf{Z}\right)-\mathsf{B}^\star\right)^2-\left(\frac{\gamma_*}{\eta_*}\right)^2 \tau_{*}\right] \label{RCb}\\ & \gamma_*=-R\left(\eta_*^{-1}\right) \label{RCc}\\ & \tau_{*}=\left(\frac{\eta_*}{\gamma_*}\right)^2\left[\mathbb{E}\left[\frac{\mathsf{D}^2+\tau_{**}\left(\eta_*-\gamma_*\right)^2}{\left(\mathsf{D}^2+\eta_*-\gamma_*\right)^2}\right]-\left(\frac{\eta_*-\gamma_*}{\eta_*}\right)^2 \tau_{**}\right], \label{RCd} \end{aligned}$$ where $\mathsf{Z}\sim N(0,1)$ is independent of $\mathsf{B}^\star$. We remind the reader that $x\mapsto \operatorname{Prox}_{\gamma_*^{-1} h}^{\prime}(x)$ is well-defined on $\mathbb{R}$ by the extension described in . The following assumption ensures that at least one solution exists. **Assumption 6** (Existence of fixed points). There exists a solution $\gamma_*, \eta_*, \tau_{*}, \tau_{**}\in (0,+\infty)$ such that [\[fp\]](#fp){reference-type="eqref" reference="fp"} holds. **Remark 9** (Existence implies uniqueness). Under Assumptions [Assumption 2](#AssumpD){reference-type="ref" reference="AssumpD"}--[Assumption 5](#Assumpgp){reference-type="ref" reference="Assumpgp"}, the existence of a solution implies uniqueness, as we show in . 
The following proposition shows that holds for any Elastic Net penalty that is not the Lasso. See the proof in . **Proposition 10** ( holds for Elastic Net). *Suppose that $\mathsf{D}^2$ and $\mathsf{B}^\star$ are as in and [Assumption 3](#AssumpPrior){reference-type="ref" reference="AssumpPrior"}. For the Elastic Net penalty $h$ defined in with $\lambda_2>0$, holds.* **Remark 11** (Verifying ). Proving presents complications for general penalties and $\mathsf{D}^2$. One therefore hopes to verify it on a case-by-case basis. For instance, when $\mathsf{D}^2$ follows the Marchenko-Pastur law and $h$ is the Lasso, [@bayati2011lasso Proposition 1.3, 1.4] proved that [\[fp\]](#fp){reference-type="eqref" reference="fp"} admits a unique solution. However, for general $\mathsf{D}^2$, there are no rigorous results in the literature showing that the system admits a solution. In , we take the first step and show for the first time that indeed holds for the Elastic Net for a general $\mathsf{D}^2$ satisfying our assumptions. This result may be of independent interest: we anticipate that our proof can be adapted to other penalties beyond the Elastic Net; however, since our arguments require an explicit expression for the proximal operator, we limit our presentation to the Elastic Net. That said, one expects this to always hold true under Assumptions [Assumption 2](#AssumpD){reference-type="ref" reference="AssumpD"}--[Assumption 5](#Assumpgp){reference-type="ref" reference="Assumpgp"} (see e.g. [@gerbelot2020asymptotic; @gerbelot2022asymptotic]). # Debiasing with Spectrum-Aware adjustment {#subsectionmrdeb} Recall that our debiasing formula involved $\widehat{\mathsf{adj}}$ obtained by solving [\[eq:adjspectrum\]](#eq:adjspectrum){reference-type="eqref" reference="eq:adjspectrum"}. To ensure our estimator is well-defined, we thus need to establish that this equation has a unique solution. In this section, we address this issue, establish asymptotic normality of our debiased estimator (suitably centered and scaled), and present a consistent estimator for its asymptotic variance. ## Well-definedness of our debiasing formula To show that [\[eq:adjspectrum\]](#eq:adjspectrum){reference-type="eqref" reference="eq:adjspectrum"} admits a unique solution, we define the function $g_p:(0,+\infty)\mapsto \mathbb{R}$ as $$\label{defgp} g_p(\gamma)=\frac{1}{p} \mathlarger{\mathlarger{\sum}}_{i=1}^p \frac{1}{\left(d_i^2-\gamma\right)\left(\frac{1}{p} \sum_{j=1}^p \frac{1}{\gamma+h^{\prime \prime}\left(\hat{{\beta}}_j\right)}\right)+1}.$$ Here $h''(\cdot)$ refers to the extended version we defined using Lemma [Lemma 7](#Extend){reference-type="ref" reference="Extend"}. The following Proposition is restated from . **Proposition 12**. *Fix $p\ge 1$ and suppose that holds. Then, the function $\gamma\mapsto g_p(\gamma)$ is well-defined and strictly increasing for $\gamma>0$, and $$\label{gammasolvea} g_p(\gamma)=1$$ admits a unique solution in $(0,+\infty)$ if and only if there exists some $i\in [p]$ such that $h^{\prime\prime}(\hat{{\beta}}_i)\neq +\infty$ and one of the following holds: (i) $\left\|h^{\prime \prime}(\hat{\bm{\beta}})\right\|_0=p$; (ii) $\mathbf{X}^\top \mathbf{X}$ is non-singular; (iii) $\norm{d}_0+\norm{h^{\prime\prime}(\hat{\bm{\beta}})}_0>p$.* **Example 13**. As a concrete example, the assumptions of hold for the Elastic Net with $\lambda_2>0$. **Remark 14**. To find the unique solution of $g_p(\gamma)=1$, we recommend using Newton's method initialized at $\gamma=\frac{1}{p}\sum_{i=1}^p d_i^2$. 
In rare cases where Newton's method fails to converge, we suggest using a bisection-based method, such as Brent's method, to solve [\[eq:adjspectrum\]](#eq:adjspectrum){reference-type="eqref" reference="eq:adjspectrum"} on the interval $\left[0, \max_{i\in [p]} d_i^2\right]$, where convergence is guaranteed (by Jensen's inequality, the solution must be upper bounded by $\max_{i\in [p]} d_i^2$). For numerical stability, we suggest re-scaling the design matrix $\mathbf{X}$ so that the average eigenvalue of $\mathbf{X}^\top\mathbf{X}$ equals 1, i.e. $\mathbf{X}_{\mathsf{rescaled}} \gets \qty(\frac{1}{p}\sum_{i=1}^p d_i^2)^{-1/2}\cdot \mathbf{X}$.

## The procedure

In this section, we introduce our Spectrum-Aware debiasing algorithm (). We also include our methodology for constructing a consistent estimator for the asymptotic variance corresponding to this estimator (centered at the truth) as part of . To introduce these, we require some intermediate quantities that depend on the observed data and the choice of the penalty. We define these next. Later in , we will provide intuition as to why these intermediate quantities are important and how we construct the variance estimator.

**Definition 15** (Scalar statistics). Let $\widehat{\mathsf{adj}}(\mathbf{X},\mathbf{y},h) \in (0,+\infty)$ be the unique solution to [\[eq:adjspectrum\]](#eq:adjspectrum){reference-type="eqref" reference="eq:adjspectrum"}. We define the following scalar statistics $$\label{DEFEFD} \begin{aligned} &\hat{\eta}_* (\mathbf{X},\mathbf{y},h) \gets \left(\frac{1}{p} \mathlarger{\mathlarger{\sum}}_{j=1}^p \frac{1}{\widehat{\mathsf{adj}}+h^{\prime \prime}\left(\hat{{\beta}}_j\right)}\right)^{-1}\\ & \hat{\tau}_{**}(\mathbf{X},\mathbf{y},h) \gets \frac{\left\| \qty(\mathbf{I}_n+ \frac{1}{\hat{\eta}_*-\widehat{\mathsf{adj}}} \mathbf{X}\mathbf{X}^\top)\qty(\mathbf{y}-\mathbf{X}\hat{\bm{\beta}})\right\|^2-n}{ \sum_{i=1}^p d_i^2}\\ & \hat{\tau}_*(\mathbf{X},\mathbf{y},h) \gets \frac{1}{p}\mathlarger{\mathlarger{\sum}}_{i=1}^p \frac{\hat{\eta}_*^2 d_i^2+\qty(d_i^2-\widehat{\mathsf{adj}}+2\hat{\eta}_*)\qty(\widehat{\mathsf{adj}}-d_i^2) \qty(\hat{\eta}_*-\widehat{\mathsf{adj}})^2 \hat{\tau}_{**}}{\qty(d_i^2-\widehat{\mathsf{adj}}+\hat{\eta}_*)^2\qty(\widehat{\mathsf{adj}})^2} \end{aligned}$$

Note that the quantities in [\[DEFEFD\]](#DEFEFD){reference-type="eqref" reference="DEFEFD"} are well-defined for any $p$ (i.e. no zero-valued denominators) if there exists some $i\in [p]$ such that $h^{\prime \prime} (\hat{\beta}_i)\neq +\infty$ and there exists some $j\in [p]$ such that $h^{\prime \prime} (\hat{\beta}_j)\neq 0$. Going forward, we suppress the dependence on $\mathbf{X},\mathbf{y},h$ for convenience. We state the Spectrum-Aware debiasing procedure in . At first glance, the definitions [\[DEFEFD\]](#DEFEFD){reference-type="eqref" reference="DEFEFD"} might appear opaque. However, we construct them from crucial algorithmic insights that we will explain later in . 
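For concreteness, the following is a minimal sketch (with our own function and variable names, not the authors' code) of how these scalar statistics could be computed for the elastic-net penalty, given the data, a fitted $\hat{\bm{\beta}}$, and the solution $\widehat{\mathsf{adj}}$ of [\[gammasolvea\]](#gammasolvea){reference-type="eqref" reference="gammasolvea"}; the formulas follow [\[DEFEFD\]](#DEFEFD){reference-type="eqref" reference="DEFEFD"} verbatim.

```python
# Sketch: scalar statistics of Definition 15 for the elastic-net penalty.
import numpy as np

def scalar_statistics(X, y, beta_hat, adj_hat, lam2):
    """Return (eta_hat, tau_hat, tau2_hat) following (DEFEFD); assumes the
    well-definedness conditions stated after Definition 15 hold."""
    n, p = X.shape
    d2 = np.clip(np.linalg.eigvalsh(X.T @ X), 0.0, None)
    h2 = np.where(beta_hat == 0.0, np.inf, lam2)         # extended h'' (Lemma 7)
    eta_hat = 1.0 / np.mean(1.0 / (adj_hat + h2))        # hat{eta}_*
    resid = y - X @ beta_hat
    z = resid + X @ (X.T @ resid) / (eta_hat - adj_hat)  # (I_n + XX^T/(eta-adj))(y - X beta_hat)
    tau2_hat = (np.sum(z ** 2) - n) / np.sum(d2)         # hat{tau}_{**}
    num = (eta_hat ** 2) * d2 \
        + (d2 - adj_hat + 2.0 * eta_hat) * (adj_hat - d2) * (eta_hat - adj_hat) ** 2 * tau2_hat
    den = (d2 - adj_hat + eta_hat) ** 2 * adj_hat ** 2
    tau_hat = np.mean(num / den)                         # hat{tau}_*
    return eta_hat, tau_hat, tau2_hat
```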
**Input:** response and design $(\mathbf{y},\mathbf{X})$ and a penalty function $h$.

**Find** the minimizer $\hat{\bm{\beta}}$ of [\[deflasso\]](#deflasso){reference-type="eqref" reference="deflasso"}.

**Compute** the eigenvalues $(d_i^2)_{i=1}^p$ of $\mathbf{X}^\top\mathbf{X}$.

**Find** the solution $\widehat{\mathsf{adj}}(\mathbf{X},\mathbf{y},h)$ of [\[gammasolvea\]](#gammasolvea){reference-type="eqref" reference="gammasolvea"}.

**Output:** the debiased estimator $$\label{de-based} \hat{\bm{\beta}}^u\qty(\mathbf{X},\mathbf{y},h)=\hat{\bm{\beta}}+\widehat{\mathsf{adj}}^{-1} \mathbf{X}^\top (\mathbf{y}-\mathbf{X}\hat{\bm{\beta}})$$ and the associated variance estimator $\hat{\tau}_{*}(\mathbf{X},\mathbf{y},h)$ from [\[DEFEFD\]](#DEFEFD){reference-type="eqref" reference="DEFEFD"}.

## Asymptotic normality

The theorem below states that the empirical distribution of $(\hat{\tau}_*^{-1/2}(\hat{{\beta}}^u_i-{\beta}^\star_i))_{i=1}^p$ converges to a standard Gaussian.

**Theorem 1** (Asymptotic normality of $\hat{\bm{\beta}}^u$). *Suppose that ---[Assumption 6](#Assumpfix){reference-type="ref" reference="Assumpfix"} hold. Then, we have that almost surely as $p\to \infty$, $$\hat{\tau}_*^{-1/2}(\hat{\bm{\beta}}^u-\bm{\beta}^\star)\stackrel{W_2}{\to} N(0,1).$$*

We illustrate in under five different right rotationally-invariant designs (cf. ) with non-trivial correlation structures, and compare with DF debiasing with $\mathbf{M}=I_p$. The corresponding QQ-plot can be found in . We observe that our method outperforms DF debiasing by a clear margin. We next turn to developing a different result that characterizes the asymptotic behavior of finite-dimensional marginals of the regression coefficient vector. Corollary [Corollary 17](#sgoods){reference-type="ref" reference="sgoods"} below establishes this under an additional exchangeability assumption on the regression vector coordinates. To state the corollary, we recall the standard definition of exchangeability for a sequence of random variables.

**Definition 16** (Exchangeability). We call a sequence of random variables $\left(\mathsf{V}_i\right)_{i=1}^p$ exchangeable if for any permutation $\pi$ of the indices $1,...,p$, the joint distribution of the permuted sequence $\left(\mathsf{V}_{\pi(i)}\right)_{i=1}^p$ is the same as that of the original sequence.

The corollary below is a consequence of . We defer its proof to .

**Corollary 17**. *Fix any finite index set $\mathcal{I} \subset [p]$. Suppose that ---[Assumption 6](#Assumpfix){reference-type="ref" reference="Assumpfix"} hold, and $\left(\bm{\beta}^\star\right)_{j=1}^{p}$ is exchangeable and independent of $\mathbf{X}, \bm{\varepsilon}$. Then as $p\to \infty$, we have $$\label{werbaoz} \frac{\hat{\bm{\beta}}^u_{\mathcal{I}}-\bm{\beta}^\star_{\mathcal{I}}}{\sqrt{\hat{\tau}_{*}}} \Rightarrow N(\rm{0},\mathbf{I}_{|\mathcal{I}|})$$ where $\Rightarrow$ denotes weak convergence.*

 shows an illustration of this result focusing on $\mathcal{I}=\{1\}$. Observe that we once again outperform degrees-of-freedom debiasing.

![Histograms of $\frac{\hat{\beta}_1-\beta_1^{\star}}{\sqrt{\hat{\tau}_*}}$ across 1000 Monte-Carlo trials using DF and SA debiasing. The setting is identical to except that here we set $n=100, p=200$ for computational tractability. ](image_va/panel_coor.png){#fig3 width="\\textwidth"}

Corollary [Corollary 17](#sgoods){reference-type="ref" reference="sgoods"} is naturally useful for constructing confidence intervals for finite-dimensional marginals of the regression vector with associated false coverage proportion guarantees. 
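Before turning to inference, we record a minimal end-to-end sketch of the debiasing step itself for the elastic net (our own naming; an illustration under the stated assumptions, not the authors' implementation). It takes a fitted $\hat{\bm{\beta}}$ from any solver of [\[deflasso\]](#deflasso){reference-type="eqref" reference="deflasso"}, solves [\[gammasolvea\]](#gammasolvea){reference-type="eqref" reference="gammasolvea"} by bisection on $(0, \max_i d_i^2]$ (cf. Remark 14), and forms the debiased estimator [\[de-based\]](#de-based){reference-type="eqref" reference="de-based"}; the variance estimator $\hat{\tau}_*$ can then be obtained from the `scalar_statistics` sketch above.

```python
# Sketch: Spectrum-Aware debiasing for the elastic net, given a fitted beta_hat.
import numpy as np
from scipy.optimize import brentq

def spectrum_aware_debias(X, y, beta_hat, lam2):
    # assumes lam2 > 0, so that g_p(0+) < 1 and the bracket below is valid (Proposition 12)
    d2 = np.clip(np.linalg.eigvalsh(X.T @ X), 0.0, None)
    h2 = np.where(beta_hat == 0.0, np.inf, lam2)   # extended h'' at beta_hat (Lemma 7)

    def g_p(gamma):                                # the function in (defgp)
        inner = np.mean(1.0 / (gamma + h2))
        return np.mean(1.0 / ((d2 - gamma) * inner + 1.0))

    adj_hat = brentq(lambda g: g_p(g) - 1.0, 1e-10, d2.max())  # solve g_p(gamma) = 1
    beta_u = beta_hat + X.T @ (y - X @ beta_hat) / adj_hat     # debiased estimator (de-based)
    return beta_u, adj_hat
```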
We discuss confidence interval construction in detail in the next section.

## Inference {#sec:infvanilla}

In this section, we discuss applications of our Spectrum-Aware debiasing approach to hypothesis testing and the construction of confidence intervals. Consider the null hypotheses $H_{i, 0}: {\beta}^\star_i=0$ for all $i \in[p]$. We define $\mathrm{p}$-values $P_i$ and decision rules $T_i$ ($T_i=1$ means rejecting $H_{i,0}$) for the tests $H_{i, 0}$ based on the definitions $$\label{pvaldef} P_i \qty(\hat{{\beta}}^u_i, \hat{\tau}_*)=2\left(1-\Phi\left(\left|\frac{\hat{{\beta}}^u_i}{\sqrt{\hat{\tau}_{*}}}\right|\right)\right), \quad T_i(\hat{{\beta}}^u_i, \hat{\tau}_*)=\left\{\begin{array}{cc} 1, & \text { if } P_i\qty(\hat{{\beta}}^u_i,\hat{\tau}_*) \leq \alpha \\ 0, & \text { if } P_i \qty(\hat{{\beta}}^u_i,\hat{\tau}_*)>\alpha \end{array},\right.$$ where $\Phi$ denotes the standard Gaussian CDF and $\alpha \in[0,1]$ is the significance level. We define the false positive rate (FPR) and true positive rate (TPR) below $$\mathsf{FPR}(p):=\frac{\sum_{j=1}^p \mathbb{I}\left(P_j \leq \alpha, \beta_j^{\star}=0\right)}{\sum_{j=1}^p \mathbb{I}\left(\beta_j^{\star}=0\right)}, \quad \mathsf{TPR}(p):=\frac{\sum_{j=1}^p \mathbb{I}\left(P_j \leq \alpha,\left|\beta_j^{\star}\right|>0\right)}{\sum_{j=1}^p \mathbb{I}\left(\left|\beta_j^{\star}\right|>0\right)}$$ when their respective denominators are non-zero. Fix $\alpha \in[0,1]$. We can construct confidence intervals $$\label{defCI} \mathsf{CI}_i(\hat{{\beta}}^u_i, \hat{\tau}_*)=\left(\hat{{\beta}}^u_i+a \sqrt{\hat{\tau}_{*}}, \hat{{\beta}}^u_i+b \sqrt{\hat{\tau}_{*}}\right), \qquad \forall i \in[p]$$ for any $a, b \in \mathbb{R}$ such that $\Phi(b)-\Phi(a)=1-\alpha$. One can define the associated false coverage proportion (FCP) $$\mathsf{FCP}(p):=\frac{1}{p} \sum_{i=1}^p \mathbb{I}\left({\beta}^\star_i\notin \mathsf{CI}_i\right)$$ for any $p\ge 1$. directly yield guarantees on the FPR, TPR and FCP, as shown below. We defer the proof to .

**Corollary 18**. *Suppose that ---[Assumption 6](#Assumpfix){reference-type="ref" reference="Assumpfix"} hold. We have the following.*

- *Suppose that $\mathbb{P}\left(\mathsf{B}^\star=0\right)>0$ and there exists some $\mu_0 \in(0,+\infty)$ such that $$\mathbb{P}\left(\abs{\mathsf{B}^\star} \in\left(\mu_0,+\infty\right) \cup\{0\}\right)=1.$$ Then for any fixed $i$ such that ${\beta}^\star_i=0$, we have $\lim _{p \rightarrow \infty} \mathbb{P}\left(T_{i}=1\right)=\alpha$, and the false positive rate satisfies that almost surely $\lim _{p\to \infty} \mathsf{FPR}(p)=\alpha.$ Refer also to for the exact asymptotic limit of the TPR.*

- *The false coverage proportion satisfies that almost surely $\lim _{p\to \infty} \mathsf{FCP}(p)=\alpha$.*

We demonstrate these guarantees in .

![The above plots the TPR and FPR of the hypothesis testing procedure defined in [\[pvaldef\]](#pvaldef){reference-type="eqref" reference="pvaldef"} with significance level $\alpha$, and the FCP of the constructed confidence intervals [\[defCI\]](#defCI){reference-type="eqref" reference="defCI"} with $b=\Phi^{-1}(1-\alpha/2), a=\Phi^{-1}(\alpha/2)$, as $\alpha$ on the x-axis varies from $0$ to $1$, for both degrees-of-freedom ($\mathsf{DF}$, blue) adjustment and Spectrum-Aware ($\mathsf{SA}$, red) adjustment. The setting here is the same as in . ](image_va/panel_hyp.png){#fig2 width="\\textwidth"}

We note that the FPR and FCP values obtained from DF debiasing diverge from the intended $\alpha$ values, showing a clear misalignment with the 45-degree line. 
In contrast, the Spectrum-Aware debiasing method aligns rather well with the specified $\alpha$ values, and this occurs without much compromise on the TPR level.

# De-biased PCR using Spectrum-Aware debiasing {#section:pcar}

## Challenges of alignment and outlier eigenvalues {#challengedfs}

We revisit our discussion in on debiasing ridge regression. Recall that we chose $\widehat{\mathsf{adj}}$ to center the spectrum of $\mathbf{V}$ below at 1: $$\label{wcaocnsdjf} \mathbb{E}[\hat{\bm{\beta}}^u\mid \mathbf{X},\bm{\beta}^\star]=\underbrace{\left[\left(1+\frac{\lambda_2}{\widehat{\mathsf{adj}}}\right) \sum_{i=1}^p\left(\frac{d_i^2}{d_i^2+\lambda_2}\right) \mathbf{o}_i \mathbf{o}_i^{\top}\right]}_{=:\mathbf{V}} \bm{\beta}^\star,$$ where $d_i^2$ and $\mathbf{o}_i$ denote, respectively, the eigenvalues and eigenvectors of $\mathbf{X}^\top\mathbf{X}$. Our motivation was the fact that with this choice of $\widehat{\mathsf{adj}}$, $$\label{huererere} \mathbf{V}\approx \mathbf{I}_p+\mathsf{unbiased \;component},$$ and $\hat{\bm{\beta}}^u$ would be centered around $\bm{\beta}^\star$. However, a simple calculation reveals a potential issue with this approach: if $\bm{\beta}^\star$ perfectly aligns with the top eigenvector $\mathbf{o}_1$, we would obtain $$\mathbb{E}[\hat{\bm{\beta}}^u\mid\mathbf{X},\bm{\beta}^\star]=\left(\frac{1}{p} \sum_{i=1}^p \frac{ d_i^2}{d_i^2+\lambda_2}\right)^{-1}\frac{d_1^2}{d_1^2+\lambda_2}\bm{\beta}^\star.$$ Now this incurs an inflation bias since $\frac{d_1^2}{d_1^2+\lambda_2}>\frac{1}{p} \sum_{i=1}^p \frac{ d_i^2}{d_i^2+\lambda_2}.$ One can generalize this to alignment of $\bm{\beta}^\star$ with the eigenvectors indexed by any small subset $\mathcal{J}_1$ of $[p]$, where the bias could be inflation or shrinkage depending on the set of eigenvectors $\bm{\beta}^\star$ aligns with. We refer to this as the *alignment issue*. Formally, such alignment violates the independence assumption between $\bm{\beta}^\star$ and $\mathbf{X}$ that we had required earlier in (cf. ). Another common issue arises: often the top few eigenvalues of $\mathbf{X}^{\top}\mathbf{X}$, indexed by $\mathcal{J}_2:=\{1,...,J_2\}$, are significantly separated from the bulk of the spectrum. In this case, after centering the spectrum of $\mathbf{V}$, the variance of the $\mathsf{unbiased \;component}$ of $\mathbf{V}$ in [\[huererere\]](#huererere){reference-type="eqref" reference="huererere"} will be large, thus making the debiasing procedure unstable. We refer to these eigenvalues as *outlier eigenvalues*. Formally, the existence of such eigenvalues violates [\[eqtwe\]](#eqtwe){reference-type="eqref" reference="eqtwe"} in (cf. ). We formalize both these issues by relaxing Assumptions [Assumption 2](#AssumpD){reference-type="ref" reference="AssumpD"} and [Assumption 3](#AssumpPrior){reference-type="ref" reference="AssumpPrior"} to the assumption below. To this end, denote $\mathcal{N}:=\left\{i \in [p]: d_i^2>0\right\}, N:=|\mathcal{N}|$. We let $\mathcal{J}$ be a user-chosen, finite index set $$\label{defJJ} \mathcal{J}\subseteq \mathcal{N}$$ that contains $\mathcal{J}_1 \cup \mathcal{J}_2$ as a subset (see ). We denote its size by $J:=\abs{\mathcal{J}}$. **Assumption 7**. 
We assume that $\mathcal{J}$ defined in [\[defJJ\]](#defJJ){reference-type="eqref" reference="defJJ"} is of finite size[^2] and for some real-valued vectors $\bm{\upsilon}^\star\in \mathbb{R}^{J},\bm{\zeta}^\star\in \mathbb{R}^{p}$, $$\label{stwerif} \bm{\beta}^\star=\bm{\beta}^\star_{\mathsf{al}}+\bm{\zeta}^\star, \qquad \bm{\beta}^\star_{\mathsf{al}}=\sum_{i=1}^{J} \upsilon^\star_i\cdot \mathbf{o}_{\mathcal{J}(i)}.$$ where we used $\mathcal{J}(i)$ to denote the $i$-th index in $\mathcal{J}$. Both $\bm{\upsilon}^\star$ and $\bm{\zeta}^\star$ are unknown, and they can be either deterministic or random independent of $\mathbf{O},\bm{\varepsilon}$. If $\bm{\zeta}^\star$ is deterministic, we assume that $\bm{\zeta}^\star\stackrel{W_2}{\to} \mathsf{C}^\star$ as $n,p\to \infty$, where $\mathsf{C}^\star$ is a random variable with finite variance. If $\bm{\zeta}^\star$ is random, we assume the same convergence holds almost surely. Furthermore, we assume that holds except that, instead of [\[eqtwe\]](#eqtwe){reference-type="eqref" reference="eqtwe"}, we only require $$\label{labforef} \lim_{p\to \infty} \max_{i\in [p]\setminus \mathcal{J}} d_i^2<+\infty.$$ Note that when $\mathcal{J}=\emptyset$, reduces to Assumptions [Assumption 2](#AssumpD){reference-type="ref" reference="AssumpD"} and [Assumption 3](#AssumpPrior){reference-type="ref" reference="AssumpPrior"} precisely. **Remark 19**. does not impose any constraints on $\bm{\upsilon}^\star\in \mathbb{R}^{J}$. For example, it is permitted that $\bm{\upsilon}^\star=0$ or that $p^{-1}\norm{\bm{\upsilon}^\star}^2$ diverges as $p\to \infty$. Note that also permits $\bm{\zeta}^\star=0$ but $p^{-1}\norm{\bm{\zeta}^\star}^2$ cannot diverge. **Remark 20**. $\mathcal{J}$ needs to be a finite index set that contains both $\mathcal{J}_1$ (as in [\[defJJ\]](#defJJ){reference-type="eqref" reference="defJJ"}) and $\mathcal{J}_2$ (as in [\[labforef\]](#labforef){reference-type="eqref" reference="labforef"}) as subsets. The index set $\mathcal{J}_2$ can be determined by observing the spectrum of $\mathbf{X}^\top \mathbf{X}$. $\mathcal{J}_1$ is generally not observed and requires prior information. However, we remark that eigenvectors indexed by $\mathcal{J}_1\cap\mathcal{J}_2$ tend to distort the debiasing procedure most severely. So often just including $\mathcal{J}_2$ in $\mathcal{J}$ can significantly improve inference. Under Assumption [Assumption 7](#AssumpPriorAL){reference-type="ref" reference="AssumpPriorAL"}, we develop a debiasing approach that recovers both components of $\bm{\beta}^{\star}$ from [\[stwerif\]](#stwerif){reference-type="eqref" reference="stwerif"}. Since we utilize ideas from principal components regression, we first describe the classical PCR algorithm. As the reader will soon discover, our debiasing approach also produces a debiased version of this classical PCR estimator and thereby we contribute independently to the PCR literature. ## The PCR algorithms ### Classical PCR below describes the classical principal component regression procedure with respect to principal components indexed by $\mathcal{K}$. It will be used as a sub-routine in the algorithms we present in the sequel. **Algorithm 1** (Classical PCR). Given input $(\mathbf{X},\mathbf{y})$ and a user-chosen size-$K$ index set $\mathcal{K}$ satisfying $$\label{PCK} \mathcal{K}\subset \mathcal{N}:=\left\{i \in[p]: d_i^2>0\right\},$$ the PCR procedure computes the singular value decomposition $\mathbf{X}=\mathbf{Q}^\top \mathbf{D}\mathbf{O}$[^3] and outputs. 
$$\hat{\bm{\theta}}_{\mathsf{pcr}}(\mathcal{K})=\left(\bm{W}_\mathcal{K}^{\top} \bm{W}_\mathcal{K}\right)^{-1} \bm{W}_\mathcal{K}^{\top}\mathbf{y}\in \mathbb{R}^K,$$ where $\bm{W}_\mathcal{K}:=\mathbf{X}\mathbf{O}_\mathcal{K}^{\top}\in \mathbb{R}^{n \times K}$ denotes the basis-transformed design matrix with orthogonal columns and $\mathbf{O}_{\mathcal{K}}\in \mathbb{R}^{K\times p}$ denotes the rows of $\mathbf{O}$ indexed by $\mathcal{K}$.

### Alignment PCR

In this section, we introduce a procedure to recover $\bm{\beta}^\star_{\mathsf{al}}$, the component of $\bm{\beta}^\star$ that aligns with the $\mathcal{J}_1$-indexed PCs.

**Algorithm 2** (Alignment PCR). Given input $(\mathbf{X},\mathbf{y})$ and $\mathcal{J}$ defined in [\[defJJ\]](#defJJ){reference-type="eqref" reference="defJJ"}, the alignment PCR procedure runs with $\mathcal{K}\gets \mathcal{J}$ and obtains $\hat{\bm{\theta}}_{\mathsf{pcr}}(\mathcal{J})$. It then outputs the alignment PCR estimator $$\label{pctJs} \hat{\bm{\beta}}_{\mathsf{al}}(\mathcal{J}):=\mathbf{O}_{\mathcal{J}}^\top \hat{\bm{\theta}}_{\mathsf{pcr}}(\mathcal{J}) \in \mathbb{R}^p,$$ where $\mathbf{O}_{\mathcal{J}}$ consists of rows of $\mathbf{O}$ indexed by $\mathcal{J}$.

It is easy to show that $\hat{\bm{\beta}}_{\mathsf{al}}(\mathcal{J})$ is a consistent estimator of $\bm{\beta}^\star_{\mathsf{al}}$ (we formalize this in , (a)). Left multiplication by $\mathbf{O}^\top_\mathcal{J}$ in [\[pctJs\]](#pctJs){reference-type="eqref" reference="pctJs"} transforms $\hat{\bm{\theta}}_{\mathsf{pcr}}$ back to the standard basis. However, the resulting estimator $\hat{\bm{\beta}}_{\mathsf{al}}$ incurs a shrinkage bias since it only recovers the $\bm{\beta}^\star_{\mathsf{al}}$ portion of the signal (see , (a)). Our discussion so far is standard in current PCR pipelines. However, to obtain asymptotically unbiased estimators for the entire signal $\bm{\beta}^\star$, more work is required; in particular, debiasing this classical PCR estimator is critical. We achieve this below.

### Complement PCR

In the last section, we developed $\hat{\bm{\beta}}_{\mathsf{al}}(\mathcal{J})$ to estimate the alignment component $\bm{\beta}^\star_{\mathsf{al}}$. We now leverage our Spectrum-Aware debiasing theory to devise a modified PCR procedure, which we call *complement PCR*, to estimate the remaining component $\bm{\zeta}^\star$. Let us denote $$\label{Jsb} \bar{\mathcal{J}}:=\mathcal{N} \setminus \mathcal{J}\equiv \{i\in [p]: d_i^2>0, i\notin \mathcal{J}\}.$$ Note that $\bar{\mathcal{J}}$ indexes all PCs that are not used in $\hat{\bm{\beta}}_{\mathsf{al}}(\mathcal{J})$. The complement PCR procedure is described below.

**Algorithm 3** (Complement PCR). Given $(\mathbf{X},\mathbf{y})$, a penalty function $h$ and $\bar{\mathcal{J}}$ from [\[Jsb\]](#Jsb){reference-type="eqref" reference="Jsb"}, the complement PCR runs with $\mathcal{K}\gets \bar{\mathcal{J}}$ and obtains $\hat{\bm{\theta}}_{\mathsf{pcr}}(\bar{\mathcal{J}})$. 
It then constructs the new data $$\label{xnewYnew} \mathbf{y}_\mathsf{new}\gets \qty(\mathbf{D}_{\bar{\mathcal{J}}}^\top \mathbf{D}_{\bar{\mathcal{J}}})^{1/2} \hat{\bm{\theta}}_{\mathsf{pcr}}(\bar{\mathcal{J}}) \in \mathbb{R}^{N-J}, \quad \mathbf{X}_\mathsf{new}\gets \qty(\mathbf{D}_{\bar{\mathcal{J}}}^\top \mathbf{D}_{\bar{\mathcal{J}}})^{1/2} \mathbf{O}_{\bar{\mathcal{J}}} \in \mathbb{R}^{(N-J)\times p}$$ where $\mathbf{D}_{\bar{\mathcal{J}}} \in \mathbb{R}^{n \times (N-J)}$ consists of columns of $\mathbf{D}$ indexed by $\bar{\mathcal{J}}$ and $\mathbf{O}_{\bar{\mathcal{J}}}\in \mathbb{R}^{(N-J)\times p}$ consists of rows of $\mathbf{O}$ indexed by $\bar{\mathcal{J}}$. It then runs the Spectrum-Aware debiasing procedure in with respect to the new data and outputs the complement PCR estimator $$\hat{\bm{\beta}}_{\mathsf{co}}(\bar{\mathcal{J}})\gets \hat{\bm{\beta}}^u(\mathbf{X}_\mathsf{new}, \mathbf{y}_\mathsf{new}, h)$$ and the associated variance estimator $\hat{\tau}_{*}\gets \hat{\tau}_*(\mathbf{X}_\mathsf{new}, \mathbf{y}_\mathsf{new}, h)$.

We establish the asymptotic behavior of $\hat{\bm{\beta}}_{\mathsf{co}}(\bar{\mathcal{J}})$ in , (b). In particular, $\hat{\bm{\beta}}_{\mathsf{co}}(\bar{\mathcal{J}})$ is a debiased estimator of $\bm{\zeta}^\star$ in a suitable asymptotic sense.

### Debiased PCR

In the last two sections, we developed the alignment PCR procedure to estimate $\bm{\beta}^\star_{\mathsf{al}}$ and the complement PCR procedure to estimate $\bm{\zeta}^\star$. It is then natural to combine them to form a debiased estimator for $\bm{\beta}^\star$.

**Algorithm 4** (Debiased PCR). Given input $(\mathbf{X},\mathbf{y})$, a penalty function $h$, and $\mathcal{J}$ in [\[defJJ\]](#defJJ){reference-type="eqref" reference="defJJ"}, the debiased PCR runs to obtain $\hat{\bm{\theta}}_{\mathsf{pcr}}(\mathcal{J})$ and $\hat{\bm{\beta}}_{\mathsf{al}}(\mathcal{J})$. It then runs to obtain $\hat{\bm{\beta}}_{\mathsf{co}}(\bar{\mathcal{J}})$ and the variance estimator $\hat{\tau}_{*}$. The procedure then outputs the debiased PCR estimator $$\label{defdefpcrdb} \hat{\bm{\beta}}^u_{\mathsf{pcr}}\gets \hat{\bm{\beta}}_{\mathsf{al}}(\mathcal{J})+\hat{\bm{\beta}}_{\mathsf{co}}(\bar{\mathcal{J}})$$ along with $\hat{\tau}_{*}$.

We summarize the entire procedure in for clarity.

**Input:** response and design $(\mathbf{y},\mathbf{X})$, a penalty function $h$ and an index set of PCs $\mathcal{J}\subset \mathcal{N}$ (see [\[defJJ\]](#defJJ){reference-type="eqref" reference="defJJ"}).

**Conduct** the eigen-decomposition $\mathbf{X}^\top \mathbf{X}=\mathbf{O}^\top \mathbf{D}^\top \mathbf{D}\mathbf{O}$ and let $\mathbf{O}_\mathcal{J}, \mathbf{O}_{\bar{\mathcal{J}}}$ be the PCs indexed by $\mathcal{J}$ and $\bar{\mathcal{J}}=\mathcal{N}\setminus \mathcal{J}$ respectively. 
**Compute** the alignment PCR estimator $$\hat{\bm{\beta}}_{\mathsf{al}}\gets \mathbf{O}_{\mathcal{J}}^\top \left(\bm{W}_\mathcal{J}^{\top} \bm{W}_\mathcal{J}\right)^{-1} \bm{W}_\mathcal{J}^{\top}\mathbf{y}$$ where $\bm{W}_\mathcal{J}:=\mathbf{X}\mathbf{O}_\mathcal{J}^{\top}.$

**Construct** the new data $$\mathbf{y}_\mathsf{new}\gets \qty(\mathbf{D}_{\bar{\mathcal{J}}}^\top \mathbf{D}_{\bar{\mathcal{J}}})^{1/2} \left(\bm{W}_{\bar{\mathcal{J}}}^{\top} \bm{W}_{\bar{\mathcal{J}}}\right)^{-1} \bm{W}_{\bar{\mathcal{J}}}^{\top} \mathbf{y}, \quad \mathbf{X}_\mathsf{new}\gets \qty(\mathbf{D}_{\bar{\mathcal{J}}}^\top \mathbf{D}_{\bar{\mathcal{J}}})^{1/2} \mathbf{O}_{\bar{\mathcal{J}}}$$ where $\bm{W}_{\bar{\mathcal{J}}}=\mathbf{X}\mathbf{O}_{\bar{\mathcal{J}}}^\top$ and $\mathbf{D}_{\bar{\mathcal{J}}}$ consists of columns of $\mathbf{D}$ indexed by $\bar{\mathcal{J}}$.

**Find** the minimizer $\hat{\bm{\beta}}$ of $\mathcal{L}(\bm{\cdot} \; ;\mathbf{X}_\mathsf{new},\mathbf{y}_\mathsf{new})$ for $\mathcal{L}$ defined in [\[deflasso\]](#deflasso){reference-type="eqref" reference="deflasso"}.

**Compute** the eigenvalues $(d_i^2)_{i=1}^p$ of $\mathbf{X}_\mathsf{new}^\top\mathbf{X}_\mathsf{new}$.

**Find** the solution $\widehat{\mathsf{adj}}(\mathbf{X}_\mathsf{new},\mathbf{y}_\mathsf{new},h)$ of [\[gammasolvea\]](#gammasolvea){reference-type="eqref" reference="gammasolvea"} and compute the complement PCR estimator $$\hat{\bm{\beta}}_{\mathsf{co}}\gets \hat{\bm{\beta}}+\widehat{\mathsf{adj}}^{-1} \mathbf{X}_\mathsf{new}^\top (\mathbf{y}_\mathsf{new}-\mathbf{X}_\mathsf{new}\hat{\bm{\beta}})$$ and $\hat{\tau}_{*}(\mathbf{X}_\mathsf{new},\mathbf{y}_\mathsf{new},h)$ from [\[DEFEFD\]](#DEFEFD){reference-type="eqref" reference="DEFEFD"}.

**Output:** the PCR-Spectrum-Aware estimator $$\hat{\bm{\beta}}^u_{\mathsf{pcr}}\gets \hat{\bm{\beta}}_{\mathsf{al}}+\hat{\bm{\beta}}_{\mathsf{co}}$$ and the associated variance estimator $\hat{\tau}_{*}\gets \hat{\tau}_{*}(\mathbf{X}_\mathsf{new},\mathbf{y}_\mathsf{new},h)$.

## Asymptotic normality {#section:asympcrcf}

We now state the asymptotic properties of the debiased PCR procedure. The proof of the theorem below is deferred to .

**Theorem 2**. *Suppose Assumptions [Assumption 4](#Assumph){reference-type="ref" reference="Assumph"}---[Assumption 7](#AssumpPriorAL){reference-type="ref" reference="AssumpPriorAL"} hold. Then, we have the following statements.*

- *Let $\hat{\omega}:=p^{-1}{\norm{\hat{\bm{\beta}}_{\mathsf{co}}}^2}-\hat{\tau}_{*}$. Almost surely as $p\to \infty$, $$\label{XEL3} \frac{1}{p}\norm{\hat{\bm{\beta}}_{\mathsf{al}}(\mathcal{J})-\bm{\beta}^\star_{\mathsf{al}}}^2\to 0,\quad \qty(\mathbf{D}^\top_\mathcal{J}\mathbf{D}_\mathcal{J}+\hat{\omega}\cdot \mathbf{I}_{J})^{-1/2}\qty(\hat{\bm{\theta}}_{\mathsf{pcr}}(\mathcal{J})-\bm{\upsilon}^\star)\Rightarrow N(\bm{0},\mathbf{I}_J)$$ where $\mathbf{D}_\mathcal{J}\in \mathbb{R}^{n\times J}$ consists of columns of $\mathbf{D}$ indexed by $\mathcal{J}$.*

- *Almost surely as $p \to \infty$, $$\label{JSSRwa} \hat{\tau}_{*}^{-1/2}\qty(\hat{\bm{\beta}}_{\mathsf{co}}-\bm{\zeta}^\star)\stackrel{W_2}{\to} N(0,1).$$*

- *Almost surely as $p\to \infty$, $$\label{OKIqc} \hat{\tau}_{*}^{-1/2}\qty(\hat{\bm{\beta}}^u_{\mathsf{pcr}}-\bm{\beta}^\star)\stackrel{W_2}{\to} N(0,1).$$*

The proof of below is deferred to .

**Corollary 21**. *Suppose Assumptions [Assumption 4](#Assumph){reference-type="ref" reference="Assumph"}---[Assumption 7](#AssumpPriorAL){reference-type="ref" reference="AssumpPriorAL"} hold. 
If $\qty({\zeta}^\star_j)_{j=1}^p$ are exchangeable as in , then for any fixed, finite index set $\mathcal{I}\subset [p],$ we have that almost surely as $p\to \infty$, $$\label{centralclaim} \begin{aligned} & \hat{\bm{\beta}}_{\mathsf{al},\mathcal{I}}(\mathcal{J})\to \bm{\beta}^\star_{\mathsf{al},\mathcal{I}}, \quad \hat{\tau}_{*}^{-1/2}\qty(\hat{\bm{\beta}}_{\mathsf{co},\mathcal{I}}(\bar{\mathcal{J}})-\bm{\zeta}^\star_\mathcal{I})\Rightarrow N(\bm{0},\mathbf{I}_{|\mathcal{I}|})\\ & \hat{\tau}_{*}^{-1/2}\qty(\hat{\bm{\beta}}^u_{\mathsf{pcr},\mathcal{I}}-\bm{\beta}^\star_\mathcal{I})\Rightarrow N \qty(\bm{0},\mathbf{I}_{|\mathcal{I}|}). \end{aligned}$$* We test our claim [\[OKIqc\]](#OKIqc){reference-type="eqref" reference="OKIqc"} under two sets of design matrices. The first set, presented in (see the associated QQ plot in ), represents more challenging variants of the designs showcased in . These designs contain heightened levels of correlation, heterogeneity, or a combination of both (see for a detailed comparison). Furthermore, they contain outlier eigenvalues, and we construct these experiments so that $\bm{\beta}^\star$ aligns with the top eigenvectors. The second set of design matrices, presented in (associated QQ plot in ), presents debiasing results for designs taken from real data in five different domains---audio features, genetic data, image data, financial returns, and socio-economic data. Observe that across the board, PCR-Spectrum-Aware debiasing outperforms the other debiasing methods, with histograms of the empirical distribution of the pivotal quantities aligning with the standard Gaussian pdf exceptionally well. Also observe that enhancing degrees-of-freedom debiasing with the PCR methodology (this amounts to substituting Spectrum-Aware debiasing in with degrees-of-freedom debiasing) yields improvements over the vanilla degrees-of-freedom debiasing. However, the empirical distributions deviate significantly from a standard Gaussian even with this version of degrees-of-freedom debiasing, whereas our PCR-Spectrum-Aware approach continues to work well. ![ The setting is similar to except for specific changes to the design distribution parameters that give rise to more challenging scenarios than . Rows 1---4 correspond to: (i) DF: DF debiasing; (ii) SA: SA debiasing; (iii) PCR-DF: debiased PCR with DF debiasing in the complement PCR step in place of SA debiasing; and (iv) PCR-SA: debiased PCR defined in . 
Columns 1---5 correspond to five different right rotationally invariant designs: (i) $\mathsf{MatrixNormal}$-$\mathsf{B}$: $\mathbf{X}\sim N(\rm{0},\mathbf{\Sigma}^{\mathrm{(col)}}\otimes \mathbf{\Sigma}^{\mathrm{(row)}})$ where $\mathbf{\Sigma}^{\mathrm{(col)}}_{ij}=0.9^{|i-j|},\forall i,j\in [n]$ and $\bm{\Sigma}^{\mathrm{(row)}}\sim \mathsf{InverseWishart}\qty(\mathbf{I}_p, 1.002\cdot p)$ (see for notation); (ii) $\mathsf{Spiked}$-$\mathsf{B}$: $\mathbf{X}= \mathbf{V} \mathbf{R} \mathbf{W}^\top +n^{-1}N(\rm{0}, \mathbf{I}_n\otimes \mathbf{I}_p)$ where $\mathbf{V}, \mathbf{W}$ are drawn randomly from Haar matrices of dimensions $n,p$ respectively with 3 columns retained, and $\mathbf{R}=\mathop{\mathrm{diag}}(500,250,50)$; (iii) $\mathsf{LNN}$-$\mathsf{B}$: $\mathbf{X}=\mathbf{X}_1^{20}\cdot \mathbf{X}_2$ where $\mathbf{X}_1\in \mathbb{R}^{n\times n},\mathbf{X}_2\in \mathbb{R}^{n\times p}$ have iid entries from $N(0,1)$; (iv) $\mathsf{VAR}$-$\mathsf{B}$: $\mathbf{X}_{i,\bullet}=\sum_{k=1}^{\tau\vee i}\alpha_k \mathbf{X}_{i-k,\bullet}+\bm{\varepsilon}_i$ where $\mathbf{X}_{i,\bullet}$ denotes the $i$-th row of $\mathbf{X}$. Here, $\bm{\varepsilon}_i \sim N(\mathbf{0}, \mathbf{\Sigma})$ with $\mathbf{\Sigma} \sim \mathsf{InverseWishart}(\mathbf{I}_p, 1.1\cdot p)$. We set $\tau=3,\mathbf{\alpha}=\qty(0.7, 0.14, 0.07)$, $\mathbf{X}_1=0$; (v) $\mathsf{MultiCauchy}$: rows of $\mathbf{X}$ are sampled iid from $\mathsf{Mult}$-$\mathsf{t}(1, \mathbf{I}_p)$ (see for notation). The true signal is $\bm{\beta}^\star=\bm{\beta}^\star_{\mathsf{al}}+\bm{\zeta}^\star$ where ${\zeta}^\star_i\sim 0.24\cdot N(-20,1)+0.06\cdot N(10,1)+0.7\cdot \delta_0$ and $\bm{\beta}^\star_{\mathsf{al}}=\sum_{i=1}^{J} \upsilon^\star_i\cdot \mathbf{o}_{\mathcal{J}(i)}$ with $\upsilon^\star_i=5\cdot \sqrt{p}, i \in J_1$ and $J_1=\{1,3,5,7,9\}$. The penalty $h$ used in the complement PCR steps is identical to that used in . When applying PCR debiasing, we set $\mathcal{J}$ to be the top 10, 10, 20, 40, 40 PCs for the five designs, respectively. See the corresponding QQ plot in .](image_va/panel_pcr_sim.png){#figPCRA width="\\textwidth"} ![The setting is the same as in except that the designs are taken from real datasets and the signal is drawn iid from ${\beta}^\star_i\sim 0.24\cdot N(-20,1)+0.06\cdot N(10,1)+0.7\cdot \delta_0$. The designs are (i) $\mathsf{Speech}$: $200\times 400$ with each row being the i-vector (see e.g. [@ibrahim2018vector]) of a speech segment; (ii) $\mathsf{DNA}$: $100\times 180$ entries with each row being a one-hot representation of primate splice-junction gene sequences; (iii) $\mathsf{SP500}$: $300\times 496$ entries where each column represents a time series of daily stock returns of a company listed in the S&P 500 index; (iv) $\mathsf{FaceImage}$: $1348\times 2914$ entries where each row corresponds to a JPEG image of a single face; (v) $\mathsf{Crime}$: $50 \times 99$ entries where each column corresponds to a socio-economic metric in the UCI communities and crime dataset. See for details of these datasets. When applying debiased PCR, we set $\mathcal{J}$ to be the top 10, 10, 20, 50, 20 PCs for the five designs, respectively. See the corresponding QQ plot in .](image_va/panel_pcr_real.png){#figPCRC width="\\textwidth"} ## Inference {#section:inference} In this section, we discuss inference questions surrounding $\bm{\upsilon}^\star, \bm{\zeta}^\star$ and $\bm{\beta}^\star$. 
For any $a, b \in \mathbb{R}$ such that $\Phi(b)-\Phi(a)=1-\alpha$, define the confidence intervals for $\beta^{\star}_i$ to be $$\label{defCIPCR} \mathsf{CI}_i \qty(\hat{\beta}^u_{\mathsf{pcr},i}, \hat{\tau}_*)=\left(\hat{\beta}^u_{\mathsf{pcr},i}+a \sqrt{\hat{\tau}_*}, \hat{\beta}^u_{\mathsf{pcr},i}+b \sqrt{\hat{\tau}_*}\right), \qquad \forall i \in[p];$$ and those for $\upsilon^\star_i$, ${\zeta}^\star_i$ respectively to be $$\label{skdtest} \mathsf{CI}_i \qty(\hat{\bm{\theta}}_{\mathsf{pcr}},d^{-2}_{\mathcal{J}(i)}+\hat{\omega}),\quad \mathsf{CI}_i \qty(\hat{\beta}_{\mathsf{co},i},\hat{\tau}_*), \qquad \forall i \in[p].$$ For the null hypotheses, $H_{i,0}^{\bm{\upsilon}^\star}:\upsilon^\star_i=0$ and $H_{i,0}^{\bm{\zeta}^\star}:{\zeta}^\star_i=0$ we define the p-values and decision rules respectively as $$\label{sdfasmtest} \bigg( P_i \qty(\hat{\bm{\theta}}_{\mathsf{pcr}},d^{-2}_{\mathcal{J}(i)}+\hat{\omega}), T_i \qty(\hat{\bm{\theta}}_{\mathsf{pcr}},d^{-2}_{\mathcal{J}(i)}+\hat{\omega})\bigg)_{i=1}^{J}, \;\; \bigg( P_i \qty(\hat{\beta}_{\mathsf{co},i},\hat{\tau}_*), T_i \qty(\hat{\beta}_{\mathsf{co},i},\hat{\tau}_*)\bigg)_{i=1}^p,$$ where $P_i(\cdot, \cdot)$ and $T_i(\cdot, \cdot)$ are defined as in [\[pvaldef\]](#pvaldef){reference-type="eqref" reference="pvaldef"}. **Remark 22** (Testing for alignment). Recall from the definition of $\bm{\upsilon}^\star$, [\[stwerif\]](#stwerif){reference-type="eqref" reference="stwerif"}, that if $\bm{\upsilon}^\star=\mathbf{0}$, it implies that $\boldsymbol{\beta}^{\star}$ does not align with any eigenvector of the sample covariance matrix. Thus, rejection of the null hypothesis $H^{\bm{\upsilon}^\star}_{i,0}$ for any $i$ indicates alignment of $\bm{\beta}^\star$ with the PC indexed by $\mathcal{J}(i)$. Thus the aforementioned hypothesis testing procedure provides an educated guess for the PCs in $\mathcal{J}$ that align with $\bm{\beta}^\star$. Since the p-values $\qty(P_i (\hat{\bm{\theta}}_{\mathsf{pcr}},d^{-2}_{\mathcal{J}_1(i)}+\hat{\omega}))_{i=1}^J$ are asymptotically independent (second result in [\[XEL3\]](#XEL3){reference-type="eqref" reference="XEL3"}), the Benjamini-Hochberg procedure [@benjamini1995controlling] may be used to control the False Discovery Rate (FDR), i.e. the ratio of PCs falsely identified to align with $\bm{\beta}^\star$ out of all PCs identified to align with $\bm{\beta}^\star$. We will showcase an application of this idea to real data designs later in this section. To the authors' knowledge, this is the first such test of alignment in the context of high-dimensional linear regression. The following is a straightforward Corollary of . **Corollary 23**. *Suppose ---[Assumption 7](#AssumpPriorAL){reference-type="ref" reference="AssumpPriorAL"} hold. The asymptotic FCP of $\mathsf{CI}_i \qty(\hat{\beta}^u_{\mathsf{pcr},i}, \hat{\tau}_*)$ and $\mathsf{CI}_i \qty(\hat{\beta}_{\mathsf{co},i},\hat{\tau}_*)$ converges to $\alpha$ almost surely as $p\to \infty$. If $\bm{\zeta}^\star$ is exchangeable, we also have that $$\mathbb{P} \qty({\beta}^\star_i\in \mathsf{CI}_i \qty(\hat{\beta}^u_{\mathsf{pcr},i}, \hat{\tau}_*)), \mathbb{P} \qty({\zeta}^\star_i\in \mathsf{CI}_i \qty(\hat{\beta}_{\mathsf{co},i},\hat{\tau}_*)) \to \alpha$$ almost surely as $p\to \infty$, whereas $\mathbb{P} \qty(\upsilon^\star_i\in \mathsf{CI}_i \qty(\hat{\bm{\theta}}_{\mathsf{pcr}},d^{-2}_{\mathcal{J}(i)}+\hat{\omega}))$ converges to $\alpha$ without requiring exchangeability of $\bm{\zeta}^\star$. 
For any fixed $i$ such that $\upsilon^\star_i=0$, we have that almost surely $$\lim_{p\to \infty} \mathbb{P} \qty(T_i \qty(\hat{\bm{\theta}}_{\mathsf{pcr}},d^{-2}_{\mathcal{J}(i)}+\hat{\omega})=1) =\alpha.$$ Meanwhile, for the hypothesis tests of $({\zeta}^\star_i)_{i=1}^p$, (a) holds for $T_i \qty(\hat{\beta}_{\mathsf{co},i},\hat{\tau}_*)$ with $\mathsf{C}^\star$ in place of $\mathsf{B}^\star$.*

We now show numerical experiments demonstrating the results above. Figures [6](#figstCIA){reference-type="ref" reference="figstCIA"} and [7](#figstCIC){reference-type="ref" reference="figstCIC"} plot the false coverage proportion (FCP) of the confidence intervals for ${\beta}^\star_i$ under the designs from Figures [4](#figPCRA){reference-type="ref" reference="figPCRA"} and [5](#figPCRC){reference-type="ref" reference="figPCRC"} respectively. These figures demonstrate that the FCP of the PCR-Spectrum-Aware debiasing method matches the intended $\alpha$ values across all the designs exceptionally well, while the other methods fall short. We refer the reader to for inference results on $\bm{\zeta}^\star$.

![Under the setting of , we plot the false coverage proportion (FCP) of the confidence intervals for $({\beta}^\star_i)_{i=1}^p$ defined in [\[defCIPCR\]](#defCIPCR){reference-type="eqref" reference="defCIPCR"}, as we vary the targeted FCP level $\alpha$ from $0$ to $1$. The $x$-axis spans $\alpha$ values from 0 to 1, while the $y$-axis ranges from 0 to 1. The dotted black line is the 45-degree line.](image_va/panel_pcr_sim_FCP.png){#figstCIA width="\\textwidth"}

|                      | Speech | DNA  | SP500     | FaceImage | Crime     |
|----------------------|--------|------|-----------|-----------|-----------|
| $\upsilon_{1}^\star$ | 0.70   | 0.81 | 0.00 \*\* | 0.00 \*\* | 0.00 \*\* |
| $\upsilon_{2}^\star$ | 0.70   | 0.74 | 0.40      | 0.25      | 0.10      |
| $\upsilon_{3}^\star$ | 0.80   | 0.74 | 0.95      | 0.53      | 0.02 \*   |
| $\upsilon_{4}^\star$ | 0.80   | 0.74 | 0.19      | 0.00 \*\* | 0.02 \*   |
| $\upsilon_{5}^\star$ | 0.85   | 0.81 | 0.29      | 0.96      | 0.11      |

: The table prints Benjamini-Hochberg adjusted p-values for the hypothesis tests $H_i:\upsilon^\star_i=0,i=1,...,5$ corresponding to the experiments from for the designs specified in . We use $**$ to indicate rejection under FDR level $0.01$ and $*$ rejection under FDR level $0.05$. [\[tabCc\]]{#tabCc label="tabCc"}

 prints the Benjamini-Hochberg adjusted p-values for the hypothesis tests for $\bm{\upsilon}^\star$ under the setting of (see also ). Observe that none of the tests is rejected at either level $0.01$ or $0.05$ in the first two columns, suggesting the absence of strong alignment between the signal and the top 5 eigenvectors of $\mathbf{X}^{\top}\mathbf{X}$ for the first two designs (Speech and DNA). This is, however, not the case for the last three designs (SP500, FaceImage and Crime). This matches our observation that the vanilla Spectrum-Aware debiasing performs well under the first two designs but fails when applied to the last three designs. We refer the reader to for the adjusted p-values under the setting of .

![Under the setting of , the above plots the false coverage proportion (FCP) of the confidence intervals defined in [\[defCIPCR\]](#defCIPCR){reference-type="eqref" reference="defCIPCR"} for $({\beta}^\star_i)_{i=1}^p$, as we vary the targeted FCP level $\alpha$ from $0$ to $1$. The $x$-axis spans $\alpha$ values from 0 to 1, while the $y$-axis ranges from 0 to 1. 
The dotted black line is the 45-degree line for reference.](image_va/panel_pcr_real_FCP.png){#figstCIC width="\\textwidth"} # Proof outline and novelties {#section:asympchar} Our main result relies on three main steps: (i) a characterization of the empirical distribution of a population version of $\hat{\bm{\beta}}$, (ii) connecting this population version with our data-driven Spectrum-Aware estimator, (iii) developing a consistent estimator of the asymptotic variance. We next describe our main technical novelties for step (i) in Section [5.1](#subsec:riskmath){reference-type="ref" reference="subsec:riskmath"}, and that for steps (ii) and (iii) in . ## Result A: Distributional characterizations {#subsec:riskmath} relies on the characterization of certain properties of $\hat{\bm{\beta}}$ and the following two quantities: $$\label{defr1r2} \mathbf{r}_{*}:=\hat{\bm{\beta}}+\frac{1}{\gamma_*} \mathbf{X}^{\top}(\mathbf{y}-\mathbf{X}\hat{\bm{\beta}}), \quad \mathbf{r}_{**}:=\hat{\bm{\beta}}+\frac{1}{\eta_*-\gamma_*} \mathbf{X}^{\top}(\mathbf{X}\hat{\bm{\beta}}-\mathbf{y}).$$ Here, $\mathbf{r}_{*}$ can be interpreted as the population version of the debiased estimator $\hat{\bm{\beta}}^u$ and $\mathbf{r}_{**}$ as an auxiliary quantity that arises in the intermediate steps in our proof. The following theorem characterizes the empirical distribution of the entries of $\hat{\bm{\beta}}$ and $\mathbf{r}_{*}$. We prove it in . **Theorem 3** (Distributional characterizations). *Under Assumptions [Assumption 2](#AssumpD){reference-type="ref" reference="AssumpD"}--[Assumption 6](#Assumpfix){reference-type="ref" reference="Assumpfix"}, almost surely as $n,p\to \infty$, $$\label{waka1} \left(\hat{\bm{\beta}}, \mathbf{r}_{*}, \bm{\beta}^\star\right) \stackrel{W_2}{\rightarrow}\left(\operatorname{Prox}_{\gamma_*^{-1} h}\left(\sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star\right), \sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star, \mathsf{B}^\star\right),$$ where $\mathsf{Z}\sim N(0,1)$ is independent of $\mathsf{B}^\star$. Furthermore, almost surely, $$\label{waka2} \lim _{p \rightarrow \infty} \frac{1}{p}\left\|\mathbf{X}\mathbf{r}_{**}-\mathbf{y}\right\|^2=\tau_{**} \mathbb{E}\mathsf{D}^2+\delta.$$* ## Result B: consistent estimation of fixed points {#subsec:tauest} Note that the population debiased estimator $\mathbf{r}_{*}$ cannot be used to conduct inference since $\gamma_*$ is unknown. Furthermore, the previous theorem says roughly that $\mathbf{r}_{*}- \bm{\beta}^{\star}$ behaves as a standard Gaussian with variance $\tau_{*}$, without providing any estimator for $\tau_{*}$. We address these two points here. In particular, we will see that addressing these points ties us to establishing consistent estimators for the solution to the fixed point equation [\[fp\]](#fp){reference-type="eqref" reference="fp"}. The theorem below shows that $(\widehat{\mathsf{adj}}, \hat{\eta}_*, \hat{\tau}_*, \hat{\tau}_{**})$ from [\[DEFEFD\]](#DEFEFD){reference-type="eqref" reference="DEFEFD"} serve as consistent estimators of the fixed points $(\gamma_*,\eta_*,\tau_*, \tau_{**})$, and $\hat{\bm{\beta}}^u,\hat{\mathbf{{r}}}_{**}$ as consistent estimators of $\mathbf{r}_{*}$ and $\mathbf{r}_{**}$, where $\hat{\mathbf{{r}}}_{**}$ is defined as in [\[NEFTF22\]](#NEFTF22){reference-type="eqref" reference="NEFTF22"} below. Further, for the purpose of the discussion below, we note that $\hat{\tau}_{**}$ from [\[DEFEFD\]](#DEFEFD){reference-type="eqref" reference="DEFEFD"} can be written as follows. 
$$\label{NEFTF22} \hat{\tau}_{**}(p) :=\frac{\frac{1}{p}\left\|\mathbf{X}\hat{\mathbf{{r}}}_{**}-\mathbf{y}\right\|^2-\frac{n}{p}}{\frac{1}{p} \sum_{i=1}^p d_i^2}; \quad \hat{\mathbf{{r}}}_{**}:=\hat{\bm{\beta}}+\frac{1}{\hat{\eta}_*-\widehat{\mathsf{adj}}} \mathbf{X}^{\top}(\mathbf{X}\hat{\bm{\beta}}-\mathbf{y}).$$ **Theorem 4** (Consistent estimation of fixed points). *Suppose that ---[Assumption 6](#Assumpfix){reference-type="ref" reference="Assumpfix"} hold. Then, the estimators in [\[DEFEFD\]](#DEFEFD){reference-type="eqref" reference="DEFEFD"} and [\[NEFTF22\]](#NEFTF22){reference-type="eqref" reference="NEFTF22"} are well-defined for any $p$ and we have that almost surely as $p\to \infty$, $$\begin{aligned} & \widehat{\mathsf{adj}}\left(p\right) \rightarrow \gamma_*, \quad \hat{\eta}_*\left(p\right) \rightarrow \eta_*, \quad \hat{\tau}_{*}\left(p\right) \rightarrow \tau_{*}, \quad \hat{\tau}_{**}\left(p\right) \rightarrow \tau_{**}\\ & \frac{1}{p}\norm{\hat{\bm{\beta}}^u(p)-\mathbf{r}_{*}}^2\rightarrow 0, \quad \frac{1}{p}\norm{\hat{\mathbf{{r}}}_{**}(p)-\mathbf{r}_{**}}^2\rightarrow 0. \end{aligned}$$* We prove this theorem in . It is not hard to see that combined with proves our main result . ## Proof of result A In this section, we discuss our proof novelties for . contains this proof. We base our proof on the approximate message passing (AMP) machinery (cf. [@donoho2009message; @bayati2011lasso; @sur2019likelihood; @sur2019modern; @zhao2022asymptotic; @feng2022unifying; @fan2022approximate; @mondelli2022optimal; @li2022non] for a non-exhaustive list of references). In this approach, one constructs an AMP algorithm in terms of the fixed points ($\eta_*,\gamma_*, \tau_{*}, \tau_{**}$ in our case) and shows that its iterates $\hat{\mathbf{v}}^t$ converge to our objects of interest $\hat{\mathbf{v}}$ ($\hat{\mathbf{v}}$ can be $\hat{\bm{\beta}}$ or $\mathbf{r}_{*}$ in our case) in the following sense: almost surely $$\label{garbc} \lim _{t \rightarrow \infty} \lim_{p\to \infty} \frac{\left\|\hat{\mathbf{v}}^t-\hat{\mathbf{v}}\right\|^2}{p}=0.$$ AMP theory provides a precise characterization of the following limit involving the algorithmic iterates for any fixed $t$: $$\label{eq:stateevol} \lim_{p \rightarrow \infty } \|\hat{\mathbf{v}}^t-\mathbf{v}_0 \|^2/p,$$ where $\mathbf{v}_0$ is usually a suitable function of $\bm{\beta}^{\star}$ around which one expects $\hat{\mathbf{v}}$ to be centered. Thus plugging this into [\[garbc\]](#garbc){reference-type="eqref" reference="garbc"} yields properties of the object of interest $\hat{\mathbf{v}}$. Within this theory, the framework that characterizes [\[eq:stateevol\]](#eq:stateevol){reference-type="eqref" reference="eq:stateevol"} is known as *state evolution* [@bayati2011lasso; @javanmard2013state; @rangan2019vector]. Despite the existence of this solid machinery, [\[garbc\]](#garbc){reference-type="eqref" reference="garbc"} requires a case-by-case proof, and for many settings, this presents deep challenges. We use the above algorithmic proof strategy; however, in the case of our right rotationally invariant designs, the original AMP algorithms fail to apply. To alleviate this, [@rangan2019vector] proposed vector approximate message passing algorithms. We use these algorithms to create our $\hat{\boldsymbol{v}}^t$'s. Subsequently, proving [\[garbc\]](#garbc){reference-type="eqref" reference="garbc"} presents the main challenge. 
To this end, one needs to show the following Cauchy convergence property of the VAMP iterates: almost surely, $$\lim _{(s, t) \rightarrow \infty}\left(\lim _{p \rightarrow \infty} \frac{1}{p}\left \|\hat{\mathbf{v}}^t-\hat{\mathbf{v}}^s\right \|^2\right)=0.$$ We prove this using a Banach contraction argument [\[eq:gprimebound\]](#eq:gprimebound){reference-type="eqref" reference="eq:gprimebound"}. Such an argument saw prior usage in the context of Bayes optimal learning in [@li2022random]. However, they studied a "matched\" problem where the signal prior (analogous to $\mathsf{B}^\star$ in our setting) is known to the statistician and she uses this exact prior during the estimation process. Arguments under such matched Bayes optimal problems fail to translate to our case. In fact, the main point of our manuscript is to develop data-driven estimators that do not require any knowledge of properties of the true signal. Thus, proving [\[eq:gprimebound\]](#eq:gprimebound){reference-type="eqref" reference="eq:gprimebound"} presents novel difficulties in our setting. To mitigate this, we leverage a fundamental property of the R-transform, specifically that $-zR'(z)/R(z)<1$ for all $z$, and discover and utilize a crucial interplay of this property with the non-expansiveness of the proximal map (b). **Remark 24** (Comparison with [@gerbelot2020asymptotic; @gerbelot2022asymptotic]). In their seminal works, [@gerbelot2020asymptotic; @gerbelot2022asymptotic] initiated the first study of the risk of regularized estimators under right rotationally invariant designs. They stated a version of Theorem [Theorem 3](#thm:empmain){reference-type="ref" reference="thm:empmain"} with a partially non-rigorous argument. In their approach, an auxiliary $\ell_2$ penalty of sufficient magnitude is introduced to ensure contraction of the AMP iterates. Later, they remove this penalty through an analytical continuation argument. However, this proof suffers from two limitations. The first one relates to the non-rigorous applications of the AMP state evolution results. For instance, [@gerbelot2022asymptotic Lemma 3] shows that for each fixed value of $p$, $\lim _{t \rightarrow \infty} \frac{\left \|\hat{x}^t-\hat{x}\right \|^2}{p}=0.$ However, in [@gerbelot2022asymptotic Proof of Lemma 4], the authors claim that this would imply [\[garbc\]](#garbc){reference-type="eqref" reference="garbc"} upon exchanging limits with respect to $t$ and $p$. Such an exchange of limits is non-rigorous since the correctness of AMP state evolution is established for a finite number of iterations ($t<T$, $T$ fixed) as $p\to \infty$. Thereafter the limit in $T$ is taken. The other limitation lies in the analytic continuation approach, which requires multiple exchanges of limit operations [@gerbelot2022asymptotic Appendix H] that seem difficult to justify and incurs intractable assumptions [@gerbelot2022asymptotic Assumption 1 (c), (e)] (in particular, it is unclear how to verify the existence claim in Assumption 1 (c) beyond Gaussian designs). Our alternative approach establishes contraction without the need for a sufficiently large $\ell_2$-regularization component, as in [@gerbelot2020asymptotic; @gerbelot2022asymptotic], and thereby avoids the challenges associated with the analytic continuation argument. ## Proof of result B In this section, we discuss the proof of . See for the proof details. 
First, let us present some heuristics for how one might derive the consistent estimators $\qty(\widehat{\mathsf{adj}}, \hat{\eta}_*, \hat{\tau}_*, \hat{\tau}_{**})$. We start from [\[RCa\]](#RCa){reference-type="eqref" reference="RCa"}. Using , it can be written as $$\label{veree1} \frac{\gamma_*}{\eta_*}=\mathbb{E}\frac{1}{1+\gamma_*^{-1} h^{\prime \prime} \qty(\operatorname{Prox}_{\gamma_*^{-1}h} (\mathsf{B}^\star+\sqrt{\tau_{*}} \mathsf{Z}) )}.$$ Recall that we have established that, as $p\to \infty$, almost surely, $$\label{veree2} \hat{\bm{\beta}}\stackrel{W_2}{\to} \operatorname{Prox}_{\gamma_*^{-1} h}\left(\sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star\right).$$ Combining [\[veree1\]](#veree1){reference-type="eqref" reference="veree1"} and [\[veree2\]](#veree2){reference-type="eqref" reference="veree2"}, we expect that $$\label{veree3} \frac{1}{\eta_*} \approx \frac{1}{p} \mathlarger{\mathlarger{\sum}}_{i=1}^p \frac{\gamma_*^{-1}}{1+\gamma_*^{-1} h^{\prime \prime} (\hat{{\beta}}_i) }.$$ Using the definition of the R-transform, we can rewrite [\[RCc\]](#RCc){reference-type="eqref" reference="RCc"} as $\eta_*^{-1}=\mathbb{E}\frac{1}{\mathsf{D}^2+\eta_*-\gamma_*}$ which, along with [\[Dconve\]](#Dconve){reference-type="eqref" reference="Dconve"}, implies that $$\label{veree4} \frac{1}{\eta_*} \approx \frac{1}{p} \mathlarger{\mathlarger{\sum}}_{i=1}^p \frac{1}{d_i^2+\eta_*-\gamma_*}.$$ Combining [\[veree3\]](#veree3){reference-type="eqref" reference="veree3"} and [\[veree4\]](#veree4){reference-type="eqref" reference="veree4"} and eliminating $\eta_*$, we obtain that $$\label{safdas} \frac{1}{p} \mathlarger{\mathlarger{\sum}}_{i=1}^p \frac{1}{\left(d_i^2-\gamma_* \right)\left(\frac{1}{p} \sum_{j=1}^p \qty(\gamma_*+h^{\prime \prime}\left(\hat{{\beta}}_j\right))^{-1} \right)+1}\approx 1.$$ Setting $\approx$ above to equality, we obtain our exact equation for the Spectrum-Aware adjustment factor, i.e. [\[gammasolvea\]](#gammasolvea){reference-type="eqref" reference="gammasolvea"}. One thus expects intuitively that $\widehat{\mathsf{adj}}$ consistently estimates $\gamma_*$. To establish the consistency rigorously, we recognize and establish the monotonicity of the LHS of [\[safdas\]](#safdas){reference-type="eqref" reference="safdas"} as a function of $\gamma_*$, and study its point-wise limit. We direct the reader to and for more details. Once we have established the consistency of $\widehat{\mathsf{adj}}$ as an estimator for $\gamma_*$, we substitute $\widehat{\mathsf{adj}}$ back into [\[veree3\]](#veree3){reference-type="eqref" reference="veree3"} to obtain a consistent estimator $\hat{\eta}_*$ for $\eta_*$. It is important to note that the definition of $\mathbf{r}_{**}$, as given in [\[defr1r2\]](#defr1r2){reference-type="eqref" reference="defr1r2"}, only involves the fixed points $\eta_*$ and $\gamma_*$. As a result, we can utilize $\widehat{\mathsf{adj}}$ and $\hat{\eta}_*$ to produce a consistent estimator $\hat{\mathbf{{r}}}_{**}$ for $\mathbf{r}_{**}$. From [\[waka2\]](#waka2){reference-type="eqref" reference="waka2"}, we can deduce that $$\tau_{**}\approx \frac{\frac{1}{p}\norm{\mathbf{X}\hat{\mathbf{{r}}}_{**}-\mathbf{y}}^2-\delta}{\mathbb{E}\mathsf{D}^2}.$$ Using this, the fact that $n/p\to \delta$, and [\[Dconve\]](#Dconve){reference-type="eqref" reference="Dconve"}, we obtain the estimator $\hat{\tau}_{**}$ as defined in [\[DEFEFD\]](#DEFEFD){reference-type="eqref" reference="DEFEFD"}. 
Now with estimators for $\gamma_*, \eta_*$ and $\tau_{**}$, we can construct the estimator $\hat{\tau}_*$ for $\tau_*$ using [\[RCd\]](#RCd){reference-type="eqref" reference="RCd"} and [\[Dconve\]](#Dconve){reference-type="eqref" reference="Dconve"}. # Discussion {#sec:conc} We conclude our paper with a discussion of two main points. First, we clarify that our setting covers designs with i.i.d. Gaussian entries. But although we can capture different kinds of dependencies through the right rotational invariance assumption, anisotropic Gaussian designs, where each row of $\mathbf{X}$ comes from $\mathcal{N}(\mathbf{0},\mathbf{\Sigma})$ for an arbitrary non-singular $\mathbf{\Sigma}$, fall outside our purview (unless $\mathbf{\Sigma}$ is right rotationally invariant). Moreover, contrasting with [@bellec2019biasing], our Spectrum-Aware adjustment [\[eq:adjspectrum\]](#eq:adjspectrum){reference-type="eqref" reference="eq:adjspectrum"} does not apply directly to non-separable penalties, e.g. SLOPE, group Lasso, etc. Nonetheless, we note that the current framework can be expanded to address both these issues. In , we suggest a debiased estimator for "ellipsoidal designs" $\mathbf{X}=\mathbf{Q}^\top \mathbf{D}\mathbf{O}^\top \mathbf{\Sigma}^{1/2}$ and non-separable convex penalties. We also conjecture its asymptotic normality using the non-separable VAMP formalism [@fletcher2018plug]. We leave a detailed study of this extensive class of estimators to future works. We discuss another potential direction of extension, that of relaxing the exchangeability assumption in Corollaries [Corollary 17](#sgoods){reference-type="ref" reference="sgoods"} and [Corollary 21](#CORPCRTHM){reference-type="ref" reference="CORPCRTHM"} that establish inference guarantees on finite-dimensional marginals. One may raise a related question, that of constructing confidence intervals for $\mathbf{a}^{\top}\bm{\beta}^\star$ for a given choice of $\mathbf{a}$. Under Gaussian design assumptions, such guarantees were obtained using the leave-one-out method as in [@celentano2020lasso Section 4.6] or Stein's method as in [@bellec2019biasing] without requiring the exchangeability assumption (at the cost of other assumptions on $\bm{\beta}^\star$ and/or $\boldsymbol{\Sigma}$). Unfortunately, these arguments no longer apply under right-rotational invariant designs owing to the presence of a global dependence structure. Thus, establishing such guarantees without exchangeability can serve as an exciting direction for future research. P.S. was funded partially by NSF DMS-2113426. The authors would like to thank Florent Krzakala and Cedric Gerbelot for clarification on the contributions in [@gerbelot2020asymptotic; @gerbelot2022asymptotic], and Boris Hanin for references on linear neural networks. # Preliminary ## Empirical Wasserstein-2 convergence {#section:wass} We will use below the following fact. See [@fan2022approximate Appendix E] and references within for its justification. **Proposition 25**. 
*To verify $(\mathbf{v}_1,\ldots, \mathbf{v}_k) \stackrel{W_2}{\rightarrow} (\mathsf{V}_1,\ldots, \mathsf{V}_k)$, it suffices to check that $$\lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} f\left(v_{i, 1}, \ldots, v_{i, k}\right)=\mathbb{E}\left[f\left(\mathsf{V}_{1}, \ldots, \mathsf{V}_{k}\right)\right]$$ holds for every function $f: \mathbb{R}^k \rightarrow \mathbb{R}$ satisfying, for some constant $C>0$, the pseudo-Lipschitz condition $\left|f(\mathbf{v})-f\left(\mathbf{v}^{\prime}\right)\right| \leq C\left(1+\|\mathbf{v}\|_2+\left\|\mathbf{v}^{\prime}\right\|_2 \right)\left\|\mathbf{v}-\mathbf{v}^{\prime}\right\|_2.$ Meanwhile, this condition implies [\[eq:wasslip\]](#eq:wasslip){reference-type="eqref" reference="eq:wasslip"}.* The following results are from [@fan2022approximate Appendix E]. **Proposition 26**. *Suppose $\mathbf{V}\in \mathbb{R}^{n \times t}$ has i.i.d. rows equal in law to $\mathsf{V}\in \mathbb{R}^t$, which has finite mixed moments of all orders. Then $\mathbf{V}\stackrel{W_2}{\to}\mathsf{V}$ almost surely as $n \to \infty$. Furthermore, if $\mathbf{E} \in \mathbb{R}^{n \times k}$ is deterministic with $\mathbf{E} \stackrel{W_2}{\to}\mathsf{E}$, then $(\mathbf{V},\mathbf{E}) \stackrel{W_2}{\to}(\mathsf{V},\mathsf{E})$ almost surely where $\mathsf{V}$ is independent of $\mathsf{E}$.* **Proposition 27**. *Suppose $\mathbf{V}\in \mathbb{R}^{n \times k}$ satisfies $\mathbf{V}\stackrel{W_2}{\to}\mathsf{V}$ as $n \to \infty$, and $g:\mathbb{R}^k \to \mathbb{R}^l$ is continuous with $\|g(\mathbf{v})\| \leq C(1+\|\mathbf{v}\|)^\mathfrak{p}$ for some $C>0$ and $\mathfrak{p}\geq 1$. Then $g(\mathbf{V}) \stackrel{W_2}{\to}g(\mathsf{V})$ where $g(\cdot)$ is applied row-wise to $\mathbf{V}$.* **Proposition 28**. *Suppose $\mathbf{V}\in \mathbb{R}^{n \times k}$, $\mathbf{W} \in \mathbb{R}^{n \times l}$, and $\mathbf{M}_n,\mathbf{M} \in \mathbb{R}^{k \times l}$ satisfy $\mathbf{V}\stackrel{W_2}{\to}\mathsf{V}$, $\mathbf{W} \stackrel{W_2}{\to}0$, and $\mathbf{M}_n \to \mathbf{M}$ entrywise as $n \to \infty$. Then $\mathbf{V}\mathbf{M}_n+\mathbf{W} \stackrel{W_2}{\to}\mathsf{V}^\top \cdot \mathbf{M}$.* **Proposition 29**. *Fix $\mathfrak{p}\geq 1$ and $k \geq 0$. Suppose $\mathbf{V}\in \mathbb{R}^{n \times k}$ satisfies ${\mathbf{V}} \stackrel{W_2}{\rightarrow} \mathsf{V}$, and $f: \mathbb{R}^k \rightarrow \mathbb{R}$ is a function satisfying [\[eq:wasslip\]](#eq:wasslip){reference-type="eqref" reference="eq:wasslip"} that is continuous everywhere except on a set having probability 0 under the law of $\mathsf{V}$. Then $\frac{1}{n} \sum_{i=1}^n f({\mathbf{V}})_i \rightarrow \mathbb{E}[f(\mathsf{V})].$* **Proposition 30**. *Fix $l \geq 0$, let $\mathbf{O}\sim \mathop{\mathrm{Haar}}(\mathbb{O}(n-l))$, and let $\mathbf{v}\in \mathbb{R}^{n-l}$ and $\bm{\Pi} \in \mathbb{R}^{n \times (n-l)}$ be deterministic, where $\bm{\Pi}$ has orthonormal columns and $n^{-1}\| \mathbf{v}\|^2 \to \sigma^2$ as $n \to \infty$. Then $\bm{\Pi} \mathbf{O}\mathbf{v}\stackrel{W_2}{\to}\mathsf{Z}\sim N(0,\sigma^2)$ almost surely. Furthermore, if $\mathbf{E} \in \mathbb{R}^{n \times k}$ is deterministic with $\mathbf{E} \stackrel{W_2}{\to}\mathsf{E}$, then $(\bm{\Pi} \mathbf{O}\mathbf{v}, \mathbf{E} ) \stackrel{W_2}{\to}(\mathsf{Z},\mathsf{E})$ almost surely where $\mathsf{Z}$ is independent of $\mathsf{E}$.* ## Proximal map {#appendix:prox} We collect a few useful properties of proximal map. **Proposition 31**. 
*Under , we have that for any $v>0$,* - *for any $x,y \in \mathbb{R}$, $y=\operatorname{Prox}_{vh}(x) \iff x-y \in v\partial h(y)$ where $\partial h$ is the subdifferential of $h$;* - *the proximal map is firmly non-expansive: for any $x,y \in \mathbb{R}$, $\qty|\operatorname{Prox}_{vh}(x)-\operatorname{Prox}_{vh}(y)|^2\le (x-y)(\operatorname{Prox}_{vh}(x)-\operatorname{Prox}_{vh}(y))$. This implies that $x\mapsto \operatorname{Prox}_{vh}(x)$ is 1-Lipschitz continuous.* *Proof of .* Under , for any $v>0$, $x\mapsto \operatorname{Prox}_{v h}(x)$ is continuous, monotone increasing in $x$, and continuously differentiable at any $x$ such that $\operatorname{Prox}_{v h}(x) \notin \mathfrak{D}$ and $$\operatorname{Prox}_{v h}^{\prime}(x)=\frac{1}{1+v h^{\prime \prime}\left(\operatorname{Prox}_{v h}(x)\right)}.$$ This follows from the assumption that $h(x)$ is twice continuously differentiable on $\mathfrak{D}^c$ and the implicit differentiation calculation shown in [@gerbelot2020asymptotic Appendix B1]. For $x\in \{x: \operatorname{Prox}_{v h}(x) \in \mathfrak{D}\}$, $\operatorname{Prox}_{vh}(x)$ is differentiable and has derivative equal to 0 except on a finite set of points. To see this, note that the preimage $\operatorname{Prox}^{-1}_{v h}(y)$ for $y\in \mathfrak{D}$ is either a singleton set or a closed interval of the form $[x_1, x_2]$ for $x_1\in \mathbb{R}\cup\{-\infty\},x_2\in \mathbb{R}\cup\{+\infty\}$ and $x_1<x_2$, using continuity and monotonicity of $x\mapsto \operatorname{Prox}_{vh}(x)$. This implies that $\{x: \operatorname{Prox}_{v h}(x) \in \mathfrak{D}\}$ is a union of a finite number of singleton sets and a finite number of closed intervals. Furthermore, $\operatorname{Prox}_{v h}(x)$ is constant on each of the closed intervals. It follows that $\operatorname{Prox}_{v h}(x)$ is differentiable and has derivative equal to $0$ on the interiors of the closed intervals, and that $\mathcal{C}$ is a union of some of the singleton sets and all of the finite-valued endpoints of the closed intervals. We extend the functions $h^{\prime \prime}(x)$ and $\operatorname{Prox}_{v h}^{\prime}(x)$ on $\mathfrak{D}$ and $\mathcal{C}$ respectively in the following way: (i) For $y_0 \in \mathfrak{D}$ such that $\operatorname{Prox}^{-1}_{v h}(y_0)$ is a closed interval with endpoints $x_1\in \mathbb{R}\cup\{-\infty\},x_2\in \mathbb{R}\cup\{+\infty\}$ and $x_1<x_2$, we set $h^{\prime \prime }(y_0) \gets +\infty$ and $\operatorname{Prox}^{\prime}_{vh}(x)\gets 0$ for all $x\in [x_1, x_2]$; (ii) For $y_0\in \mathfrak{D}$ such that $\operatorname{Prox}^{-1}_{v h}(y_0)$ is a singleton set and its sole element $x_0$ is contained in $\mathcal{C}$, we set $h^{\prime\prime}(y_0)\gets +\infty, \operatorname{Prox}_{vh}^{\prime}(x_0)\gets 0$; (iii) For $y_0\in \mathfrak{D}$ such that $\operatorname{Prox}^{-1}_{v h}(y_0)$ is a singleton set $\{x_0\}$ and that $x \mapsto \operatorname{Prox}_{v h}(x)$ is differentiable at $x_0$ with 0 derivative, we set $h^{\prime\prime}(y_0)\gets +\infty$. We show that it is impossible to have some $y_0 \in \mathfrak{D}$ such that $\operatorname{Prox}_{v h}^{-1}\left(y_0\right)$ is a singleton set $\left\{x_0\right\}$ and that $x \mapsto \operatorname{Prox}_{v h}(x)$ is differentiable at $x_0$ with non-zero derivative. This means that every $y\in \mathfrak{D}$ belongs to one of the cases (i), (ii) and (iii) above. Suppose, to the contrary, that such a $y_0$ exists. 
We know from the above discussion that there exists some $\mathfrak{e}>0$ such that $\operatorname{Prox}_{v h}^{\prime}(x)$ is continuous on $\left(x_0, x_0+\mathfrak{e}\right)$ and $\left(x_0-\mathfrak{e}, x_0\right)$. We claim that $x \mapsto \operatorname{Prox}_{v h}^{\prime}(x)$ is continuous at $x_0$. To see this, note that for any $\Delta>0$, we can find $\epsilon\in(0, \mathfrak{e})$ such that - there exists some $x_{+} \in\left(x_0, x_0+\epsilon \right)$ such that for any $x \in\left(x_0, x_0+\epsilon \right)$, $$\left | \operatorname{Prox}_{v h}^{\prime}(x)-\operatorname{Prox}_{v h}^{\prime}\left(x_{+}\right)\right|<\frac{\Delta}{5}, \quad \left| \frac{\operatorname{Prox}_{v h}\left(x_0\right)-\operatorname{Prox}_{v h}\left(x_{+}\right)}{x_0-x_{+}}-\operatorname{Prox}_{v h}^{\prime}\left(x_{+}\right) \right |<\frac{\Delta}{5}$$ - there exists some $x_{-} \in\left(x_0-\epsilon, x_0\right)$ such that for any $x \in\left(x_0-\epsilon, x_0\right)$, $$\left | \operatorname{Prox}_{v h}^{\prime}(x)-\operatorname{Prox}_{v h}^{\prime}\left(x_{-}\right)\right|<\frac{\Delta}{5},\quad \left| \frac{\operatorname{Prox}_{v h}\left(x_0\right)-\operatorname{Prox}_{v h}\left(x_{-}\right)}{x_0-x_{-}}-\operatorname{Prox}_{v h}^{\prime}\left(x_{-}\right) \right|<\frac{\Delta}{5}$$ - for any $x \in\left(x_0-\epsilon, x_0\right) \cup\left(x_0, x_0+\epsilon \right)$, $$\left|\operatorname{Prox}_{v h}^{\prime}\left(x_0\right)-\frac{\operatorname{Prox}_{v h}\left(x_0\right)-\operatorname{Prox}_{v h}(x)}{x_0-x}\right|<\frac{\Delta}{5}.$$ Then for any $x \in\left(x_0-\epsilon, x_0+\epsilon\right)$, we have $\left|\operatorname{Prox}_{v h}^{\prime}\left(x_0\right)-\operatorname{Prox}_{v h}^{\prime}(x)\right|<\Delta$ by the triangle inequality. This proves the claim. Now, since $x \mapsto \operatorname{Prox}_{v h}(x)$ is continuously differentiable on $\left(x_0-\mathfrak{e}, x_0+\mathfrak{e}\right)$ and $\operatorname{Prox}_{v h}^{\prime}\left(x_0\right) \neq 0$, the inverse function theorem implies that $y \mapsto \operatorname{Prox}_{v h}^{-1}(y)$ is a well-defined, real-valued function that is continuously differentiable on some open interval $U$ containing $y_0$. This implies that $h$ is differentiable at any $y \in U$ and that $y\mapsto \operatorname{Prox}_{v h}^{-1}(y)=y+v h^{\prime}(y)$ is continuously differentiable. But this would imply that $h$ is twice continuously differentiable on $U$, which contradicts the assumption that $y_0 \in \mathfrak{D}$. Note that we have assigned $+\infty$ to $h^{\prime\prime}$ on $\mathfrak{D}$ and $0$ to $\operatorname{Prox}^{\prime}_{v h}$ on $\mathcal{C}$. Piecewise continuity of $x\mapsto \frac{1}{w+h^{\prime \prime}\left(\operatorname{Prox}_{v h}(x)\right)}$ for any $w>0$ follows from the discussion above. ◻ ## Properties of the R- and Cauchy transforms The following lemma shows that the Cauchy- and R-transforms of $-\mathsf{D}^2$ are well-defined by ([\[eq:CauchyR\]](#eq:CauchyR){reference-type="ref" reference="eq:CauchyR"}), and reviews their properties. **Lemma 32**. *Let $G(\cdot)$ and $R(\cdot)$ be the Cauchy- and R-transforms of $-\mathsf{D}^2$ under Assumption [Assumption 2](#AssumpD){reference-type="ref" reference="AssumpD"}.* - *The function $G: (-d_{-}, \infty) \to \mathbb{R}$ is positive and strictly decreasing. 
Setting $G\left(-d_-\right) := \lim _{z \to -d_-} G(z) \in (0,\infty]$, $G$ admits a functional inverse $G^{-1}:(0,G(-d_-)) \to (-d_-,\infty)$.* - *The function $R:\left(0,G\left(-d_-\right)\right) \to \mathbb{R}$ is negative and strictly increasing.* - *For any $z \in\left(0, G\left(-d_{-}\right)\right), R^{\prime}(z)=-\left(\mathbb{E} \frac{1}{\left(\mathsf{D}^2+R(z)+\frac{1}{z}\right)^2}\right)^{-1}+\frac{1}{z^2}$.* - *For any $z \in\left(0, G\left(-d_{-}\right)\right),-\frac{z R^{\prime}(z)}{R(z)}\in (0,1)$.* - *For any $z \in\left(0, G\left(-d_{-}\right)\right),z^2R'(z)\in (0,1)$.* - *For all sufficiently small $z\in (0,G(-d_-))$, the R-transform admits a convergent series expansion given by $$\label{expandR} R(z)=\sum_{k \geq 1} \kappa_k z^{k-1}$$ where $\left\{\kappa_k\right\}_{k \geq 1}$ are the free cumulants of the law of $-\mathsf{D}^2$ and $\kappa_1=-\mathbb{E}\mathsf{D}^2$ and $\kappa_2=\mathbb{V}(\mathsf{D}^2)$.* *Proof.* See [@li2022random Lemma G.6] for (a) and (b). To see (c), for any $z \in\left(0, G\left(-d_{-}\right)\right)$, differentiating $R(z)=G^{-1}(z)-z^{-1}$ yields $$-z R^{\prime}(z)=z\left(\mathbb{E} \frac{1}{\left(\mathsf{D}^2+G^{-1}(z)\right)^2}\right)^{-1}-\frac{1}{z}.$$ To see (d), $$\begin{aligned} & -\frac{z R^{\prime}(z)}{R(z)}=\frac{z\left(\mathbb{E} \frac{1}{\left(\mathsf{D}^2+G^{-1}(z)\right)^2}\right)^{-1}-\frac{1}{z}}{G^{-1}(z)-\frac{1}{z}}<1 \\ & \Leftrightarrow z\left(\mathbb{E} \frac{1}{\left(\mathsf{D}^2+G^{-1}(z)\right)^2}\right)^{-1}>G^{-1}(z) \\ & \Leftrightarrow \mathbb{E} \frac{G^{-1}(z)}{\left(\mathsf{D}^2+G^{-1}(z)\right)^2}<z=\mathbb{E} \frac{1}{\mathsf{D}^2+G^{-1}(z)} \\ & \Leftrightarrow \mathbb{E} \frac{-\mathsf{D}^2}{\left(\mathsf{D}^2+G^{-1}(z)\right)^2}<0 \end{aligned}$$ where we used in the second line that $R(z)=G^{-1}(z)-1/z<0$ from (b). Note that the last line is true since $\mathsf{D}^2 \neq 0$ with positive probability. (e) trivially follows from (c). (f) follows from [@nica2006lectures Notation 12.6, Proposition 13.15]. ◻ ## DF adjustment coincides with SA adjustment under the Marchenko-Pastur law {#appendix:adjadgcc} **Lemma 33**. *If the empirical distribution of the eigenvalues of $\mathbf{X}^\top \mathbf{X}$ weakly converges to the Marchenko-Pastur law, then $\abs{\widehat{\mathsf{adj}}-\breve{\mathsf{adj}}}\to 0.$* *Proof of .* By weak convergence, $$\frac{1}{p} \sum_{i=1}^p \frac{-1}{d_i^2+\lambda_2} \rightarrow G\left(-\lambda_2\right)$$ where $z \mapsto G(z)$ is the Cauchy transform of the Marchenko-Pastur law[^4]. Then we have that $$\label{qudraticeq} \widehat{\mathsf{adj}}\rightarrow \lambda_2\left(\frac{1}{1+\lambda_2 G\left(-\lambda_2\right)}-1\right)^{-1}, \quad \breve{\mathsf{adj}}\rightarrow 1-\delta^{-1}\left(1+\lambda_2 G\left(-\lambda_2\right)\right).$$ Observe that the limiting values of $\widehat{\mathsf{adj}}$ and $\breve{\mathsf{adj}}$ above are equal if and only if the following holds $$\label{webao} 1+\left(\lambda_2+1-\delta^{-1}\right) G\left(-\lambda_2\right)-\delta^{-1} \lambda_2 \qty(G\left(-\lambda_2\right))^2=0.$$ Here, [\[webao\]](#webao){reference-type="eqref" reference="webao"} indeed holds true since $G\left(-\lambda_2\right)$ is one of the roots of the quadratic equation [\[webao\]](#webao){reference-type="eqref" reference="webao"}. This can be checked from the explicit expression of the Cauchy transform of the Marchenko-Pastur law (cf. [@bai2010spectral Lemma 3.11]). ◻ ## Fixed point equation {#appendix:misce} ### An auxiliary lemma **Lemma 34**. 
*Under and [Assumption 6](#Assumpfix){reference-type="ref" reference="Assumpfix"}, $$\begin{aligned} &\mathbb{P}(\operatorname{Prox}_{\gamma_{*}^{-1} h}^{\prime}\left(\sqrt{\tau_{*}}\mathsf{Z}+\mathsf{B}^\star\right)\neq 0)>0, \quad \mathbb{P}(\operatorname{Prox}_{\gamma_{*}^{-1} h}^{\prime}\left(\sqrt{\tau_{*}}\mathsf{Z}+\mathsf{B}^\star\right)\neq 1)>0\\ & \mathbb{P}(h^{\prime \prime}\left(\operatorname{Prox}_{\gamma_*^{-1} h}\left(\sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star\right)\right)\neq +\infty )>0, \quad \mathbb{P}(h^{\prime \prime}\left(\operatorname{Prox}_{\gamma_*^{-1} h}\left(\sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star\right)\right)\neq 0 )>0 \end{aligned}$$ where $\mathsf{Z}\sim N(0,1)$ is independent of $\mathsf{B}^\star$.* *Proof of .* Note that $\operatorname{Prox}_{\gamma_{*}^{-1} h}^{\prime}\left(\sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star\right)\neq 0$ with positive probability, or else $\frac{\gamma_*}{\eta_*}=\mathbb{E} \operatorname{Prox}_{\gamma_{*}^{-1} h}^{\prime}\left(\sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star\right)=0$, which violates . Meanwhile, $\operatorname{Prox}_{\gamma_{*}^{-1} h}^{\prime}\left(\sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star\right)\neq 1$ with positive probability, or else $\frac{\gamma_*}{\eta_*}=\mathbb{E} \operatorname{Prox}_{\gamma_{*}^{-1} h}^{\prime}\left(\sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star\right)=1$, which also violates . The inequalities in the second line follow immediately from [\[eq:Jacprox\]](#eq:Jacprox){reference-type="eqref" reference="eq:Jacprox"} and the fact that $\gamma_*>0$. ◻ ### Uniqueness of fixed points given existence {#justifuniqe} Suppose that ---[Assumption 6](#Assumpfix){reference-type="ref" reference="Assumpfix"} hold. Our proof of and does not require $(\gamma_*, \eta_*, \tau_*, \tau_{**})$ to be a unique solution of [\[fp\]](#fp){reference-type="eqref" reference="fp"}, only that it is one of the solutions. However, if there were two different solutions of [\[fp\]](#fp){reference-type="eqref" reference="fp"}, this would lead to a contradiction in . More concretely, suppose that there exist two different solutions of [\[fp\]](#fp){reference-type="eqref" reference="fp"}: $x^{(1)}:=\qty (\gamma_*^{(1)}, \eta_*^{(1)}, \tau_*^{(1)}, \tau_{**}^{(1)})$ and $x^{(2)}:=\qty (\gamma_*^{(2)}, \eta_*^{(2)}, \tau_*^{(2)}, \tau_{**}^{(2)})$. By , we would have that $\qty(\widehat{\mathsf{adj}}, \hat{\eta}_*, \hat{\tau}_*, \hat{\tau}_{**})$ converges almost surely to both $x^{(1)}$ and $x^{(2)}$, hence the contradiction. ### Existence of fixed points for Elastic Net {#existENex} Given that $h$ is the Elastic Net penalty as in [\[elaspen\]](#elaspen){reference-type="eqref" reference="elaspen"} with $\lambda_2>0$, we now show that holds under and [Assumption 3](#AssumpPrior){reference-type="ref" reference="AssumpPrior"}. First, we eliminate the variable $\tau_{**}$ from [\[fp\]](#fp){reference-type="eqref" reference="fp"} via [\[RCb\]](#RCb){reference-type="eqref" reference="RCb"} and introduce the change of variable $\tau_*=\gamma_*^{-2} \alpha_*^{-2}$ for a new variable $\alpha_*>0$. 
We then obtain a new system of fixed-point equations [\[fpstren\]]{#fpstren label="fpstren"} $$\begin{aligned} & \gamma_*^{-1}=\frac{1}{-R\left(\eta_*^{-1}\right)} \label{RSa}\\ & \eta_*^{-1}=\gamma_*^{-1} \mathbb{E} \operatorname{Prox}_{\gamma_*^{-1} h}^{\prime}\left(\mathsf{B}^\star+\frac{\gamma_*^{-1}}{\alpha_*} \mathsf{Z}\right) \label{RSb} \\ & 1=\alpha_*^2 R^{\prime}\left(\eta_*^{-1}\right)\mathbb{E}\left(\operatorname{Prox}_{\gamma_*^{-1} h}\left(\mathsf{B}^\star+\frac{\gamma_*^{-1}}{\alpha_*} \mathsf{Z}\right)-\mathsf{B}^\star\right)^2+\frac{\alpha_*^2}{\gamma_*^{-1}}\left[1+\frac{\eta_*^{-1} R^{\prime}\left(\eta_*^{-1}\right)}{R\left(\eta_*^{-1}\right)}\right] \label{RSc} \end{aligned}$$ Note that holds if and only if we can find a solution $\gamma_*^{-1}, \eta_*^{-1}, \alpha_*>0$ for the above. Denote $\gamma_{+}^{-1}=\lim _{z \rightarrow G\left(-d_{-}\right)} \frac{1}{-R(z)}=\frac{1}{\frac{1}{G\left(-d_{-}\right)}+d_{-}}$. Note that $\gamma_{+}^{-1} \in\left(\frac{1}{\mathbb{E}\mathsf{D}^2}, G\left(-d_{-}\right)\right]$ since $z \mapsto \frac{1}{-R(z)}$ is strictly increasing on its domain $\left(0, G\left(-d_{-}\right)\right)$ using , (b), and $\lim _{z \rightarrow 0} \frac{1}{-R(z)}=\frac{1}{\mathbb{E}\mathsf{D}^2}$ using , (f). Let us define the function $f_1:\left[\frac{1}{\mathbb{E}\mathsf{D}^2}, \gamma_{+}^{-1}\right) \mapsto\left[0, G\left(-d_{-}\right)\right)$ as the inverse of $z \mapsto \frac{1}{-R(z)}$. Note that $f_1$ is well-defined and strictly increasing on its domain. It also satisfies $$f_1\left(\frac{1}{\mathbb{E} \mathsf{D}^2}\right)=0,\qquad \lim _{\gamma^{-1} \rightarrow \gamma_{+}^{-1}} f_1\left(\gamma^{-1}\right)=G\left(-d_{-}\right).$$ Let us also define the function $f_2:(0,+\infty) \times(0,+\infty) \mapsto(0,+\infty)$ such that $$f_2\left(\gamma^{-1}, \alpha\right)=\gamma^{-1} \mathbb{E} \operatorname{Prox}_{\gamma^{-1} h}^{\prime}\left(\mathsf{B}^\star+\frac{\gamma^{-1}}{\alpha} \mathsf{Z}\right)=\frac{\gamma^{-1}}{1+\lambda_2 \gamma^{-1}} \mathbb{P}\left(\left|\frac{1}{\gamma^{-1}} \mathsf{B}^\star+\frac{1}{\alpha} \mathsf{Z}\right|>\lambda_1\right).$$ Now we study the equation (in terms of $\gamma^{-1}$) $$f_1\left(\gamma^{-1}\right)=f_2\left(\gamma^{-1}, \alpha\right).$$ Observe that this equation amounts to eliminating $\eta_*^{-1}$ and solving for $\gamma_*^{-1}$ in terms of $\alpha_*$ from [\[RSa\]](#RSa){reference-type="eqref" reference="RSa"} and [\[RSb\]](#RSb){reference-type="eqref" reference="RSb"}. We claim that for any fixed $\alpha>0$, there is at least one solution $\gamma^{-1}(\alpha) \in\left[\frac{1}{\mathbb{E}\mathsf{D}^2}, \gamma_{+}^{-1}\right)$. To see the claim, note that since $$f_2\left(\frac{1}{\mathbb{E}\mathsf{D}^2}, \alpha\right)=\frac{\frac{1}{\mathbb{E}\mathsf{D}^2}}{1+\lambda_2 \frac{1}{\mathbb{E} \mathsf{D}^2}} \mathbb{P}\left(\left|\mathsf{B}^\star\mathbb{E}\mathsf{D}^2+\frac{1}{\alpha} \mathsf{Z}\right|>\lambda_1\right)>0=f_1\left(\frac{1}{\mathbb{E}\mathsf{D}^2}\right),$$ a sufficient condition for $f_1\left(\gamma^{-1}\right)=f_2\left(\gamma^{-1}, \alpha\right)$ to have a solution on $\left[\frac{1}{\mathbb{E} \mathsf{D}^2}, \gamma_{+}^{-1}\right)$ is $f_2\left(\gamma_{+}^{-1}, \alpha\right)< f_1\left(\gamma_{+}^{-1}\right)$. 
When $\gamma_{+}^{-1}<+\infty$, it suffices that $$\frac{\gamma_{+}^{-1}}{1+\lambda_2 \gamma_{+}^{-1}} \mathbb{P}\left(\left|\frac{1}{\gamma_{+}^{-1}} \mathsf{B}^\star+\frac{1}{\alpha} \mathsf{Z}\right|>\lambda_1\right)<G\left(-d_{-}\right)$$ which holds since $$\frac{\gamma_{+}^{-1}}{1+\lambda_2 \gamma_{+}^{-1}} \mathbb{P}\left(\left|\frac{1}{\gamma_{+}^{-1}} \mathsf{B}^\star+\frac{1}{\alpha} \mathsf{Z}\right|>\lambda_1\right)<\gamma_{+}^{-1}=\frac{1}{\frac{1}{G\left(-d_{-}\right)}+d_{-}} \leq G\left(-d_{-}\right)$$ and when $\gamma_{+}^{-1}=+\infty$, it suffices that $$\frac{1}{\lambda_2} \mathbb{P}\left(\left|\frac{1}{\alpha} \mathsf{Z}\right|>\lambda_1\right)<G\left(-d_{-}\right)$$ which holds because $\gamma_{+}^{-1}=+\infty$ if and only if $d_{-}=0$ and $G\left(-d_{-}\right)=+\infty$. This means that for any $\alpha>0$, we can find $\gamma^{-1}(\alpha)$ and $\eta^{-1}(\alpha)=f_1(\gamma^{-1}(\alpha))=f_2(\gamma^{-1}(\alpha),\alpha)$ that solves [\[RSa\]](#RSa){reference-type="eqref" reference="RSa"} and [\[RSb\]](#RSb){reference-type="eqref" reference="RSb"}. The plan is to plug $\gamma^{-1}(\alpha)$ and $\eta^{-1}(\alpha)$ into the RHS of [\[RSc\]](#RSc){reference-type="eqref" reference="RSc"} to obtain the function $v:(0,+\infty)\mapsto (0,+\infty)$ $$\begin{gathered}v(\alpha)=\alpha^2 R^{\prime}\left(\eta^{-1}(\alpha)\right)\left[\mathbb{E}\left(\operatorname{Prox}_{\gamma^{-1}(\alpha) h}\left(\mathsf{B}^\star+\frac{\gamma^{-1}(\alpha)}{\alpha} \mathsf{Z}\right)-\mathsf{B}^\star\right)^2\right] \\ +\alpha^2 \frac{1}{\gamma^{-1}(\alpha)}\left[1+\frac{\eta^{-1}(\alpha) R^{\prime}\left(\eta^{-1}(\alpha)\right)}{R\left(\eta^{-1}(\alpha)\right)}\right]\end{gathered}$$ and show that the RHS of [\[RSc\]](#RSc){reference-type="eqref" reference="RSc"}, i.e. $v(\alpha)$, diverges to $+\infty$ as $\alpha \to +\infty$ and goes to some value less than $1$ as $\alpha \to 0$. First consider any positive increasing sequence $\left(\alpha_m\right)_{m=1}^{+\infty}$ such that $\alpha_m \rightarrow+\infty$ as $m \rightarrow \infty$. We claim that one can pick solution $\gamma^{-1}\left(\alpha_m\right) \in\left[\frac{1}{\mathbb{E}\mathsf{D}^2}, \gamma_{+}^{-1}\right)$ for each $m$ such that $C_1:=$ $\limsup _{m \rightarrow \infty} \gamma^{-1}\left(\alpha_m\right)<\gamma_{+}^{-1}$. 
For the case where $\gamma_{+}^{-1}<+\infty$, suppose the claim is false, in which case we could extract a subsequence $\left(\alpha_{m_i}\right)_{i=1}^{+\infty}$ of $\left(\alpha_m\right)_{m=1}^{+\infty}$ such that, as $i \rightarrow \infty$, $$\begin{aligned} f_2\left(\gamma_{m_i}^{-1}, \alpha_{m_i}\right)= & \frac{\gamma^{-1}\left(\alpha_{m_i}\right)}{1+\lambda_2 \gamma^{-1}\left(\alpha_{m_i}\right)} \mathbb{P}\left(\left|\frac{1}{\gamma^{-1}\left(\alpha_{m_i}\right)} \mathsf{B}^\star+\frac{1}{\alpha_{m_i}} \mathsf{Z}\right|>\lambda_1\right)=f_1\left(\gamma^{-1}\left(\alpha_{m_i}\right)\right) \\ & \rightarrow G\left(-d_{-}\right) \end{aligned}$$ which is impossible since we also have that, as $i \rightarrow \infty$, $$\begin{aligned} & \frac{\gamma^{-1}\left(\alpha_{m_i}\right)}{1+\lambda_2 \gamma^{-1}\left(\alpha_{m_i}\right)} \mathbb{P}\left(\left|\frac{1}{\gamma^{-1}\left(\alpha_{m_i}\right)} \mathsf{B}^\star+\frac{1}{\alpha_{m_i}} \mathsf{Z}\right|>\lambda_1\right) \rightarrow \frac{\gamma_{+}^{-1}}{1+\lambda_2 \gamma_{+}^{-1}} \mathbb{P}\left(\left|\frac{1}{\gamma_{+}^{-1}} \mathsf{B}^\star\right|>\lambda_1\right) \\ & <\gamma_{+}^{-1}\leq G\left(-d_{-}\right) \end{aligned}$$ For the case $\gamma_{+}^{-1}=+\infty$, suppose the claim is false, in which case we could extract a subsequence $\left(\alpha_{m_i}\right)_{i=1}^{+\infty}$ of $\left(\alpha_m\right)_{m=1}^{+\infty}$ such that, as $i \rightarrow \infty$, $$\begin{aligned} f_2\left(\gamma_{m_i}^{-1}, \alpha_{m_i}\right) & =\frac{\gamma^{-1}\left(\alpha_{m_i}\right)}{1+\lambda_2 \gamma^{-1}\left(\alpha_{m_i}\right)} \mathbb{P}\left(\left|\frac{1}{\gamma^{-1}\left(\alpha_{m_i}\right)} \mathsf{B}^\star+\frac{1}{\alpha_{m_i}} \mathsf{Z}\right|>\lambda_1\right) \\ & =f_1\left(\gamma^{-1}\left(\alpha_{m_i}\right)\right) \rightarrow+\infty \end{aligned}$$ because $\lim _{\gamma^{-1} \rightarrow \gamma_{+}^{-1}} f_1\left(\gamma^{-1}\right)=G\left(-d_{-}\right)=+\infty$ when $\gamma_{+}^{-1}=+\infty$. But this is impossible since the LHS is strictly less than $\lambda_2^{-1}$. Recall that $\eta^{-1}\left(\alpha_m\right)=f_1\left(\gamma^{-1}\left(\alpha_m\right)\right)$. It follows from the above discussion that $$\limsup _{m \rightarrow \infty} \eta^{-1}\left(\alpha_m\right)<G\left(-d_{-}\right)$$ from which we conclude that $$C_2:=\liminf _{m \rightarrow \infty} 1+\frac{\eta^{-1}\left(\alpha_m\right) R^{\prime}\left(\eta^{-1}\left(\alpha_m\right)\right)}{R\left(\eta^{-1}\left(\alpha_m\right)\right)}>0.$$ This follows from the fact that $\lim _{x \rightarrow 0} 1+\frac{x R^{\prime}(x)}{R(x)}=1$ using , (f) and continuity of the function $x \mapsto 1+\frac{x R^{\prime}(x)}{R(x)}$ on $\left(0, G\left(-d_{-}\right)\right)$. Note that by the above discussion, we have $\liminf _{\alpha \rightarrow+\infty} \frac{v(\alpha)}{\alpha^2} \geq \frac{C_2}{C_1}$, which then implies that $$\label{liminffixe} \liminf _{\alpha \rightarrow+\infty} v(\alpha) =+\infty.$$ Now consider any positive decreasing sequence $\left(\alpha_m\right)_{m=1}^{+\infty}$ such that $\alpha_m \rightarrow 0$ as $m \rightarrow \infty$. We again claim that one can pick the solution $\gamma^{-1}\left(\alpha_m\right) \in\left[\frac{1}{\mathbb{E}\mathsf{D}^2}, \gamma_{+}^{-1}\right)$ for each $m$ such that $C_3:=\limsup _{m \rightarrow \infty} \gamma^{-1}\left(\alpha_m\right) \in\left[\frac{1}{\mathbb{E} \mathsf{D}^2},+\infty\right)$ and $\limsup _{m \rightarrow \infty} \eta^{-1}\left(\alpha_m\right)<G\left(-d_{-}\right)$. The proofs are analogous to the previous case. 
We then have that $$C_4:=\limsup _{m \rightarrow \infty} 1+\frac{\eta^{-1}\left(\alpha_m\right) R^{\prime}\left(\eta^{-1}\left(\alpha_m\right)\right)}{R\left(\eta^{-1}\left(\alpha_m\right)\right)}<+\infty.$$ This follows from the fact that $\lim _{x \rightarrow 0} 1+\frac{x R^{\prime}(x)}{R(x)}=1$ by , (f), $\limsup _{m \rightarrow \infty} \eta^{-1}\left(\alpha_m\right)<G\left(-d_{-}\right)$ as just shown, and continuity of the function $x \mapsto 1+\frac{x R^{\prime}(x)}{R(x)}$ on $\left(0, G\left(-d_{-}\right)\right)$. We can then conclude that $$\limsup _{m \rightarrow+\infty} \frac{1}{\gamma^{-1}\left(\alpha_m\right)}\left[1+\frac{\eta^{-1}\left(\alpha_m\right) R^{\prime}\left(\eta^{-1}\left(\alpha_m\right)\right)}{R\left(\eta^{-1}\left(\alpha_m\right)\right)}\right] \leq C_4 \mathbb{E}\mathsf{D}^2,$$ which then implies that $$\label{baolde} \limsup _{m \rightarrow+\infty} \frac{\alpha_m^2}{\gamma^{-1}\left(\alpha_m\right)}\left[1+\frac{\eta^{-1}\left(\alpha_m\right) R^{\prime}\left(\eta^{-1}\left(\alpha_m\right)\right)}{R\left(\eta^{-1}\left(\alpha_m\right)\right)}\right]=0.$$ Note also that $$\label{boundedRp} \limsup _{m \rightarrow+\infty}\left|R^{\prime}\left(\eta^{-1}\left(\alpha_m\right)\right)\right|<+\infty$$ since $\limsup_{m \rightarrow \infty} \eta^{-1}\left(\alpha_m\right)<G\left(-d_{-}\right)$. For each $m$, we have $$\begin{aligned} \eta^{-1}\left(\alpha_m\right)= & f_2\left(\gamma^{-1}\left(\alpha_m\right), \alpha_m\right) \\ & =\frac{\gamma^{-1}\left(\alpha_m\right)}{1+\lambda_2 \gamma^{-1}\left(\alpha_m\right)} \mathbb{P}\left(\left|\alpha_m \mathsf{B}^\star+\gamma^{-1}\left(\alpha_m\right) \mathsf{Z}\right|>\alpha_m \gamma^{-1}\left(\alpha_m\right) \lambda_1\right) \end{aligned}$$ It follows that $$\label{LBLBsB} \limsup _{m \rightarrow+\infty}\left|\left(\frac{\gamma^{-1}\left(\alpha_m\right)}{1+\lambda_2 \gamma^{-1}\left(\alpha_m\right)}\right)^2-\eta^{-2}\left(\alpha_m\right)\right|=0.$$ Also note that $$\begin{gathered} \alpha_m^2 \mathbb{E}\left(\operatorname{Prox}_{\gamma^{-1}\left(\alpha_m\right) h}\left(\mathsf{B}^\star+\frac{\gamma^{-1}\left(\alpha_m\right)}{\alpha_m} \mathsf{Z}\right)-\mathsf{B}^\star\right)^2 \\ =\mathbb{E}\left(\frac{\operatorname{sgn}\left(\alpha_m \mathsf{B}^\star+\gamma^{-1}\left(\alpha_m\right) \mathsf{Z}\right)}{1+\lambda_2 \gamma^{-1}\left(\alpha_m\right)}\left(\left|\alpha_m \mathsf{B}^\star+\gamma^{-1}\left(\alpha_m\right) \mathsf{Z}\right|\right.\right. 
\left.\left.-\alpha_m \gamma^{-1}\left(\alpha_m\right) \lambda_1\right)_{+}-\alpha_m \mathsf{B}^\star\right)^2 \end{gathered}$$ Since $\gamma^{-1}\left(\alpha_m\right)$ is bounded between $\frac{1}{\mathbb{E}\mathsf{D}^2}$ and $C_3$ for all sufficiently large $m$, it follows from the dominated convergence theorem that $$\label{LBLBsC} \limsup _{m \rightarrow+\infty}\left|\left(\frac{\gamma^{-1}\left(\alpha_m\right)}{1+\lambda_2 \gamma^{-1}\left(\alpha_m\right)}\right)^2-\alpha_m^2 \mathbb{E}\left(\operatorname{Prox}_{\gamma^{-1}\left(\alpha_m\right) h}\left(\mathsf{B}^\star+\frac{\gamma^{-1}\left(\alpha_m\right)}{\alpha_m} \mathsf{Z}\right)-\mathsf{B}^\star\right)^2\right|=0.$$ Combining [\[boundedRp\]](#boundedRp){reference-type="eqref" reference="boundedRp"}, [\[LBLBsB\]](#LBLBsB){reference-type="eqref" reference="LBLBsB"} and [\[LBLBsC\]](#LBLBsC){reference-type="eqref" reference="LBLBsC"}, we obtain that $$\label{DsfKW} \begin{aligned} \limsup _{m \rightarrow+\infty} \bigg | \eta^{-2} & \left(\alpha_m\right) R^{\prime}\left(\eta^{-1}\left(\alpha_m\right)\right) \\ & -\alpha_m^2 R^{\prime}\left(\eta^{-1}\left(\alpha_m\right)\right) \mathbb{E}\left(\operatorname{Prox}_{\gamma^{-1}\left(\alpha_m\right) h}\left(\mathsf{B}^\star+\frac{\gamma^{-1}\left(\alpha_m\right)}{\alpha_m} \mathsf{Z}\right)-\mathsf{B}^\star\right)^2 \bigg |=0. \end{aligned}$$ Now note that $$\limsup _{m \rightarrow+\infty} \eta^{-2}\left(\alpha_m\right) R^{\prime}\left(\eta^{-1}\left(\alpha_m\right)\right)<1$$ using , (e), $\limsup _{m \rightarrow \infty} \eta^{-1}\left(\alpha_m\right)<G\left(-d_{-}\right)$ from the above, and the continuity of $x \mapsto x^2 R^{\prime}(x)$ on $\left(0, G\left(-d_{-}\right)\right)$. Using this and [\[DsfKW\]](#DsfKW){reference-type="eqref" reference="DsfKW"}, we may then conclude that $$\limsup _{m \rightarrow+\infty} \alpha_m^2 R^{\prime}\left(\eta^{-1}\left(\alpha_m\right)\right) \mathbb{E}\left(\operatorname{Prox}_{\gamma^{-1}\left(\alpha_m\right) h}\left(\mathsf{B}^\star+\frac{\gamma^{-1}\left(\alpha_m\right)}{\alpha_m} \mathsf{Z}\right)-\mathsf{B}^\star\right)^2<1,$$ which along with [\[baolde\]](#baolde){reference-type="eqref" reference="baolde"} implies that $$\label{supvlim} \limsup _{\alpha \rightarrow 0} v(\alpha)<1.$$ Combining [\[liminffixe\]](#liminffixe){reference-type="eqref" reference="liminffixe"} and [\[supvlim\]](#supvlim){reference-type="eqref" reference="supvlim"} with the continuity of $\alpha \mapsto v(\alpha)$ on $(0,+\infty)$, we know that there exists a solution $\alpha_* \in(0,+\infty)$ to the equation $v\left(\alpha_*\right)=1$. Therefore, by construction, a solution of [\[fpstren\]](#fpstren){reference-type="eqref" reference="fpstren"} is given by $\left(\gamma_*^{-1}, \eta_*^{-1}, \alpha_*\right)=\left(\gamma^{-1}\left(\alpha_*\right), \eta^{-1}\left(\alpha_*\right), \alpha_*\right)$. This concludes the proof. 
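The constructive argument above is also easy to explore numerically. The following is a minimal Monte Carlo sketch (illustrative only): it fixes a toy two-atom law for $\mathsf{D}^2$, a Gaussian law for $\mathsf{B}^\star$ and Elastic Net parameters of our own choosing, evaluates $f_1$, $f_2$ and $v$ as defined above, and locates a solution of $v(\alpha_*)=1$ by bisection. All numerical brackets, tolerances and helper names are assumptions rather than part of the proof.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

# Toy model: every concrete number below is an illustrative assumption.
d2_atoms, d2_probs = np.array([0.5, 2.0]), np.array([0.5, 0.5])   # two-atom law of D^2
d_minus = d2_atoms.min()
lam1, lam2, s2 = 0.3, 0.5, 1.0                                    # Elastic Net h; B* ~ N(0, s2)
rng = np.random.default_rng(0)
B, Z = rng.normal(0.0, np.sqrt(s2), 200_000), rng.normal(size=200_000)

def Ed2(f):                                  # expectation of f(D^2)
    return float(np.sum(d2_probs * f(d2_atoms)))

def G(z):                                    # G(z) = E 1/(D^2 + z) for z > -d_-
    return Ed2(lambda d2: 1.0 / (d2 + z))

def Ginv(z):                                 # functional inverse of G (numerical)
    return brentq(lambda w: G(w) - z, -d_minus + 1e-10, 1e10)

def R(z):                                    # R-transform of -D^2: R(z) = G^{-1}(z) - 1/z
    return Ginv(z) - 1.0 / z

def Rp(z):                                   # R'(z), using the identity in the lemma above
    return -1.0 / Ed2(lambda d2: 1.0 / (d2 + Ginv(z)) ** 2) + 1.0 / z ** 2

def f1(g):                                   # inverse of z -> 1/(-R(z)): eta^{-1} as a function of gamma^{-1}
    return brentq(lambda z: 1.0 / (-R(z)) - g, 1e-8, 1e8)

def f2(g, alpha):                            # gamma^{-1} E Prox'_{gamma^{-1} h}(B* + gamma^{-1} Z / alpha)
    sig = np.sqrt(s2 / g ** 2 + 1.0 / alpha ** 2)
    return g / (1.0 + lam2 * g) * 2.0 * norm.sf(lam1 / sig)

gam_plus_inv = 1.0 / d_minus                 # D^2 has an atom at d_-, so G(-d_-) = +inf and gamma_+^{-1} = 1/d_-

def gamma_inv(alpha):                        # inner solve of f1(g) = f2(g, alpha) on [1/E D^2, gamma_+^{-1})
    return brentq(lambda g: f1(g) - f2(g, alpha),
                  1.0 / Ed2(lambda d2: d2) + 1e-6, gam_plus_inv - 1e-3)

def prox(x, g):                              # Elastic Net proximal map Prox_{g h}(x)
    return np.sign(x) * np.maximum(np.abs(x) - g * lam1, 0.0) / (1.0 + g * lam2)

def v(alpha):                                # right-hand side of the third fixed-point equation
    g = gamma_inv(alpha)
    z = f1(g)                                # eta^{-1}
    mse = np.mean((prox(B + g * Z / alpha, g) - B) ** 2)
    return alpha ** 2 * Rp(z) * mse + alpha ** 2 / g * (1.0 + z * Rp(z) / R(z))

alpha_star = brentq(lambda a: v(a) - 1.0, 0.05, 20.0)   # v < 1 near 0 and v diverges, so bisect (heuristic bracket)
print(dict(alpha=alpha_star, gamma_inv=gamma_inv(alpha_star), eta_inv=f1(gamma_inv(alpha_star))))
```

For this toy law, $\mathsf{D}^2$ has an atom at $d_-$, so $G(-d_-)=+\infty$ while $\gamma_+^{-1}=1/d_-$ is finite; other choices of the law of $\mathsf{D}^2$, of the law of $\mathsf{B}^\star$ and of $(\lambda_1,\lambda_2)$ can be substituted freely.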
## VAMP algorithm {#vamp} The VAMP algorithm consists of the following iteration: for $t \geq 1$, $$\begin{aligned} & \hat{\mathbf{x}}_{1t}=\operatorname{Prox}_{\gamma_{1, t-1}^{-1} h}\left(\mathbf{r}_{1, t-1}\right), \quad \eta_{1 t}^{-1}=\gamma_{1, t-1}^{-1} \nabla \cdot \operatorname{Prox}_{\gamma_{1, t-1}^{-1} h}\left(\mathbf{r}_{1, t-1}\right) \\ & \gamma_{2 t}=\eta_{1 t}-\gamma_{1, t-1}, \quad \mathbf{r}_{2t}=\left(\eta_{1 t} \hat{\mathbf{x}}_{1t}-\gamma_{1, t-1} \mathbf{r}_{1, t-1}\right) / \gamma_{2 t} \\ & \hat{\mathbf{x}}_{2t}=\left(\mathbf{X}^{\top} \mathbf{X}+\gamma_{2 t} \mathbf{I}_p\right)^{-1}\left(\mathbf{X}^{\top} \mathbf{y}+\gamma_{2 t} \mathbf{r}_{2t}\right), \quad \eta_{2 t}^{-1}=\frac{1}{p} \operatorname{Tr}\left[\left(\mathbf{X}^{\top} \mathbf{X}+\gamma_{2 t} \mathbf{I}_p \right)^{-1}\right] \\ & \gamma_{1 t}=\eta_{2 t}-\gamma_{2 t}, \quad \mathbf{r}_{1t}=\left(\eta_{2 t} \hat{\mathbf{x}}_{2t}-\gamma_{2 t} \mathbf{r}_{2t}\right) / \gamma_{1 t} \end{aligned}$$ The algorithm can be initialized at $\mathbf{r}_{10} \in \mathbb{R}^p, \gamma_{10}, \tau_{10} >0$ such that $\left(\mathbf{r}_{10}, \bm{\beta}^\star\right) \stackrel{W_2}{\to}\left(\mathsf{R}_{10}, \mathsf{B}^\star\right)$ and $\mathsf{R}_{10}-\mathsf{B}^\star\sim N\left(0, \tau_{10}\right)$. This algorithm was first introduced in [@rangan2019vector], and the iterates $\hat{\mathbf{x}}_{1t}, \hat{\mathbf{x}}_{2t}$ are designed to track $\hat{\bm{\beta}}$. The performance of this algorithm is characterized by the state evolution iterations: for $t\ge 1$, $$\begin{aligned} & \bar{\alpha}_{1 t}=\mathbb{E} \operatorname{Prox}_{\bar{\gamma}_{1, t-1}^{-1} h}^{\prime}\left(\mathsf{B}^\star+N\left(0, \tau_{1, t-1} \right)\right), \quad \bar{\eta}_{1 t}^{-1}=\bar{\gamma}_{1, t-1}^{-1} \bar{\alpha}_{1 t} \\ & \bar{\gamma}_{2 t}=\bar{\eta}_{1 t}-\bar{\gamma}_{1, t-1}, \quad \tau_{2 t}=\frac{1}{\left(1-\bar{\alpha}_{1 t}\right)^2}\left[\mathcal{E}_1\left(\bar{\gamma}_{1, t-1}, \tau_{1, t-1}\right)-\bar{\alpha}_{1 t}^2 \tau_{1, t-1}\right] \\ & \bar{\alpha}_{2 t}=\bar{\gamma}_{2 t} \mathbb{E} \frac{1}{\mathsf{D}^2+\bar{\gamma}_{2 t}}, \quad \bar{\eta}_{2 t}^{-1}=\bar{\gamma}_{2 t}^{-1} \bar{\alpha}_{2 t} \\ & \bar{\gamma}_{1, t}=\bar{\eta}_{2 t}-\bar{\gamma}_{2 t}, \quad \tau_{1 t}=\frac{1}{\left(1-\bar{\alpha}_{2 t}\right)^2}\left[\mathcal{E}_2\left(\bar{\gamma}_{2 t}, \tau_{2 t}\right)-\bar{\alpha}_{2 t}^2 \tau_{2 t}\right] \end{aligned}$$ where $$\begin{aligned} & \mathcal{E}_1\left(\gamma_1, \tau\right):=\mathbb{E}\left(\operatorname{Prox}_{\gamma_1^{-1} h}\left(\mathsf{B}^\star+N\left(0, \tau\right)\right)-\mathsf{B}^\star\right)^2, \quad \mathcal{E}_2\left(\gamma_2, \tau_2\right):=\mathbb{E}\left[\frac{\mathsf{D}^2+\tau_2 \gamma_2^2}{\left(\mathsf{D}^2+\gamma_2\right)^2}\right]. \end{aligned}$$ # Proof of asymptotic normality ## Proof of result A: distribution characterization {#section:DIST} In this section, we prove using the VAMP algorithm as a proof device. We define the version of the VAMP algorithm we will use in , prove Cauchy convergence of its iterates in , and prove in . To streamline the presentation, proofs of intermediate claims are collected in . ### The oracle VAMP Algorithm {#subsectionVAMP} We review the oracle VAMP algorithm defined in [@gerbelot2020asymptotic] and present an extended state evolution result for the algorithm. This algorithm is obtained by initializing the VAMP algorithm introduced in [@rangan2019vector] at stationarity $\mathbf{r}_{10}=\bm{\beta}^\star+N(\rm{0},\tau_{*}\mathbf{I}_p), \gamma_{10}^{-1}=\gamma_*^{-1}$. See for a review. 
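For concreteness, a minimal sketch of the plain, data-driven VAMP recursion reviewed above is given below for the Elastic Net penalty. It is illustrative only: the helper names, the absence of any damping, and the reading of $\nabla \cdot$ as the normalized divergence $\frac{1}{p}\sum_{i}\partial_i$ are our own choices rather than part of the original specification in [@rangan2019vector].

```python
import numpy as np

def prox_en(x, v, lam1, lam2):
    """Elastic Net proximal map Prox_{v h}(x), h(b) = lam1*|b| + (lam2/2)*b^2, and its derivative."""
    p = np.sign(x) * np.maximum(np.abs(x) - v * lam1, 0.0) / (1.0 + v * lam2)
    dp = (np.abs(x) > v * lam1).astype(float) / (1.0 + v * lam2)
    return p, dp

def vamp(X, y, lam1, lam2, r10, gamma10, n_iter=50):
    """Undamped VAMP recursion as displayed above (illustrative sketch)."""
    _, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    r1, g1 = r10.copy(), float(gamma10)
    for _ in range(n_iter):
        # denoising half: x_hat_{1t} and eta_{1t}
        x1, dprox = prox_en(r1, 1.0 / g1, lam1, lam2)
        eta1 = g1 / np.mean(dprox)          # eta_{1t}^{-1} = gamma_{1,t-1}^{-1} * (1/p) sum_i Prox'
        g2 = eta1 - g1
        r2 = (eta1 * x1 - g1 * r1) / g2     # no damping, so g2 <= 0 would break this sketch
        # linear (LMMSE) half: x_hat_{2t} and eta_{2t}
        A_inv = np.linalg.inv(XtX + g2 * np.eye(p))
        x2 = A_inv @ (Xty + g2 * r2)
        eta2 = p / np.trace(A_inv)          # eta_{2t}^{-1} = (1/p) Tr[(X^T X + gamma_{2t} I)^{-1}]
        g1_new = eta2 - g2
        r1 = (eta2 * x2 - g2 * r2) / g1_new
        g1 = g1_new
    return x1, x2
```

The oracle variant analyzed next freezes the parameters $\gamma_{1, t-1}$ and $\eta_{1 t}$ at the fixed-point values $\gamma_*$ and $\eta_*$.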
Then for $t\ge 1$, we have iterates [\[empvamp\]]{#empvamp label="empvamp"} $$\begin{aligned} & \hat{\mathbf{x}}_{1t}=\operatorname{Prox}_{\gamma_*^{-1} h}\left(\mathbf{r}_{1, t-1}\right) \label{x1t} \\ & \mathbf{r}_{2t}=\frac{1}{\eta_*-\gamma_*}\left(\eta_* \hat{\mathbf{x}}_{1t}-\gamma_* \mathbf{r}_{1, t-1}\right) \label{r2t} \\ & \hat{\mathbf{x}}_{2t}=\left(\mathbf{X}^{\top} \mathbf{X}+\left(\eta_*-\gamma_*\right) \mathbf{I}_p\right)^{-1}\left(\mathbf{X}^{\top} \mathbf{y}+\left(\eta_*-\gamma_*\right) \mathbf{r}_{2t}\right) \label{x2t}\\ & \mathbf{r}_{1t}=\frac{1}{\gamma_*}\left(\eta_* \hat{\mathbf{x}}_{2t}-\left(\eta_*-\gamma_*\right) \mathbf{r}_{2t}\right) \label{r1t} \end{aligned}$$ Let us define functions $F: \mathbb{R}\times \mathbb{R}\to \mathbb{R}$ and $F':\mathbb{R}\times \mathbb{R}\to \mathbb{R}$ $$\label{Fdef} \begin{aligned} & F(q, x):=\frac{\eta_*}{\eta_*-\gamma_*} \operatorname{Prox}_{\gamma_*^{-1} h}(q+x)-\frac{\gamma_*}{\eta_*-\gamma_*} q-\frac{\eta_*}{\eta_*-\gamma_*} x \\ & F^\prime(q, x):=\frac{\eta_*}{\eta_*-\gamma_*} \operatorname{Prox}^\prime_{\gamma_*^{-1} h}(q+x)-\frac{\gamma_*}{\eta_*-\gamma_*} \end{aligned}$$ Note that for any fixed $x$, $F^\prime(q,x)$ equals to the derivative of $q \mapsto F(q,x)$ whenever the derivative exists, and at the finitely many points where $q\mapsto F(q,x)$ is not differentiable $F^\prime(q,x)$ equals to $0$ (cf. ). We also define some quantities $$\label{Quant} \begin{aligned} & \bm{\Lambda}:=\frac{\eta_*\left(\eta_*-\gamma_*\right)}{\gamma_*}\left(\mathbf{D}^{\top} \mathbf{D}+\left(\eta_*-\gamma_*\right) \mathbf{I}_p\right)^{-1}-\left(\frac{\eta_*-\gamma_*}{\gamma_*}\right) \cdot \mathbf{I}_p \\ & \bm{\xi}:=\mathbf{Q}\bm{\varepsilon}, \quad \mathbf{e}_b:=\frac{\eta_*}{\gamma_*}\left(\mathbf{D}^{\top} \mathbf{D}+\left(\eta_*-\gamma_*\right) \mathbf{I}_p\right)^{-1} \mathbf{D}^{\top} \bm{\xi}, \quad \mathbf{e}:=\mathbf{O}^{\top} \mathbf{e}_b \end{aligned}$$ We note some important properties of these quantities, which are essentially consequence of and [\[fp\]](#fp){reference-type="eqref" reference="fp"}. We defer the proof of to . **Proposition 35**. *Under ---[Assumption 4](#Assumph){reference-type="ref" reference="Assumph"}, almost surely,* *[\[dddfef\]]{#dddfef label="dddfef"} $$\begin{aligned} & \lim _{p \rightarrow \infty} \frac{1}{p} \operatorname{Tr}(\bm{\Lambda})=0 , \quad \kappa_*:=\lim _{p \rightarrow \infty} \frac{1}{p} \operatorname{Tr}\left(\bm{\Lambda}^2\right)=\mathbb{E}\left(\frac{\eta_*\left(\eta_*-\gamma_*\right)}{\gamma_*\left(\mathsf{D}^2+\left(\eta_*-\gamma_*\right)\right)}-\frac{\eta_*-\gamma_*}{\gamma_*}\right)^2 \\ & b_*:=\lim _{p \rightarrow \infty} \frac{1}{p}\left\|\mathbf{e}_b\right\|^2=\frac{1}{\gamma_*}-\frac{\kappa_*}{\eta_*-\gamma_*}=\left(\frac{\eta_*}{\gamma_*}\right)^2 \mathbb{E} \frac{\mathsf{D}^2}{\left(\mathsf{D}^2+\eta_*-\gamma_*\right)^2},\quad \tau_{*}=b_*+\kappa_* \tau_{**} \label{eq:somide}\\ & \mathbb{E} F^{\prime}\left(\sqrt{\tau_{*}}\mathsf{Z}, \mathsf{B}^\star\right)=0, \quad \mathbb{E} F\left(\sqrt{\tau_{*}}\mathsf{Z}, \mathsf{B}^\star\right)^2= \tau_{**} \label{eq:denoiserpp2} \end{aligned}$$* *where $\mathsf{Z}\sim N(0,1)$ is independent of $\mathsf{B}^\star$. 
Moreover, the function $(q,x)\mapsto F(q,x)$ is Lipschitz continuous on $\mathbb{R}\times \mathbb{R}$.* Then, one can show that by eliminating $\hat{\mathbf{x}}_{1t}, \hat{\mathbf{x}}_{2t}$ and introducing a change of variables $$\label{changeofv} \mathbf{x}^t=\mathbf{r}_{2t}-\bm{\beta}^\star, \quad \mathbf{y}^t=\mathbf{r}_{1t}-\bm{\beta}^\star-\mathbf{e}, \quad \mathbf{s}^t=\mathbf{O}\mathbf{x}^t$$ [\[empvamp\]](#empvamp){reference-type="eqref" reference="empvamp"} is equivalent to the following iterations: with initialization $\mathbf{q}^0\sim N(\rm{0},\tau_{*}\cdot \mathbf{I}_p), {\mathbf{x}}^{1}=F(q_0,\bm{\beta}^\star)$, for $t=1,2,3,\ldots,$ $$\label{eq:xsy} \mathbf{s}^t=\mathbf{O}\mathbf{x}^t, \qquad \mathbf{y}^t=\mathbf{O}^\top \bm{\Lambda} \mathbf{s}^t, \qquad \mathbf{x}^{t+1}=F(\mathbf{y}^t+\mathbf{e},\bm{\beta}^\star).$$ The following Proposition will be needed later. Its proof is deferred to . **Proposition 36**. *Suppose Assumptions [Assumption 2](#AssumpD){reference-type="ref" reference="AssumpD"}--[Assumption 3](#AssumpPrior){reference-type="ref" reference="AssumpPrior"} hold. Define random variables $$\Xi\sim N(0,1), \qquad \mathsf{P}_0\sim N(0,\tau_{*}), \qquad \mathsf{E}\sim N(0,b_*)$$ independent of each other and of $\mathsf{D}$, and set $$\mathsf{L}=\frac{\eta_{*}-\gamma_{*}}{\gamma_{*}}\left(\frac{\eta_*}{\mathsf{D}^{2}+\eta_{*}-\gamma_{*}}-1\right),\; \mathsf{E}_{b}=\frac{\eta_{*}}{\gamma_{*}} \frac{\mathsf{D} \Xi}{\mathsf{D}^{2}+\eta_{*}-\gamma_{*}}, \; \mathsf{H}=(\mathsf{B}^\star,\mathsf{D},\mathsf{D}\Xi,\mathsf{L},\mathsf{E}_b,\mathsf{E},\mathsf{P}_0).$$ Then $\kappa_*=\mathbb{E}\mathsf{L}^2$ and $b_*=\mathbb{E}\mathsf{E}_b^2$. Furthermore, almost surely as $n,p\to \infty$, $$\mathbf{H}:= \left(\bm{\beta}^\star, \mathbf{D}^{\top} \bm{1}_{n \times 1}, \mathbf{D}^{\top} \bm{\xi}, \operatorname{diag}(\bm{\Lambda}), \mathbf{e}_b, \mathbf{e}, \mathbf{q}^0\right)\stackrel{W_2}{\to} \mathsf{H}.$$* Now we state the state evolution for the VAMP algorithm. Its proof is deferred to . **Proposition 37**. *Suppose Assumptions [Assumption 2](#AssumpD){reference-type="ref" reference="AssumpD"}--[Assumption 6](#Assumpfix){reference-type="ref" reference="Assumpfix"} hold. Further assume that the function $x\mapsto \operatorname{Prox}_{\gamma_*^{-1} h}^{\prime}(x)$ defined in is non-constant. Let $\mathsf{H}=(\mathsf{B}^\star,\mathsf{D},\mathsf{D}\Xi,\mathsf{L},\mathsf{E}_b,\mathsf{E},\mathsf{P}_0)$ be as defined in . 
Set $\mathsf{X}_1=F(\mathsf{P}_0,\mathsf{B}^\star)$, set $\Delta_1=\mathbb{E}[\mathsf{X}_1^2] \in \mathbb{R}^{1 \times 1}$, and define iteratively $\mathsf{S}_t,\mathsf{Y}_t,\mathsf{X}_{t+1},\Delta_{t+1}$ for $t=1,2,3,\ldots$ such that $$(\mathsf{S}_1,\ldots,\mathsf{S}_t) \sim N(\rm{0},\Delta_t), \qquad (\mathsf{Y}_1,\ldots,\mathsf{Y}_t) \sim N(\rm{0},\kappa_* \Delta_t)$$ are Gaussian vectors independent of each other and of $\mathsf{H}$, and $$\mathsf{X}_{t+1}=F(\mathsf{Y}_t+\mathsf{E},\mathsf{B}^\star), \qquad \Delta_{t+1}=\mathbb{E}\left[\left(\mathsf{X}_{1}, \ldots, \mathsf{X}_{t+1}\right)\left(\mathsf{X}_{1}, \ldots, \mathsf{X}_{t+1}\right)^{\top}\right] \in \mathbb{R}^{(t+1) \times (t+1)}.$$ Then for each $t \geq 1$, $\Delta_t \succ 0$ strictly, $\tau_{**}=\mathbb{E}\mathsf{X}_t^2$, and $\kappa_* \tau_{**}=\mathbb{E}\mathsf{Y}_t^2$.* *Furthermore, let $\mathbf{X}_{t}=\left({\mathbf{x}}^{1}, \ldots, \mathbf{x}^t\right) \in \mathbb{R}^{p \times t}$, $\mathbf{S}_{t}=\left(\mathbf{s}^{1}, \ldots, \mathbf{s}^t\right) \in \mathbb{R}^{p \times t}$, and $\mathbf{Y}_{t}=\left({\mathbf{y}}^{1}, \ldots, \mathbf{y}^t\right) \in \mathbb{R}^{p \times t}$ collect the iterates of [\[eq:xsy\]](#eq:xsy){reference-type="eqref" reference="eq:xsy"}, starting from the initialization ${\mathbf{x}}^{1}=F(\mathbf{q}^0,\bm{\beta}^\star)$. Then for any fixed $t \geq 1$, almost surely as $p,n \rightarrow \infty$, $$\left(\mathbf{H},\mathbf{X}_t, \mathbf{S}_t,\mathbf{Y}_t\right) \stackrel{W_2}{\to}\left(\mathsf{H}, \mathsf{X}_{1}, \ldots, \mathsf{X}_t, \mathsf{S}_{1}, \ldots, \mathsf{S}_{t}, \mathsf{Y}_{1}, \ldots, \mathsf{Y}_{t}\right).$$* Noting that each matrix $\Delta_t$ is the upper-left submatrix of $\Delta_{t+1}$, let us denote the entries of these matrices as $\Delta_t=(\delta_{rs})_{r,s=1}^t$. We also denote $\delta_*:=\tau_{**}$ and $\sigma_*^2:=\kappa_* \tau_{**}$. **Remark 38**. In case where $\operatorname{Prox}_{\gamma_*^{-1} h}^{\prime}(x)$ is constant in $x$ (e.g. ridge penalty), the iterates converges in one iteration and the above result holds for $t\le 1$. Proof of the following Corollary is deferred to . **Corollary 39**. *Under Assumptions [Assumption 2](#AssumpD){reference-type="ref" reference="AssumpD"}--[Assumption 6](#Assumpfix){reference-type="ref" reference="Assumpfix"}, almost surely as $p,n\to \infty$ $$\label{consA} \begin{aligned} \left(\hat{\mathbf{x}}_{1t}, \mathbf{r}_{1t}, \bm{\beta}^\star\right) \stackrel{W_2}{\to}\left(\operatorname{Prox}_{\gamma_*^{-1} h}\left(\sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star\right), \sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star, \mathsf{B}^\star\right). \end{aligned}$$ Furthermore, almost surely as $p,n\to \infty$, $$\label{consC} \frac{1}{p}\left\|\mathbf{X}\mathbf{r}_{2t}-\mathbf{y}\right\|^2 \rightarrow \tau_{**} \mathbb{E} \mathsf{D}^2+\delta$$* ### Cauchy convergence of VAMP iterates {#subsectionCauchc} The following Proposition is analogous to [@fan2021replica Proposition 2.3] and [@li2022random Lemma B.2.] in the context of rotationally invariant spin glass and Bayesian linear regression. However, it requires observing a simple but crucial property of the R-transform (i.e. $-zR'(z)/R(z)<1$ for all $z$ on the domain) and its interplay with the non-expansiveness of the proximal map. We defer the proof to . **Proposition 40**. 
*Under Assumptions [Assumption 2](#AssumpD){reference-type="ref" reference="AssumpD"}--[Assumption 6](#Assumpfix){reference-type="ref" reference="Assumpfix"}, $$\lim_{\min (s, t) \rightarrow \infty} \delta_{s t}=\delta_*$$ where $\delta_{st}=\mathbb{E}\mathsf{X}_s \mathsf{X}_t$.* We can then obtain the convergence of vector iterates for the oracle VAMP algorithm. We defer the proof to . **Corollary 41**. *Under Assumptions [Assumption 2](#AssumpD){reference-type="ref" reference="AssumpD"}--[Assumption 6](#Assumpfix){reference-type="ref" reference="Assumpfix"}, for $j=1,2$, $$\begin{gathered} \lim _{(s, t) \rightarrow \infty}\left(\lim _{p \rightarrow \infty} \frac{1}{p}\left\|\mathbf{x}^t-\mathbf{x}^s\right\|^2\right)=\lim _{(s, t) \rightarrow \infty}\left(\lim _{p \rightarrow \infty} \frac{1}{p}\left\|\mathbf{y}^t-\mathbf{y}^s\right\|^2\right) \\ \qquad \qquad \qquad =\lim _{(s, t) \rightarrow \infty}\left(\lim _{p \rightarrow \infty} \frac{1}{p}\left\|\mathbf{r}_{j t}-\mathbf{r}_{j s}\right\|^2\right)=\lim _{(s, t) \rightarrow \infty}\left(\lim _{p \rightarrow \infty} \frac{1}{p}\left\|\hat{ \mathbf{x} }_{j t}-\hat{ \mathbf{x} }_{j s}\right\|^2\right)=0 \end{gathered}$$ where the inner limits exist almost surely for each fixed $t$ and $s$.* ### Characterize limits of empirical distribution {#subsectionlimi} Recall definition of $\mathbf{r}_{*},\mathbf{r}_{**}$ from [\[defr1r2\]](#defr1r2){reference-type="eqref" reference="defr1r2"}. The following is a direct consequence of the Cauchy convergence of the VAMP iterates and the strong convexity in the penalized loss function. We defer the proof to . **Proposition 42**. *Under Assumptions [Assumption 2](#AssumpD){reference-type="ref" reference="AssumpD"}--[Assumption 6](#Assumpfix){reference-type="ref" reference="Assumpfix"} with $c_0>0$, for $j=1,2$, $$\lim _{t \rightarrow \infty} \lim _{p \rightarrow \infty} \frac{1}{p}\left\|\hat{\bm{\beta}}-\hat{\mathbf{x}}_{j t}\right\|_2^2=\lim _{t \rightarrow \infty} \lim _{p \rightarrow \infty} \frac{1}{p}\left\|\mathbf{r}_{j t}-\mathbf{r}_{j *}\right\|_2^2=0.$$ where the inner limits exist almost surely for each fixed $t$.* Combining and yields the proof of . *Proof of .* We prove [\[waka1\]](#waka1){reference-type="eqref" reference="waka1"} first. 
Fix a function $\psi:\mathbb{R}^3 \mapsto \mathbb{R}$ satisfying, for some constant $C>0$, the pseudo-Lipschitz condition $$\left|\psi (\mathbf{v})-\psi \left(\mathbf{v}^{\prime}\right)\right| \leq C\left(1+\|\mathbf{v}\|_2+\left\|\mathbf{v}^{\prime}\right\|_2\right)\left\|\mathbf{v}-\mathbf{v}^{\prime}\right\|_2.$$ For any fixed $t$, we have $$\begin{aligned} & \left|\frac{1}{p} \sum_{i=1}^p \psi\left(\hat{x}_{1 t, i}, r_{1 t, i}, {\beta}^\star_i\right)-\frac{1}{p} \sum_{i=1}^p \psi\left(\hat{{\beta}}_i, r_{*, i}, {\beta}^\star_i\right)\right| \\ & \leq \frac{C}{p} \sum_{i=1}^p\left(\left|\hat{x}_{1 t, i}-\hat{{\beta}}_i\right|^2+\left|r_{1 t, i}-r_{*, i}\right|^2\right)^{\frac{1}{2}}\\ & \qquad \qquad \times \left(1+\sqrt{\hat{x}_{1 t, i}^2+r_{1 t, i}^2+\beta_i^{\star 2}}+\sqrt{\hat{{\beta}}_i^2+r_{*, i}^2+\beta_i^{\star 2}}\right)\\ & \stackrel{(\star)}{\leq} C\left(\frac{1}{p} \sum_{i=1}^p\left|\hat{x}_{1 t, i}-\hat{{\beta}}_i\right|^2+\left|r_{1 t, i}-r_{*, i}\right|^2\right)^{\frac{1}{2}}\\ &\qquad \qquad \times \left(\frac{1}{p} \sum_{i=1}^p\left(1+\sqrt{\hat{x}_{1 t, i}^2+r_{1 t, i}^2+\beta_i^{\star 2}}+\sqrt{\hat{{\beta}}_i^2+r_{*, i}^2+\beta_i^{\star 2}}\right)^2\right)^{\frac{1}{2}} \\ & \leq C\left(\frac{1}{p}\left\|\hat{\mathbf{x}}_{1t}-\hat{\bm{\beta}}\right\|_2^2+\frac{1}{p}\left\|\mathbf{r}_{1t}-\mathbf{r}_{*}\right\|_2^2\right)^{\frac{1}{2}}\\ & \qquad \qquad \times \left(3+\frac{3}{p}\left(\left\|\hat{\mathbf{x}}_{1t}\right\|_2^2+3\left\|\mathbf{r}_{1t}\right\|_2^2+2\left\|\bm{\beta}^\star\right\|_2^2+2\left\|\mathbf{r}_{1t}-\mathbf{r}_{*}\right\|_2^2\right)\right)^{\frac{1}{2}} \end{aligned}$$ where $(\star)$ is by the Cauchy-Schwarz inequality. This, along with , , implies that $$\label{fkt1} \lim _{t \rightarrow \infty} \lim _{p \rightarrow \infty}\left|\frac{1}{p} \sum_{i=1}^p \psi\left(\hat{x}_{1 t, i}, r_{1 t, i}, {\beta}^\star_i\right)-\frac{1}{p} \sum_{i=1}^p \psi\left(\hat{{\beta}}_i, r_{*, i}, {\beta}^\star_i\right)\right|=0$$ Using and , we have that $$\label{fkt2} \lim _{p \rightarrow \infty}\left|\frac{1}{p} \sum_{i=1}^p \psi\left(\hat{x}_{1 t, i}, r_{1 t, i}, {\beta}^\star_i\right)-\mathbb{E} \psi\left(\operatorname{Prox}_{\gamma_*^{-1} h}\left(\sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star\right), \sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star, \mathsf{B}^\star\right)\right|=0$$ By the triangle inequality, we also have $$\begin{aligned} &\left|\mathbb{E} \psi\left(\operatorname{Prox}_{\gamma_*^{-1} h}\left(\sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star\right), \sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star, \mathsf{B}^\star\right)-\frac{1}{p} \sum_{i=1}^p \psi\left(\hat{{\beta}}_i, r_{*, i}, {\beta}^\star_i\right)\right| \\ & \leq\left|\frac{1}{p} \sum_{i=1}^p \psi\left(\hat{x}_{1 t, i}, r_{1 t, i}, {\beta}^\star_i\right)-\mathbb{E} \psi\left(\operatorname{Prox}_{\gamma_*^{-1} h}\left(\sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star\right), \sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star, \mathsf{B}^\star\right)\right| \\ &\qquad +\left|\frac{1}{p} \sum_{i=1}^p \psi\left(\hat{x}_{1 t, i}, r_{1 t, i}, {\beta}^\star_i\right)-\frac{1}{p} \sum_{i=1}^p \psi\left(\hat{{\beta}}_i, r_{*, i}, {\beta}^\star_i\right)\right| \end{aligned}$$ Taking $p$ and then $t$ to infinity on both sides of the above, by [\[fkt1\]](#fkt1){reference-type="eqref" reference="fkt1"} and [\[fkt2\]](#fkt2){reference-type="eqref" reference="fkt2"}, $$\lim _{p \rightarrow \infty}\left|\frac{1}{p} \sum_{i=1}^p \psi\left(\hat{{\beta}}_i, r_{*, i}, {\beta}^\star_i\right)-\mathbb{E} 
\psi\left(\operatorname{Prox}_{\gamma_*^{-1} h}\left(\sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star\right), \sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star, \mathsf{B}^\star\right)\right|=0$$ where we used the fact that lhs does not depend on $t$. An application of with $\mathfrak{p}=2,k=3$ completes the proof for [\[waka1\]](#waka1){reference-type="eqref" reference="waka1"}. To see [\[waka2\]](#waka2){reference-type="eqref" reference="waka2"}, note that $$\begin{aligned} \bigg | \frac{1}{p}\left\|\mathbf{X}\mathbf{r}_{**}-\mathbf{y}\right\|^2 & -\frac{1}{p}\left\|\mathbf{X}\mathbf{r}_{2t}-\mathbf{y}\right\|^2 \bigg| = \bigg | \frac{1}{p}\left\langle \mathbf{X}\mathbf{r}_{**}-2\mathbf{y}+\mathbf{X}\mathbf{r}_{2t}, \mathbf{X}\mathbf{r}_{**}-\mathbf{X}\mathbf{r}_{2t}\right\rangle \bigg| \\ & \leq \frac{1}{p}\left\|\mathbf{X}\mathbf{r}_{**}-2\mathbf{y}+\mathbf{X}\mathbf{r}_{2t}\right\|_2\left\|\mathbf{X}\mathbf{r}_{**}-\mathbf{X}\mathbf{r}_{2t}\right\|_2 \\ & \leq \frac{1}{p}\left(\|\mathbf{X}\|_{\mathrm{op}}\left(\left\|\mathbf{r}_{**}-\bm{\beta}^\star\right\|_2+\left\|\mathbf{r}_{2t}-\bm{\beta}^\star\right\|\right)+2\|\bm{\varepsilon}\|_2\right)\|\mathbf{X}\|_{\mathrm{op}}\left\|\mathbf{r}_{**}-\mathbf{r}_{2t}\right\|_2. \end{aligned}$$ Using this inequality and $\|\mathbf{X}\|_{\mathrm{op}}=\max _{i \in[p]}\left|d_i\right| \rightarrow \sqrt{d_{+}}$ (cf. ), we obtain that almost surely $$\label{win1} \lim _{t \rightarrow \infty} \limsup _{p \rightarrow \infty}\left|\frac{1}{p}\left\|\mathbf{X}\mathbf{r}_{**}-\mathbf{y}\right\|^2-\frac{1}{p}\left\|\mathbf{X}\mathbf{r}_{2t}-\mathbf{y}\right\|^2\right|=0.$$ From triangle inequality, we have $$\begin{aligned} \bigg | \frac{1}{p}\left\|\mathbf{X}\mathbf{r}_{**}-\mathbf{y}\right\|^2 & -\left(\tau_{**} \mathbb{E} \mathsf{D}^2+\delta\right) \bigg | \\ & \leq\left|\frac{1}{p}\left\|\mathbf{X}\mathbf{r}_{2t}-\mathbf{y}\right\|^2-\left(\tau_{**} \mathbb{E} \mathsf{D}^2+\delta\right)\right|+\left|\frac{1}{p}\left\|\mathbf{X}\mathbf{r}_{**}-\mathbf{y}\right\|^2-\frac{1}{p}\left\|\mathbf{X}\mathbf{r}_{2t}-\mathbf{y}\right\|^2\right|. \end{aligned}$$ Apply limit operation $\lim _{t \rightarrow \infty} \limsup _{p \rightarrow \infty}$ on both sides. Using [\[win1\]](#win1){reference-type="eqref" reference="win1"}, [\[consC\]](#consC){reference-type="eqref" reference="consC"} and the fact that the lhs does not depend on $t$, we have that almost surely $$\limsup _{p \rightarrow \infty}\left|\frac{1}{p}\left\|\mathbf{X}\mathbf{r}_{**}-\mathbf{y}\right\|^2-\left(\tau_{**} \mathbb{E} \mathsf{D}^2+\delta\right)\right|=0.$$ This completes the proof of [\[waka2\]](#waka2){reference-type="eqref" reference="waka2"}. ◻ ## Prove result B: consistent estimation {#section:DEBIA} We prove existence and uniqueness of the solution to the adjustment equation [\[gammasolvea\]](#gammasolvea){reference-type="eqref" reference="gammasolvea"} in , show that the adjustment equation converges to a population limit in , and prove in . To streamline the presentation, proofs of intermediate claims are collected in . ### Properties of the adjustment equation {#subsectionspsd} Recall definition of function $g_p:(0,+\infty)\mapsto \mathbb{R}$ from [\[defgp\]](#defgp){reference-type="eqref" reference="defgp"}. We outline in the conditions under which it is well-defined, strictly increasing and the equation $$\label{solution} g_p(\gamma)=1$$ admits a unique solution on $(0,+\infty)$. The proof is deferred to . **Lemma 43**. *Fix $p\ge 1$. 
Assume that $h^{\prime \prime}(\hat{\beta_j})\ge 0$ for all $j\in [p]$. We then have the following statements:* - *If $d_i\neq 0$ for all $i$, the function $\gamma\mapsto g_p(\gamma )$ is well-defined. If for some $i\in [p], d_i=0$, the function $\gamma\mapsto g_p(\gamma )$ is well-defined if and only if $\norm{h^{\prime\prime}(\hat{\bm{\beta}})}_0>0$.* - *Given that $g_p$ is well-defined, it is strictly increasing if there exists some $j\in [p]$ such that $h^{\prime \prime}\left(\hat{{\beta}}_j\right)\neq +\infty$, or else $g_p(\gamma)=1,\forall \gamma \in (0,+\infty)$.* - *Given that $\left\|h^{\prime \prime}(\hat{\bm{\beta}})\right\|_0=p$ or for all $i, d_i\neq 0$, by which $g_p$ is well-defined from (a), [\[solution\]](#solution){reference-type="eqref" reference="solution"} has a unique solution if and only if there exists some $j\in [p]$ such that $h^{\prime \prime}\left(\hat{{\beta}}_j\right)\neq +\infty$.* - *Given that $\left\|h^{\prime \prime}(\hat{\bm{\beta}})\right\|_0<p$ and for some $i$, $d_i= 0$, $g_p$ is well-defined and [\[solution\]](#solution){reference-type="eqref" reference="solution"} has a unique solution on $(0,+\infty)$ if and only if $\norm{d}_0+\norm{h^{\prime\prime}(\hat{\bm{\beta}})}_0>p$.* The following assumption is made to simplify the conditions outlined in . **Assumption 8**. Fix $p\ge 1$ and suppose that holds. If $\left\|h^{\prime \prime}(\hat{\bm{\beta}})\right\|_0=p$ or that $\mathbf{X}^\top \mathbf{X}$ is non-singular, we require only that there exists some $i\in [p]$ such that $h^{\prime\prime}(\hat{{\beta}}_i)\neq +\infty$. Otherwise, we require in addition that $\norm{d}_0+\norm{h^{\prime\prime}(\hat{\bm{\beta}})}_0>p$. The following is a direct consequence of which in turn has as a special case. **Proposition 44**. *Fix $p\ge 1$ and suppose that holds. Then, holds if and only if the function $\gamma\mapsto g_p(\gamma)$ is well-defined for any $\gamma>0$, strictly increasing, and the equation [\[solution\]](#solution){reference-type="eqref" reference="solution"} admits a unique solution contained in $(0,+\infty).$* ### Population limit of the adjustment equation {#subsectionWe} From now on, we use notation for the following random variable $$\mathsf{U}:= h^{\prime \prime}\left(\operatorname{Prox}_{\gamma_*^{-1} h}\left(\sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star\right)\right).$$ Define $g_\infty:(0,+\infty)\mapsto \mathbb{R}$ by $$g_\infty(\gamma)=\mathbb{E} \frac{1}{\left(\mathsf{D}^2-\gamma\right) \mathbb{E} \frac{1}{\gamma+\mathsf{U}}+1}.$$ which is well-defined under , [Assumption 6](#Assumpfix){reference-type="ref" reference="Assumpfix"} as shown in below. We defer its proof to . **Lemma 45**. *Under , [Assumption 6](#Assumpfix){reference-type="ref" reference="Assumpfix"}, $g_\infty$ is well-defined on and strictly increasing on $(0,+\infty)$. The equation $g_\infty(\gamma)=1$ admits a unique solution $\gamma_*$ on $(0,+\infty)$.* **Remark 46**. We emphasize that the proof of does not require [\[fp\]](#fp){reference-type="eqref" reference="fp"} admits a unique solution, only that a solution exists. We can show that the LHS of the sample adjustment equation converges to the LHS of the population adjustment equation. We defer its proof to . **Proposition 47**. 
*Under ---[Assumption 6](#Assumpfix){reference-type="ref" reference="Assumpfix"}, almost surely for all sufficiently large $p$, $g_p$ is well-defined on $(0,+\infty)$ and for any $\gamma>0$, almost surely $$\label{desired} \lim_{p\to \infty} g_p(\gamma)=g_\infty (\gamma).$$* ### Consistent estimation of fixed points {#subsectionConsf} Recall the definition of $\widehat{\mathsf{adj}}$ from and the definitions of $\hat{\eta}_*, \hat{\tau}_*, \hat{\tau}_{**}, \hat{\mathbf{{r}}}_{**}$ from . We also define consistent estimators of $b_*$ and $\kappa_*$ as follows $$\label{gammadef} \begin{aligned} &\hat{b}_* (p):=\left(\frac{\hat{\eta}_*}{\widehat{\mathsf{adj}}}\right)^2 \frac{1}{p} \sum_{i=1}^p \frac{d_i^2}{\left(d_i^2+\hat{\eta}_*-\widehat{\mathsf{adj}}\right)^2} \\ &\hat{\kappa}_*(p) :=\left(\frac{\hat{\eta}_*-\widehat{\mathsf{adj}}}{\widehat{\mathsf{adj}}}\right)^2\left(\frac{1}{p} \sum_{i=1}^p\left(\frac{\hat{\eta}_*}{d_i^2+\hat{\eta}_*-\widehat{\mathsf{adj}}}\right)^2-1\right) \end{aligned}$$ whereupon we have $\hat{\tau}_{*}(p)=\hat{b}_*+\hat{\kappa}_* \hat{\tau}_{**}$. All of the above are well-defined under , [Assumption 5](#Assumpgp){reference-type="ref" reference="Assumpgp"}, which ensure that $\widehat{\mathsf{adj}}, \hat{\eta}_*-\widehat{\mathsf{adj}}>0$ (cf. or ). On the other hand, if , [Assumption 5](#Assumpgp){reference-type="ref" reference="Assumpgp"} do not both hold, we say that the above estimators are not well-defined. We are now ready to prove which shows that the quantities defined in [\[DEFEFD\]](#DEFEFD){reference-type="eqref" reference="DEFEFD"} and [\[gammadef\]](#gammadef){reference-type="eqref" reference="gammadef"} indeed converge to their population counterparts. *Proof of .* We first show that $\widehat{\mathsf{adj}}\left(p\right) \rightarrow \gamma_*$ almost surely as $p\to\infty$. Fix any $0<\epsilon<\gamma_*$. Note that almost surely $$\begin{aligned} & \lim _{p\rightarrow \infty} g_{p}\left(\gamma_*-\epsilon\right)=g_{\infty}\left(\gamma_*-\epsilon\right)<g_{\infty}\left(\gamma_*\right)=1, \\ & \lim _{p \rightarrow \infty} g_{p}\left(\gamma_*+\epsilon\right)=g_{\infty}\left(\gamma_*+\epsilon\right)>g_{\infty}\left(\gamma_*\right)=1 \end{aligned}$$ as a direct consequence of and the fact that $g_\infty$ is strictly increasing (cf. ). It follows that almost surely for all $p$ sufficiently large $$\label{dddde} g_{p}\left(\gamma_*-\epsilon\right)<1, \quad g_{p}\left(\gamma_*+\epsilon\right)>1.$$ By the assumptions and , $g_{p}$ is continuous, strictly increasing and [\[solution\]](#solution){reference-type="eqref" reference="solution"} admits a unique solution $\widehat{\mathsf{adj}}\left(p\right)$ for all $p$. [\[dddde\]](#dddde){reference-type="eqref" reference="dddde"} thus implies that almost surely for all $p$ sufficiently large $\left|\widehat{\mathsf{adj}}\left(p\right)-\gamma_*\right|<\epsilon$. This completes the proof that $\widehat{\mathsf{adj}}\left(p\right) \rightarrow \gamma_*$ almost surely. Recalling [\[RCa\]](#RCa){reference-type="eqref" reference="RCa"}, [\[RCc\]](#RCc){reference-type="eqref" reference="RCc"}, [\[eq:Jacprox\]](#eq:Jacprox){reference-type="eqref" reference="eq:Jacprox"} and the definitions of $b_*, \kappa_*$ (cf. [\[dddfef\]](#dddfef){reference-type="eqref" reference="dddfef"}), the same reasoning used to demonstrate can be applied to establish the consistency statements for $\hat{\eta}_*, \hat{b}_*$ and $\hat{\kappa}_*$. 
To show $\hat{\bm{\beta}}^u(p) \stackrel{W_2}{\to} \mathsf{B}^\star+\sqrt{\tau_{*}}\mathsf{Z}$ almost surely as $p\to \infty$, note that $\hat{\bm{\beta}}^u(p)-\mathbf{r}_{*}\stackrel{W_2}{\to} 0$ by consistency of $\widehat{\mathsf{adj}}$ and $\limsup_{p\to \infty }p^{-1}\left\|\mathbf{X}^{\top}(\mathbf{y}-\mathbf{X}\hat{\bm{\beta}})\right\|_2^2<+\infty$, and the claims follow from [\[waka1\]](#waka1){reference-type="eqref" reference="waka1"} and an application of . A similar argument shows that $\frac{1}{p}\norm{\hat{\mathbf{{r}}}_{**}-\mathbf{r}_{**}}^2\rightarrow 0$ almost surely as $p\to \infty$. The consistency statements for $\hat{\tau}_{*}, \hat{\tau}_{**}$ follow from results above, [\[waka2\]](#waka2){reference-type="eqref" reference="waka2"} and the identity $\tau_{*}=b_*+\kappa_* \tau_{**}$ (cf. [\[dddfef\]](#dddfef){reference-type="eqref" reference="dddfef"}). ◻ ## Supporting proofs for result A {#SupportPI} ### Oracle VAMP proofs {#appendix:ovamp} *Proof of .* By , and , [\[RCc\]](#RCc){reference-type="eqref" reference="RCc"}, $$\lim _{p \rightarrow \infty} \frac{1}{p} \operatorname{Tr}(\bm{\Lambda})=\mathbb{E}\left(\eta_*-\gamma_*\right)\left(\frac{\eta_*}{\gamma_*\left(\mathsf{D}^2+\left(\eta_*-\gamma_*\right)\right)}-\frac{1}{\gamma_*}\right)=0.$$ The limiting values of $\kappa_*:=\lim _{p \rightarrow \infty} \frac{1}{p} \operatorname{Tr}\left(\bm{\Lambda}^2\right)$ and $b_*:=\lim _{p \rightarrow \infty} \frac{1}{p}\left\|\mathbf{e}_b\right\|^2$ are found analogously under . The identity $\tau_{*}=b_*+\kappa_* \tau_{**}$ is obtained by rewriting [\[RCd\]](#RCd){reference-type="eqref" reference="RCd"} using the definitions of $b_*,\kappa_*$. Using [\[RCa\]](#RCa){reference-type="eqref" reference="RCa"}, we have that $$\mathbb{E} F^{\prime}\left(\sqrt{\tau_{*}} \mathsf{Z}, \mathsf{B}^\star\right)=\frac{\eta_*}{\eta_*-\gamma_*}\left(\mathbb{E} \operatorname{Prox}_{\gamma_{*}^{-1} h}^{\prime}\left(\mathsf{B}^\star+\sqrt{\tau_{*}} \mathsf{Z}\right)-\frac{\gamma_*}{\eta_*}\right)=0.$$ The Lipschitz continuity of $(q,x)\mapsto F(q,x)$ on $\mathbb{R}^2$ follows from , (b). 
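The Stein's-lemma identity used in step $(a)$ of the computation that follows, namely $\mathbb{E}\big(\mathsf{Z}(\operatorname{Prox}_{\gamma_*^{-1} h}(\sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star)-\mathsf{B}^\star)\big)=\sqrt{\tau_{*}}\,\mathbb{E}\operatorname{Prox}_{\gamma_*^{-1} h}^{\prime}(\sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star)$, can be sanity-checked by Monte Carlo once a concrete proximal map is fixed. A minimal Python sketch, assuming purely for illustration the soft-thresholding map (the proximal map of a lasso-type penalty) and hypothetical values of $\tau_*$, $\mathsf{B}^\star$ and the threshold, none of which are tied to the fixed-point values of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def soft(v, thr):
    # proximal map of thr * |.| : sign(v) * max(|v| - thr, 0)
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

tau, b, thr = 1.3, 0.7, 0.5                      # hypothetical tau_*, B_star, threshold
z = rng.standard_normal(2_000_000)
x = np.sqrt(tau) * z + b

lhs = np.mean(z * (soft(x, thr) - b))            # E[ Z (Prox(sqrt(tau) Z + B) - B) ]
rhs = np.sqrt(tau) * np.mean(np.abs(x) > thr)    # sqrt(tau) E[ Prox'(sqrt(tau) Z + B) ]
print(lhs, rhs)                                  # agree up to Monte Carlo error
```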
To show $\mathbb{E} F\left(\sqrt{\tau_{*}} \mathsf{Z}, \mathsf{B}^\star\right)^2=\tau_{**}$, note that $$\begin{aligned} & \mathbb{E} F\left(\sqrt{\tau_{*}} \mathsf{Z}, \mathsf{B}^\star\right)^2=\mathbb{E}\left(\frac{\eta_*}{\eta_*-\gamma_*}\left(\operatorname{Prox}_{\gamma_*^{-1} h}\left(\sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star\right)-\mathsf{B}^\star\right)-\frac{\gamma_*}{\eta_*-\gamma_*} \sqrt{\tau_{*}} \mathsf{Z}\right)^2 \\ & =\left(\frac{\eta_*}{\eta_*-\gamma_*}\right)^2 \mathbb{E}\left(\operatorname{Prox}_{\gamma_*^{-1} h}\left(\sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star\right)-\mathsf{B}^\star\right)^2+\left(\frac{\gamma_*}{\eta_*-\gamma_*}\right)^2 \tau_{*} \\ & \quad-2 \frac{\gamma_*}{\eta_*-\gamma_*} \frac{\eta_*}{\eta_*-\gamma_*} \mathbb{E}\left(\sqrt{\tau_{*}} \mathsf{Z}\left(\operatorname{Prox}_{\gamma_*^{-1} h}\left(\sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star\right)-\mathsf{B}^\star\right)\right) \\ & \stackrel{(a)}{=}\left(\frac{\eta_*}{\eta_*-\gamma_*}\right)^2 \mathbb{E}\left(\operatorname{Prox}_{\gamma_*^{-1} h}\left(\sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star\right)-\mathsf{B}^\star\right)^2\\ & \qquad \qquad +\left(\frac{\gamma_*}{\eta_*-\gamma_*}\right)^2 \tau_{*}-2 \frac{\gamma_*}{\eta_*-\gamma_*} \frac{\eta_*}{\eta_*-\gamma_*} \frac{\gamma_*}{\eta_*} \tau_{*} \\ & =\left(\frac{\eta_*}{\eta_*-\gamma_*}\right)^2 \mathbb{E}\left(\operatorname{Prox}_{\gamma_*^{-1} h}\left(\sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star\right)-\mathsf{B}^\star\right)^2-\left(\frac{\gamma_*}{\eta_*-\gamma_*}\right)^2 \tau_{*} \\ & \stackrel{(b)}{=} \tau_{**} \end{aligned}$$ where in $(a)$ we used Stein's lemma and [\[RCa\]](#RCa){reference-type="eqref" reference="RCa"} for the following $$\mathbb{E}\left(\mathsf{Z}\left(\operatorname{Prox}_{\gamma_*^{-1} h}\left(\sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star\right)-\mathsf{B}^\star\right)\right)=\sqrt{\tau_{*}}\,\mathbb{E}\left(\operatorname{Prox}_{\gamma_*^{-1} h}^{\prime}\left(\sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star\right)\right)=\sqrt{\tau_{*}}\,\frac{\gamma_*}{\eta_*}$$ and in (b) we used [\[RCc\]](#RCc){reference-type="eqref" reference="RCc"}. We remark that although the function $x\mapsto \operatorname{Prox}_{\gamma_*^{-1} h}(x)$ may not be differentiable on a finite set of points, Stein's lemma can still be applied (cf. [@stein1981estimation Lemma 1]). ◻ *Proof of .* Note that $\bm{\xi}=\mathbf{Q}\bm{\varepsilon}\sim N(\rm{0},\mathbf{I}_{n})$. Then $\mathbf{D}^\top \bm{\xi} \in \mathbb{R}^p$ may be written as the entrywise product of $\mathbf{D}^\top \bm{1}_{n \times 1} \in \mathbb{R}^p$ and a vector $\bar{\bm{\xi}} \sim N(\rm{0},\mathbf{I}_{p})$, both when $p \geq n$ and when $p < n$. The almost-sure convergence $H \stackrel{W_2}{\to}\mathsf{H}$ is then a straightforward consequence of Propositions [Proposition 26](#prop:iidW){reference-type="ref" reference="prop:iidW"}, [Proposition 27](#prop:contW){reference-type="ref" reference="prop:contW"}, and [Proposition 30](#prop:orthoW){reference-type="ref" reference="prop:orthoW"}, where all random variables of $\mathsf{H}$ have finite moments of all orders under Assumptions [Assumption 2](#AssumpD){reference-type="ref" reference="AssumpD"} and [Assumption 3](#AssumpPrior){reference-type="ref" reference="AssumpPrior"}. The identities $\kappa_*=\mathbb{E}\mathsf{L}^2$ and $b_*=\mathbb{E}\mathsf{E}_b^2$ follow from the definitions of $\kappa_*,b_*$ in . 
◻ *Proof of .* We have $\delta_{11}=\mathbb{E}\mathsf{X}_1^2=\delta_*$ by the last identity of ([\[eq:denoiserpp2\]](#eq:denoiserpp2){reference-type="ref" reference="eq:denoiserpp2"}). Supposing that $\delta_{tt}=\mathbb{E}\mathsf{X}_t^2=\delta_*$, we have by definition $\mathbb{E}\mathsf{Y}_t^2=\kappa_* \delta_{tt}=\sigma_*^2=\delta_*\kappa_*$. Since $\mathsf{Y}_t$ is independent of $\mathsf{E}$, we have $\mathsf{Y}_t+\mathsf{E}\sim N(0,\sigma_*^2+b_*)$ where this variance is $\sigma_*^2+b_*=\tau_{*}$ by last identity of [\[eq:somide\]](#eq:somide){reference-type="eqref" reference="eq:somide"}. Then $\mathbb{E}\mathsf{X}_{t+1}^2=\delta_*$ by the last identity of ([\[eq:denoiserpp2\]](#eq:denoiserpp2){reference-type="ref" reference="eq:denoiserpp2"}), so $\mathbb{E}\mathsf{X}_t^2=\delta_*$ and $\mathbb{E}\mathsf{Y}_t^2=\sigma_*^2$ for all $t \geq 1$. Noting that $\Delta_t$ is the upper-left submatrix of $\Delta_{t+1}$, let us denote $$\Delta_{t+1}=\begin{pmatrix} \Delta_t & \delta_t \\ \delta_t^\top & \delta_* \end{pmatrix}$$ We now show by induction on $t$ the following three statements: 1. $\Delta_t \succ 0$ strictly. 2. We have $$\label{eq:extraSE} \mathsf{Y}_t=\sum_{k=1}^{t-1} \mathsf{Y}_{k}\left(\Delta_{t-1}^{-1} \delta_{t-1}\right)_k+\mathsf{U}_t, \quad \mathsf{S}_t=\sum_{k=1}^{t-1} \mathsf{S}_{k}\left(\Delta_{t-1}^{-1} \delta_{t-1}\right)_{k}+\mathsf{U}^{\prime}_t$$ where $\mathsf{U}_t,\mathsf{U}_t'$ are Gaussian variables with strictly positive variance, independent of $\mathsf{H}$, $\left(\mathsf{Y}_{1}, \ldots, \mathsf{Y}_{t-1}\right)$, and $\left(\mathsf{S}_{1}, \ldots, \mathsf{S}_{t-1}\right)$. 3. $\left(\mathbf{H},\mathbf{X}_{t+1}, \mathbf{S}_t,\mathbf{Y}_t\right) \stackrel{W_2}{\to}\left(\mathsf{H}, \mathsf{X}_{1}, \ldots, \mathsf{X}_{t+1}, \mathsf{S}_{1}, \ldots, \mathsf{S}_{t}, \mathsf{Y}_{1}, \ldots, \mathsf{Y}_{t}\right)$. We take as base case $t=0$, where the first two statements are vacuous, and the third statement requires $(\mathbf{H},{\mathbf{x}}^{1}) \stackrel{W_2}{\to} (\mathsf{H},\mathsf{X}_1)$ almost surely as $p \to \infty$. Recall that ${\mathbf{x}}^{1}=F(p^0,\bm{\beta}^\star)$, and that $F(p,\beta)$ is Lipschitz by Proposition . Then this third statement follows from Propositions [Proposition 36](#prop:AMPparamconverge){reference-type="ref" reference="prop:AMPparamconverge"} and [Proposition 27](#prop:contW){reference-type="ref" reference="prop:contW"}. Supposing that these statements hold for some $t \geq 0$, we now show that they hold for $t+1$. To show the first statement $\Delta_{t+1} \succ 0$, note that for $t=0$ this follows from $\Delta_1=\delta_*>0$ by . For $t \geq 1$, given that $\Delta_{t}\succ0$, $\Delta_{t+1}$ is singular if and only if there exist constants $\alpha_{1}, \ldots, \alpha_{t} \in \mathbb{R}$ such that $$\mathsf{X}_{t+1}=F\left(\mathsf{Y}_{t}+\mathsf{E}, \mathsf{B}^\star\right)=\sum_{r=1}^t \alpha_{r} \mathsf{X}_{r}$$ with probability 1. From the induction hypothesis, $\mathsf{Y}_{t}=\sum_{k=1}^{t-1} \mathsf{Y}_{k}\left(\Delta_{r}^{-1} \delta_{r}\right)_{k}+\mathsf{U}_t$ where $\mathsf{U}_t$ is independent of $\mathsf{H},\mathsf{Y}_{1}, \ldots, \mathsf{Y}_{t-1}$ and hence also of $\mathsf{E},\mathsf{B}^\star,\mathsf{X}_1,...,\mathsf{X}_t$. 
We now show that for any realized values $(e_{0}, x_{0}, w_{0})$ of $$\left(\mathsf{E}+\sum_{k=1}^{t-1} \mathsf{Y}_{k}\left(\Delta_{r}^{-1} \delta_{r}\right)_{k}, \quad \mathsf{B}^\star, \quad \sum_{r=1}^t \alpha_{r} \mathsf{X}_{r}\right),$$ we have that $\mathbb{P}\left(F\left(\mathsf{U}_t+e_{0}, x_{0}\right) \neq w_{0}\right)>0$. This would imply that $\Delta_{t+1}\succ 0$. Supposing the contrary, we then have that $$\mathbb{P}\left(\frac{\eta_*}{\eta_*-\gamma_*} \operatorname{Prox}_{\gamma_*^{-1} h}\left(\mathsf{U}_t+e_0+x_0\right)-\frac{\gamma_*}{\eta_*-\gamma_*} \mathsf{U}_t=w_0+\frac{\eta_*}{\eta_*-\gamma_*} x_0+\frac{\gamma_*}{\eta_*-\gamma_*} e_0\right)=1.$$ Since $\mathsf{U}_t$ is Gaussian with strictly positive variance, the above implies that the function $$u \mapsto \frac{\eta_*}{\eta_*-\gamma_*} \operatorname{Prox}_{\gamma_*^{-1} h}\left(u+e_0+x_0\right)-\frac{\gamma_*}{\eta_*-\gamma_*} u$$ is constant almost everywhere. This is in turn equivalent to $\operatorname{Prox}_{\gamma_*^{-1} h}(u)=C+\frac{\gamma_*}{\eta_*} u$ almost everywhere for some constant $C\in \mathbb{R}$, by a change of variable. Noting that $u \mapsto \operatorname{Prox}_{\gamma_*^{-1} h}(u)$ is continuous, we thus have that $\operatorname{Prox}_{\gamma_*^{-1} h}(u)=C+\frac{\gamma_*}{\eta_*} u$ for all $u \in \mathbb{R}$. This implies that $\operatorname{Prox}_{\gamma_*^{-1} h}(u)$ is continuously differentiable and has constant derivative $\frac{\gamma_*}{\eta_*}$, which contradicts the assumption that $x\mapsto \operatorname{Prox}_{\gamma_*^{-1} h}^{\prime}(x)$ is non-constant. We thus have proved the first inductive statement that $\Delta_{t+1}\succ 0$. To study the empirical limit of $\mathbf{s}^{t+1}$, let $\mathbf{U}=\left(\mathbf{e}_{b}, \mathbf{S}_{t}, \bm{\Lambda} \mathbf{S}_{t}\right)$ and $\mathbf{V}=\left(\mathbf{e},\mathbf{X}_{t}, \mathbf{Y}_{t}\right)$. (For $t=0$, this is simply $\mathbf{U}=\mathbf{e}_b$ and $\mathbf{V}=\mathbf{e}$.) By the induction hypothesis, the independence of $(\mathsf{S}_1,\ldots,\mathsf{S}_t)$ with $(\mathsf{E}_b,\mathsf{L})$, and the identities $\mathbb{E}\mathsf{E}_b^2=b_*$ and $\mathbb{E}\mathsf{L}=0$ and $\mathbb{E}\mathsf{L}^2=\kappa_*$, almost surely as $p \to \infty$, $$\frac{1}{p}\left(\mathbf{e}_{b}, \mathbf{S}_{t}, \bm{\Lambda} \mathbf{S}_{t}\right)^{\top}\left(\mathbf{e}_{b}, \mathbf{S}_{t}, \bm{\Lambda} \mathbf{S}_{t}\right) \rightarrow\left(\begin{array}{ccc} b_{*} & 0 & 0 \\ 0 & \Delta_{t} & 0 \\ 0 & 0 & \kappa_{*} \Delta_{t} \end{array}\right)\succ0$$ So almost surely for sufficiently large $p$, conditional on $(\mathbf{H},\mathbf{X}_{t+1},\mathbf{S}_t,\mathbf{Y}_t)$, the law of $\mathbf{s}^{t+1}$ is given by its law conditioned on $\mathbf{U}=\mathbf{O}\mathbf{V}$, which is (see [@fan2022tap Lemma B.2]) $$\label{eq:SEconv3} \mathbf{s}^{t+1}\big|_{\mathbf{U}=\mathbf{O}\mathbf{V}}=\mathbf{O}\mathbf{x}^{t+1}\big|_{\mathbf{U}=\mathbf{O}\mathbf{V}} \stackrel{L}{=}\mathbf{U}\left(\mathbf{U}^\top \mathbf{U}\right)^{-1} \mathbf{V}^{\top} \mathbf{x}^{t+1}+ \bm{\Pi}_{U^{\perp}} \tilde{\mathbf{O}} \bm{\Pi}_{V^{\perp}}^{\top} \mathbf{x}^{t+1}$$ where $\tilde{\mathbf{O}} \sim \mathop{\mathrm{Haar}}(\mathbb{O}(p-(2 t+1)))$ and $\bm{\Pi}_{U^{\perp}}, \bm{\Pi}_{V^{\perp}} \in \mathbb{R}^{p \times(p-(2 t+1))}$ are matrices with orthonormal columns spanning the orthogonal complements of the column spans of $\mathbf{U},\mathbf{V}$ respectively. 
We may replace $\mathbf{s}^{t+1}$ by the right side of [\[eq:SEconv3\]](#eq:SEconv3){reference-type="eqref" reference="eq:SEconv3"} without affecting the joint law of $\left(\mathbf{H},\mathbf{X}_{t+1}, \mathbf{S}_t, \mathbf{Y}_{t},\mathbf{s}^{t+1}\right)$. For $t=0$, we have $\mathbb{E}\mathsf{X}_1\mathsf{E}=0$ since $\mathsf{X}_1$ is independent of $\mathsf{E}$. For $t \geq 1$, by the definition of $\mathsf{X}_{t+1}$, the condition $\mathbb{E}F'(\mathsf{P},\mathsf{B}^\star)=0$ from ([\[eq:denoiserpp2\]](#eq:denoiserpp2){reference-type="ref" reference="eq:denoiserpp2"}), and Stein's lemma, we have $\mathbb{E}\mathsf{X}_{t+1}\mathsf{E}=0$ and $\mathbb{E}\mathsf{X}_{t+1}\mathsf{Y}_r=0$ for each $r=1,\ldots,t$. Then by the induction hypothesis, almost surely as $p \rightarrow \infty$, $$\left(p^{-1}\mathbf{U}^\top \mathbf{U}\right)^{-1} \rightarrow\left(\begin{array}{ccc} b_{*} & 0 & 0 \\ 0 & \Delta_{t} & 0 \\ 0 & 0 & \kappa_{*} \Delta_{t} \end{array}\right)^{-1}, \quad p^{-1}\mathbf{V}^{\top} \mathbf{x}^{t+1} \rightarrow\left(\begin{array}{c} 0 \\ \delta_{t} \\ 0 \end{array}\right).$$ Then by ([\[eq:SEconv3\]](#eq:SEconv3){reference-type="ref" reference="eq:SEconv3"}) and Propositions [Proposition 28](#prop:combW){reference-type="ref" reference="prop:combW"} and [Proposition 30](#prop:orthoW){reference-type="ref" reference="prop:orthoW"}, it follows that $$\left(\mathbf{H},\mathbf{X}_{t+1}, \mathbf{S}_{t}, \mathbf{Y}_{t}, \mathbf{s}^{t+1}\right) \stackrel{W_2}{\to}\left(\mathsf{H}, \mathsf{X}_{1}, \ldots, \mathsf{X}_{t+1}, \mathsf{S}_{1}, \ldots, \mathsf{S}_{t}, \mathsf{Y}_{1}, \ldots \mathsf{Y}_{t}, \sum_{r=1}^{t} \mathsf{S}_{r}\left(\Delta_{t}^{-1} \delta_{t}\right)_{r}+\mathsf{U}^{\prime}_{t+1}\right)$$ where $\mathsf{U}^{\prime}_{t+1}$ is the Gaussian limit of the second term on the right side of ([\[eq:SEconv3\]](#eq:SEconv3){reference-type="ref" reference="eq:SEconv3"}) and is independent of $\mathsf{H}, \mathsf{X}_{1}, \ldots, \mathsf{X}_{t+1}, \mathsf{S}_{1}, \ldots, \mathsf{S}_{t}, \mathsf{Y}_{1}, \ldots \mathsf{Y}_{t}$. We can thus set $\mathsf{S}_{t+1}:=\sum_{r=1}^{t} \mathsf{S}_{r}\left(\Delta_{t}^{-1} \delta_{t}\right)_{r}+\mathsf{U}^{\prime}_{t+1}$. Then $(\mathsf{S}_1,\ldots,\mathsf{S}_{t+1})$ is multivariate Gaussian and remains independent of $\mathsf{H}$ and $(\mathsf{Y}_1,\ldots,\mathsf{Y}_t)$. Since $p^{-1}\|\mathbf{s}^{t+1}\|^{2}=p^{-1}\|\mathbf{x}^{t+1}\|^{2} \rightarrow \delta_*$ almost surely as $p \rightarrow \infty$ by the induction hypothesis, we have $\mathbb{E}\mathsf{S}_{t+1}^2=\delta_*$. From the form of $\mathsf{S}_{t+1}$, we may check also $\mathbb{E}\mathsf{S}_{t+1}(\mathsf{S}_1,\ldots,\mathsf{S}_t)=\delta_t$, so $(\mathsf{S}_1,\ldots,\mathsf{S}_{t+1})$ has covariance $\Delta_{t+1}$ as desired. Furthermore $\sum_{r=1}^{t} \mathsf{~S}_{r}\left(\Delta_{t}^{-1} \delta_{t}\right)_{r} \sim N\left(0, \delta_{t}^{\top} \Delta_{t}^{-1} \delta_{t}\right)$. From $\Delta_{t+1} \succ 0$ and the Schur complement formula, $\delta_*-\delta_{t}^{\top} \Delta_{t}^{-1} \delta_{t}>0$ strictly. Then $\mathsf{U}^{\prime}_{t+1}$ has strictly positive variance, since the variance of $\sum_{r=1}^{t} \mathsf{S}_{r}\left(\Delta_{t}^{-1} \delta_{t}\right)_{r}$ is less than the variance of $\mathsf{S}_{t+1}$. This proves the second equation in [\[eq:extraSE\]](#eq:extraSE){reference-type="eqref" reference="eq:extraSE"} for $t+1$. Now, we study the empirical limit of $\mathbf{y}^{t+1}$. 
Let $\mathbf{U}=\left(\mathbf{e},\mathbf{X}_{t+1}, \mathbf{Y}_{t}\right)$, $\mathbf{V}=\left(\mathbf{e}_{b}, \mathbf{S}_{t+1}, \bm{\Lambda} \mathbf{S}_{t}\right)$. Similarly by the induction hypothesis and the empirical convergence of $(\mathbf{H},\mathbf{S}_{t+1})$ already shown, almost surely as $p \rightarrow \infty$, $$\frac{1}{p}\left(\mathbf{e}_b, \mathbf{S}_{t+1}, \bm{\Lambda} \mathbf{S}_{t}\right)^{\top}\left(\mathbf{e}_b, \mathbf{S}_{t+1}, \bm{\Lambda} \mathbf{S}_{t}\right) \rightarrow\left(\begin{array}{ccc} b_{*} & 0 & 0 \\ 0 & \Delta_{t+1} & 0 \\ 0 & 0 & \kappa_{*} \Delta_{t} \end{array}\right)\succ 0.$$ Then the law of $\mathbf{y}^{t+1}$ conditional on $(\mathbf{H},\mathbf{X}_{t+1},\mathbf{S}_{t+1},\mathbf{Y}_t)$ is given by its law conditioned on $\mathbf{U}=\mathbf{O}^\top \mathbf{V}$, which is $$\label{eq:SEconv4} \mathbf{y}^{t+1}\big|_{\mathbf{U}=\mathbf{O}^\top \mathbf{V}}=\mathbf{O}^{\top}\bm{\Lambda} \mathbf{s}^{t+1}\big|_{\mathbf{U}=\mathbf{O}^{\top} V} \stackrel{L}{=} \mathbf{U}\left(\mathbf{V}^{\top} \mathbf{V}\right)^{-1} \mathbf{V}^{\top} \bm{\Lambda} \mathbf{s}^{t+1}+\bm{\Pi}_{U^{\perp}} \tilde{\mathbf{O}} \bm{\Pi}_{V^{\perp}}^{\top} \bm{\Lambda} \mathbf{s}^{t+1}$$ where $\tilde{\mathbf{O}} \sim \mathop{\mathrm{Haar}}(\mathbb{O}(p-(2 t+2)))$. From the convergence of $(\mathbf{H},\mathbf{S}_{t+1})$ already shown, almost surely as $p \rightarrow \infty$, $$\left(n^{-1} \mathbf{V}^{\top} \mathbf{V}\right)^{-1} \rightarrow\left(\begin{array}{ccc} b_{*} & 0 & 0 \\ 0 & \Delta_{t+1} & 0 \\ 0 & 0 & \kappa_{*} \Delta_{t} \end{array}\right)^{-1}, \quad n^{-1} \mathbf{V}^{\top} \bm{\Lambda} \mathbf{s}^{t+1}\rightarrow\left(\begin{array}{c} 0 \\ 0 \\ \kappa_{*} \delta_{t} \end{array}\right).$$ Then by ([\[eq:SEconv4\]](#eq:SEconv4){reference-type="ref" reference="eq:SEconv4"}) and Propositions [Proposition 28](#prop:combW){reference-type="ref" reference="prop:combW"} and [Proposition 30](#prop:orthoW){reference-type="ref" reference="prop:orthoW"}, $$\left(\mathbf{H},\mathbf{X}_{t+1}, \mathbf{S}_{t+1}, \mathbf{Y}_{t}, \mathbf{y}^{t+1}\right) \stackrel{W_2}{\to}\left(\mathsf{H}, \mathsf{X}_{1}, \ldots, \mathsf{X}_{t+1}, \mathsf{~S}_{1}, \ldots, \mathsf{S}_{t+1}, \mathsf{Y}_{1}, \ldots \mathsf{Y}_t, \sum_{r=1}^{t} \mathsf{Y}_{r}\left(\Delta_{t}^{-1} \delta_{t}\right)_{r}+\mathsf{U}_{t+1}\right)$$ where $\mathsf{U}_{t+1}$ is the limit of the second term on the right side of ([\[eq:SEconv4\]](#eq:SEconv4){reference-type="ref" reference="eq:SEconv4"}), which is Gaussian and independent of $\mathsf{H},\mathsf{S}_{1}, \ldots, \mathsf{S}_{t+1}, \mathsf{Y}_{1}, \ldots \mathsf{Y}_{t}$. Setting $\mathsf{Y}_{t+1}:=\sum_{r=1}^{t} \mathsf{Y}_{r}\left(\Delta_{t}^{-1} \delta_{t}\right)_{r}+\mathsf{U}_{t+1}$, it follows that $(\mathsf{Y}_1,\ldots,\mathsf{Y}_{t+1})$ remains independent of $\mathsf{H}$ and $(\mathsf{S}_1,\ldots,\mathsf{S}_{t+1})$. We may check that $\mathbb{E}\mathsf{Y}_{t+1}(\mathsf{Y}_1,\ldots,\mathsf{Y}_t)=\kappa_* \delta_t$, and we have also $n^{-1}\|\mathbf{y}^{t+1}\|^{2}=n^{-1}\|\bm{\Lambda} \mathbf{s}^{t+1}\|^{2} \rightarrow \kappa_{*}\delta_*$ so $\mathbb{E}\mathsf{Y}_{t+1}^2 =\kappa_{*}\delta_*$. From $\Delta_{t+1} \succ 0$ and the Schur complement formula, note that $\sum_{r=1}^{t} \mathsf{Y}_{r}\left(\Delta_{t}^{-1} \delta_{t}\right)_{r}$ has variance $\kappa_{*} \delta_{t}^{\top} \Delta_{t}^{-1} \delta_{t}$ which is strictly smaller than $\kappa_* \delta_*$, so $\mathsf{U}_{t+1}$ has strictly positive variance. 
This proves the first equation in [\[eq:extraSE\]](#eq:extraSE){reference-type="eqref" reference="eq:extraSE"} for $t+1$, and completes the proof of this second inductive statement. Finally, recall $x^{t+2}=F\left(\mathbf{y}^{t+1}+\mathbf{e}, \bm{\beta}^\star\right)$ where $F$ is Lipschitz. Then by Proposition [Proposition 27](#prop:contW){reference-type="ref" reference="prop:contW"}, almost surely $$\left(\mathbf{H},\mathbf{X}_{t+2}, \mathbf{S}_{t+1}, \mathbf{Y}_{t+1}\right) \stackrel{W_2}{\to}\left(\mathsf{H}, \mathsf{X}_{1}, \ldots, \mathsf{X}_{t+2}, \mathsf{~S}_{1}, \ldots, \mathsf{S}_{t+1}, \mathsf{Y}_{1}, \ldots, \mathsf{Y}_{t+1}\right)$$ where $\mathsf{X}_{t+2}=F\left(\mathsf{Y}_{t+1}+\mathsf{E}, \mathsf{B}^\star\right)$, showing the third inductive statement and completing the induction. ◻ *Proof of .* [\[consA\]](#consA){reference-type="eqref" reference="consA"} is a direct consequence of , [\[changeofv\]](#changeofv){reference-type="eqref" reference="changeofv"}, , , [\[x1t\]](#x1t){reference-type="eqref" reference="x1t"} and the fact that proximal map is 1-Lipschitz. To see [\[consC\]](#consC){reference-type="eqref" reference="consC"}, note that $$\mathbf{X}\mathbf{r}_{2t}-\mathbf{y}=\mathbf{Q}^{\top} \mathbf{D}\mathbf{O}\left(\mathbf{r}_{2t}-\bm{\beta}^\star\right)-\bm{\varepsilon}=\mathbf{Q}^{\top} \mathbf{D}\mathbf{s}^t-\bm{\varepsilon}$$ and thus almost surely $$\lim _{p \rightarrow \infty} \frac{1}{p}\left\|\mathbf{X}\mathbf{r}_{2t}-\mathbf{y}\right\|^2=\lim _{p \rightarrow \infty} \frac{1}{p} (\mathbf{s}^t)^{\top} \mathbf{D}^{\top} \mathbf{D}\mathbf{s}^t+\frac{1}{p}\|\bm{\varepsilon}\|_2^2- \frac{2}{p} (\mathbf{s}^t)^{\top} \mathbf{D}^{\top} \mathbf{Q}\bm{\varepsilon}=\tau_{**} \mathbb{E}\mathsf{D}^2+\delta.$$ ◻ *Proof of .* Recall that $\delta_{tt}=\delta_*$ for all $t \geq 1$ from Theorem [Proposition 37](#thm:ampSE){reference-type="ref" reference="thm:ampSE"}. Then $\delta_{s t}=\mathbb{E}\left[\mathsf{X}_{s} \mathsf{X}_{t}\right] \leq \sqrt{\mathbb{E}\left[\mathsf{X}_{s}^{2}\right] \mathbb{E}\left[\mathsf{X}_{t}^{2}\right]} =\delta_*$ for all $s,t \geq 1$. For $s=1$ and any $t \geq 2$, observe also that $$\label{eq:delta1t} \begin{gathered} \delta_{1t}=\mathbb{E} \mathsf{X}_{1} \mathsf{X}_{t}=\mathbb{E}\left[F\left(\mathsf{P}_{0}, \mathsf{B}^\star\right) F\left(\mathsf{Y}_{t-1}+\mathsf{E}, \mathsf{B}^\star\right)\right]=\mathbb{E}\left[\mathbb{E}\left[F\left(\mathsf{P}_{0}, \mathsf{B}^\star\right) F\left(\mathsf{Y}_{t-1}+\mathsf{E}, \mathsf{B}^\star\right) \mid \mathsf{B}^\star\right]\right] \\ =\mathbb{E}[\mathbb{E}\left[F\left(\mathsf{P}_{0}, \mathsf{B}^\star\right) \mid \mathsf{B}^\star\right]^{2}] \geq 0 \end{gathered}$$ where the last equality holds because $\mathsf{P}_{0}$, $\mathsf{Y}_{t-1}+\mathsf{E}$, and $\mathsf{B}^\star$ are independent, with $\mathsf{P}_0$ and $\mathsf{Y}_{t-1}+\mathsf{E}$ equal in law (by the identity $\sigma_*^2+b_*=\tau_{*}$). Consider now the map $\delta_{st} \mapsto \delta_{s+1,t+1}$. 
Recalling that $\mathbb{E}\mathsf{Y}_t^2=\sigma_{*}^{2}$ and $\mathbb{E}\mathsf{Y}_s\mathsf{Y}_t=\kappa_* \delta_{st}$, we may represent $$\left(\mathsf{Y}_{s}+\mathsf{E}, \mathsf{Y}_{t}+\mathsf{E}\right)\stackrel{L}{=}\left(\sqrt{\kappa_{*} \delta_{s t}+b_{*}} \mathsf{G}+\sqrt{\sigma_{*}^{2}-\kappa_{*} \delta_{s t}} \mathsf{G}^{\prime}, \sqrt{\kappa_{*} \delta_{s t}+b_{*}} \mathsf{G}+\sqrt{\sigma_{*}^{2}-\kappa_{*} \delta_{s t}} \mathsf{G}^{\prime \prime}\right)$$ where $\mathsf{G}, \mathsf{G}^{\prime}, \mathsf{G}^{\prime \prime}$ are jointly independent standard Gaussian variables. Denote $$\mathsf{P}_{\delta}^{\prime}:=\sqrt{\kappa_{*} \delta+b_{*}} \cdot \mathsf{G}+\sqrt{\sigma_{*}^{2}-\kappa_{*} \delta} \cdot \mathsf{G}^{\prime}, \quad \mathsf{P}_{\delta}^{\prime \prime}:=\sqrt{\kappa_{*} \delta+b_{*}} \cdot \mathsf{G}+\sqrt{\sigma_{*}^{2}-\kappa_{*} \delta} \cdot \mathsf{G}^{\prime \prime}$$ and define $g:\left[0,\delta_*\right] \to \mathbb{R}$ by $g(\delta):=\mathbb{E}\left[F\left(P_\delta',\mathsf{B}^\star\right) F\left(P_\delta'',\mathsf{B}^\star\right)\right]$. Then $\delta_{s+1,t+1}=g(\delta_{st})$. We claim that for any $\delta \in [0,\delta_*]$, we have $g(\delta) \geq 0$, $g'(\delta) \geq 0$, and $g''(\delta) \geq 0$. The first bound $g(\delta) \geq 0$ follows from $$g(\delta)=\mathbb{E}\Big[\mathbb{E}[F\left(\mathsf{P}_{\delta}^{\prime}, \mathsf{B}^\star\right) F\left(\mathsf{P}_{\delta}^{\prime \prime}, \mathsf{B}^\star\right) \mid \mathsf{B}^\star,\mathsf{G}]\Big]=\mathbb{E}\left[\mathbb{E}\left[F\left(\mathsf{P}_{\delta}^{\prime}, \mathsf{B}^\star\right) \mid \mathsf{B}^\star, \mathsf{G}\right]^{2}\right] \geq 0,$$ because $\mathsf{P}_\delta',\mathsf{P}_\delta''$ are independent and equal in law conditional on $\mathsf{G},\mathsf{B}^\star$. Differentiating in $\delta$ and applying Gaussian integration by parts, $$\begin{aligned} &g^{\prime}(\delta) =2 \mathbb{E}\left[F^{\prime}\left(\mathsf{P}_{\delta}^{\prime}, \mathsf{B}^\star\right) F\left(\mathsf{P}_{\delta}^{\prime \prime}, \mathsf{B}^\star\right)\left(\frac{\kappa_{*}}{2 \sqrt{\kappa_{*} \delta+b_{*}}} \cdot \mathsf{G}-\frac{\kappa_{*}}{2 \sqrt{\sigma_{*}^{2}-\kappa_{*} \delta}} \cdot \mathsf{G}^{\prime}\right)\right] \\ &=\frac{\kappa_{*}}{\sqrt{\kappa_{*} \delta+b_{*}}} \mathbb{E}\left[F^{\prime}\left(\mathsf{P}_{\delta}^{\prime}, \mathsf{B}^\star\right) F\left(\mathsf{P}_{\delta}^{\prime \prime}, \mathsf{B}^\star\right) \mathsf{G}\right]-\frac{\kappa_{*}}{\sqrt{\sigma_{*}^{2}-\kappa_{*} \delta}} \mathbb{E}\left[F^{\prime}\left(\mathsf{P}_{\delta}^{\prime}, \mathsf{B}^\star\right) F\left(\mathsf{P}_{\delta}^{\prime \prime}, \mathsf{B}^\star\right) \mathsf{G}^{\prime}\right] \\ &=\kappa_{*}\mathbb{E}\left[F^{\prime \prime}\left(\mathsf{P}_{\delta}^{\prime}, \mathsf{B}^\star\right) F\left(\mathsf{P}_{\delta}^{\prime \prime}, \mathsf{B}^\star\right)+F^{\prime}\left(\mathsf{P}_{\delta}^{\prime}, \mathsf{B}^\star\right) F^{\prime}\left(\mathsf{P}_{\delta}^{\prime \prime}, \mathsf{B}^\star\right)\right] -\kappa_{*}\mathbb{E}\left[F^{\prime \prime}\left(\mathsf{P}_{\delta}^{\prime}, \mathsf{B}^\star\right) F\left(\mathsf{P}_{\delta}^{\prime \prime}, \mathsf{B}^\star\right)\right] \\ &=\kappa_{*} \mathbb{E}\left[F^{\prime}\left(\mathsf{P}_{\delta}^{\prime}, \mathsf{B}^\star\right) F^{\prime}\left(\mathsf{P}_{\delta}^{\prime \prime}, \mathsf{B}^\star\right)\right]. 
\end{aligned}$$ Then $g'(\delta)=\kappa_*\mathbb{E}\left[\mathbb{E}[F'(\mathsf{P}_\delta',\mathsf{B}^\star) \mid \mathsf{G},\mathsf{B}^\star]^2\right] \geq 0$, and a similar argument shows $g''(\delta) \geq 0$. Observe that at $\delta=\delta_*$, we have $\mathsf{P}_{\delta_{*}}^{\prime}=\mathsf{P}_{\delta_{*}}^{\prime \prime}=\sqrt{\sigma_*^2+b_{*}} \cdot \mathsf{G}=\sqrt{\tau_{*}} \mathsf{G}$ which is equal in law to $\mathsf{P}\sim N(\rm{0},\tau_{*})$. Then $g(\delta_{*})=\mathbb{E}[F(\mathsf{P},\mathsf{B}^\star)^{2}]=\delta_{*}$ by . So $g:[0,\delta_*] \to [0,\delta_*]$ is a non-negative, increasing, convex function with a fixed point at $\delta_*$. We claim that $$\label{eq:gprimebound} g'(\delta_*)<1$$ This then implies that $\delta_*$ is the unique fixed point of $g(\cdot)$ over $[0,\delta_*]$, and $\lim_{t \to \infty} g^{(t)}(\delta)=\delta_*$ for any $\delta \in [0,\delta_*]$. Observe from ([\[eq:delta1t\]](#eq:delta1t){reference-type="ref" reference="eq:delta1t"}) that $\delta_{1t}=\delta_{12}$ for all $t \geq 2$, so $\delta_{t,t+s}=g^{(t-1)}(\delta_{1,1+s})=g^{(t-1)}(\delta_{12})$ for any $s \geq 1$. Then $\lim_{\min(s,t) \to \infty} \delta_{st}=\delta_*$ follows. It remains to show ([\[eq:gprimebound\]](#eq:gprimebound){reference-type="ref" reference="eq:gprimebound"}). Using , $$\label{combo1} \begin{aligned} g^{\prime}\left(\delta_*\right)=\kappa_* \mathbb{E}[ & \left.F^{\prime}\left(\mathsf{P}_{\delta_*}^{\prime}, \mathsf{B}^\star\right)^2\right]=\kappa_* \mathbb{E}\left[\left(\frac{\eta_*}{\eta_*-\gamma_*} \operatorname{Prox}_{\gamma_*^{-1} h}^{\prime}\left(\mathsf{P}_{\delta_*}^{\prime}+\mathsf{B}^\star\right)-\frac{\gamma_*}{\eta_*-\gamma_*}\right)^2\right] \\ & =\left(\frac{\eta_*}{\eta_*-\gamma_*}\right)^2 \kappa_* \mathbb{E}\left[\left(\operatorname{Prox}_{\gamma_*^{-1} h}^{\prime}\left(\mathsf{P}_{\delta_*}^{\prime}+\mathsf{B}^\star\right)-\frac{\gamma_*}{\eta_*}\right)^2\right] \\ & =\left(\frac{\eta_*}{\gamma_*}\right)^2\left(\mathbb{E} \frac{\eta_*^2}{\left(\mathsf{D}^2+\eta_*-\gamma_*\right)^2}-1\right) \mathbb{E}\left[\left(\operatorname{Prox}_{\gamma_*^{-1} h}^{\prime}\left(\mathsf{P}_{\delta_*}^{\prime}+\mathsf{B}^\star\right)\right)^2-\left(\frac{\gamma_*}{\eta_*}\right)^2\right]. \end{aligned}$$ Using (c), we obtain that $$\label{combo2} \begin{aligned} & R^{\prime}\left(\eta_*^{-1}\right)=-\left(\mathbb{E} \frac{1}{\left(\mathsf{D}^2+\eta_*-\gamma_*\right)^2}\right)^{-1}+\eta_*^2 \implies \frac{\eta_*^2}{\eta_*^2-R^{\prime}\left(\eta_*^{-1}\right)}=\mathbb{E} \frac{\eta_*^2}{\left(\mathsf{D}^2+\eta_*-\gamma_*\right)^2}. 
\end{aligned}$$ Note also that by Jensen's inequality and [\[RCc\]](#RCc){reference-type="eqref" reference="RCc"} that $$\label{combo5} \mathbb{E} \frac{\eta_*^2}{\left(\mathsf{D}^2+\eta_*-\gamma_*\right)^2}-1\ge 0$$ By and [\[eq:Jacprox\]](#eq:Jacprox){reference-type="eqref" reference="eq:Jacprox"}, we have $$\mathbb{E}\left[\left(\operatorname{Prox}_{\gamma_*^{-1} h}^{\prime}\left(\mathsf{P}_{\delta_*}^{\prime}+\mathsf{B}^\star\right)\right)^2\right]<\mathbb{E} \operatorname{Prox}_{\gamma_*^{-1} h}^{\prime}\left(\mathsf{P}_{\delta_*}^{\prime}+\mathsf{B}^\star\right)=\frac{\gamma_*}{\eta_*}.$$ This implies that $$\label{combo3} 0 \leq \mathbb{E}\left[\left(\operatorname{Prox}_{\gamma_*^{-1} h}^{\prime}\left(\mathsf{P}_{\delta_*}^{\prime}+\mathsf{B}^\star\right)\right)^2-\left(\frac{\gamma_*}{\eta_*}\right)^2\right]<\frac{\gamma_*}{\eta_*}-\left(\frac{\gamma_*}{\eta_*}\right)^2.$$ Combining [\[combo1\]](#combo1){reference-type="eqref" reference="combo1"},[\[combo2\]](#combo2){reference-type="eqref" reference="combo2"},[\[combo5\]](#combo5){reference-type="eqref" reference="combo5"} and [\[combo3\]](#combo3){reference-type="eqref" reference="combo3"} above, we obtain that $$g^{\prime}\left(\delta_*\right)<\left(\frac{R^{\prime}\left(\eta_*^{-1}\right)}{\eta_*^2-R^{\prime}\left(\eta_*^{-1}\right)}\right)\left(\frac{\eta_*}{\gamma_*}-1\right).$$ To show the rhs is less than 1, we observe that $$\label{combb} \begin{aligned} &\left(\frac{R^{\prime}\left(\eta_*^{-1}\right)}{\eta_*^2-R^{\prime}\left(\eta_*^{-1}\right)}\right)\left(\frac{\eta_*}{\gamma_*}-1\right)<1 \Leftrightarrow \frac{R^{\prime}\left(\eta_*^{-1}\right)}{\eta_*^2-R^{\prime}\left(\eta_*^{-1}\right)}<\frac{\eta_* \gamma_*}{\eta_*^2-\eta_* \gamma_*} \stackrel{(i)}{\Leftrightarrow} R^{\prime}\left(\eta_*^{-1}\right) \\ &<\eta_* \gamma_* \stackrel{(i i)}{\Leftrightarrow}-\frac{\eta_*^{-1} R^{\prime}\left(\eta_*^{-1}\right)}{R\left(\eta_*^{-1}\right)}<1 \end{aligned}$$ where in $(i)$ we used that $x \mapsto \frac{x}{\eta_*^2-x}$ is strictly increasing and in $(ii)$ we used [\[RCc\]](#RCc){reference-type="eqref" reference="RCc"}. Finally, we conclude the proof by noting that the rhs of [\[combb\]](#combb){reference-type="eqref" reference="combb"} holds true by , (d). ◻ *Proof of .* Note that $$\begin{aligned} & \lim _{(s, t) \rightarrow \infty}\left(\lim _{p \rightarrow \infty} \frac{1}{p}\left\|\mathbf{x}^t-\mathbf{x}^s\right\|^2\right)=\lim _{(s, t) \rightarrow \infty}\left(\delta_{s s}+\delta_{t t}-2 \delta_{s t}\right)=0 \\ &\lim _{(s, t) \rightarrow \infty}\left(\lim _{p \rightarrow \infty} \frac{1}{p}\left\|\mathbf{y}^t-\mathbf{y}^s\right\|^2\right)=\lim _{(s, t) \rightarrow \infty} \kappa_*\left(\delta_{s s}+\delta_{t t}-2 \delta_{s t}\right)=0 \end{aligned}$$ using . The convergence of iterates $\mathbf{r}_{1t}, \mathbf{r}_{2t}$ follows from $\mathbf{r}_{2t}=\mathbf{x}^t+\bm{\beta}^\star, \mathbf{r}_{1t}=\mathbf{y}^t+\bm{\beta}^\star+\mathbf{e}$. The convergence of $\hat{\mathbf{x}}_{1t}, \hat{\mathbf{x}}_{2t}$ follows from the fact they can be expressed as Lipschitz function applied to iterates $\mathbf{r}_{1, t-1}$ and $\mathbf{r}_{2t}$, i.e. [\[x1t\]](#x1t){reference-type="eqref" reference="x1t"} and [\[x2t\]](#x2t){reference-type="eqref" reference="x2t"}. ◻ ### Track regularized estimator using VAMP iterates {#appendix:trackiovamp} *Proof of .* We assume that $c_0>0$. Proof for the case where $d_->0$ is completely analogous. 
From strong convexity of the penalty function, almost surely, for all sufficiently large $p$, $$\label{eqconv} \mathcal{L}\left(\hat{\mathbf{x}}_{1t}\right) \geq \mathcal{L}(\hat{\bm{\beta}}) \geq \mathcal{L}\left(\hat{\mathbf{x}}_{1t}\right)+\left\langle\mathcal{L}^{\prime}\left(\hat{\mathbf{x}}_{1t}\right), \hat{\bm{\beta}}-\hat{\mathbf{x}}_{1t}\right\rangle+\frac{1}{2} c_0\left\|\hat{\bm{\beta}}-\hat{\mathbf{x}}_{1t}\right\|_2^2$$ holds for any $\mathcal{L}^{\prime}\left(\hat{\mathbf{x}}_{1t}\right) \in \mathbf{X}^{\top}\left(\mathbf{X}\hat{\mathbf{x}}_{1t}-\mathbf{y}\right)+\partial h\left(\hat{\mathbf{x}}_{1t}\right)$ where $\partial h$ denotes sub-gradients of $h$. So we can take $\mathcal{L}^{\prime}\left(\hat{\mathbf{x}}_{1t}\right)=\mathbf{X}^{\top}\left(\mathbf{X}\hat{\mathbf{x}}_{1t}-\mathbf{y}\right)+\gamma_*\left(\mathbf{r}_{1, t-1}-\hat{\mathbf{x}}_{1t}\right)$ because $\hat{\mathbf{x}}_{1t}=\operatorname{Prox}_{\gamma_*^{-1} h}\left(\mathbf{r}_{1, t-1}\right) \Leftrightarrow \mathbf{r}_{1, t-1}-\hat{\mathbf{x}}_{1t}\in \gamma_*^{-1} \partial h\left(\hat{\mathbf{x}}_{1t}\right) .$ By Cauchy-Schwartz inequality, we have that $$\label{aux1} \left\|\hat{\bm{\beta}}-\hat{\mathbf{x}}_{1t}\right\|_2 \leq \frac{2}{c_0}\left\|\mathcal{L}^{\prime}\left(\hat{\mathbf{x}}_{1t}\right)\right\|_2$$ Now note that $$\begin{aligned} \mathcal{L}^{\prime}\left(\hat{\mathbf{x}}_{1t}\right)&=\left(\mathbf{X}^{\top} \mathbf{X}-\gamma_* I\right) \hat{\mathbf{x}}_{1t}-\mathbf{X}^{\top} \mathbf{y}+\gamma_* \mathbf{r}_{1, t-1}\\ & \stackrel{(a)}{=}\left(1-\frac{\gamma_*}{\eta_*}\right)\left(\mathbf{X}^{\top} \mathbf{X}+\gamma_* \mathbf{I}_p\right)\left(\mathbf{r}_{2t}-\mathbf{r}_{2,t-1}\right)+\left(\mathbf{X}^{\top} \mathbf{X}+\left(\eta_*-\gamma_*\right) \mathbf{I}_p\right) \hat{\mathbf{x}}_{2,t-1}\\ & \qquad \qquad -\mathbf{X}^{\top} \mathbf{y}-\left(\eta_*-\gamma_*\right) \mathbf{r}_{2,t-1}\\ & \stackrel{(b)}{=}\left(1-\frac{\gamma_*}{\eta_*}\right)\left(\mathbf{X}^{\top} \mathbf{X}+\gamma_* \mathbf{I}_p\right)\left(\mathbf{r}_{2t}-\mathbf{r}_{2,t-1}\right) \end{aligned}$$ where we used in $(a)$ $$\label{aux2} \hat{\mathbf{x}}_{1t}=\left(1-\frac{\gamma_*}{\eta_*}\right)\left(\mathbf{r}_{2t}-\mathbf{r}_{2,t-1}\right)+\hat{\mathbf{x}}_{2,t-1}$$ which follows from [\[r1t\]](#r1t){reference-type="eqref" reference="r1t"},[\[r2t\]](#r2t){reference-type="eqref" reference="r2t"} and in $(b)$, $$\left(\mathbf{X}^{\top} \mathbf{X}+\left(\eta_*-\gamma_*\right) \mathbf{I}_p\right) \hat{\mathbf{x}}_{2,t-1}=\mathbf{X}^{\top} \mathbf{y}+\left(\eta_*-\gamma_*\right) \mathbf{r}_{2,t-1}$$ which follows from [\[x2t\]](#x2t){reference-type="eqref" reference="x2t"}. 
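As an aside, the relations [\[x1t\]](#x1t){reference-type="eqref" reference="x1t"}--[\[r2t\]](#r2t){reference-type="eqref" reference="r2t"} and [\[aux2\]](#aux2){reference-type="eqref" reference="aux2"} used above can be assembled into a concrete iteration. The Python sketch below is one such assembly for a lasso-type penalty $h(\bm{\beta})=\lambda\|\bm{\beta}\|_1$, with the oracle parameters $\gamma_*,\eta_*$ treated as given inputs; the update for $\mathbf{r}_{2t}$ is written in a form consistent with [\[aux2\]](#aux2){reference-type="eqref" reference="aux2"} and the rearrangements above rather than copied from the original definition of the iterates, so it should be read as an illustrative sketch and not as the paper's algorithm. By the proximal characterization of $\hat{\bm{\beta}}$, the regularized estimator is a fixed point of this map for any $0<\gamma<\eta$.

```python
import numpy as np

def soft(v, thr):
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

def oracle_vamp_lasso(X, y, lam, gamma, eta, T=200):
    """Sketch of an iteration consistent with (x1t)-(r2t) for h = lam*||.||_1,
    with oracle values gamma (= gamma_*) and eta (= eta_*) supplied by the user."""
    n, p = X.shape
    Xty = X.T @ y
    A = np.linalg.inv(X.T @ X + (eta - gamma) * np.eye(p))   # resolvent used in (x2t)
    r1 = np.zeros(p)                                          # r_{1,0}
    for _ in range(T):
        x1 = soft(r1, lam / gamma)                       # x_hat_{1t} = Prox_{gamma^{-1} h}(r_{1,t-1})
        r2 = (eta * x1 - gamma * r1) / (eta - gamma)     # consistent with (aux2)
        x2 = A @ (Xty + (eta - gamma) * r2)              # (x2t)
        r1 = (eta * x2 - (eta - gamma) * r2) / gamma     # = x2 + X^T(y - X x2)/gamma, cf. the rearranged (r1t)
    return x1, x2, r1, r2
```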
We thus have that almost surely $$\lim _{t \rightarrow \infty} \lim _{p \rightarrow \infty} \frac{1}{p}\left\|\mathcal{L}^{\prime}\left(\hat{\mathbf{x}}_{1t}\right)\right\|_2^2\le \lim _{t \rightarrow \infty} \lim _{p \rightarrow \infty}\left(1-\frac{\gamma_*}{\eta_*}\right)\left\|\mathbf{X}^{\top} \mathbf{X}+\gamma_* \mathbf{I}_p \right\|_{\mathrm{op}}^2 \cdot \frac{1}{p}\left\|\mathbf{r}_{2t}-\mathbf{r}_{2,t-1}\right\|_2^2=0$$ which along with [\[aux1\]](#aux1){reference-type="eqref" reference="aux1"} implies that $$\label{x1tcon} \lim _{t \rightarrow \infty} \lim _{p \rightarrow \infty} \frac{1}{p}\left\|\hat{\bm{\beta}}-\hat{\mathbf{x}}_{1t}\right\|_2^2=0$$ By [\[aux2\]](#aux2){reference-type="eqref" reference="aux2"}, we also have that $$\label{x2tcon} \lim _{t \rightarrow \infty} \lim _{p \rightarrow \infty} \frac{1}{p}\left\|\hat{\bm{\beta}}-\hat{\mathbf{x}}_{2t}\right\|_2^2=0.$$ Rearranging [\[x1t\]](#x1t){reference-type="eqref" reference="x1t"}---[\[r1t\]](#r1t){reference-type="eqref" reference="r1t"}, we have $$\mathbf{r}_{2t}=\hat{\mathbf{x}}_{2t}+\frac{1}{\left(\eta_*-\gamma_*\right)} \mathbf{X}^{\top}\left(\mathbf{X}\hat{\mathbf{x}}_{2t}-\mathbf{y}\right), \quad \mathbf{r}_{1t}=\hat{\mathbf{x}}_{2t}+\frac{1}{\gamma_*} \mathbf{X}^{\top}\left(\mathbf{y}-\mathbf{X}\hat{\mathbf{x}}_{2t}\right)$$ which along with [\[x1tcon\]](#x1tcon){reference-type="eqref" reference="x1tcon"}, [\[x2tcon\]](#x2tcon){reference-type="eqref" reference="x2tcon"} implies that $$\lim _{t \rightarrow \infty} \lim _{p \rightarrow \infty} \frac{1}{p}\left\|\mathbf{r}_{1t}-\mathbf{r}_{*}\right\|_2^2=\lim _{t \rightarrow \infty} \lim _{p \rightarrow \infty} \frac{1}{p}\left\|\mathbf{r}_{2t}-\mathbf{r}_{*}\right\|_2^2=0.$$ ◻ ## Supporting proofs for result B {#SupportPII} ### Properties of sample adjustment equation {#appendix:sampleadjeq} *Proof of .* We can write $g_p(\gamma)$ as $$\label{mao} \begin{aligned} g_p(\gamma)=&\frac{1}{p} \sum_{i: d_i \neq 0} \frac{1}{\frac{1}{p}\left(\sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq+\infty, 0} \frac{d_i^2-\gamma}{\gamma+h^{\prime \prime}\left(\hat{{\beta}}_j\right)}+\sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right)=0} \frac{d_i^2-\gamma}{\gamma}\right)+1}\\ &+\frac{1}{p}\sum_{i: d_i=0} \frac{1}{\frac{1}{p}\left(\sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq 0,+\infty} \frac{-\gamma}{\gamma+h^{\prime \prime}\left(\hat{{\beta}}_j\right)}-\sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right)=0} 1\right)+1} \end{aligned}.$$ Let us first consider the case where $d_i \neq 0$ for all $i$. 
In this case, only the first sum remains and the denominators of the summands are $$\begin{aligned} &\frac{d_i^2}{p}\left(\sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq+\infty, 0} \frac{1}{\gamma+h^{\prime \prime}\left(\hat{{\beta}}_j\right)}+\sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right)=0} \frac{1}{\gamma}\right)\\ &\qquad+1-\frac{1}{p}\left(\sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq+\infty, 0} \frac{\gamma}{\gamma+h^{\prime \prime}\left(\hat{{\beta}}_j\right)}+\sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right)=0} 1\right) \end{aligned}$$ Observe that $$1 -\frac{1}{p}\left(\sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq+\infty, 0} \frac{\gamma}{\gamma+h^{\prime \prime}\left(\hat{{\beta}}_j\right)}+\sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right)=0} 1\right) \geq 0$$ and $$\begin{aligned} & \sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq+\infty, 0} \frac{1}{\gamma+h^{\prime \prime}\left(\hat{{\beta}}_j\right)}+\sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right)=0} \frac{1}{\gamma}=0 \\ &\qquad \qquad \Leftrightarrow \sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq+\infty, 0} \frac{\gamma}{\gamma+h^{\prime \prime}\left(\hat{{\beta}}_j\right)}+\sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right)=0} 1=0. \end{aligned}$$ These two observations and the assumption that $d_i \neq 0$ for all $i$ imply that each of these denominators is strictly positive, so that $g_p$ is well-defined on $(0,+\infty)$. For the case where $d_i=0$ for some $i$, all the denominators in [\[mao\]](#mao){reference-type="eqref" reference="mao"} are non-zero (and thus $g_p$ is well-defined on $(0,+\infty)$) if $$\label{sdfaldfk} 1-\frac{1}{p} \sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq 0,+\infty} \frac{\gamma}{\gamma+h^{\prime \prime}\left(\hat{{\beta}}_j\right)}-\frac{1}{p} \sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right)=0} 1>0$$ which is equivalent to $\frac{1}{p} \sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq 0} 1>\frac{1}{p} \sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq 0,+\infty} \frac{\gamma}{\gamma+h^{\prime \prime}\left(\hat{{\beta}}_j\right)}$. The condition [\[sdfaldfk\]](#sdfaldfk){reference-type="eqref" reference="sdfaldfk"} is also necessary when $\exists i\in [p], d_i>0$. Meanwhile, we have that $$\frac{1}{p} \sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq 0} 1 \stackrel{(a)}{\geq} \frac{1}{p} \sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq 0,+\infty} 1 \stackrel{(b)}{\geq} \frac{1}{p} \sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq 0,+\infty} \frac{\gamma}{\gamma+h^{\prime \prime}\left(\hat{{\beta}}_j\right)}.$$ Therefore, [\[sdfaldfk\]](#sdfaldfk){reference-type="eqref" reference="sdfaldfk"} holds if and only if at least one of $(a),(b)$ is strict. Note that $(a)$ is strict if and only if $\frac{1}{p} \sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right)=+\infty} 1>0$ and $(b)$ is strict if and only if $$\frac{1}{p} \sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq 0,+\infty}\left(1-\frac{\gamma}{\gamma+h^{\prime \prime}\left(\hat{{\beta}}_j\right)}\right)>0 \Leftrightarrow \frac{1}{p} \sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq 0,+\infty} 1>0.$$ Note that $\frac{1}{p} \sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq 0} 1>0$ if and only if $\frac{1}{p} \sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq 0,+\infty} 1>0$ or $\frac{1}{p} \sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right)=+\infty} 1>0$. 
This shows that [\[sdfaldfk\]](#sdfaldfk){reference-type="eqref" reference="sdfaldfk"} holds if and only if there exists some $i \in[p]$ such that $h^{\prime \prime}\left(\hat{{\beta}}_i\right) \neq 0$. The latter statement holds if $\|d\|_0+\left\|h^{\prime \prime}(\hat{\bm{\beta}})\right\|_0>p$. From now on, suppose that $g_p$ is well-defined. It follows from [\[sdfaldfk\]](#sdfaldfk){reference-type="eqref" reference="sdfaldfk"} that it is differentiable. Taking derivative of [\[sdfaldfk\]](#sdfaldfk){reference-type="eqref" reference="sdfaldfk"} yields $$\begin{aligned} g^{\prime}_p(\gamma)= &\frac{1}{p} \sum_{i: d_i \neq 0} \frac{\frac{1}{p}\left(\sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq+\infty, 0} \frac{h^{\prime \prime}\left(\hat{{\beta}}_j\right)+d_i^2}{\left(\gamma+h^{\prime \prime}\left(\hat{{\beta}}_j\right)\right)^2}+\sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right)=0} \frac{d_i^2}{\gamma^2}\right)}{\left(\frac{1}{p}\left(\sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq+\infty, 0} \frac{d_i^2-\gamma}{\gamma+h^{\prime \prime}\left(\hat{{\beta}}_j\right)}+\sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right)=0} \frac{d_i^2-\gamma}{\gamma}\right)+1\right)^2} \\ &+\frac{1}{p} \sum_{i: d_i=0} \frac{\frac{1}{p}\left(\sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq 0,+\infty} \frac{h^{\prime \prime}\left(\hat{{\beta}}_j\right)}{\left(\gamma+h^{\prime \prime}\left(\hat{{\beta}}_j\right)\right)^2}\right)}{\left(\frac{1}{p}\left(\sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq 0,+\infty} \frac{-\gamma}{\gamma+h^{\prime \prime}\left(\hat{{\beta}}_j\right)}-\sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right)=0} 1\right)+1\right)^2}>0 \end{aligned}$$ We claim that given $\gamma \mapsto g(\gamma)$ is well-defined, $g_p^{\prime}(\gamma)>0, \forall \gamma \in(0,+\infty)$ if and only if for some $j, \frac{1}{p} \sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq+\infty} 1>0$. Note that if $\frac{1}{p} \sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq 0,+\infty} 1>0$, then $$\frac{1}{p} \sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq 0,+\infty} \frac{h^{\prime \prime}\left(\hat{{\beta}}_j\right)}{\left(\gamma+h^{\prime \prime}\left(\hat{{\beta}}_j\right)\right)^2}>0$$ and the above will be positive. Also note that if $\frac{1}{p} \sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right)=0} 1>0$, then the assumption $D \neq$ 0 implies that there exists some $i \in[p]$ such that $\frac{1}{p} \sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right)=0} \frac{d_i^2}{\gamma^2}>0$ and the above will be positive. Note that $\frac{1}{p} \sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq+\infty} 1>0$ if and only if $\frac{1}{p} \sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq 0,+\infty} 1>0$ or $\frac{1}{p} \sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right)=0} 1>0$. Therefore, the positivity of the above follows from the assumption that there exists some $j \in[p]$ such that $h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq+\infty$. 
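As an illustration of this case analysis, $g_p$ in [\[mao\]](#mao){reference-type="eqref" reference="mao"} can be evaluated directly under the conventions $1/(\gamma+\infty)=0$ and $d_i=0$ allowed, and, using the strict monotonicity just established, the sample adjustment equation [\[solution\]](#solution){reference-type="eqref" reference="solution"} can be solved by bisection. A minimal Python sketch; the function names and the bracketing interval are our hypothetical choices, and the conditions ensuring well-definedness and strict monotonicity are assumed to hold.

```python
import numpy as np

def g_p(gamma, d, h2):
    """Empirical adjustment function (mao); h2 = h''(beta_hat) entrywise,
    entries may be 0 or np.inf, and d may contain zeros."""
    inner = np.mean(1.0 / (gamma + h2))            # 1/(gamma + inf) evaluates to 0
    return np.mean(1.0 / ((d**2 - gamma) * inner + 1.0))

def solve_adjustment(d, h2, lo=1e-10, hi=1e10, iters=200):
    """Bisection for g_p(gamma) = 1, valid because g_p is strictly increasing
    with g_p(0+) < 1 < g_p(+infinity) under the stated conditions."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g_p(mid, d, h2) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```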
Conversely, if $h^{\prime \prime}\left(\hat{{\beta}}_j\right)=+\infty$ for all $j$, then $g_p(\gamma)=1$ for all $\gamma \in(0,+\infty)$. Note that if $\left\|h^{\prime \prime}(\hat{\bm{\beta}})\right\|_0<p$ and for all $i, d_i\neq 0$, $\lim _{\gamma \rightarrow 0} g_p(\gamma)=0$; if $\left\|h^{\prime \prime}(\hat{\bm{\beta}})\right\|_0=0$ and for some $i, d_i=0$, $g_p$ is not well-defined per the discussion above; if $0<\left\|h^{\prime \prime}(\hat{\bm{\beta}})\right\|_0<p$ and for some $i, d_i=0$, $$\lim _{\gamma \rightarrow 0} g_p(\gamma)=\frac{p-\|d\|_0}{\left\|h^{\prime \prime}(\hat{\bm{\beta}})\right\|_0}<1$$ given that $\|d\|_0+\left\|h^{\prime \prime}(\hat{\bm{\beta}})\right\|_0>p$; if $\left\|h^{\prime \prime}(\hat{\bm{\beta}})\right\|_0=p,$ $$\lim _{\gamma \rightarrow 0} g_p(\gamma)=\frac{1}{p}\left(\sum_{i: d_i \neq 0} \frac{1}{\frac{1}{p}\left(\sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq+\infty, 0 } h^{\prime \prime}\left(\hat{{\beta}}_j\right)\right)+1}+\sum_{i: d_i=0} 1\right)<1$$ since $\mathbf{D}\neq 0$. We also have that $$\lim _{\gamma \rightarrow+\infty} g_p(\gamma)=\frac{1}{1-\left(\frac{1}{p} \sum_{j: h^{\prime \prime}\left(\hat{{\beta}}_j\right) \neq+\infty}1\right)} \in(1,+\infty]$$ if for some $i$, $h^{\prime \prime}\left(\hat{{\beta}}_i\right) \neq+\infty$. The proof is complete after combining these facts. ◻ ### Population limit of the adjustment equation {#appendix:popuadjeq} *Proof of .* We can write $g_\infty (\gamma)$ as $$\label{zdgm} \begin{aligned} & g_\infty(\gamma)=\mathbb{E} \frac{\mathbb{I}\left(\mathsf{D}^2>0\right)}{\left(\mathsf{D}^2-\gamma\right) \mathbb{E} \frac{\mathbb{I}(\mathsf{U}\neq+\infty, 0)}{\gamma+\mathsf{U}}+\left(\mathsf{D}^2-\gamma\right) \frac{1}{\gamma} \mathbb{P}(\mathsf{U}=0)+1}\\ & \qquad \qquad \qquad +\frac{\mathbb{P}\left(\mathsf{D}^2=0\right)}{\mathbb{E} \frac{-\gamma \mathbb{I}(\mathsf{U}\neq+\infty, 0)}{\gamma+\mathsf{U}}-\mathbb{P}(\mathsf{U}=0)+1} \end{aligned}$$ Note that the denominators of both terms in [\[zdgm\]](#zdgm){reference-type="eqref" reference="zdgm"} are non-zero (and thus $g_\infty$ is well-defined) if $$\label{zdgm1} 1-\mathbb{E}\frac{\gamma \mathbb{I}(\mathsf{U}\neq+\infty, 0)}{\gamma+\mathsf{U}}-\mathbb{P}(\mathsf{U}=0)>0$$ which is equivalent to $\mathbb{P}(\mathsf{U}\neq 0)>\mathbb{E}\frac{\gamma \mathbb{I}(\mathsf{U}\neq+\infty, 0)}{\gamma+\mathsf{U}}$. Meanwhile, we have that $$\mathbb{P}(\mathsf{U}\neq 0) \stackrel{(a)}{\geq} \mathbb{P}(\mathsf{U}\neq 0,+\infty) \stackrel{(b)}{\geq} \mathbb{E}\frac{\gamma \mathbb{I}(\mathsf{U}\neq+\infty, 0)}{\gamma+\mathsf{U}}$$ Therefore, [\[zdgm1\]](#zdgm1){reference-type="eqref" reference="zdgm1"} holds if at least one of $(a),(b)$ is strict. Note that $(a)$ is strict if and only if $\mathbb{P}(\mathsf{U}=+\infty)>0$ and $(b)$ is strict if and only if $$\mathbb{E}\mathbb{I}(\mathsf{U}\neq+\infty, 0)\left(1-\frac{\gamma}{\gamma+\mathsf{U}}\right)>0 \Leftrightarrow \mathbb{P}(\mathsf{U}\neq 0,+\infty)>0.$$ Note that $\mathbb{P}(\mathsf{U}\neq 0)>0$ if and only if $\mathbb{P}(\mathsf{U}\neq 0,+\infty)>0$ or $\mathbb{P}(\mathsf{U}=+\infty)>0$. This shows that [\[zdgm1\]](#zdgm1){reference-type="eqref" reference="zdgm1"} holds and thus $g_\infty$ is well-defined since $\mathbb{P}(\mathsf{U}\neq 0)>0$ by . 
It follows from [\[RCa\]](#RCa){reference-type="eqref" reference="RCa"}, [\[RCc\]](#RCc){reference-type="eqref" reference="RCc"} and [\[eq:Jacprox\]](#eq:Jacprox){reference-type="eqref" reference="eq:Jacprox"} that $\gamma_*$ is a solution of the equation $g_\infty(\gamma)=1$ . We prove that $\gamma_*$ is a unique solution by showing $g_\infty$ is strictly increasing. Applying [@talagrand2010mean Proposition A.2.1], we obtain that $g_\infty$ is differentiable and can be differentiated inside the expectation as follows $$\begin{aligned} g_{\infty}^{\prime}(\gamma)=&\mathbb{E} \frac{\mathbb{I}\left(\mathsf{D}^2>0\right)\left(\mathbb{E} \frac{\mathsf{U}\mathbb{I}(\mathsf{U}\neq+\infty, 0)}{(\gamma+\mathsf{U})^2}+\mathsf{D}^2 \mathbb{E} \frac{\mathbb{I}(\mathsf{U}\neq+\infty, 0)}{(\gamma+\mathsf{U})^2}+\left(\mathsf{D}^2 \frac{1}{\gamma^2}\right) \mathbb{P}(\mathsf{U}=0)\right)}{\left(\left(\mathsf{D}^2-\gamma\right) \mathbb{E} \frac{\mathbb{I}(\mathsf{U}\neq+\infty, 0)}{\gamma+\mathsf{U}}+\left(\mathsf{D}^2-\gamma\right) \frac{1}{\gamma} \mathbb{P}(\mathsf{U}=0)+1\right)^2} \\ &+\mathbb{E} \frac{\mathbb{I}\left(\mathsf{D}^2=0\right)\left(\mathbb{E} \frac{\mathsf{U}\mathbb{I}(\mathsf{U}\neq+\infty, 0)}{(\gamma+\mathsf{U})^2}\right)}{\left(\mathbb{E} \frac{-\gamma \mathbb{I}(\mathsf{U}\neq+\infty, 0)}{\gamma+\mathsf{U}}-\mathbb{P}(\mathsf{U}=0)+1\right)^2} \end{aligned}$$ To prove $g_{\infty}^{\prime}(\gamma)>0, \forall \gamma \in(0,+\infty)$, note that if $\mathbb{P}(\mathsf{U}\neq+\infty, 0)>0$, then $\mathbb{E} \frac{\mathsf{U}\mathbb{I}(\mathsf{U}\neq+\infty, 0)}{(\gamma+\mathsf{U})^2}>0$ and the above will be positive. Also note that if $\mathbb{P}(\mathsf{U}=0)>0$, then $\mathbb{I}\left(\mathsf{D}^2>0\right)\left(\mathsf{D}^2 \frac{1}{\gamma^2}\right) \mathbb{P}(\mathsf{U}=0)>$ 0 with positive probability and the above will be positive. Note that $\mathbb{P}(\mathsf{U}\neq+\infty)>0$ if and only if $\mathbb{P}(\mathsf{U}\neq 0$ and $\mathsf{U}\neq+\infty)>0$ or $\mathbb{P}(\mathsf{U}=0)>0$. Therefore, the positivity of $g_\infty^{\prime}(\gamma)$ follows from $\mathbb{P}(\mathsf{U}\neq+\infty)>0$ which holds by . The proof is now complete. ◻ *Proof of .* We first note that $$\label{651} \hat{\bm{\beta}}=\operatorname{Prox}_{\gamma_*^{-1} h}\left(\mathbf{r}_{*}\right).$$ This follows from $\mathbf{r}_{*}\in \hat{\bm{\beta}}+\frac{1}{\gamma_*} \partial h(\hat{\bm{\beta}})$ and the equivalence relation $\mathbf{r}_{*}\in \hat{\bm{\beta}}+\frac{1}{\gamma_*} \partial h(\hat{\bm{\beta}}) \Leftrightarrow \hat{\bm{\beta}}=\operatorname{Prox}_{\gamma_*^{-1} h}\left(\mathbf{r}_{*}\right)$. The former is a consequence of the KKT condition $\mathbf{X}^{\top}(\mathbf{y}-\mathbf{X}\hat{\bm{\beta}}) \in \partial h(\hat{\bm{\beta}})$ and the latter follows from , $(a)$. Also note that for any $\gamma>0$, $$\label{sa51} \mathbb{P}\left(\sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star\in\left\{x \in \mathbb{R}: \frac{1}{\gamma+ h^{\prime \prime}\left(\operatorname{Prox}_{\gamma_*^{-1} h}(x)\right)} \text { is continuous at } x\right\}\right)=1$$ which follows from that $x\mapsto \frac{1}{\gamma+ h^{\prime \prime}\left(\operatorname{Prox}_{\gamma_*^{-1} h}(x)\right)}$ has only finitely many discontinuities (cf. ) and that $\tau_{*}>0$. 
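The characterization [\[651\]](#651){reference-type="eqref" reference="651"} rests only on the KKT condition and, once $\mathbf{r}_{*}$ is rebuilt with an arbitrary $\gamma>0$ in place of $\gamma_*$, the same identity holds for that $\gamma$. A minimal numerical sketch, assuming purely for illustration a lasso penalty $h(\beta)=\lambda|\beta|$ solved by plain ISTA; all constants and names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, lam = 300, 120, 2.0
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta_star = rng.standard_normal(p) * (rng.random(p) < 0.2)
y = X @ beta_star + rng.standard_normal(n)

def soft(v, thr):
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

# ISTA for the objective 0.5*||y - X b||^2 + lam*||b||_1
L = np.linalg.norm(X, 2) ** 2              # Lipschitz constant of the smooth part
beta_hat = np.zeros(p)
for _ in range(5000):
    beta_hat = soft(beta_hat + X.T @ (y - X @ beta_hat) / L, lam / L)

# KKT: X^T(y - X beta_hat) lies in the subdifferential of h at beta_hat, so for any gamma > 0
# beta_hat = Prox_{gamma^{-1} h}(beta_hat + X^T(y - X beta_hat)/gamma), the analogue of (651).
gamma = 0.7
r = beta_hat + X.T @ (y - X @ beta_hat) / gamma
print(np.max(np.abs(beta_hat - soft(r, lam / gamma))))   # ~ 0 up to solver tolerance
```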
Then, almost surely, $$\begin{aligned} \lim _{p \rightarrow \infty} \frac{1}{p} \sum_{i=1}^p \frac{1}{\gamma+h^{\prime \prime}\left(\hat{{\beta}}_i\right)} & \stackrel{(a)}{=} \lim _{p \rightarrow \infty} \frac{1}{p} \sum_{j=1}^p \frac{1}{\gamma+h^{\prime \prime}\left(\operatorname{Prox}_{\gamma_*^{-1} h}\left(r_{*, j}\right)\right)}\\ &\stackrel{(b)}{=} \mathbb{E} \frac{1}{\gamma+h^{\prime \prime}\left(\operatorname{Prox}_{\gamma_*^{-1} h}\left(\sqrt{\tau_{*}} \mathsf{Z}+\mathsf{B}^\star\right)\right)} \end{aligned}$$ where $(a)$ follows from [\[651\]](#651){reference-type="eqref" reference="651"} and $(b)$ follows from , and [\[sa51\]](#sa51){reference-type="eqref" reference="sa51"}. This, along with and , implies that almost surely $$\label{thisffdf} \left(\operatorname{diag}\left(\mathbf{D}^{\top} \mathbf{D}\right)-\gamma\right)\left(\frac{1}{p} \sum_{j=1}^p \frac{1}{\gamma+h^{\prime \prime}\left(\hat{{\beta}}_j\right)}\right) \stackrel{W_2}{\rightarrow}\left(\mathsf{D}^2-\gamma\right) \mathbb{E} \frac{1}{\gamma+\mathsf{U}}.$$ The almost sure convergence [\[desired\]](#desired){reference-type="eqref" reference="desired"} follows from [\[thisffdf\]](#thisffdf){reference-type="eqref" reference="thisffdf"}, and the fact from that $1+\left(\mathsf{D}^2-\gamma\right) \mathbb{E} \frac{1}{\gamma+\mathsf{U}}>0$ almost surely. ◻ # Proofs for inference procedures ## Hypothesis testing and confidence intervals {#appendix:testing} *Proof of .* To see $(a)$, we have that almost surely $$\begin{gathered} \lim _{p\to \infty} \frac{\frac{1}{p} \sum_{j=1}^{p} \mathbb{I}\left(P_j \leq \alpha, \beta_j^{\star}=0\right)}{\frac{1}{p} \sum_{j=1}^{p} \mathbb{I}\left(\beta_j^{\star}=0\right)}=\lim _{p\to \infty} \frac{\frac{1}{p} \sum_{j=1}^{p} \mathbb{I}\left(\left|\frac{\hat{r}_{*, j}-\beta_j^{\star}}{\sqrt{\hat{\tau}_{*}}}\right| \geq \Phi^{-1}\left(1-\frac{\alpha}{2}\right), \abs{\beta_j^{\star}} \leq \frac{\mu_0}{2}\right)}{\frac{1}{p} \sum_{j=1}^{p} \mathbb{I}\left(\abs{\beta_j^{\star}} \leq \frac{\mu_0}{2}\right)} \\ =\frac{\mathbb{P}\left(|\mathsf{Z}| \geq \Phi^{-1}\left(1-\frac{\alpha}{2}\right), \abs{\mathsf{B}^\star} \leq \frac{\mu_0}{2}\right)}{\mathbb{P}\left(\abs{\mathsf{B}^\star} \leq \frac{\mu_0}{2}\right)}=\mathbb{P}\left(|\mathsf{Z}| \geq \Phi^{-1}\left(1-\frac{\alpha}{2}\right)\right)=\alpha \end{gathered}$$ by and . Using exchangeability of the columns of $\mathbf{X}$, $$\mathbb{E} \frac{\frac{1}{p} \sum_{j=1}^{p} \mathbb{I}\left(P_j \leq \alpha, \beta_j^{\star}=0\right)}{\frac{1}{p} \sum_{j=1}^{p} \mathbb{I}\left(\beta_j^{\star}=0\right)}=\frac{\mathbb{P}\left(T_i=1\right) \frac{1}{p} \sum_{j=1}^{p} \mathbb{I}\left(\beta_j^{\star}=0\right)}{\frac{1}{p} \sum_{j=1}^{p} \mathbb{I}\left(\beta_j^{\star}=0\right)}=\mathbb{P}\left(T_i=1\right).$$ The coordinate-wise result follows from an application of the dominated convergence theorem. To see $(b)$, note that by and , almost surely $$\lim _{p\to \infty} \frac{1}{p} \sum_{i=1}^{p} \mathbb{I}\left({\beta}^\star_i\in \mathsf{CI}_i\right)=\lim _{p\to \infty} \frac{1}{p} \sum_{i=1}^{p} \mathbb{I}\left(a<\frac{{\beta}^\star_i-\hat{{\beta}}^u_i}{\sqrt{\hat{\tau}_{*}}}<b\right)=\mathbb{P}(a<\mathsf{Z}<b)=1-\alpha.$$ ◻ **Remark 48** (Asymptotic limit of TPR). Note that we can further calculate the exact asymptotic limit of the TPR as follows. 
Under the assumption of (a), we have that almost surely $$\begin{aligned} \lim _{p\to \infty} \mathsf{TPR}(p) &= \lim _{p\to \infty} \frac{\sum_{j=1}^{p} \mathbb{I}\left(P_j \leq \alpha,\left|\beta_j^{\star}\right| \geq \mu_0\right)}{\sum_{j=1}^p \mathbb{I}\left(\abs{\beta_j^{\star}} \geq \mu_0\right)}\\ &=\lim _{p\to \infty} \frac{\frac{1}{p} \sum_{j=1}^{p} \mathbb{I}\left(\left|\frac{r_{*, j}}{\sqrt{\hat{\tau}_{*}}}\right| \geq \Phi^{-1}\left(1-\frac{\alpha}{2}\right),\left|{\beta_j^{\star}}\right| \geq \mu_0\right)}{\frac{1}{p} \sum_{j=1}^{p} \mathbb{I}\left(\abs{\beta_j^{\star}} \geq \mu_0\right)} \\ &=\frac{\mathbb{P}\left(\left|\frac{1}{\sqrt{\tau_{*}}} \mathsf{B}^\star+\mathsf{Z}\right| \geq \Phi^{-1}\left(1-\frac{\alpha}{2}\right),\left|\mathsf{B}^\star\right| \geq \mu_0\right)}{\mathbb{P}\left(\left|\mathsf{B}^\star\right| \geq \mu_0\right)} \end{aligned}$$ where we used in the second line and . ## Single coordinate inference under exchangeability {#appendix:single} *Proof of .* We only show that $\frac{{r}_{*, i}-{\beta}^\star_i}{\sqrt{\tau_{*}}} \Rightarrow N(0,1)$ for $\mathbf{r}_*$ defined in [\[defr1r2\]](#defr1r2){reference-type="eqref" reference="defr1r2"}. [\[werbaoz\]](#werbaoz){reference-type="eqref" reference="werbaoz"} then follows from consistency of $\hat{\tau}_{*}$ and $\widehat{\mathsf{adj}}$ (cf. ) and the Slutsky's theorem. Let $\mathbf{U}\in \mathbb{R}^{p \times p}$ denote a permutation operator drawn uniformly at random independent of $\bm{\beta}^\star, \mathbf{X}, \bm{\varepsilon}$. We have that $$\left(\mathbf{X}\mathbf{U}^\top, \mathbf{U}\bm{\beta}^\star, \bm{\varepsilon}\right) \stackrel{L}{=}\left(\mathbf{X}, \bm{\beta}^\star, \bm{\varepsilon}\right)$$ where we use $\stackrel{L}{=}$ to denote equality in law. Note that $$\begin{aligned} & \hat{\bm{\beta}}=\underset{\bm{\beta}}{\operatorname{argmin}} \frac{1}{2}\left\|\mathbf{X}\mathbf{U}^\top \mathbf{U}\left(\bm{\beta}^\star-\bm{\beta}\right)+\bm{\varepsilon}\right\|^2+h\left(\mathbf{U}^\top \mathbf{U}\bm{\beta}\right) \\ &\qquad =\mathbf{U}^\top \underset{\mathbf{U}\beta}{\operatorname{argmin}} \frac{1}{2}\left\|\mathbf{X}\mathbf{U}^\top\left(\mathbf{U}\bm{\beta}^\star-\mathbf{U}\bm{\beta} \right)+\bm{\varepsilon}\right\|^2+h(\mathbf{U}\bm{\beta}) \end{aligned}$$ where $h$ applies entry-wise to its argument. The above then implies $$\label{aaaf} \begin{aligned} & \left(\mathbf{U}\hat{\bm{\beta}}, \mathbf{X}\mathbf{U}^\top, \mathbf{U}\bm{\beta}^\star, \bm{\varepsilon}\right) \\ & \qquad =\left(\underset{\bm{\beta}}{\operatorname{argmin}} \frac{1}{2}\left\|\mathbf{X}\mathbf{U}^\top\left(\mathbf{U}\bm{\beta}^\star-\bm{\beta}\right)+\bm{\varepsilon}\right\|^2+h(\bm{\beta}), \mathbf{X}\mathbf{U}^\top, \mathbf{U}\bm{\beta}^\star, \bm{\varepsilon}\right) \\ & \qquad \stackrel{L}{=}\left(\underset{\beta}{\operatorname{argmin}} \frac{1}{2}\left\|\mathbf{X}\left(\bm{\beta}^\star-\bm{\beta}\right)+\bm{\varepsilon}\right\|^2+h(\bm{\beta}), \mathbf{X}, \bm{\beta}^\star, \bm{\varepsilon}\right) \\ & \qquad =\left(\hat{\bm{\beta}}, \mathbf{X}, \bm{\beta}^\star, \bm{\varepsilon}\right) \end{aligned}$$ Below we prove the Corollary for $\mathcal{L}=\{i,k\},i\neq k$. The general case is analogous. 
For standard basis $\mathbf{e}_i, \mathbf{e}_k$, and any constant $c_1, c_2 \in \mathbb{R}$, $$\begin{aligned} \mathbb{P}&\left(\frac{\mathbf{e}_i^{\top} \mathbf{r}_{*}-\mathbf{e}_i^{\top} \bm{\beta}^\star}{\sqrt{\tau_{*}}}<c_1, \frac{\mathbf{e}_k^{\top} \mathbf{r}_{*}-\mathbf{e}_k^{\top} \bm{\beta}^\star}{\sqrt{\tau_{*}}}<c_2 \right) \\ &\stackrel{(a)}{=} \mathbb{P}\left(\frac{\mathbf{e}_i^{\top} \mathbf{U}\mathbf{r}_{*}-\mathbf{e}_i^{\top} \mathbf{U}\bm{\beta}^\star}{\sqrt{\tau_{*}}}<c_1, \frac{\mathbf{e}_k^{\top} \mathbf{U}\mathbf{r}_{*}-\mathbf{e}_k^{\top} \mathbf{U}\bm{\beta}^\star}{\sqrt{\tau_{*}}}<c_2\right) \\ & \stackrel{(b)}{=} \mathbb{E}\left(\mathbb{P}\left(\frac{\mathbf{e}_i^{\top} \mathbf{U}\mathbf{r}_{*}-\mathbf{e}_i^{\top} \mathbf{U}\bm{\beta}^\star}{\sqrt{\tau_{*}}}<c_1, \frac{\mathbf{e}_k^{\top} \mathbf{U}\mathbf{r}_{*}-\mathbf{e}_k^{\top} \mathbf{U}\bm{\beta}^\star}{\sqrt{\tau_{*}}}<c_2 \mid \mathcal{F}\left(\bm{\beta}^\star, \bm{\varepsilon}, \mathbf{X}\right)\right)\right) \\ & \stackrel{(c)}{=} \mathbb{E} \frac{1}{p(p-1)} \sum_{j_1\neq j_2\in [p]} \mathbb{I}\left(\frac{1}{\sqrt{\tau_{*}}}\left(r_{*, j_1}-\beta_{j_1}^{\star}\right)<c_1\right) \mathbb{I}\left(\frac{1}{\sqrt{\tau_{*}}}\left(r_{*, j_2}-\beta_{j_2}^{\star}\right)<c_2\right) \end{aligned}$$ where in $(a)$ we used [\[aaaf\]](#aaaf){reference-type="eqref" reference="aaaf"} above, in $(b)$ we used $\mathcal{F}\left(\bm{\beta}^\star, \bm{\varepsilon}, \mathbf{X}\right)$ to denote sigma-field generated by $\bm{\beta}^\star, \bm{\varepsilon}, \mathbf{X}$ and in $(c)$ we used that $\mathbf{U}$ is a permutation operator drawn uniformly at random. Note that almost surely as $p\to \infty$, $$\begin{aligned} &\bigg|\frac{1}{p(p-1)} \sum_{j_1\neq j_2\in [p]} \mathbb{I}\left(\frac{1}{\sqrt{\tau_{*}}}\left(r_{*, j_1}-\beta_{j_1}^{\star}\right)<c_1\right) \mathbb{I}\left(\frac{1}{\sqrt{\tau_{*}}}\left(r_{*, j_2}-\beta_{j_2}^{\star}\right)<c_2\right)- \\ &\qquad \frac{1}{p^2} \sum_{j=1}^{p} \mathbb{I}\left(\frac{1}{\sqrt{\tau_{*}}}\left(r_{*, j}-\beta_j^{\star}\right)<c_1\right) \sum_{j=1}^{p} \mathbb{I}\left(\frac{1}{\sqrt{\tau_{*}}}\left(r_{*, j}-\beta_j^{\star}\right)<c_2\right)\bigg|\to 0. \end{aligned}$$ Note also that for $\iota=1,2$ almost surely $$\lim _{p\to \infty} \frac{1}{p} \sum_{j=1}^{p} \mathbb{I}\left(\frac{1}{\sqrt{\tau_{*}}}\left(r_{*, j}-\beta_j^{\star}\right)<c_\iota\right)=\mathbb{P}(\mathsf{Z}<c_\iota)$$ where $\mathsf{Z}\sim N(0,1)$. Here, we used and . Using dominated convergence theorem, we conclude that $$\mathbb{P}\left(\frac{\mathbf{e}_i^{\top} \mathbf{r}_{*}-\mathbf{e}_i^{\top} \bm{\beta}^\star}{\sqrt{\tau_{*}}}<c_1, \frac{\mathbf{e}_k^{\top} \mathbf{r}_{*}-\mathbf{e}_k^{\top} \bm{\beta}^\star}{\sqrt{\tau_{*}}}<c_2 \right) \rightarrow \mathbb{P}(\mathsf{Z}<c_1)\mathbb{P}(\mathsf{Z}<c_2)$$ as required. ◻ # Proof and concurrent results for debiased PCR {#appendix:pcrdeb} *Proof of .* (a) **Alignment PCR.** We first prove the first result of [\[XEL3\]](#XEL3){reference-type="eqref" reference="XEL3"}. Let $\mathbf{D}_{\mathcal{J}} \in \mathbb{R}^{n \times J}$ consist of columns of $\mathbf{D}$ indexed by $\mathcal{J}, \mathbf{O}_{\mathcal{J}} \in \mathbb{R}^{J \times p}$ consist of rows of $\mathbf{O}$ indexed by $\mathcal{J}$, and $\mathbf{P}_{\mathcal{J}}=\mathbf{O}_{\mathcal{J}}^{\top} \mathbf{O}_{\mathcal{J}}$. 
Note that $$\label{algepcrt} \begin{aligned} \hat{\bm{\beta}}_{\mathsf{al}}(\mathcal{J})&=\mathbf{O}_{\mathcal{J}}^{\top} \hat{\bm{\theta}}_{\mathsf{pcr}}(\mathcal{J})\\ &=\mathbf{O}_{\mathcal{J}}^{\top}\left(\mathbf{W}_{\mathcal{J}}^{\top} \mathbf{W}_{\mathcal{J}}\right)^{-1} \mathbf{W}_{\mathcal{J}}^{\top} \mathbf{y}\\ &=\mathbf{O}_{\mathcal{J}}^{\top}\left(\mathbf{D}_{\mathcal{J}}^{\top} \mathbf{D}_{\mathcal{J}}\right)^{-1} \mathbf{D}_{\mathcal{J}}^{\top}\left(\mathbf{D}\mathbf{O}\bm{\beta}^\star+\mathbf{Q}\bm{\varepsilon}\right) \\ & =\mathbf{O}_{\mathcal{J}}^{\top} \mathbf{O}_{\mathcal{J}} \bm{\beta}^\star+\mathbf{O}_{\mathcal{J}}^{\top}\left(\mathbf{D}_{\mathcal{J}}^{\top} \mathbf{D}_{\mathcal{J}}\right)^{-1} \mathbf{D}_{\mathcal{J}}^{\top} \mathbf{Q}\bm{\varepsilon}\\ &=\bm{\beta}^\star_{\mathsf{al}}+\mathbf{O}_{\mathcal{J}}^{\top} \mathbf{O}_{\mathcal{J}} \bm{\zeta}^\star+\mathbf{O}_{\mathcal{J}}^{\top}\left(\mathbf{D}_{\mathcal{J}}^{\top} \mathbf{D}_{\mathcal{J}}\right)^{-1} \mathbf{D}_{\mathcal{J}}^{\top} \mathbf{Q}\bm{\varepsilon} \end{aligned}$$ where we used that $$\mathbf{W}_{\mathcal{J}}=\mathbf{Q}^{\top} \mathbf{D}\mathbf{O}\mathbf{O}_{\mathcal{J}}^{\top}=\mathbf{Q}^{\top} \mathbf{D}_{\mathcal{J}}, \quad \mathbf{y}=\mathbf{X}\bm{\beta}^\star+\bm{\varepsilon}=\mathbf{Q}^{\top} \mathbf{D}\mathbf{O}\bm{\beta}^\star+\bm{\varepsilon}$$ in the penultimate equality and [\[stwerif\]](#stwerif){reference-type="eqref" reference="stwerif"}, [\[defJJ\]](#defJJ){reference-type="eqref" reference="defJJ"} in the last equality. Using rotational invariance of $\mathbf{O}$, we have $$\label{useuse} \begin{aligned} \mathbb{E}\qty[\qty(\frac{1}{p}\left\|\mathbf{P}_{\mathcal{J}} \bm{\zeta}^\star\right\|_2^2)^2\mid \bm{\zeta}^\star] &=\frac{1}{p^2} \mathbb{E}\qty[\left\|\mathbf{O}_{\mathcal{J}} \bm{\zeta}^\star\right\|_2^4\mid \bm{\zeta}^\star] \\ & = \qty(\frac{\left\|\bm{\zeta}^\star\right\|_2^2}{p})^2 \mathbb{E}\qty[\sum_{i=1}^J O_{1 i}^2]^2=O\qty(\frac{1}{p^2}) \end{aligned}$$ where we used that $J$ is finite not growing with $p$ and basic moment property of entries of $\mathbf{O}$ (see e.g. [@meckes2014concentration Proposition 2.5]). 
It follows from a straightforward application of Markov inequality and Borel-Cantelli lemma that almost surely $$\label{asffsnone} \lim_{p\to\infty}\frac{1}{p}\left\|\mathbf{P}_{\mathcal{J}} \bm{\zeta}^\star\right\|_2^2=0.$$ Meanwhile, using $\mathbf{Q}\bm{\varepsilon}\stackrel{L}{=} \bm{\varepsilon}$, we have $$\label{useuse2} \frac{1}{p}\left\|\mathbf{O}_{\mathcal{J}}^{\top}\left(\mathbf{D}_{\mathcal{J}}^{\top} \mathbf{D}_{\mathcal{J}}\right)^{-1} \mathbf{D}_{\mathcal{J}}^{\top} \mathbf{Q}\bm{\varepsilon}\right\|_2^2 \stackrel{L}{=} \frac{1}{p} \bm{\varepsilon}^{\top} \mathbf{D}_{\mathcal{J}}\left(\mathbf{D}_{\mathcal{J}}^{\top} \mathbf{D}_{\mathcal{J}}\right)^{-2} \mathbf{D}_{\mathcal{J}}^{\top} \bm{\varepsilon}=\frac{1}{p} \sum_{i \in \mathcal{J}} \frac{{\varepsilon}_i^2}{d_i^2}.$$ Using $J$ is finite, we obtain that almost surely, $$\label{asffstwo} \lim_{p\to\infty} \frac{1}{p}\left\|\mathbf{O}_{\mathcal{J}}^{\top}\left(\mathbf{D}_{\mathcal{J}}^{\top} \mathbf{D}_{\mathcal{J}}\right)^{-1} \mathbf{D}_{\mathcal{J}}^{\top} \mathbf{Q}\bm{\varepsilon}\right\|_2^2 =0.$$ The first result in [\[XEL3\]](#XEL3){reference-type="eqref" reference="XEL3"} follows from [\[asffsnone\]](#asffsnone){reference-type="eqref" reference="asffsnone"}, [\[dawg11\]](#dawg11){reference-type="eqref" reference="dawg11"}, [\[asffstwo\]](#asffstwo){reference-type="eqref" reference="asffstwo"} and [\[dawg12\]](#dawg12){reference-type="eqref" reference="dawg12"}. Now we prove the second result in [\[XEL3\]](#XEL3){reference-type="eqref" reference="XEL3"}. Similarly to [\[algepcrt\]](#algepcrt){reference-type="eqref" reference="algepcrt"}, we have that $$\label{algepcrtBB} \hat{\bm{\theta}}_{\mathsf{pcr}}(\mathcal{J})-\bm{\upsilon}^\star=\mathbf{O}_{\mathcal{J}} \bm{\zeta}^\star+\left(\mathbf{D}_{\mathcal{J}}^{\top} \mathbf{D}_{\mathcal{J}}\right)^{-1} \mathbf{D}_{\mathcal{J}}^{\top} \mathbf{Q}\bm{\varepsilon}$$ Now note that by basic properties of Haar measure on orthogonal groups [@meckes2014concentration], as $p \rightarrow \infty$, $$\mathbf{O}_{\mathcal{J}} \bm{\zeta}^\star\Rightarrow N\left(\bm{0}, \mathbb{E}\left(\mathsf{C}^\star\right)^2\cdot \mathbf{I}_{J}\right)$$ where we used the assumption that $\bm{\zeta}^\star\stackrel{W_2}{\to} \mathsf{C}^\star$, and that $$\left(\mathbf{D}_{\mathcal{J}}^{\top} \mathbf{D}_{\mathcal{J}}\right)^{-1} \mathbf{D}_{\mathcal{J}}^{\top} \mathbf{Q}\bm{\varepsilon}\sim N\left(\bm{0}, \left(\mathbf{D}_{\mathcal{J}}^{\top} \mathbf{D}_{\mathcal{J}}\right)^{-1} \right).$$ By independence of $\mathbf{O}$ and $\bm{\varepsilon}$, we have that $$\mathbf{O}_{\mathcal{J}} \bm{\zeta}^\star+\left(\mathbf{D}_{\mathcal{J}}^{\top} \mathbf{D}_{\mathcal{J}}\right)^{-1} \mathbf{D}_{\mathcal{J}}^{\top} \mathbf{Q}\bm{\varepsilon}\Rightarrow N\left(\bm{0}, \mathbb{E}\left(\mathsf{C}^\star\right)^2\cdot \mathbf{I}_{J}+\left(\mathbf{D}_{\mathcal{J}}^{\top} \mathbf{D}_{\mathcal{J}}\right)^{-1}\right)$$ Statement of the second result of [\[XEL3\]](#XEL3){reference-type="eqref" reference="XEL3"} follows from the fact that $\hat{\omega}$ consistently estimates $\mathbb{E}\left(\mathsf{C}^\star\right)^2$: almost surely $$\hat{\omega}= p^{-1} \norm{\hat{\bm{\beta}}_{\mathsf{co}}}^2 -\hat{\tau}_*\to \mathbb{E}(\mathsf{C}^\star)^2$$ as $p\to \infty$. This follows from the fact that almost surely $\hat{\bm{\beta}}_{\mathsf{co}}\stackrel{W_2}{\to}\mathsf{C}^\star+\sqrt{\tau_*}\mathsf{Z}$ for $\mathsf{Z}$ independent of $\mathsf{C}^\star$, which we prove below. 
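Before turning to the complement part, the algebra behind [\[algepcrt\]](#algepcrt){reference-type="eqref" reference="algepcrt"} can be checked numerically. The following is a minimal, illustrative sketch (the function name and the toy data are ours, not part of the paper); it assumes $\mathbf{X}$ is observed, recovers $\mathbf{O}_{\mathcal J}$ from the SVD of $\mathbf{X}$, and uses the identity $\mathbf{W}_{\mathcal J}=\mathbf{X}\mathbf{O}_{\mathcal J}^\top$ noted above.

```python
import numpy as np

def alignment_pcr(X, y, J):
    """Illustrative sketch of the alignment-PCR estimator
    beta_al(J) = O_J^T (W_J^T W_J)^{-1} W_J^T y with W_J = X O_J^T (notation as above)."""
    # Thin SVD X = U diag(d) Vt; the rows of Vt play the role of the rows of O.
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    O_J = Vt[J, :]                      # rows of O indexed by J   (|J| x p)
    W_J = X @ O_J.T                     # equals Q^T D_J           (n x |J|)
    theta_pcr, *_ = np.linalg.lstsq(W_J, y, rcond=None)
    beta_al = O_J.T @ theta_pcr         # alignment component      (p,)
    return theta_pcr, beta_al

# toy usage (all numbers illustrative)
rng = np.random.default_rng(0)
n, p, J = 80, 120, [0, 1, 2]
X = rng.standard_normal((n, p))
y = X @ (rng.standard_normal(p) / np.sqrt(p)) + rng.standard_normal(n)
theta_hat, beta_al_hat = alignment_pcr(X, y, J)
```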
\(b\) **Complement PCR.** Similarly to [\[algepcrt\]](#algepcrt){reference-type="eqref" reference="algepcrt"} and [\[algepcrtBB\]](#algepcrtBB){reference-type="eqref" reference="algepcrtBB"}, we have that $$\label{algepcrtBBC} \hat{\bm{\theta}}_{\mathsf{pcr}}(\bar{\mathcal{J}})-\bm{\upsilon}^\star=\mathbf{O}_{\bar{\mathcal{J}}} \bm{\zeta}^\star+\left(\mathbf{D}_{\bar{\mathcal{J}}}^{\top} \mathbf{D}_{\bar{\mathcal{J}}}\right)^{-1} \mathbf{D}_{\bar{\mathcal{J}}}^{\top} \mathbf{Q}\bm{\varepsilon}.$$ It follows that $$\mathbf{y}_\mathsf{new}=\left(\mathbf{D}_{\bar{\mathcal{J}}}^{\top} \mathbf{D}_{\bar{\mathcal{J}}}\right)^{\frac{1}{2}} \hat{\bm{\theta}}_{\mathsf{pcr}}(\bar{\mathcal{J}}) \in \mathbb{R}^{N-J}, \qquad \mathbf{X}_\mathsf{new}=\left(\mathbf{D}_{\bar{\mathcal{J}}}^{\top} \mathbf{D}_{\bar{\mathcal{J}}}\right)^{\frac{1}{2}} \mathbf{O}_{\bar{\mathcal{J}}} \in \mathbb{R}^{(N-J) \times p}$$ defined in [\[xnewYnew\]](#xnewYnew){reference-type="eqref" reference="xnewYnew"} satisfy the following relation: $$\label{filteredpcr} \mathbf{y}_\mathsf{new}=\mathbf{X}_\mathsf{new}\bm{\beta}^\star+\epsilon_\mathsf{new}$$ for $$\epsilon_\mathsf{new}=\left(\mathbf{D}_{\bar{\mathcal{J}}}^{\top} \mathbf{D}_{\bar{\mathcal{J}}}\right)^{-\frac{1}{2}} \mathbf{D}_{\bar{\mathcal{J}}}^{\top} \mathbf{Q}\bm{\varepsilon}\sim N\left(\bm{0}, \mathbf{I}_{N-J}\right).$$ Note that the new design matrix $\mathbf{X}_\mathsf{new}$ admits singular value decomposition $$\mathbf{X}_\mathsf{new}=\mathbf{Q}_\mathsf{new}^{\top} \mathbf{D}_\mathsf{new}\mathbf{O}$$ where $$\mathbf{Q}_\mathsf{new}=\mathbf{I}_{N-J}, \qquad \mathbf{D}_\mathsf{new}=\left[\left(\mathbf{D}_{\bar{\mathcal{J}}}^{\top} \mathbf{D}_{\bar{\mathcal{J}}}\right)^{\frac{1}{2}}, \bm{0}_{(N-J) \times(p+J-N)}\right] \in \mathbb{R}^{(N-J) \times p}.$$ Note that since $J$ is finite not growing with $n,p$, $$\mathbf{D}_\mathsf{new}^\top \bm{1}_{(N-J)\times 1} \stackrel{W_2}{\to} \mathsf{D}.$$ The above, along with the assumption we made in , reduces the new regression problem defined by [\[filteredpcr\]](#filteredpcr){reference-type="eqref" reference="filteredpcr"} to the same one considered in . Since $\hat{\bm{\beta}}_{\mathsf{co}}$ is Spectrum-Aware debiased estimator with respect to the new regression problem, [\[JSSRwa\]](#JSSRwa){reference-type="eqref" reference="JSSRwa"} follows from . \(c\) **De-biased PCR.** By [\[defdefpcrdb\]](#defdefpcrdb){reference-type="eqref" reference="defdefpcrdb"}, we have that $$\hat{\tau}_{*}^{-1/2}\qty(\hat{\bm{\beta}}^u_{\mathsf{pcr}}-\bm{\beta}^\star)=\hat{\tau}_{*}^{-1/2}\qty(\hat{\bm{\beta}}_{\mathsf{al}}-\bm{\beta}^\star_{\mathsf{al}})+\hat{\tau}_{*}^{-1/2}\qty(\hat{\bm{\beta}}_{\mathsf{co}}-\bm{\zeta}^\star).$$ The statement of [\[OKIqc\]](#OKIqc){reference-type="eqref" reference="OKIqc"} then follows from the first result in [\[XEL3\]](#XEL3){reference-type="eqref" reference="XEL3"}, [\[JSSRwa\]](#JSSRwa){reference-type="eqref" reference="JSSRwa"}, and . 
◻ *Proof of .* To see the first result in [\[centralclaim\]](#centralclaim){reference-type="eqref" reference="centralclaim"}, recall from [\[algepcrt\]](#algepcrt){reference-type="eqref" reference="algepcrt"} that $$\hat{\bm{\beta}}_{\mathsf{al}}(\mathcal{J})=\bm{\beta}^\star_{\mathsf{al}}+\mathbf{O}_{\mathcal{J}}^{\top} \mathbf{O}_{\mathcal{J}} \bm{\zeta}^\star+\mathbf{O}_{\mathcal{J}}^{\top}\left(\mathbf{D}_{\mathcal{J}}^{\top} \mathbf{D}_{\mathcal{J}}\right)^{-1} \mathbf{D}_{\mathcal{J}}^{\top} \mathbf{Q}\bm{\varepsilon}.$$ Note that when $\bm{\zeta}^\star$ is exchangeable, we have that for any fixed $i \in[p]$ $$\label{dawg11} \begin{aligned} \mathbb{E}\left[\left(\left(\mathbf{O}_{\mathcal{J}}^{\top} \mathbf{O}_{\mathcal{J}} \bm{\zeta}^\star\right)_i\right)^2 \right]&=\mathbb{E}\qty[\qty(\mathbf{e}_i^\top \mathbf{U}\mathbf{O}_{\mathcal{J}}^\top \mathbf{O}_{\mathcal{J}} \mathbf{U}^\top \mathbf{U}\bm{\zeta}^\star)^2] \\ &=\mathbb{E}\left[\left(\frac{1}{p}\left\|\mathbf{P}_{\mathcal{J}} \bm{\zeta}^\star\right\|_2^2\right)^2 \right] =O\left(\frac{1}{p^2}\right) \end{aligned}$$ where we used that for a permutation matrix $\mathbf{U}\in \mathbb{R}^{p\times p}$ drawn uniformly, $(\mathbf{O}_{\mathcal{J}} \mathbf{U}^\top, \mathbf{U}\bm{\zeta}^\star)\stackrel{L}{=}(\mathbf{O}_{\mathcal{J}},\bm{\zeta}^\star)$ and [\[useuse\]](#useuse){reference-type="eqref" reference="useuse"}. And by rotational invariance of $\mathbf{O}$, $$\label{dawg12} \mathbb{E}\left(\mathbf{O}_{\mathcal{J}}^{\top}\left(\mathbf{D}_{\mathcal{J}}^{\top} \mathbf{D}_{\mathcal{J}}\right)^{-1} \mathbf{D}_{\mathcal{J}}^{\top} \mathbf{Q}\bm{\varepsilon}\right)_i^2=\mathbb{E} \frac{1}{p}\left\|\mathbf{O}_{\mathcal{J}}^{\top}\left(\mathbf{D}_{\mathcal{J}}^{\top} \mathbf{D}_{\mathcal{J}}\right)^{-1} \mathbf{D}_{\mathcal{J}}^{\top} \mathbf{Q}\bm{\varepsilon}\right\|_2^2=O \qty(\frac{1}{p^2})$$ where we used [\[useuse2\]](#useuse2){reference-type="eqref" reference="useuse2"} at the last equality. The first result in [\[centralclaim\]](#centralclaim){reference-type="eqref" reference="centralclaim"} then follows from the Markov inequality and the Borel-Cantelli lemma. The second result in [\[centralclaim\]](#centralclaim){reference-type="eqref" reference="centralclaim"} can be proved similarly to . The third result in [\[centralclaim\]](#centralclaim){reference-type="eqref" reference="centralclaim"} follows from the first two results and an application of Slutsky's theorem. ◻ # Conjectures for ellipsoidal models {#section:CONJE} We conjecture that debiasing is possible in more general settings than those considered in this paper. Namely, one would like to consider the design matrix $\mathbf{X}=\mathbf{Q}^\top \mathbf{D}\mathbf{O}\bm{\Sigma}^{1/2}$ where $\bm{\Sigma}\in \mathbb{R}^{p\times p}$ is non-singular, $\mathbf{Q}\in \mathbb{R}^{n\times n}, \mathbf{O}\in \mathbb{R}^{p\times p}$ are orthogonal matrices and $\mathbf{D}\in \mathbb{R}^{n\times p}$ is a diagonal matrix. We assume that $\bm{\Sigma}\in \mathbb{R}^{p\times p}$ is observed and $\mathbf{O}$ is drawn uniformly from the orthogonal group $\mathbb{O}(p)$ independent of $\bm{\varepsilon},\mathbf{D},\mathbf{Q}$. We refer to this class of random design matrices as ellipsoidal invariant designs. The special case where $\mathbf{Q}^\top \mathbf{D}\mathbf{O}$ is an isotropic Gaussian matrix is studied extensively in prior literature [@bellec2019biasing; @celentano2020lasso; @javanmard2014confidence; @javanmard2014hypothesis; @bellec2022biasing].
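To make the conjectured model concrete, here is a minimal sketch of how one could sample from this class of designs (illustrative only; the helper name, the choice of $\mathbf D$ and $\bm\Sigma$, and the seeds are ours). It draws $\mathbf{O}$ from the Haar measure on $\mathbb{O}(p)$ via `scipy.stats.ortho_group` and forms $\mathbf{X}=\mathbf{Q}^\top\mathbf{D}\mathbf{O}\bm\Sigma^{1/2}$; taking $\bm\Sigma=\mathbf I_p$ recovers the right-rotationally invariant case studied in the paper.

```python
import numpy as np
from scipy.stats import ortho_group

def ellipsoidal_design(n, p, d, Sigma_sqrt, seed=0):
    """Sample X = Q^T D O Sigma^{1/2} with O ~ Haar(O(p)); illustrative sketch.

    d : singular values (length min(n, p)) placed on the rectangular diagonal of D.
    Sigma_sqrt : a (p, p) square root of Sigma (e.g. a symmetric PSD square root).
    """
    Q = ortho_group.rvs(dim=n, random_state=seed)       # n x n orthogonal
    O = ortho_group.rvs(dim=p, random_state=seed + 1)   # p x p Haar orthogonal
    D = np.zeros((n, p))
    idx = np.arange(len(d))
    D[idx, idx] = d
    return Q.T @ D @ O @ Sigma_sqrt

# toy usage: Sigma = I_p gives a right-rotationally invariant design
n, p = 50, 80
d = np.linspace(0.5, 2.0, min(n, p))
X = ellipsoidal_design(n, p, d, np.eye(p))
```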
Furthermore, one would like to consider the case where the convex penalty function $\vec{h}:\mathbb{R}^p\mapsto \mathbb{R}$ is non-separable (e.g. SLOPE, group-Lasso) and $\hat{\bm{\beta}}\in \underset{\mathbf{b} \in \mathbb{R}^p}{\arg \min } \frac{1}{2}\|\mathbf{y}-\mathbf{X}\mathbf{b}\|^2+\vec{h}\left(\mathbf{b}\right),$ where $\vec{h}$ is assumed to be proper and closed. In this setting, we have $$\label{dbbdbnon} \hat{\bm{\beta}}^u=\hat{\bm{\beta}}+\frac{1}{\widehat{\mathsf{adj}}} \bm{\Sigma}^{-1} \mathbf{X}^{\top}(\mathbf{y}-\mathbf{X}\hat{\bm{\beta}})$$ where $\widehat{\mathsf{adj}}$ is the solution of the following equation $$\label{solvegammanon} \frac{1}{p} \mathlarger{\mathlarger{\sum}}_{i=1}^p \frac{1}{\frac{d_i^2-\widehat{\mathsf{adj}}}{p} \operatorname{Tr}\left(\left(\widehat{\mathsf{adj}}\cdot \mathbf{I}_p+\bm{\Sigma}^{-1}\left(\nabla^2\vec{h}(\hat{\bm{\beta}})\right)\right)^{-1}\right)+1}=1.$$ Here, we assumed that $\vec{h}$ is twice-differentiable or that it admits a twice-differentiable extension as in . Notice that the equation [\[solvegammanon\]](#solvegammanon){reference-type="eqref" reference="solvegammanon"} becomes [\[gammasolvea\]](#gammasolvea){reference-type="eqref" reference="gammasolvea"} if one lets $\bm{\Sigma}=\mathbf{I}_p$ and $(\vec{h}(x))_i=h(x_i),\forall i \in [p]$ for some $h:\mathbb{R}\mapsto \mathbb{R}$. Analogously to [\[DEFEFD\]](#DEFEFD){reference-type="eqref" reference="DEFEFD"}, we define $$\label{DEFEFDnon} \begin{aligned} &\hat{\eta}_* (p) :=\left( \frac{1}{p} \operatorname{Tr}\qty(\widehat{\mathsf{adj}}\cdot \mathbf{I}_p+\bm{\Sigma}^{-1} \nabla^2 \vec{h}(\hat{\bm{\beta}})) \right)^{-1}\\ & \hat{\mathbf{{r}}}_{**}(p):=\hat{\bm{\beta}}+\frac{1}{\hat{\eta}_*-\widehat{\mathsf{adj}}} \bm{\Sigma}^{-1} \mathbf{X}^{\top}(\mathbf{X}\hat{\bm{\beta}}-\mathbf{y}),\quad \hat{\tau}_{**}(p) :=\frac{\frac{1}{p}\left\|\mathbf{X}\hat{\mathbf{{r}}}_{**}-\mathbf{y}\right\|^2-\frac{n}{p}}{\frac{1}{p} \sum_{i=1}^p d_i^2}\\ & \hat{\tau}_*(p):=\left(\frac{\hat{\eta}_*}{\widehat{\mathsf{adj}}}\right)^2 \frac{1}{p} \mathlarger{\mathlarger{\sum}}_{i=1}^p \frac{d_i^2}{\left(d_i^2+\hat{\eta}_*-\widehat{\mathsf{adj}}\right)^2}\\ & \qquad \qquad +\left(\frac{\hat{\eta}_*-\widehat{\mathsf{adj}}}{\widehat{\mathsf{adj}}}\right)^2\left(\frac{1}{p} \mathlarger{\mathlarger{\sum}}_{i=1}^p\left(\frac{\hat{\eta}_*}{d_i^2+\hat{\eta}_*-\widehat{\mathsf{adj}}}\right)^2-1\right) \hat{\tau}_{**}. \end{aligned}$$ One can then make the following conjecture on the distribution of $\hat{\bm{\beta}}^u$. **Conjecture 49**. Under suitable conditions, there is a unique solution $\widehat{\mathsf{adj}}$ of [\[solvegammanon\]](#solvegammanon){reference-type="eqref" reference="solvegammanon"} and $$\hat{\tau}_*^{-1/2}(\hat{\bm{\beta}}^u-\bm{\beta}^\star) = \bm{\Sigma}^{1/2}\mathbf{z}+O\qty(p^{-1/2})$$ where $\mathbf{z} \sim N(\bm{0},\mathbf{I}_p)$ and $O\qty(p^{-1/2})$ denotes a vector $\mathbf{v}\in \mathbb{R}^p$ satisfying $\frac{1}{p} \norm{\mathbf{v}}^2\to 0$ almost surely as $p\to \infty$. The derivation of the above proceeds by considering a change of variables $\tilde{\bm{\beta}}=\bm{\Sigma}^{1/2}\hat{\bm{\beta}}$, whereby $\tilde{\bm{\beta}}\in \underset{\mathbf{b} \in \mathbb{R}^p}{\arg \min } \frac{1}{2}\|\mathbf{y}-\mathbf{Q}^\top \mathbf{D}\mathbf{O}\mathbf{b} \|^2+\vec{h}\left(\bm{\Sigma}^{-\frac{1}{2}} \mathbf{b} \right)$, and using the iterates of the VAMP algorithm (for non-separable penalties [@fletcher2018plug Algorithm 1]) to track $\tilde{\bm{\beta}}$.
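The scalar equation [\[solvegammanon\]](#solvegammanon){reference-type="eqref" reference="solvegammanon"} can be solved numerically once $\{d_i\}$, $\bm\Sigma$ and $\nabla^2\vec h(\hat{\bm\beta})$ are available. Below is a minimal sketch (ours, purely illustrative): it evaluates the left-hand side minus one and refines a root with a standard scalar solver after a sign change is located on a user-supplied grid.

```python
import numpy as np
from scipy.optimize import brentq

def adj_residual(adj, d2, M):
    """Left-hand side of the fixed-point equation minus 1.

    d2 : length-p vector of squared diagonal entries of D (zero-padded if n < p).
    M  : p x p matrix standing in for Sigma^{-1} grad^2 h(beta-hat).
    adj must keep adj * I + M invertible."""
    p = len(d2)
    trace_term = np.trace(np.linalg.inv(adj * np.eye(p) + M))
    return np.mean(1.0 / ((d2 - adj) / p * trace_term + 1.0)) - 1.0

def solve_adj(d2, M, grid):
    """Scan `grid` for a sign change of the residual, then refine with brentq."""
    vals = [adj_residual(a, d2, M) for a in grid]
    for a0, a1, v0, v1 in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
        if v0 * v1 < 0:
            return brentq(adj_residual, a0, a1, args=(d2, M))
    raise RuntimeError("no sign change of the residual on the supplied grid")
```

In the separable case with $\bm\Sigma=\mathbf I_p$ and a diagonal Hessian, the trace reduces to a sum of scalars, consistent with the reduction to [\[gammasolvea\]](#gammasolvea){reference-type="eqref" reference="gammasolvea"} noted above.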
One can then obtain [\[DEFEFDnon\]](#DEFEFDnon){reference-type="eqref" reference="DEFEFDnon"} and from the state evolution of the VAMP algorithm [@fletcher2018plug Eq. (19), Theorem 1]. If Conjecture 49 holds, it will be straightforward to develop an inference procedure for $\bm{\beta}^\star$. In our opinion, a main remaining gap is to establish an analogue of , i.e. that the non-separable VAMP iterates indeed track $\hat{\bm{\beta}}$. We leave the proof of Conjecture 49 as an open problem. # Numerical experiments {#appendix:Additional} ## Details of the design matrices used in numerical experiments Throughout the paper, we have illustrated our findings using different design matrices. We provide additional details in this section. **Remark 50** (Notations used in captions). We use $\mathsf{InverseWishart}\qty(\bm{\Psi}, \nu)$ to denote the inverse-Wishart distribution [@WikipediaInvWishart] with scale matrix $\bm{\Psi}$ and degrees-of-freedom $\nu$, and $\mathsf{Mult}$-$\mathsf{t}(\nu, \bm{\Psi})$ to denote the multivariate-t distribution [@WikipediamultT] with location $\bm{0}$, scale matrix $\bm{\Psi}$, and degrees-of-freedom $\nu$. **Remark 51** (Right rotationally invariant). All design matrices in , [4](#figPCRA){reference-type="ref" reference="figPCRA"} satisfy that $\mathbf{X}\stackrel{L}{=} \mathbf{X}\mathbf{O}$ for $\mathbf{O}\sim \mathop{\mathrm{Haar}}(\mathbb{O}(p))$ independent of $\mathbf{X}$. It is easy to verify that this is equivalent to right-rotational invariance as defined in . **Remark 52** (Comparison between designs in and ). The designs featured in can be seen as more challenging variants of the designs in , characterized by heightened levels of correlation, heterogeneity, or both. Specifically, $\mathbf{\Sigma}^{\mathrm{(col)}}$ under $\mathsf{MatrixNormal}$-$\mathsf{B}$ has a higher correlation coefficient (0.9) compared to the correlation coefficient (0.5) in $\mathsf{MatrixNormal}$. This results in a stronger dependence among the rows of the matrix $\mathbf{X}$. Concurrently, the $\mathbf{\Sigma}^{\mathrm{(row)}}$ in $\mathsf{MatrixNormal}$-$\mathsf{B}$ is sampled from an inverse-Wishart distribution with fewer degrees of freedom, leading to a more significant deviation from the identity matrix compared to the MatrixNormal design presented in . In $\mathsf{Spiked}$-$\mathsf{B}$, there are three significantly larger spikes when compared to $\mathsf{Spiked}$ in , which contains 50 spikes of smaller magnitudes. Consequently, issues related to alignment and outlier eigenvalues are much more pronounced in the case of $\mathsf{Spiked}$-$\mathsf{B}$. The design under $\mathsf{LLN}$ is a product of four independent isotropic Gaussian matrices, whereas $\mathsf{LLN}$-$\mathsf{B}$ contains the 20th power of the same $\mathbf{X}_1$. The latter scenario presents a greater challenge for DF or SA debiasing, primarily because the exponentiation step leads to the emergence of eigenvalue outliers. Larger auto-regressive coefficients are used in $\mathsf{VAR}$-$\mathsf{B}$, leading to stronger dependence across rows. Sampling designs from $\mathsf{MultiCauchy}$ is equivalent to scaling each row of an isotropic Gaussian matrix by a Cauchy-distributed scalar. This results in substantial heterogeneity across rows, with some rows exhibiting significantly larger magnitudes compared to others. **Definition 53** (Designs from real datasets). Without loss of generality, all designs below are re-scaled so that the average of the eigenvalues of $\mathbf{X}^\top \mathbf{X}$ is 1.
- $\mathsf{Speech}$: $200\times 400$ with each row being the i-vector (see e.g. [@ibrahim2018vector]) of a speech segment of an English speaker. We imported this dataset from the OpenML repository [@OpenMLSpeech] (ID: 40910) and retained only the last 200 rows of the original design matrix. The original dataset is published in [@goldstein2016comparative]. - $\mathsf{DNA}$: $100\times 180$ entries with each row being a one-hot representation of a primate splice-junction gene sequence (DNA). We imported this dataset from the OpenML repository [@OpenMLDNA] (ID: 40670) and retained only the last 100 rows of the original design matrix. The original dataset is published in [@noordewier1990training]. - $\mathsf{SP500}$: $300\times 496$ entries where each column represents a time series of daily stock returns (percentage change) for a company listed in the S&P 500 index. These time series span 300 trading days, ending on January 1, 2023. We imported this dataset from the Yahoo Finance API [@YahooFin]. - $\mathsf{FaceImage}$: $1348\times 2914$ entries where each row corresponds to a JPEG image of a single face. We imported this dataset from the scikit-learn package, using the handle sklearn.datasets.fetch_lfw_people [@sklearn]. The original dataset is published in [@LFWTech]. - $\mathsf{Crime}$: $50 \times 99$ entries where each column corresponds to a socio-economic metric in the UCI communities and crime dataset [@misc_communities_and_crime_183]. Only the last 50 rows of the dataset are retained. ## QQ plots , [9](#figQQB){reference-type="ref" reference="figQQB"} and [10](#figQQD){reference-type="ref" reference="figQQD"} are QQ-plots of , [4](#figPCRA){reference-type="ref" reference="figPCRA"} and [5](#figPCRC){reference-type="ref" reference="figPCRC"} respectively. ![QQ plots corresponding to .](image_va/panel_QQ.png){#figQQA width="\\textwidth"} ![QQ plots corresponding to .](image_va/panel_pcr_sim_qq.png){#figQQB width="\\textwidth"} ![QQ plots corresponding to .](image_va/panel_pcr_real_qq.png){#figQQD width="\\textwidth"} ## Inference for debiased PCR {#appendix:Ininf} We include here plots and tables for the inference procedures described in . ### Inference for $\upsilon^\star_i$ {#dnf} and print results of hypothesis tests for the alignment coefficients $\upsilon^\star_i, i=1,...,10$ for experiments in and .

                        MatrixNormal-B   Spiked-B    LNN-B       VAR-B       MultiCauchy
----------------------- ---------------- ----------- ----------- ----------- -------------
$\upsilon_{1}^\star$    0.00 \*\*        0.00 \*\*   0.00 \*\*   0.00 \*\*   0.00 \*\*
$\upsilon_{2}^\star$    0.84             0.44        0.94        0.96        0.98
$\upsilon_{3}^\star$    0.00 \*\*        0.00 \*\*   0.00 \*\*   0.00 \*\*   0.00 \*\*
$\upsilon_{4}^\star$    0.41             0.68        0.50        0.96        0.95
$\upsilon_{5}^\star$    0.00 \*\*        0.00 \*\*   0.00 \*\*   0.00 \*\*   0.00 \*\*
$\upsilon_{6}^\star$    0.84             0.51        0.78        0.96        0.49
$\upsilon_{7}^\star$    0.00 \*\*        0.00 \*\*   0.00 \*\*   0.00 \*\*   0.00 \*\*
$\upsilon_{8}^\star$    0.61             0.44        0.98        0.96        0.49
$\upsilon_{9}^\star$    0.00 \*\*        0.00 \*\*   0.00 \*\*   0.00 \*\*   0.00 \*\*
$\upsilon_{10}^\star$   0.27             0.68        0.94        0.96        0.34

: The table prints Benjamini-Hochberg adjusted p-values for the hypothesis tests $H_i:\upsilon^\star_i=0,i=1,...,10$ of the experiments from for the designs specified in . Recall that we set $\upsilon^\star_i=5\cdot \sqrt{p}, i \in J_1$ and $J_1=\{1,3,5,7,9\}$. Below we use $**$ to indicate rejection under FDR level 0.01 and $*$ rejection under FDR level 0.05.
[\[tabA\]]{#tabA label="tabA"}

                        Speech   DNA    SP500       FaceImage   Crime
----------------------- -------- ------ ----------- ----------- -----------
$\upsilon_{1}^\star$    0.70     0.81   0.00 \*\*   0.00 \*\*   0.00 \*\*
$\upsilon_{2}^\star$    0.70     0.74   0.40        0.25        0.10
$\upsilon_{3}^\star$    0.80     0.74   0.95        0.53        0.02 \*
$\upsilon_{4}^\star$    0.80     0.74   0.19        0.00 \*\*   0.02 \*
$\upsilon_{5}^\star$    0.85     0.81   0.29        0.96        0.11
$\upsilon_{6}^\star$    0.80     0.93   0.19        0.06        0.85
$\upsilon_{7}^\star$    0.80     0.74   0.00 \*\*   0.69        0.57
$\upsilon_{8}^\star$    0.70     0.56   0.00 \*\*   0.92        0.02 \*
$\upsilon_{9}^\star$    0.35     0.81   0.56        0.34        0.09
$\upsilon_{10}^\star$   0.80     0.56   0.95        0.53        0.02 \*

: The table prints Benjamini-Hochberg adjusted p-values for the hypothesis tests $H_i:\upsilon^\star_i=0,i=1,...,10$ of the experiments from for the designs specified in . Below we use $**$ to indicate rejection under FDR level $0.01$ and $*$ rejection under FDR level $0.05$.

[\[tabC\]]{#tabC label="tabC"} ### Inference for ${\zeta}^\star_i$ {#dsdfnf} depict the True Positive Rate (TPR) and False Positive Rate (FPR) of the hypothesis tests as outlined in [\[sdfasmtest\]](#sdfasmtest){reference-type="eqref" reference="sdfasmtest"}, and the False Coverage Proportion (FCP) of confidence intervals as defined in [\[skdtest\]](#skdtest){reference-type="eqref" reference="skdtest"} for ${\zeta}^\star_i$. These plots illustrate the changes in TPR, FPR, and FCP as we systematically vary the targeted FPR/FCP level $\alpha$ from 0 to 1. ![Under the setting of , we plot the True Positive Rate (TPR) and False Positive Rate (FPR) corresponding to hypothesis tests outlined in [\[sdfasmtest\]](#sdfasmtest){reference-type="eqref" reference="sdfasmtest"}, and the False Coverage Proportion (FCP) of confidence intervals defined in [\[skdtest\]](#skdtest){reference-type="eqref" reference="skdtest"} for $({\zeta}^\star_i)_{i=1}^p$. The $x$-axis spans $\alpha$ values from 0 to 1, while the $y$-axis ranges between 0 and 1. The dotted black line represents the 45-degree reference line.](image_va/panel_pcr_sim_zeta_FCP.png){#figztCIA width="\\textwidth"} [^1]: We adopt a scaling where $\|\mathbf{X}\|_{\mathrm{op}}$ and $\frac{1}{\sqrt{p}}\|\bm{\beta}^\star\|_2$ remain at a constant order as $n$ and $p$ tend to infinity. Prior literature (e.g. [@bellec2019biasing]) often adopts a scaling where $\frac{1}{\sqrt{p}}\|\mathbf{X}\|_{\mathrm{op}}$ and $\|\bm{\beta}^\star\|_2$ maintain constant order as $n$ and $p$ approach infinity. These scalings should be viewed as equivalent up to a change of variable. [^2]: Finite size means that $J_1$ does not grow with $n,p$. [^3]: In practice, one would choose $\mathcal{K}$ after the SVD. Note also that $\mathbf{Q}\in \mathbb{R}^{n\times n}, \mathbf{O}\in \mathbb{R}^{p\times p}$ and $\mathbf{D}\in \mathbb{R}^{n\times p}$. [^4]: Here, $G(z):=\int \frac{1}{z-x} \mu(d x)$, where $\mu(\cdot)$ is the measure associated with the Marchenko-Pastur law.
arxiv_math
{ "id": "2309.07810", "title": "Spectrum-Aware Adjustment: A New Debiasing Framework with Applications\n to Principal Component Regression", "authors": "Yufan Li, Pragya Sur", "categories": "math.ST cs.IT math.IT stat.ME stat.ML stat.TH", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | In this paper we study harmonic analysis operators in Dunkl settings associated with finite reflection groups on Euclidean spaces. We consider maximal operators, Littlewood-Paley functions, $\rho$-variation and oscillation operators involving time derivatives of the heat semigroup generated by Dunkl operators. We establish the boundedness properties of these operators in $L^p(\mathbb R^d,\omega_\mathfrak{K})$, $1\leq p<\infty$, Hardy spaces, BMO and BLO-type spaces in the Dunkl settings. The study of harmonic analysis operators associated with reflection groups needs different strategies from the ones used in the Euclidean case since the integral kernels of the operators admit estimates involving two different metrics, namely, the Euclidean and the orbit metrics. For instance, the classical Calderón-Zygmund theory for singular integrals does not work in this setting. address: Vı́ctor Almeida, Jorge J. Betancor, Juan C. Fariña and Lourdes Rodrı́guez-Mesa Departamento de Análisis Matemático, Universidad de La Laguna, Campus de Anchieta, Avda. Astrofísico Sánchez, s/n, La Laguna (Sta. Cruz de Tenerife), Spain author: - V. Almeida, J.J. Betancor, J.C. Fariña and L. Rodrı́guez-Mesa title: Harmonic analysis operators in the rational Dunkl settings --- # Introduction {#S1} In this paper we study harmonic analysis operators in the Dunkl setting which is associated with finite reflection groups in Euclidean spaces. The Dunkl theory can be seen as an extension of the Euclidean Fourier analysis. After Dunkl's paper ($\!\!$[@Dun1]) appeared, the theory has been developed extensively ($\!\!$[@DaXu], [@Dun2], [@Dun3], [@Dun4], [@Ro1; @Ro2; @Ro4; @Ro3; @JeuRo; @RoVo], [@ThXu1] and [@ThXu2]). Recently, harmonic analysis associated with Dunkl operators has gained a lot of interest ($\!\!$[@AS; @ABDH; @ADH], [@DH2; @DH1; @DH4; @DH3; @DH5; @DH6; @DH7; @DH8], [@HLLW], [@H1], [@JiLi1; @JiLi3; @JiLi2], [@LZ] and [@THHLL]). We now collect some basic definitions and properties concerning Dunkl theory that will be useful in the sequel. The interested reader can go to [@Dun1] and [@Ro4] for details. We consider the Euclidean space $\mathbb R^d$ endowed with the usual inner product $\langle\cdot ,\cdot\rangle$. If $\alpha$ is a nonzero vector in $\mathbb R^d$, the reflection $\sigma_\alpha$ with respect to the hyperplane orthogonal to $\alpha$ is defined by $$\sigma_\alpha(x)=x-2\frac{\langle x,\alpha\rangle}{|\alpha|^2}\alpha,\quad x\in\mathbb R^d.$$ Here $|\alpha|$ denotes as usual the Euclidean norm of $\alpha$. Let $R$ be a finite subset of $\mathbb R^d\setminus\{0\}$. We say that $R$ is a root system if $\sigma_\alpha(R)=R$ and $R\cap(\mathbb R\alpha)=\{\pm\alpha\}$, $\alpha\in R$. The reflections $\{\sigma_\alpha\}_{\alpha\in R}$ generate a finite group $G$ called the Weyl group of the root system $R$. We always assume that the root system is normalised so that $|\alpha|=\sqrt{2}$, $\alpha\in R$. The root system $R$ can be rewritten as a disjoint union $R=R_+\cup(-R_+)$, where $R_+$ and $-R_+$ are separated by a hyperplane through the origin. $R_+$ is called a positive subsystem of $R$. This decomposition of $R$ is not unique. If $A$ is a subset of $\mathbb R^d$, by $\theta(A)$ we denote the $G$-orbit of $A$. We define $\rho(x,y)=\min_{\sigma\in G}|x-\sigma(y)|$, $x,y\in\mathbb R^d$, which represents the distance between the orbits $\theta(\{x\})$ and $\theta(\{y\})$.
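The simplest example, recorded here only as an illustration (it is standard and not specific to this paper), is the rank-one case: for $d=1$ and $R=\{\pm\sqrt 2\}$ the only nontrivial reflection is $\sigma(x)=-x$, so that $G=\{\mathrm{id},\sigma\}\simeq \mathbb Z_2$, $\theta(\{x\})=\{x,-x\}$ and $$\rho(x,y)=\min\{|x-y|,|x+y|\},\quad x,y\in\mathbb R.$$ In particular, $\rho(x,-x)=0$ while $|x-(-x)|=2|x|$ can be arbitrarily large, so the Euclidean distance cannot be controlled by the orbit distance.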
For every $x\in\mathbb R^d$ and $r>0$ we denote by $B(x,r)$ and $B_\rho(x,r)$ the Euclidean and the $\rho$-balls centered in $x$ and with radius $r$. We have that $B_\rho(x,r)=\theta(B(x,r))$, $x\in\mathbb R^d$ and $r>0$. A multiplicity function is a $G$-invariant function $\mathfrak{K}:R\rightarrow\mathbb C$. Throughout this paper we always consider a multiplicity function $\mathfrak{K}\geq 0$. We define $\gamma =\sum_{\alpha\in R_+}\mathfrak{K}(\alpha)$ and $D=d+2\gamma$. The measure $\omega_\mathfrak{K}$ defined by $d\omega_\mathfrak{K}(x)=\omega_\mathfrak{K}(x)dx$, on $\mathbb R^d$, where $$\omega_\mathfrak{K}(x)=\prod_{\alpha\in R_+}|\langle\alpha,x\rangle|^{2\mathfrak{K}(\alpha)},\quad x\in\mathbb R^d,$$ is $G$-invariant. The number $D$ is called the homogeneous dimension because the following property holds $$\omega_\mathfrak{K}(B(tx,tr))=t^D\omega_\mathfrak{K}(B(x,r)),\quad x\in\mathbb R^d\;\mbox{and}\;r>0.$$ We have that $$\omega_\mathfrak{K}(B(x,r))\sim r^d\prod_{\alpha\in R}(|\langle\alpha,x\rangle|+r)^{\mathfrak{K}(\alpha)},\quad x\in\mathbb R^d\;\mbox{and}\;r>0,$$ where the equivalence constants do not depend on $x\in\mathbb R^d$ or $r>0$. Then, we can see that $\omega_\mathfrak{K}$ has the following doubling property: there exists $C>0$ such that $$\omega_\mathfrak{K}(B(x,2r))\leq C\omega_\mathfrak{K}(B(x,r)),\quad x\in\mathbb R^d\;\mbox{and}\;r>0.$$ Hence the triple $(\mathbb R^d,|\cdot|,\omega_\mathfrak{K})$ is a space of homogeneous type in the sense of Coifman and Weiss ($\!\!$[@CoWe]). A very useful property ($\!\!$[@ADH (3.2)]) is the following one: there exists $C\geq 1$ such that $$\begin{aligned} \label{1.1} \frac{1}{C}\left(\frac{s}{r}\right)^d \leq \frac{\omega_\mathfrak{K}(B(x,s))}{\omega_\mathfrak{K}(B(x,r))}\leq C \left(\frac{s}{r}\right)^D,\quad x\in\mathbb R^d\;\mbox{and}\;0<r\leq s.\end{aligned}$$ Also, we have that $$\begin{aligned} \label{1.2} \omega_\mathfrak{K}(B(x,r))\leq \omega_\mathfrak{K}(\theta (B(x,r)))\leq \#(G) \omega_\mathfrak{K}(B(x,r)),\quad x\in\mathbb R^d\;\mbox{and}\;r>0.\end{aligned}$$ Here $\#(G)$ denotes the number of elements of $G$. Some examples of these objects can be found in [@Ro4 p. 96]. We now give the definitions of Dunkl operators associated with a system of roots $R$ and a multiplicity function $\mathfrak K$. Let $\xi\in\mathbb R^d\setminus\{0\}$. We denote by $\partial_\xi$ the classical derivative in the direction $\xi$. The Dunkl operator $D_\xi$ is defined by $$D_\xi f(x)=\partial_\xi f(x)+\sum_{\alpha\in R}\frac{\mathfrak{K}(\alpha)}{2}\langle\alpha,\xi\rangle\frac{f(x)-f(\sigma_\alpha(x))}{\langle\alpha,x\rangle},\quad x\in\mathbb R^d.$$ $D_\xi$ can be seen as a deformation of $\partial_\xi$ by a difference operator. These operators were introduced in [@Dun1] where their main properties can be found. The Dunkl Laplacian associated with $R$ and $\mathfrak{K}$ is defined by $\Delta_\mathfrak K=\sum_{j=1}^dD_{e_j}^2$, where $\{e_j\}_{j=1}^d$ represents the canonical basis in $\mathbb R^d$. It is clear that $\Delta_\mathfrak{K}$ reduces to the Euclidean Laplacian when $\mathfrak K=0$. 
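Continuing the rank-one illustration above (again standard material, included only as an example): if the multiplicity function takes the value $\mathfrak K(\pm\sqrt 2)=k\geq 0$, then $$\omega_\mathfrak K(x)=|\sqrt 2\,x|^{2k}=2^{k}|x|^{2k},\qquad \gamma=k,\qquad D=d+2\gamma=1+2k,$$ and for $\xi=1$ the Dunkl operator reduces to $$D_1f(x)=f'(x)+k\,\frac{f(x)-f(-x)}{x},\quad x\in\mathbb R\setminus\{0\},$$ which exhibits the announced structure of a first-order derivative perturbed by a difference operator.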
We can write, for every $f\in C^2(\mathbb R^d)$, $$\Delta_\mathfrak K f=\Delta f+\sum_{\alpha\in R}\mathfrak{K}(\alpha)\delta_\alpha f,$$ where $\Delta$ denotes the Euclidean Laplacian and, for every $\alpha\in R$, $$\delta_\alpha f(x)=\frac{\partial_\alpha f(x)}{\langle\alpha,x\rangle}-\frac{f(x)-f(\sigma_\alpha(x))}{\langle\alpha,x\rangle^2},\quad x\in\mathbb R^d.$$ According to [@AmHa Theorem 3.1] the Dunkl Laplacian is essentially self-adjoint in $L^2(\mathbb R^d,\omega_\mathfrak{K})$ and $\Delta_\mathfrak K$ generates a $C_0$-semigroup of operators $\{T_t^\mathfrak K\}_{t>0}$ in $L^2(\mathbb R^d,\omega_\mathfrak{K})$ where, for every $t>0$ and $f\in L^2(\mathbb R^d,\omega_\mathfrak{K})$, $$\begin{aligned} \label{1.3} T_t^\mathfrak K(f)(x)=\int_{\mathbb R^d}T_t^\mathfrak K (x,y)f(y)d\omega_\mathfrak{K}(y),\end{aligned}$$ where the function $(t,x,y)\rightarrow T_t^\mathfrak K (x,y)$ is a $C^\infty$-function on $(0,\infty)\times\mathbb R^d\times\mathbb R^d$ satisfying that $T_t^\mathfrak K (x,y)=T_t^\mathfrak K (y,x)$, $t>0$ and $x,y\in\mathbb R^d$, and $$\begin{aligned} \label{1.4} \int_{\mathbb R^d}T_t^\mathfrak K (x,y)d\omega_\mathfrak{K}(y)=1,\quad x\in\mathbb R^d\;\mbox{and}\;t>0.\end{aligned}$$ The integral operator given by [\[1.3\]](#1.3){reference-type="eqref" reference="1.3"} defines a $C_0$-semigroup of linear contractions in $L^p(\mathbb R^d,\omega_\mathfrak{K})$, $1\leq p<\infty$. Thus $\{T_t^\mathfrak K\}_{t>0}$ defines a symmetric diffusion semigroup in the sense of Stein ($\!\!$[@St]). Gaussian type upper bounds were established in [@ADH Theorem 4.1] for the heat kernel and its derivatives. We now recall those estimates that we will use throughout this paper. - For every $m\in\mathbb N$ there exist $C,c>0$ such that $$\begin{aligned} \label{1.5} |t^m\partial _t^mT_t^\mathfrak K (x,y)|\leq \frac{C}{V(x,y,\sqrt{t})}e^{-c\frac{\rho(x,y)^2}{t}},\quad x,y\in\mathbb R^d\;\mbox{and}\;t>0.\end{aligned}$$ Here $V(x,y,r)=\max\{\omega_\mathfrak{K}(B(x,r)),\omega_\mathfrak{K}(B(y,r))\}$, $x,y\in\mathbb R^d$ and $r>0$. - For every $m\in\mathbb N$ there exist $C,c>0$ such that $$\begin{aligned} \label{1.6} |t^m\partial _t^mT_t^\mathfrak K (x,y)-t^m\partial _t^mT_t^\mathfrak K (x,z)|\leq C\frac{|y-z|}{\sqrt{t}}\frac{e^{-c\frac{\rho(x,y)^2}{t}}}{V(x,y,\sqrt{t})},\end{aligned}$$ for every $t>0$, $x,y,z\in\mathbb R^d$ and $|y-z|<\sqrt{t}$. Note that in the exponential term in ([\[1.5\]](#1.5){reference-type="ref" reference="1.5"}) and ([\[1.6\]](#1.6){reference-type="ref" reference="1.6"}) the Euclidean metric is replaced by the orbit distance $\rho$. Moreover, both metrics appear in the estimate ([\[1.6\]](#1.6){reference-type="ref" reference="1.6"}). Note that these two metrics are not equivalent. The fact that the estimates involve the two metrics cannot be avoided when working in the Dunkl setting, as can be seen in the Dunkl-Calderón-Zygmund singular integrals (see [@HLLW]). These properties differ in the Dunkl and Euclidean settings, so we need more careful arguments when working in the Dunkl context. Our objective in this paper is to study the behaviour of some Dunkl harmonic analysis operators involving the heat semigroup $\{T_t^\mathfrak K\}_{t>0}$ and its time derivatives in $L^p$, Hardy and ${\rm BMO}$ spaces.
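As a quick consistency check (ours, not taken from [@ADH]): when $\mathfrak K=0$ one has $\omega_0(x)=1$, $\Delta_0=\Delta$ and $T_t^0(x,y)=(4\pi t)^{-d/2}e^{-|x-y|^2/(4t)}$ is the classical Gauss-Weierstrass kernel. Since in this case $V(x,y,\sqrt t)=c_d\,t^{d/2}$ and $\rho(x,y)\leq |x-y|$, the classical Gaussian bound $$|T_t^0(x,y)|\leq \frac{C}{t^{d/2}}\,e^{-\frac{|x-y|^2}{4t}}\leq \frac{C'}{V(x,y,\sqrt t)}\,e^{-\frac{\rho(x,y)^2}{4t}}$$ recovers ([\[1.5\]](#1.5){reference-type="ref" reference="1.5"}) with $m=0$; the point of ([\[1.5\]](#1.5){reference-type="ref" reference="1.5"}) and ([\[1.6\]](#1.6){reference-type="ref" reference="1.6"}) is that, for general $\mathfrak K$, only the orbit distance appears in the exponential factor.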
In addition to Lebesgue spaces $L^p(\mathbb R^d, \omega_\mathfrak{K})$, $1\leq p<\infty$, and the weak $L^1$-space denoted by $L^{1,\infty}(\mathbb R^d, \omega_\mathfrak{K})$, we also consider the Hardy space $H^1(\Delta_\mathfrak{K})$ and the ${\rm BMO}$-type spaces ${\rm BMO}(\mathbb R^d, \omega_\mathfrak{K})$, ${\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak K)$ and ${\rm BLO}(\mathbb R^d, \omega_\mathfrak{K})$, which we now make precise. We consider Hardy spaces associated with Dunkl operators introduced in [@ADH] and [@DH1]. Let $1<q\leq\infty$ and let $M$ be a positive integer. We say that a function $a$ is a $(1,q,\Delta_\mathfrak K ,M)$-atom when $a\in L^2(\mathbb R^d,\omega_\mathfrak{K})$ and there exist $b\in D(\Delta_\mathfrak K ^M)$ and a Euclidean ball $B=B(x_B,r_B)$ with $x_B\in\mathbb R^d$ and $r_B>0$, satisfying that 1. $a=\Delta_\mathfrak K ^M b$; 2. $\mbox{supp} (\Delta_\mathfrak K ^\ell b)\subset\theta(B)$, $\ell=0,...,M$; 3. $\|(r_B^2\Delta_\mathfrak K )^\ell b\|_{L^q(\mathbb R^d,\omega_\mathfrak{K})}\leq r_B^{2M}\omega_\mathfrak{K}(B)^{1/q-1}$, $\ell=0,...,M$. A function $f$ is in $H^1_{(1,q,\Delta_\mathfrak K ,M)}$ when $f=\sum_{j\in \mathbb N}\lambda_j a_j$, where, for every $j\in\mathbb N$, $a_j$ is a $(1,q,\Delta_\mathfrak K ,M)$-atom and $\lambda_j\in\mathbb C$ such that $\sum_{j\in \mathbb N} |\lambda_j|<\infty$. Here the series defining $f$ converges in $L^1(\mathbb R^d,\omega_\mathfrak{K})$. We define, for every $f\in H^1_{(1,q,\Delta_\mathfrak K ,M)}$, $$\|f\|_{H^1_{(1,q,\Delta_\mathfrak K ,M)}}=\inf \sum_{j\in \mathbb N} |\lambda_j|,$$ where the infimum is taken over all the sequences $\{\lambda_j\}_{j\in \mathbb N}\subset\mathbb C$ such that $\sum_{j\in \mathbb N}|\lambda_j|<\infty$ and $f=\sum_{j\in \mathbb N} \lambda_j a_j$, where, for every $j\in\mathbb N$, $a_j$ is a $(1,q,\Delta_\mathfrak K,M)$-atom. Let $1<q\leq\infty$. A function $a$ is said to be a $(1,q)$-atom if there exists a Euclidean ball $B$ such that 1. $\mbox{supp}\;a\subset B$; 2. $\int ad\omega_\mathfrak{K}=0$; 3. $\|a\|_{L^q(\mathbb R^d,\omega_\mathfrak{K})}\leq \omega_\mathfrak{K}(B)^{1/q-1}$. The Hardy space $H^1_{(1,q)}$ is defined as above, replacing $(1,q,\Delta_\mathfrak K ,M)$-atoms by $(1,q)$-atoms. In [@DH1 Theorem 1.5] it was proved that $H^1_{(1,q)}=H^1_{(1,q,\Delta_\mathfrak K ,M)}$ algebraically and topologically. The space $H^1_{(1,q)}$ is characterized by using maximal functions ($\!\!$[@ADH Theorem 2.2]), square functions ($\!\!$[@ADH Theorem 2.3]) and Riesz transforms ($\!\!$[@ADH Theorem 2.5]), which makes clear that $H^1_{(1,q)}$ does not depend on $q$. From now on we denote by $H^1(\Delta_\mathfrak K)$ any of these Hardy spaces. According to the results in [@CoWe1] about Hardy spaces in homogeneous type spaces, the dual of $H^1(\Delta _\mathfrak K)$ can be characterized as the space of bounded mean oscillation functions ${\rm BMO}(\mathbb R^d,\omega_\mathfrak{K})$ defined as follows. A function $f\in L^1_{\rm loc}(\mathbb R^d,\omega_\mathfrak{K})$ is in ${\rm BMO}(\mathbb R^d,\omega_\mathfrak{K})$ provided that $$\|f\|_{{\rm BMO}(\mathbb R^d,\omega_\mathfrak{K})}:=\sup_{B}\frac{1}{\omega_\mathfrak{K}(B)}\int_B |f(y)-f_B|d\omega_\mathfrak{K}(y)<\infty.$$ Here the supremum is taken over all the Euclidean balls $B$ in $\mathbb R^d$. For every Euclidean ball $B$ we define $f_B=\frac{1}{\omega_\mathfrak{K}(B)}\int_B fd\omega_\mathfrak{K}$.
As is well known, $({\rm BMO}(\mathbb R^d,\omega_\mathfrak{K}), \|\cdot\|_{{\rm BMO}(\mathbb R^d,\omega_\mathfrak{K})})$ is a Banach space when functions differing by a constant are identified. In [@JiLi3 Theorem 6.7] it was proved that the dual of $H^1(\Delta _\mathfrak K)$ can also be realized by a class of functions defined by using Carleson measures in the Dunkl setting. Another ${\rm BMO}$-type space in the Dunkl setting can be considered by replacing the Euclidean balls by $\rho$-metric balls. The space ${\rm BMO}^\rho(\mathbb R^d,\omega_\mathfrak{K})$ consists of all those $f\in L^1_{\rm loc}(\mathbb R^d,\omega_\mathfrak{K})$ such that $$\|f\|_{{\rm BMO}^\rho(\mathbb R^d,\omega_\mathfrak{K})}:=\sup_{B}\frac{1}{\omega_\mathfrak{K}(\theta(B))}\int_{\theta(B)} |f(y)-f_{\theta(B)}|d\omega_\mathfrak{K}(y)<\infty,$$ where the supremum is taken over all the Euclidean balls $B$ in $\mathbb R^d$. We have that ${\rm BMO}^\rho(\mathbb R^d,\omega_\mathfrak{K})$ is contained in ${\rm BMO}(\mathbb R^d,\omega_\mathfrak{K})$ and $f\in {\rm BMO}^\rho(\mathbb R^d,\omega_\mathfrak{K})$ provided that $f$ is $G$-invariant and $f\in {\rm BMO}(\mathbb R^d,\omega_\mathfrak{K})$ ($\!\!$[@JiLi3 Proposition 7.4]). As is proved in [@JiLi3 Section 7.2], ${\rm BMO}^\rho(\mathbb R^d,\omega_\mathfrak{K})$ does not coincide with ${\rm BMO}(\mathbb R^d,\omega_\mathfrak{K})$. Furthermore, Han, Lee, Li and Wick ($\!\!$[@HLLW]) analyzed the $L^p$-boundedness of the commutator of the Dunkl-Riesz transform with functions $b$ on these ${\rm BMO}$-type spaces in the Dunkl setting. They established that the commutator is bounded on $L^p(\mathbb R^d,\omega _\mathfrak K)$, $1<p<\infty$, when the function $b$ belongs to ${\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})$. Conversely, if the commutator of the Dunkl-Riesz transform with $b$ is bounded on $L^p(\mathbb R^d,\omega _\mathfrak K)$ for some $1<p<\infty$, then $b\in {\rm BMO}(\mathbb R^d,\omega_\mathfrak K)$ ($\!\!$[@HLLW Theorem 1.3]). Coifman and Rochberg ($\!\!$[@CR]) introduced the space ${\rm BLO}$ of functions of bounded lower oscillation in the Euclidean setting (see also [@Ben]). The ${\rm BLO}$ space is defined analogously to ${\rm BMO}$ but replacing the average $f_B$ by the essential infimum of $f$ in $B$. ${\rm BLO}$-type spaces appear as the images of $L^\infty$ and ${\rm BMO}$-spaces under maximal operators, Littlewood-Paley functions and singular integrals ($\!\!$[@Ji], [@MY] and [@YYZ]). We now introduce a ${\rm BLO}$-type space in the Dunkl setting. We say that a function $f\in L^1_{\rm loc}(\mathbb R^d,\omega_\mathfrak{K})$ is in ${\rm BLO}(\mathbb R^d,\omega_\mathfrak{K})$ when $$\|f\|_{{\rm BLO}(\mathbb R^d,\omega_\mathfrak{K})}:=\sup_{B}\frac{1}{\omega_\mathfrak{K}(B)}\int_{B} (f(y)-\mathop{\mathrm{\operatorname{ess\,inf}}}_{z\in B} f(z))d\omega_\mathfrak{K}(y)<\infty,$$ where the supremum is taken over all the Euclidean balls $B$ in $\mathbb R^d$. We now establish the main results of this paper. Let $m\in\mathbb N$. Denote by $T_{t,m}^\mathfrak K$ the operator $T_{t,m}^\mathfrak K=t^m\partial _t^mT_t^\mathfrak K$, $t>0$. We consider the maximal operator $T_{*,m}^\mathfrak K$ defined by $$T_{*,m}^\mathfrak K(f)=\sup_{t>0}|T_{t,m}^\mathfrak K(f)|.$$ Since $\{T_t^\mathfrak K\}_{t>0}$ is a diffusion semigroup in $L^p(\mathbb R^d,\omega_\mathfrak{K})$, $1\leq p<\infty$, according to [@LeMX2 Corollary 4.2], the operator $T_{*,m}^\mathfrak K$ is bounded from $L^p(\mathbb R^d,\omega_\mathfrak{K})$ into itself, for every $1<p<\infty$. **Theorem 1**.
*Let $m\in\mathbb N$. The maximal operator $T_{*,m}^\mathfrak K$ is bounded from $L^1(\mathbb R^d,\omega_\mathfrak{K})$ into $L^{1,\infty}(\mathbb R^d,\omega_\mathfrak{K})$ and from $H^1(\Delta _\mathfrak K)$ into $L^1(\mathbb R^d,\omega_\mathfrak{K})$. Furthermore, if $f\in {\rm BMO}^\rho(\mathbb R^d,\omega_\mathfrak{K})$ and $T_{*,m}^\mathfrak K(f)(x)<\infty$, for almost all $x\in\mathbb R^d$, then $T_{*,m}^\mathfrak K(f)\in {\rm BLO}(\mathbb R^d,\omega_\mathfrak{K})$ and $\|T_{*,m}^\mathfrak K(f)\|_{{\rm BLO}(\mathbb R^d,\omega_\mathfrak{K})}\leq C\|f\|_{{\rm BMO}^\rho(\mathbb R^d,\omega_\mathfrak{K})}$ where $C>0$ does not depend on $f$.* Suppose now that $m\in\mathbb N$, $m\geq 1$. We define the $m$-order Littlewood-Paley $g_m$ function by $$g_m(f)(x)=\left(\int_0^\infty |T_{t,m}^\mathfrak K (f)(x)|^2\frac{dt}{t}\right)^{1/2},\quad x\in\mathbb R^d.$$ Since $\{T_t^\mathfrak K \}_{t>0}$ is a symmetric diffusion semigroup, from [@St Corollary 1, p. 120] it follows that $g_m$ is bounded from $L^p(\mathbb R^d,\omega_\mathfrak{K})$ into itself, for every $1<p<\infty$. In [@Li Theorem 1.2] it was proved that $g_1$ is bounded from $L^1(\mathbb R^d,\omega_\mathfrak{K})$ into $L^{1,\infty}(\mathbb R^d,\omega_\mathfrak{K})$. Dziubański and Hejna ($\!\!$[@DH5]) proved $L^p$ boundedness properties, $1<p<\infty$, for Littlewood-Paley functions defined by replacing the heat semigroup $\{T_t^\mathfrak K \}_{t>0}$ by families of Dunkl convolutions. **Theorem 2**. *Let $m\in\mathbb N$, $m\geq 1$. The Littlewood-Paley $g_m$ function is bounded from $L^1(\mathbb R^d,\omega_\mathfrak{K})$ into $L^{1,\infty}(\mathbb R^d,\omega_\mathfrak{K})$ and from $H^1(\Delta _\mathfrak K)$ into $L^1(\mathbb R^d,\omega_\mathfrak{K})$. Furthermore, if $f\in {\rm BMO}^\rho(\mathbb R^d,\omega_\mathfrak{K})$ and $g_m(f)(x)<\infty$, for almost all $x\in\mathbb R^d$, then $g_m(f)\in {\rm BLO}(\mathbb R^d,\omega_\mathfrak{K})$ and $\|g_m(f)\|_{{\rm BLO}(\mathbb R^d,\omega_\mathfrak{K})}\leq C\|f\|_{{\rm BMO}^\rho(\mathbb R^d,\omega_\mathfrak{K})}$, where $C>0$ is independent of $f$.* Suppose that $J\subset \mathbb R$. We consider a complex function $t\rightarrow a_t$ defined on $J$. Let $\sigma>0$. The $\sigma$-variation $V_\sigma(\{a_t\}_{t\in J})$ is defined by $$V_\sigma(\{a_t\}_{t\in J})=\sup_{\substack{t_n<...<t_1\\t_j\in J}}\Big(\sum_{j=1}^{n-1}|a_{t_j}-a_{t_{j+1}}|^\sigma\Big)^{1/\sigma},$$ where the supremum is taken over all finite decreasing sequences $t_n<...<t_2<t_1$ with $t_j\in J$, $j=1,...,n$. We consider a family of Lebesgue measurable functions $\{F_t\}_{t\in J}$ defined on $\mathbb R^d$. The $\sigma$-variation $V_\sigma(\{F_t\}_{t\in J})$ of $\{F_t\}_{t\in J}$ is the function defined by $$V_\sigma(\{F_t\}_{t\in J})(x):=V_\sigma(\{F_t(x)\}_{t\in J}),\quad x\in\mathbb R^d.$$ If $J$ is a countable subset of $\mathbb R$, then $V_\sigma(\{F_t\}_{t\in J})$ defines a Lebesgue measurable function. The measurability of $V_\sigma(\{F_t\}_{t\in J})$ can also be guaranteed when, for almost all $x\in\mathbb R^d$, the function $t\rightarrow F_t(x)$ is continuous with respect to the usual topology of $\mathbb R$, even if $J$ is not countable. Lépingle's inequality ($\!\!$[@Le]) concerning bounded martingale sequences is a very useful tool in proving variational inequalities. Pisier and Xu ($\!\!$[@PX]) and Bourgain ($\!\!$[@Bou1]) gave simple proofs for Lépingle's inequality.
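For a finite set of times, the supremum in the definition of $V_\sigma$ above can be computed exactly by a longest-path dynamic program over ordered indices. The following short sketch (ours, purely illustrative and not part of the original argument) makes the definition concrete on a finite sample $a_{t_1},\dots,a_{t_N}$ with $t_1>\dots>t_N$:

```python
def sigma_variation(a, sigma):
    """sigma-variation of a[0], ..., a[N-1], read as values a_{t_j} along
    decreasing times t_0 > t_1 > ... > t_{N-1}.  Computes
    sup over subsequences i_0 < i_1 < ... of
    (sum_j |a[i_j] - a[i_{j+1}]|**sigma) ** (1/sigma)
    by an O(N^2) dynamic program (illustrative only)."""
    n = len(a)
    best = [0.0] * n              # best[j]: maximal sum over paths ending at j
    for j in range(n):
        for i in range(j):
            cand = best[i] + abs(a[i] - a[j]) ** sigma
            if cand > best[j]:
                best[j] = cand
    return max(best, default=0.0) ** (1.0 / sigma)

# for sigma = 1 this returns the total variation of the finite sequence
print(sigma_variation([0.0, 1.0, -1.0, 0.5], sigma=1))   # 4.5
print(sigma_variation([0.0, 1.0, -1.0, 0.5], sigma=3))   # approximately 2.31
```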
Bourgain's work ($\!\!$[@Bou1]) has motivated many authors to study variational inequalities for families of averages, semigroups of operators and truncating classical singular integral operators ($\!\!$[@AJS], [@BORSS], [@CJRW1], [@CJRW2], [@HMMT], [@JKRW], [@JSW], [@JW], [@LeMXu1] and [@MTX]). In general, in order to obtain boundedness results for variational operators we need to consider $\sigma >2$. This is the case, for instance, with martingales ($\!\!$[@Q]) or with differentiation operators ($\!\!$[@CJRW1 Remark 1.7]). The $\sigma$-variation operator $V_\sigma(\{F_t\}_{t>0})$ is related to the convergence of the family $\{F_t(x)\}_{t>0}$, $x\in\mathbb R^d$. In particular, if $x\in\mathbb R^d$ and $V_\sigma(\{F_t\}_{t>0})(x)<\infty$, there exists $\lim_{t\rightarrow t_0}F_t(x)$, for every $t_0\in (0,\infty)$. Suppose that $\{S_t\}_{t>0}$ is a family of bounded operators on $L^p(\mathbb R^d,\omega_\mathfrak{K})$, $1\leq p<\infty$. We define the $\sigma$-variation operator $V_\sigma(\{S_t\}_{t>0})$ of $\{S_t\}_{t>0}$ by $$V_\sigma (\{S_t\}_{t>0})(f)=V_\sigma (\{S_tf\}_{t>0}),\quad f\in L^p(\mathbb R^d,\omega_\mathfrak K).$$ If $V_\sigma(\{S_t\}_{t>0})$ is bounded on $L^p(\mathbb R^d,\omega_\mathfrak{K})$, then there exists $\lim_{t\rightarrow t_0}S_tf(x)$ for almost all $x\in\mathbb R^d$ and all $t_0\in (0,\infty)$. The use of the $\sigma$-variation operator instead of maximal operators to get pointwise convergence has advantages, because the maximal operators require a dense subset $\mathcal D\subset L^p(\mathbb R^d,\omega_\mathfrak{K})$ where the pointwise convergence holds. However, the use of the $\sigma$-variation operators for this purpose is more involved. When $\sigma=2$, a good substitute for the $\sigma$-variation operator is the oscillation operator defined as follows. Let $\{t_j\}_{j\in \mathbb N}$ be a decreasing sequence of positive numbers. If $\{S_t\}_{t>0}$ is as above we define the oscillation operator $\mathcal O(\{t_j\}_{j\in \mathbb N}, \{S_t\}_{t>0})$ as $$\mathcal O(\{t_j\}_{j\in \mathbb N}, \{S_t\}_{t>0})(f)(x)=\left(\sum_{j\in \mathbb N} \sup_{t_{j+1}\leq\varepsilon_{j+1}<\varepsilon_j\leq t_j}|S_{\varepsilon_j}(f)(x)-S_{\varepsilon_{j+1}}(f)(x)|^2\right)^{1/2}.$$ According to [@LeMXu1 Corollary 4.5] the $\sigma$-variation operator $V_\sigma(\{T_{t,m}^\mathfrak K \}_{t>0})$ is bounded from $L^p(\mathbb R^d,\omega_\mathfrak{K})$ into itself, for every $1<p<\infty$ and $m\in\mathbb N$. Furthermore, by [@LeMXu1 p. 2091] the oscillation operator $\mathcal O(\{t_j\}_{j\in \mathbb N},\{T_{t,m}^\mathfrak K \}_{t>0})$ is bounded from $L^p(\mathbb R^d,\omega_\mathfrak{K})$ into itself, for every $1<p<\infty$ and every decreasing sequence $\{t_j\}_{j\in \mathbb N}$ of positive numbers. **Theorem 3**. *Let $m\in\mathbb N$ and $\sigma>2$. The $\sigma$-variation operator $V_\sigma(\{T_{t,m}^\mathfrak K \}_{t>0})$ is bounded from $L^1(\mathbb R^d,\omega_\mathfrak{K})$ into $L^{1,\infty}(\mathbb R^d,\omega_\mathfrak{K})$ and from $H^1(\Delta_\mathfrak K)$ into $L^1(\mathbb R^d,\omega_\mathfrak{K})$.
Moreover, a function $f\in L^1(\mathbb R^d,\omega_\mathfrak{K})$ is in $H^1(\Delta_\mathfrak K)$ if and only if $V_\sigma(\{T_t^\mathfrak K \}_{t>0})(f)\in L^1(\mathbb R^d,\omega_\mathfrak{K})$ and the quantities $\|f\|_{L^1(\mathbb R^d,\omega_\mathfrak{K})}+\|T_{*,0}^\mathfrak K (f)\|_{L^1(\mathbb R^d,\omega_\mathfrak{K})}$ and $\|f\|_{L^1(\mathbb R^d,\omega_\mathfrak{K})}+\|V_\sigma(\{T_t^\mathfrak K \}_{t>0})(f)\|_{L^1(\mathbb R^d,\omega_\mathfrak{K})}$ are equivalent.* *If $f\in {\rm BMO}^\rho(\mathbb R^d,\omega_\mathfrak{K})$ and $V_\sigma(\{T_{t,m}^\mathfrak K \}_{t>0})(f)(x)<\infty$, for almost every $x\in\mathbb R^d$, then $V_\sigma(\{T_{t,m}^\mathfrak K \}_{t>0})(f)$ belongs to ${\rm BLO}(\mathbb R^d,\omega_\mathfrak{K})$ and $$\|V_\sigma(\{T_{t,m}^\mathfrak K \}_{t>0})(f)\|_{{\rm BLO}(\mathbb R^d,\omega_\mathfrak{K})}\leq C\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})},$$ where $C>0$ does not depend on $f$.* **Theorem 4**. *Let $m\in\mathbb N$. Suppose that $\{t_j\}_{j\in \mathbb N}$ is a decreasing sequence of positive numbers. The oscillation operator $\mathcal O(\{t_j\}_{j\in \mathbb N},\{T_{t,m}^\mathfrak K \}_{t>0})$ is bounded from $L^1(\mathbb R^d,\omega_\mathfrak{K})$ into $L^{1,\infty}(\mathbb R^d,\omega_\mathfrak{K})$ and from $H^1(\Delta _\mathfrak K)$ into $L^1(\mathbb R^d,\omega_\mathfrak{K})$.* *Furthermore, if $f\in {\rm BMO}^\rho(\mathbb R^d,\omega_\mathfrak{K})$ and $\mathcal O(\{t_j\}_{j\in \mathbb N},\{T_{t,m}^\mathfrak K \}_{t>0})(f)(x)<\infty$, for almost all $x\in\mathbb R^d$, then $\mathcal O(\{t_j\}_{j\in \mathbb N},\{T_{t,m}^\mathfrak K \}_{t>0})(f)\in {\rm BLO}(\mathbb R^d,\omega_\mathfrak{K})$ and $$\|\mathcal O(\{t_j\}_{j\in \mathbb N},\{T_{t,m}^\mathfrak K \}_{t>0})(f)\|_{{\rm BLO}(\mathbb R^d,\omega_\mathfrak{K})}\leq C\|f\|_{{\rm BMO}^\rho(\mathbb R^d,\omega_\mathfrak{K})},$$ where $C>0$ is independent of $f$.* In the next sections we will prove our Theorems. Throughout this paper by $c$, $C$ we always denote positive constants that can change in each occurrence. # Proof of Theorems [Theorem 3](#Th1.3){reference-type="ref" reference="Th1.3"} and [Theorem 4](#Th1.4){reference-type="ref" reference="Th1.4"} {#proof-of-theorems-th1.3-and-th1.4} We are going to prove Theorem [Theorem 3](#Th1.3){reference-type="ref" reference="Th1.3"}. In order to establish the boundedness properties for the oscillation operator we can proceed analogously. ## Variation operators on $L^1(\mathbb R^d,\omega_\mathfrak{K})$ According to [@LeMXu1 Corollary 6.1] (see also [@JoRe Theorem 3.3]), since $\{T_t^\mathfrak K \}_{t>0}$ is a diffusion semigroup of operators, $V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})$ is bounded from $L^p(\mathbb R ^d,\omega_\mathfrak{K})$ into itself, for every $1<p<\infty$. By using again [\[1.5\]](#1.5){reference-type="eqref" reference="1.5"} we can see that, for every $h\in L^1(\mathbb R^d,\omega_\mathfrak K)$, $$\int_{\mathbb R^d}|t^m\partial _t^mT_t^\mathfrak K (x,y)||h(y)|d\omega_\mathfrak{K}(y)\leq \frac{C}{\omega _\mathfrak K(B(x,\sqrt{t}))}\|h\|_{L^1(\mathbb R^d,\omega _\mathfrak K)},\quad x\in \mathbb R^d\mbox{ and }t>0.$$ Then, we can define $V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(h)(x)$, for every $x\in \mathbb R^d$. Moreover, by using the dominated convergence theorem we can see that, for every $x\in \mathbb R^d$, the function $t\longrightarrow T_{t,m}^\mathfrak K(h)(x)$ is continuous in $(0,\infty )$.
It follows that $$V_\sigma (\{T_{t,m}\}_{t>0})(h)(x)=\sup_{\substack{0<t_n<...<t_1\\t_j\in \mathbb Q,j=1,...,n}}\Big(\sum_{j=1}^{n-1}|T_{t_j,m}^\mathfrak K (h)(x)-T_{t_{j+1},m}^\mathfrak K(h)(x)|^\sigma\Big)^{1/\sigma},\quad x\in \mathbb R^d.$$ Since the set of finite subsets of $\mathbb Q$ is countable we conclude that the function $V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(h)$ is measurable on $\mathbb R^d$. Let us see that the variation operator $V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})$ is bounded from $L^1(\mathbb R ^d,\omega_\mathfrak{K})$ into $L^{1,\infty}(\mathbb R ^d,\omega_\mathfrak{K})$. The triple $(\mathbb R^d,|\cdot|,\omega_\mathfrak{K})$ is a space of homogeneous type. According to the Calderón-Zygmund decomposition (see, for instance, [@BK Theorem 3.1] or [@Ai Theorem 2.10]), there exists $M>0$ such that, for every $f\in L^1(\mathbb R^d,\omega_\mathfrak{K})$ and $\lambda >0$, we can find a measurable function $g$, a sequence $(b_i)_{i\in \mathbb N}$ of measurable functions and a sequence $(B_i=B(x_i,r_i))_{i\in \mathbb N}$ of Euclidean balls satisfying that - $f=g+\sum_{i\in \mathbb N}b_i$; - $\|g\|_\infty \leq C\lambda$; - $\mbox{ supp } (b_i)\subset B_i^*$, $i\in \mathbb N$, and $\#\{j\in \mathbb N:x\in B_j^*\}\leq M$, for every $x\in \mathbb R^d$; - $\|b_i\|_{L^1(\mathbb R^d, \omega_\mathfrak K)}\leq C\lambda \omega_\mathfrak{K}(B_i)$, $i\in \mathbb N$; - $\sum_{i\in \mathbb N}\omega_\mathfrak{K}(B_i)\leq \frac{C}{\lambda}\|f\|_{L^1(\mathbb R^d, \omega_\mathfrak K)}$. Here $B^*$ represents the ball $B^*=B(x_0,\alpha r_0)$, when $B=B(x_0,r_0)$. The constants $C,\alpha$ do not depend on $f$. Let $\lambda >0$ and $f\in L^1(\mathbb R^d,\omega_\mathfrak{K})\cap L^2(\mathbb R^d,\omega_\mathfrak{K})$. We write $f=g+b$, where $b=\sum_{i\in \mathbb N}b_i$ as above. The series defining $b(x)$ is actually a finite sum for every $x\in \mathbb R^d$. Furthermore, the series $\sum_{i\in \mathbb N} b_i$ converges in $L^1(\mathbb R^d,\omega_\mathfrak{K})$ and $$\|b\|_{L^1(\mathbb R^d, \omega_\mathfrak K)}\leq \sum_{i\in \mathbb N}\|b_i\|_{L^1(\mathbb R^d, \omega_\mathfrak K)} \leq C\lambda \sum_{i\in \mathbb N}\omega_\mathfrak{K}(B_i)\leq C\|f\|_{L^1(\mathbb R^d, \omega_\mathfrak K)}.$$ Then, $g\in L^\infty (\mathbb R^d,\omega_\mathfrak{K})\cap L^1(\mathbb R^d,\omega_\mathfrak{K})$, and hence, $g\in L^q(\mathbb R^d,\omega_\mathfrak{K})$, for every $1\leq q\leq \infty$. It follows that $b\in L^1(\mathbb R^d,\omega_\mathfrak{K})\cap L^2(\mathbb R^d,\omega_\mathfrak{K})$. 
We get $$\begin{aligned} \omega_\mathfrak{K}(\{x\in\mathbb R^d: V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})(f)(x)>\lambda \})&\leq \omega_\mathfrak{K}\big(\{x\in \mathbb R^d:V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})(g)(x)>\frac{\lambda }{2}\}\big)\\ &\quad +\omega_\mathfrak{K}\big(\{x\in \mathbb R^d: V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})(b)(x)>\frac{\lambda }{2}\}\big).\end{aligned}$$ By considering (ii) and that $V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})$ is bounded on $L^2(\mathbb R^d,\omega_\mathfrak K )$ we obtain $$\begin{aligned} \omega_\mathfrak{K}\big(\{x\in \mathbb R^d: V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})(g)(x)>\frac{\lambda }{2}\}\big)&\leq \frac{4}{\lambda ^2}\int_{\mathbb R^d}|V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})(g)(x)|^2d\omega_\mathfrak{K}(x)\\ &\hspace{-2cm}\leq \frac{C}{\lambda ^2}\|g\|_{L^2(\mathbb R^d, \omega_\mathfrak K)}^2\leq \frac{C}{\lambda }\|g\|_{L^1(\mathbb R^d,\omega_\mathfrak K)}\leq \frac{C}{\lambda }\|f\|_{L^1(\mathbb R^d,\omega_\mathfrak K)}.\end{aligned}$$ Our aim is then to establish that $$\label{aim} \omega_\mathfrak{K}\big(\{x\in \mathbb R^d: V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})(b)(x)>\frac{\lambda }{2}\}\big)\leq \frac{C}{\lambda }\|f\|_{L^1(\mathbb R^d,\omega_\mathfrak K)}.$$ Consider $s_i=r_i^2$, $i\in \mathbb N$. Since, for every $s>0$, $T_s^\mathfrak K$ is contractive in $L^1(\mathbb R^d,\omega_\mathfrak{K})$ it follows that $$\sum_{i\in \mathbb N}\|T_{s_i}^\mathfrak K (b_i)\|_{L^1(\mathbb R^d, \omega_\mathfrak K)}\leq \sum_{i\in \mathbb N}\|b_i\|_{L^1(\mathbb R^d, \omega_\mathfrak K)}\leq C\|f\|_{L^1(\mathbb R^d, \omega_\mathfrak K)},$$ and the series $\sum_{i\in \mathbb N}|T_{s_i}^\mathfrak K (b_i)|$ and $\sum_{i\in \mathbb N}|(I-T_{s_i}^\mathfrak K )b_i|$ converge in $L^1(\mathbb R^d,\omega_\mathfrak{K})$. Thus we can write $$b=\sum_{i\in \mathbb N}T_{s_i}^\mathfrak K (b_i)+\sum_{i\in \mathbb N}(I-T_{s_i}^\mathfrak K )b_i,$$ and $$T_{t,m}^\mathfrak K (b)=T_{t,m}^\mathfrak K \Big(\sum_{i\in \mathbb N} T_{s_i}^\mathfrak K (b_i)\Big)+T_{t,m}^\mathfrak K \Big(\sum_{i\in \mathbb N}(I-T_{s_i}^\mathfrak K )b_i\Big),\quad t>0.$$ Then, $$V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})(b)\leq V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})\Big(\sum_{i\in \mathbb N}T_{s_i}^\mathfrak K (b_i)\Big)+V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})\Big(\sum_{i\in \mathbb N}(I-T_{s_i}^\mathfrak K )b_i\Big),$$ and $$\begin{aligned} \label{aim1} \omega_\mathfrak{K}\Big(\{x\in \mathbb R^d: V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})(b)(x)>\frac{\lambda }{2}\}\Big)&\nonumber\\ &\hspace{-4cm}\leq \omega_\mathfrak{K}\Big(\{x\in \mathbb R^d: V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})\Big(\sum_{i\in \mathbb N}T_{s_i}^\mathfrak K(b_i)\Big)(x)>\frac{\lambda }{4}\}\Big)\nonumber\\ &\hspace{-4cm}\quad +\omega_\mathfrak{K}\Big(\{x\in \mathbb R^d: V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})\Big(\sum_{i\in \mathbb N} (I-T_{s_i}^\mathfrak K)b_i\Big)(x)>\frac{\lambda }{4}\}\Big).\end{aligned}$$ Suppose that $m,n\in \mathbb{N}$, $n<m$. By proceeding as in [@Li pp. 
11-12] we obtain that $$\Big\|\sum_{i=n}^mT_{s_i}^\mathfrak K (b_i)\Big\|_{L^2(\mathbb R^d, \omega_\mathfrak K)} \leq \Big\|\sum_{i=n}^mT_{s_i}^\mathfrak K (|b_i|)\Big\|_{L^2(\mathbb R^d, \omega_\mathfrak K)}\leq C\lambda \big(\sum_{i=n}^m\omega_\mathfrak{K}(B_i)\big)^{1/2}.$$ It follows that the series $\sum_{i\in \mathbb N} T_{s_i}^\mathfrak K (b_i)$ converges in $L^2(\mathbb R ^d,\omega_\mathfrak{K})$ and $$\Big\|\sum_{i\in \mathbb N} T_{s_i}^\mathfrak K (b_i)\Big\|_{L^2(\mathbb R^d, \omega_\mathfrak K)} \leq C\lambda \Big(\sum_{i\in \mathbb N} \omega_\mathfrak{K}(B_i)\Big)^{1/2}\leq C\sqrt{\lambda \|f\|_{L^1(\mathbb R^d, \omega_\mathfrak K)} }.$$ Since $V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})$ is bounded on $L^2(\mathbb R^d,\omega_\mathfrak{K})$ we deduce that $$\label{aim2} \omega_\mathfrak{K}\Big(\{x\in \mathbb R^d: V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})\Big(\sum_{i\in \mathbb N}T_{s_i}^\mathfrak K (b_i)\Big)(x)>\frac{\lambda }{4}\}\Big)\leq \frac{C}{\lambda}\|f\|_{L^1(\mathbb R^d, \omega_\mathfrak K)} .$$ On the other hand, $$V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})\Big(\sum_{i\in \mathbb N} (I-T_{s_i}^\mathfrak K)b_i\Big)(x)\leq \sum_{i\in \mathbb N} V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})((I-T_{s_i}^\mathfrak K )b_i)(x),\quad \mbox{a.e. }x\in \mathbb R^d.$$ Indeed, let $0<t_n<...<t_2<t_1$, $t_j\in \mathbb Q$, $j=1,...,n$, $n\in \mathbb N$. Since $T_{t,m}^\mathfrak K$, $t>0$, is bounded on $L^1(\mathbb R^d,\omega_\mathfrak{K})$ we obtain $$T_{t,m}^\mathfrak K \Big(\sum_{i\in \mathbb N}(I-T_{s_i}^\mathfrak K )b_i\Big)_{|t=t_j}=\sum_{i\in \mathbb N} T_{t,m}^\mathfrak K ((I-T_{s_i}^\mathfrak K )b_i)_{|t=t_j},\quad j=1,...,n,$$ in $L^1(\mathbb R^d,\omega_\mathfrak{K})$. We get, for almost every $x\in \mathbb R^d$, $$\begin{aligned} \Big(\sum_{j=1}^{n-1}\Big|T_{t,m}^\mathfrak K \big(\sum_{i\in \mathbb N}(I-T_{s_i}^\mathfrak K )b_i\big)(x)_{|t=t_j}-T_{t,m}^\mathfrak K \big(\sum_{i\in \mathbb N}(I-T_{s_i}^\mathfrak K )b_i\big)(x)_{|t=t_{j+1}}\Big|^\sigma \Big)^{1/\sigma }&\\ &\hspace{-10cm}=\Big(\sum_{j=1}^{n-1}\Big|\big(\sum_{i\in \mathbb N}T_{t,m}^\mathfrak K (I-T_{s_i}^\mathfrak K )b_i\big)(x)_{|t=t_j}-\big(\sum_{i\in \mathbb N}T_{t,m}^\mathfrak K (I-T_{s_i}^\mathfrak K )b_i\big)(x)_{|t=t_{j+1}}\Big|^\sigma \Big)^{1/\sigma }\\ &\hspace{-10cm}=\Big(\sum_{j=1}^{n-1}\Big|\Big[\sum_{i\in \mathbb N}\big(T_{t,m}^\mathfrak K ((I-T_{s_i}^\mathfrak K )b_i)_{|t=t_j}-T_{t,m}^\mathfrak K ((I-T_{s_i}^\mathfrak K )b_i)_{|t=t_{j+1}}\big)\Big](x)\Big|^\sigma \Big)^{1/\sigma }\\ &\hspace{-10cm}\leq \Big(\sum_{j=1}^{n-1}\Big(\Big[\sum_{i\in \mathbb N}\Big|T_{t,m}^\mathfrak K ((I-T_{s_i}^\mathfrak K )b_i)_{|t=t_j}-T_{t,m}^\mathfrak K ((I-T_{s_i}^\mathfrak K )b_i)_{|t=t_{j+1}}\Big|\Big](x)\Big)^\sigma \Big)^{1/\sigma }\\ &\hspace{-10cm}\leq \sum_{i\in \mathbb N}\Big(\sum_{j=1}^{n-1}\big|T_{t,m}^\mathfrak K ((I-T_{s_i}^\mathfrak K )b_i)_{|t=t_j}(x)-T_{t,m}^\mathfrak K ((I-T_{s_i}^\mathfrak K )b_i)_{|t=t_{j+1}}(x)\big|^\sigma \Big)^{1/\sigma }.\end{aligned}$$ We remark that, for $j=1,...,n$, the series $$\sum_{i\in \mathbb N}\big|T_{t,m}^\mathfrak K ((I-T_{s_i}^\mathfrak K )b_i)_{|t=t_j}-T_{t,m}^\mathfrak K ((I-T_{s_i}^\mathfrak K )b_i)_{|t=t_{j+1}}\big|$$ converges in $L^1(\mathbb R^d,\omega_\mathfrak{K})$, and then, for $j=1,...,n$, $$\begin{aligned} \Big[\sum_{i\in \mathbb N}\Big|T_{t,m} ^\mathfrak K ((I-T_{s_i}^\mathfrak K )b_i)_{|t =t _j}-T_{t,m} ^\mathfrak K ((I-T_{s_i}^\mathfrak K )b_i)_{|t =t _{j+1}}\Big|\Big](x)\\ &\hspace{-7cm}=\sum_{i\in \mathbb N} \big|T_{t,m} ^\mathfrak K ((I-T_{s_i}^\mathfrak
K )b_i)_{|t =t _j}(x)-T_{t,m} ^\mathfrak K ((I-T_{s_i}^\mathfrak K )b_i)_{|t =t _{j+1}}(x)\big|,\quad \mbox{ a.e. }x\in \mathbb R^d.\end{aligned}$$ It can be concluded that $$\begin{aligned} V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})\Big(\sum_{i\in \mathbb N} (I-T_{s_i}^\mathfrak K )b_i\Big)(x)\\ &\hspace{-4.5cm}=\sup_{\substack{0<t _n<...<t_1\\t_j\in \mathbb Q,j=1,...,n}}\Big(\sum_{j=1}^{n-1}\big|T_{t,m}^\mathfrak K \Big(\sum_{i\in \mathbb N} (I-T_{s_i}^\mathfrak K )b_i\Big)_{|t =t _j}(x)-T_{t,m}^\mathfrak K \Big(\sum_{i\in \mathbb N} (I-T_{s_i}^\mathfrak K )b_i\Big)_{|t =t _{j+1}}(x)\big|^\sigma \Big)^{1/\sigma }\\ &\hspace{-4.5cm} \leq \sum_{i\in \mathbb N} V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})((I-T_{s_i}^\mathfrak K )b_i)(x),\quad \mbox{ a.e. }x\in \mathbb R^d,\end{aligned}$$ and, thus, it follows that $$\begin{aligned} \omega_\mathfrak{K}\Big(\big\{x\in \mathbb R^d: V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})\Big(\sum_{i\in \mathbb N} (I-T_{s_i}^\mathfrak K)b_i\Big)(x)>\frac{\lambda }{4}\big\}\Big)\\ &\hspace{-4cm}\leq \omega_\mathfrak{K}\Big(\big\{x\in \mathbb R^d: \sum_{i\in \mathbb N}V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})\big((I-T_{s_i}^\mathfrak K )b_i\big)(x)>\frac{\lambda }{4}\big\}\Big).\end{aligned}$$ We are going to see that $$\label{aim3} \omega_\mathfrak{K}\Big(\big\{x\in \mathbb R^d: \sum_{i\in \mathbb N} V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})((I-T_{s_i}^\mathfrak K )b_i)(x)>\frac{\lambda }{4}\big\}\Big)\leq \frac{C}{\lambda}\|f\|_{L^1(\mathbb R^d, \omega_\mathfrak K)},$$ which, jointly [\[aim1\]](#aim1){reference-type="eqref" reference="aim1"} and [\[aim2\]](#aim2){reference-type="eqref" reference="aim2"}, leads to [\[aim\]](#aim){reference-type="eqref" reference="aim"}. We have that $$\begin{aligned} \omega_\mathfrak{K}\Big(\big\{x\in \mathbb R^d: \sum_{i\in \mathbb N} V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})((I-T_{s_i}^\mathfrak K )b_i)(x)>\frac{\lambda }{4}\big\}\Big)&\leq \omega_\mathfrak{K}\big(\bigcup_{i\in \mathbb N} \theta (2B_i^*)\big)\\ &\hspace{-7cm}+\omega_\mathfrak{K}\Big(\big\{x\in \bigcap_{i\in \mathbb N} (\theta (2B_i^*))^c:\sum_{i\in \mathbb N} V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})((I-T_{s_i}^\mathfrak K )b_i)(x)>\frac{\lambda }{4}\big\}\Big).\end{aligned}$$ From [\[1.1\]](#1.1){reference-type="eqref" reference="1.1"} and [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"} we get $$\omega_\mathfrak{K}\big(\bigcup_{i\in \mathbb N} \theta (2B_i^*)\big)\leq C\sum_{i\in \mathbb N} \omega_\mathfrak{K}(B_i)\leq \frac{C}{\lambda }\|f\|_{L^1(\mathbb R^d, \omega_\mathfrak K)} .$$ On the other hand, observe that, for $F\in L^1(\mathbb R^d,\omega_\mathfrak K)$ defined on $\mathbb R^d$, we can write $$\begin{aligned} \label{Vderivt} V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})(F)(x)&\leq \int_0^\infty |\partial _t[T_{t,m}^\mathfrak KF(x)]|dt\nonumber\\ &\leq \int_0^\infty (m|T_{t,m}^\mathfrak KF(x)|+|T_{t,m+1}^\mathfrak KF(x)|)\frac{dt}{t},\quad x\in \mathbb R^d.\end{aligned}$$ Then, $$\begin{aligned} & \omega_\mathfrak{K}\Big(\big\{x\in \bigcap_{i\in \mathbb N} (\theta (2B_i^*))^c:\sum_{i\in \mathbb N} V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})((I-T_{s_i}^\mathfrak K )b_i)(x)>\frac{\lambda }{4}\big\}\Big)\\ &\leq \frac{4}{\lambda }\sum_{i\in \mathbb N} \int_{(\theta (2B_i^*))^c}V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})((I-T_{s_i}^\mathfrak K )b_i)(x)d\omega_\mathfrak{K}(x)\\ &\leq \frac{C}{\lambda }\sum_{i\in \mathbb N} \int_{(\theta (2B_i^*))^c}\int_0^\infty (|T_{t,m}^\mathfrak K ((I-T_{s_i}^\mathfrak K )b_i)(x)|+|T_{t,m+1}^\mathfrak K ((I-T_{s_i}^\mathfrak K 
)b_i)(x)|)\frac{dt}{t}d\omega_\mathfrak{K}(x)\\ &\leq \frac{C}{\lambda }\sum_{i\in \mathbb N} \int_{B_i^*}|b_i(y)|(J_i^m(y)+J_i^{m+1}(y))d\omega_\mathfrak K(y),\end{aligned}$$ where, for every $i\in \mathbb N$, $J_i^0=0$ and for $r\in \mathbb N$, $r\geq 1$, $$J_i^r(y)=\int_{(\theta (2B_i^*))^c}\int_0^\infty t^{r-1}|\partial _t^r(T_t^\mathfrak K (x,y)-T_{t+s_i}^\mathfrak K(x,y))|dtd\omega_\mathfrak{K}(x),\quad y\in B_i^*.$$ We are going to see that, for $r\in \mathbb N$, $r\geq 1$, $\sup_{i\in \mathbb N}\sup_{y\in B_i^*}J_i^r(y)<\infty$. Let $i\in \mathbb N$ and $r\in \mathbb N$, $r\geq 1$. We write $J_i^r=\sum_{\ell \in \mathbb N}\mathbb J_{i,\ell}^r$, where, for every $\ell \in \mathbb N$, $$\mathbb{J}_{i ,\ell}^r(y)=\int_{(\theta (2B_i^*))^{\rm c}}\int_{\ell s_i}^{(\ell +1)s_i}t^{r-1}|\partial _t^r (T_t^\mathfrak K (x,y)-T_{t+s_i}^\mathfrak K(x,y))|dtd\omega_\mathfrak{K}(x),\quad y\in B_i^*.$$ Since $\partial _u^rT_u^\mathfrak K (x,y)=\Delta _{\mathfrak K,x}^rT_u^\mathfrak K(x,y)$, $x,y\in \mathbb R ^d$, $u>0$, we get $$\begin{aligned} \mathbb J_{i,\ell}^r(y)&=\int_{(\theta (2B_i^*))^{\rm c}}\int_{\ell s_i}^{(\ell+1)s_i}t^{r-1}\Big|\int_t^{t+s_i}\Delta_{\mathfrak K,x}^{r+1}T_u^\mathfrak K (x,y)du\Big|dtd\omega_\mathfrak{K}(x)\\ &\leq \int_{\ell s_i}^{(\ell +1)s_i}t^{r-1}\int_t^{t+s_i}\int_{(\theta (2B_i^*))^{\rm c}}|\Delta_{\mathfrak K,x} ^{r+1}T_u^\mathfrak K (x,y)|d\omega_\mathfrak{K}(x)dudt,\quad y\in B_i^*.\end{aligned}$$ For every $y\in B_i^*$, $x\in \mathbb R^d$ and $g\in G$, we have that $|gx-x_i|\leq |gx-y|+|y-x_i|$. Then, $\rho(x,x_i)\leq \rho(x,y)+\alpha r_i$ and $(\theta (2B_i^*))^{\rm c}\subset (\theta (B(y,\alpha r_i)))^{\rm c}$, $x\in \mathbb R^d$, $y\in B_i^*$. By using now [@Li Lemmas 2.3 and 2.6] we obtain, for small enough $\varepsilon>0$, $y\in B_i^*$ and $u\in (\ell s_i,(\ell+1)s_i)$, $$\begin{aligned} \label{cLi} \int_{(\theta (2B_i^*))^{\rm c}}|\Delta_\mathfrak K ^{r+1}T_u^\mathfrak K (x,y)|d\omega_\mathfrak{K}(x)&\nonumber\\ &\hspace{-4cm}\leq \left(\int_{(\theta (B(y,\alpha r_i)))^{\rm c}}|\Delta_\mathfrak K ^{r+1}T_u^\mathfrak K (x,y)|^2e^{\varepsilon \frac{\rho(x,y)^2}{u}}d\omega_\mathfrak{K}(x)\right)^{1/2}\left( \int_{(\theta (B(y,\alpha r_i)))^{\rm c}}e^{-\varepsilon \frac{\rho(x,y)^2}{u}}d\omega_\mathfrak{K}(x)\right)^{1/2}\nonumber\\ &\hspace{-4cm}\leq C\Big(\frac{e^{-\alpha^2\varepsilon \frac{r_i^2}{u}}}{u^{2r+2}\omega_\mathfrak{K}(B(y,\sqrt{u}))}\Big)^{1/2}\Big(\omega_\mathfrak{K}(B(y,\sqrt{u}))e^{-\alpha ^2\varepsilon \frac{r_i^2}{2u}}\Big)^{1/2}=C\frac{e^{-c\frac{s_i}{u}}}{u^{r+1}}.\end{aligned}$$ We get, if $\ell \geq 1$, $$\mathbb J_{i,\ell}^r(y)\leq C\int_{\ell s_i}^{(\ell+1)s_i}t^{r-1}\int_t^{t+s_i}\frac{e^{-c\frac{s_i}{u}}}{u^{r+1}}dudt\leq Cs_i\int_{\ell s_i}^{(\ell+1)s_i}\frac{dt}{t^2}\leq \frac{C}{\ell^2},\quad y\in B_i^*.$$ On the other hand, we can write $$\begin{aligned} \mathbb J_{i,0}^r(y)&\leq C\int_0^{s_i}t^{r-1}\int_t^{t+s_i}\Big(\frac{u}{s_i}\Big)^{3/2}\frac{du}{u^{r+1}}dt\leq \frac{C}{s_i^{3/2}}\int_0^{s_i}t^{r-1}\int_t^{t+s_i}\frac{du}{u^{r-1/2}}dt\\ &\leq \frac{C}{\sqrt{s_i}}\int_0^{s_i}\frac{dt}{\sqrt{t}}=C,\quad y\in B_i^*.\end{aligned}$$ We conclude that $J_i^r(y)=\sum_{\ell\in \mathbb N} \mathbb J_{i,\ell}^r (y)\leq C$, $y\in B_i^*$, where $C>0$ does not depend on $i$. 
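Here, the case $\ell =0$ only uses the elementary inequality $e^{-s}\leq C_\beta s^{-\beta}$, $s>0$, valid for every $\beta >0$ and applied with $\beta =3/2$, that is, $$e^{-c\frac{s_i}{u}}\leq C\Big(\frac{u}{s_i}\Big)^{3/2},\quad u>0,$$ while for $\ell \geq 1$ it is enough to bound $e^{-c\frac{s_i}{u}}\leq 1$ and to use that $u\geq t\geq \ell s_i$ in the inner integral.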
We obtain $$\begin{aligned} \omega_\mathfrak{K}\Big(\big\{x\in \bigcap_{i\in \mathbb N} (\theta (2B_i^*))^{\rm c}:\sum_{i\in \mathbb N} V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})((I-T_{s_i}^\mathfrak K )b_i)(x)>\frac{\lambda }{4}\big\}\Big)&\\ &\hspace{-6cm} \leq \frac{C}{\lambda }\sum_{i\in \mathbb N} \int_{B_i^*}|b_i(y)|d\omega_\mathfrak{K}(y)\leq \frac{C}{\lambda}\|f\|_{L^1(\mathbb R^d, \omega_\mathfrak K)},\end{aligned}$$ and [\[aim3\]](#aim3){reference-type="eqref" reference="aim3"} is thus established.

## Variation operators on $H^1(\Delta _\mathfrak K)$

We prove first that there exists $C>0$ such that, for every $f\in H^1(\Delta _\mathfrak K)$, $$\|V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f)\|_{L^1(\mathbb R^d, \omega_\mathfrak K)} \leq C\|f\|_{H^1(\Delta _\mathfrak K)} .$$ It is sufficient to see that there exists $C>0$ such that, for every $(1,2,\Delta_\mathfrak K,1)$-atom $a$, $$\|V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(a)\|_{L^1(\mathbb R^d, \omega_\mathfrak K)} \leq C.$$ Let $a$ be a $(1,2,\Delta_\mathfrak K ,1)$-atom. There exist $b\in D(\Delta_\mathfrak K )$ and a ball $B=B(x_B,r_B)$ satisfying that - $a=\Delta_\mathfrak K b$; - ${\rm supp} (\Delta_\mathfrak K ^\ell b)\subset \theta(B)$, $\ell =0,1$; - $\|(r_B^2\Delta_\mathfrak K )^\ell b\|_{L^2(\mathbb R^d,\omega_\mathfrak K)}\leq r_B^2\omega_\mathfrak{K}(B)^{-1/2}$, $\ell =0,1$. We decompose $\|V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(a)\|_{L^1(\mathbb R^d, \omega_\mathfrak K)}$ as follows $$\|V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(a)\|_{L^1(\mathbb R^d, \omega_\mathfrak K)} =\left(\int_{\theta(4B)}+\int_{(\theta(4B))^{\rm c}}\right)V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(a)(x)d\omega_\mathfrak{K}(x)=I_1(a)+I_2(a).$$ Since $V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})$ is bounded on $L^2(\mathbb R^d,\omega_\mathfrak{K})$, using [\[1.1\]](#1.1){reference-type="eqref" reference="1.1"}, [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"} and (iii) for $\ell =1$ we obtain $$I_1(a)\leq \|V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(a)\|_{L^2(\mathbb R^d,\omega _\mathfrak K)}\omega_\mathfrak{K}(\theta(4B))^{1/2}\leq C\|a\|_{L^2(\mathbb R^d,\omega_\mathfrak{K})}\omega_\mathfrak K(B)^{1/2}\leq C.$$ On the other hand, by taking into account [\[Vderivt\]](#Vderivt){reference-type="eqref" reference="Vderivt"} we have that $V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(a)(x)\leq C(J_m(x)+J_{m+1}(x))$, $x\in \mathbb R^d$, where $J_0=0$ and for every $r\in \mathbb N$, $r\geq 1$, $$J_r(x)=\int_0^\infty |T_{t,r}^\mathfrak K (a)(x)|\frac{dt}{t},\quad x\in \mathbb R^d.$$ Let $r\in \mathbb N$, $r\geq 1$. Since $a=\Delta_\mathfrak K b$, it follows that $|T_{t,r}^\mathfrak K (a)|=|t^r\partial_t^{r+1}T_t^\mathfrak K (b)|=|t^r\Delta_\mathfrak K^{r+1}T_t^\mathfrak K(b)|$. Then, as in [\[cLi\]](#cLi){reference-type="eqref" reference="cLi"}, according to [@Li Lemmas 2.3 and 2.6] we obtain $$\begin{aligned} \int_{(\theta (4B))^{\rm c}}J_r(x)d\omega_\mathfrak{K}(x)&\leq \int_0^\infty t^{r-1}\int_{(\theta(4B))^{\rm c}}|\Delta_\mathfrak K^{r+1}T_t^\mathfrak K (b)(x)|d\omega_\mathfrak{K}(x)dt\\ &\leq C\int_{\theta (B)}|b(y)|\int_0^\infty \frac{e^{-c\frac{r_B^2}{t}}}{t^2}dtd\omega_\mathfrak{K}(y)\leq \frac{C}{r_B^2}\|b\|_{L^2(\mathbb R^d,\omega_\mathfrak K)}\omega_\mathfrak{K}(B)^{1/2}\leq C.\end{aligned}$$ It follows that $I_2(a)\leq C$ and it can be concluded that $\|V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(a)\|_{L^1(\mathbb R^d, \omega_\mathfrak K)} \leq C,$ where $C>0$ does not depend on $a$.
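In the last chain we have used the substitution $u=c\,r_B^2/t$, which gives $$\int_0^\infty \frac{e^{-c\frac{r_B^2}{t}}}{t^{2}}\,dt=\frac{1}{c\,r_B^{2}}\int_0^\infty e^{-u}\,du=\frac{1}{c\,r_B^{2}},$$ together with the Cauchy-Schwarz inequality, (iii) for $\ell =0$ and [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"}, which yield $$\int_{\theta (B)}|b(y)|\,d\omega_\mathfrak K(y)\leq \|b\|_{L^2(\mathbb R^d,\omega_\mathfrak K)}\,\omega_\mathfrak K(\theta (B))^{1/2}\leq C r_B^{2}.$$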
Let us see now that $\|f\|_{L^1(\mathbb R^d, \omega_\mathfrak K)} +\|T_{*,0}^\mathfrak K (f)\|_{L^1(\mathbb R^d, \omega_\mathfrak K)}$ and $\|f\|_{L^1(\mathbb R^d, \omega_\mathfrak K)} +\|V_\sigma (\{T_t^\mathfrak K\}_{t>0})(f)\|_{L^1(\mathbb R^d, \omega_\mathfrak K)}$ are equivalent. It is clear that $$T_{*,0}^\mathfrak K (f) \leq V_\sigma (\{T_t^\mathfrak K \}_{t>0})(f)+|T_1^\mathfrak K (f)|.$$ By using [@ADH Theorem 2.2] and that $T_1^\mathfrak K$ is bounded from $L^1(\mathbb R^d,\omega_\mathfrak{K})$ into itself, we deduce that if $f\in L^1(\mathbb R^d,\omega_\mathfrak{K})$ and $V_\sigma (\{T_t^\mathfrak K \}_{t>0})(f)\in L^1(\mathbb R^d,\omega_\mathfrak{K})$, then $f\in H^1(\Delta_\mathfrak K )$. Moreover, as it has just been proved, $V_\sigma (\{T_t^\mathfrak K \}_{t>0})$ is bounded from $H^1(\Delta_\mathfrak K )$ into $L^1(\mathbb R^d,\omega_\mathfrak{K})$, then we can conclude that if $f\in L^1(\mathbb R^d,\omega_\mathfrak{K})$, then $f\in H^1(\Delta_\mathfrak K)$ if and only if $V_\sigma (\{T_t\}_{t>0})(f)\in L^1(\mathbb R^d,\omega_\mathfrak{K})$. We also have that $$T_{*,0}^\mathfrak K (f)\leq V_\sigma (\{T_t^\mathfrak K \}_{t>0})(f)+|T_\varepsilon ^\mathfrak K f|,\quad \varepsilon >0.$$ By taking into account that, for every $f\in L^1(\mathbb R^d,\omega_\mathfrak{K})$, $\lim_{\varepsilon \rightarrow 0^+}T_\varepsilon ^\mathfrak K (f)(x)=f(x)$, for almost all $x\in \mathbb R^d$, there exists $C>0$ for which $$\begin{aligned} \frac{1}{C}\big(\|f\|_{L^1(\mathbb R^d, \omega_\mathfrak K)} +\|T_{*,0}^\mathfrak K (f)\|_{L^1(\mathbb R^d, \omega_\mathfrak K)}\big) &\leq \|f\|_{L^1(\mathbb R^d, \omega_\mathfrak K)} +\|V_\sigma (\{T_t^\mathfrak K\}_{t>0})(f)\|_{L^1(\mathbb R^d, \omega_\mathfrak K)}\\ &\leq C\big(\|f\|_{L^1(\mathbb R^d, \omega_\mathfrak K)} +\|T_{*,0}^\mathfrak K (f)\|_{L^1(\mathbb R^d, \omega_\mathfrak K)} \big),\end{aligned}$$ for every $f\in H^1(\Delta_\mathfrak K)$. ## Variation operators on ${\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})$ {#S2.3} Suppose that $f\in {\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})$ satisfies that $V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f)(x)<\infty$, for almost all $x\in \mathbb R^d$. Our first objective is to establish that $V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f)\in {\rm BMO}(\mathbb R^d,\omega_\mathfrak{K})$. Let $x_0\in \mathbb R^d$. The function $\phi (t)=T_t^\mathfrak K (f)(x_0)$, $t\in (0,\infty )$, is smooth and, for every $\ell\in \mathbb N$, $$\phi ^{(\ell)}(t)=\int_{\mathbb R^d}\partial _t^\ell T_t^\mathfrak K (x_0,y)f(y)d\omega_\mathfrak{K}(y),\quad t\in (0,\infty ).$$ Indeed, let $t>0$ and $B_t=B(x_0,\sqrt{t})$. 
According to [\[1.1\]](#1.1){reference-type="eqref" reference="1.1"}, [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"}, [\[1.4\]](#1.4){reference-type="eqref" reference="1.4"} and [\[1.5\]](#1.5){reference-type="eqref" reference="1.5"} we obtain $$\begin{aligned} \int_{\mathbb R^d}|T_t^\mathfrak K (x_0,y)f(y)|d\omega_\mathfrak{K}(y)&\leq |f_{\theta (B_t)}|+\int_{\mathbb R^d}T_t^\mathfrak K (x_0,y)|f(y)-f_{\theta (B_t)}|d\omega_\mathfrak{K}(y)\\ &\leq |f_{\theta (B_t)}|+\frac{C}{\omega_\mathfrak{K}(\theta (B_t))}\int_{\mathbb R^d}e^{-c\frac{\rho(x_0,y)^2}{t}}|f(y)-f_{\theta (B_t)}|d\omega_\mathfrak{K}(y),\end{aligned}$$ and we can write $$\begin{aligned} \label{sumBMO} \int_{\mathbb R^d}e^{-c\frac{\rho(x_0,y)^2}{t}}|f(y)-f_{\theta (B_t)}|d\omega_\mathfrak{K}(y)&\nonumber\\ &\hspace{-4.5cm}\leq \left(\int_{\theta (B_t)}+\sum_{k=1}^\infty \int_{\theta (2^kB_t)\setminus\theta (2^{k-1}B_t)}\right)e^{-c\frac{\rho(x_0,y)^2}{t}}|f(y)-f_{\theta (B_t)}|d\omega_\mathfrak{K}(y)\nonumber\\ &\hspace{-4.5cm}\leq \sum_{k=0}^\infty e^{-c2^{2k}}\int_{\theta (2^kB_t)}|f(y)-f_{\theta (B_t)}|d\omega_\mathfrak{K}(y)\\ &\hspace{-4.5cm}\leq \sum_{k=0}^\infty e^{-c2^{2k}}\omega_\mathfrak{K}(\theta (2^kB_t))\Big(\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})}+|f_{\theta (2^kB_t)}-f_{\theta (B_t)}|\Big)\nonumber\\ &\hspace{-4.5cm}\leq \sum_{k=0}^\infty e^{-c2^{2k}}\omega_\mathfrak{K}(\theta (2^kB_t))\Big(\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})}+\frac{\omega_\mathfrak K(\theta (2^kB_t))}{\omega_\mathfrak K(\theta (B_t))}\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})}\Big)\nonumber\\ &\hspace{-4.5cm}\leq C\omega_\mathfrak K(\theta (B_t))\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})}\sum_{k=0}^\infty (1+2^{2kD})e^{-c2^{2k}}.\nonumber\end{aligned}$$ Then, $$\int_{\mathbb R^d}T_t^\mathfrak K (x_0,y)|f(y)|d\omega_\mathfrak{K}(y)<\infty,\quad t>0.$$ In an analogous way, by using again [\[1.5\]](#1.5){reference-type="eqref" reference="1.5"}, we can see that, for every $\ell\in \mathbb N$, $$\partial _t^\ell \int_{\mathbb R^d}T_t^\mathfrak K (x_0,y)f(y)d\omega_\mathfrak{K}(y)=\int_{\mathbb R^d}\partial _t^\ell T_t^\mathfrak K (x_0,y)f(y)d\omega_\mathfrak{K}(y),\quad t>0,$$ and the last integral is absolutely convergent. Let now $x_0\in \mathbb R^d$ and $r_0>0$. We write $B=B(x_0,r_0)$ and decompose $f=f_1+f_2+f_3$, where $f_1=(f-f_{\theta (4B)})\mathcal X_{\theta (4B)}$, $f_2=(f-f_{\theta (4B)})\mathcal X_{(\theta (4B))^{\rm c}}$ and $f_3=f_{\theta (4B)}$. Since $T_t^\mathfrak K (f_3)(x)=f_3$, $x\in \mathbb R^d$ and $t>0$, it follows that $V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f_3)=0$ and $$V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f_2)\leq V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f)+V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f_1).$$ By using [\[1.1\]](#1.1){reference-type="eqref" reference="1.1"}, [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"}, [@JiLi3 Proposition 7.3] and that $V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})$ is bounded on $L^2(\mathbb R^d,\omega_\mathfrak{K})$ we obtain $$\label{Vsigmaf1} \|V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f_1)\|_{L^2(\mathbb R^d,\omega_\mathfrak{K})}^2\leq C\int_{\theta (4B)}|f(y)-f_{\theta (4B)}|^2d\omega_\mathfrak{K}(y)\leq C\omega_\mathfrak{K}(B )\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})}^2.$$ Hence, $V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f_1)(x)<\infty$, for almost all $x\in \mathbb R^d$.
Since $V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f)(x)<\infty$, for almost all $x\in \mathbb R^d$, we have that $V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f_2)(x)<\infty$, for almost all $x\in \mathbb R^d$. We choose $x_1\in B$ such that $V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f_2)(x_1)<\infty$. We can write $$\begin{aligned} \frac{1}{\omega_\mathfrak{K}(B)}\int_B|V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f)(x)-V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f_2)(x_1)|d\omega_\mathfrak{K}(x)\\ &\hspace{-8cm}\leq \frac{1}{\omega_\mathfrak{K}(B)}\int_B V_\sigma (\{T_{t,m}^\mathfrak K (f)(x)-T_{t,m}^\mathfrak K (f_2)(x_1)\}_{t>0})d\omega_\mathfrak{K}(x)\\ &\hspace{-8cm}\leq \frac{1}{\omega_\mathfrak{K}(B)}\int_B V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f_1)(x)d\omega_\mathfrak{K}(x)\\ &\hspace{-8cm}\quad +\frac{1}{\omega_\mathfrak{K}(B)}\int_B V_\sigma (\{T_{t,m}^\mathfrak K (f_2)(x)-T_{t,m}^\mathfrak K (f_2)(x_1)\}_{t>0})d\omega_\mathfrak{K}(x).\end{aligned}$$ By applying Hölder inequality and considering [\[Vsigmaf1\]](#Vsigmaf1){reference-type="eqref" reference="Vsigmaf1"} we get $$\frac{1}{\omega_\mathfrak{K}(B)}\int_B V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f_1)(x)d\omega_\mathfrak{K}(x)\leq C\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})}.$$ On the other hand, by taking into account [\[Vderivt\]](#Vderivt){reference-type="eqref" reference="Vderivt"} it can be seen that $$V_\sigma (\{T_{t,m}^\mathfrak K (f_2)(x)-T_{t,m}^\mathfrak K (f_2)(x_1)\}_{t>0}) \leq C\int_{(\theta (4B))^{\rm c}}|f_2(y)|(J_m(x,y)+J_{m+1}(x,y))d\omega_\mathfrak{K}(y),\quad x\in \mathbb R^d,$$ where $J_0=0$ and for $r\in \mathbb N$, $r\geq 1$, $$J_r(x,y)=\int_0^\infty |t^{r-1}\partial _t^r(T_t^\mathfrak K (x,y)-T_t^\mathfrak K (x_1,y))|dt,\quad x\in B \mbox{ and }y\in (\theta(4B))^{\rm c}.$$ Consider $r\in \mathbb N$, $r\geq 1$. We claim that $$\label{Jr} J_r(x,y)\leq C\frac{r_0}{\rho (x_0,y)\omega_\mathfrak K(\theta (B(x_0,\rho (x_0,y))))},\quad x\in B,\;y\in (\theta (4B))^{\rm c}.$$ Observe first that by virtue of [\[1.1\]](#1.1){reference-type="eqref" reference="1.1"} and [\[1.5\]](#1.5){reference-type="eqref" reference="1.5"} it follows that $$\label{c1} |t^r\partial _t^rT_t^\mathfrak K (u,v)|\leq C\frac{e^{-c\frac{\rho(u,v)^2}{t}}}{V(u,v,\sqrt{t})}\leq C\frac{e^{-c\frac{\rho(u,v)^2}{t}}}{V(u,v,\rho (u,v))},\quad u,v\in \mathbb R^d,\;t>0,$$ and in analogous way, by considering [\[1.6\]](#1.6){reference-type="eqref" reference="1.6"}, for $t>0$ and $u,v,w\in \mathbb R^d$, $|v-w|<\sqrt{t}$, $$\label{c2} |t^r\partial _t^r[T_t^\mathfrak K (u,v)-T_t^\mathfrak K (u,w)]|\leq C\frac{|v-w|}{\sqrt{t}}\frac{e^{-c\frac{\rho(u,v)^2}{t}}}{V(u,v,\rho (u,v))}.$$ Let $x\in B$ and $y\in (\theta (4B))^{\rm c}$. 
We decompose $J_r(x,y)$ in the following way, $$J_r(x,y)=\left(\int_0^{|x-x_1|^2}+\int_{|x-x_1|^2}^\infty \right)|t^{r-1}\partial _t^r(T_t^\mathfrak K (x,y)-T_t^\mathfrak K (x_1,y))|dt=J_{r,1}(x,y)+J_{r,2}(x,y).$$ By [\[c1\]](#c1){reference-type="eqref" reference="c1"} we obtain $$\begin{aligned} \label{Jr1} J_{r,1}(x,y)&\leq C\int_0^{|x-x_1|^2}\Big(\frac{e^{-c\frac{\rho(x,y)^2}{t}}}{\omega_\mathfrak{K}(B(x,\rho(x,y)))}+\frac{e^{-c\frac{\rho(x_1,y)^2}{t}}}{\omega_\mathfrak{K}(B(x_1,\rho (x_1,y)))}\Big)\frac{dt}{t}\nonumber\\ &\leq C\left(\frac{1}{\rho(x,y)\omega_\mathfrak{K}(B(x,\rho(x,y)))}+\frac{1}{\rho(x_1,y)\omega_\mathfrak{K}(B(x_1,\rho(x_1,y)))}\right)\int_0^{|x-x_1|^2}\frac{dt}{\sqrt{t}}\nonumber\\ &\leq C|x-x_1|\left(\frac{1}{\rho(x,y)\omega_\mathfrak{K}(B(x,\rho(x,y)))}+\frac{1}{\rho(x_1,y)\omega_\mathfrak{K}(B(x_1,\rho(x_1,y)))}\right)\nonumber\\ &\leq C\frac{r_0}{\rho(x,y)\omega_\mathfrak{K}(B(x,\rho(x,y)))}.\end{aligned}$$ In the last inequality we have taken into account that $\rho (x,y)\sim \rho (x_1,y)$, specifically, $$\frac{1}{3}\rho (x,y)\leq \rho (x_1,y)\leq \frac{5}{3}\rho (x,y).$$ Note that $\rho(x,y)\geq \rho(y,x_0)-\rho(x,x_0)\geq 3r_0$ and that $\rho (x,x_1)\leq |x-x_1|<2r_0$. Then $\rho(x_1,y)\geq \rho(x,y)-\rho(x,x_1)\geq \rho(x,y)-2r_0\geq \rho(x,y)-\frac{2}{3}\rho(x,y)=\frac{1}{3}\rho(x,y)$. Furthermore, $\rho (x_1,y)\leq\rho (x_1,x)+ \rho (x,y)\leq 2r_0+\rho (x,y)\leq \frac{5}{3}\rho (x,y)$. Observe also that $B(x,\rho (x,y))\subseteq B(x_1,5\rho(x_1,y))$ and thus, $\omega_\mathfrak K(B(x,\rho (x,y)))\leq C\omega_\mathfrak K(B(x_1,\rho(x_1,y)))$. On the other hand, according to [\[c2\]](#c2){reference-type="eqref" reference="c2"} we get $$\begin{aligned} \label{Jr2} J_{r,2}(x,y)&\leq C\frac{|x-x_1|}{\omega_\mathfrak{K}(B(x,\rho(x,y)))}\int_0^\infty \frac{e^{-c\frac{\rho(x,y)^2}{t}}}{t^{3/2}}dt\leq C\frac{r_0}{\rho(x,y)\omega_\mathfrak{K}(B(x,\rho(x,y)))}.\end{aligned}$$ Observe now that $\rho(x_0,y)-r_0\leq \rho(x,y)\leq \rho (x_0,y)+r_0$. Then, $\frac{3}{4}\rho(x_0,y)\leq \rho (x,y)\leq \frac{5}{4}\rho (x_0,y)$. In addition, if $z\in B(x_0,\rho(x_0,y))$ then $|z-x|\leq \rho(x_0,y)+r_0\leq \frac{5}{4}\rho(x_0,y)\leq \frac{5}{3}\rho(x,y)$, that is, $z\in B(x, \frac{5}{3}\rho (x,y))$. Thus, from [\[Jr1\]](#Jr1){reference-type="eqref" reference="Jr1"} and [\[Jr2\]](#Jr2){reference-type="eqref" reference="Jr2"} and considering [\[1.1\]](#1.1){reference-type="eqref" reference="1.1"} and [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"} we conclude [\[Jr\]](#Jr){reference-type="eqref" reference="Jr"}.
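The two integrals appearing in [\[Jr1\]](#Jr1){reference-type="eqref" reference="Jr1"} and [\[Jr2\]](#Jr2){reference-type="eqref" reference="Jr2"} are elementary: the first one only uses $e^{-c\frac{\rho(x,y)^2}{t}}\leq C\frac{\sqrt{t}}{\rho (x,y)}$, $t>0$, together with $\int_0^{|x-x_1|^2}\frac{dt}{\sqrt{t}}=2|x-x_1|$, while the substitution $u=c\rho(x,y)^2/t$ gives $$\int_0^\infty \frac{e^{-c\frac{\rho(x,y)^2}{t}}}{t^{3/2}}\,dt=\frac{1}{\sqrt{c}\,\rho (x,y)}\int_0^\infty u^{-1/2}e^{-u}\,du=\frac{\Gamma (1/2)}{\sqrt{c}\,\rho (x,y)}\leq \frac{C}{\rho (x,y)}.$$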
By using [\[Jr\]](#Jr){reference-type="eqref" reference="Jr"} it follows that $$\begin{aligned} \int_{(\theta (4B))^{\rm c}}|f_2(y)|J_r(x,y)d\omega_\mathfrak{K}(y)&\leq Cr_0\int_{(\theta (4B))^{\rm c}}\frac{|f_2(y)|}{\rho(x_0,y)\omega_\mathfrak{K}(\theta (B(x_0,\rho(x_0,y))))}d\omega_\mathfrak{K}(y)\\ &\leq Cr_0\sum_{k=2}^\infty \int_{\theta (2^{k+1}B)\setminus \theta(2^kB)}\frac{|f_2(y)|}{\rho(x_0,y)\omega_\mathfrak{K}(\theta(B (x_0,\rho(x_0,y))))}d\omega_\mathfrak{K}(y)\\ &\leq Cr_0\sum_{k=2}^\infty \frac{1}{2^kr_0\omega_\mathfrak{K}(\theta (2^kB))}\int_{\theta(2^{k+1}B)}|f_2(y)|d\omega_\mathfrak{K}(y)\end{aligned}$$ We now observe that, for every $k\in \mathbb N$, $k\geq 2$, $$\begin{aligned} \int_{\theta(2^{k+1}B)}|f_2(y)|d\omega_\mathfrak{K}(y)&\leq \omega_\mathfrak K(\theta (2^{k+1}B))\Big(\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak K)}+\sum_{r=2}^k|f_{\theta (2^{r+1}B)}-f_{\theta (2^rB)}|\Big)\\ &\leq Ck\omega_\mathfrak K(\theta (2^{k+1}B))\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak K)},\end{aligned}$$ where $C$ does not depend on $k$. Thus, for every $x\in B$, $$\begin{aligned} \label{f2Jr} \int_{(\theta (4B))^{\rm c}}|f_2(y)|J_r(x,y)d\omega_\mathfrak{K}(y) & \leq C\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})}\sum_{k=2}^\infty \frac{k}{2^k}\leq C\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})}.\end{aligned}$$ We get $$\frac{1}{\omega_\mathfrak{K}(B)}\int_{B}V_\sigma (\{T_{t,m}^\mathfrak K (f_2)(x)- T_{t,m}^\mathfrak K (f_2)(x_1)\}_{t>0})d\omega_\mathfrak{K}(x)\leq C\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})}.$$ By putting together the above estimates we conclude that $V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f)\in {\rm BMO}(\mathbb R^d,\omega_\mathfrak{K})$ and $$\|V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f)\|_{{\rm BMO}(\mathbb R^d,\omega_\mathfrak{K})}\leq C\|f\|_{{\rm BMO}^\rho(\mathbb R^d,\omega_\mathfrak{K})}.$$ Our next aim is to show that $V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})$ is bounded from ${\rm BMO}^\rho(\mathbb R^d,\omega_\mathfrak{K})$ into ${\rm BLO}(\mathbb R^d,\omega _\mathfrak K)$. Suppose that $f \in {\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})$ such that $V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})(f)(x) <\infty$, for every $x\in \mathbb R^d \setminus A$, where $A$ has $\omega_\mathfrak{K}$-measure (equivalently, Lebesgue measure) zero. Our objective is to show that there exists $C>0$ such that, for every Euclidean ball $B$, $$\int_B(V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f)(x)-\mathop{\mathrm{\operatorname{ess\,inf}}}_{y\in B}V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f)(y))d\omega_\mathfrak K (x)\leq C\omega_\mathfrak K(B)\|f\|_{{\rm BMO}^\rho(\mathbb R^d,\omega_\mathfrak K)}.$$ Note that ${\rm essinf}_{y \in B} V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})(f)(y) < \infty$ because $V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})(f)(x) <\infty$ for almost all $x\in \mathbb R^d$. We are going to see that, for every $x\in \mathbb R^d$, the function $t\longrightarrow T_{t,m}^\mathfrak K (f)(x)$ is continuous in $(0,\infty)$. According to [@JiLi3 Proposition 7.3] we have that, for every $x\in \mathbb R^d$ and $\delta >0$, $$\label{A1} \int_{\mathbb R^d}\frac{|f(y)-f_{\theta (B(x,\delta))}|}{(\delta +\rho (x,y))\omega_\mathfrak K(B(x,\delta +\rho (x,y)))}d\omega _\mathfrak K(y)\leq \frac{C}{\delta}\|f\|_{{\rm BMO} ^\rho (\mathbb R^d, \omega _\mathfrak K)}.$$ Let $x\in \mathbb R^d$. 
By using [\[1.1\]](#1.1){reference-type="eqref" reference="1.1"} we get $$\begin{aligned} \label{A2} \int_{\mathbb R^d}\frac{d\omega _\mathfrak K(y)}{(\delta +\rho (x,y))\omega_\mathfrak K(B(x,\delta +\rho (x,y)))}&\nonumber\\ &\hspace{-4cm}=\Big(\int_{B(x,1)}+\sum_{k=1}^\infty \int_{B(x,2^k)\setminus B(x,2^{k-1})}\Big)\frac{d\omega_\mathfrak K(y)}{(\delta +\rho (x,y))\omega _\mathfrak K(B(x,\delta +\rho (x,y)))}\nonumber\\ &\hspace{-4cm}\leq C\sum_{k=0}^\infty \frac{1}{\delta +2^k}\frac{\omega _\mathfrak K(B(x,2^k))}{\omega _\mathfrak K(B(x,\delta +2^{k-1}))}\leq C\sum_{k=0}^\infty \frac{1}{\delta +2^k}\Big(\frac{2^k}{\delta +2^k}\Big)^d\leq C\sum_{k=0}^\infty \frac{1}{2^k}\leq C,\quad \delta >0.\end{aligned}$$ Let $0<a<b<\infty$. By [\[A1\]](#A1){reference-type="eqref" reference="A1"} and [\[A2\]](#A2){reference-type="eqref" reference="A2"}, there exists $h\in L^1(\mathbb R^d,\omega _\mathfrak K)$ such that $$\frac{|f(y)|}{(\delta +\rho (x,y))\omega_\mathfrak K(B(x,\delta +\rho (x,y)))}\leq h(y),\quad y\in \mathbb R^d\mbox{ and }\delta \in [a,b].$$ From [\[1.1\]](#1.1){reference-type="eqref" reference="1.1"} and [\[1.5\]](#1.5){reference-type="eqref" reference="1.5"} we deduce that $$\begin{aligned} |t^m\partial _t^mT_t^\mathfrak K(x,y)|&\leq C\frac{e^{-c\frac{\rho (x,y)^2}{t}}}{\omega_\mathfrak K(B(x,\sqrt{t}))}\leq \frac{C}{\omega_\mathfrak K(B(x,\sqrt{t}+\rho (x,y)))}\Big(\frac{\sqrt{t}+\rho (x,y)}{\sqrt{t}}\Big)^D\Big(\frac{\sqrt{t}}{\sqrt{t}+\rho (x,y)}\Big)^{D+1}\\ &\leq C\frac{\sqrt{t}}{(\sqrt{t}+\rho (x,y))\omega_\mathfrak K(B(x,\sqrt{t}+\rho (x,y)))},\quad x,y\in \mathbb R^d\mbox{ and }t>0.\end{aligned}$$ By using the dominated convergence theorem we conclude that the function $t\longrightarrow T_{t,m}^\mathfrak K (f)(x)$ is continuous in $(0,\infty)$. Let us fix $x_0 \in \mathbb R^d$ and $r_0 >0$, denote $B=B(x_0,r_0)$ and write the following decomposition $f= (f-f_{\theta (4B)}) \chi_{\theta (4B)}+ (f-f_{\theta (4B)})\chi_{(\theta (4B))^c}+ f_{\theta (4B)}=f_1+f_2+f_3$. Let $\varepsilon >0$. For every $x\in B\setminus A$ there exist $n=n(x) \in \mathbb N$ and $\{t_j=t_j(x)\}_{j=1}^n \subset \mathbb Q$ such that $0<t_n<...<t_1$ and $$\begin{aligned} V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f)(x) < \Big(\sum_{j=1}^{n-1}|T_{t,m}^\mathfrak K (f)(x)_{|t=t_j} - T_{t,m}^\mathfrak K (f)(x)_{|t=t_{j+1}}|^\sigma\Big)^{1/\sigma}+\varepsilon.\end{aligned}$$ Note that we cannot assure that $\{t_j\}_{j=1}^n$ can be selected in a unique way. By proceeding as in [@BdL p.
39] we see that for every $x\in B$ we can choose $\{t_j(x)\}_{j=1}^{n(x)}$ such that the function $\mathbb V_{\sigma, n}(f)$ is measurable, where $$\mathbb V_{\sigma,n} (h)(x):=\Big(\sum_{j=1}^{n(x)-1} |T_{t,m} ^\mathfrak K (h)(x)_{|t=t_j(x)} - T_{t,m}^\mathfrak K (h)(x)_{|t=t_{j+1}(x)}|^\sigma\Big)^{1/\sigma}.$$ Thus, for every $x\in B\setminus A$ we can write $$V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f)(x)-\mathop{\mathrm{\operatorname{ess\,inf}}}_{z\in B}V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f)(z) \leq \mathbb V_{\sigma,n}(f)(x)-\mathop{\mathrm{\operatorname{ess\,inf}}}_{z\in B}V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f)(z)+\varepsilon.$$ We claim that $$\begin{aligned} \label{claim} \mathbb V_{\sigma,n} (f)(x)-\mathop{\mathrm{\operatorname{ess\,inf}}}_{z\in B}V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f)(z)&\leq V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f_1)(x)+\int_0^{4r_0^2}|\partial _tT_{t,m}^\mathfrak K(f_2)(x)|dt\nonumber\\ &\hspace{-3cm}+\sup_{u,z\in B}\int_{4r_0^2}^\infty|\partial _t[T_{t,m}^\mathfrak K (f)(u)-T_{t,m}^\mathfrak K (f)(z)]|dt,\quad x\in B\setminus A.\end{aligned}$$ From this estimate we can proceed as follows to finish the proof. By taking into account that $V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})$ is bounded on $L^2(\mathbb R^d,\omega_\mathfrak{K})$ and [@JiLi3 Proposition 7.3] we get $$\int_{B} V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})(f_1)(x) d\omega_\mathfrak{K}(x)\leq C\omega_\mathfrak{K}(B)^{1/2}\|f_1\|_{L^2(\mathbb R^d,\omega_\mathfrak K)} \leq C\omega_\mathfrak K (B)\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})}.$$ On the other hand, note that $\partial _tT_{t,m}^\mathfrak K =t^{-1}(mT_{t,m}^\mathfrak K+T_{t,m+1}^\mathfrak K)$. Then, $$\begin{aligned} \int_0^{4r_0^2}|\partial _tT_{t,m}^\mathfrak K(f_2)(x)|dt&\leq \int_0^{4r_0^2} \big(m|T_{t,m}^\mathfrak K(f_2)(x)|+|T_{t,m+1}^\mathfrak K(f_2)(x)|\big)\frac{dt}{t},\quad x\in B.\end{aligned}$$ Let $r\in \mathbb N$, $r\geq 1$. By using [\[c1\]](#c1){reference-type="eqref" reference="c1"} and proceeding as in estimates [\[Jr1\]](#Jr1){reference-type="eqref" reference="Jr1"} and [\[f2Jr\]](#f2Jr){reference-type="eqref" reference="f2Jr"} we obtain $$\begin{aligned} \int_0^{4r_0^2}|T_{t,r}^\mathfrak K(f_2)(x)|\frac{dt}{t}&\leq \int_{(\theta (4B))^{\rm c}} |f_2(y)|\int_0^{4r_0^2} \left|t^r\partial_t^rT_t^\mathfrak K (x,y)\right|\frac{dt}{t}d\omega_\mathfrak K(y)\\ &\leq C\int_{(\theta(4B))^{\rm c}}|f_2(y)|\int_0^{4r_0^2} \frac{e^{-c\frac{\rho(x,y)^2}{t}}}{\omega_\mathfrak{K}(B(x,\rho(x,y)))}\frac{dt}{t}d\omega_\mathfrak K(y)\\ &\leq C\|f\|_{{\rm BMO}^\rho(\mathbb R^d,\omega_\mathfrak K)},\quad x\in B,\end{aligned}$$ and, consequently, $$\int_{B}\int_0^{4r_0^2}|\partial _t(T_{t,m}^\mathfrak K(f_2)(x))|dtd\omega_\mathfrak{K}(x)\leq C\omega_\mathfrak K (B)\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})}.$$ Consider now $u,z\in B$. As before, we can write $$\int_{4r_0^2}^\infty |\partial _t[T_{t,m}^\mathfrak K (f)(u)-T_{t,m}^\mathfrak K (f)(z)]|dt\leq \int_{4r_0^2}^\infty \big[m|T_{t,m}^\mathfrak K (f)(u)-T_{t,m}^\mathfrak K (f)(z)|+|T_{t,m+1}^\mathfrak K (f)(u)-T_{t,m+1}^\mathfrak K (f)(z)|\big]\frac{dt}{t}.$$ Let $r\in \mathbb N$, $r\geq 1$.
From [\[1.4\]](#1.4){reference-type="eqref" reference="1.4"} we can write, for each $t>0$, $$\begin{aligned} |T_{t,r}^\mathfrak K (f)(u)-T_{t,r}^\mathfrak K (f)(z)|&\leq |T_{t,r}^\mathfrak K(f_1)(u)|+|T_{t,r}^\mathfrak K(f_1)(z)|+|T_{t,r}^\mathfrak K(f_2)(u)-T_{t,r}^\mathfrak K(f_2)(z)|.\end{aligned}$$ By taking into account [\[c2\]](#c2){reference-type="eqref" reference="c2"} and arguing as in the estimates [\[Jr2\]](#Jr2){reference-type="eqref" reference="Jr2"} and [\[f2Jr\]](#f2Jr){reference-type="eqref" reference="f2Jr"} it follows that $$\label{Tf2} \int_{4r_0^2}^\infty |T_{t,r}^\mathfrak K(f_2)(u)-T_{t,r}^\mathfrak K(f_2)(z)|\frac{dt}{t}\leq C \|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})}.$$ Also, from [\[1.1\]](#1.1){reference-type="eqref" reference="1.1"} and [\[1.5\]](#1.5){reference-type="eqref" reference="1.5"} we get $$\begin{aligned} \label{Tf1} \int_{4r_0^2}^\infty |T_{t,r}^\mathfrak K(f_1)(x)|\frac{dt}{t}& \leq C\int_{\theta (4B)}|f(y)- f_{\theta (4B)}|\int_{4r_0^2}^\infty \frac{e^{-c\frac{\rho(x,y)^2}{t}}}{\omega_\mathfrak{K}(B(x,\sqrt{t}))} \frac{dt}{t}\,d\omega_\mathfrak{K}(y)\nonumber\\ &\leq \frac{C}{\omega_\mathfrak{K}(B(x,2r_0))}\int_{\theta (4B)}|f(y)- f_{\theta (4B)}|\int_{4r_0^2}^\infty \frac{\omega_\mathfrak{K}(B(x,2r_0))}{\omega_\mathfrak{K}(B(x,\sqrt{t}))} \frac{dt}{t}d\omega_\mathfrak{K}(y)\nonumber\\ &\leq C\frac{r_0^d}{\omega_\mathfrak{K}(B)}\int_{\theta (4B)}|f(y)- f_{\theta (4B)}|d\omega_\mathfrak{K}(y)\int_{4r_0^2}^\infty\frac{dt}{t^{1+\frac{d}{2}}} \nonumber\\ &\leq C \frac{1}{\omega_\mathfrak{K}(B)}\int_{\theta(4B)} |f(y)- f_{\theta (4B)}|d\omega_\mathfrak{K}(y) \leq C\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega _\mathfrak K)},\quad x\in B.\end{aligned}$$ Then, $$\int_B\sup_{u,z\in B}\int_{4r_0^2}^\infty|\partial _t(T_{t,m}^\mathfrak K(f)(u)-T_{t,m}^\mathfrak K(f)(z))|dtd\omega_\mathfrak K(x)\leq C\omega_\mathfrak K(B)\|f\|_{{\rm BMO}^\rho(\mathbb R^d,\omega_\mathfrak K)}.$$ By considering all these estimates and the claim [\[claim\]](#claim){reference-type="eqref" reference="claim"} we can conclude that $$\frac{1}{\omega_\mathfrak K(B)}\int_B\big(V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f)(x)-\mathop{\mathrm{\operatorname{ess\,inf}}}_{z\in B}V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f)(z)\big)d\omega_\mathfrak K(x)\leq C\|f\|_{{\rm BMO}^\rho(\mathbb R^d,\omega_\mathfrak K)}+\varepsilon.$$ The arbitrariness of $\varepsilon$ leads to $$\frac{1}{\omega_\mathfrak{K}(B)} \int_{B} (V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})(f)(x) - \mathop{\mathrm{\operatorname{ess\,inf}}}_{z\in B} V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})(f)(z))d\omega_\mathfrak{K}(x)\leq C\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})}.$$ Then, $\|V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})(f)\|_{{\rm BLO}(\mathbb R^d,\omega_\mathfrak{K})} \leq C\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})}$. In order to complete the proof it only remains to verify that the claim [\[claim\]](#claim){reference-type="eqref" reference="claim"} is true. Define the following sets: $B_1=\{x\in B\setminus A: t_1(x)<4r_0^2\}$, $B_2=\{x\in B\setminus A: 4r_0^2 \leq t_{n(x)}(x)\}$ and $B_3=\{ x\in B \setminus A: 4r_0^2\in (t_{n(x)}(x),t_1(x)]\}$. Suppose first that $x\in B_1$.
Again by [\[1.4\]](#1.4){reference-type="eqref" reference="1.4"} we can write, $$\begin{aligned} \label{B1} \mathbb V_{\sigma,n} (f)(x)-\mathop{\mathrm{\operatorname{ess\,inf}}}_{z\in B}V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f)(z)&\leq \mathbb V_{\sigma,n}(f_1)(x)+\mathbb V_{\sigma,n}(f_2)(x)\nonumber\\ &\hspace{-4cm}\leq V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})(f_1)(x)+\Big(\sum_{j=1}^{n(x)-1}\Big|\int_{t_{j+1}(x)}^{t_j(x)}\partial _tT_{t,m}^\mathfrak K(f_2)(x)dt\Big|^\sigma \Big)^{1/\sigma}\nonumber\\ &\hspace{-4cm}\leq V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})(f_1)(x)+\int_0^{4r_0^2}|\partial _tT_{t,m}^\mathfrak K(f_2)(x)|dt.\end{aligned}$$ Consider now $x\in B_2$. In this case we have that $$\begin{aligned} \label{B2} \mathbb V_{\sigma,n} (f)(x)-\mathop{\mathrm{\operatorname{ess\,inf}}}_{z\in B}V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f)(z)&\nonumber\\ &\hspace{-4.5cm}\leq \sup_{\substack{{u\in B_2}\\z \in B}} \Big(\sum_{j=1}^{n(u)-1}\big|(T_{t,m}^\mathfrak K (f)(u)-T_{t,m}^\mathfrak K (f)(z))_{|t=t_j(u)}-(T_{t,m}^\mathfrak K (f)(u)-T_{t,m}^\mathfrak K (f)(z))_{|t=t_{j+1}(u)}\big|^\sigma\Big)^{\frac{1}{\sigma}} \nonumber\\ &\hspace{-4.5cm}\leq \sup_{\substack{{u\in B_2}\\z \in B}}\int_{4r_0^2}^\infty |\partial _t[T_{t,m}^\mathfrak K(f)(u)-T_{t,m}^\mathfrak K(f)(z)]|dt\leq \sup_{u,z\in B}\int_{4r_0^2}^\infty |\partial _t[T_{t,m}^\mathfrak K (f)(u)-T_{t,m}^\mathfrak K (f)(z)]|dt.\end{aligned}$$ Finally, for every $x\in B_3$ let us consider $j_0(x) \in \{2, \ldots,n(x)\}$ such that $t_{j_0(x)}(x) < 4r_0^2 \leq t_{j_0(x)-1}(x)$. Fix $x\in B_3$. It follows that $$\begin{aligned} \mathbb V_{\sigma,n}(f)(x)&\leq \Big(\sum_{j=1}^{j_0(x)-2}|T_{t,m}^\mathfrak K (f)(x)_{|t=t_j(x)}-T_{t,m}^\mathfrak K (f)(x)_{|t=t_{j+1}(x)}|^\sigma \\ &\qquad +|T_{t,m}^\mathfrak K (f)(x)_{|t=t_{j_0(x)-1}(x)}-T_{t,m}^\mathfrak K (f)(x)_{|t=4r_0^2}|^\sigma\Big)^{1/\sigma}\\ &\quad +\Big(\sum_{j=j_0(x)}^{n(x)-1}|T_{t,m}^\mathfrak K (f)(x)_{|t=t_j(x)}-T_{t,m}^\mathfrak K (f)(x)_{|t=t_{j+1}(x)}|^\sigma\\ &\qquad +|T_{t,m}^\mathfrak K (f)(x)_{|t=4r_0^2}-T_{t,m}^\mathfrak K (f)(x)_{|t=t_{j_0(x)}(x)}|^\sigma\Big)^{1/\sigma}=:\mathbb V_{\sigma,n}^1(f)(x)+\mathbb V_{\sigma,n}^2(f)(x).\end{aligned}$$ Observe that $$\begin{aligned} \mathbb V_{\sigma,n}^1 (f)(x)-\mathop{\mathrm{\operatorname{ess\,inf}}}_{z\in B}V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f)(z)&\\ &\hspace{-4.5cm}\leq \sup_{\substack{{u\in B_3}\\z \in B}} \Big(\sum_{j=1}^{j_0(u)-2}\big|(T_{t,m}^\mathfrak K (f)(u)-T_{t,m}^\mathfrak K (f)(z))_{|t=t_j(u)}-(T_{t,m}^\mathfrak K (f)(u)-T_{t,m}^\mathfrak K (f)(z))_{|t=t_{j+1}(u)}\big|^\sigma\\ &\hspace{-3cm} +\big|(T_{t,m}^\mathfrak K (f)(u)-T_{t,m}^\mathfrak K (f)(z))_{|t=t_{j_0(u)-1}(u)}-(T_{t,m}^\mathfrak K (f)(u)-T_{t,m}^\mathfrak K (f)(z))_{|t=4r_0^2}\big|^\sigma\Big)^{1/\sigma}\\ &\hspace{-4.5cm}\leq \sup_{\substack{{u\in B_3}\\z \in B}}\int_{4r_0^2}^\infty \big|\partial_t[T_{t,m}^\mathfrak K (f)(u) - T_{t,m}^\mathfrak K (f)(z)]\big|dt.\end{aligned}$$ On the other hand, $$\mathbb V_{\sigma,n}^2(f)(x)\leq \mathbb V_{\sigma,n}^2(f_1)(x)+\mathbb V_{\sigma,n}^2(f_2)(x)\leq V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f_1)(x)+\int_0^{4r_0^2}|\partial _tT_{t,m}^\mathfrak K(f_2)(x)|dt.$$ We conclude that $$\begin{aligned} \label{B3} \mathbb V_{\sigma,n} (f)(x)-\mathop{\mathrm{\operatorname{ess\,inf}}}_{z\in B}V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f)(z)&\leq V_\sigma (\{T_{t,m}^\mathfrak K\}_{t>0})(f_1)(x)+\int_0^{4r_0^2}|\partial _t(T_{t,m}^\mathfrak K(f_2)(x))|dt\nonumber\\ &\hspace{-3cm}+\sup_{u,z\in
B}\int_{4r_0^2}^\infty|\partial _t[T_{t,m}^\mathfrak K (f)(u)-T_{t,m}^\mathfrak K (f)(z)]|dt,\quad x\in B_3.\end{aligned}$$ Estimates [\[B1\]](#B1){reference-type="eqref" reference="B1"}, [\[B2\]](#B2){reference-type="eqref" reference="B2"} and [\[B3\]](#B3){reference-type="eqref" reference="B3"} lead to [\[claim\]](#claim){reference-type="eqref" reference="claim"} and the proof is finished.

# Proof of Theorem [Theorem 2](#Th1.2){reference-type="ref" reference="Th1.2"} {#proof-of-theorem-th1.2}

By proceeding as in the proofs of [@Li Theorems 1.1 and 1.2] we can see that $g_m$ is bounded from $L^1(\mathbb R^d,\omega_\mathfrak{K})$ into $L^{1,\infty}(\mathbb R^d,\omega_\mathfrak{K})$. Let us establish the boundedness on $H^1(\Delta _\mathfrak K)$.

**Proposition 5**. *Let $m\in \mathbb N$, $m\geq 1$. The Littlewood-Paley function $g_m$ defines a bounded operator from $H^1(\Delta_\mathfrak K )$ into $L^1(\mathbb R^d,\omega_\mathfrak{K})$.*

*Proof.* It is sufficient to see that there exists $C>0$ such that $$\|g_m (a)\|_{L^1(\mathbb R^d, \omega_\mathfrak K)} \leq C,$$ for every $(1,2,\Delta_\mathfrak K ,1)$-atom $a$. Suppose that $a$ is a $(1,2,\Delta_\mathfrak K ,1)$-atom. For a certain $b \in D(\Delta_\mathfrak K )$ and a ball $B=B(x_0,r_0)$ we have that 1. $a= \Delta_\mathfrak K b$; 2. ${\rm supp }\,(\Delta_\mathfrak K ^\ell b)\subset \theta(B)$, $\ell=0,1$; 3. $\|(r^2_0\Delta_\mathfrak K )^\ell b\|_{L^2(\mathbb R^d,\omega_\mathfrak K)} \leq r^2_0\omega_\mathfrak{K}(B)^{-1/2}$, $\ell=0,1$. We can write $$\|g_m (a)\|_{L^1(\mathbb R^d, \omega_\mathfrak K)} =\left(\int_{\theta(4B)} + \int_{(\theta(4B))^c}\right)g_m (a)(x)d\omega_\mathfrak{K}(x)= I_1(a)+I_2(a).$$ Since $g_m$ is bounded on $L^2(\mathbb R^d,\omega_\mathfrak{K})$ we get $$I_1(a) \leq \|g_m(a)\|_{L^2(\mathbb R^d,\omega_\mathfrak K)}\omega_\mathfrak{K}(\theta(4B))^{1/2} \leq C\|a\|_{L^2(\mathbb R^d,\omega_\mathfrak K)} \omega_\mathfrak{K}(B)^{1/2} \leq C,$$ where we have used [\[1.1\]](#1.1){reference-type="eqref" reference="1.1"}, [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"} and $(iii)$ for $\ell=1$. Note that $C$ does not depend on $a$. On the other hand, since $a=\Delta _\mathfrak K b$, and hence $T_t^\mathfrak K(a)=\partial _tT_t^\mathfrak K(b)$, $t>0$, by Minkowski's inequality we have that $$\begin{aligned} g_m (a)(x) &=\big\|t^m\partial_t^{m+1}T_t^\mathfrak K (b)(x)\big\|_{L^2((0,\infty),\frac{dt}{t})}\\ &\leq \int_{\theta(B)}|b(y)|\big\|t^m\partial_t^{m+1}T_t^\mathfrak K (x,y)\big\|_{L^2((0,\infty),\frac{dt}{t})}d\omega_\mathfrak K(y),\quad x\in \mathbb R^d.\end{aligned}$$ Observe that if $x\in (\theta (4B))^{\rm c}$ and $y\in \theta (B)$ then $\frac{3}{4}\rho (x,x_0)\leq \rho (x,y)\leq \frac{5}{4}\rho (x_0,x)$, and moreover, according to [\[1.1\]](#1.1){reference-type="eqref" reference="1.1"} and [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"} it follows that $\omega_\mathfrak K(B(y,\rho (x,y)))\sim \omega_\mathfrak K(B(x_0,\rho (x_0,x)))$.
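Indeed, since $y\in \theta (B)$ and $x\in (\theta (4B))^{\rm c}$ we have $\rho (x_0,y)<r_0\leq \frac{1}{4}\rho (x,x_0)$, and the triangle inequality for $\rho$ gives $$\rho (x,y)\geq \rho (x,x_0)-\rho (x_0,y)\geq \frac{3}{4}\rho (x,x_0)\quad \mbox{ and }\quad \rho (x,y)\leq \rho (x,x_0)+\rho (x_0,y)\leq \frac{5}{4}\rho (x,x_0).$$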
Thus, by using [\[c1\]](#c1){reference-type="eqref" reference="c1"} we obtain $$\begin{aligned} \big\|t^m\partial_t^{m+1}T_t^\mathfrak K (x,y)\big\|_{L^2((0,\infty),\frac{dt}{t})}&\leq \frac{C}{\omega_\mathfrak K(B(y,\rho (x,y)))}\left(\int_0^\infty \frac{e^{-c\frac{\rho (x,y)^2}{t}}}{t^3}dt\right)^{1/2}\\ &\hspace{-3cm}\leq \frac{C}{\rho (x,y)^2\omega_\mathfrak K(B(y,\rho (x,y)))}\leq \frac{C}{\rho (x,x_0)^2\omega_\mathfrak K(B(x_0,\rho (x,x_0)))},\quad x\in (\theta (4B))^{\rm c},\,y\in \theta (B).\end{aligned}$$ Again from [\[1.1\]](#1.1){reference-type="eqref" reference="1.1"} and [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"}, $$\begin{aligned} \int_{(\theta (4B))^{\rm c}}\frac{d\omega_\mathfrak K(x)}{\rho (x,x_0)^2\omega_\mathfrak K(B(x_0,\rho (x,x_0)))}&\leq \sum_{k=2}^\infty \int_{\theta (2^{k+1}B)\setminus \theta (2^kB)}\frac{d\omega_\mathfrak K(x)}{\rho (x,x_0)^2\omega_\mathfrak K(B(x_0,\rho (x,x_0)))}\\ &\leq \frac{C}{r_0^2}\sum_{k=2}^\infty \frac{\omega_\mathfrak K(\theta (2^{k+1}B))}{2^{2k}\omega_\mathfrak K(2^kB)}\leq \frac{C}{r_0^2}.\end{aligned}$$ We deduce that $$I_2(a)\leq \frac{C}{r_0^2}\int_{\theta (B)}|b(y)|d\omega _\mathfrak K(y)\leq \frac{C}{r_0^2}\|b\|_{L^2(\mathbb R^d,\omega _\mathfrak K)}\omega_\mathfrak K(\theta (B))^{1/2}\leq C,$$ and conclude the proof. ◻ The behaviour of $g_m$ on BMO is given in the next result. **Proposition 6**. *Let $m\in \mathbb N$, $m\geq 1$. Suppose that $f\in {\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})$ verifies that $g_m (f)(x) < \infty$ for almost all $x \in \mathbb R^d$. Then, $g_m (f) \in {\rm BLO}(\mathbb R^d,\omega_\mathfrak{K})$ and $$\|g_m (f)\|_{{\rm BLO}(\mathbb R^d,\omega_\mathfrak{K})} \leq C \|f\|_{{\rm BMO}^\rho(\mathbb R^d,\omega_\mathfrak{K})},$$ where $C>0$ does not depend on $f$.* *Proof.* It is sufficient to see that $g_m (f)^2 \in {\rm BLO}(\mathbb R^d,\omega_\mathfrak{K})$ and $$\label{g3} \|g_m (f)^2\|_{{\rm BLO}(\mathbb R^d,\omega_\mathfrak{K})} \leq C\|f\|^2_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})},$$ where $C>0$ does not depend on $f$. Indeed, let $B$ be an Euclidean ball. Since $g_m (f)(x) <\infty$ for almost all $x \in B$, $\mathop{\mathrm{\operatorname{ess\,inf}}}_{z\in B} g_m (f)(z) < \infty$. Then, as in [@MY p. 31], we have that $$g_m (f)(x) - \mathop{\mathrm{\operatorname{ess\,inf}}}_{z\in B} g_m (f)(z) \leq \Big((g_m (f)(x))^2-\mathop{\mathrm{\operatorname{ess\,inf}}}_{z\in B} (g_m (f)(z))^2\Big)^{1/2},\quad x\in B.$$ By using Jensen's inequality we obtain $$\begin{aligned} \frac{1}{\omega_\mathfrak{K}(B)}\int_B \Big(g_m (f)(x)-\mathop{\mathrm{\operatorname{ess\,inf}}}_{z\in B} g_m (f)(z)\Big)d\omega_\mathfrak{K}(x)&\\ &\hspace{-5.5cm}\leq \frac{1}{\omega_\mathfrak{K}(B)}\int_B \Big((g_m (f)(x))^2-\mathop{\mathrm{\operatorname{ess\,inf}}}_{z\in B} (g_m (f)(z))^2\Big)^{1/2}d\omega_\mathfrak{K}(x)\\ &\hspace{-5.5cm}\leq \left(\frac{1}{\omega_\mathfrak{K}(B)}\int_B \Big((g_m (f)(x))^2-\mathop{\mathrm{\operatorname{ess\,inf}}}_{z\in B} (g_m (f)(z))^2\Big)d\omega_\mathfrak{K}(x)\right)^{\frac{1}{2}}.\end{aligned}$$ From ([\[g3\]](#g3){reference-type="ref" reference="g3"}) we deduce that $\|g_m (f)\|_{{\rm BLO}(\mathbb R^d,\omega_\mathfrak{K})} \leq C\|f\|_{{\rm BMO}^\rho (\mathbb R^d, \omega_\mathfrak{K})}$, with $C$ independent of $f$. We now prove [\[g3\]](#g3){reference-type="eqref" reference="g3"}. Let $B=B(x_0,r_0)$ where $x_0 \in \mathbb R^d$ and $r_0>0$. 
We define $$g_{m,0}(f)(x)=\big\|T_{t,m}^\mathfrak K (f)(x)\big\|_{L^2((0,4r_0^2),\frac{dt}{t})},\quad x\in \mathbb R^d,$$ and $$g_{m,\infty}(f)(x)=\big\|T_{t,m}^\mathfrak K (f)(x)\big\|_{L^2((4r_0^2,\infty),\frac{dt}{t})},\quad x\in \mathbb R^d,$$ and decompose $f$ as $f= (f-f_{\theta (4B)})\chi_{\theta (4B)} + (f - f_{\theta (4B)})\chi_{(\theta(4B))^{\rm c}}+ f_{\theta (4B)}=f_1+f_2+f_3$. Property [\[1.4\]](#1.4){reference-type="eqref" reference="1.4"} leads to $T_{t,m}^\mathfrak K(f_3)=0$ (note that $m\geq 1$), so we have that $$\begin{aligned} \label{e1} (g_m (f)(x))^2-\mathop{\mathrm{\operatorname{ess\,inf}}}_{z\in B}( g_m (f)(z))^2&\leq(g_{m,0}(f)(x))^2+(g_{m,\infty}(f)(x))^2 - \mathop{\mathrm{\operatorname{ess\,inf}}}_{z\in B}(g_{m,\infty}(f)(z))^2\nonumber\\ &\hspace{-3cm}\leq (g_{m,0}(f_1)(x))^2+(g_{m,0}(f_2)(x))^2+(g_{m,\infty}(f)(x))^2 - \mathop{\mathrm{\operatorname{ess\,inf}}}_{z\in B}(g_{m,\infty}(f)(z))^2,\quad x\in \mathbb R^d.\end{aligned}$$ Since $g_m$ is bounded on $L^2(\mathbb R^d,\omega_\mathfrak{K})$, by using [\[1.1\]](#1.1){reference-type="eqref" reference="1.1"} and [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"} we obtain $$\label{e2} \int_B(g_{m,0}(f_1)(x))^2d\omega_\mathfrak K(x) \leq C\int_{\theta (4B)} |f(x)-f_{\theta (4B)}|^2 d\omega_\mathfrak{K}(x)\leq C\omega_\mathfrak K(B)\|f\|^2_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})}.$$ On the other hand, by considering [\[c1\]](#c1){reference-type="eqref" reference="c1"} and arguing as in the estimation [\[f2Jr\]](#f2Jr){reference-type="eqref" reference="f2Jr"} we get $$\begin{aligned} \label{e3} \int_B(g_{m,0}(f_2)(x))^2d\omega_\mathfrak K(x)&=\int_0^{4r_0^2}\int_B|T_{t,m}^\mathfrak K(f_2)(x)|^2d\omega_\mathfrak K(x)\frac{dt}{t}\nonumber\\ &\leq C\int_0^{4r_0^2}\int_B\Big(\int_{(\theta (4B))^{\rm c}}|f_2(y)|\frac{e^{-c\frac{\rho(x,y)^2}{t}}}{\omega_\mathfrak K(B(x,\rho (x,y)))}d\omega _\mathfrak K(y)\Big)^2d\omega _\mathfrak K(x)\frac{dt}{t}\nonumber\\ &\leq C\int_0^{4r_0^2}\int_B\Big(\int_{(\theta (4B))^{\rm c}}\frac{|f_2(y)|}{\rho (x,y)\omega_\mathfrak K(B(x,\rho (x,y)))}d\omega _\mathfrak K(y)\Big)^2d\omega _\mathfrak K(x)dt\nonumber\\ &=Cr_0^2\int_B\Big(\int_{(\theta (4B))^{\rm c}}\frac{|f_2(y)|}{\rho (x_0,y)\omega_\mathfrak K(B(x_0,\rho (x_0,y)))}d\omega _\mathfrak K(y)\Big)^2d\omega _\mathfrak K(x)\nonumber\\ &\leq C\omega_\mathfrak K(B)\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak K)}^2.\end{aligned}$$ Now observe that $$\int_B \big[(g_{m,\infty}(f)(x))^2 - \mathop{\mathrm{\operatorname{ess\,inf}}}_{z\in B}(g_{m,\infty}(f)(z))^2\big]d\omega_\mathfrak K(x)\leq \omega_\mathfrak K(B)\sup_{u,z\in B}|(g_{m,\infty}(f)(u))^2-(g_{m,\infty }(f)(z))^2|.$$ We are going to see that $|(g_{m,\infty}(f)(u))^2-(g_{m,\infty }(f)(z))^2|\leq C\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})}^2$, $u,z\in B$. Thus we obtain $$\label{e4} \int_B \big[(g_{m,\infty}(f)(x))^2 - \mathop{\mathrm{\operatorname{ess\,inf}}}_{z\in B}(g_{m,\infty}(f)(z))^2\big]d\omega_\mathfrak K(x)\leq C\omega_\mathfrak K(B)\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak K)}^2.$$ Let $u,z\in B$. 
We have that $$\begin{aligned} |(g_{m,\infty}(f)(u))^2-(g_{m,\infty }(f)(z))^2|&=\left|\int_{4r_0^2}^\infty (|T_{t,m}^\mathfrak K (f)(u)|^2-|T_{t,m}^\mathfrak K (f)(z)|^2)\frac{dt}{t}\right|\\ &\leq \int_{4r_0^2}^\infty |T_{t,m}^\mathfrak K (f)(u)-T_{t,m}^\mathfrak K (f)(z)|(|T_{t,m}^\mathfrak K (f)(u)|+|T_{t,m}^\mathfrak K (f)(z)|)\frac{dt}{t}.\end{aligned}$$ Let us see that for each $x\in B$ and $t>4r_0^2$, $|T_{t,m}^\mathfrak K (f)(x)|\leq C\|f\|_{{\rm BMO}^\rho(\mathbb R^d,\omega _\mathfrak K)}$. Consider $x\in B$ and $t>4r_0^2$ and choose $k_0\in \mathbb N$ ($k_0\geq 2$) such that $2^{k_0-1}r_0< \sqrt{t}\leq 2^{k_0}r_0$. By using that $\int_{\mathbb R^d}s^m\partial _s^mT_s^\mathfrak K(x,y)d\omega _\mathfrak K(y)=0$, $s>0$, and the estimates [\[1.5\]](#1.5){reference-type="eqref" reference="1.5"} and [\[c1\]](#c1){reference-type="eqref" reference="c1"} we get $$\begin{aligned} |T_{t,m}^\mathfrak K (f)(x)|&=|T_{t,m}^\mathfrak K(f-f_{\theta (2^{k_0}B)})(x)|\leq C\left(\int_{\theta (2^{k_0}B)}\frac{|f(y)-f_{\theta(2^{k_0}B)}|}{\omega_\mathfrak K(B(x,\sqrt{t}))}d\omega_\mathfrak K(y)\right.\\ &\quad +\left.\sum_{k=k_0}^\infty\int_{\theta (2^{k+1}B) \setminus \theta (2^kB)}\frac{e^{-c\frac{\rho(x,y)^2}{t}}}{\omega_\mathfrak K(B(x,\rho (x,y)))}|f(y)-f_{\theta (2^{k_0}B) }|d\omega _\mathfrak K(y)\right). \end{aligned}$$ Since $\sqrt{t}\sim 2^{k_0}r_0$ and $x\in B$, from [\[1.1\]](#1.1){reference-type="eqref" reference="1.1"} and [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"} we get $$\int_{\theta (2^{k_0}B)}\!\!\!\frac{|f(y)-f_{\theta(2^{k_0}B)}|}{\omega_\mathfrak K(B(x,\sqrt{t}))}d\omega_\mathfrak K(y)\leq \frac{C}{\omega_\mathfrak K(\theta (2^{k_0}B))}\int_{\theta (2^{k_0}B)}\!\!\!|f(y)-f_{\theta(2^{k_0}B)}|d\omega_\mathfrak K(y)\leq C\|f\|_{{\rm BMO}^\rho(\mathbb R^d,\omega_\mathfrak K)}.$$ On the other hand, when $y\in \theta (2^{k+1}B) \setminus \theta (2^kB)$, with $k\geq k_0$, we have that $\rho (x,y)^2/t\sim 2^{2(k-k_0)}$ and $\omega_\mathfrak K(B(x,\rho (x,y)))\sim \omega_\mathfrak K(B(x,2^kr_0))\sim\omega_\mathfrak K(2^kB)\sim\omega_\mathfrak K(\theta (2^kB))$. Thus, arguing as in [\[sumBMO\]](#sumBMO){reference-type="eqref" reference="sumBMO"} we obtain $$\begin{aligned} \sum_{k=k_0}^\infty \int_{\theta (2^{k+1}B) \setminus \theta (2^kB)}\frac{e^{-c\frac{\rho(x,y)^2}{t}}}{\omega_\mathfrak K(B(x,\rho (x,y)))}|f(y)-f_{\theta (2^{k_0}B) }|d\omega _\mathfrak K(y)& \\ &\hspace{-6.5cm}\leq C \sum_{k=k_0}^\infty \frac{e^{-c2^{2(k-k_0)}}}{\omega_\mathfrak K(\theta (2^kB))}\int_{\theta (2^{k+1}B)}|f(y)-f_{\theta (2^{k_0}B)}|d\omega _\mathfrak K(y)\leq C\|f\|_{{\rm BMO}^\rho(\mathbb R^d,\omega_\mathfrak K)}.\end{aligned}$$ By the above estimates and by proceeding as in [\[Tf2\]](#Tf2){reference-type="eqref" reference="Tf2"} and [\[Tf1\]](#Tf1){reference-type="eqref" reference="Tf1"} it follows that $$\begin{aligned} |(g_{m,\infty}(f)(u))^2-(g_{m,\infty }(f)(z))^2|&\leq C\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak K)}\int_{4r_0^2}^\infty |T_{t,m}^\mathfrak K (f)(u)-T_{t,m}^\mathfrak K (f)(z)|\frac{dt}{t}\\ &\leq C\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak K)}^2.\end{aligned}$$ By combining [\[e1\]](#e1){reference-type="eqref" reference="e1"}, [\[e2\]](#e2){reference-type="eqref" reference="e2"}, [\[e3\]](#e3){reference-type="eqref" reference="e3"} and [\[e4\]](#e4){reference-type="eqref" reference="e4"} we get [\[g3\]](#g3){reference-type="eqref" reference="g3"}. Thus, the proof is finished.
◻

# Proof of Theorem [Theorem 1](#Th1.1){reference-type="ref" reference="Th1.1"} {#proof-of-theorem-th1.1}

Since $\{T_t^\mathfrak K\}_{t>0}$ is a diffusion semigroup in $L^p(\mathbb R^d,\omega_\mathfrak{K})$, $1\leq p<\infty$, according to [@LeMX2 Corollary 4.2], the operator $T_{*,m}^\mathfrak K$ is bounded from $L^p(\mathbb R^d,\omega_\mathfrak{K})$ into itself, for every $1< p<\infty$. Let $f\in L^1(\mathbb R^d,\omega_\mathfrak{K})$. According to [\[1.1\]](#1.1){reference-type="eqref" reference="1.1"}, [\[1.5\]](#1.5){reference-type="eqref" reference="1.5"} and [\[c1\]](#c1){reference-type="eqref" reference="c1"} we get, for $x\in\mathbb R^d$ and $t>0$, $$\begin{aligned} |T_{t,m}^\mathfrak K (f)(x)|& \leq C\left(\int_{\theta (B(x,\sqrt{t}))}\frac{|f(y)|}{\omega_\mathfrak{K}(B(x,\sqrt{t}))}d\omega_\mathfrak{K}(y)\right. \\ & \quad +\left. \sum_{k=0}^\infty\int_{\theta(B(x,2^{k+1}\sqrt{t}))\setminus \theta(B(x,2^{k}\sqrt{t}))}\frac{e^{-c\frac{\rho(x,y)^2}{t}}}{\omega_\mathfrak{K}(B(x,\rho (x,y)))}|f(y)|d\omega_\mathfrak{K}(y)\right) \\ & \leq C \sum_{k=-1}^\infty \frac{e^{-c2^{2k}}}{\omega_\mathfrak{K}(\theta (B(x,2^{k+1}\sqrt{t})))}\int_{\theta(B(x,2^{k+1}\sqrt{t}))}|f(y)|d\omega_\mathfrak{K}(y).\end{aligned}$$ Then, $T_{*,m}^\mathfrak K (f)\leq C{\mathcal M}_\rho f$, where $$\mathcal M_\rho f(x)=\sup_{r>0}\frac{1}{\omega_\mathfrak K(\theta (B (x,r)))}\int_{\theta (B (x,r))}|f(y)|d\omega_\mathfrak K(y),\quad x\in \mathbb R^d.$$ As in [@ADH p. 2380] and considering that $\theta(B(x,r))=\cup_{g\in G}B(g(x),r)$, $x\in\mathbb R^d$ and $r>0$, we get $$T_{*,m}^\mathfrak K(f)(x)\leq C\sum_{g\in G}{\mathcal M}_{HL}f(g(x)),\quad x\in\mathbb R^d,$$ where $\mathcal M_{HL}$ denotes the Hardy-Littlewood maximal function on the space of homogeneous type $(\mathbb R^d, |\cdot|,\omega_\mathfrak K)$. We conclude that $T_{*,m}^\mathfrak K$ is bounded from $L^1(\mathbb R^d,\omega_\mathfrak{K})$ into $L^{1,\infty}(\mathbb R^d,\omega_\mathfrak{K})$. Let us now describe the behaviour of $T_{*,m}^\mathfrak K$ on $H^1(\Delta _\mathfrak K)$. By taking into account Theorem [Theorem 3](#Th1.3){reference-type="ref" reference="Th1.3"} and that $T_{*,m}^\mathfrak K (f)\leq V_\sigma(\{T_{t,m}^\mathfrak K\}_{t>0})(f)+|T_{t,m}^\mathfrak K (f)_{|t=1}|$, it is sufficient to prove that $\partial _t^mT_t^\mathfrak K\,_{|t=1}$ is bounded from $H^1(\Delta _\mathfrak K)$ into $L^1(\mathbb R^d,\omega_\mathfrak{K})$ to establish that $T_{*,m}^\mathfrak K$ also maps $H^1(\Delta _\mathfrak K)$ into $L^1(\mathbb R^d,\omega_\mathfrak{K})$. We write $Sf=\partial_t^mT_t^\mathfrak K (f)_{|t=1}$. Since $S$ is bounded from $L^1(\mathbb R^d,\omega_\mathfrak{K})$ into $L^{1,\infty}(\mathbb R^d,\omega_\mathfrak{K})$ (observe that $|Sf|=|T_{t,m}^\mathfrak K (f)_{|t=1}|\leq T_{*,m}^\mathfrak K (f)$), in order to prove that $S$ defines a bounded operator from $H^1(\Delta _\mathfrak K)$ into $L^1(\mathbb R^d,\omega_\mathfrak{K})$ we are going to see that there exists $C>0$ such that $$\|Sa\|_{L^1(\mathbb R^d, \omega_\mathfrak K)} \leq C,$$ for every $(1,2,\Delta_\mathfrak K ,1)$-atom $a$. Let $a$ be a $(1,2,\Delta_\mathfrak K ,1)$-atom. There exist $b\in D(\Delta_\mathfrak K )$ and a Euclidean ball $B=B(x_0,r_0)$ with $x_0\in\mathbb R^d$ and $r_0>0$ such that 1. $a=\Delta_\mathfrak K b;$ 2. $\mbox{supp} (\Delta_\mathfrak K ^\ell b)\subset \theta (B)$, $\ell=0,1$; 3. $\|(r^2_0\Delta_\mathfrak K )^\ell b\|_{L^2(\mathbb R^d,\omega_\mathfrak K)}\leq r_0^2\omega_\mathfrak{K}(B)^{-1/2}$, $\ell=0,1$.
We write $$\|Sa\|_{L^1(\mathbb R^d, \omega_\mathfrak K)} =\Big(\int_{\theta(4B)}+\int_{(\theta(4B))^{\rm c}}\Big)|Sa(x)|d\omega_\mathfrak{K}(x)=J_1+J_2.$$ Since $S$ is bounded on $L^2(\mathbb R^d,\omega_\mathfrak{K})$, by using (*iii*) and [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"} we get $$J_1\leq \|Sa\|_{L^2(\mathbb R^d,\omega_\mathfrak K)}\omega_\mathfrak K(\theta (4B))^{1/2}\leq C\|a\|_{L^2(\mathbb R^d,\omega_\mathfrak K)}\omega_\mathfrak{K}(B)^{1/2}\leq C.$$ On the other hand, since $|Sa|=|\Delta _\mathfrak K^{m+1}T_t^\mathfrak K(b)\,_{|t=1}|$, according to [@Li Lemmas 2.3 and 2.6] and proceeding as in [\[cLi\]](#cLi){reference-type="eqref" reference="cLi"} we obtain $$\begin{aligned} J_2&= \int_{\theta(B)}|b(y)|\int_{(\theta(4B))^{\rm c}}|\Delta_\mathfrak K^{m+1}T_t^\mathfrak K (x,y)|_{|t=1}d\omega_\mathfrak{K}(x) d\omega_\mathfrak{K}(y)\\ &\leq Ce^{-c r_0^2}\int_{\theta(B)}|b(y)|d\omega_\mathfrak{K}(y)\leq Ce^{-c r_0^2}\|b\|_{L^2(\mathbb R^d,\omega_\mathfrak K)}\omega_\mathfrak{K}(B)^{1/2}\leq Cr_0^2e^{-c r_0^2}\leq C.\end{aligned}$$ We conclude that $\|Sa\|_{L^1(\mathbb R^d, \omega_\mathfrak K)} \leq C$, where $C>0$ does not depend on $a$. Finally, suppose that $f\in {\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})$ such that $T_{*,m}^\mathfrak K (f)(x)<\infty$, for almost all $x\in\mathbb R^d$. We are going to see that $T_{*,m}^\mathfrak K (f)\in {\rm BLO}(\mathbb R^d,\omega_\mathfrak{K})$. Since $T_{*,m}^\mathfrak K (f)(x)<\infty$, for almost all $x\in\mathbb R^d$, ${\rm essinf}_{x\in B}T_{*,m}^\mathfrak K (f)(x)\in [0,\infty)$, for every Euclidean ball $B$ in $\mathbb R^d$. Let $B=B(x_0,r_0)$ be an Euclidean ball. We consider the operators $$T_{*,m}^{\mathfrak K,0}(f)=\sup_{0<t\leq 4r_0^2}|T_{t,m}^\mathfrak K (f)|\quad \mbox{ and }\quad T_{*,m}^{\mathfrak K,\infty}(f)=\sup_{t> 4r_0^2}|T_{t,m}^\mathfrak K (f)|,$$ and define the sets $B_0$ and $B_\infty$ as follows $$B_0=\big\{x\in B:T_{*,m}^{\mathfrak K,0}(f)(x)\geq T_{*,m}^{\mathfrak K,\infty}(f)(x)\big\}\quad \mbox{ and }\quad B_\infty=\big\{x\in B:T_{*,m}^{\mathfrak K,0}(f)(x)< T_{*,m}^{\mathfrak K,\infty}(f)(x)\big\}.$$ and write $f=(f-f_{\theta (4B)})\chi_{\theta (4B)}+(f-f_{\theta (4B)})\chi_{(\theta (4B))^{\rm c}}+f_{\theta (4B)}=f_1+f_2+f_3$. We have that $$\begin{aligned} \int_B [T_{*,m}^\mathfrak K (f)(x)-\mathop{\mathrm{\operatorname{ess\,inf}}}_{z\in B}T_{*,m}^\mathfrak K (f)(z)]d\omega_\mathfrak{K}(x)&\\ &\hspace{-5cm}\leq \int_{B_0} (T_{*,m}^{\mathfrak K,0}(f)(x)-\mathop{\mathrm{\operatorname{ess\,inf}}}_{z\in B}T_{4r_0^2,m}^\mathfrak K (f)(z))d\omega_\mathfrak{K}(x)+\int_{B_\infty} (T_{*,m}^{\mathfrak K,\infty} (f)(x)-\mathop{\mathrm{\operatorname{ess\,inf}}}_{z\in B}T_{*,m}^{\mathfrak K,\infty} (f)(z))d\omega_\mathfrak{K}(x)\\ &\hspace{-5cm} \leq C\left(\int_B (T_{*,m}^{\mathfrak K,0}(f_1)(x)d\omega_\mathfrak{K}(x)+ \int_B (T_{*,m}^{\mathfrak K,0}(f_2)(x)d\omega_\mathfrak{K}(x) \right. \\ &\hspace{-4cm}\quad + \omega_\mathfrak{K}(B) \sup_{x,z\in B}\sup_{0<t\leq 4r_0^2}|T_{t,m}^\mathfrak K(f_3)(x)-T_{4r_0^2,m}^\mathfrak K (f)(z)|\\ &\hspace{-4cm}\quad + \left. 
\omega_\mathfrak{K}(B) \sup_{x,z\in B}\sup_{t> 4r_0^2}|T_{t,m}^\mathfrak K (f)(x)-T_{t,m}^\mathfrak K (f)(z)|\right)=\sum_{j=1}^4 I_j.\end{aligned}$$ Since $T_{*,m}^\mathfrak K$ is bounded on $L^2(\mathbb R^d,\omega_\mathfrak{K})$ it follows that $$I_1 \leq \omega_\mathfrak{K}(B)^{1/2}\|T_{*,m}^{\mathfrak K,0}(f_1)\|_{L^2(\mathbb R^d,\omega_\mathfrak{K})}\leq C \omega_\mathfrak{K}(B)^{1/2}\|f_1\|_{L^2(\mathbb R^d,\omega_\mathfrak K)} \leq C\omega_\mathfrak{K}(B)\|f\|_{{\rm BMO}^\rho(\mathbb R^d,\omega_\mathfrak{K})}.$$ By using [\[1.1\]](#1.1){reference-type="eqref" reference="1.1"}, [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"} and [\[c1\]](#c1){reference-type="eqref" reference="c1"} and proceeding as in [\[f2Jr\]](#f2Jr){reference-type="eqref" reference="f2Jr"} we get, for every $x\in B$ and $0<t\leq 4r_0^2$, $$\begin{aligned} \label{tmf2} |T_{t,m}^\mathfrak K(f_2)(x)| & \leq C\int_{(\theta(4B))^{\rm c}}|f_2(y)|\frac{e^{-c\frac{\rho(x,y)^2}{t}}}{\omega_\mathfrak{K}(B(x,\rho(x,y)))}d\omega_\mathfrak{K}(y) \nonumber\\ & \leq C\sqrt{t}\int_{(\theta(4B))^{\rm c}}\frac{|f_2(y)|}{\rho(x,y)\omega_\mathfrak{K}(B(x,\rho(x,y)))}d\omega_\mathfrak{K}(y)\leq C\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})}.\end{aligned}$$ Then, $I_2\leq C\omega_\mathfrak K(B)\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})}$. Now observe that, by virtue of [\[1.4\]](#1.4){reference-type="eqref" reference="1.4"}, for every $x,z\in B$ and $0<t\leq 4r_0^2$, $$T_{4r_0^2,m}^{\mathfrak K} (f)(z)-T_{t,m}^\mathfrak K(f_3)(x)=T_{4r_0^2,m}^{\mathfrak K}(f-f_3)(z)=T_{4r_0^2,m}^{\mathfrak K}(f_1)(z)+T_{4r_0^2,m}^{\mathfrak K}(f_2)(z).$$ From ([\[1.1\]](#1.1){reference-type="ref" reference="1.1"}), [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"} and [\[1.5\]](#1.5){reference-type="eqref" reference="1.5"} it follows that $$\begin{aligned} |T_{4r_0^2,m}^{\mathfrak K} (f_1)(z)| &\leq C \int_{\theta(4B)}\frac{|f_1(y)|}{\omega_\mathfrak{K}(B(z,2r_0))}d\omega_\mathfrak{K}(y) \leq \frac{C}{\omega_\mathfrak{K}(\theta (4B))} \int_{\theta (4B)}|f(y)-f_{\theta (4B)}|d\omega_\mathfrak{K}(y)\\ & \leq C\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})},\quad z\in B,\end{aligned}$$ which, jointly [\[tmf2\]](#tmf2){reference-type="eqref" reference="tmf2"}, leads to $I_3\leq C\omega_\mathfrak K (B)\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak K)}$. Finally, we estimate $I_4$. Consider $x,z\in B$ and $t>4r_0^2$. According again to [\[1.4\]](#1.4){reference-type="eqref" reference="1.4"} we can write $$|T_{t,m}^\mathfrak K (f)(x)-T_{t,m}^\mathfrak K (f)(z)| \leq |T_{t,m}^\mathfrak K(f_1)(x)|+|T_{t,m}^\mathfrak K(f_1)(z)| + |T_{t,m}^\mathfrak K(f_2)(x)-T_{t,m}^\mathfrak K(f_2)(z)|.$$ Estimates [\[1.1\]](#1.1){reference-type="eqref" reference="1.1"}, [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"} and [\[1.5\]](#1.5){reference-type="eqref" reference="1.5"} imply that $$\begin{aligned} |T_{t,m}^\mathfrak K(f_1)(x)|&\leq C\int_{\theta(4B)}\frac{|f_1(y)|}{\omega_\mathfrak{K}(B(x,\sqrt{t}))}d\omega_\mathfrak{K}(y) \leq C \int_{\theta (4B)}\frac{|f_1(y)|}{\omega_\mathfrak{K}(B(x,2r_0))}d\omega_\mathfrak{K}(y)\\ &\leq C\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})},\end{aligned}$$ and in the same way $|T_{t,m}^\mathfrak K(f_1)(z)|\leq C\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak K)}$. 
On the other hand, by taking into account [\[c2\]](#c2){reference-type="eqref" reference="c2"} and arguing as in [\[f2Jr\]](#f2Jr){reference-type="eqref" reference="f2Jr"}, we get $$\begin{aligned} |T_{t,m}^\mathfrak K(f_2)(x)-T_{t,m}^\mathfrak K(f_2)(z)|&\leq C\frac{|x-z|}{\sqrt{t}}\int_{(\theta (4B))^{\rm c}}|f_2(y)|\frac{e^{-c\frac{\rho(x,y)^2}{t}}}{\omega_\mathfrak{K}(B(x,\rho (x,y)))}d\omega_\mathfrak{K}(y)\\ &\leq Cr_0\int_{(\theta (4B))^{\rm c}}\frac{|f_2(y)|}{\rho (x,y)\omega_\mathfrak{K}(B(x,\rho (x,y)))}d\omega_\mathfrak{K}(y)\\ &\leq C\|f\|_{{\rm BMO}^\rho(\mathbb R^d,\omega_\mathfrak{K})}.\end{aligned}$$ Then, $I_4\leq C\omega_\mathfrak K(B)\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega_\mathfrak{K})}$. By combining the above estimates we conclude that $$\frac{1}{\omega_\mathfrak{K}(B)}\int_B (T_{*,m}^\mathfrak K (f)(x)-\mathop{\mathrm{\operatorname{ess\,inf}}}_{z\in B}T_{*,m}^\mathfrak K (f)(z))d\omega_\mathfrak{K}(x)\leq C\|f\|_{{\rm BMO}^\rho(\mathbb R^d,\omega_\mathfrak{K})},$$ where $C>0$ does not depend on $B$. Thus, $\|T_{*,m}^\mathfrak K(f)\|_{{\rm BLO}(\mathbb R^d,\omega_\mathfrak K)}\leq C\|f\|_{{\rm BMO}^\rho (\mathbb R^d,\omega _\mathfrak K)}$. 10 Aimar, H. Singular integrals and approximate identities on spaces of homogeneous type. , 1 (1985), 135--153. Akcoglu, M. A., Jones, R. L., and Schwartz, P. O. Variation in probability, ergodic theory and analysis. , 1 (1998), 154--177. Amri, B., and Sifi, M. Riesz transforms for Dunkl transform. , 1 (2012), 247--262. Anker, J.-P., Ben Salem, N., Dziubański, J., and Hamda, N. The Hardy space $H^1$ in the rational Dunkl setting. , 1 (2015), 93--128. Anker, J.-P., Dziubański, J., and Hejna, A. Harmonic functions, conjugate harmonic functions and the Hardy space $H^1$ in the rational Dunkl setting. , 5 (2019), 2356--2418. Beltran, D., Oberlin, R., Roncal, L., Seeger, A., and Stovall, B. Variation bounds for spherical averages. , 1-2 (2022), 459--512. Bennett, C. Another characterization of BLO. , 4 (1982), 552--556. Betancor, J., and de León-Contreras, M. Variation inequalities for Riesz transforms and Poisson semigroups associated with Laguerre polynomial expansions. (2023). Blunck, S., and Kunstmann, P.C. Calderón-Zygmund theory for non-integral operators and the $H^\infty$ functional calculus. (2003), 919--942. Bourgain, J. Pointwise ergodic theorems for arithmetic sets. , 69 (1989), 5--45. With an appendix by the author, Harry Furstenberg, Yitzhak Katznelson and Donald S. Ornstein. Campbell, J. T., Jones, R. L., Reinhold, K., and Wierdl, M. Oscillation and variation for the Hilbert transform. , 1 (2000), 59--83. Campbell, J. T., Jones, R. L., Reinhold, K., and Wierdl, M. Oscillation and variation for singular integrals in higher dimensions. , 5 (2003), 2115--2137. Coifman, R. R., and Rochberg, R. Another characterization of BMO. , 2 (1980), 249--254. Coifman, R. R., and Weiss, G. . Lecture Notes in Mathematics, Vol. 242. Springer-Verlag, Berlin-New York, 1971. Étude de certaines intégrales singulières. Coifman, R. R., and Weiss, G. Extensions of Hardy spaces and their use in analysis. , 4 (1977), 569--645. Dai, F., and Xu, Y. . Advanced Courses in Mathematics. CRM Barcelona. Birkhäuser/Springer, Basel, 2015. Edited by Sergey Tikhonov. Dunkl, C. F. Reflection groups and orthogonal polynomials on the sphere. , 1 (1988), 33--60. Dunkl, C. F. Differential-difference operators associated to reflection groups. , 1 (1989), 167--183. Dunkl, C. F. Integral kernels with reflection group invariance. , 6 (1991), 1213--1227. Dunkl, C. F. 
Hankel transforms associated to finite reflection groups. In *Hypergeometric functions on domains of positivity, Jack polynomials, and applications (Tampa, FL, 1991)*, vol. 138 of *Contemp. Math.* Amer. Math. Soc., Providence, RI, 1992, pp. 123--138. Dziubański, J., and Hejna, A. Hörmander's multiplier theorem for the Dunkl transform. , 7 (2019), 2133--2159. Dziubański, J., and Hejna, A. Remark on atomic decompositions for the Hardy space $H^1$ in the rational Dunkl setting. , 1 (2020), 89--110. Dziubański, J., and Hejna, A. On semigroups generated by sums of even powers of Dunkl operators. , 31 (2021). https://doi.org/10.1007/s00020-021-02646-4. Dziubański, J., and Hejna, A. Singular integrals in the rational Dunkl setting. , 3 (2022), 711--737. Dziubański, J., and Hejna, A. Upper and lower bounds for Littlewood-Paley square functions in the Dunkl setting. , 3 (2022), 275--303. Dziubański, J., and Hejna, A. Upper and lower bounds for the Dunkl heat kernel. , 25 (2023), https://doi.org/10.1007/s00526-022-02370-w. Dziubański, J., and Hejna, A. On Dunkl Schrödinger semigroups with Green bounded potentials. Preprint 2022, arXiv:2204.03443r . Dziubański, J., and Hejna, A. Remarks on Dunkl translations of non-radial kernels. , 52 (2023), https://doi.org/10.1007/s00041-023-10034-2. Hammi, A., and Amri, B. Dunkl-Schrödinger operators. , (2019), 1033--1058. Han, Y., Lee, M.-Y., Li, J., and Wick, B. D. Riesz transform and commutators in the Dunkl setting. Preprint 2021, arXiv:2105.11275. Harboure, E., Macı́as, R. A., Menárguez, M. T., and Torrea, J. L. Oscillation and variation for the Gaussian Riesz transforms and Poisson integral. , 1 (2005), 85--104. Hejna, A. Hardy spaces for the Dunkl harmonic oscillator. , 11 (2020), 2112--2139. Jiang, Y. Spaces of type BLO for non-doubling measures. , 7 (2005), 2101--2107. Jiu, J., and Li, Z. On the representing measures of Dunkl's intertwining operator. (2021), Paper No. 105605, 10. Jiu, J., and Li, Z. The dual of the Hardy space associated with the Dunkl operators. (2023), https://doi.org/10.1016/j.aim.2022.108810. Jiu, J., and Li, Z. Local boundary behaviour and the area integral of generalized harmonic functions associated with root systems. Preprint 2022, arXiv:2206.02132. Jones, R. L., Kaufman, R., Rosenblatt, J. M., and Wierdl, M. Oscillation in ergodic theory. , 4 (1998), 889--935. Jones, R. L., and Reinhold, K. Oscillation and variation inequalities for convolution powers. , 6 (2001), 1809--1829. Jones, R. L., Seeger, A., and Wright, J. Strong variational and jump inequalities in harmonic analysis. , 12 (2008), 6711--6742. Jones, R. L., and Wang, G. Variation inequalities for the Fejér and Poisson kernels. , 11 (2004), 4493--4518. Le Merdy, C., and Xu, Q. Maximal theorems and square functions for analytic operators on $L^p$-spaces. , 2 (2012), 343--365. Le Merdy, C., and Xu, Q. Strong $q$-variation inequalities for analytic semigroups. , 6 (2012), 2069--2097 (2013). Lépingle, D. La variation d'ordre $p$ des semi-martingales. , 4 (1976), 295--316. Li, H. Weak type estimates for square functions of Dunkl heat flows. Preprint 2021, arXiv:2101.04056. Li, Z., and Zhang, X. On Schrödinger oscillatory integrals associated with the Dunkl transform. , 2 (2019), 267--298. Ma, T., Torrea, J. L., and Xu, Q. Weighted variation inequalities for differential operators and singular integrals in higher dimensions. , 8 (2017), 1419--1442. Meng, Y., and Yang, D. Estimates for Littlewood-Paley operators in ${\rm BMO}(\Bbb R^n)$. , 1 (2008), 30--38. 
Pisier, G., and Xu, Q. H. The strong $p$-variation of martingales and orthogonal series. , 4 (1988), 497--514. Qian, J. The $p$-variation of partial sum processes and the empirical process. , 3 (1998), 1370--1383. Rösler, M. Generalized Hermite polynomials and the heat equation for Dunkl operators. , 3 (1998), 519--542. Rösler, M. Positivity of Dunkl's intertwining operator. , 3 (1999), 445--463. Rösler, M. Dunkl operators: theory and applications. In *Orthogonal polynomials and special functions (Leuven, 2002)*, vol. 1817 of *Lecture Notes in Math.* Springer, Berlin, 2003, pp. 93--135. Rösler, M. A positive radial product formula for the Dunkl kernel. , 6 (2003), 2413--2438. Rösler, M., and de Jeu, M. Asymptotic analysis for the Dunkl kernel. , 1 (2002), 110--126. Rösler, M., and Voit, M. Dunkl theory, convolution algebras, and related Markov processes. In *Harmonic and stochastic analysis of Dunkl processes (Paris, 2008)*, vol. 71 of *Travaux en cours*. Hermann, 2008, pp. 1--112. Stein, E. M. . Annals of Mathematics Studies, No. 63. Princeton University Press, Princeton, N.J.; University of Tokyo Press, Tokyo, 1970. Tan, C., Han, Y., Han, Y., Lee, M.-Y., and Li, J. Singular integral operators, ${T}1$ theorem, Littlewood-Paley theory and Hardy spaces in Dunkl setting. Preprint 2022. Thangavelu, S., and Xu, Y. Convolution operator and maximal function for the Dunkl transform. (2005), 25--55. Thangavelu, S., and Xu, Y. Riesz transform and Riesz potentials for Dunkl transform. , 1 (2007), 181--195. Yang, D., Yang, D., and Zhou, Y. Localized Morrey-Campanato spaces on metric measure spaces and applications to Schrödinger operators. (2010), 77--119.
--- abstract: | Non-Gaussian Harmonizable Fractional Stable Motion (HFSM) is a natural and important extension of the well-known Fractional Brownian Motion to the framework of heavy-tailed stable distributions. It was introduced several decades ago; however, its properties are far from being completely understood. In our present paper we determine the optimal power of the logarithmic factor in a uniform modulus of continuity for HFSM, which solves an old open problem. The keystone of our strategy consists in Abel transforms of the LePage series expansions of the random coefficients of the wavelet series representation of HFSM. Our methodology can be extended to more general harmonizable stable processes and fields. author: - | Antoine Ayache\ Univ. Lille,\ CNRS, UMR 8524 - Laboratoire Paul Painlevé,\ F-59000 Lille, France\ E-mail: `antoine.ayache@univ-lille.fr`\   - | Yimin Xiao\ Department of Statistics and Probability\ Michigan State University\ East Lansing, MI 48824, U.S.A.\ E-mail: `xiaoy@msu.edu`\ title: An Optimal Uniform Modulus of Continuity for Harmonizable Fractional Stable Motion --- Running head: Optimal modulus of continuity of HFSM\ *AMS Subject Classification (MSC2020 database)*: 60G52, 60G17, 60G22.\ *Key words:* Heavy-tailed stable distributions, LePage random series, wavelet random series, Abel transform, binomial random variables. # Introduction and statements of the main results {#sec:intro} For any given constants $\alpha\in (0,2]$ and $H\in (0,1)$, the Harmonizable Fractional Stable Motion (HFSM) is the continuous real-valued symmetric $\alpha$-stable (S$\alpha$S) stochastic process $\{X(t), t\in{\mathbb R}\}$ defined as $$\label{eq:df-hfsm} X(t):={\rm Re}\,\bigg(\int_{{\mathbb R}}\frac{e^{it\xi}-1}{|\xi|^{H+1/\alpha}}\, d\widetilde{M}_\alpha(\xi)\bigg),$$ where $\widetilde{M}_\alpha$ is a complex-valued rotationally invariant $\alpha$-stable random measure with Lebesgue control measure. When $\alpha = 2$, $\{X(t), t\in{\mathbb R}\}$ is a Fractional Brownian Motion, usually denoted by $\{B_H (t), t\in{\mathbb R}\}$, whose sample path regularity and many other properties have been extensively studied in the literature. When $0 < \alpha < 2$, $\{X(t), t\in{\mathbb R}\}$ is one of the most important non-Gaussian self-similar S$\alpha$S processes with stationary increments; we refer to the well-known book of Samorodnitsky and Taqqu [@ST94] for a systematic account of these processes and of many topics related to stable distributions. Non-Gaussian HFSM was introduced about 35 years ago by Cambanis and Maejima in [@CM89]; however, its properties are far from being completely understood. Generally speaking, the study of this process has been difficult for various reasons: lack of ergodicity, lack of finite second moment, and so on. Sample path behavior of HFSM, such as the uniform modulus of continuity on a compact interval, has been studied by Kôno and Maejima [@KonoMaejima91], Xiao [@Xiao10], Boutard [@Boutard], Ayache and Boutard [@AB17], and Panigrahi et al. [@PRX21], by means of the LePage series representation, a modified chaining argument, and wavelet methods.
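As a brief illustration of how the definition ([\[eq:df-hfsm\]](#eq:df-hfsm){reference-type="ref" reference="eq:df-hfsm"}) is used, here is a sketch of the standard verification of the $H$-self-similarity mentioned above; it only relies on the fact (see e.g. [@ST94]) that the scale parameter of the real part of a stable stochastic integral with respect to $\widetilde{M}_\alpha$ is, up to a fixed multiplicative constant, the $L^\alpha({\mathbb R})$ norm of the integrand. For any $c>0$, $n\in{\mathbb N}$ and real numbers $\theta_1,\ldots,\theta_n$ and $t_1,\ldots,t_n$, the change of variables $\xi=u/c$ yields $$\int_{{\mathbb R}}\bigg|\sum_{j=1}^n\theta_j\,\frac{e^{ict_j\xi}-1}{|\xi|^{H+1/\alpha}}\bigg|^\alpha d\xi =c^{\alpha H}\int_{{\mathbb R}}\bigg|\sum_{j=1}^n\theta_j\,\frac{e^{it_ju}-1}{|u|^{H+1/\alpha}}\bigg|^\alpha du,$$ so that the S$\alpha$S random variables $\sum_{j=1}^n\theta_j X(ct_j)$ and $c^{H}\sum_{j=1}^n\theta_j X(t_j)$ have the same scale parameter; hence $\{X(ct), t\in{\mathbb R}\}$ and $\{c^H X(t), t\in{\mathbb R}\}$ have the same finite-dimensional distributions. Let us now return to the known results on the sample path behavior of HFSM.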
More specifically, in their pioneering article [@KonoMaejima91], Kôno and Maejima applied a LePage series representation for the process $\{X(t), t\in{\mathbb R}\}$ to show that for any given numbers $H\in (0,1)$, $\alpha\in (0,2)$, $\widetilde{\rho} >0$, and arbitrarily small $\delta>0$, almost surely $$\label{eq:koma} \sup_{-\widetilde{\rho}\le t'<t''\le \widetilde{\rho}}\,\frac{\big | X(t')-X(t'')\big |}{\big |t'-t'' \big |^{H}\big (\log(1+|t'-t''|^{-1})\big)^{1/\alpha+1/2+\delta}}<+\infty.$$ In his quite recent PhD thesis [@Boutard], Boutard mainly established directional regularity and irregularity results on general (anisotropic) harmonizable stable random fields with stationary increments. In particular, Theorem 5.2.1 in [@Boutard] proves the following partial inverse to ([\[eq:koma\]](#eq:koma){reference-type="ref" reference="eq:koma"}): for any given numbers $H\in (0,1)$, $\alpha\in (0,2)$, $u<v$, and arbitrarily small $\delta>0$, almost surely $$\label{thm:abou2:eq1} \sup_{u\le t'<t''\le v}\,\frac{\big | X(t')-X(t'')\big |}{\big |t'-t'' \big |^{H}\big (\log(1+|t'-t''|^{-1})\big)^{1/\alpha-\delta}}=+\infty.$$ It is a natural question to wonder whether or not the power of the logarithmic factor in the uniform modulus of continuity ([\[eq:koma\]](#eq:koma){reference-type="ref" reference="eq:koma"}) is optimal. One is tempted to believe that it is not optimal, since in the Gaussian case $\alpha=2$ it is known for a long time that the optimal power for this factor is $1/2$ and not $1$. Yet, when $\alpha\in (0,2)$, the question has remained open for many years. In the case $\alpha\in (0,1)$, a negative answer to it has been given in [@AB17] and [@Boutard]. Indeed, [@AB17 Theorem 3.8] or [@Boutard Theorem 4.1.2] for general harmonizable stable random fields with stationary increments implies the following: **Theorem 1**. *([@AB17; @Boutard]) [\[thm:abou1\]]{#thm:abou1 label="thm:abou1"} If $H\in (0,1)$ and $\alpha\in (0,1)$, then for any positive numbers $\widetilde{\rho}$ and $\delta$, almost surely $$\label{thm:abou1:eq1} \sup_{-\widetilde{\rho}\le t'<t''\le \widetilde{\rho}}\,\frac{\big | X(t')-X(t'')\big |}{\big |t'-t'' \big |^{H}\big (\log(1+|t'-t''|^{-1})\big)^{1/\alpha+\delta}}<+\infty.$$* The proof of this result in [@AB17] and [@Boutard] makes an essential use of a random wavelet series representation for harmonizable stable fields with stationary increments (see for instance Section 2 in [@AB17]). In the particular case of the HFSM $\{X(t), t\in{\mathbb R}\}$ this representation can be expressed, for all $H\in (0,1)$ and $\alpha\in (0,2]$, as $$\label{eq:rep-hfsm} X(t)=\sum_{(j,k)\in{\mathbb Z}^2} 2^{-jH}{\rm Re}\,\big (\varepsilon_{\alpha,j,k}\big)\big (\Psi_{\alpha,H}(2^j t-k)-\Psi_{\alpha,H}(-k)\big).$$ In the above, for every $(j,k)\in{\mathbb Z}^2$, $\varepsilon_{\alpha,j,k}$ is a complex-valued $\alpha$-stable random variable defined as $$\label{def:ep} \varepsilon_{\alpha,j,k}:=\int_{{\mathbb R}} 2^{-j/\alpha}e^{ik2^{-j}\xi}\,\widehat{\psi}(-2^{-j}\xi)\,d\widetilde{M}_\alpha(\xi),$$ and $\Psi_{\alpha,H}$ is the deterministic real-valued function in the Schwartz class $S({\mathbb R})$ defined as $$\label{eq:df-psiah} \Psi_{\alpha,H}(y):=\int_{{\mathbb R}}e^{iy\xi}\frac{\widehat{\psi}(\xi)}{|\xi|^{H+1/\alpha}}\;d\xi,\quad \mbox{for all $y\in{\mathbb R}$},$$ where, $\widehat{\psi}$ is the Fourier transform of a real-valued univariate Meyer mother wavelet (see e.g. [@LM86; @meyer92; @Daubechies]). 
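The function $\Psi_{\alpha,H}$ defined by ([\[eq:df-psiah\]](#eq:df-psiah){reference-type="ref" reference="eq:df-psiah"}) is straightforward to tabulate numerically, which can be convenient for getting a feeling for the wavelet-type series ([\[eq:rep-hfsm\]](#eq:rep-hfsm){reference-type="ref" reference="eq:rep-hfsm"}). The short Python sketch below evaluates ([\[eq:df-psiah\]](#eq:df-psiah){reference-type="ref" reference="eq:df-psiah"}) by direct quadrature; the bump `psi_hat`, the helper `Psi` and the chosen parameters are illustrative assumptions only (in particular, `psi_hat` is a generic smooth even function supported in the annulus $2\pi/3\le|\xi|\le 8\pi/3$ recalled just below, and not the actual Meyer wavelet used in the paper).

```python
import numpy as np

A, B = 2 * np.pi / 3, 8 * np.pi / 3  # annulus containing supp(psi_hat)

def psi_hat(xi):
    # Smooth, even bump supported in {A <= |xi| <= B}: a stand-in for the
    # Fourier transform of the Meyer mother wavelet (illustration only).
    s = (np.abs(xi) - A) / (B - A)
    out = np.zeros_like(xi, dtype=float)
    inside = (s > 0.0) & (s < 1.0)
    out[inside] = np.exp(-1.0 / (s[inside] * (1.0 - s[inside])))
    return out

def Psi(y, alpha, H, n=4000):
    # Quadrature of Psi_{alpha,H}(y) = int e^{i y xi} psi_hat(xi) |xi|^{-H-1/alpha} dxi.
    # For a real, even psi_hat this reduces to a cosine transform over [A, B],
    # which also keeps the quadrature away from the singularity at xi = 0.
    xi = np.linspace(A, B, n)
    kernel = psi_hat(xi) / xi ** (H + 1.0 / alpha)
    y = np.atleast_1d(np.asarray(y, dtype=float))
    return 2.0 * np.trapz(np.cos(np.outer(y, xi)) * kernel, xi, axis=1)

if __name__ == "__main__":
    # Psi_{alpha,H} is smooth and decays rapidly as |y| grows.
    print(Psi([0.0, 1.0, 10.0], alpha=1.5, H=0.7))
```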
Recall that $\psi$ and $\widehat{\psi}$ belong to $S({\mathbb R})$ and that $\widehat{\psi}$ is compactly supported with support satisfying $$\label{eq:psihat} {\rm supp}\,\widehat{\psi}\subseteq{\EuScript R}_0:=\Big\{\xi\in{\mathbb R}\,:\,\frac{2\pi}{3}\le |\xi|\le \frac{8\pi}{3}\Big\}.$$ Notice that the random series in ([\[eq:rep-hfsm\]](#eq:rep-hfsm){reference-type="ref" reference="eq:rep-hfsm"}) is almost surely absolutely convergent for each fixed $t\in{\mathbb R}$ (see Proposition 2.10 in [@AB17]), and that it is almost surely uniformly convergent in $t$ on each compact interval (see (2.53) and Propositions 2.15, 2.16 and 2.17 in [@AB17], see also Theorem 2.7 in [@AShX20]). We mention in passing that [@AShX20] has extended the representation ([\[eq:rep-hfsm\]](#eq:rep-hfsm){reference-type="ref" reference="eq:rep-hfsm"}) to the Harmonizable Fractional Stable Sheet which is indexed by ${\mathbb R}^N$ and whose increments are non-stationary. Thanks to stochastic integral representation ([\[def:ep\]](#def:ep){reference-type="ref" reference="def:ep"}), the complex-valued $\alpha$-stable process $\big\{\varepsilon_{\alpha,j,k},\, (j,k)\in{\mathbb Z}^2\big\}$ can be expressed as a LePage random series as shown by Proposition 4.3 in [@AB17] which will be recalled at the beginning of Section [2](#sec:key){reference-type="ref" reference="sec:key"} of the present article. This useful representation allowed [@AB17] and [@Boutard] to establish the almost sure fundamental inequality (see for instance (2.35) in [@AB17]): $$\label{ineq1f:ep} \big |{\rm Re}\,(\varepsilon_{\alpha, j,k})\big |\le C_{\alpha,\delta} (1+|j|)^{1/\alpha+\delta},\quad\mbox{for all $\alpha\in (0,1)$, $\delta>0$ and $(j,k)\in{\mathbb Z}^2$,}$$ which is one of the main ingredients in the proof of Theorem [\[thm:abou1\]](#thm:abou1){reference-type="ref" reference="thm:abou1"} via the wavelet representation ([\[eq:rep-hfsm\]](#eq:rep-hfsm){reference-type="ref" reference="eq:rep-hfsm"}). For establishing the fundamental inequality ([\[ineq1f:ep\]](#ineq1f:ep){reference-type="ref" reference="ineq1f:ep"}), in which $C_{\alpha,\delta}$ denotes a positive finite random variable only depending on $\alpha$ and $\delta$, [@AB17] and [@Boutard] made an essential use of the fact that, when $\alpha\in (0,1)$, the LePage series representation of $\varepsilon_{\alpha,j,k}$ can be bounded, uniformly in $k\in{\mathbb Z}$, by an almost surely convergent positive random series. Unfortunately, this method can hardly be extended to the case $\alpha\in [1,2)$ since the latter random series is no longer convergent. The divergence of this series creates a major difficulty. Because of it, only a weaker form of the inequality ([\[ineq1f:ep\]](#ineq1f:ep){reference-type="ref" reference="ineq1f:ep"}) was obtained in [@AB17] and [@Boutard] (see for instance (2.36) in [@AB17]). 
Namely, if $\alpha\in [1,2)$, then for any $\delta>0$, almost surely $$\label{ineq2f:ep} \big |{\rm Re}\,(\varepsilon_{\alpha, j,k})\big |\le C_{\alpha,\delta} (1+|j|)^{1/\alpha+\delta}\sqrt{\log \big (3+|j|+|k|\big)} \quad\mbox{for all $(j,k)\in{\mathbb Z}^2$.}$$ In contrast with ([\[ineq1f:ep\]](#ineq1f:ep){reference-type="ref" reference="ineq1f:ep"}), the inequality ([\[ineq2f:ep\]](#ineq2f:ep){reference-type="ref" reference="ineq2f:ep"}) is not sharp enough for improving the uniform modulus of continuity ([\[eq:koma\]](#eq:koma){reference-type="ref" reference="eq:koma"}) of HFSM via the wavelet representation ([\[eq:rep-hfsm\]](#eq:rep-hfsm){reference-type="ref" reference="eq:rep-hfsm"}). In our present article we will introduce a new strategy for dealing with the major difficulty described above. The starting point of this new strategy is to apply an Abel transform to the LePage series representation of $\varepsilon_{\alpha,j,k}$. Thanks to the new strategy we are able to prove the following crucial proposition. **Proposition 2**. *Let $\alpha\in [1,2)$, $\delta>0$ and $\rho>0$ be arbitrary and fixed. There exist an event $\Omega^*$ of probability $1$ and a positive finite random variable $C$, which depends on $\alpha$, $\delta$ and $\rho$, such that on $\Omega^*$, the inequality $$\label{prop:fund:eq2} \big |{\rm Re}\,(\varepsilon_{\alpha, j,k})\big |\le C (1+j)^{1/\alpha+\delta}$$ holds for all $(j,k)\in{\mathbb Z}_+\times{\mathbb Z}$ satisfying $$\label{prop:fund:eq1} |2^{-j}k|\le\rho.$$*
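To indicate informally why ([\[prop:fund:eq2\]](#prop:fund:eq2){reference-type="ref" reference="prop:fund:eq2"}) is exactly the kind of bound that is needed, one can argue heuristically as follows; this is only a rough sketch, under the simplifying assumption that the low frequencies $j<0$ and the indices $k$ with $|2^{-j}k|$ large give negligible contributions (which is what the fast decay of $\Psi_{\alpha,H}$ suggests), and it is not the detailed argument carried out later in the paper. Since the terms $\Psi_{\alpha,H}(-k)$ cancel in the increments of ([\[eq:rep-hfsm\]](#eq:rep-hfsm){reference-type="ref" reference="eq:rep-hfsm"}) and since $\sum_{k\in{\mathbb Z}}\big|\Psi_{\alpha,H}(2^jt'-k)-\Psi_{\alpha,H}(2^jt''-k)\big|\le C\min\big(1,2^j|t'-t''|\big)$, the bound ([\[prop:fund:eq2\]](#prop:fund:eq2){reference-type="ref" reference="prop:fund:eq2"}) heuristically gives $$\big|X(t')-X(t'')\big|\;\le\;C\sum_{j\ge 0}2^{-jH}(1+j)^{1/\alpha+\delta}\min\big(1,2^j|t'-t''|\big)\;\le\;C'\,|t'-t''|^{H}\big(\log(1+|t'-t''|^{-1})\big)^{1/\alpha+\delta},$$ the last estimate being obtained by splitting the sum at the scale $2^{-j}\approx|t'-t''|$.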
The following theorem improves ([\[thm:abou2:eq1\]](#thm:abou2:eq1){reference-type="ref" reference="thm:abou2:eq1"}) and shows that optimality of the uniform modulus of continuity for HFSM is fundamentally different from that for the Gaussian case ($\alpha = 2$) of the Fractional Brownian Motion $\{B_H (t), t\in{\mathbb R}\}$, where it is known that for any fixed real numbers $u<v$, almost surely, $$0<\sup_{u\le t'<t''\le v}\,\frac{\big | B_H(t')-B_H(t'')\big |}{\big |t'-t'' \big |^{H}\big (\log(1+|t'-t''|^{-1})\big)^{1/2}}<+\infty.$$ **Theorem 4**. *For any given real numbers $H\in (0,1)$, $\alpha\in (0,2)$ and $u<v$, the equality ([\[thm:abou2:eq1\]](#thm:abou2:eq1){reference-type="ref" reference="thm:abou2:eq1"}) with $\delta=0$ holds almost surely: $$\label{thm:abou2:eq1b} \sup_{u\le t'<t''\le v}\,\frac{\big | X(t')-X(t'')\big |}{\big |t'-t'' \big |^{H}\big (\log(1+|t'-t''|^{-1})\big)^{1/\alpha}}=+\infty.$$* Theorems [\[thm:abou1\]](#thm:abou1){reference-type="ref" reference="thm:abou1"} and [Theorem 4](#thm:main2){reference-type="ref" reference="thm:main2"} leave the following open questions. -   (i) Is there a function $L: {\mathbb R}_+ \to {\mathbb R}_+$ that is slowly varying at the origin such that $$\label{thm:abou2:eq1c} 0< \sup_{u\le t'<t''\le v}\,\frac{\big | X(t')-X(t'')\big |} {\big |t'-t'' \big |^{H} L(|t'-t''|)} <+\infty?$$ -   (ii) If the answer to Question (i) is negative, is there a criterion on $L$ (e.g., an integral test) that can decide when the supremum in ([\[thm:abou2:eq1c\]](#thm:abou2:eq1c){reference-type="ref" reference="thm:abou2:eq1c"}) is 0 or $\infty$? The rest of our article is organized as follows. Section [2](#sec:key){reference-type="ref" reference="sec:key"} is devoted to the proof of the fundamental Proposition [Proposition 2](#prop:fund){reference-type="ref" reference="prop:fund"}, and Section [3](#sec:p-main){reference-type="ref" reference="sec:p-main"} to the proofs of Theorems [Theorem 3](#thm:main1){reference-type="ref" reference="thm:main1"} and [Theorem 4](#thm:main2){reference-type="ref" reference="thm:main2"}. # Proof of Proposition [Proposition 2](#prop:fund){reference-type="ref" reference="prop:fund"} {#sec:key} First, we give a representation of the complex-valued $\alpha$-stable stochastic process $\big\{\varepsilon_{\alpha,j,k},\, (j,k)\in{\mathbb Z}^2\big\}$, defined through ([\[def:ep\]](#def:ep){reference-type="ref" reference="def:ep"}), as a LePage-type random series. The following proposition is a reformulated version of Proposition 4.3 in [@AB17] in our setting. **Proposition 5**. *Let $\alpha\in (0,2)$ be a constant and let $a_{\alpha}:=\Big(\int_{0}^{+\infty}x^{-\alpha}\sin x\,dx\Big)^{-1/\alpha}$ be the corresponding positive finite constant. Then $$\label{lepage:ep} \Big\{\varepsilon_{\alpha,j,k}\,,\,\, (j,k)\in{\mathbb Z}^2\Big\}\stackrel{\mbox{{\tiny $d$}}}{=}\bigg \{a_{\alpha}\sum_{m=1}^{+\infty} \Gamma_{m}^{-1/\alpha}\varphi (\zeta_m)^{-1/\alpha}\,2^{-j/\alpha}e^{ik2^{-j}\zeta_m}\widehat{\psi}(-2^{-j}\zeta_m) g_m\,,\,\, (j,k)\in{\mathbb Z}^2\bigg\},$$ where $\stackrel{\mbox{{\tiny $d$}}}{=}$ means equality of all finite-dimensional distributions.
Notice that, for all $(j,k)\in{\mathbb Z}^2$, the random series in the right-hand side of ([\[lepage:ep\]](#lepage:ep){reference-type="ref" reference="lepage:ep"}) converges almost surely, and that $(\Gamma_m)_{m\in{\mathbb N}}$, $(\zeta_m)_{m\in{\mathbb N}}$, and $(g_m)_{m\in{\mathbb N}}$ are three mutually independent sequences of random variables, defined on the same probability space, which satisfy the following three properties.* - *$(\Gamma_m)_{m\in{\mathbb N}}$ is a sequence of Poisson arrival times with unit rate; in other words there is a sequence $({\cal E}_n)_{n\in{\mathbb N}}$ of independent and identically distributed exponential random variables with parameter equals to $1$ such that $$\label{eq:ga} \Gamma_m=\sum_{n=1}^m {\cal E}_n\, \ \hbox{ for all }\, m\in{\mathbb N}.$$* - *$(\zeta_m)_{m\in{\mathbb N}}$ is a sequence of real-valued independent, identically distributed absolutely continuous random variables with probability density function $\varphi$ given by $\varphi(0)=0$ and $$\label{eq:vp} \varphi(\xi)=4^{-1}\varepsilon\, |\xi|^{-1}\big (1+\big| \log |\xi|\big|\big)^{-1-\varepsilon}\,\quad \mbox{for $\xi\in{\mathbb R}\setminus\{0\}$},$$ where $\varepsilon$ is an arbitrarily small positive number; in fact $\varepsilon$ will later be chosen in a convenient way related to $\delta$ which appears in ([\[prop:fund:eq2\]](#prop:fund:eq2){reference-type="ref" reference="prop:fund:eq2"}).* - *$(g_m)_{m\in{\mathbb N}}$ is a sequence of complex-valued, independent, identically distributed, centered Gaussian random variables such that ${\mathbb E}\big (|{\rm Re}\,(g_m)|^\alpha\big)=1$.* From now on, we will not distinguish the two stochastic processes in the left and right hand sides of the equality ([\[lepage:ep\]](#lepage:ep){reference-type="ref" reference="lepage:ep"}). **Definition 6**. *For each $m\in{\mathbb N}$, set $$\label{def:riparts:eq0} g_{0,m}:={\rm Re}\,(g_m)\quad\mbox{and}\quad g_{1,m}:={\rm Im}\,(g_m).$$ Moreover, for any $(j,k)\in{\mathbb Z}_+\times{\mathbb Z}$, let $$\begin{aligned} \label{def:riparts:eq0bis} &&\lambda_{0,m}^{j,k}:={\rm Re}\,\Big(\varphi (\zeta_m)^{-1/\alpha}\,2^{-j/\alpha} e^{ik2^{-j}\zeta_m}\widehat{\psi}(-2^{-j}\zeta_m)\Big)\nonumber\\ &&\mbox{and}\\ &&\lambda_{1,m}^{j,k}:={\rm Im}\,\Big(\varphi (\zeta_m)^{-1/\alpha}\,2^{-j/\alpha} e^{ik2^{-j}\zeta_m}\widehat{\psi}(-2^{-j}\zeta_m)\Big).\nonumber\end{aligned}$$ Then, $$\label{def:riparts:eq1} {\rm Re}\,\Big(\varphi (\zeta_m)^{-1/\alpha}\,2^{-j/\alpha} e^{ik2^{-j}\zeta_m}\widehat{\psi}(-2^{-j}\zeta_m)g_m\Big)=\lambda_{0,m}^{j,k}g_{0,m}-\lambda_{1,m}^{j,k}g_{1,m}\,,$$ and consequently (see ([\[lepage:ep\]](#lepage:ep){reference-type="ref" reference="lepage:ep"})) $$\label{def:riparts:eq2} {\rm Re}\,(\varepsilon_{\alpha,j,k})=a_{\alpha}\sum_{m=1}^{+\infty} \Gamma_m^{-1/\alpha}\big (\lambda_{0,m}^{j,k}g_{0,m}-\lambda_{1,m}^{j,k}g_{1,m}\big).$$ Notice that the random series in ([\[def:riparts:eq2\]](#def:riparts:eq2){reference-type="ref" reference="def:riparts:eq2"}) is almost surely convergent, since the random series in ([\[lepage:ep\]](#lepage:ep){reference-type="ref" reference="lepage:ep"}) satisfies this property.* **Definition 7**. *For any $l\in\{0,1\}$ and $(j,k)\in{\mathbb Z}_+\times{\mathbb Z}$, we set $S_{l,0}^{j,k}:=0$, and, for all $m\in{\mathbb N}$, $$\label{def:Smjk:eq1} S_{l,m}^{j,k}:=\sum_{n=1}^m \lambda_{l,n}^{j,k}g_{l,n}\,.$$* **Definition 8**. 
*Let ${\EuScript R}_0:=\big\{\xi\in{\mathbb R}\,:\, 2\pi/3\le |\xi|\le 8\pi/3\big\}$ be as in ([\[eq:psihat\]](#eq:psihat){reference-type="ref" reference="eq:psihat"}). For every $j\in{\mathbb Z}_+$, denote by $(\beta_n^j)_{n\in{\mathbb N}}$ the sequence of the independent and identically distributed Bernoulli random variables defined as $$\label{def:ber-bin:eq1} \beta_n^j:={\rm 1 \hskip-2.9truept l}_{{\EuScript R}_0} (2^{-j} \zeta_n)\,,$$ and by $(B_m^j)_{m\in{\mathbb N}}$ the sequence of the binomial random variables defined as $$\label{def:ber-bin:eq2} B_m^j:=\sum_{n=1}^m \beta_n^j\,.$$* **Lemma 9**. *Let $\mu_{\alpha,\varepsilon}$ be the positive finite constant defined as $$\label{lem:bou-lamjk:eq1} \mu_{\alpha,\varepsilon}:=\sup_{\xi\in{\EuScript R}_0} \varphi(\xi)^{-1/\alpha}|\widehat{\psi}(\xi)|=(4/\epsilon)^{1/\alpha}\sup_{\xi\in{\EuScript R}_0} |\xi|^{1/\alpha}(1+\log|\xi|)^{\frac{1+\epsilon}{\alpha}}|\widehat{\psi}(\xi)|,$$ where the last equality results from ([\[eq:vp\]](#eq:vp){reference-type="ref" reference="eq:vp"}) and ([\[eq:psihat\]](#eq:psihat){reference-type="ref" reference="eq:psihat"}). Then, almost surely for all $(j,k)\in{\mathbb Z}_+\times{\mathbb Z}$, $l\in\{0,1\}$ and $n\in{\mathbb N}$, $$\label{lem:bou-lamjk:eq2} |\lambda_{l,n}^{j,k}|\le \mu_{\alpha,\varepsilon}(1+j)^{\frac{1+\epsilon}{\alpha}}\beta_n^j\,.$$* It follows from ([\[def:riparts:eq0bis\]](#def:riparts:eq0bis){reference-type="ref" reference="def:riparts:eq0bis"}), ([\[eq:vp\]](#eq:vp){reference-type="ref" reference="eq:vp"}), the triangle inequality, ([\[eq:psihat\]](#eq:psihat){reference-type="ref" reference="eq:psihat"}), ([\[def:ber-bin:eq1\]](#def:ber-bin:eq1){reference-type="ref" reference="def:ber-bin:eq1"}) and ([\[lem:bou-lamjk:eq1\]](#lem:bou-lamjk:eq1){reference-type="ref" reference="lem:bou-lamjk:eq1"}) that almost surely, for all $(j,k)\in{\mathbb Z}_+\times{\mathbb Z}$, $l\in\{0,1\}$ and $n\in{\mathbb N}$, $$\begin{aligned} |\lambda_{l,n}^{j,k}| &\le &2^{-j/\alpha} \varphi (\zeta_n)^{-1/\alpha}\big|\widehat{\psi}(2^{-j}\zeta_n)\big|\\ &\le & (4/\epsilon)^{1/\alpha}|2^{-j}\zeta_n|^{1/\alpha}\Big(1+j+\big| \log|2^{-j}\zeta_n|\big|\Big)^{\frac{1+\epsilon}{\alpha}}|\widehat{\psi}(2^{-j}\zeta_n)|\\ &\le & (4/\epsilon)^{1/\alpha}|2^{-j}\zeta_n|^{1/\alpha}\Big(1+\big| \log|2^{-j}\zeta_n|\big|\Big)^{\frac{1+\epsilon}{\alpha}}|\widehat{\psi}(2^{-j}\zeta_n)|(1+j)^{\frac{1+\epsilon}{\alpha}}\\ &\le & \mu_{\alpha,\varepsilon}(1+j)^{\frac{1+\epsilon}{\alpha}}\beta_n^j\,,\end{aligned}$$ which shows that ([\[lem:bou-lamjk:eq2\]](#lem:bou-lamjk:eq2){reference-type="ref" reference="lem:bou-lamjk:eq2"}) holds. $\square$ The following lemma provides the first upper bound for $|S_{l,m}^{j,k}|$. **Lemma 10**. *There exists a positive finite random variable $C'$ such that $$\label{lem:bou1-Smjk:eq1} |S_{l,m}^{j,k}|\le C' (1+j)^{\frac{1+\epsilon}{\alpha}}B_m^j \sqrt{\log(1+m)}\,$$ almost surely for all $(j,k)\in{\mathbb Z}_+\times{\mathbb Z}$, $l\in\{0,1\}$ and $m\in{\mathbb N}$.* The lemma is a straightforward consequence of ([\[def:Smjk:eq1\]](#def:Smjk:eq1){reference-type="ref" reference="def:Smjk:eq1"}), the triangle inequality, Lemma [Lemma 9](#lem:bou-lamjk){reference-type="ref" reference="lem:bou-lamjk"}, Definition [Definition 8](#def:ber-bin){reference-type="ref" reference="def:ber-bin"}, ([\[def:riparts:eq0\]](#def:riparts:eq0){reference-type="ref" reference="def:riparts:eq0"}) and the following remark which can easily be derived from Lemma 1 in [@AT03]. 
$\square$ **Remark  ** [\[rem:gau\]]{#rem:gau label="rem:gau"} Let $(\widetilde{g}_m)_{m\in{\mathbb N}}$ be an arbitrary sequence of real-valued centered, identically distributed Gaussian random variables (which are not necessarily independent). Then, there is a positive finite random variable $C$ such that almost surely, $$\label{rem:gau:eq1} |\widetilde{g}_m|\le C \sqrt{\log(1+m)}\,, \quad\mbox{for all $m\in{\mathbb N}$.}$$   The following lemma provides the second upper bound for $|S_{l,m}^{j,k}|$. **Lemma 11**. *There exists a positive finite random variable $C''$ such that $$\begin{aligned} \label{lem:bou2-Smjk:eq1} |S_{l,m}^{j,k}| &\le & C'' \sqrt{\bigg (\sum_{n=1}^m \Big |\lambda_{l,n}^{j,k}\Big |^2\bigg)\log \Big (3+j+|k|+m\Big)}\nonumber\\ &\le & C'' \mu_{\alpha,\varepsilon}(1+j)^{\frac{1+\epsilon}{\alpha}} \sqrt{B_m^j\log\big(3+j+|k|+m\big)}\end{aligned}$$ almost surely for all $l\in\{0,1\}$, $(j,k)\in{\mathbb Z}_+\times{\mathbb Z}$ and $m\in{\mathbb N}$.* First notice that the second inequality in ([\[lem:bou2-Smjk:eq1\]](#lem:bou2-Smjk:eq1){reference-type="ref" reference="lem:bou2-Smjk:eq1"}) is a straightforward consequence of the first inequality in it and of ([\[lem:bou-lamjk:eq2\]](#lem:bou-lamjk:eq2){reference-type="ref" reference="lem:bou-lamjk:eq2"}) and ([\[def:ber-bin:eq2\]](#def:ber-bin:eq2){reference-type="ref" reference="def:ber-bin:eq2"}). Hence, we only have to show that the first inequality in ([\[lem:bou2-Smjk:eq1\]](#lem:bou2-Smjk:eq1){reference-type="ref" reference="lem:bou2-Smjk:eq1"}) is satisfied. For all $l\in\{0,1\}$, $(j,k)\in{\mathbb Z}_+\times{\mathbb Z}$ and $m\in{\mathbb N}$, let $\Lambda_{l,m}^{j,k}$ be the event defined as $$\label{lem:bou2-Smjk:eq2} \Lambda_{l,m}^{j,k}:=\left\{\omega\in\Omega,\, \big |S_{l,m}^{j,k}(\omega)\big | >4\sigma_l\,\sqrt{\bigg (\sum_{n=1}^m \Big |\lambda_{l,n}^{j,k}(\omega)\Big |^2\bigg)\log \Big (3+j+|k|+m\Big)}\,\right\},$$ where $\sigma_l>0$ denotes the common value of the standard deviations of the centered independent real-valued Gaussian random variables $g_{l,n}$, $n\in{\mathbb N}$. For proving the first inequality in ([\[lem:bou2-Smjk:eq1\]](#lem:bou2-Smjk:eq1){reference-type="ref" reference="lem:bou2-Smjk:eq1"}), it is enough to show that for $l\in\{0,1\}$, $$\label{lem:bou2-Smjk:eq3} \sum_{j=0}^{+\infty}\sum_{k=-\infty}^{+\infty}\sum_{m=1}^{+\infty} {\mathbb P}\big (\Lambda_{l,m}^{j,k}\big)<+\infty.$$ Indeed, ([\[lem:bou2-Smjk:eq3\]](#lem:bou2-Smjk:eq3){reference-type="ref" reference="lem:bou2-Smjk:eq3"}) means that $${\mathbb E}\Big (\sum_{j=0}^{+\infty}\sum_{k=-\infty}^{+\infty}\sum_{m=1}^{+\infty} {\rm 1 \hskip-2.9truept l}_{\Lambda_{l,m}^{j,k}}\Big)<+\infty$$ and consequently that $$\sum_{j=0}^{+\infty}\sum_{k=-\infty}^{+\infty}\sum_{m=1}^{+\infty} {\rm 1 \hskip-2.9truept l}_{\Lambda_{l,m}^{j,k}}<+\infty,\quad\mbox{almost surely.}$$ Thus, for $l\in\{0,1\}$, the random set of indices $\big\{(j,k,m)\in{\mathbb Z}_+\times{\mathbb Z}\times{\mathbb N}\,:\, {\rm 1 \hskip-2.9truept l}_{\Lambda_{l,m}^{j,k}}=1\big\}$ is almost surely finite. 
The latter fact implies that the positive random variable $C''$ defined as $$\label{lem:bou2-Smjk:eq3bis} C'':=\sup_{(l,j,k,m)\in\{0,1\}\times{\mathbb Z}_+\times{\mathbb Z}\times{\mathbb N}}\left\{4\sigma_l+\Bigg(\bigg (\sigma_l^2\sum_{n=1}^m \big |\lambda_{l,n}^{j,k}\big |^2\bigg) \log \Big (3+j+|k|+m\Big)\Bigg)^{-1/2}\big |S_{l,m}^{j,k}\big |{\rm 1 \hskip-2.9truept l}_{\Lambda_{l,m}^{j,k}}\right\},$$ with the conventions that $0^{-1/2}=+\infty$ and $(+\infty)0=0$, is almost surely finite. Moreover, it can easily be seen that the first inequality in ([\[lem:bou2-Smjk:eq1\]](#lem:bou2-Smjk:eq1){reference-type="ref" reference="lem:bou2-Smjk:eq1"}) holds when $C''$ is defined through ([\[lem:bou2-Smjk:eq3bis\]](#lem:bou2-Smjk:eq3bis){reference-type="ref" reference="lem:bou2-Smjk:eq3bis"}). Now we proceed with the proof of ([\[lem:bou2-Smjk:eq3\]](#lem:bou2-Smjk:eq3){reference-type="ref" reference="lem:bou2-Smjk:eq3"}). Denote by ${\mathbb E}_{\zeta}(\cdot)$ the conditional expectation operator with respect to ${\EuScript F}_\zeta$, the $\sigma$-field generated by the sequence of random variables $(\zeta_m)_{m\in{\mathbb N}}$. It is clear that, for all $l\in\{0,1\}$, $(j,k)\in{\mathbb Z}_+\times{\mathbb Z}$ and $m\in{\mathbb N}$, $$\label{lem:bou2-Smjk:eq4} {\mathbb P}\big (\Lambda_{l,m}^{j,k}\big)={\mathbb E}\Big({\rm 1 \hskip-2.9truept l}_{\Lambda_{l,m}^{j,k}}\Big)={\mathbb E}\Big ({\mathbb E}_{\zeta}\Big({\rm 1 \hskip-2.9truept l}_{\Lambda_{l,m}^{j,k}}\Big)\Big).$$ Then, by using the fact that the conditional distribution of $S_{l,m}^{j,k}$ with respect to ${\EuScript F}_\zeta$ is a centered Gaussian distribution with standard deviation equal to $$\sigma_l\,\sqrt{\sum_{n=1}^m \Big |\lambda_{l,n}^{j,k}\Big |^2}$$ and the fact that $4\,\sqrt{\log (3+j+|k|+m)}\ge 1$, we obtain that almost surely $${\mathbb E}_{\zeta}\Big({\rm 1 \hskip-2.9truept l}_{\Lambda_{l,m}^{j,k}}\Big)\le \exp\bigg (-2^{-1}\Big(4\,\sqrt{\log (3+j+|k|+m)}\Big)^2\bigg)=\big (3+j+|k|+m\big)^{-8}.$$ Thus, it follows from ([\[lem:bou2-Smjk:eq4\]](#lem:bou2-Smjk:eq4){reference-type="ref" reference="lem:bou2-Smjk:eq4"}) that $${\mathbb P}\big (\Lambda_{l,m}^{j,k}\big)\le \big (3+j+|k|+m\big)^{-8},$$ which implies ([\[lem:bou2-Smjk:eq3\]](#lem:bou2-Smjk:eq3){reference-type="ref" reference="lem:bou2-Smjk:eq3"}). $\square$ **Remark  ** [\[rem:ga\]]{#rem:ga label="rem:ga"} It follows from ([\[eq:ga\]](#eq:ga){reference-type="ref" reference="eq:ga"}) and the strong law of large numbers that $m^{-1} \Gamma_m\xrightarrow[m\rightarrow+\infty]{a.s.} 1$, which entails that there exist two positive finite random variables $C'<C''$ such that almost surely, $$\label{rem:ga:eq1} C' m\le \Gamma_m\le C'' m\, \ \hbox{ for all $m\in{\mathbb N}$}.$$   **Remark  ** [\[rem:epo\]]{#rem:epo label="rem:epo"} Let $({\cal E}_n)_{n\in{\mathbb N}}$ be an arbitrary sequence of identically distributed exponential random variables (which are not necessarily independent). Then, there is a positive finite random variable $C$ such that almost surely, $$\label{rem:epo:eq1} {\cal E}_n\le C \log(1+n)\, \ \mbox{for all $n\in{\mathbb N}$.}$$   To see this, denote by $\lambda>0$ the common value of the parameters of the ${\cal E}_n$'s. Then $$\sum_{n=1}^{+\infty} {\mathbb P}\big ({\cal E}_n>2\lambda^{-1}\log (n)\big)\le \sum_{n=1}^{+\infty} n^{-2} <+\infty.$$ Hence, ([\[rem:epo:eq1\]](#rem:epo:eq1){reference-type="ref" reference="rem:epo:eq1"}) follows from the Borel-Cantelli Lemma. **Lemma 12**.
*For $l\in\{0,1\}$ and $(j,k)\in{\mathbb Z}_+\times{\mathbb Z}$, the random series $$\label{lem:abel:eq1} \chi_{j,k}^{l}:=\sum_{m=1}^{+\infty} \big (\Gamma_m^{-1/\alpha}-\Gamma_{m+1}^{-1/\alpha}\big)S_{l,m}^{j,k}$$ is almost surely absolutely convergent. Moreover, one has almost surely that $$\label{lem:abel:eq2} {\rm Re}\,(\varepsilon_{\alpha,j,k})=a_\alpha\big(\chi_{j,k}^{0}-\chi_{j,k}^{1}\big).$$* In view of ([\[lem:bou2-Smjk:eq1\]](#lem:bou2-Smjk:eq1){reference-type="ref" reference="lem:bou2-Smjk:eq1"}), the inequality $$\label{lem:abel:eq3} \sqrt{B_m^j\log\big(3+j+|k|+m\big)}\le \Big (\sqrt{\log\big(3+j+|k|\big)}\,\Big)\sqrt{m\log(3+m)}\,,$$ and the fact that $\alpha\in [1,2)$, in order to prove that the random series in ([\[lem:abel:eq1\]](#lem:abel:eq1){reference-type="ref" reference="lem:abel:eq1"}) is almost surely absolutely convergent, it is enough to show that almost surely $$\label{lem:abel:eq4} \sup_{m\in {\mathbb N}} \,\frac{m^{1+\frac{1}{\alpha}}}{\log(1+m)}\, \big (\Gamma_m^{-1/\alpha}-\Gamma_{m+1}^{-1/\alpha}\big)<+\infty\,.$$ From elementary calculations and ([\[eq:ga\]](#eq:ga){reference-type="ref" reference="eq:ga"}), we derive that, for all $m\in{\mathbb N}$, $$\begin{aligned} \label{lem:abel:eq5} \Gamma_m^{-1/\alpha}-\Gamma_{m+1}^{-1/\alpha}=\frac{\big (\Gamma_m+{\cal E}_{m+1}\big)^{1/\alpha}-\Gamma_m^{1/\alpha}}{\Gamma_m^{1/\alpha}\Gamma_{m+1}^{1/\alpha}} =\frac{1}{\Gamma_{m+1}^{1/\alpha}}\bigg (\Big (1+\frac{{\cal E}_{m+1}}{\Gamma_m}\Big)^{1/\alpha}-1\bigg).\end{aligned}$$ By using the inequality $(1+x)^{1/\alpha}-1\le \alpha^{-1} x$, for all $x\in{\mathbb R}_+$, we derive that for all $m\in{\mathbb N}$, $$\label{lem:abel:eq6} \Gamma_m^{-1/\alpha}-\Gamma_{m+1}^{-1/\alpha}\le\frac{{\cal E}_{m+1}}{\alpha\Gamma_m \Gamma_{m+1}^{1/\alpha}}.$$ Finally, combining ([\[lem:abel:eq6\]](#lem:abel:eq6){reference-type="ref" reference="lem:abel:eq6"}) with ([\[rem:epo:eq1\]](#rem:epo:eq1){reference-type="ref" reference="rem:epo:eq1"}) and the first inequality in ([\[rem:ga:eq1\]](#rem:ga:eq1){reference-type="ref" reference="rem:ga:eq1"}) yields ([\[lem:abel:eq4\]](#lem:abel:eq4){reference-type="ref" reference="lem:abel:eq4"}). Now we show ([\[lem:abel:eq2\]](#lem:abel:eq2){reference-type="ref" reference="lem:abel:eq2"}). To this end we will use an Abel transform. For any integer $M\ge 2$, let ${\EuScript P}_M$ be the partial sum of order $M$ of the random series in ([\[def:riparts:eq2\]](#def:riparts:eq2){reference-type="ref" reference="def:riparts:eq2"}). 
That is $$\label{lem:abel:eq7} {\EuScript P}_M:=a_{\alpha}\sum_{m=1}^{M} \Gamma_m^{-1/\alpha}\big (\lambda_{0,m}^{j,k}g_{0,m}-\lambda_{1,m}^{j,k}g_{1,m}\big).$$ By using the notations in Definition [Definition 7](#def:Smjk){reference-type="ref" reference="def:Smjk"}, we can write ${\EuScript P}_M$ as $$\begin{aligned} && {\EuScript P}_M = a_{\alpha}\bigg (\sum_{m=1}^{M} \Gamma_m^{-1/\alpha}\Big (S_{0,m}^{j,k}-S_{0,m-1}^{j,k}\Big)-\sum_{m=1}^{M} \Gamma_m^{-1/\alpha} \Big (S_{1,m}^{j,k}-S_{1,m-1}^{j,k}\Big)\bigg)\nonumber\\ && =a_{\alpha}\bigg (\sum_{m=1}^{M} \Gamma_m^{-1/\alpha}S_{0,m}^{j,k}-\sum_{m=1}^{M-1} \Gamma_{m+1}^{-1/\alpha}S_{0,m}^{j,k}-\sum_{m=1}^{M} \Gamma_m^{-1/\alpha}S_{1,m}^{j,k}+\sum_{m=1}^{M-1} \Gamma_{m+1}^{-1/\alpha}S_{1,m}^{j,k}\bigg)\nonumber\\ &&=a_{\alpha}\bigg (\Gamma_M^{-1/\alpha}S_{0,M}^{j,k}-\Gamma_M^{-1/\alpha}S_{1,M}^{j,k}+\sum_{m=1}^{M-1}\big (\Gamma_m^{-1/\alpha}-\Gamma_{m+1}^{-1/\alpha}\big) S_{0,m}^{j,k}-\sum_{m=1}^{M-1}\big (\Gamma_m^{-1/\alpha}-\Gamma_{m+1}^{-1/\alpha}\big)S_{1,m}^{j,k}\bigg).\end{aligned}$$ Thus, in view of ([\[lem:abel:eq1\]](#lem:abel:eq1){reference-type="ref" reference="lem:abel:eq1"}) and the fact that ${\EuScript P}_M$ converges almost surely to ${\rm Re}\,(\varepsilon_{\alpha,j,k})$ as $M \to +\infty$, we see that, for proving ([\[lem:abel:eq2\]](#lem:abel:eq2){reference-type="ref" reference="lem:abel:eq2"}), it is enough to show that, for $l\in\{0,1\}$, $$\label{lem:abel:eq8} \Gamma_M^{-1/\alpha}S_{l,M}^{j,k}\xrightarrow[M\rightarrow+\infty]{a.s.} 0\,.$$ Putting together ([\[lem:bou2-Smjk:eq1\]](#lem:bou2-Smjk:eq1){reference-type="ref" reference="lem:bou2-Smjk:eq1"}), ([\[lem:abel:eq3\]](#lem:abel:eq3){reference-type="ref" reference="lem:abel:eq3"}), the first inequality in ([\[rem:ga:eq1\]](#rem:ga:eq1){reference-type="ref" reference="rem:ga:eq1"}) and the fact that $1/\alpha>1/2$, it follows that ([\[lem:abel:eq8\]](#lem:abel:eq8){reference-type="ref" reference="lem:abel:eq8"}) is satisfied. This finishes the proof. $\square$ **Lemma 13**. *For each $j\in{\mathbb Z}_+$, let $$\label{lem:sumBin:eq1} p_j:={\mathbb P}\Big (\big\{\omega\in\Omega,\,\, 2^{-j}\zeta_1\in {\EuScript R}_0\big\}\Big)=\frac{\varepsilon}{2}\int_{2^{j+1}\frac{\pi}{3}}^{2^{j+3}\frac{\pi}{3}}\frac{d\xi}{\xi (1+\log \xi )^{1+\varepsilon}}\,,$$ where the second equality follows from the facts that ${\EuScript R}_0:=\big\{\xi\in{\mathbb R}\,:\, 2\pi/3\le |\xi|\le 8\pi/3\big\}$ and the probability density function of $\zeta_1$ is the even function $\varphi$ given by ([\[eq:vp\]](#eq:vp){reference-type="ref" reference="eq:vp"}). 
Then there is an event $\widetilde{\Omega}$ of probability $1$ with the following property: for each fixed $\eta \in (1/2,1)$, there exists a finite positive random variable $\widetilde{C}_\eta$ such that on $\widetilde{\Omega}$ $$\label{lem:sumBin:eq2} B_m^j\le \widetilde{C}_\eta\big (p_j m+m^\eta\big),\quad\mbox{for all $(j,m)\in{\mathbb Z}_+\times{\mathbb N}$.}$$* For proving the lemma, it is enough to show that for any fixed $\eta\in(1/2,1)$, $$\label{lem:sumBin:eq3} \sum_{j=0}^{+\infty}\,\sum_{m=1}^{+\infty}{\mathbb P}\Big (\big |B_m^j-p_j m\big| >m^\eta\Big)<+\infty\,.$$ Indeed, ([\[lem:sumBin:eq3\]](#lem:sumBin:eq3){reference-type="ref" reference="lem:sumBin:eq3"}) implies that $$\label{lem:sumBin:eq3ter} {\mathbb P}\big (\widetilde{\Omega}_{\eta}\big)=1,$$ where the event $$\label{lem:sumBin:eq3quat} \widetilde{\Omega}_{\eta}:=\bigcup_{v=1}^{+\infty}\bigcap_{(j,m)\in\mathcal{I}(v)}\big\{B_m^j\le p_j m+m^\eta\big\}$$ with $\mathcal{I}(v):=\big\{(j,m)\in{\mathbb Z}_+\times{\mathbb N}\,:\, j+m\ge v\big\}$ for each $v\in{\mathbb N}$. Thus, setting $\widetilde{\Omega}:= \bigcap_{\eta\in {\mathbb Q}\cap (1/2,1)}\widetilde{\Omega}_{\eta}$, we can derive from ([\[lem:sumBin:eq3quat\]](#lem:sumBin:eq3quat){reference-type="ref" reference="lem:sumBin:eq3quat"}) and ([\[lem:sumBin:eq3ter\]](#lem:sumBin:eq3ter){reference-type="ref" reference="lem:sumBin:eq3ter"}) that, for any $\eta\in (1/2,1)$, the positive random variable $$\label{lem:sumBin:eq3bis} \widetilde{C}_\eta:=\sup_{(j,m)\in{\mathbb Z}_+\times{\mathbb N}}\Big\{\big ( p_j m+m^\eta\big)^{-1} B_m^j\Big\}$$ is finite on the event $\widetilde{\Omega}$ of probability $1$. Moreover, it can easily be seen that ([\[lem:sumBin:eq2\]](#lem:sumBin:eq2){reference-type="ref" reference="lem:sumBin:eq2"}) holds on $\widetilde{\Omega}$ when $\widetilde{C}_\eta$ is defined through ([\[lem:sumBin:eq3bis\]](#lem:sumBin:eq3bis){reference-type="ref" reference="lem:sumBin:eq3bis"}). Now it remains to prove ([\[lem:sumBin:eq3\]](#lem:sumBin:eq3){reference-type="ref" reference="lem:sumBin:eq3"}). For each $j\in{\mathbb Z}_+$ and $n\in{\mathbb N}$, denote by $\widetilde{\beta}_n^j$ the centered random variable defined as $$\label{lem:sumBin:eq4} \widetilde{\beta}_n^j:=\beta_n^j -p_j\,,$$ where $\beta_n^j$ is the Bernoulli random variable defined in ([\[def:ber-bin:eq1\]](#def:ber-bin:eq1){reference-type="ref" reference="def:ber-bin:eq1"}). Let $q$ be a positive integer which will be chosen later. 
It follows from the Markov inequality that for all $j\in{\mathbb Z}_+$ and $m\in{\mathbb N}$, $$\label{lem:sumBin:eq5} {\mathbb P}\Big (\big |B_m^j-p_j m\big| >m^\eta\Big)\le m^{-2\eta q}\,{\mathbb E}\bigg ( \Big (\sum_{n=1}^m \widetilde{\beta}_n^j\Big)^{2q}\bigg)\,.$$ In order to estimate $\displaystyle{\mathbb E}\bigg ( \Big (\sum_{n=1}^m \widetilde{\beta}_n^j\Big)^{2q}\bigg)$, we write it as $$\label{lem:sumBin:eq6} {\mathbb E}\bigg ( \Big (\sum_{n=1}^m \widetilde{\beta}_n^j\Big)^{2q}\bigg)=\sum_{1\le n_1,\ldots, n_{2q}\le m}\,{\mathbb E}\bigg (\prod_{p=1}^{2q} \widetilde{\beta}^j_{n_p}\bigg)\,.$$ Notice that each $\displaystyle\prod_{p=1}^{2q} \widetilde{\beta}^j_{n_p}$ can be expressed, for some $r\in\{1,\ldots,2q\}$ and some distinct integers $\nu_1,\ldots,\nu_r$ satisfying to $1\le\nu_1<\ldots<\nu_r\le m$, as $$\label{lem:sumBin:eq7} \prod_{p=1}^{2q} \widetilde{\beta}^j_{n_p}=\prod_{u=1}^{r} \Big (\widetilde{\beta}^j_{\nu_u}\Big)^{a_u}\,,$$ where $a_1,\ldots, a_r$ belong to $\{1,\ldots,2q\}$ and satisfy $$\label{lem:sumBin:eq8} \sum_{u=1}^r a_u=2q\,.$$ Then, by the independence of the centered random variables $\widetilde{\beta}^j_{\nu_1},\ldots , \widetilde{\beta}^j_{\nu_r}$, we have $$\label{lem:sumBin:eq9} {\mathbb E}\bigg (\prod_{p=1}^{2q} \widetilde{\beta}^j_{n_p}\bigg)=\prod_{u=1}^{r} {\mathbb E}\bigg(\Big (\widetilde{\beta}^j_{\nu_u}\Big)^{a_u}\bigg)\,,$$ which implies that the latter expectation vanishes as soon as $\min\{a_1,\ldots, a_r\}=1$. Thus, we only need to consider the case where $\min\{a_1,\ldots, a_r\}\ge 2$, which implies $r\le q$ because of the equality ([\[lem:sumBin:eq8\]](#lem:sumBin:eq8){reference-type="ref" reference="lem:sumBin:eq8"}). Next notice that for any given $r\in\{1,\ldots, q\}$, distinct integers $1\le\nu_1<\ldots<\nu_r\le m$, and arbitrary numbers $a_1,\ldots, a_r$ belonging to $\{2,\ldots, 2q\}$ and satisfying ([\[lem:sumBin:eq8\]](#lem:sumBin:eq8){reference-type="ref" reference="lem:sumBin:eq8"}), there are exactly $$\binom{2q}{a_1,\ldots, a_r}:=\frac{(2q)!}{a_1!\times\ldots\times a_r!}$$ tuples $(n_1,\ldots, n_{2q})$ of numbers belonging to $\{1,\ldots,m\}$ for which the equality ([\[lem:sumBin:eq7\]](#lem:sumBin:eq7){reference-type="ref" reference="lem:sumBin:eq7"}) holds. 
Thus, one can derive from ([\[lem:sumBin:eq6\]](#lem:sumBin:eq6){reference-type="ref" reference="lem:sumBin:eq6"}) and ([\[lem:sumBin:eq9\]](#lem:sumBin:eq9){reference-type="ref" reference="lem:sumBin:eq9"}) that $$\label{lem:sumBin:eq10} {\mathbb E}\bigg ( \Big (\sum_{n=1}^m \widetilde{\beta}_n^j\Big)^{2q}\bigg)=\sum_{r=1}^{q}\,\,\sum_{1\le\nu_1<\ldots<\nu_r\le m}\,\, \sum_{(a_1,\ldots, a_r)\in{\EuScript A}_{2q,r}}\binom{2q}{a_1,\ldots, a_r}\prod_{u=1}^{r} {\mathbb E}\bigg(\Big (\widetilde{\beta}^j_{\nu_u}\Big)^{a_u}\bigg)\,,$$ where, for all $r\in\{1,\ldots, q\}$, $$\label{lem:sumBin:eq11} {\EuScript A}_{2q,r}:=\Big\{(a_1,\ldots, a_r)\in\{2,\ldots,2q\}^r,\,\,\,\,\sum_{u=1}^r a_u=2q\Big\}\,.$$ Next, we claim $$\label{lem:sumBin:eq12} \bigg|{\mathbb E}\bigg(\Big (\widetilde{\beta}^j_{n}\Big)^{a}\bigg)\bigg |\le p_j\,,\quad\mbox{for all $(n,j,a)\in{\mathbb N}\times{\mathbb Z}_+\times \{2,\ldots,2q\}$.}$$ Indeed, it follows from ([\[lem:sumBin:eq4\]](#lem:sumBin:eq4){reference-type="ref" reference="lem:sumBin:eq4"}), the facts that $\beta_{n}^j$ is a Bernoulli random variable with parameter equals to $p_j$ (defined in ([\[lem:sumBin:eq1\]](#lem:sumBin:eq1){reference-type="ref" reference="lem:sumBin:eq1"})) and $a\ge 2$ that $$\begin{split} \bigg|{\mathbb E}\bigg(\Big (\widetilde{\beta}^j_{n}\Big)^{a}\bigg)\bigg| &=\big |p_j (1-p_j)^a+(1-p_j)(-p_j)^a\big|\\ &\le (1-p_j) \Big ((1-p_j)^{a-1}+p_j^{a-1}\Big) p_j \le p_j\,. \end{split}$$ This verifies ([\[lem:sumBin:eq12\]](#lem:sumBin:eq12){reference-type="ref" reference="lem:sumBin:eq12"}). Let $c_1(q)$ be the finite deterministic constant, only depending on $q$, defined as $$\label{lem:sumBin:eq13} c_1 (q):=\sum_{r=1}^{q}\,\,\sum_{(a_1,\ldots, a_r)\in{\EuScript A}_{2q,r}}\binom{2q}{a_1,\ldots, a_r}\,.$$ Then, one can derive from ([\[lem:sumBin:eq10\]](#lem:sumBin:eq10){reference-type="ref" reference="lem:sumBin:eq10"}), ([\[lem:sumBin:eq12\]](#lem:sumBin:eq12){reference-type="ref" reference="lem:sumBin:eq12"}) and ([\[lem:sumBin:eq13\]](#lem:sumBin:eq13){reference-type="ref" reference="lem:sumBin:eq13"}) that for all $(j,m)\in{\mathbb Z}_+\times{\mathbb N}$, $$\label{lem:sumBin:eq14} {\mathbb E}\bigg ( \Big (\sum_{n=1}^m \widetilde{\beta}_n^j\Big)^{2q}\bigg)\le c_1 (q) m^q\, p_j.$$ By combining ([\[lem:sumBin:eq5\]](#lem:sumBin:eq5){reference-type="ref" reference="lem:sumBin:eq5"}) and ([\[lem:sumBin:eq14\]](#lem:sumBin:eq14){reference-type="ref" reference="lem:sumBin:eq14"}), we obtain that for all $(j,m)\in{\mathbb Z}_+\times{\mathbb N}$, $$\label{lem:sumBin:eq15} {\mathbb P}\Big (\big |B_m^j-p_j m\big| >m^\eta\Big)\le c_1 (q) m^{-(2\eta -1)q}\,p_j\,.$$ Since $2\eta-1>0$, we choose the integer $q$ large enough so that $(2\eta-1)q>1$. Then, ([\[lem:sumBin:eq3\]](#lem:sumBin:eq3){reference-type="ref" reference="lem:sumBin:eq3"}) follows from ([\[lem:sumBin:eq15\]](#lem:sumBin:eq15){reference-type="ref" reference="lem:sumBin:eq15"}) and ([\[lem:sumBin:eq1\]](#lem:sumBin:eq1){reference-type="ref" reference="lem:sumBin:eq1"}). $\square$ We are now ready to prove Proposition [Proposition 2](#prop:fund){reference-type="ref" reference="prop:fund"}.   
First, it follows from ([\[lem:abel:eq4\]](#lem:abel:eq4){reference-type="ref" reference="lem:abel:eq4"}) that there is a positive finite random variable $C_1$ such that almost surely, $$\label{prop:fund:eq2bis} 0<\Gamma_m^{-1/\alpha}-\Gamma_{m+1}^{-1/\alpha}\le C_1 m^{-1-1/\alpha}\log (1+m)\,, \quad\mbox{for all $m\in{\mathbb N}$.}$$ Since $1/\alpha>1/2$, we can choose and fix a constant $\eta_0\in (1/2,1)$ such that $$\label{prop:fund:eq3} 1+\frac{1}{\alpha}-\eta_0>1\,.$$ For each $j\in{\mathbb Z}_+$, denote by ${\EuScript M}_j$ and $\overline{{\EuScript M}}_j$ the two nonempty sets of indices $m$ defined as $$\label{prop:fund:eq4} {\EuScript M}_j:=\big\{m\in{\mathbb N},\,\, p_j m\ge m^{\eta_0}\big\}$$ and $$\label{prop:fund:eq5} \overline{{\EuScript M}}_j:={\mathbb N}\setminus {\EuScript M}_j=\big\{m\in{\mathbb N},\,\, p_j m < m^{\eta_0}\big\},$$ where the probability $p_j\in (0,1)$ is defined through ([\[lem:sumBin:eq1\]](#lem:sumBin:eq1){reference-type="ref" reference="lem:sumBin:eq1"}). Then, for every $j\in{\mathbb Z}_+$, $$\label{prop:fund:eq5bis} {\mathbb N}={\EuScript M}_j \cup \overline{{\EuScript M}}_j\,, \quad\mbox{(disjoint union)}$$ $$\label{prop:fund:eq6} p_j m+m^{\eta_0}\le 2 p_j m\,,\quad\mbox{for all $m\in{\EuScript M}_j$}$$ and $$\label{prop:fund:eq7} p_j m+m^{\eta_0}< 2 m^{\eta_0}\,,\quad\mbox{for all $m\in\overline{{\EuScript M}}_j$.}$$ In all the sequel $l\in\{0,1\}$ is arbitrary, and $j\in{\mathbb Z}_+$ and $k\in{\mathbb Z}$ are arbitrary and such that ([\[prop:fund:eq1\]](#prop:fund:eq1){reference-type="ref" reference="prop:fund:eq1"}) holds. It follows from ([\[prop:fund:eq2bis\]](#prop:fund:eq2bis){reference-type="ref" reference="prop:fund:eq2bis"}), ([\[lem:bou1-Smjk:eq1\]](#lem:bou1-Smjk:eq1){reference-type="ref" reference="lem:bou1-Smjk:eq1"}), ([\[lem:sumBin:eq2\]](#lem:sumBin:eq2){reference-type="ref" reference="lem:sumBin:eq2"}), ([\[prop:fund:eq7\]](#prop:fund:eq7){reference-type="ref" reference="prop:fund:eq7"}) and ([\[prop:fund:eq3\]](#prop:fund:eq3){reference-type="ref" reference="prop:fund:eq3"}) that almost surely, $$\begin{aligned} \label{prop:fund:eq8} && \sum_{m\in\overline{{\EuScript M}}_j} \big (\Gamma_m^{-1/\alpha}-\Gamma_{m+1}^{-1/\alpha}\big)\big |S_{l,m}^{j,k}\big |\nonumber\\ && \le C_2(1+j)^{\frac{1+\epsilon}{\alpha}}\sum_{m\in\overline{{\EuScript M}}_j} B_m^j \, m^{-1-1/\alpha}\big(\log (1+m)\big)^{3/2}\nonumber\\ && \le C_3 (1+j)^{\frac{1+\epsilon}{\alpha}}\sum_{m\in\overline{{\EuScript M}}_j} m^{-(1+1/\alpha-\eta_0)}\big(\log (1+m)\big)^{3/2}\nonumber\\ && \le C_4 (1+j)^{\frac{1+\epsilon}{\alpha}}\,,\end{aligned}$$ where $C_2$ and $C_3$ are two positive finite random variables not depending on $j$ and $k$, and $$C_4:=C_3 \sum_{m=1}^{+\infty} m^{-(1+1/\alpha-\eta_0)}\big(\log (1+m)\big)^{3/2}<+\infty\,.$$ On the other hand, by using ([\[prop:fund:eq2bis\]](#prop:fund:eq2bis){reference-type="ref" reference="prop:fund:eq2bis"}), ([\[lem:bou2-Smjk:eq1\]](#lem:bou2-Smjk:eq1){reference-type="ref" reference="lem:bou2-Smjk:eq1"}), the inequality $$\log \big (3+j+|k|+m\big)\le \log \big (3+j+|k|\big) \log (3+m),\quad\mbox{for all $(j,k,m)\in{\mathbb Z}_+\times{\mathbb Z}\times{\mathbb N}$,}$$ ([\[prop:fund:eq1\]](#prop:fund:eq1){reference-type="ref" reference="prop:fund:eq1"}), ([\[lem:sumBin:eq2\]](#lem:sumBin:eq2){reference-type="ref" reference="lem:sumBin:eq2"}), ([\[prop:fund:eq6\]](#prop:fund:eq6){reference-type="ref" reference="prop:fund:eq6"}), and the inequality $1/2+1/\alpha>1$, we have almost surely, $$\begin{aligned} 
\label{prop:fund:eq9} && \sum_{m\in{\EuScript M}_j} \big (\Gamma_m^{-1/\alpha}-\Gamma_{m+1}^{-1/\alpha}\big)\big |S_{l,m}^{j,k}\big |\nonumber\\ && \le C_5 (1+j)^{\frac{1+\epsilon}{\alpha}}\, \sqrt{\log\big(3+j+|k|\big)}\sum_{m\in{\EuScript M}_j}\big (B_m^j\big)^{1/2} m^{-(1+1/\alpha)}\big (\log(3+m)\big)^{3/2}\nonumber\\ && \le C_6 (1+j)^{\frac{1+\epsilon}{\alpha}} \, \sqrt{p_j (1+j)}\sum_{m\in{\EuScript M}_j}m^{-(1/2+1/\alpha)}\big (\log(3+m)\big)^{3/2}\nonumber\\ && \le C_7 (1+j)^{\frac{1+\epsilon}{\alpha}} \, \sqrt{p_j (1+j)}\,,\end{aligned}$$ where $C_5$ and $C_6$ are two positive finite random variables not depending on $j$ and $k$, and $$C_7:=C_6 \sum_{m=1}^{+\infty} m^{-(1/2+1/\alpha)}\big(\log (3+m)\big)^{3/2}<+\infty\,.$$ It follows from ([\[lem:sumBin:eq1\]](#lem:sumBin:eq1){reference-type="ref" reference="lem:sumBin:eq1"}) that $$\begin{split} p_j &\le \frac{\varepsilon\pi}{6} \big (2^{j+3}-2^{j+1}\big)\Big (\frac{2^{j+1}\pi}{3}\Big)^{-1}\bigg (1+\log \Big (\frac{2^{j+1}\pi}{3}\Big)\bigg)^{-1-\varepsilon}\\ &\le 6 (1+j)^{-1-\varepsilon}\,. \end{split}$$ Thus, $$\label{prop:fund:eq10} \sqrt{p_j (1+j)}\le \sqrt{6}\,,\quad\mbox{for all $j\in{\mathbb Z}_+$.}$$ By ([\[prop:fund:eq9\]](#prop:fund:eq9){reference-type="ref" reference="prop:fund:eq9"}) and ([\[prop:fund:eq10\]](#prop:fund:eq10){reference-type="ref" reference="prop:fund:eq10"}), we have $$\label{prop:fund:eq11} \sum_{m\in{\EuScript M}_j} \big (\Gamma_m^{-1/\alpha}-\Gamma_{m+1}^{-1/\alpha}\big)\big |S_{l,m}^{j,k}\big | \le \sqrt{6}\, C_7 (1+j)^{\frac{1+\epsilon}{\alpha}}\,.$$ Finally, by combining ([\[lem:abel:eq2\]](#lem:abel:eq2){reference-type="ref" reference="lem:abel:eq2"}), the triangle inequality, ([\[lem:abel:eq1\]](#lem:abel:eq1){reference-type="ref" reference="lem:abel:eq1"}), ([\[prop:fund:eq5bis\]](#prop:fund:eq5bis){reference-type="ref" reference="prop:fund:eq5bis"}), ([\[prop:fund:eq8\]](#prop:fund:eq8){reference-type="ref" reference="prop:fund:eq8"}), and ([\[prop:fund:eq11\]](#prop:fund:eq11){reference-type="ref" reference="prop:fund:eq11"}), we obtain almost surely $$\begin{aligned} && \big |{\rm Re}\,(\varepsilon_{\alpha,j,k})\big |\le a_\alpha\sum_{l=0}^1 \big |\chi_{j,k}^{l}\big|\le a_\alpha\sum_{l=0}^1\sum_{m\in{\mathbb N}} \big (\Gamma_m^{-1/\alpha}- \Gamma_{m+1}^{-1/\alpha}\big)\big |S_{l,m}^{j,k}\big |\\ && \le a_\alpha\sum_{l=0}^1\Big (\sum_{m\in{\EuScript M}_j} \big (\Gamma_m^{-1/\alpha}-\Gamma_{m+1}^{-1/\alpha}\big)\big |S_{l,m}^{j,k}\big | +\sum_{m\in\overline{{\EuScript M}}_j} \big (\Gamma_m^{-1/\alpha}-\Gamma_{m+1}^{-1/\alpha}\big)\big |S_{l,m}^{j,k}\big |\Big)\\ && \le 2 a_\alpha\big (C_4+\sqrt{6}\, C_7\big) (1+j)^{\frac{1+\epsilon}{\alpha}}.\end{aligned}$$ This proves ([\[prop:fund:eq2\]](#prop:fund:eq2){reference-type="ref" reference="prop:fund:eq2"}). $\square$ # Proofs of Theorems [Theorem 3](#thm:main1){reference-type="ref" reference="thm:main1"} and [Theorem 4](#thm:main2){reference-type="ref" reference="thm:main2"} {#sec:p-main}  Let $\widetilde{\rho}$ be a positive constant. 
For every $j\in{\mathbb Z}_+$, the two nonempty sets ${\EuScript K}_j (\widetilde{\rho}\,)$ and $\overline{{\EuScript K}}_j (\widetilde{\rho}\,)$, which form a partition of ${\mathbb Z}$, are defined as $$\label{thm:main1:eq0} {\EuScript K}_j (\widetilde{\rho}\,):=\big\{k\in{\mathbb Z},\, |k|\le 2^j (\widetilde{\rho}+1)\big\}$$ and $$\label{thm:main1:eq0bis} \overline{{\EuScript K}}_j (\widetilde{\rho}\,):=\big\{k\in{\mathbb Z},\, |k|> 2^j (\widetilde{\rho}+1)\big\}.$$ It follows from ([\[eq:rep-hfsm\]](#eq:rep-hfsm){reference-type="ref" reference="eq:rep-hfsm"}) that the HFSM $\{X(t), t\in{\mathbb R}\}$ can be expressed, for all $t\in{\mathbb R}$, as $$\label{thm:main1:eq1} X(t)=X^{-}(t)+X_1^{+}(t)+X_2^{+}(t),$$ where the process $\{X^{-}(t), t\in{\mathbb R}\}$ is the low-frequency part of the HFSM defined, for every $t\in{\mathbb R}$, as $$\label{thm:main1:eq2} X^{-}(t):=\sum_{j=-\infty}^{-1}\sum_{k=-\infty}^{+\infty} 2^{-jH}{\rm Re}\,\big (\varepsilon_{\alpha,j,k}\big)\big (\Psi_{\alpha,H}(2^j t-k)-\Psi_{\alpha,H}(-k)\big),$$ while the two processes $\{X_1^{+}(t), t\in{\mathbb R}\}$ and $\{X_2^{+}(t), t\in{\mathbb R}\}$, whose sum gives the high-frequency part of the HFSM, are defined, for each $t\in{\mathbb R}$, as $$\label{thm:main1:eq3} X_1^{+}(t):=\sum_{j=0}^{+\infty}\sum_{k\in{\EuScript K}_j (\widetilde{\rho}\,)} 2^{-jH}{\rm Re}\,\big (\varepsilon_{\alpha,j,k}\big)\big (\Psi_{\alpha,H}(2^j t-k)-\Psi_{\alpha,H}(-k)\big)$$ and $$\label{thm:main1:eq4} X_2^{+}(t):=\sum_{j=0}^{+\infty}\sum_{k\in\overline{{\EuScript K}}_j (\widetilde{\rho}\,)} 2^{-jH}{\rm Re}\,\big (\varepsilon_{\alpha,j,k}\big)\big (\Psi_{\alpha,H}(2^j t-k)-\Psi_{\alpha,H}(-k)\big).$$ It is known from Proposition 2.15 in [@AB17] that $\{X^{-}(t), t\in{\mathbb R}\}$ has almost surely infinitely differentiable paths. Thus, in view of ([\[thm:main1:eq1\]](#thm:main1:eq1){reference-type="ref" reference="thm:main1:eq1"}), for proving the theorem it is enough to show that, for all $H\in (0,1)$, $\alpha\in [1,2)$ and arbitrarily small $\delta>0$, we have almost surely $$\label{thm:main1:eq5} \sup_{-\widetilde{\rho}\le t'<t''\le \widetilde{\rho}}\,\frac{\big | X_1^+(t')-X_1^+(t'')\big |}{\big |t'-t'' \big |^{H}\big (\log(1+|t'-t''|^{-1})\big)^{1/\alpha+\delta}}<+\infty$$ and $$\label{thm:main1:eq6} \sup_{-\widetilde{\rho}\le t'<t''\le \widetilde{\rho}}\,\frac{\big | X_2^+(t')-X_2^+(t'')\big |}{\big |t'-t'' \big |^{H}\big (\log(1+|t'-t''|^{-1})\big)^{1/\alpha+\delta}}<+\infty.$$ First, we prove ([\[thm:main1:eq5\]](#thm:main1:eq5){reference-type="ref" reference="thm:main1:eq5"}). To this end, we apply Proposition [Proposition 2](#prop:fund){reference-type="ref" reference="prop:fund"} with $\rho=\widetilde{\rho}+1$. Let $\Omega^*$ be the event of probability $1$ in this proposition, and let $t'$ and $t''$ be two arbitrary real numbers such that $-\widetilde{\rho}\le t'<t''\le \widetilde{\rho}$. It follows from ([\[thm:main1:eq0\]](#thm:main1:eq0){reference-type="ref" reference="thm:main1:eq0"}) that ([\[prop:fund:eq1\]](#prop:fund:eq1){reference-type="ref" reference="prop:fund:eq1"}) holds for all $(j,k)\in{\mathbb Z}_+\times{\mathbb Z}$ such that $k \in {\EuScript K}_j (\widetilde{\rho}\,)$.
Hence, it results from ([\[thm:main1:eq3\]](#thm:main1:eq3){reference-type="ref" reference="thm:main1:eq3"}) and ([\[prop:fund:eq2\]](#prop:fund:eq2){reference-type="ref" reference="prop:fund:eq2"}) that on $\Omega^*$, $$\begin{aligned} \label{thm:main1:eq7} && \big | X_1^+(t')-X_1^+(t'')\big | \le \sum_{j=0}^{+\infty}\sum_{k\in{\EuScript K}_j (\widetilde{\rho}\,)} 2^{-jH}\big |{\rm Re}\,\big (\varepsilon_{\alpha,j,k}\big)\big |\big | \Psi_{\alpha,H}(2^j t'-k)-\Psi_{\alpha,H}(2^j t''-k)\big |\nonumber\\ &&\le C_1 \sum_{j=0}^{+\infty}2^{-jH}(1+j)^{1/\alpha+\delta} \sum_{k\in{\EuScript K}_j (\widetilde{\rho}\,)} \big |\Psi_{\alpha,H}(2^j t'-k)-\Psi_{\alpha,H}(2^j t''-k)\big |\nonumber\\ &&\le C_1 \sum_{j=0}^{+\infty}2^{-jH}(1+j)^{1/\alpha+\delta} \sum_{k\in{\mathbb Z}} \big |\Psi_{\alpha,H}(2^j t'-k)-\Psi_{\alpha,H}(2^j t''-k)\big |,\end{aligned}$$ where $C_1$ is a positive finite random variable not depending on $t'$ and $t''$. Since the function $\Psi_{\alpha,H}$ belongs to the Schwartz class, this function and its derivative $\Psi_{\alpha,H}'$ satisfy, for some finite constant $c_2$ and for all $y\in{\mathbb R}$, $$\label{thm:main1:eq8} \big |\Psi_{\alpha,H}(y)\big |+\big |\Psi_{\alpha,H}'(y)\big |\le c_2 \big (1+2\widetilde{\rho}+|y|\big)^{-3}.$$ Observe that $$\label{thm:main1:eq8bis} c_3:=\sup_{y\in{\mathbb R}}\sum_{k\in{\mathbb Z}} \big (1+|y-k|\big)^{-3}<+\infty.$$ Next, let $j_0$ be the unique nonnegative integer such that $$\label{thm:main1:eq9} 2^{-j_0-1}(2\widetilde{\rho})<|t'-t''|\le 2^{-j_0} (2\widetilde{\rho}),$$ that is $$\label{thm:main1:eq10} j_0:=\left [\frac{\log\big ((2\widetilde{\rho})|t'-t''|^{-1}\big) }{\log (2)}\right],$$ where $[\cdot]$ denotes the integer part function. Using the mean-value Theorem, ([\[thm:main1:eq8\]](#thm:main1:eq8){reference-type="ref" reference="thm:main1:eq8"}), ([\[thm:main1:eq9\]](#thm:main1:eq9){reference-type="ref" reference="thm:main1:eq9"}) and ([\[thm:main1:eq8bis\]](#thm:main1:eq8bis){reference-type="ref" reference="thm:main1:eq8bis"}), it can be shown, for all $j\in\{0,\ldots,j_0\}$, that $$\begin{aligned} \label{thm:main1:eq11} \sum_{k\in{\mathbb Z}} \big |\Psi_{\alpha,H}(2^j t'-k)-\Psi_{\alpha,H}(2^j t''-k)\big | &\le & c_2 2^j |t'-t''|\sum_{k\in{\mathbb Z}} \big (1+|2^j t'-k|\big)^{-3}\nonumber\\ &\le & c_2 c_3 2^j |t'-t''|.\end{aligned}$$ It follows from ([\[thm:main1:eq11\]](#thm:main1:eq11){reference-type="ref" reference="thm:main1:eq11"}), ([\[thm:main1:eq10\]](#thm:main1:eq10){reference-type="ref" reference="thm:main1:eq10"}) and ([\[thm:main1:eq9\]](#thm:main1:eq9){reference-type="ref" reference="thm:main1:eq9"}) that $$\label{thm:main1:eq12} \begin{split} & \sum_{j=0}^{j_0}2^{-jH}(1+j)^{1/\alpha+\delta} \sum_{k\in{\mathbb Z}} \big |\Psi_{\alpha,H}(2^j t'-k)-\Psi_{\alpha,H}(2^j t''-k)\big |\\ & \le c_2 c_3 |t'-t''| (1+j_0)^{1/\alpha+\delta}\sum_{j=0}^{j_0} 2^{j(1-H)}\\ &\le c_4 \big |t'-t'' \big |^{H}\big (\log(1+|t'-t''|^{-1})\big)^{1/\alpha+\delta}, \end{split}$$ where the positive finite constant $c_4$ does not depend on $j_0$, $t'$ and $t''$.
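For the last inequality in ([\[thm:main1:eq12\]](#thm:main1:eq12){reference-type="ref" reference="thm:main1:eq12"}), here is a sketch of the elementary step (added for clarity; the constants below are ours and not those of the original text): since $H<1$, $$\sum_{j=0}^{j_0} 2^{j(1-H)}\le \frac{2^{(j_0+1)(1-H)}}{2^{1-H}-1}\,,$$ and ([\[thm:main1:eq9\]](#thm:main1:eq9){reference-type="ref" reference="thm:main1:eq9"}) gives $2^{j_0}\le (2\widetilde{\rho})\,|t'-t''|^{-1}$, so that $$|t'-t''|\,2^{j_0(1-H)}\le (2\widetilde{\rho})^{1-H}\,\big |t'-t'' \big |^{H},$$ while ([\[thm:main1:eq10\]](#thm:main1:eq10){reference-type="ref" reference="thm:main1:eq10"}) shows that $(1+j_0)^{1/\alpha+\delta}$ is bounded, up to a constant depending only on $\widetilde{\rho}$, $\alpha$ and $\delta$, by $\big (\log(1+|t'-t''|^{-1})\big)^{1/\alpha+\delta}$.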
On the other hand, one can derive from the triangle inequality, ([\[thm:main1:eq8\]](#thm:main1:eq8){reference-type="ref" reference="thm:main1:eq8"}) and ([\[thm:main1:eq8bis\]](#thm:main1:eq8bis){reference-type="ref" reference="thm:main1:eq8bis"}) that, for every $j\ge j_0+1$, $$\sum_{k\in{\mathbb Z}} \big |\Psi_{\alpha,H}(2^j t'-k)-\Psi_{\alpha,H}(2^j t''-k)\big |\le 2 c_2 c_3,$$ and consequently $$\begin{aligned} \label{thm:main1:eq13} && \sum_{j=j_0+1}^{+\infty}2^{-jH}(1+j)^{1/\alpha+\delta} \sum_{k\in{\mathbb Z}} \big |\Psi_{\alpha,H}(2^j t'-k)-\Psi_{\alpha,H}(2^j t''-k)\big |\nonumber\\ && \le 2 c_2 c_3 2^{-(j_0+1)H} \sum_{p=0}^{+\infty}2^{-pH}(2+j_0+p)^{1/\alpha+\delta} \nonumber\\ && \le 2 c_2 c_3 2^{-(j_0+1)H}(2+j_0)^{1/\alpha+\delta} \sum_{p=0}^{+\infty}2^{-pH}\Big (1+\frac{p}{2+j_0}\Big)^{1/\alpha+\delta} \nonumber\\ && \le \Big (2 c_2 c_3 \sum_{p=0}^{+\infty}2^{-pH}(1+p)^{1/\alpha+\delta}\Big) 2^{-(j_0+1)H}(2+j_0)^{1/\alpha+\delta} \nonumber\\ && \le c_5 \big |t'-t'' \big |^{H}\big (\log(1+|t'-t''|^{-1})\big)^{1/\alpha+\delta},\end{aligned}$$ where the last inequality follows from ([\[thm:main1:eq9\]](#thm:main1:eq9){reference-type="ref" reference="thm:main1:eq9"}) and ([\[thm:main1:eq10\]](#thm:main1:eq10){reference-type="ref" reference="thm:main1:eq10"}) and where $c_5$ is a positive and finite constant not depending on $j_0$, $t'$ and $t''$. Putting together ([\[thm:main1:eq7\]](#thm:main1:eq7){reference-type="ref" reference="thm:main1:eq7"}), ([\[thm:main1:eq12\]](#thm:main1:eq12){reference-type="ref" reference="thm:main1:eq12"}) and ([\[thm:main1:eq13\]](#thm:main1:eq13){reference-type="ref" reference="thm:main1:eq13"}) yields ([\[thm:main1:eq5\]](#thm:main1:eq5){reference-type="ref" reference="thm:main1:eq5"}). Next we show that ([\[thm:main1:eq6\]](#thm:main1:eq6){reference-type="ref" reference="thm:main1:eq6"}) is satisfied. Let $\Omega^{**}$ be the event of probability $1$ on which ([\[ineq2f:ep\]](#ineq2f:ep){reference-type="ref" reference="ineq2f:ep"}) holds, and let $t'$ and $t''$ be two arbitrary real numbers such that $-\widetilde{\rho}\le t'<t''\le \widetilde{\rho}$. It follows from ([\[ineq2f:ep\]](#ineq2f:ep){reference-type="ref" reference="ineq2f:ep"}) and ([\[thm:main1:eq4\]](#thm:main1:eq4){reference-type="ref" reference="thm:main1:eq4"}) that on $\Omega^{**}$, $$\begin{aligned} \label{thm:main1:eq14} && \big | X_2^+(t')-X_2^+(t'')\big | \le \sum_{j=0}^{+\infty}\sum_{k\in\overline{{\EuScript K}}_j (\widetilde{\rho}\,)} 2^{-jH}\big |{\rm Re}\,\big (\varepsilon_{\alpha,j,k}\big)\big |\big |\Psi_{\alpha,H}(2^j t'-k)-\Psi_{\alpha,H}(2^j t''-k)\big |\nonumber\\ &&\le C_6 \sum_{j=0}^{+\infty}2^{-jH}(1+j)^{1/\alpha+\delta} \sum_{k\in\overline{{\EuScript K}}_j (\widetilde{\rho}\,)} \sqrt{\log \big (3+j+|k|\big)}\,\big |\Psi_{\alpha,H}(2^j t'-k)-\Psi_{\alpha,H}(2^j t''-k)\big |\nonumber\\ &&\le C_7 \sum_{j=0}^{+\infty}2^{-jH}(1+j)^{1/\alpha+2\delta} \sum_{k\in\overline{{\EuScript K}}_j (\widetilde{\rho}\,)} \sqrt{\log \big (3+|k|\big)}\,\big |\Psi_{\alpha,H}(2^j t'-k)-\Psi_{\alpha,H}(2^j t''-k)\big |,\nonumber\\\end{aligned}$$ where $C_6$ and $C_7$ are two positive finite random variables not depending on $t'$ and $t''$. 
By the mean-value Theorem and ([\[thm:main1:eq8\]](#thm:main1:eq8){reference-type="ref" reference="thm:main1:eq8"}), we see that for all $j\in{\mathbb Z}_+$ and $k\in\overline{{\EuScript K}}_j (\widetilde{\rho}\,)$, $$\label{thm:main1:eq15} \begin{split} \big |\Psi_{\alpha,H}(2^j t'-k)-\Psi_{\alpha,H}(2^j t''-k)\big |&= 2^j |t'-t''| \big |\Psi_{\alpha,H}'(a-k)\big |\\ &\le c_2 2^j |t'-t''| \big (1+|a-k|\big)^{-3} \\ &\le c_2 2^j |t'-t''| \big (1+||k|-|a||\big)^{-3}, \end{split}$$ where $a$ is some real number satisfying $2^j t'< a <2^j t''$ which implies that $$\label{thm:main1:eq16} |a|\le 2^j\widetilde{\rho}.$$ Combining ([\[thm:main1:eq15\]](#thm:main1:eq15){reference-type="ref" reference="thm:main1:eq15"}) and ([\[thm:main1:eq16\]](#thm:main1:eq16){reference-type="ref" reference="thm:main1:eq16"}) with ([\[thm:main1:eq0bis\]](#thm:main1:eq0bis){reference-type="ref" reference="thm:main1:eq0bis"}), we obtain that for all $j\in{\mathbb Z}_+$ and $k\in\overline{{\EuScript K}}_j (\widetilde{\rho}\,)$, $$\label{thm:main1:eq17} \big |\Psi_{\alpha,H}(2^j t'-k)-\Psi_{\alpha,H}(2^j t''-k)\big |\le c_2 2^j |t'-t''| \big (1+|k|- 2^j\widetilde{\rho}\,\big)^{-3}$$ Then ([\[thm:main1:eq0bis\]](#thm:main1:eq0bis){reference-type="ref" reference="thm:main1:eq0bis"}) and ([\[thm:main1:eq17\]](#thm:main1:eq17){reference-type="ref" reference="thm:main1:eq17"}) entail, for every $j\in{\mathbb Z}_+$, that $$\begin{aligned} \label{thm:main1:eq18} && \sum_{k\in\overline{{\EuScript K}}_j (\widetilde{\rho}\,)} \sqrt{\log \big (3+|k|\big)}\,\big |\Psi_{\alpha,H}(2^j t'-k)-\Psi_{\alpha,H}(2^j t''-k)\big |\nonumber\\ &&\le c_2 2^{j+1} |t'-t''| \sum_{k=[2^j (\widetilde{\rho}+1)]+1}^{+\infty}\sqrt{\log \big (3+k\big)}\big (1+k- 2^j\widetilde{\rho}\,\big)^{-3}\nonumber\\ &&\le c_2 2^{j+1} |t'-t''| \sum_{q=0}^{+\infty} \sqrt{\log \big (4+q+2^j (\widetilde{\rho}+1)\big)}\big (1+q+2^j\big)^{-3}\nonumber\\ &&\le c_2 2^{j+1} |t'-t''| \sqrt{\log \big (3+2^j \widetilde{\rho}\,\big)}\,\sum_{q=0}^{+\infty} \sqrt{\log \big (4+q+2^j \big)}\,\big (1+q+2^j\big)^{-3}\nonumber\\ &&\le c_8 |t'-t''| \, 2^{j}\sqrt{j+1} \,\sum_{q=0}^{+\infty} (1+q+2^j\big)^{-5/2}\nonumber\\ &&\le c_8 |t'-t''| \, 2^{j}\sqrt{j+1} \,\int_{0}^{+\infty} (x+2^j\big)^{-5/2}\,dx\nonumber\\ && \le c_8 |t'-t''| \, 2^{-j/2}\sqrt{j+1}, \end{aligned}$$ where $c_8$ is a positive finite constant not depending on $j$, $t'$ and $t''$. Next, it follows from ([\[thm:main1:eq14\]](#thm:main1:eq14){reference-type="ref" reference="thm:main1:eq14"}) and ([\[thm:main1:eq18\]](#thm:main1:eq18){reference-type="ref" reference="thm:main1:eq18"}) that on the event $\Omega^{**}$ of probability $1$, $$\label{thm:main1:eq19} \big | X_2^+(t')-X_2^+(t'')\big |\le C_9 |t'-t''|,$$ where the positive finite random variable $$C_9:=c_8 C_7\Big (\sum_{j=0}^{+\infty}2^{-j(H+1/2)}(1+j)^{1/2+1/\alpha+2\delta}\Big)$$ does not depend on $t'$ and $t''$. Finally, ([\[thm:main1:eq19\]](#thm:main1:eq19){reference-type="ref" reference="thm:main1:eq19"}) implies that ([\[thm:main1:eq6\]](#thm:main1:eq6){reference-type="ref" reference="thm:main1:eq6"}) holds. $\square$  Let $u<v$ be any fixed real numbers. 
For each $j\in{\mathbb Z}_+$, set $$\overline{k}_j:=\big [ 2^{j-1}(u+v)\big].$$ Then $$\label{main2:e4} \big | 2^{-j}\,\overline{k}_j-2^{-1}(u+v)\big |<2^{-j}.$$ Let $\theta$ be an even real-valued function in the Schwartz class $S({\mathbb R})$ whose Fourier transform $\widehat{\theta}$ (which is also an even real-valued function) has a compact support such that $$\label{main2:e1} {\rm supp}\,\widehat{\theta}\subseteq \big\{\xi\in{\mathbb R},\,\,2^{-1}\le |\xi|\le 1\big\}.$$ For all $j\in{\mathbb Z}_+$, let $$\label{main2:e2} W_{j}:=2^{j}\int_{{\mathbb R}} \theta(2^j t-\overline{k}_j) X(t)\,dt=\int_{{\mathbb R}}\theta(t)\big (X(2^{-j}\,\overline{k}_j+2^{-j} t)-X(2^{-j}\,\overline{k}_j)\big)\,dt.$$ Notice that the second equality in ([\[main2:e2\]](#main2:e2){reference-type="ref" reference="main2:e2"}) follows from the change of variable $t'=2^j t-\overline{k}_j$ and the equality $\int_{{\mathbb R}}\theta(t)\,dt=\widehat{\theta}(0)=0$ (see ([\[main2:e1\]](#main2:e1){reference-type="ref" reference="main2:e1"})). It is known from Proposition 5.1.4 and Remark 5.1.5 in [@Boutard] that the pathwise Lebesgue integrals in ([\[main2:e2\]](#main2:e2){reference-type="ref" reference="main2:e2"}) are well-defined and almost surely $$\label{main2:e3} W_{j}={\rm Re}\,\bigg(\int_{{\mathbb R}}\frac{e^{i 2^{-j}\overline{k}_j\xi}\,\widehat{\theta}(2^{-j}\xi)}{|\xi|^{H+1/\alpha}}\, d\widetilde{M}_\alpha(\xi)\bigg).$$ Observe that ([\[main2:e1\]](#main2:e1){reference-type="ref" reference="main2:e1"}) and ([\[main2:e3\]](#main2:e3){reference-type="ref" reference="main2:e3"}) imply that $(W_j)_{j\in{\mathbb Z}_+}$ is a sequence of independent real-valued S$\alpha$S random variables whose scale parameters satisfy, for every $j\in{\mathbb Z}_+$, $$\label{main2:e5} \sigma(W_j)=c_1 2^{-jH},$$ where the positive finite constant $c_1:=\big (\int_{{\mathbb R}} |\eta |^{-\alpha H-1} \big |\widehat{\theta}(\eta)\big |^\alpha\,d\eta\big)^{1/\alpha}$. Let us now show that $$\label{main2:e6} \limsup_{j\rightarrow +\infty}\, 2^{jH} (j+1)^{-1/\alpha}\, |W_j |=+\infty\quad\mbox{(almost surely).}$$ Recall from Chapter 1 of [@ST94] that there are two constants $0<c_2<c_3<+\infty$ such that any arbitrary real-valued S$\alpha$S random variable $Z$ with scale parameter $1$ satisfies $$\label{main2:e7} c_2 z^{-\alpha}\le{\mathbb P}\big (|Z|>z)\le c_3 z^{-\alpha} ,\quad\mbox{for all $z\in [1,+\infty)$.}$$ By using the first inequality in ([\[main2:e7\]](#main2:e7){reference-type="ref" reference="main2:e7"}), ([\[main2:e5\]](#main2:e5){reference-type="ref" reference="main2:e5"}) and the fact that $$\sum_{j=1}^{+\infty} \frac{1}{(j+1)\log(j+1)}=+\infty ,$$ we derive that $$\label{main2:e7b} \sum_{j=1}^{+\infty} {\mathbb P}\Big (\big (c_1 2^{-jH}\big)^{-1} (j+1)^{-1/\alpha} |W_j |>\big (\log (j+1)\big)^{1/\alpha}\Big)=+\infty.$$ Since the random variables $(W_j)_{j\in{\mathbb Z}_+}$ are independent, ([\[main2:e6\]](#main2:e6){reference-type="ref" reference="main2:e6"}) follows from [\[main2:e7b\]](#main2:e7b){reference-type="eqref" reference="main2:e7b"} and from the second part of the Borel-Cantelli Lemma. 
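Let us spell out the step leading to ([\[main2:e7b\]](#main2:e7b){reference-type="ref" reference="main2:e7b"}) (a short clarification added here, which only uses ([\[main2:e5\]](#main2:e5){reference-type="ref" reference="main2:e5"}) and the first inequality in ([\[main2:e7\]](#main2:e7){reference-type="ref" reference="main2:e7"})): applying ([\[main2:e7\]](#main2:e7){reference-type="ref" reference="main2:e7"}) to the scale-one variable $Z=W_j/\sigma(W_j)$ with $z=\big((j+1)\log(j+1)\big)^{1/\alpha}\ge 1$, which holds for every $j\ge 1$, yields $${\mathbb P}\Big (\big (c_1 2^{-jH}\big)^{-1} (j+1)^{-1/\alpha} |W_j |>\big (\log (j+1)\big)^{1/\alpha}\Big)\ge \frac{c_2}{(j+1)\log(j+1)}\,,$$ and summing this lower bound over $j\ge 1$ gives the divergence in ([\[main2:e7b\]](#main2:e7b){reference-type="ref" reference="main2:e7b"}).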
Recall from Corollary 4.2 in [@AB17] (see also [@Boutard]) and the continuity property of paths of $\{X(t), t\in{\mathbb R}\}$ that, for any fixed arbitrarily small $\delta>0$, there is a positive finite random variable $C_{4,\delta}$ such that almost surely $$\label{main2:e9} \big | X(t)\big|\le C_{4,\delta} \big (1+|t|^H\big)\log^{1/\alpha+\delta}\big (3+|t|\big),\quad\mbox{for all $t\in{\mathbb R}$.}$$ It follows from ([\[main2:e4\]](#main2:e4){reference-type="ref" reference="main2:e4"}) and ([\[main2:e9\]](#main2:e9){reference-type="ref" reference="main2:e9"}) that almost surely for all $j\in{\mathbb Z}_+$, $$\begin{aligned} && \int_{\{|t|>2^{j/2}\}}\big |\theta(t)\big| \big |X(2^{-j}\,\overline{k}_j+2^{-j} t)\big |\,dt\\ && \le C_{4,\delta} \int_{\{|t|>2^{j/2}\}}\big |\theta(t)\big|\big (1+|2^{-j}\,\overline{k}_j+2^{-j} t|^H\big)\log^{1/\alpha+\delta}\big (3+|2^{-j}\,\overline{k}_j+2^{-j} t|\big)\,dt\\ && \le C_{4,\delta} \int_{\{|t|>2^{j/2}\}}\big |\theta(t)\big|\big (2+(|u|+|v|+|t|)^H\big)\log^{1/\alpha+\delta}\big (4+|u|+|v|+|t|\big)\,dt.\end{aligned}$$ Since $\theta\in S({\mathbb R})$, we have $$\label{main2:e10} \lim_{j\rightarrow +\infty}\,2^{jH} (j+1)^{-1/\alpha}\int_{\{|t|>2^{j/2}\}}\big |\theta(t)\big| \big |X(2^{-j}\,\overline{k}_j+2^{-j} t)\big |\,dt=0 \quad\mbox{(almost surely).}$$ Combining ([\[main2:e2\]](#main2:e2){reference-type="ref" reference="main2:e2"}) with ([\[main2:e6\]](#main2:e6){reference-type="ref" reference="main2:e6"}) and ([\[main2:e10\]](#main2:e10){reference-type="ref" reference="main2:e10"}) gives $$\label{main2:e11} \limsup_{j\rightarrow +\infty}\, 2^{jH} (j+1)^{-1/\alpha}\, |\widetilde{W}_j |=+\infty\quad\mbox{a.s.,}$$ where $$\label{main2:e12} \widetilde{W}_j:=\int_{\{|t|\le 2^{j/2}\}}\theta(t)\big (X(2^{-j}\,\overline{k}_j+2^{-j} t)-X(2^{-j}\,\overline{k}_j)\big )\,dt.$$ Let us now introduce the positive random variable $A$ defined as $$\label{main2:e12bis} A:=\sup_{u\le t'<t''\le v}\,\frac{\big | X(t')-X(t'')\big |}{\big |t'-t'' \big |^{H}\big (\log(1+|t'-t''|^{-1})\big)^{1/\alpha}}.$$ Observe that for proving the theorem, it is enough to show that $$\label{main2:e8} A=+\infty \quad\mbox{a.s.}$$ Let $j_0$ be a positive fixed integer which is large enough so that $$\label{main2:e13} 2^{-j/2}\le 2^{-2} (v-u),\quad\mbox{for all $j\ge j_0$.}$$ By ([\[main2:e4\]](#main2:e4){reference-type="ref" reference="main2:e4"}) and ([\[main2:e13\]](#main2:e13){reference-type="ref" reference="main2:e13"}), we have $$\label{main2:e14} 2^{-j}\,\overline{k}_j+2^{-j} t\in [u,v],\quad\mbox{for all $j\ge j_0$ and $t\in \big [-2^{j/2}, 2^{j/2}\big]$.}$$ Then, it follows from ([\[main2:e12\]](#main2:e12){reference-type="ref" reference="main2:e12"}), ([\[main2:e12bis\]](#main2:e12bis){reference-type="ref" reference="main2:e12bis"}) and ([\[main2:e14\]](#main2:e14){reference-type="ref" reference="main2:e14"}) that for all $j\ge j_0$, $$\begin{aligned} \label{main2:e15} | \widetilde{W}_j | &\le & A \int_{{\mathbb R}} \big |\theta(t)\big| \big |2^{-j} t\big |^{H}\big (\log(1+|2^{-j} t|^{-1})\big)^{1/\alpha}\,dt\nonumber\\ &\le & A 2^{-jH} \int_{{\mathbb R}} \big |\theta(t)\big| |t |^{H}\big (\log(2^{j}+2^{j} | t|^{-1})\big)^{1/\alpha}\,dt\nonumber\\ &\le & A 2^{-jH}(j+1)^{1/\alpha} \int_{{\mathbb R}} \big |\theta(t)\big| |t |^{H}\big (1+\log(1+| t|^{-1})\big)^{1/\alpha}\,dt.\end{aligned}$$ Moreover, the fact that $\theta\in S({\mathbb R})$ implies $$\label{main2:e16} \int_{{\mathbb R}} \big |\theta(t)\big| |t |^{H}\big (1+\log(1+| t|^{-1})\big)^{1/\alpha}\,dt<+\infty.$$ 
Finally, putting together ([\[main2:e11\]](#main2:e11){reference-type="ref" reference="main2:e11"}), ([\[main2:e15\]](#main2:e15){reference-type="ref" reference="main2:e15"}) and ([\[main2:e16\]](#main2:e16){reference-type="ref" reference="main2:e16"}) yields ([\[main2:e8\]](#main2:e8){reference-type="ref" reference="main2:e8"}): indeed, ([\[main2:e15\]](#main2:e15){reference-type="ref" reference="main2:e15"}) and ([\[main2:e16\]](#main2:e16){reference-type="ref" reference="main2:e16"}) give $2^{jH}(j+1)^{-1/\alpha}\,|\widetilde{W}_j|\le c\, A$ for all $j\ge j_0$, with a finite constant $c$ not depending on $j$, so ([\[main2:e11\]](#main2:e11){reference-type="ref" reference="main2:e11"}) forces $A=+\infty$ almost surely. This finishes the proof of Theorem [Theorem 4](#thm:main2){reference-type="ref" reference="thm:main2"}.

**Acknowledgments**: The research of A. Ayache is partially supported by the Labex CEMPI (ANR-11-LABX-0007-01), the GDR 3475 (Analyse Multifractale et Autosimilarité), and the Australian Research Council's Discovery Projects funding scheme (project number DP220101680). The research of Y. Xiao is partially supported by NSF grant DMS-2153846. This work was partially written during A. Ayache's visit to Michigan State University in June 2023; he is very grateful to this university for its hospitality and financial support.

# References

A. Ayache and G. Boutard (2017), Stationary increments harmonizable stable fields: upper estimates on path behaviour. *J. Theoret. Probab.* **30**, 1369--1423.

A. Ayache, N.-R. Shieh and Y. Xiao (2020), Wavelet series representation and geometric properties of harmonizable fractional stable sheets. *Stochastics* **92**(1), 1--23.

A. Ayache and M. S. Taqqu (2003), Rate optimality of wavelet series approximations of fractional Brownian motion. *J. Fourier Anal. Appl.* **9**, 451--471.

G. Boutard (2016), Analyse par ondelettes de champs aléatoires stables à accroissements stationnaires, *PhD thesis, University Lille 1.*\
(<https://pepite-depot.univ-lille.fr/LIBRE/EDSPI/2016/50376-2016-Boutard.pdf>)

S. Cambanis and M. Maejima (1989), Two classes of self-similar stable processes with stationary increments. *Stoch. Process. Appl.* **32**(2), 305--329.

I. Daubechies (1992), *Ten Lectures on Wavelets.* CBMS-NSF series, Volume 61, SIAM Philadelphia.

N. Kôno and M. Maejima (1991), Hölder continuity of sample paths of some self-similar stable processes. *Tokyo J. Math.* **14**, 93--100.

P. G. Lemarié and Y. Meyer (1986), Ondelettes et bases hilbertiennes. *Rev. Mat. Iberoamericana* **2**, 1--18.

Y. Meyer (1992), *Wavelets and Operators, Volume 1.* Cambridge University Press.

S. Panigrahi, P. Roy and Y. Xiao (2021), Maximal moments and uniform modulus of continuity of stable random fields. *Stoch. Process. Appl.* **136**, 92--124.

G. Samorodnitsky and M. S. Taqqu (1994), *Stable Non-Gaussian Processes: Stochastic Models with Infinite Variance*. Chapman and Hall.

Y. Xiao (2010), On uniform modulus of continuity of random fields. *Monatsh. Math.* **159**, 163--184.
arxiv_math
{ "id": "2310.04518", "title": "An Optimal Uniform Modulus of Continuity for Harmonizable Fractional\n Stable Motion", "authors": "Antoine Ayache and Yimin Xiao", "categories": "math.PR", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We aim to establish Bowen's equations for upper capacity invariance pressure and Pesin-Pitskel invariance pressure of discrete-time control systems. We first introduce a new invariance pressure, called induced invariance pressure on partitions, which includes the upper capacity invariance pressure on partitions as a special case, and then show that the two types of invariance pressures are related by a Bowen's equation. Besides, to establish Bowen's equation for Pesin-Pitskel invariance pressure on partitions, we also introduce a new notion called BS invariance dimension on subsets. Moreover, a variational principle for BS invariance dimension on subsets is established. address: 1. School of Mathematical Sciences and Institute of Mathematics, Nanjing Normal University, Nanjing 210023, Jiangsu, P.R. China author: - Rui Yang, Ercai Chen, Jiao Yang and Xiaoyao Zhou\* title: Bowen's equations for invariance pressure of control systems --- [^1] # Introduction It is well-known that entropy is an important quantity to characterize the topological complexity of dynamical systems. In the framework of control systems, entropy is closely associated with the realization of certain control tasks. Analogous to the definition of topological entropy given by open covers, Nair et al. [@NM04] first introduced topological feedback entropy, which gives a characterization of the minimal data rate of certain control tasks. Later, inspired by the definition of topological entropy via spanning sets, Colonius and Kawan [@ck] introduced the concept of invariance entropy to measure the exponential growth rate of the minimal number of control functions necessary to keep a subset of a controlled invariant set invariant. Parallel to the classical entropy theory, these two different invariance entropies were proved to be the same quantity, under suitable technical assumptions, in [@ckn13]. Furthermore, using the Carathéodory-Pesin structures of the dimension theory of dynamical systems, Huang and Zhong [@hz18] verified that topological feedback entropy and invariance entropy also enjoy the "dimension" characteristic. As a natural generalization of topological entropy, the topological pressure of continuous functions, introduced by Walters [@w82], plays a vital role in the thermodynamic formalism. In 2019, Colonius et al. [@fsc19] introduced the notion of invariance pressure for discrete control systems, which measures the exponential growth rate of the total weighted information produced by the control functions acting on the system in order to keep its trajectories in a controlled invariant set. The corresponding "dimension" characteristic of invariance pressure can be found in Zhong and Huang [@zh19]. Next we recall the well-known Bowen's equations in topological dynamical systems and then state the main results of the present paper. In 1979, Bowen [@b79] found that the Hausdorff dimension of a certain compact set of quasi-circles is exactly the unique root of the equation defined by the topological pressure of a geometric potential function. This suggests that the Hausdorff dimension of certain compact sets can be computed by finding the root of such an equation rather than directly from the definition; since then, the equation has been known as Bowen's equation. Later, Barreira and Schmeling [@bs00] proved that the BS dimension is the unique root of the equation defined by the topological pressure of an additive potential function. Xing and Chen [@xc] extended the concept of induced topological pressure introduced by Jaerisch et al.
[@jms14] to general topological dynamical systems, and showed that the induced topological pressure is the unique root of an equation defined by the classical topological pressure. More results about Bowen's equations can be found in [@c11; @ycz22; @xm23]. Based on these works, it turns out that Bowen's equations provide a bridge between the thermodynamic formalism and the dimension theory of dynamical systems, and they can also serve to give an approximate estimate of the dimension of certain sets. Then a natural question is how to establish the so-called Bowen's equations for invariance pressure of control systems. It is difficult to give an equivalent definition for invariance pressure in control systems by separated sets as we have done for topological pressure. So we consider the invariance pressure of controlled invariant sets on invariant partitions so that a Bowen's equation is available. Inspired by the ideas of [@jms14; @xc; @ycz22], we first define the induced invariance pressure on partitions by spanning sets and separated sets and show that the induced invariance pressure on partitions is the unique root of the equation defined by the upper capacity invariance pressure on partitions introduced by Zhong and Huang [@zh19]. **Theorem 1**. *Let $\varSigma=(\mathbb{N},X,U,\mathcal{U},\phi)$ be a discrete-time control system. Let $Q$ be a controlled invariant set and $\mathcal C =(\mathcal A, \tau,\nu)$ be an invariant partition of $Q$, and let $\varphi,\psi \in C(U,\mathbb{R})$ with $\psi>0$. Then $P_{inv,\psi }(\varphi,Q,\mathcal{C})$ is the unique root of the equation $$P_{inv}(\varphi-\beta \psi,Q,\mathcal{C})=0,$$ where $P_{inv}(\varphi, Q,\mathcal{C})$ and $P_{inv,\psi }(\varphi,Q,\mathcal{C})$ denote the upper capacity invariance pressure of $\varphi$ on $Q$ w.r.t. $\mathcal{C}$ and the $\psi$-induced invariance pressure of $\varphi$ on $Q$ w.r.t. $\mathcal{C}$, respectively.* In 1973, resembling the definition of Hausdorff dimension, Bowen introduced the notion of Bowen topological entropy on subsets. Later, Feng and Huang [@fh12] established variational principles for Bowen and packing topological entropies on subsets in terms of the measure-theoretical lower and upper Brin-Katok local entropies [@bk83]. This result was generalized to BS dimension by Wang and Chen [@wc12]. Then it is reasonable to ask how to inject ergodic-theoretic ideas into the invariance entropy theory of control systems by establishing some proper variational principles for invariance entropy. Colonius et al. [@ff18; @f18] introduced metric invariance entropy with respect to conditionally invariant measures. Wang, Huang and Sun [@whs19] introduced the measure-theoretic lower and upper invariance entropies with respect to an invariant partition and obtained analogous variational principles for Bowen and packing invariance entropy on subsets; see also Zhong [@z20] for an extension to Pesin-Pitskel invariance pressure on subsets. Recently, the authors of [@nwh22; @wh22] established a variational principle and an inverse variational principle for invariance entropy. Since invariance pressure can also be treated as a dimension, this motivates us to establish Bowen's equation for Pesin-Pitskel invariance pressure on subsets. However, a new dimension, called BS invariance dimension and defined by Carathéodory-Pesin structures, is needed to obtain the precise root of the equation defined by Pesin-Pitskel invariance pressure.
Moreover, to link ergodic theory and invariance entropy theory, a variational principle for BS invariance dimension on subsets is established, which extends the previous variational principle for Bowen invariance entropy given by Wang et al. [@whs19]. We state the remaining two results as follows. **Theorem 2**. *Let $\varSigma=(\mathbb{N},X,U,\mathcal{U},\phi)$ be a discrete control system. Let $Q$ be a controlled invariant set and $\mathcal C =(\mathcal A, \tau,\nu)$ be an invariant partition of $Q$, and let $Z$ be a non-empty subset of $Q$ and $\varphi\in C(U,\mathbb{R})$ with $\varphi >0$. Then ${\rm dim}_{\mathcal{C}}^{BS}(\varphi,Z,Q)$ is the unique root of the equation $$P_{\mathcal{C}}(-t\varphi,Z,Q)=0,$$ where $P_{\mathcal{C}}(-t\varphi,Z,Q)$ denotes the Pesin-Pitskel invariance pressure of $-t\varphi$ on $Z$ w.r.t. $\mathcal{C}$, and ${\rm dim}_{\mathcal{C}}^{BS}(\varphi,Z,Q)$ is called the BS invariance dimension of $\varphi$ on $Z$ w.r.t. $\mathcal{C}$.* **Theorem 3**. *Let $\varSigma=(\mathbb{N},X,U,\mathcal{U},\phi)$ be a discrete control system. Let $Q$ be a controlled invariant set and $\mathcal C =(\mathcal A, \tau,\nu)$ be a clopen invariant partition of $Q$, and let $K$ be a non-empty compact subset of $Q$ and $\varphi\in C(U,\mathbb{R})$ with $\varphi >0$. Then $$\begin{aligned} {\rm dim}_{\mathcal{C}}^{BS}(\varphi,K,Q)=\sup\{ \underline{h}_{\mu,inv}(\varphi,Q,\mathcal{C}):\mu \in M(Q), \mu (K)=1\}, \end{aligned}$$ where $\underline{h}_{\mu,inv}(\varphi,Q,\mathcal{C})$ denotes the measure-theoretical lower BS invariance pressure of $\mu$ w.r.t. $\mathcal{C}$ and $\varphi$.* The rest of this paper is organized as follows. In Section 2, we recall some basic settings and concepts of discrete control systems. In Section 3, we introduce the notions of induced invariance pressure and upper capacity invariance pressure on partitions by spanning sets and separated sets, and we prove Theorem 1.1. In Section 4, we introduce the notion of BS invariance dimension to prove Theorems 1.2 and 1.3. # The set-up of discrete-time control systems We briefly recall some standard facts about discrete-time control systems. A systematic treatment of the invariance entropy theory of deterministic control systems can be found in Kawan's monograph [@k13]. The following crucial ingredients are taken from [@k13 Chapter 1], [@fsc19; @whs19; @zh19; @z20]. In this paper, we focus on a discrete-time control system on a metric space $X$ of the following form $$\begin{aligned} x_{n+1}=F(x_n,u_n)=F_{u_n}(x_n), n\in \mathbb{N}_0=\{0,1,...\},\end{aligned}$$ where $F$ is a map from $X\times U$ to $X$, $U$ is a compact set and for each fixed $u\in U$, $F_u(\cdot):=F(\cdot,u)$ is a continuous map on $X$. Given a control sequence $\omega=(\omega_0,\omega_1,...)$ in $U$, the solution of $(2.1)$ can be re-written as $$\phi(k,x,\omega)=F_{\omega_{k-1}}\circ\cdots\circ F_{\omega_0}(x).$$ For convenience, by $\varSigma:=(\mathbb{N},X,U,\mathcal{U},\phi)$ we denote the above control system, where $\mathcal{U}$ is a subset of maps from $\mathbb{N}$ to $U$. Let $\varSigma=(\mathbb{N},X,U,\mathcal{U},\phi)$ be a control system. A set $Q\subset X$ is said to be a *controlled invariant set* if $Q$ is compact and for any $x\in Q$ there exists $\omega_x\in\mathcal{U}$ such that $\phi(\mathbb{N}_0,x,\omega_x)\subset Q$. Fix a controlled invariant set $Q$.
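To fix ideas, the transition map $\phi$ and the notion of controlled invariance can be illustrated on a toy system. The following minimal sketch is ours: the dynamics $F$, the control range $U$ and the set $Q$ below are purely illustrative choices and are not taken from the references.

```python
# A toy discrete-time control system illustrating phi and controlled invariance.
# The dynamics F, the control range U and the set Q are illustrative assumptions.

U = (0.0, 0.25)  # admissible control values

def F(x, u):
    # One step of the dynamics x_{n+1} = F(x_n, u_n) on [0, 1) (mod 1).
    return (0.5 * x + u) % 1.0

def phi(k, x, omega):
    # phi(k, x, omega) = F_{omega_{k-1}} o ... o F_{omega_0}(x)
    for j in range(k):
        x = F(x, omega[j])
    return x

# Q = [0, 0.5] is controlled invariant for this toy system: for every x in Q
# the constant control sequence omega = (0, 0, ...) keeps the orbit in Q,
# since F(x, 0) = x / 2 lies in [0, 0.25] whenever x lies in [0, 0.5].
x0, omega = 0.4, [0.0] * 10
print([round(phi(k, x0, omega), 4) for k in range(11)])
```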
A triple $\mathcal C =(\mathcal A, \tau,\nu)$ is said to be an *invariant partition* of $Q$ if $\mathcal A=\{A_1,...,A_q\}$ is a finite Borel partition of $Q$, $\tau\in \mathbb N$, and $\nu: \mathcal A \to U^{\tau}$ is a map such that $$\phi([0,\tau],A_i,\nu(A_i))\subset Q$$ for all $i=1,2,...,q$. Let $\omega_a:=\nu(A_a)$, for $a=1,2,...,q$. Given $n\in \mathbb{N}$ and a word $s=(s_0,s_1,...,s_{n-1})\in S^n$, where $S=\{1,...,q\}$, the *concatenation of $n$ controls* is given by $$\omega_s=\omega_{s_0,...,s_{n-1}}.$$ Given a word $s=(s_0,s_1,...,s_{n-1})$, we define $$\mathcal{C}_s(Q):=\{x\in Q: \phi(j\tau,x,\omega_{s_0,...,s_{n-1}})\in A_{s_j}, ~\text{for}~{j=0,1,...,n-1}\}.$$ The word $s$ is said to be *an admissible word of length $n$* if $\mathcal{C}_s(Q)\not=\emptyset$, and we write $l(s)=n$ to denote the length of $s$. The set of all admissible words with length $j$ is denoted by $\mathcal{L}^j{(\mathcal{C})}$. Note that if $\mathcal C =(\mathcal A, \tau,\nu)$ is an invariant partition of $Q$, then for every $x\in Q$ there exists a unique sequence $\mathcal{C}(x)=(\mathcal{C}_0(x),\mathcal{C}_1(x),...) \in S^{\mathbb{N}_0}$ such that $$\phi(j\tau,x,\omega_{\mathcal{C}(x)})\in A_{\mathcal{C}_j(x)},$$ for every $j\in \mathbb{N}_0$. For $m\leq n$, let $\mathcal C_{[m,n]}(x)=(\mathcal{C}_m(x),...,\mathcal{C}_n(x))$ and call the word $\mathcal C_{[m,n]}(x)$ the *orbit address* of $x$ from time $m$ to $n$ remaining in $Q$. Given $n\in \mathbb{N}$ and $x\in Q$, the *cylinder of $x$ of length $n$* w.r.t. $\mathcal C$ is defined by $$\begin{aligned} Q_n(x, \mathcal C):& =\{y\in Q :\mathcal{C}_j(y)=\mathcal{C}_j(x), ~\text{for}~j=0,1,...,n-1\}\\ &=\{y\in Q :\phi(j\tau,y,\omega_{\mathcal {C}_{[0,n)}(x)})\in A_{\mathcal{C}_j(x) },\text{for}~j=0,1,...,n-1\}\\ &= \cap_{j=0}^{n-1} \phi_{j\tau, \omega_{\mathcal C_{[0,n)}(x)}}^{-1} (A_{\mathcal{C}_j(x)})\\ &=\mathcal{C}_{\omega_{\mathcal{C}_{[0,n)}(x)}}(Q),\end{aligned}$$ where the map $\phi_{t,\omega}: X \rightarrow X$ given by $\phi_{t,\omega}(x):=\phi(t,x,\omega)$ is continuous. Clearly, $Q_n(x, \mathcal C)$ is a Borel measurable subset of $Q$. Observe that $Q_n(x, \mathcal C)$ is the set consisting of all points of $Q$ that have the same orbit address as $x$ from time $0$ to $n-1$. Hence for any two points $x,y\in Q$ with $x\not=y$, either $Q_n(x, \mathcal C)=Q_n(y, \mathcal C)$ or $Q_n(x, \mathcal C)\cap Q_n(y, \mathcal C)=\emptyset$. # Bowen's equation for upper capacity invariance pressure In this section, we introduce the induced invariance pressure on partitions in subsection 3.1 and prove Theorem [Theorem 1](#thm 1.1){reference-type="ref" reference="thm 1.1"} in subsection 3.2. ## Induced invariance pressure on partitions By $C(U,\mathbb{R})$ we denote the set of all real-valued continuous maps on $U$ equipped with the supremum norm. For $n\in\mathbb{N}$ and $\varphi,\psi\in C(U,\mathbb{R})$ with $\psi >0$, we set $S_n\varphi(\omega):=\sum_{j=0}^{n-1}\varphi(\omega_j)$, $\omega \in U^n$ and $m:=\min_{u\in U}\psi(u)$. Let $\mathcal C =(\mathcal A, \tau,\nu)$ be an invariant partition of $Q$, $K\subset Q$ and $n\in \mathbb{N}$. A set $E \subset Q$ is a $(\mathcal{C},n,K)$-*spanning set* if for any $x \in K$, there exists $y\in E$ such that $\mathcal{C}_{[0,n)}(x)=\mathcal{C}_{[0,n)}(y)$. The smallest cardinality of a $(\mathcal{C},n,K)$-spanning set is denoted by $r(\mathcal{C},n,K)$.
A set $F \subset K$ is a $(\mathcal{C},n,K)$-*separated* set if for any $x, y \in F$ with $x \neq y$, $\mathcal{C}_{[0,n)}(x)\not = \mathcal{C}_{[0,n)}(y)$ (that is, $Q_n(x,\mathcal{C})\cap Q_n(y,\mathcal{C})=\emptyset$). The largest cardinality of a $(\mathcal{C},n,K)$-separated set is denoted by $s(\mathcal{C},n,K)$. **Definition 4**. *Let $\mathcal C =(\mathcal A, \tau,\nu)$ be an invariant partition of $Q$ and $n\in \mathbb{N}$, $T>0$, $\varphi,\psi \in C(U,\mathbb{R})$ with $\psi>0$. Put $$m(\varphi,Q,n,\mathcal{C})=\sup\{\sum_{x\in F_n}e^{S_{n\tau}\varphi({\omega_{\mathcal{C}_{[0,n)}(x)}})}:F_n ~\mbox{is a } (\mathcal{C},n,Q)\mbox{-separated set}\}.$$ We define the *upper capacity invariance pressure of $\varphi$ on $Q$ w.r.t. $\mathcal{C}$* as $$P_{inv}(\varphi,Q,\mathcal{C})=\limsup_{n \to \infty }\frac{1}{n\tau}\log m(\varphi,Q,n,\mathcal{C}).$$* *Set $$\begin{aligned} &S_T:=\\ &\{n\in \mathbb{N}: \exists x \in Q ~\mbox{such that } S_{n\tau}\psi({\omega_{\mathcal{C}_{[0,n)}(x)}})\leq T\tau~ \mbox{and}~ S_{(n+1)\tau}\psi({\omega_{\mathcal{C}_{[0,n+1)}(x)}})>T\tau\}. \end{aligned}$$ For each $n\in S_T$, we define $$X_n=\{x\in Q: S_{n\tau}\psi({\omega_{\mathcal{C}_{[0,n)}(x)}})\leq T\tau~ \mbox{and}~ S_{(n+1)\tau}\psi({\omega_{\mathcal{C}_{[0,n+1)}(x)}})>T\tau\}.$$ Clearly, the set $X_n$ is a union of some cylinder sets $Q_{n+1}(x,\mathcal{C})$. Put $$\begin{aligned} &P_{inv,\psi, T}(\varphi,Q,\mathcal{C})\\ = &\sup\left\{\sum_{n\in S_T}\sum_{x \in F_n}\limits e^{S_{n\tau}\varphi({\omega_{\mathcal{C}_{[0,n)}(x)}})}: F_n \mbox{ is a}~ (\mathcal{C},n,X_n)\mbox{-separated set} \right\}. \end{aligned}$$ We define the *$\psi$-induced invariance pressure of $\varphi$ on $Q$ w.r.t. $\mathcal{C}$* as $$P_{inv,\psi }(\varphi,Q,\mathcal{C})=\limsup_{T \to \infty}\frac{1}{T\tau}\log P_{inv,\psi, T}(\varphi,Q,\mathcal{C}).$$* **Remark 5**. 1. *If $S_T\not=\emptyset$, then for each $n\in S_T$, we have $\frac{T}{||\psi||} -1<n\leq \frac{T}{m}$, which shows $S_T$ is a finite set.* 2. *If $\psi =1$, then $S_T$ is a singleton and $P_{inv,1 }(\varphi,Q,\mathcal{C})= P_{inv}(\varphi,Q,\mathcal{C}).$* 3. *For every fixed $\psi \in C(U,\mathbb{R})$ with $\psi >0$, $P_{inv,\psi }(\cdot,Q,\mathcal{C}): C(U,\mathbb{R})\rightarrow \mathbb{R}$ is finite. Indeed, for any $T>0$, we have $P_{inv,\psi, T}(\varphi,Q,\mathcal{C})\geq e^{-\frac{T}{m}\tau ||\varphi||}$ and hence $P_{inv,\psi }(\varphi,Q,\mathcal{C})\geq -\frac{1}{m}||\varphi||>-\infty.$ On the other hand, since for each $n\in S_T$ the set $X_n$ is a union of some cylinder sets $Q_{n+1}(x,\mathcal{C})$, this means that if $F_n$ is a $(\mathcal{C},n,X_n)$-separated set, then $\#F_n\leq ( \#\mathcal{A})^{n+1}$. Therefore, $$\begin{aligned} \sum_{n\in S_T}\sum_{x \in F_n}\limits e^{S_{n\tau}\varphi({\omega_{\mathcal{C}_{[0,n)}(x)}})}&\leq\sum_{n\in S_T} ( \#\mathcal{A})^{n+1} e^{n\tau||\varphi||}\\ &\leq (\frac{T}{m}-\frac{T}{||\psi||}+1) (\#\mathcal{A})^{\frac{T}{m}+1} e^{\frac{T}{m}\tau||\varphi||}.\end{aligned}$$ This gives us $P_{inv,\psi, T}(\varphi,Q,\mathcal{C})\leq (\frac{T}{m}-\frac{T}{||\psi||}+1) (\#\mathcal{A})^{\frac{T}{m}+1} e^{\frac{T}{m}\tau||\varphi||}$ for every $T>0$. Consequently, $P_{inv,\psi }(\varphi,Q,\mathcal{C})\leq \frac{\log\#\mathcal{A}}{m\tau}+\frac{1}{m}||\varphi||<\infty.$* Next we give an equivalent definition of the induced invariance pressure using spanning sets.
We define $$\begin{aligned} &Q_{inv,\psi, T}(\varphi,Q,\mathcal{C})\\ = &\inf\left\{\sum_{n\in S_T}\sum_{x \in E_n}\limits e^{S_{n\tau}\varphi({\omega_{\mathcal{C}_{[0,n)}(x)}})}: E_n \mbox{ is a}~ (\mathcal{C},n,X_n)\mbox{-spanning set}\right\}.\end{aligned}$$ **Proposition 6**. *Let $Q$ be a controlled invariant set and $\mathcal C =(\mathcal A, \tau,\nu)$ be an invariant partition of $Q$, and let $\varphi,\psi \in C(U,\mathbb{R})$ with $\psi>0$. Then $$P_{inv,\psi }(\varphi,Q,\mathcal{C})=\limsup_{T \to \infty}\frac{1}{T\tau}\log Q_{inv,\psi, T}(\varphi,Q,\mathcal{C}).$$* *Proof.* Let $T>0$ and $n\in S_T$. Notice that a $(\mathcal{C},n,X_n)$-separated set with the maximal cardinality is also a $(\mathcal{C},n,X_n)$-spanning set. This shows that $$P_{inv,\psi }(\varphi,Q,\mathcal{C})\geq \limsup_{T \to \infty}\frac{1}{T\tau}\log Q_{inv,\psi, T}(\varphi,Q,\mathcal{C}).$$ Now, let $E_n$ be a $(\mathcal{C},n,X_n)$-spanning set and $F_n$ be a $(\mathcal{C},n,X_n)$-separated set. Consider the map $\Phi: F_n \rightarrow E_n$ defined by assigning to each $x\in F_n$ a point $\Phi(x)\in E_n$ such that $\mathcal{C}_{[0,n)}(\Phi(x))=\mathcal{C}_{[0,n)}(x)$. Then $\Phi$ is injective. Therefore, $$\begin{aligned} &\sum_{n\in S_T}\sum_{y \in E_n}\limits e^ {S_{n\tau}\varphi({\omega_{\mathcal{C}_{[0,n)}(y)}})}\\ \geq&\sum_{n\in S_T}\sum_{x \in F_n}\limits e^ {S_{n\tau}\varphi({\omega_{\mathcal{C}_{[0,n)}(\Phi(x))}})}\\ =&\sum_{n\in S_T}\sum_{x \in F_n}\limits e^ {S_{n\tau}\varphi({\omega_{\mathcal{C}_{[0,n)}(x)}})}, \text{by}~ \mathcal{C}_{[0,n)}(\Phi(x))=\mathcal{C}_{[0,n)}(x).\end{aligned}$$ It follows that $$P_{inv,\psi }(\varphi,Q,\mathcal{C})\leq\limsup_{T \to \infty}\frac{1}{T\tau}\log Q_{inv,\psi, T}(\varphi,Q,\mathcal{C}).$$ ◻ **Remark 7**. *Recall that in [@zh19 Section 4] Zhong and Huang defined upper capacity invariance pressure by admissible words as follows: $$P_{inv}^{*}(\varphi,Q,\mathcal{C})=\limsup_{N \to \infty }\frac{1}{N\tau}\log \inf_{\mathcal{G}}\{\sum_{s\in \mathcal{G}} e^{S_{N\tau}\varphi(\omega_s)}\},$$ where the infimum is taken over all finite families $\mathcal{G}\subset \mathcal{L}^N(\mathcal C)$ such that $\cup_{s\in \mathcal{G}}\mathcal{C}_s(Q) \supset Q$. It is readily shown that $P_{inv}^{*}(\varphi,Q,\mathcal{C})=P_{inv}(\varphi,Q,\mathcal{C})$.* ## Proof of Theorem 1.1 To establish Bowen's equation for upper capacity invariance pressure on partitions, we first give an equivalent characterization of $P_{inv,\psi }(\varphi,Q,\mathcal{C})$, which has a dimensional flavor and plays a critical role in our later proofs. For each $T>0$, we define $$G_T:=\{n\in \mathbb{N}: \exists x \in Q ~\mbox{such that } S_{n\tau}\psi({\omega_{\mathcal{C}_{[0,n)}(x)}})>T\tau\}.$$ Let $n\in G_T$. We put $$Y_n=\{x\in Q: S_{n\tau}\psi({\omega_{\mathcal{C}_{[0,n)}(x)}})>T\tau\}$$ and $$\begin{aligned} &R_{inv,\psi, T}(\varphi,Q,\mathcal{C})\\ = &\sup\left\{\sum_{n\in G_T}\sum_{x \in F_n^{'}}\limits e^ {S_{n\tau}\varphi({\omega_{\mathcal{C}_{[0,n)}(x)}})}:F_n ^{'}\mbox{ is a}~ (\mathcal{C},n,Y_n)\mbox{-separated set} \right\}.\end{aligned}$$ **Theorem 8**. *Let $Q$ be a controlled invariant set and $\mathcal C =(\mathcal A, \tau,\nu)$ be an invariant partition of $Q$, and let $\varphi,\psi \in C(U,\mathbb{R})$ with $\psi>0$.
Then $$\begin{aligned} \label{equ 2.2} P_{inv,\psi }(\varphi,Q,\mathcal{C})=\inf\{\beta \in \mathbb{R}: \limsup_{T \to \infty} R_{inv,\psi, T}(\varphi-\beta\psi,Q,\mathcal{C})<\infty\}.\end{aligned}$$* *Proof.* For any $n\in \mathbb{N}$ and any $x\in Q$, we define $m_n(x)$ as the unique positive integer satisfying $$\begin{aligned} \label{equ 2.3} (m_n(x)-1)||\psi||\tau<S_{n\tau}\psi({\omega_{\mathcal{C}_{[0,n)}(x)}})\leq m_n(x)||\psi||\tau.\end{aligned}$$ It is easy to check that for any $x\in Q$, the inequality $$\begin{aligned} \label{equ 2.4} e^{-\beta ||\psi||\tau m_n(x)}e^{-|\beta|||\psi||\tau}\leq e^{-\beta S_{n\tau}\psi({\omega_{\mathcal{C}_{[0,n)}(x)}})}\leq e^{-\beta ||\psi||\tau m_n(x)}e^{|\beta|||\psi||\tau}\end{aligned}$$ holds for all $\beta \in\mathbb{R}$. Define $$\begin{aligned} &R_{inv,\psi, T}^{*}(\varphi,\{\beta||\psi||\tau m_n(x)\}_{n\in G_T},Q,\mathcal{C})\\ = &\sup\left\{\sum_{n\in G_T}\sum_{x \in F_n^{'}}\limits e^ {S_{n\tau}\varphi({\omega_{\mathcal{C}_{[0,n)}(x)}})-\beta||\psi||\tau m_n(x)}:F_n ^{'}\mbox{ is a}~ (\mathcal{C},n,Y_n)\mbox{-separated set} \right\}.\end{aligned}$$ By inequality ([\[equ 2.4\]](#equ 2.4){reference-type="ref" reference="equ 2.4"}), to show ([\[equ 2.2\]](#equ 2.2){reference-type="ref" reference="equ 2.2"}), it suffices to verify $$\begin{aligned} &P_{inv,\psi }(\varphi,Q,\mathcal{C})\\ =&\inf\{\beta \in \mathbb{R}: \limsup_{T \to \infty} R_{inv,\psi, T}^{*}(\varphi,\{\beta||\psi||\tau m_n(x)\}_{n\in G_T},Q,\mathcal{C})<\infty\}.\end{aligned}$$ We first show lhs $\leq$ rhs. Let $\beta<P_{inv,\psi }(\varphi,Q,\mathcal{C})$. Choose $\delta>0$ and a sequence $\{T_j\}_{j\in \mathbb{N}}$ that converges to $\infty$ as $j \to \infty$ so that $$\beta+\delta<P_{inv,\psi }(\varphi,Q,\mathcal{C}),$$ and $$P_{inv,\psi }(\varphi,Q,\mathcal{C})=\lim_{j \to \infty}\frac{1}{T_j\tau}\log P_{inv,\psi, T_j}(\varphi,Q,\mathcal{C}).$$ Then there exists $J_0$ such that for any $j\geq J_0$, $$\beta+\delta<\frac{1}{T_j\tau}\log P_{inv,\psi, T_j}(\varphi,Q,\mathcal{C})$$ and $$\begin{aligned} \label{equ 2.5} e^{T_j\tau(\beta+\delta)}<\sum_{n\in S_{T_j}}\sum_{x\in F_n}e^{S_{n\tau}\varphi({\omega_{\mathcal{C}_{[0,n)}(x)}})},\end{aligned}$$ where $F_n$ is a $(\mathcal{C},n,X_n)$-separated set, $n\in S_{T_j}$. Fix $T_{j_1}$ with $j_1\geq J_0$. If we have chosen $T_{j_k}$, then, noting that $T_j \to \infty$, we choose $T_{j_{k+1}}$ such that $$\frac{T_{j_k}}{m}+1<\frac{T_{j_{k+1}}}{||\psi||}-1.$$ Without loss of generality, we still denote the subsequence $\{T_{j_k}\}_{k\geq 1}$ by $\{T_j\}_{j\geq 1}$. For any $i,j \geq J_0$ with $i\not= j$, we have $S_{T_i}\cap S_{T_j}=\emptyset$. For each $j\geq J_0$ and $n\in S_{T_j}$, we have $(T_j-||\psi||)\tau<S_{n\tau}\psi({\omega_{\mathcal{C}_{[0,n)}(x)}})\leq T_j\tau$ for all $x\in F_n$.
Together with inequality ([\[equ 2.3\]](#equ 2.3){reference-type="ref" reference="equ 2.3"}), we get $$\begin{aligned} \label{equ 2.6} |||\psi||\tau m_n(x)-T_j\tau|<2||\psi||\tau\end{aligned}$$ and hence $-\beta ||\psi||\tau m_n(x)\geq -\beta T_j\tau -2|\beta|\tau||\psi||.$ Therefore, $$\begin{aligned} &R_{inv,\psi, T}^{*}(\varphi,\{\beta||\psi||\tau m_n(x)\}_{n\in G_T},Q,\mathcal{C})\\ \geq& \sum_{j\geq J_0,\atop T_j-||\psi||>T}\sum_{n\in S_{T_j}}\sum_{x\in F_n} e^{S_{n\tau}\varphi({\omega_{\mathcal{C}_{[0,n)}(x)}})-\beta||\psi||\tau m_n(x)}\\ \geq& e^{-2|\beta|\tau||\psi||}\sum_{j\geq J_0,\atop T_j-||\psi||>T}\sum_{n\in S_{T_j}}\sum_{x\in F_n} e^{S_{n\tau}\varphi({\omega_{\mathcal{C}_{[0,n)}(x)}})-\beta T_j\tau}\\ \geq& e^{-2|\beta|\tau||\psi||}\sum_{j\geq J_0, T_j-||\psi||>T} e^{\delta T_j\tau },~~~~~~~ \text{by~~(\ref{equ 2.5}) }\\ =&\infty. \end{aligned}$$ This leads to $$\begin{aligned} \label{equ 2.7} \limsup_{T \to \infty} R_{inv,\psi, T}^{*}(\varphi,\{\beta||\psi||\tau m_n(x)\}_{n\in G_T},Q,\mathcal{C})=\infty,\end{aligned}$$ which implies that lhs $\leq$ rhs. We proceed to show lhs $\geq$ rhs. Let $\delta >0$ and then choose an $l_0\in \mathbb{N}$ such that for any $l\geq l_0$, $$\begin{aligned} \label{equ 2.8} &P_{inv,\psi, lm}(\varphi,Q,\mathcal{C})<e^{lm\tau(P_{inv,\psi }(\varphi,Q,\mathcal{C})+\frac{\delta}{2})}.\end{aligned}$$ Recall that $m=\min_{u\in U}\limits \psi(u)>0.$ For each $l\geq l_0$, let $n\in S_{lm}$. Similar to ([\[equ 2.6\]](#equ 2.6){reference-type="ref" reference="equ 2.6"}), if $F_n$ is a $(\mathcal{C},n,X_n)$-separated set, then $$|||\psi||\tau m_n(x)-lm\tau|<2||\psi||\tau$$ and $$\begin{aligned} \label{equ 2.9} &-(P_{inv,\psi }(\varphi,Q,\mathcal{C})+\delta)||\psi||\tau m_n(x) \nonumber \\ &\leq -lm\tau (P_{inv,\psi }(\varphi,Q,\mathcal{C})+\delta)+2||\psi||\tau\cdot |P_{inv,\psi }(\varphi,Q,\mathcal{C})+\delta|. \end{aligned}$$ For $T\geq l_0m$ sufficiently large and $n\in G_T$, let $F_n^{'}$ be a $(\mathcal{C},n,Y_n)$-separated set. Then for each $x\in F_n^{'}$, there exists a unique $l\geq l_0$ such that $(l-1)m\tau<S_{n\tau}\psi({\omega_{\mathcal{C}_{[0,n)}(x)}})\leq lm\tau$. Then $$S_{(n+1)\tau}\psi({\omega_{\mathcal{C}_{[0,n+1)}(x)}})=S_{n\tau}\psi({\omega_{\mathcal{C}_{[0,n)}(x)}})+S_{\tau}\psi({\omega_{\mathcal{C}_n(x)}})>lm\tau.$$ Namely, $n\in S_{lm}$ and $x\in X_n$. Therefore, $$\begin{aligned} &R_{inv,\psi, T}^{*}(\varphi,\{(P_{inv,\psi }(\varphi,Q,\mathcal{C})+\delta)||\psi||\tau m_n(x)\}_{n\in G_T},Q,\mathcal{C})\\ \leq &\sum_{l\geq l_0}\sup\left\{\sum_{n\in S_{lm}}\sum_{x\in F_n} e^{S_{n\tau}\varphi({\omega_{\mathcal{C}_{[0,n)}(x)}})-(P_{inv,\psi }(\varphi,Q,\mathcal{C})+\delta)||\psi||\tau m_n(x)},\right. \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\left. F_n ~\mbox{is a}~ (\mathcal{C},n,X_n)\text{-separated set},n \in S_{lm} \right\}\\ \leq& e^{2||\psi||\tau\cdot |P_{inv,\psi }(\varphi,Q,\mathcal{C})+\delta|} \sum_{l\geq l_0}\sup\left\{\sum_{n\in S_{lm}}\sum_{x\in F_n} e^{{S_{n\tau}\varphi({\omega_{\mathcal{C}_{[0,n)}(x)}})-lm\tau (P_{inv,\psi }(\varphi,Q,\mathcal{C})+\delta)}} ,\right. \\ & \left.~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~F_n ~\text{is a}~ (\mathcal{C},n,X_n)\text{-separated set}, n \in S_{lm} \right\},~~~ \text{by (\ref{equ 2.9})} \\ \leq& e^{2||\psi||\tau\cdot |P_{inv,\psi }(\varphi,Q,\mathcal{C})+\delta|} \sum_{l\geq l_0}e^{-\frac{\delta}{2}lm\tau}:=M<\infty, ~~~ \text{by (\ref{equ 2.8})} .\\ \end{aligned}$$ This yields that $$\begin{aligned} &\limsup_{T \to \infty}R_{inv,\psi, T}^{*}(\varphi,\{(P_{inv,\psi }(\varphi,Q,\mathcal{C})+\delta)||\psi||\tau m_n(x)\}_{n\in G_T},Q,\mathcal{C})<\infty.\end{aligned}$$ Letting $\delta \to 0$, we get lhs $\geq$ rhs. ◻ Now, we give the proof of Theorem 1.1. *Proof of Theorem [Theorem 1](#thm 1.1){reference-type="ref" reference="thm 1.1"}.* We divide the proof into two steps. Step 1. We show that the equation $P_{inv}(\varphi-\beta \psi,Q,\mathcal{C})=0$ has a unique root. Consider the map $$\Phi:\beta \in \mathbb{R} \longmapsto \Phi(\beta):=P_{inv}(\varphi-\beta \psi,Q,\mathcal{C}).$$ By Remark [Remark 5](#rem 2.2){reference-type="ref" reference="rem 2.2"}, $\Phi(\beta)$ is finite for all $\beta \in \mathbb{R}$. We show that $\Phi$ is a continuous and strictly decreasing function on $\mathbb{R}$. Let $\beta_1,\beta_2\in \mathbb{R}$, $n\in \mathbb{N}$ and $F_n$ be a $(\mathcal{C},n,Q)$-separated set. Then $$\begin{aligned} \label{inequ 2.10} &\sum_{x\in F_n} e^{{S_{n\tau}(\varphi -\beta_2 \psi)(\omega_{\mathcal{C}_{[0,n)}(x)})}-|\beta_1-\beta_2|n\tau\cdot||\psi||}\\ \leq&\sum_{x\in F_n} e^{{S_{n\tau}(\varphi -\beta_1 \psi)(\omega_{\mathcal{C}_{[0,n)}(x)})}}\\ \leq&\sum_{x\in F_n} e^{{S_{n\tau}(\varphi -\beta_2 \psi)(\omega_{\mathcal{C}_{[0,n)}(x)})}+|\beta_1-\beta_2|n\tau\cdot||\psi||},\end{aligned}$$ which implies that $$\begin{aligned} P_{inv}(\varphi-\beta_2 \psi,Q,\mathcal{C})-|\beta_1-\beta_2|||\psi||&\leq P_{inv}(\varphi-\beta_1 \psi,Q,\mathcal{C})\nonumber\\ &\leq P_{inv}(\varphi-\beta_2 \psi,Q,\mathcal{C})+|\beta_1-\beta_2|||\psi||.\end{aligned}$$ Hence the continuity of $\Phi$ follows from the following inequality $$| P_{inv}(\varphi-\beta_1 \psi,Q,\mathcal{C})-P_{inv}(\varphi-\beta_2 \psi,Q,\mathcal{C})|\leq||\psi|||\beta_1-\beta_2|.$$ For $\beta_1, \beta_2 \in \mathbb{R}$ with $\beta_1<\beta_2$, one can similarly obtain that $$\begin{aligned} \label{equ 2.11} P_{inv}(\varphi-\beta_2 \psi,Q,\mathcal{C})\leq P_{inv}(\varphi-\beta_1 \psi,Q,\mathcal{C})-(\beta_2-\beta_1)m.\end{aligned}$$ So the map $\Phi$ is strictly decreasing. According to the possible values of $P_{inv}(\varphi,Q,\mathcal{C})$, we have the following three cases. 1. If $P_{inv}(\varphi,Q,\mathcal{C})=0$, then $0$ is exactly the unique root of the equation $\Phi(\beta)=0$. 2. If $P_{inv}(\varphi,Q,\mathcal{C})>0$, taking $\beta_1=0$ and $\beta_2=h>0$ in ([\[equ 2.11\]](#equ 2.11){reference-type="ref" reference="equ 2.11"}), then $$P_{inv}(\varphi-h \psi,Q,\mathcal{C})\leq P_{inv}(\varphi,Q,\mathcal{C})-hm.$$ The intermediate value theorem for continuous functions and the fact that $\Phi$ is strictly decreasing ensure that the equation $\Phi(\beta)=0$ has a unique root $\beta^{*}$ satisfying $0<\beta^{*} \leq \frac{P_{inv}(\varphi,Q,\mathcal{C})}{m}$. 3.
If $P_{inv}(\varphi,Q,\mathcal{C})<0$, taking $\beta_1=h<0$ and $\beta_2=0$ in ([\[equ 2.11\]](#equ 2.11){reference-type="ref" reference="equ 2.11"}) again, then $$P_{inv}(\varphi,Q,\mathcal{C})-hm\leq P_{inv}(\varphi-h\psi,Q,\mathcal{C}).$$ Similarly, we know that the equation $\Phi(\beta)=0$ has a unique root $\beta^{*}$ satisfying $\frac{P_{inv}(\varphi,Q,\mathcal{C})}{m}\leq\beta^{*}<0.$ To sum up, the equation $\Phi(\beta)=0$ has a unique (finite) root. Step 2. We show that $P_{inv,\psi }(\varphi,Q,\mathcal{C})$ is the unique root of the equation $\Phi(\beta)=0$. Since $\Phi$ is continuous and strictly decreasing, we have $$\begin{aligned} \label{equ 3.11} \begin{split} &\inf\{\beta\in \mathbb{R}:P_{inv}(\varphi-\beta \psi,Q,\mathcal{C})<0\}\\ =&\inf\{\beta\in \mathbb{R}:P_{inv}(\varphi-\beta \psi,Q,\mathcal{C})\leq0\}\\ =&\sup\{\beta\in \mathbb{R}:P_{inv}(\varphi-\beta \psi,Q,\mathcal{C})\geq0\}. \end{split}\end{aligned}$$ Hence it suffices to show that $$\begin{aligned} \label{equ 3.12} P_{inv,\psi }(\varphi,Q,\mathcal{C})=\inf\{\beta\in \mathbb{R}:P_{inv}(\varphi-\beta \psi,Q,\mathcal{C})\leq0\}.\end{aligned}$$ We first show that $$\begin{aligned} \label{inequ 2.12} P_{inv,\psi }(\varphi,Q,\mathcal{C})\geq\inf\{\beta\in \mathbb{R}:P_{inv}(\varphi-\beta \psi,Q,\mathcal{C})\leq0\}.\end{aligned}$$ By Theorem [Theorem 8](#thm 2.5){reference-type="ref" reference="thm 2.5"}, we need to show that for any $\beta \in \{s \in \mathbb{R}: \limsup_{T \to \infty}\limits R_{inv,\psi, T}(\varphi-s\psi,Q,\mathcal{C})<\infty\},$ one has $$P_{inv}(\varphi-\beta \psi,Q,\mathcal{C})\leq0.$$ Let $M:=\limsup_{T \to \infty} \limits R_{inv,\psi, T}(\varphi-\beta\psi,Q,\mathcal{C})$. Then there exists $T_0\in \mathbb{N}$ so that for all $T\geq T_0$, $$R_{inv,\psi, T}(\varphi-\beta\psi,Q,\mathcal{C})<M+1.$$ Using Definition [Definition 4](#def 2.1){reference-type="ref" reference="def 2.1"}, one can choose a subsequence $n_j$ that converges to $\infty$ as $j \to \infty$ such that $$\begin{aligned} P_{inv}(\varphi-\beta\psi,Q,\mathcal{C})=\lim_{j\to \infty }\frac{1}{n_j\tau}\log m(\varphi-\beta\psi,Q,n_j,\mathcal{C}).\end{aligned}$$ Fix $T\geq T_0$. Then there exists $n_j>T$ such that $S_{n_j\tau}\psi({\omega_{\mathcal{C}_{[0,n_j)}(x)}})>T\tau$ for all $x\in Q$. So $n_j \in G_T$ and $Y_{n_j}=Q$. Let $F_{n_j}$ be a $(\mathcal{C},n_j,Q)$-separated set. Then $$\sum_{x\in F_{n_j}}\limits e^{S_{n_j\tau}(\varphi -\beta \psi)(\omega_{\mathcal{C}_{[0,n_j)}(x)})}<M+1,$$ which yields that $m(\varphi-\beta\psi,Q,n_j,\mathcal{C})\leq M+1$. This shows that $P_{inv}(\varphi-\beta \psi,Q,\mathcal{C})\leq0.$ We continue to show that $$\begin{aligned} \label{inequ 2.13} P_{inv,\psi }(\varphi,Q,\mathcal{C})\leq\inf\{\beta\in \mathbb{R}:P_{inv}(\varphi-\beta \psi,Q,\mathcal{C})<0\}.\end{aligned}$$ Let $\beta \in \mathbb{R}$ with $P_{inv}(\varphi-\beta \psi,Q,\mathcal{C})=2a<0.$ Then there is $N_0$ such that for $n\geq N_0$, $$\sup\left\{\sum_{x\in F_n}\limits e^{S_{n\tau}(\varphi-\beta \psi)({\omega_{\mathcal{C}_{[0,n)}(x)}})}: F_n ~\mbox{is a } (\mathcal{C},n,Q)\mbox{-separated set}\right\}<e^{an\tau}.$$ Fix a sufficiently large $T$ so that $n\geq N_0$ for each $n\in G_T$. 
Then $$\begin{aligned} R_{inv,\psi, T}(\varphi-\beta\psi,Q,\mathcal{C})&\leq\sum_{n\geq N_0}\sup_{F_n}\sum_{x\in F_n}\limits e^{S_{n\tau}(\varphi-\beta \psi)({\omega_{\mathcal{C}_{[0,n)}(x)}})}\\ &\leq \sum_{n\geq N_0}e^{an\tau}<\infty.\end{aligned}$$ Hence $\limsup_{T \to \infty} R_{inv,\psi, T}(\varphi-\beta\psi,Q,\mathcal{C})<\infty$ and $$\begin{aligned} \begin{split} &\inf\{\beta\in \mathbb{R}:P_{inv}(\varphi-\beta \psi,Q,\mathcal{C})<0\}\\ \geq &\inf\{\beta \in \mathbb{R}: \limsup_{T \to \infty} R_{inv,\psi, T}(\varphi-\beta\psi,Q,\mathcal{C})<\infty\}\\ =&P_{inv,\psi }(\varphi,Q,\mathcal{C}),~~~~\text{by Theorem \ref{thm 2.5}}. \end{split}\end{aligned}$$ By inequalities ([\[equ 3.11\]](#equ 3.11){reference-type="ref" reference="equ 3.11"}), ([\[inequ 2.12\]](#inequ 2.12){reference-type="ref" reference="inequ 2.12"}) and ([\[inequ 2.13\]](#inequ 2.13){reference-type="ref" reference="inequ 2.13"}), we get ([\[equ 3.12\]](#equ 3.12){reference-type="ref" reference="equ 3.12"}). ◻ # Bowen's equation for Pesin-Pitskel invariance pressure In this section, we give the proofs of Theorem 1.2 in Subsection 4.1 and of Theorem 1.3 in Subsection 4.2. ## Pesin-Pitskel invariance pressure on subsets In this subsection, we first recall the definition of Pesin-Pitskel invariance pressure on subsets [@zh19; @z20], and then introduce a new notion called BS invariance dimension to establish Bowen's equation for Pesin-Pitskel invariance pressure. **Definition 9**. *Let $Q$ be a controlled invariant set and $\mathcal C =(\mathcal A, \tau,\nu)$ be an invariant partition of $Q$, and let $\varphi\in C(U,\mathbb{R})$, $Z\subset Q, \lambda\in \mathbb{R}, N\in \mathbb{N}$. Put $$\begin{aligned} \label{equ 3.1} M_{\mathcal{C}}(\varphi,Z,Q,\lambda,N)=\inf\left\{\sum_{i\in I}\limits e^{-n_i \lambda\tau+S_{n_i\tau}\varphi({\omega_{\mathcal{C}_{[0,n_i)}(x_i)}})}\right\},\end{aligned}$$ where the infimum is taken over all finite or countable families $\{Q_{n_i}(x_i,\mathcal{C})\}_{i\in I}$ such that $x_i \in Q$, $n_i \geq N$ and $\cup_{i\in I}Q_{n_i}(x_i,\mathcal{C})\supset Z$.* *Notice that the quantity $M_{\mathcal{C}}(\varphi,Z,Q,\lambda,N)$ is non-decreasing as $N$ increases. We define $$M_{\mathcal{C}}(\varphi,Z,Q,\lambda)=\lim_{N \to \infty }M_{\mathcal{C}}(\varphi,Z,Q,\lambda,N).$$* *It is easy to check that $M_{\mathcal{C}}(\varphi,Z,Q,\lambda)$ has a critical value of the parameter $\lambda$ jumping from $\infty$ to $0$. The critical value defined by $$\begin{aligned} P_{\mathcal{C}}(\varphi,Z,Q):&=\inf\{\lambda:M_{\mathcal{C}}(\varphi,Z,Q,\lambda)=0\}\\ &=\sup\{\lambda:M_{\mathcal{C}}(\varphi,Z,Q,\lambda)=\infty\} \end{aligned}$$ is said to be the *Pesin-Pitskel invariance pressure of $\varphi$ on $Z$ w.r.t. $\mathcal{C}$*.* *When $\varphi=0$, ${\rm dim}_{\mathcal{C}}(Z,Q):=P_{\mathcal{C}}(0,Z,Q)$ reduces to the Bowen invariance entropy introduced by Huang and Zhong [@hz18].* **Remark 10**. *Actually, Pesin-Pitskel invariance pressure can be alternatively defined by using admissible words [@zh19; @z20]. Put $$\Lambda_{\mathcal{C}}(\varphi,Z,Q,\lambda,N)=\inf_{\mathcal G}\left\{\sum_{s\in \mathcal{G}}\limits e^{-l(s) \lambda\tau+S_{l(s)\tau}\varphi(\omega_s)}\right\},$$ where the infimum is taken over all finite or countable families $\mathcal{G}$ of admissible words such that $l(s)\geq N$ and $\cup_{s\in \mathcal{G}}\mathcal{C}_s(Q)\supset Z$.* *Let $\Lambda_{\mathcal{C}}(\varphi,Z,Q,\lambda)=\lim_{N \to \infty }\Lambda_{\mathcal{C}}(\varphi,Z,Q,\lambda,N)$. 
We define the critical value of $\Lambda_{\mathcal{C}}(\varphi,Z,Q,\lambda)$ w.r.t. $\lambda$ as $$\begin{aligned} P_{\mathcal{C}}(\varphi,Z,Q)&=\inf\{\lambda:\Lambda_{\mathcal{C}}(\varphi,Z,Q,\lambda)=0\}\\ &=\sup\{\lambda:\Lambda_{\mathcal{C}}(\varphi,Z,Q,\lambda)=+\infty\}. \end{aligned}$$* Fix a non-empty subset $Z\subset Q$, an invariant partition $\mathcal C =(\mathcal A, \tau,\nu)$ of $Q$ and $\varphi \in C(U,\mathbb{R})$. We investigate the existence and uniqueness of the root of the equation associated with Pesin-Pitskel invariance pressure $$\Phi(t)=P_{\mathcal{C}}(t\varphi,Z,Q)=0.$$ **Proposition 11**. *Let $Q$ be a controlled invariant set and $\mathcal C =(\mathcal A, \tau,\nu)$ be an invariant partition of $Q$, and let $\varphi\in C(U,\mathbb{R})$ with $\varphi<0$ and $Z$ be a non-empty subset of $Q$. Then the equation $\Phi(t)=0$ has a unique (finite) root $t^{*}$ satisfying $$-\frac{1}{m} {\rm dim}_{\mathcal{C}}(Z,Q) \leq t^{*} \leq -\frac{1}{M}{\rm dim}_{\mathcal{C}}(Z,Q),$$ where $m=\min_{u\in U}\varphi(u)$ and $M=\max_{u\in U}\varphi(u)<0$.* *Proof.* We first show that $P_{\mathcal{C}}(\varphi,Z,Q)$ is finite for any $\varphi \in C(U,\mathbb{R}).$ Let $\varphi \in C(U,\mathbb{R})$. Then for any $x\in Q$ and $n\in \mathbb{N}$, one has $|S_{n\tau}\varphi({\omega_{\mathcal{C}_{[0,n)}(x)}})|\leq n\tau||\varphi||$. Since $$\begin{aligned} M_{\mathcal{C}}(\varphi,Z,Q,-||\varphi||,N)&=\inf\left\{\sum_{i\in I}\limits e^{||\varphi||n_i\tau+ S_{n_i\tau}\varphi({\omega_{\mathcal{C}_{[0,n_i)}(x_i)}})}\right\}\\ &\geq \inf\left\{\sum_{i\in I}\limits e^{||\varphi||n_i\tau-n_i\tau||\varphi||}\right\}>1,\end{aligned}$$ where the infimum is taken over all finite or countable families $\{Q_{n_i}(x_i,\mathcal{C})\}_{i\in I}$ such that $x_i \in Q$, $n_i \geq N$ and $\cup_{i\in I}Q_{n_i}(x_i,\mathcal{C})\supset Z$, we have $P_{\mathcal{C}}(\varphi,Z,Q)\geq-||\varphi||>-\infty$. On the other hand, given $N\in\mathbb{N}$, note that $Z$ can be covered by at most $(\#\mathcal{A})^N$ cylinder sets of length $N$. Fix such a family $\{Q_N(x_i,\mathcal{C})\}_{i\in I}$. We have $$\begin{aligned} M_{\mathcal{C}}(\varphi,Z,Q,\frac{\log \#\mathcal{A}}{\tau}+||\varphi||,N)\leq\sum_{i\in I}\limits e^{-(\frac{\log \#\mathcal{A}}{\tau}+||\varphi||)N\tau+N\tau||\varphi||}\leq 1,\end{aligned}$$ which gives us $P_{\mathcal{C}}(\varphi,Z,Q)\leq \frac{\log \#\mathcal{A}}{\tau}+||\varphi||<\infty.$ We show that $\Phi$ is continuous and strictly decreasing. Let $t_1, t_2\in \mathbb{R}$ with $t_1>t_2$. Given $N\in \mathbb{N}$ and a family $\{Q_{n_i}(x_i,\mathcal{C})\}_{i\in I}$ such that $x_i \in Q$, $n_i \geq N$ and $\cup_{i\in I}Q_{n_i}(x_i,\mathcal{C})\supset Z$, one has $$\begin{aligned} &\sum_{i\in I}\limits e^{-n_i \lambda\tau+t_2S_{n_i\tau}\varphi({\omega_{\mathcal{C}_{[0,n_i)}(x_i)}})+(t_1-t_2)n_im\tau}\\ \leq&\sum_{i\in I}\limits e^{-n_i \lambda\tau+t_1S_{n_i\tau}\varphi({\omega_{\mathcal{C}_{[0,n_i)}(x_i)}})}\\ \leq&\sum_{i\in I}\limits e^{-n_i \lambda\tau+t_2S_{n_i\tau}\varphi({\omega_{\mathcal{C}_{[0,n_i)}(x_i)}})+(t_1-t_2)n_iM\tau}.\end{aligned}$$ This implies that $$\begin{aligned} \label{4.1} P_{\mathcal{C}}(t_1\varphi,Z,Q)\leq P_{\mathcal{C}}(t_2\varphi,Z,Q)+(t_1-t_2)M\end{aligned}$$ and $$\begin{aligned} \label{4.2} P_{\mathcal{C}}(t_2\varphi,Z,Q)+(t_1-t_2)m \leq P_{\mathcal{C}}(t_1\varphi,Z,Q).\end{aligned}$$ So $\Phi$ is continuous and strictly decreasing. Put $t_1=h>0, t_2=0$ in ([\[4.1\]](#4.1){reference-type="ref" reference="4.1"}). 
Noting that $P_{\mathcal{C}}(0\cdot\varphi,Z,Q)={\rm dim}_{\mathcal{C}}(Z,Q)\geq 0$, then $$P_{\mathcal{C}}(h\varphi,Z,Q)\leq {\rm dim}_{\mathcal{C}}(Z,Q)-h(-M).$$ Again, putting $t_1=h>0, t_2=0$ in ([\[4.2\]](#4.2){reference-type="ref" reference="4.2"}), then $$P_{\mathcal{C}}(h\varphi,Z,Q)\geq {\rm dim}_{\mathcal{C}}(Z,Q)-h(-m).$$ Using the intermediate value theorem of continuous function and the fact that $\Phi$ is a continuous and strictly decreasing, the equation $\Phi(t)=0$ has unique root $t^{*}$ satisfying $-\frac{1}{m} {\rm dim}_{\mathcal{C}}(Z,Q) \leq t^{*} \leq -\frac{1}{M}{\rm dim}_{\mathcal{C}}(Z,Q)$. ◻ BS dimension was originally introduced by Barreira and Schmeling [@bs00] to find the root of the Bowen's equation defined by topological pressure of additive potentials in topological dynamical systems. Here, we borrow the ideas used in [@bs00; @hz18] to define BS invariance dimension on subsets of partitions in the context of control systems. The new notion allows us to obtain the precise root of the equation $\Phi(t)=0$ in Proposition [Proposition 11](#prop 3.3){reference-type="ref" reference="prop 3.3"}. **Definition 13**. *Let $Q$ be a controlled invariant set and $\mathcal C =(\mathcal A, \tau,\nu)$ be an invariant partition of $Q$, and let $\varphi\in C(U,\mathbb{R})$ with $\varphi >0$, $Z\subset Q, \lambda\in \mathbb{R}, N\in \mathbb{N}$. Define $$R_{\mathcal{C}}(\varphi,Z,Q,\lambda,N)=\inf\left\{\sum_{i\in I}\limits e^{-\lambda S_{n_i\tau}\varphi({\omega_{\mathcal{C}_{[0,n_i)}(x_i)}})}\right\},$$ where the infimum is taken over all finite or countable families $\{Q_{n_i}(x_i,\mathcal{C})\}_{i\in I}$ such that $x_i \in Q$, $n_i \geq N$ and $\cup_{i\in I}Q_{n_i}(x_i,\mathcal{C})\supset Z$.* *Since $R_{\mathcal{C}}(\varphi,Z,Q,\lambda,N)$ is non-decreasing as $N$ increases, so the limit $$R_{\mathcal{C}}(\varphi,Z,Q,\lambda)=\lim_{N\to \infty}R_{\mathcal{C}}(\varphi,Z,Q,\lambda,N).$$ exists.* ***Proposition 12**. * 1. **If $R_{\mathcal{C}}(\varphi,Z,Q,\lambda_0,N)<\infty$ for some $\lambda_0$, then $R_{\mathcal{C}}(\varphi,Z,Q,\lambda,N)=0$ for all $\lambda>\lambda_0$.** 2. **If $R_{\mathcal{C}}(\varphi,Z,Q,\lambda_0,N)=\infty$ for some $\lambda_0$, then $R_{\mathcal{C}}(\varphi,Z,Q,\lambda,N)=\infty$ for all $\lambda<\lambda_0$.** **Proof.* It suffices to show (1). Let $N\in \mathbb{N}$ and $\lambda >\lambda_0$. Given a family $\{Q_{n_i}(x_i,\mathcal{C})\}_{i\in I}$ such that $x_i \in Q$, $n_i \geq N$ and $\cup_{i\in I }Q_{n_i}(x_i,\mathcal{C})\supset Z$, then $$\begin{aligned} &\sum_{i\in I}\limits e^{-\lambda S_{n_i\tau}\varphi({\omega_{\mathcal{C}_{[0,n_i)}(x_i)}})}\\ =&\sum_{i\in I}\limits e^{-\lambda_0S_{n_i\tau}\varphi({\omega_{\mathcal{C}_{[0,n_i)}(x_i)}})+(\lambda_0-\lambda)S_{n_i\tau}\varphi({\omega_{\mathcal{C}_{[0,n_i)}(x_i)}})}\\ \leq&\sum_{i\in I}\limits e^{-\lambda_0S_{n_i\tau}\varphi({\omega_{\mathcal{C}_{[0,n_i)}(x_i)}})+(\lambda_0-\lambda)Nm\tau},\end{aligned}$$ where $m=\min_{u\in U}\varphi(u)>0$. This shows that $R_{\mathcal{C}}(\varphi,Z,Q,\lambda,N)\leq R_{\mathcal{C}}(\varphi,Z,Q,\lambda_0)e^{(\lambda_0-\lambda)Nm\tau}$. Letting $N \to \infty$ gives us the desired result. ◻* *Proposition [Proposition 12](#prop 4.5){reference-type="ref" reference="prop 4.5"} tells us that $R_{\mathcal{C}}(\varphi,Z,Q,\lambda)$ has a critical value of the parameter $\lambda$ that jumps from $\infty$ to 0. 
The critical value ${\rm dim}_{\mathcal{C}}^{BS}(\varphi,Z,Q)$ defined by $$\begin{aligned} {\rm dim}_{\mathcal{C}}^{BS}(\varphi,Z,Q)&=\inf\{\lambda: R_{\mathcal{C}}(\varphi,Z,Q,\lambda)=0\}\\ &=\sup\{\lambda:R_{\mathcal{C}}(\varphi,Z,Q,\lambda)=\infty\}\end{aligned}$$ is said to be the *BS invariance dimension of $\varphi$ on $Z$ w.r.t. $\mathcal{C}$*.* **Proposition 14**. 1. *If $\varphi =1$, then ${\rm dim}_{\mathcal{C}}^{BS}(1,K,Q)={\rm dim}_{\mathcal{C}}(K,Q)$.* 2. *If $Z_1\subset Z_2\subset Q$, then $0\leq {\rm dim}_{\mathcal{C}}^{BS}(\varphi,Z_1,Q)\leq {\rm dim}_{\mathcal{C}}^{BS}(\varphi,Z_2,Q)$.* 3. *If $\lambda \geq 0$ and $Z, Z_i\subset Q, i\geq1$ with $Z=\cup_{i\geq 1}Z_i$, then $$R_{\mathcal{C}}(\varphi,Z,Q,\lambda)\leq \sum_{i\geq1}\limits R_{\mathcal{C}}(\varphi,Z_i,Q,\lambda)$$ and $${\rm dim}_{\mathcal{C}}^{BS}(\varphi,Z,Q)=\sup_{i\geq 1}{\rm dim}_{\mathcal{C}}^{BS}(\varphi,Z_i,Q).$$* Now, we give the proof of Theorem [Theorem 2](#thm 1.2){reference-type="ref" reference="thm 1.2"}. *Proof of Theorem [Theorem 2](#thm 1.2){reference-type="ref" reference="thm 1.2"}.* By the definitions, for each $N$, $$R_{\mathcal{C}}(\varphi,Z,Q,\lambda,N)=M_{\mathcal{C}}(-\lambda\varphi,Z,Q,0,N)$$ and hence $R_{\mathcal{C}}(\varphi,Z,Q,\lambda)=M_{\mathcal{C}}(-\lambda\varphi,Z,Q,0).$ Let $\lambda > {\rm dim}_{\mathcal{C}}^{BS}(\varphi,Z,Q)$. Then $M_{\mathcal{C}}(-\lambda\varphi,Z,Q,0)=R_{\mathcal{C}}(\varphi,Z,Q,\lambda)<1.$ So $P_{\mathcal{C}}(-\lambda\varphi,Z,Q)\leq 0$. By the proof of Proposition [Proposition 11](#prop 3.3){reference-type="ref" reference="prop 3.3"}, the continuity of $\Phi$ gives us $$P_{\mathcal{C}}(- {\rm dim}_{\mathcal{C}}^{BS}(\varphi,Z,Q)\cdot \varphi,Z,Q)\leq 0$$ by letting $\lambda \to {\rm dim}_{\mathcal{C}}^{BS}(\varphi,Z,Q)$. One can similarly obtain that $$P_{\mathcal{C}}(-{\rm dim}_{\mathcal{C}}^{BS}(\varphi,Z,Q)\cdot \varphi,Z,Q)\geq0.$$ Together with Proposition [Proposition 11](#prop 3.3){reference-type="ref" reference="prop 3.3"}, this shows that ${\rm dim}_{\mathcal{C}}^{BS}(\varphi,Z,Q)$ is the unique root of the equation $\Phi(t)=0$. ◻ The following corollary shows that the BS invariance dimension of $\psi$ on $Q$ w.r.t. $\mathcal{C}$ is a special case of the $\psi$-induced invariance pressure of $0$ on $Q$ w.r.t. $\mathcal{C}$. **Corollary 15**. *Let $\mathcal C =(\mathcal A, \tau,\nu)$ be an invariant partition of $Q$, and $\psi\in C(U,\mathbb{R})$ with $\psi>0$. If the infimum in Eq. ([\[equ 3.1\]](#equ 3.1){reference-type="ref" reference="equ 3.1"}) is taken over all finite families, then $${\rm dim}_{\mathcal{C}}^{BS}(\psi,Q,Q)=P_{inv,\psi }(0,Q,\mathcal{C}).$$* *Proof.* Taking $\varphi =0$ in Theorem [Theorem 1](#thm 1.1){reference-type="ref" reference="thm 1.1"} and using the fact that $P_{\mathcal{C}}(f,Q,Q)=P_{inv}(f,Q,\mathcal{C})$ for any $f\in C(U,\mathbb{R})$, obtained in [@zh19 Theorem 1], we deduce that $$P_{inv}(-P_{inv,\psi }(0,Q,\mathcal{C})\cdot \psi,Q,\mathcal{C})=P_{\mathcal{C}}(-P_{inv,\psi }(0,Q,\mathcal{C})\cdot \psi,Q,Q)=0.$$ By Theorem [Theorem 2](#thm 1.2){reference-type="ref" reference="thm 1.2"}, the uniqueness of the root of the equation implies that ${\rm dim}_{\mathcal{C}}^{BS}(\psi,Q,Q)=P_{inv,\psi }(0,Q,\mathcal{C})$. ◻ We remark that $Q_n(x,\mathcal{C})$ may not be an "open ball\". 
To get $P_{\mathcal{C}}(f,Q,Q)=P_{inv}(f,Q,\mathcal{C})$ for any $f\in C(U,\mathbb{R})$, Huang and Zhong [@hz18; @zh19] modified the condition to avoid this case. This is the essential difference between control systems and topological dynamical systems. ## Variational principle for BS invariance dimension This subsection is devoted to establishing a variational principle for the BS invariance dimension. The proof of Theorem [Theorem 3](#thm 1.3){reference-type="ref" reference="thm 1.3"} is inspired by the work of [@fh12; @wc12; @whs19; @z20]. By $M(Q)$ we denote the set of Borel probability measures on $Q$, and by $C(Q)$ we denote the set of all real-valued continuous maps on $Q$ equipped with the supremum norm. **Definition 16**. *Let $Q$ be a controlled invariant set and $\mathcal C =(\mathcal A, \tau,\nu)$ be an invariant partition of $Q$, and let $\varphi\in C(U,\mathbb{R})$ with $\varphi >0$. Given $\mu \in M(Q)$, we define the *measure-theoretic lower BS invariance pressure of $\varphi$ w.r.t. $\mathcal{C}$* as $$\begin{aligned} \underline{h}_{\mu,inv}(\varphi,Q,\mathcal{C})&=\int_Q \liminf_{n \to \infty}-\frac{\log \mu(Q_n(x,\mathcal{C}))}{S_{n\tau}\varphi({\omega_{\mathcal{C}_{[0,n)}(x)}})}d\mu. \end{aligned}$$* **Remark 17**. 1. *Fix $n$. Put $f_n(x):=\mu(Q_n(x,\mathcal{C}))$ and $g_n(x):=S_{n\tau}\varphi({\omega_{\mathcal{C}_{[0,n)}(x)}})$. Note that both $f_n(x)$ and $g_n(x)$ only take finitely many values and hence are Borel measurable, since $Q_n(x,\mathcal{C})$ is Borel measurable. This shows that $\liminf_{n \to \infty}-\frac{\log f_n(x)}{g_n(x)}$ is Borel measurable.* 2. *For different $Q_n(x,\mathcal{C})$ with $x\in Q$, the weight is dynamically assigned to the quantity $S_{n\tau}\varphi({\omega_{\mathcal{C}_{[0,n)}(x)}})$. Hence if $\varphi=1$, $\underline{h}_{\mu,inv}(\varphi,Q,\mathcal{C})$ reduces to the measure-theoretic lower invariance entropy $\underline{h}_{\mu,inv}(Q,\mathcal{C})$ introduced by Wang et al. in [@whs19 Subsection 4.1].* The lower bound in Theorem [Theorem 3](#thm 1.3){reference-type="ref" reference="thm 1.3"} is elementary due to the definitions. To obtain the upper bound of ${\rm dim}_{\mathcal{C}}^{BS}(\varphi,K,Q)$, a common trick learned from geometric measure theory [@m95] is to define a positive, bounded linear functional and then use the Riesz representation theorem to produce the required measure. To this end, we introduce the weighted BS invariance dimension. **Definition 18**. *Let $Q$ be a controlled invariant set and $\mathcal C =(\mathcal A, \tau,\nu)$ be an invariant partition of $Q$, let $\psi$ be a non-negative bounded function on $Q$ and $\varphi\in C(U,\mathbb{R})$ with $\varphi >0$, $\lambda\in \mathbb{R}, N\in \mathbb{N}$. Define $$W_{\mathcal{C}}(\varphi,\psi,Q,\lambda,N)=\inf\left\{\sum_{i\in I}\limits c_i e^{-\lambda S_{n_i\tau}\varphi({\omega_{\mathcal{C}_{[0,n_i)}(x_i)}})}\right\},$$ where the infimum is taken over all finite or countable families\ $\{(Q_{n_i}(x_i,\mathcal{C}),c_i)\}_{i \in I}$ with $0<c_i<\infty$, $x_i \in Q$, $n_i \geq N$ such that $$\sum_{i\in I}\limits c_i \chi_{Q_{n_i}(x_i,\mathcal{C})}\geq \psi,$$ where $\chi_{A}$ denotes the characteristic function of $A$.* *Let $Z\subset Q$ and put $W_{\mathcal{C}}(\varphi,Z,Q,\lambda,N):=W_{\mathcal{C}}(\varphi,\chi_{Z},Q,\lambda,N)$. 
We define $$W_{\mathcal{C}}(\varphi,Z,Q,\lambda)=\lim_{N\to \infty}W_{\mathcal{C}}(\varphi,Z,Q,\lambda,N).$$* *Similar to Proposition [Proposition 12](#prop 4.5){reference-type="ref" reference="prop 4.5"}, there is a critical value of $\lambda$ so that $W_{\mathcal{C}}(\varphi,Z,Q,\lambda)$ jumps from $\infty$ to $0$. The critical value defined by $$\begin{aligned} {\rm dim}_{\mathcal{C}}^{WBS}(\varphi,Z,Q)&=\inf\{\lambda: W_{\mathcal{C}}(\varphi,Z,Q,\lambda)=0\},\\ &=\sup\{\lambda:W_{\mathcal{C}}(\varphi,Z,Q,\lambda)=+\infty\}.\end{aligned}$$ is called *weighted BS invariance dimension on $Z$ w.r.t. $\mathcal{C}$ and $\varphi$*.* **Proposition 19**. *Let $\mathcal C =(\mathcal A, \tau,\nu)$ be an invariant partition of $Q$ and $Z\subset Q$ be a non-empty subset, and let $\varphi\in C(U,\mathbb{R})$ with $\varphi >0$. Then for any $\lambda \geq 0$, $\epsilon >0$ and sufficiently large $N$, $$R_{\mathcal{C}}(\varphi,Z,Q,\lambda+\epsilon,N)\leq W_{\mathcal{C}}(\varphi,Z,Q,\lambda,N)\leq R_{\mathcal{C}}(\varphi,Z,Q,\lambda,N).$$ Consequently, ${\rm dim}_{\mathcal{C}}^{BS}(\varphi,Z,Q)={\rm dim}_{\mathcal{C}}^{WBS}(\varphi,Z,Q).$* *Proof.* It is clear that $W_{\mathcal{C}}(\varphi,Z,Q,\lambda,N)\leq R_{\mathcal{C}}(\varphi,Z,Q,\lambda,N)$. Take $N_0\in N$ so that $\frac{n^2}{e^{mn\epsilon\tau}}\leq1$ and $\sum_{n\geq N}\frac{1}{n^2}<1$ for all $N\geq N_0$, where $m=\min_{u\in U}\varphi(u)>0$. We need to show for any finite or countable family $\{(Q_{n_i}(x_i,\mathcal{C}),c_i)\}_{i \in I}$ with $0<c_i<\infty$, $x_i \in Q$, $n_i \geq N$ so that $\sum_{i\in I}\limits c_i \chi_{Q_{n_i}(x_i,\mathcal{C})}\geq \chi_{Z},$ one has $$R_{\mathcal{C}}(\varphi,Z,Q,\lambda+\epsilon,N)\leq \sum_{i\in I}\limits c_i e^{-\lambda S_{n_i\tau}\varphi({\omega_{\mathcal{C}_{[0,n_i)}(x_i)}})}.$$ Let $I_n=\{i\in I:n_i=n\}$. Without loss of generality, we assume that $Q_{n}(x_i,\mathcal{C})\cap Q_{n}(x_j,\mathcal{C})=\emptyset$ for all $i,j\in I_n$ with $i\not=j$. (Otherwise, one can replace $(Q_{n}(x_i,\mathcal{C}),c_i)$ by $(Q_{n}(x_i,\mathcal{C}),c_i+c_j)$). Let $s>0$, and let $$\begin{aligned} K_{n,s}&=\{x\in Z:\sum _{i\in I_n}c_i\chi_{Q_{n}(x_i,\mathcal{C})}(x)>s\}\\ I_{n,s}&=\{x\in I_n:Q_{n}(x_i,\mathcal{C})\cap K_{n,s}\not =\emptyset\}.\end{aligned}$$ Then $K_{n,s}\subset \cup_{i\in I_{n,s}}Q_{n}(x_i,\mathcal{C})$. Therefore, $$\begin{aligned} R_{\mathcal{C}}(\varphi,K_{n,s},Q,\lambda+\epsilon,N) &\leq \sum_{i\in I_{n,s}}e^{-(\lambda+\epsilon)S_{n\tau}\varphi(\omega_{\mathcal{C}_{[0,n)}(x_i)})}\\ &\leq \sum_{i\in I_{n,s}}\frac{c_i}{s}\cdot e^{-\lambda S_{n\tau}\varphi(\omega_{\mathcal{C}_{[0,n)}(x_i)})}\frac{1}{e^{\epsilon \cdot S_{n\tau}\varphi(\omega_{\mathcal{C}_{[0,n)}(x_i)})}}\\ &\leq \frac{1}{n^2s}\sum_{i\in I_{n}}c_i e^{-\lambda S_{n\tau}\varphi(\omega_{\mathcal{C}_{[0,n)}(x_i)})}.\end{aligned}$$ Let $s\in (0,1)$. If $x\in Z$, then we have $\sum_{n\geq N}\limits \sum_{i\in I_n}\limits c_i\geq1> \sum_{n\geq N}\limits \frac{1}{n^2} s$. So there exists $n\geq N$ such that $\sum_{i\in I_n}\limits c_i > \frac{1}{n^2} s$. This implies that $x\in K_{n,\frac{1}{n^2} s}$ and hence $Z\subset \cup_{n\geq N} K_{n,\frac{1}{n^2} s}$. 
Therefore, $$\begin{aligned} R_{\mathcal{C}}(\varphi,Z,Q,\lambda+\epsilon,N) &\leq \sum_{n\geq N} R_{\mathcal{C}}(\varphi,K_{n,\frac{1}{n^2} s},Q,\lambda+\epsilon,N)\\ &\leq \frac{1}{s}\sum_{n\geq N}\sum_{i\in I_{n}}c_i e^{-\lambda S_{n\tau}\varphi(\omega_{\mathcal{C}_{[0,n)}(x_i)})}\\ &= \frac{1}{s}\sum_{i\in I}\limits c_i e^{-\lambda S_{n_i\tau}\varphi({\omega_{\mathcal{C}_{[0,n_i)}(x_i)}})}.\end{aligned}$$ We finish the proof by letting $s\to 1$. ◻ The family $\mathcal{A}$ is said to be a *clopen partition* of $Q$ if each member of $\mathcal{A}$ is both open and closed in $Q$. Such an example can be found in [@whs19 Section 8]. The set $Q_n(x,\mathcal{C})$ is clopen if $\mathcal{A}$ is a clopen partition of $Q$, where the openness of $Q_n(x,\mathcal{C})$ enables us to apply the Urysohn lemma to obtain a BS Frostman lemma. **Lemma 20** (**BS Frostman's lemma**). *Let $\mathcal C =(\mathcal A, \tau,\nu)$ be a clopen invariant partition of $Q$. Let $K$ be a non-empty compact subset of $Q$ and $\lambda \geq0, N\in \mathbb{N}$, $\varphi\in C(U,\mathbb{R})$ with $\varphi >0$. Suppose that $c:=W_{\mathcal{C}}(\varphi,K,Q,\lambda,N)>0$. Then there exists a Borel probability measure $\mu \in M(Q)$ such that $\mu (K)=1$ and $$\mu(Q_n(x,\mathcal{C}))\leq \frac{1}{c}e^{-\lambda S_{n\tau}\varphi({\omega_{\mathcal{C}_{[0,n)}(x)}})}$$ holds for all $x\in Q, n\geq N$.* *Proof.* Clearly, $c<\infty$. Define a map $p$ on $C(Q)$ given by $$p(f)=\frac{1}{c}W_{\mathcal{C}}(\varphi,\chi_K\cdot f,Q,\lambda,N)$$ for any $f\in C(Q)$. It is easy to check that 1. $p(f+g)\leq p(f)+p(g)$ for any $f,g\in C(Q).$ 2. $p(kf)=kp(f)$ for any $k\geq 0$ and $f\in C(Q).$ 3. $p(\textbf{1})=1$, $p(f)\leq ||f||$ for any $f\in C(Q)$ and $p(g)=0$ for any $g\in C(Q)$ with $g\leq 0$, where $\textbf{1}$ denotes the constant function $\textbf{1}\equiv1$. By the Hahn-Banach theorem, we can extend the linear function $t\mapsto tp(\textbf{1})$, $t\in \mathbb{R}$, from the subspace of the constant functions to a linear functional $L:C(Q)\rightarrow \mathbb{R}$ satisfying $L(\textbf{1})=p(\textbf{1})=1$ and $-p(-f)\leq L(f)\leq p(f)$ for any $f\in C(Q)$. Let $f\in C(Q)$ with $f\geq0$. Then $p(-f)=0$ and hence $L(f)\geq 0$. By the Riesz representation theorem, there exists $\mu \in M(Q)$ such that $L(f)=\int_Q f d\mu$ for any $f\in C(Q)$. We show that $\mu(K)=1$. For any compact set $E\subset Q\backslash K$, by the Urysohn lemma there is a continuous function $0\leq f\leq 1$ on $Q$ such that $f|_E=1$ and $f|_K=0$. Then $\mu(E)\leq\int_Qfd\mu\leq p(f)=0$, which implies that $\mu(K)=1$. We proceed to show that $\mu(Q_n(x,\mathcal{C}))\leq \frac{1}{c}e^{-\lambda S_{n\tau}\varphi({\omega_{\mathcal{C}_{[0,n)}(x)}})}$ holds for all $x\in Q, n\geq N$. Let $n\geq N$ and $x\in Q$. For any compact set $E\subset Q_n(x,\mathcal{C})$, noting that $Q_n(x,\mathcal{C})$ is open since $\mathcal{C}$ is a clopen invariant partition of $Q$, the Urysohn lemma guarantees that there exists a continuous function $0\leq f\leq 1$ on $Q$ such that $f|_E=1$ and $f|_{Q\backslash Q_n(x,\mathcal{C})}=0$. Then $\mu(E)\leq\int_Qfd\mu\leq p(f)$ and $W_{\mathcal{C}}(\varphi,\chi_K\cdot f,Q,\lambda,N)\leq e^{-\lambda S_{n\tau}\varphi({\omega_{\mathcal{C}_{[0,n)}(x)}})}$ since $\chi_{K}\cdot f\leq\chi_{Q_{n}(x,\mathcal{C})}$. Hence, $$\mu(Q_n(x,\mathcal{C}))\leq \frac{1}{c}e^{-\lambda S_{n\tau}\varphi({\omega_{\mathcal{C}_{[0,n)}(x)}})}.$$ ◻ Finally, we give the proof of Theorem [Theorem 3](#thm 1.3){reference-type="ref" reference="thm 1.3"}. 
*Proof of Theorem [Theorem 3](#thm 1.3){reference-type="ref" reference="thm 1.3"}.* We first show that $\underline{h}_{\mu,inv}(\varphi,Q,\mathcal{C})\leq {\rm dim}_{\mathcal{C}}^{BS}(\varphi,K,Q)$ for every $\mu \in M(Q)$ with $\mu(K)=1$. We may assume that $\underline{h}_{\mu,inv}(\varphi,Q,\mathcal{C})>0$ (otherwise the inequality is trivial), and let $0<s<\underline{h}_{\mu,inv}(\varphi,Q,\mathcal{C})$. Define $$E=\{x\in Q:\liminf_{n \to \infty}-\frac{\log \mu(Q_n(x,\mathcal{C}))}{S_{n\tau}\varphi({\omega_{\mathcal{C}_{[0,n)}(x)}})}>s\}.$$ Then $\mu(E)>0$ and hence $\mu(E\cap K)>0$. We define $$E_N=\{x\in E\cap K:-\frac{\log \mu(Q_n(x,\mathcal{C}))}{S_{n\tau}\varphi({\omega_{\mathcal{C}_{[0,n)}(x)}})}> s ~\text{for any }n\geq N\}.$$ Then $E\cap K=\cup_{N\geq 1}E_N$ and $\mu(E_{N_0})>0$ for some $N_0\in \mathbb{N}$. Let $N\geq N_0$, and let $\{Q_{n_i}(x_i,\mathcal{C})\}_{i\in I}$ be a family such that $x_i \in Q$, $n_i \geq N$ and $\cup_{i\in I}Q_{n_i}(x_i,\mathcal{C})\supset E_{N_0}$. We may assume that $Q_{n_i}(x_i,\mathcal{C}) \cap E_{N_0}\not =\emptyset$ for each $i \in I$. Choose a point $y_i \in Q_{n_i}(x_i,\mathcal{C}) \cap E_{N_0}$ for each $i \in I$. Then $Q_{n_i}(x_i,\mathcal{C})=Q_{n_i}(y_i,\mathcal{C})$ and hence $\omega_{\mathcal{C}_{[0,n_i)}(x_i)}=\omega_{\mathcal{C}_{[0,n_i)}(y_i)}$. Therefore, $$\begin{aligned} \sum_{i\in I}e^{-sS_{n_i\tau}\varphi({\omega_{\mathcal{C}_{[0,n_i)}(x_i)}})}&=\sum_{i\in I}e^{-sS_{n_i\tau}\varphi(\omega_{\mathcal{C}_{[0,n_i)}(y_i)})}\\ &\geq\sum_{i\in I}\mu(Q_{n_i}(y_i,\mathcal{C})) \geq\mu(E_{N_0}).\end{aligned}$$ It follows that $R_{\mathcal{C}}(\varphi,E_{N_0},Q,s)\geq R_{\mathcal{C}}(\varphi,E_{N_0},Q,s,N)>0.$ This implies that $${\rm dim}_{\mathcal{C}}^{BS}(\varphi,K,Q)\geq {\rm dim}_{\mathcal{C}}^{BS}(\varphi,E_{N_0},Q)\geq s.$$ Letting $s\to \underline{h}_{\mu,inv}(\varphi,Q,\mathcal{C})$, we get ${\rm dim}_{\mathcal{C}}^{BS}(\varphi,K,Q)\geq \underline{h}_{\mu,inv}(\varphi,Q,\mathcal{C})$. We now show that $${\rm dim}_{\mathcal{C}}^{BS}(\varphi,K,Q)\leq \sup\{ \underline{h}_{\mu,inv}(\varphi,Q,\mathcal{C}):\mu \in M(Q), \mu (K)=1\}.$$ Let $0<s< {\rm dim}_{\mathcal{C}}^{BS}(\varphi,K,Q)$. By Proposition [Proposition 19](#prop 3.12){reference-type="ref" reference="prop 3.12"}, we have $W_{\mathcal{C}}(\varphi,K,Q,s)>0$ and hence there exists $N$ so that $c:=W_{\mathcal{C}}(\varphi,K,Q,s,N)>0$. By Lemma [Lemma 20](#prop 3.13){reference-type="ref" reference="prop 3.13"}, there exists $\mu \in M(Q)$ such that $\mu (K)=1$ and $$\mu(Q_n(x,\mathcal{C}))\leq \frac{1}{c}e^{-s S_{n\tau}\varphi({\omega_{\mathcal{C}_{[0,n)}(x)}})}$$ holds for all $x\in Q, n\geq N$. Then $$s \leq \underline{h}_{\mu,inv}(\varphi,Q,\mathcal{C})\leq \sup\{ \underline{h}_{\mu,inv}(\varphi,Q,\mathcal{C}):\mu \in M(Q), \mu (K)=1\}.$$ Letting $s \to {\rm dim}_{\mathcal{C}}^{BS}(\varphi,K,Q)$ completes the proof. ◻ Taking $\varphi =1$, we remark that Theorem [Theorem 3](#thm 1.3){reference-type="ref" reference="thm 1.3"} extends the previous variational principle for Bowen invariance entropy [@whs19 Theorem 6.4]. # Acknowledgement {#acknowledgement .unnumbered} The first author was supported by the Postgraduate Research $\&$ Practice Innovation Program of Jiangsu Province (No. KYCX23$\_$1665). The second author was supported by the National Natural Science Foundation of China (No.12071222). The third author was supported by the National Natural Science Foundation of China (No.11971236). The work was also funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions. 
We would like to express our gratitude to Tianyuan Mathematical Center in Southwest China(No.11826102), Sichuan University and Southwest Jiaotong University for their support and hospitality. HD82 L. Barreira, A non-additive thermodynamic formalism and applications to dimension theory of hyperbolic dynamical systems, *Erg. Th. Dynam. Syst.* **16** (1996), 871-927. L. Barreira and J. Schmeling, Sets of "non-typical" points have full topological entropy and full Hausdorff dimension, *Israel J. Math.* **116** (2000), 29-70. R. Bowen, Topological entropy for noncompact sets, *Trans. Amer. Math. Soc.* **184** (1973), 125-136. R. Bowen, Hausdorff dimension of quasi-circles, *Inst. Hautes Études Sci. Publ. Math.* **50** (1979), 11-25. M. Brin and A. Katok, On local entropy, Geometric dynamics (Rio de Janeiro). *Lecture Notes in Mathematics*, Springer, Berlin, **1007** (1983), 30-38. V. Climenhaga, Bowen's equation in the non-uniform setting, *Erg.Th. Dynam. Syst.* **31** (2011), 1163-1182. F. Colonius, Invariance entropy, quasi-stationary measures and control sets, *Discr. Contin. Dyn. Syst.* **38** (2018), 2093-2123. F. Colonius, Metric invariance entropy and conditionally invariant measures, *Erg.Th. Dynam. Syst.* **38** (2018), 921-939. F. Colonius, J. Cossich and A. Santana, Invariance pressure of control sets, *SIAM J. Control Optim.* **56** (2018), 4130-4147. F. Colonius and C. Kawan, Invariance entropy for control systems, *SIAM J. Control Optim.* **48** (2009), 1701-1721. F. Colonius, C. Kawan and G. Nair, A note on topological feedback entropy and invariance entropy, *Syst. Control Lett.* **62** (2013), 377-381. F. Colonius, A. Santana and J. Cossich, Invariance pressure for control systems, *J. Dyn. Diff. Equ.* **231** (2019), 1-23. F. Colonius and W. Kliemann, Some aspects of control systems as dynamical systems, *J. Dyn. Diff. Equ.* **5** (1993), 469-494. K. Falconer, Fractal geometry: mathematical foundations and applications, *John Wiley $\&$ Sons*, 2004. D. Feng and W. Huang, Variational principles for topological entropies of subsets, *J. Funct. Anal.* **263** (2012), 2228-2254. Y. Huang and X. Zhong, Carathéodory-Pesin structures associated with control systems, *Syst. Control Lett.* **112** (2018), 36-41. J. Jaerisch, M. Kesseböhmer and S. Lamei, Induced topological pressure for countable state Markov shifts, *Stoch. Dyn.* **14** (2014), 1350016, 31 pp. C. Kawan, Invariance entropy for deterministic control systems, Lecture Notes in Mathematics, 2013. P. Mattila, The geometry of sets and measures in Euclidean spaces, *Cambridge University Press*, 1995. G. Nair and R. Evans, I. Mareels and W. Moran, Topological feedback entropy and nonlinear stabilization, *IEEE Trans. Autom. Control* **49** (2004), 1585-1597. X. Nie, T. Wang and Y.Huang, Measure-theoretic invariance entropy and variational principles for control systems, *J. Diff. Equ.* **321** (2022), 318--348. Y.B. Pesin, Dimension theory in dynamical systems, University of Chicago Press, 1997. P. Walters, An introduction to ergodic theory, Springer, 1982. C. Wang and E. Chen, Variational principles for BS dimension of subsets, *Dyn. Syst.* **27** (2012), 359-385. T. Wang and Y. Huang, Inverse variational principles for control systems, *Nonlinearity* **35** (2022), 1610-1633. T. Wang, Y. Huang and H. Sun, Measure-theoretic invariance entropy for control systems, *SIAM J. Control Optim.* **57** (2019), 310-333. Q. Xiao and D. Ma, Topological pressure of free semigroup actions for non-compact sets and Bowen's equation, I, *J. 
Dyn. Diff. Equ.* **35** (2023), 199-236. Z. Xing and E. Chen, Induced topological pressure for topological dynamical systems, *J. Math. Phys.* **56** (2015), 022707, 10 pp. R. Yang, E. Chen and X. Zhou, Bowen's equations for upper metric mean dimension with potential, *Nonlinearity* **35** (2022), 4905-4938. X. Zhong, Variational principles of invariance pressures on partitions, *Discrete Contin. Dyn. Syst.* **40** (2020), 491-508. X. Zhong and Y. Huang, Invariance pressure dimensions for control systems, *J. Dyn. Diff. Equ.* **31** (2019), 2205-2222. X. Zhong, Y. Huang and H. Sun, BS invariance pressures for control systems, to appear in *J. Dyn. Diff. Equ.*, 2023. [^1]: \*corresponding author
{ "id": "2309.01628", "title": "Bowen's equations for invariance pressure of control systems", "authors": "Rui Yang, Ercai Chen, Jiao Yang and Xiaoyao Zhou", "categories": "math.OC cs.IT math.DS math.IT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- author: - | Pengcheng Tang$^{*,a}$\ * $^{a}$ Hunan University of Science and Technology, Xiangtan, Hunan 411201, China* title: Generalized Cesàro-like operator from a class of analytic function spaces to analytic Besov spaces --- **[ABSTRACT]{.ul}**     Let $\mu$ be a finite positive Borel measure on $[0,1)$ and $f(z)=\sum_{n=0}^{\infty}a_{n}z^{n} \in H(\mathbb{D})$. For $0<\alpha<\infty$, the generalized Cesàro-like operator $\mathcal{C}_{\mu,\alpha}$ is defined by $$\mathcal {C}_{\mu,\alpha}(f)(z)=\sum^\infty_{n=0}\left(\mu_n\sum^n_{k=0}\frac{\Gamma(n-k+\alpha)}{\Gamma(\alpha)(n-k)!}a_k\right)z^n, \ z\in \mathbb{D},$$ where, for $n\geq 0$, $\mu_n$ denotes the $n$-th moment of the measure $\mu$, that is, $\mu_n=\int_{0}^{1} t^{n}d\mu(t)$. For $s>1$, let $X$ be a Banach subspace of $H(\mathbb{D})$ with $\Lambda^{s}_{\frac{1}{s}}\subset X\subset\mathcal {B}$. In this paper, for $1\leq p <\infty$, we characterize the measure $\mu$ for which $\mathcal{C}_{\mu,\alpha}$ is bounded(or compact) from $X$ into analytic Besov space $B_{p}$. **Keywords:** Cesàro operator. Bloch space. Besov space. **MSC 2010:** 47B35, 30H30, 30H20 [^1] [^2] # Introduction {#Sec:Intro}       Let $\mathbb{D}=\{z\in \mathbb{C}:\vert z\vert <1\}$ denote the open unit disk of the complex plane $\mathbb{C}$ and $H(\mathbb{D})$ denote the space of all analytic functions in $\mathbb{D}$. $H^{\infty}$ denote the set of bounded analytical functions on $\mathbb{D}$. The Bloch space $\mathcal {B}$ consists of those functions $f\in H(\mathbb{D})$ for which $$\vert \vert f\vert \vert _{\mathcal {B}}=\vert f(0)\vert +\sup_{z\in \mathbb{D}}(1-\vert z\vert ^{2})\vert f'(z)\vert <\infty.$$ For $1<p<\infty$, the analytic Besov space $B_p$ consists of those functions $f\in H(\mathbb{D})$ such that $$\|f\|_{B_p}=|f(0)|+\left(\int_{\mathbb{D}}|f'(z)|^p(1-|z|)^{p-2}\mathrm{d}A(z)\right)^{\frac{1}{p}}<\infty,$$ where $dA(z)=\frac{dxdy}{\pi}$ is the normalized area measure on $\mathbb{D}$. When $p=2$, then $B_2$ is just the classic Dirichlet space. If $1<p_1<p_2<\infty$, then $B_{p_1}\subsetneq B_{p_2} \subsetneq\mathcal {B}$. It is known that the analytic Besov spaces are Möbius invariant and the Bloch space $\mathcal {B}$ is largest Möbius invariant space. The space $B_{1}$ consists of $f\in H(\mathbb{D})$ such that $$\int_{\mathbb{D}}|f''(z)|dA(z)<\infty.$$ We know that $B_{1}$ is the smallest Möbius invariant Banach spaces of analytic function in $\mathbb{D}$ and $B_{1}\subsetneq H^{\infty}$. See [@H14 Chapter 5] for the theory of these spaces. Let $1\leq p<\infty$ and $0<\alpha\leq 1$, the mean Lipschitz space $\Lambda^p_\alpha$ consists of those functions $f\in H(\mathbb{D})$ having a non-tangential limit almost everywhere such that $\omega_p(t, f)=O(t^\alpha)$ as $t\to 0$. Here $\omega_p(\cdot, f)$ is the integral modulus of continuity of order $p$ of the function $f(e^{i\theta})$. It is known (see [@b1]) that $\Lambda^p_\alpha$ is a subset of $H^p$ and $$\Lambda^p_\alpha=\left(f\in H(\mathbb{D}):M_p(r, f')=O\left(\frac{1}{(1-r)^{1-\alpha}}\right), \ \ \mbox{as}\ r\rightarrow 1\right).$$ The space $\Lambda^p_\alpha$ is a Banach space with the norm $||\cdot||_{\Lambda^p_\alpha}$ given by $$\|f\|_{\Lambda^p_\alpha}=|f(0)|+\sup_{0\leq r<1}(1-r)^{1-\alpha}M_p(r, f').$$ It is known (see e.g. [@bu]) that $$\Lambda^{p}_{\frac{1}{p}}\subsetneq \mathcal {B}. 
\ \ 1<p<\infty.$$ For $f(z)=\sum_{n=0}^\infty a_nz^n\in H(\mathbb{D})$, the Cesàro operator $\mathcal {C}$ is defined by $$\mathcal {C}(f)(z)=\sum_{n=0}^\infty\left(\frac{1}{n+1}\sum_{k=0}^n a_k\right)z^n, \ z\in\mathbb{D}.$$ The boundedness and compactness of the Cesàro operator $\mathcal {C}$ and its generalizations on various spaces of analytic functions such as Hardy spaces, Bergman spaces, Dirichlet spaces, Bloch space, $Q_{p}$ space, mixed norm space have been widely studied. See [@ces9; @ces8; @ces13; @ces15; @ces4; @ces3; @ces14; @ces5] and the references therein. Recently, Galanopoulos, Girela and Merchán [@ces1] introduced a Cesàro-like operator $\mathcal{C}_\mu$ on $H(\mathbb{D})$, which is a natural generalization of the classical Cesàro operator $\mathcal {C}$. They consider the following generalization: For a positive Borel measure $\mu$ on the interval $[0, 1)$ they define the operator $$\mathcal{C}_\mu(f)(z)=\sum^\infty_{n=0}\left(\mu_n\sum^n_{k=0}\widehat{f}(k)\right)z^n=\int_{0}^{1}\frac{f(tz)}{(1-tz)}d\mu(t), \ z\in\mathbb{D}. \eqno{(1.3)}$$ where $\mu_{n}$ stands for the moment of order $n$ of $\mu$, that is, $\mu_{n}=\int_{0}^{1}t^{n}d\mu(t)$. They studied the operators $\mathcal{C}_\mu$ acting on distinct spaces of analytic functions(e.g. Hardy space, Bergman space, Bloch space, etc.). The Cesàro-like operator $\mathcal{C}_\mu$ defined above has attracted the interest of many mathematicians. For instance, Jin and Tang [@ces6] studied the boundedness(compactness) of $\mathcal{C}_\mu$ from one Dirichlet-type space $\mathcal {D}_{\alpha}$ into another one $\mathcal {D}_{\beta}$. Bao, Sun and Wulan [@baoo] studied the range of $\mathcal{C}_\mu$ acting on $H^{\infty}$. Blasco [@blas] investigated the operators $\mathcal{C}_\mu$ induce by complex Borel measures on $[0, 1)$, and extended the results of [@ces1] to this more general case. The operators $\mathcal{C}_\mu$ associated to arbitrary complex Borel measures on $\mathbb{D}$ the reader is referred to [@04]. Bao et al. [@baoo] introduced a more general Cesàro-like operator: Suppose that $0<\alpha<\infty$ and $\mu$ is a finite positive Borel measure on $[0, 1)$. For $f(z)=\sum_{n=0}^{\infty}a_{n}z^{n} \in H(\mathbb{D})$, they defined $$\mathcal {C}_{\mu,\alpha}(f)(z)=\sum^\infty_{n=0}\left(\mu_n\sum^n_{k=0}\frac{\Gamma(n-k+\alpha)}{\Gamma(\alpha)(n-k)!}a_k\right)z^n, \ z\in \mathbb{D}.$$ A simple calculation with power series gives the integral form of $\mathcal{C}_{\mu,\alpha}$ as follows. $$\mathcal{C}_{\mu,\alpha}(f)(z)=\int_{0}^{1}\frac{f(tz)}{(1-tz)^{\alpha}}d\mu(t).$$ It is clear that $\mathcal {C}_{\mu,1}=\mathcal{C}_\mu$. For $1<s<\infty$, let $X$ be a Banach subspace of $H(\mathbb{D})$ with $\Lambda^{s}_{\frac{1}{s}}\subset X\subset\mathcal {B}$. There are many well known spaces located between the mean Lipschitz space $\Lambda^{s}_{\frac{1}{s}}$ and the Bloch space $\mathcal {B}$. In [@baoo], the authors investigated the range of $\mathcal{C}_{\mu,\alpha}$ acting on $H^{\infty}$. They proved that if $\max\{1,\frac{1}{\alpha}\}<s<\infty$, then $\mathcal{C}_{\mu,\alpha}(H^{\infty})\subset X$ if and only if $\mu$ is an $\alpha$-Carleson measure. Zhou [@05] considers the same problem for the measure $\mu$ supported on $\mathbb{D}$. Galanopoulos et al. [@03] studied the behaviour of the operators $\mathcal {C}_{\mu,1}$ on the Dirichlet space and on the analytic Besov spaces. Sun et al. [@sun] studied the operator $\mathcal {C}_{\mu,1}$ acting from $B_{p}$ to $X$ recently. 
It remains open to characterize the boundedness and the compactness of $\mathcal{C}_{\mu,\alpha}$ from $B_{p}$ to $B_{p}$ $(p>1)$. The Besov spaces $B_{p}$ and the Bloch space $\mathcal {B}$ are Möbius invariant, and the Bloch space $\mathcal {B}$ can be regarded as the limit case of $B_{p}$ as $p\rightarrow +\infty$. The purpose of this paper is to describe the measures $\mu$ such that the operator $\mathcal{C}_{\mu,\alpha}$ is bounded (and compact) from $X$ to $B_{p}$, $1\leq p<\infty$. Our main results are stated as follows. **Theorem 1**. *Suppose $0<\alpha<\infty$, $1<s<\infty$, $\max\{1,\frac{1}{\alpha}\}\leq p<\infty$. Let $\mu$ be a finite positive Borel measure on $[0, 1)$ and let $X$ be a Banach subspace of $H(\mathbb{D})$ with $\Lambda^{s}_{\frac{1}{s}}\subset X\subset\mathcal {B}$. Then the following statements are equivalent.\ (1) The operator $\mathcal{C}_{\mu,\alpha}$ is bounded from $X$ to $B_{p}$.\ (2) The operator $\mathcal{C}_{\mu,\alpha}$ is compact from $X$ to $B_{p}$.\ (3) The measure $\mu$ satisfies $$\sum_{n=0}^{\infty}(n+1)^{p\alpha-1}\mu_{n}^{p}\log^{p}(n+2)<\infty.$$* For $p = 1$, we have the following corollary. **Corollary 2**. *Suppose $1\leq \alpha<\infty$ and $1<s<\infty$. Let $\mu$ be a finite positive Borel measure on $[0, 1)$ and let $X$ be a Banach subspace of $H(\mathbb{D})$ with $\Lambda^{s}_{\frac{1}{s}}\subset X\subset\mathcal {B}$. Then the following statements are equivalent.\ (1) The operator $\mathcal{C}_{\mu,\alpha}$ is bounded from $X$ to $B_{1}$.\ (2) The operator $\mathcal{C}_{\mu,\alpha}$ is compact from $X$ to $B_{1}$.\ (3) The measure $\mu$ satisfies $$\sum_{n=0}^{\infty}(n+1)^{\alpha-1}\mu_{n}\log(n+2)<\infty.$$\ (4) The measure $\mu$ satisfies $$\int_{0}^{1}\frac{\log\frac{e}{1-t}}{(1-t)^{\alpha}}d\mu(t)<\infty.$$* Throughout the paper, the letter $C$ will denote an absolute constant whose value depends on the parameters indicated in the parentheses, and may change from one occurrence to another. We will use the notation $``P\lesssim Q"$ if there exists a constant $C=C(\cdot)$ such that $`` P \leq CQ"$, and $`` P \gtrsim Q"$ is understood in an analogous manner. In particular, if $``P\lesssim Q"$ and $``P \gtrsim Q"$, then we will write $``P\asymp Q"$. # Preliminaries {#prelim} To prove our main results, we need some preliminary results which will be repeatedly used throughout the rest of the paper. We begin with a characterization of those functions $f\in H(\mathbb{D})$ with decreasing Taylor coefficients which belong to $B_{p}$. **Lemma 3**. *Let $1<p<\infty$ and $f(z)=\sum_{n=0}^{\infty}a_{n}z^{n}\in H(\mathbb{D})$. Suppose that the sequence $\{a_{n}\}_{n=0}^{\infty}$ is a decreasing sequence of non-negative real numbers. Then $f\in B_{p}$ if and only if $$\sum_{n=1}^{\infty}n^{p-1}a_{n}^{p}<\infty.$$* This result can be proved with arguments similar to those used in the proofs of [@I1 Theorem 3.1]. For a detailed proof, see also [@H5 Theorem 3.10]. The following lemma contains a characterization of $L^{p}$-integrability of power series with nonnegative coefficients. For a proof, see [@1983 Theorem 1]. **Lemma 4**. *Let $0<\beta,p<\infty$ and let $\{\lambda_{n}\}_{n=0}^{\infty}$ be a sequence of non-negative numbers. 
Then $$\int_{0}^{1}(1-r)^{p\beta-1}\left(\sum_{n=0}^{\infty}\lambda_{n}r^{n}\right)^{p}dr\asymp \sum_{n=0}^{\infty}2^{-np\beta}\left(\sum_{k\in I_{n}}\lambda_{k}\right)^{p}$$ where $I_{0}=\{0\}$, $I_{n}=[2^{n-1},2^{n})\cap \mathbb{N}$ for $n\in \mathbb{N}$.* The following lemma is a consequence of Theorem 2.31 on page 192 of the classical monograph [@b8]. **Lemma 5**. *(a) The Taylor coefficients $a_{n}$ of the function $$f(z)=\frac{1}{(1-z)^{\beta}}\log^{\gamma}\frac{2}{1-z}, \ \ \beta>0,\gamma\in \mathbb{R}, \ z\in \mathbb{D}$$ have the property $a_{n}\asymp n^{\beta-1}(\log(n+1))^{\gamma}$.* *(b) The Taylor coefficients $a_{n}$ of the function $$f(z)=\log^{\gamma}\frac{2}{1-z}, \ \ \gamma >0, z\in \mathbb{D}$$ have the property $a_{n}\asymp n^{-1}(\log(n+1))^{\gamma-1}$.* We also need the following estimates (see, e.g. Proposition 1.4.10 in [@b3]). **Lemma 6**. *Let $\alpha$ be any real number and $z\in \mathbb{D}$. Then $$\int^{2\pi}_0\frac{d\theta}{|1-ze^{-i\theta}|^{\alpha}}\asymp \begin{cases}1 & \enspace \text{if} \ \ \alpha<1,\\ \log\frac{2}{1-|z|^2} & \enspace \text{if} \ \ \alpha=1,\\ \frac{1}{(1-|z|^2)^{\alpha-1}} & \enspace \text{if}\ \ \alpha>1, \end{cases}$$* The following lemma is useful in dealing with the compactness. The proof is similar to that of Proposition 3.11 in [@hb1]. The details are omitted. **Lemma 7**. *Let $p\geq 1$, $s>1$, $X$ be a Banach subspace of $H(\mathbb{D})$ with $\Lambda^{s}_{\frac{1}{s}}\subset X\subset\mathcal {B}$. Suppose that $T$ is a bounded operator from $X$ to $B_{p}$. Then $T$ is compact if and only if for any bounded sequence $\{f_{k}\}$ in $X$ which converges to $0$ uniformly on every compact subset of $\mathbb{D}$, we have $\lim_{k\rightarrow\infty}||T(f_{k})||_{B_{p}}=0$.* # Proofs of the main results {#prelim} We now present the proofs of Theorem 1.1. Since the definition of the space $B_{1}$ is slightly different from $B_{p}(p>1)$, we split the proof into $p=1$ and $p>1$. **Case $p=1$.**   Assume $\mathcal{C}_{\mu,\alpha}$ is bounded from $X$ to $B_{1}$. Let $g(z)=\log\frac{1}{1-z}=\sum^{\infty}_{k=1}\frac{z^{k}}{k}$, it is easy to check that $g\in \Lambda^{s}_{\frac{1}{s}}\subset X$. This implies that $\mathcal{C}_{\mu,\alpha}(g)\in B_{1}$. For $z\in \mathbb{D}$, by the definition of $\mathcal{C}_{\mu,\alpha}$ we get $$\mathcal{C}_{\mu,\alpha}(g)''(z)=\sum_{n=0}^{\infty}\left((n+2)(n+1)\mu_{n+2}\sum_{k=1}^{n+2}\frac{\Gamma(n+2-k+\alpha)}{\Gamma(\alpha)(n+2-k)!k}\right)z^{n}.$$ For $0<r<1$, the Hardy's inequality shows that $$M_{1}(r,\mathcal{C}_{\mu,\alpha}(g)'')\gtrsim \sum_{n=0}^{\infty}\left((n+2)\mu_{n+2}\sum_{k=1}^{n+2}\frac{\Gamma(n+2-k+\alpha)}{\Gamma(\alpha)(n+2-k)!k}\right)r^{n}.$$ Hence, $$\begin{split} 1&\gtrsim ||g||_{X}\gtrsim ||\mathcal{C}_{\mu,\alpha}(g)||_{B_{1}}=\int_{\mathbb{D}}|\mathcal{C}_{\mu,\alpha}(g)''(z)|dA(z)\\ & = 2\int_{0}^{1}M_{1}(r,\mathcal{C}_{\mu,\alpha}(g)'')rdr\\ & \gtrsim \int_{0}^{1} \sum_{n=0}^{\infty}\left((n+2)\mu_{n+2}\sum_{k=1}^{n+2}\frac{\Gamma(n+2-k+\alpha)}{\Gamma(\alpha)(n+2-k)!k}\right)r^{n+1}dr\\ & \gtrsim \sum_{n=0}^{\infty}\mu_{n+2}\sum_{k=1}^{n+2}\frac{\Gamma(n+2-k+\alpha)}{\Gamma(\alpha)(n+2-k)!k}. 
\end{split}$$ Using Stirling's formula we get $$\sum_{k=1}^{n+2}\frac{\Gamma(n+2-k+\alpha)}{\Gamma(\alpha)(n+2-k)!k}\asymp \sum_{k=1}^{n+2}\frac{(n+3-k)^{\alpha-1}}{k}.$$ For $n\geq 1$, simple estimates lead us to the following $$\begin{split} \sum_{k=1}^{n+2}\frac{(n+3-k)^{\alpha-1}}{k}&=\left(\sum_{k=1}^{[\frac{n+2}{2}]}+\sum_{k=[\frac{n+2}{2}]+1}^{n+2}\right)\frac{(n+3-k)^{\alpha-1}}{k}\\ & \asymp (n+1)^{\alpha-1}\sum_{k=1}^{[\frac{n+2}{2}]}\frac{1}{k} +\frac{1}{n+1} \sum_{k=[\frac{n+2}{2}]+1}^{n+2}(n+3-k)^{\alpha-1}\\ & \asymp (n+1)^{\alpha-1}\log(n+2)+(n+1)^{\alpha-1}\\ & \asymp (n+1)^{\alpha-1}\log(n+2). \end{split}$$ Therefore, $$\begin{split} 1&\gtrsim \sum_{n=0}^{\infty}\mu_{n+2}\sum_{k=1}^{n+2}\frac{\Gamma(n+2-k+\alpha)}{\Gamma(\alpha)(n+2-k)!k}\\ & \gtrsim \sum_{n=0}^{\infty}(n+1)^{\alpha-1}\mu_{n}\log(n+2). \end{split}$$ **Case $p>1$.**   Let $q$ be the conjugate index of $p$, that is, $\frac{1}{p}+\frac{1}{q}=1$. It is known that $(B_{q})^{\ast}\cong B_{p}$ (see [@H14 Theorem 5.24]) under the pairing $$\langle F,G\rangle=\int_{\mathbb{D}}F'(z)\overline{G'(z)}dA(z), \ \ F\in B_{p},G\in B_{q}.$$ This means that $\mathcal{C}_{\mu,\alpha}$ is bounded from $X$ to $B_{p}$ if and only if $$|\langle \mathcal{C}_{\mu,\alpha}(F),G\rangle|\lesssim ||F||_{X}||G||_{B_{q}}\ \mbox{for all}\ F\in X, G\in B_{q}.$$ Now, suppose $\mathcal{C}_{\mu,\alpha}$ is bounded from $X$ to $B_{p}$. Take $g(z)=\sum_{n=0}^{\infty}\widehat{g}(n)z^{n}\in B_{q}$ such that the sequence of its Taylor coefficients is a decreasing sequence of non-negative real numbers. For $f(z)=\log\frac{1}{1-z}=\sum_{n=1}^{\infty}\frac{z^{n}}{n}\in X$, we have that $$|\langle \mathcal{C}_{\mu,\alpha}(f),g\rangle|\lesssim ||f||_{X}||g||_{B_{q}}.$$ A simple calculation shows that $$|\langle \mathcal{C}_{\mu,\alpha}(f),g\rangle|=\sum^{\infty}_{n=1}n\mu_{n}\left(\sum_{k=1}^{n}\frac{\Gamma(n-k+\alpha)}{\Gamma(\alpha)(n-k)!k}\right)\widehat{g}(n).$$ This implies that $$|\langle \mathcal{C}_{\mu,\alpha}(f),g\rangle|=\sum^{\infty}_{n=1}n^{\frac{1}{q}}\mu_{n}\left(\sum_{k=1}^{n}\frac{\Gamma(n-k+\alpha)}{\Gamma(\alpha)(n-k)!k}\right)\widehat{g}(n)n^{\frac{q-1}{q}}<\infty.$$ By Lemma [Lemma 3](#lem2.1){reference-type="ref" reference="lem2.1"}, the sequence $\{\widehat{g}(n)n^{\frac{q-1}{q}}\}_{n=1}^{\infty}$ belongs to $l^{q}$. The well-known duality $(l^{q})^{\ast}=l^{p}$ yields that $$\left\{n^{\frac{1}{q}}\mu_{n}\left(\sum_{k=1}^{n}\frac{\Gamma(n-k+\alpha)}{\Gamma(\alpha)(n-k)!k}\right)\right\}^{\infty}_{n=1}\in l^{p}.$$ Using the estimate $\sum_{k=1}^{n}\frac{\Gamma(n-k+\alpha)}{\Gamma(\alpha)(n-k)!k}\asymp (n+1)^{\alpha-1}\log(n+2)$ we deduce that $$\sum_{n=0}^{\infty}(n+1)^{p\alpha-1}\mu_{n}^{p}\log^{p}(n+2)<\infty.$$ Now assume that condition (3) holds; we show that $\mathcal{C}_{\mu,\alpha}$ is compact from $X$ to $B_{p}$. Let $\{f_{k}\}_{k=1}^{\infty}$ be a bounded sequence in $X$ which converges to $0$ uniformly on every compact subset of $\mathbb{D}$. Without loss of generality, we may assume that $f_{k}(0)=0$ and $\sup_{k\geq 1}||f_{k}||_{X}\leq 1$. It suffices to prove that $\lim_{k\rightarrow \infty}||\mathcal{C}_{\mu,\alpha}(f_{k})||_{B_{p}}=0$ by using Lemma [Lemma 7](#lem2.5){reference-type="ref" reference="lem2.5"}. As before, we divide the proof into $p = 1$ and $p >1$. 
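Before turning to the two cases, we record for the reader's convenience the elementary asymptotics behind the Stirling step used twice above; the following is only a sketch of a standard estimate, with implicit constants depending on $\alpha$. By Stirling's formula, for every integer $m\geq 0$, $$\frac{\Gamma(m+\alpha)}{\Gamma(\alpha)\,m!}=\frac{\Gamma(m+\alpha)}{\Gamma(\alpha)\,\Gamma(m+1)}\asymp (m+1)^{\alpha-1},$$ and taking $m=n+2-k$ (so that $m+1=n+3-k$) yields the equivalence $$\sum_{k=1}^{n+2}\frac{\Gamma(n+2-k+\alpha)}{\Gamma(\alpha)(n+2-k)!k}\asymp \sum_{k=1}^{n+2}\frac{(n+3-k)^{\alpha-1}}{k}$$ displayed above.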
**Case **$p>1$.****   Assume that $\sum_{n=1}^{\infty}(n+1)^{p\alpha-1}\mu_{n}^{p}\log^{p}(n+1)<\infty$. Then $$\begin{split} \sum_{n=1}^{\infty}(n+1)^{p\alpha-1}\mu_{n}^{p}\log^{p}(n+1)&= \sum_{n=1}^{\infty}\left(\sum_{k=2^{n-1}}^{2^{n}-1}(k+1)^{p\alpha-1}\mu_{k}^{p}\log^{p}(k+1)\right)\\ & \gtrsim \sum_{n=1}^{\infty}2^{np\alpha}\mu_{2^{n}}^{p}\log^{p}(2^{n}+1)\\ & \gtrsim \sum_{n=1}^{\infty}2^{-n(p-1)}\left(\sum_{k=2^{n}}^{2^{n+1}-1}(k+1)^{\alpha-\frac{1}{p}}\mu_{k}\log(k+1)\right)^{p}. \end{split}$$ This shows that $$\sum_{n=1}^{\infty}2^{-n(p-1)}\left(\sum_{k=2^{n}}^{2^{n+1}-1}(k+1)^{\alpha-\frac{1}{p}}\mu_{k}\log(k+1)\right)^{p}<\infty.$$ By Lemma [Lemma 4](#lem2.2){reference-type="ref" reference="lem2.2"} we have that $$\begin{split} & \ \ \ \ \int_{0}^{1}(1-r)^{p-2}\left(\sum_{n=0}^{\infty}(n+1)^{\alpha-\frac{1}{p}}\mu_{n}\log(n+1)r^{n}\right)^{p}dr \\ &\asymp \sum_{n=0}^{\infty}2^{-n(p-1)}\left(\sum_{k=2^{n}}^{2^{n+1}-1}(k+1)^{\alpha-\frac{1}{p}}\mu_{k}\log(k+1)\right)^{p}<\infty. \end{split}$$ Therefore, for any $\varepsilon>0$ there exists a $0<r_{0}<1$ such that $$\label{1} \int_{r_{0}}^{1}(1-r)^{p-2}\left(\sum_{n=0}^{\infty}(n+1)^{\alpha-\frac{1}{p}}\mu_{n}\log(n+1)r^{n}\right)^{p}dr<\varepsilon.$$ It is clear that $$\begin{split} ||\mathcal{C}_{\mu,\alpha}(f_{k})||^{p}_{B_{p}}&=\left(\int_{|z|\leq r_{0}}+\int_{r_{0}<|z|<1}\right)|\mathcal{C}_{\mu,\alpha}(f_{k})'(z)|^{p}(1-|z|)^{p-2}dA(z)\\ & := J_{1,k}+J_{2,k}. \end{split}$$ By the integral representation of $\mathcal{C}_{\mu,\alpha}$ we get $$\label{2} \mathcal{C}_{\mu,\alpha}(f_{k})'(z)=\int_{0}^{1}\frac{tf'_{k}(tz)}{(1-tz)^{\alpha}}d\mu(t)+\int_{0}^{1}\frac{\alpha tf_{k}(tz)}{(1-tz)^{\alpha+1}}d\mu(t).$$ Cauchy's integral formula implies that the sequence $\{f'_{k}\}_{k=1}^{\infty}$ also converges to $0$ uniformly on every compact subset of $\mathbb{D}$. Thus, for $|z|\leq r_{0}$ we have that $$\begin{split} |\mathcal{C}_{\mu,\alpha}(f_{k})'(z)|& \lesssim \int_{0}^{1}\frac{|f_{k}'(tz)|}{|1-tz|^{\alpha}}+ \frac{|f_{k}(tz)|}{|1-tz|^{\alpha+1}}d\mu(t)\\ & \lesssim \sup_{|w|<r_{0}}\left(|f_{k}(w)|+|f_{k}'(w)|\right)\int_{0}^{1}\frac{1}{(1-tr_{0})^{\alpha+1}}d\mu(t)\\ & \lesssim \sup_{|w|<r_{0}}\left(|f_{k}(w)|+|f_{k}'(w)|\right). \end{split}$$ It follows that $$J_{1,k} \rightarrow 0, \ (k\rightarrow \infty).$$ Next, we estimate $J_{2,k}$. 
Since $X\subset \mathcal {B}$, we have $$\label{3} |f_{k}(z)|\lesssim \log\frac{e}{1-|z|}\ \ \mbox{and}\ \ |f_{k}'(z)|\lesssim \frac{1}{1-|z|}\ \ \mbox{for all}\ \ k\geq 1, z\in \mathbb{D}.$$ By ([\[2\]](#2){reference-type="ref" reference="2"}) and ([\[3\]](#3){reference-type="ref" reference="3"}), Minkowski inequity, Lemma [Lemma 6](#lem2.4){reference-type="ref" reference="lem2.4"} we get $$\begin{split} M_{p}(r,\mathcal{C}_{\mu,\alpha}(f_{k})') &= \left\{\int_{0}^{2\pi}\left|\int_{0}^{1}\frac{tf_{k}'(tre^{i\theta})}{(1-tre^{i\theta})^{\alpha}}+\frac{tf_{k}(tre^{i\theta})}{(1-tre^{i\theta})^{\alpha+1}}d\mu(t)\right|^{p}d\theta\right\}^{\frac{1}{p}}\\ &\lesssim \left\{\int_{0}^{2\pi}\left(\int_{0}^{1}\frac{1}{(1-tr)|1-tre^{i\theta}|^{\alpha}}d\mu(t)\right)^{p}d\theta\right\}^{\frac{1}{p}}\\ & \ \ \ \ +\left\{\int_{0}^{2\pi}\left(\int_{0}^{1}\frac{\log\frac{e}{1-tr}}{|1-tre^{i\theta}|^{\alpha+1}}d\mu(t)\right)^{p}d\theta\right\}^{\frac{1}{p}}\\ &\lesssim \int_{0}^{1}\frac{1}{1-tr}\left(\int_{0}^{2\pi}\frac{d\theta}{|1-tre^{i\theta}|^{p\alpha}}\right)^{\frac{1}{p}}d\mu(t)\\ & \ \ \ \ + \int_{0}^{1}\log\frac{e}{1-tr}\left(\int_{0}^{2\pi}\frac{d\theta}{|1-tre^{i\theta}|^{p(\alpha+1)}}\right)^{\frac{1}{p}}d\mu(t)\\ & \lesssim\int_{0}^{1}H(t,r)d\mu(t), \end{split}$$ where $$H(t,r)=\left\{ \begin{array}{cc} \displaystyle{\frac{\log\frac{e}{1-tr}}{(1-tr)^{\alpha+1-\frac{1}{p}}},} & \text{if} \ \ \displaystyle{p>\frac{1}{\alpha}},\\ \displaystyle{\frac{\log\frac{e}{1-tr}}{1-tr}, } & \text{if} \ \ \displaystyle{p=\frac{1}{\alpha}}. \end{array}\right.$$ Lemma [Lemma 5](#lem2.3){reference-type="ref" reference="lem2.3"} yields that $$M_{p}(r,\mathcal{C}_{\mu,\alpha}(f_{k})') \lesssim\int_{0}^{1}H(t,r)d\mu(t) \asymp \sum_{n=0}^{\infty}(n+1)^{\alpha-\frac{1}{p}}\mu_{n}\log(n+1)r^{n}.$$ This together with ([\[1\]](#1){reference-type="ref" reference="1"}) we have that $$\begin{split} J_{2,k}& =\int_{r_{0}<|z|<1}|\mathcal{C}_{\mu,\alpha}(f_{k})'(z)|^{p}(1-|z|)^{p-2}dA(z)\\ & \lesssim \int_{r_{0}}^{1}(1-r)^{p-2}M^{p}_{p}(r,\mathcal{C}_{\mu,\alpha}(f_{k})')dr\\ & \lesssim \int_{r_{0}}^{1}(1-r)^{p-2}\left(\sum_{n=0}^{\infty}(n+1)^{\alpha-\frac{1}{p}}\mu_{n}\log(n+1)r^{n}\right)^{p}dr\\ & \lesssim \varepsilon. \end{split}$$ Consequently, $$\lim_{k\rightarrow \infty}||\mathcal{C}_{\mu,\alpha}(f_{k})||_{B_{p}}=0.$$ **Case **$p=1$.****   When $p=1$, Lemma [Lemma 5](#lem2.3){reference-type="ref" reference="lem2.3"} shows that the condition $\sum_{n=0}^{\infty}(n+1)^{\alpha-1}\mu_{n}\log(n+2)<\infty$ is equivalent to $\int_{0}^{1}\frac{\log\frac{e}{1-t}}{(1-t)^{\alpha}}d\mu(t)<\infty.$ Hence, for any $\varepsilon>0$ there exists a $0<t_{0}<1$ such that $$\label{4} \int_{t_{0}}^{1}\frac{\log\frac{e}{1-t}}{(1-t)^{\alpha}}d\mu(t)<\varepsilon.$$ By the integral representation of $\mathcal{C}_{\mu,\alpha}$ we have $$\label{5} \mathcal{C}_{\mu,\alpha}(f)''(z)=\int_{0}^{1}\left(\frac{t^{2}f''(tz)}{(1-tz)^{\alpha}} +\frac{2\alpha t^{2}f'(tz)}{(1-tz)^{\alpha+1}}+ \frac{\alpha(\alpha+1)t^{2}f(tz)}{(1-tz)^{\alpha+2}}\right)d\mu(t).$$ For $0<r<1$, we have $$\begin{split} M_{1}(r, \mathcal{C}_{\mu,\alpha}(f_{k})'')&\lesssim \sup_{|w|\leq t_{0}}\left(|f_{k}''(w)|+|f_{k}'(w)|+|f_{k}(w)|\right)\int_{0}^{t_{0}}\frac{d\mu(t)}{(1-t_{0}r)^{\alpha+2}}\\ & \ \ \ \ +\int_{0}^{2\pi}\int_{t_{0}}^{1}\frac{|f_{k}''(tz)|}{|1-tre^{i\theta}|^{\alpha}}+\frac{|f_{k}'(tz)|}{|1-tre^{i\theta}|^{\alpha+1}} +\frac{|f_{k}(tz)|}{|1-tre^{i\theta}|^{\alpha+2}}d\mu(t)d\theta. 
\end{split}$$ Since $\{f_{k}\}\subset X\subset \mathcal {B}$, we see that $$\label{6} |f_{k}''(z)|\lesssim \frac{1}{(1-|z|)^{2}}\ \mbox{ for all }\ k \geq 1.$$ Since $p=1$ and $p\geq \frac{1}{\alpha}$, we have $\alpha \geq 1$. By Fubini's theorem, ([\[3\]](#3){reference-type="ref" reference="3"}), ([\[6\]](#6){reference-type="ref" reference="6"}) and Lemma [Lemma 6](#lem2.4){reference-type="ref" reference="lem2.4"} we have $$\begin{split} &\ \ \ \ \int_{0}^{2\pi}\int_{t_{0}}^{1}\frac{|f_{k}''(tre^{i\theta})|}{|1-tre^{i\theta}|^{\alpha}}+\frac{|f_{k}'(tre^{i\theta})|}{|1-tre^{i\theta}|^{\alpha+1}} +\frac{|f_{k}(tre^{i\theta})|}{|1-tre^{i\theta}|^{\alpha+2}}d\mu(t)d\theta\\ &\lesssim \int_{t_{0}}^{1}\int_{0}^{2\pi}\left(\frac{1}{(1-tr)^{2}|1-tre^{i\theta}|^{\alpha}}+\frac{1}{(1-tr)|1-tre^{i\theta}|^{\alpha+1}} +\frac{\log\frac{e}{1-tr}}{|1-tre^{i\theta}|^{\alpha+2}}\right)d\theta d\mu(t)\\ &\lesssim \int_{t_{0}}^{1} \frac{\log\frac{e}{1-tr}}{(1-tr)^{\alpha+1}}d\mu(t). \end{split}$$ Hence, $$\begin{split} & \ \ \ \ \int_{t_{0}<|z|<1}|\mathcal{C}_{\mu,\alpha}(f_{k})''(z)|dA(z)\\ &= 2\int_{t_{0}}^{1}M_{1}(r, \mathcal{C}_{\mu,\alpha}(f_{k})'')rdr\\ & \lesssim \sup_{|w|\leq t_{0}}\left(|f_{k}''(w)|+|f_{k}'(w)|+|f_{k}(w)|\right) +\int_{t_{0}}^{1} \int_{t_{0}}^{1} \frac{\log\frac{e}{1-tr}}{(1-tr)^{\alpha+1}}d\mu(t)dr\\ & \lesssim \sup_{|w|\leq t_{0}}\left(|f_{k}''(w)|+|f_{k}'(w)|+|f_{k}(w)|\right) +\int_{t_{0}}^{1}\log\frac{e}{1-t}\int_{0}^{1}\frac{dr}{(1-tr)^{\alpha+1}}d\mu(t)\\ & \lesssim \sup_{|w|\leq t_{0}}\left(|f_{k}''(w)|+|f_{k}'(w)|+|f_{k}(w)|\right)+\int_{t_{0}}^{1}\frac{\log\frac{e}{1-t}}{(1-t)^{\alpha}}d\mu(t)\\ &\lesssim \sup_{|w|\leq t_{0}}\left(|f_{k}''(w)|+|f_{k}'(w)|+|f_{k}(w)|\right)+\varepsilon. \end{split}$$ The uniform convergence of $\{f_{k}\}$ (and hence of $\{f_{k}'\}$ and $\{f_{k}''\}$, by Cauchy's integral formula) on compact subsets of $\mathbb{D}$ implies that $$\int_{|z|\leq t_{0}}|\mathcal{C}_{\mu,\alpha}(f_{k})''(z)|dA(z)\lesssim \sup_{|w|\leq t_{0}}\left(|f_{k}''(w)|+|f_{k}'(w)|+|f_{k}(w)|\right)\rightarrow 0, \ \ \mbox{as}\ k\rightarrow\infty.$$ Therefore, we deduce that $$\lim_{k\rightarrow \infty}||\mathcal{C}_{\mu,\alpha}(f_{k})||_{B_{1}}=0.$$ Thus, the operator $\mathcal{C}_{\mu,\alpha}$ is compact from $X$ to $B_{1}$. # Conflicts of Interest {#conflicts-of-interest .unnumbered} The author declares that there is no conflict of interest. # Funding {#funding .unnumbered} The author was supported by the Natural Science Foundation of Hunan Province (No. 2022JJ30369). # Availability of data and materials {#availability-of-data-and-materials .unnumbered} Data sharing is not applicable to this article as no datasets were generated or analysed during the current study: the article describes entirely theoretical research. A. Aleman, A. Siskakis, Integration operators on Bergman spaces, Indiana Univ. Math. J. 46(2) (1997) 337--356. A. Aleman, J. Cima, An integral operator on ${H}^{p}$ and Hardy's inequality, J. Anal. Math. 85 (2001) 157--176. A. Siskakis, Composition semigroups and the Cesàro operator on ${H}^{p}$, J. London Math. Soc. 36 (1987) 153--164. A. Siskakis, On the Bergman space norm of the Cesàro operator, Arch. Math. 67 (1996) 312--318. A. Siskakis, The Cesàro operator is bounded on ${H}^{1}$, Proc. Amer. Math. Soc. 110 (1990) 461--462. A. Zygmund, Trigonometric Series, Vol. I, II. Cambridge University Press, London, 1959. C. Cowen, B. MacCluer, Composition operators on spaces of analytic functions, CRC Press, Boca Raton, 1995. D. Girela, N. Merchán, A generalized Hilbert operator acting on conformally invariant spaces, Banach J. Math. Anal. Appl. 12 (2018) 374--398. F. Sun, F. Ye, L.
Zhou, A Cesaro-like operator from Besov space to some spaces of analytic functions, preprint. [](https://doi.org/10.48550/arXiv.2305.02717) G. Bao, F. Sun, H. Wulan, Carleson measure and the range of Cesàro-like operator acting on ${H}^{\infty}$, Anal. Math. Phys. 12 (2022) Paper No.142. J. Miao, The Cesàro operator is bounded on ${H}^{p}$ for $0<p<1$, Proc. Amer. Math. Soc. 116 (1992) 1077--1079. J. Jin, S. Tang, Generalized Cesàro operator on Dirichlet-type spaces, Acta Math. Sci 42(B) (2022) 1--9. J. Xiao, Cesàro-type operators on Hardy, BMOA and Bloch spaces, Arch. Math. 68 (1997) 398--406. K. Zhu, Operator theory in function spaces, Math. Surveys and Monographs, Vol. 138, American Mathematical Society, Providence, Rhode Island (2007) M. Pavlović, Analytic functions with decreasing coefficients and Hardy and Bloch spaces, Proc. Edinb. Math. Soc. 56 (2013) 623--635. M. Pavlović, M. Mateljević, $L^{p}$-behavior of power series with positive coefficients and Hardy spaces, Proc. Amer. Math. Soc. 87(2) (1983) 309--316. N. Danikas, A. Siskakis, The Cesàro operator on bounded analytic functions, Analysis 13 (1993) 295--299. O. Blasco, Cesàro-type operators on Hardy spaces, J. Math. Anal. Appl. (2023) Paper No. 127017. P. Duren, Theory of $H^{p}$ spaces, Academic Press, New York, 1970. P. Galanopoulos, D. Girela, N. Merchán, Cesàro-type operators associated with Borel measures on the unit disc acting on some Hilbert spaces of analytic functions, J. Math. Anal. Appl. (2023) Paper No. 127287. P. Galanopoulos, D. Girela, N. Merchán, Cesàro-like operators acting on spaces of analytic functions, Anal. Math. Phys. 12 (2022) Paper No. 51. P. Galanopoulos, D. Girela, A. Mas, N. Merchán, Operators induced by radial measures acting on the Dirichlet space, Results Math. 78 (2023) Paper No. 106. S. Buckley, P. Koskela, D. Vukotić, Fractional integration, differentiation, and weighted Bergman spaces, Math. Proc. Camb. Philos. Soc. 126(2) (1999) 369--385 . W. Rudin, Function theory in the unit ball of ${C}^{n}$, Springer, New York, 1980. Z. Zhou, Pseudo-Carleson measures and generalized Cesàro-like operators, preprint.[](https://doi.org/10.21203/rs.3.rs-2413497/v1). [^1]: $^*$Corresponding Author [^2]: Pengcheng Tang: www.tang-tpc.com\@foxmail.com
arxiv_math
{ "id": "2309.02717", "title": "Generalized Ces\\`{a}ro-like operator from a class of analytic function\n spaces to analytic Besov spaces", "authors": "Pengcheng Tang", "categories": "math.FA math.CV", "license": "http://creativecommons.org/licenses/by-nc-nd/4.0/" }
--- abstract: | Cilleruelo conjectured that for an irreducible polynomial $f \in \mathbb{Z}[X]$ of degree $d \geq 2$ one has $$\log\mathrm{lcm}\left(f(1),f(2),\ldots, f(N)\right)\sim(d-1)N\log N$$ as $N \to \infty$. He proved it in the case $d=2$ but it remains open for every polynomial with $d>2$. We investigate the function field analogue of the problem by considering polynomials over the ring ${\mathbb{F}}_q[T]$. We state an analog of Cilleruelo's conjecture in this setting: denoting by $$L_f(n) := \mathrm{lcm} \left(f\left(Q\right)\ : \ Q \in {\mathbb{F}}_q[T]\mbox{ monic},\, \deg Q = n\right)$$ we conjecture that $$\label{eq:conjffabs}\deg L_f(n) \sim c_f \left(d-1\right) nq^n,\ n \to \infty$$ ($c_f$ is an explicit constant depending only on $f$, typically $c_f=1$). We give both upper and lower bounds for $L_f(n)$ and show that the asymptotic ([\[eq:conjffabs\]](#eq:conjffabs){reference-type="ref" reference="eq:conjffabs"}) holds for a class of "special\" polynomials, initially considered by Leumi in this context, which includes all quadratic polynomials and many other examples as well. We fully classify these special polynomials. We also show that $\deg L_f(n) \sim \deg\mathrm{rad}\left(L_f(n) \right)$ (in other words the corresponding LCM is close to being squarefree), which is not known over ${\mathbb{Z}}$. address: Raymond and Beverly Sackler School of Mathematical Sciences, Tel Aviv University, Tel Aviv 69978, Israel author: - Alexei Entin - Sean Landsberg bibliography: - ref.bib title: The Least Common Multiple of Polynomial Values over Function Fields --- # Introduction While studying the distribution of prime numbers, Chebyshev estimated the least common multiple of the first $N$ integers. This was an important step towards the prime number theorem. In fact the estimate $\textrm{log lcm}\left(1 ,\ldots, N\right) \sim N$ is equivalent to the prime number theorem. This later inspired the more general problem of studying the least common multiple of polynomial sequences. For a linear polynomial $f(X) = k X + h,\ k,h\in{\mathbb{Z}},\ k>0, h+k > 0$ it was observed by Bateman [@bateman2002limit] that $\textrm{log lcm}\left(f\left(1\right) ,\ldots, f\left(N\right)\right) \sim c_f N$ as $N\to\infty$, where $c_f = \frac{k}{\varphi(k)} \sum_{1 \leq m \leq k,(m,k) = 1} \frac{1}{m}$, which is a consequence of the Prime Number Theorem for arithmetic progressions. Cilleruelo conjectured [@cilleruelo2011least] that for any irreducible polynomial $f \in \mathbb{Z}[X]$ of degree $d\ge 2$ the following estimate holds: $$\label{eq:conj_intro}\textrm{log lcm}\left(f\left(1\right) ,..., f\left(N\right)\right) \sim \left(d-1\right)N \log N$$ as $f$ is fixed and $N\to\infty$.\ \ **Convention.** Throughout the rest of the paper in all asymptotic notation the polynomial $f$, and the parameter $q$ to appear later, are assumed fixed, while the parameter $N$ (or $n$) is taken to infinity. If any parameters other than $f,q,N,n$ appear in the notation, the implied constant or rate of convergence are uniform in these parameters.\ \ Cilleruelo proved ([\[eq:conj_intro\]](#eq:conj_intro){reference-type="ref" reference="eq:conj_intro"}) for quadratic polynomials, and there is no known example of a polynomial of degree $d > 2$ for which the conjecture holds. Cilleruelo's argument also shows the predicted upper bound, i.e.
if $f \in \mathbb{Z}[X]$ is an irreducible polynomial of degree $d \geq 2$, then $$\log\mathrm{lcm}\,\left(f(1),\ldots,f(N)\right)\lesssim (d-1)N\log N.$$ Maynard and Rudnick [@maynard2019lower] provided a lower bound of the correct order of magnitude: $$\label{eq:sah_lower_bound}\textrm{log lcm}\left(f(1),\ldots, f(N)\right) \gtrsim \frac{1}{d} N \log N.$$ Sah [@sah2020improved] improved upon this lower bound while also providing a lower bound for the radical of the least common multiple: $$\textrm{log lcm}\left(f\left(1\right) ,\ldots, f\left(N\right)\right) \gtrsim N \log N,$$ $$\label{eq:sah_lower_bound_rad}\textrm{log rad}\left[ \textrm{lcm}\left(f\left(1\right) ,..., f\left(N\right)\right)\right] \gtrsim \frac{2}{d} N \log N.$$ Rudnick and Zehavi [@rudnick2020cilleruelo] established an averaged form of ([\[eq:conj_intro\]](#eq:conj_intro){reference-type="ref" reference="eq:conj_intro"}) with $f$ varying in a suitable range. Leumi [@leumi2021lcm] studied a function field analogue of the problem. In the present work we expand upon and generalize the results and conjectures in [@leumi2021lcm], as well as correct some erroneous statements and conjectures from the latter work. Despite some overlap, we have kept our exposition self-contained and independent of [@leumi2021lcm]. ## The function field analogue. Let $q=p^k$ be a prime power. For a polynomial $f \in \mathbb{F}_q[T][X]$ of degree $d\ge 1$ of the form $$f(X) = f_d X^d + f_{d-1} X^{d-1} + ... + f_0,\ f_i\in{\mathbb{F}}_q[T]$$ set $$\label{eq: def Lfn}L_f(n) := \mathrm{lcm} \left(f\left(Q\right): \ Q \in M_n\right),$$ where $$M_n := \{Q \in \mathbb{F}_q[T]\text{ monic},\ \deg Q = n\}.$$ Also denote $$V_f := \left\{g \in \mathbb{F}_q[T]:\ f\left(X + g\right) = f\left(X\right)\right\}.$$ The set $V_f$ is a finite-dimensional ${\mathbb{F}}_p$-linear subspace of $\mathbb{F}_q[T]$ (see Lemma [Lemma 11](#V_f: trivial when degree is not devisable by p){reference-type="ref" reference="V_f: trivial when degree is not devisable by p"} below). Denote $$c_f := \frac{1}{|V_f|}.$$ We now state a function field analog of Cilleruelo's conjecture. **Conjecture 1**. *Let $f \in \mathbb{F}_q[T][X]$ be a fixed irreducible polynomial with $\deg_X f = d \geq 2$. Then $$\deg L_f(n) \sim c_f \left(d-1\right) nq^n,\ n \to \infty.$$* **Remark 2**. The expression $(d-1)nq^n$ is directly analogous to $(d-1)N\log N$ appearing in ([\[eq:conj_intro\]](#eq:conj_intro){reference-type="ref" reference="eq:conj_intro"}). Over the integers (or generally in characteristic 0) if $f$ is not constant then $V_f$ is always trivial and no constant $c_f$ appears in ([\[eq:conj_intro\]](#eq:conj_intro){reference-type="ref" reference="eq:conj_intro"}). This is because if $0 \neq g \in V_f$ then $2g,...,d g$ are also in $V_f$ implying $f\left(0\right) = f\left(g\right) = ... = f\left(d g\right)$. Since $f\left(x\right) - f\left(0\right)$ has at most $d$ roots, we reach a contradiction. Even over ${\mathbb{F}}_q[T]$, the typical case that occurs for "most" polynomials is $c_f=1$ (i.e. $V_f$ is trivial). A heuristic justification of Conjecture [Conjecture 1](#the conjecture){reference-type="ref" reference="the conjecture"} will be given in Section 4. In the present paper we prove that Conjecture [Conjecture 1](#the conjecture){reference-type="ref" reference="the conjecture"} gives the correct upper bound: **Theorem 3**. *$$\deg L_f(n) \lesssim c_f(d-1)n q^n.$$* The proof of Theorem [Theorem 3](#Upper bound){reference-type="ref" reference="Upper bound"} will be given in Section 3. We also give a lower bound of the correct order of magnitude (this bound is comparable to the bound in [@sah2020improved Theorem 1.3] under a mild assumption on $f$): **Theorem 4**. 1.
*$$\deg L_f(n) \gtrsim \frac{d-1}{d} n q^n.$$* 2. *If $p \nmid d$ or $f_d \nmid f_{d-1}$ then $$\deg L_f(n) \gtrsim n q^n.$$* The proof of Theorem 4 will be given in Section 5. We note that by Lemma [Lemma 11](#V_f: trivial when degree is not devisable by p){reference-type="ref" reference="V_f: trivial when degree is not devisable by p"} below we always have $c_f\ge 1/d$ and $c_f\ge 1/(d-1)$ if $p\nmid d$ or $f_d\nmid f_{d-1}$, so this is consistent with Conjecture [Conjecture 1](#the conjecture){reference-type="ref" reference="the conjecture"}. Regarding the radical of the LCM $$\ell_f(n):=\mathrm{rad}\ \mathrm{lcm}(f(Q):\ Q\in M_n)$$ we prove the following **Theorem 5**. *[\[ell = L\]]{#ell = L label="ell = L"} $$\deg \ell_f(n) \sim \deg L_f(n).$$* The proof of Theorem [Theorem 5](#thm:rad){reference-type="ref" reference="thm:rad"} will be given in Section 6. As a consequence, our lower bounds for $L_f(n)$ apply also to $\ell_f(n)$. The analogous statement over ${\mathbb{Z}}$ is not known and the best lower bound over ${\mathbb{Z}}$, namely ([\[eq:sah_lower_bound_rad\]](#eq:sah_lower_bound_rad){reference-type="ref" reference="eq:sah_lower_bound_rad"}), has a smaller constant (if $d>2$) than the best known lower bound for $L_f(N)$ given by ([\[eq:sah_lower_bound\]](#eq:sah_lower_bound){reference-type="ref" reference="eq:sah_lower_bound"}). The key ingredient in the proof which is unavailable over ${\mathbb{Z}}$ is the work of Poonen [@Poo03] on squarefree values of polynomials over ${\mathbb{F}}_q[T]$ (later generalized by Lando [@Lan15] and Carmon [@carmon2021square]). For a class of polynomials $f\in{\mathbb{F}}_q[T][X]$ which we call *special* (first introduced by Leumi in [@leumi2021lcm]), it is possible to establish Conjecture [Conjecture 1](#the conjecture){reference-type="ref" reference="the conjecture"} in full. This class includes all quadratic polynomials, but also many polynomials of higher degree. We now define special polynomials over an arbitrary unique factorization domain (UFD). This definition was introduced in [@leumi2021lcm]. **Definition 6**. A polynomial $f \in R[X]$ of degree $d=\deg f\ge 2$ is called *special* if the bivariate polynomial $f(X) - f(Y)$ factors into a product of linear terms in $R[X,Y]$: $$f(X)-f(Y)=\prod_{i=1}^d(a_iX+b_iY+c_i),\quad a_i,b_i,c_i\in R.$$ **Example 7**. 1. A quadratic polynomial is always special because $$AX^2+BX+C-(AY^2+BY+C)=(X-Y)(AX+AY+B).$$ 2. If $R={\mathbb{F}}_p$ then $f=X^p$ is special because $$X^p-Y^p=\prod_{a\in{\mathbb{F}}_p}(X-aY).$$ For a special polynomial $f$, Conjecture [Conjecture 1](#the conjecture){reference-type="ref" reference="the conjecture"} can be established in full. **Theorem 8**. *If $f\in{\mathbb{F}}_q[T][X]$ is irreducible and special then Conjecture [Conjecture 1](#the conjecture){reference-type="ref" reference="the conjecture"} holds for $f$.* The proof of Theorem [Theorem 8](#special = conjecture){reference-type="ref" reference="special = conjecture"} will be given in Section 7. **Example 9**. The polynomial $f=X^p-T\in{\mathbb{F}}_p[T][X]$ is irreducible (since it is linear in $T$) and special (similarly to Example 7(ii)). Hence Conjecture [Conjecture 1](#the conjecture){reference-type="ref" reference="the conjecture"} holds for it. We fully classify the set of special polynomials over an arbitrary UFD $R$. **Theorem 10**. *Let $R$ be a UFD, $K$ its field of fractions and $p=\mathrm{char}(K)$.* 1. *Assume $p=0$. Then $f\in R[X]$ is special iff it is of the form $$f(X)=f_d(X+A)^d+C,\quad 0\neq f_d\in R,\quad A,C\in K,$$ where $d\ge 2$ is such that there exists a primitive $d$-th root of unity in $K$.* 2. *Assume $p>0$.
Then $f\in R[X]$ of degree $\deg f=d=p^l m\ge 2 , (m,p) = 1$ is special iff it is of the form $$f(X) = f_d \prod_{i=1}^{p^v} (X - b_i + A)^{m p^{l-v}} + C,$$ $$0\neq f_d\in R,\quad A, C \in K,\quad 0 \leq v \leq l,\quad \zeta\in K,\quad V = \{b_1,...,b_{p^v}\} \subset K,$$ where $\zeta$ is a primitive $m$-th root of unity and $V$ is an $\mathbb{F}_p(\zeta)$-linear subspace of $K$ with $|V|=p^v$.* The proof of will be given in . We now briefly discuss how our main conjecture and results compare with the work of Leumi [@leumi2021lcm], which also studied the function field analog of Cilleruelo's conjecture and influenced the present work. First, [@leumi2021lcm] states Conjecture [Conjecture 1](#the conjecture){reference-type="ref" reference="the conjecture"} without the constant $c_f$. This is certainly false by Theorem [Theorem 3](#Upper bound){reference-type="ref" reference="Upper bound"} because it can happen that $c_f<1$ (see Example [Example 12](#ex:1){reference-type="ref" reference="ex:1"} below). It seems to have been overlooked that when $g\in V_f$ and $Q\in M_n,\,n>\deg g$ then since $f(Q+g)=f(Q)$, the value $f(Q+g)$ contributes nothing new to the LCM on the RHS of ([\[eq: def Lfn\]](#eq: def Lfn){reference-type="ref" reference="eq: def Lfn"}) over the contribution of $f(Q)$. Once one accounts for this redundancy with the constant $c_f=1/|V_f|$, Cilleruelo's original heuristic carries over to the function field case giving rise to Conjecture [\[the conjecture\]](#the conjecture){reference-type="ref" reference="the conjecture"}. Second, all results and conjectures in [@leumi2021lcm] are stated only for an absolutely irreducible and separable $f\in{\mathbb{F}}_q[T][X]$. We do not make these restrictions here and it takes additional technical work to treat the general case. Third, the lower bound on $L_f(n)$ given in [@leumi2021lcm] has a smaller constant than ours (thus the bound is weaker), comparable to the RHS of ([\[eq:sah_lower_bound_rad\]](#eq:sah_lower_bound_rad){reference-type="ref" reference="eq:sah_lower_bound_rad"}), and a lower bound for the radical of $L_f(n)$ is not stated explicitly. Fourth, in [@leumi2021lcm] our Theorem [\[special = conjecture\]](#special = conjecture){reference-type="ref" reference="special = conjecture"} is stated without the constant $c_f$, which is incorrect in general for the same reasons explained above, although the arguments therein are essentially correct in the case $c_f=1$. Finally, [@leumi2021lcm] gives a classification of special polynomials only in the case $p\nmid d$ (and it is stated only for the ring ${\mathbb{F}}_q[T]$), whereas we treat the general case. **Acknowledgments.** The authors would like to thank Zeév Rudnick for spotting a few small errors in a previous draft of the paper. Both authors were partially supported by Israel Science Foundation grant no. 2507/19. # Preliminaries {#sec:prelim} For background on the arithmetic of function fields see [@rosen2002number]. For background on resultants, which we will use below, see [@Lang2002 §IV.8]. ## Notation We now introduce some notation which will be used throughout Sections [2](#sec:prelim){reference-type="ref" reference="sec:prelim"}-[7](#sec: special asym){reference-type="ref" reference="sec: special asym"}. Let $p$ be a prime, $q=p^k$. For $Q\in{\mathbb{F}}_q[T]$ we denote by $|Q|=q^{\deg Q}$ the standard size of $Q$. For $P\in{\mathbb{F}}_q[T]$ prime and $Q\in{\mathbb{F}}_q[T]$ we denote by $v_P(Q)$ the exponent of $P$ in the prime factorization of $Q$. 
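For readers who wish to experiment with the quantities introduced above and in the introduction, the following minimal computational sketch (ours, not part of the original argument) computes $\deg L_f(n)$ by brute force for small $n$. It assumes $q=p$ is prime, represents elements of ${\mathbb{F}}_p[T]$ as tuples of coefficients (constant term first), and uses helper names of our own choosing (`pdeg`, `pmul`, `pdivmod`, `pgcd`, `plcm`, `peval`, `M`); the sample polynomial $f=X^2+T$ over ${\mathbb{F}}_3[T]$ is chosen purely for illustration (there $d=2$ and $p\nmid d$, so $c_f=1$), and its output can be compared with $c_f(d-1)nq^n$.

```python
# Minimal illustrative sketch (assumptions: q = p prime; helper names are ours).
# Polynomials in F_p[T] are tuples of coefficients, constant term first.
from itertools import product

p = 3  # work over F_3[T]

def trim(a):
    """Reduce coefficients mod p and drop trailing zeros."""
    a = list(a)
    while a and a[-1] % p == 0:
        a.pop()
    return tuple(c % p for c in a)

def pdeg(a):
    """Degree in T (the zero polynomial gets degree -1 here)."""
    return len(trim(a)) - 1

def padd(a, b):
    n = max(len(a), len(b))
    a = list(a) + [0] * (n - len(a))
    b = list(b) + [0] * (n - len(b))
    return trim(x + y for x, y in zip(a, b))

def pmul(a, b):
    a, b = trim(a), trim(b)
    if not a or not b:
        return ()
    c = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] += x * y
    return trim(c)

def pdivmod(a, b):
    """Division with remainder in F_p[T]; b must be nonzero."""
    a, b = list(trim(a)), trim(b)
    inv = pow(b[-1], p - 2, p)        # inverse of the leading coefficient
    q = [0] * max(len(a) - len(b) + 1, 0)
    while len(a) >= len(b):
        c = (a[-1] * inv) % p
        s = len(a) - len(b)
        q[s] = c
        for i, y in enumerate(b):
            a[i + s] -= c * y
        a = list(trim(a))
    return trim(q), tuple(a)

def pgcd(a, b):
    while trim(b):
        a, b = b, pdivmod(a, b)[1]
    return trim(a)

def plcm(a, b):
    """lcm up to a scalar factor (which does not affect the degree)."""
    return pdivmod(pmul(a, b), pgcd(a, b))[0]

def peval(f_coeffs, Q):
    """Evaluate f(X) = sum f_i X^i (with f_i in F_p[T]) at X = Q, via Horner."""
    out = ()
    for fi in reversed(f_coeffs):
        out = padd(pmul(out, Q), fi)
    return out

def M(n):
    """All monic polynomials of degree n in F_p[T]."""
    for low in product(range(p), repeat=n):
        yield tuple(low) + (1,)

# Example polynomial f(X) = X^2 + T over F_3[T]: f_0 = T, f_1 = 0, f_2 = 1.
f = [(0, 1), (), (1,)]

def deg_L(n):
    L = (1,)
    for Q in M(n):
        L = plcm(L, peval(f, Q))
    return pdeg(L)

for n in range(1, 5):
    print(n, deg_L(n), (2 - 1) * n * p**n)   # compare with c_f (d-1) n q^n
```

Since the sketch enumerates all of $M_n$ and performs $q^n$ lcm computations, it is only meant for very small $q$ and $n$; it is an illustration of the definitions, not of the proofs below.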
We will always fix a polynomial $$f(X)=\sum_{i=0}^df_iX^i\in{\mathbb{F}}_q[T][X],\,f_d\neq 0$$ of degree $d$. We also adopt the following conventions about notation. - For a polynomial $Q\in{\mathbb{F}}_q[T]$ we denote by $\deg Q$ its degree in $T$. For a polynomial $g\in{\mathbb{F}}_q[T][X]$ we denote by $\deg g=\deg_Xg$ its degree in the variable $X$. - For a polynomial $g\in{\mathbb{F}}_q[T][X]$ we denote by $g'=\frac{\partial g}{\partial X}$ its derivative in the variable $X$. The derivative in the variable $T$ will be written explicitly $\frac{\partial g}{\partial T}$. - $g\in{\mathbb{F}}_q[T][X]$ is called separable if it is separable as a polynomial in the variable $X$, equivalently $g\not\in{\mathbb{F}}_q[T][X^p]$. - For two polynomials $g,h\in{\mathbb{F}}_q[T][X]$ we denote by $\mathrm{Res}(g,h)=\mathrm{Res}_X(g,h)$ their resultant in the variable $X$. ## The space $V_f$ Recall that $V_f=\left\{g \in \mathbb{F}_q[T]:\ f\left(X + g\right) = f\left(X\right)\right\}$. **Lemma 11**. *Assume $d\ge 1$.* 1. *$|V_f| \leq d$.* 2. *$V_f$ is an ${\mathbb{F}}_p$-linear subspace of ${\mathbb{F}}_q[T]$.* 3. *$|V_f|$ is a power of $p$.* 4. *If $p \nmid d$ then $V_f$ is trivial.* 5. *If $f_d\nmid f_i$ for some $1\le i\le d-1$ then $|V_f|\le d-1$.* *Proof.* 1. Assume by way of contradiction that $|V_f| \geq d + 1$. Then $f(g)=f(0)$ for every $g\in V_f$. Since $f(x) - f(0)$ has at most $d$ roots, $f\left(x\right) = f\left(0\right)$ and $f$ is constant, a contradiction. 2. It is obvious that $0 \in V_f$. Now let $a, b \in V_f$ and let us prove $\alpha a + \beta b \in V_f$ where $\alpha, \beta \in \{0, 1, ..., p - 1\}$. Recursively applying $f(X+a)=f(X+b)=f(X)$ we obtain $$f\left(X + \alpha a + \beta b\right) = f\left(X + \sum_{i=1}^{\alpha} a + \sum_{i=1}^{\beta} b\right) = f\left(X\right).$$ 3. Obvious from (i) and (ii). 4. Let $g\in V_f$, so $f(X+g)=f(X)$. Comparing coefficients at $X^{d-1}$ we find $dg=0$. If $p\nmid d$ then $g=0$, so in this case $V_f=\{0\}$ and the claim follows. 5. If $|V_f|=d$ then since all elements $g\in V_f$ are roots of $f(X)-f(0)$, we have $f=f_d\prod_{g\in V_f}(X-g)+f(0)$ and $f_d|f_i$ for all $1\le i\le d-1$, contradicting the assumption.  ◻ **Example 12**. (Example of a polynomial with $|V_f| = d=\deg f$) Let $V = \{b_1,...,b_{p^v}\}\subset{\mathbb{F}}_q[T]$ be an ${\mathbb{F}}_p$-linear subspace and $C\in{\mathbb{F}}_q[T]$. Then the polynomial $$f(X)=\prod_{i=1}^{p^v}(X-b_i)+C$$ has $V_f = V$. Thus $|V_f| = |V| = p^v = d$. ## Roots of $f$ modulo prime powers In this subsection we study the quantity $$\rho_f(P^{k}) := \left|\{Q \bmod P^k: f(Q) \equiv 0 \bmod P^k\}\right|,$$ i.e. the number of roots of $f$ modulo a prime power $P^k$. **Lemma 13**. *Let $g, h \in \mathbb{F}_q[T][X]$ be polynomials and let $P\in{\mathbb{F}}_q[T]$ prime. If $g, h$ have a common root modulo $P^m$ then $P^m \mid R := \mathrm{Res}(g, h)$.* *Proof.* We can express $R$ as $R = a(X)g(X) + b(X)h(X)$ for some $a(X), b(X) \in \mathbb{F}_q[T][X]$ (see [@Lang2002 §IV.8]). Therefore, if there exists a $Q \in \mathbb{F}_q[T]$ such that $P^m \mid g(Q)$ and $P^m \mid h(Q)$, then $P^m$ must also divide $a(Q)g(Q) + b(Q)h(Q) = R$. ◻ The proof of the next two lemmas is similar to the analogous proof for the integer case in [@TrygveNagel1921 proof of Theorem II]. **Lemma 14**. *Assume $f$ is separable and denote $R := \mathrm{Res}_X(f, f')\neq 0$. Let $P\in{\mathbb{F}}_q[T]$ be prime and denote $\mu=v_P(R)$. Let $x_0,x_1\in{\mathbb{F}}_q[T]$ be such that the following conditions hold:* 1. *$f(x_0)\equiv 0\pmod{P^{\mu+1}}$.* 2.
*$f(x_1)\equiv 0\pmod{P^{\beta+1}}$, where $\beta:=v_P(f'(x_0))$.* 3. *$x_1 \equiv x_0 \mod P^{\mu + 1}$.* *Then $\beta \leq \mu$ and $v_P(f'(x_1))=\beta$.* *Proof.* Since $f(x_0) \equiv 0 \mod P^{\mu +1}$ and $P^{\mu + 1} \nmid R$, by Lemma [Lemma 13](#shared roots mod P){reference-type="ref" reference="shared roots mod P"} we must have $P^{\mu + 1} \nmid f'(x_0)$. Hence $\beta \leq \mu$. Now writing $x_1=x_0+tP^{\mu+1},\,t\in{\mathbb{F}}_q[T]$ and using $\beta\le\mu$ and conditions (a),(b) we have $$f'(x_1) = f'(x_0 + tP^{\mu + 1}) \equiv f'(x_0) \equiv 0 \pmod {P^{\beta}},$$ $$f'(x_1) = f'(x_0 + tP^{\mu + 1}) \equiv f'(x_0) \not\equiv 0 \pmod{P^{\beta + 1}},$$ so $v_P(f'(x_1))=\beta$. ◻ **Lemma 15**. *In the setup of Lemma [Lemma 14](#beta for Hensel){reference-type="ref" reference="beta for Hensel"} let $\alpha>\mu$ be an integer and assume that in fact $f(x_1)\equiv 0\pmod{P^{\alpha+\beta}}$. Consider the set $$S_1 := \left\{ x_1 + u P^{\alpha} \mid u \in \mathbb{F}_q[T]/ P^{\beta}\right\}\subset{\mathbb{F}}_q[T]/P^{\alpha+\beta}.$$ Then* 1. *The elements of $S_1$ are roots of $f$ modulo $P^{\alpha+\beta}$.* 2. *The number of roots of $f$ modulo $P^{\alpha+\beta + 1}$ that reduce modulo $P^{\alpha+\beta}$ to an element of $S_1$ is equal to $|S_1|=q^{\beta \deg P}$.* *Proof.* To prove (i) we note that $$f(x_1 + uP^{\alpha}) \equiv f(x_1) + uP^{\alpha}f'(x_1) \pmod{P^{2\alpha}}.$$ Thus $P^{\alpha+\beta} \mid f(x_1 + uP^{\alpha})$ as $P^{\alpha+\beta} \mid f(x_1)$ and by Lemma [Lemma 14](#beta for Hensel){reference-type="ref" reference="beta for Hensel"} we have $P^{\beta} \mid f'(x_1)$ and $\alpha+\beta \le\alpha+\mu< 2\alpha$. To show (ii), consider the set of possible lifts from $S_1$ to ${\mathbb{F}}_q[T]/P^{\alpha+\beta+1}$, i.e. $$S_2 := \left\{x_1 + u P^{\alpha} + v P^{\alpha + \beta} \mid u \in \mathbb{F}_q[T]/ P^{\beta}, v \in \mathbb{F}_q[T]/P\right\}.$$ We will now determine for which $u,v$ the element $x_1+uP^\alpha+vP^{\alpha+\beta}$ is a root of $f$ modulo $P^{\alpha+\beta+1}$. Using $2\alpha>\alpha+\beta$ we have $$f(x_1 + u P^{\alpha} + v P^{\alpha + \beta}) \equiv f(x_1) + u P^{\alpha} f'(x_1)+vP^{\alpha+\beta}f'(x_1) \pmod{ P^{\alpha + \beta + 1}}.$$ Hence $x_1 + u P^{\alpha} + v P^{\alpha + \beta}$ is a root of $f\bmod P^{\alpha + \beta + 1}$ iff $$\label{eq: condition for hensel}f(x_1) + u P^{\alpha} f'(x_1)+vP^{\alpha+\beta}f'(x_1) \equiv 0 \pmod {P^{\alpha + \beta + 1}}.$$ As $P^{\alpha + \beta} \mid f(x_1)$ and $v_P(f'(x_1))=\beta$ (Lemma [Lemma 14](#beta for Hensel){reference-type="ref" reference="beta for Hensel"}), we have ([\[eq: condition for hensel\]](#eq: condition for hensel){reference-type="ref" reference="eq: condition for hensel"}) iff $$u \equiv - \left(\frac{f'(x_1)}{P^{\beta}}\right)^{-1} \left(\frac{f(x_1)}{P^{\alpha + \beta}}+vf'(x_1)\right) \pmod P$$ Thus we have $|P|$ possible values of $v$ and for each of these $|P^{\beta}|/|P|$ possible values of $u$. Overall we have $|P^\beta|=q^{\beta\deg P}=|S_1|$ possible values of $(u,v)$ and the assertion follows. ◻ **Lemma 16**. *Assume that $f$ is separable, $R=\mathrm{Res}(f, f')\neq 0$. Let $P$ be a prime in $\mathbb{F}_q[T]$ and $\mu=v_P(R)$. 
Then $\rho_f(P^{2\mu + k}) = \rho_f(P^{2\mu + 1})$ for all $k \geq 1$.* *Proof.* For each root $x_1$ of $f$ modulo $P^{2\mu+k}$ we can apply Lemma [Lemma 15](#Lift root seprable){reference-type="ref" reference="Lift root seprable"} with $\alpha=2\mu+k-\beta$, where $\beta=v_P(f'(x_1))$ (the condition $\alpha>\mu$ holds because $\beta\le\mu$ by Lemma [Lemma 14](#beta for Hensel){reference-type="ref" reference="beta for Hensel"} applied with $x_0=x_1$) and obtain that the number of roots of $f$ modulo $P^{2\mu+k+1}$ equals the number of roots of $f$ modulo $P^{2\mu+k}$, i.e. $\rho_f(P^{2\mu + k+1}) = \rho_f(P^{2\mu + k})$. The assertion now follows by induction on $k$. ◻ **Lemma 17**. *Assume $f\in{\mathbb{F}}_q[T][X^p]$ is inseparable and irreducible. Then $U := \mathrm{Res}(f, \frac{\partial f}{\partial T}) \neq 0$.* *Proof.* Assume by way of contradiction that $U = 0$. Then $f,\frac{\partial f}{\partial T}$ have a common factor and since $f$ is irreducible we have $f\mid\frac{\partial f}{\partial T}$. Comparing degrees in $T$ we must have $\frac{\partial f}{\partial T}=0$. This means that $f\in{\mathbb{F}}_q[T^p,X^p]$ is a $p$-th power, contradicting its irreducibility. ◻ **Lemma 18**. *Assume that $f$ is inseparable and irreducible, and $P^m \nmid U := \mathrm{Res}(f, \frac{\partial f}{\partial T}) \in \mathbb{F}_q[T]$ for some $P\in{\mathbb{F}}_q[T]$ prime and $m \geq 1$. Then $\rho_f(P^{k}) = 0$ for every $k \geq m+1$.* *Proof.* Assuming the existence of $Q \in \mathbb{F}_q[T]$ such that $P^m \mid f(Q)$ (if $f$ has no roots modulo $P^m$ we are done), we will now prove that $P^{m+1} \nmid f(Q)$. Since $P^m \mid f(Q)$, we know that $P^m \nmid \frac{\partial f}{\partial T}(Q)$; otherwise, by Lemma [Lemma 13](#shared roots mod P){reference-type="ref" reference="shared roots mod P"} we would have $P^m|U$, contradicting our assumption. Since $f$ is inseparable and irreducible we have $f'=0$ and therefore $$\frac{\partial f(Q)}{\partial T} = \frac{\partial f}{\partial T}(Q) + \frac{\partial f}{\partial X}(Q) \frac{dQ}{dT} = \frac{\partial f}{\partial T}(Q).$$ If $P^{m+1} \mid f(Q)$, write $f(Q) = P^{m+1} C$ and then $$\begin{split} \frac{\partial f(T,Q(T))}{\partial T} &= \frac{ \partial(P^{m+1} C)}{\partial T} = P^{m+1} \frac{\partial C}{\partial T} + (m+1)P^{m} \frac{\partial P}{\partial T} C. \\ \end{split}$$ Thus $P^m \mid \frac{\partial f(T,Q(T))}{\partial T} = \frac{\partial f}{\partial T}(Q)$, contradicting the above observation $P^m \nmid \frac{\partial f}{\partial T}(Q)$. ◻ **Proposition 19**. *Assume that $f$ is irreducible. Then $\rho_f(P^{m}) \ll 1$ for all $P$ prime and $m \geq 1$.* *Proof.* Since $\mathbb{F}_q[T]/ P$ is a field we have $\rho_f(P^{}) \leq d$. Thus it remains to handle the case $m\ge 2$. If $f$ is inseparable then using Lemmas [Lemma 18](#Hensel for inseprable){reference-type="ref" reference="Hensel for inseprable"} and [Lemma 17](#non zero resultant){reference-type="ref" reference="non zero resultant"} we see that there are only finitely many pairs $P,m$ such that $m \geq 2$ and $\rho_f(P^{m}) \neq 0$ (as they must satisfy $P^m\,|\,\mathrm{Res}(f,\frac{\partial f}{\partial T})\neq 0$). Now if $f$ is separable then by Lemma [Lemma 16](#Hensel for seprable){reference-type="ref" reference="Hensel for seprable"} there are only finitely many pairs of $(P, m)$ such that $m \geq 2$ and $\rho_f(P^{m}) \neq \rho_f(P^{m-1})$, Denote these pairs by $(P_i, m_i)_{i=1}^s$. 
Then for all primes $P$ and $m \geq 1$ $$\rho_f(P^{m}) \leq \max\{d, \rho_f(P_i^{m_i})_{i=1}^s\}\ll 1.$$ ◻ **Lemma 20**. *Let $f \in {\mathbb{F}}_q[T][X^p]$ be inseparable and irreducible. Then there exist a separable and irreducible $h \in \mathbb{F}_q[T][X]$ such that $\rho_f(P^{}) = \rho_h(P)$ for all primes $P \in \mathbb{F}_q[T]$.* *Proof.* Write $f(X) = h(X^{p^m})$ for some $m \geq 1$ such that $h\in{\mathbb{F}}_q[T][X]\setminus{\mathbb{F}}_q[T][X^p]$ is separable. For any prime $P$, he $m$-fold Frobenius isomorphism of ${\mathbb{F}}_q[T]/P$ given by $x\mapsto x^{p^m}$ gives a one-to-one correspondence between the roots of $f$ modulo $P$ and the roots of $h$ modulo $P$. Hence $\rho_f(P) = \rho_h(P)$. ◻ # The upper bound {#sec: upper bound} Throughout this section the polynomial $f$ is assumed *irreducible*. To obtain the upper and lower bounds on $L_f(n)$ (recall that this quantity was defined in ([\[eq: def Lfn\]](#eq: def Lfn){reference-type="ref" reference="eq: def Lfn"})) we set $$P_f(n) : = \prod_{Q \in M_n} f(Q)$$ and write $$\label{eq: L P}\deg L_f(n) = c_f \deg P_f(n)- (c_f \deg P_f(n) - \deg L_f(n)),$$ where $c_f=1/|V_f|$. **Lemma 21**. *$\deg P_f(n) = dnq^n + O(q^n)$.* *Proof.* For sufficiently large $n$ and $Q \in M_n$ we have $\deg f(Q) = dn + \deg f_d$, hence $$\deg P_f(n) = \deg \prod_{Q \in M_n} f(Q) = \sum_{Q \in M_n} \deg f(Q) = \sum_{Q \in M_n} (dn + \deg f_d) = d nq^n + O(q^n).$$ ◻ Throughout the rest of the paper $P$ will always denote a prime of ${\mathbb{F}}_q[T]$ and $\sum_P$ (resp. $\prod_P$) will denote a sum (resp. product) over all monic primes of ${\mathbb{F}}_q[T]$. A sum of the form $\sum_{a\le\deg P\le b}$ is over all monic primes in the corresponding degree range (and the same for products).\ To estimate $c_f \deg P_f(n) - \deg L_f(n)$ we write the prime decomposition of $L_f(n)$ and $P_f(n)$ as $$L_f(n) = \prod_{P} P^{\beta_P(n)},\quad P_f(n) = \prod_{P} P^{\alpha_P(n)},$$ where the products are over all (monic) primes in ${\mathbb{F}}_q[T]$.\ \ **Convention.** Throughout Sections [3](#sec: upper bound){reference-type="ref" reference="sec: upper bound"}-[7](#sec: special asym){reference-type="ref" reference="sec: special asym"} we will always assume that $n$ is large enough so that $f(Q)\neq 0$ for all $Q\in M_n$. Thus $L_f(n),P_f(n)\neq 0$ and $\alpha_P(n),\beta_P(n)$ are always finite.\ We have $$\label{eq:def alpha beta}\alpha_P(n) = \sum_{Q \in M_n} v_P(f(Q)) ,\ \beta_P(n) = \max\{ v_P(f(Q)) : Q \in M_n\},$$ (recall that $v_P(Q)$ is the exponent of $P$ in the prime factorization of $Q$). Combining ([\[eq: L P\]](#eq: L P){reference-type="ref" reference="eq: L P"}) with we have $$\deg L_f(n) = c_f d nq^n - \sum_{P}(c_f \alpha_P(n) - \beta_P(n) ) \deg P + O(q^n).$$ **Lemma 22**. *For sufficiently large n $$\beta_P(n) \leq c_f \alpha_P(n) .$$* *Proof.* Denote by $Q_{max} \in M_n$ an element such that $\beta_P(n) = v_P(f(Q_{max}))$. Then for each element $g \in V_f$ we have $f(Q_{max} + g) = f(Q_{max})$ and hence $\beta_P(n)= v_P(f(Q_{max})) = v_P(f(Q_{max} + g)).$ For $n$ sufficiently large $Q_{max} + g \in M_n$, so $$|V_f| \beta_P(n) = \sum_{g \in V_f} v_P(f(Q_{max} + g)) \leq \sum_{Q \in M_n} v_P(f(Q)) = \alpha_P(n) .$$ The assertion follows since $c_f=1/|V_f|$. ◻ The next lemma is the main tool for estimating $\alpha_P(n), \beta_P(n)$. For its proof we introduce the following notation, which will be used in the sequel as well: $$\label{eq: def sf}s_f(P^k,n) = \left|\{Q \in M_n: f(Q) \equiv 0 \bmod P^k\}\right|,$$ where $P$ is prime and $k\ge 1$. 
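To fix ideas, here is a small worked illustration of the quantities $\alpha_P(n)$, $\beta_P(n)$ and $s_f(P^k,n)$; the example is ours and is included only to make the definitions concrete. Take $q=2$ and $f(X)=X^2+X+T\in{\mathbb{F}}_2[T][X]$, which is irreducible with $d=2$; since $f(X+1)=f(X)$ in characteristic $2$ we have $V_f={\mathbb{F}}_2$ and $c_f=1/2$. For $n=1$ we have $M_1=\{T,T+1\}$ and $$f(T)=T^2+T+T=T^2,\qquad f(T+1)=(T+1)^2+(T+1)+T=T^2,$$ so for the prime $P=T$ $$\alpha_T(1)=v_T(f(T))+v_T(f(T+1))=4,\qquad \beta_T(1)=2,$$ while $s_f(T,1)=s_f(T^2,1)=2$ and $s_f(T^k,1)=0$ for $k\ge 3$, so that indeed $\alpha_T(1)=\sum_{k\ge 1}s_f(T^k,1)$. Moreover $\beta_T(1)=c_f\alpha_T(1)$, consistent with Lemma [Lemma 22](#bound on bn using an){reference-type="ref" reference="bound on bn using an"} (here with equality), reflecting the fact that $T\sim T+1$ and hence $f(T)=f(T+1)$.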
**Lemma 23**. *Let $P \in \mathbb{F}_q[T]$ be a prime.* 1. *$\beta_P(n) = O\left(\frac{n}{\deg P}\right)$.* 2. *If $f$ is separable and $P \nmid \mathrm{Res}(f,f')$, then $$\alpha_P(n) = q^n \frac{\rho_f(P)}{|P| - 1} + O\left(\frac{n}{\deg P}\right).$$* 3. *If $f$ is inseparable and $P \nmid \mathrm{Res}(f, \frac{\partial f}{\partial T})$, then $$\alpha_P(n) = q^n \frac{\rho_f(P)}{|P|} + O\left(\frac{n}{\deg P}\right).$$* 4. *For the finitely many "bad\" primes where neither (ii) nor (iii) hold we have $$\alpha_P(n) = O(q^n).$$* *Proof.* To prove (i) we note that since there exists $Q \in M_n$ such that $P^{\beta_P(n) } \mid f(Q)$ and $f(Q) \neq 0$ we get: $$\beta_P(n) \deg P \leq \deg f(Q).$$ For sufficiently large $n$ we have $\deg f(Q) = dn + \deg f_d$, thus $$\beta_P(n) \ll \frac{n}{\deg P},$$ establishing (i). Using the notation ([\[eq: def sf\]](#eq: def sf){reference-type="ref" reference="eq: def sf"}) we have $$\alpha_P(n) = \sum_{Q \in M_n} v_P(f(Q)) = \sum_{Q \in M_n} \sum_{\substack{k \geq 1 \\ P^k \mid f(Q)}} 1 = \sum_{k \geq 1} \sum_{ \substack{ Q \in M_n \\ P^k \mid f(Q)}} 1 = \sum_{k \geq 1} s_f(P^k,n).$$ Note that $$s_f(P^k,n) = q^n\frac{\rho_f(P^{k})}{|P^k|} + O(1),$$ since if $Q$ is a root of $f$ modulo $P^k$ and $\deg P^k \le n$ then there are exactly $\frac{q^n}{|P|^k}$ elements in $M_n$ that reduce to $Q$ modulo $P^k$. And if $\deg P^k > n$ then $s_f(P^k,n) \leq \rho_f(P^{k}) \ll 1$ by Proposition [Proposition 19](#bound on pn){reference-type="ref" reference="bound on pn"}. Hence $$\alpha_P(n) = \sum_{k \geq 1} s_f(P^k,n) = \sum_{k \geq 1} \left[q^n\frac{\rho_f(P^{k})}{|P|^k} + O(1)\right] = q^n \sum_{k \geq 1} \frac{\rho_f(P^{k})}{|P|^k} + O\left(\frac{n}{ \deg P}\right).$$ Let us look at the different cases: - If $f$ is separable and $P \nmid \mathrm{Res}(f,f')$ then we have by Lemma [Lemma 16](#Hensel for seprable){reference-type="ref" reference="Hensel for seprable"} that $\rho_f(P^{k}) = \rho_f(P)$, thus $$\begin{split} \alpha_P(n) & = q^n \sum_{k \geq 1} \frac{\rho_f(P)}{|P|^k} + O\left(\frac{n}{ \deg P}\right)\\ & = q^n \rho_f(P) \sum_{k \geq 1} \left(\frac{1}{|P|}\right)^k + O\left(\frac{n}{ \deg P}\right) \\ & = q^n \frac{\rho_f(P)}{|P| - 1} + O\left(\frac{n}{ \deg P}\right). \end{split}$$ - If $f$ is inseparable and $P \nmid \mathrm{Res}_X(f, \frac{\partial f}{\partial T})$ then using Lemma [Lemma 18](#Hensel for inseprable){reference-type="ref" reference="Hensel for inseprable"} we get $$\alpha_P(n) = q^n \frac{\rho_f(P)}{|P|} + O\left(\frac{n}{\deg P}\right).$$ - For a general irreducible $f$, using the fact that for every $k \geq 1$, $\rho_f(P^{k}) \ll 1$ (Proposition [Proposition 19](#bound on pn){reference-type="ref" reference="bound on pn"}) we get $$\begin{split} \alpha_P(n) = q^n \sum_{k \geq 1} \frac{\rho_f(P^{k})}{|P|^k} + O\left(\frac{n}{ \deg P}\right) \ll q^n \sum_{k \geq 1} \frac{1}{|P|^{k}} + O\left(\frac{n}{ \deg P}\right) \ll q^n. \end{split}$$  ◻ **Lemma 24**. *$$\deg\left(\prod_{n< \deg P \leq n + \deg f_d} P^{\alpha_P(n) }\right) = O(q^n).$$* *Proof.* By Lemma [Lemma 23](#estimate an and bn){reference-type="ref" reference="estimate an and bn"} for sufficiently large $n$ and $P$ a prime such that $\deg P > n$, we have $\alpha_P(n) \leq q^n \frac{\rho_f(P)}{|P| - 1} + O\left(\frac{n}{ \deg P}\right) \ll \rho_f(P) + O(1) \ll 1$.
Using the Prime Polynomial Theorem, we have $$\begin{split} \deg\left(\prod_{n< \deg P \leq n + \deg f_d} P^{\alpha_P(n) }\right) & = \sum_{n < \deg P \leq n + \deg f_d} \alpha_P(n) \deg P \\ & \ll \sum_{n < \deg P \leq n + \deg f_d} \deg P = \sum_{k=n+1}^{n + \deg f_d} \sum_{\deg P = k} k \\ & \ll \sum_{k=n+1}^{n + \deg f_d} k \frac{q^k}{k} = \frac{q^{n + \deg f_d + 1} - q^{n+1}}{q - 1} \\ & \ll q^n. \end{split}$$ ◻ **Proposition 25**. *Denote $R_f(n) = \prod_{\deg P \leq n + \deg f_d} P^{\alpha_P(n) }$. Then $$\deg R_f(n) = n q^n + O(q^n).$$* *Proof.* Let us assume first that $f$ is separable. We will handle the inseparable case at the end of the proof. We note that we may ignore the $O(1)$ "bad\" primes (as defined in Lemma [Lemma 23](#estimate an and bn){reference-type="ref" reference="estimate an and bn"}) with an error term of $O(q^n)$ and by Lemma 24 we can ignore the primes with $n<\deg P\le n+\deg f_d$ with the same error term. Thus by Lemma [Lemma 23](#estimate an and bn){reference-type="ref" reference="estimate an and bn"}(ii) we obtain $$\label{eq: R_f(n)} \begin{split} \deg R_f(n) = \sum_{\deg P \leq n} \alpha_P(n) \deg P + O(q^n) = \sum_{\deg P \leq n} q^n \frac{\rho_f(P)}{|P| - 1} \deg P + \sum_{\deg P \leq n} O(n) + O(q^n). \end{split}$$ We bound the error term using the Prime Polynomial Theorem: $$\label{eq: error in R_f(n)} \begin{split} \sum_{\deg P \leq n} n = n \sum_{k=1}^n \sum_{\deg P = k} 1 \ll n \sum_{k=1}^n \frac{q^k}{k} \ll q^n. \end{split}$$ Now to estimate $\sum_{\deg P \leq n} q^n \frac{\rho_f(P)}{|P| - 1} \deg P$ we will use the Chebotarev Density Theorem in function fields [@Fried2023 Proposition 7.4.8]. First we introduce some notation and recall some terminology. Let $E / {\mathbb{F}}_q(T)$ be the splitting field of $f$ and denote by $G$ the Galois group of $f$. For each prime $P$ of $\mathbb{F}_q[T]$ unramified in $E/{\mathbb{F}}_q(T)$ the Frobenius class of $P$ is defined to be: $$\begin{split} \left( \frac{E / {\mathbb{F}}_q(T)}{P} \right) = \left\{\sigma \in G \,:\, \exists \ \mathfrak{P}/P \text{ prime of } E\text{ such that } \sigma(x)\equiv x^{|P|}\pmod{\mathfrak P}\text{ for all }x\text{ with }v_{\mathfrak P}(x)\ge 0\right\} . \end{split}$$ The Frobenius class $\left( \frac{E / {\mathbb{F}}_q(T)}{P} \right)$ is a conjugacy class in $G$. Denote by $S$ the set of all conjugacy classes in $G$. Given a conjugacy class $C \in S$, we set $$\pi_C(n) = \left| \left\{P\ \mathrm{prime} : \deg P = n, \left( \frac{E / {\mathbb{F}}_q(T)}{P} \right) = C\right\}\right| .$$ Let $K = \mathbb{F}_{q^v}$ be the algebraic closure of $\mathbb{F}_q$ in $E$. We have $G_0 := \mathrm{Gal}(K / \mathbb{F}_q) = \langle \phi \rangle$, where $\phi(x)=x^q$ is the $q$-Frobenius. Denote by $\varphi: G \to G_0$ the restriction map. Since $G_0=\langle\phi\rangle$ is cyclic and in particular abelian, we have for all $C \in S$ that $\varphi (C) = \{\phi^{n_C}\}$ for some $n_C\in{\mathbb{Z}}$. Define $$S_k := \{C \in S \mid \varphi(C) = \{\phi^k\}\}.$$ Now the Chebotarev Density Theorem in function fields says that $$\pi_C(k) = \left\{\begin{array}{ll}v \frac{|C|}{|G|} \frac{q^k}{k} + O\left(\frac{q^{k/2}}{k}\right),&C\in S_k,\\ 0,&C \notin S_k.\end{array}\right.$$ Since the Galois group acts on the roots of $f$, we can define $\mathrm{Fix}(C)$ to be the number of roots fixed by any element in the conjugacy class $C$ (this number is the same for all $\sigma \in C$). Assuming $P$ is unramified in $E/{\mathbb{F}}_q(T)$, we have $\rho_f(P) = \mathrm{Fix}\left(\left( \frac{E / {\mathbb{F}}_q(T)}{P} \right)\right)$.
In the calculations throughout the rest of the proof summation over $P$ denotes summation over primes $P\nmid \mathrm{Res}_X(f,f')$ if $f$ is separable and $P\nmid\mathrm{Res}(f,\frac{\partial f}{\partial T})$ otherwise. In the case when $f$ is separable this condition ensures that $P$ is unramified in $E/{\mathbb{F}}_q(T)$. As we observed above the excluded primes contribute $O(q^n)$. We have: $$\begin{gathered} \sum_{\deg P = k} \frac{\rho_f(P)}{q^k - 1}k = \sum_{\deg P = k} \frac{\mathrm{Fix}\left(\left( \frac{E / {\mathbb{F}}_q(T)}{P} \right)\right)}{q^k - 1} k = \sum_{C \in S}\frac{\mathrm{Fix}(C) \pi_C(k)}{q^k - 1} k = \sum_{C \in S_k} \left[\frac{\mathrm{Fix}(C) q^k}{(q^k - 1)k} \frac{v|C|}{|G|} k + O(q^{-k/2}) \right] \\ = \sum_{C \in S_k} v \frac{\mathrm{Fix}(C) |C|}{|G|} + O(q^{-k/2}) \end{gathered}$$ and therefore $$\begin{gathered} \label{eq: calc cheb} \sum_{\deg P \leq n} q^n \frac{\rho_f(P)}{|P| - 1}\deg P = q^n \sum_{k=1}^{n} \sum_{\deg P = k} \frac{\rho_f(P)}{q^k - 1}k = q^n \sum_{k=1}^{n} \left[ \sum_{C \in S_k} v \frac{\mathrm{Fix}(C) |C|}{|G|} + O(q^{-k/2}) \right]\\ = q^n \sum_{k=1}^{n} \sum_{C \in S_k} v \frac{\mathrm{Fix}(C) |C|}{|G|} + O(q^n) = q^n \sum_{l=1}^{\lfloor \frac{n}{v} \rfloor} v \sum_{C \in S} \frac{\mathrm{Fix}(C) |C|}{|G|} + O(q^n). \end{gathered}$$ Using Burnside's lemma [@rotman2012introduction Theorem 3.22] and the transitivity of $G$ (which is a consequence of $f$ being irreducible) we get $$1 = \sum_{\sigma \in G} \frac{\mathrm{Fix}(\sigma)}{|G|} = \sum_{C \in S} \frac{\mathrm{Fix}(C)|C|}{|G|} .$$ Plugging this into ([\[eq: calc cheb\]](#eq: calc cheb){reference-type="ref" reference="eq: calc cheb"}) and recalling ([\[eq: R_f(n)\]](#eq: R_f(n)){reference-type="ref" reference="eq: R_f(n)"}),([\[eq: error in R_f(n)\]](#eq: error in R_f(n)){reference-type="ref" reference="eq: error in R_f(n)"}) we obtain $$\deg R_f(n) = \sum_{\deg P \leq n} q^n \frac{\rho_f(P)}{|P| - 1} \deg P + O(q^n) = q^n \sum_{l=1}^{\lfloor \frac{n}{v} \rfloor} v + O(q^n) = nq^n + O(q^n).$$ This completes the proof in the separable case. To handle the case when $f$ is inseparable we again do the same calculations (using Lemma [Lemma 23](#estimate an and bn){reference-type="ref" reference="estimate an and bn"}(iii) this time): $$\begin{split} \deg R_f(n) & = \sum_{\deg P \leq n} \alpha_P(n) \deg P + O \left ( q^n \right) \\ & = \sum_{\deg P \leq n} q^n \frac{\rho_f(P)}{|P|} \deg P + \sum_{\deg P \leq n} O(n) + O(q^n). \end{split}$$ We can replace $\rho_f(P)$ with $\rho_h(P)$, where $h$ is the polynomial given by Lemma [Lemma 20](#Replacment to inseparable polinomial){reference-type="ref" reference="Replacment to inseparable polinomial"}. Since $h$ is separable, the argument above (with $f$ replaced by $h$) yields: $$\deg R_f(n) = \sum_{\deg P \leq n} q^n \frac{\rho_h(P)}{|P|}\deg P + O(q^n)=nq^n+O(q^n),$$ completing the proof. ◻ We are now ready to prove the upper bound on $\deg L_f(n)$. *Proof of Theorem [Theorem 3](#Upper bound){reference-type="ref" reference="Upper bound"}.* Using Lemma [Lemma 22](#bound on bn using an){reference-type="ref" reference="bound on bn using an"} we have $\beta_P(n) \leq c_f\alpha_P(n)$ for all primes $P$, provided $n$ is sufficiently large.
We have $$\begin{aligned} \deg L_f(n) & = c_f \deg P_f(n) - (c_f \deg P_f(n) - \deg L_f(n) ) \nonumber \\ & = c_f d n q^n - \sum_P (c_f\alpha_P(n) - \beta_P(n) ) \deg P \nonumber \\ & \leq c_f d n q^n - \sum_{\deg P \leq n + \deg f_d} (c_f \alpha_P(n) - \beta_P(n) ) \deg P \label{eq: proof upper 1}\\ & = c_f d n q^n - c_f\sum_{\deg P \leq n + \deg f_d} \alpha_P(n) \deg P + \sum_{\deg P \leq n+\deg f_d} \beta_P(n) \deg P \nonumber \\ &= c_f(d-1) n q^n + \sum_{\deg P \leq n + \deg f_d} \beta_P(n) \deg P + O(q^n)\label{eq: proof upper 2} \end{aligned}$$ where ([\[eq: proof upper 1\]](#eq: proof upper 1){reference-type="ref" reference="eq: proof upper 1"}) comes from removing negative terms and ([\[eq: proof upper 2\]](#eq: proof upper 2){reference-type="ref" reference="eq: proof upper 2"}) from Proposition [Proposition 25](#small P){reference-type="ref" reference="small P"} and Lemma [Lemma 21](#P_f(n)){reference-type="ref" reference="P_f(n)"}. It remains to prove that $\sum_{\deg P \leq n+\deg f_d} \beta_P(n) \deg P = O(q^n)$. From Lemma [Lemma 23](#estimate an and bn){reference-type="ref" reference="estimate an and bn"}(i) and the Prime Polynomial Theorem we get $$\label{eq: bound small primes} \sum_{\deg P \leq n + \deg f_d} \beta_P(n) \deg P \ll \sum_{\deg P \leq n + \deg f_d} \frac{n}{\deg P} \deg P = \sum_{\deg P \leq n + \deg f_d} n \ll q^n.$$ ◻ # Heuristic justification for Conjecture 1 {#sec: heuristic} Throughout the present section we maintain the assumption that $f=\sum_{i=0}^df_iX^i\in{\mathbb{F}}_q[T][X]$ is irreducible. Next we want to study the difference $\deg L_f(n)-c_f(d-1)nq^n$ and we will do so by relating it to a slightly different quantity defined by ([\[eq: def Sf\]](#eq: def Sf){reference-type="ref" reference="eq: def Sf"}) below. To this end let us define an equivalence relation on $M_n$ by $$\ Q_1 \sim Q_2 \iff f(X + Q_1 - Q_2) = f(X)\iff Q_1-Q_2\in V_f.$$ We note that for sufficiently large $n$ the size of each equivalence class is $|V_f|$ and the number of such classes is $c_fq^n$. Consider $$\label{eq: def Sf} S_f(n) = \left| \left\{P \in \mathbb{F}_q[T]\mbox{ prime}: \begin{aligned} \deg P >& n + \deg f_d,\\ \exists\, Q_1 \nsim Q_2 \in M_n& \text{ such that } P \mid f(Q_1), f(Q_2)\end{aligned} \right\}\right|.$$ Note that the condition in the bottom line on the RHS of ([\[eq: def Sf\]](#eq: def Sf){reference-type="ref" reference="eq: def Sf"}) is equivalent to $\beta_P(n)\neq c_f\alpha_P(n)$ (the negation of this condition is that $P$ occurs precisely in all $f(Q)$ where $Q$ runs over a single equivalence class of $\sim$, equivalently $|V_f|\beta_P(n)=\alpha_P(n)$), hence $$\label{eq: exp for Sf} S_f(n) = \sum_{\deg P>n + \deg f_d\atop{c_f\alpha_P(n)\neq\beta_P(n)}}1.$$ **Proposition 26**. *Let $f \in \mathbb{F}_q[T][X]$ be irreducible with $\deg_X f = d \geq 2$.* 1. *$S_f(n)\ll q^n.$* 2. *Conjecture [Conjecture 1](#the conjecture){reference-type="ref" reference="the conjecture"} is equivalent to $S_f(n) = o(q^n).$* *Proof.* The first part follows immediately from the definition of $S_f(n)$, since there are $q^n$ possible values of $f(Q)$ and each has $O(1)$ prime factors of degree $\deg P>n+\deg f_d$. It remains to prove the second part.
In the main calculation in the proof of Theorem [Theorem 3](#Upper bound){reference-type="ref" reference="Upper bound"} there is only one inequality, namely $$c_f d n q^n - \sum_P (c_f\alpha_P(n) - \beta_P(n) ) \deg P \leq c_f d n q^n - \sum_{\deg P \leq n + \deg f_d} (c_f \alpha_P(n) - \beta_P(n) ) \deg P,$$ equivalently $$- \sum_{\deg P > n + \deg f_d} (c_f \alpha_P(n) - \beta_P(n) ) \deg P \leq 0.$$ Since the RHS in Conjecture [Conjecture 1](#the conjecture){reference-type="ref" reference="the conjecture"} is $\gg nq^n$, the conjecture is now seen to be equivalent to $$\sum_{\deg P > n + \deg f_d} (c_f \alpha_P(n) - \beta_P(n) ) \deg P = o(n q^n).$$ Hence it suffices to prove that $$\label{eq: need to prove Sfn}S_f(n) = o(q^n) \iff \sum_{\deg P > n + \deg f_d} (c_f \alpha_P(n) - \beta_P(n) ) \deg P = o(n q^n).$$ For $P \in \mathbb{F}_q[T]$ prime set $$\delta_P(n) = \begin{cases} 1, & c_f \alpha_P(n) \neq \beta_P(n) ,\\ 0, & \text{otherwise}.\end{cases}$$ Note that for $n$ sufficiently large if $\deg P > dn + \deg f_d$ then $c_f \alpha_P(n) = \beta_P(n) = 0$. Hence, for $n$ sufficiently large using ([\[eq: exp for Sf\]](#eq: exp for Sf){reference-type="ref" reference="eq: exp for Sf"}) we have $$S_f(n) = \sum_{n + \deg f_d < \deg P \leq dn + \deg f_d} \delta_P(n).$$ From parts (ii)-(iii) of Lemma [Lemma 23](#estimate an and bn){reference-type="ref" reference="estimate an and bn"}, for sufficiently large $n$ and $P$ not one of $O(1)$ bad primes, if $\deg P > n$ then $\alpha_P(n) \ll 1$, so $$\begin{gathered} \label{eq:Sfn 1} \sum_{n + \deg f_d < \deg P} (c_f \alpha_P(n) - \beta_P(n) ) \deg P \ll \sum_{n + \deg f_d < \deg P \leq d n+\deg f_d} \delta_P(n) \deg P \\ \ll \sum_{n + \deg f_d < \deg P \leq d n + \deg f_d} \delta_P(n) n = S_f(n) n. \end{gathered}$$ On the other hand, once again using the fact that $c_f\alpha_P(n)=\beta_P(n)$ if $\deg P>dn+\deg f_d$, $$\begin{gathered} \label{eq:Sfn 2} \sum_{n + \deg f_d < \deg P} (c_f \alpha_P(n) - \beta_P(n) ) \deg P \\ \geq \sum_{n + \deg f_d < \deg P \leq d n + \deg f_d} \delta_P(n) \deg P \geq \sum_{n + \deg f_d < \deg P \leq d n + \deg f_d} \delta_P(n) n = S_f(n) n. \end{gathered}$$ From ([\[eq:Sfn 1\]](#eq:Sfn 1){reference-type="ref" reference="eq:Sfn 1"}) and ([\[eq:Sfn 2\]](#eq:Sfn 2){reference-type="ref" reference="eq:Sfn 2"}) the equivalence ([\[eq: need to prove Sfn\]](#eq: need to prove Sfn){reference-type="ref" reference="eq: need to prove Sfn"}) follows, which completes the proof. ◻ Heuristically one can estimate $S_f(n)$ by arguing that the "probability\" that for $Q_1\not\sim Q_2\in M_n$ the values $f(Q_1),f(Q_2)$ share a prime factor of degree $\deg P>n+\deg f_d$ is (using the Prime Polynomial Theorem) $$\sum_{m=n+\deg f_d+1}^{\infty}\sum_{\deg P=m}\frac 1{|P|^2}=\sum_{m=n+\deg f_d+1}^{\infty}\frac 1{mq^m}= O\left(\frac 1{nq^n}\right).$$ Since there are $O(q^{2n})$ pairs $Q_1,Q_2$ we expect $$S_f(n)\ll \frac 1{nq^n}\cdot q^{2n}\ll\frac{q^n}n=o(q^n).$$ By Proposition 26 this is equivalent to Conjecture [Conjecture 1](#the conjecture){reference-type="ref" reference="the conjecture"}. At this point one may ask whether the equivalence relation $\sim$ is really the dominant source of pairs $Q_1,Q_2\in M_n$ with $f(Q_1)=f(Q_2)$, otherwise we would definitely not have $S_f(n)=o(q^n)$. Over the integers it cannot happen that $f(n_1)=f(n_2)$ for $n_1\neq n_2\in{\mathbb{Z}}$ sufficiently large (since a nonconstant polynomial is monotone for sufficiently large arguments), but in finite characteristic this is less obvious. Nevertheless this is guaranteed by the following proposition, which will be needed for the proof of Theorem [Theorem 8](#special = conjecture){reference-type="ref" reference="special = conjecture"} as well. **Proposition 27**.
*$$\biggl|\biggl\{(Q_1,Q_2)\,:\,Q_1\not\sim Q_2\in M_n,\,f(Q_1)=f(Q_2)\biggr\}\biggr|=O(q^{n/2}).$$* Before we prove Proposition 27 we need a couple of auxiliary results. **Lemma 28**. *Let $g(X,Y)\in{\mathbb{F}}_q[T][X,Y]$ be a fixed irreducible polynomial with $\deg_{X,Y} g\ge 2$. The number of pairs $Q_1,Q_2\in M_n$ such that $g(Q_1,Q_2)=0$ is $O(q^{n/2})$.* *Proof.* If $\deg_Xg=\deg_Yg=1$ then $g=AXY+BX+CY+D,\,A\neq 0,B,C,D\in{\mathbb{F}}_q[T]$ and one cannot have $g(Q_1,Q_2)=0$ for $Q_1,Q_2\in M_n, n>\max(\deg B,\deg C,\deg D)$ since the term $AQ_1Q_2$ has higher degree than the other terms. Hence assume WLOG that $\deg_Yg\ge 2$, otherwise switch $X$ and $Y$. It follows from the quantitative Hilbert Irreducibility Theorem in function fields [@BaEn21 Theorem 1.1] that for all but $O(q^{n/2})$ polynomials $Q_1\in M_n$, the polynomial $g(Q_1,Y)\in{\mathbb{F}}_q[T][Y]$ is irreducible of degree $\ge 2$ and therefore has no root $Q_2\in M_n$. For the remaining $O(q^{n/2})$ values of $Q_1$ there are at most $\deg_Yg=O(1)$ roots for each $Q_1$, so overall there are $O(q^{n/2})$ pairs $Q_1,Q_2\in M_n$ with $g(Q_1,Q_2)=0$. ◻ **Lemma 29**. *Assume that the polynomial $f(X)-f(Y)\in{\mathbb{F}}_q[T][X,Y]$ is divisible by $aX+bY+c$ for some $a,b,c\in{\mathbb{F}}_q[T]$ with $b\neq 0$. Then* 1. *$\zeta:=-a/b$ satisfies $\zeta^d=1$ and $\zeta\in{\mathbb{F}}_q$.* 2. *If $\zeta=1$ then $c/b\in V_f$.* *Proof.* The divisibility assumption implies $f(X)=f\left(-\frac ab X-\frac cb\right)$. Comparing coefficients at $X^d$ gives $\zeta^d=1$. Since $\zeta\in{\mathbb{F}}_q(T)$ and ${\mathbb{F}}_q$ is algebraically closed in ${\mathbb{F}}_q(T)$ we have $\zeta\in{\mathbb{F}}_q$, establishing (i). If $\zeta=1$ the above identity becomes $f(X)=f\left(X-\frac cb\right)$, i.e. $-c/b\in V_f$. Since $V_f$ is a vector space we obtain (ii). ◻ We are ready to prove Proposition 27. *Proof of Proposition 27.* Write $f(X)-f(Y)=c_0\prod_{i=1}^m g_i(X,Y)$ with $c_0\in{\mathbb{F}}_q[T]$ and each $g_i\in{\mathbb{F}}_q[T][X,Y]$ irreducible with $\deg_{X,Y}g_i\ge 1$. It is enough to show that for each factor $g_i(X,Y)$ there are $\ll q^{n/2}$ pairs $Q_1\not\sim Q_2\in M_n$ with $g_i(Q_1,Q_2)=0$. For a given $i$, if we have $\deg_{X,Y}g_i\ge 2$ this follows from Lemma 28. If $\deg_{X,Y}g_i=1$, write $g_i=aX+bY+c$, assume WLOG that $b\neq 0$ and denote $\zeta=-a/b$. By Lemma 29(i) we have $\zeta^d=1$. We distinguish two cases: if $\zeta\neq 1$ then $a\neq -b$ and the leading coefficient of $g_i(Q_1,Q_2)$ is $a+b\neq 0$ for any $Q_1,Q_2\in M_n$ with $n>\deg c$, and therefore there are no such pairs with $g_i(Q_1,Q_2)=0$. If on the other hand $\zeta=1$ (i.e. $a=-b$) then by Lemma 29(ii) we have $c/b\in V_f$ and $g_i(Q_1,Q_2)=aQ_1+bQ_2+c=0$ is equivalent to $Q_2=Q_1-c/b$, so $Q_1\sim Q_2$ and there are no pairs with $g_i(Q_1,Q_2)=0$, $Q_1\not\sim Q_2\in M_n$. This completes the proof. ◻ **Remark 30**. It is readily seen from the proof that if $f$ is special in the sense of Definition 6 then we may put $0$ on the RHS of Proposition 27 for $n$ sufficiently large, since in this case $\deg_{X,Y}g_i=1$ for all $i$. # The lower bound {#sec: lower bound} **Lemma 31**. *Let $P \in \mathbb{F}_q[T]$ be a prime such that* 1. *$\deg P > \deg f_d + n$.* 2. *$P\nmid\mathrm{Res}(f,f')$ if $f$ is separable.* 3. *$P\nmid\mathrm{Res}(f,\frac{\partial f}{\partial T})$ if $f$ is inseparable.* *Denote $$B_i = B_i(P) := \left| \{Q \in M_n : P^i \mid f(Q)\neq 0\}\right|.$$ Then* 1. *$B_1\le d$.* 2. *$B_i\le B_1,\,i\ge 1$.* 3.
*$B_i=0$ if $i>d$.* *Proof.* Observe that since $n + \deg f_d < \deg P^i$, $M_n$ is mapped injectively into $\mathbb{F}_q[T]/ P^i$ by $Q\mapsto Q\bmod P^i$. **(i).** By the observation above $B_1\le \rho_f(P)\le\deg(f\bmod P)=d$, so $B_1\le d$. **(ii).** If $f$ is separable then Lemma [Lemma 15](#Lift root seprable){reference-type="ref" reference="Lift root seprable"} combined with assumption (b), applied iteratively with $\alpha=1,2,\ldots,i-1$, implies that each root of $f$ modulo $P$ has a unique lift to a root modulo $P^i$. Combined with the observation above this implies $B_i\le B_1$. If $f$ is inseparable then by Lemma [Lemma 18](#Hensel for inseprable){reference-type="ref" reference="Hensel for inseprable"} and assumption (c) we have $B_i\le\rho_f(P^i)=0\le B_1$ for $i>1$. **(iii).** Finally suppose that $i > d$. If there exists $Q \in \mathbb{F}_q[T]$ such that $P^i \mid f(Q)\neq 0$, then $\deg f(Q) \leq d n + \deg f_d$, while $dn + \deg f_d < i \deg P = \deg P^i$, a contradiction. Hence $B_i=0$ in this case. ◻ We are ready to prove the first part of Theorem 4. *Proof of Theorem 4(i).* From Lemma 31 we get that for all but $O(1)$ primes $P \in \mathbb{F}_q[T]$ such that $\deg P > n + \deg f_d$ (note that conditions (b)-(c) of Lemma 31 exclude only $O(1)$ primes, since the relevant resultants are nonzero) we have $$\alpha_P(n) =\sum_{i\ge 1}\sum_{Q\in M_n\atop{P^i|f(Q)}}1=\sum_{i\ge 1\atop{B_i>0}}B_i\leq d\sum_{i\ge 1\atop{B_i>0}}1 \leq d\beta_P(n) .$$ Now using Lemma [Lemma 21](#P_f(n)){reference-type="ref" reference="P_f(n)"} and Proposition [Proposition 25](#small P){reference-type="ref" reference="small P"} we obtain $$\begin{split} (d-1)n q^n \lesssim \deg \frac{P_f(n)}{R_f(n)} = \sum_{\deg P > n + \deg f_d} \alpha_P(n) \deg P \leq d \sum_{\deg P > n + \deg f_d} \beta_P(n) \deg P \leq d \deg L_f(n), \end{split}$$ hence $\deg L_f(n)\gtrsim\frac{d-1}dnq^n$ as required. ◻ **Remark 32**. There exist polynomials $f$ for which the lower bound given by Theorem 4(i) and the upper bound given by Theorem [Theorem 3](#Upper bound){reference-type="ref" reference="Upper bound"} are equal. Indeed, if $|V_f| = d$ (as in Example [Example 12](#ex:1){reference-type="ref" reference="ex:1"}), then combining Theorem [Theorem 3](#Upper bound){reference-type="ref" reference="Upper bound"} and Theorem 4(i) gives $\deg L_f(n) \sim \frac{d-1}{d}n q^n$ as $n\to\infty$. Hence the lower bound is tight in this case. To prove the second part of Theorem 4 we first need the following **Lemma 33**. *Assume $p \nmid d$ or $f_d \nmid f_{d-1}$. Let $n$ be sufficiently large and $P \in \mathbb{F}_q[T]$ a prime such that $\deg P > \deg f_d + n$. Then in the notation of Lemma 31 we have $B_1(P) \leq d-1$.* *Proof.* Assume by way of contradiction that there exist distinct $Q_1,..., Q_d\in M_n$ such that $P \mid f(Q_1), ... , f(Q_d)$. The assumption $\deg P>\deg f_d+n$ implies that $Q_i\bmod P$ are also distinct and therefore $$f(X) \equiv f_d \prod_{j = 1}^d(X - Q_j) \pmod P.$$ Comparing coefficients at $X^{d-1}$ we obtain $$f_{d-1} \equiv - f_d \sum_{j=1}^d Q_j \pmod P$$ or $$f_{d-1} + f_d \sum_{j=1}^d Q_j \equiv 0 \pmod P.$$ Thus $P \mid f_{d-1} + f_d \sum_{j=1}^d Q_j$ and if $f_{d-1} + f_d \sum_{j=1}^d Q_j \neq 0$ we would have $\deg P \leq n + \deg f_d$ (if $n\ge \deg f_{d-1}$), a contradiction. Now we observe that as $Q_1,..., Q_d$ are monic, if $\deg\sum_{j=1}^{d} Q_j<n$ then $p \mid d$. Therefore for a sufficiently large $n$ if $f_{d-1} + f_d \sum_{j=1}^d Q_j = 0$ then $p \mid d$ and additionally $f_{d}\sum_{j=1}^{d} Q_j = - f_{d-1}$, hence $f_d \mid f_{d-1}$. This contradicts our initial assumption and we have reached a contradiction in all cases. ◻ We are ready to prove the second part of Theorem 4. *Proof of Theorem 4(ii).* From Lemma 33 we get that for $P \in \mathbb{F}_q[T]$ prime such that $\deg P > n + \deg f_d$ we have $B_1(P) \leq d-1$. The rest of the proof is now the same as the proof of (i), with the inequality $\alpha_P(n)\le d\beta_P(n)$ replaced by $\alpha_P(n)\le (d-1)\beta_P(n)$, which has the effect of replacing the constant $\frac{d-1}d$ by 1 in the conclusion.
◻

# The radical of $L_f(n)$ {#sec: rad}

We assume for simplicity of exposition that $f$ is irreducible, although this assumption can be dispensed with in [Theorem 5](#thm:rad){reference-type="ref" reference="thm:rad"} and its proof, which we give in the present section. Recall that we denote $\ell_f(n)=\mathrm{rad}\, L_f(n)$ and observe that $$\deg\ell_f(n)=\sum_{P\atop{\beta_P(n)>0}}\deg P.$$ By the upper and lower bounds established above we have $nq^n\ll \deg L_f(n)\ll nq^n$, and the asymptotic $\deg \ell_f(n) \sim \deg L_f(n)$ is equivalent to the following

**Proposition 34**. *$$\deg L_f(n) - \deg \ell_f(n) = o(n q^n).$$*

*Proof.* Let us write $$\deg\ell_f(n) = \sum_{\deg P > n\atop{\beta_P(n)>0}} \deg P + \sum_{\deg P \leq n\atop{\beta_P(n)>0}} \deg P,$$ $$\deg L_f(n) = \sum_{\deg P > n} \beta_P(n) \cdot \deg P + \sum_{\deg P \leq n} \beta_P(n) \cdot \deg P.$$ Now using ([\[eq: bound small primes\]](#eq: bound small primes){reference-type="ref" reference="eq: bound small primes"}) we obtain $$\sum_{\deg P \leq n\atop{\beta_P(n)>0}} \deg P \leq \sum_{\deg P \leq n + \deg f_d} \beta_P(n) \cdot\deg P = O(q^n)$$ and therefore $$\deg\ell_f(n) = \sum_{\deg P > n\atop{\beta_P(n)>0}} \deg P + O(q^n),$$ $$\deg L_f(n) = \sum_{\deg P > n\atop{\beta_P(n)>0}} \beta_P(n)\deg P + O(q^n).$$ Denote $$S:=\{Q \in M_n: \exists P \text{ prime, } \deg P > n,\, P^2 \mid f(Q)\}.$$ We have $$\begin{split} 0\le \deg L_f(n) - \deg \ell_f(n) &= \sum_{\deg P > n} \beta_P(n) \deg P - \sum_{\deg P > n\atop{\beta_P(n)>0}} \deg P + O(q^n)\\ & \leq \sum_{\deg P > n\atop{\beta_P(n) \geq 2}} \beta_P(n) \deg P +O(q^n)\\ & \leq \sum_{Q \in S} \deg f(Q) + O(q^n)\\ & \ll |S| n + q^n. \end{split}$$ It remains to prove that $|S| = o(q^n)$ but by [@Poo03 Lemma 7.1], the set $$W=\{Q:\deg Q \leq n,\ \exists P \text{ prime, } \deg P > n/2,\, P^2 \mid f(Q)\}$$ is of size $o(q^n)$ (this is the key ingredient of the proof; see also [@Lan15 Proposition 3.3] and [@carmon2021square (4.7)] for refinements of this statement). Since $S \subset W$ we have $|S| = o(q^n)$ and we are done. ◻

As noted above, this concludes the proof of the asymptotic $\deg \ell_f(n)\sim\deg L_f(n)$.

# Asymptotics of $L_f(n)$ for special polynomials {#sec: special asym}

In the present section we assume that $f=\sum_{i=0}^df_iX^i\in{\mathbb{F}}_q[T][X],\,\deg f=d\ge 2$ is irreducible and special in the sense defined earlier. We will show that $\deg L_f(n)\sim c_fnq^n$ as $n\to\infty$, thus proving the claimed asymptotics. As before, we consider the equivalence relation on $M_n$ given by $Q_1\sim Q_2\iff Q_1-Q_2\in V_f$ and the quantity $S_f(n)$ defined by ([\[eq: def Sf\]](#eq: def Sf){reference-type="ref" reference="eq: def Sf"}). By the definition of a special polynomial we may write $$\label{eq: factor fxfy} f(X)-f(Y)=\prod_{i=1}^d(a_iX+b_iY+c_i),\quad a_i,b_i,c_i\in{\mathbb{F}}_q[T].$$ Comparing degrees in $X$ and $Y$ shows that $a_i,b_i\neq 0$. Comparing coefficients at $X^d$ and $Y^d$ in ([\[eq: factor fxfy\]](#eq: factor fxfy){reference-type="ref" reference="eq: factor fxfy"}) we see that $\deg a_i,\deg b_i\le\deg f_d$. Therefore if $Q_1,Q_2\in M_n$ with $n> \deg c_i$, we have $$\label{eq:abcdeg}\deg(a_iQ_1+b_iQ_2+c_i)\le n+\deg f_d.$$ Let $Q_1,Q_2\in M_n$ with $Q_1\not\sim Q_2$ and $P$ a prime with $\deg P>n+\deg f_d$ such that $P\,|\,f(Q_1),f(Q_2)$. Then in particular $P\,|\,f(Q_1)-f(Q_2)$ and by ([\[eq: factor fxfy\]](#eq: factor fxfy){reference-type="ref" reference="eq: factor fxfy"}) we have $P\,|\,a_iQ_1+b_iQ_2+c_i\neq 0$ for some $1\le i\le d$.
By ([\[eq:abcdeg\]](#eq:abcdeg){reference-type="ref" reference="eq:abcdeg"}) and the condition $\deg P>n+\deg f_d$ we must have $a_iQ_1+b_iQ_2+c_i=0$ and therefore by ([\[eq: factor fxfy\]](#eq: factor fxfy){reference-type="ref" reference="eq: factor fxfy"}) we have $f(Q_1)=f(Q_2)$. By combined with this is impossible for $n$ sufficiently large. Hence we have $S_f(n)=0$ for $n$ sufficiently large and by it follows that holds for $f$, concluding the proof of . # Classification of special polynomials {#sec: class special} In the present section we classify the special polynomials () over an arbitrary unique factorization domain (UFD) $R$, establishing . We denote by $p$ the characteristic of $R$ (as $R$ is a domain, $p$ is zero or a prime) and by $K$ its field of fractions. Also if $p > 0$ we will denote by $\mathbb{F}_p$ its prime subfield. Note that as $R$ is a UFD, so are the polynomial rings $R[X], R[X, Y]$. For a polynomial $g\in R[X,Y]$ we denote by $\deg g$ its total degree. Special polynomials $f\in R[X]$ were defined in . **Lemma 35**. *Let $f\in R[X]$ be a special polynomial of degree $d$. Then we may write $$\label{eq:fxfy}f(X)-f(Y)=\prod_{i=1}^d (a_i X + b_i Y + c_i),\quad a_i\neq 0,b_i\neq 0,c_i \in R.$$* *Proof.* This is immediate from the definition, except for the condition $a_i,b_i\neq 0$ which follows by comparing degrees in $X$ and $Y$ in ([\[eq:fxfy\]](#eq:fxfy){reference-type="ref" reference="eq:fxfy"}). ◻ **Lemma 36**. *Let $f=\sum_{i=0}^d f_iX^i\in R[X]$ be a special polynomial of degree $d$.* 1. *If $p=0$ then $$f(X) - f(Y) = f_d\prod_{j=1}^{d} \left(X - \zeta^j Y - b^{(j)}\right),$$ where $\zeta\in R$ is a primitive $d$-th root of unity and $b^{(j)}\in K$.* 2. *If $p>0$ write $d=p^lm$ with $(m,p)=1$. Then $$f(X) - f(Y) = f_d\prod_{i=1}^{p^l} \prod_{j=1}^{m} \left(X - \zeta^j Y - b_i^{(j)}\right),$$ where $\zeta\in R$ is a primitive $m$-th root of unity and $b_i^{(j)}\in K$.* *Proof.* For brevity we only treat the case $p>0$, the case $p=0$ being similar to the case $p>0,\,l=0$. Write $f(X) - f(Y) = \prod_{i=1}^d (a_i X +b_i Y + c_i)$ as in . Note that $$\prod_{i=1}^d (a_i X + b_i Y + c_i) = \prod_{i=1}^d (a_i X + b_i Y) + A[X, Y],$$ where $A[X,Y] \in R[X,Y],\deg A < d$. Since $\prod_{i=1}^d (a_i X + b_i Y)$ is homogeneous of degree $d$, we have $$f_d (X^d - Y^d) = \prod_{i=1}^d (a_i X + b_i Y)=\prod_{i=1}^d a_i\prod_{i=1}^d(X-\mu_i Y),$$ where $\mu_i=-b_i/a_i\in K$. Plugging in $Y = 1$ and noting that $\prod_{i=1}^d a_i=f_d$ (by comparing coefficients at $X^d$) we obtain $$X^d - 1 = \prod_{i=1}^d (X - \mu_i).$$ In particular we see that all the $d$-th roots of unity in the algebraic closure of $K$ lie in $K$, and since $R$ is integrally closed (because it is a UFD) they lie in $R$. Since $p=\mathrm{char}(K)>0$ these are actually the $m$-th roots of unity, where $d=mp^l,\,(m,p)=1$. ◻ We are ready to prove the forward direction of . **Proposition 37**. *Let $f\in R[X]$ be special. Then it has the form stated in .* *Proof.* For brevity we only treat the case $p>0$, the case $p=0$ being similar to the case $p>0,\,l=0$ we treat below (using part (i) of Lemma [Lemma 36](#lemma for coefficients of special polynomials ){reference-type="ref" reference="lemma for coefficients of special polynomials "} instead of part (ii) and the fact that $V_f=\{0\}$ (see ). Write $d = \deg f = p^lm, (m,p) = 1$. Denote by $\zeta$ a primitive $m$-th root of unity. 
From (ii) we have $\zeta\in R$ and $$\label{eq:fxfy prod} f(X) - f(Y) = f_d\prod_{i=1}^{p^l} \prod_{j=1}^{m} \left(X - \zeta^j Y - b_i^{(j)}\right),\quad b_i^{(j)}\in K.$$ Next consider the shifted polynomial $g(X)=f(X+w)\in K[X]$ with $$w=\left\{\begin{array}{ll}\frac{b_1^{(m-1)}}{1-\zeta^{-1}},&m>1,\\0,&m=1.\end{array}\right.$$ Note that $g$ is also special but over the ring $K$ instead of $R$ (this is immediate from the definition) and also by the choice of $w$ we have $$g(\zeta X)=g(X),$$ because $$g(X) - g(\zeta X) = f(X + w) - f(\zeta X + w) = f_d\prod_{i=1}^{p^l} \prod_{j=1}^{m} \left(X + w - \zeta^j (\zeta X + w) - b_i^{(j)}\right)$$ and if $m>1$ then $X+w-\zeta^{m-1}(\zeta X+w)-b_1^{(m-1)}=0$ (using $\zeta^m=1$). If we can show that $g$ has the form asserted by except the coefficients are in $K$ instead of $R$, then $f(X)=g(X-w)$ would have the form asserted by the theorem. Replacing $f$ with $g$, we see that it is enough to prove the assertion under the additional assumptions that $R=K$ is a field and $f(X)=f(\zeta X)$. Thus from now on we assume that $R=K$ is a field and $f(X) = f(\zeta X) = f(\zeta^2 X) = ... = f(\zeta^{m-1} X)$. Next let us prove that for all $j$ the multisets $$S_j := \left\{b_{i}^{(j)} : 1 \leq i\leq p^l\right\}$$ are the same for all $1\le j\le m$. To this end let us identify the indices $1\le j\le m$ with their corresponding residues in ${\mathbb{Z}}/m$ and observe that by ([\[eq:fxfy prod\]](#eq:fxfy prod){reference-type="ref" reference="eq:fxfy prod"}) we have $$\begin{gathered} \label{eq: prod multisets}\prod_{j\bmod m}\prod_{b\in S_j}\left(X-\zeta^jY-b\right)=f(X)-f(Y)=f(X)-f(\zeta^kY)=\prod_{j\bmod m}\prod_{b\in S_j}\left(X-\zeta^{j+k}Y-b\right) \\=\prod_{j\bmod m}\prod_{b\in S_{j-k}}\left(X-\zeta^j Y-b\right)\end{gathered}$$ for each $k\in{\mathbb{Z}}/m$. From the first equality in ([\[eq: prod multisets\]](#eq: prod multisets){reference-type="ref" reference="eq: prod multisets"}) we see that the number of times an element $b$ appears in the multiset $S_j$ equals the multiplicity of the factor $X-\zeta^jY-b$ in the factorization of $f(X)-f(Y)$ (recall that $R[X,Y]$ is a UFD) and from the last equality it also equals the number of times $b$ appears in $S_{j-k}$. Hence the multisets $S_1,\ldots,S_m$ are all equal and we may rewrite ([\[eq:fxfy prod\]](#eq:fxfy prod){reference-type="ref" reference="eq:fxfy prod"}) as $$\label{eq: fxfy prod 2} \begin{split} f(X) - f(Y) &= f_d\prod_{i=1}^{p^l} \prod_{j\bmod m} (X - \zeta^j Y - b_i)=\prod_{b\in S_1}\prod_{j\bmod m}(X-\zeta^jY-b), \end{split}$$ where $b_i=b_i^{(1)}$. Let us prove the following properties of the multiset $S_1=\{b_1,\ldots,b_m\}$: 1. The underlying set of $S_1$ is $V_f:=\{b\in R:f(X+b)=f(X)\}$. It is an ${\mathbb{F}}_p$-linear subspace of $R$, with $|V_f|=p^v$ for some $0\le v\le l$. 2. Multiplication by $\zeta$ permutes $S_1$ as a multiset. 3. $V_f$ is furthermore an ${\mathbb{F}}_p(\zeta)$-linear subspace of $R$. 4. All elements of $S_1$ have the same multiplicity $p^{l-v}$. For (1) observe that by ([\[eq: fxfy prod 2\]](#eq: fxfy prod 2){reference-type="ref" reference="eq: fxfy prod 2"}) we have $$b\in S_1\iff X-Y-b\,\mid\,f(X)-f(Y)\iff f(X+b)=f(X),$$ hence $V_f$ is the underlying set of $S_1$. Arguing as in the proof of one sees that $V_f$ is an ${\mathbb{F}}_p$-linear subspace of $R$. Since $|V_f|\le |S_1|=p^l$ we see that $|V_f|=p^v$ with $0\le v\le l$. 
Replacing $Y$ with $\zeta Y$ in ([\[eq: fxfy prod 2\]](#eq: fxfy prod 2){reference-type="ref" reference="eq: fxfy prod 2"}) and using $f(Y)=f(\zeta Y)$ gives (2). The properties (1),(2) imply (3). To prove (4) observe that by ([\[eq: fxfy prod 2\]](#eq: fxfy prod 2){reference-type="ref" reference="eq: fxfy prod 2"}) the multiplicity of $b\in V_f$ in $S_1$ equals the exponent of the factor $X-Y-b$ in the factorization of $f(X)-f(Y)$, but since $f(Y+b)=f(Y)$ it also equals the exponent of $X-Y$ in the factorization of $f(X)-f(Y)$ and is therefore independent of $b$. We have established properties (1-4). Now assuming WLOG that $V_f=\{b_1,\ldots,b_{p^v}\}$ we can rewrite ([\[eq: fxfy prod 2\]](#eq: fxfy prod 2){reference-type="ref" reference="eq: fxfy prod 2"}) as $$f(X) - f(Y) =f_d\prod_{i=1}^{p^v} \prod_{j=1}^{m} (X - \zeta^j Y - b_i)^{p^{l-v}}=f_d\prod_{i=1}^{p^v}\left((X-b_i)^{mp^{l-v}}-Y^{mp^{l-v}}\right).$$ Setting $Y=0$ we see that $f$ has the form asserted in the theorem, with $A=0, C=f(0)$. This completes the proof. ◻

Finally we prove the converse direction of the classification.

**Proposition 38**.

(i) *Assume that $p=0$, that $K$ contains a primitive $d$-th root of unity $\zeta$ and $$f=f_d(X+A)^d+C\in R[X]$$ with $A,C\in K$. Then $f$ is special.*

(ii) *Assume that $p>0$, $d=p^lm,\,(m,p)=1$ and $K$ contains a primitive $m$-th root of unity. Suppose $$f(X)=f_d\prod_{i=1}^{p^v}(X-b_i+A)^{mp^{l-v}}+C\in R[X],$$ where $A,C\in K$ and $V=\{b_1,\ldots,b_{p^v}\},\,0\le v\le l$ ($b_i$ distinct) is an ${\mathbb{F}}_p(\zeta)$-linear subspace of $K$. Then $f$ is special.*

*Proof.* **(i).** Denote by $\zeta$ a primitive $d$-th root of unity. Then we have the factorization $$f(X)-f(Y)=f_d\prod_{i=0}^{d-1}\left(X+A-\zeta^i(Y+A)\right)$$ and $f$ is special.

**(ii).** Since $R$ is a UFD, by Gauss's lemma $f(X)-f(Y)$ factors into linear polynomials over $R$ iff it does over $K$. Hence we assume WLOG that $R=K$ is a field and $A,C,b_i\in R$. It is then immediate from the definition that $f(X)$ is special iff $\frac 1{f_d}(f(X-A)-C)$ is, hence we assume WLOG that $A=C=0,\,f_d=1$, i.e. $f=g^{mp^{l-v}}$ where $$g=\prod_{i=1}^{p^v}(X-b_i).$$ We now show that the following factorization holds: $$\label{eq: f is special}f(X)-f(Y)=\prod_{i=1}^{p^v}\prod_{j=1}^m(X-\zeta^jY+b_i)^{p^{l-v}}.$$ Since the Frobenius map is a ring homomorphism this is equivalent to $$\label{eq: g is special}g(X)^m-g(Y)^m=\prod_{i=1}^{p^v}\prod_{j=1}^m(X-\zeta^jY+b_i).$$ Note that since $V$ is an ${\mathbb{F}}_p(\zeta)$-linear subspace, the map $x\mapsto\zeta^{-j}(x+b_i)$ permutes $V$ and we have $$g(\zeta^{-j}(X+b_i))=\zeta^{-jp^v}g(X)$$ and therefore $g(\zeta^{-j}(X+b_i))^m=g(X)^m$. It follows that $X-\zeta^jY+b_i\,\mid\,g(X)^m-g(Y)^m$ and therefore the RHS of ([\[eq: g is special\]](#eq: g is special){reference-type="ref" reference="eq: g is special"}) divides the LHS. Since both sides of ([\[eq: g is special\]](#eq: g is special){reference-type="ref" reference="eq: g is special"}) have the same degree $mp^v$ and are monic in $X$, we must have an equality. This establishes ([\[eq: g is special\]](#eq: g is special){reference-type="ref" reference="eq: g is special"}) and therefore ([\[eq: f is special\]](#eq: f is special){reference-type="ref" reference="eq: f is special"}), which completes the proof. ◻

**Remark 39**. [Proposition 37](#prop: classification forward){reference-type="ref" reference="prop: classification forward"} and its proof work under the weaker assumption that $R$ is an integrally closed domain (not necessarily a UFD).
[Proposition 38](#prop: classification converse){reference-type="ref" reference="prop: classification converse"} does require the UFD assumption, but it can be replaced with $R$ being only integrally closed if one assumes that $f_d=1$, because Gauss's lemma works for monic polynomials over any integrally closed domain.
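For concreteness, two standard examples illustrate the two cases of the classification. In characteristic $0$ (or, in positive characteristic, whenever $p\nmid d$), take $f(X)=X^d$ over a field $K$ containing a primitive $d$-th root of unity $\zeta$. Then $$f(X)-f(Y)=X^d-Y^d=\prod_{j=1}^{d}\left(X-\zeta^jY\right),$$ so $f$ is special with $A=C=0$ and $V_f=\{0\}$. For a genuinely positive-characteristic example, take $f(X)=X^p-X$ over ${\mathbb{F}}_p$, so that $d=p$, $m=1$ and $l=1$. Here $$f(X)-f(Y)=(X-Y)^p-(X-Y)=\prod_{b\in{\mathbb{F}}_p}\left(X-Y-b\right),$$ so $f$ is special with $V_f={\mathbb{F}}_p$, i.e. $v=l=1$, matching the shape $f(X)=\prod_{b\in{\mathbb{F}}_p}(X-b)$ with $A=C=0$ and $f_d=1$.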
{ "id": "2310.04164", "title": "The Least Common Multiple of Polynomial Values over Function Fields", "authors": "Alexei Entin and Sean Landsberg", "categories": "math.NT", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | In this paper we introduce the notion of quasi-isometry between two almost contact metric manifolds of same dimension. We also impose this idea to study quasi-isometry between $N(k)-$ contact metric manifolds and Sasakian manifolds. Moving further, we investigate some curvature properties of two quasi-isometrically embedded almost contact metric manifolds, $N(k)-$contact metric manifolds and Sasakian manifolds. Next, an illustrative example of a quasi-isometry between two Sasakian structures is constructed. Finally, we establish a relationship between the scalar curvature and the quasi-isometric constants for two quasi-isometric Riemannian manifolds. address: - | Department of Mathematics\ Jadavpur University\ Kolkata-700032, India. - | Department of Mathematics\ Jadavpur University\ Kolkata-700032, India. - | Department of Mathematics\ Jadavpur University\ Kolkata-700032, India author: - "Paritosh Ghosh$^*$[^1]" - Dipen Ganguly - Arindam Bhattacharyya title: Quasi-isometry between two almost contact metric manifolds --- # **Introduction** The notion quasi-isometry was first introduced by the American mathematician G.D. Mostow [@Mos] in 1973 and later it was Gromov [@Gr1] who studied quasi-isometry to a much further extent in the context of geometric group theory. But Mostow used the term pseudo-isometry and this notion was little bit different from the one that we will be discussing here (See [@BH]). Let $(X,d_x)$ and $(Y,d_y)$ be two metric spaces and $f:(X,d_x)\longrightarrow(Y,d_y)$ be a map. Then the map $f$ is said to be an $(L,C)$ quasi-isometric embedding, if there exist constants $L\geq 1$ and $C\geq 0$, such that,\ $$\frac{1}{L}d_x(p,q)-C\leq d_y(f(p),f(q))\leq Ld_x(p,q)+C, \quad \forall p,q\in(X,d_x).$$ Moreover, if the quasi-isometric embedding $f$ has a quasi dense image, i.e if there is a constant $D\geq 0$ such that $\forall y\in Y$, $\exists x\in X$ for which $d_y(f(x),y)\leq D$, then the map $f$ is called a quasi isometry and we call that the two metric spaces $(X,d_x)$ and $(Y,d_y)$ are quasi-isometric. For example, it can shown that the grid $\mathbb{Z}^2$ with the taxicab metric is quasi-isometric to the plane $\mathbb{R}^2$ with the usual Euclidean metric via the natural inclusion map as a Quasi-isometry [@CM]. Also it is easy to see that any metric space of finite diameter is quasi-isometric to a point. In that manner we can say that, all metric spaces of finite diameter are same in the sense of quasi-isometry. We say that a map $f:X\longrightarrow Y$ has finite distance from a map $g:X\longrightarrow Y$ if there is a constant $M\geq 0$ such that, for all $x\in X$ we have $d_x(g(x),f(x))\leq M$ and $f\sim g$ if $f$ is at finite distance from $g$. Then it is easy to check that $'\sim '$ is an equivalence relation. We denote $QI(X)$ be the set of all quasi-isometries from $X\longrightarrow X$, and let $QI(X)/\sim$ be the set of all quasi isometries of X modulo finite distance. Moreover the composition $([f],[g])\mapsto[f\circ g]$ on the set of all equivalence classes $QI(X)$ forms a group, called the quasi-isometry group of X. It is a major problem in geometric group theory, to find the quasi-isometry groups of spaces. P. Sankaran in [@PS] has calculated the quasi-isometry group of the real line. In geometric group theory the main idea is to see how groups can be viewed as geometric objects. 
To be more precise on which geometric object can act as a group in a 'nice way' so that the interplay between the group and the space reveals the algebraic properties of the group. In this direction one fundamental result is the $\check{S}$varc-Milnor lemma, which says that if a group G acts properly and co-compactly by isometries on a non-empty proper geodesic metric space $(X,d)$, then G is finitely generated and for all $x\in X$ the map $$G\longrightarrow X$$ $$g\mapsto g.x$$ is a quasi-isometry (a metric space is proper if all balls of finite radius are compact in the metric topology and an action of a group G on a topological space X is co-compact if the quotient space $X/G$ is compact with respect to the quotient topology). One of the central theorems in geometric group theory is Gromov's polynomial growth theorem (See [@Gr1]), which says that finitely generated groups have polynomial growth if and only if they are virtually nilpotent (i.e if the group has a subgroup of finite index that is nilpotent). Then using this theorem an interesting result can be proved that, if a group G is quasi-isometric to $\mathbb{Z}^n$ then G has a subgroup of finite index which is isomorphic to $\mathbb{Z}^n$. Another use of quasi-isometry is in classification of lattices by Schwartz [@Sch]. If G be the isometry group of a symmetric space other than the hyperbolic plane, then he has proved that any quasi-isometry between non-uniform lattices in G is equivalent to (the restriction of) a group element of G which commensurate one lattice to the other. From this result it can be proved that any two non-uniform lattice in G are quasi-isometric if and only if they are commensurable(we call two groups $G_1$ and $G_2$ commensurable, if there are subgroups $H_1$ of $G_1$ and $H_2$ of $G_2$ such that the subgroups $H_1$ and $H_2$ are isomorphic to each other). Also it has been proved that, any finitely generated group which is quasi-isometric to a non-uniform lattice in G, is actually a finite extension of a non-uniform lattice in G. In this paper we have introduced the concept of quasi-isometry for almost contact metric manifolds, for $N(k)-$contact metric manifolds and for Sasakian manifolds of same dimensions and established some inequalities between two quasi-isometric metric manifolds for various cases like when the ambient manifold is conformally flat, concircularly flat, etc. We have given an example of a quasi-isometry between two Sasakian manifolds. And finally we find a relationship between the scalar curvature and the quasi-isometric constants for two Riemannian manifolds to be quasi-isometric. # **Preliminaries** A *contact manifold* $M^{2n+1}$ is a $C^\infty$ manifold together with a global 1-form $\eta$ such that $\eta \wedge (d\eta)^n \neq 0$. More specifically, $\eta \wedge (d\eta)^n$ is a volume element on M, which is non-zero everywhere on $M^{2n+1}$ so that the manifold M is orientable. Let $M^{2n+1}$ be a $(2n+1)$ dimensional manifold and let there exist a $(1,1)$ tensor field $\phi$, a vector field $\xi$ and a global 1-form $\eta$ on M such that $$\begin{aligned} \phi ^2 &=& -I + \eta \otimes \xi,\\ \eta (\xi ) &=& 1,\end{aligned}$$ then we say that M has an *almost contact structure* $(\phi,\xi,\eta)$. And the manifold M equipped with this almost contact structure $(\phi,\xi,\eta)$ is called an *almost contact manifold* (See [@Bla]). Here the vector field $\xi$ is called the *characteristic vector field* or *Reeb vector field*. **Result 1**. 
*[@Bla] For an almost contact structure $(\phi,\xi,\eta)$ the following relations hold: $$\begin{aligned} \phi\circ\xi &=& 0, \\ \eta\circ\phi &=& 0, \\ \mathrm{Rank}\,\phi &=& 2n.\end{aligned}$$*

**Theorem 1**. *[@Bla] Every almost contact structure $(\phi,\xi,\eta)$ on a manifold $M^{2n+1}$ admits a Riemannian metric $g$ satisfying: $$\begin{aligned} \eta(X) &=& g(X,\xi), \\ g(\phi X,\phi Y) &=& g(X,Y)-\eta(X)\eta(Y). \end{aligned}$$*

The metric $g$ is called *compatible* with the almost contact structure $(\phi,\xi,\eta)$ and the manifold $M^{2n+1}$ with the almost contact metric structure $(\phi,\xi,\eta,g)$ is called an *almost contact metric manifold*. In 1988, S. Tanno [@ST] introduced the notion of $k-$nullity distribution on a contact metric manifold which is defined as follows: The $k-$nullity distribution of a Riemannian manifold $(M,g)$ for a real number $k$ is a distribution, $$N(k): p\longrightarrow N_p(k)=[Z\in T_pM: R(X, Y)Z=k\{g(Y, Z)X- g(X, Z)Y\}],$$ for any $X, Y\in T_pM$, where $R$ is the *Riemannian curvature tensor* and $T_pM$ denotes the tangent vector space of $M^{2n+1}$ at the point $p\in M$. If the characteristic vector field of a contact metric manifold belongs to the $k-$nullity distribution, then the relation $$R(X, Y)\xi=k[\eta(Y)X-\eta(X)Y]$$ holds. A contact metric manifold with $\xi \in N(k)$ is called a *$N(k)-$contact metric manifold*.

**Result 2**. *[@Bla] Let $M^{2n+1}(\phi,\xi,\eta, g) (n\geq2)$ be a $N(k)-$contact metric manifold. Then the following relations hold: $$\begin{aligned} Q\xi &=& (2nk)\xi, \\ S(X, \xi) &=& 2nk\eta(X), \\ \eta(R(X,Y)Z) &=& k[\eta(X)g(Y,Z)-\eta(Y)g(X,Z)],\end{aligned}$$ where, $R$ is the Riemannian curvature tensor, $S$ is the Ricci tensor of type $(0,2)$ and $Q$ is the Ricci operator or the symmetric endomorphism of the tangent space $T_pM$ at the point $p\in M$ and is given by $S(X, Y)=g(QX, Y)$.*

Next we recall a very important class of manifolds, namely Sasakian manifolds, introduced by the Japanese mathematician S. Sasaki [@Sa] in the year 1960. Later, the works of Boyer, Galicki [@BG] and other mathematicians have made substantial progress in the study of Sasakian manifolds. In mathematical physics Sasakian manifolds and more specifically Sasakian space forms are widely used. Sasakian manifolds or normal contact metric manifolds are an odd-dimensional counterpart of the Kähler manifolds in complex geometry. An almost contact manifold $M^{2n+1}$ together with the almost contact structure $(\phi,\xi,\eta)$ is said to be a *Sasakian manifold* or a *normal contact metric manifold* if $$[\phi,\phi](X,Y)+2d\eta(X,Y)\xi=0,$$ where, $[\phi,\phi]$ is the *Nijenhuis torsion tensor field* of $\phi$ and is given by, $$[\phi,\phi](X,Y)=\phi^2[X,Y]+[\phi X,\phi Y]-\phi([\phi X,Y])-\phi([X,\phi Y]).$$

**Theorem 2**. *An almost contact metric manifold $M^{2n+1}$ with the structure $(\phi,\xi,\eta,g)$ is Sasakian if and only if $$(\nabla_{X}\phi)Y=g(X,Y)\xi -\eta(Y)X,$$ where, $\nabla$ is the Levi-Civita connection on $M^{2n+1}$ (See [@Bla]).*

**Result 3**.
*[@Bla] Let $M^{2n+1}$ be a Sasakian manifold with the structure $(\phi,\xi,\eta,g)$, then the following relations are true: $$\begin{aligned} \nabla_X\xi &=& -\phi X, \\ R(X,Y)\xi &=& \eta(Y)X-\eta(X)Y, \\ R(X,\xi)Y &=& \eta(Y)X-g(X,Y)\xi, \\ \eta(R(X,Y)Z) &=& \eta(X)g(Y,Z)-\eta(Y)g(X,Z),\end{aligned}$$ where, $R$ is the Riemannian curvature tensor of $M^{2n+1}$ and is given by, $$R(X,Y)Z=\nabla_X\nabla_Y Z-\nabla_Y\nabla_X Z-\nabla_{[X,Y]}Z,$$ for all vector fields X,Y,Z on M.* The theorems and results that are stated above will be used frequently in proofs of the next chapters. For a detailed discussion and proofs of these we refer to the text [@Bla]. # **Quasi-isometry between two almost contact metric manifolds** Now we are in a position to define the concept of quasi-isometry between two manifolds, more specifically between two same dimensional almost contact metric manifolds. We introduce the definition as follows: **Definition 3**. Let $M_1$ and $M_2$ be two almost contact metric manifolds of same dimension $(2n+1)$ with the corresponding almost contact metric structures $(\phi_1,\xi_1,\eta_1,g_1)$ and $(\phi_2,\xi_2,\eta_2,g_2)$ respectively and let $\chi(M_1)$ and $\chi(M_2)$ be the set of all vector fields associated to $M_1$ and $M_2$ respectively. Then differential of a function $f_*:\chi(M_1)\longrightarrow\chi(M_2)$ is said to be a quasi-isometric embedding between the two almost contact metric manifolds $M_1$ and $M_2$ if there exist constants $A\geq1, B\geq0$, such that $\forall X,Y\in\chi(M_1)$, $$\frac{1}{A}g_1(X,Y)-B \leq g_2(f_*(X),f_*(Y))\leq Ag_1(X,Y)+B.$$ Moreover, if for all $Z\in\chi(M_2)$, there exists $X\in\chi(M_1)$ and a constant $D\geq0$, such that, $$g_2(Z,f_*(X))\leq D,$$ then $f_*$ is called quasi-isometry between the manifolds $M_1$ and $M_2$. The two almost contact metric manifolds $M_1$ and $M_2$ are called quasi-isometric if there exists such a quasi-isometry $f_*$ between $M_1$ and $M_2$. The definition given in $(1.1)$ is based on usual metric of the metric space whereas, in $(3.1)$ we have considered the Riemannian metric $g$ for the inequalities, which is more generalized form than the usual metric $d$. As in $(3.1)$, for all  $X, Y \in \chi(M_1)$, we have, $$\qquad \frac{1}{A}g_1(X,Y)-B\leq g_2(f_*(X),f_*(Y))\leq Ag_1(X,Y)+B.$$ For $Y=\xi_1$, we get using $(2.6)$, $$\begin{aligned} \frac{1}{A}\eta_1(X)-B\leq &g_2(f_*(X), f_*(\xi_1))&\leq A\eta_1(X)+B.\end{aligned}$$ If the function $f_*$ preserves the structure vector field between the two manifolds $M_1$ and $M_2$, that is, $f_*(\xi_1)=\xi_2$,\ then,   $g_2(f_*(X), f_*(\xi_1))=g_2(f_*(X), \xi_2)=\eta_2(f_*(X))$, so that, $$\frac{1}{A}\eta_1(X)-B\leq \eta_2(f_*(X))\leq A\eta_1(X)+B, \quad \forall X\in \chi(M_1).$$ Since the tensor field $\phi$ is anti-symmetric with respect to the Riemannian metric $g$, that is, $$g(\phi X, Y) = -g(X, \phi Y), \\$$ we have, $$\quad g_1(\phi_1 X, X) = 0.$$ So, replacing $X$ by $\phi_1X$, we get from $(3.1)$, $\frac{1}{A}g_1(\phi_1X,Y)-B\leq g_2(f_*(\phi_1X),f_*(Y))\leq Ag_1(\phi_1X,Y)+B$. Replacing $Y$ by $X$, $$-B\leq g_2(f_*(\phi_1X), f_*(X))\leq B, \quad \forall X\in \chi(M_1).$$ Now replacing $X$ by $\phi_1X$ and $Y$ by $\phi_1Y$, we get from $(3.1)$, $\frac{1}{A}g_1(\phi_1X,\phi_1Y)-B\leq g_2(f_*(\phi_1X),f_*(\phi_1Y))\leq Ag_1(\phi_1X,\phi_1Y)+B$. 
Taking the left inequality, $$\frac{1}{A}g_1(\phi_1X,\phi_1Y)-B \leq g_2(f_*(\phi_1X),f_*(\phi_1Y)),$$ which implies, $$\frac{1}{A}g_1(X,Y)-B\leq g_2(f_*(\phi_1X),f_*(\phi_1Y))+\frac{1}{A}\eta_1(X)\eta_1(Y).\\$$ Similarly, taking the right inequality of $(3.1)$, we get, $$g_2(f_*(\phi_1X),f_*(\phi_1Y))+A\eta_1(X)\eta_1(Y)\leq Ag_1(X,Y)+B.$$ Since $A\geq1$, thus we can write, $$A\geq \frac{1}{A}.$$ Using $(3.7)$, from $(3.5)$ and $(3.6)$, we get, $\forall X, Y\in \chi(M_1)$, $$\frac{1}{A}g_1(X,Y)-B\leq g_2(f_*(\phi_1X),f_*(\phi_1Y))+\frac{1}{A}\eta_1(X)\eta_1(Y)\leq Ag_1(X, Y)+B.$$ Replacing $X$ by $\phi_1X$ and $Y$ by $\phi_1Y$, this implies, $$\frac{1}{A}g_1(X,Y)-B\leq g_2(f_*({\phi_1}^2X),f_*({\phi_1}^2Y))+\frac{1}{A}\eta_1(X)\eta_1(Y)\leq Ag_1(X, Y)+B.$$ Using the linearity of the differential $f_*$, and using $(2.1)$, and also if $f_*$ preserves the structure vector field between the two manifolds, a simple calculation leads to, $$\begin{aligned} % \nonumber % Remove numbering (before each equation) \frac{1}{A}g_1(X,Y)-B \leq g_2(f_*(X),f_*(Y))-\eta_1(X)g_2(\xi_2,f_*(Y))\\ -\eta_1(Y)g_2(f_*(X),\xi_2)+\eta_1(X)\eta_1(Y)(g_2(\xi_2,\xi_2)+\frac{1}{A})\leq Ag_1(X, Y)+B,\end{aligned}$$ which implies, $$\begin{aligned} % \nonumber % Remove numbering (before each equation) \frac{1}{A}g_1(X,Y)-B \leq g_2(f_*(X),f_*(Y))-\eta_1(X)\eta_2(f_*(Y)) \nonumber \\ -\eta_1(Y)\eta_2(f_*(X))+\eta_1(X)\eta_1(Y)(1+\frac{1}{A})\leq Ag_1(X, Y)+B.\end{aligned}$$ So collecting all these results, we can state that: **Theorem 4**. *Let $M_1(\phi_1,\xi_1,\eta_1,g_1)$ and $M_2(\phi_2,\xi_2,\eta_2,g_2)$ be two almost contact metric manifolds of same dimension $(2n+1),(n\geq1)$ and let $f_*$ be a quasi-isometric embedding between them. Also consider that $f_*$ preserves the structure vector field between the two manifolds. Then, $\forall X, Y\in\chi(M_1)$, the following relations hold;* 1. *$\frac{1}{A}\eta_1(X)-B \leq \eta_2(f_*(X)) \leq A\eta_1(X)+B,$* 2. *$-B \leq g_2(f_*(\phi_1X), f_*(X)) \leq B,$* 3. *$\frac{1}{A}g_1(X,Y)-B \leq g_2(f_*(\phi_1X),f_*(\phi_1Y))+\frac{1}{A}\eta_1(X)\eta_1(Y) \leq Ag_1(X, Y)+B,$* 4. *$\frac{1}{A}g_1(X,Y)-B \leq g_2(f_*(X),f_*(Y))-\eta_1(X)\eta_2(f_*(Y))-\eta_1(Y)\eta_2(f_*(X))+\eta_1(X)\eta_1(Y)(1+\frac{1}{A})\leq Ag_1(X, Y)+B.$* # **Quasi-isometry between two $N(k)-$contact metric manifolds** In this section we deal with the quasi-isometry between two $N(k)-$ contact metric manifolds in a similar way and establish some interesting results. Recall that if a transformation does not change the angle between the tangent vectors of a manifold, it is called a conformal transformation. The Weyl conformal curvature tensor C of a Riemannian manifold $(M^{2n+1},g)$, $(n\geq1)$, is an invariant under any conformal transformation of the metric $g$ and is defined by $$\begin{aligned} % \nonumber % Remove numbering (before each equation) &C(X,Y)Z=R(X,Y)Z-\frac{1}{(2n-1)}[S(Y,Z)X-S(X,Z)Y+g(Y,Z)QX\nonumber\\ &-g(X,Z)QY]+\frac{r}{2n(2n-1)}[g(Y,Z)X-g(X,Z)Y]\end{aligned}$$ where, $R$ is *Riemannian curvature tensor*, $S$ is the *Ricci tensor of type $(0,2)$*, $Q$ is the *Ricci operator* or the *symmetric endomorphism of the tangent space $T_pM$ at the point $p\in M$* and is given by $S(X,Y)=g(QX,Y)$, and $r$ is the *scalar curvature* of the manifold $M$. Next, let the manifold $M_1$ be *conformally flat* i.e; $C_1(X,Y)Z=0$ for all $X,Y,Z\in\chi(M_1)$. 
Then from the equation $(4.1)$ we get, $$\begin{aligned} R_1(X,Y)Z&=\frac{1}{(2n-1)}[S_1(Y,Z)X-S_1(X,Z)Y+g_1(Y,Z)Q_1X\nonumber\\ &-g_1(X,Z)Q_1Y]-\frac{r}{2n(2n-1)}[g_1(Y,Z)X-g_1(X,Z)Y].\end{aligned}$$ Putting $Z=\xi_1$ and using $(2.8)$, $(2.9)$ and the relation $S_1(X, \xi_1)=2nk\eta_1(X)$, we get after some calculations, $$R_1(X,Y)\xi_1=\frac{2nk}{r-2nk}[\eta_1(Y)Q_1X-\eta_1(X)Q_1Y].$$ And for $Y=\xi_1$, $$Q_1X=(\frac{r-2nk}{2n})X+[(2n+1)k-\frac{r}{2n}]\eta_1(X)\xi_1.$$ Now as $R_1(X,Y)Z\in\chi(M_1)$; from the left side inequality of $(3.1)$ we get, for all $X,Y,Z,W\in\chi(M_1)$,\ $$\frac{1}{A}g_1(R_1(X,Y)Z,W)-B\leq g_2(f_*(R_1(X,Y)Z),f_*(W)).$$ Then putting the value of $R_1(X,Y)Z$ from equation $(4.2)$, the above inequality becomes, $$\begin{aligned} &\frac{1}{A}[\frac{1}{(2n-1)}\{S_1(Y,Z)g_1(X,W)-S_1(X,Z)g_1(Y,W)+\nonumber\\ &g_1(Y,Z)g_1(Q_1X,W)-g_1(X,Z)g_1(Q_1Y,W)\}-\frac{r}{2n(2n-1)}\{g_1(Y,Z)\nonumber\\ &g_1(X,W)-g_1(X,Z)g_1(Y,W)\}]-B \leq g_2(f_*(R_1(X,Y)Z),f_*(W)).\end{aligned}$$ Now using the relations $S_1(Y,\xi)=2nk\eta_1(Y)$, $g_1(Q_1X,Y)=S_1(X,Y)$ and equation $(2.6)$ and putting $Z=\xi_1$ in $(4.6)$, we get, $$\begin{aligned} &\frac{1}{A}[\frac{1}{(2n-1)}\{2nk\eta_1(Y)g_1(X,W)-2nk\eta_1(X)g_1(Y,W)\nonumber\\ &+\eta_1(Y)S_1(X,W)-\eta_1(X),S_1(Y,W)\}-\frac{r}{2n(2n-1)}\{\eta_1(Y)\nonumber\\ &g_1(X,W)-\eta_1(X)g_1(Y,W)\}]-B\leq g_2(f_*(R_1(X,Y)\xi_1),f_*(W)).\end{aligned}$$ Simplifying after some steps and assuming $[\frac{1}{(2n-1)}(2nk-\frac{r}{2n})]=l_1$ and $\frac{1}{(2n-1)}=l_2$ we get, $$\begin{aligned} &\frac{1}{A}[l_1\{\eta_1(Y)g_1(X,W)-\eta_1(X)g_1(Y,W)\}+l_2\{\eta_1(Y)S_1(X,W)\nonumber\\ &-\eta_1(X)S_1(Y,W)\}]-B\leq g_2(f_*(R_1(X,Y)\xi_1),f_*(W)).\end{aligned}$$ Furthermore, using equation $(4.2)$ and setting $Y=Z=\xi_1$, it can be easily shown that, the conformally flat $N(k)-$contact metric manifold $M_1$ becomes $\eta$-Einstein manifold, i.e; $S_1(X,Y)=ag_1(X,Y)+b\eta_1(X)\eta_1(Y)$, where $a=[\frac{r}{2n}-k]$ and $b=[(2n+1)k-\frac{r}{2n}]$. Then putting this value of $S_1$ in $(4.8)$ and after simplification we have, $$\begin{aligned} &\frac{1}{A}[l_1\{\eta_1(Y)g_1(X,W)-\eta_1(X)g_1(Y,W)\}+l_2\{\eta_1(Y)(ag_1(X,W)\nonumber\\ &+b\eta_1(X)\eta_1(W))-\eta_1(X)(ag_1(Y,W)+b\eta_1(Y)\eta_1(W))\}]\nonumber\\ &-B\leq g_2(f_*(R_1(X,Y)\xi_1),f_*(W)).\end{aligned}$$ Now, using the equation $(2.12)$ of result $(2.2)$ and observing that $(l_1+al_2)=k$, the above inequality becomes, $$\frac{1}{A}\eta_1(R_1(Y,X)W)-B\leq g_2(f_*(R_1(X,Y)\xi_1),f_*(W)).$$ Reminding the linearity of $f_*$ and using the relation $(2.9)$, after simplification the last inequality leads to, $$\frac{1}{A}\eta_1(R_1(Y,X)W)-B\leq k[\eta_1(Y)g_2(f_*(X),f_*(W))-\eta_1(X)g_2(f_*(Y),f_*(W))].$$ Similarly, taking the right side inequality of the $(3.1)$ and proceeding as above we get the following, $$k[\eta_1(Y)g_2(f_*(X),f_*(W))-\eta_1(X)g_2(f_*(Y),f_*(W))]\leq A\eta_1(R_1(Y,X)W)]+B.$$ So, combining the inequalities $(4.11)$ and $(4.12)$, we have the following theorem; **Theorem 5**. *Let $M_1(\phi_1,\xi_1,\eta_1,g_1)$ and $M_2(\phi_2,\xi_2,\eta_2,g_2)$ be two quasi- isometrically embedded $N(k)-$contact metric manifolds of same dimension $(2n+1)$, $(n\geq1)$. Suppose $f_*:\chi(M_1)\longrightarrow\chi(M_2)$ be such embedding between them with the constants $A\geq1, B\geq0$. 
Furthermore, if the manifold $M_1$ is conformally flat, then $\forall X,Y,W\in\chi(M_1)$, the metric $g_2$ of the manifold $M_2$ satisfies; $$\begin{aligned} \frac{1}{A}\eta_1(R_1(Y,X)W)-B\leq k[\eta_1(Y)g_2(f_*(X),f_*(W))\nonumber\\ -\eta_1(X)g_2(f_*(Y),f_*(W))]\leq A\eta_1(R_1(Y,X)W)]+B,\end{aligned}$$ where, $R_1$ is the Riemannian curvature tensor of the manifold $M_1$.\ * *Remark 6*. Now consider $f_*$ is a quasi-isometry between $M_1$ and $M_2$. Also consider, $f_*(X)=Z_1$ and $f_*(Y)=Z_2$. Then there exists some $W \in \chi(M_1)$ such that $g_2(Z_1, f_*(W))\leq D$ and $g_2(Z_2, f_*(W))\leq D$, where $D\geq0$. So from $(4.11)$, we get, $$\frac{1}{A}\eta_1(R_1(Y,X)W)-B\leq kD\eta_1(Y-X).$$ After a small calculation we can remark that, $$R_1(Y,X)W\leq A(B\xi_1+kD(Y-X)).\\$$ The following corollary can also be demonstrated: **Corollary 7**. *Let $f_*$ be a quasi-isometric embedding between two same dimensional $N(k)-$ contact metric manifolds $M_1(\phi_1,\xi_1,\eta_1,g_1)$ and $M_2(\phi_2,\xi_2,\\\eta_2,g_2)$. Moreover if $f_*$ preserves the structure vector field, then for some $A\geq1$ and $B_1\geq0$, the following inequality holds: $$-B_1\leq \eta_1(Y)g_2(f_*(X),\xi_2)-\eta_1(X)g_2(f_*(Y),\xi_2)\leq +B_1.$$* *Proof.* The proof of this corollary follows from the equation $(4.13)$ after putting $W=\xi_1$ and using $(2.9)$.\  ◻ Next, we consider the manifold $M_1$ to be *conformally flat Einstein manifold*, then its Ricci tensor $S_1$ satisfies $S_1(X,Y)=\frac{r}{2n+1}g_1(X,Y)$. Now putting this value of $S_1$ in $(4.8)$ we get, $$\begin{aligned} \frac{1}{A}[l_1\{\eta_1(Y)g_1(X,W)-\eta_1(X)g_1(Y,W)\}+l_2\frac{r}{2n+1}\{\eta_1(Y)g_1(X,W)\nonumber\\ -\eta_1(X)g_1(Y,W)\}]-B\leq g_2(f_*(R_1(X,Y)\xi_1),f_*(W)).\end{aligned}$$ Then after simplification this yields, $$\frac{1}{Ak}(l_1+\frac{r}{2n+1}l_2)\eta_1(R_1(Y,X)W)-B\leq g_2(f_*(R_1(X,Y)\xi_1),f_*(W)).$$ Now considering $a_1=\frac{1}{k}(l_1+\frac{r}{2n+1}l_2)=\frac{1}{k(2n-1)}(2nk-\frac{r}{2n}+\frac{r}{2n+1})$, the above inequality transforms into, $$\frac{a_1}{A}\eta_1((R_1(Y,X)W))-B\leq g_2(f_*(R_1(X,Y)\xi_1),f_*(W)).$$ Applying the linearity of $f_*$, after simplification the last inequality becomes, $$\begin{aligned} &\frac{a_1}{A}\eta_1((R_1(Y,X)W))-B\leq k[\eta_1(Y)g_2(f_*(X),f_*(W))\nonumber\\ &-\eta_1(X)g_2(f_*(Y),f_*(W))].\end{aligned}$$ Again proceeding similarly with the right side inequality, we have, $$\begin{aligned} &k[\eta_1(Y)g_2(f_*(X),f_*(W))-\eta_1(X)g_2(f_*(Y),f_*(W))]\nonumber\\ &\leq a_1A\eta_1((R_1(Y,X)W))+B.\end{aligned}$$ Hence, from $(4.16)$ and $(4.17)$, we can state the following corollary; **Corollary 8**. *Let $f_*$ be a quasi-isometric embedding between two same dimensional $N(k)-$ contact metric manifolds $M_1(\phi_1,\xi_1,\eta_1,g_1)$ and $M_2(\phi_2,\xi_2,\\\eta_2,g_2)$. 
Moreover, if the manifold $M_1$ be conformally flat Einstein manifold, then for some $A\geq1, B\geq0$ and $\forall X,Y,W\in\chi(M_1)$ the metric $g_2$ of $M_2$ satisfies the following inequality; $$\begin{aligned} &\frac{a_1}{A}\eta_1((R_1(Y,X)W))-B\leq k[\eta_1(Y)g_2(f_*(X),f_*(W))\nonumber\\ &-\eta_1(X)g_2(f_*(Y),f_*(W))]\leq a_1A\eta_1((R_1(Y,X)W))+B,\end{aligned}$$ where, $R_1$ is the Riemannian curvature tensor of the manifold $M_1$ and $a_1=\frac{1}{k(2n-1)}(2nk-\frac{r}{2n}+\frac{r}{2n+1})$.\ * Next, the concircular curvature tensor of a manifold $(M^{2n+1},g)$ is given by, $$\bar{C}(X,Y)Z=R(X,Y)Z-\frac{r}{2n(2n+1)}[g(Y,Z)X-g(X,Z)Y].$$ Now, if our ambient manifold $M_1$ be *concircularly flat* i.e $\bar{C}(X,Y)Z=0$, then from above we have, $R_1(X,Y)Z=\frac{r}{2n(2n+1)}[g(Y,Z)X-g(X,Z)Y]$. Putting this value of $R_1$ in the left hand inequality of $(3.1)$, we get, $$\begin{aligned} &\frac{r}{A2n(2n+1)}[g_1(Y,Z)g_1(X,W)-g_1(X,Z)g_1(Y,W)]\\ &-B\leq g_2(f_*(R_1(X,Y)Z),f_*(W)).\end{aligned}$$ Then for $Z=\xi_1$ and using equations $(2.6)$, $(2.12)$ with the linearity of $f_*$ and simplifying after some steps, $$\frac{b_1}{A}\eta_1(R_1(Y,X)W)-B\leq k[\eta_1(Y)g_2(f_*(X),f_*(W))-\eta_1(X)g_2(f_*(Y),f_*(W))],$$ with $b_1=\frac{r}{2nk(2n+1)}$. Again proceeding similarly as above, from the right hand side inequality of $(3.1)$, we have, $$k[\eta_1(Y)g_2(f_*(X),f_*(W))-\eta_1(X)g_2(f_*(Y),f_*(W))]\leq b_1A\eta_1(R_1(Y,X)W)+B.$$ So, combining $(4.19)$ and $(4.20)$ we state the following theorem: **Theorem 9**. *Let $M_1(\phi_1,\xi_1,\eta_1,g_1)$ and $M_2(\phi_2,\xi_2,\eta_2,g_2)$ be two quasi-iso-metrically embedded $N(k)-$contact metric manifolds of same dimension $(2n+1)$, $(n\geq1)$. Suppose $f_*:\chi(M_1)\longrightarrow\chi(M_2)$ be such embedding between $M_1$ and $M_2$ with the constants $A\geq1, B\geq0$. Furthermore, if the manifold $M_1$ is concircularly flat, then $\forall X,Y,W\in\chi(M_1)$ the metric $g_2$ of the manifold $M_2$ satisfies: $$\begin{aligned} &\frac{b_1}{A}\eta_1(R_1(Y,X)W)-B\leq k[\eta_1(Y)g_2(f_*(X),f_*(W))\nonumber\\ &-\eta_1(X)g_2(f_*(Y),f_*(W))]\leq b_1A\eta_1(R_1(Y,X)W)+B,\end{aligned}$$ where, $b_1=\frac{r}{2nk(2n+1)}$.\ * We have the conharmonic curvature tensor for a manifold $(M^{2n+1},g)$ given by, $$\begin{aligned} &\bar{C}(X,Y)Z=R(X,Y)Z-\frac{1}{(2n-1)}[S(Y,Z)X-S(X,Z)Y\\ &+g(Y,Z)QX-g(X,Z)QY],\end{aligned}$$ where $Q$ is the *Ricci operator* and is given by $g(QX,Y)=S(X,Y)$. 
Now if our manifold $M_1$ is *conharmonically flat* i.e $\bar{C}(X,Y)Z=0$, then using the value of $R_1(X,Y)Z$ from above, we get from the left hand inequality of $(3.1)$, $$\begin{aligned} &\frac{l_2}{A}[S_1(Y,Z)g_1(X,W)-S_1(X,Z)g_1(Y,W)+g_1(Y,Z)g_1(Q_1X,W)\\ &-g_1(X,Z)g_1(Q_1Y,W)]-B\leq g_2(f_*(R_1(X,Y)Z),f_*(W)).\end{aligned}$$ Then putting $Z=\xi_1$ and using equations $(2.6)$ and $g_1(X,\xi)=2nk\eta_1(X)$ and $S_1(Q_1X,Y)=S_1(X,Y)$, we get after some steps, $$\begin{aligned} &\frac{l_2}{A}[2nk\{\eta_1(Y)g_1(X,W)-\eta_1(X)g_1(Y,W)\}+[\eta_1(Y)S_1(X,W)\\ &-\eta_1(X)S_1(Y,W)]]-B\leq g_2(f_*(R_1(X,Y)\xi_1),f_*(W)).\end{aligned}$$ Moreover if our manifold $M_1$ is an Einstein manifold, that is, $S_1(X,Y)=\frac{r}{2n+1}g_1(X,Y)$, then the above inequality becomes $$\begin{aligned} &\frac{l_2}{A}(2nk+\frac{r}{2n+1})[\eta_1(Y)g_1(X,W)-\eta_1(X)g_1(Y,W)]\\ &-B\leq g_2(f_*(R_1(X,Y)\xi_1),f_*(W)).\end{aligned}$$ Finally, using equation $(2.12)$ and the linearity of $f_*$, the above yields, $$\frac{c_1}{A}\eta_1(R_1(Y,X)W)-B\leq k[\eta_1(Y)g_2(f_*(X),f_*(W))-\eta_1(X)g_2(f_*(Y),f_*(W))].$$ where, $c_1=\frac{l_2}{k}(2nk+\frac{r}{2n+1})$.\ Again proceeding similarly as above, from the right inequality of $(3.1)$ we get, $$k[\eta_1(Y)g_2(f_*(X),f_*(W))-\eta_1(X)g_2(f_*(Y),f_*(W))]\leq c_1A\eta_1(R_1(Y,X)W)+B.$$ Therefore from the inequalities $(4.22)$ and $(4.23)$ we get the following theorem; **Theorem 10**. *Let $M_1(\phi_1,\xi_1,\eta_1,g_1)$ and $M_2(\phi_2,\xi_2,\eta_2,g_2)$ be two $N(k)-$contact metric manifolds of same dimension $(2n+1)$, $(n\geq1)$. Suppose $f_*:\chi(M_1)\longrightarrow\chi(M_2)$ be the quasi-isometric embedding between $M_1$ and $M_2$ with the constants $A\geq1, B\geq0$. Furthermore, if the manifold $M_1$ is conharmonically flat Einstein manifold, then $\forall X,Y,W\in\chi(M_1)$ the metric $g_2$ of the manifold $M_2$ satisfies; $$\begin{aligned} \frac{c_1}{A}\eta_1(R_1(Y,X)W)-B\leq k[\eta_1(Y)g_2(f_*(X),f_*(W))\nonumber\\ -\eta_1(X)g_2(f_*(Y),f_*(W))] \leq c_1A\eta_1(R_1(Y,X)W)+B,\end{aligned}$$ where, $c_1=\frac{l_2}{k}(2nk+\frac{r}{2n+1})=\frac{1}{k(2n-1)}(2nk+\frac{r}{2n+1})=\frac{4n}{2n-1}$, since we have $k=\frac{r}{2n(2n-1)}$ for $N(k)-$contact Einstein manifolds.\ * Recall that the Weyl projective curvature tensor $P$ of type $(1,3)$ on a Riemannian manifold $(M^{2n+1},g)$ can be defined as, $$P(X,Y)Z=R(X,Y)Z-\frac{1}{2n}[S(Y,Z)X-S(X,Z)Y].$$ In a similar calculation, if $M_1$ is *projectively flat* $N(k)-$contact Einstein manifold, i.e. if $P_1=0$ and $S_1(X,Y)=\frac{r}{2n+1}g_1(X,Y)$, then we can state the following theorem: **Theorem 11**. *Let $M_1(\phi_1,\xi_1,\eta_1,g_1)$ and $M_2(\phi_2,\xi_2,\eta_2,g_2)$ be two quasi-isomet-rically embedded $N(k)-$contact metric manifolds of same dimension $(2n+1)$, $(n\geq1)$. Suppose $f_*:\chi(M_1)\longrightarrow\chi(M_2)$ be such embedding between $M_1$ and $M_2$ with the constants $A\geq1, B\geq0$. Furthermore, if the manifold $M_1$ is projectively flat Einstein manifold, then $\forall X,Y,W\in\chi(M_1)$ the metric $g_2$ of the manifold $M_2$ satisfies: $$\begin{aligned} \frac{1}{A}\eta_1(R_1(Y,X)W)-B\leq k[\eta_1(Y)g_2(f_*(X),f_*(W))\nonumber\\ -\eta_1(X)g_2(f_*(Y),f_*(W))] \leq A\eta_1(R_1(Y,X)W)+B.\end{aligned}$$* # **Quasi-isometry between two Sasakian manifolds** Some basic introductory details about the Sasakian manifold is given in the preliminary section. Now we recall an important theorem to establish the rest of the results. **Theorem 12**. 
*[@Bla] A $N(k)-$contact metric manifold is Sasakian if and only if $k=1$.* Using this theorem we can imply the following result from the previous results for $N(k)-$contact metric manifold. **Theorem 13**. *Let $M_1(\phi_1,\xi_1,\eta_1,g_1)$ and $M_2(\phi_2,\xi_2,\eta_2,g_2)$ be two quasi-isomet-rically embedded Sasakian manifolds of same dimension $(2n+1)$, $(n\geq1)$ with the given map $f_*:\chi(M_1)\longrightarrow\chi(M_2)$ between $M_1$ and $M_2$ and the constants $A\geq1, B\geq0$. Then the following inequalities hold for the Riemannian metric $g_2$ of the manifold $M_2$ in the respective following cases:\ 1) If $M_1$ is conformally flat manifold or conformally flat Einstein manifold or concircularly flat manifold or projectively flat manifold, then, $$\begin{aligned} \frac{1}{A}\eta_1(R_1(Y,X)W)-B\leq \eta_1(Y)g_2(f_*(X),f_*(W)) \nonumber\\ -\eta_1(X)g_2(f_*(Y),f_*(W)) \leq A\eta_1(R_1(Y,X)W)+B.\end{aligned}$$ 2) If $M_1$ is conharmonically flat Einstein manifold, then, $$\begin{aligned} \frac{c_1}{A}\eta_1(R_1(Y,X)W)-B\leq \eta_1(Y)g_2(f_*(X),f_*(W)) \nonumber\\ -\eta_1(X)g_2(f_*(Y),f_*(W)) \leq c_1 A\eta_1(R_1(Y,X)W)+B,\end{aligned}$$ where, $c_1=\frac{4n}{2n-1}$.\ \ * *Example 1*. Consider $M_1=\mathbb{R}^3$ with the Euclidean metric $g_1$. Let $\alpha =\frac{1}{2}(dz-ydx)$, $\xi =\frac{\partial}{\partial z}$ and $g_1=\alpha\otimes\alpha +\frac{1}{4}(dx^2+dy^2)$. Then we take $e_1=\frac{\partial}{\partial x}$, $e_2=\frac{\partial}{\partial y}$ and $e_3=\frac{\partial}{\partial z}$ as a set of linearly independent basis vectors for the set of vector fields $\chi(M_1)$ of the manifold $M_1$. Also consider the $(1,1)$ tensor field $\phi$ be given as, $\phi_1(\frac{\partial}{\partial x})=\frac{\partial}{\partial y}+x\frac{\partial}{\partial z}$, $\phi_1(\frac{\partial}{\partial y})=-\frac{\partial}{\partial x}$ and $\phi_1(\frac{\partial}{\partial z})=0$. Then it can be easily checked that the manifold $(M_1,g_1)$ with the above defined structure is a Sasakian manifold. Take another manifold $M_2=${$(x,y,z)\in\mathbb{R}^3: 1< y< 2,z\neq 0$}, where $(x,y,z)$ are the standard co-ordinates of $\mathbb{R}^3$. Then the linearly independent vector fields are given by $f_1=\frac{\partial}{\partial y}$, $f_2=z^2(\frac{\partial}{\partial z}+2y\frac{\partial}{\partial x})$ and $f_3=\frac{\partial}{\partial x}$. Let $g_2$ be the Riemannian metric defined by: $g_{ij}=1$ for $i=j$ and $g_{ij}=0$ for $i\neq j$. Let $\phi$ be the $(1,1)$ tensor field defined by; $\phi_2 (f_1)=f_3$, $\phi_2 (f_2)=0$ and $\phi_2 (f_3)=-f_1$. Thus for taking $\xi =f_2$, we can show that the manifold $(M_2,g_2)$ with this structure is a Sasakian manifold. Now we define a map $f_*:\chi(M_1)\longrightarrow\chi(M_2)$ on the basis vector fields by, $$f_*(e_1)=\frac{1}{2}(yf_3+\frac{1}{\sqrt{y}}f_1), ~~f_*(e_2)=\frac{1}{2}f_2, ~~f_*(e_3)=-\frac{1}{2}f_3.$$ This map $f$ is a quasi-isometry between the two Sasakian manifolds $M_1$ and $M_2$ with the constants $A=2$ and $B=1$.\ # Quasi-isometric inequality between two Riemannian manifolds We will conclude this article with the following most important result. This theorem concerns between any two Riemannian manifold which have a quasi-isometric structure among them. **Theorem 14**. *Let $(M_1, g_1)$ and $(M_2, g_2)$ be two Riemannian manifolds of same dimension $n$ and let $f_*$ be the quasi-isometric embedding between them with some constants $A\geq1$ and $B\geq0$. 
*Then, $\frac{r_1}{A}-n^2B\leq \sum_{i,j=1}^{n} g_2(f_*(R_1(e_i,e_j)e_j),f_*(e_i))\leq Ar_1+n^2B$, where $\{e_i\}$ is an orthonormal basis of $T_p(M_1)$, $p\in M_1$, and $r_1$ is the scalar curvature of the manifold $M_1$.*

*Proof.* For all $X$, $Y$, $Z$ and $W$ in $\chi(M_1)$, $R_1(X,Y)Z$ is also in $\chi(M_1)$, and since $f_*$ is a quasi-isometric embedding between $M_1$ and $M_2$, we get, $$\begin{aligned} \frac{1}{A}g_1(R_1(X,Y)Z,W)-B \leq &g_2(f_*(R_1(X,Y)Z),f_*(W))\nonumber\\ \leq &Ag_1(R_1(X,Y)Z,W)+B.\end{aligned}$$ Let $\{e_i\}$ be an orthonormal basis of the tangent space $T_p(M_1)$, $p\in M_1$. Then, taking $X=W=e_i$ in the left inequality of $(6.1)$ and summing over $i=1,\ldots,n$, we get $$\frac{1}{A}S_1(Y,Z)-nB \leq \sum_{i=1}^{n} g_2(f_*(R_1(e_i,Y)Z),f_*(e_i)).$$ Again, putting $Y=Z=e_j$ and summing over $j=1,\ldots,n$, we get $$\frac{1}{A}r_1-n^2B \leq \sum_{i,j=1}^{n} g_2(f_*(R_1(e_i,e_j)e_j),f_*(e_i)).$$ In a similar way, from the right inequality we get $$\sum_{i,j=1}^{n} g_2(f_*(R_1(e_i,e_j)e_j),f_*(e_i))\leq Ar_1+n^2B.$$ Joining the inequalities $(6.2)$ and $(6.3)$ completes the proof. ◻

**Acknowledgements:** The author P. Ghosh is the corresponding author and is financially supported by UGC Junior Research Fellowship of India (Ref. No: 201610010610; Roll No: WB10605988). The author D. Ganguly is financially supported by NBHM Senior Research Fellowship of India (Ref. No: 0203/11/2017/RD-II/10440).

**Conflict of interest:** The authors have no conflict of interest.

**Availability of data and materials:** This manuscript has no associated data.

# **References**

P. de la Harpe, *Topics in geometric group theory*, Chicago Lectures in Mathematics, University of Chicago Press, 2000.

M. Bridson and A. Haefliger, *Metric spaces of non-positive curvature*, Springer, 1999.

D. E. Blair, *Riemannian Geometry of Contact and Symplectic Manifolds*, Birkhäuser, Second Edition, 2010.

G. D. Mostow, *Strong rigidity of locally symmetric spaces*, Annals of Mathematics Studies, no. 78, Princeton Univ. Press, 1973.

M. Clay and D. Margalit, *Office hours with a geometric group theorist*, Princeton University Press, Princeton and Oxford, 2017.

S. Tanno, *Ricci curvatures of contact Riemannian manifolds*, Tohoku Math. J. 40(3), 441–448, 1988.

S. Sasaki, *On differentiable manifolds with certain structures which are closely related to almost contact structure*, Tôhoku Math. J. (2), 459–476, 1960.

C. P. Boyer and K. Galicki, *3-Sasakian manifolds*, Surveys Diff. Geom. 7 (1999), 123–184.

M. Gromov, *Groups of polynomial growth and expanding maps*, Inst. Hautes Études Sci. Publ. Math. 53, 53–73, 1981.

M. Gromov, *Hyperbolic groups*, in Essays in Group Theory, Math. Sci. Res. Inst. Publ., Vol. 8, pp. 75–263, Springer, New York, 1987.

R. E. Schwartz, *The quasi-isometry classification of rank one lattices*, Inst. Hautes Études Sci. Publ. Math. 82 (1995), 133–168.

[^1]: $^*$This author is the first author and the corresponding author, supported by UGC Junior Research Fellowship of India.
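As a sanity check on Example 1 and the constants $A=2$, $B=1$ claimed there, the following minimal Python sketch evaluates both metrics on the basis fields and tests the embedding inequality numerically. It rests on two simplifying assumptions that are not spelled out in the example itself: the coordinate $y$ appearing in the coefficients is treated as the same variable on both sides and sampled in $(1,2)$, and only pairs of the basis fields $e_1,e_2,e_3$ are checked, not arbitrary vector fields.

```python
# Numerical sanity check for Example 1 (under the assumptions stated above).
import numpy as np

A, B = 2.0, 1.0

def g1(y):
    # g1 = alpha (x) alpha + (1/4)(dx^2 + dy^2) with alpha = (dz - y dx)/2,
    # evaluated on the frame e1 = d/dx, e2 = d/dy, e3 = d/dz.
    return np.array([[y * y / 4 + 0.25, 0.0, -y / 4],
                     [0.0, 0.25, 0.0],
                     [-y / 4, 0.0, 0.25]])

def g2_pushforward(y):
    # f_*(e1) = (y f3 + y^{-1/2} f1)/2, f_*(e2) = f2/2, f_*(e3) = -f3/2.
    # Since f1, f2, f3 are g2-orthonormal, g2(f_* e_i, f_* e_j) is the Euclidean
    # inner product of the coefficient vectors (components w.r.t. f1, f2, f3).
    F = np.array([[1 / (2 * np.sqrt(y)), 0.0, y / 2],
                  [0.0, 0.5, 0.0],
                  [0.0, 0.0, -0.5]])
    return F @ F.T

for y in np.linspace(1.01, 1.99, 50):
    G1, G2 = g1(y), g2_pushforward(y)
    assert np.all(G1 / A - B <= G2 + 1e-12)
    assert np.all(G2 <= A * G1 + B + 1e-12)

print("(1/A) g1 - B <= g2(f_*., f_*.) <= A g1 + B holds on all basis pairs.")
```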
{ "id": "2309.15429", "title": "Quasi-isometry between two almost contact metric manifolds", "authors": "Paritosh Ghosh, Dipen Ganguly and Arindam Bhattacharyya", "categories": "math.DG", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | Motivated by complexity questions in integer programming, this paper aims to contribute to the understanding of combinatorial properties of integer matrices of row rank $r$ and with bounded subdeterminants. In particular, we study the column number question for integer matrices whose every $r \times r$ minor is non-zero and bounded by a fixed constant $\Delta$ in absolute value. Approaching the problem in two different ways, one that uses results from coding theory, and the other from the geometry of numbers, we obtain linear and asymptotically sublinear upper bounds on the maximal number of columns of such matrices, respectively. We complement these results by lower bound constructions, matching the linear upper bound for $r=2$, and a discussion of a computational approach to determine the maximal number of columns for small parameters $\Delta$ and $r$. address: - | Institute of Mathematics\ University of Rostock\ Germany - | Institute of Mathematics\ University of Rostock\ Germany - | Institute of Mathematics\ University of Rostock\ Germany author: - Björn Kriepke - Gohar M. Kyureghyan - Matthias Schymura bibliography: - generic-delta.bib title: On the size of integer programs with bounded non-vanishing subdeterminants --- # Introduction This paper contributes to the understanding of combinatorial properties of integer programs, parametrized by the maximal absolute value of a subdeterminant of their constraint matrices. An *integer program* is an optimization problem of the form $$\begin{aligned} \max\left\{ c^\intercal x : Bx \leq d, x \in \mathbb{Z}^r \right\},\tag{IP}\end{aligned}$$ where $B \in \mathbb{Z}^{n \times r}$ is the constraint matrix, $c \in \mathbb{Z}^r$ the cost function, and $d \in \mathbb{Z}^n$ the right hand side. The underlying polyhedron of this integer program is defined as $$P(B,d) := \left\{ x \in \mathbb{R}^r : Bx \leq d \right\},$$ that is, as the set of all solutions of the linear inequality system $b_1^\intercal x \leq d_1, \ldots, b_n^\intercal x \leq d_n$ with $b_1,\ldots,b_n \in \mathbb{Z}^r$ denoting the rows of $B$, which we compactly write as $Bx \leq d$. Solving a general integer program is computationally expensive. Indeed the associated decision problem that asks whether $P(B,d) \cap \mathbb{Z}^r \neq \emptyset$ is among Karp's [@karp1972reducibility] famous early list of NP-complete problems. The most efficient algorithm for solving a general instance of the integer program (IP) was very recently proposed in [@reisrothvoss2023thesubspace], and has a running time of $\log(2r)^{\mathcal{O}(r)}$ multiplied with a polynomial in the encoding length of $B,d$, and $c$. We refer to that paper [@reisrothvoss2023thesubspace] for a detailed discussion of the history of such complexity results. The question whether (IP) can be solved in single exponential time is one of the major research problems in the area. Given the computational hardness of integer programming, there is an extensive amount of research about investigating integer programs with additional structure. The most basic assumption that implies efficient algorithms is that the constraint matrix $B$ in (IP) is *totally unimodular*, which means that all its *minors*, that is, the determinants of square submatrices, are in $\{-1,0,1\}$. As a consequence, the polyhedron $P(B,d)$ has only integral vertices and thus (IP) can be solved in polynomial time, for instance, by linear programming techniques (see [@schrijver1986theory Ch. 19]). 
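As a small illustration of this dichotomy (the instance below is chosen purely for exposition), consider $$B = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ -1 & -1 \end{pmatrix} \qquad\text{and}\qquad B' = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ -1 & -2 \end{pmatrix}.$$ The matrix $B$ is totally unimodular, so $P(B,d)$ has only integral vertices for every $d \in \mathbb{Z}^3$. The matrix $B'$ has a $2 \times 2$ minor equal to $-2$, and already for $d = (1,1,0)^\intercal$ the polyhedron $P(B',d)$ has the non-integral vertex $(1,-\frac{1}{2})$, determined by the first and third constraints.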
This observation suggests to parametrize integer programs by maximal determinants of the submatrices of the constraint matrix. More precisely, given an integer $\Delta \in \mathbb{Z}_{>0}$, we say that a *rational* matrix $A \in \mathbb{Q}^{r \times n}$ of rank $\mathop{\mathrm{rk}}(A)$ is *$\Delta$-submodular*, if all its $\mathop{\mathrm{rk}}(A) \times \mathop{\mathrm{rk}}(A)$ minors are bounded by $\Delta$ in absolute value. Further, we say that $A$ is *$\Delta$-modular*, if it is $\Delta$-submodular and there is a $\mathop{\mathrm{rk}}(A) \times \mathop{\mathrm{rk}}(A)$ minor equal to $\Delta$ or $-\Delta$. If the minors of *every* size are bounded by $\Delta$ in absolute value, we call $A$ *totally $\Delta$-(sub)modular*. An integer program (IP) with a $\Delta$-modular constraint matrix $B$ is called a *$\Delta$-modular integer program*. Note that this definition is independent of the right hand side $d \in \mathbb{Z}^n$. As discussed above, totally unimodular ($\Delta=1$) integer programs are efficiently solvable. This is relevant for problems that can be formulated by network matrices, for instance (see [@schrijver1986theory Ch. 19]), but at the same time it is a very restrictive assumption. Extending our understanding of $\Delta$-modular integer programs beyond the totally unimodular case is a prevailing and currently very active research direction in the community. The main conjecture in the field is that efficient algorithms exist, whenever $\Delta$ is not part of the input: **Conjecture 1**. *Let $\Delta \in \mathbb{Z}_{>0}$ be fixed. Then, there is a polynomial time algorithm to solve any integer program of the form (IP) with a $\Delta$-modular constraint matrix.* The origin of this conjecture is hard to trace, but [@gribanovmalyshevpardalosveselov] attribute it to [@shevchenko1997qualitative]; it might be much older than that though. Despite the fact that [Conjecture 1](#conj:ip-subdeterminants){reference-type="ref" reference="conj:ip-subdeterminants"} is far from being resolved in whole generality, there is a list of pertinent results supporting it and which cover different variations and specializations of the problem. In fact, polynomial time algorithms have been devised for $2$-modular integer programs [@artmannweismantelzenklusen2017astrongly], for totally $\Delta$-modular integer programs with at most two non-zeros per row [@fiorinijoretweltgeyuditsky2021integer], and for $\Delta \leq 4$ for so-called strictly $\Delta$-modular integer programs [@naegelesantiagozenklusen2021congruency]. Additionally, the class of *generic* $\Delta$-modular integer programs is well understood from a computational point of view. A matrix $A \in \mathbb{Z}^{r \times n}$ is called *generic*, if all its $\mathop{\mathrm{rk}}(A) \times \mathop{\mathrm{rk}}(A)$ minors are non-zero, that is, every set of $\mathop{\mathrm{rk}}(A)$ columns or rows of $A$ is linearly independent. Generic $\Delta$-modular integer programming, for arbitrary but fixed $\Delta \in \mathbb{Z}_{>0}$, is known to be polynomial time solvable due to [@artmanneisenbrandglanzeroertelvempalaweismantel2016anote] (the case of generic $2$-modular constraint matrices was known before [@veselovchirkov2009integer]). More strongly, all the potentially relevant feasible points in the underlying polyhedron of a generic $\Delta$-modular integer program can be enumerated efficiently: **Theorem 1** ([@jiangbasu2022enumerating Thm. 2.1]). *Let $\Delta \in \mathbb{Z}_{>0}$ be fixed. 
Then, for every generic $\Delta$-modular matrix $B \in \mathbb{Z}^{n \times r}$ and every $d \in \mathbb{Z}^n$, one can enumerate the vertices of the convex hull of $P(B,d) \cap \mathbb{Z}^r$ in polynomial time.* In this paper, we continue the study of generic $\Delta$-modular integer programs and their combinatorial properties. Our aim is to underpin the algorithmic results for this class of optimization problems by corresponding strong combinatorial obstructions. More precisely, we are interested in the maximum number of irredundant inequality constraints that a generic $\Delta$-modular integer program can have. After interchanging rows and columns (which is more convenient for our analysis) this corresponds to bounding the number of columns of a generic $\Delta$-modular integer matrix for a fixed number $r$ of rows. We may also assume that the considered matrices have *no repeating columns*, and we will do so for the rest of the paper without further mention. By this assumption we can identify a matrix $A \in \mathbb{Z}^{r \times n}$ with its set of columns and then consider it as a subset of $\mathbb{Z}^r$ of cardinality $n$. The combinatorial question described above is the so-called *column number question* for generic $\Delta$-modular integer matrices and is made precise by the function $$\begin{aligned} \mathop{\mathrm{g}}(\Delta,r) &:= \max\left\{ n : \exists \textrm{ a generic }\Delta\textrm{-modular }A \in \mathbb{Z}^{r \times n}\textrm{ with} \mathop{\mathrm{rk}}(A) = r \right\}.\end{aligned}$$ The relation to the combinatorial complexity of the corresponding (IP)s is given by $$\begin{aligned} \max\left\{ \mathop{\mathrm{fct}}(P(B,d)) : B^\intercal \in \mathbb{Z}^{r \times n} \textrm{ is generic } \Delta\textrm{-modular}, d \in \mathbb{Z}^n \right\} = 2 \cdot \mathop{\mathrm{g}}(\Delta,r),\label{eqn:g-facet-number}\end{aligned}$$ where $\mathop{\mathrm{fct}}(P)$ denotes the number of facets of the polyhedron $P$. This identity can be seen by taking suitably large entries for the vector $d$ of right hand sides, so that every given constraint corresponds to a facet-defining inequality of the polyhedron $P(B,d)$. The factor of two arises as the definition of $\mathop{\mathrm{g}}(\Delta,r)$ excludes parallel columns, but the polyhedron $P(B,d)$ can have pairs of facets that are supported by parallel hyperplanes with opposite outer normal vectors. Our main results on the behavior of the function $\mathop{\mathrm{g}}(\Delta,r)$ substantiate the fact that generic $\Delta$-modular integer programs are quite restrictive. They are summarized as follows: **Theorem 2**. *Let $r$ and $\Delta$ be positive integers with $r \geq 2$, and let $p$ be the smallest prime number with $\Delta < p$. Then, $$\begin{aligned} \mathop{\mathrm{g}}(\Delta,r) \leq \max\{r, p\} + 1.\label{eqn:linear-bound}\end{aligned}$$ Furthermore, if $r,\Delta \geq 2$, then $\mathop{\mathrm{g}}(\Delta,r) \leq 2\Delta$, and if $r \geq 2\Delta - 1$, then $\mathop{\mathrm{g}}(\Delta,r) = r+1$.* For $r=2$, the bound [\[eqn:linear-bound\]](#eqn:linear-bound){reference-type="eqref" reference="eqn:linear-bound"} appears in [@paatstallknechtwalshxu2022onthecolumn], for $r \geq 3$ it is apparently new. The fact that $\mathop{\mathrm{g}}(\Delta,r) = r+1$, if $r$ is large enough compared to $\Delta$, was established before in [@artmanneisenbrandglanzeroertelvempalaweismantel2016anote Lem. 7], albeit with the much stronger assumption that $r$ is at least doubly exponential in $\Delta$. 
A direct consequence of [\[eqn:linear-bound\]](#eqn:linear-bound){reference-type="eqref" reference="eqn:linear-bound"} is that $\mathop{\mathrm{g}}(\Delta,r)$ is at most linear in *both* parameters $\Delta$ and $r$. The previous state of the art was given by the inequality $\mathop{\mathrm{g}}(\Delta,r) \leq r + \Delta^2$, established in [@paatschloeterweismantel2022theintegrality Lem. 8] (cf. [@glanzerstallknechtweismantel2022notesonabc Proof of Lem. 1] for a similar bound). The authors of [@paatschloeterweismantel2022theintegrality] show how bounds on $\mathop{\mathrm{g}}(\Delta,r)$ have consequences for what they term the *integrality number* of the corresponding integer programs. As our second main result we show that for fixed rank $r \geq 3$ the bound [\[eqn:linear-bound\]](#eqn:linear-bound){reference-type="eqref" reference="eqn:linear-bound"} is not tight and that the dependence on $\Delta$ is indeed *sublinear*. **Theorem 3**. *For every $r,\Delta \in \mathbb{Z}_{>0}$ with $r \geq 3$, we have $$\mathop{\mathrm{g}}(\Delta,r) \leq 130\, r^3 \Delta^{\frac{2}{r}}.$$* The upper bounds in [\[thm:linear-upper-bound,thm:sublinear-bound-generic-heller\]](#thm:linear-upper-bound,thm:sublinear-bound-generic-heller){reference-type="ref" reference="thm:linear-upper-bound,thm:sublinear-bound-generic-heller"} are proven in [2](#sect:upper-bounds){reference-type="ref" reference="sect:upper-bounds"} using tools from coding theory and the geometry of numbers, respectively. We complement these results by lower bound constructions that are best possible for $r=2$, and that imply that the exponent $\frac{2}{r}$ in [Theorem 3](#thm:sublinear-bound-generic-heller){reference-type="ref" reference="thm:sublinear-bound-generic-heller"} is correct up to a constant factor. The essence of these constructions, whose details are laid out in [3.1](#sect:constr-two-rows){reference-type="ref" reference="sect:constr-two-rows"} and [Proposition 2](#prop:generic-gen-heller-lower-bound){reference-type="ref" reference="prop:generic-gen-heller-lower-bound"}, is summarized in the following (slightly informal) statement: **Theorem 4**. * * 1. *For $r = 2$, there are at least three infinite families of integers $\Delta$ for which the bound [\[eqn:linear-bound\]](#eqn:linear-bound){reference-type="eqref" reference="eqn:linear-bound"} is attained.* 2. *For fixed $r \geq 3$, the function $\Delta \mapsto \mathop{\mathrm{g}}(\Delta,r)$ grows at least with the order $\Omega(\Delta^\frac{1}{r-1})$.* We suspect that these three families are the only infinite families meeting the bound [\[eqn:linear-bound\]](#eqn:linear-bound){reference-type="eqref" reference="eqn:linear-bound"}. In addition to these theoretical results on the asymptotic behavior of the counting function $(\Delta,r) \mapsto \mathop{\mathrm{g}}(\Delta,r)$, we devised an algorithm to compute $\mathop{\mathrm{g}}(\Delta,r)$ for small parameters $r$ and $\Delta$. The algorithm, its specifications, and the computational results obtained will be described in detail in [4](#sect:computational-approach){reference-type="ref" reference="sect:computational-approach"}. Of course, the column number question has also been investigated for general, not necessarily generic, $\Delta$-modular integer matrices.
The corresponding counting function is $$\mathop{\mathrm{h}}(\Delta,r) := \max\left\{ n : \exists \textrm{ a }\Delta\textrm{-modular }A \in \mathbb{Z}^{r \times n}\textrm{ with } \mathop{\mathrm{rk}}(A) = r \right\}.$$ Note that in contrast to $\mathop{\mathrm{g}}(\Delta,r)$ this allows linear dependencies among sets of $r$ columns in the considered matrices, and in particular, it allows for parallel columns. We briefly mention the best-known bounds on this parameter and refer to the cited papers for more background and literature. A construction of a $\Delta$-modular matrix with many columns in [@leepaatstallknechtxu2022polynomial] shows that $$\begin{aligned} r^2 + r + 1 + 2r(\Delta - 1) \leq \mathop{\mathrm{h}}(\Delta,r), \textrm{ for all } \Delta,r \in \mathbb{Z}_{>0}.\label{lee-et-at-general-bound}\end{aligned}$$ Furthermore, the authors of  [@leepaatstallknechtxu2022polynomial] prove that $\mathop{\mathrm{h}}(\Delta,r) \leq (r^2+r)\Delta^2 + 1$ and that the lower bound in [\[lee-et-at-general-bound\]](#lee-et-at-general-bound){reference-type="eqref" reference="lee-et-at-general-bound"} is tight whenever $\Delta \leq 2$ or $r \leq 2$. They also conjectured that the identity $\mathop{\mathrm{h}}(\Delta,r) = r^2 + r + 1 + 2r(\Delta - 1)$ holds for *every* $r$ and $\Delta$, which has been disproven in [@averkovschymura2023onthemaximal] for $\Delta \in \{4,8,16\}$ and $r$ large enough. However, the quantitative version of this conjecture, namely whether $\mathop{\mathrm{h}}(\Delta,r) \leq r^2 + \mathcal{O}(r\cdot\Delta)$ holds, is still open. A step in this direction was achieved in [@averkovschymura2023onthemaximal] (see also [@averkovschymura2022maximal-ipco] for a less technical extended abstract), in which a bound of order $\mathop{\mathrm{h}}(\Delta,r) \in \mathcal{O}(r^4) \cdot \Delta$, for $r \geq 5$ and $\Delta \in \mathbb{Z}_{>0}$, was obtained. Together with [\[lee-et-at-general-bound\]](#lee-et-at-general-bound){reference-type="eqref" reference="lee-et-at-general-bound"} this shows that, for fixed $r$, the function $\Delta \mapsto \mathop{\mathrm{h}}(\Delta,r)$ grows linearly. Although the function $\mathop{\mathrm{h}}(\Delta,r)$ is apparently closely connected to the maximal possible number of irredundant inequalities in a general $\Delta$-modular integer program, allowing for parallel columns in the definition of a $\Delta$-modular integer matrix prevents an exact relationship analogous to [\[eqn:g-facet-number\]](#eqn:g-facet-number){reference-type="eqref" reference="eqn:g-facet-number"}. This can be circumvented by considering what we call *simple*[^1] matrices, in which only non-zero and pairwise non-parallel columns are allowed. We denote the counting function by $$\mathop{\mathrm{s}}(\Delta,r) := \max\left\{ n : \exists \textrm{ a simple }\Delta\textrm{-modular }A \in \mathbb{Z}^{r \times n}\textrm{ with } \mathop{\mathrm{rk}}(A) = r \right\},$$ and observe that now the exact relationship $$\begin{aligned} \max\left\{ \mathop{\mathrm{fct}}(P(B,d)) : B^\intercal \in \mathbb{Z}^{r \times n} \textrm{ is } \Delta\textrm{-modular}, d \in \mathbb{Z}^n \right\} = 2 \cdot \mathop{\mathrm{s}}(\Delta,r)\label{eqn:s-facet-number}\end{aligned}$$ holds, analogous to [\[eqn:g-facet-number\]](#eqn:g-facet-number){reference-type="eqref" reference="eqn:g-facet-number"} in the generic case.
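To make the distinction between the three counting functions concrete, consider the following small example, which we add here purely for illustration. For $r=3$, the matrix with columns $e_1, e_2, e_3, e_1+e_2, 2e_1$ contains the parallel pair $e_1$ and $2e_1$, so it is neither simple nor generic; nevertheless it has rank $3$ and all its $3 \times 3$ minors lie in $\{0,\pm 1,\pm 2\}$ with the value $2$ attained, so it counts towards $\mathop{\mathrm{h}}(2,3)$. Removing the column $2e_1$ leaves the simple matrix with columns $e_1, e_2, e_3, e_1+e_2$, which is not generic, since the minor formed by $e_1, e_2, e_1+e_2$ vanishes, but whose non-zero maximal minors all equal $\pm 1$; it counts towards $\mathop{\mathrm{s}}(1,3)$. Finally, replacing $e_1+e_2$ by $(1,1,1)^\intercal$ yields the generic $1$-modular matrix with columns $e_1, e_2, e_3, (1,1,1)^\intercal$, which counts towards $\mathop{\mathrm{g}}(1,3)$.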
Since every generic matrix is necessarily simple and simplicity restricts the general case, we have $$\mathop{\mathrm{g}}(\Delta,r) \leq \mathop{\mathrm{s}}(\Delta,r) \leq \mathop{\mathrm{h}}(\Delta,r) \qquad \textrm{and} \qquad \mathop{\mathrm{s}}(\Delta,2) = \mathop{\mathrm{g}}(\Delta,2).$$ The latter identity holds because for $r=2$ any two non-zero, non-parallel columns are linearly independent, so that every simple matrix with two rows is automatically generic. In particular, the upper bounds on $\mathop{\mathrm{h}}(\Delta,r)$ discussed previously also apply to $\mathop{\mathrm{s}}(\Delta,r)$. Additionally, in [@paatstallknechtwalshxu2022onthecolumn] the inequality $$\mathop{\mathrm{s}}(\Delta,r) \leq \tbinom{r+1}{2} + 80 \Delta^7 \cdot r, \textrm{ for every } \Delta \in \mathbb{Z}_{>0} \textrm{ and for } r \textrm{ sufficiently large},$$ is derived. Their arguments are based on matroid theory and are not applicable to $\mathop{\mathrm{h}}(\Delta,r)$. In conclusion, comparing the generic case, in particular [Theorem 3](#thm:sublinear-bound-generic-heller){reference-type="ref" reference="thm:sublinear-bound-generic-heller"}, to the general or simple setting, we see that $\mathop{\mathrm{g}}(\Delta,r)$ is separated from $\mathop{\mathrm{s}}(\Delta,r)$ and $\mathop{\mathrm{h}}(\Delta,r)$ in terms of asymptotic behavior, as expected. ## Notations {#notations .unnumbered} Here, we fix some notations that we use throughout the following sections. For a positive integer $r$, we write $[r] = \{1,2,\ldots,r\}$. If $M \subseteq \mathbb{R}^r$ is a Lebesgue-measurable set, then we denote its Lebesgue measure, or *volume*, by $\mathop{\mathrm{vol}}(M)$, and its *normalized volume* by $\mathop{\mathrm{Vol}}(M) = r! \mathop{\mathrm{vol}}(M)$. The symbol $\mathop{\mathrm{conv}}(S)$ stands for the *convex hull* of a set $S \subseteq \mathbb{R}^r$, and if $S$ is finite then $\left|S\right|$ denotes its cardinality. For a matrix $A$, we let $\mathop{\mathrm{rk}}(A)$ denote its rank, as used already above. Finally, an integer matrix $U \in \mathbb{Z}^{r \times r}$ with determinant equal to $1$ or $-1$ is called a *unimodular matrix*. # Upper bounds on $\mathop{\mathrm{g}}(\Delta,r)$ {#sect:upper-bounds} ## A linear upper bound using finite fields {#sect:mds} A generic $\Delta$-modular matrix witnessing the lower bound $\mathop{\mathrm{g}}(\Delta,r) \geq r+1$ is given by the columns $e_1,\ldots,e_r,(\Delta,\ldots,\Delta)^\intercal$. In this part we aim to prove [Theorem 2](#thm:linear-upper-bound){reference-type="ref" reference="thm:linear-upper-bound"}, which implies that this lower bound is best possible, if $r \geq 2\Delta - 1$. Our proof of [Theorem 2](#thm:linear-upper-bound){reference-type="ref" reference="thm:linear-upper-bound"} relies on a connection of generic $\Delta$-modularity with so-called *MDS-codes*. The famous MDS conjecture describes possible parameters of an optimal family of codes over the finite field with $q$ elements, denoted here by $\mathbb{F}_q$. It was first stated by Segre in the $1950$s in terms of arcs in finite geometry. For $q=p$ prime the MDS conjecture was proven by Ball [@ball2012sets]: **Theorem 5** (Ball [@ball2012sets Cor. 11.1]). *Let $p$ be a prime and let $A\in\mathbb{F}_p^{r\times n}$ be a matrix with $\mathop{\mathrm{rk}}(A)=r\geq 2$ such that any set of $r$ columns of $A$ is linearly independent. Then $n\leq \max\{r, p\}+1$.* Note that the weaker bound $n\leq r + p - 1$, for arbitrary $p$ and $r$ as above, and the bound $n \leq r+1$, for $p \leq r$, are much easier to obtain (cf. [@ball2012sets Lem. 1.2 & Lem. 1.3]). The case $p > r$ is where the difficulty in [Theorem 5](#thm:ball-mds){reference-type="ref" reference="thm:ball-mds"} lies.
We refer the interested reader to the book [@ball2015finite], an excellent reference on MDS codes and their connections to finite geometry. *Proof of [Theorem 2](#thm:linear-upper-bound){reference-type="ref" reference="thm:linear-upper-bound"}.* Let $A\in\mathbb{Z}^{r\times n}$ be a generic $\Delta$-modular matrix with $\mathop{\mathrm{rk}}(A) = r$. We interpret $A$ as a matrix over $\mathbb F_p$. Let $A_I$ be any $r\times r$ submatrix of $A$. Then $$0 < \left|\det(A_I)\right| \leq \Delta < p$$ as integers, and thus $\det(A_I)$ is nonzero in $\mathbb F_p$. Therefore, any set of $r$ columns of $A$ is linearly independent over $\mathbb F_p$, and by applying [Theorem 5](#thm:ball-mds){reference-type="ref" reference="thm:ball-mds"} we obtain $n\leq \max\{r, p\}+1$, which implies the claim. Using Bertrand's postulate on the existence of a prime $p$ with $\Delta < p < 2\Delta$, for $\Delta \geq 2$, shows that $\mathop{\mathrm{g}}(\Delta,r) \leq \max\{r+1, 2 \Delta\}$, for every $r \geq 2$, and $\mathop{\mathrm{g}}(\Delta,r) = r+1$, if $r \geq 2\Delta - 1$. ◻ In the coding theory setting, generator matrices of so-called *Reed-Solomon codes* meet the upper bound of [Theorem 5](#thm:ball-mds){reference-type="ref" reference="thm:ball-mds"} (cf. [@ball2012sets Cor. 9.2]). However, in general, Reed-Solomon codes do not translate into generic $\Delta$-modular integer matrices. In fact, in the integer setting the bound of [Theorem 2](#thm:linear-upper-bound){reference-type="ref" reference="thm:linear-upper-bound"} is not always tight, which we demonstrate numerically for $r=2$ in [4.4](#sect:numerical-results){reference-type="ref" reference="sect:numerical-results"}, and prove for $r\geq 3$ in the following subsection. Nonetheless, we use Reed-Solomon codes implicitly to derive a construction of order $\Omega(\Delta^{\frac{1}{r-1}})$ in [3.2](#sect:vandermonde-construction){reference-type="ref" reference="sect:vandermonde-construction"}. This construction is stated in terms of Vandermonde matrices which, in fact, generate Reed-Solomon codes. ## A sublinear upper bound for fixed $r \geq 3$ using geometry of numbers In this part, we prove [Theorem 3](#thm:sublinear-bound-generic-heller){reference-type="ref" reference="thm:sublinear-bound-generic-heller"}. The main idea of our argument is the following: Given a generic $\Delta$-modular matrix $A \in \mathbb{Z}^{r \times n}$, considered as a subset of $\mathbb{Z}^r$ with cardinality $n$, we upper bound $n$ by counting the points in $A$ by containment in a family of parallel affine hyperplanes. The subsets of $A$ in each of these hyperplanes consist of points in general position that satisfy a volume condition in terms of the parameter $\Delta$. These point sets in $r-1$ dimensions can again be counted by containment in a family of parallel hyperplanes, except that now, by the general position condition, at most $r-1$ of the points can be contained in any given such hyperplane. For the envisioned bound, we need to find a family of parallel hyperplanes of small cardinality that covers the points in $A$. This is achieved by using the $\Delta$-modularity in the form of a volume-width principle from the geometry of numbers. The sketched argument can be implemented in a series of lemmas. First of all, let $S \subseteq \mathbb{Z}^r$ be a finite set of lattice points. We say that
- $S$ is in *general position*, if no $r+1$ of its points are contained in a common affine hyperplane, and - $S$ has *simplex-volume at most $\Delta$*, if for every $r+1$ points in $S$ the lattice simplex spanned by those points has normalized volume at most $\Delta$. With this notation we define the constant $$\mathop{\mathrm{gv}}(\Delta,r) := \max\left\{ \left|S\right| : S \subseteq \mathbb{Z}^r \textrm{ in general position with simplex-volume at most } \Delta \right\}.$$ A simple observation is the identity $\mathop{\mathrm{gv}}(\Delta,1) = \Delta + 1$, realized by any set of $\Delta+1$ consecutive integers. We will estimate both $\mathop{\mathrm{gv}}(\Delta,r)$ and $\mathop{\mathrm{g}}(\Delta,r)$ by counting points by parallel lattice hyperplanes in $\mathbb{R}^r$. In order to do so, we need the concept of lattice-width. For a non-zero vector $v \in \mathbb{R}^r \setminus \{\mathbf{0}\}$ the *width* of a convex body $K \subseteq \mathbb{R}^r$ in direction $v$ is defined as $$\omega(K,v) = \max_{x \in K} x^\intercal v - \min_{x \in K} x^\intercal v.$$ Minimizing the width over all non-zero lattice directions yields the *lattice-width* $$\omega_L(K) = \min_{v \in \mathbb{Z}^r \setminus \{\mathbf{0}\}} \omega(K,v).$$ If $K$ is a *lattice polytope*, meaning that $K = \mathop{\mathrm{conv}}(S)$, for a finite set $S \subseteq \mathbb{Z}^r$, then for $v \in \mathbb{Z}^r$ with $\gcd(v_1,\ldots,v_r) = 1$, the width of $K$ in direction $v$ can be understood as follows: $\omega(K,v)+1$ is the number of parallel lattice-planes orthogonal to $v$ and which intersect $K$ non-trivially. For any finite set $S \subseteq \mathbb{R}^r$ we write $P_S := \mathop{\mathrm{conv}}(S)$ for the polytope defined as the convex hull of the points in $S$. Coming back to $\Delta$-modularity, we define $$\mathop{\mathrm{w}}(\Delta,r) = \max\left\{ \omega_L(P_A) : A \subseteq \mathbb{Z}^r \textrm{ generic }\Delta\textrm{-modular} \right\}$$ as the maximal lattice-width of a lattice polytope that arises as the convex hull of the columns of a generic $\Delta$-modular matrix with $r$ rows. Likewise, we define $$\mathop{\mathrm{wv}}(\Delta,r) = \max\left\{ \omega_L(P_S) : S \subseteq \mathbb{Z}^r \textrm{ in general position with simplex-volume}\leq\Delta \right\}$$ as the maximal lattice-width of a lattice polytope defined by a subset of $\mathbb{Z}^r$ in general position and of simplex-volume at most $\Delta$. Our motivation to study these quantities is their intimate relationship to the quantities $\mathop{\mathrm{g}}(\Delta,r)$ and $\mathop{\mathrm{gv}}(\Delta,r)$, which is expressed by the following lemma. For its statement, we need the notation $\left|\ell\right|_+ = \max\{1,\left|\ell\right|\}$, for an integer $\ell \in \mathbb{Z}$. **Lemma 1**. *For every $r,\Delta \in \mathbb{N}$, we have* 1. *$\mathop{\mathrm{g}}(\Delta,r) \leq \sup_{a \in \mathbb{Z}} \sum_{\ell = a}^{a+ \mathop{\mathrm{w}}(\Delta,r)} \mathop{\mathrm{gv}}\left(\frac{\Delta}{\left|\ell\right|_+},r-1\right)$ and* 2. *$\mathop{\mathrm{gv}}(\Delta,r) \leq (\mathop{\mathrm{wv}}(\Delta,r) + 1) \cdot r$.* *Proof.* (i): Let $A \subseteq \mathbb{Z}^r$ be a generic $\Delta$-modular set of lattice points. We seek to upper bound $\left|A\right|$. Let $v \in \mathbb{Z}^r$ be a direction that attains the lattice-width of $P_A$, meaning that $\omega_L(P_A) = \omega(P_A,v)$. We may apply a suitable unimodular transformation and assume that $v = e_r$. 
Writing $L = e_r^\perp$ for the hyperplane orthogonal to $e_r$, we find that each point of $A$ is contained in one of the $\omega_L(P_A)+1$ parallel lattice planes $$L + a e_r, L + (a+1) e_r, \ldots, L + b e_r,$$ where $a,b \in \mathbb{Z}$ are such that $a < b$ and $b-a = \omega_L(P_A)$. For $\ell \in \{a,a+1,\ldots,b\}$ we count the points in $A \cap (L + \ell e_r)$. If $\ell=0$, then $\left|A \cap L\right| \leq r-1$, because $A$ is assumed to be generic, so no $r$ points of $A$ can be contained in the same linear subspace. Also note that $r-1 \leq \mathop{\mathrm{gv}}(1,r-1) \leq \mathop{\mathrm{gv}}(\Delta,r-1)$. So, we may assume that $\ell \neq 0$ and we let $C_\ell = A \cap (L + \ell e_r)$ be the set of points in $A$ with last entry equal to $\ell$. Further, let $S_\ell \subseteq \mathbb{Z}^{r-1}$ be the projection of $C_\ell$ that forgets the last coordinate. The set $S_\ell$ is in general position because $A$ is generic. Moreover, let $c_1,\ldots,c_r \in C_\ell$ be a choice of $r$ lattice points in $C_\ell$ with corresponding projections $s_1,\ldots,s_r \in S_\ell$. Then, $$\begin{aligned} \Delta \geq \left|\det(c_1,\ldots,c_r)\right| &= \left|\det\left(\begin{matrix} s_1 & s_2 & \ldots & s_r \\ \ell & \ell & \ldots & \ell \end{matrix}\right)\right| = \left|\ell\right| \cdot \left|\det(s_2-s_1,\ldots,s_r-s_1)\right|\\ &= \left|\ell\right| \cdot \mathop{\mathrm{Vol}}_{r-1}(\mathop{\mathrm{conv}}(\{s_1,\ldots,s_r\})) > 0.\end{aligned}$$ This implies that the set $S_\ell \subseteq \mathbb{Z}^{r-1}$ has simplex-volume at most $\frac{\Delta}{\left|\ell\right|} \leq \Delta$ and thus $\left|C_\ell\right| = \left|S_\ell\right| \leq \mathop{\mathrm{gv}}(\Delta/\left|\ell\right|,r-1)$. In summary we obtain the desired inequality $$\left|A\right| = \sum_{\ell = a}^b \left|A \cap (L + \ell e_r)\right| \leq \sum_{\ell = a}^b \mathop{\mathrm{gv}}\left(\frac{\Delta}{\left|\ell\right|_+},r-1\right) \leq \sup_{a \in \mathbb{Z}} \sum_{\ell = a}^{a+ \mathop{\mathrm{w}}(\Delta,r)} \mathop{\mathrm{gv}}\left(\frac{\Delta}{\left|\ell\right|_+},r-1\right),$$ where the notation $\left|\ell\right|_+$ takes care of the case $\ell=0$. (ii): The claimed upper bound on $\mathop{\mathrm{gv}}(\Delta,r)$ follows from a similar argument as the one in part (i). Let $S \subseteq \mathbb{Z}^r$ be a point set in general position and with simplex-volume at most $\Delta$. Further, let $v \in \mathbb{Z}^r$ be a direction attaining the lattice-width of $P_S$. Since $S$ is in general position, each hyperplane can contain at most $r$ points of $S$, and hence $$\left|S\right| \leq r \cdot (\omega(P_S,v) + 1) = r \cdot (\omega_L(P_S) + 1) \leq r \cdot (\mathop{\mathrm{wv}}(\Delta,r) + 1),$$ and we are done. ◻ Hence, in order to upper bound the function $\mathop{\mathrm{g}}(\Delta,r)$, we need to upper bound both $\mathop{\mathrm{w}}(\Delta,r)$ and $\mathop{\mathrm{wv}}(\Delta,r)$. We do so by applying a volume argument which is based on the intuition that a convex body of bounded volume necessarily has small width in *some* direction. 
This intuition is made precise by the following two inequalities: $$\begin{aligned} \mathop{\mathrm{vol}}(K) &\geq \left(\frac{\pi}{8}\right)^r \frac{r+1}{2^r r!}\, \omega_L(K)^r,\quad \textrm{ for every convex body } K \subseteq \mathbb{R}^r,\label{eqn:makai-conj}\end{aligned}$$ and $$\begin{aligned} \mathop{\mathrm{vol}}(K) &\geq \left(\frac{\pi}{4}\right)^r \frac{1}{r!}\, \omega_L(K)^r,\quad \textrm{ for every convex body } K \subseteq \mathbb{R}^r \textrm{ with } K=-K.\label{eqn:makai-conj-symmetric}\end{aligned}$$ The condition $K=-K$ in [\[eqn:makai-conj-symmetric\]](#eqn:makai-conj-symmetric){reference-type="eqref" reference="eqn:makai-conj-symmetric"} means that $K$ is invariant under reflecting any of its points in the origin. These asymptotic bounds can be derived from the asymptotics on Mahler's famous conjecture on the minimal volume-product of convex bodies with the symmetry in [\[eqn:makai-conj-symmetric\]](#eqn:makai-conj-symmetric){reference-type="eqref" reference="eqn:makai-conj-symmetric"}. Makai Jr. [@makai1978on] conjectures that the factor $(\frac{\pi}{8})^r$ in [\[eqn:makai-conj\]](#eqn:makai-conj){reference-type="eqref" reference="eqn:makai-conj"} and $(\frac{\pi}{4})^r$ in [\[eqn:makai-conj-symmetric\]](#eqn:makai-conj-symmetric){reference-type="eqref" reference="eqn:makai-conj-symmetric"} can be replaced by one. We refer the reader to [@alvarezpaivabalachefftzanev2016isosystolic Thm. II] and [@gonzalezschymura2017ondensities] in which these bounds are described and more background is given. With these tools at hand, we can now determine the asymptotics of the parameters $\mathop{\mathrm{w}}(\Delta,r)$ and $\mathop{\mathrm{wv}}(\Delta,r)$: **Lemma 2**. *For every $\Delta,r \in \mathbb{N}$, we have* 1. *$\mathop{\mathrm{wv}}(\Delta,r) \leq 6 r \Delta^{\frac{1}{r}}$ and* 2. *$\mathop{\mathrm{w}}(\Delta,r) \leq 3 r \Delta^{\frac{1}{r}}$.* *Moreover, we have $\mathop{\mathrm{wv}}(\Delta,r) \in \Theta(\Delta^{\frac{1}{r}})$ and $\mathop{\mathrm{w}}(\Delta,r) \in \Theta(\Delta^{\frac{1}{r}})$.* *Proof.* (i): Let $S \subseteq \mathbb{Z}^r$ be a point set in general position and with simplex-volume bounded by $\Delta$. Let $T \subseteq P_S$ be a simplex of maximal volume contained in $P_S$. It is known that $T$ can be chosen to have all its vertices as vertices of $P_S$, and by a result of Lagarias & Ziegler [@lagariasziegler1991bounds Thm. 3], there is a translate of $(-r)T$ that contains $P_S$. Since we assumed $S$ to have simplex-volume bounded by $\Delta$, we have that $\mathop{\mathrm{Vol}}(T) \leq \Delta$, and thus $r!\mathop{\mathrm{vol}}(P_S) = \mathop{\mathrm{Vol}}(P_S) \leq \mathop{\mathrm{Vol}}((-r)T) = r^r \mathop{\mathrm{Vol}}(T) \leq r^r \Delta$. Using the asymptotic result [\[eqn:makai-conj\]](#eqn:makai-conj){reference-type="eqref" reference="eqn:makai-conj"} for the lattice polytope $P_S$, yields $$\omega_L(P_S)^r \leq \left(\frac{8}{\pi}\right)^r \frac{2^r r!}{r+1} \mathop{\mathrm{vol}}(P_S) \leq \left(\frac{8}{\pi}\right)^r \frac{2^r r^r}{r+1} \Delta,$$ and thus $$\omega_L(P_S) \leq \frac{16}{\pi} \frac{r}{(r+1)^{\frac{1}{r}}} \Delta^{\frac{1}{r}} \leq 6 r \Delta^{\frac{1}{r}}.$$ (ii): Let $A \subseteq \mathbb{Z}^r$ be a generic $\Delta$-modular set of lattice points. Again we argue similarly as in part (i) and aim to estimate the volume of $P_A$. In contrast to the situation of sets in general position, we need to make a detour via the symmetric polytope $Q_A := P_{A \cup (-A)}$ spanned by the points in $A$ and their negatives. 
Clearly, $P_A \subseteq Q_A$ and thus $\omega_L(P_A) \leq \omega_L(Q_A)$. Now, let $C \subseteq Q_A$ be a crosspolytope of maximal volume contained in $Q_A$. Again we can assume that $C$ has all its vertices as vertices of $Q_A$, and in particular, we can write $C = \mathop{\mathrm{conv}}\{\pm a_1,\ldots,\pm a_r\}$ for some linearly independent $a_1,\ldots,a_r \in A$. Note that the volume of $C$ is given by $\mathop{\mathrm{vol}}(C) = \frac{2^r}{r!}\left|\det(a_1,\ldots,a_r)\right|$. Consider the parallelepiped $W = \left\{\sum_{i=1}^r \gamma_i a_i : -1 \leq \gamma_i \leq 1, \textrm{ for } 1 \leq i \leq r \right\}$. We claim that $Q_A \subseteq W$. Indeed, if we assume the contrary, then we find a vertex $v$ of $Q_A$, which will be a point of $A$ or $-A$, such that $v \notin W$. As the $a_1,\ldots,a_r$ are linearly independent, there are unique coefficients $\beta_1,\ldots,\beta_r \in \mathbb{R}$ such that $v = \beta_1 a_1 + \ldots + \beta_r a_r$, and there must be some index $j$ such that $\left|\beta_j\right| > 1$. As a consequence we get $$\begin{aligned} \left|\det(a_1,\ldots,a_{j-1},v,a_{j+1},\ldots,a_r)\right| &= \left|\det(a_1,\ldots,a_{j-1},\beta_j a_j,a_{j+1},\ldots,a_r)\right| \\ &= \left|\beta_j\right| \left|\det(a_1,\ldots,a_{j-1},a_j,a_{j+1},\ldots,a_r)\right| \\ &> \left|\det(a_1,\ldots,a_r)\right|.\end{aligned}$$ This means that the crosspolytope $\mathop{\mathrm{conv}}\{\pm a_1,\ldots,\pm a_{j-1},\pm v,\pm a_{j+1},\ldots,\pm a_r\} \subseteq Q_A$ has larger volume than $C$, contradicting the maximality of the latter. Now, since $A$ is $\Delta$-modular, we get $\mathop{\mathrm{vol}}(Q_A) \leq \mathop{\mathrm{vol}}(W) = 2^r \left|\det(a_1,\ldots,a_r)\right| \leq 2^r \Delta$. Applying the inequality [\[eqn:makai-conj-symmetric\]](#eqn:makai-conj-symmetric){reference-type="eqref" reference="eqn:makai-conj-symmetric"} for the symmetric lattice polytope $Q_A$, yields $$\omega_L(Q_A)^r \leq \left(\frac{4}{\pi}\right)^r r! \mathop{\mathrm{vol}}(Q_A) \leq \left(\frac{4}{\pi}\right)^r 2^r r! \Delta,$$ and thus $$\omega_L(P_A) \leq \omega_L(Q_A) \leq \frac{8}{\pi} (r!)^{\frac{1}{r}} \Delta^{\frac{1}{r}} \leq 3 r \Delta^{\frac{1}{r}}.$$ This finishes the proof of (ii). To see why the previously derived upper bounds on $\mathop{\mathrm{wv}}(\Delta,r)$ and $\mathop{\mathrm{w}}(\Delta,r)$ are asymptotically best possible, we employ a scaling argument. We claim that for every $\ell \in \mathbb{Z}_{>0}$, $$\begin{aligned} \ell \cdot \mathop{\mathrm{wv}}(\Delta,r) \leq \mathop{\mathrm{wv}}(\ell^r \Delta,r) \qquad \textrm{ and } \qquad \ell \cdot \mathop{\mathrm{w}}(\Delta,r) \leq \mathop{\mathrm{w}}(\ell^r \Delta,r).\label{eqn:scaling-wv-and-w}\end{aligned}$$ We give the argument for $\mathop{\mathrm{wv}}(\Delta,r)$, since the one for $\mathop{\mathrm{w}}(\Delta,r)$ is analogous. Let $S \subseteq \mathbb{Z}^r$ be a set in general position with simplex-volume at most $\Delta$ and such that $\omega_L(P_S) = \mathop{\mathrm{wv}}(\Delta,r)$. Now, $\ell S = \left\{ \ell s : s \in S \right\} \subseteq \mathbb{Z}^r$ is also in general position and has simplex-volume at most $\ell^r \Delta$. Moreover, $P_{\ell S} = \mathop{\mathrm{conv}}(\ell S) = \ell \mathop{\mathrm{conv}}(S) = \ell P_S$, and thus $$\mathop{\mathrm{wv}}(\ell^r \Delta,r) \geq \omega_L(P_{\ell S}) = \omega_L(\ell P_S) = \ell \omega_L(P_S) = \ell \mathop{\mathrm{wv}}(\Delta,r).$$ Now, assume to the contrary that, say $\mathop{\mathrm{wv}}(\Delta,r) \in o(\Delta^{\frac{1}{r}})$. 
In view of [\[eqn:scaling-wv-and-w\]](#eqn:scaling-wv-and-w){reference-type="eqref" reference="eqn:scaling-wv-and-w"}, this implies that $\ell \mathop{\mathrm{wv}}(\Delta,r) \in o((\ell^r \Delta)^{\frac{1}{r}}) = o(\ell) \cdot o(\Delta^{\frac{1}{r}})$, which is a contradiction for $\ell \to \infty$. ◻ **Remark 1**. *The previous arguments for the upper bound on $\mathop{\mathrm{w}}(\Delta,r)$ do not use the genericity of the $\Delta$-modular subsets $A \subseteq \mathbb{Z}^r$ defining this quantity. The argument for $\mathop{\mathrm{w}}(\Delta,r) \in \Omega(\Delta^{\frac{1}{r}})$ works in the generic case as well as in the non-generic situation. However, it is not clear whether $\mathop{\mathrm{w}}(\Delta,r)$ is actually given as the maximal lattice-width of a polytope $P_A$, for some $\Delta$-modular set $A \subseteq \mathbb{Z}^r$.* We proceed with an elementary estimate on the type of sums occurring in [Lemma 1](#lem:slice-reduction){reference-type="ref" reference="lem:slice-reduction"}. **Lemma 3**. *For all integers $a, n, r$ with $n \geq 0$ and $r \geq 2$, we have $$\sum_{\ell=a}^{a+n} \left|\ell\right|_+^{-\frac{1}{r}} \leq 1+4 n ^{\frac{r-1}{r}}.$$* *Proof.* As $\ell \mapsto 1/\left|\ell\right|_+$ is an even function and decreases for $\left|\ell\right|\to\infty$, the sum is maximal if the interval $[a, a+n]$ is symmetric around 0. Therefore, we have $$\sum_{\ell=a}^{a+n} \left|\ell\right|_+^{-\frac{1}{r}} \leq 1 + 2 \sum_{\ell=1}^{\left\lceil n/2 \right\rceil} \left|\ell\right|_+^{-\frac{1}{r}} = 1 + 2 \sum_{\ell=1}^{\left\lceil n/2 \right\rceil}\ell^{-\frac{1}{r}}.$$ The sum on the right hand side is a lower Riemann sum for the integral $\int_0^{\left\lceil n/2 \right\rceil} x^{-1/r} \mathop{}\!\mathrm{d}x$. Hence, we obtain $$\sum_{\ell=1}^{\left\lceil n/2 \right\rceil} \ell^{-\frac{1}{r}} \leq \int_0^{\left\lceil n/2 \right\rceil} x^{-1/r} \mathop{}\!\mathrm{d}x = \frac{r}{r-1} \left\lceil \frac{n}{2} \right\rceil^{\frac{r-1}{r}} \leq 2 n ^{\frac{r-1}{r}},$$ which implies the claim. ◻ We are now prepared to prove the main result of this section. *Proof of [Theorem 3](#thm:sublinear-bound-generic-heller){reference-type="ref" reference="thm:sublinear-bound-generic-heller"}.* We just combine [Lemma 1](#lem:slice-reduction){reference-type="ref" reference="lem:slice-reduction"} with [\[lem:w-delta-asymptotics,lem:riemann-sum\]](#lem:w-delta-asymptotics,lem:riemann-sum){reference-type="ref" reference="lem:w-delta-asymptotics,lem:riemann-sum"}.
More precisely, we get for every $r \geq 3$ that $$\begin{aligned} \mathop{\mathrm{g}}(\Delta,r) &\leq \sup_{a \in \mathbb{Z}} \sum_{\ell = a}^{a+ \mathop{\mathrm{w}}(\Delta,r)} \mathop{\mathrm{gv}}\left(\frac{\Delta}{\left|\ell\right|_+},r-1\right) \\ &\leq \sup_{a\in\mathbb{Z}} \sum_{\ell = a}^{a+ \mathop{\mathrm{w}}(\Delta,r)} \left(\mathop{\mathrm{wv}}\left(\frac{\Delta}{\left|\ell\right|_+},r-1\right) + 1\right) \cdot (r-1) \\ &\leq (r-1)\left(\mathop{\mathrm{w}}(\Delta, r)+1 + \sup_{a\in\mathbb{Z}}\sum_{\ell=a}^{a+\mathop{\mathrm{w}}(\Delta, r)} 6 (r-1) \left(\frac{\Delta}{\left|\ell\right|_+}\right)^{\frac{1}{r-1}}\right) \\ &\leq (r-1)\left(\mathop{\mathrm{w}}(\Delta, r)+1 + 6(r-1)\Delta^{\frac{1}{r-1}} \sup_{a\in\mathbb{Z}}\sum_{\ell=a}^{a+\mathop{\mathrm{w}}(\Delta, r)} \left(\frac{1}{\left|\ell\right|_+}\right)^{\frac{1}{r-1}}\right) \\ &\leq (r-1)\left(\mathop{\mathrm{w}}(\Delta, r)+1 + 6(r-1)\Delta^{\frac{1}{r-1}} \left(1+4 \mathop{\mathrm{w}}(\Delta, r)^{\frac{r-2}{r-1}}\right)\right) \\ &\leq (r-1)\left(3r\Delta^{\frac{1}{r}}+1 + 6(r-1)\Delta^{\frac{1}{r-1}} \left(1+4 \left(3r\Delta^{\frac{1}{r}}\right)^{\frac{r-2}{r-1}}\right)\right) \\ &\leq r \cdot 10r \Delta^{\frac{1}{r-1}} \cdot 13 r \Delta^{\frac{r-2}{r(r-1)}}\\ &= 130\, r^3 \Delta^{\frac{1}{r-1}+\frac{r-2}{r(r-1)}} = 130\, r^3 \Delta^{\frac{2}{r}}.\qedhere\end{aligned}$$ ◻ Note that we didn't optimize the constant in the bound in the above proof, but rather aimed for a simple derivation. # Generic $\Delta$-modular matrices with many columns {#sect:lower-bounds} In this section, we discuss to which extent the upper bounds in [Theorem 2](#thm:linear-upper-bound){reference-type="ref" reference="thm:linear-upper-bound"} and [Theorem 3](#thm:sublinear-bound-generic-heller){reference-type="ref" reference="thm:sublinear-bound-generic-heller"} are best possible. We find that for $r=2$ rows, there are infinitely many values of $\Delta$ for which the bound in [Theorem 2](#thm:linear-upper-bound){reference-type="ref" reference="thm:linear-upper-bound"} is tight, while for $r \geq 3$, our best lower bound construction on $\mathop{\mathrm{g}}(\Delta,r)$ is of order $\Omega(\Delta^{\frac{1}{r-1}})$ in contrast to the bound $\mathcal{O}(\Delta^{\frac{2}{r}})$ in [Theorem 3](#thm:sublinear-bound-generic-heller){reference-type="ref" reference="thm:sublinear-bound-generic-heller"}. ## Constructions for two rows {#sect:constr-two-rows} By definition, we have $\mathop{\mathrm{s}}(\Delta,2) = \mathop{\mathrm{g}}(\Delta,2)$, so that in view of [\[eqn:s-facet-number\]](#eqn:s-facet-number){reference-type="eqref" reference="eqn:s-facet-number"} the considerations in this part concern the maximum number of irredundant inequalities in a $\Delta$-modular integer program with two variables. We keep the focus on $\mathop{\mathrm{g}}(\Delta,2)$ though, and use this notation for the same number throughout. If $r=2$, then the upper bound in [Theorem 2](#thm:linear-upper-bound){reference-type="ref" reference="thm:linear-upper-bound"} reads $\mathop{\mathrm{g}}(\Delta,2) \leq p+1$, where $p$ is the smallest prime larger than $\Delta$. In the following, we describe three infinite families for which this upper bound is attained: 1. [\[itm:F1\]]{#itm:F1 label="itm:F1"} $\mathop{\mathrm{g}}(\Delta,2) = \Delta + 2$, if $\Delta$ is even and $\Delta+1$ is prime 2. [\[itm:F2\]]{#itm:F2 label="itm:F2"} $\mathop{\mathrm{g}}(\Delta,2) = \Delta + 3$, if $\Delta \geq 3$ is odd and $\Delta + 2$ is prime 3. 
[\[itm:F3\]]{#itm:F3 label="itm:F3"} $\mathop{\mathrm{g}}(\Delta,2) = \Delta + 4$, if $\Delta \geq 4$ is even, $\Delta = 2 \bmod 3$, and $\Delta + 3$ is prime Note that there are values $\Delta$ for which $\mathop{\mathrm{g}}(\Delta, 2)$ does not meet the upper bound $p+1$. Indeed, the smallest $\Delta \geq 2$ that does not belong to any of the families [\[itm:F1\]](#itm:F1){reference-type="ref" reference="itm:F1"}--[\[itm:F3\]](#itm:F3){reference-type="ref" reference="itm:F3"} is $\Delta = 7$, and our computational experiments, that we describe in [4.4](#sect:numerical-results){reference-type="ref" reference="sect:numerical-results"} below, show that $\mathop{\mathrm{g}}(7,2) = 10$, while $p+1 = 12$ in this case (see [2](#tab:gDelta2Values){reference-type="ref" reference="tab:gDelta2Values"}). Interestingly, our computational experiments in [4.4](#sect:numerical-results){reference-type="ref" reference="sect:numerical-results"} also show that $\mathop{\mathrm{g}}(\Delta,2) \leq \Delta + 6$, for every $\Delta \leq 450$, so that the following problem may have a positive answer: **Conjecture 2**. *Is there a constant $c>0$, such that $\mathop{\mathrm{g}}(\Delta,2) \leq \Delta + c$, for all $\Delta > 0$?* The weaker question whether $\limsup_{\Delta \to \infty} \frac{\mathop{\mathrm{g}}(\Delta,2)}{\Delta} = 1$ can be answered in the affirmative by using any of the families [\[itm:F1\]](#itm:F1){reference-type="ref" reference="itm:F1"}--[\[itm:F3\]](#itm:F3){reference-type="ref" reference="itm:F3"}, and in view of the results in [@bakerharmanpintz2001thedifference]. Therein it is shown that for sufficiently large $x$, the interval $[x - x^{0.525},x]$ contains at least one prime, which implies $\mathop{\mathrm{g}}(\Delta,2) \leq \Delta + \mathcal{O}(\Delta^{0.525})$. We now describe matrices that attain the claimed identities for the families [\[itm:F1\]](#itm:F1){reference-type="ref" reference="itm:F1"}--[\[itm:F3\]](#itm:F3){reference-type="ref" reference="itm:F3"}. ### Matrices for [\[itm:F1\]](#itm:F1){reference-type="ref" reference="itm:F1"} and [\[itm:F2\]](#itm:F2){reference-type="ref" reference="itm:F2"} {#matrices-for-itmf1-and-itmf2 .unnumbered} A construction showing $\mathop{\mathrm{g}}(\Delta,2) \geq \Delta + 2$, for *every* $\Delta \in \mathbb{Z}_{>0}$, is mentioned in [@paatstallknechtwalshxu2022onthecolumn]. Indeed, the matrix with $\Delta+2$ columns given by $$\begin{aligned} \begin{pmatrix} 1 & 0 & 1 & 1 & \cdots & 1 \\ 0 & 1 & 1 & 2 & \cdots & \Delta \end{pmatrix} \label{eqn:Delta+2Ex}\end{aligned}$$ is generic $\Delta$-modular, and thus attains $\mathop{\mathrm{g}}(\Delta,2)$ for the values $\Delta$ belonging to family [\[itm:F1\]](#itm:F1){reference-type="ref" reference="itm:F1"}. If $\Delta$ is odd, we can extend [\[eqn:Delta+2Ex\]](#eqn:Delta+2Ex){reference-type="eqref" reference="eqn:Delta+2Ex"} and obtain a matrix with $\Delta+3$ columns given by $$\begin{aligned} \begin{pmatrix} 1 & 0 & 1 & 1 & \cdots & 1 & 2 \\ 0 & 1 & 1 & 2 & \cdots & \Delta & \Delta \end{pmatrix}, \label{eqn:Delta+3Ex}\end{aligned}$$ which is again generic $\Delta$-modular. This shows that $\mathop{\mathrm{g}}(\Delta, 2)\geq \Delta+3$, for *every* odd $\Delta$ and meets the upper bound for the family [\[itm:F2\]](#itm:F2){reference-type="ref" reference="itm:F2"}. ### Matrices for [\[itm:F3\]](#itm:F3){reference-type="ref" reference="itm:F3"} {#matrices-for-itmf3 .unnumbered} The construction for this family is more involved. 
For $m \in \mathbb{N}$ and integer vectors $a,b \in \mathbb{Z}^m$ with $a_j \leq b_j$, for all $j = 1,\ldots,m$, we define the matrix $$M(a,b) := \left( \begin{array}{cccccccccccccc} 0 & 1 & \cdots & 1 & 2 & \cdots & 2 & 3 & \cdots & 3 & \cdots & m & \cdots & m \\ 1 & a_1 & \cdots & b_1 & a_2 & \cdots & b_2 & a_3 & \cdots & b_3 & \cdots & a_m & \cdots & b_m \end{array} \right),$$ where, for $j = 1, \ldots, m$, the dots between $a_j$ and $b_j$ in the second row indicate the list of all integers $k$ such that $a_j \leq k \leq b_j$ and $\gcd(j,k) = 1$. So, by construction the columns of $M(a,b)$ are pairwise non-parallel and thus the matrix is generic. Observe that the matrices [\[eqn:Delta+2Ex\]](#eqn:Delta+2Ex){reference-type="eqref" reference="eqn:Delta+2Ex"} and [\[eqn:Delta+3Ex\]](#eqn:Delta+3Ex){reference-type="eqref" reference="eqn:Delta+3Ex"} also have this form. Indeed, up to column permutations, [\[eqn:Delta+2Ex\]](#eqn:Delta+2Ex){reference-type="eqref" reference="eqn:Delta+2Ex"} equals $M(0,\Delta)$, whereas [\[eqn:Delta+3Ex\]](#eqn:Delta+3Ex){reference-type="eqref" reference="eqn:Delta+3Ex"} equals $M(\binom{0}{\Delta},\binom{\Delta}{\Delta})$. With this notation, our task is to find vectors $a,b \in \mathbb{Z}^m$ such that $M(a,b)$ has many columns while the absolute value of the minors of size two remain bounded. **Proposition 1**. *Let $\Delta \geq 4$ be even and such that $\Delta = 2 \bmod 3$. For the following vectors $a,b \in \mathbb{Z}^3$ the matrix $M(a,b)$ is $\Delta$-modular and has $\Delta + 4$ columns:* 1. *If $\Delta = 12 s + 2$, for some $s \in \mathbb{N}$, let $$a = (0,4 s + 1, 9 s + 1)^\intercal \quad\textrm{ and }\quad b = (7 s + 1, 10 s + 1, 12 s + 2)^\intercal.$$* 2. *If $\Delta = 12 s + 8$, for some $s \in \mathbb{N}$, let $$a = (0,4 s + 3, 9 s + 7)^\intercal \quad\textrm{ and }\quad b = (7 s + 5, 10 s + 7, 12 s + 8)^\intercal.$$* *Proof.* (i): We first count the columns of $$M(a,b) = \begin{pmatrix} 0 & 1 & \cdots & 1 & 2 & \cdots & 2 & 3 & \cdots & 3 \\ 1 & 0 & \cdots & 7s+1 & 4s+1 & \cdots & 10s+1 & 9s+1 & \cdots & 12s+2 \end{pmatrix}.$$ There are $7s+2$ columns of the form $(1,k)^\intercal$, there are $3s+1$ columns of the form $(2,k)^\intercal$ corresponding to the odd numbers between $4s+1$ and $10s+1$, and there are $2s+2$ columns of the form $(3,k)^\intercal$ corresponding to the numbers between $9s+1$ and $12s+2$ that are not divisible by $3$. Together with the column $(0,1)^\intercal$ this gives in total $12s + 6 = \Delta + 4$ columns. We now prove that all the minors of size two in $M(a,b)$ are bounded by $\Delta$ in absolute value. Clearly all determinants involving $(0,1)^\intercal$ satisfy the constraint. 
For all other determinants we may check the upper and lower bound separately: $$\begin{aligned} \left|\det\begin{pmatrix} 1 & 1 \\ k_1 & k_1' \end{pmatrix}\right| &= \left|k_1'-k_1\right| \leq 7s+1 \leq \Delta \\ \left|\det\begin{pmatrix} 2 & 2 \\ k_2 & k_2' \end{pmatrix}\right| &= \left|2(k_2'-k_2)\right| \leq 2\,(10s+1-(4s+1)) = 12s \leq \Delta \\ \left|\det\begin{pmatrix} 3 & 3 \\ k_3 & k_3' \end{pmatrix}\right| &= \left|3(k_3'-k_3)\right| \leq 3\,(12s+2-(9s+1)) = 9s+3 \leq \Delta \\ \det\begin{pmatrix} 1 & 2 \\ k_1 & k_2 \end{pmatrix} &=k_2-2k_1 \leq 10s+1-2 \cdot 0 = 10s+1 \leq \Delta \\ \det\begin{pmatrix} 1 & 2 \\ k_1 & k_2 \end{pmatrix} &=k_2-2k_1 \geq 4s+1-2\,(7s+1) = -10s-1 \geq -\Delta \\ \det\begin{pmatrix} 1 & 3 \\ k_1 & k_3 \end{pmatrix} &=k_3-3k_1 \leq 12s+2-3 \cdot 0 = 12s+2 \leq \Delta \\ \det\begin{pmatrix} 1 & 3 \\ k_1 & k_3 \end{pmatrix} &=k_3-3k_1 \geq 9s+1-3\,(7s+1) = -12s-2 \geq -\Delta \\ \det\begin{pmatrix} 2 & 3 \\ k_2 & k_3 \end{pmatrix} &=2k_3-3k_2 \leq 2\,(12s+2)-3\,(4s+1) = 12s+1 \leq \Delta \\ \det\begin{pmatrix} 2 & 3 \\ k_2 & k_3 \end{pmatrix} &=2k_3-3k_2 \geq 2\,(9s+1)-3\,(10s+1) = -12s-1 \geq -\Delta. \\\end{aligned}$$ (ii): Here, $\Delta = 12 s + 8$, for some non-negative integer $s$, and we consider the matrix $$M(a,b) = \begin{pmatrix} 0 & 1 & \cdots & 1 & 2 & \cdots & 2 & 3 & \cdots & 3 \\ 1 & 0 & \cdots & 7s+5 & 4s+3 & \cdots & 10s+7 & 9s+7 & \cdots & 12s+8 \end{pmatrix}.$$ Similarly to part (i), we find that there are $7s+6$ columns of the form $(1,k)^\intercal$, there are $3s+3$ columns of the form $(2,k)^\intercal$, and there are $2s+2$ columns of the form $(3,k)^\intercal$. Together with the column $(0,1)^\intercal$ this gives a total of $12s + 12 = \Delta + 4$ columns. Showing that $M(a,b)$ is $\Delta$-modular is analogous to part (i). ◻ Since the conditions on $\Delta$ in [Proposition 1](#prop:r2-construction-family3){reference-type="ref" reference="prop:r2-construction-family3"} imply that either $\Delta = 2 \bmod 12$ or $\Delta = 8 \bmod 12$, the matrices therein explain family [\[itm:F3\]](#itm:F3){reference-type="ref" reference="itm:F3"}. ## A Vandermonde-type construction for an arbitrary number of rows {#sect:vandermonde-construction} The following bound subsumes the linear lower bound $\mathop{\mathrm{g}}(\Delta,2) \geq \Delta$, that is provided by the last $\Delta$ columns of the matrix [\[eqn:Delta+2Ex\]](#eqn:Delta+2Ex){reference-type="eqref" reference="eqn:Delta+2Ex"}, to a bound of order $\mathop{\mathrm{g}}(\Delta,r) \in \Omega(\Delta^{\frac{1}{r-1}})$, for every dimension $r \geq 2$. **Proposition 2**. *For every $r \geq 2$ and any prime number $p$ with $p \geq r$, we have $$\mathop{\mathrm{g}}\left(\lceil (r-1)^{\frac{r-1}{2}} \rceil \cdot (p-1)^{r-1} , r \right) \geq p.$$* *Proof.* For an integer $z \in \mathbb{Z}$, we let $[z]_p = z \bmod p$ be the representative in $\{1,2,\ldots,p\}$. With this notion, we consider the modular moment curve $\nu : \mathbb{Z}\to \{1,2,\ldots,p\}^{r-1}$ defined by $\nu(t) = ([t]_p, [t^2]_p , \ldots , [t^{r-1}]_p)$. The $p$ points $\nu(1),\nu(2),\ldots,\nu(p) \in \mathbb{R}^{r-1}$ are in general position, meaning that no $r$ of them are contained in the same hyperplane in $\mathbb{R}^{r-1}$ (cf. [@brassmoserpach2005researchproblems §10.1]). Lifting these points to $r$ dimensions and arranging the $p$ vectors $(1,\nu(1)),\ldots,(1,\nu(p))$ as columns of a matrix that we denote by $A_{p,r} \in \mathbb{Z}^{r \times p}$, we find that $A_{p,r}$ is generic with $\mathop{\mathrm{rk}}(A_{p,r}) = r$. 
Further, by the Hadamard inequality for the determinant we have $$\begin{aligned} \left|\det\left( \begin{matrix} 1 & \ldots & 1 \\ \nu(\ell_1) & \ldots & \nu(\ell_r) \end{matrix} \right)\right| &= \left|\det\left( \begin{matrix} 1 & \ldots & 1 \\ [\ell_1]_p & \ldots & [\ell_r]_p \\ \vdots & \ddots & \vdots \\ [\ell_1^{r-1}]_p & \ldots & [\ell_r^{r-1}]_p \end{matrix} \right)\right| \\ &= \left|\det\left( \begin{matrix} 1 & 1 & \ldots & 1 \\ 0 & [\ell_2]_p - [\ell_1]_p & \ldots & [\ell_r]_p - [\ell_1]_p \\ \vdots & \vdots & \ddots & \vdots \\ 0 & [\ell_2^{r-1}]_p - [\ell_1^{r-1}]_p & \ldots & [\ell_r^{r-1}]_p - [\ell_1^{r-1}]_p \end{matrix} \right)\right| \\ &\leq \prod_{i=2}^r \left\| \left( [\ell_i]_p - [\ell_1]_p,\ldots,[\ell_i^{r-1}]_p - [\ell_1^{r-1}]_p \right) \right\| \\ &\leq \left( \sqrt{r-1} \cdot (p-1)\right)^{r-1} ,\end{aligned}$$ for every $1 \leq \ell_1 < \ldots < \ell_r \leq p$. In other words, the matrix $A_{p,r}$ is $\Delta$-modular, where $\Delta = \lceil (r-1)^{\frac{r-1}{2}} \rceil \cdot (p-1)^{r-1}$, and the claimed bound follows. ◻ Given the upper bound on $\mathop{\mathrm{g}}(\Delta,r)$ of order $\mathcal{O}(\Delta^{\frac{2}{r}})$ in [Theorem 3](#thm:sublinear-bound-generic-heller){reference-type="ref" reference="thm:sublinear-bound-generic-heller"}, and the lower bound of order $\Omega(\Delta^{\frac{1}{r-1}})$ above, the question about the precise asymptotic behavior of $\mathop{\mathrm{g}}(\Delta,r)$, for fixed $r$, remains. We tend to believe that the lower bound is the correct order of growth: **Question 1**. *For fixed $r \geq 3$, is it true that $\mathop{\mathrm{g}}(\Delta,r) \in \Theta(\Delta^{\frac{1}{r-1}})$?* # A computational approach for small parameters {#sect:computational-approach} Complementing the theoretical results on $\mathop{\mathrm{g}}(\Delta,r)$ concerning upper bounds in [2](#sect:upper-bounds){reference-type="ref" reference="sect:upper-bounds"} and lower bounds in [3](#sect:lower-bounds){reference-type="ref" reference="sect:lower-bounds"}, we discuss in the following a computational approach to algorithmically determine the precise values of $\mathop{\mathrm{g}}(\Delta,r)$ for small parameters $\Delta$ and $r$. A related yet different computational approach for the "non-generic" numbers $\mathop{\mathrm{h}}(\Delta,r)$ has been described in [@averkovschymura2023onthemaximal] (see the discussion in [4.4.3](#sect:non-generic-computations){reference-type="ref" reference="sect:non-generic-computations"}). We begin the section with some elementary properties of generic $\Delta$-modular matrices that take the symmetries of the problem into account, in particular its invariance under unimodular transformations. Based on these properties we describe our algorithm in [4.2](#sect:algorithm-description){reference-type="ref" reference="sect:algorithm-description"} and investigate simple reductions that speed up the implementation considerably. Finally, in [4.4](#sect:numerical-results){reference-type="ref" reference="sect:numerical-results"}, we report on the numerical results of our computations and we aim to explain the findings at least for the rank two case as best as possible. ## Some properties of generic $\Delta$-modular matrices We start with a linear algebra lemma about integer matrices. **Lemma 4**. *Let $A \in \mathbb{Z}^{r \times r}$ be invertible over $\mathbb{Q}$ and let $S \in \mathbb{Q}^{r \times m}$. 
If $AS$ is an integer matrix, then $\det(A) S$ is an integer matrix.* *Proof.* Let $\mathop{\mathrm{adj}}(A)$ be the adjugate matrix of $A$, which is an integer matrix because $A$ is. Then $\mathop{\mathrm{adj}}(A)A = \det(A) I$, and we write $C=AS$. By assumption, $C$ is an integer matrix. Multiplying both sides from the left by $\mathop{\mathrm{adj}}(A)$ we get $\mathop{\mathrm{adj}}(A) C= \mathop{\mathrm{adj}}(A) A S = \det(A) S$, which is an integer matrix because $\mathop{\mathrm{adj}}(A)$ and $C$ are. ◻ **Definition 1**. *Let $A \in \mathbb{Q}^{r\times n}$ be a rational matrix with $\mathop{\mathrm{rk}}(A) = r$. We call $A$* 1. **$\Delta$-bound*, if for $1\leq m\leq r$ every $m\times m$-minor is bounded by $\Delta^m$ in absolute value,* 2. **totally generic*, if for $1\leq m\leq r$ every $m\times m$-minor is non-zero.* If $A$ is a totally generic $\Delta$-bound matrix, then in particular all entries of $A$ are non-zero and bounded by $\Delta$ in absolute value. An equivalent condition to being $\Delta$-bound is that $A/\Delta$ is totally $1$-submodular. As a side note, Hadamard's inequality shows that if the entries of a matrix are bounded by $\beta$ in absolute value, then each $m\times m$-minor of that matrix is bounded by $m^{m/2} \beta^m$ in absolute value. The $\Delta$-bound condition thus removes the factor $m^{m/2}$ from this general estimate. A direct observation is that total $\Delta$-submodularity and total genericity reduce to $\Delta$-submodularity and genericity, respectively, if we augment a matrix by the identity matrix. **Proposition 3**. *Let $S \in \mathbb{Q}^{r \times n}$ have rank $r$ and let $I \in \mathbb{Z}^{r \times r}$ be the identity matrix. Then, for every $\Delta \in \mathbb{Z}_{>0}$, we have that $$S \textrm{ is totally } \Delta\textrm{-submodular} \quad \Longleftrightarrow \quad (I,S) \textrm{ is } \Delta\textrm{-submodular},$$ and $$S \textrm{ is totally generic } \quad \Longleftrightarrow \quad (I,S) \textrm{ is generic}.$$* We now develop the key lemma that is the basis of our algorithm to compute $\mathop{\mathrm{g}}(\Delta,r)$, for any given parameters $\Delta,r$, in the next section. Note that our arguments are similar to those establishing the key lemma in [@averkovschymura2023onthemaximal Lem. 1]. The lemma relies on the notion of a normal form for integer matrices under unimodular transformations. We only need the concept for square matrices here. For a matrix $A \in \mathbb{Z}^{r \times r}$ with $\det(A) \neq 0$, there exists a uniquely defined unimodular matrix $U \in \mathbb{Z}^{r \times r}$ such that $A = UH$ and $H \in \mathbb{Z}^{r \times r}$ is an upper triangular matrix with positive diagonal entries $h_{11},\ldots,h_{rr}$ and off-diagonal entries $0 \leq h_{ij} < h_{jj}$, for all $1 \leq j \leq r$ and $1 \leq i < j$. The matrix $H$ is called the *Hermite normal form* of $A$ (see [@cohen1993acourse Ch. 2] for more background and computational approaches to efficiently compute $H$ from a given matrix $A$). **Lemma 5**. *Let $D\in\mathbb{Z}^{r\times(r+L)}$ be a generic $\Delta$-modular matrix with $\mathop{\mathrm{rk}}(D)=r$. Then there exists a totally generic $\Delta$-bound matrix $C\in\mathbb{Z}^{r\times L}$. Further, each column of $C$ is a solution to a linear system of equations $Ax=0 \bmod \Delta$ for an integer matrix $A\in \mathbb{Z}^{r\times r}$ in Hermite normal form satisfying $\det(A)=\Delta$.* *Proof.* Since $D$ is $\Delta$-modular, there exists an $r\times r$ submatrix of $D$ with determinant $\pm \Delta$.
After rearranging the columns of $D$ and applying row operations (where we only scale by $\pm 1$), we can assume that $D$ has the form $$\begin{aligned} D = (A, u_1, u_2, \ldots, u_L),\label{eqn:D-after-reduction}\end{aligned}$$ with columns $u_i\in \mathbb{Z}^r$ for $i=1, \ldots, L$ and $A$ being in Hermite normal form with determinant $\Delta$. As $\det(A)=\Delta\neq 0$, $A$ has full rank. Therefore each column $u_i$ is a linear combination of the columns of $A$, in other words for each $1\leq i\leq L$ there exists a rational vector $s_i\in\mathbb{Q}^r$ with $As_i = u_i$. Let $S=(s_1, \ldots, s_L)$ and $C=\Delta S$. By [Lemma 4](#lem:det-times-rational){reference-type="ref" reference="lem:det-times-rational"}, $C$ is an integer matrix. Also note that as $As_i=u_i$, we get that $Ac_i = A(\Delta s_i) = \Delta u_i = 0 \bmod \Delta$ for the columns $c_1,\ldots,c_L$ of $C$. It remains to show that $C \in \mathbb{Z}^{r \times L}$ is $\Delta$-bound and totally generic. Since $C = \Delta S$, it suffices to prove that $S$ is totally $1$-submodular and totally generic. In order to see this, observe that by construction $D = A \cdot (I,S)$. Because $D$ is generic and $\Delta$-modular and $\det(A) = \Delta \neq 0$, this means that $(I,S)$ is generic and $1$-submodular. Hence, in view of [Proposition 3](#prop:totally-vs-simple-modularity-and-genericity){reference-type="ref" reference="prop:totally-vs-simple-modularity-and-genericity"}, $S$ is totally generic and totally $1$-submodular as claimed. ◻ It is known that the entries of the matrix [\[eqn:D-after-reduction\]](#eqn:D-after-reduction){reference-type="eqref" reference="eqn:D-after-reduction"} in the proof of [Lemma 5](#lem:existence-Deltabound-totally-gen){reference-type="ref" reference="lem:existence-Deltabound-totally-gen"} can be bounded by $\Delta$ in absolute value (cf. [@gribanovmalyshevpardalosveselov Lem. 1]). ## An algorithm to compute $\mathop{\mathrm{g}}(\Delta,r)$ {#sect:algorithm-description} [Lemma 5](#lem:existence-Deltabound-totally-gen){reference-type="ref" reference="lem:existence-Deltabound-totally-gen"} and its proof suggest an algorithm for finding a generic $\Delta$-modular matrix of size $r\times (r+L)$ with maximal $L$. 1. Construct all integer matrices $A\in \mathbb{Z}^{r\times r}$ in Hermite normal form with determinant $\det(A)=\Delta$. 2. Find all solutions $x \in \mathbb{Z}_{\Delta}^r$ of $Ax=0 \bmod \Delta$. For each such solution $x$ calculate all possible representatives in $\mathbb{Z}^r$ with nonzero entries bounded by $\Delta$ in absolute value. We can assume that the first entry is positive. 3. Find a largest possible totally generic $\Delta$-bound matrix with the columns obtained in Step 2. 4. Let $C \in \mathbb{Z}^{r \times L}$ be the totally generic $\Delta$-bound matrix found in Step 3. Then, $D=(A, AC/\Delta)$ is a generic $\Delta$-modular matrix of size $r\times(r+L)$. Pseudocode for this algorithmic approach is given as [\[alg:computeHeller\]](#alg:computeHeller){reference-type="ref" reference="alg:computeHeller"}. Correctness of the algorithm follows directly from [Lemma 5](#lem:existence-Deltabound-totally-gen){reference-type="ref" reference="lem:existence-Deltabound-totally-gen"}, so that in the remainder of this subsection we will be concerned with describing how we can implement the various steps in an efficient manner. 
**Input:** $\Delta, r \in \mathbb{Z}_{>0}$. **Output:** $\mathop{\mathrm{g}}(\Delta, r)$ and a generic $\Delta$-modular matrix $D \in \mathbb{Z}^{r \times \mathop{\mathrm{g}}(\Delta,r)}$.

1. $\mathop{\mathrm{g}}\gets 0$, $D \gets 0$
2. $H \gets$ list of all relevant Hermite normal forms[\[alg:hermite-step\]]{#alg:hermite-step label="alg:hermite-step"}
3. **for all** $A \in H$ **do**[\[alg:for-loop\]]{#alg:for-loop label="alg:for-loop"}
4. $\qquad X \gets$ relevant integer solutions to $Ax = 0 \bmod \Delta$[\[alg:relevant-solutions\]]{#alg:relevant-solutions label="alg:relevant-solutions"}
5. $\qquad C \gets$ largest possible totally generic $\Delta$-bound matrix with columns in $X$[\[alg:expensive-step\]]{#alg:expensive-step label="alg:expensive-step"}
6. $\qquad$ **if** $r+\operatorname{number-of-columns}(C) > \mathop{\mathrm{g}}$ **then** $\mathop{\mathrm{g}}\gets r+\operatorname{number-of-columns}(C)$ and $D \gets (A, A C/\Delta)$
7. **return** $\mathop{\mathrm{g}}, D$

[\[alg:computeHeller\]]{#alg:computeHeller label="alg:computeHeller"}

Note that Step 3 above, respectively Line [\[alg:expensive-step\]](#alg:expensive-step){reference-type="ref" reference="alg:expensive-step"} in [\[alg:computeHeller\]](#alg:computeHeller){reference-type="ref" reference="alg:computeHeller"}, is by far the most computationally expensive step. Therefore, we first investigate whether we actually need to run through *all* the Hermite normal forms in Step 1, and likewise through *all* integer solutions to the linear system in Step 2. It turns out that we can reduce the possibilities quite a bit and only need to consider the *relevant* ones among these objects, as is hinted at in the description of [\[alg:computeHeller\]](#alg:computeHeller){reference-type="ref" reference="alg:computeHeller"} in Lines [\[alg:hermite-step\]](#alg:hermite-step){reference-type="ref" reference="alg:hermite-step"} and [\[alg:relevant-solutions\]](#alg:relevant-solutions){reference-type="ref" reference="alg:relevant-solutions"}. Let us start with the Hermite normal forms. ### Relevant Hermite normal forms For any pair $A_1$ and $A_2$ of integer matrices in Hermite normal form that give equivalent results, we need to iterate over only one of them. Let us make more precise what we consider as equivalent results: **Definition 2**. *Let $D_1, D_2 \in \mathbb{Z}^{r\times n}$. We say that $D_1$ is *equivalent* to $D_2$, in symbols $D_1 \simeq D_2$, if there exists a unimodular matrix $S \in \mathbb{Z}^{r \times r}$, a diagonal matrix $D \in \mathbb{Z}^{n \times n}$ with diagonal entries $\pm 1$, and a permutation matrix $P \in \mathbb{Z}^{n \times n}$ such that $D_2 = S D_1 D P$.* Hence, $D_1 \simeq D_2$, if $D_2$ can be obtained from $D_1$ by applying unimodular integer row operations, rearranging columns, and multiplying columns by $-1$. Note that both $\Delta$-modularity and genericity are preserved under this notion of equivalence. Let $D_1=(A_1, U_1)$ be a generic $\Delta$-modular matrix with $A_1$ in Hermite normal form and let $A_2$ be a matrix in Hermite normal form which is equivalent to $A_1$, i.e. $A_2 = S A_1 D P$ for suitable matrices $S,D$, and $P$ as in [Definition 2](#def:equivalence){reference-type="ref" reference="def:equivalence"}. Then $D_2 = (S A_1 D P, S U_1) = (A_2, S U_1)$ is also generic $\Delta$-modular and will be found by the algorithm. In particular, during the algorithm we do not have to run through the for loop for both $A_1$ and $A_2$. However, even if $D_1 = (A_1, U_1)$ and $D_2 = (A_2, U_2)$ are matrices with $A_1, A_2$ in Hermite normal form and such that $D_1 \simeq D_2$, the matrices $A_1$ and $A_2$ need not be equivalent. There can be a submatrix $\tilde A_1$ of $U_1$ which is equivalent to $A_2$.
Therefore, even if we restrict the equivalence to the Hermite normal forms that we investigate, and for each equivalence class we test only one matrix, we might still find resulting matrices during the algorithm that are equivalent. This problem is related to finding a canonical form under the equivalence relation $\simeq$. We are not aware of any work in the literature solving this problem, but we discuss two operations below that can be used to reduce the amount of Hermite normal forms that have to be checked. Note that a slightly different notion of equivalence has been studied by Paolini [@paolini2017analgorithm]. He says that two matrices $D,D' \in \mathbb{Z}^{r \times n}$ are equivalent, if there is a permutation matrix $P \in \mathbb{Z}^{n \times n}$ and an affine unimodular equivalence $\varphi : \mathbb{Z}^r \to \mathbb{Z}^r$, that is $\varphi(x) = Ax + b$, for some unimodular $A \in \mathbb{Z}^{r \times r}$ and some $b \in \mathbb{Z}^r$, such that $D' = \varphi(D)P$. Paolini defines a canonical form for this notion and devises an efficient algorithm to compute it. The question on whether his approach can be adapted to our notion of equivalence will be left for future work. ### Operation 1: Sorting the diagonal {#operation-1-sorting-the-diagonal .unnumbered} Let $A \in \mathbb{Z}^{r \times r}$ be a matrix in Hermite normal form with diagonal $(a_1, \ldots, a_r)$. We claim that we can apply suitable column and row operations to obtain an equivalent matrix $A' \simeq A$ in Hermite normal form with diagonal $(a_1', \ldots, a_r')$ such that $a_1' \leq \ldots \leq a_r'$. To see this it suffices to show how we may "swap" two adjacent diagonal entries, that is, if $A$ has diagonal $(a_1, \ldots, a_{i-1}, a_i, a_{i+1}, a_{i+2}, \ldots, a_r)$ with $a_i > a_{i+1}$, then we can obtain an equivalent matrix $A'$ in Hermite normal form with the diagonal $(a_1, \ldots, a_{i-1}, a_{i+1}', a_i', a_{i+2}, \ldots, a_r)$ and $a_{i+1}' \leq a_i'$. This can be achieved by following a simple procedure: 1. Write $b$ for the entry of $A$ at position $(i,i+1)$ and exchange columns $i$ and $i+1$ of $A$. 2. Perform the euclidean algorithm on $b$ and $a_{i+1}$ by applying row operations only involving rows $i$ and $i+1$ until the matrix is again upper triangular. 3. Use rows $i,i+1,\ldots,r$ of the obtained matrix to restore the Hermite normal form condition for $i$-th row. 4. Use rows $i+1,\ldots,r$ to restore the Hermite normal form condition for $(i+1)$-st row. The following scheme illustrates the procedure: $$\begin{aligned} A &= \begin{pmatrix} a_1 \\ & \ddots \\ & & a_i & b \\ & & 0 & a_{i+1} \\ & & & & \ddots \\ & & & & & a_r \end{pmatrix} \rightarrow \begin{pmatrix} a_1 \\ & \ddots \\ & & b & a_i \\ & & a_{i+1} & 0 \\ & & & & \ddots \\ & & & & & a_r \end{pmatrix} \\ &\rightarrow \begin{pmatrix} a_1 \\ & \ddots \\ & & \gcd(b, a_{i+1}) & b' \\ & & 0 & a_i' \\ & & & & \ddots \\ & & & & & a_r \end{pmatrix} = A'.\end{aligned}$$ Note that the entry of the obtained matrix $A'$ at position $(i,i)$ is now $a_{i+1}'=\gcd(b, a_{i+1})\leq a_{i+1} < a_i$. Since $A' \simeq A$, the determinant didn't change, and thus $a_i \cdot a_{i+1} = a_i' \cdot a_{i+1}'$. Using $a_{i+1}'\leq a_{i+1}$, we obtain $a_i \leq a_i'$ and thus $a_{i+1}'\leq a_i'$, as desired. In fact the described procedure shows that for $A = (a_{ij})$, we can assume that $a_{i,i}\leq \gcd(a_{i, i+1}, a_{i+1,i+1})$ for all $i=1, \ldots, r-1$ in addition to $a_{i,i}\leq a_{i+1, i+1}$. 
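As a concrete illustration of the swapping procedure (a small worked example added here for clarity), take $r=2$ and $$A = \begin{pmatrix} 3 & 1 \\ 0 & 2 \end{pmatrix},$$ which is in Hermite normal form with unsorted diagonal $(3,2)$ and determinant $6$. Exchanging the two columns gives $\left(\begin{smallmatrix} 1 & 3 \\ 2 & 0 \end{smallmatrix}\right)$; subtracting twice the first row from the second and multiplying the second row by $-1$ yields the upper triangular matrix $$A' = \begin{pmatrix} 1 & 3 \\ 0 & 6 \end{pmatrix},$$ which is already in Hermite normal form. Its diagonal $(\gcd(1,2), 6) = (1,6)$ is sorted increasingly, and the determinant $6$ is preserved, in accordance with the discussion above.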
It is possible that by also exchanging non-adjacent columns one may obtain a stronger condition based on greatest common divisors; we have not pursued this further.

### Operation 2: Sorting the last column {#operation-2-sorting-the-last-column .unnumbered}

Our second operation to reduce the number of relevant Hermite normal forms only applies to those with diagonal $(1, \ldots, 1, \Delta)$. In this case, instead of letting the last column be any of the vectors $(v_1, \ldots, v_{r-1}, \Delta)$ with $0\leq v_i\leq \Delta-1$ for $i=1,\ldots, r-1$, we may assume that $0 \leq v_1 \leq v_2 \leq \ldots \leq v_{r-1} \leq \Delta/2$.

Indeed, suppose first that $v_j > \Delta/2$ for some $j\in\{1, \ldots, r-1\}$. We can multiply the $j$-th row by $-1$, then multiply the $j$-th column by $-1$, and then add the last row to the $j$-th row to obtain an equivalent matrix with identical entries except for the one in place of $v_j$, which is now $\Delta-v_j \leq \Delta/2$. Note that we use here the fact that the only nonzero entries in the $j$-th row and the $j$-th column are the diagonal entry and $v_j$; otherwise, we would have to perform extra row additions to obtain a matrix in Hermite normal form again. Next, to sort the last column, let $v_i > v_j$ with $i<j$. We can simply exchange rows $i$ and $j$, and then exchange columns $i$ and $j$, to obtain an equivalent matrix in Hermite normal form with $v_i$ exchanged for $v_j$. Again, we use the specific form of the original matrix here.

Even in the case that $\Delta$ is prime -- which means that the only possible diagonal is $(1, \ldots, 1, \Delta)$ -- these two operations alone do not produce a canonical form. As an example we observe that for $r=3$ and $\Delta=7$ the two matrices $$A_1 = \begin{pmatrix} 1 \\ & 1 & 2 \\ & & 7 \end{pmatrix} \qquad \text{and} \qquad A_2 = \begin{pmatrix} 1 \\ & 1 & 3 \\ & & 7 \end{pmatrix}$$ are equivalent. Indeed, by first multiplying the third column of $A_1$ by $-1$, then swapping the second and third columns, and then computing the Hermite normal form of the resulting matrix, we obtain $A_2$.

However, applying the two operations drastically reduces the number of Hermite normal forms that we need to consider in Line [\[alg:hermite-step\]](#alg:hermite-step){reference-type="ref" reference="alg:hermite-step"} of [\[alg:computeHeller\]](#alg:computeHeller){reference-type="ref" reference="alg:computeHeller"}. To exemplify this claim, let $H(\Delta,r)$ be the number of Hermite normal forms of size $r \times r$ with determinant $\Delta$, let $H_{op}(\Delta,r)$ be the number of such Hermite normal forms that remain after applying Operations 1 and 2, and let $H_{\simeq}(\Delta,r)$ be the number of pairwise inequivalent such Hermite normal forms. For comparison, in [1](#tbl:HNFs){reference-type="ref" reference="tbl:HNFs"} we list these numbers for $r=4$ and some small values of $\Delta$.

  $\Delta$                  12     13     14     15     16      17     18      19     20
  ------------------------ ------ ------ ------ ------ ------- ------ ------- ------ -------
  $H(\Delta,4)$             6200   2380   6000   6240   11811   5220   18150   7240   24180
  $H_{op}(\Delta,4)$        652    84     283    204    1267    165    1287    220    2523
  $H_{\simeq}(\Delta,4)$    149    37     99     89     250     64     259     80     360

  : A few numbers of Hermite normal forms under various reductions discussed in the text.
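The equivalence of $A_1$ and $A_2$ can also be confirmed by brute force. The following self-contained Python sketch (an illustration only; it implements the notion from [Definition 2](#def:equivalence){reference-type="ref" reference="def:equivalence"} in the square case) runs over all sign matrices $D$ and permutation matrices $P$ and tests whether $A_1 D P$ and $A_2$ have the same row-style Hermite normal form, i.e., whether they differ by a unimodular factor $S$ from the left.

```python
from itertools import permutations, product

def row_hnf(A):
    """Row-style Hermite normal form of a nonsingular integer matrix, computed
    with integer row operations only (unimodular from the left): upper
    triangular, positive diagonal, entries above each pivot reduced."""
    H = [list(row) for row in A]
    n = len(H)
    for j in range(n):
        for i in range(j + 1, n):          # clear column j below the pivot
            while H[i][j] != 0:
                q = H[j][j] // H[i][j]
                H[j] = [a - q * b for a, b in zip(H[j], H[i])]
                H[j], H[i] = H[i], H[j]
        if H[j][j] < 0:                     # make the pivot positive
            H[j] = [-a for a in H[j]]
        for i in range(j):                  # reduce the entries above the pivot
            q = H[i][j] // H[j][j]
            H[i] = [a - q * b for a, b in zip(H[i], H[j])]
    return H

def equivalent(A1, A2):
    """Brute-force equivalence test (square case of the definition above)."""
    n = len(A1)
    for signs in product([1, -1], repeat=n):
        for perm in permutations(range(n)):
            # column j of A1*D*P is column perm[j] of A1, scaled by signs[perm[j]]
            B = [[signs[perm[j]] * A1[i][perm[j]] for j in range(n)]
                 for i in range(n)]
            if row_hnf(B) == row_hnf(A2):
                return True
    return False

A1 = [[1, 0, 0], [0, 1, 2], [0, 0, 7]]
A2 = [[1, 0, 0], [0, 1, 3], [0, 0, 7]]
print(equivalent(A1, A2))   # True, as claimed in the example above
```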
The number $H(\Delta,r)$ is known exactly and depends on the prime decomposition $\Delta = p_1^{e_1} \cdot \ldots \cdot p_t^{e_t}$ of $\Delta$ (cf. [@gruber1997alternative] and the discussion therein for other formulae): $$H(\Delta,r) = \sum_{\substack{d_1,\ldots,d_r \in \mathbb{Z}_{>0}\\ d_1\cdot \ldots \cdot d_r = \Delta}} d_1^0 d_2^1 \cdot \ldots \cdot d_r^{r-1} = \prod_{i=1}^t \prod_{j=1}^{e_i} \frac{p_i^{j+r-1} - 1}{p_i^j - 1} .$$ We are not aware of a similarly good understanding of the number of inequivalent Hermite normal forms:

**Question 2**. *Is there a closed formula for $H_{\simeq}(\Delta,r)$ as well? What is the growth rate of the numbers $H_{\simeq}(\Delta,r)$?*

### Relevant integer solutions

Now, we investigate the *relevant* solutions to the equation in Line [\[alg:relevant-solutions\]](#alg:relevant-solutions){reference-type="ref" reference="alg:relevant-solutions"} of [\[alg:computeHeller\]](#alg:computeHeller){reference-type="ref" reference="alg:computeHeller"}. For a matrix $A \in \mathbb{Z}^{r \times r}$ in Hermite normal form and with $\det(A) = \Delta$, we therefore consider the linear system $Ax = 0 \bmod \Delta$. Since $A$ is in Hermite normal form, this system can be solved by backward substitution, but as $\det(A) = \Delta$, the matrix $A$ is not invertible over $\mathbb{Z}_\Delta$. Thus, there will be multiple solutions in $\mathbb{Z}_\Delta^r$, and for each of those there might be several integer representatives.

As a first step, we determine the number of solutions of the linear system above in $\mathbb{Z}_\Delta^r$. For this, we need the Smith normal form of an integer matrix (cf. Cohen [@cohen1993acourse Ch. 2]).

**Theorem 6**. *Let $A \in \mathbb{Z}^{r \times r}$ have non-zero determinant. Then, there exist unimodular matrices $Q, P \in \mathbb{Z}^{r \times r}$ such that $PAQ = \mathop{\mathrm{diag}}(\alpha_1, \ldots, \alpha_r)$, where the integers $\alpha_1,\ldots,\alpha_r$ are the elementary divisors of $A$ satisfying $0 \leq \alpha_i | \alpha_{i+1}$, for all $1 \leq i \leq r-1$.*

The matrix $S := PAQ$ in this theorem is uniquely determined and is called the *Smith normal form* of $A$. By definition, we have $\left|\det(A)\right| = \det(S) = \alpha_1 \cdot \ldots \cdot \alpha_r$.

**Lemma 6**. *Let $a,b,\Delta \in \mathbb{Z}$ be integers with $\Delta > 0$ and let $A \in \mathbb{Z}^{r \times r}$ have determinant $\det(A)=\Delta$.*

1. *The equation $ax = b \bmod \Delta$ has either $0$ or $\gcd(a, \Delta)$ solutions in $\mathbb{Z}_\Delta$.*

2. *There are exactly $\Delta$ solutions to $Ax = 0 \bmod \Delta$ in $\mathbb{Z}_\Delta^r$.*

*Proof.* (i): By linearity we only need to consider $b=0$: if a solution exists at all, the solution set is a coset of the solution set for $b = 0$ and hence has the same cardinality. If $ax=0 \bmod \Delta$, then $ax=k\Delta$ for some integer $k$. Let $d=\gcd(a, \Delta)$ and write $a=ds$, $\Delta=dt$ with $\gcd(s,t)=1$. Plugging in, we obtain $dsx=kdt$, or equivalently $sx=kt$. As $s$ divides $kt$ and $\gcd(s,t)=1$, the integer $s$ divides $k$, say $k=ms$, and hence $x = mt$. Conversely, it is easy to check that for any integer $m$ the number $x=mt$ is indeed a solution. It remains to check that these give exactly $d$ distinct solutions in $\mathbb{Z}_\Delta$. To this end, let $m_1, m_2\in \mathbb{Z}$ be such that $m_1t=m_2t \bmod \Delta$. Then $(m_1-m_2)t = k\Delta = k d t$ for some integer $k$. Therefore, $m_1-m_2=kd = 0\bmod d$, that is, $m_1=m_2 \bmod d$. Hence, each $0\leq m < d$ gives a distinct solution, which proves the claim.

(ii): Let $S=PAQ$ be the Smith normal form of $A$. Since $\left|\det(P)\right| = \left|\det(Q)\right| = 1$ as integer matrices, both $P$ and $Q$ are invertible over $\mathbb{Z}_\Delta$. Further, we have $\alpha_1 \cdot \ldots \cdot \alpha_r = \Delta$ for the diagonal elements of $S$. Multiplying $Ax = 0 \bmod \Delta$ from the left by $P$ gives us $$PAx = PAQQ^{-1}x = S(Q^{-1}x) = 0 \bmod \Delta.$$ As $P$ and $Q$ are invertible, the numbers of solutions to $Ax = 0 \bmod \Delta$ and to $Sy = 0 \bmod \Delta$ with $y = Q^{-1}x$ are the same. The $i$-th row of the system $Sy = 0 \bmod \Delta$ reads $\alpha_i y_i = 0 \bmod \Delta$, which by part (i) has exactly $\alpha_i$ solutions in $\mathbb{Z}_\Delta$, since $\alpha_i$ is a divisor of $\Delta$. Combining these solutions for each row, we obtain exactly $\alpha_1 \cdot \ldots \cdot \alpha_r = \Delta$ solutions of $Ax = 0 \bmod \Delta$ over $\mathbb{Z}_\Delta^r$. ◻
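For small instances, the count of part (ii) is easy to confirm exhaustively. The following Python sketch (the example matrix is chosen ad hoc, purely for illustration) enumerates $\mathbb{Z}_\Delta^r$ and counts the solutions of $Ax = 0 \bmod \Delta$.

```python
from itertools import product

def count_solutions(A, Delta):
    """Count x in (Z_Delta)^r with A x = 0 (mod Delta) by brute force."""
    r = len(A)
    count = 0
    for x in product(range(Delta), repeat=r):
        if all(sum(A[i][j] * x[j] for j in range(r)) % Delta == 0
               for i in range(r)):
            count += 1
    return count

# A Hermite normal form with determinant Delta = 12 and diagonal (1, 2, 6).
A = [[1, 0, 3],
     [0, 2, 5],
     [0, 0, 6]]
print(count_solutions(A, 12))   # 12, as predicted by part (ii) of the lemma
```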
Now, as the second step, for each of the $\Delta$ solutions $x \in \mathbb{Z}_\Delta^r$ to $Ax = 0 \bmod \Delta$, we want to find all the integer representatives in $x + \Delta \mathbb{Z}^r$ that are relevant for the computation of $\mathop{\mathrm{g}}(\Delta,r)$ by [\[alg:computeHeller\]](#alg:computeHeller){reference-type="ref" reference="alg:computeHeller"}. Since we want to build a totally generic $\Delta$-bound matrix out of these vectors, each entry of such a representative needs, in particular, to be non-zero and bounded by $\Delta$ in absolute value. Further, if we find a matrix using the representative $v$, then we will also find an equivalent matrix using the representative $-v$, and no matrix can contain both of $v$ and $-v$, because this would create a vanishing minor. Therefore, we only need to take one of $v$ and $-v$, which we can achieve, for example, by fixing the first entry of each representative to be positive.

In summary, if $x\in\mathbb{Z}_\Delta^r$ is a solution and $v \in x + \Delta \mathbb{Z}^r$ a representative under the discussed conditions, we have that if $x_i = 0$, for some index $i$, then $v_i \in \{\Delta, -\Delta\}$, and if $x_i = k \neq 0$, then $v_i \in \{k, k-\Delta\}$. This gives $2$ options for each entry of $v$, except for the first entry, which must be positive. This leaves us with $2^{r-1}$ relevant representatives of $x$. The list $X$ in Line [\[alg:relevant-solutions\]](#alg:relevant-solutions){reference-type="ref" reference="alg:relevant-solutions"} of [\[alg:computeHeller\]](#alg:computeHeller){reference-type="ref" reference="alg:computeHeller"} will therefore in total have $2^{r-1}\Delta$ vectors.

For particular matrices $A$, we may further remove irrelevant representatives. Indeed, if $v$ and $\ell v$ are elements in the list $X$, for some $\ell > 1$, and we find a matrix using the column $\ell v$, then we will also find a matrix using the column $v$. This means that we only need to take $v$, and can neglect $\ell v$.
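As an illustration of this bookkeeping, the following Python sketch (a toy version of the construction of the list $X$; the helper name is ours) lists the $2^{r-1}$ relevant representatives of a single solution $x \in \mathbb{Z}_\Delta^r$, with the entries of $x$ taken in $\{0, \ldots, \Delta-1\}$.

```python
from itertools import product

def relevant_representatives(x, Delta):
    """All representatives v of x + Delta*Z^r with nonzero entries of absolute
    value at most Delta, keeping only one of each pair {v, -v} by requiring
    the first entry to be positive."""
    choices = []
    for x_i in x:
        if x_i == 0:
            choices.append((Delta, -Delta))
        else:
            choices.append((x_i, x_i - Delta))
    return [v for v in product(*choices) if v[0] > 0]

# One solution of the system from the previous sketch (A x = 0 mod 12):
x = (6, 1, 2)   # indeed 6 + 3*2, 2*1 + 5*2 and 6*2 are all 0 mod 12
for v in relevant_representatives(x, 12):
    print(v)     # 2^(r-1) = 4 representatives, e.g. (6, 1, 2) and (6, 1, -10)
```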
*Also, as discussed after [Theorem 5](#thm:ball-mds){reference-type="ref" reference="thm:ball-mds"}, the proof of the linear bound $\mathop{\mathrm{g}}(\Delta, r)\leq r+p-1$ is not very involved and, in particular, is independent of Ball's result [Theorem 5](#thm:ball-mds){reference-type="ref" reference="thm:ball-mds"}.*

*A similar argument was carried out in [@averkovschymura2023onthemaximal Rem. 1], showing the bound $\mathop{\mathrm{h}}(\Delta,r) \leq 3^r \Delta$ for the maximal column number of not necessarily generic $\Delta$-modular matrices with $r$ rows.*

## Finding a totally generic $\Delta$-bound matrix with the most columns

After the previous reductions have been carried out and, for a given Hermite normal form $A$, the candidate set $X$ has been computed, how can we now find a largest possible totally generic $\Delta$-bound matrix $C$ with columns in $X$? We solve this problem by translating it into a clique problem in a suitably defined (hyper-)graph.

Recall that a clique in a simple graph $\mathcal{G}= (V, E)$ is a collection of vertices $K \subseteq V$ such that for every $i, j \in K$ with $i \neq j$, we have $\{i,j\} \in E$. A clique is called *maximal* if it is not a proper subset of another clique, and it is called *maximum* if there is no clique of larger cardinality in $\mathcal{G}$. Now, let $\mathcal{H}= (V,H)$ be a simple hypergraph, where $H \subseteq 2^V$ is the set of hyperedges. For $k \geq 2$, a *$k$-hyperclique* in $\mathcal{H}$ is a set $K \subseteq V$ of vertices such that every subset $I \subseteq K$ of size $1 < |I| \leq k$ is a hyperedge. Maximal and maximum $k$-hypercliques are defined analogously to maximal and maximum cliques in graphs, respectively.

Assume now that we are given a list $X = \{v_1, \ldots, v_n\}$ of vectors with $n \leq 2^{r-1} \Delta$ generated in Line [\[alg:relevant-solutions\]](#alg:relevant-solutions){reference-type="ref" reference="alg:relevant-solutions"} of [\[alg:computeHeller\]](#alg:computeHeller){reference-type="ref" reference="alg:computeHeller"}, and that we want to build a maximum totally generic $\Delta$-bound matrix $C$ with columns from $X$. We define the hypergraph $\mathcal{H}_r = ([n],H_r)$, where $I \subseteq [n]$ is a hyperedge if and only if $1<|I|\leq r$ and the matrix $(v_i : i \in I)$ is totally generic $\Delta$-bound. Note that the hypergraph $\mathcal{H}_r$ is downward-closed, because if a matrix $M$ is totally generic $\Delta$-bound, then so is any submatrix of $M$ obtained by removing columns. The problem of finding the matrix $C$ above is now equivalent to finding a maximum $r$-hyperclique in $\mathcal{H}_r$.

Let us first consider the case $r=2$, in which the hypergraph $\mathcal{H}_2$ is just a simple graph. For the maximum clique problem in graphs there are various software implementations available. We used the Sagemath [@sagemath] implementation `sage.graphs.cliquer.max_clique`. For $r \geq 3$, we implemented an algorithm for the hypergraph clique problem in C, based on `hClique` described in [@torres2017hclique].

## Numerical results {#sect:numerical-results}

In this last part, we discuss the computational results that we obtained by implementing the algorithm described in the previous sections and running it for small parameters $r$ and $\Delta$. The `sage` and C source code and the computed data can be found at <https://github.com/BKriepke/DeltaModular>.

### The case $r=2$

Our implementation allows us to compute the numbers $\mathop{\mathrm{g}}(\Delta,2)$ for every $2 \leq \Delta \leq 450$ in a reasonable amount of time.
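Before turning to the data, we give for orientation a drastically simplified Python stand-in for the innermost step of this computation, namely the clique search of the previous subsection applied to a handful of toy candidate columns. The pair predicate below encodes one possible reading of "totally generic $\Delta$-bound" for two columns (all entries and the $2\times 2$ determinant non-zero and at most $\Delta$ in absolute value); the definition given earlier in the paper is the authoritative one, and the actual computations use `sage.graphs.cliquer.max_clique` rather than the exponential search sketched here.

```python
from itertools import combinations

def is_generic_pair(u, v, Delta):
    """Assumed pair condition: all entries and the 2x2 determinant are
    non-zero and bounded by Delta in absolute value."""
    det = u[0] * v[1] - u[1] * v[0]
    return all(e != 0 and abs(e) <= Delta for e in list(u) + list(v) + [det])

def max_clique_brute_force(n, is_edge):
    """Maximum clique of the graph on {0, ..., n-1} defined by is_edge,
    found by decreasing-size exhaustive search (tiny instances only)."""
    for size in range(n, 0, -1):
        for K in combinations(range(n), size):
            if all(is_edge(i, j) for i, j in combinations(K, 2)):
                return list(K)
    return []

# Toy candidate columns (not an actual list X produced by the algorithm).
Delta = 4
X = [(1, 1), (1, -3), (2, 1), (3, 2), (4, -1), (1, 3)]
K = max_clique_brute_force(len(X), lambda i, j: is_generic_pair(X[i], X[j], Delta))
print([X[i] for i in K])   # columns of a largest compatible matrix C among X
```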
In the following, we try to explain the obtained values as much as possible and make some hypotheses that they suggest. We refrain from writing down each particular value of $\mathop{\mathrm{g}}(\Delta,2)$, for $\Delta \leq 450$, and instead refer the interested reader to the data in the repository linked above. Investigating the data, the first observation that comes to mind is that $\mathop{\mathrm{g}}(\Delta,2)$ seems to be a non-decreasing function of $\Delta$. Up to now, we could not find a theoretical explanation. **Conjecture 3**. *The function $\Delta \mapsto \mathop{\mathrm{g}}(\Delta,2)$ is non-decreasing.* In view of the matrix in [\[eqn:Delta+2Ex\]](#eqn:Delta+2Ex){reference-type="eqref" reference="eqn:Delta+2Ex"} and [Theorem 2](#thm:linear-upper-bound){reference-type="ref" reference="thm:linear-upper-bound"}, we have $\Delta + 2 \leq \mathop{\mathrm{g}}(\Delta,2) \leq p + 1$, with $p$ being the smallest prime larger than $\Delta$. Let us first have a look at those $\Delta \leq 450$ for which this upper bound is attained. A large part of these values is of course explained by the three infinite families [\[itm:F1\]](#itm:F1){reference-type="ref" reference="itm:F1"}--[\[itm:F3\]](#itm:F3){reference-type="ref" reference="itm:F3"} from [3.1](#sect:constr-two-rows){reference-type="ref" reference="sect:constr-two-rows"}. [2](#tab:gDelta2Values){reference-type="ref" reference="tab:gDelta2Values"} shows the results for the remaining parameters $\Delta$ in the range $7 \leq \Delta \leq 124$, compared to the upper bound $p+1$. ---------------------------------- -------- -------- ----- ----- -------- -------- ----- ----- -------- -------- -------- ----- --------- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- $\Delta$ 7 13 19 23 24 25 31 32 33 34 37 43 47 $\mathop{\mathrm{g}}(\Delta, 2)$ 10 16 23 27 **30** **30** 34 36 37 **38** **42** 46 51 $p+1$ 12 18 24 30 **30** **30** 38 38 38 **38** **42** 48 54 $\Delta$ 48 49 53 54 55 61 62 63 64 67 73 74 75 $\mathop{\mathrm{g}}(\Delta, 2)$ **54** **54** 57 59 **60** 65 67 67 **68** 70 76 78 78 $p+1$ **54** **54** 60 60 **60** 68 68 68 **68** 72 80 80 80 $\Delta$ 76 79 83 84 85 89 90 91 92 93 94 97 103 $\mathop{\mathrm{g}}(\Delta, 2)$ **80** 82 87 89 89 93 94 94 96 96 **98** 100 106 $p+1$ **80** 84 90 90 90 98 98 98 98 98 **98** 102 108 $\Delta$ 109 113 114 115 116 117 118 119 120 121 122 123 124 $\mathop{\mathrm{g}}(\Delta, 2)$ 112 117 119 119 120 120 122 123 126 126 126 126 **128** $p+1$ 114 128 128 128 128 128 128 128 128 128 128 128 **128** ---------------------------------- -------- -------- ----- ----- -------- -------- ----- ----- -------- -------- -------- ----- --------- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- : Values for $\mathop{\mathrm{g}}(\Delta,2)$ compared with the upper bound $p+1$ from [Theorem 2](#thm:linear-upper-bound){reference-type="ref" reference="thm:linear-upper-bound"}, for those $\Delta \leq 124$ that do not belong to any of the families [\[itm:F1\]](#itm:F1){reference-type="ref" reference="itm:F1"}--[\[itm:F3\]](#itm:F3){reference-type="ref" reference="itm:F3"} from [3.1](#sect:constr-two-rows){reference-type="ref" reference="sect:constr-two-rows"}. In bold are those values for which the upper bound is met. 
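The upper-bound row of the table is easily reproduced; for instance, the following few lines of Python (using `sympy.nextprime`; the $\mathop{\mathrm{g}}(\Delta,2)$ values are copied from the table above) recompute $p+1$ for a few parameters and flag those where the bound is attained.

```python
from sympy import nextprime

# A few values of g(Delta, 2) taken from the table above.
g2 = {7: 10, 13: 16, 19: 23, 23: 27, 24: 30, 25: 30, 31: 34, 32: 36}

for Delta, g in g2.items():
    upper = nextprime(Delta) + 1     # p + 1 with p the smallest prime > Delta
    print(Delta, g, upper, "meets bound" if g == upper else "")
# Among these parameters, only Delta = 24 and Delta = 25 meet the bound p + 1.
```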
There are some further values of $\Delta$ for which we can give an explicit construction that gives the correct value of $\mathop{\mathrm{g}}(\Delta,2)$, even though those values do not attain the upper bound of [Theorem 2](#thm:linear-upper-bound){reference-type="ref" reference="thm:linear-upper-bound"} (except for $\Delta=24$). Indeed, for $\Delta = 30s + 24$, with $s$ an integer, consider the matrix $$\begin{aligned} M(a,b) = \left(\begin{matrix} 0 & 1 & \cdots & 1 & 2 & \cdots & 2 & 3 & \cdots & 3 \\ 1 & 0 & \cdots & 11s+9 & 6s+5 & \cdots & 16s+13 & 12s+10 & \cdots & 21s+17 \end{matrix}\right. \notag\\ \phantom{++++} \left.\begin{matrix} 4 & \cdots & 4 & 5 & \cdots & 5 \\ 18s+15 & \cdots & 26s+13+\nu_s & 25s+21 & \cdots & 30s+24 \end{matrix}\right),\label{eqn:Delta24mod30}\end{aligned}$$ where $\nu_s$ is a suitable integer with $0 \leq \nu_s \leq 8 - s/2$. This construction has the maximal number $\mathop{\mathrm{g}}(30s+24, 2) = \Delta+2+\left\lfloor \nu_s/2\right\rfloor$ of columns; see [3](#tab:nuValues){reference-type="ref" reference="tab:nuValues"} for the precise values of the parameters $\nu_s$. In particular, for $s\leq 12$, this construction has more columns than the one given by [\[eqn:Delta+2Ex\]](#eqn:Delta+2Ex){reference-type="eqref" reference="eqn:Delta+2Ex"}. For $s \in \{13, 14\}$, we have $\mathop{\mathrm{g}}(\Delta, 2)=\Delta+2$, which further indicates that eventually all even values $\Delta$ either satisfy $\mathop{\mathrm{g}}(\Delta,2)=\Delta+4$, if $\Delta = 2 \bmod 3$, or $\mathop{\mathrm{g}}(\Delta,2)=\Delta+2$ otherwise. ---------------------------------------------- ---- ---- ---- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- $s$ 0 1 2 3 4 5 6 7 8 9 10 11 12 $\Delta=30s+24$ 24 54 84 114 144 174 204 234 264 294 324 354 384 $\nu_s$ 8 6 6 6 6 4 4 4 4 2 2 2 2 $\Delta+2+\left\lfloor \nu_s/2\right\rfloor$ 30 59 89 119 149 178 208 238 268 297 327 357 387 ---------------------------------------------- ---- ---- ---- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- : The values $\nu_s$ for the construction in [\[eqn:Delta24mod30\]](#eqn:Delta24mod30){reference-type="eqref" reference="eqn:Delta24mod30"} realizing $\mathop{\mathrm{g}}(\Delta,2)$. Let us now have a look at the excess $\varepsilon_\Delta := \mathop{\mathrm{g}}(\Delta,2) - (\Delta+2) \geq 0$ of $\mathop{\mathrm{g}}(\Delta,2)$ over the lower bound that is implied by [\[eqn:Delta+2Ex\]](#eqn:Delta+2Ex){reference-type="eqref" reference="eqn:Delta+2Ex"}. Already written down as [Conjecture 2](#conj:constant-excess){reference-type="ref" reference="conj:constant-excess"} in [3.1](#sect:constr-two-rows){reference-type="ref" reference="sect:constr-two-rows"}, our data suggests that $\varepsilon_\Delta$ might be upper bounded by a constant. In [2](#fig:gDelta2Values){reference-type="ref" reference="fig:gDelta2Values"}, we have plotted $\varepsilon_\Delta$ against $2 \leq \Delta \leq 450$, and we make the following particular observations explaining most of the values: - We have $\mathop{\mathrm{g}}(\Delta, 2) \leq \Delta+6$, equivalently $\varepsilon_\Delta \leq 4$, for all $2\leq\Delta\leq 450$. In fact, we even have $\mathop{\mathrm{g}}(\Delta,2) \leq \Delta+4$, for all $170\leq \Delta\leq 450$. - We have $\mathop{\mathrm{g}}(\Delta,2) = \Delta+3$, for every *odd* number $171 \leq \Delta \leq 450$. 
- The values $\Delta\geq 170$ with $\mathop{\mathrm{g}}(\Delta,2) = \Delta+4$ are even (by the previous observation), and either $\Delta = 2 \bmod 3$ or $\Delta\in\{174, 184, 204, 208, 234, 264\}$. - The even values $\Delta \geq 170$ with $\mathop{\mathrm{g}}(\Delta,2)=\Delta+3$ are given by $$\Delta\in\{214, 244, 274, 294, 304, 324, 354, 384\}.$$ They all have the form $\Delta=4 \bmod 30$ or $\Delta=24 \bmod 30$. The values $\Delta=24 \bmod 30$ are realized by the family in [\[eqn:Delta24mod30\]](#eqn:Delta24mod30){reference-type="eqref" reference="eqn:Delta24mod30"}. For $\Delta=4 \bmod 30$ we suspect that there is a similar family. These families will eventually have size $\leq \Delta+2$. - The smallest even value $\Delta$ with $\Delta+1$ nonprime and $\mathop{\mathrm{g}}(\Delta,2) < \Delta+4$ is $\Delta=132$ with $\mathop{\mathrm{g}}(132,2)=134$. - The only values $\Delta$ which meet the upper bound of [Theorem 2](#thm:linear-upper-bound){reference-type="ref" reference="thm:linear-upper-bound"} and which do not belong to the families  [\[itm:F1\]](#itm:F1){reference-type="ref" reference="itm:F1"}--[\[itm:F3\]](#itm:F3){reference-type="ref" reference="itm:F3"} are given by $$\Delta\in\{24, 25, 34, 37, 48, 49, 55, 64, 76, 94, 124, 127, 154, 168, 169, 208\}.$$ - The only even values of $\Delta = 2 \bmod 3$ with $\mathop{\mathrm{g}}(\Delta,2) \neq \Delta+4$ are $\mathop{\mathrm{g}}(2,2) = 4 = \Delta+2$ and $\mathop{\mathrm{g}}(62,2) = 67 = \Delta+5$. - The values $\Delta$ with $\mathop{\mathrm{g}}(\Delta,2)=\Delta+5$ are: - $\Delta\in\{25, 49, 121, 169\}$, i.e. $\Delta=p^2$ for $p\in\{5, 7, 11, 13\}$ - $\Delta\in\{54, 84, 114, 144\}$, i.e. $\Delta = 24 \bmod 30$, realized by a generic $\Delta$-modular matrix of the form [\[eqn:Delta24mod30\]](#eqn:Delta24mod30){reference-type="eqref" reference="eqn:Delta24mod30"}; compare [3](#tab:nuValues){reference-type="ref" reference="tab:nuValues"}. - $\Delta\in\{37, 55, 62, 127, 142\}$ - The values $\Delta$ with $\mathop{\mathrm{g}}(\Delta,2) = \Delta+6$ are $\Delta\in\{24, 48, 120, 168\}$, i.e. $\Delta=p^2-1$ for $p\in\{5, 7, 11, 13\}$. ![A plot showing $\varepsilon_\Delta = \mathop{\mathrm{g}}(\Delta,2) - (\Delta+2)$, for $2\leq\Delta\leq 450$. The black circle values are those $\Delta$ which belong to one of the families [\[itm:F1\]](#itm:F1){reference-type="ref" reference="itm:F1"}--[\[itm:F3\]](#itm:F3){reference-type="ref" reference="itm:F3"}, the blue square values are other $\Delta$ that meet the upper bound of [Theorem 2](#thm:linear-upper-bound){reference-type="ref" reference="thm:linear-upper-bound"}, and the red cross values are the remaining $\Delta$ which do not meet the upper bound of [Theorem 2](#thm:linear-upper-bound){reference-type="ref" reference="thm:linear-upper-bound"}. ](plotoption5.pdf "fig:"){#fig:gDelta2Values width="\\textwidth"}\ ![A plot showing $\varepsilon_\Delta = \mathop{\mathrm{g}}(\Delta,2) - (\Delta+2)$, for $2\leq\Delta\leq 450$. The black circle values are those $\Delta$ which belong to one of the families [\[itm:F1\]](#itm:F1){reference-type="ref" reference="itm:F1"}--[\[itm:F3\]](#itm:F3){reference-type="ref" reference="itm:F3"}, the blue square values are other $\Delta$ that meet the upper bound of [Theorem 2](#thm:linear-upper-bound){reference-type="ref" reference="thm:linear-upper-bound"}, and the red cross values are the remaining $\Delta$ which do not meet the upper bound of [Theorem 2](#thm:linear-upper-bound){reference-type="ref" reference="thm:linear-upper-bound"}. 
](plotoption6.pdf "fig:"){#fig:gDelta2Values width="\\textwidth"} From a broad perspective, the data suggests that if $\Delta$ is large enough, then the behavior of the function $\mathop{\mathrm{g}}(\Delta,2)$ gets much more uniform, and that only for small $\Delta$ there are exceptional values not following a consistent pattern. **Conjecture 4**. *There is a constant $\delta_2 \in \mathbb{Z}_{>0}$, such that for every $\Delta \geq \delta_2$, we have $$\mathop{\mathrm{g}}(\Delta, 2) = \begin{cases} \Delta + 4 & , \text{if } \Delta \text{ is even and } \Delta = 2 \bmod 3 \\ \Delta + 3 & , \text{if }\Delta \text{ is odd} \\ \Delta + 2 & , \text{otherwise}. \end{cases}$$* Our data suggests that the choice $\delta_2=385$ might work. If this is the case, then this also implies [Conjecture 3](#conj:monotonicity-two-rows){reference-type="ref" reference="conj:monotonicity-two-rows"}, as the function described in [Conjecture 4](#conj:explicit-form-g-Delta-2){reference-type="ref" reference="conj:explicit-form-g-Delta-2"} is non-decreasing. ### More than two rows For the case of more than two rows, we computationally determined the values of $\mathop{\mathrm{g}}(\Delta,r)$, for every $r \in \{3,4,5\}$ and $2 \leq \Delta \leq 40$. The results are given in [4](#tab:gDelta345Values){reference-type="ref" reference="tab:gDelta345Values"} and are depicted in [\[fig:gDelta345Values\]](#fig:gDelta345Values){reference-type="ref" reference="fig:gDelta345Values"}. Consistent with the sublinear bound in [Theorem 3](#thm:sublinear-bound-generic-heller){reference-type="ref" reference="thm:sublinear-bound-generic-heller"} is that the linear upper bound $\mathop{\mathrm{g}}(\Delta,r) \leq p+1$ from [Theorem 2](#thm:linear-upper-bound){reference-type="ref" reference="thm:linear-upper-bound"} is attained only for certain small parameters $\Delta$. In contrast to the case $r=2$, we see that $\Delta \mapsto \mathop{\mathrm{g}}(\Delta,r)$ cannot be a non-decreasing function. However, again suggesting more uniformity for larger values of $\Delta$, the following question (which extends [Conjecture 3](#conj:monotonicity-two-rows){reference-type="ref" reference="conj:monotonicity-two-rows"}) might have an affirmative answer: **Question 3**. *Given $r$, is there a constant $\Delta_r \in \mathbb{Z}_{>0}$, such that $\mathop{\mathrm{g}}(\Delta,r) \leq \mathop{\mathrm{g}}(\Delta+1,r)$, for every $\Delta \geq \Delta_r$ ?* ---------------- ------- ------- ------- ------- ------- ---- ---- -------- ---- ---- ---- ---- ---- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- $\Delta$ 2 3 4 5 6 7 8 9 10 11 12 13 14 $g(\Delta, 3)$ **4** **6** **6** **8** **8** 8 8 **12** 10 12 12 12 14 $g(\Delta, 4)$ **5** **6** **6** **8** 7 8 9 10 9 10 10 10 11 $g(\Delta, 5)$ **6** **6** **6** **8** **8** 8 10 10 10 10 10 10 10 $\Delta$ 15 16 17 18 19 20 21 22 23 24 25 26 27 $g(\Delta, 3)$ 13 14 14 14 16 16 16 16 17 16 17 18 18 $g(\Delta, 4)$ 10 11 11 12 12 12 12 12 12 12 12 12 12 $g(\Delta, 5)$ 10 10 11 10 11 11 11 11 11 12 11 11 12 $\Delta$ 28 29 30 31 32 33 34 35 36 37 38 39 40 $g(\Delta, 3)$ 18 18 19 19 20 20 20 20 21 22 22 22 24 $g(\Delta, 4)$ 13 13 13 13 13 13 14 14 14 14 14 14 14 $g(\Delta, 5)$ 11 12 12 12 13 13 12 12 12 12 12 12 13 ---------------- ------- ------- ------- ------- ------- ---- ---- -------- ---- ---- ---- ---- ---- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- : The values $\mathop{\mathrm{g}}(\Delta,r)$ for $2 \leq \Delta \leq 40$ and $r=3,4,5$. 
The values in bold meet the upper bound in [Theorem 2](#thm:linear-upper-bound){reference-type="ref" reference="thm:linear-upper-bound"}.

### Non-generic setting {#sect:non-generic-computations}

With only slight modifications, the ideas behind [\[alg:computeHeller\]](#alg:computeHeller){reference-type="ref" reference="alg:computeHeller"} are also applicable in the non-generic setting. More precisely, we are interested in determining the maximum number of columns that a $\Delta$-modular matrix $A$ with $r = \mathop{\mathrm{rk}}(A)$ rows can have. Compared to the generic case, this means that we allow vanishing minors of size $r \times r$. The following lemma is the "non-generic" version of [Lemma 5](#lem:existence-Deltabound-totally-gen){reference-type="ref" reference="lem:existence-Deltabound-totally-gen"} and can be proved analogously.

**Lemma 7**. *Let $D \in \mathbb{Z}^{r\times(r+L)}$ be a $\Delta$-modular matrix with $\mathop{\mathrm{rk}}(D)=r$. Then, there exists a $\Delta$-bound matrix $C\in\mathbb{Z}^{r\times L}$. Further, each column of $C$ is a solution to a linear system of equations $Ax=0 \bmod \Delta$ for an integer matrix $A\in \mathbb{Z}^{r\times r}$ in Hermite normal form satisfying $\det(A)=\Delta$.*

The corresponding counting function $\Delta \mapsto \mathop{\mathrm{h}}(\Delta,r)$ concerns matrices that may contain a zero column as well as both $v$ and $-v$; for reasons of efficiency, however, we implemented the corresponding algorithm in a way that allows neither of these. This reduces the size of the hypergraphs roughly by a factor of two and therefore improves the runtime of the algorithm considerably. However, the hypergraphs still end up very large and also happen to have large cliques. Therefore, the performance is much worse than in the generic setting, so that we obtain more limited data.

The values $\mathop{\mathrm{h}}(\Delta, 3)$, for $3\leq \Delta\leq 11$, have been computed by an alternative algorithm in [@averkovschymura2023onthemaximal]. With our implementation of [\[alg:computeHeller\]](#alg:computeHeller){reference-type="ref" reference="alg:computeHeller"} tailored to $\mathop{\mathrm{h}}(\Delta,r)$, we were able to compute further values and determined $\mathop{\mathrm{h}}(\Delta,3)$ up to $\Delta \leq 25$. As a result, we have $\mathop{\mathrm{h}}(\Delta,3) = 6\Delta+7 = r^2 + r + 1 + 2r(\Delta-1)$, for all $3\leq \Delta\leq 25$, $\Delta\neq 4$, which is the size of the construction given in [@leepaatstallknechtxu2022polynomial]. This gives further evidence that $\Delta=4$ might be the only exception for $r=3$ rows.

For $r=4$ rows, we were able to calculate $\mathop{\mathrm{h}}(\Delta,4)$ in the range $3 \leq \Delta \leq 8$, with the results given in [5](#tbl:hDelta4values){reference-type="ref" reference="tbl:hDelta4values"}. For $\Delta \notin \{4,8\}$, the values are again given by the construction in [@leepaatstallknechtxu2022polynomial], while for $\Delta \in \{4,8\}$, our computations show that the larger examples constructed in [@averkovschymura2023onthemaximal] are best possible.

  --------------------------------- ---- ---- ---- ---- ---- ----
  $\Delta$                          3    4    5    6    7    8
  $\mathop{\mathrm{h}}(\Delta,4)$   37   49   53   61   69   81
  --------------------------------- ---- ---- ---- ---- ---- ----

  : The values $\mathop{\mathrm{h}}(\Delta,4)$ for $3 \leq \Delta \leq 8$. Explicit constructions attaining $\mathop{\mathrm{h}}(4,4)$ and $\mathop{\mathrm{h}}(8,4)$ can be found in [@averkovschymura2023onthemaximal].
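As a quick cross-check of the table, the construction size $r^2+r+1+2r(\Delta-1)$ can be compared with the computed values in a few lines of Python (the $\mathop{\mathrm{h}}(\Delta,4)$ values below are copied from the table above).

```python
def construction_size(Delta, r):
    """Number of columns of the construction from Lee-Paat-Stallknecht-Xu."""
    return r * r + r + 1 + 2 * r * (Delta - 1)

h4 = {3: 37, 4: 49, 5: 53, 6: 61, 7: 69, 8: 81}   # values from the table above
for Delta, h in h4.items():
    print(Delta, h, construction_size(Delta, 4), h == construction_size(Delta, 4))
# The construction is attained except for Delta in {4, 8}, where h(Delta, 4) is larger.
```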
An interesting observation is that for several of the hypergraphs there are many maximum cliques. Going through the search tree of all cliques then takes a long time, and the `hClique` algorithm runs considerably longer on these hypergraphs. In our implementation of [\[alg:computeHeller\]](#alg:computeHeller){reference-type="ref" reference="alg:computeHeller"} we parallelized the for loop in Line [\[alg:for-loop\]](#alg:for-loop){reference-type="ref" reference="alg:for-loop"}. In the generic setting this works well, as most hypergraphs have roughly the same size. However, in the non-generic setting most of the time is spent waiting on a single thread corresponding to one such large hypergraph. Therefore, it might be more fruitful to parallelize the `hClique` algorithm itself.

[^1]: We borrow this term from matroid theory, as the matroid represented by a simple matrix is a simple matroid.
--- abstract: | A theoretical analysis of the finite element method for a generalized Robin boundary value problem, which involves a second-order differential operator on the boundary, is presented. If $\Omega$ is a general smooth domain with a curved boundary, we need to introduce an approximate domain $\Omega_h$ and to address issues owing to the domain perturbation $\Omega \neq \Omega_h$. In contrast to the transformation approach used in existing studies, we employ the extension approach, which is easier to handle in practical computation, in order to construct a numerical scheme. Assuming that approximate domains and function spaces are given by isoparametric finite elements of order $k$, we prove the optimal rate of convergence in the $H^1$- and $L^2$-norms. A numerical example is given for the piecewise linear case $k = 1$. address: Graduate School of Mathematical Sciences, The University of Tokyo, 3-8-1 Komaba, Meguro, 153-8914 Tokyo, Japan author: - Takahito Kashiwabara title: Finite element analysis of a generalized Robin boundary value problem in curved domains based on the extension method --- # Introduction The generalized Robin boundary value problem for the Poisson equation introduced in [@KCDQ2015] is described by $$\begin{aligned} -\Delta u &= f \quad\text{in}\quad \Omega, \label{eq1: g-robin} \\ \frac{\partial u}{\partial \bm n} + u - \Delta_{\Gamma} u &= \tau \quad\text{on}\quad \Gamma := \partial\Omega, \label{eq2: g-robin}\end{aligned}$$ where $\Omega \subset \mathbb R^d$ is a smooth domain, $\bm n$ is the outer unit normal to $\Gamma$, and $\Delta_{\Gamma}$ stands for the Laplace--Beltrami operator defined on $\Gamma$. Since elliptic equations in the bulk domain and on the surface are coupled through the normal derivative, it can be regarded as one of the typical models of coupled bulk-surface PDEs, cf. [@EllRan2013]. It is also related to problems with dynamic boundary conditions [@KovLub2017] or to reduced-order models for fluid-structure interaction problems [@CDQ2014]. Throughout this paper, we exploit the standard notation of the Sobolev spaces in the domain and on the boundary, that is, $W^{m,p}(\Omega)$ and $W^{m,p}(\Gamma)$ (written as $H^m(\Omega)$ and $H^m(\Gamma)$ if $p = 2$), together with the non-standard ones $$H^m(\Omega; \Gamma) := \{ v \in H^m(\Omega) \mid v|_{\Gamma} \in H^m(\Gamma) \}, \quad \|v\|_{H^m(\Omega; \Gamma)} := \|v\|_{H^m(\Omega)} + \|v\|_{H^m(\Gamma)}.$$ According to [@KCDQ2015 Section 3.1], the weak formulation for ([\[eq1: g-robin\]](#eq1: g-robin){reference-type="ref" reference="eq1: g-robin"})--([\[eq2: g-robin\]](#eq2: g-robin){reference-type="ref" reference="eq2: g-robin"}) consists in finding $u \in H^1(\Omega; \Gamma)$ such that $$\label{eq: continuous problem} (\nabla u, \nabla v)_{\Omega} + (u, v)_{\Gamma} + (\nabla_{\Gamma}u, \nabla_{\Gamma}v)_{\Gamma} = (f, v)_{\Omega} + (\tau, v)_{\Gamma} \qquad \forall v \in H^1(\Omega; \Gamma),$$ where $(\cdot, \cdot)_\Omega$ and $(\cdot, \cdot)_\Gamma$ denote the $L^2(\Omega)$- and $L^2(\Gamma)$-inner products respectively, and $\nabla_\Gamma$ stands for the surface gradient along $\Gamma$. 
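On a polygonal domain, where $\Omega_h = \Omega$ and no domain perturbation occurs, the weak formulation above can be assembled directly with standard finite element software. The following minimal sketch is purely illustrative and not part of this paper: it assumes legacy FEniCS (the `fenics` Python package), takes the unit square with constant data $f$ and $\tau$, and shows how the three bilinear terms, including the Laplace--Beltrami part realized through the tangential gradient, translate into code.

```python
from fenics import *

# Unit square as a simple polygonal domain: Omega_h = Omega, Gamma_h = Gamma.
mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "P", 1)
u, v = TrialFunction(V), TestFunction(V)
n = FacetNormal(mesh)

f = Constant(1.0)    # assumed right-hand side in Omega
tau = Constant(0.5)  # assumed boundary datum on Gamma

def surface_grad(w):
    # tangential part of the gradient along the boundary
    return grad(w) - dot(grad(w), n) * n

a = (inner(grad(u), grad(v)) * dx                      # (grad u, grad v)_Omega
     + u * v * ds                                      # (u, v)_Gamma
     + inner(surface_grad(u), surface_grad(v)) * ds)   # (grad_G u, grad_G v)_Gamma
L = f * v * dx + tau * v * ds

uh = Function(V)
solve(a == L, uh)   # no essential boundary conditions are imposed
```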
It is shown in [@KCDQ2015] that this problem admits the following regularity structure for some constant $C > 0$: $$\|u\|_{H^m(\Omega; \Gamma)} \le C (\|f\|_{H^{m-2}(\Omega)} + \|\tau\|_{H^{m-2}(\Gamma)}) \qquad (m = 2, 3, \dots).$$ Moreover, the standard finite element analysis is shown to be applicable, provided that either $\Omega$ is a polyhedral domain and ([\[eq2: g-robin\]](#eq2: g-robin){reference-type="ref" reference="eq2: g-robin"}) is imposed on a whole edge or face in $\Gamma$, or $\Omega$ is smooth and can be exactly represented in the framework of the isogeometric analysis. For a more general smooth domain, a feasible setting is to exploit the $\mathbb P_k$-isoparametric finite element method, in which $\Gamma = \partial\Omega$ is approximated by piecewise polynomial (of degree $k$) boundary $\Gamma_h = \partial\Omega_h$. Because the approximate domain $\Omega_h$ does not agree with $\Omega$, its theoretical analysis requires estimation of errors owing to the discrepancy of the two domains, i.e., the domain perturbation. Such an error analysis is presented by [@KovLub2017] in a time-dependent case for $k = 1$ and by [@Ede2021] for $k \ge 1$, based on the *transformation method*. The name comes from the fact that they introduce a bijection $L_h : \Omega_h \to \Omega$ and "lift" a function $v : \Omega_h \to \mathbb R$ to $v \circ L_h^{-1} : \Omega \to \mathbb R$ defined in $\Omega$, thus transforming all functions so that they are defined in the original domain $\Omega$. In this setting, the finite element scheme reads, with a suitable choice of the finite element space $V_h \subset H^1(\Omega_h; \Gamma_h)$: find $u_h \in V_h$ such that $$\label{eq: scheme by transformation method} (\nabla u_h, \nabla v_h)_{\Omega_h} + (u_h, v_h)_{\Gamma_h} + (\nabla_{\Gamma_h}u_h, \nabla_{\Gamma_h}v_h)_{\Gamma_h} = (f^{-l}, v_h)_{\Omega_h} + (\tau^{-l}, v_h)_{\Gamma_h} \qquad \forall v_h \in V_h,$$ where $f^{-l} = f \circ L_h$ and $\tau^{-l} = \tau \circ L_h$ mean the inverse lifts of $f$ and $\tau$ respectively. Then the error between the approximate and exact solutions are defined as $u - u_h^l$ on $\Omega$ with $u_h^l := u_h \circ L_h^{-1}$. It is theoretically proved by [@Len86] and [@Ber1989] that such a transformation $L_h$ indeed exists. However, from the viewpoint of practical computation, it does not seem easy to construct $L_h$ for general domains in a concrete way. Therefore, it is non-trivial to numerically compute $f^{-l}, \tau^{-l}$, and $u - u_h^l$. There is a more classical and direct approach to treat the situation $\Omega \neq \Omega_h$, which we call the *extension method* (see e.g. [@Cia78 Section 4.5] and [@BaEl88]; a more recent result is found in [@ChiSai2023]). Namely, we extend $f$ and $\tau$ to some $\tilde f$ and $\tilde\tau$ which are defined in $\mathbb R^d$, preserving their smoothness (this can be justified by the Sobolev extension theorem or the trace theorem). Then the numerical scheme reads: find $u_h \in V_h$ such that $$(\nabla u_h, \nabla v_h)_{\Omega_h} + (u_h, v_h)_{\Gamma_h} + (\nabla_{\Gamma_h}u_h, \nabla_{\Gamma_h}v_h)_{\Gamma_h} = (\tilde f, v_h)_{\Omega_h} + (\tilde\tau, v_h)_{\Gamma_h} \qquad \forall v_h \in V_h,$$ and the error is defined as $\tilde u - u_h$ in the approximate domain $\Omega_h$. If $f$ and $\tau$ are given as entire functions, which is often the case in practical computation, then no special treatment for them is needed. 
Moreover, when computing errors numerically for verification purposes, it is usual to calculate $\tilde u - u_h$ in the computational domain $\Omega_h$ rather than $u - u_h^l$ in $\Omega$ simply because the former is easier to deal with. In view of these situations, we aim to justify the use of the extension method for problem ([\[eq1: g-robin\]](#eq1: g-robin){reference-type="ref" reference="eq1: g-robin"})--([\[eq2: g-robin\]](#eq2: g-robin){reference-type="ref" reference="eq2: g-robin"}) in the present paper. Considering $\Omega_h$ which approximates $\Omega$ by the $\mathbb P_k$-isoparametric elements, we establish in Section [4](#sec: error estimate){reference-type="ref" reference="sec: error estimate"} the following error estimates as the main result: $$\|\tilde u - u_h\|_{H^1(\Omega_h; \Gamma_h)} \le O(h^{k}), \qquad \|\tilde u - u_h\|_{L^2(\Omega_h; \Gamma_h)} \le O(h^{k+1}).$$ They do not follow from the results of [@KovLub2017] or [@Ede2021] directly since we need to estimate errors caused from a transformation that are absent in the transformation method. In addition, there is a completely non-trivial point that is specific to the boundary condition ([\[eq2: g-robin\]](#eq2: g-robin){reference-type="ref" reference="eq2: g-robin"}): even if $u \in H^2(\Omega)$ with $u|_\Gamma \in H^2(\Gamma)$, we may have only $\tilde u|_{\Gamma_h} \in H^{3/2}(\Gamma_h)$, which could cause loss in the rate of convergence on the boundary. To overcome this technical difficulty, a delicate analysis of interpolation errors on $\Gamma_h$, including the use of the Scott--Zhang interpolation operator on the boundary, is necessary as presented in Section [3](#sec: FEM){reference-type="ref" reference="sec: FEM"}. There is another delicate point when comparing a quantity defined in $\Gamma_h$ with that in $\Gamma$. For simplicity in the explanation, let $\Gamma_h$ be given as a piecewise linear ($k = 1$) approximation to $\Gamma$. If $d = 2$ and every node (vertex) of $\Gamma_h$ lies exactly on $\Gamma$, then the orthogonal projection $\bm p: \Gamma \to \Gamma_h$ is bijective and it is reasonable to set a local coordinate along each boundary element $S \in \mathcal S_h$ (see Subsection [2.2](#subsec: approximate domains){reference-type="ref" reference="subsec: approximate domains"} for the notation). Namely, $S$ and $\bm p^{-1}(S)$ are represented as graphs $(y_1, 0)$ and $(y_1, \varphi(y_1))$ respectively with a local coordinate $(y_1, y_2)$. However, if nodes do not belong to $\Gamma$, then $\bm p$ is no longer injective (see Figure [2](#fig1){reference-type="ref" reference="fig1"}). Furthermore, for $d \ge 3$ the same situation necessarily occurs---no matter if boundary nodes are in $\Gamma$ or not---since $\partial S$ (its dimension is $\ge 1$) is not exactly contained in $\Gamma$. Consequently, it is inconsistent in general to assume the following simultaneously: 1. each $S \in \mathcal S_h$ has one-to-one correspondence to some subset $\Gamma_S \subset \Gamma$; 2. both $S$ and $\Gamma_S$ admit graph representations in some rotated cartesian coordinate, whose domains of definition are the same; 3. $\Gamma = \bigcup_{S \in \mathcal S_h} \Gamma_S$ is a disjoint union, that is, $\{\Gamma_S\}_{S \in \mathcal S_h}$ forms an exact partition of $\Gamma$. We remark that this inconsistency is sometimes overlooked in literature considering $\Omega \neq \Omega_h$. [\[fig1\]]{#fig1 label="fig1"} ![$\Gamma$ and $\Gamma_h$ for $d = 2$ and $k = 1$. 
Left: if $\partial S \not\subset \Gamma$, $\bm p$ is not injective (in the red part) and property (iii) fails to hold. Right: $\bm\pi(S)$ and $S$ are parametrized over the common domain $S'$. The representation of $\bm\pi(S)$ is a graph but that of $S$ is not.](fig1.pdf "fig:"){#fig1 width="7cm"} ![$\Gamma$ and $\Gamma_h$ for $d = 2$ and $k = 1$. Left: if $\partial S \not\subset \Gamma$, $\bm p$ is not injective (in the red part) and property (iii) fails to hold. Right: $\bm\pi(S)$ and $S$ are parametrized over the common domain $S'$. The representation of $\bm\pi(S)$ is a graph but that of $S$ is not.](fig2.pdf "fig:"){#fig1 width="7cm"} To address the issue, we utilize the orthogonal projection $\bm\pi : \Gamma_h \to \Gamma$ (its precise definition is given in Subsection [2.3](#subsec: local coordinate){reference-type="ref" reference="subsec: local coordinate"}) instead of $\bm p$. This map is bijective as long as $\Gamma_h$ is close enough to $\Gamma$, so that properties (i) and (iii) hold with $\Gamma_S = \bm\pi(S)$. Then we set a local coordinate along $\bm\pi(S)$ and parametrize $S$ through $\bm\pi$ with the same domain as in Figure [2](#fig1){reference-type="ref" reference="fig1"}, avoiding the inconsistency above (we do not rely on a graph representation of $S$ in evaluating surface integrals etc.). Finally, in Appendix [8](#sec: estimate in exact domain){reference-type="ref" reference="sec: estimate in exact domain"}, considering the so-called natural extension of $u_h$ to $\Omega$ denoted by $\bar u_h$, we also prove that $u - \bar u_h$ converges to $0$ at the optimal rate in $H^1(\Omega; \Gamma)$ and $L^2(\Omega; \Gamma)$ (actually there is some abuse of notation here; see Remark [Remark 9](#rem: abuse of notation){reference-type="ref" reference="rem: abuse of notation"}). This result may be regarded as an extension of [@Ric2017 Section 4.2.3], which discussed a Dirichlet problem for $d = 2$, to a more general setting. Whereas it is of interest mainly from the mathematical point of view, it justifies calculating errors in approximate domains $\Omega_h, \Gamma_h$ based on extensions to estimate the rate of convergence in the original domains $\Omega, \Gamma$. # Approximation and perturbation of domains ## Assumptions on $\Omega$ Let $\Omega \subset \mathbb R^d \, (d \ge 2)$ be a bounded domain of $C^{k+1,1}$-class ($k \ge 1$), with $\Gamma := \partial\Omega$. Then there exist a system of local coordinates $\{(U_r, \bm y_r, \varphi_r)\}_{r=1}^M$ such that $\{U_r\}_{r=1}^M$ forms an open covering of $\Gamma$, $\bm y_r = {}^t(y_{r1}, \dots, y_{rd-1}, y_{rd}) = {}^t(\bm y_r', y_{rd})$ is a rotated coordinate of $\bm x$, and $\varphi_r: \Delta_r \to \mathbb R$ gives a graph representation $\bm \Phi_r(\bm y_r') := {}^t(\bm y_r', \varphi_r(\bm y_r'))$ of $\Gamma \cap U_r$, where $\Delta_r$ is an open cube in $\mathbb R^{N-1}$. Because $C^{k, 1}(\Delta_r) = W^{k+1, \infty}(\Delta_r)$, we may assume that $$\|(\nabla')^{m} \varphi_r\|_{L^\infty(\Delta')} \le C \quad (m = 0, \dots, k+1, \; r = 1, \dots, M)$$ for some constant $C > 0$, where $\nabla'$ means the gradient with respect to $\bm y_r'$. We also introduce a notion of tubular neighborhoods $\Gamma(\delta) := \{x\in\mathbb R^N \,:\, \operatorname{dist}(x, \Gamma) \le \delta\}$. 
It is known that (see [@GiTr98 Section 14.6]) there exists $\delta_0>0$, which depends on the $C^{1,1}$-regularity of $\Omega$, such that each $\bm x \in \Gamma(\delta_0)$ admits a unique representation $$\bm x = \bar{\bm x} + t \bm n(\bar{\bm x}), \qquad \bar{\bm x} \in \Gamma, \, t \in [-\delta_0, \delta_0].$$ We denote the maps $\Gamma(\delta_0)\to \Gamma$; $\bm x\mapsto\bar{\bm x}$ and $\Gamma(\delta_0)\to \mathbb R$; $\bm x \mapsto t$ by $\bm\pi(\bm x)$ and $d(\bm x)$, respectively (actually, $\bm\pi$ is an orthogonal projection to $\Gamma$ and $d$ agrees with the signed-distance function). The regularity of $\Omega$ is transferred to that of $\bm\pi$, $d$, and $\bm n$ (cf. [@DeZo11 Section 7.8]). In particular, $\bm n \in \bm C^{k, 1}(\Gamma)$. ## Assumptions on approximate domains {#subsec: approximate domains} We make the following assumptions (H1)--(H8) on finite element partitions and approximate domains. First we introduce a regular family of triangulations $\{\tilde{\mathcal T}_h\}_{h \downarrow 0}$ of *straight $d$-simplices* and define the set of nodes corresponding to the standard $\mathbb P_k$-finite element. 1. Every $T \in \mathcal{\tilde T}_h$ is affine-equivalent to the standard closed simplex $\hat T$ of $\mathbb R^d$, via the isomorphism $\tilde{\bm F}_{T}(\hat{\bm x}) = B_{T} \hat{\bm x} + \bm b_{T}$. The set $\tilde{\mathcal T}_h$ is mutually disjoint, that is, the intersection of every two different elements is either empty or agrees with their common face of dimension $\le d - 1$. 2. $\{\tilde{\mathcal T}_h\}_{h \downarrow 0}$ is regular in the sense that $$h_T \le C \rho_T \quad (\forall h > 0, \, \forall T \in \mathcal T_h),$$ where $h_T$ and $\rho_T$ stand for the diameter of the smallest ball containing $T$ and that of the largest ball contained $T$, respectively. 3. We let $\hat\Sigma_k = \{\hat{\bm a}_i\}_{i=1}^{N_k}$ denote the nodes in $\hat T$ of the continuous $\mathbb P_k$-finite element (see e.g. [@Cia78 Section 2.2]). The nodal basis functions $\hat\phi_i \in \mathbb P_k(\hat T)$, also known as the shape functions, are then defined by $\hat\phi_i(\hat{\bm a}_j) = \delta_{ij}$ (the Kronecker delta) for $i, j = 1, \dots, N_k$. **Remark 1**. If $\hat T$ is chosen as the standard $d$-simplex, i.e., $\hat T = \{(\hat x_1, \dots, \hat x_d) \in \mathbb R^d \mid x_1 \ge 0, \dots, x_d \ge 0, \hat x_1 + \cdots + \hat x_d \le 1\}$, then the standard position of the nodes for the $\mathbb P_k$-finite element is specified as $\hat\Sigma_k = \{ (\hat i_1/k, \dots, \hat i_d/k) \in \hat T \mid \hat i_1, \dots, \hat i_d \in \mathbb N_{\ge 0} \}$. We now introduce a partition into $\mathbb P_k$-isoparametric finite elements, denoted by $\mathcal T_h$, from $\tilde{\mathcal T}_h$, which results in approximate domains $\Omega_h$. We assume that $\Omega_h$ is a perturbation of a polyhedral domain. 4. [\[H4\]]{#H4 label="H4"} For $\tilde T \in \tilde{\mathcal T}_h$ we define a parametric map $\bm F \in [\mathbb P_k(\hat T)]^d$ by $$\bm F(\hat{\bm x}) = \sum_{i=1}^{N_k} \bm a_i \hat\phi_i(\hat{\bm x}),$$ where the "mapped nodes" $\bm a_i \in \mathbb R^d \, (i = 1, \dots, N_k)$ satisfy $$|\bm a_i - \bm F_{\tilde T}(\hat{\bm a}_i)| \le Ch_{\tilde T}^2.$$ If $h_{\tilde T}$ is small such $\bm F$ becomes diffeomorphic on $\hat T$ (see [@CiaRav1972 Theorem 3]), and we set $T := \bm F(\hat T)$. For convenience in the notation, henceforth we write $\bm F$ as $\bm F_T$, $\tilde{\bm F}_{\tilde T}$ as $\tilde{\bm F}_T$, and $h_{\tilde T}$ as $h_T$. 5. 
The partition $\mathcal T_h$ is defined as the set of $T$ constructed above. We define $\Omega_h$ to be the interior of the union of $\mathcal T_h$; in particular, $\overline\Omega_h = \bigcup_{T \in \mathcal T_h} T$. 6. [\[H6\]]{#H6 label="H6"} $\{\mathcal T_h\}_{h \downarrow 0}$ is regular of order $k$ in the sense of [@Ber1989 Definition 3.2], that is, $$\|\nabla_{\hat{\bm x}}^m \bm F_T\|_{L^\infty(\hat T)} \le C\|B_T\|_{\mathcal L(\mathbb R^d, \mathbb R^d)}^m \le Ch_T^m \qquad (T \in \mathcal T_h, \quad m = 2, \dots, k+1),$$ where $C$ is independent of $h$ (if $m = k + 1$ the left-hand side is obviously $0$). **Remark 2**. (i) Throughout this paper, we assume without special emphasis that $h$ is sufficiently small; especially that $h \le 1$. \(ii\) [\[H6\]](#H6){reference-type="ref" reference="H6"} automatically holds if $\bm F_T$ is an $O(h^k)$-perturbation of $\tilde{\bm F}_T$ (see [@CiaRav1972 p. 239]). It is a reasonable assumption for $k = 2$, but is not compatible with [\[H8\]](#H8){reference-type="ref" reference="H8"} below for $k \ge 3$, which is why we presume [\[H6\]](#H6){reference-type="ref" reference="H6"} independently. \(iii\) [@Len86] presented a procedure to construct $\mathcal T_h$ satisfying [\[H4\]](#H4){reference-type="ref" reference="H4"}--[\[H6\]](#H6){reference-type="ref" reference="H6"} for general $d$ and $k$, which is done inductively on $k$. In order to get, e.g., cubic isoparametric partitions with regularity of order 3, one needs to know a quadratic partition of order 2 in advance. Then, a kind of perturbation is added to the quadratic map to satisfy the condition of order 3 (see [@Len86 eq. (22)]). \(iv\) As a result of [\[H4\]](#H4){reference-type="ref" reference="H4"}--[\[H6\]](#H6){reference-type="ref" reference="H6"}, for $T \in \mathcal T_h$ we have (see [@CiaRav1972 Theorems 3 and 4] and [@Len86 Theorem 1]): $$\begin{gathered} \|\nabla_{\hat{\bm x}} \bm F_T\|_{L^\infty(\hat T)} \le C\|B_T\|_{\mathcal L(\mathbb R^d, \mathbb R^d)} \le Ch_T, \\ C_1 h_T^d \le |\operatorname{det} (\nabla_{\hat{\bm x}} \bm F_T)| \le C_2 h_T^d, \\ \|\nabla_{\bm x}^m \bm F_T^{-1}\|_{L^\infty(T)} \le C \|B_T^{-1}\|_{\mathcal L(\mathbb R^d, \mathbb R^d)}^m \le C h_T^{-m} \quad (m = 1, \dots, k+1). \end{gathered}$$ We next introduce descriptions on boundary meshes. Setting $\Gamma_h := \partial\Omega_h$, we define the boundary mesh $\mathcal S_h$ inherited from $\mathcal T_h$ by $$\mathcal S_h = \{ S \subset \Gamma_h \mid \text{$S = \bm F_T(\hat S)$ for some $T \in \mathcal T_h$, where $\hat S \subset \partial\hat T$ is a $(d-1)$-face of $\hat T$} \}.$$ Then we have $\Gamma_h = \bigcup_{S \in \mathcal T_h} S$ (disjoint union). Each boundary element $S \in \mathcal S_h$ admits a unique $T \in \mathcal T_h$ such that $S \subset \partial T$, which is denoted by $T_S$. We let $\bm b_r : U_r \to \mathbb R^{d-1}; {}^t(\bm y_r', y_{rd}) \mapsto \bm y_r'$ denote the projection to the base set. Let us now assume that $\Omega$ is approximated by $\Omega_h$ in the following sense. 7. [\[asmp: omit r\]]{#asmp: omit r label="asmp: omit r"} $\Gamma_h$ is covered by $\{U_r\}_{r=1}^M$, and each portion $\Gamma_h \cap U_r$ is represented as a graph $(\bm y_r', \varphi_{rh}(\bm y_r'))$, where $\varphi_{rh}$ is a continuous function defined in $\overline{\Delta_r}$. Moreover, each $S \in \mathcal S_h$ is contained in some $U_r$. We fix such $r$ and agree to omit the subscript $r$ for simplicity when there is no fear of confusion. 8. 
[\[H8\]]{#H8 label="H8"} The restriction of $\varphi_{rh}$ to $\bm b_r(S)$ for each $S \in \mathcal S_h$ is a polynomial function of degree $\le k$. Moreover, $\varphi_{rh}$ approximates $\varphi_r$ as accurately as a general $\mathbb P_k$-interpolation does; namely, we assume that $$\begin{aligned} \|\varphi_r - \varphi_{rh}\|_{L^\infty(\bm b_r(S))} &\le Ch_{S}^{k+1} =: \delta_S, \label{eq: phi - phih} \\ \|(\nabla')^m (\varphi_r - \varphi_{rh})\|_{L^\infty(\bm b_r(S))} &\le Ch_{S}^{k+1-m} \qquad (m = 1, \dots, k + 1), \label{eq: derivatives of phi - phih} \end{aligned}$$ where the boundary mesh size is defined as $h_S := h_{T_S}$. These assumptions essentially imply that the local coordinate system for $\Omega$ is compatible with $\{\Omega_h\}_{h \downarrow 0}$ and that $\Gamma_h$ is a piecewise $\mathbb P_k$ interpolation of $\Gamma$. Setting $\delta := \max_{S \in \mathcal S_h} \delta_S$, we have $\operatorname{dist}(\Gamma, \Gamma_h) \le \delta < \delta_0$ if $h$ is sufficiently small, so that $\bm\pi$ is well-defined on $\Gamma_h$. ## Local coordinates for $\Gamma$ and $\Gamma_h$ {#subsec: local coordinate} In [@KOZ16 Proposition 8.1], we proved that $\bm\pi|_{\Gamma_h}$ gives a homeomorphism (and element-wisely a diffeomorphism) between $\Gamma$ and $\Gamma_h$ provided $h$ is sufficiently small, taking advantage of the fact that $\Gamma_h$ can be regarded as a $\mathbb P_k$-interpolation of $\Gamma$ (there we assumed $k = 1$, but the method can be easily adapted to general $k \ge 1$). If we write its inverse map $\bm\pi^* : \Gamma\to\Gamma_h$ as $\bm\pi^*(\bm x) = \bar{\bm x} + t^*(\bar{\bm x}) \bm n(\bar{\bm x})$, then $t^*$ satisfies (cf. [@KOZ16 Proposition 8.2]) $$\label{eq: global t*} \| t^*\|_{L^\infty(\Gamma)} \le \delta, \qquad \|\nabla_\Gamma^m t^*\|_{L^\infty(\Gamma)} \le Ch^{k+1-m} \quad (m = 1, \dots, k + 1),$$ corresponding to ([\[eq: phi - phih\]](#eq: phi - phih){reference-type="ref" reference="eq: phi - phih"}) and ([\[eq: derivatives of phi - phih\]](#eq: derivatives of phi - phih){reference-type="ref" reference="eq: derivatives of phi - phih"}). Here, $\nabla_\Gamma$ means the surface gradient along $\Gamma$ and the constant depends only on the $C^{1,1}$-regularity of $\Omega$. This in particular implies that $\Omega_h\triangle\Omega := (\Omega_h\setminus\Omega) \cup (\Omega\setminus\Omega_h)$ and $\Gamma_h \cup \Gamma$ are contained in $\Gamma(\delta)$. We refer to $\Omega_h\triangle\Omega$, $\Gamma(\delta)$ and their subsets as *boundary-skin layers* or more simply as *boundary skins*. For $S\in \mathcal S_h$, we may assume that $S \cup \bm\pi(S)$ is contained in some local coordinate neighborhood $U_r$. As announced in [\[asmp: omit r\]](#asmp: omit r){reference-type="ref" reference="asmp: omit r"} above, we will omit the subscript $r$ in the subsequent argument. We define $$S' := \bm b(\bm\pi(S)) \quad (\text{note that it differs from $\bm b(S)$})$$ to be the common domain of parameterizations of $\bm\pi(S) \subset \Gamma$ and $S \subset \Gamma_h$. In fact, $\bm\Phi : S' \to \bm\pi(S)$ and $\bm\Phi_{h} := \bm\pi^* \circ \bm\Phi : S' \to S$ constitute smooth (at least $C^{k,1}$) bijections. 
We then obtain $\bm\pi^*(\bm\Phi(\bm z')) = \bm\Phi(\bm z') + t^*(\bm\Phi(\bm z')) \bm n(\bm\Phi(\bm z'))$ for $\bm z' \in S'$ and $$\|t^* \circ \bm\Phi\|_{L^\infty(S')} \le \delta_S, \qquad \|(\nabla')^m (t^* \circ \bm\Phi)\|_{L^\infty(S')} \le Ch_{S}^{k+1-m} \quad (m = 1, \dots, k + 1),$$ which are localized versions of ([\[eq: global t\*\]](#eq: global t*){reference-type="ref" reference="eq: global t*"}). Let us represent integrals associated with $S$ in terms of the local coordinates introduced above. First, surface integrals along $\bm\pi(S)$ and $S$ are expressed as $$\begin{aligned} \int_{\bm\pi(S)} v \, d\gamma = \int_{S'} v(\bm\Phi(\bm y')) \sqrt{\operatorname{det} G(\bm y')} \, d\bm y', \qquad \int_{S} v \, d\gamma_h = \int_{S'} v(\bm\Phi_h(\bm y')) \sqrt{\operatorname{det} G_h(\bm y')} \, d\bm y',\end{aligned}$$ where $G$ and $G_h$ denote the Riemannian metric tensors obtained from the parameterizations $\bm\Phi$ and $\bm\Phi_h$, respectively. Namely, for tangent vectors $\bm g_\alpha := \frac{\partial \bm\Phi}{\partial z_\alpha}$ and $\bm g_{h, \alpha} := \frac{\partial \bm\Phi_h}{\partial z_\alpha}$ $(\alpha = 1, \dots, d-1)$, the components of and $G$ and $G_h$, which are $(d-1)\times(d-1)$ matrices, are given by $$G_{\alpha \beta} = \bm g_\alpha \cdot \bm g_\beta, \qquad G_{h, \alpha \beta} = \bm g_{h, \alpha} \cdot \bm g_{h, \beta}.$$ The contravariant components of the metric tensors and the contravariant vectors on $\Gamma$ are defined as $$G^{\alpha \beta} = (G^{-1})_{\alpha \beta}, \qquad \bm g^{\alpha} = \sum_{\beta=1}^{d-1} G^{\alpha \beta} \bm g_\beta,$$ together with their counterparts $G_h^{\alpha, \beta}$ and $\bm g_h^\alpha$ on $\Gamma_h$. Then the surface gradients along $\Gamma$ and $\Gamma_h$ can be represented in the local coordinate as (see [@KCDQ2015 Lemma 2.1]) $$\label{eq: local representation of surface gradient} \nabla_{\Gamma} = \sum_{\alpha=1}^{d-1} \bm g^\alpha \frac{\partial}{\partial z_\alpha}, \qquad \nabla_{\Gamma_h} = \sum_{\alpha=1}^{d-1} \bm g_h^\alpha \frac{\partial}{\partial z_\alpha}.$$ In the same way as we did in [@KOZ16 Theorem 8.1], we can show $\|\bm g_\alpha - \bm g_{h,\alpha}\|_{L^\infty(S')} \le Ch_S^k$ and $\|G_{\alpha\beta} - G_{h, \alpha \beta}\|_{L^\infty(S')} \le C\delta_S$. We then have $\|G^{\alpha\beta} - G_h^{\alpha \beta}\|_{L^\infty(S')} \le C\delta_S$, because $$G_h^{-1} - G^{-1} = G^{-1}\underbrace{(G_h - G)}_{=O(\delta_S)}G_h^{-1}.$$ Note that the stability of $G_h^{-1}$ follows from the representation $G_h = G (I + G^{-1} X)$, with $X = G_h - G$ denoting a perturbation, together with a Neumann series argument. As a result, one also gets an error estimate for contravariant vectors, i.e., $\|\bm g^\alpha - \bm g_h^\alpha\|_{L^\infty(S')} \le Ch_S^k$. Derivative estimates for metric tensors and vectors can be derived as well for $m = 1, \dots, k$: $$\label{eq: derivative of G - Gh} \begin{aligned} \|G_{\alpha\beta} - G_{h, \alpha \beta}\|_{W^{m, \infty}(S')} &\le Ch_S^{k-m}, \qquad \|G^{\alpha\beta} - G_h^{\alpha \beta}\|_{W^{m, \infty}(S')} \le Ch_S^{k-m}, \qquad \\ \|\bm g_\alpha - \bm g_{h,\alpha}\|_{W^{m, \infty}(S')} &\le Ch_S^{k-m}, \qquad \|\bm g^\alpha - \bm g^{h,\alpha}\|_{W^{m, \infty}(S')} \le Ch_S^{k-m}. \end{aligned}$$ Next, let $\bm\pi(S, \delta) := \{\bar{\bm x} + t \bm n(\bar{\bm x}) \mid \bar{\bm x} \in \bm\pi(S), \; -\delta\le t\le \delta \}$ be a tubular neighborhood with the base $\bm\pi(S)$, and consider volume integrals over $\bm\pi(S, \delta)$. 
To this end we introduce a one-to-one transformation $\bm\Psi: S'\times [-\delta, \delta] \to \bm\pi(S, \delta)$ by $$\bm x = \bm\Psi(\bm z', t) := \bm\Phi(\bm z') + t \bm n(\bm\Phi(\bm z')) \Longleftrightarrow \bm z' = \bm b(\bm\pi(\bm x)), \; t = d(\bm x),$$ where we recall that $\bm b : \mathbb R^d \to \mathbb R^{d-1}$ is the projection. Then, by change of variables, we obtain $$\int_{\bm \pi(S,\delta)} v(\bm x) \, d\bm x = \int_{S' \times [-\delta, \delta]} v(\bm\Psi(\bm z', t)) |\operatorname{det} J(z', t)| \, d\bm z' dt,$$ where $J := \nabla_{(\bm z', t)} \bm\Psi$ denotes the Jacobi matrix of $\bm\Psi$. In the formulas above, $\operatorname{det}G$, $\operatorname{det}G_h$, and $\operatorname{det}J$ can be bounded, from above and below, by positive constants depending on the $C^{1,1}$-regularity of $\Omega$, provided $h$ is sufficiently small. In particular, we obtain the following equivalence estimates: $$\begin{aligned} &C_1 \int_{\bm\pi(S)} |v| \, d\gamma \le \int_{S'} |v \circ \bm\Phi| \, d\bm z' \le C_2 \int_{\bm\pi(S)} |v| \, d\gamma, \label{eq: local surface integral on Gamma} \\ &C_1 \int_{S} |v| \, d\gamma_h \le \int_{S'} |v \circ \bm\Phi_h| \, d\bm z' \le C_2 \int_{S} |v| \, d\gamma_h, \label{eq: local surface integral on Gammah} \\ &C_1 \int_{\bm\pi(S, \delta)} |v| \, d\bm x \le \int_{S' \times [-\delta, \delta]} |v \circ \bm\Psi| \,d\bm z' dt \le C_2 \int_{\bm\pi(S, \delta)} |v| \, d\bm x. \label{eq: local tubular neighborhood equivalence}\end{aligned}$$ We remark that the width $\delta$ in ([\[eq: local tubular neighborhood equivalence\]](#eq: local tubular neighborhood equivalence){reference-type="ref" reference="eq: local tubular neighborhood equivalence"}) may be replaced with arbitrary $\delta' \in [\delta_S, \delta]$. We also state an equivalence relation between $W^{m,p}(\Gamma)$ and $W^{m,p}(\Gamma_h)$ when the transformation $\bm\pi$ is involved. **Lemma 1**. *Let $m =0, \dots, k + 1$ and $1\le p\le \infty$. For $S \in \mathcal S_h$ and $v \in W^{m, p}(\bm\pi(S))$, we have $$\begin{aligned} C_1 \|v\|_{L^p(\bm\pi(S))} &\le \|v \circ \bm\pi\|_{L^p(S)} \le C_2 \|v\|_{L^p(\bm\pi(S))}, \label{eq1: equivalence of surface integrals} \\ C_1 \|\nabla_{\Gamma} v\|_{L^p(\bm\pi(S))} &\le \|\nabla_{\Gamma_h} (v \circ \bm\pi)\|_{L^p(S)} \le C_2 \|\nabla_{\Gamma} v\|_{L^p(\bm\pi(S))}, \label{eq2: equivalence of surface integrals} \\ C_1 \|v\|_{W^{m, p}(\bm\pi(S))} &\le \|v \circ \bm\pi\|_{W^{m, p}(S)} \le C_2 \|v\|_{W^{m, p}(\bm\pi(S))} \quad (m \ge 2). \label{eq3: equivalence of surface integrals} \end{aligned}$$* *Proof.* Estimate ([\[eq1: equivalence of surface integrals\]](#eq1: equivalence of surface integrals){reference-type="ref" reference="eq1: equivalence of surface integrals"}) follows from ([\[eq: local surface integral on Gamma\]](#eq: local surface integral on Gamma){reference-type="ref" reference="eq: local surface integral on Gamma"}) and ([\[eq: local surface integral on Gammah\]](#eq: local surface integral on Gammah){reference-type="ref" reference="eq: local surface integral on Gammah"}) combined with $\bm\Phi_h = \bm\pi^* \circ \bm\Phi \Longleftrightarrow \bm\pi \circ \bm\Phi_h = \bm\Phi$. 
To obtain derivative estimates ([\[eq2: equivalence of surface integrals\]](#eq2: equivalence of surface integrals){reference-type="ref" reference="eq2: equivalence of surface integrals"}) and ([\[eq3: equivalence of surface integrals\]](#eq3: equivalence of surface integrals){reference-type="ref" reference="eq3: equivalence of surface integrals"}), it suffices to notice that we can invert ([\[eq: local representation of surface gradient\]](#eq: local representation of surface gradient){reference-type="ref" reference="eq: local representation of surface gradient"}) as $$\frac{\partial}{\partial z_\alpha} = \sum_{\beta = 1}^{d-1} G_{\alpha\beta} (\bm g^\beta \cdot \nabla_{\bm\pi(S)}), \qquad \frac{\partial}{\partial z_\alpha} = \sum_{\beta = 1}^{d-1} G_{h, \alpha\beta} (\bm g_h^\beta \cdot \nabla_S),$$ and that the derivatives of $G_{h, \alpha \beta}, G_h^{\alpha \beta}, \bm g_{h, \alpha}, \bm g_h^\alpha$ up to the $k$-th order are bounded independently of $h$ in $L^\infty(S')$, due to ([\[eq: derivative of G - Gh\]](#eq: derivative of G - Gh){reference-type="ref" reference="eq: derivative of G - Gh"}) and $h_S \le 1$. ◻ ## Estimates for domain perturbation errors We recall the following boundary-skin estimates for $S \in \mathcal S_h$, $1\le p\le \infty$, and $v \in W^{1,p}(\Omega \cup \Gamma(\delta))$ (note that $\Omega \cup \Gamma(\delta) \supset \Omega \cup \Omega_h$): $$\begin{aligned} &\left| \int_{\bm\pi(S)} v\,d\gamma - \int_{S} v\circ\bm\pi\,d\gamma_h \right| \le C\delta_S \|v\|_{L^1(\bm\pi(S))}, \label{eq1: boundary-skin estimates} \\ &\|v\|_{L^p(\bm\pi(S, \delta'))} \le C(\delta'^{1/p} \|v\|_{L^p(\bm\pi(S))} + \delta' \|\nabla v\|_{L^p(\bm\pi(S, \delta'))}) \quad (\delta' \in [\delta_S, \delta]), \label{eq2: boundary-skin estimates} \\ &\|v - v\circ\bm\pi\|_{L^p(S)} \le C\delta_S^{1-1/p} \|\nabla v\|_{L^p(\bm\pi(S, \delta_S))}. \label{eq3: boundary-skin estimates}\end{aligned}$$ The proofs are given in [@KOZ16 Theorems 8.1--8.3] for the case $k = 1$, which can be extended to $k \ge 2$ without essential difficulty. As a version of ([\[eq1: boundary-skin estimates\]](#eq1: boundary-skin estimates){reference-type="ref" reference="eq1: boundary-skin estimates"})--([\[eq3: boundary-skin estimates\]](#eq3: boundary-skin estimates){reference-type="ref" reference="eq3: boundary-skin estimates"}), we also have $$\begin{aligned} \left| \int_{\bm\pi(S)} v\circ\bm\pi^* \,d\gamma - \int_{S} v \,d\gamma_h \right| &\le C\delta_S \|v\|_{L^1(S)}, \notag \\ \|v\|_{L^p(\bm\pi(S, \delta))} &\le C(\delta_S^{1/p} \|v\|_{L^p(S)} + \delta_S \|\nabla v\|_{L^p(\bm\pi(S, \delta))}), \label{eq2': boundary-skin estimates} \\ \|v\circ\bm\pi^* - v\|_{L^p(\bm\pi(S))} &\le C\delta_S^{1-1/p} \|\nabla v\|_{L^p(\bm\pi(S, \delta))}. \notag\end{aligned}$$ Adding up these for $S \in \mathcal S_h$ yields corresponding global estimates on $\Gamma$ or $\Gamma(\delta)$. The following estimate limited to $\Omega_h \setminus \Omega$, rather than the whole boundary skin $\Gamma(\delta)$, also holds: $$\label{eq: RHS with Omegah minus Omega} \|v\|_{L^p(\Omega_h \setminus \Omega)} \le C(\delta^{1/p} \|v\|_{L^p(\Gamma_h)} + \delta \|\nabla v\|_{L^p(\Omega_h \setminus \Omega)}),$$ which is proved in [@KasKem2020a Lemma A.1]. 
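To see the first-order gain in $\delta$ encoded in these estimates, the following small check (again a sketch of our own, assuming the unit circle for $\Gamma$, its piecewise-linear interpolation for $\Gamma_h$, and $\bm\pi(\bm x) = \bm x/|\bm x|$; the helper name `skin_error` is hypothetical) computes $\|v - v\circ\bm\pi\|_{L^2(\Gamma_h)}$ for a smooth $v$ by a midpoint rule on each straight segment. Consistently with the global form of ([\[eq3: boundary-skin estimates\]](#eq3: boundary-skin estimates){reference-type="ref" reference="eq3: boundary-skin estimates"}), the error behaves like $\delta = O(h^2)$.

```python
import numpy as np

# Illustrative check (not part of the proofs): Gamma is the unit circle,
# Gamma_h its piecewise-linear interpolation with N vertices, and
# pi(x) = x/|x| the closest-point projection.  We approximate
# ||v - v o pi||_{L^2(Gamma_h)} by a midpoint rule on each segment.
v = lambda x, y: np.exp(x) * np.sin(2.0 * y)       # an arbitrary smooth function

def skin_error(N, nq=50):
    thetas = 2.0 * np.pi * np.arange(N + 1) / N
    verts = np.column_stack((np.cos(thetas), np.sin(thetas)))
    s = (np.arange(nq) + 0.5) / nq                 # midpoint quadrature on [0, 1]
    err2 = 0.0
    for p, q in zip(verts[:-1], verts[1:]):
        pts = np.outer(1.0 - s, p) + np.outer(s, q)                 # points on the segment S
        proj = pts / np.linalg.norm(pts, axis=1, keepdims=True)     # pi(x) on Gamma
        diff = v(pts[:, 0], pts[:, 1]) - v(proj[:, 0], proj[:, 1])
        err2 += np.sum(diff**2) * np.linalg.norm(q - p) / nq
    return np.sqrt(err2)

for N in (32, 64, 128, 256):
    h = 2.0 * np.sin(np.pi / N)                    # length of each segment of Gamma_h
    print(f"N = {N:4d}   ||v - v o pi||_L2 = {skin_error(N):.3e}   h^2 = {h * h:.3e}")
```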
Finally, denoting by $\bm n_h$ the outward unit normal to $\Gamma_h$, we notice that its error compared with $\bm n$ is estimated as (see [@KOZ16 Lemma 9.1]) $$\label{eq: n - nh} \|\bm n\circ\bm\pi - \bm n_h\|_{L^\infty(S)} \le Ch_S^k.$$ We now state a version of ([\[eq3: boundary-skin estimates\]](#eq3: boundary-skin estimates){reference-type="ref" reference="eq3: boundary-skin estimates"}) which involves the surface gradient. The proof will be given in Appendix [6](#apx: nablaGammah(v - vpi)){reference-type="ref" reference="apx: nablaGammah(v - vpi)"}. **Lemma 2**. *Let $S \in \mathcal S_h$ and $v \in W^{2,p}(\Omega \cup \Gamma(\delta))$ for $1\le p\le \infty$. Then we have $$\begin{aligned} \|\nabla_{\Gamma_h} (v - v\circ\bm\pi) \|_{L^p(S)} &\le Ch_S^k \|\nabla v\|_{L^p(S)} + C \delta_S^{1-1/p} \|\nabla^2 v\|_{L^p(\bm\pi(S, \delta_S))}, \label{eq1: conclusion of lemma which bounds nabla(v - v circ pi)} \\ \|\nabla_{\Gamma_h} (v - v\circ\bm\pi) \|_{L^p(S)} &\le Ch_S^k \|\nabla v\|_{L^p(\bm\pi(S))} + C \delta_S^{1-1/p} \|\nabla^2 v\|_{L^p(\bm\pi(S, \delta_S))}. \label{eq2: conclusion of lemma which bounds nabla(v - v circ pi)} \end{aligned}$$* **Corollary 1**. *Let $m = 0, 1$ and assume that $v \in H^{2}(\Omega \cup \Gamma(\delta))$ if $k = 1$ and that $v \in H^{3}(\Omega \cup \Gamma(\delta))$ if $k \ge 2$. Then we have $$\| v - v\circ\bm\pi\|_{H^m(\Gamma_h)} \le Ch^{k+1-m} \|v\|_{H^{\min\{k+1, 3\}}(\Omega \cup \Gamma(\delta))}.$$* *Proof.* By virtue of ([\[eq2: boundary-skin estimates\]](#eq2: boundary-skin estimates){reference-type="ref" reference="eq2: boundary-skin estimates"}) and ([\[eq3: boundary-skin estimates\]](#eq3: boundary-skin estimates){reference-type="ref" reference="eq3: boundary-skin estimates"}) (more precisely, their global versions) we have $$\begin{aligned} \| v - v\circ\bm\pi\|_{L^2(\Gamma_h)} &\le C \delta^{1/2} \|\nabla v\|_{L^2(\Gamma(\delta))} \le C \delta^{1/2} (\delta^{1/2} \|\nabla v\|_{L^2(\Gamma)} + \delta \|\nabla^2 v\|_{L^2(\Gamma(\delta))}) \le C \delta \|v\|_{H^2(\Omega \cup \Gamma(\delta))}. \end{aligned}$$ Similarly, we see from ([\[eq2: conclusion of lemma which bounds nabla(v - v circ pi)\]](#eq2: conclusion of lemma which bounds nabla(v - v circ pi)){reference-type="ref" reference="eq2: conclusion of lemma which bounds nabla(v - v circ pi)"}) that $$\begin{aligned} \|\nabla_{\Gamma_h} ( v - v\circ\bm\pi)\|_{L^2(\Gamma_h)} &\le C h^k (\|\nabla v\|_{L^2(\Gamma)} + \delta^{1/2} \|\nabla^2 v\|_{L^2(\Gamma(\delta))} ) \\ &\le \begin{cases} Ch \|v\|_{H^2(\Omega)} + Ch \|\nabla^2 v\|_{L^2(\Omega \cup \Gamma(\delta))} & (k = 1) \\ Ch^k \|v\|_{H^2(\Omega)} + C\delta^{1/2} (\delta^{1/2}\|\nabla^2 v\|_{L^2(\Gamma)} + \delta \|\nabla^3 v\|_{L^2(\Gamma(\delta))}) & (k \ge 2) \end{cases} \\ &\le \begin{cases} Ch \|v\|_{H^2(\Omega \cup \Gamma(\delta))} & (k = 1) \\ Ch^k \|v\|_{H^3(\Omega \cup \Gamma(\delta))} & (k \ge 2), \end{cases} \end{aligned}$$ where we have used $\delta = Ch^{k+1}$ and $h \le 1$. ◻ Below several lemmas are introduced to address errors related with the $L^2$-inner product on surfaces. **Lemma 3**. 
*For $u, v \in H^2(\Omega \cup \Gamma(\delta))$ we have $$|(u, v)_{\Gamma_h} - (u, v)_\Gamma| \le C \delta \|u\|_{H^2(\Omega \cup \Gamma(\delta))} \|v\|_{H^2(\Omega \cup \Gamma(\delta))}.$$*

*Proof.* Observe that $$(u, v)_{\Gamma_h} - (u, v)_\Gamma = (u - u\circ\bm\pi, v)_{\Gamma_h} + \big[ (u\circ\bm\pi, v)_{\Gamma_h} - (u, v\circ\bm\pi^*)_\Gamma \big] + (u, v\circ\bm\pi^* - v)_\Gamma.$$ The first term on the right-hand side is bounded by $C \delta \|u\|_{H^2(\Omega \cup \Gamma(\delta))} \|v\|_{L^2(\Gamma_h)}$ due to Corollary [Corollary 1](#cor: u - u circ pi on Gammah){reference-type="ref" reference="cor: u - u circ pi on Gammah"}. The third term can be treated similarly. From ([\[eq1: boundary-skin estimates\]](#eq1: boundary-skin estimates){reference-type="ref" reference="eq1: boundary-skin estimates"}) and ([\[eq1: equivalence of surface integrals\]](#eq1: equivalence of surface integrals){reference-type="ref" reference="eq1: equivalence of surface integrals"}) the second term is bounded by $$C\delta \|u (v\circ\bm\pi^*)\|_{L^1(\Gamma)} \le C \delta \|u\|_{L^2(\Gamma)} \|v\circ\bm\pi^*\|_{L^2(\Gamma)} \le C \delta \|u\|_{L^2(\Gamma)} \|v\|_{L^2(\Gamma_h)}.$$ Using trace inequalities on $\Gamma$ and $\Gamma_h$, we arrive at the desired estimate. ◻

**Lemma 4**. *For $u \in H^2(\Gamma)$ and $v \in H^1(\Gamma_h)$ we have $$\big| ( (\Delta_\Gamma u)\circ\bm\pi, v )_{\Gamma_h} + (\nabla_{\Gamma_h}(u\circ\bm\pi), \nabla_{\Gamma_h}v)_{\Gamma_h} \big| \le C\delta (\|u\|_{H^2(\Gamma)} \|v\|_{L^2(\Gamma_h)} + \|\nabla_\Gamma u\|_{L^2(\Gamma)} \|\nabla_{\Gamma_h} v\|_{L^2(\Gamma_h)}).$$*

*Proof.* Using an integration-by-parts formula on $\Gamma$, we decompose the left-hand side as $$\begin{aligned} &( (\Delta_\Gamma u)\circ\bm\pi, v )_{\Gamma_h} + (\nabla_{\Gamma_h}(u\circ\bm\pi), \nabla_{\Gamma_h}v)_{\Gamma_h} \\ = \; &\big[ ( (\Delta_\Gamma u)\circ\bm\pi, v )_{\Gamma_h} - (\Delta_\Gamma u, v\circ\bm\pi^*)_\Gamma \big] + \big[ -(\nabla_\Gamma u, \nabla_\Gamma(v\circ\bm\pi^*))_\Gamma + (\nabla_{\Gamma_h}(u\circ\bm\pi), \nabla_{\Gamma_h}v)_{\Gamma_h} \big] \\ =: \; &I_1 + I_2. \end{aligned}$$ By ([\[eq1: boundary-skin estimates\]](#eq1: boundary-skin estimates){reference-type="ref" reference="eq1: boundary-skin estimates"}) and ([\[eq1: equivalence of surface integrals\]](#eq1: equivalence of surface integrals){reference-type="ref" reference="eq1: equivalence of surface integrals"}), $|I_1| \le C\delta \|(\Delta_\Gamma u)\circ\bm\pi\|_{L^2(\Gamma_h)} \|v\|_{L^2(\Gamma_h)} \le C\delta \|u\|_{H^2(\Gamma)} \|v\|_{L^2(\Gamma_h)}$. For $I_2$, we represent the surface integrals on $S$ and $\bm\pi(S)$ in the local coordinates as follows: $$\begin{aligned} \int_S \nabla_{\Gamma_h}(u\circ\bm\pi) \cdot \nabla_{\Gamma_h} v \, d\gamma_h &= \int_{S'} \sum_{\alpha,\beta} \partial_\alpha (u \circ \bm\Phi) \partial_\beta (v \circ \bm\Phi_h) \, G_h^{\alpha\beta} \sqrt{\operatorname{det}G_h} \, dz', \\ \int_{\bm\pi(S)} \nabla_{\Gamma}u \cdot \nabla_{\Gamma} (v\circ\bm\pi^*) \, d\gamma &= \int_{S'} \sum_{\alpha,\beta} \partial_\alpha (u \circ \bm\Phi) \partial_\beta (v \circ \bm\Phi_h) \, G^{\alpha\beta} \sqrt{\operatorname{det}G} \, dz'.
\end{aligned}$$ Since $\|G - G_h\|_{L^\infty(S')} \le C\delta_S$, their difference is estimated by $$C\delta_S \|\nabla_{\bm z'}(u\circ\bm\Phi)\|_{L^2(S')} \|\nabla_{\bm z'}(v\circ\bm\Phi_h)\|_{L^2(S')} \le C\delta_S \|\nabla_\Gamma u\|_{L^2(\bm\pi(S))} \|\nabla_{\Gamma_h} v\|_{L^2(S)}.$$ Adding this up for $S \in \mathcal S_h$ gives $|I_2| \le C\delta \|\nabla_\Gamma u\|_{L^2(\Gamma)} \|\nabla_{\Gamma_h} v\|_{L^2(\Gamma_h)}$, and this completes the proof. ◻ **Remark 3**. (i) Since $\Gamma_h$ itself is not $C^{1,1}$-smooth globally, $(-\Delta_{\Gamma_h} u, v) = (\nabla_{\Gamma_h} u, \nabla_{\Gamma_h} v)$ does not hold in general (see [@KCDQ2015 Lemma 3.1]). \(ii\) An argument similar to the proof above shows, for $u, v \in H^1(\Gamma)$, $$\big| ( \nabla_{\Gamma_h}(u\circ\bm\pi), \nabla_{\Gamma_h}(v\circ\bm\pi) )_{\Gamma_h} - (\nabla_\Gamma u, \nabla_\Gamma v)_\Gamma \big| \le C \delta \|\nabla_\Gamma u\|_{L^2(\Gamma)} \|\nabla_\Gamma v\|_{L^2(\Gamma)}. \label{eq: error between nabla Gammah and nabla Gamma}$$ **Lemma 5**. *Let $u \in H^2(\Omega \cup \Gamma(\delta))$ and $v \in H^2(\Gamma)$. Then we have $$\big| (\nabla_{\Gamma_h} (u - u\circ\bm\pi), \nabla_{\Gamma_h} (v\circ\bm\pi))_{\Gamma_h} \big| \le C\delta \|u\|_{H^2(\Omega \cup \Gamma(\delta))}\|v\|_{H^2(\Gamma)}.$$* *Proof.* By ([\[eq: error between nabla Gammah and nabla Gamma\]](#eq: error between nabla Gammah and nabla Gamma){reference-type="ref" reference="eq: error between nabla Gammah and nabla Gamma"}), $$\big| (\nabla_{\Gamma_h} (u - u\circ\bm\pi), \nabla_{\Gamma_h} (v\circ\bm\pi))_{\Gamma_h} - (\nabla_{\Gamma} (u\circ\bm\pi^* - u), \nabla_{\Gamma} v)_{\Gamma} \big| \le C\delta (\|u\|_{H^1(\Gamma_h)} + \|u\|_{H^1(\Gamma)}) \|v\|_{H^1(\Gamma)}.$$ Next we observe that $$\begin{aligned} |(\nabla_{\Gamma} (u\circ\bm\pi^* - u), \nabla_{\Gamma} v)_{\Gamma}| &= |(u\circ\bm\pi^* - u, \Delta_\Gamma v)_{\Gamma}| \le \|u\circ\bm\pi^* - u\|_{L^2(\Gamma)} \|v\|_{H^2(\Gamma)} \\ &\le C \|u - u\circ\bm\pi\|_{L^2(\Gamma_h)} \|v\|_{H^2(\Gamma)}. \end{aligned}$$ This combined with the boundary-skin estimate $$\|u - u\circ\bm\pi\|_{L^2(\Gamma_h)} \le C\delta^{1/2} \|\nabla u\|_{L^2(\Gamma(\delta))} \le C\delta^{1/2} (\delta^{1/2} \|\nabla u\|_{H^1(\Gamma)} + \delta \|\nabla^2 u\|_{L^2(\Gamma(\delta))}),$$ with the trace theorem in $\Omega$, and with $\delta \le 1$, yields the desired estimate. ◻ # Finite element approximation {#sec: FEM} ## Finite element spaces We introduce the global nodes of $\mathcal T_h$ by $$\mathcal N_h = \{ \bm F_T(\hat{\bm a}_i) \in \overline\Omega_h \mid T \in \mathcal T_h, \; i = 1, \dots, N_k \}.$$ The interior and boundary nodes are denoted by $\mathring{\mathcal N}_h = \mathcal N_h \cap \operatorname{int}\Omega_h$ and $\mathcal N_h^\partial = \mathcal N_h \cap \Gamma_h$, respectively. We next define the global nodal basis functions $\phi_{\bm p} \, (\bm p \in \mathcal N_h)$ by $$\phi_{\bm p}|_T = \begin{cases} 0 & \text{ if } \bm p \notin T, \\ \hat\phi_i \circ \bm F_T^{-1} & \text{ if $\bm p \in T$ and $\bm p = \bm F_T(\hat{\bm a_i})$ with $\hat{\bm a_i} \in \Sigma_k$}, \end{cases} \quad (\forall T \in \mathcal T_h)$$ which becomes continuous in $\overline\Omega_h$ thanks to the assumption on $\hat\Sigma_k$. Then $\phi_{\bm p}(\bm q) = 1$ if $\bm p = \bm q$ and $\phi_{\bm p}(\bm q) = 0$ otherwise, for $\bm p, \bm q \in \mathcal N_h$. 
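As an aside, the nodal structure just described is easy to make explicit in the quadratic case. The sketch below (our own illustration; the node ordering, three vertices followed by the three edge midpoints, is an assumption rather than the paper's $\Sigma_k$, and the helper names `shape_p2`, `F_T` are hypothetical) checks the Kronecker-delta property of the $\mathbb P_2$ shape functions on the reference triangle and shows how displacing a midpoint node produces an isoparametric map $\bm F_T$ with a curved edge.

```python
import numpy as np

# Minimal sketch: P2 nodal shape functions on the reference triangle and the
# induced isoparametric map F_T.  Assumed node ordering: vertices, then midpoints.
ref_nodes = np.array([[0, 0], [1, 0], [0, 1],
                      [0.5, 0], [0.5, 0.5], [0, 0.5]], dtype=float)

def shape_p2(xi, eta):
    l1, l2, l3 = 1 - xi - eta, xi, eta           # barycentric coordinates
    return np.array([l1 * (2 * l1 - 1), l2 * (2 * l2 - 1), l3 * (2 * l3 - 1),
                     4 * l1 * l2, 4 * l2 * l3, 4 * l3 * l1])

# Kronecker-delta property  phi_i(a_j) = delta_ij  at the reference nodes
delta = np.array([shape_p2(*a) for a in ref_nodes])
assert np.allclose(delta, np.eye(6))

# An isoparametric element: a physical triangle whose bottom-edge midpoint is
# displaced, so that F_T(That) has a curved (quadratic) edge.
phys_nodes = np.array([[0, 0], [1, 0], [0, 1],
                       [0.5, -0.1], [0.5, 0.5], [0, 0.5]], dtype=float)

def F_T(xi, eta):
    return shape_p2(xi, eta) @ phys_nodes

print(F_T(0.5, 0.0))   # the curved edge passes through the displaced node [0.5, -0.1]
```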
We now set the $\mathbb P_k$-isoparametric finite element spaces by $$V_h = \operatorname{span}\{\phi_{\bm p}\}_{\bm p \in \mathcal N_h} = \{ v_h \in C(\overline\Omega_h) \mid v_h\circ \bm F_T \in \mathbb P_k(\hat T) \; (\forall T \in \mathcal T_h) \}.$$ We see that $V_h \subset H^1(\Omega_h; \Gamma_h)$. In particular, the restriction of $v_h \in V_h$ to $\Gamma_h$ is represented by $\mathbb P_k$-isoparametric finite element bases defined on $\Gamma_h$, that is, $$(v_h \circ \bm F_{T_S})|_{\hat S} \in \mathbb P_k(\hat S) \quad (\forall S \in \mathcal S_h),$$ where $\hat S := \bm F_{T_S}^{-1}(S)$ denotes the pullback of the face $S$ in the reference coordinate (recall that $T_S$ is the element in $\mathcal T_h$ that contains $S$). Noticing the chain rules $\nabla_{\bm x} = (\nabla_{\bm x}\bm F_T^{-1}) \nabla_{\hat{\bm x}}$, $\nabla_{\hat{\bm x}} = (\nabla_{\bm x}\bm F_T) \nabla_{\bm x}$ and the estimates given in Remark [Remark 2](#rem: after regularity of order k){reference-type="ref" reference="rem: after regularity of order k"}(v), we obtain the following estimates concerning the transformation between $\hat T$ and $T$: **Proposition 1**. *For $T \in \mathcal T_h$ and $v \in H^m(T)$ we have $$\begin{aligned} \|\nabla_{\bm x}^m v\|_{L^2(T)} \le C h_T^{-m + d/2} \|\hat v\|_{H^m(\hat T)}, \qquad \|\nabla_{\hat{\bm x}}^m \hat v\|_{L^2(\hat T)} \le C h_T^{m - d/2} \|v\|_{H^m(T)}, \end{aligned}$$ where $\hat v := v \circ \bm F_T \in H^m(\hat T)$.* In particular, if $T \in \mathcal T_h$, $\bm p \in \mathcal N_h \cap T$, and $\bm p = \bm F_T(\hat{\bm a}_i)$, then $$\|\nabla_{\bm x}^m \phi_{\bm p}\|_{L^2(T)} \le Ch_T^{-m + d/2} \Big( \sum_{l = 0}^m \|\nabla_{\hat{\bm x}}^l \hat\phi_{\bm p}\|_{L^2(\hat T)}^2 \Big)^{1/2} \le Ch_T^{-m + d/2},$$ where the quantities depending only on the reference element $\hat T$ have been combined into the generic constant. To get an analogous estimate on the boundary $\Gamma_h$, we let $S$ be a curved $(d-1)$-face of $T \in \mathcal T_h$, i.e., $S = \bm F_T(\hat S)$ where $\hat S$ is a $(d-1)$-face of $\hat T$. Then $\hat S$ is contained in some hyperplane $\hat x_d = \hat{\bm a}_{\hat S}' \cdot \hat{\bm x}' + \hat{\bm b}_{\hat S}$, and we get the following parametrization of $S$: $$\bm F_S: \hat S' \to S; \quad \hat{\bm x}' \mapsto \bm F_T(\hat{\bm x}', \hat{\bm a}_{\hat S}' \cdot \hat{\bm x}' + \hat{\bm b}_{\hat S}) =: \bm F_T \circ \bm\Phi_{\hat S}(\hat{\bm x}'),$$ where $\hat S'$ is the projected image of $\hat S$ to the plane $\{x_d = 0\}$. A similar parametrization can be obtained for the straight $(d-1)$-simplex $\tilde{\bm F}_T(\hat S) =: \tilde S$, which is denoted by $\tilde{\bm F}_S$ and is affine. We see that the covariant and contravariant vectors $\tilde{\bm g}_\alpha, \tilde{\bm g}^\alpha$, and the covariant and contravariant components of metric tensors $\tilde G_{\alpha\beta}, \tilde G^{\alpha\beta}$ with respect to $\tilde S$ satisfies, for $\alpha, \beta = 1, \dots, d-1$, $$\begin{aligned} &|\tilde{\bm g}_\alpha| \le Ch_S, \quad |\tilde{\bm g}^\alpha| \le Ch_S^{-1}, \\ &C_1 h_S^{d-1} \le \sqrt{\operatorname{det} \tilde G} = \frac{\operatorname{meas}_{d-1}(\tilde S)}{\operatorname{meas}_{d-1}(\hat S)} \le C_2 h_S^{d-1}, \quad |\tilde G_{\alpha\beta}| \le C h_S^2, \quad |\tilde G^{\alpha\beta}| \le C h_S^{-2},\end{aligned}$$ where $h_S := h_T$ and the regularity of the meshes has been used. 
These vectors and components can also be defined for the curved simplex $S$, which are denoted by $\bar{\bm g}_\alpha, \bar{\bm g}^\alpha, \bar G_{\alpha\beta}, \bar G^{\alpha\beta}$. Because $\bm F_S$ is a perturbation of $\tilde{\bm F}_S$, they satisfy the following estimates. **Proposition 2**. *(i) Let $m = 0, \dots, k$, and $\alpha, \beta = 1, \dots, d-1$. Then, for $S \in \mathcal S_h$ we have $$\begin{aligned} &\|\nabla_{\hat{\bm x}'}^m \bar{\bm g}_\alpha\|_{L^\infty(\hat S')} \le Ch_S^{m+1}, \quad \|\nabla_{\hat{\bm x}'}^m \bar{\bm g}^\alpha\|_{L^\infty(\hat S')} \le Ch_S^{m-1}, \\ &\|\nabla_{\hat{\bm x}'}^m \bar G_{\alpha\beta}\|_{L^\infty(\hat S')} \le C h_S^{m+2}, \quad \|\nabla_{\hat{\bm x}'}^m \bar G^{\alpha\beta}\|_{L^\infty(\hat S')} \le C h_S^{m-2}, \\ &C_1 h_S^{(d-1)} \le \sqrt{\operatorname{det} \bar G} \le C_2 h_S^{(d-1)}. \end{aligned}$$* *(ii) For $v \in H^m(S)$ we have $$\begin{aligned} \|\nabla_{S}^m v\|_{L^2(S)} \le C h_S^{-m + (d-1)/2} \|v \circ \bm F_S\|_{H^m(\hat S')}, \qquad \|\nabla_{\hat{\bm x}'}^m (v \circ \bm F_S)\|_{L^2(\hat S')} \le C h_S^{m - (d-1)/2} \|v\|_{H^m(S)}. \end{aligned}$$* *Proof.* (i) First let $m = 0$. Since $\bar{\bm g}_\alpha = (\frac{\partial \bm F_T}{\partial \hat x_\alpha} + \hat a_{\hat S\alpha}' \frac{\partial \bm F_T}{\partial \hat x_d})|_{\bm\Phi_{\hat S}}$, we have $\|\bar{\bm g}_\alpha\|_{L^\infty(\hat S')} \le Ch_S$, so that $\|\bar G_{\alpha\beta}\|_{L^\infty(\hat S')} \le C h_S^{2}$. By assumption [\[H4\]](#H4){reference-type="ref" reference="H4"}, we also get $\|\bar{\bm g}_\alpha - \tilde{\bm g}_\alpha\|_{L^\infty(\hat S')} \le Ch_S^2$ and $\|\bar G_{\alpha\beta} - \tilde G_{\alpha\beta}\|_{L^\infty(\hat S')} \le C h_S^3$, which allows us to bound $\operatorname{det} \bar G$ from above and below. This combined with the formula $\bar G^{-1} = (\operatorname{det} \bar G)^{-1} \operatorname{Cof} \bar G$ yields $\|\bar G^{\alpha\beta}\|_{L^\infty(\hat S')} \le C h_S^{-2}$, and, consequently, $\|\bar{\bm g}^\alpha\|_{L^\infty(\hat S')} \le Ch_S^{-1}$. The case $m \ge 1$ can be addressed by induction using assumption [\[H6\]](#H6){reference-type="ref" reference="H6"}. \(ii\) The first inequality is a result of $\nabla_S = \sum_{\alpha = 1}^{d-1} \bar{\bm g}^\alpha \frac{\partial}{\partial \hat x_\alpha}$ and (i). To show the second inequality, its inverted formula $$\frac{\partial}{\partial \hat x_\alpha} = \sum_{\beta = 1}^{d-1} \bar G_{\alpha\beta} (\bar{\bm g}^\beta \cdot \nabla_S)$$ is useful. We also notice the following for the case $m \ge 2$: even when $\nabla_S$ is acted on $\bar G_{\alpha\beta}, \bar{\bm g}^\beta$, or on their derivatives rather than on $v$, the $L^\infty$-bounds of them---in terms of the order of $h_S$---are the same as in the case where all the derivatives are applied to $v$. For example, $$\|\nabla_{\!S} \, \bar G_{\alpha\beta}\|_{L^\infty(\hat S')} = \Big\| \sum_{\alpha = 1}^{d-1} \bar{\bm g}^\alpha \frac{\partial \bar G_{\alpha\beta}}{\partial \hat x_\alpha} \Big\|_{L^\infty(\hat S')} \le Ch_S^{-1} \times Ch_S^3 = Ch_S^2,$$ which can be compared with $\|\bar G_{\alpha\beta}\|_{L^\infty(\hat S')} \le Ch_S^2$. 
Therefore, $$\begin{aligned} \|\nabla_{\hat{\bm x}'}^m (v \circ \bm F_S)\|_{L^2(\hat S')} &\le C (h_S^2 h_S^{-1})^m h_S^{(1-d)/2} \bigg[ \sum_{l = 0}^{m} \int_{\hat S'} \Big| \Big( \sum_{\alpha = 1}^{d-1} \bar{\bm g}^\alpha \frac{\partial}{\partial \hat x_\alpha} \Big)^l (v \circ \bm F_S) \Big|^2 \sqrt{\operatorname{det} \bar G} \, d\hat{\bm x}' \bigg]^{1/2} \\ &\le C h_S^{m - (d-1)/2} \|v\|_{H^{m}(S)}, \end{aligned}$$ which is the desired estimate. ◻

In particular, if $\bm p \in \mathcal N_h \cap S$ and $\bm p = \bm F_T(\hat{\bm a}_i)$, we obtain $$\label{eq: nodal basis estimate on S} \|\nabla_S^m \phi_{\bm p}\|_{L^2(S)} \le Ch_S^{-m + (d-1)/2}.$$

## Scott--Zhang interpolation operator

We need the interpolation operator $\mathcal I_h$ introduced by [@ScoZha1990], which is well-defined and stable in $H^1(\Omega_h)$. We show that it is also stable in $H^1(\Gamma_h)$ on the boundary. To each node $\bm p \in \mathcal N_h$ we assign $\sigma_{\bm p}$, which is either a curved $d$-simplex or a curved $(d-1)$-simplex, in the following way:

- If $\bm p \in \mathring{\mathcal N}_h$, we set $\sigma_{\bm p}$ to be one of the elements $T \in \mathcal T_h$ containing $\bm p$.

- If $\bm p \in \mathcal N_h^\partial$, we set $\sigma_{\bm p}$ to be one of the boundary elements $S \in \mathcal S_h$ containing $\bm p$.

For each $\bm p \in \mathcal N_h$, we see that $V_h|_{\sigma_{\bm p}}$ (the restrictions to $\sigma_{\bm p}$ of the functions in $V_h$) is a finite dimensional subspace of the Hilbert space $L^2(\sigma_{\bm p})$. We denote by $\psi_{\bm p}$ the dual basis function corresponding to $\phi_{\bm p}$ with respect to $L^2(\sigma_{\bm p})$; that is, $\{\psi_{\bm q}\}_{\bm q \in \mathcal N_h} \subset V_h$ is determined by $$(\phi_{\bm p}, \psi_{\bm q})_{L^2(\sigma_{\bm q})} = \begin{cases} 1 & \text{if $\bm p = \bm q$}, \\ 0 & \text{otherwise}, \end{cases} \qquad \forall \bm p \in \mathcal N_h.$$ The support of $\psi_{\bm p}$ is contained in a "macro element" of $\sigma_{\bm p}$. In fact, depending on the cases $\sigma_{\bm p} = T \in \mathcal T_h$ and $\sigma_{\bm p} = S \in \mathcal S_h$, it holds that $$\begin{aligned} \operatorname{supp} \psi_{\bm p} \subset M_T &:= \bigcup \mathcal T_h(T), \quad \mathcal T_h(T) := \{T_1 \in \mathcal T_h \mid T_1 \cap T \neq \emptyset \}, \\ \operatorname{supp} \psi_{\bm p} \subset M_S &:= \bigcup \mathcal S_h(S), \quad \mathcal S_h(S) := \{S_1 \in \mathcal S_h \mid S_1 \cap S \neq \emptyset \}.\end{aligned}$$ Now we define $\mathcal I_h : H^1(\Omega_h) \to V_h$ by $$\mathcal I_h v = \sum_{\bm p \in \mathcal N_h} (v, \psi_{\bm p})_{L^2(\sigma_{\bm p})} \phi_{\bm p}.$$ By direct computation one can check $\mathcal I_h v_h = v_h$ for $v_h \in V_h$. This invariance indeed holds at the local level, as shown in the lemma below. To establish it, we first notice that $\mathcal I_h v$ in $T \in \mathcal T_h$ (resp. in $S \in \mathcal S_h$) is completely determined by $v$ in $M_T$ (resp. in $M_S$), which allows us to exploit the notation $(\mathcal I_h v)|_T$ for $v \in H^1(M_T)$ (resp. $(\mathcal I_h v)|_S$ for $v \in H^1(M_S)$). **Remark 4**. The choices of $\{\sigma_{\bm p}\}_{\bm p\in\mathcal N_h}$ and $\{\psi_{\bm p}\}_{\bm p \in \mathcal N_h}$ are not unique. Although the definition of $\mathcal I_h$ depends on those choices, the norm estimates below depend only on the shape-regularity constant and on a reference element. **Lemma 6**.
*Let $\bm p \in \mathcal N_h$ and $v \in H^1(\Omega_h)$.* *(i) If $\sigma_{\bm p} = T \in \mathcal T_h$, then $$\|\psi_{\bm p}\|_{L^\infty(T)} \le C h_T^{-d}.$$ Moreover, if $v \circ \bm F_{T_1} \in \mathbb P_k(\hat T)$ for $T_1 \in \mathcal T_h(T)$, then $(\mathcal I_h v)|_{T} = v|_{T}$.* *(ii) If $\sigma_{\bm p} = S \in \mathcal S_h$, then $$\label{eq: Linfty norm of psip} \|\psi_{\bm p}\|_{L^\infty(S)} \le C h_S^{1-d}.$$ Moreover, if $v \circ \bm F_{S_1} \in \mathbb P_k(\hat S')$ for $S_1 \in \mathcal S_h(S)$, then $(\mathcal I_h v)|_{S} = v|_{S}$.* *Proof.* We consider only case (ii); case (i) can be treated similarly. We can represent $\psi_{\bm p}$ as $$\psi_{\bm p} = \sum_{\bm q \in \mathcal N_h \cap M_S} C_{\bm p \bm q} \phi_{\bm q},$$ where $C = (C_{\bm p \bm q})$ is the inverse matrix of $A = ((\phi_{\bm p}, \phi_{\bm q})_{L^2(S)})$ (its dimension is supposed to be $D$). Note that each component of $A$ is bounded by $Ch_S^{d-1}$ and that $\operatorname{det} A \ge Ch_S^{D(d-1)}$. Therefore, each component of $C = (\operatorname{det} A)^{-1} \operatorname{Cof} A$ is bounded by $Ch_S^{(1-d)}$. This combined with $\|\phi_{\bm q}\|_{L^\infty(S)} \le C$ proves ([\[eq: Linfty norm of psip\]](#eq: Linfty norm of psip){reference-type="ref" reference="eq: Linfty norm of psip"}). To show the second statement, observe that $$\label{eq: Ih v restricted to S} (\mathcal I_h v)|_{S} = \sum_{\bm q \in \mathcal N_h} (v, \psi_{\bm q})_{L^2(\sigma_{\bm q})} \phi_{\bm q}|_S.$$ However, $\phi_{\bm q}|_S$ is non-zero only if $\bm q \in S$, in which case $\sigma_{\bm q} \in \mathcal S_h(S)$. Therefore, $v|_{\sigma_{\bm q}}$ is represented as a linear combination of $\phi_{\bm s}|_{\sigma_{\bm q}} \, (\bm s \in \mathcal N_h \cap \sigma_{\bm q})$. This implies that ([\[eq: Ih v restricted to S\]](#eq: Ih v restricted to S){reference-type="ref" reference="eq: Ih v restricted to S"}) agrees with $v|_S$. ◻ Let us establish the stability of $\mathcal I_h$, which is divided into two lemmas and is proved in Appendix [7](#sec: stability of Ih){reference-type="ref" reference="sec: stability of Ih"}. **Lemma 7**. *Let $v \in H^1(\Omega_h; \Gamma_h)$, $T \in \mathcal T_h$, and $S \in \mathcal S_h$. Then for $m = 0, 1$ we have $$\begin{aligned} \|\nabla^m (\mathcal I_h v)\|_{L^2(T)} \le C \sum_{l=0}^1 h_T^{l-m} \sum_{T_1 \in \mathcal T_h(T)} \|\nabla^l v\|_{L^2(T_1)}, \notag \\ \|\nabla_S^m (\mathcal I_h v)\|_{L^2(S)} \le C \sum_{l=0}^1 h_S^{l-m} \sum_{S_1 \in \mathcal S_h(S)} \|\nabla_{S_1}^l v\|_{L^2(S_1)}, \label{eq: local stability of Ihv on S} \end{aligned}$$ where $\mathcal T_h(T) = \{T_1 \in \mathcal T_h \mid T_1 \cap T \neq \emptyset\}$ and $\mathcal S_h(S) = \{S_1 \in \mathcal S_h \mid S_1 \cap S \neq \emptyset\}$.* **Lemma 8**. *Under the same assumptions as in Lemma [Lemma 7](#lem1: stability of Ih){reference-type="ref" reference="lem1: stability of Ih"}, we have $$\begin{aligned} \|v - \mathcal I_h v\|_{H^m(T)} &\le Ch_T^{1-m} \sum_{T_1 \in \mathcal T_h(T)} \|v\|_{H^1(T_1)}, \notag \\ \|v - \mathcal I_h v\|_{H^m(S)} &\le Ch_S^{1-m} \sum_{S_1 \in \mathcal S_h(S)} \|v\|_{H^1(S_1)}. \label{eq: local interpolation error on S} \end{aligned}$$* Adding up ([\[eq: local interpolation error on S\]](#eq: local interpolation error on S){reference-type="ref" reference="eq: local interpolation error on S"}) for $S \in \mathcal S_h$ immediately leads to a global estimate (note that the regularity of the meshes implies $\sup_{S\in\mathcal S_h} \#\mathcal S_h(S) \le C$). 
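As a concrete illustration of the dual functions $\psi_{\bm p}$ constructed in the proof of Lemma 6, the following sketch (our own; a one-dimensional stand-in for a single face $\sigma_{\bm p}$, not the actual construction on curved simplices) builds the dual basis of the $\mathbb P_2$ Lagrange basis on $[0,1]$ by inverting the local mass matrix, verifies the biorthogonality relation, and confirms the local invariance $\mathcal I_h v = v$ for quadratic $v$.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Hypothetical 1D illustration: on a single "boundary element" sigma = [0, 1]
# with the P2 Lagrange basis, build the dual basis psi_q by inverting the local
# mass matrix, check biorthogonality, and verify that the local Scott--Zhang
# averaging reproduces quadratic polynomials.
nodes = np.array([0.0, 0.5, 1.0])
def phi(i, x):                                   # P2 nodal basis on [0, 1]
    others = np.delete(nodes, i)
    return np.prod([(x - a) / (nodes[i] - a) for a in others], axis=0)

xq, wq = leggauss(6)                             # Gauss quadrature on [-1, 1] ...
xq, wq = 0.5 * (xq + 1.0), 0.5 * wq              # ... mapped to [0, 1]

A = np.array([[np.sum(wq * phi(i, xq) * phi(j, xq)) for j in range(3)] for i in range(3)])
C = np.linalg.inv(A)                             # psi_q = sum_p C[q, p] * phi_p
psi = lambda q, x: sum(C[q, p] * phi(p, x) for p in range(3))

# biorthogonality (phi_p, psi_q) = delta_pq
B = np.array([[np.sum(wq * phi(p, xq) * psi(q, xq)) for q in range(3)] for p in range(3)])
assert np.allclose(B, np.eye(3))

# local invariance: I_h v = sum_p (v, psi_p) phi_p reproduces v in P2
v = lambda x: 3 * x**2 - x + 2
coeffs = [np.sum(wq * v(xq) * psi(p, xq)) for p in range(3)]
xs = np.linspace(0, 1, 7)
Ihv = sum(c * phi(p, xs) for p, c in enumerate(coeffs))
assert np.allclose(Ihv, v(xs))
print("biorthogonality and P2-invariance verified")
```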
Together with an estimate in $\Omega_h$, which can be obtained in a similar manner, we state it as follows: **Corollary 2**. *Let $m = 0, 1$ and $v \in H^1(\Omega_h; \Gamma_h)$. Then $$\|v - \mathcal I_h v\|_{H^m(\Omega_h)} \le Ch^{1-m} \|v\|_{H^1(\Omega_h)}, \qquad \|v - \mathcal I_h v\|_{H^m(\Gamma_h)} \le Ch^{1-m} \|v\|_{H^1(\Gamma_h)}.$$* ## Interpolation error estimates First we recall the definition of the Lagrange interpolation operator and its estimates. Define $\mathcal I_h^L : C(\overline\Omega_h) \to V_h$ by $$\mathcal I_h^L v = \sum_{\bm p \in \mathcal N_h} v(\bm p) \phi_{\bm p}.$$ We allow the notation $(\mathcal I_h^L v)|_T$ if $v \in C(T)$, $T \in \mathcal T_h$, and $(\mathcal I_h^L v)|_S$ if $v \in C(S)$, $S \in \mathcal S_h$. **Proposition 3**. *Let $T \in \mathcal T_h$ and $S \in \mathcal S_h$. Assume $k + 1 > d/2$, so that $H^{k+1}(T) \hookrightarrow C(T)$ and $H^{k+1}(S) \hookrightarrow C(S)$ hold. Then, for $0\le m\le k+1$ we have $$\begin{aligned} \|\nabla^m(v - \mathcal I_h^L v)\|_{L^2(T)} &\le Ch_T^{k+1 - m} \|v\|_{H^{k+1}(T)} \qquad \forall v \in H^{k+1}(T), \label{eq: Lagrange interpolation error estimate in T} \\ \|\nabla_S^m(v - \mathcal I_h^L v)\|_{L^2(S)} &\le Ch_S^{k+1 - m} \|v\|_{H^{k+1}(S)} \qquad \forall v \in H^{k+1}(S). \label{eq: Lagrange interpolation error estimate on S} \end{aligned}$$* *Proof.* By the Bramble--Hilbert theorem it holds that $$\|\nabla_{\hat{\bm x}'}^l [v \circ \bm F_S - (\mathcal I_h^L v) \circ \bm F_S)] \|_{L^2(\hat S')} \le C \|\nabla_{\hat{\bm x}'}^{k+1} (v \circ \bm F_S) \|_{L^2(\hat S')} \quad (l = 0, \dots, m),$$ where the constant $C$ depends only on $\hat S'$. This combined with Proposition [Proposition 2](#prop: transformation between S and Shat){reference-type="ref" reference="prop: transformation between S and Shat"}(ii) yields ([\[eq: Lagrange interpolation error estimate on S\]](#eq: Lagrange interpolation error estimate on S){reference-type="ref" reference="eq: Lagrange interpolation error estimate on S"}). Estimate ([\[eq: Lagrange interpolation error estimate in T\]](#eq: Lagrange interpolation error estimate in T){reference-type="ref" reference="eq: Lagrange interpolation error estimate in T"}) is obtained similarly (or one can refer to [@CiaRav1972 Theorem 5]). ◻ **Remark 5**. (i) Adding up ([\[eq: Lagrange interpolation error estimate in T\]](#eq: Lagrange interpolation error estimate in T){reference-type="ref" reference="eq: Lagrange interpolation error estimate in T"}) for $T \in \mathcal T_h$ leads to the global estimate $$\label{eq: global interpolation estimate in Omegah} \|v - \mathcal I_h^L v\|_{H^m(\Omega_h)} \le Ch^{k+1 - m} \|v\|_{H^{k+1}(\Omega_h)} \qquad \forall v \in H^{k+1}(\Omega_h) \quad (m = 0, 1).$$ \(ii\) A corresponding global estimate on $\Gamma_h$ also holds; however, it is not useful for our purpose. To explain the reason, let us suppose $v \in H^m(\Omega; \Gamma)$ and extend it to some $\tilde v \in H^m(\mathbb R^d)$. Since we expect only $\tilde v|_{\Gamma_h} \in H^{m-1/2}(\Gamma_h)$ by the trace theorem, the direct interpolation $\mathcal I_h^L\tilde v$ may not have a good convergence property. To overcome this technical difficulty, we consider $\mathcal I_h^L (\tilde v \circ \bm\pi)$ instead in the theorem below, taking advantage of the fact that $v\circ\bm\pi$ is element-wisely as smooth on $\Gamma_h$ as $v$ is on $\Gamma$. **Theorem 1**. *Let $k + 1 > d/2$ and $m = 0,1$. 
For $v \in H^{k+1}(\Omega \cup \Gamma(\delta))$ satisfying $v|_\Gamma \in H^{k+1}(\Gamma)$ we have $$\| v - \mathcal I_h v\|_{H^{m}(\Omega_h; \Gamma_h)} \le Ch^{k+1-m} (\|v\|_{H^{k+1}(\Omega \cup \Gamma(\delta))} + \|v\|_{H^{k+1}(\Gamma)}).$$* *Proof.* Let $\mathcal I$ denote the identity operator. Since $\mathcal I_h \mathcal I_h^L = \mathcal I_h^L$, one gets $\mathcal I - \mathcal I_h = (\mathcal I - \mathcal I_h)(\mathcal I - \mathcal I_h^L)$. Then it follows from Corollary [Corollary 2](#cor: H1 interpolation estimate){reference-type="ref" reference="cor: H1 interpolation estimate"} and ([\[eq: global interpolation estimate in Omegah\]](#eq: global interpolation estimate in Omegah){reference-type="ref" reference="eq: global interpolation estimate in Omegah"}) that $$\begin{aligned} \| v - \mathcal I_h v\|_{H^m(\Omega_h)} &= \|(\mathcal I - \mathcal I_h)(v - \mathcal I_h^L v)\|_{H^m(\Omega_h)} \le Ch^{1-m} \| v - \mathcal I_h^L v\|_{H^1(\Omega_h)} \le Ch^{k+1-m} \|v\|_{H^{k+1}(\Omega_h)}. \end{aligned}$$ To consider the boundary estimate, observe that $$\begin{aligned} v - \mathcal I_h v = (\mathcal I - \mathcal I_h)(v - v \circ \bm\pi) + (\mathcal I - \mathcal I_h)(\mathcal I - \mathcal I_h^L) (v \circ \bm\pi) =: J_1 + J_2. \end{aligned}$$ By Corollaries [Corollary 2](#cor: H1 interpolation estimate){reference-type="ref" reference="cor: H1 interpolation estimate"} and [Corollary 1](#cor: u - u circ pi on Gammah){reference-type="ref" reference="cor: u - u circ pi on Gammah"}, $$\|J_1\|_{H^m(\Gamma_h)} \le Ch^{1-m} \|v - v\circ\bm\pi\|_{H^1(\Gamma_h)} \le Ch^{k+1-m} \|v\|_{H^{\min\{k+1, 3\}}(\Omega \cup \Gamma(\delta))}.$$ From Corollary [Corollary 2](#cor: H1 interpolation estimate){reference-type="ref" reference="cor: H1 interpolation estimate"}, ([\[eq: Lagrange interpolation error estimate on S\]](#eq: Lagrange interpolation error estimate on S){reference-type="ref" reference="eq: Lagrange interpolation error estimate on S"}), and ([\[eq3: equivalence of surface integrals\]](#eq3: equivalence of surface integrals){reference-type="ref" reference="eq3: equivalence of surface integrals"}) we obtain $$\begin{aligned} \|J_2\|_{H^m(\Gamma_h)} &\le Ch^{1-m} \|v\circ\bm\pi - \mathcal I_h^L(v\circ\bm\pi)\|_{H^1(\Gamma_h)} \le Ch^{k+1-m} \Big( \sum_{S \in \mathcal S_h} \|v\circ\bm\pi\|_{H^{k+1}(S)}^2 \Big)^{1/2} \\ &\le Ch^{k+1-m} \Big( \sum_{S \in \mathcal S_h} \|v\|_{H^{k+1}(\bm\pi(S))}^2 \Big)^{1/2} = Ch^{k+1-m} \|v\|_{H^{k+1}(\Gamma)}, \end{aligned}$$ where we have used Lemma [Lemma 1](#lem: equivalence of Sobolev spaces on Gamma and Gammah){reference-type="ref" reference="lem: equivalence of Sobolev spaces on Gamma and Gammah"}. Combining the estimates above proves the theorem. ◻ # Error estimates in an approximate domain {#sec: error estimate} We continue to denote by $k \ge 1$ the order of the isoparametric finite element approximation throughout this and next sections. ## Finite element scheme based on extensions We recall that the weak formulation for ([\[eq1: g-robin\]](#eq1: g-robin){reference-type="ref" reference="eq1: g-robin"})--([\[eq2: g-robin\]](#eq2: g-robin){reference-type="ref" reference="eq2: g-robin"}) is given by ([\[eq: continuous problem\]](#eq: continuous problem){reference-type="ref" reference="eq: continuous problem"}). In order to define its finite element approximation, one needs counterparts to $f$ and $\tau$ given in $\Omega_h$ and $\Gamma_h$ respectively. For this we will exploit extensions that preserves the smoothness as mentioned in Introduction. 
Namely, if $f \in H^{k-1}(\Omega)$, one can choose some $\tilde f \in H^{k-1}(\mathbb R^d)$ such that $\|\tilde f\|_{H^{k-1}(\mathbb R^d)} \le C \|f\|_{H^{k-1}(\Omega)}$. For $\tau$, we assume $\tau \in H^{k-1/2}(\Gamma)$ so that it admits an extension $\tilde\tau \in H^k(\mathbb R^d)$ such that $\|\tilde\tau\|_{H^{k}(\mathbb R^d)} \le C \|\tau\|_{H^{k-1/2}(\Gamma)}$ (the extension operator $\tilde\cdot$ has different meanings for $f$ and $\tau$, but there should be no risk of confusion). The resulting discrete problem is to find $u_h \in V_h$ such that $$\label{eq: FE scheme} a_h(u_h, v_h) := (\nabla u_h, \nabla v_h)_{\Omega_h} + (u_h, v_h)_{\Gamma_h} + (\nabla_{\Gamma_h}u_h, \nabla_{\Gamma_h}v_h)_{\Gamma_h} = (\tilde f, v_h)_{\Omega_h} + (\tilde\tau, v_h)_{\Gamma_h} \qquad \forall v_h \in V_h.$$ Because the bilinear form $a_h$ is uniformly coercive in $V_h$, i.e., $a_h(v_h, v_h) \ge C \|v_h\|_{H^1(\Omega_h; \Gamma_h)}^2$ for all $v_h \in V_h$ with $C$ independent of $h$, the existence and uniqueness of a solution $u_h$ are an immediate consequence of the Lax--Milgram theorem.

## $H^1$-error estimate

We define the residual functionals for $v \in H^1(\Omega_h; \Gamma_h)$ by $$\begin{aligned} R_u^1(v) &:= (-\Delta\tilde u - \tilde f, v)_{\Omega_h \setminus \Omega} + (\partial_{n_h}\tilde u - (\partial_n u)\circ\bm\pi, v)_{\Gamma_h} + (\tilde u - u\circ\bm\pi, v)_{\Gamma_h} + (\tau\circ\bm\pi - \tilde\tau, v)_{\Gamma_h}, \label{eq: R1u(v)} \\ R_u^2(v) &:= \big[ ( (\Delta_\Gamma u)\circ\bm\pi, v )_{\Gamma_h} + (\nabla_{\Gamma_h}(u\circ\bm\pi), \nabla_{\Gamma_h}v)_{\Gamma_h} \big]+ ( \nabla_{\Gamma_h}(\tilde u - u\circ\bm\pi), \nabla_{\Gamma_h} v )_{\Gamma_h}, \notag \\ R_u(v) &:= R_u^1(v) + R_u^2(v), \notag\end{aligned}$$ which completely vanish if we formally assume $\Omega_h = \Omega$. Therefore, the residual terms above are considered to represent the domain perturbation. Let us state consistency error estimates, or, in other words, a Galerkin orthogonality relation with domain perturbation terms. **Proposition 4**. *Assume that $f \in H^{k-1}(\Omega)$, $\tau \in H^{k-1/2}(\Gamma)$ if $k = 1, 2$, and that $f \in H^1(\Omega)$, $\tau \in H^{3/2}(\Gamma)$ if $k \ge 3$. Let $u$ and $u_h$ be the solutions of ([\[eq: continuous problem\]](#eq: continuous problem){reference-type="ref" reference="eq: continuous problem"}) and ([\[eq: FE scheme\]](#eq: FE scheme){reference-type="ref" reference="eq: FE scheme"}) respectively.
Then we have $$\label{eq: asymptotic Galerkin orthogonality} a_h(\tilde u - u_h, v_h) = R_u(v_h) \qquad \forall v_h \in V_h.$$ Moreover, the following estimate holds: $$\label{eq: 1st estimate of Ru(v)} |R_u(v)| \le C h^k (\|f\|_{H^{\min\{k-1, 1\}}(\Omega)} + \|\tau\|_{H^{\min\{k-1/2, 3/2\}}(\Gamma)}) \|v\|_{H^1(\Omega_h; \Gamma_h)} \qquad \forall v \in H^1(\Omega_h; \Gamma_h).$$* *Proof.* Equation ([\[eq: asymptotic Galerkin orthogonality\]](#eq: asymptotic Galerkin orthogonality){reference-type="ref" reference="eq: asymptotic Galerkin orthogonality"}) results from a direct computation as follows: $$\begin{aligned} a_h(\tilde u - u_h, v_h) &= (\nabla(\tilde u - u_h), \nabla v_h)_{\Omega_h} + (\tilde u - u_h, v_h)_{\Gamma_h} + (\nabla_{\Gamma_h}(\tilde u - u_h), \nabla_{\Gamma_h} v_h)_{\Gamma_h} \\ &= (-\Delta\tilde u , v_h)_{\Omega_h} + (\partial_{n_h}\tilde u + \tilde u, v_h)_{\Gamma_h} + (\nabla_{\Gamma_h}\tilde u, \nabla_{\Gamma_h} v_h)_{\Gamma_h} - (\tilde f, v_h)_{\Omega_h} - (\tilde\tau, v_h)_{\Gamma_h} \\ &= (-\Delta\tilde u - \tilde f, v_h)_{\Omega_h \setminus \Omega} + (\partial_{n_h}\tilde u - (\partial_n u)\circ\bm\pi, v_h)_{\Gamma_h} + (\tilde u - u\circ\bm\pi, v_h)_{\Gamma_h} + (\tau\circ\bm\pi - \tilde\tau, v_h)_{\Gamma_h} \\ &\hspace{1cm} + ((\Delta_\Gamma u)\circ\bm\pi, v_h)_{\Gamma_h} + (\nabla_{\Gamma_h} (u\circ\bm\pi), \nabla_{\Gamma_h} v_h)_{\Gamma_h} + (\nabla_{\Gamma_h}(\tilde u - u\circ\bm\pi), \nabla_{\Gamma_h} v_h)_{\Gamma_h} \\ &= R_u^1(v_h) + R_u^2(v_h) = R_u(v_h). \end{aligned}$$ Let $C_{f,\tau}$ denote a generic constant multiplied by $\|f\|_{H^{\min\{k-1, 1\}}(\Omega)} + \|\tau\|_{H^{\min\{k-1/2, 3/2\}}(\Gamma)}$. We will make use of the regularity structure $\|u\|_{H^{k+1}(\Omega; \Gamma)} \le C (\|f\|_{H^{k-1}(\Omega)} + \|\tau\|_{H^{k-1}(\Gamma)})$ and the stability of extensions without further emphasis. Applying the boundary-skin estimate ([\[eq: RHS with Omegah minus Omega\]](#eq: RHS with Omegah minus Omega){reference-type="ref" reference="eq: RHS with Omegah minus Omega"}), we obtain $$\begin{aligned} |(-\Delta\tilde u - \tilde f, v)_{\Omega_h \setminus \Omega}| &\le \begin{cases} C (\|\Delta \tilde u\|_{L^2(\Omega_h)} + \|\tilde f\|_{L^2(\Omega_h)}) \cdot C\delta^{1/2} \|v\|_{H^1(\Omega_h)} & \quad (k = 1) \\ C \delta^{1/2} (\|\tilde u\|_{H^3(\Omega_h)} + \|\tilde f\|_{H^1(\Omega_h)}) \cdot C \delta^{1/2} \|v\|_{H^1(\Omega_h)} & \quad (k \ge 2) \end{cases} \\ &\le C_{f, \tau} h^k \|v\|_{H^1(\Omega_h)}, \end{aligned}$$ where we have used $\delta = Ch^{k+1}$ and $h \le 1$. 
The second term of $R^1_u(v)$ is estimated as $$\begin{aligned} |(\partial_{n_h}\tilde u - (\partial_n u)\circ\bm\pi, v)_{\Gamma_h}| &= \big|\big( \nabla\tilde u \cdot (\bm n_h - \bm n\circ\bm\pi), v \big)_{\Gamma_h} + \big((\nabla\tilde u - (\nabla u)\circ\bm\pi) \cdot \bm n\circ\bm\pi, v \big)_{\Gamma_h}\big| \\ &\le C (h^k \|\nabla\tilde u\|_{L^2(\Gamma_h)} + \delta^{1/2} \|\nabla^2\tilde u\|_{L^2(\Gamma(\delta))}) \|v\|_{L^2(\Gamma_h)} \\ &\le \begin{cases} C (h^k \|\tilde u\|_{H^2(\Omega_h)} + \delta^{1/2} \|\tilde u\|_{H^2(\Gamma(\delta))}) \|v\|_{H^1(\Omega_h)} & \quad (k = 1) \\ C (h^k \|\tilde u\|_{H^2(\Omega_h)} + \delta \|\tilde u\|_{H^3(\Omega \cup \Gamma(\delta))}) \|v\|_{H^1(\Omega_h)} & \quad (k \ge 2) \end{cases} \\ &\le C_{f, \tau} h^k \|v\|_{H^1(\Omega_h)}, \end{aligned}$$ as a result of ([\[eq: n - nh\]](#eq: n - nh){reference-type="ref" reference="eq: n - nh"}), ([\[eq3: boundary-skin estimates\]](#eq3: boundary-skin estimates){reference-type="ref" reference="eq3: boundary-skin estimates"}), and ([\[eq2: boundary-skin estimates\]](#eq2: boundary-skin estimates){reference-type="ref" reference="eq2: boundary-skin estimates"}). Similarly, the third term of $R^1_u(v)$ is bounded by $$\begin{aligned} C \delta^{1/2} \|\nabla\tilde u\|_{L^2(\Gamma(\delta))} \|v_h\|_{L^2(\Gamma_h)} \le C_{f, \tau} h^k \|v_h\|_{H^1(\Omega_h)}. \end{aligned}$$ For the fourth term of $R^1_u(v)$, we need the regularity assumption $\tau \in H^{1/2}(\Gamma)$ for $k = 1$ and $\tau \in H^{3/2}(\Gamma)$ for $k \ge 2$ to ensure $\tilde\tau \in H^1(\mathbb R^d)$ and $\tilde\tau \in H^2(\mathbb R^d)$, respectively. Then $|(\tau\circ\bm\pi - \tilde\tau, v_h)_{\Gamma_h}|$ is bounded by $$\begin{aligned} C \delta^{1/2} \|\nabla\tilde\tau\|_{L^2(\Gamma(\delta))} \|v_h\|_{L^2(\Gamma_h)} &\le \begin{cases} C \delta^{1/2} \|\nabla\tilde\tau\|_{L^2(\Gamma(\delta))} \|v_h\|_{H^1(\Omega_h)} \quad &(k = 1) \\ C \delta \|\tilde\tau\|_{H^2(\Omega \cup \Gamma(\delta))} \|v_h\|_{H^1(\Omega_h)} &(k \ge 2) \end{cases} \\ &\le C_{f, \tau} h^k \|v_h\|_{H^1(\Omega_h)}. \end{aligned}$$ For $R_u^2(v)$, we apply Lemma [Lemma 4](#lem: error from integration by parts on Gammah){reference-type="ref" reference="lem: error from integration by parts on Gammah"} and Corollary [Corollary 1](#cor: u - u circ pi on Gammah){reference-type="ref" reference="cor: u - u circ pi on Gammah"} to obtain $$\begin{aligned} \big| ( (\Delta_\Gamma u)\circ\bm\pi, v )_{\Gamma_h} + (\nabla_{\Gamma_h}(u\circ\bm\pi), \nabla_{\Gamma_h}v)_{\Gamma_h} \big| &\le C\delta (\|u\|_{H^2(\Gamma)} \|v\|_{L^2(\Gamma_h)} + \|\nabla_\Gamma u\|_{L^2(\Gamma)} \|\nabla_{\Gamma_h} v\|_{L^2(\Gamma_h)}) \\ &\le C_{f, \tau} h^k \|v_h\|_{H^1(\Gamma_h)}, \\ \big| (\nabla_{\Gamma_h} (\tilde u - u\circ\bm\pi), \nabla_{\Gamma_h} v)_{\Gamma_h} \big| &\le Ch^k \|\tilde u\|_{H^{\min\{k+1, 3\}}(\Omega \cup \Gamma(\delta))} \|\nabla_{\Gamma_h} v\|_{L^2(\Gamma_h)} \le C_{f, \tau} h^k \|v_h\|_{H^1(\Gamma_h)}. \end{aligned}$$ Combining the estimates above all together concludes ([\[eq: 1st estimate of Ru(v)\]](#eq: 1st estimate of Ru(v)){reference-type="ref" reference="eq: 1st estimate of Ru(v)"}). ◻ **Remark 6**. 
If the transformation $\tau\circ\bm\pi$ instead of the extension $\tilde\tau$ is employed in the FE scheme ([\[eq: FE scheme\]](#eq: FE scheme){reference-type="ref" reference="eq: FE scheme"}), then assuming just $\tau \in H^{k-1}(\Gamma)$ is sufficient to get $$|R_u(v)| \le C h^k (\|f\|_{H^{\min\{k-1, 1\}}(\Omega)} + \|\tau\|_{H^{\min\{k-1, 1\}}(\Gamma)}) \|v\|_{H^1(\Omega_h; \Gamma_h)},$$ because the term involving $\tau$ in ([\[eq: R1u(v)\]](#eq: R1u(v)){reference-type="ref" reference="eq: R1u(v)"}) disappears. We are ready to state the $H^1$-error estimates. **Theorem 2**. *Let $k + 1 > d/2$. Assume that $f \in L^2(\Omega)$, $\tau \in H^{1/2}(\Gamma)$ for $k = 1$, that $f \in H^{1}(\Omega)$, $\tau \in H^{3/2}(\Gamma)$ for $k = 2$, and that $f \in H^{k-1}(\Omega)$, $\tau \in H^{k-1}(\Gamma)$ for $k \ge 3$. Then we have $$\|\tilde u - u_h\|_{H^1(\Omega_h; \Gamma_h)} \le C h^k (\|f\|_{H^{k-1}(\Omega)} + \|\tau\|_{H^{\max\{k-1, \min\{k-1/2,3/2\}\}}(\Gamma)}),$$ where $u$ and $u_h$ are the solutions of ([\[eq: continuous problem\]](#eq: continuous problem){reference-type="ref" reference="eq: continuous problem"}) and ([\[eq: FE scheme\]](#eq: FE scheme){reference-type="ref" reference="eq: FE scheme"}) respectively.* *Proof.* To save the space we introduce the notation $C_{f, \tau} := C (\|f\|_{H^{k-1}(\Omega)} + \|\tau\|_{H^{\max\{k-1, \min\{k-1/2,3/2\}\}}(\Gamma)})$. It follows from the uniform coercivity of $a_h$ and ([\[eq: asymptotic Galerkin orthogonality\]](#eq: asymptotic Galerkin orthogonality){reference-type="ref" reference="eq: asymptotic Galerkin orthogonality"}) that $$\begin{aligned} C\|\tilde u - u_h\|_{H^1(\Omega_h; \Gamma_h)}^2 \le a_h(\tilde u - u_h, \tilde u - u_h) = a_h(\tilde u - u_h, \tilde u - \mathcal I_h \tilde u) + R_u(\mathcal I_h \tilde u - u_h). \end{aligned}$$ In view of Theorem [Theorem 1](#thm: interpolation error estimate){reference-type="ref" reference="thm: interpolation error estimate"}, the first term in the right-hand side is bounded by $$\begin{aligned} \|\tilde u - u_h\|_{H^1(\Omega_h; \Gamma_h)} \|\tilde u - \mathcal I_h \tilde u\|_{H^1(\Omega_h; \Gamma_h)} &\le Ch^k (\|\tilde u\|_{H^{k+1}(\Omega \cup \Gamma(\delta))} + \|u\|_{H^{k+1}(\Gamma)}) \|\tilde u - u_h\|_{H^1(\Omega_h; \Gamma_h)} \\ &\le C_{f, \tau} h^k \|\tilde u - u_h\|_{H^1(\Omega_h; \Gamma_h)} \end{aligned}$$ as a result of the regularity of $u$ and the stability of extensions. Estimate ([\[eq: 1st estimate of Ru(v)\]](#eq: 1st estimate of Ru(v)){reference-type="ref" reference="eq: 1st estimate of Ru(v)"}) applied to $R_u(\mathcal I_h \tilde u - u_h)$ combined again with Theorem [Theorem 1](#thm: interpolation error estimate){reference-type="ref" reference="thm: interpolation error estimate"} gives the upper bound of the second term as $$\begin{aligned} C_{f, \tau} h^k \|\mathcal I_h \tilde u - u_h\|_{H^1(\Omega_h; \Gamma_h)} \le C_{f, \tau} h^k \|\tilde u - u_h\|_{H^1(\Omega_h; \Gamma_h)} + (C_{f, \tau}h^{k})^2. \end{aligned}$$ Consequently, $$C\|\tilde u - u_h\|_{H^1(\Omega_h; \Gamma_h)}^2 \le C_{f, \tau} h^k \|\tilde u - u_h\|_{H^1(\Omega_h; \Gamma_h)} + (C_{f, \tau} h^k)^2,$$ which after an absorbing argument proves the theorem. ◻ ## $L^2$-error estimate Let $\varphi\in L^2(\Omega_h), \psi \in L^2(\Gamma_h)$ be arbitrary such that $\|\varphi\|_{L^2(\Omega_h)} = \|\psi\|_{L^2(\Gamma_h)} = 1$. 
We define $w \in H^2(\Omega; \Gamma)$ as the solution of the dual problem introduced as follows: $$\label{eq: dual problem} -\Delta w = \varphi \quad\text{in }\;\Omega, \qquad \textstyle\frac{\partial w}{\partial n} + w - \Delta_{\Gamma} w = \psi\circ\bm\pi^* \quad\text{on }\; \Gamma,$$ where $\varphi$ is extended to $\mathbb R^d \setminus \Omega_h$ by 0. For $v \in H^1(\Omega_h; \Gamma_h)$ we define residual functionals w.r.t. $w$ by $$\begin{aligned} R_w^1(v) &:= (v, -\Delta\tilde w - \varphi)_{\Omega_h\setminus\Omega} + (v, \partial_{n_h}\tilde w - (\partial_n w) \circ \pi)_{\Gamma_h} + (v, \tilde w - w\circ\bm\pi)_{\Gamma_h}, \\ R_w^2(v) &:= \big[ (v, (\Delta_\Gamma w)\circ\bm\pi)_{\Gamma_h} + ( \nabla_{\Gamma_h} v, \nabla_{\Gamma_h}(w\circ\bm\pi))_{\Gamma_h} \big] + ( \nabla_{\Gamma_h} v, \nabla_{\Gamma_h}(\tilde w - w\circ\bm\pi) )_{\Gamma_h} \\ R_w(v) &:= R_w^1(v) + R_w^2(v).\end{aligned}$$ **Lemma 9**. *Let $k \ge 1$, $v \in H^1(\Omega_h; \Gamma_h)$, and $w$ be as above. Then we have $$\label{eq: L2 duality representation for phi and psi} (v, \varphi)_{\Omega_h} + (v, \psi)_{\Gamma_h} = a_h(v, \tilde w) - R_w(v).$$ Moreover, the following estimate holds: $$\label{eq: estimate of Rw(v)} |R_w(v)| \le Ch \|w\|_{H^2(\Omega; \Gamma)} \|v\|_{H^1(\Omega_h; \Gamma_h)}.$$* *Proof.* A direct computation shows $$\begin{aligned} a_h(v, \tilde w) &= (\nabla v, \nabla\tilde w)_{\Omega_h} + (v, \tilde w)_{\Gamma_h} + (\nabla_{\Gamma_h} v, \nabla_{\Gamma_h}\tilde w)_{\Gamma_h} \\ &= (v, -\Delta\tilde w)_{\Omega_h} + (v, \partial_{n_h}\tilde w)_{\Gamma_h} + (v, \tilde w)_{\Gamma_h} + (\nabla_{\Gamma_h} v, \nabla_{\Gamma_h}\tilde w)_{\Gamma_h} \\ &= [(v, \varphi)_{\Omega_h} + (v, -\Delta\tilde w - \varphi)_{\Omega_h\setminus\Omega}] + (v, \partial_{n_h}\tilde w - (\partial_n w) \circ \pi)_{\Gamma_h} \\ &\qquad + (v, \tilde w - w\circ\bm\pi)_{\Gamma_h} + (v, (\Delta_\Gamma w)\circ\bm\pi)_{\Gamma_h} + (\nabla_{\Gamma_h} v, \nabla_{\Gamma_h}\tilde w)_{\Gamma_h} + (v, \psi)_{\Gamma_h} \\ &= (v, \varphi)_{\Omega_h} + (v, \psi)_{\Gamma_h} + R_w^1(v) + R_w^2(v), \end{aligned}$$ which is ([\[eq: L2 duality representation for phi and psi\]](#eq: L2 duality representation for phi and psi){reference-type="ref" reference="eq: L2 duality representation for phi and psi"}). Estimate ([\[eq: estimate of Rw(v)\]](#eq: estimate of Rw(v)){reference-type="ref" reference="eq: estimate of Rw(v)"}) is obtained by almost the same manner as ([\[eq: 1st estimate of Ru(v)\]](#eq: 1st estimate of Ru(v)){reference-type="ref" reference="eq: 1st estimate of Ru(v)"}) for $k = 1$. The only difference is that no domain perturbation term involving $\psi$ appears this time (cf. Remark [Remark 6](#rem: in case of transformation less regularity for tau is OK){reference-type="ref" reference="rem: in case of transformation less regularity for tau is OK"}). ◻ Next we show that $R_u(v)$ admits another equivalent representation if $v \in H^2(\Omega \cup \Gamma(\delta))$ and $v|_{\Gamma} \in H^2(\Gamma)$. We make use of the integration by parts formula $$\label{eq: integration by parts in symmetric difference} (\Delta u, v)_{\Omega_h\triangle\Omega}' + (\nabla u, \nabla v)_{\Omega_h\triangle\Omega}' = (\partial_{n_h}u, v)_{\Gamma_h} - (\partial_n u, v)_{\Gamma},$$ where $(u, v)_{\Omega_h\triangle\Omega}' := (u, v)_{\Omega_h \setminus \Omega} - (u, v)_{\Omega \setminus \Omega_h}$. **Proposition 5**. *Let $k \ge 1$, $f \in H^1(\Omega)$, $\tau \in H^{3/2}(\Gamma)$. 
Let $u \in H^{\min\{k+1, 3\}}(\Omega; \Gamma)$ be the solution of ([\[eq: continuous problem\]](#eq: continuous problem){reference-type="ref" reference="eq: continuous problem"}). Then, for $v \in H^2(\Omega \cup \Gamma(\delta))$ we have $$\label{eq: another form of Ru(v)} R_u(v) = -(\tilde f, v)_{\Omega_h \triangle \Omega}' + (\tilde u - \tilde\tau, v)_{\Gamma_h \cup \Gamma}' + (\nabla\tilde u, \nabla v)_{\Omega_h\triangle\Omega}' + (\nabla_{\Gamma_h}\tilde u, \nabla_{\Gamma_h} v)_{\Gamma_h} - (\nabla_\Gamma u, \nabla_\Gamma v)_\Gamma,$$ where $(u, v)_{\Gamma_h \cup \Gamma}' := (u, v)_{\Gamma_h} - (u, v)_\Gamma$. If in addition $v|_\Gamma \in H^2(\Gamma)$, the following estimate holds: $$\label{eq: 2nd estimate of Ru(v)} |R_u(v)| \le C\delta (\|f\|_{H^1(\Omega)} + \|\tau\|_{H^{3/2}(\Gamma)}) (\|v\|_{H^2(\Omega \cup \Gamma(\delta))} + \|v\|_{H^2(\Gamma)}).$$*

*Proof.* Since $-\Delta u = f$ in $\Omega$ and $-\partial_n u - u + \tau + \Delta_\Gamma u = 0$ on $\Gamma$, it follows from ([\[eq: integration by parts in symmetric difference\]](#eq: integration by parts in symmetric difference){reference-type="ref" reference="eq: integration by parts in symmetric difference"}) that $$\begin{aligned} R_u(v) &= (-\Delta\tilde u - \tilde f, v)_{\Omega_h\triangle\Omega}' + (\partial_{n_h}\tilde u, v)_{\Gamma_h} + (\tilde u - \tilde\tau, v)_{\Gamma_h} + (\nabla_{\Gamma_h}\tilde u, \nabla_{\Gamma_h} v)_{\Gamma_h} \\ &= - (\tilde f, v)_{\Omega_h\triangle\Omega}' + (\nabla\tilde u, \nabla v)_{\Omega_h\triangle\Omega}' + (\partial_n u, v)_{\Gamma} + (\tilde u - \tilde\tau, v)_{\Gamma_h} + (\nabla_{\Gamma_h}\tilde u, \nabla_{\Gamma_h} v)_{\Gamma_h} \\ &= - (\tilde f, v)_{\Omega_h\triangle\Omega}' + (\nabla\tilde u, \nabla v)_{\Omega_h\triangle\Omega}' + (-u + \tau + \Delta_\Gamma u, v)_{\Gamma} + (\tilde u - \tilde\tau, v)_{\Gamma_h} + (\nabla_{\Gamma_h}\tilde u, \nabla_{\Gamma_h} v)_{\Gamma_h}, \end{aligned}$$ which after integration by parts on $\Gamma$ yields ([\[eq: another form of Ru(v)\]](#eq: another form of Ru(v)){reference-type="ref" reference="eq: another form of Ru(v)"}). By the boundary-skin estimates, the regularity structure $\|u\|_{H^2(\Omega; \Gamma)} \le C(\|f\|_{L^2(\Omega)} + \|\tau\|_{L^2(\Gamma)})$, and the stability of extensions, the first three terms on the right-hand side of ([\[eq: another form of Ru(v)\]](#eq: another form of Ru(v)){reference-type="ref" reference="eq: another form of Ru(v)"}) are bounded as follows: $$\begin{aligned} |(\tilde f, v)_{\Omega_h\triangle\Omega}'| &\le \|\tilde f\|_{L^2(\Gamma(\delta))} \|v\|_{L^2(\Gamma(\delta))} \le C \delta \|f\|_{H^1(\Omega)} \|v\|_{H^1(\Omega \cup \Gamma(\delta))}, \\ |(\tilde u - \tilde\tau, v)_{\Gamma_h \cup \Gamma}'| &\le C \delta \|\tilde u - \tilde\tau\|_{H^2(\Omega \cup \Gamma(\delta))} \|v\|_{H^2(\Omega \cup \Gamma(\delta))} \le C \delta (\|f\|_{L^2(\Omega)} + \|\tau\|_{H^{3/2}(\Gamma)}) \|v\|_{H^2(\Omega \cup \Gamma(\delta))}, \\ |(\nabla\tilde u, \nabla v)_{\Omega_h\triangle\Omega}'| &\le C \delta \|\nabla\tilde u\|_{H^1(\Omega \cup \Gamma(\delta))} \|\nabla v\|_{H^1(\Omega \cup \Gamma(\delta))} \le C \delta (\|f\|_{L^2(\Omega)} + \|\tau\|_{L^2(\Gamma)}) \|v\|_{H^2(\Omega \cup \Gamma(\delta))}.
\end{aligned}$$ For the fourth and fifth terms of ([\[eq: another form of Ru(v)\]](#eq: another form of Ru(v)){reference-type="ref" reference="eq: another form of Ru(v)"}), we start from the obvious equality $$\begin{aligned} &(\nabla_{\Gamma_h}\tilde u, \nabla_{\Gamma_h} v)_{\Gamma_h} - (\nabla_\Gamma u, \nabla_\Gamma v)_\Gamma \\ = \; &( \nabla_{\Gamma_h}(\tilde u - u\circ\bm\pi), \nabla_{\Gamma_h}(v - v\circ\bm\pi) )_{\Gamma_h} + ( \nabla_{\Gamma_h}(u\circ\bm\pi), \nabla_{\Gamma_h}(v - v\circ\bm\pi) )_{\Gamma_h} \\ &\qquad + ( \nabla_{\Gamma_h}(\tilde u - u\circ\bm\pi), \nabla_{\Gamma_h}(v\circ\bm\pi) )_{\Gamma_h} + \big[ ( \nabla_{\Gamma_h}(u\circ\bm\pi), \nabla_{\Gamma_h}(v\circ\bm\pi) )_{\Gamma_h} - (\nabla_\Gamma u, \nabla_\Gamma v)_\Gamma \big] \\ =: \; &I_1 + I_2 + I_3 + I_4. \end{aligned}$$ By Corollary [Corollary 1](#cor: u - u circ pi on Gammah){reference-type="ref" reference="cor: u - u circ pi on Gammah"}, $|I_1| \le Ch^{2k} \|\tilde u\|_{H^{\min\{k+1, 3\}}(\Omega \cup \Gamma(\delta))} \|v\|_{H^{\min\{k+1, 3\}}(\Omega \cup \Gamma(\delta))}$ (note that $h^{2k} \le C\delta$). From Lemma [Lemma 5](#lem: inner product between nabla(u - u circ pi) and nabla v circ pi){reference-type="ref" reference="lem: inner product between nabla(u - u circ pi) and nabla v circ pi"} we have $$|I_2| \le C \delta \|u\|_{H^2(\Gamma)} \|v\|_{H^2(\Omega \cup \Gamma(\delta))}, \qquad |I_3| \le C \delta \|\tilde u\|_{H^2(\Omega \cup \Gamma(\delta))} \|v\|_{H^2(\Gamma)}.$$ Finally, $|I_4| \le C \delta \|u\|_{H^1(\Gamma)} \|v\|_{H^1(\Gamma)}$ by ([\[eq: error between nabla Gammah and nabla Gamma\]](#eq: error between nabla Gammah and nabla Gamma){reference-type="ref" reference="eq: error between nabla Gammah and nabla Gamma"}). Combining the estimates above concludes ([\[eq: 2nd estimate of Ru(v)\]](#eq: 2nd estimate of Ru(v)){reference-type="ref" reference="eq: 2nd estimate of Ru(v)"}). ◻

**Remark 7**. We need $f \in H^1(\Omega)$ and $\tau \in H^{3/2}(\Gamma)$ even for $k = 1$.

We are now in a position to state the $L^2$-error estimate in $\Omega_h$ and on $\Gamma_h$.

**Theorem 3**. *Let $k + 1 > d/2$. Assume that $f \in H^1(\Omega)$, $\tau \in H^{3/2}(\Gamma)$ for $k = 1, 2$ and that $f \in H^{k-1}(\Omega)$, $\tau \in H^{k-1}(\Gamma)$ for $k \ge 3$. Then we have $$\|\tilde u - u_h\|_{L^2(\Omega_h; \Gamma_h)} \le C_{f, \tau}h^{k+1},$$ where $C_{f, \tau} := C (\|f\|_{H^{\max\{k-1, 1\}}(\Omega)} + \|\tau\|_{H^{\max\{k-1, 3/2\}}(\Gamma)})$.*

*Proof.* We consider the solution $w$ of ([\[eq: dual problem\]](#eq: dual problem){reference-type="ref" reference="eq: dual problem"}) obtained from the following choices of $\varphi$ and $\psi$: $$\varphi = \frac{\tilde u - u_h}{\|\tilde u - u_h\|_{L^2(\Omega_h)}}, \qquad \psi = \frac{\tilde u - u_h}{\|\tilde u - u_h\|_{L^2(\Gamma_h)}}.$$ Then, taking $v = \tilde u - u_h$ in ([\[eq: L2 duality representation for phi and psi\]](#eq: L2 duality representation for phi and psi){reference-type="ref" reference="eq: L2 duality representation for phi and psi"}) and using ([\[eq: asymptotic Galerkin orthogonality\]](#eq: asymptotic Galerkin orthogonality){reference-type="ref" reference="eq: asymptotic Galerkin orthogonality"}), we obtain $$\begin{aligned} \|\tilde u - u_h\|_{L^2(\Omega_h; \Gamma_h)} &= a_h(\tilde u - u_h, \tilde w) - R_w(\tilde u - u_h) \\ &= a_h(\tilde u - u_h, \tilde w - w_h) - R_u(\tilde w - w_h) - R_w(\tilde u - u_h) + R_u(\tilde w), \end{aligned}$$ where we set $w_h := \mathcal I_h \tilde w$.
Since $\|\tilde u - u_h\|_{H^1(\Omega_h; \Gamma_h)} \le C_{f, \tau} h^{k}$ by Theorem [Theorem 2](#thm: H1 error estimate){reference-type="ref" reference="thm: H1 error estimate"} and $\|w\|_{H^2(\Omega; \Gamma)} \le C$, we find from Theorem [Theorem 1](#thm: interpolation error estimate){reference-type="ref" reference="thm: interpolation error estimate"} and the residual estimates ([\[eq: 1st estimate of Ru(v)\]](#eq: 1st estimate of Ru(v)){reference-type="ref" reference="eq: 1st estimate of Ru(v)"}), ([\[eq: estimate of Rw(v)\]](#eq: estimate of Rw(v)){reference-type="ref" reference="eq: estimate of Rw(v)"}), and ([\[eq: 2nd estimate of Ru(v)\]](#eq: 2nd estimate of Ru(v)){reference-type="ref" reference="eq: 2nd estimate of Ru(v)"}) that $$\begin{aligned} |a_h(\tilde u - u_h, \tilde w - w_h)| &\le C\|\tilde u - u_h\|_{H^1(\Omega_h; \Gamma_h)} \|\tilde w - w_h\|_{H^1(\Omega_h; \Gamma_h)} \le C_{f, \tau} h^{k+1}, \\ | R_u(\tilde w - w_h) - R_w(\tilde u - u_h) | &\le C_{f, \tau} h^k \|\tilde w - w_h\|_{H^1(\Omega_h; \Gamma_h)} + Ch \|\tilde u - u_h\|_{H^1(\Omega_h; \Gamma_h)} \le C_{f, \tau} h^{k+1}, \\ | R_u(\tilde w) | &\le C_{f, \tau} \delta (\|\tilde w\|_{H^2(\Omega \cup \Gamma(\delta))} + \|w\|_{H^2(\Gamma)}) \le C_{f, \tau}h^{k+1}, \end{aligned}$$ where the stability of extensions has been used. This proves the theorem. ◻

# Numerical example

Let $\Omega = \{(x, y) \in \mathbb R^2 \mid x^2 + y^2 < 1\}$ be the unit disk (thus $\Gamma$ is the unit circle) and set the exact solution to be $$u(x, y) = 10x^2 y.$$ With the linear finite element method, i.e., $k = 1$, we compute approximate solutions using the software `FreeFEM`. The surface gradient $\nabla_{\Gamma_h} u_h$ is computed by $$\nabla_{\Gamma_h} u_h = (I - \bm n_h \otimes \bm n_h) \nabla u_h \quad\text{on }\; \Gamma_h.$$ The errors are computed by interpolating the exact solution to the quadratic finite element spaces. The results are reported in Table [1](#tab: error){reference-type="ref" reference="tab: error"}, where $N$ denotes the number of nodes on the boundary. We see that the $H^1(\Omega_h; \Gamma_h)$- and $L^2(\Omega_h; \Gamma_h)$-errors behave as $O(h)$ and $O(h^2)$ respectively, which is consistent with the theoretical results established in Theorems [Theorem 2](#thm: H1 error estimate){reference-type="ref" reference="thm: H1 error estimate"} and [Theorem 3](#thm: L2 error estimate){reference-type="ref" reference="thm: L2 error estimate"}.

| $N$ | $h$     | $\|\nabla(u - u_h)\|_{L^2(\Omega_h)}$ | $\|\nabla_{\Gamma_h} (u - u_h)\|_{L^2(\Gamma_h)}$ | $\|u - u_h\|_{L^2(\Omega_h)}$ | $\|u - u_h\|_{L^2(\Gamma_h)}$ |
|-----|---------|---------------------------------------|---------------------------------------------------|-------------------------------|-------------------------------|
| 32  | 0.293   | 1.84                                  | 1.58                                               | 7.23E-2                       | 0.129                         |
| 64  | 0.161   | 0.93                                  | 0.794                                              | 1.81E-2                       | 3.27E-2                       |
| 128 | 9.44E-2 | 0.460                                 | 0.397                                              | 4.49E-3                       | 8.11E-3                       |
| 256 | 4.26E-2 | 0.229                                 | 0.199                                              | 1.09E-3                       | 2.03E-3                       |

: behavior of the error in $H^1(\Omega_h)$, $H^1(\Gamma_h)$, $L^2(\Omega_h)$, and $L^2(\Gamma_h)$

# References

J. W. Barrett and C. M. Elliott, *Finite-element approximation of elliptic equations with a Neumann or Robin condition on a curved boundary*, IMA J. Numer. Anal., 8 (1988), pp. 321--342. C. Bernardi, *Optimal finite-element interpolation on curved domains*, SIAM J. Numer. Anal., 26 (1989), pp. 1212--1240. C. Bernardi and V. Girault, *A local regularization operator for triangular and quadrilateral finite elements*, SIAM J. Numer. Anal., 35 (1998), pp. 1893--1916. Y. Chiba and N.
Saito, *Nitsche's method for a Robin boundary value problem in a smooth domain*, Numer. Methods Partial Differential Eq., 39 (2023), pp. 4126--4144. P. G. Ciarlet, *The Finite Element Method for Elliptic Problems*, SIAM, 1978. P. G. Ciarlet and P.-A. Raviart, *Interpolation theory over curved elements, with applications to finite element methods*, Comput. Methods Appl. Mech. Engrg., 1 (1972), pp. 217--249. C. M. Colciago, S. Deparis, and A. Quarteroni, *Comparisons between reduced order models and full 3D models for fluid-structure interaction problems in haemodynamics*, J. Comput. Appl. Math., 265 (2014), pp. 120--138. M. C. Delfour and J.-P. Zolésio, *Shapes and Geometries---Metrics, Analysis, Differential Calculus, and Optimization*, SIAM, 2nd ed., 2011. D. Edelmann, *Isoparametric finite element analysis of a generalized Robin boundary value problem on curved domains*, SMAI J. Comput. Math., 7 (2021), pp. 57--73. C. M. Elliott and T. Ranner, *Finite element analysis for a coupled bulk-surface partial differential equation*, IMA J. Numer. Anal., 33 (2013), pp. 377--402. D. Gilbarg and N. S. Trudinger, *Elliptic Partial Differential Equations of Second Order*, Springer, 1998. T. Kashiwabara, C. M. Colciago, L. Dedè, and A. Quarteroni, *Well-posedness, regularity, and convergence analysis of the finite element approximation of a generalized Robin boundary value problem*, SIAM J. Numer. Anal., 53 (2015), pp. 105--126. T. Kashiwabara and T. Kemmochi, *Pointwise error estimates of linear finite element method for Neumann boundary value problems in a smooth domain*, Numer. Math., 144 (2020), pp. 553--584. T. Kashiwabara, I. Oikawa, and G. Zhou, *Penalty method with P1/P1 finite element approximation for the Stokes equations under the slip boundary condition*, Numer. Math., 134 (2016), pp. 705--740. B. Kovács and C. Lubich, *Numerical analysis of parabolic problems with dynamic boundary conditions*, IMA J. Numer. Anal., 37 (2017), pp. 1--39. M. Lenoir, *Optimal isoparametric finite elements and error estimates for domains involving curved boundaries*, SIAM J. Numer. Anal., 21 (1986), pp. 562--580. T. Richter, *Fluid-structure Interactions---Models, Analysis and Finite Elements*, Springer, 2017. L. R. Scott and S. Zhang, *Finite element interpolation of nonsmooth functions satisfying boundary conditions*, Math. Comp., 54 (1990), pp. 483--493.

# Proof of Lemma [Lemma 2](#lem: nablaGammah(v - vpi)){reference-type="ref" reference="lem: nablaGammah(v - vpi)"} {#apx: nablaGammah(v - vpi)}

We consider the case $p < \infty$ only because $p = \infty$ can be addressed by an obvious modification.
We start from the local coordinate representation $$\label{eq1: proof of nabla (v - v circ pi)} \int_S |\nabla_{\Gamma_h} (v - v\circ\bm\pi)|^p \, d\gamma_h = \int_{S'} \Big| \sum_{\alpha} \bm g_h^\alpha \partial_\alpha \big[ v(\bm\Phi_h(\bm z')) - v(\bm\Phi(\bm z')) \big] \Big|^p \sqrt{\operatorname{det}G_h} \, d\bm z'.$$ Since $\bm\Psi(\bm z', t) = \bm\Phi(\bm z') + t \bm n(\bm\Phi(\bm z'))$, one has $$v(\bm\Phi_h(\bm z')) - v(\bm\Phi(\bm z')) = \int_0^{t^*(\bm\Phi(\bm z'))} \bm n(\bm\Phi(\bm z')) \cdot (\nabla v) \circ \bm\Psi(\bm z', t) \, dt \quad (\bm z' \in S').$$ Consequently, $$\begin{aligned} \sum_{\alpha} \bm g_h^\alpha \partial_\alpha \big[ v(\bm\Phi_h(\bm z')) - v(\bm\Phi(\bm z')) \big] &= \sum_{\alpha=1}^{d-1} \bm g_h^\alpha \, \partial_\alpha(t^* \circ \bm\Phi) \, \big[ (\bm n\circ\bm\Phi) \cdot ((\nabla v) \circ \bm\Phi_h) \big] \\ &+ \sum_{\alpha=1}^{d-1} \bm g_h^\alpha \int_0^{t^*\circ\bm\Phi} \Big( \partial_\alpha(\bm n\circ\bm\Phi) \cdot (\nabla v)\circ\bm\Psi + \bm n\circ\bm\Phi \cdot (\nabla^2 v)\circ\bm\Psi \cdot \partial_\alpha\bm\Psi \Big) \, dt,\end{aligned}$$ where $\bm a \cdot A \cdot \bm b$ means ${}^t\!\bm a A \bm b$ for vectors $\bm a, \bm b$ and a matrix $A$. For the first term, since $|\partial_\alpha (t^*\circ\bm\Phi)| \le Ch_S^k$ by ([\[eq: global t\*\]](#eq: global t*){reference-type="ref" reference="eq: global t*"}), $$\label{eq2: proof of nabla (v - v circ pi)} \int_{S'} \Big| \sum_{\alpha=1}^{d-1} \bm g_h^\alpha \, \partial_\alpha(t^* \circ \bm\Phi) \, \big[ (\bm n\circ\bm\Phi) \cdot ((\nabla v) \circ \bm\Phi_h) \big] \Big|^p \sqrt{\operatorname{det}G_h} \, d\bm z' \le Ch_S^{kp} \|\nabla v\|_{L^p(S)}^p.$$ For the second term, since $|\bm g_h^\alpha| \le C, |\partial_\alpha(\bm n\circ\bm\Phi)| \le C$, $|t^*\circ\bm\Phi| \le C\delta_S$, and $|\partial_\alpha\bm\Psi| = |\partial_\alpha\bm\Phi + t \partial_\alpha(\bm n\circ\bm\Phi)| \le C$, we obtain $$\begin{aligned} &\int_{S'} \bigg| \sum_{\alpha=1}^{d-1} \bm g_h^\alpha \int_0^{t^*\circ\bm\Phi} \Big( \partial_\alpha(\bm n\circ\bm\Phi) \cdot (\nabla v)\circ\bm\Psi + \bm n\circ\bm\Phi \cdot (\nabla^2 v)\circ\bm\Psi \cdot \partial_\alpha\bm\Psi \Big) \, dt \bigg|^p \sqrt{\operatorname{det}G_h} \, d\bm z' \notag \\ \le \; &C \int_{S'} \bigg| \int_{-\delta_S}^{\delta_S} \Big( |(\nabla v)\circ\bm\Psi| + |(\nabla^2 v)\circ\bm\Psi| \Big) \, dt \bigg|^p \, d\bm z' \notag \\ \le \; &C \delta_S^{p-1} \int_{S'} \int_{-\delta_S}^{\delta_S} \Big( |(\nabla v)\circ\bm\Psi|^p + |(\nabla^2 v)\circ\bm\Psi|^p \Big) \, dt \, d\bm z' \notag \\ \le \; &C \delta_S^{p-1} ( \|\nabla v\|_{L^p(\bm\pi(S, \delta_S))}^p + \|\nabla^2 v\|_{L^p(\bm\pi(S, \delta_S))}^p ), \label{eq3: proof of nabla (v - v circ pi)}\end{aligned}$$ where we have used Hölder's inequality and ([\[eq: local tubular neighborhood equivalence\]](#eq: local tubular neighborhood equivalence){reference-type="ref" reference="eq: local tubular neighborhood equivalence"}) in the third and fourth lines, respectively.
Substituting ([\[eq2: proof of nabla (v - v circ pi)\]](#eq2: proof of nabla (v - v circ pi)){reference-type="ref" reference="eq2: proof of nabla (v - v circ pi)"}) and ([\[eq3: proof of nabla (v - v circ pi)\]](#eq3: proof of nabla (v - v circ pi)){reference-type="ref" reference="eq3: proof of nabla (v - v circ pi)"}) into ([\[eq1: proof of nabla (v - v circ pi)\]](#eq1: proof of nabla (v - v circ pi)){reference-type="ref" reference="eq1: proof of nabla (v - v circ pi)"}), we deduce that $$\|\nabla_{\Gamma_h} (v - v\circ\bm\pi) \|_{L^p(S)} \le Ch_S^k \|\nabla v\|_{L^p(S)} + C \delta_S^{1-1/p} ( \|\nabla v\|_{L^p(\bm\pi(S, \delta_S))} + \|\nabla^2 v\|_{L^p(\bm\pi(S, \delta_S))} ).$$ Since $\|\nabla v\|_{L^p(\bm\pi(S, \delta_S))} \le C \delta_S^{1/p} \|\nabla v\|_{L^p(S)} + C \delta_S \|\nabla^2 v\|_{L^p(\bm\pi(S, \delta_S))}$ by ([\[eq2\': boundary-skin estimates\]](#eq2': boundary-skin estimates){reference-type="ref" reference="eq2': boundary-skin estimates"}) and $h_S \le 1$, we conclude ([\[eq1: conclusion of lemma which bounds nabla(v - v circ pi)\]](#eq1: conclusion of lemma which bounds nabla(v - v circ pi)){reference-type="ref" reference="eq1: conclusion of lemma which bounds nabla(v - v circ pi)"}). Estimate ([\[eq2: conclusion of lemma which bounds nabla(v - v circ pi)\]](#eq2: conclusion of lemma which bounds nabla(v - v circ pi)){reference-type="ref" reference="eq2: conclusion of lemma which bounds nabla(v - v circ pi)"}) follows from ([\[eq1: conclusion of lemma which bounds nabla(v - v circ pi)\]](#eq1: conclusion of lemma which bounds nabla(v - v circ pi)){reference-type="ref" reference="eq1: conclusion of lemma which bounds nabla(v - v circ pi)"}) because $\|\nabla v\|_{L^p(S)} \le C(\|\nabla v\|_{L^p(\bm\pi(S))} + \delta_S^{1-1/p} \|\nabla^2 v\|_{L^p(\bm\pi(S, \delta_S))})$ by ([\[eq3: boundary-skin estimates\]](#eq3: boundary-skin estimates){reference-type="ref" reference="eq3: boundary-skin estimates"}) and ([\[eq1: equivalence of surface integrals\]](#eq1: equivalence of surface integrals){reference-type="ref" reference="eq1: equivalence of surface integrals"}). This completes the proof of Lemma [Lemma 2](#lem: nablaGammah(v - vpi)){reference-type="ref" reference="lem: nablaGammah(v - vpi)"}.

# Stability of $\mathcal I_h$ {#sec: stability of Ih}

*Proof of Lemma [Lemma 7](#lem1: stability of Ih){reference-type="ref" reference="lem1: stability of Ih"}.* We focus only on the estimate on $S$; the one on $T$ can be proved similarly. It follows from ([\[eq: nodal basis estimate on S\]](#eq: nodal basis estimate on S){reference-type="ref" reference="eq: nodal basis estimate on S"}) and ([\[eq: Linfty norm of psip\]](#eq: Linfty norm of psip){reference-type="ref" reference="eq: Linfty norm of psip"}) that $$\label{eq: Ihu in the Hm norm on S} \begin{aligned} \|\nabla_S^m (\mathcal I_h v)\|_{L^2(S)} &\le \sum_{\bm p \in \mathcal N_h \cap S} |(v, \psi_{\bm p})_{L^2(\sigma_{\bm p})}| \, \|\nabla_S^m \phi_{\bm p}\|_{L^2(S)} \le Ch_S^{-m + (d-1)/2} \sum_{\bm p \in \mathcal N_h \cap S} \|v\|_{L^1(\sigma_{\bm p})} \|\psi_{\bm p}\|_{L^\infty(\sigma_{\bm p})} \\ &\le Ch_S^{-m - (d-1)/2} \sum_{S_1 \in \mathcal S_h(S)} \|v\|_{L^1(S_1)} \le Ch_S^{-m} \sum_{S_1 \in \mathcal S_h(S)} \|v\|_{L^2(S_1)}, \end{aligned}$$ where $\operatorname{meas}_{d-1}(S_1) \le Ch_S^{d-1}$ is used in the last line.
By Proposition [Proposition 2](#prop: transformation between S and Shat){reference-type="ref" reference="prop: transformation between S and Shat"}(ii), for $S_1 \in \mathcal S_h(S)$ we have $$\label{eq: v in the L2 norm on S1} \|v\|_{L^2(S_1)} \le Ch_S^{(d-1)/2} \|v \circ \bm F_S\|_{L^2(\hat S')} \le Ch_S^{(d-1)/2} \|v \circ \bm F_S\|_{H^1(\hat S')} \le C \sum_{l=0}^1 h_S^l \|\nabla_{S_1}^l v\|_{L^2(S_1)}$$ where $\hat S'$ is a projected image of $\hat S := \bm F_{T_{S_1}}^{-1}(S_1)$. Substitution of ([\[eq: v in the L2 norm on S1\]](#eq: v in the L2 norm on S1){reference-type="ref" reference="eq: v in the L2 norm on S1"}) into ([\[eq: Ihu in the Hm norm on S\]](#eq: Ihu in the Hm norm on S){reference-type="ref" reference="eq: Ihu in the Hm norm on S"}) concludes ([\[eq: local stability of Ihv on S\]](#eq: local stability of Ihv on S){reference-type="ref" reference="eq: local stability of Ihv on S"}). ◻ *Proof of Lemma [Lemma 8](#lem2: stability of Ih){reference-type="ref" reference="lem2: stability of Ih"}.* We focus on the estimate on $S$; the one on $T$ can be proved similarly. We introduce a reference macro element $\hat M_S' \subset \mathbb R^{d-1}$ which is a union of $\# \mathcal S_h(S)$ simplices of dimension $d-1$. By the regularity of the meshes, there is only a finite number of possibilities for $\hat M_S'$, which is independent of $h$ and $S$. There is a homeomorphism $\bm F_{M_S}: \hat M_S' \to M_S$ such that its restriction to each $(d-1)$-simplex $\hat S_1'$ belongs to $\mathbb P_k(\hat S_1')$ and is a $C^k$-diffeomorphism. For arbitrary $\hat P \in \mathbb P_k(\hat M_S')$, observe that $P := \hat P \circ \bm F_{M_S}^{-1} \in \operatorname{span}\{\phi_{\bm p}\}_{\bm p \in \mathcal N_h \cap M_S}$, and hence $(\mathcal I_h P)|_S = P|_S$. Therefore, it follows from Lemma [Lemma 7](#lem1: stability of Ih){reference-type="ref" reference="lem1: stability of Ih"} and Proposition [Proposition 2](#prop: transformation between S and Shat){reference-type="ref" reference="prop: transformation between S and Shat"}(ii) that $$\begin{aligned} \|\nabla_S^m (v - \mathcal I_h v)\|_{L^2(S)} &\le \|\nabla_S^m (v - P)\|_{L^2(S)} + \|\nabla_S^m \mathcal I_h(P - v)\|_{L^2(S)} \notag \\ &\le C \sum_{l=0}^1 h_S^{l-m} \sum_{S_1 \in \mathcal S_h(S)} \|\nabla_{S_1}^l (v - P)\|_{L^2(S_1)} \notag \\ &\le C \sum_{l=0}^1 h_S^{-m+(d-1)/2} \|\nabla_{\hat{\bm x}'}^l (v \circ \bm F_{M_S} - \hat P)\|_{L^2(\hat M_S')}. \label{eq1: proof of local interpolation error} \end{aligned}$$ The Bramble--Hilbert theorem, combined with an appropriate choice of a constant function for $\hat P$, yields $$\label{eq2: proof of local interpolation error} \|\nabla_{\hat{\bm x}'}^l (v \circ \bm F_{M_S} - \hat P)\|_{L^2(\hat M_S')} \le C \|\nabla_{\hat{\bm x}'} (v \circ \bm F_{M_S})\|_{L^2(\hat M_S')} \le C h_S^{1-(d-1)/2} \|v\|_{H^1(M_S)}$$ for $l = 0, 1$, where we have used Proposition [Proposition 2](#prop: transformation between S and Shat){reference-type="ref" reference="prop: transformation between S and Shat"}(ii) again. Now ([\[eq: local interpolation error on S\]](#eq: local interpolation error on S){reference-type="ref" reference="eq: local interpolation error on S"}) results from ([\[eq1: proof of local interpolation error\]](#eq1: proof of local interpolation error){reference-type="ref" reference="eq1: proof of local interpolation error"}) and ([\[eq2: proof of local interpolation error\]](#eq2: proof of local interpolation error){reference-type="ref" reference="eq2: proof of local interpolation error"}). 
◻ **Remark 8**. The argument above closely follows that of [@BerGir1998 p. 1899]. Because $\bm F_{M_S} \notin H^2(\hat M_S')$ in general, we considered the situation in which only $H^1(S_1)$-norms of $v$ appear. # Error estimates in the exact domain {#sec: estimate in exact domain} ## Extension of finite element functions We define a natural extension of $v_h \in V_h$ to $\Gamma(\delta)$ by $$\bar v_h|_{\bm\pi(S, \delta) \setminus \Omega_h} = \hat v_h \circ \bm F_{T_S}^{-1} \quad (S \in \mathcal S_h).$$ Here note that $\hat v_h = v_h|_{T_S} \circ \bm F_{T_S} \in \mathbb P_k(\hat T)$ as well as $\bm F_{T_S} \in [\mathbb P_k(\hat T)]^d$ can be naturally extended (or extrapolated) to $2\hat T$, which doubles $\hat T$ by similarity with respect to its barycenter. Because $h$ is sufficiently small, we may assume that $\bm F_{T_S}$ is a $C^k$-diffeomorphism defined in $2\hat T$ and that $\bm\pi(S, \delta) \subset \bm F_{T_S}(2\hat T) =: 2T_S$. The extended function $\hat v_h \circ \bm F_{T_S}^{-1} \in C^\infty(2T_S)$ is denoted by $\overline{v_{h, T_S}}$. **Remark 9**. The discrete extension satisfies only $\bar v_h \in L^2(\Omega \cup \Gamma(\delta))$ and $\bar v_h|_{\Gamma} \in L^2(\Gamma)$ globally, since $\bar v_h$ may be discontinuous across $\{ \bm x \in \Omega \setminus \Omega_h \mid \bm\pi(\bm x) \in \bm\pi(\partial S) \}$ for $S \in \mathcal S_h$ (i.e. the "lateral part" of the boundary of $\bm\pi(S, \delta) \setminus T_S$; cf. Figure [2](#fig1){reference-type="ref" reference="fig1"}). Nevertheless, for simplicity in reading, we will allow for the following abuse of notation: $$\begin{aligned} \|\nabla(u - \bar u_h)\|_{L^2(\Omega)} &= \Big( \|\nabla(u - u_h)\|_{L^2(\Omega \cap \Omega_h)}^2 + \sum_{S \in \mathcal S_h} \|\nabla(u - \bar u_h)\|_{L^2(\bm\pi(S, \delta) \cap (\Omega \setminus \Omega_h))}^2 \Big)^{1/2}, \\ \|\nabla(u - \bar u_h)\|_{L^2(\Gamma)} &= \Big( \sum_{S \in \mathcal S_h} \|\nabla_\Gamma (u - \bar u_h)\|_{L^2(\bm\pi(S))}^2 \Big)^{1/2}. \end{aligned}$$ Note, however, that $\bar v_h$ is continuous if $d = 2$ and the nodes of $\Gamma_h$ lie exactly on $\Gamma$. The discrete extension $\bar v_h$ can be estimated in $\Omega \setminus \Omega_h$ as follows. **Lemma 10**. *Let $v_h \in V_h$ and $S \in \mathcal S_h$. Then, for $p \in [1, \infty]$ we have $$\begin{aligned} \|\overline{v_{h, T_S}}\|_{L^p(\bm\pi(S, \delta_S))} &\le C \Big( \frac{\delta_S}{h_S} \Big)^{1/p} \|v_h\|_{L^p(T_S)}, \\ \|\nabla^m \overline{v_{h, T_S}}\|_{L^p(\bm\pi(S, \delta_S))} &\le C \Big( \frac{\delta_S}{h_S} \Big)^{1/p} h_S^{1-m} \|\nabla v_h\|_{L^p(T_S)} \quad (m \ge 1). \end{aligned}$$* *Proof.* We focus on the case $m \ge 1$; the case $m = 0$ can be treated similarly. For simplicity, we write $T := T_S$. 
By the Hölder inequality, $$\label{eq1: estimate in Omega minus Omegah} \|\nabla^m \overline{v_{h, T}}\|_{L^p(\bm\pi(S, \delta_S))} \le \operatorname{meas}_d(\bm\pi(S, \delta_S))^{1/p} \|\nabla^m \overline{v_{h, T}}\|_{L^\infty(2T)} \le C (h_S^{d-1} \delta_S)^{1/p} \|\nabla^m \overline{v_{h, T}}\|_{L^\infty(2T)}.$$ Transforming to the reference coordinate, we have $$\begin{aligned} \|\nabla_{\bm x}^m \overline{v_{h, T}}\|_{L^\infty(2T)} &\le C \sum_{l=1}^m h_S^{-l} \|\nabla_{\hat{\bm x}}^l \hat v_h\|_{L^\infty(2\hat T)} \le C h_S^{-m} \sum_{l=1}^m \|\nabla_{\hat{\bm x}}^l \hat v_h\|_{L^\infty(\hat T)} \le C h_S^{-m} \|\nabla_{\hat{\bm x}} \hat v_h\|_{L^p(\hat T)} \notag \\ &\le C h_S^{1-m-d/p} \|\nabla_{\bm x} v_h\|_{L^p(T)}, \label{eq2: estimate in Omega minus Omegah} \end{aligned}$$ where we note that $\|\cdot\|_{L^\infty(\hat T)}$ and $\|\cdot\|_{L^\infty(2\hat T)}$ define norms for polynomials and that all norms on a finite-dimensional space are equivalent to each other. Now substitution of ([\[eq2: estimate in Omega minus Omegah\]](#eq2: estimate in Omega minus Omegah){reference-type="ref" reference="eq2: estimate in Omega minus Omegah"}) into ([\[eq1: estimate in Omega minus Omegah\]](#eq1: estimate in Omega minus Omegah){reference-type="ref" reference="eq1: estimate in Omega minus Omegah"}) proves the desired estimate. ◻ For $S \in \mathcal S_h$, we will make use of the following trace inequality: $$\label{eq: local trace estimate} \|v\|_{L^2(S)} \le Ch_S^{-1/2} \|v\|_{L^2(T_S)} + C \|v\|_{L^2(T_S)}^{1/2} \|\nabla v\|_{L^2(T_S)}^{1/2} \quad \forall v \in H^1(T_S),$$ as well as the inverse inequality $$\|\nabla^m v_h\|_{L^2(S)} \le Ch_S^{m-1/2} \|v_h\|_{H^m(T_S)} \quad \forall v_h \in V_h, \; m \ge 0.$$

## $H^1$-error estimate in $\Omega$

Let us prove that $$\Big( \|\nabla(u - u_h)\|_{L^2(\Omega \cap \Omega_h)}^2 + \sum_{S \in \mathcal S_h} \|\nabla(u - \bar u_h)\|_{L^2(\bm\pi(S, \delta) \cap (\Omega \setminus \Omega_h))}^2 \Big)^{1/2} \le Ch^k \|u\|_{H^{k+1}(\Omega)}.$$ For this, by virtue of the $H^1(\Omega_h)$-error estimate in Theorem [Theorem 2](#thm: H1 error estimate){reference-type="ref" reference="thm: H1 error estimate"}, it suffices to show the following: **Proposition 6**. *Under the same assumptions as in Theorem [Theorem 2](#thm: H1 error estimate){reference-type="ref" reference="thm: H1 error estimate"} we have $$\Big( \sum_{S \in \mathcal S_h} \|\nabla(\tilde u - \bar u_h)\|_{L^2(\bm\pi(S, \delta_S) \setminus \Omega_h)}^2 \Big)^{1/2} \le Ch^{k+1}(\|\tilde u\|_{H^2(\Gamma(\delta))} + \|\tilde u\|_{H^{k+1}(\Omega_h)}) + Ch^{k/2} \|\nabla(\tilde u - u_h)\|_{L^2(\Omega_h)}.$$* *Proof.* Fixing an arbitrary $S \in \mathcal S_h$ and setting $v_h = \mathcal I_h^L \tilde u$, we start from $$\begin{aligned} \|\nabla(\tilde u - \bar u_h)\|_{L^2(\bm\pi(S, \delta_S) \setminus \Omega_h)} &\le \|\nabla(\tilde u - \overline{u_{h, T_S}})\|_{L^2(\bm\pi(S, \delta_S))} \\ &\le \|\nabla(\tilde u - \overline{v_{h, T_S}})\|_{L^2(\bm\pi(S, \delta_S))} + \|\nabla(\overline{v_{h, T_S}} - \overline{u_{h, T_S}})\|_{L^2(\bm\pi(S, \delta_S))}.
\end{aligned}$$ We apply ([\[eq2: boundary-skin estimates\]](#eq2: boundary-skin estimates){reference-type="ref" reference="eq2: boundary-skin estimates"}) to get $$\|\nabla(\tilde u - \overline{v_{h, T_S}})\|_{L^2(\bm\pi(S, \delta_S))} \le C \delta_S^{1/2} \|\nabla( \tilde u - v_h )\|_{L^2(S)} + C \delta_S \|\nabla^2( \tilde u - \overline{v_{h, T_S}} )\|_{L^2(\bm\pi(S, \delta_S))}.$$ Because of ([\[eq: Lagrange interpolation error estimate on S\]](#eq: Lagrange interpolation error estimate on S){reference-type="ref" reference="eq: Lagrange interpolation error estimate on S"}) and ([\[eq: local trace estimate\]](#eq: local trace estimate){reference-type="ref" reference="eq: local trace estimate"}), we get $$\label{eq1: H1 error in Omega minus Omegah} \|\nabla(\tilde u - \bar v_h)\|_{L^2(S)} \le Ch_S^{k-1/2} \|\tilde u\|_{H^{k+1}(T_S)}.$$ By Lemma [Lemma 10](#lem: estimate in Omega minus Omegah){reference-type="ref" reference="lem: estimate in Omega minus Omegah"}, $$\begin{aligned} \|\nabla^2(\tilde u - \overline{v_{h, T_S}})\|_{L^2(\bm\pi(S, \delta_S))} &\le \|\nabla^2 \tilde u\|_{L^2(\bm\pi(S, \delta_S))} + \|\nabla^2 \overline{v_{h, T_S}}\|_{L^2(\bm\pi(S, \delta_S))} \notag \\ &\le \|\nabla^2 \tilde u\|_{L^2(\bm\pi(S, \delta_S))} \notag + C \Big( \frac{\delta_S}{h_S} \Big)^{1/2} h_S^{-1} \|\nabla v_h\|_{L^2(T_S)} \notag \\ &\le \|\nabla^2 \tilde u\|_{L^2(\bm\pi(S, \delta_S))} + C \delta_S^{1/2} h_S^{-3/2} \|\tilde u\|_{H^{k+1}(T_S)}. \label{eq2: H1 error in Omega minus Omegah} \end{aligned}$$ Finally, observe that $$\begin{aligned} \|\nabla(\overline{v_{h, T_S}} - \overline{u_{h, T_S}})\|_{L^2(\bm\pi(S, \delta_S))} &\le C \Big( \frac{\delta_S}{h_S} \Big)^{1/2} \|\nabla(v_h - u_h)\|_{L^2(T_S)} \notag \\ &\le C \Big( \frac{\delta_S}{h_S} \Big)^{1/2} (h_S^k \|\tilde u\|_{H^{k+1}(T_S)} + \|\nabla(\tilde u - u_h)\|_{L^2(T_S)}). \label{eq3: H1 error in Omega minus Omegah} \end{aligned}$$ Now we deduce from ([\[eq1: H1 error in Omega minus Omegah\]](#eq1: H1 error in Omega minus Omegah){reference-type="ref" reference="eq1: H1 error in Omega minus Omegah"})--([\[eq3: H1 error in Omega minus Omegah\]](#eq3: H1 error in Omega minus Omegah){reference-type="ref" reference="eq3: H1 error in Omega minus Omegah"}) that $$\begin{aligned} \|\nabla(\tilde u - \bar u_h)\|_{L^2(\bm\pi(S, \delta_S) \setminus \Omega_h)} &\le C \Big( \frac{\delta_S}{h_S} \Big)^{1/2} (h_S^k \|\tilde u\|_{H^{k+1}(T_S)} + \|\nabla(\tilde u - u_h)\|_{L^2(T_S)}) \\ &\qquad + C \delta_S \|\nabla^2 \tilde u\|_{L^2(\bm\pi(S, \delta_S))} + C \Big( \frac{\delta_S}{h_S} \Big)^{3/2} \|\tilde u\|_{H^{k+1}(T_S)} \\ &\le Ch_S^{k+1}(\|\tilde u\|_{H^2(\bm\pi(S, \delta_S))} + \|\tilde u\|_{H^{k+1}(T_S)}) + Ch_S^{k/2} \|\nabla(\tilde u - u_h)\|_{L^2(T_S)}. \end{aligned}$$ Taking the square and the summation over $S \in \mathcal S_h$, we obtain the desired estimate. ◻ ## $H^1$-error estimate on $\Gamma$ To establish $$\label{eq: global H1 error estimate on Gamma} \Big( \sum_{S \in \mathcal S_h} \|\nabla_\Gamma (u - \bar u_h)\|_{L^2(\bm\pi(S))}^2 \Big)^{1/2} \le Ch^k \|u\|_{H^{k+1}(\Omega; \Gamma)},$$ we show the following estimate on each boundary mesh $S$. **Proposition 7**. *In addition to the assumptions of Theorem [Theorem 2](#thm: H1 error estimate){reference-type="ref" reference="thm: H1 error estimate"} we assume $k > d/2$ so that $u \in H^{k+1}(\Omega) \hookrightarrow C^1(\overline\Omega)$. 
Then, for $S \in \mathcal S_h$ we have $$\begin{aligned} \|\nabla_\Gamma (u - \bar u_h)\|_{L^2(\bm\pi(S))} &\le C \|\nabla_{\Gamma_h} (\tilde u - u_h) \|_{L^2(S)} + C \delta_S^{3/2} \|\tilde u\|_{H^3(\bm\pi(S, \delta_S))} \\ &\qquad + C h_S^{(k-1)/2} \sum_{T_1 \in \mathcal T_h(T_S)} (h_S^k \|\tilde u\|_{H^{k+1}(T_1)} + \|\nabla (\tilde u - u_h)\|_{L^2(T_1)}), \end{aligned}$$ where $\mathcal T_h(T)$ means $\{T_1 \in \mathcal T \mid T_1 \cap T \neq \emptyset\}$ for $T \in \mathcal T_h$.* *Proof.* Since $h_S$ is sufficiently small, we may assume $\bm\pi(S) \subset \Omega_h^c \cup \bigcup \mathcal T_h(T)$ with $T : = T_S$. Therefore, $$\begin{aligned} \|\nabla_\Gamma (u - \bar u_h)\|_{L^2(\bm\pi(S))} = \Big( \|\nabla_\Gamma( u - \overline{u_{h, T}} )\|_{L^2(\bm\pi(S) \cap (T \cup \Omega_h^c))}^2 + \sum_{T_1 \in \mathcal T_h(T) \setminus \{T\}} \|\nabla_\Gamma(u - u_h)\|_{L^2(\bm\pi(S) \cap T_1)}^2 \Big)^{1/2}. \end{aligned}$$ Setting $v_h = \mathcal I_h^L \tilde u$, we have $$\begin{aligned} \|\nabla_\Gamma( u - \overline{u_{h, T}} )\|_{L^2(\bm\pi(S))} &\le C\|\nabla_{\Gamma_h} [(u - \overline{u_{h, T}}) \circ \bm\pi] \|_{L^2(S)} \\ &\le C(\|\nabla_{\Gamma_h} (\tilde u - u_h) \|_{L^2(S)} + \|\nabla_{\Gamma_h} [(\tilde u - v_h) - (u - \overline{v_{h, T}}) \circ \bm\pi] \|_{L^2(S)} \\ &\qquad + \|\nabla_{\Gamma_h} [(v_h - u_h) - (\overline{v_{h, T}} - \overline{u_{h, T}}) \circ \bm\pi] \|_{L^2(S)}). \end{aligned}$$ The second and third terms in the right-hand side are majorized by $$\begin{aligned} &C h_S^k \|\nabla (\tilde u - v_h)\|_{L^2(S)} + C \delta_S^{1/2} \|\nabla^2 (\tilde u - \overline{v_{h, T}})\|_{ L^2(\bm\pi(S, \delta_S)) } \\ \le \; & Ch_S^{2k-1/2} \|\tilde u\|_{H^{k+1}(T)} + C \delta_S \|\nabla^2 (\tilde u - v_h)\|_{L^2(S)} + C \delta_S^{3/2} \|\nabla^3 (\tilde u - \overline{v_{h, T}})\|_{ L^2(\bm\pi(S, \delta_S)) } \\ \le \; & C h_S^{2k-1/2} \|\tilde u\|_{H^{k+1}(T)} + C \delta_S^{3/2} \|\tilde u\|_{ H^3(\bm\pi(S, \delta_S)) } + C \delta_S^{3/2} \Big( \frac{\delta_S}{h_S} \Big)^{1/2} h_S^{-2} \|\nabla v_h\|_{L^2(T)} \\ \le \; & C h_S^{2k-1/2} \|\tilde u\|_{H^{k+1}(T)} + C \delta_S^{3/2} \|\tilde u\|_{H^3(\bm\pi(S, \delta_S))}, \end{aligned}$$ and by $$\begin{aligned} &C h_S^k \|\nabla (v_h - u_h)\|_{L^2(S)} + C \delta_S^{1/2} \|\nabla^2 (\overline{v_{h, T}} - \overline{u_{h, T}})\|_{L^2(\bm\pi(S, \delta_S))} \\ \le \; &Ch_S^{k-1/2} \|\nabla (v_h - u_h)\|_{L^2(T)} + C \delta_S^{1/2} \Big( \frac{\delta_S}{h_S} \Big)^{1/2} h_S^{-1} \|\nabla (v_h - u_h)\|_{L^2(T)} \\ \le \; &Ch_S^{k-1/2} (h_S^k \|\tilde u\|_{H^{k+1}(T)} + \|\nabla (\tilde u - u_h)\|_{L^2(T)}), \end{aligned}$$ respectively. Therefore, we obtain $$\begin{aligned} \|\nabla_\Gamma( u - \overline{u_{h, T}} )\|_{L^2(\bm\pi(S) \cap (T \cup \Omega_h^c))} &\le \|\nabla_{\Gamma_h} (\tilde u - u_h) \|_{L^2(S)} + C h_S^{2k-1/2} \|\tilde u\|_{H^{k+1}(T_S)} + C \delta_S^{3/2} \|\tilde u\|_{H^3(\bm\pi(S, \delta_S))} \\ &\qquad + C h_S^{k-1/2} \|\nabla (\tilde u - u_h)\|_{L^2(T_S)}. 
\end{aligned}$$ Next, for $T_1 \in \mathcal T_h(T)$ such that $T_1 \neq T$, we have $$\|\nabla_\Gamma(u - u_h)\|_{L^2(\bm\pi(S) \cap T_1)} \le \|\nabla_\Gamma(u - v_h)\|_{L^2(\bm\pi(S) \cap T_1)} + \|\nabla_\Gamma(v_h - u_h)\|_{L^2(\bm\pi(S) \cap T_1)}.$$ We estimate the first and second terms by $$\begin{aligned} \operatorname{meas}_{d-1}(\bm\pi(S) \cap T_1)^{1/2} \|\nabla(\tilde u - v_h)\|_{L^\infty(T_1)} \le C (h_S^{d-2} \delta_S)^{1/2} \cdot C h_{T_1}^{k-d/2} \|\tilde u\|_{H^{k+1}(T_1)} \le C h_S^{(3k-1)/2} \|\tilde u\|_{H^{k+1}(T_1)} \end{aligned}$$ (note that $h_{T_1} \le C h_S$ by the regularity of the meshes), and by $$\begin{aligned} \operatorname{meas}_{d-1}(\bm\pi(S) \cap T_1)^{1/2} \|\nabla(v_h - u_h)\|_{L^\infty(T_1)} &\le C (h_S^{d-2} \delta_S)^{1/2} \cdot C h_{T_1}^{-d/2} \|\nabla(v_h - u_h)\|_{L^2(T_1)} \\ &\le C h_S^{(k-1)/2} (h_{T_1}^k \|\tilde u\|_{H^{k+1}(T_1)} + \|\nabla(\tilde u - u_h)\|_{L^2(T_1)}). \end{aligned}$$ Combining the estimates above all together, we conclude the desired inequality. ◻ **Remark 10**. (i) Because $\#\mathcal T_h(T) \le C$ for all $T \in \mathcal T_h$ by the regularity of the meshes, we can deduce ([\[eq: global H1 error estimate on Gamma\]](#eq: global H1 error estimate on Gamma){reference-type="ref" reference="eq: global H1 error estimate on Gamma"}) from the proposition above and Theorem [Theorem 2](#thm: H1 error estimate){reference-type="ref" reference="thm: H1 error estimate"}. \(ii\) The assumption $k > d/2$ excludes the case $k = 1$ even for $d = 2, 3$. However, a similar argument using $\nabla^2 v_h \equiv 0$ for $k = 1$ on each element yields $$\begin{aligned} \|\nabla_\Gamma (u - \bar u_h)\|_{L^2(\bm\pi(S))} &\le C \|\nabla_{\Gamma_h} (\tilde u - u_h) \|_{L^2(S)} + C \delta_S^{1/2} \|\tilde u\|_{H^2(\bm\pi(S, \delta_S))} \\ &\qquad + C \sum_{T_1 \in \mathcal T_h(T_S)} (h_S |T_1|^{d/2} \|\tilde u\|_{W^{2,\infty}(T_1)} + \|\nabla (\tilde u - u_h)\|_{L^2(T_1)}), \end{aligned}$$ under the additional regularity assumption $u \in W^{2,\infty}(\Omega)$. ## Final result The final result including $L^2$-estimates is stated as follows. **Theorem 4**. *Let $k > d/2$. Assume that $f \in H^1(\Omega)$, $\tau \in H^{3/2}(\Gamma)$ if $k = 2$ and that $f \in H^{k-1}(\Omega)$, $\tau \in H^{k-1}(\Gamma)$ if $k \ge 3$. If $u$ and $u_h$ are the solutions of ([\[eq: continuous problem\]](#eq: continuous problem){reference-type="ref" reference="eq: continuous problem"}) and ([\[eq: FE scheme\]](#eq: FE scheme){reference-type="ref" reference="eq: FE scheme"}) respectively, then we have $$\begin{aligned} \|\nabla (u - \bar u_h)\|_{L^2(\Omega)} + \|\nabla_\Gamma (u - \bar u_h)\|_{L^2(\Gamma)} &\le C_{f, \tau} h^k, \\ \|u - \bar u_h\|_{L^2(\Omega)} + \|u - \bar u_h\|_{L^2(\Gamma)} &\le C_{f, \tau} h^{k+1}, \end{aligned}$$ where $C_{f, \tau} := C (\|f\|_{H^{k-1}(\Omega)} + \|\tau\|_{H^{\max\{k-1, 3/2\}}(\Gamma)})$.* *Proof.* The $H^1$-estimates are already shown by Propositions [Proposition 6](#prop: H1 error estimate in Omega){reference-type="ref" reference="prop: H1 error estimate in Omega"} and [Proposition 7](#prop: H1 error estimate on Gamma){reference-type="ref" reference="prop: H1 error estimate on Gamma"}. 
The $L^2(\Omega)$-estimate follows from $$\|\tilde u - \bar u_h\|_{L^2(\Gamma(\delta))} \le C \delta^{1/2} \|\tilde u - u_h\|_{L^2(\Gamma_h)} + C \delta \|\nabla (\tilde u - \bar u_h)\|_{L^2(\Gamma(\delta))} \le C_{f, \tau} (\delta^{1/2} h^{k+1} + \delta h^k),$$ where we have used the result of Theorem [Theorem 3](#thm: L2 error estimate){reference-type="ref" reference="thm: L2 error estimate"}. For the $L^2(\Gamma)$-estimate, we see that $$\begin{aligned} \|u - \bar u_h\|_{L^2(\Gamma)} &\le C \|(u - \bar u_h) \circ \bm\pi\|_{L^2(\Gamma_h)} \le C \|\tilde u - u_h\|_{L^2(\Gamma_h)} + C \|(\tilde u - u_h) - (u - \bar u_h) \circ \bm\pi\|_{L^2(\Gamma_h)} \\ &\le C_{f, \tau} h^{k+1} + C \delta^{1/2} \|\nabla (\tilde u - \bar u_h)\|_{L^2(\Gamma(\delta))} \\ &\le C_{f, \tau} (h^{k+1} + \delta^{1/2} h^k), \end{aligned}$$ where we have used Proposition [Proposition 6](#prop: H1 error estimate in Omega){reference-type="ref" reference="prop: H1 error estimate in Omega"} together with Theorem [Theorem 2](#thm: H1 error estimate){reference-type="ref" reference="thm: H1 error estimate"} in the last line. ◻
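For readers who wish to check the convergence behavior reported in Table [1](#tab: error){reference-type="ref" reference="tab: error"} against Theorems [Theorem 2](#thm: H1 error estimate){reference-type="ref" reference="thm: H1 error estimate"} and [Theorem 3](#thm: L2 error estimate){reference-type="ref" reference="thm: L2 error estimate"}, the observed orders can be recomputed from consecutive rows of that table. The following Python snippet is only our own post-processing sketch with the values copied from the table; the actual computation of the errors was done with `FreeFEM` as described in the numerical example.

```python
import math

# rows of the table for k = 1 on the unit disk:
# (h, H1(Omega_h) error, H1(Gamma_h) error, L2(Omega_h) error, L2(Gamma_h) error)
rows = [
    (0.293,   1.84,  1.58,  7.23e-2, 0.129),
    (0.161,   0.93,  0.794, 1.81e-2, 3.27e-2),
    (9.44e-2, 0.460, 0.397, 4.49e-3, 8.11e-3),
    (4.26e-2, 0.229, 0.199, 1.09e-3, 2.03e-3),
]

labels = ["H1(Omega_h)", "H1(Gamma_h)", "L2(Omega_h)", "L2(Gamma_h)"]
for j, name in enumerate(labels, start=1):
    # observed order between two consecutive meshes: log(e_i/e_{i+1}) / log(h_i/h_{i+1})
    orders = [
        math.log(rows[i][j] / rows[i + 1][j]) / math.log(rows[i][0] / rows[i + 1][0])
        for i in range(len(rows) - 1)
    ]
    print(name, [round(p, 2) for p in orders])
# the printed orders cluster around 1 for the H1-type errors and around 2
# for the L2-type errors, consistent with O(h) and O(h^2) for k = 1
```

The observed orders oscillate slightly because the mesh sizes produced by the mesh generator are not exactly halved between refinements, but they cluster around the theoretical rates.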
arxiv_math
{ "id": "2310.00519", "title": "Finite element analysis of a generalized Robin boundary value problem in\n curved domains based on the extension method", "authors": "Takahito Kashiwabara", "categories": "math.NA cs.NA", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | The Peterson variety (which we denote by $Y$) is a subvariety of the flag variety, introduced by Dale Peterson to describe the quantum cohomology rings of all the partial flag varieties. Motivated by the mirror symmetry for partial flag varieties, Rietsch studied the totally nonnegative part $Y_{\ge0}$ and its cell decomposition. Based on the structure of those cells, Rietsch gave the following conjecture in Lie type A; as a cell decomposed space, $Y_{\ge0}$ is homeomorphic to the cube $[0,1]^{\dim_{\mathbb C}Y}$. In this paper, we give a proof of Rietsch's conjecture on $Y_{\ge0}$ in Lie type A by using toric geometry which is closely related to the Peterson variety. address: - Faculty of Science, Department of Applied Mathematics, Okayama University of Science, 1-1 Ridai-cho, Kita-ku, Okayama, 700-0005, Japan - School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan, 430074, P.R. China author: - Hiraku Abe - Haozhi Zeng title: | Totally nonnegative part of the Peterson variety\ in Lie type A --- # Introduction {#sect: intro} An invertible matrix is said to be totally positive (resp. nonnegative) if all its minors are positive (resp. nonnegative). In 1990's, Lusztig extended in [@lu94] the theory of totally positive matrices to an arbitrary split reductive connected algebraic group over $\mathbb R$. It was motivated by Kostant's question on the relation between totally positive matrices and positivity properties of canonical bases for quantized enveloping algebras. He also introduced the totally nonnegative part of the flag varieties in [@lu94; @lu98], and now total positivity appears in several fields of mathematics including cluster algebras ([@fo10; @fo-ze02]), KP equations ([@ko-wi11; @ko-wi14]), and mathematical physics ([@ga-py20; @li17; @ri12]). For general properties of the totally nonnegative parts of the flag varieties, see e.g. [@ga-ka-lam22; @he14; @lu94; @po-sp-wi09; @ri99; @ri-wi08]. Also in 1990's, Dale Peterson ([@pe-97]) observed that there is an algebraic variety $Y$ endowed with a certain stratification whose strata describe the quantum cohomology rings of all the partial flag varieties. This variety $Y$ is now called the Peterson variety, and in Lie type A, it can be defined as follows: $$Y\coloneqq \{gB^-\in {SL_{n}}(\mathbb{C})/B^- \mid (\text{\rm Ad}_{g^{-1}} f)_{i,j}=0\ (j>i+1)\},$$ where $B^-\subseteq {SL_{n}}(\mathbb C)$ is the subgroup consisting of lower triangular matrices, and $f$ is the $n\times n$ regular nilpotent matrix in Jordan canonical form. In [@ri03], Rietsch gave a proof of Peterson's claim for the quantum cohomology ring $qH^*({SL_{n}}(\mathbb{C})/P)$ for parabolic subgroups $P\subseteq {SL_{n}}(\mathbb{C})$ (cf. [@ch22]). By combining it with Lusztig's theory of total positivity on ${SL_{n}}(\mathbb{C})/B^-$, she obtained a parametrization of the set of $n\times n$ totally nonnegative Toeplitz matrices. Here, Toeplitz matrices are the matrices of the form $$\begin{pmatrix} 1& &&&\\ x_1 & \!\!\!1 & && \\ x_2 & \!\!\!x_1 & \!\!\!1 & &\\ \vdots & \!\!\!x_2 & \!\!\!x_1 & \ddots\\ x_{n-2} & \!\!\!\vdots & \!\!\!\ddots & \ddots & \ddots\\ x_{n-1} & \!\!\!x_{n-2}& \!\!\!\cdots & x_2 & x_1& 1 \end{pmatrix}$$ for $x_1,x_2,\ldots,x_{n-1}\in \mathbb C$. In fact, Rietsch showed that the totally nonnegative Toeplitz matrices are completely parametrized by the lower left $i\times i$ minors for $1\le i\le n-1$ ([@ri03 Proposition 1.2]). 
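To make this parametrization concrete, the following Python sketch builds the unipotent lower-triangular Toeplitz matrix displayed above from a tuple $(x_1,\ldots,x_{n-1})$, tests total nonnegativity by brute force over all minors (feasible only for small $n$), and evaluates the lower left $i\times i$ minors appearing in Rietsch's parametrization. The script and its helper names are ours and serve only to illustrate the definitions; it is not taken from [@ri03].

```python
from itertools import combinations
import numpy as np

def toeplitz_unipotent(x):
    """Lower-triangular Toeplitz matrix with 1's on the diagonal and
    x = (x_1, ..., x_{n-1}) down the first column, as displayed above."""
    n = len(x) + 1
    A = np.eye(n)
    for i in range(1, n):
        for j in range(i):
            A[i, j] = x[i - j - 1]
    return A

def is_totally_nonnegative(A, tol=1e-12):
    """Brute-force check that every minor of A is nonnegative."""
    n = A.shape[0]
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                if np.linalg.det(A[np.ix_(rows, cols)]) < -tol:
                    return False
    return True

def lower_left_minors(A):
    """The lower left i x i minors (last i rows, first i columns), 1 <= i <= n-1."""
    n = A.shape[0]
    return [float(np.linalg.det(A[np.ix_(range(n - i, n), range(i))]))
            for i in range(1, n)]

# a small example with n = 4
A = toeplitz_unipotent([1.0, 0.5, 0.1])
print(is_totally_nonnegative(A))
print(lower_left_minors(A))
```

The brute-force test is of course exponential in $n$ and is meant only as a check of the definitions on toy examples.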
One of the key ingredients of the proof was to introduce the totally nonnegative parts of the strata of the Peterson variety $Y$ in the connection to Peterson's description of quantum cohomology rings $qH^*({SL_{n}}(\mathbb{C})/P)$ of partial flag varieties. Motivated by the mirror symmetry for ${SL_{n}}(\mathbb{C})/P$, Rietsch studied a cell decomposition of the totally nonnegative part $Y_{\ge0}$ ([@ri03; @ri06]) whose cells are given by the intersections with open Richardson varieties, and she also showed that the whole space $Y_{\ge0}$ is contractible. Based on the observations on the structure of the cells, Rietsch gave the following conjecture ([@ri06 Conjecture 10.3]), where we set $[0,1]=\{x\in\mathbb R\mid 0\le x\le 1\}$.\ **Rietsch's conjecture on $Y_{\ge0}$.** As a cell decomposed space, $Y_{\ge 0}$ is homeomorphic to the $(n-1)$-dimensional cube $[0,1]^{n-1}$.\ In this paper, we prove that Rietsch's conjecture on $Y_{\ge0}$ holds. The idea of our proof is as follows. In [@ab-ze23], the authors constructed a particular morphism $\Psi \colon Y\rightarrow X(\Sigma)$, where $X(\Sigma)$ is the toric orbifold introduced by M. Blume ([@bl15]). We show that this morphism restricts to a homeomorphism on their nonnegative parts: $$\Psi_{\ge0} \colon Y_{\ge0} \stackrel{\approx}{\rightarrow} X(\Sigma)_{\ge0}.$$ The toric orbifold $X(\Sigma)$ is projective so that there is a simplicial lattice polytope $P$ whose normal fan is $\Sigma$. This polytope $P$ is combinatorially equivalent to an $(n-1)$-dimensional cube $[0,1]^{n-1}$ which implies that we have $P\approx [0,1]^{n-1}$. Since $X(\Sigma)_{\ge0}$ is homeomorphic to the polytope $P$ under the moment map, we conclude that $$Y_{\ge0} \approx [0,1]^{n-1}.$$ We also show that the resulting homeomorphism preserves the cells of $Y$ and $[0,1]^{n-1}$. As one can see, our proof uses toric geometry. A similar approach was taken in [@po-sp-wi09; @ri-wi08] to show that the nonnegative parts of partial flag varieties are CW complexes. This paper is organized as follows. In Section 2, we fix some notations which we use throughout this paper. In Sections 3 and 4, we describe the toric orbifold $X(\Sigma)$ and its nonnegative part $X(\Sigma)_{\ge 0}$. In Sections 5 and 6, we study the Peterson variety $Y$ and its totally nonnegative part $Y_{\ge 0}$. In Sections 7 and 8, we recall the definition of the morphism $\Psi \colon Y \rightarrow X(\Sigma)$, and we show that $Y_{\ge 0}$ and $X(\Sigma)_{\ge 0}$ are homeomorphic as cell decomposed spaces through $\Psi$. Finally, in Section 9, we give a proof of Rietsch's conjecture on $Y_{\ge0}$. **Acknowledgments**. We are grateful to Konstanze Rietsch, Bin Zhang, Changzheng Li, Naoki Fujita, Mikiya Masuda, Takashi Sato, and Tatsuya Horiguchi for valuable discussions. This research is supported in part by Osaka Central Advanced Mathematical Institute (MEXT Joint Usage/Research Center on Mathematics and Theoretical Physics): Geometry and combinatorics of Hessenberg varieties. The first author is supported in part by JSPS Grant-in-Aid for Scientific Research(C): 23K03102. The second author is supported in part by NSFC: 11901218. # Notations {#sec: notations} We fix some notations which we use throughout this paper. Let $n\ge 2$ be a positive integer, and for simplicity let us denote by $$\begin{aligned} {SL_{n}}\coloneqq {SL_{n}}(\mathbb C)\end{aligned}$$ the complex special linear group of degree $n$. Let $B^+$ (resp. $B^-$) be the Borel subgroup of ${SL_{n}}$ consisting of upper (resp. 
lower) triangular matrices. We denote by $T\coloneqq B^+\cap B^-$ the maximal torus of ${SL_{n}}$ consisting of diagonal matrices: $$\begin{aligned} T = \{\text{diag}(t_1,\ldots,t_n)\in {SL_{n}} \mid t_i\in\mathbb C^{\times}\ (1\le i\le n)\}.\end{aligned}$$ Let $U^+$ (resp. $U^-$) be the unipotent subgroup of ${SL_{n}}$ consisting of upper (resp. lower) triangular matrices having $1$'s on the diagonal. We write $[n-1]\coloneqq\{1,2,\ldots,n-1\}$. For $i\in[n-1]$, let $\alpha_i \colon T\rightarrow \mathbb C^{\times}$ be the $i$-th simple root of $T$ given by $$\begin{aligned} \alpha_i(t) \coloneqq t_i t_{i+1}^{-1} \qquad \text{for $t=\text{diag}(t_1,\ldots,t_n)\in T$},\end{aligned}$$ and let $\varpi_i \colon T\rightarrow \mathbb C^{\times}$ be the $i$-th fundamental weight of $T$ given by $$\begin{aligned} \varpi_i(t) \coloneqq t_1t_2\cdots t_i \qquad \text{for $t=\text{diag}(t_1,\ldots,t_n)\in T$}.\end{aligned}$$ # Toric orbifolds associated to Cartan matrices {#sec: Toric orbifold} In this section, we describe the projective toric orbifold $X(\Sigma)$ appeared in the introduction. We begin with its quotient description, and we describe its fan $\Sigma$. The goal of this section is to describe the nonnegative part $X(\Sigma)_{\ge0}$ with its cell decomposition. ## Definition by quotient {#subsec: Cartan toric orbifolds} We take the following description of $\mathbb C^{2(n-1)}$: $$\begin{aligned} \mathbb C^{2(n-1)} = \{ (x_1,\ldots,x_{n-1};y_1,\ldots,y_{n-1}) \mid x_i,y_i\in\mathbb C\ (i\in [n-1]) \}.\end{aligned}$$ The maximal torus $T\subseteq {SL_{n}}$ has the following linear action on $\mathbb C^{2(n-1)}$: $$\begin{aligned} \label{eq: def of T-action on C2r} \begin{split} &t\cdot (x_1,\ldots,x_{n-1};y_1,\ldots,y_{n-1}) \\ &\hspace{20pt}\coloneqq\big(\varpi_1(t)x_1,\ldots,\varpi_{n-1}(t)x_{n-1}; \alpha_1(t)y_1,\ldots,\alpha_{n-1}(t)y_{n-1}\big) \end{split}\end{aligned}$$ for $t\in T$ and $(x_1,\ldots,x_{n-1};y_1,\ldots,y_{n-1})\in \mathbb C^{2(n-1)}$. Namely, $T$ acts on the first $n-1$ components by the fundamental weights and on the last $n-1$ components by the simple roots. We define a subset $E\subset \mathbb C^{2(n-1)}$ by $$\begin{aligned} \label{eq: def of exceptional locus} E\coloneqq \bigcup_{i\in [n-1]} \{ (x_1,\ldots,x_{n-1}; y_1,\ldots,y_{n-1})\in\mathbb C^{2(n-1)} \mid x_i=y_i=0 \}.\end{aligned}$$ Then its complement is given by $$\begin{aligned} \label{eq: def of C-E} \mathbb C^{2(n-1)}-E= \{ (x_1,\ldots,x_{n-1};y_1,\ldots,y_{n-1}) \mid (x_i,y_i)\ne(0,0) \ (i\in [n-1]) \}.\end{aligned}$$ It is obvious that the $T$-action on $\mathbb C^{2(n-1)}$ defined in [\[eq: def of T-action on C2r\]](#eq: def of T-action on C2r){reference-type="eqref" reference="eq: def of T-action on C2r"} preserves this subset $\mathbb C^{2(n-1)}-E$. We now consider the quotient $$\begin{aligned} (\mathbb C^{2(n-1)}-E)/T\end{aligned}$$ by the action of the maximal torus $T\subseteq {SL_{n}}$ given in [\[eq: def of T-action on C2r\]](#eq: def of T-action on C2r){reference-type="eqref" reference="eq: def of T-action on C2r"}. The quotient space $(\mathbb C^{2(n-1)}-E)/T$ admits an action of a complex torus defined as follows. 
The $2(n-1)$-dimensional torus $(\mathbb C^{\times})^{2(n-1)}$ acts on $\mathbb C^{2(n-1)}-E$ by the coordinate-wise multiplication, and the above $T$-action on $\mathbb C^{2(n-1)}-E$ is obtained through the homomorphism $$\begin{aligned} \label{eq: injection from T} T \rightarrow (\mathbb C^{\times})^{2(n-1)} \quad ; \quad t \mapsto \big(\varpi_1(t),\ldots,\varpi_{n-1}(t); \alpha_1(t),\ldots,\alpha_{n-1}(t)\big).\end{aligned}$$ The injectivity of this homomorphism implies that the quotient torus $(\mathbb C^{\times})^{2(n-1)}/T$ is defined, and an action of the torus $(\mathbb C^{\times})^{2(n-1)}/T$ on $(\mathbb C^{2(n-1)}-E)/T$ is naturally induced[^1]. In [@ab-ze23], the authors showed that the quotient space $(\mathbb C^{2(n-1)}-E)/T$ with this torus action is a simplicial projective toric variety of complex dimension $n-1$. In fact, we constructed a simplicial projective fan $\Sigma$ in $\mathbb R^{n-1}$ such that the quotient construction of the associated toric orbifold $X(\Sigma)$ agrees with the quotient map $$\mathbb C^{2(n-1)}-E\rightarrow (\mathbb C^{2(n-1)}-E)/T.$$ We review the construction of this fan $\Sigma$ in the next subsection.

## The fan for $(\mathbb C^{2(n-1)}-E)/T$ {#subsec: Cartan toric orbifolds from fan}

In this subsection, let us review how we see that the quotient $(\mathbb C^{2(n-1)}-E)/T$ with the action of $(\mathbb C^{\times})^{2(n-1)}/T$ is the toric orbifold $X(\Sigma)$ associated with a simplicial projective fan $\Sigma$. Details can be found in [@ab-ze23 Sect. 3 and 4]. We begin with defining this fan $\Sigma$. For $i\in[n-1]$, we set $$\begin{aligned} -\alpha^{\vee}_i \coloneqq \bm{e}_{i-1}-2\bm{e}_i+\bm{e}_{i+1} \in \mathbb R^{n-1}\end{aligned}$$ with the convention $\bm{e}_0=\bm{e}_n=0$, where each $\bm{e}_i$ is the $i$-th standard basis vector in $\mathbb R^{n-1}$. For $K,L\subseteq [n-1]$ such that $K\cap L=\emptyset$, we set $$\begin{aligned} \sigma_{K,L} \coloneqq \text{cone}(\{ -\alpha^{\vee}_i \mid i\in K \}\cup \{ \bm{e}_{i} \mid i \in L \}), \end{aligned}$$ where we take $\sigma_{\emptyset,\emptyset}=\{\bm{0}\}$ as the convention. The dimension of this cone is given by $$\begin{aligned} \label{eq: dim of cone} \dim_{\mathbb R} \sigma_{K,L} =|K|+|L|\end{aligned}$$ ([@ab-ze23 Lemma 3.3]) which implies that $\sigma_{K,L}$ is a simplicial cone. We define $$\begin{aligned} \label{eq: def of fan 1} \Sigma\coloneqq \{ \sigma_{K,L} \subseteq \mathbb R^{n-1} \mid K,L\subseteq [n-1], \ K\cap L=\emptyset \}.\end{aligned}$$ In [@ab-ze23], it is proved that $\Sigma$ with the lattice $\mathbb Z^{n-1}=\oplus_{i=1}^{n-1}\mathbb Z\bm{e}_i$ is a simplicial projective fan in $\mathbb R^{n-1}$ ([@ab-ze23 Corollary 3.6 and Proposition 4.4]). We denote by $X(\Sigma)$ the simplicial projective toric variety associated to the fan $\Sigma$. Cox's quotient construction of $X(\Sigma)$ is given as follows. Let $\Sigma(1)$ be the set of rays (i.e., 1-dimensional cones) of $\Sigma$. Namely, $$\begin{aligned} \Sigma(1) = \{\text{cone}(-\alpha^{\vee}_i) \mid i\in [n-1]\} \cup\{\text{cone}(\bm{e}_i) \mid i\in [n-1]\}.\end{aligned}$$ A collection of rays in $\Sigma(1)$ is called a primitive collection when it is a minimal collection which does not span a cone in $\Sigma$. Recalling the definition of cones $\sigma_{K,L}$ in $\Sigma$, it is clear that each primitive collection is given by $\{\text{cone}(-\alpha^{\vee}_i), \text{cone}(\bm{e}_i)\}$ for each $i\in [n-1]$.
Therefore, the exceptional set $E$ in the affine space $\mathbb C^{\Sigma(1)}=\mathbb C^{2(n-1)}$ determined by these primitive collections agrees with the one given in [\[eq: def of exceptional locus\]](#eq: def of exceptional locus){reference-type="eqref" reference="eq: def of exceptional locus"}: $$\begin{aligned} E= \bigcup_{i\in [n-1]} \{ (x_1,\ldots,x_{n-1}; y_1,\ldots,y_{n-1})\in\mathbb C^{2(n-1)} \mid x_i=y_i=0 \}\end{aligned}$$ (see [@co-li-sc Proposition 5.1.6]). Here, we regard the first $n-1$ components of $\mathbb C^{2(n-1)}$ to correspond to the rays $\text{cone}(-\alpha^{\vee}_i)$ in order, and we regard the last $n-1$ components of $\mathbb C^{2(n-1)}$ to correspond to the rays $\text{cone}(\bm{e}_i)$ in order. Now we consider the surjective homomorphism $$\begin{aligned} \rho \colon (\mathbb C^{\times})^{2(n-1)}\rightarrow (\mathbb C^{\times})^{n-1}\end{aligned}$$ determined by the ray vectors $-\alpha^{\vee}_i$ and $\bm{e}_i$ as follows: $$\begin{aligned} \rho(u_1,\ldots,u_{n-1};v_1,\ldots,v_{n-1}) = ( u^{-\alpha^{\vee}_1} v^{\bm{e}_1}, \ldots, u^{-\alpha^{\vee}_{n-1}} v^{\bm{e}_{n-1}})\end{aligned}$$ for $(u_1,\ldots,u_{n-1};v_1,\ldots,v_{n-1})\in (\mathbb C^{\times})^{2(n-1)}$, where $u^{-\alpha^{\vee}_i}\coloneqq u_{i-1}u_i^{-2}u_{i+1}$ and $v^{\bm{e}_i}\coloneqq v_i$ for $1\le i\le n-1$ with the convention $u_0=u_n=1$. To describe the kernel of $\rho$, we recall the injective homomorphism $T \rightarrow (\mathbb C^{\times})^{2(n-1)}$ given in [\[eq: injection from T\]](#eq: injection from T){reference-type="eqref" reference="eq: injection from T"}, where $T\subseteq {SL_{n}}$ is the maximal torus. By a straightforward calculation, one can verify that $\rho$ fits in the exact sequence $$\begin{aligned} \label{eq: exact sequence of tori} 1\rightarrow T \rightarrow (\mathbb C^{\times})^{2(n-1)} \stackrel{\rho}{\rightarrow} (\mathbb C^{\times})^{n-1} \rightarrow 1,\end{aligned}$$ where the second map is the one defined in [\[eq: injection from T\]](#eq: injection from T){reference-type="eqref" reference="eq: injection from T"}. The homomorphism $\rho$ in this sequence extends to a morphism $$\begin{aligned} \label{eq: geometric quotient 1} \mathbb C^{2(n-1)}-E\rightarrow X(\Sigma)\end{aligned}$$ which is equivariant with respect to $\rho$. Since the fan $\Sigma$ is simplicial ([@ab-ze23 Corollary 3.6]), we can describe the toric variety $X(\Sigma)$ by Cox's quotient construction ([@co-li-sc Theorem 5.1.11]) which is known to work with possibly non-primitive ray vectors[^2] by [@bo-ch-sm05 Proposition 3.7] (cf. [@ab-ze23 Sect. 3.3]). It claims that the morphism [\[eq: geometric quotient 1\]](#eq: geometric quotient 1){reference-type="eqref" reference="eq: geometric quotient 1"} is a geometric quotient for the $T$-action on $\mathbb C^{2(n-1)}-E$. This implies that the morphism [\[eq: geometric quotient 1\]](#eq: geometric quotient 1){reference-type="eqref" reference="eq: geometric quotient 1"} induces a bijection $$\begin{aligned} \label{eq: geometric quotient 2} (\mathbb C^{2(n-1)}-E)/T \rightarrow X(\Sigma)\end{aligned}$$ so that we may regard the quotient $(\mathbb C^{2(n-1)}-E)/T$ as an algebraic variety which is isomorphic to $X(\Sigma)$ under the map [\[eq: geometric quotient 2\]](#eq: geometric quotient 2){reference-type="eqref" reference="eq: geometric quotient 2"}.
By construction, this isomorphism is equivariant with respect to the group isomorphism of tori $(\mathbb C^{\times})^{2(n-1)}/T \stackrel{\cong}{\rightarrow} (\mathbb C^{\times})^{n-1}$ induced by [\[eq: exact sequence of tori\]](#eq: exact sequence of tori){reference-type="eqref" reference="eq: exact sequence of tori"}. Recalling that $\mathbb C^{2(n-1)}-E$ was given in [\[eq: def of C-E\]](#eq: def of C-E){reference-type="eqref" reference="eq: def of C-E"}, we obtain $$\begin{aligned} X(\Sigma) \cong \{[x_1,\ldots,x_{n-1};y_1,\ldots,y_{n-1}] \mid x_i, y_i\in\mathbb C, (x_i,y_i)\ne(0,0) \ (i\in [n-1])\}.\end{aligned}$$ In the rest of this paper, we identify these two varieties, and we take the action of $(\mathbb C^{\times})^{2(n-1)}/T$ as the canonical torus action on $X(\Sigma)$. As is well-known, the cones in the fan $\Sigma$ correspond to the torus orbits in $X(\Sigma)$, which gives us the orbit decomposition of $X(\Sigma)$. In later sections, we will study a Richardson type decomposition of the Peterson variety (see [\[eq: def of richardson strata\]](#eq: def of richardson strata){reference-type="eqref" reference="eq: def of richardson strata"}). To compare these decompositions, it is more convenient to consider the cones $\sigma_{K,[n-1]-J}$ (rather than $\sigma_{K,J}$). Here, we note that the cone $\sigma_{K,[n-1]-J}$ is defined for $K\subseteq J\subseteq [n-1]$ since the condition $K\subseteq J$ is equivalent to $K\cap([n-1]-J)=\emptyset$. For this reason, let us use the following notation: $$\begin{aligned} \label{eq: def of cone 2} \tau_{K,J} \coloneqq \sigma_{K,[n-1]-J} =\text{cone}(\{ -\alpha^{\vee}_i \mid i\in K \}\cup \{ \bm{e}_{i} \mid i \notin J \})\end{aligned}$$ for $K\subseteq J\subseteq [n-1]$. Since $\sigma_{K,[n-1]-J}$ is a cone of dimension $|K|+(n-1)-|J|$ by [\[eq: dim of cone\]](#eq: dim of cone){reference-type="eqref" reference="eq: dim of cone"}, we obtain $$\begin{aligned} \mathop{\mathrm{codim}}_{\mathbb R}\tau_{K,J} = |J| - |K|.\end{aligned}$$ By the definition [\[eq: def of fan 1\]](#eq: def of fan 1){reference-type="eqref" reference="eq: def of fan 1"} of $\Sigma$, it can be expressed as $$\begin{aligned} \label{eq: def of fan 2} \Sigma= \{ \tau_{K,J} \subseteq \mathbb R^{n-1} \mid K\subseteq J\subseteq [n-1]\}.\end{aligned}$$ **Example 1**. *When $n=3$, $\Sigma$ is a fan in $\mathbb R^2$, which we depict in Figure [\[pic: fan2 and fan3\]](#pic: fan2 and fan3){reference-type="ref" reference="pic: fan2 and fan3"}.*

*(Figure: the picture shows the nine cones $\tau_{\emptyset,\emptyset}$, $\tau_{\{1\},\{1\}}$, $\tau_{\{2\},\{2\}}$, $\tau_{\{1,2\},\{1,2\}}$, $\tau_{\emptyset,\{1\}}$, $\tau_{\emptyset,\{2\}}$, $\tau_{\{1\},\{1,2\}}$, $\tau_{\{2\},\{1,2\}}$, and $\tau_{\emptyset,\{1,2\}}$ of the fan $\Sigma$ for $n=3$.)*

## Torus orbit decomposition of $X(\Sigma)$ {#subsec: Orbit decomposition}

Recall from the last subsection that the toric orbifold $X(\Sigma)=(\mathbb C^{2(n-1)}-E)/T$ is the quotient of the $T$-action on $\mathbb C^{2(n-1)}-E$ given in [\[eq: def of T-action on C2r\]](#eq: def of T-action on C2r){reference-type="eqref" reference="eq: def of T-action on C2r"}.
For simplicity, we write elements of $\mathbb{C}^{2(n-1)}$ by $$\begin{aligned} (x;y) = (x_1,\dots, x_{n-1}; y_1,\dots, y_{n-1}).\end{aligned}$$ Having the definition [\[eq: def of cone 2\]](#eq: def of cone 2){reference-type="eqref" reference="eq: def of cone 2"} of the cone $\tau_{K,J}$ in mind, we consider the following subsets of $X(\Sigma)$. For $K,J\subseteq [n-1]$, we define a subset $X(\Sigma)_{K,J}\subseteq X(\Sigma)$ by $$\label{eq: definition of X sigma IKJ} X(\Sigma)_{K,J}\coloneqq\left\{[x;y]\in X(\Sigma) \ \left| \ \begin{cases} x_i=0\quad \text{if}\quad i\in K\\ x_i\neq 0\quad \text{if}\quad i\notin K \end{cases} \hspace{-10pt},\hspace{5pt} \begin{cases} y_i=0\quad \text{if}\quad i\notin J\\ y_i\neq 0\quad \text{if}\quad i\in J \end{cases}\right\} \right. .\vspace{5pt}$$ Namely, $K$ and $[n-1]-J$ specify the coordinates having $0$ in their entries. This subset is well-defined since the action of $T$ on $\mathbb C^{2(n-1)}-E$ is given by a coordinate-wise multiplication of non-zero complex numbers (see [\[eq: def of T-action on C2r\]](#eq: def of T-action on C2r){reference-type="eqref" reference="eq: def of T-action on C2r"}). For an element $[x;y]\in X(\Sigma)=(\mathbb C^{2(n-1)}-E)/T$, it is obvious that we have $[x;y]\in X(\Sigma)_{K,J}$ for some $K,J\subseteq [n-1]$. Thus, we obtain that $$\begin{aligned} X(\Sigma) = \bigcup_{K,J\subseteq [n-1]} X(\Sigma)_{K,J} .\end{aligned}$$ **Lemma 1**. *For $K,J\subseteq [n-1]$, we have $X(\Sigma)_{K,J}=\emptyset$ unless $K\subseteq J$.* *Proof.* Suppose that $X(\Sigma)_{K,J}\ne\emptyset$, and we prove that $K\subseteq J$ holds. This assumption means that there exists an element $[x;y]\in X(\Sigma)_{K,J}$. Since $(x;y)\in \mathbb C^{2(n-1)}-E$, it satisfies the following property: if $x_i=0$ then $y_i\ne0$ (see [\[eq: def of C-E\]](#eq: def of C-E){reference-type="eqref" reference="eq: def of C-E"}). Recalling the definition [\[eq: definition of X sigma IKJ\]](#eq: definition of X sigma IKJ){reference-type="eqref" reference="eq: definition of X sigma IKJ"} of $X(\Sigma)_{K,J}$, this means that $K\subseteq J$, as desired. ◻ Recall that $X(\Sigma)=(\mathbb C^{2(n-1)}-E)/T$ has the canonical action of the torus $(\mathbb C^{\times})^{2(n-1)}/T$ given by the coordinate-wise multiplication. One can easily see that a non-empty subset of the form [\[eq: definition of X sigma IKJ\]](#eq: definition of X sigma IKJ){reference-type="eqref" reference="eq: definition of X sigma IKJ"} is an orbit of this torus action on $X(\Sigma)$. More specifically, by the quotient construction of simplicial toric varieties (see [@co-li-sc Section 5.1]), each $X(\Sigma)_{K,J}$ is the torus orbit in $X(\Sigma)$ corresponding to the cone $\tau_{K,J}=\text{cone}(\{ -\alpha^{\vee}_i \mid i\in K \}\cup \{ \bm{e}_{i} \mid i \notin J \})$ in $\Sigma$ under the orbit-cone correspondence. In particular, we have $$\begin{aligned} \dim_{\mathbb C}X(\Sigma)_{K,J}=\mathop{\mathrm{codim}}_{\mathbb R} \tau_{K,J} = |J|-|K|\end{aligned}$$ for $K\subseteq J\subseteq[n-1]$. In particular, we have $\mathop{\mathrm{codim}}_{\mathbb C}X(\Sigma)_{K,J}=|K|+(n-1)-|J|$ which agrees with the codimension seen from the definition [\[eq: definition of X sigma IKJ\]](#eq: definition of X sigma IKJ){reference-type="eqref" reference="eq: definition of X sigma IKJ"}. **Corollary 2**. 
*The orbit decomposition of $X(\Sigma)=(\mathbb C^{2(n-1)}-E)/T$ with respect to the canonical torus action of $(\mathbb C^{\times})^{2(n-1)}/T$ is given by $$\begin{aligned} X(\Sigma) = \bigsqcup_{K\subseteq J \subseteq [n-1]} X(\Sigma)_{K,J} ,\end{aligned}$$ where $X(\Sigma)_{K,J}$ is the orbit corresponding to the cone $\tau_{K,J}\in\Sigma$.* We end this section by recording the following claim. **Lemma 3**. *For $K\subseteq J\subseteq [n-1]$, we have $$X(\Sigma)_{K,J}=\left\{[x;y]\in X(\Sigma) \ \left| \ \begin{cases} x_i=0\quad \text{if}\quad i\in K\\ x_i\ne 0\quad \text{if}\quad i\notin K \end{cases} \hspace{-10pt},\hspace{5pt} \begin{cases} y_i=0\quad \text{if}\quad i\notin J\\ y_i=1\quad \text{if}\quad i\in J \end{cases}\right\} \right. .$$* *Proof.* Let us denote the right hand side by $X'_{K,J}$, and we show that $X(\Sigma)_{K,J}=X'_{K,J}$. By definition, we have $X'_{K,J}\subseteq X(\Sigma)_{K,J}$. Since we have $[x;y]=[t\cdot(x;y)]$ for all $t\in T$, the opposite inclusion follows from the surjectivity of the homomorphism $$\begin{aligned} \label{eq: alpha surj} T\rightarrow (\mathbb C^{\times})^{n-1} \quad ; \quad t\mapsto (\alpha_1(t),\ldots,\alpha_{n-1}(t)).\end{aligned}$$ ◻ **Example 2**. * Let $n=3$, then the orbits in $X(\Sigma)$ of dimension $0$ (i.e., the fixed points) are $$X(\Sigma)_{\emptyset, \emptyset}, \ X(\Sigma)_{\{1\}, \{1\}}, \ X(\Sigma)_{\{2\}, \{2\}}, \ X(\Sigma)_{\{1,2\}, \{1,2\}}.$$ The orbits in $X(\Sigma)$ of dimension $1$ are $$X(\Sigma)_{\emptyset, \{1\}}, \ X(\Sigma)_{\emptyset, \{2\}}, \ X(\Sigma)_{\{1\}, \{1,2\}}, \ X(\Sigma)_{\{2\}, \{1,2\}}.$$ Finally, the unique free orbit in $X(\Sigma)$ is $X(\Sigma)_{\emptyset, \{1,2\}}$. Compare these with the cones in Figure [\[pic: fan2 and fan3\]](#pic: fan2 and fan3){reference-type="ref" reference="pic: fan2 and fan3"}. * # Nonnegative part of $X(\Sigma)$ In this section, we study the nonnegative part of the toric orbifold $X(\Sigma)$. As is well-known, the torus orbit decomposition of a projective toric variety gives rise to a cell decomposition of the nonnegative part of $X(\Sigma)$. We describe this explicitly. ## Definition of $X(\Sigma)_{\ge 0}$ For an arbitrary (normal) toric variety $X$ over $\mathbb C$, the nonnegative part $X_{\ge 0}$ is defined as the set of "$\mathbb R_{\ge 0}$-valued points\" of $X$. More details on $X_{\ge 0}$ can be found in [@co-li-sc Sect. 12.2]. Recall that we have $X(\Sigma)=(\mathbb C^{2(n-1)}-E)/T$. By [@co-li-sc Proposition 12.2.1], we have $$\begin{aligned} X(\Sigma)_{\ge0} = \{[x;y]\in X(\Sigma) \mid x_i, y_i\ge0\ \text{for $i\in [n-1]$}\}.\end{aligned}$$ Namely, an element of $X(\Sigma)$ belongs to $X(\Sigma)_{\ge0}$ if and only if it can be represented by some $(x;y)\in\mathbb C^{2(n-1)}-E$ with nonnegative entries. 
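Before turning to the decomposition of $X(\Sigma)_{\ge 0}$, we note that the indexing set appearing in Corollary 2 can be enumerated directly. The short Python sketch below is our own illustration (not part of the arguments of this paper): it lists the pairs $K\subseteq J\subseteq [n-1]$ together with the complex dimension $|J|-|K|$ of the corresponding orbit $X(\Sigma)_{K,J}$, and for $n=3$ it reproduces the counts in Example 2.

```python
from itertools import combinations

def subsets(s):
    """All subsets of the iterable s, as tuples."""
    s = tuple(s)
    return [c for r in range(len(s) + 1) for c in combinations(s, r)]

def orbits_by_dimension(n):
    """Group the pairs K <= J <= [n-1] indexing the torus orbits X(Sigma)_{K,J}
    by the complex dimension |J| - |K| of the orbit."""
    result = {}
    for J in subsets(range(1, n)):   # J runs over subsets of [n-1]
        for K in subsets(J):         # K runs over subsets of J
            result.setdefault(len(J) - len(K), []).append((K, J))
    return result

for dim, pairs in sorted(orbits_by_dimension(3).items()):
    print(f"complex dimension {dim}: {len(pairs)} orbit(s)", pairs)
# for n = 3 this gives 4 fixed points, 4 one-dimensional orbits,
# and the single free orbit indexed by (K, J) = (emptyset, {1, 2}),
# in agreement with Example 2
```

The same index set governs the decomposition of $X(\Sigma)_{\ge0}$ into the subsets $X(\Sigma)_{K,J;>0}$ recalled in the next subsection.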
## A decomposition of $X(\Sigma)_{\ge 0}$ Recall from Corollary [Corollary 2](#cor: orbit decomp of X){reference-type="ref" reference="cor: orbit decomp of X"} that the torus orbit decomposition of $X(\Sigma)$ is given by $$\begin{aligned} \label{eq: X nonnegative decomp} X(\Sigma) = \bigsqcup_{K\subseteq J \subseteq [n-1]} X(\Sigma)_{K,J} .\end{aligned}$$ To obtain a decomposition of $X(\Sigma)_{\ge0}$, we set $$\begin{aligned} X(\Sigma)_{{K,J};> 0} \coloneqq X(\Sigma)_{K,J}\cap X(\Sigma)_{\ge 0} \quad \text{for $K\subseteq J \subseteq [n-1]$.}\end{aligned}$$ Then, we obtain from [\[eq: X nonnegative decomp\]](#eq: X nonnegative decomp){reference-type="eqref" reference="eq: X nonnegative decomp"} that $$\begin{aligned} \label{eq: decomp of X nonnegative KJ} X(\Sigma)_{\ge0} = \bigsqcup_{K\subseteq J \subseteq [n-1]} X(\Sigma)_{{K,J};> 0} .\end{aligned}$$ We define $T_{>0}\subseteq T (\subseteq {SL_{n}})$ by $$T_{>0} \coloneqq \{ \text{diag}(t_1,\ldots,t_n)\in {SL_{n}} \mid \text{$t_i>0$ for $i\in [n-1]$}\}$$ (which agrees with Lusztig's definition of $T_{>0}$ given in Section [6](#sec: nonnegative part of Y){reference-type="ref" reference="sec: nonnegative part of Y"}). It is straightforward to see that the surjective homomorphism $T\rightarrow (\mathbb C^{\times})^{n-1}$ given in [\[eq: alpha surj\]](#eq: alpha surj){reference-type="eqref" reference="eq: alpha surj"} restricts to $$\begin{aligned} \label{eq: varpi iso 3} T_{>0}\rightarrow (\mathbb R_{>0})^{n-1} \quad ; \quad t\mapsto (\alpha_1(t),\ldots,\alpha_{n-1}(t)).\end{aligned}$$ **Lemma 4**. *The map [\[eq: varpi iso 3\]](#eq: varpi iso 3){reference-type="eqref" reference="eq: varpi iso 3"} is a bijection.* *Proof.* To begin with, we consider the following map $$\begin{aligned} \label{eq: varpi iso 5} T_{>0}\rightarrow (\mathbb R_{>0})^{n-1} \quad ; \quad t\mapsto (\varpi_1(t),\ldots,\varpi_{n-1}(t)).\end{aligned}$$ Since we have $\varpi_i(t)=t_1t_2\cdots t_i$ for $i\in[n-1]$, it is straightforward to see that this map is a bijection. Note that each component of [\[eq: varpi iso 3\]](#eq: varpi iso 3){reference-type="eqref" reference="eq: varpi iso 3"} can be written as $$\begin{aligned} \alpha_i(t)=\varpi_{i-1}(t)^{-1}\varpi_i(t)^2\varpi_{i+1}(t)^{-1}\end{aligned}$$ with the convention $\varpi_0(t)=\varpi_n(t)=1$. This leads us to consider the functions $\phi_i(u)\coloneqq u_{i-1}^{-1}u_i^2u_{i+1}^{-1}$ for $i\in[n-1]$ and $u\in (\mathbb R_{>0})^{n-1}$ with the convention $u_0=u_n=1$, and then the homomorphism [\[eq: varpi iso 3\]](#eq: varpi iso 3){reference-type="eqref" reference="eq: varpi iso 3"} is the composition of [\[eq: varpi iso 5\]](#eq: varpi iso 5){reference-type="eqref" reference="eq: varpi iso 5"} and the map $$\begin{aligned} \label{eq: varpi iso 6} (\mathbb R_{>0})^{n-1} \rightarrow (\mathbb R_{>0})^{n-1} \quad ; \quad u \mapsto (\phi_1(u),\ldots,\phi_{n-1}(u)).\end{aligned}$$ Since [\[eq: varpi iso 5\]](#eq: varpi iso 5){reference-type="eqref" reference="eq: varpi iso 5"} is bijective, it now suffices to show that the map [\[eq: varpi iso 6\]](#eq: varpi iso 6){reference-type="eqref" reference="eq: varpi iso 6"} is a bijection. Under the identification $\mathbb R_{>0}\cong\mathbb R$ given by the logarithm, [\[eq: varpi iso 6\]](#eq: varpi iso 6){reference-type="eqref" reference="eq: varpi iso 6"} is identified with the linear map $$\begin{aligned} \mathbb R^{n-1} \rightarrow \mathbb R^{n-1} \quad ; \quad \bm{x} \mapsto C\bm{x},\end{aligned}$$ where $C$ is the Cartan matrix of type A$_{n-1}$. 
Since we have $\det C\ne0$, this is a bijection. Hence, the claim follows. ◻ **Proposition 5**. *For $K\subseteq J\subseteq [n-1]$, we have $$X(\Sigma)_{{K,J};> 0}=\left\{[x;y]\in X(\Sigma) \ \left| \ \begin{cases} x_i=0\quad \text{if}\quad i\in K\\ x_i>0\quad \text{if}\quad i\notin K \end{cases} \hspace{-10pt},\hspace{5pt} \begin{cases} y_i=0\quad \text{if}\quad i\notin J\\ y_i=1\quad \text{if}\quad i\in J \end{cases}\right\} \right. .$$* *Proof.* Denote the right-hand side of the desired equality by $X'_{{K,J};> 0}$, and let us show that $X(\Sigma)_{{K,J};> 0} =X'_{{K,J};> 0}$. We have $$X'_{{K,J};> 0} \subseteq X(\Sigma)_{K,J}\cap X(\Sigma)_{\ge 0}=X(\Sigma)_{{K,J};> 0}$$ by the definitions of $X(\Sigma)_{K,J}$ and $X(\Sigma)_{{K,J};> 0}$. Hence, it suffices to show that $X(\Sigma)_{{K,J};> 0} \subseteq X'_{{K,J};> 0}$. To this end, take an arbitrary element $$\label{eq: xy in intersection} [x;y]\in X(\Sigma)_{{K,J};> 0}=X(\Sigma)_{K,J}\cap X(\Sigma)_{\ge 0}.$$ Since $[x;y]\in X(\Sigma)_{K,J}$, Lemma [Lemma 3](#lem: preparation for nonnegative strata){reference-type="ref" reference="lem: preparation for nonnegative strata"} implies that we may assume that $x$ and $y$ satisfy $$\label{eq: nonnegative stratum 10} \begin{cases} x_i=0\quad \text{if}\quad i\in K\\ x_i\neq 0\quad \text{if}\quad i\notin K \end{cases} \quad \text{and} \quad \begin{cases} y_i=0\quad \text{if}\quad i\notin J\\ y_i=1\quad \text{if}\quad i\in J . \end{cases}$$ By [\[eq: xy in intersection\]](#eq: xy in intersection){reference-type="eqref" reference="eq: xy in intersection"}, we also have $[x;y]\in X(\Sigma)_{\ge 0}$ which means that there exists $(x';y')\in\mathbb C^{2(n-1)}-E$ such that - $x'_i,y'_i\ge0$ for $i\in [n-1]$, - $[x;y] = [x';y']$. The condition (2) implies that we have $x_i\ne0$ if and only if $x'_i \ne0$ (similarly, $y_i\ne0$ if and only if $y'_i \ne0$) since the action of $T$ on $\mathbb C^{2(n-1)}-E$ (given in [\[eq: def of T-action on C2r\]](#eq: def of T-action on C2r){reference-type="eqref" reference="eq: def of T-action on C2r"}) preserves whether a given component is zero or not. Hence, these two conditions and [\[eq: nonnegative stratum 10\]](#eq: nonnegative stratum 10){reference-type="eqref" reference="eq: nonnegative stratum 10"} imply that we have $$\begin{cases} x'_i=0\quad \text{if}\quad i\in K\\ x'_i>0\quad \text{if}\quad i\notin K \end{cases} \quad \text{and} \quad \begin{cases} y'_i=0\quad \text{if}\quad i\notin J\\ y'_i>0\quad \text{if}\quad i\in J . \end{cases}$$ Comparing this and [\[eq: nonnegative stratum 10\]](#eq: nonnegative stratum 10){reference-type="eqref" reference="eq: nonnegative stratum 10"}, it follows from Lemma [Lemma 4](#eq: alpha surjectivity){reference-type="ref" reference="eq: alpha surjectivity"} that there exists $t\in T_{>0}$ such that $$\begin{aligned} \big( \alpha_1(t)y'_1,\ldots,\alpha_{n-1}(t)y'_{n-1} \big) = \big( y_1,\ldots,y_{n-1} \big).\end{aligned}$$ By construction, we have $$\begin{aligned} [x';y']=[t\cdot (x';y')]=[x'';y], \end{aligned}$$ where we set $x''\coloneqq \big( \varpi_1(t)x'_1,\ldots,\varpi_{n-1}(t)x'_{n-1} \big)$. Since we have $t\in T_{>0}$ and $x'_i\ge0$ for all $i\in [n-1]$, it follows that $$\begin{cases} x''_i=0\quad \text{if}\quad i\in K\\ x''_i>0\quad \text{if}\quad i\notin K . \end{cases}$$ This means that we have $[x'';y]\in X'_{{K,J};> 0}$ since $y$ satisfies the latter condition in [\[eq: nonnegative stratum 10\]](#eq: nonnegative stratum 10){reference-type="eqref" reference="eq: nonnegative stratum 10"}.
We have now obtained that $[x;y]=[x';y']=[x'';y]\in X'_{{K,J};> 0}$, as desired. ◻ By this proposition, the largest stratum $X(\Sigma)_{{\emptyset,[n-1]};> 0}$ is given by $$X(\Sigma)_{{\emptyset,[n-1]};> 0} = \{[x_1,\ldots,x_{n-1};1,\ldots,1]\in X(\Sigma) \mid x_i>0\ (i\in [n-1]) \} .$$ For later use, we prepare the following lemma. **Lemma 6**. *We have $$\overline{X(\Sigma)_{{\emptyset,[n-1]};> 0}}= X(\Sigma)_{\ge0},$$ where the closure is taken in $X(\Sigma)_{\ge0}$.* *Proof.* It suffices to show that the inclusion $\overline{X(\Sigma)_{{\emptyset,[n-1]};> 0}}\supseteq X(\Sigma)_{\ge0}$ holds. For an element $[x;y]\in X(\Sigma)_{\ge0}$, the decomposition [\[eq: decomp of X nonnegative KJ\]](#eq: decomp of X nonnegative KJ){reference-type="eqref" reference="eq: decomp of X nonnegative KJ"} of $X(\Sigma)_{\ge0}$ means that we have $[x;y]\in X(\Sigma)_{{K,J};> 0}$ for some $K\subseteq J\subseteq [n-1]$. Thus, we may assume by Proposition [Proposition 5](#prop: nonnengative stratum){reference-type="ref" reference="prop: nonnengative stratum"} that $$\begin{cases} x_i=0\quad \text{if}\quad i\in K\\ x_i>0\quad \text{if}\quad i\notin K \end{cases} \hspace{-10pt},\hspace{5pt} \begin{cases} y_i=0\quad \text{if}\quad i\notin J\\ y_i=1\quad \text{if}\quad i\in J . \end{cases}$$ Using these $x$ and $y$, we consider an element $[x';y']\in X(\Sigma)$ given by $$x'_i = \begin{cases} \varepsilon \quad &(i\in K)\\ x_i &(i\notin K) \end{cases} \quad \text{and} \quad y'_i = \begin{cases} \varepsilon \quad &(i\notin J)\\ y_i &(i\in J), \end{cases}$$ where $\varepsilon$ is a positive real number. By construction, we have $x'_i, y'_i>0$ for all $i\in[n-1]$ which means that the element $[x';y']$ belongs to both $X(\Sigma)_{\emptyset,[n-1]}$ (see [\[eq: definition of X sigma IKJ\]](#eq: definition of X sigma IKJ){reference-type="eqref" reference="eq: definition of X sigma IKJ"}) and $X(\Sigma)_{\ge0}$. That is, we have $$[x';y']\in X(\Sigma)_{\emptyset,[n-1]}\cap X(\Sigma)_{\ge0}=X(\Sigma)_{{\emptyset,[n-1]};> 0}.$$ By taking the limit $\varepsilon\rightarrow 0$, we obtain that $$\lim_{\varepsilon\rightarrow 0} \ [x';y'] = [x;y]$$ which means that the given element $[x;y]$ lies in the closure $\overline{X(\Sigma)_{{\emptyset,[n-1]};> 0}}$. ◻ For a projective toric variety $X'$, it is well-known that the nonnegative part $X'_{\ge0}$ has the structure of a polytope. In fact, there is a lattice polytope $P'$ whose normal fan describes the toric variety $X'$, and the moment map associated to $P'$ provides a homeomorphism between $X'_{\ge0}$ and $P'$ (e.g. [@co-li-sc Sect. 12.2] or [@fu Sect. 4.2]). Under the moment map, the nonnegative part of each torus orbit corresponds to the relative interior of the corresponding face of $P'$. In [@ab-ze23 Proposition 4.4], the authors showed that the toric orbifold $X(\Sigma)$ is projective. In particular, there is a simple lattice polytope which is homeomorphic to $X(\Sigma)_{\ge0}$ in the above sense. In the next subsection, we describe this polytope explicitly. ## The Polytope $P_{n-1}$ {#subsec: polytope} In this subsection, we construct a full-dimensional lattice polytope in $\mathbb R^{n-1}$ whose normal fan is $\Sigma$. This polytope was originally constructed in Horiguchi-Masuda-Shareshian-Song ([@ho-ma-sh-so21]) by cutting the permutohedron. To describe the polytope explicitly, we instead start from a concrete definition. We say that a subset $J\subseteq[n-1]$ is *connected* if it is of the form $$\begin{aligned} J=\{a,a+1,\ldots,b\}\end{aligned}$$ for some $a,b\in[n-1]$.
For a connected subset $J=\{a,a+1,\ldots,b\}\subseteq[n-1]$, we set $$\begin{aligned} v_J \coloneqq \sum_{i=a}^b (i+1-a )(b+1-i) e_i .\end{aligned}$$ Noticing that $|J|=b+1-a$, this can be written in the coordinate expression as $$\begin{aligned} v_J = (0,\ldots,0,p, 2(p-1), 3(p-2),\ldots,(p-1)\cdot 2, p,0,\ldots,0) \qquad (p= |J|), \end{aligned}$$ where the non-zero coordinates start from the $a$-th component and end at the $b$-th component. For example, if $n=8$, then for $J=\{\ \ , 2,3,4,5, \ \ ,\ \ \}\subseteq[7]$ we have $$v_J=(0,4,6,6,4,0,0)$$ since $|J|=4$, and for $J=\{\ \ , \ \ ,3,4,5,6,\ \ \}\subseteq[7]$ we have $$v_J=(0,0,4,6,6,4,0)$$ by the same reason. By definition, we have $$\begin{aligned} \bm{e}_i\circ v_J = (i+1-a )(b+1-i) \quad \text{for $i\in J=\{a,a+1,\ldots,b\}$}.\end{aligned}$$ We note that the same equality holds even when $i=a-1$ and $i=b+1$ since both sides are zero in those cases, where we take the convention $\bm{e}_0=\bm{e}_n=0$. Thus, we obtain that $$\begin{aligned} \label{eq: polytope 20} \bm{e}_i\circ v_J = (i+1-a )(b+1-i) \quad \text{for $i\in\{a-1,a,\ldots,b,b+1\}$}.\end{aligned}$$ For a general subset $J\subseteq[n-1]$, we take the decomposition $J=J_1\sqcup\cdots\sqcup J_m$ into the connected components, and set $$\begin{aligned} v_J \coloneqq v_{J_1}+\cdots+v_{J_m}\in \mathbb R^{n-1}\end{aligned}$$ with the convention $v_{\emptyset}\coloneqq\bm{0}$. For example, if $n=12$ and $J=\{\ \ ,2,3,4,5,\ \ ,\ \ ,8,9,10, \ \ \}$, then $$\begin{aligned} v_J = (0,4,6,6,4,0,0,3,4,3,0) \in\mathbb R^{11} .\end{aligned}$$ **Definition 7**. *We set $P_{n-1}$ as the convex hull of the points $v_J\in \mathbb R^{n-1}$ for $J\subseteq[n-1]$: $$\begin{aligned} P_{n-1} \coloneqq \text{\rm Conv}(v_J \mid J\subseteq [n-1]) \subseteq \mathbb R^{n-1} .\end{aligned}$$* Note that the zero vector $\bm{0}$ and the standard vectors $\bm{e}_i$ ($1\le i\le n-1$) belong to $P_{n-1}$ since we have $v_{\emptyset}=\bm{0}$ and $v_{\{i\}}=\bm{e}_i$ for $1\le i\le n-1$. This means that $P_{n-1}$ contains an $(n-1)$-dimensional simplex, and hence it follows that $P_{n-1}$ is a full-dimensional polytope in $\mathbb R^{n-1}$, that is, $$\begin{aligned} \label{eq: polytope 30} \dim P_{n-1} = n-1.\end{aligned}$$ **Example 3**. *The polytopes $P_2$ and $P_3$ are depicted in Figure [\[pic: P2 and P3\]](#pic: P2 and P3){reference-type="ref" reference="pic: P2 and P3"}. We encourage the reader to find the points $v_{\emptyset}$, $v_{\{2\}}$, $v_{\{1,3\}}$ in the picture of $P_3$.* *[Figure: the polytopes $P_2$ and $P_3$, with labelled vertices $v_{\emptyset}=(0,0)$, $v_{\{1\}}=(1,0)$, $v_{\{2\}}=(0,1)$, $v_{\{1,2\}}=(2,2)$ for $P_2$, and $v_{\{1\}}=(1,0,0)$, $v_{\{3\}}=(0,0,1)$, $v_{\{1,2\}}=(2,2,0)$, $v_{\{2,3\}}=(0,2,2)$, $v_{\{1,2,3\}}=(3,4,3)$ for $P_3$.]* In the rest of this section, we set $$\begin{aligned} -\alpha^{\vee}_i \coloneqq \bm{e}_{i-1}-2\bm{e}_i+\bm{e}_{i+1} \in \mathbb R^{n-1}\end{aligned}$$ for $i\in[n-1]$ with the convention $\bm{e}_0=\bm{e}_n=0$. For $a,b\in\mathbb R^{n-1}$, we denote the standard inner product of $a$ and $b$ by $a\circ b$. **Lemma 8**. *For $1\le i\le n-1$ and $J\subseteq[n-1]$, the following hold.* - *We have $(-\alpha^{\vee}_i) \circ v_J \ge -2$.
Moreover, the equality holds if and only if $i\in J$.* - *We have $\bm{e}_i \circ v_J \ge0$. Moreover, the equality holds if and only if $i\notin J$.* *Proof.* Let us first prove the claim (2). Since $\bm{e}_i \circ v_J$ is the $i$-th component of $v_J$, the claim follows by the definition of $v_J$. We prove the claim (1) in what follows. We first consider the case $i\in J$, and we prove that $(-\alpha^{\vee}_i) \circ v_J=-2$. Take the decomposition $J=J_1\sqcup\cdots \sqcup J_m$ into the connected components. Since $i\in J$, we have $i\in J_{k}$ for some $1\le k\le m$. Writing $$\begin{aligned} J_{k}=\{a,a+1,\ldots,b\}\end{aligned}$$ for some $a,b\in [n-1]$, we have from [\[eq: polytope 20\]](#eq: polytope 20){reference-type="eqref" reference="eq: polytope 20"} that $$\begin{aligned} (-\alpha^{\vee}_i) \circ v_{J_{k}} &= (\bm{e}_{i-1}-2\bm{e}_i+\bm{e}_{i+1})\circ v_{J_{k}} \\ &= (i-a)(b+2-i) -2(i+1-a)(b+1-i) + (i+2-a)(b-i) \\ &= -2\end{aligned}$$ by a direct calculation. Since any two connected components of $J$ are separated by at least one integer not belonging to $J$, the condition $i\in J_{k}$ implies that $i-1, i, i+1 \notin J_{\ell}$ for all $\ell\ne k$. This and the claim (2) imply that $$\begin{aligned} (-\alpha^{\vee}_i)\circ v_{J_{\ell}} = (\bm{e}_{i-1}-2\bm{e}_i+\bm{e}_{i+1})\circ v_{J_{\ell}} = 0 \quad \text{for all $\ell\ne k$}.\end{aligned}$$ Since $v_J = v_{J_1}+\cdots+v_{J_m}$, we obtain $(-\alpha^{\vee}_i) \circ v_J=-2$ in this case. For the case $i\notin J$, we prove that $(-\alpha^{\vee}_i) \circ v_J\ge0$ (in particular, $>-2$), which completes the proof. This assumption means that we have $\bm{e}_i\circ v_J=0$ by the claim (2). Since $-\alpha^{\vee}_i=\bm{e}_{i-1}-2\bm{e}_i+\bm{e}_{i+1}$, this implies that $$\begin{aligned} (-\alpha^{\vee}_i) \circ v_J = \bm{e}_{i-1}\circ v_J+0+\bm{e}_{i+1}\circ v_J \ge0,\end{aligned}$$ as desired. ◻ Motivated by Lemma [Lemma 8](#lem: polytope ineq ei){reference-type="ref" reference="lem: polytope ineq ei"}, we set $$\begin{aligned} &F^{+}_{i} \coloneqq \text{\rm Conv}(v_J \mid i\in J) \subseteq P_{n-1}, \\ &F^{-}_{i} \coloneqq \text{\rm Conv}(v_J \mid i\notin J) \subseteq P_{n-1}\end{aligned}$$ for $1\le i\le n-1$. Lemma [Lemma 8](#lem: polytope ineq ei){reference-type="ref" reference="lem: polytope ineq ei"} implies that these are faces of $P_{n-1}$. For example, in the picture of $P_3$ depicted in Figure [\[pic: P2 and P3\]](#pic: P2 and P3){reference-type="ref" reference="pic: P2 and P3"}, one can see that $F^{+}_{3}$ and $F^{-}_{3}$ are the top facet and the bottom facet of $P_{3}$ (with respect to the third axis), respectively. **Proposition 9**. *For $1\le i\le n-1$, $F^{+}_{i}$ and $F^{-}_{i}$ are facets of $P_{n-1}$.* *Proof.* Since Lemma [Lemma 8](#lem: polytope ineq ei){reference-type="ref" reference="lem: polytope ineq ei"} implies that $F^{+}_{i}$ and $F^{-}_{i}$ are non-empty proper faces of $P_{n-1}$, it suffices to show that their dimensions are equal to $n-2$. To compute the dimension of $F^{-}_{i}$, we observe that $F^{-}_{i}$ contains $\bm{0}$ and $\bm{e}_k$ for all $k\ne i$ since $v_{\emptyset}=\bm{0}$ and $v_{\{k\}}=\bm{e}_k$. Thus, $F^{-}_{i}$ contains an $(n-2)$-dimensional simplex. Since $F^{-}_{i}$ is a proper face of $P_{n-1}$ as we saw above, we conclude that $\dim F^{-}_{i}=n-2$.
To compute the dimension of $F^{+}_{i}$, we consider a projection $\mathbb R^{n-1}\rightarrow \mathbb R^{n-2}$ which maps $(x_1,\ldots,x_{n-1})\in \mathbb R^{n-1}$ to $$\begin{aligned} (x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_{n-1})\in\mathbb R^{n-2}.\end{aligned}$$ In particular, this map sends $\bm{e}_i\in\mathbb R^{n-1}$ to the zero vector $\bm{0}\in\mathbb R^{n-2}$. The image of $F^{+}_{i}$ under this projection is a convex polytope since the map is linear. It contains the zero vector $\bm{0}\in\mathbb R^{n-2}$ which is the image of $v_{\{i\}}=\bm{e}_i\in F^{+}_{i}$. It also contains a nonzero multiple of each standard basis vector of $\mathbb R^{n-2}$ since it is the images of $v_{\{i,k\}}\in F^{+}_{i}$ for some $k\ne i$ which is given by $$\begin{aligned} v_{\{i,k\}} = \begin{cases} 2\bm{e}_i+2\bm{e}_k \quad &\text{if $\{i,k\}$ is connected},\\ \bm{e}_i+\bm{e}_k &\text{otherwise}. \end{cases}\end{aligned}$$ Hence, the image of $F^{+}_{i}$ contains an $(n-2)$-dimensional simplex in $\mathbb R^{n-2}$. Since $F^{+}_{i}$ is a proper face of $P_{n-1}$ as we saw above, this implies that $\dim F^{+}_{i}=n-2$. ◻ Now, the following claim is a corollary of Lemma [Lemma 8](#lem: polytope ineq ei){reference-type="ref" reference="lem: polytope ineq ei"} and Proposition [Proposition 9](#prop: polytope facets){reference-type="ref" reference="prop: polytope facets"}. **Corollary 10**. *For $1\le i\le n-1$, the vectors $-\alpha^{\vee}_i$ and $\bm{e}_i$ are inward normal vectors of the facets $F^{+}_{i}$ and $F^{-}_{i}$, respectively.* We will show later that there is no other facets of $P_{n-1}$ other than $F^{\pm}_{i}$ for $1\le i\le n-1$ (see Proposition [Proposition 13](#prop: polytope ineq -alphai){reference-type="ref" reference="prop: polytope ineq -alphai"} below). Before proving it, let us describe a few properties of these facets $F^{+}_{i}$ and $F^{-}_{i}$. **Lemma 11**. *For $i\in [n-1]$, we have $F^{+}_{i}\cap F^{-}_{i}=\emptyset$.* *Proof.* We suppose that there is an element $x\in F^{+}_{i}\cap F^{-}_{i}$, and we deduce a contradiction. Since $x\in F^{+}_{i}=\text{\rm Conv}(v_J \mid i\in J)$, it can be written as an average of the vectors $v_J$ whose index $J$ contains $i$: $$\begin{aligned} x = \sum_{\substack{J\subseteq[n-1], \\ \text{$J$ contains $i$}}} \lambda_J v_J\end{aligned}$$ with $\lambda_J\ge0$ for all such $J$ and $\sum\lambda_J=1$. For each $v_J$ appearing in this expression, we have $\bm{e}_i\circ v_J>0$ since $i\in J$ (Lemma [Lemma 8](#lem: polytope ineq ei){reference-type="ref" reference="lem: polytope ineq ei"}). Thus, it follows that $$\begin{aligned} \bm{e}_i\circ x > 0.\end{aligned}$$ We also have $x\in F^{-}_{i}=\text{\rm Conv}(v_J \mid i\notin J)$ by the assumption, and an argument similar to that above shows $$\begin{aligned} \bm{e}_i\circ x = 0\end{aligned}$$ since we have $\bm{e}_i\circ v_J=0$ for each $J$ which does not contain $i$ (Lemma [Lemma 8](#lem: polytope ineq ei){reference-type="ref" reference="lem: polytope ineq ei"}). Therefore, we obtain a contradiction, as desired. ◻ **Lemma 12**. *For $J\subseteq [n-1]$ and $i\in [n-1]$, we have* - *$v_J \in F^{+}_{i}$ if and only if $i\in J$,* - *$v_J \in F^{-}_{i}$ if and only if $i\notin J$.* *Proof.* We first prove (1). By definition, we have $v_J \in F^{+}_{i}$ for all $i\in J$. 
Moreover, if $v_J\in F^{+}_{k}$ for some $k\in [n-1]-J$, then it follows that $v_J\in F^{+}_{k}\cap F^{-}_{k}$ since $k\notin J$, which contradicts $F^{+}_{k}\cap F^{-}_{k}=\emptyset$ (Lemma [Lemma 11](#lem: up and bottom do not intersect){reference-type="ref" reference="lem: up and bottom do not intersect"}). The claim (2) follows by an argument similar to this. ◻ For subsets $K,L\subseteq[n-1]$, Lemma [Lemma 11](#lem: up and bottom do not intersect){reference-type="ref" reference="lem: up and bottom do not intersect"} means that the intersection of $F^{+}_{i}$ for $i\in K$ and $F^{-}_{i}$ for $i\in L$ is empty unless $K\cap L=\emptyset$. Writing $J=[n-1]-L$, this condition is equivalent to $K\subseteq J$ which we also saw when we defined the cone $\tau_{K,J}$ in [\[eq: def of cone 2\]](#eq: def of cone 2){reference-type="eqref" reference="eq: def of cone 2"}. This leads us to consider the following faces of $P_{n-1}$. For $K\subseteq J\subseteq [n-1]$, we set $$\begin{aligned} \label{eq: def of FKJ'} F_{K,J} \coloneqq \Bigg( \bigcap_{i\in K}F^{+}_{i} \Bigg) \cap \Bigg( \bigcap_{i\notin J}F^{-}_{i} \Bigg) \subseteq P_{n-1}\end{aligned}$$ with the convention $F_{\emptyset,[n-1]}\coloneqq P_{n-1}$. As we saw above, the condition $K\subseteq J$ ensures that there is no index $i$ appearing in common in [\[eq: def of FKJ\'\]](#eq: def of FKJ'){reference-type="eqref" reference="eq: def of FKJ'"}. Moreover, we note that $F_{K,J}\ne\emptyset$ since we have $v_K,v_J\in F_{K,J}$ because of the condition $K\subseteq J$. **Example 4**. * Let $n=4$. In this case, our polytope is $P_3$ depicted in Figure [\[pic: P2 and P3\]](#pic: P2 and P3){reference-type="ref" reference="pic: P2 and P3"}. We visualize some faces of $P_{3}$. For example, $F_{\{1\},\{1,2\}}=F^{+}_{1}\cap F^{-}_{3}$ is the edge joining two vertices $v_{\{1\}}$ and $v_{\{1,2\}}$. Also, $F_{\{2,3\},\{2,3\}}=F^{+}_{2}\cap F^{+}_{3}\cap F^{-}_{1}=\{v_{\{2,3\}}\}$ is a vertex. The largest face $F_{\emptyset,\{1,2,3\}}$ is the polytope $P_{3}$ itself by definition. * We now prove the following by using the faces $F_{K,J}$. **Proposition 13**. *The facets of $P_{n-1}$ are exactly $F^{\pm}_{i}$ for $1\le i\le n-1$.* *Proof.* Let $\Sigma'$ be the normal fan of the polytope $P_{n-1}$, where we consider the set of integral points $\mathbb Z^{n-1}\subseteq \mathbb R^{n-1}$ as the lattice for the fan $\Sigma$. We show that the rays in $\Sigma'$ are in one-to-one correspondence with the facets $F^{\pm}_{i}$ for $1\le i\le n-1$. For $K\subseteq J\subseteq [n-1]$, we saw that the intersection $F_{K,J}$ defined in [\[eq: def of FKJ\'\]](#eq: def of FKJ'){reference-type="eqref" reference="eq: def of FKJ'"} is a non-empty face of $P_{n-1}$. Let $\tau'_{K,J}\in\Sigma'$ be the cone corresponding to the face $F_{K,J}$.
By [\[eq: def of FKJ\'\]](#eq: def of FKJ'){reference-type="eqref" reference="eq: def of FKJ'"}, we have $$\begin{aligned} F_{K,J} \subseteq F^{+}_{i} \ \ \text{for $i\in K$} \quad \text{and} \quad F_{K,J} \subseteq F^{-}_{i} \ \ \text{for $i\notin J$}.\end{aligned}$$ In the normal fan $\Sigma'$, this means that $$\begin{aligned} \text{Cone}(-\alpha^{\vee}_i) \subseteq \tau'_{K,J} \ \ \text{for $i\in K$} \quad \text{and} \quad \text{Cone}(\bm{e}_{i}) \subseteq \tau'_{K,J} \ \ \text{for $i\notin J$}.\end{aligned}$$ In fact, these form a subset of the rays of $\tau'_{K,J}$ ([@co-li-sc Proposition 2.3.7 (b)]), and we have $$\begin{aligned} \label{eq: def of sigma KL 2} \tau_{K,J} \subseteq \tau'_{K,J}, \end{aligned}$$ where $\tau_{K,J} = \text{Cone}(\{ -\alpha^{\vee}_i \mid i\in K \}\cup \{ \bm{e}_{i} \mid i \notin J \})$ is the cone defined in [\[eq: def of cone 2\]](#eq: def of cone 2){reference-type="eqref" reference="eq: def of cone 2"}. Recall from [\[eq: def of fan 1\]](#eq: def of fan 1){reference-type="eqref" reference="eq: def of fan 1"} and [\[eq: def of fan 2\]](#eq: def of fan 2){reference-type="eqref" reference="eq: def of fan 2"} that the fan $\Sigma$ is given by $$\begin{aligned} \Sigma= \{ \tau_{K,J} \mid K\subseteq J\subseteq [n-1]\} =\{ \sigma_{K,L} \mid K,L\subseteq [n-1], \ K\cap L=\emptyset \}.\end{aligned}$$ In [@ab-ze23 Proposition 4.3, Remark 3.8], the authors proved that the fan $\Sigma$ is complete. Let us denote by $\Sigma(k)$ and $\Sigma'(k)$ the set of $k$-dimensional cones in $\Sigma$ and $\Sigma'$, respectively ($0\le k\le n-1$). The completeness of $\Sigma$ means that the full-dimensional cones $\tau_{K,J}\in\Sigma(n-1)$ cover the entire space $\mathbb R^{n-1}$. This and [\[eq: def of sigma KL 2\]](#eq: def of sigma KL 2){reference-type="eqref" reference="eq: def of sigma KL 2"} imply that $$\begin{aligned} \tau_{K,J} = \tau'_{K,J} \quad \text{for all $\tau_{K,J}\in \Sigma(n-1)$}.\end{aligned}$$ Since these full-dimensional cones $\tau_{K,J}\in \Sigma(n-1)$ cover $\mathbb R^{n-1}$, there is no room left in $\mathbb R^{n-1}$ into which any other full-dimensional cone of $\Sigma'(n-1)$ could fit. Thus, it follows that $$\begin{aligned} \label{eq: def of sigma KL 5} \Sigma(n-1) = \Sigma'(n-1).\end{aligned}$$ By definition, the rays in $\Sigma$ are $\text{cone}( -\alpha^{\vee}_i)$ and $\text{cone}( \bm{e}_i)$ for $1\le i\le n-1$ which are generated by normal vectors of the facets $F^{+}_{i}$ and $F^{-}_{i}$ of $P_{n-1}$, respectively. This means that we have $\Sigma(1)\subseteq \Sigma'(1)$. To prove that there is no facet of $P_{n-1}$ other than $F^{\pm}_{i}$ for $1\le i\le n-1$, it is enough to show that $\Sigma(1)=\Sigma'(1)$. To see this, let $\rho\in\Sigma'(1)$ be an arbitrary ray. It corresponds to a facet of $P_{n-1}$, and this facet contains a vertex of $P_{n-1}$, say $v$. Since $P_{n-1}$ is a full-dimensional polytope in $\mathbb R^{n-1}$, the vertex $v$ corresponds to a full-dimensional cone $\sigma_{\text{full}}\in\Sigma'$. This means that $\rho$ is a ray (i.e. a $1$-dimensional face) of $\sigma_{\text{full}}$ ([@co-li-sc Proposition 2.3.7 (b)]). Now, [\[eq: def of sigma KL 5\]](#eq: def of sigma KL 5){reference-type="eqref" reference="eq: def of sigma KL 5"} implies that we have $\sigma_{\text{full}}=\tau_{K,J}$ for some $\tau_{K,J}\in\Sigma(n-1)$. In particular, its ray $\rho$ belongs to $\Sigma(1)$. Hence, we conclude that $\Sigma(1)=\Sigma'(1)$, as desired.
◻ In the next claim, we take the set of integral points $\mathbb Z^{n-1}\subseteq \mathbb R^{n-1}$ as the lattice of $\mathbb R^{n-1}$. **Corollary 14**. *$P_{n-1}\subseteq\mathbb R^{n-1}$ is a full-dimensional simple lattice polytope.* *Proof.* By the definition of $P_{n-1}$ and [\[eq: polytope 30\]](#eq: polytope 30){reference-type="eqref" reference="eq: polytope 30"}, it is clear that $P_{n-1}$ is a lattice polytope of full dimension in $\mathbb R^{n-1}$. We show that the polytope $P_{n-1}$ is simple. By definition, the set of vertices of $P_{n-1}$ is a subset of $\{v_J\mid J\subseteq[n-1]\}$. It follows from Lemma [Lemma 12](#lem: contain i J){reference-type="ref" reference="lem: contain i J"} that each $v_J\in P_{n-1}$ is contained in exactly $n-1$ facets $F^{+}_{k}\ (k\in J)$ and $F^{-}_{j}\ (j\notin J)$. Thus, each vertex of $P_{n-1}$ is contained in exactly $n-1$ facets, and the claim follows. ◻ Recall that a face of an arbitrary convex polytope must be an intersection of its facets ([@zi95 Theorem 2.7(v)]). Proposition [Proposition 13](#prop: polytope ineq -alphai){reference-type="ref" reference="prop: polytope ineq -alphai"} now implies that the set of non-empty faces of $P_{n-1}$ is given by $F_{K,J}$ defined in [\[eq: def of FKJ\'\]](#eq: def of FKJ'){reference-type="eqref" reference="eq: def of FKJ'"} for $K\subseteq J\subseteq[n-1]$. We note that $$\begin{aligned} \begin{split} \dim F_{K,J} &= (n-1)-|K|-|[n-1]-J| = |J|-|K| \end{split}\end{aligned}$$ since $P_{n-1}$ is a simple polytope. In particular, the vertices of $P_{n-1}$ (i.e. the faces of dimension 0) are given by $F_{J,J}=\{v_J\}$ for $J\subseteq[n-1]$ which means that the vertex set of $P_{n-1}$ is precisely $\{v_J\mid J\subseteq[n-1]\}$. Since the set of non-empty faces of $P_{n-1}$ is given by $F_{K,J}$, we obtain the following **Corollary 15**. *The normal fan of $P_{n-1}$ coincides with the fan $$\begin{aligned} \Sigma= \{ \tau_{K,J} \mid K\subseteq J\subseteq [n-1]\}\end{aligned}$$ described in [\[eq: def of fan 2\]](#eq: def of fan 2){reference-type="eqref" reference="eq: def of fan 2"}, where $\tau_{K,J}=\text{\rm cone}(\{ -\alpha^{\vee}_i \mid i\in K \}\cup \{ \bm{e}_{i} \mid i \notin J \})$ is the cone corresponding to the face $F_{K,J}\subseteq P_{n-1}$.* We end this subsection by determining the combinatorial type of the polytope $P_{n-1}$. Let $$[0,1]^{n-1} \coloneqq \{(x_1,\ldots,x_{n-1})\in \mathbb R^{n-1} \mid 0\le x_i\le 1 \ (i\in[n-1])\}$$ be the standard cube of dimension $n-1$. There are $2(n-1)$ facets of $[0,1]^{n-1}$: the ones given by $x_i=0$ for $i\in[n-1]$ and the ones given by $x_i=1$ for $i\in[n-1]$. Hence, the faces of $[0,1]^{n-1}$ are intersections of these facets, and each of them can be written as $$\label{eq: face post of cube} E_{K,J}\coloneqq \left\{(x_1,\ldots,x_{n-1})\in [0,1]^{n-1} \ \left| \ \begin{array}{l} x_i=0\quad\text{if $i\in K$},\\ x_i=1\quad\text{if $i\notin J$} \end{array} \right. \right\}$$ for some $K\subseteq J\subseteq [n-1]$ (so that $E_{K,J}\ne\emptyset$). It is straightforward to verify that these faces satisfy $$\label{eq: face post of cube 2} E_{K,J}\subseteq E_{K',J'} \quad \text{if and only if} \quad K'\subseteq K\subseteq J\subseteq J',$$ where the equality holds if and only if $K=K'$ and $J=J'$. The next claim means that $P_{n-1}$ is combinatorially equivalent to the cube $[0,1]^{n-1}$. **Proposition 16**.
*For the set of faces of $P_{n-1}$, we have $$F_{K,J}\subseteq F_{K',J'} \quad \text{if and only if} \quad K'\subseteq K\subseteq J\subseteq J',$$ where the equality holds if and only if $K=K'$ and $J=J'$.* *Proof.* If $K'\subseteq K\subseteq J\subseteq J'$, then it is clear that we have $F_{K,J}\subseteq F_{K',J'}$ by the definition of $F_{K,J}$ and $F_{K',J'}$. Conversely, suppose that $F_{K,J}\subseteq F_{K',J'}$. Recall that we have $$\begin{aligned} v_K, v_J\in F_{K,J}.\end{aligned}$$ In particular, it follows that $v_K\in F_{K',J'}$. Thus, the definition of $F_{K',J'}$ and Lemma [Lemma 12](#lem: contain i J){reference-type="ref" reference="lem: contain i J"} imply that $K'\subseteq K$. Similarly, we have $v_J\in F_{K',J'}$, and hence we obtain that $J\subseteq J'$ by the definition of $F_{K',J'}$ and Lemma [Lemma 12](#lem: contain i J){reference-type="ref" reference="lem: contain i J"}. Thus, we conclude that $K'\subseteq K\subseteq J\subseteq J'$, as desired. Lastly, we have $F_{K,J}= F_{K',J'}$ if and only if $K=K'$ and $J=J'$ because of the first claim of this proposition. ◻ ## Nonnegative part $X(\Sigma)_{\ge0}$ and the polytope $P_{n-1}$ {#subsec: nonnegative and polytope} In the last subsection, we showed that the normal fan of the polytope $P_{n-1}$ is precisely the fan $\Sigma$ for the toric orbifold $X(\Sigma)=(\mathbb C^{2(n-1)}-E)/T$, where each $\tau_{K,J}\in\Sigma$ is the cone corresponding to the face $F_{K,J}\subseteq P_{n-1}$ for $K\subseteq J\subseteq [n-1]$. In Section [3.3](#subsec: Orbit decomposition){reference-type="ref" reference="subsec: Orbit decomposition"}, we also saw that $\tau_{K,J}$ is the cone corresponding to the torus orbit $X(\Sigma)_{K,J}\subseteq X(\Sigma)$. Therefore, $X(\Sigma)_{K,J}$ is the orbit corresponding to the face $F_{K,J}$. By using the language of relative interiors of faces, the correspondence becomes clearer; the (disjoint) decomposition of $X(\Sigma)$ by the torus orbits $X(\Sigma)_{K,J}$ corresponds to the (disjoint) decomposition of $P_{n-1}$ by the relative interiors $\mathop{\mathrm{Int}}(F_{K,J})$ of the faces. As is well-known in the theory of projective toric varieties (e.g. [@co-li-sc Sect. 12.2] or [@fu Sect. 4.2]), the above correspondence implies that there is the moment map $$\mu \colon X(\Sigma)\rightarrow \mathbb R^{n-1}$$ determined by the full-dimensional simple lattice polytope $P_{n-1}\subseteq \mathbb R^{n-1}$ which satisfies the following property; it restricts to a homeomorphism $$\overline{\mu} \colon X(\Sigma)_{\ge 0}\to P_{n-1}$$ such that $$\overline{\mu}(X(\Sigma)_{{K,J};> 0})=\mathop{\mathrm{Int}}(F_{K,J})\quad \text{for $K\subseteq J\subseteq [n-1]$},$$ where $\mathop{\mathrm{Int}}(F_{K,J})$ is the relative interior of the face $F_{K,J}\subseteq P_{n-1}$. In particular, each piece $X(\Sigma)_{{K,J};> 0}$ is homeomorphic to an open disk so that the decomposition [\[eq: decomp of X nonnegative KJ\]](#eq: decomp of X nonnegative KJ){reference-type="eqref" reference="eq: decomp of X nonnegative KJ"} of $X(\Sigma)_{\ge 0}$ is a cell decomposition. See [@co-li-sc Sect. 12.2] or [@fu Sect. 4.2] for an explicit formula of the moment map $\mu$. **Example 5**. * Let $n=4$ so that our polytope is $P_3$ depicted in Figure [\[pic: P2 and P3\]](#pic: P2 and P3){reference-type="ref" reference="pic: P2 and P3"}. The moment map gives us a homeomorphism $$X(\Sigma)_{\ge0} \rightarrow P_3$$ which sends each stratum of $X(\Sigma)_{\ge0}$ to the corresponding face of $P_3$. 
For example, by Proposition [Proposition 5](#prop: nonnengative stratum){reference-type="ref" reference="prop: nonnengative stratum"}, we have $$X(\Sigma)_{{\{1,2\},\{1,2,3\}};> 0}= \left\{ [0,0,x_3;1,1,1]\in X(\Sigma) \mid x_3>0 \right\} ,$$ and this is sent under the moment map to the open edge $\mathop{\mathrm{Int}}(F_{\{1,2\},\{1,2,3\}})=\mathop{\mathrm{Int}}(F^{+}_{1}\cap F^{+}_{2})$ connecting the vertices $v_{\{1,2\}}$ and $v_{\{1,2,3\}}$. * # Peterson Varieties In this section, we recall the definition of the Peterson variety, and we give a certain stratification of the Peterson variety according to K. Rietsch ([@ri03; @ri06]). ## Peterson varietiy in ${SL_{n}}/B^-$ We keep the notation for ${SL_{n}}={SL_{n}}(\mathbb C)$, $B^{\pm}$, $T$, $U^{\pm}$ from Section [2](#sec: notations){reference-type="ref" reference="sec: notations"}. Let $f$ be an $n\times n$ regular nilpotent matrix given by $$\begin{aligned} \label{eq: def of f} f = \begin{pmatrix} 0 & & & & \\ 1 & 0 & & \vspace{-5pt}\\ & 1 & \ddots & & \vspace{-5pt}\\ & & \ddots & 0 & \\ & & & 1 & 0 \\ \end{pmatrix}.\end{aligned}$$ **Definition 17**. *The Peterson variety $Y$ in ${SL_{n}}/B^-$ is defined as follows: $$\begin{aligned} Y\coloneqq \{gB^-\in {SL_{n}}/B^- \mid (\text{\rm Ad}_{g^{-1}} f)_{i,j}=0 \ (j>i+1)\},\end{aligned}$$ where $(\text{\rm Ad}_{g^{-1}} f)_{i,j}$ is the $(i,j)$-th component of the matrix $\text{\rm Ad}_{g^{-1}} f=g^{-1}fg$.* ## Richardson strata of $Y$ {#subsec: A decomposition of XJ} [\[subsec: decomposition of Y\]]{#subsec: decomposition of Y label="subsec: decomposition of Y"} We begin with fixing our notations. For $i,j\in [n]$, we denote by $E_{i,j}$ the $n\times n$ matrix having $1$ at the $(i,j)$-th position and $0$'s elsewhere. Let $e_i \coloneqq E_{i,i+1}$ and $f_i \coloneqq E_{i+1,i}$ for $i\in [n-1]$. We set $$\begin{aligned} x_i(t)\coloneqq \exp(te_i) \in U^+(\subseteq {SL_{n}}) \quad \text{and} \quad y_i(t)\coloneqq \exp(tf_i) \in U^-(\subseteq {SL_{n}})\end{aligned}$$ for $1\le i\le n-1$ and $t\in \mathbb C$. For example, when $n=3$, we have $$\begin{aligned} x_1(t)= \begin{pmatrix} 1 & t &0 \\ 0 & 1 &0\\ 0&0&1 \end{pmatrix} \quad \text{and} \quad y_2(t)=\begin{pmatrix} 1 & 0 &0 \\ 0 & 1 &0\\ 0 & t &1 \end{pmatrix} \quad \text{for $t\in\mathbb C$}.\end{aligned}$$ For $i\in [n-1]$, let $$\begin{aligned} \dot{s}_i\coloneqq y_i(-1)x_i(1)y_i(-1).\end{aligned}$$ Then we have $\dot{s}_i\in N(T)$, where $N(T)$ is the normalizer of the maximal torus $T\subseteq {SL_{n}}$. For example, when $n=3$, then we have $$\begin{aligned} \dot{s}_1=\begin{pmatrix} 0 & 1 & 0\\ -1& 0 & 0\\ 0 & 0 & 1 \end{pmatrix} \quad \text{and} \quad \dot{s}_2=\begin{pmatrix} 1 & 0 & 0\\ 0 & 0 & 1\\ 0 & -1& 0 \end{pmatrix}.\end{aligned}$$ The Weyl group is defined by $W\coloneqq N(T)/T=\langle s_1,s_2,\cdots, s_{n-1}\rangle$, where $s_i \coloneqq \dot{s_i}T\in W$ for $i\in [n-1]$. In the rest of this paper, we identify the Weyl group $W$ and the permutation group $\mathfrak{S}_n$ which corresponds $s_i$ to the simple reflection $(i,i+1)$ for $i\in[n-1]$. For $w\in \mathfrak{S}_n$, we define a representative $\dot{w}\in {SL_{n}}$ by $$\label{eq: def of representative} \dot{w}\coloneqq\dot{s}_{i_1}\dot{s}_{i_2}\cdots \dot{s}_{i_m}\in {SL_{n}},$$ where $s_{i_1}s_{i_2}\cdots s_{i_m}$ is a reduced expression of $w$. 
It follows from Matsumoto's criterion ([@ma99 Theorem 1.8]) that the definition of $\dot{w}$ does not depend on a choice of a reduced expression of $w$ since the matrices $\dot{s}_1,\ldots,\dot{s}_{n-1}\in {SL_{n}}$ satisfy - $\dot{s}_i\dot{s}_j=\dot{s}_j\dot{s}_i$ for $i,j\in [n-1]$ with $|i-j|\ge2$, - $\dot{s}_i\dot{s}_{i+1}\dot{s}_i=\dot{s}_{i+1}\dot{s}_i\dot{s}_{i+1}$ for $i\in [n-2]$. For the longest element $w_0\in\mathfrak{S}_n$, the representative $\dot{w}_0$ is given by $$\label{eq: representative of w0} \dot{w}_0= \begin{pmatrix} & & & & \!\!1\\ & & & \!\!-1 & \\ & & \!\!1 & & \\ & \!\!\!\!\rotatebox{75}{$\ddots$} & & & \vspace{-3pt}\\ (-1)^{n-1}\!\!\!\! & & & & & \end{pmatrix} \in {SL_{n}}.$$ We call $\dot{w}_0$ **the longest permutation matrix in ${SL_{n}}$**. For a subset $J\subseteq [n-1]$, we may decompose it into the connected components as in Section [4.3](#subsec: polytope){reference-type="ref" reference="subsec: polytope"}: $$\begin{aligned} \label{eq: connected components} J = J_1 \sqcup J_2 \sqcup \cdots \sqcup J_{m},\end{aligned}$$ where each $J_{k}\ (1\le k\le m)$ is a maximal connected subset of $J$. To determine each $J_{k}$ uniquely, we require that elements of $J_{k}$ are less than elements of $J_{{k}'}$ when $k< {k}'$. For $J\subseteq [n-1]$, let us introduce a Young subgroup of $\mathfrak{S}_n$ given by $$\begin{aligned} \mathfrak{S}_J \coloneqq \mathfrak{S}_{J_1}\times \mathfrak{S}_{J_2}\times \cdots\times\mathfrak{S}_{J_m} \subseteq \mathfrak{S}_n,\end{aligned}$$ where each $\mathfrak{S}_{J_{k}}$ is the subgroup of $\mathfrak{S}_n$ generated by the simple reflections $s_i$ for all $i\in J_{k}$. Let $w_J$ be the longest element of $\mathfrak{S}_J$, i.e.,  $$\begin{aligned} w_J \coloneqq w_{J_1} w_{J_2} \cdots w_{J_{m}} \in \mathfrak{S}_J , \end{aligned}$$ where each $w_{J_{k}}$ is the longest element of the permutation group $\mathfrak{S}_{J_{k}}$ $(1\le k\le m)$. It is well known that $w_K\le w_J$ in the Bruhat order if and only if $K\subseteq J$ ([@AHKZ Lemma 2.4] or [@Insko-Tymoczko Lemma 6]). Listing reduced expressions of $w_{J_{k}}$ from $k=1$ to $k=m$, we obtain a reduced expression of $w_J$. This gives us a representative $\dot{w}_J\in {SL_{n}}$ by the construction given in [\[eq: def of representative\]](#eq: def of representative){reference-type="eqref" reference="eq: def of representative"}. We illustrate the form of $\dot{w}_J\in {SL_{n}}$ by an example. **Example 6**. *Let $n=10$ and $J=\{1,2, \ \ ,4,5,6, \ \ , \ \ ,9\}$. Then we have $$\begin{aligned} J = \{1,2\} \sqcup\{4,5,6\} \sqcup\{9\} = J_1\sqcup J_2\sqcup J_3.\end{aligned}$$ Thus, we have $$\begin{aligned} {\tiny \dot{w}_J = \dot{w}_{J_1} \dot{w}_{J_2} \dot{w}_{J_3} = \left( \begin{array}{@{\,}ccc|cccc|c|cc@{\,}} & & 1\!\! & & & & & & & \\ & \!\!\!\!-1\!\!\!\! & & & & & & & & \\ 1 & & & & & & & & & \\ \hline & & & & & & 1\!\!& & & \\ & & & & & \!\!\!\!-1\!\!\!\! & & & & \\ & & & & 1 & & & & & \\ & & & \!\!\!-1\!\!\!\! & & & & & & \\ \hline & & & & & & & \!\!1\!\! & & \\ \hline & & & & & & & & & 1 \\ & & & & & & & & \!\!\!-1\!\!\!\! & \end{array} \right)\in {SL_{10}}.}\end{aligned}$$* For $J\subseteq [n-1]$, let $$X_{w_J}^{\circ} = B^-\dot{w}_JB^-/B^- \quad \text{and} \quad \Omega_{w_J}^{\circ} = B^+\dot{w}_JB^-/B^-$$ be the Schubert cell and the opposite Schubert cell associated with the permutation $w_J$, respectively. 
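As a concrete check of this construction, the following Python sketch (ours, not part of the paper) builds the matrices $\dot s_i=y_i(-1)x_i(1)y_i(-1)$, verifies the braid relation $\dot s_i\dot s_{i+1}\dot s_i=\dot s_{i+1}\dot s_i\dot s_{i+1}$ numerically for $n=3$, and assembles $\dot w_J$ for the set $J$ of Example 6 from the reduced word $s_1s_2s_1\cdot s_4s_5s_6s_4s_5s_4\cdot s_9$ of $w_J$; the printed matrix coincides with the block matrix displayed in Example 6.

```python
# Illustrative sketch (ours, not from the paper): the representatives s_i-dot and w-dot.
# Since e_i and f_i square to zero, exp(t e_i) = I + t e_i and exp(t f_i) = I + t f_i.
import numpy as np

def x(n, i, t):                       # x_i(t) = exp(t e_i), with 1-based index i
    M = np.eye(n)
    M[i - 1, i] = t
    return M

def y(n, i, t):                       # y_i(t) = exp(t f_i)
    M = np.eye(n)
    M[i, i - 1] = t
    return M

def s_dot(n, i):                      # s_i-dot = y_i(-1) x_i(1) y_i(-1)
    return y(n, i, -1) @ x(n, i, 1) @ y(n, i, -1)

def w_dot(n, word):                   # w-dot = s_{i_1}-dot ... s_{i_m}-dot for a reduced word
    M = np.eye(n)
    for i in word:
        M = M @ s_dot(n, i)
    return M

# braid relation: s_1-dot s_2-dot s_1-dot = s_2-dot s_1-dot s_2-dot (here for n = 3)
assert np.allclose(w_dot(3, [1, 2, 1]), w_dot(3, [2, 1, 2]))

# w_J-dot of Example 6: J = {1,2} u {4,5,6} u {9}, reduced word listed component by component
print(w_dot(10, [1, 2, 1, 4, 5, 6, 4, 5, 4, 9]).astype(int))
```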
According to Rietsch ([@ri03; @ri06]), we consider a **Richardson stratum** $$\label{eq: def of richardson strata} Y_{K,J}\coloneqq Y\cap \Omega_{w_K}^{\circ} \cap X_{w_J}^{\circ} \subseteq Y$$ for $J, K\subseteq [n-1]$. We note that $Y_{K,J}=\emptyset$ unless $w_K\le w_J$ (i.e., $K\subseteq J$) by [@Richardson92 Lemma 3.1(3)]. It is well-known that $$Y=\bigsqcup_{K\subseteq [n-1]}Y\cap \Omega_{w_K}^{\circ} =\bigsqcup_{J\subseteq [n-1]}Y\cap X_{w_J}^{\circ}$$ (cf. [@AHKZ Lemma 3.5]). Therefore, we obtain $$\label{decomposition of Y as union of ZKJ} Y=\bigsqcup_{K\subseteq J\subseteq [n-1]}Y_{K,J}.$$ As a special case, we call $Y_{K,[n-1]}$ ($K\subseteq [n-1]$) a **Richardson stratum in the big cell**. ## $J$-Toeplitz matrices and $Y\cap X_{w_J}^{\circ}$ {#subsec: on elements in Y ge 0 cap XJ} For a positive integer $k$, a $k\times k$ matrix of the form $$\begin{pmatrix} 1& &&&\\ x_1 & \!\!\!1 & && \\ x_2 & \!\!\!x_1 & \!\!\!1 & &\\ \vdots & \!\!\!x_2 & \!\!\!x_1 & \ddots\\ x_{k-2} & \!\!\!\vdots & \!\!\!\ddots & \ddots & \ddots\\ x_{k-1} & \!\!\!x_{k-2}& \!\!\!\cdots & x_2 & x_1& 1 \end{pmatrix} \qquad (x_1,\ldots,x_{k-1}\in\mathbb C)$$ is called a **Toeplitz matrix**. We call the product $x\dot{w}_0$ for a $k\times k$ Toeplitz matrix $x$ and $\dot{w}_0\in {SL_{k}}$ (see [\[eq: representative of w0\]](#eq: representative of w0){reference-type="eqref" reference="eq: representative of w0"}) an **anti-Toeplitz matrix**. For example, $$\begin{pmatrix} & & 1 \\ & -1 & x_1 \\ 1 & -x_1 & x_2 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} & & & 1 \\ & & -1 & x_1 \\ & 1 & -x_1 & x_2 \\ -1 & x_1 & -x_2 & x_3 \end{pmatrix}$$ are anti-Toeplitz matrices. For each $J\subseteq[n-1]$, we construct a partition of $[n]$ as follows. Take the decomposition $J = J_1 \sqcup J_2 \sqcup \cdots \sqcup J_{m}$ into the connected components (see [\[eq: connected components\]](#eq: connected components){reference-type="eqref" reference="eq: connected components"}), and set $$\label{def: definition of J'i} {J_{k}^*}\coloneqq J_{k}\cup\left\{\max(J_{k})+1\right\} \qquad (1\le k\le m).$$ Let ${J_{}^*}\coloneqq {J_{1}^*}\sqcup {J_{2}^*}\sqcup \cdots \sqcup {J_{m}^*}\subseteq[n]$. We write $[n]-{J_{}^*} =\{p_1,\ldots,p_s\}$. Then we obtain an (unorderd) partition of $[n]$: $$\label{eq: partition from J} [n] = {J_{1}^*}\sqcup \cdots \sqcup {J_{m}^*} \sqcup \{p_1\} \sqcup \cdots \sqcup\{p_s\}.$$ This allows us to consider block diagonal matrices with respect to the partition [\[eq: partition from J\]](#eq: partition from J){reference-type="eqref" reference="eq: partition from J"} of $[n]$. We call such matrices **$J$-block matrices**. In the same manner, **$J$-block diagonal matrices** and **$J$-block lower triangular matrices** are defined. **Example 7**. *Let $n=10$ and $J=\{1,2\} \sqcup\{4,5,6\} \sqcup\{9\}$ as above. Then we have ${J_{}^*}={J_{1}^*}\sqcup {J_{2}^*}\sqcup {J_{3}^*}$ with $${J_{1}^*}=\{1,2,3\}, \ {J_{2}^*}=\{4,5,6,7\}, \ {J_{3}^*}=\{9,10\}.$$ Hence, we have $[10]-{J_{}^*}=\{8\}$. Thus, $J$-block diagonal matrices and $J$-block lower triangular matrices are of the form $$\begin{aligned} {\tiny \left( \begin{array}{@{\,}ccc|cccc|c|cc@{\,}} *\!\! & \!\!*\!\! & \!\!* & & & & & & & \\ *\!\! & \!\!*\!\! & \!\!* & & & & & & & \\ *\!\! & \!\!*\!\! & \!\!* & & & & & & & \\ \hline & & & *\!\! & \!\!*\!\! & \!\!*\!\! & \!\!* & & & \\ & & & *\!\! & \!\!*\!\! & \!\!*\!\! & \!\!* & & & \\ & & & *\!\! & \!\!*\!\! & \!\!*\!\! & \!\!* & & & \\ & & & *\!\! & \!\!*\!\! & \!\!*\!\! 
& \!\!* & & & \\ \hline & & & & & & & \!\!*\!\! & & \\ \hline & & & & & & & & *\!\! & \!\!* \\ & & & & & & & & *\!\! & \!\!* \end{array} \right)} \quad \text{and} \quad {\tiny \left( \begin{array}{@{\,}ccc|cccc|c|cc@{\,}} *\! & \!\!*\!\! & \!* & & & & & & & \\ *\! & \!\!*\!\! & \!* & & & & & & & \\ *\! & \!\!*\!\! & \!* & & & & & & & \\ \hline *\! & \!\!*\!\! & \!* & *\! & \!\!*\!\! & \!*\! & \!\!* & & & \\ *\! & \!\!*\!\! & \!* & *\! & \!\!*\!\! & \!*\! & \!\!* & & & \\ *\! & \!\!*\!\! & \!* & *\! & \!\!*\!\! & \!*\! & \!\!* & & & \\ *\! & \!\!*\!\! & \!* & *\! & \!\!*\!\! & \!*\! & \!\!* & & & \\ \hline *\! & \!\!*\!\! & \!* & *\! & \!\!*\!\! & \!*\! & \!\!* & \!\!*\!\! & & \\ \hline *\! & \!\!*\!\! & \!* & *\! & \!\!*\!\! & \!*\! & \!\!* & \!\!*\!\! & *\!\! & \!\!* \\ *\! & \!\!*\!\! & \!* & *\! & \!\!*\!\! & \!*\! & \!\!* & \!\!*\!\! & *\!\! & \!\!* \end{array} \right), }\end{aligned}$$ where all $*$'s can take arbitrary complex numbers.* **Definition 18**. * For $J\subseteq[n-1]$, a **$J$-Toeplitz matrix** is a $J$-block diagonal matrix whose diagonal blocks consist of Toeplitz matrices. * **Example 8**. *Let $n=10$ and $J=\{1,2\}\sqcup\{4,5,6\}\sqcup\{9\}$ as above. Then a $J$-Toeplitz matrix is of the form $$\begin{aligned} {\tiny \left( \begin{array}{@{\,}ccc|cccc|c|cc@{\,}} 1& & & & & & & & & \\ a & \!\!1\!\!& & & & & & & & \\ b & \!\!a\!\! & \!1& & & & & & & \\ \hline & & & 1 & & & & & & \\ & & & x & \!\!1\!\! & & & & & \\ & & & y & \!\!x\!\! & 1& & & & \\ & & & z & \!\!y\!\! & x & \!\!\!1 & & & \\ \hline & & & & & & & \!\!1\!\! & & \\ \hline & & & & & & & & 1\!\! & \\ & & & & & & & & s\!\! & \!1 \end{array} \right) } \qquad (a,b,x,y,z,s\in \mathbb C).\end{aligned}$$* Since $w_J$ is the longest element of $\mathfrak{S}_{J}(\subseteq \mathfrak{S}_{n})$, the associated Schubert cell $X_{w_J}^{\circ}=B^-\dot{w}_JB^-/B^-$ is the set of $gB^- \in {SL_{n}}/B^-$ such that $g$ is $J$-block diagonal: $$X_{w_J}^{\circ} = \{ gB^- \in {SL_{n}}/B^- \mid \text{$g$ is $J$-block diagonal}\}.$$ By a direct computation, one can verify the following well-known fact; an element $gB^-\in {SL_{n}}/B^-$ belongs to $Y\cap X_{w_J}^{\circ}$ if and only if it can be represented by a matrix $g\in {SL_{n}}$ such that $g$ is a $J$-block diagonal matrix whose diagonal blocks consist of anti-Toeplitz matrices. We illustrate this in the next example. **Example 9**. *Let $n=10$ and $J=\{1,2\}\sqcup\{4,5,6\}\sqcup\{9\}$ as above. Then $Y\cap X_{w_J}^{\circ}$ is the set of matrices given by $$\begin{aligned} {\tiny \left( \begin{array}{@{\,}ccc|cccc|c|cc@{\,}} & & \!\!1 & & & & & & & \\ & \!\!\!\!-1 & \!\!a & & & & & & & \\ 1 & \!\!\!\!-a & \!\!b & & & & & & & \\ \hline & & & & & & \!\!\!1& & & \\ & & & & & \!\!\!\!-1 & \!\!\!x & & & \\ & & & & \!\!\!\!1 & \!\!\!\!-x & \!\!\!y & & & \\ & & & \!\!-1 & \!\!\!\!x & \!\!\!\!-y & \!\!\!z & & & \\ \hline & & & & & & & \!\!1\!\! & & \\ \hline & & & & & & & & & \!\!\!\!\!1 \\ & & & & & & & & \!-1 & \!\!\!\!\!s \end{array} \right)B^- } \qquad (a,b,x,y,z,s\in \mathbb C).\end{aligned}$$* In other words, we can write $$\label{mathcal{X}J=J-toeplitz} Y\cap X_{w_J}^{\circ} =\{x\dot{w}_JB^- \in {SL_{n}}/B^- \mid \text{$x$ is $J$-Toeplitz}\}.$$ For example, the matrix exhibited in the previous example is the product of the $J$-Toeplitz matrix in Example [Example 8](#ex: J-Toeplitz matrices){reference-type="ref" reference="ex: J-Toeplitz matrices"} and the permutation matrix $\dot{w}_J$ in Example [Example 6](#eg: wJ){reference-type="ref" reference="eg: wJ"}. 
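To illustrate the description of $Y\cap X_{w_J}^{\circ}$ above computationally, the following Python sketch (ours, not from the paper) assembles the $J$-Toeplitz matrix of Example 8 and the block anti-diagonal matrix $\dot w_J$ of Example 6 directly from their block descriptions and prints the product $x\dot w_J$, whose diagonal blocks are anti-Toeplitz exactly as in Example 9. The numerical parameter values are arbitrary choices made only for the demonstration.

```python
# Illustrative sketch (ours, not from the paper): a J-Toeplitz matrix times w_J-dot
# is J-block diagonal with anti-Toeplitz diagonal blocks (cf. Examples 8 and 9).
import numpy as np

def runs(J):
    """Maximal runs of consecutive integers in the sorted list J (the connected components)."""
    comps = []
    for j in J:
        if comps and j == comps[-1][-1] + 1:
            comps[-1].append(j)
        else:
            comps.append([j])
    return comps

def toeplitz_block(params):
    """Unipotent lower-triangular Toeplitz matrix with subdiagonal entries x_1, ..., x_{k-1}."""
    k = len(params) + 1
    M = np.eye(k)
    for d, val in enumerate(params, start=1):   # the d-th subdiagonal carries x_d
        for i in range(d, k):
            M[i, i - d] = val
    return M

def longest_perm(k):
    """Longest permutation matrix in SL_k: anti-diagonal entries 1, -1, 1, ..."""
    return np.array([[(-1) ** i if j == k - 1 - i else 0 for j in range(k)]
                     for i in range(k)], dtype=float)

n, J = 10, [1, 2, 4, 5, 6, 9]
params = [[2.0, 3.0], [1.0, 4.0, 5.0], [7.0]]    # (a, b), (x, y, z), (s) of Example 8, arbitrary values
x, wJ = np.eye(n), np.eye(n)
for comp, p in zip(runs(J), params):
    a, k = comp[0] - 1, len(comp) + 1            # the block occupies rows/columns a, ..., a+k-1 (0-based)
    x[a:a + k, a:a + k] = toeplitz_block(p)
    wJ[a:a + k, a:a + k] = longest_perm(k)
print((x @ wJ).astype(int))                      # diagonal blocks are anti-Toeplitz, as in Example 9
```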
## Description of the strata $Y_{K,J}$ {#subsec: description of RKJ} For $i\in[n-1]$, let $$\label{eq:4.3 first} \Delta_{i} \colon {SL_{n}}\to\mathbb C$$ be the function which takes the lower right $(n-i)\times (n-i)$ minor of a given matrix in ${SL_{n}}$. For example, for $g=(a_{i,j})_{1\le i\le 4,1\le j\le 4}\in {SL_{4}}$, we have $$\label{eq:4.3 second} \Delta_1(g) = \det \begin{pmatrix} a_{22} & a_{23} & a_{24} \\ a_{32} & a_{33} & a_{34} \\ a_{42} & a_{43} & a_{44} \end{pmatrix}, \ \Delta_2(g) = \det \begin{pmatrix} a_{33} & a_{34} \\ a_{43} & a_{44} \end{pmatrix}, \ \Delta_3(g) = a_{44}.$$ To study the strata $Y_{K,J}=Y\cap \Omega_{w_K}^{\circ} \cap X_{w_J}^{\circ}$, we first recall Rietsch's description of Richardson strata in the big cell from [@ri03]. The claim [@ri03 Lemma 4.5] combined with [@ri03 Theorem 4.6(1)] due to Dale Peterson gives the following description of Richardson strata in the big cell. **Proposition 19**. *Let $K\subseteq [n-1]$. Then, we have $$Y_{K,[n-1]} =\left\{x\dot{w}_0 B^- \ \left| \ \text{$x$ is Toeplitz and}~\begin{cases}\Delta_{i}(x\dot{w}_0)=0\quad\text{if}\quad i\in K\\ \Delta_{i}(x\dot{w}_0)\ne 0\quad\text{if}\quad i\notin K \end{cases} \right. \right\}.$$* **Example 10**. *Let $n=3$. Then we have $$Y_{\{1\},\{1,2\}} =\left\{ \left. \begin{pmatrix} 0&0&1&\\ 0&-1&a\\ 1&-a&b \end{pmatrix}B^- \ \right| \ \Delta_1=a^2-b= 0, \ \Delta_2=b\ne0 \right\}$$ and $$Y_{\emptyset,\{1,2\}} =\left\{ \left. \begin{pmatrix} 0&0&1&\\ 0&-1&a\\ 1&-a&b \end{pmatrix}B^- \ \right| \ \Delta_1=a^2-b\ne 0, \ \Delta_2=b\ne0 \right\}$$ (cf. Example [Example 2](#ex: n=3 orbits){reference-type="ref" reference="ex: n=3 orbits"} in Section [3.3](#subsec: Orbit decomposition){reference-type="ref" reference="subsec: Orbit decomposition"}).* We apply Rietsch's result (Proposition [Proposition 19](#lemm: Rietsch's result for Z KI){reference-type="ref" reference="lemm: Rietsch's result for Z KI"}) to characterize $Y_{K,J}$ for all $K\subseteq J\subseteq [n-1]$ in the next claim. For that purpose, let us prepare some notations. Let $J\subseteq[n-1]$, and take the decomposition $$J = J_1 \sqcup J_2 \sqcup \cdots \sqcup J_{m}$$ into the connected components. For a matrix $g\in {SL_{n}}$, we denote the ${J_{k}^*}\times {J_{k}^*}\ (\subseteq[n]\times [n])$ diagonal block of $g$ by $g({J_{k}^*})$ for $1\le k\le m$ (see [\[def: definition of J\'i\]](#def: definition of J'i){reference-type="eqref" reference="def: definition of J'i"} for the definition of ${J_{k}^*}$). For each connected component $J_{k}\subseteq J$, we set $$n_{k}\coloneqq |{J_{k}^*}| .$$ We consider the special linear group ${SL_{n_{k}}}$ of degree $n_{k}$ and its Borel subgroup $B^-_{n_{k}}$ consisting of lower triangular matrices. We denote by $Y_{n_{k}}$ the Peterson variety in ${SL_{n_{k}}}/B^-_{n_{k}}$. **Corollary 20**. *Let $K\subseteq J\subseteq [n-1].$ Then, we have $$Y_{K,J}=\left\{x\dot{w}_JB^- \ \left| \ \text{\rm $x$ is $J$-Toeplitz and}~\begin{cases}\Delta_{i}(x\dot{w}_J)=0\quad\text{if}\quad i\in K\\ \Delta_{i}(x\dot{w}_J)\ne 0\quad\text{if}\quad i\notin K \end{cases} \right. \right\}.$$* *Proof.* To begin with, we note that the right hand side of the desired equality is well-defined. Denoting it by $Y'_{K,J}$, we prove that $Y_{K,J}=Y'_{K,J}$. 
First we show that $$\label{RKJ sub RS'KJ} Y_{K,J}\subseteq Y'_{K,J}.$$ By [\[mathcal{X}J=J-toeplitz\]](#mathcal{X}J=J-toeplitz){reference-type="eqref" reference="mathcal{X}J=J-toeplitz"}, an arbitrary element $gB^-\in Y_{K,J}=Y\cap \Omega_{w_K}^{\circ} \cap X_{w_J}^{\circ}$ can be written as $$gB^-=x\dot{w}_JB^-$$ for some $J$-Toeplitz matrix $x\in {SL_{n}}$. Since each block $x({J_{k}^*})$ is a Toeplitz matrix in ${SL_{n_{k}}}$, it follows from [\[mathcal{X}J=J-toeplitz\]](#mathcal{X}J=J-toeplitz){reference-type="eqref" reference="mathcal{X}J=J-toeplitz"} (for ${SL_{n_{k}}}/B_{n_{k}}^-$) that $$\label{eq: J-Toeplitz 10} x({J_{k}^*})\dot{w}_J({J_{k}^*})B_{n_{k}}^- \in Y_{n_{k}}\cap (B^-_{n_{k}}\dot{w}_J({J_{k}^*})B^-_{n_{k}}/B^-_{n_{k}}) \vspace{3pt}$$ for $1\le k\le m$, where we note that $\dot{w}_J({J_{k}^*})$ is the longest permutation matrix in ${SL_{n_{k}}}$. Since $x\dot{w}_JB^-(=gB^-) \in \Omega_{w_K}^{\circ}=B^+\dot{w}_KB^-/B^-$, there exist $b^-\in B^-$ and $b^+\in B^+$ such that $$x\dot{w}_J b^- = b^+\dot{w}_K . \vspace{3pt}$$ Since $K\subseteq J$, $K$-block diagonal matrices are also $J$-block diagonal. In particular, $\dot{w}_K$ in this equality is $J$-block diagonal. By focusing on each ${J_{k}^*}\times {J_{k}^*}\ (\subseteq[n]\times [n])$ diagonal block of this equality, it follows that $$x({J_{k}^*})\dot{w}_J({J_{k}^*}) B_{n_{k}}^- \in B_{n_{k}}^+\dot{w}_K({J_{k}^*}) B_{n_{k}}^-/B_{n_{k}}^- \vspace{3pt}$$ for $1\le k\le m$. Combining this and [\[eq: J-Toeplitz 10\]](#eq: J-Toeplitz 10){reference-type="eqref" reference="eq: J-Toeplitz 10"}, it follows that $x({J_{k}^*})\dot{w}_J({J_{k}^*})B_{n_{k}}^-$ in ${SL_{n_{k}}}/B^-_{n_{k}}$ belongs to a Richardson stratum in the big cell: $$\label{eq: J-Toeplitz 30} x({J_{k}^*})\dot{w}_J({J_{k}^*}) B_{n_{k}}^- \ \ \in \ \ Y_{n_{k}}\cap (B_{n_{k}}^+\dot{w}_K({J_{k}^*}) B_{n_{k}}^-/B_{n_{k}}^-)\cap (B^-_{n_{k}}\dot{w}_J({J_{k}^*})B^-_{n_{k}}/B^-_{n_{k}})\vspace{3pt}$$ for $1\le k\le m$. We prove [\[RKJ sub RS\'KJ\]](#RKJ sub RS'KJ){reference-type="eqref" reference="RKJ sub RS'KJ"} by using [\[eq: J-Toeplitz 30\]](#eq: J-Toeplitz 30){reference-type="eqref" reference="eq: J-Toeplitz 30"} in what follows. For each $j\in J_{k}$, we set $$\label{eq: def of j'} j' \coloneqq j+1-\min J_{k} \in [n_{k} -1]\vspace{3pt}$$ which encodes the position of $j$ in the set $J_{k}$; for example, if $j=\min J_{k}$, then $j'=1$. Recall that $\dot{w}_J({J_{k}^*})$ in [\[eq: J-Toeplitz 30\]](#eq: J-Toeplitz 30){reference-type="eqref" reference="eq: J-Toeplitz 30"} is the longest permutation matrix in ${SL_{n_{k}}}$. Applying Rietsch's result (Proposition [Proposition 19](#lemm: Rietsch's result for Z KI){reference-type="ref" reference="lemm: Rietsch's result for Z KI"}) to the element in [\[eq: J-Toeplitz 30\]](#eq: J-Toeplitz 30){reference-type="eqref" reference="eq: J-Toeplitz 30"}, it follows that there exists a Toeplitz matrix $x_{k}\in{SL_{n_{k}}}$ such that - $x({J_{k}^*})\dot{w}_J({J_{k}^*})B_{n_{k}}^- = x_{k}\dot{w}_J({J_{k}^*})B_{n_{k}}^-$ ; - For $j\in J_{k}$, we have $\Delta_{j'}^{(n_{k})}(x_{k}\dot{w}_J({J_{k}^*}))=0 \ \text{if and only if} \ j\in K\cap J_{k}$, where $j'$ is the one defined in [\[eq: def of j\'\]](#eq: def of j'){reference-type="eqref" reference="eq: def of j'"} and $\Delta_{j'}^{(n_{k})}:{SL_{n_{k}}}\rightarrow \mathbb C$ is the minor defined in [\[eq:4.3 first\]](#eq:4.3 first){reference-type="eqref" reference="eq:4.3 first"} for ${SL_{n_{k}}}$. 
In (1), both of $x({J_{k}^*})$ and $x_{k}$ are lower triangular matrices having $1$'s on the diagonal, and $\dot{w}_J({J_{k}^*})$ is the longest permutation matrix in ${SL_{n_{k}}}$. Hence, (1) simply implies that $x({J_{k}^*})=x_{k}$. Thus, the condition (2) can be written as follows; for $j\in J_{k}$, we have $$\Delta_{j'}^{(n_{k})}\big(x({J_{k}^*})\dot{w}_J({J_{k}^*})\big)=0\quad \text{if and only if}\quad j\in K\cap J_{k} . \vspace{3pt}$$ Since $x$ is a $J$-Toeplitz matrix, the determinants of the diagonal blocks of $x\dot{w}_J\in{SL_{n}}$ are all equal to $1$, and hence the left hand side of this equality can be written as $\Delta_{j}(x\dot{w}_J)$, where $\Delta_{j}\colon {SL_{n}}\rightarrow \mathbb C$ is the function defined in [\[eq:4.3 first\]](#eq:4.3 first){reference-type="eqref" reference="eq:4.3 first"} for ${SL_{n}}$. Namely, we have $$\Delta_{j}(x\dot{w}_J)=0\quad \text{if and only if}\quad j\in K\cap J_{k} .$$ This holds for each connected component $J_{k}$ $(1\le k\le m)$, and we have $\Delta_{j}(x\dot{w}_J)=1$ for all $j\notin J$ since $x$ is $J$-Toeplitz (see Example [Example 8](#ex: J-Toeplitz matrices){reference-type="ref" reference="ex: J-Toeplitz matrices"}). Therefore, we conclude that for $j\in [n-1]$ we have $$\Delta_{j}(x\dot{w}_J)=0\quad \text{if and only if}\quad j\in K .$$ Recalling $gB^-=x\dot{w}_JB^-$ from the construction, we conclude that $Y_{K,J}\subseteq Y'_{K,J}$ as claimed in [\[RKJ sub RS\'KJ\]](#RKJ sub RS'KJ){reference-type="eqref" reference="RKJ sub RS'KJ"}. Conversely, let us show that $$Y_{K,J}\supseteq Y'_{K,J}.$$ Take an arbitrary element $x\dot{w}_J B^- \in Y'_{K,J}$, where $x$ is a $J$-Toeplitz matrix. By [\[mathcal{X}J=J-toeplitz\]](#mathcal{X}J=J-toeplitz){reference-type="eqref" reference="mathcal{X}J=J-toeplitz"}, we have $x\dot{w}_J B^- \in Y\cap X_{w_J}^{\circ}$. In what follows, let us prove that $$\label{eq: xwJ in OmegawJ} x\dot{w}_J B^- \in \Omega_{w_K}^{\circ}$$ which completes the proof. By the definition of $Y'_{K,J}$, we have $$\Delta_{j}(x\dot{w}_J)=0\quad \text{if and only if}\quad j\in K .$$ By reversing the argument above, we obtain the following claim for each ${J_{k}^*}\times {J_{k}^*}$ diagonal block; for $j\in J_{k}$, we have $$\Delta_{j'}\big(x({J_{k}^*})\dot{w}_J({J_{k}^*})\big)=0 \quad \text{if and only if}\quad j\in K\cap J_{k} , \vspace{3pt}$$ where $j'\in [n_{k} -1]$ is the position of $j$ in $J_{k}$ defined in [\[eq: def of j\'\]](#eq: def of j'){reference-type="eqref" reference="eq: def of j'"}. This and Rietsch's result (Proposition [Proposition 19](#lemm: Rietsch's result for Z KI){reference-type="ref" reference="lemm: Rietsch's result for Z KI"}) for ${SL_{n_{k}}}/B_{n_{k}}^-$ imply that each $x({J_{k}^*})\dot{w}_J({J_{k}^*})B_{n_{k}}^-$ belongs to a Richardson stratum in the big cell $$Y_{n_{k}}\cap (B_{n_{k}}^+\dot{w}_K({J_{k}^*}) B_{n_{k}}^-/B_{n_{k}}^-)\cap (B^-_{n_{k}}\dot{w}_J({J_{k}^*})B^-_{n_{k}}/B^-_{n_{k}}). \vspace{3pt}$$ In particular, we have $$x({J_{k}^*})\dot{w}_J({J_{k}^*})B_{n_{k}}^- \in B_{n_{k}}^+\dot{w}_K({J_{k}^*}) B_{n_{k}}^-/B_{n_{k}}^- . \vspace{3pt}$$ This means that there exist $b_{k}^-\in B^-_{n_{k}}$ and $b_{k}^+\in B^+_{n_{k}}$ such that $$x({J_{k}^*})\dot{w}_J({J_{k}^*})b_{k}^- = b_{k}^+ \dot{w}_K({J_{k}^*}) . \vspace{3pt}$$ Let $b^-\in B^-$ be the $J$-block diagonal matrix having $b_{k}^-$ on its ${J_{k}^*}\times {J_{k}^*}\ (\subseteq[n]\times [n])$ diagonal block for $1\le k\le m$ and $1$'s on the diagonal blocks of size $1$. 
Similarly, define $b^+\in B^+$ from $b_{k}^+$ $(1\le k\le m)$ in the same manner. Then we obtain an equality of $J$-block diagonal matrices $$x\dot{w}_Jb^- = b^+ \dot{w}_K$$ which can be shown by comparing the diagonal blocks (including the blocks of size $1$). Hence, we obtain that $$x\dot{w}_J B^- \in B^+\dot{w}_K B^-/B^- = \Omega_{w_K}^{\circ} , \vspace{3pt}$$ as claimed in [\[eq: xwJ in OmegawJ\]](#eq: xwJ in OmegawJ){reference-type="eqref" reference="eq: xwJ in OmegawJ"}. Thus, we conclude that $Y'_{K,J}\subseteq Y_{K,J}$ holds as well. ◻ **Example 11**. *Let $n=5$. Then, for example, we have $$Y_{\{1\},\{1,2,4\}} =\left\{ \left. \left( \begin{array}{@{\,}ccc|ccc@{\,}} 0&0&1&0&0\\ 0&-1&a&0&0\\ 1&-a&b&0&0\\ \hline 0&0&0&0&1\\ 0&0&0&-1&x \end{array} \right) B^- \ \right| \ \begin{matrix} \Delta_1=a^2-b= 0, \ \Delta_2=b\ne0, \\ (\Delta_3=1\ne0), \ \Delta_4=x\ne0 \end{matrix} \right\}.$$* # Totally nonnegative part of $Y$ {#sec: nonnegative part of Y} In this section, we study the totally nonnegative part of the Peterson variety with its cell decomposition introduced by Rietsch ([@ri03; @ri06]). We begin with reviewing Lusztig's theory of the totally nonnegative parts of ${SL_{n}}$ and ${SL_{n}}/B^-$ from [@lu94]. ## Totally nonnegative part of ${SL_{n}}$ Recall from Section [\[subsec: decomposition of Y\]](#subsec: decomposition of Y){reference-type="ref" reference="subsec: decomposition of Y"} that we have $$\begin{aligned} x_i(t)= \exp(te_i) \in U^+(\subseteq {SL_{n}}) \quad \text{and} \quad y_i(t)= \exp(tf_i) \in U^-(\subseteq {SL_{n}})\end{aligned}$$ for $i\in [n-1]$ and $t\in \mathbb C$. Let $U^+_{\ge0}$ be the submonoid $($with $1$$)$ of $U^+$ generated by the elements $x_i(a)$ for $i\in [n-1]$ and $a\in\mathbb{R}_{\ge 0}$. Following [@lu94 Proposition 2.7], we decompose $U^+_{\ge0}$ as follows. For each $w\in \mathfrak{S}_n$ and its reduced expression $w=s_{i_1}s_{i_2}\cdots s_{i_p}$, the map given by $$\mathbb{R}^p_{>0}\to U^+_{\ge0} \quad ; \quad (a_1,a_2,\cdots, a_p)\mapsto x_{i_1}(a_1)x_{i_2}(a_2)\cdots x_{i_p}(a_p)$$ is known to be injective. Its image, denoted by $U^+(w)$, does not depend on the choice of the reduced expression $w=s_{i_1}s_{i_2}\cdots s_{i_p}$. Moreover, this provides us with a disjoint decomposition $$\begin{aligned} \label{eq: decomp of U^-} U^+_{\ge 0}=\bigsqcup_{w\in \mathfrak{S}_n}U^+(w).\end{aligned}$$ We set $U^+_{>0}\coloneqq U^+(w_0)$, where $w_0$ is the longest element in $\mathfrak{S}_n$. **Example 12**. *Let $n=3$. Then we have $w_0=s_1s_2s_1$ so that $$\begin{split} U^+_{>0}&=U^+(s_1s_2s_1) =\{x_1(a)x_2(b)x_1(c) \mid a>0, b>0, c>0\}\\ &=\left\{ \left. \begin{pmatrix} 1 & a &0 \\ 0 & 1 &0\\ 0&0&1 \end{pmatrix}\begin{pmatrix} 1 & 0 &0 \\ 0 & 1 &b\\ 0&0&1 \end{pmatrix}\begin{pmatrix} 1 & c &0 \\ 0 & 1 &0\\ 0&0&1 \end{pmatrix}\ \right| \ a>0, \ b>0, \ c>0\right\}\\ &=\left. \left\{\begin{pmatrix} 1 & x& y \\ 0 & 1 &z\\ 0&0&1 \end{pmatrix}\ \right| \ x>0, \ y>0, z>0, \ xz-y>0\right\}, \end{split}$$ where we leave the reader to verify the last equality.* **Example 13**. *Let $n=9$ and $J=\{1, 3,4, 7,8\}=\{1\}\sqcup\{3,4\}\sqcup\{7,8\}$. Then the longest element $w_J$ of the Young subgroup $\mathfrak{S}_J$ is given by $w_J=s_1\cdot s_3s_4s_3\cdot s_7s_8s_7$. Hence, it follows that $$\begin{split} U^+(w_J)&=U^+(s_1\cdot s_3s_4s_3\cdot s_7s_8s_7)\\ &=\left\{\left( \left.
{\tiny \begin{array}{@{\,}cc|ccc|c|ccc@{\,}} 1& \!\!a& & & & & & & \\ & \!\!1& & & & & & & \\ \hline & & 1& \!\!x & \!\!y & & & & \\ & &&\!\!1 &\!\!z & & & & \\ & && & \!\!1& & & &\\ \hline & & & & & \!\!1\!\! & & &\\\hline & & & & & & 1 & \!\!s & \!\!t\\ & & & & & & & \!\!1 &\!\!u\\ & & & & & & & & \!\!1\\ \end{array} } \right) \ \right| \ \begin{array}{l} a>0, \\ x>0, \ y>0, \ z>0, \ xz-y>0, \\ s>0, \ t>0, \ u>0, \ su-t>0 \end{array} \right\} \end{split}$$ by the previous example. We note that each block of the matrix appearing in this set belongs to $U^+_{\ge0}$ for special linear groups of smaller ranks.* Recall that we have the Bruhat decomposition of ${SL_{n}}$: $$\begin{aligned} {SL_{n}} = \bigsqcup_{w\in \mathfrak{S}_n} B^-\dot{w}B^- ,\end{aligned}$$ where $\dot{w}\in{SL_{n}}$ is the representative of $w$ defined in [\[eq: def of representative\]](#eq: def of representative){reference-type="eqref" reference="eq: def of representative"}. Since $U^+_{\ge0}\subseteq {SL_{n}}$, the intersections $U^+_{\ge0}\cap B^-\dot{w}B^-$ for $w\in W$ decompose $U^+_{\ge0}$. The following claim means that this coincides with the decomposition in [\[eq: decomp of U\^-\]](#eq: decomp of U^-){reference-type="eqref" reference="eq: decomp of U^-"}. **Lemma 21**. *For $w\in W$, we have $U^+(w) = U^+_{\ge0}\cap B^-\dot{w}B^-$.* *Proof.* By [@lu94 Proposition 2.7 (d)], we have $U^+(w) \subseteq U^+_{\ge0}\cap B^-\dot{w}B^-$. In addition, both of these decompose $U^+_{\ge0}$: $$\begin{aligned} U^+_{\ge0} = \bigsqcup_{w\in \mathfrak{S}_n} U^+(w) \quad \text{and} \quad U^+_{\ge0} = \bigsqcup_{w\in \mathfrak{S}_n} (U^+_{\ge0}\cap B^-\dot{w}B^-).\end{aligned}$$ Thus, the claim follows. ◻ Similarly, $U^-_{\ge0}$ and $U^-(w)\subseteq U^-_{\ge0}$ for $w\in \mathfrak{S}_{n}$ are defined in the same manner, where we replace $x_i(a)$ in the definitions of $U^+_{\ge0}$ and $U^+(w)$ with $y_i(a)$. In particular, we have $$\begin{aligned} \label{eq: minus decomp} U^-_{\ge0} = \bigsqcup_{w\in \mathfrak{S}_n} U^-(w).\end{aligned}$$ Similar to Lemma [Lemma 21](#lem: plus w = bruhat){reference-type="ref" reference="lem: plus w = bruhat"}, we have $$\begin{aligned} \label{eq: minus w = bruhat} U^-(w) = U^-_{\ge0}\cap B^+\dot{w}B^+ \end{aligned}$$ for $w\in \mathfrak{S}_n$. In fact, we can obtain this equality by taking the transpose of the equality of Lemma [Lemma 21](#lem: plus w = bruhat){reference-type="ref" reference="lem: plus w = bruhat"} for $w^{-1}$ ([@lu94 Section 1.2]). Also, $T_{>0}$ is defined as follows. For $i\in[n-1]$, let $\alpha^{\vee}_i \colon \mathbb C^{\times}\rightarrow T\ (\subset {SL_{n}})$ be the homomorphism which sends $t\in \mathbb C^{\times}$ to the element of $T$ which has $t$ and $t^{-1}$ on the $i$-th and $(i+1)$-st components, respectively, and $1$'s on the rest of the components: $$\begin{aligned} \alpha^{\vee}_i(t) = \text{diag}(1,\ldots,1,t,t^{-1},1,\ldots,1) \in T .\end{aligned}$$ Then, $T_{>0}$ is defined to be the submonoid of $T(\subseteq {SL_{n}})$ generated by the elements $\alpha^{\vee}_i(a)$ for $a\in\mathbb{R}_{>0}$ and $i\in [n-1]$. It is straightforward to verify that $$\begin{aligned} T_{>0} = \{\text{diag}(t_1,\ldots,t_n)\in T \mid t_i>0 \ \text{for $1\le i\le n$}\}.\end{aligned}$$ According to [@lu94], the totally nonnegative part of ${SL_{n}}$ is the submonoid of ${SL_{n}}$ generated by the elements $x_i(a)$, $y_i(a)$ for $i\in [n-1]$, $a\in\mathbb R_{\ge0}$ and by the elements $\alpha^{\vee}_i(t)$ for $i\in [n-1]$, $t\in\mathbb R_{>0}$.
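As an illustration (and not as part of the argument), the following SymPy sketch builds a sample element of $({SL_{3}})_{\ge0}$ as a word in the generators above, with arbitrarily chosen positive parameters, and checks that all of its minors are nonnegative; this is consistent with the minor-theoretic description recalled next.

```python
# A minimal sketch, assuming SymPy is available; the generator word and its
# parameters are arbitrary choices made only for this illustration.
from itertools import combinations
from sympy import Rational, eye

n = 3

def x_gen(i, a):        # x_i(a): upper unitriangular with a at position (i, i+1)
    m = eye(n); m[i - 1, i] = a; return m

def y_gen(i, a):        # y_i(a): lower unitriangular with a at position (i+1, i)
    m = eye(n); m[i, i - 1] = a; return m

def alpha_vee(i, t):    # alpha_i^vee(t) = diag(..., t, t^{-1}, ...)
    t = Rational(t); m = eye(n); m[i - 1, i - 1] = t; m[i, i] = 1 / t; return m

# an arbitrary word in the generators with positive parameters
g = y_gen(1, 2) * y_gen(2, Rational(1, 3)) * alpha_vee(1, 2) * x_gen(1, 1) * x_gen(2, 5)

# enumerate all square submatrices and check that every minor is >= 0
minors = [g.extract(list(rows), list(cols)).det()
          for k in range(1, n + 1)
          for rows in combinations(range(n), k)
          for cols in combinations(range(n), k)]
assert g.det() == 1 and all(m >= 0 for m in minors)
print("checked", len(minors), "minors of g: all nonnegative")
```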
It can be written as $$\begin{aligned} ({SL_{n}})_{\ge0} = U^-_{\ge0}T_{>0}U^+_{\ge0} = U^+_{\ge0}T_{>0}U^-_{\ge0}.\end{aligned}$$ By A. Whitney's theorem ([@wh52]), we can also express $({SL_{n}})_{\ge0}$ as follows (e.g. see [@lu08]): $$\begin{aligned} \label{eq: type A nonnegativity} ({SL_{n}})_{\ge0} = \{ g\in {SL_{n}} \mid \text{all minors of $g$ are $\ge0$}\}.\end{aligned}$$ The totally positive part of ${SL_{n}}$ is defined to be $$\begin{aligned} ({SL_{n}})_{>0} \coloneqq U^-_{>0}T_{>0}U^+_{>0} = U^+_{>0}T_{>0}U^-_{>0},\end{aligned}$$ and it follows that $({SL_{n}})_{\ge0}=\overline{({SL_{n}})_{>0}}$ ([@lu94 Theorem 4.3 and Remark 4.4]). An equality similar to [\[eq: type A nonnegativity\]](#eq: type A nonnegativity){reference-type="eqref" reference="eq: type A nonnegativity"} holds for $({SL_{n}})_{>0}$ as well by replacing the symbols $\ge$ to $>$ (see [@lu08; @wh52]). ## Totally nonnegative part of the flag variety According to [@lu94], the totally positive part of ${SL_{n}}/B^-$ is defined to be $$\begin{aligned} ({SL_{n}}/B^-)_{> 0} \coloneqq \{gB^- \in {SL_{n}}/B^- \mid g\in ({SL_{n}})_{>0}\} =\{uB^- \in {SL_{n}}/B^- \mid u\in U^+_{>0}\}, \end{aligned}$$ and the totally nonnegative part of ${SL_{n}}/B^-$ is defined to be $$\begin{aligned} ({SL_{n}}/B^-)_{\ge 0}\coloneqq \overline{({SL_{n}}/B^-)_{> 0}}, \end{aligned}$$ where the closure is taken in ${SL_{n}}/B^-$ with respect to the classical topology. For example, if $n=2$, then $$({SL_{2}}/B^-)_{> 0}= \{uB^- \in {SL_{2}}/B^- \mid u\in U^+_{>0}\} = \left\{ \left. \begin{pmatrix} 1 & t \\ 0 & 1 \\ \end{pmatrix}B^- \ \right| \ t>0\right\}.$$ One can verify that its closure $({SL_{2}}/B^-)_{\ge 0}= \overline{({SL_{2}}/B^-)_{> 0}}$ is obtained by adding two points $B^-$ and $\dot{w}_0B^-$, and hence it is homeomorphic to an interval $[0,1]$. Recall that we have $U^+(w) = U^+_{\ge0}\cap B^-\dot{w}B^-$ for $w\in \mathfrak{S}_n$ from Lemma [Lemma 21](#lem: plus w = bruhat){reference-type="ref" reference="lem: plus w = bruhat"}. One can verify that this equality implies that $$\label{x lies in U-nonnegatiave} U^+(w)B^-/B^- = ({SL_{n}}/B^-)_{\ge0}\cap (B^+B^-/B^-)\cap (B^-\dot{w}B^-/B^-)$$ for $w\in \mathfrak{S}_n$ (e.g. [@ri99 Sect. 1.3]) by a straightforward argument. Also, in the proof of Corollary 10.5 of [@ri06], Rietsch proved that $$\label{lemm: exptf acts on xwoB-} U^-_{>0}\cdot({SL_{n}}/B^-)_{\ge 0}\subseteq ({SL_{n}}/B^-)_{\ge 0}\cap (B^+B^-/B^-).$$ We use these properties of $({SL_{n}}/B^-)_{\ge 0}$ to study the totally nonnegative part of the Peterson variety in the next subsection. For the Borel subgroup $B^+\subset {SL_{n}}$ consisting of upper triangular matrices, $({SL_{n}}/B^+)_{>0}$ and $({SL_{n}}/B^+)_{\ge0}$ are defined similarly by changing the role of $U^+_{>0}$ to $U^-_{>0}$. ## Totally nonnegative part of the Peterson variety {#subsec: A decompostion of Y ge 0} We now study the totally nonnegative part of the Peterson variety introduced by Rietsch in [@ri03; @ri06]. **Definition 22** ([@ri03; @ri06]). *The totally nonnegative part of the Peterson variety $Y$ in ${SL_{n}}/B^-$ is defined by $Y_{\ge 0}\coloneqq Y\cap ({SL_{n}}/B^-)_{\ge 0}$.* Recall that we have the following decomposition of $Y$ by the Richardson strata in [\[decomposition of Y as union of ZKJ\]](#decomposition of Y as union of ZKJ){reference-type="eqref" reference="decomposition of Y as union of ZKJ"}: $$Y=\bigsqcup_{K\subseteq J\subseteq [n-1]}Y_{K,J} .$$ **Definition 23** ([@ri06]). 
*For $K\subseteq J\subseteq [n-1]$, we set $$\label{eq: def of YKJ>0} Y_{K,J;>0}\coloneqq Y_{K,J}\cap({SL_{n}}/B^-)_{\ge 0}.$$* By construction, we obtain a stratification of $Y_{\ge 0}$: $$\label{decomposition of nonnegative part of Peterson} Y_{\ge 0}=\bigsqcup_{K\subseteq J\subseteq [n-1]}Y_{K,J;>0}.$$ The goal of this subsection is to give a concrete description of $Y_{K,J;>0}$ (see Proposition [Proposition 27](#prop: description for R KJ>0){reference-type="ref" reference="prop: description for R KJ>0"} below). We begin by preparing some lemmas. Let $J\subseteq [n-1]$, and take the decomposition $J = J_1 \sqcup J_2 \sqcup \cdots \sqcup J_{m}$ into the connected components. Similar to the notations prepared for the proof of Corollary [Corollary 20](#coro: a description for ZKJ){reference-type="ref" reference="coro: a description for ZKJ"}, we consider the special linear groups ${SL_{n_{k}}}$ with $n_{k}\coloneqq |{J_{k}^*}|$ and the subgroups $U^-_{n_{k}}\subseteq B^-_{n_{k}} \subseteq {SL_{n_{k}}}$, where $U^-_{n_{k}}$ is the subgroup consisting of all the lower triangular matrices with $1$'s on the diagonal $(1\le k\le m)$. We also consider $U^+_{n_{k}}\subseteq B^+_{n_{k}}\subseteq {SL_{n_{k}}}$ by exchanging the roles of "lower triangular" and "upper triangular". For a $J$-block matrix $g\in {SL_{n}}$ (see Section [5.3](#subsec: on elements in Y ge 0 cap XJ){reference-type="ref" reference="subsec: on elements in Y ge 0 cap XJ"} for the definition of $J$-block matrices), we denote the ${J_{k}^*}\times {J_{k}^*}\ (\subseteq[n]\times [n])$ diagonal block of $g$ by $g({J_{k}^*})$ for $1\le k\le m$. We keep these notations in the rest of this section. **Lemma 24**. *Let $J\subseteq [n-1]$, and suppose that $x\in U^-$ is a $J$-block diagonal matrix. Then, we have $$x\in U^-_{\ge 0} \quad \text{if and only if} \quad x({J_{k}^*})\in (U_{n_{k}}^-)_{\ge 0}\ (1\le k\le m).$$* *Proof.* Suppose that $x\in U^-_{\ge0}$. The matrix $x$ has $1$'s on the diagonal, and it is a $J$-block diagonal matrix by the assumption. Thus, each diagonal block $x({J_{k}^*})$ belongs to ${SL_{n_{k}}}$. Hence, the Bruhat decomposition of ${SL_{n_{k}}}$ implies that there exists a permutation $w_{k}\in\mathfrak{S}_{J_{k}}$ such that $x({J_{k}^*})\in B^+_{n_{k}}\dot{w}_{k}B^+_{n_{k}}$. This claim holds for each $1\le k\le m$. Since $x$ is a $J$-block diagonal matrix having $1$'s on the diagonal, it follows that $$x\in B^+\dot{w}B^+ ,$$ where we set $w\coloneqq w_1\times\cdots\times w_m\in\mathfrak{S}_{J}=\mathfrak{S}_{J_1}\times \mathfrak{S}_{J_2}\times \cdots\times\mathfrak{S}_{J_m}\subseteq \mathfrak{S}_n$. Thus, we have $$\label{eq: x in UwJ} x \in U^-_{\ge0}\cap B^+\dot{w}B^+ = U^-(w)$$ by [\[eq: minus w = bruhat\]](#eq: minus w = bruhat){reference-type="eqref" reference="eq: minus w = bruhat"}. Since $w$ is an element of the Young subgroup $\mathfrak{S}_{J}$, a reduced expression of $w$ can be obtained by listing reduced expressions for $w_1,\ldots,w_m$ in order. Hence, by the definition of $U^-(w)$, elements of $U^-(w)$ are $J$-block diagonal matrices having elements of $(U_{n_{k}}^-)_{\ge 0}$ on ${J_{k}^*}\times {J_{k}^*}$ diagonal blocks for $1\le k\le m$ and having $1$'s on the diagonal blocks of size $1$. In particular, [\[eq: x in UwJ\]](#eq: x in UwJ){reference-type="eqref" reference="eq: x in UwJ"} implies that each block $x({J_{k}^*})$ belongs to $(U_{n_{k}}^-)_{\ge 0}$. Conversely, suppose that a $J$-block diagonal matrix $x\in U^-$ satisfies $x({J_{k}^*})\in (U_{n_{k}}^-)_{\ge0}$ for $1\le k\le m$.
Then we have $x({J_{k}^*})\in U_{n_{k}}^-(w_k)$ for some $w_k\in \mathfrak{S}_{J_{k}}$ by [\[eq: minus decomp\]](#eq: minus decomp){reference-type="eqref" reference="eq: minus decomp"} for $(U_{n_{k}}^-)_{\ge0}$. Let $w_k=s_{i_1}s_{i_2}\cdots s_{i_p}$ be a reduced expression of $w_k$ for some $i_1, i_2,\ldots,i_p\in J_{k}$. Then, we can write $$\label{eq: reduced word for xJk} x({J_{k}^*})=y_{i_1}(a_1)y_{i_2}(a_2)\cdots y_{i_p}(a_p) \in U_{n_{k}}^-(w_k)$$ for some $(a_1,a_2,\cdots, a_p)\in \mathbb{R}^p_{>0}$ by the definition of $U_{n_{k}}^-(w_k)$. To study the entire matrix $x\in U^-$, we set $$w\coloneqq w_1\cdots w_m\in \mathfrak{S}_J=\mathfrak{S}_{J_1}\times \mathfrak{S}_{J_2}\times \cdots\times\mathfrak{S}_{J_m}\subseteq \mathfrak{S}_n .$$ By listing the above (reduced) expressions of $w_k$ for $1\le k\le m$ in order, we obtain a reduced expression of $w$. Now, [\[eq: reduced word for xJk\]](#eq: reduced word for xJk){reference-type="eqref" reference="eq: reduced word for xJk"} for $1\le k\le m$ and the definition of $U^-(w)$ imply that $x \in U^-(w)$ since $x\in U^-$ is $J$-block diagonal. Since $U^-(w)\subseteq U_{\ge 0}^-$ by definition, we obtain that $x\in U_{\ge 0}^-$. ◻ **Lemma 25**. *For $x\in U^-$, we have $$x\dot{w}_0B^-\in ({SL_{n}}/B^-)_{>0} \quad \text{if and only if} \quad x\in U^-_{>0}.$$* *Proof.* By [@lu94 Theorem 8.7], the isomorphism $$\varphi \colon {SL_{n}}/B^+\to {SL_{n}}/B^- \quad ; \quad gB^+\mapsto g\dot{w}_0B^-$$ restricts to an isomorphism between $({SL_{n}}/B^+)_{>0}$ and $({SL_{n}}/B^-)_{>0}$. We use this observation to prove the claim of this lemma. Suppose that $x\in U^-_{>0}$. Then we have $xB^+\in ({SL_{n}}/B^+)_{>0}$ by the definition of $({SL_{n}}/B^+)_{>0}$, which means that $$x\dot{w}_0B^- =\varphi(xB^+) \in ({SL_{n}}/B^-)_{>0}$$ by the above observation. Conversely, suppose that $x\dot{w}_0B^-\in ({SL_{n}}/B^-)_{>0}$. Then, $$xB^+ = \varphi^{-1}(x\dot{w}_0B^-)\in ({SL_{n}}/B^+)_{>0}$$ by the above observation. By the definition of $({SL_{n}}/B^+)_{>0}$, this means that there exists $x'\in U^-_{>0}$ such that $xB^+=x'B^+$. Since $x,x'\in U^-$, we conclude that $x=x'\in U^-_{>0}$. ◻ **Proposition 26**. *Let $J\subseteq [n-1]$, and suppose that $x\in U^-$ is a $J$-block diagonal matrix. Then, we have $$x\dot{w}_JB^-\in ({SL_{n}}/B^-)_{\ge 0} \quad \text{if and only if} \quad x\in U^-_{\ge0}.$$* *Proof.* Suppose that $x\dot{w}_JB^-\in ({SL_{n}}/B^-)_{\ge 0}$. Let $t>0$ and set $y(t)\coloneqq\exp(tf)$, where $f$ is the regular nilpotent matrix defined in [\[eq: def of f\]](#eq: def of f){reference-type="eqref" reference="eq: def of f"}. Then we have $y(t)\in U^-_{>0}$ by [@lu94 Proposition 5.9 (b)]. Thus, [\[lemm: exptf acts on xwoB-\]](#lemm: exptf acts on xwoB-){reference-type="eqref" reference="lemm: exptf acts on xwoB-"} implies that $$y(t)\cdot x\dot{w}_JB^-\in ({SL_{n}}/B^-)_{\ge0}\cap (B^+B^-/B^-).$$ Note that we also have $$y(t)\cdot x\dot{w}_JB^-\in B^-\dot{w}_JB^-/B^-$$ since $y(t)$ and $x$ belong to $B^-$. Therefore, we obtain that $$y(t)\cdot x\dot{w}_JB^-\in ({SL_{n}}/B^-)_{\ge0}\cap (B^+B^-/B^-)\cap (B^-\dot{w}_JB^-/B^-) =U^+(w_J)B^-/B^- ,$$ where the last equality follows from [\[x lies in U-nonnegatiave\]](#x lies in U-nonnegatiave){reference-type="eqref" reference="x lies in U-nonnegatiave"}.
This means that there exist $u^+\in U^+(w_J)$ and $b^-\in B^-$ such that $$y(t)\cdot x\dot{w}_J=u^+b^-.$$ Notice that all the matrices in this equality are $J$-lower triangular (see Example [Example 13](#ex: U plus wJ){reference-type="ref" reference="ex: U plus wJ"} for the form of $u^+$). Thus, by focusing on each ${J_{k}^*}\times {J_{k}^*}$ diagonal block, we obtain $$\big(y(t)x\big)({J_{k}^*})\cdot \dot{w}_J({J_{k}^*})=u^+({J_{k}^*})\cdot b^-({J_{k}^*})$$ for $1\le k\le m$. Here, we have $u^+({J_{k}^*})\in (U_{n_{k}}^+)_{>0}$ since $u^+\in U^+(w_J)$ (see Example [Example 13](#ex: U plus wJ){reference-type="ref" reference="ex: U plus wJ"}). Hence, we obtain that $$\big(y(t)x\big)({J_{k}^*})\cdot \dot{w}_J({J_{k}^*}) B_{n_{k}}^- \in ({SL_{n_{k}}}/B_{n_{k}}^-)_{>0},$$ where we note that $\dot{w}_J({J_{k}^*})$ is the longest permutation matrix in ${SL_{n_{k}}}$. Applying Lemma [Lemma 25](#lem: yw0=u+b){reference-type="ref" reference="lem: yw0=u+b"} to this equality, we obtain that $$\label{eq: ytx nonnegative} \big(y(t)x\big)({J_{k}^*})\in (U_{n_{k}}^-)_{>0}.$$ Note that we have $\lim_{t\rightarrow+0}y(t)=1$. Since $(U_{n_{k}}^-)_{\ge0}$ is a closed subset of $U_{n_{k}}^-$ containing $(U_{n_{k}}^-)_{>0}$ ([@lu94 Proposition 4.2]), it follows that $x({J_{k}^*})\in (U_{n_{k}}^-)_{\ge0}$ by taking $t\rightarrow+0$ in [\[eq: ytx nonnegative\]](#eq: ytx nonnegative){reference-type="eqref" reference="eq: ytx nonnegative"}. Therefore, we obtain $x\in U^-_{\ge0}$ by Lemma [Lemma 24](#lemm: if and only if lemma which is needed){reference-type="ref" reference="lemm: if and only if lemma which is needed"}. Conversely, suppose that $x\in U^-_{\ge 0}$. Let $t>0$, and set $y(t)=\exp(tf)\in U^-_{>0}$ as above. Since $x\in U^-_{\ge0}$, we have $x({J_{k}^*})\in (U_{n_{k}}^-)_{\ge 0}$ by Lemma [Lemma 24](#lemm: if and only if lemma which is needed){reference-type="ref" reference="lemm: if and only if lemma which is needed"}. Noticing that $y(t)({J_{k}^*})=\exp(tf({J_{k}^*}))\in (U_{n_{k}}^-)_{>0}$, we have $$\big(y(t)({J_{k}^*})\big)\cdot x({J_{k}^*})\in (U_{n_{k}}^-)_{>0}$$ by [@lu94 Section 2.12]. By Lemma [Lemma 25](#lem: yw0=u+b){reference-type="ref" reference="lem: yw0=u+b"}, we obtain that $$\big(y(t)({J_{k}^*})\big)\cdot x({J_{k}^*})\cdot \dot{w}_J({J_{k}^*})B_{n_{k}}^- \in ({SL_{n_{k}}}/B_{n_{k}}^-)_{>0},$$ where $\dot{w}_J({J_{k}^*})$ is the longest permutation matrix in ${SL_{n_{k}}}$. Namely, there exist $b_{k}^-\in B_{n_{k}}^-$ and $u_{k}^+\in (U_{n_{k}}^+)_{>0}$ such that $$\big(y(t)({J_{k}^*})\big)\cdot x({J_{k}^*})\cdot\dot{w}_J({J_{k}^*})=u_{k}^+b_{k}^-.$$ Let $u^+\in U^+$ be the $J$-block diagonal matrix having $u_{k}^+$ on its ${J_{k}^*}\times {J_{k}^*}$ diagonal block for $1\le k\le m$ and $1$'s on the diagonal blocks of size $1$. Similarly, define $b^-\in B^-$ and $\widetilde{y}(t)\in U^-$ in the same manner by using $b_{k}^-$ and $y(t)({J_{k}^*})$, respectively. Then, we have $$\widetilde{y}(t)\cdot x\cdot \dot{w}_J =u^+b^-.$$ Since $u^+\in U^+$ satisfies $u^+({J_{k}^*})=u_{k}^+\in (U_{n_{k}}^+)_{\ge0}$ for all $1\le k\le m$, one can verify that $$u^+\in U^+_{\ge 0}\subseteq ({SL_{n}})_{\ge0}$$ by an argument which is completely parallel to the proof of Lemma [Lemma 24](#lemm: if and only if lemma which is needed){reference-type="ref" reference="lemm: if and only if lemma which is needed"}. Since we have $({SL_{n}})_{\ge0}=\overline{({SL_{n}})_{>0}}$ ([@lu94 Remark 4.4]), the element $u^+$ can be written as the limit of a sequence in $({SL_{n}})_{>0}$.
Thus, we have $$\widetilde{y}(t)\cdot (x\dot{w}_J)B^-=u^+B^-\in({SL_{n}}/B^-)_{\ge 0}.$$ Therefore, we obtain that $$x\dot{w}_JB^-=\lim\limits_{t\to +0}\big(\widetilde{y}(t)\cdot (x\dot{w}_J)B^-\big)\in({SL_{n}}/B^-)_{\ge 0}$$ since $({SL_{n}}/B^-)_{\ge 0}$ is a closed subset of ${SL_{n}}/B^-$. ◻ We now give a concrete description of $Y_{K,J;>0}$ defined in [\[eq: def of YKJ\>0\]](#eq: def of YKJ>0){reference-type="eqref" reference="eq: def of YKJ>0"}. **Proposition 27**. *Let $K\subseteq J\subseteq [n-1].$ Then, we have $$Y_{K,J;>0}=\left\{x\dot{w}_JB^- \ \left| \ \begin{matrix} \text{ \rm $x\in U^-_{\ge0}$ is $J$-Toeplitz and} ~\begin{cases}\Delta_{i}(x\dot{w}_J)=0\quad\text{if}\quad i\in K\\ \Delta_{i}(x\dot{w}_J)> 0\quad\text{if}\quad i\notin K \end{cases} \end{matrix} \right. \right\}.$$* *Proof.* Denoting the right hand side of the desired equality by $Y'_{K,J;>0}$, we show that $Y_{K,J;>0}=Y'_{K,J;>0}$. We first show that $$\begin{aligned} \label{eq: first goal RKJ nonnegative} Y_{K,J;>0} \supseteq Y'_{K,J;>0}.\end{aligned}$$ By definition, an arbitrary element of $Y'_{K,J;>0}$ can be written as $x\dot{w}_JB^-$ for some $J$-Toeplitz matrix $x\in U^-_{\ge0}$ such that $$~\begin{cases}\Delta_{i}(x\dot{w}_J)=0\quad\text{if}\quad i\in K\\ \Delta_{i}(x\dot{w}_J)> 0\quad\text{if}\quad i\notin K . \end{cases}$$ By Corollary [Corollary 20](#coro: a description for ZKJ){reference-type="ref" reference="coro: a description for ZKJ"}, this means that $x\dot{w}_JB^-\in Y_{K,J}$. Since $x\in U^-_{\ge0}$ is $J$-Toeplitz, Proposition [Proposition 26](#prop: x in U-ge 0){reference-type="ref" reference="prop: x in U-ge 0"} implies that $$x\dot{w}_JB^-\in ({SL_{n}}/B^-)_{\ge 0}.$$ Therefore, we conclude that $x\dot{w}_JB^-\in Y_{K,J}\cap ({SL_{n}}/B^-)_{\ge 0}=Y_{K,J;>0}$ as claimed in [\[eq: first goal RKJ nonnegative\]](#eq: first goal RKJ nonnegative){reference-type="eqref" reference="eq: first goal RKJ nonnegative"}. To complete the proof, we prove that the opposite inclusion holds in what follows. Since $Y_{K,J;>0}=Y_{K,J}\cap ({SL_{n}}/B^-)_{\ge0}$, Corollary [Corollary 20](#coro: a description for ZKJ){reference-type="ref" reference="coro: a description for ZKJ"} implies that an arbitrary element of $Y_{K,J;>0}$ can be written as $x\dot{w}_JB^-$ for some $J$-Toeplitz matrix $x\in U^-$ such that $$\begin{aligned} \text{$\Delta_{i}(x\dot{w}_J)=0$ if and only if $i\in K$.}\end{aligned}$$ Since $x\dot{w}_JB^-\in Y_{K,J;>0}=Y_{K,J}\cap ({SL_{n}}/B^-)_{\ge0}$ and $x\in U^-$ is $J$-Toeplitz, it follows that $x\in U^-_{\ge0}$ by Proposition [Proposition 26](#prop: x in U-ge 0){reference-type="ref" reference="prop: x in U-ge 0"}. Thus, to see that $x\dot{w}_JB^-\in Y'_{K,J;>0}$, it suffices to show that $\Delta_{i}(x\dot{w}_J)\ge0$ for all $i\in [n-1]$. For this purpose, let us make an observation. For a matrix $g\in {SL_{n}}$, the function $\Delta_i(g)$ was defined to be the $(n-i)\times (n-i)$ lower right minor of $g$ (in Section [5.4](#subsec: description of RKJ){reference-type="ref" reference="subsec: description of RKJ"}). From this, one can verify that $\Delta_i(g\dot{w}_0)$ is the $(n-i)\times (n-i)$ lower *left* minor of $g$. Based on this observation, we now prove that $\Delta_{i}(x\dot{w}_J)\ge 0$ for all $i\in [n-1]$. We take cases. First, suppose that $i\in J$. In this case, we have $i\in J_{k}$ for some $1\le k\le m$. Set $i'\coloneqq i+1-\min J_{k}$ which encodes the position of $i$ in $J_{k}$ (as in [\[eq: def of j\'\]](#eq: def of j'){reference-type="eqref" reference="eq: def of j'"}). 
This notation allows us to express the minor $\Delta_i(x\dot{w}_J)$ in terms of minors of diagonal blocks $(x\dot{w}_J)({J_{k}^*})$: $$\begin{aligned} \Delta_i(x\dot{w}_J)&=\Delta_{i'}\big((x\dot{w}_J)({J_{k}^*})\big) =\Delta_{i'}\big(x({J_{k}^*})\cdot\dot{w}_J({J_{k}^*})\big),\end{aligned}$$ where we note that $\dot{w}_J({J_{k}^*})$ is the longest permutation matrix in ${SL_{n_{k}}}$. By applying the above observation to ${SL_{n_{k}}}$, it follows that the rightmost expression in this equality is a lower left minor of $x({J_{k}^*})$, which is a minor of $x$. Since we saw $x\in U^-_{\ge 0}\subseteq ({SL_{n}})_{\ge0}$ above, Whitney's description [\[eq: type A nonnegativity\]](#eq: type A nonnegativity){reference-type="eqref" reference="eq: type A nonnegativity"} for $({SL_{n}})_{\ge0}$ implies that $\Delta_i(x\dot{w}_J)$ (which is a minor of $x$ as we just saw) is nonnegative. If $i\notin J$, then we have $\Delta_i(x\dot{w}_J)=1\ge0$ since $x\dot{w}_J$ is a $J$-block diagonal matrix having $1$'s on the diagonal blocks of size $1$. Hence, the claim follows. ◻ **Example 14**. *Let $n=5$. We computed $Y_{\{1\},\{1,2,4\}}$ in Example [Example 11](#eg: RJK){reference-type="ref" reference="eg: RJK"}. By Proposition [Proposition 27](#prop: description for R KJ>0){reference-type="ref" reference="prop: description for R KJ>0"} with Lemma [Lemma 24](#lemm: if and only if lemma which is needed){reference-type="ref" reference="lemm: if and only if lemma which is needed"} in mind, an arbitrary element of $Y_{\{1\},\{1,2,4\};>0}$ can be written as $$\left( \begin{array}{@{\,}ccc|ccc@{\,}} 0&0&1&0&0\\ 0&-1&a&0&0\\ 1&-a&b&0&0\\ \hline 0&0&0&0&1\\ 0&0&0&-1&x \end{array} \right) B^-$$ for some $a,b,x\in\mathbb R_{\ge0}$ such that $\Delta_1=a^2-b= 0, \ \Delta_2=b>0, \ (\Delta_3=1), \ \Delta_4=x>0$.* # A morphism from $Y$ to $X(\Sigma)$ In [@ab-ze23], the authors constructed a particular morphism from the Peterson variety $Y$ to the toric orbifold $X(\Sigma)$. In this section, we review its construction since we use it to prove Rietsch's conjecture on $Y_{\ge0}$ in the next section. Details can be found in [@ab-ze23 Sect. 6]. We keep the notations from Section [2](#sec: notations){reference-type="ref" reference="sec: notations"} where we defined the simple roots $\alpha_i \colon T\rightarrow \mathbb C^{\times}$ and the fundamental weights $\varpi_i \colon T\rightarrow \mathbb C^{\times}$ for $i\in[n-1]$. By composing with the canonical projection $B^-\twoheadrightarrow T$, we may regard them as homomorphisms from $B^-$, and we denote them by the same symbols $\alpha_i$ and $\varpi_i$ by abuse of notation. ## Two functions For $i\in[n-1]$, we considered in Section [5.4](#subsec: description of RKJ){reference-type="ref" reference="subsec: description of RKJ"} the function $$\Delta_{i} \colon {SL_{n}}\to\mathbb C$$ which takes the lower right $(n-i)\times (n-i)$ minor of a given matrix in ${SL_{n}}$ (see the examples in [\[eq:4.3 second\]](#eq:4.3 second){reference-type="eqref" reference="eq:4.3 second"}). For $u\in U^+$ and $b=(b_{i,j})\in B^-$, we have $$\label{eq:4.3} \Delta_i(ub)=\Delta_i(b)=b_{i+1,i+1}b_{i+2,i+2}\cdots b_{n,n}=(-\varpi_{i})(b),$$ where we write $(-\varpi_i)(b)\coloneqq \varpi_i(b)^{-1}$ for $b\in B^-$. Since $U^+B^-=B^+B^-\subseteq {SL_{n}}$ is a Zariski open subset, this means that $\Delta_i$ is the unique regular function on ${SL_{n}}$ satisfying the above equality. In particular, $\Delta_i$ is an extension of the character $-\varpi_{i}$ on $B^-$.
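As a quick symbolic sanity check (not part of the argument), the following sketch verifies the identity $\Delta_i(ub)=\Delta_i(b)=b_{i+1,i+1}\cdots b_{n,n}$ above for $n=3$ with generic entries; SymPy is assumed to be available, and no determinant constraint is imposed on $b$ since the identity only involves lower right minors.

```python
# A minimal sketch, assuming SymPy; u is a generic upper unitriangular matrix
# and b a generic lower triangular matrix, so the check is purely symbolic.
from sympy import Matrix, Mul, expand, symbols

n = 3
u12, u13, u23 = symbols('u12 u13 u23')
u = Matrix([[1, u12, u13], [0, 1, u23], [0, 0, 1]])                       # u in U^+
b = Matrix(n, n, lambda i, j: symbols(f'b{i+1}{j+1}') if i >= j else 0)   # b lower triangular

def Delta(i, g):
    # lower right (n-i) x (n-i) minor of g
    return g[i:, i:].det()

for i in range(1, n):
    diag_prod = Mul(*[b[j, j] for j in range(i, n)])                      # b_{i+1,i+1} ... b_{n,n}
    assert expand(Delta(i, u * b) - Delta(i, b)) == 0
    assert expand(Delta(i, b) - diag_prod) == 0
print("Delta_i(ub) = Delta_i(b) = b_{i+1,i+1} ... b_{n,n} verified for n = 3")
```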
Moreover, the function $\Delta_{i}\colon {SL_{n}}\to\mathbb C$ is $B^-$-equivariant in the following sense: $$\Delta_{i}(gb) =(-\varpi_{i})(b)\Delta_{i}(g) \qquad (g\in {SL_{n}}, \ b\in B^-).$$ Let $P_Y$ be the restriction of the principal $B^-$-bundle ${SL_{n}}\to {SL_{n}}/B^-$ to the Peterson variety $Y$: $$P_Y \coloneqq \{g\in {SL_{n}} \mid gB^-\in Y\}=\{g\in {SL_{n}} \mid (\text{\rm Ad}_{g^{-1}} f)_{i,j}=0 \ (j>i+1)\},$$ where $f$ is the regular nilpotent matrix given in [\[eq: def of f\]](#eq: def of f){reference-type="eqref" reference="eq: def of f"}, and $(\text{\rm Ad}_{g^{-1}}f)_{i,j}$ denotes the $(i,j)$-th component of the matrix $\text{\rm Ad}_{g^{-1}}f$. By definition, $P_Y\subseteq {SL_{n}}$ is preserved by the multiplication of elements of $B^-$ from the right, and hence $P_Y$ inherits a right $B^-$-action. With this action, $P_Y$ is a principal $B^-$-bundle over $Y$ in the sense that the projection map $P_Y\rightarrow Y$ is $B^-$-equivariantly locally trivial. Now consider a function $$q_i \colon P_Y\to\mathbb C;\quad g\mapsto -(\text{\rm Ad}_{g^{-1}}f)_{i,i+1}.$$ This function is $B^-$-equivariant in the following sense: for $g\in P_Y$ and $b\in B^-$, we have $$\label{eq:4.5} q_i(gb) = -\big(\text{\rm Ad}_{b^{-1}}(\text{\rm Ad}_{g^{-1}}f)\big)_{i,i+1} = (-\alpha_i)(b)q_i(g),$$ where the last equality follows since $gB^-\in Y$ implies that $(\text{\rm Ad}_{g^{-1}} f)_{i,j}=0$ for $j>i+1$. Here, we also write $(-\alpha_i)(b)\coloneqq \alpha_i(b)^{-1}$ for $b\in B^-$. These two functions $\Delta_i$ and $q_i$ give rise to sections of certain line bundles on $Y$ ([@ab-ze23 Section 5]), and the following proposition means that their zero loci do not intersect for each $i\in[n-1]$. **Proposition 28** ([@ab-ze23 Proposition 5.6]). *For $i\in [n-1]$, we have $$\{gB^-\in Y \mid \Delta_{i}(g)=0 \ \text{and}\ q_i(g)=0\}=\emptyset.$$* *Proof.* Recall that we defined the Peterson variety $Y$ in ${SL_{n}}/B^-$ (Definition [Definition 17](#definition of complex Peterson variety){reference-type="ref" reference="definition of complex Peterson variety"}). The roots $-\alpha_1,\ldots,-\alpha_{n-1}$ can be thought of as the set of simple roots for $B^-$, and then the matrix $f$ (which we used to define $Y$) is a sum of simple root vectors for $B^-$. Also, $-\varpi_1,\ldots,-\varpi_{n-1}$ are the fundamental weights of $B^-$ with respect to $-\alpha_1,\ldots,-\alpha_{n-1}$. Now, in the language of [@ab-ze23 Sect. 5], the functions $\Delta_{i}$ and $q_i$ are written as $\Delta_{-\varpi_i}$ and $q_{-\alpha_i}$, respectively. Therefore, the claim follows from [@ab-ze23 Proposition 5.6] by taking $B^-$ as our choice of a Borel subgroup of ${SL_{n}}$. ◻ **Remark 29**. * For an elementary proof of Proposition [Proposition 28](#prop:Delta i intersects ad alpha i=emptyset){reference-type="ref" reference="prop:Delta i intersects ad alpha i=emptyset"}, see [@AHKZ Lemma 3.9]. * ## Construction of the morphism $\Psi$ Recall that we have $X(\Sigma)=(\mathbb C^{2(n-1)}-E)/T$ from Section [3](#sec: Toric orbifold){reference-type="ref" reference="sec: Toric orbifold"}. Recall also that $P_Y$ is the restriction of the principal $B^-$-bundle ${SL_{n}}\to {SL_{n}}/B^-$ to the Peterson variety $Y$. Now consider a morphism $\widetilde{\Psi} \colon P_Y\to\mathbb C^{2(n-1)}$ given by $$\widetilde{\Psi}(g)=(\Delta_{1}(g),\dots, \Delta_{n-1}(g); q_{1}(g),\dots, q_{n-1}(g))$$ for $g\in P_Y$.
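To illustrate the objects just introduced (this is only a sketch, not part of the construction), the following SymPy computation takes $n=3$ and a generic Toeplitz matrix $x\in U^-$, checks that $g=x\dot{w}_0$ satisfies the defining condition of $P_Y$, and evaluates the coordinates $(\Delta_1(g),\Delta_2(g);q_1(g),q_2(g))$ of $\widetilde{\Psi}(g)$. Here $f$ is taken to be the matrix with $1$'s on the subdiagonal, and $\dot{w}_0$ uses the sign convention $(\dot{w}_0)_{i,j}=\delta_{i,n+1-j}(-1)^{i-1}$; both are assumptions made for this illustration, chosen to match the formulas used elsewhere in the text.

```python
# A minimal sketch, assuming SymPy; the matrices f and w0dot below encode the
# assumptions stated in the lead-in, and x is a generic Toeplitz matrix in U^-.
from sympy import Matrix, symbols, simplify

n = 3
a, b = symbols('a b')
x = Matrix([[1, 0, 0], [a, 1, 0], [b, a, 1]])               # Toeplitz matrix in U^-
w0dot = Matrix(n, n, lambda i, j: (-1)**i if i == n - 1 - j else 0)
f = Matrix(n, n, lambda i, j: 1 if i == j + 1 else 0)       # regular nilpotent (subdiagonal 1's)

g = x * w0dot
ad = g.inv() * f * g                                        # Ad_{g^{-1}} f

# defining condition of P_Y: entries strictly above the superdiagonal vanish
assert all(simplify(ad[i, j]) == 0 for i in range(n) for j in range(n) if j > i + 1)

Delta = [simplify(g[i:, i:].det()) for i in range(1, n)]    # lower right (n-i) x (n-i) minors
q = [simplify(-ad[i - 1, i]) for i in range(1, n)]          # q_i(g) = -(Ad_{g^{-1}} f)_{i,i+1}
print("Delta =", Delta, " q =", q)                          # here: Delta = [a**2 - b, b], q = [1, 1]
```

In particular, for this sample element no pair $(\Delta_i(g),q_i(g))$ vanishes simultaneously, as guaranteed by the proposition above.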
Proposition [Proposition 28](#prop:Delta i intersects ad alpha i=emptyset){reference-type="ref" reference="prop:Delta i intersects ad alpha i=emptyset"} implies that $\Delta_{i}(g)$ and $q_i(g)$ can not be zero simultaneously for each $i\in [n-1].$ Thus, the image of $\widetilde{\Psi}$ lies in $\mathbb C^{2(n-1)}- E$ which was defined in [\[eq: def of C-E\]](#eq: def of C-E){reference-type="eqref" reference="eq: def of C-E"}. Namely, we have $$\widetilde{\Psi} \colon P_Y\to \mathbb C^{2(n-1)}- E.$$ Combining this with the quotient morphism $\mathbb C^{2(n-1)}- E\rightarrow (\mathbb C^{2(n-1)}- E)/T$, we obtain a morphism $P_Y\rightarrow X(\Sigma)=(\mathbb C^{2(n-1)}- E)/T$. Recall that $P_Y$ admits the multiplication of $B^-$ from the right. The $B^-$-equivariance ([\[eq:4.3\]](#eq:4.3){reference-type="ref" reference="eq:4.3"}) and ([\[eq:4.5\]](#eq:4.5){reference-type="ref" reference="eq:4.5"}) now imply that the resulting morphism $P_Y\rightarrow X(\Sigma)$ is $B^-$-invariant, where we note that $(-\varpi_{i})(t)=\varpi_i(t^{-1})$ and $(-\alpha_{i})(t)=\alpha_i(t^{-1})$ for $t\in T$. Since the map $P_Y \rightarrow Y$ is $B^-$-equivariantly locally trivial, we obtain a morphism $$\Psi \colon Y\to X(\Sigma)$$ given by $$\Psi(gB^-)=\big[\Delta_{1}(g),\dots, \Delta_{n-1}(g); q_{1}(g),\dots, q_{n-1}(g)\big]$$ for $gB^-\in Y$. **Remark 30**. * The toric orbifold $X(\Sigma)$ was defined as the quotient of $\mathbb C^{2(n-1)}- E$ by the linear $T$-action given in [\[eq: def of T-action on C2r\]](#eq: def of T-action on C2r){reference-type="eqref" reference="eq: def of T-action on C2r"}. Since we have $$\begin{aligned} (-\varpi_{i})(t)=\varpi_i(t^{-1}),\quad (-\alpha_{i})(t) = \alpha_i(t^{-1}) \qquad (t\in T)\end{aligned}$$ for $i\in [n-1]$, it follows that $X(\Sigma)$ can also be constructed from the linear $T$-action on $\mathbb C^{2(n-1)}-E$ defined by the weights $-\varpi_1,\ldots,-\varpi_{n-1}, -\alpha_1,\ldots,-\alpha_{n-1}$ which are fundamental weights and simple roots for the Borel subgroup $B^-\subseteq {SL_{n}}$. Therefore, the above morphism $\Psi\colon Y\rightarrow X(\Sigma)$ agrees with the one constructed in [@ab-ze23] by taking $B^-$ as our choice of a Borel subgroup of ${SL_{n}}$. * # Relation between $Y_{\ge 0}$ and $X(\Sigma)_{\ge0}$ In this section, we show that $Y_{\ge 0}$ is homeomorphic to $X(\Sigma)_{\ge0}$ under the map $\Psi\colon Y\to X(\Sigma)$ given in the last section. For each subset $J\subseteq[n-1]$, we have the decomposition $J = J_1 \sqcup J_2 \sqcup \cdots \sqcup J_{m}$ into the connected components. In the rest of this section, we use the same notations ${J_{k}^*}$, $n_{k}=|{J_{k}^*}|$, ${SL_{n_{k}}}$, $x({J_{k}^*})$ etc for $1\le k\le m$ prepared to prove Corollary [Corollary 20](#coro: a description for ZKJ){reference-type="ref" reference="coro: a description for ZKJ"}. ## Restriction of $\Psi \colon Y\rightarrow X(\Sigma)$ to nonnegative parts We first prove that the map $\Psi \colon Y\rightarrow X(\Sigma)$ restricts to a map from $Y_{\ge0}$ to $X(\Sigma)_{\ge0}$. 
To this end, recall that we have the decompositions $$\label{eq: two decompositions} Y_{\ge 0}=\bigsqcup_{K\subseteq J\subseteq [n-1]}Y_{K,J;>0} \quad \text{and} \quad X(\Sigma)_{\ge0} = \bigsqcup_{K\subseteq J \subseteq [n-1]} X(\Sigma)_{{K,J};> 0}$$ from [\[decomposition of nonnegative part of Peterson\]](#decomposition of nonnegative part of Peterson){reference-type="eqref" reference="decomposition of nonnegative part of Peterson"} and [\[eq: decomp of X nonnegative KJ\]](#eq: decomp of X nonnegative KJ){reference-type="eqref" reference="eq: decomp of X nonnegative KJ"}. We prove that $\Psi$ sends each stratum $Y_{K,J;>0}$ to $X(\Sigma)_{{K,J};> 0}$ in what follows. **Lemma 31**. *Let $J\subseteq[n-1]$. If $x\in {SL_{n}}$ is J-Toeplitz then $$q_i(x\dot{w}_J)=-\big(\text{\rm Ad}_{(x\dot{w}_J)^{-1}} f\big)_{i,i+1}=\begin{cases} 0\quad\text{if\quad$i\notin J$}\\ 1\hspace{4mm}\text{if\quad$i\in J$}, \end{cases}$$ where $f$ is the regular nilpotent matrix given in [\[eq: def of f\]](#eq: def of f){reference-type="eqref" reference="eq: def of f"}.* *Proof.* For a matrix $g\in {SL_{n}}$, one can verify that $$\label{eq: w0gw0 obs} (\dot{w}_0^{-1}g\dot{w}_0)_{i,i+1} = -g_{n+1-i,n-i} \quad \text{for $i\in [n-1]$}$$ by a direct calculation since $(\dot{w}_0)_{i,j}=\delta_{i,n+1-j}(-1)^{i-1}$ for $i,j\in[n-1]$ and $\dot{w}_0^{-1}$ is the transpose of $\dot{w}_0$. Based on this observation, we now prove the claim. Case 1: We first consider the case $i\in J$. In this case, we have $i\in J_{k}$ for some $1\le k\le m$. Then we have $i,i+1\in {J_{k}^*}$ by the definition of ${J_{k}^*}$ (see [\[def: definition of J\'i\]](#def: definition of J'i){reference-type="eqref" reference="def: definition of J'i"} for the definition of ${J_{k}^*}$). Set $i'\coloneqq i+1-\min J_{k}$ which encodes the position of $i$ in $J_{k}$ (as in [\[eq: def of j\'\]](#eq: def of j'){reference-type="eqref" reference="eq: def of j'"}). Then we have $i',i'+1\in [n_{k}]$, and hence $$\begin{split} \big(\text{\rm Ad}_{(x\dot{w}_J)^{-1}} f\big)_{i,i+1} &=\Big( \big(\text{\rm Ad}_{(x\dot{w}_J)^{-1}}f\big)({J_{k}^*}) \Big)_{i',i'+1} \\ &=\Big( \left(\dot{w}_J^{-1} x^{-1}fx\dot{w}_J\right)({J_{k}^*}) \Big)_{i',i'+1} . \end{split}$$ Since $\dot{w}_J$ and $\dot{w}_J^{-1}$ are $J$-block diagonal matrices, we can decompose the products in the last expression: $$\begin{split} \big(\text{\rm Ad}_{(x\dot{w}_J)^{-1}} f\big)_{i,i+1} &=\Big( \dot{w}_J({J_{k}^*})^{-1}\cdot(x^{-1}fx)({J_{k}^*})\cdot\dot{w}_J({J_{k}^*}) \Big)_{i',i'+1}. \end{split}$$ Thus, by [\[eq: w0gw0 obs\]](#eq: w0gw0 obs){reference-type="eqref" reference="eq: w0gw0 obs"}, we obtain $$\begin{split} \big(\text{\rm Ad}_{(x\dot{w}_J)^{-1}} f\big)_{i,i+1} =&-\left((x^{-1}fx)({J_{k}^*})\right)_{n_k+1-i',n_k-i'}\\ &=-\sum_{\ell=2}^{n_k}\big(x({J_{k}^*})^{-1}\big)_{n_k+1-i',\ell}\big(x({J_{k}^*})\big)_{\ell-1,n_k-i'} \quad \text{(by \eqref{eq: def of f})}. \end{split}$$ Since $x({J_{k}^*})^{-1}$ and $x({J_{k}^*})$ are lower triangular matrices, the summands in the last expression vanish except for the term $\ell = n_k+1-i'$. In addition, the term $\ell = n_k+1-i'$ appears in the summation since $i'\in [n_{k} -1]$ implies that $n_k+1-i'\ge2$. Thus, we obtain $$\begin{split} \big(\text{\rm Ad}_{(x\dot{w}_J)^{-1}} f\big)_{i,i+1} &=-\big(x({J_{k}^*})^{-1}\big)_{n_k+1-i',n_k+1-i'}\big(x({J_{k}^*})\big)_{n_k-i',n_k-i'}\\ &=-1 \end{split}$$ since $x({J_{k}^*})^{-1}$ and $x({J_{k}^*})$ have $1$'s on the diagonal. 
Hence we obtain $-\big(\text{\rm Ad}_{(x\dot{w}_J)^{-1}} f\big)_{i,i+1}=1$ in this case. Case 2: We next consider the case $i\notin J$. Since the matrix $(x\dot{w}_J)^{-1}f(x\dot{w}_J)$ is $J$-lower triangular, the condition $i\notin J$ implies that we have $$\big(\text{\rm Ad}_{(x\dot{w}_J)^{-1}} f\big)_{i,i+1}=\big((x\dot{w}_J)^{-1}f(x\dot{w}_J)\big)_{i,i+1} =0$$ in this case. This completes the proof. ◻ **Lemma 32**. *Let $K\subseteq J\subseteq [n-1]$. Then we have $\Psi(Y_{K,J;>0})\subseteq X(\Sigma)_{{K,J};> 0}.$* *Proof.* By Proposition [Proposition 27](#prop: description for R KJ>0){reference-type="ref" reference="prop: description for R KJ>0"}, an arbitrary element of $Y_{K,J;>0}$ can be written as $x\dot{w}_JB^-$ for some $J$-Toeplitz matrix $x\in U^-_{\ge0}$ such that $$\begin{cases}\Delta_{i}(x\dot{w}_J)=0\quad\text{if}\quad i\in K\\ \Delta_{i}(x\dot{w}_J)>0 \quad\text{if}\quad i\notin K . \end{cases}$$ Since $x$ is $J$-Toeplitz, we have from Lemma [Lemma 31](#lemm: ad part for xwJB-){reference-type="ref" reference="lemm: ad part for xwJB-"} that $$q_i(x\dot{w}_J) =\begin{cases} 0\quad\text{if $i\notin J$}\\ 1\hspace{4mm}\text{if $i\in J$} . \end{cases}$$ Thus, by Proposition [Proposition 5](#prop: nonnengative stratum){reference-type="ref" reference="prop: nonnengative stratum"}, $\Psi(x\dot{w}_JB^-)$ is an element of $X(\Sigma)_{{K,J};> 0}$. ◻ Now the following claim follows from Lemma [Lemma 32](#lemma: image of nonnegative part of ZKJ){reference-type="ref" reference="lemma: image of nonnegative part of ZKJ"} and the decompositions [\[eq: two decompositions\]](#eq: two decompositions){reference-type="eqref" reference="eq: two decompositions"}. **Proposition 33**. *The morphism $\Psi \colon Y\to X(\Sigma)$ restricts to a continuous map $$\Psi_{\ge 0} \colon Y_{\ge 0}\to X(\Sigma)_{\ge 0}$$ which sends $Y_{K,J;>0}$ to $X(\Sigma)_{{K,J};> 0}$ for $K\subseteq J \subseteq [n-1]$.* ## Properties of $\Psi_{\ge 0}$ In this subsection, we prove that $\Psi_{\ge 0} \colon Y_{\ge 0}\to X(\Sigma)_{\ge 0}$ is a homeomorphism. For that purpose, we apply the following result due to Rietsch ([@ri03]). For $i\in[n-1]$, recall that $\Delta_i(g)$ is the lower right $(n-i)\times (n-i)$ minor of $g\in{SL_{n}}$ (see [\[eq:4.3 second\]](#eq:4.3 second){reference-type="eqref" reference="eq:4.3 second"}). By a direct calculation, one can verify that $\Delta_i(g\dot{w}_0)$ is the lower *left* $(n-i)\times (n-i)$ minor of $g$. **Proposition 34** ([@ri03 Proposition 1.2]). *Let $X_{\ge0}$ be the set of totally nonnegative Toeplitz matrices in $U^-$. Then, the map $$X_{\ge0}\rightarrow \mathbb R^{n-1}_{\ge0} \quad ; \quad x\mapsto (\Delta_1(x\dot{w}_0),\ldots,\Delta_{n-1}(x\dot{w}_0))$$ is a homeomorphism, where each $\Delta_i(x\dot{w}_0)$ is the lower left $(n-i)\times (n-i)$ minor of $x$.* We apply this result as a key fact to show that $\Psi_{\ge 0} \colon Y_{\ge 0}\to X(\Sigma)_{\ge 0}$ is a homeomorphism. **Proposition 35**. 
*For $K\subseteq J\subseteq [n-1]$, the map $$\Psi_{\ge 0} \colon Y_{K,J;>0}\to X(\Sigma)_{{K,J};> 0}$$ is injective.* *Proof.* Take $p,p'\in Y_{K,J;>0}$ such that $\Psi_{\ge 0}(p)=\Psi_{\ge 0}(p').$ By Proposition [Proposition 27](#prop: description for R KJ>0){reference-type="ref" reference="prop: description for R KJ>0"}, we may write $$\text{$p=x\dot{w}_JB^-$, $p'=x'\dot{w}_JB^-$ for some $J$-Toeplitz matrices $x,x'\in U^-_{\ge0}$}$$ such that $$\label{eq: with positive properties} \begin{cases} \Delta_{i}(x\dot{w}_J)=0\quad\text{if}\quad i\in K\\ \Delta_{i}(x\dot{w}_J)>0 \quad\text{if}\quad i\notin K \end{cases} \quad \text{and} \quad \begin{cases} \Delta_{i}(x'\dot{w}_J)=0\quad\text{if}\quad i\in K\\ \Delta_{i}(x'\dot{w}_J)>0 \quad\text{if}\quad i\notin K . \end{cases}$$ To prove that $p=p'$, we show that $x=x'$ in what follows. The equality $\Psi_{\ge 0}(p)=\Psi_{\ge 0}(p')$ in $X(\Sigma)=(\mathbb C^{2(n-1)}- E)/T$ implies that there exists $t\in T$ such that $$\label{eq: delta and ad for x and x'} \begin{split} &\varpi_i(t)\cdot \Delta_{i}(x\dot{w}_J)=\Delta_{i}(x'\dot{w}_J), \\ &\alpha_i(t)\cdot q_i(x\dot{w}_J) =q_i(x'\dot{w}_J) \end{split}$$ for $1\le i\le n-1$. Since $x,x'$ are $J$-Toeplitz matrices, to prove that $x=x'$, it suffices to show that $$\label{eq: x=x' on Jk} x({J_{k}^*})=x'({J_{k}^*}) \qquad (1\le k\le m).$$ Let us prove this in the rest of the proof. By Lemma [Lemma 31](#lemm: ad part for xwJB-){reference-type="ref" reference="lemm: ad part for xwJB-"} and ([\[eq: delta and ad for x and x\'\]](#eq: delta and ad for x and x'){reference-type="ref" reference="eq: delta and ad for x and x'"}), we have $\alpha_i(t)=1$ for $i\in J_{k}$, so the ${J_{k}^*}\times {J_{k}^*}$ diagonal block $t({J_{k}^*})$ of the matrix $t$ takes the form $$t({J_{k}^*})= \text{diag}(\lambda_k,\lambda_k,\ldots,\lambda_k)$$ for some $\lambda_k\in\mathbb C^{\times}$. Since $x$ and $x'$ are $J$-Toeplitz matrices, we have $\Delta_{i}(x\dot{w}_J)=\Delta_{i}(x'\dot{w}_J)=1$ for $i\notin J$. Applying this to the first equality in [\[eq: delta and ad for x and x\'\]](#eq: delta and ad for x and x'){reference-type="eqref" reference="eq: delta and ad for x and x'"}, we obtain $$\label{eq: pi=1 on not J} \varpi_i(t)=1 \qquad (i\notin J).$$ Thus, we may prove inductively that $$\det(t({J_{k}^*}))=(\lambda_k)^{n_k}=1 \qquad (1\le k\le m).$$ In particular, we obtain that $|\lambda_k|=1$ for $1\le k\le m$. This and [\[eq: pi=1 on not J\]](#eq: pi=1 on not J){reference-type="eqref" reference="eq: pi=1 on not J"} imply that $$\label{eq: abs pi} |\varpi_i(t)|=1 \qquad (i\in[n-1]).$$ We claim that $$\label{eq: injectivity 10'} \text{$\Delta_{i}(x\dot{w}_J)=\Delta_{i}(x'\dot{w}_J)$} \qquad (i\in J_{k}).$$ To see this, we take cases. If $J_{k}\subseteq K$, then the claim follows from [\[eq: with positive properties\]](#eq: with positive properties){reference-type="eqref" reference="eq: with positive properties"} since both sides of this equality are zero. If $J_{k}\not\subseteq K$, then by [\[eq: with positive properties\]](#eq: with positive properties){reference-type="eqref" reference="eq: with positive properties"}, we have $$\label{eq: JcapK and J-K} \begin{split} &\Delta_{i}(x\dot{w}_J)=\Delta_{i}(x'\dot{w}_J)=0 \quad (i\in J_{k}\cap K), \\ &\text{$\Delta_{i}(x\dot{w}_J)>0$ and $\Delta_{i}(x'\dot{w}_J)>0$} \quad (i\in J_{k}- K).
\end{split}$$ Applying the latter claim to [\[eq: delta and ad for x and x\'\]](#eq: delta and ad for x and x'){reference-type="eqref" reference="eq: delta and ad for x and x'"}, we see that $\varpi_i(t)>0$ for $i\in J_{k}- K$. Now [\[eq: abs pi\]](#eq: abs pi){reference-type="eqref" reference="eq: abs pi"} implies that $\varpi_i(t)=1$ for $i\in J_{k}- K$. Applying this to [\[eq: delta and ad for x and x\'\]](#eq: delta and ad for x and x'){reference-type="eqref" reference="eq: delta and ad for x and x'"} again, we obtain that $$\text{$\Delta_{i}(x\dot{w}_J)=\Delta_{i}(x'\dot{w}_J)$}\quad (i\in J_{k}- K).$$ Combining this with the former equality in [\[eq: JcapK and J-K\]](#eq: JcapK and J-K){reference-type="eqref" reference="eq: JcapK and J-K"}, we conclude [\[eq: injectivity 10\'\]](#eq: injectivity 10'){reference-type="eqref" reference="eq: injectivity 10'"} as claimed above. We now prove [\[eq: x=x\' on Jk\]](#eq: x=x' on Jk){reference-type="eqref" reference="eq: x=x' on Jk"} by using [\[eq: injectivity 10\'\]](#eq: injectivity 10'){reference-type="eqref" reference="eq: injectivity 10'"}. For each $i\in J_k$, we set $i'\coloneqq i+1-\min J_{k}$ which encodes the position of $i$ in $J_{k}$ (as in [\[eq: def of j\'\]](#eq: def of j'){reference-type="eqref" reference="eq: def of j'"}). Since $x$ and $x'$ in [\[eq: injectivity 10\'\]](#eq: injectivity 10'){reference-type="eqref" reference="eq: injectivity 10'"} are $J$-Toeplitz matrices, it follows that $$\Delta_{i'}^{(n_k)}\big(x({J_{k}^*})\dot{w}_J({J_{k}^*})\big)=\Delta_{i'}^{(n_k)}\big(x'({J_{k}^*})\dot{w}_J({J_{k}^*})\big) \qquad (i'\in [n_k-1]),$$ where $\Delta_{i'}^{(n_k)}\colon {SL_{n_{k}}}\rightarrow \mathbb C$ is the function defined in [\[eq:4.3 first\]](#eq:4.3 first){reference-type="eqref" reference="eq:4.3 first"} for ${SL_{n_{k}}}$, and $\dot{w}_J({J_{k}^*})$ is the longest permutation matrix in ${SL_{n_{k}}}$. In this equality, we have $$\label{eq: nonnegative on Jk} x({J_{k}^*}), x'({J_{k}^*})\in (U^-_{n_k})_{\ge 0} \qquad (1\le k\le m)$$ which follows from $x,x'\in U^-_{\ge0}$ and Lemma [Lemma 24](#lemm: if and only if lemma which is needed){reference-type="ref" reference="lemm: if and only if lemma which is needed"}. Now, Rietsch's result (Proposition [Proposition 34](#prop: Rietsch's result){reference-type="ref" reference="prop: Rietsch's result"}) implies that we have $x({J_{k}^*})=x'({J_{k}^*})$, as claimed in [\[eq: x=x\' on Jk\]](#eq: x=x' on Jk){reference-type="eqref" reference="eq: x=x' on Jk"}. This completes the proof. ◻ **Corollary 36**. *The map $\Psi_{\ge 0} \colon Y_{\ge 0}\to X(\Sigma)_{\ge 0}$ is injective.* *Proof.* Lemma [Lemma 32](#lemma: image of nonnegative part of ZKJ){reference-type="ref" reference="lemma: image of nonnegative part of ZKJ"} and the decompositions [\[eq: two decompositions\]](#eq: two decompositions){reference-type="eqref" reference="eq: two decompositions"} imply that $$\Psi_{\ge 0}(Y_{K,J;>0})\cap \Psi_{\ge 0}(Y_{K',J';>0})=\emptyset \quad \text{for $(K,J)\ne(K',J')$}.$$ Thus, the claim follows from the previous proposition. ◻ To prove that $\Psi_{\ge 0} \colon Y_{\ge 0}\to X(\Sigma)_{\ge 0}$ is surjective, we first establish its surjectivity on the largest stratum $X(\Sigma)_{{\emptyset,[n-1]};> 0}$. **Proposition 37**.
*The map $\Psi_{\ge 0} \colon Y_{\emptyset,[n-1];>0}\to X(\Sigma)_{{\emptyset, [n-1]};> 0}$ is surjective.* *Proof.* By Proposition [Proposition 5](#prop: nonnengative stratum){reference-type="ref" reference="prop: nonnengative stratum"} we have $$X(\Sigma)_{{\emptyset,[n-1]};> 0}= \left\{[x_1,\dots, x_{n-1}; 1,\dots, 1]\in X(\Sigma) \mid x_i>0\ (i\in [n-1])\right\}.$$ Take an arbitrary element $$p=[x_1,\dots, x_{n-1};1,\dots,1]\in X(\Sigma)_{{\emptyset,[n-1]};> 0}$$ with $x_i>0$ for $i\in [n-1]$. Then by Rietsch's result (Proposition [Proposition 34](#prop: Rietsch's result){reference-type="ref" reference="prop: Rietsch's result"}), there exists a Toeplitz matrix $x\in U^-_{\ge 0}$ such that $$(\Delta_{1}(x\dot{w}_0),\ldots,\Delta_{n-1}(x\dot{w}_0)) =(x_1,\dots, x_{n-1}).$$ This and Lemma [Lemma 31](#lemm: ad part for xwJB-){reference-type="ref" reference="lemm: ad part for xwJB-"} imply that we have $\Psi(x\dot{w}_0 B^-)=[x_1,\dots, x_{n-1};1,\dots,1]=p$. Therefore, it suffices to show that $$\label{eq: surj goal} x\dot{w}_0B^-\in Y_{\emptyset, [n-1];>0}.$$ We prove this in what follows. Since $x$ is a Toeplitz matrix and we have $x_i\ne0$ for all $i\in [n-1]$, Corollary [Corollary 20](#coro: a description for ZKJ){reference-type="ref" reference="coro: a description for ZKJ"} implies that $$\label{eq: xw0B^-in Zemptyset I} x\dot{w}_0B^-\in Y_{\emptyset, [n-1]}.$$ For $t>0$, let $y(t)\coloneqq\exp(tf)$, where $f$ is the regular nilpotent matrix defined in [\[eq: def of f\]](#eq: def of f){reference-type="eqref" reference="eq: def of f"}. Since we have $x\in U^-_{\ge0}$ and $y(t)\in U^-_{>0}$ ([@lu94 Proposition 5.9]), we have $$y(t)x\in U^-_{>0}\quad \text{for $t>0$}$$ by [@lu94 Sect. 2.12]. This means that $y(t)xB^+\in ({SL_{n}}/B^+)_{>0}$ for $t>0$, and hence $$y(t)x\dot{w}_0B^-\in ({SL_{n}}/B^-)_{>0}\quad \text{for $t>0$}$$ (see the proof of Lemma [Lemma 25](#lem: yw0=u+b){reference-type="ref" reference="lem: yw0=u+b"}). Taking $t\rightarrow +0$, we obtain that $x\dot{w}_0B^-\in ({SL_{n}}/B^-)_{\ge 0}$. Together with ([\[eq: xw0B\^-in Zemptyset I\]](#eq: xw0B^-in Zemptyset I){reference-type="ref" reference="eq: xw0B^-in Zemptyset I"}), we now have $$x\dot{w}_0B^-\in Y_{\emptyset, [n-1]}\cap({SL_{n}}/B^-)_{\ge 0}=Y_{\emptyset, [n-1];>0},$$ as claimed in [\[eq: surj goal\]](#eq: surj goal){reference-type="eqref" reference="eq: surj goal"}. Hence, the claim follows. ◻ **Corollary 38**. *The map $\Psi_{\ge 0} \colon Y_{\ge 0}\to X(\Sigma)_{\ge 0}$ is surjective.* *Proof.* By Proposition [Proposition 37](#lemm: surjective from Z empty I ge 0 to X sigma I empty I ge 0){reference-type="ref" reference="lemm: surjective from Z empty I ge 0 to X sigma I empty I ge 0"}, we have $\Psi_{\ge 0}(Y_{\emptyset,[n-1];>0})=X(\Sigma)_{{\emptyset, [n-1]};> 0}$ for the largest stratum. Taking the closure in $X(\Sigma)_{\ge0}$, we obtain that $$\label{eq: closure of nonnegative part of X sigma I empty II} \overline{\Psi_{\ge 0}(Y_{\emptyset,[n-1];>0})} =\overline{X(\Sigma)_{{\emptyset, [n-1]};> 0}} =X(\Sigma)_{\ge0},$$ where the last equality follows from Lemma [Lemma 6](#lem: closure){reference-type="ref" reference="lem: closure"}. By definition, we have $\Psi_{\ge 0}(Y_{\emptyset,[n-1];>0})\subseteq \Psi_{\ge 0}(Y_{\ge 0})$ which implies that $$\overline{\Psi_{\ge 0}(Y_{\emptyset,[n-1];>0})} \subseteq \overline{\Psi_{\ge 0}(Y_{\ge 0})}.$$ Since $Y_{\ge 0}$ and $X(\Sigma)_{\ge 0}$ are compact Hausdorff spaces, $\Psi_{\ge 0}(Y_{\ge 0})$ is a closed subspace of $X(\Sigma)_{\ge 0}$. 
Thus, we obtain $$\overline{\Psi_{\ge 0}(Y_{\emptyset,[n-1];>0})} \subseteq \Psi_{\ge 0}(Y_{\ge 0}).$$ This and [\[eq: closure of nonnegative part of X sigma I empty II\]](#eq: closure of nonnegative part of X sigma I empty II){reference-type="eqref" reference="eq: closure of nonnegative part of X sigma I empty II"} imply that $X(\Sigma)_{\ge 0}\subseteq \Psi_{\ge 0}(Y_{\ge 0})$, which means that $\Psi_{\ge 0}$ is surjective. ◻ **Remark 39**. * One can also deduce Corollary [Corollary 38](#prop: surjection of psi ge 0){reference-type="ref" reference="prop: surjection of psi ge 0"} by directly showing that $\Psi_{\ge 0}$ is surjective on each stratum $X(\Sigma)_{K,J;\ge 0}$ as in the proof of Proposition [Proposition 35](#lemm: injection on ZKJ>0){reference-type="ref" reference="lemm: injection on ZKJ>0"}. * We now prove the main theorem of this section. **Theorem 40**. *The map $\Psi_{\ge 0} \colon Y_{\ge 0}\to X(\Sigma)_{\ge 0}$ is a homeomorphism such that $$\Psi_{\ge0}(Y_{K,J;> 0})=X(\Sigma)_{{K,J};> 0}$$ for $K\subseteq J\subseteq [n-1].$* *Proof.* We know that $\Psi_{\ge 0} \colon Y_{\ge 0}\to X(\Sigma)_{\ge 0}$ is a continuous bijection by Corollary [Corollary 36](#prop: injection of psi ge 0){reference-type="ref" reference="prop: injection of psi ge 0"} and Corollary [Corollary 38](#prop: surjection of psi ge 0){reference-type="ref" reference="prop: surjection of psi ge 0"}. Since $Y_{\ge 0}$ and $X(\Sigma)_{\ge 0}$ are compact Hausdorff spaces, it follows that $\Psi_{\ge 0}$ is a homeomorphism. By Lemma [Lemma 32](#lemma: image of nonnegative part of ZKJ){reference-type="ref" reference="lemma: image of nonnegative part of ZKJ"}, we have $\Psi_{\ge0}(Y_{K,J;> 0})\subseteq X(\Sigma)_{{K,J};> 0}$ for $K\subseteq J\subseteq [n-1]$. Here, $Y_{K,J;> 0}$ and $X(\Sigma)_{{K,J};> 0}$ are the pieces of disjoint decompositions of $Y_{\ge 0}$ and $X(\Sigma)_{\ge 0}$, respectively (see [\[eq: two decompositions\]](#eq: two decompositions){reference-type="eqref" reference="eq: two decompositions"}). Since $\Psi_{\ge 0}$ is a bijection, it follows that the equality $\Psi_{\ge0}(Y_{K,J;> 0})= X(\Sigma)_{{K,J};> 0}$ holds for all $K\subseteq J\subseteq [n-1]$. ◻ # A proof of Rietsch's conjecture on $Y_{\ge0}$ In this section, we prove that Rietsch's conjecture on $Y_{\ge0}$ holds. ## Statement of the conjecture Let us recall the statement of Rietsch's conjecture on $Y_{\ge0}$. Let $K\subseteq J\subseteq [n-1]$. Recall from [\[eq: face post of cube\]](#eq: face post of cube){reference-type="eqref" reference="eq: face post of cube"} that we defined the face $E_{K,J}$ of the standard cube $[0,1]^{n-1}$ as $$E_{K,J}= \left\{(x_1,\ldots,x_{n-1})\in [0,1]^{n-1} \ \left| \ \begin{array}{l} x_i=0\quad\text{if $i\in K$},\\ x_i=1\quad\text{if $i\notin J$} \end{array} \right. \right\}.$$ Its relative interior $\mathop{\mathrm{Int}}(E_{K,J})$ is given by $$\mathop{\mathrm{Int}}(E_{K,J}) \coloneqq \left\{(x_i)\in [0,1]^{n-1} \ \left| \ \begin{array}{l} x_i=0\hspace{12mm}\text{if}\quad i\in K,\\ x_i=1\hspace{12mm}\text{if}\quad i\notin J, \\ 0<x_i<1\quad\text{otherwise} \end{array} \right. \right\}.$$ In [@ri06 Sect.
10.3], Rietsch observed that the decomposition $Y_{\ge 0}=\bigsqcup_{K\subseteq J\subseteq [n-1]}Y_{K,J;>0}$ in [\[decomposition of nonnegative part of Peterson\]](#decomposition of nonnegative part of Peterson){reference-type="eqref" reference="decomposition of nonnegative part of Peterson"} has properties similar to that of the cell decomposition $$[0,1]^{n-1}=\bigsqcup_{K\subseteq J\subseteq [n-1]}\mathop{\mathrm{Int}}(E_{K,J}).$$ In fact, she showed that $Y_{K,J;> 0}$ is homeomorphic to a cell $\mathbb R^{|J|-|K|}_{>0}$ for $K\subseteq J\subseteq [n-1]$. She also proved that $Y_{K,>0}(\coloneqq Y_{\ge0}\cap \Omega_{w_K}^{\circ})$ is homeomorphic to $\mathbb R^{(n-1)-|K|}_{\ge0}$ for $K\subseteq[n-1]$ and that $Y_{\ge0}$ is contractible. Based on these observations, Rietsch gave the following conjecture in [@ri06 Conjecture 10.3].\ **Rietsch's conjecture on $Y_{\ge0}$.** There is a homeomorphism $$Y_{\ge 0}\to [0,1]^{n-1}$$ such that $Y_{K,J;>0}$ is mapped to $\mathop{\mathrm{Int}}(E_{K,J})$ for $K\subseteq J\subseteq [n-1]$.\ In the next subsection, we prove that the conjecture holds by applying the main theorem (Theorem [Theorem 40](#theo: main theorem){reference-type="ref" reference="theo: main theorem"}) of this paper. For that purpose, let us recall a few notions for polytopes. More details can be found in [@bu-pa Chapter 1]. Let $P$ be a polytope. The *face poset* of $P$ is the partially ordered set of faces of $P$, with respect to inclusion. We denote the face poset of $P$ by $\text{Pos}(P)$. The following claim is well-known; it can be deduced by using simplicial subdivision for polytopes and simplicial homeomorphisms (See Lemma 2.8 of [@mu2]). **Lemma 41**. *Let $P$ and $P'$ be polytopes. If there is a bijection $\alpha \colon \text{\rm Pos}(P)\to \text{\rm Pos}(P')$ preserving the partial orders, then there exists a homeomorphism $f\colon P\to P'$ such that* 1. *$f(F)=\alpha(F)$,* 2. *$f(\mathop{\mathrm{Int}}(F))=\mathop{\mathrm{Int}}(\alpha(F))$.* ## A proof of Rietsch's conjecture on $Y_{\ge0}$ By Theorem [Theorem 40](#theo: main theorem){reference-type="ref" reference="theo: main theorem"}, the map $\Psi_{\ge 0} \colon Y_{\ge 0}\to X(\Sigma)_{\ge 0}$ is a homeomorphism such that $$\Psi_{\ge0}(Y_{K,J;> 0})=X(\Sigma)_{{K,J};> 0} \quad \text{for $K\subseteq J\subseteq [n-1].$}$$ As we saw in Section [4.4](#subsec: nonnegative and polytope){reference-type="ref" reference="subsec: nonnegative and polytope"}, the moment map $\mu \colon X(\Sigma)_{\ge 0}\to \mathbb R^{n-1}$ restricts to a homeomorphism $\overline{\mu} \colon X(\Sigma)_{\ge 0}\to P_{n-1}$ such that $$\overline{\mu}(X(\Sigma)_{{K,J};> 0})=\mathop{\mathrm{Int}}(F_{K,J})\quad \text{for $K\subseteq J\subseteq [n-1]$}.$$ By [\[eq: face post of cube 2\]](#eq: face post of cube 2){reference-type="eqref" reference="eq: face post of cube 2"} and Proposition [Proposition 16](#prop: polytope bijection){reference-type="ref" reference="prop: polytope bijection"}, the polytope $P_{n-1}$ and the regular cube $[0,1]^{n-1}$ are combinatorially equivalent. Namely, there is a bijection $$\alpha \colon \text{Pos}(P_{n-1})\to \text{Pos}([0,1]^{n-1}) \quad ; \quad F_{K,J}\mapsto E_{K,J}$$ which preserves the partial orders. 
Thus, by Lemma [Lemma 41](#lemm: combinatorially equivalent induced from a homeomorphism){reference-type="ref" reference="lemm: combinatorially equivalent induced from a homeomorphism"}, there exists a homeomorphism $f \colon P_{n-1}\to [0,1]^{n-1}$ such that $$f(\mathop{\mathrm{Int}}(F_{K,J}))=\mathop{\mathrm{Int}}(E_{K,J})\quad \text{for $K\subseteq J\subseteq [n-1]$}.$$ Writing $\varphi\coloneqq f\circ\overline{\mu}\circ\Psi_{\ge 0}$, we conclude that the map $$\varphi \colon Y_{\ge 0}\to [0,1]^{n-1}$$ is a homeomorphism such that $$\varphi(Y_{K,J;>0})=\mathop{\mathrm{Int}}(E_{K,J}) \quad \text{for $K\subseteq J\subseteq [n-1]$}.$$ This proves Rietsch's conjecture on $Y_{\ge 0}$. H. Abe, T. Horiguchi, H. Kuwata, and H. Zeng, *Geometry of Peterson Schubert calculus in type A and left-right diagrams*, arXiv:2104.02914. H. Abe and H. Zeng, *Peterson varieties and toric orbifolds associated to Cartan matrices*, preprint in arXiv. M. Blume, *Toric orbifolds associated to Cartan matrices* , Ann. Inst. Fourier (Grenoble), **65** (2) (2015), 863--901. A. Borisov, L. Chen, G. Smith, *The orbifold Chow ring of toric Deligne-Mumford stacks*, J. Amer. Math. Soc. **18** (2005), no. 1, 193--215. V. Buchstaber and T. Panov, *Toric Topology*, Math. Surveys Monogr., 204, American Mathematical Society, Providence, RI, 2015. C. Chow, *On D. Peterson's presentation of quantum cohomology of G/P*, arXiv:2210.17382. D. Cox, J. Little and H. Schenck, *Toric varieties*, Graduate Studies in Mathematics, 124, American Mathematical Society, Providence RI (2011). S. Fomin, *Total positivity and cluster algebras*, Proceedings of the International Congress of Mathematicians. Volume II. (2010), 125--145. S. Fomin and A. Zelevinsky, *Cluster algebras. I. Foundations\|* J. Amer. Math. Soc. **15** (2002), no.2, 497--529. W. Fulton, *Introduction to Toric Varieties*, Ann. of Math. Stud., 131, Princeton University Press, Princeton, NJ, 1993. P. Galashin; S. Karp and T. Lam, *Regularity theorem for totally nonnegative flag varieties*, J. Amer. Math. Soc. **35** (2022), no. 2, 513--579. P. Galashin and P. Pylyavskyy, *Ising model and the positive orthogonal Grassmannian*, Duke Math. J. **169** (2020), no.10, 1877--1942. P. Hersh, *Regular cell complexes in total positivity*, Invent. Math. **197** (2014), no.1, 57--114. T. Horiguchi, M. Masuda, J. Shareshian, J. Song, *Toric orbifolds associated with partitioned weight polytopes in classical types*, arXiv:2105.05453. E. Insko and J. Tymoczko, *Intersection theory of the Peterson variety and certain singularities of Schubert varieties*, Geom. Dedicata **180** (2016), 95--116. Y. Kodama and L. Williams, *KP solitons, total positivity, and cluster algebras*, Proc. Natl. Acad. Sci. USA **108** (2011), no.22, 8984--8989. Y. Kodama and L. Williams, *KP solitons and total positivity for the Grassmannian*, Invent. Math. **198** (2014), no. 3, 637--699. M. Lis, *The planar Ising model and total positivity*, J. Stat. Phys. **166** (2017), no.1, 72--89. G. Lusztig, *Total Positivity in Reductive Groups*, Lie theory and geometry: in honor of Bertram Kostant (G. I. Lehrer, ed.), Progress in Mathematics, vol. 123, Birkh$\ddot{\text{a}}$user, Boston, 1994, pp. 531--568. MR 96m:20071 G. Lusztig, *Introduction to total positivity*, Positivity in Lie theory: open problems, Edited by Joachim Hilgert, Jimmie D. Lawson, Karl-Hermann Neeb and Ernest B. Vinberg De Gruyter Exp. Math., 26, 1998. G. Lusztig, *A survey of total positivity*, Milan J. Math. **76** (2008), 125--134. A. 
Mathas, *Iwahori-Hecke algebras and Schur algebras of the symmetric group*, Univ. Lecture Ser., 15, American Mathematical Society, Providence, RI, 1999. xiv+188 pp. J. Munkres, *Elements of Algebraic Topology*, Addison-Wesley, Reading, MA, 1984. Reprinted by CRC press. D. Peterson, *Quantum cohomology of $G/P$*, Lecture Course, M.I.T., Spring Term, 1997. A. Postnikov, *Total positivity, Grassmannians, and networks*, arXiv:math/0609764. A. Postnikov, D. Speyer, and L. Williams, *Matching polytopes, toric geometry, and the totally non-negative Grassmannian*, J. Algebraic Combin. **30** (2009), no.2, 173--191. R. Richardson, *Intersections of double cosets in algebraic groups*, Indag. Mathem., N.S., **3** (1), 69--77. K. Rietsch, *An algebraic cell decomposition of the nonnegative part of a flag variety*, J. Algebra **213**(1999), no.1, 144--154. K. Rietsch, *Totally positive Toeplitz matrices and quantum cohomology of partial flag varieties*, J. Amer. Math. Soc. **16** (2003), no.2, 363--392. K. Rietsch, *A mirror construction for the totally nonnegative part of the Peterson variety*, (Nagoya Math. J. **183** (2006), 105--142. K. Rietsch, *A mirror symmetric solution to the quantum Toda lattice*, Comm. Math. Phys. **309** (2012), no.1, 23--49. K. Rietsch and L. Williams, *The totally nonnegative part of G/P is a CW complex*, Transform. Groups **13** (2008), no.3-4, 839--853. A. Whitney, *A reduction theorem for totally positive matrices*, J. d'Analyse Math. **2** (1952), 88--92. G. Ziegler, *Lectures on polytopes*, Grad. Texts in Math., 152, Springer-Verlag, New York, 1995. [^1]: In [@ab-ze23], the action of $(\mathbb C^{\times})^{2(n-1)}/T$ is identified with an action of $T/Z$, where $Z$ is the center of ${SL_{n}}$. See [@ab-ze23 Sect. 3.3] for details. [^2]: When $n\ge3$, the ray vectors $-\alpha^{\vee}_i$ and $\bm{e}_i$ are primitive for $1\le i\le n-1$. When $n=2$, the ray vector $-\alpha^{\vee}_1$ becomes $-2\bm{e}_1$ which is not primitive.
--- abstract: | For fixed coprime polynomials $U,V \in \mathbb{F}_q [T]$ with degrees of different parities, and a general polynomial $A \in \mathbb{F}_q [T]$, define the arithmetic function $S_{U,V} (A)$ to be the number of representations of $A$ of the form $UE^2 + VF^2$ with $E,F \in \mathbb{F}_q [T]$. We study the mean and variance of $S_{U,V}$ over short intervals in $\mathbb{F}_q [T]$, and this can be interpreted as the function field analogue of the mean and variance of lattice points in thin elliptic annuli, where the scaling factor of the ellipses is rational. Our main result is an asymptotic formula for the variance even when the length of the interval remains constant relative to the absolute value of the centre of the interval. In terms of lattice points, this means we obtain the variance in the so-called "local" or "microscopic" regime, where the area of the annulus remains constant relative to the inner radius. We also obtain asymptotic or exact formulas for almost all other lengths of the interval, and we see some interesting behaviour at the boundary between short and long intervals. Our approach is that of additive characters and Hankel matrices that we employed for the divisor function and a restricted sum-of-squares function in previous work, and we develop further results on Hankel matrices in this paper. address: School of Mathematical Sciences, University of Nottingham, Nottingham, NG7 2RD, UK author: - Michael Yiasemides bibliography: - YiasemidesBibliography1.bib title: Lattice Point Variance in Thin Elliptic Annuli over $\mathbb{F}_q [T]$ --- # Introduction {#section, introduction} Lattice points have been a key area of study in mathematics, due to their intrinsic interest as well as the connections to number theory [@Heath-Brown1992_DistrMomErrTermDirichletDivProb; @Ivic2009_DivFuncRZFShortInterv] and the applications to physics [@BerryTabor1977_LevelClusterRegularSpectrum; @BleherLebowitz1994_EnergyLevelStatModelQuantSystUnivScalingLatticePoint; @BleherLebowitz1994_VarNumbLatticePointRandomNarrowElliptStrip]. A very well known problem is the Gauss circle problem, which asks for the number of lattice points (in $\mathbb{Z}^2$) that lie in a circle: $$\begin{aligned} N (t) := \Big\lvert \big\{ (x,y) \in \mathbb{Z}^2 : x^2 + y^2 \leq t \big\} \Big\rvert .\end{aligned}$$ We are interested in the asymptotic behaviour as $t \longrightarrow \infty$. It can be shown that the asymptotic main term is the area of the circle, $\pi t$. The error term is dependent on the behaviour of the lattice points near the boundary of the circle, and so we can see that the error term is at most proportional to the boundary length. That is, the error is $O (t^{\frac{1}{2}})$. Although, stronger bounds exist and it is conjectured that the error is $O (t^{\frac{1}{4} +\epsilon})$.\ A related problem asks about the number of lattice points in a circular annulus $$\begin{aligned} N \big( t , a(t) \big) := \Big\lvert \big\{ (x,y) \in \mathbb{Z}^2 : t \leq x^2 + y^2 \leq t + a(t) \big\} \Big\rvert .\end{aligned}$$ Here, we are interested in the asymptotic behaviour as $t \longrightarrow \infty$ and $a(t) = o(t)$. We note that the area of the annulus is $\pi a(t)$, while the boundary is $\asymp t^{\frac{1}{2}}$. In particular, the lattice points near the boundary will have a larger relative contribution to $N \big( t , a(t) \big)$ if $a(t)$ grows slowly, and indeed the main term may not come from the area of the annulus if $a(t)$ grows slowly enough. 
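As a quick numerical illustration of the quantities just introduced (this sketch is ours and is not part of the analysis; the helper name `annulus_count` is hypothetical), one can count $N\big(t, a\big)$ by brute force and compare it with the area $\pi a$: already for a fixed area $a$ and growing $t$, the count fluctuates noticeably around $\pi a$, and it is exactly this fluctuation that the variance results discussed below measure.

```python
import math

def annulus_count(t, a):
    """Number of (x, y) in Z^2 with t <= x^2 + y^2 <= t + a (t, a integers)."""
    count = 0
    x_max = math.isqrt(t + a)                 # |x| <= sqrt(t + a)
    for x in range(-x_max, x_max + 1):
        hi = t + a - x * x                    # need y^2 <= hi
        lo = max(t - x * x, 0)                # and  y^2 >= lo
        if hi < 0:
            continue
        y_hi = math.isqrt(hi)                 # #{y : y^2 <= hi} = 2*y_hi + 1
        below = 0 if lo == 0 else 2 * math.isqrt(lo - 1) + 1
        count += (2 * y_hi + 1) - below
    return count

a = 10                                        # fixed annulus area parameter
for t in (10**3, 10**5, 10**7):
    print(f"t = {t:>9}:  N(t, a) = {annulus_count(t, a):4d},  pi*a = {math.pi * a:.2f}")
```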
This makes lattice points in circular annuli an interesting problem to study.\ We briefly remark that the set up we have established for lattice points above (and below) is not standard, in that $t$ would usually denote the radius of the circle, and we would work with the width of the annulus instead of the area $a(t)$. However, this is the setup used in [@BleherLebowitz1994_EnergyLevelStatModelQuantSystUnivScalingLatticePoint; @BleherLebowitz1994_VarNumbLatticePointRandomNarrowElliptStrip], and we have chosen to adopt this because we are particularly interested in the relation between our results in function fields and their classical results.\ Now, it is worth noting that lattice points in circular annuli are directly related to the arithmetic function $$\begin{aligned} r (n) := \sum_{\substack{x,y \in \mathbb{Z} \\ x^2 + y^2 = n }} 1 = 4 \sum_{\substack{d \mid n \\ d=1,3 (\mathop{\mathrm{mod}}4)}} (-1)^{\frac{d-1}{2}} ,\end{aligned}$$ and its behaviour over intervals. Because $\frac{1}{4} r(n)$ is multiplicative, one can associate an $L$-function to it, and, by making use of Poisson summation, one can obtain results on its distribution over intervals. See [@Heath-Brown1992_DistrMomErrTermDirichletDivProb; @Ivic2009_DivFuncRZFShortInterv] for further details.\ This number theoretic approach is limited when one considers the generalisation of our lattice point problems to elliptic annuli, which no longer have an associated *multiplicative* arithmetic function. Thus, lattice points in elliptic annuli are particularly interesting to study. To this end, we make the following definition, again adopting the setup from [@BleherLebowitz1994_EnergyLevelStatModelQuantSystUnivScalingLatticePoint; @BleherLebowitz1994_VarNumbLatticePointRandomNarrowElliptStrip]. For $\mu > 0$, $$\begin{aligned} %\label{statement, def, lattice point in elliptic annuli notation} N_{\mu} \big( t , a(t) \big) := \Big\lvert \big\{ (x,y) \in \mathbb{Z}^2 : t \leq x^2 + \mu y^2 \leq t + a(t) \big\} \Big\rvert .\end{aligned}$$ There are four regimes to consider, which depend on how the area of the annulus grows with respect to the boundary. 1. ["Global" or "macroscopic" regime:]{.ul} $a(t)t^{-\frac{1}{2}} \longrightarrow \infty$ as $t \longrightarrow \infty$ (with $a(t) = o (t)$). In this regime, both the area and boundary tend to infinity, but the area grows at a faster rate.\ 2. ["Saturation" regime:]{.ul} $a(t)t^{-\frac{1}{2}} \longrightarrow c$ as $t \longrightarrow \infty$, for a positive constant $c$. In this regime, the area and boundary both tend to infinity with the same order of growth.\ 3. ["Intermediate" or "mesoscopic" regime:]{.ul} $a(t)t^{-\frac{1}{2}} \longrightarrow 0$ but $a(t) \longrightarrow \infty$ as $t \longrightarrow \infty$. In this regime, the area and boundary both tend to infinity, but the boundary grows at a faster rate.\ 4. ["Local" or "microscopic" regime:]{.ul} $a(t)$ remains constant as $t \longrightarrow \infty$. In this regime, the area remains constant, while the boundary tends to infinity.\ Let us now highlight some of the key results that have been established in these regimes. In [@BleherLebowitz1994_EnergyLevelStatModelQuantSystUnivScalingLatticePoint], Bleher and Lebowitz consider the variance $$\begin{aligned} \frac{1}{T} \int_{t=T}^{2T} \Big( N_{1} \big( u^2 , a \big) - a \Big)^2 \phi (u) \mathrm{d} u ,\end{aligned}$$ where $\phi (t)$ is a suitable smoothing function given in (1.12) of [@BleherLebowitz1994_EnergyLevelStatModelQuantSystUnivScalingLatticePoint]. 
Note the replacement of $t$ by $u^2$; that $a$ is taken to be constant within the integral (although after the integral is performed we will consider limiting behaviour for $a$); and that $\mu = 1$ (that is, we are considering circular annuli). For the global regime, they prove that $$\begin{aligned} \lim_{\substack{aT^{-2} \rightarrow 0 \\ a T^{-1} \rightarrow \infty }} \frac{1}{T} \int_{t=T}^{2T} \Big( N_{1} \big( u^2 , a \big) - \pi a \Big)^2 \phi (u) \mathrm{d} u = v T ,\end{aligned}$$ for a constant $v$. For the saturation regime, they prove that $$\begin{aligned} \lim_{a T^{-1} \rightarrow c } \frac{1}{T} \int_{t=T}^{2T} \Big( N_{1} \big( u^2 , a \big) - \pi a \Big)^2 \phi (u) \mathrm{d} u = V (c) T ,\end{aligned}$$ for a function $V$ that they compute as an infinite series. They show that $$\begin{aligned} \lim_{T \longrightarrow \infty} \frac{1}{T} \int_{z=0}^{T} V(z) \mathrm{d} z = v . \end{aligned}$$ For the intermediate regime, for any constant $\omega \in \Big( 0 , \frac{1}{2} \Big)$, Bleher and Lebowitz [@BleherLebowitz1994_VarNumbLatticePointRandomNarrowElliptStrip] prove $$\begin{aligned} \lim_{\substack{aT^{-\frac{1}{2}} \rightarrow 0 \\ a T^{-\omega} \rightarrow \infty }} \frac{1}{T} \int_{t=T}^{2T} \frac{\Big( N_{\mu} \big( t , a \big) - \pi \mu^{-\frac{1}{2}} a \Big)^2}{\pi \mu^{-\frac{1}{2}} a} \mathrm{d} t = \begin{cases} 4 &\text{ if $\mu$ is Diophantine,} \\ \infty &\text{ if $\mu$ is a Liouville number.} \end{cases}\end{aligned}$$ Whereas, if $\mu$ is rational and is equal to $\frac{p}{q}$ for coprime $p,q$, they prove $$\begin{aligned} \label{statement, rational ellipse variance, intermediate regime} \lim_{\substack{aT^{-\frac{1}{2}} \rightarrow 0 \\ a T^{-\omega} \rightarrow \infty }} \frac{1}{T} \int_{t=T}^{2T} \frac{\Big( N_{\mu} \big( t , a \big) - \pi \mu^{-\frac{1}{2}} a \Big)^2}{\pi \mu^{-\frac{1}{2}} a \lvert \log d \rvert} \mathrm{d} t = c (\mu) ;\end{aligned}$$ where $d = 2\pi \mu^{-\frac{1}{2}} a T^{-\frac{1}{2}}$, and $$\begin{aligned} c (\mu) = \begin{cases} \frac{4}{\pi} d(p) d(q) (pq)^{-\frac{1}{2}} &\text{ if $p,q \equiv 1 \mathop{\mathrm{mod}}2$,} \\ & \\ \frac{1}{\pi} (6l+2) d(p') d(q) (pq)^{-\frac{1}{2}} &\text{ if $p = 2^l p'$ and $p',q \equiv 1 \mathop{\mathrm{mod}}2$.} \end{cases}\end{aligned}$$ It may be helpful for the reader to keep in mind that for the results from [@BleherLebowitz1994_VarNumbLatticePointRandomNarrowElliptStrip], Bleher and Lebowitz use the variable $t$, whereas in [@BleherLebowitz1994_EnergyLevelStatModelQuantSystUnivScalingLatticePoint] they use $u^2$; and this effects the limits one must take for each regime.\ Finally, again in the intermediate regime, assuming the additional condition that for all $\delta < 0$ we have $T,a \longrightarrow \infty$ with $\frac{a}{T} \longrightarrow 0$ and $\frac{a}{T} \gg T^{-\delta}$, Hughes and Rudnick [@HughesRudnick2004_DistrLatticePointThinAnnuli] prove for any interval $[a,b]$ that $$\begin{aligned} \lim_{T \longrightarrow \infty} \frac{1}{T} \mathrm{meas} \bigg\{ t \in [T,2T] : \frac{N_1 (u^2 , a) - \pi a}{\sigma_1 u^{\frac{1}{2}}} \in [a,b] \bigg\} = \frac{1}{\sqrt{2 \pi}} \int_{x=a}^{b} e^{-\frac{x^2}{2}} \mathrm{d} x ,\end{aligned}$$ where $$\begin{aligned} {\sigma_1}^2 := 16 \frac{a}{u} \log \big( \frac{u}{a} \big) .\end{aligned}$$ That is, we have a Gaussian limiting distribution. 
Wigman [@Wigman2006_DistrLattPointElliptAnnuli] extends upon this, to the elliptic cases for which $\mu^{\frac{1}{2}}$ is transcendental and strongly Diophantine[^1], by proving that $$\begin{aligned} \lim_{T \longrightarrow \infty} \frac{1}{T} \mathrm{meas} \bigg\{ t \in [T,2T] : \frac{N_{\mu} (u^2 , a) - \pi \mu^{-\frac{1}{2}} a}{\sigma_2 u^{\frac{1}{2}}} \in [a,b] \bigg\} = \frac{1}{\sqrt{2 \pi}} \int_{x=a}^{b} e^{-\frac{x^2}{2}} \mathrm{d} x ,\end{aligned}$$ where $$\begin{aligned} {\sigma_2}^2 := \frac{8 \pi}{\mu^{\frac{1}{2}}} a u^{-1} .\end{aligned}$$ Let us now discuss the results we prove in this paper. First, we will need to introduce some notation. Let $\mathcal{A} := \mathbb{F}_q [T]$, the polynomial ring over the finite field of order $q$, where $q$ will be taken to be an odd prime power. For $A \in \mathcal{A} \backslash \{ 0 \}$, we define $\lvert A \rvert := q^{\mathop{\mathrm{deg}}A}$, and $\lvert 0 \rvert := 0$. We denote the set of monics by $\mathcal{M}$, and the set of monic primes by $\mathcal{P}$. For $\mathcal{S} \subseteq \mathcal{A}$ and integers $n \geq 0$, we define $\mathcal{S}_n , \mathcal{S}_{\leq n} , \mathcal{S}_{<n}$ to be the set of elements in $\mathcal{S}$ with degree $n , \leq n , < n$, respectively (we take $\mathop{\mathrm{deg}}0$ to be $-\infty$). For $A \in \mathcal{A}$ and $h \geq 0$, we define the interval of centre $A$ and radius $h$ by $I(A;<h) := \{ B \in \mathcal{A} : \mathop{\mathrm{deg}}(B-A) < h \}$.[^2] We denote the greatest common (monic) divisor of two polynomials $A,B$, not both zero, by $(A,B)$. For a monic polynomial $A \in \mathcal{A} \backslash \{ 0 \}$, we define the totient function by $$\begin{aligned} \phi (A) := \sum_{\substack{C \in \mathcal{A} \\ \mathop{\mathrm{deg}}C < \mathop{\mathrm{deg}}A \\ (C,A)=1 }}1 .\end{aligned}$$ Furthermore, we define the radical of a polynomial $A \neq 0$ by $\mathop{\mathrm{rad}}A := \prod_{P \mid A} P$, where the product is over *primes*. Now let us define our function that counts the number of "elliptic" representations of a polynomial. **Definition 1**. *Let $U,V \in \mathcal{M}$ be coprime and such that $\mathop{\mathrm{deg}}U$ is even and $\mathop{\mathrm{deg}}V$ is odd. For $B \in \mathcal{A}$, we define $$\begin{aligned} S_{U,V} (B) := \lvert \{ (E,F) \in \mathcal{A}^2 : B = UE^2 + VF^2 \} \rvert .\end{aligned}$$* **Remark 1**. *Taking the degrees of $U,V$ to have different parities seems to be the natural choice over $\mathbb{F}_q [T]$. Indeed, if both were even/odd then $S_{U,V} (B)$ would be zero for all $B$ with odd/even degree. Furthermore, if $U,V$ were both even or both odd, then for certain cases of $q$ (the order of our finite field) we could have cancellation when we sum $UE^2$ and $VF^2$. Taking $U,V$ to have degrees of different parities avoids all of these problems.\ Suppose $B$ is monic. We note that if $\mathop{\mathrm{deg}}B = n$ is even, then we must have that $\mathop{\mathrm{deg}}E = \frac{n - \mathop{\mathrm{deg}}U}{2}$ and that $E$ or $-E$ is monic, while $F$ takes values in $\mathcal{A}$ with $\mathop{\mathrm{deg}}F \leq \frac{n - \mathop{\mathrm{deg}}V -1}{2}$. On the other hand, if $\mathop{\mathrm{deg}}B = n$ is odd, then we must have that $\mathop{\mathrm{deg}}F = \frac{n - \mathop{\mathrm{deg}}V}{2}$ and that $F$ or $-F$ is monic, while $E$ takes values in $\mathcal{A}$ with $\mathop{\mathrm{deg}}E \leq \frac{n-\mathop{\mathrm{deg}}U -1}{2}$.* Now, we wish to understand the behaviour of $S_{U,V} (B)$ over intervals.
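Since everything here is finite, $S_{U,V}(B)$ can be computed directly, which is a useful sanity check on the definition. The following minimal Python sketch is ours and not part of the paper (the choices $q = 3$, $U = 1$, $V = T$ and the helper names are for illustration only): it brute-forces $S_{U,V}(B)$ for all monic $B$ of a given degree, using the fact noted in the remark above that $\mathop{\mathrm{deg}}(UE^2)$ and $\mathop{\mathrm{deg}}(VF^2)$ have different parities, so neither summand can exceed $\mathop{\mathrm{deg}}B$ in degree.

```python
from itertools import product

q = 3                                   # a small odd prime (assumption for this sketch)

def trim(c):
    """Drop trailing zero coefficients; () represents the zero polynomial."""
    c = list(c)
    while c and c[-1] == 0:
        c.pop()
    return tuple(c)

def padd(a, b):
    m = max(len(a), len(b))
    return trim(tuple(((a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)) % q
                      for i in range(m)))

def pmul(a, b):
    if not a or not b:
        return ()
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = (c[i + j] + ai * bj) % q
    return trim(c)

def polys_up_to_degree(d):
    """All polynomials of degree <= d (including 0), as coefficient tuples (low to high)."""
    return {trim(c) for c in product(range(q), repeat=max(d + 1, 0))}

def S(U, V, B):
    """S_{U,V}(B): the number of pairs (E, F) with B = U*E^2 + V*F^2 over F_q."""
    n, dU, dV = len(B) - 1, len(U) - 1, len(V) - 1   # B, U, V non-zero, trimmed tuples
    count = 0
    for E in polys_up_to_degree((n - dU) // 2):      # deg(U*E^2) cannot exceed deg B
        UE2 = pmul(U, pmul(E, E))
        for F in polys_up_to_degree((n - dV) // 2):  # deg(V*F^2) cannot exceed deg B
            if padd(UE2, pmul(V, pmul(F, F))) == B:
                count += 1
    return count

U = (1,)          # U = 1 (monic, even degree)
V = (0, 1)        # V = T (monic, odd degree)
n = 2
total = 0
for low in product(range(q), repeat=n):
    B = low + (1,)                      # a monic polynomial of degree n
    total += S(U, V, B)
print("average of S_{U,V} over monic B of degree", n, "is", total / q**n)
```

The empirical average produced this way can be compared with the exact mean computed in the next display.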
For the mean, we have $$\begin{aligned} \begin{split} \label{statement, lattice point ellipse mean value calculations} \frac{1}{q^n} \sum_{A \in \mathcal{M}_n } \sum_{B \in I (A; <h)} S_{U,V} (B) = &\frac{q^h}{q^n} \sum_{A \in \mathcal{M}_n} S_{U,V} (A) = \frac{q^h}{q^n} \sum_{A \in \mathcal{M}_n} \sum_{\substack{E,F \in \mathcal{A} : \\ A = UE^2 + VF^2}} 1 \\ = &\begin{cases} \frac{2q^h}{q^n} \sum_{\substack{E \in \mathcal{M} \\ \mathop{\mathrm{deg}}E = \frac{n-\mathop{\mathrm{deg}}U}{2}}} \sum_{\substack{F \in \mathcal{A} \\ \mathop{\mathrm{deg}}F \leq \frac{n- \mathop{\mathrm{deg}}V -1}{2}}} 1 &\text{ if $n$ is even} \\ \frac{2q^h}{q^n} \sum_{\substack{E \in \mathcal{A} \\ \mathop{\mathrm{deg}}E = \frac{n-\mathop{\mathrm{deg}}U-1}{2}}} \sum_{\substack{F \in \mathcal{M} \\ \mathop{\mathrm{deg}}F \leq \frac{n- \mathop{\mathrm{deg}}V }{2}}} 1 &\text{ if $n$ is odd} \end{cases} \\ = &2 q^{h - \frac{\mathop{\mathrm{deg}}U}{2} - \frac{\mathop{\mathrm{deg}}V}{2} + \frac{1}{2}} . \end{split}\end{aligned}$$ We now define $$\begin{aligned} \Delta_{S_{U,V}} (A; <h) = \sum_{B \in I (A; <h)} S_{U,V} (B) \hspace{1em} - q^{h - \frac{\mathop{\mathrm{deg}}U}{2} - \frac{\mathop{\mathrm{deg}}V}{2} + \frac{1}{2}} ,\end{aligned}$$ and we are interested in the variance $$\begin{aligned} \frac{1}{q^n} \sum_{A \in \mathcal{M}_n } \bigg\lvert \Delta_{S_{U,V}} (A;<h) \bigg\rvert^2 .\end{aligned}$$ Let us describe the analogies for $\mathbb{F}_q [T]$ of the four regimes. We have that $q^n$ is analogous to $t$, while $q^h$ is analogous to the area $a$. In what follows, we always have $h \leq n$. 1. [Global regime:]{.ul} $n,h \longrightarrow \infty$ with $h - \frac{n}{2} \longrightarrow \infty$.\ 2. [Saturation regime:]{.ul} $n,h \longrightarrow \infty$ with $h - \frac{n}{2} \longrightarrow c$, for a real constant $c$.\ 3. [Intermediate regime:]{.ul} $n,h \longrightarrow \infty$ with $\frac{n}{2} - h \longrightarrow \infty$.\ 4. [Local regime:]{.ul} $n \longrightarrow \infty$ with $h$ constant.\ Our main result is the following theorem. There are several cases due to the various ranges of $h$ that are covered, but the most significant is ([\[statement, main theorem, case 3 asymptotic\]](#statement, main theorem, case 3 asymptotic){reference-type="ref" reference="statement, main theorem, case 3 asymptotic"}) as this addresses short intervals, and in the context of lattice points it includes the intermediate and local regimes. **Theorem 1**. *Let $U,V \in \mathcal{M}$ with $\mathop{\mathrm{deg}}U$ even and $\mathop{\mathrm{deg}}V$ odd, and $(U,V)=1$. Let $0 \leq h \leq n$. In the interest of presentation, we define $$\begin{aligned} &s:= \begin{cases} \mathop{\mathrm{deg}}U &\text{ if $n$ is even,} \\ \mathop{\mathrm{deg}}U +1 &\text{ if $n$ is odd;} \end{cases} &&s' := \frac{n-s}{2} ; &&&n_1 := \Big\lfloor \frac{n+2}{2} \Big\rfloor ; \\ &t = \begin{cases} \mathop{\mathrm{deg}}V +1 &\text{ if $n$ is even,} \\ \mathop{\mathrm{deg}}V &\text{ if $n$ is odd;} \end{cases} &&t' := \frac{n-t}{2} ; &&&n_2 := \Big\lfloor \frac{n+3}{2} \Big\rfloor .\end{aligned}$$ We have the following cases.\ Case 1: If $n$ is even and $h \geq s'+s$, or if $n$ is odd and $h \geq t'+t$, then $$\begin{aligned} \frac{1}{q^n} \sum_{A \in \mathcal{M}_n } \bigg\lvert \Delta_{S_{U,V}} (A;<h) \bigg\rvert^2 = 0 . 
\\\end{aligned}$$* *Case 2: If $n$ is even and $n_2 -1 \leq h < s'+s$, or if $n$ is odd and $n_2 -1 \leq h < t'+t$, we have $$\begin{aligned} \frac{1}{q^n} \sum_{A \in \mathcal{M}_n } \bigg\lvert \Delta_{S_{U,V}} (A;<h) \bigg\rvert^2 = q^{h} f_{U,V} (n,h) ;\end{aligned}$$ where, if $n$ is even, then $$\begin{aligned} f_{U,V} (n,h) = \frac{4(q-1)}{q^{\frac{1}{2}} \lvert UV \rvert^{\frac{1}{2}} } \sum_{r_1 = s'+1}^{n-h} q^{r_1 -(n-h)} \sum_{\substack{B_1 \in \mathcal{A}_{\leq s'+s-r_1} \\ B_2 \in \mathcal{M}_{\leq r_1 -s' -1} \\ (B_2 , U) \mid B_1 }} \lvert (B_2 , U) \rvert \sum_{\substack{C_1 \in \mathcal{A}_{\leq t'+t-r_1 -1} \\ C_2 \in \mathcal{A}_{\leq r_1 -t' -2} \\ (C_2 , V) \mid C_1 }} \lvert (C_2 , V) \rvert ;\end{aligned}$$ and if $n$ is odd, then $$\begin{aligned} f_{U,V} (n,h) = \frac{4(q-1)}{q^{\frac{1}{2}} \lvert UV \rvert^{\frac{1}{2}} } \sum_{r_1 = t'+1}^{n-h} q^{r_1 -(n-h)} \sum_{\substack{B_1 \in \mathcal{A}_{\leq s'+s-r_1 -1} \\ B_2 \in \mathcal{A}_{\leq r_1 -s' -2} \\ (B_2 , U) \mid B_1 }} \lvert (B_2 , U) \rvert \sum_{\substack{C_1 \in \mathcal{A}_{\leq t'+t-r_1} \\ C_2 \in \mathcal{M}_{\leq r_1 -t' -1} \\ (C_2 , V) \mid C_1 }} \lvert (C_2 , V) \rvert .\end{aligned}$$ Furthermore, we have the following bound for all $n$: $$\begin{aligned} f_{U,V} (n,h) \leq 4 q^{\frac{1}{2}} \lvert UV \rvert^{\frac{1}{2}} (\log_q \mathop{\mathrm{deg}}U) (\log_q \mathop{\mathrm{deg}}V) . \\\end{aligned}$$* *Case 3: Suppose $3 (\mathop{\mathrm{deg}}UV +1) \leq h < \min \{ s' , t' \} -1$ (and $n$ can be even or odd). Then, $$\begin{aligned} &\frac{1}{q^n} \sum_{A \in \mathcal{M}_n } \bigg\lvert \Delta_{S_{U,V}} (A;<h) \bigg\rvert^2 \\ = &4 (1-q^{-1}) q^h M (U,V) \Big( \frac{n}{2} -h \Big) + q^{2h - (n_2 -1)} f_{U,V} (n , n_1 -1) + O_{\epsilon} (q^h \lvert UV \rvert^{-1+\epsilon}) + O (q^{h + \frac{3}{2}} \mathop{\mathrm{deg}}UV ) ;\end{aligned}$$ where, for a non-zero polynomial $A$, we define $e_P (A)$ to be the maximal integer such that $P^{e_P (A)} \mid A$; and $$\begin{aligned} M (U,V) := \lvert UV \rvert^{-1} \prod_{P \mid UV} \Bigg( 1 + \bigg( \frac{1- \lvert P \rvert^{-1}}{1+ \lvert P \rvert^{-1}} \bigg) e_P (UV) \Bigg) .\end{aligned}$$ In particular, if $n \longrightarrow \infty$ with $\frac{n}{2} - h \longrightarrow \infty$ (note this includes both the intermediate and local regimes), we have $$\begin{aligned} \label{statement, main theorem, case 3 asymptotic} \frac{1}{q^n} \sum_{A \in \mathcal{M}_n } \bigg\lvert \Delta_{S_{U,V}} (A;<h) \bigg\rvert^2 \sim 4 (1-q^{-1}) q^h M (U,V) \Big( \frac{n}{2} -h \Big) .\end{aligned}$$* It is of interest to note that ([\[statement, main theorem, case 3 asymptotic\]](#statement, main theorem, case 3 asymptotic){reference-type="ref" reference="statement, main theorem, case 3 asymptotic"}) bears similarity to ([\[statement, rational ellipse variance, intermediate regime\]](#statement, rational ellipse variance, intermediate regime){reference-type="ref" reference="statement, rational ellipse variance, intermediate regime"}). The scalings are different, but we can see that $\big( \frac{n}{2} -h \big)$ in ([\[statement, main theorem, case 3 asymptotic\]](#statement, main theorem, case 3 asymptotic){reference-type="ref" reference="statement, main theorem, case 3 asymptotic"}) accounts for the $\rvert \log d \rvert \approx \big( \frac{1}{2} \log T - \log a \big)$ in $(\ref{statement, rational ellipse variance, intermediate regime})$. 
Also, the factor of $M (U,V)$ in ([\[statement, main theorem, case 3 asymptotic\]](#statement, main theorem, case 3 asymptotic){reference-type="ref" reference="statement, main theorem, case 3 asymptotic"}) is similar to, but not the exact analogue of, $d(p) d(q)$ in ([\[statement, main theorem, case 3 asymptotic\]](#statement, main theorem, case 3 asymptotic){reference-type="ref" reference="statement, main theorem, case 3 asymptotic"}). It would be interesting to investigate why this difference occurs. Furthermore, we note that ([\[statement, main theorem, case 3 asymptotic\]](#statement, main theorem, case 3 asymptotic){reference-type="ref" reference="statement, main theorem, case 3 asymptotic"}) includes the local regime, and as far as we are aware this is the first time that results on this regime have been obtained, either in the classical or function field setting.\ Cases 1 and 2 include the saturation regime for $c$ positive (recall, in our description of the saturation regime, $c$ is such that $h - \frac{n}{2} \longrightarrow c$). When $c$ is very small, and thus $h$ is just slightly larger than $\frac{n}{2}$, this is case 2 and we see some interesting behaviour.\ Finally, the global regime is also included in Case 1, and we see vanishing of the variance. This is not unusual in function fields; for example, we see vanishing of the variance over large intervals of the divisor function as well.\ The approach we employ to prove Theorem [Theorem 1](#main theorem, lattice point variance elliptic annuli){reference-type="ref" reference="main theorem, lattice point variance elliptic annuli"} is different to the approaches that have been used for the classical problem described earlier, and our approach makes use of the structure of polynomial ring.\ Full details can be found in Section [2](#section, Hankel matrices){reference-type="ref" reference="section, Hankel matrices"}, but a short summary of our approach is the following. We count solutions to the equation $B - UE^2 - VF^2 = 0$ using the indicator function $$\begin{aligned} \text{$\mathbbm{1} (A)$} := \begin{cases} 1 &\text{ if $A=0$,} \\ 0 &\text{ if $A \in \mathcal{A} \backslash \{ 0 \}$.} \end{cases}\end{aligned}$$ We then take a certain Fourier expansion of $\mathbbm{1}$ using additive characters and, because each function in the expansion is additive, we can consider each term $B$, $UE^2$, and $VF^2$ separately. This ultimately requires us to understand the kernel structure of Hankel matrices[^3] over $\mathbb{F}_q$. Interestingly, the kernel of a Hankel matrix can be interpreted as the linear span of two coprime polynomials. Details can be found in Section [2](#section, Hankel matrices){reference-type="ref" reference="section, Hankel matrices"}, but a brief indication of how Hankel matrices appear is the following. Suppose we have two polynomials $E = e_0 + e_1 T + \ldots + e_l T^l \in \mathcal{A}_l$ and $F = f_0 + f_1 T + \ldots + f_m T^m \in \mathcal{A}_m$ with $l+m=n$. 
With a few calculations, we can see that $$\begin{aligned} \begin{matrix} EF = \begin{pmatrix} e_0 & e_1 & \cdots & e_l \end{pmatrix} \\ \\ \\ \\ \\ \\ \end{matrix} \begin{pmatrix} 1 & T & T^2 & \cdots & \cdots & T^m \\ T & T^2 & & & & \vdots \\ T^2 & & & & & \vdots \\ \vdots & & & & & \vdots \\ \vdots & & & & & T^{n-1} \\ T^l & \cdots & \cdots & \cdots & T^{n-1} & T^{n} \end{pmatrix} \begin{matrix} \begin{pmatrix} f_0 \\ f_1 \\ \vdots \\ f_m \end{pmatrix} \\ \\ \\ \end{matrix} .\end{aligned}$$ Now, this is the approach we previously used for the variance over short intervals in $\mathbb{F}_q [T]$ for the divisor function [@Yiasemides2021_VariCorrDivFuncFpTHankelMatr_ArXiv_v2] and for a restricted sum-of-squares function [@Yiasemides2022_VariSumTwoSquareOverIntervalFqT_I_Arxiv]. Indeed, in those cases we are counting solutions to the equations $B-EF=0$ and $B- E^2 - F^2 = 0$. However, in this paper the situation is more complex due to the fact that we have terms in the equation $B - UE^2 - VF^2 = 0$ that are products of three polynomials. Typically, this would mean we must work with three-dimensional Hankel arrays (tensors), which is considerably more difficult. Indeed, this is also the obstacle that we would encounter were we to investigate the variance over short intervals of higher divisor functions, such as $d_3$ where we would work with the equation $B - EFG = 0$; and this is an unsolved problem with powerful implications. However, because $U,V$ are fixed, we are able to make progress, but we must obtain further results on Hankel matrices. Specifically, these are Lemmas [Lemma 1](#lemma, bijection from quasiregular Hankel matrices to M_r times A_<r){reference-type="ref" reference="lemma, bijection from quasiregular Hankel matrices to M_r times A_<r"} and [Lemma 1](#lemma, U reduction, 1 dimensional kernel case){reference-type="ref" reference="lemma, U reduction, 1 dimensional kernel case"}, and they involve understanding the common factors between $UV$ and the polynomials found in the kernels of Hankel matrices. Most of the rest of Section [2](#section, Hankel matrices){reference-type="ref" reference="section, Hankel matrices"} is dedicated to providing results on Hankel matrices that we have already established in [@Yiasemides2021_VariCorrDivFuncFpTHankelMatr_ArXiv_v2; @Yiasemides2022_VariSumTwoSquareOverIntervalFqT_I_Arxiv].\ The proof of the entirety of Theorem [Theorem 1](#main theorem, lattice point variance elliptic annuli){reference-type="ref" reference="main theorem, lattice point variance elliptic annuli"} is given in Section [3](#section, main theorem proof){reference-type="ref" reference="section, main theorem proof"}. For a discussion on how one could investigate the extension of our approach to (1) moments higher than the variance, (2) to other arithmetic functions such as higher divisor functions, and (3) to a possible classical analogue, then we refer the reader to Subsection 1.4 of [@Yiasemides2021_VariCorrDivFuncFpTHankelMatr_ArXiv_v2]. # Hankel Matrices {#section, Hankel matrices} An $(l+1) \times (m+1)$ Hankel matrix over $\mathbb{F}_q$ is a matrix of the form $$\begin{aligned} \begin{pNiceMatrix} \alpha_0 & \alpha_1 & \alpha_2 & \Cdots & & \alpha_m \\ \alpha_1 & \alpha_2 & & & & \Vdots \\ \alpha_2 & & & & & \\ \Vdots & & & & & \\ & & & & & \alpha_{n-1} \\ \alpha_l & & & \Cdots & \alpha_{n-1} & \alpha_{n} \end{pNiceMatrix} ,\end{aligned}$$ where $n := l+m$ and $\alpha_0 , \ldots , \alpha_n \in \mathbb{F}_q$. 
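The bilinear identity displayed at the start of this passage is easy to check symbolically, and doing so makes the constant-skew-diagonal pattern shared by the matrices just defined very visible. The short sympy sketch below is ours (the example coefficients are arbitrary): it builds the matrix of monomials $T^{i+j}$ and confirms that sandwiching it between the coefficient vectors of $E$ and $F$ reproduces the product $EF$.

```python
import sympy as sp

T = sp.symbols('T')

e = [1, 2, 0, 3]     # coefficients of E = 1 + 2T + 3T^3, so l = 3
f = [4, 0, 5]        # coefficients of F = 4 + 5T^2,      so m = 2
l, m = len(e) - 1, len(f) - 1

E = sum(c * T**i for i, c in enumerate(e))
F = sum(c * T**j for j, c in enumerate(f))

# (l+1) x (m+1) matrix with (i, j) entry T^(i+j): constant along skew-diagonals.
H_T = sp.Matrix(l + 1, m + 1, lambda i, j: T**(i + j))

bilinear = (sp.Matrix([e]) * H_T * sp.Matrix(f))[0, 0]
print(sp.expand(E * F - bilinear) == 0)      # True: EF is a bilinear form in e and f
```

Replacing the monomials $T^{i+j}$ by scalars $\alpha_{i+j} \in \mathbb{F}_q$ gives exactly the Hankel matrices defined above, and this is essentially how such matrices will enter the analysis later on.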
A Hankel matrix is similar to a Toeplitz matrix, but in this case all entries on a given skew-diagonal are the same. We can associate the sequence $$\begin{aligned} \boldsymbol{\alpha} := (\alpha_0 , \alpha_1 , \ldots , \alpha_n)\end{aligned}$$ to the matrix above, and denote the matrix by $H_{l+1 , m+1} (\boldsymbol{\alpha})$. We can see that there are $n+1$ possible values for $l,m \geq 0$ such that $l+m=n$, and so there are $n+1$ possible matrices we can associate with $\boldsymbol{\alpha}$ in this way. As we will briefly discuss later, when studying the kernel of Hankel matrices, it is natural to group the matrices that have the same associated sequence and consider how the kernel differs between $H_{l+1 , m+1} (\boldsymbol{\alpha})$ and $H_{(l+1)-1 , (m+1)+1} (\boldsymbol{\alpha}) = H_{l , m+2} (\boldsymbol{\alpha})$. This then allows us to understand the kernel structure of Hankel matrices generally.\ Hankel matrices are defined by the characteristic that $\alpha_{i,j} = \alpha_{k,l}$ whenever $i+j=k+l$, where $\alpha_{i,j}$ is the $(i,j)$-th entry of the matrix. This can be generalised to higher dimensional arrays (tensors). For example, if we have a $k$-dimensional tensor with $(i_1 , \ldots , i_k)$-th entry denoted by $\alpha_{i_1 , \ldots , i_k}$, then we say it is Hankel if $\alpha_{i_1 , \ldots , i_k} = \alpha_{j_1 , \ldots , j_k}$ whenever $i_1 + \ldots + i_k = j_1 + \ldots + j_k$. Note that one-dimensional arrays (that is, vectors) are always Hankel.\ At this point, one may ask what the connection between Hankel matrices and function fields is. To answer this, let us first make a definition. Suppose we have a polynomial $B=b_0 + b_1 T + \ldots + b_n T^n \in \mathbb{F}_q [T]$ with $b_n \neq 0$. Then, for $k \geq n$, we define the vector $[B]_k \in \mathbb{F}_q^{k+1}$ by $$\begin{aligned} [B]_k := \begin{pmatrix} b_0 \\ b_1 \\ \vdots \\ b_n \\ 0 \\ \vdots \\ 0 \end{pmatrix} ,\end{aligned}$$ where there are $k-n$ zeros at the end. Typically, we will have $k=n$ or $k=n+1$ (and so either no zeros or one zero at the end), but not always. Also, if $B=0$ then $[B]_k$ is just defined to be the $(k+1) \times 1$ zero vector. In essence, $[B]_k$ is just a representation of $B$ as a vector, but we are also keeping track of the number of zeros at the end. Now, suppose that $B \in \mathcal{A}_n$. We clearly have $$\begin{aligned} \label{statement, B as Hankel vector product} \begin{matrix} B = \begin{pmatrix} b_0 & b_1 & \cdots & b_n \end{pmatrix} \\ \\ \\ \\ \end{matrix} \begin{pmatrix} 1 \\ T \\ \vdots \\ T^n \end{pmatrix} .\end{aligned}$$ Now suppose we have $E = e_0 + e_1 T + \ldots + e_l T^l \in \mathcal{A}_l$ and $F = f_0 + f_1 T + \ldots + f_m T^m \in \mathcal{A}_m$ with $l+m=n$. With a few calculations, we can see that $$\begin{aligned} \label{statement, EF as Hankel vector product} \begin{matrix} EF = \begin{pmatrix} e_0 & e_1 & \cdots & e_l \end{pmatrix} \\ \\ \\ \\ \\ \\ \end{matrix} \begin{pmatrix} 1 & T & T^2 & \cdots & \cdots & T^m \\ T & T^2 & & & & \vdots \\ T^2 & & & & & \vdots \\ \vdots & & & & & \vdots \\ \vdots & & & & & T^{n-1} \\ T^l & \cdots & \cdots & \cdots & T^{n-1} & T^{n} \end{pmatrix} \begin{matrix} \begin{pmatrix} f_0 \\ f_1 \\ \vdots \\ f_m \end{pmatrix} \\ \\ \\ \end{matrix} .\end{aligned}$$ What we see here is "Hankel structure" appearing in polynomial multiplication.
That is, in ([\[statement, B as Hankel vector product\]](#statement, B as Hankel vector product){reference-type="ref" reference="statement, B as Hankel vector product"}), we have a product of one polynomial (just $B$), and this involves the Hankel vector $(1 , T , \cdots , T^n)^T$. In ([\[statement, EF as Hankel vector product\]](#statement, EF as Hankel vector product){reference-type="ref" reference="statement, EF as Hankel vector product"}), we have the product of two polynomials $E,F$, and this involves the $(l+1) + (m+1)$ Hankel matrix above. If we were to consider a product of three polynomials, we could obtain a similar representation as above but with a three-dimensional Hankel tensor.\ This gives an indication of how "Hankel structure" relates to polynomial multiplication. However, to see how Hankel matrices appear, that are specifically over $\mathbb{F}_q$, we will first need to make the following two definitions. **Definition 1** (Additive Character). *A function $\psi : \mathbb{F}_q \longrightarrow \mathbb{C}^*$ is an additive character on $\mathbb{F}_q$ if $\psi (a+b) = \psi (a) \psi (b)$ for all $a,b \in \mathbb{F}_q$. We say it is non-trivial if there exists some $c \in \mathbb{F}_q^*$ such that $\psi (c) \neq 1$. In the remainder of the paper, we will take $\psi$ to be a fixed non-trivial additive character on $\mathbb{F}_q$, unless otherwise stated.* Non-trivial additive characters satisfy the orthogonality relation $$\begin{aligned} \label{statement, additive character FF orthog relation} \frac{1}{q} \sum_{a \in \mathbb{F}_q} \psi (ab) = \begin{cases} 1 &\text{ if $b=0$,} \\ 0 &\text{ if $b \in \mathbb{F}_q^*$.} \end{cases}\end{aligned}$$ They can be viewed as analogous to the classical exponential function, and in a similar way that the exponential function is used in Fourier expansions, we can define Fourier expansions over $\mathcal{A}$ using additive characters. In this paper we will only require the Fourier expansion of the indicator function, for its use in counting solutions to Diophantine equations as described in Section [1](#section, introduction){reference-type="ref" reference="section, introduction"}. **Definition 1** (Fourier Expansion of the Indicator Function). *We define $\mathbbm{1} : \mathcal{A} \longrightarrow \mathbb{C}$ by $$\begin{aligned} \text{$\mathbbm{1} (A)$} = \begin{cases} 1 &\text{ if $A=0$,} \\ 0 &\text{ if $A \in \mathcal{A} \backslash \{ 0 \}$;} \end{cases}\end{aligned}$$ and, for any $n \geq 0$ and $A \in \mathcal{A}_{\leq n}$, we have the Fourier expansion $$\begin{aligned} \mathbbm{1} (A) = \frac{1}{q^{n+1}} \sum_{\boldsymbol{\alpha} \in \mathbb{F}_q^{n+1}} \psi (\boldsymbol{\alpha} \cdot [A]_n) .\end{aligned}$$* In some cases, we will have products $EF$ instead of $A$ above, and this is how Hankel matrices over $\mathbb{F}_q$ appear. 
Indeed, we have $$\begin{aligned} \label{statement, how Hankel matrices appear from products} \begin{matrix} \boldsymbol{\alpha} \cdot [EF]_n = \begin{pmatrix} e_0 & e_1 & \cdots & e_l \end{pmatrix} \\ \\ \\ \\ \\ \\ \end{matrix} \begin{pmatrix} 1 & \alpha_1 & \alpha_2 & \cdots & \cdots & \alpha_m \\ \alpha_1 & \alpha_2 & & & & \vdots \\ \alpha_2 & & & & & \vdots \\ \vdots & & & & & \vdots \\ \vdots & & & & & \alpha_{n-1} \\ \alpha_l & \cdots & \cdots & \cdots & \alpha_{n-1} & \alpha_{n} \end{pmatrix} \begin{matrix} \begin{pmatrix} f_0 \\ f_1 \\ \vdots \\ f_m \end{pmatrix} \\ \\ \\ \end{matrix} .\end{aligned}$$ Now that we have seen how Hankel matrices relate to function fields, let us recall some definitions and results that we have established in [@Yiasemides2021_VariCorrDivFuncFpTHankelMatr_ArXiv_v2; @Yiasemides2022_VariSumTwoSquareOverIntervalFqT_I_Arxiv] that we will require later.\ In the remainder of the paper, for an integer $n \geq 0$, we define $n_1 := \lfloor \frac{n+2}{2} \rfloor$ and $n_2 := \lfloor \frac{n+3}{2} \rfloor$. In particular, if we have $\boldsymbol{\alpha} = (\alpha_0 , \ldots , \alpha_n)$, then $H_{n_1 , n_2} (\boldsymbol{\alpha})$ is the square/almost square matrix associated to $\boldsymbol{\alpha}$, depending on whether $n$ is even/odd.\ Now suppose we have $l,m$ such that $l+m =: n' \leq n$. If we have the equality $n'=n$, then $H_{l+1,m+1} (\boldsymbol{\alpha})$ is well-defined based on our definitions above. If $n' < n$, then we define $$\begin{aligned} H_{l+1,m+1} (\boldsymbol{\alpha}) := H_{l+1,m+1} (\boldsymbol{\alpha}')\end{aligned}$$ where $\boldsymbol{\alpha}' := (\alpha_0 , \ldots , \alpha_{n'})$ is a truncation of $\boldsymbol{\alpha}$ and the right side is well defined based on our previously established definitions. In particular, the matrices $H_{1,1} (\boldsymbol{\alpha}) , \ldots , H_{n_1 , n_1} (\boldsymbol{\alpha})$ are top-left submatrices of $H_{n_1 , n_2} (\boldsymbol{\alpha})$.\ We now define the $(\rho , \pi )$-characteristic of a Hankel matrix, which is given in [@HeinigRost1984_AlgMethToeplitzMatrOperat]. There, their definition is different but ultimately equivalent to ours. Ours is based more on the results found in [@Garcia-ArmasGhorpadeRam2011_RelativePrimePolyNonsingHankelMatrFinField]. **Definition 1** (The $(\rho , \pi )$-characteristic.). *Let $\boldsymbol{\alpha} \in \mathbb{F}_q^{n+1}$.* - *We define $r (\boldsymbol{\alpha})$ to be the rank of $H_{n_1 , n_2} (\boldsymbol{\alpha})$.\ * - *We define $\rho (\boldsymbol{\alpha})$ to be the largest integer $\rho_1$ in $\{ 1 , \ldots , n_1 \}$ such that $H_{\rho_1 , \rho_1} (\boldsymbol{\alpha})$ is invertible, and if no such $\rho_1$ exists then we take $\rho (\boldsymbol{\alpha}) = 0$.\ * - *We also define $\pi (\boldsymbol{\alpha}) := r (\boldsymbol{\alpha}) - \rho (\boldsymbol{\alpha})$.* *We say the $(\rho , \pi )$-characteristic of $\boldsymbol{\alpha}$ is $(\pi_1 , \rho_1)$ if $\rho (\boldsymbol{\alpha}) = \rho_1$ and $\pi (\boldsymbol{\alpha}) = \pi_1$.* The $(\rho , \pi )$-characteristic is a natural property to study, as will become clear later. We will also define the strict $(\rho , \pi )$-characteristic, which is very similar. **Definition 1** (The strict $(\rho , \pi )$-characteristic.). 
*Let $\boldsymbol{\alpha} \in \mathbb{F}_q^{n+1}$.* - *We define $r (\boldsymbol{\alpha})$ to be the rank of $H_{n_1 , n_2} (\boldsymbol{\alpha})$.\ * - *We define $\rho_{\mathrm{S}}(\boldsymbol{\alpha})$ to be the largest integer $\rho_1$ in $\{ 1 , \ldots , n_2 -1 \}$ such that $H_{\rho_1 , \rho_1} (\boldsymbol{\alpha})$ is invertible, and if no such $\rho_1$ exists then we take $\rho (\boldsymbol{\alpha}) = 0$.\ * - *We also define $\pi_{\mathrm{S}}(\boldsymbol{\alpha}) := r (\boldsymbol{\alpha}) - \rho_{\mathrm{S}}(\boldsymbol{\alpha})$.* *We say the strict $(\rho , \pi )$-characteristic of $\boldsymbol{\alpha}$ is $(\pi_1 , \rho_1)$ if $\rho_{\mathrm{S}}(\boldsymbol{\alpha}) = \rho_1$ and $\pi_{\mathrm{S}}(\boldsymbol{\alpha}) = \pi_1$.* The only difference between the $(\rho , \pi )$-characteristic and the strict $(\rho , \pi )$-characteristic is in the definition of $\rho_{\mathrm{S}}(\boldsymbol{\alpha})$ (although, it does indirectly affect the definition of $\pi_{\mathrm{S}}(\boldsymbol{\alpha})$ as well). We have that $\rho (\boldsymbol{\alpha})$ can take values up to $n_1$, while $\rho_{\mathrm{S}}(\boldsymbol{\alpha})$ can take values up to $n_2 -1$. Note that if $n$ is odd then $n_1 = n_2 -1$, and so the difference exists only when $n$ is even. Even then, the value of $\rho_{\mathrm{S}}(\boldsymbol{\alpha})$ differs from $\rho (\boldsymbol{\alpha})$ only when $H_{n_1 , n_1} (\boldsymbol{\alpha})$ has full rank (and thus is invertible). Studying Hankel matrices often involves viewing them as extensions of their submatrices, and it turns out that these full-rank square Hankel matrices mentioned above are "threshold" cases in the extensions, and this leads to two possible natural concepts for the $(\rho , \pi )$-characteristic. It will be helpful at times to extend our definitions to the Hankel matrices associated to $\boldsymbol{\alpha}$: For $l,m \geq 0$ with $l+m=n$, we define $$\begin{aligned} &r \big( H_{l+1 , m+1} (\boldsymbol{\alpha}) \big) := r (\boldsymbol{\alpha}) , \\ &\rho \big( H_{l+1 , m+1} (\boldsymbol{\alpha}) \big) := \rho (\boldsymbol{\alpha}) , &&\rho_{\mathrm{S}}\big( H_{l+1 , m+1} (\boldsymbol{\alpha}) \big) := \rho_{\mathrm{S}}(\boldsymbol{\alpha}) , \\ &\pi \big( H_{l+1 , m+1} (\boldsymbol{\alpha}) \big) := \pi (\boldsymbol{\alpha}) , &&\pi_{\mathrm{S}}\big( H_{l+1 , m+1} (\boldsymbol{\alpha}) \big) := \pi_{\mathrm{S}}(\boldsymbol{\alpha}) .\end{aligned}$$ Note that $r \big( H_{l+1 , m+1} (\boldsymbol{\alpha}) \big)$ is not necessarily equal to the rank of $H_{l+1 , m+1} (\boldsymbol{\alpha})$.\ We can now define the following sets in $\mathbb{F}_q^{n+1}$. **Definition 1**. *Let $n,h \geq 0$. 
We define $$\begin{aligned} \mathcal{L}_n^h := &\{ \boldsymbol{\alpha} \in \mathbb{F}_q^{n+1} : \text{ $\alpha_i = 0$ for $0 \leq i \leq h-1$} \} , \\ \mathcal{L}_n (r_1) := &\{ \boldsymbol{\alpha} \in \mathbb{F}_q^{n+1} : r (\boldsymbol{\alpha}) = r_1 \} , \\ \mathcal{L}_n^h (r_1) := &\{ \boldsymbol{\alpha} \in \mathbb{F}_q^{n+1} : r (\boldsymbol{\alpha}) = r_1 , \text{ $\alpha_i = 0$ for $0 \leq i \leq h-1$} \} , \\ \vspace{0.5em} \\ \mathcal{L}_n (r_1 , \rho_1 , \pi_1) := &\{ \boldsymbol{\alpha} \in \mathbb{F}_q^{n+1} : r (\boldsymbol{\alpha}) = r_1 , \rho (\boldsymbol{\alpha}) = \rho_1 , \pi (\boldsymbol{\alpha}) = \pi_1 \} , \\ \mathcal{L}_n^h (r_1 , \rho_1 , \pi_1) := &\{ \boldsymbol{\alpha} \in \mathbb{F}_q^{n+1} : r (\boldsymbol{\alpha}) = r_1 , \rho (\boldsymbol{\alpha}) = \rho_1 , \pi (\boldsymbol{\alpha}) = \pi_1 , \text{ $\alpha_i = 0$ for $0 \leq i \leq h-1$} \} , \\ \vspace{0.5em} \\ \prescript{}{\mathrm{S}}{\mathcal{L}}_n (r_1 , \rho_1 , \pi_1) := &\{ \boldsymbol{\alpha} \in \mathbb{F}_q^{n+1} : r (\boldsymbol{\alpha}) = r_1 , \rho_{\mathrm{S}}(\boldsymbol{\alpha}) = \rho_1 , \pi_{\mathrm{S}}(\boldsymbol{\alpha}) = \pi_1 \} , \\ \prescript{}{\mathrm{S}}{\mathcal{L}}_n^h (r_1 , \rho_1 , \pi_1) := &\{ \boldsymbol{\alpha} \in \mathbb{F}_q^{n+1} : r (\boldsymbol{\alpha}) = r_1 , \rho_{\mathrm{S}}(\boldsymbol{\alpha}) = \rho_1 , \pi_{\mathrm{S}}(\boldsymbol{\alpha}) = \pi_1 , \text{ $\alpha_i = 0$ for $0 \leq i \leq h-1$} \} .\end{aligned}$$* It is helpful to keep in mind that, by definition, we always have $r_1 = \rho_1 + \pi_1$ above. Note also that for $n$ even, the set $\prescript{}{\mathrm{S}}{\mathcal{L}}_n (n_1 , n_1 , 0)$ is empty, because we always have $\rho_{\mathrm{S}}(\boldsymbol{\alpha}) \leq n_2 -1 = n_1 -1$.\ Now, let us make some remarks that we will refer to later, but first we will need to define lower skew-triangular Hankel matrices. **Definition 1** (Lower Skew-triangular Hankel Matrices). *Lower skew-triangular Hankel matrices are defined to be exactly the matrices $H_{n_1 , n_2} (\boldsymbol{\alpha})$ for which $\boldsymbol{\alpha} \in \mathcal{L}_n^{n_1}$ (for any $n \geq 0$). In particular, the first $n_1$ entries of $\boldsymbol{\alpha}$ are zero, and so $H_{n_1 , n_2} (\boldsymbol{\alpha})$ is of the form $$\begin{aligned} \begin{pNiceMatrix} 0 & \Cdots & & & & & 0 \\ \Vdots & & & & & & \Vdots \\ 0 & \Cdots & & & & & 0 \\ 0 & \Cdots & & & & 0 & \alpha_{n_1 +1} \\ 0 & \Cdots & & & 0 & \alpha_{n_1 +1} & \alpha_{n_1 +2} \\ \Vdots & & & \Iddots & \Iddots & \Iddots & \Vdots \\ 0 & \Cdots & 0 & \alpha_{n_1 +1} & \alpha_{n_1 +2} & \Cdots & \alpha_n \\ \end{pNiceMatrix} .\end{aligned}$$ Recall that if $\boldsymbol{\alpha} \in \mathcal{L}_n^{n_1}$ then *at least* the first $n_1$ entries of $\boldsymbol{\alpha}$ that are zero, and there may be more. Thus $\alpha_{n_1 +1}$ is not necessarily non-zero above. It is also worth noting that if $n$ is even, then there must be at least one row of zeros at the top and at least one column of zeros on the left; while if $n$ is odd then there is at least one column of zeros on the left, but it is possible that there are no rows of zeros at the top.* **Remark 1**. *Suppose that $\boldsymbol{\alpha} \in \mathcal{L}_n^h$. Then, we must have that $\rho (\boldsymbol{\alpha}) = 0$ or $\rho (\boldsymbol{\alpha}) \geq h+1$, and $\rho_{\mathrm{S}}(\boldsymbol{\alpha}) = 0$ or $\rho_{\mathrm{S}}(\boldsymbol{\alpha}) \geq h+1$. This is not surprising. 
Indeed, if the first $h$ entries of $\boldsymbol{\alpha}$ are zero, then $H_{l,l} (\boldsymbol{\alpha})$ is strictly lower skew-triangular for $l \leq h$, and thus it is not invertible.\ Note this implies that if $h \geq n_1$, then $\rho (\boldsymbol{\alpha}) = 0$ (and $\rho_{\mathrm{S}}(\boldsymbol{\alpha}) = 0$). Furthermore, $H_{n_1 , n_2} (\boldsymbol{\alpha})$ is lower skew-triangular. In particular, if we let $z \geq h$ be such that there are exactly $z$ zeros at the start of $\boldsymbol{\alpha}$ (Recall that if $\boldsymbol{\alpha} \in \mathcal{L}_n^h$ then $\boldsymbol{\alpha}$ has *at least* $h$ zeros at the start, but it could have more), then $H_{n_1 , n_2} (\boldsymbol{\alpha})$ is of the form $$\begin{aligned} \begin{pNiceMatrix} 0 & \Cdots & & & & & 0 \\ \Vdots & & & & & & \Vdots \\ 0 & \Cdots & & & & & 0 \\ 0 & \Cdots & & & & 0 & \alpha_{z+1} \\ 0 & \Cdots & & & 0 & \alpha_{z+1} & \alpha_{z+2} \\ \Vdots & & & \Iddots & \Iddots & \Iddots & \Vdots \\ 0 & \Cdots & 0 & \alpha_{z+1} & \alpha_{z+2} & \Cdots & \alpha_n \\ \end{pNiceMatrix} \end{aligned}$$ with $\alpha_{z+1} \neq 0$. In particular, we see that the rank of $H_{n_1 , n_2} (\boldsymbol{\alpha})$ is $n-z$.\ We also note a converse result. If $\boldsymbol{\alpha} \in \mathcal{L}_n$ and $\rho (\boldsymbol{\alpha}) = 0$ (or $\rho_{\mathrm{S}}(\boldsymbol{\alpha}) = 0$), then $H_{n_1 , n_2} (\boldsymbol{\alpha})$ is lower skew-triangular. Indeed, by definition of $\rho$ (and $\rho_{\mathrm{S}}$), we have that $H_{1,1} (\boldsymbol{\alpha}) , \ldots , H_{n_1,n_1} (\boldsymbol{\alpha})$ are all not invertible, and then an inductive argument tells us that the first $n_1$ entries of $\boldsymbol{\alpha}$ are zero.* Theorem 2.3.1 of [@Yiasemides2021_VariCorrDivFuncFpTHankelMatr_ArXiv_v2] gives us the number of elements in the sets we have defined above: **Lemma 1**. *Let $n \geq 0$ and $0 \leq h \leq n+1$, and consider $\mathscr{L}_{n}^{h} (r , \rho_1 , \pi_1 )$. Let us also define $\delta_{\mathrm{E}}(n)$ to be $1$ if $n$ is even, and $0$ if $n$ is odd.* - *Suppose $0 \leq r \leq \min \{ n_1 - \delta_{\mathrm{E}}(n) , n-h+1 \}$. Then, $$\begin{aligned} \lvert \mathscr{L}_{n}^{h} (r , 0 , r ) \rvert = \begin{cases} 1 &\text{ if $r=0$,} \\ (q-1) q^{r-1} &\text{ if $r > 0$.} \end{cases} \\\end{aligned}$$* - *Suppose that $h+1 \leq \rho_1 \leq n_1 - 1$ and $0 \leq \pi_1 \leq n_1 - \rho_1 - \delta_{\mathrm{E}}(n)$. Then, $$\begin{aligned} \lvert \mathscr{L}_{n}^{h} (\rho_1 + \pi_1 , \rho_1 , \pi_1 ) \rvert = \begin{cases} (q-1) q^{2 \rho_1 - h - 1} &\text{ if $\pi_1 = 0$,} \\ (q-1)^2 q^{2 \rho_1 + \pi_1 - h -2} &\text{ if $\pi_1 > 0$.} \end{cases} \\\end{aligned}$$* - *Suppose $h+1 \leq n_1$. We have $$\begin{aligned} \lvert \mathscr{L}_{n}^{h} (n_1 , n_1 , 0 ) \rvert = (q-1) q^{n-h} . \\\end{aligned}$$* - *Consider $\mathscr{L}_n^h (r )$. We have $$\begin{aligned} \lvert \mathscr{L}_n^h (r ) \rvert = \begin{cases} 1 &\text{ if $r=0$,} \\ (q-1) q^{r-1} &\text{ if $1 \leq r \leq \min \{ h , n-h+1 \} $,} \\ (q^2 -1) q^{2r-h-2} &\text{ if $h+1 \leq r \leq n_1 - 1$,} \\ q^{n-h+1} - q^{2n_1 - h -2} &\text{ if $r = n_1$ (which is only possible if $h+1 \leq n_1$).} \end{cases}\end{aligned}$$* Now that we have defined the $(\rho , \pi )$-characteristic of a Hankel matrix, we can introduce the $(\rho , \pi )$-form of such matrices. This involves the application of certain row operations, and the resulting matrix is useful in understanding the kernel structure. 
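As an aside, all of the objects above are finite and explicit, so the $(\rho , \pi )$-characteristic can be computed directly from the definitions. The Python sketch below is ours and not part of the paper (the choices $q = 3$ and $n = 4$ are arbitrary, and $q$ is taken to be prime so that arithmetic modulo $q$ suffices): it builds $H_{n_1 , n_2} (\boldsymbol{\alpha})$, computes $r (\boldsymbol{\alpha})$, $\rho (\boldsymbol{\alpha})$ and $\pi (\boldsymbol{\alpha})$ by Gaussian elimination over $\mathbb{F}_q$, and tallies the resulting triples over all $\boldsymbol{\alpha} \in \mathbb{F}_q^{n+1}$; the tallies can be compared with the counting lemma quoted above (taking $h = 0$ there).

```python
from collections import Counter
from itertools import product

def rank_mod(M, p):
    """Rank of a matrix (list of rows with entries in {0,...,p-1}) over F_p."""
    M = [row[:] for row in M]
    rank, rows, cols = 0, len(M), len(M[0]) if M else 0
    for c in range(cols):
        piv = next((r for r in range(rank, rows) if M[r][c]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c], -1, p)
        M[rank] = [(x * inv) % p for x in M[rank]]
        for r in range(rows):
            if r != rank and M[r][c]:
                fac = M[r][c]
                M[r] = [(x - fac * y) % p for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

def hankel(alpha, rows, cols):
    """The rows x cols Hankel matrix with (i, j) entry alpha[i + j]."""
    return [[alpha[i + j] for j in range(cols)] for i in range(rows)]

def characteristic(alpha, p):
    """(r, rho, pi) for alpha = (alpha_0, ..., alpha_n), following the definitions above."""
    n = len(alpha) - 1
    n1, n2 = (n + 2) // 2, (n + 3) // 2
    r = rank_mod(hankel(alpha, n1, n2), p)
    rho = 0
    for k in range(1, n1 + 1):
        if rank_mod(hankel(alpha, k, k), p) == k:    # H_{k,k}(alpha) is invertible
            rho = k
    return r, rho, r - rho

p, n = 3, 4
tally = Counter(characteristic(alpha, p) for alpha in product(range(p), repeat=n + 1))
for (r, rho, pi), count in sorted(tally.items()):
    print(f"|L_{n}({r}, {rho}, {pi})| = {count}")
```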
We will only give a brief summary of this so that it is clear to the reader what the importance of the $(\rho , \pi )$-characteristic is in relation to the kernel, before explicitly giving results on the kernel structure. For more details, we refer the reader to Section 2 of [@Yiasemides2021_VariCorrDivFuncFpTHankelMatr_ArXiv_v2].\ Suppose we have $\boldsymbol{\alpha} \in \mathcal{L}_n (r_1 , \rho_1 , \pi_1)$ with $1 \leq \rho_1 \leq n_1 -1$, and consider the matrix $H_{n_1 , n_2} (\boldsymbol{\alpha})$. By definition of $\rho (\boldsymbol{\alpha})$ we can see that the top-left submatrix $H_{\rho_1 , \rho_1} (\boldsymbol{\alpha})$ is invertible. In particular, there is a solution $\mathbf{x} = (x_0 , \ldots , x_{\rho_1 -1}) \in \mathbb{F}_q^{\rho_1}$ to $$\begin{aligned} \label{statement, H_(rho_1 , rho_1) (alpha) x = right vector} H_{\rho_1 , \rho_1} (\boldsymbol{\alpha}) \mathbf{x} = \begin{pmatrix} \alpha_{\rho_1} \\ \alpha_{\rho_1 +1} \\ \vdots \\ \alpha_{2\rho_1 -1} \end{pmatrix} .\end{aligned}$$ The vector on the right side is simply the column vector directly to the right of the submatrix $H_{\rho_1 , \rho_1} (\boldsymbol{\alpha})$ in $H_{n_1 , n_2} (\boldsymbol{\alpha})$. By symmetry, it is also the transpose of the row directly below $H_{\rho_1 , \rho_1} (\boldsymbol{\alpha})$. Now let $R_i$ be the $i$-th row of $H_{n_1 , n_2} (\boldsymbol{\alpha})$ and apply the row operations $$\begin{aligned} R_i \longrightarrow R_i - (x_0 , \ldots , x_{\rho_1 -1}) \begin{pmatrix} R_{i-\rho_1} \\ \vdots \\ R_{i-1} \end{pmatrix}\end{aligned}$$ for $i = n_1 , n_1 -1 , \ldots , \rho_1 +1$ in that order. The resulting matrix is of the form $$\begin{aligned} \label{statement, rhopi-form of H_(n_1 , n_2) (alpha)} \left( \begin{array}{c|c} H_{\rho_1 , \rho_1} (\boldsymbol{\alpha}) & \boldsymbol{*} \\ \hline \mathbf{0} & \begin{NiceMatrix} 0 & \Cdots & & & & & 0 \\ \Vdots & & & & & & \Vdots \\ 0 & \Cdots & & & & & 0 \\ 0 & \Cdots & & & & 0 & \lambda \\ 0 & \Cdots & & & 0 & \lambda & \beta_2 \\ \Vdots & & & \Iddots & \Iddots & \Iddots & \Vdots \\ 0 & \Cdots & 0 & \lambda & \beta_2 & \Cdots & \beta_{\pi_1} \\ \end{NiceMatrix} \end{array} \right) .\end{aligned}$$ The top-right quadrant $\boldsymbol{*}$ is the same as the corresponding submatrix in $H_{n_1 , n_2} (\boldsymbol{\alpha})$, but we do not need to express it explicitly; $\lambda$ is some element in $\mathbb{F}_q^*$; and $\beta_2 , \ldots , \beta_{\pi_1} \in \mathbb{F}_q$. The full and rigorous explanation of this is given in Section 2.2 of [@Yiasemides2021_VariCorrDivFuncFpTHankelMatr_ArXiv_v2], but we can give some intuitive indications as to its validity. First, note that the entries directly below $H_{\rho_1 , \rho_1} (\boldsymbol{\alpha})$ must be zero by definition of $\mathbf{x}$ and the row operations we applied. Second, the row operations we applied certainly changed the bottom two quadrants, but they preserved the property that the matrix formed by these bottom two quadrants is Hankel. Thus, all that remains to be shown is that the first non-zero skew-diagonal of the bottom-right quadrant is the $\pi_1$-th skew diagonal from the end. This can be proved inductively by considering the submatrices $H_{l,l} (\boldsymbol{\alpha})$ for $l = \rho_1 +1 , \ldots n_1$ and using the fact that all of these are not invertible (by definition of $\rho (\boldsymbol{\alpha})$). This will justify the skew-diagonals that are zero. 
The fact that $\lambda \neq 0$ can be seen by the fact that we require the rank of $H_{n_1 , n_2} (\boldsymbol{\alpha})$ to be $r_1 = \rho_1 + \pi_1$.\ The matrix ([\[statement, rhopi-form of H\_(n_1 , n_2) (alpha)\]](#statement, rhopi-form of H_(n_1 , n_2) (alpha)){reference-type="ref" reference="statement, rhopi-form of H_(n_1 , n_2) (alpha)"}) is the $(\rho , \pi )$-form of $H_{n_1 , n_2} (\boldsymbol{\alpha})$. Above, we only considered the case where $1 \leq \rho_1 \leq n_1 -1$, but ([\[statement, rhopi-form of H\_(n_1 , n_2) (alpha)\]](#statement, rhopi-form of H_(n_1 , n_2) (alpha)){reference-type="ref" reference="statement, rhopi-form of H_(n_1 , n_2) (alpha)"}) can be used to extend the definition naturally to the cases $\rho_1 = 0$ and $\rho_1 = n_1$.\ When $\rho_1 = 0$, $H_{0,0} (\boldsymbol{\alpha})$ is interpreted as being an empty matrix, as are the top-right and bottom-left quadrants of ([\[statement, rhopi-form of H\_(n_1 , n_2) (alpha)\]](#statement, rhopi-form of H_(n_1 , n_2) (alpha)){reference-type="ref" reference="statement, rhopi-form of H_(n_1 , n_2) (alpha)"}) since they have zero rows and columns, respectively. In particular, we are left with the lower skew-triangular Hankel matrix from the bottom-right. One may ask what row operations we apply to obtain this from the original matrix. The answer is none (which can be interpreted as applying the trivial row-operations). Indeed, Remark [Remark 1](#remark, rho_1 values dependent on h){reference-type="ref" reference="remark, rho_1 values dependent on h"} tells us that when $\rho_1 = 0$, the matrix $H_{n_1 , n_2} (\boldsymbol{\alpha})$ is lower skew-triangular and of the same form as the bottom-right quadrant in ([\[statement, rhopi-form of H\_(n_1 , n_2) (alpha)\]](#statement, rhopi-form of H_(n_1 , n_2) (alpha)){reference-type="ref" reference="statement, rhopi-form of H_(n_1 , n_2) (alpha)"}). Therefore, the $(\rho , \pi )$-form of $H_{n_1 , n_2} (\boldsymbol{\alpha})$ is just itself.\ When $\rho_1 = n_1$, the top-left quadrant of ([\[statement, rhopi-form of H\_(n_1 , n_2) (alpha)\]](#statement, rhopi-form of H_(n_1 , n_2) (alpha)){reference-type="ref" reference="statement, rhopi-form of H_(n_1 , n_2) (alpha)"}) has as many rows as the entire matrix, and so the bottom two quadrants have zero rows and are therefore empty matrices. Hence, ([\[statement, rhopi-form of H\_(n_1 , n_2) (alpha)\]](#statement, rhopi-form of H_(n_1 , n_2) (alpha)){reference-type="ref" reference="statement, rhopi-form of H_(n_1 , n_2) (alpha)"}) is just the original matrix. That is, the $(\rho , \pi )$-form of $H_{n_1 , n_2} (\boldsymbol{\alpha})$ is just itself.\ So far, we have defined the $(\rho , \pi )$-form only for $H_{n_1 , n_2} (\boldsymbol{\alpha})$. A similar definition holds for $H_{l+1 , m+1} (\boldsymbol{\alpha})$ when $l+m=n$, but we are particularly interested in the case where both $l+1 , m+1 \geq r_1$ hold. In this case, again we can apply the row operations $$\begin{aligned} R_i \longrightarrow R_i - (x_0 , \ldots , x_{\rho_1 -1}) \begin{pmatrix} R_{i-\rho_1} \\ \vdots \\ R_{i-1} \end{pmatrix} ,\end{aligned}$$ but for $i = l+1 , l , \ldots , \rho_1 +1$ this time. We will obtain a matrix of the form ([\[statement, rhopi-form of H\_(n_1 , n_2) (alpha)\]](#statement, rhopi-form of H_(n_1 , n_2) (alpha)){reference-type="ref" reference="statement, rhopi-form of H_(n_1 , n_2) (alpha)"}) again. 
Of course, the numbers of rows and columns are different, but the most significant difference this causes is the number of zero-columns/zero-rows that appear on the left and top of the bottom-right quadrant; and in the boundary cases $m+1 =r_1$ or $l+1 =r_1$ we will have no zero-columns or zero-rows, respectively.\ Finally, we can define the strict $(\rho , \pi )$-form, which, as the name suggests, is based on the strict $(\rho , \pi )$-characteristic. It is essentially the same approach used to obtain ([\[statement, rhopi-form of H\_(n_1 , n_2) (alpha)\]](#statement, rhopi-form of H_(n_1 , n_2) (alpha)){reference-type="ref" reference="statement, rhopi-form of H_(n_1 , n_2) (alpha)"}), but we take $\rho_1 = \rho_{\mathrm{S}}(\boldsymbol{\alpha})$ and $\pi_1 = \pi_{\mathrm{S}}(\boldsymbol{\alpha})$, instead of $\rho_1 = \rho (\boldsymbol{\alpha})$ and $\pi_1 = \pi (\boldsymbol{\alpha})$. The only case where the strict $(\rho , \pi )$-form differs from the standard $(\rho , \pi )$-form is when $n$ is even and $\boldsymbol{\alpha} \in \mathcal{L}_n (n_1)$. In this case, $H_{n_1 , n_2} (\boldsymbol{\alpha})$ has full rank and so the standard $(\rho , \pi )$-form is simply the matrix itself. The strict $(\rho , \pi )$-form, on the other hand, is of the form $$\begin{aligned} \left( \begin{array}{c|c} H_{\rho_1 , \rho_1} (\boldsymbol{\alpha}) & \boldsymbol{*} \\ \hline \mathbf{0} & \begin{NiceMatrix} 0 & \Cdots & & 0 & \lambda \\ \Vdots & & \Iddots & \lambda & \beta_2 \\ & \Iddots & \Iddots & \Iddots & \Vdots \\ 0 & \lambda & \Iddots & & \\ \lambda & \beta_2 & \Cdots & & \beta_{\pi_1} \\ \end{NiceMatrix} \end{array} \right) .\end{aligned}$$ Let us now make two remarks that we will refer to later. **Remark 1**. *Suppose $\boldsymbol{\alpha} \in \mathcal{L}_n (r_1 , \rho_1 , \pi_1)$, for some $r_1 , \pi_1 , \rho_1$. We have that $$\begin{aligned} \mathop{\mathrm{rank}}H_{l+1 , m+1} (\boldsymbol{\alpha}) = \min \{ r_1 , l+1 , m+1 \} .\end{aligned}$$ This is not difficult to see. Indeed, if $l+1 , m+1 \geq r_1$, then $H_{l+1 , m+1} (\boldsymbol{\alpha})$ has $(\rho , \pi )$-form of the form ([\[statement, rhopi-form of H\_(n_1 , n_2) (alpha)\]](#statement, rhopi-form of H_(n_1 , n_2) (alpha)){reference-type="ref" reference="statement, rhopi-form of H_(n_1 , n_2) (alpha)"}). This clearly has rank $\rho_1 + \pi_1 = r_1$, and since the row operations we applied to obtain the $(\rho , \pi )$-form do not alter the rank, we have that $\mathop{\mathrm{rank}}H_{l+1 , m+1} (\boldsymbol{\alpha}) = r_1$.\ On the other hand, if $l+1 < r_1$, then we note that the rows of $H_{l+1 , m+1} (\boldsymbol{\alpha})$ can be truncated to become the first $l+1$ rows of $H_{\rho_1 , \rho_1} (\boldsymbol{\alpha})$. Since $H_{\rho_1 , \rho_1} (\boldsymbol{\alpha})$ has full rank, we must have that the rows of $H_{l+1 , m+1} (\boldsymbol{\alpha})$ are linearly independent, and thus the rank of $H_{l+1 , m+1} (\boldsymbol{\alpha})$ is $l+1$. If $m+1 < r_1$, then a similar argument applies but with the columns instead of the rows.* **Remark 1**. *Suppose $n$ is even and $\boldsymbol{\alpha} \in \prescript{}{\mathrm{S}}{\mathcal{L}}_n (r_1 , \rho_1 , \pi_1)$, for some $r_1 , \pi_1 , \rho_1$. Let $\boldsymbol{\alpha}'$ be the sequence obtained by removing the last entry of $\boldsymbol{\alpha}$. It is not difficult to see that $H_{n_1 -1 , n_1} (\boldsymbol{\alpha}')$ is the matrix obtained by removing the last row of $H_{n_1 , n_1} (\boldsymbol{\alpha})$. 
Furthermore, the $(\rho , \pi )$-form of $H_{n_1 -1 , n_1} (\boldsymbol{\alpha}')$ is the matrix obtained by removing the last row of the $(\rho , \pi )$-form of $H_{n_1 , n_1} (\boldsymbol{\alpha})$. In particular, we see that if $\pi_{\mathrm{S}}(\boldsymbol{\alpha}) \geq 1$, then $\boldsymbol{\alpha}' \in \prescript{}{\mathrm{S}}{\mathcal{L}}_{n-1} (r_1 -1 , \rho_1 , \pi_1 -1)$; while if $\pi_{\mathrm{S}}(\boldsymbol{\alpha}) = 0$, then $\boldsymbol{\alpha}' \in \prescript{}{\mathrm{S}}{\mathcal{L}}_{n-1} (r_1 , \rho_1 , \pi_1)$. Furthermore, this implies $$\begin{aligned} \lvert \mathop{\mathrm{ker}}H_{n_1 , n_1} (\boldsymbol{\alpha}) \rvert = \begin{cases} q^{-1} \lvert \mathop{\mathrm{ker}}H_{n_1 -1 , n_1} (\boldsymbol{\alpha}') \rvert &\text{ if $\pi_{\mathrm{S}}(\boldsymbol{\alpha}) \geq 1$,} \\ \lvert \mathop{\mathrm{ker}}H_{n_1 -1 , n_1} (\boldsymbol{\alpha}') \rvert &\text{ if $\pi_{\mathrm{S}}(\boldsymbol{\alpha}) = 0$.} \end{cases}\end{aligned}$$* Continuing our discussion, using these $(\rho , \pi )$-forms, and the fact that the row operations do not affect the kernel, it is possible to give an intuitive explanation of the kernel structure of Hankel matrices. First consider the matrix $H_{n+2-r_1 , r_1} (\boldsymbol{\alpha})$. The $(\rho , \pi )$-form is of the form $$\begin{aligned} \left( \begin{array}{c|c} H_{\rho_1 , \rho_1} (\boldsymbol{\alpha}) & \boldsymbol{*} \\ \hline \mathbf{0} & \begin{NiceMatrix} 0 & \Cdots & & & 0 \\ \Vdots & & & & \Vdots \\ & & & & 0 \\ & & & 0 & \lambda \\ & & 0 & \lambda & \beta_2 \\ & \Iddots & \Iddots & \Iddots & \Vdots \\ 0 & \lambda & \Cdots & & \\ \lambda & \beta_2 & \Cdots & & \beta_{\pi_1} \\ \end{NiceMatrix} \end{array} \right) .\end{aligned}$$ Clearly, the kernel here consists only of the zero vector, as both the top-left and bottom-right quadrants have full column rank. In fact, it is not difficult to see from this that $H_{l+1 , m+1} (\boldsymbol{\alpha})$ has full column rank, and thus a trivial kernel, for all $m+1 \leq \rho_1 + \pi_1 =r_1$. Now consider $H_{n+1-r_1 , r_1 +1} (\boldsymbol{\alpha})$, which has one more column and one less row compared to $H_{n+2-r_1 , r_1} (\boldsymbol{\alpha})$ (but, of course, they have the same underlying sequence $\boldsymbol{\alpha}$). The matrix $H_{n+1-r_1 , r_1 +1} (\boldsymbol{\alpha})$ has $(\rho , \pi )$-form of the form $$\begin{aligned} \left( \begin{array}{c|c} H_{\rho_1 , \rho_1} (\boldsymbol{\alpha}) & \boldsymbol{*} \\ \hline \mathbf{0} & \begin{NiceMatrix} 0 & \Cdots & & & 0 \\ \Vdots & & & & \Vdots \\ 0 & \Cdots & & & 0 \\ 0 & \Cdots & & 0 & \lambda \\ 0 & \Cdots & 0 & \lambda & \beta_2 \\ \Vdots & \Iddots & \Iddots & \Iddots & \Vdots \\ 0 & \lambda & \beta_2 & \Cdots & \beta_{\pi_1} \\ \end{NiceMatrix} \end{array} \right) .\end{aligned}$$ Clearly, this has a one-dimensional kernel, and we can deduce that the kernel consists of scalar multiples of $[A_1]_{r_1}$, where $$\begin{aligned} A_1 := -x_0 - x_1 T - \ldots - x_{\rho_1 -1} T^{\rho_1 -1} + T^{\rho_1}\end{aligned}$$ and $x_0 , \ldots , x_{\rho_1 -1}$ is as in ([\[statement, H\_(rho_1 , rho_1) (alpha) x = right vector\]](#statement, H_(rho_1 , rho_1) (alpha) x = right vector){reference-type="ref" reference="statement, H_(rho_1 , rho_1) (alpha) x = right vector"}). We now make the following two observations: 1. Consider $H_{l+1 , m+1} (\boldsymbol{\alpha})$ where both $l+1 , m+1 \geq r_1 +1$ hold. 
As we have already stated, the $(\rho , \pi )$-form is of the form ([\[statement, rhopi-form of H\_(n_1 , n_2) (alpha)\]](#statement, rhopi-form of H_(n_1 , n_2) (alpha)){reference-type="ref" reference="statement, rhopi-form of H_(n_1 , n_2) (alpha)"}). From this, we can see that the rank of $H_{l+1 , m+1} (\boldsymbol{\alpha})$ is always $r_1 = \rho_1 + \pi_1$, regardless of the values of $l+1,m+1$ in the given ranges. However, we note that $H_{l , m+2} (\boldsymbol{\alpha})$ has one fewer row and one more column than $H_{l+1 , m+1} (\boldsymbol{\alpha})$, and so the dimension of the kernel increases by one every time we remove one row and add one column. 2. Suppose $\mathbf{f} \in \mathbb{F}_q^{m+1}$ is in the kernel of $H_{l+1 , m+1} (\boldsymbol{\alpha})$. It is not difficult to see that the vectors $$\begin{aligned} \begin{pmatrix} \mathbf{f} \\ \hline 0 \end{pmatrix} \hspace{3em} \text{ and } \hspace{3em} \begin{pmatrix} 0 \\ \hline \mathbf{f} \end{pmatrix}\end{aligned}$$ are in the kernel of $H_{l , m+2} (\boldsymbol{\alpha})$, as is any linear combination (over $\mathbb{F}_q$). If we associate $\mathbf{f}$ with a polynomial $F$, then the above says that if $[F]_m$ is in the kernel of $H_{l+1 , m+1} (\boldsymbol{\alpha})$ then $[BF]_{m+1}$ is in the kernel of $H_{l , m+2} (\boldsymbol{\alpha})$ for any $B \in \mathcal{A}_{\leq 1}$; and more generally, $[BF]_{m+k}$ is in the kernel of $H_{l-k+1 , m+k+1} (\boldsymbol{\alpha})$ for any $B \in \mathcal{A}_{\leq k}$.\ Based on these two points, we can deduce that for $l+1 \geq r_1$, the kernel of $H_{l+1 , m+1} (\boldsymbol{\alpha})$ consists exactly of the vectors $[B_1 A_1]_m$ with $B_1 \in \mathcal{A}_{\leq m-r_1}$. This is non-trivial when $m+1 \geq r_1 +1$ (recall that for all positive integers $i$, the set $\mathcal{A}_{-i}$ is defined to be $\{ 0 \}$).\ Now consider the matrices $H_{l+1 , m+1} (\boldsymbol{\alpha})$ and $H_{l , m+2} (\boldsymbol{\alpha})$ when $l+1 \leq r_1$. By similar reasoning as above, we can see that as we go from the former to the latter, the rank decreases by one and the number of columns increases by one. Thus, the dimension of the kernel increases by two. In particular, a new polynomial $A_2$, independent of $A_1$ (see below for a formal exposition), must appear in the kernel of $H_{r_1 -1 , n+3-r_1} (\boldsymbol{\alpha})$, and its multiples appear in kernels of $H_{l+1 , m+1} (\boldsymbol{\alpha})$ for $l+1 \leq r_1 -2$. Formally, we have the following result, which is Theorem 2.4.4 from [@Yiasemides2021_VariCorrDivFuncFpTHankelMatr_ArXiv_v2] (this result is originally given in [@HeinigRost1984_AlgMethToeplitzMatrOperat]). **Lemma 1**. *Let $\boldsymbol{\alpha} \in \mathcal{L}_n (r_1 , \rho_1 , \pi_1)$. Then, there exist coprime polynomials $A_1 (\boldsymbol{\alpha}) \in \mathcal{M}_{\rho_1}$ and $A_2 (\boldsymbol{\alpha}) \in \mathcal{A}_{\leq n-r_1 +2}$ such that for all $l,m \geq 0$ with $l+m=n$ we have $$\begin{aligned} \mathop{\mathrm{ker}}H_{l+1 , m+1} (\boldsymbol{\alpha}) = \bigg\{ [B_1 A_1 (\boldsymbol{\alpha}) + B_2 A_2 (\boldsymbol{\alpha})]_m : \substack{B_1 , B_2 \in \mathcal{A} \\ \mathop{\mathrm{deg}}B_1 \leq m-r_1 \\ \mathop{\mathrm{deg}}B_2 \leq m - (n-r_1+2)} \bigg\} .\end{aligned}$$* **Definition 1** (Characteristic Polynomials). *The polynomials $A_1 (\boldsymbol{\alpha})$ and $A_2 (\boldsymbol{\alpha})$ are called the first and second characteristic polynomials of $\boldsymbol{\alpha}$, respectively. 
We may also say they are the characteristic polynomials of $H_{l+1 , m+1} (\boldsymbol{\alpha})$ for any $l,m \geq 0$ with $l+m=n$. This should not be confused with the characteristic polynomial associated to the determinant of a square matrix.* Let us make a couple of remarks that we will require later. **Remark 1**. *The first characteristic polynomial $A_1 (\boldsymbol{\alpha})$ is unique: non-zero scalar multiples could also be used in its place, but we simply take the monic representative. The second characteristic polynomial $A_2 (\boldsymbol{\alpha})$ is unique only up to non-zero scalar multiplication and addition of polynomials of the form $B_1 A_1 (\boldsymbol{\alpha})$ for $B_1 \in \mathcal{A}_{\leq n-2r_1 +2}$. In the special case where $\rho_1 = r_1$, and thus $\mathop{\mathrm{deg}}A_1 (\boldsymbol{\alpha}) = r_1$, this means we can take $A_2 (\boldsymbol{\alpha})$ to be the representative modulo $A_1 (\boldsymbol{\alpha})$ of degree $< \mathop{\mathrm{deg}}A_1 (\boldsymbol{\alpha}) = r_1$ that is monic. This is unique. In the case where $\rho_1 < r_1$, there is not a natural unique representation to take.\ In the paragraph above, we explain to what extent the characteristic polynomials are unique for a given $\boldsymbol{\alpha}$. We can also ask whether two characteristic polynomials $A_1 , A_2$ have a unique sequence $\boldsymbol{\alpha}$ associated to them. The answer is that there are actually $q-1$ possible associated $\boldsymbol{\alpha} \in \mathbb{F}_q^{n+1}$, because we can multiply any such $\boldsymbol{\alpha}$ by an element in $\mathbb{F}_q^*$ and this does not affect the kernel, and thus does not affect the characteristic polynomials. Note that if $\boldsymbol{\alpha} = \mathbf{0}$, then multiplying by an element of $\mathbb{F}_q^*$ does nothing, and so in this case $\mathbf{0}$ does not share its characteristic polynomials with any other sequence.\ Now, we considered the case $\boldsymbol{\alpha} = \mathbf{0}$ above, but this raises the question of whether $A_2 (\boldsymbol{\alpha})$ is well-defined. Indeed, $A_2 (\boldsymbol{\alpha})$ appears in the kernel of at least one $H_{l+1 , m+1} (\boldsymbol{\alpha})$ with $l+m=n$ only when $r_1 \geq 2$. However, it is useful to define it even when $r_1 \leq 1$ because it may be used when considering extensions of $\boldsymbol{\alpha}$ (although we do not consider such extensions in this paper). In fact, the definition of $A_2 (\boldsymbol{\alpha})$ given in the lemma (that is, the degree restrictions and coprimality condition) accommodates this and gives $$\begin{aligned} A_2 (\boldsymbol{\alpha}) = \begin{cases} 0 &\text{ if $\boldsymbol{\alpha} \in \mathcal{L}_n (0,0,0)$ (that is, $\boldsymbol{\alpha} = \mathbf{0}$),} \\ 1 &\text{ if $\boldsymbol{\alpha} \in \mathcal{L}_n (1,1,0)$,} \\ T^{n+1} &\text{ if $\boldsymbol{\alpha} \in \mathcal{L}_n (1,0,1)$.} \\ \end{cases}\end{aligned}$$ (For the first two cases, this is also consistent with the unique representation of $A_2 (\boldsymbol{\alpha})$ modulo $A_1 (\boldsymbol{\alpha})$ mentioned above).\ Finally, we note that $A_2 (\boldsymbol{\alpha})$ is non-zero except for the single trivial case given above, and so we usually take it to be monic (even though there is no natural unique representation to take when $\rho_1 < r_1$).* **Remark 1**. *Let $\boldsymbol{\alpha} \in \mathcal{L}_n (r_1 , \rho_1 , \pi_1)$, and suppose $l+m=n$ with $l+1 \geq r_1$. 
Lemma [Lemma 1](#lemma, kernel of Hankel matrices){reference-type="ref" reference="lemma, kernel of Hankel matrices"} tells us that $$\begin{aligned} \mathop{\mathrm{ker}}H_{l+1 , m+1} (\boldsymbol{\alpha}) = \bigg\{ [B_1 A_1 (\boldsymbol{\alpha})]_m : \substack{B_1 \in \mathcal{A} \\ \mathop{\mathrm{deg}}B_1 \leq m-r_1 } \bigg\} .\end{aligned}$$ We can see that the kernel has a vector with non-zero final entry if and only if $\mathop{\mathrm{deg}}B_1 A_1 (\boldsymbol{\alpha}) = m$. In turn, this occurs if and only if $(r_1 , \rho_1 , \pi_1) = (r_1 , r_1 , 0)$.\ This provides a useful equivalence for the condition $\pi (\boldsymbol{\alpha}) = 0$, which we will require later. Note also that if $\pi_1 = 0$, then the number of elements in $\mathop{\mathrm{ker}}H_{l+1 , m+1} (\boldsymbol{\alpha})$ is just $q$ times the number of vectors in the kernel with final entry equal to zero.\ We must keep in mind that the above only applies when $l+1 \geq r_1$, because when $l+1 < r_1$ the second characteristic polynomial appears in the kernel and interferes with our reasoning above.* So, we now have an understanding of the kernel structure of Hankel matrices, and how the $(\rho , \pi )$-form is used in determining this. We now wish to state some results on the value distribution of quadratic forms associated to square Hankel matrices. We undertook this in [@Yiasemides2022_VariSumTwoSquareOverIntervalFqT_I_Arxiv]. Let us give a brief indication of our approach to this.\ The strict $(\rho , \pi )$-characteristic allows us to obtain a (strict) $(\rho , \pi )$-form of the form ([\[statement, rhopi-form of H\_(n_1 , n_2) (alpha)\]](#statement, rhopi-form of H_(n_1 , n_2) (alpha)){reference-type="ref" reference="statement, rhopi-form of H_(n_1 , n_2) (alpha)"}) even when our matrix has full rank. In particular, for square Hankel matrices, we can apply this inductively, first to $H_{n_1 , n_1} (\boldsymbol{\alpha})$, second to the submatrix $H_{\rho_1 , \rho_1} (\boldsymbol{\alpha})$, and so on. Ultimately, this is just an application of row operations. We then also apply the same operations but for the columns. In the context of quadratic forms, we have simply undertaken a change of basis, which does not change the value distribution. The benefit is that it is easier to determine the value distribution of the matrix obtained after the operations are applied. Indeed, ultimately we obtain a matrix that is block-diagonal and each block is a square, lower skew-triangular Hankel matrix (that is, the same form as the bottom-right quadrant of ([\[statement, rhopi-form of H\_(n_1 , n_2) (alpha)\]](#statement, rhopi-form of H_(n_1 , n_2) (alpha)){reference-type="ref" reference="statement, rhopi-form of H_(n_1 , n_2) (alpha)"})). This is called the reduced $(\rho , \pi )$-form; in the context of quadratic forms we can study each block individually, and their lower skew-triangular Hankel form is particularly easy to work with. The details of this can be found in Section 3 of [@Yiasemides2022_VariSumTwoSquareOverIntervalFqT_I_Arxiv], but for this paper we only need the following result which follows from that section (particularly Lemma 3.3.4): **Lemma 1**. *Let $l \geq 1$ and $\boldsymbol{\alpha} \in \prescript{}{\mathrm{S}}{\mathcal{L}}_{2l} (r_1 , \rho_1 , \pi_1)$ ($r_1$ can take any value in $\{ 0 , 1 , \ldots , l+1 \}$). 
Then, $$\begin{aligned} \bigg\lvert \sum_{E \in \mathcal{A}_{\leq l}} \psi \big( [E]_l^T H_{l+1 , l+1} (\boldsymbol{\alpha}) [E]_l \big) \bigg\rvert^2 = q^{2l+2-r_1} = q^{l+1} \lvert \mathop{\mathrm{ker}}H_{l+1 , l+1} (\boldsymbol{\alpha}) \rvert\end{aligned}$$ and $$\begin{aligned} \bigg\lvert \sum_{E \in \mathcal{M}_{l}} \psi \big( [E]_l^T H_{l+1 , l+1} (\boldsymbol{\alpha}) [E]_l \big) \bigg\rvert^2 =\begin{cases} q^{2l-r_1} = q^{l-1} \lvert \mathop{\mathrm{ker}}H_{l+1 , l+1} (\boldsymbol{\alpha}) \rvert &\text{ if $\pi_1 = 0$,} \\ q^{2l+1-r_1} = q^{l} \lvert \mathop{\mathrm{ker}}H_{l+1 , l+1} (\boldsymbol{\alpha}) \rvert &\text{ if $\pi_1 = 1$,} \\ 0 &\text{ if $\pi_1 \geq 2$.} \end{cases}\end{aligned}$$* We have now covered the results from [@Yiasemides2021_VariCorrDivFuncFpTHankelMatr_ArXiv_v2; @Yiasemides2022_VariSumTwoSquareOverIntervalFqT_I_Arxiv] that we require. For the new results that we will prove, we will require the following definition. **Definition 1**. *Suppose we have a vector $\mathbf{v} = (v_1 , v_2 , \ldots , v_l) \in \mathbb{F}_q^{l}$ and a sequence $\mathbf{s} = (s_1 , s_2 , \ldots )$ of length $m \geq l$, where $m$ may be finite or infinite. We define $\mathbf{s} \odot \mathbf{v}$ to be the sequence $\mathbf{t} = (t_1 , t_2 , \ldots )$ of length $m-l+1$ (we define $\infty -k$ to be $\infty$ for all integers $k$) such that $$\begin{aligned} t_i = \begin{pmatrix} s_i & s_{i+1} & \cdots & s_{i+l-1} \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_l \end{pmatrix}\end{aligned}$$ for all $i$ that index $\mathbf{t}$. That is, we are taking the dot product between $\mathbf{v}$ and successive blocks of $\mathbf{s}$ of the same length as $\mathbf{v}$, and forming a new sequence.* We observe that for $\boldsymbol{\alpha} \in \mathcal{L}_n (r_1)$ the vector $\boldsymbol{\alpha} \odot [A_1 (\boldsymbol{\alpha})]_{r_1}$ is simply the zero vector in $\mathbb{F}_q^{n-r_1 +1}$. Recall that if $\rho_1 = r_1$ then $[A_1 (\boldsymbol{\alpha})]_{r_1}$ is a vector with non-zero final entry. In particular, this means that if $\boldsymbol{\alpha} \in \mathcal{L}_n (r_1 , r_1 , 0)$ we can define an infinite sequence $\overrightarrow{\boldsymbol{\alpha}}$ such that the first $n+1$ terms are exactly those in $\boldsymbol{\alpha}$ and the later terms are defined by the condition $\overrightarrow{\boldsymbol{\alpha}} \odot [A_1 (\boldsymbol{\alpha})]_{r_1} = \mathbf{0}$. Clearly, $\overrightarrow{\boldsymbol{\alpha}}$ is unique for every $\boldsymbol{\alpha}$. Also, $A_1 (\boldsymbol{\alpha})$ defines a recurrence relation on $\overrightarrow{\boldsymbol{\alpha}}$ (and hence $\boldsymbol{\alpha}$ as well) and we will say it has length $r_1 +1$ (this is not necessarily standard terminology). Note that no shorter recurrence relation can exist, otherwise this would ultimately imply that $H_{n+2-r_1 , r_1} (\boldsymbol{\alpha})$ has a non-trivial vector in its kernel, which contradicts the fact that $r (\boldsymbol{\alpha}) = r_1$. **Lemma 1**. *Consider $\mathcal{L}_n^h (r,r,0)$ with $2 < r \leq n_2-1$ and $h<r$. 
The following map is bijective: $$\begin{aligned} \mathcal{L}_n^h (r,r,0) \rightarrow &\big\{ (A,B) \in \mathcal{M}_r \times \mathcal{A}_{<r-h} : (A,B) \text{ coprime} \big\} \\ \boldsymbol{\alpha} \mapsto &\Big( A_1 (\boldsymbol{\alpha}) , A_1 (\boldsymbol{\alpha}) \times L (\overrightarrow{\boldsymbol{\alpha}}) \Big) ;\end{aligned}$$ where, if $\overrightarrow{\boldsymbol{\alpha}} = (\alpha_0 , \alpha_1 , \ldots )$, then we define the Laurent series $$\begin{aligned} L (\overrightarrow{\boldsymbol{\alpha}}) := \sum_{i = 0}^{\infty} \alpha_{i} T^{-i} ;\end{aligned}$$ and $A_1 (\boldsymbol{\alpha}) \times L (\overrightarrow{\boldsymbol{\alpha}})$ is the standard multiplication of Laurent series.* *Proof.* First we note that $A_1 (\boldsymbol{\alpha}) \times L (\overrightarrow{\boldsymbol{\alpha}})$ is a polynomial of degree $<r$, which we will denote by $B$. This follows from the fact that $\overrightarrow{\boldsymbol{\alpha}} \odot [A_1 (\boldsymbol{\alpha})]_{r} = \mathbf{0}$, which we established above. Furthermore, since the first $h$ entries of $\overrightarrow{\boldsymbol{\alpha}}$ are zero, we have that $L (\overrightarrow{\boldsymbol{\alpha}}) = \sum_{i = h}^{\infty} \alpha_{i} T^{-i}$, and so $B$ must actually have degree $<r-h$.\ To prove that $A_1 (\boldsymbol{\alpha})$ and $B$ are coprime, we note that $L (\overrightarrow{\boldsymbol{\alpha}})$ is the Laurent series for the rational function $\frac{B}{A_1 (\boldsymbol{\alpha})}$. In particular, if $A_1 (\boldsymbol{\alpha})$ and $B$ were not coprime, then there would exist some $C,D$ with $\mathop{\mathrm{deg}}C < \mathop{\mathrm{deg}}A_1 (\boldsymbol{\alpha})$ such that $$\begin{aligned} \frac{B}{A_1 (\boldsymbol{\alpha})} = \frac{D}{C} .\end{aligned}$$ But then this would imply that $\overrightarrow{\boldsymbol{\alpha}}$ has a recurrence relation of length $\mathop{\mathrm{deg}}C +1 < r+1$, which contradicts that $\boldsymbol{\alpha} \in \mathcal{L}_n^h (r,r,0)$. Thus, $A_1 (\boldsymbol{\alpha})$ and $B$ must be coprime.\ To prove that the map is injective, we note the following one-to-one correspondences $$\begin{aligned} \boldsymbol{\alpha} \longleftrightarrow \overrightarrow{\boldsymbol{\alpha}} \longleftrightarrow L (\overrightarrow{\boldsymbol{\alpha}}) \longleftrightarrow \frac{B}{A_1 (\boldsymbol{\alpha})} .\end{aligned}$$ Finally, surjectivity follows from injectivity and the fact that $$\begin{aligned} \lvert \mathcal{L}_n^h (r,r,0) \rvert = (q-1) q^{2r-h-1} = \big\lvert \big\{ (A,B) \in \mathcal{M}_r \times \mathcal{A}_{<r-h} : (A,B) \text{ coprime} \big\} \big\rvert ,\end{aligned}$$ where we have used Lemma [Lemma 1](#lemma, number of sequences of given rhopi form){reference-type="ref" reference="lemma, number of sequences of given rhopi form"}. ◻ Let us now make another definition, before proving the final lemma of this section. **Definition 1**. *Let $W = (w_0 , w_1 , \ldots , w_s) \in \mathcal{A}_{\leq s}$ (for some $s \geq 0$) and $k \geq 1$. We define $T_{k+s,k} ([W]_s)$ to be the $(k+s) \times k$ matrix with $j$-th column equal to $$\begin{aligned} \begin{pmatrix} \smash{\underbrace{\begin{matrix} 0 & \ldots & 0 \end{matrix}}_{\text{ $j-1$ times }}} & \smash{\begin{matrix} w_0 & w_1 & \ldots & w_s \end{matrix}} & \smash{\underbrace{\begin{matrix} 0 & \ldots & 0 \end{matrix}}_{\text{ $k-j$ times }}} \end{pmatrix}^T . \\\end{aligned}$$ This is a circulant Toeplitz matrix.* **Remark 1**. *First, we note that for $B \in \mathcal{A}_{\leq k}$, we have $T_{k+s,k} ([W]_s) [B]_k = [WB]_{k+s}$. 
Now suppose we have an $l \times (k+s)$ Hankel matrix $H_{l , k+s} (\boldsymbol{\alpha})$. It is not difficult to see that the matrix $H_{l , k+s} (\boldsymbol{\alpha}) T_{k+s,k} ([W]_s)$ is the $l \times k$ Hankel matrix $H_{l,k} (\boldsymbol{\alpha} \odot [W]_s)$.* We are interested in the kernel of $H_{l,k} (\boldsymbol{\alpha} \odot [W]_s)$. For $\mathop{\mathrm{deg}}B \leq k$, we have that $$\begin{aligned} &[B]_k \in \mathop{\mathrm{ker}}H_{l,k} (\boldsymbol{\alpha} \odot [W]_s) \\ &\iff H_{l , k+s} (\boldsymbol{\alpha}) T_{k+s,k} ([W]_s) [B]_k = \mathbf{0} \\ &\iff H_{l , k+s} (\boldsymbol{\alpha}) [WB]_{k+s} = \mathbf{0} .\end{aligned}$$ That is, there is an injective map from $\mathop{\mathrm{ker}}H_{l,k} (\boldsymbol{\alpha} \odot [W]_s)$ to the subset of $\mathop{\mathrm{ker}}H_{l , k+s} (\boldsymbol{\alpha})$ consisting of vectors $[C]_{k+s}$ for some $C \in \mathcal{A}_{\leq k+s}$ with $W \mid C$. We note that this map is surjective onto the subset if $\mathop{\mathrm{deg}}W = s$, but not necessarily surjective if $\mathop{\mathrm{deg}}W < s$. Indeed, in the latter case, we have $\mathop{\mathrm{deg}}(WB) < k+s$, meaning it does not account for polynomials $C$ with $\mathop{\mathrm{deg}}C = k+s$ and $W \mid C$. The following lemma gives further information in two cases, under some restrictions. **Lemma 1**. *Let $s \geq 0$ and let $W \in \mathcal{A}$ with $0 \leq s_1 := \mathop{\mathrm{deg}}W \leq s$. Also, let $\boldsymbol{\alpha} \in \mathcal{L}_n (r_1 , \rho_1 , \pi_1)$ with $n \geq 2$ and $n \geq 2r_1 +s-1$, and let $A_1 (\boldsymbol{\alpha}) \in \mathcal{M}_{\rho_1}$ be the first characteristic polynomial. Then, $$\begin{aligned} r \Big( \boldsymbol{\alpha} \odot [W]_s \Big) = &r_1 - \mathop{\mathrm{deg}}( A_1 (\boldsymbol{\alpha}) , W ) - \min \{ s-s_1 , \pi_1 \} , \\ \rho \Big( \boldsymbol{\alpha} \odot [W]_s \Big) = &\rho_1 - \mathop{\mathrm{deg}}( A_1 (\boldsymbol{\alpha}) , W ) , \\ \pi \Big( \boldsymbol{\alpha} \odot [W]_s \Big) = &\max \{ 0 , \pi_1 - (s-s_1) \} ,\end{aligned}$$ and $$\begin{aligned} A_1 \Big( \boldsymbol{\alpha} \odot [W]_s \Big) = \frac{A_1 (\boldsymbol{\alpha})}{\big( A_1 (\boldsymbol{\alpha}) , W \big)} .\end{aligned}$$* *For our second claim, suppose that $s \geq 0$ and let $W \in \mathcal{A}$ with $0 \leq s_1 := \mathop{\mathrm{deg}}W \leq s$. Also, let $\boldsymbol{\alpha} \in \prescript{}{\mathrm{S}}{\mathcal{L}}_n \Big( \frac{n-s}{2} +1 , 0 , \frac{n-s}{2} +1 \Big)$ with $n-s \geq 2$ being even. Then, $\boldsymbol{\alpha} \odot [W]_s \in \prescript{}{\mathrm{S}}{\mathcal{L}}_{n-s} \Big( \frac{n-s}{2} +1 , 0 , \frac{n-s}{2} +1 \Big)$.* *Proof.* Consider the first claim. Let $l,m$ be such that $l+1=r_1$ and $l+m=n$. Then, $$\begin{aligned} \label{statement, alpha odot [U] kernel, m+1 bound} m+1 = (n+2) - (l+1) \geq r_1 + s+1 .\end{aligned}$$ Now, we have already established that $$\begin{aligned} [B]_{m-s} \in \mathop{\mathrm{ker}}H_{l+1 , m-s+1} (\boldsymbol{\alpha} \odot [W]_s) \hspace{2em} \iff \hspace{2em} &[WB]_m \in \mathop{\mathrm{ker}}H_{l+1 , m+1} (\boldsymbol{\alpha}) , \\ &\mathop{\mathrm{deg}}B \leq m-s .\end{aligned}$$ Since $l+1 = r_1$, we have that $$\begin{aligned} \mathop{\mathrm{ker}}H_{l+1 , m+1} (\boldsymbol{\alpha}) = \Big\{ [B_1 A_1 (\boldsymbol{\alpha})]_m : B_1 \in \mathcal{A}_{\leq m-r_1 } \Big\} .\end{aligned}$$ So, we are looking for solutions to the equation $$\begin{aligned} B_1 A_1 (\boldsymbol{\alpha}) = WB\end{aligned}$$ where $B_1 \in \mathcal{A}_{\leq m-r_1 }$ and $B \in \mathcal{A}_{\leq m-s}$. 
We can see that solutions are of the form $$\begin{aligned} (B_1 , B) = \bigg( \frac{W}{\big( A_1 (\boldsymbol{\alpha}) , W \big)} C , \frac{A_1 (\boldsymbol{\alpha})}{\big( A_1 (\boldsymbol{\alpha}) , W \big)} C \bigg)\end{aligned}$$ for $C \in \mathcal{A}$ with $$\begin{aligned} \mathop{\mathrm{deg}}C \leq &\min \bigg\{ m-r_1 - \mathop{\mathrm{deg}}\frac{W}{\big( A_1 (\boldsymbol{\alpha}) , W \big)} , m-s - \mathop{\mathrm{deg}}\frac{A_1 (\boldsymbol{\alpha})}{\big( A_1 (\boldsymbol{\alpha}) , W \big)} \bigg\} \\ = &m-r_1 -s + \mathop{\mathrm{deg}}\big( A_1 (\boldsymbol{\alpha}) , W \big) + \min \{ s-s_1 , \pi_1 \} .\end{aligned}$$ Thus, $$\begin{aligned} \mathop{\mathrm{ker}}H_{l+1 , m-s+1} (\boldsymbol{\alpha} \odot [W]_s) = \Bigg\{ \bigg[ C \frac{A_1 (\boldsymbol{\alpha})}{\big( A_1 (\boldsymbol{\alpha}) , W \big)} \bigg]_{m-s} : \substack{C \in \mathcal{A} \\ \mathop{\mathrm{deg}}C \leq m-r_1 -s + \mathop{\mathrm{deg}}( A_1 (\boldsymbol{\alpha}) , W ) + \min \{ s-s_1 , \pi_1 \} } \Bigg\} .\end{aligned}$$ Note that ([\[statement, alpha odot \[U\] kernel, m+1 bound\]](#statement, alpha odot [U] kernel, m+1 bound){reference-type="ref" reference="statement, alpha odot [U] kernel, m+1 bound"}) tells us that $m-r_1 -s + \mathop{\mathrm{deg}}( A_1 (\boldsymbol{\alpha}) , W ) + \min \{ s-s_1 , \pi_1 \} \geq 0$, and so $C$ can take a non-zero value. Therefore, we can see that the first characteristic polynomial is $$\begin{aligned} A_1 \Big( \boldsymbol{\alpha} \odot [W]_s \Big) = \frac{A_1 (\boldsymbol{\alpha})}{\big( A_1 (\boldsymbol{\alpha}) , W \big)} ,\end{aligned}$$ from which we also deduce that $\rho \Big( \boldsymbol{\alpha} \odot [W]_s \Big) = \rho_1 - \mathop{\mathrm{deg}}( A_1 (\boldsymbol{\alpha}) , W )$. The value of $r \Big( \boldsymbol{\alpha} \odot [W]_s \Big)$ is simply the number of columns of $H_{l+1 , m-s+1} (\boldsymbol{\alpha} \odot [W]_s)$ minus the dimension of the kernel, and so we have $$\begin{aligned} r \Big( \boldsymbol{\alpha} \odot [W]_s \Big) = &\Big( m-s+1 \Big) - \Big( m-r_1 -s + \mathop{\mathrm{deg}}( A_1 (\boldsymbol{\alpha}) , W ) + \min \{ s-s_1 , \pi_1 \} +1 \Big) \\ = &r_1 - \mathop{\mathrm{deg}}( A_1 (\boldsymbol{\alpha}) , W ) - \min \{ s-s_1 , \pi_1 \} .\end{aligned}$$ Finally, by definition, we have $$\begin{aligned} \pi \Big( \boldsymbol{\alpha} \odot [W]_s \Big) = r \Big( \boldsymbol{\alpha} \odot [W]_s \Big) - \rho \Big( \boldsymbol{\alpha} \odot [W]_s \Big) = \max \{ 0 , \pi_1 - (s-s_1) \} .\end{aligned}$$ For the second claim, we note that the conditions on $\boldsymbol{\alpha}$ mean that the first $\frac{n+s}{2}$ entries of $\boldsymbol{\alpha}$ are zero, while the $\big( \frac{n+s}{2} + 1 \big)$-th entry is non-zero. So, by definition of $\odot$, we have that there are $\frac{n-s}{2}$ zeros at the start of $\boldsymbol{\alpha} \odot [W]_s$, while its $\big( \frac{n-s}{2} + 1 \big)$-th entry is non-zero. From this, we deduce that $\boldsymbol{\alpha} \odot [W]_s \in \prescript{}{\mathrm{S}}{\mathcal{L}}_{n-s} \Big( \frac{n-s}{2} +1 , 0 , \frac{n-s}{2} +1 \Big)$. ◻ # Proof of Theorem [Theorem 1](#main theorem, lattice point variance elliptic annuli){reference-type="ref" reference="main theorem, lattice point variance elliptic annuli"} {#section, main theorem proof} *Proof of Theorem [Theorem 1](#main theorem, lattice point variance elliptic annuli){reference-type="ref" reference="main theorem, lattice point variance elliptic annuli"}.* We will prove the theorem for the case when $n$ is even. 
When $n$ is odd, the proof is almost identical and we simply swap the roles of $U$ and $V$. Remark [Remark 1](#remark, explanation why U,V are odd, even coprime monic){reference-type="ref" reference="remark, explanation why U,V are odd, even coprime monic"} indicates why we consider the two cases separately. Recall that when $n$ is even we have $s:= \mathop{\mathrm{deg}}U$ and $t := \mathop{\mathrm{deg}}V +1$. Also, recall that $s' := \frac{n - s}{2}$ and $t' := \frac{n - t}{2}$; and $n_1 := \Big\lfloor \frac{n+2}{2} \Big\rfloor$ and $n_2 := \Big\lfloor \frac{n+3}{2} \Big\rfloor$.\ We will consider the three cases of the theorem separately, but first we will reformulate the problem by making use of the Fourier expansion for the indicator function (Definition [Definition 1](#definition, Fourier exp indicator function){reference-type="ref" reference="definition, Fourier exp indicator function"}). We have $$\begin{aligned} \frac{1}{q^n} \sum_{A \in \mathcal{M}_n } \bigg\lvert \sum_{B \in I (A; <h)} S_{U,V} (B) \bigg\rvert^2 = &\frac{4}{q^n} \sum_{A \in \mathcal{M}_{n} } \bigg( \sum_{\substack{ B \in \mathcal{M}_{n} \\ \mathop{\mathrm{deg}}(B-A) < h }} \sum_{\substack{E \in \mathcal{M}_{s'} \\ F \in \mathcal{A}_{\leq t'} }} \mathbbm{1} (B - U E^2 - V F^2 ) \bigg)^2 \\ = &\frac{4}{q^n} \sum_{A \in \mathcal{A}_{\leq n} } \bigg( \sum_{\substack{B \in \mathcal{A}_{\leq n} \\ \mathop{\mathrm{deg}}(B-A) < h }} \sum_{\substack{E \in \mathcal{M}_{s'} \\ F \in \mathcal{A}_{\leq t'} }} \mathbbm{1} (B - U E^2 - V F^2 ) \bigg)^2 .\end{aligned}$$ For the first equality, the ranges of $E,F$ are justified by Remark [Remark 1](#remark, explanation why U,V are odd, even coprime monic){reference-type="ref" reference="remark, explanation why U,V are odd, even coprime monic"}; and the factor of $4$ appears due to symmetry, to account for the fact that we are not including $-E \in \mathcal{M}_{s'}$ in the summation range. For the second line, note that the conditions on $E,F$, and the equation $B - U E^2 - V F^2 = 0$, force $B$ to be in $\mathcal{M}_n$, and so we have not changed the final result by rewriting the range of $B$ to $\mathcal{A}_{\leq n}$. Similarly, we rewrote the range of $A$ to be in $\mathcal{A}_{\leq n}$, because the fact that $B \in \mathcal{M}_n$, and $\mathop{\mathrm{deg}}(B-A) < h \leq n$, forces $A$ to be in $\mathcal{M}_n$. Continuing, we have $$\begin{aligned} &\frac{1}{q^n} \sum_{A \in \mathcal{M}_n } \bigg\lvert \sum_{B \in I (A; <h)} S_{U,V} (B) \bigg\rvert^2 \\ = &\frac{4}{q^{3n+2}} \sum_{A \in \mathcal{A}_{\leq n} } \bigg( \sum_{\substack{B \in \mathcal{A}_{\leq n} \\ \mathop{\mathrm{deg}}(B-A) < h }} \sum_{\substack{E \in \mathcal{M}_{s'} \\ F \in \mathcal{A}_{\leq t'} }} \sum_{\boldsymbol{\alpha} \in \mathbb{F}_q^{n+1}} \\ &\hspace{3em} \psi \Big( \boldsymbol{\alpha} \cdot [B]_n - [E]_{s'}^T H_{s'+1 , n-s'+1} (\boldsymbol{\alpha}) [UE]_{n-s'} - [F]_{t'}^T H_{t'+1 , n-t'+1} (\boldsymbol{\alpha}) [VF]_{n-t'} \Big) \bigg)^2 ,\end{aligned}$$ where we have used the Fourier expansion of $\mathbbm{1} (B - U E^2 - V F^2 )$ given in Definition [Definition 1](#definition, Fourier exp indicator function){reference-type="ref" reference="definition, Fourier exp indicator function"}, and ([\[statement, how Hankel matrices appear from products\]](#statement, how Hankel matrices appear from products){reference-type="ref" reference="statement, how Hankel matrices appear from products"}). 
Note that $\mathop{\mathrm{deg}}UE = s + s' = n-s'$ and $UE$ is monic, and so $[UE]_{n-s'}$ is a vector with final entry equal to $1$. On the other hand, $\mathop{\mathrm{deg}}VF \leq (t-1) + t' = n - t' -1$, and so $[VF]_{n-t'}$ is a vector with at least one zero at the end. Ultimately, this stems from the difference in parity between $n$ and $\mathop{\mathrm{deg}}V$. This will be used later in the proof.\ Now, if we have $A_1 , A_2$ with $\mathop{\mathrm{deg}}(A_1 - A_2) <h$ then the contributions of the summand when $A=A_1$ and $A=A_2$ are identical. Given that the condition $\mathop{\mathrm{deg}}(A_1 - A_2) <h$ creates equivalence classes of size $q^h$, we can consider one polynomial from each class and multiply by $q^h$. The natural polynomial to take from each class is the one with first $h$ entries being $0$. These polynomials are represented in vector form by $\mathbf{a} \in \{ 0 \}^h \times \mathbb{F}_q^{n-h+1}$. It is not difficult to see that, in vector form, the polynomial $B$ is just $\mathbf{a} + \mathbf{b}$ for some $\mathbf{b} \in \mathbb{F}_q^h \times \{ 0 \}^{n-h+1}$. Thus, we have $$\begin{aligned} &\frac{1}{q^n} \sum_{A \in \mathcal{M}_n } \bigg\lvert \sum_{B \in I (A; <h)} S_{U,V} (B) \bigg\rvert^2 \\ = &\frac{4 q^h}{q^{3n+2}} \sum_{\mathbf{a} \in \{ 0 \}^h \times \mathbb{F}_q^{n-h+1} } \bigg( \sum_{\mathbf{b} \in \mathbb{F}_q^h \times \{ 0 \}^{n-h+1} } \sum_{\substack{E \in \mathcal{M}_{s'} \\ F \in \mathcal{A}_{\leq t'} }} \sum_{\boldsymbol{\alpha} \in \mathbb{F}_q^{n+1}} \\ &\hspace{3em} \psi \Big( \boldsymbol{\alpha} \cdot (\mathbf{a} + \mathbf{b}) - [E]_{s'}^T H_{s'+1 , n-s'+1} (\boldsymbol{\alpha}) [UE]_{n-s'} - [F]_{t'}^T H_{t'+1 , n-t'+1} (\boldsymbol{\alpha}) [VF]_{n-t'} \Big) \bigg)^2 .\end{aligned}$$ Now, the additive nature of $\psi$ allows us to factor out the term involving $\mathbf{b}$ above. That is, within the large parentheses, we have the factor $\sum_{\mathbf{b} \in \mathbb{F}_q^h \times \{ 0 \}^{n-h+1} } \psi ( \boldsymbol{\alpha} \cdot \mathbf{b})$. By the orthogonality relation ([\[statement, additive character FF orthog relation\]](#statement, additive character FF orthog relation){reference-type="ref" reference="statement, additive character FF orthog relation"}), we have $$\begin{aligned} \sum_{\mathbf{b} \in \mathbb{F}_q^h \times \{ 0 \}^{n-h+1} } \psi ( \boldsymbol{\alpha} \cdot \mathbf{b}) = \prod_{i=0}^{h-1} \sum_{b_i \in \mathbb{F}_q} \psi ( \alpha_i b_i) = \begin{cases} q^h &\text{ if $\alpha_0 , \ldots , \alpha_{h-1} = 0$,} \\ 0 &\text{ otherwise.} \end{cases}\end{aligned}$$ Thus, a non-zero contribution occurs only if $\alpha_0 , \ldots , \alpha_{h-1} = 0$. We can apply a similar approach to handle the term $\boldsymbol{\alpha} \cdot \mathbf{a}$. However, unlike with $\mathbf{b}$, the sum over $\mathbf{a}$ appears outside the large parentheses, whereas the sum over $\boldsymbol{\alpha}$ appears within them. Given the square power, this means that there are "two $\boldsymbol{\alpha}$" that we must consider, say $\boldsymbol{\alpha}_1$ and $\boldsymbol{\alpha}_2$. We have already established that the sum over $\mathbf{b}$ "forces" the first $h$ entries of $\boldsymbol{\alpha}_1$ and $\boldsymbol{\alpha}_2$ to be zero. Similar reasoning for the sum over $\mathbf{a}$ will "force" the condition $\boldsymbol{\alpha}_1 + \boldsymbol{\alpha}_2 = \mathbf{0}$. That is, $\boldsymbol{\alpha}_2 = - \boldsymbol{\alpha}_1$. 
Therefore, we have $$\begin{aligned} &\frac{1}{q^n} \sum_{A \in \mathcal{M}_n } \bigg\lvert \sum_{B \in I (A; <h)} S_{U,V} (B) \bigg\rvert^2 \\ = &\frac{4 q^{2h}}{q^{2n+1}} \sum_{\boldsymbol{\alpha}_1 \in \mathcal{L}_n^h} \bigg( \sum_{\substack{E \in \mathcal{M}_{s'} \\ F \in \mathcal{A}_{\leq t'} }} \psi \Big( - [E]_{s'}^T H_{s'+1 , n-s'+1} (\boldsymbol{\alpha}_1) [UE]_{n-s'} - [F]_{t'}^T H_{t'+1 , n-t'+1} (\boldsymbol{\alpha}_1) [VF]_{n-t'} \Big) \bigg) \\ &\hspace{4em} \times \bigg( \sum_{\substack{E \in \mathcal{M}_{s'} \\ F \in \mathcal{A}_{\leq t'} }} \psi \Big( - [E]_{s'}^T H_{s'+1 , n-s'+1} (-\boldsymbol{\alpha}_1) [UE]_{n-s'} - [F]_{t'}^T H_{t'+1 , n-t'+1} (-\boldsymbol{\alpha}_1) [VF]_{n-t'} \Big) \bigg) \\ = &\frac{4 q^{2h}}{q^{2n+1}} \sum_{\boldsymbol{\alpha} \in \mathcal{L}_n^h} \bigg\lvert \sum_{\substack{E \in \mathcal{M}_{s'} \\ F \in \mathcal{A}_{\leq t'} }} \psi \Big( [E]_{s'}^T H_{s'+1 , n-s'+1} (\boldsymbol{\alpha}) [UE]_{n-s'} + [F]_{t'}^T H_{t'+1 , n-t'+1} (\boldsymbol{\alpha}) [VF]_{n-t'} \Big) \bigg\rvert^2 .\end{aligned}$$ Now, consider the cases where $\boldsymbol{\alpha} \in \mathcal{L}_n^h (0,0,0)$ and $\boldsymbol{\alpha} \in \mathcal{L}_n^h (1,0,1)$. This is just when all entries of $\boldsymbol{\alpha}$ are zero except the last which can take any value in $\mathbb{F}_q$. It is not difficult to see that the contribution of these cases is $4 q^{2h - \mathop{\mathrm{deg}}U -\mathop{\mathrm{deg}}V +1}$. By ([\[statement, lattice point ellipse mean value calculations\]](#statement, lattice point ellipse mean value calculations){reference-type="ref" reference="statement, lattice point ellipse mean value calculations"}), this is just $$\begin{aligned} \bigg( \frac{1}{q^n} \sum_{A \in \mathcal{M}_n } \sum_{B \in I (A; <h)} S_{U,V} (B) \bigg)^2 ,\end{aligned}$$ and so $$\begin{aligned} \begin{split} \label{statement, variance as add char after mean square removed} &\frac{1}{q^n} \sum_{A \in \mathcal{M}_n } \bigg\lvert \Delta_{S_{U,V}} (A; <h) \bigg\rvert^2 \\ = &\frac{1}{q^n} \sum_{A \in \mathcal{M}_n } \bigg\lvert \sum_{B \in I (A; <h)} S_{U,V} (B) \bigg\rvert^2 -\bigg( \frac{1}{q^n} \sum_{A \in \mathcal{M}_n } \sum_{B \in I (A; <h)} S_{U,V} (B) \bigg)^2 \\ = &\frac{4 q^{2h}}{q^{2n+1}} \sum_{\substack{\boldsymbol{\alpha} \in \mathcal{L}_n^h : \\ \boldsymbol{\alpha} \not\in \mathcal{L}_n^h (0,0,0) \\ \boldsymbol{\alpha} \not\in \mathcal{L}_n^h (1,0,1) }} \bigg\lvert \sum_{\substack{E \in \mathcal{M}_{s'} \\ F \in \mathcal{A}_{\leq t'} }} \psi \Big( [E]_{s'}^T H_{s'+1 , n-s'+1} (\boldsymbol{\alpha}) [UE]_{n-s'} + [F]_{t'}^T H_{t'+1 , n-t'+1} (\boldsymbol{\alpha}) [VF]_{n-t'} \Big) \bigg\rvert^2 \\ = &\frac{4 q^{2h}}{q^{2n+1}} \sum_{\substack{\boldsymbol{\alpha} \in \mathcal{L}_n^h : \\ \boldsymbol{\alpha} \not\in \mathcal{L}_n^h (0,0,0) \\ \boldsymbol{\alpha} \not\in \mathcal{L}_n^h (1,0,1) }} \bigg\lvert \sum_{\substack{E \in \mathcal{M}_{s'} \\ F \in \mathcal{A}_{\leq t'} }} \psi \Big( [E]_{s'}^T H_{s'+1 , s'+1} (\boldsymbol{\alpha}\odot [U]_s ) [E]_{s'} + [F]_{t'}^T H_{t'+1 , t'+1} (\boldsymbol{\alpha} \odot [V]_t ) [F]_{t'} \Big) \bigg\rvert^2 , \end{split}\end{aligned}$$ where the last equality uses Definition [Definition 1](#definition, circulant Toeplitz matrix def){reference-type="ref" reference="definition, circulant Toeplitz matrix def"} and Remark [Remark 1](#remark, circulant Toeplitz multiplied by vector and by Hankel matrix){reference-type="ref" reference="remark, circulant Toeplitz multiplied by vector and by Hankel matrix"}. 
Specifically, $$\begin{aligned} H_{s'+1 , n-s'+1} (\boldsymbol{\alpha}) [UE]_{n-s'} = H_{s'+1 , n-s'+1} (\boldsymbol{\alpha}) T_{n-s'+1 , s'+1} ([U]_s) [E]_{s'} = H_{s'+1 , s'+1} (\boldsymbol{\alpha}\odot [U]_s ) [E]_{s'} \end{aligned}$$ and $$\begin{aligned} H_{t'+1 , n-t'+1} (\boldsymbol{\alpha}) [VF]_{n-t'} = H_{t'+1 , n-t'+1} (\boldsymbol{\alpha}) T_{n-t'+1 , t'+1} ([V]_t) [F]_{t'} = H_{t'+1 , t'+1} (\boldsymbol{\alpha} \odot [V]_t ) [F]_{t'}.\end{aligned}$$ Now that we have established ([\[statement, variance as add char after mean square removed\]](#statement, variance as add char after mean square removed){reference-type="ref" reference="statement, variance as add char after mean square removed"}), we are in a position to consider each case of the theorem separately.\ [**Case 1:**]{.ul} $h \geq s'+s$.\ The $\boldsymbol{\alpha}$ that appear in ([\[statement, variance as add char after mean square removed\]](#statement, variance as add char after mean square removed){reference-type="ref" reference="statement, variance as add char after mean square removed"}) are in $\mathcal{L}_n^h$. Remark [Remark 1](#remark, rho_1 values dependent on h){reference-type="ref" reference="remark, rho_1 values dependent on h"} tells us that either $h+1 \leq \rho_{\mathrm{S}}(\boldsymbol{\alpha} ) \leq n_2 -1$ or $\rho_{\mathrm{S}}(\boldsymbol{\alpha}) = 0$. Given that $h \geq s' + s \geq n_2 -1$, we must have that $\rho_{\mathrm{S}}(\boldsymbol{\alpha}) = 0$. That is, $\boldsymbol{\alpha} \in \prescript{}{\mathrm{S}}{\mathcal{L}}_n^h (r_1 , 0 , r_1)$ for some $r_1 \geq 2$ (the cases $r_1 = 0,1$ are excluded in the summation in ([\[statement, variance as add char after mean square removed\]](#statement, variance as add char after mean square removed){reference-type="ref" reference="statement, variance as add char after mean square removed"})). Remark [Remark 1](#remark, rho_1 values dependent on h){reference-type="ref" reference="remark, rho_1 values dependent on h"} now tells us that $H_{n_1 , n_2} (\boldsymbol{\alpha})$ is lower skew-triangular and that the first non-zero skew-diagonal will determine the rank and hence the value of $r (\boldsymbol{\alpha})$. In particular, the first possible non-zero skew-diagonal is the $(h+1)$-th skew-diagonal, and so $$\begin{aligned} r (\boldsymbol{\alpha}) \leq (n+1) - h \leq (n+1) - (s' +s) = s'+1.\end{aligned}$$ Thus, in this case, all $\boldsymbol{\alpha}$ that appear in ([\[statement, variance as add char after mean square removed\]](#statement, variance as add char after mean square removed){reference-type="ref" reference="statement, variance as add char after mean square removed"}) are in $\prescript{}{\mathrm{S}}{\mathcal{L}}_n^h (r_1 , 0 , r_1 )$ for some $2 \leq r_1 \leq s'+1$. 
Lemma [Lemma 1](#lemma, U reduction, 1 dimensional kernel case){reference-type="ref" reference="lemma, U reduction, 1 dimensional kernel case"} (including the final claim of the lemma) tells us that $\pi_{\mathrm{S}}\Big( \boldsymbol{\alpha} \odot [U]_s \Big) = r_1$; and so because $r_1 \geq 2$, Lemma [Lemma 1](#lemma, quadratic form values over monics and nonmonics){reference-type="ref" reference="lemma, quadratic form values over monics and nonmonics"} tells us that $$\begin{aligned} \sum_{E \in \mathcal{M}_{s'}} \psi \Big( [E]_{s'}^T H_{s'+1 , s'+1} (\boldsymbol{\alpha} \odot [U]_s ) [E]_{s'} \Big) = 0 .\end{aligned}$$ Finally, substituting into ([\[statement, variance as add char after mean square removed\]](#statement, variance as add char after mean square removed){reference-type="ref" reference="statement, variance as add char after mean square removed"}), we have $$\begin{aligned} \frac{1}{q^n} \sum_{A \in \mathcal{M}_n } \bigg\lvert \Delta_{S_{U,V}} (A;<h) \bigg\rvert^2 = 0 .\end{aligned}$$ [**Case 2:**]{.ul} $n_2 -1 \leq h < s'+s$.\ Again, consider the $\boldsymbol{\alpha}$ that appear in ([\[statement, variance as add char after mean square removed\]](#statement, variance as add char after mean square removed){reference-type="ref" reference="statement, variance as add char after mean square removed"}). As in Case 1, we can show that $\boldsymbol{\alpha} \in \prescript{}{\mathrm{S}}{\mathcal{L}}_n^h (r_1 , 0 , r_1)$ for some $r_1 \geq 2$; and similar to that case, we have $r_1 \leq (n+1) - h$. We have already established in Case 1 that the range $r_1 \leq s'+1$ contributes zero. Hence, we only need to consider $\boldsymbol{\alpha} \in \prescript{}{\mathrm{S}}{\mathcal{L}}_n (r_1 , 0 , r_1 )$ with $s'+1 < r_1 \leq n+1-h$. Also, the second result in Lemma [Lemma 1](#lemma, quadratic form values over monics and nonmonics){reference-type="ref" reference="lemma, quadratic form values over monics and nonmonics"} tells us that a non-zero contribution will occur only if $\pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s ) \leq 1$. 
So, applying the above to ([\[statement, variance as add char after mean square removed\]](#statement, variance as add char after mean square removed){reference-type="ref" reference="statement, variance as add char after mean square removed"}), we obtain $$\begin{aligned} \begin{split} \label{statement, main theorem proof, variance in terms of kernels, first} &\frac{1}{q^n} \sum_{A \in \mathcal{M}_n } \bigg\lvert \Delta_{S_{U,V}} (A;<h) \bigg\rvert^2 \\ = &\frac{4 q^{2h}}{q^{2n+1}} \sum_{r_1 = s'+2}^{n+1-h} \sum_{\substack{\boldsymbol{\alpha} \in \prescript{}{\mathrm{S}}{\mathcal{L}}_n (r_1,0,r_1) : \\ \pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s ) \leq 1 }} \\ &\hspace{3em} \bigg\lvert \sum_{\substack{E \in \mathcal{M}_{s'} \\ F \in \mathcal{A}_{\leq t'} }} \psi \Big( [E]_{s'}^T H_{s'+1 , s'+1} (\boldsymbol{\alpha}\odot [U]_s ) [E]_{s'} + [F]_{t'}^T H_{t'+1 , t'+1} (\boldsymbol{\alpha} \odot [V]_t ) [F]_{t'} \Big) \bigg\rvert^2 \\ = &\frac{4 q^{2h}}{q^{2n+1}} \sum_{r_1 = s'+2}^{n+1-h} \sum_{\substack{\boldsymbol{\alpha} \in \prescript{}{\mathrm{S}}{\mathcal{L}}_n (r_1,0,r_1) : \\ \pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s ) \leq 1 }} q^{s' + t' + \pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s ) } \Big\lvert \mathop{\mathrm{ker}}H_{s'+1 , s'+1} (\boldsymbol{\alpha} \odot [U]_s ) \Big\rvert \Big\lvert \mathop{\mathrm{ker}}H_{t'+1 , t'+1} (\boldsymbol{\alpha} \odot [V]_t ) \Big\rvert , \end{split}\end{aligned}$$ where the second equality follows by Lemma [Lemma 1](#lemma, quadratic form values over monics and nonmonics){reference-type="ref" reference="lemma, quadratic form values over monics and nonmonics"}. Now, let $\boldsymbol{\alpha}'$ be the sequence we obtain by removing the last term of $\boldsymbol{\alpha}$. It will be helpful to reformulate the above in terms of $\boldsymbol{\alpha}'$ instead of $\boldsymbol{\alpha}$. Note that $\boldsymbol{\alpha}' \odot [U]_s$ is the same sequence as the one obtained by removing the last term of $\boldsymbol{\alpha} \odot [U]_s$. Furthermore, it is not difficult to see that, due to the zero at the end of $[V]_t$, we have $\boldsymbol{\alpha} \odot [V]_t = \boldsymbol{\alpha}' \odot [V]_{t-1}$ and thus $H_{t'+1 , t'+1} (\boldsymbol{\alpha} \odot [V]_t ) = H_{t'+1 , t'+1} (\boldsymbol{\alpha}' \odot [V]_{t-1} )$. We can also use Remark [Remark 1](#lemma, effect of removing last entry of alpha){reference-type="ref" reference="lemma, effect of removing last entry of alpha"} to see that $$\begin{aligned} \Big\lvert \mathop{\mathrm{ker}}H_{s'+1 , s'+1} (\boldsymbol{\alpha} \odot [U]_s ) \Big\rvert = \begin{cases} \Big\lvert \mathop{\mathrm{ker}}H_{s' , s'+1} (\boldsymbol{\alpha}' \odot [U]_s ) \Big\rvert &\text{ if $\pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s ) = 0$,} \\ q^{-1} \Big\lvert \mathop{\mathrm{ker}}H_{s' , s'+1} (\boldsymbol{\alpha}' \odot [U]_s ) \Big\rvert &\text{ if $\pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s ) = 1$.} \end{cases}\end{aligned}$$ Finally, Remark [Remark 1](#lemma, effect of removing last entry of alpha){reference-type="ref" reference="lemma, effect of removing last entry of alpha"} also tells us that $\boldsymbol{\alpha}' \in \prescript{}{\mathrm{S}}{\mathcal{L}}_{n-1} (r_1 -1,0,r_1 -1)$. 
Applying these points to ([\[statement, main theorem proof, variance in terms of kernels, first\]](#statement, main theorem proof, variance in terms of kernels, first){reference-type="ref" reference="statement, main theorem proof, variance in terms of kernels, first"}), we obtain $$\begin{aligned} \begin{split} \label{statement, main theorem, variance in terms of kernels of shortened alpha} &\frac{1}{q^n} \sum_{A \in \mathcal{M}_n } \bigg\lvert \Delta_{S_{U,V}} (A;<h) \bigg\rvert^2 \\ = &\frac{4 q^{2h+s' + t'} }{q^{2n+1}} \sum_{r_1 = s'+2}^{n+1-h} \sum_{\substack{\boldsymbol{\alpha}' \in \prescript{}{\mathrm{S}}{\mathcal{L}}_{n-1} (r_1 -1,0,r_1 -1) : \\ \pi_{\mathrm{S}}(\boldsymbol{\alpha}' \odot [U]_s ) = 0 \\ a_n \in \mathbb{F}_q }} \Big\lvert \mathop{\mathrm{ker}}H_{s' , s'+1} (\boldsymbol{\alpha}' \odot [U]_s ) \Big\rvert \Big\lvert \mathop{\mathrm{ker}}H_{t'+1 , t'+1} (\boldsymbol{\alpha}' \odot [V]_{t-1} ) \Big\rvert \\ = &\frac{4 q^{2h+s' + t'} }{q^{2n}} \sum_{r_1 = s'+1}^{n-h} \sum_{\substack{\boldsymbol{\alpha} \in \prescript{}{\mathrm{S}}{\mathcal{L}}_{n-1} (r_1,0,r_1) : \\ \pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s ) = 0 }} \Big\lvert \mathop{\mathrm{ker}}H_{s' , s'+1} (\boldsymbol{\alpha} \odot [U]_s ) \Big\rvert \Big\lvert \mathop{\mathrm{ker}}H_{t'+1 , t'+1} (\boldsymbol{\alpha} \odot [V]_{t-1} ) \Big\rvert . \end{split}\end{aligned}$$ For the second equality, the summand is independent of $a_n$ and so we can remove the sum by multiplying the entire expression by $q$. We also relabelled $\boldsymbol{\alpha}'$ to $\boldsymbol{\alpha}$ for presentational reasons. Now, we have that $$\begin{aligned} \begin{split} \label{statement, main theorem, kernels expressed as polynomial equations} &\sum_{\substack{\boldsymbol{\alpha} \in \prescript{}{\mathrm{S}}{\mathcal{L}}_{n-1} (r_1,0,r_1) : \\ \pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s ) = 0 }} \Big\lvert \mathop{\mathrm{ker}}H_{s' , s'+1} (\boldsymbol{\alpha} \odot [U]_s ) \Big\rvert \Big\lvert \mathop{\mathrm{ker}}H_{t'+1 , t'+1} (\boldsymbol{\alpha} \odot [V]_{t-1} ) \Big\rvert \\ = &\frac{q-1}{q^{n-2r_1 +2}} \sum_{A_2 (\boldsymbol{\alpha}) \in \mathcal{M}_{n-r_1+1}} \bigg( q \sum_{\substack{B_1 \in \mathcal{A}_{\leq s'+s-r_1} \\ B_2 \in \mathcal{M}_{r_1 -s'-1} \\ B \in \mathcal{M}_{s'} \\ B_1 + B_2 A_2 (\boldsymbol{\alpha}) = UB}} 1 \bigg) \bigg( \sum_{\substack{C_1 \in \mathcal{A}_{\leq t'+t-r_1 -1} \\ C_2 \in \mathcal{A}_{\leq r_1 -t'-2} \\ C \in \mathcal{A}_{\leq t'} \\ C_1 + C_2 A_2 (\boldsymbol{\alpha}) = VC}} 1 \bigg) \\ = &\frac{q-1}{q^{n-2r_1 +2}} \sum_{A \in \mathcal{M}_{n-r_1+1}} \bigg( q \sum_{\substack{B_1 \in \mathcal{A}_{\leq s'+s-r_1} \\ B_2 \in \mathcal{M}_{r_1 -s'-1} \\ B \in \mathcal{M}_{s'} \\ B_1 + B_2 A = UB}} 1 \bigg) \bigg( \sum_{\substack{C_1 \in \mathcal{A}_{\leq t'+t-r_1 -1} \\ C_2 \in \mathcal{A}_{\leq r_1 -t'-2} \\ C \in \mathcal{A}_{\leq t'} \\ C_1 + C_2 A = VC}} 1 \bigg) . \end{split}\end{aligned}$$ For the second equality we simply rewrote $A_2 (\boldsymbol{\alpha})$ as $A$, for presentational reasons. The justification for the first equality is as follows. First, we wish to express the sum over $\boldsymbol{\alpha} \in \prescript{}{\mathrm{S}}{\mathcal{L}}_{n-1} (r_1,0,r_1)$ as a sum over characteristic polynomials. Lemma [Lemma 1](#lemma, kernel of Hankel matrices){reference-type="ref" reference="lemma, kernel of Hankel matrices"} tells us that $A_1 (\boldsymbol{\alpha}) \in \mathcal{M}_{\rho (\boldsymbol{\alpha})}$. 
Since $\rho (\boldsymbol{\alpha}) = 0$, we have that $A_1 (\boldsymbol{\alpha}) = 1$. Lemma [Lemma 1](#lemma, kernel of Hankel matrices){reference-type="ref" reference="lemma, kernel of Hankel matrices"} also tells us that $A_2 (\boldsymbol{\alpha}) \in \mathcal{M}_{n-r_1+1}$ (note that in the Lemma we have $\boldsymbol{\alpha} = (\alpha_0 , \ldots , \alpha_{n})$ while here we have $\boldsymbol{\alpha} = (\alpha_0 , \ldots , \alpha_{n-1})$). Given these two points, we may wish to replace $\boldsymbol{\alpha} \in \prescript{}{\mathrm{S}}{\mathcal{L}}_{n-1} (r_1,0,r_1)$ with $A_2 (\boldsymbol{\alpha}) \in \mathcal{M}_{n-r_1+1}$. However, the correspondence between $\boldsymbol{\alpha}$ and $A_2 (\boldsymbol{\alpha})$ here is not bijective. Indeed, Remark [Remark 1](#remark, kernel of Hankel matrices){reference-type="ref" reference="remark, kernel of Hankel matrices"} tells us that $A_2 (\boldsymbol{\alpha})$ is unique only up to addition of polynomials of the form $B_1 A_1 (\boldsymbol{\alpha})$ for $B_1 \in \mathcal{A}_{\leq n-2r_1 +1}$. Remark [Remark 1](#remark, kernel of Hankel matrices){reference-type="ref" reference="remark, kernel of Hankel matrices"} also tells us that for each pair of characteristic polynomials $A_1 , A_2$, there are $q-1$ associated Hankel matrices (simply because multiplying our matrices by an element in $\mathbb{F}_q^*$ does not change the characteristic polynomials). Thus, we can replace $\boldsymbol{\alpha} \in \prescript{}{\mathrm{S}}{\mathcal{L}}_{n-1} (r_1,0,r_1)$ with $A_2 (\boldsymbol{\alpha}) \in \mathcal{M}_{n-r_1+1}$, but we must multiply by the factor $\frac{(q-1)}{\lvert \mathcal{A}_{\leq n-2r_1 +1} \rvert} = \frac{q-1}{q^{n-2r_1 +2}}$. However, this requires one last piece of justification. Namely, that our summand is independent of which representation of $A_2 (\boldsymbol{\alpha})$ one takes. This ultimately follows from the conditions $B_1 + B_2 A_2 (\boldsymbol{\alpha}) = UB$ and $C_1 + C_2 A_2 (\boldsymbol{\alpha}) = VC$, and the ranges given for $B_1 , B_2 , B , C_1 , C_2 , C$. This justifies the change in the summation range. Let us now justify the change in the summand.\ Consider $\Big\lvert \mathop{\mathrm{ker}}H_{s' , s'+1} (\boldsymbol{\alpha} \odot [U]_s ) \Big\rvert$. As in the proof of Lemma [Lemma 1](#lemma, U reduction, 1 dimensional kernel case){reference-type="ref" reference="lemma, U reduction, 1 dimensional kernel case"}, we have that a polynomial $B \in \mathcal{A}_{\leq s'}$ satisfies $[B]_{s'} \in \mathop{\mathrm{ker}}H_{s' , s'+1} (\boldsymbol{\alpha} \odot [U]_s )$ if and only if $UB = B_1 + B_2 A_2 (\boldsymbol{\alpha})$ for some $B_1 \in \mathcal{A}_{\leq s'+s-r_1}$ and $B_2 \in \mathcal{A}_{\leq r_1 -s'-1}$. However, recall that we also have the condition $\pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s ) = 0$. Remark [Remark 1](#remark, pi_1 = 0 iff vector in kernel with no zero at end){reference-type="ref" reference="remark, pi_1 = 0 iff vector in kernel with no zero at end"} tells us that $\pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s ) = 0$ if and only if there is $B \in \mathcal{M}_{s'}$ with $[B]_{s'} \in \mathop{\mathrm{ker}}H_{s' , s'+1} (\boldsymbol{\alpha} \odot [U]_s )$. Note that if $B \in \mathcal{M}_{s'}$, then the condition $B_1 + B_2 A_2 (\boldsymbol{\alpha}) = UB$ will require that $B_2 \in \mathcal{M}_{r_1 -s'-1}$. 
So we have established that the sum $$\begin{aligned} \sum_{\substack{B_1 \in \mathcal{A}_{\leq s'+s-r_1} \\ B_2 \in \mathcal{M}_{r_1 -s'-1} \\ B \in \mathcal{M}_{s'} \\ B_1 + B_2 A_2 (\boldsymbol{\alpha}) = UB}} 1\end{aligned}$$ will be zero if $\pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s ) \geq 1$, as required. However, when $\pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s ) = 0$, because of the additional restrictions that $B$ and $B_2$ be monic, the sum is not actually equal to $\Big\lvert \mathop{\mathrm{ker}}H_{s' , s'+1} (\boldsymbol{\alpha} \odot [U]_s ) \Big\rvert$, which is what we originally wanted. However, Remark [Remark 1](#remark, pi_1 = 0 iff vector in kernel with no zero at end){reference-type="ref" reference="remark, pi_1 = 0 iff vector in kernel with no zero at end"} also tells us that this can be remedied by multiplying by $q$. Finally, $\Big\lvert \mathop{\mathrm{ker}}H_{t'+1 , t'+1} (\boldsymbol{\alpha} \odot [V]_{t-1} ) \Big\rvert$ can be expressed as a similar sum, but it is simpler because we do not have a condition analogous to $\pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s ) = 0$.\ Continuing, by substituting ([\[statement, main theorem, kernels expressed as polynomial equations\]](#statement, main theorem, kernels expressed as polynomial equations){reference-type="ref" reference="statement, main theorem, kernels expressed as polynomial equations"}) into ([\[statement, main theorem, variance in terms of kernels of shortened alpha\]](#statement, main theorem, variance in terms of kernels of shortened alpha){reference-type="ref" reference="statement, main theorem, variance in terms of kernels of shortened alpha"}), we obtain $$\begin{aligned} \begin{split} \label{statement, main theorem, after kernels expressed as polynomial equations} &\frac{1}{q^n} \sum_{A \in \mathcal{M}_n } \bigg\lvert \Delta_{S_{U,V}} (A;<h) \bigg\rvert^2 \\ = &4 (q-1) \frac{q^{2h+s' + t'} }{q^{3n+1}} \sum_{r_1 = s'+1}^{n-h} q^{2 r_1} \sum_{A \in \mathcal{M}_{n-r_1+1}} \bigg( \sum_{\substack{B_1 \in \mathcal{A}_{\leq s'+s-r_1} \\ B_2 \in \mathcal{M}_{r_1 -s'-1} \\ B \in \mathcal{M}_{s'} \\ B_1 + B_2 A = UB}} 1 \bigg) \bigg( \sum_{\substack{C_1 \in \mathcal{A}_{\leq t'+t-r_1 -1} \\ C_2 \in \mathcal{A}_{\leq r_1 -t'-2} \\ C \in \mathcal{A}_{\leq t'} \\ C_1 + C_2 A = VC}} 1 \bigg) . \end{split}\end{aligned}$$ We wish to use Dirichlet characters[^4] of conductor $U$ to address the condition $B_2 A = UB - B_1$. To do so, we require $(B_1 , U)=1$. This is clearly not always the case, and so we will condition on the value of $(B_1 , U)$. Suppose $(B_1 , U) = U'$ and let $U_3$ be such that $U' U_3 = U$. First note that we require $\mathop{\mathrm{deg}}U' \leq \mathop{\mathrm{deg}}B_1 \leq s'+s-r_1$. Second, the equation $B_2 A = UB - B_1$ forces $\big( B_2 A , U \big)=U'$, and so we can write $U' = U_1 U_2$, where $(B_2 , U)=U_1$ and $U_2 \mid A$. Note that we require $\mathop{\mathrm{deg}}U_1 \leq \mathop{\mathrm{deg}}B_2 = r_1 -s' -1$. 
Applying the above, as well as a similar approach for the sum involving $C_1 , C_2 , C$, we obtain $$\begin{aligned} \begin{split} \label{statement, main theorem, just before dirichlet character application} &\sum_{A \in \mathcal{M}_{n-r_1+1}} \bigg( \sum_{\substack{B_1 \in \mathcal{A}_{\leq s'+s-r_1} \\ B_2 \in \mathcal{M}_{\leq r_1 -s'-1} \\ B \in \mathcal{M}_{s'} \\ B_1 + B_2 A = UB}} 1 \bigg) \bigg( \sum_{\substack{C_1 \in \mathcal{A}_{\leq t'+t-r_1 -1} \\ C_2 \in \mathcal{A}_{\leq r_1 -t'-2} \\ C \in \mathcal{A}_{\leq t'} \\ C_1 + C_2 A = VC}} 1 \bigg) \\ = & \sum_{\substack{U_1 \in \mathcal{M}_{\leq r_1-s'-1} \\U_2 \in \mathcal{M}_{\leq s'+s-r_1-\mathop{\mathrm{deg}}U_1} \\U_3 \in \mathcal{M} \\ U_1 U_2 U_3 = U }} \sum_{\substack{V_1 \in \mathcal{M}_{\leq r_1-t'-2} \\ V_2 \in \mathcal{M}_{\leq t'+t-r_1 -1-\mathop{\mathrm{deg}}V_1} \\ V_3 \in \mathcal{M} \\ V_1 V_2 V_3 = V }} \sum_{\substack{A \in \mathcal{M}_{n-r_1+1} \\ U_2 \mid A \\ V_2 \mid A}} \bigg( \sum_{\substack{B_1 \in \mathcal{A}_{\leq s'+s-r_1} \\ (B_1 , U) = U_1 U_2 \\ B_2 \in \mathcal{M}_{\leq r_1 -s'-1} \\ (B_2 , U) = U_1 \\ B \in \mathcal{M}_{s'} \\ B_1 + B_2 A = UB }} 1 \bigg) \bigg( \sum_{\substack{C_1 \in \mathcal{A}_{\leq t'+t-r_1 -1} \\ (C_1 , V) = V_1 V_2 \\ C_2 \in \mathcal{A}_{\leq r_1 -t' -2} \\ (C_2 , V) = V_1 \\ C \in \mathcal{A}_{\leq t'} \\ C_1 + C_2 A = VC }} 1 \bigg) \\ = & \sum_{\substack{U_1 \in \mathcal{M}_{\leq r_1-s'-1} \\U_2 \in \mathcal{M}_{\leq s'+s-r_1-\mathop{\mathrm{deg}}U_1} \\U_3 \in \mathcal{M} \\ U_1 U_2 U_3 = U }} \sum_{\substack{V_1 \in \mathcal{M}_{\leq r_1-t'-2} \\ V_2 \in \mathcal{M}_{\leq t'+t-r_1 -1-\mathop{\mathrm{deg}}V_1} \\ V_3 \in \mathcal{M} \\ V_1 V_2 V_3 = V }} \sum_{\substack{B_1 \in \mathcal{A}_{\leq s'+s-r_1-\mathop{\mathrm{deg}}U_1 U_2} \\ (B_1 , U_3) = 1 \\ B_2 \in \mathcal{M}_{\leq r_1 -s'-1-\mathop{\mathrm{deg}}U_1} \\ (B_2 , U_2 U_3) = 1 }} \sum_{\substack{C_1 \in \mathcal{A}_{\leq t'+t-r_1 -1-\mathop{\mathrm{deg}}V_1 V_2} \\ (C_1 , V_3) = 1 \\ C_2 \in \mathcal{A}_{\leq r_1 -t' -2-\mathop{\mathrm{deg}}V_1} \\ (C_2 , V_2 V_3) = 1 }} \sum_{\substack{A \in \mathcal{M}_{n-r_1+1-\mathop{\mathrm{deg}}U_2 V_2} \\ B \in \mathcal{M}_{s'} \\ C \in \mathcal{A}_{\leq t'} \\ B_1 + B_2 V_2 A = U_3 B \\ C_1 + C_2 U_2 A = V_3 C }} 1 . \end{split}\end{aligned}$$ The last summation can be addressed using Dirichlet characters. In what follows, for a general monic polynomial $W$, a sum over all Dirichlet characters of conductor $W$ is expressed as $\sum_{\chi \mathop{\mathrm{mod}}W}$. 
We have, $$\begin{aligned} \begin{split} \label{statement, main theorem proof, final application of Dirichlet characters} &\sum_{\substack{A \in \mathcal{M}_{n-r_1+1-\mathop{\mathrm{deg}}U_2 V_2} \\ B \in \mathcal{M}_{s'} \\ C \in \mathcal{A}_{\leq t'} \\ B_1 + B_2 V_2 A = U_3 B \\ C_1 + C_2 U_2 A = V_3 C }} 1 \\ = &\sum_{A \in \mathcal{M}_{n-r_1+1-\mathop{\mathrm{deg}}U_2 V_2}} \bigg( \frac{1}{\phi (U_3)} \sum_{\chi_1 \mathop{\mathrm{mod}}U_3} \chi_1 (B_2 V_2 A) \overline{\chi}_1 (-B_1) \bigg) \bigg( \frac{1}{\phi (V_3)} \sum_{\chi_2 \mathop{\mathrm{mod}}V_3} \chi_2 (C_2 U_2 A) \overline{\chi}_2 (-C_1) \bigg) \\ = &\frac{1}{\phi (U_3)} \frac{1}{\phi (V_3)} \sum_{\substack{\chi_1 \mathop{\mathrm{mod}}U_3 \\ \chi_2 \mathop{\mathrm{mod}}V_3 }} \chi_1 (B_2 V_2 ) \overline{\chi}_1 (-B_1) \chi_2 (C_2 U_2 ) \overline{\chi}_2 (-C_1) \sum_{A \in \mathcal{M}_{n-r_1+1-\mathop{\mathrm{deg}}U_2 V_2}} \chi_1 (A) \chi_2 (A) \\ = &\frac{q^{n-r_1+1}}{\lvert U_2 U_3 V_2 V_3 \rvert} , \end{split}\end{aligned}$$ where, for the first equality we used the orthogonality relations. For the final equality, we used the fact that $\chi_1 \chi_2$ is a Dirichlet character of conductor $U_3 V_3$ (since $U_3 ,V_3$ are coprime), and so due to the orthogonality relations and the fact that $n-r_1 +1 - \mathop{\mathrm{deg}}U_2 V_2 > \mathop{\mathrm{deg}}U_3 V_3$, we can see that a non-zero contribution occurs only if $\chi_1 \chi_2$ is trivial (and thus $\chi_1 , \chi_2$ are trivial[^5]). Note that our result holds true even for the special cases where $U_3 = 1$ or $V_3 = 1$, including the special subcases where $B_1 = 0$ or $C_1 = 0$. Let us now apply ([\[statement, main theorem proof, final application of Dirichlet characters\]](#statement, main theorem proof, final application of Dirichlet characters){reference-type="ref" reference="statement, main theorem proof, final application of Dirichlet characters"}) to ([\[statement, main theorem, just before dirichlet character application\]](#statement, main theorem, just before dirichlet character application){reference-type="ref" reference="statement, main theorem, just before dirichlet character application"}) and consider the sum over $U_1 , U_2 , U_3$. A similar result applies for the sum over $V_1, V_2 , V_3$. 
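Before carrying out the sums over $U_1 , U_2 , U_3$ and $V_1 , V_2 , V_3$, it may be helpful to sanity-check ([\[statement, main theorem proof, final application of Dirichlet characters\]](#statement, main theorem proof, final application of Dirichlet characters){reference-type="ref" reference="statement, main theorem proof, final application of Dirichlet characters"}) on a toy instance. The sketch below (Python with SymPy, not part of the argument; every concrete polynomial is a hypothetical choice, and the degree bookkeeping that places the quotients $B$ and $C$ in $\mathcal{M}_{s'}$ and $\mathcal{A}_{\leq t'}$ is not modelled) brute-forces the number of monic $A$ of a fixed degree $m$ with $U_3 \mid B_1 + B_2 V_2 A$ and $V_3 \mid C_1 + C_2 U_2 A$. Given the coprimality conditions in the summation ranges, the two congruences pin $A$ down to a single residue class modulo $U_3 V_3$, so for $m \geq \mathop{\mathrm{deg}}U_3 V_3$ the count is $q^{m - \mathop{\mathrm{deg}}U_3 V_3}$; this is precisely the source of the factor $\lvert U_3 V_3 \rvert^{-1}$ above.

```python
from sympy import symbols, Poly, GF
from itertools import product

x = symbols('x')
q = 2
P = lambda e: Poly(e, x, domain=GF(q))

# Hypothetical toy data: U3, V3 coprime; B2*V2 invertible mod U3; C2*U2 invertible mod V3.
U3, V3 = P(x**2 + x + 1), P(x + 1)
B1, B2V2 = P(x), P(x**2 + 1)
C1, C2U2 = P(1), P(x)

m = 4                                              # degree of the monic A being enumerated
count = 0
for coeffs in product(range(q), repeat=m):         # lower-order coefficients of A
    A = P(x**m) + sum((c * P(x**i) for i, c in enumerate(coeffs)), P(0))
    ok_U = (B1 + B2V2 * A).rem(U3).is_zero         # U3 | B1 + B2*V2*A, so a quotient B exists
    ok_V = (C1 + C2U2 * A).rem(V3).is_zero         # V3 | C1 + C2*U2*A, so a quotient C exists
    if ok_U and ok_V:
        count += 1

# CRT pins A to one residue class mod U3*V3, so the count equals q^(m - deg(U3*V3)).
print(count, q ** (m - (U3 * V3).degree()))        # both print 2
```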
We have $$\begin{aligned} \sum_{\substack{U_1 \in \mathcal{M}_{\leq r_1-s'-1} \\U_2 \in \mathcal{M}_{\leq s'+s-r_1-\mathop{\mathrm{deg}}U_1} \\U_3 \in \mathcal{M} \\ U_1 U_2 U_3 = U }} \sum_{\substack{B_1 \in \mathcal{A}_{\leq s'+s-r_1-\mathop{\mathrm{deg}}U_1 U_2} \\ (B_1 , U_3) = 1 \\ B_2 \in \mathcal{M}_{\leq r_1 -s'-1-\mathop{\mathrm{deg}}U_1} \\ (B_2 , U_2 U_3) = 1 }} \frac{1}{\lvert U_2 U_3 \rvert} = &\frac{1}{\lvert U \rvert} \sum_{\substack{U_1 \in \mathcal{M}_{\leq r_1-s'-1} \\U_2 \in \mathcal{M}_{\leq s'+s-r_1-\mathop{\mathrm{deg}}U_1} \\U_3 \in \mathcal{M} \\ U_1 U_2 U_3 = U }} \sum_{\substack{B_1 \in \mathcal{A}_{\leq s'+s-r_1} \\ (B_1 , U) = U_1 U_2 }} \sum_{\substack{B_2 \in \mathcal{M}_{\leq r_1 -s'-1} \\ (B_2 , U) = U_1 }} \lvert U_1 \rvert \\ = &\frac{1}{\lvert U \rvert} \sum_{U' U_3 = U} \sum_{\substack{B_1 \in \mathcal{A}_{\leq s'+s-r_1} \\ (B_1 , U) = U' }} \sum_{U_1 U_2 = U'} \sum_{\substack{B_2 \in \mathcal{M}_{\leq r_1 -s'-1} \\ (B_2 , U) = U_1 }} \lvert (B_2 , U) \rvert \\ = &\frac{1}{\lvert U \rvert} \sum_{U' U_3 = U} \sum_{\substack{B_1 \in \mathcal{A}_{\leq s'+s-r_1} \\ B_2 \in \mathcal{M}_{\leq r_1 -s'-1} \\ (B_1 , U) = U' \\ (B_2 , U) \mid (B_1 , U) }} \lvert (B_2 , U) \rvert \\ = &\frac{1}{\lvert U \rvert} \sum_{\substack{B_1 \in \mathcal{A}_{\leq s'+s-r_1} \\ B_2 \in \mathcal{M}_{\leq r_1 -s'-1} \\ (B_2 , U) \mid B_1 }} \lvert (B_2 , U) \rvert . \end{aligned}$$ Similarly, $$\begin{aligned} \sum_{\substack{V_1 \in \mathcal{M}_{\leq r_1-t'-2} \\ V_2 \in \mathcal{M}_{\leq t'+t-r_1 -1-\mathop{\mathrm{deg}}V_1} \\ V_3 \in \mathcal{M} \\ V_1 V_2 V_3 = V }} \sum_{\substack{C_1 \in \mathcal{A}_{\leq t'+t-r_1 -1-\mathop{\mathrm{deg}}V_1 V_2} \\ (C_1 , V_3) = 1 \\ C_2 \in \mathcal{A}_{\leq r_1 -t' -2-\mathop{\mathrm{deg}}V_1} \\ (C_2 , V_2 V_3) = 1 }} \frac{1}{\lvert V_2 V_3 \rvert} = \frac{1}{\lvert V \rvert} \sum_{\substack{C_1 \in \mathcal{A}_{\leq t'+t-r_1 -1} \\ C_2 \in \mathcal{A}_{\leq r_1 -t'-2} \\ (C_2 , V) \mid C_1 }} \lvert (C_2 , V) \rvert . 
\end{aligned}$$ Hence, by applying these to ([\[statement, main theorem, just before dirichlet character application\]](#statement, main theorem, just before dirichlet character application){reference-type="ref" reference="statement, main theorem, just before dirichlet character application"}), we obtain $$\begin{aligned} &\sum_{A_2 (\boldsymbol{\alpha}) \in \mathcal{M}_{n-r_1+1}} \bigg( \sum_{\substack{B_1 \in \mathcal{A}_{\leq s'+s-r_1} \\ B_2 \in \mathcal{M}_{\leq r_1 -s'-1} \\ B \in \mathcal{M}_{s'} \\ B_1 + B_2 A_2 (\boldsymbol{\alpha}) = UB}} 1 \bigg) \bigg( \sum_{\substack{C_1 \in \mathcal{A}_{\leq t'+t-r_1 -1} \\ C_2 \in \mathcal{A}_{\leq r_1 -t'-2} \\ C \in \mathcal{A}_{\leq t'} \\ C_1 + C_2 A_2 (\boldsymbol{\alpha}) = VC}} 1 \bigg) \\ = &\frac{q^{n-r_1+1}}{\lvert UV \rvert} \sum_{\substack{B_1 \in \mathcal{A}_{\leq s'+s-r_1} \\ B_2 \in \mathcal{M}_{\leq r_1 -s'-1} \\ (B_2 , U) \mid B_1 }} \lvert (B_2 , U) \rvert \sum_{\substack{C_1 \in \mathcal{A}_{\leq t'+t-r_1 -1} \\ C_2 \in \mathcal{A}_{\leq r_1 -t'-2} \\ (C_2 , V) \mid C_1 }} \lvert (C_2 , V) \rvert .\end{aligned}$$ Finally, applying this to ([\[statement, main theorem, after kernels expressed as polynomial equations\]](#statement, main theorem, after kernels expressed as polynomial equations){reference-type="ref" reference="statement, main theorem, after kernels expressed as polynomial equations"}), we obtain $$\begin{aligned} &\frac{1}{q^n} \sum_{A \in \mathcal{M}_n } \bigg\lvert \Delta_{S_{U,V}} (A;<h) \bigg\rvert^2 \\ = &4 q^{h} \frac{q-1}{q^{\frac{1}{2}} \lvert UV \rvert^{\frac{1}{2}} } \sum_{r_1 = s'+1}^{n-h} q^{r_1 -(n-h)} \sum_{\substack{B_1 \in \mathcal{A}_{\leq s'+s-r_1} \\ B_2 \in \mathcal{M}_{\leq r_1 -s'-1} \\ (B_2 , U) \mid B_1 }} \lvert (B_2 , U) \rvert \sum_{\substack{C_1 \in \mathcal{A}_{\leq t'+t-r_1 -1} \\ C_2 \in \mathcal{A}_{\leq r_1 -t'-2} \\ (C_2 , V) \mid C_1 }} \lvert (C_2 , V) \rvert \\ = & q^h f_{U,V} (n,h) .\end{aligned}$$ Let us now prove the bound for $f_{U,V} (n,h)$. We have $$\begin{aligned} \label{statement, main theorem, case 3, what must be bounded from case 2} f_{U,V} (n,h) = \frac{4(q-1)}{q^{\frac{1}{2}} \lvert UV \rvert^{\frac{1}{2}} } \sum_{r_1 = s'+1}^{n-h} q^{r_1 -(n-h)} \sum_{\substack{B_1 \in \mathcal{A}_{\leq s'+s-r_1} \\ B_2 \in \mathcal{M}_{\leq r_1 -s'-1} \\ (B_2 , U) \mid B_1 }} \lvert (B_2 , U) \rvert \sum_{\substack{C_1 \in \mathcal{A}_{\leq t'+t-r_1 -1} \\ C_2 \in \mathcal{A}_{\leq r_1 -t'-2} \\ (C_2 , V) \mid C_1 }} \lvert (C_2 , V) \rvert .\end{aligned}$$ We note that $(B_2 , U) \mid B_1$ and $(B_2 , U) \mid B_2$, and so $$\begin{aligned} \mathop{\mathrm{deg}}(B_2 , U) \leq \min \{ \mathop{\mathrm{deg}}B_1 , \mathop{\mathrm{deg}}B_2 \} \leq \min \{ s'+s-r_1 , r_1 -s'-1 \} \leq \frac{\mathop{\mathrm{deg}}U}{2} -1 .\end{aligned}$$ Similarly, $\mathop{\mathrm{deg}}(C_2 , V) \leq \frac{\mathop{\mathrm{deg}}V -1}{2} -1$. 
So, we have $$\begin{aligned} &\sum_{\substack{B_1 \in \mathcal{A}_{\leq s'+s-r_1} \\ B_2 \in \mathcal{M}_{\leq r_1 -s'-1} \\ (B_2 , U) \mid B_1}} \lvert (B_2 , U) \rvert = \sum_{\substack{U_1 \mid U \\ \mathop{\mathrm{deg}}U_1 \leq \frac{\mathop{\mathrm{deg}}U}{2} -1 }} \lvert U_1 \rvert \sum_{\substack{B_2 \in \mathcal{M}_{\leq r_1 -s'-1} \\ (B_2 , U) = U_1 }} \sum_{\substack{B_1 \in \mathcal{A}_{\leq s'+s-r_1} \\ U_1 \mid B_1 }} 1 \\ = &q^{s'+s-r_1 +1} \sum_{\substack{U_1 \mid U \\ \mathop{\mathrm{deg}}U_1 \leq \frac{\mathop{\mathrm{deg}}U}{2} -1 }} \sum_{\substack{B_2 \in \mathcal{M}_{\leq r_1 -s'-1} \\ (B_2 , U) = U_1 }} 1 \leq q^{s} \sum_{\substack{U_1 \mid U \\ \mathop{\mathrm{deg}}U_1 \leq \frac{\mathop{\mathrm{deg}}U}{2} -1 }} \frac{1}{ \lvert U_1 \rvert} \leq \lvert U \rvert (\log_q \mathop{\mathrm{deg}}U) .\end{aligned}$$ Similarly, $$\begin{aligned} \sum_{\substack{C_1 \in \mathcal{A}_{\leq t'+t-r_1 -1} \\ C_2 \in \mathcal{A}_{\leq r_1 -t'-2} \\ (C_2 , V) \mid C_1 }} \lvert (C_2 , V) \rvert \leq \lvert V \rvert (\log_q \mathop{\mathrm{deg}}V) .\end{aligned}$$ Applying this to ([\[statement, main theorem, case 3, what must be bounded from case 2\]](#statement, main theorem, case 3, what must be bounded from case 2){reference-type="ref" reference="statement, main theorem, case 3, what must be bounded from case 2"}) we obtain $$\begin{aligned} &\frac{4(q-1)}{q^{\frac{1}{2}} \lvert UV \rvert^{\frac{1}{2}} } \sum_{r_1 = s'+1}^{n-h} q^{r_1 -(n-h)} \sum_{\substack{B_1 \in \mathcal{A}_{\leq s'+s-r_1} \\ B_2 \in \mathcal{M}_{\leq r_1 -s'-1} \\ (B_2 , U) \mid B_1 }} \lvert (B_2 , U) \rvert \sum_{\substack{C_1 \in \mathcal{A}_{\leq t'+t-r_1 -1} \\ C_2 \in \mathcal{A}_{\leq r_1 -t'-2} \\ (C_2 , V) \mid C_1 }} \lvert (C_2 , V) \rvert \\ \leq & 4 (q-1) q^{- \frac{1}{2}} \Big( \sum_{r_1 = s'+1}^{n-h} q^{r_1 - (n-h)} \Big) \lvert UV \rvert^{\frac{1}{2}} (\log_q \mathop{\mathrm{deg}}U) (\log_q \mathop{\mathrm{deg}}V) \\ \leq &4 q^{\frac{1}{2}} \lvert UV \rvert^{\frac{1}{2}} (\log_q \mathop{\mathrm{deg}}U) (\log_q \mathop{\mathrm{deg}}V) . \\\end{aligned}$$ [**Case 3:**]{.ul} $3 (\mathop{\mathrm{deg}}UV +1) \leq h < \min \{ s' , t' \} -1$.\ As in Cases 1 and 2, consider the $\boldsymbol{\alpha} \in \mathcal{L}_n^h$ that appear in ([\[statement, variance as add char after mean square removed\]](#statement, variance as add char after mean square removed){reference-type="ref" reference="statement, variance as add char after mean square removed"}). Remark [Remark 1](#remark, rho_1 values dependent on h){reference-type="ref" reference="remark, rho_1 values dependent on h"} tells us that either $h+1 \leq \rho_{\mathrm{S}}(\boldsymbol{\alpha} ) \leq n_2 -1$ or $\rho_{\mathrm{S}}(\boldsymbol{\alpha}) = 0$. Certainly, as in Cases 1 and 2, $\rho_{\mathrm{S}}(\boldsymbol{\alpha}) = 0$ is a possibility; but now, we also have that the range $h+1 \leq \rho_{\mathrm{S}}(\boldsymbol{\alpha} ) \leq n_2 -1$ is non-empty.
We will divide this range into two parts: The $\boldsymbol{\alpha}$ that satisfy $$\begin{aligned} \label{statement, main theorem, LOT second contribution range} h+1 \leq \rho_{\mathrm{S}}(\boldsymbol{\alpha} ) \leq n_2 -1 \hspace{3em} \text{ and } \hspace{3em} r (\boldsymbol{\alpha}) \geq \min \{ s' , t' \} +1 ;\end{aligned}$$ and the $\boldsymbol{\alpha}$ that satisfy $$\begin{aligned} \label{statement, main theorem, main term contribution range} h+1 \leq \rho_{\mathrm{S}}(\boldsymbol{\alpha} ) \leq \min \{ s' , t' \} \hspace{3em} \text{ and } \hspace{3em} r (\boldsymbol{\alpha}) \leq \min \{ s' , t' \} .\end{aligned}$$ Step 1 below will address the error terms, which come from the cases $\rho_{\mathrm{S}}(\boldsymbol{\alpha}) = 0$ and ([\[statement, main theorem, LOT second contribution range\]](#statement, main theorem, LOT second contribution range){reference-type="ref" reference="statement, main theorem, LOT second contribution range"}) above. The former is essentially just Case 1 and 2 above, and so we will only need to provide a bound for Case 2 (since Case 1 contributes zero). The latter is somewhat difficult to evaluate asymptotically, but it can be bounded. Step 2 below will address the main term for this case, which comes from ([\[statement, main theorem, main term contribution range\]](#statement, main theorem, main term contribution range){reference-type="ref" reference="statement, main theorem, main term contribution range"}).\ [Step 1:]{.ul}\ Suppose $\rho_{\mathrm{S}}(\boldsymbol{\alpha}) = 0$. Similarly as in the previous cases, we have that $\boldsymbol{\alpha} \in \prescript{}{\mathrm{S}}{\mathcal{L}}_n^h (r_1 , 0 , r_1)$ for some $2 \leq r_1 \leq n_1$. Case 1 tells us that if $r_1 \leq s'+1$, then the contribution is zero. By a slight adaptation of Case 2, we can see that the contribution of $s' + 2 \leq r_1 \leq n_1$ is $$\begin{aligned} q^{2h - (n_2 -1)} f_{U,V} (n , n_1 -1) \leq 4 q^{2h - n_2 + \frac{3}{2}} \lvert UV \rvert^{\frac{1}{2}} (\log_q \mathop{\mathrm{deg}}U) (\log_q \mathop{\mathrm{deg}}V).\end{aligned}$$ So, we have bounded the contribution from the cases where $\rho_{\mathrm{S}}(\boldsymbol{\alpha}) = 0$. Now let us bound the contribution from the cases where ([\[statement, main theorem, LOT second contribution range\]](#statement, main theorem, LOT second contribution range){reference-type="ref" reference="statement, main theorem, LOT second contribution range"}) holds.\ To this end, suppose that $h +1 < \rho (\boldsymbol{\alpha} ) \leq n_2 -1$ and $r (\boldsymbol{\alpha}) \geq \min \{ s' , t' \} +1$. 
The contribution of these cases to ([\[statement, variance as add char after mean square removed\]](#statement, variance as add char after mean square removed){reference-type="ref" reference="statement, variance as add char after mean square removed"}) is $$\begin{aligned} \begin{split} \label{statement, main theorem proof, bounding min(s',t') < rho (alpha), Cauchy Schwarz} &\frac{4 q^{2h}}{q^{2n+1}} \hspace{-1.75em} \sum_{\substack{\boldsymbol{\alpha} \in \mathcal{L}_n^h : \\ h +1 < \rho (\boldsymbol{\alpha} ) \leq n_2 -1 \\ r (\boldsymbol{\alpha}) \geq \min \{ s' , t' \} +1 }} \hspace{-0.5em} \bigg\lvert \sum_{\substack{E \in \mathcal{M}_{s'} \\ F \in \mathcal{A}_{\leq t'} }} \psi \Big( [E]_{s'}^T H_{s'+1 , s'+1} (\boldsymbol{\alpha}\odot [U]_s ) [E]_{s'} + [F]_{t'}^T H_{t'+1 , t'+1} (\boldsymbol{\alpha} \odot [V]_t ) [F]_{t'} \Big) \bigg\rvert^2 \\ \leq &\frac{4 q^{2h}}{q^{2n+1}} \bigg[ \sum_{\substack{\boldsymbol{\alpha} \in \mathcal{L}_n^h : \\ r (\boldsymbol{\alpha}) \geq \min \{ s' , t' \} +1 }} \bigg\lvert \sum_{E \in \mathcal{M}_{s'}} \psi \Big( [E]_{s'}^T H_{s'+1 , s'+1} (\boldsymbol{\alpha}\odot [U]_s ) [E]_{s'} \Big) \bigg\rvert^4 \bigg]^{\frac{1}{2}} \\ &\hspace{6em} \times \bigg[ \sum_{\substack{\boldsymbol{\alpha} \in \mathcal{L}_n^h : \\ r (\boldsymbol{\alpha}) \geq \min \{ s' , t' \} +1 }} \bigg\lvert \sum_{F \in \mathcal{A}_{\leq t'}} \psi \Big( [F]_{t'}^T H_{t'+1 , t'+1} (\boldsymbol{\alpha} \odot [V]_t ) [F]_{t'} \Big) \bigg\rvert^4 \bigg]^{\frac{1}{2}} \\ \leq &\frac{4 q^{2h}}{q^{2n+1}} \bigg[ q^s \sum_{\substack{\boldsymbol{\beta} \in \mathcal{L}_{n-s}^{h-s} : \\ r (\boldsymbol{\beta}) \geq \min \{ s' , t' \} +1 - s }} \bigg\lvert \sum_{E \in \mathcal{M}_{s'}} \psi \Big( [E]_{s'}^T H_{s'+1 , s'+1} (\boldsymbol{\beta}) [E]_{s'} \Big) \bigg\rvert^4 \bigg]^{\frac{1}{2}} \\ &\hspace{6em} \times \bigg[ q^t \sum_{\substack{\boldsymbol{\beta} \in \mathcal{L}_{n- t}^{h-t} : \\ r (\boldsymbol{\beta}) \geq \min \{ s' , t' \} +1 - t}} \bigg\lvert \sum_{F \in \mathcal{A}_{\leq t'}} \psi \Big( [F]_{t'}^T H_{t'+1 , t'+1} (\boldsymbol{\beta}) [F]_{t'} \Big) \bigg\rvert^4 \bigg]^{\frac{1}{2}} . \end{split}\end{aligned}$$ The first relation uses the Cauchy-Schwarz inequality; and it also removes the condition $h +1 < \rho (\boldsymbol{\alpha} ) \leq n_2 -1$ from the sum, which we can do because we are considering upper bounds of a sum of positive values. The second uses the fact that the map $\boldsymbol{\alpha} \to \boldsymbol{\alpha} \odot [U]_s$ is a linear map from $\mathcal{L}_n^h$ to $\mathcal{L}_{n-s}^{h-s}$, with the kernel of this linear map having $q^s$ elements in it. We also used the fact that $r (\boldsymbol{\alpha} \odot [U]_s ) \geq \min \{ s' , t' \} +1 - s$, which requires more justification. 
For this, we note the following implications $$\begin{aligned} &r (\boldsymbol{\alpha} \odot [U]_s ) \geq \min \{ s' , t' \} +1 - s \\ \iff &\mathop{\mathrm{rank}}H_{s'+1 , s'+1} (\boldsymbol{\alpha} \odot [U]_s ) \geq \min \{ s' , t' \} +1 - s \\ \iff &\dim \mathop{\mathrm{ker}}H_{s'+1 , s'+1} (\boldsymbol{\alpha} \odot [U]_s ) \leq \max \{ 0 , s' - t' \} + s \\ \iff &\dim \mathop{\mathrm{ker}}H_{s'+1 , n-s'+1} (\boldsymbol{\alpha}) T_{n-s'+1 , s'+1} ([U]_s) \leq \max \{ 0 , s' - t' \} + s \\ \Longleftarrow \hspace{0.25em} &\dim \mathop{\mathrm{ker}}H_{s'+1 , n-s'+1} (\boldsymbol{\alpha}) \leq \max \{ 0 , s' - t' \} + s +1 \\ \iff & \mathop{\mathrm{rank}}H_{s'+1 , n-s'+1} (\boldsymbol{\alpha}) \geq \min \{ s' , t' \} +1 .\end{aligned}$$ The first implication follows by definition of $r (\boldsymbol{\alpha} \odot [U]_s )$. The second and last simply use the fact that the dimension of the kernel is the number of columns minus the rank. The third follows from Remark [Remark 1](#remark, circulant Toeplitz multiplied by vector and by Hankel matrix){reference-type="ref" reference="remark, circulant Toeplitz multiplied by vector and by Hankel matrix"}. The fourth follows from the fact that $T_{n-s'+1 , s'+1} ([U]_s)$ defines an injective map. Now, the last line is true, which follows from the fact that $r (\boldsymbol{\alpha}) \geq \min \{ s' , t' \} +1$ and Remark [Remark 1](#remark, rank of Hankel matrix is minimum of rows, columns, and r){reference-type="ref" reference="remark, rank of Hankel matrix is minimum of rows, columns, and r"}. Thus, the first line is true, as required. A similar justification applies for the sum involving $V$ in ([\[statement, main theorem proof, bounding min(s\',t\') \< rho (alpha), Cauchy Schwarz\]](#statement, main theorem proof, bounding min(s',t') < rho (alpha), Cauchy Schwarz){reference-type="ref" reference="statement, main theorem proof, bounding min(s',t') < rho (alpha), Cauchy Schwarz"}).\ Continuing from ([\[statement, main theorem proof, bounding min(s\',t\') \< rho (alpha), Cauchy Schwarz\]](#statement, main theorem proof, bounding min(s',t') < rho (alpha), Cauchy Schwarz){reference-type="ref" reference="statement, main theorem proof, bounding min(s',t') < rho (alpha), Cauchy Schwarz"}), let us consider the sums over $E$ and $F$ separately. 
We have that $$\begin{aligned} &\sum_{\substack{\boldsymbol{\beta} \in \mathcal{L}_{n-s}^{h-s} : \\ r (\boldsymbol{\beta}) \geq \min \{ s' , t' \} +1 - s }} \bigg\lvert \sum_{E \in \mathcal{M}_{s'}} \psi \Big( [E]_{s'}^T H_{s'+1 , s'+1} (\boldsymbol{\beta}) [E]_{s'} \Big) \bigg\rvert^4 \\ = &\sum_{r= \min \{ s' , t' \} +1-s}^{s'} \sum_{\boldsymbol{\beta} \in \prescript{}{\mathrm{S}}{\mathcal{L}}_{n-s}^{h-s} (r,r,0) } \bigg\lvert \sum_{E \in \mathcal{M}_{s'}} \psi \Big( [E]_{s'}^T H_{s'+1 , s'+1} (\boldsymbol{\beta}) [E]_{s'} \Big) \bigg\rvert^4 \\ &+\sum_{r= \min \{ s' , t' \} +1-s}^{s'+1} \sum_{\boldsymbol{\beta} \in \prescript{}{\mathrm{S}}{\mathcal{L}}_{n-s}^{h-s} (r,r-1,1) } \bigg\lvert \sum_{E \in \mathcal{M}_{s'}} \psi \Big( [E]_{s'}^T H_{s'+1 , s'+1} (\boldsymbol{\beta}) [E]_{s'} \Big) \bigg\rvert^4 \\ = &\sum_{r= \min \{ s' , t' \} +1-s}^{s'} \sum_{\boldsymbol{\beta} \in \prescript{}{\mathrm{S}}{\mathcal{L}}_{n-s}^{h-s} (r,r,0) } q^{2 (2s' -r)} +\sum_{r= \min \{ s' , t' \} +1-s}^{s'+1} \sum_{\boldsymbol{\beta} \in \prescript{}{\mathrm{S}}{\mathcal{L}}_{n-s}^{h-s} (r,r-1,1) } q^{2 (2s' +1 -r)} ,\end{aligned}$$ where both equalities use the second result in Lemma [Lemma 1](#lemma, quadratic form values over monics and nonmonics){reference-type="ref" reference="lemma, quadratic form values over monics and nonmonics"}. Note that the first sum over $r$ does not include $r=s'+1$, because $\prescript{}{\mathrm{S}}{\mathcal{L}}_{n-s}^{h-s} (s'+1,s'+1,0) = \prescript{}{\mathrm{S}}{\mathcal{L}}_{n-s}^{h-s} \big( \frac{n-s}{2}+1,\frac{n-s}{2}+1,0 \big)$ is empty, as mentioned after Definition [Definition 1](#definition, notation for sets in terms of n,h,rho,pi,r){reference-type="ref" reference="definition, notation for sets in terms of n,h,rho,pi,r"}. Now let us use Lemma [Lemma 1](#lemma, number of sequences of given rhopi form){reference-type="ref" reference="lemma, number of sequences of given rhopi form"} to count the number of $\boldsymbol{\beta}$. We have, $$\begin{aligned} \begin{split} \label{statement, main theorem proof, bounding min(s',t') < rho (alpha), after Cauchy Schwarz, E sum} &\sum_{\substack{\boldsymbol{\beta} \in \mathcal{L}_{n-s}^{h-s} : \\ r (\boldsymbol{\beta}) \geq \min \{ s' , t' \} +1 - s }} \bigg\lvert \sum_{E \in \mathcal{M}_{s'}} \psi \Big( [E]_{s'}^T H_{s'+1 , s'+1} (\boldsymbol{\beta}) [E]_{s'} \Big) \bigg\rvert^4 \\ = &\sum_{r= \min \{ s' , t' \} +1-s}^{s'} (q-1) q^{2r -h+s -1} q^{2 (2s' -r)} + \sum_{r= \min \{ s' , t' \} +1-s}^{s'+1} (q-1)^2 q^{2r-h+s-3} q^{2 (2s' +1 -r)} \\ \leq & q^{2n-h-s+1} \big( s+1 - \max \{ 0 , s'-t' \} \big) . \end{split}\end{aligned}$$ We apply a similar approach to the sum over $F$ in ([\[statement, main theorem proof, bounding min(s\',t\') \< rho (alpha), Cauchy Schwarz\]](#statement, main theorem proof, bounding min(s',t') < rho (alpha), Cauchy Schwarz){reference-type="ref" reference="statement, main theorem proof, bounding min(s',t') < rho (alpha), Cauchy Schwarz"}), although since $F$ is not restricted to monics, we use the first part of Lemma [Lemma 1](#lemma, quadratic form values over monics and nonmonics){reference-type="ref" reference="lemma, quadratic form values over monics and nonmonics"} instead of the second. 
We obtain $$\begin{aligned} \begin{split} \label{statement, main theorem proof, bounding min(s',t') < rho (alpha), after Cauchy Schwarz, F sum} &\sum_{\substack{\boldsymbol{\beta} \in \mathcal{L}_{n- t}^{h-t} : \\ r (\boldsymbol{\beta}) \geq \min \{ s' , t' \} +1 - t}} \bigg\lvert \sum_{F \in \mathcal{A}_{\leq t'}} \psi \Big( [F]_{t'}^T H_{t'+1 , t'+1} (\boldsymbol{\beta}) [F]_{t'} \Big) \bigg\rvert^4 \\ = &\sum_{r= \min \{ s' , t' \} +1-t}^{t' +1} \sum_{\boldsymbol{\beta} \in \mathcal{L}_{n- t}^{h-t} (r)} \bigg\lvert \sum_{F \in \mathcal{A}_{\leq t'}} \psi \Big( [F]_{t'}^T H_{t'+1 , t'+1} (\boldsymbol{\beta}) [F]_{t'} \Big) \bigg\rvert^4 \\ = &\sum_{r= \min \{ s' , t' \} +1-t}^{t' +1} \sum_{\boldsymbol{\beta} \in \mathcal{L}_{n- t}^{h-t} (r)} q^{2(2t' + 2 -r)} \\ = &\bigg( \sum_{r= \min \{ s' , t' \} +1-t}^{t'} (q^2 -1) q^{2r-h+t-2} q^{2(2t' + 2 -r)} \bigg) + (q-1) q^{n-h} q^{2t' + 2} \\ \leq &q^{2n-h-t+4} \big( t+1 - \max \{ 0 , t'-s' \} \big) . \end{split}\end{aligned}$$ Finally, applying ([\[statement, main theorem proof, bounding min(s\',t\') \< rho (alpha), after Cauchy Schwarz, E sum\]](#statement, main theorem proof, bounding min(s',t') < rho (alpha), after Cauchy Schwarz, E sum){reference-type="ref" reference="statement, main theorem proof, bounding min(s',t') < rho (alpha), after Cauchy Schwarz, E sum"}) and ([\[statement, main theorem proof, bounding min(s\',t\') \< rho (alpha), after Cauchy Schwarz, F sum\]](#statement, main theorem proof, bounding min(s',t') < rho (alpha), after Cauchy Schwarz, F sum){reference-type="ref" reference="statement, main theorem proof, bounding min(s',t') < rho (alpha), after Cauchy Schwarz, F sum"}) to ([\[statement, main theorem proof, bounding min(s\',t\') \< rho (alpha), Cauchy Schwarz\]](#statement, main theorem proof, bounding min(s',t') < rho (alpha), Cauchy Schwarz){reference-type="ref" reference="statement, main theorem proof, bounding min(s',t') < rho (alpha), Cauchy Schwarz"}) gives $$\begin{aligned} &\frac{4 q^{2h}}{q^{2n+1}} \hspace{-1.75em} \sum_{\substack{\boldsymbol{\alpha} \in \mathcal{L}_n^h : \\ r (\boldsymbol{\alpha}) \geq \min \{ s' , t' \} +1 }} \hspace{-0.5em} \bigg\lvert \sum_{\substack{E \in \mathcal{M}_{s'} \\ F \in \mathcal{A}_{\leq t'} }} \psi \Big( [E]_{s'}^T H_{s'+1 , s'+1} (\boldsymbol{\alpha}\odot [U]_s ) [E]_{s'} + [F]_{t'}^T H_{t'+1 , t'+1} (\boldsymbol{\alpha} \odot [V]_t ) [F]_{t'} \Big) \bigg\rvert^2 \\ \ll & q^{h+\frac{3}{2}} \max \{ \mathop{\mathrm{deg}}U , \mathop{\mathrm{deg}}V \} ,\end{aligned}$$ as required.\ [Step 2:]{.ul}\ We will now consider the contribution to ([\[statement, variance as add char after mean square removed\]](#statement, variance as add char after mean square removed){reference-type="ref" reference="statement, variance as add char after mean square removed"}) from the range ([\[statement, main theorem, main term contribution range\]](#statement, main theorem, main term contribution range){reference-type="ref" reference="statement, main theorem, main term contribution range"}), which will give us the main term. 
We wish to evaluate $$\begin{aligned} \frac{4 q^{2h}}{q^{2n+1}} \hspace{-1.75em} \sum_{\substack{\boldsymbol{\alpha} \in \mathcal{L}_n^h : \\ h+1 \leq \rho_{\mathrm{S}}(\boldsymbol{\alpha} ) \leq \min \{ s' , t' \} \\ r (\boldsymbol{\alpha}) \leq \min \{ s' , t' \} \\ \pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s) \leq 1 }} \bigg\lvert \sum_{\substack{E \in \mathcal{M}_{s'} \\ F \in \mathcal{A}_{\leq t'} }} \psi \Big( [E]_{s'}^T H_{s'+1 , s'+1} (\boldsymbol{\alpha}\odot [U]_s ) [E]_{s'} + [F]_{t'}^T H_{t'+1 , t'+1} (\boldsymbol{\alpha} \odot [V]_t ) [F]_{t'} \Big) \bigg\rvert^2 .\end{aligned}$$ As before, the condition $\pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s) \leq 1$ follows from the second result in Lemma [Lemma 1](#lemma, quadratic form values over monics and nonmonics){reference-type="ref" reference="lemma, quadratic form values over monics and nonmonics"}. Although, because $r (\boldsymbol{\alpha} ) \leq \min \{ s' , t' \}$ and $\mathop{\mathrm{deg}}U = s$, Lemma [Lemma 1](#lemma, U reduction, 1 dimensional kernel case){reference-type="ref" reference="lemma, U reduction, 1 dimensional kernel case"} tells us that$\pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s) = \pi_{\mathrm{S}}(\boldsymbol{\alpha})$. That is, we can impose the additional condition $\pi_{\mathrm{S}}(\boldsymbol{\alpha}) \leq 1$ without affecting the final result. Furthermore, note that the two conditions $h+1 \leq \rho_{\mathrm{S}}(\boldsymbol{\alpha} ) \leq \min \{ s' , t' \}$ and $\pi_{\mathrm{S}}(\boldsymbol{\alpha}) \leq 1$ imply the condition $r (\boldsymbol{\alpha}) \leq \min \{ s' , t' \} +1$. Thus, we could remove the third condition $r (\boldsymbol{\alpha}) \leq \min \{ s' , t' \}$, were it not for the additional case where $r (\boldsymbol{\alpha}) = \min \{ s' , t' \} +1$. However, the contribution of $r (\boldsymbol{\alpha}) = \min \{ s' , t' \} +1$ is lower order, and has already been included in the bounds we obtained in Step 1. Thus, for convenience, we can include it without affecting the final result. 
Thus, we wish to evaluate $$\begin{aligned} \frac{4 q^{2h}}{q^{2n+1}} \hspace{-1.75em} \sum_{\substack{\boldsymbol{\alpha} \in \mathcal{L}_n^h : \\ h+1 \leq \rho_{\mathrm{S}}(\boldsymbol{\alpha} ) \leq \min \{ s' , t' \} \\ \pi_{\mathrm{S}}(\boldsymbol{\alpha}) \leq 1 \\ \pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s) \leq 1 }} \bigg\lvert \sum_{\substack{E \in \mathcal{M}_{s'} \\ F \in \mathcal{A}_{\leq t'} }} \psi \Big( [E]_{s'}^T H_{s'+1 , s'+1} (\boldsymbol{\alpha}\odot [U]_s ) [E]_{s'} + [F]_{t'}^T H_{t'+1 , t'+1} (\boldsymbol{\alpha} \odot [V]_t ) [F]_{t'} \Big) \bigg\rvert^2 .\end{aligned}$$ Applying Lemma [Lemma 1](#lemma, quadratic form values over monics and nonmonics){reference-type="ref" reference="lemma, quadratic form values over monics and nonmonics"} gives $$\begin{aligned} &\sum_{\substack{\boldsymbol{\alpha} \in \mathcal{L}_n^h : \\ h+1 \leq \rho_{\mathrm{S}}(\boldsymbol{\alpha} ) \leq \min \{ s' , t' \} \\ \pi_{\mathrm{S}}(\boldsymbol{\alpha}) \leq 1 \\ \pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s) \leq 1 }} \bigg\lvert \sum_{\substack{E \in \mathcal{M}_{s'} \\ F \in \mathcal{A}_{\leq t'} }} \psi \Big( [E]_{s'}^T H_{s'+1 , s'+1} (\boldsymbol{\alpha}\odot [U]_s ) [E]_{s'} + [F]_{t'}^T H_{t'+1 , t'+1} (\boldsymbol{\alpha} \odot [V]_t ) [F]_{t'} \Big) \bigg\rvert^2 \\ = &\sum_{\substack{\boldsymbol{\alpha} \in \mathcal{L}_n^h : \\ h+1 \leq \rho_{\mathrm{S}}(\boldsymbol{\alpha} ) \leq \min \{ s' , t' \} \\ \pi_{\mathrm{S}}(\boldsymbol{\alpha}) \leq 1 \\ \pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s) \leq 1 }} q^{s'+t' + \pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s) } \Big\lvert \mathop{\mathrm{ker}}H_{s'+1 , s'+1} (\boldsymbol{\alpha}\odot [U]_s ) \Big\rvert \Big\lvert \mathop{\mathrm{ker}}H_{t'+1 , t'+1} (\boldsymbol{\alpha}\odot [V]_t ) \Big\rvert .\end{aligned}$$ Now, almost identically as in Case 2, we can consider $\boldsymbol{\alpha}'$ instead of $\boldsymbol{\alpha}$, and this simplifies the expression. 
Ultimately, we obtain $$\begin{aligned} &\sum_{\substack{\boldsymbol{\alpha} \in \mathcal{L}_n^h : \\ h+1 \leq \rho_{\mathrm{S}}(\boldsymbol{\alpha} ) \leq \min \{ s' , t' \} \\ \pi_{\mathrm{S}}(\boldsymbol{\alpha}) \leq 1 \\ \pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s) \leq 1 }} \bigg\lvert \sum_{\substack{E \in \mathcal{M}_{s'} \\ F \in \mathcal{A}_{\leq t'} }} \psi \Big( [E]_{s'}^T H_{s'+1 , s'+1} (\boldsymbol{\alpha}\odot [U]_s ) [E]_{s'} + [F]_{t'}^T H_{t'+1 , t'+1} (\boldsymbol{\alpha} \odot [V]_t ) [F]_{t'} \Big) \bigg\rvert^2 \\ = &q^{s'+t'} \sum_{\substack{\boldsymbol{\alpha} \in \mathcal{L}_{n-1}^h : \\ h+1 \leq \rho_{\mathrm{S}}(\boldsymbol{\alpha} ) \leq \min \{ s' , t' \} \\ \pi_{\mathrm{S}}(\boldsymbol{\alpha}) = 0 \\ \pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s) = 0 }} \Big\lvert \mathop{\mathrm{ker}}H_{s' , s'+1} (\boldsymbol{\alpha} \odot [U]_s ) \Big\rvert \Big\lvert \mathop{\mathrm{ker}}H_{t'+1 , t'+1} (\boldsymbol{\alpha} \odot [V]_{t-1} ) \Big\rvert \\ = &q^{s'+t'} \sum_{\substack{\boldsymbol{\alpha} \in \mathcal{L}_{n-1}^h : \\ h+1 \leq \rho_{\mathrm{S}}(\boldsymbol{\alpha} ) \leq \min \{ s' , t' \} \\ \pi_{\mathrm{S}}(\boldsymbol{\alpha}) = 0 }} \Big\lvert \mathop{\mathrm{ker}}H_{s' , s'+1} (\boldsymbol{\alpha} \odot [U]_s ) \Big\rvert \Big\lvert \mathop{\mathrm{ker}}H_{t'+1 , t'+1} (\boldsymbol{\alpha} \odot [V]_{t-1} ) \Big\rvert \\ = &q^{s'+t'} \sum_{r=h+1}^{ \min \{ s' , t' \} } \sum_{\boldsymbol{\alpha} \in \prescript{}{\mathrm{S}}{\mathcal{L}}_{n-1}^h (r,r,0) } \Big\lvert \mathop{\mathrm{ker}}H_{s' , s'+1} (\boldsymbol{\alpha} \odot [U]_s ) \Big\rvert \Big\lvert \mathop{\mathrm{ker}}H_{t'+1 , t'+1} (\boldsymbol{\alpha} \odot [V]_{t-1} ) \Big\rvert ,\end{aligned}$$ where, for the second equality, we removed the condition $\pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s) = 0$ because, as before, it follows from the fact that $\pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s) = \pi_{\mathrm{S}}(\boldsymbol{\alpha})$ (which in turn follows from Lemma [Lemma 1](#lemma, U reduction, 1 dimensional kernel case){reference-type="ref" reference="lemma, U reduction, 1 dimensional kernel case"}). 
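Every factor in the last line is the size of the kernel of a Hankel matrix over $\mathbb{F}_q$, and any such size is a power of $q$: a matrix with $l$ columns and rank $r$ has a kernel of size $q^{l-r}$. As a purely computational aside (not part of the argument), the sketch below builds a Hankel matrix from a sequence $\boldsymbol{\alpha}$ (entries constant along skew-diagonals, cf. the earlier footnote on Hankel matrices), computes its rank and kernel size by Gaussian elimination, and checks that scaling the sequence by $c \in \mathbb{F}_q^*$ leaves the rank unchanged, a weaker cousin of the $\mathbb{F}_q^*$-invariance that produced the factor $q-1$ earlier. The indexing convention $H_{k,l}(\boldsymbol{\alpha})_{i,j} = \alpha_{i+j}$ and the restriction to prime $q$ (so that $\mathbb{F}_q$ arithmetic is ordinary modular arithmetic) are assumptions of the sketch, not of the paper.

```python
import numpy as np

def hankel(alpha, k, l):
    """k x l Hankel matrix built from the sequence alpha: entry (i, j) is alpha[i + j]."""
    return np.array([[alpha[i + j] for j in range(l)] for i in range(k)], dtype=int)

def rank_mod_q(M, q):
    """Rank over the prime field F_q, by Gaussian elimination on a copy of M."""
    A = M.copy() % q
    rank = 0
    for col in range(A.shape[1]):
        pivot = next((r for r in range(rank, A.shape[0]) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]
        A[rank] = (A[rank] * pow(int(A[rank, col]), -1, q)) % q
        for r in range(A.shape[0]):
            if r != rank and A[r, col]:
                A[r] = (A[r] - A[r, col] * A[rank]) % q
        rank += 1
    return rank

q = 3
alpha = [1, 2, 0, 1, 1, 2, 0, 2]            # a sequence over F_3 (hypothetical data)
k, l = 4, 5                                 # requires len(alpha) = k + l - 1
H = hankel(alpha, k, l)
r = rank_mod_q(H, q)
print("rank:", r, " |ker| = q^(l - rank) =", q ** (l - r))

for c in range(1, q):                       # scaling alpha by c in F_q^* does not change the rank
    assert rank_mod_q(hankel([(c * a) % q for a in alpha], k, l), q) == r
```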
Now, when $\boldsymbol{\alpha} \in \prescript{}{\mathrm{S}}{\mathcal{L}}_{n-1}^h (r,r,0)$ with $r \leq \min \{ s' , t' \}$, Lemma [Lemma 1](#lemma, U reduction, 1 dimensional kernel case){reference-type="ref" reference="lemma, U reduction, 1 dimensional kernel case"} tells us that $$\begin{aligned} r (\boldsymbol{\alpha} \odot [U]_s) = &r - \mathop{\mathrm{deg}}\big( A_1 (\boldsymbol{\alpha}) , U \big) , \\ r (\boldsymbol{\alpha} \odot [V]_{t-1}) = &r - \mathop{\mathrm{deg}}\big( A_1 (\boldsymbol{\alpha}) , V \big) ;\end{aligned}$$ and so $$\begin{aligned} \Big\lvert \mathop{\mathrm{ker}}H_{s' , s'+1} (\boldsymbol{\alpha} \odot [U]_s ) \Big\rvert = &q^{s'+1 - r} \lvert \big( A_1 (\boldsymbol{\alpha}) , U \big) \rvert , \\ \Big\lvert \mathop{\mathrm{ker}}H_{t'+1 , t'+1} (\boldsymbol{\alpha} \odot [V]_{t-1} ) \Big\rvert = &q^{t'+1 - r} \lvert \big( A_1 (\boldsymbol{\alpha}) , V \big) \rvert .\end{aligned}$$ Hence, $$\begin{aligned} \begin{split} \label{statement, main theorem proof, W sum} &\sum_{\substack{\boldsymbol{\alpha} \in \mathcal{L}_n^h : \\ h+1 \leq \rho_{\mathrm{S}}(\boldsymbol{\alpha} ) \leq \min \{ s' , t' \} \\ \pi_{\mathrm{S}}(\boldsymbol{\alpha}) \leq 1 \\ \pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s) \leq 1 }} \bigg\lvert \sum_{\substack{E \in \mathcal{M}_{s'} \\ F \in \mathcal{A}_{\leq t'} }} \psi \Big( [E]_{s'}^T H_{s'+1 , s'+1} (\boldsymbol{\alpha}\odot [U]_s ) [E]_{s'} + [F]_{t'}^T H_{t'+1 , t'+1} (\boldsymbol{\alpha} \odot [V]_t ) [F]_{t'} \Big) \bigg\rvert^2 \\ = &q^{2(s'+t' +1)} \sum_{r=h+1}^{ \min \{ s' , t' \} } q^{-2r} \sum_{\boldsymbol{\alpha} \in \prescript{}{\mathrm{S}}{\mathcal{L}}_{n-1}^h (r,r,0) } \lvert \big( A_1 (\boldsymbol{\alpha}) , U \big) \rvert \lvert \big( A_1 (\boldsymbol{\alpha}) , V \big) \rvert \\ = &q^{2(s'+t' +1)} \sum_{\substack{U_1 \mid U \\ V_1 \mid V }} \lvert U_1 V_1 \rvert \sum_{r=h+1}^{ \min \{ s' , t' \} } q^{-2r} \sum_{\substack{\boldsymbol{\alpha} \in \prescript{}{\mathrm{S}}{\mathcal{L}}_{n-1}^h (r,r,0) \\ \big( A_1 (\boldsymbol{\alpha}) , U \big) = U_1 \\ \big( A_1 (\boldsymbol{\alpha}) , V \big) = V_1 }} 1 \\ = &q^{2(s'+t' +1)} \sum_{W_1 \mid W} \lvert W_1 \rvert \sum_{r=h+1}^{ \min \{ s' , t' \} } q^{-2r} \sum_{\substack{\boldsymbol{\alpha} \in \prescript{}{\mathrm{S}}{\mathcal{L}}_{n-1}^h (r,r,0) \\ \big( A_1 (\boldsymbol{\alpha}) , W \big) = W_1 }} 1 \\ = &q^{2(s'+t' +1)} \sum_{W_1 \mid W} \lvert W_1 \rvert \sum_{r=h+1}^{ \min \{ s' , t' \} } q^{-2r} \sum_{\substack{A \in \mathcal{M}_r \\ B \in \mathcal{A}_{<r-h} \\ (A,B) =1 \\ (A,W) = W_1}} 1. \end{split}\end{aligned}$$ For the third equality we are writing $W=UV$ and $W_1 = U_1 V_1$, and we have made use of the fact that $U,V$ are coprime. For the final equality we used lemma [Lemma 1](#lemma, bijection from quasiregular Hankel matrices to M_r times A_<r){reference-type="ref" reference="lemma, bijection from quasiregular Hankel matrices to M_r times A_<r"}. 
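The evaluation of the inner sum over $A$ in the next step rests on the polynomial analogue of Euler's totient: $\phi(N)$ counts the residues modulo $N$ that are coprime to $N$, and for $m \geq \mathop{\mathrm{deg}}N$ each invertible residue class contains exactly $q^{m - \mathop{\mathrm{deg}}N}$ monic polynomials of degree $m$, so that the number of $A \in \mathcal{M}_m$ with $(A,N)=1$ equals $q^m \phi(N) / \lvert N \rvert$. A minimal brute-force check of this identity over $\mathbb{F}_2 [T]$ (the modulus is a hypothetical example; the sketch is illustrative only):

```python
from sympy import symbols, Poly, GF
from itertools import product

x = symbols('x')
q = 2
P = lambda e: Poly(e, x, domain=GF(q))

def polys_below(d):
    """All polynomials over F_q of degree < d (the zero polynomial included)."""
    for coeffs in product(range(q), repeat=d):
        yield sum((c * P(x**i) for i, c in enumerate(coeffs)), P(0))

def phi(N):
    """Function-field Euler totient: number of residues mod N that are coprime to N."""
    return sum(1 for A in polys_below(N.degree()) if A.gcd(N).is_one)

N = P(x**3 + x)      # hypothetical modulus, x*(x+1)^2 over F_2, so |N| = 8 and phi(N) = 2
m = 5                # any m >= deg N works
monic_coprime = sum(1 for L in polys_below(m) if (P(x**m) + L).gcd(N).is_one)
print(monic_coprime, q**m * phi(N) // q**N.degree())   # both equal 8
```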
Now, we have that $$\begin{aligned} &\sum_{W_1 \mid W} \lvert W_1 \rvert \sum_{r=h+1}^{ \min \{ s' , t' \} } q^{-2r} \sum_{\substack{A \in \mathcal{M}_r \\ B \in \mathcal{A}_{<r-h} \\ (A,B) =1 \\ (A,W) = W_1}} 1 \\ = &\sum_{\substack{W_2 W_3 = W \\ (W_2 , W_3) =1 }} \sum_{W_1 W_1' = W_2} \lvert W_1 \rvert \sum_{r=h+1}^{ \min \{ s' , t' \} } q^{-2r} \sum_{\substack{A \in \mathcal{M}_r \\ B \in \mathcal{A}_{<r-h} \\ (A,B) =1 \\ (A,W_2) = W_1 \\ \mathop{\mathrm{rad}}(B,W) = \mathop{\mathrm{rad}}W_3}} 1 \\ = &\sum_{\substack{W_2 W_3 = W \\ (W_2 , W_3) =1 }} \sum_{W_1 W_1' = W_2} \lvert W_1 \rvert \sum_{r=h+1}^{ \min \{ s' , t' \} } q^{-2r} \sum_{\substack{B \in \mathcal{A}_{<r-h} \backslash \{ 0 \} \\ \mathop{\mathrm{rad}}(B,W) = \mathop{\mathrm{rad}}W_3}} \sum_{\substack{A \in \mathcal{M}_r \\ (A,B) =1 \\ (A,W_2) = W_1 }} 1 . \end{aligned}$$ For the first equality, we conditioned on the value of $\mathop{\mathrm{rad}}(B,W)$. Note that the three conditions $(A,B)=1$, $\mathop{\mathrm{rad}}(B,W) = \mathop{\mathrm{rad}}W_3$, and $W_2 W_3 = W$ with $(W_2 , W_3) =1$, imply that $(A,W)$ must divide $W_2$, and this explains the second sum on the second line and the condition $(A,W_2) = W_1$. Now, we have that $$\begin{aligned} \sum_{\substack{A \in \mathcal{M}_r \\ (A,B) =1 \\ (A,W_2) = W_1 }} 1 = \sum_{\substack{A \in \mathcal{M}_{r-\mathop{\mathrm{deg}}W_1} \\ (A,B) =1 \\ (A,W_1') = 1 }} 1 = q^{r - \mathop{\mathrm{deg}}W_1} \frac{\phi (B)}{\lvert B \rvert} \frac{\phi (W_1')}{\lvert W_1' \rvert} .\end{aligned}$$ For this to hold, we are using the fact that $B$ and $W_1'$ are coprime, and we also require that $r - \mathop{\mathrm{deg}}W_1 \geq \mathop{\mathrm{deg}}B + \mathop{\mathrm{deg}}W_1'$, which follows from $$\begin{aligned} r > (r-h-1) + h \geq \mathop{\mathrm{deg}}B + h \geq \mathop{\mathrm{deg}}B + \mathop{\mathrm{deg}}UV \geq \mathop{\mathrm{deg}}B + \mathop{\mathrm{deg}}W_1 W_1' .\end{aligned}$$ Thus, $$\begin{aligned} \begin{split} \label{statement, main theorem proof, W sum as phi B/B} &\sum_{W_1 \mid W} \lvert W_1 \rvert \sum_{r=h+1}^{ \min \{ s' , t' \} } q^{-2r} \sum_{\substack{A \in \mathcal{M}_r \\ B \in \mathcal{A}_{<r-h} \\ (A,B) =1 \\ (A,W) = W_1}} 1 \\ &= \sum_{\substack{W_2 W_3 = W \\ (W_2 , W_3) =1 }} \sum_{W_1 W_1' = W_2} \frac{\phi (W_1')}{\lvert W_1' \rvert} \sum_{r=h+1}^{ \min \{ s' , t' \} } q^{-r} \sum_{\substack{B \in \mathcal{A}_{<r-h} \backslash \{ 0 \} \\ \mathop{\mathrm{rad}}(B,W) = \mathop{\mathrm{rad}}W_3}} \frac{\phi (B)}{\lvert B \rvert} . 
\end{split}\end{aligned}$$ We note that $$\begin{aligned} &\sum_{r=h+1}^{ \min \{ s' , t' \} } q^{-r} \sum_{\substack{B \in \mathcal{A}_{<r-h} \backslash \{ 0 \} \\ \mathop{\mathrm{rad}}(B,W) = \mathop{\mathrm{rad}}W_3}} \frac{\phi (B)}{\lvert B \rvert} \\ = &q^{-h-1} \sum_{r=0}^{ \min \{ s' , t' \} -h-1} q^{-r} \sum_{\substack{B \in \mathcal{A}_{\leq r} \backslash \{ 0 \} \\ \mathop{\mathrm{rad}}(B,W) = \mathop{\mathrm{rad}}W_3}} \frac{\phi (B)}{\lvert B \rvert} \\ = &q^{-h-1} \sum_{\substack{B \in \mathcal{A}_{\leq \min \{ s' , t' \} -h-1} \backslash \{ 0 \} \\ \mathop{\mathrm{rad}}(B,W) = \mathop{\mathrm{rad}}W_3}} \bigg( \sum_{r=\mathop{\mathrm{deg}}B}^{ \min \{ s' , t' \} -h-1} q^{-r} \bigg) \frac{\phi (B)}{\lvert B \rvert} \\ = &\frac{q^{-h-1}}{1-q^{-1}} \sum_{\substack{B \in \mathcal{A}_{\leq \min \{ s' , t' \} -h-1} \backslash \{ 0 \} \\ \mathop{\mathrm{rad}}(B,W) = \mathop{\mathrm{rad}}W_3}} \frac{\phi (B)}{\lvert B \rvert^2} - q^{h+1-\min \{ s' , t' \} } \frac{\phi (B)}{\lvert B \rvert} \\ = &(1-q^{-1}) q^{-h} \Bigg[ \Big( \frac{n}{2} -h + O (\mathop{\mathrm{deg}}W) \Big) \prod_{P \mid W} (1 + \lvert P \rvert^{-1})^{-1} \prod_{P \mid W_3} \lvert P \rvert^{-1} \hspace{1em} + O (1) + O_\epsilon \Big( q^{-\frac{k}{2}} \lvert W \rvert^{\epsilon} \Big) \Bigg] ,\end{aligned}$$ where we have used Lemma [Lemma 1](#lemma, phi (B)/B^2 sum, perron application){reference-type="ref" reference="lemma, phi (B)/B^2 sum, perron application"} for the sum involving $\frac{\phi (B)}{\lvert B \rvert^2}$, and we have applied a trivial bound for the sum involving $\frac{\phi (B)}{\lvert B \rvert}$. We apply this to ([\[statement, main theorem proof, W sum as phi B/B\]](#statement, main theorem proof, W sum as phi B/B){reference-type="ref" reference="statement, main theorem proof, W sum as phi B/B"}) and then ([\[statement, main theorem proof, W sum\]](#statement, main theorem proof, W sum){reference-type="ref" reference="statement, main theorem proof, W sum"}), and after some rearrangement and simplification of the error terms, we obtain $$\begin{aligned} &\frac{4 q^{2h}}{q^{2n+1}} \hspace{-2em} \sum_{\substack{\boldsymbol{\alpha} \in \mathcal{L}_n^h : \\ h+1 \leq \rho_{\mathrm{S}}(\boldsymbol{\alpha} ) \leq \min \{ s' , t' \} \\ \pi_{\mathrm{S}}(\boldsymbol{\alpha}) \leq 1 \\ \pi_{\mathrm{S}}(\boldsymbol{\alpha} \odot [U]_s) \leq 1 }} \bigg\lvert \sum_{\substack{E \in \mathcal{M}_{s'} \\ F \in \mathcal{A}_{\leq t'} }} \psi \Big( [E]_{s'}^T H_{s'+1 , s'+1} (\boldsymbol{\alpha}\odot [U]_s ) [E]_{s'} + [F]_{t'}^T H_{t'+1 , t'+1} (\boldsymbol{\alpha} \odot [V]_t ) [F]_{t'} \Big) \bigg\rvert^2 \\ = &4 (1-q^{-1}) q^h M (U,V) \Big( \frac{n}{2} -h \Big) + O_{\epsilon} (q^h \lvert UV \rvert^{-1+\epsilon}) ;\end{aligned}$$ where, for a non-zero polynomial $A$, we define $e_P (A)$ to be the maximal integer such that $P^{e_P (A)} \mid A$; and $$\begin{aligned} M (U,V) := \lvert UV \rvert^{-1} \prod_{P \mid UV} \Bigg( 1 + \bigg( \frac{1- \lvert P \rvert^{-1}}{1+ \lvert P \rvert^{-1}} \bigg) e_P (UV) \Bigg) .\end{aligned}$$ ◻ **Lemma 1**. *Let $W \in \mathcal{M}$, and let $W_2 , W_3 \in \mathcal{M}$ be such that $W = W_2 W_3$ and $(W_2 , W_3)=1$. Let $k$ be a positive integer.
Then, $$\begin{aligned} &\sum_{\substack{B \in \mathcal{A}_{\leq k} \backslash \{ 0 \} \\ \mathop{\mathrm{rad}}(B,W) = \mathop{\mathrm{rad}}W_3}} \frac{\phi (B)}{\lvert B \rvert^2} \\ &= \frac{(q-1)^2}{q} \Big( k + O ( \log \mathop{\mathrm{deg}}W) \Big) \prod_{P \mid W} (1 + \lvert P \rvert^{-1})^{-1} \prod_{P \mid W_3} \lvert P \rvert^{-1} \hspace{1em} + O_\epsilon \Big( q^{-\frac{k}{2} +1} \lvert W \rvert^{\epsilon} \Big) .\end{aligned}$$* *Proof.* Let $c>0$. Using Perron's formula, we have $$\begin{aligned} \sum_{\substack{B \in \mathcal{A}_{\leq k} \backslash \{ 0 \} \\ \mathop{\mathrm{rad}}(B,W) = \mathop{\mathrm{rad}}W_3}} \frac{\phi (B)}{\lvert B \rvert^2} = &(q-1) \sum_{\substack{B \in \mathcal{M}_{\leq k} \\ \mathop{\mathrm{rad}}W_3 \mid B \\ (B,W_2)=1}} \frac{\phi (B)}{\lvert B \rvert^2} = \frac{q-1}{2 \pi i} \int_{\operatorname{Re}(s) =c} \frac{q^{(k+\frac{1}{2})s}}{s} \sum_{\substack{B \in \mathcal{M} \\ \mathop{\mathrm{rad}}W_3 \mid B \\ (B,W_2)=1}} \frac{\phi (B)}{\lvert B \rvert^{2+s}} \mathrm{d} s \\ = &\frac{q-1}{2 \pi i} \int_{\operatorname{Re}(s) =c} \frac{F(s)}{s} \mathrm{d} s \end{aligned}$$ where $$\begin{aligned} F(s) := q^{(k+\frac{1}{2})s} \prod_{P \mid W_3} \bigg( \frac{1 - \lvert P \rvert^{-1}}{\lvert P \rvert^{1+s} -\lvert P \rvert^{-1}} \bigg) \prod_{P \mid W_2} \bigg( \frac{\lvert P \rvert^{1+s} -1}{\lvert P \rvert^{1+s} - \lvert P \rvert^{-1}} \bigg) \frac{1- q^{-1-s}}{1- q^{-s}} .\end{aligned}$$ We now shift the contour to $\operatorname{Re}(s) = -\frac{1}{2}$. Formally, this will be done as the limit as $m \longrightarrow \infty$ of a rectangular contour, orientated anticlockwise, with vertices at $c \pm \frac{(4m+1) \pi i}{\log q}$ and $-\frac{1}{2} \pm \frac{(4m+1) \pi i}{\log q}$. The contour itself avoids any singularities. Due to the vertical periodicity of $F(s)$, we have $$\begin{aligned} \int_{c + \frac{(4m+1) \pi i}{\log q}}^{-\frac{1}{2} + \frac{(4m+1) \pi i}{\log q}} \frac{F(s)}{s} \mathrm{d} s = \int_{c + \frac{\pi i}{\log q}}^{-\frac{1}{2} + \frac{\pi i}{\log q}} \frac{F(s)}{s + \frac{4m \pi i}{\log q}} \mathrm{d} s \ll \frac{\log q}{m} \int_{c + \frac{\pi i}{\log q}}^{-\frac{1}{2} + \frac{\pi i}{\log q}} \lvert F(s) \rvert \mathrm{d} s \longrightarrow 0 ,\end{aligned}$$ as $m \longrightarrow \infty$. 
For the line $\operatorname{Re}(s) = - \frac{1}{2}$, again using the vertical periodicity of $F(s)$, we have $$\begin{aligned} &\int_{\operatorname{Re}(s) = - \frac{1}{2}} \frac{F(s)}{s} \mathrm{d} s \\ = &\sum_{m=0}^{\infty} \bigg( \int_{t=0}^{1} \frac{F \big(- \frac{1}{2} + \frac{4 \pi i}{\log q} (m+t ) \big)}{- \frac{1}{2} + \frac{4 \pi i}{\log q} (m+t )} \mathrm{d} t + \int_{t=0}^{1} \frac{F \big(- \frac{1}{2} + \frac{4 \pi i}{\log q} (-m-1+t ) \big)}{- \frac{1}{2} + \frac{4 \pi i}{\log q} (-m-1+t )} \mathrm{d} t \bigg) \\ = &\sum_{m=0}^{\infty} \bigg( \int_{t=0}^{1} \frac{F \big(- \frac{1}{2} + \frac{4 \pi i}{\log q} t \big)}{- \frac{1}{2} + \frac{4 \pi i}{\log q} (m+t )} \mathrm{d} t + \int_{t=0}^{1} \frac{F \big(- \frac{1}{2} + \frac{4 \pi i}{\log q} t \big)}{- \frac{1}{2} + \frac{4 \pi i}{\log q} (-m-1+t )} \mathrm{d} t \bigg) \\ = &\sum_{m=0}^{\infty} \int_{t=0}^{1} \frac{-1 + \frac{4 \pi i}{\log q} (2t-1)}{\Big(- \frac{1}{2} + \frac{4 \pi i}{\log q} (m+t ) \Big) \Big( - \frac{1}{2} + \frac{4 \pi i}{\log q} (-m-1+t ) \Big)} F \Big(- \frac{1}{2} + \frac{4 \pi i}{\log q} t \Big) \mathrm{d} t \\ = &\sum_{m=0}^{\infty} \frac{(\log q)^2}{(m+1)^2} \int_{t=0}^{1} \Big\lvert F \Big(- \frac{1}{2} + \frac{4 \pi i}{\log q} t \Big) \Big\rvert \mathrm{d} t \\ = & O_\epsilon \Big( q^{-\frac{k}{2}} \lvert W \rvert^{\epsilon} \Big) ,\end{aligned}$$ for $\epsilon > 0$. The last line follows from a bound on $\Big\lvert F \Big(- \frac{1}{2} + \frac{4 \pi i}{\log q} t \Big) \Big\rvert$, which can be obtained via similar techniques as those found in Section A.2 of [@Yiasemides2020_PhDThesis_PostMinorCorr]. Now let us consider the singularities. $\frac{F(s)}{s}$ has a double pole at $s=0$ and single poles at $s= \pm \frac{2m \pi i}{\log q}$ for integers $m>0$. The single poles cancel due to the fact that $$\begin{aligned} \lim_{s \longrightarrow \frac{2m \pi i}{\log q}} \frac{F(s) \big( s - \frac{2m \pi i}{\log q} \big)}{s} = - \lim_{s \longrightarrow - \frac{2m \pi i}{\log q}} \frac{F(s) \big( s + \frac{2m \pi i}{\log q} \big)}{s} .\end{aligned}$$ For the double pole we must evaluate $$\begin{aligned} (q-1) \lim_{s \longrightarrow 0} \frac{\mathrm{d}}{\mathrm{d}s} s F(s) .\end{aligned}$$ The product rule must be applied due to the various factors, and the main term (as $k \longrightarrow \infty$) will come from when we differentiate the factor $q^{(k + \frac{1}{2})s}$, while we get lower order terms when we differentiate the other factors (one may wish to use Lemma 4.2.2 of [@Yiasemides2020_PhDThesis_PostMinorCorr] for this). We obtain $$\begin{aligned} (q-1) \lim_{s \longrightarrow 0} \frac{\mathrm{d}}{\mathrm{d}s} s F(s) = &\frac{(q-1)^2}{q} \Big( k + O ( \log \mathop{\mathrm{deg}}W) \Big) \prod_{P \mid W_3} \bigg( \frac{1 - \lvert P \rvert^{-1}}{\lvert P \rvert -\lvert P \rvert^{-1}} \bigg) \prod_{P \mid W_2} \bigg( \frac{\lvert P \rvert -1}{\lvert P \rvert - \lvert P \rvert^{-1}} \bigg) \\ = &\frac{(q-1)^2}{q} \Big( k + O ( \log \mathop{\mathrm{deg}}W) \Big) \prod_{P \mid W} (1 + \lvert P \rvert^{-1})^{-1} \prod_{P \mid W_3} \lvert P \rvert^{-1} .\end{aligned}$$ ◻ **Acknowledgements:** This research was conducted during a postdoctoral fellowship at the University of Nottingham funded by the EPSRC research grant "Modular Symbols and Applications" (grant number EP/S032460/1). The author is grateful to the research council for this funding, and to the grant holder, Nikolaos Diamantis, for his support. 
The author is also grateful to Ofir Gorodetsky and Joshua Pimm for helpful conversations and references to related work. [^1]: "Strongly Diophantine" is defined by Wigman in [@Wigman2006_DistrLattPointElliptAnnuli] [^2]: In [@KeatingRodgersRudnik2018_SumDivFuncFqtMatrInt] they use a slightly different notation, namely $I(A;h) := \{ B \in \mathcal{A} : \mathop{\mathrm{deg}}(B-A) \leq h \}$. We prefer to use our definition of $I(A;<h)$ because it is more natural in some regards, in that $\lvert I(A;<h) \rvert = q^h$ as opposed to $\lvert I(A;h) \rvert = q^{h+1}$. Due to the explicit difference in notation, this should not cause any confusion. [^3]: A Hankel matrix is a matrix where all the entries on a given skew-diagonal are identical. [^4]: See Section 1.4 of [@Yiasemides2020_PhDThesis_PostMinorCorr] for details on Dirichlet characters in function fields, including orthogonality relations. [^5]: To prove this, without obtaining further results on Dirichlet characters, one can prove the contrapositive statement, that if $\chi_1 , \chi_2$ are not both trivial, then $\chi_1 \chi_2$ is not trivial. Indeed, suppose $\chi_1$ is non-trivial and let $D$ be such that $\chi_1 (D) \neq 0,1$. Since $(U_3 , V_3)=1$, there exists a polynomial $E$ such that $E \equiv D (\mathop{\mathrm{mod}}U_3)$ and $E \equiv 1 (\mathop{\mathrm{mod}}V_3)$. Thus, $\chi_1 \chi_2 (E) = \chi_2 (E) \neq 0,1$, and so $\chi_1 \chi_2$ is not trivial.
arxiv_math
{ "id": "2309.01290", "title": "Lattice Point Variance in Thin Elliptic Annuli over $\\mathbb{F}_q [T]$", "authors": "Michael Yiasemides", "categories": "math.NT", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | The electric vehicle (EV) industry is rapidly evolving owing to advancements in smart grid technologies and charging control strategies. While EVs are promising in decarbonizing the transportation system and providing grid services, their widespread adoption has led to notable and erratic load injections that can disrupt the normal operation of power grid. Additionally, the unprotected collection and utilization of personal information during the EV charging process cause prevalent privacy issues. To address the scalability and data confidentiality in large-scale EV charging control, we propose a novel decentralized privacy-preserving EV charging control algorithm via state obfuscation that 1) is scalable *w.r.t.* the number of EVs and ensures optimal EV charging solutions; 2) achieves privacy preservation in the presence of honest-but-curious adversaries and eavesdroppers; and 3) is applicable to eliminate privacy concerns for general multi-agent optimization problems in large-scale cyber-physical systems. The EV charging control is structured as a constrained optimization problem with coupled objectives and constraints, then solved in a decentralized fashion. Privacy analyses and simulations demonstrate the efficiency and efficacy of the proposed approach. author: - "Xiang Huo$^{\\dagger}$ and Mingxi Liu$^{\\dagger}$ [^1] [^2]" bibliography: - bibliography.bib title: "**On Privacy Preservation of Electric Vehicle Charging Control via State Obfuscation**" --- # Introduction The ongoing advancements in electric vehicle (EV) technologies have accelerated the development of a sustainable power grid, owing to the EVs' green credentials and flexible charging options. Despite the multifarious benefits, the occurrence of plug-and-play EV charging events, especially those involving a significant number of EVs, can cause several negative impacts on the power grid, such as load profile fluctuations, voltage deviations, and increased power loss [@yilmaz2012review]. Therefore, advancing scalable EV charging coordination and control strategies is of paramount importance to alleviate the strain on the power grid and ultimately provide synergistic grid-edge services, such as valley-filling, peak-shaving, and frequency regulation. In enabling such synergy between EVs and the power grid, the EV charging control problem can be framed as a constrained optimization problem. Let $\bm{x}_{\hat{\imath}} = {[x_{\hat{\imath}}(1), \ldots, x_{\hat{\imath}}(T)]}^{\mathsf{T}}$ denote the charging profile of EV $\hat{\imath}$ during $T$ consecutive time slots, $\mathcal{X}_{\hat{\imath}}$ denote the local feasible set that contains the charging requirements of EV $\hat{\imath}$, and function $g(\cdot)$ denote the networked constraint function. 
Then, the EV charging control problem can be formulated as a constrained optimization problem: $$\label{sss1} \begin{aligned} &\text{min} \quad J(\left\{\bm{x}_{\hat{\imath}}\right\}_{\hat{\imath}=1}^{\hat{n}}) = F \left(\bm{p}_{b}, \bm{x}_{1}, \ldots, \bm{x}_{\hat{n}}\right) \\ & \,\, \text{s.t.} \, \quad \bm{x}_{\hat{\imath}} \in \mathcal{X}_{\hat{\imath}}, ~ \forall \hat{\imath}=1,2, \ldots, \hat{n} \\ &\,\,\,\, \qquad g(\bm{x}) \leq 0 \end{aligned}$$where the cost function $F(\cdot) : \mathbb{R}^T \mapsto \mathbb{R}$ is assumed convex and differentiable, $\bm{p}_{b} \in \mathbb{R}^T$ captures the baseline load of the network, $\bm{x} = {[\bm{x}_{1}^{\mathsf{T}}, \ldots, \bm{x}_{\hat{n}}^{\mathsf{T}}]}^{\mathsf{T}}$, and $\hat{n}$ denotes the total number of EVs. The use of scalable optimization methods, such as distributed and decentralized approaches, has gained popularity in solving [\[sss1\]](#sss1){reference-type="eqref" reference="sss1"}. In [@karfopoulos2012multi], a distributed multi-agent EV charging control method was developed based on the Nash certainty equivalence principle to account for network impacts. Gan *et al.* in [@gan2012optimal] proposed a decentralized EV charging control algorithm with the objective of addressing the valley-filling problem using EVs' charging loads. To scale with the EV fleet size and the length of control periods, decentralized EV charging protocols were developed in [@zhang2016scalable] for network-constrained EV charging problems. In [@liu2017decentralized], a decentralized EV charging control scheme was developed to achieve valley filling while accommodating individual charging needs and distribution network constraints. To further improve the scalability, a distributed optimization framework was proposed in [@huo2022two] to offer two-facet scalability over both the agent population size and network dimension. Besides scalability, the increased risk of privacy exposure is another major obstacle in deploying large-scale EV charging control strategies. To address the pressing need for privacy preservation in both EV charging control and generic multi-agent systems, one potential solution is using differential privacy (DP). Fiore and Russo in [@fiore2019resilient] designed a DP-based consensus algorithm for multi-agent systems where a subset of agents are adversaries. In [@nozari2016differentially], a distributed functional perturbation framework was developed based on DP to protect each agent's private objective function. In [@wang2022differentially], DP-based distributed algorithms were designed to preserve privacy in finding the Nash equilibrium of stochastic aggregative games. Although DP-based methods are commonly adopted for privacy preservation, the inevitable trade-off between accuracy and privacy remains a major challenge in practical implementation. Another frequently utilized method for preserving privacy involves cryptographic techniques, such as the Paillier cryptosystem and Shamir's secret sharing (SSS). In [@fang2021secure], a Paillier-based privacy-preserving algorithm was proposed for securing the average consensus of networked systems with high-order dynamics. Zhang *et al.* in [@zhang2021privacy] developed a privacy-preserving power exchange service system that uses data encryption to protect EV users' privacy. In [@huo2021encrypted], a decentralized privacy-preserving multi-agent cooperative optimization paradigm was designed based on cryptography for large-scale industrial cyber-physical systems.
In [@zhang2018enabling], a novel decentralized privacy preservation approach was designed by integrating a partially homomorphic cryptosystem into the decentralized optimization architecture. Compared to encryption-based methods that rely on large integer calculations, SSS-based privacy-preserving approaches are more efficient in the computation of shares while offering information-theoretical security [@huo2022secret]. In [@li2019privacy], an SSS-based privacy-preserving algorithm was developed to solve the consensus problem while concurrently protecting each individual's private information. Rottondi *et al.* in [@rottondi2014enabling] designed a privacy-preserving vehicle-to-grid architecture based on SSS to ensure the confidentiality of the private information of EV owners from aggregators. Huo and Liu in [@huo2022distributed] proposed an SSS-based privacy-preserving EV charging control protocol, which eliminates the need for a system operator (SO) in achieving overnight valley filling. While cryptographic methods effectively achieve high levels of accuracy and privacy, the accompanying increased computation and communication complexity become the bottleneck in their practical use. Non-cryptographic approaches like state decomposition (SD) decompose the true state into two sub-states, and only one sub-state is visible to others, therefore protecting the true value of the original state. However, state-of-the-art SD-based strategies are not applicable to solve [\[sss1\]](#sss1){reference-type="eqref" reference="sss1"} as they mainly focus on consensus problems [@wang2019privacy; @zhang2022privacy]. This paper aims to design a decentralized privacy-preserving optimization algorithm, which is scalable and low in complexity, suitable for large-scale multi-agent optimization, specifically for EV charging control. The contributions of this paper are three-fold: 1) the proposed decentralized privacy-preserving algorithm can scale with the number of EVs and provide optimal decentralized EV charging solutions; 2) privacy preservation is achieved in the presence of honest-but-curious adversaries and external eavesdroppers; and 3) the proposed approach has low computing and communication overhead, making it widely applicable for preserving privacy in coupled multi-agent optimization problems in cyber-physical systems. # Problem Formulation ## Distribution Network Model {#Distribution-network} In a radial distribution network, the power flow can be represented by DistFlow branch equations that consist of the real power, reactive power, and voltage magnitude [@baran1989network]. Consider a re-indexed radial distribution network and define $\mathbb{N}=\{i \mid i=1, \ldots, n\}$ as the set of downstream buses. Let $\left|V_{i}(t)\right|$ denote the voltage magnitude of bus $i$ at time $t$, $|V_0|$ denote the voltage magnitude of the slack bus, and $p_i(t)$ and $q_i(t)$ denote the active and reactive loads of bus $i$ at time $t$. 
Following the linear DistFlow branch equations [@baran1989network], the squared voltage magnitude at node $i$ is $$\label{11sss} \boldsymbol{V}_i =\boldsymbol{V}_{0}- 2\sum_{j=1}^{n} \boldsymbol{R}_{ij} \boldsymbol{p}_j - 2\sum_{j=1}^{n} \boldsymbol{X}_{ij} \boldsymbol{q}_j$$ where $\boldsymbol{V}_{i} =[\left|V_{i}(1)\right|^{2}, \ldots, \left|V_{i}(T)\right|^{2}]^{\mathsf{T}} \in \mathbb{R}^{T}$, $\boldsymbol{V}_{0} =[\left|V_{0}\right|^{2}, \ldots, \left|V_{0}\right|^{2}]^{\mathsf{T}} \in \mathbb{R}^{T}$, $\bm{p}_i = [p_{i}(1), \ldots, p_{i}(T)]^{\mathsf{T}} \in \mathbb{R}^{T}$, $\bm{q}_i = \left[q_{i}(1),\ldots, q_{i}(T)\right]^{\mathsf{T}} \in \mathbb{R}^{T}$, and the adjacency matrices $\bm{R}$ and $\bm{X}$ are defined as $$\begin{aligned} \boldsymbol{R} &\in \mathbb{R}^{n \times n},~ \boldsymbol{R}_{ij}=\sum_{(\hat{\imath}, \hat{\jmath}) \in \mathbb{E}_{i} \cap \mathbb{E}_{j}} r_{\hat{\imath} \hat{\jmath}} \nonumber\\ \boldsymbol{X} &\in \mathbb{R}^{n \times n}, ~\boldsymbol{X}_{ij}=\sum_{(\hat{\imath}, \hat{\jmath}) \in \mathbb{E}_{i} \cap \mathbb{E}_{j}} x_{\hat{\imath} \hat{\jmath}}\nonumber\end{aligned}$$ where $r_{\hat{\imath} \hat{\jmath}}$ and $x_{\hat{\imath} \hat{\jmath}}$ denote the resistance and reactance from bus $\hat{\imath}$ to bus $\hat{\jmath}$, respectively. The sets of line segments that connect the slack bus to bus $i$ and bus $j$ are denoted by $\mathbb{E}_{i}$ and $\mathbb{E}_{j}$, respectively. In this paper, we focus on the charging control of plug-in EVs on radial distribution networks. A 13-bus distribution network with charging stations situated at different nodes is shown in Fig. [1](#13_distribution_network){reference-type="ref" reference="13_distribution_network"}. ![A 13-bus distribution network connected with EVs.](13_distribution.pdf){#13_distribution_network width="47%"} ## EV Charging Model {#EV-model} Let $\bm{r}_{\hat{\imath}} \in \mathbb{R}^{T}$ denote the piece-wise constant charging power of the $\hat{\imath}$th EV during $T$ time intervals, and $\bm{r}_{\hat{\imath}}$ is constrained by $$\label{1s} \bm{0} \leq \bm{r}_{\hat{\imath}} \leq \overline{\bm{r}}_{\hat{\imath}}$$ where $\overline{\bm{r}}_{\hat{\imath}} = [\overline{r}_{\hat{\imath}}, \ldots, \overline{r}_{\hat{\imath}}]^{\mathsf{T}} \in \mathbb{R}^{T}$ and $\overline{r}_{\hat{\imath}}$ denotes the maximum charging power. Let $\delta_t$ denote the sampling period and $[1:T\delta_t]$ denote the charging duration. To ensure EVs are charged to the desired energy levels by the end of the charging period, the total energy charged for the $\hat{\imath}$th EV should satisfy $$\bm{G}\bm{r}_{\hat{\imath}} = d_{\hat{\imath}}\label{2s}$$ where $\bm{G} {=} [\delta_t\eta,\ldots,\delta_t\eta] {\in} \mathbb{R}^{1\times T}$ denotes the aggregation vector, $\eta$ denotes the charging efficiency, and $d_{\hat{\imath}}$ denotes the charging demand request of the $\hat{\imath}$th EV. ## Valley-Filling Optimization Problem {#opt-problem} The valley-filling problem aims at filling the aggregated load valley and smoothing the aggregated load profile of the entire distribution network. This service is typically provisioned during the late evening and early morning when significant energy use reduction occurs. In this paper, the controllable charging loads of EVs, e.g., from community overnight parking and charging lot, are scheduled and shifted to flatten the valley in the power load profile. 
To this end, the valley-filling problem is formulated as a constrained optimization problem at the grid scale aiming at determining the optimal charging schedules of all EVs. Suppose in total $\hat{n}$ EVs need to be fully charged during the time period $[1:T\delta_t]$. This paper takes the nodal voltage constraint, which acts as the global coupling constraint, as an example to illustrate the impact of EV charging on the distribution network: $$\label{4sk} \underline{\bm{V}} \leq \bm{V}_i \leq \overline{\bm{V}}, ~ \forall i=1,2, \ldots, n$$ where $\underline{\bm{V}} {=} [\underline{V}, \ldots, \underline{V}]^{\mathsf{T}} {\in} \mathbb{R}^{T}$, $\overline{\bm{V}} {=} [\overline{V}, \ldots, \overline{V}]^{\mathsf{T}} {\in} \mathbb{R}^{T}$, and $\underline{V}$ and $\overline{V}$ denote the lower and upper voltage bounds, respectively. The optimal EV charging control problem is then formulated into a quadratic programming problem as $$\label{s1} \begin{aligned} &\underset{}{\text{min}} \quad J(\left\{\bm{r}_{\hat{\imath}}\right\}_{\hat{\imath}=1}^{\hat{n}}) = \frac{1}{2}\left\|\bm{p}_{b}+\sum_{\hat{\imath}=1}^{\hat{n}} \bm{r}_{\hat{\imath}} \right\|_{2}^{2} \\ & \,\, \text{s.t.} \, \quad \bm{r}_{\hat{\imath}} \in \mathcal{R}_{\hat{\imath}}, ~ \forall \hat{\imath}=1,2, \ldots, \hat{n} \\ & \,\,\,\, \qquad \underline{\bm{V}} \leq \bm{V}_i \leq \overline{\bm{V}}, ~ \forall i=1,2, \ldots, n \end{aligned}$$where $\bm{p}_{b}$ denotes the aggregated baseline load and $\mathcal{R}_{\hat{\imath}}$ denotes the local feasible set of EV $\hat{\imath}$ that is defined by $$\mathcal{R}_{\hat{\imath}} = \{ \bm{r}_{\hat{\imath}} | \: 0 \leq \bm{r}_{\hat{\imath}} \leq \overline{\bm{r}}_{\hat{\imath}}, \bm{G}\bm{r}_{\hat{\imath}} = d_{\hat{\imath}} \}.$$ Note that [\[1s\]](#1s){reference-type="eqref" reference="1s"} and [\[2s\]](#2s){reference-type="eqref" reference="2s"} are basic constraints that describe the EV charging process; additional constraints that capture EVs' local characteristics can be included in the feasible set $\mathcal{R}_{\hat{\imath}}$ without affecting the algorithm design. # Main Results ## Decentralized PGM To solve the constrained optimization problem in [\[s1\]](#s1){reference-type="eqref" reference="s1"} in a decentralized manner, EVs (agents) can work cooperatively by adopting the projected gradient method (PGM) [@bertsekas2015parallel]. In PGM, the $\hat{\imath}$th EV can update its decision variable (primal variable) by $$\label{eq1} \bm{r}_{\hat{\imath}}^{(\ell+1)} = \Pi_{\mathcal{R}_{\hat{\imath}}}[\bm{r}_{\hat{\imath}}^{(\ell)} - \gamma^{(\ell)}_{\hat{\imath}} \Phi_{\hat{\imath}}^{(\ell)} (\bm{r}^{(\ell)})]$$ where $\ell$ denotes the iteration index, $\bm{r}^{(\ell)} = [{\bm{r}_1^{(\ell)}}^\mathsf{T},\ldots,{\bm{r}_{\hat{n}}^{(\ell)}}^\mathsf{T}]^{\mathsf{T}}$, $\gamma^{(\ell)}_{\hat{\imath}}$ denotes the primal update step size of EV $\hat{\imath}$, $\Phi_{\hat{\imath}}^{(\ell)}(\cdot)$ denotes the first-order gradient of the Lagrangian function *w.r.t.* $\bm{r}_{\hat{\imath}}^{(\ell)}$, and $\Pi_{\mathcal{R}_{\hat{\imath}}}[\cdot]$ denotes the Euclidean projection operation onto $\mathcal{R}_{\hat{\imath}}$.
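The local projection $\Pi_{\mathcal{R}_{\hat{\imath}}}[\cdot]$ admits a simple computation: since $\mathcal{R}_{\hat{\imath}}$ is the intersection of a box with a single energy-balance hyperplane, the Euclidean projection is a clipped affine map with one scalar multiplier, which can be located by bisection. The following Python sketch illustrates this; it is not part of the original algorithm description, and the function name as well as the default sampling period $\delta_t = 0.25$ h and efficiency $\eta = 0.85$ (chosen to match the simulation settings reported later) are assumptions made purely for illustration.

```python
import numpy as np

def project_onto_R(y, r_max, demand, delta_t=0.25, eta=0.85, tol=1e-9):
    """Euclidean projection of y onto
    R = {r : 0 <= r <= r_max, delta_t * eta * sum(r) = demand}.
    KKT conditions give r(nu) = clip(y + nu*c, 0, r_max) with c = delta_t*eta,
    where nu (the multiplier of the equality constraint) is found by bisection."""
    c = delta_t * eta
    lo = -np.max(y) / c - 1.0                   # small enough: every entry clips to 0
    hi = (np.max(r_max) - np.min(y)) / c + 1.0  # large enough: every entry clips to r_max
    energy = lambda nu: c * np.sum(np.clip(y + nu * c, 0.0, r_max))
    for _ in range(200):                        # energy(nu) is nondecreasing in nu
        nu = 0.5 * (lo + hi)
        if energy(nu) < demand:
            lo = nu
        else:
            hi = nu
        if hi - lo < tol:
            break
    return np.clip(y + 0.5 * (lo + hi) * c, 0.0, r_max)

# example: project an unconstrained gradient step for one EV over T = 48 slots
T = 48
rng = np.random.default_rng(1)
r_proj = project_onto_R(rng.uniform(-2, 8, T), r_max=np.full(T, 6.6), demand=30.0)
```

Because this step uses only the EV's own $d_{\hat{\imath}}$ and $\overline{\bm{r}}_{\hat{\imath}}$, it can be carried out locally and in parallel by every EV.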
The relaxed Lagrangian of [\[s1\]](#s1){reference-type="eqref" reference="s1"} can be derived as $$\label{5} \mathcal{L}(\bm{r},\bm{\lambda})= \frac{1}{2}\left\|\bm{p}_{b}+\sum_{\hat{\imath}=1}^{\hat{n}} \bm{r}_{\hat{\imath}}\right\|_{2}^{2} + \sum_{i=1}^{n}\bm{\lambda}_i^\mathsf{T}(\underline{\bm{V}} - \bm{V}_i)$$ where $\bm{\lambda} = [\bm{\lambda}_1^\mathsf{T},\ldots,\bm{\lambda}_n^\mathsf{T}]^\mathsf{T}$ and $\bm{\lambda}_i$ denotes the dual variable associated with the $i$th inequality constraint. Note that the Lagrangian in [\[5\]](#5){reference-type="eqref" reference="5"} is relaxed by moving EVs' local constraints into $\mathcal{R}_{\hat{\imath}}$. Only the lower bound constraint on the bus voltage magnitudes is considered, as the charging loads of EVs are the only active power consumption within the distribution network. The subgradients of $\mathcal{L}(\bm{r},\bm{\lambda})$ *w.r.t.* $\bm{r}_{\hat{\imath}}$ and $\bm{\lambda}_i$ are [\[6s\]]{#6s label="6s"} $$\begin{aligned} \nabla_{\bm{r}_{\hat{\imath}} } \mathcal{L}(\bm{r},\bm{\lambda}) & = \bm{p}_{b} + \sum_{\hat{\imath}=1}^{\hat{n}} \bm{r}_{\hat{\imath}} -\sum_{i=1}^{n}\nabla_{\bm{r}_{\hat{\imath}} }(\bm{\lambda}_i^\mathsf{T}\bm{V}_i) \label{4as}\\ \nabla_{\bm{\lambda}_i} \mathcal{L}(\bm{r},\bm{\lambda}) & = \underline{\bm{V}} - \bm{V}_i. \label{4bs}\end{aligned}$$ Substituting the linear DistFlow branch equation [\[11sss\]](#11sss){reference-type="eqref" reference="11sss"} into [\[6s\]](#6s){reference-type="eqref" reference="6s"}, we obtain [\[7s\]]{#7s label="7s"} $$\begin{aligned} \nabla_{\bm{r}_{\hat{\imath}} } \mathcal{L}(\bm{r},\bm{\lambda}) & = \bm{p}_{b} + \sum_{i=1}^{n} \bm{p}_{i} - \hat{\bm{s}}_{\hat{\imath}} \label{7as}\\ \nabla_{\bm{\lambda}_i} \mathcal{L}(\bm{r},\bm{\lambda}) & = \tilde{\bm{V}} + 2\sum_{j=1}^{n} \boldsymbol{R}_{ij} \boldsymbol{p}_j \label{7bs}\end{aligned}$$ where $\hat{\bm{s}}_{\hat{\imath}} = \sum_{i=1}^{n}\nabla_{\bm{r}_{\hat{\imath}} }(\bm{\lambda}_i^\mathsf{T}\bm{V}_i)$, $\tilde{\bm{V}} = \underline{\bm{V}} - \boldsymbol{V}_{0}$, $\bm{p}_i = \sum_{\hat{\imath}=1}^{\hat{n}_i} \bm{r}_{\hat{\imath}}$, and $\hat{n}_i$ denotes the number of EVs connected at bus $i$. Note that the exact form of $\hat{\bm{s}}_{\hat{\imath}}$ is determined by the bus location of the $\hat{\imath}$th EV, e.g., if the $\hat{\imath}$th EV is connected at bus $k$, then $\hat{\bm{s}}_{\hat{\imath}} = 2\sum_{i=1}^{n} \boldsymbol{R}_{ik}\bm{\lambda}_i$. Based on the subgradients in [\[7s\]](#7s){reference-type="eqref" reference="7s"}, the primal and dual variables can be updated through the PGM by [\[update\]]{#update label="update"} $$\begin{aligned} \bm{r}_{\hat{\imath}}^{(\ell+1)} &= \Pi_{\mathcal{R}_{\hat{\imath}}}\left( \bm{r}_{\hat{\imath}}^{(\ell)}-\gamma_{\hat{\imath}} \nabla_{\bm{r}_{\hat{\imath}}} \mathcal{L}\left(\bm{r}^{(\ell)}, \bm{\lambda}^{(\ell)}\right)\right) \label{11a}\\ \bm{\lambda}_i^{(\ell+1)} &= \Pi_{\mathcal{D}_{i}}\left( \bm{\lambda}_i^{(\ell)}+\beta_i \nabla_{\bm{\lambda}_i} \mathcal{L}\left(\bm{r}^{(\ell)}, \bm{\lambda}^{(\ell)}\right)\right)\label{11b}\end{aligned}$$ where $\mathcal{D}_{i}= \{\bm{\lambda}_{i} ~|~ \bm{\lambda}_{i} \geq \boldsymbol{0} \}$ denotes the feasible set of $\bm{\lambda}_{i}$ and $\beta_i$ denotes the associated dual update step size. The PGM update in [\[update\]](#update){reference-type="eqref" reference="update"} is scalable *w.r.t.* the number of EVs owing to the parallel computing structure.
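For concreteness, a minimal Python sketch of one synchronous primal-dual iteration [\[11a\]](#11a){reference-type="eqref" reference="11a"}--[\[11b\]](#11b){reference-type="eqref" reference="11b"} driven by the subgradients [\[7as\]](#7as){reference-type="eqref" reference="7as"}--[\[7bs\]](#7bs){reference-type="eqref" reference="7bs"} is given below. It is an illustration rather than the authors' implementation: the data layout (one row per EV or bus), the mapping `bus_of_ev`, scalar step sizes, and the helper `project_onto_R` from the previous sketch are assumptions, and the obfuscation layer introduced in the next subsection is deliberately omitted here.

```python
import numpy as np

def pgm_step(r, lam, p_b, R, bus_of_ev, V_tilde, gamma, beta, r_max, demand):
    """One synchronous primal-dual PGM iteration, eqs. (11a)-(11b), driven by the
    subgradients (7as)-(7bs).  Shapes: r (n_ev, T), lam (n, T), p_b and V_tilde (T,),
    R (n, n); bus_of_ev[j] is the (0-based) bus index of EV j."""
    n, T = lam.shape
    # per-bus EV active power p_i = sum of the charging profiles of the EVs at bus i
    p_bus = np.zeros((n, T))
    for ev, b in enumerate(bus_of_ev):
        p_bus[b] += r[ev]
    total = p_b + p_bus.sum(axis=0)            # p_b + sum_i p_i in (7as)
    s_hat = 2.0 * (R.T @ lam)                  # row k holds s_hat for an EV at bus k
    # primal update (11a): one projected gradient step per EV (parallelizable)
    r_new = np.empty_like(r)
    for ev, b in enumerate(bus_of_ev):
        grad = total - s_hat[b]                # subgradient (7as)
        r_new[ev] = project_onto_R(r[ev] - gamma * grad, r_max[ev], demand[ev])
    # dual update (11b): projected ascent with subgradient (7bs), keeping lambda >= 0
    lam_new = np.maximum(lam + beta * (V_tilde + 2.0 * (R @ p_bus)), 0.0)
    return r_new, lam_new
```

In this sketch only the per-bus aggregates and the dual-related term enter each EV's update, which is what the privacy-preserving scheme below exploits.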
However, due to the couplings of decision variables in both the objective function and the global voltage constraint, the primal and dual updates require the exchange of decision variables between all EVs, e.g., calculating the subgradient in [\[7as\]](#7as){reference-type="eqref" reference="7as"} requires $\bm{r}_{\hat{\imath}}$'s from all EVs. Therefore, without appropriate privacy preservation measures, the inevitable and frequent information exchange can put EVs' private data at risk of being breached. To address this concern, we aim to develop a privacy-preserving EV charging control framework via state obfuscation to protect EVs' true decision variables. ## Privacy-Preserving EV Charging Control Via State Obfuscation The goal of privacy preservation is to ensure that EV owners' private information is protected throughout the charging scheduling process. Specifically, private data of the $\hat{\imath}$th EV are defined to include the charging profiles $\bm{r}_{\hat{\imath}}^{(\ell)}$ in all iterations, the charging demand $d_{\hat{\imath}}$, and the maximum charging power $\overline{\bm{r}}_{\hat{\imath}}$. The primal update in [\[11a\]](#11a){reference-type="eqref" reference="11a"} naturally inherits local privacy preservation owing to the independent projection operation $\Pi_{\mathcal{R}_{\hat{\imath}}}$. This is because private data such as the charging demand $d_{\hat{\imath}}$ and the maximum charging power $\overline{\bm{r}}_{\hat{\imath}}$ are exclusive to the ${\hat{\imath}}$th EV and only contained in the feasible set $\mathcal{R}_{\hat{\imath}}$ for implementing the primal update. Therefore, the local private information is securely retained within $\mathcal{R}_{\hat{\imath}}$ and will not be disclosed to other parties. Despite the scalability of decentralized EV charging architectures, they require frequent exchange of EVs' charging profiles through communication channels between EVs and the SO, making the entire system prone to privacy leakages. To resolve this issue, we propose a state-obfuscation-based algorithm that can protect EVs' charging profiles during any planned charging window. The basic concept behind state obfuscation is to obfuscate EVs' true decision variables by using the values of random variables drawn from a probability distribution. Regarding a set of mutually independent random variables, e.g., drawn from a normal distribution, we have the following theorem. **Theorem 1** [@ross1995stochastic]: If $X_1, \ldots, X_z$ are mutually independent normal random variables with means $\mu_1, \ldots, \mu_z$ and variances $\sigma_1^2, \ldots, \sigma_z^2$, then the linear combination $Y = \sum_{i=1}^z c_iX_i$ follows the normal distribution $\mathcal{N}(\sum_{i=1}^z c_i \mu_i, \sum_{i=1}^z c_i^2 \sigma_i^2)$. $\blacksquare$ To integrate state obfuscation into EV charging control, we propose a novel communication architecture, as shown in Fig. [2](#communicating_structure){reference-type="ref" reference="communicating_structure"}, for the privacy-preserving algorithm implementation. ![Communicating structure of the proposed privacy-preserving algorithm (the communications of the EVs connected at buses 4 and 5 are given).](EV_communication_2.pdf){#communicating_structure width="47%"} The EVs in the distribution network layer first send the obfuscated charging profiles to the SO; the SO then aggregates the obfuscated data bus by bus according to EVs' bus locations. Specifically, suppose the $\hat{\imath}$th EV is connected at bus $i$.
At the $\ell$th iteration, the $\hat{\imath}$th EV uses a random normal variable $X_{\hat{\imath}} \sim \mathcal{N}(\mu_{i}, \sigma_i^2)$ to generate $T$ random sets $\Tilde{X}_{\hat{\imath},t}^{(\ell)} = \{e_{\hat{\imath},t,1}^{(\ell)}, \ldots, e_{\hat{\imath},t,m}^{(\ell)}\}$, $\forall t=1,\ldots,T$, where each set contains $m$ random elements. For clarity, we represent $\Tilde{X}_{\hat{\imath},t}^{(\ell)}$ by a vector as $\Tilde{\bm{X}}_{\hat{\imath},t}^{(\ell)} = [e_{\hat{\imath},t,1}^{(\ell)}, \ldots, e_{\hat{\imath},t,m}^{(\ell)}]^{\mathsf{T}}$. The $\hat{\imath}$th EV then extracts each element from its decision variable $\bm{r}_{\hat{\imath}}^{(\ell)}$ to calculate $\bm{r}_{\hat{\imath}}^{(\ell)}(t) \Tilde{\bm{X}}_{\hat{\imath},t}^{(\ell)}$, $\forall t=1,\ldots,T$. Consequently, the $\hat{\imath}$th EV can obtain a total of $T$ new vectors and reformulate them into ${\Tilde{\bm{W}}_{\hat{\imath}}}^{(\ell)}$, defined by $$\label{11st} \Tilde{\bm{W}}_{\hat{\imath}}^{(\ell)} = [\bm{r}_{\hat{\imath}}^{(\ell)}(1) {{}\Tilde{\bm{X}}_{\hat{\imath},1}^{(\ell)}}^{\mathsf{T}}, \ldots, \bm{r}_{\hat{\imath}}^{(\ell)}(T) {{}\Tilde{\bm{X}}_{\hat{\imath},T}^{(\ell)}}^{\mathsf{T}} ]^{\mathsf{T}}.$$ Subsequently, instead of sending the true decision variables directly, all EVs send their $\tilde{\bm{W}}_{\hat{\imath}}^{(\ell)}$'s, i.e., the obfuscated states, to the SO. Then the SO computes the sum of the received obfuscated states using $$\label{12sss} \bm{Y}_i^{(\ell)} = \sum_{\hat{\imath}=1}^{\hat{n}_i} \Tilde{\bm{W}}_{\hat{\imath}}^{(\ell)}$$ for the EVs connected at the $i$th bus. As shown in [\[11st\]](#11st){reference-type="eqref" reference="11st"}, every element in $\bm{r}_{\hat{\imath}}^{(\ell)}$ is obfuscated and expanded by $m$ random values. To retrieve the summed charging profiles for all EVs connected at bus $i$, the SO needs to calculate the mean of every $m$ elements in $\bm{Y}_i^{(\ell)}$. For example, for the first $m$ elements, the SO calculates $(\sum_{\kappa=1}^{m} \bm{Y}_i^{(\ell)}(\kappa))/m$, which is equal to $\sum_{\hat{\imath}=1}^{\hat{n}_i} \bm{r}_{\hat{\imath}}^{(\ell)} (1) \bar{\mu}_{i}$, where $\bar{\mu}_{i}$ is an approximation of $\mu_{i}$. Supposing the SO knows the true mean $\mu_{i}$, it can acquire $\sum_{\hat{\imath}=1}^{\hat{n}_i} \bm{r}_{\hat{\imath}}^{(\ell)} \bar{\mu}_{i}$, and further obtain $\tau\sum_{\hat{\imath}=1}^{\hat{n}_i} \bm{r}_{\hat{\imath}}^{(\ell)}$, where $\tau = \bar{\mu}_{i}/\mu_{i}$ denotes the approximation error. Therefore, the SO now has the approximated active power consumption $\bar{\bm{p}}_i^{(\ell)} = \tau \sum_{\hat{\imath}=1}^{\hat{n}_i} \bm{r}_{\hat{\imath}}^{(\ell)}$ of bus $i$, and it repeats the procedure to obtain $\bar{\bm{p}}_i^{(\ell)}$, $\forall i=1,\ldots,n$. Finally, the SO estimates the subgradients in [\[7s\]](#7s){reference-type="eqref" reference="7s"} using the approximated active power consumption, then broadcasts [\[7as\]](#7as){reference-type="eqref" reference="7as"} to the $\hat{\imath}$th EV while utilizing [\[7bs\]](#7bs){reference-type="eqref" reference="7bs"} to conduct dual updates. Thereafter, EV $\hat{\imath}$ can update its decision variable $\bm{r}_{\hat{\imath}}^{(\ell)}$ in parallel using [\[11a\]](#11a){reference-type="eqref" reference="11a"}. The step-by-step process of the proposed approach is outlined in **Algorithm [\[alg_1\]](#alg_1){reference-type="ref" reference="alg_1"}**. EVs initialize decision variables, tolerance $\epsilon_0$, iteration counter $\ell=0$, and maximum iteration $\ell_{max}$.
The $\hat{\imath}$th EV connected at bus $i$ generates a normal random variable $X_{\hat{\imath}} \sim \mathcal{N}(\mu_{i}, \sigma_i^2)$ and draws random elements from $X_{\hat{\imath}}$ to obtain $\Tilde{\bm{X}}_{\hat{\imath},t}^{(\ell)} = [e_{\hat{\imath},t,1}^{(\ell)}, \ldots, e_{\hat{\imath},t,m}^{(\ell)}]^{\mathsf{T}}$, $\forall t=1,\ldots,T$. The $\hat{\imath}$th EV uses the elements of $\bm{r}_{\hat{\imath}}^{(\ell)}$ to calculate $\bm{r}_{\hat{\imath}}^{(\ell)}(t) \Tilde{\bm{X}}_{\hat{\imath},t}^{(\ell)}$, $\forall t=1,\ldots,T$, element-wise. Then each EV formulates $\Tilde{\bm{W}}_{\hat{\imath}}^{(\ell)}$ and sends it to the SO. The SO calculates the summation $\bm{Y}_i^{(\ell)}$ using [\[12sss\]](#12sss){reference-type="eqref" reference="12sss"} for each bus, then calculates the mean of every $m$ elements in $\bm{Y}_i^{(\ell)}$, to obtain the approximated $\bar{\bm{p}}_i^{(\ell)}$, $\forall i =1, \ldots, n$. The SO estimates the subgradient in [\[7as\]](#7as){reference-type="eqref" reference="7as"} and broadcasts it to the $\hat{\imath}$th EV. The $\hat{\imath}$th EV updates $\bm{r}_{\hat{\imath}}^{(\ell)} \rightarrow \bm{r}_{\hat{\imath}}^{(\ell+1)}$ by PGM using [\[11a\]](#11a){reference-type="eqref" reference="11a"}, then calculates the error $\epsilon_{\hat{\imath}}^{(\ell)}$. The SO updates the dual variables $\bm{\lambda}_{i}^{(\ell)} \rightarrow \bm{\lambda}_{i}^{(\ell+1)}$, $\forall i=1,\ldots,n$ using [\[11b\]](#11b){reference-type="eqref" reference="11b"}. $\ell=\ell+1$. [\[alg_1\]]{#alg_1 label="alg_1"} **Theorem 2**: **Algorithm [\[alg_1\]](#alg_1){reference-type="ref" reference="alg_1"}** has an accuracy level of $\tau$. With appropriate choices of $\sigma$ and $m$, the convergence of primal and dual variables is guaranteed. $\blacksquare$ **Theorem 2** states the correctness and convergence of the proposed algorithm. By carrying out **Algorithm 1**, the subgradients in [\[7s\]](#7s){reference-type="eqref" reference="7s"} can be efficiently approximated and calculated since the mean of $\bm{Y}_i^{(\ell)}$ can be used to retrieve $\bar{\bm{p}}_i^{(\ell)}$, which is an estimate of $\bm{p}_i^{(\ell)}$. When determining the accuracy level, the standard error of the mean (SEM), defined by $\mathrm{SEM}_{m} = \sigma/\sqrt{m}$, quantifies how a larger sample size produces more precise estimates of the means. **Remark 1:** Without loss of generality, $\mu_{i}$ was set uniformly across all EVs connected at the same bus to avoid an over-complicated algorithm implementation. In a broader scenario, the mean values of different EVs can be chosen independently. In other words, the mean value $\mu_{\hat{\imath}}$ serves as a unique key between the $\hat{\imath}$th EV and the SO. $\square$ # Privacy Analysis ## Privacy and Attack Models To preserve EV owners' privacy, two types of adversaries are considered: 1) an *honest-but-curious adversary* is an agent who adheres to the algorithm but intends to utilize the accessible data to infer private information of other participants, and 2) an *external eavesdropper* is an external attacker who wiretaps communication links to obtain the private information of the participants. ## Privacy Analysis **Algorithm [\[alg_1\]](#alg_1){reference-type="ref" reference="alg_1"}** allows EVs to use the values of random variables drawn from a normal distribution to protect the true decision variables $\bm{r}_{\hat{\imath}}^{(\ell)}$'s.
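The obfuscation and recovery steps can be prototyped in a few lines. The Python sketch below is only an illustration of the mechanism (array shapes, the random seed, and the toy four-slot profile are assumptions; $\mu = 1$, $\sigma^{2} = 0.2$, and $m = 40$ match the values used in the simulations): each EV expands every entry of its profile into $m$ random multiples, and the SO recovers an approximation of the per-bus sum by block averaging.

```python
import numpy as np

def obfuscate(r, mu, sigma, m, rng):
    """EV side: expand each entry r(t) into m obfuscated values r(t)*e with
    e ~ N(mu, sigma^2), and stack them into the vector W of eq. (11st)."""
    T = r.shape[0]
    X = rng.normal(mu, sigma, size=(T, m))      # the random sets X_tilde_{i,t}
    return (r[:, None] * X).reshape(-1)         # [r(1)*X_1^T, ..., r(T)*X_T^T]^T

def so_aggregate(W_list, mu, m, T):
    """SO side: sum the obfuscated vectors received from the EVs at one bus
    (eq. (12sss)), average every block of m entries, and divide by the shared
    mean mu to approximate the per-bus sum of charging profiles."""
    Y = np.sum(W_list, axis=0)
    block_means = Y.reshape(T, m).mean(axis=1)  # approx (sum_i r_i(t)) * mu
    return block_means / mu

# toy check of the accuracy/obfuscation trade-off
rng = np.random.default_rng(0)
r_true = np.array([3.3, 6.6, 0.0, 1.2])         # an illustrative 4-slot profile (kW)
W = [obfuscate(r_true, mu=1.0, sigma=np.sqrt(0.2), m=40, rng=rng)]
print(so_aggregate(W, mu=1.0, m=40, T=4))       # close to r_true; error ~ sigma/sqrt(m)
```

Only the obfuscated vector $\Tilde{\bm{W}}_{\hat{\imath}}^{(\ell)}$ leaves the EV, while $\mu_{i}$ and $m$ act as the shared keys needed for recovery, which is the basis of the analysis that follows.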
The privacy preservation properties of **Algorithm [\[alg_1\]](#alg_1){reference-type="ref" reference="alg_1"}** are given by the following theorem. **Theorem 3**: **Algorithm [\[alg_1\]](#alg_1){reference-type="ref" reference="alg_1"}** preserves the private data of EV owners against both honest-but-curious adversaries and external eavesdroppers. $\blacksquare$ *Proof*: The proof of **Theorem 3** is approached from the adversaries' perspective based on the data they can access. From the view of an honest-but-curious adversary, suppose both EV $\hat{\imath}_1$ and EV $\hat{\imath}_2$ are connected at the same bus, and EV $\hat{\imath}_1$ is curious about inferring the charging profiles of EV $\hat{\imath}_2$. At the $\ell$th iteration, EV $\hat{\imath}_1$ has access to the data set $\mathcal{A}_{\hat{\imath}_1}^{(\ell)} = \{ \bm{r}_{\hat{\imath}_1}^{(\ell)}, \overline{\bm{r}}_{\hat{\imath}_1}, d_{\hat{\imath}_1},\mathcal{R}_{\hat{\imath}_1}, \gamma_{\hat{\imath}}, \bar{\nabla}_{\bm{r}_{\hat{\imath}_1} } \mathcal{L}(\bm{r}^{(\ell)},\bm{\lambda}^{(\ell)} ) \}$, where $\overline{\bm{r}}_{\hat{\imath}_1}$, $d_{\hat{\imath}_1}$, $\mathcal{R}_{\hat{\imath}_1}$, and $\gamma_{\hat{\imath}}$ are private information of EV $\hat{\imath}_1$ and are kept locally by EV $\hat{\imath}_1$. This local information, therefore, cannot provide anything useful for inferring $\bm{r}_{\hat{\imath}_2}^{(\ell)}$. Besides, EV $\hat{\imath}_1$ also has access to the approximated subgradient $\bar{\nabla}_{\bm{r}_{\hat{\imath}_1} } \mathcal{L}(\bm{r}^{(\ell)},\bm{\lambda}^{(\ell)})$ that is calculated by the SO. However, the baseline load $\bm{p}_b$ and the adjacency matrix $\bm{R}$ are held by the SO, and therefore remain invisible to any EVs. Therefore, EV $\hat{\imath}_1$ cannot infer the charging profiles $\bm{r}_{\hat{\imath}_2}$ of EV $\hat{\imath}_2$ based on its accessible information contained in $\mathcal{A}_{\hat{\imath}_1}^{(\ell)}$. For any external eavesdropper, by wiretapping the communication channels at the $\ell$th iteration, it can obtain the information set $\mathcal{E} = \{ \Tilde{\bm{W}}_{\hat{\imath}}^{(\ell)}, \bar{\nabla}_{\bm{r}_{\hat{\imath}} } \mathcal{L}(\bm{r}^{(\ell)},\bm{\lambda}^{(\ell)}), \forall \hat{\imath} =1, \ldots, \hat{n} \}$. Suppose an external eavesdropper knows the protocols of **Algorithm [\[alg_1\]](#alg_1){reference-type="ref" reference="alg_1"}**. To infer $\bm{r}_{\hat{\imath}}^{(\ell)}$ by using $\mathcal{E}$, it still needs to know the cardinality $m$ and the mean value $\mu_{i}$ that is associated with the random variable $X_{\hat{\imath}}$. Though the approximated subgradient $\bar{\nabla}_{\bm{r}_{\hat{\imath}} } \mathcal{L}(\bm{r}^{(\ell)},\bm{\lambda}^{(\ell)})$ could potentially reveal the converging direction of the decision variable, the eavesdropper is still blind to $\gamma_{\hat{\imath}}$ and $\mathcal{R}_{\hat{\imath}}$, and therefore unable to imitate the primal update in [\[11a\]](#11a){reference-type="eqref" reference="11a"}. $\square$ **Remark 2:** A trade-off between the level of security and computing cost exists in **Algorithm [\[alg_1\]](#alg_1){reference-type="ref" reference="alg_1"}**. When a specific accuracy requirement is decided by $\tau$ under a fixed sample size $m$, a smaller variance $\sigma_{i}^2$ will result in a smaller SEM and therefore require fewer data points to achieve the accuracy standard.
The proposed state obfuscation refines $k$-anonymity [@samarati1998protecting] by introducing randomization through $m$ anonymous random values for each true value. Though a smaller variance would result in less computation and communication cost, it can also lead to a higher degree of similarity in $\Tilde{\bm{X}}_{\hat{\imath},t}^{(\ell)}$, thus compromising the level of randomization and privacy. $\square$ # Simulation Results The effectiveness of the proposed obfuscation-based privacy-preserving EV charging control strategy is verified through the simplified single-phase IEEE 13-bus test feeder as shown in Fig. [1](#13_distribution_network){reference-type="ref" reference="13_distribution_network"}. The baseline load profile was taken and scaled from the California Independent System Operator on 09/16/2021 and 09/17/2021 [@CISO]. We consider a penetration level of 7 EVs per bus, i.e., in total 84 EVs are connected to the distribution network. The charging demands of all EVs are randomly distributed in $[10,40]$ kWh. The maximum charging powers $\bar{r}_{\hat{\imath}}$ are uniformly set to 6.6 kW based on the level-2 EV charging standards, and the charging efficiency is set to $\eta = 0.85$. The valley-filling horizon is set to begin at 19:00 and lasts until 7:00 the next morning. The entire control horizon is divided into $T=48$ time slots with 15-minute resolution. By the end of the valley-filling period, all EVs are required to be charged to their desired energy levels. The primal update step sizes are chosen empirically as $\gamma_{\hat{\imath}} = 4\times10^{-4}$, $\forall \hat{\imath} =1,\dots,\hat{n}$, and the dual update step sizes are $\beta_i = 2\times10^{-3}$, $\forall i =1,\dots, n$. Initial values of $\bm{r}_{\hat{\imath}}^{(0)}$'s and $\bm{\lambda}_i^{(0)}$'s are all set to zero. The random variables $X_{\hat{\imath}}$ generated by the EVs connected at bus $i$ follow the normal distribution $X_{\hat{\imath}} \sim \mathcal{N}(\mu_{i} = 1, \sigma_{i}^2 = 0.2)$. The cardinality of the $\Tilde{\bm{X}}_{\hat{\imath},t}^{(\ell)}$'s is set uniformly to $m=40$. By applying **Algorithm [\[alg_1\]](#alg_1){reference-type="ref" reference="alg_1"}**, Fig. [\[valley_filling\]](#valley_filling){reference-type="ref" reference="valley_filling"} shows that the baseline load is flattened by using EVs' charging loads. ![Valley filling achieved by the optimal EV charging schedules.](valley_filling.pdf){#valley_filling width="100%"} ![Optimal charging profiles of all EVs.](EV.pdf){#charging_profiles width="100%"} The optimal charging profiles of all EVs are shown in Fig. [4](#charging_profiles){reference-type="ref" reference="charging_profiles"}. At around 1:00 a.m., when the baseline load reaches its minimum, all EVs charge at their highest power. To observe the privacy features, Fig. [5](#fig6){reference-type="ref" reference="fig6"} presents the random values $\bm{r}_{8}^{(12)}(t) \Tilde{\bm{X}}_{8,t}^{(12)}$, $\forall t=1,\ldots,T$, that were generated by EV 8 at the 12th iteration. The true charging profile $\bm{r}_{8}^{(12)}$ was obfuscated into $\bm{r}_{8}^{(12)}(t) \Tilde{\bm{X}}_{8,t}^{(12)}(\Tilde{m})$, $\forall t=1,\ldots, T, \Tilde{m} = 1,\ldots,m$. The range of the obfuscated data is shown by the shaded area, where the obfuscation achieves nearly 50% randomization of the original data. ![The true and obfuscated data generated by EV 8 at the 12th iteration.](error_bar.pdf){#fig6 width="44%"} Fig.
[6](#voltage){reference-type="ref" reference="voltage"} ![Nodal voltage magnitudes of 12 buses under baseline load (solid lines) and total load (dashed lines).](nodal_voltage.pdf){#voltage width="48%"} shows the nodal voltage magnitudes of 12 buses in the distribution network, where all voltage magnitudes are above the lower voltage limit (0.95 p.u., the black line). # Conclusion In this paper, we proposed a novel privacy-preserving decentralized algorithm to achieve privacy preservation and scalability in large-scale multi-agent cooperative optimization, particularly in the context of cooperative EV charging control. The proposed algorithm enables EVs to protect their decision variables via state obfuscation while facilitating the cooperation between EVs and the SO to achieve overnight valley filling. The privacy guarantees were theoretically analyzed and evaluated against honest-but-curious adversaries and external eavesdroppers. Simulations on an EV charging control problem validated the accuracy, efficiency, and privacy preservation properties of the proposed approach. [^1]: $^{\dagger}$X. Huo and M. Liu are with the Department of Electrical and Computer Engineering at the University of Utah, 50 S Central Campus Drive, Salt Lake City, UT, 84112, USA `{xiang.huo, mingxi.liu}@utah.edu`. [^2]: This work is supported by NSF Award: ECCS-2145408.
--- abstract: | In this paper, we discuss a variational approach to the length functional and its relation to sub-Hamiltonian equations on sub-Finsler manifolds. Then, we introduce the notion of the nonholonomic sub-Finslerian structure and prove that the distributions are geodesically invariant concerning the Barthel non-linear connection. We provide necessary and sufficient conditions for the existence of the curves that are abnormal extremals; likewise, we provide necessary and sufficient conditions for normal extremals to be the motion of a free nonholonomic mechanical system, and vice versa. Moreover, we show that a coordinate-free approach for a free particle is a comparison between the solutions of the nonholonomic mechanical problem and the solutions of the Vakonomic dynamical problem for the nonholonomic sub-Finslerian structure. In addition, we provide an example of the nonholonomic sub-Finslerian structure. Finally, we show that the sub-Laplacian measures the curvature of the nonholonomic sub-Finslerian structure. address: Dep. of Math, College of Science, University of Al Qadisiyah, Al-Qadisiyah 58001, Iraq author: - Layth M. Alabdulsada title: Sub-Finsler geometry and nonholonomic mechanics --- # Introduction Sub-Finsler geometry and nonholonomic mechanics have attracted more attention recently; they are rich subjects with many applications. Sub-Finsler geometry is a natural generalization of sub-Riemannian geometry. The sub-Riemannian metric was initially referred to as the Carnot-Carathéodory metric. J. Mitchell, [@MI85], investigated the Carnot-Carathéodory distance between two points by considering a smooth Riemannian $n$-manifold $(M, g)$ equipped with a $k$-rank distribution $\mathcal{D}$ of the tangent bundle $TM$. A decade later, M. Gromov [@Gr96] provided a comprehensive study of the above concepts. V. N. Berestovskii [@BE88] identified the Carnot-Caratheodory Finsler metric version as the Finsler counterpart of this metric, now commonly known as the sub-Finsler metric. In this study, our definition of the sub-Finsler metric closely aligns with the definition presented in previous works [@L.K; @JCG]. The motivation behind studying sub-Finsler geometry lies in its pervasive presence within various branches of pure mathematics, particularly in differential geometry and applied fields like geometric mechanics, control theory, and robotics. We refer the readers to [@ABB; @L.K1; @BBLS17; @LoMa00]. Nonholonomic mechanics is currently a very active area of the so-called geometric mechanics [@KF00]. Constraints on mechanical systems are typically classified into two categories: integrable and nonintegrable constraints. *Nonholonomic mechanics*: constraints that are not holonomic; these might be constraints that are expressed in terms of the velocity of the coordinates that cannot be derived from the constraints of the coordinates (thereby nonintegrable) or the constraints that are not given as an equation at all [@Lewis]. Nonholonomic control systems exhibit unique characteristics, allowing control of underactuated systems due to constraint nonintegrability. These problems arise in physical contexts like wheel systems, cars, robotics, and manipulations, with more insights found in [@BCGM15; @KF00]. In [@La01], B. Langerock considered a general notion of connections over a vector bundle map and applied it to the study of mechanical systems with linear nonholonomic constraints and a Lagrangian of kinetic energy type. A. D. 
Lewis in [@Lewis], investigated various consequences of a natural restriction of a given affine connection to distribution. The basic construction comes from the dynamics of a class of mechanical systems with nonholonomic constraints. In a previous paper in collaboration with L. Kozma [@L.K], constructed a generalized non-linear connection for a sub-Finslerian manifold, called $\mathcal{L}$-connection by the Legendre transformation which characterizes normal extremals of a sub-Finsler structure as geodesics of this connection. In this paper, [@L.K] and [@L.K1] play an important role in calculating our main results. These results are divided into two parts: sub-Hamiltonian systems and nonholonomic sub-Finslerian structures on the nonintegrable distributions. The paper is organized according to the following: In Section 2, we review some standard facts about sub-Finslerian settings. In Section 3, we define a sub-Finsler metric on $\mathcal{D}$ by using a sub-Hamiltonian function $\eta(x,p)$ and show the correspondence between the solutions of sub-Hamiltonian equations and the solution of a variational problem. Section 4 introduces the notion of nonholonomic sub-Finslerian structures and presents the main results, including conditions for the motion of a free mechanical system under linear nonholonomic constraints to be normal extremal with respect to the linked sub-Finslerian structure. Section 5 provides an example of the nonholonomic sub-Finslerian structure, and Section 6 discusses the curvature of the sub-Finslerian structure. We conclude that if the sub-Laplacian $\Delta_F$ is zero, then the sub-Finslerian structure is flat and locally isometric to a Riemannian manifold, while if $\Delta_F$ is nonzero, the sub-Finslerian structure is curved and the shortest paths between two points on the manifold are not necessarily straight lines. # Preliminaries Let $M$ be an $n$-dimensional smooth ($C^{\infty}$) manifold, and let $T_xM$ represent its tangent space at a point $x \in M$. We denote the module of vector fields over $C^{\infty}(M)$ by $\mathfrak{X}(M)$, and the module of $1$-forms by $\mathfrak{X}^*(M)$. Consider $\mathcal{D}$, a *regular distribution* on $M$, defined as a subbundle of the tangent bundle $TM$ with a constant rank of $k$. Locally, in coordinates, this distribution can be expressed as $\mathcal{D} = \textrm{span} \{X_{1}, \ldots, X_{k}\}$, where $X_i(x) \in \mathfrak{X}(M)$ are linearly independent vector fields. A non-negative function $F: \mathcal{D} \rightarrow \mathbb{R}_+$ is called a *sub-Finsler metric* if it satisfies the following conditions: - **Smoothness**: $F$ is a smooth function over $\mathcal{D}\setminus{0}$; - **Positive Homogeneity**: $F(\lambda v) = |\lambda| F(v)$ for all $\lambda \in \mathbb{R}$ and $v \in \mathcal{D} \setminus {0}$; - **Positive Definiteness**: The Hessian matrix of $F^2$ is positive definite at every $v \in \mathcal{D}_x \setminus {0}$. A differential manifold $M$ equipped with a sub-Finsler metric $F$ is recognized as a *sub-Finsler manifold*, denoted by $(M, \mathcal{D}, F)$. A piecewise smooth curve, denoted as $\sigma:[0, 1] \rightarrow M$, is considered *horizontal* if its tangent vector field $\dot{\sigma}(t)$ lies within $\mathcal{D}_{\sigma(t)}$ for all $t \in [0, 1]$, whenever it is defined. This condition reflects the nonholonomic constraints imposed on the curve. 
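For instance (a classical sub-Riemannian special case of the above definitions, recalled here only as an illustration), take $M = \mathbb{R}^3$ with coordinates $(x, y, z)$ and $$\mathcal{D} = \mathrm{span}\{X_{1}, X_{2}\}, \qquad X_{1} = \frac{\partial}{\partial x} - \frac{y}{2}\frac{\partial}{\partial z}, \qquad X_{2} = \frac{\partial}{\partial y} + \frac{x}{2}\frac{\partial}{\partial z}.$$ Declaring $X_1$ and $X_2$ orthonormal, i.e. setting $F(v^{1}X_{1} + v^{2}X_{2}) = \sqrt{(v^{1})^{2} + (v^{2})^{2}}$, yields a sub-Finsler (indeed sub-Riemannian) metric on $\mathcal{D}$, and a curve $\sigma(t) = (x(t), y(t), z(t))$ is horizontal precisely when $$\dot{z}(t) = \tfrac{1}{2}\bigl(x(t)\dot{y}(t) - y(t)\dot{x}(t)\bigr).$$ Moreover, $[X_{1}, X_{2}] = \partial/\partial z$, so $X_{1}$, $X_{2}$ and their bracket span $T_{x}\mathbb{R}^{3}$ at every point.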
The length functional of such a horizontal curve $\sigma$ possesses a derivative for almost all $t \in [0, 1]$, with the components of the derivative, $\dot{\sigma}$, representing measurable curves. The *length* of $\sigma$ is usually defined as: $$\ell(\sigma)=\int_0^1 F(\dot{\sigma}(t))dt.$$ This length structure gives rise to a *distance function*, denoted as $d: M \times M \rightarrow \mathbb{R}_+$, defined by: $$d(x_0, x_1)=\inf \ell(\sigma), \qquad x_0, x_1 \in M,$$ and the infimum is taken over all horizontal curves connecting $\sigma(0) = x_0$ to $\sigma(1) = x_1$. This distance metric captures the minimal length among all possible horizontal paths between two points on the manifold $M$. A *geodesic*, also known as a *minimizing geodesic*, refers to a horizontal curve $\sigma:[0, 1] \rightarrow M$ that realizes the distance between two points, , i.e., $\ell(\sigma) = d(\sigma (0), \sigma (1))$. Throughout this paper, it is consistently assumed that $\mathcal{D}$ is bracket-generating. A distribution $\mathcal{D}$, is characterized as *bracket-generating* if every local frame $X_i$ of $\mathcal{D}$, along with all successive Lie brackets involving these frames, collectively span the entire tangent bundle $TM$. If $\mathcal{D}$ represents a bracket-generating distribution on a connected manifold $M$, it follows that any two points within $M$ can be joined by a horizontal curve. This foundational concept was initially established by C. Carathéodory [@CA09] and later reaffirmed by W. L. Chow [@Chow39] and P. K. Rashevskii [@RA38]. However, for a comprehensive explanation of the bracket-generating concept, one can turn to R. Montgomery's book, [@Mo02]. # Sub-Hamiltonian associated with sub-Finslerian manifolds ## The Legendre transformation and Finsler dual of sub-Finsler metrics Let $\mathcal{D}^*$ be a rank-$s$ codistribution on a smooth manifold $M$, assigning to each point $x \in U \subset M$ a linear subspace $\mathcal{D}^*_x \subset T^*_xM$. This codistribution is a smooth subbundle, and spanned locally by $s$ pointwise linearly independent smooth differential 1-forms: $$\mathcal{D}_x^* = \mathrm{span}\{\alpha_1(x), \ldots, \alpha_s(x)\},\ \text{ with}\ \alpha_i(x) \in \mathfrak{X}^*(M).$$ We define the annihilator of a distribution $\mathcal{D}$ on $M$ as $(\mathcal{D}^{\bot})^0$, a subbundle of $T^*M$ consisting of covectors that vanish on $\mathcal{D}$: $$(\mathcal{D}^{\bot})^0 = \{\alpha \in T^*M: \alpha(v) = 0 \text{ for all } v \in \mathcal{D}\},$$ such that $\langle v, \alpha \rangle := \alpha(v)$. Similarly, we define the annihilator of the orthogonal complement of $\mathcal{D}$, denoted by $\mathcal{D}^0$, as the subbundle of $T^*M$ consisting of covectors that vanish on $TM^{\bot}$. Using these notions, we can define a sub-Finslerian function denoted by $F^* \in \mathcal{D}^* \sim T^*M\setminus\mathcal{D}^0$, where $F^*$ is a positive function. This function shares similar properties with $F$, but is based on $\mathcal{D}^*$ instead of $\mathcal{D}$. In our previous work [@L.K1], we established the relationship: $$\label{Fpv} F^*(p)= F(v), \ \mathrm{where}\ p =\mathcal{L}_L(v), \quad\mbox{for every}\quad p \in \mathcal{D}^*_x \quad\mbox{and}\quad v \in \mathcal{D}_x,$$ such that $\mathcal{L}_L$ is the Legendre transformation of the sub-Lagrangian function $L: \mathcal{D} \subset TM \to \mathbb{R}$, a diffeomorphism between $\mathcal{D}$ and $\mathcal{D}^*$. 
In this context, to express $F^*$ in terms of $F$, we consider the Legendre transformation of $F$ with respect to the sub-Lagrangian function $L(v) = \frac{1}{2} F(v,v)$, where $F(v,v)$ is the square of the Finsler norm of $v$. The Legendre transformation $\mathcal{L}_L$ maps $v \in \mathcal{D}$ to $p = \frac{\partial L}{\partial v}(v)$. Utilizing the definition of the Legendre transformation, we observe that $$p = \frac{\partial L}{\partial v}(v) = \frac{\partial}{\partial v}\left(\frac{1}{2}F(v,v)\right) = F(v,\cdot),$$ where $F(v,\cdot)$ denotes the differential of $F$ with respect to its first argument evaluated at $v$. Note that $F(v,\cdot)$ is a linear function on $\mathcal{D}_x$. Given a covector $p \in \mathcal{D}^*$, we find that $$F^*(p) = \sup_{v \in \mathcal{D}_x} \biggl\{\langle p,v\rangle - L(v)\biggl\},$$ where $\langle p,v\rangle$ represents the inner product between the covector $p$ and the vector $v$. Substituting the expression for $L(v)$ and employing the Legendre transformation $\mathcal{L}_L(v) = F(v,\cdot)$, we get $$F^*(p) = \sup_{v \in \mathcal{D}_x} \biggl\{\langle p,v\rangle - \frac{1}{2} F(v,v)\biggl\} = \sup_{v \in \mathcal{D}_x} \biggl\{\langle p,v\rangle - \frac{1}{2} |F(v,\cdot)|^2\biggl\}.$$ Since $|F(v,\cdot)|^2 = F(v,v)$, we can express the Finsler dual $F^*$ in terms of $F$ as $$F^*(p) = \sup_{v \in \mathcal{D}_x} \biggl\{\langle p,v\rangle - \frac{1}{2} F(v,v)\biggl\}.$$ Therefore, when we have a sub-Finsler metric $F$ on $\mathcal{D}$, the Finsler dual $F^*$ is a function on $\mathcal{D}^*$ defined by $$F^*(p)= \sup_{v \in \mathcal{D}_x} \biggl\{\langle p,v\rangle - F(v)\biggl\}, \quad\mathrm{for}\quad p \in \mathcal{D}^*_x,$$ where $x$ is the base point of $\mathcal{D}$. ## The Sub-Hamiltonian Function and Sub-Hamilton's Equations for Sub-Finsler Manifolds The sub-Hamiltonian function associated with a sub-Finsler metric $F$ given by $$\eta := \frac{1}{2}(F^*)^2.$$ Here, $F^*$ denotes the dual metric to $F$, defined by $$\label{F*} F^*(p) = \sup_{v \in \mathcal{D}_x, F(v)=1} \langle p,v\rangle,$$ where $p$ represents a momentum vector in $\mathcal{D}^*_x$ associated with the point $x$ in the manifold $M$, and $\langle \cdot,\cdot \rangle$ denotes the inner product induced by a Riemannian metric $g$. The sub-Finslerian metric defined by ([\[F\*\]](#F*){reference-type="ref" reference="F*"}) is known as the Legendre transform of $F$, i.e., satisfying the relationship in [\[Fpv\]](#Fpv){reference-type="eqref" reference="Fpv"}. It is worth noting that the sub-Hamiltonian function associated with a Finsler metric is not unique, and different choices of Hamiltonians may lead to different dynamics for the associated geodesics. The sub-Hamiltonian formalism is a method of constructing a sub-Finsler metric on a subbundle $\mathcal{D}$ by defining a sub-Hamiltonian function $\eta(x,p)$ on the subbundle $\mathcal{D}^*$, where $x$ denotes a point in $M$ and $p$ denotes a momentum vector in $\mathcal{D}^*$, as explained in the following remark: **Remark 1**. The sub-Finsler vector bundle, introduced in [@L.K1] and expanded upon in [@LA23], plays a pivotal role in formulating sub-Hamiltonians in sub-Finsler geometry. Consider the covector subbundle $(\mathcal{D}^*, \tau, M)$ with projection $\tau : \mathcal{D}^* \to M$, forming a rank-$k$ subbundle in the cotangent bundle of $T^*M$. 
The pullback bundle $\tau^*(\tau) = (\mathcal{D}^* \times \mathcal{D}^*, \mathrm{pr}_1, \mathcal{D}^*)$ is obtained by pulling back $\tau$ through itself and is denoted as the sub-Finsler bundle over $\mathcal{D}^*_x$. This bundle allows the introduction of $k$ orthonormal covector fields $X_1, X_2, \dots, X_k$ with respect to the induced Riemannian metric $g$. The sub-Hamiltonian $\eta$ induces a metric $g$ on the sub-Finsler bundle. In terms of this metric, the sub-Hamiltonian function $\eta$ can be expressed as a function of the components $p_i$. Specifically, $\eta(x, p) = \frac{1}{2} \sum_{i,j=1}^{n} {g}^{ij} p_i p_j$, where $g^{ij}$ is the inverse of the metric tensor $g_{ij}$ for the extended Finsler metric $\hat{F}$ on $TM$; see Remark [Remark 2](#EXT){reference-type="ref" reference="EXT"}. This defines a sub-Finsler metric on a subbundle $\mathcal{D}$ of $TM$ that is determined by a distribution on $M$. The sub-Finsler metric $F$ is then defined as follows: $$F_x(v) = \sup_{p \in \mathcal{D}^*_x} \{ \langle p,v\rangle - \eta(x,p) \},$$ where $v$ is a tangent vector at $x$. Now fixing a point $x \in M$, for any covector $p \in \mathcal{D}^*$, there exists a unique *sub-Hamiltonian vector field* on $\mathcal{D}^*$, denoted by $\vec{H}$, described by $$\label{SH} \vec{H}= \frac{\partial \eta}{\partial p_i} \frac{\partial}{\partial x^i} - \frac{\partial \eta}{\partial x^i} \frac{\partial}{\partial p_i},$$ where the partial derivatives are taken with respect to the local coordinates $(x^i, p_i)$ on $\mathcal{D}^* \subset T^*M$. **Definition 1**. The sub-Hamiltonian equations on $\mathcal{D}^*$ are then given by [\[HE\]]{#HE label="HE"} $$\begin{aligned} \dot{x}^i & = \frac{\partial \eta}{\partial p_i} = g^{ij} p_j, \label{HE1} \\ \dot{p}_i & = -\frac{\partial \eta}{\partial x^i} = -\frac{1}{2} \frac{\partial g^{jk}}{\partial x^i} p_j p_k, \label{HE2}\end{aligned}$$ where the dot denotes differentiation with respect to time. These equations express the fact that the sub-Hamiltonian vector field $\vec{H}$ preserves the sub-Finsler metric $F^*$ on $\mathcal{D}^*$. If the Hamiltonian is independent of the cotangent variables $p_i$, then the second equation above reduces to the Hamilton-Jacobi equation for the sub-Finsler manifold $(M, \mathcal{D}, F)$. **Remark 2**. We extended sub-Finsler metrics to full Finsler metrics using an orthogonal complement subbundle in [@L.K]; here we provide more details. Given a subbundle $\mathcal{D}$ of the tangent bundle $TM$, a direct complement $\mathcal{D}^\perp$ is a subbundle of $TM$ such that $TM = \mathcal{D} \oplus \mathcal{D}^\perp$, that is, at every point $x \in M$, $\mathcal{D}_x \cap \mathcal{D}_x^\perp = \{0\}$ and $\mathcal{D}_x + \mathcal{D}_x^\perp = T_xM$. One canonical way to obtain a direct complement to $\mathcal{D}$ is to use the notion of an orthogonal complement. Given a subbundle $\mathcal{D}$ of $TM$, we define the orthogonal complement bundle $\mathcal{D}^\perp$ as follows: $$\mathcal{D}^\perp_x = \{v \in T_xM :\langle v,w\rangle=0 \text{ for all } w \in \mathcal{D}_x\},$$ where $\langle \cdot,\cdot \rangle$ is the inner product induced by the Riemannian metric. It can be shown that $\mathcal{D}^\perp$ is a subbundle of $TM$ and satisfies the conditions for being a direct complement to $\mathcal{D}$. Moreover, it can be shown that any two direct complements to $\mathcal{D}$ are isomorphic bundles, so the orthogonal complement is unique up to bundle isomorphism.
Note that if $M$ is equipped with a sub-Finsler metric, then the metric induces a non-degenerate inner product on $\mathcal{D}$, so we can use this inner product to define the orthogonal complement. However, if $M$ is not equipped with a Riemannian metric, then the notion of an orthogonal complement may not be well-defined. So, to extend a given sub-Finsler metric $F$ on a subbundle $\mathcal{D}$ of $TM$ to a full Finsler metric on $TM$, one can use an orthogonal complement subbundle $\mathcal{D}^{\perp}$. This is a regular subbundle of $TM$ that is orthogonal to $\mathcal{D}$ with respect to the Riemannian metric $g_{ij}$. Locally, $\mathcal{D}^{\perp}$ can be written as: $$\mathcal{D}^{\perp} = \text{span}\{ X'_1,\ldots,X'_{n-k}\},$$ where $k$ is the rank of the subbundle $\mathcal{D}$ and $X'_1,\ldots,X'_{n-k}$ are local vector fields that form a basis for $\mathcal{D}^{\perp}$. Then, one can define a Finsler metric $\hat{F}$ on $TM$ by: $$\label{HF} \hat{F}(v) = \sqrt{ F^2(P(v)) + \widetilde{F}^2(P^{c}(v)) },$$ where $P$ is the projection onto $\mathcal{D}$, $P^{c}$ is the projection onto $\mathcal{D}^{\perp}$, and $\widetilde{F}$ is a Finsler metric on $\mathcal{D}^{\perp}$. This construction yields a full Finsler metric on $TM$ that extends the sub-Finsler metric $F$ on $\mathcal{D}$. Note that the Finsler metric $\widetilde{F}$ on $\mathcal{D}^{\perp}$ is not unique, so the choice of $\widetilde{F}$ is arbitrary. However, the resulting Finsler metric $\hat{F}$ on $TM$ is unique and independent of the choice of $\widetilde{F}$. To see this, suppose we have two choices of Finsler metrics $\widetilde{F}$ and $\widetilde{F}'$ on $\mathcal{D}^{\perp}$. Let $\hat{F}$ and $\hat{F}'$ be the corresponding extensions of $F$ to $TM$ using Equation [\[HF\]](#HF){reference-type="ref" reference="HF"}. Then for any $v \in TM$, we have $$\begin{aligned} \hat{F}^2(v) &= F^2(P(v)) + \widetilde{F}^2(P^c(v)) \\ \hat{F}'^2(v) &= F^2(P(v)) + \widetilde{F}'^2(P^c(v)).\end{aligned}$$ Subtracting these two equations, we obtain $$\hat{F}'^2(v) - \hat{F}^2(v) = \widetilde{F}'^2(P^c(v)) - \widetilde{F}^2(P^c(v)).$$ Since $v$ can be decomposed uniquely as $v = v_{\parallel} + v_{\perp}$ with $v_{\parallel} \in \mathcal{D}$ and $v_{\perp} \in \mathcal{D}^{\perp}$, we have $P^c(v) = v_{\perp}$, and the right-hand side of the above equation depends only on $v_{\perp}$. Since the choice of $\widetilde{F}$ on $\mathcal{D}^{\perp}$ is arbitrary, we can choose $\widetilde{F}$ and $\widetilde{F}'$ to be equal except on a single vector $v_{\perp}$, in which case $\widetilde{F}'^2(P^c(v)) - \widetilde{F}^2(P^c(v))$ will be nonzero only for that vector. Therefore, we have $\hat{F}'^2(v) - \hat{F}^2(v) \neq 0$ only for that vector, and hence $\hat{F} = \hat{F}'$. Therefore, we have shown that the resulting Finsler metric $\hat{F}$ on $TM$ is unique and independent of the choice of $\widetilde{F}$. Let us turn to define the normal and abnormal extremals: The projection $x(t)$ to $M$ is called a *normal extremal*. One can see that every sufficiently short subarc of the normal extremal $x(t)$ is a minimizer sub-Finslerian geodesic. This subarc is the unique minimizer joining its endpoints (see [@L.K1; @L3]). In the sub-Finslerian manifold, not all the sub-Finslerian geodesics are normal (contrary to the Finsler manifold). This is because the sub-Finslerian geodesics, which admit a minimizing geodesic, might not solve the sub-Hamiltonian equations. 
Those minimizers that are not normal extremals are called *singular* or *abnormal extremals*, (see for instance [@Mo02]). Even in the sub-Finslerian case, Pontryagin's maximum principle implies that every minimizer of the arc length of the horizontal curves is a normal or abnormal extremal. ## Non-Linear Connections on a sub-Finsler manifolds **Definition 2**. An $\mathcal{L}$-*connection* $\nabla$ on a sub-Finsler manifold is a generalized non-linear connection over the induced mapping $$\label{E-def} E:T^*M \to TM, \ \ \ E(\alpha(x))= \mathbf{i}(\mathcal{L}_{\eta}(\mathbf{i}^*(\alpha(x))))\in TM,$$ constructed by Legendre transformation $\mathcal{L}_{\eta}: \mathcal{D}^* \subset T^*M \to \mathcal{D} \subset TM$ by ([\[E-def\]](#E-def){reference-type="ref" reference="E-def"}), where $\mathbf{i}^*:T^*M \to \mathcal{D}^*$ is the adjoint mapping of $\mathbf{i}: \mathcal{D} \rightarrow TM$, i.e. for any $\alpha(x) \in \mathfrak{X}^*(M),$ $\mathbf{i}^*(\alpha(x))$ is determined by $$\langle X(x), \mathbf{i}^*(\alpha(x))\rangle = \langle \mathbf{i}(X(x)), \alpha(x) \rangle \ \text {for all} \ X(x) \in \mathfrak{X}(M),$$ such that $\langle v, \alpha \rangle := \alpha(v)$ for all $v \in \mathcal{D}, \alpha \in \mathcal{D}^*$. For more details about the settings of the $\mathcal{L}$- connection $\nabla$, we refer the reader to [@L.K]. Obviously, $E$ is a bundle mapping whose image set is precisely the subbundle $\mathcal{D}$ of $TM$ and whose kernel is the annihilator $\mathcal{D}^0$ of $\mathcal{D}$. Moreover, we recall the *Barthel non-linear connection* $\overline{\nabla}^B$ of the cotangent bundle as follows $$\overline{\nabla}^B_X\alpha (Y) = X(\alpha(Y))- \alpha (\nabla^B_X Y),$$ where the Berwald connection $\nabla^B$ on the tangent bundle was locally given by $$N_j^i= \frac{1}{2}\frac{\partial G^i}{\partial v^j};\quad G^i = g^{ij}\left ( \frac{\partial^2 L}{\partial v^j \partial x^k}v^k - \frac{\partial L}{\partial x^j } \right). \label{barthel}$$ The Barthel nonlinear connection plays the same role in the positivity homogeneous case as the Levi-Civita connection in Riemannian geometry, see [@K03]. **Definition 3**. A curve $\alpha:[0,1] \to T^*M$ is said to be $E$-*admissible* if $E(\alpha(t))=\dot{\sigma}(t)\ \forall t \in [0,1]$ such that $\pi_M:T^*M \to M$ is the natural cotangent bundle projection. An *auto-parallel* curve is the $E$-admissible curve with respect to $\mathcal{L}$-connection $\nabla$ if it satisfies $\nabla_{\alpha} \alpha(t) = 0$ for all $t \in [0,1]$. The *geodesic* of $\nabla$ is just the base curve $\gamma= \pi_M \circ \alpha$ of the auto-parallel curve. In coordinates, an auto-parallel curve $\alpha(t)=(x^i(t), p_i(t))$ satisfies the equations $$\dot{x}^i(t) = g^{ij} (x(t), p(t)) p_j(t), \qquad \dot{p}_i(t) = - \Gamma^{jk}_i(x(t), p(t)) p_j(t)p_k(t),$$ such that $g^{ij}$ and $\Gamma^{ik}_j$ are the local components of the contravariant tensor field of $TM\otimes TM \to M$ associated with the sub-Hamiltonian structure and the connection coefficients of $\nabla$, respectively. In fact, given a non-linear $\mathcal{L}$-connection $\nabla$ we can always introduce a smooth vector field $\Gamma^{\nabla}$ on $\mathcal{D}^*$, in addition, their integral curves are auto-parallel curves in relation to $\nabla$. 
In canonical coordinates, this vector field is given by $$\Gamma^{\nabla}(x, p)= g^{ij}(x, p)p_j \frac{\partial}{\partial x^i}- \Gamma^{ik}_j(x, p) p_ip_k\frac{\partial}{\partial p_j}.$$ In [@L.K], we proved that every geodesic of $\nabla$ is a normal extremal, and vice versa. More precisely, we have shown that the coordinate expression for the sub-Hamiltonian vector field (this is another form of ([\[SH\]](#SH){reference-type="ref" reference="SH"})) $\vec{H}$ equals: $$\vec{H}(x, p) = g^{ij}(x, p)p_j \frac{ \partial}{\partial x^i} - \frac{1}{2} \frac{\partial g^{ij}}{\partial x^k}(x, p) p_ip_j \frac{ \partial}{\partial p_k}.$$ Comparing the latter formula with the definition of $\Gamma^{\nabla}$ yields $\Gamma^{\nabla}(x, p)= \vec{H}(x, p)$. ## Variational approach to the length functional and its relation to sub-Hamiltonian equations on sub-Finsler manifolds We can consider a small variation $\psi(s,t)$ of the curve $\sigma(t)$ such that $\psi(s,0)$ and $\psi(s,1)$ are fixed at $x_0$ and $x_1$, respectively, and $\psi(0,t) = \sigma(t)$ for all $t\in[0,1]$. We can think of $\psi(s,t)$ as a one-parameter family of curves in the set of all curves joining $x_0$ and $x_1$, and we can consider the variation vector field $v(t) = \frac{\partial \psi}{\partial s}(0,t)$, which is tangent to the curve $\sigma(t)$. Then, we can define the directional derivative of the length functional $\ell$ along the variation vector field $v$ as $$\label{Var1} \mathbf{d}\ell(\sigma)\cdot v = \frac{d}{ds}\Big|_{s=0} \ell(\psi(s,\cdot)).$$ Note that $\ell(\psi(s,\cdot))$ is the length of the curve $\psi(s,\cdot)$, which starts at $x_0$ and ends at $x_1$. Therefore, $\frac{d}{ds}\Big|_{s=0} \ell(\psi(s,\cdot))$ is the rate of change of the length of the curve as we vary it along the vector field $v$. By the chain rule, we can write $$\frac{d}{ds}\Big|_{s=0} \ell(\psi(s,\cdot)) = \int_0^1 \frac{\partial \ell}{\partial x^a}(\sigma(t))\frac{\partial \psi^a}{\partial s}(0,t)dt,$$ where $\frac{\partial \ell}{\partial x^a}$ is the gradient of the length functional. Using the fact that $\psi(s,t)$ is a variation of $\sigma(t)$ with $\psi(0,t) = \sigma(t)$ and $v(t) = \frac{\partial \psi}{\partial s}(0,t)$, we can expand $\psi^a(s,t) = \sigma^a(t) + s\, v^a(t) + o(s)$ and express $\frac{\partial \psi^a}{\partial s}(0,t)$ in terms of the variation vector field $v$ as $$\frac{\partial \psi^a}{\partial s}(0,t) = \frac{\partial}{\partial s}\Big|_{s=0} \bigl( \sigma^a(t) + s\, v^a(t) + o(s) \bigr) = v^a(t).$$ Therefore, we obtain $$\frac{d}{ds}\Big|_{s=0} \ell(\psi(s,\cdot)) = \int_0^1 \frac{\partial \ell}{\partial x^a}(\sigma(t))v^a(t)dt = \mathbf{d}\ell(\sigma)\cdot v,$$ which gives the desired equation ([\[Var1\]](#Var1){reference-type="ref" reference="Var1"}). Let us clarify the relationship between the sub-Hamiltonian equations and the length functional. Given a sub-Finsler manifold $(M, \mathcal{D}, F)$, the sub-Hamiltonian equations on $M$ are given by $$\label{SHE} \frac{d}{dt}\left(\frac{\partial F}{\partial p_a}(\sigma(t))\right) = -\frac{\partial F}{\partial x^a}(\sigma(t)),$$ where $\sigma: [0,1] \rightarrow M$ is a piecewise smooth curve in $M$ with $\sigma(0) = x_0$ and $\sigma(1) = x_1$. On the other hand, the length functional on $M$ is defined as $$\ell(\sigma) = \int_0^1 F(\sigma(t), \dot{\sigma}(t))\,dt,$$ where $\sigma$ is a piecewise smooth curve in $M$ with $\sigma(0) = x_0$ and $\sigma(1) = x_1$.
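For instance (revisiting the Heisenberg-type example recalled in the Preliminaries, a sub-Riemannian special case included only as an illustration), for $\mathcal{D} = \mathrm{span}\{X_1, X_2\}$ on $\mathbb{R}^3$ with $X_1 = \partial_x - \frac{y}{2}\partial_z$ and $X_2 = \partial_y + \frac{x}{2}\partial_z$ declared orthonormal, the sub-Hamiltonian reads $$\eta(x, y, z, p_x, p_y, p_z) = \frac{1}{2}\Bigl[\bigl(p_x - \tfrac{y}{2}p_z\bigr)^{2} + \bigl(p_y + \tfrac{x}{2}p_z\bigr)^{2}\Bigr],$$ and the sub-Hamiltonian equations become $$\dot{x} = p_x - \tfrac{y}{2}p_z, \qquad \dot{y} = p_y + \tfrac{x}{2}p_z, \qquad \dot{z} = \tfrac{1}{2}\bigl(x\dot{y} - y\dot{x}\bigr), \qquad \dot{p}_x = -\tfrac{p_z}{2}\,\dot{y}, \qquad \dot{p}_y = \tfrac{p_z}{2}\,\dot{x}, \qquad \dot{p}_z = 0.$$ Hence $p_z$ is constant along normal extremals, and their projections to the $(x, y)$-plane are straight lines when $p_z = 0$ and arcs of circles when $p_z \neq 0$.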
It is a well-known fact that a curve $\sigma$ is a solution to the sub-Hamiltonian equations if and only if it is a critical point of the length functional $\ell$. In other words, if $\sigma$ satisfies the sub-Hamiltonian equations, then $\mathbf{d}\ell(\sigma) = 0$, and conversely, if $\sigma$ is a critical point of $\ell$, then it satisfies the sub-Hamiltonian equations. **Proposition 1**. *A piecewise smooth curve $\sigma:[0,1]\rightarrow M$ joining $\sigma(0)=x_0$ with $\sigma(1)=x_1$ in $M$ is a solution to the sub-Hamiltonian equations if and only if it is a critical point of the length functional $\ell$. That is, if and only if $\mathbf{d}\ell(\sigma) = 0$.* *Proof.* We will begin by proving the first direction: Assume that $\sigma$ satisfies the sub-Hamiltonian equations. Then, we have $$\frac{d}{dt}\left(\frac{\partial F}{\partial p_a}(\sigma(t))\right) = -\frac{\partial F}{\partial x^a}(\sigma(t)),$$ for all $a=1,\ldots,m$ and $t\in[0,1]$. Note that $\frac{\partial F}{\partial p_a}$ is the conjugate momentum of $x^a$, and we can write the sub-Finsler Lagrangian as $$L(x,\dot{x}) = F(x,\dot{x})\sqrt{\det(g_{ij}(x))},$$ where $g_{ij}(x) = \frac{\partial^2 F^2}{\partial \dot{x}^i \partial \dot{x}^j}(x,\dot{x})$ is the sub-Finsler metric tensor. Then, the length functional can be written as $$\ell(\sigma) = \int_0^1 L(\sigma(t),\dot{\sigma}(t)),dt.$$ Using the Euler-Lagrange equation for the Lagrangian $L$, we have $$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{x}^a}(\sigma(t),\dot{\sigma}(t))\right) - \frac{\partial L}{\partial x^a}(\sigma(t),\dot{\sigma}(t)) = 0,$$ for all $a=1,\ldots,m$ and $t\in[0,1]$. Since $L$ depends only on $\dot{x}$ and not on $x$ explicitly, we can write this as $$\frac{d}{dt}\left(\frac{\partial F}{\partial \dot{x}^a}(\sigma(t))\sqrt{\det(g_{ij}(\sigma(t)))}\right) - \frac{\partial F}{\partial x^a}(\sigma(t))\sqrt{\det(g_{ij}(\sigma(t)))} = 0,$$ for all $a=1,\ldots,m$ and $t\in[0,1]$. Using the chain rule and the fact that $\sigma$ is piecewise smooth, we can write this as $$\frac{d}{dt}\left(\frac{\partial F}{\partial p_a}(\sigma(t))\right) - \frac{\partial F}{\partial x^a}(\sigma(t)) = 0,$$ for all $a=1,\ldots,m$ and $t\in[0,1]$. This is exactly the condition for $\sigma$ to be a critical point of $\ell$, i.e., $\mathbf{d}\ell(\sigma) = 0$. Now, let us proceed to prove the second direction: Assume that $\sigma$ is a critical point of $\ell$, i.e., $\mathbf{d}\ell(\sigma) = 0$. Then, for any smooth variation $\delta\sigma:[0,1]\rightarrow TM$ with $\delta\sigma(0) = \delta\sigma(1) = 0$, we have $$0 = \mathbf{d}\ell(\sigma)(\delta\sigma) = \int_0^1\left\langle\frac{\partial F}{\partial x^a}(\sigma(t)),\delta x^a(t)\right\rangle dt,$$ where $\delta x^a(t) = \frac{d}{ds}\bigg|_{s=0}x^a(\sigma(t)+s\delta\sigma(t))$ is the variation of the coordinates $x^a$ induced by $\delta\sigma$. Note that we have used the fact that $\delta\sigma(0) = \delta\sigma(1) = 0$ to get rid of boundary terms. Since $\delta\sigma$ is arbitrary, this implies that $$\frac{\partial F}{\partial x^a}(\sigma(t)) = 0,$$ for all $a=1,\ldots,m$ and $t\in[0,1]$. Using the sub-Hamiltonian equations, we can write this as $$\frac{d}{dt}\left(\frac{\partial F}{\partial p_a}(\sigma(t))\right) = 0,$$ for all $a=1,\ldots,m$ and $t\in[0,1]$. This implies that $\frac{\partial F}{\partial p_a}$ is constant along $\sigma$. Since $\sigma$ is piecewise smooth, we can choose a partition $0=t_0<t_1<\cdots<t_n=1$ such that $\sigma$ is smooth on each subinterval $[t_{i-1},t_i]$. 
Let $c_a$ be the constant value of $\frac{\partial F}{\partial p_a}$ on $\sigma$. Then, for each $i=1,\ldots,n$, we have $$\frac{d}{dt}\left(\frac{\partial F}{\partial p_a}(\sigma(t))\right) = 0,$$ for all $a=1,\ldots,m$ and $t\in[t_{i-1},t_i]$. This implies that $$\frac{\partial F}{\partial p_a}(\sigma(t)) = c_a,$$ for all $a=1,\ldots,m$ and $t\in[t_{i-1},t_i]$. Since $\frac{\partial F}{\partial p_a}$ is the conjugate momentum of $x^a$, this implies that $\sigma$ satisfies the sub-Hamiltonian equations on each subinterval $[t_{i-1},t_i]$. Therefore, $\sigma$ satisfies the sub-Hamiltonian equations on the whole interval $[0,1]$, which completes the proof of the second direction. ◻

**Corollary 1**. *If $\sigma:[0,1]\rightarrow M$ is a piecewise smooth horizontal curve that minimizes the length functional $\ell$ between two points $x_0$ and $x_1$ on a sub-Finsler manifold $(M, \mathcal{D}, F)$, then $\sigma$ is a smooth geodesic between $x_0$ and $x_1$. Conversely, if $\sigma$ is a smooth geodesic between $x_0$ and $x_1$, then its length $\ell(\sigma)$ is locally minimized.*

*Proof.* The proof of this corollary follows directly from Proposition [Proposition 1](#LFH){reference-type="ref" reference="LFH"}. ◻

Proposition [Proposition 1](#LFH){reference-type="ref" reference="LFH"} establishes the significance of the results in the context of sub-Hamiltonian equations and curve optimization on a sub-Finsler manifold. The corollary highlights the connection between curve optimization, geodesics, and the length functional on sub-Finsler manifolds. Collectively, these results provide deep insights into the geometric behavior of curves on sub-Finsler manifolds, linking the sub-Hamiltonian equations, length minimization, and the concept of geodesics in this context.

# Nonholonomic sub-Finslerian structure

A sub-Finslerian structure is a generalization of a Finslerian structure, where the metric on the tangent space at each point is only required to be positive-definite on a certain subbundle of tangent vectors. A *nonholonomic sub-Finslerian structure* is a triple $(M, \mathcal{D}, F)$ where $M$ is a smooth manifold of dimension $n$ and $\mathcal{D}$ is a non-integrable distribution of rank $k < n$ on $M$, which means that $\mathcal{D}$ is not closed under the Lie bracket of its sections (see the short symbolic check below). This property leads to the nonholonomicity of the structure and has important implications for the geometry and dynamics of the system. The regularity condition on $\mathcal{D}$ means that it can be locally generated by smooth vector fields, and the nonholonomic condition means that it cannot be integrated to a smooth submanifold of $M$. The sub-Finslerian metric $F$ is a positive-definite inner product on the tangent space of $\mathcal{D}$ at each point of $M$. It is often expressed as a norm that satisfies the triangle inequality but does not necessarily have the homogeneity property of a norm. The metric $F$ induces a distance function on $M$, known as the sub-Riemannian distance or Carnot-Carathéodory distance, which is a natural generalization of the Riemannian distance. Mechanically, sub-Riemannian manifolds $(M, \mathcal{D}, g)$ and their generalization, sub-Finslerian manifolds $(M, \mathcal{D}, F)$, are classified as configuration spaces [@L.K2]. Nonholonomic sub-Finslerian structures arise in the study of control theory and robotics, where they model the motion of nonholonomic systems, i.e., systems that cannot achieve arbitrary infinitesimal motions despite being subject to arbitrarily small forces.
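For a concrete feel of what non-integrability means, the following small symbolic sketch checks, for an assumed rank-two Heisenberg-type frame on $\mathbb{R}^3$ (an illustrative choice, not taken from the text), that the Lie bracket of the two spanning fields leaves the distribution; by the Frobenius theorem such a $\mathcal{D}$ admits no integral surfaces and is therefore nonholonomic.

```python
# Illustrative check (assumption: a classical Heisenberg-type frame, not taken
# from the text) that a rank-2 distribution D = span{X1, X2} on R^3 is
# non-integrable: the Lie bracket [X1, X2] does not lie in D, so by Frobenius'
# theorem D has no integral surfaces, i.e. the structure is nonholonomic.
import sympy as sp

x, y, z = sp.symbols("x y z")
coords = (x, y, z)

# vector fields as coefficient tuples w.r.t. (d/dx, d/dy, d/dz)
X1 = (sp.Integer(1), sp.Integer(0), -y / 2)
X2 = (sp.Integer(0), sp.Integer(1), x / 2)

def lie_bracket(V, W):
    """[V, W]^i = V^j dW^i/dx^j - W^j dV^i/dx^j."""
    return tuple(
        sum(V[j] * sp.diff(W[i], coords[j]) - W[j] * sp.diff(V[i], coords[j])
            for j in range(3))
        for i in range(3)
    )

B = lie_bracket(X1, X2)
print("[X1, X2] =", B)                                    # (0, 0, 1): the d/dz direction

# [X1, X2] lies in span{X1, X2} iff the matrix (X1 | X2 | [X1, X2]) is singular.
M = sp.Matrix([X1, X2, B]).T
print("det(X1, X2, [X1, X2]) =", sp.simplify(M.det()))    # nonzero => non-integrable
```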
The motivation for this generalization comes from the need to provide a framework that captures the complexities of motion in such systems beyond what sub-Riemannian geometry alone can achieve. The study of these structures involves geometric methods, such as the theory of connections and curvature, and leads to interesting mathematical problems. This generalization not only extends the applicability of the theory to a wider class of problems but also paves the way for new insights into the geometric mechanics of nonholonomic systems. ## Nonholonomic Free Particle Motion under a Non-Linear Connection and Projection Operators We have the projection operator $P^* : T^*M \to \mathcal{D}^0$ that projects any covector $\alpha \in T^*M$ onto its horizontal component with respect to the non-linear connection induced by the distribution $\mathcal{D}$. More precisely, for any $Y \in TM$, we define $P(Y)$ to be the projection of $Y$ onto $\mathcal{D}$, and then $P^*(\alpha)(Y) = \alpha(Y - P(Y))$. Next, we have the complement projection $(P^*)^c : T^*M \to (\mathcal{D}^\bot)^0$, which projects any covector $\alpha \in T^*M$ onto its vertical component with respect to the non-linear connection induced by the distribution $\mathcal{D}$. More precisely, for any $Y \in TM$, we define $P^\bot(Y)$ to be the projection of $Y$ onto $\mathcal{D}^\bot$, and then $(P^*)^c(\alpha)(Y) = \alpha(P^\bot(Y))$. Now, we consider a nonholonomic free particle moving along a piecewise smooth horizontal curve $\sigma : [0,1] \to M$. Let $\overline{\nabla}^B$ be a Barthel non-linear connection, (see [@L.K; @La01]), and the condition $P^*(\overline{\nabla}^B_{\dot{\sigma}(t)} \dot{\sigma}(t)) = 0$ expresses the fact that the velocity vector $\dot{\sigma}(t)$ is constrained to be horizontal, while the constraint condition $\dot{\sigma}(t) \in \mathcal{D}^0$ expresses the fact that the velocity vector lies in the distribution $\mathcal{D}$. Using the fact that $T^*M$ can be decomposed into its horizontal and vertical components with respect to the non-linear connection induced by the distribution $\mathcal{D}$, we can express any covector $\alpha \in T^*M$ as $\alpha = P^*(\alpha) + (P^*)^c(\alpha)$. Then, the constraint condition $\dot{\sigma}(t) \in \mathcal{D}^0$ can be written as $(P^*)^c(\mathrm{d}\sigma/\mathrm{d}t) = 0$. Using the above decomposition of $\alpha$, we can rewrite the condition $P^*(\overline{\nabla}^B_{\dot{\sigma}(t)} \dot{\sigma}(t)) = 0$ as $P^*(\overline{\nabla}^B_{\dot{\sigma}(t)} \dot{\sigma}(t)) = P^*(\mathrm{d}\dot{\sigma}/\mathrm{d}t) = \mathrm{d}(P^*(\dot{\sigma}))/\mathrm{d}t = 0$, where we have used the fact that $P^*(\mathrm{d}\dot{\sigma}/\mathrm{d}t)$ is the derivative of the horizontal component of $\dot{\sigma}$ with respect to time, and hence is zero if $\dot{\sigma}$ is constrained to be horizontal. Therefore, the conditions $P^*(\overline{\nabla}^B_{\dot{\sigma}(t)} \dot{\sigma}(t)) = 0$ and $(P^*)^c(\mathrm{d}\sigma/\mathrm{d}t) = 0$ together express the fact that the velocity vector $\dot{\sigma}(t)$ of the nonholonomic free particle is constrained to be horizontal and lie in the distribution $\mathcal{D}$, respectively. Since $T^*M$ is identified with $TM$ via a Riemannian metric $g$, we have a natural isomorphism between $(\mathcal{D}^{\bot})^0$ and $\mathcal{D}^0$ given by the orthogonal projection. 
In particular, we have a direct sum decomposition of the cotangent bundle $T^*M$ as $$T^*M \cong (\mathcal{D}^{\bot})^0 \oplus \mathcal{D}^0.$$ Note that any covector $\alpha \in T^*M$ can be uniquely decomposed as $\alpha = (P^*)^c(\alpha) + P^*(\alpha)$. We can define a new *non-linear connection* $\overline{\nabla}$ on $(M, \mathcal{D}, F)$ according to $$\label{AAlA} \overline{\nabla}_X (P^* (\alpha))(Y) =\overline{\nabla}^B_X (P^* (\alpha))(Y) +\overline{\nabla}^B_X ((P^*)^c (\alpha))(Y)$$ for all $X \in \mathfrak{X}(M)$ and $\alpha \in \mathfrak{X}^*(M)$. We restrict this connection to $\mathcal{D}^0$ and the equations of motion of the nonholonomic free particle can be re-written as $\overline{\nabla}_{\dot{\sigma}(t)} {\dot{\sigma}(t)}=0$, together with the initial velocity taken in $\mathcal{D}^0$ (see [@La01; @Lewis]). Given a nonholonomic sub-Finsler structure $(M, \mathcal{D}, F)$ one can always construct a normal and $\mathcal{D}$-adapted $\mathcal{L}$-connection [@L.K Proposition 16]. Furthermore, we can construct a generalized non-linear connection over the vector bundle $\mathbf{i}: \mathcal{D} \to TM$, we will set $X \in \Gamma (\mathcal{D})$ with $\mathcal{L}_{\eta}(\mathbf{i} \circ X) \in \mathfrak{X}^*(M)$. So, attached to $(M, \mathcal{D}, F)$ there is a non-linear connection $\nabla^{H}: \Gamma (\mathcal{D}) \times \Gamma (\mathcal{D}^0) \to \Gamma (\mathcal{D}^0)$ called the *nonholonomic connection* over the adjoint mapping $\mathbf{i}: \mathcal{D} \rightarrow TM$ on natural projection $\tau: \mathcal{D}^0 \to M$ given by $$\nabla^{H}_X \alpha(Y) = P^*(\overline{\nabla}^B_{X}\alpha(Y)).$$ Moreover, there is no doubt this indeed determines a non-linear connection, namely, $$\nabla^{H}_{X}\alpha(Y)= \overline{\nabla}_X (P^* (\alpha))(Y),$$ such that $\overline{\nabla}$ is the non-linear connection given in ([\[AAlA\]](#AAlA){reference-type="ref" reference="AAlA"}), for all $X \in \Gamma(\mathcal{D})$ and $\alpha \in \Gamma(\mathcal{D}^0)$. In the nonholonomic setting, the horizontal curves are $\hat{\sigma}$ in $\mathcal{D}$ that are extensions of curves in $M$, i.e. $\hat{\sigma} (t)= \dot{\sigma}(t)$ for some curve $\sigma$ in $M$. **Definition 4**. Let $(M, \mathcal{D}, F)$ be a nonholonomic sub-Finsler structure. A *nonholonomic bracket* $$[\cdot,\cdot] : \Gamma (\pi_{\mathcal{D}})\otimes \Gamma (\tau) \to \Gamma (\tau)$$ is defined as $[X, \alpha]= (P^*)^c[X, \alpha]$ for all $X \in \Gamma (\pi_{\mathcal{D}})$, $\alpha \in \Gamma (\tau)$, $\pi_{\mathcal{D}} : \mathcal{D} \to M$ and $\tau: \mathcal{D}^* \to M$. This Lie bracket satisfies all the regular properties of the Lie bracket with the exception of the Jacobi identity. It may happen that the nonholonomic bracket $[X, \alpha]\notin \Gamma (\tau)$ because $\mathcal{D}^*$ is nonintegrable. Now, we can formally define the torsion operator $$T (X,\alpha):=\nabla^{H}_{X}\alpha - \nabla^{H}_{\alpha}X- P^*[X, \alpha].$$ In this setting, due to the symmetry of the non-linear connection $\nabla^{H}_{X}\alpha=\nabla^{H}_{\alpha}X$, the torsion $T (X, \alpha) = 0$ for all $X \in \Gamma(\mathcal{D})$ and $\alpha \in \Gamma(\mathcal{D}^0)$. Moreover, [@L.K1 Lemma 5], implies that the non-linear connection $\nabla^{H}$ preserves the sub-Finsler metric $F$ on $\mathcal{D}$, i.e. $\nabla^{H}_{X}F=0$ for all $X \in \Gamma(\mathcal{D})$. 
Therefore, there exists a unique conservative homogeneous nonlinear connection $\nabla^{H}$ with zero torsion and we can write the equations of motion for the given nonholonomic problem as $\nabla^{H}_{\dot{\sigma}(t)}\dot{\sigma}(t) = 0$, in such a way that $\sigma$ is a curve in $M$ tangent to $\mathcal{D}$. There is a close relationship between nonholonomic constraints and the controllability of non-linear systems. More precisely, there is a beautiful link between optimal control of nonholonomic systems and sub-Finsler geometry. In the case of a large class of physically interesting systems, the optimal control problem is reduced to finding geodesics with respect to the sub-Finslerian metric. The geometry of such geodesic flows is exceptionally rich and provides guidance for the design of control laws, for more information see Montgomery [@Mo02]. We have seen in Section 2 that for each point $x \in M$, we have the following distribution of rank $k$ $$\mathcal{D}= \textrm{span} \{X_{1}, \ldots, X_{k}\}, \qquad X_i(x) \in T_xM,$$ such that for any control function $u(t) = (u_1(t), \ldots, u_k(t))\in \mathbb{R}^k$ the control system $$\dot{x}=\sum_{i=1}^{k} u_iX_{i}(x), \qquad x \in M,$$ is called a *nonholonomic control system* or *driftless control system* in the quantum mechanical sense, see [@L.K2].

## Results

The subsequent findings enhance comprehension of nonholonomic sub-Finslerian structures and their relevance in geometric mechanics. These insights offer essential tools for addressing and resolving issues concerning restricted movement within mathematical and physical domains. Specifically, these results shed light on the behavior of nonholonomic structures and their utility in analyzing constrained motion, particularly within the realm of geometric mechanics.

**Remark 3**. We call the distribution $\mathcal{D}$ *geodesically invariant* if for every geodesic $\sigma: [0, 1] \to M$ of $\overline{\nabla}^B$, $\dot{\sigma}(0) \in \mathcal{D}_{{\sigma}(0)}$ implies that $\dot{\sigma}(t) \in \mathcal{D}_{{\sigma}(t)}$ for every $t \in (0,1]$. One can prove the following: let $(M, \mathcal{D}, F)$ be a sub-Finslerian manifold such that for any $x \in M$, $\mathcal{D}_x$ is a vector subspace of $T_xM$. Then the distribution $\mathcal{D}$ is geodesically invariant if and only if, for any $x \in M$ and any $v \in \mathcal{D}_x$, the Jacobi field along any geodesic $\gamma(t)$ with initial conditions $\gamma(0)=x$ and $\dot{\gamma}(0)=v$ is also in $\mathcal{D}$. In other words, if the Jacobi fields along any geodesic with initial conditions in $\mathcal{D}$ remain in $\mathcal{D}$, then $\mathcal{D}$ is geodesically invariant. Conversely, if $\mathcal{D}$ is geodesically invariant, then any Jacobi field along a geodesic with initial conditions in $\mathcal{D}$ must also remain in $\mathcal{D}$. We leave the proof of this statement for future work.

The following Proposition implies, in particular, that $\mathcal{D}$ is geodesically invariant with respect to Barthel's non-linear connection $\overline{\nabla}^B$.

**Proposition 2**.
- *For each $X \in \mathfrak{X}(M)$ and $\alpha \in \Gamma(\mathcal{D}^0)$, $\overline{\nabla}_X (P^*(\alpha))(Y)\in \Gamma(\mathcal{D}^0)$.* - *For each $X \in \mathfrak{X}(M)$ and $\alpha \in \Gamma(\mathcal{D}^0)$, $\overline{\nabla}^B_X ((P^*)^c(\alpha))(Y) \in \Gamma(\mathcal{D}^0)$.* - *For each $X \in \mathfrak{X}(M)$ and $\alpha \in \Gamma(\mathcal{D}^\bot)^0$, $\overline{\nabla}^B_X ((P^*)^c(\alpha))(Y) \in \Gamma(\mathcal{D}^\bot)^0$.* *Proof.* - Let $X \in \mathfrak{X}(M)$ and $\alpha \in \Gamma(\mathcal{D}^0)$. Then, by the definition of the pullback connection, given in ([\[AAlA\]](#AAlA){reference-type="ref" reference="AAlA"}), and the Leibniz rule, we have $$\begin{aligned} \overline{\nabla}_X(P^*(\alpha))(Y) &= X(P^*(\alpha)(Y)) - P^*(\alpha)(\overline{\nabla}_X(Y)) \\ &= X(\alpha(P(Y))) - \alpha(\overline{\nabla}_X(Y)) \\ &= \alpha(X(P(Y))) - \alpha(\overline{\nabla}_X(Y)) \\ &= \alpha(P(\mathcal{L}_X(Y))) - \alpha(\overline{\nabla}_X(Y)) \\ &= P(\alpha(\mathcal{L}_X(Y))) - \alpha(\overline{\nabla}_X(Y)) \\ &= P(\mathcal{L}_X(\alpha(Y))) - \alpha(\overline{\nabla}_X(Y)) \\ &= P(\mathcal{L}_X(P^*(\alpha)(Y))) - \alpha(\overline{\nabla}_X(Y)) \\ &= P(\overline{\nabla}_X(P^*(\alpha))(Y)) - \alpha(\overline{\nabla}_X(Y)).\end{aligned}$$ Since $P(\overline{\nabla}_X(P^*(\alpha))(Y))$ and $\alpha(\overline{\nabla}_X(Y))$ both lie in $\Gamma(\mathcal{D}^0)$, it follows that $\overline{\nabla}_X(P^*(\alpha))(Y)$ also lies in $\Gamma(\mathcal{D}^0)$. - Using the definition of the connection $\overline{\nabla}^B$, we have: $$\begin{aligned} \overline{\nabla}^B_X ((P^*)^c(\alpha))(Y) &= X((P^*)^c(\alpha)(Y)) - (P^*)^c(\alpha)(\nabla_X^B Y) \\ &\quad + (P^\bot)^c(\alpha)(\nabla_X^B Y).\end{aligned}$$ Now, let us analyze each term on the right-hand side individually: First, consider $X((P^*)^c(\alpha)(Y))$. Since $(P^*)^c(\alpha)(Y)$ is a section of $\mathcal{D}^0$ and $X$ is a vector field on $M$, $X((P^*)^c(\alpha)(Y))$ is a section of $\mathcal{D}^0$. Next, we have $-(P^*)^c(\alpha)(\nabla_X^B Y)$. Here, $(P^*)^c(\alpha)$ is a bundle map from $\mathcal{E}$ to $\mathcal{D}^0$, so $(P^*)^c(\alpha)(\nabla_X^B Y)$ is a section of $\mathcal{D}^0$. The negative sign in front ensures that the result remains in $\mathcal{D}^0$. Finally, we consider $(P^\bot)^c(\alpha)(\nabla_X^B Y)$. Since $(P^\bot)^c(\alpha)$ is a bundle map from $\mathcal{E}$ to the orthogonal complement of $\mathcal{D}^0$, $(P^\bot)^c(\alpha)(\nabla_X^B Y)$ is a section of $\Gamma(\mathcal{D}^\bot)$. However, we need it to be a section of $\Gamma(\mathcal{D}^0)$. To ensure that $(P^\bot)^c(\alpha)(\nabla_X^B Y)$ lies in $\Gamma(\mathcal{D)^0}$, we can use the projection operator $P$ to project it back onto $\mathcal{D}^0$. This projection ensures that the final result remains within $\Gamma(\mathcal{D}^0)$. Combining these results, we see that $\overline{\nabla}^B_X ((P^*)^c(\alpha))(Y)$ is a section of $\Gamma(\mathcal{D}^0)$, as desired. 
- Using the definition of the connection $\overline{\nabla}^B$, we have $$\begin{aligned} \overline{\nabla}^B_X ((P^*)^c(\alpha))(Y) &= X[(P^*)^c(\alpha)(Y)] - (P^*)^c(\alpha)(\overline{\nabla}_XY) + (P^*)^c(\overline{\nabla}^B_X\alpha)(Y)\\ &= X[(P^*)^c(\alpha)(Y)] - (P^*)^c(\alpha)(\overline{\nabla}_XY) + (P^*)^c((\overline{\nabla}_X\alpha)^\top)(Y)\\ &= X[(P^*)^c(\alpha)(Y)] - (P^*)^c(\alpha)(\nabla_X^B Y) + (P^*)^c((\overline{\nabla}_X\alpha)^\top)(Y)\end{aligned}$$ where in the last step we used the fact that $$(P^*)^c(\alpha)(\nabla_X^B Y) = -(P^*)^c(\alpha)(\overline{\nabla}_XY),$$ which follows from the definition of the codifferential operator and the fact that $(P^*)^c = -(P^*)^c$. Now we need to show that the three terms on the right-hand side of this expression lie in $\Gamma(\mathcal{D}^\bot)^0$. We will do this term by term. First, note that $(P^*)^c(\alpha)(Y) \in \Gamma(\mathcal{D}^\bot)^0$ since $(P^*)^c(\alpha)$ maps $\Gamma(\mathcal{D}^\bot)$ to itself and $Y \in \Gamma(\mathcal{D}^\bot)^0$. Next, we need to show that $(P^*)^c(\alpha)(\nabla_X^B Y) \in \Gamma(\mathcal{D}^\bot)^0$. Note that $$(P^*)^c(\alpha)(\nabla_X^B Y) = - (P^*)^c(\alpha)(\overline{\nabla}_XY),$$ so it suffices to show that $(P^*)^c(\alpha)(\overline{\nabla}_XY) \in \Gamma(\mathcal{D}^\bot)^0$. To see this, note that $\overline{\nabla}_XY \in \Gamma(\mathcal{D}^\bot)^0$ since $X$ and $Y$ are both sections of $\mathcal{D}^\bot$, and that $(P^*)^c(\alpha)$ maps $\Gamma(\mathcal{D}^\bot)^0$ to itself. Finally, we need to show that $(P^*)^c((\overline{\nabla}_X\alpha)^\top)(Y) \in \Gamma(\mathcal{D}^\bot)^0$. To see this, note that $(\overline{\nabla}_X\alpha)^\top$ is a tensor of type $(1,1)$ that maps vectors tangent to $M$ to vectors tangent to $M$, so $(P^*)^c((\overline{\nabla}_X\alpha)^\top)(Y)$ is a section of $\mathcal{D}^\bot$. Moreover, $(P^*)^c((\overline{\nabla}_X\alpha)^\top)$ maps $\Gamma(\mathcal{D}^\bot)^0$ to itself since $(\overline{\nabla}_X\alpha)^\top$ maps $\Gamma(TM)$ to itself and $(P^*)^c$ maps $\Gamma(\mathcal{D}^\bot)$ to itself. Therefore, we have shown that $\overline{\nabla}_X\alpha \in \Gamma(\mathcal{D}^\bot)^0$, which implies that $\alpha$ is a harmonic one-form with respect to the induced metric on $\partial M$. To summarize, we showed that if $\alpha$ is a closed one-form on $M$ such that $\alpha|_{\partial M}=0$, then $\alpha$ is a harmonic one-form with respect to the induced metric on $\partial M$.  ◻ In the following, we shall present the nonholonomic sub-Finslerian structure results. To begin, we define coordinate independent conditions for the motion of a free mechanical system subjected to linear nonholonomic constraints to be normal extremal with respect to the connected sub-Finslerian manifold, and vice versa. Then, we address the problem of characterizing the normal and abnormal extremals that validate both nonholonomic and Vakonomic equations for a free particle subjected to certain kinematic constraints. Let $(M, \mathcal{D}, F)$ be a nonholonomic sub-Finslerian structure and $\sigma:[0,1] \to M$ be a piecewise smooth horizontal curve tangent to $\mathcal{D}$, then $\sigma$ is said to be a normal extremal if there exists $E$-admissible curve $\alpha$ with base curve $\sigma$ that is auto-parallel with respect to a normal $\mathcal{L}$-connection (Definition [Definition 3](#GEO){reference-type="ref" reference="GEO"}). 
The curve $\sigma$ is said to be an abnormal extremal if there exists $\gamma \in \Gamma( \mathcal{D}^0)$ along $\sigma$ such that $\nabla_\alpha \gamma (t)= 0$ for all $t \in [0,1]$, with $\alpha$ an $E$-admissible curve with base curve $\sigma$.

**Remark 4**. Cortés et al. [@CLM] made a comparison between the solutions of the nonholonomic mechanical problem and the solutions of the Vakonomic dynamical problem for the general Lagrangian system. The Vakonomic dynamical problem, associated with a free particle with linear nonholonomic constraints, consists of finding normal extremals with respect to the sub-Finslerian structure $(M, \mathcal{D}, F)$. It is an interesting comparison because the equations of motion for the mechanical problem are derived by means of d'Alembert's principle, while the normal extremals are derived from a variational principle. Our next results provide an alternative, coordinate-free approach to the results of Cortés et al. for the free particle case in the sub-Finslerian setting.

**Definition 5**. Let $(M, \mathcal{D}, F)$ be a nonholonomic sub-Finslerian structure. One can establish new tensorial operators according to the following: $$\begin{aligned} T^B&: \Gamma(\mathcal{D})\otimes \Gamma(\mathcal{D}^*) \to \Gamma(\mathcal{D}^0), \quad (X, \alpha) \mapsto P^*(\overline{\nabla}_X^B\alpha); \\ T &: \Gamma(\mathcal{D})\otimes \Gamma(\mathcal{D}^0) \to \Gamma(\mathcal{D}^{\bot})^0, \quad (X, \gamma) \mapsto (P^*)^c(\delta_X\gamma); \end{aligned}$$ such that $$\delta: \Gamma(\mathcal{D}) \times \Gamma(\mathcal{D}^0) \to \mathfrak{X}^*(M), \quad (X, \gamma)\mapsto \delta_X \gamma=i_Xd\gamma.$$ In addition, these tensorial operators have the following properties: - $T^B$ and $T$ are $\mathcal{F}(M)$-bilinear in their independent variables; - The behavior of $T^B$ and $T$ can be identified pointwise; - $T_x^B(X, \alpha)$ and $T_x(X, \gamma)$ have a clear and unequivocal meaning for all $X \in \mathcal{D}, \alpha \in \mathcal{D}^*$ and $\gamma \in \mathcal{D}^0$.

In the following, we show the relation between the operator $T$ and the curvature of the distribution $\mathcal{D}$ using the following condition: Suppose $X \in \mathcal{D}$ and $\alpha \in \mathcal{D}^*$; then one has $$\langle T(X, \gamma), \alpha \rangle = \langle \delta_X\gamma, \alpha \rangle= - \langle \gamma, [X,\alpha] \rangle,$$ for any $\gamma \in \Gamma(\mathcal{D}^0).$ Therefore, $T$ is trivial if and only if $\mathcal{D}$ is involutive.

**Definition 6**. Let $\nabla^T$ denote the non-linear connection over $i : \mathcal{D} \to TM$ on $\mathcal{D}^0$ defined by the formula $$\nabla^T_X \gamma = P^*(\delta_X\gamma),$$ such that $X \in \Gamma(\mathcal{D})$ and $\gamma \in \Gamma(\mathcal{D}^0).$

**Proposition 3**. *Let $(M, \mathcal{D}, F)$ be a nonholonomic sub-Finslerian structure, assume that $\sigma: [0,1] \to M$ is a horizontal curve on $\mathcal{D}$ and let $\nabla$ be a $\mathcal{D}$-adapted $\mathcal{L}$-connection. Then, the following properties are satisfied:* - *If $p_0 \in \mathcal{D}^*_{\sigma(0)}$ is a given initial point, then $p(t)=\tilde{p}(t)$ for each $t \in [0,1]$ if and only if $T^B(\dot{\sigma}(t), \tilde{p}(t))= 0$, such that $p(t)$ and $\tilde{p}(t)$ are parallel transported curves along $\sigma$ w.r.t.
$\overline{\nabla}^B$ and $\nabla^{H}$, respectively.* - *If $\gamma_0 \in \mathcal{D}^0_{\sigma(0)}$ is a given initial point, then $\gamma(t) = \tilde{\gamma}(t)$ for each $t \in [0,1]$ if and only if $T(\dot{\sigma}(t), \tilde{\gamma}(t))= 0$, such that $\gamma(t)$ and $\tilde{\gamma}(t)$ are parallel transported curves along $\sigma$ w.r.t. $\nabla$ and $\nabla^{T}$, respectively.* *Proof.* It is sufficient to prove that the first case and the second one follow similar arguments. As a consequence of the definition of the tensorial operator $T^B$, for any section $S(t)$ of $\mathcal{D}^*$ along $\sigma$, the next expression is true $$\nabla^{H}_{{\dot{\sigma}(t)}}S(t)= \overline{\nabla}^B_{{\dot{\sigma}(t)}}S(t)- T^B({\dot{\sigma}(t)}, S(t)).$$ Now, suppose that $S(t)= \tilde{p}(t)= p(t)$, then we get, $$T^B({\dot{\sigma}(t)}, S(t))= 0.$$ Conversely, it is well known that, regarding any connection, the parallel transported curves are uniquely determined by their initial conditions. ◻ It is clear that the second property of the above Proposition yields necessary and sufficient conditions for the existence of the curves that have abnormal extremals. In other words, $\sigma$ is an abnormal extremal if and only if there exists a parallel transported section $\tilde{\gamma}$ of $\mathcal{D}^0$ along $\sigma$ with respect to $\nabla^{T}$ such that $T(\dot{\sigma}(t), \tilde{\gamma}(t))= 0$. Now, by the next Proposition, one can derive the necessary and sufficient condition for normal extremals to be a motion of a free nonholonomic mechanical system and vice versa. **Lemma 1**. *Let $(M, \mathcal{D}, F)$ be nonholonomic sub-Finslerian structures, and $\nabla$ be a normal non-linear $\mathcal{L}$-connection. Then for any $\alpha \in \mathfrak{X}^*(M)$ we have that $\nabla_{\alpha}\alpha= 0$ if and only if $$\begin{aligned} \nabla^H_{E(\alpha)} \alpha(P)= & - T(E(\alpha),(P^*)^c(\alpha)); \\ \nabla^T_{E(\alpha)}P^*(\alpha)= & - T^B(E(\alpha), \alpha(P)). \end{aligned}$$* *Proof.* We proved in [@L.K], that $\nabla_{\alpha}\alpha= 0$ if and only if $\nabla_{\alpha} \alpha = \overline{\nabla}^B_{E(\alpha)} (P^*)^c(\alpha)+ \delta_{E(\alpha)} P^*(\alpha)=0.$ Moreover, $P^*(\alpha) = \alpha (P)$ and the Barthel non-linear connection preserves the metric, i.e. $\nabla^B \circ \mathcal{L}_{\eta} = \mathcal{L}_{\eta} \circ \overline{\nabla}^B$, therefore $$\begin{aligned} \overline{\nabla}^B_{E(\alpha)}P^*(\alpha)= & \nabla^{H}_{E(\alpha)}P^*(\alpha)+T^B(E(\alpha),P^*(\alpha)), \\ \delta_{E(\alpha)}P^*(\alpha)= & \nabla^{T}_{E(\alpha)}P^*(\alpha)+T(E(\alpha),(P^*)^c(\alpha)). \end{aligned}$$ According to the fact that $T^*M$ can be written as the direct sum of $(\mathcal{D}^{\bot})^0$ and $\mathcal{D}^0$, so the equivalence is pretty clear. ◻ **Theorem 1**. *If $\sigma :[0, 1]\rightarrow M$ is a solution of a free nonholonomic system given by nonholonomic sub-Finslerian structures, then it is also a solution of the corresponding Vakonomic problem, and vice versa, if and only if there exists $\gamma \in \Gamma(\mathcal{D}^0)$ along $\sigma$ such that $$\label{Last} \nabla^T_{\dot{\sigma}} \gamma (t)= - T^B(\dot{\sigma}(t), \mathcal{L}_{L}(\dot{\sigma}(t))),$$ further, for all $t$, $\gamma (t)\in {(\mathcal{D}_{\sigma(t)}+ [\dot{\sigma}(t), \mathcal{D}_{\sigma(t)}])}^0$.* *Proof.* $\nabla_{\alpha} \alpha(t) = 0$ is the condition for any $E$-admissible curve $\alpha(t) = \mathcal{L}_{L}(\dot{\sigma}(t))+\gamma (t)$ to be parallel transported with respect to a normal $\mathcal{L}$-connection. 
In other words, $$\nabla^H_{\dot{\sigma}} \mathcal{L}_{L}(\dot{\sigma}(t))= - T(\dot{\sigma}(t), \gamma (t))$$ and $$\nabla^T_{\dot{\sigma}}\gamma (t)=- T^B(\dot{\sigma}(t), \mathcal{L}_{L}(\dot{\sigma}(t))).$$ Therefore, $\nabla^H_{\dot{\sigma}} \mathcal{L}_{L}(\dot{\sigma}(t))=0$ if and only if $T(\dot{\sigma}(t), \gamma (t))=0$, such that $\gamma (t)$ is a solution of ([\[Last\]](#Last){reference-type="ref" reference="Last"}). Since Remark [Remark 3](#GI){reference-type="ref" reference="GI"} and Proposition [Proposition 2](#12){reference-type="ref" reference="12"} guarantee that $\mathcal{D}$ is geodesically invariant, given any $\gamma (t)$ in ${(\mathcal{D}_{\sigma(t)}+ [\dot{\sigma}(t), \mathcal{D}_{\sigma(t)}])}^0$, equation ([\[Last\]](#Last){reference-type="ref" reference="Last"}) ensures that there is always a solution for all $t \in [0,1]$, not only for $\gamma (0)$ in ${(\mathcal{D}_{\sigma(0)}+ [\dot{\sigma}(0), \mathcal{D}_{\sigma(0)}])}^0$. ◻

# Examples from Robotics

Typically, nonholonomic systems occur when velocity restrictions are applied, such as the constraint that bodies move on a surface without slipping. Bicycles, cars, unicycles, and anything with rolling wheels are all examples of nonholonomic sub-Finslerian structures. We will discuss the simplest wheeled mobile robot, a single upright rolling wheel or unicycle, also known as a kinematic penny rolling on a plane. Assume this wheel is of radius 1 and does not allow sideways sliding. Its configuration $M$ consists of the heading angle $\phi$, the wheel's contact position $(x_1, x_2)$, and the rolling angle $\psi$ (see Figure [1](#KK){reference-type="ref" reference="KK"}). Consequently, the configuration space is four-dimensional, i.e., $M= \mathbb{R}^2 \times S^1 \times S^1$. There are two control functions driving the wheel [@JCG; @KF00]:

![A kinematic penny rolling on a plane](D1.eps){#KK}

- $u_1$ \[rolling speed\], the forward-backward angular rolling speed,

- $u_2$ \[turning speed\], the speed of turning the heading direction $\phi$.

With these controls, the rate of change of the coordinates can be expressed as follows: $$\begin{aligned} \dot{M} &= \begin{bmatrix} \dot{\phi}\\ \dot{x}_{1} \\ \dot{x}_{2} \\ \dot{\psi} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ \cos {\phi} & 0\\ \sin {\phi} & 0\\ 1 & 0\\ \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = X(M) u. \end{aligned}$$ As we generally do not worry about the wheel's rolling angle, we could drop the fourth row from the above equation to get a simpler control system $$\begin{aligned} \dot{M} &= \begin{bmatrix} \dot{\phi}\\ \dot{x}_{1} \\ \dot{x}_{2} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ \cos {\phi} & 0\\ \sin {\phi} & 0 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = X(M) u, \end{aligned}$$ which can be written as the following equation: $$X(M) u = X_1(M) u_1 + X_2(M) u_2,$$ where $u_1, u_2$ are called the controls and $X_1(M), X_2(M)$ are called vector fields. Moreover, each vector field assigns a velocity to every point of the configuration space, so these vector fields are sometimes called velocity vector fields (a short numerical illustration of this driftless system is sketched below).
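Before turning to the spanning fields of $\mathcal{D}$, a quick numerical sketch of the reduced driftless system above can be helpful. The control sequence and step size below are arbitrary illustrative choices: a roll-turn-roll back-turn back maneuver only ever uses the two admissible velocity directions, yet it produces a net sideways displacement of order $\varepsilon^2$, in the direction of the Lie bracket of the two velocity vector fields (denoted $X_1$, $X_2$ just below); this is exactly the nonholonomic behavior at stake.

```python
# Illustrative sketch (arbitrary controls, not from the text) of the reduced
# unicycle model  dphi/dt = u2,  dx1/dt = u1 cos(phi),  dx2/dt = u1 sin(phi).
# A commutator-like maneuver (roll, turn, roll back, turn back) uses only the
# two admissible velocity directions, yet yields a net sideways displacement of
# order eps**2, revealing the nonholonomy of the rolling constraint.
import math

def constant_control_flow(state, u1, u2, tau):
    """Exact flow for a constant single-input segment of duration tau."""
    phi, x1, x2 = state
    if u2 == 0.0:       # pure rolling: heading phi is frozen
        return (phi, x1 + u1 * math.cos(phi) * tau, x2 + u1 * math.sin(phi) * tau)
    if u1 == 0.0:       # pure turning: contact point is frozen
        return (phi + u2 * tau, x1, x2)
    raise ValueError("this sketch only uses single-input segments")

eps = 0.1
state = (0.0, 0.0, 0.0)
for u1, u2 in [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]:
    state = constant_control_flow(state, u1, u2, eps)

phi, x1, x2 = state
print("final heading phi:", phi)           # returns to 0
print("net displacement :", (x1, x2))      # roughly (0, -eps**2): a pure side-slip
print("x2 / eps**2      :", x2 / eps**2)   # approx -1, the bracket direction at phi = 0
```

This is the discrete counterpart of the fact that the reachable set of a driftless system is governed by iterated Lie brackets of the velocity vector fields rather than by the fields themselves.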
Hence the velocity vector fields of any solution curve should lie in $\mathcal{D}$ spanned by the following vector fields: $$\begin{aligned} X_1(M) & = \cos {\phi} \frac{\partial}{\partial x_1} + \sin {\phi} \frac{\partial}{\partial x_2} + \frac{\partial}{\partial \psi}\\ X_2(M)& = \frac{\partial}{\partial \phi}.\end{aligned}$$ In a natural way, a sub-Riemannian metric on $\mathcal{D}$ is obtained by declaring the vector fields $X_1(M), X_2(M)$ to be orthonormal, $$\langle u_1 X_1(M) + u_2 X_2(M),u_1 X_1(M) + u_2X_2(M)\rangle = u_1^2 + u_2^2.$$ The integral of this quadratic form measures the work done in turning the heading angle $\phi$ at the rate $\dot{\phi}$ and propelling the wheel ahead at the rate of $\dot{\psi}$. The sub-Riemannian structure can be adjusted to reflect the idea that curvature is costly: namely, it takes more effort to steer the wheel in a tight circle with little forward or backward movement than to steer it in a wide arc. Therefore, with the curvature of the projection $\sigma$ given by $\kappa = \frac{\dot{\phi}}{\dot{\psi}}$, this brings us to consider sub-Finsler metrics of the form $$F = f (\kappa)\sqrt{d{\psi}^2 + d{\phi}^2},$$ such that $f$ grows larger but remains bounded as $\left| \kappa \right|$ increases. After checking the sub-Finslerian property, one recovers the nonholonomic constraint of the rolling wheel, often known as a unicycle, from the equation $\dot{M}=X(M)u$, which is the kinematic model of the unicycle.

# The sub-Laplacian associated with nonholonomic sub-Finslerian structures

*The sub-Laplacian* is a differential operator that arises naturally in the study of nonholonomic sub-Finslerian structures. These are geometric structures that generalize Riemannian manifolds, allowing for non-integrable distributions of tangent spaces. On a sub-Finslerian manifold $M$, there is a distinguished distribution of tangent spaces $\mathcal{D}$, which corresponds to the directions that are accessible by moving along curves with bounded sub-Finsler length. The sub-Finsler metric $F$ on $M$ measures the sub-Finsler length of curves with respect to this distribution. The sub-Laplacian is a second-order differential operator that acts on functions on $M$ and is defined in terms of the metric $F$ and the distribution $\mathcal{D}$. It is given by $$\Delta_F = \mathrm{div}_{\mathcal{D}} (\mathrm{grad}_F),$$ where $\mathrm{grad}_F$ is the gradient vector field associated with $F$ which is the unique vector field satisfying $\mathrm{d}F(\mathrm{grad}_F, X) = X(F)$ for all vector fields $X$ on $M$, and $\mathrm{div}_{\mathcal{D}}$ is the divergence operator with respect to the distribution $\mathcal{D}$, which is defined as the trace of the tangential part of the connection on $\mathcal{D}$. Our goal in this section is to show that the sub-Laplacian measures the curvature of the sub-Finslerian structure. It captures the interplay between the sub-Finsler metric $F$ and the distribution $\mathcal{D}$, and plays a crucial role in many geometric and analytic problems on nonholonomic sub-Finslerian manifolds. For example, the heat kernel associated with the sub-Laplacian provides a way to study the long-term behavior of solutions to the heat equation on sub-Finslerian manifolds. The Hodge theory on sub-Finslerian manifolds is also intimately related to the sub-Laplacian, and involves the study of differential forms that are harmonic with respect to the sub-Laplacian.

**Remark 5**.
To see that the sub-Laplacian measures the curvature of the sub-Finslerian structure, let us first recall some basic facts about Riemannian manifolds, see [@GL]. On a Riemannian manifold $(M,g)$, the Laplace-Beltrami operator is defined as $$\Delta_g = \mathrm{div} (\mathrm{grad}_g),$$ where $\mathrm{grad}_g$ is the gradient vector field associated with the Riemannian metric $g$, and $\mathrm{div}$ is the divergence operator. It is a well-known fact that the Laplace-Beltrami operator measures the curvature of the Riemannian structure in the sense that it is zero if and only if the Riemannian manifold is flat. The sub-Finslerian case is more complicated due to the presence of the distribution $\mathcal{D}$ that is not integrable in general. However, the sub-Laplacian $\Delta_F$ can still be understood as a curvature operator. To see this, we need to introduce the notion of a horizontal vector field. A vector field $X$ on $M$ is called horizontal if it is tangent to the distribution $\mathcal{D}$. Equivalently, $X$ is horizontal if it is locally of the form $X = \sum_{i=1}^k h_i X_i$, where $h_i$ are smooth functions and $X_1,\ldots,X_k$ are smooth vector fields that form a basis for $\mathcal{D}$. Given a horizontal vector field $X$, we can define its sub-Finsler length $|X|_F$ as the infimum of the lengths of horizontal curves that are tangent to $X$ at each point. Equivalently, $|X|_F$ is the supremum of the scalar products $g(X,Y)$ over all horizontal vector fields $Y$ with $|Y|_F \leq 1$. With these definitions in place, we can now show that the sub-Laplacian measures the curvature of the sub-Finslerian structure. More precisely, we have the following result: **Theorem 2**. *The sub-Laplacian $\Delta_F$ is zero if and only if the sub-Finslerian manifold $(M,F,\mathcal{D})$ is locally isometric to a Riemannian manifold.* *Proof.* First, suppose that $(M,F,\mathcal{D})$ is locally isometric to a Riemannian manifold $(M,g)$. Then we can choose a local frame of orthonormal horizontal vector fields $X_1,\ldots,X_k$ with respect to the Riemannian metric $g$. In this frame, we have $$\mathrm{grad}_F h = \sum_{i=1}^k g(\mathrm{grad}_h,X_i) X_i$$ for any function $h$ on $M$, and hence $$\Delta_Fh= \sum_{i=1}^{k} \mathrm{div}_{\mathcal{D}}(g( \mathrm{grad}_h, X_i)X_i).$$ Using the fact that the $X_i$ form a basis for $\mathcal{D}$, we can rewrite this as $$\Delta_Fh=\mathrm{div}( \mathrm{grad}_h)=\Delta_gh,$$ where $\Delta_g$ is the Laplace-Beltrami operator associated with the Riemannian metric $g$. Since $\Delta_g$ is zero if and only if $(M,g)$ is flat, it follows that $\Delta_F$ is zero if and only if $(M,F,\mathcal{D})$ is locally isometric to a Riemannian manifold, which implies that the sub-Finslerian structure is also flat. Conversely, suppose that $\Delta_F$ is zero. Let $X_1,\ldots,X_k$ be a local frame of horizontal vector fields such that $F(X_i) = 1$ for all $i$, and let $\omega_{ij} = g(X_i,X_j)$ be the Riemannian metric induced by $F$ on $\mathcal{D}$. Using the definition of the sub-Laplacian and the fact that $\Delta_F$ is zero, we have $$0= \Delta_FF=\mathrm{div}_{\mathcal{D}}( \mathrm{grad}_FF)=\sum_{i=1}^{k} \sum_{i=1}^{k} \frac{\partial^2F}{\partial x_i \partial x_j}\omega_{ij},$$ where $x_1,\ldots,x_k$ are local coordinates on $M$ that are adapted to $\mathcal{D}$ (i.e., $X_1,\ldots,X_k$ form a basis for the tangent space at each point). 
This implies that the Hessian of $F$ with respect to the Riemannian metric $\omega$ is zero, so $F$ is locally affine with respect to $\omega$. In other words, $(M,F,\mathcal{D})$ is locally isometric to a Riemannian manifold. ◻ **Remark 6**. In the above Theorem [Theorem 2](#SLSF){reference-type="ref" reference="SLSF"}, we have shown that the sub-Laplacian $\Delta_F$ measures the curvature of the sub-Finslerian structure. If $\Delta_F$ is zero, then the sub-Finslerian manifold is locally isometric to a Riemannian manifold, and hence the sub-Finslerian structure is flat. If $\Delta_F$ is nonzero, then the sub-Finslerian manifold is not locally isometric to a Riemannian manifold, and the sub-Finslerian structure is curved. This means that the shortest paths between two points on the manifold are not necessarily straight lines, and the geometry of the manifold is more complex than that of a Riemannian manifold. 99 A. Agrachev, D. Barilari, and U. Boscain, A comprehensive introduction to sub-Riemannian geometry. *Cambridge Studies in Advanced Mathematics* (2019). D. Bao, S.-S. Chern, and Z. Shen, An introduction to Riemann-Finsler geometry, Graduate Texts in Mathematics 200. *Springer-Verlag, New York,* (2000). L. M. Alabdulsada, L. Kozma, On the connection of sub-Finslerian geometry. *Int. J. Geom. Methods Mod. Phys.***16**, no. supp02, 1941006, (2019). L. M. Alabdulsada, L. Kozma, Hopf-Rinow theorem of sub-Finslerian geometry. https://arxiv.org/abs/2301.13438 L. M. Alabdulsada, Sub-Finsler manifolds and their associated sub-Hamiltonian functions. Submitted L. M. Alabdulsada, A note on the distributions in quantum mechanical systems. *J. Phys.: Conf. Ser.* **1999**, 012112, (2021). L. M. Alabdulsada, Sub-Finsler Geometry and Non-positive Curvature in Hilbert Geometry. *PhD thesis, University of Debrecen, Hungary*, (2019). D. Barilari, U. Boscain, E. Le Donne, and M. Sigalotti, Sub-Finsler structures from the time-optimal control viewpoint for some nilpotent distributions. *J. Dyn. Control Syst.* 23(3), 547-575, (2017). V. N. Berestovskii, Homogeneous manifolds with intrinsic metric. I., *Sib. Math. J.* 29, 887-897, (1988). A. Bloch, L. Colombo, R. Gupta, and D. Martı́n de Diego, A geometric approach to the optimal control of nonholonomic mechanical systems, Analysis and Geometry in Control Theory and its Applications. *INdAM* **11**, (Springer), pp. 35-64, (2015). O. Calin, D.-C. Chang, Sub-Riemannian Geometry: General Theory and Examples. *Cambridge University Press*, New York, (2009). C. Carathéodory, Untersuchungen über die Grundlagen der Termodynamik, *Math. Ann.,* **67**, 93-161, (1909). W.-L. Chow, Über Systeme von linearen partiellen Differentialgleichungen erster Ordnung. (German) *Math. Ann.* **117**, 98-105, (1939). J.N. Clelland, C.G. Moseley, and G.R. Wilkens, Geometry of sub-Finsler Engel manifolds. *Asian J. Math* **11**, no. 4, 699-726, (2007). J. Cortés, M. de León, D. Martı́n de Diego, and S. Martı́nez, Geometric description of Vakonomic and nonholonomic dynamics. Comparison of Solutions. *SIAM J. Control Optim.* **41**, no. 5, 1389-1412, (2002). M. Gordina, T. Laetsch, Sub-Laplacians on Sub-Riemannian manifolds. *Potential Anal* **44**, 811-837, (2016). Gromov, M. Carnot-Caratheodory spaces seen from within. In Sub-Riemannian geometry, vol. 144 of Progr. Math. Birkhäuser, Basel, 79-323, (1996). B. Langerock, Nonholonomic mechanics and connections over a bundle map. *J. Phys. A*, **34**, 609-615, (2001). A. D. 
Lewis, Affine connections and distributions with applications to nonholonomic mechanics. *Rep. Math. Phys.* **42**, no. 1-2, 135-164, (1998). C. López; E. Martı́nez, Sub-Finslerian metric associated to an optimal control system. *SIAM J. Control Optim.* **39**, no. 3, 798-811, (2000). Kevin M. Lynch, Frank C. Park, Modern robotics. *Cambridge University Press,* (2017). L. Kozma, Holonomy structures in Finsler geometry, *Handbook of Finsler Geometry* ed. Antonelli (Kluwer), (2003). J. Mitchell, On Carnot-Carathéodory metrics, *J. Different. Geom.,* 21, No. 1, 35-45, (1985). R. Montgomery, A tour of subriemannian geometries, their geodesics and applications. Mathematical Surveys and Monographs 91, *Amer. Math. Soc., Providence, RI,* (2002). P. K. Rashevsky, Any two points of a totally nonholonomic space may be connected by an admissible line. *Uch. Zap. Ped. Inst. Im. Liebknechta, Ser. Phys. Math.* (in Russian). 2: 83-94, (1938).
--- abstract: | In this paper we introduce a new logarithmic double phase type operator of the form $$\begin{aligned} \begin{aligned} \mathcal{G}u & :=- \operatorname{div}\left( \left|{ \nabla u }\right|^{p(x) - 2} \nabla u \right. \\ & \left. \qquad\qquad\quad + \mu(x) \left[ \log ( e + \left|{\nabla u}\right| ) + \frac{ \left|{\nabla u}\right| }{q(x) (e + \left|{\nabla u}\right|)} \right] \left|{ \nabla u }\right|^{q(x) - 2} \nabla u \right), \end{aligned} \end{aligned}$$ whose energy functional is given by $$\begin{aligned} u \mapsto \int_{\Omega}\left( \frac{ \left|{\nabla u}\right|^{p(x)} }{p(x)} + \mu(x) \frac{ \left|{\nabla u}\right|^{q(x)} }{q(x)} \log (e + \left|{\nabla u}\right|) \right) \mathop{\mathrm{d\textit{x}}}, \end{aligned}$$ where $\Omega \subseteq \mathbb{R}^N$, $N \geq 2$, is a bounded domain with Lipschitz boundary $\partial \Omega$, $p,q \in C(\overline{\Omega})$ with $1<p(x) \leq q(x)$ for all $x \in \overline{\Omega}$ and $\mu \in L^{1}(\Omega)$. First, we prove that the logarithmic Musielak-Orlicz Sobolev spaces $W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)$ and $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ with $\ensuremath{\mathcal{H}_{\log}}(x,t) = t^{p(x)} + \mu(x) t^{q(x)} \log (e + t)$ for $(x,t)\in \overline{\Omega}\times [0,\infty)$ are separable, reflexive Banach spaces and $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ can be equipped with the equivalent norm $$\inf \left\lbrace \lambda > 0 \,:\, \int_{\Omega}\left[ \left|{ \frac{\nabla u}{\lambda} }\right|^{p(x)} +\mu(x) \left|{\frac{\nabla u}{\lambda}}\right| ^{q(x)} \log \left( e + \frac{ \left|{\nabla u}\right| }{\lambda} \right) \right]\mathop{\mathrm{d\textit{x}}}\leq 1\right\rbrace.$$ We also prove several embedding results for these spaces and the closedness of $W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)$ and $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ under truncations. In addition we show the density of smooth functions in $W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)$ even in the case of an unbounded domain by supposing Nekvinda's decay condition on $p(\cdot)$. The second part is devoted to the properties of the operator and it turns out that it is bounded, continuous, strictly monotone, of type [(S$_+$)]{.nodecor}, coercive and a homeomorphism. Also, the related energy functional is of class $C^1$. As a result of independent interest we also present a new version of Young's inequality for the product of a power-law and a logarithm. In the last part of this work we consider equations of the form $$\mathcal{G}u = f(x,u) \quad \text{in } \Omega, \quad u = 0 \quad \text{on } \partial\Omega$$ with superlinear right-hand sides. We prove multiplicity results for this type of equation, in particular about sign-changing solutions, by making use of a suitable variation of the corresponding Nehari manifold together with the quantitative deformation lemma and the Poincaré-Miranda existence theorem. address: - Department of Mathematical Sciences, Indian Institute of Technology Varanasi (IIT-BHU), Uttar Pradesh 221005, India - Technische Universität Berlin, Institut für Mathematik, Straße des 17. Juni 136, 10623 Berlin, Germany - Technische Universität Berlin, Institut für Mathematik, Straße des 17. Juni 136, 10623 Berlin, Germany author: - Rakesh Arora - Ángel Crespo-Blanco - Patrick Winkert title: On logarithmic double phase problems --- # Introduction In recent years double phase problems have received a lot of attention from the mathematical community. 
These problems typically involve a functional with the shape $$\begin{aligned} \label{double-phase-functional-without-log} u \mapsto \int_{\Omega}\left( \frac{ \left|{\nabla u}\right|^{p(x)} }{p(x)} + \mu(x) \frac{ \left|{\nabla u}\right|^{q(x)} }{q(x)} \right) \mathop{\mathrm{d\textit{x}}}\end{aligned}$$ or, alternatively, a differential operator of the form $$\begin{aligned} \label{double-phase-operator-without-log} - \operatorname{div} \left( \left|{ \nabla u }\right|^{p(x) - 2} \nabla u + \mu(x) \left|{ \nabla u }\right|^{q(x) - 2} \nabla u \right) .\end{aligned}$$ Naturally, one can see $p,q$ as functions or limit the study to the constant exponents case. These problems are called double phase problems because of their nonuniform ellipticity, with two regions of different behavior. In the set $\{ x \in \Omega \,:\, \mu(x)\geq \varepsilon> 0\}$ for any fixed $\varepsilon> 0$, the ellipticity in the gradient of the integrand is of order $q(x)$, while in the set $\{ x \in \Omega \,:\, \mu(x) = 0\}$ that ellipticity is of order $p(x)$. Let $p$ and $q$ be constants, the double phase energy functional $$\begin{aligned} \label{integral_minimizer} u \mapsto \int_\Omega \left(|\nabla u|^p+\mu(x)|\nabla u|^q\right)\mathop{\mathrm{d\textit{x}}}\end{aligned}$$ appeared for the first time in a work of Zhikov [@Zhikov-1986] in order to describe models for strongly anisotropic materials in the context of homogenization and elasticity theory, see also Zhikov [@Zhikov-1995; @Zhikov-2011]. Indeed, in elasticity theory, the modulating coefficient $\mu(\cdot)$ dictates the geometry of composites made of two different materials with distinct power hardening exponents $q(\cdot)$ and $p(\cdot)$, see the mentioned works of Zhikov. We also point out that there are several other applications in physics and engineering of double phase differential operators and related energy functionals, see the works of Bahrouni-Rădulescu-Repovš [@Bahrouni-Radulescu-Repovs-2019] on transonic flows, Benci-D'Avenia-Fortunato-Pisani [@Benci-DAvenia-Fortunato-Pisani-2000] on quantum physics and Cherfils-Il'yasov [@Cherfils-Ilyasov-2005] on reaction diffusion systems. In recent years functionals of the shape [\[integral_minimizer\]](#integral_minimizer){reference-type="eqref" reference="integral_minimizer"} have been treated in many papers concerning regularity, in particular of local minimizers (also for nonstandard growth). We refer to the works of Baroni-Colombo-Mingione [@Baroni-Colombo-Mingione-2015; @Baroni-Colombo-Mingione-2018], Baroni-Kuusi-Mingione [@Baroni-Kuusi-Mingione-2015], Byun-Oh [@Byun-Oh-2020], Byun-Ok-Song [@Byun-Ok-Song-2022], Colombo-Mingione [@Colombo-Mingione-2015a; @Colombo-Mingione-2015b], De Filippis-Palatucci [@De-Filippis-Palatucci-2019], Harjulehto-Hästö-Toivanen [@Harjulehto-Hasto-Toivanen-2017], Ok [@Ok-2018; @Ok-2020], Ragusa-Tachikawa [@Ragusa-Tachikawa-2020] and the references therein. Furthermore, nonuniformly elliptic variational problems and nonautonomous functionals have been studied in the papers of Beck-Mingione [@Beck-Mingione-2020], De Filippis-Mingione [@DeFilippis-Mingione-2021-2; @DeFilippis-Mingione-2020-2] and Hästö-Ok [@Hasto-Ok-2019]. 
We point out that [\[integral_minimizer\]](#integral_minimizer){reference-type="eqref" reference="integral_minimizer"} also belongs to the class of the integral functionals with nonstandard growth condition as a special case of the outstanding papers of Marcellini [@Marcellini-1991; @Marcellini-1989], see also the recent papers by Cupini-Marcellini-Mascolo [@Cupini-Marcellini-Mascolo-2023] and Marcellini [@Marcellini-2023] with $u$-dependence. However, such works are limited to consider only a power-law type of growth in each of the addends. If one wants to consider other types of growth, the first idea that comes up naturally is to modify power-laws with a logarithm. For this reason, in this paper we consider logarithmic type functionals of the form $$\begin{aligned} \label{log-energy-functional} I(u)=\int_{\Omega}\left( \frac{ \left|{\nabla u}\right|^{p(x)} }{p(x)} + \mu(x) \frac{ \left|{\nabla u}\right|^{q(x)} }{q(x)} \log (e + \left|{\nabla u}\right|) \right) \mathop{\mathrm{d\textit{x}}},\end{aligned}$$ and its associated differential operator $$\begin{aligned} \label{log-operator} \begin{aligned} \mathcal{G}u & :=- \operatorname{div}\left( \left|{ \nabla u }\right|^{p(x) - 2} \nabla u \right. \\ & \left. \qquad\qquad\quad + \mu(x) \left[ \log ( e + \left|{\nabla u}\right| ) + \frac{ \left|{\nabla u}\right| }{q(x) (e + \left|{\nabla u}\right|)} \right] \left|{ \nabla u }\right|^{q(x) - 2} \nabla u \right), \end{aligned}\end{aligned}$$ where $\Omega \subseteq \mathbb{R}^N$, $N \geq 2$, is a bounded domain with Lipschitz boundary $\partial \Omega$, $e$ stands for Euler's number, $p,q \in C(\overline{\Omega})$ with $1 < p(x) \leq q(x)$ for all $x \in \overline{\Omega}$ and $\mu \in L^{1}(\Omega)$. One work closely related to ours is Baroni-Colombo-Mingione [@Baroni-Colombo-Mingione-2016], where they prove the local Hölder continuity of the gradient of local minimizers of the functional $$\begin{aligned} \label{log-functional-Mingione1} w \mapsto \int_{\Omega}\left[ \left|{D w}\right|^p + a(x) \left|{D w}\right|^p \log (e + \left|{D w}\right|) \right] \mathop{\mathrm{d\textit{x}}}\end{aligned}$$ provided that $1 < p < \infty$ and $0 \leq a(\cdot) \in C^{0,\alpha} (\overline{\Omega})$. Note that when we take $p=q$ and constant, [\[log-energy-functional\]](#log-energy-functional){reference-type="eqref" reference="log-energy-functional"} and [\[log-functional-Mingione1\]](#log-functional-Mingione1){reference-type="eqref" reference="log-functional-Mingione1"} are the same functional up to a multiplicative constant. In another recent work of De Filippis-Mingione [@DeFilippis-Mingione-2023] the local Hölder continuity of the gradients of local minimizers of the functional $$\begin{aligned} \label{log-functional-Mingione2} w \mapsto \int_{\Omega}\big[|D w|\log(1+|D w|)+a(x)|D w|^q\big]\mathop{\mathrm{d\textit{x}}}\end{aligned}$$ has been shown provided $0 \leq a(\cdot)\in C^{0,\alpha}(\overline{\Omega})$ and $1<q<1+\frac{\alpha}{n}$ whereby $\Omega \subset \mathbb{R}^n$. The functional [\[log-functional-Mingione2\]](#log-functional-Mingione2){reference-type="eqref" reference="log-functional-Mingione2"} has its origin in the functional with nearly linear growth given by $$\begin{aligned} \label{log-functional-Mingione3} w \mapsto \int_{\Omega}|D w|\log(1+|D w|)\mathop{\mathrm{d\textit{x}}},\end{aligned}$$ which has been studied in Fuchs-Mingione [@Fuchs-Mingione-2000] and Marcellini-Papi [@Marcellini-Papi-2006]. 
Note that functionals of the form [\[log-functional-Mingione3\]](#log-functional-Mingione3){reference-type="eqref" reference="log-functional-Mingione3"} appear in the theory of plasticity with logarithmic hardening, see, for example, Seregin-Frehse [@Seregin-Frehse-1999] and the monograph of Fuchs-Seregin [@Fuchs-Seregin-2000] about variational methods for problems which have their origins in plasticity theory and generalized Newtonian fluids. To the best of our knowledge, our work is the first one dealing with such logarithmic operator given in [\[log-operator\]](#log-operator){reference-type="eqref" reference="log-operator"} and associated energy functional [\[log-energy-functional\]](#log-energy-functional){reference-type="eqref" reference="log-energy-functional"} in a very general setting. Indeed, there are many innovations and novelties in this work which we want to summarize below. The first step in studying the operator and its energy functional is the finding of the right function space. For this purpose, we consider logarithmic Musielak-Orlicz Sobolev spaces $W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)$ and $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ with the generalized weak $\Phi$-function $\ensuremath{\mathcal{H}_{\log}}\colon \overline{\Omega}\times [0,\infty) \to [0,\infty)$ given by $$\begin{aligned} \ensuremath{\mathcal{H}_{\log}}(x,t) = t^{p(x)} + \mu(x) t^{q(x)} \log (e + t).\end{aligned}$$ We are able to prove that these spaces are separable, reflexive Banach spaces and $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ can be equipped with the equivalent norm $$\inf \left\lbrace \lambda > 0 \,:\, \int_{\Omega}\left[ \left|{ \frac{\nabla u}{\lambda} }\right|^{p(x)} +\mu(x) \left|{\frac{\nabla u}{\lambda}}\right| ^{q(x)} \log \left( e + \frac{ \left|{\nabla u}\right| }{\lambda} \right) \right]\mathop{\mathrm{d\textit{x}}}\leq 1\right\rbrace.$$ Such norm will be later useful in our existence results for corresponding logarithmic double phase equations. In addition, we prove several embedding results into variable exponent Lebesgue spaces and the closedness of $W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)$ and $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ under truncations. Moreover, we show the density of smooth functions. To be more precise, under the assumptions that $p,q \colon \overline{\Omega}\to [1,\infty)$ and $\mu \colon \overline{\Omega}\to [0,\infty)$ are Hölder continuous and $$\begin{aligned} \left( \frac{q}{p} \right)_+ < 1 + \frac{\gamma}{N},\end{aligned}$$ where $\gamma$ is the Hölder exponent of $\mu$, we obtain that $C^\infty (\Omega) \cap W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)$ is dense in $W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)$. As an result of independent interest, we extend this assertion to the case of unbounded domains under the additional hypothesis that $p$ satisfies Nekvinda's decay condition and $p$ and $q$ are bounded. 
This condition was first introduced by Nekvinda in the article [@Nekvinda-2004] and says that a measurable function $r \colon \Omega \to [1,\infty]$ satisfies Nekvinda's decay condition if there exist $r_\infty \in [1 , \infty]$ and $c \in (0,1)$ such that $$\begin{aligned} \int_{\Omega}c^{\frac{1}{ \left|{ \frac{1}{r(x)} - \frac{1}{r_\infty} }\right| }} \mathop{\mathrm{d\textit{x}}}< \infty.\end{aligned}$$ In the second part of this paper we are interested in the properties of the logarithmic double phase operator $\mathcal{G}$ given in [\[log-operator\]](#log-operator){reference-type="eqref" reference="log-operator"} and its corresponding energy functional $I$ in [\[log-energy-functional\]](#log-energy-functional){reference-type="eqref" reference="log-energy-functional"}. We prove that the operator is bounded, continuous, strictly monotone, of type [(S$_+$)]{.nodecor}, coercive and a homeomorphism. Moreover, the functional $I$ is of class $C^1$. As a result of independent interest we also present a new version of Young's inequality for the product of a power-law and a logarithm, which says that for $s,t \geq 0$ and $r > 1$ it holds $$\begin{aligned} s t^{r-1} \left[ \log (e + t ) + \frac{ t }{r (e + t)} \right] \leq \frac{s^r}{r} \log (e + s ) + t^r \left[ \frac{r - 1}{r} \log (e + t ) + \frac{ t }{r (e + t)} \right] .\end{aligned}$$ This inequality is essential for our proof that the operator fulfills the [(S$_+$)]{.nodecor}-property. Finally, in the last part of the paper, we are interested in existence and multiplicity results for the equation $$\begin{aligned} \label{Eq:Problem} \mathcal{G}u = f(x,u) \quad \text{in } \Omega, \quad u= 0 \quad \text{on } \partial\Omega,\end{aligned}$$ where $\mathcal{G}$ is given in [\[log-operator\]](#log-operator){reference-type="eqref" reference="log-operator"} and $f\colon\Omega\times \mathbb{R}\to\mathbb{R}$ is a Carathéodory function with subcritical growth which satisfies appropriate conditions, see [\[Hf\]](#Hf){reference-type="eqref" reference="Hf"}, [\[Asf:QuotientMono\]](#Asf:QuotientMono){reference-type="eqref" reference="Asf:QuotientMono"}, [\[Asf:GrowthZeroAlt\]](#Asf:GrowthZeroAlt){reference-type="eqref" reference="Asf:GrowthZeroAlt"} and [\[Asf:Nonnegative\]](#Asf:Nonnegative){reference-type="eqref" reference="Asf:Nonnegative"} for the details. We prove the existence of three nontrivial weak solutions of problem [\[Eq:Problem\]](#Eq:Problem){reference-type="eqref" reference="Eq:Problem"} including determining their sign. More precisely, one solution is positive, one is negative and the third one has changing sign. The existence of the sign-changing solution is the most difficult part in our existence section; the idea is to use an appropriate variation $\mathcal{N}_0$ of the corresponding Nehari manifold of problem [\[Eq:Problem\]](#Eq:Problem){reference-type="eqref" reference="Eq:Problem"} given by $$\begin{aligned} \mathcal{N} = \left\{ u \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\,:\, \langle \varphi'(u) , u \rangle = 0, \; u \neq 0 \right\},\end{aligned}$$ where $\varphi$ is the energy functional corresponding to [\[Eq:Problem\]](#Eq:Problem){reference-type="eqref" reference="Eq:Problem"}. The definition of $\mathcal{N}$ is motivated by the works of Nehari [@Nehari-1960; @Nehari-1961] and the set $\mathcal{N}_0$ consists of all functions of $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ such that the positive and negative parts are also in $\mathcal{N}$.
The idea in the proof of the existence of the sign-changing solution is, among other things, a suitable combination of the quantitative deformation lemma and the Poincaré-Miranda existence theorem. A similar treatment has been applied to double phase problems without logarithmic term, using the Brouwer degree instead of the Poincaré-Miranda existence theorem, in the works of Liu-Dai [@Liu-Dai-2018] for constant exponents $p,q$ and Crespo-Blanco-Winkert [@Crespo-Blanco-Winkert-2022] for the operator given in [\[double-phase-operator-without-log\]](#double-phase-operator-without-log){reference-type="eqref" reference="double-phase-operator-without-log"} with associated functional [\[double-phase-functional-without-log\]](#double-phase-functional-without-log){reference-type="eqref" reference="double-phase-functional-without-log"}. Note that the appearance of the logarithmic term in our operator makes the treatment much more complicated than in the works [@Liu-Dai-2018] and [@Crespo-Blanco-Winkert-2022]. As mentioned above, to the best of our knowledge, there exists no other work dealing with the logarithmic double phase operator given in [\[log-operator\]](#log-operator){reference-type="eqref" reference="log-operator"}. However, some papers deal with logarithmic terms on the right-hand side for Schrödinger equations or $p$-Laplace problems. In 2009, Montenegro-de Queiroz [@Montenegro-deQueiroz] studied the nonlinear elliptic problem $$\begin{aligned} \label{reference-1} -\Delta u=\chi_{u>0}(\log (u)+\lambda f(x,u))\quad \text{in } \Omega, \quad u= 0 \quad \text{on } \partial\Omega,\end{aligned}$$ where $f\colon \Omega \times [0,\infty) \to [0,\infty)$ is nondecreasing, sublinear and $f_u$ is continuous. The authors show that [\[reference-1\]](#reference-1){reference-type="eqref" reference="reference-1"} has a maximal solution $u_\lambda \geq 0$ of type $C^{1,\gamma}(\overline{\Omega})$. Logarithmic Schrödinger equations of the form $$\begin{aligned} \label{reference-2} -\Delta u+V(x)u=Q(x)u \log(u^2)\quad \text{in }\mathbb{R}^N\end{aligned}$$ have been studied by Squassina-Szulkin [@Squassina-Szulkin-2015], who prove that [\[reference-2\]](#reference-2){reference-type="eqref" reference="reference-2"} has infinitely many solutions, where $V$ and $Q$ are $1$-periodic functions of the variables $x_1,\ldots,x_N$ and $Q\in C^1(\mathbb{R}^N)$. Further results for logarithmic Schrödinger equations can be found in the works of Alves-de Morais Filho [@Alves-deMoraisFilho-2018], Alves-Ji [@Alves-Ji-2020] and Shuai [@Shuai-2019]; see also Gao-Jiang-Liu-Wei [@Gao-Jiang-Liu-Wei-2023] for logarithmic Kirchhoff type equations and the recent work of Alves-da Silva [@Alves-daSilva-2023] about logarithmic Schrödinger equations on exterior domains. We also refer to the papers of Alves-Moussaoui-Tavares [@Alves-Moussaoui-Tavares-2019] for singular systems with logarithmic right-hand sides driven by the $\Delta_{p(\cdot)}$-Laplacian and Shuai [@Shuai-2023] for a Laplace equation with right-hand side $a(x)u\log(|u|)$ with a weight function $a$ which may change sign. To finish this introduction, we would like to mention some well-known works in the direction of double phase problems (without logarithmic term) appearing in recent years and based on different methods and techniques.
We refer to the papers of Arora-Shmarev [@Arora-Shmarev-2023; @Arora-Shmarev-2023-b] (parabolic double-phase problems), Colasuonno-Squassina [@Colasuonno-Squassina-2016] (double phase eigenvalue problems), Farkas-Winkert [@Farkas-Winkert-2021] (Finsler double phase problems), Gasiński-Papageorgiou [@Gasinski-Papageorgiou-2019] (locally Lipschitz right-hand sides), Gasiński-Winkert [@Gasinski-Winkert-2020] (convection problems), Ho-Winkert [@Ho-Winkert-2023] (new critical embedding results), Liu-Dai [@Liu-Dai-2018] (superlinear problems), Papageorgiou-Rădulescu-Repovš [@Papageorgiou-Radulescu-Repovs-2019-a; @Papageorgiou-Radulescu-Repovs-2020] (property of the spectrum and ground state solutions), Perera-Squassina [@Perera-Squassina-2018] (Morse theory for double phase problems), Zhang-Rădulescu [@Zhang-Radulescu-2018] and Shi-Rădulescu-Repovš-Zhang [@Shi-Radulescu-Repovs-Zhang-2020] (double phase anisotropic variational problems with variable exponents), Zeng-Bai-Gasiński-Winkert [@Zeng-Bai-Gasinski-Winkert-2020] (implicit obstacle double phase problems), Zeng-Rădulescu-Winkert [@Zeng-Radulescu-Winkert-2022] (implicit obstacle double phase problems with mixed boundary condition), see also the references therein. The paper is organized as follows. Section [2](#musielak-orlicz-preliminaries){reference-type="ref" reference="musielak-orlicz-preliminaries"} consists of an outline of the properties of variable exponent spaces, Musielak-Orlicz spaces, their associated Sobolev spaces and other mathematical tools used later in the text. These tools include some inequalities, the quantitative deformation lemma and the Poincaré-Miranda existence theorem. In Section [3](#functional-space){reference-type="ref" reference="functional-space"} we introduce the functional space that will be used throughout the rest of the paper and we give its main characteristics. In Section [4](#energy-functional-operator){reference-type="ref" reference="energy-functional-operator"} we provide some strong properties of the differential operator of problem [\[Eq:Problem\]](#Eq:Problem){reference-type="eqref" reference="Eq:Problem"}. These are essential when applying many techniques used to study nonlinear PDEs, including our treatment. After that, in Section [5](#constant-sign-solutions){reference-type="ref" reference="constant-sign-solutions"} we prove the existence of two nontrivial solutions of our problem, one of positive sign and one of negative sign. On top of this, in Section [6](#sign-changing-solution){reference-type="ref" reference="sign-changing-solution"} we show the existence of a third nontrivial solution with changing sign using the Nehari manifold technique and variational arguments. As a closing result, in Section [7](#nodal-domains){reference-type="ref" reference="nodal-domains"} we give information on the nodal domains of this sign-changing solution. # Musielak-Orlicz spaces and preliminaries {#musielak-orlicz-preliminaries} Some of the natural ingredients to study this kind of operator are the variable exponent Lebesgue and Sobolev spaces. For this reason, we start this section with a small summary of their properties. Later in this section we will also work with the theory of Musielak-Orlicz spaces in order to build an appropriate space for this operator. 
Let $1 \leq r \leq \infty$, we denote by $L^{r}(\Omega)$ the standard Lebesgue space equipped with the norm $\|\cdot\|_r$ and by $W^{1,r}(\Omega)$ and $W^{1,r}_0(\Omega)$ the typical Sobolev spaces equipped with the norm $\|\cdot\|_{1,r}$ and, for the case $1 \leq r < \infty$, also the norm $\|\cdot\|_{1,r,0}$. Let us also denote the positive and negative parts as follows. Let $t \in \mathbb{R}$, then $t^\pm = \max\{ \pm t , 0 \}$, i.e., $t = t^+ - t^-$ and $\left|{t}\right| = t^+ + t^-$. For any function $u \colon \Omega \to \mathbb{R}$, we denote $u^\pm (x) = \left[ u(x) \right] ^\pm$ for all $x \in \Omega$. For the variable exponent case, we need to introduce some common notation. Let $r \in C(\overline{\Omega})$, we define $r_- = \min_{x \in \overline{\Omega}} r(x)$ and $r_+ = \max_{x \in \overline{\Omega}} r(x)$ and also the space $$\begin{aligned} C_+ (\Omega) = \{ r \in C(\overline{\Omega}) \,:\, 1 < r_- \}.\end{aligned}$$ Let $r \in C_+ (\Omega)$ and let $M(\Omega) = \{ u \colon \Omega \to \mathbb{R}\,:\, u \text{ is measurable}\}$, we denote by $L^{r(\cdot)}(\Omega)$ the Lebesgue space with variable exponent given by $$\begin{aligned} L^{r(\cdot)}(\Omega) = \left\lbrace u \in M(\Omega) \,:\, \varrho_{r(\cdot)} (u) < \infty \right\rbrace,\end{aligned}$$ where the modular associated with $r$ is $$\begin{aligned} \varrho_{r(\cdot)} (u) = \int_{\Omega}\left|{u}\right|^{r(x)} \mathop{\mathrm{d\textit{x}}}\end{aligned}$$ and it is equipped with its associated Luxemburg norm $$\begin{aligned} \left\|u\right\|_{r(\cdot)} = \inf \left\lbrace \lambda > 0 \,:\, \varrho_{r(\cdot)} \left( \frac{u}{\lambda} \right) \leq 1 \right\rbrace .\end{aligned}$$ These spaces have been investigated in many works, and as a result we nowadays have a comprehensive theory. One can find most of the relevant results in the book by Diening-Harjulehto-Hästö-R$\mathring{\text{u}}$žička [@Diening-Harjulehto-Hasto-Ruzicka-2011]. We know that $L^{r(\cdot)}(\Omega)$ is a separable and reflexive Banach space and its norm is uniformly convex. We also know that $\left[ L^{r(\cdot)}(\Omega) \right] ^*=L^{r'(\cdot)}(\Omega)$, where $r' \in C_+(\overline{\Omega})$ is the conjugate variable exponent of $r$ and is given by $r'(x) = r(x) / [r(x) - 1]$ for all $x \in \overline{\Omega}$, see for example [@Diening-Harjulehto-Hasto-Ruzicka-2011 Lemma 3.2.20]. In these spaces we also have a weaker version of Hölder's inequality, like the one in [@Diening-Harjulehto-Hasto-Ruzicka-2011 Lemma 3.2.20], which states that $$\begin{aligned} \int_{\Omega}|uv| \mathop{\mathrm{d\textit{x}}}\leq \left[\frac{1}{r_-}+\frac{1}{r'_-}\right] \|u\|_{r(\cdot)}\|v\|_{r'(\cdot)} \leq 2 \|u\|_{r(\cdot)}\|v\|_{r'(\cdot)} \quad \text{for all } u,v\in L^{r(\cdot)}(\Omega).\end{aligned}$$ Additionally, if $r_1, r_2\in C_+(\overline{\Omega})$ and $r_1(x) \leq r_2(x)$ for all $x\in \overline{\Omega}$, one space embeds continuously into the other, as in [@Diening-Harjulehto-Hasto-Ruzicka-2011 Theorem 3.3.1], meaning $L^{r_2(\cdot)}(\Omega) \hookrightarrow L^{r_1(\cdot)}(\Omega)$. Finally, the norm and its modular are strongly related as one can see in the following result, which can be found in the paper of Fan-Zhao [@Fan-Zhao-2001 Theorems 1.2 and 1.3]. **Proposition 1**. *Let $r\in C_+(\overline{\Omega})$, $\lambda>0$, and $u\in L^{r(\cdot)}(\Omega)$, then* 1. *$\|u\|_{r(\cdot)}=\lambda$ if and only if $\varrho_{r(\cdot)}\left(\frac{u}{\lambda}\right)=1$ with $u \neq 0$;* 2. *$\|u\|_{r(\cdot)}<1$ (resp.
$=1$, $>1$) if and only if $\varrho_{r(\cdot)}(u)<1$ (resp. $=1$, $>1$);* 3. *if $\|u\|_{r(\cdot)}<1$, then $\|u\|_{r(\cdot)}^{r_+} \leq \varrho_{r(\cdot)}(u) \leq \|u\|_{r(\cdot)}^{r_-}$;* 4. *if $\|u\|_{r(\cdot)}>1$, then $\|u\|_{r(\cdot)}^{r_-} \leq \varrho_{r(\cdot)}(u) \leq \|u\|_{r(\cdot)}^{r_+}$;* 5. *$\|u\|_{r(\cdot)} \to 0$ if and only if $\varrho_{r(\cdot)}(u)\to 0$;* 6. *$\|u\|_{r(\cdot)}\to +\infty$ if and only if $\varrho_{r(\cdot)}(u)\to +\infty$.* For our purposes we will also need the associated Sobolev spaces to the variable exponent Lebesgue spaces. These are also treated in the book by Diening-Harjulehto-Hästö-R$\mathring{\text{u}}$žička [@Diening-Harjulehto-Hasto-Ruzicka-2011]. Let $r \in C_+(\overline{\Omega})$, the Sobolev space $W^{1,r(\cdot)}(\Omega)$ is given by $$\begin{aligned} W^{1,r(\cdot)}(\Omega)=\left\{ u \in L^{r(\cdot)}(\Omega) \,:\, |\nabla u| \in L^{r(\cdot)}(\Omega)\right\},\end{aligned}$$ on it we can define the modular $$\begin{aligned} \varrho_{1, r(\cdot)} (u) = \varrho_{ r(\cdot)} (u) + \varrho_{ r(\cdot)} ( \nabla u ),\end{aligned}$$ where $\varrho_{ r(\cdot)} ( \nabla u ) = \varrho_{ r(\cdot)} ( \left|{\nabla u}\right| )$, and it is equipped with its associated Luxemburg norm $$\begin{aligned} \left\|u\right\|_{1, r(\cdot)} = \inf \left\lbrace \lambda > 0 \,:\, \varrho_{1, r(\cdot)} \left( \frac{u}{\lambda} \right) \leq 1 \right\rbrace .\end{aligned}$$ Furthermore, similarly to the standard case, we denote $$\begin{aligned} W^{1,r(\cdot)}_0(\Omega)= \overline{C^\infty _0(\Omega)}^{\|\cdot\|_{1,r(\cdot)}}.\end{aligned}$$ The spaces $W^{1,r(\cdot)}(\Omega)$ and $W^{1,r(\cdot)}_0(\Omega)$ are both separable and reflexive Banach spaces and the norm $\|\cdot\|_{1,r}$ is uniformly convex. A Poincaré inequality of the norms holds in the space $W^{1,r(\cdot)}_0(\Omega)$. One way to see this is the paper by Fan-Shen-Zhao [@Fan-Shen-Zhao-2001 Theorem 1.3], together with the standard way to derive the Poincaré inequality from the compact embedding, see for example the paper by Crespo-Blanco-Gasiński-Harjulehto-Winkert [@Crespo-Blanco-Gasinski-Harjulehto-Winkert-2022 Proposition 2.18 (ii)]. **Proposition 2**. *Let $r \in C_+(\overline{\Omega})$, then there exists $c_0>0$ such that $$\begin{aligned} \|u\|_{r(\cdot)} \leq c_0 \|\nabla u\|_{r(\cdot)} \quad\text{for all } u \in W^{1,r(\cdot)}_0(\Omega). \end{aligned}$$* Thus, we can define the equivalent norm on $W^{1,r(\cdot)}_0(\Omega)$ given by $\|u\|_{1,r(\cdot),0}=\|\nabla u\|_{r(\cdot)}$. This norm is also uniformly convex. Alternatively, assuming an additional monotonicity condition on $r$, we also have a Poincaré inequality for the modular, see the paper by Fan-Zhang-Zhao [@Fan-Zhang-Zhao-2005 Theorem 3.3]. **Proposition 3**. *Let $r \in C_+(\overline{\Omega})$ be such that there exists a vector $l \in \mathbb{R}^N \setminus \{0\}$ with the property that for all $x \in \Omega$ the function $$\begin{aligned} h_x (t) = r(x + tl) \quad \text{with } t \in I_x = \{ t \in \mathbb{R}\,:\, x + tl \in \Omega\} \end{aligned}$$ is monotone. 
Then there exists a constant $C>0$ such that $$\begin{aligned} \varrho_{r(\cdot)} (u) \leq C \varrho_{r(\cdot)} (\nabla u) \quad \text{for all } u \in W^{1,r(\cdot)}_0(\Omega), \end{aligned}$$ where $\varrho_{r(\cdot)} (\nabla u) = \varrho_{r(\cdot)} ( | \nabla u | )$.* For $r \in C_+ (\overline{\Omega})$ we introduce the critical Sobolev variable exponents $r^*$ and $r_*$ with the following expressions $$\begin{aligned} r^*(x) & = \begin{cases} \frac{N r(x)}{N - r(x) } & \text{if } r(x) < N\\ +\infty & \text{if } r(x) \geq N \end{cases} , \quad\text{for all } x \in \overline{\Omega}, \\[1ex] r_*(x) & = \begin{cases} \frac{(N -1) r(x)}{N - r(x) } & \text{if } r(x) < N \\ +\infty & \text{if } r(x) \geq N \end{cases} , \quad\text{for all } x \in \overline{\Omega}.\end{aligned}$$ Note that for any $r \in C(\overline{\Omega})$ it holds that $(r^*)_-=(r_-)^*$, so we will denote it simply by $r^*_-$. On the other hand, the space $C^{0, \frac{1}{|\log t|}}(\overline{\Omega})$ is the set of all functions $h\colon \overline{\Omega}\to \mathbb{R}$ that are log-Hölder continuous, i.e., there exists $C>0$ such that $$\begin{aligned} |h(x)-h(y)| \leq \frac{C}{|\log |x-y||}\quad\text{for all } x,y\in \overline{\Omega}\text{ with } |x-y|<\frac{1}{2}.\end{aligned}$$ In the variable exponent setting we also have embeddings analogous to the Sobolev embeddings of the constant exponent case. The next two results can be found in Crespo-Blanco-Gasiński-Harjulehto-Winkert [@Crespo-Blanco-Gasinski-Harjulehto-Winkert-2022 Propositions 2.1 and 2.2] or Ho-Kim-Winkert-Zhang [@Ho-Kim-Winkert-Zhang-2022 Propositions 2.4 and 2.5]. **Proposition 4**. *Let $r\in C^{0, \frac{1}{|\log t|}}(\overline{\Omega}) \cap C_+(\overline{\Omega})$ and let $s\in C(\overline{\Omega})$ be such that $1\leq s(x)\leq r^*(x)$ for all $x\in\overline{\Omega}$. Then, we have the continuous embedding $$\begin{aligned} W^{1,r(\cdot)}(\Omega) \hookrightarrow L^{s(\cdot) }(\Omega ). \end{aligned}$$ If $r\in C_+(\overline{\Omega})$, $s\in C(\overline{\Omega})$ and $1\leq s(x)< r^*(x)$ for all $x\in\overline{\Omega}$, then this embedding is compact.* **Proposition 5**. *Suppose that $r\in C_+(\overline{\Omega})\cap W^{1,\gamma}(\Omega)$ for some $\gamma>N$. Let $s\in C(\overline{\Omega})$ be such that $1\leq s(x)\leq r_*(x)$ for all $x\in\overline{\Omega}$. Then, we have the continuous embedding $$\begin{aligned} W^{1,r(\cdot)}(\Omega)\hookrightarrow L^{s(\cdot) }(\partial \Omega). \end{aligned}$$ If $r\in C_+(\overline{\Omega})$, $s\in C(\overline{\Omega})$ and $1\leq s(x)< r_*(x)$ for all $x\in\overline{\Omega}$, then the embedding is compact.* For the purpose of introducing the functional space mentioned in the Introduction, we now present the main features of Musielak-Orlicz spaces. Almost all definitions and results from this part of the work are from the book by Harjulehto-Hästö [@Harjulehto-Hasto-2019]. We start with some special types of growth. For the rest of this section let us denote by $(A,\Sigma,\mu)$ a $\sigma$-finite, complete measure space with $\mu \not \equiv 0$, while $\Omega$ still denotes a bounded domain in $\mathbb{R}^N$ with $N \geq 2$ and Lipschitz boundary $\partial \Omega$. **Definition 6**. *Let $\varphi \colon A \times (0,+\infty) \to \mathbb{R}$. We say that* 1. *$\varphi$ is almost increasing in the second variable if there exists $a \geq 1$ such that $\varphi(x,s) \leq a \varphi(x,t)$ for all $0 < s < t$ and for a.a. $x \in A$;* 2.
*$\varphi$ is almost decreasing in the second variable if there exists $a \geq 1$ such that $a \varphi(x,s) \geq \varphi(x,t)$ for all $0 < s < t$ and for a.a. $x \in A$.* *Let $\varphi\colon A \times (0,+\infty) \to \mathbb{R}$ and $p,q>0$. We say that $\varphi$ satisfies the property* 1. *[(Inc)]{.nodecor}$_p$ if $t^{-p}\varphi(x,t)$ is increasing in the second variable;* 2. *[(aInc)]{.nodecor}$_p$ if $t^{-p}\varphi(x,t)$ is almost increasing in the second variable;* 3. *[(Dec)]{.nodecor}$_q$ if $t^{-q}\varphi(x,t)$ is decreasing in the second variable;* 4. *[(aDec)]{.nodecor}$_q$ if $t^{-q}\varphi(x,t)$ is almost decreasing in the second variable.* *Without subindex, that is [(Inc)]{.nodecor}, [(aInc)]{.nodecor}, [(Dec)]{.nodecor} and [(aDec)]{.nodecor}, it indicates that there exists some $p>1$ or $q<\infty$ such that the condition holds.* Now we are in the position to give the definition of a generalized $\Phi$-function. **Definition 7**. *A function $\varphi\colon A \times [0,+\infty) \to [0,+\infty]$ is said to be a generalized $\Phi$-function if $\varphi$ is measurable in the first variable, increasing in the second variable and satisfies $\varphi(x,0)=0$, $\lim_{t\to 0^+} \varphi(x,t) = 0$ and $\lim_{t \to +\infty} \varphi(x,t) = +\infty$ for a.a. $x \in A$. Moreover, we say that* 1. *$\varphi$ is a generalized weak $\Phi$-function if it satisfies [(aInc)]{.nodecor}$_1$ on $A \times (0,+\infty)$;* 2. *$\varphi$ is a generalized convex $\Phi$-function if $\varphi(x,\cdot)$ is left-continuous and convex for a.a. $x \in A$;* 3. *$\varphi$ is a generalized strong $\Phi$-function if $\varphi(x,\cdot)$ is continuous in the topology of $[0,\infty]$ and convex for a.a. $x \in A$.* **Remark 8**. *A generalized strong $\Phi$-function is a generalized convex $\Phi$-function, and a generalized convex $\Phi$-function is a generalized weak $\Phi$-function; see equation (2.1.1) in the book by Harjulehto-Hästö [@Harjulehto-Hasto-2019].* Associated to each generalized $\Phi$-function, it is possible to define its conjugate function and its left-inverse. **Definition 9**. *Let $\varphi\colon A \times [0,+\infty) \to [0,+\infty]$. We denote by $\varphi^*$ the conjugate function of $\varphi$ which is defined for $x \in A$ and $s \geq 0$ by $$\begin{aligned} \varphi^*(x,s) = \sup_{t \geq 0} (ts - \varphi(x,t)). \end{aligned}$$ We denote by $\varphi^{-1}$ the left-continuous inverse of $\varphi$, defined for $x \in A$ and $s \geq 0$ by $$\begin{aligned} \varphi^{-1}(x,s) = \inf \{t \geq 0\,:\, \varphi(x,t) \geq s\}. \end{aligned}$$* The function spaces that we will build based on these generalized $\Phi$-functions can have especially nice properties if these functions fulfill some extra assumptions like the following ones. **Definition 10**. *Let $\varphi\colon A \times [0,+\infty) \to [0,+\infty]$, we say that* 1. *$\varphi$ is doubling (or satisfies the $\Delta_2$ condition) if there exists a constant $K \geq 2$ such that $$\begin{aligned} \varphi(x,2t) \leq K \varphi(x,t) \end{aligned}$$ for all $t \in (0,+\infty]$ and for a.a. $x \in A$;* 2. *$\varphi$ satisfies the $\nabla_2$ condition if $\varphi^*$ satisfies the $\Delta_2$ condition.* *Let $\Omega \subset \mathbb{R}^N$ and $\varphi\colon \Omega \times [0,+\infty) \to [0,+\infty]$ be a generalized $\Phi$-function, we say that it satisfies the condition* 1. *[(A0)]{.nodecor} if there exists $0 < \beta \leq 1$ such that $\beta \leq \varphi^{-1} (x,1) \leq \beta^{-1}$ for a.a. $x \in \Omega$;* 2. *[(A0)']{.nodecor} if there exists $0 < \beta \leq 1$ such that $\varphi(x,\beta) \leq 1 \leq \varphi(x,\beta^{-1})$ for a.a. $x \in \Omega$;* 3.
*[(A1)]{.nodecor} if there exists $0 < \beta < 1$ such that $\beta \varphi^{-1}(x,t) \leq \varphi^{-1}(y,t)$ for every $t \in [1 , 1/|B|]$ and for a.a. $x,y \in B \cap \Omega$ with every ball $B$ such that $|B| \leq 1$;* 4. *[(A1)']{.nodecor} if there exists $0 < \beta < 1$ such that $\varphi(x,\beta t) \leq \varphi(y,t)$ for every $t \geq 0$ such that $\varphi(y,t) \in [1 , 1/|B|]$ and for a.a. $x,y \in B \cap \Omega$ with every ball $B$ such that $|B| \leq 1$;* 5. *[(A2)]{.nodecor} if for every $s>0$ there exist $0 < \beta \leq 1$ and $h \in L^{1}(\Omega) \cap L^{\infty}(\Omega)$ such that $\beta \varphi^{-1}(x,t) \leq \varphi^{-1}(y,t)$ for every $t \in [h(x) + h(y) , s]$ and for a.a. $x,y \in \Omega$;* 6. *[(A2)']{.nodecor} if there exist $s>0$, $0 < \beta \leq 1$, a weak $\Phi$-function $\varphi_\infty$ (that is, one constant in the first variable) and $h \in L^{1}(\Omega) \cap L^{\infty}(\Omega)$ such that $\varphi(x,\beta t) \leq \varphi_\infty(t) + h(x)$ and $\varphi_\infty (\beta t) \leq \varphi(x,t) + h(x)$ for a.a. $x \in \Omega$ and for all $t\geq0$ such that $\varphi_\infty(t) \leq s$ and $\varphi(x,t) \leq s$.* It is important to notice that some of the conditions we already mentioned are equivalent. This is useful because, depending on the context, some conditions can be easier to check than others. For the following result, see Lemma 2.2.6, Corollary 2.4.11, Corollary 3.7.4, Corollary 4.1.6 and Lemma 4.2.7 in the book by Harjulehto-Hästö [@Harjulehto-Hasto-2019]. **Lemma 11**. *Let $\varphi\colon A \times [0,+\infty) \to [0,+\infty]$ be a generalized weak $\Phi$-function, then* 1. *it satisfies the $\Delta_2$ condition if and only if it satisfies [(aDec)]{.nodecor};* 2. *if it is a generalized convex $\Phi$-function, it satisfies the $\Delta_2$ condition if and only if it satisfies [(Dec)]{.nodecor};* 3. *it satisfies the $\nabla_2$ condition if and only if it satisfies [(aInc)]{.nodecor};* 4. *it satisfies the [(A0)]{.nodecor} condition if and only if it satisfies the [(A0)']{.nodecor} condition;* 5. *if it satisfies the [(A0)]{.nodecor} condition, the [(A1)]{.nodecor} condition holds if and only if the [(A1)']{.nodecor} condition holds;* 6. *it satisfies the [(A2)]{.nodecor} condition if and only if it satisfies the [(A2)']{.nodecor} condition.* Now we see how a Musielak-Orlicz space is defined, along with the properties it inherits from its associated $\Phi$-function. For this result, see Lemma 3.1.3, Lemma 3.2.2, Theorem 3.3.7, Theorem 3.5.2 and Theorem 3.6.6 in the book by Harjulehto-Hästö [@Harjulehto-Hasto-2019]. **Proposition 12**. *Let $\varphi\colon A \times [0,+\infty) \to [0,+\infty]$ be a generalized weak $\Phi$-function and let its associated modular be $$\begin{aligned} \varrho_\varphi(u) = \int_A \varphi(x,\left|{u(x)}\right|) \mathop{}\!\mathrm{d}\mu (x). \end{aligned}$$ Then, the set $$\begin{aligned} L^\varphi(A) = \{ u \in M(A)\,:\, \varrho_\varphi(\lambda u) < \infty \text{ for some } \lambda > 0 \} \end{aligned}$$ equipped with the associated Luxemburg quasi-norm $$\begin{aligned} \left\|u\right\|_\varphi= \inf \left\lbrace \lambda > 0 \,:\, \varrho_\varphi\left( \frac{u}{\lambda} \right) \leq 1 \right\rbrace \end{aligned}$$ is a quasi Banach space.
Furthermore, if $\varphi$ is a generalized convex $\Phi$-function, it is a Banach space; if $\varphi$ satisfies [(aDec)]{.nodecor}, it holds that $$\begin{aligned} L^\varphi(A) = \{ u\in M(A) \,:\, \varrho_\varphi(u) < \infty \}; \end{aligned}$$ if $\varphi$ satisfies [(aDec)]{.nodecor} and $\mu$ is separable, then $L^\varphi(A)$ is separable; and if $\varphi$ satisfies [(aInc)]{.nodecor} and [(aDec)]{.nodecor} it possesses an equivalent, uniformly convex norm, hence it is reflexive.* In Musielak-Orlicz spaces it is important to understand in detail the relation between the modular and the norm, because it is often used when dealing with growth conditions or convergences. Because of this, the following result is of importance; one can find it as Lemma 3.2.9 in the book of Harjulehto-Hästö [@Harjulehto-Hasto-2019]. **Proposition 13**. *Let $\varphi\colon A \times [0,+\infty) \to [0,+\infty]$ be a generalized weak $\Phi$-function that satisfies [(aInc)]{.nodecor}$_p$ and [(aDec)]{.nodecor}$_q$, with $1 \leq p \leq q < \infty$. Then $$\begin{aligned} \frac{1}{a} \min \left\lbrace \left\|u\right\|_\varphi^p , \left\|u\right\|_\varphi^q \right\rbrace \leq \varrho_\varphi(u) \leq a \max \left\lbrace \left\|u\right\|_\varphi^p , \left\|u\right\|_\varphi^q \right\rbrace \end{aligned}$$ for all measurable functions $u \colon A \to \mathbb{R}$, where $a$ is the maximum of the constants of [(aInc)]{.nodecor}$_p$ and [(aDec)]{.nodecor}$_q$.* There are embedding relations between the Musielak-Orlicz spaces depending on the chosen function. The following result characterizes these relations and can be found as Theorem 3.2.6 of the book by Harjulehto-Hästö [@Harjulehto-Hasto-2019]. **Proposition 14**. *Let $\varphi, \psi \colon A \times [0,+\infty) \to [0,+\infty]$ be generalized weak $\Phi$-functions and let $\mu$ be atomless. Then $L^\varphi(A) \hookrightarrow L^\psi (A)$ if and only if there exist $K>0$ and $h \in L^1 (A)$ with $\left\|h\right\|_1 \leq 1$ such that for all $t \geq 0$ and for a.a. $x \in A$ $$\begin{aligned} \psi\left( x,\frac{t}{K} \right) \leq \varphi(x,t) + h(x). \end{aligned}$$* In Musielak-Orlicz spaces we even have a Hölder inequality based on the chosen function; see the following result, which can be found as Lemma 3.2.11 of the book by Harjulehto-Hästö [@Harjulehto-Hasto-2019]. **Proposition 15**. *Let $\varphi\colon A \times [0,+\infty) \to [0,+\infty]$ be a generalized weak $\Phi$-function, then $$\begin{aligned} \int_A \left|{u}\right| \left|{v}\right| \mathop{}\!\mathrm{d}\mu (x) \leq 2 \left\|u\right\|_{\varphi} \left\|v\right\|_{\varphi^*} \quad \text{for all } u \in L^{\varphi}(A), v \in L^{\varphi^*}(A). \end{aligned}$$ Moreover, the constant $2$ is sharp.* Lastly, one can define the Sobolev spaces associated with these Musielak-Orlicz spaces analogously to the classical case. The following result can be found as Theorem 6.1.4 and Theorem 6.1.9 of the book by Harjulehto-Hästö [@Harjulehto-Hasto-2019]. **Proposition 16**. *Let $\varphi\colon \Omega \times [0,+\infty) \to [0,+\infty]$ be a generalized weak $\Phi$-function such that $L^{\varphi}(\Omega) \subseteq L^1_{\mathop{\mathrm{loc}}} (\Omega)$ and $k\geq 1$.
Then, the set $$\begin{aligned} W^{k,\varphi} (\Omega) = \{ u \in L^{\varphi}(\Omega) \,:\, \partial_\alpha u \in L^{\varphi}(\Omega) \text{ for all } \left|{\alpha}\right| \leq k \}, \end{aligned}$$ where we consider the modular $$\begin{aligned} \varrho_{k,\varphi} (u) = \sum_{0 \leq \left|{\alpha}\right| \leq k } \varrho_\varphi(\partial_\alpha u) \end{aligned}$$ and the associated Luxemburg quasi-norm $$\begin{aligned} \left\|u\right\|_{k,\varphi} = \inf \left\lbrace \lambda > 0 : \varrho_{k,\varphi} \left( \frac{u}{\lambda} \right) \leq 1 \right\rbrace \end{aligned}$$ is a quasi Banach space. Analogously, the set $$\begin{aligned} W^{k,\varphi}_0 (\Omega) = \overline{C_0^\infty (\Omega)}^{\left\|\cdot\right\|_{k,\varphi}}, \end{aligned}$$ where $C_0^\infty (\Omega)$ are the $C^\infty (\Omega)$ functions with compact support, equipped with the same modular and norm, is also a quasi Banach space.* *Furthermore, if $\varphi$ is a generalized convex $\Phi$-function, both spaces are Banach spaces; if $\varphi$ satisfies [(aDec)]{.nodecor}, then they are separable; and if $\varphi$ satisfies [(aInc)]{.nodecor} and [(aDec)]{.nodecor} they possess an equivalent, uniformly convex norm, hence they are reflexive.* The following statement cannot be found explicitly written in the book by Harjulehto-Hästö [@Harjulehto-Hasto-2019]. However, due to the form of $\varrho_{k,\varphi} (\cdot)$ and $\|\cdot\|_{k,\varphi}$, one can repeat step by step the arguments of the proof of Lemma 3.2.9 of this same book to obtain it. **Proposition 17**. *Let $\varphi\colon \Omega \times [0,+\infty) \to [0,+\infty]$ be a generalized weak $\Phi$-function that satisfies [(aInc)]{.nodecor}$_p$ and [(aDec)]{.nodecor}$_q$, with $1 \leq p \leq q < \infty$. Then $$\begin{aligned} \frac{1}{a} \min \left\lbrace \left\|u\right\|_{k,\varphi}^p , \left\|u\right\|_{k,\varphi}^q \right\rbrace \leq \varrho_{k,\varphi} (u) \leq a \max \left\lbrace \left\|u\right\|_{k,\varphi}^p , \left\|u\right\|_{k,\varphi}^q \right\rbrace \end{aligned}$$ for all $u \in W^{k,\varphi} (\Omega)$, where $a$ is the maximum of the constants of [(aInc)]{.nodecor}$_p$ and [(aDec)]{.nodecor}$_q$.* One might also wonder when smooth functions are dense in a Sobolev Musielak-Orlicz space. Sufficient conditions are given in the next result taken from Theorem 6.4.7 of the book by Harjulehto-Hästö [@Harjulehto-Hasto-2019]. **Theorem 18**. *Let $\varphi\colon \Omega \times [0,+\infty) \to [0,+\infty]$ be a generalized weak $\Phi$-function that satisfies [(A0)]{.nodecor}, [(A1)]{.nodecor}, [(A2)]{.nodecor} and [(aDec)]{.nodecor}. Then $C^\infty (\Omega) \cap W^{k,\varphi} (\Omega)$ is dense in $W^{k,\varphi} (\Omega)$.* As a penultimate step, let us recall a few properties of the $\log$ function that are useful when dealing with logarithmic growth next to power-law growth. For $x,y \geq 0$ and $C \geq 1$, $$\begin{aligned} \label{Eq:LogGrowthProduct} \log (e + xy) \leq \log (e + x ) + \log (e + y), \qquad \log (e + C x) \leq C \log (e + x)\end{aligned}$$ and for $s,t \geq 0$ and $q \geq 1$, $$\label{Eq:LogGrowthSum} \begin{aligned} (s + t)^q \log (e + s + t) & \leq (2s)^q \log (e + 2s) + (2t)^q \log (e + 2t) \\ & \leq 2^{q+1} s^q \log (e + s) + 2^{q+1} t^q \log(e + t). \end{aligned}$$ Lastly, we give here the statements of a version of the mountain pass theorem, the quantitative deformation lemma and the Poincaré-Miranda existence theorem, all of which will be used later.
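Before turning to these abstract tools, the following short sketch spot-checks the elementary logarithm estimates above on random samples. It is a plain numerical sanity check with illustrative sampling ranges and plays no role in any argument of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# first display: x, y >= 0 and C >= 1
x = rng.uniform(0.0, 100.0, n)
y = rng.uniform(0.0, 100.0, n)
C = rng.uniform(1.0, 10.0, n)
gap_product = np.log(np.e + x * y) - (np.log(np.e + x) + np.log(np.e + y))
gap_constant = np.log(np.e + C * x) - C * np.log(np.e + x)

# second display: s, t >= 0 and q >= 1
s = rng.uniform(0.0, 20.0, n)
t = rng.uniform(0.0, 20.0, n)
q = rng.uniform(1.0, 5.0, n)
lhs = (s + t)**q * np.log(np.e + s + t)
mid = (2 * s)**q * np.log(np.e + 2 * s) + (2 * t)**q * np.log(np.e + 2 * t)
rhs = 2**(q + 1) * (s**q * np.log(np.e + s) + t**q * np.log(np.e + t))

# all four gaps are expected to be <= 0 up to floating-point rounding
print(gap_product.max(), gap_constant.max(), (lhs - mid).max(), (mid - rhs).max())
```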
Let $X$ be a Banach space, we say that a functional $\varphi\colon X \to \mathbb{R}$ satisfies the Cerami condition or [C]{.nodecor}-condition if every sequence $\{u_n\}_{n \in \mathbb{N}} \subseteq X$ such that $\{ \varphi(u_n) \}_{n \in \mathbb{N}} \subseteq \mathbb{R}$ is bounded and $$\begin{aligned} ( 1 + \| u_n \| )\varphi'(u_n) \to 0 \quad\text{as } n \to \infty\end{aligned}$$ contains a strongly convergent subsequence. Furthermore, we say that it satisfies the Cerami condition at the level $c \in \mathbb{R}$ or the [C$_c$]{.nodecor}-condition if the same holds for all sequences with $\varphi(u_n) \to c$ as $n \to \infty$ instead of all sequences with $\{ \varphi(u_n) \}_{n \in \mathbb{N}}$ bounded. The proof of the following mountain pass theorem can be found in the book by Papageorgiou-Rădulescu-Repovš [@Papageorgiou-Radulescu-Repovs-2019a Theorem 5.4.6]. For the quantitative deformation lemma after it we refer to the book by Willem [@Willem-1996 Lemma 2.3], and regarding the Poincaré-Miranda existence theorem, the proof can be found in the book by Dinca-Mawhin [@Dinca-Mawhin-2021 Corollary 2.2.15]. **Theorem 19** (Mountain pass theorem). *Let $X$ be a Banach space and suppose $\varphi\in C^1(X)$, $u_0, u_1 \in X$ with $\| u_1 - u_0 \| > \delta > 0$, $$\begin{aligned} \max\{\varphi(u_0), \varphi(u_1)\} & \leq \inf\{\varphi(u) \,:\, \| u - u_0 \| = \delta \} = m_\delta, \\ c = \inf_{ \gamma \in \Gamma} \max_{ 0 \leq t \leq 1} \varphi(\gamma (t)) \text{ with } \Gamma & = \{\gamma \in C([0, 1], X) \,:\, \gamma(0) = u_0, \gamma(1) = u_1\} \end{aligned}$$ and $\varphi$ satisfies the [C$_c$]{.nodecor}-condition. Then $c \geq m_\delta$ and $c$ is a critical value of $\varphi$. Moreover, if $c = m_\delta$, then there exists $u \in \partial B_\delta (u_0)$ such that $\varphi'(u) = 0$.* **Lemma 20** (Quantitative deformation lemma). *Let $X$ be a Banach space, $\varphi\in C^1(X;\mathbb{R})$, $\emptyset \neq S \subseteq X$, $c \in \mathbb{R}$, $\varepsilon,\delta > 0$ such that for all $u \in \varphi^{-1}([c - 2\varepsilon, c + 2\varepsilon]) \cap S_{2 \delta}$ there holds $\| \varphi'(u) \|_* \geq 8\varepsilon/ \delta$, where $S_{r} = \{ u \in X \,:\, d(u,S) = \inf_{u_0 \in S} \| u - u_0 \| < r \}$ for any $r > 0$. Then there exists $\eta \in C([0, 1] \times X; X)$ such that* 1. *$\eta (t, u) = u$, if $t = 0$ or if $u \notin \varphi^{-1}([c - 2\varepsilon, c + 2\varepsilon]) \cap S_{2 \delta}$;* 2. *$\varphi( \eta( 1, u ) ) \leq c - \varepsilon$ for all $u \in \varphi^{-1} ( ( - \infty, c + \varepsilon] ) \cap S$;* 3. *$\eta(t, \cdot )$ is a homeomorphism of $X$ for all $t \in [0,1]$;* 4. *$\| \eta(t, u) - u \| \leq \delta$ for all $u \in X$ and $t \in [0,1]$;* 5. *$\varphi( \eta( \cdot , u))$ is decreasing for all $u \in X$;* 6. *$\varphi(\eta(t, u)) < c$ for all $u \in \varphi^{-1} ( ( - \infty, c] ) \cap S_\delta$ and $t \in (0, 1]$.* **Theorem 21** (Poincaré-Miranda existence theorem). *Let $P = [-t_1, t_1] \times \cdots \times [-t_N, t_N]$ with $t_i > 0$ for $i \in \{1, \ldots, N\}$ and $d \colon P \to \mathbb{R}^N$ be continuous.
If for each $i \in \{1, \ldots, N\}$ one has $$\begin{aligned} \begin{aligned} d_i (a) &\leq 0 \quad\text{when } a \in P \text{ and } a_i = -t_i,\\ d_i (a) &\geq 0 \quad\text{when } a \in P \text{ and } a_i = t_i, \end{aligned} \end{aligned}$$ then $d$ has at least one zero in $P$.* # Logarithmic function space {#functional-space} In light of the previous section, we now choose an appropriate $\Phi$-function that allows us to study our problem [\[Eq:Problem\]](#Eq:Problem){reference-type="eqref" reference="Eq:Problem"}. Let $\ensuremath{\mathcal{H}_{\log}}\colon \overline{\Omega}\times [0,\infty) \to [0,\infty)$ be given by $$\begin{aligned} \ensuremath{\mathcal{H}_{\log}}(x,t) = t^{p(x)} + \mu(x) t^{q(x)} \log (e + t),\end{aligned}$$ where we assume the following conditions: 1. [\[Assump:Space\]]{#Assump:Space label="Assump:Space"} $\Omega \subseteq \mathbb{R}^N$, with $N \geq 2$, is a bounded domain with Lipschitz boundary $\partial \Omega$, $p,q \in C_+ (\overline{\Omega})$ with $p(x) \leq q(x)$ for all $x \in \overline{\Omega}$ and $\mu \in L^{1}(\Omega)$. We can see that this $\Phi$-function satisfies some important properties from the previous section. First we prove the following lemma. **Lemma 22**. *The function $f_\varepsilon\colon [0,+\infty) \to [0,+\infty)$ given by $$\begin{aligned} f_\varepsilon(t) = \frac{ t^\varepsilon} { \log (e + t) } \end{aligned}$$ is increasing for $\varepsilon\geq \kappa$ and almost increasing for $0 < \varepsilon< \kappa$ with constant $a_\varepsilon$, where $\kappa = e/(e + t_0)$, with $t_0$ being the only positive solution of $t_0 = e \log(e + t_0)$.* *Proof.* It is immediate to check that $f_\varepsilon' (t) > 0$ ($= 0, < 0$) if and only if $$\begin{aligned} \varepsilon(e + t) \log (e + t) - t > 0 \quad ( = 0, < 0). \end{aligned}$$ Hence we are interested in when the function $$\begin{aligned} g (t) = \frac { t }{ (e + t) \log (e + t) } \end{aligned}$$ satisfies $g (t) < \varepsilon$ ($= \varepsilon, > \varepsilon$). Arguing similarly, $g ' (t) > 0$ ($= 0, < 0$) if and only if $$\begin{aligned} e \log (e+t) - t > 0 \quad ( = 0, < 0). \end{aligned}$$ Therefore we now look at the function $$\begin{aligned} h(t) = e \log (e+t) - t, \end{aligned}$$ which is strictly decreasing, strictly positive at $0$ and $-\infty$ at $+\infty$. Thus $g$ achieves its global maximum at $t_0$ defined by $h(t_0) = 0$ and it holds that $g(t_0) = \kappa$. This proves the case $\varepsilon\geq \kappa$. If $0 < \varepsilon< \kappa$, then there exist $t_{1,\varepsilon} < t_0 < t_{2, \varepsilon}$ such that $t_{1,\varepsilon}$ is the unique local maximum of $f_\varepsilon$ and $t_{2,\varepsilon}$ is the unique local minimum of $f_\varepsilon$ apart from $0$. Since $f_\varepsilon$ is increasing in $(0,t_{1,\varepsilon})$ and in $(t_{2,\varepsilon},\infty)$, $f_\varepsilon$ is almost increasing if and only if there exists $a_\varepsilon> 0$ such that $f_\varepsilon(t_{1,\varepsilon}) / f_\varepsilon(t_{2,\varepsilon}) \leq a_\varepsilon$, which is trivially true. ◻ **Remark 23**. *Note that with the choice of $a_\varepsilon$ in the proof of the previous result we cannot ensure that there exists a constant uniform in $\varepsilon$. On the other hand, note also that $t_0 \simeq 5.8340$ and $\kappa \simeq 0.31784$.* **Lemma 24**. *Let [\[Assump:Space\]](#Assump:Space){reference-type="eqref" reference="Assump:Space"} be satisfied, then $\ensuremath{\mathcal{H}_{\log}}$ is a generalized strong $\Phi$-function and it fulfills* 1. *[(Inc)]{.nodecor}$_{p_-}$;* 2. 
*[(Dec)]{.nodecor}$_{q_+ + \kappa}$;* 3. *[(aDec)]{.nodecor}$_{q_+ + \varepsilon}$ for $0 < \varepsilon< \kappa$ and with constant $a_\varepsilon$,* *where $\kappa$ and $a_\varepsilon$ are the same as in Lemma [Lemma 22](#Le:fepsilon){reference-type="ref" reference="Le:fepsilon"}.* *Proof.* First we see that it is a generalized strong $\Phi$-function. Since the other conditions are straightforward, we only need to check the convexity in the second variable, which follows from $$\begin{aligned} & {\partial_t}^2\ensuremath{\mathcal{H}_{\log}}(x,t)\\ & = p(x) ( p(x) - 1) t^{p(x) - 2} \\ & \quad + \mu(x) t^{q(x) -2} \left[ q(x)(q(x) - 1)\log (e + t) + (2q(x) - 1)\frac{t}{e+t} + \frac{et}{(e+t)^2} \right] > 0 \end{aligned}$$ for all $t>0$ and for a.a. $x \in \Omega$. Now we prove [(i)]{.nodecor}, [(ii)]{.nodecor} and [(iii)]{.nodecor}. Notice that $$\begin{aligned} \frac{\ensuremath{\mathcal{H}_{\log}}(x,t)}{t^{p_-}} = t^{p(x) - p_-} + \mu(x) t^{q(x) - p_-} \log (e + t) \end{aligned}$$ is an increasing function because all the exponents are positive. Similarly, by Lemma [Lemma 22](#Le:fepsilon){reference-type="ref" reference="Le:fepsilon"} $$\begin{aligned} \frac{\ensuremath{\mathcal{H}_{\log}}(x,t)}{t^{q_+ + \varepsilon}} = t^{p(x) - q_+ -\varepsilon} + \mu(x) t^{q(x) - q_+} \frac{\log (e + t)}{t^{\varepsilon}} \end{aligned}$$ is a decreasing function when $\varepsilon\geq \kappa$ and almost decreasing when $0 < \varepsilon< \kappa$, with the constant $a_\varepsilon$ from Lemma [Lemma 22](#Le:fepsilon){reference-type="ref" reference="Le:fepsilon"}. ◻ As a consequence of the previous result, we obtain the following. **Proposition 25**. *Let [\[Assump:Space\]](#Assump:Space){reference-type="eqref" reference="Assump:Space"} be satisfied, then $L^{\ensuremath{\mathcal{H}_{\log}}}(\Omega)$ is a separable, reflexive Banach space and the following hold:* 1. *$\left\|u\right\|_{\ensuremath{\mathcal{H}_{\log}}} = \lambda$ if and only if $\ensuremath{\varrho_{\ensuremath{\mathcal{H}_{\log}}} \left( \frac{u}{\lambda} \right)} = 1$ for $u \neq 0$ and $\lambda>0$;* 2. *$\left\|u\right\|_{\ensuremath{\mathcal{H}_{\log}}} < 1$ (resp. $=1$, $>1$) if and only if $\ensuremath{\varrho_{\ensuremath{\mathcal{H}_{\log}}} \left(u\right)} < 1$ (resp. $=1$, $>1$);* 3. *$\min\left\lbrace \left\|u\right\|_{\ensuremath{\mathcal{H}_{\log}}}^{p_-} , \left\|u\right\|_{\ensuremath{\mathcal{H}_{\log}}}^{q_+ + \kappa} \right\rbrace \leq \ensuremath{\varrho_{\ensuremath{\mathcal{H}_{\log}}} \left(u\right)} \leq \max\left\lbrace \left\|u\right\|_{\ensuremath{\mathcal{H}_{\log}}}^{p_-} , \left\|u\right\|_{\ensuremath{\mathcal{H}_{\log}}}^{q_+ + \kappa} \right\rbrace$ for $\kappa>0$ as in Lemma [Lemma 22](#Le:fepsilon){reference-type="ref" reference="Le:fepsilon"};* 4. *$\frac{1}{a_\varepsilon}\min\left\lbrace \left\|u\right\|_{\ensuremath{\mathcal{H}_{\log}}}^{p_-} , \left\|u\right\|_{\ensuremath{\mathcal{H}_{\log}}}^{q_+ + \varepsilon} \right\rbrace \leq \ensuremath{\varrho_{\ensuremath{\mathcal{H}_{\log}}} \left(u\right)} \leq a_\varepsilon\max\left\lbrace \left\|u\right\|_{\ensuremath{\mathcal{H}_{\log}}}^{p_-} , \left\|u\right\|_{\ensuremath{\mathcal{H}_{\log}}}^{q_+ + \varepsilon} \right\rbrace$ for $0 < \varepsilon< \kappa$, where $\kappa$ and $a_\varepsilon$ are the same as in Lemma [Lemma 22](#Le:fepsilon){reference-type="ref" reference="Le:fepsilon"};* 5. *$\left\|u\right\|_{\ensuremath{\mathcal{H}_{\log}}} \to 0$ if and only if $\ensuremath{\varrho_{\ensuremath{\mathcal{H}_{\log}}} \left(u\right)} \to 0$;* 6. 
*$\left\|u\right\|_{\ensuremath{\mathcal{H}_{\log}}} \to \infty$ if and only if $\ensuremath{\varrho_{\ensuremath{\mathcal{H}_{\log}}} \left(u\right)} \to \infty$.* *Proof.* First, by Proposition [Proposition 12](#Prop:AbstractBanach){reference-type="ref" reference="Prop:AbstractBanach"} and Lemma [Lemma 24](#Le:PropertiesHlog){reference-type="ref" reference="Le:PropertiesHlog"}, we know that $L^{\ensuremath{\mathcal{H}_{\log}}}(\Omega)$ is a separable, reflexive Banach space. For [(i)]{.nodecor} and [(ii)]{.nodecor}, note that the function $\lambda \mapsto \ensuremath{\varrho_{\ensuremath{\mathcal{H}_{\log}}} \left(u/\lambda\right)}$ with $\lambda > 0$ is continuous, convex and strictly decreasing. This directly implies [(i)]{.nodecor}, and [(i)]{.nodecor} together with the strict monotonicity yields [(ii)]{.nodecor}. Finally, [(iii)]{.nodecor} and [(iv)]{.nodecor} follow from Proposition [Proposition 13](#Prop:AbstractNormModular){reference-type="ref" reference="Prop:AbstractNormModular"} and Lemma [Lemma 24](#Le:PropertiesHlog){reference-type="ref" reference="Le:PropertiesHlog"}; and [(v)]{.nodecor} and [(vi)]{.nodecor} follow from [(iii)]{.nodecor}. ◻ This space satisfies the following embeddings. As in the usual notation, given $r \in C_+(\overline{\Omega})$, let $L^{r(x)} \log L (\Omega) = L^{\zeta}(\Omega)$, where $\zeta(x,t) = t^{r(x)}\log (e + t)$. **Proposition 26**. *Let [\[Assump:Space\]](#Assump:Space){reference-type="eqref" reference="Assump:Space"} be satisfied and let $\varepsilon> 0$, then it holds $$\begin{aligned} L^{q(\cdot) + \varepsilon}(\Omega) \hookrightarrow L^{q(\cdot)} \log L (\Omega) \quad\text{and}\quad L^{\ensuremath{\mathcal{H}_{\log}}}(\Omega)\hookrightarrow L^{p(\cdot)}(\Omega). \end{aligned}$$ If we further assume $\mu \in L^{\infty}(\Omega)$, then it also holds $L^{q(\cdot)} \log L (\Omega) \hookrightarrow L^{\ensuremath{\mathcal{H}_{\log}}}(\Omega)$.* *Proof.* We will prove all the embeddings by applying Proposition [Proposition 14](#Prop:AbstractEmbedding){reference-type="ref" reference="Prop:AbstractEmbedding"} to the corresponding $\Phi$-functions. First, for any $K>0$, by the inequality $\log (e + t) \leq C_\varepsilon+ t^\varepsilon$, valid for some constant $C_\varepsilon>0$ depending only on $\varepsilon$, it holds that $$\begin{aligned} \left( \frac{t}{K} \right) ^{q(x)} \log \left( e + \frac{t}{K} \right) \leq \frac{ t^{q(x) + \varepsilon} + 1 }{ K^{q(x)} } C_\varepsilon+ \frac{ t^{q(x) + \varepsilon} }{ K^{q(x) + \varepsilon} } \end{aligned}$$ and if we choose $K \geq \max \{ 2^{1/(q_- + \varepsilon)} , (2 C_\varepsilon)^{1/q_-} , ( C_\varepsilon|\Omega| )^{1/q_-} \}$, it follows $$\begin{aligned} \left( \frac{t}{K} \right) ^{q(x)} \log \left( e + \frac{t}{K} \right) \leq \frac{ C_\varepsilon}{ K^{q(x)} } + t^{ q(x) + \varepsilon}, \quad \text{ with } \int_{\Omega}\frac{ C_\varepsilon}{ K^{q(x)} } \mathop{\mathrm{d\textit{x}}}\leq 1. \end{aligned}$$ This concludes the proof of the first embedding. The condition for the second embedding is straightforward to verify.
Finally, for any $K>0$, $$\begin{aligned} \ensuremath{\mathcal{H}_{\log}}\left( x, \frac{t}{K} \right) \leq \frac{1}{K^{p(x)}} + \left( \frac{1}{K^{p(x)}} + \frac{ \left\|\mu\right\|_{\infty} }{K^{q(x)}} \right) t^{q(x)} \log \left( e + \frac{t}{K} \right) \end{aligned}$$ and if we choose $K \geq \max \{ 2^{1/p_-} , ( 2 \left\|\mu\right\|_{\infty} )^{1/q_-} , |\Omega| ^{1/p_-} \}$, it follows $$\begin{aligned} \ensuremath{\mathcal{H}_{\log}}\left( x, \frac{t}{K} \right) \leq \frac{1}{K^{p(x)}} + t^{q(x)} \log \left( e + t \right) \quad \text{with } \int_{\Omega}\frac{1}{K^{p(x)}} \mathop{\mathrm{d\textit{x}}}\leq 1. \end{aligned}$$ This shows the proof of the third embedding. ◻ For our purposes we further need to work on the associated Sobolev space, whose properties are summarized in the following statement. Its proof is completely analogous to the proof of Proposition [Proposition 25](#Prop:HlogModularNorm){reference-type="ref" reference="Prop:HlogModularNorm"} except that now we use Proposition [Proposition 16](#Prop:AbstractSobolev){reference-type="ref" reference="Prop:AbstractSobolev"} (instead of Proposition [Proposition 12](#Prop:AbstractBanach){reference-type="ref" reference="Prop:AbstractBanach"}) and Proposition [Proposition 17](#Prop:AbstractoneNormModular){reference-type="ref" reference="Prop:AbstractoneNormModular"} (instead of Proposition [Proposition 13](#Prop:AbstractNormModular){reference-type="ref" reference="Prop:AbstractNormModular"}). **Proposition 27**. *Let [\[Assump:Space\]](#Assump:Space){reference-type="eqref" reference="Assump:Space"} be satisfied, then $W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)$ and $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ are separable, reflexive Banach spaces and the following hold:* 1. *$\left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}} = \lambda$ if and only if $\ensuremath{\varrho_{1,\ensuremath{\mathcal{H}_{\log}}} \left( \frac{u}{\lambda} \right)} = 1$ for $u \neq 0$ and $\lambda>0$;* 2. *$\left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}} < 1$ (resp. $=1$, $>1$) if and only if $\ensuremath{\varrho_{1,\ensuremath{\mathcal{H}_{\log}}} \left(u\right)} < 1$ (resp. $=1$, $>1$);* 3. *$\min\left\lbrace \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}}^{p_-} , \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}}^{q_+ + \kappa} \right\rbrace \leq \ensuremath{\varrho_{1,\ensuremath{\mathcal{H}_{\log}}} \left(u\right)} \leq \max\left\lbrace \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}}^{p_-} , \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}}^{q_+ + \kappa} \right\rbrace$;* 4. *$$\begin{aligned} &\frac{1}{a_\varepsilon}\min\left\lbrace \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}}^{p_-} , \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}}^{q_+ + \varepsilon} \right\rbrace\\ &\leq \ensuremath{\varrho_{1,\ensuremath{\mathcal{H}_{\log}}} \left(u\right)} \leq a_\varepsilon\max\left\lbrace \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}}^{p_-} , \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}}^{q_+ + \varepsilon} \right\rbrace \end{aligned}$$ for $0 < \varepsilon< \kappa$, where $\kappa$ and $a_\varepsilon$ are the same as in Lemma [Lemma 22](#Le:fepsilon){reference-type="ref" reference="Le:fepsilon"};* 5. *$\left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}} \to 0$ if and only if $\ensuremath{\varrho_{1,\ensuremath{\mathcal{H}_{\log}}} \left(u\right)} \to 0$;* 6. 
*$\left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}} \to \infty$ if and only if $\ensuremath{\varrho_{1,\ensuremath{\mathcal{H}_{\log}}} \left(u\right)} \to \infty$.* These Sobolev spaces satisfy the following embeddings. **Proposition 28**. *Let [\[Assump:Space\]](#Assump:Space){reference-type="eqref" reference="Assump:Space"} be satisfied, then the following hold:* 1. *$W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)\hookrightarrow W^{1,p(\cdot)}(\Omega)$ and $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\hookrightarrow W^{1,p(\cdot)}_0(\Omega)$ are continuous;* 2. *if $p \in C_+(\overline{\Omega}) \cap C^{0, \frac{1}{|\log t|}}(\overline{\Omega})$, then $W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)\hookrightarrow L^{p^*(\cdot)}(\Omega)$ and $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ $\hookrightarrow L^{p^*(\cdot)}(\Omega)$ are continuous;* 3. *$W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)\hookrightarrow L^{r(\cdot)}(\Omega)$ and $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\hookrightarrow L^{r(\cdot)}(\Omega)$ are compact for $r \in C(\overline{\Omega})$ with $1 \leq r(x) < p^*(x)$ for all $x\in \overline{\Omega}$;* 4. *if $p \in C_+(\overline{\Omega}) \cap W^{1,\gamma}(\Omega)$ for some $\gamma>N$, then $W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)\hookrightarrow L^{p_*(\cdot)}(\partial\Omega)$ and $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\hookrightarrow L^{p_*(\cdot)}(\partial\Omega)$ are continuous;* 5. *$W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)\hookrightarrow L^{r(\cdot)}(\partial\Omega)$ and $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\hookrightarrow L^{r(\cdot)}(\partial\Omega)$ are compact for $r \in C(\overline{\Omega})$ with $1 \leq r(x) < p_*(x)$ for all $x\in \overline{\Omega}$.* *Proof.* The proof of [(i)]{.nodecor} follows directly from Proposition [Proposition 26](#Prop:EmbeddingHlog){reference-type="ref" reference="Prop:EmbeddingHlog"}. The proofs of [(ii)]{.nodecor} - [(v)]{.nodecor} follow from [(i)]{.nodecor} and the usual Sobolev embeddings of $W^{1,p(\cdot)}(\Omega)$ and $W^{1,p(\cdot)}_0(\Omega)$ in Propositions [Proposition 4](#Prop:classicalembedd){reference-type="ref" reference="Prop:classicalembedd"} and [Proposition 5](#Prop:classicalembedd:boundary){reference-type="ref" reference="Prop:classicalembedd:boundary"}. ◻ We also have the property that the truncation of functions on $W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)$ and $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ stays within the space, as is proven in the next result. **Proposition 29**. *Let [\[Assump:Space\]](#Assump:Space){reference-type="eqref" reference="Assump:Space"} be satisfied, then* 1. *if $u \in W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)$, then $u^\pm \in W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)$ with $\nabla (\pm u^\pm) = \nabla u 1_{ \{ \pm u > 0 \} }$;* 2. *if $u_n \to u$ in $W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)$, then $u_n^\pm \to u^\pm$ in $W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)$;* 3. *if we further assume $\mu \in L^{\infty}(\Omega)$, then $u \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ implies $u^\pm \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$.* *Proof.* Part [(i)]{.nodecor} follows from the classical case.
Indeed, let $r \in \mathbb{R}$ with $1 \leq r \leq \infty$, then for any $u \in W^{1,r}(\Omega)$ we know that $u^\pm \in W^{1,r}(\Omega)$ and $\nabla (\pm u^\pm) = \nabla u 1_{ \{ \pm u > 0 \} }$; see, for example, the book by Heinonen-Kilpeläinen-Martio [@Heinonen-Kilpelainen-Martio-2006 Lemma 1.19]. This applies here because, by Proposition [Proposition 28](#Prop:EmbeddingHlogSobolev){reference-type="ref" reference="Prop:EmbeddingHlogSobolev"} [(i)]{.nodecor}, $u \in W^{1,p_-}(\Omega)$ and $\left|{\pm u ^\pm}\right| \leq \left|{u}\right|$. For part [(ii)]{.nodecor}, note again that $\left|{\pm u_n ^\pm \mp u^\pm}\right| \leq \left|{u_n - u}\right|$. Then by Proposition [Proposition 27](#Prop:oneHlogModularNorm){reference-type="ref" reference="Prop:oneHlogModularNorm"} [(v)]{.nodecor}, we have that $\ensuremath{\varrho_{\ensuremath{\mathcal{H}_{\log}}} \left(u_n^\pm - u^\pm\right)} \to 0$. It only remains to verify the convergence of the gradient terms. We only treat the logarithmic term; the other one can be handled similarly. Note that $\left|{\pm \nabla u_n^\pm \mp \nabla u^\pm}\right| = \left|{1_{\{ \pm u_n > 0 \}} \nabla u_n - 1_{\{ \pm u > 0 \}} \nabla u}\right|$, then by [\[Eq:LogGrowthSum\]](#Eq:LogGrowthSum){reference-type="eqref" reference="Eq:LogGrowthSum"} we have $$\begin{aligned} & \int_{\Omega}\mu(x) \left|{\pm \nabla u_n^\pm \mp \nabla u^\pm}\right|^{q(x)} \log (e + \left|{\pm \nabla u_n^\pm \mp \nabla u^\pm}\right| ) \mathop{\mathrm{d\textit{x}}}\\ & \leq 2^{q_+ + 1} \int_{\Omega}\mu(x) \left|{ \nabla u_n - \nabla u }\right| ^{q(x)} \log(e + \left|{ \nabla u_n - \nabla u }\right| ) \mathop{\mathrm{d\textit{x}}}\\ & \quad + 2^{q_+ + 1} \int_{\Omega}\mu(x)\left|{ \nabla u}\right|^{q(x)} \left|{ 1_{\{ \pm u_n > 0 \}} - 1_{\{ \pm u > 0 \}} }\right| ^{q(x)} \\ & \qquad\qquad\qquad \times \log (e + \left|{\nabla u}\right| \left|{ 1_{\{ \pm u_n > 0 \}} - 1_{\{ \pm u > 0 \}} }\right| ) \mathop{\mathrm{d\textit{x}}}. \end{aligned}$$ The first term on the right-hand side converges to zero by Proposition [Proposition 25](#Prop:HlogModularNorm){reference-type="ref" reference="Prop:HlogModularNorm"} [(v)]{.nodecor}. The second one also converges to zero by taking an a.e. convergent subsequence, using the dominated convergence theorem and then using the subsequence principle. For the application of the dominated convergence theorem, take into account that $\nabla u = 0$ a.e. on the set $\{ u = 0 \}$ by [(i)]{.nodecor}. All in all, this yields $\ensuremath{\varrho_{1,\ensuremath{\mathcal{H}_{\log}}} \left(u_n^\pm - u^\pm\right)} \to 0$ and the final result by Proposition [Proposition 27](#Prop:oneHlogModularNorm){reference-type="ref" reference="Prop:oneHlogModularNorm"} [(v)]{.nodecor}. Part [(iii)]{.nodecor} is more technical. For $u \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ we know that there exists a sequence $\{u_n\}_{n \in \mathbb{N}} \subseteq C^\infty _0 (\Omega)$ such that $u_n \to u$ in $W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)$, which by part [(ii)]{.nodecor} implies $u_n^\pm \to u^\pm$ in $W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)$. In particular we have that $u_n^\pm \in C_0 = \{ v \in C(\Omega) : \operatorname{supp} v \text{ is compact}\}$ and $\partial_{x_i} u_n^\pm \in L^{\infty}(\Omega)$ for all $n \in \mathbb{N}$ and all $1 \leq i \leq N$. Consider the standard mollifier $\eta_\varepsilon$. For each $n \in \mathbb{N}$ there is a small $\varepsilon_n > 0$ such that $\eta _ \varepsilon\ast u_n^\pm \in C^\infty_0 (\Omega)$ for $0 < \varepsilon< \varepsilon_n$.
Moreover, for any $\delta > 0$ $$\begin{aligned} & \eta_\varepsilon\ast u_n^\pm \to u_n^\pm \quad \text{uniformly in $\Omega$ as } \varepsilon\to 0, \\ & \partial_{x_i} (\eta_\varepsilon\ast u_n^\pm) = \eta_\varepsilon\ast \partial_{x_i} u_n^\pm \to \partial_{x_i} u_n^\pm \quad \text{in } L^{q_+ + \delta}(\Omega) \text{ as } \varepsilon\to 0. \end{aligned}$$ By Propositions [Proposition 26](#Prop:EmbeddingHlog){reference-type="ref" reference="Prop:EmbeddingHlog"} and [Proposition 25](#Prop:HlogModularNorm){reference-type="ref" reference="Prop:HlogModularNorm"} [(v)]{.nodecor} this means that $\ensuremath{\varrho_{1,\ensuremath{\mathcal{H}_{\log}}} \left(\eta _ {\varepsilon} \ast u_n^\pm - u_n^\pm\right)} \to 0$ as $\varepsilon\to 0$, or also in the norm. Altogether, for each $u_n^+, u_n^-$ we can find some $v_n, \widetilde{v}_n \in C_0 ^\infty (\Omega)$ as close as we want to them in the norm of $W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)$; and these new sequences satisfy $v_n \to u^+$ and $\widetilde{v}_n \to u^-$ in $W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)$. ◻ As a consequence of the embeddings above we can prove that a Poincaré inequality is satisfied in $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ if we further assume in [\[Assump:Space\]](#Assump:Space){reference-type="eqref" reference="Assump:Space"} that $\mu \in L^{\infty}(\Omega)$ and $q(x) < p^*(x)$ for all $x \in \overline{\Omega}$. So we suppose the following: 1. [\[Assump:Poincare\]]{#Assump:Poincare label="Assump:Poincare"} $\Omega \subseteq \mathbb{R}^N$, with $N \geq 2$, is a bounded domain with Lipschitz boundary $\partial \Omega$, $p,q \in C_+ (\overline{\Omega})$ with $p(x) \leq q(x) < p^*(x)$ for all $x \in \overline{\Omega}$ and $\mu \in L^{\infty}(\Omega)$. As it is usual in the literature, we denote $\left\| \nabla u\right\|_{\ensuremath{\mathcal{H}_{\log}}} = \left\|\,\left|{\nabla u}\right|\,\right\|_{\ensuremath{\mathcal{H}_{\log}}}$ and $\ensuremath{\varrho_{\ensuremath{\mathcal{H}_{\log}}} \left( \nabla u\right)} = \ensuremath{\varrho_{\ensuremath{\mathcal{H}_{\log}}} \left( \left|{\nabla u}\right| \right)}$. **Proposition 30**. *Let [\[Assump:Poincare\]](#Assump:Poincare){reference-type="eqref" reference="Assump:Poincare"} be satisfied. Then $W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)\hookrightarrow L^{\ensuremath{\mathcal{H}_{\log}}}(\Omega)$ is a compact embedding and there exists a constant $C>0$ such that $$\begin{aligned} \left\|u\right\|_{\ensuremath{\mathcal{H}_{\log}}} \leq C \left\| \nabla u \right\|_{\ensuremath{\mathcal{H}_{\log}}} \quad \text{for all } u \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega). \end{aligned}$$* *Proof.* From [\[Assump:Poincare\]](#Assump:Poincare){reference-type="eqref" reference="Assump:Poincare"} we can deduce that there exists $\varepsilon> 0$ such that $q(x) + \varepsilon< p^*(x)$ for all $x \in \overline{\Omega}$. Then the compact embedding follows from Propositions [Proposition 28](#Prop:EmbeddingHlogSobolev){reference-type="ref" reference="Prop:EmbeddingHlogSobolev"} [(iii)]{.nodecor} and [Proposition 26](#Prop:EmbeddingHlog){reference-type="ref" reference="Prop:EmbeddingHlog"}. 
The inequality follows from the compact embedding in the standard way (see, for example, Proposition 2.18 in the paper by Crespo-Blanco-Gasiński-Harjulehto-Winkert [@Crespo-Blanco-Gasinski-Harjulehto-Winkert-2022]) ◻ Due to the previous inequality we can take in the space $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ the equivalent norm $\left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} = \left\| \nabla u \right\|_{\ensuremath{\mathcal{H}_{\log}}}$. In the last part of this section, we investigate the density of smooth functions in the space $W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)$. For this purpose, we check which of the [(A0)]{.nodecor}, [(A1)]{.nodecor} and [(A2)]{.nodecor} assumptions are satisfied by $\ensuremath{\mathcal{H}_{\log}}$. **Lemma 31**. *Let [\[Assump:Space\]](#Assump:Space){reference-type="eqref" reference="Assump:Space"} be satisfied, then $\ensuremath{\mathcal{H}_{\log}}$ satisfies [(A0)]{.nodecor} and [(A2)]{.nodecor}.* *Proof.* By Lemma [Lemma 11](#Le:equivalences){reference-type="ref" reference="Le:equivalences"} [(iv)]{.nodecor} and [(vi)]{.nodecor}, it is equivalent to check the conditions [(A0)']{.nodecor} and [(A2)']{.nodecor}. For [(A0)']{.nodecor} one can take $\beta = \left[ 2 ( \left\|\mu\right\|_\infty + 1) \log(e + 1/2) \right] ^{-1}$ since $$\begin{aligned} \ensuremath{\mathcal{H}_{\log}}(x,\beta) & \leq \frac{1}{2} + \frac{1}{2} \frac{ \left\|\mu\right\|_\infty }{ \left\|\mu\right\|_\infty + 1 } \frac{ \log \left( e + \frac{1}{2} \right) }{ \log \left( e + \frac{1}{2} \right) } \leq 1, \\ \ensuremath{\mathcal{H}_{\log}}(x,\beta^{-1}) & \geq 2 + 0 \geq 1. \end{aligned}$$ For [(A2)']{.nodecor}, take $s=1$, $\varphi_\infty(t) = t^{p_+ + 1}$ and $\beta = 1$. First note that $\varphi_\infty(t) \leq 1$ implies $t \leq 1$. Thus, by Young's inequality $$\begin{aligned} \ensuremath{\mathcal{H}_{\log}}(x, \beta t) & \leq [ 1 + \left\|\mu\right\|_\infty \log (e + 1) ] t^{p(x)} \\ & \leq \frac{ p(x) }{ p_+ + 1 } t^{ p_+ + 1 } + \frac{p_+ - p (x) + 1 }{ p_+ + 1 } [ 1 + \left\|\mu\right\|_\infty \log( e + 1 ) ] ^{ \frac{ p_+ + 1 }{p_+ - p (x) + 1 } } \\ & \leq \varphi_\infty (t) + [ 1 + \left\|\mu\right\|_\infty \log( e + 1 ) ] ^{ p_+ + 1 }. \end{aligned}$$ Take $h$ as the additive constant in the previous line, then we also have $$\begin{aligned} \varphi_\infty ( \beta t ) \leq t^{p(x)} \leq \ensuremath{\mathcal{H}_{\log}}(x,t) + h(x). \end{aligned}$$ ◻ **Remark 32**. *Note that in the proof of [(A2)']{.nodecor} the function $h$ would not be in $L^{1}(\Omega)$ if $|\Omega| = \infty$, so this argument does not generalize for unbounded domains. For that purpose, one needs an extra assumption, see Theorem [Theorem 34](#Th:DensityUnbounded){reference-type="ref" reference="Th:DensityUnbounded"}.* For the remaining assumption [(A1)]{.nodecor} we use much stricter assumptions, see the next result. **Theorem 33**. *Let $\Omega \subseteq \mathbb{R}^N$, with $N \geq 2$, be a bounded domain and the functions $p,q \colon \overline{\Omega}\to [1,\infty)$ and $\mu \colon \overline{\Omega}\to [0,\infty)$ be Hölder continuous functions such that $1 < p(x) \leq q(x)$ for all $x \in \overline{\Omega}$ and $$\begin{aligned} \left( \frac{q}{p} \right)_+ < 1 + \frac{\gamma}{N}, \end{aligned}$$ where $\gamma$ is the Hölder exponent of $\mu$. 
Then $\ensuremath{\mathcal{H}_{\log}}$ satisfies [(A1)]{.nodecor} and $C^\infty (\Omega) \cap W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)$ is dense in $W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)$.* *Proof.* It suffices to show that $\ensuremath{\mathcal{H}_{\log}}$ satisfies [(A1)]{.nodecor} because the density follows from Theorem [Theorem 18](#Th:AbstractDensity){reference-type="ref" reference="Th:AbstractDensity"}, Lemma [Lemma 24](#Le:PropertiesHlog){reference-type="ref" reference="Le:PropertiesHlog"} and Lemma [Lemma 31](#Le:A0A2){reference-type="ref" reference="Le:A0A2"}. By Lemma [Lemma 11](#Le:equivalences){reference-type="ref" reference="Le:equivalences"} [(v)]{.nodecor}, it is equivalent to check [(A1)']{.nodecor}. Let $B \subseteq \mathbb{R}^N$ be a ball such that $|B| \leq 1$. We start by rewriting the condition $\ensuremath{\mathcal{H}_{\log}}(y,t) \in [ 1 , 1/|B|]$ into a simpler statement. From this condition we can derive that $$\begin{aligned} \begin{rcases} \text{if } t \leq 1, & 1 \leq \ensuremath{\mathcal{H}_{\log}}(y,t) \\ \text{if } t \geq 1, & \phantom{\leq \ensuremath{\mathcal{H}_{\log}}(y,t)} 1 \end{rcases} \leq [ 1 + \left\|\mu\right\|_\infty \log(e + 1)] t^{p (y)}, \end{aligned}$$ and $$\begin{aligned} \begin{rcases} \text{if } t \leq 1, & \phantom{\ensuremath{\mathcal{H}_{\log}}(y)} \frac{1}{|B|} \geq 1 \\ \text{if } t \geq 1, & \frac{1}{|B|} \geq \ensuremath{\mathcal{H}_{\log}}(y,t) \end{rcases} \geq t^{p (y)}, \end{aligned}$$ which altogether means $$\begin{aligned} \label{Eq:A1Interval} \frac{1}{[ 1 + \left\|\mu\right\|_\infty \log(e + 1)]^{\frac{1}{p (y)} }} \leq t \leq \frac{1}{|B|^{\frac{1}{p (y)} }}. \end{aligned}$$ **Claim:** There exists a constant $M > 0$ depending only on $N, p, q, \mu$ such that $$\begin{aligned} t^{p(x)} \leq M t^{p(y)} \quad \text{and} \quad t^{q(x)} \leq M t^{q(y)} \end{aligned}$$ for all $x,y \in B \cap \Omega$ and $t \geq 0$ such that $\ensuremath{\mathcal{H}_{\log}}(y,t) \in [ 1 , 1/|B|]$, and any ball $B \subseteq \mathbb{R}^N$ such that $|B| \leq 1$. We only treat the $p$ case; the $q$ case is identical. If either $t \leq 1$ and $p(x) \geq p(y)$, or $t \geq 1$ and $p(x) \leq p(y)$, then one can simply take $M=1$. If $t \leq 1$ and $p(x) \leq p(y)$, from [\[Eq:A1Interval\]](#Eq:A1Interval){reference-type="eqref" reference="Eq:A1Interval"} we obtain $$\begin{aligned} t^{p(x)} = t^{p(x) - p(y)} t^{p(y)} & \leq \left( [ 1 + \left\|\mu\right\|_\infty \log(e + 1)]^{\frac{1}{p (y)} } \right) ^{p(y) - p(x)} t^{p(y)} \\ & \leq [ 1 + \left\|\mu\right\|_\infty \log(e + 1)]^{\frac{p_+}{p_-}} t^{p(y)}, \end{aligned}$$ so we can take $M = [ 1 + \left\|\mu\right\|_\infty \log(e + 1)]^{ p_+ / p_- }$. Consider the remaining case $t \geq 1$ and $p(x) \geq p(y)$. Note that for any ball $B \subseteq \mathbb{R}^N$ of radius $R$ we have $|B| = \omega(N) R^N$, where $\omega(N) > 0$ is the Lebesgue measure of the unit ball in $\mathbb{R}^N$, and also that $x,y \in B$ implies $\left|{x-y}\right| \leq 2R$. As $p$ is Hölder continuous with exponent $0 < \alpha \leq 1$ and constant $c_p > 0$, $x,y \in B$ implies $\left|{p(x) - p(y)}\right| \leq c_p 2^\alpha R^\alpha$. Remember that $|B| \leq 1$, so $|B|^{-1/p(y)} \leq |B|^{-1/p_-}$. 
From this and [\[Eq:A1Interval\]](#Eq:A1Interval){reference-type="eqref" reference="Eq:A1Interval"} we get $$\begin{aligned} t^{p(x)} = t^{p(x) - p(y)} t^{p(y)} & \leq \left( \omega(N) R^N \right) ^{ \frac{-1}{p_-} c_p 2^\alpha R^\alpha } t^{p(y)} \\ & = \left( \omega(N)^{\frac{ - c_p 2^\alpha }{ p_- }} \right)^{R^\alpha} \left( R^{R^\alpha} \right)^{\frac{- c_p 2^\alpha N}{p_-}} t^{p(y)}. \end{aligned}$$ Since $|B| \leq 1$, we know that $R \leq \omega(N)^{-1/N}$. On the other hand, the function $h(R) = \left( \omega(N)^{ - c_p 2^\alpha / p_- } \right)^{R^\alpha} \left( R^{R^\alpha} \right)^{ - c_p 2^\alpha N / p_- }$ is strictly positive and continuous in the interval $[0 , \omega(N)^{-1/N}]$ (note that $h(0) = 1$). Thus it attains its maximum at some $R_0$ in that interval and we can take $M = h(R_0)$. This ends the proof of the Claim. Let us now prove the inequality of [(A1)']{.nodecor} using the previous information. To this end, let $0 < \beta < M^{-1/p_-} < 1$, $0 < \gamma \leq 1$ be the Hölder exponent of $\mu$ and $c_\mu > 0$ be the corresponding constant, so as in the proof of the Claim $x,y \in B$ implies $\left|{\mu(x) - \mu(y)}\right| \leq c_\mu 2^\gamma R^\gamma$. By all of this and the Claim above we have $$\begin{aligned} \ensuremath{\mathcal{H}_{\log}}(x, \beta t) & \leq \beta^{p_-} \left( t^{p(x)} + \mu(x) t^{q(x)} \log(e + t) \right) \\ & \leq \beta^{p_-} M \left( t^{p(y)} + \mu(x) t^{q(y)} \log(e + t) \right) \\ & \leq \beta^{p_-} M \left( t^{p(y)} + \mu(y) t^{q(y)} \log(e + t) + c_\mu 2^\gamma R^\gamma t^{q(y)} \log(e + t) \right) \\ & \leq \beta^{p_-} M \left( t^{p(y)} + c_\mu 2^\gamma R^\gamma t^{q(y)} \log(e + t) \right) + \mu(y) t^{q(y)} \log(e + t). \end{aligned}$$ We continue the inequality using [\[Eq:A1Interval\]](#Eq:A1Interval){reference-type="eqref" reference="Eq:A1Interval"}, where we again take into account that $|B| = \omega(N) R^N$, to obtain $$\begin{aligned} & \ensuremath{\mathcal{H}_{\log}}(x, \beta t) \\ & \leq \beta^{p_-} M t^{p(y)} \left( 1 + c_\mu 2^\gamma R^\gamma t^{q(y) - p(y)} \log(e + t) \right) + \mu(y) t^{q(y)} \log(e + t) \\ & \leq \beta^{p_-} M t^{p(y)} \left[ 1 + c_\mu 2^\gamma R^\gamma \left( \omega(N) R^N \right) ^{ \frac{-1}{p(y)} (q(y) - p(y)) } \log \left( e + \left( \omega(N) R^N \right) ^{ \frac{-1}{p(y)} } \right) \right] \\ & \quad + \mu(y) t^{q(y)} \log(e + t). \end{aligned}$$ Now we need to estimate the part in square brackets independently of $y$ and $R$. Let $\tau_{p,q,N} = q_-/p_+$ if $\omega(N) > 1$ and $\tau_{p,q,N} = q_+/p_-$ if $\omega(N) \leq 1$. Once again, as $\omega(N) R^N \leq 1$, $$\begin{aligned} & R^\gamma \left( \omega(N) R^N \right) ^{ \frac{-1}{p(y)} (q(y) - p(y)) } \log \left( e + \left( \omega(N) R^N \right) ^{ \frac{-1}{p(y)} } \right) \\ & \leq \omega(N)^{1 - \tau_{p,q,N}} R^{ \gamma + N - N\frac{q(y)}{p(y)}} \log \left( e + \left( \omega(N) R^N \right) ^{ \frac{-1}{p_-} } \right). \end{aligned}$$ Let us distinguish two cases. If $R \leq 1$, $$\begin{aligned} & R^{ \gamma + N - N\frac{q(y)}{p(y)}} \log \left( e + \left( \omega(N) R^N \right) ^{ \frac{-1}{p_-} } \right) \\ & \leq R^{ \gamma + N - N \left( \frac{q}{p} \right)_+ } \log \left( e + \left( \omega(N) R^N \right) ^{ \frac{-1}{p_-} } \right) = h(R). \end{aligned}$$ This function $h$ is nonnegative and continuous in the interval $[0 , \omega(N)^{-1/N}]$ because $$\begin{aligned} \lim_{R \to 0^+} h(R) = 0, \end{aligned}$$ where this limit follows from $\gamma + N - N (q/ p)_+ > 0$ and L'Hospital's rule. 
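For instance, since $0 < R \leq 1$ in this case, the limit can also be verified directly from the elementary bound $e + \omega(N)^{-\frac{1}{p_-}} R^{-\frac{N}{p_-}} \leq \left( e + \omega(N)^{-\frac{1}{p_-}} \right) R^{-\frac{N}{p_-}}$, which gives $$\begin{aligned} h(R) \leq R^{ \gamma + N - N \left( \frac{q}{p} \right)_+ } \left[ \log \left( e + \omega(N)^{-\frac{1}{p_-}} \right) + \frac{N}{p_-} \log \frac{1}{R} \right] \xrightarrow{R \to 0^+} 0 . \end{aligned}$$ 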
Hence $h$ attains its maximum at some $R_0$ in that interval and we can use $h(R_0)$ as an upper estimate. In the other case, if $R \geq 1$, $$\begin{aligned} & R^{ \gamma + N - N\frac{q(y)}{p(y)}} \log \left( e + \left( \omega(N) R^N \right) ^{ \frac{-1}{p_-} } \right) \\ & \leq R^{ \gamma + N - N \left( \frac{q}{p} \right)_- } \log \left( e + \left( \omega(N) \right) ^{ \frac{-1}{p_-} } \right)\\ & \leq \omega(N)^{ \frac{- \gamma}{N} - 1 + \left( \frac{q}{p} \right)_- } \log \left( e + \left( \omega(N) \right) ^{ \frac{-1}{p_-} } \right) = \widetilde{\Lambda}_{p,q,N}, \end{aligned}$$ which follows from $\gamma + N - N ( \frac{q}{p} )_- > 0$ and $R \leq \omega(N)^{-1/N}$ (or equivalently $|B| \leq 1$). Let $\Lambda_{p,q,N}$ be the maximum of $\omega(N)^{1 - \tau_{p,q,N}} h(R_0)$ and $\omega(N)^{1 - \tau_{p,q,N}} \widetilde{\Lambda}_{p,q,N}$. Altogether, we have proved that $$\begin{aligned} \ensuremath{\mathcal{H}_{\log}}(x, \beta t) \leq \beta^{p_-} M t^{p(y)} \left[ 1 + c_\mu 2^\gamma \Lambda_{p,q,N} \right] + \mu(y) t^{q(y)} \log(e + t). \end{aligned}$$ If we take $$\begin{aligned} \beta < M^{\frac{-1}{p_-}} \left[ 1 + c_\mu 2^\gamma \Lambda_{p,q,N} \right] ^{\frac{-1}{p_-}}, \end{aligned}$$ we obtain $\ensuremath{\mathcal{H}_{\log}}(x, \beta t) \leq \ensuremath{\mathcal{H}_{\log}}(y, t)$ and the proof is complete. ◻ In this work and in the previous result we deal with bounded domains. However, one can also obtain the density for unbounded domains by adding one more assumption at infinity; hence we include this case here for the sake of completeness. Let $\Omega \subseteq \mathbb{R}^N$ be an open set. We say that a measurable function $r \colon \Omega \to [1,\infty]$ satisfies Nekvinda's decay condition if there exist $r_\infty \in [1 , \infty]$ and $c \in (0,1)$ such that $$\begin{aligned} \int_{\Omega}c^{\frac{1}{ \left|{ \frac{1}{r(x)} - \frac{1}{r_\infty} }\right| }} \mathop{\mathrm{d\textit{x}}}< \infty,\end{aligned}$$ or, equivalently, $1 \in L^{s(\cdot)}(\Omega)$, where $s^{-1} (x) = \left|{ r^{-1} (x) - r^{-1}_\infty}\right|$. This condition was first introduced in the paper by Nekvinda [@Nekvinda-2004]. **Theorem 34**. *Let $\Omega \subseteq \mathbb{R}^N$, with $N \geq 2$, be an unbounded domain and the functions $p,q \colon \overline{\Omega}\to [1,\infty)$ and $\mu \colon \overline{\Omega}\to [0,\infty)$ be bounded, Hölder continuous functions such that $p$ satisfies Nekvinda's decay condition, $1 < p(x) \leq q(x)$ for all $x \in \overline{\Omega}$ and $$\begin{aligned} \left( \frac{q}{p} \right)_+ < 1 + \frac{\gamma}{N}, \end{aligned}$$ where $\gamma$ is the Hölder exponent of $\mu$. Then $\ensuremath{\mathcal{H}_{\log}}$ satisfies [(A0)]{.nodecor}, [(A1)]{.nodecor}, [(A2)]{.nodecor}, [(aDec)]{.nodecor} and $C^\infty (\Omega) \cap W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)$ is dense in $W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)$.* *Proof.* The proof of [(A0)]{.nodecor}, [(A1)]{.nodecor} and [(aDec)]{.nodecor} is exactly as in Lemma [Lemma 24](#Le:PropertiesHlog){reference-type="ref" reference="Le:PropertiesHlog"} and Lemma [Lemma 31](#Le:A0A2){reference-type="ref" reference="Le:A0A2"}; these arguments are not affected if $\Omega$ is unbounded. Therefore, it suffices to show that $\ensuremath{\mathcal{H}_{\log}}$ satisfies [(A2)]{.nodecor} because the density follows from Theorem [Theorem 18](#Th:AbstractDensity){reference-type="ref" reference="Th:AbstractDensity"}. 
By Lemma [Lemma 11](#Le:equivalences){reference-type="ref" reference="Le:equivalences"} [(v)]{.nodecor}, it is equivalent to check [(A2)']{.nodecor}. For this purpose, take $s=1$, $\varphi_\infty(t) = t^{p_\infty}$ and $\beta \leq 1$. First note that $\varphi_\infty(t) \leq 1$ implies $t \leq 1$. Let us distinguish two cases. In the points where $p(x) < p_\infty$, by Young's inequality $$\begin{aligned} \ensuremath{\mathcal{H}_{\log}}(x,\beta t) & \leq [ 1 + \left\|\mu\right\|_\infty \log (e + 1) ] \beta^{p(x)} t^{p(x)} \\ & \leq \frac{ p(x) }{ p_\infty } t^{ p_\infty } + \frac{p_\infty - p (x) }{ p_\infty } [ 1 + \left\|\mu\right\|_\infty \log( e + 1 ) ] ^{ \frac{ p_\infty }{p_\infty - p (x) } } \beta ^{\frac{1}{ \left|{ \frac{1}{p(x)} - \frac{1}{p_\infty} }\right| }} \\ & \leq \varphi_\infty (t) + \left( \beta [ 1 + \left\|\mu\right\|_\infty \log( e + 1 ) ] \right) ^{\frac{1}{ \left|{ \frac{1}{p(x)} - \frac{1}{p_\infty} }\right| }}. \end{aligned}$$ Let us take $$\begin{aligned} \beta & < c [ 1 + \left\|\mu\right\|_\infty \log( e + 1 ) ]^{-1},\\ h(x) & = \left( \beta [ 1 + \left\|\mu\right\|_\infty \log( e + 1 ) ] \right) ^{\frac{1}{ \left|{ \frac{1}{p(x)} - \frac{1}{p_\infty} }\right| }}, \end{aligned}$$ where $c \in (0,1)$ is the constant of Nekvinda's decay condition of $p$. Then we know that $h \in L^{1}(\Omega) \cap L^{\infty}(\Omega)$. In the points where $p(x) \geq p_\infty$, with the same choice of $\beta$ we have $$\begin{aligned} \ensuremath{\mathcal{H}_{\log}}(x,\beta t) \leq [ 1 + \left\|\mu\right\|_\infty \log (e + 1) ] \beta t^{p_\infty} \leq \varphi_\infty (t). \end{aligned}$$ We do the other inequality in a similar way. In the points where $p(x) \leq p_\infty$, as $\beta \leq 1$ $$\begin{aligned} \varphi_\infty ( \beta t ) \leq t^{p(x)} \leq \ensuremath{\mathcal{H}_{\log}}(x,t) , \end{aligned}$$ and in the points where $p(x) > p_\infty$, using again Young's inequality $$\begin{aligned} \varphi_\infty ( \beta t ) & \leq \frac{ p_\infty }{ p(x) } t^{ p(x) } + \frac{p(x) - p _\infty }{ p(x) } \beta ^{\frac{1}{ \left|{ \frac{1}{p(x)} - \frac{1}{p_\infty} }\right| }} \\ & \leq \ensuremath{\mathcal{H}_{\log}}(x,t) + h(x). \end{aligned}$$ ◻ **Remark 35**. *Note that in the previous result we only imposed Nekvinda's decay condition on $p$, there is no condition at infinity imposed on $q$.* # Energy functional and logarithmic operator {#energy-functional-operator} In this section we investigate the properties of the associated energy functional and the logarithmic operator given in [\[log-operator\]](#log-operator){reference-type="eqref" reference="log-operator"}. We denote by $\langle\cdot\,,\cdot\rangle$ the duality pairing between $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ and its dual space $[W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)]^*$. 
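The operator and the energy functional introduced next are related through an elementary scalar identity: for $q > 1$ and $t \geq 0$, $$\begin{aligned} \frac{d}{dt}\left[ \frac{t^{q}}{q} \log (e + t) \right] = t^{q-1} \log (e + t) + \frac{t^{q}}{q\,(e + t)} , \end{aligned}$$ so the factor in square brackets in the definition of $A$ below is precisely the derivative of the logarithmic part of the integrand of $I$; this is made rigorous in Theorem [Theorem 36](#Th:C1functional){reference-type="ref" reference="Th:C1functional"}. 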
Let $A \colon W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\to [W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)]^*$ be our operator of interest, which for each $u,v \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ is given by the expression $$\begin{aligned} \left\langle A(u),v \right\rangle & = \int_{\Omega}\left|{\nabla u}\right|^{p(x)-2}\nabla u \cdot \nabla v \vphantom{\frac{\left|{\nabla u}\right|}{q(x) (e + \left|{\nabla u}\right|)}}\mathop{\mathrm{d\textit{x}}}\\ & \quad +\int_{\Omega}\mu(x) \left[ \log (e + \left|{\nabla u}\right| ) + \frac{\left|{\nabla u}\right|}{q(x) (e + \left|{\nabla u}\right|)} \right] \left|{\nabla u}\right|^{q(x)-2}\nabla u \cdot \nabla v \mathop{\mathrm{d\textit{x}}}.\end{aligned}$$ Furthermore, let $I \colon W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\to \mathbb{R}$ be its associated energy functional, which for each $u \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ is given by $$\begin{aligned} I(u)=\int_{\Omega}\left( \frac{\left|{\nabla u}\right|^{p(x)}}{p(x)} + \mu(x) \frac{\left|{\nabla u}\right|^{q(x)}}{q(x)} \log (e + \left|{\nabla u}\right|) \right) \mathop{\mathrm{d\textit{x}}}.\end{aligned}$$ We first deal with the differentiability of the energy functional. **Theorem 36**. *Let [\[Assump:Space\]](#Assump:Space){reference-type="eqref" reference="Assump:Space"} be satisfied, then the functional $I$ is $C^1$ with $I'(u) = A(u)$.* *Proof.* By the additivity of the Fréchet derivative, we only give the argument for the logarithmic term; the other one can be done analogously (and can also be found in previous literature, for example, Proposition 3.1 in the paper by Crespo-Blanco-Gasiński-Harjulehto-Winkert [@Crespo-Blanco-Gasinski-Harjulehto-Winkert-2022]). The proof is divided into two steps: first we prove the Gateaux differentiability and then the continuity of the Gateaux derivative. For the Gateaux differentiability, let $u,v \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ and $t \in \mathbb{R}$. At the points where $\nabla u \neq 0$, by the mean value theorem there exists $\theta_{x,t} \in (0,1)$ such that $$\begin{aligned} & \beta (x,t) \\ & = \frac{\mu(x)}{t q(x)} \left( \left|{\nabla u + t \nabla v}\right|^{q(x)} \log (e + \left|{\nabla u + t \nabla v}\right|) - \left|{\nabla u}\right|^{q(x)} \log (e + \left|{\nabla u}\right|) \right) \\ & = \mu(x) \left( \log (e + \left|{\nabla u + t \theta_{x,t} \nabla v}\right|) \left|{\nabla u + t \theta_{x,t} \nabla v}\right|^{q(x) - 2} (\nabla u + t \theta_{x,t} \nabla v) \cdot \nabla v \vphantom{\frac{(\nabla u + t \theta_{x,t} \nabla v) \cdot \nabla v}{ \left|{\nabla u + t \theta_{x,t} \nabla v}\right| }} \right. \\ & \quad \left. + \left|{\nabla u + t \theta_{x,t} \nabla v}\right|^{q(x)} \frac{1}{q(x)(e + \left|{\nabla u + t \theta_{x,t} \nabla v}\right|)} \frac{(\nabla u + t \theta_{x,t} \nabla v) \cdot \nabla v}{ \left|{\nabla u + t \theta_{x,t} \nabla v}\right| } \right) \\ & \xrightarrow{t \to 0} \mu(x) \left( \log (e + \left|{\nabla u }\right|) \left|{\nabla u }\right|^{q(x) - 2} \nabla u \cdot \nabla v + \left|{\nabla u }\right|^{q(x)} \frac{1}{q(x)(e + \left|{\nabla u }\right|)} \frac{\nabla u \cdot \nabla v}{ \left|{\nabla u }\right| } \right) \end{aligned}$$ and at the points where $\nabla u = 0$ we can derive the same limit directly. So this convergence is true a.e. in $\Omega$. 
On the other hand, using [\[Eq:LogGrowthSum\]](#Eq:LogGrowthSum){reference-type="eqref" reference="Eq:LogGrowthSum"}, for $t \leq 1$ $$\begin{aligned} \left|{\beta(x,t)}\right| & \leq \mu(x) \left[ \left( \left|{ \nabla u }\right| + \left|{ \nabla v }\right| \right)^{q(x) - 1} \left|{ \nabla v}\right| \left( \log( e + \left|{ \nabla u }\right| + \left|{ \nabla v }\right|) + 1 \right) \right] \\ & \leq 2^{q^+ +1}\mu(x) \left[|\nabla u|^{q(x)} \log(e+ |\nabla u|) + |\nabla v|^{q(x)} \log(e+ |\nabla v|) + 1 \right], \end{aligned}$$ which is an $L^1(\Omega)$-function. Hence, by the dominated convergence theorem, we proved that the Gateaux derivative exists and coincides with $A$. For the $C^1$-property, let $u_n \to u$ in $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ and $v \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ with $\left\|v\right\|_{1,\ensuremath{\mathcal{H}_{\log}}} \leq 1$. For the following computations, we define $$\begin{aligned} w_n(x) & = \left[ \log (e + \left|{\nabla u_n}\right|) + \frac{\left|{\nabla u_n}\right|}{q(x) (e + \left|{\nabla u_n}\right|)}\right] \left|{\nabla u_n}\right|^{q(x) -2 }\nabla u_n \\ & \quad - \left[ \log (e + \left|{\nabla u}\right|) + \frac{\left|{\nabla u}\right|}{q(x) (e + \left|{\nabla u}\right|)}\right] \left|{\nabla u}\right|^{q(x) -2 }\nabla u, \\ g_n(x)& = \log^{1/q(x)} ( e + \max\{ \left|{\nabla u}\right|, \left|{\nabla u_n}\right| \}),\\ \Omega_u & = \{ x \in \Omega \,:\, \max\{ \left|{\nabla v}\right|, \left|{\nabla u}\right|, \left|{\nabla u_n}\right| \} = \left|{\nabla u}\right|\}, \\ \Omega_{u_n} & = \{ x \in \Omega \,:\, \left|{\nabla u}\right| < \max\{ \left|{\nabla v}\right|, \left|{\nabla u}\right|, \left|{\nabla u_n}\right| \} = \left|{\nabla u_n}\right|\}, \\ \Omega_v & = \{ x \in \Omega \,:\, \left|{\nabla u}\right|, \left|{\nabla u_n}\right| < \max\{ \left|{\nabla v}\right|, \left|{\nabla u}\right|, \left|{\nabla u_n}\right| \} = \left|{\nabla v}\right|\}. \end{aligned}$$ By Hölder's inequality in $L^{q(\cdot)}(\Omega)$, we have $$\begin{aligned} \quad \left| \int_{\Omega}\mu(x) w_n \cdot \nabla v \mathop{\mathrm{d\textit{x}}}\right| \leq 2 \left\|(\mu (x))^{\frac{q(x) - 1}{q(x)}} \frac{\left|{w_n}\right|}{g_n}\right\|_{\frac{q(\cdot)}{q(\cdot) - 1}} \left\|(\mu (x))^{1/q(x)} \left|{\nabla v}\right| g_n \right\|_{q(\cdot)}. 
\end{aligned}$$ The second factor is uniformly bounded in $n$ and $v$ by Proposition [Proposition 1](#Prop:ModularNormVarExp){reference-type="ref" reference="Prop:ModularNormVarExp"} [(vi)]{.nodecor} and $$\begin{aligned} \varrho_{ q(\cdot)} \left( (\mu (x))^{1/q(x)} \left|{\nabla v}\right| g_n \right) % \into \mu(x) \abs{ \nabla v }^{q(x)} g_n^{q(x)} \dx & \leq \int_{\Omega_u} \mu(x) \left|{ \nabla v }\right|^{q(x)} \log( e + \left|{\nabla u}\right| ) \mathop{\mathrm{d\textit{x}}}\\ & \quad + \int_{\Omega_{u_n}} \mu(x) \left|{ \nabla v }\right|^{q(x)} \log( e + \left|{\nabla u_n}\right| ) \mathop{\mathrm{d\textit{x}}}\\ & \quad + \int_{\Omega_v} \mu(x) \left|{ \nabla v }\right|^{q(x)} \log( e + \left|{\nabla v}\right| ) \mathop{\mathrm{d\textit{x}}}\\ & \leq \ensuremath{\varrho_{\ensuremath{\mathcal{H}_{\log}}} \left( \nabla u \right)} + \underbrace{\ensuremath{\varrho_{\ensuremath{\mathcal{H}_{\log}}} \left( \nabla v \right)}}_{\leq 1} + \underbrace{\ensuremath{\varrho_{\ensuremath{\mathcal{H}_{\log}}} \left( \nabla u_n \right)} }_{\leq M}, \end{aligned}$$ where the last two estimates follow from Proposition [Proposition 25](#Prop:HlogModularNorm){reference-type="ref" reference="Prop:HlogModularNorm"} [(ii)]{.nodecor} and [(vi)]{.nodecor} and $u_n \to u$ in $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$. Therefore, we only need to prove that the first factor converges to zero. By Proposition [Proposition 1](#Prop:ModularNormVarExp){reference-type="ref" reference="Prop:ModularNormVarExp"} [(v)]{.nodecor}, it is enough to see that this happens in the modular of $L^{\frac{q(\cdot)}{q(\cdot) - 1}}(\Omega)$, that is $$\begin{aligned} \varrho_{ \frac{q(\cdot)}{q(\cdot) - 1} } \left( (\mu (x))^{\frac{q(x) - 1}{q(x)}} \frac{\left|{w_n}\right|}{g_n} \right) = \int_{\Omega}\mu(x) \left( \frac{\left|{w_n }\right| }{g_n} \right) ^{\frac{q(x)}{q(x) - 1}} \mathop{\mathrm{d\textit{x}}}\; \xrightarrow{n \to \infty} 0. \end{aligned}$$ We prove this convergence by using Vitali's theorem. By Proposition [Proposition 28](#Prop:EmbeddingHlogSobolev){reference-type="ref" reference="Prop:EmbeddingHlogSobolev"} [(i)]{.nodecor}, $\nabla u_n \to \nabla u$ in measure, and using the property that convergence in measure is preserved by composition with continuous functions, we obtain the convergence in measure to zero of the integrand. For the uniform integrability, note that $$\begin{aligned} & \mu(x) \left( \frac{\left|{w_n }\right| }{g_n} \right) ^{\frac{q(x)}{q(x) - 1}} \\ & \leq \mu(x) \left( \left[ \log^{1 - \frac{1}{q(x)}} (e + \left|{\nabla u_n}\right|) + 1 \right] \left|{\nabla u_n}\right|^{q(x) - 1}\right. \\ & \left.\qquad\qquad + \left[ \log^{1 - \frac{1}{q(x)}} (e + \left|{\nabla u}\right|) + 1 \right] \left|{\nabla u}\right|^{q(x) - 1} \right) ^{\frac{q(x)}{q(x) - 1}} \\ & \leq C \mu(x) \left( \left[ \log (e + \left|{\nabla u_n}\right|) + 1 \right] \left|{\nabla u_n}\right|^{q(x)} + \left[ \log (e + \left|{\nabla u}\right|) + 1 \right] \left|{\nabla u}\right|^{q(x)} \right). \end{aligned}$$ As $\nabla u_n \to \nabla u$ in measure and $\ensuremath{\varrho_{\ensuremath{\mathcal{H}_{\log}}} \left( \nabla u_n \right)} \to \ensuremath{\varrho_{\ensuremath{\mathcal{H}_{\log}}} \left( \nabla u \right)}$, we know that\ $\left|{\nabla u_n}\right|^{q(x)} \log (e + \left|{ \nabla u_n }\right|)$ is uniformly integrable, hence we also know that our sequence is uniformly integrable and this finishes the proof. ◻ Next we are concerned with the properties of the operator $A$. 
For this purpose, we need the following two lemmas. The first one is concerned with the monotonicity of terms that are not power laws, but still something similar. **Lemma 37**. *Let $h \colon [0,\infty) \to [0,\infty)$ be an increasing function and $r > 1$. Then, for any $\xi, \eta \in \mathbb{R}^N$ $$\begin{aligned} \left( h( \left|{\xi}\right| ) \left|{\xi}\right|^{r-2} \xi - h( \left|{\eta}\right| ) \left|{\eta}\right|^{r-2} \eta \right) \cdot \left( \xi - \eta \right) \geq C_r \left|{ \xi - \eta }\right|^r h(m) \end{aligned}$$ if $r \geq 2$, and $$\begin{aligned} \left( \left|{\xi}\right| + \left|{\eta}\right| \right)^{2 - r} \left( h( \left|{\xi}\right| ) \left|{\xi}\right|^{r-2} \xi - h( \left|{\eta}\right| ) \left|{\eta}\right|^{r-2} \eta \right) \cdot \left( \xi - \eta \right) \geq C_r \left|{\xi - \eta}\right|^2 h(m) \end{aligned}$$ if $1 < r < 2$, where $m = \min \{ \left|{\xi}\right|, \left|{\eta}\right| \}$ and $$\begin{aligned} C_r = \begin{cases} \min \{ 2^{2-r}, 2^{-1} \} & \text{if } r \geq 2, \\ r-1 & \text{if } 1 < r < 2. \end{cases} \end{aligned}$$* *Proof.* For the case $r \geq 2$, we obtain the identity $$\begin{aligned} & \left( h( \left|{\xi}\right| ) \left|{\xi}\right|^{r-2} \xi - h( \left|{\eta}\right| ) \left|{\eta}\right|^{r-2} \eta \right) \cdot \left( \xi - \eta \right) \\ & = \left( h( \left|{\xi}\right| ) \left|{\xi}\right|^{r-2} \xi \right) \cdot \left( \xi - \eta \right) + \left( - h( \left|{\eta}\right| ) \left|{\eta}\right|^{r-2} \eta \right) \cdot \left( \xi - \eta \right) \\ & = h( \left|{\xi}\right| ) \left|{\xi}\right|^{r-2} \left( \frac{1}{2} \xi - \frac{1}{2} \eta \right) \cdot \left( \xi - \eta \right) + h( \left|{\eta}\right| ) \left|{\eta}\right|^{r-2} \left( \frac{1}{2} \xi - \frac{1}{2} \eta \right) \cdot \left( \xi - \eta \right) \\ & \quad + h( \left|{\xi}\right| ) \left|{\xi}\right|^{r-2} \left( \frac{1}{2} \xi + \frac{1}{2} \eta \right) \cdot \left( \xi - \eta \right) + h( \left|{\eta}\right| ) \left|{\eta}\right|^{r-2} \left( - \frac{1}{2} \xi - \frac{1}{2} \eta \right) \cdot \left( \xi - \eta \right) \\ & = \frac{1}{2} \left( h( \left|{\xi}\right| ) \left|{\xi}\right|^{r-2} + h( \left|{\eta}\right| ) \left|{\eta}\right|^{r-2} \right) \left|{ \xi - \eta }\right|^2 \\ & \quad + \frac{1}{2} \left( h( \left|{\xi}\right| ) \left|{\xi}\right|^{r-2} - h( \left|{\eta}\right| ) \left|{\eta}\right|^{r-2} \right) \left( \left|{\xi}\right|^2 - \left|{\eta}\right|^2 \right) . \end{aligned}$$ Since $h$ is an increasing function, the second term is nonnegative if $r \geq 2$, and the inequality for $r \geq 2$ then follows by the same strategy as for the usual inequality without $h$ (see, for example, Chapter 12, (I) in the book by Lindqvist [@Lindqvist-2019]). For the case $1 < r < 2$, we follow the argument of equation (2.2) from the paper by Simon [@Simon-1978]. The main difference now is that the expression is nonhomogeneous and therefore cannot be scaled, but it is still invariant under rotations. For this reason it is enough to consider the case $\left|{\xi}\right| \geq \left|{\eta}\right|$, $\xi = \left|{\xi}\right| e_1$, $\eta= \eta_1 e_1 + \eta_2 e_2$. We split the argument into two cases. 
First, if $\eta_1 \leq 0$ $$\begin{aligned} \left( h( \left|{\xi}\right| ) \left|{\xi}\right|^{r-1} - h( \left|{\eta}\right| ) \left|{\eta}\right|^{r-2} \eta_1 \right) \geq h( \left|{\eta}\right| ) \left|{\xi}\right|^{r-2} \left( \left|{\xi}\right| - \eta_1 \right) \end{aligned}$$ and if $0 \leq \eta_1 \leq \left|{\xi}\right|$, by the mean value theorem $$\begin{aligned} \left( h( \left|{\xi}\right| ) \left|{\xi}\right|^{r-1} - h( \left|{\eta}\right| ) \left|{\eta}\right|^{r-2} \eta_1 \right) & \geq h( \left|{\xi}\right| ) \left( \left|{\xi}\right|^{r-1} - \eta_1^{r-1} \right) \\ & \geq h( \left|{\eta}\right| ) (r-1) \left|{\xi}\right|^{r-2} \left( \left|{\xi}\right| - \eta_1 \right). \end{aligned}$$ Altogether, this yields $$\begin{aligned} & \left( h( \left|{\xi}\right| ) \left|{\xi}\right|^{r-2} \xi - h( \left|{\eta}\right| ) \left|{\eta}\right|^{r-2} \eta \right) \cdot \left( \xi - \eta \right) \\ & = \left( h( \left|{\xi}\right| ) \left|{\xi}\right|^{r-1} - h( \left|{\eta}\right| ) \left|{\eta}\right|^{r-2} \eta_1 \right) \left( \left|{\xi}\right| - \eta_1 \right) + h( \left|{\eta}\right| ) \left|{\eta}\right|^{r-2} \eta_2^2 \\ & \geq h( \left|{\eta}\right| ) (r-1) \left( \left|{\xi}\right| + \left|{\eta}\right| \right) ^{r-2} \left( [\left|{\xi}\right| - \eta_1]^2 + \eta_2^2 \right) \\ & = h( \left|{\eta}\right| ) (r-1) \left( \left|{\xi}\right| + \left|{\eta}\right| \right) ^{r-2} \left|{\xi - \eta}\right|^2. \end{aligned}$$ ◻ The second lemma is concerned with a version of Young's inequality specially tailored for our line of work. It becomes indispensable in the proof of the [(S$_+$)]{.nodecor}-property. **Lemma 38** (Young's inequality for the product of a power-law and a logarithm). *Let $s,t \geq 0$, $r > 1$ then $$\begin{aligned} s t^{r-1} \left[ \log (e + t ) + \frac{ t }{r (e + t)} \right] \leq \frac{s^r}{r} \log (e + s ) + t^r \left[ \frac{r - 1}{r} \log (e + t ) + \frac{ t }{r (e + t)} \right] . \end{aligned}$$* *Proof.* The result is a consequence of the general version of Young's inequality for functions that are positive, continuous, strictly increasing and vanish at zero, see for example Theorem 156 of the classical book by Hardy-Littlewood-Pólya [@Hardy-Littlewood-Polya-1934]. Let $h \colon [0, \infty) \to [0, \infty)$ satisfy all of the above, then for any $s,t \geq 0$ it holds $$\begin{aligned} s h(t) \leq \int_0^s h(y) \mathop{\mathrm{d\textit{y}}}+ \int_0^{h(t)} h^{-1}(y) \mathop{\mathrm{d\textit{y}}}. \end{aligned}$$ We also make use of another general result about the primitive of the inverse of a function. For $h$ as above and $t>0$ it holds $$\begin{aligned} \int_0^{h(t)} h^{-1}(y) \mathop{\mathrm{d\textit{y}}}= t h(t) - \int_0^t h(y) \mathop{\mathrm{d\textit{y}}}. \end{aligned}$$ Choosing $$\begin{aligned} h(t) = t^{r-1} \left[ \log (e + t ) + \frac{ t }{r(e + t)} \right] , \end{aligned}$$ hence $\int_0^t h(s) \mathop{\mathrm{d\textit{s}}}= t^r/r \log (e + t)$, one obtains the desired result. ◻ Now we can state the main properties of the operator $A$. **Theorem 39**. *Let [\[Assump:Space\]](#Assump:Space){reference-type="eqref" reference="Assump:Space"} be satisfied, then the operator $A$ is bounded, continuous and strictly monotone. If we further assume [\[Assump:Poincare\]](#Assump:Poincare){reference-type="eqref" reference="Assump:Poincare"}, then it also is of type [(S$_+$)]{.nodecor}, coercive and a homeomorphism.* *Proof.* The continuity follows directly from Theorem [Theorem 36](#Th:C1functional){reference-type="ref" reference="Th:C1functional"}. 
The strict monotonicity follows from Lemma [Lemma 37](#Le:MonotoneInequality){reference-type="ref" reference="Le:MonotoneInequality"} and the widely-known inequality $$\begin{aligned} \left( \left|{\xi}\right|^{r-2}\xi -\left|{\eta}\right|^{r-2}\eta \right) \cdot (\xi-\eta) > 0 \quad\text{if } r >1 \text{ for all }\xi,\eta \in \mathbb{R}^N \text{ with }\xi\neq\eta, \end{aligned}$$ since for $u,v \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ with $u \neq v$ they imply $$\begin{aligned} \left\langle A(u) - A (v) , u - v \right\rangle \geq \int_{\Omega}\left( \left|{\nabla u}\right| ^{p(x)-2}\nabla u - \left|{\nabla v}\right|^{p(x)-2}\nabla v \right) \cdot \left( \nabla u - \nabla v \right) \mathop{\mathrm{d\textit{x}}}> 0. \end{aligned}$$ For the boundedness, let us take $u,v \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\setminus \{0\}$ with $\left\|v\right\|_{1,\ensuremath{\mathcal{H}_{\log}}} = 1$. Then, use [\[Eq:LogGrowthProduct\]](#Eq:LogGrowthProduct){reference-type="eqref" reference="Eq:LogGrowthProduct"} in the case $\left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}} \geq 1$ and that $t \mapsto \log (e + t)$ is increasing and $\left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}} ^{1 - p_-} \leq \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}} ^{1 - q_-}$ in the case $\left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}} \leq 1$. Together with Young's inequality it holds $$\begin{aligned} &\min \left\lbrace \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}}^{- q_+} , \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}}^{1 - p_-} \right\rbrace \left|{ \langle A(u),v \rangle }\right|\\ & \leq \min \left\lbrace \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}}^{- q_+} , \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}}^{1 - p_-} \right\rbrace \\ & \qquad \times\int_{\Omega}\left( |\nabla u|^{p(x)-1} \left|{ \nabla v }\right| \mathop{+} \mu(x) \left[ \log (e + \left|{ \nabla u }\right|) + 1 \right] |\nabla u|^{q(x)-1} \left|{ \nabla v }\right| \right) \mathop{\mathrm{d\textit{x}}}\\ & \leq \int_{\Omega}\left|{ \frac{\nabla u}{ \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}} } }\right| ^{p(x)-1} \left|{ \nabla v }\right| \mathop{\mathrm{d\textit{x}}}\\ & \quad + \int_{\Omega}\left(\mu(x) \left[ \log \left( e + \frac{\left|{ \nabla u }\right|}{ \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}} } \right) + 1 \right] \left|{ \frac{\nabla u}{ \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}} } }\right| ^{q(x)-1} \left|{ \nabla v }\right| \right) \mathop{\mathrm{d\textit{x}}}\\ & \leq \frac{p_+-1}{p_-} \int_{\Omega}\left|{ \frac{\nabla u}{ \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}} } }\right| ^{p(x)} \mathop{\mathrm{d\textit{x}}}+ \frac{1}{p_-} \int_{\Omega}\left|{ \nabla v }\right|^{p(x)} \mathop{\mathrm{d\textit{x}}}\\ & \quad +\frac{q_+-1}{q_-} \int_{\Omega}\mu(x) \left[ \log \left( e + \frac{\left|{ \nabla u }\right|}{ \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}} } \right) + 1 \right] \left|\frac{\nabla u}{ \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}} }\right|^{q(x)} \mathop{\mathrm{d\textit{x}}}\\ & \quad + \frac{1}{q_-} \int_{\Omega}\mu(x) \left[ \log \left( e + \frac{\left|{ \nabla u }\right|}{ \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}} } \right) + 1 \right] \left|{ \nabla v }\right| ^{q(x)} \mathop{\mathrm{d\textit{x}}}. 
\end{aligned}$$ We can estimate the last addend by splitting $\Omega$ in the set where $\left|{ \nabla v }\right|$ is greater than $\left|{ \nabla u }\right| / \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}}$ and its complement. This together with Proposition [Proposition 27](#Prop:oneHlogModularNorm){reference-type="ref" reference="Prop:oneHlogModularNorm"} [(i)]{.nodecor} leads to $$\begin{aligned} & \min \left\lbrace \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}}^{- q_+} , \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}}^{1 - p_-} \right\rbrace \left|{ \langle A(u),v \rangle }\right| \\ & \leq \max \left\lbrace \frac{q^+}{q_-}, \frac{p_+-1}{p_-} \right\rbrace \ensuremath{\varrho_{\ensuremath{\mathcal{H}_{\log}}} \left( \frac{\nabla u}{ \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}} }\right)}+\frac{1}{p_-} \ensuremath{\varrho_{\ensuremath{\mathcal{H}_{\log}}} \left( \nabla v \right)} \\ & \leq \max \left\lbrace \frac{q^+}{q_-}, \frac{p_+-1}{p_-} \right\rbrace\ensuremath{\varrho_{1,\ensuremath{\mathcal{H}_{\log}}} \left( \frac{ u}{ \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}} }\right)}+ \frac{1}{p_-} \ensuremath{\varrho_{1,\ensuremath{\mathcal{H}_{\log}}} \left( v \right)} \\ & = \max \left\lbrace \frac{q^+}{q_-}, \frac{p_+-1}{p_-} \right\rbrace + \frac{1}{p_-} = \max \left\lbrace \frac{q_+ p_- + q_-}{q_- p_-} , \frac{p_+}{p_-} \right\rbrace. \end{aligned}$$ As a consequence $$\begin{aligned} \|A(u)\|_* & = \sup_{ \left\|v\right\|_{1,\ensuremath{\mathcal{H}_{\log}}} = 1 } \left\langle A(u) , v \right\rangle \\ & \leq \max \left\lbrace \frac{q_+ p_- + q_-}{q_- p_-} , \frac{p_+}{p_-} \right\rbrace \max \left\lbrace \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}}^{q_+} , \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}}}^{p_- - 1} \right\rbrace. \end{aligned}$$ From now on we assume that [\[Assump:Poincare\]](#Assump:Poincare){reference-type="eqref" reference="Assump:Poincare"} holds for the proof of the remaining properties. Now we will deal with the [(S$_+$)]{.nodecor}-property. Let $\{u_n\}_{n \in \mathbb{N}} \subseteq W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ be a sequence such that $$\begin{aligned} \label{Eq:SequenceSplus} u_n \rightharpoonup u \quad\text{in } W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega) \quad\text{and}\quad \limsup_{n \to \infty}{\left\langle A(u_n), u_n - u \right\rangle } \leq 0. \end{aligned}$$ By the strict monotonicity of $A$ and the weak convergence of $u_n$, we obtain $$\begin{aligned} 0 \leq \liminf_{n \to \infty} \left\langle A(u_n) - A(u) , u_n - u \right\rangle & \leq \limsup_{n \to \infty}\left\langle A(u_n) - A(u) , u_n - u \right\rangle \\ & = \limsup_{n \to \infty}\left\langle A(u_n) , u_n - u \right\rangle \leq 0, \end{aligned}$$ which means $$\begin{aligned} \lim_{n \to \infty} \left\langle A(u_n) - A(u) , u_n - u \right\rangle = 0. \end{aligned}$$ **Claim:** $\nabla u_n \to \nabla u$ in measure. In particular, as the previous expression can be decomposed in the sum of nonnegative terms, it follows $$\begin{aligned} \label{Eq:CasePgeq2} \lim_{n \to \infty} \int_{ \{p \geq 2 \}} \left( \left|{\nabla u_n}\right|^{p(x)-2} \nabla u_n - \left|{\nabla u}\right|^{p(x)-2} \nabla u \right) \cdot \left( \nabla u_n - \nabla u \right) \mathop{\mathrm{d\textit{x}}}= 0, \\ \label{Eq:CaseP<2} \lim_{n \to \infty} \int_{ \{p < 2 \}} \left( \left|{\nabla u_n}\right|^{p(x)-2} \nabla u_n - \left|{\nabla u}\right|^{p(x)-2} \nabla u \right) \cdot \left( \nabla u_n - \nabla u \right) \mathop{\mathrm{d\textit{x}}}= 0. 
\end{aligned}$$ From Lemma [Lemma 37](#Le:MonotoneInequality){reference-type="ref" reference="Le:MonotoneInequality"} with $h \equiv 1$ and [\[Eq:CasePgeq2\]](#Eq:CasePgeq2){reference-type="eqref" reference="Eq:CasePgeq2"}, we can directly derive that $$\begin{aligned} \lim_{n \to \infty} \int_{ \{p \geq 2 \}} \left|{\nabla u_n - \nabla u}\right|^{p(x)} \mathop{\mathrm{d\textit{x}}}= 0, \end{aligned}$$ hence $\nabla u_n 1_{ \{p \geq 2 \}} \to \nabla u 1_{ \{p \geq 2 \}}$ in measure. On the other hand, let $$\begin{aligned} E_n = \{ \nabla u_n \neq 0\} \cup \{ \nabla u \neq 0\}, \end{aligned}$$ then for any $\varepsilon> 0$ we know that $$\begin{aligned} & \left\lbrace 1_{ \{p < 2 \}} (p_- - 1) \left|{\nabla u_n - \nabla u}\right|^2 1_{E_n} \left( \left|{\nabla u_n}\right| + \left|{\nabla u}\right| \right)^{p(x) - 2} \geq \varepsilon\right\rbrace \\ & \subseteq \left\lbrace 1_{ \{p < 2 \}} \left( \left|{\nabla u_n}\right|^{p(x)-2} \nabla u_n - \left|{\nabla u}\right|^{p(x)-2} \nabla u \right) \cdot \left( \nabla u_n - \nabla u \right) \geq \varepsilon\right\rbrace. \end{aligned}$$ From [\[Eq:CaseP\<2\]](#Eq:CaseP<2){reference-type="eqref" reference="Eq:CaseP<2"} and the previous expression we obtain that $$\begin{aligned} 1_{ \{p < 2 \}} \left|{\nabla u_n - \nabla u}\right|^2 1_{E_n} \left( \left|{\nabla u_n}\right| + \left|{\nabla u}\right| \right)^{p(x) - 2} \rightarrow 0 \quad \text{in measure.} \end{aligned}$$ Hence the same convergence is true a.e. along a subsequence $u_{n_k}$ and then for a.a. $x \in \Omega$ there exists $M(x) > 0$ such that for all $k \in \mathbb{N}$ $$\begin{aligned} M(x) & \geq 1_{ \{p < 2 \}} \left|{\nabla u_{n_k} - \nabla u}\right|^2 1_{E_{n_k}} \left( \left|{\nabla u_{n_k}}\right| + \left|{\nabla u}\right| \right)^{p(x) - 2} \\ & \geq 1_{ \{p < 2 \}} \left|{ \left|{\nabla u_{n_k}}\right| - \left|{\nabla u}\right| }\right|^2 1_{E_{n_k}} \left( \left|{\nabla u_{n_k}}\right| + \left|{\nabla u}\right| \right)^{p(x) - 2}. \end{aligned}$$ Note that, given any $c > 0$ and $P > 0$, the function $h(t) = \left|{t - c}\right|^2 (t + c)^{P-2}$ satisfies $\lim_{t \to +\infty} h(t) = +\infty$. Therefore, there exists $m(x) > 0$ such that $\left|{ \nabla u_{n_k} }\right| \leq m(x)$ for a.a. $x \in \Omega$ and all $k \in \mathbb{N}$. As a consequence $$\begin{aligned} & 1_{ \{p < 2 \}} \left|{\nabla u_{n_k} - \nabla u}\right|^2 1_{E_{n_k}} \left( \left|{\nabla u_{n_k}}\right| + \left|{\nabla u}\right| \right)^{p(x) - 2} \\ & \geq 1_{ \{p < 2 \}} \left|{\nabla u_{n_k} - \nabla u}\right|^2 1_{E_{n_k}} \left( m(x) + \left|{\nabla u}\right| \right)^{p(x) - 2} \end{aligned}$$ and the convergence a.e. to zero of the left-hand side yields $1_{ \{p < 2 \}} \left|{ \nabla u_{n_k} - \nabla u }\right| \to 0$ a.e., since $$\begin{aligned} & \text{ for all } x \in \Omega \text{ such that } \nabla u (x) \neq 0, 1_{E_{n_k}} (x) = 1 \text{ for all } k \in \mathbb{N}; \\ & \text{ for all } x \in \Omega \text{ such that } \nabla u (x) = 0, \text{ along any subsequence such that }\\ & \left|{\nabla u_{n_{k'}} (x)}\right| > \varepsilon> 0 \text{ it holds that }1_{E_{n_{k'}}} (x) = 1 \text{ for all } k' \in \mathbb{N}. \end{aligned}$$ Then $1_{ \{p < 2 \}} \left|{ \nabla u_{n_k} - \nabla u }\right| \to 0$ in measure and by the subsequence principle, this is also true for the whole sequence $u_n$. Together with the case on $\{p \geq 2 \}$, this finishes the proof of the Claim. 
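In the next chain of estimates, the classical Young inequality with exponents $p(x)$ and $\frac{p(x)}{p(x)-1}$ and Lemma [Lemma 38](#Le:YoungIneqLog){reference-type="ref" reference="Le:YoungIneqLog"} with $s = \left|{\nabla u}\right|$, $t = \left|{\nabla u_n}\right|$ and $r = q(x)$ are applied pointwise for a.a. $x \in \Omega$, that is, $$\begin{aligned} \left|{\nabla u_n}\right|^{p(x)-1} \left|{\nabla u}\right| & \leq \frac{p(x)-1}{p(x)} \left|{\nabla u_n}\right|^{p(x)} + \frac{1}{p(x)} \left|{\nabla u}\right|^{p(x)}, \\ \left[ \log (e + \left|{\nabla u_n}\right| ) + \frac{\left|{\nabla u_n}\right|}{q(x) (e + \left|{\nabla u_n}\right|)} \right] \left|{\nabla u_n}\right|^{q(x)-1} \left|{\nabla u}\right| & \leq \left[ \frac{q(x)-1}{q(x)} \log (e + \left|{\nabla u_n}\right| ) + \frac{\left|{\nabla u_n}\right|}{q(x) (e + \left|{\nabla u_n}\right|)} \right] \left|{\nabla u_n}\right|^{q(x)} \\ & \quad + \frac{1}{q(x)} \left|{\nabla u}\right|^{q(x)} \log (e + \left|{\nabla u}\right|). \end{aligned}$$ 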
By the usual Young's inequality and Lemma [Lemma 38](#Le:YoungIneqLog){reference-type="ref" reference="Le:YoungIneqLog"} it follows $$\begin{aligned} & \int_{\Omega}\left|{\nabla u_n}\right|^{p(x)-2}\nabla u_n \cdot \nabla (u_n - u ) \mathop{\mathrm{d\textit{x}}}\\ & \quad+\int_{\Omega}\mu(x) \left[ \log (e + \left|{\nabla u_n}\right| ) + \frac{\left|{\nabla u_n}\right|}{q(x) (e + \left|{\nabla u_n}\right|)} \right] \left|{\nabla u_n}\right|^{q(x)-2}\nabla u_n \cdot\nabla (u_n - u ) \mathop{\mathrm{d\textit{x}}}\\ & = \int_{\Omega}\left|{\nabla u_n}\right|^{p(x)} \mathop{\mathrm{d\textit{x}}}- \int_{\Omega}\left|{\nabla u_n}\right|^{p(x)-2}\nabla u_n \cdot \nabla u \mathop{\mathrm{d\textit{x}}}\\ & \quad + \int_{\Omega}\mu(x) \left[ \log (e + \left|{\nabla u_n}\right| ) + \frac{\left|{\nabla u_n}\right|}{q(x) (e + \left|{\nabla u_n}\right|)} \right] \left|{\nabla u_n}\right|^{q(x)} \mathop{\mathrm{d\textit{x}}}\\ & \quad - \int_{\Omega}\mu(x) \left[ \log (e + \left|{\nabla u_n}\right| ) + \frac{\left|{\nabla u_n}\right|}{q(x) (e + \left|{\nabla u_n}\right|)} \right] \left|{\nabla u_n}\right|^{q(x)-2}\nabla u_n \cdot \nabla u \mathop{\mathrm{d\textit{x}}}\\ & \geq \int_{\Omega}\left|{\nabla u_n}\right|^{p(x)} \mathop{\mathrm{d\textit{x}}}- \int_{\Omega}\left|{\nabla u_n}\right|^{p(x)-1} \left|{\nabla u}\right| \mathop{\mathrm{d\textit{x}}}\\ & \quad + \int_{\Omega}\mu(x) \left[ \log (e + \left|{\nabla u_n}\right| ) + \frac{\left|{\nabla u_n}\right|}{q(x) (e + \left|{\nabla u_n}\right|)} \right] \left|{\nabla u_n}\right|^{q(x)} \mathop{\mathrm{d\textit{x}}}\\ & \quad - \int_{\Omega}\mu(x) \left[ \log (e + \left|{\nabla u_n}\right| ) + \frac{\left|{\nabla u_n}\right|}{q(x) (e + \left|{\nabla u_n}\right|)} \right] \left|{\nabla u_n}\right|^{q(x)- 1} \left|{\nabla u}\right| \mathop{\mathrm{d\textit{x}}}\\ & \geq \int_{\Omega}\left|{\nabla u_n}\right|^{p(x)} \mathop{\mathrm{d\textit{x}}}- \int_{\Omega}\left( \frac{p(x)-1}{p(x)}\left|{\nabla u_n}\right|^{p(x)}+\frac{1}{p(x)} \left|{\nabla u}\right|^{p(x)} \right) \mathop{\mathrm{d\textit{x}}}\\ & \quad + \int_{\Omega}\mu(x) \left[ \log (e + \left|{\nabla u_n}\right| ) + \frac{\left|{\nabla u_n}\right|}{q(x) (e + \left|{\nabla u_n}\right|)} \right] \left|{\nabla u_n}\right|^{q(x)} \mathop{\mathrm{d\textit{x}}}\\ & \quad - \int_{\Omega}\mu(x)\left( \left[ \frac{q(x)-1}{q(x)} \log (e + \left|{\nabla u_n}\right| ) + \frac{\left|{\nabla u_n}\right|}{q(x) (e + \left|{\nabla u_n}\right|)} \right] \left|{\nabla u_n}\right|^{q(x)} \right. \\ & \quad \left. + \frac{1}{q(x)}\left|{\nabla u}\right|^{q(x)} \log (e + \left|{ \nabla u }\right|) \right) \mathop{\mathrm{d\textit{x}}}\\ & = \int_{\Omega}\frac{1}{p(x)}\left|{\nabla u_n}\right|^{p(x)} \mathop{\mathrm{d\textit{x}}}- \int_{\Omega}\frac{1}{p(x)} \left|{\nabla u}\right|^{p(x)} \mathop{\mathrm{d\textit{x}}}\\ & \quad + \int_{\Omega}\frac{\mu(x)}{q(x)}\left|{\nabla u_n}\right|^{q(x)} \log (e + \left|{ \nabla u_n }\right|) \mathop{\mathrm{d\textit{x}}}- \int_{\Omega}\frac{\mu(x)}{q(x)}\left|{\nabla u}\right|^{q(x)} \log (e + \left|{ \nabla u }\right|) \mathop{\mathrm{d\textit{x}}}. 
\end{aligned}$$ As a consequence, by [\[Eq:SequenceSplus\]](#Eq:SequenceSplus){reference-type="eqref" reference="Eq:SequenceSplus"} $$\begin{aligned} & \limsup_{n \to \infty} \int_\Omega \left(\frac{\left|{\nabla u_n}\right|^{p(x)}}{p(x)} + \mu(x) \frac{ \left|{\nabla u_n}\right|^{q(x)}}{q(x)} \log (e + \left|{ \nabla u_n }\right|) \right) \mathop{\mathrm{d\textit{x}}}\\ & \leq \int_\Omega \left(\frac{\left|{\nabla u}\right|^{p(x)}}{p(x)} + \mu(x) \frac{ \left|{\nabla u}\right|^{q(x)}}{q(x)} \log (e + \left|{ \nabla u }\right|) \right) \mathop{\mathrm{d\textit{x}}}. \end{aligned}$$ By Fatou's Lemma one can obtain that the limit inferior satisfies the opposite inequality, so all in all $$\begin{aligned} & \lim_{n \to \infty} \int_\Omega \left(\frac{\left|{\nabla u_n}\right|^{p(x)}}{p(x)} + \mu(x) \frac{ \left|{\nabla u_n}\right|^{q(x)}}{q(x)} \log (e + \left|{ \nabla u_n }\right|) \right) \mathop{\mathrm{d\textit{x}}}\\ & = \int_\Omega \left(\frac{\left|{\nabla u}\right|^{p(x)}}{p(x)} + \mu(x) \frac{ \left|{\nabla u}\right|^{q(x)}}{q(x)} \log (e + \left|{ \nabla u }\right|) \right) \mathop{\mathrm{d\textit{x}}}. \end{aligned}$$ By the previous Claim, passing to a.e. convergence along a subsequence and using the subsequence principle, we can prove that the integrand of the left-hand side converges in measure to the integrand of the right-hand side. Then we can derive from the so-called converse of Vitali's theorem its $L^1$ convergence, and in particular the uniform integrability of the sequence $$\begin{aligned} \left\lbrace \frac{\left|{\nabla u_n}\right|^{p(x)}}{p(x)} + \mu(x) \frac{ \left|{\nabla u_n}\right|^{q(x)}}{q(x)} \log (e + \left|{ \nabla u_n }\right|) \right\rbrace _{n \in \mathbb{N}} . \end{aligned}$$ On the other hand, by [\[Eq:LogGrowthSum\]](#Eq:LogGrowthSum){reference-type="eqref" reference="Eq:LogGrowthSum"} we know that $$\begin{aligned} & \left|{\nabla u_n - \nabla u}\right|^{p(x)} + \mu(x) \left|{\nabla u_n - \nabla u}\right|^{q(x)} \log (e + \left|{ \nabla u_n - \nabla u }\right|) \\ & \leq 2 ^{q_+ + 1} q_+ \left( \frac{\left|{ \nabla u_n }\right|^{p(x)}}{p(x)} + \frac{\left|{ \nabla u }\right|^{p(x)}}{p(x)} \right. \\ & \phantom{\leq 2 ^{q_+ + 1} q_+ } \quad \left. + \frac{ \left|{ \nabla u_n }\right|^{q(x)} }{q(x)} \log(e + \left|{ \nabla u_n }\right| ) + \frac{ \left|{ \nabla u }\right|^{q(x)} }{q(x)} \log(e + \left|{ \nabla u }\right| ) \right), \end{aligned}$$ which yields the uniform integrability of the sequence $$\begin{aligned} \left\lbrace \left|{\nabla u_n - \nabla u}\right|^{p(x)} + \mu(x) \left|{\nabla u_n - \nabla u}\right|^{q(x)} \log (e + \left|{ \nabla u_n - \nabla u }\right|) \right\rbrace _{n \in \mathbb{N}} . \end{aligned}$$ As above, by the Claim, passing to a.e. convergence along a subsequence and using the subsequence principle, we can prove that this sequence converges in measure to zero. By Vitali's theorem, these two facts imply $$\begin{aligned} & \lim_{n \to \infty} \ensuremath{\varrho_{\ensuremath{\mathcal{H}_{\log}}} \left( \nabla u_n - \nabla u \right)} \\ & = \lim_{n \to \infty} \int_{\Omega}\left( \left|{\nabla u_n - \nabla u}\right|^{p(x)} + \mu(x) \left|{\nabla u_n - \nabla u}\right|^{q(x)} \log (e + \left|{ \nabla u_n - \nabla u }\right|) \right) \mathop{\mathrm{d\textit{x}}} = 0. 
\end{aligned}$$ By Proposition [Proposition 25](#Prop:HlogModularNorm){reference-type="ref" reference="Prop:HlogModularNorm"} [(v)]{.nodecor}, this is equivalent to $\left\| u_n - u \right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} = \left\| \nabla u_n - \nabla u \right\|_{\ensuremath{\mathcal{H}_{\log}}} \to 0$, i.e. by Proposition [Proposition 30](#Prop:Poincare){reference-type="ref" reference="Prop:Poincare"} we know that $u_n \to u$ in $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$. Let us now prove that the operator is coercive. For any $u \in W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)$ with $\left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} \geq 1$, using the fact that $t \mapsto \log (e + t)$ is increasing, by Proposition [Proposition 25](#Prop:HlogModularNorm){reference-type="ref" reference="Prop:HlogModularNorm"} [(i)]{.nodecor} it follows $$\begin{aligned} \frac{\left\langle A(u) , u \right\rangle}{ \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} } & = \int_{\Omega}\left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{p(x)-1} \left( \frac{\left|{\nabla u}\right|}{ \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} } \right)^{p(x)} \mathop{\mathrm{d\textit{x}}}\\ & \quad +\int_{\Omega}\left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{q(x)-1} \mu(x) \left( \frac{\left|{\nabla u}\right|}{ \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} } \right)^{q(x)} \log ( e + \left|{\nabla u}\right| ) \mathop{\mathrm{d\textit{x}}}\\ & \geq \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{p_- - 1} \ensuremath{\varrho_{1,\ensuremath{\mathcal{H}_{\log}},0} \left( \frac{\nabla u}{ \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} } \right)} = \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{p_- - 1} \to + \infty \end{aligned}$$ as $\left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} \to + \infty$. Finally, we show that $A$ is a homeomorphism. By the previously proven properties and the Minty-Browder theorem (see, for example, Zeidler [@Zeidler-1990 Theorem 26.A]), we know that $A$ is invertible and that $A^{-1}$ is strictly monotone, demicontinuous and bounded. It is only left to see that $A^{-1}$ is continuous. With this purpose in mind, let $\{y_n\}_{n \in \mathbb{N}} \subseteq [ W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)] ^*$ be a sequence such that $y_n \to y$ in $[ W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)] ^*$ and let $u_n=A^{-1}(y_n)$ as well as $u=A^{-1}(y)$. The strong convergence of $\{y_n\}_{n \in \mathbb{N}}$ and the boundedness of $A^{-1}$ imply that $u_n$ is bounded in $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$. Thus, there exists a subsequence $\{u_{n_k}\}_{k\in\mathbb{N}}$ such that $u_{n_k} \rightharpoonup u_0$ in $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$. All these properties yield $$\begin{aligned} & \lim_{k\to \infty} \left\langle A(u_{n_k})-A(u_0),u_{n_k}-u_0\right\rangle \\ & = \lim_{k \to \infty} \left\langle y_{n_k}-y,u_{n_k}-u_0\right\rangle+\lim_{k \to \infty} \left\langle y-A(u_0),u_{n_k}-u_0\right\rangle=0. \end{aligned}$$ By the [(S$_+$)]{.nodecor}-property of $A$ we obtain that $u_{n_k}\to u_0$ in $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$. The operator $A$ is also continuous, so $$\begin{aligned} A(u_0)=\lim_{k \to \infty} A(u_{n_k})=\lim_{k \to \infty} y_{n_k}=y=A(u). \end{aligned}$$ As $A$ is injective, this proves that $u=u_0$. By the subsequence principle we can show that this convergence holds for the whole sequence. 
◻ When we consider the operator $\tilde{A} \colon W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)\to \left[ W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)\right] ^*$ given by the same expression as $A$, one has the following result. **Theorem 40**. *Let [\[Assump:Space\]](#Assump:Space){reference-type="eqref" reference="Assump:Space"} be satisfied, then the operator $\tilde{A}$ is bounded, continuous and strictly monotone. If we further assume [\[Assump:Poincare\]](#Assump:Poincare){reference-type="eqref" reference="Assump:Poincare"}, then it also is of type [(S$_+$)]{.nodecor}, coercive and a homeomorphism.* *Proof.* For the [(S$_+$)]{.nodecor}-property, repeat the same argument as before together with the compact embedding $W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)\hookrightarrow L^{\ensuremath{\mathcal{H}_{\log}}}(\Omega)$ from Proposition [Proposition 30](#Prop:Poincare){reference-type="ref" reference="Prop:Poincare"}. The rest of the assertions follow in the same way as before. ◻ For completeness, we consider the operator $B \colon W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)\to \left[ W^{1, \ensuremath{\mathcal{H}_{\log}}}(\Omega)\right] ^*$ given by $$\begin{aligned} \left\langle B(u),v \right\rangle & = \int_{\Omega}\left|{\nabla u}\right|^{p(x)-2}\nabla u \cdot \nabla v \vphantom{\frac{\left|{\nabla u}\right|}{q(x) (e + \left|{\nabla u}\right|)}}\mathop{\mathrm{d\textit{x}}}\\ & \quad +\int_{\Omega}\mu(x) \left[ \log (e + \left|{\nabla u}\right| ) + \frac{\left|{\nabla u}\right|}{q(x) (e + \left|{\nabla u}\right|)} \right] \left|{\nabla u}\right|^{q(x)-2}\nabla u \cdot \nabla v\mathop{\mathrm{d\textit{x}}}\\ & \quad + \int_{\Omega}\left|{ u}\right|^{p(x)-2} u v \mathop{\mathrm{d\textit{x}}}\\ & \quad +\int_{\Omega}\mu(x) \left[ \log (e + \left|{ u}\right| ) + \frac{\left|{ u}\right|}{q(x) (e + \left|{ u}\right|)} \right] \left|{ u}\right|^{q(x)-2} u v\mathop{\mathrm{d\textit{x}}},\end{aligned}$$ and the operator $\tilde{B} \colon W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\to \left[ W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\right] ^*$ given by the same expression. Then we have the following result without the need for the condition [\[Assump:Poincare\]](#Assump:Poincare){reference-type="eqref" reference="Assump:Poincare"}. The proof is once again the same as in Theorem [Theorem 39](#Th:PropertiesOperator){reference-type="ref" reference="Th:PropertiesOperator"}. **Theorem 41**. *Let [\[Assump:Space\]](#Assump:Space){reference-type="eqref" reference="Assump:Space"} be satisfied, then the operators $B$ and $\tilde{B}$ are bounded, continuous, strictly monotone, of type [(S$_+$)]{.nodecor}, coercive and homeomorphisms.* One direct consequence of the previous theorems is the following existence and uniqueness result. We say that $u \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ is a weak solution of [\[Eq:Problem\]](#Eq:Problem){reference-type="eqref" reference="Eq:Problem"} if for all $v \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ it holds that $$\begin{aligned} & \int_{\Omega}\left|{\nabla u}\right|^{p(x)-2}\nabla u \cdot \nabla v \mathop{\mathrm{d\textit{x}}}\\ & \quad +\int_{\Omega}\mu(x) \left[ \log (e + \left|{\nabla u}\right| ) + \frac{\left|{\nabla u}\right|}{q(x) (e + \left|{\nabla u}\right|)} \right] \left|{\nabla u}\right|^{q(x)-2}\nabla u \cdot \nabla v \mathop{\mathrm{d\textit{x}}}\\ & = \int_{\Omega}f(x,u) v \mathop{\mathrm{d\textit{x}}}.\end{aligned}$$ **Theorem 42**. 
*Let [\[Assump:Poincare\]](#Assump:Poincare){reference-type="eqref" reference="Assump:Poincare"} hold and $f \in L^{\frac{ r(\cdot) }{ r(\cdot) - 1 }}(\Omega)$, where either $r \in C_+ (\overline{\Omega})$ and $r(x) < p^*(x)$ for all $x \in \overline{\Omega}$ or $r \in C_+ (\overline{\Omega}) \cap C^{0, \frac{1}{ |\log t| }} (\overline{\Omega})$ and $r(x) \leq p^*(x)$ for all $x \in \overline{\Omega}$. Then there exists a unique weak solution $u \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ of problem [\[Eq:Problem\]](#Eq:Problem){reference-type="eqref" reference="Eq:Problem"}.* *Proof.* As by Proposition [Proposition 28](#Prop:EmbeddingHlogSobolev){reference-type="ref" reference="Prop:EmbeddingHlogSobolev"} [(ii)]{.nodecor} and [(iii)]{.nodecor} $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\hookrightarrow L^{r(\cdot)}(\Omega)$, then $f \in L^{r'(\cdot)}(\Omega) = \left[ L^{r(\cdot)}(\Omega) \right] ^* \hookrightarrow \left[ W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\right] ^*$. The result follows because $A$ is a bijection by Theorem [Theorem 39](#Th:PropertiesOperator){reference-type="ref" reference="Th:PropertiesOperator"}. ◻ # Constant sign solutions After the comprehensive list of properties of the function space and the logarithmic double phase operator given in the previous sections, we are now in the position to prove our first multiplicity result for problem [\[Eq:Problem\]](#Eq:Problem){reference-type="eqref" reference="Eq:Problem"}. We impose the following assumptions on the right-hand side $f$ and some slightly stricter requirements on the exponents than on [\[Assump:Poincare\]](#Assump:Poincare){reference-type="eqref" reference="Assump:Poincare"}: 1. [\[Assump:H2\]]{#Assump:H2 label="Assump:H2"} $\Omega \subseteq \mathbb{R}^N$, with $N \geq 2$, is a bounded domain with Lipschitz boundary $\partial \Omega$, $p,q \in C_+ (\overline{\Omega})$ with $p(x) \leq q(x) \leq q_+ < p^*_-$ for all $x \in \overline{\Omega}$ and $\mu \in L^{\infty}(\Omega)$. ```{=html} <!-- --> ``` 1. [\[Hf\]]{#Hf label="Hf"} Let $f \colon \Omega \times \mathbb{R}\to \mathbb{R}$ and $F(x,t) = \int_{0}^{t} f(x,s) \mathop{\mathrm{d\textit{s}}}$. 1. [\[Asf:Caratheodory\]]{#Asf:Caratheodory label="Asf:Caratheodory"} The function $f$ is Carathéodory type, i.e. $t \mapsto f(x,t)$ is continuous for a.a. $x \in \Omega$ and $x \mapsto f(x,t)$ is measurable for all $t \in \mathbb{R}$. 2. [\[Asf:WellDef\]]{#Asf:WellDef label="Asf:WellDef"} There exists $r \in C_+(\overline{\Omega})$ with $r_+ < p^*_-$ and $C>0$ such that $$\begin{aligned} \left|{f(x,t)}\right| \leq C \left( 1 + \left|{t}\right|^{r(x)-1} \right) \end{aligned}$$ for a.a. $x \in \Omega$ and for all $t \in \mathbb{R}$. 3. [\[Asf:GrowthInfty\]]{#Asf:GrowthInfty label="Asf:GrowthInfty"} $$\begin{aligned} \lim\limits_{s \to \pm \infty} \frac{F(x,s)}{\left|{s}\right|^{q_+} \log(e + \left|{s}\right|) } = + \infty \quad \text{uniformly for a.a.\,} x \in \Omega. \end{aligned}$$ 4. [\[Asf:GrowthZero\]]{#Asf:GrowthZero label="Asf:GrowthZero"} There exists $\theta > 0$ such that $F(x,t) \leq 0$ for $\left|{t}\right| \leq \theta$ and for a.a. $x \in \Omega$ and $f(x,0)=0$ for a.a. $x \in \Omega$. 5. 
[\[Asf:CeramiAssumption\]]{#Asf:CeramiAssumption label="Asf:CeramiAssumption"} There exists $l, \widetilde{l} \in C_+(\overline{\Omega})$ such that $\min \{l_-, \widetilde{l}_- \} \in \left( (r_+ - p_-) \frac{N}{p_-} , r_+ \right)$ and $K > 0$ with $$\begin{aligned} 0 < K \leq \liminf_{s \to + \infty} \frac{f(x,s)s - q_+ \left( 1 + \frac{\kappa}{q_-} \right) F(x,s)}{\left|{s}\right|^{l(x)}} \end{aligned}$$ uniformly for a.a. $x\in \Omega$, and $$\begin{aligned} 0 < K \leq \liminf_{s \to - \infty} \frac{f(x,s)s - q_+ \left( 1 + \frac{\kappa}{q_-} \right) F(x,s)}{\left|{s}\right|^{\widetilde{l}(x)}} \end{aligned}$$ uniformly for a.a. $x \in \Omega$, where $\kappa$ is the same one as in Lemma [Lemma 22](#Le:fepsilon){reference-type="ref" reference="Le:fepsilon"}. As a consequence the function $f$ has the following properties. **Lemma 43**. *Let $f \colon \Omega \times \mathbb{R}\to \mathbb{R}$.* 1. *[\[Propf:q\<r\]]{#Propf:q<r label="Propf:q<r"} If $f$ fulfills [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"} and [\[Asf:GrowthInfty\]](#Asf:GrowthInfty){reference-type="eqref" reference="Asf:GrowthInfty"}, then $q_+ < r_-$.* 2. *[\[Propf:Boundedbelow\]]{#Propf:Boundedbelow label="Propf:Boundedbelow"} If $f$ fulfills [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"} and [\[Asf:GrowthInfty\]](#Asf:GrowthInfty){reference-type="eqref" reference="Asf:GrowthInfty"}, then there exist some $M>0$ such that $$\begin{aligned} F(x,t) > - M \quad \text{for a.a.\,} x \in \Omega \text{ and for all } t \in \mathbb{R}. \end{aligned}$$* 3. *[\[Propf:EpsilonUpperBoundF\]]{#Propf:EpsilonUpperBoundF label="Propf:EpsilonUpperBoundF"} If $f$ fulfills [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"} and [\[Asf:GrowthZero\]](#Asf:GrowthZero){reference-type="eqref" reference="Asf:GrowthZero"}, then there exists $C > 0$ such that $$\begin{aligned} F(x,t) \leq C \left|{t}\right|^{r(x)} \quad \text{for a.a.\,} x \in \Omega. \end{aligned}$$* 4. *[\[Propf:EpsilonLowerBoundF\]]{#Propf:EpsilonLowerBoundF label="Propf:EpsilonLowerBoundF"} If $f$ fulfills [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"} and [\[Asf:GrowthInfty\]](#Asf:GrowthInfty){reference-type="eqref" reference="Asf:GrowthInfty"}, then for each $\varepsilon> 0$ there exists $C_\varepsilon> 0$ such that $$\begin{aligned} \left|{F(x,t)}\right| \geq \frac{\varepsilon}{q_+} \left|{t}\right|^{q_+} \log( e + \left|{t}\right| ) - C_\varepsilon\quad \text{for a.a.\,} x \in \Omega. \end{aligned}$$* 5. *[\[Propf:ConvergenceFterm\]]{#Propf:ConvergenceFterm label="Propf:ConvergenceFterm"} If $f$ fulfills [\[Asf:Caratheodory\]](#Asf:Caratheodory){reference-type="eqref" reference="Asf:Caratheodory"} and [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"}, then the functional $I_f \colon W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\to \mathbb{R}$ given by $$\begin{aligned} I_f(u) = \int_{\Omega}F(x,u) \mathop{\mathrm{d\textit{x}}} \end{aligned}$$ and its derivative $I_f ' \colon W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\to \left[ W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\right] ^*$, given by $$\begin{aligned} \left \langle I_f ' (u) , v \right \rangle = \int_{\Omega}f(x,u) v \mathop{\mathrm{d\textit{x}}}, \end{aligned}$$ are strongly continuous, i.e. 
$u_n \rightharpoonup u$ in $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ implies $I_f(u_n) \to I_f(u)$ in $\mathbb{R}$ and $I_f '(u_n) \to I_f ' (u)$ in $\left[ W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\right] ^*$.*

**Remark 44**. *Since in [\[Assump:H2\]](#Assump:H2){reference-type="eqref" reference="Assump:H2"} we assumed $q_+ < p^*_-$, we can always find at least one $r \in C_+(\overline{\Omega})$ such that $q_+ < r_- \leq r_+ < p^*_-$.*

In order to find weak solutions we work with the energy functional associated with problem [\[Eq:Problem\]](#Eq:Problem){reference-type="eqref" reference="Eq:Problem"}, since weak solutions coincide with the critical points of this functional. It is defined as the functional $\varphi\colon W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\to \mathbb{R}$ given by $$\begin{aligned} \varphi(u) = \int_{\Omega}\left( \frac{\left|{\nabla u}\right|^{p(x)}}{p(x)} + \mu(x) \frac{\left|{ \nabla u}\right|^{q(x)}}{q(x)} \log( e + \left|{\nabla u}\right| ) \right) \mathop{\mathrm{d\textit{x}}}\; - \int_{\Omega}F(x,u) \mathop{\mathrm{d\textit{x}}}.\end{aligned}$$ In particular, as we are interested in constant sign solutions, we consider the truncated functionals $\varphi_\pm \colon W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\to \mathbb{R}$ defined by $$\begin{aligned} \varphi_\pm (u) = \int_{\Omega}\left( \frac{\left|{\nabla u}\right|^{p(x)}}{p(x)} + \mu(x) \frac{\left|{ \nabla u}\right|^{q(x)}}{q(x)} \log( e + \left|{\nabla u}\right| ) \right) \mathop{\mathrm{d\textit{x}}}\; - \int_{\Omega}F(x, \pm u^\pm ) \mathop{\mathrm{d\textit{x}}}.\end{aligned}$$

**Remark 45**. *Note that by the second half of [\[Asf:GrowthZero\]](#Asf:GrowthZero){reference-type="eqref" reference="Asf:GrowthZero"}, $F(x, \pm t^\pm) = \int_0^t f(x, \pm s ^\pm) \mathop{\mathrm{d\textit{s}}}$ for a.a. $x \in \Omega$ and all $t \in \mathbb{R}$. This will allow us to use Lemma [Lemma 43](#Le:Propf){reference-type="ref" reference="Le:Propf"} [\[Propf:ConvergenceFterm\]](#Propf:ConvergenceFterm){reference-type="ref" reference="Propf:ConvergenceFterm"} on the truncated functionals, both for the differentiability and for the strong continuity.*

In the remaining part of this section we check the assumptions of the mountain pass theorem, see Theorem [Theorem 19](#Th:MPT){reference-type="ref" reference="Th:MPT"}, for the truncated functionals $\varphi_\pm$. We start by checking the compactness-type property. For this purpose, we first need the following lemma. Its proof is straightforward.

**Lemma 46**. *Let $Q>1$ and let $h \colon [0,\infty) \to [0,\infty)$ be given by $h(t) = \frac{t}{Q (e + t) \log(e + t) }$. Then $h$ attains its maximum value at $t_0$ and the value is $\frac{\kappa}{Q}$, where $t_0$ and $\kappa$ are the same as in Lemma [Lemma 22](#Le:fepsilon){reference-type="ref" reference="Le:fepsilon"}.*

**Proposition 47**. *Let [\[Assump:H2\]](#Assump:H2){reference-type="eqref" reference="Assump:H2"} be satisfied and $f$ fulfill [\[Asf:Caratheodory\]](#Asf:Caratheodory){reference-type="eqref" reference="Asf:Caratheodory"}, [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"}, [\[Asf:GrowthZero\]](#Asf:GrowthZero){reference-type="eqref" reference="Asf:GrowthZero"} and [\[Asf:CeramiAssumption\]](#Asf:CeramiAssumption){reference-type="eqref" reference="Asf:CeramiAssumption"}. Then the functionals $\varphi_\pm$ satisfy the [C]{.nodecor}-condition.*

*Proof.* Here we give the argument only for $\varphi_+$; the case of $\varphi_-$ is almost the same.
Let $C_1 > 0$ and $\{u_n\}_{n \in \mathbb{N}} \subseteq W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ be a sequence such that $$\begin{aligned} \left|{\varphi_+(u_n)}\right| & \leq C_1 \quad \text{for all } n \in \mathbb{N}, \label{Eq:C-bound} \\ \label{Eq:C-conv} (1 + \left\|u_n\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}) \varphi_+ ' (u_n) & \to 0 \quad \text{in } \left[ W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\right] ^*. \end{aligned}$$ From [\[Eq:C-conv\]](#Eq:C-conv){reference-type="eqref" reference="Eq:C-conv"} we know that there exists a sequence $\varepsilon_n \to 0$ such that for all $v \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ $$\label{Eq:C-convEps} \begin{aligned} & \left| \int_{\Omega}\left|{\nabla u_n}\right|^{p(x)-2}\nabla u_n \cdot \nabla v \mathop{\mathrm{d\textit{x}}}\right. \\ & \left. +\int_{\Omega}\mu(x) \left[ \log (e + \left|{\nabla u_n}\right| ) + \frac{\left|{\nabla u_n}\right|}{q(x) (e + \left|{\nabla u_n}\right|)} \right] \left|{\nabla u_n}\right|^{q(x)-2}\nabla u_n \cdot \nabla v \mathop{\mathrm{d\textit{x}}}\right. \\ & \left. - \int_{\Omega}f(x,u_n^+) v \mathop{\mathrm{d\textit{x}}}\right| \leq \frac{\varepsilon_n \left\|v\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}}{1 + \left\|u_n\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}} \quad \text{for all } n \in \mathbb{N}. \end{aligned}$$ For any $v \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ we know that $v^\pm \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ by Proposition [Proposition 29](#Prop:Truncations){reference-type="ref" reference="Prop:Truncations"}. In particular we can take $v = - u_n^- \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ in [\[Eq:C-convEps\]](#Eq:C-convEps){reference-type="eqref" reference="Eq:C-convEps"}. As the fraction in the brackets is nonnegative and $f(x,u_n^+)u_n^- = 0$ for a.a. $x \in \Omega$, we have $$\begin{aligned} & \ensuremath{\varrho_{1,\ensuremath{\mathcal{H}_{\log}},0} \left(u_n^-\right)} \\ & \leq \int_{\Omega}\left( \left|{\nabla u_n^-}\right|^{p(x)} + \mu(x) \left[ \log (e + \left|{\nabla u_n^-}\right| ) + \frac{\left|{\nabla u_n^-}\right|}{q(x) (e + \left|{\nabla u_n^-}\right|)} \right] \left|{\nabla u_n^-}\right|^{q(x)}\right) \mathop{\mathrm{d\textit{x}}}\\ & \leq \varepsilon_n \quad \text{for all } n \in \mathbb{N}, \end{aligned}$$ or equivalently (see Proposition [Proposition 25](#Prop:HlogModularNorm){reference-type="ref" reference="Prop:HlogModularNorm"} [(v)]{.nodecor}) $$\begin{aligned} \label{Eq:NegativeToZero} - u_n^- \to 0 \quad \text{in } W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega). \end{aligned}$$ **Claim:** There exists $M>0$ such that $\left\|u_n^+\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} \leq M$ for all $n \in \mathbb{N}$. 
By Lemma [Lemma 46](#Le:QuotientFracLog){reference-type="ref" reference="Le:QuotientFracLog"} and taking $v = u_n^+ \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ in [\[Eq:C-convEps\]](#Eq:C-convEps){reference-type="eqref" reference="Eq:C-convEps"} we have $$\begin{aligned} & - \left( 1 + \frac{\kappa}{q_-} \right) \ensuremath{\varrho_{1,\ensuremath{\mathcal{H}_{\log}},0} \left(u_n^+\right)} + \int_{\Omega}f(x,u_n^+) u_n^+ \mathop{\mathrm{d\textit{x}}}\\ & \leq - \int_{\Omega}\left( \left|{\nabla u_n^+}\right|^{p(x)} + \mu(x) \left[ \log (e + \left|{\nabla u_n^+}\right| ) + \frac{\left|{\nabla u_n^+}\right|}{q(x) (e + \left|{\nabla u_n^+}\right|)} \right] \left|{\nabla u_n^+}\right|^{q(x)}\right) \mathop{\mathrm{d\textit{x}}}\\ & \quad + \int_{\Omega}f(x,u_n^+) u_n^+ \mathop{\mathrm{d\textit{x}}} \leq \varepsilon_n \quad \text{for all } n \in \mathbb{N}. \end{aligned}$$ On the other hand, from [\[Eq:C-bound\]](#Eq:C-bound){reference-type="eqref" reference="Eq:C-bound"} and [\[Eq:NegativeToZero\]](#Eq:NegativeToZero){reference-type="eqref" reference="Eq:NegativeToZero"}, we also know that $$\begin{aligned} \left( 1 + \frac{\kappa}{q_-} \right) \ensuremath{\varrho_{1,\ensuremath{\mathcal{H}_{\log}},0} \left(u_n^+\right)} - \int_{\Omega}q_+ \left( 1 + \frac{\kappa}{q_-} \right) F(x,u_n^+) \mathop{\mathrm{d\textit{x}}}\leq C_2 \quad \text{for all } n \in \mathbb{N}. \end{aligned}$$ Adding both inequalities one gets $$\begin{aligned} \int_{\Omega}f(x,u_n^+) u_n^+ - q_+ \left( 1 + \frac{\kappa}{q_-} \right) F(x,u_n^+) \mathop{\mathrm{d\textit{x}}}\leq C_3 \quad \text{for all } n \in \mathbb{N}. \end{aligned}$$ By [\[Asf:CeramiAssumption\]](#Asf:CeramiAssumption){reference-type="eqref" reference="Asf:CeramiAssumption"}, where we assume, without loss of generality, that $l_- \leq \widetilde{l}_-$, there exist $C_4,C_5 > 0$ such that $$\begin{aligned} C_4 |s|^{l_-} - C_5 \leq f(x,s)s - q_+ \left( 1 + \frac{\kappa}{q_-} \right) F(x,s) \end{aligned}$$ for all $s \in \mathbb{R}$ and for a.a. $x \in \Omega$. The combination of the last two inequalities implies $$\begin{aligned} \label{Eq:BoundedL-} \left\|u_n^+\right\|_{l_-} \leq C_6 \quad \text{for all } n \in \mathbb{N}. \end{aligned}$$ Note that $l_- < r_+ < p^*_-$ because of [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"} and [\[Asf:CeramiAssumption\]](#Asf:CeramiAssumption){reference-type="eqref" reference="Asf:CeramiAssumption"}, so one can find $t \in (0,1)$ such that $$\begin{aligned} \frac{1}{r_+} = \frac{t}{p^*_-} + \frac{1-t}{l_-}. \end{aligned}$$ This puts us in a position to apply an interpolation inequality, such as the one in Proposition 2.3.17 of the book by Papageorgiou-Winkert [@Papageorgiou-Winkert-2018]. Together with [\[Eq:BoundedL-\]](#Eq:BoundedL-){reference-type="eqref" reference="Eq:BoundedL-"}, this gives us $$\begin{aligned} \left\|u^+_n\right\|_{r_+}^{r_+} \leq \left( \left\|u^+_n\right\|^{t}_{p^*_-} \left\|u^+_n\right\|^{1-t}_{l_-} \right)^{r_+} \leq C_6^{(1-t)r_+} \left\|u^+_n\right\|^{t r_+}_{p^*_-} \quad \text{for all } n \in \mathbb{N}. \end{aligned}$$ For simplicity, we consider the case that $\left\|u_n^+\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} \geq 1$ for all $n \in \mathbb{N}$, since for the remaining indices the norm is bounded by $1$ anyway.
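For the reader's convenience, we record the value of the interpolation parameter: solving the relation $\frac{1}{r_+} = \frac{t}{p^*_-} + \frac{1-t}{l_-}$ for $t$ gives, by a direct computation, $$\begin{aligned} t = \frac{p^*_- (r_+ - l_-)}{r_+ (p^*_- - l_-)} \qquad \text{and hence} \qquad t r_+ = \frac{p^*_- (r_+ - l_-)}{p^*_- - l_-}, \end{aligned}$$ which is the exponent to be compared with $p_-$ below.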
From Proposition [Proposition 25](#Prop:HlogModularNorm){reference-type="ref" reference="Prop:HlogModularNorm"} [(iii)]{.nodecor}, and then by [\[Eq:C-convEps\]](#Eq:C-convEps){reference-type="eqref" reference="Eq:C-convEps"} with $v = u_n^+ \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ and [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"} we have $$\begin{aligned} \left\|u^+_n\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{p_-} \leq \ensuremath{\varrho_{1,\ensuremath{\mathcal{H}_{\log}},0} \left( u^+_n \right)} \leq C_7 (1 + \left\|u_n^+\right\|_1 + \left\|u^+_n\right\|^{r_+}_{r_+}) \quad \text{for all } n \in \mathbb{N}. \end{aligned}$$ These last two inequalities together with the embeddings (see Proposition [Proposition 28](#Prop:EmbeddingHlogSobolev){reference-type="ref" reference="Prop:EmbeddingHlogSobolev"} (i)) $$\begin{aligned} W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\hookrightarrow W^{1,p_-}(\Omega) \hookrightarrow L^{p_-^*}(\Omega), \qquad L^{r_+}(\Omega) \hookrightarrow L^{1}(\Omega) \end{aligned}$$ yield $$\begin{aligned} \left\|u^+_n\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{p_-} \leq C_9 \left( 1 + \left\|u^+_n\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{tr_+} \right) \quad \text{for all } n \in \mathbb{N}. \end{aligned}$$ From the interval assumption in [\[Asf:CeramiAssumption\]](#Asf:CeramiAssumption){reference-type="eqref" reference="Asf:CeramiAssumption"}, we know that $$\begin{aligned} tr_+ & = \frac{p^*_-(r_+ - l_-)}{p^*_- - l_-} = \frac{N p_-(r_+ - l_-)}{Np_- - N l_- + p_- l_-} \\ & < \frac{N p_-(r_+ - l_-)}{Np_- - N l_- + p_- (r_+ - p_-) \frac{N}{p_-}} = p_-, \end{aligned}$$ thus there exists $M>0$ such that $\left\|u^+_n\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} \leq M$ for all $n \in \mathbb{N}$ and this completes the proof of the Claim. The boundedness of $\{u_n\}_{n \in \mathbb{N}}$ in $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ is established by [\[Eq:NegativeToZero\]](#Eq:NegativeToZero){reference-type="eqref" reference="Eq:NegativeToZero"} and the previous Claim. Hence there is a subsequence $\{u_{n_k}\}_{k \in \mathbb{N}}$ such that $$\begin{aligned} u_{n_k} \rightharpoonup u \quad \text{in } W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega). \end{aligned}$$ Now taking $v = u_{n_k} - u \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ in [\[Eq:C-convEps\]](#Eq:C-convEps){reference-type="eqref" reference="Eq:C-convEps"}, we know that $$\begin{aligned} \lim\limits_{k \to \infty} \langle \varphi_+ ' (u_{n_k}) , u_{n_k} - u \rangle = 0, \end{aligned}$$ and from the weak convergence of $\{u_{n_k}\}_{k \in \mathbb{N}}$ and Lemma [Lemma 43](#Le:Propf){reference-type="ref" reference="Le:Propf"} [\[Propf:ConvergenceFterm\]](#Propf:ConvergenceFterm){reference-type="ref" reference="Propf:ConvergenceFterm"} (check remark [Remark 45](#Re:StrongCont){reference-type="ref" reference="Re:StrongCont"}) we obtain $$\begin{aligned} \lim\limits_{k \to \infty} \int_{\Omega}f(x,u_{n_k}^+) ( u_{n_k} - u ) \mathop{\mathrm{d\textit{x}}}= 0. \end{aligned}$$ The last two limits together yield $$\begin{aligned} \lim\limits_{k \to \infty} \langle A(u_{n_k}) , u_{n_k} - u \rangle = 0 \end{aligned}$$ and the [(S$_+$)]{.nodecor}-property of the operator $A$ (see Theorem [Theorem 39](#Th:PropertiesOperator){reference-type="ref" reference="Th:PropertiesOperator"}) implies that $$\begin{aligned} u_{n_k} \to u \quad \text{in } W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega). 
\end{aligned}$$ ◻ Now we have to check the mountain pass geometry. **Proposition 48**. *Let [\[Assump:H2\]](#Assump:H2){reference-type="eqref" reference="Assump:H2"} be satisfied and $f$ fulfill [\[Asf:Caratheodory\]](#Asf:Caratheodory){reference-type="eqref" reference="Asf:Caratheodory"}, [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"} and [\[Asf:GrowthZero\]](#Asf:GrowthZero){reference-type="eqref" reference="Asf:GrowthZero"}. Then there exist constants $C_1,C_2,C_3 >0$ such that for all $\varepsilon> 0$ $$\begin{aligned} \varphi(u), \varphi_\pm (u) \geq \begin{cases} C_1 a_\varepsilon^{-1} \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{q_+ + \varepsilon} - C_2 \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{r_-}, & \text{if } \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} \leq \min\{1,C_3\}, \\ C_1 \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{p_-} - C_2 \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{r_+}, & \text{if } \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} \geq \max\{1,C_3\}, \end{cases} \end{aligned}$$ where $a_\varepsilon$ is the same one as in Lemma [Lemma 22](#Le:fepsilon){reference-type="ref" reference="Le:fepsilon"}.* *Proof.* We do the argument only for $\varphi$, as for $\varphi_\pm$ we can use $\varrho_{r(\cdot)} ( \pm u^\pm ) \leq \varrho_{r(\cdot)} ( u )$. Applying Lemma [Lemma 43](#Le:Propf){reference-type="ref" reference="Le:Propf"} [\[Propf:EpsilonUpperBoundF\]](#Propf:EpsilonUpperBoundF){reference-type="ref" reference="Propf:EpsilonUpperBoundF"}, the embedding of $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\hookrightarrow L^{r(\cdot)}(\Omega)$ with constant $C_{\ensuremath{\mathcal{H}_{\log}}}$ from Proposition [Proposition 28](#Prop:EmbeddingHlogSobolev){reference-type="ref" reference="Prop:EmbeddingHlogSobolev"} [(iii)]{.nodecor} and Proposition [Proposition 1](#Prop:ModularNormVarExp){reference-type="ref" reference="Prop:ModularNormVarExp"} [(iii)]{.nodecor}, [(iv)]{.nodecor}, we have for any $u \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ $$\begin{aligned} \varphi(u) & \geq \frac{1}{q_+} \ensuremath{\varrho_{1,\ensuremath{\mathcal{H}_{\log}},0} \left(u\right)} - C \varrho_{r(\cdot)} ( u ) \\ & \geq \frac{1}{q_+} \ensuremath{\varrho_{1,\ensuremath{\mathcal{H}_{\log}},0} \left(u\right)} - C_\varepsilon\max_{k \in \{ r_+, r_- \} } \{ C_{\ensuremath{\mathcal{H}_{\log}}} ^k \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^k \}. \end{aligned}$$ Choosing $C_3 = 1/C_{\ensuremath{\mathcal{H}_{\log}}}$, the result follows from Proposition [Proposition 25](#Prop:HlogModularNorm){reference-type="ref" reference="Prop:HlogModularNorm"} [(iii)]{.nodecor} and [(iv)]{.nodecor} with $$\begin{aligned} C_1 = \frac{1}{q_+} \quad \text{and}\quad C_2 = \begin{cases} C_\varepsilon C_{\ensuremath{\mathcal{H}_{\log}}}^{r_-} & \text{for } \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} \leq C_3, \\ C_\varepsilon C_{\ensuremath{\mathcal{H}_{\log}}}^{r_+} & \text{for } \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} > C_3. \end{cases} \end{aligned}$$ ◻ **Corollary 49**. *Let [\[Assump:H2\]](#Assump:H2){reference-type="eqref" reference="Assump:H2"} be satisfied and $f$ fulfill [\[Asf:Caratheodory\]](#Asf:Caratheodory){reference-type="eqref" reference="Asf:Caratheodory"}, [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"} with $q_+<r_-$ and [\[Asf:GrowthZero\]](#Asf:GrowthZero){reference-type="eqref" reference="Asf:GrowthZero"}. 
Then there exists $\delta >0$ such that $$\begin{aligned} \inf_{\left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} = \delta} \,\varphi(u) >0 \quad \text{and} \quad \inf_{\left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} = \delta} \,\varphi_\pm(u) > 0. \end{aligned}$$ Moreover, there exists $\delta' > 0$ such that $\varphi(u) > 0$ for $0 < \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} < \delta'$.*

**Proposition 50**. *Let [\[Assump:H2\]](#Assump:H2){reference-type="eqref" reference="Assump:H2"} be satisfied and $f$ fulfill [\[Asf:Caratheodory\]](#Asf:Caratheodory){reference-type="eqref" reference="Asf:Caratheodory"}, [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"} and [\[Asf:GrowthInfty\]](#Asf:GrowthInfty){reference-type="eqref" reference="Asf:GrowthInfty"}. Then for any $0 \neq u \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ we have $\varphi(tu) \xrightarrow{t \to \pm \infty} - \infty$. Furthermore, if $u \geq 0$ a.e. in $\Omega$, then $\varphi_\pm (tu) \xrightarrow{t \to \pm \infty} - \infty$.*

*Proof.* As before, we only show the argument for $\varphi$. We can do this because if $u \geq 0$ a.e. in $\Omega$, then $\varphi_\pm (tu) = \varphi(tu)$ for $\pm t > 0$. Fix any $0 \neq u \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ and let $t, \varepsilon\geq 1$. First note that $\left\|u\right\|_{q_+} < \infty$ due to Proposition [Proposition 28](#Prop:EmbeddingHlogSobolev){reference-type="ref" reference="Prop:EmbeddingHlogSobolev"} [(iii)]{.nodecor} and Lemma [Lemma 43](#Le:Propf){reference-type="ref" reference="Le:Propf"} [\[Propf:q\<r\]](#Propf:q<r){reference-type="ref" reference="Propf:q<r"}. From [\[Eq:LogGrowthProduct\]](#Eq:LogGrowthProduct){reference-type="eqref" reference="Eq:LogGrowthProduct"} and Lemma [Lemma 43](#Le:Propf){reference-type="ref" reference="Le:Propf"} [\[Propf:EpsilonLowerBoundF\]](#Propf:EpsilonLowerBoundF){reference-type="ref" reference="Propf:EpsilonLowerBoundF"} we obtain $$\begin{aligned} \varphi(tu) & \leq \frac{\left|{t}\right|^{p_+}}{p_-} \varrho_{p(\cdot)} ( \nabla u ) + \frac{\left|{t}\right|^{q_+}}{q_-} \log (e + \left|{t}\right|) \int_{\Omega}\mu(x) \left|{\nabla u}\right|^{q(x)} \mathop{\mathrm{d\textit{x}}}\\ & \quad + \frac{\left|{t}\right|^{q_+}}{q_-} \int_{\Omega}\mu(x) \left|{\nabla u}\right|^{q(x)} \log(e + \left|{\nabla u}\right|) \mathop{\mathrm{d\textit{x}}}\\ & \quad - \frac{\varepsilon\left|{t}\right|^{q_+}}{q_+} \log (e + \left|{t}\right|) \left\|u\right\|_{q_+}^{q_+} + C_\varepsilon|\Omega| \\ & = \frac{\left|{t}\right|^{p_+}}{p_-} \varrho_{p(\cdot)} ( \nabla u ) + \left|{t}\right|^{q_+} \log (e + \left|{t}\right|) \left( \frac{ \int_{\Omega}\mu(x) \left|{\nabla u}\right|^{q(x)} \mathop{\mathrm{d\textit{x}}}}{q_-}- \varepsilon\frac{ \left\|u\right\|_{q_+}^{q_+} }{q_+} \right)\\ & \quad + \frac{\left|{t}\right|^{q_+}}{q_-} \int_{\Omega}\mu(x) \left|{\nabla u}\right|^{q(x)} \log(e + \left|{\nabla u}\right|) \mathop{\mathrm{d\textit{x}}}+ C_\varepsilon|\Omega|. \end{aligned}$$ The second term is negative for $\varepsilon$ large enough, since $\left\|u\right\|_{q_+} > 0$. Fixing such a value of $\varepsilon$ and using that $p_+ \leq q_+$, the right-hand side tends to $-\infty$ as $t \to \pm \infty$, and hence $\varphi(tu) \xrightarrow{t \to \pm \infty} - \infty$. ◻

Now we have all the required properties to apply the mountain pass theorem.

**Theorem 51**.
*Let [\[Assump:H2\]](#Assump:H2){reference-type="eqref" reference="Assump:H2"} be satisfied and $f$ fulfill [\[Asf:Caratheodory\]](#Asf:Caratheodory){reference-type="eqref" reference="Asf:Caratheodory"}, [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"}, [\[Asf:GrowthInfty\]](#Asf:GrowthInfty){reference-type="eqref" reference="Asf:GrowthInfty"}, [\[Asf:GrowthZero\]](#Asf:GrowthZero){reference-type="eqref" reference="Asf:GrowthZero"} and [\[Asf:CeramiAssumption\]](#Asf:CeramiAssumption){reference-type="eqref" reference="Asf:CeramiAssumption"}. Then there exist nontrivial weak solutions $u_0,v_0 \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ of problem [\[Eq:Problem\]](#Eq:Problem){reference-type="eqref" reference="Eq:Problem"} such that $u_0 \geq 0$ and $v_0 \leq 0$ a.e. in $\Omega$.*

*Proof.* Due to the combination of Corollary [Corollary 49](#Cor:RingOfMountains){reference-type="ref" reference="Cor:RingOfMountains"} and Propositions [Proposition 50](#Prop:PointBeyondMountains){reference-type="ref" reference="Prop:PointBeyondMountains"} and [Proposition 47](#Prop:CeramiCondition){reference-type="ref" reference="Prop:CeramiCondition"}, the assumptions of Theorem [Theorem 19](#Th:MPT){reference-type="ref" reference="Th:MPT"} are satisfied for the truncated energy functionals $\varphi_\pm$. This yields the existence of $u_0,v_0 \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ such that $\varphi_+'(u_0) = 0 = \varphi_-'(v_0)$ and $$\begin{aligned} \varphi_+(u_0), \varphi_-(v_0) \geq \inf_{\left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} = \delta} \,\varphi_\pm(u) > 0 = \varphi_+ (0), \end{aligned}$$ which proves that $u_0 \neq 0 \neq v_0$. Furthermore, testing $\varphi_+'(u_0) = 0$ with $-u_0^-$ we obtain $\ensuremath{\varrho_{1,\ensuremath{\mathcal{H}_{\log}},0} \left(u_0^-\right)} = 0$, which by Proposition [Proposition 25](#Prop:HlogModularNorm){reference-type="ref" reference="Prop:HlogModularNorm"} [(i)]{.nodecor} means that $-u_0^- = 0$ a.e. in $\Omega$, so $u_0 = u_0^+ \geq 0$ a.e. in $\Omega$. Analogously, $v_0 \leq 0$ a.e. in $\Omega$. ◻

Alternatively, we could have used the following assumptions instead of [\[Assump:H2\]](#Assump:H2){reference-type="eqref" reference="Assump:H2"} and [\[Asf:GrowthZero\]](#Asf:GrowthZero){reference-type="eqref" reference="Asf:GrowthZero"}.

1. [\[Assump:H2prime\]]{#Assump:H2prime label="Assump:H2prime"} $\Omega \subseteq \mathbb{R}^N$, with $N \geq 2$, is a bounded domain with Lipschitz boundary $\partial \Omega$, $p,q \in C_+ (\overline{\Omega})$ with $p(x) \leq q(x) \leq q_+ < p^*_-$ for all $x \in \overline{\Omega}$, $p$ satisfies a monotonicity condition, that is, there exists a vector $l \in \mathbb{R}^N \setminus \{ 0 \}$ such that for all $x \in \Omega$ the function $$\begin{aligned} h_x (t) = p(x + tl) \quad \text{with } t \in I_x = \{ t \in \mathbb{R}\,:\, x + tl \in \Omega\} \end{aligned}$$ is monotone, and $\mu \in L^{\infty}(\Omega)$.

4. [\[Asf:GrowthZeroAlt\]]{#Asf:GrowthZeroAlt label="Asf:GrowthZeroAlt"} $$\begin{aligned} \lim\limits_{s \to 0} \frac{F(x,s)}{\left|{s}\right|^{p(x)}} = 0 \quad \text{uniformly for a.a.\,} x \in \Omega. \end{aligned}$$

The new assumption [\[Asf:GrowthZeroAlt\]](#Asf:GrowthZeroAlt){reference-type="eqref" reference="Asf:GrowthZeroAlt"} has a slightly different consequence than its counterpart earlier in this section.

**Lemma 52**. *Let $f \colon \Omega \times \mathbb{R}\to \mathbb{R}$.*

1.
*[\[Propf:Zeroatzero\]]{#Propf:Zeroatzero label="Propf:Zeroatzero"} If $f$ fulfills [\[Asf:Caratheodory\]](#Asf:Caratheodory){reference-type="eqref" reference="Asf:Caratheodory"} and [\[Asf:GrowthZeroAlt\]](#Asf:GrowthZeroAlt){reference-type="eqref" reference="Asf:GrowthZeroAlt"}, then $f(x,0)=0$ for a.a. $x \in \Omega$.* 2. *[\[Propf:EpsilonUpperBoundAlt\]]{#Propf:EpsilonUpperBoundAlt label="Propf:EpsilonUpperBoundAlt"} If $f$ fulfills [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"} and [\[Asf:GrowthZeroAlt\]](#Asf:GrowthZeroAlt){reference-type="eqref" reference="Asf:GrowthZeroAlt"}, then for each $\varepsilon> 0$ there exists $C_\varepsilon> 0$ such that $$\begin{aligned} \left|{F(x,t)}\right| \leq \frac{\varepsilon}{p(x)} \left|{t}\right|^{p(x)} + C_\varepsilon\left|{t}\right|^{r(x)} \quad \text{for a.a.\,} x \in \Omega. \end{aligned}$$* **Remark 53**. *Without the assumption [\[Asf:GrowthZero\]](#Asf:GrowthZero){reference-type="eqref" reference="Asf:GrowthZero"} one needs to prove a result like Lemma [Lemma 52](#Le:PropLimftozero){reference-type="ref" reference="Le:PropLimftozero"} [\[Propf:Zeroatzero\]](#Propf:Zeroatzero){reference-type="ref" reference="Propf:Zeroatzero"}, since this condition is necessary to ensure that $\varphi_\pm$ are differentiable among other important properties, see Remark [Remark 45](#Re:StrongCont){reference-type="ref" reference="Re:StrongCont"}.* In this case, all the propositions are identical except for the behavior close to zero, for which we provide the following proof. **Proposition 54**. *Let [\[Assump:H2prime\]](#Assump:H2prime){reference-type="eqref" reference="Assump:H2prime"} be satisfied and $f$ fulfill [\[Asf:Caratheodory\]](#Asf:Caratheodory){reference-type="eqref" reference="Asf:Caratheodory"}, [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"} and [\[Asf:GrowthZeroAlt\]](#Asf:GrowthZeroAlt){reference-type="eqref" reference="Asf:GrowthZeroAlt"}. Then there exist constants $C_1,C_2,C_3 >0$ such that for all $\varepsilon> 0$ $$\begin{aligned} \varphi(u), \varphi_\pm (u) \geq \begin{cases} C_1 a_\varepsilon^{-1} \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{q_+ + \varepsilon} - C_2 \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{r_-}, & \text{if } \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} \leq \min\{1,C_3\}, \\ C_1 \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{p_-} - C_2 \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{r_+}, & \text{if } \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} \geq \max\{1,C_3\}, \end{cases} \end{aligned}$$ where $a_\varepsilon$ is the same as in Lemma [Lemma 22](#Le:fepsilon){reference-type="ref" reference="Le:fepsilon"}.* *Proof.* As in Proposition [Proposition 48](#Prop:PhLowerBound){reference-type="ref" reference="Prop:PhLowerBound"} we only do the argument for $\varphi$ for the same reasons. 
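Before starting the estimate, we record how the right-hand side will be controlled: integrating the bound of Lemma [Lemma 52](#Le:PropLimftozero){reference-type="ref" reference="Le:PropLimftozero"} [\[Propf:EpsilonUpperBoundAlt\]](#Propf:EpsilonUpperBoundAlt){reference-type="ref" reference="Propf:EpsilonUpperBoundAlt"} over $\Omega$, applied with $\varepsilon = \lambda$ for a parameter $\lambda > 0$ to be fixed at the end of the proof, gives $$\begin{aligned} \int_{\Omega}F(x,u) \mathop{\mathrm{d\textit{x}}}\leq \frac{\lambda}{p_-} \varrho_{p(\cdot)} ( u ) + C_{\lambda} \varrho_{r(\cdot)} ( u ) \quad \text{for all } u \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega), \end{aligned}$$ which is the estimate used in the first inequality below.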
Using Lemma [Lemma 52](#Le:PropLimftozero){reference-type="ref" reference="Le:PropLimftozero"} [\[Propf:EpsilonUpperBoundAlt\]](#Propf:EpsilonUpperBoundAlt){reference-type="ref" reference="Propf:EpsilonUpperBoundAlt"}, Poincaré inequality for the modular in $W_0^{1,p(\cdot)} (\Omega)$ with constant $C_{p(\cdot)}$ from Proposition [Proposition 3](#Prop:PoincareModular){reference-type="ref" reference="Prop:PoincareModular"}, the embedding of $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\hookrightarrow L^{r(\cdot)}(\Omega)$ with constant $C_{\ensuremath{\mathcal{H}_{\log}}}$ from Proposition [Proposition 28](#Prop:EmbeddingHlogSobolev){reference-type="ref" reference="Prop:EmbeddingHlogSobolev"} [(iii)]{.nodecor} and Proposition [Proposition 1](#Prop:ModularNormVarExp){reference-type="ref" reference="Prop:ModularNormVarExp"} [(iii)]{.nodecor}, [(iv)]{.nodecor}, we obtain for any $u \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ $$\begin{aligned} & \varphi(u) \\ & \geq \frac{1}{p_+} \varrho_{p(\cdot)} ( \nabla u ) + \frac{1}{q_+} \int_{\Omega}\mu(x) \left|{\nabla u}\right|^{q(x)} \log(e + \left|{\nabla u}\right|) \mathop{\mathrm{d\textit{x}}} - \frac{\lambda}{p_-} \varrho_{p(\cdot)} ( u ) - C_{\lambda} \varrho_{r(\cdot)} ( u ) \\ & \geq \left( \frac{1}{p_+} - \frac{C_{p(\cdot)} \lambda}{p_-} \right) \varrho_{p(\cdot)} ( \nabla u ) +\frac{1}{q_+} \int_{\Omega}\mu(x) \left|{\nabla u}\right|^{q(x)} \log(e + \left|{\nabla u}\right|) \mathop{\mathrm{d\textit{x}}}\\ & \quad - C_{\lambda} \max_{k \in \{ r_+, r_- \} } \{ C_{\ensuremath{\mathcal{H}_{\log}}} ^k \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^k \} \\ & \geq \min \left\{ \frac{1}{p_+} - \frac{C_{p(\cdot)} \lambda}{p_-} , \frac{1}{q_+} \right\} \ensuremath{\varrho_{1,\ensuremath{\mathcal{H}_{\log}},0} \left(u\right)} - C_{\lambda} \max_{k \in \{ r_+, r_- \} } \{ C_{\ensuremath{\mathcal{H}_{\log}}} ^k \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^k \}. \end{aligned}$$ If $p_+ = q_+$, any $\lambda > 0$ works, otherwise choose $0 < \lambda < \frac{p_-(q_+-p_+)}{C_{p(\cdot)} q_+ p_+}$. By Proposition [Proposition 25](#Prop:HlogModularNorm){reference-type="ref" reference="Prop:HlogModularNorm"} [(iii)]{.nodecor} and [(iv)]{.nodecor} the result follows with $C_3 = 1/C_{\ensuremath{\mathcal{H}_{\log}}}$, $$\begin{aligned} C_1 = \frac{1}{q_+} \quad\text{and}\quad C_2 = \begin{cases} C_{\lambda} C_{\ensuremath{\mathcal{H}_{\log}}}^{r_-} & \text{for } \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} \leq C_3, \\ C_{\lambda} C_{\ensuremath{\mathcal{H}_{\log}}}^{r_+} & \text{for } \left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} > C_3. \end{cases} \end{aligned}$$ ◻ With a reasoning identical to Theorem [Theorem 51](#Th:ConstantSignSolutions){reference-type="ref" reference="Th:ConstantSignSolutions"} but using Proposition [Proposition 54](#Prop:PhLowerBoundAlt){reference-type="ref" reference="Prop:PhLowerBoundAlt"}, we obtain the next result. **Theorem 55**. 
*Let [\[Assump:H2prime\]](#Assump:H2prime){reference-type="eqref" reference="Assump:H2prime"} be satisfied and $f$ fulfill [\[Asf:Caratheodory\]](#Asf:Caratheodory){reference-type="eqref" reference="Asf:Caratheodory"}, [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"}, [\[Asf:GrowthInfty\]](#Asf:GrowthInfty){reference-type="eqref" reference="Asf:GrowthInfty"}, [\[Asf:GrowthZeroAlt\]](#Asf:GrowthZeroAlt){reference-type="eqref" reference="Asf:GrowthZeroAlt"} and [\[Asf:CeramiAssumption\]](#Asf:CeramiAssumption){reference-type="eqref" reference="Asf:CeramiAssumption"}. Then there exist nontrivial weak solutions $u_0,v_0 \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ of problem [\[Eq:Problem\]](#Eq:Problem){reference-type="eqref" reference="Eq:Problem"} such that $u_0 \geq 0$ and $v_0 \leq 0$ a.e. in $\Omega$.*

# Sign-changing solution

The aim of this section is to prove the existence of a sign-changing solution, in addition to the two constant sign solutions obtained in the previous section, one of which is nonnegative and the other nonpositive. We need to replace [\[Asf:GrowthInfty\]](#Asf:GrowthInfty){reference-type="eqref" reference="Asf:GrowthInfty"} by a stronger assumption and to restrict the assumptions on the functional space a bit more compared with [\[Assump:H2prime\]](#Assump:H2prime){reference-type="eqref" reference="Assump:H2prime"}:

1. [\[Assump:H3\]]{#Assump:H3 label="Assump:H3"} $\Omega \subseteq \mathbb{R}^N$, with $N \geq 2$, is a bounded domain with Lipschitz boundary $\partial \Omega$, $p,q \in C_+ (\overline{\Omega})$ with $p(x) \leq q(x) \leq q_+ < q_+ + 1 < p_-^*$ for all $x \in \overline{\Omega}$, $p$ satisfies a monotonicity condition, that is, there exists a vector $l \in \mathbb{R}^N \setminus \{ 0 \}$ such that for all $x \in \Omega$ the function $$\begin{aligned} h_x (t) = p(x + tl) \quad \text{ with } t \in I_x = \{ t \in \mathbb{R}\,:\, x + tl \in \Omega\} \end{aligned}$$ is monotone, and $\mu \in L^{\infty}(\Omega)$.

3. [\[Asf:QuotientMono\]]{#Asf:QuotientMono label="Asf:QuotientMono"} The function $t \mapsto f(x,t)/\left|{t}\right|^{q_+}$ is increasing in $(- \infty, 0)$ and in $(0, + \infty)$ for a.a. $x \in \Omega$.

**Remark 56**. *A necessary condition for [\[Assump:H3\]](#Assump:H3){reference-type="eqref" reference="Assump:H3"} to be satisfied is that $p_+ + 1 \leq p_-^*$. For the case of constant exponents, which is the least restrictive case, this is equivalent to $\frac{- 1 + \sqrt{1 + 4 N}}{2} \leq p$. This strong assumption is required because of [\[Asf:QuotientMono\]](#Asf:QuotientMono){reference-type="eqref" reference="Asf:QuotientMono"}.*

Similarly to the previous sections, we can prove some important consequences of these assumptions on the right-hand side.

**Lemma 57**. *Let $f \colon \Omega \times \mathbb{R}\to \mathbb{R}$. Then the following hold.*

1. *If $f$ fulfills [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"} and [\[Asf:QuotientMono\]](#Asf:QuotientMono){reference-type="eqref" reference="Asf:QuotientMono"}, then $q_+ + 1 \leq r_-$.*

2. *[\[Propf:growthq+1\]]{#Propf:growthq+1 label="Propf:growthq+1"} If $f$ fulfills [\[Asf:QuotientMono\]](#Asf:QuotientMono){reference-type="eqref" reference="Asf:QuotientMono"}, then for any $\varepsilon> 0$ $$\begin{aligned} \lim\limits_{s \to \pm \infty} \frac{F(x,s)}{\left|{s}\right|^{q_+ + 1 - \varepsilon}} = + \infty \quad \text{uniformly for a.a.\,} x \in \Omega.
\end{aligned}$$ In particular, [\[Asf:GrowthInfty\]](#Asf:GrowthInfty){reference-type="eqref" reference="Asf:GrowthInfty"} is satisfied.*

**Remark 58**. *Since in [\[Assump:H3\]](#Assump:H3){reference-type="eqref" reference="Assump:H3"} we assumed $q_+ + 1 < p^*_-$, we can always find at least one $r \in C_+(\overline{\Omega})$ such that $q_+ + 1 \leq r_- \leq r_+ < p^*_-$.*

We establish the existence of such a sign-changing solution by making use of the Nehari manifold technique. Our treatment is inspired by the recent work of Crespo-Blanco-Winkert [@Crespo-Blanco-Winkert-2022]; a more comprehensive explanation of this technique can be found in the book chapter by Szulkin-Weth [@Szulkin-Weth-2010]. The Nehari manifold associated to $\varphi$ is the set $$\begin{aligned} \mathcal{N} = \left\{ u \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\,:\, \langle \varphi'(u) , u \rangle = 0, \; u \neq 0 \right\}.\end{aligned}$$ A significant property of this set is that all weak solutions of [\[Eq:Problem\]](#Eq:Problem){reference-type="eqref" reference="Eq:Problem"} (that is, all critical points of $\varphi$) except $u=0$ belong to it; the trivial solution $u=0$ can be studied separately. An important remark is that, although the most common name in the literature is Nehari manifold, it does not have to be a manifold in general. For our purposes this is not relevant, so we do not pursue this point. We are looking for sign-changing solutions, so at the center of our results we work with the variant $$\begin{aligned} \mathcal{N}_0 = \left\{ u \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\,:\, \pm u^\pm \in \mathcal{N} \right\}.\end{aligned}$$ Note that in our case $\mathcal{N}_0$ is a subset of $\mathcal{N}$, but this might not be true in general. Our treatment starts by establishing some structure on $\mathcal{N}$ which is needed for the work on $\mathcal{N}_0$. The following lemma is used in the proof and is the main reason why we need [\[Asf:QuotientMono\]](#Asf:QuotientMono){reference-type="eqref" reference="Asf:QuotientMono"} with the exponent $q_+$ instead of $q_+ - 1$ (which would be much less restrictive).

**Lemma 59**. *Let $b > 0$ and $Q > 1$. The mapping $t \mapsto \frac{ t^{1-\varepsilon} b }{Q (e + t b)}$ is decreasing on $(0,\infty)$ if and only if $\varepsilon\geq 1$.*

**Proposition 60**. *Let [\[Assump:H3\]](#Assump:H3){reference-type="eqref" reference="Assump:H3"} be satisfied and $f$ fulfill [\[Asf:Caratheodory\]](#Asf:Caratheodory){reference-type="eqref" reference="Asf:Caratheodory"}, [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"}, [\[Asf:QuotientMono\]](#Asf:QuotientMono){reference-type="eqref" reference="Asf:QuotientMono"} and [\[Asf:GrowthZeroAlt\]](#Asf:GrowthZeroAlt){reference-type="eqref" reference="Asf:GrowthZeroAlt"}. Then for any $u \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\setminus\{0\}$ there exists a unique $t_u > 0$ such that $t_u u \in \mathcal{N}$. Furthermore, $\varphi(t_u u) > 0$, $\frac{d}{dt} \varphi(tu) > 0$ for $0 < t < t_u$, $\frac{d}{dt} \varphi(tu) = 0$ for $t = t_u$, $\frac{d}{dt} \varphi(tu) < 0$ for $t > t_u$, and therefore $\varphi(tu) < \varphi(t_u u)$ for all $0 < t \neq t_u$.*

*Proof.* For any $u \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\setminus \{0\}$ we define its associated fibering function $\theta_u \colon [0,\infty) \to \mathbb{R}$ as $\theta_u(t) = \varphi(tu)$. This function is $C^1$ in $(0,\infty)$ and continuous in $[0,\infty)$ because it is the composition of such functions.
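For later use we record its derivative explicitly: for $t > 0$, a direct computation from the definition of $\varphi$ (equivalently, the chain rule $\theta_u ' (t) = \langle \varphi'(tu) , u \rangle$) gives $$\begin{aligned} \theta_u ' (t) = \int_{\Omega}\left( t^{p(x)-1} \left|{\nabla u}\right|^{p(x)} + \mu(x)\, t^{q(x)-1} \left|{\nabla u}\right|^{q(x)} \left[ \log (e + t \left|{\nabla u}\right| ) + \frac{t \left|{\nabla u}\right|}{q(x) (e + t \left|{\nabla u}\right|)} \right] - f(x,tu) u \right) \mathop{\mathrm{d\textit{x}}}. \end{aligned}$$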
Clearly $\theta_u(0) = 0$ and from Propositions [Proposition 54](#Prop:PhLowerBoundAlt){reference-type="ref" reference="Prop:PhLowerBoundAlt"} and [Proposition 50](#Prop:PointBeyondMountains){reference-type="ref" reference="Prop:PointBeyondMountains"} we know that there exist $\delta,K > 0$ with the properties $$\begin{aligned} \label{Eq:FiberingZeroInfty} \theta_u (t) > 0 \quad \text{for } 0 < t < \delta \quad \text{and} \quad \theta_u(t) < 0 \quad \text{for } t > K. \end{aligned}$$ Then the extreme value theorem implies that the global maximum of $\theta_u$ is attained at some $t_u \in (0,K]$. Furthermore, it is a critical point of $\theta_u$ and by the chain rule we know $$\begin{aligned} 0 = \theta_u ' (t_u) = \langle \varphi'(t_u u) , u \rangle, \end{aligned}$$ which proves that $t_u u \in \mathcal{N}$. In order to see the uniqueness and the sign of the derivatives, observe that for $t > 0$ and due to [\[Asf:QuotientMono\]](#Asf:QuotientMono){reference-type="eqref" reference="Asf:QuotientMono"}, it holds $$\begin{aligned} \frac{f(x,tu)}{t^{q_+}\left|{u}\right|^{q_+}} \text{ increasing in $t$, so } \frac{f(x,tu)u}{t^{q_+}} \text{ increasing in $t$} & \text{, for } x \in \Omega \text{ with } u(x) > 0, \\ \frac{f(x,tu)}{t^{q_+}\left|{u}\right|^{q_+}} \text{ decreasing in $t$, so } \frac{f(x,tu)u}{t^{q_+}} \text{ increasing in $t$} & \text{, for } x \in \Omega \text{ with } u(x) < 0. \end{aligned}$$ Similarly to above, we know that $tu \in \mathcal{N}$ implies $u \neq 0$ and $\theta_u '(t) = 0$. By multiplying this equation with $1/t^{q_+}$ it follows $$\begin{aligned} & \int_{\Omega}\left( \frac{ 1 } {t^{q_+ + 1 - p(x)}} \left|{\nabla u}\right| ^{p(x)}\right. \\ & \left.\qquad+ \frac{1}{ t^{q_+ - q(x)} } \mu(x) \left|{\nabla u}\right|^{q(x)} \left[ \frac{\log (e + t \left|{\nabla u}\right| )}{t} + \frac{\left|{\nabla u}\right|}{q(x) (e + t \left|{\nabla u}\right|)} \right] \right. \\ & \qquad \left. - \frac{f(x,tu)u}{t^{q_+ }} \right) \mathop{\mathrm{d\textit{x}}}= 0. \end{aligned}$$ On the set $\{ \nabla u \neq 0 \}$, the first term is strictly decreasing since $p_+ < q_+ + 1$, the second term is decreasing by Lemmas [Lemma 22](#Le:fepsilon){reference-type="ref" reference="Le:fepsilon"} and [Lemma 59](#Le:QuotientMonotone){reference-type="ref" reference="Le:QuotientMonotone"}, and the third one is decreasing by the observation just above it. We know that $u \neq 0$, so the whole left-hand side of the equation is strictly decreasing as a function of $t$, which means that there can be at most one value $t_u>0$ such that $\theta_u'(t_u) = 0$, i.e. $t_u u \in \mathcal{N}$. Furthermore, $\theta_u'(t)$ cannot take value $0$ anywhere else, so it has constant sign on $(0,t_u)$ and $(t_u,\infty)$, and they must be positive and negative respectively by [\[Eq:FiberingZeroInfty\]](#Eq:FiberingZeroInfty){reference-type="eqref" reference="Eq:FiberingZeroInfty"}. ◻ The advantage of working in the Nehari manifold instead of the whole space is that our functional, even when it was not coercive in the original space, it does have some coercivity-type property on the restricted space. **Proposition 61**. 
*Let [\[Assump:H3\]](#Assump:H3){reference-type="eqref" reference="Assump:H3"} be satisfied and $f$ fulfill [\[Asf:Caratheodory\]](#Asf:Caratheodory){reference-type="eqref" reference="Asf:Caratheodory"}, [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"}, [\[Asf:QuotientMono\]](#Asf:QuotientMono){reference-type="eqref" reference="Asf:QuotientMono"} and [\[Asf:GrowthZeroAlt\]](#Asf:GrowthZeroAlt){reference-type="eqref" reference="Asf:GrowthZeroAlt"}. Then the functional $\varphi_{|_\mathcal{N}}$ is sequentially coercive, in the sense that for any sequence $\{u_n\}_{n \in \mathbb{N}} \subseteq \mathcal{N}$ such that $\left\|u_n\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} \xrightarrow{n \to \infty} + \infty$ it follows that $\varphi(u_n) \xrightarrow{n \to \infty} + \infty$.* *Proof.* Consider a sequence $\{u_n\}_{n \in \mathbb{N}}$ with the property that $\left\|u_n\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} \xrightarrow{n \to \infty} \infty$. Let us define $y_n = u_n / \left\|u_n\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}$. Due to Proposition [Proposition 28](#Prop:EmbeddingHlogSobolev){reference-type="ref" reference="Prop:EmbeddingHlogSobolev"} [(iii)]{.nodecor}, we know that there exists a subsequence $\{y_{n_k}\}_{k \in \mathbb{N}}$ and $y \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ such that $$\begin{aligned} y_{n_k} \rightharpoonup y \quad\text{in } W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega), \quad y_{n_k} \to y \quad\text{in } L^{r(\cdot)}(\Omega) \text{ and pointwisely a.e.\,in }\Omega. \end{aligned}$$ **Claim:** $y = 0$ We proceed by contradiction, so assume that $y \neq 0$. Let $\varepsilon> 0$ and without loss of generality let $\left\|u_{n_k}\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} \geq 1$ for all $k \in \mathbb{N}$. Proposition [Proposition 25](#Prop:HlogModularNorm){reference-type="ref" reference="Prop:HlogModularNorm"} [(iii)]{.nodecor} and [(iv)]{.nodecor} yield for all $k \in \mathbb{N}$ $$\begin{aligned} \varphi(u_{n_k}) & \leq \frac{1}{p_-} \ensuremath{\varrho_{1,\ensuremath{\mathcal{H}_{\log}},0} \left( u_{n_k} \right)} - \int_{\Omega}F(x, u_{n_k}) \mathop{\mathrm{d\textit{x}}}\\ & \leq \frac{a_\varepsilon}{p_-} \left\| u_{n_k} \right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{q_+ + \varepsilon} - \int_{\Omega}F(x, u_{n_k}) \mathop{\mathrm{d\textit{x}}} \end{aligned}$$ and dividing by $\left\|u_{n_k}\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{ q_+ + \varepsilon}$ we obtain for all $k \in \mathbb{N}$ $$\begin{aligned} \label{Eq:CoercivityClaim} \frac{\varphi(u_{n_k})}{\left\| u_{n_k} \right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{q_+ + \varepsilon}} \leq \frac{a_\varepsilon}{p_-} - \int_{\Omega}\frac{F(x, u_{n_k})}{\left\| u_{n_k} \right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{q_+ + \varepsilon}} \mathop{\mathrm{d\textit{x}}}. \end{aligned}$$ On the other hand, by Lemma [Lemma 57](#Le:Propq+1){reference-type="ref" reference="Le:Propq+1"} [\[Propf:growthq+1\]](#Propf:growthq+1){reference-type="ref" reference="Propf:growthq+1"} we know $$\begin{aligned} \lim_{k \to \infty} \frac{F(x, u_{n_k})}{\left\| u_{n_k} \right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{q_+ + \varepsilon}} = \lim_{k \to \infty} \frac{F(x, u_{n_k})}{\left|{ u_{n_k} }\right|^{q_+ + \varepsilon}} \left|{ y_{n_k} }\right|^{q_+ + \varepsilon} = \infty \end{aligned}$$ for $x \in \Omega$ with $y(x) \neq 0$. 
So, by Lemma [Lemma 43](#Le:Propf){reference-type="ref" reference="Le:Propf"} [\[Propf:Boundedbelow\]](#Propf:Boundedbelow){reference-type="ref" reference="Propf:Boundedbelow"} and Fatou's lemma, setting $\Omega_0 = \{ x \in \Omega \,:\, y(x) = 0\}$, we obtain $$\begin{aligned} \int_{\Omega}\frac{F(x, u_{n_k})}{\left\| u_{n_k} \right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{q_+ + \varepsilon}} \mathop{\mathrm{d\textit{x}}} & = \int_{\Omega \setminus \Omega_0} \frac{F(x, u_{n_k})}{\left\| u_{n_k} \right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{q_+ + \varepsilon}} \mathop{\mathrm{d\textit{x}}}+ \int_{\Omega_0} \frac{F(x, u_{n_k})}{\left\| u_{n_k} \right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{q_+ + \varepsilon}} \mathop{\mathrm{d\textit{x}}}\\ & \geq \int_{\Omega \setminus \Omega_0} \frac{F(x, u_{n_k})}{\left\| u_{n_k} \right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{q_+ + \varepsilon}} \mathop{\mathrm{d\textit{x}}}- \frac{M |\Omega| }{\left\| u_{n_k} \right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{q_+ + \varepsilon}} \xrightarrow{k \to \infty} \infty. \end{aligned}$$ Together with [\[Eq:CoercivityClaim\]](#Eq:CoercivityClaim){reference-type="eqref" reference="Eq:CoercivityClaim"}, this implies that $\varphi(u_{n_k}) < 0$ for $k$ large enough, which contradicts the fact that $\varphi(u_n) > 0$ for all $n \in \mathbb{N}$ given by Proposition [Proposition 60](#Prop:NehariManifoldProps){reference-type="ref" reference="Prop:NehariManifoldProps"}. This proves the Claim.

Take any $C>1$. By Proposition [Proposition 60](#Prop:NehariManifoldProps){reference-type="ref" reference="Prop:NehariManifoldProps"}, as $u_{n_k} \in \mathcal{N}$ for all $k \in \mathbb{N}$, we know that $\varphi(u_{n_k}) \geq \varphi(C y_{n_k})$ for all $k \in \mathbb{N}$. Also by using Proposition [Proposition 25](#Prop:HlogModularNorm){reference-type="ref" reference="Prop:HlogModularNorm"} [(iii)]{.nodecor} one gets for all $k \in \mathbb{N}$ $$\begin{aligned} \varphi(u_{n_k}) & \geq \varphi(C y_{n_k}) \geq \frac{1}{q_+} \ensuremath{\varrho_{1,\ensuremath{\mathcal{H}_{\log}},0} \left(C y_{n_k}\right)} - \int_{\Omega}F(x, Cy_{n_k}) \mathop{\mathrm{d\textit{x}}}\\ & \geq \frac{1}{q_+} \left\|C y_{n_k}\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}^{p_-} - \int_{\Omega}F(x, Cy_{n_k})\mathop{\mathrm{d\textit{x}}} = \frac{C^{p_-}}{q_+} - \int_{\Omega}F(x, Cy_{n_k}) \mathop{\mathrm{d\textit{x}}}. \end{aligned}$$ We also know that $C y_{n_k} \rightharpoonup 0$, which together with the strong continuity of Lemma [Lemma 43](#Le:Propf){reference-type="ref" reference="Le:Propf"} [\[Propf:ConvergenceFterm\]](#Propf:ConvergenceFterm){reference-type="ref" reference="Propf:ConvergenceFterm"} gives us a $k_0 \in \mathbb{N}$ such that for all $k \geq k_0$ $$\begin{aligned} \varphi(u_{n_k}) \geq \frac{C^{p_-}}{q_+} - 1. \end{aligned}$$ In the previous argument, $C$ can be chosen arbitrarily large and $k_0$ depends on $C$, so we conclude that $\varphi(u_{n_k}) \xrightarrow{k \to \infty} + \infty$. The subsequence principle yields $\varphi(u_n) \xrightarrow{n \to \infty} + \infty$. ◻

When minimizing over the Nehari manifold, the infimum stays strictly positive. This is useful to ensure that minimizing sequences do not converge to zero.

**Proposition 62**.
*Let [\[Assump:H3\]](#Assump:H3){reference-type="eqref" reference="Assump:H3"} be satisfied and $f$ fulfill [\[Asf:Caratheodory\]](#Asf:Caratheodory){reference-type="eqref" reference="Asf:Caratheodory"}, [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"}, [\[Asf:QuotientMono\]](#Asf:QuotientMono){reference-type="eqref" reference="Asf:QuotientMono"} and [\[Asf:GrowthZeroAlt\]](#Asf:GrowthZeroAlt){reference-type="eqref" reference="Asf:GrowthZeroAlt"}. Then $$\begin{aligned} \inf_{u \in \mathcal{N}} \varphi(u) > 0 \quad \text{ and } \quad \inf_{u \in \mathcal{N}_0} \varphi(u) > 0. \end{aligned}$$* *Proof.* From Proposition [Proposition 60](#Prop:NehariManifoldProps){reference-type="ref" reference="Prop:NehariManifoldProps"} and Corollary [Corollary 49](#Cor:RingOfMountains){reference-type="ref" reference="Cor:RingOfMountains"}, for any $u \in \mathcal{N}$ we know that $$\begin{aligned} \varphi(u) \geq \varphi\left( \frac{\delta}{\left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0}} u \right) \geq \inf_{\left\|u\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} = \delta} \varphi(u) > 0, \end{aligned}$$ which proves the first part. The second one follows from the first one since $\varphi(u) = \varphi(u^+) + \varphi(-u^-)$ and $u^+, -u^- \in \mathcal{N}$ for $u \in \mathcal{N}_0$. ◻ We can now perform calculus of variations on $\mathcal{N}_0$ to obtain our candidate for a third solution. **Proposition 63**. *Let [\[Assump:H3\]](#Assump:H3){reference-type="eqref" reference="Assump:H3"} be satisfied and $f$ fulfill [\[Asf:Caratheodory\]](#Asf:Caratheodory){reference-type="eqref" reference="Asf:Caratheodory"}, [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"}, [\[Asf:QuotientMono\]](#Asf:QuotientMono){reference-type="eqref" reference="Asf:QuotientMono"} and [\[Asf:GrowthZeroAlt\]](#Asf:GrowthZeroAlt){reference-type="eqref" reference="Asf:GrowthZeroAlt"}. Then there exists $w_0 \in \mathcal{N}_0$ such that $\varphi(w_0) = \inf_{u \in \mathcal{N}_0} \varphi(u)$.* *Proof.* We prove the result via the direct method of calculus of variations. Let $\{u_n\}_{n \in \mathbb{N}} \subseteq \mathcal{N}_0$ be a minimizing sequence, i.e., a sequence such that $\varphi(u_n) \searrow \inf_{u \in \mathcal{N}_0} \varphi(u)$. Recall that, by Proposition [Proposition 29](#Prop:Truncations){reference-type="ref" reference="Prop:Truncations"}, $u_n^+, - u_n^- \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ for all $n \in \mathbb{N}$. Then the sequences $\{ \varphi( u_n^+ ) \}_{n \in \mathbb{N}}$ and $\{ \varphi( - u_n^- ) \}_{n \in \mathbb{N}}$ are bounded in $\mathbb{R}$, since $\varphi(u_n) = \varphi(u_n^+) \mathop{+} \varphi(-u_n^-)$ for all $n \in \mathbb{N}$ and by Proposition [Proposition 60](#Prop:NehariManifoldProps){reference-type="ref" reference="Prop:NehariManifoldProps"} we have that $\varphi( u_n^+) > 0$ and $\varphi(- u_n^-) > 0$ for all $n \in \mathbb{N}$. Hence the coercivity condition of Proposition [Proposition 61](#Prop:Coercivity){reference-type="ref" reference="Prop:Coercivity"} yields the boundedness of $\{ u_n^+ \}_{n \in \mathbb{N}}$ and $\{ - u_n^- \}_{n \in \mathbb{N}}$ in $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$. 
By Proposition [Proposition 28](#Prop:EmbeddingHlogSobolev){reference-type="ref" reference="Prop:EmbeddingHlogSobolev"} [(iii)]{.nodecor}, we obtain subsequences $\{ u_{n_k} ^+\}_{k \in \mathbb{N}}$ and $\{- u_{n_k} ^-\}_{k \in \mathbb{N}}$, and $z_1, z_2 \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ such that $$\label{Eq:MinimizingSubsequence} \begin{aligned} & u_{n_k} ^+ \rightharpoonup z_1 \quad\text{in } W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega), \quad u_{n_k} ^+ \to z_1 \quad\text{in } L^{r(\cdot)}(\Omega) \quad \text{and a.e.\,in }\Omega, \\ & - u_{n_k} ^- \rightharpoonup z_2 \quad\text{in } W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega), \quad - u_{n_k} ^- \to z_2 \quad\text{in } L^{r(\cdot)}(\Omega) \quad \text{and a.e.\,in }\Omega, \\ & \text{with } z_1 \geq 0, \quad z_2 \leq 0 \quad \text{and } \quad z_1 z_2 = 0. \end{aligned}$$ **Claim:** $z_1 \neq 0 \neq z_2$. We only prove the assertion for $z_1$; the other one is exactly the same. Arguing by contradiction, assume that $z_1 = 0$. By assumption $u_{n_k} ^+ \in \mathcal{N}$, so we have $$\begin{aligned} 0 & = \langle \varphi'( u_{n_k} ^+) , u_{n_k} ^+ \rangle \\ & = \ensuremath{\varrho_{1,\ensuremath{\mathcal{H}_{\log}},0} \left( u_{n_k} ^+\right)} + \int_{\Omega}\mu(x)\frac{\left|{ \nabla u_{n_k} ^+}\right|}{q(x) (e + \left|{ \nabla u_{n_k} ^+}\right|)} \left|{ \nabla u_{n_k} ^+}\right|^{q(x)} \mathop{\mathrm{d\textit{x}}} - \int_{\Omega}f(x, u_{n_k}^+) u_{n_k} ^+ \mathop{\mathrm{d\textit{x}}}\\ & \geq \ensuremath{\varrho_{1,\ensuremath{\mathcal{H}_{\log}},0} \left( u_{n_k} ^+\right)} - \int_{\Omega}f(x, u_{n_k}^+) u_{n_k} ^+ \mathop{\mathrm{d\textit{x}}}. \end{aligned}$$ The weak convergence of [\[Eq:MinimizingSubsequence\]](#Eq:MinimizingSubsequence){reference-type="eqref" reference="Eq:MinimizingSubsequence"} along with the strong continuity of Lemma [Lemma 43](#Le:Propf){reference-type="ref" reference="Le:Propf"} [\[Propf:ConvergenceFterm\]](#Propf:ConvergenceFterm){reference-type="ref" reference="Propf:ConvergenceFterm"} yields $\ensuremath{\varrho_{1,\ensuremath{\mathcal{H}_{\log}},0} \left( u_{n_k} ^+\right)} \to 0$, i.e., $u_{n_k} ^+ \to 0$ in $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ by Proposition [Proposition 25](#Prop:HlogModularNorm){reference-type="ref" reference="Prop:HlogModularNorm"} [(v)]{.nodecor}. However, as $\varphi$ is continuous and by Proposition [Proposition 62](#Prop:Infimum>0){reference-type="ref" reference="Prop:Infimum>0"}, $$\begin{aligned} 0 < \inf_{u \in \mathcal{N}} \varphi(u) \leq \varphi( u_{n_k} ^+) \longrightarrow \varphi(0) = 0 \quad \text{as } k \to \infty. \end{aligned}$$ This is a contradiction and finishes the proof of the Claim.

From the previous Claim and Proposition [Proposition 60](#Prop:NehariManifoldProps){reference-type="ref" reference="Prop:NehariManifoldProps"}, we have $t_1, t_2 > 0$ such that $t_1 z_1, t_2 z_2 \in \mathcal{N}$. We define $w_0 = t_1 z_1 + t_2 z_2$, which by [\[Eq:MinimizingSubsequence\]](#Eq:MinimizingSubsequence){reference-type="eqref" reference="Eq:MinimizingSubsequence"} satisfies $w_0^+ = t_1 z_1$ and $- w_0^- = t_2 z_2$; thus $w_0 \in \mathcal{N}_0$. In order to see that this is the minimizer, we note that $\varphi$ is sequentially weakly lower semicontinuous.
Indeed, the $F$ term is even strongly continuous by Lemma [Lemma 43](#Le:Propf){reference-type="ref" reference="Le:Propf"} [\[Propf:ConvergenceFterm\]](#Propf:ConvergenceFterm){reference-type="ref" reference="Propf:ConvergenceFterm"}, the part with exponent $p(\cdot)$ is weakly lower semicontinuous because it is continuous and convex, and the part with exponent $q(\cdot)$ and logarithm is also weakly lower semicontinuous for the same reasons. If we combine Proposition [Proposition 60](#Prop:NehariManifoldProps){reference-type="ref" reference="Prop:NehariManifoldProps"} with the previous property, we get $$\begin{aligned} \inf_{u \in \mathcal{N}_0} \varphi(u) = \lim\limits_{k \to \infty} \varphi(u_{n_k}) & = \lim\limits_{k \to \infty} \varphi(u_{n_k}^+) + \varphi(- u_{n_k} ^-) \\ & \geq \liminf\limits_{k \to \infty} \varphi(t_1 u_{n_k}^+) + \varphi(- t_2 u_{n_k} ^-) \\ & \geq \varphi(t_1 z_1) + \varphi(t_2 z_2) \\ & = \varphi(w_0^+) + \varphi(- w_0^-) = \varphi(w_0) \geq \inf_{u \in \mathcal{N}_0} \varphi(u). \end{aligned}$$ ◻ Our last step towards finding a sign-changing solution is to see that any minimizer like the one we obtained in the previous proposition is actually a critical point of $\varphi$. **Proposition 64**. *Let [\[Assump:H3\]](#Assump:H3){reference-type="eqref" reference="Assump:H3"} be satisfied and $f$ fulfill [\[Asf:Caratheodory\]](#Asf:Caratheodory){reference-type="eqref" reference="Asf:Caratheodory"}, [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"}, [\[Asf:QuotientMono\]](#Asf:QuotientMono){reference-type="eqref" reference="Asf:QuotientMono"} and [\[Asf:GrowthZeroAlt\]](#Asf:GrowthZeroAlt){reference-type="eqref" reference="Asf:GrowthZeroAlt"}. Let $w_0 \in \mathcal{N}_0$ such that $\varphi(w_0) = \inf_{u \in \mathcal{N}_0} \varphi(u)$. Then $w_0$ is a critical point of $\varphi$.* *Proof.* This is a proof by contradiction. Assume that $\varphi'(w_0) \neq 0$, one can find $\lambda, \delta_0 > 0$ such that $$\begin{aligned} \left\| \varphi'(u) \right\|_{*} \geq \lambda, \quad \text{for all } u \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\text{ with } \left\|u - w_0\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} < 3 \delta_0. \end{aligned}$$ On the other hand, let $C_{\ensuremath{\mathcal{H}_{\log}}}$ be the constant of the embedding $W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)\hookrightarrow L^{p_-}(\Omega)$ given by Proposition [Proposition 28](#Prop:EmbeddingHlogSobolev){reference-type="ref" reference="Prop:EmbeddingHlogSobolev"} [(i)]{.nodecor} and the usual embedding $W^{1,p_-}_0(\Omega) \hookrightarrow L^{p_-}(\Omega)$. As $w_0^+ \neq 0 \neq w_0^-$, for any $v \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ we know that $$\begin{aligned} \left\| w_0 - v \right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} \geq C_{\ensuremath{\mathcal{H}_{\log}}}^{-1} \left\| w_0 - v\right\|_{p_-} \geq \begin{cases} C_{\ensuremath{\mathcal{H}_{\log}}}^{-1} \left\| w_0^- \right\|_{p_-}, & \text{if } v^- = 0, \\ C_{\ensuremath{\mathcal{H}_{\log}}}^{-1} \left\| w_0^+ \right\|_{p_-}, & \text{if } v^+ = 0. \end{cases} \end{aligned}$$ We can choose some value $0 < \delta_1 < \min \{ C_{\ensuremath{\mathcal{H}_{\log}}}^{-1} \left\| w_0^- \right\|_{p_-} , C_{\ensuremath{\mathcal{H}_{\log}}}^{-1} \left\| w_0^+ \right\|_{p_-} \}$. As a consequence, for any $v \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ with $\left\| w_0 - v \right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} < \delta_1$ we know that $v^+ \neq 0 \neq v^-$. 
For the rest of the proof we work with $\delta = \min \{ \delta_0, \delta_1 / 2\}$. Observe that the mapping $(s,t) \mapsto s w_0^+ - t w_0^-$ is a continuous mapping $[0,\infty)^2 \to W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$. Hence, we can find some $0 < \alpha < 1$ such that for all $s,t \geq 0$ with $\max \{ \left|{s - 1}\right|, \left|{t - 1}\right| \} < \alpha$ it holds $$\begin{aligned} \left\| s w_0^+ - t w_0^- - w_0 \right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} < \delta. \end{aligned}$$ Let $D = ( 1 - \alpha, 1 + \alpha)^2$. Note also that for any $s,t \geq 0$ with $s \neq 1 \neq t$, by Proposition [Proposition 60](#Prop:NehariManifoldProps){reference-type="ref" reference="Prop:NehariManifoldProps"} we know $$\label{Eq:ParametersEstimatedByInfimum} \begin{aligned} \varphi(s w_0^+ - t w_0^-) & = \varphi(s w_0^+) + \varphi(- t w_0^-) \\ & < \varphi(w_0^+) + \varphi(- w_0^-) = \varphi(w_0) = \inf_{u \in \mathcal{N}_0} \varphi(u). \end{aligned}$$ In particular, this implies that $$\begin{aligned} \zeta = \max_{(s,t) \in \partial D} \varphi(s w_0^+ - t w_0^-) < \varphi(w_0) = \inf_{u \in \mathcal{N}_0} \varphi(u). \end{aligned}$$ At this point we are in the position to apply the quantitative deformation lemma given in Lemma [Lemma 20](#Le:DeformationLemma){reference-type="ref" reference="Le:DeformationLemma"}. With its notation, take $$\begin{aligned} S = B (w_0, \delta), \quad c = \inf_{u \in \mathcal{N}_0} \varphi(u), \quad \varepsilon= \min \left\lbrace \frac{c - \zeta}{4}, \frac{\lambda \delta}{8} \right\rbrace , \quad \delta \text{ be as defined above.} \end{aligned}$$ The assumptions are satisfied because $S_{2 \delta} = B (w_0, 3 \delta)$ and the choice of $\varepsilon$, so we know that there is a mapping $\eta$ with the properties stated in the lemma. Because of the choice of $\varepsilon$ we also know that $$\begin{aligned} \label{Eq:2epsAtBoundary} \varphi(s w_0^+ - t w_0^-) \leq \zeta + c - c < c - \left( \frac{c - \zeta}{2} \right) \leq c - 2 \varepsilon \end{aligned}$$ for all $(s,t) \in \partial D$. Let us define $h \colon [0,\infty)^2 \to W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ and $H \colon (0,\infty)^2 \to \mathbb{R}^2$ by $$\begin{aligned} h(s,t) & = \eta(1 , s w_0^+ - t w_0^-) \\ H (s,t) & = \left( \; \frac{1}{s} \langle \varphi'(h^+(s,t)) , h^+(s,t) \rangle \; , \; \frac{1}{t} \langle \varphi'(- h^- (s,t)) , -h^- (s,t) \rangle \; \right). \end{aligned}$$ As $\eta$ is continuous, so is $h$, and as $\varphi$ is $C^1$, $H$ is also continuous. Because of Lemma [Lemma 20](#Le:DeformationLemma){reference-type="ref" reference="Le:DeformationLemma"} [(i)]{.nodecor} and [\[Eq:2epsAtBoundary\]](#Eq:2epsAtBoundary){reference-type="eqref" reference="Eq:2epsAtBoundary"}, for all $(s,t) \in \partial D$ we know that $h(s,t) = s w_0^+ - t w_0^-$ and $$\begin{aligned} H (s,t) = \left( \; \langle \varphi'(s w_0^+) , w_0^+ \rangle \; , \; \langle \varphi'(- t w_0^-) , - w_0^- \rangle \; \right). \end{aligned}$$ Furthermore, if we also take into consideration the information on the derivatives from Proposition [Proposition 60](#Prop:NehariManifoldProps){reference-type="ref" reference="Prop:NehariManifoldProps"} we have the componentwise inequalities $$\begin{aligned} & H (1 - \alpha,t) > (0,0) > H (1 + \alpha,t),\\ & H (t, 1 - \alpha) > (0,0) > H (t,1 + \alpha) \quad \text{for all } t \in (1 - \alpha, 1 + \alpha). 
\end{aligned}$$ With the information above, by the Poincaré-Miranda existence theorem given in Theorem [Theorem 21](#Th:PoincareMiranda){reference-type="ref" reference="Th:PoincareMiranda"} applied to $d(s,t) = - H(1 + s, 1 + t)$, we find $(s_0,t_0) \in D$ such that $H(s_0,t_0) = 0$, or equivalently $$\begin{aligned} \langle \varphi'(h^+(s_0 , t_0)) , h^+(s_0 , t_0) \rangle = 0 = \langle \varphi'(- h^- (s_0 , t_0)) , -h^- (s_0 , t_0) \rangle. \end{aligned}$$ From Lemma [Lemma 20](#Le:DeformationLemma){reference-type="ref" reference="Le:DeformationLemma"} [(iv)]{.nodecor} and the choice of $\alpha$, we also know that $$\begin{aligned} \left\|h(s_0 , t_0) - w_0\right\|_{1,\ensuremath{\mathcal{H}_{\log}},0} \leq 2 \delta \leq \delta_1, \end{aligned}$$ which by the choice of $\delta_1$ gives us $$\begin{aligned} h^+(s_0 , t_0) \neq 0 \neq - h^-(s_0 , t_0). \end{aligned}$$ Altogether, this means that $h(s_0 , t_0) \in \mathcal{N}_0$. However, by Lemma [Lemma 20](#Le:DeformationLemma){reference-type="ref" reference="Le:DeformationLemma"} [(ii)]{.nodecor}, the choice of $\alpha$ and [\[Eq:ParametersEstimatedByInfimum\]](#Eq:ParametersEstimatedByInfimum){reference-type="eqref" reference="Eq:ParametersEstimatedByInfimum"}, we also know that $\varphi( h(s_0 , t_0) ) \leq c - \varepsilon$, which is a contradiction and this finishes the proof. ◻ The combination of Propositions [Proposition 63](#Prop:MinimizerExistence){reference-type="ref" reference="Prop:MinimizerExistence"} and [Proposition 64](#Prop:Sing-changing-CritPoint){reference-type="ref" reference="Prop:Sing-changing-CritPoint"}, Lemma [Lemma 57](#Le:Propq+1){reference-type="ref" reference="Le:Propq+1"} [\[Propf:growthq+1\]](#Propf:growthq+1){reference-type="ref" reference="Propf:growthq+1"} and Theorem [Theorem 55](#Th:ConstantSignSolutionsAlt){reference-type="ref" reference="Th:ConstantSignSolutionsAlt"} yield the following results. **Theorem 65**. *Let [\[Assump:H3\]](#Assump:H3){reference-type="eqref" reference="Assump:H3"} be satisfied and $f$ fulfill [\[Asf:Caratheodory\]](#Asf:Caratheodory){reference-type="eqref" reference="Asf:Caratheodory"}, [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"}, [\[Asf:QuotientMono\]](#Asf:QuotientMono){reference-type="eqref" reference="Asf:QuotientMono"} and [\[Asf:GrowthZeroAlt\]](#Asf:GrowthZeroAlt){reference-type="eqref" reference="Asf:GrowthZeroAlt"}. Then there exists a nontrivial weak solution $w_0 \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ of problem [\[Eq:Problem\]](#Eq:Problem){reference-type="eqref" reference="Eq:Problem"} with changing sign.* **Theorem 66**. *Let [\[Assump:H3\]](#Assump:H3){reference-type="eqref" reference="Assump:H3"} be satisfied and $f$ fulfill [\[Asf:Caratheodory\]](#Asf:Caratheodory){reference-type="eqref" reference="Asf:Caratheodory"}, [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"}, [\[Asf:QuotientMono\]](#Asf:QuotientMono){reference-type="eqref" reference="Asf:QuotientMono"}, [\[Asf:GrowthZeroAlt\]](#Asf:GrowthZeroAlt){reference-type="eqref" reference="Asf:GrowthZeroAlt"} and [\[Asf:CeramiAssumption\]](#Asf:CeramiAssumption){reference-type="eqref" reference="Asf:CeramiAssumption"}. 
Then there exist nontrivial weak solutions $u_0,v_0,w_0 \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ of problem [\[Eq:Problem\]](#Eq:Problem){reference-type="eqref" reference="Eq:Problem"} such that $u_0 \geq 0$, $v_0 \leq 0$ and $w_0$ has changing sign.* # Nodal domains In this last section, we provide more information on the sign-changing solution found in Section [6](#sign-changing-solution){reference-type="ref" reference="sign-changing-solution"}. In particular we will determine the number of nodal domains, that is, the number of maximal regions where it does not change sign. The usual definition of nodal domains for $u \in C(\Omega,\mathbb{R})$ is the connected components of $\Omega \setminus Z$, where the set $Z = \{ x \in \Omega \,:\, u(x) = 0 \}$ is called the nodal set of $u$. In our case, we have no information on the continuity of the solutions, so this definition is not meaningful for us. A similar situation was already noted in a previous work by Crespo-Blanco-Winkert [@Crespo-Blanco-Winkert-2022], where the following definition was proposed. Let $u \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ and $A$ be a Borelian subset of $\Omega$ with $|A| > 0$. We say that $A$ is a nodal domain of $u$ if 1. $u \geq 0$ a.e. on $A$ or $u \leq 0$ a.e. on $A$; 2. $0 \neq u 1_A \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$; 3. $A$ is minimal w.r.t. [(i)]{.nodecor} and [(ii)]{.nodecor}, i.e., if $B \subseteq A$ with $B$ being a Borelian subset of $\Omega$, $|B| > 0$ and $B$ satisfies [(i)]{.nodecor} and [(ii)]{.nodecor}, then $|A \setminus B| = 0$. For the purpose of determining the number of nodal domains, we need one last assumption on our right-hand side. 6. [\[Asf:Nonnegative\]]{#Asf:Nonnegative label="Asf:Nonnegative"} $f(x,t)t - q_+ \left( 1 + \frac{\kappa}{q_-} \right) F(x,t) \geq 0$ for all $t \in \mathbb{R}$ and for a.a. $x \in \Omega$. **Proposition 67**. *Let [\[Assump:H3\]](#Assump:H3){reference-type="eqref" reference="Assump:H3"} be satisfied and $f$ fulfill [\[Asf:Caratheodory\]](#Asf:Caratheodory){reference-type="eqref" reference="Asf:Caratheodory"}, [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"}, [\[Asf:QuotientMono\]](#Asf:QuotientMono){reference-type="eqref" reference="Asf:QuotientMono"}, [\[Asf:GrowthZeroAlt\]](#Asf:GrowthZeroAlt){reference-type="eqref" reference="Asf:GrowthZeroAlt"} and [\[Asf:Nonnegative\]](#Asf:Nonnegative){reference-type="eqref" reference="Asf:Nonnegative"}. Then, any minimizer of $\varphi_{|_{\mathcal{N}_0}}$ has exactly two nodal domains.* *Proof.* Let $w_0$ be such that $\varphi(w_0) = \inf_{u \in \mathcal{N}_0} \varphi(u)$. The sets $\Omega_{w_0 > 0} = \{ x \in \Omega \,:\, w_0 (x) > 0 \}$ and $\Omega_{w_0 < 0} = \{ x \in \Omega \,:\, w_0 (x) < 0 \}$ are determined up to a zero measure set (for any choice of a representative of the class $w_0$ they may differ, but at most in a zero measure set), which is no problem for the definition above. Furthermore, by Proposition [Proposition 29](#Prop:Truncations){reference-type="ref" reference="Prop:Truncations"} and $w_0^+ = w_0 1_{\Omega_{w_0 > 0}}$, $-w_0^- = w_0 1_{\Omega_{w_0 < 0}}$, conditions [(i)]{.nodecor} and [(ii)]{.nodecor} hold for these sets. So it is only left to see their minimality. We proceed by contradiction. 
Without loss of generality, let $B_1, B_2$ be Borelian subsets of $\Omega$ with the following properties: disjoint sets with $\Omega_{w_0 < 0} = B_1 \dot{\cup} B_2$, both have positive measure and $B_1$ fulfills [(i)]{.nodecor} and [(ii)]{.nodecor} in the definition above. This implies that for $B_2$ conditions [(i)]{.nodecor} and [(ii)]{.nodecor} hold too, since $w_0 1_{B_2} = w_0 1_{\Omega_{w_0 < 0}} - w_0 1_{B_1} \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$. Let $u = 1_{\Omega_{w_0 > 0}} w_0 + 1_{B_1} w_0$ and $v = 1_{B_2} w_0$, hence $u ^+= 1_{\Omega_{w_0 > 0}} w_0$ and $- u ^- = 1_{B_1} w_0$. By Proposition [Proposition 64](#Prop:Sing-changing-CritPoint){reference-type="ref" reference="Prop:Sing-changing-CritPoint"} we know that $\varphi'(w_0) = 0$ and as the supports of $u^+ , - u^-$ and $v$ are disjoint, we get $$\begin{aligned} 0 = \langle \varphi'(w_0) , u^+ \rangle = \langle \varphi'(u^+) , u^+ \rangle, \end{aligned}$$ that is, $u^+ \in \mathcal{N}$. With a similar argument $- u^- \in \mathcal{N}$, so $u \in \mathcal{N}_0$; and also $\langle \varphi'(v) , v \rangle = 0$. Altogether, by these properties, condition [\[Asf:Nonnegative\]](#Asf:Nonnegative){reference-type="eqref" reference="Asf:Nonnegative"} and Lemma [Lemma 46](#Le:QuotientFracLog){reference-type="ref" reference="Le:QuotientFracLog"}, we get $$\begin{aligned} \inf_{w \in \mathcal{N}_0} \varphi(w) & = \varphi(w_0) = \varphi(u) + \varphi(v) - \frac{1}{ q_+ \left( 1 + \frac{\kappa}{q_-} \right) } \langle \varphi'(v) , v \rangle \\ & \geq \varphi(u) + \left[ \frac{1}{p_+} - \frac{1}{ q_+ \left( 1 + \frac{\kappa}{q_-} \right) } \right] \varrho_{p(\cdot)} ( \nabla v ) \\ & \quad + \int_{\Omega}\left[ \frac{1}{ q_+ \left( 1 + \frac{\kappa}{q_-} \right) } f(x,v) v - F(x,v) \right] \mathop{\mathrm{d\textit{x}}}\\ & \quad + \frac{1}{q_+} \int_{\Omega}\mu(x) \left|{\nabla v}\right|^{q(x)} \log(e + \left|{\nabla v}\right|) \mathop{\mathrm{d\textit{x}}}\\ & \quad - \frac{1}{ q_+ \left( 1 + \frac{\kappa}{q_-} \right) } \int_{\Omega}\mu(x) \left|{\nabla v}\right|^{q(x)} \left[ \log (e + \left|{\nabla v}\right|) + \frac{ \left|{\nabla v}\right| }{ q(x) (e + \left|{\nabla v}\right|) } \right] \mathop{\mathrm{d\textit{x}}}\\ & \geq \varphi(u) + \left[ \frac{1}{p_+} - \frac{1}{ q_+ \left( 1 + \frac{\kappa}{q_-} \right) } \right] \varrho_{p(\cdot)} ( \nabla v ) \\ & \geq \inf_{w \in \mathcal{N}_0} \varphi(w) + \left[ \frac{1}{p_+} - \frac{1}{ q_+ \left( 1 + \frac{\kappa}{q_-} \right) } \right] \varrho_{p(\cdot)} ( \nabla v ). \end{aligned}$$ Since $p_+ \leq q_+$ and $v \neq 0$, we get the desired contradiction. ◻ Combining Theorems [Theorem 65](#Th:SignChangingSolution){reference-type="ref" reference="Th:SignChangingSolution"} and [Theorem 66](#Th:ThreeSolutions){reference-type="ref" reference="Th:ThreeSolutions"} with Proposition [Proposition 67](#Prop:NodalDomains){reference-type="ref" reference="Prop:NodalDomains"}, we obtain the final existence results of this work. **Theorem 68**. 
*Let [\[Assump:H3\]](#Assump:H3){reference-type="eqref" reference="Assump:H3"} be satisfied and $f$ fulfill [\[Asf:Caratheodory\]](#Asf:Caratheodory){reference-type="eqref" reference="Asf:Caratheodory"}, [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"}, [\[Asf:QuotientMono\]](#Asf:QuotientMono){reference-type="eqref" reference="Asf:QuotientMono"}, [\[Asf:GrowthZeroAlt\]](#Asf:GrowthZeroAlt){reference-type="eqref" reference="Asf:GrowthZeroAlt"} and [\[Asf:Nonnegative\]](#Asf:Nonnegative){reference-type="eqref" reference="Asf:Nonnegative"}. Then there exists a nontrivial weak solution $w_0 \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ of problem [\[Eq:Problem\]](#Eq:Problem){reference-type="eqref" reference="Eq:Problem"} with changing sign and two nodal domains.* **Theorem 69**. *Let [\[Assump:H3\]](#Assump:H3){reference-type="eqref" reference="Assump:H3"} be satisfied and $f$ fulfill [\[Asf:Caratheodory\]](#Asf:Caratheodory){reference-type="eqref" reference="Asf:Caratheodory"}, [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"}, [\[Asf:QuotientMono\]](#Asf:QuotientMono){reference-type="eqref" reference="Asf:QuotientMono"}, [\[Asf:GrowthZeroAlt\]](#Asf:GrowthZeroAlt){reference-type="eqref" reference="Asf:GrowthZeroAlt"}, [\[Asf:CeramiAssumption\]](#Asf:CeramiAssumption){reference-type="eqref" reference="Asf:CeramiAssumption"} and [\[Asf:Nonnegative\]](#Asf:Nonnegative){reference-type="eqref" reference="Asf:Nonnegative"}. Then there exist nontrivial weak solutions $u_0,v_0,w_0 \in W^{1, \ensuremath{\mathcal{H}_{\log}}}_0(\Omega)$ of problem [\[Eq:Problem\]](#Eq:Problem){reference-type="eqref" reference="Eq:Problem"} such that $u_0 \geq 0$, $v_0 \leq 0$ and $w_0$ has changing sign with two nodal domains.* Let us finish this work with some examples of right-hand side functions that would fit in our assumptions. **Example 70**. *For simplicity, assume not only [\[Assump:H3\]](#Assump:H3){reference-type="eqref" reference="Assump:H3"}, but also $q_+ \kappa / q_- < 1$, which implies $q_+ (1 + \kappa/q_-) < q_+ + 1 < p^+_-$.* 1. *Let $0 < \varepsilon< 1 - q_+ \kappa / q_-$ and take $$\begin{aligned} f(x,t) = \left|{t}\right|^{q_+ \left( 1 + \frac{\kappa}{q_-} \right) + \varepsilon- 2 } t. \end{aligned}$$ This function satisfies [\[Asf:Caratheodory\]](#Asf:Caratheodory){reference-type="eqref" reference="Asf:Caratheodory"}, [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"}, [\[Asf:QuotientMono\]](#Asf:QuotientMono){reference-type="eqref" reference="Asf:QuotientMono"}, [\[Asf:GrowthZeroAlt\]](#Asf:GrowthZeroAlt){reference-type="eqref" reference="Asf:GrowthZeroAlt"}, [\[Asf:CeramiAssumption\]](#Asf:CeramiAssumption){reference-type="eqref" reference="Asf:CeramiAssumption"} and [\[Asf:Nonnegative\]](#Asf:Nonnegative){reference-type="eqref" reference="Asf:Nonnegative"}. For [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"} take $r = q_+ ( 1 + \kappa / q_- ) + \varepsilon$ and for [\[Asf:CeramiAssumption\]](#Asf:CeramiAssumption){reference-type="eqref" reference="Asf:CeramiAssumption"} take $l = \widetilde{l} = q_+ ( 1 + \kappa / q_- )$.* 2. *Let $l, \widetilde{l}, m \in C_+ (\Omega)$ such that $q_+ + 1 \leq \min\{ m_-, l_-, \widetilde{l}_- \}$, $\max\{ l_+, \widetilde{l}_+\} < p^*_-$ and $$\begin{aligned} \frac{ \max\{ l_+, \widetilde{l}_+ \} }{p_-} - \frac{ \min\{ l_-, \widetilde{l}_- \} }{N} < 1. 
\end{aligned}$$ Then we can take $$\begin{aligned} f(x,t) = \begin{cases} |t|^{\widetilde{l}(x)-2}t[1 + \log(-t)], & \text{if } \phantom{-1}t \leq -1, \\ |t|^{m(x)-2}t, & \text{if } -1 < t < 1, \\ |t|^{l(x)-2}t[1 + \log(t)], & \text{if } \phantom{-1} 1 \leq t, \end{cases} \end{aligned}$$ which also satisfies [\[Asf:Caratheodory\]](#Asf:Caratheodory){reference-type="eqref" reference="Asf:Caratheodory"}, [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"}, [\[Asf:QuotientMono\]](#Asf:QuotientMono){reference-type="eqref" reference="Asf:QuotientMono"}, [\[Asf:GrowthZeroAlt\]](#Asf:GrowthZeroAlt){reference-type="eqref" reference="Asf:GrowthZeroAlt"}, [\[Asf:CeramiAssumption\]](#Asf:CeramiAssumption){reference-type="eqref" reference="Asf:CeramiAssumption"} and [\[Asf:Nonnegative\]](#Asf:Nonnegative){reference-type="eqref" reference="Asf:Nonnegative"}. For [\[Asf:WellDef\]](#Asf:WellDef){reference-type="eqref" reference="Asf:WellDef"} take $r(x) = \max\{ l(x), \widetilde{l}(x) \} + \varepsilon$, where $0 < \varepsilon< p^*_- - \max\{ l_+, \widetilde{l}_+\}$ and is also small enough so that $$\begin{aligned} \frac{r_+}{p_-} - \frac{ \min\{ l_-, \widetilde{l}_- \} }{N} < 1. \end{aligned}$$ For [\[Asf:CeramiAssumption\]](#Asf:CeramiAssumption){reference-type="eqref" reference="Asf:CeramiAssumption"} take $l$ and $\widetilde{l}$ to be the same ones as here. This inequality is why we need the compatibility assumption on $\max \{ l_+, \widetilde{l}_+ \}$ and $\min \{ l_-, \widetilde{l}_- \}$. In particular, when $l=\widetilde{l}$ and they are constant functions, this condition is exactly $l < p^*_-$, hence redundant in this case.* # Acknowledgments {#acknowledgments .unnumbered} The first author acknowledges the support of the Start-up Research Grant (SRG) SRG/2023/000308, Science and Engineering Research Board (SERB), India, and Seed grant IIT(BHU)/DMS/2023-24/493. The second author was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - The Berlin Mathematics Research Center MATH+ and the Berlin Mathematical School (BMS) (EXC-2046/1, project ID:\ 390685689). 99 C.O. Alves, I.S. da Silva, *Existence of a positive solution for a class of Schrödinger logarithmic equations on exterior domains*, preprint, https://arxiv.org/abs/2309.01003. C.O. Alves, D.C. de Morais Filho, *Existence and concentration of positive solutions for a Schrödinger logarithmic equation*, Z. Angew. Math. Phys. **69** (2018), no. 6, Paper No. 144, 22 pp. C.O. Alves, C. Ji, *Existence and concentration of positive solutions for a logarithmic Schrödinger equation via penalization method*, Calc. Var. Partial Differential Equations **59** (2020), no. 1, Paper No. 21, 27 pp. C.O. Alves, A. Moussaoui, L. Tavares, *An elliptic system with logarithmic nonlinearity*, Adv. Nonlinear Anal. **8** (2019), no. 1, 928--945. R. Arora, S. Shmarev, *Double-phase parabolic equations with variable growth and nonlinear sources*, Adv. Nonlinear Anal. **12** (2023), no. 1, 304--335. R. Arora, S. Shmarev, *Existence and regularity results for a class of parabolic problems with double phase flux of variable growth*, Rev. R. Acad. Cienc. Exactas Fı́s. Nat. Ser. A Mat. RACSAM **117** (2023), no. 1, Paper No. 34, 48 pp. A. Bahrouni, V.D. Rădulescu, D.D. Repovš, *Double phase transonic flow problems with variable growth: nonlinear patterns and stationary waves*, Nonlinearity **32** (2019), no. 7, 2481--2495. P. Baroni, M. Colombo, G. 
Mingione, *Harnack inequalities for double phase functionals*, Nonlinear Anal. **121** (2015), 206--222. P. Baroni, M. Colombo, G. Mingione, *Non-autonomous functionals, borderline cases and related function classes*, St. Petersburg Math. J. **27** (2016), 347--379. P. Baroni, T. Kuusi, G. Mingione, *Borderline gradient continuity of minima*, J. Fixed Point Theory Appl. **15** (2014), no. 2, 537--575. P. Baroni, M. Colombo, G. Mingione, *Regularity for general functionals with double phase*, Calc. Var. Partial Differential Equations **57** (2018), no. 2, Art. 62, 48 pp. L. Beck, G. Mingione, *Lipschitz bounds and nonuniform ellipticity*, Comm. Pure Appl. Math. **73** (2020), no. 5, 944--1034. V. Benci, P. D'Avenia, D. Fortunato, L. Pisani, *Solitons in several space dimensions: Derrick's problem and infinitely many solutions*, Arch. Ration. Mech. Anal. **154** (2000), no. 4, 297--324. S.-S. Byun, J. Oh, *Regularity results for generalized double phase functionals*, Anal. PDE **13** (2020), no. 5, 1269--1300. S.-S. Byun, J. Ok, K. Song, *Hölder regularity for weak solutions to nonlocal double phase problems*, J. Math. Pures Appl. (9) **168** (2022), 110--142. L. Cherfils, Y. Il'yasov, *On the stationary solutions of generalized reaction diffusion equations with $p\&q$-Laplacian*, Commun. Pure Appl. Anal. **4** (2005), no. 1, 9--22. F. Colasuonno, M. Squassina, *Eigenvalues for double phase variational integrals*, Ann. Mat. Pura Appl. (4) **195** (2016), no. 6, 1917--1959. Á. Crespo-Blanco, L. Gasiński, P. Harjulehto, P. Winkert, *A new class of double phase variable exponent problems: existence and uniqueness*, J. Differential Equations **323** (2022), 182--228. Á. Crespo-Blanco, P. Winkert, *Nehari manifold approach for superlinear double phase problems with variable exponents*, Ann. Mat. Pura Appl. (4), https://doi.org/10.1007/s10231-023-01375-2. M. Colombo, G. Mingione, *Bounded minimisers of double phase variational integrals*, Arch. Ration. Mech. Anal. **218** (2015), no. 1, 219--273. M. Colombo, G. Mingione, *Regularity for double phase variational problems*, Arch. Ration. Mech. Anal. **215** (2015), no. 2, 443--496. G. Cupini, P. Marcellini, E. Mascolo, *Local boundedness of weak solutions to elliptic equations with $p,q$-growth*, Math. Eng. **5** (2023), no. 3, Paper No. 065, 28 pp. C. De Filippis, G. Mingione, *Lipschitz bounds and nonautonomous integrals*, Arch. Ration. Mech. Anal. **242** (2021), 973--1057. C. De Filippis, G. Mingione, *On the regularity of minima of non-autonomous functionals*, J. Geom. Anal. **30** (2020), no. 2, 1584--1626. C. De Filippis, G. Mingione, *Regularity for double phase problems at nearly linear growth*, Arch. Ration. Mech. Anal. **247** (2023), no. 5, 85. C. De Filippis, G. Palatucci, *Hölder regularity for nonlocal double phase equations*, J. Differential Equations **267** (2019), no. 1, 547--586. G. Dinca, J. Mawhin, "Brouwer Degree -- The Core of Nonlinear Analysis", Birkhäuser/ Springer, Cham, 2021. L. Diening, P. Harjulehto, P. Hästö, M. R$\mathring{\text{u}}$žička, "Lebesgue and Sobolev Spaces with Variable Exponents", Springer, Heidelberg, 2011. X. Fan, J. Shen, D. Zhao, *Sobolev embedding theorems for spaces $W^{k,p(x)}(\Omega)$*, J. Math. Anal. Appl. **262** (2001), no. 2, 749--760. X. Fan, Q. Zhang, D. Zhao, *Eigenvalues of $p(x)$-Laplacian Dirichlet problem*, J. Math. Anal. Appl. **302** (2005), no. 2, 306--317. X. Fan, D. Zhao, *On the spaces $L^{p(x)}(\Omega)$ and $W^{m,p(x)}(\Omega)$*, J. Math. Anal. Appl. **263** (2001), no. 
2, 424--446. C. Farkas, P. Winkert, *An existence result for singular Finsler double phase problems*, J. Differential Equations **286** (2021), 455--473. M. Fuchs, G. Mingione, *Full $C^{1,\alpha}$-regularity for free and constrained local minimizers of elliptic variational integrals with nearly linear growth*, Manuscripta Math. **102** (2000), no. 2, 227--250. M. Fuchs, G. Seregin, "Variational Methods for Problems from Plasticity Theory and for Generalized Newtonian Fluids", Springer-Verlag, Berlin, 2000. Y. Gao, Y. Jiang, L. Liu, N. Wei, *Multiple positive solutions for a logarithmic Kirchhoff type problem in $\mathbb{R}^3$*, Appl. Math. Lett. **139** (2023), Paper No. 108539, 6 pp. L. Gasiński, N.S. Papageorgiou, *Constant sign and nodal solutions for superlinear double phase problems*, Adv. Calc. Var. **14** (2021), no. 4, 613--626. L. Gasiński, P. Winkert, *Existence and uniqueness results for double phase problems with convection term*, J. Differential Equations **268** (2020), no. 8, 4183--4193. G.H. Hardy, J.E. Littlewood, G. Pólya, "Inequalities", Cambridge University Press, 1934. P. Harjulehto, P. Hästö, "Orlicz Spaces and Generalized Orlicz Spaces", Springer, Cham, 2019. P. Harjulehto, P. Hästö, O. Toivanen, *Hölder regularity of quasiminimizers under generalized growth conditions*, Calc. Var. Partial Differential Equations **56** (2017), no. 2, Paper No. 22, 26pp. P. Hästö, J. Ok, *Maximal regularity for local minimizers of non-autonomous functionals*, J. Eur. Math. Soc. **24** (2022), no. 4, 1285--1334. J. Heinonen, T. Kilpeläinen, O. Martio, "Nonlinear Potential Theory of Degenerate Elliptic Equations", Dover Publications, Inc., Mineola, NY, 2006. K. Ho, Y.-H. Kim, P. Winkert, C. Zhang, *The boundedness and Hölder continuity of weak solutions to elliptic equations involving variable exponents and critical growth*, J. Differential Equations **313** (2022), 503--532. K. Ho, P. Winkert, *New embedding results for double phase problems with variable exponents and a priori bounds for corresponding generalized double phase problems*, Calc. Var. Partial Differential Equations **62** (2023), no. 8, Paper No. 227, 38 pp. P. Lindqvist, "Notes on the Stationary $p$-Laplace Equation", Springer, Cham, 2019. W. Liu, G. Dai, *Existence and multiplicity results for double phase problem*, J. Differential Equations **265** (2018), no. 9, 4311--4334. P. Marcellini, *Local Lipschitz continuity for $p,q$-PDEs with explicit $u$-dependence*, Nonlinear Anal. **226** (2023), Paper No. 113066, 26 pp. P. Marcellini, *Regularity and existence of solutions of elliptic equations with $p,q$-growth conditions*, J. Differential Equations **90** (1991), no. 1, 1--30. P. Marcellini, *Regularity of minimizers of integrals of the calculus of variations with nonstandard growth conditions*, Arch. Rational Mech. Anal. **105** (1989), no. 3, 267--284. P. Marcellini, G. Papi, *Nonlinear elliptic systems with general growth*, J. Differential Equations **221** (2006), no. 2, 412--443. M. Montenegro, O.S. de Queiroz, *Existence and regularity to an elliptic equation with logarithmic nonlinearity*, J. Differential Equations **246** (2009), no. 2, 482--511. Z. Nehari, *On a class of nonlinear second-order differential equations*, Trans. Am. Math. Soc. **95** (1960) 101-123. Z. Nehari, *Characteristic values associated with a class of non-linear second-order differential equations*, Acta Math. **105** (1961) 141-175. A. Nekvinda, *Hardy-Littlewood maximal operator on $L^{p(x)} (\mathbb{R})$*, Math. Inequal. Appl. 
**7** (2004), no. 2, 255--265. J. Ok, *Partial regularity for general systems of double phase type with continuous coefficients*, Nonlinear Anal. **177** (2018), 673--698. J. Ok, *Regularity for double phase problems under additional integrability assumptions*, Nonlinear Anal. **194** (2020), 111408. N.S. Papageorgiou, V.D. Rădulescu, D.D. Repovš, *Double-phase problems and a discontinuity property of the spectrum*, Proc. Amer. Math. Soc. **147** (2019), no. 7, 2899--2910. N.S. Papageorgiou, V.D. Rădulescu, D.D. Repovš, *Ground state and nodal solutions for a class of double phase problems*, Z. Angew. Math. Phys. **71** (2020), no. 1, Paper No. 15, 15 pp. N.S. Papageorgiou, V.D. Rădulescu, D.D. Repovš, "Nonlinear Analysis -- Theory and Methods", Springer, Cham, 2019. N.S. Papageorgiou, P. Winkert, "Applied Nonlinear Functional Analysis", De Gruyter, Berlin, 2018. K. Perera, M. Squassina, *Existence results for double-phase problems via Morse theory*, Commun. Contemp. Math. **20** (2018), no. 2, 1750023, 14 pp. M.A. Ragusa, A. Tachikawa, *Regularity for minimizers for functionals of double phase with variable exponents*, Adv. Nonlinear Anal. **9** (2020), no. 1, 710--728. G.A. Seregin, J. Frehse, *Regularity of solutions to variational problems of the deformation theory of plasticity with logarithmic hardening*, in: "Proceedings of the St. Petersburg Mathematical Society, Vol. V", Amer. Math. Soc., Providence, RI **193**, 1999, 127--152. X. Shi, V.D. Rădulescu, D.D. Repovš, Q. Zhang, *Multiple solutions of double phase variational problems with variable exponent*, Adv. Calc. Var. **13** (2020), no. 4, 385--401. W. Shuai, *Multiple solutions for logarithmic Schrödinger equations*, Nonlinearity **32** (2019), no. 6, 2201--2225. W. Shuai, *Two sequences of solutions for the semilinear elliptic equations with logarithmic nonlinearities*, J. Differential Equations **343** (2023), 263--284. J. Simon, *Régularité de la solution d'une équation non linéaire dans $\mathbb{R}^{N}$*, Journées d'Analyse Non Linéaire (Proc. Conf. Besançon, 1977), Springer, Berlin **665** (1978), 205--227. M. Squassina, A. Szulkin, *Multiple solutions to logarithmic Schrödinger equations with periodic potential*, Calc. Var. Partial Differential Equations **54** (2015), no. 1, 585--597. A. Szulkin, T. Weth, *The method of Nehari manifold*, in: "Handbook of Nonconvex Analysis and Applications", Int. Press, Somerville, MA, 2010, 597--632. M. Willem, "Minimax Theorems", Birkhäuser Boston, Inc., Boston, MA, 1996. E. Zeidler, "Nonlinear Functional Analysis and its Applications. II/B", Springer-Verlag, New York, 1990. S. Zeng, Y. Bai, L. Gasiński, P. Winkert, *Existence results for double phase implicit obstacle problems involving multivalued operators*, Calc. Var. Partial Differential Equations **59** (2020), no. 5, 176. S. Zeng, V.D. Rădulescu, P. Winkert, *Double phase implicit obstacle problems with convection and multivalued mixed boundary value conditions*, SIAM J. Math. Anal. **54** (2022), 1898--1926. Q. Zhang, V.D. Rădulescu, *Double phase anisotropic variational problems and combined effects of reaction and absorption terms*, J. Math. Pures Appl. (9) **118** (2018), 159--203. V.V. Zhikov, *Averaging of functionals of the calculus of variations and elasticity theory*, Izv. Akad. Nauk SSSR Ser. Mat. **50** (1986), no. 4, 675--710. V.V. Zhikov, *On Lavrentiev's phenomenon*, Russian J. Math. Phys. **3** (1995), no. 2, 249--269. V.V. 
Zhikov, *On variational problems and nonlinear elliptic equations with nonstandard growth conditions*, J. Math. Sci. **173** (2011), no. 5, 463--570.
arxiv_math
{ "id": "2309.09174", "title": "On logarithmic double phase problems", "authors": "Rakesh Arora and \\'Angel Crespo-Blanco and Patrick Winkert", "categories": "math.AP", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | Let $M$ be a closed hyperbolic 3-manifold. Let $\nu_{\mathop{\mathrm{Gr}}M}$ denote the probability volume (Haar) measure of the 2-plane Grassmann bundle $\mathop{\mathrm{Gr}}M$ of $M$ and let $\nu_T$ denote the area measure on $\mathop{\mathrm{Gr}}M$ of an immersed closed totally geodesic surface $T\subset M$. We say a sequence of $\pi_1$-injective maps $f_i:S_i\to M$ of surfaces $S_i$ is *asymptotically Fuchsian* if $f_i$ is $K_i$-quasifuchsian with $K_i\to 1$ as $i\to \infty$. We show that the set of weak-\* limits of the probability area measures induced on $\mathop{\mathrm{Gr}}M$ by asymptotically Fuchsian minimal or pleated maps $f_i:S_i\to M$ of closed connected surfaces $S_i$ consists of all convex combinations of $\nu_{\mathop{\mathrm{Gr}}M}$ and the $\nu_T$. address: Max Planck Institute for Mathematics in the Sciences, Inselstr. 22, 04103 Leipzig author: - Fernando Al Assal title: limits of asymptotically Fuchsian surfaces in a closed hyperbolic 3-manifold --- # Introduction Let $M = \Gamma\backslash \mathbf{H}^3$ be a closed hyperbolic 3-manifold, where $\Gamma\leq \mathop{\mathrm{PSL}}_2\mathbf{C}$ is a cocompact lattice. We say a sequence of $\pi_1$-injective (essential) maps $f:S_i\to M$ of surfaces $S_i$ is *asymptotically Fuchsian* if $f_i$ is $K_i$-quasifuchsian with $K_i\to 1$ as $i\to\infty$. For an almost-everywhere differentiable map $f:S\to M$ of a surface into $M$, we let $\nu(f)$ denote the probability area measure induced by $f$ on the oriented 2-plane Grassmann bundle $\mathop{\mathrm{Gr}}M$ of $M$. (Precisely, if we let $\overline{f}:S\to \mathop{\mathrm{Gr}}M$ be given by $\overline{f}(p) = (f(p),T_{f(p)} f(S))$, then $\nu(f)$ is the pushforward via $\overline{f}$ of the pullback via $f$ of the volume measure of $M$, normalized to have mass 1.) We let $\mathscr{G}$ denote the set of immersed closed totally geodesic surfaces in $M$. For $T\in \mathscr{G}$, we let $\nu_T$ denote the area measure of $T$ on $\mathop{\mathrm{Gr}}M$. We let $\nu_{\mathop{\mathrm{Gr}}M}$ denote the probability volume (Haar) measure of $\mathop{\mathrm{Gr}}M$. The main theorem of the article is **Theorem 1**. *The set of weak-\* limits of $\nu(f_i)$, where $f_i:S_i\to M$ are asymptotically Fuchsian minimal or pleated maps of closed connected surfaces, consists of all measures of the form $$\nu= \alpha_M\nu_{\mathop{\mathrm{Gr}}M} + \sum_{T\in\mathscr{G}} \alpha_T \nu_T$$ where $\alpha_M + \sum_{T\in\mathscr{G}} \alpha_T = 1$.* An important part of the proof of Theorem 1.1 is showing that the weak-\* limits of convergent subsequences of $\nu(f_i)$ do not depend on whether $f_i$ is minimal or pleated, or in particular on the choice of pleated map. This is despite the fact that, in the pleated case, the universal covers of $f_i(S_i)$ do not converge to a geodesic plane in the $C^1$ sense. **Theorem 2**. *Suppose $f_i : S_i \to M$ are essential asymptotically Fuchsian maps of a closed connected surface. Let $f_i^p$ and $f_i^m$ be, respectively, pleated and minimal maps homotopic to $f_i$. 
Then, the probability area measures $\nu(f_i^p)$ and $\nu(f_i^m)$ have the same weak-\* limit along any convergent subsequence.* ![The universal covers of asymptotically Fuchsian pleated surfaces are not necessarily embedded in $\mathbf{H}^3$ and may develop wrinkles as above, so they are never $C^1$-close to a totally geodesic plane](wrinkle3.jpg){#introwrinkle} Theorem [Theorem 1](#main){reference-type="ref" reference="main"} is in contrast with the case in which the maps $f_i:S_i\to M$ are all Fuchsian and the $S_i$ are all distinct. Then, the surfaces $f_i(S_i)$ equidistribute in $\mathop{\mathrm{Gr}}M$, namely **Theorem 1** (Shah-Mozes). *$\nu(f_i) \stackrel{\star}\rightharpoonup\nu_{\mathop{\mathrm{Gr}}M}$ as $i\to \infty$.* This follows from a more general theorem of Shah and Mozes ([@SM]), an article about unipotent dynamics that builds on work of Dani, Margulis and Ratner. A special case of the main theorem in [@SM] is that a sequence of infinitely many distinct orbit closures of the unipotent flow in $\mathop{\mathrm{Gr}}M$ equidistributes. Due to Ratner ([@R]), these orbit closures are either totally geodesic surfaces or all of $\mathop{\mathrm{Gr}}M$. More recently, Margulis-Mohammadi ([@MM]) and Bader-Fisher-Miller-Stover ([@BFMS]) showed that if $M$ contains infinitely many distinct totally geodesic surfaces, then $M$ is arithmetic. (On the other hand, it was already known, due to Reid ([@Re]) and Maclachlan-Reid ([@MR]), that if $M$ is arithmetic, then it contains either zero or infinitely many distinct totally geodesic surfaces.) This rigid behavior of totally geodesic surfaces, however, is not shared by the nearly Fuchsian surfaces of $M$. Due to the surface subgroup theorem of Kahn and Markovic ([@KM]), any closed hyperbolic 3-manifold $M$ has infinitely many essential $K$-quasifuchsian surfaces, for any $K>1$. The Kahn-Markovic construction of surface subgroups has a probabilistic flavor. The building blocks from which the nearly Fuchsian surfaces are assembled are the *$(\epsilon,R)$*-good pants, which are the maps $f:P\to M$ from a pair of pants $P$ taking the cuffs of $P$ to $(\epsilon,R)$-good curves in $M$ -- the closed geodesics with complex translation length $2\epsilon$-close to $2R$. We say two $(\epsilon,R)$-good pants $f$ and $g$ are equivalent if $f$ is homotopic to $g\circ \phi$, for some orientation-preserving homeomorphism $\phi:P\to P$. For more detailed and precise definitions, see Section 2. A crucial reason why this construction works is that the good pants incident to a given good curve $\gamma$ come from a well-distributed set of directions. Precisely, the *feet* of the good pants are well-distributed in the unit normal bundle $\mathop{\mathrm{N}}^1(\gamma)$ of $\gamma$. The feet of a good pants $\pi = f:P\to M$ are the derivatives of the unit speed geodesic segments connecting a cuff of $f(P)$ to another, meeting both cuffs orthogonally. Each cuff has two feet, and it turns out that they define the same point, the *foot*, denoted $\mathop{\mathrm{\mathbf{f}\mathbf{t}}}(\pi)$, in the quotient $\mathop{\mathrm{N}}^1(\sqrt{\gamma})$ of $\mathop{\mathrm{N}}^1(\gamma)$ by $n\mapsto n + \mathbf{h}\mathbf{l}(\gamma)/2$, where $\mathbf{h}\mathbf{l}(\gamma)$ is half of the translation length of $\gamma$. The precise statement of the equidistribution of the feet follows below, from the article of Kahn and Wright [@KW] with proof in the supplement [@KW2]. 
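Before stating it, here is a small numerical illustration (not part of the construction, and purely for concreteness) of the goodness condition on curves used above: a chosen lift $A \in \mathop{\mathrm{SL}}_2\mathbf{C}$ of a loxodromic element determines the complex translation length through $\operatorname{tr} A = 2\cosh(\mathbf{l}/2)$, and the sketch below reads the $(\epsilon,R)$-good condition as $|\mathbf{l}(\gamma) - 2R| < 2\epsilon$. The trace is only defined up to sign in $\mathop{\mathrm{PSL}}_2\mathbf{C}$ and the rotation angle only modulo $2\pi$; the snippet ignores these normalizations.

```python
import cmath

def complex_length(trace):
    # Complex translation length l(gamma) = (length) + i*(twist) of a
    # loxodromic element, recovered from a lift A in SL(2, C) via
    # tr A = 2 cosh(l / 2).  cmath.acosh returns the principal branch,
    # whose real part is nonnegative.
    return 2 * cmath.acosh(trace / 2)

def is_good_curve(trace, eps, R):
    # Sketch of the (eps, R)-good condition: complex length 2*eps-close to 2R.
    return abs(complex_length(trace) - 2 * R) < 2 * eps

# A purely hyperbolic example: translation length exactly 2R and no twist.
R = 6.0
trace = 2 * cmath.cosh(R)
print(complex_length(trace))         # approximately (12+0j)
print(is_good_curve(trace, 0.1, R))  # True
```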
In [@KW] Kahn and Wright extend the surface subgroup theorem to the case where $M$ has finite volume, while simplifying some elements of the original proof of Kahn-Markovic. The proof of the well-distribution of feet in [@KW2] follows a different approach than the original Kahn-Markovic argument in [@KM]. In the latter, the pants are constructed by flowing tripods via the frame flow. In the former, pants with a given cuff are constructed from geodesic segments meeting the cuff orthogonally (the *orthogeodesic connections*). Denote the space of $(\epsilon,R)$-good curves in $M$ as $\mathbf{\Gamma}_{\epsilon, R}$ and the space of $(\epsilon,R)$-good pants having $\gamma$ as a cuff as $\mathbf{\Pi}_{\epsilon, R}(\gamma)$. **Theorem 3** (Kahn-Wright: Equidistribution of feet). *There is $q=q(M)>0$ so that if $\epsilon>0$ is small enough and $R > R_0(\epsilon)$, the following holds. Let $\gamma\in \mathbf{\Gamma}_{\epsilon, R}$. If $B\subset\mathop{\mathrm{N}}^1 (\sqrt{\gamma})$, then $$(1-\delta) \lambda (N_{-\delta} B) \leq \frac{\#\{ \pi \in \mathbf{\Pi}_{\epsilon, R}(\gamma) : \mathop{\mathrm{\mathbf{f}\mathbf{t}}}_{\gamma} \pi \in B\}} {C_{\epsilon,R,\gamma}} \leq (1+\delta) \lambda(N_{\delta} B),$$ where $\lambda=\lambda_{\gamma}$ is the probability Lebesgue measure on $\mathop{\mathrm{N}}^1(\sqrt{\gamma})$, $\delta = e^{-qR}$, $N_{\delta}(B)$ is the $\delta$-neighborhood of $B$, $N_{-\delta}(B)$ is the complement of $N_{\delta} (\mathop{\mathrm{N}}^1(\sqrt{\gamma}) - B)$ and $C_{\epsilon,R,\gamma}$ is a constant depending only on $\epsilon$, $R$ and $\mathbf{l}(\gamma)$.* This theorem will be used in many ways in the article. It implies that a nearly geodesic surface $S(\epsilon,R)$ may be built using one representative of each equivalence class of $(\epsilon,R)$-good pants. The equidistribution of feet (in a slight generalization explained in Section 5) will also be used to show that these surfaces equidistribute in $\mathop{\mathrm{Gr}}M$ as $\epsilon\to 0$. It will also be important in the construction of *non-equidistributing* asymptotically Fuchsian surfaces. The surface $S(\epsilon,R)$ built out of a representative of each equivalence class may not be connected, however. And we do need, for our main theorem, a *connected* surface that goes through every good pants, meeting every cuff in a well-distributed set of directions. This can be achieved using the work of Liu and Markovic from [@LM]. Using their ideas, we can reglue the pants used to build $N = N(\epsilon,R)$ copies of $S(\epsilon,R)$ and obtain a connected surface that goes through every cuff in many directions. ## Further directions {#further-directions .unnumbered} One can ask the same questions for finite-volume hyperbolic 3-manifolds $M$. Crucially, Kahn and Wright in [@KW] extended the surface subgroup theorem of Kahn and Markovic to this context, simplifying and sharpening some proofs on the way. The tool we still do not have to execute our construction there is the existence of equidistributing *connected* $\pi_1$-injective, closed asymptotically Fuchsian surfaces in $M$. This would use the work of Sun [@Sun] that generalizes ideas of Liu and Markovic from [@LM] to finite-volume 3-manifolds. Another difference in this setting is that $\Gamma\backslash G$ has more complicated closed orbits of the unipotent flow, namely the closed horospheres associated to the cusps of $M$. It is not clear whether connected asymptotically Fuchsian surfaces may accumulate there or not. 
Another direction is to extend these results to other homogeneous spaces $\Gamma\backslash G$, where $G$ is a semisimple Lie group and $\Gamma < G$ a cocompact lattice. It has been shown that $\Gamma$ has many surface subgroups, in the style of the Kahn-Markovic theorem, when $\Gamma$ is a uniform lattice in a rank one simple Lie group of noncompact type distinct from $\mathop{\mathrm{SO}}_{2m,1}$ by Hamenstädt in [@H] and when $\Gamma$ is a uniform lattice in a center-free complex semisimple Lie group by Kahn, Labourie and Mozes in [@KLM]. In the latter article, the authors show that their surface groups are *$K$-Sullivan* for any $K>1$, which is a generalization of $K$-quasifuchsian for the higher rank setting. Again, it would be necessary to extend ideas of Liu and Markovic from [@LM] to those settings. Moreover, when $G$ is a higher rank Lie group, there are more kinds of closed unipotent orbits in the homogeneous space $\Gamma\backslash G$ in which a sequence of asymptotically Fuchsian (or $K$-Sullivan with $K\to 1$) surfaces could perhaps accumulate. ## Outline {#outline .unnumbered} The large-scale structure of the article is the following. In Section 2, we show that the weak-\* limits of the probability area measures of asymptotically Fuchsian surfaces in $M$ are convex combinations of the volume measure of $\mathop{\mathrm{Gr}}M$ and the area measures supported on closed geodesic surfaces. This is one of the directions of Theorem [Theorem 1](#main){reference-type="ref" reference="main"}. The other direction of this equality will be proved in Sections 3, 4, 5, 6 and 7. Details follow below. In Section 2, we prove Theorem [Theorem 2](#samelimit_intro){reference-type="ref" reference="samelimit_intro"}. Namely, we describe how nearly Fuchsian surfaces may be realized geometrically inside $M$ as pleated or as minimal surfaces. We argue that, as these surfaces $f_i: S_i\to M$ become closer to Fuchsian, the weak-\* limits of their area measures in $\mathop{\mathrm{Gr}}M$ do not depend on the choice of geometric structure. We do this by mapping the universal covers of our surfaces to a component of the convex core of $\pi_1(f_i)(\pi_1(S_i))$ via the normal flow, and arguing that this map has small derivatives in most of its domain. This is despite the fact that the universal covers of the pleated surfaces do not converge to a geodesic disc in the $C^1$ sense. For the case of minimal surfaces, we use the fact that their principal curvatures are uniformly small, as shown by Seppi in [@Se]. Using a theorem of Lowe for minimal surfaces from [@Lo], we conclude that the limiting measures are $\mathop{\mathrm{PSL}}_2 \mathbf{R}$-invariant. Thus, due to the Ratner measure classification, they are a convex combination of the volume measure of $\mathop{\mathrm{Gr}}M$ and area measures of the totally geodesic surfaces of $M$. This shows one direction of Theorem [Theorem 1](#main){reference-type="ref" reference="main"}. In Section 3, we explain how to construct nearly geodesic closed essential surfaces in $M$, following Kahn, Markovic and Wright ([@KM] and [@KW]). We define their building blocks, the $(\epsilon,R)$-good pants, and the correct ($(\epsilon,R)$-*good*) way to glue them together so the result is nearly Fuchsian. 
Finally, we explain how to use the equidistribution of feet (Theorem [Theorem 21](#kw){reference-type="ref" reference="kw"}), together with the Hall marriage theorem from combinatorics, to show that a copy of each good pants may be glued via good gluings to form a closed surface $S(\epsilon,R)$. In Section 4, we follow ideas of Liu and Markovic in [@LM] to explain how to reassemble a *connected* nearly Fuchsian closed essential surface $\hat{S}(\epsilon,R)$ from $N = N(\epsilon,R)$ copies of the surface built in Section 2. In particular, this connected surface defines the same area measure $\nu(\epsilon,R)$ in $\mathop{\mathrm{Gr}}(M)$ as $S(\epsilon,R)$. In Section 5, we endow the connected surface built out of the same number of copies of each good pants $\hat{S}(\epsilon,R)$ with the pleated structure in which every good pants is glued from two ideal triangles. We show that the barycenters of these triangles equidistribute in the frame bundle $\mathop{\mathrm{Fr}}M$ of $M$ as the surfaces become more Fuchsian (namely, as $\epsilon\to 0$ and $R(\epsilon)\to\infty$). To do so, we use a generalization of the equidistribution of feet (Theorem [Theorem 21](#kw){reference-type="ref" reference="kw"}), in which a continuous function $g\in C(\mathop{\mathrm{N}}^1 (\sqrt{\gamma}))$ plays the role of the set $B$ in the statement above. We also use the fact, from a version due to Lalley in [@L], that asymptotically almost surely, the cuffs of the pants equidistribute in the unit tangent bundle $\mathop{\mathrm{T}}^1 M$. In Section 6, from the equidistribution of the barycenters of the triangles, we conclude that the surfaces $\hat{S}(\epsilon,R)$ built from the triangles equidistribute as $\epsilon\to 0$ and $R(\epsilon)\to \infty$. This is because each triangle can be obtained from the right action of a subset $\Delta\subset\mathop{\mathrm{PSL}}_2\mathbf{R}$ on the barycenter. The approach we take in Sections 5 and 6 is similar to the one used by Labourie in [@La] to show that certain perhaps disconnected asymptotically Fuchsian surfaces equidistribute in $M$. A difference is that the surfaces in [@La] are built from a different multiset of good pants that comes from the original Kahn-Markovic construction. It is not clear, for example, how many copies of each pants are used to build those asymptotically Fuchsian surfaces. In Section 7, we build a family of nearly Fuchsian surfaces by gluing the equidistributing surfaces $\hat{S}(\epsilon,R)$ of Sections 5 and 6 to high degree covers of totally geodesic surfaces in $M$. To do so, we need the fact that a high degree cover of the totally geodesic surfaces of $M$ may be built from good gluings of good pants. This was shown by Kahn and Markovic in [@KME] in order to prove the Ehrenpreis conjecture. We show that as these hybrid surfaces become asymptotically Fuchsian, they may accumulate on any of the totally geodesic surfaces. ## Acknowledgements {#acknowledgements .unnumbered} I would like to thank Danny Calegari, James Farre, Ben Lowe and Franco Vargas-Pallete for useful discussions and correspondence. I thank Rich Schwartz for suggesting the adverb "asymptotically" instead of "increasingly" for the title. I would also like to thank Natalie Rose Schwartz for help drawing Figure [7](#spun){reference-type="ref" reference="spun"}. I especially thank Jeremy Kahn for suggesting the proof of Theorem [Lemma 30](#equidftfm){reference-type="ref" reference="equidftfm"} and my advisor Yair Minsky for all the help. 
# Geometric realizations of nearly Fuchsian surfaces Suppose $f:S\to M$ is an essential nearly Fuchsian immersion of a closed connected orientable surface. Then, $f$ is homotopic to maps with interesting geometric properties, namely a unique minimal map and many pleated maps. In this section, we will describe these geometric realizations and show that their area measures in $\mathop{\mathrm{Gr}}M$ have the same limit as they become asymptotically Fuchsian. Precisely, suppose $f_i: S_i \to M$ are asymptotically Fuchsian maps of closed connected surfaces $S_i$. Let $f_i^p$ and $f_i^m$ be, respectively, pleated and minimal maps homotopic to $f_i$. Let $\nu(f^p_i) = p_i$ and $\nu(f^m_i) = m_i$ be the probability area measures induced by these maps on the 2-plane Grassmann bundle $\mathop{\mathrm{Gr}}M$. The main theorem of this section is the following, which was stated as Theorem [Theorem 2](#samelimit_intro){reference-type="ref" reference="samelimit_intro"} in the introduction. **Theorem 4**. *A subsequence $m_{i_j}$ satisfies $m_{i_j}\stackrel{\star}\rightharpoonup\nu$ as $j\to\infty$ if and only if $p_{i_j}\stackrel{\star}\rightharpoonup\nu$.* Let $\hat{m}_i = \hat{\nu}(f_i^m)$ be the probability measure induced by $f_i^m$ on the frame bundle $\mathop{\mathrm{Fr}}M$. By the weak-\* compactness of the probability measures on $\mathop{\mathrm{Fr}}M$, the $\hat{m}_i$ converge to a measure $\hat{\nu}$ along a subsequence. As shown by Lowe in Proposition 5.2 of [@Lo] and Labourie in Section 5 of [@La], the measure $\hat{\nu}$ is invariant under the right action of $\mathop{\mathrm{PSL}}_2 \mathbf{R}$. Thus, from the Ratner measure classification theorem [@R], it follows that the weak-\* subsequential limits of $m_i$ are of the form $$\tag{$\star$} \nu = \alpha_M \nu_{\mathop{\mathrm{Gr}}M} + \sum_{T\in\mathscr{G}} \alpha_T \nu_T.$$ As before, $\mathscr{G}$ is a set containing a representative of each commensurability class of closed immersed totally geodesic surfaces in $M$, $\nu_{\mathop{\mathrm{Gr}}M}$ is the probability Haar measure on $\mathop{\mathrm{Gr}}M$, and $\nu_T$ is the probability area measure of an immersed closed totally geodesic surface $T\subset M$. The coefficients $\alpha_M$ and $\alpha_T$ sum to 1. This, combined with Theorem [Theorem 4](#samelimit){reference-type="ref" reference="samelimit"}, shows one of the directions of the main theorem of the article, Theorem [Theorem 1](#main){reference-type="ref" reference="main"}. In Sections 5, 6 and 7, we will show that given any $\nu$ of the form ($\star$), we may find asymptotically Fuchsian connected closed surfaces in $M$ with limiting measure $\nu$. Let $H_i^+$ be the top component of the boundary of the convex core of $Q_i = \pi_1(f_i)(\pi_1 S_i)$. Let $f_i^h$ be the pleated map homotopic to $f_i$ whose lift to the universal cover maps $\widetilde{S}_i$ into $H_i^+$, and set $h_i = \nu(f_i^h)$. To prove Theorem [Theorem 4](#samelimit){reference-type="ref" reference="samelimit"}, we show that each of $p_i$ and $m_i$ has the same weak-\* subsequential limits as $h_i$. **Theorem 5**. *A subsequence $p_{i_j}$ satisfies $p_{i_j}\stackrel{\star}\rightharpoonup\nu$ as $j\to\infty$ if and only if $h_{i_j}\stackrel{\star}\rightharpoonup\nu$.* **Theorem 6**. 
*A subsequence $m_{i_j}$ satisfies $m_{i_j}\stackrel{\star}\rightharpoonup\nu$ as $j\to\infty$ if and only if $h_{i_j}\stackrel{\star}\rightharpoonup\nu$.* Theorems [Theorem 5](#sl1){reference-type="ref" reference="sl1"} and [Theorem 6](#sl2){reference-type="ref" reference="sl2"} are in turn proven by flowing the universal covers $\widetilde{f}^m_i(\widetilde{S}_i)$ and $\widetilde{f}^p_i(\widetilde{S}_i)$ normally into $H_i^+$. We argue that this process has uniformly small area distortion. In the pleated case of Theorem [Theorem 5](#sl1){reference-type="ref" reference="sl1"}, we need to argue a definite distance $\eta>0$ away from the bending lamination of $\widetilde{f}^p_i(\widetilde{S}_i)$ to avoid complicated wrinkles as in Figure [1](#introwrinkle){reference-type="ref" reference="introwrinkle"}. Then, we take $\eta\to 0$. In the minimal case of Theorem [Theorem 6](#sl2){reference-type="ref" reference="sl2"}, we use the result of Seppi [@Se] that says that the principal curvatures of $\widetilde{f}^m_i(\widetilde{S}_i)$ go uniformly to zero as the quasiconformal constant $K_i$ tends to 1. ![Visual outline of the proof of Theorem [Theorem 4](#samelimit){reference-type="ref" reference="samelimit"}. We will flow the universal covers $\widetilde{f}^m_i(\widetilde{S}_i)$ and $\widetilde{f}^p_i(\widetilde{S}_i)$ of the asymptotically Fuchsian minimal and pleated surfaces normally until they hit a component $H_i^+$ of the boundary of the convex core. We will argue this process has a uniformly small area distortion (away from the pleating lamination, in the pleated case).](compare2.jpg) ## Quasiconformal maps and quasifuchsian groups Let $\Omega\subset\hat{\mathbf{C}}$ be a domain. A continuous map $h:\Omega\to \hat{\mathbf{C}}$ is quasiconformal if its weak derivatives are locally in $L^2(\Omega)$ and it satisfies the Beltrami equation $$\partial_{\bar{z}} h(z) = \mu(z) \partial_{z} h(z)$$ for almost every $z\in \Omega$ for some $\mu \in L^{\infty} (\Omega)$ with $\|\mu\|_{L^{\infty}(\Omega)} < 1$. The derivatives $\partial_z = (\partial_x - i\partial_y)/2$ and $\partial_{\bar{z}} = (\partial_x + i\partial_y)/2$ are understood in the distributional sense. We say that $h:\Omega\to\hat{\mathbf{C}}$ is $K$-quasiconformal if $\mu$, which is called the Beltrami differential of $h$, satisfies $$K(h):= \frac{1+\|\mu\|_{\infty}}{1-\|\mu\|_{\infty}} \leq K.$$ In general, $\mu$ is a Beltrami differential in a domain $\Omega\subset\hat{\mathbf{C}}$ if it is an element of the open unit ball around the origin $B_1(0)$ of $L^{\infty} (\Omega)$. The measurable Riemann mapping theorem says that given a Beltrami differential in $\hat{\mathbf{C}}$, we may find a unique quasiconformal mapping $h:\hat{\mathbf{C}}\to\hat{\mathbf{C}}$ fixing $0$, $1$ and $\infty$ with $\partial_{\bar{z}} h = \mu \partial_{z}h$. Quasiconformal maps enjoy the following compactness property that will be useful to us. (It is Lemma 6 on page 21 of [@G].) **Lemma 1** (Compactness). *Let $h_i: \hat{\mathbf{C}} \to \hat{\mathbf{C}}$ be a sequence of $K$-quasiconformal maps fixing $0$, $1$ and $\infty$. Then, after passing to a subsequence, the $h_i$ converge uniformly to $h$ as $i\to\infty$, where $h$ is a $K$-quasiconformal map.* It turns out that $1$-quasiconformal maps are conformal, which is a regularity theorem for the solutions of the Beltrami equation. Thus, it follows that if the $h_i$ are $K_i$-quasiconformal fixing $0$, $1$ and $\infty$ with $K_i\to 1$ as $i\to \infty$, then they converge uniformly to the identity. 
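As a sanity check on these definitions (an illustrative numerical sketch only, not used anywhere in the argument), one can approximate the Wirtinger derivatives of a map by finite differences and compute its Beltrami coefficient $\mu = \partial_{\bar z}h/\partial_z h$ and the pointwise dilatation $(1+|\mu|)/(1-|\mu|)$; for the affine stretch $h(x+iy) = x + iKy$ below, $\mu$ is constant, equal to $(1-K)/(1+K)$, and the dilatation is exactly $K$.

```python
def wirtinger(h, z, eps=1e-6):
    # Finite-difference approximations of the Wirtinger derivatives
    # d_z h = (d_x h - i d_y h)/2 and d_zbar h = (d_x h + i d_y h)/2.
    dx = (h(z + eps) - h(z - eps)) / (2 * eps)
    dy = (h(z + 1j * eps) - h(z - 1j * eps)) / (2 * eps)
    return (dx - 1j * dy) / 2, (dx + 1j * dy) / 2

def dilatation(h, z):
    # Pointwise dilatation at z, from the Beltrami coefficient mu = d_zbar h / d_z h.
    hz, hzbar = wirtinger(h, z)
    k = abs(hzbar / hz)
    return (1 + k) / (1 - k)

K = 3.0
stretch = lambda z: z.real + 1j * K * z.imag  # the affine K-quasiconformal stretch
print(dilatation(stretch, 0.3 + 0.7j))        # approximately 3.0
```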
Let $\mathbf{U}\subset\hat{\mathbf{C}}$ denote the upper half plane, and let $\mathbf{L}= \hat{\mathbf{C}} \setminus \bar{\mathbf{U}}$. We define the universal Teichmüller space of $\mathbf{U}$ as $$\mathscr{T}(\mathbf{U}) = \{ h:\hat{\mathbf{C}}\to\hat{\mathbf{C}}\text{ quasiconformal fixing 0, 1 and }\infty\,:\, h|_{\mathbf{L}}\text{ is conformal}\}.$$ To obtain elements of $\mathscr{T}(\mathbf{U})$, let $\mu$ be a Beltrami differential in $\mathbf{U}$. We may extend it to a Beltrami differential also denoted $\mu$ in $\hat{\mathbf{C}}$ by setting $\mu|_{\mathbf{L}} = 0$. By the measurable Riemann mapping theorem, there is a unique quasiconformal mapping $h$ of $\hat{\mathbf{C}}$ that fixes 0, 1 and $\infty$ and satisfies $\partial_{\bar{z}} h = \mu \partial_{z} h$. Moreover, $\partial_{\bar{z}} h= 0$ in $\mathbf{L}$, so $h|_{\mathbf{L}}$ is conformal. A Jordan curve $\Lambda\subset\hat{\mathbf{C}}$ is a $K$-quasicircle if $$K = \inf \{ K(h) \,:\,h\in \mathscr{T}(\mathbf{U})\text{ and }\Lambda=h(\partial\mathbf{U}) \}.$$ Note that this infimum is achieved: if $h_i$ are elements of $\mathscr{T}(\mathbf{U})$ with $K(h_i) \to K$, then by the compactness lemma, after passing to a subsequence the $h_i$ converge uniformly to a $K$-quasiconformal mapping $h$ of $\hat{\mathbf{C}}$ fixing 0, 1 and $\infty$ with $\Lambda = h(\partial\mathbf{U})$. A group $Q\leq \mathop{\mathrm{PSL}}_2 \mathbf{C}$ is $K$-*quasifuchsian* if $F=hQ h^{-1}$ is a Fuchsian group for some $K$-quasiconformal map $h:\hat{\mathbf{C}}\to\hat{\mathbf{C}}$. Up to conjugating $Q$ by a $g\in \mathop{\mathrm{PSL}}_2 \mathbf{C}$, we can say that its limit set $\Lambda_Q$ contains $0$, $1$ and $\infty$. Thus, there is a $K$-quasiconformal mapping $h\in \mathscr{T}(\mathbf{U})$ so that $\Lambda_Q = h(\partial\mathbf{U})$. In particular, we see that $\Lambda_Q$ is a $K$-*quasicircle* -- the image of a circle under a $K$-quasiconformal map. These are nowhere differentiable Hölder curves. A continuous, $\pi_1$-injective map $f:S\to M$ of a hyperbolic surface $S$ into a hyperbolic 3-manifold $M$ is $K$-*quasifuchsian* if $f_*(\pi_1 S) \leq \Gamma \cong \pi_1 M \leq \mathop{\mathrm{PSL}}_2\mathbf{C}$ is a $K$-quasifuchsian group. Given a $K$-quasifuchsian subgroup $Q$ of the Kleinian group $\Gamma \cong \pi_1 M$, we may recover a $K$-quasifuchsian map $f:S\to M$ in the following way. As described above, $Q$ gives rise to a $K$-quasiconformal map $h\in \mathscr{T}(\mathbf{U})$, whose restriction to $\partial \mathbf{U}\cong \partial_{\infty} \mathbf{H}^2$ may be extended to a $Q$-equivariant map $\tilde{f} : \mathbf{H}^2 \to \mathbf{H}^3$. The map $\tilde{f}$ in turn descends to $f:S\to M$. (We will describe examples of such extensions as minimal or pleated maps in detail below.) A sequence of maps $f_i:S_i\to M$ of hyperbolic surfaces $S_i$ into a hyperbolic 3-manifold is *asymptotically Fuchsian* if the $f_i$ are $K_i$-quasifuchsian for $K_i\to 1$ as $i\to\infty$. Given such a sequence, we may find a sequence of $K_i$-quasiconformal maps $h_i\in \mathscr{T}(\mathbf{U})$ that conjugate $Q_i = (f_i)_*(\pi_1 S_i)$ into $\mathop{\mathrm{PSL}}_2\mathbf{R}$. From the compactness theorem of quasiconformal maps, it follows that the $h_i$ converge uniformly to the identity. In particular, the limit sets $\Lambda_{Q_i}$ are sandwiched between two circles at a Euclidean distance going to zero as $i\to\infty$. 
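The last claim can be made quantitative by a one-line estimate (a brief sketch; here distances are measured in the spherical metric on $\hat{\mathbf{C}}$, so that the point $\infty$ causes no trouble, and away from $\infty$ this also controls the Euclidean distance). If $\omega_i := \sup_{z\in\hat{\mathbf{C}}} d\big(h_i(z), z\big) \to 0$, then $$\Lambda_{Q_i} = h_i(\partial\mathbf{U}) \subset N_{\omega_i}(\partial\mathbf{U}),$$ and the $\omega_i$-neighborhood of the round circle $\partial\mathbf{U}$ is an annulus bounded by two circles converging to $\partial\mathbf{U}$ as $i\to\infty$.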
## The Schwarzian derivative and the Bers norm The *Schwarzian derivative* of a holomorphic function $f$ with nonvanishing derivative is given by $$S_f = \left( \frac{f''}{f'}\right)' - \frac{1}{2}\left( \frac{f''}{f'} \right)^2.$$ This vanishes precisely at the Möbius transformations and it can be shown that if $f_i$ converges uniformly to a Möbius transformation as $i\to\infty$, then $S_{f_i}\to 0$ as $i\to \infty$. The *Bers norm* of $f\in\mathscr{T}(\mathbf{U})$ is given by $$\|f\|_B := \sup_{z\in\mathbf{L}} |S_f(z)| \rho^2(z),$$ where $\rho$ is the Poincaré metric of curvature -1 on $\mathbf{L}$. As the quasiconformal constant of $f$ goes to 1, $f$ converges uniformly to the identity on $\hat{\mathbf{C}}$, and so $\|f\|_B \to 0$. ## Pleated surfaces and the convex core A $\pi_1$-injective isometric map $f:S\to M$ of a surface $S$ is *pleated* or *uncrumpled* if every $p\in S$ is inside a geodesic arc of $S$ that is mapped to a geodesic arc of $M$. It turns out (see Proposition 8.8.2 of [@Th]) that the set $\lambda \subset S$ of points that lie in a single geodesic segment that gets mapped to a geodesic is a lamination on $S$, and that $f$ is totally geodesic outside $\lambda$. The lamination $\lambda$ is called the *pleating* or *bending* lamination, and can be given a transverse measure that keeps track of the bending angles between the totally geodesic pieces of $f(S)$. A $K$-quasifuchsian map $f:S\to M$ is homotopic to many pleated surfaces -- given any geodesic lamination $\lambda \subset S$, it is possible to find a pleated map homotopic to $f$ whose pleating locus is $\lambda$. One such pleated map of note comes from the boundary of the convex core of the quasifuchsian group $Q = f_* (\pi_1 S)$. Let $\Lambda$ be the limit set of $Q$. The convex core of $Q$ is the smallest set $\mathop{\mathrm{core}}Q\subset\mathbf{H}^3$ containing the geodesics with endpoints in $\Lambda$. Thurston showed that its boundary $\partial \mathop{\mathrm{core}}Q$ has two components $H^-$ and $H^+$ that are the image of $\mathbf{H}^2$ under a $Q$-equivariant pleated map [@EM]. In particular, $f:S\to M$ is homotopic to a pleated map $f^h:S\to M$ so that $\widetilde{f^h}(\tilde{S}) = H^+$. The pleated discs $H^-$ and $H^+$ inherit an orientation from $f$, and in particular normal vector fields $n^-$ and $n^+$ away from their bending loci. We will follow the convention that $H^-$ is the component so that the trajectory from flowing a vector $n^-$ via the geodesic flow will meet $H^+$ at some positive time. Another pleated map homotopic to $f:S\to M$ of importance in this article is the one where the bending lamination consists of a pants decomposition of $S$ as well as three spiraling geodesics per pants that divide the pants into two ideal triangles. We will keep track of these triangles to show that the surface built out of one copy of each $(\epsilon,R)$-good pants equidistributes as $\epsilon\to 0$ in Section [5](#equid){reference-type="ref" reference="equid"}. ## Proving Theorem [Theorem 5](#sl1){reference-type="ref" reference="sl1"} {#proving-theorem-sl1} We are now ready to restate and prove Theorem [Theorem 5](#sl1){reference-type="ref" reference="sl1"}. Let $f_i:S_i\to M$ be asymptotically Fuchsian maps, with $Q_i = (f_i)_* (\pi_1 S_i)$. Let $H_i^-$ and $H_i^+$ be the components of $\partial\mathop{\mathrm{core}}Q_i$ (again, chosen so flowing normally from $H_i^-$ gets you to $H_i^+$). 
Let $f_i^p$ and $f_i^h$ be pleated maps homotopic to $f_i$, where $f_i^h$ has a lift to the universal cover $\widetilde{f_i^h} : \widetilde{S_i} \to \mathbf{H}^3$ so that $\widetilde{f_i^h} (\widetilde{S_i}) = H_i^+$. Let $p_i = \nu(f_i^p)$ and $h_i = \nu(f_i^h)$ be the area measures induced on $\mathop{\mathrm{Gr}}M$ by $f_i^p$ and $f_i^h$, respectively. **Theorem 2**. *A subsequence $p_{i_j}$ satisfies $p_{i_j}\stackrel{\star}\rightharpoonup\nu$ as $j\to\infty$ if and only if $h_{i_j}\stackrel{\star}\rightharpoonup\nu$.* Let $\Lambda_i \subset\hat{\mathbf{C}}$ be the limit set of $Q_i$ Let $\widetilde{f_i^p}:\widetilde{S_i}\to \mathbf{H}^3$ be the lift of $f_i^p$ to the universal cover so $\partial_{\infty} \widetilde{f_i^p} (\widetilde{S_i}) = \Lambda_i$. We define $P_i:= \widetilde{f_i^p} (\widetilde{S_i})$. We let $\tilde{p}_i$ and $\tilde{h}_i$ be, respectively, the area measures induced by $\widetilde{f_i^p}$ and $\widetilde{f_i^h}$ on $\mathop{\mathrm{Gr}}\mathbf{H}^3$. We denote the pleating laminations of $P_i$ and $H_i^+$ by $\lambda_i$ and $\beta_i$, respectively. Finally, we define $\Sigma_i := \Gamma\backslash P_i$ and $R_i := \Gamma\backslash H_i^+$. We let $n_t: \mathop{\mathrm{Gr}}\mathbf{H}^3\to \mathbf{H}^3$ be the map taking $(p,P)\in \mathop{\mathrm{Gr}}\mathbf{H}^3$ to the point $q\in \mathbf{H}^3$ obtained by flowing $p$ in the direction normal to $P$ (from the orientation of $P$) for time $t$ via the geodesic flow. We define a map $$F_i^{\eta} : P_i^{\eta} \longrightarrow H_i^+$$ by flowing $p\in P_i^{\eta}$ normally for the time $\tau_i(p)$ it takes to hit $H_i^+$. In other words, $F_i^{\eta}(p) = n_{\tau_i(p)}$. We also let $\det(dF_i^{\eta})$ be the Radon-Nikodym derivative $$\det (dF_i^{\eta}) := \frac{d (F_i^{\eta})^* \tilde{h}_i}{d \tilde{p}_i},$$ which is defined due to parts i and ii of the Propositon [Proposition 7](#map){reference-type="ref" reference="map"} below. ![A visualization of the map $F^{\eta}_i$, flowing normally from $P_i^{\eta}$ till $H_i^+$. Lemma [Lemma 8](#box){reference-type="ref" reference="box"} below shows that these lines indeed do not meet for $i$ large enough.](map2.jpg) **Proposition 7**. *For $i\geq I_0(\eta)$, these maps $F_i^{\eta}$ satisfy* i. *$F_i$ is differentiable outside of $(F^{\eta}_i)^{-1}(\beta_i) \cup \lambda_i,$* ii. *$\tilde{p}_i\left( (F^{\eta}_i)^{-1} (\beta_i) \right) = 0,$* iii. *$\|\det(dF^{\eta}_i) - 1\|_{L^{\infty} (P^{\eta}_i)} = o_i(1)$.* *Proof.* (We will drop the superscript $\eta$ when convenient.) **i.** Let $p\in P^{\eta}_i \setminus (F^{-1}(\beta_i)\cup\lambda_i)$. Then, $F_i$ maps a small disc around $p$ to a piece of a totally geodesic plane in $H^+_i$ via the normal flow. This is a differentiable map. **ii.** For this we need the following **Lemma 8**. *Supose $i\geq I_0(\eta)$. Then, there is $t(\eta)>0$ so that for any two ideal triangles $S$ and $T$ in $P_i$, we have $$n_{s_1} (S^{\eta}) \cap n_{s_2} (T^{\eta}) = \emptyset$$ for all $0\leq s_1, s_2 \leq t(\eta)$.* ![Lemma [Lemma 8](#box){reference-type="ref" reference="box"} says that the boxes made out of flowing $S^{\eta}$ and $T^{\eta}$ for time $t(\eta)$ never meet.](box.jpg) *Proof.* Without loss of generality, up to conjugating everything by Möbius transformations, we may take $S= \Delta$. Recall that $\widetilde{f^p_i}: \mathbf{H}^2 \longrightarrow\mathbf{H}^3$ is the pleated map so that $\widetilde{f^p_i} (\mathbf{H}^2) = P_i$. 
We know that $\partial_\infty p_i$ is the $K_i$-quasiconformal homeomorphism $h_i:\hat{\mathbf{C}} \to \hat{\mathbf{C}}$ fixing $0$, $1$ and $\infty$ so that $h_i(\hat{\mathbf{R}}) = \Lambda_i$ (where $K_i\to 1$ as $i\to\infty$). In particular, $\widetilde{f^p_i}$ is the identity on $\Delta$. Moreover, as discussed previously, $h_i$ converges uniformly to the identity map as $i\to\infty.$ Denote this modulus of uniform convergence as $\omega_i$. Define the following closed intervals in $\hat{\mathbf{R}}\subset\hat{\mathbf{C}}$: $$I^1 = [0,1],\quad I^2 = [1,\infty]\quad\text{and}\quad I^3 = [\infty,0].$$ Let $T$ be a triangle in $P_i\setminus \lambda_i$, distinct from $\Delta$. Then, $\widetilde{f_i^p}^{-1}_i (T)$ and $\widetilde{f_i^p}^{-1} (\Delta) = \Delta$ are triangles in the ideal triangulation $\widetilde{f_i^p}^{-1}(\lambda_i)$ of $\mathbf{H}^2$. In particular, they do not intersect, so the vertices of $\widetilde{f_i^p}^{-1}(T)$ all lie in $I^{\ell}$ for some $\ell\in \{1,2,3\}$. Thus, as $f_i$ is uniformly $\omega_i$ close to the identity, the vertices of $T$ are contained in $N_{\omega_i}(I^{\ell})$, the $\omega_i$-neighborhood of $I^{\ell}$ in $\hat{\mathbf{C}}$. As the vertices of $T$ are trapped in a shrinking neighborhood of $I^{\ell}$, we have that $$T^{\eta} \cap N_{\eta}(\Delta^{\eta}) = \emptyset,$$ which in turn implies that $$\sup_{p\in T^{\eta}} \mathop{\mathrm{dist}}_{\mathbf{H}^3} (p,\Delta^{\eta}) \geq \eta.$$ It follows that $n_{s_1} (\Delta^{\eta})\cap n_{s_2} (T^{\eta}) = \emptyset$ for $0\leq s_1,s_2 \leq \eta/2$. We conclude the theorem holds for $t(\eta) = \eta/2$. ◻ For $i$ sufficiently large depending on $\eta$, the lemma allows us to define a map $$G_i : E^{\eta} \longrightarrow P^{\eta}_i$$ which takes $q\in P^{\eta,t}_i$ back to the point $p\in P^{\eta}_i$ so that $g_t(p,n) = q$. This is well defined as the components of $E^{\eta}$ given by normal flow starting at some triangle of $P^{\eta}_i$ never intersect. The map $G_i$ is smooth and its restriction to $H^{\eta}_i$ is Lipschitz and equal to $(F^{\eta}_i)^{-1}$. In particular, it takes sets of measure zero to sets of measure zero. **iii.** It suffices to show that $\det (dF^{\eta}_i)$ converges uniformly to 1 in the fixed triangle $\Delta^{\eta}$, namely **Proposition 9**. *$\|\det(dF_i) - 1 \|_{L^{\infty}(\Delta^{\eta})} \to 0$ as $i\to\infty$.* Indeed, if $T_{i_j} \subset P_i$ is a sequence of triangles, there are Möbius transformations $f_{i_j}$ so that $f_{i_j} T_{i_j} (f_{i_j})^{-1} = \Delta$ while $f_{i_j} \Lambda_i f_{i_j}^{-1}$ is still trapped in a $\delta(i)$ neighborhood of $\mathbf{R}$, where $\delta(i) = o_i(1)$ and does not depend on $j$. Therefore, conjugating by $f_{i_j}$ does not affect the following analysis and in particular $\det dF_i$ being uniformly close to 1 in $\Delta^{\eta}$ implies $\det dF_i$ is uniformly close to 1 in all of $P_i^{\eta}$. Recall that $\tau_i$ is the time it takes for a point $p\in \Delta^{\eta}$ to hit $H_i^+$ via the normal flow, i.e., $$\tau_i(x) = \inf \{t>0 : n_t(x) \in H_i^+\}.$$ In order to prove Proposition [Proposition 9](#derivdelta){reference-type="ref" reference="derivdelta"}, we will need the following **Lemma 10**. 
*$\|\tau_i\|_{C^1(\Delta^{\eta})} \to 0$ as $i\to\infty$.* *Proof.* We begin by showing that $$\label{cpt} \|\tau_i\|_{L^{\infty}(\Delta^{\eta})} \to 0 \text{ as }i\to\infty.$$ Recall that $\Lambda_i= f_i (\mathbf{R})$, where the $f_i$ are $K_i$-quasiconformal maps with $K_i\to 1$ that converge uniformly to the identity on $\hat{\mathbf{C}}$. In particular, we can find a function $\delta(i)\to 0$ with $i\to\infty$ so that $\Lambda_i \subset N_{\delta(i)}(\mathbf{R})$. Let $\Pi_i^+$ and $\Pi_i^-$ be the totally geodesic planes satisfying $$\partial_{\infty} \Pi_i^+ \cup \partial_{\infty} \Pi_i^- = \partial N_{\delta(i)} (\mathbf{R}),$$ with $\Pi_i^+$ in the same side of the plane containing $\Delta$ as $H_i^+$. Let $T_i(x)$ be the time it takes for a point $x$ to hit $\Pi_i^+$ via the normal flow, i.e., $$T_i(x) = \inf \{t>0 : n_t(x) \in \Pi_i^+\}.$$ By construction, $\tau_i(x) \leq T_i(x)$ for $x\in \Delta^{\eta}$. In addition, as $\delta (i) \to 0$, we also have that $T_i(x) \to 0$ uniformly in $x$, with $i\to\infty$. This shows [\[cpt\]](#cpt){reference-type="ref" reference="cpt"}. Let $p\in \Delta^{\eta}$ be a point outside of $F_i^{-1}(\beta_i)$ and let $v\in T_p \Delta^{\eta}$ be a unit vector. Now, we show that $$\label{cptder} d\tau_i(p)(v) \stackrel{i\to\infty}\longrightarrow 0$$ uniformly in $(p,v)\in \mathop{\mathrm{T}}^1 \Delta^{\eta}.$ Let $\theta_i(p,v)$ be the angle in $(0,\pi/2]$ that the geodesic normal to $\Delta$ through $p$ makes with the curve $s\mapsto F_i(\exp_p sv)$, for $s\geq 0$. It suffices to show that this angle is uniformly close to $\pi/2$ as $(p,v)$ varies in $\mathop{\mathrm{T}}^1 \Delta^{\eta}$. ![If $\theta_i$ was smaller or equal to $\tilde{\theta}_i = \cos^{-1}(\tanh \tau_i(p)/\tanh \eta)$ as in the figure above, then the supporting plane to $H_i^+$ containing $F_i(p)$ would intersect $\Delta$, in a violation of convexity.](angle.jpg){#angle} Let $\alpha_i$ be the angle based at $F_i(p)$ between the normal geodesic $t\mapsto n_t(p)$ to $\Delta$ at $p$ and the geodesic segment from $F_i(p)$ to $\exp_p (\eta v)$. (See Figure [2](#angle){reference-type="ref" reference="angle"}.) Note that $\alpha_i < \theta_i$. If that was not the case and $\theta_i$ was any smaller, the supporting plane of $H_i^+$ containing $F_i(p)$ would intersect the intrinsict disc $B_{\eta}(p)\subset\Delta$ of radius $\eta$, in a contradiction of convexity. On the other hand, from trigonometry, $$\cos \theta_i < \cos \alpha_i = \frac{\tanh \tau_i(p)}{\tanh \eta}.$$ Thus, as $\|\tau_i\|_{L^{\infty}(\Delta^{\eta})}\to 0$ as $i\to \infty$, it follows that $\cos \theta_i\to 0$ uniformly in $(p,v)$. This finishes the argument. ◻ *Proof of proposition.* For $p\in\mathbf{H}^3$, let $v,w \in \mathop{\mathrm{T}}_p\mathbf{H}^3$. We let $\mathop{\mathrm{area}}_p(v,w)$ denote the area spanned by $v$ and $w$ in $\mathop{\mathrm{T}}_p \mathbf{H}^3$, with respect to the hyperbolic metric $G = \langle\cdot,\cdot\rangle$ of $\mathbf{H}^3$. In formulas, $$\mathop{\mathrm{area}}_p (v,w) = \left| \det \begin{bmatrix} \langle v,v\rangle& \langle v, w\rangle\\ \langle v,w\rangle& \langle w,w \rangle\end{bmatrix} \right|^{1/2}.$$ The Radon-Nikodym derivative $\det dF_i$ measures the area distortion caused by $F_i$. If $p\in \Delta_i^{\eta}$, we have $$\det dF_i (p) = \frac{\mathop{\mathrm{area}}(dF_i(p)v,dF_i(p)w)}{\mathop{\mathrm{area}}(v,w)},$$ where $v$ and $w$ are distinct vectors in $\mathop{\mathrm{T}}^1_p\mathbf{H}^3$. 
Consider the coordinates in $\mathbf{H}^3$ given by $(x,y,t) = n_t(x,y)$, where we choose $(x,y)\in\mathbf{H}^2$ so that $\partial_x$ and $\partial_y$ form an orthonormal basis for $\mathop{\mathrm{T}}_p \mathbf{H}^2$, where $\mathbf{H}^2$ denotes the geodesic plane containing $\Delta$. In these coordinates, the metric on $n_t(\mathbf{H}^2)$ is given by $$G_t = \cosh^2 t g_{\mathbf{H}^2} + dt^2$$, where $g_{\mathbf{H}^2}$ is the hyperbolic metric of $\mathbf{H}^2$. We also have $$F_i(x,y,0) = (x,y,\tau_i(x,y)),$$ and for $v\in T_p\mathbf{H}^2$, $$\begin{aligned} dF_i(p)(v) &= v + d\tau_i(p) v \,\partial_t.\end{aligned}$$ With these explicit formulae for $dF_i$ and $G_t$, we can calculate the area distortion $\det dF_i(p)$, which turns out to be $$\begin{aligned} \det dF_i(p) &= \frac{\mathop{\mathrm{area}}(dF_i(p)\, \partial_x,dF_i(p)\,\partial_y)}{\mathop{\mathrm{area}}(\partial_x,\partial_y)} \\ &= \left| \det \begin{bmatrix} \cosh^2 \tau_i(p) + (\partial_x \tau_i(p))^2 & \partial_x \tau_i(p)\,\partial_y \tau_i(p) \\ \partial_x \tau_i(p)\,\partial_y \tau_i(p) & \cosh^2 \tau_i(p) + (\partial_y \tau_i(p))^2 \end{bmatrix} \right|^{1/2} \\ &= \left( \cosh^4 \tau_i(p) + |\nabla \tau_i (p)|^2 \cosh^2 \tau_i(p) \right)^{1/2}.\end{aligned}$$ Using Lemma [Lemma 10](#c1){reference-type="ref" reference="c1"} above, we see that $$\|\det dF_i - 1\|_{L^{\infty}(\Delta^{\eta})} \to 0 \text{ as }i\to\infty.$$ ◻ As argued above, it follows that$$\|\det dF_i - 1\|_{L^{\infty}(P_i^{\eta})} \to 0 \text{ as }i\to\infty.$$ This concludes the proof of item **iii** of Proposition [Proposition 7](#map){reference-type="ref" reference="map"}. ◻ Let $H^{\eta}_i := F_i(P^{\eta}_i)$ and let $R^{\eta}_i$ be its projection to $M$. Let $\tilde{h}^{\eta}_i$ be the area measure of $H^{\eta}_i$ and let $h^{\eta}_i$ be the restriction of $h_i$ (the probability area measure of $R_i = \Gamma\backslash H_i^+$) to $R^{\eta}_i$. **Corollary 11**. *$h^{\eta}_i (M\setminus R^{\eta}_i) \to 0$ as $\eta\to 0$.* *Proof.* Given an ideal triangle $T\subset P_i\setminus \lambda_i$, the area of $F_i^{\eta} (T^{\eta})$ is larger than that of $T^{\eta}$. Since $R_i$ and $\Sigma_i = \Gamma \backslash P_i$ have the same area (as they are pleated and homotopic to each other), the corollary follows from the fact that $p_i(M\setminus \Sigma^{\eta}_i)\to 0$ as $\eta\to 0$. ◻ **Claim 12**. *Let $(g_{\alpha}) \subset C(\mathop{\mathrm{Gr}}\mathbf{H}^3)$ be a bounded and equicontinuous family of functions, namely* i. *$\sup_{\alpha} \|g_{\alpha}\|_{\infty} < \infty$* ii. 
*There is a function $w:(0,\infty)\to\mathbf{R}$ satisfying $w(\delta)\to 0$ as $\delta\to 0$ such that $$|g_{\alpha} (x) - g_{\alpha} (y)| \leq w\left(\mathop{\mathrm{dist}}_{\mathop{\mathrm{Gr}}\mathbf{H}^3} (x,y)\right).$$* *Then, $$\sup_{\alpha} \left| \int_{\mathop{\mathrm{Gr}}\mathbf{H}^3} g_{\alpha} \,d\tilde{p}^{\eta}_i - \int_{\mathop{\mathrm{Gr}}\mathbf{H}^3} g_{\alpha} \,d \tilde{h}^{\eta}_i \right| \to 0 \text{ as }i\to\infty.$$* *Proof.* Since $\det dF_i = d(F_i^* \tilde{h}^{\eta}_i)/d\tilde{p}^{\eta}_i$, we have $$\int_{\mathop{\mathrm{Gr}}\mathbf{H}^3} g_{\alpha} \, d\tilde{h}^{\eta}_i = \int_{P^{\eta}_i} g_{\alpha} (F_i(x)) \, \det dF_i (x) \,d\tilde{p}^{\eta}_i(x).$$ Thus, $$\left| \lim_{i\to\infty} \int_{\mathop{\mathrm{Gr}}\mathbf{H}^3} g_{\alpha} \,d\tilde{p}^{\eta}_i - \lim_{i\to\infty} \int_{\mathop{\mathrm{Gr}}\mathbf{H}^3} g_{\alpha} \,d\tilde{h}^{\eta}_i \right| \leq \int_{P^{\eta}_i} |g_{\alpha} \circ F_i | \, |\det(dF_i) - 1| \, d\tilde{h}^{\eta}_i + \int_{P^{\eta}_i} | g_{\alpha}\circ F_i - g_{\alpha}|\, d\tilde{p}^{\eta}_i$$ From the boundedness and equicontinuity of $g_{\alpha}$ and the fact that $\det dF_i$ converges uniformly to 1 (Claim [Proposition 7](#map){reference-type="ref" reference="map"}), we see that the right hand side of this inequality goes to zero. ◻ **Claim 13**. *Let $g\in C(\mathop{\mathrm{Gr}}M)$. Then, $$\lim_{i\to\infty} \int_{\mathop{\mathrm{Gr}}M} g \, dp^{\eta}_i = \lim_{i\to\infty} \int_{\mathop{\mathrm{Gr}}M} g \, dh^{\eta}_i.$$* *Proof.* It suffices to take a $g$ supported in a small geodesic ball $B\subset\mathop{\mathrm{Gr}}M$. Let $\tilde{B}$ be a lift of this ball to $\mathop{\mathrm{Gr}}\mathbf{H}^3$. Then, there is $\tilde{g}\in C(\mathop{\mathrm{Gr}}\mathbf{H}^3)$ and a finite set $K_i \subset\Gamma$ so that $$\int_{\mathop{\mathrm{Gr}}M} g \,dp^{\eta}_i = \frac{1}{\mathop{\mathrm{area}}(\Sigma_i)} \sum_{\gamma\in K_i} \int_{\mathop{\mathrm{Gr}}\mathbf{H}^3} \tilde{g}\circ \gamma \, d\tilde{p}^{\eta}_i$$ and similarly, $$\int_{\mathop{\mathrm{Gr}}M} g \,dh^{\eta}_i = \frac{1}{\mathop{\mathrm{area}}(R_i)} \sum_{\gamma\in K_i} \int_{\mathop{\mathrm{Gr}}\mathbf{H}^3} \tilde{g}\circ \gamma \, d\tilde{h}^{\eta}_i.$$ Note that $(\tilde{g}\circ\gamma)_{\gamma\in\Gamma}$ is a bounded and equicontinuous family of functions in $C(\mathop{\mathrm{Gr}}M)$. We claim moreover that the $K_i$ can be chosen so that $$\frac{\sup_i \#K_i}{\mathop{\mathrm{area}}(\Sigma_i)} < \infty.$$ Indeed, let $2B\subset\mathop{\mathrm{Gr}}M$ be a ball of twice the radius as $B$, centered at the same point. Then, $\#K_i$ is no larger than the number of connected components of $\Sigma_i\cap 2B$ that meet $B$. Each such component $C$ satisfies $\mathop{\mathrm{area}}(C) \geq c(B)$, where $c(B)$ is a constant depending only on $B$. Thus, we have $$\#K_i \cdot c(B) \leq \mathop{\mathrm{area}}(\Sigma_i),$$ which shows that $\#K_i/\mathop{\mathrm{area}}(\Sigma_i)\leq c(B)^{-1}$. 
Using the fact that $\mathop{\mathrm{area}}(\Sigma_i) = \mathop{\mathrm{area}}(R_i)$, we estimate $$\begin{aligned} &\left|\int_{\mathop{\mathrm{Gr}}M} g\,dp^{\eta}_i - \int_{\mathop{\mathrm{Gr}}M} g\,dh^{\eta}_i \right| \leq \frac{1}{\mathop{\mathrm{area}}(\Sigma_i)} \sum_{\gamma\in K_i} \left| \int_{\mathop{\mathrm{Gr}}\mathbf{H}^3} \tilde{g}\circ \gamma \, d\tilde{p}^{\eta}_i - \int_{\mathop{\mathrm{Gr}}\mathbf{H}^3} \tilde{g}\circ \gamma \, d\tilde{h}^{\eta}_i \right|\\ &\leq \frac{\# K_i}{\mathop{\mathrm{area}}(\Sigma_i)} \sup_{\gamma\in\Gamma} \left| \int_{\mathop{\mathrm{Gr}}\mathbf{H}^3} \tilde{g}\circ\gamma\,d\tilde{p}^{\eta}_i - \int_{\mathop{\mathrm{Gr}}\mathbf{H}^3} \tilde{g}\circ \gamma\, d\tilde{h}^{\eta}_i \right|.\end{aligned}$$ The term above goes to zero as $i\to \infty$ due to the boundedness of $\#K_i/\mathop{\mathrm{area}}(\Sigma_i)$ and the boundedness and equicontinuity of $(\tilde{g}\circ\gamma)_{\gamma\in\Gamma}$. ◻ Finally, let $g\in C(\mathop{\mathrm{Gr}}M)$. Then, $$\begin{aligned} &\left| \int_{\mathop{\mathrm{Gr}}M} g\,dp_i - \int_{\mathop{\mathrm{Gr}}M} g\,dh_i \right|\\ &\leq \left| \int_{\mathop{\mathrm{Gr}}M} g\,dp^{\eta}_i - \int_{\mathop{\mathrm{Gr}}M} g\,dh^{\eta}_i\right| + \int_{\mathop{\mathrm{Gr}}M} |g|\cdot 1_{P_i \setminus P^{\eta}_i}\,dp_i + \int_{\mathop{\mathrm{Gr}}M} |g|\cdot 1_{H^+_i \setminus H^{\eta}_i}\,dh_i\\ &\leq \left| \int_{\mathop{\mathrm{Gr}}M} g\,dp^{\eta}_i - \int_{\mathop{\mathrm{Gr}}M} g\,dh^{\eta}_i\right| + \|g\|_{L^{\infty}(\mathop{\mathrm{Gr}}M)} \left( p_i^{\eta} (M - \Sigma^{\eta}_i) + h^{\eta}_i (M - H_i^{\eta}) \right).\end{aligned}$$ Claim [Claim 13](#upseta){reference-type="ref" reference="upseta"} implies that the first summand in the expression above goes to zero as $i\to \infty$. The second summand, in turn, goes to zero as $\eta\to 0$, from Corollary [Corollary 11](#complementsmall){reference-type="ref" reference="complementsmall"}. Since $\eta$ was arbitrary, we have shown $$\left| \int_{\mathop{\mathrm{Gr}}M} g\,dp_i - \int_{\mathop{\mathrm{Gr}}M} g\,dh_i \right| \to 0 \text{ as }i\to\infty.$$ In particular, if a subsequence $p_{i_j}$ converges to $\nu$, then so does $h_{i_j}$ and vice-versa. This completes the proof of Theorem [Theorem 5](#sl1){reference-type="ref" reference="sl1"}. ## Minimal surfaces A map $f:S\to M$ of a surface $S$ into $M$ is *minimal* if the principal curvatures of $f(S)$ (a *minimal surface*) sum to zero at every point. These surfaces turn out to be locally area-minimizing. Let $f:S\to M$ be a $\pi_1$-injective map of a hyperbolic surface $S$ into $M$. Schoen-Yau [@SY] and Sacks-Uhlenbeck [@SU] show that $f$ is homotopic to a minimal map $f^m$. In addition, Uhlenbeck shows that if the principal curvatures $\pm \lambda(p)$ of $f^m(S)$ satisfy $\lambda(p)\in (-1,1)$ for every $p\in f^m(S)$, then $f^m$ is quasifuchsian and it is the unique minimal map in its homotopy class. In addition, Seppi [@Se] shows that for a minimal $K$-quasifuchsian map $f^m:S\to M$ with $K$ small enough, **Theorem 14** (Seppi). *The principal curvatures $\pm \lambda$ of $f^m(S)$ satisfy $$\|\lambda\|_{L^{\infty}(f^m(S))} \leq C\log K,$$ for an universal constant $C$.* Combining these theorems, we see that if $f_i:S_i\to M$ are asymptotically Fuchsian maps, then for $i$ large enough $f_i$ is homotopic to a unique minimal map $f_i^m$. In addition, the principal curvatures of $f_i^m(S_i)$ go to zero uniformly as $i\to\infty$. 
## Proving Theorem [Theorem 6](#sl2){reference-type="ref" reference="sl2"} {#proving-theorem-sl2} We will now restate and prove Theorem [Theorem 6](#sl2){reference-type="ref" reference="sl2"}. As before, $f_i:S_i\to M$ are asymptotically Fuchsian maps and $f_i^h$ is the pleated map homotopic to $f_i$ coming from the top component $H_i^+$ of $Q_i = (f_i)_* (\pi_1 S_i)$. We let $f_i^m$ be the minimal maps homotopic to $f_i$. We denote the probability area measure induced by $f_i^m$ and $f_i^h$ as $m_i = \nu(f_i^m)$ and $h_i = \nu(f_i^h)$. **Theorem 3**. *A subsequence $m_{i_j}$ satisfies $m_{i_j}\stackrel{\star}\rightharpoonup\nu$ if and only if $h_{i_j}\stackrel{\star}\rightharpoonup\nu$.* We let $\widetilde{f_i^m}$ be the lift of $f_i^m$ to $\mathbf{H}^2$ so that $\partial_{\infty} \widetilde{f_i^m}$ is the limit set $\Lambda_i$ of $Q_i$. We let $\tilde{m}_i$ and $\tilde{h}_i$ be the area measures induced by $\widetilde{f^m_i}$ and $\widetilde{f^h_i}$ on $\mathop{\mathrm{Gr}}\mathbf{H}^3$. As before, $\beta_i$ is the bending lamination of $H_i^+$, $R_i = \Gamma\backslash H_i^+$ and we put $N_i:= \Gamma\backslash D_i = f_i^m(S_i)$. As in the proof of Theorem [Theorem 5](#sl1){reference-type="ref" reference="sl1"}, we define a map $$F_i : D_i \longrightarrow H_i^+$$ where $F_i(p)$ is given by flowing $p$ in the direction normal to $D_i$ for the time $\tau_i(p)$ it takes to hit $H_i^+$. Concisely, $F_i(p) = n_{\tau_i(p)}(p)$. **Claim 15**. *The map $F_i$ satisfies the following properties:* i. *$F_i$ is differentiable outside of $F_i^{-1}(\beta_i)$* ii. *$\tilde{m}_i (F_i^{-1}(\beta_i)) = 0$* iii. *$\| \det(dF_i) - 1 \|_{L^{\infty} (D_i)} \to 0$ as $i\to\infty$.* To prove this claim, we will use the following rephrasing of Proposition 4.1 of Seppi in [@Se]: **Proposition 16**. *Suppose $i$ is large enough that the uniformizing map $f_i$ has Bers norm $\|f_i\|_B<1/2$. Then, we may find surfaces $D^-_i$ and $D^+_i$ that are equidistant from $D_i$ so that the region between $D^-_i$ and $D^+_i$ is convex and thus contains $\mathop{\mathrm{core}}Q_i$.* *Moreover, given $x\in D_i$, there is a geodesic segment $\alpha$ from $D_i^-$ to $D_i^+$, meeting $D_i$ and $D_i^{\pm}$ orthogonally, whose length satisfies $$\label{seppi} \ell(\alpha) \leq \mathop{\mathrm{arctanh}}(2\|f_i\|_B).$$* ![Illustrating Proposition [Proposition 16](#seppiprop){reference-type="ref" reference="seppiprop"}](seppi.jpg) In particular, given $x_i\in D_i$, let $P^+_i$ and $P^-_i$ be the geodesic planes tangent to $D^+_i$ and $D^-_i$ at the endpoints of the segment $\alpha_i$. From [\[seppi\]](#seppi){reference-type="ref" reference="seppi"}, we see that the distance between $P^+_i$ and $P^-_i$ goes to zero as $i\to\infty$ and does not depend on the chosen point $x_i \in D_i$. *Proof of Claim [Claim 15](#Fnice){reference-type="ref" reference="Fnice"}.* **i.** Let $x\in D_i \setminus F_i^{-1} (\beta_i)$. Then, $F_i$ maps a disc around $x$ to a piece of a totally geodesic plane in $H_i^+$ by the geodesic flow in the normal direction. This is a smooth map. **ii.** Let $E$ be the set containing all the points above $D_i^-$ and below $D_i^+$. This set is foliated by surfaces $D_i^t$ equidistant to $D_i$, for $t\in [-\mathop{\mathrm{dist}}(D_i^-,D_i),\mathop{\mathrm{dist}}(D_i,D_i^+)]$. The set $E$ is also foliated by the orbits of the geodesic flow going through points in $D_i$ and their normal vector. These flow lines never meet. 
If they did, the pullback metrics of the $D_i^t$ on $D_i$ would be degenerate, which is the not the case, as their principal curvatures are given by $$\frac{\lambda-\tanh t}{1-\lambda\tanh t} \quad\text{and}\quad\frac{-\lambda-\tanh t}{1+\lambda\tanh t}$$ and $\lambda\in (-1,1).$ In particular, we can define a map $$G_i : E \longrightarrow D_i,$$ which takes $y \in D_i^t \subset E$ back to the point $x\in D_i$ so that $g_t(x,n) = y$. This map is smooth and in particular, its restriction to $H_i^+$ is Lipschitz and hence takes sets of measure zero to sets of measure zero. Thus, $$\tilde{m}_i (G_i(\beta_i)) = \tilde{m}_i (F_i^{-1}(\beta_i)) = 0.$$ **iii.** As before, first we show that the hitting times $\tau_i$ converge to zero uniformly on $D_i$ in the $C^1$ sense. **Lemma 17**. *$\|\tau_i\|_{C^1(D_i)}\to 0$ as $i\to\infty$.* *Proof.* The fact that $\|\tau_i\|_{L^{\infty}(D_i)}\to 0$ as $i\to\infty$ follows readily from Seppi's Proposition [Proposition 16](#seppiprop){reference-type="ref" reference="seppiprop"}. It remains to show that $d\tau_i(p)v\to 0$ as $i\to\infty$ uniformly in $\mathop{\mathrm{T}}^1 (D_i - F_i^{-1}(\beta_i)$. This in turn will follow from Seppi's Theorem [Theorem 14](#seppimain){reference-type="ref" reference="seppimain"} that says that the principal curvatures of $D_i$ converge uniformly from zero. For $(p,v)\in \mathop{\mathrm{T}}^1 D_i$, we let $\theta_i(p,v)$ be the angle in $(0,\pi/2]$ that the geodesic normal to $D_i$ at $p$ makes with the curve $s\mapsto F_i(\exp_p sv)$. As in the pleated case (Lemma [Lemma 10](#c1){reference-type="ref" reference="c1"}), it suffices to show that $\theta_i(p,v)\to \pi/2$ uniformly in $\mathop{\mathrm{T}}^1D_i$. Fix $\eta > 0$. Again, we let $\alpha_i$ be the angle based at $F_i(p)$ between the normal geodesic $t\mapsto n_t(p)$ to $\Delta$ at $p$ and the geodesic segment from $F_i(p)$ to $\exp_p (\eta v)$. Let $\exp^{D_i}_p : \mathop{\mathrm{T}}^1_p D_i \to D_i$ denote the exponential map intrinsic to $D_i$. We let $\alpha'_i$ be the angle based at $F_i(p)$ between the normal geodesic $t\mapsto n_t(p)$ to $\Delta$ at $p$ and the *intrinsic* geodesic segment of $D_i$ from $F_i(p)$ to $\exp^{D_i}_p (\eta v)$. ![The angle $\alpha'_i$ is defined in a similar way to $\alpha_i$, except that it is opposite to an intrinsic geodesic of $D_i$ of length $\eta$, rather than a geodesic of $\mathbf{H}^3$.](angle2.jpg) Note that $\alpha'_i < \theta_i$. If that was not the case, a supporting plane to $H_i^+$ based at $F_i(p)$ would meet $D_i$, in a violation of convexity. Due to the principal curvatures of $D_i$ going to zero as $i\to \infty$, it follows that the difference between $\alpha_i$ and $\alpha'_i$ also goes to zero unformly as $i\to \infty$. In other words, there is a quantity $\omega_i \to 0$ as $i\to \infty$ (which depends on the choice of $\eta>0$, but not of $(p,v)\in \mathop{\mathrm{T}}^1 D_i$), so that $$|\cos \alpha_i - \cos \alpha'_i| \leq \omega_i.$$ But as before, $\cos \alpha_i = \tanh(\tau_i(p))/\tanh\eta$. Thus, $$\cos \theta_i \leq \frac{\tanh\|\tau_i\|_{L^{\infty} (D_i)}}{\tanh\eta} + \omega_i \stackrel{i\to\infty}\longrightarrow 0.$$ ◻ For each $i$, we choose coordinates on $\mathbf{H}^3$ given by $(x,y,t) = n_t(x,y)$, where $(x,y)$ are coordinates for $D_i$ chosen so that $\partial_x$ and $\partial_y$ form an orthonormal basis for $T_{p_i} D_i$. (The points $p_i$ are chosen in the full measure set $D_i - F_i^{-1}(\beta_i)$.) 
In these coordinates, the metric $G_t$ on $n_t(D_i)$ is given by $$G_t = g_t + dt^2,$$ where at $n_t(p_i)$, the matrix entries of $g_t$ corresponding to the basis $\partial_x$, $\partial_y$ are given by $$\tag{$\star$} g_t = (\cosh t \mathop{\mathrm{Id}}+ \sinh t A_i)^2$$ and $A_i$ is the second fundamental form of $D_i$. (See Section 5 of Uhlenbeck [@U] for details.) As before, in these coordinates we have $F_i(x,y,0) = (x,y,\tau_i(x,y))$ and so $dF_i(p)v = v + d\tau_i(p) v \, \partial/\partial t$. Thus, $$\begin{aligned} \det dF_i(p_i) &= {\mathop{\mathrm{area}}(dF_i(p_i)\,\partial_x,dF_i(p_i)\,\partial_y)} \\ &= \left| \det \begin{bmatrix} g_{\tau_i(p_i)} (\partial_x,\partial_x) + (\partial_x \tau_i(p_i))^2 & \partial_x \tau_i(p)\,\partial_y \tau_i(p_i) \\ \partial_x \tau_i(p_i)\,\partial_y \tau_i(p) & g_{\tau_i(p_i)} (\partial_y,\partial_y) + (\partial_y \tau_i(p_i))^2 \end{bmatrix} \right|^{1/2} \\ &= \left( |\partial_x|^2_{\tau_i(p_i)}|\partial_y|^2_{\tau_i(p_i)} + (\partial_x\tau_i(p_i))^2|\partial_y|^2_{\tau_i(p_i)} + (\partial_y\tau_i(p_i))^2 |\partial_x|^2_{\tau_i(p_i)} \right)^{1/2},\end{aligned}$$ Above, $|\partial_x|^2_{\tau_i(p_i)}$ and $|\partial_y|^2_{\tau_i(p_i)}$ denote, respectively, the first and second diagonal entries of $g_{\tau_i(p_i)}$. From Seppi's result (Theorem [Theorem 14](#seppimain){reference-type="ref" reference="seppimain"}), we know that the second fundamental forms $A_i$ converge uniformly to the zero matrix as $i\to\infty$. In view of the formula ($\star$), it follows that $|\partial_x|^2_{\tau_i(p_i)}$ and $|\partial_y|^2_{\tau_i(p_i)}$ both converge uniformly to 1 as $i\to\infty$. In addition, from Lemma [Lemma 17](#c12){reference-type="ref" reference="c12"}, we know that the derivatives of $\tau_i(p_i)$ converge to zero as $i\to\infty$. We can thus conclude that $$\|\det dF_i(p) - 1 \|_{L^{\infty} (D_i)} \to 0 \text{ as }i\to\infty.$$ ◻ Let $\tilde{m}_i$ and $\tilde{h}_i$ be the area measures of $D_i$ and $H^+_i$ in $\mathop{\mathrm{Gr}}\mathbf{H}^3$. **Claim 18**. *Let $(g_{\alpha}) \subset C(\mathop{\mathrm{Gr}}\mathbf{H}^3)$ a bounded and equicontinuous family of functions. Then, $$\sup_{\alpha} \left| \int_{\mathop{\mathrm{Gr}}\mathbf{H}^3} g_{\alpha} \,d\tilde{m}_i - \int_{\mathop{\mathrm{Gr}}\mathbf{H}^3} g_{\alpha} \,d \tilde{h}_i \right| \to 0 \text{ as }i\to\infty.$$* *Proof.* The proof is the same as the proof of Claim [Claim 12](#pleatedupstairs){reference-type="ref" reference="pleatedupstairs"}, substituting $\tilde{h}_i$ for $\tilde{h}^{\eta}_i$ and $\tilde{m}_i$ for $\tilde{p}^{\eta}_i$. ◻ Now, let $g\in C(\mathop{\mathrm{Gr}}M)$. In a similar fashion to the proof of Claim [Claim 13](#upseta){reference-type="ref" reference="upseta"}, we proceed to show that $$\lim_{i\to\infty} \int_{\mathop{\mathrm{Gr}}M} g\,dm_i = \lim_{i\to\infty} \int_{\mathop{\mathrm{Gr}}M} g\,dh_i.$$ As before, we may choose $g$ to be supported in a small geodesic ball $B\subset\mathop{\mathrm{Gr}}M$. 
For a lift $\tilde{B}$ of $B$ to $\mathop{\mathrm{Gr}}\mathbf{H}^3$, there is $\tilde{g}\in C(\mathop{\mathrm{Gr}}\mathbf{H}^3)$ and a finite set $K_i \subset\Gamma$ so that $$\int_{\mathop{\mathrm{Gr}}M} g \,dm_i = \frac{1}{\mathop{\mathrm{area}}(N_i)} \sum_{\gamma\in K_i} \int_{\mathop{\mathrm{Gr}}\mathbf{H}^3} \tilde{g}\circ \gamma \, d\tilde{m}_i$$ and similarly, $$\int_{\mathop{\mathrm{Gr}}M} g \,dh_i = \frac{1}{\mathop{\mathrm{area}}(\Sigma_i)} \sum_{\gamma\in K_i} \int_{\mathop{\mathrm{Gr}}\mathbf{H}^3} \tilde{g}\circ \gamma \, d\tilde{h}_i.$$ As before, $(\tilde{g}\circ\gamma)_{\gamma\in\Gamma}$ is a bounded and equicontinuous family of functions in $C(\mathop{\mathrm{Gr}}M)$, and the $K_i$ can be chosen so that $\sup_i \#K_i/\mathop{\mathrm{area}}(\Sigma_i) < \infty.$ Now we estimate $$\begin{aligned} &\left|\int_{\mathop{\mathrm{Gr}}M} g\,dm_i - \int_{\mathop{\mathrm{Gr}}M} g\,dh_i \right| \leq \frac{1}{\mathop{\mathrm{area}}(\Sigma_i)} \sum_{\gamma\in K_i} \left| \frac{\mathop{\mathrm{area}}(\Sigma_i)}{\mathop{\mathrm{area}}(N_i)}\int_{\mathop{\mathrm{Gr}}\mathbf{H}^3} \tilde{g}\circ \gamma \, d\tilde{m}_i - \int_{\mathop{\mathrm{Gr}}\mathbf{H}^3} \tilde{g}\circ \gamma \, d\tilde{h}_i \right|\\ &\leq \frac{\# K_i}{\mathop{\mathrm{area}}(\Sigma_i)} \sup_{\gamma\in\Gamma} \left[ \left| \int_{\mathop{\mathrm{Gr}}\mathbf{H}^3} \tilde{g}\circ\gamma\,d\tilde{m}_i - \int_{\mathop{\mathrm{Gr}}\mathbf{H}^3} \tilde{g}\circ \gamma\, d\tilde{h}_i \right| + \left| 1 - \frac{\mathop{\mathrm{area}}(\Sigma_i)}{\mathop{\mathrm{area}}(N_i)} \right|\, \| \tilde{g}\circ \gamma\|_{L^1(\tilde{h}_i)} \right].\end{aligned}$$ The upper bound above goes to zero as $i\to \infty$ due to the boundedness of $\#K_i/\mathop{\mathrm{area}}(\Sigma_i)$; the equicontinuity and boundedness of $(\tilde{g}\circ\gamma)_{\gamma\in\Gamma}$ and the fact that $\mathop{\mathrm{area}}(\Sigma_i)/\mathop{\mathrm{area}}(N_i)$ goes to 1. To see the latter, say $\pm \lambda_i$ are the principal curvatures of $N_i$ and $g_i$ is the genus of $S_i$. Using the Gauss-Bonnet formula, we have $$\begin{aligned} \left| 1 - \frac{\mathop{\mathrm{area}}(\Sigma_i)}{\mathop{\mathrm{area}}(N_i)} \right| &= \frac{1}{\mathop{\mathrm{area}}(N_i)} \left| \int_{N_i} 1 \,d\mathop{\mathrm{area}}- 2\pi(2g_i - 2) \right| \\ &= \frac{1}{\mathop{\mathrm{area}}(N_i)} \left| \int_{N_i} 1 \,d\mathop{\mathrm{area}}- \int_{N_i} \lambda_i^2 \,d\mathop{\mathrm{area}}\right| \\ &\leq \frac{1}{\mathop{\mathrm{area}}(N_i)} \| 1 - \lambda_i^2 \|_{L^{\infty} (N_i)} \mathop{\mathrm{area}}(N_i).\end{aligned}$$ We know that $\|1 - \lambda_i^2\|_{L^{\infty}(N_i)}\to 0$ as $i\to\infty$ due to Seppi's Theorem [Theorem 14](#seppimain){reference-type="ref" reference="seppimain"}. # Building surfaces out of good pants In this section, we will outline how to construct a $\pi_1$-injectively immersed closed oriented nearly Fuchsian surface in $M$ out of good pants. This is the Kahn-Markovic surface subgroup theorem from [@KM], though our exposition will line up with that of Kahn and Wright in [@KW] as well as use some notation from Liu and Markovic in [@LM]. ## Building blocks The following paragraphs define the many terms related to the building blocks of this construction. An orthogeodesic $\gamma$ between two closed geodesics $\alpha_0, \alpha_1\subset M$ is a geodesic segment parametrized with unit speed going from $\alpha_0$ to $\alpha_1$ and meeting both curves orthogonally. We denote by $\mathbf{\Gamma}_{\epsilon, R}$ the space of $(\epsilon,R)$-*good curves*. 
Those are the closed oriented geodesics whose complex translation length $\mathbf{l}(\gamma)$ is $2\epsilon$-close to $2R$. Let $P_R$ be the planar oriented hyperbolic pair of pants whose cuffs $C_i$ have length exactly $2R$ for $i\in \mathbf{Z}/3$. We define the space $\mathbf{\Pi}_{\epsilon, R}$ of $(\epsilon,R)$-good pants to be the space of equivalence classes of maps $f:P_R\to M$ so that $f(C_i)$ is homotopic to an element of $\mathbf{\Gamma}_{\epsilon, R}$, for all $i\in\mathbf{Z}/3$. Two representatives $f$ and $g$ of elements of $\mathbf{\Pi}_{\epsilon, R}$ are equivalent if $f$ is homotopic to $g\circ \psi$ for some orientation-preserving homeomorphism $\psi:P_R\to P_R$. We let $\mathbf{\widetilde{\Pi}}_{\epsilon, R}$ be the space of *ends of $(\epsilon,R)$ good pants*, which can be thought as good pants with a marked cuff. Precisely $\mathbf{\widetilde{\Pi}}_{\epsilon, R}$ is the space of equivalence classes of pairs $[(f,C_i)]$, where $f\in\mathbf{\Pi}_{\epsilon, R}$ and $C_i\subset\partial P_R$ is a cuff. We say two representatives $(f,C_i)$ and $(g,C_j)$ of elements of $\mathbf{\widetilde{\Pi}}_{\epsilon, R}$ are equivalent if $f$ is homotopic to $g\circ \psi$, where $\psi:P_R\to P_R$ is an orientation-preserving homeomorphism $\psi: P_R\to P_R$ so that $\psi(C_i) = C_j$. Note that forgetting the cuff of $[(f,C_i)]\in \mathbf{\widetilde{\Pi}}_{\epsilon, R}$ defines a three-to-one surjection from $e:\mathbf{\widetilde{\Pi}}_{\epsilon, R}\to\mathbf{\Pi}_{\epsilon, R}$. For $\pi\in\mathbf{\Pi}_{\epsilon, R}$, we call $e^{-1}(\pi)$ the *ends* of $\pi$. For $\gamma\in\mathbf{\Gamma}_{\epsilon, R}$, we let $\mathbf{\widetilde{\Pi}}_{\epsilon, R}(\gamma)$ be the $[(f,C_i)]\in \mathbf{\widetilde{\Pi}}_{\epsilon, R}$ so that $f(C_i)$ is homotopic to $\gamma$ or its orientation reversal $\gamma^{-1}$. We can decompose $\mathbf{\widetilde{\Pi}}_{\epsilon, R}(\gamma)$ into $\mathbf{\Pi}_{\epsilon, R}^+(\gamma)\sqcup \mathbf{\Pi}_{\epsilon, R}^-(\gamma)$, where $\mathbf{\Pi}_{\epsilon, R}^+(\gamma)$ consists of the $[(f,C_i)]$ with $f(C_i) \sim \gamma$ and $\mathbf{\Pi}_{\epsilon, R}^-(\gamma)$ consists of the $(f,C_i)$ with $f(C_i)\sim \gamma^{-1}$. There is a bijection $$r: \mathbf{\Pi}_{\epsilon, R}^-(\gamma) \longrightarrow\mathbf{\Pi}_{\epsilon, R}^+ (\gamma)$$ given by $r([(f,C_i)]) = ([(f\circ \rho,C_i)])$, where $\rho: P_R\to P_R$ is the reflection along the short orthogeodesics of $P_R$. We let $\mathbf{\Pi}_{\epsilon, R}(\gamma)$ denote the quotient of $\mathbf{\widetilde{\Pi}}_{\epsilon, R}(\gamma)$ by $r$. ![Short orthogeodesics and feet of a good pants.](ortho.jpg) The planar pair of pants $P_R$ is equipped with six oriented *short orthogeodesics* $a_{ij}$, which are the geodesic segments connecting the cuffs $C_i$ The planar pair of pants $P_R$ is equipped with three *short orthogeodesics*, which are the orthogonal geodesic segments from one cuff to another. The short orthogeodesic from $C_i$ to $C_j$ is denoted $a_{ij}$. A marked pair of pants $\pi\in \mathbf{\widetilde{\Pi}}_{\epsilon, R}$ comes with left and right short orthogeodesics, respectively denoted $\eta^{\ell}(\pi)$ and $\eta^r(\pi)$, which are defined as follows. Choose a representative $(f,C_i)\in\pi$ that sends cuffs $C_j\subset\partial P_R$ to geodesics $\gamma_j\subset M$. We let $\eta^{\ell}(\pi)$ be the geodesic segment homotopic to $f(a_{i,i+1})$ (through segments from $\gamma_i$ to $\gamma_{i-1}$) meeting $\gamma_i$ and $\gamma_{i-1}$ orthogonally. 
Similarly, $\eta^{r}(\pi)$ is the geodesic segment homotopic to $f(a_{i,i-1})$ meeting $\gamma_i$ and $\gamma_{i+1}$ orthogonally. Note that these definitions do not depend on the choice of representative in $\pi$. We endow the short orthogeodesics of $\pi$ with unit speed parametrizations, and from their construction, they are oriented to go from $\gamma_i$ to the other cuffs. The *feet* of a short orthogeodesic $\gamma$ of $\pi$ are the unit vectors $-\eta'(0)$ and $\eta'(\ell(\eta))$. We call $\mathop{\mathrm{\mathbf{f}\mathbf{t}}}^{\ell} (\pi) = -(\eta^{\ell}(\pi))'(0)$ and $\mathop{\mathrm{\mathbf{f}\mathbf{t}}}^{r} (\pi) = -(\eta^{r}(\pi))'(0)$ respectively the left and right foot of $\pi$. We define the *half length* $\mathbf{h}\mathbf{l}(\gamma_i)$ of $\gamma_i$ to be the complex distance between lifts of $\eta_{i,i-1}$ and $\eta_{i,i+1}$ to $\mathbf{H}^3$ that differ by a positively oriented segment of $\gamma$ joining $\eta_{i,i-1}$ to $\eta_{i,i+1}$. It turns out that $\mathbf{l}(\gamma) = 2\mathbf{h}\mathbf{l}(\gamma)$ (see [@KW], Section 2.8). The unit normal bundle $\mathop{\mathrm{N}}^1 (\gamma)$ to a oriented closed geodesic $\gamma$ in $M$ is acted upon simply and transitively by the group $\mathbf{C}/ (\mathbf{l}(\gamma) + 2\pi i \mathbf{Z}).$ We define $\mathop{\mathrm{N}}^1(\sqrt{\gamma})$ to be the quotient of $\mathop{\mathrm{N}}^1 (\gamma)$ by the involution $n\mapsto n + \mathbf{h}\mathbf{l}(\gamma)$. This is acted upon simply and transitively by $\mathbf{C}/(\mathbf{h}\mathbf{l}(\gamma) + 2\pi i \mathbf{Z})$. As $\mathbf{h}\mathbf{l}(\gamma) = \mathbf{l}(\gamma)/2$, the left and right feet of $\pi \in\mathbf{\widetilde{\Pi}}_{\epsilon, R}(\gamma)$ turn out to define the same point in $\mathop{\mathrm{N}}^1(\sqrt{\gamma})$. We thus have a map $$\mathop{\mathrm{\mathbf{f}\mathbf{t}}}: \mathbf{\widetilde{\Pi}}_{\epsilon, R}(\gamma) \longrightarrow\mathop{\mathrm{N}}^1\left(\sqrt{\gamma}\right)$$ which assigns the pants in $\pi$ to its foot in $\mathop{\mathrm{N}}^1 (\sqrt{\gamma})$. This map is also well defined on the unoriented version $\mathbf{\Pi}_{\epsilon, R}(\gamma)$. ![A good gluing between $\pi_1$ and $\pi_2$.](goodglue.jpg) Two pants $\pi_1, \pi_2 \in \mathbf{\widetilde{\Pi}}_{\epsilon, R}(\gamma)$ that induce opposite orientations on $\gamma$ are $(\epsilon,R)$-*well matched* or *well glued* along $\gamma\in \mathbf{\Gamma}_{\epsilon, R}$ if $$\mathop{\mathrm{dist}}_{\mathop{\mathrm{N}}^1(\sqrt{\gamma})} (\mathop{\mathrm{\mathbf{f}\mathbf{t}}}\pi_1, \tau(\mathop{\mathrm{\mathbf{f}\mathbf{t}}}\pi_2)) < \frac{\epsilon}{R},$$ where $\tau$ is the translation of $\mathop{\mathrm{N}}^1(\sqrt{\gamma})$ given by $\tau(x) =x + 1 + i\pi$. In other words, the shearing between the feet is always approximately one (and the $i\pi$ takes into account that they point toward nearly opposite directions). Heuristically, the nearly constant shearing ensures you are never gluing the thin part of a pants (near the short orthogeodesics) to the thin part of another pants repeatedly. For a finite set $X$, we let $\mathscr{M}(X)$ be the space of measures on $X$ that are valued on the nonnegative integers. For $\mu\in\mathscr{M}( X)$, we let $\mathscr{S}(\mu)$ be the multiset consisting of $\mu(x)$ copies of each $x\in X$. For $\mu\in \mathscr{M}(\mathbf{\Pi}_{\epsilon, R})$, we let $\tilde{\mu}\in \mathscr{M}(\mathbf{\widetilde{\Pi}}_{\epsilon, R})$ denote the measure so that $\tilde{\mu}(\tilde{\pi}) = \mu(\pi)$ for any end $\tilde{\pi}$ of $\pi$. 
For $\gamma\in \mathbf{\Gamma}_{\epsilon, R}$ and $\mathscr{F}$ a multiset of elements of $\mathbf{\widetilde{\Pi}}_{\epsilon, R}$, we let $\mathscr{F}_{\gamma}$ consists of the elements of $\mathscr{F}$ that also lie in $\mathbf{\widetilde{\Pi}}_{\epsilon, R}(\gamma)$. The multiset $\mathscr{S}_{\gamma}(\tilde{\mu})$ decomposes into a disjoint union $\mathscr{F}^-_{\gamma}\sqcup \mathscr{F}^+_{\gamma}$ of the ends reversing and preserving the orientation of $\mu$. There is a map $$\partial: \mathscr{M}(\mathbf{\Pi}_{\epsilon, R}) \longrightarrow\mathscr{M}(\mathbf{\Gamma}_{\epsilon, R})$$ defined via $\partial \mu (\gamma) = \sum_{\pi\in \mathbf{\widetilde{\Pi}}_{\epsilon, R}(\gamma)} \mu(\pi)$. We say a surface is *built out of* $\mu$ if it is obtained from gluing the elements of a submultiset of ends $\mathscr{F}\subset\mathscr{S}(\tilde{\mu})$ via bijections $\sigma_{\gamma}:\mathscr{F}^-_{\gamma}\to\mathscr{F}^+_{\gamma}$ for every cuff $\gamma\in\mathop{\mathrm{supp}}\partial\mu$. A surface is $(\epsilon,R)$-*well built* out of $\mu$ if all the gluings are $(\epsilon,R)$-good. ## Assembling the surface The first step in the construction is to show that a surface made out of good pants glued via good gluings is essential and nearly closed. Precisely, **Theorem 19**. *Let $\mu\in\mathscr{M}(\mathbf{\Pi}_{\epsilon, R})$ be so that a closed surface $S$ may be $(\epsilon,R)$-well built from $\mu$. Then, $S$ is essential and $(1+O(\epsilon))$-quasifuchsian.* *Proof.* The proof of this is long and is the content of Section 2 of [@KM]. A more concise proof that $\rho$ is $K(\epsilon)$-quasifuchsian, with $K(\epsilon)\to 1$ as $\epsilon\to 0$ (without the quantitative statement that $K(\epsilon) = 1+O(\epsilon)$) can be found in the appendix of [@KW]. ◻ It remains to find such a measure $\mu\in \mathscr{M}(\mathbf{\Pi}_{\epsilon, R})$ from which we can build a closed surface with good gluings. The matching theorem below tells us we can take $\mu$ to be the measure $\mu_{\epsilon,R}$ that gives weight 1 to each $\pi\in\mathbf{\Pi}_{\epsilon, R}$. **Theorem 20**. *For $\epsilon>0$ sufficiently small, there is $R\geq R_0(\epsilon)$ so if $\gamma\in\mathbf{\Gamma}_{\epsilon, R}$, there is a bijection $$\sigma_{\gamma} : \mathbf{\Pi}_{\epsilon, R}^- (\gamma) \longrightarrow\mathbf{\Pi}_{\epsilon, R}^+(\gamma)$$ with the property that $\pi$ is $(\epsilon,R)$-well matched to $\sigma_{\gamma}(\pi)$ for all $\pi\in\mathbf{\Pi}_{\epsilon, R}^-(\gamma)$.* Gluing the pants in $\mathbf{\widetilde{\Pi}}_{\epsilon, R}(\gamma)$ via $\sigma_{\gamma}$ for every $\gamma$ gives us a closed surface, which by Theorem [Theorem 19](#good){reference-type="ref" reference="good"} is essential and $(1+O(\epsilon))$-quasifuchsian. The crucial ingredient in the proof of the matching theorem is the fact that the feet of pants in $\mathbf{\Pi}_{\epsilon, R}$ are well distributed in $\mathop{\mathrm{N}}^1(\sqrt{\gamma})$ for every $\gamma \in \mathbf{\Gamma}_{\epsilon, R}$. This is the content of the equidistribution theorem below. ![The feet of the good pants with cuff $\gamma$ are well distributed in $\mathop{\mathrm{N}}^1(\sqrt{\gamma})$.](equid.jpg) **Theorem 21** (Equidistribution of feet). *There is $q=q(M)>0$ so that if $\epsilon>0$ is small enough and $R > R_0(\epsilon)$, the following holds. Let $\gamma\in \mathbf{\Gamma}_{\epsilon, R}$. 
If $B\subset\mathop{\mathrm{N}}^1 (\sqrt{\gamma})$, then $$(1-\delta) \lambda (N_{-\delta} B) \leq \frac{\#\{ \pi \in \mathbf{\Pi}_{\epsilon, R}(\gamma) : \mathop{\mathrm{\mathbf{f}\mathbf{t}}}\pi \in B\}} {C(\epsilon,R,\gamma)} \leq (1+\delta) \lambda(N_{\delta} B),$$ where $\lambda=\lambda_{\gamma}$ is the probability Lebesgue measure on $\mathop{\mathrm{N}}^1(\sqrt{\gamma})$, $\delta = e^{-qR}$, $N_{\delta}(B)$ is the $\delta$-neighborhood of $B$, $N_{-\delta}(B)$ is the complement of $N_{\delta} (\mathop{\mathrm{N}}^1(\sqrt{\gamma}) - B)$ and $C(\epsilon,R,\gamma)$ is a constant depending only on $\epsilon$, $R$ and $\mathbf{l}(\gamma)$.* *Proof.* The proof of the equidistribution of feet is the content of [@KW2]. The main engine is the mixing of the frame flow in $\mathop{\mathrm{Fr}}M$. We use need a slight generalization of this theorem in Sections 5 and 6, which is explained then. ◻ To complete the exposition, we will include a proof of the matching theorem using the equidistribution of feet along a curve. This is a relatively short argument which uses the Hall marriage theorem of combinatorics. Before stating it, we fix some notation: in a graph $X$, we wrtie $v\sim w$ for two vertices $v$ and $w$ that are connected by an edge. For a set $A$ of vertices, we let $\partial N_1 (A)$ be the vertices $w\notin A$ satisfying $w\sim v$ for some $v\in A.$ **Theorem 22** (Hall marriage). *Suppose $X$ is a bipartite graph, i.e., the vertices $V$ of $X$ are the disjoint union of $V_1$ and $V_2$, where no two elements of a given $V_i$ are connected by an edge. Then, there is a *matching* $m:V_1\to V_2$, namely an injection so that $v\sim m(v)$ if and only if $$\# A \leq \# \partial N_1 (A)$$ for any finite $A \subset V_1$.* We will also use the following fact **Proposition 23**. *Let $M$ be a Riemannian manifold equipped with a volume measure $|\cdot|$ and suppose $A\subset M$ satisfies $|N_{\eta} (A)| \leq 1/2$ for some $\eta>0$. Then, $$\frac{|N_{\eta}(A)|}{|A|}\geq 1 + \eta h(M),$$ where $h(M)$ is the Cheeger constant of $M$.* *Proof.* $$|N_{\eta}A - A| = \int_0^{\eta} |\partial N_t A| \,dt \geq \eta \, h(M) \, |A|.$$ ◻ It can be shown that the Cheeger constant of the flat torus $\mathop{\mathrm{N}}^1(\sqrt{\gamma})$ satisfies $h\left(\mathop{\mathrm{N}}^1(\sqrt{\gamma})\right) > 1/R$. (See, for example, [@HHM].) Thus, if $A\subset\mathop{\mathrm{N}}^1(\sqrt{\gamma})$ satisfies $\lambda(N_{\eta} (A)) \leq 1/2$, we have $$\label{Cheeger} \frac{\lambda(N_{\eta}(A))}{\lambda(A)}> 1 + \frac{\eta}{R}.$$ We also define $$\mathop{\mathrm{Ft}}B := \#\{\pi\in \mathbf{\Pi}_{\epsilon, R}(\gamma): \mathop{\mathrm{\mathbf{f}\mathbf{t}}}\pi\in B \}.$$ The inequality [\[Cheeger\]](#Cheeger){reference-type="ref" reference="Cheeger"}, together with the equidistribution of feet gives us **Lemma 24**. *Let $B\subset\mathop{\mathrm{N}}^1 (\sqrt{\gamma})$ and let $\rho:\mathop{\mathrm{N}}^1(\sqrt{\gamma})\to \mathop{\mathrm{N}}^1(\sqrt{\gamma})$ be a translation. 
Then, $$\mathop{\mathrm{Ft}}B \leq \mathop{\mathrm{Ft}}\rho \left(N_{\epsilon/R} B\right).$$* *Proof.* From the equidistribution of feet and the fact that $\rho$ is measure preserving, we have that $$(1-\delta) \lambda\left( N_{\epsilon/R - \delta} B \right) \leq \frac{\mathop{\mathrm{Ft}}\rho (N_{\epsilon/R} B)}{C_{\epsilon,R,\gamma}}.$$ Thus, it suffices to show that $$\label{suff} \frac{\mathop{\mathrm{Ft}}B}{C_{\epsilon,R,\gamma}} \leq (1-\delta)\, \lambda\left( N_{\epsilon/R - \delta} B \right).$$ Using the equidistribution of feet again, we have that $$\frac{\mathop{\mathrm{Ft}}B}{C_{\epsilon,R,\gamma}} \leq (1+\delta) \lambda\left( N_{\delta} B\right).$$ This reduces our goal to showing $$\label{suff2} \lambda\left( N_{\delta} B \right) \leq \frac{1-\delta}{1+\delta}\, \lambda\left(N_{\epsilon/R - \delta} B \right).$$ Suppose now that $\lambda(N_{\epsilon/2R} B) \leq 1/2$. From equation [\[Cheeger\]](#Cheeger){reference-type="ref" reference="Cheeger"}, we have that $$\lambda\left(N_{\delta} B\right) < \frac{1}{1 + \left(\frac{\epsilon}{2R} - \delta \right)\frac{1}{R}} \lambda\left( N_{\epsilon/2R} B\right).$$But if $R$ is large enough, as $\delta = e^{-qR}$, we have[^1] $$\frac{1}{1 + \left(\frac{\epsilon}{2R} - \delta \right)\frac{1}{R}} \leq \frac{1 - \delta}{1+\delta}.$$ Thus, we conclude that [\[suff2\]](#suff2){reference-type="ref" reference="suff2"} holds if $\lambda(N_{\epsilon/R} B) \leq 1/2.$ In particular, $$\mathop{\mathrm{Ft}}B \leq \mathop{\mathrm{Ft}}\rho (N_{\epsilon/R} B)$$ holds in this case. On the other hand, if $\lambda(N_{\epsilon/2R}B)>1/2$, let $C = \mathop{\mathrm{N}}^1(\sqrt{\gamma}) - N_{\epsilon/R} \rho(B)$. Then, $\lambda(N_{\epsilon/2R} C) \leq 1/2$ and so by the same argument as above, for $C$ instead of $B$ and $\rho^{-1}$ instead of $\rho$, we have $$\mathop{\mathrm{Ft}}C \leq \mathop{\mathrm{Ft}}\rho^{-1} (N_{\epsilon/R} C).$$ But $\mathop{\mathrm{Ft}}C = \mathop{\mathrm{Ft}}\mathop{\mathrm{N}}^1 (\sqrt{\gamma}) - \mathop{\mathrm{Ft}}\rho(N_{\epsilon/R} B)$ and $\mathop{\mathrm{Ft}}\rho^{-1} (N_{\epsilon/R} C) = \mathop{\mathrm{Ft}}\mathop{\mathrm{N}}^1(\sqrt{\gamma}) - \mathop{\mathrm{Ft}}B$. Therefore, $$\mathop{\mathrm{Ft}}B \leq \mathop{\mathrm{Ft}}\rho (N_{\epsilon/R} B),$$ in this case as well. This completes the proof of the lemma. ◻ *Proof of the matching theorem.* For $\gamma\in\mathbf{\Gamma}_{\epsilon, R}$, we can make $\mathbf{\widetilde{\Pi}}_{\epsilon, R}(\gamma)$ into a graph by saying that $\pi_1\sim\pi_2$ if $\pi_1$ and $\pi_2$ are $(\epsilon,R)$-well matched, namely if they induce opposite orientations on $\gamma$ and $\mathop{\mathrm{dist}}_{\mathop{\mathrm{N}}^1(\sqrt{\gamma})} (\mathop{\mathrm{\mathbf{f}\mathbf{t}}}\pi_1,\tau(\mathop{\mathrm{\mathbf{f}\mathbf{t}}}\pi_2)) < \epsilon/R$, where $\tau:\mathop{\mathrm{N}}^1(\sqrt{\gamma})\to\mathop{\mathrm{N}}^1(\sqrt{\gamma})$ is the translation $\tau(x) = x+1+i\pi$. Since only the pants inducing opposite orientations on $\gamma$ may be matched, $\mathbf{\widetilde{\Pi}}_{\epsilon, R}(\gamma) = \mathbf{\Pi}_{\epsilon, R}^-(\gamma)\sqcup\mathbf{\Pi}_{\epsilon, R}^+(\gamma)$ is a bipartite graph. 
We wish to show there is a matching $$\sigma_{\gamma} : \mathbf{\Pi}_{\epsilon, R}^-(\gamma) \longrightarrow\mathbf{\Pi}_{\epsilon, R}^+(\gamma).$$ By the Hall marriage theorem, it suffices to show that for $A\subset\mathbf{\Pi}_{\epsilon, R}^-(\gamma)$, $$\begin{aligned} \# A &\leq \partial N_1 (A) \\ &= \# \{\pi^+ \in \mathbf{\Pi}_{\epsilon, R}^+(\gamma) : |\mathop{\mathrm{\mathbf{f}\mathbf{t}}}\pi^+ - \tau(\mathop{\mathrm{\mathbf{f}\mathbf{t}}}\pi^-) | < \epsilon/R \text{ for some }\pi^-\in A \} \\ &= \mathop{\mathrm{Ft}}\tau(N_{\epsilon/R} \mathop{\mathrm{\mathbf{f}\mathbf{t}}}A).\end{aligned}$$ This, in turn, follows from Lemma [Lemma 24](#matchlem){reference-type="ref" reference="matchlem"} for $B = \mathop{\mathrm{\mathbf{f}\mathbf{t}}}A$ and $\rho = \tau$. Since the sets $\mathbf{\Pi}_{\epsilon, R}^-(\gamma)$ and $\mathbf{\Pi}_{\epsilon, R}^+(\gamma)$ are finite and have the same cardinality, it follows that $\sigma_{\gamma}$ is a bijection, which concludes the proof of the matching Theorem [Theorem 20](#matching){reference-type="ref" reference="matching"}. ◻ In summary, we have shown the matching Theorem [Theorem 20](#matching){reference-type="ref" reference="matching"}, which allows us to build a closed $(1+O(\epsilon))$-quasifuchsian surface $S_{\epsilon,R}$ in $M$ by gluing one copy of each pants in $\mathbf{\Pi}_{\epsilon, R}$ via $(\epsilon,R)$-good gluings. # Connected surfaces going through every good pants Recall that $\mu_{\epsilon,R}\in \mathscr{M}(\mathbf{\Pi}_{\epsilon, R})$ is the measure so that $\mu_{\epsilon,R} (\pi) = 1$ for each $\pi\in\mathbf{\Pi}_{\epsilon, R}$. In the previous section, we have seen that a closed, oriented, essential and $(1+O(\epsilon))$-quasifuchsian surface $S_{\epsilon,R}$ may be built from $\mu_{\epsilon,R}$. We do not know, however, whether $S_{\epsilon,R}$ is connected, or what its components may look like. Following ideas of Liu and Markovic [@LM], we show that if we take $N=N(\epsilon,R,M)$ copies of $S_{\epsilon,R}$, it is possible to perform cut-and-paste surgeries around certain good curves in order to get *connected* closed, oriented, essential and $(1+O(\epsilon))$-quasifuchsian surfaces $\hat{S}_{\epsilon,R}$. **Theorem 25**. *There is an integer $N = N(\epsilon,R,M) > 0$ so that a *connected*, closed, oriented, essential and $(1+O(\epsilon))$-quasifuchsian surface may be built from $N\mu_{\epsilon,R}$.* We define a measure $\mu\in\mathscr{M}(\mathbf{\Pi}_{\epsilon, R})$ to be *irreducible* if for any nontrivial decomposition $\mu = \mu_1 + \mu_2$, there is a curve $\gamma\in\mathbf{\Gamma}_{\epsilon, R}$ so that $\gamma$ lies in $\mathop{\mathrm{supp}}\partial\mu_1$ and its orientation reversal $\gamma^{-1}$ lies in $\mathop{\mathrm{supp}}\partial\mu_2$. If $\mu$ is *not* irreducible, then no connected surface may be built from $\mu$. In fact, if there is a nontrivial decomposition $\mu=\mu_1+\mu_2$ so that if $\gamma\in \mathop{\mathrm{supp}}\partial\mu_1$, then $\gamma^{-1}\notin\mathop{\mathrm{supp}}\partial\mu_2$, then no pants in $\mathop{\mathrm{supp}}\partial\mu_1$ may be glued to pants in $\mathop{\mathrm{supp}}\partial\mu_2$. Thus, a surface built out of $\mu$ will have at least two components. On the other hand, if $\mu$ is irreducible, we have the following theorem, which is close to Lemma 3.9 of Liu and Markovic [@LM]. (They do not assume $\mu$ to be positive on all pants, using a weaker hypothesis instead, but the conclusion is the same.) **Theorem 26**. 
*Suppose $\mu\in \mathscr{M}(\mathbf{\Pi}_{\epsilon, R})$ is an irreducible measure so that $\mu(\pi)>0$ for every $\pi\in\mathbf{\Pi}_{\epsilon, R}$ and a closed surface may be $(\epsilon,R)$-well built from $\mu$. Then, there is an integer $N = N(\mu)$ so that a *connected* closed surface may be $(2\epsilon,R)$-well built from $N\mu$.* In view of that, our goal for this section is to prove that $\mu_{\epsilon,R}$ is irreducible. Fortunately, we have the following theorem, which is Proposition 7.1 of [@LM]. **Proposition 27**. *Given two curves $\gamma_0,\gamma_1 \in \mathbf{\Gamma}_{\epsilon, R}$, we may find a sequence of pants $\pi_0,\ldots,\pi_n$ in $\mathbf{\Pi}_{\epsilon, R}$ so that $\gamma_0$ is a cuff of $\pi_0$, $\gamma_1$ is a cuff of $\pi_n$ and $\pi_i$ may be glued to $\pi_{i+1}$ for $0\leq i <n$.* The gluings in the proposition above are not necessarily $(\epsilon,R)$-good. *Proof sketch.* Liu and Markovic argue that $\gamma_0$ and $\gamma_1$ are, respectively, the boundaries of pants $\pi_0$ and $\pi_1$ that have cuffs in $\mathbf{\Gamma}_{\epsilon/10000,R}$. For $i\in \{0,1\}$, the pants $\pi_i$ is obtained by taking an appropriate self-orthogeodesic segment $\sigma_i$ of $\gamma_i$ and homotoping the closed curves comprising of the pieces of $\gamma_i$ between the endpoints of $\sigma_i$ and $\sigma_i$ to good cuffs. Thus, this reduces the problem to showing that any two curves $\gamma_0, \gamma_1\in \mathbf{\Gamma}_{\epsilon/10000,R}$ may be connected via pants in $\mathbf{\Pi}_{\epsilon, R}$. This is done by an yet trickier geometric construction, called *swapping*. Roughly, using mixing of the frame flow, the curves $\gamma_0$ and $\gamma_1$ are joined by two segments of length approximately $R/2$. These segments are chosen in a way that they, together with $\gamma_0$ and $\gamma_1$ lie on a surface $F$, that has bounded genus and exactly 4 boundary components. This surface $F$, in turn, admits a pants decomposition by pants in $\mathbf{\Pi}_{\epsilon, R}$. ◻ **Theorem 4**. *The measure $\mu_{\epsilon,R}$ is irreducible.* *Proof.* Let $\mu_{\epsilon,R} = \mu_0 + \mu_1$ be a nontrivial decomposition. Let $\gamma_0\in \mathop{\mathrm{supp}}\partial\mu_0$ and $\gamma_1\in\mathop{\mathrm{supp}}\partial\mu_1$. In view of Proposition [Proposition 27](#conn){reference-type="ref" reference="conn"}, there are pants $\pi_0,\ldots,\pi_n$ in $\mathbf{\Pi}_{\epsilon, R}= \mathop{\mathrm{supp}}\mu_{\epsilon,R}$ so that $\gamma_0$ is a cuff of $\pi_0$, $\gamma_1$ is a cuff of $\pi_n$ and $\pi_i$ may be glued to $\pi_{i+1}$. This means there is a curve $\gamma$, which is a cuff of some $\pi_i$, so that $\gamma\in\mathop{\mathrm{supp}}\mu_1$ and $\gamma^{-1}\in\mathop{\mathrm{supp}}\mu_2$. This means $\mu_{\epsilon,R}$ is irreducible. ◻ We conclude the section by providing a proof of Theorem [Theorem 26](#irred){reference-type="ref" reference="irred"}. The regluing of surfaces featured in this proof provides inspiration for the construction of non-equidistributing surfaces in Section 7. We start with the following lemma about pants decompositions. **Lemma 28**. *Let $S$ be a surface with a pants decomposition $P$. Then, $S$ has a double cover $\hat{S}$ to which the pants in $P$ lift homeomorphically to pants with nonseparating cuffs.* ![Proof of Lemma [Lemma 28](#double){reference-type="ref" reference="double"}. On the left, we have the dual graph $X$ to the pants decomposition $P$ of $S$. On the right, we have the double cover $\hat{X}$. 
$T_i^1$ and $T_i^2$ are the lifts of $T_i$ to $\hat{X}$, which are nonseparating in $\hat{X}$.](doublecover.jpg){#doublecover} *Proof.* Let $X$ be the graph whose the vertices are pants in $P$ and the edges are the cuffs shared by pants in $P$. A cuff is separating in $S$ if and only if its corresponding edge in $X$ is separating. Thus, our task is to show that $X$ has a double cover $\hat{X}$ that only has nonseparating edges. To do so, let $F= \sqcup_{i=1}^n T_i \subset X$ be the graph-theoretic forest consisting of all separating edges of $X$, where the $T_i$ are disjoint trees. We also write $X - F = \sqcup_{j=1}^m C_j$, where $C_j$ are disjoint connected components. For each $j$, we take a double cover $d_j:\hat{C}_j\to C_j$. This gives us a double cover $d:\sqcup_{j=1}^m \hat{C}_j\to \sqcup_{j=1}^m C_j$. Note that each $\hat{C_j}$ consists of nonseparating edge. If some $\hat{C_j}$ had a separating edge $e$, it would have another separating edge $e'$, the image of $e$ under the nontrivial deck transformation $\hat{C_j}\to\hat{C_j}$. Thus, $\hat{C_j}-\{e,e'\}$ consists of *three* components, otherwise one of $e$ or $e'$ would not be separating. In particular, the inverse image of the (connected) set $C_j-d_j(e)$ under $d_j$ would consist of three compoents, contradicting the fact that $d_j$ is a double cover. For each $i$, we attach a copy of $T_i$ to each of the two lifts that $\partial T_i$ has in $\sqcup_{i=1}^m \hat{C}_j$. As a result, we get a double cover $D:\hat{X}\to X$ which extends $d$. (See Figure [3](#doublecover){reference-type="ref" reference="doublecover"}.) The trees $T_i\subset X$ have nonseparating lifts to $\hat{X}$, as both of their lifts are bounded by the same subset of the $\{\hat{C}_j\}_{j=1 }^m$. ◻ ![Regluing $S_1$ to $S_2$ with a good gluing and reducing the number of components of $S$.](reglue.jpg){#reglue} *Proof of Theorem [Theorem 26](#irred){reference-type="ref" reference="irred"}.* Suppose the closed oriented essential $(1+O(\epsilon))$-quasifuchsian surface $S$ we build out of $\mu_{\epsilon,R}$ has $r$ components: $$S = S_1\sqcup \cdots \sqcup S_r.$$ We take a finite covering $\hat{S}\to S$ of degree $N = N(\epsilon,R)$ so that the good pants lift homeomorphically and each good curve appears at least $r$ times in each component. In view of Lemma [3](#doublecover){reference-type="ref" reference="doublecover"}, we may assume that the good curves lift to nonseparating curves in $\hat{S}$. Due to the irreducibility of $\mu_{\epsilon,R}$, there is a cuff $\gamma\in\mathbf{\Gamma}_{\epsilon, R}$ that is shared by at least two components. Let $\Pi_k(\gamma)$ be the ends of pants in $\mathbf{\widetilde{\Pi}}_{\epsilon, R}(\gamma)$ that lie in $S_k$. We can divide $\Pi_k(\gamma) = \Pi_k^-(\gamma) \sqcup \Pi_k^+(\gamma)$ that induce a negative and positive orientation on $\gamma$. We wish to show **Claim 1**. 
*There are $i\neq j$ and pants $\pi_i^- \in \Pi_i^-(\gamma)$ and $\pi_j^+ \in \Pi_j^+(\gamma)$ such that $$|\mathop{\mathrm{\mathbf{f}\mathbf{t}}}_{\gamma} \pi_i^- - \mathop{\mathrm{\mathbf{f}\mathbf{t}}}_{\gamma} \pi_j^+ | < \frac{\epsilon}{R}.$$* From the claim, since $\pi^-_k$ is $(\epsilon,R)$-well glued to some $\pi^+_k\in\Pi^+_k(\gamma)$ for $k\in \{i,j\}$, we have that $$|\mathop{\mathrm{\mathbf{f}\mathbf{t}}}\pi_i^- - \tau(\mathop{\mathrm{\mathbf{f}\mathbf{t}}}\pi_j^+) | <\frac{2\epsilon}{R} \quad\text{and}\quad |\mathop{\mathrm{\mathbf{f}\mathbf{t}}}\pi_i^+ - \tau(\mathop{\mathrm{\mathbf{f}\mathbf{t}}}\pi_j^-) | <\frac{2\epsilon}{R}$$ and so $\pi_i^{\mp}$ may be $(2\epsilon,R)$-well glued to $\pi_j^{\pm}$, where $\tau(x) = x + 1 + i\pi$. Performing this regluing reduces the number of components of $\hat{S}$, and continuing this process we obtain a connected surface with the desired properties. We conclude by proving the claim. Let $U_k:= N_{\epsilon/2R} (\mathop{\mathrm{\mathbf{f}\mathbf{t}}}\Pi_k^-(\gamma))$. From the equidistribution of feet, we have that $\bigcup_{k=1}^r U_k = \mathop{\mathrm{N}}^1(\sqrt{\gamma})$. Indeed, if we let $F = \mathop{\mathrm{\mathbf{f}\mathbf{t}}}(\mathbf{\Pi}_{\epsilon, R}^-(\gamma))$, we have that $$0 = \frac{\mathop{\mathrm{Ft}}(\mathop{\mathrm{N}}^1(\sqrt{\gamma})- F)}{\#\mathbf{\Pi}_{\epsilon, R}(\gamma)} \geq (1-\delta) |N_{-\delta} (\mathop{\mathrm{N}}^1(\sqrt{\gamma}) - F)|,$$ which implies that $N_{\delta} (F)$ has full measure. Thus, as $\epsilon/2R> \delta = e^{-qR}$, we conclude that $N_{\epsilon/2R}(F) = \bigcup_{k=1}^r U_k$ is all of $\mathop{\mathrm{N}}^1(\sqrt{\gamma})$. But as $\mathop{\mathrm{N}}^1(\sqrt{\gamma})$ is connected and the $U_k$ are open, there has to be an $i\neq j$ so that $U_i\cap U_j\neq \emptyset$. ◻ # Barycenters of the good pants are equidistributed {#equid} ![Left: midpoints and barycenter of an ideal triangle. Right: one of the three framed barycenters of an ideal triangle.](triangle.jpg "fig:") ![Left: midpoints and barycenter of an ideal triangle. Right: one of the three framed barycenters of an ideal triangle.](onebar.jpg "fig:") Let $T\subset\mathbf{H}^3$ be an oriented ideal triangle. There are three horocycles based on the vertices of $T$ that are pairwise tangent, with their tangency points lying in $\partial T$. The points where the horocycles meet $\partial T$ are called the *midpoints* of the edges of $T$. The geodesic rays from the midpoints of the edges of $T$ towards the opposite vertices meet at the *barycenter* $b(T)$ of $T$. The *framed barycenters* of $T$ are the frames $(v,w,n)$ based at $b(T)$, where $v$ points away from a side of $T$, $n$ is normal to $T$ and $v\times w = n$. The barycenter of an ideal triangle $T\subset M$ is the projection onto $M$ of the barycenter of a lift of $T$ to $\mathbf{H}^3$. The framed barycenters of $T\subset M$ are the projections to $\mathop{\mathrm{Fr}}M$ of the framed barycenters of a lift of $T$ to $\mathbf{H}^3$. A good pants $\pi\in\mathbf{\Pi}_{\epsilon, R}$ has a pleated structure consisting of two ideal triangles, as in Figure [5](#twist){reference-type="ref" reference="twist"}. Its *barycenters* are the framed barycenters of these ideal triangles. We let $\beta_{\epsilon,R}$ be the weighted uniform probability measure supported on the barycenters of the pants in $\mathbf{\Pi}_{\epsilon, R}$. In this section, we will show **Theorem 29** (Equidistribution of barycenters).
*For $\epsilon \to 0$ and $R(\epsilon)\to \infty$ fast enough, $$\beta_{\epsilon,R(\epsilon)} \stackrel{\star}\rightharpoonup\nu_{\mathop{\mathrm{Fr}}M},$$ where $\nu_{\mathop{\mathrm{Fr}}M}$ is the probability volume measure on $\mathop{\mathrm{Fr}}M$.* In other words, the barycenters of the good pants equidistribute in $\mathop{\mathrm{Fr}}M$ as $\epsilon\to 0$ and $R(\epsilon)\to\infty$. This will be used in Section 6 to show that the connected surface $S_{\epsilon,R}$ built out of $N= N(\epsilon,R,M)$ copies of each pants in $\mathbf{\Pi}_{\epsilon, R}$ equidistributes as $\epsilon\to 0$ and $R\to\infty$. This will follow from the fact that the unit tangent bundle of each pair of pants (outside of the pleats) may be obtained from the barycenters via the right action of a set $\Delta \subset\mathop{\mathrm{PSL}}_2 \mathbf{R}$. ![Pleated structure of a pair of pants consisting of two ideal triangles.](3dspun.jpg){#twist} ## Outline of the proof We will show the equidistribution of barycenters, Theorem 5.1, in three steps. First, we will prove that the feet of all pants in $\mathbf{\Pi}_{\epsilon, R}$, seen as points in $\mathop{\mathrm{Fr}}M$, equidistribute as $\epsilon\to 0$. Precisely, a foot $f$ of $\pi = [(f,C_i)] \in \mathbf{\widetilde{\Pi}}_{\epsilon, R}$ is associated to the frame $(v,f,v\times f)$, where $v$ is the unit tangent vector to the $\gamma\in\mathbf{\Gamma}_{\epsilon, R}$ homotopic to $f(C_i)$. (With this identification, we can realize $\mathop{\mathrm{N}}^1(\gamma)$ as a subset of $\mathop{\mathrm{Fr}}M$.) We let $\phi_{\epsilon,R}$ be the weighted uniform probability measure on $\mathop{\mathrm{Fr}}M$ supported on the feet of pants in $\mathbf{\Pi}_{\epsilon, R}$. We will show **Lemma 30** (Equidistribution of feet in $\mathop{\mathrm{Fr}}M$). *For $\epsilon \to 0$ and $R(\epsilon)\to \infty$ fast enough, $$\phi_{\epsilon,R(\epsilon)} \stackrel{\star}\rightharpoonup\nu_{\mathop{\mathrm{Fr}}M}.$$* The proof of Lemma 5.2 will use the fact that the feet are well-distributed in the unit normal bundle of a given good curve (due to Kahn-Wright [@KW2], in a modified version), as well as the fact that the good curves themselves are asymptotically almost surely well-distributed in $\mathop{\mathrm{T}}^1 M$ (due to Lalley [@L]). Let $a_t = \mathop{\mathrm{diag}}(e^{t/2},e^{-t/2})$ and $k\in \mathop{\mathrm{SO}}_2$ be the ninety-degree rotation bringing the first vector in a frame to the second, fixing the third. The second step of the proof is to observe that the right action[^2] $v_R:= R_{a_{R/2} k a_{\log(\sqrt{3}/2)}}$ of the element $$a_{R/2} \,k\, a_{\log(\sqrt{3}/2)} \in \mathop{\mathrm{PSL}}_2 \mathbf{C},$$ brings the feet of a pants $\pi$ to frames that are very close to the framed barycenters of the triangles of the pleated structure of $\pi$. We call the images of the feet of $\mathbf{\Pi}_{\epsilon, R}$ under $v_R$ the *approximate barycenters* of the pants in $\mathbf{\Pi}_{\epsilon, R}$. In Lemma [Lemma 35](#ab){reference-type="ref" reference="ab"}, we show that the distances in $\mathop{\mathrm{Fr}}M$ between the approximate barycenters and the actual barycenters of pants in $\mathbf{\Pi}_{\epsilon, R}$ go to zero uniformly as $\epsilon\to 0$. Let $\beta^a_{\epsilon,R}$ be the (weighted) uniform probability measure on the approximate barycenters of the pants in $\mathbf{\Pi}_{\epsilon, R}$. We will show that these approximate barycenters equidistribute in $\mathop{\mathrm{Fr}}M$, namely **Proposition 31**.
*\[Equidistribution of approximate barycenters\] For $\epsilon\to 0$ and $R(\epsilon)\to\infty$ fast enough, $$\beta^{a}_{\epsilon,R(\epsilon)} \stackrel{\star}\rightharpoonup\nu_{\mathop{\mathrm{Fr}}M}.$$* To conclude, we use Lemma [Lemma 35](#ab){reference-type="ref" reference="ab"} and Proposition [Proposition 31](#equidab){reference-type="ref" reference="equidab"} to show the main theorem of the section -- the *actual* barycenters of the pants equidistribute. ![ The pants $P_R$ divided into left and right hexagons, with its left and right feet.](leftandright2.jpg){#leftandright} ## Left and right In this subsection we will do some bookkeeping that will be useful to carry out the rest of the proof. Let $P_R$ be the oriented planar hyperbolic pair of pants whose cuffs have length $2R$, as defined in Section 3. The cuffs of $P_R$ are named $C_0$, $C_1$ and $C_2$, as in Figure [6](#leftandright){reference-type="ref" reference="leftandright"}. As defined before, each cuff $C_i$ has two *feet* in $\mathop{\mathrm{N}}^1 (C_i)$, which are unit vectors in the direction of the short orthogeodesics incident to $C_i$. The *left* foot of $C_i$ points towards $C_{i-1}$ and the *right* foot points towards $C_{i+1}$. We can cut $P_R$ along its short orthogeodesics to obtain two right-angled hexagons $H_R^{\ell}$ and $H_R^r$. The *left* right-angled hexagon $H_R^{\ell}$ of $P_R$ is the one so that a traveller going around $\partial H^{\ell}_R$ in the direction given by the orientation of $P_R$ sees the cuffs in the cyclic order $(C_0\,C_1\,C_2)$. The *right* right-angled hexagon is the other one (associated to the cyclic order $(C_0\,C_2\,C_1)$). As before, let $v_R$ be the right action of $a_{R/2} k a_{\log(\sqrt{3}/2)}\in \mathop{\mathrm{PSL}}_2\mathbf{R}$. Observe that the image of a left foot of $P_R$ under $v_R$ falls inside $H_R^{\ell}$. Similarly, the image of a right foot under $v_R$ falls in $H_R^r$. ![ Spinning the hexagons of $P_R$ into two ideal triangles.](2dspun3.jpg){#spun} We can turn the right-angled hexagons $H_R^{\ell}$ and $H_R^r$ into ideal triangles $T_R^{\ell}$ and $T_R^r$ by spinning their vertices around the cuffs, following their orientation. See Figure [7](#spun){reference-type="ref" reference="spun"}. Let $\pi\in \mathbf{\Pi}_{\epsilon, R}$ and $f\in\pi$ be a pleated representative (so $f(P_R)$ is made out of two ideal triangles). We call $f(T^{\ell}_R)$ the *left triangle* $T^{\ell}(\pi)$ of $\pi$ and $f(T^r_R)$ the *right triangle* $T^r(\pi)$ of $\pi$. Note that these are well-defined as they do not depend on the choice of pleated representative in $\pi$. Now let $\pi\in\mathbf{\widetilde{\Pi}}_{\epsilon, R}$ and $(f,C_i)\in \pi$ be a pleated representative. The *left barycenter* of $\pi$, denoted $\mathop{\mathrm{\mathbf{b}\mathbf{a}\mathbf{r}}}^{\ell}(\pi)$, is the framed barycenter of $T^{\ell}(\pi)$ associated to the side $f(C_i)$. Similarly, the *right barycenter* of $\pi$, denoted $\mathop{\mathrm{\mathbf{b}\mathbf{a}\mathbf{r}}}^r(\pi)$, is the framed barycenter of $T^r(\pi)$ associated to the side $f(C_i)$. ## Equidistribution of feet in $\mathop{\mathrm{Fr}}M$ The goal of this subsection is to prove Lemma [Lemma 30](#equidftfm){reference-type="ref" reference="equidftfm"}, in other words, that $$\phi_{\epsilon,R(\epsilon)} \stackrel{\star}\rightharpoonup\nu_{\mathop{\mathrm{Fr}}M}$$ as $\epsilon\to 0$. To do so, we will use the fact that the feet of pants are well-distributed along a good curve.
This is Theorem [Theorem 21](#kw){reference-type="ref" reference="kw"}, due to Kahn-Wright [@KW], but we will use the modified version stated as Theorem [Theorem 32](#ftalongcurve){reference-type="ref" reference="ftalongcurve"} below. The difference is that Theorem [Theorem 21](#kw){reference-type="ref" reference="kw"} is stated for counting feet in a subset $B$ of $\mathop{\mathrm{N}}^1(\sqrt{\gamma})$, whereas the counting we will do is weighted by a nonnegative function $g\in L^{\infty}(\mathop{\mathrm{N}}^1(\gamma))\subset L^{\infty} (\mathop{\mathrm{Fr}}M)$. For $\gamma\in \mathbf{\Gamma}_{\epsilon, R}$, we let $\lambda^{\gamma}$ denote the probability Lebesgue measure in $\mathop{\mathrm{N}}^1(\gamma)\subset\mathop{\mathrm{Fr}}M$. For a bounded function $g$ on a metric space, we let $$m_{\delta} (g) (p) = \inf_{B_{\delta}(p)} g \quad\text{and}\quad M_{\delta} (g)(p) = \sup_{B_{\delta}(p)} g,$$ where $B_{\delta}(p)$ is the metric ball of radius $\delta$ around $p$. **Theorem 32**. *\[Equidistribution of feet along a curve\] There exists $q>0$ depending on $M$ such that for any $\epsilon > 0$, there is $R\geq R_0(\epsilon)$ so that the following holds. Let $\gamma\in \mathbf{\Gamma}_{\epsilon, R}$. If $g \in L^{\infty} \left( \mathop{\mathrm{Fr}}M \right)$ is a nonnegative function, then $$(1-\delta) \int_{\mathop{\mathrm{N}}^1 (\gamma)} m_{\delta} (g) \,d\lambda^{\gamma} \leq \frac{1}{C_{\epsilon,R,\gamma}} \sum_{\pi\in\mathbf{\Pi}_{\epsilon, R}(\gamma)} \left( g(\mathop{\mathrm{\mathbf{f}\mathbf{t}}}^{\ell} \pi) + g(\mathop{\mathrm{\mathbf{f}\mathbf{t}}}^r \pi) \right) \leq (1 + \delta) \int_{\mathop{\mathrm{N}}^1 (\gamma)} M_{\delta} (g) \,d\lambda^{\gamma},\tag{$\star$}$$ where $\delta = e^{-qR}$, $$C_{\epsilon,R,\gamma} = \frac{2\pi c_{\epsilon} \epsilon^4 \ell(\gamma) e^{4R-\ell(\gamma)}}{\mathop{\mathrm{vol}}M}$$ and $c_{\epsilon}\stackrel{\epsilon\to 0}\longrightarrow 1$.* *Proof from Theorem [Theorem 21](#kw){reference-type="ref" reference="kw"}.* Let $g\in L^{\infty}(\mathop{\mathrm{Fr}}M)$. Since the measure $\lambda^{\gamma}$ is supported on $\mathop{\mathrm{N}}^1(\gamma)$, we may assume $g\in L^{\infty}(\mathop{\mathrm{N}}^1(\gamma))$. Let $h(n):= g(n) + g(n+\mathbf{h}\mathbf{l}(\gamma))$. Since $h$ is invariant under $n\mapsto n+\mathbf{h}\mathbf{l}(\gamma)$, $h$ descends to a function $\check{h}\in L^{\infty}(\mathop{\mathrm{N}}^1(\sqrt{\gamma}))$ so that $\check{h}\circ \mathop{\mathrm{proj}}= h$, where $\mathop{\mathrm{proj}}: \mathop{\mathrm{N}}^1(\gamma)\to\mathop{\mathrm{N}}^1(\sqrt{\gamma})$ is the quotient projection. Using the shorthand notation $$\{f > y\}:= \{n\in \mathop{\mathrm{N}}^1(\sqrt{\gamma}) : f(n)>y\},$$ note that $$N_{-\delta} \{f>y\} = \{m_{\delta}(f)>y\} \quad\text{and}\quad N_{\delta} \{f> y\} = \{M_{\delta}(f)> y\}.$$ Thus, Theorem [Theorem 21](#kw){reference-type="ref" reference="kw"} gives us $$\label{before} (1-\delta)\,\lambda(\{m_{\delta}(\check{h})>y \}) \leq \frac{\#\{\pi\in\mathbf{\Pi}_{\epsilon, R}(\gamma):\check{h}(\mathop{\mathrm{\mathbf{f}\mathbf{t}}}\pi)>y\}}{C_{\epsilon,R,\gamma}} \leq (1+\delta)\,\lambda(\{M_{\delta}(\check{h})> y\}),$$ where $\lambda$ is the probability Lebesgue measure on $\mathop{\mathrm{N}}^1(\sqrt{\gamma})$. A basic property of the Lebesgue integral says that for a nonnegative function $f\in L^{\infty}(X,\mu)$, where $X$ is a space with a measure $\mu$, we have $\int_X f\,d\mu = \int_0^{\|f\|_{\infty}} \mu(\{f>y\}) \,dy$.
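Indeed, for nonnegative $f$ this is just Tonelli's theorem applied to the indicator function of the region below the graph of $f$: $$\int_X f\,d\mu = \int_X \int_0^{\|f\|_{\infty}} \mathbf{1}_{\{f>y\}}(x)\,dy\,d\mu(x) = \int_0^{\|f\|_{\infty}} \mu(\{f>y\})\,dy.$$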
Thus, if we integrate the inequality ([\[before\]](#before){reference-type="ref" reference="before"}) above with respect to $y$ from $0$ to $\|\check{h}\|_{L^{\infty}(\mathop{\mathrm{N}}^1(\sqrt{\gamma}))}$ and apply this property for $\lambda$ and the counting measure of feet in $\mathop{\mathrm{N}}^1(\sqrt{\gamma})$, we obtain $$(1-\delta)\int_{\mathop{\mathrm{N}}^1(\sqrt{\gamma})} m_{\delta}(\check{h})\,d\lambda \leq \frac{1}{C_{\epsilon,R,\gamma}} \sum_{\pi\in\mathbf{\Pi}_{\epsilon, R}(\gamma)} \check{h}(\mathop{\mathrm{\mathbf{f}\mathbf{t}}}\pi) \leq (1+\delta)\int_{\mathop{\mathrm{N}}^1(\sqrt{\gamma})} M_{\delta}(\check{h})\,d\lambda.$$ Note that $\check{h}(\mathop{\mathrm{\mathbf{f}\mathbf{t}}}\pi) = g(\mathop{\mathrm{\mathbf{f}\mathbf{t}}}^{\ell}\pi)+g(\mathop{\mathrm{\mathbf{f}\mathbf{t}}}^r\pi)$, so the middle term of the inequality is the same as in $(\star)$. On the other hand, $m_{\delta}(\check{h})\circ \mathop{\mathrm{proj}}= m_{\delta}(h)$. Thus, $\int_{\mathop{\mathrm{N}}^1(\sqrt{\gamma})} m_{\delta} (\check{h})\, d\lambda = \int_{\mathop{\mathrm{N}}^1(\gamma)} m_{\delta} (h)\,d\lambda^{\gamma}$. Finally, $\int_{\mathop{\mathrm{N}}^1(\gamma)} m_{\delta} (h)\,d\lambda^{\gamma} \geq 2\int_{\mathop{\mathrm{N}}^1(\gamma)} m_{\delta} (g)\, d\lambda^{\gamma}$ and similarly $\int_{\mathop{\mathrm{N}}^1(\sqrt{\gamma})} M_{\delta} (\check{h})\, d\lambda \leq 2\int_{\mathop{\mathrm{N}}^1(\gamma)} M_{\delta}(g)\,d\lambda^{\gamma}$. This yields the desired inequality $(\star)$, up to the constant $C_{\epsilon,R,\gamma}$ absorbing a factor of $2$. ◻ We can simplify the main statement of the theorem with the following notations. For a measure $\mu$ on a space $X$, and $g\in L^{\infty}(X)$, we let $\mu(g):=\int_X gd\mu$. We define a measure $\phi^{\gamma}_{\epsilon,R}$ supported on $\mathop{\mathrm{N}}^1(\gamma)$ by $$\phi^{\gamma}_{\epsilon,R} (g) =\frac{1}{C_{\epsilon,R,\gamma}} \sum_{\pi \in \mathbf{\Pi}_{\epsilon, R}(\gamma)} \left( g(\mathop{\mathrm{\mathbf{f}\mathbf{t}}}^{\ell}\pi) + g(\mathop{\mathrm{\mathbf{f}\mathbf{t}}}^r \pi) \right),$$ where $g\in C(\mathop{\mathrm{Fr}}M)$. Fix a nonnegative function $g\in C(\mathop{\mathrm{Fr}}M)$. The inequality $(\star)$ can be rewritten as $$(1-\delta)\, \lambda^{\gamma} \left( m_{\delta} g \right) \leq \phi^{\gamma}_{\epsilon,R} (g) \leq (1 + \delta) \,\lambda^{\gamma} \left(M_{\delta} g \right)$$ and we can average it over all $\gamma\in\mathbf{\Gamma}_{\epsilon, R}$, yielding $$(1-\delta) \frac{1}{\#\mathbf{\Gamma}_{\epsilon, R}}\sum_{\gamma\in\mathbf{\Gamma}_{\epsilon, R}} \lambda^{\gamma} \left( m_{\delta} g \right) \leq \frac{1}{\#\mathbf{\Gamma}_{\epsilon, R}}\sum_{\gamma\in\mathbf{\Gamma}_{\epsilon, R}}\phi^{\gamma}_{\epsilon,R} (g) \leq (1 + \delta) \frac{1}{\#\mathbf{\Gamma}_{\epsilon, R}}\sum_{\gamma\in\mathbf{\Gamma}_{\epsilon, R}}\lambda^{\gamma} \left(M_{\delta} g \right).$$ If we can show that the upper and lower bounds of this inequality are very close to $\nu_{\mathop{\mathrm{Fr}}M}(g)$ and that the middle term is very close to $\phi_{\epsilon,R} (g)$ as $\epsilon\to 0$, then it will follow that $\phi_{\epsilon,R} (g) \stackrel{\epsilon\to 0}\longrightarrow\nu_{\mathop{\mathrm{Fr}}M} (g)$. 
This, in turn, implies Lemma [Lemma 30](#equidftfm){reference-type="ref" reference="equidftfm"}, using the fact that since $M$ is compact, $C(\mathop{\mathrm{Fr}}M)\subset L^{\infty}(\mathop{\mathrm{Fr}}M)$, as well as the fact that if $\phi_{\epsilon,R} (g) \stackrel{\epsilon\to 0}\longrightarrow\nu_{\mathop{\mathrm{Fr}}M} (g)$ for nonnegative functions $g\in C(\mathop{\mathrm{Fr}}M)$, it follows that $\phi_{\epsilon,R}(g)\stackrel{\epsilon\to 0}\longrightarrow\nu_{\mathop{\mathrm{Fr}}M} (g)$ for all $g\in C(\mathop{\mathrm{Fr}}M)$. Our task is therefore to show the following two lemmas: **Lemma 33**. *For $g\in C(\mathop{\mathrm{Fr}}M)$, $$\left| \frac{1}{\#\mathbf{\Gamma}_{\epsilon, R}}\sum_{\gamma\in\mathbf{\Gamma}_{\epsilon, R}} \lambda^{\gamma} \left( m_{\delta} g \right) - \nu_{\mathop{\mathrm{Fr}}M} (g) \right| \quad\text{and}\quad\left| \frac{1}{\#\mathbf{\Gamma}_{\epsilon, R}}\sum_{\gamma\in\mathbf{\Gamma}_{\epsilon, R}} \lambda^{\gamma} \left( M_{\delta} g \right) - \nu_{\mathop{\mathrm{Fr}}M} (g) \right|$$ converge to zero as $\epsilon\to 0$.* **Lemma 34**. *For $g\in C(\mathop{\mathrm{Fr}}M)$, $$\left| \frac{1}{\#\mathbf{\Gamma}_{\epsilon, R}}\sum_{\gamma\in\mathbf{\Gamma}_{\epsilon, R}}\phi^{\gamma}_{\epsilon,R} (g) - \phi_{\epsilon,R} (g) \right| \stackrel{\epsilon\to 0}\longrightarrow 0.$$* *Proof of Lemma [Lemma 33](#ft1){reference-type="ref" reference="ft1"}.* For $g\in C(\mathop{\mathrm{Fr}}M)$, we define a function $\hat{g} \in C(\mathop{\mathrm{T}}^1 M)$ via $$\hat{g}(p,v) = \frac{1}{2\pi} \int_{S^1(v)} g(p,v,\theta)\,d\theta,$$ where $S^1(v)$ is the circle in $T^1_p M$ orthogonal to $v$. For $\gamma\in\mathbf{\Gamma}_{\epsilon, R}$, we let $d\gamma$ be the probability length measure of $\gamma$ on $\mathop{\mathrm{T}}^1(M)$, in other words, for $h\in C(\mathop{\mathrm{T}}^1 M)$, $$\int_{\mathop{\mathrm{T}}^1 M} h\,d\gamma = \frac{1}{\ell(\gamma)} \int_0^{\ell(\gamma)} h(\gamma(t),\gamma'(t)) \,dt.$$ Note that for $g\in C(\mathop{\mathrm{Fr}}M)$, $$\int_{\mathop{\mathrm{T}}^1 M} \hat{g} \,d\gamma = \int_{\mathop{\mathrm{Fr}}M} g \, d\lambda^{\gamma}.$$ Let $\mathop{\mathrm{\mathbf{P}\mathbf{r}\mathbf{o}\mathbf{b}}}_{\epsilon}$ be the uniform probability measure on $\mathbf{\Gamma}_{\epsilon, R}$. In Theorem II of [@L], Lalley showed that, if $h\in C(\mathop{\mathrm{T}}^1 M)$ and $\eta>0$, then $$\mathop{\mathrm{\mathbf{P}\mathbf{r}\mathbf{o}\mathbf{b}}}_{\epsilon} \left( \left| \int_{\mathop{\mathrm{T}}^1 M} h\, d \gamma - \nu_{\mathop{\mathrm{T}}^1 M} (h) \right| > \eta \right) \stackrel{\epsilon\to 0}\longrightarrow 0.$$ In other words, if $g\in C(\mathop{\mathrm{Fr}}M)$, then $$\mathop{\mathrm{\mathbf{P}\mathbf{r}\mathbf{o}\mathbf{b}}}_{\epsilon} \left( \left| \lambda^{\gamma}(g) - \nu_{\mathop{\mathrm{Fr}}M} (g) \right| > \eta \right) \stackrel{\epsilon\to 0}\longrightarrow 0.$$ Let $\mathbf{\Gamma}_{\epsilon, R}^{\geq \eta}$ be the set of $\gamma\in\mathbf{\Gamma}_{\epsilon, R}$ so that $|\lambda^{\gamma}(g) - \nu_{\mathop{\mathrm{Fr}}M}(g)| \geq \eta$ and let $\mathbf{\Gamma}_{\epsilon, R}^{<\eta} := \mathbf{\Gamma}_{\epsilon, R}- \mathbf{\Gamma}_{\epsilon, R}^{\geq\eta}$.
Then, $$\begin{aligned} \left|\frac{1}{\#\mathbf{\Gamma}_{\epsilon, R}} \sum_{\gamma\in\mathbf{\Gamma}_{\epsilon, R}} \lambda^{\gamma}(g) - \nu_{\mathop{\mathrm{Fr}}M} (g) \right| &\leq \frac{1}{\#\mathbf{\Gamma}_{\epsilon, R}} \left( \sum_{\gamma\in\mathbf{\Gamma}_{\epsilon, R}^{<\eta}} \left|\lambda^{\gamma}(g)- \nu_{\mathop{\mathrm{Fr}}M}(g)\right| +\sum_{\gamma\in\mathbf{\Gamma}_{\epsilon, R}^{\geq\eta}} \left|\lambda^{\gamma}(g)- \nu_{\mathop{\mathrm{Fr}}M}(g)\right| \right) \\ &\leq \eta + 2\|g\|_{L^{\infty} (\mathop{\mathrm{Fr}}M)} \mathop{\mathrm{\mathbf{P}\mathbf{r}\mathbf{o}\mathbf{b}}}_{\epsilon} (\mathbf{\Gamma}_{\epsilon, R}^{\geq\eta}).\end{aligned}$$ As $\eta>0$ was arbitrary, this shows that $$\lim_{\epsilon\to 0} \frac{1}{\#\mathbf{\Gamma}_{\epsilon, R}} \sum_{\gamma\in\mathbf{\Gamma}_{\epsilon, R}} \lambda^{\gamma}(g) = \nu_{\mathop{\mathrm{Fr}}M} (g).$$ Finally, since $g$ is continuous and $\delta\to 0$ as $\epsilon\to 0$, we also conclude that $$\frac{1}{\#\mathbf{\Gamma}_{\epsilon, R}} \sum_{\gamma\in\mathbf{\Gamma}_{\epsilon, R}} \lambda^{\gamma}\left( m_{\delta} g\right) \quad\text{and}\quad\frac{1}{\#\mathbf{\Gamma}_{\epsilon, R}} \sum_{\gamma\in\mathbf{\Gamma}_{\epsilon, R}} \lambda^{\gamma}\left( M_{\delta} g\right)$$ converge to $\nu_{\mathop{\mathrm{Fr}}M}(g)$ as $\epsilon\to 0$, which concludes the proof of Lemma [Lemma 33](#ft1){reference-type="ref" reference="ft1"}. ◻ *Proof of Lemma [Lemma 34](#ft2){reference-type="ref" reference="ft2"}.* To begin, for $\gamma\in\mathbf{\Gamma}_{\epsilon, R}$ and for $g\in C(\mathop{\mathrm{Fr}}M)$, we define $$\mathop{\mathrm{Ft}}_{\gamma}(g) := \sum_{\pi\in\mathbf{\Pi}_{\epsilon, R}(\gamma)} \left( g(\mathop{\mathrm{\mathbf{f}\mathbf{t}}}^{\ell} \pi) + g(\mathop{\mathrm{\mathbf{f}\mathbf{t}}}^r \pi) \right).$$ Using this notation, $$\begin{aligned} \frac{1}{\#\mathbf{\Gamma}_{\epsilon, R}} \sum_{\gamma\in\mathbf{\Gamma}_{\epsilon, R}} \phi_{\epsilon,R}^{\gamma}(g) - \phi_{\epsilon,R} (g) &= \frac{1}{\#\mathbf{\Gamma}_{\epsilon, R}} \sum_{\gamma\in\mathbf{\Gamma}_{\epsilon, R}} \frac{\mathop{\mathrm{Ft}}_{\gamma}(g)}{C_{\epsilon,R,\gamma}} - \frac{1}{\#\mathbf{\widetilde{\Pi}}_{\epsilon, R}} \sum_{\gamma\in\mathbf{\Gamma}_{\epsilon, R}} \mathop{\mathrm{Ft}}_{\gamma}(g) \\ &= \frac{1}{\#\mathbf{\widetilde{\Pi}}_{\epsilon, R}} \sum_{\gamma\in\mathbf{\Gamma}_{\epsilon, R}} \mathop{\mathrm{Ft}}_{\gamma}(g) \left(\frac{\#\mathbf{\widetilde{\Pi}}_{\epsilon, R}/ \#\mathbf{\Gamma}_{\epsilon, R}}{C_{\epsilon,R,\gamma}} - 1 \right).\end{aligned}$$ Thus, we obtain $$\begin{aligned} \left| \frac{1}{\#\mathbf{\Gamma}_{\epsilon, R}} \sum_{\gamma\in\mathbf{\Gamma}_{\epsilon, R}} \phi_{\epsilon,R}^{\gamma}(g) - \phi_{\epsilon,R} (g) \right| &\leq \frac{\#\mathbf{\Gamma}_{\epsilon, R}}{\#\mathbf{\widetilde{\Pi}}_{\epsilon, R}} \sup_{\gamma\in\mathbf{\Gamma}_{\epsilon, R}} \left| \mathop{\mathrm{Ft}}_{\gamma}(g) \left(\frac{\#\mathbf{\widetilde{\Pi}}_{\epsilon, R}/ \#\mathbf{\Gamma}_{\epsilon, R}}{C_{\epsilon,R,\gamma}} - 1 \right) \right|.\end{aligned}$$ Using the fact that $$|\mathop{\mathrm{Ft}}_{\gamma}(g)| \leq 2 \#\mathbf{\Pi}_{\epsilon, R}(\gamma)\, \|g\|_{L^{\infty}(\mathop{\mathrm{Fr}}M)},$$ we have $$\begin{aligned} \label{bound} \left| \frac{1}{\#\mathbf{\Gamma}_{\epsilon, R}} \sum_{\gamma\in\mathbf{\Gamma}_{\epsilon, R}} \phi_{\epsilon,R}^{\gamma}(g) - \phi_{\epsilon,R} (g) \right| &\leq \|g\|_{L^{\infty}(\mathop{\mathrm{Fr}}M)} \,\sup_{\gamma\in\mathbf{\Gamma}_{\epsilon, R}} \left| \frac{2\#\mathbf{\Pi}_{\epsilon, R}(\gamma)}{C_{\epsilon,R,\gamma}} - \frac{2\#\mathbf{\Pi}_{\epsilon, R}(\gamma)}{ \#\mathbf{\widetilde{\Pi}}_{\epsilon, R}/\#\mathbf{\Gamma}_{\epsilon, R}} \right|.\end{aligned}$$ The
equidistribution of feet along a curve, Theorem [Theorem 32](#ftalongcurve){reference-type="ref" reference="ftalongcurve"}, is what allows us to argue that the right hand side goes to zero as $\epsilon\to 0$. Say $a(\epsilon)\sim b(\epsilon)$ if $a(\epsilon)/b(\epsilon) \to 1$ as $\epsilon\to 0$. Then, Theorem [Theorem 32](#ftalongcurve){reference-type="ref" reference="ftalongcurve"} applied to $g = 1_{\mathop{\mathrm{Fr}}M}$ says that $$1-\delta \leq \frac{2\#\mathbf{\Pi}_{\epsilon, R}(\gamma)}{C_{\epsilon,R,\gamma}} \leq 1+\delta,$$ where $\delta = e^{-qR}$, which implies $$2\#\mathbf{\Pi}_{\epsilon, R}(\gamma) \sim C_{\epsilon,R,\gamma}$$ for any $\gamma\in\mathbf{\Gamma}_{\epsilon, R}$, as well as $$\frac{\#\mathbf{\widetilde{\Pi}}_{\epsilon, R}}{\#\mathbf{\Gamma}_{\epsilon, R}} \sim \frac{1}{\#\mathbf{\Gamma}_{\epsilon, R}}\sum_{\gamma\in\mathbf{\Gamma}_{\epsilon, R}} C_{\epsilon,R,\gamma}.$$ But from the definition of $C_{\epsilon,R,\gamma}$, we have $$C_{\epsilon,R,\gamma} \sim \frac{1}{\#\mathbf{\Gamma}_{\epsilon, R}}\sum_{\gamma\in\mathbf{\Gamma}_{\epsilon, R}} C_{\epsilon,R,\gamma} \sim \tilde{c}_{\epsilon} \epsilon^4 e^{2R}R,$$ where $\tilde{c}_{\epsilon}$ is bounded in $\epsilon$. Thus, $$2\#\mathbf{\Pi}_{\epsilon, R}(\gamma) \sim C_{\epsilon,R,\gamma} \sim \frac{\#\mathbf{\widetilde{\Pi}}_{\epsilon, R}}{\#\mathbf{\Gamma}_{\epsilon, R}},$$ which allows us to conclude that the right hand side of [\[bound\]](#bound){reference-type="ref" reference="bound"} goes to zero as $\epsilon\to 0$. ◻ To wrap up this section, we have proved Lemmas [Lemma 33](#ft1){reference-type="ref" reference="ft1"} and [Lemma 34](#ft2){reference-type="ref" reference="ft2"}, which is what we needed to show that the feet of all pants equidistribute in $\mathop{\mathrm{Fr}}M$, namely $$\phi_{\epsilon,R(\epsilon)} \stackrel{\star}\rightharpoonup\nu_{\mathop{\mathrm{Fr}}M}.$$ ## Approximate barycenters of pants As before, we let $v_R$ be the right action on $\mathop{\mathrm{Fr}}M\simeq \mathop{\mathrm{PSL}}_2\mathbf{C}$ of the element $$a_{R/2}\, k\, a_{\log(\sqrt{3}/2)} \in \mathop{\mathrm{PSL}}_2 \mathbf{C},$$ where $a_t = \mathop{\mathrm{diag}}(e^{t/2},e^{-t/2})$ and $k\in \mathop{\mathrm{SO}}_2$ is the ninety-degree rotation bringing the first vector of a frame to the second. We define the *left* and *right approximate barycenters* of an end of pants $\pi\in\mathbf{\widetilde{\Pi}}_{\epsilon, R}$ respectively by $$\mathop{\mathrm{\mathbf{a}\mathbf{b}\mathbf{a}\mathbf{r}}}^{\ell} (\pi) = v_R (\mathop{\mathrm{\mathbf{f}\mathbf{t}}}^{\ell} \pi) \quad\text{and}\quad \mathop{\mathrm{\mathbf{a}\mathbf{b}\mathbf{a}\mathbf{r}}}^r (\pi) = v_R (\mathop{\mathrm{\mathbf{f}\mathbf{t}}}^r \pi).$$ This subsection is dedicated to proving that the approximate barycenters are indeed close to the actual barycenters. Precisely, **Lemma 35**. *Let $\pi\in \mathbf{\widetilde{\Pi}}_{\epsilon, R}$ and $s = \ell$ or $r$. Then, $$\mathop{\mathrm{dist}}_{\mathop{\mathrm{Fr}}M} \left( \mathop{\mathrm{\mathbf{a}\mathbf{b}\mathbf{a}\mathbf{r}}}^{s}\pi,\mathop{\mathrm{\mathbf{b}\mathbf{a}\mathbf{r}}}^s \pi \right) \leq \omega(\epsilon),$$ where $\omega(\epsilon)\to 0$ as $\epsilon\to 0$.* *Proof.* ![Lifting the pants $\pi$ and its left foot to $\mathbf{H}^3$.](kappa2.jpg "fig:"){#kappafig} Throughout the proof, we will let $\omega(\epsilon)$ denote any quantity that goes to zero as $\epsilon\to 0$. Let $\pi = [(f,C_i)] \in \mathbf{\widetilde{\Pi}}_{\epsilon, R}$.
Pick a representative $(f,C_i)$ with geodesic cuffs and let $f(C_j) = \gamma_{j-i}$, for $j \in \mathbf{Z}/3$. Without loss of generality and to be explicit, we assume $f$ to be orientation-preserving. Lift $\gamma_0$ to the geodesic $\tilde{\gamma}_0$ from $\infty$ to $0$ in $\hat{\mathbf{C}}\simeq \partial_{\infty}\mathbf{H}^3$. Lift the left foot $\mathop{\mathrm{\mathbf{f}\mathbf{t}}}^{\ell}\pi$ to the frame $f$ based at $e^{R/2} i$, whose first vector points in the direction of $\gamma_0$ and whose second vector points in the positive direction of the real line $\mathbf{R}\subset\hat{\mathbf{C}}$. Let $\kappa$ be the ray given by $$\kappa(t) := R_{a_t} R_{k} R_{a_{R/2}} f$$ for $t\geq 0$. The left approximate barycenter $\mathop{\mathrm{\mathbf{a}\mathbf{b}\mathbf{a}\mathbf{r}}}^{\ell}\pi$ lifts to the framed barycenter $\kappa(\log(\sqrt{3}/2))$ of the triangle with vertices $(\infty, 0,\kappa^+)$ associated to the side $(\infty,0) = \tilde{\gamma}_0$. Choose lifts $\tilde{\gamma}_1$ and $\tilde{\gamma}_2$ of the other cuffs of $\pi$ so they are connected to $\tilde{\gamma}_0$ by lifts of the short orthogeodesics, as in Figure [8](#kappafig){reference-type="ref" reference="kappafig"}. Note that $\tilde{\gamma}_1$ and $\tilde{\gamma}_2$ lie, respectively, in geodesic planes $P_1$ and $P_2$ with $\infty$ in their boundary that make an angle of $\omega(\epsilon)$ with each other and with the plane $P_0$ that contains $\tilde{\gamma}_0$ and $(\kappa(t))_{t\geq 0}$. The left triangle of $\pi$ lifts to the triangle with vertices $(\infty,\tilde{\gamma}_1^-,\tilde{\gamma}_2^-)$ in this picture. We let $$\sigma(t) := R_{a_t} R_k R_{a_{\mathbf{h}\mathbf{l}(\gamma_0)/2}} f,$$ for $t\geq 0$, be a lift of the orthogeodesic ray from $\gamma_0$ to itself. Then, $\sigma^+$ lies in the annulus $E$ of $\hat{\mathbf{C}}$ with $\tilde{\gamma}_1^+,\tilde{\gamma}_2^- \in \partial E$. Since the geodesic planes $P_1$ and $P_2$ containing $\tilde{\gamma}_1$ and $\tilde{\gamma}_2$ make a small angle with each other, it follows that $\sigma^+$ is within distance $\omega(\epsilon)$ in $\hat{\mathbf{C}}$ of $\tilde{\gamma}_1^+$ and $\tilde{\gamma}_2^-$. On the other hand, since $R_{a_{R/2}} f$ and $R_{a_{\mathbf{h}\mathbf{l}(\gamma_0)/2}} f$ are at a distance $O(\epsilon)$ from each other in $\mathop{\mathrm{Fr}}\mathbf{H}^3$, it follows that $\sigma^+$ and $\kappa^+$ are at a distance $O(\epsilon)$ in $\hat{\mathbf{C}}$. We conclude that the vertices $(\infty,\tilde{\gamma}_1^-,\tilde{\gamma}_2^-)$ of the left triangle of $\pi$ are within distance $\omega(\epsilon)$ of the vertices $(\infty,0,\kappa^+)$ of the triangle whose framed barycenter associated to $(\infty,0)$ is $\mathop{\mathrm{\mathbf{a}\mathbf{b}\mathbf{a}\mathbf{r}}}^{\ell} \pi$. This means that all the framed barycenters of these triangles are within $\omega(\epsilon)$ of each other in $\mathop{\mathrm{Fr}}\mathbf{H}^3$. Thus, $$\mathop{\mathrm{dist}}_{\mathop{\mathrm{Fr}}M} (\mathop{\mathrm{\mathbf{a}\mathbf{b}\mathbf{a}\mathbf{r}}}^{\ell}\pi,\mathop{\mathrm{\mathbf{b}\mathbf{a}\mathbf{r}}}^{\ell}\pi) \leq \omega(\epsilon),$$ as desired. The proof follows in the same way for the right barycenters. ◻ ## Equidistribution of approximate barycenters in $\mathop{\mathrm{Fr}}M$ In this subsection, we will show that the uniform probability measure $\beta^a_{\epsilon,R}$ supported on the approximate barycenters of $\pi\in\mathbf{\widetilde{\Pi}}_{\epsilon, R}$ equidistributes as $\epsilon\to 0$ and $R(\epsilon)\to\infty$. We first claim **Lemma 36**.
*As $\epsilon\to 0$ and $R(\epsilon)\to\infty$, $$(R_{a_{R/2}})_* \phi_{\epsilon,R} \stackrel{\star}\rightharpoonup\nu_{\mathop{\mathrm{Fr}}M}.$$* *Proof.* Let $g\in C(\mathop{\mathrm{Fr}}M)$ be nonnegative. Along each $\gamma\in\mathbf{\Gamma}_{\epsilon, R}$, the Lebesgue probability measure $\lambda^{\gamma}$ is invariant under the right action of $a_t \in \mathop{\mathrm{PSL}}_2 \mathbf{C}$. Therefore, the equidistribution of feet along a curve, Theorem [Theorem 32](#ftalongcurve){reference-type="ref" reference="ftalongcurve"}, gives us $$(1-\delta)\lambda^{\gamma}\left(m_{\delta}g \right) \leq (R_{a_{R/2}})_* \phi^{\gamma}_{\epsilon,R} (g) \leq (1+\delta) \lambda^{\gamma} (M_{\delta} g),$$ where as usual $\delta= e^{-qR}$. Thus, the arguments from Section 5.2 show that $$(R_{a_{R/2}})_* \phi_{\epsilon,R} (g) \to \nu_{\mathop{\mathrm{Fr}}M} (g)$$ as $\epsilon\to 0$ and $R(\epsilon)\to \infty$. ◻ We also observe **Lemma 37**. *Suppose $\nu_i$ are probability measures on $\mathop{\mathrm{Fr}}M$ so that $\nu_i \stackrel{\star}\rightharpoonup\nu_{\mathop{\mathrm{Fr}}M}$ as $i\to\infty$. Then, given $h\in \mathop{\mathrm{PSL}}_2 \mathbf{C}$, we have $$(R_h)_* \nu_i \stackrel{\star}\rightharpoonup\nu_{\mathop{\mathrm{Fr}}M}.$$* *Proof.* Suppose $g\in C(\mathop{\mathrm{Fr}}M)$. Then, $$(R_h)_* \nu_i (g) = \nu_i (g\circ R_h).$$ Thus $(R_h)_*\nu_i (g) \to \nu_{\mathop{\mathrm{Fr}}M}(g\circ R_h)$ as $i\to\infty$. As $\nu_{\mathop{\mathrm{Fr}}M}$ is invariant under the right action of $\mathop{\mathrm{PSL}}_2\mathbf{C}$, we conclude. ◻ By construction, $$\beta^{a}_{\epsilon,R} = (R_{k a_{\log (\sqrt{3}/2)}})_* (R_{a_{R/2}})_* \phi_{\epsilon,R}.$$ Combining the two lemmas above, we conclude that $\beta^{a}_{\epsilon,R} \stackrel{\star}\rightharpoonup\nu_{\mathop{\mathrm{Fr}}M}$ as $\epsilon\to 0$ and $R(\epsilon)\to \infty$. ## Conclusion: equidistribution of barycenters in $\mathop{\mathrm{Fr}}M$ Finally, as a corollary of the equidistribution of the approximate barycenters, we obtain the equidistribution of the actual barycenters, which is the main theorem of the section. For $g\in C(\mathop{\mathrm{Fr}}M)$, we have $$\beta^a_{\epsilon,R}(g) - \beta_{\epsilon,R}(g) = \frac{1}{2\#\mathbf{\widetilde{\Pi}}_{\epsilon, R}} \sum_{\pi \in \mathbf{\widetilde{\Pi}}_{\epsilon, R}}\sum_{s\in \{\ell,r\}} \left( g(\mathop{\mathrm{\mathbf{a}\mathbf{b}\mathbf{a}\mathbf{r}}}^s \pi) - g(\mathop{\mathrm{\mathbf{b}\mathbf{a}\mathbf{r}}}^s \pi) \right).$$ As $g$ is uniformly continuous and $$\mathop{\mathrm{dist}}_{\mathop{\mathrm{Fr}}M} (\mathop{\mathrm{\mathbf{a}\mathbf{b}\mathbf{a}\mathbf{r}}}^s \pi,\mathop{\mathrm{\mathbf{b}\mathbf{a}\mathbf{r}}}^s \pi) \leq \omega(\epsilon),$$ where $\omega(\epsilon)\to 0$ as $\epsilon\to 0$, we conclude that $$|\beta^a_{\epsilon,R}(g) - \beta_{\epsilon,R}(g)| \stackrel{\epsilon\to 0}\longrightarrow 0.$$ Thus, since $\beta^a_{\epsilon,R}(g) \to \nu_{\mathop{\mathrm{Fr}}M}(g)$, we conclude that $\beta_{\epsilon,R}(g) \to \nu_{\mathop{\mathrm{Fr}}M}(g)$ as $\epsilon\to 0$ and $R(\epsilon)\to\infty$. # Equidistributing surfaces Let $S_{\epsilon,R}$ be the connected, closed, $\pi_1$-injective and $(1+O(\epsilon))$-quasifuchsian surface made out of $N(\epsilon,R)$ copies of each pants in $\mathbf{\Pi}_{\epsilon, R}$, as explained in Section 2. Let $\nu_{S_{\epsilon,R}}$ be its probability area measure on $\mathop{\mathrm{Gr}}M$.
Note that each of these measures is also the probability area measure of the possibly disconnected surface built out of one copy of each $\pi\in\mathbf{\Pi}_{\epsilon, R}$. Using the fact that the barycenters of good pants are well distributed, we will show **Theorem 38**. *As $\epsilon\to 0$ and $R(\epsilon)\to\infty$ fast enough, $$\nu_{S_{\epsilon,R(\epsilon)}} \stackrel{\star}\rightharpoonup\nu_{\mathop{\mathrm{Gr}}M},$$ where $\nu_{\mathop{\mathrm{Gr}}M}$ is the probability volume measure of $\mathop{\mathrm{Gr}}M$.* Outside of the pleating lamination, we may define the unit tangent bundle $\mathop{\mathrm{T}}^1 S_{\epsilon,R}$ of $S_{\epsilon,R}$. This can be seen as a three-dimensional submanifold of $\mathop{\mathrm{Fr}}M$, where $(p,v)\in \mathop{\mathrm{T}}^1 S_{\epsilon,R}$ is included in $\mathop{\mathrm{Fr}}M$ as the frame $(p,v,w,v\times w)$, where $w$ is the image of $v$ under the ninety-degree rotation $k\in \mathop{\mathrm{SO}}_2$ described in the last section. Let $\nu_{\mathop{\mathrm{T}}^1 S_{\epsilon,R}}$ be the probability volume measure of $\mathop{\mathrm{T}}^1 S_{\epsilon,R}$ on $\mathop{\mathrm{Fr}}M$. We show **Claim 39**. *As $\epsilon\to 0$ and $R(\epsilon)\to\infty$, $$\nu_{\mathop{\mathrm{T}}^1 S_{\epsilon,R(\epsilon)}} \stackrel{\star}\rightharpoonup\nu_{\mathop{\mathrm{Fr}}M}.$$* *Proof.* This proof is similar to pages 23-26 of [@La]. Let $\Delta \subset\mathop{\mathrm{PSL}}_2 \mathbf{R}$ be the set such that, for any $b\in \mathop{\mathrm{Fr}}M$, $R_{\Delta} (b)$ is the unit tangent bundle of the ideal triangle in $M$ having $b$ as a framed barycenter. Let $\nu_{\mathop{\mathrm{PSL}}_2 \mathbf{R}}$ be the probability Haar measure on $\mathop{\mathrm{PSL}}_2 \mathbf{R}$. Thus, given $g\in C(\mathop{\mathrm{Fr}}M)$, $$\nu_{\mathop{\mathrm{T}}^1 S_{\epsilon, R}} (g) = \int_{\mathop{\mathrm{Fr}}M} \frac{1}{\nu_{\mathop{\mathrm{PSL}}_2 \mathbf{R}}(\Delta)}\int_{\Delta} g(R^{-1}_t b) \,d\nu_{\mathop{\mathrm{PSL}}_2 \mathbf{R}}(t)\, d\beta_{\epsilon,R}(b).$$ By Fubini's theorem, $$\nu_{\mathop{\mathrm{T}}^1 S_{\epsilon, R}} (g) = \frac{1}{\nu_{\mathop{\mathrm{PSL}}_2 \mathbf{R}}(\Delta)}\int_{\Delta} \beta_{\epsilon,R} (g\circ R^{-1}_t) \,d\nu_{\mathop{\mathrm{PSL}}_2\mathbf{R}} (t).$$ From the equidistribution of the barycenters and the $\mathop{\mathrm{PSL}}_2 \mathbf{C}$-invariance of $\nu_{\mathop{\mathrm{Fr}}M}$, the integrand $\beta_{\epsilon,R}(g\circ R_t^{-1})$ converges to $\nu_{\mathop{\mathrm{Fr}}M} (g)$. Using the dominated convergence theorem, we conclude that $$\nu_{\mathop{\mathrm{T}}^1 S_{\epsilon,R}} (g) \stackrel{\epsilon\to 0}\longrightarrow\frac{1}{\nu_{\mathop{\mathrm{PSL}}_2 \mathbf{R}}(\Delta)}\int_{\Delta} \nu_{\mathop{\mathrm{Fr}}M} (g) \,d\nu_{\mathop{\mathrm{PSL}}_2\mathbf{R}} = \nu_{\mathop{\mathrm{Fr}}M} (g).$$ ◻ For $g\in C(\mathop{\mathrm{Fr}}M)$, we let $\tilde{g}\in C(\mathop{\mathrm{Gr}}M)$ be the function defined by $$\tilde{g}(p,P) = \frac{1}{2\pi} \int_{0}^{2\pi} g(p,r_{\theta} f)\,d\theta,$$ where $r_{\theta}\in \mathop{\mathrm{PSL}}_2 \mathbf{R}$ is the rotation by angle $\theta$ and $f$ is the frame whose first two vectors span the oriented plane $P$. Fubini's theorem tells us that for $g\in C(\mathop{\mathrm{Fr}}M)$, $$\nu_{\mathop{\mathrm{T}}^1 S_{\epsilon,R}} (g) = \nu_{S_{\epsilon,R}} (\tilde{g}).$$ Moreover, any $h\in C(\mathop{\mathrm{Gr}}M)$ is of the form $h=\tilde{g}$, where $g\in C(\mathop{\mathrm{Fr}}M)$ is defined by $g(p,f) := h(p,P)$ for any frame $f$ whose first two vectors span the oriented plane $P$.
Thus Claim 6.2 implies Theorem 6.1, and we can conclude that the connected surfaces made out of $N(\epsilon,R,M)$ copies of each pants in $\mathbf{\Pi}_{\epsilon, R}$ equidistribute in $\mathop{\mathrm{Gr}}M$ as $\epsilon\to 0$ and $R(\epsilon)\to\infty$. # Non-equidistributing surfaces Let $\mathscr{G}$ be a set containing a representative of each commensurability class of closed immersed totally geodesic surfaces in $M$. Let $(\mathscr{G}_k)_{k\ge 1} \subseteq\mathscr{G}$ be an increasing sequence of finite subsets, so that $\bigcup_{k\geq 1} \mathscr{G}_k = \mathscr{G}$. (In the case when $\mathscr{G}$ is finite, it suffices to take $\mathscr{G}_k = \mathscr{G}$ for all $k\geq 1$.) Kahn and Marković [@KME] proved that given $k\geq 1$, $\epsilon>0$ small enough and $R = R(\epsilon,k)$ large enough, each $T\in \mathscr{G}_k$ has a finite cover $\hat{T}$ which admits a pants decomposition of pants in $\mathbf{\Pi}_{\epsilon, R}$ that are all glued via $(\epsilon,R)$-good gluings. (This fact was used to prove the Ehrenpreis conjecture.) By possibly passing to a further double cover, we can assume the cuffs of each $\hat{T}$ are all nonseparating, as explained in the proof of Theorem [Theorem 26](#irred){reference-type="ref" reference="irred"}. For each $T\in \mathscr{G}_k$, let $T^d$ be a cover of $\hat{T}$ of degree $d = d(T,\epsilon,R)$. We may choose this cover so that $T^d$ also admits a pants decomposition, denoted $\Pi_T$, by pants in $\mathbf{\Pi}_{\epsilon, R}$ that are glued via $(\epsilon,R)$-good gluings. Let $\hat{S}_{\epsilon,R}$ be the connected, closed, $\pi_1$-injective and $(1+O(\epsilon))$-quasifuchsian surface produced in the previous section. We may assume that $\hat{S}_{\epsilon,R}$ is built out of $N(\epsilon,R,k)\geq \#\mathscr{G}_k$ copies of each $\pi\in\mathbf{\Pi}_{\epsilon, R}$. For each $T\in\mathscr{G}_k$, choose a curve $\gamma_T\subset T^d$ that arises as a boundary curve of a pants in $\Pi_T$. Let $\pi_T^- \in \mathbf{\Pi}_{\epsilon, R}^- (\gamma_T)$ and $\pi_T^+ \in \mathbf{\Pi}_{\epsilon, R}^+(\gamma_T)$ be pants in $\Pi_T$ that are $(\epsilon,R)$-well glued along $\gamma_T$. Namely, $$\left|\mathop{\mathrm{\mathbf{f}\mathbf{t}}}\pi^-_T - \tau\left(\mathop{\mathrm{\mathbf{f}\mathbf{t}}}\pi^+_T\right) \right| < \frac{\epsilon}{R},$$ where as before $\tau(x) = x + 1 + i\pi$. As argued in the proof of Theorem [Theorem 26](#irred){reference-type="ref" reference="irred"}, since $\hat{S}_{\epsilon,R}$ is built out of $N$ copies of *each* $\pi \in \mathbf{\Pi}_{\epsilon, R}$, there is a pants $p^-_T\in \mathbf{\Pi}_{\epsilon, R}^-(\gamma_T)$ in $\hat{S}_{\epsilon,R}$ so that $$|\mathop{\mathrm{\mathbf{f}\mathbf{t}}}\pi^-_T - \mathop{\mathrm{\mathbf{f}\mathbf{t}}}p^-_T | < \frac{\epsilon}{R}.$$ On the other hand, $p^-_T$ is $(\epsilon,R)$-well glued to a pants $p^+_T$ also from $\hat{S}_{\epsilon,R}$, i.e., $$\left|\mathop{\mathrm{\mathbf{f}\mathbf{t}}}p^-_T - \tau \left( \mathop{\mathrm{\mathbf{f}\mathbf{t}}}p^+_T \right) \right| < \frac{\epsilon}{R}.$$ Combining these inequalities by means of the triangle inequality, we have that $$\left|\mathop{\mathrm{\mathbf{f}\mathbf{t}}}p^-_T - \tau\left(\mathop{\mathrm{\mathbf{f}\mathbf{t}}}\pi^+_T\right) \right| < 2\frac{\epsilon}{R} \quad\text{and}\quad \left|\mathop{\mathrm{\mathbf{f}\mathbf{t}}}\pi^-_T - \tau\left(\mathop{\mathrm{\mathbf{f}\mathbf{t}}}p^+_T\right) \right| < 2\frac{\epsilon}{R}.$$ In other words, we may cut along $\gamma_T$ and reglue $p^-_T$ to $\pi^+_T$ and $\pi^-_T$ to $p^+_T$ in a $(2\epsilon,R)$-good way.
We call this reglued surface $S_{\epsilon,R,\mathbf{d}}$, where $\mathbf{d}= (d(T,\epsilon,R))_{T\in\mathscr{G}_k}$ is a vector keeping track of the degrees of each cover $T^d\to \hat{T}$. ![The family of surfaces $S_{\epsilon,R,\mathbf{d}}$, which can accumulate on the totally geodesic surfaces $T$ by appropriately choosing the degrees $d$ of their covers.](noneq2.jpg) The regluings are done along nonseparating cuffs, so each $S_{\epsilon,R,\mathbf{d}}$ is closed, oriented, *connected* and $(1+O(\epsilon))$-quasifuchsian. As usual, we let $\nu(S_{\epsilon,R,\mathbf{d}})$ denote the probability area measure of $S_{\epsilon,R,\mathbf{d}}$ on the Grassmann bundle $\mathop{\mathrm{Gr}}M$. Recall that $\nu_{\mathop{\mathrm{Gr}}M}$ denotes the Haar measure on $\mathop{\mathrm{Gr}}M$ and $\nu_T$ denotes the area measure of $T$ on $\mathop{\mathrm{Gr}}M$. **Proposition 40**. *The weak-\* limit points, as $\epsilon\to 0$ and $k\to\infty$, of the measures $\nu(S_{\epsilon,R(\epsilon,k),\mathbf{d}(\epsilon,k)})$ on $\mathop{\mathrm{Gr}}M$ consist of all measures $\nu$ of the form $$\tag{$\star$} \nu = \alpha_M \nu_{\mathop{\mathrm{Gr}}M} + \sum_{T\in\mathscr{G}} \alpha_T \nu_T.$$* *Proof.* Let $g_T$ denote the genus of a totally geodesic surface $T$ and let $g_{\epsilon,R}$ denote the genus of $\hat{S}_{\epsilon,R}$. We can write the area measure $\nu(S_{\epsilon,R,\mathbf{d}})$ as $$\nu(S_{\epsilon,R,\mathbf{d}}) = \frac{1}{\sum_{T\in \mathscr{G}_k} 2\pi(g_T-1)d(T) + 2\pi(g_{\epsilon,R} - 1)} \left( \sum_{T\in \mathscr{G}_k} 2\pi(g_T-1) d(T) \nu_T + 2\pi(g_{\epsilon,R} - 1)\nu(\hat{S}_{\epsilon,R}) \right).$$ Recall that $\nu(\hat{S}_{\epsilon,R(\epsilon)})\stackrel{\star}\rightharpoonup\nu_{\mathop{\mathrm{Gr}}M}$ as $\epsilon\to 0$. Thus, by making $d(T,\epsilon,R(\epsilon,k))$ grow appropriately fast for each $T$, we can make $\nu(S_{\epsilon,R,\mathbf{d}})$ converge to any given measure of the form $(\star)$ as $\epsilon\to 0$ and $k\to\infty$. ◻ This, together with Theorem 1.2, which was proved in Section 4, completes the proof of Theorem 1.1. Bader, Uri; Fisher, David; Miller, Nicholas; Stover, Matthew. Arithmeticity, superrigidity, and totally geodesic submanifolds. *Ann. of Math.* (2) 193 (2021), no. 3, 837--861. Calegari, Danny; Marques, Fernando C.; Neves, André. Counting minimal surfaces in negatively curved 3-manifolds. arXiv preprint, arXiv:2002.01062 \[math.DG\]. Epstein, D. B. A.; Marden, A. Convex hulls in hyperbolic space, a theorem of Sullivan, and measured pleated surfaces. *Analytical and geometric aspects of hyperbolic space (Coventry/Durham, 1984)*, 113--253, London Math. Soc. Lecture Note Ser., 111, Cambridge Univ. Press, Cambridge, 1987. Gardiner, Frederick P. *Teichmüller theory and quadratic differentials.* Pure and Applied Mathematics (New York). A Wiley-Interscience Publication. John Wiley & Sons, Inc., New York, 1987. Hamenstädt, Ursula. Incompressible surfaces in rank one locally symmetric spaces. *Geom. Funct. Anal.* 25 (2015), no. 3, 815--859. Howards, Hugh; Hutchings, Michael; Morgan, Frank. The isoperimetric problem on surfaces. *The American Mathematical Monthly* 106 (1999), no. 5, 430--439. Kahn, Jeremy; Labourie, François; Mozes, Shahar. Surface subgroups in uniform lattices of some semi-simple groups. arXiv preprint, arXiv:1805.10189. Kahn, Jeremy; Marković, Vladimir. Immersing almost geodesic surfaces in a closed hyperbolic three manifold. *Ann. of Math.
(2)* 175 (2012), no. 3, 1127--1190. Kahn, Jeremy; Marković, Vladimir. The good pants homology and the Ehrenpreis Conjecture. *Ann. of Math.* 182 (2015), no. 1, 1--72. Kahn, Jeremy; Wright, Alex. Nearly Fuchsian surface subgroups of finite covolume Kleinian groups. arXiv preprint, arXiv:1809.07211 \[math.GT\]. Kahn, Jeremy; Wright, Alex. Counting connections in a locally symmetric space. http://www.math.brown.edu/jk17/Connections.pdf. Labourie, François. Asymptotic counting of minimal surfaces in hyperbolic 3-manifolds \[according to Calegari, Marques and Neves\]. arXiv preprint, arXiv:2203.09366v1 \[math.DG\]. Lalley, Steve. Distribution of periodic orbits of symbolic and Axiom A flows. *Adv. in Appl. Math.* 8 (1987), no. 2, 154--193. Liu, Yi; Marković, Vladimir. Homology of curves and surfaces in closed hyperbolic 3-manifolds. *Duke Math. J.* 164 (2015), no. 14, 2723--2808. Lowe, Ben. Deformations of totally geodesic foliations and minimal surfaces in negatively curved 3-manifolds. *Geom. Funct. Anal.* 31 (2021), no. 4, 895--929. Maclachlan, C.; Reid, A. W. Commensurability classes of arithmetic Kleinian groups and their Fuchsian subgroups. *Math. Proc. Cambridge Philos. Soc.* 102 (1987), no. 2, 251--257. Mohammadi, Amir; Margulis, Gregorii. Arithmeticity of hyperbolic 3-manifolds containing infinitely many totally geodesic surfaces. *Ergodic Theory Dynam. Systems* 42 (2022), no. 3, 1188--1219. Ratner, Marina. On Raghunathan's measure conjecture. *Ann. of Math. (2)* 134 (1991), no. 3, 545--607. Reid, Alan W. Totally geodesic surfaces in hyperbolic 3-manifolds. *Proc. Edinburgh Math. Soc.* (2) 34 (1991), no. 1, 77--88. Sacks, J.; Uhlenbeck, K. Minimal immersions of closed Riemann surfaces. *Trans. Amer. Math. Soc.* 271 (1982), no. 2, 639--652. Seppi, Andrea. Minimal discs in hyperbolic space bounded by a quasicircle at infinity. *Comment. Math. Helv.* 91 (2016), no. 4, 807--839. Mozes, Shahar; Shah, Nimish. On the space of ergodic invariant measures of unipotent flows. *Ergodic Theory Dynam. Systems* 15 (1995), no. 1, 149--159. Schoen, R.; Yau, Shing Tung. Existence of incompressible minimal surfaces and the topology of three-dimensional manifolds with nonnegative scalar curvature. *Ann. of Math.* (2) 110 (1979), no. 1, 127--142. Sun, Hongbin. The panted cobordism groups of cusped hyperbolic 3-manifolds. *J. Topol.* 15 (2022), no. 3, 1580--1634. Thurston, William P. *The Geometry and Topology of Three-Manifolds.* Princeton lecture notes. Uhlenbeck, Karen K. Closed minimal surfaces in hyperbolic 3-manifolds. *Seminar on minimal submanifolds,* 147--168, *Ann. of Math. Stud.*, 103, Princeton Univ. Press, Princeton, NJ, 1983. [^1]: $$\frac{1}{1 + \left(\frac{\epsilon}{2R} - \delta \right)\frac{1}{R}} \leq \frac{1}{1 + \frac{\epsilon}{3R^2}} \leq 1 - \frac{\epsilon}{6R^2} \leq 1 - 3\delta \leq \frac{1 - \delta}{1+\delta}.$$ [^2]: We choose an origin $o\in \mathop{\mathrm{Fr}}M$ and identify $\mathop{\mathrm{Fr}}M \cong \mathop{\mathrm{PSL}}_2 \mathbf{C}$ by sending $go$ to $g$. We say that the right action $R_h$ of an element $h\in G$ on $go\in \mathop{\mathrm{Fr}}M$ is given by $R_h (go) = gho$. This is an antihomomorphism $R:G\to \mathop{\mathrm{Aut}}G$.
arxiv_math
{ "id": "2309.02164", "title": "Limits of asymptotically Fuchsian surfaces in a closed hyperbolic\n 3-manifold", "authors": "Fernando Al Assal", "categories": "math.GT math.DG math.DS", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We present a new approach to stabilizing high-order Runge-Kutta discontinuous Galerkin (RKDG) schemes using weighted essentially non-oscillatory (WENO) reconstructions in the context of hyperbolic conservation laws. In contrast to RKDG schemes that overwrite finite element solutions with WENO reconstructions, our approach employs the reconstruction-based smoothness sensor presented by Kuzmin and Vedral (J. Comput. Phys. 487:112153, 2023) to control the amount of added numerical dissipation. Incorporating a dissipation-based WENO stabilization term into a discontinuous Galerkin (DG) discretization, the proposed methodology achieves high-order accuracy while effectively capturing discontinuities in the solution. As such, our approach offers an attractive alternative to WENO-based slope limiters for DG schemes. The reconstruction procedure that we use performs Hermite interpolation on stencils composed of a mesh cell and its neighboring cells. The amount of numerical dissipation is determined by the relative differences between the partial derivatives of reconstructed candidate polynomials and those of the underlying finite element approximation. The employed smoothness sensor takes all derivatives into account to properly assess the local smoothness of a high-order DG solution. Numerical experiments demonstrate the ability of our scheme to capture discontinuities sharply. Optimal convergence rates are obtained for all polynomial degrees. address: | Institute of Applied Mathematics (LS III), TU Dortmund University\ Vogelpothsweg 87, D-44227 Dortmund, Germany author: - Joshua Vedral bibliography: - paper_vedral_arxiv.bib title: Dissipative WENO stabilization of high-order discontinuous Galerkin methods for hyperbolic problems --- hyperbolic conservation laws, discontinuous Galerkin methods, WENO reconstruction, smoothness indicator, numerical diffusion, high-order finite elements # Introduction {#sec:intro} It is well known that the presence of shocks, discontinuities and steep gradients in exact solutions of hyperbolic conservation laws makes the design of accurate and robust numerical approximations a challenging task. Standard finite volume (FV) and finite element (FE) methods produce numerical solutions which converge to wrong weak solutions and introduce either spurious oscillations or excessive dissipation near shocks. To address this issue, stabilized high-resolution numerical methods were introduced to capture sharp features and discontinuities accurately. Many representatives of such schemes are based on the *essentially non-oscillatory* (ENO) paradigm introduced by Harten et al. [@harten1987]. The key idea is to select the smoothest polynomial from a set of candidate polynomials which are constructed using information from neighboring elements. By employing a weighted combination of all polynomial approximations, Liu et al. [@liu1994] introduced the framework of weighted ENO (WENO) schemes which allows for better representation of steep gradients, offering more accurate solutions in the presence of shocks and discontinuities. The seminal paper by Jiang and Shu [@jiang1996] introduced a fifth-order WENO scheme using a new measurement of the solution smoothness. More recently, numerous variants of the original WENO methodology have been developed. A significant finding in [@henrick2005] revealed that the WENO scheme introduced in [@jiang1996] may encounter accuracy issues when the solution comprises critical points. 
For instance, the fifth-order scheme presented in [@jiang1996] degrades to third order in the vicinity of smooth extrema. To address this concern, a mapping function was proposed to correct the weights, resulting in the WENO-M schemes [@henrick2005]. Moreover, central WENO (CWENO) schemes [@levy1999; @levy2000; @qiu2002] were introduced by splitting the reconstruction stencil into a central stencil and a WENO stencil. The contributions of both stencils are blended using suitable weights. The so-called WENO-Z schemes [@acker2016; @castro2011; @wang2018] reduce the dissipative error by using a linear combination of existing low-order smoothness indicators to construct a new high-order one. Qiu and Shu [@qiu2005] applied the WENO framework to Runge-Kutta discontinuous Galerkin (RKDG) methods by introducing a limiter based on WENO reconstructions. They further investigated the use of Hermite WENO (HWENO) schemes as limiters for RKDG schemes in one [@qiu2004] and two [@qiu2005b] dimensions. An extension to unstructured grids was presented by Zhu and Qiu [@zhu2009]. To account for the local properties of DG schemes, Zhong and Shu [@zhong2013] developed a limiter which uses information only from immediate neighboring elements. Their idea of reconstructing the entire polynomial instead of reconstructing point values was adopted by Zhu et al. for structured [@zhu2016] and unstructured [@zhu2017] grids. The simplified limiter presented by Zhang and Shu [@zhang2010] makes it possible to construct uniformly high-order accurate FV and DG schemes satisfying a strict maximum principle for scalar conservation laws on rectangular meshes. Their approach was later generalized to positivity preserving schemes for the compressible Euler equations of gas dynamics on rectangular [@zhang2010c] and triangular [@zhang2012] meshes. An overview of existing DG-WENO methods can be found, e.g., in [@shu2016; @zhang2011]. The use of WENO reconstructions as limiters is clearly not an option for continuous Galerkin (CG) methods. To our best knowledge, the first successful attempt to construct WENO-based stabilization in the CG setting was recently made in [@kuzmin2023a]. Instead of overwriting polynomial solutions with WENO reconstructions, the approach proposed in [@kuzmin2023a] uses a reconstruction-based smoothness sensor to adaptively blend the numerical viscosity operators of high- and low-order stabilization terms. In this work, we extend the methodology developed in [@kuzmin2023a] to DG methods and hyperbolic systems. Building upon the standard DG scheme, we introduce a nonlinear stabilization technique that incorporates low-order dissipation in the vicinity of discontinuities. The smoothness sensor developed in [@kuzmin2023a] is employed to control the amount of added diffusion. The resulting scheme achieves high-order accuracy while effectively capturing discontinuities in the solution. Moreover, the stabilized space discretization corresponds to a consistent weighted residual formulation of the problem at hand. In the next section, we provide a review of standard RKDG methods. Section [3](#sec:stab){reference-type="ref" reference="sec:stab"} introduces the dissipation-based nonlinear stabilization technique. The WENO-based smoothness sensor is presented in Section [4](#sec:sensor){reference-type="ref" reference="sec:sensor"}. To assess the performance of the proposed methodology, we perform a series of numerical experiments in Section [5](#sec:numex){reference-type="ref" reference="sec:numex"}. 
The paper ends with concluding remarks in Section [6](#sec:concl){reference-type="ref" reference="sec:concl"}. # Standard RKDG methods {#sec:rkdg} Let $U(\mathbf{x},t)\in \mathbb{R}^m$, $m\in \mathbb{N}$, be the vector of conserved quantities at the space location $\mathbf{x}\in \bar{\Omega}$ and time instant $t\geq 0$. We consider the initial value problem $$\begin{aligned} {2} \frac{\partial U}{\partial t}+\nabla\cdot\mathbf{F}(U)&=0 \quad &&\text{in }\Omega \times \mathbb{R}_+, \label{eq:pde}\\ U(\cdot,0)&=U_0 \quad &&\text{in } \Omega, \end{aligned}$$[\[eq:ivp\]]{#eq:ivp label="eq:ivp"} where $\Omega\subset \mathbb{R}^d$, $d\in\{1,2,3\}$, is a Lipschitz domain, $\mathbf{F}(U)\in \mathbb{R}^{m\times d}$ is the flux function and $U_0\in\mathbb{R}^m$ is the initial datum. In the scalar ($m=1$) case, boundary conditions are imposed weakly at the inlet $\partial \Omega_{-}=\{\mathbf{x}\in \partial \Omega:\mathbf{v}\cdot\mathbf{n}<0\}$ of the domain. Here, $\mathbf{v}(\mathbf{x},t)=\mathbf{F}'(U(\mathbf{x},t))$ is a velocity field and $\mathbf{n}$ denotes the unit outward normal to the boundary of the domain. For hyperbolic systems ($m>1$), appropriate boundary conditions (see, e.g., [@guaily2013]) need to be imposed. Let $\mathcal{T}_h=\{K_1,\ldots,K_{E_h}\}$ be a decomposition of the domain $\Omega$ into non-overlapping elements $K_e$, $e=1,...,E_h$. To discretize [\[eq:ivp\]](#eq:ivp){reference-type="eqref" reference="eq:ivp"} in space, we use the standard discontinuous Galerkin finite element space $$\mathbb{V}_p(\mathcal{T}_h)=\{v_h\in L^2(\Omega):v_h|_{K_e}\in \mathbb{V}_p(K_e)\,\forall K_e\in\mathcal{T}_h\},$$ where $\mathbb{V}_p(K_e)\in \{\mathbb{P}_p(K_e), \mathbb{Q}_p(K_e)\}$ is the space spanned by polynomials of degree $p$. We seek an approximate solution $U_h^e\in \mathbb{V}_p(K_e)$ of the form $$U_h^e(\mathbf{x},t):=U_h(\mathbf{x},t)|_{K_e}=\sum_{j=1}^{N_e}U_j(t)\varphi_j(\mathbf{x}), \quad \mathbf{x}\in K_e, \quad t\in [0,T], \label{eq:fesol}$$ where $\varphi_j$, $j=1,\ldots,N_e$, are finite element basis functions. We remark that our methodology does not rely on a particular choice of basis functions. Popular choices include, e.g., Lagrange, Bernstein or Legendre-Gauss-Lobatto polynomials. Inserting [\[eq:fesol\]](#eq:fesol){reference-type="eqref" reference="eq:fesol"} into [\[eq:pde\]](#eq:pde){reference-type="eqref" reference="eq:pde"}, multiplying by a sufficiently smooth test function $W_h$ and applying integration by parts yields the local weak formulation $$\int_{K_e}W_h\cdot\frac{\partial U_h^e}{\partial t}\,\mathrm{d}\mathbf{x}-\int_{K_e}\nabla W_h\cdot\mathbf{F}(U_h^e)\,\mathrm{d}\mathbf{x}+\int_{\partial K_e}W_h\cdot\mathbf{F}(U_h^e)\cdot\mathbf{n}_e\,\mathrm{ds}=0. \label{eq:weak1}$$ The DG solution $U_h$ is generally discontinuous across element interfaces. The one-sided limits $$U_h^{e,\pm}(\mathbf{x},t):=\lim_{\varepsilon\to\pm 0}U_h(\mathbf{x}+\varepsilon\mathbf{n}_e,t), \quad \mathbf{x}\in\partial K_e$$ are used to construct an approximation $\mathcal{H}(U_h^{e,-},U_h^{e,+},\mathbf{n}_e)$ to $\mathbf{F}(U_h^e)\cdot \mathbf{n}_e$.
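To make the cell-local representation and the one-sided traces concrete, the following minimal Python sketch (purely illustrative and not code from the paper; all names are hypothetical) evaluates a 1D DG-$\mathbb{P}_2$ expansion $U_h^e=\sum_j U_j\varphi_j$ with Lagrange basis functions on one reference element and extracts the interface values $U_h^{e,-}$ and $U_h^{e,+}$ that enter the numerical flux.

```python
import numpy as np

# Illustrative 1D setup: U[e, j] stores the coefficient U_j(t) of the local
# Lagrange basis function phi_j on element K_e (reference coordinates in [0, 1]).
p = 2                                   # polynomial degree
E = 4                                   # number of elements
nodes = np.linspace(0.0, 1.0, p + 1)    # local interpolation nodes

def lagrange_basis(xi, j):
    """Evaluate the j-th Lagrange basis function at reference coordinate xi."""
    val = 1.0
    for m in range(p + 1):
        if m != j:
            val *= (xi - nodes[m]) / (nodes[j] - nodes[m])
    return val

def evaluate(U_e, xi):
    """Evaluate the local DG polynomial sum_j U_j phi_j(xi) on one element."""
    return sum(U_e[j] * lagrange_basis(xi, j) for j in range(p + 1))

# arbitrary coefficients for demonstration
rng = np.random.default_rng(0)
U = rng.standard_normal((E, p + 1))

# one-sided limits at the interface between K_e and K_{e+1}
e = 1
U_minus = evaluate(U[e],     1.0)   # trace from the left element
U_plus  = evaluate(U[e + 1], 0.0)   # trace from the right element
print(U_minus, U_plus)
```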
A numerical flux function $\mathcal{H}(U_L,U_R,\mathbf{n})$ must satisfy the consistency condition $$\mathcal{H}(U,U,\mathbf{n})=\mathbf{F}(U)\cdot \mathbf{n}$$ and the conservation property $$\mathcal{H}(U,V,\mathbf{n})=-\mathcal{H}(V,U,-\mathbf{n}).$$ For scalar problems, we employ solely the local Lax-Friedrichs numerical flux $$\mathcal{H}_{LLF} (U_h^{e,-},U_h^{e,+},\mathbf{n}_e)=\mathbf{n}_e\cdot \frac{\mathbf{F}(U_h^{e,-})+\mathbf{F}(U_h^{e,+})}{2}-\frac{1}{2}s^{+}(U_h^{e,+}-U_h^{e,-}),$$ where $s^{+}$ is the maximum wave speed of the 1D Riemann problem in the normal direction $\mathbf{n}_e$, i.e., $$s^{+}=\max_{\omega \in [0,1]}|\mathbf{F}'(\omega U_h^{e,-}+(1-\omega)U_h^{e,+})\cdot \mathbf{n}_e|.$$ For systems, we use the Harten-Lax-van Leer (HLL) [@harten1983b] Riemann solver $$\mathcal{H}_{HLL} (U_h^{e,-},U_h^{e,+},\mathbf{n}_e)=\begin{cases} \mathbf{F}(U_h^{e,-}) \quad &\text{if } 0 < s^{-},\\ \frac{-s^- \mathbf{F}(U_h^{e,+})+s^+ \mathbf{F}(U_h^{e,-})+s^-s^+(U_h^{e,+}-U_h^{e,-})}{s^+-s^-} \quad &\text{if } s^-<0<s^+,\\ \mathbf{F}(U_h^{e,+}) \quad &\text{if } s^{+} < 0. \end{cases} \label{eq:hll}$$ Bounds for the minimum and maximum wave speeds are given by [@davis1988] $$s^-=\min\{\mathbf{v}^-\cdot \mathbf{n}_e-a^-,\mathbf{v}^+\cdot \mathbf{n}_e-a^+\}, \quad s^+=\max\{\mathbf{v}^-\cdot \mathbf{n}_e+a^-,\mathbf{v}^+\cdot \mathbf{n}_e+a^+\},$$ where $\mathbf{v}^{\pm}$ denotes the fluid velocity and $a^{\pm}$ is the speed of sound. Further wave speed estimates can be found, e.g., in [@einfeldt1988; @toro2013]. Many stabilized RKDG schemes employ upwind fluxes or the (local) Lax-Friedrichs flux. While the choice of the numerical flux function is important for piecewise constant approximations, it has little influence on the quality of high-order DG methods [@hajduk2022diss].
By inserting a numerical flux $\mathcal{H} (U_h^{e,-},U_h^{e,+},\mathbf{n}_e)$ into [\[eq:weak1\]](#eq:weak1){reference-type="eqref" reference="eq:weak1"} and replacing $W_h$ with $\varphi_i^e$, we obtain a system of $N_e$ semi-discrete equations $$\int_{K_e}\varphi_i^e\cdot\frac{\partial U_h^e}{\partial t}\,\mathrm{d}\mathbf{x}=\int_{K_e}\nabla \varphi_i^e\cdot\mathbf{F}(U_h^e)\,\mathrm{d}\mathbf{x}-\int_{\partial K_e}\varphi_i^e\cdot\mathcal{H}(U_h^{e,-},U_h^{e,+},\mathbf{n}_e)\,\mathrm{ds}, \label{eq:weak2}$$ which can be written in matrix form as $$M\frac{\mathrm{d}U}{\mathrm{d}t}=R(U),$$ where $M=(m_{ij})$ denotes the mass matrix, $U$ is the vector of degrees of freedom and $R(U)$ is the residual vector. The numerical solution is evolved in time using the optimal third-order strong stability preserving (SSP) Runge-Kutta method [@gottlieb2001] $$\begin{aligned} \begin{split} U^{(1)}&=U^n+\Delta t M^{-1}R(U^n),\\ U^{(2)}&=\frac{3}{4}U^n+\frac{1}{4}U^{(1)}+\frac{1}{4}\Delta t M^{-1}R(U^{(1)}),\\ U^{n+1}&=\frac{1}{3}U^n+\frac{2}{3}U^{(2)}+\frac{2}{3}\Delta t M^{-1}R(U^{(2)}), \end{split} \label{eq:ssp}\end{aligned}$$ unless stated otherwise.
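To make these building blocks concrete, the following self-contained Python sketch implements the local Lax-Friedrichs flux for a scalar conservation law, the HLL flux for given wave-speed bounds, and a single step of the SSP Runge-Kutta scheme [\[eq:ssp\]](#eq:ssp){reference-type="eqref" reference="eq:ssp"}. It is only an illustration under simplifying assumptions: the function names and the residual callback `rhs` are made up for this sketch and do not reflect the MFEM-based implementation used for the numerical results below.

```python
import numpy as np

def llf_flux(f, df, uL, uR, n=1.0):
    """Local Lax-Friedrichs flux for a scalar conservation law u_t + f(u)_x = 0.
    The maximum wave speed is approximated by sampling f' on the segment
    connecting the two one-sided states (exact for convex fluxes)."""
    w = np.linspace(0.0, 1.0, 11)
    s = np.max(np.abs(df(w * uL + (1.0 - w) * uR) * n))
    return 0.5 * n * (f(uL) + f(uR)) - 0.5 * s * (uR - uL)

def hll_flux(FL, FR, UL, UR, sL, sR):
    """HLL flux, given the normal fluxes FL = F(UL).n, FR = F(UR).n and
    wave-speed bounds sL, sR (cf. the Davis estimates above)."""
    if sL >= 0.0:
        return FL
    if sR <= 0.0:
        return FR
    return (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)

def ssp_rk3_step(u, dt, rhs):
    """One step of the optimal third-order SSP Runge-Kutta method, where
    rhs(u) is assumed to return M^{-1} R(u)."""
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * u1 + 0.25 * dt * rhs(u1)
    return u / 3.0 + 2.0 / 3.0 * u2 + 2.0 / 3.0 * dt * rhs(u2)

if __name__ == "__main__":
    # toy check: for linear advection f(u) = u the LLF flux reduces to upwinding
    f = lambda u: u
    df = lambda u: np.ones_like(np.asarray(u, dtype=float))
    print(llf_flux(f, df, 1.0, 0.0))  # prints 1.0 (the upwind value)
```

The stage structure of `ssp_rk3_step` mirrors [\[eq:ssp\]](#eq:ssp){reference-type="eqref" reference="eq:ssp"}; only convex combinations of forward Euler updates are used, which is what makes the method strong stability preserving.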
To address this issue, we introduce artificial diffusion near shocks which effectively dampens the spurious oscillations, ensures stability and improves the accuracy of the RKDG scheme. By incorporating a local stabilization operator $s_h^e(U_h^e,\varphi_i^e)$ into the weak formulation [\[eq:weak2\]](#eq:weak2){reference-type="eqref" reference="eq:weak2"}, the general form of a stabilized scheme can be written as $$\int_{K_e}\varphi_i^e\cdot\frac{\partial U_h^e}{\partial t}\,\mathrm{d}\mathbf{x}+s_h^e(U_h^e,\varphi_i^e)=\int_{K_e}\nabla \varphi_i^e\cdot\mathbf{F}(U_h^e)\,\mathrm{d}\mathbf{x}-\int_{\partial K_e}\varphi_i^e\cdot\mathcal{H}(U_h^{e,-},U_h^{e,+},\mathbf{n}_e)\,\mathrm{ds}.$$ Since piecewise-constant (DG-$\mathbb{P}_0$) approximations naturally introduce sufficient diffusion to ensure numerical stability, a straightforward and practical approach is to add isotropic artificial diffusion throughout the computational domain. In this context, the corresponding low-order nonlinear stabilization operator reads $$s_h^{e,L}(U_h^e,\varphi_i^e)=\nu_e\int_{K_e}\nabla \varphi_i^e\cdot \nabla U_h^e\,\mathrm{d}\mathbf{x}, \label{eq:lostab}$$ where $\nu_e$ is the viscosity parameter given by $$\nu_e=\frac{\lambda_e h_e}{2p}. \label{eq:visc}$$ Here, $\lambda_e=\|\mathbf{F}'(U_h^e)\|_{L^{\infty}(K_e)}$ denotes the maximum wave speed and $h_e$ is the local mesh size. To apply stabilization only in regions with discontinuities, a shock detector $\gamma_e\in [0,1]$ is introduced to control the amount of added diffusion. In our numerical experiments, we use the adaptive nonlinear stabilization operator $$s_h^{e,A}(U_h^e,\varphi_i^e)=\gamma_e\nu_e\int_{K_e}\nabla \varphi_i^e\cdot \nabla U_h^e\,\mathrm{d}\mathbf{x}. \label{eq:astab}$$ Until now, the employed stabilization technique is completely independent of the WENO framework. The accuracy and stability of the resulting scheme depend solely on the definition of the smoothness indicator $\gamma_e$, which we give in the next section. In contrast to the postprocessing nature of standard WENO-based RKDG schemes, in which the numerical solution is overwritten by WENO reconstructions, our stabilization technique is directly integrated into the semi-discrete problem. This incorporation is akin to standard shock-capturing schemes, which makes our stabilization technique well suited for further modifications, such as flux limiting [@kuzmin2021a; @kuzmin2021; @rueda2023]. The stabilized continuous Galerkin (CG) version presented in [@kuzmin2023a] adaptively blends the numerical viscosity operators of low- and high-order stabilization terms. However, the purpose of employing high-order linear stabilization in [@kuzmin2023a] was primarily to obtain optimal convergence rates for problems with smooth exact solutions. Given the inherent properties of DG methods, our scheme does not require the incorporation of high-order linear stabilization. Instead, it relies solely on the low-order nonlinear stabilization operator [\[eq:astab\]](#eq:astab){reference-type="eqref" reference="eq:astab"} equipped with an appropriate shock detector $\gamma_e$. # WENO-based smoothness sensor {#sec:sensor} A well-designed shock-capturing method for DG discretizations of hyperbolic problems should suppress spurious oscillations in a manner that preserves high-order accuracy in smooth regions. To meet this requirement, the use of nonlinear stabilization is commonly restricted to cells that are identified as 'troubled' by smoothness sensors. 
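As a small illustration of how the low-order operator [\[eq:lostab\]](#eq:lostab){reference-type="eqref" reference="eq:lostab"} and its adaptive version [\[eq:astab\]](#eq:astab){reference-type="eqref" reference="eq:astab"} enter the element residual, the following sketch assembles the contribution $\gamma_e\nu_e\int_{K_e}\varphi_i'u_h'\,\mathrm{d}x$ on a single one-dimensional element with linear Lagrange basis functions. The hard-coded local stiffness matrix and the interface of `element_diffusion` are illustrative assumptions, not part of the actual implementation.

```python
import numpy as np

def viscosity(lambda_e, h_e, p):
    """Low-order artificial viscosity nu_e = lambda_e * h_e / (2 p)."""
    return lambda_e * h_e / (2.0 * p)

def local_stiffness_p1(h_e):
    """Exact local stiffness matrix int_K phi_i' phi_j' dx for linear Lagrange
    basis functions on a 1D element of length h_e."""
    return (1.0 / h_e) * np.array([[1.0, -1.0], [-1.0, 1.0]])

def element_diffusion(u_e, gamma_e, lambda_e, h_e, p=1):
    """Adaptive stabilization contribution gamma_e * nu_e * K_e u_e, i.e. the
    vector of values s_h^{e,A}(u_h, phi_i) for all local test functions phi_i."""
    nu_e = viscosity(lambda_e, h_e, p)
    return gamma_e * nu_e * (local_stiffness_p1(h_e) @ u_e)

if __name__ == "__main__":
    u_e = np.array([0.0, 1.0])  # local (nodal) DG coefficients
    print(element_diffusion(u_e, gamma_e=1.0, lambda_e=1.0, h_e=0.1))  # [-0.5  0.5]
    print(element_diffusion(u_e, gamma_e=0.0, lambda_e=1.0, h_e=0.1))  # sensor off: no diffusion
```

Setting $\gamma_e=0$ recovers the unstabilized DG residual in smooth cells, while $\gamma_e=1$ adds the full low-order diffusion [\[eq:lostab\]](#eq:lostab){reference-type="eqref" reference="eq:lostab"}.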
Alongside traditional shock detectors that rely on measures such as entropy [@guermond2011; @krivodonova2004; @lv2016], total variation and slope [@ducros1999; @harten1978; @harten1983a; @hendricks2018; @jameson1981; @ren2003] or residual [@marras2018; @stiernstrom2021] of a finite element approximation, remarkable progress has been made in the development of smoothness sensors based on the WENO methodology [@hill2004; @li2020; @movahed2013; @visbal2005; @wang2023; @zhao2019; @zhao2020]. Building upon these advances, we present a smoothness sensor that aligns with the principles of WENO-based shock detection. Let $U_h^{e,*}$ denote a WENO reconstruction. We equip our stabilization operator [\[eq:astab\]](#eq:astab){reference-type="eqref" reference="eq:astab"} with the smoothness sensor [@kuzmin2023a] $$\gamma_e = \min\Bigg(1, \frac{\|U_h^e-U_h^{e,*}\|_e}{\|U_h^e\|_e}\Bigg)^q, \label{eq:sensor}$$ where $q\geq 1$ allows for tuning the sensitivity of the relative difference between $U_h^e$ and $U_h^{e,*}$. To accurately evaluate the smoothness of the numerical solution, we use the scaled Sobolev semi-norm [@friedrich1998; @jiang1996] $$\|v\|_e=\Bigg(\sum_{1\leq |\mathbf{k}|\leq p}h_e^{2|\mathbf{k}|-d}\int_{K_e}|D^{\mathbf{k}}v|^2\,\mathrm{d}\mathbf{x}\Bigg)^{1/2} \quad \forall v\in H^p(K_e),$$ where $\mathbf{k}=(k_1,\ldots,k_d)$ is the multiindex of the partial derivative $$D^{\mathbf{k}}v=\frac{\partial^{|\mathbf{k}|}v}{\partial x_1^{k_1}\cdots \partial x_d^{k_d}}, \quad |\mathbf{k}|=k_1+\ldots+k_d.$$ Jiang and Shu [@jiang1996] and Friedrich [@friedrich1998] measured the smoothness of candidate polynomials using derivative-based metrics of the form $\|\cdot\|_e^q$, $q \in \{1,2\}$. In this context, our smoothness sensor shares similarities with existing WENO-based shock sensors [@hill2004; @movahed2013; @visbal2005; @zhao2020]. We modify the smoothness sensor [\[eq:sensor\]](#eq:sensor){reference-type="eqref" reference="eq:sensor"} by setting it to zero if the relative difference between the numerical solution and the WENO reconstruction is smaller than a prescribed tolerance. Thus, we effectively avoid introducing unnecessary numerical dissipation in smooth regions, thereby enhancing overall accuracy. A generalized version of the smoothness indicator [\[eq:sensor\]](#eq:sensor){reference-type="eqref" reference="eq:sensor"} is given by $$\begin{aligned} \gamma_{e}^b = \begin{cases} \gamma_e\quad &\text{if } \frac{\|U_h^e-U_h^{e,*}\|_e}{\|U_h^e\|_e}\geq b, \\ 0 \quad &\text{otherwise}, \end{cases} \label{eq:gensensor} \end{aligned}$$ where $b\in[0,1]$ is the threshold for activating numerical dissipation. We recover the smoothness sensor [\[eq:sensor\]](#eq:sensor){reference-type="eqref" reference="eq:sensor"} by using $b=0$. We complete the description of our smoothness sensor by discussing the WENO reconstruction polynomial $U_h^{e,*}$. Exploiting the local nature of standard DG schemes, we follow the Hermite WENO (HWENO) approach introduced by Qiu and Shu [@qiu2004]. Various extensions of the HWENO method can be found in the literature (see, e.g., [@liu2015; @luo2007; @luo2012; @qiu2004; @zhang2023; @zhu2009; @zhu2017]). By incorporating information about the derivatives, HWENO schemes allow for a more accurate representation of the solution, especially in the presence of discontinuities and sharp gradients. The weights assigned to candidate polynomials are determined based on smoothness indicators, with larger weights given to less oscillatory polynomials. 
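Before discussing the reconstruction itself, the evaluation of the sensor can be sketched in one space dimension ($d=1$), where derivatives and integrals of polynomials are available exactly. In the following hedged example the element is taken as $[x_0,x_0+h]$, the WENO reconstruction is simply passed in as a second polynomial, and all function names are assumptions made for this sketch.

```python
import numpy as np
from numpy.polynomial import Polynomial

def sobolev_seminorm(poly, x0, h, p, d=1):
    """Scaled Sobolev semi-norm (derivative orders 1..p) of a polynomial on the
    1D element [x0, x0 + h]; derivatives and integrals are evaluated exactly."""
    total, deriv = 0.0, poly
    for k in range(1, p + 1):
        deriv = deriv.deriv()
        sq = deriv * deriv                       # |D^k v|^2 as a polynomial
        total += h ** (2 * k - d) * (sq.integ()(x0 + h) - sq.integ()(x0))
    return np.sqrt(total)

def smoothness_sensor(u, u_star, x0, h, p, q=1, b=0.0):
    """gamma_e^b computed from the DG polynomial u and a reconstruction u_star."""
    denom = sobolev_seminorm(u, x0, h, p)
    if denom == 0.0:                             # constant solution: nothing to stabilize
        return 0.0
    rel = sobolev_seminorm(u - u_star, x0, h, p) / denom
    return 0.0 if rel < b else min(1.0, rel) ** q

if __name__ == "__main__":
    u = Polynomial([0.0, 1.0, 0.5])              # u(x) = x + 0.5 x^2
    good = Polynomial([0.0, 1.0, 0.49])          # reconstruction close to u
    bad = Polynomial([0.0, -1.0, 0.5])           # strongly deviating reconstruction
    print(smoothness_sensor(u, good, x0=0.0, h=0.1, p=2))  # small value (smooth cell)
    print(smoothness_sensor(u, bad, x0=0.0, h=0.1, p=2))   # 1.0 (troubled cell)
```

In the multidimensional case the sum runs over all multi-indices $1\le|\mathbf{k}|\le p$, but the structure of the computation is the same.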
This weighting strategy effectively reduces numerical oscillations and improves the overall accuracy of the solution. An adaptive HWENO reconstruction can be written as $$U_h^{e,*}=\sum_{l=0}^{m_e}\omega_l^eU_{h,l}^e \in \mathbb{P}_p(K_e),$$ where $\omega_l^e$ and $U_{h,l}^e$, $l=0,\ldots,m_e$ denote nonlinear weights and Hermite candidate polynomials, respectively. We remark that the main contribution of our work is not the construction of $U_h^{e,*}$ but the way in which we use $U_h^{e,*}$ to stabilize the underlying RKDG scheme. Therefore, we omit the details of the HWENO reconstruction procedure employed in this work. For a detailed description of this procedure, we refer the reader to [@kuzmin2023a Sec. 5].
# Numerical examples {#sec:numex}
To assess the properties of the WENO-based stabilization procedure, we conduct a series of numerical experiments involving both linear and nonlinear scalar test problems, as well as the compressible Euler equations of gas dynamics. Our objective is to demonstrate the optimal convergence behavior of our method for smooth problems, its superb shock-capturing capabilities, and its ability to converge to correct entropy solutions in the nonlinear case. Throughout the numerical experiments, we utilize Lagrange basis functions of polynomial order $p\in\{1, 2, 3\}$. We set $q=1$ in [\[eq:sensor\]](#eq:sensor){reference-type="eqref" reference="eq:sensor"}. Numerical solutions are evolved in time using the third-order explicit SSP Runge-Kutta scheme [\[eq:ssp\]](#eq:ssp){reference-type="eqref" reference="eq:ssp"}, unless stated otherwise. For clarity, we assign the following labels to the methods under investigation: ------ ---------------------------------------------------------------------------------------------------------------------------------------------------- DG discontinuous Galerkin method without any stabilization; LO DG + linear low-order stabilization, i.e., using $s_h^{e,L}$ defined by [\[eq:lostab\]](#eq:lostab){reference-type="eqref" reference="eq:lostab"}; WENO DG + nonlinear stabilization, i.e., using $s_h^{e,A}$ defined by [\[eq:astab\]](#eq:astab){reference-type="eqref" reference="eq:astab"}. ------ ---------------------------------------------------------------------------------------------------------------------------------------------------- We measure the experimental order of convergence (EOC) using [@leveque1996; @lohmann2017] $$p^{EOC} = \log\Bigg(\frac{E_1(h_2)}{E_1(h_1)}\Bigg)\log\Bigg(\frac{h_2}{h_1}\Bigg)^{-1},$$ where $E_1(h_1)$ and $E_1(h_2)$ are the $L^1$ errors on two meshes of mesh size $h_1$ and $h_2$, respectively. The implementation of all schemes is based on the open-source C++ library MFEM [@anderson2021; @mfem]. For visualization purposes, numerical solutions to two-dimensional problems are $L^2$-projected into the same-order space of continuous functions. The corresponding results are visualized using the open-source C++ software GLVis [@glvis].
## One-dimensional linear advection with constant velocity
To begin, we consider the one-dimensional linear transport problem $$\frac{\partial u}{\partial t}+v\frac{\partial u}{\partial x}=0 \quad \text{in } \Omega= (0,1) \label{eq:linadv}$$ with constant velocity $v=1$ and periodic boundary conditions. To assess the convergence properties of our scheme, we evolve the smooth initial condition $$u_0(x)=\cos(2\pi(x-0.5))$$ up to the final time $t=1.0$ using SSP Runge-Kutta methods of order $p+1$.
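For reference, the EOC formula can be transcribed directly; the sample values in the sketch below reproduce two entries of the $p=1$ advection table that follows, with $h=1/E_h$ on the unit interval.

```python
import math

def eoc(error_coarse, error_fine, h_coarse, h_fine):
    """Experimental order of convergence from L1 errors on two meshes."""
    return math.log(error_fine / error_coarse) / math.log(h_fine / h_coarse)

if __name__ == "__main__":
    # p = 1 advection errors on 32 and 64 elements (see the table below)
    print(eoc(1.67e-03, 3.97e-04, 1.0 / 32, 1.0 / 64))  # approx. 2.07
```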
Table [3](#tab:convlinadv){reference-type="ref" reference="tab:convlinadv"} shows the $L^1$ errors and EOCs for Lagrange finite elements of order $p\in\{1,2,3\}$. Both the DG and the WENO scheme deliver the optimal $L^1$ convergence rates, i.e., $\text{EOC}\approx p+1$. Notably, we observe that the error of the WENO scheme converges towards the Galerkin error as the mesh is refined. In contrast, the LO scheme achieves only first-order accuracy because numerical dissipation is introduced throughout the domain. DG LO WENO ------- ---------------------------------- ------ ---------------------------------- ------ ---------------------------------- ------ $E_h$ $\|u_h-u_{\text{exact}}\|_{L^1}$ EOC $\|u_h-u_{\text{exact}}\|_{L^1}$ EOC $\|u_h-u_{\text{exact}}\|_{L^1}$ EOC 16 7.28e-03 -- 2.97e-01 -- 6.25e-02 -- 32 1.67e-03 2.13 1.70e-01 0.80 1.50e-02 2.06 64 3.97e-04 2.07 9.14e-02 0.90 2.13e-03 2.82 128 9.69e-05 2.04 4.75e-02 0.95 1.78e-04 3.58 256 2.39e-05 2.02 2.42e-02 0.97 2.48e-05 2.84 512 5.95e-06 2.01 1.22e-02 0.99 5.96e-06 2.06 1024 1.48e-06 2.00 6.13e-03 0.99 1.48e-06 2.01 2048 3.70e-07 2.00 3.07e-03 1.00 3.70e-07 2.00 : One-dimensional linear advection with constant velocity, grid convergence history for finite elements of degree $p\in\{1,2,3\}$. DG LO WENO ------- ---------------------------------- ------ ---------------------------------- ------ ---------------------------------- ------ $E_h$ $\|u_h-u_{\text{exact}}\|_{L^1}$ EOC $\|u_h-u_{\text{exact}}\|_{L^1}$ EOC $\|u_h-u_{\text{exact}}\|_{L^1}$ EOC 16 1.59e-04 -- 2.41e-01 -- 1.25e-03 -- 32 1.95e-05 3.02 1.35e-01 0.84 1.24e-04 3.33 64 2.43e-06 3.01 7.14e-02 0.92 6.79e-06 4.19 128 3.03e-07 3.00 3.67e-02 0.96 3.79e-07 4.16 256 3.78e-08 3.00 1.86e-02 0.98 4.38e-08 3.11 512 4.73e-09 3.00 9.39e-03 0.99 5.47e-09 3.00 1024 5.91e-10 3.00 4.71e-03 0.99 6.84e-10 3.00 2048 7.40e-11 3.00 2.36e-03 1.00 8.59e-11 2.99 : One-dimensional linear advection with constant velocity, grid convergence history for finite elements of degree $p\in\{1,2,3\}$. DG LO WENO ------- ---------------------------------- ------ ---------------------------------- ------ ---------------------------------- ------ $E_h$ $\|u_h-u_{\text{exact}}\|_{L^1}$ EOC $\|u_h-u_{\text{exact}}\|_{L^1}$ EOC $\|u_h-u_{\text{exact}}\|_{L^1}$ EOC 16 3.86e-06 -- 1.84e-01 -- 2.09e-04 -- 32 2.40e-07 4.01 1.00e-01 0.88 2.17e-05 3.27 64 1.50e-08 4.00 5.21e-02 0.94 1.63e-07 7.06 128 9.35e-10 4.00 2.66e-02 0.97 7.03e-09 4.54 256 5.85e-11 4.00 1.34e-02 0.98 3.08e-10 4.51 512 3.83e-12 3.93 6.76e-03 0.99 4.05e-11 4.08 : One-dimensional linear advection with constant velocity, grid convergence history for finite elements of degree $p\in\{1,2,3\}$. To test the robustness of our scheme, we solve [\[eq:linadv\]](#eq:linadv){reference-type="eqref" reference="eq:linadv"} using the initial condition $$u_0(x) = \begin{cases} 1 & \mbox{if} \ 0.15 \le x \le 0.45, \\ \Big[\cos\Big(\frac{10\pi}{3}(x-0.7)\Big)\Big]^2 & \mbox{if} \ 0.55 < x < 0.85, \\ 0 & \mbox{otherwise}, \end{cases} \label{num:advinittwo}$$ consisting of a discontinuous and an infinitely differentiable smooth profile. We run the numerical simulations up to the final time $t=1.0$ using $E_h=128$ elements. The results are shown in Figs [1](#fig:advdg){reference-type="ref" reference="fig:advdg"}-[3](#fig:advweno){reference-type="ref" reference="fig:advweno"}. 
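For reproducibility, the composite initial profile [\[num:advinittwo\]](#num:advinittwo){reference-type="eqref" reference="num:advinittwo"} can be transcribed directly; the following is a minimal NumPy sketch, independent of the finite element code.

```python
import numpy as np

def u0_composite(x):
    """Composite initial profile: a box on [0.15, 0.45], a smooth cos^2 bump on
    (0.55, 0.85) and zero elsewhere."""
    x = np.asarray(x, dtype=float)
    u = np.zeros_like(x)
    box = (0.15 <= x) & (x <= 0.45)
    bump = (0.55 < x) & (x < 0.85)
    u[box] = 1.0
    u[bump] = np.cos(10.0 * np.pi / 3.0 * (x[bump] - 0.7)) ** 2
    return u

if __name__ == "__main__":
    print(u0_composite(np.array([0.3, 0.5, 0.7, 0.9])))  # [1. 0. 1. 0.]
```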
As expected, the DG solutions display spurious oscillations near the discontinuities, while the solutions obtained with the LO method suffer from excessive numerical dissipation, resulting in reduced accuracy. The WENO solutions provide a sharp representation of the discontinuities without any apparent over- or undershoots. However, it is worth noting that for linear finite elements a peak clipping effect can be observed in the smooth profile. ![DG](results/advdg.eps){#fig:advdg width="\\linewidth"} ![LO](results/advlo.eps){#fig:advlo width="\\linewidth"} ![WENO](results/advweno.eps){#fig:advweno width="\\linewidth"} ## One-dimensional inviscid Burgers equation The first nonlinear problem we consider is the one-dimensional inviscid Burgers equation $$\frac{\partial u}{\partial t}+\frac{\partial(u^2/2)}{\partial x} = 0 \quad \text{in } \Omega=(0,1).$$ The initial condition $$u_0(x)=\sin(2\pi x)$$ remains smooth up to the critical time $t_c=\frac{1}{2\pi}$ of shock formation. Thus, we perform grid convergence studies at the time $t=0.1$ for which the exact solution remains sufficiently smooth. The results are shown in Table [6](#tab:convburgers){reference-type="ref" reference="tab:convburgers"}. Similarly to the case of linear advection, both the DG and the WENO scheme yield optimal convergence rates. Once more, the WENO error converges to the Galerkin error. The numerical solutions obtained with the LO method maintain first-order accuracy in the nonlinear case. DG LO WENO ------- ---------------------------------- ------ ---------------------------------- ------ ---------------------------------- ------ $E_h$ $\|u_h-u_{\text{exact}}\|_{L^1}$ EOC $\|u_h-u_{\text{exact}}\|_{L^1}$ EOC $\|u_h-u_{\text{exact}}\|_{L^1}$ EOC 16 8.59e-03 -- 4.61e-02 -- 1.30e-02 -- 32 2.07e-03 2.06 2.45e-02 0.91 3.31e-03 1.97 64 5.17e-04 2.00 1.27e-02 0.94 7.37e-04 2.17 128 1.32e-04 1.97 6.50e-03 0.97 1.36e-04 2.44 256 3.36e-05 1.98 3.28e-03 0.99 3.36e-05 2.01 512 8.49e-06 1.98 1.65e-03 0.99 8.49e-06 1.98 1024 2.14e-06 1.99 8.26e-04 1.00 2.14e-06 1.99 2048 5.36e-07 1.99 4.13e-04 1.00 5.36e-07 1.99 : One-dimensional Burgers equation, grid convergence history for finite elements of degree $p\in\{1,2,3\}$. DG LO WENO ------- ---------------------------------- ------ ---------------------------------- ------ ---------------------------------- ------ $N_h$ $\|u_h-u_{\text{exact}}\|_{L^1}$ EOC $\|u_h-u_{\text{exact}}\|_{L^1}$ EOC $\|u_h-u_{\text{exact}}\|_{L^1}$ EOC 16 3.52e-04 -- 4.39e-02 -- 5.73e-04 -- 32 9.34e-05 1.92 2.35e-02 0.90 1.44e-04 1.99 64 1.21e-05 2.95 1.25e-02 0.92 1.44e-05 3.33 128 1.54e-06 2.97 6.44e-03 0.95 1.53e-06 3.23 256 1.95e-07 2.98 3.27e-03 0.98 1.97e-07 2.96 512 2.47e-08 2.98 1.65e-03 0.99 2.50e-08 2.98 1024 3.11e-09 2.99 8.26e-04 1.00 3.16e-09 2.98 2048 3.91e-10 2.99 4.13e-04 1.00 3.97e-10 2.99 : One-dimensional Burgers equation, grid convergence history for finite elements of degree $p\in\{1,2,3\}$. 
DG LO WENO ------- ---------------------------------- ------ ---------------------------------- ------ ---------------------------------- ------- $N_h$ $\|u_h-u_{\text{exact}}\|_{L^1}$ EOC $\|u_h-u_{\text{exact}}\|_{L^1}$ EOC $\|u_h-u_{\text{exact}}\|_{L^1}$ EOC 16 1.28e-04 -- 3.10e-02 -- 8.88e-03 -- 32 8.65e-06 3.89 1.59e-02 0.96 1.80e-03 2.30 64 5.10e-07 4.08 8.17e-03 0.96 1.27e-06 10.47 128 3.22e-08 3.99 4.16e-03 0.97 4.34e-08 4.88 256 2.04e-09 3.98 2.10e-03 0.99 2.71e-09 4.00 512 1.30e-10 3.98 1.05e-03 0.99 1.75e-10 3.95 : One-dimensional Burgers equation, grid convergence history for finite elements of degree $p\in\{1,2,3\}$.
To investigate the shock behavior for nonlinear problems, we extend the final time to $t=1.0$ after the shock has formed. For our numerical simulations, we employ finite elements of degree $p\in\{1,2,3\}$ on $E_h=128$ elements. The results obtained with the LO scheme and the WENO scheme are shown in Figs [4](#fig:burgerslo){reference-type="ref" reference="fig:burgerslo"} and [5](#fig:burgersweno){reference-type="ref" reference="fig:burgersweno"}, respectively. Due to the self-steepening nature of shocks, the LO scheme exhibits reduced diffusion. Interestingly, there are no observable differences between the solutions obtained using different polynomial degrees. This finding is consistent with the results reported in [@kuzmin2023a].
![LO](results/burgerslo.eps){#fig:burgerslo width="\\linewidth"} ![WENO](results/burgersweno.eps){#fig:burgersweno width="\\linewidth"}
## Two-dimensional solid body rotation
Next, we consider LeVeque's solid body rotation benchmark [@leveque1996]. We solve the transport problem $$\frac{\partial u}{\partial t}+\nabla\cdot(\mathbf{v}u) = 0 \quad \text{in } \Omega=(0,1)^2$$ with the divergence-free velocity field ${\bf v}(x,y)=2\pi(0.5-y,x-0.5)$. In this two-dimensional test problem, a smooth hump, a sharp cone and a slotted cylinder are rotated around the center of the domain. After each full revolution ($t=r$, $r\in \mathbb{N}$) the exact solution corresponds to the initial condition given by $$u_0(x,y) = \begin{cases} u_0^{\text{hump}}(x,y) & \text{if } \sqrt{(x-0.25)^2+(y-0.5)^2}\le 0.15, \\ u_0^{\text{cone}}(x,y) & \text{if } \sqrt{(x-0.5)^2+(y-0.25)^2}\le 0.15, \\ 1 & \text{if }\begin{cases} (\sqrt{(x-0.5)^2+(y-0.75)^2}\le0.15) \wedge \\ (|x-0.5|\ge0.025\vee y\ge0.85), \end{cases}\\ 0 & \text{otherwise}, \end{cases}$$ where $$\begin{aligned} u_0^{\text{hump}}(x,y) &= \frac{1}{4}+\frac{1}{4}\cos\bigg(\frac{\pi\sqrt{(x-0.25)^2+(y-0.5)^2}}{0.15}\bigg), \\ u_0^{\text{cone}}(x,y) &= 1-\frac{\sqrt{(x-0.5)^2+(y-0.25)^2}}{0.15}.\end{aligned}$$ We perform numerical computations up to the final time $t=1.0$ on a uniform quadrilateral mesh using $E_h=128^2$ elements and $p=2$. Similarly to the one-dimensional scalar test problems, the DG solution, as presented in Fig. [6](#fig:sbrdg){reference-type="ref" reference="fig:sbrdg"}, exhibits over- and undershoots near discontinuities, while solutions obtained with the LO scheme suffer from significant numerical dissipation; see Fig. [7](#fig:sbrlo){reference-type="ref" reference="fig:sbrlo"}. In fact, the LO scheme fails to reproduce the geometric features of the initial condition accurately. Contrary to that, the WENO scheme effectively suppresses oscillations near discontinuities and accurately preserves the structure of all rotating objects; see Fig. [8](#fig:sbrweno){reference-type="ref" reference="fig:sbrweno"}.
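For reference, the velocity field and the initial condition of this benchmark can be transcribed as follows; the sketch is a direct translation of the formulas above and is independent of the finite element implementation.

```python
import numpy as np

def velocity(x, y):
    """Divergence-free velocity field of the solid body rotation benchmark."""
    return 2.0 * np.pi * (0.5 - y), 2.0 * np.pi * (x - 0.5)

def u0_sbr(x, y):
    """Smooth hump, sharp cone and slotted cylinder of LeVeque's benchmark."""
    r_hump = np.sqrt((x - 0.25) ** 2 + (y - 0.5) ** 2)
    r_cone = np.sqrt((x - 0.5) ** 2 + (y - 0.25) ** 2)
    r_cyl = np.sqrt((x - 0.5) ** 2 + (y - 0.75) ** 2)
    if r_hump <= 0.15:
        return 0.25 + 0.25 * np.cos(np.pi * r_hump / 0.15)
    if r_cone <= 0.15:
        return 1.0 - r_cone / 0.15
    if r_cyl <= 0.15 and (abs(x - 0.5) >= 0.025 or y >= 0.85):
        return 1.0  # slotted cylinder
    return 0.0

if __name__ == "__main__":
    print(u0_sbr(0.25, 0.5))  # 0.5, top of the hump
    print(u0_sbr(0.5, 0.25))  # 1.0, tip of the cone
    print(u0_sbr(0.5, 0.80))  # 0.0, inside the slot
```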
![Solid body rotation, numerical solutions at $t=1.0$ obtained using $E_h=128^2$ and $p=2$.](results/sbrdg1.png){#fig:sbr width="\\linewidth"} ![Solid body rotation, numerical solutions at $t=1.0$ obtained using $E_h=128^2$ and $p=2$.](results/sbrlo1.png){#fig:sbr width="\\linewidth"} ![Solid body rotation, numerical solutions at $t=1.0$ obtained using $E_h=128^2$ and $p=2$.](results/sbrweno1.png){#fig:sbr width="\\linewidth"} ![DG, $u_h \in [-0.107,1.120]$](results/sbrdg2.png){#fig:sbrdg width="\\linewidth"} ![LO, $u_h \in [0.000,0.596]$](results/sbrlo2.png){#fig:sbrlo width="\\linewidth"} ![WENO, $u_h \in [0.000,0.963]$](results/sbrweno2.png){#fig:sbrweno width="\\linewidth"} ## Two-dimensional KPP problem We investigate the entropy stability properties of our scheme using the two-dimensional KPP problem [@kurganov2007]. In this example, we solve $$\frac{\partial u}{\partial t}+\nabla \cdot \mathbf{f}(u) = 0 ,$$ where the nonconvex flux function is given by $$\mathbf{f}(u)=(\sin(u),\cos(u)).$$ The computational domain is $\Omega=(-2,2)\times(-2.5,1.5)$. The main challenge of this test problem lies in the potential convergence to incorrect weak solutions instead of the entropy solution, which exhibits a rotational wave structure. The initial condition is given by $$u_0(x,y)=\begin{cases} \frac{7\pi}{2} & \text{if }\sqrt{x^2+y^2}\le 1,\\ \frac{\pi}{4} & \text{otherwise}. \end{cases}$$ Once again, we perform numerical simulations up to the final time $t=1.0$ on a uniform quadrilateral mesh using $E_h=128^2$ elements and $p=2$. The global upper bound for the maximum speed required to compute the viscosity parameter $\nu_e$ in [\[eq:visc\]](#eq:visc){reference-type="eqref" reference="eq:visc"} is $\lambda_e=1.0$. More accurate bounds can be found in [@guermond2017]. Unlike the WENO-based stabilization approach applied to continuous finite elements in [@kuzmin2023a], we do not modify the WENO parameters for this particular test problem. Fig. [12](#fig:kppdg){reference-type="ref" reference="fig:kppdg"} shows that the DG solution suffers from excessive oscillations and converges to an entropy-violating solution. Contrary to that, both the LO scheme and the WENO scheme converge to the correct entropy solution. The numerical solutions produced by these schemes are shown in Figs [13](#fig:kpplo){reference-type="ref" reference="fig:kpplo"} and [14](#fig:kppweno){reference-type="ref" reference="fig:kppweno"}, respectively. The WENO result is less diffusive, capturing shocks more sharply than the solution obtained using the LO scheme. ![DG, $u_h \in [-45.978,46.440]$](results/kppdg.png){#fig:kppdg width="\\linewidth"} ![LO, $u_h \in [0.785,10.992]$](results/kpplo.png){#fig:kpplo width="\\linewidth"} ![WENO, $u_h \in [0.785,10.996]$](results/kppweno.png){#fig:kppweno width="\\linewidth"} ## Euler equations of gas dynamics We consider the Euler equations of gas dynamics which represent the conservation of mass, momentum and total energy. The solution vector and flux matrix in [\[eq:pde\]](#eq:pde){reference-type="eqref" reference="eq:pde"} read $$U = \begin{bmatrix} \varrho \\ \varrho \mathbf{v}\\ \varrho E \end{bmatrix}\in \mathbb{R}^{d+2}, \quad \mathbf{F(U)}=\begin{bmatrix} \varrho \mathbf{v} \\ \varrho \mathbf{v} \bigotimes \mathbf{v}+p\mathbf{I}\\ (\varrho E + p)\mathbf{v} \end{bmatrix}\in \mathbb{R}^{(d+2) \times d},$$ where $\varrho$, $\mathbf{v}$, $E$ are the density, velocity and specific total energy, respectively. 
The identity matrix is denoted by $\mathbf{I}$, and the pressure $p$ is computed using the polytropic ideal gas assumption $$p=\varrho e (\gamma-1).$$ Here, $\varrho e$ and $\gamma$ are the internal energy and the heat capacity ratio, respectively. We use $\gamma=1.4$ in all of our numerical experiments. Similarly to the approach presented in [@zhao2020], our WENO-based shock sensor is designed to use information solely from the density field, resulting in significant computational savings [@pirozzoli2002]. ### Sod shock tube Sod's shock tube problem [@sod1978] serves as our first test for solving the Euler equations. This classical problem is commonly employed to assess the accuracy of high-order numerical methods. The computational domain $\Omega=(0,1)$ is delimited by reflecting walls and initially divided by a membrane into two distinct regions. Upon removing the membrane, a discontinuity at $x=0.5$ leads to the formation of a shock wave, rarefaction wave, and contact discontinuity. The initial condition is given by $$\begin{bmatrix} \varrho_L \\v_L\\p_L \end{bmatrix}= \begin{bmatrix}1.0\\0.0\\1.0 \end{bmatrix}, \quad \begin{bmatrix} \varrho_R \\v_R\\p_R \end{bmatrix}= \begin{bmatrix}0.125\\0.0\\0.1 \end{bmatrix}.$$ We evolve numerical solutions up to the final time $t=0.231$ using $E_h=128$ elements and $p=2$. Fig. [15](#fig:soddg){reference-type="ref" reference="fig:soddg"} illustrates that the DG solution exhibits over- and undershoots near discontinuities. The LO solution, as shown in Fig. [16](#fig:sodlo){reference-type="ref" reference="fig:sodlo"}, is free of spurious local extrema but has the drawback of being highly diffusive. The numerical solution obtained with the WENO scheme captures discontinuities sharply while preventing the occurrence of spurious oscillations; see Fig. [17](#fig:sodweno){reference-type="ref" reference="fig:sodweno"}. ![DG](results/soddg.eps){#fig:soddg width="\\linewidth"} ![LO](results/sodlo.eps){#fig:sodlo width="\\linewidth"} ![WENO](results/sodweno.eps){#fig:sodweno width="\\linewidth"} To further evaluate the accuracy of our scheme, we perform a comparison between the shock sensor $\gamma_e$ in [\[eq:sensor\]](#eq:sensor){reference-type="eqref" reference="eq:sensor"}, the modified version $\gamma_e^b$ displayed in [\[eq:gensensor\]](#eq:gensensor){reference-type="eqref" reference="eq:gensensor"}, and the shock sensor proposed by Zhao et al. [@zhao2020], i.e., $$\gamma_e^Z=\frac{\sum_{l=0}^{m_e}|\frac{\omega_l}{\tilde{\omega}_l}-1|^{\theta}}{|\frac{1}{\min_l\tilde{\omega}_l}-1|^{\theta}+m_e}, \quad 0 \leq l \leq m_e.$$ Here, deviations between nonlinear weights $\omega_l$, $l=0,\ldots,m_e$ and ideal linear weights $\tilde{\omega}_l$, $l=0,\ldots,m_e$ assigned to candidate polynomials are computed to assess the regularity of the solution. The free parameter $\theta$ controls the sensitivity to deviations from the ideal weights. We set $\theta = 1$. Continuing with the aforementioned setup, we present, in Fig. [23](#fig:sodzoom){reference-type="ref" reference="fig:sodzoom"}, numerical solutions in different regions of the computational domain. Notably, our WENO-based shock sensor is less diffusive compared to the one introduced in [@zhao2020]. By employing the generalized smoothness sensor, the accuracy can be further improved. However, this approach may introduce instabilities for strong shocks. 
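For completeness, the equation of state and the resulting one-dimensional Euler flux used in these computations can be sketched as follows, together with the Sod initial states. The array layout (density, momentum, total energy) and the helper names are illustrative assumptions made for this sketch only.

```python
import numpy as np

GAMMA = 1.4

def pressure(U):
    """Ideal gas EOS p = (gamma - 1) * rho * e, with rho*e = rho*E - 0.5*(rho*v)^2/rho."""
    rho, mom, rhoE = U
    return (GAMMA - 1.0) * (rhoE - 0.5 * mom ** 2 / rho)

def euler_flux_1d(U):
    """Flux (rho*v, rho*v^2 + p, (rho*E + p)*v) of the 1D Euler equations."""
    rho, mom, rhoE = U
    v = mom / rho
    p = pressure(U)
    return np.array([mom, mom * v + p, (rhoE + p) * v])

def primitive_to_conservative(rho, v, p):
    return np.array([rho, rho * v, p / (GAMMA - 1.0) + 0.5 * rho * v ** 2])

if __name__ == "__main__":
    UL = primitive_to_conservative(1.0, 0.0, 1.0)      # Sod, left of the membrane
    UR = primitive_to_conservative(0.125, 0.0, 0.1)    # Sod, right of the membrane
    print(euler_flux_1d(UL), euler_flux_1d(UR))
```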
![Sod shock tube, density profiles $\varrho$ at $t=0.231$ obtained using $E_h=128$, $p=2$ and several shock sensors.](results/sodzoom1.eps "fig:"){#fig:sodzoom width="\\linewidth"} [\[fig:sodzoom1\]]{#fig:sodzoom1 label="fig:sodzoom1"} ![Sod shock tube, density profiles $\varrho$ at $t=0.231$ obtained using $E_h=128$, $p=2$ and several shock sensors.](results/sodzoom2.eps "fig:"){#fig:sodzoom width="\\linewidth"} [\[fig:sodzoom2\]]{#fig:sodzoom2 label="fig:sodzoom2"} ![Sod shock tube, density profiles $\varrho$ at $t=0.231$ obtained using $E_h=128$, $p=2$ and several shock sensors.](results/sodzoom3.eps "fig:"){#fig:sodzoom width="\\linewidth"} [\[fig:sodzoom3\]]{#fig:sodzoom3 label="fig:sodzoom3"} ### Modified Sod shock tube We investigate the entropy stability properties of our scheme for hyperbolic systems using the modified Sod shock tube problem [@toro2013]. Here, the initial condition is specified as $$\begin{bmatrix} \varrho_L \\v_L\\p_L \end{bmatrix}= \begin{bmatrix}1.0\\0.75\\1.0 \end{bmatrix}, \quad \begin{bmatrix} \varrho_R \\v_R\\p_R \end{bmatrix}= \begin{bmatrix}0.125\\0.0\\0.1 \end{bmatrix}.$$ For this particular problem, many high-order schemes encounter an entropy glitch due to the presence of a sonic point within the rarefaction wave. The computational domain is given by $\Omega=(0,1)$. Unlike the classical Sod shock tube, the left boundary functions as an inlet. Numerical simulations are carried out using $E_h=128$ elements and $p=2$. The results obtained with the LO method and the WENO method are depicted in Figs [24](#fig:msodlo){reference-type="ref" reference="fig:msodlo"} and [25](#fig:msodweno){reference-type="ref" reference="fig:msodweno"}, respectively. Interestingly, when employing the LO method, an entropy shock can be observed in all variables. In contrast, the numerical solutions obtained with the WENO scheme remain entropy stable, thereby eliminating the need for (semi-)discrete entropy fixes in our method. ![LO](results/msodlo.eps){#fig:msodlo width="\\linewidth"} ![WENO](results/msodweno.eps){#fig:msodweno width="\\linewidth"} ### Lax shock tube Next, we consider a 1D tube that is equipped with a diaphragm placed at its center, dividing the tube into two distinct regions with varying pressures. The rupture of the diaphragm at $t=0$ leads to the propagation of a rarefaction wave on the left side and the formation of a contact discontinuity and a shock wave on the right side. This test problem is known as the Lax shock tube problem [@lax1954]. The initial data $$\begin{bmatrix} \varrho_L \\v_L\\p_L \end{bmatrix}= \begin{bmatrix}0.445\\0.698\\3.528 \end{bmatrix}, \quad \begin{bmatrix} \varrho_R \\v_R\\p_R \end{bmatrix}= \begin{bmatrix}0.5\\0.0\\0.571 \end{bmatrix}$$ are prescribed in the computational domain $\Omega=(0,2)$. We perform numerical experiments up to the final time $t=0.14$ using $E_h=512$ elements and $p\in \{1,2,3\}$. The density profiles of the DG solutions, LO solutions and WENO solutions are displayed in Figs [26](#fig:laxdg){reference-type="ref" reference="fig:laxdg"}-[28](#fig:laxweno){reference-type="ref" reference="fig:laxweno"}, respectively. Notably, the DG scheme equipped with cubic basis polynomials oscillates heavily, resulting in a distorted solution that is not shown here. Once again, the WENO method demonstrates sharp capturing of discontinuities, while the LO scheme suffers from excessive numerical dissipation. 
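As a small illustration of how the wave-speed bounds from Section [2](#sec:rkdg){reference-type="ref" reference="sec:rkdg"} are evaluated for the gas dynamics problems in this section, the following sketch computes the sound speed and the Davis-type estimates entering the HLL flux, with the Lax tube states as sample input; the function names are assumptions made for this sketch.

```python
import numpy as np

GAMMA = 1.4

def sound_speed(rho, p):
    return np.sqrt(GAMMA * p / rho)

def davis_wave_speeds(rhoL, vL, pL, rhoR, vR, pR, n=1.0):
    """Davis-type bounds s^- and s^+ for the HLL flux, written for the 1D
    normal direction n = +-1."""
    aL, aR = sound_speed(rhoL, pL), sound_speed(rhoR, pR)
    s_minus = min(vL * n - aL, vR * n - aR)
    s_plus = max(vL * n + aL, vR * n + aR)
    return s_minus, s_plus

if __name__ == "__main__":
    # left/right states of the Lax shock tube
    print(davis_wave_speeds(0.445, 0.698, 3.528, 0.5, 0.0, 0.571))
```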
![DG](results/laxdg.eps){#fig:laxdg width="\\linewidth"} ![LO](results/laxlo.eps){#fig:laxlo width="\\linewidth"} ![WENO](results/laxweno.eps){#fig:laxweno width="\\linewidth"} ### Shu-Osher problem We investigate the sine-shock interaction problem, also known as the Shu-Osher problem [@shu1989]. The computational domain $\Omega=(-5, 5)$ is bounded by an inlet on the left boundary and a reflecting wall on the right boundary. This problem serves as a useful test to evaluate the resolution of higher frequency waves behind a shock. The problem is equipped with the initial conditions $$\begin{bmatrix} \varrho_L \\v_L\\p_L \end{bmatrix}= \begin{bmatrix}3.857143\\2.629369\\10.3333 \end{bmatrix}, \quad \begin{bmatrix} \varrho_R \\v_R\\p_R \end{bmatrix}= \begin{bmatrix}1.0+0.2\sin(5x)\\0.0\\1.0 \end{bmatrix}.$$ We perform numerical simulations up to the final time $t=1.8$ using $E_h=512$ elements and $p\in\{1,2,3\}$. Figs [29](#fig:solo){reference-type="ref" reference="fig:solo"} and [30](#fig:soweno){reference-type="ref" reference="fig:soweno"} show the density profiles of the solutions obtained with the LO scheme and the WENO scheme, respectively. The LO scheme struggles to accurately capture the physical oscillations within the exact solution. Even the WENO scheme equipped with linear basis functions exhibits a significant amount of numerical dissipation. However, when the WENO scheme is equipped with high-order polynomials, it accurately captures all features of the exact solution. ![LO](results/solo.eps){#fig:solo width="\\linewidth"} ![WENO](results/soweno.eps){#fig:soweno width="\\linewidth"} ### Woodward-Colella blast wave problem We consider the Woodward-Colella blast wave problem [@woodward1984], a challenging test for many high-order numerical schemes. Its solution involves multiple interactions of strong shock waves and rarefaction waves with each other and with contact waves. For a comprehensive understanding of all wave interactions, we refer the reader to [@woodward1984]. This problem serves as a rigorous test to assess the robustness of numerical schemes. The initial data $$\begin{bmatrix} \varrho_L \\v_L\\p_L \end{bmatrix}= \begin{bmatrix}1.0\\0.0\\1000.0 \end{bmatrix}, \quad \begin{bmatrix} \varrho_M \\v_M\\p_M \end{bmatrix}= \begin{bmatrix}1.0\\0.0\\0.1 \end{bmatrix},\quad \begin{bmatrix} \varrho_R \\v_R\\p_R \end{bmatrix}= \begin{bmatrix}1.0\\0.0\\100.0 \end{bmatrix}$$ are prescribed in the computational domain $\Omega=(0,1)$, which is bounded by reflecting walls. Fig. [\[fig:wc\]](#fig:wc){reference-type="ref" reference="fig:wc"} displays the density profiles of the LO solutions and the WENO solutions at the final time $t=0.038$. The numerical results were obtained using $E_h=512$ elements and $p\in\{1,2,3\}$. It is evident that the WENO scheme achieves higher accuracy compared to the dissipative LO scheme and accurately captures all sharp features without any visible oscillations. Similarly to the Shu-Osher problem, the benefits of using high-order polynomials are clearly visible in this test. ![LO](results/wclo.eps){#fig:wclo width="\\linewidth"} ![WENO](results/wcweno.eps){#fig:wcweno width="\\linewidth"} ### Double Mach reflection In our last numerical example, we consider the double Mach reflection problem of Woodward and Colella [@woodward1984]. The computational domain is the rectangle $\Omega=(0,4)\times(0,1)$. In this benchmark, the flow pattern features a Mach 10 shock in air which initially makes a $60^{\circ}$ angle with a reflecting wall. 
The following pre-shock and post-shock values of the flow variables are used $$\begin{bmatrix} \varrho_L \\v_{x,L}\\v_{y,L}\\p_L \end{bmatrix}= \begin{bmatrix}8.0\\\phantom{-}8.25\cos({30}^{\circ})\\-8.25\sin({30}^{\circ})\\116.5 \end{bmatrix}, \quad \begin{bmatrix} \varrho_R \\v_{x,R}\\v_{y,R}\\p_R \end{bmatrix}= \begin{bmatrix}1.4\\0.0\\0.0\\1.0 \end{bmatrix}.$$ Initially, the post-shock values (subscript L) are prescribed in the subdomain $\Omega_L=\{(x,y)\;|\;x<\frac{1}{6}+\frac{y}{\sqrt{3}}\}$ and the pre-shock values (subscript R) in $\Omega_R = \Omega \setminus \Omega_L$. The reflecting wall corresponds to $1/6 \leq x \leq 4$ and $y=0$. No boundary conditions are required along the line $x=4$. On the rest of the boundary, the post-shock conditions are prescribed for $x<\frac{1}{6}+\frac{1+20t}{\sqrt{3}}$ and the pre-shock conditions elsewhere. The so-defined values along the top boundary describe the exact motion of the initial Mach 10 shock. In Fig. [\[fig:dmr\]](#fig:dmr){reference-type="ref" reference="fig:dmr"}, we present snapshots of the density distribution at the final time $t=0.2$ obtained using $E_h=192\cdot48$ elements and $p=2$. The LO approximation exhibits strong numerical diffusion, resulting in a poor resolution of the interacting shock waves. Fig. [34](#fig:dmrweno){reference-type="ref" reference="fig:dmrweno"} confirms that the WENO approximation introduces sufficient amounts of numerical dissipation to suppress spurious oscillations. We remark that a relatively coarse mesh was used for this benchmark.
![LO, $u_h \in [1.400,19.691]$](results/dmrlo.png){#fig:dmrlo width="\\linewidth"} ![WENO, $u_h \in [1.400,20.077]$](results/dmrweno.png){#fig:dmrweno width="\\linewidth"}
# Conclusions {#sec:concl}
We have shown that the methodology developed in [@kuzmin2023a] for stabilization of continuous Galerkin methods can be extended to and is ideally suited for RKDG discretizations of hyperbolic problems. By introducing low-order nonlinear numerical dissipation based on the relative differences between reconstructed candidate polynomials and the underlying DG approximation, we achieve high-order accuracy while suppressing spurious oscillations in the vicinity of discontinuities. Our algorithm exhibits a modular structure similar to the CG version presented in [@kuzmin2023a]. This modular design allows for easy customization of individual components, including the stabilization operator, smoothness sensor, and WENO reconstruction procedure. While our current smoothness sensor is piecewise constant, we aim to develop smoothness sensors inheriting a polynomial structure within each element. This modification should further improve the accuracy of our scheme, especially when dealing with coarse meshes. Theoretical studies of the WENO-based stabilization operator were performed in [@kuzmin2023a] in the CG context. We envisage that this preliminary analysis can be readily extended to the DG version.
**Acknowledgments.** The development of the proposed methodology was sponsored by the German Research Foundation (DFG) under grant KU 1530/23-3.
--- abstract: | We consider polynomial approximations of $\bar{z}$ to better understand the torsional rigidity of polygons. Our main focus is on low degree approximations and associated extremal problems that are analogous to Pólya's conjecture for torsional rigidity of polygons. We also present some numerics in support of Pólya's Conjecture on the torsional rigidity of pentagons. author: - Adam Kraus $\&$ Brian Simanek title: New Perspectives on Torsional Rigidity and Polynomial Approximations of z-bar --- **Keywords:** Torsional Rigidity, Bergman Analytic Content, Symmetrization **Mathematics Subject Classification:** Primary 41A10; Secondary 31A35, 74P10 # Introduction {#Intro} Let $\Omega\subseteq\mathbb{C}$ be a bounded and simply connected region whose boundary is a Jordan curve. We will study the *torsional rigidity* of $\Omega$ (denoted $\rho(\Omega)$), which is motivated by engineering problems about a cylindrical beam with cross-section $\Omega$. One can formulate this quantity mathematically for simply connected regions by the following variational formula of Hadamard type $$\begin{aligned} \label{rhodef} \rho(\Omega):=\sup_{u\in C^1_0(\bar{\Omega})}\frac{4\left(\int_{\Omega}u(z)dA(z)\right)^2}{\int_{\Omega}|\nabla u(z)|^2dA(z)},\end{aligned}$$ where $dA$ denotes area measure on $\Omega$ and $C^1_0(\bar{\Omega})$ denotes the set of all continuously differentiable functions on $\Omega$ that vanish on the boundary of $\Omega$ (see [@PS] and also [@BFL; @Makai]). The following basic facts are well known and easy to verify: - for any $c\in\mathbb{C}$, $\rho(\Omega+c)=\rho(\Omega)$, - for any $r\in\mathbb{C}$, $\rho(r\Omega)=|r|^4\rho(\Omega)$, - if $\Omega_1$ and $\Omega_2$ are simply connected and $\Omega_1\subseteq\Omega_2$, then $\rho(\Omega_1)\leq\rho(\Omega_2)$, - if $\mathbb{D}=\{z:|z|<1\}$, then $\rho(\mathbb{D})=\pi/2$. There are many methods one can use to estimate the torsional rigidity of the region $\Omega$ (see [@Mush; @Sok]). For example, one can use the Taylor coefficients for a conformal bijection between the unit disk and the region (see [@PS pages 115 $\&$ 120] and [@Sok Section 81]), the Dirichlet spectrum for the region (see [@PS page 106]), or the expected lifetime of a Brownian Motion (see [@BVdbC Equations 1.8 and 1.11] and [@HMP]). These methods are difficult to apply in general because the necessary information is rarely available. More recently, Lundberg et al. proved that since $\Omega$ is simply connected, it holds that $$\label{fzapp} \rho(\Omega)=\inf_{f\in A^2(\Omega)}\int_{\Omega}|\bar{z}-f|^2dA(z),$$ where $A^2(\Omega)\subseteq L^2(\Omega,dA)$ is the Bergman space of $\Omega$ (see [@FL17]). The right-hand side of [\[fzapp\]](#fzapp){reference-type="eqref" reference="fzapp"} is the square of the Bergman analytic content of $\Omega$, which is the distance from $\bar{z}$ to $A^2(\Omega)$ in $L^2(\Omega,dA)$. This formula was subsequently used extensively in [@FS19] to calculate the approximate torsional rigidity of various regions. To understand their calculations, let $\{p_n\}_{n=0}^{\infty}$ be the sequence of Bergman orthonormal polynomials, which are orthonormal in $A^2(\Omega)$. By [@Farrell Theorem 2] we know that $\{p_n(z;\Omega)\}_{n\geq0}$ is an orthonormal basis for $A^2(\Omega)$ (because $\Omega$ is a Jordan domain) and hence $$\rho(\Omega)= \int_{\Omega}|z|^2dA-\sum_{n=0}^{\infty}|\langle1,wp_n(w)\rangle|^2$$ (see [@FS19]). 
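As a quick numerical illustration of [\[fzapp\]](#fzapp){reference-type="eqref" reference="fzapp"} and of the orthonormal expansion above, the following self-contained Python sketch estimates the squared distance from $\bar{z}$ to the polynomials of degree at most $N$ in $L^2(\Omega,dA)$ by Monte Carlo sampling of the region and a least-squares fit. This is only an illustrative check under the stated sampling assumptions, not the computational approach of [@FS19]; the function names are made up for this sketch. For the unit disk, $\bar{z}$ is orthogonal in $L^2(\mathbb{D},dA)$ to every polynomial, so the estimate should return $\int_{\mathbb{D}}|z|^2dA=\rho(\mathbb{D})=\pi/2$ up to sampling error, for every $N$.

```python
import numpy as np

def polynomial_content_squared(in_domain, bounding_box, N, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the squared distance from conj(z) to the span of
    1, z, ..., z^N in L^2(Omega, dA), via a least-squares fit on sampled points."""
    rng = np.random.default_rng(seed)
    (xmin, xmax), (ymin, ymax) = bounding_box
    x = rng.uniform(xmin, xmax, n_samples)
    y = rng.uniform(ymin, ymax, n_samples)
    z = (x + 1j * y)[in_domain(x, y)]
    cell = (xmax - xmin) * (ymax - ymin) / n_samples   # area weight per sample
    V = np.column_stack([z ** k for k in range(N + 1)])
    coeffs, *_ = np.linalg.lstsq(V, np.conj(z), rcond=None)
    residual = np.conj(z) - V @ coeffs
    return cell * np.sum(np.abs(residual) ** 2)

if __name__ == "__main__":
    disk = lambda x, y: x ** 2 + y ** 2 < 1.0
    print(polynomial_content_squared(disk, ((-1.0, 1.0), (-1.0, 1.0)), N=5))
    print(np.pi / 2)   # exact value of rho(D) for the unit disk
```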
Thus, one can approximate $\rho(\Omega)$ by calculating $$\rho_N(\Omega):=\int_{\Omega}|z|^2dA-\sum_{n=0}^{N}|\langle1,wp_n(w)\rangle|^2$$ for some finite $N\in\mathbb{N}$. Let us use $\mathcal{P}_n$ to denote the space of polynomials of degree at most $n$. Notice that $\rho_N(\Omega)$ is the square of the distance from $\bar{z}$ to $\mathcal{P}_N$ in $L^2(\Omega,dA)$. For this reason, and in analogy with the terminology of Bergman analytic content, we shall say that $\mbox{dist}(\bar{z},\mathcal{P}_N)$ is the *Bergman $N$-polynomial content* of the region $\Omega$. The calculation of $\rho_N(\Omega)$ is a manageable task in many applications, as was demonstrated in [@FS19]. One very useful fact is that $\rho_N(\Omega)\geq\rho(\Omega)$, so these approximations are always overestimates (a fact that was also exploited in [@FS19]). Much of the research around torsional rigidity of planar domains focuses on extremal problems and the search for maximizers under various constraints. For example, Saint-Venant conjectured that among all simply connected Jordan regions with area $1$, the disk has the largest torsional rigidity. This conjecture has since been proven and is now known as Saint-Venant's inequality (see [@Pol], [@PS page 121], and also [@BFL; @OR]). It has been conjectured that the $n$-gon with area $1$ having maximal torsional rigidity is the regular $n$-gon (see [@Pol]). This conjecture remains unproven for $n\geq5$. It was also conjectured in [@FS19] that among all right triangles with area $1$, the one that maximizes torsional rigidity is the isosceles right triangle. This was later proven in a more general form by Solynin in [@Sol20]. Additional results related to optimization of torsional rigidity can be found in [@Lipton; @SZ; @vdBBV]. The formula [\[fzapp\]](#fzapp){reference-type="eqref" reference="fzapp"} tells us that maximizing $\rho$ within a certain class of Jordan domains means finding a domain on which $\bar{z}$ is not well approximable by analytic functions (see [@FK]). This suggests that the Schwarz function of a curve is a relevant object. For example, on the real line, $f(z)=z$ satisfies $f(z)=\bar{z}$ and hence we can expect that any region that is always very close to the real line will have small torsional rigidity. Similar reasoning can be applied to other examples and one can interpret [\[fzapp\]](#fzapp){reference-type="eqref" reference="fzapp"} as a statement relating the torsional rigidity of $\Omega$ to a similarity between $\Omega$ and an analytic curve with a Schwarz function. Some of the results from [@Makai] are easily understood by this reasoning. The quantities $\rho_N$, defined above, suggest an entirely new class of extremal problems related to torsional rigidity. In this vein, we formulate the following conjecture, which generalizes Pólya's conjecture: **Conjecture 1**. For any $n,N\in\mathbb{N}$ with $n\geq3$, the convex $n$-gon of area $1$ that maximizes the Bergman $N$-polynomial content is the regular $n$-gon. We will see by example why we need to include convexity in the hypotheses of this conjecture (see Theorem [Theorem 3](#nomax){reference-type="ref" reference="nomax"} below). The most common approach to proving conjectures of the kind we have mentioned is through symmetrization. Indeed, one can prove the Saint-Venant conjecture, as well as Pólya's conjecture for triangles and quadrilaterals, through the use of Steiner symmetrization.
This process chooses a line $\ell$ and then replaces the intersection of $\Omega$ with every perpendicular $\ell'$ to $\ell$ by a line segment contained in $\ell'$, centered on $\ell$, and having length equal to the $1$-dimensional Lebesgue measure of $\ell'\cap\Omega$. This procedure results in a new region $\Omega'$ with $\rho(\Omega')\geq\rho(\Omega)$. Applications of this method and other symmetrization methods to torsional rigidity can be found in [@Sol20]. The rest of the paper presents significant evidence in support of Conjecture [Conjecture 1](#rhon){reference-type="ref" reference="rhon"} and also the Pólya Conjecture for $n=5$. The next section will explain the reasoning behind Conjecture [Conjecture 1](#rhon){reference-type="ref" reference="rhon"} by showing that many optimizers of $\rho_N$ exhibit as much symmetry as possible, though we will see that Steiner symmetrization does not effect $\rho_N$ the same way it effects $\rho$. In Section [3](#pentnum){reference-type="ref" reference="pentnum"}, we will present numerical evidence in support of Pólya's Conjecture for pentagons by showing that among all equilateral pentagons with area $1$, the one with maximal torsional rigidity must be very nearly the regular one. # New Conjectures and Results {#new} Let $\Omega$ be a simply connected Jordan region in the complex plane (or the $xy$-plane). Our first conjecture asserts that there is an important difference between the Bergman analytic content and the Bergman $N$-polynomial content. We state it as follows. **Conjecture 2**. For each $N\in\mathbb{N}$, there is an $n\in\mathbb{N}$ so that among all $n$-gons with area $1$, $\rho_N$ has no maximizer. We will provide evidence for this conjecture by proving the following theorem, which shows why we included the convexity assumption in Conjecture [Conjecture 1](#rhon){reference-type="ref" reference="rhon"}. **Theorem 3**. *Among all hexagons with area $1$, $\rho_1$ and $\rho_2$ have no maximizer.* Before we prove this result, let us recall some notation. We define the moments of area for $\Omega$ as in [@FS19] by $$I_{m,n}:=\int_\Omega x^my^ndxdy,\qquad\qquad\qquad m,n\in\mathbb{N}_0.$$ In [@FS19] it was shown that if the centroid of $\Omega$ is $0$, then $$\label{rho1form} \rho_1(\Omega)=4\frac{I_{2,0}I_{0,2}-I_{1,1}^2}{I_{2,0}+I_{0,2}}$$ (see also [@DW]). One can write down a similar formula for $\rho_2(\Omega)$, which is the content of the following proposition. **Proposition 4**. *Let $\Omega$ be a simply connected, bounded region of area 1 in $\mathbb{C}$ whose centroid is at the origin. 
Then $\rho_2(\Omega)=4\Big(I_ {0,4} I_ {1,1}^2 - 4 I_ {1,1}^4 - 2 I_ {0,3} I_ {1,1} I_ {1,2} + I_ {0,2}^3 I_ {2, 0} + I_ {0,3}^2 I_ {2,0} + 4 I_ {1,2}^2 I_ {2 ,0} - I_ {1,1}^2 I_ {2,0}^2 - I_ {0, 2}^2 (I_ {1,1}^2 + 2 I_ {2,0}^2) - 6 I_ {1,1} I_ {1,2} I_ {2,1} - 2 I_ {0,3} I_ {2,0} I_ {2,1} + I_ {2,0} I_ {2,1}^2 + 2 I_ {1,1}^2 I_ {2,2} + 2 I_ {0,3} I_ {1,1} I_ {3,0} - 2 I_ {1,1} I_ {2,1} I_ {3,0} + I_ {0,2} (4 I_ {2,1}^2 + (I_ {1,2} - I_ {3,0})^2 + I_ {2,0} (-I_ {0,4} + 6 I_ {1,1}^2 + I_ {2,0}^2 - 2 I_ {2,2} - I_ {4,0})) + I_ {1,1}^2 I_ {4, 0}\Big)\Big/\Big((I_ {0,3} + I_ {2,1})^2 + (I_ {1,2} + I_ {3,0})^2 + (I_ {0,2} + I_ {2,0}) (-I_ {0,4} + 4 I_ {1,1}^2 + (I_ {0,2} - I_ {2,0})^2 - 2 I_ {2,2} - I_ {4,0})\Big)$* *Proof.* It has been shown in [@FS19] that $$\label{rho2det} |\langle 1,wp_2(w)\rangle |^2= \begin{vmatrix} c_{0,0}&c_{0,1}&c_{0,2} \\ c_{1,0}&c_{1,1}&c_{1,2} \\ c_{0,1}&c_{0,2}&c_{0,3} \end{vmatrix}^2\Bigg/\left(\begin{vmatrix} c_{0,0}&c_{1,0} \\ c_{0,1}&c_{1,1} \end{vmatrix}\cdot\begin{vmatrix} c_{0,0}&c_{0,1}&c_{0,2} \\ c_{1,0}&c_{1,1}&c_{1,2} \\ c_{2,0}&c_{2,1}&c_{2,2} \end{vmatrix}\right)$$ where $$c_{m,n}=\langle z^m,z^n\rangle=\int_\Omega z^m\bar z^ndA(z)$$ We can then write $$\begin{aligned} \rho_2(\Omega)&=\rho_1(\Omega)-|\langle 1,wp_2(w;\Omega)\rangle|^2\\ &=4\frac{I_{2,0}I_{0,2}-I_{1,1}^2}{I_{2,0}+I_{0,2}}-|\langle 1,wp_2(w;\Omega)\rangle|^2\end{aligned}$$ If one calculates $|\langle 1,wp_2(w;\Omega)\rangle|^2$ using [\[rho2det\]](#rho2det){reference-type="eqref" reference="rho2det"}, one obtains the desired formula for $\rho_2({\Omega})$. ◻ *Proof of Theorem [Theorem 3](#nomax){reference-type="ref" reference="nomax"}.* We will rely on the formula [\[rho1form\]](#rho1form){reference-type="eqref" reference="rho1form"} in our calculations. To begin, fix $a>0$ and construct a triangle with vertices $(-\epsilon,0)$, $(\epsilon/2,\frac{\epsilon\sqrt{3}}{2})$, and $(-\epsilon/2,\frac{\epsilon\sqrt{3}}{2})$, where $\epsilon=\frac{2}{3a\sqrt{3}}$. Consider also the set of points $S=\{(-a/2,\frac{a\sqrt{3}}{2}),(a,0),(-a/2,-\frac{a \sqrt{3}}{2})\}$. To each side of our triangle, append another triangle whose third vertex is in the set $S$, as shown in Figure 1. Let this resulting "windmill\" shaped region be denoted by $\Gamma_a$. ![*The region $\Gamma_a$ from the proof of Theorem [Theorem 3](#nomax){reference-type="ref" reference="nomax"}.*](Rh1NoMax.png){#label1} To calculate the moments of area, we first determine the equations of the lines that form the boundary of this region. Starting with $C_1(x,a)$ in the 3rd quadrant and moving clockwise we have: $$\begin{aligned} C_1(x,a)&=x\sqrt{3}-\frac{6(2x+a)}{2\sqrt{3}+9a^2}\\ C_2(x,a)&=\frac{3a(2+3ax\sqrt{3})}{-4\sqrt{3}+9a^2}\\ C_3(x,a)&=\frac{3a(2+3ax\sqrt{3})}{4\sqrt{3}-9a^2}\\ C_4(x,a)&=-x\sqrt{3}+\frac{6(2x+a)}{2\sqrt{3}+9a^2}\\ C_5(x,a)&=\frac{3(x-a)}{\sqrt{3}-9a^2}\\ C_6(x,a)&=\frac{-3(x-a)}{\sqrt{3}-9a^2}\end{aligned}$$ To determine $\rho_1(\Gamma_a)$, we calculate the terms $I_{2,0},I_{0,2},$ and $I_{1,1}$ with boundaries determined by the lines given above. 
Thus for $m,n\in\{0,1,2\}$ we have $$\begin{aligned} I_{m,n}(\Gamma_a)&=\int_{-\epsilon}^{\epsilon/2}\int_{C_1}^{C_4} x^my^n dydx + \int_{\epsilon/2}^a\int_{C_6}^{C_5} x^my^n dydx + \int_{-a/2}^{-\epsilon}\int_{C_3}^{C_4}x^my^n dydx\\ &\hspace{17mm}+ \int_{-a/2}^{-\epsilon}\int_{C_1}^{C_2}x^my^n dydx \end{aligned}$$ These are straightforward double integrals and after some simplification, we obtain $$\rho_1(\Gamma_a)=4\frac{I_{2,0}I_{0,2}-I_{1,1}^2}{I_{2,0}+I_{0,2}}=\frac{1}{162}\left(3\sqrt{3}+\frac{4}{a^2}+27a^2\right)$$ and $$\rho_2(\Gamma_a)=\frac{1}{1620}\left(3\sqrt{3}+\frac{4}{a^2}+27a^2(1+90/(27a^4-6\sqrt{3}a^2+4))\right),$$ where we used the formula from Proposition [Proposition 4](#rho2def){reference-type="ref" reference="rho2def"} to calculate this last expression. Notice that we have constructed $\Gamma_a$ so that the area of $\Gamma_a$ is $1$ for all $a>0$. Thus, as $a\rightarrow\infty$, it holds that $\rho_j(\Gamma_a)\rightarrow\infty$ for $j=1,2$. ◻ Theorem [Theorem 3](#nomax){reference-type="ref" reference="nomax"} has an important corollary, which highlights how the optimization of $\rho_N$ is fundamentally different from the optimization of $\rho$. **Corollary 5**. *Steiner symmetrization need not increase $\rho_1$ or $\rho_2$.* *Proof.* If we again consider the region $\Gamma_a$ from the proof of Theorem [Theorem 3](#nomax){reference-type="ref" reference="nomax"}, we see that if we Steiner symmetrize this region with respect to the real axis, then the symmetrized version is a thin region that barely deviates from the real axis (as $a$ becomes large). Thus, $\bar{z}$ is approximately equal to $z$ in this region and one can show that $\rho_1$ of the symmetrized region remains bounded as $a\rightarrow\infty$. Since $\rho_2\leq\rho_1$, the same holds true for $\rho_2$. ◻ Our next several theorems will be about triangles. For convenience, we state the following basic result, which can be verified by direct computation. **Proposition 6**. *Let $\Delta$ be the triangle with vertices $(x_1,y_1),(x_2,y_2),(x_3,y_3)$ and centroid $\vec{c}$. Then $$\vec{c}=\left(\frac{x_1+x_2+x_3}{3},\frac{y_1+y_2+y_3}{3}\right)$$* For the following results, we define the *base* of an isosceles triangle as the side whose adjacent sides have equal length to each other. In the case of an equilateral triangle, any side may be considered as the base. Here is our first open problem about maximizing $\rho_N$ for certain fixed collections of triangles. **Problem 7**. For each $N\in\mathbb{N}$ and $a>0$, find the triangle with area $1$ and fixed side length $a$ that maximizes $\rho_N$. Given the prevalence of symmetry in the solution to optimization problems, one might be tempted to conjecture that the solution to Problem [Problem 7](#fixedbasen){reference-type="ref" reference="fixedbasen"} is the isosceles triangle with base $a$. This turns out to be true for $\rho_1$, but it is only true for $\rho_2$ for some values of $a$. Indeed, we have the following result. **Theorem 8**. *(i) Among all triangles with area 1 and a fixed side of length $a$, the isosceles triangle with base $a$ maximizes $\rho_1$.* *(ii) Let $t_*$ be the unique positive root of the polynomial $$999x^4/64-93x^3-664x^2-5376x-9216.$$ If $0<a\leq t_*^{1/4}$, then among all triangles with area 1 and a fixed side of length $a$, the isosceles triangle with base $a$ maximizes $\rho_2$.
If $a>t_*^{1/4}$, then among all triangles with area 1 and a fixed side of length $a$, the isosceles triangle with base $a$ does not maximize $\rho_2$.* *Proof.* Let $\hat{\Omega}$ be an area-normalized triangle with fixed side length $a$. We begin by proving part (i). As $\rho_1$ is rotationally invariant, we may position $\hat{\Omega}$ so that the side of fixed length is parallel to the $y$-axis as seen in Figure 2. Denote vertex $\hat{A}$ as the origin, $\hat{B}$ as the point $(0,a)$, and $\hat{C}$ as the point $(-2/a,\lambda)$. ![*An area-normalized triangle $\hat{\Omega}$ with variable $\lambda$ and fixed base length $a$.*](IsoscelesMax.png){#label4} Notice as $\lambda$ varies, the vertex $\hat{C}$ stays on the line $x=-\frac{2}{a}$ in order to preserve area-normalization. If we define $$\label{txty} T_x:=\int_{\hat{\Omega}}xdA\qquad\qquad\mbox{and}\qquad\qquad T_y:=\int_{\hat{\Omega}}ydA$$ then we may translate our triangle to obtain a new triangle $\Omega$ with vertices $A$, $B$, and $C$ given by $$\begin{aligned} A&=\left(-T_x,-T_y\right)\\ B&=\left(-T_x,a-T_y\right)\\ C&=\left(-\frac{2}{a}-T_x,\lambda-T_y\right)\end{aligned}$$ which has centroid zero (see Figure 3). ![*The area-normalized triangle $\Omega$ as pictured is a translation of $\hat\Omega$, with variable $\lambda$, fixed base length $a$, and whose centroid is the origin.*](IsoscelesMax2.png){#label5} If we define $$\begin{aligned} \ell_1&=\lambda-\frac{\lambda a}{2}\left(x+\frac{2}{a}\right),\qquad\qquad\ell_2=\lambda+\frac{a^2-a\lambda}{2}\left(x+\frac{2}{a}\right),\end{aligned}$$ by recalling our formula for the moments of area, we have $$I_{m,n}(\Omega)=\int_{-\frac{2}{a}-T_x}^{-T_x}\int_{\ell_1}^{\ell_2}x^my^ndydx$$ We can now calculate $\rho_1$ using [\[rho1form\]](#rho1form){reference-type="eqref" reference="rho1form"} to obtain $$\rho_1(\Omega)=\frac{2a^2}{3(4+a^2(a^2-a\lambda+\lambda^2))}$$ By taking the first derivative with respect to $\lambda$ of equation (5) we obtain $$\frac{d}{d\lambda}\left[\rho_1(\Omega)\right]=\frac{2a^4(a-2\lambda)}{3(4+a^2(a^2-a\lambda+\lambda^2))^2}$$ Thus, the only critical point is when $\lambda=\frac{a}{2}$, when the $y$-coordinate of the vertex $\hat{C}$ is at the midpoint of our base. To prove part (ii), we employ the same method, but use the formula from Proposition [Proposition 4](#rho2def){reference-type="ref" reference="rho2def"}. After a lengthy calculation, we find a formula $$\frac{d}{d\lambda}\left[\rho_2(\Omega)\right]=\frac{P(\lambda)}{Q(\lambda)}$$ for explicit polynomials $P$ and $Q$. The polynomial $Q$ is always positive, so we will ignore that when finding critical points. By inspection, we find that we can write $$P(\lambda)=(\lambda-a/2)S(\lambda),$$ where $S(\lambda+a/2)$ is an even polynomial. Again by inspection, we find that every coefficient of $S(\lambda+a/2)$ is negative, except the constant term, which is $$999a^{20}/64-93a^{16}-664a^{12}-5376a^8-9216a^4.$$ Thus, if $0<a<t_*^{1/4}$, then this coefficient is also negative and therefore $S(\lambda+a/2)$ does not have any positive zeros (and therefore does not have any real zeros since it is an even function of $\lambda$). If $a>t_*^{1/4}$, then $S(\lambda+a/2)$ does have a unique positive zero and it is easy to see that it is a local maximum of $\rho_2$ (and the zero of $P$ at $a/2$ is a local minimum of $\rho_2$). 
◻ We remark that the number $t_*^{1/4}$ from Theorem [Theorem 8](#steinertri){reference-type="ref" reference="steinertri"} is approximately $1.86637\ldots$ and $\sqrt{3}=1.73205\ldots$, so Theorem [Theorem 8](#steinertri){reference-type="ref" reference="steinertri"} does not disprove Conjecture [Conjecture 1](#rhon){reference-type="ref" reference="rhon"}. In Figure 4, we have plotted $\rho_2$ as a function of $\lambda$ when $a=1$. We see that the maximum is attained when $\lambda=1/2$. In Figure 5, we have plotted $\rho_2$ as a function of $\lambda$ when $a=3$. We see that $\lambda=3/2$ is a local minimum and the maximum is actually attained when $\lambda=3/2\pm0.86508\ldots$. ![*$\rho_2$ of a triangle with area $1$ and fixed side length $1$.*](rho2-1.pdf){#rho2-1.pdf} ![*$\rho_2$ of a triangle with area $1$ and fixed side length $3$.*](rho2-3.pdf){#label7} We can now prove the following corollary, which should be interpreted in contrast to Corollary [Corollary 5](#nosteiner){reference-type="ref" reference="nosteiner"}. **Corollary 9**. *Given an arbitrary triangle of area 1, Steiner symmetrization performed parallel to one of the sides increases $\rho_1$. Consequently, the equilateral triangle is the unique maximizer of $\rho_1$ among all triangles of fixed area.* *Proof.* We saw in the proof of Theorem [Theorem 8](#steinertri){reference-type="ref" reference="steinertri"} that if a triangle has any two sides not equal, then we may transform it in a way that increases $\rho_1$. The desired result now follows from the existence of a triangle that maximizes $\rho_1$. ◻ We can also consider a related problem of maximizing $\rho_N$ among all triangles with one fixed angle. To this end, we formulate the following conjecture, which is analogous to Problem [Problem 7](#fixedbasen){reference-type="ref" reference="fixedbasen"}. If true, it would be an analog of results in [@Sol20] for the Bergman $N$-polynomial content. **Conjecture 10**. For each $N\in\mathbb{N}$ and $\theta\in(0,\pi)$, the triangle with area $1$ and fixed interior angle $\theta$ that maximizes $\rho_N$ is the isosceles triangle with area $1$ and interior angle $\theta$ opposite the base. The following theorem provides strong evidence that Conjecture [Conjecture 10](#fixedanglen){reference-type="ref" reference="fixedanglen"} is true. **Theorem 11**. *Among all triangles with area 1 and fixed interior angle $\theta$, the isosceles triangle with interior angle $\theta$ opposite the base maximizes $\rho_1$ and $\rho_2$.* *Proof.* Let $\Omega$ be an area-normalized triangle with fixed interior angle $\theta$, centroid zero, and side length $a$ adjacent to our angle $\theta$. As $\rho_N$ is rotationally invariant, let us position $\Omega$ so that the side of length $a$ runs parallel to the $x$-axis. First, let us consider the triangle $\hat\Omega$ which is a translation of $\Omega$, so that the corner of $\hat\Omega$ with angle $\theta$, say vertex $A$, lies at the origin. Define $(T_x,T_y)$ as in [\[txty\]](#txty){reference-type="eqref" reference="txty"}. By translating the entire region $\hat\Omega$ by its centroid we obtain the previously described region $\Omega$, now with centroid zero, as pictured in Figure 6.
![*A triangle $\Omega$ with fixed angle $\theta$, variable side length $a$, area 1, and centroid zero.*](Triangle4.png){#Triangle4.png} Our region $\Omega$ is now a triangle with centroid $0$ having vertices $A$, $B$, and $C$ given by $$\begin{aligned} A&=(-T_x,-T_y)\\ B&=(-a-T_x,-T_y)\\ C&=\left(\frac{-2}{a\tan\theta}-T_x,\frac{2}{a}-T_y\right)\end{aligned}$$ We can now use [\[rho1form\]](#rho1form){reference-type="eqref" reference="rho1form"} to calculate $$\rho_1(\Omega)=\frac{2a^2}{3a^4-6a^2\cot\theta+12\csc^2\theta}$$ By taking the first derivative with respect to $a$ of this expression we obtain $$\frac{d}{da}\left[{\rho_1(\Omega)}\right]=\frac{-4a(a^4-4\csc^2\theta)}{3(a^4-2a^2\cot\theta+4\csc^2\theta)^2}$$ Thus, the only critical point of $\rho_1$ is $a=\sqrt{2\csc\theta}$ and this point is a local maximum. We conclude our proof by observing that $a=\sqrt{2\csc\theta}$ is the side length of the area-normalized isosceles triangle with interior angle $\theta$ opposite the base. The calculation for $\rho_2$ follows the same basic strategy, albeit with lengthier calculations. In this case, we calculate $$\frac{d}{da}\left[{\rho_2(\Omega)}\right]=\frac{Q_{\theta}(a)}{P_{\theta}(a)}$$ for explicitly computable functions $Q_{\theta}$ and $P_{\theta}$, which are polynomials in $a$ and have coefficients that depend on $\theta$. The function $P_{\theta}(a)$ is positive for all $a>0$, so the zeros will be the zeros of $Q_{\theta}$. One can see by inspection that $Q_{\theta}(\sqrt{2\csc\theta})=0$, so let us consider $$S_{\theta}(a)=Q_{\theta}(a+\sqrt{2\csc\theta}).$$ Then $S_{\theta}(0)=0$ and there is an obvious symmetry to these triangles that shows the remaining real zeros of $S_{\theta}$ must come in pairs with a positive zero corresponding to a negative one. Thus, it suffices to rule out any positive zeros of $S_{\theta}$. This is done with Descartes' Rule of Signs, once we notice that all coefficients of $S_{\theta}$ are negative. For example, one finds that the coefficient of $a^7$ in $S_{\theta}(a)$ is equal to $$-12288\csc^7\theta(140\cos(4\theta)-3217\cos(3\theta)+25010\cos(2\theta)-82016\cos(\theta)+70136)$$ One can plot this function to verify that it is indeed negative for all $\theta\in[0,\pi]$. Similar elementary calculations can be done with all the other coefficients in the formula for $S_{\theta}(a)$, but they are too numerous and lengthy to present here. The end result is the conclusion that $a=\sqrt{2\csc\theta}$ is the unique positive critical point of $\rho_2$ and hence must be the global maximum, as desired. ◻ We can now prove the following result, which is a natural follow-up to Corollary [Corollary 9](#equimax1){reference-type="ref" reference="equimax1"}. **Corollary 12**. *The equilateral triangle is the unique maximizer of $\rho_2$ among all triangles of fixed area.* *Proof.* We saw in the proof of Theorem [Theorem 11](#fixedangle){reference-type="ref" reference="fixedangle"} that if a triangle has any two sides not equal, then we may transform it in a way that increases $\rho_2$. The desired result now follows from the existence of a triangle that maximizes $\rho_2$. ◻ The same proof shows that Corollary [Corollary 9](#equimax1){reference-type="ref" reference="equimax1"} is also a corollary of Theorem [Theorem 11](#fixedangle){reference-type="ref" reference="fixedangle"}. # Numerics on Torsional Rigidity for Pentagons {#pentnum} Here we present numerical evidence in support of Pólya's conjecture for pentagons.
In particular, we will consider only equilateral pentagons and show that in this class, the maximizer of torsional rigidity must be very close to the regular pentagon (see [@CCGINPT21] for another computational approach to a similar problem). Our first task is to show that to every $\theta,\phi\in(0,\pi)$ satisfying $$\begin{aligned} (1-\cos(\theta)-\cos(\phi))^2+&(\sin(\theta)-\sin(\phi))^2\leq 4,\\ \cos(\theta)&\leq1-\cos(\phi),\end{aligned}$$ there exists a unique equilateral pentagon of area $1$ with adjacent interior angles $\theta$ and $\phi$ (where the uniqueness is interpreted modulo rotation, translation, and reflection). ![*A pentagon constructed with vertices $V_1$, $V_2$, and interior angles $\theta$, $\phi$ as described below. There is exactly one point on the perpendicular bisector of $\overline{V_1V_2}$ for which our pentagon is equilateral.*](Pentagon_Construction.png){#Pentagon.png} To see this, construct a pentagon with one side being the interval $[0,1]$ in the real axis. Form two adjacent sides of length $1$ with interior angles $\phi$ and $\theta$ by choosing vertices $V_1=(\cos(\theta),\sin(\theta))$ and $V_2=(1-\cos(\phi),\sin(\phi))$. Our conditions imply that $V_1$ lies to the left of $V_2$ and the distance between $V_1$ and $V_2$ is less than or equal to $2$. Thus, if we join each of $V_1$ and $V_2$ to an appropriate point on the perpendicular bisector of the segment $\overline{V_1V_2}$, we complete our equilateral pentagon with adjacent angles $\theta$ and $\phi$ (see Figure 7). Obtaining the desired area is now just a matter of rescaling. Using this construction, one can write down the coordinates of all five vertices, which are simple (but lengthy) formulas involving basic trigonometric functions in $\theta$ and $\phi$. It is then a simple matter to compute a double integral and calculate the area of the resulting pentagon, rescale by the appropriate factor and thus obtain an equilateral pentagon with area $1$ and the desired adjacent internal angles. One can then compute $\rho_N$ for arbitrary $N\in\mathbb{N}$ using the method of [@FS19] to estimate the torsional rigidity of such a pentagon. Theoretically, this is quite simple, but in practice this is a lengthy calculation. We were able to compute $\rho_{33}(\Omega)$ for a large collection of equilateral pentagons $\Omega$. Note that all interior angles in the regular pentagon are equal to $108$ degrees. We discretized the region $\theta,\phi\in[105,110]$ (in degrees) and calculated $\rho_{33}$ for each pentagon in this discretization. The results showed a clear peak near $(\theta,\phi)=(108,108)$, so we further discretized the region $\theta,\phi\in[107.5,108.5]$ (in degrees) into $400$ equally spaced grid points and computed $\rho_{33}$ for each of the $400$ pentagons in our discretization. We then interpolated the results linearly and the resulting plot is shown as the orange surface in Figure 8. ![*$\rho_{33}$ for a selection of equilateral pentagons with area $1$ having angles close to those of the regular pentagon.*](2a.jpg){#2a.jpg} The blue surface in Figure 8 is the plane at height 0.149429, which is the (approximate) torsional rigidity of the area normalized regular pentagon calculated by Keady in [@Keady]. Recall that every $\rho_N$ is an overestimate of $\rho$, so any values of $\theta$ and $\phi$ for which $\rho_{33}$ lies below this plane will not be the pentagon that maximizes $\rho$. 
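The construction just described is easy to put into code. The following minimal sketch (assuming NumPy; the helper name `equilateral_pentagon` and the labels for the base vertices are ours) builds the area-normalized equilateral pentagon from a pair of adjacent interior angles and can be used to confirm that the rescaled pentagon is indeed equilateral with area $1$.

```python
import numpy as np

def equilateral_pentagon(theta, phi):
    """Vertices (counterclockwise) of the equilateral pentagon of area 1 with
    adjacent interior angles theta and phi (in radians) at the two base
    vertices; theta, phi are assumed to satisfy the constraints displayed above."""
    base_left, base_right = np.array([0.0, 0.0]), np.array([1.0, 0.0])
    V1 = np.array([np.cos(theta), np.sin(theta)])      # adjacent to base_left
    V2 = np.array([1 - np.cos(phi), np.sin(phi)])      # adjacent to base_right
    d = np.linalg.norm(V2 - V1)
    # the fifth vertex: the point on the perpendicular bisector of V1V2 at
    # distance 1 from both V1 and V2, on the side away from the base
    normal = np.array([-(V2 - V1)[1], (V2 - V1)[0]]) / d
    apex = (V1 + V2) / 2 + np.sqrt(1 - (d / 2) ** 2) * normal
    P = np.array([base_left, base_right, V2, apex, V1])  # counterclockwise
    x, y = P[:, 0], P[:, 1]
    area = 0.5 * (np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    return P / np.sqrt(area)                             # rescale to area 1

P = equilateral_pentagon(np.radians(108), np.radians(108))  # regular pentagon
print(np.linalg.norm(P - np.roll(P, -1, axis=0), axis=1))   # equal side lengths
```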
Thus, if we take the value 0.149429 from [@Keady] as the exact value of the torsional rigidity of the regular pentagon with area $1$, we see that among all equilateral pentagons, the maximizer of $\rho$ will need to have two adjacent angles within approximately one third of one degree of $108$ degrees. This is extremely close to the regular pentagon, and of course the conjecture is that the regular pentagon is the maximizer. **Acknowledgements.** The second author graciously acknowledges support from the Simons Foundation through collaboration grant 707882. R. Bañuelos, M. van den Berg, and T. Carroll, *Torsional rigidity and expected lifetime of Brownian motion*, J. London Math. Soc. (2) 66 (2002), no. 2, 499--512. S. Bell, T. Ferguson, and E. Lundberg, *Self-commutators of Toeplitz operators and isoperimetric inequalities*, Math. Proc. R. Ir. Acad. 114A (2014), no. 2, 115--133. F. Calabrò, S. Cuomo, F. Giampaolo, S. Izzo, C. Nitsch, F. Piccialli, and C. Trombetti, *Deep learning for the approximation of a shape functional*, arXiv preprint (2021). J. B. Diaz and A. Weinstein, *The torsional rigidity and variational methods*, Amer. J. Math. 70 (1948), 107--116. O. J. Farrell, *On approximation to an analytic function by polynomials*, Bull. Amer. Math. Soc. 40 (1934), no. 12, 908--914. M. Fleeman and D. Khavinson, *Approximating $\bar{z}$ in the Bergman space*, in *Recent progress on operator theory and approximation in spaces of analytic functions*, Contemp. Math., 679 (2016), 79--90. M. Fleeman and E. Lundberg, *The Bergman analytic content of planar domains*, Comput. Methods Funct. Theory 17 (2017), no. 3, 369--379. M. Fleeman and B. Simanek, *Torsional rigidity and Bergman analytic content of simply connected regions*, Comput. Methods Funct. Theory 19 (2019), no. 1, 37--63. A. Hurtado, S. Markvorsen, and V. Palmer, *Torsional rigidity of submanifolds with controlled geometry*, Math. Ann. 344 (2009), no. 3, 511--542. G. Keady, *Steady slip flow of Newtonian fluids through tangential polygonal microchannels*, IMA J. Appl. Math. 86 (2021), no. 3, 547--564. R. Lipton, *Optimal fiber configurations for maximum torsional rigidity*, Arch. Ration. Mech. Anal. 144 (1998), no. 1, 79--106. E. Makai, *On the principal frequency of a membrane and the torsional rigidity of a beam*, Stanford Studies in Mathematics and Statistics (1962), 227--231. N. I. Muskhelishvili, *Some Basic Problems of the Mathematical Theory of Elasticity*, Noordhoff International Publishing, 1977. J.-F. Olsen and M. Reguera, *On a sharp estimate for Hankel operators and Putnam's inequality*, Rev. Mat. Iberoam. 32 (2016), no. 2, 495--510. G. Pólya, *Torsional rigidity, principal frequency, electrostatic capacity and symmetrization*, Quart. Appl. Math. 6 (1948), 267--277. G. Pólya and G. Szegö, *Isoperimetric Inequalities in Mathematical Physics*, Annals of Mathematics Studies, no. 27, Princeton University Press, Princeton, N. J., 1951. I. S. Sokolnikoff, *Mathematical Theory of Elasticity*, Summer Session for Advanced Instruction and Research in Mechanics, Brown University, 1941. A. Solynin, *Exercises on the theme of continuous symmetrization*, Comput. Methods Funct. Theory 20 (2020), no. 3--4, 465--509. A. Solynin and V. Zalgaller, *The inradius, the first eigenvalue, and the torsional rigidity of curvilinear polygons*, Bull. Lond. Math. Soc. 42 (2010), no. 5, 765--783. M. van den Berg, G. Buttazzo, and B.
Velichkov, *Optimization problems involving the first Dirichlet eigenvalue and the torsional rigidity*, New trends in shape optimization, Internat. Ser. Numer. Math. 166 (2015), 19--41.
arxiv_math
{ "id": "2309.16450", "title": "New Perspectives on Torsional Rigidity and Polynomial Approximations of\n z-bar", "authors": "Adam Kraus, Brian Simanek", "categories": "math.CA", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | Let $L^p(\mathbf{T})$ be the Lesbegue space of complex-valued functions defined in the unit circle $\mathbf{T}=\{z: |z|=1\}\subseteq \mathbb{C}$. In this paper, we address the problem of finding the best constant in the inequality of the form: $$\|f\|_{L^p(\mathbf{T})}\le A_{p,b} \|(|P_+ f|^2+b| P_{-} f|^2)^{1/2}\|_{L^p(\mathbf{T})}.$$ Here $p\in[1,2]$, $b>0$, and by $P_{-} f$ and $P_+ f$ are denoted co-analytic and analytic projection of a function $f\in L^p(\mathbf{T})$. The equality is \"attained\" for a quasiconformal harmonic mapping. The result extends a sharp version of M. Riesz conjugate function theorem of Pichorides and Verbitsky and some well-known estimates for holomorphic functions. address: Faculty of Natural Sciences and Mathematics, University of Montenegro, Cetinjski put b.b. 81000 Podgorica, Montenegro author: - David Kalaj title: On M. Riesz conjugate function theorem for harmonic functions --- [^1] # Introduction Let $\mathbf{U}$ denote the unit disk and $\mathbf{T}$ the unit circle in the complex plane. For $p>1$, we define the Hardy class $\mathbf{h}^p$ as the class of harmonic mappings $f=g+\bar h$, where $g$ and $h$ are holomorphic mappings defined on the unit disk $\mathbf{U},$ so that $$\|f\|_p=\|f\|_{\mathbf{h}^p}=\sup_{0<r< 1} M_p(f,r)<\infty,$$ where $$M_p(f,r)=\left(\int_{\mathbf{T}}|f(r\zeta)|^p d\sigma(\zeta)\right)^{1/p}.$$ Here $d\sigma(\zeta)=\frac{dt}{2\pi},$ if $\zeta=e^{it}\in \mathbf{T}$. The subclass of holomorphic mappings that belongs to the class $\mathbf{h}^p$ is denoted by $H^p$. If $f\in \mathbf{h}^p$, then it is well-known that there exists $$f(e^{it})=\lim_{r\to 1} f(re^{it}), a.e.$$ and $f\in L^p(\mathbf{T}).$ Then there hold $$\label{come}\|f\|^p_{{\mathbf{h}^p}}=\lim_{r\to 1}\int_{0}^{2\pi}|f(re^{it})|^p \frac{dt}{2\pi}= \int_{0}^{2\pi}|f(e^{it})|^p \frac{dt}{2\pi}.$$ Let $1<p<\infty$ and let $\overline p=\max\{p,p/(p-1)\}$. Verbitsky in [@ver] proved the following results. If $f=u+iv\in H^p$ and $v(0)=0$, then $$\label{ver}\sec(\pi/(2\overline p))\|v\|_p\leq\|f\|_p,$$ and $$\label{1ver}\|f\|_p\leq\csc(\pi/(2\overline p))\|u\|_p,$$ and both estimates are sharp. Those results improve the sharp inequality $$\label{pico}\|v\|_p\leq\cot(\pi/(2\overline p))\|u\|_p$$ found by S. Pichorides ([@pik]). For some related results see [@essen; @studia; @graf; @verb2]. Then those results have been extended by the author in [@tams] by proving the sharp inequalities $$\label{nes2}\left(\int_{\mathbf{T}}\left({|g|^2+|h|^2}\right)^{\frac{p}{2}}\right)^{1/p}\le c_p\left(\int_{\mathbf{T}} |g+\bar h|^p\right)^{1/p}$$ and $$\label{nes3}\left(\int_{\mathbf{T}} |g+\bar h|^p\right)^{1/p}\le d_p\left(\int_{\mathbf{T}}\left({|g|^2+|h|^2}\right)^{\frac{p}{2}}\right)^{1/p},$$ where $$c_p=\left(\sqrt{2}\sin\frac{\pi}{2\bar p}\right)^{-1},\text{ and }d_p= \sqrt{2}\cos\frac{\pi}{2\bar p},$$ and $f=g+\bar h\in \mathbf{h}^p$, $\Re(h(0)g(0))=0$, $1<p<\infty$. Then inequalities [\[nes2\]](#nes2){reference-type="eqref" reference="nes2"} and [\[nes3\]](#nes3){reference-type="eqref" reference="nes3"} imply [\[pico\]](#pico){reference-type="eqref" reference="pico"}, [\[1ver\]](#1ver){reference-type="eqref" reference="1ver"} and [\[ver\]](#ver){reference-type="eqref" reference="ver"}. As a byproduct, the author by using [\[nes2\]](#nes2){reference-type="eqref" reference="nes2"} proved a Hollenbeck-Verbitsky conjecture for the case $s=2$. 
Further, the inequality [\[nes2\]](#nes2){reference-type="eqref" reference="nes2"} has been extended by Marković and Melentijević in [@melmar] by finding the best constant $c_{p,s}$ in the following inequality $$\label{nes4}\left(\int_{\mathbf{T}}(|g|^s+|h|^s)^{p/s}\right)^{1/p}\le c_{p,s}\left(\int_{\mathbf{T}} |g+\bar h|^p\right)^{1/p},$$ for certain range of parameters $(p,s)$ including the case $(p,2)$. Then [\[nes4\]](#nes4){reference-type="eqref" reference="nes4"} has been extended by Melentijević in [@mel], proving in this way a Hollenbeck-Verbitsky conjecture for the case $s<\sec^2(\pi/(2p))$, $p\le 4/3$ or $p\geqslant 2$. So this problem remains open for the case $p\in[4/3,2]$. We will consider another related problem. We will extend the result of Verbitsky [\[ver\]](#ver){reference-type="eqref" reference="ver"} and the result of the author [\[nes3\]](#nes3){reference-type="eqref" reference="nes3"} (for the case $1\le p\le 2$). For a given $b>0$ and $p\in[1,2]$ we will find the best constant in the following inequality $$\label{nes32}\left(\int_{\mathbf{T}} |g+\bar h|^p\right)^{1/p}\le A_{p,b}\left(\int_{\mathbf{T}}\left({|g|^2+b|h|^2}\right)^{\frac{p}{2}}\right)^{1/p}.$$ The previous estimate is equivalent to the following estimate: $$\|f\|_{p}\le A_{p,b}\|(| P_+[f]|^2+|bP_{-}[f]|^2)^{\frac{p}{2}}\|_p,$$ where $f\in L^p(\mathbf{T})$ and $P_{+}$ and $P_{-}$ are analytic and co-analytic projection of a function $$f(e^{it})=\sum_{k=-\infty}^{+\infty} c_k e^{ik t},$$ i.e. $$P_{+}[f]=\sum_{k=0}^{+\infty} c_k e^{ik t}, \text{and} \ \ P_{-}[f]=\sum_{k=1}^{+\infty} c_{-k} e^{-ik t}.$$ Here $$c_k = \frac{1}{2\pi}\int_0^{2\pi} f(e^{it})e^{-ki t}dt.$$ We remark that all the inequalities [\[ver\]](#ver){reference-type="eqref" reference="ver"}, [\[pico\]](#pico){reference-type="eqref" reference="pico"}, [\[nes2\]](#nes2){reference-type="eqref" reference="nes2"}, [\[nes3\]](#nes3){reference-type="eqref" reference="nes3"}, [\[nes4\]](#nes4){reference-type="eqref" reference="nes4"} can be formulated by using analytic and co-analytic projection. # Main results The main result is the following theorem **Theorem 1**. *Let $1\le p\le 2$ and assume that $b>0$. Then we have the following sharp inequality $$\label{nesit}\left(\int_{\mathbf{T}} |g+\bar h|^p\right)^{1/p}\le A_{p,b}\left(\int_{\mathbf{T}}(|g|^2+b |h|^2)^{\frac{p}{2}}\right)^{1/p},$$ for $f=g+\bar h\in \mathbf{h}^p$ with $\Re(g(0)h(0))\le 0$ where $$A_{p,b}=\left(\frac{1+b+\sqrt{1+b^2+2 b \cos \left[\frac{2 \pi }{p}\right]}}{2 b}\right)^{1/2}.$$ The equality is \"attained\" for $f_c$, when $c\uparrow 1/p$ and* *$$\label{fbb}f_c(z) = \left(\frac{1+z}{1-z}\right)^c -\bar r\left(\frac{1+\bar z}{1-\bar z}\right)^c,$$ and $$\bar r=\frac{\left(-1+b-\sqrt{1+b^2+2 b \cos\left[\frac{2 \pi }{p}\right]}\right) \sec\left(\frac{\pi }{p}\right)}{2 b},$$ for $p<2$. For $p=2$ $$\label{fbb0}f_c(z) = \left\{ \begin{array}{ll} \left(\frac{1+\bar z}{1-\bar z}\right)^c, & \hbox{for $b\le 1$;} \\ \left(\frac{1+z}{1-z}\right)^c, & \hbox{for $b>1$.} \end{array} \right.$$* *Remark 2*. a) Theorem [Theorem 1](#Tao){reference-type="ref" reference="Tao"} remains true if the condition $\Re(g(0)h(0))\le 0$ is replaced by more general condition $$\frac{\pi(p-1)}{p}\le |\mathrm{arg}(g(0)h(0))|\le \pi.$$ In the previous inequality we considered the branch of $\arg$ with $\arg(x)\in (-\pi,\pi]$. 
b\) Theorem [Theorem 1](#Tao){reference-type="ref" reference="Tao"} reduces to the inequality [\[ver\]](#ver){reference-type="eqref" reference="ver"} by Verbitsky for $b=1$ and $g=-h$. Namely $h-\bar h=2i\mathrm{Im}\, h$ and $\mathrm{Im}(h(0))=0$ implies $\Re(-h^2(0))\le 0$. Theorem [Theorem 1](#Tao){reference-type="ref" reference="Tao"} coincides with the author's inequality [\[nes3\]](#nes3){reference-type="eqref" reference="nes3"} for $b=1$ and for the case $p\in[1,2]$. In the last case $D_{p,b}=d_p$. c\) Concerning the analytic version of the Riesz type theorem, the following example suggested by A. Calderon (see [@pik]) shows that [\[ver\]](#ver){reference-type="eqref" reference="ver"} and [\[1ver\]](#1ver){reference-type="eqref" reference="1ver"} are sharp. Namely if $g_c(z)=\left(\frac{1+z}{1-z}\right)^{c}$, $\lvert\arg\frac{1+z}{1-z}\rvert\le \frac{\pi}{2}$, and $c<\frac{1}{p}$, then $g_c=u +iv\in h^p$. Further $|g_c|=\csc \frac{c\pi}{2} |v|$ almost everywhere on $\mathbf{T}$. In contrast to the analytic case where the univalent (conformal injective) function $g$ of the unit disk of a certain angle is the minimizer, for harmonic version of this theorem, the minimizer $f_b$ is a $\min\{\bar r ,\bar r^{-1}\} -$quasiconformal harmonic mapping of the unit disk onto an angle, provided that $b\neq 1$. The case $b=1$ implies $\bar r=1$, and then $f_b$ from [\[fbb\]](#fbb){reference-type="eqref" reference="fbb"} coincides with $2i\mathrm{Im}( g_c)$. # Strategy of the proof As the authors of the paper did in [@verb1], (see also [@tams; @melmar; @mel]) we use \"pluri-subharmonic minorant\". **Definition 3**. An upper semi-continuous real function $u$ is called subharmonic in an open set $\Omega$ of the complex plane, if for every compact subset $K$ of $\Omega$ and every harmonic function $f$ defined on $K$, the inequality $u(z)\le f(z)$ for $z\in \partial K$ implies that $u(z)\le f(z)$ on $K$. A property which characterizes the subharmonic mappings is the sub-mean value property which states that if $u$ is a subharmonic function defined on a domain $\Omega$, then for every closed disk $\overline{D(z_0,r)}\subset \Omega$, we have the inequality $$u(z_0)\le \frac{1}{2\pi r}\int_{|z-z_0|=r} {u(z)|dz|}.$$ **Definition 4**. [@lars] A function $u$ defined in an open set $\Omega\subset \mathbf{C}^n$ with values in $[ - \infty, +\infty)$ is called plurisubharmonic if 1. $u$ is semicontinuous from above; 2. For arbitrary $z,w \in\mathbf{C}^n$, the function $t\to u(z + tw)$ is subharmonic in the part $\mathbf{C}$ where it is defined. **Definition 5**. A pluri-subharmonic function $f$ is called a pluri-subharmonic minorant of $g$ on $\Omega$ if $f(z,w)\le g(z,w)$ for $(z,w)\in \Omega\subset\mathbf{C}^2$, and $f(z_0,w_0)=g(z_0,w_0)$, for a point $(z_0,w_0)\in \Omega.$ Let $p\in[1,2]$ and $b>0$. The main task in the proof of Theorem [Theorem 1](#Tao){reference-type="ref" reference="Tao"} is to find optimal positive constants $D_{p,b}$, $E_{p,b}$, and pluri-subharmonic functions $\mathcal{G}_p(z,w)$ (Lemma [Lemma 6](#nice1){reference-type="ref" reference="nice1"}) for $z,w\in \mathbf{C}$, vanishing for $z=0$ or $w=0$, so that the inequality $$\label{calf} |w+\bar z|^p\le (|w|^2+ b |z|^2)^{\frac{p}{2}} D_{p,b} - E_{p,b} \mathcal{G}_p(z,w)$$ is sharp. # Proof of Theorem [Theorem 1](#Tao){reference-type="ref" reference="Tao"} {#proof-of-theorem-tao} The main content of the proof is the following lemma which we prove in the next section. 
Let $\zeta=\rho e^{it}$, $t\in(-\pi,\pi]$, and consider the following branches $$\label{br1}(-{\zeta})^{\frac{p}{2}}:=|\zeta|^{p/2}e^{ip/2(\pi+t)}$$ $$\label{br2}(-{\bar \zeta})^{\frac{p}{2}}:=|\zeta|^{p/2}e^{ip/2(\pi-t)}.$$ Then for $\zeta=\rho e^{it}$, $t\in[-\pi,\pi]$ consider $$\label{rez}k(\rho,t):=\max\{\Re((-{\zeta})^{\frac{p}{2}}),\Re((-\bar{\zeta})^{\frac{p}{2}})\}=\rho^{\frac{p}{2}}\cos(\frac{p}{2}(\pi-|t|)).$$ Further define $$K(\rho,t)=\left\{ \begin{array}{ll} k(\rho,t), & \hbox{if $|t|\le \pi$;} \\ k(\rho,t-2\pi), & \hbox{if $t\in [\pi,2\pi]$;} \\ k(\rho, t+2\pi), & \hbox{if $t\in[-2\pi,-\pi]$.} \end{array} \right.$$ Then for $\zeta=\rho e^{it}\in \mathbb{C}$ define the function $H(\zeta)=K(\rho,t)$. This function is well-defined and continuous in $\mathbb{C}$ because $K(\rho,\pi)=K(\rho,-\pi)$. **Lemma 6**. *The function $H(\zeta)$ is subharmonic on $\mathbf{C}$. For $w=|w| e^{it}$, $t\in(-\pi,\pi]$ and $z=|z|e^{is}$, $s\in(-\pi,\pi]$, define $\mathcal{G}_p(z,w)=K(|z|\cdot |w|, t+s)$. Then $\mathcal{G}_p$ is pluri-subharmonic on $\mathbf{C}^2.$ For every two complex numbers $z$ and $w$ and $p\in[1,2)$, we have $$\label{fsh} |w+\bar z|^p\le D_{p,b}(|w|^2+b|z|^2)^{\frac{p}{2}} - E_{p,b} \mathcal{G}_p(z,w),$$* *where $$\label{aba}D_{p,b}=\left(\frac{1+b+\sqrt{S_b}}{2 b}\right)^{\frac{p}{2}},$$ and $$\label{abb} E_{p,b}=2^{2-\frac{p}{2}} \left(-\frac{\left(S_b+(1+b) \sqrt{S_b}\right) \sec\left(\frac{\pi }{p}\right)}{b}\right)^{\frac{1}{2} (-2+p)} \sin \frac{\pi}{p}.$$ Here we use the shorthand notation $$\label{Sb}S_b:=1+b^2+2 b \cos \left[\frac{2 \pi }{p}\right].$$ Furthermore the equality in [\[fsh\]](#fsh){reference-type="eqref" reference="fsh"} is attained for $p<2$ if and if $${|w|}=-\frac{1}{2} \left(-1+b+\sqrt{S_b}\right) \sec\left(\frac{\pi }{p}\right){|z|}$$ and $\arg(wz)=\frac{-\pi}{p}\mod \pi$.* *Proof of inequality of Theorem [Theorem 1](#Tao){reference-type="ref" reference="Tao"}.* We use Lemma [Lemma 6](#nice1){reference-type="ref" reference="nice1"}. Assume that $f=g+\bar h$, where $g$ and $h$ are holomorphic functions on the unit disk. Then from Lemma [Lemma 6](#nice1){reference-type="ref" reference="nice1"}, we have $$|g(z)+\overline{h(z)}|^p \le D_{p,b} (|g(z)|^2+b |h(z)|^2)^{\frac{p}{2}} - E_{p,b} \mathcal{G}_p(g(z),h(z)),$$ where $D_{p,b}$ and $E_{p,b}$ are given in [\[aba\]](#aba){reference-type="eqref" reference="aba"} and [\[abb\]](#abb){reference-type="eqref" reference="abb"}. Then $$\int_{\mathbf{T}} |g(z)+\overline{h(z)}|^p\le D_{p,b} \int_{\mathbf{T}} (|g(z)|^2+b |h(z)|^2)^{\frac{p}{2}} - E_{p,b} \int_{\mathbf{T}}\mathcal{G}_p(g(z),h(z)).$$ Let $\theta=\arg(g(0)h(0))$. As $\mathcal{P}_p(z)= \mathcal{G}_p(g(z),h(z))$ is subharmonic for every $p\geqslant 1$, by sub-mean inequality we have that $$\int_{\mathbf{T}}\mathcal{G}_p(g(z),h(z))\geqslant\mathcal{G}_p(g(0),h(0))=|g(0)h(0)|^p\cos\left[p\frac{\pi-|\theta|}{2}\right]\geqslant 0,$$ because $\theta\in[\pi/2,3\pi/2]$ and $p\in[1,2]$. ◻ *Proof of sharpness of Theorem [Theorem 1](#Tao){reference-type="ref" reference="Tao"}.* Let $\beta \in[0,1]$ and define $$f_\beta=g+\bar h=\beta \left(\frac{1+z}{1-z}\right)^c+(\beta-1)\left(\frac{1+\bar z}{1-\bar z}\right)^c,\, c<1/p.$$ Notice that $\mathrm{arg}(g(0)h(0))=\pi$, or $g(0)h(0)=0$ and thus $\mathrm{Re}(g(0)h(0))\le 0$. 
Let $$Z(\beta):={\left(\int_{\mathbf{T}} |g+\bar h|^p\right)}.$$ Then $$Z(\beta) =2 \pi (1+2 (-1+\beta) \beta+2 (-1+\beta) \beta \cos[c \pi ])^{\frac{p}{2}} \sec\left[\frac{c p \pi }{2}\right].$$ Further let $$X(\beta) :={\left(\int_{\mathbf{T}} (|g|^2+b |h|^2)^{\frac{p}{2}}\right)}.$$ Then $$X(\beta)= (1+b)^{\frac{p}{2}}\int_0^{2\pi} \left|\frac{1+e^{it}}{1-e^{it}}\right|^{cp} dt= 2 \pi(1+b)^{\frac{p}{2}} \sec\left[\frac{c p \pi }{2}\right].$$ Now for $c=1/p$ and for $$\beta=\frac{\left(-1+b+2 b \cos \frac{\pi}{p}+\sqrt{S_b}\right) \sec\left[\frac{\pi }{2 p}\right]^2}{4 (-1+b)},$$ we have $$K(\beta,b,c)=\frac{Z(\beta)}{X(\beta)}=D_{p,b}.$$ We also need to check that $\beta\in [0,1]$. But after a straightforward calculation this is equivalent to the inequality $|b-1|\cos\frac{\pi}{p}<0$ which is true for $b\ne 1$ and $p\neq 2$. For $b=1$ we obtain $\beta = 1/2$. If $b\neq 1$, then $\kappa=\min\{\beta/(1-\beta),(1-\beta)/\beta\} \in(0,1)$ and $f$ is a quasiconformal harmonic mapping of the unit disk onto an angle. ◻ # Proof of Lemma [Lemma 6](#nice1){reference-type="ref" reference="nice1"} {#proof-of-lemma-nice1} We first prove the subharmonicity of $H$. Let $z=re^{i\theta},$ $\theta\in(-\pi,\pi]$. Notice that $$\label{form}\max\{r^{p/2} \cos[p/2(\pi -\theta)],r^{p/2} \cos[p/2(\pi+\theta)]\}=r^{p/2}\cos\left(\frac{p}{2}(\pi-\lvert\theta\rvert)\right).$$ Thus $$H(z)=r^{p/2}\cos\left(\frac{p}{2}(\pi-\lvert\theta\rvert)\right)$$ is subharmonic as a maximum of harmonic functions $$\label{br3} \Re((-{z})^{\frac{p}{2}}), \ \ \Re((-\bar{z})^{\frac{p}{2}})$$ for $z\in \mathbb{C}\setminus (-\infty,0]$. But the same formula [\[form\]](#form){reference-type="eqref" reference="form"} holds for $\theta\in (-\pi/p-\pi/2, \pi/p+\pi/2)\supseteq [-\pi,\pi]$, for $1\le p<2$, and thus $H$ is subharmonic as the maximum of the branches of the same harmonic functions [\[br3\]](#br3){reference-type="eqref" reference="br3"} defined in $\mathrm{Re}(z)\le 0$. Thus $H$ is subharmonic in $\mathbb{C}\setminus \{0\}$. The subharmonicity at $z=0$ is verified by proving the sub-mean inequality: $$\frac{1}{2r\pi}\int_{-\pi}^{\pi}H(re^{it})dt =\frac{4 \sin\left[\frac{p \pi }{2}\right]}{2r\pi p}\geqslant H(0)=0.$$ For a related statement see [@verb1 Remark 2.3]. Further, let $r=|w|/|z|$ for $z\neq 0$ and $t=\mathrm{arg}(zw)$. Then Lemma [Lemma 6](#nice1){reference-type="ref" reference="nice1"} follows from the following lemma. **Lemma 7**. *For every $r\geqslant 0$ and $t\in[0,\pi]$ and $b>0$ we have $$\label{compG}G(r,t)=-r^{\frac{p}{2}} X \cos \left[\frac{1}{2} p (\pi -t)\right]-\left(1+r^2+2 r \cos t\right)^{\frac{p}{2}}+Y \left(b+r^2 \right)^{\frac{p}{2}}\geqslant 0$$ where $$X= 2^{2-\frac{p}{2}} \left(-\frac{\left(S_b+(1+b) \sqrt{S_b}\right) \sec\left(\frac{\pi }{p}\right)}{b}\right)^{\frac{1}{2} (-2+p)} \sin \frac{\pi}{p}$$ and $$Y=\left(\frac{1+b+\sqrt{S_b}}{2 b}\right)^{\frac{p}{2}}.$$ The minimum is zero and it is attained for $t=\pi-\pi/p$ and $$r=R:=-\frac{1}{2} \left(-1+b+\sqrt{S_b}\right) \sec\left(\frac{\pi }{p}\right).$$* *Proof.* The first step is to simplify the rather complex inequality [\[compG\]](#compG){reference-type="eqref" reference="compG"} into [\[secondf\]](#secondf){reference-type="eqref" reference="secondf"}, which is suitable for further work.
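Before carrying out this simplification, here is a quick numerical sanity check of the inequality in Lemma 7 (a minimal sketch, assuming NumPy; the sample values of $p$ and $b$ are arbitrary choices of ours): the grid minimum of $G$ should be nonnegative and essentially zero, attained near the predicted point $(R,\pi-\pi/p)$.

```python
import numpy as np

def G(r, t, p, b):
    """The function G(r,t) from Lemma 7, with X, Y, S_b transcribed from the
    statement above (p in [1,2), b > 0)."""
    Sb = 1 + b ** 2 + 2 * b * np.cos(2 * np.pi / p)
    X = (2 ** (2 - p / 2) * np.sin(np.pi / p)
         * (-(Sb + (1 + b) * np.sqrt(Sb)) / (b * np.cos(np.pi / p))) ** ((p - 2) / 2))
    Y = ((1 + b + np.sqrt(Sb)) / (2 * b)) ** (p / 2)
    return (-r ** (p / 2) * X * np.cos(0.5 * p * (np.pi - t))
            - (1 + r ** 2 + 2 * r * np.cos(t)) ** (p / 2)
            + Y * (b + r ** 2) ** (p / 2))

p, b = 1.5, 2.0                                    # arbitrary sample parameters
r = np.linspace(1e-3, 10, 2001)[:, None]
t = np.linspace(0, np.pi, 2001)[None, :]
vals = G(r, t, p, b)
print(vals.min())                                  # should be >= 0 and close to 0
Sb = 1 + b ** 2 + 2 * b * np.cos(2 * np.pi / p)
R = -0.5 * (-1 + b + np.sqrt(Sb)) / np.cos(np.pi / p)
i, j = np.unravel_index(vals.argmin(), vals.shape)
print((r[i, 0], t[0, j]), (R, np.pi - np.pi / p))  # grid minimizer vs. (R, pi - pi/p)
```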
We first have $$b=\frac{R-R^2 \cos \frac{\pi}{p}}{R-\cos \frac{\pi}{p}}.$$ Then $$X=2 \left(\frac{1}{R}+R-2 \cos \frac{\pi}{p}\right)^{\frac{1}{2} (-2+p)} \sin \frac{\pi}{p}$$ and $$Y = \left(1-\frac{\cos \frac{\pi}{p}}{R}\right)^{\frac{p}{2}}.$$ Then our main inequality becomes $$\left(\frac{\left(1+r^2\right) R-\left(r^2+R^2\right) \cos \frac{\pi}{p}}{r R}\right)^{\frac{p}{2}}-\left(\frac{1}{r}+r+2 \cos t\right)^{\frac{p}{2}}$$ $$-2 \left(\frac{1}{R}+R-2 \cos \frac{\pi}{p}\right)^{\frac{1}{2} (-2+p)} \cos \left[\frac{1}{2} p (\pi -t)\right] \sin \frac{\pi}{p}\geqslant 0$$ which need to hold for every $r,R>0$. By making the substitution $\frac{1}{r}+r=2\cosh \alpha$ and $\frac{1}{R}+R=2\cosh \beta$ the previous inequality becomes $$\left({\cosh \alpha-\cosh(\alpha\pm \beta) \cos \frac{\pi}{p}}\right)^{\frac{p}{2}}-\left(\cosh \alpha+ \cos t\right)^{\frac{p}{2}}$$ $$- \left(\cosh \beta- \cos \frac{\pi}{p}\right)^{\frac{1}{2} (-2+p)} \cos \left[\frac{1}{2} p (\pi -t)\right] \sin \frac{\pi}{p}\geqslant 0$$ which need to hold for every $\alpha,\beta\in \mathbb{R}$. Then the last inequality can be written as $$\label{secondf}\begin{split}\bigg(\cosh \alpha&-\cos \frac{\pi}{p} \cosh \gamma\bigg)^{\frac{p}{2}}-(\cos t+\cosh \alpha)^{\frac{p}{2}}\\&- \left(\cosh(\alpha-\gamma)-\cos \frac{\pi}{p}\right)^{\frac{1}{2} (-2+p)} \cos \left[\frac{1}{2} p (\pi -t)\right]\sin \frac{\pi}{p}\geq 0,\end{split}$$ for $0,\gamma\le \alpha$, $t\in[0,\pi]$. The advantage of [\[secondf\]](#secondf){reference-type="eqref" reference="secondf"} is that the equality case occurs precisely when $\gamma=0$ and $t=\pi-\pi/p$. The main step of the proof of this technical Lemma is a refinement of Terence Tao's subtle method which works for the special case when $p=3/2$ ([@tao]). The first part of this method is to replace the trigonometric expressions in $t$ with quadratic expressions in $x = -\cos \frac{\pi}{p}-\cos t$, which is a convenient further change of variable (in particular, it allows us to shift the equality case from $t=\pi-\pi/p$ to $x=0$, and eliminate $x$ shortly afterward). **Lemma 8**. *For $p\in[1,2]$, $t\in[0,\pi]$ and $$q=\frac{p-2 \cot\left[\frac{\pi }{2 p}\right]}{p-p \cos \frac{\pi}{p}}$$ we have $$\sin\left(\frac{\pi }{p}\right)\cos \left[\frac{1}{2} p (\pi -t)\right]\le \frac{1}{2} p \left(-\cos \frac{\pi}{p}-\cos t-q \left(-\cos \frac{\pi}{p}-\cos t\right)^2\right) .$$* *Remark 9*. In Lemma [Lemma 8](#tt){reference-type="ref" reference="tt"} we are comparing two functions that coincide in two points $\pi$ and $\pi-\pi/p$. Moreover, their derivatives coincide in $\pi-\pi/p$ and this was the idea of how we came to the inequality. *Proof of Lemma [Lemma 8](#tt){reference-type="ref" reference="tt"}.* Let $a=\cos(\pi-t)=-\cos t$. Then $a\in[-1,1]$ when $t\in [0,\pi]$. 
We need to show that $\Phi(a)\geqslant 0$ where $$\Phi(a) = \frac{1}{2} p \left(a-\cos \frac{\pi}{p}-\frac{\left(a-\cos \frac{\pi}{p}\right)^2 \left(p-2 \cot\left[\frac{\pi }{2 p}\right]\right)}{p-p \cos \frac{\pi}{p}}\right)$$ $$-\cos \left[\frac{1}{2} p \cos^{-1}(a)\right] \sin \frac{\pi}{p}.$$ Then $$\Phi'''(a) = \frac{p \sin \frac{\pi}{p} \left(6 a \sqrt{1-a^2} p \cos \left[\frac{1}{2} p t\right]+\left(-4+p^2-a^2 \left(8+p^2\right)\right) \sin\left[\frac{1}{2} p t \right]\right)}{8 \left(1-a^2\right)^{5/2}}$$ where $t=\arccos a$, and this function is negative because $$\label{phip}\phi(p):=-4+p^2-\left(8+p^2\right) \cos^2 t+3 p \cot\left[\frac{p t}{2}\right] \sin(2t)\le 0.$$ Let us prove [\[phip\]](#phip){reference-type="eqref" reference="phip"}. It is enough to prove that $\phi$ is convex and $\phi(1)<0$ and $\phi(2)=0$. First of all $$\phi(1)=-3-9 \cos^2 t+3 \cot\left[\frac{t}{2}\right] \sin(2t)=-12 \sin\left[\frac{t}{2}\right]^4$$ and this is clearly negative for $t\in(0,\pi)$. Also, straightforward calculations yield $\phi(2)=0$. To prove that $\phi$ is convex we calculate $$\phi'''(p)=-\frac{3}{4} t^2\csc^4\left[\frac{p t}{2}\right]\sin(2t) (p t (2+\cos [p t])-3 \sin[p t])$$ and thus $\Phi'(p)$ is convex for $t\geqslant\pi/2$ and concave for $t\le \pi/2$. Fix now $t$. If $\phi'$ is concave, because $$\label{prima}\phi'(1)=2 \cot\left[\frac{t}{2}\right] (-3 t \cos t+\sin t+\sin(2t))>0$$ and $$\label{seconda}\phi'(2)=5+\cos (2t)-6 t \cot\,t>0,$$ it follows that $\phi'(p)> 0$ for $p\in[1,2]$. To prove [\[prima\]](#prima){reference-type="eqref" reference="prima"}, observe that $$\varphi(t):=-3 t \cos t+\sin t+\sin(2t)=\sum_{m=0}^\infty\frac{2 (-1)^m \left(-1+4^m-3 m\right)}{(2 m+1)!}t^{2m+1}.$$ Further two consecutive members $m=2k$ and $m=2k+1$ of the previous sum are $$\frac{2\text{ }\left(-1+4^{2 k}-6 k\right) t^{1+4 k}}{(4 k+1)}-\frac{2\text{ }\left(-1+4^{1+2 k}-3 (1+2 k)\right) t^{3+4 k}}{(4 k+3)!},$$ and the last quantity is $\geqslant 0$ if and only if $$2 \left(-1+16^{ k}-6 k\right)-\frac{\left(-1+4 \cdot 16^{ k}-3 (1+2 k)\right) t^2}{(1+2 k) (3+4 k)}>0.$$ Since $t<4$, it is enough to prove $$\left(58+40 k-136 k^2-96 k^3\right)+\left(-58+20 k+16 k^2\right) 16^k>0.$$ Since $16^k>1 +15 k$, it will be sufficient to check that $$-810 k+180 k^2+144 k^3>0,$$ and this is certainly true for $k\geqslant 2$. In particular, we arrive at the inequality $$\varphi(t)\geqslant\frac{3 t^5}{20}-\frac{3 t^7}{140}+\frac{3 t^9}{2240}-\frac{t^{11}}{19800}\geqslant 0$$ which is true for $t\in[0,\pi]$. Similarly, we prove [\[seconda\]](#seconda){reference-type="eqref" reference="seconda"}: It is clear that $\phi'(2)>0$ for $t\geqslant\pi/2$. For $t\le \pi/2$ we use Taylor extension $$\sin t\phi'(2) = \sum_{m=2}^\infty \frac{3 (-1)^m \left(-1+9^m-8 m\right)}{2 (2m+1)!}t^{2m+1}$$ which is greater or equal to $\frac{4 t^5}{5}-\frac{22 t^7}{105}\geqslant 0$ for $t\in[0,\pi/2]$. Further $$\label{pa2}\phi''(1)=\csc^2\frac{t}{2}\chi(t),$$ where $$\label{pa}\chi(t)=\left(\sin^2 t+\cos t \left(3 t^2 (1+\cos t)-\sin t (6 t+\sin t)\right)\right).$$ Let us show that $\phi''(1)>0$. First, the expression $\chi (t)$ is positive for $t\geqslant\pi/2$ as a sum of two positive functions. 
If $t\in[0,\pi/2]$, then we take the Taylor series of $\cos t$ and $\sin t$ up to some order and get the following estimates $$\label{est1}t-\frac{t^3}{6}\le \sin t\le t - \frac{t^3}{6} + \frac{t^5}{120}$$ and $$\label{est2}\cos t\geqslant 1-\frac{t^2}{2}+\left(\frac{2 \left(-8+\pi ^2\right)}{\pi ^4}\right)t^4.$$ Let us prove the non-trivial estimate [\[est2\]](#est2){reference-type="eqref" reference="est2"}. Let $$h(t)=-1+\frac{t^2}{2}-\frac{2 \left(-8+\pi ^2\right) t^4}{\pi ^4}+\cos t,$$ and consider $g(t) =\frac{h'(t)}{t^3}.$ We have that $$g'(t)=\frac{-t (2+\cos t)+3 \sin t}{t^4}=\sum_{k=0}^\infty \left(-\frac{2+4 k}{(4 k+5)!}+\frac{4 (1+k) t^2}{(4 k+7)!}\right) t^{1+2k}.$$ Since $$\left(-\frac{2+4 k}{(4 k+5)!}+\frac{4 (1+k) t^2}{(4 k+7)!}\right) \le\frac{1}{(4k+5)!} \left(-(2+4 k)+\frac{4 (1+k) \pi^2/4}{(7+4 k)(6+4k)}\right)<0,$$ it follows that $g$ is decreasing. Since $g(0)=\frac{1}{6}+\frac{64}{\pi ^4}-\frac{8}{\pi ^2}>0$ and $g(\pi/2)=-\frac{4 \left(-16+2 \pi +\pi ^2\right)}{\pi ^4}<0$. There is only one point $t_\circ\in(0,\pi/2)$ so that $g(t_\circ)=h'(t_\circ)=0$. Since $h(\frac{\pi}{4})=-\frac{15}{16}+\frac{1}{\sqrt{2}}+\frac{3 \pi ^2}{128}>0$. It follows that $t_\circ$ is a local maximum of $h$. Since $h(0)=h(\pi/2)=0$, this implies that $h(t)\geqslant 0$ for $t\in[0,\pi/2]$. Then [\[pa\]](#pa){reference-type="eqref" reference="pa"}, [\[est1\]](#est1){reference-type="eqref" reference="est1"} and [\[est2\]](#est2){reference-type="eqref" reference="est2"} imply that $$\chi(t)\geqslant t^6\zeta(t)$$ where $$\begin{split}\zeta(t)&=\left(\frac{1}{60}-\frac{32}{\pi ^4}+\frac{4}{\pi ^2}\right) +\left(\frac{1}{20}+\frac{80}{3 \pi ^4}-\frac{10}{3 \pi ^2}\right) t^2\\&+\left(-\frac{7}{4800}+\frac{768}{\pi ^8}-\frac{192}{\pi ^6}+\frac{608}{45 \pi ^4}-\frac{17}{90 \pi ^2}\right) t^4\\&+\frac{\left(-1280+160 \pi ^2+\pi ^4\right) t^6}{28800 \pi ^4}+\frac{\left(8-\pi ^2\right) t^8}{7200 \pi ^4}.\end{split}$$ Then $$\zeta(t)\approx \left(0.09344 -0.0139778 t^2-0.000663 t^4+0.000141 t^6-0.0000026657 t^8\right)$$ which is apparently positive for $t\in[0,\pi/2]$. Since $\phi'$ is convex, for $p>1$ we obtain that $\phi''(p)>\phi''(1)>0$. Thus $\phi$ is convex and hence $\phi(p)<0$ because $\phi(1)<0$ and $\phi(2)=0$. The conclusion is that $$\Phi'(a)=\frac{1}{2} p \left(1+\frac{2 \left(a-\cos \frac{\pi}{p}\right) \left(p-2 \cot\left[\frac{\pi }{2 p}\right]\right)}{p \left(-1+\cos \frac{\pi}{p}\right)}-\frac{\sin \frac{\pi}{p} \sin\left[\frac{1}{2} p \cos^{-1}(a)\right]}{\sqrt{1-a^2}}\right)$$ is concave. Since $\Phi'(\cos(\pi/p))=0$, it follows that there is at most one more point $-1<\bar a <1$ so that $\Phi'(\bar a)=0$. We now have $$\label{xp}\Phi''(\cos(\frac{\pi}{p}))=\frac{1}{4} \csc^3\left(\frac{\pi }{p}\right) \left(32 \cos^4\left[\frac{\pi }{2 p}\right]-4 p \sin \frac{\pi}{p}-3 p \sin\left[\frac{2 \pi }{p}\right]\right)>0.$$ In order to prove [\[xp\]](#xp){reference-type="eqref" reference="xp"} observe that, after taking the change $p=\pi/x$, it is equivalent with $$\left(32 x \cos^4\left[\frac{x}{2}\right]-4 \pi \sin x-3 \pi \sin(2x)\right)\geqslant 0, \ \ x\in[\pi/2,\pi].$$ The last relation can be written as $$8 x (1+\cos x)^2-2 \pi (2+3 \cos x) \sin x\geqslant 0.$$ After the change $x=y+\pi/2$ we arrive at the inequality $$h(y):=-2 \pi \cos y(2-3 \sin y)+8 \left(\frac{\pi }{2}+y\right) (1-\sin y)^2\geqslant 0.$$ Now it is clear that for $y\in[\arcsin \frac{2}{3},\pi/2]$, $h(y)\geqslant 0$. 
For $y\in[0,\arcsin \frac{2}{3}]$, we use the inequality $$h(y)\geqslant k(y):=-2 \pi (2-3 \sin y)+8 (1-\sin y)^2 \left(\frac{\pi }{2}+\sin y\right)=h(\sin y),$$ where $h(t)=-2 \pi (2-3 t)+8 (1-t)^2 \left(\frac{\pi }{2}+t\right)$. Since $h(t)\geqslant 0$ for $t\in[0,2/3]$, [\[xp\]](#xp){reference-type="eqref" reference="xp"} is proved. Since $\Phi''$ is decreasing, it follows that $\bar a>\cos \pi/p$. Then $\Phi$ is decreasing in $[-1,\cos(\pi/p)]$, increasing in $[\cos(\pi/p),\bar a]$ and decreasing in $[\bar a, 1]$. Since $\Phi(1)=0$ and $\Phi(\cos(\pi/p))=0$, it follows that $\Phi(a)\geqslant 0$ for every $a$. This finishes the proof of the lemma. ◻ **Continuation of proof of Lemma [Lemma 6](#nice1){reference-type="ref" reference="nice1"}** We begin with the required inequality $$\left(\cosh \alpha-\cos \frac{\pi}{p} \cosh \gamma\right)^{\frac{p}{2}}-(\cos t+\cosh \alpha)^{\frac{p}{2}}$$ $$- \left(\cosh(\alpha-\gamma)-\cos \frac{\pi}{p}\right)^{\frac{1}{2} (-2+p)} \cos \left[\frac{1}{2} p (\pi -t)\right]\sin \frac{\pi}{p}\geq 0,$$ for $0,\gamma\le \alpha$. Let $x=-(\cos t + \cos \frac{\pi}{p})$. From Lemma [Lemma 8](#tt){reference-type="ref" reference="tt"}, it is enough to prove $$\begin{split}&\left(\cosh\alpha -\cos \frac{\pi}{p} \cosh\gamma \right)^{\frac{p}{2}}-\left(-x-\cos \frac{\pi}{p}+\cosh\alpha \right)^{\frac{p}{2}}\\&\geqslant\left(-\cos \frac{\pi}{p}+\cosh [\alpha -\gamma ]\right)^{\frac{p}{2}-1} \left(\frac{p x}{2}-\frac{x^2 \left(p-p \cos \frac{\pi}{p}-2 \sin \frac{\pi}{p}\right)}{2 \left(1-\cos \frac{\pi}{p}\right)^2}\right)\end{split}$$ where $x=-\cos\frac{\pi }{p}-\cos t\in[-\cos\frac{\pi }{p}-1,-\cos\frac{\pi }{p}+1]$.\ From concavity of $s\to s^{\frac{p}{2}}$ we have $$-\left(-x-\cos \frac{\pi}{p}+\cosh\alpha \right)^{\frac{p}{2}}\geqslant-\left(\cosh\alpha-\cos \frac{\pi}{p} \right)^{\frac{p}{2}}+\frac{p}{2} \left(\cosh\alpha-\cos \frac{\pi}{p} \right)^{\frac{p}{2}-1}x.$$ It is enough to prove that $$\begin{split}\frac{1}{2} p x &\left(\cosh\alpha-\cos \frac{\pi}{p} \right)^{\frac{p}{2}-1}-\left(\cosh\alpha-\cos \frac{\pi}{p} \right)^{\frac{p}{2}}\\&+\left(\cosh\alpha -\cos \frac{\pi}{p} \cosh\gamma \right)^{\frac{p}{2}}\\&\geq \left(-\cos \frac{\pi}{p}+\cosh [\alpha -\gamma ]\right)^{\frac{p}{2}-1} \left(-\frac{p x}{2}-\frac{x^2 \left(p-p \cos \frac{\pi}{p}-2 \sin \frac{\pi}{p}\right)}{2 \left(1-\cos \frac{\pi}{p}\right)^2}\right) .\end{split}$$ Further $$\left(\cosh\alpha -\cos \frac{\pi}{p} \cosh\gamma \right)^{\frac{p}{2}}-\left(\cosh\alpha-\cos \frac{\pi}{p} \right)^{\frac{p}{2}}$$ $$>\frac{1}{2} p \cos \frac{\pi}{p} \left(\cosh\alpha-\cos \frac{\pi}{p} \right)^{\frac{p}{2}-1} (1-\cosh\gamma ).$$ Thus we need to show that $Wx^2+V x+U>0$ where $$W=\frac{\left(\cosh(\alpha-\gamma)-\cos \frac{\pi}{p}\right)^{\frac{p}{2}-1} \left(p-p \cos \frac{\pi}{p}-2 \sin \frac{\pi}{p}\right)}{2 \left(1-\cos \frac{\pi}{p}\right)^2},$$ $$V=\frac{1}{2} p \left(\cosh \alpha-\cos \frac{\pi}{p}\right)^{\frac{p}{2}-1}-\frac{1}{2} p \left(\cosh(\alpha-\gamma)-\cos \frac{\pi}{p}\right)^{\frac{p}{2}-1}$$ and $$U=\frac{p \cos \frac{\pi}{p} \left(\cosh \alpha-\cos \frac{\pi}{p}\right)^{\frac{p}{2}} (-1+\cosh \gamma)}{2 \left(\cos \frac{\pi}{p}-\cosh \alpha\right)}.$$ Since $W>0$, it is clear that it is enough to prove that the discriminant is negative. 
Now $$V^2- 4 U W<0$$ if and only if $$\begin{split}G(X)&:=\frac{1}{4} p \left(1-\left(\frac{1}{X}\right)^{1-\frac{p}{2}}\right)^2 X^{1-\frac{p}{2}}\\&+\frac{\cos \frac{\pi}{p} (-1+\cosh \gamma) \left(p-p \cos \frac{\pi}{p}-2 \sin \frac{\pi}{p}\right)}{\left(1-\cos \frac{\pi}{p}\right)^2}\le 0\end{split}$$ where $$X=X(\alpha)=\frac{-\cos \frac{\pi}{p}+\cosh \alpha}{-\cos \frac{\pi}{p}+\cosh(\alpha-\gamma)}.$$ It turns out that we need to split into two cases $\gamma\geqslant 0$ and $\gamma<0$. ## **The case $\gamma\geqslant 0$** First of all $$G'(X)=\frac{1}{8} (2-p) p X^{-2-\frac{p}{2}} \left(X^2-X^p\right)\geqslant 0$$ for $X\geqslant 1$. Further $$X'(\alpha)=\frac{\cos \frac{\pi}{p} (-\sinh \alpha+\sinh(\alpha-\gamma))+\sinh \gamma}{\left(\cos \frac{\pi}{p}-\cosh(\alpha-\gamma)\right)^2}\geqslant 0$$ and so $$G(X)\le G(\lim_{\alpha\to \infty } X(\alpha))=G(e^\gamma).$$ Let $\gamma=\log x$. It remains to prove that $$\label{goal}\frac{1}{4} \left(\frac{p x^{-1-\frac{p}{2}} \left(x-x^{\frac{p}{2}}\right)^2}{-2+\frac{1}{x}+x}+\frac{2 \cos \frac{\pi}{p} \left(p-p \cos \frac{\pi}{p}-2 \sin \frac{\pi}{p}\right)}{\left(1-\cos \frac{\pi}{p}\right)^2}\right)\le 0$$ for $x\geqslant 1$ and $p\in[1,2]$. Now we prove that $$\label{expression}\sup_{x>0,x\neq 1}\frac{x^{-1-\frac{p}{2}} \left(x-x^{\frac{p}{2}}\right)^2}{-2+\frac{1}{x}+x}=\lim_{x\to 1}\left(\frac{x^{1-p/4}-x^{p/4}}{x-1}\right)^2=\frac{1}{4} (2-p)^2.$$ To prove the first equality in [\[expression\]](#expression){reference-type="eqref" reference="expression"}, assume first that $x>1$. Then we need to show that $$\phi(x):=-\left(1-\frac{p}{2}\right) (-1+x)+x^{1-\frac{p}{4}}-x^{p/4}\le 0.$$ Since $$\phi''(x)=\frac{1}{16} (4-p) p x^{-2-\frac{p}{4}} \left(-x+x^{\frac{p}{2}}\right)\le 0,$$ it follows that $\Phi'$ is decreasing. In particular, $\phi'(x)<\phi'(1)=0$ for $x>1$. Thus $\phi$ is decreasing and hence $\phi(x)\le \phi(1)=0$, what we wanted to prove. The case $x<1$ can be treated similarly. So it remains to show that $$\frac{1}{4} (-2+p)^2+\frac{2 \cos \frac{\pi}{p} \left(p-p \cos \frac{\pi}{p}-2 \sin \frac{\pi}{p}\right)}{\left(1-\cos \frac{\pi}{p}\right)^2}\le 0$$ for $p\in[1,2]$. This inequality is equivalent to the inequality $$\label{sabato}s^2 (1+\sin s)^2-(\pi +2 s) \sin s \left[\pi -(\pi +2 s) \cos s+\pi \sin s\right]\le 0$$ for $s=\frac{\pi}{p}-\frac{\pi}{2}\in[0,\pi/2]$. We first have that $$\label{18sept}\psi(s):=\pi -(\pi +2 s) \cos s+\pi \sin s\geqslant( (-2+\pi ) s+\frac{\pi s^2}{2}).$$ To prove [\[18sept\]](#18sept){reference-type="eqref" reference="18sept"}, since $$\sin s\geqslant s-\frac{s^3}{6}\text{ and }\cos s\le 1-\frac{s^2}{2}+\frac{s^4}{24},$$ we have that $$\begin{split} \psi(s)&\geqslant\pi -(-2+\pi ) s-\frac{\pi s^2}{2}+\pi \left(s-\frac{s^3}{6}\right)-(\pi +2 s) \left(1-\frac{s^2}{2}+\frac{s^4}{24}\right) \\& =\frac{1}{24} s^3 \left(24-2 s^2-\pi (4+s)\right)\geqslant 0, \ \ s\in[0,\pi/2].\end{split}$$ This implies [\[18sept\]](#18sept){reference-type="eqref" reference="18sept"}. 
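As a quick numerical confirmation of the last estimate (a sketch, assuming NumPy):

```python
import numpy as np

s = np.linspace(0.0, np.pi / 2, 100001)
psi = np.pi - (np.pi + 2 * s) * np.cos(s) + np.pi * np.sin(s)
lower = (np.pi - 2) * s + np.pi * s ** 2 / 2
print((psi - lower).min())   # nonnegative; the two sides agree only at s = 0
```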
So to prove [\[sabato\]](#sabato){reference-type="eqref" reference="sabato"}, it is enough to prove the inequality $$s^2 (1+\sin s)^2-(\pi +2 s) \sin s ((-2+\pi ) s+\frac{\pi s^2}{2})\le 0.$$ Since $$t-\frac{t^3}{6}\le \sin t\le t,$$ it remains to prove the inequality $$\omega(s):=\left(1+2 \pi -\pi ^2\right) +\left(6-2 \pi -\frac{\pi ^2}{2}\right) s+\frac{1}{6} \left(6-8 \pi +\pi ^2\right) s^2$$ $$+\frac{1}{12} \left(-8+4 \pi +\pi ^2\right) s^3+\frac{\pi s^4}{6}\le 0.$$ We have that $$\omega'''(s)=\frac{1}{2} \left(-8+4 \pi +\pi ^2\right)+4 \pi s>0.$$ So $\omega'(s)$ is convex. In particular, $\omega'(s)$ can have at most two zeros. Since $\omega'(0)<0$ and $\omega'(\pi/2)>0$. There is only one point $s_\circ \in(0,\pi/2)$ so that $\omega'(s_\circ)=0$ and $s_\circ$ is a local minimum of $\omega$. Since $\omega(0)<0$ and $\omega(\pi/2)<0$, it follows that $\omega(s)<0$ for $s\in[0,\pi/2]$. This finishes [\[goal\]](#goal){reference-type="eqref" reference="goal"} and this case is finished. ## **Case $\gamma< 0$** In this case $G'(X)\le 0$ and $X'(a)\geqslant 0$. So $$G(X(\alpha))\le G(X(0))=G\left(\frac{1-\cos \frac{\pi}{p}}{\cosh \gamma-\cos \frac{\pi}{p} }\right)\le G(e^\gamma),$$ because $$\frac{1-\cos \frac{\pi}{p}}{{\cosh \gamma}-\cos \frac{\pi}{p}}\geqslant e^\gamma,$$ for $\gamma\le 0$. In this case $\gamma<0$, but we use again [\[expression\]](#expression){reference-type="eqref" reference="expression"}, and the inequality is proved. ◻ 1 [M. Essén,]{.smallcaps} *A superharmonic proof of the M. Riesz conjugate function theorem.* Ark. Mat. 22 (1984), no. 2, 241--249. [B. Hollenbeck, N. J. Kalton, I. E. Verbitsky,]{.smallcaps} *Best constants for some operators associated with the Fourier and Hilbert transforms.* Studia Math. 157 (2003), no. 3, 237--278. [B. Hollenbeck, I. E. Verbitsky,]{.smallcaps} *Best constants for the Riesz projection.* J. Funct. Anal. 175 (2000), no. 2, 370--392. [B. Hollenbeck, I. E. Verbitsky,]{.smallcaps} *Best constant inequalities involving the analytic and co-analytic projection.* Operator Theory: Advances and Applications 202, 285-295 (2010). [L. Hörmander,]{.smallcaps} *An introduction to complex analysis in several variables.* D. Van Nostrand Co., Inc., Princeton, N.J.-Toronto, Ont.-London 1966 x+208 pp. [L. Grafakos,]{.smallcaps} *Best bounds for the Hilbert transform on $L^p(\mathbf{R})$.* Math. Res. Lett. 4 (1997), no. 4, 469--471. [D. Kalaj,]{.smallcaps} *On Riesz type inequalities for harmonic mappings on the unit disk,* Trans. Am. Math. Soc. 372, No. 6, 4031--4051 (2019). [P. Melentijević,]{.smallcaps} *Hollenbeck-Verbitsky conjecture on best constant inequalities for analytic and co-analytic projections.* Math. Ann. (2023). https://doi.org/10.1007/s00208-023-02639-1 [P. Melentijević, M. Marković,]{.smallcaps} *Best Constants in Inequalities Involving Analytic and Co-Analytic Projections and Riesz's Theorem in Various Function Spaces.* Potential Anal (2022). https://doi.org/10.1007/s11118-022-10021-0 [S. K. Pichorides, S.]{.smallcaps} *On the best values of the constants in the theorems of M. Riesz, Zygmund and Kolmogorov.* Collection of articles honoring the completion by Antoni Zygmund of 50 years of scientific activity, II. Studia Math. 44 (1972), 165--179. [T. Tao,]{.smallcaps} *https://mathoverflow.net/questions/454007* [I. E. Verbitsky,]{.smallcaps} *Estimate of the norm of a function in a Hardy space in terms of the norms of its real and imaginary parts.* Linear operators. Mat. Issled. No. 
54 (1980), 16--20, 16--165, \"Fifteen Papers on Functional Analysis,\" Amer. Math. Soc. Transl. Ser. 124 (1984), 11--15 (English transl.) [^1]: 2010 *Mathematics Subject Classification*: Primary 47B35
arxiv_math
{ "id": "2310.00464", "title": "On M. Riesz conjugate function theorem for harmonic functions", "authors": "David Kalaj", "categories": "math.CV", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | Let $E/F$ be a CM extension of number fields, and let $H < G$ be a unitary Gan--Gross--Prasad pair defined with respect to $E/F$ that is compact at infinity. We consider a family $\mathcal F$ of automorphic representations of $G \times H$ that is varying at a finite place $w$ that splits in $E/F$. We assume that the representations in $\mathcal F$ satisfy certain conditions, including being tempered and distinguished by the GGP period. For a representation $\pi \times \pi_H \in \mathcal F$ with base change $\Pi \times \Pi_H$ to ${\rm GL}_{n+1}(E) \times {\rm GL}_n(E)$, we prove a subconvex bound $$L(1/2, \Pi \times \Pi_H^\vee) \ll C(\Pi \times \Pi_H^\vee)^{1/4 - \delta}$$ for any $\delta < \tfrac{1}{4n(n+1)(2n^2 + 3n + 3)}$. Our proof uses the unitary Ichino--Ikeda period formula to relate the central $L$-value to an automorphic period, before bounding that period using the amplification method of Iwaniec--Sarnak. address: | Department of Mathematics\ University of Wisconsin -- Madison\ 480 Lincoln Drive\ Madison\ WI 53706, USA author: - Simon Marshall title: Subconvexity for $L$-functions on ${\rm U}(n) \times {\rm U}(n+1)$ in the depth aspect --- # Introduction The purpose of this paper is to prove a subconvex bound for certain families of $L$-functions on ${\rm U}(n) \times {\rm U}(n+1)$, by first using the unitary Ichino--Ikeda period formula to relate the central $L$-value to an automorphic period, and then using the amplification method of Iwaniec--Sarnak to bound that period. We begin by stating the period bound we shall prove, as it holds for families of forms that are, in principle, more general than those for which our subconvex bound holds. ## The period bound We shall consider the following unitary GGP periods. Let $E/F$ be a CM extension of number fields. Let $V$ be a non-degenerate Hermitian space of dimension $n+1$ with respect to $E/F$ that is positive definite at all infinite places, and let $V_H \subset V$ be an $n$-dimensional subspace, which is automatically non-degenerate. We define the unitary groups $G = {\rm U}(V)$ and $H = {\rm U}(V_H)$. We shall consider periods of automorphic forms on $G \times H$ over the diagonally embedded copy of $H$. If $f$ and $f_H$ are continuous functions on the adelic quotients $[G]$ and $[H]$ we introduce the notation $$\mathcal P(f, f_H) = \int_{ [H] } f(h) f_H(h) dh$$ for this period. We next define the family of automorphic representations $\pi \times \pi_H$ on $G \times H$ we shall work with. Let $S$ be the finite set of places chosen in Section [3.1](#sec:notation1){reference-type="ref" reference="sec:notation1"}, which contains the infinite places, and all places at which $G$ and $H$ are ramified. We also let $K_v$ and $K_{H,v}$ be the compact open subgroups of $G_v$ and $H_v$ chosen there, which are assumed to be hyperspecial for $v \notin S$. Let $w \notin S$ be a place at which $E/F$ splits. This is the place at which our representations will vary, and will be distinguished throughout the paper. At $w$, we will assume that $\pi_w$ and $\pi_{H,w}$ are principal series representations induced from a *generic pair* of characters. To define this, let $\mathcal O_w$ be the completion of the ring of integers of $F$ at $w$, $\mathfrak p$ the maximal ideal of $\mathcal O_w$, and $q$ the order of the residue field. **Definition 1**. Let $l \geqslant 1$ be given. Let $\chi$ and $\chi_H$ be complex characters of $(F_w^\times)^{n+1}$ and $(F_w^\times)^n$ respectively, which are not assumed to be unitary. 1. 
We say that $\chi = (\chi_1, \ldots, \chi_{n+1})$ is generic of conductor $\mathfrak p^l$ if it is trivial on $(1 + \mathfrak p^l)^{n+1}$, and the components $\chi_i$ have the property that $\chi_i \chi_j^{-1}$ has conductor $\mathfrak p^l$ for all $i \neq j$. We make the analogous definition for $\chi_H$. 2. We say that $(\chi, \chi_H)$ is a generic pair of conductor $\mathfrak p^l$ if both $\chi$ and $\chi_H$ are generic of conductor $\mathfrak p^l$, and if $\chi_i \chi_{H,j}^{-1}$ has conductor $\mathfrak p^l$ for all $i$ and $j$. Given a generic pair $(\chi, \chi_H)$ of conductor $\mathfrak p^l$, we may form the representation $\text{Ind}_{B_w}^{G_w} \chi$ of $G_w$ unitarily induced from a Borel subgroup $B_w$, which we denote by $I(\chi)$, and define the representation $I(\chi_H)$ of $H_w$ similarly. Note that $I(\chi)$ and $I(\chi_H)$ are irreducible, by a theorem of Bernstein--Zelevinsky [@BZ Theorem 4.2]. These will be our choices for $\pi_w$ and $\pi_{H,w}$. In order to choose our test vectors at $w$, we shall also need the notion of a compatible pair of microlocal lift vectors $\phi \in I(\chi)$ and $\phi_H \in I(\chi_H)$. We define this formally in Definition [Definition 15](#compatdef){reference-type="ref" reference="compatdef"}, but summarize that definition here. If $l$ is even, then in Definition [Definition 8](#chitypedef){reference-type="ref" reference="chitypedef"} we define the notion of a $\chi$-type, which is a pair $(J, \widetilde{\lambda})$ where $J$ is a compact open subgroup of $K_w$ and $\widetilde{\lambda}$ is a character of $J$. By a theorem of Roche [@Ro Thm. 7.7], they are in fact $\mathfrak{s}$-types, in the sense of Bushnell--Kutzko, for the supercuspidal data $\mathfrak{s}$ naturally associated with $\chi$, but we will not use this fact. We then say that $\phi \in I(\chi)$ is a *microlocal lift vector* associated to $(J, \widetilde{\lambda})$ if $\phi$ transforms by $\widetilde{\lambda}$ under $J$. (We will show that a nonzero microlocal lift vector is associated to at most one $\chi$-type.) We define the notion of a $\chi_H$-type $(J_H, \widetilde{\lambda}_H)$, and microlocal lift vectors $\phi_H \in I(\chi_H)$, for $H_w$ in a similar way. We say that $(J, \widetilde{\lambda})$ and $(J_H, \widetilde{\lambda}_H)$ are compatible $(\chi, \chi_H)$-types if $\widetilde{\lambda}$ and $\widetilde{\lambda}_H$ agree on $J \cap J_H$ (which will be equal to the principal congruence subgroup $K_{H,w}(\mathfrak p^l)$ by Lemma [Lemma 19](#Atrans){reference-type="ref" reference="Atrans"}), and that microlocal lift vectors $\phi$ and $\phi_H$ are compatible if their types are. We may now define the family $\mathcal F_\text{P}$ of automorphic representations for which our period bound holds. **Definition 2**. Let $\mathcal F_\text{P}$ be the set of automorphic representations $\pi \times \pi_H$ of $G \times H$ satisfying the following conditions: (C1) $\pi_v$ and $\pi_{H,v}$ have nonzero fixed vectors under $K_v$ and $K_{H,v}$ for finite places $v \neq w$. In particular, $\pi_v$ and $\pi_{H,v}$ are unramified outside of $S \cup \{ w \}$. (C2) $\pi_w$ and $\pi_{H,w}$ are principal series representations induced from a generic pair of characters of square conductor $\mathfrak p^{2l}$ for some $l \geqslant 1$. Our main automorphic period bound is the following. **Theorem 3**. *Let $\pi \times \pi_H \in \mathcal F_\textup{P}$. We denote the representation $\pi_\infty$ of $G_\infty$ by $\mu$, and let $d_\mu$ denote its dimension. 
Let $\phi = \otimes_v \phi_v \in \pi$ and $\phi_H = \otimes_v \phi_{H,v} \in \pi_H$ be $L^2$-normalized vectors satisfying the following:* 1. *$\phi_v$ and $\phi_{H,v}$ are invariant under $K_v$ and $K_{H,v}$ for finite places $v \neq w$.* 2. *$\phi_w$ and $\phi_{H,w}$ are compatible microlocal lift vectors in the sense of Definition [Definition 15](#compatdef){reference-type="ref" reference="compatdef"}.* *Then $$\mathcal P_H(\phi, \overline{\phi_H} ) \ll d_\mu q^{(n/2 - \delta)l}$$ for any $\delta < \tfrac{1}{4n^2 + 6n + 6}$.* Note that the dependence of the constant on $\mu$ in Theorem [Theorem 3](#mainperiod){reference-type="ref" reference="mainperiod"}, and Proposition [Proposition 4](#trivialbound){reference-type="ref" reference="trivialbound"} below, is not sharp. *Remark 1*. Theorem [Theorem 3](#mainperiod){reference-type="ref" reference="mainperiod"} makes use of the fact that $\pi$ and $\pi_H$ are tempered at all places of $F$ that split in $E$, which follows from work of Caraiani [@Caraiani] and Labesse [@Lab] as we establish in Section [4.6](#sec:tempered){reference-type="ref" reference="sec:tempered"}. However, this is not essential -- one may obtain a version of Theorem [Theorem 3](#mainperiod){reference-type="ref" reference="mainperiod"} with a weaker exponent if one only assumes that $\pi$ and $\pi_H$ are $\theta$-tempered at almost all split places, for some $0 \leqslant \theta < 1/2$. (We refer to Section [4.3](#sec:spherical){reference-type="ref" reference="sec:spherical"} for the definition of $\theta$-temperedness.) The modifications required in this case are simple: when proving the bound of Proposition [Proposition 28](#diagonal bound){reference-type="ref" reference="diagonal bound"} for the diagonal contribution on the geometric side of the relative trace formula, one applies Lemma [Lemma 24](#Hecke restriction){reference-type="ref" reference="Hecke restriction"} with this $\theta$, and obtains a bound of $N^{1 + 2\theta + \epsilon} d_\mu^2 q^{nl}$ instead of $N^{1 + \epsilon} d_\mu^2 q^{nl}$. The rest of the proof proceeds as before, and one obtains an exponent of $\delta < \tfrac{1-2\theta}{ 2(2n^2 +3n +3 - 4\theta) }$ in the final period bound. We point this out because $\theta$-temperedness, for some $0 \leqslant \theta < 1/2$, is known for a much larger collection of automorphic forms than temperedness is. For instance, $\theta$-temperedness is known for cusp forms on ${\rm GL}_N$, by the bounds of [@LRS] towards the generalized Ramanujan conjecture, and can be deduced for certain forms on classical groups by endoscopy or base change arguments. This is in contrast with Theorem [Theorem 6](#mainsubconvex){reference-type="ref" reference="mainsubconvex"}, in which temperedness is needed to apply the Ichino--Ikeda formula of [@BPCZ]. The bound $\mathcal P_H(\phi, \overline{\phi_H} ) \ll q^{nl/2}$ may be thought of as the trivial bound for this period, and it corresponds to the convex bound for $L(1/2, \Pi \times \Pi_H^\vee)$ in Theorem [Theorem 6](#mainsubconvex){reference-type="ref" reference="mainsubconvex"} below. It holds in greater generality than Theorem [Theorem 3](#mainperiod){reference-type="ref" reference="mainperiod"}, and only uses 'local' information about $\phi$ and $\phi_H$, by which we mean information about how they transform under a compact subgroup. As this may be of independent interest, we state it here as a separate result. **Proposition 4**. *Let $(\chi, \chi_H)$ be a generic pair of characters of conductor $\mathfrak p^{2l}$. 
Let $\phi \in L^2( [G])$ and $\phi_H \in L^2([H])$ be $L^2$-normalized functions satisfying the following:* 1. *$\phi$ is $\mu$-isotypic under the action of $G_\infty$, for some irreducible representation $\mu$ of $G_\infty$.* 2. *$\phi$ and $\phi_{H}$ are invariant under $K_v$ and $K_{H,v}$ for finite places $v \neq w$.* 3. *There are compatible $(\chi, \chi_H)$-types $(J, \widetilde{\lambda})$ and $(J_H, \widetilde{\lambda}_H)$, in the sense of Definition [Definition 15](#compatdef){reference-type="ref" reference="compatdef"}, such that $\phi$ transforms under $J$ according to $\widetilde{\lambda}$, and likewise for $\phi_H$.* *Then $\mathcal P_H(\phi, \overline{\phi_H} ) \ll d_\mu q^{nl/2}$, where $d_\mu$ denotes the dimension of $\mu$.* ## The subconvex bound Our subconvex bound will hold for the following family of representations. **Definition 5**. Let $\mathcal F_\text{SC}$ be the set of automorphic representations $\pi \times \pi_H$ of $G \times H$ satisfying (C1) and (C2), as well as the following extra conditions: (C3) $\pi_\infty \in \Theta_\infty$, and $\pi_{H,\infty} \in \Theta_{H,\infty}$, where $\Theta_\infty$ and $\Theta_{H,\infty}$ are finite sets of irreducible representations of $G_\infty$ and $H_\infty$. (C4) $\pi$ and $\pi_H$ are tempered, and admit weak base change lifts $\Pi$ and $\Pi_H$ to ${\rm GL}_{n+1}$ and ${\rm GL}_n$ that satisfy the conditions for being a Hermitian Arthur parameter given in [@BPCZ Section 1.1.3]. (C5) $(\pi_v, \pi_{H,v})$ is locally distinguished for all $v$, in the sense that there is a nonzero $H_v$-intertwining map $\pi_v \to \pi_{H,v}$. *Remark 2*. We note that condition (C4) is implied by the work of Mok [@Mo] and Kaletha--Minguez--Shin--White [@KMSW], although this is currently conditional. Moreover, the unconditional results of Caraiani and Labesse do not suffice here; while [@Caraiani] implies that $\Pi$ and $\Pi_H$ are tempered everywhere, [@Lab] does not establish local-global compatibility at all places, so we cannot deduce that $\pi$ and $\pi_H$ are tempered everywhere. Likewise, while [@Lab] proves the existence of the weak base change, it does not give the extra conditions of [@BPCZ Section 1.1.3], which Beuzart-Plessis informs us are needed for their proof. Our main subconvex bound is the following. **Theorem 6**. *Let $\pi \times \pi_H \in \mathcal F_\textup{SC}$, and let $\Pi \times \Pi_H$ be its base change to ${\rm GL}_{n+1}(\mathbb A_E) \times {\rm GL}_n(\mathbb A_E)$. Then the $L$-function $L(1/2, \Pi \times \Pi_H^\vee)$ satisfies the bound $$L(1/2, \Pi \times \Pi_H^\vee) \ll q^{l n (n+1) (1 - 4 \delta )} \ll C(\Pi \times \Pi_H^\vee)^{1/4 - \delta}$$ for any $\delta < \tfrac{1}{4n(n+1)(2n^2 + 3n + 3)}$.* *Remark 3*. The contribution to the conductor of $\Pi \times \Pi_H$ from the places above $w$ is equal to $q^{4l n (n+1)}$. Because $\pi \times \pi_H$ has bounded ramification away from $w$, the same should be true of $\Pi \times \Pi_H$, which would imply that the conductor away from $w$ is bounded and hence that $C(\Pi \times \Pi_H^\vee) \sim q^{4l n (n+1)}$. However, to the best of our knowledge, actually proving this ramification bound requires the (conditional) results of [@KMSW]. Fortunately, Theorem [Theorem 6](#mainsubconvex){reference-type="ref" reference="mainsubconvex"} only requires the weaker statement $C(\Pi \times \Pi_H^\vee) \gg q^{4l n (n+1)}$. *Remark 4*. 
In deducing Theorem [Theorem 6](#mainsubconvex){reference-type="ref" reference="mainsubconvex"} from Theorem [Theorem 3](#mainperiod){reference-type="ref" reference="mainperiod"}, we have used the result [@Ne1 Lemma 5.2] of Nelson to provide a lower bound for the local periods appearing in the Ichino--Ikeda formula at ramified finite places. This result lets us handle the most general set of local factors of $\pi$ and $\pi_H$ at these places, and could be avoided by assuming that e.g. $E/F$ and $\pi \times \pi_H$ are unramified at finite places away from $w$. ## Relation to other results The literature on the subconvexity problem is extensive, and we refer to [@IS; @Mu4] for a survey of known results. On ${\rm GL}_3$ and ${\rm GL}_3 \times {\rm GL}_2$, we mention the bounds of Blomer--Buttcane [@BB1], Li [@Li], and Munshi [@Mu1; @Mu2; @Mu3]. In general rank, we mention the results of Hu--Nelson [@HN], Nelson [@Ne1; @Ne2], and Liyang Yang [@Liyang], which use a relative trace formula approach related to that of this paper. We proved Theorem [Theorem 3](#mainperiod){reference-type="ref" reference="mainperiod"} in 2018, and gave a detailed lecture at the IAS [@lecture] on the methods used. This paper, which has been significantly delayed, implements those methods. ## Acknowledgements We would like to thank Raphaël Beuzart-Plessis, Farrell Brumley, Paul Nelson, Peter Sarnak, and Sug Woo Shin for helpful conversations. We would also like to thank the IAS for providing an excellent working environment during the 2017--18 special year on Locally Symmetric Spaces: Analytical and Topological Aspects. # Microlocal lift vectors In this section, we prove the local representation-theoretic results we shall need at the place $w$. If $\chi$ is a generic character, we shall define the notion of a microlocal lift vector in the representation $I(\chi)$ and establish their basic properties such as existence, uniqueness, and the support of their diagonal matrix coefficient. Then, for a generic pair $(\chi, \chi_H)$, we define the notion of compatible microlocal lift vectors in $I(\chi)$ and $I(\chi_H)$, prove that such compatible vectors exist, and establish some transversality results for them. ## Notation As we shall work locally at $w$ in this section, we drop the place from the notation. We let $F$ be a $p$-adic field, with integers $\mathcal O$ and maximal ideal $\mathfrak p$. We let $q$ be the order of the residue field, and $\varpi$ a uniformizer. In the first part of this section, we shall work on the group ${\rm GL}_n(F)$. We let $B$, $T$, and $K$ be the standard Borel, diagonal subgroup, and maximal compact in ${\rm GL}_n(F)$. For $l \geqslant 1$ we let $K(l)$ be the standard principal congruence subgroup of $K$. Let $\mathfrak g\simeq M_n(F)$ be the Lie algebra of ${\rm GL}_n(F)$. If $g \in G$, we denote the conjugation action of $g$ on elements $j \in G$ and subsets $J \subset G$ by $j^g := g j g^{-1}$ and $J^g := g J g^{-1}$. If $J$ is a subgroup of $G$ and $\lambda$ is a character of $J$, then $\lambda^g$ is the character of $J^g$ given by $\lambda^g(j) = \lambda( g^{-1} j g)$. ## A review of characters of compact subgroups {#sec:characters} We shall use a correspondence between characters of certain compact open subgroups of ${\rm GL}_n(F)$, and cosets in $\mathfrak g$, which we recall from e.g. [@Ho Section 2]. Let $\mathcal L= M_n(\mathcal O) \subset \mathfrak g$. 
We have $K(l) = 1 + \mathfrak p^l \mathcal L$, and this induces a group isomorphism $K(l) / K(2l) \simeq \mathfrak p^l \mathcal L/ \mathfrak p^{2l} \mathcal L$. Let $\theta$ be an additive character of $F$ of conductor $\mathcal O$, and let $\langle \, , \rangle$ be the trace pairing on $\mathfrak g$. We define an isomorphism $\tau : \mathfrak g\to \widehat{\mathfrak g}$, where $\widehat{\mathfrak g}$ is the Pontryagin dual, by $\tau[X](Y) = \theta( \langle X, Y \rangle)$. The dual of $\mathfrak p^l \mathcal L$ under $\tau$ is $\mathfrak p^{-l} \mathcal L$, and so $\tau$ induces an isomorphism $$\mathfrak p^{-2l} \mathcal L/ \mathfrak p^{-l} \mathcal L\xrightarrow{\sim} \widehat{\mathfrak p^l \mathcal L/ \mathfrak p^{2l} \mathcal L} \simeq \widehat{K(l) / K(2l)}.$$ It will be convenient for us to rescale this to an isomorphism $$\log^*: \mathcal L/ \mathfrak p^l \mathcal L\xrightarrow{\sim} \widehat{K(l) / K(2l)}$$ defined by $\log^*[X](1+Y) = \theta( \varpi^{-2l} \langle X, Y \rangle)$, with inverse $$\exp^*: \widehat{K(l) / K(2l)} \xrightarrow{\sim} \mathcal L/ \mathfrak p^l \mathcal L\simeq M_n(\mathcal O/ \mathfrak p^l).$$ (The notation is intended to evoke the pullback of characters via the exponential map.) Specializing to the case $n = 1$, we obtain an isomorphism $\exp^*_1$ from the dual group of $(1 + \mathfrak p^l)^\times / (1 + \mathfrak p^{2l})^\times$ to $\mathcal O/ \mathfrak p^l$. We may use this to reformulate the definition of a generic character of square conductor. If $\chi = (\chi_1, \ldots, \chi_n)$ is a character of $(F^\times)^n$ with all $\chi_i$ trivial on $1 + \mathfrak p^{2l}$, we may define $$D(\chi) = \text{diag}( \exp^*_1(\chi_1), \ldots, \exp^*_1(\chi_n)) \in M_n(\mathcal O/ \mathfrak p^l),$$ where we have implicitly restricted each $\chi_i$ to $1 + \mathfrak p^l$. Then $\chi$ is generic of conductor $\mathfrak p^{2l}$ if and only if $D(\chi)$ is regular when reduced modulo $\mathfrak p$. Likewise, if $\chi$ and $\chi_H$ are characters of $(F^\times)^{n+1}$ and $(F^\times)^n$ all of whose coordinates are trivial on $1 + \mathfrak p^{2l}$, then $(\chi, \chi_H)$ is a generic pair of conductor $\mathfrak p^{2l}$ if and only if the mod $\mathfrak p$ reductions of $D(\chi)$ and $D(\chi_H)$ are each regular and have no eigenvalues in common. ## Microlocal lift vectors Let $\chi$ be a generic character of $(F^\times)^n$ of square conductor $\mathfrak p^{2l}$, and let $I(\chi) = \text{Ind}_{B}^{{\rm GL}_n} \chi$ be the associated principal series representation of ${\rm GL}_n(F)$. We shall define microlocal lift vectors in $I(\chi)$. These vectors will be associated with certain pairs $(J, \widetilde{\lambda})$, where $J$ is a compact open subgroup of $K$ and $\widetilde{\lambda}$ is a character of $J$, and will be characterized by the property that they transform by $\widetilde{\lambda}$ under $J$. The pairs $(J, \widetilde{\lambda})$ are called $\chi$-types, and are defined in Definition [Definition 8](#chitypedef){reference-type="ref" reference="chitypedef"} below. They are in fact $\mathfrak{s}$-types for the supercuspidal data $\mathfrak{s} = (T, \chi)$ in the sense of Bushnell--Kutzko, by a theorem of Roche [@Ro Thm. 7.7], but we will not use this fact. To define $\chi$-types, we begin by constructing the so-called standard $\chi$-type. Let $T_c$ be the maximal compact subgroup of $T$, and define $\widetilde{T}(l) = T_c K(l)$ to be the depth-$l$ fattening of $T_c$. **Lemma 7**. 
*There exists a unique character $\widetilde{\chi}$ of $\widetilde{T}(l)$ that agrees with $\chi$ on $T_c$. Moreover, $\widetilde{\chi}$ is trivial on $K(2l)$.* *Proof.* Any element of $\widetilde{T}(l)$ has the form $X + Y$ where $X$ is diagonal with entries in $\mathcal O^\times$, and $Y \in \mathfrak p^l M_n(\mathcal O)$ has trivial diagonal entries. With this notation, we define $\widetilde{\chi}(X + Y) = \chi(X)$. To show $\widetilde{\chi}$ is a character, we have $$\widetilde{\chi}( (X_1 + Y_1)(X_2 + Y_2)) = \widetilde{\chi}( X_1X_2 + X_1 Y_2 + Y_1 X_2 + Y_1 Y_2),$$ and because $X_1 Y_2 + X_2 Y_1$ has trivial diagonal entries and $Y_1 Y_2 \in \mathfrak p^{2l}M_n(\mathcal O)$, this is equal to $\chi(X_1 X_2) = \widetilde{\chi}(X_1 + Y_1) \widetilde{\chi}(X_2 + Y_2)$ as required. It is clear that $\widetilde{\chi}$ is trivial on $K(2l)$. To show uniqueness, suppose there are two such characters $\widetilde{\chi}_1$ and $\widetilde{\chi}_2$. Their quotient $\widetilde{\chi}_1 / \widetilde{\chi}_2$ is trivial on $T_c$, and on the commutator subgroup $[\widetilde{T}(l), \widetilde{T}(l)]$, so it is enough to show that these two groups generate $\widetilde{T}(l)$. If $E_{ij}$ is the $(i,j)$th elementary matrix, then $[\widetilde{T}(l), \widetilde{T}(l)]$ contains the subgroup $I + \mathfrak p^l E_{ij}$, as this is the set of commutators of $I + E_{ij}$ with $T_c$. It may be seen that $T_c$ and the subgroups $I + \mathfrak p^l E_{ij}$ generate $\widetilde{T}(l)$, which completes the proof. ◻ We next define $\chi$-types to be conjugates of $( \widetilde{T}(l), \widetilde{\chi})$ under $K$. **Definition 8**. Let $(J, \widetilde{\lambda})$ be a pair consisting of an open subgroup $J < K$ and a character of $J$. We say $(J, \widetilde{\lambda})$ is a $\chi$-type if there is $k \in K$ such that $J = \widetilde{T}(l)^k$ and $\widetilde{\lambda} = \widetilde{\chi}^k$. We call $( \widetilde{T}(l), \widetilde{\chi})$ the standard $\chi$-type. The restriction of $\widetilde{\chi}$ to $K(l)$ satisfies $\exp^*( \widetilde{\chi}|_{K(l)} ) = D(\chi)$, and hence for any $\chi$-type $(J, \widetilde{\lambda}) = ( \widetilde{T}(l)^k, \widetilde{\chi}^k )$ we have $\exp^*( \widetilde{\lambda}|_{K(l)} ) = D(\chi)^k$. We may now state the definition of microlocal lift vectors. **Definition 9** (Microlocal lifts). We say that $v \in I(\chi)$ is a microlocal lift vector associated to a $\chi$-type $(J, \widetilde{\lambda})$ if $v$ transforms by $\widetilde{\lambda}$ under $J$. We simply say that $v$ is a microlocal lift vector if it is a microlocal lift vector associated to some choice of $\chi$-type. The first result we shall prove about such vectors is that there is a unique microlocal vector up to scaling associated with each $\chi$-type. Uniqueness requires our assumption that $\chi$ is generic. **Lemma 10**. *For each $\chi$-type $(J, \widetilde{\lambda})$, the space of microlocal lift vectors in $I(\chi)$ associated with $(J, \widetilde{\lambda})$ is one-dimensional.* *Proof.* We may assume that $(J, \widetilde{\lambda})$ is standard. We work in the compact model of $I(\chi)$, as the space of functions on $K$ satisfying $$\label{Btransf} f(bk) = \chi(b) f(k) \quad \text{for} \quad b \in B_K := B \cap K.$$ We wish to show that there is a unique such function up to scalar that satisfies $$\label{atransf} f(k a) = \widetilde{\chi}(a) f(k) \quad \text{for} \quad a \in \widetilde{T}(l).$$ For existence, we may define a function $f$ supported on $B_K \widetilde{T}(l)$ by $f(ba) = \chi(b) \widetilde{\chi}(a)$. 
This is well defined, because it is clear that the characters $\chi$ of $B_K$ and $\widetilde{\chi}$ of $\widetilde{T}(l)$ agree on $B_K \cap \widetilde{T}(l) = B \cap \widetilde{T}(l)$. For uniqueness, we wish to show that if $f$ satisfies ([\[Btransf\]](#Btransf){reference-type="ref" reference="Btransf"}) and ([\[atransf\]](#atransf){reference-type="ref" reference="atransf"}), and $f(k) \neq 0$, then $k \in B_K \widetilde{T}(l)$. Property ([\[atransf\]](#atransf){reference-type="ref" reference="atransf"}) implies that $f(k_1 k) = \widetilde{\chi}^k(k_1) f(k)$ for $k_1 \in K(l)$, and comparing this with property ([\[Btransf\]](#Btransf){reference-type="ref" reference="Btransf"}) and our assumption $f(k) \neq 0$ we see that $\widetilde{\chi}^k = \chi$ on $B_K \cap K(l)$. If we let $X = D(\chi)$, then we have $\exp^*( \widetilde{\chi}) = X$ and $\exp^*(\widetilde{\chi}^k) = X^k$ as above. The condition that $\widetilde{\chi}^k = \chi$ on $B_K \cap K(l)$ implies that $X^k$ is upper triangular, and equal to $X$ on the diagonal. Let $\overline{k} \in {\rm GL}_n(\mathcal O/\mathfrak p^l)$ be the mod $\mathfrak p^l$ reduction of $k$. Because $X$ is regular there is some upper triangular matrix $b \in {\rm GL}_n(\mathcal O/ \mathfrak p^l)$ such that $b X b^{-1} = k X k^{-1} \in M_n(\mathcal O/ \mathfrak p^l)$. It follows that $\overline{k}^{-1} b$ centralizes $X$, and because $X$ is regular this means $\overline{k}^{-1} b$ is diagonal. Therefore $\overline{k}$ is upper triangular, so that $k \in B_K \widetilde{T}(l)$ as required. ◻ **Lemma 11**. *A nonzero microlocal lift vector $v$ is associated with at most one $\chi$-type.* *Proof.* Suppose there are two $\chi$-types $(J_1, \widetilde{\lambda}_1) = ( \widetilde{T}(l)^{k_1}, \widetilde{\chi}^{k_1})$ and $(J_2, \widetilde{\lambda}_2) = ( \widetilde{T}(l)^{k_2}, \widetilde{\chi}^{k_2})$ associated with $v$, for $k_1, k_2 \in K$. It follows that $\widetilde{\lambda}_1$ and $\widetilde{\lambda}_2$ have the same restriction to $K(l)$, which means that $D(\chi)^{k_1} = \exp^*( \widetilde{\lambda}_1 |_{K(l)} ) = \exp^*( \widetilde{\lambda}_2 |_{K(l)} ) = D(\chi)^{k_2}$. This implies that $k_2^{-1} k_1$ commutes with $D(\chi)$, and hence that $k_2^{-1} k_1 \in \widetilde{T}(l)$ because $D(\chi)$ is regular. It follows that $(J_1, \widetilde{\lambda}_1) = (J_2, \widetilde{\lambda}_2)$ as required. ◻ We may prove the following lemma in the same way. **Lemma 12**. *The stabilizer of a $\chi$-type $(J, \widetilde{\lambda})$ is equal to $J$.* The final result we shall prove about individual microlocal lift vectors is a bound on the support of their matrix coefficients. This will be used later to compute local integrals in the Ichino--Ikeda formula. The following lemma is a sharper version of Lemma 3.3 of [@Ho]. **Lemma 13**. *Let $v \in I(\chi)$ be a microlocal lift vector associated to the $\chi$-type $( \widetilde{T}(l)^k, \widetilde{\chi}^k)$, and let $v^\vee \in I(\chi^{-1}) \simeq I(\chi)^\vee$ be a microlocal lift vector associated to the $\chi^{-1}$-type $( \widetilde{T}(l)^k, (\widetilde{\chi}^k)^{-1})$. The matrix coefficient $\rho(g) = \langle g v, v^\vee \rangle$ is supported on $K(l) T^k K(l)$.* Note that the set $K(l) T^k K(l)$ is independent of the choice of $k$ by Lemma [Lemma 12](#typestabilizer){reference-type="ref" reference="typestabilizer"}. *Proof.* It suffices to consider the standard $\chi$-type $( \widetilde{T}(l), \widetilde{\chi})$. 
The transformation of $v$ and $v^\vee$ under $K(l)$ implies that for $k \in K(l)$ we have $\rho(kg) = \widetilde{\chi}(k) \rho(g)$ and $\rho(gk) = \widetilde{\chi}(k) \rho(g)$. If we consider the character $\widetilde{\chi}^g$ of $K(l)^g$, we may write the second transformation property as $$\rho(kg) = \widetilde{\chi}^g(k) \rho(g) \quad \text{for} \quad k \in K(l)^g.$$ If $\rho(g) \neq 0$, comparing this with $\rho(kg) = \widetilde{\chi}(k) \rho(g)$ for $k \in K(l)$ implies that $\widetilde{\chi}$ and $\widetilde{\chi}^g$ must agree on $K(l) \cap K(l)^g$. We will show that this can only happen when $g \in K(l) T K(l)$. Let $X \in \mathcal L$ be any lift of $\exp^*(\widetilde{\chi}) = D(\chi)$ to characteristic zero. We now suppose that $\widetilde{\chi} = \widetilde{\chi}^g$ on $K(l) \cap K(l)^g$. We first show that this implies $X - X^g \in \mathfrak p^l \mathcal L+ \mathfrak p^l \mathcal L^g$. To show this, recall that $\widetilde{\chi}$ is given on $K(l) = 1 + \mathfrak p^l \mathcal L$ by $\widetilde{\chi}(1 + A) = \theta( \varpi^{-2l} \langle X, A \rangle)$, and likewise $\widetilde{\chi}^g$ is given on $K(l)^g = 1 + \mathfrak p^l \mathcal L^g$ by $\widetilde{\chi}^g(1 + A) = \theta( \varpi^{-2l} \langle X^g, A \rangle)$. We have $K(l) \cap K(l)^g = 1 + \mathfrak p^l( \mathcal L\cap \mathcal L^g)$, and so for $\widetilde{\chi}$ and $\widetilde{\chi}^g$ to agree we must have $\langle A, X - X^g \rangle \in \mathfrak p^{2l}$ for $A \in \mathfrak p^l( \mathcal L\cap \mathcal L^g)$, or equivalently $\langle \mathcal L\cap \mathcal L^g, X - X^g \rangle \in \mathfrak p^l$. As the dual lattice to $\mathcal L\cap \mathcal L^g$ is $\mathcal L+ \mathcal L^g$, this implies $X - X^g \in \mathfrak p^l\mathcal L+ \mathfrak p^l\mathcal L^g$ as required. We may reformulate $X - X^g \in \mathfrak p^l\mathcal L+ \mathfrak p^l\mathcal L^g$ as saying that there are $x_1, x_2 \in \mathfrak p^l \mathcal L$ such that $X + x_1 = (X + x_2)^g$. By iterating Lemma [Lemma 14](#diagadjust){reference-type="ref" reference="diagadjust"}, we see that $X + x_1$ is equal to $Y_1^{k_1}$ for some $k_1 \in K(l)$ and a regular diagonal matrix $Y_1 \in \mathcal L$ that satisfies $Y_1 \equiv X \; (\mathfrak p^l)$. Likewise, we may write $X + x_2 = Y_2^{k_2}$. Combining these, we have $Y_1^{k_1} = Y_2^{g k_2}$, which implies that $Y_1 = Y_2$ and $k_1^{-1} g k_2 \in T$. This gives $g \in K(l) T K(l)$ as required. ◻ **Lemma 14**. *Let $Y \in \mathcal L$ be diagonal and regular, and let $y \in \mathfrak p^l \mathcal L$ for some $l \geqslant 1$. There is $k \in K(l)$ such that $(Y + y)^k = Z + z$, where $Z$ is diagonal and regular, $Y \equiv Z \; (\mathfrak p^l)$, and $z \in \mathfrak p^{l+1} \mathcal L$.* *Proof.* Let $A \in \mathcal L$, so that $1 + \varpi^l A \in K(l)$. Because $K(l) / K(2l)$ is abelian, we have $(1 + \varpi^l A)^{-1} \equiv 1 - \varpi^l A \; ( \mathfrak p^{l+1} )$. A calculation gives $$(1 + \varpi^l A)(Y + y)(1 + \varpi^l A)^{-1} \equiv Y + y + \varpi^l[A, Y] \; (\mathfrak p^{l+1}).$$ Because $Y$ is regular, $[A, Y]$ can be made equal to any matrix in $\mathcal L$ with zeros on the diagonal if $A$ is chosen correctly. We may therefore choose $A$ so that the off-diagonal entries of $Y + y + \varpi^l[A, Y]$ all lie in $\mathfrak p^{l+1}$, which means that it may be written in the form $Z + z$ as in the statement of the lemma. ◻ ## Compatible pairs {#sec:compatpairs} We now define the notion of compatible pairs of $\chi$-types, and microlocal lift vectors, for a GGP pair of general linear groups. 
We let $G = {\rm GL}_{n+1}(F)$ and $H = {\rm GL}_n(F)$, and consider $H$ to be embedded in the upper left corner of $G$. We continue to use the notation of the previous sections for $G$, and indicate the corresponding objects for $H$ by adding a subscript. We fix a generic pair $(\chi, \chi_H)$ of conductor $\mathfrak p^{2l}$. **Definition 15**. We say that a $\chi$-type $(J, \widetilde{\lambda})$ and a $\chi_H$-type $(J_H, \widetilde{\lambda}_H)$ are compatible if $\widetilde{\lambda}$ and $\widetilde{\lambda}_H$ agree on $K_H(l)$. We say that nonzero microlocal lift vectors $v \in I(\chi)$ and $v_H \in I(\chi_H)$ are compatible if their types are, or equivalently, if they transform by the same character under $K_H(l)$. For simplicity, we will refer to $(J, \widetilde{\lambda})$ and $(J_H, \widetilde{\lambda}_H)$ as compatible $(\chi, \chi_H)$-types. If $\pi : \mathcal L/ \mathfrak p^l \mathcal L\to \mathcal L_H / \mathfrak p^l \mathcal L_H$ is the natural projection map, compatibility of $(J, \widetilde{\lambda})$ and $(J_H, \widetilde{\lambda}_H)$ is equivalent to $\pi( \exp^*(\widetilde{\lambda})) = \exp^*_H( \widetilde{\lambda}_H)$. It is clear that $K_H$ acts by conjugation on the set of compatible $(\chi, \chi_H)$-types. **Proposition 16**. *If $(J_H, \widetilde{\lambda}_H)$ is a $\chi_H$-type, there exists a $\chi$-type $(J, \widetilde{\lambda})$ compatible with it, and $J_H$ acts transitively on the set of such $\chi$-types.* It follows from this that $K_H$ acts transitively on the set of compatible $(\chi, \chi_H)$-types. *Proof.* Because $K_H$ acts transitively on $\chi_H$-types, for existence, it suffices to prove that there exists a $\chi$-type compatible with $(\widetilde{T}_H(l), \widetilde{\chi}_H)$, and for transitivity, it suffices to show that $\widetilde{T}_H(l)$ acts transitively on the set of such $\chi$-types. Consider a $\chi$-type $(J, \widetilde{\lambda}) = (\widetilde{T}(l)^k, \widetilde{\chi}^k)$ for some $k \in K$. Because $\exp^*(\widetilde{\lambda}) = D(\chi)^k$, $(J, \widetilde{\lambda})$ is compatible with $(\widetilde{T}_H(l), \widetilde{\chi}_H)$ if and only if $\pi( D(\chi)^k) = D(\chi_H)$. To explicate this, we write $D(\chi) = \text{diag}( \alpha_1, \ldots, \alpha_{n+1})$ and $D(\chi_H) = \text{diag}(\beta_1, \ldots, \beta_n)$, where the $\alpha_i$ and $\beta_j$ are mutually distinct mod $\mathfrak p$ by the comments of Section [2.2](#sec:characters){reference-type="ref" reference="sec:characters"}. Compatibility is then equivalent to $$k \left( \begin{array}{ccc} \alpha_1 && \\ & \ddots & \\ && \alpha_{n+1} \end{array} \right) k^{-1} = \left( \begin{array}{cccc} \beta_1 &&& x_1 \\ & \ddots && \vdots \\ && \beta_n & x_n \\ y_1 & \ldots & y_n & z \end{array} \right)$$ for some $x_i, y_i, z \in \mathcal O/ \mathfrak p^l$. Lemma [Lemma 17](#adjproj){reference-type="ref" reference="adjproj"} says that there exists a $k_0$ with this property, and moreover that $k \in K$ has this property if and only if $k \in \widetilde{T}_H(l) k_0 \widetilde{T}(l)$. This implies the proposition. ◻ **Lemma 17**. *Let $A$ be the diagonal subgroup of ${\rm GL}_{n+1}(\mathcal O/ \mathfrak p^l)$, and let $A_H \subset A$ be the subgroup of elements whose last entry is equal to 1. Let $\alpha_i \in \mathcal O/\mathfrak p^l$, $1 \leqslant i \leqslant n+1$, and $\beta_j \in \mathcal O/\mathfrak p^l$, $1 \leqslant j \leqslant n$, have mutually distinct reductions modulo $\mathfrak p$. 
Then $g_0 \in {\rm GL}_{n+1}(\mathcal O/ \mathfrak p^l)$ has the property that $$\label{conjugate} g_0 \left( \begin{array}{ccc} \alpha_1 && \\ & \ddots & \\ && \alpha_{n+1} \end{array} \right) g_0^{-1} = \left( \begin{array}{cccc} \beta_1 &&& x_1 \\ & \ddots && \vdots \\ && \beta_n & x_n \\ y_1 & \ldots & y_n & z \end{array} \right)$$ for some $x_i, y_i, z \in \mathcal O/ \mathfrak p^l$ if and only if $$\label{g0} g_0 \in A_H \left( \begin{array}{ccc} & \left( \frac{1}{ \alpha_j - \beta_i } \right)_{ij} \\ 1 & \ldots & 1 \end{array} \right) A.$$* *Proof.* We first show that ([\[conjugate\]](#conjugate){reference-type="ref" reference="conjugate"}) implies ([\[g0\]](#g0){reference-type="ref" reference="g0"}). If we write $\alpha' = \text{diag}(\alpha_1, \ldots, \alpha_n)$ and $\beta = \text{diag}(\beta_1, \ldots, \beta_n)$, we may write the equation ([\[conjugate\]](#conjugate){reference-type="ref" reference="conjugate"}) as $$g_0 \left( \begin{array}{cc} \alpha' & \\ & \alpha_{n+1} \end{array} \right) = \left( \begin{array}{cc} \beta & x \\ {}^t y & z \end{array} \right) g_0.$$ If we write $g_0$ in the form $$g_0 = \left( \begin{array}{cc} A & b_1 \\ {}^t b_2 & c \end{array} \right),$$ where $A \in M_n(\mathcal O/ \mathfrak p^l)$ and $b_1$ and $b_2$ are column vectors, then this becomes $$\label{conjugate1} \left( \begin{array}{cc} A \alpha' & \alpha_{n+1} b_1 \\ {}^t b_2 \alpha' & c \alpha_{n+1} \end{array} \right) = \left( \begin{array}{cc} \beta A + x {}^t b_2 & \beta b_1 + cx \\ {}^t y A + z {}^t b_2 & {}^t y b_1 + cz \end{array} \right).$$ The top left entry gives $$\begin{aligned} A_{ij} \alpha_j & = \beta_i A_{ij} + x_i b_{2,j} \\ A_{ij} & = \frac{ x_i b_{2,j} }{ \alpha_j - \beta_i}\end{aligned}$$ for all $i, j$, and the top right entry gives $$\begin{aligned} \alpha_{n+1} b_{1,i} & = \beta_i b_{1,i} + c x_i \\ b_{1,i} & = \frac{c x_i}{\alpha_{n+1} - \beta_i }\end{aligned}$$ for all $i$. Combining these gives that any $g_0$ satisfying ([\[conjugate\]](#conjugate){reference-type="ref" reference="conjugate"}) must have the form $$g_0 = \left( \begin{array}{cccc} x_1 &&& \\ & \ddots && \\ && x_n & \\ &&& 1 \end{array} \right) \left( \begin{array}{ccc} & \left( \frac{1}{ \alpha_j - \beta_i } \right)_{ij} \\ 1 & \ldots & 1 \end{array} \right) \left( \begin{array}{cccc} b_{2,1} &&& \\ & \ddots && \\ && b_{2,n} & \\ &&& c \end{array} \right),$$ which is equivalent to ([\[g0\]](#g0){reference-type="ref" reference="g0"}) because all the matrices on the right hand side must be invertible. We next show that ([\[g0\]](#g0){reference-type="ref" reference="g0"}) implies ([\[conjugate\]](#conjugate){reference-type="ref" reference="conjugate"}). Because the set of $g_0$ satisfying ([\[conjugate\]](#conjugate){reference-type="ref" reference="conjugate"}) is bi-invariant under $A_H$ and $A$, it suffices to check the case when $$g_0 = \left( \begin{array}{ccc} & \left( \frac{1}{ \alpha_j - \beta_i } \right)_{ij} \\ 1 & \ldots & 1 \end{array} \right).$$ We shall show that there exists $y_i$ and $z$ such that $$\label{conjugate2} g_0 \left( \begin{array}{cc} \alpha' & \\ & \alpha_{n+1} \end{array} \right) = \left( \begin{array}{cc} \beta & 1_n \\ {}^t y & z \end{array} \right) g_0,$$ where $1_n$ denotes the column vector of 1's. 
If we write $g_0$ as $$g_0 = \left( \begin{array}{cc} A & b_1 \\ {}^t 1_n & 1 \end{array} \right),$$ then ([\[conjugate2\]](#conjugate2){reference-type="ref" reference="conjugate2"}) becomes $$\left( \begin{array}{cc} A \alpha' & \alpha_{n+1} b_1 \\ {}^t 1_n \alpha' & \alpha_{n+1} \end{array} \right) = \left( \begin{array}{cc} \beta A + 1_n {}^t 1_n & \beta b_1 + 1_n \\ {}^t y A + z {}^t 1_n & {}^t y b_1 + z \end{array} \right).$$ As before, the top left and right entries of this equation are satisfied, and it remains to consider the two equations ${}^t 1_n \alpha' = {}^t y A + z {}^t 1_n$ and $\alpha_{n+1} = {}^t y b_1 + z$. Writing these out, we obtain the system of linear equations $$\label{FanPall} \alpha_j = z + \sum_{i = 1}^n y_i \frac{1}{\alpha_j - \beta_i}$$ for $1 \leqslant j \leqslant n+1$. This system is considered in the Lemma on p. 300 of [@FP]. It is proved that the determinant[^1] of this system is equal to $$(-1)^{n(n-1)/2} \frac{ \prod_{1 \leqslant i < j \leqslant n+1} (\alpha_i - \alpha_j) \prod_{1 \leqslant i < j \leqslant n} (\beta_i - \beta_j) }{ \prod_{1 \leqslant i \leqslant n+1} \prod_{1 \leqslant j \leqslant n} (\alpha_i - \beta_j) },$$ which by our assumptions on $\alpha_i$ and $\beta_j$ is invertible. It follows that there are unique $z$ and $y_i$ that satisfy ([\[FanPall\]](#FanPall){reference-type="ref" reference="FanPall"}), and hence ([\[conjugate2\]](#conjugate2){reference-type="ref" reference="conjugate2"}), as required. Moreover, Fan and Pall prove that $$y_i = - \frac{ \prod_{j = 1}^{n+1} (\alpha_j - \beta_i) }{ \prod_{j \neq i} ( \beta_j - \beta_i) },$$ and taking traces gives that $z = \sum \alpha_i - \sum \beta_i$. ◻ We shall need the following additional property of the element $g_0$. **Lemma 18**. *If $g_0$ is as in Lemma [Lemma 17](#adjproj){reference-type="ref" reference="adjproj"}, then all entries of $g_0$ and $g_0^{-1}$ are in $(\mathcal O/ \mathfrak p^l)^\times$.* *Proof.* The statement for $g_0$ follows immediately from ([\[g0\]](#g0){reference-type="ref" reference="g0"}). For $g_0^{-1}$, we note that it satisfies the equation $$\left( \begin{array}{cc} \alpha' & \\ & \alpha_{n+1} \end{array} \right) g_0^{-1} = g_0^{-1} \left( \begin{array}{cc} \beta & x \\ {}^t y & z \end{array} \right).$$ If we write $$g_0^{-1} = \left( \begin{array}{cc} A' & b'_1 \\ {}^t b'_2 & c' \end{array} \right),$$ then we see as in Lemma [Lemma 17](#adjproj){reference-type="ref" reference="adjproj"} that $$A'_{ij} = \frac{ b'_{1i} y_j}{ \alpha_i - \beta_j}, \quad b'_{2j} = \frac{c' y_j}{ \alpha_{n+1} - \beta_j}.$$ It follows that $$g_0^{-1} = \left( \begin{array}{ccccc} b'_{11} &&& \\ & \ddots && \\ && b'_{1n} & \\ &&& c' \end{array} \right) \left( \begin{array}{ccc} & & 1 \\ & \left( \frac{1}{ \alpha_i - \beta_j } \right)_{ij} & \vdots \\ & & 1 \end{array} \right) \left( \begin{array}{cccc} y_1 &&& \\ & \ddots && \\ && y_n & \\ &&& 1 \end{array} \right),$$ which implies the result. ◻ **Lemma 19**. *Let $(J, \widetilde{\lambda})$ and $(J_H, \widetilde{\lambda}_H)$ be compatible $(\chi, \chi_H)$-types. Let $v \in I(\chi)$ be a microlocal lift vector associated to $(J, \widetilde{\lambda})$, and let $v^\vee \in I(\chi^{-1})$ be a microlocal lift vector associated to $(J, \widetilde{\lambda}^{-1})$. Then the intersection of $H$ with the support of the matrix coefficient $\rho(g) = \langle gv, v^\vee \rangle$ is equal to $K_H(l)$. 
Moreover, we have $J \cap H = K_H(l)$.* *Proof.* Assume that $(J_H, \widetilde{\lambda}_H) = (\widetilde{T}_H(l), \widetilde{\chi}_H)$ is standard, so that $(J, \widetilde{\lambda}) = ( \widetilde{T}(l)^{g_0}, \widetilde{\chi}^{g_0})$ with $g_0$ as in Lemma [Lemma 17](#adjproj){reference-type="ref" reference="adjproj"}. Lemma [Lemma 13](#coeffsupport){reference-type="ref" reference="coeffsupport"} then implies that $\text{supp}(\rho)$ is contained in $K(l) T^{g_0} K(l)$. It therefore suffices to show that $$\label{coeffH} K(l) T^{g_0} K(l) \cap H = K_H(l),$$ as this gives $\text{supp}(\rho) \cap H \subset K_H(l)$, and the reverse inclusion $K_H(l) \subset \text{supp}(\rho) \cap H$ is clear. Moreover, ([\[coeffH\]](#coeffH){reference-type="ref" reference="coeffH"}) implies that $J \cap H = K_H(l)$, as we have $$K_H(l) \subset J \cap H \subset K(l) T^{g_0} K(l) \cap H = K_H(l).$$ To establish ([\[coeffH\]](#coeffH){reference-type="ref" reference="coeffH"}), let $h \in K(l) T^{g_0} K(l) \cap H$, and write $h$ as $k_1 g_0 t g_0^{-1} k_2$ with $k_i \in K(l)$ and $t = \text{diag}(t_1, \ldots, t_{n+1})$. We let $1 \leqslant j \leqslant n+1$ be such that $| t_j |$ is maximal; by replacing $h$ with $h^{-1}$ if necessary we may also assume that $| t_j | \geqslant | t_i^{-1} |$ for all $i$. Let $e_i$ be the standard basis for $F^{n+1}$. We will show that $t_i \in \mathcal O^\times$ for all $i$ by examining the vector $$h g_0 e_j = k_1 g_0 t g_0^{-1} k_2 g_0 e_j.$$ Because $g_0 e_j \in \mathcal O^{n+1}$, we have $k_2 g_0 e_j \in g_0 e_j + \mathfrak p^l \mathcal O^{n+1}$, so that $$\begin{aligned} h g_0 e_j & \in k_1 g_0 t g_0^{-1} (g_0 e_j + \mathfrak p^l \mathcal O^{n+1}) \\ & = k_1 g_0 (t e_j + t \mathfrak p^l \mathcal O^{n+1}).\end{aligned}$$ We have $t e_j = t_j e_j$, while our assumption that $| t_j |$ was maximal implies that $t \mathfrak p^l \mathcal O^{n+1} \subset t_j \mathfrak p^l \mathcal O^{n+1}$, which gives $$\begin{aligned} h g_0 e_j & \in k_1 g_0 t_j (e_j + \mathfrak p^l \mathcal O^{n+1}) \\ & = t_j g_0 (e_j + \mathfrak p^l \mathcal O^{n+1}) \\ & = t_j g_0 e_j + t_j \mathfrak p^l \mathcal O^{n+1}.\end{aligned}$$ Because $h \in H$, it preserves the last entry of the vector $g_0 e_j$, which is equal to $(g_0)_{n+1,j}$. By inspecting the last entry of the vectors in the equation above, we therefore have $(g_0)_{n+1,j} \in t_j (g_0)_{n+1,j} + t_j \mathfrak p^l \mathcal O$. Because $(g_0)_{n+1,j} \in \mathcal O^\times$ by Lemma [Lemma 18](#g0entry){reference-type="ref" reference="g0entry"}, this implies that $t_j \in \mathcal O$, and therefore that $t_i \in \mathcal O^\times$ for all $i$ by our assumption that $| t_j | \geqslant |t_i|, |t_i^{-1}|$. Having established that $t_i \in \mathcal O^\times$ for all $i$, we may apply the same argument as above to deduce that $h g_0 e_i \in t_i g_0 e_i + \mathfrak p^l \mathcal O^{n+1}$ for all $i$. Comparing the last entries of these vectors as before gives $t_i \in 1 + \mathfrak p^l$, which implies that $t$, and hence $h$, lie in $K(l)$ as required. ◻ # Proof of the trivial bound {#sec:trivialbd} ## Notation {#sec:notation1} If $f$ is a complex-valued function on a group, we denote by $f^\vee$ the function given by $f^\vee(g) = \overline{f}(g^{-1})$. ### Number fields {#sec:number fields} Let $E/F$ be a CM extension of number fields. Let $\mathcal O$ and $\mathbb A$ be the integers and adeles of $F$. We will denote places of $F$ by $v$ or $w$, possibly with some extra decoration. 
For any place $v$ of $F$, we let $\mathcal O_v$ be the integers in the completion $F_v$, $\mathfrak p_v$ the maximal ideal of $\mathcal O_v$, $\varpi_v$ a uniformizer, and $q_v$ the order of the residue field. We fix a place $w$ of $F$ that splits in $E$, and write $\mathfrak p$ and $q$ for $\mathfrak p_w$ and $q_w$. We let $\mathcal O_E$ be the integers of $E$. We denote places of $E$ by $u$ or $u'$. If $v$ is a place of $F$, we let $E_v = E \otimes_F F_v$, and likewise for other objects associated with $E$. ### Hermitian spaces {#sec:Hermitian spaces} Let $V$ be a vector space of dimension $n+1$ over $E$, and let $\langle \, , \, \rangle_V$ be a nondegenerate Hermitian form on $V$ with respect to $E/F$. We assume that $\langle \, , \, \rangle_V$ is positive definite at all infinite places. Let $V_H \subset V$ be a codimension one subspace. Let $x_1, \ldots, x_n$ be a basis of $V_H$, and let $x_{n+1} \in V_H^\perp$ be nonzero, so that $x_1, \ldots, x_{n+1}$ forms a basis of $V$. We let $L \subset V$ and $L_H \subset V_H$ be the $\mathcal O_E$-lattices spanned by $x_1, \ldots, x_{n+1}$ and $x_1, \ldots, x_n$ respectively. This definition implies that $L_v = L_{H,v} \oplus \mathcal O_{E,v} x_{n+1}$ for all finite places $v$. We assume that $\langle \, , \, \rangle_V$ is integral on $L$. We let $S$ be a finite set of places containing all infinite places and all places that ramify in $E/F$. We also assume that for $v \notin S$ the lattices $L_v$ and $L_{H,v}$ are self-dual. We now suppose that $v$ splits in $E/F$ as $u u'$, so that $V_v \simeq V_u \oplus V_{u'}$. The Hermitian form induces a nondegenerate bilinear pairing $B_v : V_u \times V_{u'} \to F_v$ of vector spaces over $F_v$, and we may write $\langle \, , \, \rangle_V$ in terms of $B_v$ as $$\langle (v_1, v_1'), (v_2, v_2') \rangle_V = ( B_v( v_1, v_2'), B_v( v_2, v_1') ) \in F_v \oplus F_v \simeq E_v.$$ There is an isomorphism $\iota_{V,v}: V_v \simeq F_v^{n+1} \oplus F_v^{n+1}$ that respects the direct sum decomposition $V_v \simeq V_u \oplus V_{u'}$ and that carries $B_v$ to the standard bilinear form on $F_v^{n+1} \times F_v^{n+1}$. We may assume that $\iota_{V,v}(V_{H,v}) = F_v^n \oplus F_v^n$. We now further assume that the split place $v$ does not lie in $S$, and in particular that it is finite. Because $\mathcal O_{E,v} \simeq \mathcal O_{E,u} \oplus \mathcal O_{E,u'}$, we have $L_v \simeq L_u \oplus L_{u'}$. Moreover, our assumption that $L_v$ was self-dual implies that $L_u$ and $L_{u'}$ are dual to each other under $B_v$, and likewise for $L_{H,u}$ and $L_{H,u'}$. It follows that we may choose $\iota_{V,v}$ to send $L_u$ and $L_{u'}$ to the relevant copies of $\mathcal O_v^{n+1} \subset F_v^{n+1}$, and likewise for $L_{H,u}$ and $L_{H,u'}$. ### Algebraic groups {#sec:algebraic groups} Let $G$ and $H$ be the unitary groups of $V$ and $V_H$. Our assumption that $V$ was positive definite implies that the adelic quotients of $G$ and $H$ are compact. We let $Z \simeq {\rm U}(1)$ be the center of $G$, and define $\widetilde{H} = ZH$. If $v$ splits in $E/F$ as $u u'$, the isomorphism $V_v \simeq V_u \oplus V_{u'}$ induces isomorphisms of $G_v$ with ${\rm GL}(V_u)$ and ${\rm GL}(V_{u'})$. Applying the isomorphism $\iota_{V,v}$ defined above then gives an isomorphism $\iota_v : G_v \simeq {\rm GL}_{n+1}(F_v)$. Note that this requires us to choose one of the places $u$ and $u'$, and changing our choice has the effect of composing $\iota_v$ with the automorphism $g \mapsto {}^t g^{-1}$ of ${\rm GL}_{n+1}(F_v)$. 
We have $\iota_v(H_v) = {\rm GL}_n(F_v)$, embedded in ${\rm GL}_{n+1}$ as the upper left-hand block. ### Compact subgroups {#sec:compact subgroups} For $v \notin S$, we define $K_v$ and $K_{H,v}$ to be the stabilizers of $L_v$ in $G_v$ and of $L_{H,v}$ in $H_v$, respectively. Our self-duality assumption on $L_v$ and $L_{H,v}$ implies that $K_v$ and $K_{H,v}$ are hyperspecial subgroups. Moreover, the relation $L_v = L_{H,v} \oplus \mathcal O_{E,v} x_{n+1}$ implies that $K_{H,v} = H_v \cap K_v$. When $v \notin S$ is split in $E/F$, it follows from Subsections [3.1.2](#sec:Hermitian spaces){reference-type="ref" reference="sec:Hermitian spaces"} and [3.1.3](#sec:algebraic groups){reference-type="ref" reference="sec:algebraic groups"} that the isomorphism $\iota_v$ sends $K_v$ and $K_{H,v}$ to ${\rm GL}_{n+1}(\mathcal O_v)$ and ${\rm GL}_{n}(\mathcal O_v)$, respectively. We let $K_w(l)$ and $K_{H,w}(l)$ denote the usual principal congruence subgroups of level $\mathfrak p^l$. At the place $w$, we define the maximal compact subgroup $K_{\widetilde{H},w}$ of $\widetilde{H}_w$ to be the product of $K_{H,w}$ and $\mathcal O_w^\times$, where the latter is identified with the maximal compact subgroup of $Z_w \simeq F_w^\times$. For finite places $v \in S$, we choose compact open subgroups $K_v < G_v$ and $K_{H,v} < H_v$ that stabilize $L_v$ and $L_{H,v}$, and satisfy $K_{H,v} = H_v \cap K_v$. We let $K_f = \prod_{v < \infty} K_v$, which is compact and open in $G(\mathbb A_f)$. We let $K_\infty = G_\infty$, and put $K = K_\infty K_f$. We define $K_{H,f}$, $K_{H, \infty}$, and $K_H$ analogously for $H$. ### Measures {#sec:measures} We choose Haar measures $dg = \prod_v dg_v$ and $dh = \prod_v dh_v$ on $G(\mathbb A)$ and $H(\mathbb A)$ as follows. At infinity, we let $dg_\infty$ and $dh_\infty$ give volume 1 to $G_\infty$ and $H_\infty$, and at finite places $v$, we let $dg_v$ and $dh_v$ give volume 1 to the compact subgroups $K_v$ and $K_{H,v}$. We choose a Haar measure $dz = \prod_v dz_v$ on $Z(\mathbb A)$ by requiring that $dz_v$ assign volume 1 to the maximal compact subgroup of $Z_v$ at all places. We have $\widetilde{H} \simeq H \times Z$, and we equip it with the product measure $dh dz$. ## A relative trace inequality The proofs of both the trivial bound Proposition [Proposition 4](#trivialbound){reference-type="ref" reference="trivialbound"}, and of Theorem [Theorem 3](#mainperiod){reference-type="ref" reference="mainperiod"}, begin with the following inequality. Number theorists may view it as the result of dropping all but one term from the spectral side of the relative trace formula for $(G,H)$, while analysts may view it as an application of the $T T^*$ method for bounding the norm of an operator. In any case, it is proved using an elementary application of Cauchy-Schwartz. **Lemma 20**. *Let $k_0 \in C_c( G(\mathbb A))$, and let $k = k_0 * k_0^\vee$. If $f \in L^2( [G])$ with $\| f \|_2 = 1$, and $f_H \in L^2( [H])$, we have $$| \mathcal P( R(k_0) f, \overline{f}_H ) |^2 \leqslant \int_{ [H] \times [H]} \overline{f}_H(x) f_H(y) \sum_{\gamma \in G(F) } k(x ^{-1} \gamma y) dx dy,$$ where $R(k_0)$ denotes the action of $k_0$ in the right-regular representation. 
In particular, the right hand side is non-negative.* *Proof.* Substituting the definition of $R(k_0) f$ in $\mathcal P$ gives $$\mathcal P( R(k_0) f, \overline{f}_H ) = \int_{[H]} \overline{f}_H(x) \int_{G(\mathbb A)} k_0(g) f(xg) dg dx,$$ and after a change of variable this becomes $$\mathcal P( R(k_0) f, \overline{f}_H ) = \int_{[H]} \overline{f}_H(x) \int_{G(\mathbb A)} k_0(x^{-1} g) f(g) dg dx.$$ Folding over $G(F)$ and bringing the integral over $[G]$ to the outside gives $$\begin{aligned} \mathcal P( R(k_0) f, \overline{f}_H ) & = \int_{[H]} \overline{f}_H(x) \int_{[G]} \sum_{\gamma \in G(F)} k_0(x^{-1} \gamma g) f(g) dg dx \\ & = \int_{[G]} \int_{[H]} \sum_{\gamma \in G(F)} \overline{f}_H(x) k_0(x^{-1} \gamma g) f(g) dx dg.\end{aligned}$$ If we apply Cauchy-Schwartz to the integral over $[G]$ and use the fact that $\| f \|_2 = 1$, we obtain $$\begin{aligned} | \mathcal P( R(k_0) f, \overline{f}_H ) |^2 & \leqslant \int_{[G]} \left| \int_{[H]} \sum_{\gamma \in G(F)} \overline{f}_H(x) k_0(x^{-1} \gamma g) dx \right|^2 dg \\ & = \int_{[G]} \int_{ [H] \times [H]} \sum_{\gamma_1, \gamma_2 \in G(F)} \overline{f}_H(x) f_H(y) k_0( x^{-1} \gamma_1 g) \overline{k}_0( y^{-1} \gamma_2 g) dx dy dg.\end{aligned}$$ We now return the integral over $[G]$ to the inside and unfold, which gives $$\begin{aligned} | \mathcal P( R(k_0) f, \overline{f}_H ) |^2 & \leqslant \int_{ [H] \times [H]} \overline{f}_H(x) f_H(y) \int_{[G]} \sum_{\gamma_1, \gamma_2 \in G(F)} k_0( x^{-1} \gamma_1 g) \overline{k}_0( y^{-1} \gamma_2 g) dg dx dy \\ & = \int_{ [H] \times [H]} \overline{f}_H(x) f_H(y) \int_{G(\mathbb A)} \sum_{\gamma \in G(F)} k_0( x^{-1} \gamma g) \overline{k}_0( y^{-1} g) dg dx dy.\end{aligned}$$ We may write $\overline{k}_0( y^{-1} g)$ as $k_0^\vee( g^{-1} y)$, so that the integral over $G(\mathbb A)$ is $$\int_{G(\mathbb A)} k_0( x^{-1} \gamma g) \overline{k}_0( y^{-1} g) dg = \int_{G(\mathbb A)} k_0( x^{-1} \gamma g) k_0^\vee( g^{-1} y) dg = k(x^{-1} \gamma y).$$ This gives $$| \mathcal P( R(k_0) f, \overline{f}_H ) |^2 \leqslant \int_{ [H] \times [H]} \overline{f}_H(x) f_H(y) \sum_{\gamma \in G(F)} k(x^{-1} \gamma y) dx dy$$ as required. ◻ ## Proof of Proposition [Proposition 4](#trivialbound){reference-type="ref" reference="trivialbound"} {#sec:trivial proof} We recall the notation of Proposition [Proposition 4](#trivialbound){reference-type="ref" reference="trivialbound"}, including the generic pair $(\chi, \chi_H)$, the compatible $(\chi, \chi_H)$-types $(J, \widetilde{\lambda})$ and $(J_H, \widetilde{\lambda}_H)$, the functions $\phi$ and $\phi_H$, and the representation $\mu$. We shall prove Proposition [Proposition 4](#trivialbound){reference-type="ref" reference="trivialbound"} by applying Lemma [Lemma 20](#ampineq){reference-type="ref" reference="ampineq"} with $f$ equal to $\phi$, and $f_H$ running over a decomposition of $\phi_H$ into functions with small support. We choose the test function $k_0$ to be the following spectral projector onto $\phi$. Let $v' \neq w$ be an auxiliary finite place. At infinity, we take $k_{0,\infty}$ to be $d_\mu \chi_\mu$, where $d_\mu$ is the dimension of $\mu$ and $\chi_\mu$ is its character, which projects to the $\mu$-isotypic subspace of $L^2(G_\infty)$. At $w$, $k_{0,w}$ is the function supported on $J$ and equal to $\text{vol}(J)^{-1} \widetilde{\lambda}^{-1} \sim q^{n(n+1) l} \widetilde{\lambda}^{-1}$ there. 
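As a quick sanity check on the exponent $n(n+1)l$ appearing here (an illustrative aside of ours, not part of the argument): since $J$ is a $K_w$-conjugate of $\widetilde{T}(l)$ and $K_w$ has volume one, we have $\text{vol}(J)^{-1} = [K_w : \widetilde{T}(l)] = |{\rm GL}_{n+1}(\mathcal O_w/\mathfrak p^l)| \, / \, |(\mathcal O_w/\mathfrak p^l)^\times|^{n+1}$, which equals $q^{n(n+1)l}$ up to a factor bounded in terms of $n$ alone. The brute-force sketch below checks this for tiny parameters, taking $F_w = \mathbb Q_p$ for illustration so that $\mathcal O_w/\mathfrak p^l \simeq \mathbb Z/p^l\mathbb Z$; all function names are ours.

```python
from itertools import product

def is_unit(x, p):
    # x in Z/p^l is a unit iff it is nonzero mod p
    return x % p != 0

def det_mod(M, mod):
    # determinant of a small square integer matrix, reduced mod `mod`
    # (Laplace expansion along the first row; fine for m <= 3)
    m = len(M)
    if m == 1:
        return M[0][0] % mod
    d = 0
    for j in range(m):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        d += (-1) ** j * M[0][j] * det_mod(minor, mod)
    return d % mod

def count_GL(m, p, l):
    # |GL_m(Z/p^l)| by brute force (only sensible for tiny m, p, l)
    mod = p ** l
    return sum(
        1
        for entries in product(range(mod), repeat=m * m)
        if is_unit(det_mod([list(entries[i * m:(i + 1) * m]) for i in range(m)], mod), p)
    )

# [K_w : \tilde T(l)] = |GL_m(Z/p^l)| / |(Z/p^l)^x|^m with m = n + 1,
# compared against q^{n(n+1)l} = p^{m(m-1)l}.
for (m, p, l) in [(2, 2, 1), (2, 2, 2), (2, 3, 1), (3, 2, 1)]:
    units = p ** (l - 1) * (p - 1)      # |(Z/p^l)^x|
    index = count_GL(m, p, l) // units ** m
    print(f"m={m} p={p} l={l}: index={index}, ratio={index / p ** (m * (m - 1) * l)}")
```

In each case the printed ratio is the bounded factor $\prod_{i=1}^{n+1}(1-q^{-i})\,(1-q^{-1})^{-(n+1)}$ (with $m = n+1$ in the sketch), consistent with $\text{vol}(J)^{-1} \sim q^{n(n+1)l}$.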
At $v'$, we let $K'_{v'} \subset K_{v'}$ be an open subgroup to be chosen later, and let $k_{0,v'} = \text{vol}(K'_{v'})^{-1} 1_{K'_{v'}}$ be the normalized characteristic function of $K'_{v'}$. For the remaining places, we let $k_{0,v} = \text{vol}(K_v)^{-1} 1_{K_v}$. It follows that $k_0$ is a projection operator, i.e. $k_0 = k_0^\vee$ and $k_0 = k_0 * k_0^\vee$. The transformation properties of $\phi$ imply that $R(k_0) \phi = \phi$. We define $K' = K'_{v'} \times \prod_{v \neq v'} K_v$, which contains the support of $k_0$. Applying Lemma [Lemma 20](#ampineq){reference-type="ref" reference="ampineq"} with $f = \phi$ and this choice of $k_0$ gives $$\label{phiampineq} | \mathcal P( \phi, \overline{f}_H ) |^2 \leqslant \int_{ [H] \times [H]} \overline{f}_H(x) f_H(y) \sum_{\gamma \in G(F) } k_0(x ^{-1} \gamma y) dx dy$$ for any $f_H$. To choose $f_H$, let $B = K' \cap H(\mathbb A)$, and let $x_i B$ be a set of cosets that covers $[H]$. We let $f_{H,i}$ be a collection of functions such that $\text{supp}(f_i) \subset x_i B$, $\| f_{H,i} \|_2 \leqslant 1$, and $\phi_H = \sum f_{H,i}$. We have $$| \mathcal P( \phi, \overline{\phi}_H) |^2 = \Big| \sum_i \mathcal P( \phi, \overline{f}_{H,i} ) \Big|^2 \ll \sum_i | \mathcal P( \phi, \overline{f}_{H,i} ) |^2.$$ We may apply ([\[phiampineq\]](#phiampineq){reference-type="ref" reference="phiampineq"}) to each $f_{H,i}$, and so it suffices to prove that $$\label{fHi} \int_{ [H] \times [H]} \overline{f}_{H,i}(x) f_{H,i}(y) \sum_{\gamma \in G(F) } k_0(x ^{-1} \gamma y) dx dy \ll d_\mu^2 q^{nl}$$ for all $i$. **Lemma 21**. *If $K'_{v'}$ is chosen small enough, then only $\gamma \in H(F)$ contribute to the left hand side of ([\[fHi\]](#fHi){reference-type="ref" reference="fHi"}).* *Proof.* We must show that there is a choice of $K'_{v'}$ such that if $x, y \in x_i B$ for some $x_i$, and if $\gamma \in G(F)$ satisfies $x^{-1} \gamma y \in \text{supp}(k_0)$, then $\gamma \in H(F)$. We are free to translate $x_i$ on the left by $H(F)$, and so we may choose a compact set $\Omega = \prod \Omega_v \subset H(\mathbb A)$ containing a fundamental domain for $[H]$ and assume that $x_i \in \Omega$. If we let $x = x_i b_1$ and $y = x_i b_2$ with $b_1, b_2 \in B$, then we have $b_1^{-1} x_i^{-1} \gamma x_i b_2 \in \text{supp}(k_0)$, so $x_i^{-1} \gamma x_i \in B \text{supp}(k_0) B^{-1} \subset K'$. This implies that $$\gamma \in x_i K' x_i^{-1} \subset \prod_{v \neq v'} \Omega_v K_v \Omega_v^{-1} \times \bigcup_{y \in \Omega_{v'}} y K'_{v'} y^{-1}.$$ The compact set above is fixed at places away from $v'$, while at $v'$ it may be made arbitrarily small by shrinking $K'_{v'}$. It follows that there is a choice of $K'_{v'}$ such that the only $\gamma$ lying in this set is the identity, as required. ◻ With this choice of $K'_{v'}$, the left hand side of ([\[fHi\]](#fHi){reference-type="ref" reference="fHi"}) simplifies to $$\int_{ [H] \times [H]} \overline{f}_{H,i}(x) f_{H,i}(y) \sum_{\gamma \in H(F) } k_0(x ^{-1} \gamma y) dx dy = \langle R_H( k_H ) f_{H,i}, f_{H,i} \rangle,$$ where $R_H$ denotes the right-regular representation on $H$ and we have written $k_H$ for the restriction of $k_0$ to $H(\mathbb A)$. We must show that this is $\ll d_\mu^2 q^{nl}$. We have the trivial bound $\langle R_H( k_H) f_{H,i}, f_{H,i} \rangle \leqslant \| f_{H,i} \|_2^2 \| k_H \|_1 \leqslant \| k_H \|_1$, so it suffices to show $\| k_H \|_1 \ll d_\mu^2 q^{nl}$. We have $\| k_H \|_1 = \| k_{H,w} \|_1 \| k_H^w \|_1$, and moreover $\| k_H^w \|_1 \ll \| k_H^w \|_\infty \ll d_\mu^2$. 
The bound $\| k_{H,w} \|_1 \ll q^{nl}$ follows from the fact that $k_{0,w}$ is supported on $J$ and has size $q^{n(n+1)l}$ there, and the transversality result $J \cap H_w = K_{H,w}(\mathfrak p^l)$ from Lemma [Lemma 19](#Atrans){reference-type="ref" reference="Atrans"}. # Preliminaries to amplification {#sec:amp prelim} This section contains results that will be used in the amplification argument in Section [5](#sec:amp){reference-type="ref" reference="sec:amp"}. ## Notation {#notation-1} Let $\mathcal P$ be the set of places of $F$ that are split in $E/F$ and do not lie in $S$. Our amplifier will be supported at places in $\mathcal P$. ### Root systems For each $v \in \mathcal P$, we have the identification of $G_v$ with ${\rm GL}_{n+1}(F_v)$ chosen in Section [3.1.3](#sec:algebraic groups){reference-type="ref" reference="sec:algebraic groups"}. We let $T_v$ be the maximal torus in $G_v$ that corresponds to the diagonal subgroup under this identification, and let $B_v$ correspond to the standard upper triangular Borel. We let $X^*(T_v)$ and $X_*(T_v)$ be the groups of characters and cocharacters of $T_v$. We let $\Phi$ be the roots of $T_v$ in $\text{Lie}(G_v)$, with positive roots $\Phi^+$ and simple roots $\Delta$ corresponding to $B_v$. (Technically these depend on $v$, but we may naturally identify them for all $v$.) We let $\rho \in X^*(T_v) \otimes_\mathbb Z\mathbb Q$ denote the half-sum of the positive roots, and $W$ be the Weyl group. We define $$X^+_*(T_v) = \{ \mu \in X_*(T_v) : \langle \mu, \alpha \rangle \geqslant 0, \, \alpha \in \Delta \}.$$ We introduce the function $$\| \mu \|^* = \underset{ w \in W}{\max} \langle w\mu, \rho \rangle$$ on $X_*(T_v)$. One sees that $\| \mu \|^*$ is a seminorm, with kernel equal to the central cocharacters; the condition that $\| \mu \|^* = \| -\mu \|^*$ follows from the fact that $\rho$ and $-\rho$ lie in the same Weyl orbit. We define the analogous objects for $H$ and $\widetilde{H}$, and denote them by adding the appropriate subscript. ### Metrics on groups {#sec:metrics} Let $u$ be a place of $E$, and let $A \in \text{End}_{E_u}(V_u)$. If $u$ is infinite, then $V_u$ is a complex vector space with the positive definite Hermitian form $\langle \, , \, \rangle_V$, and we define $\| A \|_u$ to be the usual operator norm of $A$. If $u$ is finite, we define $\| A \|_u$ to be the maximum of the $u$-adic norms of the matrix entries of $A$ in the basis $x_1, \ldots, x_{n+1}$, or equivalently as the smallest value of $\| a \|_u$ among $a \in E_u$ such that $A L_u \subset a L_u$. If $A \in \text{End}_E(V)$, we define $\| A \| = \prod_u \| A \|_u$. If $u$ is a place of $E$ above a place $v$ of $F$, and $g \in G_v$, we may naturally talk about $\| g \|_u$. It follows from the definition that $\| \cdot \|_u$ is bi-invariant under $K_v$. Recall the subgroup $K_{\widetilde{H},w}$ of $\widetilde{H}_w$ defined in Section [3.1.4](#sec:compact subgroups){reference-type="ref" reference="sec:compact subgroups"}. For $g_w \in K_w$, we define $d_w(g_w, K_{\widetilde{H},w})$ to be 0 if $g_w \in K_{\widetilde{H},w}$, and otherwise to be $q^{-l}$, where $l$ is the largest integer such that $g_w \in K_w(l) K_{\widetilde{H},w}$. ## Hecke algebras We let $\mathcal H$ be the Hecke algebra of compactly supported functions on $G(\mathbb A_f)$ that are bi-invariant under $K_f$. If $S'$ is a finite set of finite places, we likewise define the Hecke algebras $\mathcal H_{S'}$ and $\mathcal H^{S'}$ at $S'$ and away from $S'$, respectively. 
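Before the normalized Hecke operators $\tau(v, \mu)$ are defined just below, we record a small computational aside of ours (not taken from the paper) that makes the seminorm $\| \mu \|^*$ concrete. Identifying $X_*(T_v) \simeq \mathbb Z^{n+1}$ with $\rho = (n/2, n/2 - 1, \ldots, -n/2)$, the Weyl group is $S_{n+1}$ permuting coordinates, and by the rearrangement inequality the maximum defining $\| \mu \|^*$ is attained by sorting $\mu$ in decreasing order; in particular one checks that $\| [j] \|^* = jn/2$ and $\| [j,-j] \|^* = jn$ for the cocharacters $[j] = (j, 0, \ldots, 0)$ and $[j,-j] = (j, 0, \ldots, 0, -j)$ appearing in the amplifier below. The sketch simply brute-forces the Weyl maximum; the function names are ours.

```python
from fractions import Fraction
from itertools import permutations

def rho(m):
    # half-sum of the positive roots of GL_m in standard coordinates:
    # ((m-1)/2, (m-3)/2, ..., (1-m)/2)
    return [Fraction(m - 1 - 2 * i, 2) for i in range(m)]

def cocharacter_seminorm(mu):
    # ||mu||^* = max over the Weyl group S_m (acting by permutation) of <w mu, rho>;
    # brute force over permutations, which is fine for small m
    r = rho(len(mu))
    return max(sum(Fraction(x) * y for x, y in zip(perm, r)) for perm in permutations(mu))

# sanity checks for G_v = GL_{n+1}(F_v) with n = 2, i.e. m = 3
n = 2
assert cocharacter_seminorm([1] * (n + 1)) == 0               # central cocharacters lie in the kernel
assert cocharacter_seminorm([3, 0, 0]) == Fraction(3 * n, 2)  # ||[j]||^* = j*n/2 for j = 3
assert cocharacter_seminorm([3, 0, -3]) == 3 * n              # ||[j,-j]||^* = j*n for j = 3
assert cocharacter_seminorm([3, 0, -3]) == cocharacter_seminorm([-3, 0, 3])  # ||mu||^* = ||-mu||^*
print("all checks passed")
```

These are the values that enter the normalization $q_v^{-\| \mu \|^*}$ in the definition of $\tau(v, \mu)$ that follows.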
For $v \in \mathcal P$ and $\mu \in X_*( T_v)$, define $\tau(v, \mu) \in \mathcal H_v$ to be the function supported on $K_v \mu(\varpi_v) K_v$ and taking the value $q_v^{-\| \mu \|^*}$ there. We shall use the amplifier for ${\rm GL}_{n+1}$ constructed by Blomer--Maga [@BM Section 4]. To introduce this, for $j \in \mathbb Z$ let $[j] = (j, 0, \ldots, 0)\in X_*(T_v)$, and $[j,-j] = (j, 0, \ldots, 0, -j)\in X_*(T_v)$. We shall use the following results from [@BM Section 4]. **Proposition 22**. *Let $v \in \mathcal P$.* (a) *Let $\pi_v$ be an unramified representation of $G_v$, and $v \in \pi_v$ a nonzero spherical vector. Define $\lambda(j)$ by $\tau(v, [j]) v = \lambda(j) v$. If $q_v$ is sufficiently large depending on $n$, there is $1 \leqslant j \leqslant n+1$ such that $\lambda(j) \gg 1$, where the implied constant depends only on $n$.* (b) *For $1 \leqslant j \leqslant n+1$ we have $$\tau(v, [j]) \tau(v, [-j]) = \sum_{i = 0}^j c_{vij} \tau(v, [i,-i]),$$ where $c_{vij} \ll 1$ and the implied constant depends only on $n$.* ## Spherical representations and spherical functions {#sec:spherical} In this section, $v$ will denote a place in $\mathcal P$. We shall use the following parametrization of unramified characters of $T_v$. If $T_v^c$ is the maximal compact subgroup of $T_v$, then we may identify $T_v / T_v^c$ with $X_*(T_v)$ via the map sending $\mu \in X_*(T_v)$ to $\mu(\varpi_v) T_v^c$. This lets us identify the group of unramified characters of $T_v$ with the group of complex characters of $X_*(T_v)$, which we denote by $\widehat{X}_*(T_v)$. We may naturally identify $\widehat{X}_*(T_v)$ with $(\mathbb C^\times)^{n+1}$, and if $\alpha \in \widehat{X}_*(T_v)$ we let $(\alpha_1, \ldots, \alpha_{n+1})$ be its coordinates under this identification. We say that $\alpha$ is $\theta$-tempered if $q_v^{-\theta} \leqslant |\alpha_i | \leqslant q_v^\theta$ for all $i$. We next recall the classification of irreducible unramified representations of $G_v$ in terms of unramified characters of $T_v$. If $\chi$ is an unramified character of $T_v$ corresponding to $\alpha \in \widehat{X}_*(T_v)$, we let $\pi_{v,\alpha}$, or $\pi_{v,\chi}$, be the unique irreducible unramified subquotient of the unitarily induced representation $\text{Ind}_{B_v}^{G_v} \chi$. It is known [@Cartier Section 4.4] that all irreducible unramified representations of $G_v$ arise in this way, and that $\pi_{v,\alpha} \simeq \pi_{v,\alpha'}$ if and only if $\alpha$ and $\alpha'$ lie in the same Weyl orbit. It follows that an irreducible unramified representation $\pi_v$ is isomorphic to $\pi_{v, \alpha}$ for a unique $\alpha \in \widehat{X}_*(T_v) / W$, which we refer to as the Satake parameter of $\pi_v$. We say that $\pi_v$ is $\theta$-tempered if its Satake parameter is. We note that the contragredient $\pi_{v, \alpha}^\vee$ of $\pi_{v, \alpha}$ is isomorphic to $\pi_{v, \alpha^{-1}}$. For $\alpha \in \widehat{X}_*(T_v)$, we denote the spherical function with Satake parameter $\alpha$ by $\varphi_{v,\alpha}$. To recall its definition, we let $v \in \pi_{v, \alpha}$ and $v^\vee \in \pi_{v, \alpha}^\vee$ be spherical vectors with $\langle v, v^\vee \rangle = 1$. We then have $$\varphi_{v,\alpha}(g) = \langle \pi_{v, \alpha}(g) v, v^\vee \rangle.$$ We shall require the following bound for $\varphi_{v,\alpha}$. **Lemma 23**. 
*Let $\alpha \in \widehat{X}_*(T_v)$.* (i) *[\[phibound1\]]{#phibound1 label="phibound1"} When $\alpha = 1$ is the trivial character, we have $$\varphi_{v,1}( \mu(\varpi_v) ) \ll (\| \mu \|^*)^{n(n+1)/2} q_v^{ - \| \mu \|^*}$$ for $\mu \in X_*(T_v)$.* (ii) *[\[phibound2\]]{#phibound2 label="phibound2"} For general $\alpha$, we have $$\varphi_{v,\alpha}( \mu(\varpi_v) ) \leqslant \underset{ w \in W }{ \max} | \alpha( w\mu ) | \varphi_{v,1}( \mu(\varpi_v) ) \ll \underset{ w \in W }{ \max} | \alpha( w\mu ) | (\| \mu \|^*)^{n(n+1)/2} q_v^{ - \| \mu \|^*}$$ for $\mu \in X_*(T_v)$.* *Both implied constants depend only on $n$.* *Proof.* We may assume without loss of generality that $\mu \in X^+_*(T_v)$. We prove ([\[phibound1\]](#phibound1){reference-type="ref" reference="phibound1"}) using the formula for $\varphi_{v,1}$ given by Macdonald [@Mac Prop. 4.6.1]. Note that Macdonald only proves this when $G_v$ is simply connected, but for a general $G_v$ the formula may be derived in the same way from the formula of Casselman [@Ca Thm. 4.2]. Macdonald's formula states that $$\varphi_{v,1}( \mu(\varpi_v) ) = q_v^{ - \langle \mu, \rho \rangle} P(\mu, q_v^{-1})$$ for $\mu \in X^+_*(T_v)$, where $P(\mu, q_v^{-1})$ is a polynomial on $X_*(T_v) \times \mathbb C$ that depends only on $n$; in other words, it is a polynomial in $\mu$ whose coefficients are polynomials in $q_v^{-1}$. Moreover, the degree of $P(\mu, q_v^{-1})$ as a function of $\mu$ is at most $| \Phi^+ | = n(n+1)/2$, which gives ([\[phibound1\]](#phibound1){reference-type="ref" reference="phibound1"}). We prove ([\[phibound2\]](#phibound2){reference-type="ref" reference="phibound2"}) by comparing the integral representations of $\varphi_{v,\alpha}$ and $\varphi_{v,1}$. To state these representations, we recall the Iwasawa $A$-coordinate on $G_v$. For $g \in G_v$, this is the element $A(g) \in X_*(T_v)$ such that $g \in N_v A(g)(\varpi_v) K_v$, where $N_v$ is the unipotent radical of $B_v$. If we let $\delta \in \widehat{X}_*(T_v)$ be the modular character, we then have $$\varphi_{v,\alpha}( \mu(\varpi_v) ) = \int_{K_v} (\delta^{1/2} \alpha)( A( k \mu(\varpi_v)) ) dk.$$ We may compare these formulas for $\varphi_{v,\alpha}$ and $\varphi_{v,1}$ as follows: $$\begin{aligned} \varphi_{v,\alpha}( \mu(\varpi_v) ) & = \int_{K_v} (\delta^{1/2} \alpha)( A( k \mu(\varpi_v)) ) dk \\ & \leqslant \underset{k \in K_v}{\max} \, | \alpha( A( k \mu(\varpi_v)) ) | \int_{K_v} \delta^{1/2}( A( k \mu(\varpi_v)) ) dk \\ & = \underset{k \in K_v}{\max} \, | \alpha( A( k \mu(\varpi_v)) ) | \varphi_{v,1}( \mu(\varpi_v) ).\end{aligned}$$ It is known that the set $\{ A( k \mu(\varpi_v)) : k \in K_v \}$ lies in the convex hull of $W \mu$, which we denote $\text{Conv}(W \mu)$. (For instance, this follows by combining [@KP Prop. 5.4.2] with the fact that the image of the Satake transform is Weyl-invariant.) We therefore have $$\varphi_{v,\alpha}( \mu(\varpi_v) ) \leqslant \underset{ \lambda \in \text{Conv}(W \mu)}{ \max} | \alpha(\lambda) | \varphi_{v,1}( \mu(\varpi_v) ) = \underset{ w \in W }{ \max} | \alpha( w\mu ) | \varphi_{v,1}( \mu(\varpi_v) ),$$ which completes the proof. ◻ ## Restriction of Hecke operators to $\widetilde{H}$ {#sec:Hecke restriction} In this section, we continue to let $v$ denote a place in $\mathcal P$. If $\tau(v, \mu)|_{\widetilde{H}}$ denotes the restriction of $\tau(v, \mu)$ to $\widetilde{H}$, the following lemma gives a bound for the action of $\tau(v, \mu)|_{\widetilde{H}}$ in an unramified representation of $\widetilde{H}$. 
This will be used to bound the diagonal term in the amplified relative trace inequality. **Lemma 24**. *Let $\alpha \in \widehat{X}_*(T_{\widetilde{H},v})$ be $\theta$-tempered, and let $\pi^{\widetilde{H}}_{v, \alpha}$ be the unramified representation of $\widetilde{H}_v$ with Satake parameter $\alpha$. We assume that the central character of $\pi^{\widetilde{H}}_{v, \alpha}$ is unitary. If $v$ and $v^\vee$ are spherical vectors in $\pi^{\widetilde{H}}_{v, \alpha}$ and $(\pi^{\widetilde{H}}_{v, \alpha})^\vee$ with $\langle v, v^\vee \rangle = 1$, and $\mu \in X_*(T_v)$ is not a multiple of $(1, \ldots, 1)$, we have $$\langle \pi^{\widetilde{H}}_{v, \alpha}( \tau(v, \mu)|_{\widetilde{H}} ) v, v^\vee \rangle \ll q_v^{-1/2 + \theta},$$ where the implied constant depends only on $n$ and $\theta$.* *Proof.* We begin by describing the function $\tau(v, \mu)|_{\widetilde{H}}$. We claim that $$\label{coset restriction} K_v \mu(\varpi_v) K_v \cap \widetilde{H}_v = \bigcup_{\lambda \in W \mu} K_{\widetilde{H},v} \lambda(\varpi_v) K_{\widetilde{H},v}.$$ To prove the claim, it is clear that the right hand set is contained in the left. For the reverse inclusion, let $h \in K_v \mu(\varpi_v) K_v \cap \widetilde{H}_v$. The Cartan decomposition on $\widetilde{H}_v$ implies that $h \in K_{\widetilde{H},v} \lambda(\varpi_v) K_{\widetilde{H},v}$ for some $\lambda \in X_*(T_{\widetilde{H},v})$, and comparing this with the Cartan decomposition on $G_v$ we see that $\lambda \in W\mu$ as required. Let $\lambda^{(1)}, \ldots, \lambda^{(a)}$ be a set of representatives for the $W_H$-orbits in $W \mu$. Equation ([\[coset restriction\]](#coset restriction){reference-type="ref" reference="coset restriction"}) then gives $\tau(v, \mu)|_{\widetilde{H}} = q_v^{- \| \mu \|^*} \sum_{1 \leqslant i \leqslant a} 1_{\widetilde{H}}(v, \lambda^{(i)})$, where $1_{\widetilde{H}}(v, \lambda)$ denotes the characteristic function of the set $K_{\widetilde{H},v} \lambda(\varpi_v) K_{\widetilde{H},v}$. This formula for $\tau(v, \mu)|_{\widetilde{H}}$ gives $$\begin{aligned} \langle \pi^{\widetilde{H}}_{v, \alpha}( \tau(v, \mu)|_{\widetilde{H}} ) v, v^\vee \rangle & = q_v^{- \| \mu \|^*} \sum_{1 \leqslant i \leqslant a} \int_{ K_{\widetilde{H},v} \lambda^{(i)}(\varpi_v) K_{\widetilde{H},v} } \varphi^{\widetilde{H}}_{v, \alpha}(h) dh \\ & = q_v^{- \| \mu \|^*} \sum_{1 \leqslant i \leqslant a} \text{vol}( K_{\widetilde{H},v} \lambda^{(i)}(\varpi_v) K_{\widetilde{H},v} ) \varphi^{\widetilde{H}}_{v, \alpha}( \lambda^{(i)}(\varpi_v)),\end{aligned}$$ where $\varphi^{\widetilde{H}}_{v, \alpha}$ is the spherical function on $\widetilde{H}_v$. We have $\text{vol}( K_{\widetilde{H},v} \lambda^{(i)}(\varpi_v) K_{\widetilde{H},v} ) \ll q_v^{ 2 \| \lambda^{(i)} \|_{\widetilde{H}}^*}$, where the implied constant depends only on $n$. Combining this with Lemma [Lemma 23](#phibound){reference-type="ref" reference="phibound"} for the group $\widetilde{H}_v$ gives $$\label{lambda i sum} \langle \pi^{\widetilde{H}}_{v, \alpha}( \tau(v, \mu)|_{\widetilde{H}} ) v, v^\vee \rangle \ll q_v^{- \| \mu \|^*} \sum_{1 \leqslant i \leqslant a} \underset{ w \in W_H}{ \max} | \alpha(w \lambda^{(i)}) | ( \| \lambda^{(i)} \|^*_{\widetilde{H}} )^{ n(n-1)/2} q_v^{ \| \lambda^{(i)} \|_{\widetilde{H}}^*}.$$ We next examine the relation between $\mu$ and $\lambda^{(i)}$, and the difference $\| \lambda^{(i)} \|_{\widetilde{H}}^* - \| \mu \|^*$. Write $\mu = (\mu_1, \ldots, \mu_{n+1})$, and assume that the $\mu_i$ are in non-increasing order. 
There is a $1 \leqslant k \leqslant n+1$ such that $\lambda^{(i)} = (\mu_1, \ldots, \mu_{k-1}, \mu_{k+1}, \ldots, \mu_{n+1}, \mu_k)$. Moreover, because all the expressions involving $\mu$ and $\lambda^{(i)}$ appearing on the right hand side of ([\[lambda i sum\]](#lambda i sum){reference-type="ref" reference="lambda i sum"}) are invariant under adding multiples of $(1, \ldots, 1)$ to $\mu$, we may assume that $\mu_k = 0$, and therefore that $\mu_1 \geqslant \ldots \geqslant \mu_{k-1} \geqslant 0 \geqslant \mu_{k+1} \geqslant \ldots \geqslant \mu_{n+1}$. (Note that by assuming our original $\mu$ is not a multiple of $(1, \ldots, 1)$, we ensure that this normalized $\mu$ is not zero.) It follows that $$\| \lambda^{(i)} \|_{\widetilde{H}}^* - \| \mu \|^* = -\frac{1}{2}( \mu_1 + \ldots + \mu_{k-1}) + \frac{1}{2}( \mu_{k+1} + \ldots + \mu_{n+1}) = -\frac{1}{2} \sum_{j=1}^{n+1} | \mu_j |.$$ Our assumption that $\alpha$ is $\theta$-tempered implies that $$\underset{ w \in W_H}{ \max} | \alpha(w \lambda^{(i)}) | \leqslant q_v^{\theta \sum_{j = 1}^{n+1} | \lambda^{(i)}_j |} = q_v^{ \theta \sum_{j = 1}^{n+1} | \mu_j |},$$ which gives $$\begin{aligned} q_v^{- \| \mu \|^*} \underset{ w \in W_H}{ \max} | \alpha(w \lambda^{(i)}) | ( \| \lambda^{(i)} \|^*_{\widetilde{H}} )^{ n(n-1)/2} q_v^{ \| \lambda^{(i)} \|_{\widetilde{H}}^*} & \leqslant ( \| \lambda^{(i)} \|^*_{\widetilde{H}} )^{ n(n-1)/2} q_v^{ (-1/2 + \theta) \sum | \mu_j |} \\ & \ll \Big( \sum_{j=1}^{n+1} | \mu_j | \Big)^{ n(n-1)/2} q_v^{ (-1/2 + \theta) \sum | \mu_j |}.\end{aligned}$$ When $\sum | \mu_j | = 1$ this is $\ll q_v^{-1/2 + \theta}$, while for $\sum | \mu_j | \geqslant 2$ it is $\ll_\epsilon q_v^{ (-1/2 + \theta + \epsilon) \sum | \mu_j |} \leqslant q_v^{-1/2 + \theta}$, if $\varepsilon$ is chosen small enough depending on $\theta$. Summing this bound over $i$ in ([\[lambda i sum\]](#lambda i sum){reference-type="ref" reference="lambda i sum"}) completes the proof. ◻ ## Diophantine lemmas {#sec:diophantine} In this section we prove Lemma [Lemma 26](#Hclose){reference-type="ref" reference="Hclose"}, which will be used to show that those $\gamma \in G(F)$ that contribute to the off-diagonal term of the geometric side of the relative trace inequality are bounded away from $\widetilde{H}$. We recall the norms and distance functions introduced in Section [4.1.2](#sec:metrics){reference-type="ref" reference="sec:metrics"}. **Lemma 25**. *Let $v \in \mathcal P$ split in $E$ as $v = u u'$. If $g \in K_v \mu(\varpi_v) K_v$ for $\mu \in X_*(T_v)$, then $\{ \| g \|_u, \| g \|_{u'} \} = \{ q_v^{ \max \mu_i }, q_v^{ \max -\mu_i } \}$.* *Proof.* Because $\| \cdot \|_u$ and $\| \cdot \|_{u'}$ are bi-invariant under $K_v$, it suffices to calculate $\{ \| \mu(\varpi_v) \|_u, \| \mu(\varpi_v) \|_{u'} \}$. We may assume without loss of generality that $u$ is the place used to define the isomorphism $\iota_v$ of Section [3.1.3](#sec:algebraic groups){reference-type="ref" reference="sec:algebraic groups"}. In this case, it follows from our definitions that $\| \mu(\varpi_v) \|_u = q_v^{ \max \mu_i}$. Moreover, the discussion in Subsection [3.1.3](#sec:algebraic groups){reference-type="ref" reference="sec:algebraic groups"} implies that $\| \mu(\varpi_v) \|_{u'}$ is equal to the maximum valuation of the entries of $\mu(\varpi_v)^{-1}$, which is $q_v^{ \max -\mu_i }$. ◻ **Lemma 26**. *Let $\gamma \in G(F)$, and assume that $\gamma_w \in K_w$. 
Let $\| \gamma \|^w = \prod_{u \nmid w} \| \gamma \|_u$ be the contribution to $\| \gamma \|$ from places not dividing $w$. There is $C > 0$ depending only on $V$ such that if $$\label{distance vs height} \| \gamma \|^w d(\gamma_w, K_{\widetilde{H},w})^2 < C$$ then $\gamma \in \widetilde{H}(F)$.* *Proof.* It suffices to prove that ([\[distance vs height\]](#distance vs height){reference-type="ref" reference="distance vs height"}) implies $$\label{V product formula} \prod_u | \langle \gamma x_{n+1}, x_i \rangle_V |_u < 1$$ for all $i \neq n+1$. Indeed, this inequality implies that $\langle \gamma x_{n+1}, x_i \rangle_V = 0$ for all $i \neq n+1$ by the product formula, and hence that $\gamma x_{n+1} \in Ex_{n+1}$ as required. To establish ([\[V product formula\]](#V product formula){reference-type="ref" reference="V product formula"}), we bound $| \langle \gamma x_{n+1}, x_i \rangle_V |_u$ at each place $u$. When $u$ is infinite, we have $$| \langle \gamma x_{n+1}, x_i \rangle_V |_u \leqslant \| \gamma \|_u \| x_{n+1} \|_u \| x_i \|_u \leqslant \| x_{n+1} \|_u \| x_i \|_u,$$ which is bounded by a constant depending only on $V$. Next, suppose that $v \neq w$ is a finite place of $F$ that splits in $E$ as $v = u u'$. Let $a_u \in E_u$ be an element of minimal valuation such that $\gamma L_u \subset a_u L_u$, so that $\| \gamma \|_u = \| a_u \|_u$. By the remarks of Subsection [3.1.2](#sec:Hermitian spaces){reference-type="ref" reference="sec:Hermitian spaces"}, the $u$-component $\langle \gamma x_{n+1}, x_i \rangle_{V,u}$ is equal to $B_v( \gamma x_{n+1,u}, x_{i,u'} )$, where $B_v : V_u \times V_{u'} \to F_v \simeq E_u$ is the bilinear pairing introduced there. Moreover, we have $x_{n+1,u} \in L_u$ and $x_{i,u'} \in L_{u'}$, and our assumption that $\langle \, , \, \rangle_V$ is integral on $L$ means that $B_v$ pairs $L_u$ and $L_{u'}$ integrally. It follows that $B_v( \gamma x_{n+1,u}, x_{i,u'} ) \in a_u \mathcal O_{E,u}$, and hence that $| \langle \gamma x_{n+1}, x_i \rangle_V |_u \leqslant \| \gamma \|_u$. The same bound may also be established when $u$ lies over a nonsplit finite place of $F$. Finally, let $w$ split in $E$ as $u u'$. Let $d(\gamma_w, K_{\widetilde{H},w}) = q^{-l}$, so that $\gamma_w \in K_{\widetilde{H},w} K_w(l)$. This implies that $\gamma x_{n+1,u} \in \mathcal O_{E,u} x_{n+1,u} + \mathfrak p^l L_u$, and it follows as in the case above that $| \langle \gamma x_{n+1}, x_i \rangle_V |_u \leqslant q^{-l} = d(\gamma_w, K_{H,w})$. Combining these bounds gives $$\prod_u | \langle \gamma x_{n+1}, x_i \rangle_V |_u < C_\infty \| \gamma \|^w d(\gamma_w, K_{H,w})^2,$$ where $C_\infty$ is a constant depending only on $V$ coming from the infinite places. If we choose the constant $C$ in the lemma to be $1/C_\infty$, then condition ([\[distance vs height\]](#distance vs height){reference-type="ref" reference="distance vs height"}) implies ([\[V product formula\]](#V product formula){reference-type="ref" reference="V product formula"}) as required. ◻ ## Temperedness of $\pi$ and $\pi_H$ {#sec:tempered} We now deduce the temperedness of $\pi$ and $\pi_H$ at split places from work of Caraiani and Labesse. **Lemma 27**. *The representations $\pi_v$ and $\pi_{H,v}$ are tempered for all $v$ that split in $E/F$.* *Proof.* It suffices to discuss the case of $\pi_v$. 
Labesse [@Lab Corollary 5.3] shows that $\pi$ admits a weak base change to ${\rm GL}_{n+1}(\mathbb A_E)$, which we denote by $\Pi$, that is equal to the isobaric direct sum $\Pi_1 \boxplus \ldots \boxplus \Pi_r$ of discrete, conjugate self-dual representations $\Pi_i$ of ${\rm GL}_{n_i}(\mathbb A_E)$. Moreover, he proves that $\pi$ and $\Pi$ satisfy the usual local compatibility at places that split in $E/F$. Because $\pi_w$ is an irreducible principal series representation, it is generic, and by local-global compatibility $\Pi_u$ is also generic for $u | w$. This implies that each of the discrete representations $\Pi_i$ is in fact cuspidal. The infinitesimal character of $\pi_\infty$ is regular C-algebraic, which by [@Lab Corollary 5.3] implies that the same is true for $\Pi_\infty$. If we let $\eta$ be a unitary character of $\mathbb A_E^\times / E^\times$ whose components at archimedean places are equal to $z / |z|$, it follows as in [@MS Lemma 6.1] that for each $i$, either $\Pi_i$ or $\Pi_i \otimes \eta$ has regular C-algebraic infinitesimal character. We may then apply [@Caraiani] to deduce that $\Pi_i$ is tempered at all places for all $i$. This implies that $\Pi$ is tempered at all places, and hence that $\pi$ is tempered at all split $v$ by local-global compatibility. ◻ # Amplification {#sec:amp} We now begin the proof of Theorem [Theorem 3](#mainperiod){reference-type="ref" reference="mainperiod"}. We recall the notation associated with that theorem, including the representations $\pi$ and $\pi_H$, and the automorphic forms $\phi = \otimes_v \phi_v$ and $\phi_H = \otimes_v \phi_{H,v}$. Let $(J, \widetilde{\lambda})$ and $(J_H, \widetilde{\lambda}_H)$ be the compatible $(\chi, \chi_H)$-types associated with $\phi$ and $\phi_H$. We may assume without loss of generality that $\pi_\infty$ is isomorphic to a fixed irreducible representation $\mu$ of $G_\infty$. It will be convenient to enlarge the period $\mathcal P( \phi, \overline{\phi}_H )$ to the subgroup $\widetilde{H}$. We therefore define $\phi_{ \widetilde{H}} = \otimes_v \phi_{ \widetilde{H},v}$ to be the unique extension of $\phi_H$ to a function on $[\widetilde{H}]$ that transforms under $Z$ according to the central character of $\phi$. We also define the enlarged period $$\widetilde{\mathcal P}( \phi, \overline{\phi}_{ \widetilde{H}} ) = \int_{[\widetilde{H}]} \phi(h) \overline{\phi}_{ \widetilde{H}}(h) dh.$$ We have $\widetilde{\mathcal P}( \phi, \overline{\phi}_{ \widetilde{H}} ) = \text{vol}( [Z]) \mathcal P( \phi, \overline{\phi}_H )$. We have the following variant of Lemma [Lemma 20](#ampineq){reference-type="ref" reference="ampineq"}: if $k_0 \in C^\infty_c( G(\mathbb A))$, and $k = k_0 * k_0^\vee$, then $$\label{amp ineq 2} \text{vol}( [Z])^2 | \mathcal P( R(k_0) \phi, \overline{\phi}_H ) ) |^2 = | \widetilde{\mathcal P}( R(k_0) \phi, \overline{\phi}_{ \widetilde{H}} ) ) |^2 \leqslant \int_{ [\widetilde{H}] \times [\widetilde{H}]} \overline{\phi}_{ \widetilde{H}}(x) \phi_{ \widetilde{H}}(y) \sum_{\gamma \in G(F) } k(x ^{-1} \gamma y) dx dy.$$ For every $v \in \mathcal P\setminus \{ w \}$ and $1 \leqslant j \leqslant n+1$, let $\lambda_\phi( v, j)$ be the eigenvalue of $\tau(v, [j])$ on $\phi$, and let $c(v, j) = \lambda_\phi( v, j) / | \lambda_\phi( v, j)|$, with the convention that $0/0 = 0$. It follows from Proposition [Proposition 22](#amplifier){reference-type="ref" reference="amplifier"} that for all but finitely many $v$ we have $\lambda_\phi( v, j) \gg 1$ for some $j$. 
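For orientation, we record what Proposition [Proposition 22](#amplifier){reference-type="ref" reference="amplifier"}(b) amounts to in the simplest case $n = 1$; this is only an illustration and is not used below. Assuming the Haar measure is normalized so that $\text{vol}(K_v) = 1$ (an assumption made just for this aside), the classical computation of the product of the double cosets of $\text{diag}(\varpi_v, 1)$ and $\text{diag}(\varpi_v^{-1}, 1)$ gives $$\tau(v, [1]) \tau(v, [-1]) = \tau(v, [1,-1]) + (1 + q_v^{-1}) \tau(v, [0,0]),$$ so that $c_{v11} = 1$ and $c_{v01} = 1 + q_v^{-1} \leqslant 2$, in agreement with the bound $c_{vij} \ll 1$.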
We next define the test functions we shall use in our amplification inequality. At the place $w$, it will be convenient to work on cosets of the subgroup $K_w(1)$, which we denote by $K_w^1$ for brevity. We likewise define $K_{H,w}^1 = K_{H,w}(1)$, $J^1 = J \cap K_w^1$, and $J_H^1 = J_H \cap K_{H,w}^1$. We choose our test function $k_{0,w}$ to be the function supported on $J^1$ and equal to $q^{n(n+1)l} \widetilde{\lambda}^{-1}$ there. We choose the archimedean test function $k_{0,\infty} = d_\mu \chi_\mu$ as in Section [3.3](#sec:trivial proof){reference-type="ref" reference="sec:trivial proof"}. For $N > 0$ to be chosen later, define $\mathcal P_N = \{ v \in \mathcal P\setminus \{ w \} : N/2 < q_v < N \}$. For $1 \leqslant j \leqslant n+1$ we let $$T_j^0 = \sum_{v \in \mathcal P_N} \overline{c(v, j)} \tau(v, [j]),$$ which we think of as an element of $\mathcal H^w$. For each such $j$, we define the test function $k^0_j$ to be the product $k_{0,\infty} k_{0,w} T_j^0$. We define $T_j = T_j^0 * (T_j^0)^\vee$, and $k_j = k^0_j * (k^0_j)^\vee$, and note that the latter is equal to $k_{0,\infty} k_{0,w} T_j$ up to a nonzero constant depending only on $n$ and $w$. We also define $T = \sum_{j=1}^{n+1} T_j$ and $k = \sum_{j=1}^{n+1} k_j$. If we apply ([\[amp ineq 2\]](#amp ineq 2){reference-type="ref" reference="amp ineq 2"}) to each $k^0_j$ and sum over $j$, we obtain $$\sum_{j=1}^{n+1} | \mathcal P( R(k^0_j) \phi, \overline{\phi}_H ) |^2 \ll \int_{ [\widetilde{H}] \times [\widetilde{H}]} \overline{\phi}_{ \widetilde{H}}(x) \phi_{ \widetilde{H}}(y) \sum_{\gamma \in G(F) } k(x ^{-1} \gamma y) dx dy.$$ The form $\phi$ is an eigenvector of each $k^0_j$, and if we let $\widehat{k}^0_j(\phi)$ denote the eigenvalue by which it acts, we have $\widehat{k}^0_j(\phi) \gg N^{1-\epsilon}$ for some $j$ by the remark about $\lambda_\phi( v, j)$ above (note that $\#\mathcal P_N \gg N^{1-\epsilon}$, and that by pigeonhole a single $j$ may be chosen for a positive proportion of the $v \in \mathcal P_N$). It follows that $$\sum_{j=1}^{n+1} | \mathcal P( R(k^0_j) \phi, \overline{\phi}_H ) |^2 \gg N^{2-\epsilon} | \mathcal P( \phi, \overline{\phi}_H ) |^2,$$ and hence that $$\label{amplified period bd} N^{2-\epsilon} | \mathcal P( \phi, \overline{\phi}_H ) |^2 \ll \int_{ [\widetilde{H}] \times [\widetilde{H}]} \overline{\phi}_{ \widetilde{H}}(x) \phi_{ \widetilde{H}}(y) \sum_{\gamma \in G(F) } k(x ^{-1} \gamma y) dx dy.$$ The remainder of the proof involves bounding the right hand side of ([\[amplified period bd\]](#amplified period bd){reference-type="ref" reference="amplified period bd"}), which we refer to as the geometric side of the relative trace inequality. We divide this into the diagonal term $${\rm D}(\phi_{ \widetilde{H}}) = \int_{ [\widetilde{H}] \times [\widetilde{H}]} \overline{\phi}_{ \widetilde{H}}(x) \phi_{ \widetilde{H}}(y) \sum_{\gamma \in \widetilde{H}(F) } k(x ^{-1} \gamma y) dx dy,$$ and the off-diagonal term $${\rm OD}(\phi_{ \widetilde{H}}) = \int_{ [\widetilde{H}] \times [\widetilde{H}]} \overline{\phi}_{ \widetilde{H}}(x) \phi_{ \widetilde{H}}(y) \sum_{\gamma \in G(F) - \widetilde{H}(F) } k(x ^{-1} \gamma y) dx dy.$$ We note that ${\rm D}(\phi_{ \widetilde{H}}) = \langle \pi_{\widetilde{H}}( k|_{\widetilde{H}}) \phi_{ \widetilde{H}}, \phi_{ \widetilde{H}} \rangle$, where $\pi_{\widetilde{H}}$ is the automorphic representation of $\widetilde{H}$ extending $\pi_H$. ## Bounding the diagonal term {#sec:diagonal} The diagonal term is controlled by the next proposition. **Proposition 28**.
*We have $$\label{geometric diagonal} {\rm D}(\phi_{ \widetilde{H}}) = \langle \pi_{\widetilde{H}}( k|_{\widetilde{H}}) \phi_{ \widetilde{H}}, \phi_{ \widetilde{H}} \rangle \ll_\epsilon N^{1 + \epsilon} d_\mu^2 q^{nl}.$$* *Proof.* It suffices to prove the bound for each $k_j$, or equivalently for $k_{0,\infty} k_{0,w} T_j$. We have $$\langle \pi_{\widetilde{H}}( k_{0,\infty} k_{0,w} T_j |_{\widetilde{H}}) \phi_{ \widetilde{H}}, \phi_{ \widetilde{H}} \rangle = \langle \pi_{\widetilde{H}}( k_{0,\infty} |_{\widetilde{H}}) \phi_{ \widetilde{H}, \infty}, \phi_{ \widetilde{H}, \infty} \rangle \langle \pi_{\widetilde{H}}( k_{0,w} |_{\widetilde{H}}) \phi_{ \widetilde{H}, w}, \phi_{ \widetilde{H}, w} \rangle \langle \pi_{\widetilde{H}}( T_j |_{\widetilde{H}}) \phi_{ \widetilde{H}}^{w \infty}, \phi_{ \widetilde{H}}^{w \infty} \rangle.$$ We have $\langle \pi_{\widetilde{H}}( k_{0,\infty}|_{\widetilde{H}} ) \phi_{\widetilde{H},\infty}, \phi_{\widetilde{H},\infty} \rangle \ll d_\mu^2$ as in Section [3.3](#sec:trivial proof){reference-type="ref" reference="sec:trivial proof"}. The comments at the end of Section [3.3](#sec:trivial proof){reference-type="ref" reference="sec:trivial proof"} show that $\phi_{\widetilde{H}, w}$ is an eigenvector of $k_{0,w}|_{\widetilde{H}}$ with eigenvalue $\sim q^{nl}$, so that $$\langle \pi_{\widetilde{H}}( k_{0,w}|_{\widetilde{H}} ) \phi_{\widetilde{H}, w}, \phi_{\widetilde{H}, w} \rangle \ll q^{nl}.$$ It therefore suffices to show that $$\label{Tjbound} \langle \pi_{\widetilde{H}}( T_j |_{\widetilde{H}}) \phi_{ \widetilde{H}}^{w \infty}, \phi_{ \widetilde{H}}^{w \infty} \rangle \ll_\epsilon N^{1 + \epsilon}.$$ To do this, we write $T_j$ as a sum of factorizable terms, and apply Lemma [Lemma 24](#Hecke restriction){reference-type="ref" reference="Hecke restriction"} to each of them. Expanding $T_j$ gives $$T_j = \sum_{v \in \mathcal P_N} | c(v,j)|^2 \tau(v, [j]) \tau(v, [-j]) + \sum_{v_1 \neq v_2 \in \mathcal P_N} c(v_1, j) \overline{c(v_2, j)} \tau(v_1, [j]) \tau(v_2, [-j]),$$ which we may write as $T_j = T_j^I + T_j^{II}$. 
We use Proposition [Proposition 22](#amplifier){reference-type="ref" reference="amplifier"} to simplify $T_j^I$ as $$T_j^I = \sum_{v \in \mathcal P_N} \sum_{i = 0}^j c_{vij} | c(v,j)|^2 \tau(v, [i,-i]).$$ Lemma [Lemma 24](#Hecke restriction){reference-type="ref" reference="Hecke restriction"} gives $$\label{k1bound} \left\langle \pi_{\widetilde{H}}\left( \tau(v, [i,-i]) \big|_{\widetilde{H}} \right) \phi_{\widetilde{H}, v}, \phi_{\widetilde{H}, v} \right\rangle \ll \Bigg\{ \begin{array}{ll} 1, & i = 0, \\ q_v^{-1/2}, & i \geqslant 1, \end{array}$$ and summing this over $v \in \mathcal P_N$ and $i$ gives $$\langle \pi_{\widetilde{H}}( T_j^I |_{\widetilde{H}}) \phi_{ \widetilde{H}}^{w \infty}, \phi_{ \widetilde{H}}^{w \infty} \rangle \ll \sum_{v \in \mathcal P_N} 1 + q_v^{-1/2} \ll N.$$ Likewise, Lemma [Lemma 24](#Hecke restriction){reference-type="ref" reference="Hecke restriction"} gives $$\left\langle \pi_{\widetilde{H}}\left( \tau(v_1, [j]) \tau(v_2, [-j]) \big|_{\widetilde{H}} \right) \phi_{ \widetilde{H}}^{w \infty}, \phi_{ \widetilde{H}}^{w \infty} \right\rangle \ll (q_{v_1} q_{v_2})^{-1/2},$$ and summing over $v_1$ and $v_2$ gives $$\langle \pi_{\widetilde{H}}( T_j^{II} |_{\widetilde{H}}) \phi_{ \widetilde{H}}^{w \infty}, \phi_{ \widetilde{H}}^{w \infty} \rangle \ll \left( \sum_{v \in \mathcal P_N} q_v^{-1/2} \right)^2 \ll_\epsilon N^{1 + \epsilon}.$$ This completes the proof of ([\[Tjbound\]](#Tjbound){reference-type="ref" reference="Tjbound"}), and hence of the proposition. ◻ ## Bounding the off-diagonal term {#sec:off-diag} We next bound the off-diagonal term ${\rm OD}(\phi_{\widetilde{H}})$. We shall forgo cancellation in the integrals over $[\widetilde{H}]$, and take absolute values everywhere, which gives $${\rm OD}(\phi_{\widetilde{H}}) \leqslant \int_{ [\widetilde{H}] \times [\widetilde{H}]} | \phi_{ \widetilde{H}}(x) \phi_{ \widetilde{H}}(y) | \sum_{\gamma \in G(F) - \widetilde{H}(F) } |k(x ^{-1} \gamma y)| dx dy.$$ We let $\Omega = \prod_v \Omega_v \subset H(\mathbb A)$ be a compact set containing a fundamental domain for $[\widetilde{H}]$. After enlarging $S$ if necessary, we assume that $\Omega_v = K_{\widetilde{H},v}$ at places $v \notin S$. We may expand the integrals over $[\widetilde{H}]$ to $\Omega$, which gives $$\label{ODsum} {\rm OD}(\phi_{\widetilde{H}}) \leqslant \int_{ \Omega \times \Omega} | \phi_{ \widetilde{H}}(x) \phi_{ \widetilde{H}}(y) | \sum_{\gamma \in G(F) - \widetilde{H}(F) } |k(x ^{-1} \gamma y)| dx dy = \sum_{\gamma \in G(F) - \widetilde{H}(F) } I(\phi_{\widetilde{H}}, \gamma),$$ where $$I(\phi_{\widetilde{H}}, \gamma) = \int_{ \Omega \times \Omega} | \phi_{ \widetilde{H}}(x) \phi_{ \widetilde{H}}(y) k(x ^{-1} \gamma y)| dx dy.$$ The trivial bound for $I(\phi_{\widetilde{H}}, \gamma)$ is $I(\phi_{\widetilde{H}}, \gamma) \ll d_\mu^2 q^{nl} | T(\gamma)|$, and we shall improve this to $$\label{Iimproved} I(\phi_{\widetilde{H}}, \gamma) \ll d_\mu^2 q^{(n-1/2)l} N^{(n+1)/2} | T(\gamma)|;$$ note that we will choose $N$ to be small with respect to $q^l$, so this is in fact a strengthening. This bound is the key ingredient in the proof of Theorem [Theorem 3](#mainperiod){reference-type="ref" reference="mainperiod"}, as the saving of $q^{-l/2}$ it gives will imply a corresponding saving for ${\rm OD}(\phi_{\widetilde{H}})$, from which Theorem [Theorem 3](#mainperiod){reference-type="ref" reference="mainperiod"} follows. 
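To orient the reader, we note how this saving will propagate; the following is an aside that anticipates the bookkeeping at the end of this section and is not used in the proofs below. After dividing the geometric side by the amplifier gain $N^{2-\epsilon}$, the diagonal and off-diagonal contributions to $| \mathcal P( \phi, \overline{\phi}_H ) |^2$ will be of rough size $q^{nl} N^{-1}$ and $q^{(n-1/2)l} N^{(2n^2+3n+1)/2}$ (up to factors of $d_\mu^{2}$ and $N^{\epsilon}$), and $N$ is chosen to balance them. A short sketch of this balancing (assuming the `sympy` library is available; the variable names are ours):

``` python
import sympy as sp

n, x = sp.symbols('n x', positive=True)  # N = q**(x*l)
# Exponents, in powers of q**l, of the two terms bounding |P(phi, phi_H)|^2
# after dividing the geometric side by the amplifier gain N**2:
diag = n - x
offdiag = n - sp.Rational(1, 2) + sp.Rational(1, 2) * (2*n**2 + 3*n + 1) * x
x0 = sp.solve(sp.Eq(diag, offdiag), x)[0]
print(x0)                                 # 1/(2*n**2 + 3*n + 3): exponent of N
print(sp.simplify(n - diag.subs(x, x0)))  # the power saving over q**(n*l)
```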
We prove ([\[Iimproved\]](#Iimproved){reference-type="ref" reference="Iimproved"}) by reducing it to a bound for the norm of an integral operator on $L^2(K_{H,w}^1)$, stated as Proposition [Proposition 29](#offdiagw){reference-type="ref" reference="offdiagw"} below, and which we establish in Section [6](#sec:offdiag){reference-type="ref" reference="sec:offdiag"}. To reduce ([\[Iimproved\]](#Iimproved){reference-type="ref" reference="Iimproved"}) to Proposition [Proposition 29](#offdiagw){reference-type="ref" reference="offdiagw"}, we first isolate the local behavior of $I(\phi_{\widetilde{H}}, \gamma)$ at $w$ by introducing an extra integral over $K_{H,w}^1$, which gives $$\label{Igamma2} I(\phi_{\widetilde{H}}, \gamma) = \text{vol}(K_{H,w}^1)^{-2} \int_{ \Omega \times \Omega} \int_{K_{H,w}^1 \times K_{H,w}^1} | \phi_{ \widetilde{H}}(x h_1) \phi_{ \widetilde{H}}(y h_2) k(h_1^{-1} x^{-1} \gamma y h_2)| dh_1 dh_2 dx dy.$$ For $x \in \widetilde{H}(\mathbb A)$ we define $f_{H,x}: K_{H,w}^1 \to \mathbb C$ by $f_{H,x}(h) = |\phi_{ \widetilde{H}}(xh)|$. With this notation, we may write the inner integral over $K_{H,w}^1 \times K_{H,w}^1$ as $$\begin{gathered} \label{Igamma3} \int_{K_{H,w}^1 \times K_{H,w}^1} | \phi_{ \widetilde{H}}(x h_1) \phi_{ \widetilde{H}}(y h_2) k(h_1^{-1} x^{-1} \gamma y h_2)| dh_1 dh_2 = \\ |k^w( x^{-1} \gamma y )| \int_{K_{H,w}^1 \times K_{H,w}^1} f_{H,x}( h_1) f_{H,y}(h_2) |k_w(h_1^{-1} x^{-1} \gamma y h_2)| dh_1 dh_2.\end{gathered}$$ We note that $| k_w |$ is equal to $q^{n(n+1)l} 1_{J^1}$, up to a constant factor depending only on $n$ and $w$, so we may simplify the integral on the RHS above to $$\label{KHlocal} q^{n(n+1)l} \int_{K_{H,w}^1 \times K_{H,w}^1} f_{H,x}( h_1) f_{H,y}(h_2) 1_{J^1}(h_1^{-1} x^{-1} \gamma y h_2) dh_1 dh_2.$$ We shall bound ([\[KHlocal\]](#KHlocal){reference-type="ref" reference="KHlocal"}) by thinking of it as the matrix coefficient of an operator on $L^2(K_{H,w}^1)$. For any $\delta \in K_w^1$, we define the operator $A_\delta$ on $L^2(K_{H,w}^1)$ by $$A_\delta f(x) = q^{n(n+1)l} \int_{K_{H,w}^1} 1_{J^1}( x^{-1} \delta y) f(y) dy.$$ Because $J^1 \subset K_w^1$, we may assume that $x^{-1} \gamma y \in K_w^1$ in ([\[KHlocal\]](#KHlocal){reference-type="ref" reference="KHlocal"}) as the integral vanishes otherwise. We may therefore rewrite ([\[KHlocal\]](#KHlocal){reference-type="ref" reference="KHlocal"}) as $\langle A_{x^{-1} \gamma y} f_{H,x}, f_{H,y} \rangle$. The key to our off-diagonal estimate will be to bound this matrix coefficient under the assumption, provided by Lemma [Lemma 30](#Hfar){reference-type="ref" reference="Hfar"} below, that the $\gamma$ making a nonzero contribution to the sum in ([\[ODsum\]](#ODsum){reference-type="ref" reference="ODsum"}) have the property that $\gamma_w$ is bounded away from $K_{\widetilde{H},w}$, and hence that $x^{-1} \gamma y$ is bounded away from $K_{\widetilde{H},w}$ for $x, y \in \Omega$. We note that we cannot bound $\langle A_{x^{-1} \gamma y} f_{H,x}, f_{H,y} \rangle$ by bounding the operator norm of $A_{x^{-1} \gamma y}$ alone, as the norm of $A_\delta$ will generically be $\gg q^{nl}$, even for $\delta$ that are far from $K_{ \widetilde{H},w}$. We must also use the information that $\phi_H$ and $f_{H,x}$ are respectively equivariant, and invariant, under $J_H^1$. 
We do this by defining $\Pi$ to be the operator on $L^2(K_{H,w}^1)$ given by averaging over $J^1_H$, so that $$\Pi f(x) = \text{vol}(J_H^1)^{-1} \int_{J_H^1} f(xg) dg,$$ and proving the following bound for the norm of $\Pi A_\delta \Pi$. **Proposition 29**. *Let $\delta \in K_w^1$. The norms of $A_\delta$ and $\Pi A_\delta \Pi$ satisfy the trivial bound $\| A_\delta \| \ll q^{nl}$, and the improvement $$\| \Pi A_\delta \Pi \| \ll q^{nl} \min \{ 1, q^{-l/2} d(\delta, K_{\widetilde{H},w})^{-1/2} \}.$$* We will combine this with the following lower bound on $d(\gamma_w, K_{\widetilde{H},w})$ for the $\gamma$ contributing to ([\[ODsum\]](#ODsum){reference-type="ref" reference="ODsum"}). **Lemma 30**. *If $\gamma \in G(F) - \widetilde{H}(F)$, and $x, y \in \Omega$ satisfy $k(x^{-1} \gamma y) \neq 0$, then $\gamma_w \in K_w$, and $d(\gamma_w, K_{\widetilde{H},w}) \gg N^{-n-1}$.* *Proof.* We begin with the assertion that $\gamma_w \in K_w$. We have $x_w, y_w \in \Omega_w = K_{\widetilde{H},w}$, and $x_w^{-1} \gamma_w y_w \in \text{supp}(k_w) \subset K_w$, so $\gamma_w \in K_w$ as required. Because $\gamma_w \in K_w$ and $\gamma \notin \widetilde{H}(F)$, we may apply Lemma [Lemma 26](#Hclose){reference-type="ref" reference="Hclose"} to deduce that $d(\gamma_w, K_{\widetilde{H},w})^2 \gg 1/\| \gamma \|^w$, where $\| \gamma \|^w = \prod_{u \nmid w} \| \gamma \|_u$. We must therefore show that $\| \gamma \|^w \ll N^{2n+2}$. The contribution to $\| \gamma \|^w$ from places $u$ lying above $v \in S$ is bounded, by the condition that $\gamma_v$ lies in the fixed compact set $\Omega_v \text{supp}(k_v) \Omega_v^{-1}$. We next bound the contribution from places $u$ lying above $v \notin S \cup \{ w \}$ using the fact that $\gamma^S$ must lie in $\text{supp}(T_j)$ for some $1 \leqslant j \leqslant n+1$. Expanding $T_j$ as in the proof of Proposition [Proposition 28](#diagonal bound){reference-type="ref" reference="diagonal bound"} shows that we either have $\gamma^S \in \text{supp}( \tau(v_1, [i,-i]) )$ for $v_1 \in \mathcal P_N$ and some $0 \leqslant i \leqslant j$, or $\gamma^S \in \text{supp}( \tau(v_1, [j]) \tau(v_2, [j]) )$ for $v_1, v_2 \in \mathcal P_N$. In the first case we have $\| \gamma \|_u = 1$ unless $u | v_1$, and if $v_1$ splits as $u_1 u'_1$ then Lemma [Lemma 25](#Hecke height){reference-type="ref" reference="Hecke height"} gives $$\| \gamma \|_{u_1} = \| \gamma \|_{u'_1} = q_{v_1}^i < N^{n+1}$$ so that $\| \gamma \|^w \ll N^{2n+2}$ as required. We likewise have $\| \gamma \|^w \ll N^{2n+2}$ in the second case, which completes the proof. ◻ Lemma [Lemma 30](#Hfar){reference-type="ref" reference="Hfar"} implies that any $\gamma$ contributing to ([\[ODsum\]](#ODsum){reference-type="ref" reference="ODsum"}) must satisfy $d(\gamma_w, K_{\widetilde{H},w}) \gg N^{-n-1}$, and so the same is true for $x^{-1} \gamma y$ for $x, y \in \Omega$. We may therefore apply Proposition [Proposition 29](#offdiagw){reference-type="ref" reference="offdiagw"} with $\delta = x^{-1} \gamma y$ to obtain $$\langle A_{x^{-1} \gamma y} f_{H,x}, f_{H,y} \rangle = \langle \Pi A_{x^{-1} \gamma y} \Pi f_{H,x}, f_{H,y} \rangle \ll \| f_{H,x} \|_2 \| f_{H,y} \|_2 q^{(n - 1/2)l} N^{(n+1)/2},$$ uniformly in $x$, $y$, and $\gamma$. 
Substituting this into ([\[Igamma3\]](#Igamma3){reference-type="ref" reference="Igamma3"}), and then into ([\[Igamma2\]](#Igamma2){reference-type="ref" reference="Igamma2"}), gives $$\label{Ibound2} I(\phi_{\widetilde{H}}, \gamma) \ll q^{(n - 1/2)l} N^{(n+1)/2} \int_{\Omega \times \Omega} |k^w( x^{-1} \gamma y )| \| f_{H,x} \|_2 \| f_{H,y} \|_2 dx dy.$$ We next observe that $k^w( x^{-1} \gamma y ) \ll d_\mu^2 |T(\gamma)|$, uniformly in $x$ and $y$. Indeed, we may write $k^w$ as a product $k_S k^{S \cup \{ w \} }$, and have $k_S( x^{-1} \gamma y ) \ll d_\mu^2$, and $k^{S \cup \{ w \} }( x^{-1} \gamma y ) = k^{S \cup \{ w \} }(\gamma) = T(\gamma)$. Applying this in ([\[Ibound2\]](#Ibound2){reference-type="ref" reference="Ibound2"}), we have $$\begin{aligned} \notag I(\phi_{\widetilde{H}}, \gamma) & \ll d_\mu^2 q^{(n - 1/2)l} N^{(n+1)/2} |T(\gamma)| \int_{\Omega \times \Omega} \| f_{H,x} \|_2 \| f_{H,y} \|_2 dx dy \\ \label{Ibound} & = d_\mu^2 q^{(n - 1/2)l} N^{(n+1)/2} |T(\gamma)| \left( \int_\Omega \| f_{H,x} \|_2 dx \right)^2.\end{aligned}$$ The Cauchy--Schwarz inequality gives $$\left( \int_\Omega \| f_{H,x} \|_2 dx \right)^2 \leqslant \text{vol}(\Omega) \int_\Omega \| f_{H,x} \|_2^2 dx \ll \| \phi_H \|_2^2 = 1,$$ and applying this in ([\[Ibound\]](#Ibound){reference-type="ref" reference="Ibound"}) gives the required bound ([\[Iimproved\]](#Iimproved){reference-type="ref" reference="Iimproved"}) for $I(\phi_{\widetilde{H}}, \gamma)$. With this, we may complete our bound for the geometric side of the trace formula. Applying ([\[Iimproved\]](#Iimproved){reference-type="ref" reference="Iimproved"}) in ([\[ODsum\]](#ODsum){reference-type="ref" reference="ODsum"}) gives $${\rm OD}(\phi_{\widetilde{H}}) \ll d_\mu^2 q^{(n - 1/2)l} N^{(n+1)/2} \sum_{\gamma \in G(F) } |T(\gamma)|.$$ One may easily show that $\sum_{\gamma \in G(F) } |T(\gamma)| \ll \| T \|_1$, and it follows from the definition of $T$ that $\| T \|_1 \ll N^{n^2+n+2}$, so that $${\rm OD}(\phi_{\widetilde{H}}) \ll d_\mu^2 q^{(n - 1/2)l} N^{(2n^2 + 3n+5)/2}.$$ When combined with the diagonal estimate from Proposition [Proposition 28](#diagonal bound){reference-type="ref" reference="diagonal bound"}, we have our final estimate for the geometric side of the amplified relative trace formula, which is $$\int_{ [\widetilde{H}] \times [\widetilde{H}]} \overline{\phi}_{ \widetilde{H}}(x) \phi_{ \widetilde{H}}(y) \sum_{\gamma \in G(F) } k(x ^{-1} \gamma y) dx dy \ll_\epsilon d_\mu^2 q^{nl} N^{1 + \epsilon} + d_\mu^2 q^{(n - 1/2)l} N^{(2n^2 + 3n+5)/2}.$$ Combining this with ([\[amplified period bd\]](#amplified period bd){reference-type="ref" reference="amplified period bd"}) gives $$| \mathcal P( \phi, \overline{\phi}_H ) |^2 \ll_\epsilon d_\mu^2 q^{nl} N^{-1 + \epsilon} + d_\mu^2 q^{(n - 1/2)l} N^{(2n^2 + 3n+1)/2 + \epsilon}.$$ Choosing $N = q^{l / (2n^2 + 3n + 3)}$ gives Theorem [Theorem 3](#mainperiod){reference-type="ref" reference="mainperiod"}. # The local off-diagonal bound {#sec:offdiag} This section contains the proof of Proposition [Proposition 29](#offdiagw){reference-type="ref" reference="offdiagw"}, which is the key local ingredient in our bound for the off-diagonal terms in Section [5](#sec:amp){reference-type="ref" reference="sec:amp"}. As we shall work locally at $w$ in this section, we shall omit $w$ from the notation everywhere. ## The trivial bound We begin with the proof of the trivial bound $\| A_\delta \| \ll q^{nl}$.
We derive this from the following standard bound for integral operators on a measure space, see for instance [@Fo Thm. 6.18]. **Proposition 31**. *Let $K(x,y)$ be a measurable kernel function on a $\sigma$-finite measure space $(X,\mu)$. Suppose there exists $C > 0$ such that $\int_X | K(x,y) | d\mu(y) \leqslant C$ for a.e. $x \in X$ and $\int_X | K(x,y) | d\mu(x) \leqslant C$ for a.e. $y \in X$, and $1 \leqslant p \leqslant \infty$. If $f \in L^p(X)$, the integral $$Tf(x) = \int K(x,y) f(y) d\mu(y)$$ converges absolutely for a.e. $x \in X$, the function $Tf$ thus defined is in $L^p(X)$, and $\| Tf \|_p \leqslant C \| f \|_p$.* If we apply this to the operator $A_\delta$, with integral kernel $q^{n(n+1)l} 1_{J^1}( x^{-1} \delta y)$, it suffices to show that $$\label{kernelint} \underset{x \in K^1_H}{\sup} \int_{K^1_H} 1_{J^1}( x^{-1} \delta y) dy = \underset{x \in K^1_H}{\sup} \text{vol}( K^1_H \cap \delta^{-1} x J^1) \ll q^{-n^2 l},$$ and likewise for the $x$-integrals, although we shall omit these as they are similar to the $y$-integrals. To establish ([\[kernelint\]](#kernelint){reference-type="ref" reference="kernelint"}), let $x \in K^1_H$ be given. We may suppose that there is some $y_0 \in K^1_H$ such that $y_0 \in \delta^{-1} x J^1$, and then we have $$\text{vol}( K^1_H \cap \delta^{-1} x J^1) = \text{vol}( K^1_H \cap y_0^{-1} \delta^{-1} x J^1) = \text{vol}( K^1_H \cap J^1).$$ We have shown in Lemma [Lemma 19](#Atrans){reference-type="ref" reference="Atrans"} that $\text{vol}( K^1_H \cap J^1) \ll q^{-n^2l}$, which gives ([\[kernelint\]](#kernelint){reference-type="ref" reference="kernelint"}) and hence $\| A_\delta \| \ll q^{nl}$. ## Improving the trivial bound We next establish the improved bound $$\| \Pi A_\delta \Pi \| \ll q^{nl} \min \{ 1, q^{-l/2} d(\delta, K_{\widetilde{H},w})^{-1/2} \}.$$ The bound $\| \Pi A_\delta \Pi \| \ll q^{nl}$ follows from $\| A_\delta \| \ll q^{nl}$ and $\| \Pi \| \leqslant 1$, so it suffices to prove $$\label{Aimproved} \| \Pi A_\delta \Pi \| \ll q^{(n-1/2)l} d(\delta, K_{\widetilde{H},w})^{-1/2}.$$ We shall establish this by showing that the support of the integral kernel of $A_\delta$ is transverse to $J^1_H$. In particular, we define $$S = \{ (x, y) \in K^1_H \times K^1_H : x^{-1} \delta y \in J^1 \},$$ which is the support of the function $1_{J^1}( x^{-1} \delta y)$, and define $$\begin{aligned} S_1 & = \{ x \in K^1_H : x^{-1} \delta y \in J^1 \text{ for some } y \in K^1_H \}, \\ S_2 & = \{ y \in K^1_H : x^{-1} \delta y \in J^1 \text{ for some } x \in K^1_H \},\end{aligned}$$ to be its projections to the first and second factors. We shall derive ([\[Aimproved\]](#Aimproved){reference-type="ref" reference="Aimproved"}) from the following transversality statement. **Proposition 32**. *For any $x, y \in K^1_H$, we have $$\textup{vol}(S_1 \cap x J^1_H), \, \textup{vol}(S_2 \cap y J^1_H) \leqslant 2 q^{-l/2+1} d(\delta, K_{\widetilde{H},w})^{-1/2} \textup{vol}(J_H^1).$$* The bound ([\[Aimproved\]](#Aimproved){reference-type="ref" reference="Aimproved"}) follows easily from Proposition [Proposition 32](#transverse){reference-type="ref" reference="transverse"}, as we now demonstrate. 
If we let $1_{S_1}$ and $1_{S_2}$ be the multiplication operators by the corresponding characteristic functions, then we are free to insert $1_{S_1}$ and $1_{S_2}$ before and after $A_\delta$ which gives $$\| \Pi A_\delta \Pi \| = \| \Pi 1_{S_1} A_\delta 1_{S_2} \Pi \| \leqslant \| \Pi 1_{S_1} \| \| A_\delta \| \| 1_{S_2} \Pi \| \ll q^{nl} \| \Pi 1_{S_1} \| \| 1_{S_2} \Pi \|.$$ It therefore suffices to show that $$\label{Pi1bound} \| \Pi 1_{S_1} \|, \, \| 1_{S_2} \Pi \| \ll q^{-l/4} d(\delta, K_{\widetilde{H},w})^{-1/4}.$$ We shall do this for $\| \Pi 1_{S_1} \|$, as the other case is similar. If we use the identity $\| T \|^2 = \| T^* T \|$, valid for any bounded operator on a Hilbert space (where $T^*$ denotes the adjoint), we obtain $\| \Pi 1_{S_1} \|^2 = \| (\Pi 1_{S_1})^* \Pi 1_{S_1} \| = \| 1_{S_1} \Pi 1_{S_1} \|$. The integral kernel of the operator $1_{S_1} \Pi 1_{S_1}$ is the function $\text{vol}( J_H^1)^{-1} 1_{S_1}(x) 1_{J^1_H}(x^{-1} y) 1_{S_1}(y)$, and we apply Proposition [Proposition 31](#kernelbound){reference-type="ref" reference="kernelbound"} to this. For any $x$ we have $$\text{vol}( J_H^1)^{-1} \int_{K^1_H} 1_{S_1}(x) 1_{J^1_H}(x^{-1} y) 1_{S_1}(y) dy \leqslant \text{vol}( J_H^1)^{-1} \text{vol}( S_1 \cap x J^1_H),$$ and this is $\ll q^{-l/2} d(\delta, K_{\widetilde{H},w})^{-1/2}$ by Proposition [Proposition 32](#transverse){reference-type="ref" reference="transverse"}. The corresponding inequality for the $y$-integrals also holds, and so Proposition [Proposition 31](#kernelbound){reference-type="ref" reference="kernelbound"} gives ([\[Pi1bound\]](#Pi1bound){reference-type="ref" reference="Pi1bound"}), and hence ([\[Aimproved\]](#Aimproved){reference-type="ref" reference="Aimproved"}). This completes the proof of Proposition [Proposition 29](#offdiagw){reference-type="ref" reference="offdiagw"}, assuming Proposition [Proposition 32](#transverse){reference-type="ref" reference="transverse"}. ## Transversality We now prove Proposition [Proposition 32](#transverse){reference-type="ref" reference="transverse"}. We shall only prove the bound for $\textup{vol}(S_1 \cap x J^1_H)$, as the other bound is similar. If we define $T^1 = T \cap K^1$ and $T_H^1 = T_H \cap K_H^1$, we may assume that $J^1 = g_0 T^1 g_0^{-1} K(\mathfrak p^l)$ and $J_H^1 = T_H^1 K_H(\mathfrak p^l)$ by the results of Section [2.4](#sec:compatpairs){reference-type="ref" reference="sec:compatpairs"}, where $g_0 \in K$ is as in Lemma [Lemma 17](#adjproj){reference-type="ref" reference="adjproj"}. For any $z \in K_H^1$, we define $I_z = \{ t_H \in T_H^1 : zt_H \in S_1 \} \subset T_H^1$. By disintegrating $x J^1_H$ into cosets of $T^1_H$, we have $$\textup{vol}(S_1 \cap x J^1_H) \leqslant \text{vol}( J_H^1) \underset{z \in K_H^1}{\sup} \frac{ \text{vol}(I_z)}{ \text{vol}(T_H^1)},$$ where the volumes on $T_H$ are taken with respect to the Haar measure obtained as the product of the measures $dt / |t|$ on $F^\times$. Because this measure satisfies $\text{vol}(T_H^1) = q^{-n}$, it suffices to show that $\text{vol}(I_z) \leqslant 2 d(\delta, K_{\widetilde{H},w})^{-1/2} q^{-l/2-n+1}$ for all $z$. An elementary manipulation gives that $$S_1 = K_H^1 \cap \delta K_H^1 J^1 = K_H^1 \cap \delta K_H^1 g_0 T^1 g_0^{-1} K(\mathfrak p^l),$$ and hence $$\begin{aligned} \notag I_z & = \{ t_H \in T_H^1 : zt_H \in \delta K_H^1 g_0 T^1 g_0^{-1} K(\mathfrak p^l) \} \\ \label{Iz} & = \{ t_H \in T_H^1 : \delta^{-1} zt_H g_0 \in K_H^1 g_0 T^1 K(\mathfrak p^l) \}.\end{aligned}$$ For simplicity, we denote $\delta^{-1} z$ by $\sigma$. 
The properties we shall need of $\sigma$ are that $\sigma \in K^1$, and $d(\sigma, K_{\widetilde{H}}) = d( \delta, K_{\widetilde{H}})$. Roughly speaking, the condition $\sigma t_H g_0 \in K_H^1 g_0 T^1 K(\mathfrak p^l)$ in ([\[Iz\]](#Iz){reference-type="ref" reference="Iz"}) says that the images of $\sigma t_H g_0$ and $g_0$ in $H \backslash G / T$ are equal 'modulo $\mathfrak p^l$'. We may therefore study this condition using the invariant functions on $H \backslash G / T$. For $1 \leqslant i \leqslant n+1$, we define the functions $$P_i(g) = g_{(n+1)i}, \quad Q_i(g) = (g^{-1})_{i(n+1)}$$ on $G$, which are polynomials in $g_{ij}$ and $(\det g)^{-1}$ with integral coefficients. We note the relation $\sum P_i Q_i = 1$. These satisfy the equivariance relations $P_i(h g t) = t_i P_i(g)$ and $Q_i(h g t) = t_i^{-1} Q_i(g)$ for $h \in H$ and $t = \text{diag}(t_1, \ldots, t_{n+1}) \in T$, so that $P_i Q_i(g)$ is a function on $H \backslash G / T$. We have the following lemma. **Lemma 33**. *If $t_H \in I_z$, then we have $P_iQ_i( \sigma t_H g_0) \in P_iQ_i(g_0) + \mathfrak p^l$ for all $i$.* *Proof.* By equation ([\[Iz\]](#Iz){reference-type="ref" reference="Iz"}), the condition $t_H \in I_z$ implies that $\sigma t_H g_0 \in K_H^1 g_0 T^1 K(\mathfrak p^l)$, so we may write $\sigma t_H g_0 = k_H g_0 t k$ with $k_H \in K_H^1$, $t \in T$, and $k \in K(\mathfrak p^l)$. The entries of $k_H g_0 t$ and $k_H g_0 t k$ lie in $\mathcal O$ and are congruent modulo $\mathfrak p^l$, and when combined with the fact that the coefficients of $P_i$ and $Q_i$ are integral this gives $P_i Q_i( k_H g_0 t k) \in P_iQ_i( k_H g_0 t) + \mathfrak p^l$. The invariance of $P_i Q_i$ implies that $P_iQ_i( k_H g_0 t) = P_i Q_i(g_0)$, which completes the proof. ◻ We may therefore bound the volume of $I_z$ by finding an $i$ such that the polynomial $P_iQ_i( \sigma t_H g_0)$ on $T_H^1$ is not approximately constant. If we write $t_H = \text{diag}(t_{H,1}, \ldots, t_{H,n})$, a calculation gives that $$\begin{aligned} P_i(\sigma t_H g_0) & = \sum_{j = 1}^n \sigma_{(n+1)j} (g_0)_{ji} t_{H,j} + \sigma_{(n+1)(n+1)} (g_0)_{(n+1)i}, \\ Q_i(\sigma t_H g_0) & = \sum_{j = 1}^n (\sigma^{-1})_{j(n+1)} (g_0^{-1})_{ij} t_{H,j}^{-1} + (\sigma^{-1})_{(n+1)(n+1)} (g_0^{-1})_{i(n+1)}.\end{aligned}$$ We recall from Lemma [Lemma 18](#g0entry){reference-type="ref" reference="g0entry"} that all entries of $g_0$ and $g_0^{-1}$ lie in $\mathcal O^\times$. When combined with the fact that $\sigma \in K^1$, this implies that the constant terms of $P_i(\sigma t_H g_0)$ and $Q_i(\sigma t_H g_0)$ are in $\mathcal O^\times$, and the norms of the coefficients of $t_{H,j}$ and $t_{H,j}^{-1}$ are equal to $| \sigma_{(n+1)j} |$ and $| (\sigma^{-1})_{j(n+1)} |$. If we let $d = d(\sigma, K_{\widetilde{H}})$, we have $| \sigma_{(n+1)j} |, \, | (\sigma^{-1})_{j(n+1)} | \leqslant d$ for all $j$, with equality realized in at least one case. We assume that $| \sigma_{(n+1)j} | = d$ for some $j$, as the case when $| (\sigma^{-1})_{j(n+1)} | = d$ is similar. It follows that the coefficient of $t_{H,j}$ in $P_i(\sigma t_H g_0)$ has valuation $d$ for all $i$. If we fix $t_{H,k}$ for $k \neq j$ and consider $P_i(\sigma t_H g_0)$ and $Q_i(\sigma t_H g_0)$ as functions of $t_{H,j}$ alone, we have $$\label{tHrational} P_i Q_i(\sigma t_H g_0) = c_{-1} t_{H,j}^{-1} + c_0 + c_1 t_{H,j},$$ where $| c_{-1} | \leqslant d$, $|c_0| = 1$, and $|c_1| = d$. 
If we apply Lemma [Lemma 34](#poly){reference-type="ref" reference="poly"} below to the function ([\[tHrational\]](#tHrational){reference-type="ref" reference="tHrational"}) with $y = P_i Q_i(g_0)$, and combine this with Lemma [Lemma 33](#PQconst){reference-type="ref" reference="PQconst"}, we see that the set of $t_{H,j}$ with $(t_{H,1}, \ldots, t_{H,j}, \ldots, t_{H,n}) \in I_z$ has measure at most $2 d^{-1/2} q^{-l/2}$. Integrating in the other variables gives $\text{vol}(I_z) \leqslant 2 d^{-1/2} q^{-l/2-n+1}$ as required. **Lemma 34**. *Let $f: 1 + \mathfrak p\to \mathcal O$ be given by $f(x) = c_{-1} x^{-1} + c_0 + c_1 x$, where $c_i \in \mathcal O$, and $|c_1| = d$. Then for any $y \in \mathcal O$, we have $\textup{vol} ( f^{-1}( y + \mathfrak p^l) ) \leqslant 2 d^{-1/2} q^{-l/2}$.* *Proof.* We may assume that $y = 0$. We may then replace $f$ with $xf$, and write $x = 1+z$ for $z \in \mathfrak p$. This converts the problem to bounding $\text{vol} (g^{-1}(\mathfrak p^l) )$ where $g(z) = a + bz + cz^2$, and $|c| = d$. By performing another additive change of variable we may assume that $a = 0$. If $| g(z) | = |cz(b/c+z)| \leqslant q^{-l}$, then we have either $|z| \leqslant d^{-1/2} q^{-l/2}$ or $|b/c + z| \leqslant d^{-1/2} q^{-l/2}$, and the set of such $z$ has volume $\leqslant 2 d^{-1/2} q^{-l/2}$ as required. ◻ # Proof of the subconvex bound We now combine Theorem [Theorem 3](#mainperiod){reference-type="ref" reference="mainperiod"} with the Ichino--Ikeda formula of Beuzart-Plessis--Chaudouard--Zydor [@BPCZ] to deduce Theorem [Theorem 6](#mainsubconvex){reference-type="ref" reference="mainsubconvex"}. ## The unitary Ichino--Ikeda formula In this subsection, we let $\pi \times \pi_H$ be a general tempered representation of $G \times H$, and recall the unitary Ichino--Ikeda formula for $\pi \times \pi_H$ as stated in [@BPCZ]. We assume that $\pi \times \pi_H$ has a weak base change to ${\rm GL}_{n+1} \times {\rm GL}_n / E$, denoted $\Pi \times \Pi_H$, that satisfies the conditions of [@BPCZ Section 1.1.3]. We refer to Remark [Remark 2](#tempered2){reference-type="ref" reference="tempered2"} for discussion of the existence of such a lift. We define the product of special $L$-values $\mathcal L( 1/2, \pi \times \pi_H^\vee)$ by $$\mathcal L( 1/2, \pi \times \pi_H^\vee) = \prod_{i = 1}^{n+1} \Lambda(i, \eta^i) \frac{ \Lambda(1/2, \Pi \times \Pi_H^\vee) }{ \Lambda(1, \text{Ad}\, \pi) \Lambda(1, \text{Ad}\, \pi_H^\vee) },$$ where $\eta$ is the quadratic character associated to $E/F$, with corresponding local factors $\mathcal L( 1/2, \pi_v \times \pi_{H,v}^\vee)$. Our assumption that $\pi$ and $\pi_H$ are tempered implies that $\mathcal L( 1/2, \pi \times \pi_H^\vee)$ and $\mathcal L( 1/2, \pi_v \times \pi_{H,v}^\vee)$ are well-defined and nonzero. We continue to use the measures on $G(\mathbb A)$ and $H(\mathbb A)$ defined in Section [3.1.5](#sec:measures){reference-type="ref" reference="sec:measures"}, and choose invariant inner products on $\pi_v$ and $\pi_{H,v}$ that factorize the global Peterssen inner product. Let $\phi = \otimes_v \phi_v \in \pi$ and $\phi_H = \otimes_v \phi_{H,v} \in \pi_H$ be factorizable vectors, and assume that $\phi$ and $\phi_H$, as well as their local factors, have norm one. We have $\overline{\phi}_H \in \pi_H^\vee$. 
Moreover, if we identify $\pi_H^\vee$ and $\pi_{H,v}^\vee$ with the complex conjugate representations $\overline{\pi}_H$ and $\overline{\pi}_{H,v}$ via the Hermitian inner product, then we have local factors $\overline{\phi}_{H,v} \in \overline{\pi}_{H,v} \simeq \pi_{H,v}^\vee$ and a factorization $\overline{\phi}_H = \otimes_v \overline{\phi}_{H,v}$. For every $v$, we define the normalized local period $$\mathcal P_v^\natural(\phi_v \times \overline{\phi}_{H,v} ) = \mathcal L( 1/2, \pi_v \times \pi_{H,v}^\vee)^{-1} \int_{H_v} \langle \pi_v(h) \phi_v, \phi_v \rangle \overline{ \langle \pi_{H,v}(h) \phi_{H,v}, \phi_{H,v} \rangle } dh.$$ We recall that $\mathcal P_v^\natural(\phi_v \times \overline{\phi}_{H,v} ) = 1$ if all data are unramified, so we may define the product of these periods over all $v$. The period formula of [@BPCZ Thm 1.1.6.1] then implies that $$\label{IchinoIkeda} | \mathcal P_H( \phi, \overline{\phi}_H) |^2 = C \mathcal L(1/2, \pi \times \pi_H^\vee) \prod_v \mathcal P_v^\natural(\phi_v \times \overline{\phi}_{H,v} ),$$ where $C > 0$ is a constant such that $C$ and $C^{-1}$ are bounded depending on $n$ and the choice of measures. ## Proof of Theorem [Theorem 6](#mainsubconvex){reference-type="ref" reference="mainsubconvex"} {#proof-of-theorem-mainsubconvex} We recall the notation used in the statement of Theorem [Theorem 6](#mainsubconvex){reference-type="ref" reference="mainsubconvex"}, including the finite sets $\Theta_\infty$ and $\Theta_{H,\infty}$ and the family $\mathcal F_\text{SC}$, and we now let $\pi \times \pi_H$ be a representation in $\mathcal F_\text{SC}$. We will deduce Theorem [Theorem 6](#mainsubconvex){reference-type="ref" reference="mainsubconvex"} by applying Theorem [Theorem 3](#mainperiod){reference-type="ref" reference="mainperiod"} and the period relation ([\[IchinoIkeda\]](#IchinoIkeda){reference-type="ref" reference="IchinoIkeda"}) to vectors $\phi = \otimes_v \phi_v \in \pi$ and $\phi_H = \otimes_v \phi_{H,v} \in \pi_H$, which will be chosen so that the local periods $\mathcal P_v^\natural(\phi_v \times \overline{\phi}_{H,v} )$ are controlled for all $v$. As we aim to give an upper bound for $L(1/2, \Pi \times \Pi_H^\vee)$, we only need to bound these periods from below. We know that $\mathcal P_v^\natural(\phi_v \times \overline{\phi}_{H,v} ) = 1$ if $v \notin S \cup \{ w \}$. For $v = w$, we choose $\phi_w$ and $\phi_{H,w}$ to be compatible microlocal lift vectors with norm one. Lemma [Lemma 19](#Atrans){reference-type="ref" reference="Atrans"} then implies that the un-normalized local period $$\int_{H_w} \langle \pi_w(h) \phi_w, \phi_w \rangle \overline{ \langle \pi_{H,w}(h) \phi_{H,w}, \phi_{H,w} \rangle } dh$$ is equal to $\text{vol}( K_H(l) ) \sim q^{-n^2 l}$, which implies that the normalized period also satisfies $\mathcal P_w^\natural(\phi_w \times \overline{\phi}_{H,w} ) \sim q^{-n^2l}$. For finite primes $v \in S$, we use the local vectors given by following lemma, which is taken from Lemma 5.2 of [@Ne1]. **Lemma 35**. *Given $K_v$ and $K_{H,v}$, there is $c > 0$, and subgroups $K_v' < K_v$ and $K_{H,v}' < K_{H,v}$, such that there exist $\phi_v \in \pi_v^{K_v'}$ and $\phi_{H,v} \in \pi_{H,v}^{K_{H,v}'}$ such that $|\mathcal P_v^\natural(\phi_v \times \overline{\phi}_{H,v} )| > c$.* Finally, for $v = \infty$, there are a finite number of choices for $\pi_\infty$ and $\pi_{H,\infty}$, and for each we pick $\phi_\infty$ and $\phi_{H,\infty}$ such that $\mathcal P_\infty^\natural(\phi_\infty \times \overline{\phi}_{H,\infty} ) \neq 0$. 
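Before completing the argument, we record for orientation the numerical size of the saving; this aside only uses the admissible range for $\delta$ stated in Theorem [Theorem 6](#mainsubconvex){reference-type="ref" reference="mainsubconvex"} and recalled in the next paragraph. A minimal sketch (the helper name `delta_bound` is ours):

``` python
from fractions import Fraction

def delta_bound(n):
    # Admissible range delta < 1/(4n(n+1)(2n^2+3n+3)) from Theorem 6.
    return Fraction(1, 4 * n * (n + 1) * (2 * n**2 + 3 * n + 3))

for n in (1, 2, 3):
    print(n, delta_bound(n))  # 1 1/64, 2 1/408, 3 1/1440
```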
With this choice of $\phi$ and $\phi_H$, the period formula ([\[IchinoIkeda\]](#IchinoIkeda){reference-type="ref" reference="IchinoIkeda"}) gives $$q^{n^2 l} | \mathcal P_H( \phi, \overline{\phi}_H) |^2 \gg \mathcal L(1/2, \pi \times \pi_H^\vee).$$ Moreover, applying Theorem [Theorem 3](#mainperiod){reference-type="ref" reference="mainperiod"} (with the subgroups $K_v$ and $K_{H,v}$ for finite $v \in S$ replaced with $K_v'$ and $K_{H,v}'$) gives $$\label{ellbound} q^{(n^2 + n-2\delta_0)l} \gg \mathcal L(1/2, \pi \times \pi_H^\vee)$$ for any $\delta_0 < \tfrac{1}{4n^2 + 6n + 6}$. We have $$\mathcal L(1/2, \pi \times \pi_H^\vee) \sim \frac{ L(1/2, \Pi \times \Pi_H^\vee) }{ L(1, \text{Ad}\, \pi) L(1, \text{Ad}\, \pi_H^\vee) }$$ by our assumption that the local factors $\pi_\infty$ and $\pi_{H,\infty}$ lie in fixed finite sets, and also that $$L(1, \text{Ad}\, \pi), \quad L(1, \text{Ad}\, \pi_H^\vee) \ll q^{\epsilon l},$$ which follows from the tempered assumption on $\pi$ and $\pi_H$ by a simple application of the functional equation and Phragmén--Lindelöf. Combining these with ([\[ellbound\]](#ellbound){reference-type="ref" reference="ellbound"}) gives $$q^{(n^2 + n-2\delta_0)l} \gg L(1/2, \Pi \times \Pi_H^\vee),$$ which, upon setting $\delta = \delta_0/(2n(n+1))$, is the same as the bound $q^{l n (n+1)(1 - 4 \delta)} \gg L(1/2, \Pi \times \Pi_H^\vee)$ for any $\delta < \tfrac{1}{4n(n+1)(2n^2 + 3n + 3)}$ stated in Theorem [Theorem 6](#mainsubconvex){reference-type="ref" reference="mainsubconvex"}. Finally, as discussed in Remark [Remark 3](#conductor){reference-type="ref" reference="conductor"}, we have the bound $C(\Pi \times \Pi_H) \ll q^{4l n(n+1)}$, which completes the proof.

I. N. Bernstein, A. V. Zelevinsky: *Induced representations of reductive $p$-adic groups. I*, A.S.E.N.S. 10 no. 4 (1977), 441-472.

R. Beuzart-Plessis, P.-H. Chaudouard, M. Zydor: *The global Gan-Gross-Prasad conjecture for unitary groups: the endoscopic case*, Pub. Math. I.H.E.S 135 (2022), 183-336.

V. Blomer, J. Buttcane: *On the subconvexity problem for $L$-functions on GL(3)*, Ann. Sci. Éc. Norm. Supér. (4), 53(6):1441--1500, 2020.

V. Blomer, P. Maga: *The sup-norm problem for $PGL(4)$*, IMRN 2015 (vol. 14), 5311-5332.

A. Caraiani: *Local-global compatibility and the action of monodromy on nearby cycles*, Duke Math. J. 161(12) (2012), 2311-2413.

P. Cartier: *Representations of $\mathfrak{p}$-adic groups: a survey*, in Proceedings of Symposia in Pure Mathematics vol. 33 (1979), part 1, pp. 111-155.

W. Casselman: *The unramified principal series of $p$-adic groups I*, Compositio Math. 40 (1980), 387-406.

K. Y. Fan, G. Pall: *Imbedding conditions for Hermitian and normal matrices*, Canadian J. Math. 9 (1957), 298-304. doi:10.4153/CJM-1957-036-1.

G. Folland: *Real analysis: modern techniques and their applications, second edition*, Wiley, 1999.

R. Howe: *Some qualitative results on the representation theory of $GL_n$ over a $p$-adic field*, Pacific J. Math. 73 no. 2 (1977), 479-538.

Y. Hu, P. Nelson: *Subconvex bounds for $U_{n+1} \times U_n$ in horizontal aspects*, available at arxiv.org:2309.06314.

H. Iwaniec, P. Sarnak: *Perspectives on the analytic theory of L-functions*, Geom. Funct. Anal., (Special Volume, Part II, 2000), 705--741.

T. Kaletha, A. Minguez, S.-W. Shin, P.-J. White: *Endoscopic classification of representations: Inner forms of unitary groups*, arXiv:1409.3731.

T. Kaletha, G. Prasad: *Bruhat--Tits Theory: A New Approach*, New Mathematical Monographs vol. 44, Cambridge University Press (2023).

J.-P. Labesse: *Changement de base CM et séries discrètes*, in On the Stabilization of the Trace Formula, Stab. Trace Formula Shimura Var. Arith. Appl., 1. International Press, Somerville, MA, 2011.

X. Li: *Bounds for $GL(3) \times GL(2)$ $L$-functions and $GL(3)$ $L$-functions*, Ann. of Math. (2), 173(1):301--336, 2011.

W. Luo, Z. Rudnick, P. Sarnak: *On the generalized Ramanujan conjecture for GL(n)*, Proc. Sym. Pure Math., 66, Vol. II, (1999), 301-311.

I. G. Macdonald: *Spherical functions on a group of $p$-adic type*, Ramanujan Institute, Centre for Advanced Study in Mathematics, University of Madras, Madras, 1971. Publications of the Ramanujan Institute no. 2.

S. Marshall: *Semiclassical analysis, amplification, and subconvexity*, video lecture, available at www.youtube.com/watch?v=zAKurx0UNi8.

S. Marshall, S. W. Shin: *Endoscopy and cohomology in a tower of congruence manifolds for $U(n,1)$*, Forum Math. Sigma 7 (2019), DOI: https://doi.org/10.1017/fms.2019.13.

C. P. Mok: *Endoscopic classification of representations of quasi-split unitary groups*, Mem. Amer. Math. Soc., 235(1108):vi+248, 2015.

R. Munshi: *The circle method and bounds for $L$-functions--III: $t$-aspect subconvexity for $GL(3)$ $L$-functions*, J. Amer. Math. Soc., 28(4):913--938, 2015.

R. Munshi: *The circle method and bounds for $L$-functions--IV: subconvexity for twists of $GL(3)$ $L$-functions*, Ann. of Math. (2), 182(2):617--672, 2015.

R. Munshi: *The subconvexity problem for $L$-functions*, in Proceedings of the International Congress of Mathematicians--Rio de Janeiro 2018. Vol. II. Invited lectures, pages 363--376. World Sci. Publ., Hackensack, NJ, 2018.

R. Munshi: *Subconvexity for $GL(3) \times GL(2)$ $L$-functions in $t$-aspect*, J. Eur. Math. Soc. 24 (2022), no. 5, 1543--1566.

P. Nelson: *Spectral aspect subconvex bounds for ${\rm U}_{n+1} \times {\rm U}_n$*, Invent. Math. 232 (2023), 1273-1438.

P. Nelson: *Bounds for standard $L$-functions*, available at arxiv:2109.15230.

P. Nelson, A. Venkatesh: *The orbit method and analysis of automorphic forms*, Acta Math. 226 (2021), 1--209.

A. Roche: *Types and Hecke algebras for principal series representations of split reductive $p$-adic groups*, A.S.E.N.S. 31 no. 3 (1998), 361-413.

L. Yang: *Relative Trace Formula, Subconvexity and Quantitative Nonvanishing of Rankin-Selberg L-functions for $GL(n+1) \times GL(n)$*, available at arxiv:2309.07534.

[^1]: The authors in [@FP] work over the reals, but they prove an identity of polynomials that is valid over any ring.
arxiv_math
{ "id": "2309.16667", "title": "Subconvexity for $L$-functions on ${\\rm U}(n) \\times {\\rm U}(n+1)$ in\n the depth aspect", "authors": "Simon Marshall", "categories": "math.NT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | The optimal $L^4$-Strichartz estimate for the Schrödinger equation on $\mathbb{T}^2$ is proved, which improves an estimate of Bourgain. A new method based on incidence geometry is used. In addition, the approach yields a uniform $L^4$ bound on a logarithmic time scale, which implies global existence of solutions to the mass-critical NLS in $H^s(\mathbb{T}^2)$ for any $s>0$ and small data. address: - Fakultat für Mathematik, Universität Bielefeld, Postfach 10 01 31, 33501 Bielefeld, Germany - Department of Mathematical Sciences, KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon, Korea author: - Sebastian Herr - Beomjong Kwak bibliography: - BibTd.bib title: Strichartz estimates and global well-posedness of the cubic NLS on $\mathbb{T}^{2}$ --- # Introduction {#sec:intro} In the seminal work [@bourgain1993fourier] Bourgain proved Strichartz estimates for the Schrödinger equation on (rational) tori. More precisely, in dimension $d=2$, his endpoint estimate states that there exists $c>0$ such that for all $\phi \in L^2(\mathbb{T}^2)$ and $N \in \mathbf{\mathbb{N}}$ $$\|e^{it\Delta} P_N \phi \|_{L^4_{t,x}([0,2\pi]\times \mathbb{T}^2)}\leq C_N \|\phi\|_{L^2(\mathbb{T}^2)}, \text{ where } C_N= c\exp\Big(c\frac{\log(N)}{\log\log (N)}\Big).$$ The proof in [@bourgain1993fourier] is based on the circle method and can be reduced to an estimate for the number of divisors function, which necessitates the above constant $C_N$. However, in the example $\widehat{\phi}=\chi_{[-N,N]^2\cap \mathbf{\mathbb{Z}}^2}$ we have $$\|e^{it\Delta} P_N \phi \|_{L^4_{t,x}([0,2\pi]\times \mathbb{T}^2)}\sim (\log N)^{\frac14} \|\phi\|_{L^2(\mathbb{T}^2)},$$ see [@bourgain1993fourier; @takaoka20012d; @kishimoto2014remark]. In this paper, we prove the sharp estimate by using methods of incidence geometry. ** 1**. *There exists $c>0$, such that for all bounded sets $S\subset\mathbf{\mathbb{Z}}^{2}$ and all $\phi\in L^{2}(\mathbb{T}^{2})$ we have $$\|P_{S}e^{it\Delta}\phi\|_{L_{t,x}^{4}([0,2\pi]\times\mathbb{T}^{2}])}\leq c \left(\log\#S\right)^{1/4}\cdot\|\phi\|_{L^{2}}.\label{eq:log Stri}$$* The proof relies on an upper bound for the number of rectangles with vertices in a given set [@pach1992repeated], which follows from the Szemerédi-Trotter Theorem [@Szemeredi-trotter; @tao2006additive]. A refinement of the methods also gives an even stronger result. ** 2**. *There exists $c>0$, such that for all bounded sets $S\subset\mathbf{\mathbb{Z}}^{2}$ and all $\phi\in L^{2}(\mathbb{T}^{2})$ we have $$\|P_{S}e^{it\Delta}\phi\|_{L_{t,x}^{4}([0,\frac{1}{\log\#S}]\times\mathbb{T}^{2}])}\leq c \|\phi\|_{L^{2}}.\label{eq:localStri}$$* Here, the proof is based on a counting argument for parallelograms with vertices in a given set, which again relies on the Szemerédi-Trotter Theorem. * 3*. Theorem [ 2](#thm:local Stri){reference-type="ref" reference="thm:local Stri"} implies Theorem [ 1](#thm:log Stri){reference-type="ref" reference="thm:log Stri"}. Applying [\[eq:localStri\]](#eq:localStri){reference-type="eqref" reference="eq:localStri"} to each interval $[\frac{k-1}{m},\frac{k}{m}]$, $k=1,\ldots,m$ for $m\sim\log\#S$, we obtain [\[eq:log Stri\]](#eq:log Stri){reference-type="eqref" reference="eq:log Stri"}. The $L^4$-Strichartz estimate plays a distinguished role in the analysis of the cubic nonlinear Schrödinger equation (cubic NLS) $$iu_{t}+\Delta u=\pm|u|^{2}u, \qquad u|_{t=0}=u_{0}\in H^{s}(\mathbb{T}^{2}), \tag{{NLS}}\label{eq:NLS}$$ which is $L^2(\mathbb{T}^2)$-critical. 
[\[eq:NLS\]](#eq:NLS){reference-type="eqref" reference="eq:NLS"} is known to be locally well-posed in Sobolev spaces $H^s(\mathbb{T}^2)$ for $s>0$ due to [@bourgain1993fourier]. By the conservation of energy, this implies global well-posedness in $H^1(\mathbb{T}^2)$ for small enough data [@bourgain1993fourier Theorem 2]. In the defocusing case, this has been refined to global well-posedness in $H^s(\mathbb{T}^2)$ for $s>2/3$, see [@de2007global; @fan2018bilinear]. Theorem [ 2](#thm:local Stri){reference-type="ref" reference="thm:local Stri"} has the following consequence: ** 4**. *Let $s>0$. There exists $\delta>0$, such that for initial data $u_{0}\in H^{s}(\mathbb{T}^2)$ with $\|u_{0}\|_{L^{2}(\mathbb{T}^2)}\leq \delta$ the Cauchy problem [\[eq:NLS\]](#eq:NLS){reference-type="eqref" reference="eq:NLS"} is globally well-posed.* The proof is based on an estimate showing that $\|u(t)\|_{H^s(\mathbb{T}^2)}$ can grow only by a fixed multiplicative constant on a logarithmic time scale (see [\[eq:GWP main claim\]](#eq:GWP main claim){reference-type="eqref" reference="eq:GWP main claim"}) and the fact that $\sum_{N\in 2^\mathbf{\mathbb{N}}} 1/\log N=\infty$. This argument crucially relies on the estimate in Theorem [ 2](#thm:local Stri){reference-type="ref" reference="thm:local Stri"} and the precise dependence on $\# S$. ## Outline of the paper {#subsec:outline .unnumbered} We introduce notation and provide the Szemerédi-Trotter Theorem in Section [2](#sec:pre){reference-type="ref" reference="sec:pre"}. In Section [3](#sec:proofs){reference-type="ref" reference="sec:proofs"} we provide the proofs of the $L^4$-bounds in Theorem [ 1](#thm:log Stri){reference-type="ref" reference="thm:log Stri"} and Theorem [ 2](#thm:local Stri){reference-type="ref" reference="thm:local Stri"}. Finally, in Section [4](#sec:proof-gwp){reference-type="ref" reference="sec:proof-gwp"} we prove the global well-posedness result, i.e. Theorem [ 4](#thm:GWP){reference-type="ref" reference="thm:GWP"}. # Preliminaries {#sec:pre} We denote $A\lesssim B$ if $A\le CB$ for some constant $C$. Given a set $E$, we denote $\chi_{E}$ as the sharp cutoff at $E$. For proposition $P$, denote by $1_{P}$ the indicator function $1_{P}:=\begin{cases} 1 & \text{, \ensuremath{P} is true}\\ 0 & \text{, otherwise} \end{cases}$. Denote by $\mathbb{T}^{2}$ the square torus $\mathbb{T}^{2}:=(\mathbf{\mathbb{R}}/2\pi\mathbf{\mathbb{Z}})^{2}$. For function $f:\mathbb{T}^{2}\rightarrow\mathbf{\mathbb{C}}$, $\widehat{f}$ denotes the Fourier series on $\mathbb{T}^{2}$. For $S\subset\mathbf{\mathbb{Z}}^{2}$, we denote by $P_{S}$ the Fourier multiplier $\widehat{P_{S}f}:=\chi_{S}\cdot\widehat{f}$, where $\chi$ is a (sharp) characteristic function. $2^{\mathbf{\mathbb{N}}}$ denotes the set of dyadic numbers. For dyadic number $N\in2^{\mathbf{\mathbb{N}}}$, we denote by $P_{\le N}$ the sharp Littlewood-Paley cutoff $P_{\le N}f:=P_{[-N,N]^{2}}f$. We denote $P_{N}:=P_{\le N}-P_{\le N/2}$, where we set $P_{\le1/2}:=0$. For function $\phi:\mathbb{T}^{2}\rightarrow\mathbf{\mathbb{C}}$ and time $t\in\mathbf{\mathbb{R}}$, we define $e^{it\Delta}\phi$ as the function such that $$\widehat{e^{it\Delta}\phi}(\xi)=e^{- it\left|\xi\right|^{2}}\widehat{\phi}(\xi).$$ For simplicity, we denote $u_{N}=P_{N}u$ and $u_{\le N}=P_{\le N}u$, for $u:\mathbb{T}^2\rightarrow\mathbf{\mathbb{C}}$. 
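To make this notation concrete, the following short numerical sketch (an illustration of ours, not part of the paper's arguments; the cutoff $N$, the sampling resolutions and the Fourier normalization are ad hoc choices) evaluates $P_{S}e^{it\Delta}\phi$ for the extremal example $\widehat{\phi}=\chi_{[-N,N]^{2}\cap\mathbf{\mathbb{Z}}^{2}}$ mentioned in the introduction and approximates the ratio of the space-time $L^{4}$ norm in Theorem [ 1](#thm:log Stri){reference-type="ref" reference="thm:log Stri"} to $\|\phi\|_{L^{2}}$ by Riemann sums.

```python
import numpy as np

N, M = 8, 64                       # frequency cutoff and spatial samples per direction (ad hoc)
k = np.fft.fftfreq(M, d=1.0 / M)   # integer frequencies in FFT ordering
K1, K2 = np.meshgrid(k, k, indexing="ij")
in_S = (np.abs(K1) <= N) & (np.abs(K2) <= N)
phi_hat = np.where(in_S, 1.0 + 0j, 0.0)            # sharp cutoff: \hat{phi} = chi_S

def u(t):
    # samples of P_S e^{it Delta} phi on an M x M grid of T^2,
    # with the convention f(x) = sum_xi \hat f(xi) e^{i xi . x}
    return np.fft.ifft2(np.exp(-1j * t * (K1 ** 2 + K2 ** 2)) * phi_hat) * M ** 2

times = np.linspace(0.0, 2.0 * np.pi, 1024, endpoint=False)
L4 = (np.mean([np.mean(np.abs(u(t)) ** 4) for t in times])
      * 2.0 * np.pi * (2.0 * np.pi) ** 2) ** 0.25  # Riemann sum for the L^4_{t,x} norm
L2 = (np.sum(np.abs(phi_hat) ** 2) * (2.0 * np.pi) ** 2) ** 0.5   # Plancherel for this convention
print(L4 / L2, np.log(in_S.sum()) ** 0.25)
```

For increasing $N$ the printed ratio should grow roughly in proportion to $(\log\#S)^{1/4}$, in line with the lower bound recalled in the introduction; the constant depends on the chosen normalization.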
## Geometric notations on $\mathbf{\mathbb{Z}}^{2}$ {#geometric-notations-on-mathbfmathbbz2 .unnumbered} Denote by $\mathbf{\mathbb{Z}}_{\mathrm{irr}}^{2}$ the set of coprime integer pairs. For integer point $(a,b)\in\mathbf{\mathbb{Z}}^{2}$, $(a,b)^{\perp}$ denotes $(-b,a)$. For integer point $(a,b)\in\mathbf{\mathbb{Z}}^{2}\setminus\left\{ 0\right\}$, $\gcd\left((a,b)\right)$ denotes $\gcd(a,b)$. A *parallelogram* is a quadruple $Q=(\xi_{1},\xi_{2},\xi_{3},\xi_{4})\in(\mathbf{\mathbb{Z}}^{2})^{4}$ such that $\xi_{1}+\xi_{3}=\xi_{2}+\xi_{4}$. *Segments* and *points* are two-element pairs and elements of $\mathbf{\mathbb{Z}}^{2}$, respectively. We call by the edges of $Q$ either the segments $(\xi_{1},\xi_{2}),(\xi_{2},\xi_{3}),(\xi_{3},\xi_{4}),(\xi_{4},\xi_{1})$, or the vectors $\pm\left(\xi_{1}-\xi_{2}\right),\pm\left(\xi_{2}-\xi_{3}\right)$. For $S\subset\mathbf{\mathbb{Z}}^{2}$, $\mathcal{\mathcal{Q}}(S)$ denote the set of all parallelograms in $S$. $\mathcal{\mathcal{D}}(S)$ denotes the set of parallelograms $(\xi_{1},\xi_{2},\xi_{3},\xi_{4})\in\mathcal{\mathcal{Q}}(S)$ such that $\xi_2$ is equal to either $\xi_1$ or $\xi_3$. For $Q=(\xi_{1},\xi_{2},\xi_{3},\xi_{4})\in\mathcal{\mathcal{Q}}(\mathbf{\mathbb{Z}}^{2})\setminus\mathcal{\mathcal{D}}(\mathbf{\mathbb{Z}}^{2})$, we denote by $gcd^{-}(Q)$ the number $$\text{gcd}^{-}(Q):=\min\left\{ \gcd(\xi_{1}-\xi_{2}),\gcd(\xi_{2}-\xi_{3})\right\} .$$ For parallelogram $Q=(\xi_{1},\xi_{2},\xi_{3},\xi_{4})\in\mathcal{\mathcal{Q}}(\mathbf{\mathbb{Z}}^{2})$, we denote by $\tau_{Q}$ the number $$\tau_{Q}=\tau(\xi_{1},\xi_{2},\xi_{3},\xi_{4})=\left|\left|\xi_{1}\right|^{2}-\left|\xi_{2}\right|^{2}+\left|\xi_{3}\right|^{2}-\left|\xi_{4}\right|^{2}\right|=2\left|\left(\xi_{1}-\xi_{2}\right)\cdot\left(\xi_{2}-\xi_{3}\right)\right|.$$ For $\tau\in\mathbf{\mathbb{N}}$, we denote by $\mathcal{\mathcal{Q}}^{\tau}(S)$ the set of parallelograms $Q\in\mathcal{\mathcal{Q}}(S)$ such that $\tau_{Q}=\tau$. Thus, in particular, $\mathcal{\mathcal{Q}}^{0}(S)$ is the set of rectangles in $S$. For $\xi\in\mathbf{\mathbb{Z}}_{\mathrm{irr}}^{2}$, we denote by $\mathcal{\mathcal{Q}}_{\xi}^{\tau}(S)$ the set of parallelograms in $\mathcal{\mathcal{Q}}^{\tau}(S)$ with an edge parallel to $\xi$. When there is no ambiguity, we will abbreviate the set $S$ and simply denote $\mathcal{\mathcal{Q}},\mathcal{\mathcal{D}},\mathcal{\mathcal{Q}}^{\tau},\mathcal{\mathcal{Q}}_{\xi}^{\tau}$. ## Szemerédi-Trotter {#szemerédi-trotter .unnumbered} The following is a consequence of Szemerédi-Trotter theorem of incidence geometry. ** 5** ([@tao2006additive Corollary 8.5]). *Let $S\subset\mathbf{\mathbb{R}}^{2}$ be a set of $n$ points, where $n\in\mathbf{\mathbb{N}}$. Let $k\ge2$ be an integer. The number $m$ of lines in $\mathbf{\mathbb{R}}^{2}$ passing through at least $k$ points of $S$ is bounded by $$m\lesssim\frac{n^{2}}{k^{3}}+\frac{n}{k}.\label{eq:SzTr}$$* * 6*. An optimizer $S$ for [\[eq:SzTr\]](#eq:SzTr){reference-type="eqref" reference="eq:SzTr"} is a lattice $S=\mathbf{\mathbb{Z}}^{2}\cap[-N,N]^{2},N\in\mathbf{\mathbb{N}}$. # Proof of Theorem [ 1](#thm:log Stri){reference-type="ref" reference="thm:log Stri"} and Theorem [ 2](#thm:local Stri){reference-type="ref" reference="thm:local Stri"} {#sec:proofs} Although Theorem [ 2](#thm:local Stri){reference-type="ref" reference="thm:local Stri"} implies Theorem [ 1](#thm:log Stri){reference-type="ref" reference="thm:log Stri"}, we will first prove Theorem [ 1](#thm:log Stri){reference-type="ref" reference="thm:log Stri"} separately. 
Both the result itself and argument of proof of Theorem [ 1](#thm:log Stri){reference-type="ref" reference="thm:log Stri"} will be crucial to the proof of Theorem [ 2](#thm:local Stri){reference-type="ref" reference="thm:local Stri"}. ** 7**. *Assume that for bounded set $S\subset\mathbf{\mathbb{Z}}^{2}$, we have $$\#\mathcal{\mathcal{Q}}^{0}(S)\lesssim\#S^{2}\cdot\log\#S.\label{eq:=000023S ineq}$$ We have $$\|e^{it\Delta}P_{S}\phi\|_{L_{t,x}^{4}([0,2\pi]\times\mathbb{T}^{2})}\lesssim\left(\log\#S\right)^{1/4}\|\phi\|_{L^{2}(\mathbb{T}^{2})}.\label{eq:main}$$* *Proof.* [\[eq:main\]](#eq:main){reference-type="eqref" reference="eq:main"} is equivalent to showing that for each finite set $S\subset\mathbf{\mathbb{Z}}^{2}$, we have $$C(S):=\sup_{\widehat{\phi}\in\ell^{2}(S)\setminus\left\{ 0\right\} }\frac{1}{\|\widehat{\phi}\|_{\ell^{2}(S)}^{4}}\sum_{(\xi_{1},\xi_{2},\xi_{3},\xi_{4})=Q\in\mathcal{\mathcal{Q}}^{0}(S)}\widehat{\phi}(\xi_{1})\overline{\widehat{\phi}(\xi_{2})}\widehat{\phi}(\xi_{3})\overline{\widehat{\phi}(\xi_{4})}\lesssim\log\#S.$$ Since the summation increases when we take the absolute value on $\widehat{\phi}$ and $S$ is finite, we have a nonzero data $\widehat{\phi_{*}}\ge0$ such that $$C(S)=\frac{1}{\|\widehat{\phi_{*}}\|_{\ell^{2}(S)}^{4}}\sum_{(\xi_{1},\xi_{2},\xi_{3},\xi_{4})=Q\in\mathcal{\mathcal{Q}}^{0}(S)}\widehat{\phi_{*}}(\xi_{1})\widehat{\phi_{*}}(\xi_{2})\widehat{\phi_{*}}(\xi_{3})\widehat{\phi_{*}}(\xi_{4}).$$ Let $S_{*}\subset S$ be the support of $\widehat{\phi_{*}}$. For small $\epsilon>0$, we define $\phi_{*}^{\epsilon}$ as $\widehat{\phi_{*}^{\epsilon}}=1_{S_{*}}\cdot(\widehat{\phi_{*}}-\epsilon)$. Since $\phi_{*}$ is a maximizer to $C(S)$, we have $$\frac{d}{d\epsilon}\left(\sum_{(\xi_{1},\xi_{2},\xi_{3},\xi_{4})=Q\in\mathcal{\mathcal{Q}}^{0}(S)}\widehat{\phi_{*}^{\epsilon}}(\xi_{1})\widehat{\phi_{*}^{\epsilon}}(\xi_{2})\widehat{\phi_{*}^{\epsilon}}(\xi_{3})\widehat{\phi_{*}^{\epsilon}}(\xi_{4})-C(S)\cdot\|\widehat{\phi_{*}^{\epsilon}}\|_{\ell^{2}(S)}^{4}\right)\mid_{\epsilon=0}\le0,$$ which implies $$\sum_{Q\in{\mathcal{\mathcal{Q}}^0}(S_{*})}\sum_{\{\eta_1,\eta_2,\eta_3\}\subset Q}\widehat{\phi_{*}}(\eta_1)\widehat{\phi_{*}}(\eta_2)\widehat{\phi_{*}}(\eta_3)\ge4C(S)\cdot\|\widehat{\phi_*}\|_{\ell^{2}(S_{*})}^{2}\|\widehat{\phi_{*}}\|_{\ell^{1}(S_{*})}.$$ Thus, we have [\[eq:main\]](#eq:main){reference-type="eqref" reference="eq:main"} once we showed that for $S_{*}\subset S\subset\mathbf{\mathbb{Z}}^{2}$ and $\widehat{\phi_{*}}\in\ell^{2}(S_{*})$, $$\sum_{Q\in{\mathcal{\mathcal{Q}}^0}(S_*)}\sum_{\{\eta_1,\eta_2,\eta_3\}\subset Q}\widehat{\phi_{*}}(\eta_1)\widehat{\phi_{*}}(\eta_2)\widehat{\phi_{*}}(\eta_3)\lesssim\log\#S_{*}\cdot\|\widehat{\phi_{*}}\|_{\ell^{2}(S_{*})}^{2}\|\widehat{\phi_{*}}\|_{\ell^{1}(S_{*})}.$$ Iterating the same process four times, [\[eq:main\]](#eq:main){reference-type="eqref" reference="eq:main"} reduces to showing that for all finite set $S_{****}\subset\mathbf{\mathbb{Z}}^{2}$, $$\#{\mathcal{\mathcal{Q}}^0}(S_{****})\lesssim\log\#S_{****}\cdot\#S_{****}^{2},$$ which is the assumption and finishes the proof. ◻ Now we prove Theorem [ 1](#thm:log Stri){reference-type="ref" reference="thm:log Stri"} by showing [\[eq:=000023S ineq\]](#eq:=000023S ineq){reference-type="eqref" reference="eq:=000023S ineq"}. We point out that [\[eq:=000023S ineq\]](#eq:=000023S ineq){reference-type="eqref" reference="eq:=000023S ineq"} is a direct consequence of [@pach1992repeated Theorem 1]. 
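For small sets, the bound [\[eq:=000023S ineq\]](#eq:=000023S ineq){reference-type="eqref" reference="eq:=000023S ineq"} can also be checked by brute force. The following sketch (an illustration of ours; the lattice squares $[-N,N]^{2}\cap\mathbf{\mathbb{Z}}^{2}$ serve as ad hoc test sets) counts $\#\mathcal{\mathcal{Q}}^{0}(S)$ by bucketing the ordered pairs of $S$, viewed as ordered diagonals, according to their sum and squared length: given the parallelogram condition $\xi_{1}+\xi_{3}=\xi_{2}+\xi_{4}$ one has $\left||\xi_{1}-\xi_{3}|^{2}-|\xi_{2}-\xi_{4}|^{2}\right|=2\tau_{Q}$, so a quadruple lies in $\mathcal{\mathcal{Q}}^{0}$ exactly when both of its diagonals fall into the same bucket.

```python
import math
from collections import Counter
from itertools import product

def count_Q0(S):
    # Bucket ordered pairs of S (the ordered diagonals) by their sum and squared
    # length; each ordered quadruple in Q^0(S) corresponds to choosing two ordered
    # pairs from the same bucket, so #Q^0(S) is the sum of c^2 over buckets.
    buckets = Counter()
    for (a1, a2), (b1, b2) in product(S, repeat=2):
        buckets[(a1 + b1, a2 + b2, (a1 - b1) ** 2 + (a2 - b2) ** 2)] += 1
    return sum(c * c for c in buckets.values())

for N in (2, 4, 8, 16):
    S = [(i, j) for i in range(-N, N + 1) for j in range(-N, N + 1)]
    n = len(S)
    print(n, count_Q0(S), round(n ** 2 * math.log(n)))
```

The ratio of the last two printed numbers should stay bounded, as [\[eq:=000023S ineq\]](#eq:=000023S ineq){reference-type="eqref" reference="eq:=000023S ineq"} predicts.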
Still, we give another proof of [\[eq:=000023S ineq\]](#eq:=000023S ineq){reference-type="eqref" reference="eq:=000023S ineq"} since its refinement will be crucial later in the proof of Theorem [ 2](#thm:local Stri){reference-type="ref" reference="thm:local Stri"}. *Proof of Theorem [ 1](#thm:log Stri){reference-type="ref" reference="thm:log Stri"}.* We prove [\[eq:=000023S ineq\]](#eq:=000023S ineq){reference-type="eqref" reference="eq:=000023S ineq"}, which implies Theorem [ 1](#thm:log Stri){reference-type="ref" reference="thm:log Stri"} by Lemma [ 7](#lem:Strichartz to sets){reference-type="ref" reference="lem:Strichartz to sets"}. We denote $n=\#S$. While the argument will have similarities with the proof of [@pach1992repeated Theorem 1], in particular in using the Szemerédi-Trotter Theorem, we count differently. We define two types of rectangles, $\mathcal{\mathcal{Q}}_{\le\sqrt{n}}^{0}$ and $\mathcal{\mathcal{Q}}_{>\sqrt{n}}^{0}$, as the following. - Let $\mathcal{\mathcal{Q}}_{\le\sqrt{n}}^{0}$ be the set of $Q\in\mathcal{\mathcal{Q}}^{0}\setminus\mathcal{\mathcal{D}}$ such that two orthogonal edges $e_{1},e_{2}$ of $Q$ extend to lines $\ell_{1},\ell_{2}$ with $\sqrt{n}\ge\#\left(\ell_{1}\cap S\right)\ge\#\left(\ell_{2}\cap S\right)$. Such $e_{1},e_{2}$ are said to be the first and second principal edges of $\mathcal{\mathcal{Q}}_{\le\sqrt{n}}^{0}$. - Let $\mathcal{\mathcal{Q}}_{>\sqrt{n}}^{0}$ be the set of $Q\in\mathcal{\mathcal{Q}}^{0}\setminus\mathcal{\mathcal{D}}$ such that two parallel edges $e_{1},e_{2}$ of $Q$ extend to lines $\ell_{1},\ell_{2}$ with $\#\left(\ell_{1}\cap S\right)\ge\#\left(\ell_{2}\cap S\right)>\sqrt{n}$. Such $e_{1},e_{2}$ are said to be the first and second principal edges of $\mathcal{\mathcal{Q}}_{>\sqrt{n}}^{0}$. We note that first and second principal edges of a rectangle are not necessarily unique. We have $\mathcal{\mathcal{Q}}^{0}=\mathcal{\mathcal{Q}}_{\le\sqrt{n}}^{0}\cup\mathcal{\mathcal{Q}}_{>\sqrt{n}}^{0}\cup\mathcal{\mathcal{D}}$. *Counting $\#\mathcal{\mathcal{Q}}_{\le\sqrt{n}}^{0}$*. Let $a\lesssim\sqrt{n}$ be a dyadic number. For each line $\ell\subset\mathbf{\mathbb{R}}^{2}$ such that $a<\#(\ell\cap S)\le2a$, we have at most $2a^{2}$ edges $e$ with vertices in $\ell\cap S$. For each $e$, there are at most $4a$ members of $\mathcal{\mathcal{Q}}_{\le \sqrt{n}}^{0}$ with first principal edge $e$. Thus, by [\[eq:SzTr\]](#eq:SzTr){reference-type="eqref" reference="eq:SzTr"}, we have $$\#\mathcal{\mathcal{Q}}_{\le\sqrt{n}}^{0}\lesssim\sum_{\substack{a\in2^{\mathbf{\mathbb{N}}}\\ a\lesssim\sqrt{n} } }a^{2}\cdot a\cdot\left(\frac{n^{2}}{a^{3}}+\frac{n}{a}\right)\lesssim\sum_{\substack{a\in2^{\mathbf{\mathbb{N}}}\\ a\lesssim\sqrt{n} } }n^{2}\lesssim n^{2}\log n.$$ \ *Counting $\#\mathcal{\mathcal{Q}}_{>\sqrt{n}}^{0}$*. Let $a\gtrsim\sqrt{n}$ be a dyadic number. For each line $\ell\subset\mathbf{\mathbb{R}}^{2}$ such that $a<\#\left(\ell\cap S\right)\le2a$, we have at most $2a^{2}$ segments $e$ with vertices in $\ell\cap S$. By [\[eq:SzTr\]](#eq:SzTr){reference-type="eqref" reference="eq:SzTr"}, the number of lines intersecting more than $a$ points of $S$ is at most comparable to $\frac{n}{a}$. Thus, for each $e$, the number of member $Q\in\mathcal{\mathcal{Q}}_{>\sqrt{n}}^{0}$ with second principal edge $e$ is at most comparable to $\frac{n}{a}$. 
Thus, by [\[eq:SzTr\]](#eq:SzTr){reference-type="eqref" reference="eq:SzTr"}, we have $$\#\mathcal{\mathcal{Q}}_{>\sqrt{n}}^{0}\lesssim\sum_{\substack{a\in2^{\mathbf{\mathbb{N}}}\\ a\gtrsim\sqrt{n} } }2a^{2}\cdot\frac{n}{a}\cdot\frac{n}{a}\lesssim n^{2}\log n.$$ Therefore, we have $$\#\mathcal{\mathcal{Q}}^{0}\le\#\mathcal{\mathcal{Q}}_{\le\sqrt{n}}^{0}+\#\mathcal{\mathcal{Q}}_{>\sqrt{n}}^{0}+\#\mathcal{\mathcal{D}}\lesssim n^{2}\log n,$$ finishing the proof.0◻ *Proof of Theorem [ 2](#thm:local Stri){reference-type="ref" reference="thm:local Stri"}.* We will prove that for each $M\in2^{\mathbf{\mathbb{N}}}$ and $S\subset\mathbf{\mathbb{Z}}^{2}$ with $\#S=n<\infty$, we have $$\frac{1}{M}\sum_{\tau\sim M}\#\mathcal{\mathcal{Q}}^{\tau}(S)\lesssim n^{2}.\label{eq:Qt}$$ [\[eq:Qt\]](#eq:Qt){reference-type="eqref" reference="eq:Qt"} implies [\[eq:localStri\]](#eq:localStri){reference-type="eqref" reference="eq:localStri"}. Indeed, following the proof of Lemma [ 7](#lem:Strichartz to sets){reference-type="ref" reference="lem:Strichartz to sets"}, assuming [\[eq:Qt\]](#eq:Qt){reference-type="eqref" reference="eq:Qt"} we also have for $\phi\in L^{2}$ with $\mathrm{supp}(\widehat{\phi})\subset S$: $$\frac{1}{M}\sum_{\substack{\tau\sim M\\ Q\in\mathcal{\mathcal{Q}}^{\tau} } }\left|\widehat{\phi}(Q)\right|\lesssim\|\phi\|_{L^{2}}^{4}.\label{eq:Qt'}$$ Here, $\widehat{\phi}(Q)$ denotes $\mathrm{Re}\left(\widehat{\phi}(\xi_{1})\overline{\widehat{\phi}(\xi_{2})}\widehat{\phi}(\xi_{3})\overline{\widehat{\phi}(\xi_{4})}\right)$ for parallelogram $Q=(\xi_{1},\xi_{2},\xi_{3},\xi_{4})$. Denote $T_{0}=1/\log n$. We have $$\begin{aligned} & \int_{0}^{T_{0}}\int_{\mathbb{T}^{2}}\left|e^{it\Delta}\phi\right|^{4}dxdt\\ & \le\frac{1}{T_{0}}\int_{T_{0}}^{2T_{0}}\int_{0}^{T}\int_{\mathbb{T}^{2}}\left|e^{it\Delta}\phi\right|^{4}dxdtdT\\ & \sim\frac{1}{T_{0}}\int_{T_{0}}^{2T_{0}}\int_{0}^{T}\widehat{\left|e^{it\Delta}\phi\right|^{4}}(0)dtdT\\ & \sim \frac{1}{T_{0}}\int_{T_{0}}^{2T_{0}}\sum_{Q\in\mathcal{\mathcal{Q}}}\widehat{\phi}\left(Q\right)\cdot\mathrm{Re}\int_{0}^{T}e^{it\tau_{Q}}dtdT\\ & \sim\sum_{Q\in\mathcal{\mathcal{Q}}}\widehat{\phi}\left(Q\right)\cdot\frac{\cos\left(T_{0}\tau_{Q}\right)-\cos\left(2T_{0}\tau_{Q}\right)}{T_{0}\tau_{Q}^{2}}\\ & \lesssim T_0\cdot\sum_{Q\in{\mathcal{\mathcal{Q}}^0}}\left|\widehat{\phi}(Q)\right|+\sum_{\substack{\tau>0} }\min\left\{ T_{0},\frac{1}{T_{0}\tau^{2}}\right\} \sum_{Q\in\mathcal{\mathcal{Q}}^{\tau}}\left|\widehat{\phi}(Q)\right|\\ & \lesssim T_0\log n\cdot\|\phi\|_{L^{2}}^{4}+\sum_{M\in2^{\mathbf{\mathbb{N}}}}\min\left\{ T_{0},\frac{1}{T_{0}M^{2}}\right\} M\|\phi\|_{L^{2}}^{4}\\ & \lesssim\|\phi\|_{L^{2}}^{4},\end{aligned}$$ which then finishes the proof of [\[eq:localStri\]](#eq:localStri){reference-type="eqref" reference="eq:localStri"}. We show [\[eq:Qt\]](#eq:Qt){reference-type="eqref" reference="eq:Qt"}. Let $\mathcal{\mathcal{Q}}_{\mid\tau}^{0}$ be the set of rectangles $Q\in\mathcal{\mathcal{Q}}^{0}\setminus \mathcal{\mathcal{D}}$ with at least one edge $\xi$ satisfying $\gcd(\xi)\mid\tau$. Our plan is a two-step proof: 1. for positive integer $\tau\in\mathbf{\mathbb{N}}$, we show $$\#\mathcal{\mathcal{Q}}^{\tau}\lesssim\#\mathcal{\mathcal{Q}}_{\mid\tau}^{0}+n^{2},\label{eq:(1)}$$ 2. 
for $M\in2^{\mathbf{\mathbb{N}}}$, we show $$\frac{1}{M}\sum_{\tau\sim M}\#\mathcal{\mathcal{Q}}_{\mid\tau}^{0}\lesssim n^{2}.\label{eq:(2)}$$ Combining [\[eq:(1)\]](#eq:(1)){reference-type="eqref" reference="eq:(1)"} and [\[eq:(2)\]](#eq:(2)){reference-type="eqref" reference="eq:(2)"} immediately gives [\[eq:Qt\]](#eq:Qt){reference-type="eqref" reference="eq:Qt"}, which also finishes the proof. We prove [\[eq:(1)\]](#eq:(1)){reference-type="eqref" reference="eq:(1)"}. For $\xi\in\mathbf{\mathbb{Z}}_{\mathrm{irr}}^{2}$ and $a\in2^{\mathbf{\mathbb{N}}}$, let $\mathcal{\mathcal{L}}_{\xi,a}$ be the set of lines $\ell\subset\mathbf{\mathbb{R}}^{2}$ parallel to $\xi$ such that $a<\#(\ell\cap S)\le2a$. We claim that $$\begin{aligned} \#\mathcal{\mathcal{Q}}_{\xi}^{\tau} & \lesssim\#\left(\mathcal{\mathcal{Q}}_{\mid\tau}^{0}\cap\mathcal{\mathcal{Q}}_{\xi}^{0}\right)+\sum_{a\in2^{\mathbf{\mathbb{N}}}}a^{2}\cdot\#\mathcal{\mathcal{L}}_{\xi,a}.\label{eq:main claim for (1)}\end{aligned}$$ For $k,\sigma\in\mathbf{\mathbb{Z}}$ we denote by $\mathcal{\mathcal{E}}_{k\xi}^{\sigma}$ the set of segments $(\xi_{1},\xi_{2})$ such that $\xi_{2}-\xi_{1}=k\xi$ and $\xi_{1}\cdot\xi=\sigma$. We have $$\begin{aligned} \#\mathcal{\mathcal{Q}}_{\xi}^{\tau} & =\sum_{\substack{\substack{k\mid\tau\\ \sigma_{1}-\sigma_{2}=\pm\tau/(2k) } } }\#\mathcal{\mathcal{E}}_{k\xi}^{\sigma_{1}}\cdot\#\mathcal{\mathcal{E}}_{k\xi}^{\sigma_{2}}\\ & \lesssim\sum_{\substack{k\mid\tau\\ \sigma\in\mathbf{\mathbb{Z}} } }\left(\#\mathcal{\mathcal{E}}_{k\xi}^{\sigma}\right)^{2}\\ & \lesssim\sum_{\substack{k\mid\tau\\ \sigma\in\mathbf{\mathbb{Z}} } }\#\mathcal{\mathcal{E}}_{k\xi}^{\sigma}\left(\#\mathcal{\mathcal{E}}_{k\xi}^{\sigma}-1\right)+\sum_{\substack{k,\sigma\in\mathbf{\mathbb{Z}}\\ k\neq 0}}\#\mathcal{\mathcal{E}}_{k\xi}^{\sigma}\\ & \lesssim\#\left(\mathcal{\mathcal{Q}}_{\mid\tau}^{0}\cap\mathcal{\mathcal{Q}}_{\xi}^{0}\right)+\sum_{a\in2^{\mathbf{\mathbb{N}}}}a^{2}\cdot\#\mathcal{\mathcal{L}}_{\xi,a},\end{aligned}$$ which is just [\[eq:main claim for (1)\]](#eq:main claim for (1)){reference-type="eqref" reference="eq:main claim for (1)"}. Summing up [\[eq:main claim for (1)\]](#eq:main claim for (1)){reference-type="eqref" reference="eq:main claim for (1)"} per $\xi\in\mathbf{\mathbb{Z}}_{\mathrm{irr}}^{2}$, by [\[eq:SzTr\]](#eq:SzTr){reference-type="eqref" reference="eq:SzTr"}, gives $$\#\mathcal{\mathcal{Q}}^{\tau}\lesssim\#\mathcal{\mathcal{Q}}_{\mid\tau}^{0}+\sum_{a\in2^{\mathbf{\mathbb{N}}}}a^{2}\cdot\left(\frac{n^{2}}{a^{3}}+\frac{n}{a}\right)\lesssim\#\mathcal{\mathcal{Q}}_{\mid\tau}^{0}+n^{2}.$$ We pass to showing [\[eq:(2)\]](#eq:(2)){reference-type="eqref" reference="eq:(2)"}. We have $$\frac{1}{M}\sum_{\tau\sim M}\#\mathcal{\mathcal{Q}}_{\mid\tau}^{0}\lesssim\sum_{Q\in\mathcal{\mathcal{Q}}^{0}\setminus\mathcal{\mathcal{D}}}\frac{1}{M}\cdot\sum_{\substack{\tau\sim M\\ e\text{ : edge of $Q$}}}1_{\gcd(e)\mid\tau}\lesssim\sum_{\substack{Q\in\mathcal{\mathcal{Q}}^0\setminus\mathcal{\mathcal{D}} \\ e\text{ : edge of $Q$}}}\frac{1}{\gcd (e)},$$ which is comparable to $\sum_{Q\in\mathcal{\mathcal{Q}}^{0}\setminus\mathcal{\mathcal{D}}}\frac{1}{\text{gcd}^{-}(Q)}$. It remains to show $\sum_{Q\in\mathcal{\mathcal{Q}}^{0}\setminus\mathcal{\mathcal{D}}}\frac{1}{\text{gcd}^{-}(Q)}\lesssim n^{2}$. To get extra decay factor from the denominator $\text{gcd}^{-}(Q)$, we revisit the proof of [\[eq:log Stri\]](#eq:log Stri){reference-type="eqref" reference="eq:log Stri"}. 
We follow the proof of [\[eq:log Stri\]](#eq:log Stri){reference-type="eqref" reference="eq:log Stri"} to estimate $\sum_{Q\in\mathcal{\mathcal{Q}}^{0}\setminus\mathcal{\mathcal{D}}}\frac{1}{\text{gcd}^{-}(Q)}$, then we gain extra factor $\frac{\log a}{a}$ to each edge-counting as follows. *Estimate on $\mathcal{\mathcal{Q}}_{\le\sqrt{n}}^{0}$*. Let $a\lesssim\sqrt{n}$ be a dyadic number. For each line $\ell\subset\mathbf{\mathbb{R}}^{2}$ such that $a<\#(\ell\cap S)\le2a$, the sum of $\frac{1}{\gcd(e_{1})}$ over $Q\in\mathcal{\mathcal{Q}}_{\le\sqrt{n}}^{0}$ whose first principal edge $e_{1}$ has vertices in $\ell\cap S$ is bounded by $$\#(\ell\cap S)\cdot\max_{\xi\in\ell\cap S}\sum_{\xi'\in\ell\cap S\setminus\{\xi\}}\frac{1}{\gcd(\xi'-\xi)}\cdot 2a\lesssim 2a\sum_{k=1}^{a}\frac{2}{k}\cdot2a\lesssim a^{2}\log a.$$ Similarly, the sum of $\frac{1}{\gcd(e_{2})}$ over $Q\in\mathcal{\mathcal{Q}}_{\le\sqrt{n}}^{0}$ with first principal edge $e_{1}$ having vertices in $\ell\cap S$ and second principal edge $e_{2}$ is bounded by $4a^{2}\cdot\sum_{k=1}^{a}\frac{2}{k}\lesssim a^{2}\log a$. Thus, by [\[eq:SzTr\]](#eq:SzTr){reference-type="eqref" reference="eq:SzTr"}, we have $$\sum_{Q\in\mathcal{\mathcal{Q}}_{\le\sqrt{n}}^{0}}\frac{1}{\gcd^{-}(Q)}\lesssim\sum_{\substack{a\in2^{\mathbf{\mathbb{N}}}\\ a\lesssim\sqrt{n} } }a^{2}\log a\left(\frac{n^{2}}{a^{3}}+\frac{n}{a}\right)\lesssim n^{2}.\label{eq:gcd-(Q) 1}$$ *Estimate on $\mathcal{\mathcal{Q}}_{>\sqrt{n}}^{0}$*. Let $a\gtrsim\sqrt{n}$ be a dyadic number. For each line $\ell\subset\mathbf{\mathbb{R}}^{2}$ such that $a<\#\left(\ell\cap S\right)\le2a$, the sum of $\frac{1}{\gcd(e_{2})}$ over $Q\in\mathcal{\mathcal{Q}}_{>\sqrt{n}}^{0}$ with second principal edge $e_{2}$ having vertices in $\ell\cap S$ is at most comparable to $2a\sum_{k=1}^{2a}\frac{2}{k}\cdot\frac{n}{a}\lesssim n\log a$. Similarly, the sum of $\frac{1}{\gcd(e_{1})}$ over $Q\in\mathcal{\mathcal{Q}}_{>\sqrt{n}}^{0}$ with second principal edge $e_{2}$ having vertices in $\ell\cap S$ is at most comparable to $2a^{2}\cdot\sum_{k=1}^{n/a}\frac{2}{k}\lesssim a^{2}\log\frac{n}{a}$. Thus, by [\[eq:SzTr\]](#eq:SzTr){reference-type="eqref" reference="eq:SzTr"}, we have $$\sum_{Q\in\mathcal{\mathcal{Q}}_{>\sqrt{n}}^{0}}\frac{1}{\gcd^{-}(Q)}\lesssim\sum_{\substack{a\in2^{\mathbf{\mathbb{N}}}\\ a\gtrsim\sqrt{n} } }\left(n\log a+a^{2}\log\frac{n}{a}\right)\cdot\frac{n}{a}\lesssim n^{2}.\label{eq:gcd-(Q) 2}$$ Summing up [\[eq:gcd-(Q) 1\]](#eq:gcd-(Q) 1){reference-type="eqref" reference="eq:gcd-(Q) 1"} and [\[eq:gcd-(Q) 2\]](#eq:gcd-(Q) 2){reference-type="eqref" reference="eq:gcd-(Q) 2"} gives $$\sum_{Q\in\mathcal{\mathcal{Q}}^{0}\setminus\mathcal{\mathcal{D}}}\frac{1}{\text{gcd}^{-}(Q)}\le\sum_{Q\in\mathcal{\mathcal{Q}}_{\le\sqrt{n}}^{0}}\frac{1}{\text{gcd}^{-}(Q)}+\sum_{Q\in\mathcal{\mathcal{Q}}_{>\sqrt{n}}^{0}}\frac{1}{\text{gcd}^{-}(Q)}\lesssim n^{2},$$ finishing the proof.0◻ # Proof of Theorem [ 4](#thm:GWP){reference-type="ref" reference="thm:GWP"} {#sec:proof-gwp} The proof is most convenient with adapted function spaces. For this purpose, we recall the definition of the function space $Y^{s}$ from [@herr2011global] and relevant facts. For a general theory, we refer to [@koch2014dispersive; @herr2011global; @hadac2009well; @hadac2010erratum]. ** 8**. Let $\mathcal{Z}$ be the collection of finite non-decreasing sequences $\left\{ t_{k}\right\} _{k=0}^{K}$ in $\mathbf{\mathbb{R}}$. 
We define $V^{2}$ as the space of all right-continuous functions $u:\mathbf{\mathbb{R}}\rightarrow\mathbf{\mathbb{C}}$ with $\lim_{t\rightarrow-\infty}u(t)=0$ and $$\|u\|_{V^{2}}:=\left(\sup_{\left\{ t_{k}\right\} _{k=0}^{K}\in\mathcal{\mathcal{Z}}}\sum_{k=1}^{K}\left|u(t_{k})-u(t_{k-1})\right|^{2}\right)^{1/2}<\infty.$$ For $s\in\mathbf{\mathbb{R}}$, we define $Y^{s}$ as the space of $u:\mathbf{\mathbb{R}}\times\mathbb{T}^{2}\rightarrow\mathbf{\mathbb{C}}$ such that $e^{it|\xi|^2}\widehat{u(t)}(\xi)$ lies in $V^{2}$ for each $\xi\in\mathbf{\mathbb{Z}}^{2}$ and $$\|u\|_{Y^{s}}\:=\left(\sum_{\xi\in\mathbf{\mathbb{Z}}^{2}}\left(1+\left|\xi\right|^{2}\right)^{s}\|e^{it|\xi|^{2}}\widehat{u(t)}(\xi)\|_{V^{2}}^{2}\right)^{1/2}<\infty.$$ The space $Y^{s}$ is used in [@herr2011global] and later works on critical regularity theory of Schrödinger equations on periodic domains. Some well-known properties are the following. ** 9** ([@herr2011global Section 2]). *$Y^{s}$-norms have the following properties.* - *Let $A,B$ be disjoint subsets of $\mathbf{\mathbb{Z}}^{2}$. For $s\in\mathbf{\mathbb{R}}$, we have $$\|P_{A\cup B}u\|_{Y^{s}}^{2}=\|P_{A}u\|_{Y^{s}}^{2}+\|P_{B}u\|_{Y^{s}}^{2}.\label{eq:l^2_=00005Cxistructure}$$* - *For $s\in\mathbf{\mathbb{R}}$, time $T>0$, and a function $f\in L^{1}H^{s}$, we have $$\|\chi_{[0,T)}\cdot\int_{0}^{t}e^{i(t-t')\Delta}f(t')dt'\|_{Y^{s}}\lesssim\sup_{v\in Y^{-s}:\|v\|_{Y^{-s}}\le1}\left|\int_0^T\int_{\mathbb{T}^{2}}f\overline{v}dxdt\right|.\label{eq:U2V2}$$* For $N\in2^{\mathbf{\mathbb{N}}}$, denote by $\mathcal{\mathcal{C}}_{N}$ the set of cubes of size $N$ $$\mathcal{\mathcal{C}}_{N}:=\left\{ (0,N]^{2}+N\xi_{0}:\xi_{0}\in\mathbf{\mathbb{Z}}^{2}\right\} .$$ We transfer [\[eq:localStri\]](#eq:localStri){reference-type="eqref" reference="eq:localStri"} to the following estimate. ** 10**. *For all $N\in2^{\mathbf{\mathbb{N}}}$, intervals $I\subset\mathbf{\mathbb{R}}$ such that $\left|I\right|\le\frac{1}{\log N}$, cubes $C\in\mathcal{\mathcal{C}}_{N}$, and $u\in Y^{0}$, we have $$\|P_{C}u\|_{L^{4}_{t,x}(I\times\mathbb{T}^{2})}\lesssim\|u\|_{Y^{0}}.\label{eq:local Y^0 Stri}$$* *Proof.* We follow the notations in [@herr2011global Section 2]. Let $u$ be a $U_{\Delta}^{4}L^2$-atom, i.e. $$u(t)=\sum_{j=1}^{J}1_{[t_{j-1},t_{j})}e^{it\Delta}\phi_{j}$$ for $\phi_{1},\ldots,\phi_{J}\in L^{2}(\mathbb{T}^{2})$, $t_0\le...\le t_J$, $\sum_{j=1}^{J}\|\phi_{j}\|_{L^{2}}^{4}=1$. By [\[eq:localStri\]](#eq:localStri){reference-type="eqref" reference="eq:localStri"}, we have $$\|P_{C}u\|_{L^{4}_{t,x}(I\times\mathbb{T}^{2})}^{4}\lesssim\sum_{j=1}^{J}\|P_{C}e^{it\Delta}\phi_{j}\|_{L^{4}_{t,x}(I\times\mathbb{T}^{2})}^{4}\lesssim\sum_{j=1}^{J}\|\phi_{j}\|_{L^{2}}^{4}\lesssim 1.\label{eq:local Y^0 Stri J}$$ By [@herr2011global Proposition 2.3] and [\[eq:local Y\^0 Stri J\]](#eq:local Y^0 Stri J){reference-type="eqref" reference="eq:local Y^0 Stri J"}, for $u\in Y^{0}$ we conclude $$\|P_{C}u\|_{L_{t,x}^{4}(I\times\mathbb{T}^{2})}\lesssim\|u\|_{U_{\Delta}^{4}L^2}\lesssim\|u\|_{V_{\Delta}^{2}L^2}\lesssim\|u\|_{Y^{0}}.$$ ◻ *Proof of Theorem [ 4](#thm:GWP){reference-type="ref" reference="thm:GWP"}.* Local well-posedness of [\[eq:NLS\]](#eq:NLS){reference-type="eqref" reference="eq:NLS"} in $Y^{s}$ for $s>0$ can be shown following the proofs of [@herr2011global; @herr2014strichartz; @killip2016scale]. 
If $s\ge 1$, Bourgain's result [@bourgain1993fourier Theorem 2] and the multilinear estimates used in [@herr2011global; @herr2014strichartz; @killip2016scale] imply that we have global well-posedness of [\[eq:NLS\]](#eq:NLS){reference-type="eqref" reference="eq:NLS"} in $H^{s}$ for $s\ge1$, with the solutions in the space $Y^{s}$ on any bounded time interval $[0,T)$. We pass to the main case $0<s<1$. Let $u$ be the global solution to [\[eq:NLS\]](#eq:NLS){reference-type="eqref" reference="eq:NLS"} such that $\chi_{[0,T)}\cdot u \in Y^{1}$ for any $T>0$, with initial data $u_{0}\in H^{1}$, $\|u_{0}\|_{L^{2}}\ll_{s}1$. Our goal is to give, for any $T>0$, a uniform bound on $\|\chi_{[0,T)}\cdot u\|_{Y^{s}}$, depending only on $T$ and $\|u_{0}\|_{H^{s}}$. We define a sequence $\left\{ T_{k}\right\} _{k\in\mathbf{\mathbb{N}}}$ inductively as $T_{0}=0$ and $T_{k+1}:=T_{k}+\frac{s}{\log(\epsilon^{-1}\cdot \|u(T_{k})\|_{H^{s}})}$, where $\epsilon=\epsilon(s)\ll 1$ is a number to be given later. Assuming for the moment that we have the estimate $$\|\chi_{[T_{k},T_{k+1})}\cdot u\|_{Y^{s}}\lesssim_s\|u(T_{k})\|_{H^{s}},\label{eq:GWP main claim}$$ we can conclude Theorem [ 4](#thm:GWP){reference-type="ref" reference="thm:GWP"} as follows: since $$\|u(T_{k+1})\|_{H^{s}}\lesssim\|\chi_{[T_{k},T_{k+1})}\cdot u\|_{Y^{s}}\lesssim\|u(T_{k})\|_{H^{s}},$$ we have $\|u(T_{k})\|_{H^{s}}\lesssim C^{k}$ for $C=C(s,\|u_0\|_{H^s})\gg1$. Thus, we have $$T_{k}\sim\sum_{j<k}\frac{1}{\log\|u(T_{j})\|_{H^{s}}}\gtrsim\sum_{j<k}\frac{1}{j}\xrightarrow{k\rightarrow\infty}\infty,$$ therefore $T_k>T$ for $k$ big enough, and $$\|\chi_{[0,T_{k})}\cdot u\|_{Y^{s}}\lesssim C^{k},\label{eq:gwp conseq}$$ which implies Theorem [ 4](#thm:GWP){reference-type="ref" reference="thm:GWP"}. It remains to prove the estimate [\[eq:GWP main claim\]](#eq:GWP main claim){reference-type="eqref" reference="eq:GWP main claim"}, for which we employ the following multilinear estimates for $N\in2^\mathbf{\mathbb{N}}$ and interval $I\subset\mathbf{\mathbb{R}}$ such that $|I|\le 1/\log N$: $$\left|\int_{I\times\mathbb{T}^{2}}\left|u\right|^{2}u\overline{v_{\le N}}dxdt\right|\lesssim\left(\|u\|_{Y^{0}}+N^{-s}\|u\|_{Y^{s}}\right)^{3}\|v\|_{Y^{0}}\label{eq:bootstrap1}$$ and $$\left|\int_{I\times\mathbb{T}^{2}}\left|u\right|^{2}u\overline{v_{>N}}dxdt\right|\lesssim\left(\|u\|_{Y^{0}}+N^{-s}\|u\|_{Y^{s}}\right)^{3}\cdot N^{s}\|v\|_{Y^{-s}}.\label{eq:bootstrap2}$$ If we assume [\[eq:bootstrap1\]](#eq:bootstrap1){reference-type="eqref" reference="eq:bootstrap1"} and [\[eq:bootstrap2\]](#eq:bootstrap2){reference-type="eqref" reference="eq:bootstrap2"}, by [\[eq:U2V2\]](#eq:U2V2){reference-type="eqref" reference="eq:U2V2"}, we have the bootstrapping estimate $$\begin{aligned} &\|\chi_I\cdot\int_{I\cap[0,t]}e^{i(t-t')\Delta}\left|u\right|^{2}u(t')dt'\|_{Y^{0}}+N^{-s}\|\chi_I\cdot\int_{I\cap[0,t]}e^{i(t-t')\Delta}\left|u\right|^{2}u(t')dt'\|_{Y^{s}}\\ \lesssim{}&\left(\|\chi_{I}\cdot u\|_{Y^{0}}+N^{-s}\|\chi_{I}\cdot u\|_{Y^{s}}\right)^{3}.\end{aligned}$$ Plugging $N=N_{k}$ such that $N_{k}^{s}=\epsilon^{-1}\|u(T_{k})\|_{H^{s}}$ for $\epsilon=\epsilon(s)\ll1$, which satisfies $\|u(T_{k})\|_{L^{2}}+N_{k}^{-s}\|u(T_{k})\|_{H^{s}}\ll1$ and $T_{k+1}-T_k=\frac{1}{\log N_k}$, we have $$N_{k}^{-s}\|\chi_{[T_{k},T_{k+1})}\cdot u\|_{Y^{s}}\lesssim\|u(T_{k})\|_{L^{2}}+N_k^{-s}\|u(T_{k})\|_{H^{s}}\lesssim N_{k}^{-s}\|u(T_{k})\|_{H^{s}},$$ which is just [\[eq:GWP main claim\]](#eq:GWP main claim){reference-type="eqref" reference="eq:GWP main claim"} and finishes the proof. 
For dyadic numbers $M\ge N$ and $C\in\mathcal{\mathcal{C}}_M$, partitioning $I$ to intervals of length comparable to $\frac{1}{\log M}$ and applying [\[eq:local Y\^0 Stri\]](#eq:local Y^0 Stri){reference-type="eqref" reference="eq:local Y^0 Stri"} to each, we have $$\|\chi_{I}\cdot P_C u\|_{L_{t,x}^{4}}\lesssim\left(\frac{\log M}{\log N}\right)^{1/4}\|u\|_{Y^{0}}.\label{eq:linear Y^0_M}$$ By [\[eq:local Y\^0 Stri\]](#eq:local Y^0 Stri){reference-type="eqref" reference="eq:local Y^0 Stri"} and [\[eq:linear Y\^0_M\]](#eq:linear Y^0_M){reference-type="eqref" reference="eq:linear Y^0_M"}, for $u\in Y^{s}$, we have $$\begin{aligned} \|\chi_{I}\cdot u\|_{L_{t,x}^{4}} & \lesssim\|u_{\le N}\|_{Y^{0}}+\sum_{M\gtrsim N}\left(\frac{\log M}{\log N}\right)^{1/4}\|u_{M}\|_{Y^{0}}\\ & \lesssim\|u\|_{Y^{0}}+\sum_{M\gtrsim N}\left(\frac{\log M}{\log N}\right)^{1/4}\frac{N^{s}}{M^{s}}\cdot N^{-s}\|u\|_{Y^{s}}\\ & \lesssim\|u\|_{Y^{0}}+N^{-s}\|u\|_{Y^{s}},\end{aligned}$$ which implies [\[eq:bootstrap1\]](#eq:bootstrap1){reference-type="eqref" reference="eq:bootstrap1"}. We prove [\[eq:bootstrap2\]](#eq:bootstrap2){reference-type="eqref" reference="eq:bootstrap2"} by partitioning the frequency domain $\mathbf{\mathbb{Z}}^{2}$ into congruent cubes. By [\[eq:local Y\^0 Stri\]](#eq:local Y^0 Stri){reference-type="eqref" reference="eq:local Y^0 Stri"}, [\[eq:linear Y\^0_M\]](#eq:linear Y^0_M){reference-type="eqref" reference="eq:linear Y^0_M"}, and [\[eq:l\^2\_=00005Cxistructure\]](#eq:l^2_=00005Cxistructure){reference-type="eqref" reference="eq:l^2_=00005Cxistructure"}, for $M\in2^\mathbf{\mathbb{N}}$ and $u,v\in Y^{0}$ we have $$\begin{aligned} &\|\chi_I \cdot P_{\le M}\left(u\overline{v}\right)\|_{L_{t,x}^{2}}\label{eq:bilinear}\\ & \lesssim\sum_{\substack{C_{1},C_{2}\in\mathcal{\mathcal{C}}_{M}\\ \mathop{\mathrm{dist}}(C_{1},C_{2})\le M } }\|\chi_I\cdot P_{C_{1}}u\cdot\overline{P_{C_{2}}v}\|_{L_{t,x}^{2}}\nonumber \\ & \lesssim\sum_{\substack{C_{1},C_{2}\in\mathcal{\mathcal{C}}_{M}\\ \mathop{\mathrm{dist}}(C_{1},C_{2})\le M } }\|\chi_I\cdot P_{C_{1}}u\|_{L_{t,x}^{4}}\|\chi_I\cdot P_{C_{2}}v\|_{L_{t,x}^{4}}\nonumber \\ & \lesssim\left(1+\frac{\log M}{\log N}\right)^{1/2}\left(\sum_{C\in\mathcal{\mathcal{C}}_{M}}\|P_{C}u\|_{Y^{0}}^2\sum_{C\in\mathcal{\mathcal{C}}_{M}}\|P_{C}v\|_{Y^{0}}^2\right)^{1/2}\nonumber \\ & \lesssim\left(1+\frac{\log M}{\log N}\right)^{1/2}\|u\|_{Y^{0}}\|v\|_{Y^{0}}.\nonumber \end{aligned}$$ We conclude quadrilinear estimates. 
By [\[eq:bilinear\]](#eq:bilinear){reference-type="eqref" reference="eq:bilinear"} and Young's convolution inequality on $(L,K)$ using that $\sum_{R\in2^\mathbf{\mathbb{N}}}R^{-s}\lesssim 1$, we have $$\begin{aligned} & \sum_{K\ge2N}\sum_{L\gtrsim K}\left|\int_{I\times\mathbb{T}^{2}}P_{\le N/2}\left(\left|u\right|^{2}\right)P_{\le N/2}\left(u_{L}\overline{v_{K}}\right)dxdt\right|\label{eq:quar1}\\ & \lesssim\|u\|_{Y^{0}}^{2}\sum_{K\ge2N}\sum_{L\gtrsim K}\|u_{L}\|_{Y^{0}}\|v_{K}\|_{Y^{0}}\nonumber \\ & \lesssim\|u\|_{Y^{0}}^{2}\sum_{K\ge2N}\sum_{L\gtrsim K}{(K/L)}^s\|u_{L}\|_{Y^{s}}\|v_{K}\|_{Y^{-s}}\nonumber \\ & \lesssim\|u\|_{Y^{0}}^2 \|u\|_{Y^{s}} \|v\|_{Y^{-s}}\nonumber \end{aligned}$$ and $$\begin{aligned} & \sum_{\substack{M\in2^{\mathbf{\mathbb{N}}}\\ M\ge N } }\sum_{K\ge2N}\sum_{L\gtrsim K}\left|\int_{I\times\mathbb{T}^{2}}P_{M}\left(\left|u\right|^{2}\right)P_{M}\left(u_{L}\overline{v_{K}}\right)dxdt\right|\label{eq:quar2}\\ & \lesssim\sum_{\substack{M\in2^{\mathbf{\mathbb{N}}}\\ M\ge N } }\frac{\log M}{\log N}\|u_{\ge M/4}\|_{Y^{0}}\|u\|_{Y^{0}}\sum_{K\ge2N}\sum_{L\gtrsim K}\|u_{L}\|_{Y^{0}}\|v_{K}\|_{Y^{0}}\nonumber \\ & \lesssim\sum_{\substack{M\in2^{\mathbf{\mathbb{N}}}\\ M\ge N } }\frac{\log M}{\log N}\frac{N^s}{M^s}\cdot N^{-s}\|u\|_{Y^{s}} \|u\|_{Y^0}\sum_{K\ge2N}\sum_{L\gtrsim K}\|u_{L}\|_{Y^{0}}\|v_{K}\|_{Y^{0}}\nonumber \\ & \lesssim N^{-s}\|u\|_{Y^{s}} \|u\|_{Y^0}\sum_{K\ge2N}\sum_{L\gtrsim K}\|u_{L}\|_{Y^{0}}\|v_{K}\|_{Y^{0}}\nonumber \\ & \lesssim N^{-s}\|u\|_{Y^{s}} \|u\|_{Y^0} \|u\|_{Y^s} \|v\|_{Y^{-s}}.\nonumber \end{aligned}$$ Combining [\[eq:quar1\]](#eq:quar1){reference-type="eqref" reference="eq:quar1"} and [\[eq:quar2\]](#eq:quar2){reference-type="eqref" reference="eq:quar2"}, we have $$\sum_{K\ge2N}\sum_{L\gtrsim K}\left|\int_{I\times\mathbb{T}^{2}}\left|u\right|^{2}u_{L}\overline{v_{K}}dxdt\right|\lesssim\left(\|u\|_{Y^{0}}+N^{-s}\|u\|_{Y^{s}}\right)^{3}\cdot N^{s}\|v\|_{Y^{-s}}.\label{eq:quar 1+2}$$ [\[eq:bilinear\]](#eq:bilinear){reference-type="eqref" reference="eq:bilinear"}, [\[eq:quar1\]](#eq:quar1){reference-type="eqref" reference="eq:quar1"}, [\[eq:quar2\]](#eq:quar2){reference-type="eqref" reference="eq:quar2"}, [\[eq:quar 1+2\]](#eq:quar 1+2){reference-type="eqref" reference="eq:quar 1+2"} hold true even if we change the integrands by different pairs of conjugates. By inclusion-exclusion on dyadic cutoffs, we have $$\begin{aligned} & \left|\int_{I\times\mathbb{T}^{2}}\left|u\right|^{2}u\overline{v_{>N}}dxdt\right| \nonumber \\ & \lesssim\sum_{K\ge2N}\left|\int_{I\times\mathbb{T}^{2}}u\overline{u}u_{\ge K/4}\overline{v_{K}}dxdt\right|+\left|\int_{I\times\mathbb{T}^{2}}u^{2}\overline{u_{\ge K/4}}\overline{v_{K}}dxdt\right|\label{eq:final1}\\ & +\sum_{K\ge2N}\left|\int_{I\times\mathbb{T}^{2}}u\overline{u_{\ge K/4}}u_{\ge K/4}\overline{v_{K}}dxdt\right|+\left|\int_{I\times\mathbb{T}^{2}}\overline{u}u_{\ge K/4}u_{\ge K/4}\overline{v_{K}}dxdt\right|\label{eq:final2}\\ & +\sum_{K\ge2N}\left|\int_{I\times\mathbb{T}^{2}}u_{\ge K/4}\overline{u_{\ge K/4}}u_{\ge K/4}\overline{v_{K}}dxdt\right|.\label{eq:final3}\end{aligned}$$ For [\[eq:final1\]](#eq:final1){reference-type="eqref" reference="eq:final1"}, we apply [\[eq:quar 1+2\]](#eq:quar 1+2){reference-type="eqref" reference="eq:quar 1+2"}. 
[\[eq:final2\]](#eq:final2){reference-type="eqref" reference="eq:final2"} and [\[eq:final3\]](#eq:final3){reference-type="eqref" reference="eq:final3"} contain extra Fourier cutoffs, but the proof of [\[eq:quar 1+2\]](#eq:quar 1+2){reference-type="eqref" reference="eq:quar 1+2"} is still valid so [\[eq:quar 1+2\]](#eq:quar 1+2){reference-type="eqref" reference="eq:quar 1+2"} is applicable. Thus, we have [\[eq:bootstrap2\]](#eq:bootstrap2){reference-type="eqref" reference="eq:bootstrap2"}, finishing the proof. 0◻ # Acknowledgement {#acknowledgement .unnumbered} Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- IRTG 2235 -- Project-ID 282638148
arxiv_math
{ "id": "2309.14275", "title": "Strichartz estimates and global well-posedness of the cubic NLS on\n $\\mathbb{T}^{2}$", "authors": "Sebastian Herr, Beomjong Kwak", "categories": "math.AP math.CA", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | Active Flux is an extension of the Finite Volume method and additionally incorporates point values located at cell boundaries. This gives rise to a globally continuous approximation of the solution. The method is third-order accurate. We demonstrate that a new semi-discrete Active Flux method (first described in [@abgrall22] for one space dimension) can easily be used to solve nonlinear hyperbolic systems in multiple dimensions, such as the compressible Euler equations of inviscid hydrodynamics. Originally, the Active Flux method emerged as a fully discrete method, and required an exact or approximate evolution operator for the point value update. For nonlinear problems such an operator is often difficult to obtain, in particular for multiple spatial dimensions. With the new approach it becomes possible to leave behind these difficulties. We introduce a multi-dimensional limiting strategy and demonstrate the performance of the new method on both Riemann problems and subsonic flows. Keywords: Compressible Euler equations, Active Flux, High-order methods Mathematics Subject Classification (2010): 65M08, 65M20, 65M70, 76M12 --- The Active Flux method for the Euler equations\ on Cartesian grids Rémi Abgrall[^1], Wasilij Barsukow[^2], Christian Klingenberg[^3] # Introduction The Active Flux method uses as its degrees of freedom both cell averages and point values at cell interfaces. While the averages require a conservative update, the update of the point values is essentially not restricted by more than the condition that the resulting method should be stable. To this end it needs to incorporate upwinding, and the earliest version of the Active Flux method ([@vanleer77], for linear advection in 1-d) traced a characteristic back to the time level $t^n$ where a reconstruction of the data was evaluated. This approach was extended in [@barsukow19activeflux] to nonlinear scalar conservation laws in multiple dimensions, and to hyperbolic systems of conservation laws in one spatial dimension. The exact calculation of the characteristic curve was replaced by a sufficiently accurate approximation. This approach was used, for example, in [@barsukow20swaf] to solve the shallow water equations in presence of dry areas. For hyperbolic systems in multiple spatial dimensions, even if they are linear, characteristic curves no longer exist. Also, values in general are not transported, but the solution is a convolution of the initial data with a more or less complicated kernel. For the acoustic equations with the speed of sound $c$, for example, the solution in $x$ at time $t$ depends on the initial data in a disc with radius $ct$ around $x$. This disc is the interior of the intersection of the hypersurface of initial data with the cone of bicharacteristics which has its vertex at $(t, x)$. In [@eymann13], a solution operator was given for the acoustic equations, which relied on smoothness of the initial data, and in [@barsukow17] a solution in the sense of distributions was obtained which could be used to solve e.g. Riemann problems. These operators can be implemented efficiently and used to update the point values in an Active Flux method (as achieved in [@barsukow18activeflux]), but their derivation comes at great cost. Suitably high-order approximate evolution operators for multi-dimensional nonlinear systems of conservation laws are currently unavailable. All these Active Flux methods were $3^\text{rd}$ order accurate and fully-discrete. 
In [@abgrall20], a semi-discrete version of Active Flux was introduced. In order to obtain an equation for the point values, the spatial derivative in the PDE is discretized using finite difference formulae. At the price of a slightly reduced CFL condition this approach is immediately applicable to all kinds of nonlinear problems. In [@abgrall22; @abgrall22proceeding] it has been applied to one-dimensional nonlinear problems, and extended to arbitrary order. The aim of the present work is, maintaining $3^\text{rd}$ order of accuracy, to extend it to the multi-dimensional Euler equations. The paper is organized as follows: Section [2](#sec:semidiscreteAF){reference-type="ref" reference="sec:semidiscreteAF"} describes the method and Section [3](#sec:limiting){reference-type="ref" reference="sec:limiting"} presents a novel multi-dimensional limiting strategy. Numerical results are shown in Section [4](#sec:numerical){reference-type="ref" reference="sec:numerical"}. # The semi-discrete Active Flux method {#sec:semidiscreteAF} Here, we let ourselves guide by the approach of [@abgrall22] and extend it to multi-dimensional Cartesian grids. Consider a hyperbolic $m \times m$ system of conservation laws in $d$ spatial dimensions[^4] $$\begin{aligned} \partial_t q + \nabla \cdot \mathbf f(q) &= 0 \qquad q \colon \mathbb R^+_0 \times \mathbb R^d \to \mathbb R^m \label{eq:conslaw}\end{aligned}$$ For simplicity, we restrict ourselves to two spatial dimensions ($d=2$) and write $\mathbf f = (f^x, f^y)$, $\nabla_q f^x = J^x$, $\nabla_q f^y = J^y$. ## Update of the averages Integrating [\[eq:conslaw\]](#eq:conslaw){reference-type="eqref" reference="eq:conslaw"} over the Cartesian cell $$\begin{aligned} C_{ij} := \left[ x_{i-\frac12}, x_{i+\frac12} \right ] \times \left[ y_{j-\frac12}, y_{j+\frac12} \right ]\end{aligned}$$ and denoting the cell average by $$\begin{aligned} \bar q_{ij}(t) := \frac{1}{\Delta x \Delta y} \int_{C_{ij}} q(t, \mathbf x) \mathrm{d}\mathbf x\end{aligned}$$ one finds $$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} \bar q_{ij} + \frac{1}{\Delta x \Delta y} \int_{\partial C_{ij}} \mathbf n \cdot \mathbf f(q) &= 0 \label{eq:semidiscretegauss}\end{aligned}$$ As there are degrees of freedom located at the boundary $\partial C_{ij}$ of cell $C_{ij}$, we intend to use them as quadrature points for a sufficiently accurate quadrature of the integral appearing in [\[eq:semidiscretegauss\]](#eq:semidiscretegauss){reference-type="eqref" reference="eq:semidiscretegauss"}. Inspired by previous approaches (e.g. [@barsukow18activeflux; @kerkmann18]) we use three Gauss-Lobatto points per edge, where the extreme points (corners) are shared. Note also that we enforce global continuity: the point values on an edge are the same as seen from either of the adjacent cells and a value at a corner is involved in the update of four cells. This is in contrast to e.g. discontinuous Galerkin methods. 
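To indicate how such shared degrees of freedom can be organized in a code, the following sketch (an illustration of ours, not taken from the cited works; the scalar setting, the array names and the periodic boundary conditions are assumptions) stores each edge midpoint value and each corner value exactly once on a Cartesian grid, so that global continuity holds by construction, and assembles the boundary flux integral of [\[eq:semidiscretegauss\]](#eq:semidiscretegauss){reference-type="eqref" reference="eq:semidiscretegauss"} with the Simpson weights made precise below.

```python
import numpy as np

# Degrees of freedom of the Active Flux method for a scalar quantity on a
# periodic nx x ny Cartesian grid (array names and layout are our own choice):
#   qbar[i, j]  cell average of cell (i, j)
#   qx[i, j]    point value at the edge midpoint (i+1/2, j)   (vertical edge)
#   qy[i, j]    point value at the edge midpoint (i, j+1/2)   (horizontal edge)
#   qn[i, j]    point value at the node (i+1/2, j+1/2)
# Each interface/corner value is stored exactly once, so the two (resp. four)
# adjacent cells automatically see the same value: global continuity.
nx, ny = 8, 8
dx, dy = 1.0 / nx, 1.0 / ny
qbar, qx, qy, qn = (np.zeros((nx, ny)) for _ in range(4))

def average_rhs(f_x, f_y):
    """d/dt of qbar: Simpson quadrature (weights 1/6, 2/3, 1/6) of the normal
    flux over the four edges of every cell, using only the shared point values."""
    fx_edge = (f_x(np.roll(qn, 1, axis=1)) + 4.0 * f_x(qx) + f_x(qn)) / 6.0  # flux at (i+1/2, j)
    fy_edge = (f_y(np.roll(qn, 1, axis=0)) + 4.0 * f_y(qy) + f_y(qn)) / 6.0  # flux at (i, j+1/2)
    return (-(fx_edge - np.roll(fx_edge, 1, axis=0)) / dx
            - (fy_edge - np.roll(fy_edge, 1, axis=1)) / dy)

# Example call for the (arbitrary) scalar flux f = (q^2/2, q):
dqbar_dt = average_rhs(lambda q: 0.5 * q * q, lambda q: q)
```

Storing interface and corner values only once is precisely what distinguishes this data layout from a discontinuous Galerkin one, where each cell keeps its own copy of the trace values.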
On Cartesian grids it is convenient to adopt the following notation for the 8 point values on the boundary of cell $C_{ij}$: $$\begin{aligned}
&q_{i-\frac12,j+\frac12} &&q_{i,j+\frac12} &&q_{i+\frac12,j+\frac12}\\
&q_{i-\frac12,j} && &&q_{i+\frac12,j}\\
&q_{i-\frac12,j-\frac12} &&q_{i,j-\frac12} &&q_{i+\frac12,j-\frac12}\end{aligned}$$ Then, $$\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}t} \bar q_{ij} &+ \frac{1}{\Delta x \Delta y} \int_{y_{j-\frac12}}^{y_{j+\frac12}} \mathrm{d}y \Big ( f^x(q(t, x_{i+\frac12},y)) - f^x(q(t, x_{i-\frac12},y)) \Big ) \\
&+ \frac{1}{\Delta x \Delta y} \int_{x_{i-\frac12}}^{x_{i+\frac12}} \mathrm{d}x \Big ( f^y(q(t, x, y_{j+\frac12})) - f^y(q(t, x, y_{j-\frac12})) \Big) = 0\end{aligned}$$ using Simpson's rule ($\omega_{-\frac12} = \omega_{\frac12} = \frac16$, $\omega_0 = \frac23$) becomes $$\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}t} \bar q_{ij}(t) &+ \frac1{\Delta x} \sum_{K=-\frac12,0,\frac12} \omega_K \left( f^x(q_{i+\frac12,j+K}(t)) - f^x(q_{i-\frac12,j+K}(t)) \right ) \\
&+ \frac{1}{\Delta y} \sum_{K=-\frac12,0,\frac12} \omega_K \left ( f^y(q_{i+K,j+\frac12}(t)) - f^y(q_{i+K,j-\frac12}(t)) \right) = 0 \label{eq:averageupdatesemidiscrete}\end{aligned}$$ This method is conservative, with e.g. the $x$-flux through the cell interface $(i+\frac12,j)$ being given by $$\begin{aligned}
\hat f^x_{i+\frac12,j} &= \sum_{K=-\frac12,0,\frac12} \omega_K f^x(q_{i+\frac12,j+K})\\
&= \frac{f^x(q_{i+\frac12,j-\frac12}) + 4f^x(q_{i+\frac12,j}) + f^x(q_{i+\frac12,j+\frac12})}{6}\end{aligned}$$ It is also at least $3^\text{rd}$ order accurate, for it is exact for biparabolic functions.

## Update of the point values

The update of the cell averages, as described above, now needs to be complemented by an update of the point values. In the one-dimensional case, it was proposed in [@abgrall22] to replace the spatial derivatives appearing in [\[eq:conslaw\]](#eq:conslaw){reference-type="eqref" reference="eq:conslaw"} by finite differences. Here, the multi-dimensional case shall be addressed. Note first that hyperbolicity of [\[eq:conslaw\]](#eq:conslaw){reference-type="eqref" reference="eq:conslaw"} implies that it is always possible to define the positive and negative parts of the Jacobians via their eigenvalues. With $J^x = R \mathrm{diag}(\lambda_1, \ldots, \lambda_m)R^{-1}$ one has $$\begin{aligned}
(J^x)^+ &:= R \mathrm{diag}(\lambda_1^+, \ldots, \lambda_m^+)R^{-1}\\
(J^x)^- &:= R \mathrm{diag}(\lambda_1^-, \ldots, \lambda_m^-)R^{-1}
\end{aligned}$$ where, for scalars $a \in \mathbb R$, the positive/negative parts are simply $a^+ = \max(0, a)$, $a^- = \min(0, a)$. The finite difference formulae are obtained by differentiating a reconstruction.
Define first the unique biparabolic polynomial $$\begin{aligned}
q_{ij,\text{recon}} \in P^{2,2}, \quad q_{ij,\text{recon}} \colon \left[ - \frac{\Delta x}{2}, \frac{\Delta x}{2} \right] \times \left[ - \frac{\Delta y}{2}, \frac{\Delta y}{2} \right] \to \mathbb R^m\end{aligned}$$ that interpolates the degrees of freedom of cell $ij$: $$\begin{aligned}
q_{ij,\text{recon}}\left(-\frac{\Delta x}{2}, \frac{\Delta y}{2} \right) &= q_{i-\frac12,j+\frac12} &q_{ij,\text{recon}}\left( 0, \frac{\Delta y}{2} \right) &= q_{i,j+\frac12} \\
q_{ij,\text{recon}}\left( \frac{\Delta x}{2}, \frac{\Delta y}{2} \right) &= q_{i+\frac12,j+\frac12}\\
q_{ij,\text{recon}}\left(- \frac{\Delta x}{2}, 0 \right) &= q_{i-\frac12,j} &q_{ij,\text{recon}}\left( \frac{\Delta x}{2}, 0 \right) &= q_{i+\frac12,j}\\
q_{ij,\text{recon}}\left( -\frac{\Delta x}{2}, -\frac{\Delta y}{2} \right) &= q_{i-\frac12,j-\frac12} &q_{ij,\text{recon}}\left( 0, -\frac{\Delta y}{2} \right) &= q_{i,j-\frac12} \\
q_{ij,\text{recon}}\left( \frac{\Delta x}{2}, -\frac{\Delta y}{2} \right) &= q_{i+\frac12,j-\frac12} \end{aligned}$$ and $$\begin{aligned}
\frac{1}{\Delta x \Delta y} \int_{-\frac{\Delta x}{2}}^{\frac{\Delta x}{2}} \int_{-\frac{\Delta y}{2}}^{\frac{\Delta y}{2}} q_{ij,\text{recon}}(x, y) \, \mathrm{d}y \mathrm{d}x &= \bar q_{ij}\end{aligned}$$ This reconstruction has already been used in [@barsukow18activeflux; @kerkmann18] and is given there explicitly. Then we define the finite differences in the corner as $$\begin{aligned}
(D^x)^+_{i+\frac12,j+\frac12}q &:= \partial_x q_{ij,\text{recon}}\left.\left(x, \frac{\Delta y}{2}\right) \right|_{x = \frac{\Delta x}{2}} \label{eq:findifffirst}\\
(D^x)^-_{i+\frac12,j+\frac12}q &:= \partial_x q_{i+1,j,\text{recon}}\left.\left(x, \frac{\Delta y}{2}\right) \right|_{x = -\frac{\Delta x}{2}} \\
(D^y)^+_{i+\frac12,j+\frac12}q &:= \partial_y q_{ij,\text{recon}}\left.\left(\frac{\Delta x}{2},y\right) \right|_{y = \frac{\Delta y}{2}} \\
(D^y)^-_{i+\frac12,j+\frac12}q &:= \partial_y q_{i,j+1,\text{recon}}\left.\left(\frac{\Delta x}{2},y\right) \right|_{y = -\frac{\Delta y}{2}} \end{aligned}$$ Observe that due to continuity, $$\begin{aligned}
(D^x)^+_{i+\frac12,j+\frac12} q&= \partial_x q_{i,j+1,\text{recon}}\left(x, -\frac{\Delta y}{2}\right) \Big|_{x = \frac{\Delta x}{2}} \end{aligned}$$ such that this would be an equivalent definition that gives the same result (and similarly for the other finite differences). Analogously, we define the finite differences on the edges $$\begin{aligned}
(D^x)^+_{i+\frac12,j} q&:= \partial_x q_{ij,\text{recon}}\left.\left(x, 0\right) \right|_{x = \frac{\Delta x}{2}} \\
(D^x)^-_{i+\frac12,j}q &:= \partial_x q_{i+1,j,\text{recon}}\left.\left(x, 0\right) \right|_{x = -\frac{\Delta x}{2}} \\
(D^y)_{i+\frac12,j} q&:= \partial_y q_{ij,\text{recon}}\left.\left(\frac{\Delta x}{2},y\right) \right|_{y = 0} \label{eq:findifflast} \end{aligned}$$ Observe that due to continuity, there is no distinction between $(D^y)^+_{i+\frac12,j}$ and $(D^y)^-_{i+\frac12,j}$. Here, again, the symmetric definition $$\begin{aligned}
(D^y)_{i+\frac12,j}q &:= \partial_y q_{i+1,j,\text{recon}}\left.\left(-\frac{\Delta x}{2},y\right) \right|_{y = 0} \end{aligned}$$ yields the same result. The derivatives at $(i,j+\frac12)$ are obtained analogously.
For reference we now state their explicit forms: $$\begin{aligned} \left (D^x\right )^+_{i+\frac12,j}q &= \frac{1}{4 \Delta x} \left( 4 \left (-9 \bar q_{ij} +2 \left (q_{i-\frac12,j}+2 q_{i+\frac12,j}\right )\right )+4 \left (q_{i,j-\frac12}+q_{i,j+\frac12} \right ) \right . \\ &\left . + q_{i-\frac12,j-\frac12}+q_{i+\frac12,j-\frac12}+q_{i-\frac12,j+\frac12}+q_{i+\frac12,j+\frac12} \right )\\ % \left (D^x\right )^-_{i+\frac12,j} q&= -\frac{1}{4 \Delta x} \left( -36 \bar q_{i+1,j} +8 \left (2q_{i+\frac12,j}+q_{i+\frac32,j}\right ) +q_{i+\frac12,j-\frac12} \right . \\ &\left .+4 \left (q_{i+1,j-\frac12}+q_{i+1,j+\frac12}\right ) + q_{i+\frac32,j-\frac12} + q_{i+\frac12,j+\frac12} + q_{i+\frac32,j+\frac12} \right )\\ \left (D^y\right )_{i+\frac12,j} q&= \frac{ q_{i+\frac12,j+\frac12}-q_{i+\frac12,j-\frac12}}{\Delta y}\\ % \left (D^y\right )^+_{i,j+\frac12} q&= \frac{1}{4 \Delta y } \left( 4 \left (q_{i-\frac12,j}-9 \bar q_{ij} +q_{i+\frac12,j}\right ) +q_{i-\frac12,j-\frac12} + q_{i-\frac12,j+\frac12} \right . \\ &\left .+ q_{i+\frac12,j-\frac12}+ q_{i+\frac12,j+\frac12} +8 \left (q_{i,j-\frac12} +2 q_{i,j+\frac12} \right ) \right )\\ \left (D^y\right )^-_{i,j+\frac12}q &= -\frac{1}{4 \Delta y} \left ( 4 \left (q_{i-\frac12,j+1}-9 \bar q_{i,j+1} +q_{i+\frac12,j+1} \right ) +q_{i-\frac12,j+\frac12} \right . \\ &\left . + q_{i+\frac12,j+\frac12}+ q_{i-\frac12,j+\frac32} + q_{i+\frac12,j+\frac32} +8 \left (2q_{i,j+\frac12}+q_{i,j+\frac32}\right ) \right )\\ \left (D^x\right )_{i,j+\frac12}q &= \frac{q_{i+\frac12,j+\frac12} -q_{i-\frac12,j+\frac12}}{\Delta x}\\ % \left (D^x\right )^+_{i+\frac12,j+\frac12} q&= \frac{q_{i-\frac12,j+\frac12}-4 q_{i,j+\frac12} +3 q_{i+\frac12,j+\frac12} }{\Delta x}\\ \left (D^x\right )^-_{i+\frac12,j+\frac12} q&= \frac{4 q_{i+1,j+\frac12} - 3q_{i+\frac12,j+\frac12}-q_{i+\frac32,j+\frac12}}{\Delta x}\\ \left (D^y\right )^+_{i+\frac12,j+\frac12} q&= \frac{q_{i+\frac12,j-\frac12}-4 q_{i+\frac12,j} +3 q_{i+\frac12,j+\frac12} }{\Delta y }\\ \left (D^y\right )^-_{i+\frac12,j+\frac12} q&= \frac{4 q_{i+\frac12,j+1} - 3q_{i+\frac12,j+\frac12}-q_{i+\frac12,j+\frac32}}{\Delta y} \end{aligned}$$ However, in some situations one might be willing to employ a different reconstruction, as is, for instance, the case in Section [3](#sec:limiting){reference-type="ref" reference="sec:limiting"} concerned with limiting. At this point one has to resort to the more general formulae [\[eq:findifffirst\]](#eq:findifffirst){reference-type="eqref" reference="eq:findifffirst"}--[\[eq:findifflast\]](#eq:findifflast){reference-type="eqref" reference="eq:findifflast"}. Finally, the upwinding is defined as $$\begin{aligned} (J^x D^x_{i+K,j+L})^\text{upw}q := (J^x)^+ (D^x)^+_{i+K,j+L}q + (J^x)^- (D^x)^-_{i+K,j+L}q\end{aligned}$$ with $K,L \in \{ -\frac12, 0, \frac12 \}$ and an analogous definition for $J^y$. We propose to update the point values as follows: $$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} q_{i+\frac12,j} + (J^x D^x_{i+\frac12,j})^\text{upw} q + J^y D^y_{i+\frac12,j} q &= 0 \label{eq:edgexupdatesemidiscrete}\\ \frac{\mathrm{d}}{\mathrm{d}t} q_{i,j+\frac12} + J^x D^x_{i,j+\frac12} q + (J^y D^y_{i,j+\frac12})^\text{upw} q &= 0 \label{eq:edgeyupdatesemidiscrete}\\ \frac{\mathrm{d}}{\mathrm{d}t} q_{i+\frac12,j+\frac12} + (J^x D^x_{i+\frac12,j+\frac12})^\text{upw} q + (J^y D^y_{i+\frac12,j+\frac12})^\text{upw} q &= 0 \label{eq:nodeupdatesemidiscrete}\end{aligned}$$ As the finite differences are exact for biparabolic function, one expects $3^\text{rd}$ order of accuracy. 
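As an indication of how the upwinding enters in practice for the compressible Euler equations, the following sketch (an illustration of ours; the conservative-variable flux Jacobian with an ideal-gas closure is standard, but the function names, the numerical eigendecomposition, the evaluation of $J^{x}$ at the node value and the restriction to the $x$-part of [\[eq:nodeupdatesemidiscrete\]](#eq:nodeupdatesemidiscrete){reference-type="eqref" reference="eq:nodeupdatesemidiscrete"} are our choices) forms $(J^{x})^{\pm}$ and combines them with the one-sided corner differences $(D^{x})^{\pm}_{i+\frac12,j+\frac12}$ stated above.

```python
import numpy as np

GAMMA = 1.4   # ratio of specific heats (assumed ideal-gas closure)

def jacobian_x(q):
    """J^x = grad_q f^x for the 2-d Euler equations in conserved variables
    q = (rho, rho u, rho v, E), with p = (GAMMA-1) (E - rho (u^2+v^2)/2)."""
    rho, mx, my, E = q
    u, v = mx / rho, my / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * (u * u + v * v))
    H = (E + p) / rho                       # total enthalpy
    k = 0.5 * (GAMMA - 1.0) * (u * u + v * v)
    return np.array([
        [0.0,         1.0,                        0.0,                     0.0],
        [k - u * u,   (3.0 - GAMMA) * u,          -(GAMMA - 1.0) * v,      GAMMA - 1.0],
        [-u * v,      v,                          u,                       0.0],
        [u * (k - H), H - (GAMMA - 1.0) * u * u,  -(GAMMA - 1.0) * u * v,  GAMMA * u]])

def split(A):
    """A^+ = R diag(lambda^+) R^{-1} and A^- = R diag(lambda^-) R^{-1}."""
    lam, R = np.linalg.eig(A)
    lam, R = lam.real, R.real               # the eigenvalues u-c, u, u, u+c are real
    Rinv = np.linalg.inv(R)
    return (R * np.maximum(lam, 0.0)) @ Rinv, (R * np.minimum(lam, 0.0)) @ Rinv

def x_part_node_update(q_node, q_edge_left, q_node_left, q_edge_right, q_node_right, dx):
    """-(J^x D^x)^upw q at the node (i+1/2, j+1/2); the one-sided differences are
    the explicit corner formulas stated above, with
      q_node       = q_{i+1/2, j+1/2},
      q_edge_left  = q_{i, j+1/2},     q_node_left  = q_{i-1/2, j+1/2},
      q_edge_right = q_{i+1, j+1/2},   q_node_right = q_{i+3/2, j+1/2}."""
    Dxp = (q_node_left - 4.0 * q_edge_left + 3.0 * q_node) / dx
    Dxm = (4.0 * q_edge_right - 3.0 * q_node - q_node_right) / dx
    Jp, Jm = split(jacobian_x(q_node))      # Jacobian evaluated at the node value (a choice)
    return -(Jp @ Dxp + Jm @ Dxm)           # the y-part is added analogously
```

In an actual implementation one would likely use the known analytic eigenvector decomposition of the Euler Jacobians rather than a numerical one; the sketch keeps the latter only for brevity.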
The complete method consists of the ODEs [\[eq:averageupdatesemidiscrete\]](#eq:averageupdatesemidiscrete){reference-type="eqref" reference="eq:averageupdatesemidiscrete"} (average update), [\[eq:edgexupdatesemidiscrete\]](#eq:edgexupdatesemidiscrete){reference-type="eqref" reference="eq:edgexupdatesemidiscrete"}--[\[eq:edgeyupdatesemidiscrete\]](#eq:edgeyupdatesemidiscrete){reference-type="eqref" reference="eq:edgeyupdatesemidiscrete"} (point values at edge midpoints) and [\[eq:nodeupdatesemidiscrete\]](#eq:nodeupdatesemidiscrete){reference-type="eqref" reference="eq:nodeupdatesemidiscrete"} (point values at nodes). We propose to integrate these with an SSP-RK3 method. In [@abgrall22proceeding], it was shown for the one-dimensional case that this approach leads to a stable scheme with a maximum CFL number of 0.41. # Limiting {#sec:limiting} Existing approaches to limiting in the context of standard Finite Volume methods modify the values of the reconstruction at a cell interface. They cannot be used for Active Flux due to its global continuity and the fact that point values at cell interfaces are prescribed and cannot be modified arbitrarily. Limiting employed in [@kerkmann18] therefore gives up on continuity. Approaches to limiting that maintain continuity have so far only treated the situation in which a parabolic reconstruction of monotone discrete data (point values and average) is not monotone, i.e. has an artificial extremum. In [@roe15], a piecewise linear/parabolic reconstruction is used in this case, and in [@barsukow19activeflux] the same situation is handled by replacing the parabola by a power law. One can show that then the reconstruction is always monotone whenever the discrete data are. Such modified reconstructions are effective in drastically reducing spurious oscillations, but they are not guaranteed to remove them entirely. This is because the update of the averages is not limited and can itself create artificial extrema in the discrete data. However, in the absence of better approaches, the power-law reconstruction, for example, is a viable limiting strategy. In particular, it is not computationally intensive. In multiple spatial dimensions, a similar strategy is presented here for the first time. The multi-dimensional case is, however, much more complex because every cell has access to 8 point values. Consider point values at edge centers $q_\text{N}, q_\text{S}, q_\text{W}, q_\text{E}$ and at vertices $q_\text{NE}, q_\text{SE}, q_\text{NW}, q_\text{SW}$ of a (reference) Cartesian cell $c = [-\frac{\Delta x}{2}, \frac{\Delta x}{2}] \times [-\frac{\Delta y}{2}, \frac{\Delta y}{2}]$ and a cell average $\bar q$ to be given. We shall refer to the four edges as N-edge, S-edge, W-edge and E-edge, respectively. The reconstruction shall simply be denoted by $q_\text{recon} \colon c \to \mathbb R$. There exist two types of maximum-principle violation, which can occur independently of each other: 1. It can happen that the parabolic reconstruction along an edge (as part of a biparabolic reconstruction in the cell) overshoots/undershoots the three point values along the edge in question. For the example of an N-edge, this happens if either - the point values $q_\text{NW}$, $q_\text{N}$, $q_\text{NE}$ are not monotone and $q_\text{NW} \neq q_\text{NE}$, or if - they are monotone (i.e.
either $q_\text{NW} < q_\text{N} < q_\text{NE}$ or $q_\text{NW} > q_\text{N} > q_\text{NE}$), but $$\begin{aligned} \left |q_\text{N} - \frac{q_\text{NE} + q_\text{NW}}{2} \right | > \frac{|q_\text{NE} - q_\text{NW}|}{4} \label{eq:conditionhatreconedge}\end{aligned}$$ such that the parabolic reconstruction has an artificial extremum. In this case the reconstruction along the edge shall be chosen continuous piecewise linear ("hat"). We shall say that the **reconstruction along the edge is limited**, or just that the "edge is limited". To ensure continuity, the reconstruction in any cell with a limited edge can then no longer be biparabolic, but needs to be modified as detailed below and in Section [6.1](#sec:piecewisebiparabolic){reference-type="ref" reference="sec:piecewisebiparabolic"}. 2. Define $$\begin{aligned} m := \min(q_\text{N}, q_\text{S}, q_\text{W}, q_\text{E}, q_\text{NE}, q_\text{SE}, q_\text{NW}, q_\text{SW} ) \\ M := \max(q_\text{N}, q_\text{S}, q_\text{W}, q_\text{E}, q_\text{NE}, q_\text{SE}, q_\text{NW}, q_\text{SW} ) \end{aligned}$$ It can happen that despite $$\begin{aligned} m < \bar q < M \label{eq:quasimonotone}\end{aligned}$$ the reconstruction $q_\text{recon}$ inside the cell $c$ fails to fulfill the maximum-principle, i.e. $$\begin{aligned} \exists x \in c \text{ such that either } q_\text{recon}(x) < m \text{ or }q_\text{recon}(x) > M\end{aligned}$$ This situation shall be improved by introducing a piecewise defined reconstruction with a central region where the function is constant ("plateau"), and connecting the plateau to the (parabolic or hat) reconstructions along the edges in a continuous fashion. More details are given below and in Section [6.2](#sec:plateau){reference-type="ref" reference="sec:plateau"}; Figures [2](#fig:plateau){reference-type="ref" reference="fig:plateau"} and [4](#fig:plateau2){reference-type="ref" reference="fig:plateau2"} show examples. This new reconstruction fulfills $$\begin{aligned} m < q_\text{recon}(x) < M \qquad \forall x \in c\end{aligned}$$ We shall say that the **reconstruction inside the cell is limited**, or just that the "cell is limited". This situation appears already in 1-d, in which case it has been suggested in [@barsukow19activeflux] to replace the parabolic reconstruction in the cell by a power law. A multi-dimensional analogue of the power law seems unfeasible, though, and we resort here to a piecewise defined, but easier function. ![ *Left*: An example of a plateau reconstruction. Here, $q_\text{NW} = 1$, $q_\text{W} = 1.35$, $q_\text{SW} = 0.6$, $q_\text{S} = 0.4$, $q_\text{SE} = 0$, $q_\text{E} = -0.2$, $q_\text{NE} = 0.0$, $q_\text{N} = 1$, $\bar q = 0.9$ (the S-edge is on the left). All edges but the S-edge are reconstructed as hats, the S-edge is reconstructed parabolically. *Right*: A piecewise-biparabolic reconstruction of the same data; one clearly observes an overshoot. The isolines have a spacing of 0.1.](images/plateau.png "fig:"){#fig:plateau width="49%"} ![ *Left*: An example of a plateau reconstruction. Here, $q_\text{NW} = 1$, $q_\text{W} = 1.35$, $q_\text{SW} = 0.6$, $q_\text{S} = 0.4$, $q_\text{SE} = 0$, $q_\text{E} = -0.2$, $q_\text{NE} = 0.0$, $q_\text{N} = 1$, $\bar q = 0.9$ (the S-edge is on the left). All edges but the S-edge are reconstructed as hats, the S-edge is reconstructed parabolically. *Right*: A piecewise-biparabolic reconstruction of the same data; one clearly observes an overshoot. 
The isolines have a spacing of 0.1.](images/plateau-overshoot.png "fig:"){#fig:plateau width="49%"} ![ *Left*: An example of a plateau reconstruction. $q_\text{NE} = 1, q_\text{NW} = 2, q_\text{SW} = -4, q_\text{SE} = 0, q_\text{N} = -1, q_\text{S} = 4, q_\text{W} = -5, q_\text{E} = -3, \bar q = 2$ (the W-edge is on the left). All edges are reconstructed as hats. *Right*: A piecewise-biparabolic reconstruction of the same data; one clearly observes an overshoot. The isolines have a spacing of 0.25.](images/plateau2.png "fig:"){#fig:plateau2 width="49%"} ![ *Left*: An example of a plateau reconstruction. $q_\text{NE} = 1, q_\text{NW} = 2, q_\text{SW} = -4, q_\text{SE} = 0, q_\text{N} = -1, q_\text{S} = 4, q_\text{W} = -5, q_\text{E} = -3, \bar q = 2$ (the W-edge is on the left). All edges are reconstructed as hats. *Right*: A piecewise-biparabolic reconstruction of the same data; one clearly observes an overshoot. The isolines have a spacing of 0.25.](images/plateau2-overshoot.png "fig:"){#fig:plateau2 width="49%"} The two situations are independent: any number of edges along the boundary of a cell might require limiting, and this will not generally imply anything about whether the cell itself is to be limited. The possible presence of hat functions along the boundary requires the reconstruction inside the cell to flexibly adapt to the different combinations of edge-reconstructions in order to be continuous. For instance, the plateau reconstruction needs to connect the plateau continuously to either a parabola or a hat function (see Section [6.2](#sec:plateau){reference-type="ref" reference="sec:plateau"}). Also, if there exists at least one edge that is reconstructed as a hat function, then one cannot use a biparabolic reconstruction inside the cell any longer, whether the cell is limited or not. If the cell is not limited but at least one of its edges is, a piecewise-biparabolic reconstruction shall be used, detailed in Section [6.1](#sec:piecewisebiparabolic){reference-type="ref" reference="sec:piecewisebiparabolic"}. As we are aiming at a globally continuous reconstruction that is computed locally from merely the cell average and the point values of the cell, the reconstruction along an edge can only depend on the three values associated to this edge, and cannot depend on other values in the cell. Indeed, if the edge-reconstruction of one of the edges of $c$ were to depend on, say, the average in the cell $c$, then the reconstruction in the neighbouring cell $c'$ would also need to know about the average in $c$. Due to the particular choice of degrees of freedom for Active Flux, the reconstruction has to fulfill two types of conditions: it is supposed to interpolate the point values at cell interfaces, and its average is supposed to be equal to the given one. The latter condition -- merely to simplify the calculations -- shall be replaced by a (yet unknown) point value $q_\text{C}$ at the cell center which is kept as a variable in the formulae. Once the type of reconstruction in all regions of the cell has been determined, their integrals over the respective domains of definition can easily be found as functions of $q_\text{C}$, and $q_\text{C}$ is then determined by imposing the prescribed average of the reconstruction over the entire cell. This is a linear equation in $q_\text{C}$, since the linearity of the interpolation problem makes $q_\text{C}$ enter linearly everywhere. The explicit formulae below therefore also depend on $q_\text{C}$, but the reconstruction in a cell in the end only depends on the point values along its boundary and on its average. This detour does not change the result but simplifies the algorithm.
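As a small illustration of the first type of limiting, the following Python sketch (our own hypothetical helper, not part of the method description) implements the per-edge decision stated above: the reconstruction along an edge with values $(q_a, q_m, q_b)$ (e.g. $(q_\text{NW}, q_\text{N}, q_\text{NE})$ for the N-edge) is replaced by a hat exactly when the data are non-monotone with distinct endpoints, or when they are monotone but condition [\[eq:conditionhatreconedge\]](#eq:conditionhatreconedge){reference-type="eqref" reference="eq:conditionhatreconedge"} holds.

```python
def edge_needs_hat(q_a, q_m, q_b):
    """True if the parabolic edge reconstruction over-/undershoots its three values
    and a continuous piecewise linear ("hat") reconstruction should be used."""
    monotone = (q_a < q_m < q_b) or (q_a > q_m > q_b)
    if not monotone:
        # non-monotone data with distinct endpoints: the parabola has an extremum
        return q_a != q_b
    # monotone data, but the parabola still has an artificial extremum if
    # |q_m - (q_a + q_b)/2| > |q_b - q_a| / 4
    return abs(q_m - 0.5 * (q_a + q_b)) > 0.25 * abs(q_b - q_a)

# Example: monotone edge data whose parabola overshoots near the right endpoint
print(edge_needs_hat(0.0, 0.9, 1.0))   # True -> reconstruct this edge as a hat
```

Note that the decision uses only the three values located on the edge, in line with the locality requirement discussed above.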
The overall structure of the reconstruction algorithm is: 1. Decide for every edge of the cell whether it is reconstructed parabolically, or as a hat function. 2. [\[it:piecewbipara\]]{#it:piecewbipara label="it:piecewbipara"} Assume as hypothesis that the cell does not require limiting (i.e. that it is reconstructed in a piecewise biparabolic fashion) and compute the value of $q_\text{C}$ that ensures that the average of the reconstruction agrees with the given cell average. 3. Check [\[eq:quasimonotone\]](#eq:quasimonotone){reference-type="eqref" reference="eq:quasimonotone"} and, if true, decide whether the piecewise-biparabolic reconstruction obtained in [\[it:piecewbipara\]](#it:piecewbipara){reference-type="ref" reference="it:piecewbipara"} violates the maximum principle[^5] 4. If this is the case, the cell needs to be limited with a plateau reconstruction. Compute the parameters $\eta, q_\text{p}$ (see below) of the plateau reconstruction that ensure maximum principle preservation and the correct value of the average of the reconstruction. A pedagogical derivation of the reconstruction algorithm is given in Section [6](#app:recon){reference-type="ref" reference="app:recon"}. Here, we only state all the relevant results in a concise way. **Theorem 1**. *The following reconstruction $q_\text{recon} \colon \left[ -\frac{\Delta x}{2}, \frac{\Delta x}{2} \right] \times \left[ -\frac{\Delta y}{2}, \frac{\Delta y}{2} \right] \to \mathbb R$ is continuous, interpolates all the point values along the boundary of the cell, and its average agrees with the given cell average. It has the following properties:* (i) *If Condition [\[eq:quasimonotone\]](#eq:quasimonotone){reference-type="eqref" reference="eq:quasimonotone"}, i.e. $m < \bar q < M$ is fulfilled, then $m \leq q_\text{recon}(x) \leq M$ for all $x$ inside the cell.* (ii) *If $q_\text{NW} < q_\text{N} < q_\text{NE}$, then $q_\text{NW} \leq q_\text{recon}(x) \leq q_\text{NE}$ for all $x$ along the N-edge, and similarly for all the other edges.* *The definition of the reconstruction is as follows: If $m < \bar q < M$ is not fulfilled, or if it is fulfilled and additionally $m < q^\text{pw. biparab.}_\text{recon}(x, y) < M$ for all $(x, y) \in c$, then $$\begin{aligned} q_\text{recon}(x, y) := q^\text{pw. biparab.}_\text{recon}(x, y)\end{aligned}$$ otherwise $$\begin{aligned} q_\text{recon}(x, y) := q^\text{plateau}_\text{recon}(x, y),\end{aligned}$$ the two types of reconstruction being defined as follows:* *$$\begin{aligned} q_\text{recon}^\text{pw.
biparab.}(x, y) &:= q_\text{recon}^\text{W}\left(\frac{q_\text{SW}}{2}, q_\text{W}, \frac{q_\text{NW}}{2}, x, y, \text{S}, \text{N}, \text{W}, \frac{\Delta \bar q}{4}\right) \\& \nonumber +q_\text{recon}^\text{S}\left(\frac{q_\text{SE}}{2}, q_\text{S}, \frac{q_\text{SW}}{2}, x, y, \text{E}, \text{W}, \text{S}, \frac{\Delta \bar q}{4}\right)\\ &\nonumber+q_\text{recon}^\text{N}\left(\frac{q_\text{NW}}{2}, q_\text{N}, \frac{q_\text{NE}}{2}, x, y, \text{W}, \text{E}, \text{N}, \frac{\Delta \bar q}{4}\right)\\& \nonumber +q_\text{recon}^\text{E}\left(\frac{q_\text{NE}}{2}, q_\text{E}, \frac{q_\text{SE}}{2}, x, y, \text{N}, \text{S}, \text{E}, \frac{\Delta \bar q}{4}\right)\\ &\nonumber+ (\bar q - \Delta \bar q)\end{aligned}$$ with $$\begin{aligned} q_\text{recon}^\text{S}(q_\text{SE}, q_\text{S}, q_\text{SW}, x, y, \text{E}, \text{W}, \text{S}, \bar q ) &= q_\text{recon}^\text{W}(q_\text{SE}, q_\text{S}, q_\text{SW}, y, -x, \text{E}, \text{W}, \text{S}, \bar q ) \label{eq:rotationreconS}\\ q_\text{recon}^\text{N}(q_\text{NW}, q_\text{N}, q_\text{NE}, x, y, \text{W}, \text{E}, \text{N}, \bar q ) &= q_\text{recon}^\text{W}(q_\text{NW}, q_\text{N}, q_\text{NE}, -y, x, \text{W}, \text{E}, \text{N}, \bar q ) \\ q_\text{recon}^\text{E}(q_\text{NE}, q_\text{E}, q_\text{SE}, x, y, \text{N}, \text{S}, \text{E}, \bar q ) &= q_\text{recon}^\text{W}(q_\text{NE}, q_\text{E}, q_\text{SE}, -x, -y, \text{N}, \text{S}, \text{E}, \bar q ) \label{eq:rotationreconE}\end{aligned}$$ and $$\begin{aligned} &q_\text{recon}^\text{W}(q_\text{SW}, q_\text{W}, q_\text{NW}, x, y, \text{S}, \text{N}, \text{W}, \bar q ) \\\nonumber &\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,= \begin{cases} \text{\eqref{eq:bipararecon}} & \text{N,S,W parabolic} \\ \text{\eqref{eq:parahathatleft}--\eqref{eq:parahathatright}} & \text{W parabolic, N,S hat} \\ \text{\eqref{eq:parahatparaleft}--\eqref{eq:parahatpararight}} & \text{W,S parabolic, N hat}\\ \text{\eqref{eq:paraparahatleft}--\eqref{eq:paraparahatright}} & \text{W,N parabolic, S hat}\\ \text{\eqref{eq:hatWparaS} and \eqref{eq:hatWparaN}} & \text{W hat, N,S parabolic}\\ \text{\eqref{eq:hatWparaS} and \eqref{eq:hatWhatNleft}--\eqref{eq:hatWhatNright}} & \text{W,N hat, S parabolic}\\ \text{\eqref{eq:hatWparaN} and \eqref{eq:hathatbottomleft}--\eqref{eq:hathatbottomright}} & \text{W,S hat, N parabolic}\\ \text{\eqref{eq:hatWhatNleft}--\eqref{eq:hatWhatNright} and \eqref{eq:hathatbottomleft}--\eqref{eq:hathatbottomright}} & \text{W,S,N hat} \end{cases}\end{aligned}$$ Here, N/S/E/W denote the edges of the cell. 
$q_\text{C}$ fulfills $$\begin{aligned} \begin{cases} q_\text{C} = \frac{1}{16}(36 \bar q -q_\text{NW}-q_\text{SW}-4 q_\text{W}) & \text{N,S,W parabolic} \\ q_\text{C} =\frac1{32} (72 \bar q -3 q_\text{NW}-3q_\text{SW}-8 q_\text{W}) & \text{W parabolic, N,S hat} \\ q_\text{C} =\frac{1}{32} (72 \bar q -3 q_\text{NW}-2 q_\text{SW}-8 q_\text{W}) & \text{W,S parabolic, N hat}\\ q_\text{C} =\frac1{32} (72 \bar q -2 q_\text{NW}-3 q_\text{SW}-8 q_\text{W}) & \text{W,N parabolic, S hat}\\ \bar q = \frac{2 q_\text{C}}9+ \frac{q_\text{SW}+q_\text{W}}{24} + \frac{2 q_\text{C}}9+\frac{q_\text{NW}+q_\text{W}}{24} & \text{W hat, N,S parabolic}\\ \bar q = \frac{2 q_\text{C}}9+ \frac{q_\text{SW}+q_\text{W}}{24} + \frac{2 q_\text{C}}9+\frac1{576} (35 q_\text{NW}+q_\text{SW}+22 q_\text{W}) & \text{W,N hat, S parabolic}\\ \bar q = \frac{2 q_\text{C}}9+\frac{q_\text{NW}+q_\text{W}}{24} + \frac{2 q_\text{C}}9+ \frac1{576} (q_\text{NW}+35 q_\text{SW}+22 q_\text{W}) & \text{W,S hat, N parabolic}\\ \bar q = \frac{2 q_\text{C}}9+\frac1{576} (35 q_\text{NW}+q_\text{SW}+22 q_\text{W}) + \frac{2 q_\text{C}}9+ \frac1{576} (q_\text{NW}+35 q_\text{SW}+22 q_\text{W}) & \text{W,S,N hat} \end{cases}\end{aligned}$$* *$$\begin{aligned} q_\text{recon}^\text{plateau}(x, y) &:= \begin{cases} q_\text{p} \text{ if } (x, y) \in \left[\Delta x \left(\eta-\frac12 \right), \Delta x \left(\frac12 - \eta \right)\right] \times \left[\Delta y \left(\eta-\frac12 \right), \Delta y \left(\frac12 - \eta \right)\right] \\ q_\text{recon}^\text{trapeze W}(q_\text{SW}, q_\text{W}, q_\text{NW}, x, y, \text{W}, \eta, q_\text{p}) \text{ if }(x,y)\in \text{W-trapeze} \\ q_\text{recon}^\text{trapeze S}(q_\text{SE}, q_\text{S}, q_\text{SW}, x, y, \text{S}, \eta, q_\text{p} ) \text{ if }(x,y)\in \text{S-trapeze}\\ q_\text{recon}^\text{trapeze N}(q_\text{NW}, q_\text{N}, q_\text{NE}, x, y, \text{N}, \eta, q_\text{p}) \text{ if }(x,y)\in \text{N-trapeze} \\ q_\text{recon}^\text{trapeze E}(q_\text{NE}, q_\text{E}, q_\text{SE}, x, y, \text{E}, \eta, q_\text{p} )\text{ if }(x,y)\in \text{E-trapeze} \end{cases}\end{aligned}$$* *with $$\begin{aligned} q_\text{recon}^\text{trapeze W}(q_\text{SW}, q_\text{W}, q_\text{NW}, x, y, \text{W}, \eta, q_\text{p}) = \begin{cases} \eqref{eq:trapezeWparabolic} & (x, y) \in \text{W parabolic}\\ \text{\eqref{eq:trapezeWhattop}--\eqref{eq:trapezeWhatbottom}} & (x, y) \in \text{W hat} \end{cases}\end{aligned}$$ defined only in $$\begin{aligned} \text{W-trapeze} = \left\{(x, y) \text{ s.t. 
} x \in \left[-\frac{\Delta x}{2}, -\Delta x\left(\frac12- \eta\right) \right] \text{ and } y \in \left[\frac{x}{\Delta x} \Delta y , -\frac{x}{\Delta x} \Delta y \right ] \right \}\end{aligned}$$* *The reconstructions of the other trapezes are $$\begin{aligned} q_\text{recon}^\text{trapeze S}(q_\text{SE}, q_\text{S}, q_\text{SW}, x, y, \text{S}, \eta, q_\text{p} ) &= q_\text{recon}^\text{trapeze W}(q_\text{SE}, q_\text{S}, q_\text{SW}, y, -x, \text{S}, \eta, q_\text{p}) \\ q_\text{recon}^\text{trapeze N}(q_\text{NW}, q_\text{N}, q_\text{NE}, x, y, \text{N}, \eta, q_\text{p}) &= q_\text{recon}^\text{trapeze W}(q_\text{NW}, q_\text{N}, q_\text{NE}, -y, x, \text{N}, \eta, q_\text{p} ) \\ q_\text{recon}^\text{trapeze E}(q_\text{NE}, q_\text{E}, q_\text{SE}, x, y, \text{E}, \eta, q_\text{p} ) &= q_\text{recon}^\text{trapeze W}(q_\text{NE}, q_\text{E}, q_\text{SE}, -x, -y, \text{E}, \eta, q_\text{p} ) \end{aligned}$$* *The parameters $q_\text{p}$ and $\eta$ are found according to the procedure of Section [6.2.4](#ssec:plateaumaximumprinciple){reference-type="ref" reference="ssec:plateaumaximumprinciple"}.* *Proof.* Continuity is a consequence of Theorem [Theorem 5](#thm:continuitypara){reference-type="ref" reference="thm:continuitypara"}. The pointwise and average interpolation property follows from Theorems [Theorem 4](#thm:reconinterpol){reference-type="ref" reference="thm:reconinterpol"} and [Theorem 7](#thm:reconplateauinterpol){reference-type="ref" reference="thm:reconplateauinterpol"}. The pointwise interpolation property is, in fact, trivially guaranteed by construction (see Sections [6.1.1.4](#ssec:continuitypara){reference-type="ref" reference="ssec:continuitypara"} and [6.2.1](#ssec:plateauinterpolation){reference-type="ref" reference="ssec:plateauinterpolation"}). Preservation of the maximum principle along the edges is clear from [\[eq:conditionhatreconedge\]](#eq:conditionhatreconedge){reference-type="eqref" reference="eq:conditionhatreconedge"} and the idea of reconstructing a hat function along the edge. Preservation of the maximum principle inside the cell follows from Theorem [Theorem 7](#thm:reconplateauinterpol){reference-type="ref" reference="thm:reconplateauinterpol"}. ◻ **Theorem 2**. *The usage of the reconstruction from Theorem [Theorem 1](#thm:limitedrecon){reference-type="ref" reference="thm:limitedrecon"} in every cell leads to a globally continuous reconstruction.* *Proof.* It follows trivially by construction (see Sections [6.1.1.4](#ssec:continuitypara){reference-type="ref" reference="ssec:continuitypara"} and [6.2.1](#ssec:plateauinterpolation){reference-type="ref" reference="ssec:plateauinterpolation"}) that the reconstruction in a cell $c$ continuously turns into the reconstruction along the edge as $c \ni (x,y) \to s \in \partial c$. The reconstructions along the edges only depend on the three point values located on the edge, and thus the limit as $(x,y)$ approaches the same edge from the other cell is the same. ◻ # Numerical results {#sec:numerical} Here, the Euler equations with $q = (\rho, \rho u, \rho v, e)$, $$\begin{aligned} f^x &= (\rho u, \rho u^2 + p, \rho u v, u (e+p)) \\ f^y &= (\rho v, \rho u v, \rho v^2 + p , v (e+p)) \\ e &= \frac{p}{\gamma-1} + \frac12 \rho (u^2 + v^2)\end{aligned}$$ and $\gamma=1.4$ are solved using the Active Flux method described above.
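For reference, a minimal Python sketch of these fluxes (our own helper functions, mirroring the definitions just given; not the code used to produce the results below) reads:

```python
import numpy as np

GAMMA = 1.4

def pressure(q):
    """Pressure from conservative variables q = (rho, rho*u, rho*v, e)."""
    rho, mx, my, e = q
    return (GAMMA - 1.0) * (e - 0.5 * (mx**2 + my**2) / rho)

def flux_x(q):
    rho, mx, my, e = q
    u, v = mx / rho, my / rho
    p = pressure(q)
    return np.array([rho * u, rho * u**2 + p, rho * u * v, u * (e + p)])

def flux_y(q):
    rho, mx, my, e = q
    u, v = mx / rho, my / rho
    p = pressure(q)
    return np.array([rho * v, rho * u * v, rho * v**2 + p, v * (e + p)])

# Example: a state with rho = 1, u = 0.3, v = 0, p = 1
q0 = np.array([1.0, 0.3, 0.0, 1.0 / (GAMMA - 1.0) + 0.5 * 0.3**2])
print(flux_x(q0), flux_y(q0))
```

The pressure is recovered by inverting the definition of the total energy $e$ stated above.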
Initial data are denoted by $\rho_0$, $u_0$, $v_0$, $p_0$. ## Convergence study For a convergence analysis, the equations are solved until $t=0.05$ on grids of different resolution with the following initial data (similar to those used in [@kerkmann18; @barsukow19activeflux]): $$\begin{aligned} u_0(x,y) &= v_0(x,y) = 0 \\ \rho_0(x,y) &= p_0(x,y) = 1 + \frac12 \exp\left(-80 (x^2 + y^2)\right)\end{aligned}$$ Figure [6](#fig:convergence){reference-type="ref" reference="fig:convergence"} shows the setup and the error, computed with respect to a reference solution obtained on a grid of $1024 \times 1024$ cells. Limiting is not used. One observes third order accuracy in agreement with the expectation. ![Convergence study. *Left*: Setup at initial time and at $t=0.05$, shown as scatter plot as a function of radius, computed on a $256\times 256$ grid. *Right*: $L^1$ error of the numerical solution of the point values.](images/convergence-solution.png "fig:"){#fig:convergence width=".49\\textwidth"}![Convergence study. *Left*: Setup at initial time and at $t=0.05$, shown as scatter plot as a function of radius, computed on a $256\times 256$ grid. *Right*: $L^1$ error of the numerical solution of the point values.](images/convergence.png "fig:"){#fig:convergence width=".49\\textwidth"} ## Spherical shock tube As a first test with discontinuities, Figure [8](#fig:sod){reference-type="ref" reference="fig:sod"} shows a 2-dimensional version of Sod's shock tube: $$\begin{aligned} \rho_0(x, y) &= \begin{cases} 1 & r < 0.3 \\ 0.125 & \text{else} \end{cases} & p_0(x, y) &= \begin{cases} 1 & r < 0.3 \\ 0.1 & \text{else} \end{cases} \\ u_0(x, y) &= v_0(x,y) = 0\end{aligned}$$ with $r = \sqrt{(x-\frac12)^2 + (y-\frac12)^2}$. One observes that the limiting is successful in suppressing oscillations. Global continuity does not impede Active Flux from converging to weak solutions, because the update of the averages is conservative and fulfills a version of the Lax-Wendroff theorem ([@abgrall20]). ![Radial scatter plot of the two-dimensional version of Sod's shock tube solved on a $100 \times 100$ grid. The solid line shows a finely resolved solution of the one-dimensional, radial Euler equations obtained with a standard Finite Volume method. *Left*: No limiting. *Right*: Limiting used.](images/sod-radial.png "fig:"){#fig:sod width="49%"} ![Radial scatter plot of the two-dimensional version of Sod's shock tube solved on a $100 \times 100$ grid. The solid line shows a finely resolved solution of the one-dimensional, radial Euler equations obtained with a standard Finite Volume method. *Left*: No limiting. *Right*: Limiting used.](images/sod-radial-lim.png "fig:"){#fig:sod width="49%"} ## Multi-dimensional Riemann problems In [@lax98], particular multi-dimensional Riemann problems were studied, designed such that the one-dimensional Riemann problems outside the central interaction region result in elementary waves. Inside the interaction region these Riemann problems display a lot of sophisticated structure. They shall illustrate the ability of the proposed method to solve complex interactions of shocks, rarefactions and slip lines. All the Riemann problems shown in Figure [12](#fig:laxliu){reference-type="ref" reference="fig:laxliu"} are solved on grids with $\Delta x = \Delta y = \frac{1}{200}$ (twice the resolution used in the original publication) with a domain slightly larger than the one shown (to exclude the influence of boundary conditions).
A CFL number of 0.05 was used, as well as limiting, as described in Section [3](#sec:limiting){reference-type="ref" reference="sec:limiting"}. Figures [14](#fig:laxliulimdiff1){reference-type="ref" reference="fig:laxliulimdiff1"}--[15](#fig:laxliulimdiff2){reference-type="ref" reference="fig:laxliulimdiff2"} show a comparison between results obtained with and without limiting. It seems that the intricate structures in the interaction region are not significantly smeared out by the limiting while oscillations at shocks are very efficiently suppressed. ![Multi-dimensional Riemann problems solved on a grid with $\Delta x = \Delta y = \frac{1}{200}$ using limiting as described in Section [3](#sec:limiting){reference-type="ref" reference="sec:limiting"}. Configurations 6 (*top left*), 11 (*top right*), 12 (*bottom left*) and 16 (*bottom right*) from [@lax98] are shown. Color-coded is density. ](images/laxliu-6-lim.png "fig:"){#fig:laxliu width="49%"} ![Multi-dimensional Riemann problems solved on a grid with $\Delta x = \Delta y = \frac{1}{200}$ using limiting as described in Section [3](#sec:limiting){reference-type="ref" reference="sec:limiting"}. Configurations 6 (*top left*), 11 (*top right*), 12 (*bottom left*) and 16 (*bottom right*) from [@lax98] are shown. Color-coded is density. ](images/laxliu-11-lim.png "fig:"){#fig:laxliu width="49%"} ![Multi-dimensional Riemann problems solved on a grid with $\Delta x = \Delta y = \frac{1}{200}$ using limiting as described in Section [3](#sec:limiting){reference-type="ref" reference="sec:limiting"}. Configurations 6 (*top left*), 11 (*top right*), 12 (*bottom left*) and 16 (*bottom right*) from [@lax98] are shown. Color-coded is density. ](images/laxliu-12-lim.png "fig:"){#fig:laxliu width="49%"} ![Multi-dimensional Riemann problems solved on a grid with $\Delta x = \Delta y = \frac{1}{200}$ using limiting as described in Section [3](#sec:limiting){reference-type="ref" reference="sec:limiting"}. Configurations 6 (*top left*), 11 (*top right*), 12 (*bottom left*) and 16 (*bottom right*) from [@lax98] are shown. Color-coded is density. ](images/laxliu-16-lim.png "fig:"){#fig:laxliu width="49%"} ![Influence of limiting on the central region in Configuration 12. *Left*: Limiting off. *Right*: Limiting on. Without limiting one observes some undershoots inside the vortices. The structure of the solution feature is, however, not degraded by applying the limiter.](images/laxliu-12-zoom.png "fig:"){#fig:laxliulimdiff1 width="49%"} ![Influence of limiting on the central region in Configuration 12. *Left*: Limiting off. *Right*: Limiting on. Without limiting one observes some undershoots inside the vortices. The structure of the solution feature is, however, not degraded by applying the limiter.](images/laxliu-12-lim-zoom.png "fig:"){#fig:laxliulimdiff1 width="49%"} ![Influence of limiting on Configuration 12. Density is shown along the lines $x = 0.4325$ and $x = 0.7525$ (as indicated in the inset). One observes that limiting successfully removes spurious oscillations in the vicinity of discontinuities. However, it also gently shifts the location of the central double-vortex and smears out the feature along the $x=y$ diagonal in the first quadrant.](images/laxliu-12-lim-parts-inset.png){#fig:laxliulimdiff2 width="\\textwidth"} ## Kelvin-Helmholtz instability A special kind of a Kelvin-Helmholtz instability triggered by the passage of an acoustic wave has been used in [@munz03] to assess the properties of the numerical method for subsonic flow. 
The initial data are $$\begin{aligned} \rho_0(x,y) &= 1 + \frac{\mathcal M}5 \psi(x) + \varphi(y) & u_0(x,y) &= \sqrt{\gamma} \psi(x)\\ p_0(x,y) &= \frac{1}{\mathcal M^2} + \frac{1}{\mathcal M} \gamma \psi(x) & v_0(x, y) &= 0\end{aligned}$$ with $$\begin{aligned} \varphi(y) &:= \begin{cases} 2\mathcal M y & y < 4 \\ 2\mathcal M(y - 4) - 0.4 & \text{else} \end{cases} & \psi(x) &:= 1 + \cos(\pi \mathcal M x)\end{aligned}$$ The restriction of these initial data to the $x$-direction only $$\begin{aligned} \rho_0^{x}(x) &:= 1 + \frac{\mathcal M}5 \psi(x) & u_0^{x}(x) &:= \sqrt{\gamma} \psi(x)\\ p_0^{x}(x) &:= \frac{1}{\mathcal M^2} + \frac{1}{\mathcal M} \gamma \psi(x) &\end{aligned}$$ is a right-running sound wave: The linearized Euler equations $$\begin{aligned} \partial_t \rho^x(t,x) + \bar \rho \partial_x u^x(t,x) &= 0\\ \partial_t u^x(t,x) + \frac{1}{\bar \rho} \partial_x p^x(t,x) &= 0\\ \partial_t p^x(t,x) + \bar \rho c^2 \partial_x u^x(t,x) &= 0\end{aligned}$$ are solved by $$\begin{aligned} \rho^x(t,x) &= \rho_0^x(x - ct) & u^x(t,x) &= u_0^x(x - ct) & p^x(t,x) &= p_0^x(x - ct)\end{aligned}$$ with $c^2 = \frac{5 \gamma}{\mathcal M^2}$ and $\bar \rho = \frac{1}{\sqrt{5}}$. The non-linearity of the full Euler equations leads to a self-steepening of the sound wave. Additionally, due to the density change in $y$-direction, a shear flow is induced, which causes a Kelvin-Helmholtz instability. Here we show this setup on grids of $400 \times 80$ (Figure [19](#fig:kh1){reference-type="ref" reference="fig:kh1"}) and $800 \times 160$ (Figure [23](#fig:kh2){reference-type="ref" reference="fig:kh2"}) with a CFL of 0.15 and $\mathcal M = \frac{1}{20}$. No limiting was used. One observes that the method is able to adequately resolve both the instability and the sound waves passing through the domain. ![A Kelvin-Helmholtz instability is triggered by the passage of an acoustic wave. The setup is computed on a $400 \times 80$ grid without using limiting. Density is shown at times $t=3, 6, 9, 12$.](images/KH/KHsoundwave-400x80-t03.png "fig:"){#fig:kh1 width="\\textwidth"}\ ![A Kelvin-Helmholtz instability is triggered by the passage of an acoustic wave. The setup is computed on a $400 \times 80$ grid without using limiting. Density is shown at times $t=3, 6, 9, 12$.](images/KH/KHsoundwave-400x80-t06.png "fig:"){#fig:kh1 width="\\textwidth"}\ ![A Kelvin-Helmholtz instability is triggered by the passage of an acoustic wave. The setup is computed on a $400 \times 80$ grid without using limiting. Density is shown at times $t=3, 6, 9, 12$.](images/KH/KHsoundwave-400x80-t09.png "fig:"){#fig:kh1 width="\\textwidth"}\ ![A Kelvin-Helmholtz instability is triggered by the passage of an acoustic wave. The setup is computed on a $400 \times 80$ grid without using limiting. 
Density is shown at times $t=3, 6, 9, 12$.](images/KH/KHsoundwave-400x80-t12.png "fig:"){#fig:kh1 width="\\textwidth"} ![Same setup as Figure [19](#fig:kh1){reference-type="ref" reference="fig:kh1"}, but on a grid of $800\times 160$.](images/KH/KHsoundwave-hires-t03.png "fig:"){#fig:kh2 width="\\textwidth"}\ ![Same setup as Figure [19](#fig:kh1){reference-type="ref" reference="fig:kh1"}, but on a grid of $800\times 160$.](images/KH/KHsoundwave-hires-t06.png "fig:"){#fig:kh2 width="\\textwidth"}\ ![Same setup as Figure [19](#fig:kh1){reference-type="ref" reference="fig:kh1"}, but on a grid of $800\times 160$.](images/KH/KHsoundwave-hires-t09.png "fig:"){#fig:kh2 width="\\textwidth"}\ ![Same setup as Figure [19](#fig:kh1){reference-type="ref" reference="fig:kh1"}, but on a grid of $800\times 160$.](images/KH/KHsoundwave-hires-t12.png "fig:"){#fig:kh2 width="\\textwidth"} # Conclusions Active Flux combines aspects of Finite Volume and Finite Element methods. The evolution of cell averages ensures shock-capturing properties, while the incorporation of point values at cell interfaces leads to a globally continuous reconstruction. The incorporation of additional degrees of freedom, and thus the compact nature of the stencil, makes the method high-order, yet efficient for parallelization and for the implementation of boundary conditions. The shared degrees of freedom imply less memory cost than DG methods. Finally, point values do not need to be expressed in conservative variables, i.e. Active Flux offers more freedom than conventional approaches. The continuous reconstruction is, of course, the major difference from Godunov methods. There are, however, certain parallels between the development of Godunov methods and that of Active Flux: Both started out as fully discrete methods requiring a fairly complex and expensive ingredient: an exact Riemann solver in the case of Godunov methods and an exact evolution operator in the case of Active Flux. Both can be understood as exact solutions for different IVPs: Riemann problem data in the case of Godunov methods and continuous, piecewise parabolic data in the case of Active Flux. Then, for both types of methods there was a quest for simpler and more flexible approaches, with the passage from fully discrete methods to semi-discrete methods. For Godunov methods, for example, approximate Riemann solvers came up, and for Active Flux, approximate evolution operators were studied (e.g. in [@barsukow19activeflux]). However, due to the inherent high-order nature of Active Flux the latter needed to have high order of accuracy, and hence were non-trivial to find. Once Active Flux was rephrased as a semi-discrete method ([@abgrall20; @abgrall22]), these difficulties were overcome, for it is easy to derive spatial discretizations that use the degrees of freedom of Active Flux and to immediately write down evolution equations for the point values. The semi-discrete problem can then be integrated in time using standard methods. To show that such an Active Flux method can be successfully used to solve the multi-dimensional Euler equations is the aim of the present work. One finds that, endowed with a limiting strategy, Active Flux is indeed able to easily solve complex flow problems. This has been demonstrated here for examples of multi-dimensional Riemann problems and for subsonic flows. The approach is generic and can immediately be applied to other hyperbolic systems of conservation laws.
Further research is necessary to understand the theoretical aspects of this method, such as entropy inequalities. Future work will also be directed towards improving and simplifying the limiting and towards preservation of physical conditions such as positivity of the pressure. # Acknowledgements {#acknowledgements .unnumbered} CK and WB acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) within *SPP 2410 Hyperbolic Balance Laws in Fluid Mechanics: Complexity, Scales, Randomness (CoScaRa)*, project number 525941602. Rémi Abgrall and Wasilij Barsukow. Extensions of Active Flux to arbitrary order of accuracy. , 57(2):991--1027, 2023. Rémi Abgrall and Wasilij Barsukow. A hybrid finite element--finite volume method for conservation laws. , 447:127846, 2023. Rémi Abgrall. A combination of Residual Distribution and the Active Flux formulations or a new class of schemes that can combine several writings of the same hyperbolic problem: application to the 1d Euler equations. , pages 1--33, 2022. Wasilij Barsukow. The active flux scheme for nonlinear problems. , 86(1):1--34, 2021. Wasilij Barsukow and Jonas P Berberich. A well-balanced Active Flux method for the shallow water equations with wetting and drying. , pages 1--46, 2023. Wasilij Barsukow, Jonathan Hohm, Christian Klingenberg, and Philip L Roe. The active flux scheme on Cartesian grids and its low Mach number limit. , 81(1):594--622, 2019. Wasilij Barsukow and Christian Klingenberg. Exact solution and a truly multidimensional Godunov scheme for the acoustic equations. , 56(1), 2022. Timothy A Eymann and Philip L Roe. Multidimensional active flux schemes. In *21st AIAA Computational Fluid Dynamics Conference*, 2013. Christiane Helzel, David Kerkmann, and Leonardo Scandurra. A new ADER method inspired by the active flux method. , 80(3):1463--1497, 2019. Peter D Lax and Xu-Dong Liu. Solution of two-dimensional Riemann problems of gas dynamics by positive schemes. , 19(2):319--340, 1998. C-D Munz, Sabine Roller, Rupert Klein, and Karl J Geratz. The extension of incompressible flow solvers to the weakly compressible regime. , 32(2):173--196, 2003. Philip L Roe, Tyler Lung, and Jungyeoul Maeng. New approaches to limiting. In *22nd AIAA Computational Fluid Dynamics Conference*, page 2913, 2015. Bram van Leer. Towards the ultimate conservative difference scheme. IV. A new approach to numerical convection. , 23(3):276--299, 1977. # Detailed derivation of the multi-dimensional limiting {#app:recon} ## Piecewise-biparabolic reconstruction {#sec:piecewisebiparabolic} If none of the edges needs to be limited, then the natural choice of the reconstruction is biparabolic, as has been used since [@barsukow18activeflux; @kerkmann18]. In the presence of limited edges, the reconstruction shall be defined in a piecewise fashion, subdividing the cell into quadrants or halves. If the discrete data fulfill condition [\[eq:quasimonotone\]](#eq:quasimonotone){reference-type="eqref" reference="eq:quasimonotone"}, it is then tested (in an approximate way) whether this reconstruction fulfills $m \leq q_\text{recon}(x) \leq M$. If it does not, it is discarded and replaced by the plateau reconstruction of Section [6.2](#sec:plateau){reference-type="ref" reference="sec:plateau"}. However, if one of the edges is reconstructed as a hat, then something else needs to be done inside the cell in order to ensure continuity.
We generally choose to subdivide the cell into regions (quadrants or halves, depending on the situation) and to reconstruct biparabolically in every such region while maintaining global continuity. Linearity of the problem (in the point values and the average) shall be exploited by considering the average and all point values apart from $q_\text{SW}, q_\text{W}, q_\text{NW}$ to vanish: **Definition 1**. *Consider all point values apart from $q_\text{SW}, q_\text{W}, q_\text{NW}, q_\text{C}$ and the average $\bar q$ to vanish. Then a reconstruction of the cell that interpolates these values pointwise and whose average agrees with $\bar q$ is called the **edge-basis-function** $q_\text{recon}^\text{W}$ of the W-edge: $$\begin{aligned} q_\text{recon}^\text{W}\left(-\frac{\Delta x}{2}, \frac{\Delta y}{2}\right) &= q_\text{NW} & q_\text{recon}^\text{W}\left(-\frac{\Delta x}{2}, 0\right) &= q_\text{W}\\ q_\text{recon}^\text{W}\left(-\frac{\Delta x}{2}, -\frac{\Delta y}{2}\right) &= q_\text{SW} & \frac{1}{\Delta x \Delta y} \int_{-\frac{\Delta x}{2}}^{\frac{\Delta x}{2}} \int_{-\frac{\Delta y}{2}}^{\frac{\Delta y}{2}} q_\text{recon}^\text{W}(x, y) \mathrm{d}x \mathrm{d}y &= \bar q \end{aligned}$$* *Similar notions shall be used for the other edges.* Observe that an edge-basis-function is a reconstruction of the entire cell. In the following, only the edge-basis-functions for the W-edge shall be given explicitly, as those for the other edges can be obtained by rotation, as long as $\Delta y = \Delta x$ (otherwise some rescaling is necessary). **Theorem 3**. *If the edge-basis-function for the W-edge is $$\begin{aligned} q_\text{recon}^\text{W}(q_\text{SW}, q_\text{W}, q_\text{NW}, x, y, \text{S}, \text{N}, \text{W}, \bar q )\end{aligned}$$ then the other basis functions are $$\begin{aligned} q_\text{recon}^\text{S}(q_\text{SE}, q_\text{S}, q_\text{SW}, x, y, \text{E}, \text{W}, \text{S}, \bar q ) &= q_\text{recon}^\text{W}(q_\text{SE}, q_\text{S}, q_\text{SW}, y, -x, \text{E}, \text{W}, \text{S}, \bar q ) \\ q_\text{recon}^\text{N}(q_\text{NW}, q_\text{N}, q_\text{NE}, x, y, \text{W}, \text{E}, \text{N}, \bar q ) &= q_\text{recon}^\text{W}(q_\text{NW}, q_\text{N}, q_\text{NE}, -y, x, \text{W}, \text{E}, \text{N}, \bar q ) \\ q_\text{recon}^\text{E}(q_\text{NE}, q_\text{E}, q_\text{SE}, x, y, \text{N}, \text{S}, \text{E}, \bar q ) &= q_\text{recon}^\text{W}(q_\text{NE}, q_\text{E}, q_\text{SE}, -x, -y, \text{N}, \text{S}, \text{E}, \bar q ) \end{aligned}$$* The edge-basis-function depends on $q_\text{SW}, q_\text{W}, q_\text{NW}$, on whether the reconstruction of the W-edge is parabolic or hat, and -- this complicates things a little -- on whether the neighbouring edges (S and N) are reconstructed as hats or as parabolae. This is necessary due to global continuity and because the corner values $q_\text{SW}, q_\text{NW}$ are shared with the S- and N-edges. As mentioned before, the value $q_\text{C}$ of the reconstruction at the cell center is chosen such that the average of the reconstruction agrees with the given one (zero for edge-basis-functions). The final reconstruction is obtained through summation: **Theorem 4**.
*The following reconstruction $q_\text{recon}$ interpolates all the point values along the boundary of the cell and its average agrees with the given cell average: $$\begin{aligned} q_\text{recon}(x,y) &:= q_\text{recon}^\text{W}\left(\frac{q_\text{SW}}{2}, q_\text{W}, \frac{q_\text{NW}}{2}, x, y, \text{S}, \text{N}, \text{W}, \frac{\Delta \bar q}{4}\right) \\&\nonumber +q_\text{recon}^\text{S}\left(\frac{q_\text{SE}}{2}, q_\text{S}, \frac{q_\text{SW}}{2}, x, y, \text{E}, \text{W}, \text{S}, \frac{\Delta \bar q}{4}\right)\\ &\nonumber+q_\text{recon}^\text{N}\left(\frac{q_\text{NW}}{2}, q_\text{N}, \frac{q_\text{NE}}{2}, x, y, \text{W}, \text{E}, \text{N}, \frac{\Delta \bar q}{4}\right) \\&\nonumber +q_\text{recon}^\text{E}\left(\frac{q_\text{NE}}{2}, q_\text{E}, \frac{q_\text{SE}}{2}, x, y, \text{N}, \text{S}, \text{E}, \frac{\Delta \bar q}{4}\right)\\ &\nonumber+ (\bar q - \Delta \bar q) \end{aligned}$$ where $\Delta \bar q := \bar q - \frac{q_\text{SW} + q_\text{W} + q_\text{NW} + q_\text{N} + q_\text{NE} + q_\text{E} + q_\text{SE} + q_\text{S}}{8}$. Moreover, as all the point values tend to $\bar q$, $$\begin{aligned} q_\text{recon}(x, y) \to \bar q \label{eq:reconcontinuity} \end{aligned}$$ for all $x, y$.* *Proof.* The pointwise interpolation property is clear because, for example, $$\begin{aligned} q_\text{recon}\left( \frac{\Delta x}{2}, \frac{\Delta y}{2} \right) &= q_\text{recon}^\text{N}\left(\frac{q_\text{NW}}{2}, q_\text{N}, \frac{q_\text{NE}}{2}, \frac{\Delta x}{2}, \frac{\Delta y}{2}, \text{W}, \text{E}, \text{N}, \frac{\Delta \bar q}{4}\right)\\ +&q_\text{recon}^\text{E}\left(\frac{q_\text{NE}}{2}, q_\text{E}, \frac{q_\text{SE}}{2}, \frac{\Delta x}{2}, \frac{\Delta y}{2}, \text{N}, \text{S}, \text{E}, \frac{\Delta \bar q}{4}\right)\\ &= \frac{q_\text{NE}}{2} + \frac{q_\text{NE}}{2} = q_\text{NE} \end{aligned}$$ The correctness of the average follows from $$\begin{aligned} \frac{1}{\Delta x \Delta y} \int_{-\frac{\Delta x}{2}}^{\frac{\Delta x}{2}} \int_{-\frac{\Delta y}{2}}^{\frac{\Delta y}{2}} q_\text{recon}^\text{W}(x, y) \mathrm{d}x \mathrm{d}y = 4 \cdot \frac{\Delta \bar q}{4} + \bar q - \Delta \bar q = \bar q \end{aligned}$$ Finally, property [\[eq:reconcontinuity\]](#eq:reconcontinuity){reference-type="eqref" reference="eq:reconcontinuity"} is trivial if $\bar q = 0$, because the reconstruction is linear in all the point values and in the average, and thus $q_\text{recon}(x, y) \to 0$ uniformly in this case. If the point values tend to $\bar q \neq 0$, then $\Delta \bar q \to 0$ and thus $$\begin{aligned} q_\text{recon}(x,y) \to 0 + \bar q - \Delta \bar q \to \bar q \end{aligned}$$ ◻ **Remark 1**. *: One might think that it would be sufficient to define the reconstruction as $$\begin{aligned} &q_\text{recon}^\text{W}\left(\frac{q_\text{SW}}{2}, q_\text{W}, \frac{q_\text{NW}}{2}, x, y, \text{S}, \text{N}, \text{W}, \frac{\bar q}{4}\right) +q_\text{recon}^\text{S}\left(\frac{q_\text{SE}}{2}, q_\text{S}, \frac{q_\text{SW}}{2}, x, y, \text{E}, \text{W}, \text{S}, \frac{\bar q}{4}\right)\\ +&q_\text{recon}^\text{N}\left(\frac{q_\text{NW}}{2}, q_\text{N}, \frac{q_\text{NE}}{2}, x, y, \text{W}, \text{E}, \text{N}, \frac{\bar q}{4}\right) +q_\text{recon}^\text{E}\left(\frac{q_\text{NE}}{2}, q_\text{E}, \frac{q_\text{SE}}{2}, x, y, \text{N}, \text{S}, \text{E}, \frac{\bar q}{4}\right) \end{aligned}$$ This function also has the interpolation properties in Theorem [Theorem 4](#thm:reconinterpol){reference-type="ref" reference="thm:reconinterpol"}. 
However, in the limit of all the point values converging to $\bar q$, property [\[eq:reconcontinuity\]](#eq:reconcontinuity){reference-type="eqref" reference="eq:reconcontinuity"} is not guaranteed. Linearity merely implies that in the limit, $q_\text{recon}$ will be proportional to $\bar q$, but it can still have a non-trivial dependence on $x, y$.* If the reconstruction happens on the unit square, then $\Delta x = \Delta y = 1$ should be used in the formulas below. The sketches of the interpolation problem are encoded as follows: ![image](images/qC.png){width="2%"} denotes the central value $q_\text{C}$, ![image](images/zero.png){width="2%"} / ![image](images/zero-unused.png){width="2%"} denotes a value that is not on the W edge and thus zero (gray if it is not used in the interpolation), ![image](images/w-edge.png){width="2%"} / ![image](images/w-edge-unused.png){width="2%"} denotes one of the values $q_\text{NW}$, $q_\text{W}$, $q_\text{SW}$ (gray if it is not used in the interpolation). Values marked with an arrow do not, in principle, need to be included in the interpolation stencil, but are included here. The colored area denotes the support of the different functions that make up the piecewise defined reconstruction. ![image](images/lin.png){width="2%"} denotes an edge that is reconstructed linearly, in other words, as part of the interpolation procedure, we impose that the restriction of the reconstruction onto that edge is linear (the quadratic term vanishing). In many cases, the reconstruction is (piecewise) biparabolic, i.e. of the form $$\begin{aligned} (a_0 + a_1 x + a_2 x^2) + (a_3 + a_4 x + a_5 x^2 )y +(a_6 + a_7 x + a_8 x^2) y^2\end{aligned}$$ In the following, biparabolic reconstructions are given by specifying the values of these 9 coefficients. ### Parabolic reconstruction on W edge If edges W, S and N are all reconstructed parabolically, then the W-edge-basis-function is a biparabolic function. If either S or N (or both) are reconstructed as hat functions, the reconstruction in the cell is defined piecewise: the left and the rights halves of the cell have individual biparabolic reconstructions, which are joined in a continuous fashion. #### Parabolic reconstruction on both neighbouring edges If both neighbouring edges (N and S) are reconstructed parabolically, then the reconstruction inside the cell is the trivial biparabolic reconstruction (see Figure [25](#fig:example-bipara){reference-type="ref" reference="fig:example-bipara"}): $$\begin{aligned} q_\text{recon}^\text{W} &= \left\{a_0 = q_\text{C},a_1 = -\frac{q_\text{W}}{\Delta x},a_2 = -\frac{2 (2 q_\text{C}-q_\text{W})}{\Delta x^2},a_3 = 0,a_4 = -\frac{q_\text{NW}-q_\text{SW}}{\Delta x \Delta y}, \right . \\ & \phantom{mmm}\left.\nonumber a_5 = \frac{2(q_\text{NW}-q_\text{SW})}{\Delta x^2 \Delta y},a_6 = -\frac{4 q_\text{C}}{\Delta y^2},a_7 = -\frac{2(q_\text{NW}+q_\text{SW}-2 q_\text{W})}{\Delta x \Delta y^2}, \right . \\ & \phantom{mmm}\left.\nonumber a_8 = \frac{4 (4 q_\text{C}+q_\text{NW}+q_\text{SW}-2 q_\text{W})}{\Delta x^2 \Delta y^2} \right\} \label{eq:bipararecon}\\ q_\text{C} &= \frac{1}{16}(36 \bar q -q_\text{NW}-q_\text{SW}-4 q_\text{W})\end{aligned}$$ ![All edges are reconstructed parabolically, and the corresponding edge-basis-function is a simple biparabolic interpolation. 
$q_\text{NW} = 1.6$, $q_\text{W}=1.35$, $q_\text{SW} = 0.6$.](images/bipara.png "fig:"){#fig:example-bipara width="12%"}\ ![All edges are reconstructed parabolically, and the corresponding edge-basis-function is a simple biparabolic interpolation. $q_\text{NW} = 1.6$, $q_\text{W}=1.35$, $q_\text{SW} = 0.6$.](images/imgbipara.png "fig:"){#fig:example-bipara width=".7\\textwidth"} #### Hat reconstruction on both neighbouring edges {#ssec:parahathat} ![*Top*: The case of both neighbouring edges reconstructed using hat functions, while the primary edge is reconstructed parabolically. *Bottom*: The W edge is reconstructed parabolically, while the two neighbouring reconstructions are hat functions. $q_\text{NW} = 1.6$, $q_\text{W}=1.35$, $q_\text{SW} = 0.6$.](images/para-hat-hat.png "fig:"){#fig:para-hat-hat width="40%"}\ ![*Top*: The case of both neighbouring edges reconstructed using hat functions, while the primary edge is reconstructed parabolically. *Bottom*: The W edge is reconstructed parabolically, while the two neighbouring reconstructions are hat functions. $q_\text{NW} = 1.6$, $q_\text{W}=1.35$, $q_\text{SW} = 0.6$.](images/imgpara-hat-hat.png "fig:"){#fig:para-hat-hat width=".7\\textwidth"} The interpolation problem is shown in Figure [27](#fig:para-hat-hat){reference-type="ref" reference="fig:para-hat-hat"}. $$\begin{aligned} q_\text{recon}^\text{W}\Big|_{x< 0} &= \left\{a_0=q_\text{C},a_1=-\frac{q_\text{W}}{\Delta x},a_2=-\frac{2 (2 q_\text{C}-q_\text{W})}{\Delta x^2},a_3=0,a_4= -\frac{2 (q_\text{NW}-q_\text{SW})}{\Delta x \Delta y}, \label{eq:parahathatleft} \right . \\ & \phantom{mmm}\left.\nonumber a_5=0,a_6=-\frac{4 q_\text{C}}{\Delta y^2},a_7=-\frac{4 (q_\text{NW}+q_\text{SW}-q_\text{W})}{\Delta x \Delta y^2},a_8=\frac{8 (2 q_\text{C}-q_\text{W})}{\Delta x^2 \Delta y^2} \right \} \\ q_\text{recon}^\text{W}\Big|_{x\geq 0} &= \left\{a_0=q_\text{C},a_1=-\frac{q_\text{W}}{\Delta x},a_2=-\frac{2 (2 q_\text{C}-q_\text{W})}{\Delta x^2},a_3=0,a_4=0,\label{eq:parahathatright} \right . \\ & \phantom{mmm}\left.\nonumber a_5=0,a_6=-\frac{4 q_\text{C}}{\Delta y^2},a_7=\frac{4 q_\text{W}}{\Delta x \Delta y^2},a_8=\frac{8 (2 q_\text{C}-q_\text{W})}{\Delta x^2 \Delta y^2}\right\} \\ q_\text{C} &= \frac1{32} (72 \bar q -3 (q_\text{NW}+q_\text{SW})-8 q_\text{W})\end{aligned}$$ #### Hat reconstruction on just one neighbouring edge {#ssec:paraonehat} If the N edge is reconstructed using a hat function, and both the W-edge and the S-edge parabolically, then one reconstructs the cell as follows (Figure [29](#fig:para-hat-para){reference-type="ref" reference="fig:para-hat-para"}): $$\begin{aligned} q_\text{recon}^\text{W} \Big |_{x < 0} &= \left\{a_0=q_\text{C},a_1=-\frac{q_\text{W}}{\Delta x},a_2=-\frac{2 (2 q_\text{C}-q_\text{W})}{\Delta x^2},a_3=0,a_4=-\frac{2 q_\text{NW}-q_\text{SW}}{\Delta x \Delta y},\label{eq:parahatparaleft} \right . \\ & \phantom{mmm}\left.\nonumber a_5=-\frac{2 q_\text{SW}}{\Delta x^2 \Delta y},a_6=-\frac{4 q_\text{C}}{\Delta y^2},a_7=-\frac{2 (2 q_\text{NW}+q_\text{SW}-2 q_\text{W})}{\Delta x \Delta y^2}, \right . \\ & \phantom{mmm}\left.\nonumber a_8=\frac{4 (4 q_\text{C}+q_\text{SW}-2 q_\text{W})}{\Delta x^2 \Delta y^2}\right\} \\ q_\text{recon}^\text{W}\Big |_{x \geq 0} &= \left\{a_0=q_\text{C},a_1=-\frac{q_\text{W}}{\Delta x},a_2=-\frac{2 (2 q_\text{C}-q_\text{W})}{\Delta x^2},a_3=0,a_4=\frac{q_\text{SW}}{\Delta x \Delta y}, \label{eq:parahatpararight} \right . 
\\ & \phantom{mmm}\left.\nonumber a_5=-\frac{2 q_\text{SW}}{\Delta x^2 \Delta y},a_6=-\frac{4 q_\text{C}}{\Delta y^2},a_7=-\frac{2 (q_\text{SW}-2 q_\text{W})}{\Delta x \Delta y^2}, \right . \\ & \phantom{mmm}\left.\nonumber a_8=\frac{4 (4 q_\text{C}+q_\text{SW}-2 q_\text{W})}{\Delta x^2 \Delta y^2}\right\} \\ q_\text{C} &= \frac{1}{32} (72 \bar q -3 q_\text{NW}-2 (q_\text{SW}+4 q_\text{W}))\end{aligned}$$ ![*Top*: The case of the N edge reconstructed using hat functions, while the primary edge and the S-edge is reconstructed parabolically. *Bottom*: The W and S edge is reconstructed parabolically, while the N edge is reconstructed using a hat function. $q_\text{NW} = 1.6$, $q_\text{W}=1.35$, $q_\text{SW} = 0.6$.](images/para-hat-para.png "fig:"){#fig:para-hat-para width="40%"}\ ![*Top*: The case of the N edge reconstructed using hat functions, while the primary edge and the S-edge is reconstructed parabolically. *Bottom*: The W and S edge is reconstructed parabolically, while the N edge is reconstructed using a hat function. $q_\text{NW} = 1.6$, $q_\text{W}=1.35$, $q_\text{SW} = 0.6$.](images/imgpara-hat-para.png "fig:"){#fig:para-hat-para width=".7\\textwidth"} If it is the S edge, then (Figure [31](#fig:para-para-hat){reference-type="ref" reference="fig:para-para-hat"}): $$\begin{aligned} q_\text{recon}^\text{W}\Big |_{x < 0} &= \left \{a_0=q_\text{C},a_1=-\frac{q_\text{W}}{\Delta x},a_2=-\frac{2 (2 q_\text{C}-q_\text{W})}{\Delta x^2},a_3=0,a_4=-\frac{q_\text{NW}-2 q_\text{SW}}{\Delta x \Delta y}, \label{eq:paraparahatleft} \right . \\ & \phantom{mmm}\left.\nonumber a_5=\frac{2 q_\text{NW}}{\Delta x^2 \Delta y},a_6=-\frac{4 q_\text{C}}{\Delta y^2},a_7=-\frac{2 (q_\text{NW}+2 q_\text{SW}-2 q_\text{W})}{\Delta x \Delta y^2}, \right . \\ & \phantom{mmm}\left.\nonumber a_8=\frac{4 (4 q_\text{C}+q_\text{NW}-2 q_\text{W})}{\Delta x^2 \Delta y^2}\right \} \\ q_\text{recon}^\text{W}\Big |_{x \geq 0} &= \left\{a_0=q_\text{C},a_1=-\frac{q_\text{W}}{\Delta x},a_2=-\frac{2 (2 q_\text{C}-q_\text{W})}{\Delta x^2},a_3=0,a_4=-\frac{q_\text{NW}}{\Delta x \Delta y}, \label{eq:paraparahatright} \right . \\ & \phantom{mmm}\left.\nonumber a_5=\frac{2 q_\text{NW}}{\Delta x^2 \Delta y},a_6=-\frac{4 q_\text{C}}{\Delta y^2},a_7=-\frac{2 (q_\text{NW}-2 q_\text{W})}{\Delta x \Delta y^2}, \right . \\ & \phantom{mmm}\left.\nonumber a_8=\frac{4 (4 q_\text{C}+q_\text{NW}-2 q_\text{W})}{\Delta x^2 \Delta y^2}\right\}\\ q_\text{C}&= \frac1{32} (72 \bar q -2 q_\text{NW}-3 q_\text{SW}-8 q_\text{W})\end{aligned}$$ ![*Top*: The case of the S edge reconstructed using hat functions, while the primary edge is reconstructed parabolically. *Bottom*: The W and N edge is reconstructed parabolically, while the S edge is reconstructed using a hat function. $q_\text{NW} = 1.6$, $q_\text{W}=1.35$, $q_\text{SW} = 0.6$.](images/para-para-hat.png "fig:"){#fig:para-para-hat width="40%"} ![*Top*: The case of the S edge reconstructed using hat functions, while the primary edge is reconstructed parabolically. *Bottom*: The W and N edge is reconstructed parabolically, while the S edge is reconstructed using a hat function. 
$q_\text{NW} = 1.6$, $q_\text{W}=1.35$, $q_\text{SW} = 0.6$.](images/imgpara-para-hat.png "fig:"){#fig:para-para-hat width=".7\\textwidth"} #### Proof of continuity {#ssec:continuitypara} It is obvious from the sketches of the interpolation problem in Figures [25](#fig:example-bipara){reference-type="ref" reference="fig:example-bipara"}--[31](#fig:para-para-hat){reference-type="ref" reference="fig:para-para-hat"} that the reconstructions interpolate the values on the cell interfaces. What remains to be shown is that the piecewise defined reconstruction is continuous: **Theorem 5**. *The reconstructions from Sections [6.1.1.2](#ssec:parahathat){reference-type="ref" reference="ssec:parahathat"}--[6.1.1.3](#ssec:paraonehat){reference-type="ref" reference="ssec:paraonehat"} are continuous along the line $x=0$ where the two pieces are joined.* *Proof.* As is obvious from the sketches of the interpolation problems in Figures [27](#fig:para-hat-hat){reference-type="ref" reference="fig:para-hat-hat"}--[31](#fig:para-para-hat){reference-type="ref" reference="fig:para-para-hat"}, the three points along $x=0$, i.e. $$\begin{aligned} q_\text{recon}\left(0, \frac{\Delta y}{2} \right) &= 0 & q_\text{recon}\left(0, 0 \right) &= q_\text{C} & q_\text{recon}\left(0, -\frac{\Delta y}{2} \right) &= 0 \end{aligned}$$ are part of the interpolation. Recall that the restriction of a biparabolic function onto the straight line $x=0$ is a parabola in $y$, and that the latter is uniquely defined by three points. Therefore, all the values of the reconstruction along $x=0$ agree for all the reconstructions presented in Sections [6.1.1.2](#ssec:parahathat){reference-type="ref" reference="ssec:parahathat"}--[6.1.1.3](#ssec:paraonehat){reference-type="ref" reference="ssec:paraonehat"}. ◻ ### Hat reconstruction on W edge If the W-edge is reconstructed as a hat function, then necessarily one needs to consider a piecewise defined reconstruction with the pieces joined along $y = 0$. The reconstruction in each piece only depends on whether the other adjacent edge is reconstructed parabolically or as a hat function. One thus has less cases to consider. Consider the top piece, i.e. the one defined on $[-\frac{\Delta x}{2}, \frac{\Delta x}{2}] \times [0, \frac{\Delta y}{2}]$. It is bordered by the N-edge. If the N-edge is reconstructed as a hat function then one needs additionally to define the reconstruction piecewise in the left and right halves (joined along $x=0$), i.e. the reconstruction is piecewise by quadrant. This is not necessary if the N-edge is reconstructed parabolically. #### Parabolic reconstruction on at least one neighbouring edge {#ssec:hatpara} Here, the situation is considered in which either the N-edge or the S-edge are reconstructed as parabolae. Then it is possible to provide a biparabolic reconstruction of, respectively, the top or bottom half of the cell. These cases can occur individually or simultaneously. If both the N-edge and the S-edge are reconstructed parabolically, then the entire reconstruction of the cell is given by the two pieces given in [\[eq:topparthatpara\]](#eq:topparthatpara){reference-type="eqref" reference="eq:topparthatpara"}--[\[eq:bottomparthatpara\]](#eq:bottomparthatpara){reference-type="eqref" reference="eq:bottomparthatpara"}. 
If, for example, the N-edge is reconstructed parabolically, and the S-edge as a hat function, then the top piece of the reconstruction in the cell is to be taken from [\[eq:topparthatpara\]](#eq:topparthatpara){reference-type="eqref" reference="eq:topparthatpara"}, while the bottom piece used should be the one from [\[eq:hathatbottomleft\]](#eq:hathatbottomleft){reference-type="eqref" reference="eq:hathatbottomleft"}--[\[eq:hathatbottomright\]](#eq:hathatbottomright){reference-type="eqref" reference="eq:hathatbottomright"} in Section [6.1.2.2](#ssec:hathat){reference-type="ref" reference="ssec:hathat"}. See Figure [33](#fig:hat-para){reference-type="ref" reference="fig:hat-para"} for the setup of the interpolation problem. ![*Top*: The case of the neighbouring edges reconstructed using parabolas, while the primary edge is reconstructed using the hat function. *Bottom*: The W edge is reconstructed as a hat function, the other edges are reconstructed parabolically. $q_\text{NW} = 1$, $q_\text{W}=1.5$, $q_\text{SW} = 0$.](images/hat-para.png "fig:"){#fig:hat-para width="40%"}\ ![*Top*: The case of the neighbouring edges reconstructed using parabolas, while the primary edge is reconstructed using the hat function. *Bottom*: The W edge is reconstructed as a hat function, the other edges are reconstructed parabolically. $q_\text{NW} = 1$, $q_\text{W}=1.5$, $q_\text{SW} = 0$.](images/imghat-para.png "fig:"){#fig:hat-para width=".7\\textwidth"} $$\begin{aligned} q_\text{recon}^{W} \Big|_{y\geq 0} &= \left\{a_0=q_\text{C},a_1=-\frac{q_\text{W}}{\Delta x},a_2=-\frac{2 (2 q_\text{C}-q_\text{W})}{\Delta x^2},a_3=0,a_4=-\frac{2 (q_\text{NW}-q_\text{W})}{\Delta x \Delta y}, \right .\label{eq:topparthatpara} \\ & \phantom{mmm}\left.\nonumber a_5=\frac{4 (q_\text{NW}-q_\text{W})}{\Delta x^2 \Delta y},a_6=-\frac{4 q_\text{C}}{\Delta y^2},a_7=0,a_8=\frac{16 q_\text{C}}{\Delta x^2 \Delta y^2}\right\} \label{eq:hatWparaN}\\ \frac1{\Delta x \Delta y} & \int_{y\geq 0} q_\text{recon} \,\mathrm{d}x \mathrm{d}y = \frac{2 q_\text{C}}9+\frac{q_\text{NW}+q_\text{W}}{24}\\ % q_\text{recon}^{W} \Big|_{y < 0} &= \left\{a_0=q_\text{C},a_1=-\frac{q_\text{W}}{\Delta x},a_2=-\frac{2 (2 q_\text{C}-q_\text{W})}{\Delta x^2},a_3=0,a_4=\frac{2 (q_\text{SW}-q_\text{W})}{\Delta x \Delta y}, \right . \label{eq:bottomparthatpara}\\ & \phantom{mmm}\left.\nonumber a_5=-\frac{4 (q_\text{SW}-q_\text{W})}{\Delta x^2 \Delta y},a_6=-\frac{4 q_\text{C}}{\Delta y^2},a_7=0,a_8=\frac{16 q_\text{C}}{\Delta x^2 \Delta y^2}\right\} \label{eq:hatWparaS}\\ \frac1{\Delta x \Delta y}& \int_{y < 0} q_\text{recon} \,\mathrm{d}x \mathrm{d}y = \frac{2 q_\text{C}}9+ \frac{q_\text{SW}+q_\text{W}}{24}\end{aligned}$$ #### Hat reconstruction on at least one neighbouring edge {#ssec:hathat} In this case the reconstruction is additionally defined piecewise on each quadrant. The biparabolic reconstructions are obtained from interpolation problems shown in Figure [37](#fig:hat-hat){reference-type="ref" reference="fig:hat-hat"}. ![*Top*: The case of (possibly) all three edges reconstructed using hat functions. The reconstruction inside the cell is defined on four quadrants. *Middle*: The W edge is reconstructed as a hat function, and also N (*left*) / S (*right*). *Bottom*: All edges are reconstructed as hat functions.](images/hat-hat.png "fig:"){#fig:hat-hat width="40%"}\ ![*Top*: The case of (possibly) all three edges reconstructed using hat functions. The reconstruction inside the cell is defined on four quadrants. 
*Middle*: The W edge is reconstructed as a hat function, and also N (*left*) / S (*right*). *Bottom*: All edges are reconstructed as hat functions.](images/imghat-hat-para.png "fig:"){#fig:hat-hat width=".45\\textwidth"} ![*Top*: The case of (possibly) all three edges reconstructed using hat functions. The reconstruction inside the cell is defined on four quadrants. *Middle*: The W edge is reconstructed as a hat function, and also N (*left*) / S (*right*). *Bottom*: All edges are reconstructed as hat functions.](images/imghat-para-hat.png "fig:"){#fig:hat-hat width=".45\\textwidth"}\ ![*Top*: The case of (possibly) all three edges reconstructed using hat functions. The reconstruction inside the cell is defined on four quadrants. *Middle*: The W edge is reconstructed as a hat function, and also N (*left*) / S (*right*). *Bottom*: All edges are reconstructed as hat functions.](images/imghat-hat-hat.png "fig:"){#fig:hat-hat width=".7\\textwidth"} If the N edge is reconstructed as a hat function, then the top half $[-\frac{\Delta x}{2}, \frac{\Delta x}{2}] \times [0, \frac{\Delta y}{2}]$ of the cell is to be reconstructed as $$\begin{aligned} q_\text{recon}^\text{W} \Big |_{y \geq 0, x < 0}&=\left\{a_0=q_\text{C},a_1=-\frac{q_\text{W}}{\Delta x},a_2=-\frac{2 (2 q_\text{C}-q_\text{W})}{\Delta x^2},a_3=0,a_4=-\frac{3 q_\text{NW}-2 q_\text{W}}{\Delta x \Delta y}, \label{eq:hatWhatNleft} \right . \\ & \phantom{mmm}\left.\nonumber a_5=\frac{2 (q_\text{NW}-2 q_\text{W})}{\Delta x^2 \Delta y},a_6=-\frac{4 q_\text{C}}{\Delta y^2},a_7=-\frac{2 q_\text{NW}}{\Delta x \Delta y^2}, \right . \\ & \phantom{mmm}\left.\nonumber a_8=\frac{4 (4 q_\text{C}-q_\text{NW})}{\Delta x^2 \Delta y^2}\right\} \\ q_\text{recon}^\text{W} \Big |_{y \geq 0, x \geq 0}&= \left\{a_0=q_\text{C},a_1=-\frac{q_\text{W}}{\Delta x},a_2=-\frac{2 (2 q_\text{C}-q_\text{W})}{\Delta x^2},a_3=0,a_4=\frac{q_\text{SW}}{\Delta x \Delta y}, \label{eq:hatWhatNright} \right . \\ & \phantom{mmm}\left.\nonumber a_5=-\frac{2 q_\text{SW}}{\Delta x^2 \Delta y},a_6=-\frac{4 q_\text{C}}{\Delta y^2},a_7=-\frac{2 (q_\text{SW}-2 q_\text{W})}{\Delta x \Delta y^2}, \right . \\ & \phantom{mmm}\left.\nonumber a_8=\frac{4 (4 q_\text{C}+q_\text{SW}-2 q_\text{W})}{\Delta x^2 \Delta y^2}\right \} \\ \frac1{\Delta x \Delta y}&\int_{y \geq 0} q_\text{recon} \,\mathrm{d}x \mathrm{d}y = \frac{2 q_\text{C}}9+\frac1{576} (35 q_\text{NW}+q_\text{SW}+22 q_\text{W}) \end{aligned}$$ If the S edge is reconstructed as a hat function, then the reconstruction reads $$\begin{aligned} q_\text{recon}^\text{W} \Big |_{y < 0, x < 0}&= \left\{ a_0=q_\text{C},a_1=-\frac{q_\text{W}}{\Delta x},a_2=-\frac{2 (2 q_\text{C}-q_\text{W})}{\Delta x^2},a_3=0, \label{eq:hathatbottomleft} \right . \\ & \phantom{mmm}\left.\nonumber a_4=-\frac{-3 q_\text{SW}+2 q_\text{W}}{\Delta x \Delta y}, a_5=-\frac{2 (q_\text{SW}-2 q_\text{W})}{\Delta x^2 \Delta y},a_6=-\frac{4 q_\text{C}}{\Delta y^2}, \right . \\ & \phantom{mmm}\left.\nonumber a_7=-\frac{2 q_\text{SW}}{\Delta x \Delta y^2},a_8=\frac{4 (4 q_\text{C}-q_\text{SW})}{\Delta x^2 \Delta y^2}\right \} \\ q_\text{recon}^\text{W}\Big |_{y < 0, x \geq 0} &= \left\{a_0=q_\text{C},a_1=-\frac{q_\text{W}}{\Delta x},a_2=-\frac{2 (2 q_\text{C}-q_\text{W})}{\Delta x^2},a_3=0,a_4=-\frac{q_\text{NW}}{\Delta x \Delta y}, \label{eq:hathatbottomright} \right . \\ & \phantom{mmm}\left.\nonumber a_5=\frac{2 q_\text{NW}}{\Delta x^2 \Delta y},a_6=-\frac{4 q_\text{C}}{\Delta y^2},a_7=-\frac{2 (q_\text{NW}-2 q_\text{W})}{\Delta x \Delta y^2}, \right . 
\\ & \phantom{mmm}\left.\nonumber a_8=\frac{4 (4 q_\text{C}+q_\text{NW}-2 q_\text{W})}{\Delta x^2 \Delta y^2}\right\} \\ \frac1{\Delta x \Delta y}& \int_{y <0} q_\text{recon} \,\mathrm{d}x \mathrm{d}y = \frac{2 q_\text{C}}9+ \frac1{576} (q_\text{NW}+35 q_\text{SW}+22 q_\text{W})\end{aligned}$$ #### Proof of continuity {#ssec:continuityhat} **Theorem 6**. *The reconstructions in Sections [6.1.2.1](#ssec:hatpara){reference-type="ref" reference="ssec:hatpara"}--[6.1.2.2](#ssec:hathat){reference-type="ref" reference="ssec:hathat"} are continuous along $x = 0$ and along $y = 0$.* *Proof.* In complete analogy to the proof of Theorem [Theorem 5](#thm:continuitypara){reference-type="ref" reference="thm:continuitypara"} one observes from the sketches of the interpolation problem in Figures [33](#fig:hat-para){reference-type="ref" reference="fig:hat-para"}--[37](#fig:hat-hat){reference-type="ref" reference="fig:hat-hat"} that the points along $x = 0$ and $y = 0$ are always included. The three points along $x = 0$ and the three points along $y = 0$ each define a unique parabola. ◻ ## Plateau-limiting {#sec:plateau} Consider a situation in which [\[eq:quasimonotone\]](#eq:quasimonotone){reference-type="eqref" reference="eq:quasimonotone"} is true, while the reconstruction described above exceeds $m$ or $M$. In that case, the idea of a plateau reconstruction is to introduce a rectangle a distance $\eta \Delta x$/$\eta \Delta y$ away from the cell boundary, i.e. $$\left[\Delta x \left(-\frac12 + \eta \right), \Delta x \left(\frac12 - \eta \right)\right] \times \left[\Delta y \left(-\frac12 + \eta \right), \Delta y \left(\frac12 - \eta \right)\right]$$ with $\eta \in (0,\frac12)$ where the value of the reconstruction shall be constant and equal to $q_\text{p}$, a value to be determined to ensure that the average of the reconstruction equals the given average (see Figure [2](#fig:plateau){reference-type="ref" reference="fig:plateau"} for an example). This rectangle shall be referred to as **plateau**. The remaining four trapezes shall be the supports of functions that continuously join the reconstruction along the edge to the plateau in the simplest possible way. Because reconstructions along edges are either parabolas or hats, every trapezoidal region is either joining the plateau to a parabola or to a hat function. $\eta$ shall be chosen in such a way that the maximum principle is guaranteed. It is clear that, as [\[eq:quasimonotone\]](#eq:quasimonotone){reference-type="eqref" reference="eq:quasimonotone"} is true, this can always be done by choosing $\eta$ small enough. ![Sketch of the interpolation between the plateau and the boundary of the cell.](images/sketch.png){#fig:sketch width="50%"} ### Interpolation in the trapezes {#ssec:plateauinterpolation} Consider for definiteness the northern trapeze. Define a point $A_\alpha := \left(-\frac{\Delta x}{2} + \alpha \Delta x, \frac{\Delta y}{2}\right) \in \mathbb R^2$ parametrized by $\alpha \in [0,1]$. Define a point $$B_\alpha := \left(\Delta x(-\frac12 + \eta) + \alpha \Delta x(1 -2 \eta) , \Delta y(\frac12 - \eta)\right)$$ on the northern edge of the plateau. Observe that as $\alpha$ goes from 0 to 1, both points move all the way from the left to the right on their respective edges. The straight line $$\begin{aligned} g_\alpha := \left\{ (x, y) : \frac{x}{\Delta x} = -\frac{1}{2} + \alpha - \left(\frac{y}{\Delta y} - \frac{1}{2}\right) ( 1 -2 \alpha) \right\}\end{aligned}$$ connects them. 
Obviously, given $x$ and $y$ there is a unique $$\begin{aligned} \alpha = \frac{\frac{x}{\Delta x} + \frac{y}{\Delta y} }{2\frac{y}{\Delta y}} = \frac{x\Delta y + y\Delta x }{2y\Delta x} \label{eq:alphaasfctofxan}\end{aligned}$$ The idea of the reconstruction is to associate to a point $(x, y)$ the value given by a linear interpolation between the value of the reconstruction at $A_\alpha$ and the (constant) value $q_\text{p}$ at $B_\alpha$. In particular this means that the diagonal edges of the reconstruction (connections between the corners of the cell and the corners of the plateau) are straight lines. The four trapezes can be reconstructed individually, because continuity along the diagonal segments where they join is already guaranteed by the above procedure. For a given trapeze, the choice of reconstruction thus merely depends on whether the adjacent edge is reconstructed parabolically (see Section [6.2.2](#ssec:plateaupara){reference-type="ref" reference="ssec:plateaupara"}) or as a hat function (see Section [6.2.3](#ssec:plateauhat){reference-type="ref" reference="ssec:plateauhat"}). ### Parabolic reconstruction along the edge {#ssec:plateaupara} The parabolic reconstruction along the N-edge is given by $$\begin{aligned} q_\text{parabolic}^\text{N}(x) = q_\text{N} + \frac{x}{\Delta x} (q_\text{NE} - q_\text{NW}) + 2\frac{x^2}{\Delta x^2}(q_\text{NE} + q_\text{NW} - 2 q_\text{N}) \qquad x \in \left[-\frac{\Delta x}{2}, \frac{\Delta x}{2}\right]\end{aligned}$$ The value of this parabolic reconstruction is sought at the location $\xi$ of point $A_\alpha$ with $\alpha$ given by [\[eq:alphaasfctofxan\]](#eq:alphaasfctofxan){reference-type="eqref" reference="eq:alphaasfctofxan"}: $$\begin{aligned} \xi = \Delta x \left( -\frac{1}{2} + \frac{\frac{x}{\Delta x}\Delta y + y }{2y} \right) = \Delta x\frac{\frac{x}{\Delta x}}{2\frac{y}{\Delta y}}\end{aligned}$$ Finally, the reconstruction at $(x, y)$ is assigned the value $$\begin{aligned} q_\text{recon}^\text{N}(x, y) &:= q_\text{parabolic}^\text{N}(\xi) + \left(y - \frac{\Delta y}{2}\right) \frac{q_\text{p} - q_\text{parabolic}^\text{N}(\xi)}{- \Delta y \eta} \\ &= q_\text{parabolic}^\text{N}(\xi)\left(1 + \frac{\frac{y}{\Delta y} - \frac{1}{2}}{\eta} \right) - \frac{\frac{y}{\Delta y} - \frac{1}{2}}{\eta} q_\text{p} \end{aligned}$$ with $$\begin{aligned} q_\text{parabolic}^\text{N}(\xi) &= q_\text{N} + \frac{\hat x}{2\hat y} (q_\text{NE} - q_\text{NW}) + 2\left( \frac{\hat x}{2\hat y} \right)^2 (q_\text{NE} + q_\text{NW} - 2 q_\text{N}) \end{aligned}$$ and $\hat x := \frac{x}{\Delta x}$ and $\hat y := \frac{y}{\Delta y}$. 
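To make the composition of these two steps concrete, the following short Python sketch evaluates the N-trapeze reconstruction exactly as assembled above: the edge parabola is evaluated at the location $\xi$ of $A_\alpha$ and then blended linearly towards the plateau value. The helper names and the sample values of $q_\text{NW}$, $q_\text{N}$, $q_\text{NE}$, $q_\text{p}$, $\eta$ are illustrative assumptions, not part of the scheme itself.

```python
# Minimal sketch of the N-trapeze reconstruction derived above.
# dx, dy are the cell sizes, eta the plateau offset; all point values are illustrative.

def q_parabolic_N(x, dx, q_NW, q_N, q_NE):
    """Parabola through q_NW, q_N, q_NE at x = -dx/2, 0, dx/2 (the N edge)."""
    s = x / dx
    return q_N + s * (q_NE - q_NW) + 2.0 * s**2 * (q_NE + q_NW - 2.0 * q_N)

def q_recon_trapeze_N(x, y, dx, dy, eta, q_p, q_NW, q_N, q_NE):
    """Value at (x, y): edge parabola evaluated at xi (location of A_alpha), blended linearly with q_p."""
    xhat, yhat = x / dx, y / dy
    xi = dx * xhat / (2.0 * yhat)        # location of A_alpha on the N edge
    q_edge = q_parabolic_N(xi, dx, q_NW, q_N, q_NE)
    t = (yhat - 0.5) / eta               # t = 0 on the N edge, t = -1 on the plateau edge
    return q_edge * (1.0 + t) - t * q_p

# Sanity check: the reconstruction matches the edge parabola at y = dy/2
# and equals q_p on the northern edge of the plateau, y = dy*(1/2 - eta).
dx, dy, eta = 1.0, 1.0, 0.25
vals = dict(q_p=0.8, q_NW=1.0, q_N=1.2, q_NE=0.9)
x = 0.2
print(q_recon_trapeze_N(x, 0.5 * dy, dx, dy, eta, **vals),
      q_parabolic_N(x, dx, vals["q_NW"], vals["q_N"], vals["q_NE"]))
print(q_recon_trapeze_N(x, dy * (0.5 - eta), dx, dy, eta, **vals), vals["q_p"])
```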
Observe that the reconstruction is not polynomial, but lies in $$\begin{aligned} \mathrm{span}\left(1, \hat x, \hat y, \frac{\hat x}{\hat y}, \frac{{\hat x}^2}{\hat y}, \frac{{\hat x}^2}{{\hat y}^2} \right)\end{aligned}$$ For reference we give the four reconstructions: $$\begin{aligned} q_\text{recon}^\text{trapeze W}(x, y) &= q_\text{p}\frac{1+2 \hat x}{2 \eta} + (-1+2 \eta-2 \hat x) \left( \frac{q_\text{W} }{2 \eta} - \frac{(q_\text{NW}-q_\text{SW}) y}{4 \eta \hat x}+\frac{(q_\text{NW}+q_\text{SW}-2 q_\text{W}) \hat y^2}{4 \eta \hat x^2} \right) \label{eq:trapezeWparabolic}\\ % q_\text{recon}^\text{trapeze E}(x, y) &= q_\text{p}\frac{1-2 \hat x}{2 \eta}+ (-1+2 \eta+2 \hat x) \left( \frac{q_\text{E} }{2 \eta} +\frac{(q_\text{NE}-q_\text{SE}) \hat y}{4 \eta \hat x}-\frac{(2 q_\text{E}-q_\text{NE}-q_\text{SE}) \hat y^2}{4 \eta \hat x^2} \right ) \label{eq:trapezeEparabolic}\\ % q_\text{recon}^\text{trapeze N}(x, y) &= q_\text{p}\frac{1-2 \hat y}{2 \eta}+ (-1+2 \eta+2 \hat y) \left( -\frac{(2 q_\text{N}-q_\text{NE}-q_\text{NW}) \hat x^2 }{4 \eta \hat y^2}+\frac{(q_\text{NE}-q_\text{NW}) \hat x }{4 \eta \hat y}+\frac{q_\text{N} }{2 \eta}\right) \label{eq:trapezeNparabolic}\\ % q_\text{recon}^\text{trapeze S}(x, y) &= q_\text{p}\frac{1+2\hat y}{2\eta} + (1-2 \eta+2 \hat y) \left( \frac{(2 q_\text{S}-q_\text{SE}-q_\text{SW}) \hat x^2 }{4 \eta\hat y^2}+\frac{(q_\text{SE}-q_\text{SW}) \hat x }{4 \eta \hat y}-\frac{q_\text{S} }{2\eta} \right) \label{eq:trapezeSparabolic}\end{aligned}$$ The integrals over the four regions are $$\begin{aligned} \frac{1}{\Delta x \Delta y} \int_{\text{trapeze W}} q_\text{recon} \,\mathrm{d}x \mathrm{d}y &= \frac1{36} \eta \Big(6 (3-4 \eta) q_\text{P}-(2 \eta-3) (4 q_\text{W} + q_\text{NW}+q_\text{SW})\Big)\\ \frac{1}{\Delta x \Delta y} \int_{\text{trapeze E}} q_\text{recon} \,\mathrm{d}x \mathrm{d}y &= \frac1{36} \eta \Big( 6( 3 -4\eta) q_\text{P} -(2 \eta-3)( 4q_\text{E} +q_\text{NE}+q_\text{SE}) \Big)\\ \frac{1}{\Delta x \Delta y} \int_{\text{trapeze N}} q_\text{recon} \,\mathrm{d}x \mathrm{d}y &= \frac1{36} \eta \Big(6 (3-4 \eta) q_\text{P} -(2 \eta-3) (4 q_\text{N}+q_\text{NE}+q_\text{NW}) \Big)\\ \frac{1}{\Delta x \Delta y} \int_{\text{trapeze S}} q_\text{recon} \,\mathrm{d}x \mathrm{d}y &= \frac1{36}\eta \Big (6 (3-4 \eta) q_\text{P}-(2 \eta-3) (4 q_\text{S}+q_\text{SE}+q_\text{SW})\Big)\end{aligned}$$ and the integral over the plateau obviously $$\begin{aligned} \frac{1}{\Delta x \Delta y} \int_{\text{plateau}} q_\text{recon} \,\mathrm{d}x \mathrm{d}y = (1-2\eta)^2 q_\text{p}\end{aligned}$$ ### Hat-function reconstruction along the edge {#ssec:plateauhat} If an edge is reconstructed using a hat-function, then the reconstruction of the trapeze follows the algorithm outlined at the beginning of Section [6.2](#sec:plateau){reference-type="ref" reference="sec:plateau"}, but is naturally defined in a piecewise fashion. 
The reconstruction of the W-trapeze is $$\begin{aligned} q_\text{recon}^{\text{trapeze W}}(x,y) \Big |_{y \geq 0} &= q_\text{W}-\frac{\Delta x (q_\text{NW}-q_\text{W}) y}{x\Delta y}+\frac{\left(\frac{\Delta x}{2}+x\right) \left(q_\text{P}-q_\text{W}+\frac{\Delta x (q_\text{NW}-q_\text{W}) y }{x\Delta y} \right)}{\Delta x \eta} \label{eq:trapezeWhattop} \\ q_\text{recon}^{\text{trapeze W}}(x,y) \Big |_{y < 0} &= q_\text{W}+\frac{\Delta x (q_\text{SW}-q_\text{W}) y}{x\Delta y}+\frac{\left(\frac{\Delta x}{2}+x\right) \left(q_\text{P}-q_\text{W}-\frac{\Delta x (q_\text{SW}-q_\text{W}) y}{x\Delta y}\right)}{\Delta x \eta} \label{eq:trapezeWhatbottom}\end{aligned}$$ $$\begin{aligned} \frac{1}{\Delta x \Delta y} \int_{\text{trapeze W}} q_\text{recon} \,\mathrm{d}x \mathrm{d}y &= \frac{1}{6} (3-4 \eta) \eta q_\text{p} +\frac{1}{24} \eta (2 \eta-3) (q_\text{NW} +q_\text{SW}+2 q_\text{W}) \end{aligned}$$ The reconstructions of the other trapezes can be obtained by rotation as in Equations [\[eq:rotationreconS\]](#eq:rotationreconS){reference-type="eqref" reference="eq:rotationreconS"}--[\[eq:rotationreconE\]](#eq:rotationreconE){reference-type="eqref" reference="eq:rotationreconE"}. ### Choice of the plateau value and the maximum principle {#ssec:plateaumaximumprinciple} **Theorem 7**. *There exists a choice of $\eta$ such that the reconstruction is conservative and $m \leq q_\text{recon}(x,y) \leq M$ for all $x,y$ inside the cell.* *Proof.* For any choice of $q_\text{p} \in (m, M)$, the reconstruction inside the cell fulfills $m \leq q_\text{recon} \leq M$, because the reconstructions inside the trapezes are interpolations along straight lines between $q_\text{p}$ and a maximum-preserving reconstruction along the edge. For the same reason, as $\eta \to 0$, the average of the reconstruction over the cell approaches $q_\text{p}$, because the reconstructions inside the trapezes remain bounded and their contribution to the cell average thus vanishes in the limit. Thus, for all $\epsilon > 0$ sufficiently small one can find an $\eta > 0$ such that $\frac{1}{\Delta x \Delta y} \int_c q_\text{recon}(x, y) \, \mathrm{d}x \mathrm{d}y = q_\text{p} + a$ with $|a| < \epsilon$. Then, choosing $q_\text{p} := \bar q - a$ ensures conservativity of the reconstruction. At the same time, as $m < \bar q < M$, one simply needs to choose $\epsilon < \min\left(M-\bar q, \bar q - m\right)$ to ensure that $m < q_\text{p} < M$. ◻ For example, if all edges are reconstructed parabolically, then the average of the reconstruction over the entire cell is $$\begin{aligned} q_\text{p}- \frac{1}{9} \eta (2 \eta-3) \Big (4 E -6 q_\text{p} +2 V \Big) \overset{!}{=} \bar q\end{aligned}$$ (where $4V := q_\text{NE}+q_\text{NW} +q_\text{SE}+q_\text{SW}$, $4 E := q_\text{E}+ q_\text{N} +q_\text{S} +q_\text{W}$) which gives the value of $q_\text{p}$: $$\begin{aligned} q_\text{p} = \frac{9 \bar q +\eta (2 \eta-3) (4E+2V) }{ 3 (3-6 \eta+4 \eta^2)}\end{aligned}$$ The polynomial in the denominator does not have real zeros. What thus remains is the choice of $\eta$. The only bounds on $\eta$ originate from the condition $$\begin{aligned} m < q_\text{p} < M\end{aligned}$$ The equation $q_\text{p} = \mu \in \{ m,M\}$ is quadratic in $\eta$ -- and this is true in general and not just in this example. It is therefore easy to identify real, positive solutions and to take their minimum. In practice, having established a minimum, $\eta$ is chosen to be half of it.
In case no real, positive solutions are identified, $\eta$ is not subject to any conditions and we choose $\eta = \frac14$; a small numerical sketch of this selection procedure is given below. [^1]: Institute for Mathematics & Computational Science, University of Zurich, Winterthurerstrasse 190, CH-8057 Zurich, Switzerland [^2]: Bordeaux Institute of Mathematics, Bordeaux University and CNRS/UMR5251, Talence, 33405 France [^3]: Institute for Mathematics, University of Würzburg, Emil-Fischer-Strasse 40, 97074 Würzburg, Germany [^4]: Boldface letters denote "spatial" vectors, i.e. those whose natural dimension is that of space ($d$). Other collections of scalars (such as the conserved quantities $q$) are not typeset in boldface. [^5]: This happens numerically by testing a given number of locations.
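The following is a minimal Python sketch of the plateau-value selection just described, for the all-parabolic case: it evaluates $q_\text{p}(\eta)$ from the conservativity relation above, solves the quadratics $q_\text{p}(\eta)=m$ and $q_\text{p}(\eta)=M$, takes half of the smallest positive real root, and falls back to $\eta=\frac14$ when no such root exists. The point values, the bounds $m$ and $M$, and the helper names are illustrative assumptions, not data from the paper.

```python
import numpy as np

def plateau_value(eta, qbar, edge_sum, corner_sum):
    """q_p from conservativity in the all-parabolic case; edge_sum = 4E, corner_sum = 4V."""
    c = edge_sum + 0.5 * corner_sum                      # this is 4E + 2V
    return (9.0 * qbar + eta * (2.0 * eta - 3.0) * c) / (3.0 * (3.0 - 6.0 * eta + 4.0 * eta**2))

def choose_eta(qbar, edge_sum, corner_sum, m, M):
    """Half the smallest positive real root of q_p(eta) = m or M; 1/4 if none exists."""
    c = edge_sum + 0.5 * corner_sum
    candidates = []
    for mu in (m, M):
        # q_p(eta) = mu  <=>  (2c - 12 mu) eta^2 + (18 mu - 3c) eta + 9 (qbar - mu) = 0
        roots = np.roots([2.0 * c - 12.0 * mu, 18.0 * mu - 3.0 * c, 9.0 * (qbar - mu)])
        candidates += [r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 0.0]
    return 0.5 * min(candidates) if candidates else 0.25

# Example with illustrative data: cell average qbar, bounds m, M, and the eight edge/corner values.
q = dict(E=1.0, N=1.2, S=0.9, W=1.1, NE=1.3, NW=0.8, SE=1.0, SW=0.7)
qbar, m, M = 1.0, 0.7, 1.3
edge_sum = q["E"] + q["N"] + q["S"] + q["W"]             # 4E
corner_sum = q["NE"] + q["NW"] + q["SE"] + q["SW"]       # 4V
eta = choose_eta(qbar, edge_sum, corner_sum, m, M)
print(eta, plateau_value(eta, qbar, edge_sum, corner_sum))
```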
arxiv_math
{ "id": "2310.00683", "title": "The Active Flux method for the Euler equations on Cartesian grids", "authors": "R\\'emi Abgrall, Wasilij Barsukow, Christian Klingenberg", "categories": "math.NA cs.NA", "license": "http://creativecommons.org/licenses/by-nc-sa/4.0/" }
--- author: - Kaushik Bal and Sanjit Biswas bibliography: - Reference.bib nocite: "[@*]" title: Ground state solutions for quasilinear Schrödinger type equation involving anisotropic p-laplacian --- # Abstract {#abstract .unnumbered} This paper is concerned with the existence of a nonnegative ground state solution of the following quasilinear Schrödinger equation $$\begin{aligned} \begin{cases} -\Delta_{H,p}u+V(x)|u|^{p-2}u-\Delta_{H,p}(|u|^{2\alpha}) |u|^{2\alpha-2}u=\lambda |u|^{q-1}u \text{ in }\mathbb{R}^n\\ u\in W^{1,p}(\mathbb{R}^n)\cap L^\infty(\mathbb{R}^N) \end{cases}\end{aligned}$$ where $N\geq2$; $(\alpha,p)\in D_N=\{(x,y)\in \mathbb{R}^2 : 2xy\geq y+1,\; y\geq2x,\; y<N\}$ and $\lambda>0$ is a parameter. The operator $\Delta_{H,p}$ is the reversible Finsler p-Laplacian operator with the function $H$ being the Minkowski norm on $\mathbb{R}^N$. Under certain conditions on $V$, we establish the existence of a non-trivial non-negative bounded ground state solution of the above equation. # Introduction and main results {#int .unnumbered} In this paper, we are concerned with the following problem $$\begin{aligned} \label{maineq} \begin{cases} -\Delta_{H,p}u+V(x)|u|^{p-2}u-\Delta_{H,p}(|u|^{2\alpha}) |u|^{2\alpha-2}u=\lambda |u|^{q-1}u \text{ in } \mathbb{R}^n\\ u\in W^{1,p}(\mathbb{R}^n)\cap L^\infty(\mathbb{R}^N) \end{cases}\end{aligned}$$ where $\mathbb{N}\geq 2$, $(\alpha,p)\in D_N=\{(x,y)\in \mathbb{R}^2 : 2xy\geq y+1,\; y\geq2x,\; y<N\}$ and $\lambda>0$ is a parameter. The operator $\Delta_{H,p}$ is defined as $$\Delta_{H,p}u :=\mbox{div}(H(Du)^{p-1}\nabla_\eta H(Du))$$ known as anisotropic p-Laplacian, where $\nabla_\eta$ denotes the gradient operator with respect to $\eta$ variable. The function $H:\mathbb{R}^N\to [0, \infty)$ is a Minkowski norm satisfying the following properties: 1. $H$ is a norm on $\mathbb{R}^N$; 2. $H\in C^4(\mathbb{R}^N\setminus\{0\})$; 3. The Hessian matrix $\nabla_\eta^2(\frac{H^2}{2})$ is positive definite in $R^N\setminus\{0\}$; 4. $H$ is uniformly elliptic, that means the set $$\mathscr{B}_1^H:=\{ \xi\in\mathbb{R}^N : H(\xi)<1 \}$$ is uniformly convex, i.e. there exists $\Lambda>0$ such that $$\langle D^2H(\xi)\eta, \eta\rangle\geq \Lambda|\eta|^2, \text{ } \forall\xi\in\partial\mathscr{B}_1^H, \forall \eta\in\nabla H(\xi)^\perp.$$ 5. There exists a positive constant $M=M(N,H)$ such that for $1\leq i,j\leq N$, $$HH_{x_ix_j}+H_{x_i}H_{x_j}\leq M$$ For more details about $H$, one may consult [@KB; @FC; @CX] and the references therein. A few examples of $H$ are as follows: **Example 1**. *For $k=2$ or $k\geq 4$, we define $H_k:\mathbb{R}^N\to \mathbb{R}$ as $$\begin{aligned} H_k(x_1,x_2,...x_N):=(\sum_{i=1}^{N} |x_i|^k)^\frac{1}{k}. \end{aligned}$$* **Example 2**. *( [@VM]) For $\rho,\mu>0$, we define $H_{\rho,\mu}:\mathbb{R}^N\to\mathbb{R}$ as $$\begin{aligned} H_{\rho,\mu}(x_1,x_2,...,x_N):=\sqrt{\rho\sqrt{\sum_{i=1}^{N}x_i^4}+\mu\sum_{i=1}^{N}x_i^2} \end{aligned}$$* **Remark 3**. *There exists two constants $A,B>0$ such that $$\begin{aligned} A\;||x||\leq H(x)\leq B\;||x||, \mbox{ for all} \;x\in\mathbb{R}^N. \end{aligned}$$* **Remark 4**. *If $H=H_k$ then $$\Delta_{H,p}u=\begin{cases} \Delta_pu, \text{ if } k=2\;\mbox{and} \;1<p<\infty\\ \sum_{i=1}^{N}\frac{\partial}{\partial x_i}(|\frac{\partial u}{\partial x_i}|^{p-2}\frac{\partial u}{\partial x_i}), \text{ if } k=p\in (1,\infty) \end{cases}$$* Throughout this paper, we will assume that the potential $V\in C^3(\mathbb{R}^N,\mathbb{R})$ satisfies the following conditions: 1. 
$0\leq V(x)\leq V(\infty):=\liminf_{|x|\to\infty} V(x)<\infty$ and $V$ is not identically equal to $V(\infty)$. 2. $\langle \nabla V(x), x\rangle \in L^\infty(\mathbb{R}^N)\cup L^{\frac{N}{p}}(\mathbb{R}^N)$ and $NV(x)+\langle \nabla V(x), x\rangle\geq 0$. The equation ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}) arises in various branches of mathematical physics. For instance, when $H=H_2,\; p=2$ and $\alpha=1$, solutions of ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}) are standing wave solutions of the quasilinear Schrödinger equation of the form $$\begin{aligned} \label{phy} \iota\partial_t\Phi+\Delta\Phi+k\Delta h(|\Phi|^2)h'(|\Phi|^2)\Phi+\rho(|\Phi|^2)\Phi-V\Phi=0\end{aligned}$$ where $V:\mathbb{R}^N\to\mathbb{R}$ is a given potential, $\Phi:\mathbb{R}\times\mathbb{R}^N\to \mathbb{C}$; $h,\rho :\mathbb{R}^+\to \mathbb{R}$ are functions and $k$ is a real constant. It is worth mentioning that the semilinear case corresponding to $k=0$ has been extensively studied by many authors (see [@HP; @AK; @KJ] and the references therein).\ The general equation ([\[phy\]](#phy){reference-type="ref" reference="phy"}) with various forms of $h$ has been derived as a model of several physical phenomena. 1. The superfluid film equation in plasma physics has the structure ([\[phy\]](#phy){reference-type="ref" reference="phy"}) for $h(s)=s$, see [@Kur]. 2. For $h(s)=(1+s)^\frac{1}{2}$, equation ([\[phy\]](#phy){reference-type="ref" reference="phy"}) models the self-channeling of a high-power ultrashort laser in matter (see [@BG; @BR]). In recent years, extensive studies have focused on the existence of solutions for quasilinear Schrödinger equations of the form $$\begin{aligned} \label{phy1} -\Delta u+V(x)u-k\Delta(u^2)u=g(u)\; \text{in}\; \mathbb{R}^N\end{aligned}$$ where $k>0$ is a constant. The existence of a nonnegative solution for ([\[phy1\]](#phy1){reference-type="ref" reference="phy1"}) was proved for $N=1$ and $g(u)=|u|^{p-1}u$ by Poppenberg et al. [@PS] and for $N\geq 2$ by Wang et al. [@Wang1]. In [@Wang2], Wang and Liu have proved that the equation ([\[phy1\]](#phy1){reference-type="ref" reference="phy1"}) for $k=\frac{1}{2}$ and $g(u)=\lambda |u|^{p-1}u$ has a positive ground state solution when $3\leq p<22^*-1$ and the potential $V\in C(\mathbb{R}^N,\mathbb{R})$ satisfies one of the following conditions: 1. $\lim_{|x|\to\infty} V(x)=\infty.$ 2. $V(x)=V(|x|)$ and $N\geq 2.$ 3. $V$ is periodic in each variable. 4. $V_\infty:=\lim_{|x|\to\infty}V(x)=||V||_{L^\infty(\mathbb{R}^N)}<\infty.$ They also have proved in [@Liu] the existence of both one-sign and nodal ground state solutions of soliton type for ([\[phy1\]](#phy1){reference-type="ref" reference="phy1"}) when $3\leq p<22^*-1$ and the potential $V\in C(\mathbb{R}^N,\mathbb{R})$ satisfies 1. $0<\inf_{\mathbb{R}^N} V(x)\leq V(\infty):=\lim_{|x|\to\infty} V(x)<\infty.$ 2. There are positive constants $M,\;A$ and $m$ such that for $|x|\geq M$, $V(x)\leq V_\infty-\frac{A}{1+|x|^m}.$ Similar work with critical growth has been done in [@QW]. Ruiz and Siciliano have proved the existence of a ground-state solution for ([\[phy1\]](#phy1){reference-type="ref" reference="phy1"}) with $g(u)=|u|^{p-1}u$, $N\geq 3$, $3\leq p<22^*-1$ under the following assumptions: 1. $0<V_0\leq V(x)\leq V(\infty):=\lim_{|x|\to\infty} V(x)<\infty$ and $\langle \nabla V(x), x\rangle\in L^\infty(\mathbb{R}^N).$ 2.
For every $x\in \mathbb{R}^N$, the following map is concave $$s\to s^\frac{N+2}{N+p+1}V(s^\frac{1}{N+p+1}x)$$ In [@Chen], Chen and Xu have proved that the equation ([\[phy1\]](#phy1){reference-type="ref" reference="phy1"}) for $g(u)=\lambda |u|^{p-1}u$ has a positive ground state solution for large $\lambda>0$ under the condition $N\geq 3$, $3\leq p<22^*-1$, and the following assumptions on $V\in C(\mathbb{R}^N,\mathbb{R})$: 1. $0\leq V(x)\leq V(\infty):=\liminf_{|x|\to\infty} V(x)<\infty$ and $V$ is not identically equal to $V(\infty)$. 2. $\langle \nabla V(x), x\rangle \in L^\infty(\mathbb{R}^N)\cup L^{\frac{N}{2}}(\mathbb{R}^N)$ and $NV(x)+\langle \nabla V(x), x\rangle\geq 0$. We end the literature review by mentioning Chen and Zhang [@ZJ], who proved the existence of a positive ground-state solution of $$\begin{aligned} -\Delta u+V(x)u-k\Delta(u^2)u= A(x)|u|^{p-1}u+\lambda B(x)|u|^{22^*-1}\end{aligned}$$ when $N\geq 3, 2\leq p<22^*-1$ and under the following assumptions: 1. $V\in C^1(\mathbb{R}^N,\mathbb{R}^+), 0<V_0:=\inf_{x\in\mathbb{R}^N}V(x)\leq V(x)\leq V_\infty:=\lim_{|x|\to\infty}V(x)<\infty$ and $V(x)\not\equiv V_\infty$; 2. $\langle \nabla V(x),x\rangle\in L^\infty(\mathbb{R}^N)$, $\langle \nabla V(x),x\rangle\leq 0$; - $A\in C^1(\mathbb{R}^N,\mathbb{R}^+),\; \lim_{|x|\to\infty}A(x)=A_\infty\in (0,\infty), A(x)\geq A_\infty, 0\leq \langle \nabla A(x),\;x\rangle\in L^\infty(\mathbb{R}^N)$ - $B\in C^1(\mathbb{R}^N,\mathbb{R}^+), \;\lim_{|x|\to\infty} B(x)=B_\infty\in (0,\infty), B(x)\geq B_\infty, 0\leq \langle \nabla B(x),\;x\rangle\in L^\infty(\mathbb{R}^N)$ Before stating our main results, we define two sets $$\begin{split} \Pi=(p-1, 2\alpha p^*-2\alpha q+p-1)\cup (2\alpha p-2\alpha, 2\alpha p^*-2\alpha)\cup (p+2\alpha-2, 2\alpha p^*-2\alpha q+p+2\alpha-2)\\ \cup (\frac{p-1}{2\alpha}+2\alpha-1, p^*-p+\frac{p-1}{2\alpha}+2\alpha-1)\cup (2\alpha p-1, 2\alpha p^*-1). \end{split}$$ and $$D^N:=\{(x,y)\in\mathbb{R}^2: 2xy\geq y+1, y\geq 2x+1, y<N\}.$$ Our main results are as follows: **Theorem 5**. *Let $V$ be a constant potential, $N\geq 2$, $(\alpha, p)\in D_N$ and $q\in \Pi\cap (p-1, 2\alpha p^*-1)$. Then for large $\lambda>0$, equation ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}) admits a non-trivial non-negative bounded ground state solution $u\in C^1(\mathbb{R}^N)$.* **Theorem 6**. *Suppose that the potential $V$ satisfies $(v_1)-(v_2)$ and $N\geq 3$. We also assume that $p,\alpha$ satisfy one of the following* - *$p=2$, $(\alpha, p)\in D_N$;* - *$(\alpha, p)\in D^N$* *and $2\alpha p-1\leq q< 2\alpha p^*-1$ (one has to take strict inequality if $2\alpha p-1\notin \Pi$). Then for large $\lambda>0$, equation ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}) admits a non-trivial non-negative bounded ground state solution $u\in C^1(\mathbb{R}^N)$.* **Remark 7**. *If $H=H_2$, $p=2$, and $\alpha=1$ then by using the strong maximum principle for the Laplacian in Theorem [Theorem 5](#TC){reference-type="ref" reference="TC"} and Theorem [Theorem 6](#TV){reference-type="ref" reference="TV"}, we obtain a positive ground state solution of ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}), which generalizes the main results of Chen et al. [@Chen].* The paper is organized as follows. In section [1](#Pre){reference-type="ref" reference="Pre"}, we reformulate this problem in an appropriate Orlicz space and discuss a few useful lemmas. In section [2](#Aux){reference-type="ref" reference="Aux"}, we prove some auxiliary results.
Section [3](#TCP){reference-type="ref" reference="TCP"} is devoted to the proof of Theorem [Theorem 5](#TC){reference-type="ref" reference="TC"}. Finally, in section [4](#TVP){reference-type="ref" reference="TVP"}, we will give the proof of Theorem [Theorem 6](#TV){reference-type="ref" reference="TV"}.\ **Notation:** In this work we will use the following notations: - $C$ represents a positive constant whose value may change from line to line. - $W^{1,p}(\mathbb{R}^N):=\{ u\in L^p(\mathbb{R}^N) : \nabla u\in L^p(\mathbb{R}^N)\}$ with the usual norm $$||u||_{1,p,\mathbb{R}^N}^p=\int_{\mathbb{R}^N} [|u|^p+|\nabla u|^p] dx$$ - $D^{1,p}(\mathbb{R}^N):=\{ u\in L^{p^*}(\mathbb{R}^N) : \nabla u\in L^p(\mathbb{R}^N)\}$ with the norm $$||u||^p:=\int_{\mathbb{R}^N} |\nabla u|^p dx,$$ where $p^*=\frac{Np}{N-p}$. - For a function $h\in L^1_{loc}(\mathbb{R}^N)$, we denote $$\int h(x) dx:=\int_{\mathbb{R}^N} h(x) dx.$$ - $C_c^\infty(\mathbb{R}^N):=\{ u\in C^\infty(\mathbb{R}^N) |\; u \text{ has compact support.}\}$ - $X'$ denotes the dual of $X$ and $\langle\cdot , \cdot \rangle$ denotes the duality relation. - For $\Omega\subset\mathbb{R}^N$,$|\Omega|$ denotes the Lebesgue measure of $\Omega$. - o(1) represents a quantity which tends to 0 as $n\to\infty$. - The symbols $\rightharpoonup$ and $\rightarrow$ denote weak convergence and strong convergence respectively. # Variational Framework and Preliminaries {#Pre} The variational form of the equation ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}) is $$\begin{aligned} I(u)=\frac{1}{p}\int [1+(2\alpha)^{p-1}|u|^{(2\alpha-1)p}]H(Du)^p+ V(x)|u|^p]dx-\frac{\lambda}{q+1}\int |u|^{q+1} dx \end{aligned}$$ which is not well-defined on $W^{1,p}(\mathbb{R}^N)$. Inspired by [@Wang2; @Wang1; @Chen], we choose a transformation $u=f(v)$ where $f$ is defined as follows: $$\begin{aligned} \label{DE} \begin{cases} f'(t)=[1+(2\alpha)^{p-1}|f(t)|^{(2\alpha-1)p}]^{-\frac{1}{p}},\; t>0\\ f(0)=0 \text{ and }f(-t)=-f(t) \text{ for all $t\in\mathbb{R}$ } \end{cases}\end{aligned}$$ Under the transformation $u=f(v)$, the above functional becomes $$\begin{aligned} \label{TF} I(f(v))=\frac{1}{p}\int [H(Dv)^p+ V(x)|f(v)|^p]dx-\frac{\lambda}{q+1}\int |f(v)|^{q+1} dx. \end{aligned}$$ We give some important properties of $f$, which will be useful for establishing our main results. **Lemma 8**. *The function $f$ enjoys the following properties:* 1. *$f$ is uniquely defined, a $C^2-$function, and invertible.* 2. *$|f(t)|\leq |t|$ for all $t\in\mathbb{R}$.* 3. *$\frac{f(t)}{t}\to 1$ as $t\to 0$.* 4. *There exists $a>0$ such that $f(t)t^{-\frac{1}{2\alpha}}\to a$ as $t\to\infty$.* 5. *$|f(t)|\leq (2\alpha)^\frac{1}{2\alpha p}|t|^\frac{1}{2\alpha}$ for all $t\in\mathbb{R}^.$* 6. *$f(t)\leq 2\alpha t f'(t)\leq2\alpha f(t)$ for all $t\geq 0.$* 7. *There exists $C>0$ such that $$f(t)\geq \begin{cases} C|t|,\; \text{ if }\; |t|\leq 1\\ C|t|^\frac{1}{2\alpha},\; \text{ if }\; |t|\geq 1 \end{cases}$$* 8. *$|f|^p$ is convex if and only if $p\geq 2\alpha.$* 9. *$|f^{2\alpha-1}(t)f'(t)|\leq (\frac{1}{2\alpha})^\frac{p-1}{p}$, for all $t\in\mathbb{R}.$* 10. *There exist two positive constants $M_1$ and $M_2$ such that $$|t|\leq M_1|f(t)|+M_2|f(t)|^{2\alpha}, \text{ for all $t\in \mathbb{R}$}.$$* *Proof.* The proof of the first seven properties can be found in [@FG]. To prove property $(viii)$ we define $\phi:\mathbb{R}\to\mathbb{R}$ by $\phi(t)=|f(t)|^p$. It is easy to check that, 1. $\phi'(t)=p|f(t)|^{p-2}f(t)f'(t)$ 2. 
$\phi"=p|f|^{p-2}[ff"+(p-1)f'^2].$ Now, $\phi$ is convex if and only if $$\begin{aligned} ff"+(p-1)f'^2=\frac{(p-1)+(p-2\alpha)(2\alpha)^{p-1}|f|^{p(2\alpha-1)}}{[1+(2\alpha)^{p-1}|f|^{p(2\alpha-1)}]^{1+\frac{2}{p}}}\geq 0.\end{aligned}$$ Moreover, $ff"+(p-1)f'^2\geq 0$ if and only if $(p-1)+(p-2\alpha)(2\alpha)^{p-1}|f|^{p(2\alpha-1)}\geq 0$. Hence, we can conclude that if $p\geq 2\alpha$ then $\phi$ is convex. Conversely, if $\phi$ is convex then $$(p-1)+(p-2\alpha)(2\alpha)^{p-1}|f(t)|^{p(2\alpha-1)}\geq 0 , \text{ for all } t>0.$$ Therefore, $$\begin{aligned} \frac{(p-1)+(p-2\alpha)(2\alpha)^{p-1}|f|^{p(2\alpha-1)}}{t^\frac{p(2\alpha-1)}{2\alpha}}\geq 0.\end{aligned}$$ Take $t\to \infty$, we have $$\begin{aligned} \lim_{t\to \infty}[(p-1)t^{-\frac{p(2\alpha-1)}{2\alpha}}+(p-2\alpha)(2\alpha)^{p-1}(|f(t)|{t^{-\frac{1}{2\alpha}}})^{p(2\alpha-1)}]\geq 0\end{aligned}$$ Using the fact $2\alpha\geq \frac{p+1}{p}$ and the property (iv), we have $(p-2\alpha)(2\alpha)^{p-1}a^{p(2\alpha-1)}\geq 0$ that is, $p\geq 2\alpha$.\ The definition of $f'$ follows the property (ix) and property (x) is an immediate consequence of properties (iv) and (v). ◻ Now, we are going to define a suitable space so that RHS of ([\[TF\]](#TF){reference-type="ref" reference="TF"}) makes sense. We define a normed space $X:=\{v\in W^{1,p}(\mathbb{R}^N): \int V(x)|f(v)|^p\;dx< \infty\}$ equipped with the norm $$\begin{aligned} \label{n1} ||v||=||H(Dv)||_{L^p(\mathbb{R}^N)}+\inf_{\eta>0} \frac{1}{\eta}\{1+\int V(x)|f(\eta v)|\; dx\}.\end{aligned}$$ The space $X_r=\{v\in X: \text{ v is radial }\}$ is a subspace of $X$. **Remark 9**. *The following inequality holds true $$\begin{aligned} \label{n2} ||v||\leq 1+||H(Dv)||_{L^p(\mathbb{R}^N)}+\int V(x)|f(v)|\; dx\end{aligned}$$* **Lemma 10**. 1. *There exists a positive constant $C$ such that for all $v\in X$ $$\frac{\int V(x)|f(v)|^p dx}{1+[\int V(x)|f(v)|^p]^\frac{p-1}{p}}\leq C[||\nabla v||_{L^p(\mathbb{R}^N)}^{p^*}+\inf_{\xi>0}\frac{1}{\xi}(1+\int V(x)|f(\xi v)|^p dx)].$$* 2. *If $v_n\rightarrow v$ in $X$ then $$\int V(x)|f(v_n)-f(v)|^p dx\rightarrow 0$$ and $$\int V(x)||f(v_n)|^p-|f(v)|^p| dx\rightarrow 0.$$* 3. 
*If $\int V(x)|f(v_n-v)|^p dx\rightarrow 0$ then $$\inf_{\xi>0}\frac{1}{\xi}[1+\int V(x)|f(\xi(v_n-v))|^p dx]\rightarrow 0.$$* *Proof.* For $\xi>0$ and $v\in X$, we define $$A_\xi=\{x\in\mathbb{R}^N:\xi|v(x)|\leq 1\}.$$ Now, by using (ii) we can write $$\begin{aligned} \int V(x)|f(v)|^p dx&=\int_{A_\xi} V(x)|f(v)|^p dx+\int_{A_\xi^c} V(x)|f(v)|^p dx\\ &=\int_{A_\xi} V(x)|f(v)|^{p-1}|v(x)| dx+\int_{A_\xi^c} V(x)|f(v)|^p dx \end{aligned}$$ Using Hölder inequality, (vii) in lemma [Lemma 8](#P){reference-type="ref" reference="P"} and $s^\frac{1}{p}\leq 1+s$ for all $s\geq 0$ we have $$\begin{aligned} \label{LL1} \int_{A_\xi} V(x)|f(v)|^{p-1}|v(x)| dx&\leq (\int_{A_\xi} V(x)|v|^p dx)^\frac{1}{p}(\int_{A_\xi} V(x)|f(v)|^p| dx)^\frac{p-1}{p}\nonumber\\ &\leq (\frac{1}{\xi}\int_{A_\xi} V(x)|\xi v|^p dx)^\frac{1}{p}(\int V(x)|f(v)|^p| dx)^\frac{p-1}{p}\nonumber\\ &\leq C(\frac{1}{\xi}\int_{A_\xi} V(x)|f(\xi v)|^p dx)^\frac{1}{p}(\int V(x)|f(v)|^p| dx)^\frac{p-1}{p}\nonumber\\ &\leq C[||\nabla v||_{L^p(\mathbb{R}^N)}^{p^*}+\frac{1}{\xi}(1+\int V(x)|f(\xi v)|^p dx)](\int V(x)|f(v)|^p| dx)^\frac{p-1}{p}\end{aligned}$$ If $\xi\geq 1$ then by using (iv) in lemma [Lemma 8](#P){reference-type="ref" reference="P"} we get $$\begin{aligned} \label{LL2} \int_{A_\xi^c} V(x)|f(v)|^p dx&\leq C\int_{A_\xi^c} V(x)|v|^\frac{p}{2\alpha} dx\leq C\frac{1}{\xi}\int_{A_\xi^c} V(x)|\xi v|^\frac{p}{2\alpha} dx\leq C\frac{1}{\xi}\int_{A_\xi^c} V(x)|f(\xi v)|^p dx\nonumber\\ &\leq C[||\nabla v||_{L^p(\mathbb{R}^N)}^{p^*}+\frac{1}{\xi}(1+\int V(x)|f(\xi v)|^p dx)]\end{aligned}$$ If $0<\xi<1$ then by using $(v_1)$, Sobolev inequality and Chebyshev inequality, we deduce $$\begin{aligned} \label{LL3} \int_{A_\xi^c} V(x)|f(v)|^p dx&\leq C\int_{A_\xi^c} V(x)|v|^p dx \leq C\int_{A_\xi^c}|v|^p dx \leq C[\int_{A_\xi^c}|v|^{p^*} dx]^\frac{p}{p^*}|A_\xi^c|^{1-\frac{p}{p^*}}\nonumber\\ &\leq C[\int|v|^{p^*} dx]^\frac{p}{p^*}|A_\xi^c|^{1-\frac{p}{p^*}}\leq C[\int |\nabla v|^p dx][\xi^{p^*}\int_{A_\xi^c}|v|^{p^*}]^\frac{p}{N}\nonumber\\ &\leq C[\int |\nabla v|^p dx][\int_{\mathbb{R}^N}|v|^{p^*}]^\frac{p}{N} \leq C[\int |\nabla v|^p dx]^{1+\frac{p^*}{N}} \leq C[\int |\nabla v|^p dx]^\frac{p^*}{p}\nonumber\\ &\leq C[||\nabla v||_{L^p(\mathbb{R}^N)}^{p^*}+\frac{1}{\xi}(1+\int V(x)|f(\xi v)|^p dx)]\end{aligned}$$ Thus, from ([\[LL1\]](#LL1){reference-type="ref" reference="LL1"})-([\[LL3\]](#LL3){reference-type="ref" reference="LL3"}) we can conclude that for all $\xi>0$ $$\int V(x)|f(v)|^p dx\leq C[||\nabla v||_{L^p(\mathbb{R}^N)}^p+\frac{1}{\xi}(1+\int V(x)|f(\xi v)|^p dx)][1+(\int V(x)|f(v)|^p dx)^\frac{p-1}{p}]$$ which proves the first property. To prove the second property, if $v_n\rightarrow v$ in $X$ then from the first property we have, $$\int V(x)|f(v_n-v)|^p dx\rightarrow 0 \text{ as } n\rightarrow\infty.$$ There exists a nonnegative function $h\in L^1(\mathbb{R}^N)$ such that up to a subsequence $v_n\rightarrow v$ a.e. in $\mathbb{R}^N$ and $\color{red}{V(x)|f(v_n-v)|^p\leq h}$. Since $|f|^p$ is convex and satisfies $\Delta_2$ condition (see M.M Rao[@MR]) so $V(x)|f(v_n)|^p\leq CV(x)[|f(v_n-v)|^p+|f(v)|^p]\leq C[h+V(x)|f(v)|^p]$. Moreover, Fatou's lemma ensures $\int V(x)|f(v)|^p dx<\infty$. 
Thus, by the Dominated Convergence Theorem, we can conclude $$\int V(x)|f(v_n)-f(v)|^p dx\rightarrow 0.$$ and $$\int V(x)||f(v_n)|^p-|f(v)|^p| dx\rightarrow 0.$$ To prove the third part, since $\frac{f(t)}{t}$ is nonincreasing in $(0, \infty)$ so for $\xi>1$ and $v\in X$, we obtain $$\begin{aligned} \label{LL4} \frac{1}{\xi}(1+\int V(x)|f(\xi v)|^p dx)\leq \frac{1}{\xi}+\xi^{p-1}\int V(x)|f(v_n-v)|^p dx\end{aligned}$$ For every $\epsilon>0$, we can choose $\xi_0>1$ such that $\frac{1}{\xi_0}<\frac{\epsilon}{2}$. There exists a positive integer $N_0$ such that $\int V(x)|f(v_n-v)|^p dx<\frac{\epsilon}{2\xi_0^{p-1}}$, for all $n\geq N_0.$ Thus, ([\[LL4\]](#LL4){reference-type="ref" reference="LL4"}) yields $$\begin{aligned} \inf_{\xi>0}\frac{1}{\xi}(1+\int V(x)|f(\xi v)|^p dx)\leq \epsilon, \text{ for all } n\geq N_0\end{aligned}$$ and the property (3) follows. ◻ **Corollary 11**. *If $u_n\rightarrow 0$ in $X$ if and only if $\int [H(Du_n)^p+V|f(u_n)|^p] dx\rightarrow 0$ as $n\rightarrow \infty$.* *Proof.* The proof is an immediate consequence of the above lemma. ◻ Define $E=\{u\in W^{1,p}(\mathbb{R}^N)| \int V(x)|u|^p\;dx<\infty\}$ equipped with the norm $$||u||^p=\int [|\nabla u|^p+V(x)|u|^p]\; dx$$ **Corollary 12**. *The embedding $E\hookrightarrow X$ is continuous.* *Proof.* Using the second property in lemma [Lemma 8](#P){reference-type="ref" reference="P"} we have $$\int V(x)|f(v_n)|^p dx\leq \int V(x)|v_n|^pdx.$$ Thus, if $v_n\rightarrow 0$ in $E$ then $$\int V(x)|f(v_n)|^p dx\rightarrow 0.$$ Hence, lemma [Lemma 10](#LL){reference-type="ref" reference="LL"} ensures $v_n\rightarrow 0$ in $X$. ◻ **Lemma 13**. 1. *The map $v\rightarrow f(v)$ is continuous from $X$ to $L^s(\mathbb{R}^N)$ for $p\leq s\leq 2\alpha p^*$. Moreover, the map is locally compact for $p\leq s<2\alpha p^*$.* 2. *The map $v\rightarrow f(v)$ from $X_r$ to $L^s(\mathbb{R}^N)$ is compact for $p<s<2\alpha p^*$.* *Proof.* It is clear that under the condition $(v_1)$, the embedding $E\hookrightarrow W^{1,p}(\mathbb{R}^N)$ is continuous. Moreover, if $v\in X$ then $f(v)\in E$. There exists $C>0$ such that for every $v\in X$, $$\begin{aligned} \label{LL5} ||f(v)||_{L^p(\mathbb{R}^N)}\leq C ||f(v)||_E\leq C[\int (|\nabla v|^p+V(x)|f(v)|^p)dx]^\frac{1}{p}.\end{aligned}$$ Using the property (v) and (ix) in lemma [Lemma 8](#P){reference-type="ref" reference="P"}, we have $$\begin{aligned} \label{LL6} \int |f(v)|^{2\alpha p^*} dx\leq [\int |\nabla f^{2\alpha}(v)|^p dx]^\frac{p}{p^*}\leq C[\int |\nabla v|^p dx]^\frac{p}{p^*}\end{aligned}$$ Using ([\[LL5\]](#LL5){reference-type="ref" reference="LL5"}), ([\[LL6\]](#LL6){reference-type="ref" reference="LL6"}) and the interpolation inequality, we can conclude $f(v)\in L^s(\mathbb{R}^N)$ for all $s \in [p, 2\alpha p^*]$. Let $v_n\rightarrow v$ in $X$. The property (1) in lemma [Lemma 10](#LL){reference-type="ref" reference="LL"} ensures $$\int V(x)|f(v_n)-f(v)|^p dx\rightarrow 0.$$ Furthermore, $Dv_n\rightarrow Dv$ in $(L^p(\mathbb{R}^N))^N$. 
For every $1\leq i\leq N$, without loss of generality we can assume that there exists $h_i\in L^p(\mathbb{R}^N)$ such that for almost every $x\in\mathbb{R}^N$, $$\begin{split} v_n(x)\rightarrow v(x) \;\mbox{ as }\; n\to \infty\\ \frac{\partial v_n}{\partial x_i}(x)\rightarrow \frac{\partial v_n}{\partial x_i}(x)\; \mbox{ as } \;n\to \infty\\ |\frac{\partial v_n}{\partial x_i}|,\, |\frac{\partial v}{\partial x_i}|\leq h_i \end{split}$$ By the Dominated Convergence Theorem, we have $$\int |\frac{\partial}{\partial x_i} (f(v_n))-\frac{\partial}{\partial x_i} (f(v))|^p dx=\int |f'(v_n)\frac{\partial v_n}{\partial x_i}-f'(v)\frac{\partial v}{\partial x_i}|^p dx\rightarrow 0.$$ Therefore, $Df(v_n)\rightarrow Df(v)$ in $L^p(\mathbb{R}^N)$. Consequently, $f(v_n)\to f(v)$ in $E$. Since for all $s\in [p, p^*]$, $$E\hookrightarrow W^{1,p}(\mathbb{R}^N)\hookrightarrow L^s(\mathbb{R}^N)$$ so $f(v_n)\to f(v)$ in $L^s(\mathbb{R}^N).$ Interpolation inequality and Rellich's lemma complete the first part. The second part is easily deduced from Theorem [\[ST\]](#ST){reference-type="ref" reference="ST"}. ◻ **Lemma 14**. *$(X, ||\cdot||)$ is a Banach space.* *Proof.* Let $\{u_n\}$ is a Cauchy sequence in $X$. Since $X\hookrightarrow D^{1,p}(\mathbb{R}^N)$ so there exists $u\in D^{1,p}(\mathbb{R}^N)$ such that $u_n\to u$ in $D^{1,p}(\mathbb{R}^N)$. By the inequality (1) in lemma [Lemma 10](#LL){reference-type="ref" reference="LL"}, we observe $$\int V|f(u_n-u_m)|^p dx\to 0 \text{ as } m,n\to\infty.$$ Under the assumption $(v_1)$, we have $$\label{f2} \int |f(u_n-u_m)|^p dx\to 0 \text{ as } m,n\to\infty$$ Using ([\[LL6\]](#LL6){reference-type="ref" reference="LL6"}), ([\[f2\]](#f2){reference-type="ref" reference="f2"}) and Interpolation inequality, we get $$\label{f3} \int |f(u_n-u_m)|^{2\alpha p} dx\to 0 \text{ as } m,n\to\infty.$$ Using property (x) in lemma [Lemma 8](#P){reference-type="ref" reference="P"}, we have $$\int |u_n-u_m|^p dx\leq M_1\int |f(u_n-u_m)|^p dx + M_2\int |f(u_n-u_m)|^{2\alpha p} dx\to 0 \text{ as } m,n\to\infty$$ which implies $\{u_n\}$ is Cauchy in $L^p(\mathbb{R}^N)$. Completeness property allows us to assume the existence of $w\in L^p(\mathbb{R}^N)$ such that $$\begin{split} u_n\to w \text{ in } L^p(\mathbb{R}^N),\; u_n\to w \text{ a.e. in } \mathbb{R}^N. \end{split}$$ Since $Du_n\to Du$ in $L^p(\mathbb{R}^N)$ so, $Dw=Du$. Consequently, $w\in W^{1,p}(\mathbb{R}^N).$\ Our next claim is $$u_n\to w \text{ in } X.$$ For every $\epsilon>0$ there exists $N_0\in \mathbb{N}$ such that $$\int V|f(u_n-u_m)|^p dx <\epsilon \text{ for all } m,n\geq N_0.$$ By Fatou's lemma, we have for $n\geq N_0$, $$\begin{aligned} \label{f4} \int V|f(u_n-w)|^p dx\leq\liminf\limits_{ m\to \infty}\int V|f(u_n-u_m)|^p dx<\epsilon\end{aligned}$$ Using property 3 in lemma [Lemma 10](#LL){reference-type="ref" reference="LL"}, we can conclude $u_n\to w$ in $X$. Hence, $(X, ||\cdot||)$ is a Banach space. ◻ **Remark 15**. *Under the condition $(v_1)$, $X=W^{1,p}(\mathbb{R}^N)$ and the $||\cdot||$ is equivalent to the usual norm $||\cdot||_{1,p,\mathbb{R}^N}$.* *Proof.* Let $u\in W^{1,p}(\mathbb{R}^N)$. By property (ii) in lemma [Lemma 8](#P){reference-type="ref" reference="P"}, we have $$\int V(x)|f(u)|^p dx\leq V(\infty) \int |u|^p \;dx<\infty.$$ Hence, $X=W^{1,p}(\mathbb{R}^N)$. We claim that the identity map $Id: W^{1,p}(\mathbb{R}^N)\to X$ is a bounded linear map. 
Using property (ii) in lemma [Lemma 8](#P){reference-type="ref" reference="P"} and $(v_1)$, we have $$\begin{aligned} \inf_{\xi>0}\frac{1}{\xi}[1+\int V(x)|f(\xi u)|^p\; dx]\leq \inf_{\xi>0}[\frac{1}{\xi}+(\xi)^{p-1}V(\infty)\int |u|^p\; dx]\end{aligned}$$ Now, consider the function $$g(\xi)=\frac{1}{\xi}+L\xi^{p-1} \text{ for } \xi>0$$ where $L=V(\infty)\int |u|^p\; dx$. One can directly find the global minimum of $g$, which is equal to $[(p-1)^\frac{1}{p}+(p-1)^\frac{1-p}{p}]\;L^\frac{1}{p}.$ Thus, there exists a constant $C=[(p-1)^\frac{1}{p}+(p-1)^\frac{1-p}{p}]V(\infty)^\frac{1}{p}$ such that $$\inf_{\xi>0}\frac{1}{\xi}[1+\int V(x)|f(\xi u)|^p dx]\leq C||u||_{L^p(\mathbb{R}^N)}$$ which proves that the map $Id$ is bounded. The conclusion follows from the Inverse Mapping Theorem. ◻ The following compactness lemma is very useful whose proof is similar to that of lemma 2.2 [@YW]. **Lemma 16**. *If $\{v_n\}$ is a bounded sequence in $X$ such that $$\sup_{x\in \mathbb{R}^N}\int_{B_1(x)}|f(v_n)|^p dx\rightarrow 0 \text{ as } n\rightarrow\infty.$$ Then $f(v_n)\rightarrow 0$ in $L^s(\mathbb{R}^N)$ for every $s\in (p, 2\alpha p^*).$* **Theorem 17**. *(Lions [@PLL] )[\[ST\]]{#ST label="ST"} The embedding $W^{1,p}_r(\mathbb{R}^N)\hookrightarrow L^q(\mathbb{R}^N)$ is compact for $p<q<p^*$.* We will use the following slightly modified version of Jeanjean [@JJ]. The last part of the theorem follows from [@JJ]. **Theorem 18**. *Let $(X,||\cdot||)$ be a Banach space and $J\subset\mathbb{R}^+$ be an interval. Consider the family of $C^1$ functional $$\begin{aligned} I_\delta(u)=Au-\delta Bu, \text{ }\delta\in J \end{aligned}$$ where $B$ is a non-negative functional and either $Au\to \infty$ or $Bu\to \infty$ as $||u||\to\infty$. If $$C_\delta:=\inf_{\gamma\in\Gamma_\delta}\max_{t\in[0,1]}I_\delta(\gamma(t))>0$$ where $\Gamma_\delta=\{ \gamma\in C([0, 1];X) :\gamma(0)=0, I_\delta(\gamma(1))<0 \}$. Then for almost every $\delta\in J$, there exists a sequence $\{x_n\}$ such that* - *$\{x_n\}$ is bounded in $X$.* - *$I_\delta(x_n)$ converges to $C_\delta$.* - *$I'_\delta(x_n)$ converges to 0 in $X^*$.* *Moreover, the map: $\delta\to C_\delta$ is continuous from the left.* Now, we are ready to reformulate our problem. We define another functional $J:X\to \mathbb{R}$ by $$\begin{aligned} J(v)=\frac{1}{p}\int H(Dv)^p+ V(x)|f(v)|^p]dx-\frac{\lambda}{q+1}\int |f(v)|^{q+1} dx. \end{aligned}$$ which is a $C^1$ functional, whose derivative is given by $$\begin{aligned} \langle J'(v),w\rangle=\int H(Dv)^{p-1}\nabla H(Dv).\nabla Dw dx +\int V|f(v)|^{p-2}f(v)f'(v)w dx - \lambda \int |f(v)|^{q-1}f(v)f'(v)w dx\end{aligned}$$ If $v$ is a critical point of $J$ then $v$ satisfies the following equation in the weak sense $$\begin{aligned} \label{maineq2} -\Delta_{H,p}v+V(x)|f(v)|^{p-2}f(v)f'(v)-\frac{\lambda}{q+1}|f(v)|^{q-1}f(v)f'(v)=0.\end{aligned}$$ Thus $u=f(v)$ is a solution of ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}). Our aim is to find a critical point of $J$ in $X$. Firstly, we present Poho$\breve{z}$aev type identity corresponding to ([\[maineq2\]](#maineq2){reference-type="ref" reference="maineq2"}), and for that, we need the following lemma. **Lemma 19**. *Let $v\in W^{1,p}(\mathbb{R}^N)$ be a solution of ([\[maineq2\]](#maineq2){reference-type="ref" reference="maineq2"}) with $q\in \Pi$. 
Then $v\in L^\infty(\mathbb{R}^N)$.* *Proof.* For each $m\in \mathbb{N}$ and $s>1$, we define $A_m=\{x\in\mathbb{R}^N| |u(x)|^{s-1}\leq m\}$, $B_m=\mathbb{R}^N\setminus A_m$ and let us consider two functions $$\begin{aligned} v_m=\begin{cases} v|v|^{p(s-1)} \text{ if } x\in A_m\\ m^pv \text{ if } x\in B_m \end{cases}\end{aligned}$$ and $$\begin{aligned} w_m=\begin{cases} v|v|^{(s-1)} \text{ if } x\in A_m\\ mv \text{ if } x\in B_m \end{cases}\end{aligned}$$ So we have that $v_m\in W^{1,p}(\mathbb{R}^N)$, $|v_m|\leq |v|^{ps-p+1}$, $||v|^{p-1}v|=|w_m|^p\leq m^p|v|^p$, $|w_m|\leq |v|^s$, $$\begin{aligned} \nabla v_m=\begin{cases} (ps-p+1)|v|^{p(s-1)}\nabla v \text{ if } x\in A_m\\ m^p\nabla v \text{ if } x\in B_m \end{cases}\end{aligned}$$ and $$\begin{aligned} \nabla w_m=\begin{cases} s|v|^{s-1}\nabla v \text{ if } x\in A_m\\ m\nabla v \text{ if } x\in B_m \end{cases}\end{aligned}$$ We also have $$\begin{aligned} \label{M0} \int H(Dw_m)^p dx&=\int_{A_m} H(s|v|^{s-1}Dv)^p dx + \int_{B_m} H(mDv)^p dx\nonumber\\ &=\int_{A_m} s^p |v|^{p(s-1)}H(Dv)^p dx + m^p\int_{B_m} H(Dv)^p dx\end{aligned}$$ and $$\begin{aligned} \label{M1} \int H(Dv)^{p-1}\nabla H(Dv).Dv_m dx&=(ps-p+1)\int_{A_m} |v|^{p(s-1)}H(Dv)^{p-1}\nabla H(Dv).Dv dx + m^p\int_{B_m} H(Dv)^p dx\nonumber\\ &=(ps-p+1)\int_{A_m} |v|^{p(s-1)}H(Dv)^p dx + m^p\int_{B_m} H(Dv)^p dx\end{aligned}$$ As a consequence of ([\[M1\]](#M1){reference-type="ref" reference="M1"}), we get $$\begin{aligned} \label{M2} \int_{A_m} |v|^{p(s-1)}H(Dv)^p dx\leq\frac{1}{ps-p+1}\int H(Dv)^{p-1}\nabla H(Dv).Dv_m dx\end{aligned}$$ Using ([\[M0\]](#M0){reference-type="ref" reference="M0"}) and ([\[M1\]](#M1){reference-type="ref" reference="M1"}), we derive $$\begin{aligned} \label{M3} \int H(Dw_m)^p dx=\int H(Dv)^{p-1}\nabla H(Dv).Dv_m dx+(s^p-ps+p-1) \int_{A_m} |v|^{p(s-1)}H(Dv)^p dx \end{aligned}$$ Consider $v_m$ as a test function and using the definition of weak solution, we obtain $$\begin{aligned} \label{M4} \int H(Dv)^{p-1}\nabla H(Dv).Dv_m=\int [\lambda|f(v)|^{q-1}-V(x)|f(v)|^{p-2}]f(v)f'(v)v_m dx \end{aligned}$$ Using ([\[M2\]](#M2){reference-type="ref" reference="M2"})-([\[M4\]](#M4){reference-type="ref" reference="M4"}), we obtain $$\begin{aligned} \label{M5} \int H(Dw_m)^p&dx+s^p\int V(x)|f(v)|^{p-2}f(v)f'(v)v_m dx=\int H(Dv)^{p-1}\nabla H(Dv).Dv_m dx+(s^p-ps+p-1)\nonumber\\ &\int_{A_m} |v|^{p(s-1)}H(Dv)^p dx+s^p\int V(x)|f(v)|^{p-2}f(v)f'(v)v_m dx\nonumber\\ &\leq [\frac{s^p-ps+p-1}{ps-p+1}+1]\int H(Dv)^{p-1}\nabla H(Dv).Dv_m dx+s^p\int V(x)|f(v)|^{p-2}f(v)f'(v)v_m dx\nonumber\\ &\leq s^p[\int H(Dv)^{p-1}\nabla H(Dv).Dv_m dx+\int V(x)|f(v)|^{p-2}f(v)f'(v)v_m dx]\nonumber\\ &=s^p\lambda\int |f(v)|^{q-1}f(v)f'(v)v_m dx\end{aligned}$$ since $f'(t)\leq 1$, ([\[M5\]](#M5){reference-type="ref" reference="M5"}) implies $$\begin{aligned} \int H(Dw_m)^p dx\leq s^p\lambda\int |f(v)|^q|v_m| dx\end{aligned}$$ By using the facts that $|f(t)|\leq |t|$, $|f(t)|\leq \frac{1}{2^{2\alpha p}}|t|^\frac{1}{2\alpha}$ and $||v|^{p-1}v_m|=|v_m|^p$, we have $$\begin{aligned} \label{M6} \int H(Dw_m)^p dx\leq s^p\lambda\int |f(v)|^{p-1}|f(v)|^{q-p+1}|v_m| dx &\leq s^p\lambda\int |v|^{p-1}|v_m||v|^\frac{q-p+1}{2\alpha} dx\nonumber\\&\leq s^p\lambda\int |w_m|^p|v|^\frac{q-p+1}{2\alpha} dx\end{aligned}$$ Thus, it follows from Hölder inequality and $|w_m|\leq |v|^s$, that $$\begin{aligned} \label{M7} \int H(Dw_m)^p dx\leq s^p\lambda (\int |w_m|^{pr} dx)^\frac{1}{r}(\int|v|^\frac{(q-p+1)r'}{2\alpha})^\frac{1}{r'}\leq s^p\lambda (\int |v|^{spr} dx)^\frac{1}{r}(\int|v|^{p^*} 
dx)^\frac{1}{r'}\end{aligned}$$ where we choose $r>0$ such that $$\frac{(q-p+1)r'}{2\alpha}=p^*$$ Now, by applying the Sobolev's inequality and the inequality ([\[M7\]](#M7){reference-type="ref" reference="M7"}), we deduce $$\begin{aligned} (\int_{A_m} |w_m|^{p^*})^\frac{p}{p^*}\leq C\int H(Dw_m)^p dx\leq s^p\lambda (\int |v|^{spr} dx)^\frac{1}{r}(\int|v|^{p^*} dx)^\frac{q-p+1}{2\alpha p^*}\end{aligned}$$ Since $|w_m|=|v|^s$ in $A_m$, by using the monotone convergence theorem, we obtain $$\begin{aligned} (\int |v|^{sp^*})^\frac{1}{sp^*}\leq \{C\lambda\}^\frac{1}{sp}s^\frac{1}{s} (\int |v|^{spr} dx)^\frac{1}{spr}(\int|v|^{p^*} dx)^\frac{q-p+1}{2sp\alpha p^*}\end{aligned}$$ that is, $$\begin{aligned} \label{M8} ||v||_{L^{sp^*}(\mathbb{R}^N)}\leq (C\lambda)^\frac{1}{sp}s^\frac{1}{s}||v||_{L^{spr}(\mathbb{R}^N)}||v||_{L^{p^*}(\mathbb{R}^N)}^\frac{\tilde{r}}{s}\end{aligned}$$ where $\tilde{r}=\frac{q-p+1}{2\alpha p}$. We choose $\sigma=\frac{p^*}{pr}$ and observe that $\sigma>1$ if and only if $q<2\alpha p^*-2\alpha p+p-1$.\ By taking $s=\sigma$ in ([\[M8\]](#M8){reference-type="ref" reference="M8"}), we obtain $$\begin{aligned} \label{M9} ||v||_{L^{\sigma p^*}(\mathbb{R}^N)}\leq (C\lambda)^\frac{1}{\sigma p}\sigma^\frac{1}{\sigma}||v||_{L^{p^*}(\mathbb{R}^N)}||v||_{L^{p^*}(\mathbb{R}^N)}^\frac{\tilde{r}}{\sigma}\end{aligned}$$ putting $s=\sigma^2$ and using ([\[M9\]](#M9){reference-type="ref" reference="M9"}), we have $$\begin{aligned} \label{M10} ||v||_{L^{\sigma^2p^*}(\mathbb{R}^N)}\leq (C\lambda)^{\frac{1}{p}[\frac{1}{\sigma}+\frac{1}{\sigma^2}]}\sigma^{[\frac{1}{\sigma}+\frac{2}{\sigma^2}]}||v||_{L^{p^*}(\mathbb{R}^N)}^{\tilde{r}[\frac{1}{\sigma}+\frac{1}{\sigma^2}]}\end{aligned}$$ putting $s=\sigma^k$ and continuing the above process, we obtain $$\begin{aligned} \label{M11} ||v||_{L^{\sigma^kp^*}(\mathbb{R}^N)} \leq (C\lambda)^{\frac{1}{p} \sum_{i=1}^{k}\frac{1}{\sigma^i}} \sigma^{\sum_{i=1}^{k}\frac{i}{\sigma^i}} ||v||_{L^{p^*}(\mathbb{R}^N)}^{\tilde{r}\sum_{i=1}^{k}\frac{1}{\sigma^i}}\end{aligned}$$ By taking $k\to \infty$, we obtain $$\begin{aligned} ||v||_{\infty}<\infty\end{aligned}$$ Thus, we proved that if $p-1<q<2\alpha p^*-2\alpha p+p-1$ then $u\in L^\infty(\mathbb{R}^N)$. Again, we can easily derive the following inequality from ([\[M5\]](#M5){reference-type="ref" reference="M5"}) by using the facts that $f'(t)\leq 1$, $|f(t)|\leq \frac{1}{2^{2\alpha p}}|t|^\frac{1}{2\alpha}$ and $||v|^{p-1}v_m|=|v_m|^p$. $$\begin{aligned} \label{M12} \int H(Dw_m)^p dx&\leq s^p\lambda 2^\frac{q}{2\alpha p}\int |w_m|^p|v|^{\frac{q}{2\alpha}-p+1} dx \nonumber\\\end{aligned}$$ Similar argument as before we can prove $u\in L^\infty(\mathbb{R}^N)$ if $2\alpha p-2\alpha<q<2\alpha p^*-2\alpha$. Since $|f'(t)|\leq \frac{C}{|f(t)|^{2\alpha-1}}$, ([\[M5\]](#M5){reference-type="ref" reference="M5"}) implies $$\begin{aligned} \int H(Dw_m)^p dx\leq Cs^p\lambda\int |f(v)|^{q-2\alpha+1}|v_m| dx\end{aligned}$$ If $q$ satisfies one of the following conditions 1. $p+2\alpha-2<q<2\alpha p^*-2\alpha p+2\alpha-2$ 2. $2\alpha p-1<q<2\alpha p^*-1$ 3. $\frac{p-1}{2\alpha}+2\alpha-1<q<p^*-p+\frac{p-1}{2\alpha}+2\alpha-1$ then by similar argument one can prove that $u\in L^\infty(\mathbb{R}^N)$. ◻ **Lemma 20**. 
*If $v$ is a critical point of the functional $J$ then $$P_V(v)=\frac{N-p}{p}\int H(Dv)^p dx+ \frac{N}{p}\int V(x)|f(v)|^p dx+\frac{1}{p}\int \langle\nabla V(x), x\rangle |f(v)|^p dx-\frac{\lambda N}{q+1}\int |f(v)|^{q+1} dx=0.$$ We denote $P_V(v)$ as Poho$\breve{z}$aev identity.* *Proof.* The preceding lemma ensures us that $v\in L^\infty(\mathbb{R}^N)$ and using [@CM], we obtain $v\in C^1(\mathbb{R}^N)$. Hence, this result follows from [@LS]. ◻ **Corollary 21**. *If V is constant then the Poho$\check{z}$aev identity becomes $$\begin{aligned} \label{poho} P(v)=\frac{N-p}{p}\int H(Dv)^p dx+ \frac{N}{p}\int V(x)|f(v)|^p dx-\frac{\lambda N}{q+1}\int |f(v)|^{q+1} dx=0.\end{aligned}$$* # Auxiliary results {#Aux} In this section, we prove a few auxiliary results that will be repeatedly used in the future. To begin with, we state two important lemmas whose proof can be found in (Bal et al. [@FC]) or in the references therein. **Lemma 22**. *([@KB ])[\[ET\]]{#ET label="ET"} Let $x\in \mathbb{R}^n\setminus\{0\}$ and $t\in\mathbb{R}\setminus\{0\}$ then* 1. *$x\cdot\nabla_\xi H(x)=H(x)$.* 2. *$\nabla_\xi H(tx)=\mbox{sign}(t) \nabla_\xi H(x)$.* 3. *$||\nabla_\xi H(x)||\leq C$, for some positive constant $C$.* 4. *$H$ is strictly convex.* **Lemma 23**. *([@KB ])[\[ET1\]]{#ET1 label="ET1"} Let $2\leq p<\infty$. Then for $x, y\in \mathbb{R}^N$, there exists a positive constant $C$ such that $$\begin{aligned} \label{IN1} \langle H(x)^{p-1}\nabla_\eta H(x)-H(y)^{p-1}\nabla_\eta H(y), x-y\rangle\geq C\;H(x-y)^p. \end{aligned}$$* Next, we prove the following lemma, which will assist us in drawing conclusions regarding the pointwise convergence of the gradient of a Palais-Smale sequence of $J$ in $X$. **Lemma 24**. *Let $p\geq 2$ and define $$T(t)=\begin{cases} t\; \text{ if }\; |t|\leq 1\\ \frac{t}{|t|} \;\mbox{ otherwise } \end{cases}$$ and assume that $[H1]-[H5]$ hold. Let $\{v_n\}$ be a sequence in $D^{1,p}(\mathbb{R}^N)$ such that $v_n\rightharpoonup v$ in $D^{1,p}(\mathbb{R}^N)$ and for every $\phi\in C^\infty_c(\mathbb{R}^N)$, $$\int \phi (H(Dv_n)^{p-1}\nabla H(Dv_n)-H(Dv)^{p-1}\nabla H(Dv)).\nabla T(v_n-v) dx\rightarrow 0.$$ Then up to a subsequence, the following conclusions hold:* 1. *$Dv_n\rightarrow Dv$ a.e. in $\mathbb{R}^N$.* 2. *$\lim_{n\rightarrow\infty}[||H(Dv_n)||_{L^p}^p-||H(Dv_n-Dv)||_{L^p}^p]=||H(Dv)||_{L^p}^p$.* 3. *$H(Dv_n)^{p-1}\nabla H(Dv_n)-H(Dv_n-Dv)^{p-1}\nabla H(Dv_n-Dv)\rightarrow H(Dv)^{p-1}\nabla H(Dv)$in $L^{\frac{p}{p-1}}(\mathbb{R}^N)$.* *Proof.* 1. Let us define $w_n=(H(Dv_n)^{p-1}\nabla H(Dv_n)-H(Dv)^{p-1}\nabla H(Dv)).(\nabla v_n-\nabla v)\geq 0$. Let $\phi\in C_c^\infty(\mathbb{R}^N)$ be a nonnegative function and $\Omega=\mbox{supp}(\phi)$. Without loss of generality, we can assume $v_n\rightarrow v$ in $L^p(\Omega)$ and $v_n\rightarrow v$ almost everywhere in $\mathbb{R}^N$. Using the given condition and Hölder inequality, we have for every $s\in(0, 1)$ $$\begin{aligned} \label{sa} 0\leq \int (\phi w_n)^s dx&\leq \int_{K_n} (\phi w_n)^s dx +\int_{L_n} (\phi w_n)^s dx\nonumber\\ &\leq |K_n|^{1-s}(\int_{K_n}\phi w_n dx)^s+ |L_n|^{1-s}(\int_{L_n}\phi w_n dx)^s\nonumber\\ &\leq |\Omega|^{1-s} o(1)+o(1)\end{aligned}$$ where $K_n=\{x\in \Omega: |u_n(x)-u(x)|\leq 1\}$ and $L_n=\{x\in \text{supp}(\phi): |u_n(x)-u(x)|>1\}$.\ From ([\[sa\]](#sa){reference-type="ref" reference="sa"}) one has $w_n\rightarrow 0$ a.e in $\mathbb{R}^N$. 
Hence Remark [Remark 3](#equiv){reference-type="ref" reference="equiv"} and Lemma [\[ET1\]](#ET1){reference-type="ref" reference="ET1"} ensure $Dv_n\rightarrow Dv$ almost everywhere in $\mathbb{R}^N$. 2. It follows from the Brezis-Lieb lemma [@BL]. 3. Define $G:\mathbb{R}^N\to\mathbb{R}^N$ by $$G(x)=H(x)^{p-1}\nabla H(x)$$ and let $$G_i(x)=H(x)^{p-1}H_{x_i}(x).$$ By using Lemma [\[ET\]](#ET){reference-type="ref" reference="ET"} and $(H4)$, we have $$\begin{aligned} |G(x+h)-G(x)|=|\int_{0}^{1}\frac{d}{dt}[G(x+th)] dt|&\leq C\int_{0}^{1} H(x+th)^{p-2}|h| dt\nonumber\\ &\leq C\int_{0}^{1} [H(x)^{p-2}+t^{p-2}H(h)^{p-2}]|H(h)| dt\nonumber\\ &\leq C[H(x)^{p-2}H(h)+H(h)^{p-1}]\leq \epsilon H(x)^{p-1}+C_\epsilon H(h)^{p-1}\nonumber\end{aligned}$$ where $\epsilon>0$ is any real number and $C_\epsilon>0$ is a constant. Finally, we get $$\begin{aligned} \label{Y1} |G(x+h)-G(x)|< \epsilon H(x)^{p-1}+C_\epsilon H(h)^{p-1}\end{aligned}$$ We define $$\Psi_{\epsilon,n}:=[|G(Dv_n)-G(Dv_n-Dv)-G(Dv)|-\epsilon H(Dv_n)^{p-1}]^+.$$ Clearly, $\Psi_{\epsilon,n}\rightarrow 0$ as $n\rightarrow\infty$ almost everywhere in $\mathbb{R}^N$. Using ([\[Y1\]](#Y1){reference-type="ref" reference="Y1"}) and Remark [Remark 3](#equiv){reference-type="ref" reference="equiv"}, we have $$\Psi_{\epsilon,n}\leq G(Dv)+C_\epsilon H(Dv)^{p-1}\leq C H(Dv)^{p-1}.$$ By the Dominated Convergence Theorem, we get $$\begin{aligned} \label{Y2} \lim_{n\to\infty}\int |\Psi_{\epsilon,n}|^\frac{p}{p-1}dx=0.\end{aligned}$$ From the definition of $\Psi_{\epsilon,n}$, $$\begin{aligned} |G(Dv_n)-G(Dv_n-Dv)-G(Dv)|\leq \Psi_{\epsilon,n}+\epsilon H(Dv_n)^{p-1}\end{aligned}$$ Since $\{v_n\}$ is bounded in $D^{1,p}(\mathbb{R}^N)$, using ([\[Y2\]](#Y2){reference-type="ref" reference="Y2"}) we conclude $$\limsup_{n\to \infty}\int |G(Dv_n)-G(Dv_n-Dv)-G(Dv)|^\frac{p}{p-1} dx\leq \epsilon M$$ for some $M>0$. Since $\epsilon>0$ is arbitrary, (c) follows.  ◻ **Remark 25**. 1. *The condition (H5) is not required to conclude $(a)$ and $(b).$* 2. *The above result is true for $1<p<\infty$ if $H=H_2$.* **Lemma 26**. *Suppose that either $p=2$ and $2\alpha\leq p$, or $p\geq 2\alpha+1$, and let $G$ be the function from $\mathbb{R}$ to $\mathbb{R}$ such that $G(t)=|f(t)|^{p-2}f(t)f'(t).$ If $v_n\rightharpoonup v$ in $X$ then $$\lim_{n\to \infty}\int |G(v_n)-G(v_n-v)-G(v)|^\frac{p}{p-1} dx=0.$$* *Proof.* The proof is the same as that of the previous lemma, so we omit it here. ◻ **Lemma 27**. *Let us consider the function $h:(0, \infty)\to \mathbb{R}$, $h(t)=C_1t^{N-p}+t^N(C_2-\lambda C_3)$, where $C_1, C_2,\; \mbox{and}\; C_3$ are positive constants and $N>p$. Then for large enough $\lambda>0$, $h$ has a unique critical point, which corresponds to its maximum.* *Proof.* The proof is very simple and is omitted here. ◻ **Remark 28**. *Notice that if $h$ has a critical point $t_0>0$ then $C_2-\lambda C_3<0$ and $h(t_0)=\max_{t>0}h(t).$* We introduce the Poho$\check{z}$aev manifold $$M=\{v\in X_r\setminus\{0\}: P(v)=0\}$$ where $P$ is defined in ([\[poho\]](#poho){reference-type="ref" reference="poho"}). **Lemma 29**. *Let $N\geq 2$, $(\alpha, p)\in D_N$ and $p-1<q<2\alpha p^*-1$. Then* 1. *For any $v\in X_r\setminus\{0\}$, there exists a unique $t_0=t_0(v)>0$ such that $v_{t_0}=v(\frac{.}{t_0})\in M$. Moreover, $J(v_{t_0})=\max_{t>0}J(v_t).$* 2. *$0\notin \partial M$ and $\inf_{v\in M} J(v)>0.$* 3. *For any $v\in M$, $P'(v)\neq 0.$* 4. *$M$ is a natural constraint of $J$.* *Proof.* 1. Let $v\in X_r\setminus\{0\}$. For $t>0$, we define $v_t(x)=v(\frac{x}{t})$.
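Here and below we will repeatedly use the elementary scaling identities, which follow from the change of variables $y=x/t$ and the $1$-homogeneity of $H$ (cf. Lemma [\[ET\]](#ET){reference-type="ref" reference="ET"}): $$\int H(Dv_t)^p\, dx=t^{N-p}\int H(Dv)^p\, dx \qquad\text{and}\qquad \int g(v_t)\, dx=t^{N}\int g(v)\, dx$$ for every integrand $g$ depending only on the value of $v$, such as $g(v)=|f(v)|^p$ or $g(v)=|f(v)|^{q+1}$.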
Now, consider the function $\phi:(0, \infty)\to \mathbb{R}$ defined by $\phi(t)=J(v_t)$. After simplification, we have $$\phi(t)=\frac{t^{N-p}}{p}\int H(Dv)^p dx+ \frac{t^N}{p}\int V(x)|f(v)|^p dx-\frac{\lambda t^N}{q+1}\int |f(v)|^{q+1} dx.$$ By Lemma [Lemma 27](#AL){reference-type="ref" reference="AL"}, for large $\lambda>0$, there exists $t_0>0$ such that $\phi'(t_0)=0$ and $\phi(t_0)=\max_{t>0}\phi(t).$ We also notice that $P(v_{t_0})=t_0\phi'(t_0)=0$. Hence, $v_{t_0}\in M$ and $J(v_{t_0})=\max_{t>0}J(v_t).$ 2. If $v\in M$ then $$\begin{aligned} \frac{N-p}{p}\int [H(Dv)^p+V|f(v)|^p]dx-\frac{\lambda N}{q+1}\int |f(v)|^{q+1} dx\leq P(v)=0\end{aligned}$$ which ensures $$\begin{aligned} \label{Ineq} \frac{N-p}{p}\int [H(Dv)^p+V|f(v)|^p]dx\leq \frac{\lambda N}{q+1}\int |f(v)|^{q+1} dx\end{aligned}$$ Now, let $\int [H(Dv)^p+V|f(v)|^p]dx=\beta^p$ and $\gamma>0$ (to be determined later). By using the Hölder and Sobolev inequalities, we obtain $$\begin{aligned} \label{ineq1} \int |f(v)|^{q+1} dx&\leq [\int |f(v)|^p dx]^\frac{\gamma(q+1)}{p} [\int |f^{2\alpha}(v)|^{p^*}]^{1-\frac{\gamma(q+1)}{p}} \nonumber\\ &\leq C [\int |f(v)|^p dx]^\frac{\gamma(q+1)}{p} [\int |Dv|^p dx]^{\frac{p^*}{p}(1-\frac{\gamma(q+1)}{p})}\nonumber\\ &\leq C [\int |f(v)|^p dx]^\frac{\gamma(q+1)}{p} [\int H(Dv)^p dx]^{\frac{p^*}{p}(1-\frac{\gamma(q+1)}{p})}\\ &\leq C \beta^m\nonumber\end{aligned}$$ where $\gamma=\frac{p(2\alpha p^*-q-1)}{(q+1)(2\alpha p^*-p)}$ and $m=\frac{2\alpha p p^*-pq-p+p^*(q+1-p)}{2\alpha p^*-p}>p.$ Since $m>p$, by using the above inequality and ([\[Ineq\]](#Ineq){reference-type="ref" reference="Ineq"}), we get $\beta\geq C$, for some positive constant $C$. Hence, $0\notin \partial M$. Notice that if $v\in M$ then $NJ(v)-P(v)=\int H(Dv)^p dx>0$. So, $J(v)>0$, for all $v\in M$. We shall prove that $\inf_{v\in M} J(v)>0.$ If not, then there exists a sequence $\{v_n\}$ in $M$ such that $J(v_n)\to 0.$ We can prove that the sequence $\{v_n\}$ is bounded (see the proof of Theorem [Theorem 5](#TC){reference-type="ref" reference="TC"}). Using ([\[ineq1\]](#ineq1){reference-type="ref" reference="ineq1"}) and $\lim_{n\to\infty}\int H(Dv_n)^p dx=\lim_{n\to\infty} NJ(v_n)=0$, we get $$\begin{aligned} \label{f1} \int |f(v_n)|^{q+1} dx\to 0 \text{ as } n\to \infty\end{aligned}$$ Now ([\[f1\]](#f1){reference-type="ref" reference="f1"}) and $\lim_{n\to\infty}J(v_n)=0$ imply $0\in \partial M$, which is a contradiction. 3. Suppose, if possible, that $P'(v)=0$ for some $v\in M$. Since $v\in M$ and $r=J(v)>0$, we have $$\begin{aligned} \label{P1} \frac{N-p}{p}\int H(Dv)^p dx+ \frac{N}{p}\int V|f(v)|^p dx-\frac{\lambda N}{q+1}\int |f(v)|^{q+1} dx=0 \end{aligned}$$ and $$\begin{aligned} \label{P2} \frac{1}{p}\int [H(Dv)^p+ V|f(v)|^p]dx-\frac{\lambda}{q+1}\int |f(v)|^{q+1} dx=r\end{aligned}$$ Since $P'(v)=0$, $v$ satisfies the following equation in the weak sense $$-(N-p)\Delta_{H,p}v+NV|f(v)|^{p-2}f(v)f'(v)-\lambda N|f(v)|^{q-1}f(v)f'(v)=0.$$ Hence $v$ satisfies the corresponding Poho$\check{z}$aev identity: $$\begin{aligned} \label{P3} \frac{(N-p)^2}{p}\int H(Dv)^p dx+ \frac{N^2}{p}\int V|f(v)|^p dx-\frac{\lambda N^2}{q+1}\int |f(v)|^{q+1} dx=0\end{aligned}$$ For simplicity, we write $a=\int H(Dv)^p dx$, $b=\int V|f(v)|^p dx$, $c=\int |f(v)|^{q+1} dx$. By using ([\[P1\]](#P1){reference-type="ref" reference="P1"}), ([\[P3\]](#P3){reference-type="ref" reference="P3"}) and $p<N$, we have $a=0$.
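For the reader's convenience, here is the elementary elimination behind this last step, in the notation just introduced: subtracting $N$ times ([\[P1\]](#P1){reference-type="ref" reference="P1"}) from ([\[P3\]](#P3){reference-type="ref" reference="P3"}) cancels the terms containing $b$ and $c$, and what remains is $$\frac{(N-p)^2-N(N-p)}{p}\,a=-(N-p)\,a=0,$$ which forces $a=0$ because $N>p$.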
From ([\[P1\]](#P1){reference-type="ref" reference="P1"}), we obtain $$\frac{b}{p}=\frac{\lambda c}{q+1}.$$ By using the above result, ([\[P2\]](#P2){reference-type="ref" reference="P2"}) gives us $r=0$, which is a contradiction as $r>0$.\ 4. Let $v\in M$ be such that $J(v)=\inf_{u\in M}J(u)=r>0$ (say). Our claim is that $J'(v)=0$ in $X^*$. By the Lagrange multiplier rule, there exists a $\tau\in \mathbb{R}$ such that $J'(v)=\tau P'(v)$. So, $v$ satisfies the following equation in the weak sense $$\begin{aligned} -(1-\tau(N-p))\Delta_{H,p}v+(1-\tau N)V|f(v)|^{p-2}f(v)f'(v)+\lambda(\tau N-1)|f(v)|^{q-1}f(v)f'(v)=0.\end{aligned}$$ Hence, $v$ satisfies the corresponding Poho$\check{z}$aev identity. Using the same notation as before, we have the following equations: $$\begin{aligned} \label{P4} \frac{(N-p)(1-\tau(N-p))}{p}a+\frac{(1-\tau N)N}{p}b-\frac{N\lambda(1-\tau N)}{q+1}c=0.\end{aligned}$$ $$\begin{aligned} \label{P5} \frac{a}{p}+\frac{b}{p}-\frac{\lambda c}{q+1}=r.\end{aligned}$$ and $$\begin{aligned} \label{P6} \frac{N-p}{p}a+\frac{N}{p}b-\frac{\lambda N}{q+1}c=0.\end{aligned}$$ If $\tau\neq 0$ then by using ([\[P4\]](#P4){reference-type="ref" reference="P4"}), ([\[P5\]](#P5){reference-type="ref" reference="P5"}) and ([\[P6\]](#P6){reference-type="ref" reference="P6"}), we have $r=0$, which contradicts the fact that $r>0$. Hence, $\tau=0$. Consequently, $J'(v)=0$ in $X^*$.  ◻ Let us consider a collection of auxiliary functionals on $X$, $\{J_\delta\}_{\delta \in I}$, of the form $$\begin{aligned} \label{AF1} J_\delta(v)=\frac{1}{p}\int [H(Dv)^p+ V(x)|f(v)|^p]dx-\frac{\lambda\delta}{q+1}\int |f(v)|^{q+1} dx\end{aligned}$$ and we define $$\begin{aligned} \label{AF2} J_{\infty,\delta}(v)= \frac{1}{p}\int [H(Dv)^p+ V(\infty)|f(v)|^p]dx-\frac{\lambda \delta}{q+1}\int |f(v)|^{q+1} dx.\end{aligned}$$ **Lemma 30**. *Assume the potential $V$ satisfies $(v_1)$. Then the set $$\Gamma_\delta=\{\gamma\in C([0, 1]; X): \gamma(0)=0,\; J_\delta(\gamma(1))<0\}\neq \emptyset,\;\mbox{for any}\; \delta\in I.$$* *Proof.* For every $v\in X$, $$\begin{aligned} J_\delta(v)\leq J_{\infty, \frac{1}{2}}(v)\end{aligned}$$ Now, let $v\in X\setminus \{0\}$. Then $$J_{\infty,\frac{1}{2}}(v_t)= J_{\infty,\frac{1}{2}}(v(\frac{x}{t}))= \frac{t^{N-p}}{p}\int H(Dv)^p dx+\frac{t^N}{p}\int V(\infty)|f(v)|^p dx-\frac{\lambda t^N}{2(q+1)}\int |f(v)|^{q+1} dx$$ Since $\lambda>0$ is large enough, $J_{\infty,\frac{1}{2}}(v_t)\to -\infty$ as $t\to \infty$. Hence, there exists $t_0>0$ such that $J_{\infty,\frac{1}{2}}(v_{t_0})<0$. Consequently, $J_\delta(v_{t_0})<0$, for all $\delta\in I$. Define $\gamma:[0,1]\to X$ as $$\gamma(t)=\begin{cases} 0, \text{ if } t=0\\ (v_{t_0})_t, \text{ if } 0<t\leq 1 \end{cases}$$ It is easy to prove that $\gamma$ is continuous. Hence, $\gamma$ is a desired path. ◻ **Lemma 31**. *The above collection of functionals $\{J_\delta\}_{\delta\in I}$ satisfies all the hypotheses of Theorem [Theorem 18](#MP){reference-type="ref" reference="MP"}.* *Proof.* Here, $Au=\frac{1}{p}\int [H(Du)^p+V(x)|f(u)|^p]\;dx$ and $Bu=\frac{\lambda \delta}{q+1}\int |f(u)|^{q+1}\;dx.$ Clearly, $B$ is nonnegative and $Au\to \infty$ as $||u||\to \infty$.\ Claim: $$\begin{aligned} \label{Cdef} C_\delta=\inf_{\gamma\in\Gamma_\delta} \max_{t\in[0, 1]} J_\delta(\gamma(t))>0.\end{aligned}$$ Let $u\in S(\beta):=\{u\in X: \int [H(Du)^p+V(x)|f(u)|^p]\; dx=\beta^p\}$.
By using ([\[ineq1\]](#ineq1){reference-type="ref" reference="ineq1"}), we have $$\begin{aligned} J_\delta(u)&=\frac{1}{p}\int [H(Du)^p+V(x)|f(u)|^p]\; dx-\frac{\lambda \delta}{q+1}\int |f(u)|^{q+1} \;dx\\ &\geq \frac{1}{p}\beta^p -C\beta^m \end{aligned}$$ where $m>p$. If $\beta>0$ is small enough then there exists $r>0$ such that $J_\delta(u)\geq r$ and hence $C_\delta\geq r>0$. ◻ Define the Poho$\check{z}$aev manifold $$M_{\infty,\delta}=\{v\in X_r\setminus\{0\} : P_{\infty, \delta}(v)= \frac{N-p}{p}\int H(Dv)^p\; dx+ \frac{N}{p}\int V(\infty)|f(v)|^p\; dx-\frac{\lambda \delta N}{q+1}\int |f(v)|^{q+1}\; dx=0\}.$$ We have the following lemma: **Lemma 32**. *Let $N\geq 2$, $(\alpha, p)\in D_N$, $p-1<q<2\alpha p^*-1$ and $\delta\in I$. Then for large $\lambda>0$, there exists $v_{\infty,\delta}\in M_{\infty,\delta}$ such that $$J_{\infty, \delta}(v_{\infty,\delta})=m_{\infty,\delta}:=\inf\{J_{\infty, \delta}(v) : v\neq 0, J'_{\infty, \delta}(v)=0\}.$$ Moreover, $$J'_{\infty, \delta}(v_{\infty,\delta})=0.$$* *Proof.* This lemma is a simple consequence of Theorem [Theorem 5](#TC){reference-type="ref" reference="TC"}, so we omit the proof. ◻ Now we are going to prove the following lemma. **Lemma 33**. *Suppose that the potential $V$ satisfies $(v_1)$ and $(v_2)$, $N\geq 2$, $(\alpha, p)\in D_N$ and $p-1<q<2\alpha p^*-1$. Then for every $\delta\in I$, $$C_\delta<m_{\infty, \delta}.$$* *Proof.* It is easy to see that $$J_{\infty, \delta}(v_{\infty, \delta})=\max_{t>0}J_{\infty, \delta}(v_{\infty, \delta}(\frac{.}{t}))$$ Let $\gamma$ be the curve defined in Lemma [Lemma 30](#NL){reference-type="ref" reference="NL"} for $v=v_{\infty,\delta}$. By using $(v_1)$, we have $$C_\delta\leq \max_{t\in[0,1]} J_\delta(\gamma(t))\leq \max_{t\in[0,1]} J_{\infty,\delta}(\gamma(t))\leq J_{\infty,\delta}(v_{\infty,\delta})=m_{\infty,\delta}.$$ Suppose, if possible, that $C_\delta=m_{\infty,\delta}$. Then $\max_{t\in[0,1]} J_\delta(\gamma(t))= J_{\infty,\delta}(v_{\infty,\delta})$. Since $m_{\infty,\delta}>0$ and $J_\delta \circ \gamma$ is a continuous map, there exists $t^*\in (0,1)$ such that $J_\delta(\gamma(t^*))=J_{\infty,\delta}(v_{\infty,\delta})=m_{\infty,\delta}$. Moreover, since $t=1$ is the unique maximum point of $J_{\infty,\delta} \circ \gamma$, we have $J_\delta(\gamma(t^*))>J_{\infty,\delta}(\gamma(t^*))$, which is not possible because of $(v_1)$. Hence, $C_\delta<m_{\infty,\delta}.$ ◻ Now, we present the most important lemma to prove our main result. **Lemma 34**. *(Global compactness lemma)[\[GCL\]]{#GCL label="GCL"} Suppose that $(v_1)$ and $(v_2)$ hold, $N\geq 3$, $(\alpha,p)\in D_N$, and $2\alpha p-1\leq q<2\alpha p^*-1$. For every $\delta\in I$, let $\{u_n\}$ be a bounded $(PS)_{C_\delta}$ sequence for $J_\delta$. Then there exist a subsequence of $\{u_n\}$, still denoted by $\{u_n\}$, an element $u_0\in X$, an integer $k\in \mathbb{N}\cup\{0\}$, elements $w_i\in X$, and sequences $\{x^i_n\}\subset\mathbb{R}^N$ for $1\leq i\leq k$ such that* 1. *$u_n\rightharpoonup u_0$ in $X$ with $J_\delta(u_0)\geq 0$ and $J'_\delta(u_0)=0$.* 2. *$|x^i_n|\to\infty$, $|x^i_n-x^j_n|\to\infty$ as $n\to \infty$ if $i\neq j$.* 3. *$w_i\not\equiv 0$ and $J_{\infty,\delta}'(w_i)=0$, for $1\leq i\leq k$.* 4. *$||u_n-u_0-\sum_{i=1}^{k}w_i(.-x^i_n)||\to 0$ as $n\to \infty$.* 5. *$J_\delta(u_n)\to J_\delta(u_0)+\sum_{i=1}^{k} J_{\infty,\delta}(w_i)$.* *Proof.* The proof consists of several steps: 1. Since $\{u_n\}$ is bounded, without loss of generality we can assume that 1. $u_n\rightharpoonup u_0\; \mbox{ in }\; X.$ 2.
$f(u_n)\rightarrow f(u_0) \;\mbox{ in }\; L^s_{loc}(\mathbb{R}^N)\; \mbox{ for all}\; p\leq s\leq 2\alpha p^*.$ 3. $u_n\rightharpoonup u_0 \;\mbox{ a.e. in }\; \mathbb{R}^N.$ We show that for every $\phi\in C_c^\infty(\mathbb{R}^N)$,$$\langle J'_\delta(u_0), \phi\rangle=\lim_{n\to\infty} \langle J'_\delta(u_n), \phi\rangle.$$ That is, we only have to show the following identities: 1. $\lim_{n\to\infty}\int H(Du_n)^{p-1}\nabla H(Du_n).\nabla \phi\; dx=\int H(Du_0)^{p-1}\nabla H(Du_0).\nabla \phi\; dx$ 2. $\lim_{n\to\infty}\int V(x)|f(u_n)|^{p-2}f(u_n)f'(u_n)\phi\; dx=\int V(x)|f(u_0)|^{p-2}f(u_0)f'(u_0)\phi \;dx$ 3. $\lim_{n\to\infty}\int |f(u_n)|^{q-1}f(u_n)f'(u_n)\phi\; dx=\int V(x)|f(u_0)|^{q-1}f(u_0)f'(u_0)\phi\; dx$ Let $K=\mbox{supp}(\phi)$. For every $s\in[p, 2\alpha p^*)$, there exists $h_s\in L^s(K)$ such that up to a subsequence $|f(u_n)|,\; |f(u_0)|\leq h$ and $u_n\rightarrow u_0$ a.e. in $\mathbb{R}^N$. The equalities (b) and (c) are two consequences of the Dominated Convergence Theorem. As for the first part by Egorov's theorem for every $\epsilon>0$, there exists a measurable set $E\subset K$ such that $|E|<\epsilon$ and $u_n$ converges to $u$ uniformly on $E^c\cap K$. So for large n, $|u_n(x)-u(x)|\leq 1$. Using the fact that $u_n\rightharpoonup u_0$, we have $$\begin{aligned} |\int \phi H(Du_0)^{p-1}\nabla H(Du_0).\nabla T(u_n-u_0)\; dx|&\leq |\int_{E} \phi H(Du_0)^{p-1}\nabla H(Du_0).\nabla T(u_n-u_0)\; dx|\\ &+ |\int_{E^c\cap K} \phi H(Du_0)^{p-1}\nabla H(Du_0).\nabla T(u_n-u_0)\; dx|\\ & \leq \int_{E} |\phi H(Du_0)^{p-1}\nabla H(Du_0).\nabla T(u_n-u_0)|\; dx \\ & + |\int_{E^c\cap K} \phi H(Du_0)^{p-1}\nabla H(Du_0).\nabla(u_n-u_0) \;dx|\\ & \leq M\epsilon^\frac{1}{p}+o(1)\end{aligned}$$ Hence, $$\begin{aligned} \label{I1} \int \phi H(Du_0)^{p-1}\nabla H(Du_0).\nabla T(u_n-u_0)\; dx\rightarrow 0.\end{aligned}$$ Since $\{u_n\}$ is a bounded Palais-Smale sequence so $$\langle J'_\delta(u_n), \phi.T(u_n-u_0)\rangle=o(1)$$ which implies $$\begin{aligned} \label{I2} \int H(Du_n)^{p-1}\nabla H(Du_n).\nabla (\phi.T(u_n-u_0))\; dx&= -\int V|f(u_n)|^{p-2}f(u_n)f'(u_n)\phi T(u_n-u_0)\; dx\nonumber\\ &+ \lambda\delta\int |f(u_n)|^{q-1}f(u_n)f'(u_n)\phi T(u_n-u_0)\; dx + o(1)\end{aligned}$$ Using ([\[I2\]](#I2){reference-type="ref" reference="I2"}), we have $$\begin{aligned} \label{I3} |\int \phi H(Du_n)^{p-1}\nabla H(Du_n).\nabla T(u_n-u_0) \;dx|&\leq |\int H(Du_n)^{p-1}\nabla H(Du_n).\nabla (\phi.T(u_n-u_0)) \;dx|\nonumber\\ &+ \int H(Du_n)^{p-1}(\nabla H(Du_n).\nabla\phi) T(u_n-u_0)\;dx|\nonumber\\ &\leq \int V|f(u_n)|^{p-1}|\phi T(u_n-u_0)|\; dx\nonumber\\ &+ \lambda\delta\int |f(u_n)|^q|f'(u_n)||\phi T(u_n-u_0)| \;dx\nonumber \\ &+ |\int H(Du_n)^{p-1}(\nabla H(Du_n).\nabla\phi) T(u_n-u_0)\;dx|+ o(1)\nonumber\\ &=o(1)\end{aligned}$$ From ([\[I1\]](#I1){reference-type="ref" reference="I1"})-([\[I3\]](#I3){reference-type="ref" reference="I3"}), one has $$\int \phi (H(Du_n)^{p-1}\nabla H(Du_n)-H(Du_0)^{p-1}\nabla H(Du_0)).\nabla T(u_n-u_0) dx\rightarrow 0.$$ By lemma [Lemma 24](#PBS){reference-type="ref" reference="PBS"}, we can conclude $Du_n\rightarrow Du_0$ almost everywhere in $\mathbb{R}^N$. Moreover, 1. $H(Du_n)^{p-1}\nabla H(Du_n)$ is bounded in $L^\frac{p}{p-1}(\mathbb{R}^N)$ 2. $H(Du_n)^{p-1}\nabla H(Du_n)\rightarrow H(Du_0)^{p-1}\nabla H(Du_0)$ a.e. in $\mathbb{R}^N$. Hence, $H(Du_n)^{p-1}\nabla H(Du_n)\rightharpoonup H(Du_0)^{p-1}\nabla H(Du_0)$ in $L^\frac{p}{p-1}(\mathbb{R}^N)$. Consequently, (a) follows. 2. 
Since $J_\delta'(u_0)=0$, $u_0$ satisfies the following Poho$\check{z}$aev identity $$\begin{aligned} \label{I4} \frac{N-p}{p}\int H(Du_0)^p dx + \frac{N}{p}\int V(x)|f(u_0)|^p dx+\frac{1}{p}\int \langle\nabla V(x),x\rangle|f(u_0)|^p dx - \frac{\lambda\delta N}{q+1}\int |f(u_0)|^{q+1} dx=0. \end{aligned}$$ From ([\[I4\]](#I4){reference-type="ref" reference="I4"}) and $\langle J_\delta'(u_0),\frac{f(u_0)}{f'(u_0)}\rangle=0$ we deduce $$\begin{aligned} \label{I5} \frac{N-p}{p}A + \frac{N}{p}\beta_2+\frac{1}{p}B-\frac{\lambda\delta N}{q+1}\beta_3=0.\end{aligned}$$ and $$\begin{aligned} \label{I6} J_\delta(u_0)= (2\alpha-1)(\beta_1+\beta_2)+\frac{\lambda\delta\beta_3(q+1-2\alpha p)}{2\alpha p(q+1)}.\end{aligned}$$ where $\beta_1=\int \frac{H(Du_0)^p}{1+(2\alpha)^{p-1}|f(u_0)|^{p(2\alpha-1)}} dx$, $\beta_2=\int V(x)|f(u_0)|^p dx$, $\beta_3=\int |f(u_0)|^{q+1} dx$, $A=\int H(Du_0)^p dx$, and $B=\int \langle\nabla V(x),x\rangle|f(u_0)|^p dx$. Using ([\[I5\]](#I5){reference-type="ref" reference="I5"}), ([\[I6\]](#I6){reference-type="ref" reference="I6"}) and $(v_2)$, we get $$\begin{aligned} NJ_\delta(u_0)=(2\alpha-1)(\beta_1+\beta_2)+\frac{(N-p)(q+1-2\alpha p)}{2\alpha p^2}A+\frac{q+1-2\alpha p}{2\alpha p^2}(N\beta_2+B)\geq 0.\end{aligned}$$ 3. Since $\{u_n\}$ is a Palais-Smale sequence and $u_0$ is a critical point of $J_\delta$, we have $$\begin{aligned} \label{f5} \langle J'_\delta(u_n),\frac{f(u_n)}{f'(u_n)}\rangle&=\int H(Du_n)^p(1+G(u_n)) dx + \int V|f(u_n)|^p dx-\lambda\delta\int |f(u_n)|^{q+1} dx=o(1)\end{aligned}$$ and $$\begin{aligned} \label{f6} \langle J'_\delta(u_0),\frac{f(u_0)}{f'(u_0)}\rangle&=\int H(Du_0)^p(1+G(u_0)) dx + \int V|f(u_0)|^p dx-\lambda\delta\int |f(u_0)|^{q+1} dx=0\end{aligned}$$ where $G(t)=(2\alpha-1)(2\alpha)^{p-1}|f(t)|^{p(2\alpha-1)}[1+(2\alpha)^{p-1}|f(t)|^{p(2\alpha-1)}]^{-1}$.\ Let $$\rho=\limsup_{n\to\infty} \sup_{ y\in\mathbb{R}^N}\int_{ B_1(y)} |f(u^1_n)|^p dx$$ where $u^1_n=u_n-u_0\rightharpoonup 0$ in $X$. 1. **Vanishing Case:** If $\rho=0$ then by Lemma [Lemma 16](#CP1){reference-type="ref" reference="CP1"}, we have $$\begin{aligned} f(u^1_n)\to 0\text{ in } L^{q+1}(\mathbb{R}^N),\end{aligned}$$ which implies $$\begin{aligned} \label{f7} f(u_n)\to f(u_0) \text{ in } L^{q+1}(\mathbb{R}^N)\end{aligned}$$ Using ([\[f5\]](#f5){reference-type="ref" reference="f5"}), ([\[f6\]](#f6){reference-type="ref" reference="f6"}) and ([\[f7\]](#f7){reference-type="ref" reference="f7"}), we deduce $$\begin{aligned} \lim_{n\to\infty}\int [H(Du_n)^p(1+G(u_n))+ V|f(u_n)|^p]\; dx=\int [H(Du_0)^p(1+G(u_0)) + V|f(u_0)|^p]\; dx\end{aligned}$$ Fatou's lemma ensures that up to a subsequence $$\label{ff1} \begin{split} \lim_{n\to\infty}\int H(Du_n)^p\; dx&=\int H(Du_0)^p\;dx\\ \lim_{n\to\infty} \int V|f(u_n)|^p\; dx&=\int V|f(u_0)|^p\; dx \end{split}$$ The Brezis-Lieb lemma and ([\[ff1\]](#ff1){reference-type="ref" reference="ff1"}) imply $u_n\to u_0$ in $X$. 2. **Non-Vanishing Case:** If vanishing does not occur, then there exists a sequence $\{x^1_n\}\subset \mathbb{R}^N$ such that $$\begin{aligned} \label{f8} \int_{ B_1(0)} |f(\tilde{u}^1_n)|^p\; dx\geq \frac{\rho}{2}\end{aligned}$$ where $\tilde{u}^1_n(x):=u^1_n(x+x^1_n).$ Since the sequence $\{\tilde{u}^1_n\}$ is also bounded in $X$, there exists $w_1\in X$ such that $$\begin{aligned} \begin{cases} \tilde{u}^1_n\rightharpoonup w_1 \text{ in } X\\ f(\tilde{u}^1_n)\to f(w_1) \text{ in } L^p(B_1(0)). \end{cases}\end{aligned}$$ The inequality ([\[f8\]](#f8){reference-type="ref" reference="f8"}) ensures $w_1\not\equiv 0$.
Moreover, $\{x^1_n\}$ is unbounded. Our next goal is to show $J'_{\infty,\delta}(w_1)=0$. For $\phi\in C^\infty_c(\mathbb{R}^N)$, one has $$\begin{split} \langle J'_{\infty,\delta}(w_1),\phi\rangle&=\lim_{n\to\infty}\langle J'_{\infty,\delta}(\tilde{u}^1_n),\phi\rangle\nonumber\\ &=\lim_{n\to\infty}\langle J'_\delta(u^1_n),\phi(\cdot-x^1_n)\rangle-\lim_{n\to\infty}\int (V(x+x^1_n)-V(\infty))|f(\tilde{u}^1_n)|^{p-2}f(\tilde{u}^1_n)f'(\tilde{u}^1_n)\phi\; dx \end{split}$$ By Lemma [Lemma 26](#BL){reference-type="ref" reference="BL"}, we deduce $$\langle J'_\delta(u^1_n),\phi \rangle \to 0 \mbox{ uniformly with respect to }\phi.$$ Since $u^1_n\rightharpoonup 0$, one has that $\lim_{n\to\infty}\langle J'_\delta(u^1_n),\phi(\cdot-x^1_n)\rangle=0$ and the condition $(v_1)$ implies $$\lim_{n\to\infty}\int (V(x+x^1_n)-V(\infty))|f(\tilde{u}^1_n)|^{p-2}f(\tilde{u}^1_n)f'(\tilde{u}^1_n)\phi dx=0.$$ Thus, $J'_{\infty,\delta}(w_1)=0$. Again by the Brezis-Lieb lemma, one has $$\label{f9} \begin{split} \lim_{n\to\infty}\int [H(Du^1_n)^p-H(Du_n)^p+H(Du_0)^p] dx=0.\\ \lim_{n\to\infty}\int [|f(u^1_n)|^{q+1}-|f(u_n)|^{q+1}+|f(u_0)|^{q+1}] dx=0.\\ \lim_{n\to\infty}\int V[|f(u^1_n)|^p-|f(u_n)|^p+|f(u_0)|^p] dx=0. \end{split}$$ Under the assumption $(v_1)$, we have $$\begin{aligned} \label{f10} \lim_{n\to\infty} \int (V(x)-V(\infty)) |f(u^1_n)|^p dx=0\end{aligned}$$ Using ([\[f9\]](#f9){reference-type="ref" reference="f9"}) and ([\[f10\]](#f10){reference-type="ref" reference="f10"}), one can easily conclude $$\label{f11} \begin{split} J_\delta(u^1_n)-J_\delta(u_n)+J_\delta(u_0) \to 0 \text{ as }n\to\infty.\\ J_\delta(u_n)-J_{\infty,\delta}(u^1_n)-J_\delta(u_0)\to 0 \text{ as } n\to\infty. \end{split}$$ 4. Now, we define $$\rho_1=\limsup_{n\to\infty} \sup_{ y\in\mathbb{R}^N}\int_{ B_1(y)} |f(u^2_n)|^p\; dx, \mbox{where}\; u^2_n=u^1_n-w_1(.-x^1_n)\rightharpoonup 0\;\mbox{in}\;X$$\ If $\rho_1=0$ then by a similar argument as in **Step 3**, we obtain $$||u_n-u_0-w_1(.-x^1_n)||\to 0 \text{ in } X.$$ If $\rho_1\neq 0$ then there exists a sequence $\{x^2_n\}$ such that $\tilde{u}^2_n\rightharpoonup w_2\not\equiv 0$. Moreover, $|x^1_n-x^2_n|\to \infty$ as $n\to\infty.$ Arguing as above, we obtain the following: $$\label{f12} \begin{split} ||H(Du^2_n)||^p_p-||H(Du_n)||^p_p+||H(Du_0)||^p_p+||H(Dw_1(.-x^1_n))||^p_p=o(1)\\ \int [V|f(u^2_n)|^p-V|f(u_n)|^p+V|f(u_0)|^p +V|f(w_1(.-x^1_n))|^p ]\;dx =o(1)\\ ||f(u^2_n)||_{q+1}^{q+1}-||f(u_n)||_{q+1}^{q+1}+||f(u_0)||_{q+1}^{q+1}+||f(w_1(.-x^1_n))||_{q+1}^{q+1}=o(1) \end{split}$$ which helps us to obtain $$\label{f13} \begin{split} J_\delta(u^2_n)=J_\delta(u_n)-J_\delta(u_0)-J_{\infty,\delta}(w_1)+o(1)\\ J_{\infty,\delta}(u^2_n)=J_\delta(u^1_n)-J_{\infty,\delta}(w_1)+o(1). \end{split}$$ Moreover, using the Brezis-Lieb lemma, we deduce $$\begin{aligned} \label{BL1} \langle J'_\delta(u^2_n),\phi\rangle=\langle J'_\delta(u_n),\phi\rangle-\langle J'_\delta(u_0),\phi\rangle-\langle J'_{\infty,\delta}(w_1),\phi(\cdot+x^1_n)\rangle+o(1)=o(1) \end{aligned}$$ and $$\begin{aligned} ||\tilde{u}^1_n-w_1||^p_{1,p,\mathbb{R}^N}&=||\tilde{u}^1_n||^p_{1,p,\mathbb{R}^N}-||w_1||^p_{1,p,\mathbb{R}^N}+o(1)\\ &=||u_n||^p_{1,p,\mathbb{R}^N}-||u_0||^p_{1,p,\mathbb{R}^N}-||w_1||^p_{1,p,\mathbb{R}^N}+o(1)\end{aligned}$$ that is, $$||u_n-u_0-w_1(\cdot-x^1_n)||^p_{1,p,\mathbb{R}^N}=||u_n||^p_{1,p,\mathbb{R}^N}-||u_0||^p_{1,p,\mathbb{R}^N}-||w_1(\cdot-x^1_n)||^p_{1,p,\mathbb{R}^N}+o(1).$$ Since $u^2_n\rightharpoonup 0$, one has $\langle J'_\delta(u^2_n),\phi(\cdot-x^2_n)\rangle\to 0$.
Consequently, $J'_{\infty,\delta}(w_2)=0$. Using ([\[f11\]](#f11){reference-type="ref" reference="f11"}) and ([\[f13\]](#f13){reference-type="ref" reference="f13"}), we have $$\begin{aligned} J_\delta(u_n)=J_\delta(u_0)+J_{\infty,\delta}(u^1_n)+o(1)=J_\delta(u_0)+J_{\infty,\delta}(w_1)+J_{\infty,\delta}(u^2_n)+o(1).\end{aligned}$$ Iterating this process $k$ times, we obtain $(k-1)$ sequences $\{x^j_n\}\subset \mathbb{R}^N$ for $j=1,2,\dots,k-1$ and $(k-1)$ critical points $w_1,w_2,\dots,w_{k-1}$ of $J_{\infty,\delta}$ such that $$\label{BB1} \begin{split} ||u_n-u_0-\sum_{i=1}^{k-1}w_i(.-x^i_n)||^p_{1,p,\mathbb{R}^N}&=||u_n||^p_{1,p,\mathbb{R}^N}-||u_0||^p_{1,p,\mathbb{R}^N}-\sum_{i=1}^{k-1}||w_i(.-x^i_n)||^p_{1,p,\mathbb{R}^N}+o(1)\\ J_\delta(u_n)&\to J_\delta(u_0)+\sum_{i=1}^{k-1}J_{\infty,\delta}(w_i)+J_{\infty,\delta}(u^k_n) \end{split}$$ where $u^k_n:=u_n-u_0-\sum_{i=1}^{k-1}w_i(\cdot-x^i_n)\rightharpoonup 0$ in $X$.\ 5. Since $J'_{\infty,\delta}(w_i)=0$, by property (ii) in Lemma [Lemma 29](#NA){reference-type="ref" reference="NA"} there exists a constant $C>0$ such that $||w_i||\geq C.$ Using this fact together with ([\[BB1\]](#BB1){reference-type="ref" reference="BB1"}), we can conclude that the iteration stops after some finite index $k\in \mathbb{N}$.  ◻ **Lemma 35**. *Suppose that $(v_1)$ and $(v_2)$ hold, $N\geq 3$, $(\alpha,p)\in D_N$ and $2\alpha p-1\leq q<2\alpha p^*-1$. For $\delta\in I$, let $\{v_n\}$ be a bounded $(PS)_{C_\delta}$ sequence for $J_\delta$. Then there exists $v_\delta\in X$ such that $J'_\delta(v_\delta)=0$ and $J_\delta(v_\delta)=C_\delta$, where $C_\delta$ is defined by ([\[Cdef\]](#Cdef){reference-type="ref" reference="Cdef"}).* *Proof.* By using Lemma [\[GCL\]](#GCL){reference-type="ref" reference="GCL"}, there exist $v_\delta\in X$, $k\in \mathbb{N}\cup \{0\}$ and $\{w_1,w_2,\dots,w_k\}\subset X$ such that 1. $v_n\rightharpoonup v_\delta$, $J'_\delta(v_\delta)=0$ and $J_\delta(v_\delta)\geq 0$. 2. $w_i\not\equiv 0$ and $J_{\infty,\delta}'(w_i)=0$, for $1\leq i\leq k$. 3. $J_\delta(v_n)\to J_\delta(v_\delta)+\sum_{i=1}^{k} J_{\infty,\delta}(w_i)$. Clearly, $J_{\infty,\delta}(w_i)\geq m_{\infty,\delta}$. If $k\neq 0$ then $C_\delta\geq m_{\infty,\delta}$, which contradicts the fact that $C_\delta<m_{\infty,\delta}$. Hence, $k=0$. By using Lemma [\[GCL\]](#GCL){reference-type="ref" reference="GCL"}, we have $v_n\to v_\delta$ in $X$ and $J_\delta(v_\delta)=C_\delta$. ◻ **Corollary 36**. *Suppose all the assumptions of Lemma [Lemma 35](#ML){reference-type="ref" reference="ML"} are satisfied. Then for almost every $\delta\in I$, there exists $v_\delta\in X$ such that $J_{\delta}(v_\delta)=C_{\delta}$ and $J'_{\delta}(v_\delta)=0$.* *Proof.* Theorem [Theorem 18](#MP){reference-type="ref" reference="MP"} ensures that for almost every $\delta\in I$, $J_\delta$ has a bounded $(PS)_{C_\delta}$ sequence. Hence by using the above lemma, we get the result. ◻ # Proof of Theorem [Theorem 5](#TC){reference-type="ref" reference="TC"} {#TCP} Let $l=\inf\{J(v): v\in M\}(>0)$ and $\{v_n\}\subset M$ be a minimizing sequence. Note that $$NJ(v_n)=NJ(v_n)-P(v_n)=\int H(Dv_n)^p dx$$ So, $\{H(Dv_n)\}$ is bounded in $L^p(\mathbb{R}^N)$. We will prove that $\{v_n\}$ is bounded in $X$.
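For the identity displayed above, observe that $NJ$ and $P$ differ only in the coefficient of the gradient term: the integrals $\int V|f(v_n)|^p\, dx$ and $\int |f(v_n)|^{q+1}\, dx$ enter $NJ(v_n)$ and $P(v_n)$ with the same coefficients, so that $$NJ(v_n)-P(v_n)=\Big(\frac{N}{p}-\frac{N-p}{p}\Big)\int H(Dv_n)^p\, dx=\int H(Dv_n)^p\, dx.$$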
From ([\[ineq1\]](#ineq1){reference-type="ref" reference="ineq1"}), we have $$\begin{aligned} \label{AA1} \int |(f(v_n))|^{q+1} dx&\leq C [\int |f(v_n)|^p dx]^\frac{\gamma(q+1)}{p} [\int H(Dv_n)^p dx]^{\frac{p^*}{p}(1-\frac{\gamma(q+1)}{p})}\nonumber\\ &\leq \epsilon \int |f(v_n)|^p + C_\epsilon (\int H(Dv_n)^p dx)^\frac{p^*}{p} \end{aligned}$$ where $\epsilon>0$ and $\gamma=\frac{p(2\alpha p^*-q-1)}{(q+1)(2\alpha p^*-p)}.$ Now, by using the Poho$\check{z}$aev identity and ([\[AA1\]](#AA1){reference-type="ref" reference="AA1"}), we get $$\begin{aligned} \frac{N}{p}\int V|f(v)|^p dx&=\frac{\lambda N}{q+1}\int |f(v)|^{q+1} dx -\frac{N-p}{p}\int H(Dv)^p dx.\\ &\leq \frac{\lambda N\epsilon}{q+1} \int |f(v_n)|^p + \frac{\lambda NC_\epsilon}{q+1} (\int H(Dv_n)^p dx)^\frac{p^*}{p} -\frac{N-p}{p}\int H(Dv)^p dx.\end{aligned}$$ Choose $\epsilon=\frac{V(q+1)}{2p\lambda}$ and by using the fact that $\{H(Dv_n\})$ is bounded in $L^p(\mathbb{R}^n)$, we can conclude that $\{\int V|f(v_n)|^p\}$ is bounded in $\mathbb{R}$. Finally, ([\[n2\]](#n2){reference-type="ref" reference="n2"}) ensures the boundedness of $\{v_n\}$ in $X$. By lemma [Lemma 13](#Comp){reference-type="ref" reference="Comp"}, up to a subsequence $v_n\rightharpoonup v$ in X and $f(v_n)\rightarrow f(v)$ in $L^{q+1}(\mathbb{R}^n)$. Now we will show $v\in M$ and $l=J(v)$. Now, $$\begin{aligned} P(v_n)=\frac{N-p}{p}\int H(Dv_n)^p dx+ \frac{N}{p}\int V|f(v_n)|^p dx-\frac{\lambda N}{q+1}\int |f(v_n)|^{q+1} dx=0.\end{aligned}$$ For simplicity, let 1. $a_n=\int H(Dv_n)^p\; dx$, $a=\lim_{n\to\infty}a_n$ and $\bar{a}=\int H(Dv)^p \;dx$. 2. $b_n=\int V|f(v_n)|^p\; dx$, $b=\lim_{n\to\infty}b_n$ and $\bar{b}=\int V|f(v)|^p\; dx$. 3. $c_n=\int |f(v_n)|^{q+1}\; dx$, $c=\lim_{n\to\infty}c_n$ and $\bar{c}=\int |f(v)|^{q+1} \;dx$. Clearly, $\bar{a}\leq a$, $\bar{b}\leq b$ and $c=\bar{c}$. Our claim is $a=\bar{a}$ and $b=\bar{b}$. For the time being, let us assume that the claim is true. Now, $$P(v)=\frac{N-p}{p}\int H(Dv)^p dx+ \frac{N}{p}\int V|f(v)|^p dx-\frac{\lambda N}{q+1}\int |f(v)|^{q+1} dx=\lim_{n\to \infty} P(v_n)=0$$ and $$\begin{aligned} J(v)=\lim_{n\to\infty} J(v_n)=l>0\end{aligned}$$ So, $v\in X_r\setminus\{0\}$. Hence, $v\in M$ and $J(v)=\inf_{u\in M} J(u)$. Moreover, by (iv) in lemma [Lemma 29](#NA){reference-type="ref" reference="NA"}, we have $J'(v)=0$. Without loss of generality, we can assume $v$ is non-negative and [@CM ] ensures $v\in C^1(\mathbb{R}^N)$. The function $u=f(v)\in C^1(\mathbb{R}^N)$ is a non-trivial non-negative bounded ground state solution of ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}).\ Now, we will prove our claim. If the claim is not true then $\bar{a}+\bar{b}<a+b$. Consider the following equations $$\begin{aligned} \frac{1}{p}a+\frac{1}{p}b-\frac{\lambda}{q+1}c=l\end{aligned}$$ and $$\begin{aligned} \frac{N-p}{p}a+\frac{N}{p}b-\frac{\lambda N}{q+1}c=0\end{aligned}$$ Clearly, $c\neq 0$. Define two functions $g_1,g_2:(0, \infty)\to \mathbb{R}$ by $$\begin{aligned} g_1(t)=\frac{1}{p}\bar{a}t^{N-p}+\frac{1}{p}\bar{b}t^{N}-\frac{\lambda}{q+1}\bar{c}t^{N}\end{aligned}$$ and $$\begin{aligned} g_2(t)=\frac{1}{p}at^{N-p}+\frac{1}{p}bt^{N}-\frac{\lambda}{q+1}ct^{N}\end{aligned}$$ It is clear that $g_1(t)<g_2(t)$, for all $t>0$. Also, $g_2'(1)=0$ and $g_2(1)=l$. Hence there exists $t_0>0$ such that $g_1(t_0)=\max_{t>0} g_1(t)<l$. Now, consider the function $v_{t_0}(x)=v(\frac{x}{t_0})$, which satisfies $J(v_{t_0})=g_1(t_0)<l$ and $P(v_{t_0})=t_0g_1'(t_0)=0$. 
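Both equalities follow from the same scaling computation used for $\phi(t)$ in the proof of lemma [Lemma 29](#NA){reference-type="ref" reference="NA"}: with $\bar{a}$, $\bar{b}$, $\bar{c}$ as above, $$J(v_{t})=\frac{\bar{a}}{p}t^{N-p}+\frac{\bar{b}}{p}t^{N}-\frac{\lambda \bar{c}}{q+1}t^{N}=g_1(t) \quad\text{and}\quad P(v_{t})=\frac{N-p}{p}\bar{a}\,t^{N-p}+\frac{N}{p}\bar{b}\,t^{N}-\frac{\lambda N}{q+1}\bar{c}\,t^{N}=t\,g_1'(t)$$ for every $t>0$, and it suffices to evaluate at $t=t_0$.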
Hence, $v_{t_0}\in M$ and $J(v_{t_0})<l$, which is a contradiction. # Proof of Theorem [Theorem 6](#TV){reference-type="ref" reference="TV"} {#TVP} Now we are ready to prove our main theorem; we split the proof into two steps: 1. In this step, our aim is to show the existence of a non-trivial critical point of the functional $J$. By Corollary [Corollary 36](#cor2){reference-type="ref" reference="cor2"}, we are allowed to choose a sequence $\delta_n\nearrow 1$ such that for any $n\geq 1$, there exists $v_n\in X\setminus\{0\}$ satisfying $$\label{L1} J_{\delta_n}(v_n)=\frac{1}{p}\int [H(Dv_n)^p+ V(x)|f(v_n)|^p]dx-\frac{\lambda\delta_n}{q+1}\int |f(v_n)|^{q+1} dx=C_{\delta_n}$$ and $$J'_{\delta_n}(v_n)=0.$$ By using Lemma [\[ET\]](#ET){reference-type="ref" reference="ET"} and $\langle J'_{\delta_n}(v_n),\frac{f(v_n)}{f'(v_n)}\rangle=0$, we deduce $$\begin{aligned} \label{L2} \int H(Dv_n)^p(2\alpha-F(v_n)) dx +\int V(x)|f(v_n)|^p dx-\lambda\delta_n\int |f(v_n)|^{q+1} dx=0\end{aligned}$$ where $F(v_n)=\frac{2\alpha-1}{1+(2\alpha)^{p-1}|f(v_n)|^{p(2\alpha-1)}}$. Moreover, $v_n$ satisfies the following Poho$\check{z}$aev identity, $$\begin{aligned} \label{L3} \frac{N-p}{p}\int H(Dv_n)^p dx+ \frac{N}{p}\int V(x)|f(v_n)|^p dx&+\frac{1}{p}\int \langle\nabla V(x), x\rangle |f(v_n)|^p dx\nonumber\\ &-\frac{N\lambda \delta_n}{q+1}\int |f(v_n)|^{q+1} dx=0.\end{aligned}$$ Multiplying ([\[L1\]](#L1){reference-type="ref" reference="L1"}) and ([\[L3\]](#L3){reference-type="ref" reference="L3"}) by $N$ and $r=\frac{q-2\alpha p+1}{2\alpha p}$ respectively, and then adding the resulting identities, we get $$\begin{aligned} \label{L4} [\frac{N}{p}+\frac{r(N-p)}{p}]\int H(Dv_n)^p dx+ \frac{N}{p}\int V(x)|f(v_n)|^p dx + \frac{r}{p}\int [NV(x)+\langle\nabla V(x), x\rangle]|f(v_n)|^p dx\nonumber\\=\frac{N\lambda\delta_n}{2\alpha p}\int |f(v_n)|^{q+1} dx+NC_{\delta_n} \end{aligned}$$ From ([\[L2\]](#L2){reference-type="ref" reference="L2"}) and ([\[L4\]](#L4){reference-type="ref" reference="L4"}), we deduce $$\label{L5} \begin{split} \frac{r(N-p)}{p}\int H(Dv_n)^p dx+ \frac{N(2\alpha-1)}{2\alpha p}\int V(x)|f(v_n)|^p dx + \frac{r}{p}\int [NV(x)+\langle\nabla V(x), x\rangle]|f(v_n)|^p dx\\ +\frac{N}{2\alpha p}\int H(Dv_n)^pF(v_n) dx=NC_{\delta_n} \end{split}$$ Since $V$ satisfies $(v_2)$, $(\alpha,p)\in D_N$ and $\{C_{\delta_n}\}$ is bounded, ([\[L5\]](#L5){reference-type="ref" reference="L5"}) ensures the boundedness of $$\{\int H(Dv_n)^p dx+\int V(x)|f(v_n)|^p dx\}_n.$$ Hence $\{v_n\}$ is bounded in $X$. Now, $$\label{L6} J(v_n)=J_{\delta_n}(v_n)+\frac{\lambda(\delta_n-1)}{q+1}\int |f(v_n)|^{q+1} dx =C_{\delta_n}+\frac{\lambda(\delta_n-1)}{q+1}\int |f(v_n)|^{q+1} dx$$ and $$\begin{split} \langle J'(v_n),w\rangle&= \langle J'_{\delta_n}(v_n),w\rangle+\frac{\lambda(\delta_n-1)}{q+1}\int |f(v_n)|^{q-1}f(v_n)f'(v_n)w dx\\ &=\frac{\lambda(\delta_n-1)}{q+1}\int |f(v_n)|^{q-1}f(v_n)f'(v_n)w\;dx \end{split}$$ that is, $$\begin{aligned} \label{L7} J'(v_n)=J'_{\delta_n}(v_n)+\frac{\lambda (\delta_n-1)}{q+1} g(v_n)\end{aligned}$$ where $g(v_n)=|f(v_n)|^{q-1}f(v_n)f'(v_n)\in X'$. Since $\{v_n\}$ is bounded in $X$, by the Banach-Steinhaus theorem we have that $\{g(v_n)\}$ is bounded in $X'$. Using ([\[L6\]](#L6){reference-type="ref" reference="L6"}), ([\[L7\]](#L7){reference-type="ref" reference="L7"}), and the left continuity of the map $\delta\mapsto C_\delta$, we obtain $$J(v_n)\to C_1 \text{ as } n\to \infty$$ and $$J'(v_n)\to 0 \text{ as } n\to \infty.$$ Hence $\{v_n\}$ is a bounded $(PS)_{C_1}$ sequence for $J$.
By the lemma [Lemma 35](#ML){reference-type="ref" reference="ML"}, there exists $\tilde{v}\in X$ such that $J(\tilde{v})=C_1$ and $J'(\tilde{v})=0$. 2. Let $E=\{ v\in X\setminus\{0\} : J'(v)=0\}$ and $S=\inf_{v\in E} J(v)$. Clearly, $E$ is nonempty and $0\leq S\leq C_1<m_{\infty,1}$. Let $\{v_n\}\subset E$ be a minimizing sequence. Therefore, $J(v_n)\to S$ as $n\to\infty$ and $J'(v_n)=0$, for all $n\in \mathbb{N}$. Using a similar argument as **Step I**, we can prove that $\{v_n\}$ is a bounded $(PS)_S$ sequence for $J$. Using the argument introduced in the proof of the lemma [Lemma 33](#SL){reference-type="ref" reference="SL"}, there exists $v_0\in X$ such that $v_n\to v_0$ and $J(v_0)=S$. Without loss of generality, we can assume $v_0$ is nonnegative. We want to prove $v_0\nequiv 0$.\ Define a map $T: X\to \mathbb{R}$ as $$T(v):= \int [H(Dv)^p+V(x)|f(v)|^p] dx.$$ which is continuous. If $v_n\to 0$ in $X$ then $T(v_n)\to 0$.\ Now, $$\label{ineq2} \begin{split} \langle J'(v_n),\frac{f(v_n)}{f'(v_n)}\rangle&= \int H(Dv_n)^p(1+G(v_n)) dx+ \int V(x)|f(v_n)|^p dx-\lambda\int |f(v_n)|^{q+1} dx\\ &\geq \int H(Dv_n)^p dx+ \int V(x)|f(v_n)|^p dx-\lambda\int |f(v_n)|^{q+1} dx \end{split}$$ where $G(v_n)=\frac{(2\alpha-1)(2\alpha)^{p-1}|f(v_n)|^{p(2\alpha-1)}}{1+(2\alpha)^{p-1}|f(v_n)|^{p(2\alpha-1)}}\geq 0.$ Let $$v_n\in S(\beta_n)=\{v \in X: \int [H(Dv)^p+V(x)|f(v)|^p] dx=\beta_n^p\}$$ Using ([\[ineq1\]](#ineq1){reference-type="ref" reference="ineq1"}), ([\[ineq2\]](#ineq2){reference-type="ref" reference="ineq2"}) and $J'(v_n)=0$, we deduce $$\begin{split} \beta_n^p=\int H(Dv_n)^p dx+ \int V(x)|f(v_n)|^p dx&\leq \langle J'(v_n),\frac{f(v_n)}{f'(v_n)}\rangle +\lambda\int |f(v_n)|^{q+1} dx\\ &\leq C\beta_n^m, \text{ where } m>p \end{split}$$ Hence, the sequence $\{\beta_n\}$ is bounded below by some positive constant, which contradicts the fact that $T(v_n)\to 0$. Hence, $\tilde{v}=|v_0|$ is a non-trivial non-negative ground state solution of ([\[maineq2\]](#maineq2){reference-type="ref" reference="maineq2"}). By using lemma [Lemma 19](#Bdd){reference-type="ref" reference="Bdd"} and [@CM], we have $\tilde{v}$ is bounded and in $C^1(\mathbb{R}^N)$. Thus, $u_0=f(\tilde{v})\in C^1(\mathbb{R}^N)$ is a non-trivial non-negative bounded ground state solution of ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}). # Acknowledgement We would like to thank Prof. Adimurthi for his invaluable advice and assistance. The first author was supported by MATRICS project no MTR/2020/000594.
arxiv_math
{ "id": "2309.04457", "title": "Ground state solutions for quasilinear Schrodinger type equation\n involving anisotropic p-laplacian", "authors": "Kaushik Bal and Sanjit Biswas", "categories": "math.AP", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We consider quotients of complete flag manifolds in $\mathbb{C}^n$ and $\mathbb{R}^n$ by an action of the symmetric group on $n$ objects. We compute their cohomology with field coefficients of any characteristic. Specifically, we show that these topological spaces exhibit homological stability and we provide a closed-form description of their stable cohomology rings. We also describe a simple algorithmic procedure to determine their unstable cohomology additively. address: - Università di Roma Tor Vergata - Mathematics Department, University of British Columbia author: - Lorenzo Guerra - Santanil Jana bibliography: - Unordered_flag_varieties.bib title: Cohomology of complete unordered flag manifolds --- [^1] # Introduction A *complete flag* over a field $\mathbb{F}$ is a nested sequence of $\mathbb{F}$-linear subspaces of $\mathbb{F}^n$: $$\label{eq:flag} \{0\} = V_0 \subset V_1 \subset \cdots \subset V_n = \mathbb{F}^n$$ such that $\mathrm{dim}_{\mathbb{F}} (V_j) = j$, for all $j=1,2,\dots,n$. The space of all complete flags of order $n$ over $\mathbb{F}$ forms an algebraic variety called the *full flag variety* of order $n$ over $\mathbb{F}$, denoted as $\mathop{\mathrm{Fl}}_n (\mathbb{F})$. In this paper, our focus is exclusively on cases where $\mathbb{F}$ corresponds to the fields $\mathbb{R}$ and $\mathbb{C}$. Over these fields, $\mathop{\mathrm{Fl}}_n(\mathbb{F})$ endowed with the analytic topology is a topological manifold, that we call *real and complex flag manifold*, respectively. The unitary group $U(n)$ (or the orthogonal group $O(n)$) acts transitively on $\mathop{\mathrm{Fl}}_n (\mathbb{C})$ (or $\mathop{\mathrm{Fl}}_n (\mathbb{R})$), i.e. given any flag $\{ V_j\}_{j=0}^n$ as in ([\[eq:flag\]](#eq:flag){reference-type="ref" reference="eq:flag"}), there exists a $g\in U(n)$ (or $O(n)$) which maps the flag ([\[eq:flag\]](#eq:flag){reference-type="ref" reference="eq:flag"}) to the standard flag of $\mathbb{C}^n$ (or $\mathbb{R}^n$): $$\{0\} \subset \mathbb{C}\{e_1\} \subset \cdots \subset \mathbb{C}\{e_1, \dots, e_n\} = \mathbb{C}^n,$$ where $\{ e_1, \dots, e_n\}$ is the standard basis of $\mathbb{C}^n$. Therefore, complex (or real) flags can be identified with the elements of $U(n)$ (or $O(n)$), modulo the subgroup that leaves the standard flag fixed. Therefore, the flag manifold $\mathop{\mathrm{Fl}}_n (\mathbb{C})$ (or $\mathop{\mathrm{Fl}}_n (\mathbb{R})$) can be identified with $U(n)/T(n)$ (or $O(n)/T(n)$), where $T(n)$ is the maximal torus in $U(n)$ (or $O(n)$).\ Given a flag $\{ V_j\}_{j=0}^n$ as in ([\[eq:flag\]](#eq:flag){reference-type="ref" reference="eq:flag"}), there is an orthonormal basis $\{ v_1, \dots,v_n \}$ of $\mathbb{C}^n$ (or $\mathbb{R}^n$) such that $$V_j = \mathbb{C}\{ v_1,\dots,v_j\} \text{ }(\text{or } V_j = \mathbb{R}\{ v_1,\dots,v_j\} ) .$$ An ordered orthonormal basis of $\mathbb{C}^n$ (or $\mathbb{R}^n$) describes a complete flag. Any element in $\mathop{\mathrm{Fl}}_n (\mathbb{C})$ (or $\mathop{\mathrm{Fl}}_n (\mathbb{R})$) can be described as a set of $n$ ordered mutually orthogonal lines in $\mathbb{C}^n$ (or $\mathbb{R}^n$). The symmetric group $\Sigma_n$ acts freely on $\mathop{\mathrm{Fl}}_n (\mathbb{C})$ (or $\mathop{\mathrm{Fl}}_n (\mathbb{R})$) by permuting the ordered basis elements. Note that the action of $\Sigma_n$ on $\mathop{\mathrm{Fl}}_n (\mathbb{C})$ (or $\mathop{\mathrm{Fl}}_n (\mathbb{R})$) does not permute the subspaces $V_j$, rather it permutes the basis elements $v_j$. **Definition 1**. 
The complex (or real) *unordered flag manifold* of order $n$ is defined as the quotient $\mathop{\mathrm{Fl}}_n (\mathbb{C})/\Sigma_n$ (or $\mathop{\mathrm{Fl}}_n (\mathbb{R})/\Sigma_n$) and denoted as $\overline{\mathrm{Fl}}_n (\mathbb{C})$ (or $\overline{\mathrm{Fl}}_n (\mathbb{R})$). Let $N(n) := N_{U(n)} (T(n))$ be the normalizer of the maximal torus $T(n)$ in $U(n)$. The Weyl group of the maximal torus is $N(n)/T(n) \cong \Sigma_n$. The unordered flag manifold of order $n$ can also be described as $$\label{eq:flagidentity} \overline{\mathrm{Fl}}_n (\mathbb{C})= \mathop{\mathrm{Fl}}_n (\mathbb{C})/\Sigma_n \cong U(n)/N(n) .$$ Similarly, let $\mathop{\mathrm{B}}_n := N_{O(n)} (T(n))$ be the normalizer of the maximal torus in $O(n)$. Then we can identify $O(n) / \mathop{\mathrm{B}}_n$ with $\overline{\mathrm{Fl}}_n (\mathbb{R})$. In §[4.2](#section:4.2){reference-type="ref" reference="section:4.2"} we use the alternating subgroup $\ensuremath \mathop{\mathrm{B}}_{n}^{+} := N_{SO(n)} (T(n) \cap SO(n))$ (so that $\overline{\mathrm{Fl}}_n (\mathbb{R}) \cong SO(n)/\ensuremath \mathop{\mathrm{B}}_{n}^{+}$) for our computations.\ In this paper, we delve into the study of the stable and unstable cohomology of complex unordered flag manifolds. While complete flag manifolds have been extensively studied in the fields of algebraic topology and geometry due to their significance in Lie theory, unordered complete flag manifolds have received less attention from algebraic topologists. Understanding the topology of these unordered flag manifolds is not only intriguing in its own right, but also carries important implications for emerging problems in algebraic topology and convex geometry.\ The cohomology of the unordered flag manifolds bears relevance to the computation of the cohomology of the classifying spaces for commutativity. The classifying space for commutativity $B_{com} G$ and the total space of the associated principal $G$-bundle, $E_{com} G$, were described as a homotopy colimit over a poset by Adem--Gómez [@Adem-Gomez]. This homotopy colimit diagram involves the unordered flag manifold $\overline{\mathrm{Fl}}_n (\mathbb{C})$ (or $\overline{\mathrm{Fl}}_n (\mathbb{R})$) when $G$ is $U(n)$ (or $O(n)$). The cohomology of these spaces, particularly the $p$-torsion structure, has implications for the study of spaces of homomorphisms. The study of the cohomology of unordered flag manifolds is also connected to the resolution of a conjecture proposed by Atiyah [@Atiyah2002]. However, providing a comprehensive discussion of this conjecture is beyond the scope of this paper. For further information and in-depth analysis, we refer readers to [@Atiyah2002 §4].\ The cohomology of unordered flag manifolds also has implications for the estimation of the number of Auerbach bases of finite-dimensional Banach spaces. Let $\mathbb{X}$ be an $n$-dimensional complex (or real) Banach space and let $S_{\mathbb{X}}$ denote its unit sphere. A basis $\mathcal{B}=\{v_1,\dots,v_n\}$ of $\mathbb{X}$ is called an *Auerbach basis* if $v_i \in S_{\mathbb{X}}$ and there is a basis $\{v^1,\dots,v^n\}$ of the dual space $\mathbb{X}^{*}$ satisfying $$v^i(v_j) = \delta_{ij},\quad \text{and} \quad v^i \in S_{\mathbb{X}^{*}} \text{ for } i,j = 1,2,\dots,n.$$ Weber--Wojciechowski [@Weber2016] provided an estimate of the number of Auerbach bases of a finite-dimensional Banach space using topological methods. Here, one identifies bases that differ only by permutation or multiplication by scalars of absolute value one.
In other words, two bases are said to be equivalent if they lie in the same orbit of the action of $N (n)$ (or $\mathop{\mathrm{B}}_n$) on $U(n)$ (or $O(n)$). In [@Weber2016], the estimates of the Lusternick--Schnirelmann category [@Fox1939] $\mathrm{cat} ({\overline{\mathrm{Fl}}}_n (\mathbb{R}))$ and $\mathrm{rank} (H^*({\overline{\mathrm{Fl}}}_n (\mathbb{R})))$ along with their complex counterparts were obtained using known results about the cohomology of ordered flag manifolds. The estimates for $\mathrm{rank} (H^*({\overline{\mathrm{Fl}}}_n (\mathbb{R})))$ and $\mathrm{cat}({\overline{\mathrm{Fl}}}_n (\mathbb{C}))$ has been much-improved by the computational results about the cohomology of the unordered flag varieties for lower orders in [@G-J-M].\ **Convention.** In the rest of this paper, $\mathbb{F}$ will denote a field and $p$ will denote a prime. The cohomological degree of a cohomology class $\gamma$ will be denoted by $|\gamma|$.\ **Acknowledgements.** The first author would like to thank Prof. Paolo Salvatore for the helpful conversation and acknowledges the MIUR Excellence Department Project awarded to the Department of Mathematics, University of Rome Tor Vergata, CUP E83C18000100006.\ The second author expresses their gratitude to Prof. Alejandro Adem for engaging discussions that significantly contributed to the research presented in this paper. # Recollection of the cohomology of extended powers {#sec:cohomology extended powers} Let $X$ be a Hausdorff compactly generated topological space. Define its $n$-fold extended (symmetric) power $D_n(X_+)$ as the homotopy quotient of $X^n$ with respect to the action of the symmetric group $\Sigma_n$ that permutes the cartesian factors. In particular, if $\{*\}$ is a space with one point, $D_n(\{*\}_+) = B(\Sigma_n)$, the classifying space of the symmetric group itself. We are mostly interested in the cases where $\mathbb{F}= \mathbb{F}_p$ for some prime number $p$ or $\mathbb{F}= \mathbb{Q}$. There will be some results that are valid only for these particular choices; we will stress explicitly this fact where it happens. Let $A_X = \bigoplus_{n,d} A_X^{n,d}$ be the bigraded $\mathbb{F}$-vector space defined by $A_X^{n,d}= H^d(D_n(X_+))$. We refer to the indices $n$ and $d$ as the component and (cohomological) dimension, respectively. On $A_X$ there are some structural morphisms, all natural in $X$, that provide it with a rich algebraic structure: - the usual cup product, component by component, $\cdot \colon A_X \otimes A_X \to A_X$. - a coproduct $\Delta \colon A_X \to A_X \otimes A_X$ induced by the maps $p_{n,m} \colon D_n(X_+) \times D_m(X_+) \to D_{n+m}(X_+)$ defined by passing to quotients the following map: $$\begin{aligned} &(p,(x_1,\dots,x_n)) \times (q,(x_{n+1},\dots,x_{n+m})) \in (E(\Sigma_n) \times X^n) \times (E(\Sigma_m) \times X^m) \\ &\mapsto (i_{n,m}(p,q),(x_1,\dots,x_n,x_{n+1},\dots,x_{n+m})) \in E(\Sigma_{n+m}) \times X^{n+m}, \end{aligned}$$ where $i_{n,m} \colon E(\Sigma_n) \times E(\Sigma_m) \to E(\Sigma_{n+m})$ is the $\Sigma_n \times \Sigma_m$-equivariant map induced by the standard inclusion $\Sigma_n \times \Sigma_m \hookrightarrow \Sigma_{n+m}$. - since $p_{n,m}$ is homotopy equivalent to a finite covering, there is a product $\odot \colon A_X \otimes A_X \to A_X$ corresponding to the cohomological transfer maps of $p_{n,m}$. **Definition 2**. 
A commutative component bigraded Hopf ring is a bigraded vector space $A = \bigoplus_{n,d} A^{n,d}$ with a coproduct $\Delta \colon A \to A \otimes A$, two products $\odot, \cdot \colon A \otimes A \to A$, a unit $\eta \colon \mathbb{F}\to A$, a counit $\varepsilon \colon A \to \mathbb{F}$, and an antipode $S \colon A \to A$ such that - $\Delta, \odot, \eta, \varepsilon$ are bigraded, $\cdot$ is graded with respect to $d$, and the products and the coproduct are graded commutative with respect to $d$, - $(A,\Delta,\odot,\eta,\varepsilon)$ is a Hopf algebra, - $(A, \cdot,\Delta)$ is a bialgebra, - $\forall x \in A^{n,d}, x' \in A^{n',d'}\colon x \cdot x' = 0$ if $n \not= n'$, - and the following Hopf ring distributivity axiom holds for all $x,y,z \in A$, where we use Sweedler's notation $\Delta(x) = \sum x_{(1)} \otimes x_{(2)}$: $$x \cdot (y \odot z) = \sum (-1)^{d(y)d(x_{(2)})} (x_{(1)} \cdot y) \odot (x_{(2)} \cdot z).$$ **Theorem 3** ([@Guerra-Salvatore-Sinha Corollary 2.31]). *$A_X$, with the morphisms defined above, the unit and counit that identify $A_X^{(0,0)}$ with $\mathbb{F}$, and an antipode given by multiplication by $\pm 1$ depending on the component, is a commutative bigraded component Hopf ring.* In the case where $\mathbb{F}= \mathbb{F}_p$ or $\mathbb{F}= \mathbb{Q}$, there are distinguished classes in $A_X$: - The first distinguished set of classes appears only if $\mathbb{F}= \mathbb{F}_p$ for some prime $p$. The unique map $\pi \colon X \to \{*\}$ induces an injective homomorphism $\pi^* \colon \bigoplus_{n \geq 0} H^*(\Sigma_n; \mathbb{F}_p) = A_{\{*\}} \to A_X$. By a slight abuse of notation, we identify the classes $\gamma_{k,l}$ of Giusti--Salvatore--Sinha [@Sinha:12] (if $p = 2$) or the classes $\gamma_{k,l}$, $\alpha_{i,k}$ and $\beta_{i,j,l}$ of Guerra [@Guerra:17] (if $p > 2$) with their image $\pi^*(\gamma_{k,l})$ under $\pi^*$, thus realizing them as elements of $A_X$. - The second distinguished set of classes appears for all choices of $\mathbb{F}$. For every even-dimensional cohomology class $\alpha \in H^*(X)$ and for all $n > 0$, the twisted cross product $\alpha_{[n]} = 1 \times_{\Sigma_n} \alpha^{\times n} \in H^*(E(\Sigma_n) \times_{\Sigma_n} X^n) = H^*(D_n(X_+))$ is well-defined. explicitly, the complex of singular chains of $E(\Sigma_n) \times_{\Sigma_n} X^n$ is quasi-isomorphic to $W_* \otimes_{\Sigma_n} C_*(X)^{\otimes n}$, where $W_*$ is a free resolution of $\mathbb{F}$ as a $\mathbb{F}[\Sigma_n]$-module. Let $\varepsilon \colon W_0 \to \mathbb{F}$ be the augmentation (representing the unit in $H^0(\Sigma_n)$) and $a$ be a cocycle representative of $\alpha$. Then $\alpha_{[n]}$ is represented by the $\Sigma_n$-invariant cocycle $\varepsilon \otimes a^{\otimes n} \colon W_* \otimes C_*(X)^{\otimes n} \to \mathbb{F}$, which does not depend on the chosen representative. We also define, by convention, $\alpha_{[0]} = 1_0$, the unit of the $0$-th component of $A_X$. If $\mathbb{F}= \mathbb{F}_2$, then $\varepsilon \otimes a^{\otimes n}$ is $\Sigma_n$-invariant even if $a$ is an odd-dimensional cochain; consequently, $\alpha_{[n]}$ is defined for all $\alpha \in H^*(X; \mathbb{F}_2)$, odd- and even-dimensional. Guerra--Salvatore--Sinha proved that these distinguished classes suffice to fully determine $A_X$ as a Hopf ring. We recall their results below. **Theorem 4** ([@Guerra-Salvatore-Sinha Theorem 2.37]). *Assume that $\mathbb{F}= \ensuremath{\mathbb{F}_2}$. 
As a commutative bigraded component Hopf ring, $A_X$ is generated by the classes $\gamma_{k,l}$ (for $k,l \geq 1$) and $x_{[n]}$ (for $x \in H^*(X)$ and $n \geq 1$), with the following relations:* - *the relations of [@Sinha:12 Theorem 1.2],* - *$x_{[n]}\cdot {x'}_{[n]} = (x \cdot x')_{[n]}$ for $x,x' \in H^*(X)$,* - *$\Delta(x_{[n]}) = \sum_{i=0}^n x_{[i]} \otimes x_{[n-i]}$ for $x \in H^*(X)$,* - *$x_{[m]} \odot x_{[n]} = \binom{m+n}{n} x_{[m+n]}$ for $x\in H^* (X)$,* - *$(\lambda x)_{[n]} = \lambda^n x_{[n]}$ and $(x+y)_{[n]} = \sum_{k=0}^n x_{[k]} \odot y_{[n-k]}$ for $x,y \in H^* (X)$ and $\lambda \in \mathbb{F}$.* **Theorem 5** ([@Guerra-Salvatore-Sinha Theorem 2.38], particular case). *Assume that $\mathbb{F}= \ensuremath{\mathbb{F}_p}$ with $p$ an odd prime and that $H^*(X)$ is $0$ in odd degrees. Then, as a commutative bigraded component Hopf ring, $A_X$ is generated by the classes $\gamma_{k,l}$ (for $k,l \geq 1$), $\alpha_{i,k}$ (for $1 \leq i \leq k$), $\beta_{i,j,k}$ (for $1 \leq i < j \leq k$) and $x_{[n]}$ (for $x \in H^*(X)$ and $n \geq 1$), with the following relations:* - *the relations of [@Guerra:17 Theorem 2.7],* - *$x_{[n]}\cdot {x'}_{[n]} = (x \cdot x')_{[n]}$ for $x,x' \in H^*(X)$,* - *$\Delta(x_{[n]}) = \sum_{i=0}^n x_{[i]} \otimes x_{[n-i]}$ for $x \in H^*(X)$,* - *$x_{[m]} \odot x_{[n]} = \binom{m+n}{n} x_{[m+n]}$ for $x\in H^* (X)$,* - *$(\lambda x)_{[n]} = \lambda^n x_{[n]}$ and $(x+y)_{[n]} = \sum_{k=0}^n x_{[k]} \odot y_{[n-k]}$ for $x,y \in H^* (X)$ and $\lambda \in \mathbb{F}$.* From this presentation, one can extract an additive basis for the cohomology of $D_n(X_+)$ under the hypotheses of Theorems [Theorem 4](#thm:cohomology DX mod 2){reference-type="ref" reference="thm:cohomology DX mod 2"} and [Theorem 5](#thm:cohomology DX mod p){reference-type="ref" reference="thm:cohomology DX mod p"}. **Definition 6**. Let $\mathcal{B}$ be a graded basis of $H^*(X)$ as a $\mathbb{F}$-vector space. A decorated gathered block in $A_X$ is a couple $(b,x)$, where $b$ is a gathered block in $\bigoplus_{n \geq 0} H^*(\Sigma_n)$, in the sense of Giusti--Salvatore--Sinha [@Sinha:12 Definition 6.3] (if $\mathbb{F}= \ensuremath{\mathbb{F}_2}$) or Guerra [@Guerra:17 page 964] (if $\mathbb{F}= \ensuremath{\mathbb{F}_p}$ with $p > 2$), and $x \in \mathcal{B}$ is a basis element. We call $x$ the decoration of the gathered block. A decorated Hopf monomial is a formal expression of the form $b_1 \odot \dots \odot b_r$, where $b_1,\dots,b_r$ are decorated gathered blocks, such that no two distinct elements among these $r$ gathered blocks have both the same profile (in the sense of [@Sinha:12 Definition 6.3] if $\mathbb{F}= \ensuremath{\mathbb{F}_2}$, or [@Guerra:17 Definition 3.1] if $\mathbb{F}= \ensuremath{\mathbb{F}_p}$ with $p > 2$) and the same decoration. **Corollary 7** ([@Guerra-Salvatore-Sinha Proposition 4.5]). *Assume that $\mathbb{F}= \ensuremath{\mathbb{F}_p}$ with $p \geq 2$ and $H^d (X; \mathbb{F}_p) = 0$ for $d$ odd. Let $\mathcal{B}$ be as in the previous definition. Realize a decorated gathered block $(b,x)$ as an element of $A_X$ by taking the cup product $\pi^*(b) \cdot x_{[n]}$, where $n$ is the component of $b$. Realize a decorated Hopf monomial $x = b_1 \odot \dots \odot b_r$ as an element of $A_X$ by realizing the constituent decorated gathered blocks as elements of $A_X$ and taking their transfer product.
Then the set $\mathcal{M}$ of decorated Hopf monomial is a bigraded basis for $A_X$ as a $\mathbb{F}$-vector space.* Hopf ring distributivity and the relations of Theorems [Theorem 4](#thm:cohomology DX mod 2){reference-type="ref" reference="thm:cohomology DX mod 2"} and [Theorem 5](#thm:cohomology DX mod p){reference-type="ref" reference="thm:cohomology DX mod p"} are enough to explicitly compute the products and the coproduct on decorated Hopf monomials in $A_X$. We exemplify below the general procedure in a specific case. The reader might also find the graphical algorithm described in [@Guerra-Salvatore-Sinha §4] in terms of skyline diagrams more accessible. *Example 8*. Let $\mathcal{B} = \{1, c, c^2, \dots \}$ be the graded basis of $H^* (BU(1); \mathbb{F}_2)$ as a $\mathbb{F}_2$-vector space as before. Consider the following decorated Hopf monomials $$x = (\gamma_{1,2}^2, c) \odot (\gamma_{1,1},c) \quad \text{and} \quad y = (\gamma_{2,1}, 1) \odot (\gamma_{1,1}, c).$$ We will compute the cup product $x\cdot y$. First, we compute the coproduct of $x$. Using the fact that the cup product and coproduct from a bialgebra, we have $$\begin{aligned} &\Delta ((\gamma_{1,2}^2, c)) = \Delta (\gamma_{1,2}^2 \cdot c_{[4]}) = \Delta (\gamma_{1,2})^2 \cdot \Delta (c_{[4]}) \\ & \hspace{1cm} = (\gamma_{1,2}^2 \otimes 1_0 + \gamma_{1,1}^2 \otimes \gamma_{1,1}^2 + 1_0 \otimes \gamma_{1,2}^2) \cdot (c_{[4]} \otimes 1_0 + c_{[3]} \otimes c_{[1]} + \cdots + 1_0 \otimes c_{[4]}) \\ & \hspace{1cm} = \gamma_{1,2}^2 \cdot c_{[4]} \otimes 1_0 + \gamma_{1,1}^2 \cdot c_{[2]} \otimes \gamma_{1,1}^2 \cdot c_{[2]} + 1_0 \otimes \gamma_{1,2}^2 \cdot c_{[4]} \end{aligned}$$ and $$\begin{aligned} \Delta ((\gamma_{1,1},c)) &= \Delta (\gamma_{1,1}) \cdot \Delta (c_{[2]}) \\ &= (\gamma_{1,1} \otimes 1_0 + 1_0 \otimes \gamma_{1,1}) \cdot (c_{[2]} \otimes 1_0 + c_{[1]} \otimes c_{[1]} + 1_0 \otimes c_{[2]}) \\ &= \gamma_{1,1} \cdot c_{[2]} \otimes 1_0 + 1_0 \otimes \gamma_{1,1} \cdot c_{[2]}. \end{aligned}$$ Using the fact that the transfer product and the coproduct form a bialgebra, we have $$\begin{aligned} \Delta (x) &= \Delta ((\gamma_{1,2}^2, c)) \odot \Delta ((\gamma_{1,1},c)) \\ &= x \otimes 1_0 + (\gamma_{1,2}^2 \cdot c_{[4]}) \otimes (\gamma_{1,1} \cdot c_{[2]}) +((\gamma_{1,1}^2 \cdot c_{[2]}) \odot (\gamma_{1,1} \cdot c_{[2]})) \otimes (\gamma_{1,1}^2 \cdot c_{[2]}) \\ &+ (\gamma_{1,1}^2 \cdot c_{[2]}) \otimes ((\gamma_{1,1}^2 \cdot c_{[2]}) \odot (\gamma_{1,1} \cdot c_{[2]})) + (\gamma_{1,1} \cdot c_{[2]}) \otimes (\gamma_{1,2}^2 \cdot c_{[4]}) + 1_0 \otimes x. \end{aligned}$$ Using the Hopf ring distributivity axiom, we have $$\begin{aligned} x \cdot y &= x\cdot ((\gamma_{2,1}, 1) \odot (\gamma_{1,1}, c)) \\ &= \sum_{\Delta x = \sum x' \otimes x''} (x' \cdot \gamma_{2,1} ) \odot (x'' \cdot \gamma_{1,1} \cdot c_{[2]}). \end{aligned}$$ We observe that only addends $x' \otimes x''$ in $\Delta (x)$ that have the right components such that the cup products are non-zero are $(\gamma_{1,2}^2 \cdot c_{[4]}) \otimes (\gamma_{1,1} \cdot c_{[2]})$ and $((\gamma_{1,1}^2 \cdot c_{[2]}) \odot (\gamma_{1,1} \cdot c_{[2]})) \otimes (\gamma_{1,1}^2 \cdot c_{[2]})$. 
Note that by Hopf ring distributivity $$\gamma_{2,1} \cdot ((\gamma_{1,1}^2 \cdot c_{[2]}) \odot (\gamma_{1,1} \cdot c_{[2]})) = 0 .$$ Therefore, $$\begin{aligned}
x\cdot y &= (\gamma_{2,1} \cdot \gamma_{1,2}^2 \cdot c_{[4]}) \odot (\gamma_{1,1}^2 \cdot c_{[2]}^2).
\end{aligned}$$ The corresponding skyline diagram is described in Figure [\[skylinexy\]](#skylinexy){reference-type="ref" reference="skylinexy"}.

The rational cohomology of $D(X_+)$ is well-known to experts. We slightly enhance its usual description by incorporating the Hopf ring structure.

**Proposition 9**. *Let $\mathbb{F}= \mathbb{Q}$. Let $\mathcal{B}$ be a graded basis of the cohomology of $X$. If $H^*(X; \mathbb{Q})$ is concentrated in even degrees, then $A_X$ is the commutative bigraded component Hopf ring generated by $x_{[n]}$ for $x \in \mathcal{B}$ with the following relations:*

- *$x_{[n]}\cdot {x'}_{[n]} = (x \cdot x')_{[n]}$ for $x,x' \in H^*(X)$,*
- *$\Delta(x_{[n]}) = \sum_{i=0}^n x_{[i]} \otimes x_{[n-i]}$ for $x \in H^*(X)$,*
- *$x_{[m]} \odot x_{[n]} = \binom{m+n}{n} x_{[m+n]}$ for $x\in H^* (X)$,*
- *$(\lambda x)_{[n]} = \lambda^n x_{[n]}$ and $(x+y)_{[n]} = \sum_{k=0}^n x_{[k]} \odot y_{[n-k]}$ for $x,y \in H^* (X)$ and $\lambda \in \mathbb{F}$.*

*Moreover, the set $\mathcal{M}$ consisting of elements $(x_1)_{[n_1]} \odot \dots \odot (x_k)_{[n_k]}$ up to permutation of $\odot$-factors, where $x_i \not= x_j$ for $i\not=j$ and $x_i \in \mathcal{B}$, forms a bigraded basis for $A_X$.*

*Proof.* The proof that the relations hold and that $\mathcal{M}$ is a basis for the commutative bigraded component Hopf ring with that presentation is the same as in [@Guerra-Salvatore-Sinha]. The rational cohomology of $D_n(X_+)$ is isomorphic to the subspace of invariants of ${H^*(X)}^{\otimes n}$ under the action of $\Sigma_n$. Under this isomorphism, $(x_1)_{[n_1]} \odot \dots \odot (x_k)_{[n_k]}$ corresponds to the symmetrization by means of $(n_1,\dots,n_k)$-shuffles of $$\underbrace{x_1 \otimes \dots \otimes x_1}_{n_1 \mbox{ times}} \otimes \underbrace{x_2 \otimes \dots \otimes x_2}_{n_2 \mbox{ times}} \otimes \dots \otimes \underbrace{x_k \otimes \dots \otimes x_k}_{n_k \mbox{ times}},$$ and these symmetrized tensors constitute a basis for the subspace of invariants. ◻

We conclude this section by recalling some stable calculations. In the following definitions, we fix a graded basis $\mathcal{B}$ for the cohomology of $X$ containing the unit $1_X$.

**Definition 10**. Let $\mathcal{M}$ be as in Corollary [Corollary 7](#cor: basis DX char+){reference-type="ref" reference="cor: basis DX char+"} or Proposition [Proposition 9](#prop: basis DX char0){reference-type="ref" reference="prop: basis DX char0"}. An element $x = b_1 \odot \dots \odot b_r$ is pure if none of the $b_i$s is equal to the unit class of a component $1_n \in H^*(D_n(X_+))$.

The stabilizer of $n+1$ in $\Sigma_{n+1}$ is isomorphic to $\Sigma_n$. This provides an inclusion $i_n \colon \Sigma_n \hookrightarrow \Sigma_{n+1}$. If we fix a basepoint $* \in X$, then the spaces $D_n(X_+)$ exhibit homological stability with respect to the stabilization maps $$\begin{aligned}
&j_n \colon (p,(x_1,\dots,x_n)) \in E(\Sigma_n) \times_{\Sigma_n} X^n = D_n(X_+) \\
&\mapsto (i_{n,1}(p),(x_1,\dots,x_n,*))\in E(\Sigma_{n+1}) \times_{\Sigma_{n+1}} X^{n+1} = D_{n+1}(X_+).\end{aligned}$$ The map $j_n$ depends on the basepoint, but if $X$ is path-connected, then any two choices yield homotopic maps. This homological stability property is classically well-known and follows, for instance, from the calculations of [@May-Cohen Theorem 4.1].
Thus, under this connectedness hypothesis, the stable cohomology $A_\infty(X) = \varprojlim_n H^*(D_n(X_+))$ is well-defined and coincides with $H^*(D_n(X_+))$ in low degrees. The stabilization maps in cohomology have a section on the subspace generated by pure Hopf monomials given by the transfer product with the units: $$1_1 \odot \_ \colon H^*(D_n(X_+)) \to H^*(D_{n+1}(X_+)).$$ Therefore, we can define the stabilization of a pure $x \in \mathcal{M}$ as the unique class $x \odot 1_{\infty}$ restricting to $x \odot 1_{[n]} \in H^*(D_{n+n(x)}(X_+))$ for all $n \in \mathbb{N}$.

**Corollary 11** ([@Guerra-Salvatore-Sinha Lemma 6.5]). *If $\mathop{\mathrm{char}}(\mathbb{F}) \not= 2$, assume that $H^*(X) = 0$ in odd degrees. Then, the set $\{ x \odot 1_{\infty}: x \in \mathcal{M} \mbox{ pure} \}$ is a graded basis for $A_\infty(X)$ as an $\mathbb{F}$-vector space.*

**Corollary 12** ([@Guerra-Salvatore-Sinha Theorem 6.8], particular case). *Assume that $H^*(X)$ is a polynomial algebra, with even-dimensional generators if $\mathop{\mathrm{char}}(\mathbb{F}) \not= 2$. Let $\mathcal{B}$ be the monomial basis of $H^*(X)$. If $\mathop{\mathrm{char}}(\mathbb{F}) = p \not= 0$, then $A_\infty(X)$ is the free graded commutative algebra generated by the classes $b \odot 1_{\infty}$, where $b$ is a decorated gathered block satisfying the following conditions:*

- *either the underlying (non-decorated) block of $b$ or the decoration of $b$ is not a $p$-th power,*
- *the width of $b$ is a power of $p$,*
- *$b$ is different from the unit class.*

*If, instead, $\mathop{\mathrm{char}}(\mathbb{F}) = 0$, then $A_\infty(X)$ is a polynomial algebra generated by classes $x_{[n]} \odot 1_\infty$ for all $n \geq 1$ and $x$ ranging in a minimal set of algebra generators for $H^*(X)$.*

# Review of the cohomology of alternating subgroups of the hyperoctahedral groups

The normalizer $N_{O(n)}(T(n))$ of the "torus" of diagonal matrices $T(n) = \{D \in O(n): D \mbox{ is diagonal}\}$ is isomorphic to the wreath product $\Sigma_n \wr (\mathbb{Z}/2\mathbb{Z})$. This can be realized as the isometry group of a hyperoctahedron in $\mathbb{R}^n$. This makes $N_{O(n)}(T(n))$ a reflection group that, under the classification of finite Coxeter groups (see [@Humphreys]), corresponds to the Dynkin diagram of type $\mathop{\mathrm{B}}_n$. The intersection $N_{O(n)}(T(n)) \cap SO(n)$ is identified with the alternating subgroup $\ensuremath \mathop{\mathrm{B}}_{n}^{+}$ of the Coxeter group $\mathop{\mathrm{B}}_n$, defined as the kernel of the sign homomorphism $\mathop{\mathrm{sgn}}_{\mathop{\mathrm{B}}_n} \colon \mathop{\mathrm{B}}_n \to C_2 = \{-1,1\}$ whose value on every reflection is $-1$.

## Mod $2$ cohomology of $\ensuremath \mathop{\mathrm{B}}_{n}^{+}$

The mod $2$ cohomology of $\mathop{\mathrm{B}}_n$ was first computed by Guerra in [@Guerra:21], where the author shows that $\bigoplus_n H^* (B \mathop{\mathrm{B}}_n; \mathbb{F}_2 )$ is a Hopf ring. Here we observe that this Hopf ring is isomorphic to $A_{\mathbb{P}^{\infty} (\mathbb{R})}$ and that, consequently, his description is a particular case of Theorem [Theorem 4](#thm:cohomology DX mod 2){reference-type="ref" reference="thm:cohomology DX mod 2"}. A similar structure exists on the mod $2$ cohomology of $\ensuremath \mathop{\mathrm{B}}_{n}^{+}$.
More precisely, on the direct sum $A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}= \bigoplus_{n,d \geq 0} H^d (B\ensuremath \mathop{\mathrm{B}}_{n}^{+}; \mathbb{F}_2)$ we consider structural morphisms analogous to those defined in §[2](#sec:cohomology extended powers){reference-type="ref" reference="sec:cohomology extended powers"}:

- the usual cup product, component by component, $\cdot \colon A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}\otimes A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}\to A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}$.
- a coproduct $\Delta \colon A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}\to A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}\otimes A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}$ whose components are restriction maps associated with the inclusions of groups $\ensuremath \mathop{\mathrm{B}}_{n}^{+} \times \ensuremath \mathop{\mathrm{B}}_{m}^{+} \to \ensuremath \mathop{\mathrm{B}}_{n+m}^{+}$.
- a transfer product $\odot \colon A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}\otimes A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}\to A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}$ whose components are transfer maps associated with the same inclusions.

**Definition 13** (from [@Sinha:17]). A commutative bigraded component almost-Hopf semiring is a bigraded vector space $A = \bigoplus_{n,d}A^{n,d}$ with a coproduct $\Delta$, two products $\odot,\cdot$, a unit $\eta \colon \mathbb{F}\to A$, and a counit $\varepsilon \colon A \to \mathbb{F}$ satisfying all the properties of Definition [Definition 2](#def:Hopf ring){reference-type="ref" reference="def:Hopf ring"}, except those involving the antipode and the requirement that $\Delta$ and $\odot$ form a bialgebra.

Theorem 2.4 of [@Sinha:17] implies that $A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}$, with the morphisms above, is an almost-Hopf semiring over $\mathbb{F}_2$. There is an involution $\iota \colon A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}\to A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}$ induced on the $n$-th component by conjugation by a reflection in $\mathop{\mathrm{B}}_n$. Using $\iota$, $A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}$ can be extended to a bigger almost-Hopf semiring $\widetilde{A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}}$ that coincides with $A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}$ in positive components, and such that $\widetilde{A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}}^{0,*} = \mathbb{F}_2 \{ 1^+, 1^- \}$, concentrated in dimension $0$. The unit $1_0 \in {A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}}^{0,0}$ is identified with $1^+ + 1^-$. The class $1^+$ is the unit for the transfer product and $1^- \odot x = \iota(x)$ for all $x \in A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}$. The cup product is extended on the $0$-th component by letting $1^+ \cdot 1^+ = 1^+$, $1^- \cdot 1^- = 1^-$ and $1^+ \cdot 1^- = 1^- \cdot 1^+ = 0$. The coproduct in $\widetilde{A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}}$ is defined by $$\Delta(x) = 1^+ \otimes x + 1^- \otimes x + x \otimes 1^+ + x \otimes 1^- + \overline{\Delta}(x),$$ where $\overline{\Delta}$ is the reduced coproduct in $A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}$. Proposition 4.5 of the forthcoming paper [@Guerra-Santanil] by the same two authors guarantees that the almost-Hopf semiring structure on $\widetilde{A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}}$ is well-defined. Clearly, knowledge of $\widetilde{A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}}$ implies knowledge of $A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}$.
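For instance (a small consequence of these conventions, recorded here only for the reader's orientation), since $1^+$ is the unit for $\odot$ and $1^- \odot x = \iota(x)$, the identification $1_0 = 1^+ + 1^-$ gives $$1_0 \odot x = x + \iota(x) \quad \text{for all } x \in A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}};$$ in particular, $1_0 \odot x = 0$ whenever $x$ is $\iota$-invariant, since we work over $\mathbb{F}_2$.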
Moreover, in that same paper, a full presentation of $\widetilde{A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}}$ is determined:

**Theorem 14** ([@Guerra-Santanil Theorem 12.6]). *Let $Q$ be the quotient Hopf ring of $A_{\mathbb{P}^\infty(\mathbb{R})}$ obtained by putting $\gamma_{k,l} = 0$ if $k \geq 2$. Consider the bigraded component bialgebra $$A = \bigoplus_{n \geq 0} \frac{Q^{n,*}}{(w,\gamma_{1,1} + w \odot 1_1, \dots, \gamma_{1,1} \odot 1_{n-2} + w \odot 1_{n-1},\dots)}.$$ Let $A^0$ be the Hopf ring obtained by adjoining a unit to $A$ and letting $\odot$ of elements of $A$ be $0$.*

*$A^0$ is identified as a sub-Hopf semiring of $\widetilde{A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}}$. Moreover, as an almost-Hopf semiring over $A^0$, $\widetilde{A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}}$ is generated by the two classes $1^{\pm} \in \widetilde{A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}}^{0,0}$ and a family of classes $\{ \gamma_{k,l}^+ \in \widetilde{A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}}^{l2^k,l(2^k-1)} \}_{k \geq 2, l \geq 1}$.*

*A complete set of relations is the following, where we denote $\gamma_{k,l}^+ \odot 1^-$ as $\gamma_{k,l}^-$ and where we add the superscript $0$ to emphasize when elements belong to $A$:*

1. *$1^+ + 1^- = 1^0$, the unit of the $0$-th component of $A$,*
2. *$1^+$ is the Hopf ring unit of $\widetilde{A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}}$,*
3. *$1^- \odot 1^- = 1^+$,*
4. *$\gamma_{k,l}^+ \odot \gamma_{k,m}^+ = \left( \begin{array}{c} l+m \\ l \end{array} \right) \gamma_{k,l+m}^+$ for all $k \geq 2$, $l,m \geq 1$,*
5. *$\gamma_{k,l}^+ \cdot \gamma_{k',l'}^- = 0$ unless $k = k' = 2$,*
6. *$\gamma_{2,m}^+ \cdot \gamma_{2,m}^- = \left\{ \begin{array}{ll} (\gamma_{2,m}^+)^2 + (\gamma_{2,m}^-)^2 + (\gamma_{2,m-1}^+)^2 \odot (\gamma_{1,2}^3)^0 & \mbox{if } m \mbox{ is odd} \\ (\gamma_{2,m-1}^+)^2 \odot (\gamma_{1,2}^3)^0 & \mbox{if } m \mbox{ is even} \end{array} \right.$ for all $m \geq 1$,*
7. *for all Hopf monomials $x = b_1 \odot \dots \odot b_r \in \mathcal{Q}_{\mathbb{P}^\infty(\mathbb{R}),1}$, $$\gamma_{k,l}^+ \cdot x^0 = \bigodot_{i=1}^r \left( \gamma_{k,\frac{n(b_i)}{2^k}}^+ \cdot (b_i)^0 \right) ,$$ where $n(b_i)$ is the component of $b_i$, and $\gamma_{k,\frac{n(b_i)}{2^k}}^+$ is understood to be $0$ if $\frac{n(b_i)}{2^k}$ is not an integer,*
8. *$\Delta(x \odot y) = \Delta(x) \odot \rho_+ \Delta(y)$ for all $x,y \in \widetilde{A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}}$,*
9. *$\Delta(\gamma_{k,l}^+) = \sum_{i=0}^l \left( \gamma_{k,i}^+ \otimes \gamma_{k,l-i}^+ + \gamma_{k,i}^- \otimes \gamma_{k,l-i}^- \right)$ for all $k \geq 2$, $l \geq 0$, with the convention that $\gamma_{k,0}^\pm = 1^\pm$.*

There is also an explicit additive basis for $\widetilde{A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}}$.

**Definition 15** (from [@Guerra-Santanil]). Let $\mathcal{B} = \{1, w, w^2, \dots \}$ be the monomial basis of $H^*(\mathbb{P}^\infty(\mathbb{R}); \mathbb{F}_2) = \mathbb{F}_2[w]$. Let $\mathcal{M}$ be the associated decorated Hopf monomial basis of $A_{\mathbb{P}^\infty(\mathbb{R})}$. We define $\mathcal{G}_{ann}$ as the set of those $x = b_1 \odot \dots \odot b_r \in \mathcal{M}$ whose constituent gathered blocks $b_i$ all contain at least one factor $\gamma_{k,l}$ with $k \geq 2$.
We also define $\mathcal{G}_{quot}$ as the set of those $x = b_1 \odot \dots \odot b_r \in \mathcal{M}$ satisfying one of the following two conditions:

- at least one constituent gathered block $b_i$ of $x$ is of the form $(1_n,w^k)$, and the one with $k$ maximal (necessarily unique) satisfies $n \geq 2$;
- no $b_i$ is of the form $(1_n,w^k)$ and, among the constituent gathered blocks of the form $(\gamma_{1,n}^l, w^k)$ (if there are any), the one with the couple $(k,l)$ maximal with respect to the lexicographic order $\leq_{lex}$ on $\mathbb{N} \times \mathbb{N}$ satisfies $n \geq 2$ or $l = 1$.

For a gathered block $b$ containing at least one factor $\gamma_{k,l}$ with $k \geq 2$, we can write $b = (w^{a_0})_{[m2^n]} \cdot \prod_{i=1}^n \gamma_{i,2^{n-i}m}^{a_i}$ for some $a_i \geq 0$, with $n \geq 2$ and $a_n \not= 0$. We let $$b^+ = ((w_{[m2^n]})^{a_0} \gamma_{1,m2^{n-1}}^{a_1})^0 \sum_{\substack{0 \leq k_i \leq a_i \; (2 \leq i \leq n-1) \\ 0 \leq k_n \leq \lfloor \frac{a_n}{2} \rfloor}} \prod_{i=2}^n \left( \begin{array}{c} a_i \\ k_i \end{array} \right) (\gamma_{i,m2^{n-i}}^-)^{k_i} (\gamma_{i,m2^{n-i}}^+)^{a_i-k_i}.$$ For an element $x = b_1 \odot \dots \odot b_r \in \mathcal{G}_{ann}$, we let $x^+ = \bigodot_{i=1}^r b_i^+$. We also let $x^- = \iota(x^+)$. For an element $x \in \mathcal{G}_{quot} \setminus \mathcal{G}_{ann}$, we let $x^0 = \mathop{\mathrm{res}}(x)$, where $\mathop{\mathrm{res}}\colon A_{\mathbb{P}^\infty(\mathbb{R})} \to A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}$ is the direct sum of the cohomological restriction maps $\bigoplus_n \mathop{\mathrm{res}}^{\mathop{\mathrm{B}}_n}_{\ensuremath \mathop{\mathrm{B}}_{n}^{+}}$.

**Theorem 16** ([@Guerra-Santanil Theorem 3.5]). *With reference to Definition [Definition 15](#def:basis B+){reference-type="ref" reference="def:basis B+"}, let*

- *$\mathcal{M}_0 = \{x^0 \}_{x \in \mathcal{G}_{quot}\setminus \mathcal{G}_{ann}}$,*
- *$\mathcal{M}_+ = \{x^+ \}_{x \in \mathcal{G}_{ann}}$,*
- *and $\mathcal{M}_- = \{x^- \}_{x \in \mathcal{G}_{ann}}$.*

*The set $\mathcal{M}_{charged} = \mathcal{M}_0 \sqcup \mathcal{M}_+ \sqcup \mathcal{M}_-$ is a bigraded basis of $\widetilde{A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}}$ over $\mathbb{F}_2$.*

*Moreover, $\mathop{\mathrm{res}}(x) = x^+ + x^-$ for all $x \in \mathcal{G}_{ann}$.*

Although the description above is more complicated than its analogs Theorem [Theorem 4](#thm:cohomology DX mod 2){reference-type="ref" reference="thm:cohomology DX mod 2"} and Corollary [Corollary 7](#cor: basis DX char+){reference-type="ref" reference="cor: basis DX char+"}, we can still deduce the ring structure on the components $H^*(B\ensuremath \mathop{\mathrm{B}}_{n}^{+}; \mathbb{F}_2)$. Since the component of every element of $\mathcal{G}_{ann}$ is a multiple of $4$, all classes belonging to the other components are restrictions of classes in $A_{\mathbb{P}^\infty(\mathbb{R})}$. More precisely, we have the following result.

**Corollary 17** (of Theorem [Theorem 16](#thm:basis B+){reference-type="ref" reference="thm:basis B+"}). *If $n \not\equiv 0 \mod 4$, then the restriction map is surjective and induces an isomorphism $$\frac{H^*(B\mathop{\mathrm{B}}_n; \mathbb{F}_2)}{(\gamma_{1,1} \odot 1_{n-2} + w \odot 1_{n-1})} \cong H^*(B\ensuremath \mathop{\mathrm{B}}_{n}^{+}; \mathbb{F}_2).$$*

In components that are multiples of $4$, the calculation of the cup product is exemplified below.

*Example 18*.
We determine the cup product of the two charged Hopf monomials $x = (\gamma_{2,2}^3 \odot \gamma_{3,1}^5 w_{[8]})^+$ and $y = (\gamma_{2,1}\gamma_{1,2} \odot \gamma_{2,1} \odot \gamma_{3,1})^+$ in $A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}$. The first step is to compute the coproduct of the charged gathered blocks appearing in $x$. We use the fact that $\cdot$ and $\Delta$ form a bialgebra, the fact that cup products of classes in different components are zero, and the relations (5), (6), (7) and (8) of Theorem [Theorem 14](#thm:presentation B+){reference-type="ref" reference="thm:presentation B+"} to perform the computation. Explicitly $$\begin{aligned}
&\Delta((\gamma_{2,2}^3)^+) = \Delta((\gamma_{2,2}^+)^3 + (\gamma_{2,2}^+)^2 \cdot \gamma_{2,2}^-) = \Delta(\gamma_{2,2}^+)^3 + \Delta(\gamma_{2,2}^+) \cdot \Delta(\gamma_{2,2}^-)^2 \\
&= \Big(1^+ \otimes (\gamma_{2,2}^+)^3 + 1^- \otimes (\gamma_{2,2}^+)^3 + (\gamma_{2,1}^+)^3 \otimes (\gamma_{2,1}^+)^3 + (\gamma_{2,1}^+)^2 \gamma_{2,1}^- \otimes (\gamma_{2,1}^+)^2 \gamma_{2,1}^- \\
&+ \gamma_{2,1}^+ (\gamma_{2,1}^-)^2 \otimes \gamma_{2,1}^+ (\gamma_{2,1}^-)^2 + (\gamma_{2,2}^+)^3 \otimes 1^+ + (\gamma_{2,2}^-)^3 \otimes 1^- \Big) + \Big( 1^+ \otimes (\gamma_{2,2}^+)^2 \gamma_{2,2}^- \\
&+ 1^- \otimes \gamma_{2,2}^+ (\gamma_{2,2}^-)^2 + (\gamma_{2,1}^+)^3 \otimes (\gamma_{2,1}^+)^2 \gamma_{2,1}^- + (\gamma_{2,1}^+)^2 \gamma_{2,1}^- \otimes (\gamma_{2,1}^+)^3 \\
&+ \gamma_{2,1}^+(\gamma_{2,1}^-)^2 \otimes (\gamma_{2,1}^-)^3 + (\gamma_{2,1}^-)^3 \otimes \gamma_{2,1}^+(\gamma_{2,1}^-)^2 + (\gamma_{2,2}^+)^2 \gamma_{2,2}^- \otimes 1^+ + \gamma_{2,2}^+ (\gamma_{2,2}^-)^2 \otimes 1^- \Big) \\
&= 1^+ \otimes (\gamma_{2,2}^3)^+ + 1^- \otimes (\gamma_{2,2}^3)^+ + (\gamma_{2,2}^3)^+ \otimes 1^+ + (\gamma_{2,2}^3)^- \otimes 1^- + (\gamma_{2,1}^3)^+ \otimes (\gamma_{2,1}^3)^+ \\
&+ (\gamma_{2,1}^3)^- \otimes (\gamma_{2,1}^3)^- + (\gamma_{2,1}^3)^+ \otimes \gamma_{2,1}^+(\gamma_{2,1}^-)^2 + \gamma_{2,1}^+(\gamma_{2,1}^-)^2 \otimes (\gamma_{2,1}^3)^+ \\
&+ (\gamma_{2,1}^3)^- \otimes (\gamma_{2,1}^+)^2 \gamma_{2,1}^- + (\gamma_{2,1}^+)^2 \gamma_{2,1}^- \otimes (\gamma_{2,1}^-)^3.
\end{aligned}$$ By relation (6) of Theorem [Theorem 14](#thm:presentation B+){reference-type="ref" reference="thm:presentation B+"}, $$(\gamma_{2,1}^+)^2\gamma_{2,1}^- = \left( (\gamma_{2,1}^-)^2 + (\gamma_{1,2}^3)^0 + \gamma_{1,2}^+ \gamma_{1,2}^- \right) \gamma_{2,1}^- = (\gamma_{2,1}^3)^- + (\gamma_{2,1}\gamma_{1,2}^3)^-$$ and similarly for $\gamma_{2,1}^+ (\gamma_{2,1}^-)^2$.
Substituting in the expression for $\Delta((\gamma_{2,2}^3)^+)$ we obtain that $$\begin{aligned}
&\Delta((\gamma_{2,2}^3)^+) = 1^+ \otimes (\gamma_{2,2}^3)^+ + 1^- \otimes (\gamma_{2,2}^3)^+ + (\gamma_{2,2}^3)^+ \otimes 1^+ + (\gamma_{2,2}^3)^- \otimes 1^- \\
&+ (\gamma_{2,1}^3)^+ \otimes (\gamma_{2,1}^3)^+ + (\gamma_{2,1}^3)^- \otimes (\gamma_{2,1}^3)^- + (\gamma_{2,1}^3)^+ \otimes (\gamma_{2,1}\gamma_{1,2}^3)^- \\
&+ (\gamma_{2,1}^3)^- \otimes (\gamma_{2,1}\gamma_{1,2}^3)^+ + (\gamma_{2,1}\gamma_{1,2}^3)^+ \otimes (\gamma_{1,2}^3)^- + (\gamma_{2,1}\gamma_{1,2}^3)^- \otimes (\gamma_{2,1}^3)^+.
\end{aligned}$$ The reader is encouraged to read Remark 5.6 of [@Sinha:17] for an explanation of some computational difficulties. A similar calculation shows that $$\begin{aligned}
\Delta((\gamma_{3,1}^5 w_{[8]})^+) &= 1^+ \otimes (\gamma_{3,1}^5 w_{[8]})^+ + 1^- \otimes (\gamma_{3,1}^5 w_{[8]})^- \\
&+ (\gamma_{3,1}^5 w_{[8]})^+ \otimes 1^+ + (\gamma_{3,1}^5 w_{[8]})^- \otimes 1^-.\end{aligned}$$ As a second step, we exploit relations (3), (4) and (8) of Theorem [Theorem 14](#thm:presentation B+){reference-type="ref" reference="thm:presentation B+"} to determine the coproduct of $x$: $$\begin{aligned}
&\Delta(x) = (\gamma_{3,1}^5 w_{[8]})^+ \otimes (\gamma_{2,2}^3)^+ + (\gamma_{3,1}^5 w_{[8]})^- \otimes (\gamma_{2,2}^3)^+ + (\gamma_{2,2}^3 \odot \gamma_{3,1}^5 w_{[8]})^+ \otimes 1^+ \\
&+ (\gamma_{2,2}^3 \odot \gamma_{3,1}^5 w_{[8]})^- \otimes 1^- + (\gamma_{2,1}^3 \odot \gamma_{3,1}^5 w_{[8]})^+ \otimes (\gamma_{2,1}^3 \odot \gamma_{3,1}^5 w_{[8]})^+ \\
&+ (\gamma_{2,1}^3 \odot \gamma_{3,1}^5 w_{[8]})^- \otimes (\gamma_{2,1}^3)^- + (\gamma_{2,1}^3 \odot \gamma_{3,1}^5 w_{[8]})^+ \otimes (\gamma_{2,1}\gamma_{1,2}^3)^- \\
&+ (\gamma_{2,1}^3 \odot \gamma_{3,1}^5 w_{[8]})^- \otimes (\gamma_{2,1}\gamma_{1,2}^3)^+ + (\gamma_{2,1}\gamma_{1,2}^3 \odot \gamma_{3,1}^5 w_{[8]})^+ \otimes (\gamma_{1,2}^3)^- \\
&+ (\gamma_{2,1}\gamma_{1,2}^3 \odot \gamma_{3,1}^5 w_{[8]})^- \otimes (\gamma_{2,1}^3)^+ + 1^+ \otimes (\gamma_{2,2}^3 \odot \gamma_{3,1}^5 w_{[8]})^+ + 1^- \otimes (\gamma_{2,2}^3 \odot \gamma_{3,1}^5 w_{[8]})^+ \\
&+ (\gamma_{2,2}^3)^+ \otimes (\gamma_{3,1}^5 w_{[8]})^+ + (\gamma_{2,2}^3)^- \otimes (\gamma_{3,1}^5 w_{[8]})^- + (\gamma_{2,1}^3)^+ \otimes (\gamma_{2,1}^3 \odot \gamma_{3,1}^5 w_{[8]})^+ \\
&+ (\gamma_{2,1}^3)^- \otimes (\gamma_{2,1}^3 \odot \gamma_{3,1}^5 w_{[8]})^- + (\gamma_{2,1}^3)^+ \otimes (\gamma_{2,1}\gamma_{1,2}^3 \odot \gamma_{3,1}^5 w_{[8]})^- \\
&+ (\gamma_{2,1}^3)^- \otimes (\gamma_{2,1}\gamma_{1,2}^3 \odot \gamma_{3,1}^5 w_{[8]})^+ + (\gamma_{2,1}\gamma_{1,2}^3)^+ \otimes (\gamma_{1,2}^3 \odot \gamma_{3,1}^5 w_{[8]})^- \\
&+ (\gamma_{2,1}\gamma_{1,2}^3)^- \otimes (\gamma_{2,1}^3 \odot \gamma_{3,1}^5 w_{[8]})^+.\end{aligned}$$ Thirdly, as $y = (\gamma_{2,1}\gamma_{1,2})^+ \odot (\gamma_{2,1} \odot \gamma_{3,1})^+$, we can compute $x \cdot y$ from the previous coproduct calculations and Hopf ring distributivity: $$x \cdot y = x \cdot \left((\gamma_{2,1}\gamma_{1,2})^+ \odot (\gamma_{2,1} \odot \gamma_{3,1})^+ \right) = \sum \left( x_{(1)} \cdot (\gamma_{2,1}\gamma_{1,2})^+ \right) \odot \left( x_{(2)} \cdot (\gamma_{2,1} \odot \gamma_{3,1})^+ \right).$$ One observes that the component of $(\gamma_{2,1}\gamma_{1,2})^+$ is $4$ and the component of $(\gamma_{2,1} \odot \gamma_{3,1})^+$ is $12$.
The addends in $\Delta(x)$ in the correct components are $(\gamma_{2,1}^3)^+ \otimes (\gamma_{2,1}^3 \odot \gamma_{3,1}^5 w_{[8]})^+$, $(\gamma_{2,1}^3)^- \otimes (\gamma_{2,1}^3 \odot \gamma_{3,1}^5 w_{[8]})^-$, $(\gamma_{2,1}^3)^+ \otimes (\gamma_{2,1}\gamma_{1,2}^3 \odot \gamma_{3,1}^5 w_{[8]})^-$, $(\gamma_{2,1}^3)^- \otimes (\gamma_{2,1}\gamma_{1,2}^3 \odot \gamma_{3,1}^5 w_{[8]})^+$, $(\gamma_{2,1}\gamma_{1,2}^3)^+ \otimes (\gamma_{1,2}^3 \odot \gamma_{3,1}^5 w_{[8]})^-$ and $(\gamma_{2,1}\gamma_{1,2}^3)^- \otimes (\gamma_{2,1}^3 \odot \gamma_{3,1}^5 w_{[8]})^+$. For all these terms, $x_{(2)}$ is the transfer product of two primitive elements. Therefore, the cup product $x_{(2)} \cdot (\gamma_{2,1} \odot \gamma_{3,1})^+$ is easily determined by applying Hopf ring distributivity again. $$\begin{aligned}
x \cdot y &= (\gamma_{2,1}^3)^+ (\gamma_{2,1}\gamma_{1,2})^+ \odot (\gamma_{2,1}^3)^+ \gamma_{2,1}^+ \odot (\gamma_{3,1}^2 w_{[8]})^+ + (\gamma_{2,1}^3)^- (\gamma_{2,1}\gamma_{1,2})^+ \\
&\odot (\gamma_{2,1}^3)^- \gamma_{2,1}^+ \odot (\gamma_{3,1}^2 w_{[8]})^+ + (\gamma_{2,1}^3)^+ (\gamma_{2,1}\gamma_{1,2})^+ \odot (\gamma_{2,1}\gamma_{1,2}^3)^- \gamma_{2,1}^+ \odot (\gamma_{3,1}^2 w_{[8]})^+ \\
&+ (\gamma_{2,1}^3)^- (\gamma_{2,1}\gamma_{1,2})^+ \odot (\gamma_{2,1}\gamma_{1,2}^3)^+ \gamma_{2,1}^+ \odot (\gamma_{3,1}^2 w_{[8]})^+ + (\gamma_{2,1}\gamma_{1,2}^3)^- (\gamma_{2,1}\gamma_{1,2})^+ \\
&\odot (\gamma_{2,1}^3)^+ \gamma_{2,1}^+ \odot (\gamma_{3,1}^2 w_{[8]})^+ + (\gamma_{2,1}\gamma_{1,2}^3)^+ (\gamma_{2,1}\gamma_{1,2})^+ \odot (\gamma_{2,1}^3)^- \gamma_{2,1}^+ \odot (\gamma_{3,1}^2 w_{[8]})^+.\end{aligned}$$ Finally, in order to write all the addends of $x \cdot y$ as linear combinations of charged decorated Hopf monomials, we use our cup product relations again. Explicitly, we observe that $$\begin{gathered}
\gamma_{2,1}^+ \cdot \gamma_{2,1}^+ = (\gamma_{2,1}^2)^+, \quad (\gamma_{2,1}^3)^+ \cdot \gamma_{2,1}^+ = (\gamma_{2,1}^4)^+ + (\gamma_{2,1}^4)^- + (\gamma_{2,1}^2\gamma_{1,2}^3)^+ + (\gamma_{1,2}^6)^0 \\
\tag*{and} (\gamma_{2,1}^3)^- \cdot \gamma_{2,1}^+ = (\gamma_{2,1}^4)^- + (\gamma_{2,1}^2 \gamma_{1,2}^3)^-.\end{gathered}$$ Substituting in the expression above for $x \cdot y$, many terms cancel out and we obtain that $x \cdot y = (\gamma_{2,1}^4 \gamma_{1,2} \odot \gamma_{2,1}^4 \odot \gamma_{3,1}^2 w_{[8]})^+$.

## Cohomology of $\ensuremath \mathop{\mathrm{B}}_{n}^{+}$ at odd primes

The cohomology of $\ensuremath \mathop{\mathrm{B}}_{n}^{+}$ modulo odd primes is much simpler than its mod $2$ counterpart.

**Theorem 19**. *Let $n \in \mathbb{N}$ and let $p > 2$ be a prime number. Then the projection $\ensuremath \mathop{\mathrm{B}}_{n}^{+} \to \Sigma_n$ induces an isomorphism in mod $p$ cohomology. In particular, $\bigoplus_n H^*(\ensuremath \mathop{\mathrm{B}}_{n}^{+}; \mathbb{F}_p) \cong A_{\{*\}}$ as Hopf rings.*

*Proof.* By Shapiro's lemma $H^*(\ensuremath \mathop{\mathrm{B}}_{n}^{+}; \mathbb{F}_p) \cong H^*(\mathop{\mathrm{B}}_n; \mathop{\mathrm{Ind}}_{\ensuremath \mathop{\mathrm{B}}_{n}^{+}}^{\mathop{\mathrm{B}}_n}(\mathbb{F}_p))$, where $\mathop{\mathrm{Ind}}_{\ensuremath \mathop{\mathrm{B}}_{n}^{+}}^{\mathop{\mathrm{B}}_n}(\mathbb{F}_p)$ is the induced representation of the trivial representation $\mathbb{F}_p$.
$\mathop{\mathrm{Ind}}_{\ensuremath \mathop{\mathrm{B}}_{n}^{+}}^{\mathop{\mathrm{B}}_n}(\mathbb{F}_p) \cong \mathbb{F}_p \oplus \mathop{\mathrm{sgn}}$, where $\mathop{\mathrm{sgn}}$ is the mod $p$ sign representation of $\mathop{\mathrm{B}}_n$, defined by $x \cdot 1 = (-1)^{l(x)}$, where $l(x)$ is the Coxeter length function. By the Leray spectral sequence for the fibration $B(\mathbb{Z}/2\mathbb{Z})^n \to B(\mathop{\mathrm{B}}_n) \to B(\Sigma_n)$, in the formulation of Lyndon--Hochschild--Serre, there are spectral sequences $$\begin{aligned}
E_2^{*,*} &= H^*(\Sigma_n;{H^*(\mathbb{Z}/2\mathbb{Z}; \mathbb{F}_p)}^{\otimes n}) \Rightarrow H^*(\mathop{\mathrm{B}}_n; \mathbb{F}_p) \\
\tag*{and} E_2^{*,*} &= H^*(\Sigma_n; {H^*(\mathbb{Z}/2\mathbb{Z};\mathop{\mathrm{sgn}})}^{\otimes n} \otimes \mathop{\mathrm{sgn}}) \Rightarrow H^*(\mathop{\mathrm{B}}_n; \mathop{\mathrm{sgn}}).\end{aligned}$$ In the expression above, $\Sigma_n$ acts on ${H^*(\mathbb{Z}/2\mathbb{Z};\mathbb{F}_p)}^{\otimes n}$ and ${H^*(\mathbb{Z}/2\mathbb{Z};\mathop{\mathrm{sgn}})}^{\otimes n}$ by permuting the factors. By a standard group-cohomological application of transfer maps, since the order of $\mathbb{Z}/2\mathbb{Z}$ is coprime to $p$, $H^*(\mathbb{Z}/2\mathbb{Z};\mathbb{F}_p) \cong \mathbb{F}_p$ concentrated in degree $0$ and $H^*(\mathbb{Z}/2\mathbb{Z};\mathop{\mathrm{sgn}}) = 0$. Plugging these into the Lyndon--Hochschild--Serre spectral sequences above, we see that the $E_2$-page of the first one is concentrated in the row $l = 0$, while the second is identically zero. Therefore, both spectral sequences collapse at the second page and provide the desired isomorphism. ◻

We also record the following simple remark.

**Corollary 20**. *Let $p$ be an odd prime. Then $\mathop{\mathrm{res}}_n \colon H^*(\mathop{\mathrm{B}}_n; \mathbb{F}_p) \to H^*(\ensuremath \mathop{\mathrm{B}}_{n}^{+}; \mathbb{F}_p)$ is an isomorphism.*

*Proof.* The same Lyndon--Hochschild--Serre spectral sequence used in the proof of Theorem [Theorem 19](#thm:cohomology alternating group mod p){reference-type="ref" reference="thm:cohomology alternating group mod p"} shows that $H^*(\mathop{\mathrm{B}}_n; \mathbb{F}_p) \cong H^*(\Sigma_n; \mathbb{F}_p)$ via the projection map $\mathop{\mathrm{B}}_n \to \Sigma_n$. Combining this with Theorem [Theorem 19](#thm:cohomology alternating group mod p){reference-type="ref" reference="thm:cohomology alternating group mod p"}, both sides are identified with $H^*(\Sigma_n; \mathbb{F}_p)$ compatibly with restriction, and the claim follows. ◻

# Stable cohomology of complete unordered flag varieties

In this section, we study the stable behavior of the cohomology of the unordered flag manifolds $\{ \overline{\mathrm{Fl}}_n (\mathbb{C}) \}_n$ and $\{\overline{\mathrm{Fl}}_n (\mathbb{R}) \}_n$. First, we recall the following classical theorem about the Borel spectral sequence, i.e. the Serre spectral sequence of the fiber bundle $X \rightarrow X_{hG} \rightarrow BG$ associated with the Borel construction (or homotopy quotient) $X_{hG}$. From now on, we will use this theorem in many of our proofs and computations without explicitly referring to it every time.

**Theorem 21** ([@mcc]). *Let $X \rightarrow X_{hG} \rightarrow BG$ be the Borel fibration associated to a $G$-space $X$. There is a first quadrant spectral sequence of algebras $\{E_r^{*,*},d^r\}$, converging to $H^*(X_{hG})$ as an algebra, with $$E_2^{k,l}\cong H^k(BG;H^l(X)),$$ where $H^l(X)$ is understood as a $G$-representation via the monodromy action of $\pi_1(BG)$ on the fiber. If $G$ acts trivially on the cohomology $H^*(X)$ and field coefficients are used, then by the Künneth theorem $$E_2^{k,l} \cong H^k(BG) \otimes H^l(X).$$*

If $G$ acts freely on $X$, then $X_{hG}$ is known to be homotopy equivalent to the strict quotient $X/G$.
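As a minimal illustration of this setup (a standard example, included here only for orientation), consider the free antipodal action of $\mathbb{Z}/2\mathbb{Z}$ on the contractible sphere $S^{\infty}$. The Borel construction is $S^{\infty}_{h\mathbb{Z}/2\mathbb{Z}} \simeq S^{\infty}/(\mathbb{Z}/2\mathbb{Z}) = \mathbb{P}^{\infty}(\mathbb{R}) = B(\mathbb{Z}/2\mathbb{Z})$, the Borel fibration reads $S^{\infty} \rightarrow \mathbb{P}^{\infty}(\mathbb{R}) \rightarrow B(\mathbb{Z}/2\mathbb{Z})$, and the spectral sequence above is concentrated in the row $l = 0$, recovering $H^*(\mathbb{P}^{\infty}(\mathbb{R}); \mathbb{F}_2) \cong H^*(B(\mathbb{Z}/2\mathbb{Z}); \mathbb{F}_2) = \mathbb{F}_2[w]$. The proof of the next proposition uses the free-action statement in exactly the same way, with the contractible space $\mathrm{Conf}_n(\mathbb{R}^{\infty}) \times (S^{\infty})^n$ in place of $S^{\infty}$.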
**Proposition 22**. *For all $n\ge 1$, we have isomorphisms $BN(n) \cong D_n (BU(1)_+)$ and $B\mathop{\mathrm{B}}(n) \cong D_n (BO(1)_+)$.*

*Proof.* We provide details for the complex case. The real case is similar. It is enough to check that $D_n (BU(1)_+)$ is the base space of a principal $N(n)$-bundle with contractible total space. Note that $$D_n (BU(1)_+) = E\Sigma_n \times_{\Sigma_n} BU(1)^n \cong \mathrm{Conf}_n (\mathbb{R}^{\infty}) \times_{\Sigma_n} (\mathbb{P}^{\infty} (\mathbb{C}))^n$$ and consider the contractible space $E= \mathrm{Conf}_n (\mathbb{R}^{\infty}) \times (S^{\infty})^n$. There is an action of $N(n) = \Sigma_n \ltimes U(1)^n$ on $E$ defined as follows:

- $\Sigma_n$ acts diagonally on $\mathrm{Conf}_n (\mathbb{R}^{\infty}) \times (S^{\infty})^n$.
- $U(1)^n$ acts on the factor $(S^{\infty})^n$ coordinate by coordinate, by identifying $U(1) \cong S^1$.

This gives a well-defined action and a principal $N(n)$-bundle $E\rightarrow D_n (BU(1)_+)$. ◻

There are inclusions $U(n) \hookrightarrow U(n+1)$ and $O(n) \hookrightarrow O(n+1)$ given by $$M \longmapsto \begin{bmatrix} M & 0 \\ 0 & 1\end{bmatrix}$$ which restrict to inclusions $N(n) \hookrightarrow N(n+1)$ and $\mathop{\mathrm{B}}(n) \hookrightarrow \mathop{\mathrm{B}}(n+1)$. Through the isomorphism of Proposition [Proposition 22](#prop:DnBU(1)-BN(n)-iso){reference-type="ref" reference="prop:DnBU(1)-BN(n)-iso"}, these inclusions correspond to the stabilization maps in $D_n (BU(1)_+)$ and $D_n (BO(1)_+)$.

**Corollary 23**. *The sequences of spaces $\{ BN(n)\}_n$ and $\{ B\mathop{\mathrm{B}}(n) \}_{n}$ exhibit homological stability, and if $N(\infty) = \varinjlim N(n)$ and $\mathop{\mathrm{B}}(\infty) = \varinjlim \mathop{\mathrm{B}}(n)$, then $H^* (BN(\infty); \mathbb{F}_p) \cong A_{\infty} (BU(1))$ and $H^* (B\mathop{\mathrm{B}}(\infty); \mathbb{F}_p) \cong A_{\infty} (BO(1))$.*

The above morphisms also determine quotient maps $$\begin{aligned}
i_n^{\mathbb{C}} \colon \frac{U(n)}{N(n)} \longrightarrow \frac{U(n+1)}{N(n+1)} \\
i_n^{\mathbb{R}} \colon \frac{O(n)}{\mathop{\mathrm{B}}(n)} \longrightarrow \frac{O(n+1)}{\mathop{\mathrm{B}}(n+1)} \end{aligned}$$ and in cohomology $$\begin{aligned}
(i_n^{\mathbb{C}})^* \colon H^* \Big ( \frac{U(n+1)}{N(n+1)} ; \mathbb{F}_p \Big ) \longrightarrow H^* \Big ( \frac{U(n)}{N(n)} ; \mathbb{F}_p \Big ) \\
(i_n^{\mathbb{R}})^* \colon H^* \Big ( \frac{O(n+1)}{\mathop{\mathrm{B}}(n+1)}; \mathbb{F}_p \Big ) \longrightarrow H^* \Big ( \frac{O(n)}{\mathop{\mathrm{B}}(n)} ; \mathbb{F}_p \Big ).\end{aligned}$$

**Proposition 24**. *The sequences of spaces $\{ U(n)/N(n)\}_n$ and $\{ O(n)/\mathop{\mathrm{B}}(n)\}_n$ exhibit homological stability, *i.e.* for all $d\ge 0$ there exists $n_0(d)$ such that for all $n\ge n_0(d)$ $$\begin{aligned}
(i_n^{\mathbb{C}})^* \colon H^d \Big ( \frac{U(n+1)}{N(n+1)}; \mathbb{F}_p \Big ) &\longrightarrow H^d \Big ( \frac{U(n)}{N(n)}; \mathbb{F}_p \Big ) \\
(i_n^{\mathbb{R}})^* \colon H^d \Big ( \frac{O(n+1)}{\mathop{\mathrm{B}}(n+1)}; \mathbb{F}_p \Big ) &\longrightarrow H^d \Big ( \frac{O(n)}{\mathop{\mathrm{B}}(n)}; \mathbb{F}_p \Big ) \end{aligned}$$ are isomorphisms.*

*Proof.* From the previous corollary, $\{ BN(n) \}_n$ exhibits homological stability. Also, it is classically known that $\{ U(n)\}_n$ exhibits homological stability. Consider the Serre spectral sequence associated with the fiber sequence $$\label{eq:fib-Fln} U(n) \longrightarrow \frac{U(n)}{N(n)} \longrightarrow BN(n).$$ Since homological stability holds for both the fiber and the base, it holds at the level of the $E_2$-page.
Since the spectral sequence converges, it must hold at the $E_{\infty}$-page. The proof for $\{ O(n)/\mathop{\mathrm{B}}(n) \}$ is similar. ◻

Proposition [Proposition 24](#prop:hom-stab-UN){reference-type="ref" reference="prop:hom-stab-UN"} allows us to discuss the stable cohomology of $U(n)/N(n)$ and $O(n)/\mathop{\mathrm{B}}(n)$. Let us denote the limits of $\overline{\mathrm{Fl}}_n (\mathbb{C})$ and $\overline{\mathrm{Fl}}_n (\mathbb{R})$ by $$\overline{\mathrm{Fl}}_{\infty} (\mathbb{C}) = \varinjlim_n \overline{\mathrm{Fl}}_n (\mathbb{C}) \text{ and } \overline{\mathrm{Fl}}_{\infty} (\mathbb{R}) = \varinjlim_n \overline{\mathrm{Fl}}_n (\mathbb{R})$$ respectively. Then we can write $$\begin{aligned}
H^* ( \overline{\mathrm{Fl}}_{\infty} (\mathbb{C}); \mathbb{F}_p ) &= \varprojlim_n H^* ( \overline{\mathrm{Fl}}_n (\mathbb{C}) ; \mathbb{F}_p ) \\
H^* ( \overline{\mathrm{Fl}}_{\infty} (\mathbb{R}); \mathbb{F}_p ) &= \varprojlim_n H^* ( \overline{\mathrm{Fl}}_n (\mathbb{R}) ; \mathbb{F}_p ).\end{aligned}$$

Next, we define a rank function on the basis of decorated Hopf monomials. This rank function determines a filtration on $A_X$, which will play an important role in our analysis of the pullback of the Chern classes under $f_n$.

**Definition 25**. Let $X$ be a topological space whose cohomology is concentrated in even degrees if $p > 2$. The *rank of a decorated Hopf monomial* in $A_{X}$, denoted $\mathop{\mathrm{rk}}$, is defined via the following rules:

- The rank of a decorated gathered block $(b,a)$ is $0$ if $b$ contains at least one of the classes $\gamma_{k,l}$ (respectively, for $p$ odd, one of the classes $\alpha_{i,k}$, $\beta_{i,j,k}$, or $\gamma_{k,l}$).
- The rank of a decorated gathered block of the form $(1_n, a)$ is $n |a|$.
- $\mathop{\mathrm{rk}}(b_1 \odot \cdots \odot b_r) = \sum_{i=1}^r \mathop{\mathrm{rk}}(b_i)$.

We also define $\mathop{\mathrm{rk}}(x\otimes y) := \mathop{\mathrm{rk}}(x) + \mathop{\mathrm{rk}}(y)$ in $A_{X} \otimes A_{X}$.

**Definition 26**. We define the *rank filtration* $\mathcal{F}_{*}$ of $A_{X}$ by letting $\mathcal{F}_n$ be the linear span of decorated Hopf monomials $x$ with $\mathop{\mathrm{rk}}(x) \le n$. Moreover, we define a rank filtration on $A_{\infty} (X)$ by letting $\mathcal{F}_n$ be the linear span of stabilized Hopf monomials $x\odot 1_{\infty}$ with $\mathop{\mathrm{rk}}(x)\leq n$. With a slight abuse of notation, we also denote it by $\mathcal{F}_{*}$.

The rank filtration is exhaustive, increasing, and bounded from below. Moreover, due to the Hopf ring distributivity law, it is multiplicative with respect to the cup product, *i.e.* $\mathcal{F}_m \cdot \mathcal{F}_n \subseteq \mathcal{F}_{m+n}$.

## The complex case

The inclusions $N(n) \hookrightarrow U(n)$ induce maps $f_n \colon BN(n) \rightarrow BU(n)$. We denote the limiting map by $f\colon BN(\infty) \rightarrow BU(\infty)$. So, $$f^* \colon H^* (BU(\infty); \mathbb{F}_p) \cong \mathbb{F}_p [c_1, c_2, \dots] \longrightarrow H^* (BN(\infty); \mathbb{F}_p) \cong A_{\infty} (BU(1)) .$$ Let us recall some classical results from [@Milnor-Stasheff]. The vector bundle corresponding to $\varphi \in [X, BU(n)]$ is given by the pullback $\varphi^* (\eta_n)$ of the universal vector bundle $\eta_n$ over $BU(n)$. Taking $X = BU(k) \times BU(n-k)$, the bundle $\eta_k \oplus \eta_{n-k}$ is classified by the homotopy class of a map $\delta_{k,n-k} \colon BU(k) \times BU(n-k) \rightarrow BU(n)$, induced by the direct sum of matrices.
The induced maps in cohomology $$\Delta_{k,n-k} \colon H^* (BU(n)) \longrightarrow H^* (BU(k)) \otimes H^* (BU(n-k))$$ are coassociative and cocommutative up to isomorphism, and their direct sum provides a coassociative and cocommutative coproduct $$\Delta \colon \bigoplus_{n\ge 0} H^* (BU(n)) \longrightarrow \Big ( \bigoplus_{n\ge 0} H^* (BU(n)) \Big ) \otimes \Big ( \bigoplus_{n\ge 0} H^* (BU(n)) \Big ).$$ We can regard the morphism $\Delta$ as a coproduct and note that it is compatible with the cup product $\cdot$, as it is induced by homotopy classes of topological maps. This makes $\bigoplus_{n\ge 0} H^* (BU(n))$ a bialgebra. The stabilization maps of $BU(n)$ preserve $\Delta$, hence the limit $H^*(BU(\infty))$ is also a bialgebra. In fact, $\bigoplus_n f_n^* \colon \bigoplus_{n} H^*(BU(n)) \to A_{BU(1)}$ and $f^* \colon H^*(BU(\infty)) \to {A}_{\infty} (BU(1))$ are bialgebra morphisms. Chern classes of vector bundles over $X$ correspond to the pullbacks of the universal Chern classes along its classifying map. Since $\delta_{k,n-k}^* (\eta_n)$ is isomorphic to $\eta_k \oplus \eta_{n-k}$, we have $\Delta (c_k) = \sum_{i=0}^k c_i \otimes c_{k-i}$.

**Lemma 27**. *For $k \le n$ let $c_k \in H^{2k} (BU(n); \mathbb{F}_p)$ be the $k$-th universal Chern class. Then $f_n^* (c_k) = c_{[k]} \odot 1_{n-k} \mod \mathcal{F}_{2k-1}$.*

*Proof.* We prove the lemma by induction on $k$. Since the only two-dimensional classes in $H^* (BN(n); \mathbb{F}_p)$ are $c_{[1]} \odot 1_{n-1}$, $\gamma_{1,1}^2 \odot 1_{n-2}$, and $\gamma_{1,2} \odot 1_{n-4}$, we have $$f_n^* (c_1) = \lambda c_{[1]} \odot 1_{n-1} + \mu \gamma_{1,1}^2 \odot 1_{n-2} + \nu \gamma_{1,2} \odot 1_{n-4}$$ for some $\lambda, \mu, \nu \in \mathbb{F}_p$. Since the restriction of $c_1$ to $H^* (BN(1); \mathbb{F}_p)$ is $c_{[1]}$, we must have $\lambda =1$. Hence, $f_n^* (c_1) = c_{[1]} \odot 1_{n-1} + x$, with $x\in \mathcal{F}_{0}$.\
We assume by the inductive hypothesis that the lemma is true for all $1 \le l < k$. Consider the reduced coproduct map $$\overline{\Delta} \colon H^* (BN(n); \mathbb{F}_p) \longrightarrow \bigoplus_{l=1}^{n-1} H^* (BN(l); \mathbb{F}_p) \otimes H^* (BN(n-l); \mathbb{F}_p) .$$ Recall that $\Delta_{l, n-l} (c_k) = \sum_{i=0}^k c_i \otimes c_{k-i}$ and the maps $f_n$ induce coalgebra maps. Therefore we have $$\overline{\Delta} (f_n^* (c_k)) = \sum_{l=1}^{n-1} \sum_{i=0}^k f_l^* (c_i) \otimes f_{n-l}^* (c_{k-i}).$$ By the inductive hypothesis, $$\overline{\Delta} (f_n^* (c_k)) = \sum_{l=1}^{n-1} \sum_{i=0}^k (c_{[i]} \odot 1_{l-i}) \otimes (c_{[k-i]} \odot 1_{n-l-k+i}) + x = \overline{\Delta} (c_{[k]} \odot 1_{n-k}) +x$$ where $x \in \mathcal{F}_{2k-1}$. The reduced coproduct $\overline{\Delta}$ preserves rank, hence $$f_n^* (c_k) = c_{[k]} \odot 1_{n-k} + y \mod \mathcal{F}_{2k-1},$$ where $y\in \mathop{\mathrm{Prim}}(A_{BU(1)})$. However, all the primitive elements have rank $0$ except for $c_{[1]}^j$ ($j\ge 1$), which are in component one. Hence, $f_n^* (c_k) = c_{[k]} \odot 1_{n-k} \mod \mathcal{F}_{2k-1}$. ◻

To fully compute $f_n^* (c_k)$, we can use an inductive argument based on the coproduct and Kochman's formulae.

**Theorem 28**. *(Kochman's formulae for $BU(\infty)$, [@Kochman]) [\[kof\]]{#kof label="kof"} Let $$Q_*^r \colon H^d (BU(\infty); \mathbb{F}_2) \rightarrow H^{d-r} (BU(\infty); \mathbb{F}_2)$$ be the dual of the $r$-th order Dyer-Lashof operation in homology.
Then, for all $r\le k \le n$, the following identity holds: $$Q_*^{2r} (c_k) = \binom{r-1}{k-r-1} c_{k-r} .$$ Moreover, if $[n] \in H_0 (BU(\infty) \times \mathbb{Z}; \mathbb{F}_2)$ is the class of the component corresponding to $n\in \mathbb{Z}$, then $$Q^{2r} (1 \otimes [1]) = (c_1^r)^{\vee} \otimes [2] .$$ Moreover, if $p > 2$ is a prime number, the dual Dyer-Lashof operations $$Q^r_* \colon H^d(BU(\infty); \mathbb{F}_p) \to H^{d-2r(p-1)}(BU(\infty); \mathbb{F}_p)$$ satisfy $$\begin{gathered}
Q_*^r(c_k) = (-1)^{r+k} \left( \begin{array}{c} r-1 \\ pr-k \end{array} \right) c_{k-r(p-1)} \\
\tag*{and} Q^r(1 \otimes [1]) = (c_{p-1}^r)^\vee \otimes [p]. \end{gathered}$$*

As $H_* ( BU(\infty) \times \mathbb{Z}; \mathbb{F}_p )$ is generated as an algebra by $[1]$ and the homology of its component corresponding to $0 \in \mathbb{Z}$, Theorem [\[kof\]](#kof){reference-type="ref" reference="kof"} and the Cartan formula fully determine the Dyer-Lashof operations on this Hopf algebra. The above identities are valid in the stable cohomology ring $H^* (BU(\infty) \times \mathbb{Z}; \mathbb{F}_p)$. They also determine the $Q_*^{r}$ unstably because the universal map from $\coprod_{n\ge 0} BU(n)$ to its group completion $BU(\infty) \times \mathbb{Z}$ is injective in homology. It is worth noting that, if $p = 2$, $Q_*^{2r-1} = \beta Q_*^{2r}$, where $\beta$ is the mod $2$ Bockstein homomorphism. Therefore, in this case knowledge of $Q_*^{2r}$ is sufficient to determine $Q_*^{2r-1}$.\
Let us denote the module of primitive elements in $H^* (D_n (BU(1)_+); \mathbb{F}_p)$ by $$\mathrm{Prim}_n (A_{BU(1)} ) = \mathrm{Prim} ( A_{BU(1)} ) \cap A_{BU(1)}^{n,*}.$$ The module $\mathrm{Prim}_n (A_{BU(1)} )$ injects into the $n$-th component of the $\odot$-indecomposables $\mathrm{Indec} ( A_{BU(1)} )$ of $A_{BU(1)}$. The following proposition can be deduced from [@May-Cohen].

**Proposition 29**. *Let $X$ be a topological space and let $\mathcal{B}$ be a homogeneous vector space basis for $H^*(X; \mathbb{F}_p)$. The module $\mathrm{Indec} (A_X )$ is the linear dual of the subspace $M$ of $A_X$ with basis $\{ Q_*^I (x^{\vee}) \mid I \text{ admissible, } x \in \mathcal{B}\}$. Moreover, for $m\ge 0$, the pairing between $\mathrm{Prim}_{2^m} ( A_X )$ and the subspace $M'_m$ with basis $\{ Q_*^I (x^{\vee}) \mid I \text{ admissible, } l(I) = m, e(I)>0, x \in \mathcal{B} \}$ is perfect. Here $l(I)$ and $e(I)$ are as defined in [@May-Cohen].*

Using Proposition [Proposition 29](#indec){reference-type="ref" reference="indec"} with $X = BU(1)$ and the monomial basis on $c$, we can compute $f_n^* (c_k)$ recursively as follows:

1. From the proof of Lemma [Lemma 27](#lema:chernformula){reference-type="ref" reference="lema:chernformula"} we see that $f_1^* (c_1) = c$ and the reduced coproduct is given by $$\overline{\Delta} (f_n^* (c_k)) = \sum_{l=1}^{n-1} \sum_{i=0}^k f_l^* (c_i) \otimes f_{n-l}^* (c_{k-i}).$$ Hence, the knowledge of $f_l^* (c_i)$ for $l<n$ is enough to recursively compute $\overline{\Delta} (f_n^* (c_k))$. This gives us $f_n^* (c_k)$ up to primitives.
2. In order to find the primitive part, we note that $\mathrm{Prim}_n ( A_{BU(1)} )$ does not include any $\gamma_{k,l}$ unless $n$ is equal to $2^m$ for some $m \geq 0$. Then, by applying Proposition [Proposition 29](#indec){reference-type="ref" reference="indec"}, we can determine the primitive part.

**Lemma 30**.
*Let $k\ge 1$ and $\langle \cdot , \cdot \rangle \colon H^* (BN(p^k); \mathbb{F}_p) \otimes H_* (BN(p^k); \mathbb{F}_p) \rightarrow \mathbb{F}_p$ be the evaluation pairing between the cohomology and the homology of the space $BN(p^k)$. Then, if $p = 2$, $$\big \langle f_{2^k}^* (c_{2^k-1}) , Q^{2^{k}} Q^{2^{k-1}} \cdots Q^4 Q^2 (1) \big \rangle =1 ,$$ where $1 \in H_0 (BN(1); \mathbb{F}_2)$. Similarly, if $p > 2$, $$\big \langle f_{p^k}^* (c_{p^k-1}) , Q^{p^{k-1}} Q^{p^{k-2}} \cdots Q^p Q^1 (1) \big \rangle =1 ,$$ where $1 \in H_0 (BN(1); \mathbb{F}_p)$.*

*Proof.* We prove the identity for $p = 2$, as the argument for $p > 2$ is entirely similar. In the proof, we will use the notation $\langle \cdot, \cdot \rangle$ for the evaluation pairing between cohomology and homology in any topological space, not just in $BN(2^k)$. We also use the naturality of this pairing with respect to continuous maps of spaces without further notice. Since the map $\coprod_n BN(n) \rightarrow \coprod_n BU(n)$ is an $E_{\infty}$ map, it preserves the Dyer-Lashof operations in homology. Additionally, the map in cohomology $H^* (BU(\infty); \mathbb{F}_2) \rightarrow H^* (BU(n); \mathbb{F}_2)$ is surjective and the dual map $H_* (BU(n); \mathbb{F}_2) \rightarrow H_* (BU(\infty); \mathbb{F}_2)$ is injective. This allows us to perform computations in $BU(\infty) \times \mathbb{Z}$. So, we prove the equivalent formula $$\big \langle c_{2^k-1} \otimes [2^k], Q^{2^{k}} Q^{2^{k-1}} \cdots Q^4 Q^2 (1\otimes [1]) \big \rangle =1 ,$$ where $[1] \in H_0 (BU(\infty) \times \mathbb{Z}; \mathbb{F}_2)$. We proceed by induction on $k$. For $k=1$, $$\big \langle c_{1} \otimes [2], Q^2 (1 \otimes [1]) \big \rangle = \big \langle c_1 \otimes [2], c_1^{\vee} \otimes [2] \big \rangle = 1.$$ For $k>1$, $$\begin{aligned}
\big \langle & c_{2^k-1} \otimes [2^k], Q^{2^{k}} Q^{2^{k-1}} \cdots Q^4 Q^2 (1\otimes [1]) \big \rangle \\
& \quad = \big \langle Q_*^{2^k} (c_{2^k-1} \otimes [2^{k}]), Q^{2^{k-1}} \cdots Q^4 Q^2 (1 \otimes [1]) \big \rangle \\
& \quad = \big \langle \binom{2^{k-1}-1}{2^{k-1}-2} c_{2^k -1 -2^{k-1}} \otimes [2^{k-1}], Q^{2^{k-1}} \cdots Q^2 (1 \otimes [1]) \big \rangle \text{ (by Theorem~\ref{kof})} \\
& \quad = \big \langle c_{2^{k-1}-1} \otimes [2^{k-1}], Q^{2^{k-1}} \cdots Q^4 Q^2 (1\otimes [1]) \big \rangle \end{aligned}$$ which is equal to $1$ by the inductive hypothesis. ◻

We will now derive the pullback formula for the Chern classes. We will state the formula in terms of the total Chern class $$c_* = 1+c_1 +c_2 +\cdots \in H^* (BU(\infty); \mathbb{F}).$$ To obtain the pullback formula for a specific Chern class $c_k$, we only need to consider the classes of cohomological dimension $2k$. We will elaborate on this topic later in the section.

**Theorem 31**. *(Pullback Formula) Let $f\colon BN(\infty) \rightarrow BU(\infty)$ be the limiting map induced by the inclusions $N(n) \hookrightarrow U(n)$ and $\mathcal{X} = \{ \underline{a} = (a_0, a_1, a_2, \dots ) \in \mathbb{N}^{\mathbb{N}^*} \mid a_n = 0 \text{ for } n \gg 0 \}$ be the set of infinite sequences of natural numbers that have finite support, i.e. that are eventually zero.
Then, in cohomology with coefficients in $\mathbb{F}$, $$\label{pullback-chern} f^* (c_*) = \left\{ \begin{array}{ll} \sum_{\underline{a} \in \mathcal{X}} c_{[a_0]} \odot \big ( \bigodot_{i,a_i \neq 0} \gamma_{i, a_i}^2 \big ) \odot 1_{\infty} & \mbox{if } \mathop{\mathrm{char}}(\mathbb{F}) = 2 \\ \sum_{\underline{a} \in \mathcal{X}} c_{[a_0]} \odot \big ( \bigodot_{i,a_i \neq 0} \gamma_{i, a_i} \big ) \odot 1_{\infty} & \mbox{if } \mathop{\mathrm{char}}(\mathbb{F}) = p > 2 \\ \sum_{n=0}^\infty c_{[n]} \odot 1_{\infty} & \mbox{if } \mathop{\mathrm{char}}(\mathbb{F}) = 0 \\ \end{array} \right. .$$ Note that requiring that every sequence in $\mathcal{X}$ has finite support guarantees that the transfer product in the formula ([\[pullback-chern\]](#pullback-chern){reference-type="ref" reference="pullback-chern"}) is iterated a finite number of times and is therefore well-defined.*

*Proof.* We first assume that $\mathop{\mathrm{char}}(\mathbb{F}) = 2$. Since $\mathbb{F}$ is a free module over $\mathbb{F}_2$, we can assume without loss of generality that $\mathbb{F}= \mathbb{F}_2$. Let us denote by $x_*$ the class $$x_* = \sum_{\underline{a} \in \mathcal{X}} c_{[a_0]} \odot \big ( \bigodot_{i,a_i \neq 0} \gamma_{i, a_i}^2 \big ) \odot 1_{\infty} .$$ We can write $x_* = \sum_{n\ge 0} x_n$, where $|x_n| = 2n$, as $\gamma_{i,a_i}^2$ and $c_{[a_0]}$ are always even dimensional cohomology classes. We have that $\Delta x_* = x_* \otimes x_*$.\
Extracting the $2n$-dimensional part, the coproduct of $x_n$ must be $$\Delta x_n = \sum_{j=0}^n x_j \otimes x_{n-j} ,$$ in analogy with the formula satisfied by the pullbacks of the Chern classes: $$\Delta f^* (c_n) = \sum_{j=0}^n f^* (c_j) \otimes f^* (c_{n-j}) .$$ When $m, n$ are positive integers, the restriction of $c_m$ to $H^* (BU(n); \mathbb{F}_2)$ is $0$ for all $m>n$. Therefore, the restriction of $f^* (c_m)$ to $H^* (BN(n); \mathbb{F}_2)$ is also $0$ for all $m>n$.\
**Claim 1.** The restriction of $x_m$ to $H^* (BN(n); \mathbb{F}_2)$ is $0$ if $m > n$.\
*Proof of Claim 1.* Recall that $|\gamma_{k,l}| = l(2^k -1)$ and $|c_{[k]}| = 2k$. Therefore the cohomological dimension of $$x_{\underline{a}} := c_{[a_0]} \odot \big ( \bigodot_{i,a_i \neq 0} \gamma_{i, a_i}^2 \big ) \odot 1_{\infty}$$ corresponding to a sequence $\underline{a} = (a_0, a_1, a_2, \dots ) \in \mathcal{X}$ is $2a_0 + \sum_{i=1}^{\infty} 2a_i (2^i -1)$. The class $x_m$ is of the form $\sum_{\underline{a}} x_{\underline{a}}$ for some sequences $\underline{a} \in \mathcal{X}$. If the cohomological dimension of $x_{m}$ is bigger than $2n$, *i.e.* $2m = 2 a_0 + \sum_{i=1}^{\infty} 2a_i (2^i -1) > 2n$, then $a_0 + \sum_{i=1}^{\infty} a_i 2^i > n$. Also, note that $x_{\underline{a}}$ is in the $(a_0 + \sum_{i=1}^{\infty} a_i 2^i)$-th component as the width of the skyline diagram corresponding to $x_{\underline{a}}$ is $a_0 + \sum_{i=1}^{\infty} a_i 2^i$, which is bigger than $n$. Therefore the restriction of $x_{\underline{a}}$ to $H^* (BN(n); \mathbb{F}_2)$ is $0$. $\blacksquare$\
Assume that $f^* (c_*) \neq x_*$. Then there exists a minimal $m$ such that $f^* (c_m) \neq x_m$. By minimality and the coproduct formulas, we have that $x_m -f^* (c_m)$ is primitive of dimension $2m$. The primitives in $A_{\infty} (BU(1))$ are necessarily linear combinations of elements of the form $b\odot 1_{\infty}$, where $b$ is a primitive gathered block in a component equal to $2^k$ for some $k$.
We now determine which gathered blocks $b$ can appear in this linear combination.\
**Claim 2.** For $x_m - f^* (c_m)$ to be non-zero, we must have $m=2^k -1$ for some $k\in \mathbb{N}$ and $$x_{2^k - 1} - f^* (c_{2^k -1}) = \lambda_k (\gamma_{k,1}^2 \odot 1_{\infty}) .$$ *Proof of Claim 2.* By Claim 1, $x_m - f^* (c_m)$ is in the kernel of the restriction map $H^* (BN(\infty); \mathbb{F}_2) \xrightarrow{res} H^* (BN(n); \mathbb{F}_2)$ for $n < m$. Note that $\mathrm{ker} \big ( H^* (BN(\infty); \mathbb{F}_2) \xrightarrow{res} H^* (BN(m); \mathbb{F}_2) \big )$ is generated by $y\odot 1_{\infty}$ for $y$ a full width Hopf monomial with width strictly bigger than $m$. This rules out all primitive gathered blocks in a component equal to $2^k$ for some $k < \lceil \log_2 (m) \rceil$. It follows from the calculation in [@Sinha:12] that the only primitive gathered block in component $2^k$ whose cohomological dimension is a multiple of $2$ and does not exceed $2^{k+1}$ is $\gamma_{k,1}^2$, of dimension $2(2^k-1)$. Therefore $m$ must be equal to $2^{k}-1$ for some $k\in \mathbb{N}$ and $$\label{pulpf} x_{2^k - 1} - f^* (c_{2^k - 1}) = \lambda_k (\gamma_{k,1}^2 \odot 1_{\infty}),$$ for some $\lambda_k \in \mathbb{F}_2$. $\blacksquare$\
**Claim 3.** For all $k$, we have that $\lambda_k = 0$.\
*Proof of Claim 3.* We prove this by restricting ([\[pulpf\]](#pulpf){reference-type="ref" reference="pulpf"}) to $H^* (BN(2^k); \mathbb{F}_2)$ and evaluating the restriction against the homology class $Q^{2^k} Q^{2^{k-1}} \cdots Q^{4} Q^{2} (1)$, where $1 \in H_0 (BN(1); \mathbb{F}_2)$. Also note that $\gamma_{k,1}^2 \odot 1_{\infty}$ restricts to $\gamma_{k,1}^2$ in $H^* (BN(2^k); \mathbb{F}_2)$. Using an argument similar to the proof of Claim 1, we can show that the only addends in $x_*$ whose width is at most $2^k$ and whose cohomological dimension is $2(2^k-1)$ are $c_{[2^k-2^l]} \odot \gamma_{l,1}^2 \odot 1_{\infty}$, with $1 \leq l \leq k$, and $c_{[2^k -1]} \odot 1_{\infty}$. Hence, the restriction of $x_m$ to $H^* (BN(2^k); \mathbb{F}_2)$ is $\sum_{l=1}^k c_{[2^k-2^l]} \odot \gamma_{l,1}^2 + c_{[2^k -1]} \odot 1_1$ and $$f_{2^k}^* (c_{2^k -1}) = (1-\lambda_k) \gamma_{k,1}^2 + \sum_{l=1}^{k-1} c_{[2^k-2^l]} \odot \gamma_{l,1}^2 + c_{[2^k -1]} \odot 1_1 .$$ From the definition of $\gamma_{k,1}$, we have $\langle \gamma_{k,1}^2 , Q^{2^{k}} Q^{2^{k-1}} \cdots Q^4 Q^2 (1) \rangle = 1$ and from Lemma [Lemma 30](#pairing){reference-type="ref" reference="pairing"}, we have $\langle f_{2^k}^* (c_{2^k-1}) , Q^{2^{k}} Q^{2^{k-1}} \cdots Q^4 Q^2 (1) \rangle = 1$. All the other addends pair trivially with $Q^{2^{k}} Q^{2^{k-1}} \cdots Q^4 Q^2 (1)$. This implies $\lambda_k = 0$. $\blacksquare$\
Claim 3 implies that $x_m - f^* (c_m) = 0$ for any $m \in \mathbb{N}$. This contradicts our previous assumption that $x_* \neq f^* (c_*)$. Therefore, we conclude that $f^* (c_*)$ must be equal to $x_*$ and the theorem is proved. If $\mathop{\mathrm{char}}(\mathbb{F}) = p > 2$, then the argument is essentially the same with $x_* = \sum_{\underline{a} \in \mathcal{X}} c_{[a_0]} \odot \big ( \bigodot_{i,a_i \neq 0} \gamma_{i, a_i} \big ) \odot 1_{\infty}$, but the analog of Claim 2 is slightly more subtle. In order to prove that if $m$ is the minimal index such that $x_m - f^*(c_m) \not= 0$ then $m = p^k-1$ and $x_{p^k-1} - f^*(c_{p^k-1}) = \lambda_k (\gamma_{k,1} \odot 1_\infty)$, one is led to consider primitive gathered blocks in component $p^k$ whose cohomological dimension does not exceed $2p^k$.
There are classes satisfying this constraint, in dimensions $2(p^k-p^i-1)$ and $2(p^k-p^i)-1$ for $1 \leq i \leq k$, and in dimension $2(p^k-1)$. The classes of dimension $2(p^k-p^i)-1$ are ruled out because $|x_m - f^*(c_m)|$ is even, while those of dimension $2(p^k-p^i-1)$ are ruled out because their Bockstein is non-zero, whereas $\beta(x_{p^k-1} - f^*(c_{p^k-1})) = 0$. The only remaining primitive is $\gamma_{k,1}$, of dimension $2(p^k-1)$. If $\mathop{\mathrm{char}}(\mathbb{F}) = 0$, then Claims 2 and 3 are unnecessary, because the only non-zero primitives lie in component $1$. ◻

Let us now recall the definition of a regular sequence. The following definition and the subsequent lemma are classical [@Eisenbud99].

**Definition 32**. Let $A$ be a commutative algebra over a field $\mathbb{F}$. A sequence $\{ a_1, \dots , a_n \}$ in $A$ is said to be a *regular sequence* if $a_1$ is not a zero divisor in $A$ and for all $2\le i \le n$, $a_i$ is not a zero divisor in $A/(a_1, \dots, a_{i-1})$. If $\{ F_i \}_{i\in I}$ is a filtration of $A$, with $I$ being a totally ordered set, then we define the *associated graded algebra* of $A$ as $$\mathop{\mathrm{gr}}(A):= \bigoplus_{i\in I} F_i / F_{i-1} .$$

**Lemma 33** ([@Eisenbud99]). *Let $A$ be an algebra over a field $\mathbb{F}$. Let $\mathcal{F}$ be a multiplicative filtration of $A$ bounded below. For $i=1,2,\dots , n$ let $x_i \in \mathcal{F}_{j_i}/\mathcal{F}_{j_i -1} \subseteq \mathop{\mathrm{gr}}_{\mathcal{F}} (A)$ be graded elements in the associated graded algebra lifting to $\widetilde{x}_i \in \mathcal{F}_{j_i} \subseteq A$. If $\{ x_1 , \dots, x_n \}$ is a regular sequence in $\mathop{\mathrm{gr}}_{\mathcal{F}} (A)$, then $\{ \widetilde{x}_1 , \dots, \widetilde{x}_n \}$ is a regular sequence in $A$.*

**Lemma 34**. *Let $A = H^* (BN(\infty); \mathbb{F}_p)$, and $\mathcal{F}_{*}$ be the rank filtration. The associated graded algebra is isomorphic to the polynomial algebra $$\mathop{\mathrm{gr}}_{\mathcal{F}} (A) = \mathcal{F}_0 [c_{[1]} \odot 1_{\infty} , c_{[2]} \odot 1_{\infty}, \dots , c_{[k]} \odot 1_{\infty}, \dots ].$$*

*Proof.* Every stabilized Hopf monomial of the form $x = c_{[k_1]}^{m_1} \odot \cdots \odot c_{[k_l]}^{m_l} \odot 1_{\infty}$ with $m_1 > m_2 > \cdots > m_l$ is uniquely determined by the eventually zero sequence of non-negative integers $$\underline{s} (x) = (\underbrace{m_1,\dots,m_1}_{k_1 \text{ times}},\dots, \underbrace{m_l, \dots, m_l}_{k_l \text{ times}}, 0,0,\dots ).$$ Let $A'$ be the linear span in $A$ of the Hopf monomials $c_{[k_1]}^{m_1} \odot \cdots \odot c_{[k_l]}^{m_l} \odot 1_{\infty}$ with $m_1 > m_2 > \cdots >m_l$ and $k_1,\dots,k_l \ge 1$. By Hopf ring distributivity, the product of any two such Hopf monomials is again a linear combination of Hopf monomials of this form, and hence an element of $A'$. This makes $A'$ a subalgebra of $A$. We define a total order on the set of these Hopf monomials by letting $x<y$ if and only if $\underline{s} (x) < \underline{s} (y)$ in the lexicographic order. The set of these monomials forms an ordered basis for $A'$ as an $\mathbb{F}_p$-vector space.
Given an eventually zero sequence of non-negative integers $s$, we denote the unique basis element $x_s$ such that $\underline{s} (x_s) = s$.\ **Claim.** If $x= c_{[k_1]}^{m_1} \odot \cdots \odot c_{[k_l]}^{m_l} \odot 1_{\infty}$, then for $m\ge 1$ the leading term of $x\cdot (c_{[k]} \odot 1_{\infty})$ in the above ordered basis is $x_{\underline{s} (x)+ \underline{s} (c_{[k]}\odot 1_{\infty})}$.\ Recall from Theorem [Theorem 4](#thm:cohomology DX mod 2){reference-type="ref" reference="thm:cohomology DX mod 2"} that $\Delta c_{[k]} = \sum_{j=0}^k c_{[j]} \otimes c_{[k-j]}$. To prove this claim, we proceed by induction. If $l=1$, the Hopf ring distributivity implies $$x\cdot (c_{[k]} \odot 1_{\infty}) = \sum_{j=0}^{\mathrm{min} \{ k_1, k\}} c_{[j]}^{m_1 +1} \odot c_{[k_1 -j]}^{m_1} \odot c_{[k-j]} \odot 1_{\infty} .$$ The leading term is the one for which $j$ is maximal and this proves the claim in the case when $l=1$.\ To prove the general case, let $x' = c_{[k_2]}^{m_2} \odot \cdots \odot c_{[k_l]}^{m_l} \odot 1_{\infty}$. Again using the Hopf ring distributivity, we have $$x\cdot (c_{[k]} \odot 1_{\infty}) = \sum_{j=0}^{\mathrm{min} \{ k_1, k \}} c_{[j]}^{m_1 +1} \odot c_{[k_1 -j]}^{m_1} \odot \big ( x' \cdot ( c_{[k-j]} \odot 1_{\infty} ) \big ) .$$ The leading term is the one for which $j$ is maximal and this proves the induction step.\ As a consequence of the Claim $A' = \mathbb{F}_p [c\odot 1_{\infty}, \dots ,c_{[n]} \odot 1_{\infty}]$ as an algebra. Therefore, to prove the lemma it is enough to check that for all $x = \overline{x} \odot 1_{\infty} \in \mathcal{F}_0$ and for all $y=\overline{y} \odot 1_{\infty} \in A'$, the cup product $x\cdot y$ is equal to $\overline{x} \odot \overline{y} \odot 1_{\infty}$ plus elements of lower rank. Unpacking the Hopf ring distributivity $x \cdot y$ is obtained combinatorially by splitting the columns of the skyline diagram of $\overline{y}$, and stacking each column either onto a column of $\overline{x}$ with the same width, or onto the $1_{\infty}$ part. Stacking a column of $\overline{y}$ onto a column of $\overline{x}$ lowers rank. Therefore the term of maximal rank is obtained by stacking $\overline{y}$ onto the $1_{\infty}$ part, which yields $\overline{x} \odot \overline{y} \odot 1_{\infty}$. ◻ **Theorem 35**. *The sequence $\{ f^* (c_1), f^* (c_2), \dots \}$ in $A_{\infty} (BU(1))$ is a regular sequence.* *Proof.* From Lemma [Lemma 34](#lema:associated-graded){reference-type="ref" reference="lema:associated-graded"}, we have that the associated graded algebra $\mathop{\mathrm{gr}}_{\mathcal{F}} (A)$ corresponding to the rank filtration $\mathcal{F}_*$ of $A = H^* (BN (\infty); \mathbb{F}_p)$ is isomorphic to $\mathcal{F}_0 [c_{[1]} \odot 1_{\infty} , c_{[2]} \odot 1_{\infty}, \dots ]$. Also, note that the sequence $\{ c_{[1]} \odot 1_{\infty}, c_{[2]} \odot 1_{\infty}, \dots \}$ is a regular sequence in $\mathop{\mathrm{gr}}_{\mathcal{F}} (A)$. From Lemma [Lemma 27](#lema:chernformula){reference-type="ref" reference="lema:chernformula"}, we have $f_n^* (c_k) = c_{[k]} \odot 1_{n-k}$ modulo terms of lower rank and therefore $c_{[k]} \odot 1_{\infty} \in \mathop{\mathrm{gr}}_{\mathcal{F}} (A)$ lifts to $f^* (c_k) \in A$. Hence by Lemma [Lemma 33](#lema:graded){reference-type="ref" reference="lema:graded"}, $\{ f^* (c_1), f^* (c_2), \dots \}$ is a regular sequence in $A_{\infty} (BU(1))$. ◻ As a direct consequence of Proposition [Theorem 35](#theo:regular){reference-type="ref" reference="theo:regular"}, we have the following. 
**Corollary 36**. *For all primes $p$, there are isomorphisms of algebras $$H^* \Big ( \frac{U(\infty)}{N(\infty)} ; \mathbb{F}_p \Big ) \cong \frac{A_{\infty} (BU(1))}{(f^* (c_1), f^* (c_2), \dots )} .$$*

*Proof.* We consider the Serre spectral sequence associated with the fiber sequence $$\label{eq3} U(\infty) \longrightarrow \frac{U(\infty)}{N(\infty)} \longrightarrow BN(\infty)$$ and compare it with the Serre spectral sequence associated with the universal fibration $$\label{eq4} U(\infty) \longrightarrow EU(\infty) \longrightarrow BU(\infty).$$ Recall that $H^* (U(\infty); \mathbb{F}_p)$ is the exterior algebra generated by $z_{2i-1}$ for $i=1,2, \dots$ and $z_{2i-1}$ transgresses to $c_i$ in the Serre spectral sequence associated with ([\[eq4\]](#eq4){reference-type="ref" reference="eq4"}). Comparing the two spectral sequences, we see that $z_{2i-1}$ transgresses to $f^* (c_i)$ in the Serre spectral sequence associated with ([\[eq3\]](#eq3){reference-type="ref" reference="eq3"}). Hence, the differentials are given as follows: for all $i \ge 1$, $d_{2i}\colon z_{2i-1} \mapsto f^* (c_i)$ and $d_{2i-1} \equiv 0$. By Theorem [Theorem 35](#theo:regular){reference-type="ref" reference="theo:regular"}, the sequence $\{ f^* (c_1), f^* (c_2), \dots \}$ is a regular sequence in $A_{\infty} (BU(1))$ and hence by [@atiyahMac Theorem 11.22] is algebraically independent over $\mathbb{F}_p$. Therefore, for all $i \ge 1$, $d_{2i}$ is injective on the ideal generated by $z_{2i-1}$; otherwise there would be algebraic relations among the $f^* (c_i)$'s. The $E_{\infty}$-page is thus given by $$E_{\infty}^{*,*} = \frac{A_{\infty} (BU(1))}{(f^* (c_1), f^* (c_2), \dots )}$$ and the result follows. ◻

**Corollary 37**. *$H^* (\overline{\mathrm{Fl}}_{\infty} (\mathbb{C}); \mathbb{F}_p)$ is the free graded commutative algebra generated by the stabilization of decorated gathered blocks $b\odot 1_{\infty}$ such that $\mathop{\mathrm{rk}}(b) = 0$ and satisfying the conditions of Corollary [Corollary 12](#cor:polynomial generators){reference-type="ref" reference="cor:polynomial generators"}.*

*Proof.* By Lemma [Lemma 34](#lema:associated-graded){reference-type="ref" reference="lema:associated-graded"} and Corollary [Corollary 36](#cor:stable-cohomology){reference-type="ref" reference="cor:stable-cohomology"}, the stable cohomology of the unordered flag manifold is isomorphic to $\mathcal{F}_0$, the rank zero subalgebra of $A_{\infty} (BU(1))$. ◻

With rational coefficients, $A_{\infty} (BU(1))$ is the polynomial algebra generated by $c_{[n]} \odot 1_\infty$ for $n \geq 1$. Therefore, our analysis recovers the following well-known result.

**Corollary 38**. *$H^*(\overline{\mathrm{Fl}}_{\infty} (\mathbb{C}); \mathbb{Q}) \cong \mathbb{Q}$.*

## The real case {#section:4.2}

For the real case, we deal with the mod $2$, mod $p$ for $p>2$, and rational cohomologies separately.

### Mod $2$ cohomology

To compute the cohomology of $\overline{\mathop{\mathrm{Fl}}}_n(\mathbb{R})$, we chose to use the alternating subgroup $\ensuremath \mathop{\mathrm{B}}_{n}^{+}$ instead of $\mathop{\mathrm{B}}_n$. Therefore, in order to compute the cohomology of the limit $\overline{\mathop{\mathrm{Fl}}}_{\infty} (\mathbb{R})$, we need to determine the stable cohomology of $\ensuremath \mathop{\mathrm{B}}_{n}^{+}$. **Corollary 39**. 
*The sequence of spaces $\{ B\ensuremath \mathop{\mathrm{B}}_{n}^{+} \}_{n \in \mathbb{N}}$ exhibits homological stability with stabilization maps $B\ensuremath \mathop{\mathrm{B}}_{n}^{+} \to B\ensuremath \mathop{\mathrm{B}}_{n+1}^{+}$ induced by the standard group inclusions $\ensuremath \mathop{\mathrm{B}}_{n}^{+} \hookrightarrow \ensuremath \mathop{\mathrm{B}}_{n+1}^{+}$. Moreover, for all primes $p$, the stable cohomology ring is $$H^*(B\ensuremath \mathop{\mathrm{B}}_{\infty}^{+}; \mathbb{F}_p) \cong \varinjlim_n H^*(B\ensuremath \mathop{\mathrm{B}}_{n}^{+}; \mathbb{F}_p) \cong \left\{ \begin{array}{ll} \frac{ A_{\infty}(\mathbb{P}^\infty(\mathbb{R})) }{(e_\infty)} & \mbox{if } p = 2 \\ A_{\infty}(\mathbb{P}^\infty(\mathbb{R})) & \mbox{if } p > 2 \end{array} \right.,$$ where $e_\infty = w \odot 1_\infty + \gamma_{1,1} \odot 1_\infty$.* *Proof.* If $p > 2$, the result follows immediately from Theorem [Theorem 19](#thm:cohomology alternating group mod p){reference-type="ref" reference="thm:cohomology alternating group mod p"} and the homological stability of $B(\mathop{\mathrm{B}}_n)$. For $p = 2$, we observe that, with the notation of Definition [Definition 15](#def:basis B+){reference-type="ref" reference="def:basis B+"}, every element of $\mathcal{G}_{ann} \cap H^*(\mathop{\mathrm{B}}_n; \mathbb{F}_2)$ has degree at least $3 \lfloor n/4 \rfloor$, which goes to $\infty$ as $n \to \infty$. We deduce that, for all $d \geq 0$ and for $n$ large enough, in the mod $2$ cohomological Gysin sequence of $B\ensuremath \mathop{\mathrm{B}}_{n}^{+} \to B\mathop{\mathrm{B}}_n$ the connecting homomorphisms is injective in degree $d-1$. Therefore there is a short exact sequence $$0 \longrightarrow H^{d-1}(\mathop{\mathrm{B}}_n; \mathbb{F}_2) \stackrel{e_n}{\longrightarrow} H^d(\mathop{\mathrm{B}}_n; \mathbb{F}_2) \longrightarrow H^d (\ensuremath \mathop{\mathrm{B}}_{n}^{+}; \mathbb{F}_2) \longrightarrow 0.$$ The stabilization maps induce homomorphisms of short exact sequences, hence the homological stability of $B\mathop{\mathrm{B}}_n$ implies that of $B\ensuremath \mathop{\mathrm{B}}_{n}^{+}$. Moreover, the stable cohomology of $B\ensuremath \mathop{\mathrm{B}}_{n}^{+}$ is the quotient of the stable cohomology of $B\mathop{\mathrm{B}}_n$ by the limit of the classes $e_n$, which is $e_\infty$. ◻ **Definition 40**. Let $p$ be a prime number. We define ${A}_{\infty}' (\ensuremath \mathop{\mathrm{B}}_{}^{+})$ as the ring $H^*(\ensuremath \mathop{\mathrm{B}}_{\infty}^{+}; \mathbb{F}_p)$ computed in the previous corollary. The inclusions $\ensuremath \mathop{\mathrm{B}}_{n}^{+} \hookrightarrow SO(n)$ induce maps $g_n \colon B\ensuremath \mathop{\mathrm{B}}_{n}^{+} \rightarrow BSO(n)$. We call the limiting map $g \colon B\ensuremath \mathop{\mathrm{B}}_{\infty}^{+} \rightarrow BSO(\infty)$. Similarly, we have maps $h_n \colon B\mathop{\mathrm{B}}_n \rightarrow BO(n)$ and $h \colon B \mathop{\mathrm{B}}_{\infty} \to BO(\infty)$ Note that, in analogy with the complex case, the homomorphism $\delta_{n,m} \colon SO(n) \times SO(m) \to SO(n+m)$ given by the direct sum of matrices induces a coassociative and cocommutative coproduct $\Delta$ on $\bigoplus_{n \geq 0} H^*(BSO(n); \mathbb{F}_p)$. Since it is compatible with cup product, this object is a bialgebra. The stabilization maps of $BSO(n)$ preserve $\Delta$ and hence the limit $H^*(BSO(\infty))$ is also a bialgebra. 
The morphisms $\bigoplus_n g_n^* \colon \bigoplus_n H^*(BSO(n); \mathbb{F}_p) \to A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}$ and $g^* \colon H^*(BSO(\infty)) \to {A}_{\infty}' (\ensuremath \mathop{\mathrm{B}}_{}^{+})$ are bialgebra morphisms. Similarly to the complex case, there is a universal oriented real vector bundle $\eta_n$ on $BSO(n)$. Pullbacks of $\eta_n$ along (homotopy classes of) maps $X \to BSO(n)$ classify the isomorphism classes of oriented real vector bundles over $X$. Stiefel-Whitney classes of such a bundle correspond to pullbacks of the universal Stiefel-Whitney classes along its classifying map. Since $\delta_{n,m}^*(\eta_{n+m})$ is isomorphic to $\eta_n \oplus \eta_m$, the formula for characteristic classes of direct sums imply that $$\forall k \in \mathbb{N}: \quad \Delta(w_k) = \sum_{n+m=k} w_n \otimes w_m.$$ Similarly, $\bigoplus_n H^*(BO(n); \mathbb{F}_p)$ and $H^*(BO(\infty); \mathbb{F}_p)$ are bialgebras, and $$\begin{aligned} \bigoplus_n h_n^* \colon \bigoplus_n H^*(BO(n); \mathbb{F}_p) \longrightarrow A_{\mathbb{P}^\infty(\mathbb{R})} \\ \tag*{and} h^* \colon H^*(BO(\infty); \mathbb{F}_p) \longrightarrow A_\infty(\mathbb{P}^\infty(\mathbb{R})) \end{aligned}$$ are bialgebra homomorphisms. The same formula for the coproduct of universal characteristic classes holds in $BO(n)$. Let us define a charged version of the rank filtration when $p = 2$. **Definition 41**. Assume that $p = 2$. The *rank of a charged Hopf monomial* in $A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}$ denoted as $\mathop{\mathrm{rk}}$ is defined as the rank of the corresponding non-charged decorated Hopf monomial in $A_{\mathbb{P}^\infty(\mathbb{R})}$. We also define $\mathop{\mathrm{rk}}(x\otimes y) := \mathop{\mathrm{rk}}(x) + \mathop{\mathrm{rk}}(y)$ in $A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}\otimes A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}$. **Definition 42**. Assume that $p = 2$ We define the *rank filtration* $\mathcal{F}_{*}$ for $A'_{\ensuremath \mathop{\mathrm{B}}_{}^{+}}$ by setting $\mathcal{F}_n$ as the linear span of charged Hopf monomials $x$ with $\mathop{\mathrm{rk}}(x) \le n$. Moreover, we define a rank filtration on ${A}_{\infty}' (\ensuremath \mathop{\mathrm{B}}_{}^{+})$, by defining $\mathcal{F}_n$ as the linear span of stabilized Hopf monomials $x\odot 1_{\infty}$ where $\mathop{\mathrm{rk}}(x)\leq n$. With a slight abuse of notation, we also denote it as $\mathcal{F}_{*}$. The rank filtration is exhaustive, increasing, and bounded from below. Moreover, due to the almost-Hopf ring distributivity law, it is multiplicative with respect to the cup product, *i.e.* $\mathcal{F}_m \cdot \mathcal{F}_n \subseteq \mathcal{F}_{m+n}$.\ We can now compute the pullbacks $g^*(w_k)$ of Stiefel-Whitney classes with essentially the same argument used for Chern classes in the previous subsection. **Lemma 43**. 1. *For $1 \leq k \leq n$ let $w_k \in H^{k} (BO(n); \mathbb{F}_2)$ be the $k$-th universal Stiefel-Whitney class. Then $h_n^* (w_k) = w_{[k]} \odot 1_{n-k} \mod \mathcal{F}_{k-1}$.* 2. *For $2 \leq k \leq n$, let $w_k \in H^k(BSO(n); \mathbb{F}_2)$ be the $k$-th universal Stiefel-Whitney class. Then $g_n^*(w_k) = (w_{[k]} \odot 1_{n-k})^0 \mod \mathcal{F}_{k-1}$.* *Proof.* The proof of (1) follows from the same argument used for the proof of Lemma [Lemma 27](#lema:chernformula){reference-type="ref" reference="lema:chernformula"}, by replacing Chern classes with Stiefel-Whitney classes. 
There is a commutative diagram relating $g_n$ and $h_n$. The left vertical map induces $\mathop{\mathrm{res}}_n$ in cohomology, while pullback along the right vertical map sends $w_k$ to $w_k$ if $k \geq 2$ and $w_1$ to $0$. (2) follows from the induced commutative diagram in cohomology, by observing that $\mathop{\mathrm{res}}_n$ preserves the rank filtration. ◻

There is a version of Kochman's formulae for $BO(\infty)$.

**Theorem 44** (Kochman's formulae for $BO$, [@Kochman]). *Let $Q_*^r \colon H^d(BO(\infty); \mathbb{F}_2) \to H^{d-r}(BO(\infty); \mathbb{F}_2)$ be the dual of the $r$-th order Dyer-Lashof operation in homology. Then, for all $r \leq k$, the following equality holds: $$Q_*^r(w_k) = \left( \begin{array}{c} r-1 \\ k-r-1 \end{array} \right) w_{k-r}.$$ Moreover, if $[n] \in H^0(BO(\infty) \times \mathbb{Z}; \mathbb{F}_2) \cong \mathbb{Z}$ corresponds to $n \in \mathbb{Z}$, then $$Q^r([1]) = (w_1^r)^\vee * [2].$$*

Using this theorem, Lemma [Lemma 43](#lema:swformula){reference-type="ref" reference="lema:swformula"} and Proposition [Proposition 29](#indec){reference-type="ref" reference="indec"} with $X = BO(1)$ and the monomial basis on $w$, we can prove the pullback formula for $h^*(w_k)$. The proof is very similar to the complex case: we use Lemma [Lemma 27](#lema:chernformula){reference-type="ref" reference="lema:chernformula"} to reduce to primitives, and compute the primitive part of the pullback by applying Proposition [Proposition 29](#indec){reference-type="ref" reference="indec"} and Kochman's formulae via Lemma [Lemma 45](#pairing real){reference-type="ref" reference="pairing real"}. For this reason, we only provide the statements and we omit proofs.

**Lemma 45**. *Let $k\ge 1$ and $\langle \cdot , \cdot \rangle \colon H^* (B\mathop{\mathrm{B}}_{2^k}; \mathbb{F}_2) \otimes H_* (B\mathop{\mathrm{B}}_{2^k}; \mathbb{F}_2) \rightarrow \mathbb{F}_2$ be the evaluation pairing between cohomology and homology of the space $B\mathop{\mathrm{B}}_{2^k}$. Then, $$\big \langle h_{2^k}^* (w_{2^k-1}) , Q^{2^{k}} Q^{2^{k-1}} \cdots Q^4 Q^2 (1) \big \rangle =1 ,$$ where $1 \in H_0 (B\mathop{\mathrm{B}}_1; \mathbb{F}_2)$.*

We will again state the pullback formula in terms of the total Stiefel-Whitney class $$w_* = 1+w_1 +w_2 +\cdots \in H^* (BO(\infty); \mathbb{F}_2).$$ To obtain the pullback formula for a specific Stiefel-Whitney class $w_k$, we only need to consider the classes of cohomological dimension $k$.

**Theorem 46**. *(Pullback Formula) Let $\mathcal{X} = \{ \underline{a} = (a_0, a_1, a_2, \dots ) \in \mathbb{N}^{\mathbb{N}^*} \mid a_n = 0 \text{ for } n \gg 0 \}$ be the set of infinite sequences of natural numbers that have finite support, i.e. that are eventually zero. Then, $$\label{pullback-real} h^* (w_*) = \sum_{\underline{a} \in \mathcal{X}} w_{[a_0]} \odot \big ( \bigodot_{i,a_i \neq 0} \gamma_{i, a_i} \big ) \odot 1_{\infty} .$$ Note that requiring that every sequence in $\mathcal{X}$ has finite support guarantees that the transfer product in the formula ([\[pullback-real\]](#pullback-real){reference-type="ref" reference="pullback-real"}) is iterated a finite number of times and well-defined.*

Modulo $2$, the real counterpart of Lemma [Lemma 34](#lema:associated-graded){reference-type="ref" reference="lema:associated-graded"} is proved with essentially the same argument.

**Lemma 47**. *Let $A = H^*(B\mathop{\mathrm{B}}_\infty; \mathbb{F}_2)$ and $\mathcal{F}_*$ be the rank filtration. 
The associated graded algebra is isomorphic to the polynomial algebra $$\mathop{\mathrm{gr}}_{\mathcal{F}}(A) = \mathcal{F}_0[w_{[1]} \odot 1_\infty, w_{[2]} \odot 1_\infty, \dots, w_{[k]} \odot 1_\infty, \dots].$$* Consequently, the description of the mod $2$ stable cohomology of unordered real flag manifold is similar to the complex one. **Theorem 48**. *Let $h^* \colon H^*(BO(\infty); \mathbb{F}_2) \to H^*(B\mathop{\mathrm{B}}_\infty; \mathbb{F}_2)$ be the map induced by $h$ in cohomology. Then the sequence $\{h^*(w_1),\dots, h^*(w_k),\dots \}$ is a regular sequence in $H^*(B\mathop{\mathrm{B}}_\infty; \mathbb{F}_2)$.* *Proof.* The proof is analogous to that of Theorem [Theorem 35](#theo:regular){reference-type="ref" reference="theo:regular"}. ◻ **Corollary 49**. *There is an isomorphism of algebras $$H^* (\overline{\mathop{\mathrm{Fl}}}_\infty(\mathbb{R}); \mathbb{F}_2) \cong \frac{A_\infty(BO(1))}{(h^*(w_1),h^*(w_2),\dots)}.$$* *Proof.* We compare the Serre spectral sequences associated with the fiber sequences $$\begin{gathered} SO(\infty) \longrightarrow \frac{SO(\infty)}{\ensuremath \mathop{\mathrm{B}}_{\infty}^{+}} \longrightarrow B\ensuremath \mathop{\mathrm{B}}_{\infty}^{+} \\ \tag*{and} SO(\infty) \longrightarrow ESO(\infty) \longrightarrow BSO(\infty).\end{gathered}$$ Recall that $H^*(SO(\infty); \mathbb{F}_2)$ is the exterior algebra generated by classes $a_{i-1}$ for $i = 2,3,\dots$ and $a_{i-1}$ transgresses to $w_i$ in the spectral sequence associated to the bottom fibration. Therefore, $d_i(a_{i-1}) = g^*(w_i)$ in the Serre spectral sequence associated to the top fibration. By Corollary [Corollary 39](#cor:homological stability alternating subgroups){reference-type="ref" reference="cor:homological stability alternating subgroups"}, the mod $2$ cohomology of $\ensuremath \mathop{\mathrm{B}}_{\infty}^{+}$ is isomorphic to $A_\infty(BO(1))/(h^*(w_1))$. As $\mathop{\mathrm{res}}_\infty \circ h = g$, under this isomorphism $g^*(w_k)$ is identified with the image of $h^*(w_k)$ in the quotient. By Theorem [Theorem 48](#theo:regular real){reference-type="ref" reference="theo:regular real"} $\{h^*(w_1), h^*(w_2), \dots \}$ is a regular sequence, and hence we must have for all $i \ge 1$, $d_i$ injective on the ideal generated by $a_{i-1}$. ◻ **Corollary 50**. *$H^*(\overline{\mathop{\mathrm{Fl}}}_\infty(\mathbb{R}); \mathbb{F}_2)$ is the free commutative algebra generated by the stabilization of decorated gathered blocks $b \odot 1_\infty$ in $A_\infty(BO(1))$ such that $\mathop{\mathrm{rk}}(b) = 0$.* ### Cohomology at odd primes With coefficients in $\mathbb{F}_p$, $p > 2$, the cohomology of $\mathbb{P}^\infty(\mathbb{R})$ is trivial. Thus, every Hopf monomial in $A_{\mathbb{P}^\infty(\mathbb{R})}$ has rank $0$ and the rank filtration becomes useless. Consequently, we must fully exploit the mod $p$ pullback formula to perform our stable computations. We recall that there are Pontrjagin classes $\mathcal{p}_k \in H^{4k}(BSO(\infty); \mathbb{Z})$ that are pullbacks of even Chern classes via the complexification map $cpx \colon BO(\infty) \to BU(\infty)$. We state our pullback formula for these by means of the total Pontrjagin class $$\mathcal{p}_* = \sum_{n=0}^\infty \mathcal{p}_n = 1 + \mathcal{p}_1 + \mathcal{p}_2 + \dots \in H^*(BO(\infty)),$$ where we let, by convention, $\mathcal{p}_0 = 1$. **Proposition 51**. 
*Let $\mathcal{Y} = \{ \underline{a} = (a_1, a_2, \dots ) \in \mathbb{N}^{\mathbb{N}^*} \mid a_n = 0 \text{ for } n \gg 0 \}$ be the set of infinite sequences of natural numbers that have finite support, i.e. that are eventually zero. Then, $$\label{pullback-pon} g^* (\mathcal{p}_*) = \sum_{\underline{a} \in \mathcal{Y}} \big ( \bigodot_{i,a_i \neq 0} \gamma_{i, a_i} \big ) \odot 1_{\infty} .$$ Note that requiring that every sequence in $\mathcal{Y}$ has finite support guarantees that the transfer product in the formula ([\[pullback-pon\]](#pullback-pon){reference-type="ref" reference="pullback-pon"}) is iterated a finite number of times and well-defined.*

*Proof.* We first prove that the desired identity holds for $h^*(\mathcal{p}_*) \in H^*(B\mathop{\mathrm{B}}_\infty; \mathbb{F}_p)$. There is a commutative diagram where $j_1$ and $j_2$ are the obvious inclusions. By passing to cohomology we deduce that $$j_1^* h^*(\mathcal{p}_k) = j_1^*h^* cpx^*(c_{2k}) = j_2^*f^*(c_k).$$ By functoriality, $j_2^*$ is the stabilization of the Hopf ring morphism $A_{BU(1)} \to A_{\{*\}}$ induced by the inclusion $\{*\} \to BU(1)$. In particular, $j_2^*(\gamma_{k,l}) = \gamma_{k,l}$ and $j_2^*(c_{[l]}) = 0$ for all $k,l \geq 1$. As $j_1^*$ is an isomorphism in mod $p$ cohomology, the result follows from Theorem [Theorem 31](#pullbackformula){reference-type="ref" reference="pullbackformula"}. To pass from $h^*(\mathcal{p}_*)$ to $g^*(\mathcal{p}_*)$, it is enough to recall that the restriction map $H^*(\mathop{\mathrm{B}}_\infty; \mathbb{F}_p) \to H^*(\ensuremath \mathop{\mathrm{B}}_{\infty}^{+}; \mathbb{F}_p)$ is an isomorphism by Corollary [Corollary 20](#cor:res mod p){reference-type="ref" reference="cor:res mod p"}. ◻

**Corollary 52**. *Let $p > 2$ be an odd prime. Then, in mod $p$ cohomology, the following statements hold:*

1. *$g^*(\mathcal{p}_k) = 0$ if $k$ is not a multiple of $(p-1)/2$.*

2. *$\{g^*(\mathcal{p}_{\frac{p-1}{2}}),g^*(\mathcal{p}_{2\frac{p-1}{2}}), g^*(\mathcal{p}_{3\frac{p-1}{2}}),\dots\}$ is a regular sequence in $H^*(B\ensuremath \mathop{\mathrm{B}}_{\infty}^{+}; \mathbb{F}_p)$.*

*Proof.* Statement $1$ is immediate from Proposition [Proposition 51](#pullback pontrjagin){reference-type="ref" reference="pullback pontrjagin"}, because the right-hand side of the formula for $g^*(\mathcal{p}_*)$ only has terms in degrees that are multiples of $2(p-1)$. To prove the second claim, we first define an alternative rank function on stabilized Hopf monomials in $H^*(B\ensuremath \mathop{\mathrm{B}}_{\infty}^{+}; \mathbb{F}_p)$, which is identified with $H^*(B(\Sigma_\infty); \mathbb{F}_p) = A_\infty(\{*\})$ by Theorem [Theorem 19](#thm:cohomology alternating group mod p){reference-type="ref" reference="thm:cohomology alternating group mod p"}. Given a gathered block $b$, we define $$\mathop{\mathrm{rk}}(b) = \left\{ \begin{array}{ll} nm & \mbox{if } b = \gamma_{1,n}^m \\ 0 & \mbox{if } b \not= \gamma_{1,n}^m \mbox{ for all } n,m \in \mathbb{N} \end{array} \right. .$$ Given a Hopf monomial $x = b_1 \odot \dots \odot b_r$, we define $\mathop{\mathrm{rk}}(x) = \sum_{i=1}^r \mathop{\mathrm{rk}}(b_i)$. Finally, for all stabilized Hopf monomials $x \odot 1_\infty$, with $x$ pure, we let $\mathop{\mathrm{rk}}(x \odot 1_\infty) = \mathop{\mathrm{rk}}(x)$. This produces an increasing filtration $\mathcal{F} = \{ \mathcal{F}_n \}_{n \geq 0}$ such that $\mathcal{F}_n$ is the subspace linearly spanned by basis elements of rank at most $n$. 
The filtration $\mathcal{F}_n$ is multiplicative, and the same arguments used in our proof of Lemma [Lemma 34](#lema:associated-graded){reference-type="ref" reference="lema:associated-graded"} shows that there is an isomorphism $$\mathop{\mathrm{gr}}_{\mathcal{F}}(A_\infty(\{*\})) \cong \mathcal{F}_0 [\gamma_{1,1} \odot 1_\infty, \dots, \gamma_{1,n} \odot 1_\infty,\dots].$$ As an immediate consequence of Proposition [Proposition 51](#pullback pontrjagin){reference-type="ref" reference="pullback pontrjagin"}, $g^*(\mathcal{p}_{k \frac{p-1}{2}}) = \gamma_{1,k} \odot 1_\infty$ modulo $\mathcal{F}_{k-1}$ and that the sequence $(\gamma_{1,1} \odot 1_\infty, \dots, \gamma_{1,n} \odot 1_\infty,\dots)$ is a regular sequence in $\mathop{\mathrm{gr}}_{\mathcal{F}}(A_\infty(\{*\}))$ by the same argument used in the proof of Theorem [Theorem 35](#theo:regular){reference-type="ref" reference="theo:regular"}. ◻ **Corollary 53**. *If $p > 2$ is an odd prime, the quotient algebra with mod $p$ coefficients $$\frac{A_\infty(\{*\})}{(g^*(\mathcal{p}_1),g^*(\mathcal{p}_2),\dots)}$$ is the free graded commutative algebra generated by stabilized gathered blocks $b \odot 1_\infty$ satisfying the conditions of Corollary [Corollary 12](#cor:polynomial generators){reference-type="ref" reference="cor:polynomial generators"} such that $\mathop{\mathrm{rk}}(b) = 0$.* Finally, we are ready to complete the computation of the stable mod $p$ cohomology of $O(\infty)/B(\infty) = SO(\infty)/\ensuremath \mathop{\mathrm{B}}_{\infty}^{+}$. **Theorem 54**. *Let $p > 2$ be an odd prime. Then there is an isomorphism of $\mathbb{F}_p$-algebras $$H^* \left( \frac{O(\infty)}{B(\infty)}; \mathbb{F}_p \right) \cong \Lambda(\{a_{4n-1}: \frac{p-1}{2} \not| n\}) \otimes \frac{A_\infty(\{*\})}{(g^*(\mathcal{p}_1),g^*(\mathcal{p}_2),\dots)},$$ where $\Lambda$ is the exterior algebra functor, and the classes $a_k$ are indexed by their degree.* *Proof.* We consider the cohomological Serre spectral sequence associated with the fiber sequence $SO(\infty) \to \overline{\mathop{\mathrm{Fl}}}_\infty(\mathbb{R}) \to B\ensuremath \mathop{\mathrm{B}}_{\infty}^{+}$. Recall that $H^*(SO(\infty); \mathbb{F}_p)$ is the exterior algebra generated by classes $a_3,a_7,\dots, a_{4k-1},\dots$. Note that the action of the fundamental group of the base on the fiber is homotopically trivial, so the $E_2^{*,*}$ page involves cohomology with constant coefficients. By comparing it with the fibration $SO(\infty) \to ESO(\infty) \to BSO(\infty)$ we deduce, as in the complex case, that $a_{4k-1}$ is transgressive and transgresses to $g^*(\mathcal{p}_{k})$. This completely determines the differentials on each page. In the cohomological Serre spectral sequence of the bottom fibration, the class $a_{4n-1} \in H^*(SO(\infty))$ is transgressive and transgresses to the Pontryagin class $\mathcal{p}_n$. By comparing the Serre spectral sequences of the two fibrations above, we see that, for the top fibration, $a_{4n-1}$ is also transgressive and transgresses to $g^*(\mathcal{p}_n)$. This spectral sequence is multiplicative with respect to the cup product, hence the remark above completely determines all its differentials. By Corollary [Corollary 52](#cor:regular real odd){reference-type="ref" reference="cor:regular real odd"}, the pullbacks of Pontryagin classes $g^*(\mathcal{p}_n)$ for $\frac{p-1}{2} | n$ form a regular sequence in the cohomology of the base and that the differentials of the remaining classes are $0$. 
Consequently, one can check inductively that the classes $a_{4n-1}$ with $\frac{p-1}{2} \not| n$ survive to the limit page and the differential $d_{4n}$ with $\frac{p-1}{2} | n$ is injective on the ideal generated by $a_{4n-1}$. Therefore $$E_\infty^{*,*} \cong \Lambda(\{a_{4n-1}: \frac{p-1}{2} \not| n\}) \otimes \frac{A_\infty(\{*\})}{(g^*(\mathcal{p}_1),g^*(\mathcal{p}_2),\dots)}.$$ Since we are using field coefficients, $H^*(\overline{\mathop{\mathrm{Fl}}}_\infty(\mathbb{R}); \mathbb{F}_p) \cong E_\infty^{*,*}$ as a graded vector space. A priori, this isomorphism might not preserve the product. In this particular case, however, the generators $a_{4k-1}$, being odd-dimensional, must actually span an exterior algebra inside $H^*(\overline{\mathop{\mathrm{Fl}}}_\infty(\mathbb{R}); \mathbb{F}_p)$, and thus this is an isomorphism of algebras. ◻

### Rational cohomology

The rational cohomology of unordered flag manifolds is known. However, we stress that we can retrieve it as a simple consequence of our machinery.

**Proposition 55**. *$$H^*(\overline{\mathop{\mathrm{Fl}}}_\infty(\mathbb{R}); \mathbb{Q}) \cong H^*(SO(\infty); \mathbb{Q}) \cong \Lambda(\{a_{4k-1}:k\in \mathbb{N}\}).$$*

*Proof.* The rational cohomology of the finite group $\ensuremath \mathop{\mathrm{B}}_{\infty}^{+}$ is trivial by Serre's theorem and the action of $\ensuremath \mathop{\mathrm{B}}_{\infty}^{+}$ on $SO(\infty)$ is homotopically trivial. Hence, the rational Serre spectral sequence of the fiber sequence $SO(\infty) \to \overline{\mathop{\mathrm{Fl}}}_\infty(\mathbb{R}) \to B\ensuremath \mathop{\mathrm{B}}_{\infty}^{+}$ collapses at the second page, which reduces to the cohomology of the fiber. ◻

## Poincaré series of stable cohomology

We now determine the Poincaré series formula for stable cohomology, specifically $H^* (\overline{\mathrm{Fl}}_{\infty} (\mathbb{C}) ; \mathbb{F}_2)$. For this, we need to obtain the Poincaré series of the stabilization $D_{\infty} (BU(1)_+) \cong BN(\infty)$.

### Poincaré series of $H^* (BN(\infty); \mathbb{F}_2)$

To obtain $H^* (BN(\infty); \mathbb{F}_2)$, we take the stabilization of full-width Hopf monomials in $\bigoplus_{n\ge 0} H^* (D_n (BU(1)_+); \mathbb{F}_2 )$. This means we exclude Hopf monomials that have a $\odot$-factor of $1_m$ in their expression. Let us denote the bigraded Poincaré series of $\bigoplus_{n\ge 0} H^* (D_n (BU(1)_+); \mathbb{F}_2 )$ by $\Pi (t,s)$. To obtain the Poincaré series $\Pi_{stab} (t)$ of $H^* (BN(\infty); \mathbb{F}_2)$, we need to

- forget the second degree (component),

- keep only terms corresponding to full-width Hopf monomials.

Therefore $\Pi_{stab} (t) = [(1-s) \cdot \Pi (t,s)] \mid_{s=1}$. 
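The extraction $[(1-s) \cdot \Pi (t,s)] \mid_{s=1}$ can be read coefficientwise: the number of full-width Hopf monomials of width $n$ and cohomological degree $d$ is the difference between the dimensions of $H^d (D_n (BU(1)_+); \mathbb{F}_2)$ and $H^d (D_{n-1} (BU(1)_+); \mathbb{F}_2)$, and the coefficient of $t^d$ in $\Pi_{stab}(t)$ is the sum of these differences over $n$. The following minimal sketch is an illustration of ours (not part of the argument); the dimension table `dims` is a hypothetical input that is not computed here.

```python
# Minimal illustrative sketch: coefficientwise reading of
# Pi_stab(t) = [(1 - s) * Pi(t, s)]|_{s=1}.
# Assumption: dims[n][d] holds dim_{F_2} H^d(D_n(BU(1)_+); F_2);
# this table is a hypothetical input, not computed here.

def stab_coefficients(dims, d_max):
    """Return [c_0, ..., c_{d_max}] with c_d the coefficient of t^d in Pi_stab(t)."""
    coeffs = [0] * (d_max + 1)
    for n in range(1, len(dims)):
        for d in range(d_max + 1):
            # number of full-width Hopf monomials of width n and degree d
            full_width = dims[n].get(d, 0) - dims[n - 1].get(d, 0)
            coeffs[d] += full_width
    # exact in degree d once every width contributing to degree d appears in the table
    return coeffs
```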
### Poincaré series of $H^* (\overline{\mathrm{Fl}}_{\infty} (\mathbb{C}) ; \mathbb{F}_2)$

We know from Theorem [Theorem 35](#theo:regular){reference-type="ref" reference="theo:regular"} that the sequence $\big (f^* (c_1), f^* (c_2), \dots \big )$ is regular in $H^* ( BN (\infty); \mathbb{F}_2 )$ and from Corollary [Corollary 36](#cor:stable-cohomology){reference-type="ref" reference="cor:stable-cohomology"} $$H^* ( \overline{\mathrm{Fl}}_{\infty} (\mathbb{C}) ; \mathbb{F}_2 ) \cong \frac{H^* (BN (\infty); \mathbb{F}_2)}{\big (f^* (c_1), f^* (c_2), \dots \big )} .$$ From the additivity of Poincaré series and the short exact sequence of the graded vector spaces $$\begin{aligned} 0 \longrightarrow \frac{H^* (BN (\infty); \mathbb{F}_2)}{\big (f^* (c_1),\dots, f^* (c_n) \big )} [2(n+1)] &\longrightarrow \frac{H^* (BN (\infty); \mathbb{F}_2)}{\big ( f^* (c_1), \dots, f^* (c_n) \big )} \\ &\longrightarrow \frac{H^* (BN (\infty); \mathbb{F}_2)}{\big ( f^* (c_1),\dots , f^* (c_{n+1}) \big )} \longrightarrow 0 \end{aligned}$$ one can inductively compute the Poincaré series of $\frac{H^* (BN (\infty); \mathbb{F}_2)}{\big ( f^* (c_1), \dots, f^* (c_n) \big )}$ and pass to the limit to get the Poincaré series of $H^* (\overline{\mathrm{Fl}}_{\infty} (\mathbb{C}); \mathbb{F}_2)$: killing the regular element $f^* (c_{n+1})$, which has degree $2(n+1)$, multiplies the Poincaré series by $1-t^{2(n+1)}$, so $$\Pi_{\frac{U(\infty)}{N(\infty)}} (t) = \Pi_{stab} (t) \cdot \prod_{n=1}^{\infty} \big ( 1 - t^{2n} \big ) .$$ It still remains to compute $\Pi (t,s)$. This can be done by first determining the bigraded Poincaré series of the primitive component and subsequently, by using the Hopf algebra structure, extending the result to obtain the Poincaré series $\Pi(t, s)$.

# The spectral sequence computing $H^*(\overline{\mathop{\mathrm{Fl}}}_n(\mathbb{C});\mathbb{F})$ and $H^*(\overline{\mathop{\mathrm{Fl}}}_n(\mathbb{R});\mathbb{F})$

## The complex case

In this section, we prepare the ground for the general spectral sequence argument that we will use to compute $H^*(\overline{\mathrm{Fl}}_n (\mathbb{C}); \mathbb{F}_p)$. For $G = N(n)$ and $X = U(n)$ in Theorem [Theorem 21](#generalSSS){reference-type="ref" reference="generalSSS"}, we obtain the fiber sequence $$\label{spseq} U(n) \longrightarrow \overline{\mathrm{Fl}}_n (\mathbb{C}) \longrightarrow BN(n) .$$ The $E_2$-page of the spectral sequence associated with ([\[spseq\]](#spseq){reference-type="ref" reference="spseq"}) is given by $$H^* (BN(n); H^* (U(n); \mathbb{F}_p)) \cong H^* (D_n (BU(1)_+); \mathbb{F}_p) \otimes H^* (U(n); \mathbb{F}_p).$$ Recall from [@Hatcher Proposition 3D.4] that $$\label{eq:unitary} H^* (U(n); \mathbb{F}_p) \cong \Lambda_{\mathbb{F}_p} [ z_{2k-1} \mid k=1,2,\dots , n],$$ where $z_{2k-1} \in H^{2k-1} (U(n); \mathbb{F}_p)$. We will express the differentials in the spectral sequence associated with ([\[spseq\]](#spseq){reference-type="ref" reference="spseq"}) using the pullbacks of the Chern classes $c_k$ under the map $f_n \colon D_n (BU(1)_+) \rightarrow BU(n)$.

**Lemma 56**. *In the spectral sequence associated with the fiber sequence ([\[spseq\]](#spseq){reference-type="ref" reference="spseq"}) the class $z_{2k-1}$ transgresses to $f_n^* (c_k)$.*

*Proof.* Recall that a map between the fiber sequences induces a map between the corresponding spectral sequences which commutes with the differentials. The lemma follows from this by comparing the Serre spectral sequence associated with the fiber sequence $U(n) \rightarrow \overline{\mathrm{Fl}}_n (\mathbb{C}) \rightarrow BN(n)$ and the universal fibration $U(n) \rightarrow EU(n) \rightarrow BU(n)$. 
◻ The pullback formulas in Theorem [Theorem 31](#pullbackformula){reference-type="ref" reference="pullbackformula"} have a limitation in that they are expressed using the total Chern class $c_*$ and hence do not immediately give us $f_n^* (c_k)$ for specific $n$ and $k$, which is needed for the unstable calculations. However, this issue can be addressed by imposing certain constraints on the sequences that contribute to the sum for $f_n^*(c_k)$ in the original formula. The following proposition outlines the conditions that need to be met. **Proposition 57**. *The following statements are true:* 1. *Let $\mathcal{X}_n$ denote the set of sequences $\underline{a} = (a_0, a_1, a_2 , \dots) \in \mathcal{X}$ such that $\sum_{i \ge 0} a_i 2^i \le n$ and $\mathcal{X}_{n,k}$ denote the set of sequences $\underline{a} \in \mathcal{X}_n$ such that $a_0 + \sum_{i\ge 1} a_i (2^i -1) = k$. Then, $$f_n^* (c_k) = \sum_{\underline{a} \in \mathcal{X}_{n,k}} c_{[a_0]} \odot \big ( \bigodot_{i,a_i \neq 0} \gamma_{i, a_i}^2 \big ) \odot 1_{m} ,$$ where $m \in \mathbb{N}$ is such that the addend is in component $n$.* 2. *Let $p$ be an odd prime. Let $\mathcal{Y}_n$ denote the set of sequences $\underline{a} = (a_1, a_2 , \dots) \in \mathcal{Y}$ such that $\sum_{i \ge 1} a_i p^i \le n$ and $\mathcal{Y}_{n,k}$ denote the set of sequences $\underline{a} \in \mathcal{Y}_n$ such that $\sum_{i\ge 1} a_i (p^i -1) = 2k$. Then, in mod $p$ cohomology, $$f_n^*(c_k) = \sum_{\underline{a} \in \mathcal{Y}_{n,k}} c_{[a_0]} \odot \big ( \bigodot_{i, a_i \neq 0} \gamma_{i,a_i} \big ) \odot 1_{m},$$ where $m \in \mathbb{N}$ is such that the addend is in component $n$.* *Proof.* The proof is straightforward. Recall that the component to which $\gamma_{i, a_i}^2$ belongs is $a_i 2^i$ and the component to which $c_{[a_0]}$ belongs is $a_0$. Therefore, the component to which $c_{[a_0]} \odot \big ( \bigodot_{i,a_i \neq 0} \gamma_{i, a_i}^2 \big )$ belongs is $a_0+ \sum_{i\ge 1} a_i 2^i = \sum_{i \ge 0} a_i 2^i$. For the addend to be in $\mathrm{Im} (f_n^* ) \subset H^* (BN(n); \mathbb{F}_2)$, this value must be less than or equal to $n$. Moreover the cohomological dimension of $\gamma_{i, a_i}^2$ is $2a_i (2^i -1)$ and the cohomological dimension of $c_{[a_0]}$ is $2a_0$. Note that the pullback $f_n^*$ preserves the cohomological dimension. So, a sequence $\underline{a} \in \mathcal{X}_n$ is an addend in $f_n^* (c_k)$ if and only if $2a_0 + \sum_{i\ge 1} 2a_i (2^i -1) = 2k$. ◻ The following example demonstrates the use of this pullback formula for $n=5$. *Example 58*. First, we compute $\mathcal{X}_{5,k}$ for $1 \le k \le 5$. $$\begin{aligned} \mathcal{X}_{5,1} &= \{ (1,0,0,\dots) , (0,1,0,\dots ) \} , \\ \mathcal{X}_{5,2} &= \{ (2,0,0,\dots), (1,1,0,\dots) , (0,2,0,\dots) \} , \\ \mathcal{X}_{5,3} &= \{ (3,0,0,\dots), (2,1,0,\dots), (1,2,0,\dots), (0,0,1,0,\dots) \} , \\ \mathcal{X}_{5,4} &= \{ (4,0,0,\dots), (3,1,0,\dots), (1,0,1,0,\dots) \} , \\ \mathcal{X}_{5,5} &= \{ (5,0,0,\dots) \}. 
\end{aligned}$$ Hence by Proposition [Proposition 57](#prop:finite-pullback){reference-type="ref" reference="prop:finite-pullback"} we have $$\begin{aligned} f_5^* (c_1) &= c \odot 1_4 + \gamma_{1,1}^2 \odot 1_3, \\ f_5^* (c_2) &= c_{[2]} \odot 1_3 + c \odot \gamma_{1,1}^2 \odot 1_2 + \gamma_{1,2}^2 \odot 1_1 ,\\ f_5^* (c_3) &= c_{[3]} \odot 1_2 + c_{[2]} \odot \gamma_{1,1}^2 \odot 1_1 + c\odot \gamma_{1,2}^2 + \gamma_{2,1}^2 \odot 1_1, \\ f_5^* (c_4) &= c_{[4]} \odot 1_1 + c_{[3]} \odot \gamma_{1,1}^2 + c \odot \gamma_{2,1}^2 , \\ f_5^* (c_5) &= c_{[5]}. \end{aligned}$$ As $\overline{\mathrm{Fl}}_1 (\mathbb{C}) \cong \{*\}$ is just a point, $H^* (\overline{\mathrm{Fl}}_1 (\mathbb{C}) ; \mathbb{F}_2) \cong \mathbb{F}_2$ is trivial. For $n=2$, we have an isomorphism $\overline{\mathrm{Fl}}_2 (\mathbb{C}) \cong \mathbb{P}^{2} (\mathbb{R})$. For $n \ge 3$, we will be using the Serre spectral sequence $E$ associated with the fiber sequence $U(n) \rightarrow \overline{\mathrm{Fl}}_n (\mathbb{C}) \rightarrow BN(n)$ to determine the cohomology of $\overline{\mathrm{Fl}}_n (\mathbb{C})$. The $E_2$-page is given by $E_2^{k,l} = H^k (BN(n); H^l (U(n); \mathbb{F}_2))$. Recall the transgressive differentials $d_{2k}$ described in Lemma [Lemma 56](#differential){reference-type="ref" reference="differential"} as follows: $$d_{2k} : z_{2k-1} \longmapsto f_n^* (c_k) .$$ The differential $d_{2k}$ can be fully described from the multiplicative structure of the spectral sequence. Also, note that odd differentials are zero. The pullbacks $f_n^* (c_k)$ are computed using the formula from Proposition [Proposition 57](#prop:finite-pullback){reference-type="ref" reference="prop:finite-pullback"}. On the $E_2$-page of the spectral sequence, the $l=0$ row is given by $H^* (BN(n); \mathbb{F}_2)$ and the higher rows are $H^* (BN(n); \mathbb{F}_2)$ multiplied by the product of some finite combination of the $z_{2k-1}$'s. A major step in the computation is checking whether the differentials $d_{2k}$ are injective or not, i.e. whether $d_{2k} (b\cdot z_{2k-1})$ is zero or not for $b$ a generator of $H^* (BN(n); \mathbb{F}_2)$.

**Lemma 59**. *In the spectral sequence $E$, $d_{2k} ((c_{[l]} \odot 1_{n-l}) \cdot z_{2k-1})$ is non-zero for all $1 \le k,l \le n$.*

*Proof.* From the multiplicative structure of the spectral sequence $E$, we have $$d_{2k} ((c_{[l]} \odot 1_{n-l}) \cdot z_{2k-1}) = (c_{[l]} \odot 1_{n-l}) \cdot d_{2k} (z_{2k-1}) = (c_{[l]} \odot 1_{n-l}) \cdot f_n^* (c_k).$$ Recall from Proposition [Proposition 57](#prop:finite-pullback){reference-type="ref" reference="prop:finite-pullback"} that $f_n^* (c_k) = c_{[k]} \odot 1_{n-k} + \cdots$ in $H^* (BN(n); \mathbb{F}_2)$. Note that this also holds in $$\frac{H^* (BN(n); \mathbb{F}_2)}{ (f_n^* (c_1), \dots, f_n^* (c_{k-1}))} .$$ Without loss of generality we assume $k \ge l$. The other case is similar. Using Hopf ring distributivity, $$(c_{[l]} \odot 1_{n-l}) \cdot (c_{[k]} \odot 1_{n-k}) = c_{[l]}^2 \odot c_{[k-l]} \odot 1_{n-k} + \cdots$$ which is always non-zero. We illustrate this using a skyline diagram with $k=4$, $l=3$, and $n=6$ in Figure [\[fig:cdot\]](#fig:cdot){reference-type="ref" reference="fig:cdot"}. Therefore $(c_{[l]} \odot 1_{n-l}) \cdot f_n^* (c_k) \neq 0$ for all $1 \le k,l \le n$. ◻

The previous lemma tells us that the differentials are always non-zero on classes of the form $c_{[m]} \odot 1_{n-m}$. Therefore, the only classes that can be mapped to zero by the differentials $d_{2k}$ are the rank zero decorated gathered blocks. 
Here the rank function is as defined in Definition [Definition 25](#dfn:rank){reference-type="ref" reference="dfn:rank"}. Hence, we only need to consider these classes when we are checking the injectivity of the differentials. This simplifies our computations in §[6](#section:6){reference-type="ref" reference="section:6"}.

## The real case {#the-real-case}

A similar machinery can be used to compute $H^*(\overline{\mathop{\mathrm{Fl}}}_n(\mathbb{R}); \mathbb{F}_p)$ for all $n \in \mathbb{N}$ and for all primes $p$. Recall from [@Hatcher Corollary 4D.3] that $$H^*(SO(n); \mathbb{F}_p) \cong \left\{ \begin{array}{ll} \Lambda(\{a_1,\dots,a_{n-1}\}) & \mbox{if } p = 2 \\ \Lambda(\{a_3,\dots,a_{4\lfloor \frac{n}{2} \rfloor -1}\}) & \mbox{if } p > 2, n \mbox{ odd} \\ \Lambda(\{a_3,\dots,a_{4\lfloor \frac{n}{2} \rfloor -5}, a'_{n-1}\}) & \mbox{if } p > 2, n \mbox{ even} \end{array} \right. ,$$ where generators are indexed by their degree. The generators of the mod $p$ cohomology of $SO(n)$ are transgressive in the Serre spectral sequence associated with the fiber sequence $SO(n) \to ESO(n) \to BSO(n)$. Moreover, $d_i(a_{i-1}) = w_i$ if $p = 2$, $d_{4i}(a_{4i-1}) = \mathcal{p}_i$ if $p > 2$, and $d_n(a'_{n-1})$ is the universal Euler class $X_n \in H^n(BSO(n); \mathbb{F}_p)$ if $p > 2$ and $n$ is even. By comparison with the Serre spectral sequence of the fibration $SO(n) \to \overline{\mathop{\mathrm{Fl}}}_n(\mathbb{R}) \to B\ensuremath \mathop{\mathrm{B}}_{n}^{+}$ we deduce that the generators of the cohomology of $SO(n)$ are transgressive and transgress to the pullback of the corresponding characteristic class via $g_n$. The pullbacks of the Stiefel-Whitney and Pontrjagin classes can be computed from Theorem [Theorem 46](#pullbackformula real){reference-type="ref" reference="pullbackformula real"} and Proposition [Proposition 51](#pullback pontrjagin){reference-type="ref" reference="pullback pontrjagin"} by restriction from the stable cohomology as done for Chern classes in Proposition [Proposition 57](#prop:finite-pullback){reference-type="ref" reference="prop:finite-pullback"}.

**Proposition 60**. *The following statements are true:*

1. *Let $\mathcal{X}_n$ denote the set of sequences $\underline{a} = (a_0, a_1, a_2 , \dots) \in \mathcal{X}$ such that $\sum_{i \ge 0} a_i 2^i \le n$ and $\mathcal{X}_{n,k}$ denote the set of sequences $\underline{a} \in \mathcal{X}_n$ such that $a_0 + \sum_{i\ge 1} a_i (2^i -1) = k$. Then, in mod $2$ cohomology, $$g_n^* (w_k) = \sum_{\underline{a} \in \mathcal{X}_{n,k}} \mathop{\mathrm{res}}_n \left( w_{[a_0]} \odot \big ( \bigodot_{i,a_i \neq 0} \gamma_{i, a_i} \big ) \odot 1_{m}\right) ,$$ where $m \in \mathbb{N}$ is such that the addend is in component $n$.*

2. *Let $p$ be an odd prime. Let $\mathcal{Y}_n$ denote the set of sequences $\underline{a} = (a_1, a_2 , \dots) \in \mathcal{Y}$ such that $\sum_{i \ge 1} a_i p^i \le n$ and $\mathcal{Y}_{n,k}$ denote the set of sequences $\underline{a} \in \mathcal{Y}_n$ such that $\sum_{i\ge 1} a_i (p^i -1) = 2k$. Then, in mod $p$ cohomology, $$g_n^*(\mathcal{p}_k) = \sum_{\underline{a} \in \mathcal{Y}_{n,k}} \big ( \bigodot_{i, a_i \not= 0} \gamma_{i,a_i} \big ) \odot 1_{m},$$ where $m \in \mathbb{N}$ is such that the addend is in component $n$.*

By contrast, the calculation of $g_n^*(X_n)$ when $n$ is even requires additional analysis.

**Proposition 61**. *Let $p > 2$ be a prime. 
Then $g_n^*(X_n) = 0$ for all $n > 0$ even.*

*Proof.* We preliminarily recall that $H^*(B\ensuremath \mathop{\mathrm{B}}_{n}^{+}; \mathbb{F}_p) \cong H^* (B\Sigma_n; \mathbb{F}_p)$ by Theorem [Theorem 19](#thm:cohomology alternating group mod p){reference-type="ref" reference="thm:cohomology alternating group mod p"}. We prove the statement by induction on $n$. If $n = 2$ (base of the induction), then $g_2^*(X_2) = 0$ because $H^*(B\Sigma_2; \mathbb{F}_p)$ is trivial. We now assume that $n > 2$. Then, by the formula for the Euler class of direct sums (see for instance [@Brown]), $\Delta(X_n) = \sum_{m=0}^{\frac{n}{2}} X_{2m} \otimes X_{n-2m}$. Applying the bialgebra morphism $\bigoplus_n g_n^*$ and using the induction hypothesis, we deduce that $g_n^*(X_n)$ is primitive. As $H^*(B\Sigma_n; \mathbb{F}_p)$ does not contain any non-zero primitive, this implies $g_n^*(X_n) = 0$. ◻

*Example 62*. $\mathcal{X}_{4,3} = \{(3,0,0,\dots), (2,1,0,\dots), (0,0,1,0,\dots)\}$. Therefore, by Proposition [Proposition 60](#prop:finite-pullback real){reference-type="ref" reference="prop:finite-pullback real"} we have $$g_4^*(w_3) = (w_{[3]} \odot 1_2)^0 + (w_{[2]} \odot \gamma_{1,1})^0 + \gamma_{2,1}^+ + \gamma_{2,1}^-.$$ Similarly, from Proposition [Proposition 60](#prop:finite-pullback real){reference-type="ref" reference="prop:finite-pullback real"} and the calculations of $\mathcal{X}_{5,k}$ of Example [Example 58](#n5){reference-type="ref" reference="n5"} we have $$\begin{aligned} g_5^*(w_2) &= (w_{[2]} \odot 1_3)^0 + \mathop{\mathrm{res}}(w \odot \gamma_{1,1} \odot 1_2) = (w_{[2]} \odot 1_3)^0 + (\gamma_{1,1}^2 \odot 1_3)^0, \\ g_5^*(w_3) &= (w_{[3]} \odot 1_2)^0 + (w_{[2]} \odot \gamma_{1,1} \odot 1_1)^0 + \mathop{\mathrm{res}}_n(w \odot \gamma_{1,2}) + (\gamma_{2,1} \odot 1_1)^0 \\ &= (w_{[3]} \odot 1_2)^0 + (w_{[2]} \odot \gamma_{1,1} \odot 1_1)^0 + (\gamma_{1,1}^2 \odot \gamma_{1,1} \odot 1_1)^0 + (\gamma_{2,1} \odot 1_1)^0, \\ g_5^*(w_4) &= (w_{[4]} \odot 1_1)^0 + (w_{[3]} \odot \gamma_{1,1})^0 + \mathop{\mathrm{res}}_n(w\odot \gamma_{2,1}) = (w_{[4]} \odot 1_1)^0 + (w_{[3]} \odot \gamma_{1,1})^0, \\ g_5^*(w_5) &= (w_{[5]})^0.\end{aligned}$$ Propositions [Proposition 60](#prop:finite-pullback real){reference-type="ref" reference="prop:finite-pullback real"} and [Proposition 61](#prop:finite-pullback Euler){reference-type="ref" reference="prop:finite-pullback Euler"} and multiplicativity completely determine the differentials on the Serre spectral sequence associated to $SO(n) \to \overline{\mathop{\mathrm{Fl}}}_n(\mathbb{R}) \to B\ensuremath \mathop{\mathrm{B}}_{n}^{+}$. With our method, we can compute them systematically, and thus retrieve in a simpler way the results about $H^*(\overline{\mathop{\mathrm{Fl}}}_n(\mathbb{R}); \mathbb{F}_2)$ for $n \leq 5$ proved in [@G-J-M]. In that article, the authors used a complicated geometric argument to fully determine the spectral sequences, which we can completely avoid with our pullback formulas.

# Unstable cohomology of complete unordered flag varieties {#section:6}

In this section, we present some unstable calculations for the cohomology of the unordered flag manifolds of low order ($n = 3,4,5$). We will also present the mod $p$ cohomology of $\overline{\mathrm{Fl}}_p (\mathbb{C})$ for all primes $p$. 
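Since the computations below repeatedly use the finite pullback formula of Proposition [Proposition 57](#prop:finite-pullback){reference-type="ref" reference="prop:finite-pullback"}, it is convenient to enumerate the indexing sets $\mathcal{X}_{n,k}$ mechanically. The following short script is a sanity check of ours (not part of the argument): it lists all finitely supported sequences $\underline{a}$ with $\sum_{i \ge 0} a_i 2^i \le n$ and $a_0 + \sum_{i \ge 1} a_i (2^i - 1) = k$, and for $n = 5$ it reproduces, up to ordering, the sets $\mathcal{X}_{5,k}$ of Example [Example 58](#n5){reference-type="ref" reference="n5"}.

```python
# Sanity-check sketch (ours): enumerate the indexing sets X_{n,k} of Proposition 57(1).
from itertools import product
from math import floor, log2

def X(n, k):
    """Sequences (a_0, ..., a_r), r = floor(log2(n)), with
    sum_i a_i * 2^i <= n and a_0 + sum_{i>=1} a_i * (2^i - 1) = k."""
    r = floor(log2(n))
    ranges = [range(n // (2 ** i) + 1) for i in range(r + 1)]  # a_i <= n / 2^i
    return [a for a in product(*ranges)
            if sum(a[i] * 2 ** i for i in range(r + 1)) <= n
            and a[0] + sum(a[i] * (2 ** i - 1) for i in range(1, r + 1)) == k]

# Example: sorted(X(5, 3)) == [(0, 0, 1), (1, 2, 0), (2, 1, 0), (3, 0, 0)]
```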
## Mod $2$ Cohomology of $\overline{\mathrm{Fl}}_3 (\mathbb{C})$

Recall from ([\[eq:unitary\]](#eq:unitary){reference-type="ref" reference="eq:unitary"}) that $$H^* (U(3); \mathbb{F}_2) \cong \Lambda_{\mathbb{F}_2} [z_1, z_3, z_5].$$ From Lemma [Lemma 56](#differential){reference-type="ref" reference="differential"} and the pullback formula we obtain $$\begin{aligned} d_2 (z_1) &= c \odot 1_2 + \gamma_{1,1}^2 \odot 1_1 , \\ d_4 (z_3) &= c_{[2]} \odot 1_1 + c \odot \gamma_{1,1}^2 , \\ d_6 (z_5) &= c_{[3]}.\end{aligned}$$ Also, from the Hopf ring structure of $A_{BU(1)}$ in Theorem [Theorem 4](#thm:cohomology DX mod 2){reference-type="ref" reference="thm:cohomology DX mod 2"}, $$H^* (BN(3); \mathbb{F}_2) \cong \frac{\mathbb{F}_2 [c\odot 1_2, c_{[2]} \odot 1_1 , c_{[3]} , \gamma_{1,1} \odot 1_1 ]}{\big ( (c\odot 1_2)\cdot (c_{[2]} \odot 1_1)\cdot (\gamma_{1,1} \odot 1_1) + c_{[3]} \cdot (\gamma_{1,1} \odot 1_1) \big )} .$$ On the $E_2$-page the differential $d_2$ is non-zero by Lemma [Lemma 59](#cdiff){reference-type="ref" reference="cdiff"} and the following: $$\begin{aligned} d_2 ((\gamma_{1,1} \odot 1_1) \cdot z_1 ) &= (\gamma_{1,1} \odot 1_1) \cdot (c \odot 1_2 + \gamma_{1,1}^2 \odot 1_1) = c\odot \gamma_{1,1}^2 + \gamma_{1,1}^3 \odot 1_1 .\end{aligned}$$ This shows that $d_2$ is injective on the ideal generated by $z_1$. As before, the $E_3$-page is the same as the $E_4$-page, since $d_3 \equiv 0$. Hence, the $E_4$-page is given as follows: $$\begin{aligned} E_4^{*,*} & \cong \frac{H^* (BN(3); \mathbb{F}_2)}{\big ( f_3^* (c_1) \big )} \otimes \Lambda_{\mathbb{F}_2} [z_3, z_5] \\ & \cong \frac{\mathbb{F}_2 [c_{[2]} \odot 1_1 , c_{[3]} , \gamma_{1,1} \odot 1_1 ]}{\big ( (c_{[2]} \odot 1_1)\cdot (\gamma_{1,1} \odot 1_1)^3 + c_{[3]} \cdot (\gamma_{1,1} \odot 1_1) \big )} \otimes \Lambda_{\mathbb{F}_2} [z_3, z_5] .\end{aligned}$$ On the $E_4$-page, the differential $d_4$ is also injective as $$\begin{aligned} d_4 ((\gamma_{1,1} \odot 1_1) \cdot z_3 ) &= (\gamma_{1,1} \odot 1_1) \cdot (c_{[2]} \odot 1_1 + c \odot \gamma_{1,1}^2) = c_{[2]} \gamma_{1,1} \odot 1_1 + c \odot \gamma_{1,1}^3 .\end{aligned}$$ The $E_6$-page is therefore given by $$\begin{aligned} E_6^{*,*} & \cong \frac{H^* (BN(3); \mathbb{F}_2)}{\big ( f_3^* (c_1), f_3^* (c_2) \big )} \otimes \Lambda_{\mathbb{F}_2} [z_5] \\ & \cong \frac{\mathbb{F}_2 [c_{[3]} , \gamma_{1,1} \odot 1_1 ]}{\big ( (\gamma_{1,1} \odot 1_1)^7 + c_{[3]} \cdot (\gamma_{1,1} \odot 1_1) \big )} \otimes \Lambda_{\mathbb{F}_2} [z_5] .\end{aligned}$$ Note that in $H^* (BN(3); \mathbb{F}_2)/ (f_3^* (c_1))$, we have the relation $c\odot 1_2 = \gamma_{1,1}^2 \odot 1_1$. Using this identification we have $$c\odot \gamma_{1,1}^2 = (c\odot 1_2) \cdot (\gamma_{1,1}^2 \odot 1_1) = (\gamma_{1,1} \odot 1_1)^3$$ and hence $c_{[2]} \odot 1_1 = c \odot \gamma_{1,1}^2 = (\gamma_{1,1} \odot 1_1)^3$ in $H^* (BN(3); \mathbb{F}_2)/ (f_3^* (c_1), f_3^* (c_2))$. On the $E_6$-page the differential $d_6$ is again injective as $$d_6 ((\gamma_{1,1} \odot 1_1) \cdot z_5) = c_{[2]} \gamma_{1,1} \odot c .$$ Again all higher differentials are zero, and $E_7^{*,*} \cong E_{\infty}^{*,*}$, which gives us the following: **Theorem 63**. 
*The mod $2$ cohomology ring of $\overline{\mathrm{Fl}}_3 (\mathbb{C})$ is given by $$H^* (\overline{\mathrm{Fl}}_3 (\mathbb{C}); \mathbb{F}_2) \cong \frac{H^* (BN(3); \mathbb{F}_2)}{\big ( f_3^* (c_1), f_3^* (c_2), f_3^* (c_3) \big )} \cong \frac{\mathbb{F}_2 [\gamma_{1,1} \odot 1_1 ]}{((\gamma_{1,1} \odot 1_1)^7 )} ,$$ where $|\gamma_{1,1} \odot 1_1 | = 1$.* **Corollary 64**. *The Poincaré series of the mod $2$ cohomology ring of $\overline{\mathrm{Fl}}_3 (\mathbb{C})$ is $$\Pi_{\overline{\mathrm{Fl}}_3 (\mathbb{C})} (t) = 1+t +t^2 + t^3 +t^4 + t^5 +t^6 .$$* Note that this agrees with the previous results obtained in Theorem 5.3 and Corollary 5.4 of [@G-J-M] using a different spectral sequence. ## Mod $2$ Cohomology of $\overline{\mathrm{Fl}}_4 (\mathbb{C})$ From ([\[eq:unitary\]](#eq:unitary){reference-type="ref" reference="eq:unitary"}), $H^* (U(4); \mathbb{F}_2) \cong \Lambda_{\mathbb{F}_2} [z_1, z_3, z_5, z_7]$. Also, from Lemma [Lemma 56](#differential){reference-type="ref" reference="differential"} and the pullback formula the differentials in the spectral sequence $E$ associated with $U(4) \rightarrow \overline{\mathrm{Fl}}_4 (\mathbb{C}) \rightarrow BN(4)$ are given by $$\begin{aligned} d_2 (z_1) &= c\odot 1_3 + \gamma_{1,1}^2 \odot 1_2 , \\ d_4 (z_3) &= c_{[2]} \odot 1_2 + c\odot \gamma_{1,1}^2 \odot 1_1 + \gamma_{1,2}^2 , \\ d_6 (z_5) &= c_{[3]} \odot 1_1 + c_{[2]} \odot \gamma_{1,1}^2 + \gamma_{2,1}^2, \\ d_8 (z_7) &= c_{[4]}.\end{aligned}$$ From the Hopf ring structure of $A_{BU(1)}$, we have that $H^* (BN(4); \mathbb{F}_2)$ is generated by $$\label{bn4} \{ c\odot 1_3, c_{[2]} \odot 1_3 , c_{[3]} \odot 1_1 , c_{[4]}, c_{[2]} \odot \gamma_{1,1} , \gamma_{1,1} \odot 1_2, \gamma_{1,2}, \gamma_{2,1} \} ,$$ with all the relations in Theorem [Theorem 4](#thm:cohomology DX mod 2){reference-type="ref" reference="thm:cohomology DX mod 2"} and Hopf ring distributivity. By Lemma [Lemma 59](#cdiff){reference-type="ref" reference="cdiff"}, the differential $d_2$ non-zero on $\{ c\odot 1_3, c_{[2]}\odot 1_2 , c_{[3]}\odot 1_1, c_{[4]} \}$. On the rank zero decorated gathered blocks $d_2$ is given by $$\begin{aligned} d_2 ((\gamma_{1,1} \odot 1_2) \cdot z_1 ) &= (\gamma_{1,1} \odot 1_2) \cdot (c \odot 1_3 + \gamma_{1,1}^2 \odot 1_2) = c\odot \gamma_{1,1} \odot 1_1 + \gamma_{1,1}^3 \odot 1_2 , \\ d_2 ((c_{[2]} \odot \gamma_{1,1} ) \cdot z_1 ) &= (c_{[2]} \odot \gamma_{1,1} ) \cdot (c \odot 1_3 + \gamma_{1,1}^2 \odot 1_2) = c^2 \odot c \odot \gamma_{1,1} + \cdots , \\ d_2 (\gamma_{1,2} \cdot z_1 ) &= \gamma_{1,2} \cdot (c \odot 1_3 + \gamma_{1,1}^2 \odot 1_2) = \gamma_{1,1}^3 \odot \gamma_{1,1} , \\ d_2 (\gamma_{2,1} \cdot z_1 ) &= \gamma_{2,1} \cdot (c \odot 1_3 + \gamma_{1,1}^2 \odot 1_2) = 0. \end{aligned}$$ As $d_2 (\gamma_{2,1} \cdot z_1) = 0$, we have that $d_2$ is not injective on the ideal generated by $z_1$ and $\mathrm{ker} (d_2)$ is the ideal generated by $(\gamma_{2,1} \cdot z_1)$. So, the $E_3 \equiv E_4$-page is given by $$\begin{aligned} & E_4^{*,*} \cong \Big ( \frac{H^* (BN(4); \mathbb{F}_2)}{\big ( f_4^* (c_1) \big ) } \oplus ( \langle \gamma_{2,1} \cdot z_1 \rangle ) \Big ) \otimes \Lambda_{\mathbb{F}_2} [z_3, z_5, z_7] ,\end{aligned}$$ where $\langle \gamma_{2,1} \cdot z_1 \rangle$ denotes the ideal generated by $\gamma_{2,1} \cdot z_1$. As $c\odot 1_3 = \gamma_{1,1}^2 \odot 1_2$ on $E_4^{*,*}$, we can rewrite the differential $d_4$ as $d_4 (z_3) = c_{[2]} \odot 1_2 + \gamma_{1,1}^4 \odot 1_1 + \gamma_{1,2}^2$. 
The injectivity of $d_4$, $d_6$, and $d_8$ on the ideals generated by $z_3$, $z_5$, and $z_7$ respectively, can be verified by directly computing these differentials on the rank zero gathered blocks multiplied by $z_3$, $z_5$, and $z_7$ respectively. On the first row of the spectral sequence, $\gamma_{1,2}^2 \gamma_{2,1} \cdot z_1$ is killed by $d_4 (\gamma_{2,1} \cdot z_1 z_3)$, $\gamma_{2,1}^3 \cdot z_1$ is killed by $d_6 (\gamma_{2,1} \cdot z_1 z_5)$, and $c_{[4]} \gamma_{2,1} \cdot z_1$ is killed by $d_8 (\gamma_{2,1} \cdot z_1 z_7)$ since $$\begin{aligned} d_4 (\gamma_{2,1} \cdot z_3) &= \gamma_{2,1} \cdot (c_{[2]} \odot 1_2 + \gamma_{1,1}^4 \odot 1_1 + \gamma_{1,2}^2) = \gamma_{1,2}^2 \gamma_{2,1}, \\ d_6 (\gamma_{2,1} \cdot z_5) &= \gamma_{2,1} \cdot (c_{[3]} \odot 1_1 + c_{[2]} \odot \gamma_{1,1}^2 + \gamma_{2,1}^2) = \gamma_{2,1}^3, \\ d_8 (\gamma_{2,1} \cdot z_7) &= \gamma_{2,1} \cdot c_{[4]} = c_{[4]} \gamma_{2,1}.\end{aligned}$$ All other higher differentials are zero and we have $$\begin{aligned} E_{\infty}^{*,*} & \cong \frac{H^* (BN(4); \mathbb{F}_2)}{\big ( f_4^* (c_1) , f_4^* (c_2), f_4^* (c_3), f_4^* (c_4) \big )} \oplus \mathbb{F}_2 \{ \gamma_{2,1} \cdot z_1, \gamma_{1,2} \gamma_{2,1} \cdot z_1 ,\gamma_{2,1}^2 \cdot z_1, \gamma_{1,2} \gamma_{2,1}^2 \cdot z_1 \}.\end{aligned}$$ We record the result of our above computation as the following Theorem. **Theorem 65**. *Let $u_4$ be the cohomology class $\gamma_{2,1} \cdot z_1$. Then, we have an isomorphism $$H^* (\overline{\mathrm{Fl}}_4 (\mathbb{C}) ; \mathbb{F}_2) \cong \frac{H^* (BN(4); \mathbb{F}_2)}{\big ( f_4^* (c_1) , f_4^* (c_2), f_4^* (c_3), f_4^* (c_4) \big )} \oplus \mathbb{F}_2 \{ u_4, u_4 \gamma_{1,2}, u_4 \gamma_{2,1}, u_4 \gamma_{1,2} \gamma_{2,1} \} .$$* On the $E_{\infty}$ page of the spectral sequence, the classes $u_4, u_4 \gamma_{1,2}, u_4 \gamma_{2,1}$, and $u_4 \gamma_{1,2} \gamma_{2,1}$ survive due to the non-injectivity of the differential $d_2$. However, in the corresponding spectral sequence for stable cohomology, all differentials are injective on the corresponding ideals generated by the generators $z_{2k-1}$, and therefore these four classes are not restrictions of any stable class. We refer to such classes as "unstable classes". **Corollary 66**. *The Poincaré series of the mod $2$ cohomology ring of $\overline{\mathrm{Fl}}_4$ is $$\Pi_{\overline{\mathrm{Fl}}_4} (t) = 1+t+2t^2 +3t^3 +4t^4 +4t^5 + 5t^6 + 4t^7 + 4t^8 + 3t^9 + 2t^{10} + t^{11} + t^{12} .$$* *Proof.* We can check this directly. Recall that the cohomological dimensions of the classes that generate $H^* (BN(4); \mathbb{F}_2)$ are as follows: $$\begin{aligned} |\gamma_{1,1} \odot 1_2| &= 1, \quad |\gamma_{1,2}| = 2, \quad |\gamma_{2,1}| = 3, \\ |c_{[2]} \odot \gamma_{1,1}| &= 5, \quad \text{and} \quad |c_{[k]} \odot 1_{4-k}| = 2k \quad (1 \le k \le 4). \end{aligned}$$ We also have the following relations between the generators:$$\begin{aligned} \big ( (\gamma_{1,1} \odot 1_2)\cdot (c_{[2]}+1_2) + (c_{[2]} \odot \gamma_{1,1}) \big )\cdot (c\odot 1_3) + (\gamma_{1,2}\odot 1_2)\cdot (c_{[3]} \odot 1_1) &= 0, \\ \text{for all } x\in \{ \gamma_{1,1}\odot 1_2, c_{[2]} \odot \gamma_{1,1}, c \odot 1_{3}, c_{[2]} \odot 1_{2}, c_{[3]} \odot 1_{1} \}, \quad x \cdot \gamma_{2,1} &= 0 , \\ (\gamma_{1,1}\odot 1_2) \cdot (c_{[3]} \odot 1_1) = 0, \quad (c_{[2]} \odot \gamma_{1,1}) \cdot (c_{[3]} \odot 1_1) &= 0, \\ \gamma_{1,2} \cdot (c \odot 1_3) = 0, \quad \gamma_{1,2} \cdot (c_{[3]} \odot 1_1) &= 0. 
\end{aligned}$$ Note that these relations between the generators are not exhaustive. From the description of $H^* (\overline{\mathrm{Fl}}_4; \mathbb{F}_2)$ in Theorem [Theorem 65](#unst4){reference-type="ref" reference="unst4"} and the above relations, we get the following relations in $H^* (\overline{\mathrm{Fl}}_4; \mathbb{F}_2)$ between the Hopf ring generators. By identifying $(\gamma_{1,1} \odot 1_2)^2 = c\odot 1_3$, we have $$\begin{aligned} (\gamma_{1,1} \odot 1_2)^2 \cdot \gamma_{1,2} = 0 , \quad (\gamma_{1,1} \odot 1_2)^2 \cdot \gamma_{2,1} &= 0,\\ (\gamma_{1,1} \odot 1_2)^3 \cdot (c_{[2]}+1_2) + (c_{[2]} \odot \gamma_{1,1})\cdot (\gamma_{1,1} \odot 1_2)^2 &+ (\gamma_{1,2}\odot 1_2)\cdot (c_{[3]} \odot 1_1) = 0. \end{aligned}$$ By identifying $c_{[2]} \odot 1_2 = c\odot \gamma_{1,1}^2 \odot 1_1 + \gamma_{1,2}^2 = (\gamma_{1,1}\odot 1_2)^4 + \gamma_{1,2}^2$ and $c_{[3]} \odot 1_1 = \gamma_{1,2}^2 + \gamma_{2,1}^2 + (c_{[2]}\odot \gamma_{1,1})\cdot (\gamma_{1,1} \odot 1_2)$ we have$$\begin{aligned} \gamma_{1,2}^2 \cdot \gamma_{2,1} = 0 ,\quad (\gamma_{1,1} \odot 1_2)^7 + (\gamma_{1,1} \odot 1_2) \cdot \gamma_{1,2}^3 = 0. \end{aligned}$$ With the information we have so far, we can explicitly write down all generators of the cohomology groups $H^k (\overline{\mathrm{Fl}}_4; \mathbb{F}_2)$ for $0\le k \le 6$. $$\begin{aligned} H^0 (\overline{\mathrm{Fl}}_4; \mathbb{F}_2) &\cong \mathbb{F}_2 , \\ H^1 (\overline{\mathrm{Fl}}_4; \mathbb{F}_2) &\cong \mathbb{F}_2 \{ (\gamma_{1,1} \odot 1_2) \}, \\ H^2 (\overline{\mathrm{Fl}}_4; \mathbb{F}_2) &\cong \mathbb{F}_2 \{ (\gamma_{1,1} \odot 1_2)^2, \gamma_{1,2} \}, \\ H^3 (\overline{\mathrm{Fl}}_4; \mathbb{F}_2) &\cong \mathbb{F}_2 \{ (\gamma_{1,1} \odot 1_2)^3, (\gamma_{1,1} \odot 1_2)\gamma_{1,2}, \gamma_{2,1} \}, \\ H^4 (\overline{\mathrm{Fl}}_4; \mathbb{F}_2) &\cong \mathbb{F}_2 \{ (\gamma_{1,1} \odot 1_2)^4, (\gamma_{1,1} \odot 1_2)\gamma_{2,1}, \gamma_{1,2}^2, u_4 \}, \\ H^5 (\overline{\mathrm{Fl}}_4; \mathbb{F}_2) &\cong \mathbb{F}_2 \{ (\gamma_{1,1} \odot 1_2)^5, (\gamma_{1,1} \odot 1_2)\gamma_{1,2}^2, \gamma_{1,2} \gamma_{2,1}, c_{[2]} \odot \gamma_{1,1} \}, \\ H^6 (\overline{\mathrm{Fl}}_4; \mathbb{F}_2) &\cong \mathbb{F}_2 \{ (\gamma_{1,1} \odot 1_2)^6, (\gamma_{1,1} \odot 1_2)(c_{[2]} \odot \gamma_{1,1}), \gamma_{1,2}^3, \gamma_{2,1}^2, u_4 \gamma_{1,2} \}. \end{aligned}$$ Although we haven't identified the specific generators of $H^k(\overline{\mathrm{Fl}}_4; \mathbb{F}_2)$ for $k>6$, we can still determine its Poincar'e series. Note that $\overline{\mathrm{Fl}}_4$ is a $12$-dimensional manifold and that with mod $2$ coefficients, Poincar'e duality holds without needing the orientability assumption. Hence, the corollary follows. ◻ ## Mod $2$ Cohomology of $\overline{\mathrm{Fl}}_5 (\mathbb{C})$ From ([\[eq:unitary\]](#eq:unitary){reference-type="ref" reference="eq:unitary"}), $H^* (U(5); \mathbb{F}_2) \cong \Lambda_{\mathbb{F}_2} [z_1, z_3, z_5, z_7, z_9]$. 
Also, from Lemma [Lemma 56](#differential){reference-type="ref" reference="differential"} and Example [Example 58](#n5){reference-type="ref" reference="n5"} the differentials in the spectral sequence $E$ associated with $U(5) \rightarrow \overline{\mathrm{Fl}}_5 (\mathbb{C}) \rightarrow BN(5)$ are given by $$\begin{aligned} d_2 (z_1) &= c \odot 1_4 + \gamma_{1,1}^2 \odot 1_3, \\ d_4 (z_3) &= c_{[2]} \odot 1_3 + c \odot \gamma_{1,1}^2 \odot 1_2 + \gamma_{1,2}^2 \odot 1_1 ,\\ d_6 (z_5) &= c_{[3]} \odot 1_2 + c_{[2]} \odot \gamma_{1,1}^2 \odot 1_1 + c\odot \gamma_{1,2}^2 + \gamma_{2,1}^2 \odot 1_1, \\ d_8 (z_7) &= c_{[4]} \odot 1_1 + c_{[3]} \odot \gamma_{1,1}^2 + c \odot \gamma_{2,1}^2 , \\ d_{10} (z_9) &= c_{[5]}.\end{aligned}$$ From the Hopf ring structure of $A_{BU(1)}$, we deduce that $H^* (BN(5); \mathbb{F}_2)$ is generated by $$\{ c\odot 1_4 , c_{[2]} \odot 1_3 , c_{[3]} \odot 1_2 , c_{[4]} \odot 1_1 , c_{[5]} , \gamma_{1,1} \odot 1_3 , c_{[2]} \odot \gamma_{1,1} \odot 1_1 ,\gamma_{1,2} \odot 1_1 , \gamma_{2,1} \odot 1_1 \}$$ with the relations determined by Theorem [Theorem 4](#thm:cohomology DX mod 2){reference-type="ref" reference="thm:cohomology DX mod 2"} along with Hopf ring distributivity. Again, we check the injectivity of the differentials on the rank zero Hopf monomials multiplied by the corresponding $z_{2k-1}$'s. On the $E_2$-page we have $$\begin{aligned} d_2 ((\gamma_{1,1} \odot 1_3) \cdot z_1 ) &= (\gamma_{1,1} \odot 1_3) \cdot d_2 (z_1) = c\odot \gamma_{1,1} \odot 1_2 + \gamma_{1,1}^3 \odot 1_3 + \cdots , \\ d_2 ((c_{[2]} \odot \gamma_{1,1} \odot 1_1) \cdot z_1 ) &= (c_{[2]} \odot \gamma_{1,1} \odot 1_1 ) \cdot d_2 (z_1) = c_{[3]} \odot \gamma_{1,1} + \cdots , \\ d_2 ((\gamma_{1,2} \odot 1_1) \cdot z_1 ) &= (\gamma_{1,2} \odot 1_1) \cdot d_2 (z_1) = c \odot \gamma_{1,2} + \gamma_{1,1}^3 \odot \gamma_{1,1} \odot 1_1 , \\ d_2 ((\gamma_{2,1} \odot 1_1) \cdot z_1 ) &= (\gamma_{2,1} \odot 1_1) \cdot d_2 (z_1) = c\odot \gamma_{2,1} . \end{aligned}$$ The above formulas along with Lemma [Lemma 59](#cdiff){reference-type="ref" reference="cdiff"} show that $d_2$ is injective on the ideal generated by $z_1$. So, $E_4 \equiv E_3$ is given by $$E_4^{*,*} \cong \frac{H^* (BN(5); \mathbb{F}_2)}{\big ( f_5^* (c_1) \big )} .$$ As before, it can be checked that the differentials $d_4$, $d_6$, and $d_8$ are all injective on the ideals generated by $z_3$, $z_5$, and $z_7$ respectively. Thus the $E_{10}$-page is given by $$E_{10}^{*,*} \cong \frac{H^* (BN(5); \mathbb{F}_2)}{\big ( f_5^* (c_1), f_5^* (c_2), f_5^* (c_3) , f_5^* (c_4) \big )} .$$ Finally, the differential $d_{10}$ on the rank zero Hopf monomials on the $E_{10}$-page is given by $$\begin{aligned} d_{10} ((\gamma_{1,1} \odot 1_3) \cdot z_9 ) &= (\gamma_{1,1} \odot 1_3) \cdot c_{[5]} = c_{[2]}\gamma_{1,1} \odot c_{[3]} , \\ d_{10} ((c_{[2]} \odot \gamma_{1,1} \odot 1_1) \cdot z_9 ) &= (c_{[2]} \odot \gamma_{1,1} \odot 1_1 ) \cdot c_{[5]} = c_{[2]}^2 \odot c_{[2]}\gamma_{1,1} \odot c + \cdots , \\ d_{10} ((\gamma_{1,2} \odot 1_1) \cdot z_9 ) &= (\gamma_{1,2} \odot 1_1) \cdot c_{[5]} = c_{[4]}\gamma_{1,2} \odot c , \\ d_{10} ((\gamma_{2,1} \odot 1_1) \cdot z_9 ) &= (\gamma_{2,1} \odot 1_1) \cdot c_{[5]} = c_{[4]} \gamma_{2,1} \odot c \\ &= (c_{[4]} \gamma_{2,1} \odot 1_1) \cdot (c \odot 1_4 + \gamma_{1,1}^2 \odot 1_3) = 0 .\end{aligned}$$ We see that $d_{10}$ is not injective with $\mathrm{ker}(d_{10}) = \langle (\gamma_{2,1} \odot 1_1) \cdot z_9 \rangle$.
Note that $$\begin{aligned} d_{4} ((\gamma_{2,1} \odot 1_1) \cdot z_3 z_9) &= (\gamma_{1,2}^2 \gamma_{2,1} \odot 1_1) \cdot z_9 , \\ d_{6} ((\gamma_{2,1} \odot 1_1) \cdot z_5 z_9) &= (\gamma_{2,1}^3 \odot 1_1) \cdot z_9 , \\ d_{8} ((\gamma_{2,1} \odot 1_1) \cdot z_7 z_9) &= (c_{[4]} \gamma_{2,1} \odot 1_1) \cdot z_9 .\end{aligned}$$ All higher differentials are zero and therefore the spectral sequence collapses at the $E_{11}$-page. The $E_{\infty} \equiv E_{11}$-page is given by $$E_{\infty}^{*,*} \cong \frac{H^* (BN(5); \mathbb{F}_2)}{\big ( f_5^* (c_1), f_5^* (c_2), f_5^* (c_3) , f_5^* (c_4), f_5^* (c_5) \big )} \oplus \mathcal{U}$$ where $\mathcal{U} = \mathbb{F}_2 \{ (\gamma_{2,1} \odot 1_1) \cdot z_9 , (\gamma_{1,2} \gamma_{2,1} \odot 1_1) \cdot z_9 , (\gamma_{2,1}^2 \odot 1_1) \cdot z_9 , (\gamma_{1,2} \gamma_{2,1}^2 \odot 1_1) \cdot z_9 \}$. We summarize the result of our computations in this subsection as the following theorem. **Theorem 67**. *Let us denote the cohomology class $(\gamma_{2,1} \odot 1_1) \cdot z_9$ by $u_{12}$. Then, we have an isomorphism $$H^* (\overline{\mathrm{Fl}}_5 (\mathbb{C}) ; \mathbb{F}_2) \cong \frac{H^* (BN(5); \mathbb{F}_2)}{\big ( f_5^* (c_1) ,\dots , f_5^* (c_5) \big )} \oplus \mathbb{F}_2 \{ u_{12}, u_{12} \gamma_{1,2}, u_{12} \gamma_{2,1}, u_{12} \gamma_{1,2} \gamma_{2,1} \} .$$* ## Mod $p$ Cohomology of $\overline{\mathrm{Fl}}_p (\mathbb{C})$ for $p>2$ {#section:6.4} In this subsection, we present a computation for $H^* (\overline{\mathrm{Fl}}_p (\mathbb{C}); \mathbb{F}_p)$ for all odd prime $p$. From Theorem [Theorem 5](#thm:cohomology DX mod p){reference-type="ref" reference="thm:cohomology DX mod p"}, we have that $H^* (BN(p); \mathbb{F}_p)$ is generated by $\gamma_{1,1}$, $\alpha_{1,1}$ and $c_{[k]} \odot 1_{p-k}$ for $k=1,2,\dots ,p$. Our strategy will be to study the Serre spectral sequence associated with ([\[spseq\]](#spseq){reference-type="ref" reference="spseq"}) with $n=p$. We introduce some notations that will be helpful later in describing the cohomology of $\overline{\mathrm{Fl}}_p (\mathbb{C})$. Recall from ([\[eq:unitary\]](#eq:unitary){reference-type="ref" reference="eq:unitary"}) that, $$\label{eq5} H^* (U(p); \mathbb{F}_p) = \Lambda_{\mathbb{F}_p} [z_1, z_3,\dots , z_{2p-1}]$$ **Definition 68**. Let $S=\{s_1, \dots, s_k\}$ be a subset of $\{ 1,2,\dots, p\}$ such that $s_1 < \cdots <s_k$. We define $z_S$ to be the cohomology class $$z_S := \begin{cases} z_{2s_1 -1} \cdots z_{2s_k -1} & \text{if } S\neq \emptyset \\ 1_{U(p)} & \text{if } S = \emptyset \end{cases}$$ The following are immediate from Theorem [Theorem 31](#pullbackformula){reference-type="ref" reference="pullbackformula"} and Proposition [Proposition 57](#prop:finite-pullback){reference-type="ref" reference="prop:finite-pullback"}. **Corollary 69**. *Let $f_p : BN(p) \rightarrow BU(p)$ as before. Then $f_p^* (c_k) = c_{[k]} \odot 1_{p-k}$ for $k=1,\dots, p-2$, $f_p^* (c_{p-1}) = c_{[p-1]} \odot 1_1 + \gamma_{1,1}$, and $f_p^* (c_p) = c_{[p]}$.* Note that $\alpha_{1,1}$ never shows up in the pullback formulas for $f_p^*$ as it has an odd cohomological dimension whereas $f_p^* (c_k)$ are even-dimensional for all $1 \le k \le p$. **Theorem 70**. *Let $\gamma_S := z_S \cdot \gamma_{1,1}$ and $\alpha_S := z_S \cdot \alpha_{1,1}$, where $z_S$ is as in Definition [Definition 68](#dfn:unitary-gen){reference-type="ref" reference="dfn:unitary-gen"}. 
Then, $$H^* (\overline{\mathrm{Fl}}_p (\mathbb{C}) ; \mathbb{F}_p ) = \frac{\mathbb{F}_p [\gamma_S, \alpha_S| S \subset \{ 1,2,\dots, p-2\} ]}{(\gamma_S^2, \alpha_S^2, \gamma_S \cdot \alpha_S)} .$$* *Proof.* From the Hopf ring structure of $A_{BU(1)}$ in Theorem [Theorem 4](#thm:cohomology DX mod 2){reference-type="ref" reference="thm:cohomology DX mod 2"}, we deduce that $$H^* (BN(p) ; \mathbb{F}_p) \cong \frac{\mathbb{F}_p [\gamma_{1,1}, \alpha_{1,1}, c_{[k]} \odot 1_{p-k} \mid k=1,\dots , p]}{\big ( \alpha_{1,1}^2 , \gamma_{1,1} \cdot (c_{[k]} \odot 1_{p-k}) , \alpha_{1,1} \cdot (c_{[k]} \odot 1_{p-k}) \mid k=1,\dots ,p-1 \big )} .$$ From Lemma [Lemma 56](#differential){reference-type="ref" reference="differential"}, the differential $d_{2k}$ is given by $$d_{2k} : z_{2k-1} \longmapsto f_p^* (c_k) .$$ Using the product structure on the $E_{2i}$-page of the spectral sequence, we deduce the following: $$\begin{aligned} \gamma_{1,1} \cdot (c_{[k]}\odot 1_{p-k}) = 0 \quad &\text{and} \quad \alpha_{1,1} \cdot (c_{[k]}\odot 1_{p-k}) = 0 \quad \text{for all } k=1,\dots,p-1 \\ \gamma_{1,1} \cdot c_{[p]} \neq 0 \quad &\text{and} \quad \alpha_{1,1} \cdot c_{[p]} \neq 0. \end{aligned}$$ Hence, $d_{2k} (z_{2k-1}\cdot \gamma_{1,1}) = 0$ and $d_{2k} (z_{2k-1} \cdot \alpha_{1,1}) = 0$ for all $k=1,2,\dots, p-2$. Also, using the multiplicative structure of the spectral sequence and the formulas from Corollary [Corollary 69](#cor:diff-modp){reference-type="ref" reference="cor:diff-modp"}, we have $$\begin{aligned} d_{2(p-1)} (z_{2p-3} \cdot \gamma_{1,1}) = \gamma_{1,1}^2 &, \quad d_{2(p-1)} (z_{2p-3} \cdot \alpha_{1,1}) = \gamma_{1,1} \cdot \alpha_{1,1} \\ d_{2p} (z_{2p-1} \cdot \gamma_{1,1}) = \gamma_{1,1} \cdot c_{[p]} &, \quad d_{2p} (z_{2p-1} \cdot \alpha_{1,1}) = \alpha_{1,1} \cdot c_{[p]}. \end{aligned}$$ As all higher differentials $d_k$ for $k > 2p$ are zero, the spectral sequence collapses at the $E_{2p}$-page and the $E_{\infty}$-page is given by $$E_{\infty}^{*,*} \cong \frac{H^* (BN(p) ; \mathbb{F}_p)}{\big ( f_p^* (c_1) ,\dots ,f_p^* (c_p) \big )} \oplus \big ( \mathbb{F}_p \{ \gamma_{1,1} , \alpha_{1,1} \} \otimes \Lambda_{\mathbb{F}_p} [z_1, \dots , z_{2p-5}] \big ) .$$ This proves the theorem. ◻ **Corollary 71**. *The mod $p$ Poincaré series of $\overline{\mathrm{Fl}}_p (\mathbb{C})$ is given by $$\Pi_{\overline{\mathrm{Fl}}_p}^p (t) = 1 + (t^{2p-3} + t^{2p-2}) \prod_{k=1}^{p-2} (1+t^{2k-1}) .$$* ## Mod $p$ cohomology of $\overline{\mathop{\mathrm{Fl}}}_p(\mathbb{R})$ for $p>2$ We conclude this section by computing the mod $p$ cohomology of $\overline{\mathop{\mathrm{Fl}}}_p(\mathbb{R})$ for all odd primes $p$. From Theorem [Theorem 19](#thm:cohomology alternating group mod p){reference-type="ref" reference="thm:cohomology alternating group mod p"} and Corollary [Corollary 20](#cor:res mod p){reference-type="ref" reference="cor:res mod p"}, there is an isomorphism $$H^*(B\mathop{\mathrm{B}}_{p}^{+}; \mathbb{F}_p) \cong \frac{\mathbb{F}_p[\gamma_{1,1},\alpha_{1,1}]}{(\alpha_{1,1}^2)},$$ where the degrees of $\gamma_{1,1}$ and $\alpha_{1,1}$ are $2p-2$ and $2p-3$, respectively. Similarly, $$H^*(SO(p); \mathbb{F}_p) \cong \Lambda(\{a_3,\dots,a_{2p-3}\}).$$ The differential $d_{4r}$ of the $4r$-th page of the associated spectral sequence maps $a_{4r-1}$ to the pullback of the Pontrjagin class $\mathcal{p}_r$, and all the other differentials are $0$.
By Proposition [Proposition 60](#prop:finite-pullback real){reference-type="ref" reference="prop:finite-pullback real"}, $d_{2p-2}(a_{2p-3}) = \gamma_{1,1}$, and $d_r = 0$ if $r \not= 2p-2$. We conclude that $E_2^{*,*} \cong E_{2p-2}^{*,*}$ and that $$E_{\infty}^{*,*} \cong E_{2p-1}^{*,*} \cong \Lambda(\{a_3,\dots,a_{2p-7}\}) \otimes \frac{\mathbb{F}_p[\alpha_{1,1}]}{(\alpha_{1,1}^2)}.$$ We deduce the following results. **Theorem 72**. *Let $p > 2$ be an odd prime. Then there is an isomorphism of graded algebras $$H^*(\overline{\mathop{\mathrm{Fl}}}_p(\mathbb{R}); \mathbb{F}_p) \cong \Lambda(\{a_3,\dots,a_{2p-7}\}) \otimes \frac{\mathbb{F}_p[\alpha_{1,1}]}{(\alpha_{1,1}^2)}.$$* **Corollary 73**. *The mod $p$ Poincaré series of $\overline{\mathop{\mathrm{Fl}}}_p(\mathbb{R})$ is given by $$\Pi^p_{\overline{\mathop{\mathrm{Fl}}}_p(\mathbb{R})}(t) = (1+t^3)(1+t^7)\cdots (1+t^{2p-7})(1+t^{2p-3}).$$* [^1]: The first author acknowledges the MIUR Excellence Department Project awarded to the Department of Mathematics, University of Rome Tor Vergata, CUP E83C18000100006.
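As a quick numerical sanity check of the closed-form Poincaré series above (not part of the original argument), the short Python sketch below sums the degrees of the classes $1$, $\gamma_S$, $\alpha_S$ ($S \subseteq \{1,\dots,p-2\}$) appearing in Theorem 70 and compares the result with the product formula of Corollary 71 for small odd primes, and it also expands the formula of Corollary 73; the script, its function names, and the use of `sympy` are our own illustrative choices.

```python
# Illustrative consistency check (ours, not part of the paper): compare the degree-by-degree
# expansion of the basis behind Theorem 70 with the closed-form Poincare series of
# Corollary 71, and print the expanded series of Corollary 73, for small odd primes p.
from itertools import combinations
import sympy as sp

t = sp.symbols('t')

def poly_prod(factors):
    out = sp.Integer(1)
    for fct in factors:
        out *= fct
    return out

def flp_C_from_basis(p):
    """Sum t^|x| over the classes 1, gamma_S, alpha_S with S a subset of {1,...,p-2}."""
    poly = sp.Integer(1)
    for r in range(p - 1):
        for S in combinations(range(1, p - 1), r):
            zdeg = sum(2 * s - 1 for s in S)        # |z_S|
            poly += t**(zdeg + 2 * p - 2)           # gamma_S = z_S * gamma_{1,1}
            poly += t**(zdeg + 2 * p - 3)           # alpha_S = z_S * alpha_{1,1}
    return sp.expand(poly)

def flp_C_closed(p):
    """Corollary 71: 1 + (t^{2p-3} + t^{2p-2}) * prod_{k=1}^{p-2} (1 + t^{2k-1})."""
    prod = poly_prod(1 + t**(2 * k - 1) for k in range(1, p - 1))
    return sp.expand(1 + (t**(2 * p - 3) + t**(2 * p - 2)) * prod)

def flp_R_closed(p):
    """Corollary 73: (1+t^3)(1+t^7)...(1+t^{2p-7})(1+t^{2p-3})."""
    degs = list(range(3, 2 * p - 6, 4)) + [2 * p - 3]
    return sp.expand(poly_prod(1 + t**d for d in degs))

for p in (3, 5, 7):
    assert sp.expand(flp_C_from_basis(p) - flp_C_closed(p)) == 0
    print(p, flp_C_closed(p), flp_R_closed(p))
```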
--- abstract: | In this paper, we estimate the variance of two coupled paths derived with the Multilevel Monte Carlo method combined with the Euler-Maruyama discretization scheme for the simulation of McKean-Vlasov stochastic differential equations with small noise. The result often translates into a more efficient method than the standard Monte Carlo method combined with algorithms tailored to the small noise setting. **Key words:** Multilevel Monte Carlo, McKean-Vlasov stochastic differential equations, small noise, Euler-Maruyama scheme, variance of coupled paths. **AMS Subject Classification**: 60H10, 60H35, 65C30 author: - | **Ulises Botija-Munoz and Chenggui Yuan** \ Department of Mathematics, Swansea University, Bay Campus, SA1 8EN, UK\ Email: 942493\@swansea.ac.uk, c.yuan\@swansea.ac.uk title: " **Multilevel Monte Carlo EM scheme for MV-SDEs with small noise[^1]** " --- # Introduction An important problem in science is to approximate the value $\mathbb{E}[\Psi(X_T)]$, where $\{X_t\}_{0 \leq t \leq T}$ is the solution to an SDE and $\Psi:\mathbb R^d \rightarrow \mathbb R$ is a given functional. Among all the methods that allow us to compute this expectation, Monte Carlo simulation is arguably the most flexible. Its drawback is the high computational cost. Therefore, a lot of effort has been devoted to reducing this cost. In 2008, Giles proposed in the influential paper [@g081] the multilevel Monte Carlo (MLMC) method, which greatly reduces the computational cost of approximating $\mathbb{E}[\Psi(X_T)]$ with respect to the standard Monte Carlo (MC) method. If $\delta$ is the accuracy in terms of confidence intervals, the computation of $\mathbb{E}[\Psi(X_T)]$, where $X_T$ is simulated using the Euler-Maruyama (EM) method, has a computational cost (measured as the number of times that the random number generator is called) that scales as $\delta^{-3}$. In [@g081], it is proved that the cost of the MLMC method combined with the EM scheme scales like $\delta^{-2}(\log \delta)^2.$ Since then, numerous papers have appeared to customize, adapt and extend the principles of the MLMC method to specific problems. One of these papers is [@ahs15], where the authors applied the multilevel Monte Carlo framework to SDEs with small noise. They compare the computational cost of the standard MC method (combined with discretization algorithms tailored to the small noise setting) versus the multilevel Monte Carlo method combined with the Euler-Maruyama (EM) scheme. They found that when $\delta \leq \varepsilon^2$, there is no benefit from using discretization methods customized for the small noise case. Moreover, if $\delta \geq e^{-\frac 1 \varepsilon}$, the EM scheme combined with the MLMC method leads to a cost $O(1).$ This is the same cost we would have with the standard MC method if we had $X_T$ as an explicit function of $W_T$, so that no discretization method would be required. In other words, the discretization method comes for free. Here, we extend the work from [@ahs15] to McKean-Vlasov SDEs (MV-SDEs) with small noise and we obtain the same estimate for the variance of two coupled paths. This means that the additional McKean-Vlasov component does not add computational complexity (per equation in the system of particles) and their conclusion about the computational cost of the method remains valid in our case. If we have a system of SDEs with $M$ particles, the total complexity is $M$ times the complexity of simulating one particle.
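To fix ideas before the formal construction below, the following minimal Python sketch (ours, not the authors' code) simulates one pair of coupled fine/coarse Euler-Maruyama paths for an interacting particle system with small noise, using the same Brownian increments on both levels and refinement factor $N$; the drift $f$, the diffusion $g$, the functional (taken here as the identity), and all numerical parameters are illustrative placeholders.

```python
import numpy as np

# Minimal sketch (ours, not from the paper): one pair of coupled Euler-Maruyama paths
# for an interacting particle system with small noise. The fine path uses step
# h_l = T/N**l, the coarse path uses h_{l-1} = T/N**(l-1), and both are driven by the
# same Brownian increments. f, g, the payoff (identity) and all parameters below are
# illustrative placeholders.

def f(x, mean_x):                    # drift f(x, mu); mu enters via its mean here
    return -x + 0.5 * mean_x

def g(x, mean_x):                    # diffusion g(x, mu)
    return 1.0 + 0.1 * np.cos(x - mean_x)

def coupled_em_pair(T=1.0, N=4, l=3, M=200, eps=0.1, x0=1.0, seed=None):
    rng = np.random.default_rng(seed)
    h_f = T / N**l                   # fine step h_l
    h_c = T / N**(l - 1)             # coarse step h_{l-1}
    Yf = np.full(M, x0, dtype=float) # fine particle system
    Yc = np.full(M, x0, dtype=float) # coarse particle system
    for n in range(N**(l - 1)):      # coarse time steps
        dW_sum = np.zeros(M)
        for k in range(N):           # N fine substeps inside one coarse step
            dW = np.sqrt(h_f) * rng.standard_normal(M)
            mf = Yf.mean()           # empirical measure of the fine system
            Yf = Yf + f(Yf, mf) * h_f + eps * g(Yf, mf) * dW
            dW_sum += dW
        mc = Yc.mean()               # empirical measure of the coarse system
        Yc = Yc + f(Yc, mc) * h_c + eps * g(Yc, mc) * dW_sum
    return Yf, Yc

# Sample variance of the difference of the coupled paths at the final time for one
# representative particle, over independent coupled pairs.
diffs = []
for s in range(50):
    Yf, Yc = coupled_em_pair(seed=s)
    diffs.append(Yf[0] - Yc[0])
print("coupled-path variance estimate:", np.var(diffs, ddof=1))
```

The sample variance printed at the end is the level quantity whose dependence on the step size and on $\varepsilon$ is what this paper quantifies.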
The MV-SDE with small noise that we will be working with in this paper has the form $$\label{2.0} {\mbox d}X^{\varepsilon}(t)=f(X^{\varepsilon}(t),\mathcal{L}_{t}^{X}){\mbox d}t+ \varepsilon g(X^{\varepsilon}(t),\mathcal{L}_{t}^{X}){\mbox d}W(t),\quad t \geq 0,$$ with initial data $X(0)=x_0$, where $\varepsilon\in(0,1)$, $\mathcal{L}_{t}^{X}$ is the law (or distribution) of $X(t)$, and $$f:\mathbb R^d \times \mathcal P_2(\mathbb R^{d}) \rightarrow \mathbb R^d \mbox{ and }g: \mathbb R^d \times \mathcal P_2(\mathbb R^{d}) \rightarrow \mathbb R^{d \times \bar d}.$$ The pioneering work on McKean-Vlasov SDEs is due to McKean in his work on the Boltzmann equation [@mckeanII]. Since then, MV-SDEs have been used extensively in biological systems, financial engineering and physics [@Bala], [@Buck], [@Gu], [@Erban]. The existence and uniqueness theory for strong solutions of MV-SDEs with coefficients satisfying the Lipschitz condition is well established; see, e.g., [@Szn]. Due to the propagation of chaos result [@mckeanI], Equation [\[2.0\]](#2.0){reference-type="eqref" reference="2.0"} can be regarded as the limit of the following interacting particle system $$\begin{aligned} \label{particleSystem} \mathrm{d}X^{\varepsilon,i,M}(t) =f(X^{\varepsilon, i,M}(t),\frac{1}{M}\sum\limits_{j=1}^{M}\delta_{X^{\varepsilon, j,M}(t)})\mathrm{d}t +\varepsilon g(X^{\varepsilon,i,M}(t),\frac{1}{M}\sum\limits_{j=1}^{M}\delta_{X^{\varepsilon, j,M}(t)})\mathrm{d}W^{i}(t),~~~~t\in[0,T].\end{aligned}$$ Our main task in the rest of the paper is to discretize [\[particleSystem\]](#particleSystem){reference-type="eqref" reference="particleSystem"} using the EM scheme and estimate the variance of two coupled paths in the Multilevel Monte Carlo setting. This directly translates into the computational cost of computing $\mathbb{E}[\Psi(X^{\varepsilon,i,M}(T))]$; see [@ahs15] for details. # Preliminaries ## Notation Throughout this paper, unless otherwise specified, we let $(\Omega, {\cal{F}},\{{\cal{F}}_{t}\}_{t\ge 0}, \mathbb{P})$ be a complete probability space with a filtration $\{{\cal F}_t\}_{t\ge 0}$ satisfying the usual conditions. Let $W(t)=(W_1(t),\ldots,W_{\bar d}(t))^T$ be a $\bar d$-dimensional Brownian motion defined on the probability space. For any $q>0$, let $L^{q}=L^{q}(\Omega;\mathbb R^{d})$ be the family of $\mathbb R^{d}$-valued random variables $Z$ with $\mathbb{E}[|Z|^q]<+\infty$. Let $\mathcal{L}^{Z}$ denote the probability law (or distribution) of a random variable $Z$. $\delta_{x}(\cdot)$ denotes the Dirac delta measure concentrated at a point $x\in\mathbb R^{d}$. For $q\geq1$, we denote by $\mathcal{P}_q(\mathbb R^{d})$ the set of probability measures on $\mathbb R^{d}$ with finite $q$th moments, and define $$\label{qmoments} W_q(\mu):=\left(\int_{\mathbb R^{d}}|x|^q\mu(\mathrm{d}x)\right)^{\frac{1}q}, \quad \forall \mu \in \mathcal{P}_q(\mathbb R^{d}).$$ We assume that $(\Omega,{\cal{F}},\{{\cal{F}}_{t}\}_{t\ge 0}, \mathbb{P})$ is atomless so that, for any $\mu \in \mathcal P_2(\mathbb R^d)$, there exists a random variable $X \in L^2(\Omega, {\cal F}, \mathbb{P}; \mathbb R^d)$ such that $\mu = \mathcal{L}^X$. Consider the MV-SDE with small noise [\[2.0\]](#2.0){reference-type="eqref" reference="2.0"}.
Let $f_i$ be the $i^{th}$ component of $f$. Then for $x \in \mathbb R^d$ and $\mu \in \mathcal P_2(\mathbb R^d),$ we denote $$\begin{aligned} \nabla f_i(x,\mu) &:=\left(\frac{\partial f_i(x,\mu)}{\partial x_1},...,\frac{\partial f_i(x,\mu)}{\partial x_d}\right), \\ \nabla^2 f_i(x,\mu) &:= \begin{bmatrix} \frac{\partial^2 f_i(x,\mu)}{\partial x^2_1} & ... & \frac{\partial^2 f_i(x,\mu)}{\partial x_1 \partial x_d} \\ \vdots & \vdots & \vdots\\ \frac{\partial^2 f_i(x,\mu)}{\partial x_d \partial x_1} & ... & \frac{\partial^2 f_i(x,\mu)}{\partial x^2_d} \\ \end{bmatrix}.\end{aligned}$$ Note that $\nabla f_i$ and $\nabla^2 f_i$ are not the gradient and the Hessian matrix of $f_i$ (because $f_i$ depends also on a probability measure), but we use this notation for convenience. **Lemma 1**. *[@carmona] (Wasserstein Distance) Let $q\geq1$. Define $$\label{wasser} \mathbb{W}_q(\mu,\nu):=\inf_{\pi \in \mathcal{D}(\mu,\nu)}\bigg\{\int_{\mathbb R^{d}}|x-y|^q\pi(\mathrm{d}x,\mathrm{d}y)\bigg\}^{\frac{1}q}, ~\mu,\nu\in\mathcal{P}_q(\mathbb R^{d}),$$ where $\mathcal{D}(\mu,\nu)$ is the set of all couplings for $\mu$ and $\nu$. Then $\mathbb{W}_q$ is a distance on $\mathcal{P}_q(\mathbb R^{d})$.* **Lemma 2**. *[@carmona]   For any $\mu\in\mathcal{P}_{2}(\mathbb R^{d})$, $\mathbb{W}_{2}(\mu,\delta_{0})=W_{2}(\mu)$.* ## Lions Derivatives In this subsection, we will give the definition of the Lions derivative (or $L$-derivative) for a function $u:\mathcal{P}(\mathbb R^{d}) \rightarrow \mathbb R$ as introduced in [@car]. Given $(\Omega, {\cal F}, \mathbb{P})$, an atom is a set $A \in {\cal F}$ with $\mathbb{P}(A)>0$ such that every $B \in {\cal F}$ with $B \subset A$ and $\mathbb{P}(B)<\mathbb{P}(A)$ satisfies $\mathbb{P}(B)=0.$ **Definition 1**. *We say that $u:\mathcal{P}(\mathbb R^{d}) \rightarrow \mathbb R$ is $L$-differentiable at $\mu \in {\cal P}(\mathbb R^d)$ if there is an atomless probability space $(\Omega, {\cal F}, \mathbb{P})$ and an $X \in L^2(\Omega, {\cal F}, \mathbb{P};\mathbb R^d)$ such that $\mu = \mathcal L(X)$ and the function $U: L^2(\Omega, {\cal F}, \mathbb{P};\mathbb R^d) \rightarrow \mathbb R$ given by $U(X):=u(\mathcal{L}(X))$ is Frechet differentiable at $X$.* We recall that $U$ being Frechet differentiable at $X$ means that there exists a continuous linear mapping $DU(X):L^2(\Omega, {\cal F}, \mathbb{P};\mathbb R^d) \rightarrow \mathbb R$ such that for any $Y \in L^2(\Omega, {\cal F}, \mathbb{P};\mathbb R^d)$ $$U(X+Y)-U(X)=DU(X)(Y)+o(|Y|_{L^2}), \quad \text{as } |Y|_{L^2} \rightarrow 0.$$ Since $DU(X)$ is a continuous linear functional on $L^2(\Omega, {\cal F}, \mathbb{P};\mathbb R^d)$, by the Riesz representation theorem, there exists a $\mathbb{P}$-a.s.
unique variable $Z \in L^2(\Omega, {\cal F}, \mathbb{P};\mathbb R^d)$ such that for any $Y \in L^2(\Omega, {\cal F}, \mathbb{P};\mathbb R^d)$ $$DU(X)(Y) = \langle Y,Z \rangle_{L^2} = \mathbb{E}[\langle Y,Z\rangle].$$ Cardaliaguet showed in [@car] that there exists a Borel measurable function $h:\mathbb R^d \rightarrow \mathbb R^d$ which only depends on the distribution $\mathcal{L}(X)$ rather than $X$ itself such that $Z=h(X)$. Thus, for $X, Y \in L^2(\Omega, {\cal F}, \mathbb{P};\mathbb R^d)$, $$u(\mathcal{L}(Y))-u(\mathcal{L}(X))=\mathbb{E}[\langle h(X),Y-X\rangle]+o(|Y-X|_{L^2}).$$ We call $\partial_\mu u(\mathcal{L}(X))(y):=h(y), y \in \mathbb R^d$ the $L$-derivative of $u$ at $\mathcal{L}(X),X \in L^2(\Omega, {\cal F}, \mathbb{P};\mathbb R^d).$ Let $\bar u:\mathcal{P}(\mathbb R^{d}) \rightarrow \mathbb R$ be $L$-differentiable. Then by the mean value theorem (see chapter 5 in [@carmona]), for any two $d$-dimensional random variables $X$ and $X'$, there exists a $\theta \in [0,1]$ such that $$\label{mvt} \bar u(\mathcal{L}(X))- \bar u(\mathcal{L}(X'))=\mathbb{E}[\langle \partial_\mu \bar u(\mathcal{L}(\theta X +(1-\theta)X'))(\theta X +(1-\theta)X'),(X-X')\rangle].$$ # Multilevel Monte Carlo EM scheme for MV-SDEs with small noise {#sec2} We shall impose the following hypothesis on the functions $f$ and $g$: **Assumption 1**. *There exists a positive constant $K$ such that $$\begin{aligned} \label{a1a} |f(x,\mu)-f(y,\nu)|^2\vee |g(x,\mu)-g(y,\nu)|^2 &\leq K\big(|x-y|^{2}+\mathbb{W}^{2}_{2}(\mu,\nu)\big), \end{aligned}$$ holds for any $x,~y\in\mathbb R^{d}$, $\mu,~\nu\in\mathcal{P}_{2}(\mathbb R^{d})$. Furthermore there exists a positive constant $\bar K$ such that $$|\nabla f(x,\mu)|^2 \vee |\nabla^2 f(x,\mu)|^2 \vee |\partial_{\mu} f(x,\mu)(y)|^2 \vee |\partial^2_{\mu} f(x,\mu)(y)|^2 \leq \bar K$$ for all $x,~y\in\mathbb R^{d}$, $\mu \in\mathcal{P}_{2}(\mathbb R^{d})$. In addition, there exists a positive constant $K$ such that $$\label{a2a} |\partial_{\mu} f(x,\mu)(y)-\partial_{\mu} f(\bar x,\nu)(\bar y)|^2 \leq K\big(|x-\bar x|^2 +|y-\bar y|^2 +\mathbb W^2_2(\mu,\nu)\big)$$ for all $x,y,\bar x, \bar y \in\mathbb R^{d}$, $\mu,\nu \in\mathcal{P}_{2}(\mathbb R^{d}).$* **Lemma 3**. *Let Assumption [Assumption 1](#a1){reference-type="ref" reference="a1"} hold. Then, for any $T>0$ and $p\ge 2,$ we have $$\mathbb{E}\left[\sup_{0\le t\le T}|X^\varepsilon(t)|^p\right] \le C.$$* The proof of this lemma is standard, so we omit it here. **Remark 1**. *Assumption [Assumption 1](#a1){reference-type="ref" reference="a1"} implies the existence and uniqueness of a solution to equation [\[2.0\]](#2.0){reference-type="eqref" reference="2.0"}. Moreover, if Assumption [Assumption 1](#a1){reference-type="ref" reference="a1"} holds, then $$|f(x,\mu)|^2 \vee |g(x, \mu)|^2\le \beta(1+|x|^2 + W^{2}_{2}(\mu))$$ for any $x \in \mathbb R^d$ and $\mu\in\mathcal{P}_{2}(\mathbb R^{d})$, where $\beta=2\max\{1, |f(0,\delta_{0})|,|g(0,\delta_{0})|\}$, and $$\langle x-y,f(x,\mu)-f(y,\nu)\rangle \le \bar{\alpha}\big(|x-y|^2+\mathbb{W}^{2}_{2}(\mu,\nu)\big)$$ for any $x,y \in \mathbb R^d$ and $\mu,\nu\in\mathcal{P}_{2}(\mathbb R^{d})$, where $\bar{\alpha}=\frac{1}{2}(1+K)$.* ## Stochastic Particle Method {#s3.2} In this subsection, we make use of the stochastic particle method [@bossy] to approximate the MV-SDE [\[2.0\]](#2.0){reference-type="eqref" reference="2.0"}. For any $i\in\mathbb{N}$, $\{W^{i}(t)\}_{t\in[0,T]}$ is a $\bar d$-dimensional Brownian motion.
Assume $\{W^{1}(t)\}, \{W^{2}(t)\}, \cdots$ are independent and $x^{1}, x^{2}, \cdots$ are independent and identically distributed ($i.i.d.$). Let $\{X^{\varepsilon,i}(t)\}_{t\in[0,T]}$ be the unique solution to the MV-SDE $$\label{eq3.32} \mathrm{d}X^{\varepsilon, i}(t)=f(X^{\varepsilon, i}(t),\mathcal{L}_{t}^{X^{\varepsilon, i}}) \mathrm{d}t+\varepsilon g(X^{\varepsilon, i}(t),\mathcal{L}_{t}^{X^{\varepsilon, i}})\mathrm{d}W^{i}(t),$$ with the initial condition $X^{\varepsilon,i}(0) =x^{i}$, where $\mathcal{L}_{t}^{X^{\varepsilon, i}}$ is the law of $X^{\varepsilon, i}(t)$. One can see that $X^{\varepsilon, 1}(t), X^{\varepsilon, 2}(t), \cdots$ are i.i.d. for $t\geq 0$. For any $M\in\mathbb{N},~1\leq i\leq M$, let $X^{\varepsilon,i,M}(t)$ be the solution of the SDE $$\begin{aligned} \label{eq3.33} \mathrm{d}X^{\varepsilon,i,M}(t) =f(X^{\varepsilon, i,M}(t),\mathcal{L}_{t}^{\varepsilon,X,M})\mathrm{d}t +\varepsilon g(X^{\varepsilon,i,M}(t),\mathcal{L}_{t}^{\varepsilon,X,M})\mathrm{d}W^{i}(t),~~~~t\in[0,T],\end{aligned}$$ with the initial condition $X^{\varepsilon,i,M}_0 =x^{i}$, where $\mathcal{L}_{t}^{\varepsilon,X,M}:=\frac{1}{M}\sum\limits_{j=1}^{M}\delta_{X^{\varepsilon, j,M}(t)}$. We now state a path-wise propagation of chaos result for the SDEs $(\ref{eq3.33})$. **Lemma 4**. *[@li] [\[le3.6\]]{#le3.6 label="le3.6"} If Assumption $\ref{a1}$ holds, then $$\begin{aligned} \displaystyle\sup_{1\leq i\leq M}\mathbb{E}\big[\sup_{0\leq t\leq T}|X^{\varepsilon,i}(t)-X^{\varepsilon,i,M}(t)|^{2}\big]\leq C\left\{ \begin{array}{lll} M^{-\frac{1}{2}},~~~&1\leq d<4,\\ M^{-\frac{1}{2}}\log(M),~~~&d=4,\\ M^{-\frac{2}{d}},~~~&4<d, \end{array} \right.\end{aligned}$$ where $C$ is independent of $M$.* ## The EM Scheme for MV-SDEs with small noise {#sec2.1} We now introduce the EM scheme for [\[2.0\]](#2.0){reference-type="eqref" reference="2.0"}. Given any time $T>0$, assume that there exists a positive integer $m$ such that $h=\frac{T}{m}$, where $h\in (0,1)$ is the step size. Let $t_n=nh$ for $n \ge 0$. Compute the discrete approximations $Y_{h, n}^{\varepsilon, i, M}=Y_{h}^{\varepsilon, i, M}(t_n)$ by setting $Y_h^{\varepsilon, i, M}(0)=x^{i}$ and forming $$\label{discrete} \begin{split} Y_{h, n+1}^{\varepsilon, i, M}=Y_{h, n}^{\varepsilon, i, M} + f(Y_{h, n}^{\varepsilon, i, M}, \mathcal{L}_h^{\varepsilon, Y_n,M} )h+\varepsilon g(Y_{h, n}^{\varepsilon, i, M}, \mathcal{L}_h^{\varepsilon, Y_n,M} )\Delta W^i(t_n), \end{split}$$ where $\mathcal{L}_h^{\varepsilon, Y_n,M} =\frac{1}{M}\sum\limits_{j=1}^{M}\delta_{Y^{\varepsilon, j,M}_{h, n}}$ and $\Delta W^i(t_n)=W^i(t_{n+1})-W^i(t_n)$. Let $$\begin{aligned} \label{eq4.3} Y_h^{\varepsilon, i,M}(t)=Y^{\varepsilon, i,M}_{h,k}, ~~~~ t\in[t_k, t_{k+1}).\end{aligned}$$ For convenience, we define $\mathcal{L}^{\varepsilon, Y,M}_{h, t}=\frac{1}{M}\sum\limits_{j=1}^{M}\delta_{Y_h^{\varepsilon, j,M}(t)}$ and $\eta_h(t):=\lfloor t/h\rfloor h$ for $t\geq 0$. Then one observes $\mathcal{L}^{\varepsilon, Y,M}_{h, t}=\mathcal{L}^{\varepsilon, Y,M}_{h, \eta_h(t)}=\mathcal{L}^{\varepsilon, Y_k,M}_{h}$, for $t\in [t_k, t_{k+1})$. We now define the EM continuous approximate solution as follows: $$\label{EMsol} \bar{Y}_h^{\varepsilon, i,M}(t)=x^{i}+\int^{t}_{0}f(Y_h^{\varepsilon, i,M}(s),\mathcal{L}_{h, s}^{\varepsilon, Y,M})\mathrm{d}s +\varepsilon\int^{t}_{0}g (Y_h^{\varepsilon, i,M}(s),\mathcal{L}_{h,s}^{\varepsilon, Y,M})\mathrm{d}W^{i}(s), ~ t\ge 0.$$ **Lemma 5**. *Let Assumption [Assumption 1](#a1){reference-type="ref" reference="a1"} hold.
Then, for any $T>0$ and $p\ge 2,$ we have $$\mathbb{E}\left[\sup_{0\le t\le T}|\bar Y_{h}^{\varepsilon, i,M}(t)|^p\right] \le C.$$* The proof of this lemma is standard, so we omit it here. **Lemma 6**. *Let Assumption [Assumption 1](#a1){reference-type="ref" reference="a1"} hold. Then, for any $p\ge2,$ we have $$\sup_{0\le t \le T}\mathbb{E}[|\bar Y^{\varepsilon, i,M}_{h}(t)- Y_{h}^{\varepsilon, i,M}(t)|^p]\le Ch^p+ C \varepsilon^ph^{p/2}.$$* **Proof.** Let $n$ be such that $t_n \leq t \leq t_{n +1}.$ From [\[EMsol\]](#EMsol){reference-type="eqref" reference="EMsol"} we have $$\bar{Y}_h^{\varepsilon, i,M}(t)- Y_h^{\varepsilon, i,M}(t)=\int^{t}_{t_n}f(Y_h^{\varepsilon, i,M}(s),\mathcal{L}_{h, s}^{\varepsilon, Y,M})ds +\varepsilon\int^{t}_{t_n}g (Y_h^{\varepsilon, i,M}(s),\mathcal{L}_{h,s}^{\varepsilon, Y,M})dW^{i}(s).$$ By Remark [Remark 1](#onem){reference-type="ref" reference="onem"} and the BDG inequality, one has $$\begin{split} \mathbb{E}|\bar Y^{\varepsilon, i,M}_{h}(t)- Y_{h}^{\varepsilon, i,M}(t)|^p\le&2^{p-1}h^{p-1}\mathbb{E}\int_{t_n}^t|f(Y_h^{\varepsilon, i,M}(s),\mathcal{L}_{h, s}^{\varepsilon, Y,M})|^p ds\\ &+C\varepsilon^ph^{\frac{p}{2}-1}\mathbb{E}\int_{t_n}^t|g(Y_h^{\varepsilon, i,M}(s),\mathcal{L}_{h, s}^{\varepsilon, Y,M})|^p ds\\ \le& Ch^p+ C\varepsilon^ph^{\frac{p}{2}}. \end{split}$$ The proof is therefore complete. $\Box$ We now estimate the error between the numerical solution [\[EMsol\]](#EMsol){reference-type="eqref" reference="EMsol"} and the solution of the particle system [\[eq3.33\]](#eq3.33){reference-type="eqref" reference="eq3.33"}. **Theorem 1**. *Let Assumption [Assumption 1](#a1){reference-type="ref" reference="a1"} hold, and assume that $\Psi:\mathbb{R}^d\rightarrow\mathbb{R}$ has continuous second-order derivatives and that there exists a constant $C$ such that $$\begin{split} \left|\frac{\partial \Psi}{\partial x_i}\right|\le C \end{split}$$ for any $i=1,2,\cdots,d$. Then we have $$\begin{split} \sup\limits_{0\le t\le T}\mathbb{E}|\Psi(X^{\varepsilon, i, M}(t))-\Psi(\bar Y_h^{\varepsilon, i, M}(t))|^2 \le Ch^2+ C h\varepsilon^2.
\end{split}$$* **Proof.** By Assumption [Assumption 1](#a1){reference-type="ref" reference="a1"} and Lemma [Lemma 6](#womiss){reference-type="ref" reference="womiss"}, one can see that $$\label{new} \begin{split} &\sup\limits_{0\le t\le T}\mathbb{E}|X^{\varepsilon, i, M}(t)-\bar Y_h^{\varepsilon, i, M}(t)|^2\\ &\le 2T\mathbb{E}\int_0^T|f(X^{\varepsilon, i,M}(s),\mathcal{L}_{s}^{\varepsilon,X,M})-f(Y_h^{\varepsilon, i, M}(s), \mathcal{L}_{h, s}^{\varepsilon, Y,M})|^2 ds\\ &+8\sqrt{T}\varepsilon^2\mathbb{E}\int_0^T|g(X^{\varepsilon, i,M}(s),\mathcal{L}_{s}^{\varepsilon,X,M})-g(Y_h^{\varepsilon, i, M}(s), \mathcal{L}_{h, s}^{\varepsilon, Y,M})|^2 ds\\ &\le 2TK\mathbb{E}\int_0^T\big(|X^{\varepsilon, i,M}(s)-Y_h^{\varepsilon, i, M}(s)|^2+\mathbb{W}_2^2(\mathcal{L}_{s}^{\varepsilon,X,M}, \mathcal{L}_{h, s}^{\varepsilon, Y,M})\big)\, ds \\ &+ 8K\sqrt{T}\varepsilon^2\mathbb{E}\int_0^T\big(|X^{\varepsilon, i,M}(s)-Y_h^{\varepsilon, i, M}(s)|^2+\mathbb{W}_2^2(\mathcal{L}_{s}^{\varepsilon,X,M}, \mathcal{L}_{h, s}^{\varepsilon, Y,M})\big)\, ds\\ &\le 4TK\mathbb{E}\int_0^T|X^{\varepsilon, i,M}(s)- \bar Y_h^{\varepsilon, i, M}(s)|^2 ds + 4TK\mathbb{E}\int_0^T|\bar Y_h^{\varepsilon, i,M}(s)- Y_h^{\varepsilon, i, M}(s)|^2 ds \\ &+16K\sqrt{T}\varepsilon^2\mathbb{E}\int_0^T |X^{\varepsilon, i,M}(s)- \bar Y_h^{\varepsilon, i, M}(s)|^2 ds +16K\sqrt{T}\varepsilon^2\mathbb{E}\int_0^T |\bar Y_h^{\varepsilon, i,M}(s)-Y_h^{\varepsilon, i, M}(s)|^2 ds\\ &\le Ch^2+ C\varepsilon^2h+C\int_0^T\sup\limits_{0\le r\le s}\mathbb{E}|X^{\varepsilon, i,M}(r)-\bar Y_h^{\varepsilon, i, M}(r)|^2 ds+C\varepsilon^2 h^2+C\varepsilon^4h. \end{split}$$ The Gronwall inequality implies that $$\begin{split} \sup\limits_{0\le t\le T}\mathbb{E}|X^{\varepsilon, i,M}(t)-\bar Y_h^{\varepsilon, i, M}(t)|^2\le Ch^2+C\varepsilon^2h. \end{split}$$ Since $\Psi$ has bounded first-order derivatives, we immediately get $$\begin{split} \sup\limits_{0\le t\le T}\mathbb{E}|\Psi(X^{\varepsilon, i,M}(t))-\Psi(\bar Y_h^{\varepsilon, i, M}(t))|^2\le C\sup\limits_{0\le t\le T}\mathbb{E}|X^{\varepsilon, i,M}(t)-\bar Y_h^{\varepsilon, i, M}(t)|^2. \end{split}$$ The desired result then follows. $\Box$ In the next corollary, we are going to use different step sizes to define the numerical solutions. **Corollary 1**. *Assume that the conditions of Theorem [Theorem 1](#th0){reference-type="ref" reference="th0"} hold. Let $N\ge 2, l\ge 1$, $h_l=T\cdot N^{-l}, h_{l-1}=T\cdot N^{-(l-1)}$. Then $$\begin{split} \max_{0\le n<N^{l-1}}{\rm Var}\big(\Psi(\bar Y_{h_l}^{\varepsilon, i, M}(t_n))-\Psi(\bar Y_{h_{l-1}}^{\varepsilon, i, M}(t_n))\big)\le Ch_{l-1}^2+C\varepsilon^2h_{l-1}. \end{split}$$* **Proof.** For $0\le n\le N^{l-1}-1$, by Theorem [Theorem 1](#th0){reference-type="ref" reference="th0"}, $$\begin{split} &{\rm Var}\big(\Psi(\bar Y_{h_l}^{\varepsilon, i, M}(t_n))-\Psi(\bar Y_{h_{l-1}}^{\varepsilon, i, M}(t_n))\big)\le2 \mathbb{E}|\Psi(\bar Y_{h_l}^{\varepsilon, i, M}(t_n))-\Psi(\bar Y_{h_{l-1}}^{\varepsilon, i, M}(t_n))|^2\\ \le&4\mathbb{E}|\Psi(\bar Y_{h_l}^{\varepsilon, i, M}(t_n))-\Psi(X^{\varepsilon, i,M}(t_n))|^2+4\mathbb{E}|\Psi(X^{\varepsilon, i,M}(t_n))-\Psi(\bar Y_{h_{l-1}}^{\varepsilon, i, M}(t_n))|^2\\ \le&Ch_{l-1}^2+C\varepsilon^2h_{l-1}. \end{split}$$ $\Box$ The following lemma is presented here because it applies to any EM scheme, but it will only be used later when estimating the variance of the coupled processes in the Multilevel Monte Carlo setting. Recall that $\eta_h(s)=\lfloor s/h \rfloor h$, where $\lfloor \cdot \rfloor$ is the integer-part function.
Let $z_h$ be the deterministic solution to $$\label{z_h} z_h(t) = X(0) + \int_0^t f(z_h(\eta_h(s)),\delta_{z_h(s)})ds,$$ which is the Euler approximation to the ODE obtained from [\[2.0\]](#2.0){reference-type="eqref" reference="2.0"} when $\varepsilon$ is set to zero. **Lemma 7**. *For any $T>0$ we have $$\label{ODE} \mathbb{E}[\sup_{0 \leq s \leq T} |\bar Y_h^{\varepsilon, i, M}(s) - z_h(s)|^2] \leq C\varepsilon^2.$$* **Proof.** Using [\[EMsol\]](#EMsol){reference-type="eqref" reference="EMsol"} and [\[z_h\]](#z_h){reference-type="eqref" reference="z_h"}, the fact that $|a+b|^2 \leq 2a^2 + 2b^2$ and the Cauchy-Schwarz inequality, we have that for every $t \leq T$ $$\begin{aligned} |&\bar{Y}_h^{\varepsilon, i,M}(t)- z_h(t)|^2\\ &=\left|\int^t_0(f(Y_h^{\varepsilon, i,M}(s),\mathcal{L}_{h, s}^{\varepsilon, Y,M}) -f(z_h(\eta_h(s)),\delta_{z_h(s)}))ds +\varepsilon\int^{t}_{0}g (Y_h^{\varepsilon, i,M}(s),\mathcal{L}_{h,s}^{\varepsilon, Y,M})\mathrm{d}W^{i}(s)\right|^2 \\ &\leq 2T \int^t_0|f(Y_h^{\varepsilon, i,M}(s),\mathcal{L}_{h, s}^{\varepsilon, Y,M}) -f(z_h(\eta_h(s)),\delta_{z_h(s)})|^2 ds + 2\varepsilon^2\left|\int^{t}_{0}g (Y_h^{\varepsilon, i,M}(s),\mathcal{L}_{h,s}^{\varepsilon, Y,M})\mathrm{d}W^{i}(s)\right|^2.\end{aligned}$$ By the BDG inequality we have that $$\mathbb{E}\left[\sup_{0 \leq s \leq t}\left|\int^{s}_{0}g (Y_h^{\varepsilon, i,M}(r),\mathcal{L}_{h,r}^{\varepsilon, Y,M})\mathrm{d}W^{i}(r)\right|^2 \right] \leq 4 \int_0^t \mathbb{E}[|g (Y_h^{\varepsilon, i,M}(s),\mathcal{L}_{h,s}^{\varepsilon, Y,M})|^2]ds.$$ Thus by Assumption [Assumption 1](#a1){reference-type="ref" reference="a1"} one can see that $$\begin{aligned} \mathbb{E}&[\sup_{0 \leq s \leq t}|\bar{Y}_h^{\varepsilon, i,M}(s)- z_h(s)|^2 ] \leq 2TK \int^t_0(\mathbb{E}[\sup_{0 \leq s \leq r}|\bar{Y}_h^{\varepsilon, i,M}(s)- z_h(s)|^2]+\sup_{0 \leq s \leq r}\mathbb W^{2}_{2}(\mathcal{L}_{h, s}^{\varepsilon, Y,M},\delta_{z_h(s)})) dr\\ &+ 8T\varepsilon^2 \beta \int^{t}_{0}\mathbb{E}[1+|\bar{Y}_h^{\varepsilon, i,M}(s)|^2 + W^{2}_{2}(\mathcal{L}_{h, s}^{\varepsilon, Y,M})] ds.\end{aligned}$$ Using [\[qmoments\]](#qmoments){reference-type="eqref" reference="qmoments"}, [\[wasser\]](#wasser){reference-type="eqref" reference="wasser"} and Lemma [Lemma 5](#0pmoment){reference-type="ref" reference="0pmoment"} we have that for all $0 \leq t \leq T$ $$\begin{aligned} \mathbb{E}[\sup_{0 \leq s \leq t}|\bar{Y}_h^{\varepsilon, i,M}(s)- z_h(s)|^2 ] \leq C \varepsilon^2 + C \int^t_0\mathbb{E}[\sup_{0 \leq s \leq r}|\bar{Y}_h^{\varepsilon, i,M}(s)- z_h(s)|^{2}] dr. \end{aligned}$$ The final result is obtained by applying the Gronwall inequality. $\Box$ ## The Multilevel Monte Carlo EM Scheme {#sec2.2} We now define the multilevel Monte Carlo EM scheme. Given any $T>0$, let $N\ge 2, l \in \{0,...,L\}$, where $L$ is a positive integer that will be determined later. Let $h_l=T\cdot N^{-l}, h_{l-1}=T\cdot N^{-(l-1)}$.
For step sizes $h_{l}$ and $h_{l-1}$ the EM continuous approximate solutions are respectively $$\label{c1} \begin{split} \bar{Y}_{h_l}^{\varepsilon, i,M}(t)=x^{i}+\int^{t}_{0}f(Y_{h_l}^{\varepsilon, i,M}(s),\mathcal{L}_{{h_l}, s}^{\varepsilon, Y,M})\mathrm{d}s +\varepsilon\int^{t}_{0}g (Y_{h_l}^{\varepsilon, i,M}(s),\mathcal{L}_{{h_l},s}^{\varepsilon, Y,M})\mathrm{d}W^{i}(s), \end{split}$$ and $$\label{c2} \begin{split} \bar{Y}_{h_{l-1}}^{\varepsilon, i,M}(t)=x^{i}+\int^{t}_{0}f(Y_{h_{l-1}}^{\varepsilon, i,M}(s),\mathcal{L}_{{h_{l-1}}, s}^{\varepsilon, Y,M})\mathrm{d}s +\varepsilon\int^{t}_{0}g (Y_{h_{l-1}}^{\varepsilon, i,M}(s),\mathcal{L}_{{h_{l-1}},s}^{\varepsilon, Y,M})\mathrm{d}W^{i}(s). \end{split}$$ We now construct the discrete version of the previous approximate solutions using the same Brownian motion for both processes. We say that the two processes are coupled. For $n\in \{0, 1, \ldots, N^{l-1}-1\}$ and $k\in \{0, \ldots, N\}$, let $$t_n=nh_{l-1} \mbox{ and } t_n^k=nh_{l-1}+kh_l.$$ This means we divide the interval $[t_n, t_{n+1}]$ into $N$ equal parts of length ${h_l}$ with $t_n^0=t_n, t_n^{N}=t_{n+1}.$ For $n\in \{0, 1, \ldots, N^{l-1}-1\}$ and $k\in \{0, \ldots, N-1\}$, let $$\label{MMC EM scheme_1} \begin{split} Y_{h_{l}}^{\varepsilon, i, M}(t_n^{k+1})=Y_{h_l}^{\varepsilon, i, M}(t_n^{k}) + f(Y_{h_l}^{\varepsilon, i, M}(t_n^{k}), \mathcal{L}_{h_l}^{\varepsilon, Y_n^{k},M} )h_l+\varepsilon\sqrt{h_l}g(Y_{h_l}^{\varepsilon, i, M}(t_n^{k}), \mathcal{L}_{h_l}^{\varepsilon, Y_n^{k},M})\Delta \xi_n^k, \end{split}$$ where $\mathcal{L}_{h_l}^{\varepsilon, Y_n^{k},M}=\frac{1}{M}\sum_{j=1}^M\delta_{Y_{h_l}^{\varepsilon, j, M}(t_n^{k})}$, the random vector $\Delta \xi_n^k\in\mathbb{R}^{\bar d}$ has independent components, and each component is distributed as $\mathcal N(0, 1).$ Therefore, to simulate $Y_{h_{l}}^{\varepsilon, i, M},$ we use $$\label{MMC EM scheme_2} \begin{split} Y_{h_{l}}^{\varepsilon, i, M}(t_{n+1})=Y_{h_l}^{\varepsilon, i, M}(t_n)+ \sum_{k=0}^{N-1}f(Y_{h_l}^{\varepsilon, i, M}(t_n^{k}), \mathcal{L}_{h_l}^{\varepsilon, Y_n^{k},M} )h_l+\varepsilon\sqrt{h_l}\sum_{k=0}^{N-1}g(Y_{h_l}^{\varepsilon, i, M}(t_n^{k}), \mathcal{L}_{h_l}^{\varepsilon, Y_n^{k},M})\Delta \xi_n^k. \end{split}$$ To simulate $Y_{h_{l-1}}^{\varepsilon, i, M},$ we use $$\label{MMC EM scheme_3} \begin{split} Y_{h_{l-1}}^{\varepsilon, i, M}(t_{n+1}) &=Y_{h_{l-1}}^{\varepsilon, i, M}(t_{n})+f(Y_{h_{l-1}}^{\varepsilon, i, M}(t_{n}), \mathcal{L}_{h_{l-1}}^{\varepsilon, Y_n,M} )h_{l-1}\\ &+\varepsilon\sqrt{h_l}g(Y_{h_{l-1}}^{\varepsilon, i, M}(t_{n}), \mathcal{L}_{h_{l-1}}^{\varepsilon, Y_n, M})\sum_{k=0}^{N-1}\Delta \xi_n^k, \end{split}$$ where $\mathcal{L}_{h_{l-1}}^{\varepsilon, Y_n,M}=\frac{1}{M}\sum_{j=1}^M\delta_{Y_{h_{l-1}}^{\varepsilon, j, M}(t_n)}$. The following theorem is the main result of this section. **Theorem 2**. *Let Assumption [Assumption 1](#a1){reference-type="ref" reference="a1"} hold. Then it holds that $$\max_{0\le n<N^{l-1}}\mathbb{E}[|Y_{h_{l}}^{\varepsilon, i, M}(t_{n})-Y_{h_{l-1}}^{\varepsilon, i, M}(t_{n})|^2]\le C N^2 h_l^2+ \bar C\varepsilon^4 N h_l.$$* In order to prove Theorem [Theorem 2](#2ndMom2Paths){reference-type="ref" reference="2ndMom2Paths"}, we need a few lemmas. **Lemma 8**. *Let $0<p\leq 4$.
Then $$\max_{\substack{0 \leq n \leq N^{l-1} \\ 1 \leq k \leq N}} \mathbb{E}[|Y_{h_{l}}^{\varepsilon, i, M}(t^k_n) - Y_{h_{l}}^{\varepsilon, i, M}(t_n)|^p] \leq C_1N^p h_l^p +C_2 N^{p/2} h_l^{p/2} \varepsilon^p,$$ where $C$ and $C$ are positive constants that only depend on $\beta, T,m$ and $X^{\varepsilon}(0)$ ($\beta$ from Remark [Remark 1](#onem){reference-type="ref" reference="onem"}).* **Proof.** Let $p = 4$. From [\[MMC EM scheme_1\]](#MMC EM scheme_1){reference-type="eqref" reference="MMC EM scheme_1"} we have that $$\label{Y^k-Y} Y_{h_{l}}^{\varepsilon, i, M}(t^k_n) - Y_{h_{l}}^{\varepsilon, i, M}(t_n) = \sum_{j=0}^{k-1}f(Y_{h_{l}}^{\varepsilon, i, M}(t_n^j),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})h_l + \varepsilon\sqrt{h_l} \sum_{j=0}^{k-1}g(Y_{h_{l}}^{\varepsilon, i, M}(t_n^j),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})\Delta\xi_n^j.$$ Hence, we obtain $$\begin{aligned} \label{eq1-lemma1} \mathbb{E}[|Y_{h_{l}}^{\varepsilon, i, M}(t^k_n) &- Y_{h_{l}}^{\varepsilon, i, M}(t_n)|^4] \\ &\leq 8 \mathbb{E}\left|\sum_{j=0}^{k-1}f(Y_{h_{l}}^{\varepsilon, i, M}(t_n^j),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})h_l\right|^4 + 8\mathbb{E}\left|\varepsilon\sqrt{h_l} \sum_{j=0}^{k-1}g(Y_{h_{l}}^{\varepsilon, i, M}(t_n^j),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})\Delta\xi_n^j \right|^4. \nonumber \end{aligned}$$ By Remark [Remark 1](#onem){reference-type="ref" reference="onem"} and Lemma [Lemma 5](#0pmoment){reference-type="ref" reference="0pmoment"} one can see that $$\begin{aligned} \label{eq2-lemma1} \mathbb{E}&\left| \sum_{j=0}^{k-1} f(Y_{h_{l}}^{\varepsilon, i, M}(t_n^j),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})h_l\right|^4 \leq N^3 \sum_{j=0}^{k-1} \mathbb{E}\left| f(Y_{h_{l}}^{\varepsilon, i, M}(t_n^j),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})h_l \right|^4 \nonumber \\ &\leq N^3 \sum_{j=0}^{k-1} \mathbb{E}\left[\left(\beta \left(1+|Y_{h_{l}}^{\varepsilon, i, M}(t^j_n)|^2 + W_2^2(\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M}) \right)\right)^2\right] \nonumber \\ &\leq 3 N^3 h_l^4 \beta^2 \sum_{j=0}^{k-1} \left(1+2\mathbb{E}|Y_{h_{l}}^{\varepsilon, i, M}(t^j_n)|^4 \right) \leq C N^4 h_l^4 . \end{aligned}$$ Using the BDG inequality, Remark [Remark 1](#onem){reference-type="ref" reference="onem"} and Lemma [Lemma 5](#0pmoment){reference-type="ref" reference="0pmoment"}, we obtain $$\begin{aligned} \label{eq3-lemma1} \mathbb{E}&\left|\varepsilon\sqrt{h_l} \sum_{j=0}^{k-1} g(Y_{h_{l}}^{\varepsilon, i, M}(t_n^j),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})\Delta\xi_n^j \right|^4 \leq C \varepsilon^4 \mathbb{E}\left[ \left| \sum_{j=0}^{k-1} | g(Y_{h_{l}}^{\varepsilon, i, M}(t_n^j),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})|^2 h_l \right|^2 \right] \nonumber \\ &\leq C \varepsilon^4 N h_l^2 \mathbb{E}\left[ \sum_{j=0}^{k-1} (| g(Y_{h_{l}}^{\varepsilon, i, M}(t_n^j),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})|^2)^2 \right] \nonumber \\ &\leq C \varepsilon^4 N h_l^2 \sum_{j=0}^{k-1} \mathbb{E}\left[\left(\beta \left(1+|Y_{h_{l}}^{\varepsilon, i, M}(t^j_n)|^2 + W_2^2(\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M}) \right)\right)^2\right] \leq C N^2 h_l^2 \varepsilon^4 \nonumber\\\end{aligned}$$ The result for $p=4$ follows from substituting [\[eq2-lemma1\]](#eq2-lemma1){reference-type="eqref" reference="eq2-lemma1"} and [\[eq3-lemma1\]](#eq3-lemma1){reference-type="eqref" reference="eq3-lemma1"} into [\[eq1-lemma1\]](#eq1-lemma1){reference-type="eqref" reference="eq1-lemma1"}. For $0 < p < 4$, the result follows from Jensen's inequality. 
$$\begin{aligned} \mathbb{E}[|Y_{h_{l}}^{\varepsilon, i, M}(t^k_n) - Y_{h_{l}}^{\varepsilon, i, M}(t_n)|^p]&=\mathbb{E}[(|Y_{h_{l}}^{\varepsilon, i, M}(t^k_n) - Y_{h_{l}}^{\varepsilon, i, M}(t_n)|^p)^{p/4}] \\ &\leq (\mathbb{E}[|Y_{h_{l}}^{\varepsilon, i, M}(t^k_n) - Y_{h_{l}}^{\varepsilon, i, M}(t_n)1|^4])^{p/4} \leq C_1 h_l^p + C_2 \varepsilon^p h_l^{p/2}.\end{aligned}$$ $\Box$ **Lemma 9**. *Let $f_{m}$ be the $m^{th}$ component of $f$. Then there exist $s,r \in [0,1]$ such that $$f(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-f(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})=A_k + B_k + E_k,$$ where $$\begin{aligned} A_k &= (A_k^1,...,A_k^d)', B_k = (B_k^1,...,B_k^d)', E_k = (E_k^1,...,E_k^d)' \\ A^m_k &:= \langle \nabla f_m(s Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) + (1-s)Y_{h_{l}}^{\varepsilon, i, M}(t_n)),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M}), h_l \sum_{j=0}^{k-1} f(Y_{h_{l}}^{\varepsilon, i, M}(t^j_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M}) \rangle, \\ B^m_k &:= \langle \nabla f_m(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M}), \varepsilon\sqrt{h_l} \sum_{j=0}^{k-1} g(Y_{h_{l}}^{\varepsilon, i, M}(t^j_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M}) \Delta\xi_n^j \rangle, \\ E^m_k &:= \langle \nabla^2 f_m(rs(Y_{h_{l}}^{\varepsilon, i, M}(t^k_n)- Y_{h_{l}}^{\varepsilon, i, M}(t_n))+ Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})(Y_{h_{l}}^{\varepsilon, i, M}(t^k_n)- Y_{h_{l}}^{\varepsilon, i, M}(t_n))s, \\ &\varepsilon\sqrt{h_l} \sum_{j=0}^{k-1} g(Y_{h_{l}}^{\varepsilon, i, M}(t^j_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M}) \Delta\xi_n^j \rangle , m \in \{1,...,d\}.\end{aligned}$$* **Proof.** By the mean value theorem there exists a $s \in [0,1]$ such that $$\begin{aligned} f_m(Y_{h_{l}}^{\varepsilon, i, M}&(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-f_m(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M}) \\ &= \langle \nabla f_m(s Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) + (1-s)Y_{h_{l}}^{\varepsilon, i, M}(t_n)),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M}), (Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) - Y_{h_{l}}^{\varepsilon, i, M}(t_n)) \rangle. \nonumber\end{aligned}$$ Substituting [\[Y\^k-Y\]](#Y^k-Y){reference-type="eqref" reference="Y^k-Y"} in the equation above yields $$\begin{aligned} \label{eq1-ABE} f_m(Y_{h_{l}}^{\varepsilon, i, M}&(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-f_m(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M}) \\ &=\langle \nabla f_m(s Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) + (1-s)Y_{h_{l}}^{\varepsilon, i, M}(t_n)),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M}), \sum_{j=0}^{k-1}f(Y_{h_{l}}^{\varepsilon, i, M}(t_n^j),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})h_l \rangle \nonumber \\ +& \langle \nabla f_m(s Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) + (1-s)Y_{h_{l}}^{\varepsilon, i, M}(t_n)),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M}), \varepsilon\sqrt{h_l} \sum_{j=0}^{k-1}g(Y_{h_{l}}^{\varepsilon, i, M}(t_n^j),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})\Delta\xi_n^j \nonumber \rangle.\end{aligned}$$ Let $\nabla_q f_m$ denote the $q^{th}$ component of the vector function $\nabla f_m$. 
Applying the mean value theorem again with $y=s Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) +(1-s)Y_{h_{l}}^{\varepsilon, i, M}(t_n),x=Y_{h_{l}}^{\varepsilon, i, M}(t_n)$ and $g(z)=\nabla_q f_m(z, \mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})$ ensures that there exists a $r \in [0,1]$ such that $$\begin{aligned} \nabla_q f_m &(s Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) + (1-s)Y_{h_{l}}^{\varepsilon, i, M}(t_n)),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M}) = \nabla_q f_m(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M}) \\ &+ \langle \nabla(\nabla_q f_m)(rs (Y_{h_{l}}^{\varepsilon, i, M}(t_n^k)-Y_{h_{l}}^{\varepsilon, i, M}(t_n)) + Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M}),(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k)-Y_{h_{l}}^{\varepsilon, i, M}(t_n))s \rangle.\end{aligned}$$ Thus $$\begin{aligned} \nabla f_m &(s Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) + (1-s)Y_{h_{l}}^{\varepsilon, i, M}(t_n)),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M}) = \nabla f_m(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M}) \\ &+ \nabla^2 f_m(rs (Y_{h_{l}}^{\varepsilon, i, M}(t_n^k)-Y_{h_{l}}^{\varepsilon, i, M}(t_n)) + Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k)-Y_{h_{l}}^{\varepsilon, i, M}(t_n))s .\end{aligned}$$ $\nabla^2 f_m(rs (Y_{h_{l}}^{\varepsilon, i, M}(t_n^k)-Y_{h_{l}}^{\varepsilon, i, M}(t_n)) + Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})$ is a matrix $d \times d$ and $(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k)-Y_{h_{l}}^{\varepsilon, i, M}(t_n))s$ is a vector $d \times 1$, so their multiplication gives a vector $d \times 1$ which the dimension of $\nabla f_m(\cdot,\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})$.\ Substituting the last equation into the second summand of the RHS of [\[eq1-ABE\]](#eq1-ABE){reference-type="eqref" reference="eq1-ABE"} completes the proof. $\Box$ **Lemma 10**. *There exist random variables $s,r:\Omega \rightarrow [0,1]$ such that $$f(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-f(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M}) = \bar A_k + \bar E_k,$$ where $$\begin{aligned} \bar A_k &= (\bar A_k^1,..., \bar A_k^d)', \bar E_k = (\bar E_k^1,...,\bar E_k^d)' \\ \bar A^m_k &:=\mathbb{E}[\langle \partial_{\mu} f_m(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s), h_l \sum_{j=0}^{k-1} f(Y_{h_{l}}^{\varepsilon, i, M}(t^j_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M}) \rangle]_{Z=Y_{h_{l}}^{\varepsilon, i, M}(t_n)} \\ \bar E^m_k &:=\mathbb{E}[\langle \partial^2_{\mu} f_m(Z,\mathcal L_{h_l}^{\varepsilon,Y^{s,r}_n,M})(Y^{s,r}_n)(((Y_{h_{l}}^{\varepsilon, i, M}(t^k_n)-Y_{h_{l}}^{\varepsilon, i, M}(t_n))s, \\ & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \varepsilon\sqrt{h_l} \sum_{j=0}^{k-1} g(Y_{h_{l}}^{\varepsilon, i, M}(t^j_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M}) \Delta\xi_n^j \rangle]_{Z=Y_{h_{l}}^{\varepsilon, i, M}(t_n)}, \\ Y_n^s &:= s Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) + (1-s)Y_{h_{l}}^{\varepsilon, i, M}(t_n), \\ Y^{s,r}_n &:= sr(Y_{h_{l}}^{\varepsilon, i, M}(t^k_n)- Y_{h_{l}}^{\varepsilon, i, M}(t_n))+ Y_{h_{l}}^{\varepsilon, i, M}(t_n).\end{aligned}$$* **Proof.** Let $f_{m}$ be the $m^{th}$ component of $f$. 
A direct application of Equation [\[mvt\]](#mvt){reference-type="eqref" reference="mvt"} with $X = Y_{h_{l}}^{\varepsilon, i, M}(t_n^k),X'=Y_{h_{l}}^{\varepsilon, i, M}(t_n)$ and $\bar u(\mathcal L(\xi))=f_m(Y_{h_{l}}^{\varepsilon, i, M}, \mathcal L^{\varepsilon,\xi,M}_{h_l})$ implies that there exists a random variable $s:\Omega \rightarrow [0,1]$ such that $$\begin{aligned} \label{eq1-bar ABE} f_m(Y_{h_{l}}^{\varepsilon, i, M}(t_n),&\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-f_m(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M}) \\ &= \mathbb{E}[\langle \partial_{\mu} f_m(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s),(Y_{h_{l}}^{\varepsilon, i, M}(t^k_n)-Y_{h_{l}}^{\varepsilon, i, M}(t_n)) \rangle]_{Z=Y_{h_{l}}^{\varepsilon, i, M}(t_n)} \nonumber \\ &=\mathbb{E}[\langle \partial_{\mu} f_m(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s),\sum_{j=0}^{k-1}f(Y_{h_{l}}^{\varepsilon, i, M}(t_n^j),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})h_l \rangle]_{Z=Y_{h_{l}}^{\varepsilon, i, M}(t_n)} \nonumber \\ &+ \mathbb{E}[\langle \partial_{\mu} f_m(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s), \varepsilon\sqrt{h_l} \sum_{j=0}^{k-1}g(Y_{h_{l}}^{\varepsilon, i, M}(t_n^j),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})\Delta\xi_n^j \rangle]_{Z=Y_{h_{l}}^{\varepsilon, i, M}(t_n)}. \nonumber\end{aligned}$$ Let $\partial_{\mu,q} f_m$ be the $q^{th}$ component of the vector function $\partial_{\mu} f_m.$ Applying Equation [\[mvt\]](#mvt){reference-type="eqref" reference="mvt"} again with $X= s Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) + (1-s)Y_{h_{l}}^{\varepsilon, i, M}(t_n)=:Y_n^s,X'=Y_{h_{l}}^{\varepsilon, i, M}(t_n)$ and $\bar u(\mathcal L(\xi))= \partial_{\mu,q} f_m(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal L^{\varepsilon,\xi,M}_{h_l})(\xi)$, we find that there exists a random variable $r:\Omega \rightarrow [0,1]$ such that $$\begin{aligned} \partial_{\mu,q} &f_m(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)=\partial_{\mu,q} f_m(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})(Y_{h_{l}}^{\varepsilon, i, M}(t_n)) \\ &+\mathbb{E}[\langle\partial_{\mu}(\partial_{\mu,q}f_m)(Z,\mathcal L_{h_l}^{\varepsilon,Y^{s,r}_n,M})(Y^{s,r}_n),(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k)-Y_{h_{l}}^{\varepsilon, i, M}(t_n))s\rangle]_{Z=Y_{h_{l}}^{\varepsilon, i, M}(t_n)}.\end{aligned}$$ Thus $$\begin{aligned} \partial_{\mu} &f_m(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)=\partial_{\mu} f_m(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})(Y_{h_{l}}^{\varepsilon, i, M}(t_n)) \\ &+\mathbb{E}[ \partial^2_{\mu}f_m(Z,\mathcal L_{h_l}^{\varepsilon,Y^{s,r}_n,M})(Y^{s,r}_n),(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k)-Y_{h_{l}}^{\varepsilon, i, M}(t_n))s]_{Z=Y_{h_{l}}^{\varepsilon, i, M}(t_n)}.\end{aligned}$$ Substituting the last equation into the second summand of the RHS of Equation [\[eq1-bar ABE\]](#eq1-bar ABE){reference-type="eqref" reference="eq1-bar ABE"} yields $$\begin{aligned} &f_m(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-f_m(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M}) \\ &=\mathbb{E}[\langle \partial_{\mu} f_m(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s),\sum_{j=0}^{k-1}f(Y_{h_{l}}^{\varepsilon, i, M}(t_n^j),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})h_l \rangle]_{Z=Y_{h_{l}}^{\varepsilon, i, M}(t_n)} \\ &+ \mathbb{E}[\langle \partial_{\mu} f_m(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})(Y_{h_{l}}^{\varepsilon, i, M}(t_n)), \varepsilon\sqrt{h_l} 
\sum_{j=0}^{k-1}g(Y_{h_{l}}^{\varepsilon, i, M}(t_n^j),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})\Delta\xi_n^j \rangle]_{Z=Y_{h_{l}}^{\varepsilon, i, M}(t_n)} \\ &+ \mathbb{E}[\langle \partial^2_{\mu}f_m(Z,\mathcal L_{h_l}^{\varepsilon,Y^{s,r}_n,M})(Y_n^{s,r})(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k)-Y_{h_{l}}^{\varepsilon, i, M}(t_n))s, \\ & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \varepsilon\sqrt{h_l} \sum_{j=0}^{k-1}g(Y_{h_{l}}^{\varepsilon, i, M}(t_n^j),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})\Delta\xi_n^j \rangle]_{Z=Y_{h_{l}}^{\varepsilon, i, M}(t_n)}.\end{aligned}$$ By independence the second expectation above is zero, therefore the proof is complete. $\Box$ **Proof of Theorem [Theorem 2](#2ndMom2Paths){reference-type="ref" reference="2ndMom2Paths"}** From [\[MMC EM scheme_2\]](#MMC EM scheme_2){reference-type="eqref" reference="MMC EM scheme_2"} and [\[MMC EM scheme_3\]](#MMC EM scheme_3){reference-type="eqref" reference="MMC EM scheme_3"} we have that for $n \leq N^{l-1}-1$ $$\begin{aligned} Y_{h_{l}}^{\varepsilon, i, M}(t_{n+1}) &- Y_{h_{l-1}}^{\varepsilon, i, M}(t_{n+1})=Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n) \\ &+h_l \sum_{k=0}^{N-1}\left(f(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-f(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})\right) \\ &+h_l \sum_{k=0}^{N-1}\left(f(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})-f(Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l-1}}^{\varepsilon, Y_n, M})\right) \\ &+\varepsilon\sqrt{h_l} \sum_{k=0}^{N-1}\left(g(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-g(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})\right)\Delta\xi^k_n \\ &+\varepsilon\sqrt{h_l}\sum_{k=0}^{N-1}\left(g(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})-g(Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l-1}}^{\varepsilon, Y_n, M})\right)\Delta\xi^k_n \\ &=: Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n) + R_N.\end{aligned}$$ By using the linearity property of the inner product, we obtain $$\begin{aligned} |Y_{h_{l}}^{\varepsilon, i, M}(t_{n+1}) &- Y_{h_{l-1}}^{\varepsilon, i, M}(t_{n+1})|^2=\langle Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)+R_N, Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)+R_N \rangle \\ &=|Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)|^2+|R_N|^2+2\langle Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n), R_N \rangle. 
\end{aligned}$$ Applying the elementary inequality $|a+b+c+d|^2 \leq 4|a|^2+4|b|^2+4|c|^2+4|d|^2$ to the term $|R_N|^2$ above, we derive that $$\begin{aligned} | Y_{h_{l}}^{\varepsilon, i, M}&(t_{n+1}) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_{n+1}) |^2 \leq |Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n) |^2 \\ &+ 4 h^2_l \left| \sum_{k=0}^{N-1}\left(f(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-f(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})\right) \right|^2 \\ &+ 4 h^2_l \left| \sum_{k=0}^{N-1}\left(f(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})-f(Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l-1}}^{\varepsilon, Y_n, M})\right) \right|^2 \\ &+4 \varepsilon^2 \left| \sum_{k=0}^{N-1}\left(g(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-g(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})\right)\sqrt{h_l}\Delta\xi^k_n \right|^2 \\ &+4 \varepsilon^2 \left| \sum_{k=0}^{N-1}\left(g(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})-g(Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l-1}}^{\varepsilon, Y_n, M})\right)\sqrt{h_l}\Delta\xi^k_n \right|^2 \\ &+ 2 h_l \sum_{k=0}^{N-1}\langle Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),f(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-f(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M}) \rangle \\ &+ 2 h_l \sum_{k=0}^{N-1}\langle Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),f(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})-f(Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l-1}}^{\varepsilon, Y_n, M}) \rangle \\ &+ 2 \varepsilon\sqrt{h_l} \sum_{k=0}^{N-1}\langle Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\big(g(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-g(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})\big) \Delta\xi^k_n \rangle \\ &+ 2 \varepsilon\sqrt{h_l} \sum_{k=0}^{N-1}\langle Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\big(g(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})-g(Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l-1}}^{\varepsilon, Y_n, M})\big) \Delta\xi^k_n \rangle. \end{aligned}$$ Now, we take expectations on both sides of the previous inequality. Since $\Delta \xi^k_n$ is independent of $Y_{h_{l}}^{\varepsilon, i, M}(t^k_n)$ and $Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)$, the expectation of the last two summands in the equation above is zero. 
Thus, $$\begin{aligned} \label{eq1 - theo 2nd moment} \mathbb{E}[| Y_{h_{l}}^{\varepsilon, i, M}&(t_{n+1}) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_{n+1}) |^2 ] \leq \mathbb{E}[ |Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n) |^2] \\ &+ 4 N h^2_l \sum_{k=0}^{N-1} \mathbb{E}\left| f(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-f(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M}) \right|^2 \nonumber \\ &+ 4 N h^2_l \sum_{k=0}^{N-1} \mathbb{E}\left| f(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})-f(Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l-1}}^{\varepsilon, Y_n, M}) \right|^2 \nonumber \\ &+4 \varepsilon^2 \mathbb{E}\left[ \left|\sum_{k=0}^{N-1} \left(g(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-g(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})\right)\sqrt{h_l}\Delta\xi^k_n \right|^2 \right] \nonumber \\ &+4 \varepsilon^2 \mathbb{E}\left[ \left| \sum_{k=0}^{N-1} \left(g(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})-g(Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l-1}}^{\varepsilon, Y_n, M})\right)\sqrt{h_l}\Delta\xi^k_n \right|^2\right] \nonumber \\ &+ 2 h_l \sum_{k=0}^{N-1} \mathbb{E}[\langle Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),f(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-f(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M}) \rangle] \nonumber\\ &+ 2 h_l \sum_{k=0}^{N-1} \mathbb{E}[\langle Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),f(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})-f(Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l-1}}^{\varepsilon, Y_n, M}) \rangle]. 
\nonumber \\ &=: \mathbb{E}[ |Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n) |^2] + I_1+I_2+I_3+I_4+I_5+I_6.\end{aligned}$$ By Assumption [Assumption 1](#a1){reference-type="ref" reference="a1"} and Lemma [Lemma 8](#lemma kn){reference-type="ref" reference="lemma kn"}, one can see that $$\begin{aligned} I_1 &\leq 4 K N h_l^2 \sum_{k=0}^{N-1} ( \mathbb{E}|Y_{h_{l}}^{\varepsilon, i, M}(t_n^k)-Y_{h_{l}}^{\varepsilon, i, M}(t_n)|^2 + \mathbb W_2^2(\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M}, \mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})) \\ &\leq 8 K N h_l^2 \sum_{k=0}^{N-1} \mathbb{E}|Y_{h_{l}}^{\varepsilon, i, M}(t_n^k)-Y_{h_{l}}^{\varepsilon, i, M}(t_n)|^2 \leq 8 K N^2 h^2_l(C N^2 h^2_l+C N\varepsilon^2h_l).\end{aligned}$$ Also, by Assumption [Assumption 1](#a1){reference-type="ref" reference="a1"} $$\begin{aligned} I_2 &\leq 4 K N h_l^2 \sum_{k=0}^{N-1} (\mathbb{E}|Y_{h_{l}}^{\varepsilon, i, M}(t_n)-Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)|^2 + \mathbb W_2^2(\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M}, \mathcal{L}_{h_{l-1}}^{\varepsilon, Y_n, M}) )\\ &\leq 8 K N^2 h_l^2 \mathbb{E}[ |Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n) |^2].\end{aligned}$$ By the BDG inequality, Assumption [Assumption 1](#a1){reference-type="ref" reference="a1"} and Lemma [Lemma 8](#lemma kn){reference-type="ref" reference="lemma kn"}, we obtain $$\begin{aligned} I_3 &\leq C \varepsilon^2 \sum_{k=0}^{N-1}\mathbb{E}[| g(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-g(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})|^2] h_l\\ &=C h_l \varepsilon^2 \sum_{k=0}^{N-1}(\mathbb{E}[|Y_{h_{l}}^{\varepsilon, i, M}(t^k_n)-Y_{h_{l}}^{\varepsilon, i, M}(t_n)|^2] + \mathbb W_2^2(\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M}, \mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})) \leq C N^3 h^3_l \varepsilon^2 + C N^2 h_l^2 \varepsilon^4. 
\end{aligned}$$ Similarly to $I_3$, $$I_4 \leq CN h_l \varepsilon^2 \mathbb{E}[|Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)|^2].$$ An application of the Cauchy-Schwarz inequality and Assumption [Assumption 1](#a1){reference-type="ref" reference="a1"} gives $$\begin{aligned} I_5 &= 2 h_l \sum_{k=0}^{N-1} \mathbb{E}[\langle Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),f(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-f(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M}) \rangle] \\ &= 2 h_l \sum_{k=0}^{N-1} \mathbb{E}[\langle Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),f(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-f(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M}) \rangle] \\ &+ 2 h_l \sum_{k=0}^{N-1} \mathbb{E}[\langle Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),f(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-f(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M}) \rangle] \\ &=: I_{5A} + I_{5B}.\end{aligned}$$ Applying Lemma [Lemma 9](#ABE){reference-type="ref" reference="ABE"} we have $$\begin{aligned} I_{5A} &\leq 2 h_l \sum_{k=0}^{N-1} \mathbb{E}[\langle Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),A_k \rangle] +2 h_l \sum_{k=0}^{N-1} \mathbb{E}[\langle Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),B_k \rangle] \\ &+ 2 h_l \sum_{k=0}^{N-1} \mathbb{E}[\langle Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),E_k \rangle].\end{aligned}$$ By independence, the second summand above is zero. Also, we note that $$\begin{aligned} \mathbb{E}[|A_k|^2] &= \sum_{m=1}^{\bar d}\mathbb{E}[|A^m_k|]^2 \leq \bar d \bar K \mathbb{E}\left|h_l \sum_{j=0}^{k-1}f(Y_{h_{l}}^{\varepsilon, i, M}(t^j_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M}) \right|^2 \\ &\leq \bar d \bar K h^2_j N \sum_{j=0}^{k-1} \mathbb{E}\left[\left(\beta \left(1+|Y_{h_{l}}^{\varepsilon, i, M}(t^j_n)|^2 + W_2^2(\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M}) \right)\right)^2\right] \\ &\leq \bar K h_l^2 N^2 C\end{aligned}$$ and $$\begin{aligned} \label{2ndE} \mathbb{E}[|E_k|^2] &= \sum_{m=1}^{\bar d}\mathbb{E}[(E^m_k)^2] \leq \bar d K \varepsilon^2h_l \mathbb{E}\left[|Y_{h_{l}}^{\varepsilon, i, M}(t^k_n)-Y_{h_{l}}^{\varepsilon, i, M}(t_n)|^2 \left|\varepsilon\sqrt{h_l} \sum_{j=0}^{k-1} g(Y_{h_{l}}^{\varepsilon, i, M}(t^j_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M}) \Delta\xi_n^j \right|^2 \right] \nonumber \\ &\leq \bar d \bar K \varepsilon^2 h_l ( \mathbb{E}[|Y_{h_{l}}^{\varepsilon, i, M}(t^k_n)-Y_{h_{l}}^{\varepsilon, i, M}(t_n)|^4])^{1/2}\left(\mathbb{E}\left[\left|\sum_{j=0}^{k-1} g(Y_{h_{l}}^{\varepsilon, i, M}(t^j_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M}) \Delta\xi_n^j \right|^4\right]\right)^{1/2} \nonumber \\ &\leq \bar K \varepsilon^2 C N^3 h_l^3+ \bar K \varepsilon^4 C N^2 h_l^2,\end{aligned}$$ where Lemma [Lemma 8](#lemma kn){reference-type="ref" reference="lemma kn"} is used in the last inequality. 
Therefore, applying the Cauchy-Schwarz inequality first and the elementary inequality $2ab \leq a^2 + b^2$ later yields $$\begin{aligned} I_{5A} &\leq 2 h_l \sum_{k=0}^{N-1} \mathbb{E}[| Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)||A_k | ] +2 h_l \sum_{k=0}^{N-1} \mathbb{E}[| Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)||E_k|] \\ &\leq 2 h_l \sum_{k=0}^{N-1} \mathbb{E}[| Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)|^2]+ h_l \sum_{k=0}^{N-1} \mathbb{E}[|A_k |^2]+ h_l \sum_{k=0}^{N-1} \mathbb{E}[|E_k |^2] \\ &\leq 2 h_l N \mathbb{E}[| Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)|^2] +\bar K h_l^3 N^3 C + \bar K C N^4 h_l^4 \varepsilon^2 + \bar K C N^3 h_l^3 \varepsilon^4.\end{aligned}$$ Similarly, using Lemma [Lemma 10](#bar ABE){reference-type="ref" reference="bar ABE"} one can see that $$\begin{aligned} I_{5B} &\leq 2 h_l \sum_{k=0}^{N-1} \mathbb{E}[\langle Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\bar A_k \rangle] +2 h_l \sum_{k=0}^{N-1} \mathbb{E}[\langle Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\bar E_k \rangle]. \end{aligned}$$ Also, we have $\mathbb{E}[|\bar A_k|^2] \leq h_l^2 N^2 C$ and $$\label{2nd bar E} \mathbb{E}[|\bar E_k|^2] \leq \bar K \varepsilon^2 C N^3 h_l^3+\bar K \varepsilon^4 C N^2 h_l^2.$$ Thus, $$\begin{aligned} I_{5B} &\leq 2 h_l \sum_{k=0}^{N-1} \mathbb{E}[| Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)|\mathbb{E}[|\bar A_k |] ] +2 h_l \sum_{k=0}^{N-1} \mathbb{E}[| Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)| \mathbb{E}[|\bar E_k|]] \\ &\leq 2 h_l \sum_{k=0}^{N-1} \mathbb{E}[| Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)|^2]+ h_l \sum_{k=0}^{N-1} \mathbb{E}[|\bar A_k |^2]+ h_l \sum_{k=0}^{N-1} \mathbb{E}[|\bar E_k |^2] \\ &\leq 2 h_l N \mathbb{E}[| Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)|^2] +\bar K h_l^3 N^3 C + \bar K C N^4 h_l^4 \varepsilon^2 + \bar K C N^3 h_l^3 \varepsilon^4.\end{aligned}$$ Additionally, we have $$\begin{aligned} I_6 &\leq h_l N \mathbb{E}| Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)|^2 +h_l N \mathbb{E}| Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)|^2 + h_l N \mathbb W^2_2(\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M},\mathcal{L}_{h_{l-1}}^{\varepsilon, Y^k_n, M}) \\ &\leq 3 h_l N \mathbb{E}| Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)|^2.\end{aligned}$$ Substituting the bounds for the terms $I_1$ to $I_6$ into Equation [\[eq1 - theo 2nd moment\]](#eq1 - theo 2nd moment){reference-type="eqref" reference="eq1 - theo 2nd moment"} yields that for $n \leq N^{l-1}-1$ $$\begin{aligned} \mathbb{E}[| Y_{h_{l}}^{\varepsilon, i, M}(t_{n+1}) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_{n+1}) |^2 ] &\leq \mathbb{E}[ |Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n) |^2] + \hat C\mathbb{E}[ |Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n) |^2] \\ &+ C N^3 h_l^3 + C N^2 h_l^2 \varepsilon^4, \end{aligned}$$ which implies that for $n \leq N^{l-1}-1$ $$\begin{aligned} \mathbb{E}[| Y_{h_{l}}^{\varepsilon, i, M}(t_{n+1}) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_{n+1}) |^2 ] &\leq \hat C \sum_{k=1}^n \mathbb{E}[ |Y_{h_{l}}^{\varepsilon, i, M}(t_k) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_k) |^2] \\ &+ C N^2 h_l^2 + C N h_l
\varepsilon^4.\end{aligned}$$ An application of the discrete Gronwall inequality yields the result. $\Box$ ## Estimates of Variance In this section we provide an estimate for the variance of two coupled paths, which is the main result of the paper and will be presented in Theorem [Theorem 3](#th1){reference-type="ref" reference="th1"}. We will need the following lemma taken from [@ahs15]. A proof of this lemma can be found in [@andersonII]. **Lemma 11**. *Suppose that $A^{\varepsilon,h}$ and $B^{\varepsilon,h}$ are families of random variables determined by scaling parameters $\varepsilon$ and $h$. Further, suppose that there are $C_1>0,C_2>0$ and $C_3 >0$ such that for all $\varepsilon\in (0,1)$ the following three conditions hold:* 1. *${\rm Var}(A^{\varepsilon,h}) \leq C_1\varepsilon^2 \ \text{uniformly in $h$}$,* 2. *$|A^{\varepsilon,h}| \leq C_2 \ \text{uniformly in $h$},$* 3. *$|\mathbb{E}[B^{\varepsilon,h}]| \leq C_3 h.$* *Then $${\rm Var}(A^{\varepsilon,h} B^{\varepsilon,h}) \leq 3C^2_3C_1 h^2 \varepsilon^2 + 15 C_2^2 {\rm Var}(B^{\varepsilon,h}).$$* The following two lemmas will be needed to prove Theorem [Theorem 3](#th1){reference-type="ref" reference="th1"}. **Lemma 12**. *Assume that $\gamma:\mathbb R^d \rightarrow \mathbb R$ satisfies the Lipschitz condition, i.e. there exists a positive constant $L$ such that for all $x,y \in \mathbb R^d$, $|\gamma(x)-\gamma(y)|^2 \leq L|x-y|^2.$ Then for $s \in [0,1]$ one has $$\max_{\substack{0 \leq n \leq N^{l-1} \\ 1 \leq k \leq N}} {\rm Var}(\gamma(sY^{\varepsilon,i,M}_{h_{l_2}}(t^k_n)+(1-s)(Y^{\varepsilon,i,M}_{h_{l_1}}(t_n)))) \leq C \varepsilon^2.$$* **Proof.** Let $z_{h_{l_1}}$ and $z_{h_{l_2}}$ be defined by [\[z_h\]](#z_h){reference-type="eqref" reference="z_h"}. Using the fact that for a random variable $X$ and a constant $a,$ ${\rm Var}(X+a)={\rm Var}(X)$ and the fact that $\gamma$ is Lipschitz, we have that $$\begin{aligned} &\max_{\substack{0 \leq n \leq N^{l-1} \\ 1 \leq k \leq N}} {\rm Var}(\gamma(sY^{\varepsilon,i,M}_{h_{l_2}}(t^k_n)+(1-s)(Y^{\varepsilon,i,M}_{h_{l_1}}(t_n)))) \\ &= \max_{\substack{0 \leq n \leq N^{l-1} \\ 1 \leq k \leq N}} {\rm Var}(\gamma(sY^{\varepsilon,i,M}_{h_{l_2}}(t^k_n)+(1-s)(Y^{\varepsilon,i,M}_{h_{l_1}}(t_n)))-\gamma(sz_{h_{l_2}}(t^k_n)+(1-s)(z_{h_{l_1}}(t_n)))) \\ &\leq \max_{\substack{0 \leq n \leq N^{l-1} \\ 1 \leq k \leq N}}\mathbb{E}[|\gamma(sY^{\varepsilon,i,M}_{h_{l_2}}(t^k_n)+(1-s)(Y^{\varepsilon,i,M}_{h_{l_1}}(t_n)))-\gamma(sz_{h_{l_2}}(t^k_n)+(1-s)(z_{h_{l_1}}(t_n)))|^2]\\ &\leq \max_{\substack{0 \leq n \leq N^{l-1} \\ 1 \leq k \leq N}}L\mathbb{E}[|sY^{\varepsilon,i,M}_{h_{l_2}}(t^k_n)+(1-s)(Y^{\varepsilon,i,M}_{h_{l_1}}(t_n))-sz_{h_{l_2}}(t^k_n)-(1-s)(z_{h_{l_1}}(t_n))|^2] \\ &\leq \max_{\substack{0 \leq n \leq N^{l-1} \\ 1 \leq k \leq N}}sL\mathbb{E}[|Y^{\varepsilon,i,M}_{h_{l_2}}(t^k_n)-z_{h_{l_2}}(t^k_n)|^2]+(1-s)L\mathbb{E}[|Y^{\varepsilon,i,M}_{h_{l_1}}(t_n)-z_{h_{l_1}}(t_n)|^2].\end{aligned}$$ The required assertion follows by Lemma [Lemma 7](#z){reference-type="ref" reference="z"}. $\Box$ **Lemma 13**. *Let Assumption [Assumption 1](#a1){reference-type="ref" reference="a1"} hold.
Then there exists a positive constant $C$ such that $$\max_{\substack{0 \leq n \leq N^{l-1} \\ 1 \leq k \leq N}} |\mathbb{E}[Y_{h_{l}}^{\varepsilon, i, M}(t^k_n) - Y_{h_{l}}^{\varepsilon, i, M}(t_n)]| \leq CN h_l.$$* **Proof.** From [\[MMC EM scheme_1\]](#MMC EM scheme_1){reference-type="eqref" reference="MMC EM scheme_1"} we have that $$\begin{aligned} |\mathbb{E}[Y_{h_{l}}^{\varepsilon, i, M}(t^k_n) &- Y_{h_{l}}^{\varepsilon, i, M}(t_n)]| \\ &=\left| \sum_{j=0}^{k-1}\mathbb{E}[f(Y_{h_{l}}^{\varepsilon, i, M}(t_n^j),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})]h_l + \varepsilon\sqrt{h_l} \sum_{j=0}^{k-1}\mathbb{E}[g(Y_{h_{l}}^{\varepsilon, i, M}(t_n^j),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})\Delta\xi_n^j]\right|. \end{aligned}$$ By independence, the second summand on the RHS above is zero. Thus, using Jensen's inequality and Remark [Remark 1](#onem){reference-type="ref" reference="onem"} yields $$\begin{aligned} |\mathbb{E}[Y_{h_{l}}^{\varepsilon, i, M}(t^k_n) - Y_{h_{l}}^{\varepsilon, i, M}(t_n)]| &\leq \sum_{j=0}^{k-1}\mathbb{E}[|f(Y_{h_{l}}^{\varepsilon, i, M}(t_n^j),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})|]h_l \\ &\leq h_l \sum_{j=0}^{k-1} \mathbb{E}[\sqrt \beta \big(1+|Y_{h_{l}}^{\varepsilon, i, M}(t^j_n)|^2 + W_2^2(\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})\big)^{1/2} ] \\ &\leq \sqrt \beta h_l \sum_{j=0}^{k-1} \left(1+2\mathbb{E}[|Y_{h_{l}}^{\varepsilon, i, M}(t^j_n)|^2] \right)^{1/2}.\end{aligned}$$ An application of Lemma [Lemma 5](#0pmoment){reference-type="ref" reference="0pmoment"} and the fact that $k \leq N$ completes the proof. $\Box$ Now, we can formulate the main result of the paper. **Theorem 3**. *Let Assumption [Assumption 1](#a1){reference-type="ref" reference="a1"} hold, assume that $\Psi:\mathbb{R}^d\rightarrow\mathbb{R}$ has continuous second-order derivatives and there exists a constant $C$ such that $$\begin{split} \left|\frac{\partial \Psi}{\partial x_i}\right|\le C~~\text{and}~~\left|\frac{\partial^2 \Psi}{\partial x_i\partial x_j}\right|\le C \end{split}$$ for any $i,j=1,2,\cdots,d$. Then, we have $$\begin{split} \max_{0\le n<N^{l-1}}{\rm Var}(\Psi(Y_{h_{l}}^{\varepsilon, i, M}(t_{n+1}))-\Psi(Y_{h_{l-1}}^{\varepsilon, i, M}(t_{n+1})))\le C\varepsilon^2 h_{l-1}^2+C\varepsilon^4h_{l-1}.
\end{split}$$* **Proof.** From [\[MMC EM scheme_2\]](#MMC EM scheme_2){reference-type="eqref" reference="MMC EM scheme_2"} and [\[MMC EM scheme_3\]](#MMC EM scheme_3){reference-type="eqref" reference="MMC EM scheme_3"} we have that for $n \leq N^{l-1}-1$ $$\begin{aligned} [Y_{h_{l}}^{\varepsilon, i, M}(t_{n+1}) &- Y_{h_{l-1}}^{\varepsilon, i, M}(t_{n+1})]_j=[Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_j \\ &+h_l \sum_{k=0}^{N-1}\left(f_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-f_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})\right) \\ &+h_l \sum_{k=0}^{N-1}\left(f_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})-f_j(Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l-1}}^{\varepsilon, Y_n, M})\right) \\ &+\varepsilon\sqrt{h_l} \sum_{k=0}^{N-1}\left(g_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-g_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})\right)\Delta\xi^k_n \\ &+\varepsilon\sqrt{h_l}\sum_{k=0}^{N-1}\left(g_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})-g_j(Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l-1}}^{\varepsilon, Y_n, M})\right)\Delta\xi^k_n,\end{aligned}$$ where $f_j$ is the $j$th component of $f$ and $g_j$ is the $j$th row of $g.$ Taking variances on both sides of the previous equation and using simple properties of the variance and covariance functions, we obtain $$\begin{aligned} {\rm Var}&([Y_{h_{l}}^{\varepsilon, i, M}(t_{n+1}) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_{n+1})]_j)\leq(1+N h_l){\rm Var}([Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_j) \\ &+4 h^2_l N \sum_{k=0}^{N-1}{\rm Var}\left(f_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-f_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})\right) \\ &+(4N h_l + 1)N h_l {\rm Var}\left(f_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})-f_j(Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l-1}}^{\varepsilon, Y_n, M})\right) \\ &+4\varepsilon^2 h_l \sum_{k=0}^{N-1}{\rm Var}\left(\left(g_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-g_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})\right)\Delta\xi^k_n\right) \\ &+4 \varepsilon^2 h_l\sum_{k=0}^{N-1}{\rm Var}\left(\left(g_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})-g_j(Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l-1}}^{\varepsilon, Y_n, M})\right)\Delta\xi^k_n\right) \\ &+2 {\rm Cov}\left([Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_j,h_l \sum_{k=0}^{N-1}\left[f_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-f_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})\right]\right)\\ &=: I_1 + I_2 + I_3 + I_4 + I_5 + I_6.\end{aligned}$$ In order to complete the proof of the theorem, we give estimates for $I_i, i=2,...,6,$ which will be shown in the following lemmas. **Lemma 14**.
*There exists a positive constant $C$ such that $$I_2 \leq C N^3 h^3_l \varepsilon^2.$$* **Proof.** Using the fact that for two random variables $X,Y, {\rm Var}(X+Y)\leq 2{\rm Var}(X)+2{\rm Var}(Y)$, we have that $$\begin{aligned} {\rm Var}(f_j(&Y_{h_{l}}^{\varepsilon, i, M}(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-f_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M}))\\ &\leq 2{\rm Var}(f_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-f_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})) \\ &+ 2{\rm Var}(f_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-f_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M}))=:I_{2A}+I_{2B}.\end{aligned}$$ First, we estimate $I_{2A}$. By the mean value theorem there exists an $s \in [0,1]$ such that $$\begin{aligned} f_j(Y_{h_{l}}^{\varepsilon, i, M}&(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-f_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M}) \\ &= \langle \nabla f_j(s Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) + (1-s)Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M}), (Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) - Y_{h_{l}}^{\varepsilon, i, M}(t_n)) \rangle. \nonumber\end{aligned}$$ Let $\nabla_q f_j(s Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) + (1-s)Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})$ and $[(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) - Y_{h_{l}}^{\varepsilon, i, M}(t_n))]_q$ be the $q$-th components of $\nabla f_j(s Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) + (1-s)Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})$ and $(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) - Y_{h_{l}}^{\varepsilon, i, M}(t_n))$ respectively. We want to apply Lemma [Lemma 11](#vl2){reference-type="ref" reference="vl2"} with $A^{\varepsilon,h} = \nabla_q f_j(s Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) + (1-s)Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})$ and $B^{\varepsilon,h}=[(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) - Y_{h_{l}}^{\varepsilon, i, M}(t_n))]_q$, so we check that the three conditions are satisfied. By Assumption [Assumption 1](#a1){reference-type="ref" reference="a1"}, the function $\nabla^2_q f_j$ is bounded, so $\nabla_q f_j$ is Lipschitz in the first argument. Applying Lemma [Lemma 12](#vl1){reference-type="ref" reference="vl1"} with $\gamma=\nabla_q f_j(\cdot,\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})$ and $h_{l_1} = h_{l_2} = h_l,$ we obtain $$\label{1stcond} {\rm Var}(\nabla_q f_j(s Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) + (1-s)Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})) \leq C_1 \varepsilon^2,$$ so the first condition of Lemma [Lemma 11](#vl2){reference-type="ref" reference="vl2"} is satisfied. Conditions 2 and 3 are satisfied by Assumption [Assumption 1](#a1){reference-type="ref" reference="a1"} and Lemma [Lemma 13](#cond3){reference-type="ref" reference="cond3"} respectively.
Thus by Lemma [Lemma 11](#vl2){reference-type="ref" reference="vl2"} we have that $$\begin{aligned} {\rm Var}(\nabla_q f_j(s Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) &+ (1-s)Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})[(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) - Y_{h_{l}}^{\varepsilon, i, M}(t_n))]_q) \\ &\leq 3C^2_3C_1 N^2 h_l^2 \varepsilon^2 + 15 C_2^2 {\rm Var}([(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) - Y_{h_{l}}^{\varepsilon, i, M}(t_n))]_q).\end{aligned}$$ In order to estimate ${\rm Var}([(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) - Y_{h_{l}}^{\varepsilon, i, M}(t_n))]_q)$ we use Equation [\[MMC EM scheme_1\]](#MMC EM scheme_1){reference-type="eqref" reference="MMC EM scheme_1"} to obtain $$\begin{aligned} {\rm Var}&([(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) - Y_{h_{l}}^{\varepsilon, i, M}(t_n))]_q) \\ &\leq 2{\rm Var} (\sum_{j=0}^{k-1}f_q(Y_{h_{l}}^{\varepsilon, i, M}(t_n^j),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})h_l) + 2{\rm Var}(\varepsilon\sqrt{h_l} \sum_{j=0}^{k-1}g_q(Y_{h_{l}}^{\varepsilon, i, M}(t_n^j),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})\Delta\xi_n^j). \end{aligned}$$ By Assumption [Assumption 1](#a1){reference-type="ref" reference="a1"} and Lemma [Lemma 7](#z){reference-type="ref" reference="z"} we have that $$\begin{aligned} {\rm Var} (\sum_{j=0}^{k-1}&f_q(Y_{h_{l}}^{\varepsilon, i, M}(t_n^j),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})h_l) = {\rm Var} (h_l\sum_{j=0}^{k-1}f_q(Y_{h_{l}}^{\varepsilon, i, M}(t_n^j),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})-f_q(z_h(t_n^j),\delta_{z_h(t_n^j)})) \\ &\leq h_l^2 \mathbb{E}[|(\sum_{j=0}^{k-1}f_q(Y_{h_{l}}^{\varepsilon, i, M}(t_n^j),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})-f_q(z_h(t_n^j),\delta_{z_h(t_n^j)}))|^2] \leq C N^2 h_l^2 \varepsilon^2.\end{aligned}$$ From [\[eq3-lemma1\]](#eq3-lemma1){reference-type="eqref" reference="eq3-lemma1"} we have that $${\rm Var}(\varepsilon\sqrt{h_l} \sum_{j=0}^{k-1}g_q(Y_{h_{l}}^{\varepsilon, i, M}(t_n^j),\mathcal{L}_{h_{l}}^{\varepsilon, Y^j_n, M})\Delta\xi_n^j) \leq C N h_l \varepsilon^2.$$ Thus $${\rm Var}([(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) - Y_{h_{l}}^{\varepsilon, i, M}(t_n))]_q) \leq C N^2 h_l^2 \varepsilon^2 + C N h_l \varepsilon^2.$$ Using the formula ${\rm Var} (\sum_{i=1}^d X_i )\leq d \sum_{i=1}^d {\rm Var}(X_i)$ with $i=q,X_i = [Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) - Y_{h_{l}}^{\varepsilon, i, M}(t_n)]_q$ yields $${\rm Var}(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) - Y_{h_{l}}^{\varepsilon, i, M}(t_n)) \leq d^2C N^2 h_l^2 \varepsilon^2 + d^2 C N h_l \varepsilon^2 \leq C N h_l \varepsilon^2.$$ Thus, $$\begin{aligned} I_{2A} \leq C N h_l \varepsilon^2 .\end{aligned}$$ Next, we estimate $I_{2B}$.
By Equation [\[mvt\]](#mvt){reference-type="eqref" reference="mvt"} there exists a random variable $s:\Omega \rightarrow [0,1]$ such that $$\begin{aligned} f_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-&f_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M}) \\ &=\mathbb{E}[\langle \partial_{\mu} f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s),(Y_{h_{l}}^{\varepsilon, i, M}(t^k_n)-Y_{h_{l}}^{\varepsilon, i, M}(t_n)) \rangle]_{Z=Y_{h_{l}}^{\varepsilon, i, M}(t_n)}.\end{aligned}$$ where $Y_n^s := s Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) + (1-s)Y_{h_{l}}^{\varepsilon, i, M}(t_n).$ Let $\partial_{\mu,q}f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)$ and $[Y_{h_{l}}^{\varepsilon, i, M}(t^k_n)-Y_{h_{l}}^{\varepsilon, i, M}(t_n)]_q$ be the $q$-components of $\partial_{\mu}f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)$ and $Y_{h_{l}}^{\varepsilon, i, M}(t^k_n)-Y_{h_{l}}^{\varepsilon, i, M}(t_n)$ respectively. Then $$\begin{aligned} &{\rm Var}(\mathbb{E}[\partial_{\mu,q}f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)[Y_{h_{l}}^{\varepsilon, i, M}(t^k_n)-Y_{h_{l}}^{\varepsilon, i, M}(t_n)]_q]_{Z=Y_{h_{l}}^{\varepsilon, i, M}(t_n)}) \\ &={\rm Var}(\mathbb{E}[\partial_{\mu,q}f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)[Y_{h_{l}}^{\varepsilon, i, M}(t^k_n)-Y_{h_{l}}^{\varepsilon, i, M}(t_n)]_q]_{Z=Y_{h_{l}}^{\varepsilon, i, M}(t_n)}\\ &-\mathbb{E}[\partial_{\mu,q}f_j(z_{h_l}(t_n),\delta_{z_{h_l}(t_n)})(z_{h_l}(t_n))[Y_{h_{l}}^{\varepsilon, i, M}(t^k_n)-Y_{h_{l}}^{\varepsilon, i, M}(t_n)]_q] ) \\ &= {\rm Var}(\mathbb{E}[(\partial_{\mu,q}f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)-\partial_{\mu,q}f_j(z_{h_l}(t_n),\delta_{z_{h_l}(t_n)})(z_{h_l}(t_n)))\\ &\times [Y_{h_{l}}^{\varepsilon, i, M}(t^k_n)-Y_{h_{l}}^{\varepsilon, i, M}(t_n)]_q]]_{Z=Y_{h_{l}}^{\varepsilon, i, M}(t_n)} )\\ &\leq \mathbb{E}[(\mathbb{E}[(\partial_{\mu,q}f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)-\partial_{\mu,q}f_j(z_{h_l}(t_n),\delta_{z_{h_l}(t_n)})(z_{h_l}(t_n)))\\ &\times [Y_{h_{l}}^{\varepsilon, i, M}(t^k_n)-Y_{h_{l}}^{\varepsilon, i, M}(t_n)]_q]]_{Z=Y_{h_{l}}^{\varepsilon, i, M}(t_n)} )^2] \\ &\leq \mathbb{E}[\mathbb{E}[|\partial_{\mu,q}f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)-\partial_{\mu,q}f_j(z_{h_l}(t_n),\delta_{z_{h_l}(t_n)})(z_{h_l}(t_n))|^2]_{Z=Y_{h_{l}}^{\varepsilon, i, M}(t_n)}\\ &\times\mathbb{E}[|[Y_{h_{l}}^{\varepsilon, i, M}(t^k_n)-Y_{h_{l}}^{\varepsilon, i, M}(t_n)]_q|^2]],\\\end{aligned}$$ where we have use the Cauchy-Schwarz inequality in the penultimate step. By condition [\[a2a\]](#a2a){reference-type="eqref" reference="a2a"} and Lemma [Lemma 7](#z){reference-type="ref" reference="z"} $$\mathbb{E}[\mathbb{E}[|\partial_{\mu,q}f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)-\partial_{\mu,q}f_j(z_{h_l}(t_n),\delta_{z_{h_l}(t_n)})(z_{h_l}(t_n))|^2]_{Z=Y_{h_{l}}^{\varepsilon, i, M}(t_n)} \leq C\varepsilon^2$$ and by Lemma [Lemma 8](#lemma kn){reference-type="ref" reference="lemma kn"} $$\mathbb{E}[|Y_{h_{l}}^{\varepsilon, i, M}(t^k_n) - Y_{h_{l}}^{\varepsilon, i, M}(t_n)|^2] \leq C N^2 h_l^2 +C N h_l \varepsilon^2.$$ Therefore $$I_{2B} \leq C N^2 h^2_l \varepsilon^2 + C N h_l \varepsilon^4,$$ and the proof is complete. $\Box$ **Lemma 15**. 
*There exist positive constants $C$ and $\bar C$ such that $$I_3 \leq C N h_l \sum_{q=1}^d {\rm Var}([(Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n))]_q) +C N^3 h_l^3 \varepsilon^2.$$* **Proof.** Note that $$\begin{aligned} {\rm Var}(f_j&(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})-f_j(Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l-1}}^{\varepsilon, Y_n, M}))\\ &\leq 2{\rm Var}(f_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})-f_j(Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})) \\ &+ 2{\rm Var}(f_j(Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})-f_j(Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l-1}}^{\varepsilon, Y_n, M}))=:I_{3A} + I_{3B}.\end{aligned}$$ First, we estimate $I_{3A}.$ By the mean value theorem there exists an $s \in [0,1]$ such that $$\begin{aligned} f_j(Y_{h_{l}}^{\varepsilon, i, M}&(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})-f_j(Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M}) \\ &= \langle \nabla f_j(s Y_{h_{l}}^{\varepsilon, i, M}(t_n) + (1-s)Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M}), (Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)) \rangle. \nonumber\end{aligned}$$ Let $\nabla_q f_j(s Y_{h_{l}}^{\varepsilon, i, M}(t_n) + (1-s)Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})$ and $[(Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n))]_q$ be the $q$-th components of $\nabla f_j(s Y_{h_{l}}^{\varepsilon, i, M}(t_n) + (1-s)Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})$ and $(Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n))$ respectively. We want to apply Lemma [Lemma 11](#vl2){reference-type="ref" reference="vl2"} with $A^{\varepsilon,h} = \nabla_q f_j(s Y_{h_{l}}^{\varepsilon, i, M}(t_n) + (1-s)Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})$ and $B^{\varepsilon,h}=[(Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n))]_q$, so we check that the three conditions are satisfied. Applying Lemma [Lemma 12](#vl1){reference-type="ref" reference="vl1"} with $\gamma=\nabla_q f_j, k=0,h_{l_1}=h_{l-1}$ and $h_{l_2} = h_l,$ we obtain $${\rm Var}(\nabla_q f_j(s Y_{h_{l}}^{\varepsilon, i, M}(t_n) + (1-s)Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})) \leq C_1 \varepsilon^2,$$ so the first condition of Lemma [Lemma 11](#vl2){reference-type="ref" reference="vl2"} is satisfied. Conditions 2 and 3 are satisfied by Assumption [Assumption 1](#a1){reference-type="ref" reference="a1"} and Lemma [Lemma 13](#cond3){reference-type="ref" reference="cond3"} respectively.
Thus by Lemma [Lemma 11](#vl2){reference-type="ref" reference="vl2"} we have that $$\begin{aligned} {\rm Var}(\nabla_q f_j(s Y_{h_{l}}^{\varepsilon, i, M}(t_n) &+ (1-s)Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})[(Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n))]_q) \\ &\leq 3C^2_3C_1 N^2 h_l^2 \varepsilon^2 + 15 C_2^2 {\rm Var}([(Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n))]_q).\end{aligned}$$ Using the formula ${\rm Var} (\sum_{i=1}^d X_i )\leq d \sum_{i=1}^d {\rm Var}(X_i)$ with $i=q,X_i = [Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_q$ yields $${\rm Var}((Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n))) \leq C \sum_{q=1}^d {\rm Var}([(Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n))]_q +C N^2 h_l^2 \varepsilon^2.$$ Therefore, $$I_{3A} \leq CN^2h_l^2\varepsilon^2+ C \sum_{q=1}^d {\rm Var}([(Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_q).$$ Next we estimate $I_{3B}.$ By Equation [\[mvt\]](#mvt){reference-type="eqref" reference="mvt"} there exists a random variable $s:\Omega \rightarrow [0,1]$ such that $$\begin{aligned} f_j(Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})-&f_j(Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l-1}}^{\varepsilon, Y_n, M}) \\ &=\mathbb{E}[\langle \partial_{\mu} f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s),(Y_{h_{l}}^{\varepsilon, i, M}(t_n)-Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)) \rangle]_{Z=Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)}.\end{aligned}$$ where $Y_n^s := s Y_{h_{l}}^{\varepsilon, i, M}(t_n) + (1-s)Y_{h_{l-1}}^{\varepsilon, i, M}(t_n).$ Let $\partial_{\mu,q}f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)$ and $[Y_{h_{l}}^{\varepsilon, i, M}(t^n)-Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_q$ be the $q$-components of $\partial_{\mu}f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)$ and $Y_{h_{l}}^{\varepsilon, i, M}(t_n)-Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)$ respectively. 
Then $$\begin{aligned} &{\rm Var}(\mathbb{E}[\partial_{\mu,q}f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)[Y_{h_{l}}^{\varepsilon, i, M}(t_n)-Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_q]_{Z=Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)}) \\ &={\rm Var}(\mathbb{E}[\partial_{\mu,q}f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)[Y_{h_{l}}^{\varepsilon, i, M}(t^k_n)-Y_{h_{l}}^{\varepsilon, i, M}(t_n)]_q]_{Z=Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)}\\ &-\mathbb{E}[\partial_{\mu,q}f_j(z_{h_{l-1}}(t_n),\delta_{z_{h_{l-1}}(t_n)})(z_{h_{l-1}}(t_n))[Y_{h_{l}}^{\varepsilon, i, M}(t_n)-Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_q] ) \\ &= {\rm Var}(\mathbb{E}[(\partial_{\mu,q}f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)-\partial_{\mu,q}f_j(z_{h_{l-1}}(t_n),\delta_{z_{h_{l-1}}(t_n)})(z_{h_{l-1}}(t_n)))\\ &\times[Y_{h_{l}}^{\varepsilon, i, M}(t_n)-Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_q]]_{Z=Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)} )\\ &\leq \mathbb{E}[(\mathbb{E}[(\partial_{\mu,q}f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)-\partial_{\mu,q}f_j(z_{h_{l-1}}(t_n),\delta_{z_{h_{l-1}}(t_n)})(z_{h_{l-1}}(t_n)))\\ &\times[Y_{h_{l}}^{\varepsilon, i, M}(t_n)-Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_q]]_{Z=Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)} )^2] \\ &\leq \mathbb{E}[\mathbb{E}[|\partial_{\mu,q}f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)-\partial_{\mu,q}f_j(z_{h_{l-1}}(t_n),\delta_{z_{h_{l-1}}(t_n)})(z_{h_{l-1}}(t_n))|^2]_{Z=Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)}\\ &\times \mathbb{E}[|[Y_{h_{l}}^{\varepsilon, i, M}(t_n)-Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_q|^2]],\end{aligned}$$ where we have use the Cauchy-Schwarz inequality in the penultimate step. By condition [\[a2a\]](#a2a){reference-type="eqref" reference="a2a"} and Lemma [Lemma 7](#z){reference-type="ref" reference="z"} $$\mathbb{E}[\mathbb{E}[|\partial_{\mu,q}f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)-\partial_{\mu,q}f_j(z_{h_{l-1}}(t_n),\delta_{z_{h_{l-1}}(t_n)})(z_{h_{l-1}}(t_n))|^2]_{Z=Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)} \leq C\varepsilon^2$$ and by Theorem [Theorem 2](#2ndMom2Paths){reference-type="ref" reference="2ndMom2Paths"} $$\mathbb{E}[|Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)|^2] \leq C N^2 h_l^2+ C\varepsilon^4 N h_l.$$ Therefore, $$I_{3B} \leq C N^2 h_l^2 \varepsilon^2+ C\varepsilon^6 N h_l,$$ and the proof is complete. $\Box$ **Lemma 16**. *There exists a positive constant $C$ such that $$I_4 \leq C \varepsilon^2 h^3_{l-1} + C \varepsilon^4 h^2_{l-1}.$$* **Proof.** By Lemma [Lemma 8](#lemma kn){reference-type="ref" reference="lemma kn"} and Assumption [Assumption 1](#a1){reference-type="ref" reference="a1"} one can see that $$\begin{aligned} I_4 &\leq 4\varepsilon^2 h_l \sum_{k=0}^{N-1} \mathbb{E}[|g_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-g_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})|^2] \\ &\leq 8\varepsilon^2 h_l N K (C h_{l-1}^2+ C\varepsilon^2 h_{l-1}) = C \varepsilon^2 h^3_{l-1} + C \varepsilon^4 h^2_{l-1}. \end{aligned}$$ $\Box$ **Lemma 17**. 
*There exists a positive constant $C$ such that $$I_5 \leq C\varepsilon^2 h_{l-1}^3+ C \varepsilon^6 h^2_{l-1}.$$* **Proof.** By Assumption [Assumption 1](#a1){reference-type="ref" reference="a1"} and Theorem [Theorem 2](#2ndMom2Paths){reference-type="ref" reference="2ndMom2Paths"} we have that $$\begin{aligned} I_5 &\leq 4\varepsilon^2 h_l \sum_{k=0}^{N-1} \mathbb{E}[|g_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})-g_j(Y_{h_{l-1}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l-1}}^{\varepsilon, Y_n, M})|^2] \\ &\leq 4\varepsilon^2 h_l N K (C h_{l-1}^2+ C\varepsilon^4 h_{l-1})=C\varepsilon^2 h_{l-1}^3+C \varepsilon^6 h^2_{l-1}.\end{aligned}$$ $\Box$ **Lemma 18**. *There exists a positive constant $C$ such that $$I_6 \leq 2Nh_l {\rm Var}\big([Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_j\big)+ C N^3 h_l^3 \varepsilon^2.$$* **Proof.** Since the covariance is a linear function, by subtracting and adding $f(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})$ to $f_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-f_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})$ we have that $$\begin{aligned} I_6 &= 2 {\rm Cov}\left([Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_j,h_l \sum_{k=0}^{N-1}[f_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n^k),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-f_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})]\right) \\ &+ 2 {\rm Cov}\left([Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_j,h_l \sum_{k=0}^{N-1}[f_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})-f_j(Y_{h_{l}}^{\varepsilon, i, M}(t_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y_n, M})]\right) \\ &=: I_{6A} + I_{6B}. 
\end{aligned}$$ By Lemma [Lemma 9](#ABE){reference-type="ref" reference="ABE"}, we obtain $$\begin{aligned} I_{6A} &= 2 {\rm Cov}\left([Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_j,h_l \sum_{k=0}^{N-1}(A^j_k + B^j_k+ E^j_k)\right) \end{aligned}$$ Using the bilinearity property of the covariance function we have $$\begin{aligned} I_{6A} &=2h_l \sum_{k=0}^{N-1}{\rm Cov}\Big([Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_j, A^j_k\Big)+2h_l\sum_{k=0}^{N-1} {\rm Cov}\Big([Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_j, B^j_k\Big) \\ &+ 2h_l\sum_{k=0}^{N-1} {\rm Cov}\Big([Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_j, E^j_k\Big).\end{aligned}$$ Using the definition of covariance and since the increments $\xi_n^j$ in $B_k^j$ are independent, we find that $$\begin{aligned} {\rm Cov}\Big([Y_{h_{l}}^{\varepsilon, i, M}(t_n) &- Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_j, B^j_k\Big)\\ &=\mathbb{E}[[Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_j B^j_k]-\mathbb{E}[[Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_j]\mathbb{E}[B^j_k]=0.\end{aligned}$$ Then using the fact that ${\rm Cov}(X,Y) \leq \frac 1 2 {\rm Var}(X) + \frac 1 2 {\rm Var}(Y)$, yields $$\label{I_6var} I_{6A} \leq 2Nh_l{\rm Var}\big([Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_j\big)+h_l\sum_{k=0}^{N-1}{\rm Var}(A^j_k)+h_l\sum_{k=0}^{N-1}{\rm Var}(E^j_k).$$ Recall from Lemma [Lemma 9](#ABE){reference-type="ref" reference="ABE"} that $$A^j_k = \langle \nabla f_j(s Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) + (1-s)Y_{h_{l}}^{\varepsilon, i, M}(t_n)),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M}), h_l \sum_{r=0}^{k-1} f(Y_{h_{l}}^{\varepsilon, i, M}(t^r_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^r_n, M}) \rangle.$$ In order to estimate ${\rm Var}(A^j_k)$ we use Lemma [Lemma 11](#vl2){reference-type="ref" reference="vl2"} with $A^{\varepsilon,h} = \nabla_q f_j(s Y_{h_{l}}^{\varepsilon, i, M}(t_n^k) + (1-s)Y_{h_{l}}^{\varepsilon, i, M}(t_n)),\mathcal{L}_{h_{l}}^{\varepsilon, Y^k_n, M})$ and $B^{\varepsilon,h}=[h_l \sum_{r=0}^{k-1} f(Y_{h_{l}}^{\varepsilon, i, M}(t^r_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^r_n, M})]_q$ so we check that the three conditions are satisfied. The first and second conditions are satisfied by [\[1stcond\]](#1stcond){reference-type="eqref" reference="1stcond"} and Assumption [Assumption 1](#a1){reference-type="ref" reference="a1"} respectively. By Lemma [Lemma 5](#0pmoment){reference-type="ref" reference="0pmoment"} and Assumption [Assumption 1](#a1){reference-type="ref" reference="a1"} we have that $$\begin{aligned} |\mathbb{E}[[h_l \sum_{r=0}^{k-1} f(Y_{h_{l}}^{\varepsilon, i, M}(t^r_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^r_n, M})]_q]|\leq CNh_l,\end{aligned}$$ so the third condition is also satisfied. 
Thus Lemma [Lemma 11](#vl2){reference-type="ref" reference="vl2"} implies that $${\rm Var}(A^j_k) \leq C N^2 h_l^2 \varepsilon^2 + C {\rm Var}([h_l \sum_{r=0}^{k-1} f(Y_{h_{l}}^{\varepsilon, i, M}(t^r_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^r_n, M})]_q).$$ Lemma [Lemma 7](#z){reference-type="ref" reference="z"} yields $$\begin{aligned} {\rm Var}([h_l \sum_{r=0}^{k-1} &f(Y_{h_{l}}^{\varepsilon, i, M}(t^r_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^r_n, M})]_q)={\rm Var}([h_l \sum_{r=0}^{k-1} \{f(Y_{h_{l}}^{\varepsilon, i, M}(t^r_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^r_n, M})-f(z_{h_l}(t_n^r),\delta_{z_{h_l}(t_n^r)})\}]_q) \\ &\leq \mathbb{E}[|([h_l \sum_{r=0}^{k-1} \{f(Y_{h_{l}}^{\varepsilon, i, M}(t^r_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^r_n, M})-f(z_{h_l}(t_n^r),\delta_{z_{h_l}(t_n^r)})\}]_q)|^2]\leq C N^2 h_l^2 \varepsilon^2.\end{aligned}$$ Therefore $$\label{varA} {\rm Var}(A^j_k) \leq C N^2 h_l^2 \varepsilon^2 .$$ From [\[2ndE\]](#2ndE){reference-type="eqref" reference="2ndE"} we have $$\label{varE} {\rm Var}(E^j_k)\leq \mathbb{E}[|E^j_k|^2] \leq C N^3 h_l^3 \varepsilon^2 + C N^2 h_l^2 \varepsilon^4.$$ Substituting [\[varA\]](#varA){reference-type="eqref" reference="varA"} and [\[varE\]](#varE){reference-type="eqref" reference="varE"} into [\[I_6var\]](#I_6var){reference-type="eqref" reference="I_6var"} we obtain $$I_{6A} \leq 2Nh_l {\rm Var}\big([Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_j\big)+ C N^3 h_l^3 \varepsilon^2.$$ Using Lemma [Lemma 10](#bar ABE){reference-type="ref" reference="bar ABE"} and simple properties of the covariance function yields $$\begin{aligned} I_{6B} &= 2 {\rm Cov}\left([Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_j,h_l \sum_{k=0}^{N-1}( \bar A^j_k + \bar E^j_k) \right) \\ &\leq 2h_l \sum_{k=0}^{N-1}{\rm Cov}\Big([Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_j, \bar A^j_k\Big) + 2h_l\sum_{k=0}^{N-1} {\rm Cov}\Big([Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_j, \bar E^j_k\Big)\\ &\leq 2Nh_l{\rm Var}\big([Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_j\big)+h_l\sum_{k=0}^{N-1}{\rm Var}(\bar A^j_k)+h_l\sum_{k=0}^{N-1}{\rm Var}(\bar E^j_k).\end{aligned}$$ Recall from Lemma [Lemma 10](#bar ABE){reference-type="ref" reference="bar ABE"} that $$\bar A^j_k =\mathbb{E}[\langle \partial_{\mu} f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s), h_l \sum_{r=0}^{k-1} f(Y_{h_{l}}^{\varepsilon, i, M}(t^r_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^r_n, M}) \rangle]_{Z=Y_{h_{l}}^{\varepsilon, i, M}(t_n)}.$$ Let $\partial_{\mu,q}f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)$ and $f_q(Y_{h_{l}}^{\varepsilon, i, M}(t^r_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^r_n, M})$ be the $q$-components of $\partial_{\mu} f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)$ and $f(Y_{h_{l}}^{\varepsilon, i, M}(t^r_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^r_n, M})$ respectively.
Then $$\begin{aligned} {\rm Var}&(\mathbb{E}[\partial_{\mu,q}f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)h_l \sum_{r=0}^{k-1} f_q(Y_{h_{l}}^{\varepsilon, i, M}(t^r_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^r_n, M})]_{Z=Y_{h_{l}}^{\varepsilon, i, M}(t_n)}) \\ &={\rm Var}(\mathbb{E}[\partial_{\mu,q}f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)h_l \sum_{r=0}^{k-1} f_q(Y_{h_{l}}^{\varepsilon, i, M}(t^r_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^r_n, M})]_{Z=Y_{h_{l}}^{\varepsilon, i, M}(t_n)}\\ &-\mathbb{E}[\partial_{\mu,q}f_j(z_{h_l}(t_n),\delta_{z_{h_l}(t_n)})(z_{h_l}(t_n))h_l \sum_{r=0}^{k-1} f_q(Y_{h_{l}}^{\varepsilon, i, M}(t^r_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^r_n, M})] ) \\ &= {\rm Var}(\mathbb{E}[(\partial_{\mu,q}f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)-\partial_{\mu,q}f_j(z_{h_l}(t_n),\delta_{z_{h_l}(t_n)})(z_{h_l}(t_n)))\\ &\times h_l \sum_{r=0}^{k-1} f_q(Y_{h_{l}}^{\varepsilon, i, M}(t^r_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^r_n, M})]]_{Z=Y_{h_{l}}^{\varepsilon, i, M}(t_n)} )\\ &\leq \mathbb{E}[(\mathbb{E}[(\partial_{\mu,q}f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)-\partial_{\mu,q}f_j(z_{h_l}(t_n),\delta_{z_{h_l}(t_n)})(z_{h_l}(t_n)))\\ &\times h_l \sum_{r=0}^{k-1} f_q(Y_{h_{l}}^{\varepsilon, i, M}(t^r_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^r_n, M})]_{Z=Y_{h_{l}}^{\varepsilon, i, M}(t_n)} )^2] \\ &\leq \mathbb{E}[\mathbb{E}[|\partial_{\mu,q}f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)-\partial_{\mu,q}f_j(z_{h_l}(t_n),\delta_{z_{h_l}(t_n)})(z_{h_l}(t_n))|^2]_{Z=Y_{h_{l}}^{\varepsilon, i, M}(t_n)}\\ &\times\mathbb{E}[|h_l \sum_{r=0}^{k-1} f_q(Y_{h_{l}}^{\varepsilon, i, M}(t^r_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^r_n, M})|^2]],\end{aligned}$$ where we have used the Cauchy-Schwarz inequality in the last step. By condition [\[a2a\]](#a2a){reference-type="eqref" reference="a2a"} and Lemma [Lemma 7](#z){reference-type="ref" reference="z"} $$\mathbb{E}[\mathbb{E}[|\partial_{\mu,q}f_j(Z,\mathcal{L}_{h_{l}}^{\varepsilon, Y^s_n, M})(Y_n^s)-\partial_{\mu,q}f_j(z_{h_l}(t_n),\delta_{z_{h_l}(t_n)})(z_{h_l}(t_n))|^2]_{Z=Y_{h_{l}}^{\varepsilon, i, M}(t_n)} \leq C\varepsilon^2$$ and by Lemma [Lemma 5](#0pmoment){reference-type="ref" reference="0pmoment"} and Remark [Remark 1](#onem){reference-type="ref" reference="onem"} $$\mathbb{E}[|h_l \sum_{r=0}^{k-1} f_q(Y_{h_{l}}^{\varepsilon, i, M}(t^r_n),\mathcal{L}_{h_{l}}^{\varepsilon, Y^r_n, M})|^2]] \leq C N^2 h_l^2.$$ Thus, $${\rm Var}(\bar A^j_k ) \leq C N^2 h_l^2 \varepsilon^2.$$ From [\[2nd bar E\]](#2nd bar E){reference-type="eqref" reference="2nd bar E"} we have $${\rm Var}(\bar E^j_k) \leq \mathbb{E}[|\bar E^j_k|^2] \leq \bar K \varepsilon^2 C_1 N^3 h_l^3+\bar K \varepsilon^4 C N^2 h_l^2.$$ Therefore, $$I_{6B} \leq 2Nh_l {\rm Var}\big([Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_j\big)+ C N^3 h_l^3 \varepsilon^2$$ and the proof is complete. $\Box$ **Continuation of the proof of Theorem [Theorem 3](#th1){reference-type="ref" reference="th1"}** By Lemmas [Lemma 14](#lem2){reference-type="ref" reference="lem2"}-[Lemma 18](#lem6){reference-type="ref" reference="lem6"}, we have $$\begin{aligned} {\rm Var}&([Y_{h_{l}}^{\varepsilon, i, M}(t_{n+1}) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_{n+1})]_j)\leq{\rm Var}([Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_j) \\ &+ C N h_l \sum_{q=1}^d {\rm Var}([Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_q)+ C N^3 h_l^3 \varepsilon^2 + C N^2 h_l^2 \varepsilon^4. 
\end{aligned}$$ Taking the maximum over $j$ on both sides yields that for $n \leq N^{l-1}-1$ $$\begin{aligned} \max_{1 \leq j \leq d}{\rm Var}&([Y_{h_{l}}^{\varepsilon, i, M}(t_{n+1}) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_{n+1})]_j)\leq \max_{1 \leq j \leq d}{\rm Var}([Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_j) \\ &+ C N h_l \max_{1 \leq j \leq d} {\rm Var}([Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_j)+ C N^3 h_l^3 \varepsilon^2 + C N^2 h_l^2 \varepsilon^4. \end{aligned}$$ An application of the Gronwall inequality produces $$\label{varY} \max_{\substack{0 \leq n \leq N^{l-1} \\ 1 \leq j \leq d}}{\rm Var}([Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)]_j) \leq C N^2 h_l^2 \varepsilon^2 + C N h_l \varepsilon^4.$$ In order to estimate ${\rm Var}(\Psi(Y_{h_{l}}^{\varepsilon, i, M}(t_n))-\Psi(Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)))$ we apply the mean value theorem, so there exists $s \in [0,1]$ such that $$\Psi(Y_{h_{l}}^{\varepsilon, i, M}(t_n))-\Psi(Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)) = \nabla \Psi(s Y_{h_{l}}^{\varepsilon, i, M}(t_n) + (1-s)Y_{h_{l-1}}^{\varepsilon, i, M}(t_n))(Y_{h_{l}}^{\varepsilon, i, M}(t_n)-Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)).$$ We shall apply Lemma [Lemma 11](#vl2){reference-type="ref" reference="vl2"} with $A^{\varepsilon,h}=\nabla_q \Psi(s Y_{h_{l}}^{\varepsilon, i, M}(t_n) + (1-s)Y_{h_{l-1}}^{\varepsilon, i, M}(t_n))$ and $B^{\varepsilon,h}=[(Y_{h_{l}}^{\varepsilon, i, M}(t_n)-Y_{h_{l-1}}^{\varepsilon, i, M}(t_n))]_q.$ Applying Lemma [Lemma 12](#vl1){reference-type="ref" reference="vl1"} with $\gamma=\nabla_q \Psi, k=0,h_{l_1}=h_{l-1}$ and $h_{l_2} = h_l,$ we obtain $${\rm Var}(\nabla_q \Psi(s Y_{h_{l}}^{\varepsilon, i, M}(t_n) + (1-s)Y_{h_{l-1}}^{\varepsilon, i, M}(t_n))) \leq C \varepsilon^2,$$ so the first condition of Lemma [Lemma 11](#vl2){reference-type="ref" reference="vl2"} is satisfied. Conditions 2 and 3 are satisfied by Assumption [Assumption 1](#a1){reference-type="ref" reference="a1"} and Lemma [Lemma 13](#cond3){reference-type="ref" reference="cond3"} respectively. Thus by Lemma [Lemma 11](#vl2){reference-type="ref" reference="vl2"} we have that $$\begin{aligned} {\rm Var}(\nabla_q \Psi(s Y_{h_{l}}^{\varepsilon, i, M}(t_n) &+ (1-s)Y_{h_{l-1}}^{\varepsilon, i, M}(t_n))[(Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n))]_q) \\ &\leq C N^2 h_l^2 \varepsilon^2 + C {\rm Var}([(Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n))]_q).\end{aligned}$$ Thus $$\label{finVar} {\rm Var}(\Psi(Y_{h_{l}}^{\varepsilon, i, M}(t_n))-\Psi(Y_{h_{l-1}}^{\varepsilon, i, M}(t_n))) \leq C N^2 h_l^2 \varepsilon^2 + C {\rm Var}(Y_{h_{l}}^{\varepsilon, i, M}(t_n) - Y_{h_{l-1}}^{\varepsilon, i, M}(t_n)).$$ Substituting [\[varY\]](#varY){reference-type="eqref" reference="varY"} into [\[finVar\]](#finVar){reference-type="eqref" reference="finVar"} we obtain the desired result. $\Box$ # Summary Regarding the problem of computing $\mathbb{E}[\Phi(X_T)]$, where $X_T$ is the solution at time $T$ to an MV-SDE with small noise, we studied the problem of comparing the computational cost of using the standard Monte Carlo method with a customized discretization method versus using the multilevel Monte Carlo method combined with the Euler-Maruyama scheme. To this end, the crucial part is to estimate the variance of two coupled paths. We found that this variance is $\mathcal O (\varepsilon^2 h_{l-1}^2+\varepsilon^4h_{l-1})$, which is the same as in [@ahs15]; a small simulation sketch of the coupled two-level paths behind this estimate is given below.
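To make the quantity in this estimate concrete, the following minimal Python sketch (not taken from the paper) simulates the two coupled Euler-Maruyama particle systems on consecutive levels, driven by the same Brownian increments, and forms a crude empirical estimate of ${\rm Var}(\Psi(Y_{h_l}(T))-\Psi(Y_{h_{l-1}}(T)))$. The drift $f$, diffusion $g$, test function $\Psi$, the reduction of the measure dependence to the empirical mean, and all numerical parameters are illustrative assumptions; the code reflects one natural reading of the coupled scheme above and is not meant to verify the constants of Theorem [Theorem 3](#th1){reference-type="ref" reference="th1"}.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy coefficients of a scalar MV-SDE with small noise:
#   dX = f(X, E[X]) dt + eps * g(X, E[X]) dW
f = lambda x, m: -x + 0.5 * m                 # drift depends on the state and its mean
g = lambda x, m: 1.0 + 0.1 * np.sin(x + m)    # bounded, Lipschitz diffusion
Psi = np.tanh                                  # smooth test function with bounded derivatives

T, eps, M = 1.0, 0.05, 2000                    # horizon, noise scale, number of particles
lev, N = 4, 4                                  # level and refinement factor between levels
h_l = T * N ** (-lev)                          # fine step h_l
h_lm1 = T * N ** (-(lev - 1))                  # coarse step h_{l-1} = N * h_l

Yf = np.zeros(M)                               # fine-level particle system  Y_{h_l}
Yc = np.zeros(M)                               # coarse-level particle system Y_{h_{l-1}}

for n in range(N ** (lev - 1)):                # one coarse step = N fine substeps
    dW_sum = np.zeros(M)
    for k in range(N):                         # fine Euler-Maruyama substeps
        mf = Yf.mean()                         # empirical measure enters via its mean here
        dW = rng.normal(0.0, np.sqrt(h_l), M)  # sqrt(h_l) * Delta xi
        Yf = Yf + f(Yf, mf) * h_l + eps * g(Yf, mf) * dW
        dW_sum += dW                           # reuse the same increments to couple the levels
    mc = Yc.mean()
    Yc = Yc + f(Yc, mc) * h_lm1 + eps * g(Yc, mc) * dW_sum

diff = Psi(Yf) - Psi(Yc)
print("empirical Var(Psi(Y_hl) - Psi(Y_hlm1)):", diff.var())
print("reference scale eps^2*h_{l-1}^2 + eps^4*h_{l-1}:", eps**2 * h_lm1**2 + eps**4 * h_lm1)
```

The printed reference value only indicates the order $\varepsilon^2 h_{l-1}^2+\varepsilon^4 h_{l-1}$ predicted above; the constants are not tracked, and the empirical variance of a single run should only be compared with it in order of magnitude.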
This means that the additional McKean-Vlasov component does not add computational complexity (per equation in the system of particles), and their conclusion about the computational cost of the method remains valid in our case. If $\delta \leq \varepsilon^2$, there is no benefit from using discretization methods customized for the small noise case. Moreover, if $\delta \geq e^{-\frac 1 \varepsilon}$, the EM scheme combined with the MLMC method leads to a cost $O(1).$ This is the same cost we would have with the standard MC method if we had $X_T$ given by an explicit formula in terms of $W_T$, so that no discretization method would be required.

Anderson D.F., Higham D.J., Sun Y., Multilevel Monte Carlo for stochastic differential equations with small noise. *SIAM J. Numer. Anal.*, 54, 505-529, 2016.
Anderson D.F., Higham D.J., Sun Y., Complexity of multilevel Monte Carlo tau-leaping. SIAM J. Numer. Anal., 52 (2014), pp. 3106--3127.
Baladron, J., Fasoli, D., Faugeras, O. and Touboul, J., Mean-field description and propagation of chaos in networks of Hodgkin-Huxley and FitzHugh-Nagumo neurons, The Journal of Mathematical Neuroscience, 2 (1) (2012), 10.
Bossy, M. and Talay, D., A stochastic particle method for the McKean-Vlasov and the Burgers equation, Mathematics of Computation, 66 (217) (1997), 157-192.
Buckdahn, R., Li, J. and Ma, J., A mean-field stochastic control problem with partial observations, Annals of Applied Probability, 27 (5) (2017), 3201-3245.
Cardaliaguet P., Notes on Mean Field Games (from P.-L. Lions' lectures at Collège de France), http://www.science.unitn.it/$\sim$ bagagiol /Notes by Cardaliaguet.pdf.
Carmona R., Delarue F., Probabilistic Theory of Mean Field Games with Applications I, Springer, 2018.
Erban, R., and Haskovec, J., From individual to collective behaviour of coupled velocity jump processes: a locust example. Kinetic and Related Models 5, 4 (December 2012), 817--842.
Giles M.B., Multi-level Monte Carlo path simulation. *Oper. Res.*, 56(3), 607-617, 2008.
Giles M.B., *Improved Multilevel Monte Carlo Convergence using the Milstein Scheme, Monte Carlo and Quasi-Monte Carlo Methods*. Springer, Berlin, 2008.
Guhlke, C., Gajewski, P., Maurelli, M., Friz, P. K. and Dreyer, W., Stochastic many-particle model for LFP electrodes, Continuum Mechanics and Thermodynamics, 30 (3) (2018), 593-628.
Guo Q., Liu W., Mao X., et al., Multi-level Monte Carlo methods with the truncated Euler-Maruyama scheme for stochastic differential equations. *International Journal of Computer Mathematics*, doi: 10.1080/00207160.2017.1329533, 2017.
Higham D.J., Mao X.R., Yuan C.G., Almost sure and moment exponential stability in the numerical simulation of stochastic differential equations. *SIAM J. Numer. Anal.*, 45, 592-609, 2007.
Hutzenthaler M., Jentzen A., Kloeden P.E., Strong and weak divergence in finite time of Euler's method for stochastic differential equations with non-globally Lipschitz continuous coefficients. *Proc R Soc Lond Ser A Math Phys Eng Sci*, 467, 1563-1576, 2011.
Hutzenthaler M., Jentzen A., Kloeden P.E., Divergence of the multilevel Monte Carlo Euler method for nonlinear stochastic differential equations. *Annals Appl. Probab.*, 23, 1913-1966, 2013.
Kloeden P.E., Platen E., Schurz H., The numerical solution of non-linear stochastic dynamical systems: A brief introduction. *Int. J. Bif. Chaos.*, 1, 277-286, 1991.
Kloeden P.E., Platen E., *Numerical Solution of Stochastic Differential Equations*. Springer-Verlag, Berlin Heidelberg, 1992.
Mao X., The truncated Euler-Maruyama method for stochastic differential equations. *J Comput Appl Math*, 290, 370-383, 2015. McKean, H. P., Propagation of chaos for a class of non-linear parabolic equations, In:Lecture Series in Differential Equations, 2 (1) (1967), 41-57. McKean, H. P., Fluctuations in the kinetic theory of gases, Communications on Pure and Applied Mathematics, 28 (4) (1975), 435-455. Milstein G.N., Tret'yakov M.V., Mean-square numerical methods for stochastic differential equations with small noises. *SIAM J. Sci. Comput.*, 18, 1067-1087, 1997. Platen E., Bruti-Liberati N., *Numerical Solution of Stochastic Differential Equations with Jumps in Finance*. Springer, 2011. Römisch W., Winkler R., Stepsize control for mean-square numerical methods for stochastic differential equations with small noise. *SIAM J. Sci. Comput.*, 28, 604-625, 2006. A.S. Sznitman, Topics in Propagation of Chaos, Ecole D'été de Probabilités de Saint-Flour XIX - 1989, in: Lect. Notes in Math., vol. 1464, Springer-Verlag, 1991. Li X., Yi L., Yuan C., Explicit numerical approximations for McKean-Vlasov neutral differential delay equations, arXiv:2105.04175v1. [^1]: Supported
arxiv_math
{ "id": "2310.01068", "title": "Multilevel Monte Carlo EM scheme for MV-SDEs with small noise", "authors": "Ulises Botija-Munoz and Chenggui Yuan", "categories": "math.PR", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We show that, for any prime $p$ and integer $k \geq 2$, a simple $\mathop{\mathrm{GF}}(p)$-representable matroid with sufficiently high rank has a rank-$k$ flat which is either independent in $M$, or is a projective or affine geometry. As a corollary we obtain a Ramsey-type theorem for $\mathop{\mathrm{GF}}(p)$-representable matroids. For any prime $p$ and integer $k\ge 2$, if we $2$-colour the elements in any simple $\mathop{\mathrm{GF}}(p)$-representable matroid with sufficiently high rank, then there is a monochromatic flat with rank $k$. address: - Department of Combinatorics and Optimization, University of Waterloo, Waterloo, Canada - Department of Combinatorics and Optimization, University of Waterloo, Waterloo, Canada author: - Jim Geelen - Matthew E. Kroeker title: Unavoidable Flats in Matroids Representable over Prime Fields --- [^1] # introduction In 1944, Gallai proved the following classical result [@Gallai], originally conjectured by Sylvester fifty years prior [@Sylvester]. **Sylvester-Gallai Theorem 1**. *Every rank-$3$ real-representable matroid has a two-point line.* Much attention has been given to the question of how to meaningfully generalize the Sylvester-Gallai Theorem, as well as whether similar phenomena occur in more abstract geometric settings. The following result of Kelly [@Kelly] is a particularly famous example of the latter. **Kelly's Theorem 1**. *Every rank-$4$ complex-representable matroid has a two-point line.* Note that Kelly's rank bound is best-possible: for example, the ternary affine plane is well known to be complex-representable. For more information on the Sylvester-Gallai Theorem and its various extensions and generalizations, see Borwein and Moser's survey [@BoMo]. In this paper, we are motivated by Sylvester-Gallai-type problems which assert the existence independent flats of rank greater than two. For instance, under what conditions is a matroid guaranteed to contain a three-point plane? For the case of real-representability, Bonnice and Edelstein [@BoEd] deduced the following result as a corollary of Hansen's Theorem [@Hansen]. **Theorem 1** ([@BoEd], [@Hansen]). *For an integer $k \geq 2$, if $M$ is a simple rank-$(2k-1)$ real-representable matroid, then $M$ contains a rank-$k$ independent flat.* The bound on the rank in Theorem [Theorem 1](#hansen){reference-type="ref" reference="hansen"} is tight, as can be seen by considering the direct sum of $k-1$ lines. This result was extended to complex-representable matroids by Barak, Dvir, Wigderson and Yehudayoff [@Barak] with weaker bounds. Those bounds have since been tightened (see [@Dvir; @GeKr]), but there is still room for further improvement. Projective geometries show that the Sylvester-Gallai theorem does not hold for matroids in general nor even for matroids representable over finite fields. However, Murty [@Murty] showed that any simple rank-$r$ matroid with fewer than $2^{r}-1$ points has a two-point line, and additionally that the binary projective geometry is the smallest such matroid with no two-point line. A stronger result for matroids representable over prime fields was proved by Bhattacharyya, Dvir, Shpilka, and Saraf [@BDSS] who showed that, for a prime $p$ and sufficiently large integer $r$, any rank-$r$ $\mathop{\mathrm{GF}}(p)$-representable matroid with no two-point line has at least $p^{\Omega(r)}$ points. 
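Murty's threshold can be checked directly in the smallest non-trivial case. The following brute-force computation is included only as an illustration of the statement just quoted and plays no role in the arguments below: encoding the points of a simple binary matroid as non-zero vectors of $\mathop{\mathrm{GF}}(2)^3$, it verifies that every simple rank-$3$ binary matroid with at most six points has a two-point line, while $\mathop{\mathrm{PG}}(2,2)$, with $2^{3}-1=7$ points, has none. The function names used here (`rank`, `has_two_point_line`) are our own labels for this sketch.

```python
from itertools import combinations

# Non-zero vectors of GF(2)^3, encoded as the integers 1..7 (XOR = vector addition).
POINTS = list(range(1, 8))

def rank(S):
    """Rank over GF(2) of the set S of points: the span has size 2**rank."""
    span = {0}
    for v in S:
        span |= {v ^ w for w in span}
    return len(span).bit_length() - 1

def has_two_point_line(S):
    """The line through distinct points a, b is {a, b, a^b}; it is a
    two-point line exactly when a^b is not among the chosen points."""
    return any((a ^ b) not in S for a, b in combinations(S, 2))

# Every simple rank-3 binary matroid with at most 6 points has a two-point line ...
for k in range(3, 7):
    for S in map(set, combinations(POINTS, k)):
        if rank(S) == 3:
            assert has_two_point_line(S)

# ... while PG(2,2) (all 7 points) has none.
assert not has_two_point_line(set(POINTS))
print("For rank 3, exactly 2^3 - 1 = 7 points are needed to avoid a two-point line.")
```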
In this paper we prove an analogue to Theorem [Theorem 1](#hansen){reference-type="ref" reference="hansen"} for the class of $\mathop{\mathrm{GF}}(p)$-representable matroids, where $p$ is prime. Our approach is to determine the unavoidable rank-$k$ flats in $\mathop{\mathrm{GF}}(p)$-representable matroids with sufficiently high rank. The following is our main result. **Theorem 2**. *For every prime $p$ and integer $k \geq 1$ there is an integer $N_{p}(k)$ such that, if $M$ is a simple $\mathop{\mathrm{GF}}(p)$-representable matroid with $r(M) \geq N_{p}(k)$, then $M$ contains a rank-$k$ flat $F$ such that either $F$ is independent in $M$, $M|F \cong \mathop{\mathrm{AG}}(k-1,p)$, or $M|F \cong \mathop{\mathrm{PG}}(k-1,p)$.* Since affine and projective geometries both contain $p$-point lines, if we preclude $p$-point lines, then independent rank-$k$ flats become unavoidable. We can also avoid affine and projective geometries by considering matroids with girth at least five; note that $\mathop{\mathrm{AG}}(n-1,2)$ has girth four. **Corollary 3**. *For every prime $p$ and integer $k \geq 1$, if $M$ is a simple $\mathop{\mathrm{GF}}(p)$-representable matroid with sufficiently high rank and girth at least five, then $M$ contains a rank-$k$ independent flat.* Another consequence of Theorem [Theorem 2](#unavoidable){reference-type="ref" reference="unavoidable"} is a Ramsey theorem for $\mathop{\mathrm{GF}}(p)$-representable matroids. **Corollary 4**. *For a prime $p$ and integer $k\ge 1$, if $M$ is a simple $\mathop{\mathrm{GF}}(p)$-representable matroid with sufficiently large rank, then for any $2$-colouring of the points of $M$, there is a monochromatic rank-$k$ flat.* Note that, by Theorem [Theorem 2](#unavoidable){reference-type="ref" reference="unavoidable"}, to prove Corollary [Corollary 4](#Ramsey){reference-type="ref" reference="Ramsey"} it suffices to consider the cases where $M$ is a large independent set or $M$ is a high-rank affine or projective geometry. For independent sets we get a monochromatic flat by majority, and for projective and affine geometries we use known Ramsey theorems; see Section 3.

# Building on Kelly's Proof

We start by reviewing Kelly's proof [@Kelly] that every rank-$4$ complex-representable matroid has a two-point line. His proof uses a result of Hirzebruch [@Hirzebruch] that every rank-$3$ complex-representable matroid has a line with at most three points. Let $e$ be an element in a simple rank-$4$ complex-representable matroid $M$. We assume by way of contradiction that $M$ has no $2$-point line. By Hirzebruch's result, $M/e$ contains a line $L$ with exactly three points, say $p_1$, $p_2$, and $p_3$. Let $N$ be the restriction of $M$ to the set $L\cup\{e\}$. Then $N$ is a simple rank-$3$ matroid, with no $2$-point line, and $N$ comprises three copunctual lines $p_1\cup\{e\}$, $p_2\cup\{e\}$, and $p_3\cup\{e\}$. Kelly's proof is completed by the following result, which is implicit in [@Kelly]. **Lemma 5**. *For a field $\mathbb{F}$, let $N$ be an $\mathbb{F}$-representable rank-$3$ matroid comprising three copunctual lines $L_1$, $L_2$, and $L_3$. If $N$ has no $2$-point line, then $|L_1|=|L_2|=|L_3|$ and $|L_1|-1$ is divisible by the characteristic of $\mathbb{F}$, and, hence, $\mathbb{F}$ has positive characteristic.* *Proof sketch.* Let $e$ be the common point of $L_1$, $L_2$, and $L_3$. Any two points $e_1\in L_1\setminus\{e\}$ and $e_2\in L_2\setminus \{e\}$ are collinear with a third point $e_3\in L_3\setminus\{e\}$.
By fixing a particular choice of element $e_2$ and varying $e_1$ we see that $|L_3|\ge |L_1|$. Then, by symmetry, we see that $L_1$, $L_2$, and $L_3$ all have the same size. Fix two distinct elements $a,b\in L_2\setminus\{e\}$. Consider the $2$-regular bipartite graph $H$ with bipartition $(L_1,L_3)$ such that $x\in L_1$ is adjacent to $y\in L_3$ if either $\{x,a,y\}$ or $\{x,b,y\}$ is a triangle. Thus $H$ is a disjoint union of even cycles. Consider one of these cycles, say $C$. A straightforward argument considering the representation of $M|(\{a,b\}\cup V(C))$ reveals that the characteristic of $\mathbb{F}$ is $\frac{1}{2} |V(C)|$. Since this holds for each component of $H$, we see that the characteristic of $\mathbb{F}$ divides $|L_1\setminus\{e\}|$, as required. ◻ The matroid $N|(\{a,b\}\cup L_1\cup L_3)$, in the proof sketch, is an example of a "Reid Geometry". The details regarding the representation of Reid Geometries, which were omitted in our proof sketch, are given explicitly by Kung [@Kung Proposition 2.2]. In the case that $\mathbb{F}$ is a finite field of size $p$, where $p$ is prime, the lines $L_1$, $L_2$, and $L_3$, in Lemma [Lemma 5](#kelly){reference-type="ref" reference="kelly"}, must have length exactly $p+1$, since longer lines are not GF$(p)$-representable. **Lemma 6**. *For a prime $p$, let $N$ be a simple $\mathop{\mathrm{GF}}(p)$-representable rank-$3$ matroid comprising three copunctual lines $L_1$, $L_2$, and $L_3$. If $N$ has no $2$-point line, then the lines $L_1$, $L_2$, and $L_3$ each have exactly $p+1$ points.* While Kelly used Reid geometries to find two-point lines, we are interested in independent flats of arbitrary rank. In the next two lemmas, we derive a high-dimensional generalization of Lemma [Lemma 6](#reid1){reference-type="ref" reference="reid1"} suitable for our purposes. Note that throughout this paper, when we refer to a *point* in a matroid, we mean a rank-$1$ flat (as opposed to an element in the ground set). **Lemma 7**. *For an integer $m\ge 3$ and a prime $p$, let $e$ be an element of a simple $\mathop{\mathrm{GF}}(p)$-representable matroid $M$ such that $\mathop{\mathrm{si}}(M/e)$ is a connected $m$-element matroid. If $e$ is not a coloop in $M$ and $M$ has no hyperplane disjoint from $e$ with fewer than $m$ points, then each of the lines of $M$ containing $e$ has length $p+1$.* *Proof.* Note that each hyperplane disjoint from $e$ contains exactly one element from each line containing $e$. Since $e$ is not a coloop, one of those lines, say $L_1$, has length at least $3$. Let $a_1$ and $a_2$ be distinct elements in $L_1\setminus \{e\}$ and let $b\in E(M)\setminus L_1$ be chosen arbitrarily. There is a hyperplane $H$ that contains $a_1$ and $b$ and is disjoint from $e$. Note that $M|H$ is isomorphic to ${\mathrm{si}}(M/e)$. Since ${\mathrm{si}}(M/e)$ is connected, there is a circuit $C$ of $M|H$ that contains both $a_1$ and $b$. Let $c\in C\setminus \{a_1,b\}$ and let $L_2$ and $L_3$ be the lines spanned by $\{e,b\}$ and $\{e,c\}$ respectively. Let $X=C\setminus \{a_1,b,c\}$ and let $N$ denote the restriction of $M/X$ to $L_1\cup L_2\cup L_3$. Note that $N$ is a simple rank-$3$ matroid and $L_1$, $L_2$, and $L_3$ are lines in $N$ that intersect at $e$. Consider any line $L$ of $N$ that does not contain $e$ and let $a$ and $b$ be distinct points of $L$. Note that $X \cup \{a,b,e\}$ is independent in $M$. Extend $X \cup \{a,b,e\}$ to a basis $B$ of $M$ and consider the hyperplane $F$ spanned by $B\setminus\{e\}$. 
This hyperplane intersects each of the lines through $e$ in a point, so $F$ contains one element from each of $L_1$, $L_2$, and $L_3$. Therefore $L$ is a $3$-point line in $N$. Thus every line in $N$ disjoint from $e$ has at least three points. The lines spanned by $\{a_1,b\}$ and $\{a_2,b\}$, in $N$, both intersect $L_3$ in a point distinct from $e$. Therefore $|L_3|\ge 3$ and, by symmetry, we also have $|L_2|\ge 3$. Thus $N$ has no two-point line. Therefore, by Lemma [Lemma 6](#reid1){reference-type="ref" reference="reid1"}, the lines $L_1$, $L_2$, and $L_3$ each have exactly $p+1$ points. Then, by our choice of $b$, all lines through $e$ in $M$ have length $p+1$. ◻ We use Lemma [Lemma 7](#reid2){reference-type="ref" reference="reid2"} iteratively to build affine geometries. **Lemma 8**. *For integers $k\ge 2$ and $n \geq t\ge 2$ and a prime $p$, if $M$ is a matroid of rank at least $t+k-1$ such that every rank-$t$ flat in $M$ has at least $n$ points, then either* - *$M$ has an $\mathop{\mathrm{AG}}(k-1,p)$-restriction,* - *there is a rank-$(k-1)$ flat $F$ in $M$ such that every rank-$t$ flat in $M/F$ has at least $n+1$ points, or* - *$M$ has a rank-$t$ flat $F$ such that $M|F$ is not connected.* *Proof.* It suffices to prove the result in the case that $M$ has rank equal to $t+k-1$. Let $C$ be a rank-$(k-1)$ flat in $M$. Since $M/C$ has rank $t$, we may assume that $M/C$ has exactly $n$ points since otherwise $(ii)$ holds. Moreover, we may assume that $M/C$ is connected, since otherwise $(iii)$ holds. **Claim 1**. *For each $e\in C$ and $f\in E(M)\setminus C$, the line spanned by $\{e,f\}$ has length $p+1$.* *Proof of claim..* There is a rank-$t$ flat $F$ of $M$ such that $f\in F$ and $r(F\cup C)= r(F)+r(C)$. Note that $M|F\cong \mathop{\mathrm{si}}(M/C)$, so $F$ has $n$ points and $M | F$ is connected. Let $N$ denote the restriction of $M$ to the flat spanned by $F\cup\{e\}$. Note that $e$ is not a coloop of $N$ since otherwise by considering a hyperplane of $N$ containing $e$ we get a rank-$t$ flat with fewer than $n$ points. Then, by Lemma [Lemma 7](#reid2){reference-type="ref" reference="reid2"}, every line of $N$ through $e$ has length $p+1$. In particular, the line spanned by $\{e,f\}$ has length $p+1$. ◻ Let $\{e_1,\ldots,e_{k-1}\}$ be a basis of $C$, let $f\in E(M)\setminus C$, and for each $i\in\{1,\ldots,k-1\}$ let $S_i:=\mathop{\mathrm{cl}}(\{f,e_1,e_2,\ldots,e_i\})\setminus C$. By the claim, $|S_1|=p$ and, for each $i\in\{1,\ldots,k-2\}$, each point in $S_i$ lifts to $p$ points in $S_{i+1}$. Therefore $|S_{i+1}|=p|S_i|$ and hence $|S_{k-1}|=p^{k-1}$. It follows that $M|S_{k-1}\cong \mathop{\mathrm{AG}}(k-1,p)$. ◻ # Proof of the Main Theorem Lemma [Lemma 8](#kelly2){reference-type="ref" reference="kelly2"} will do most of the work in finding the unavoidable flats in Theorem [Theorem 2](#unavoidable){reference-type="ref" reference="unavoidable"}, but two problems still remain to be solved: how do we find a flat with the desired size in the minor; and what do we do with an affine geometry restriction? We resolve both of these issues using Ramsey theory. The following is a consequence of Graham, Leeb and Rothschild's Ramsey theorem for vector spaces over a finite field [@Graham]. **Geometric Ramsey Theorem 1**. 
*For each prime-power $q$ and positive integer $t$ there is an integer $\mathop{\mathrm{R}}_q(t)$ such that, if we two-colour the points in a rank-$\mathop{\mathrm{R}}_q(t)$ projective geometry over $\mathop{\mathrm{GF}}(q)$, then there will be a monochromatic rank-$t$ flat.* Our other Ramsey-theoretic tool is a corollary of the Hales-Jewett Theorem [@Hales]. **Geometric Hales-Jewett Theorem 1**. *For each prime-power $q$ and positive integers $t$ and $k$ there is an integer $\mathop{\mathrm{HJ}}_q(t,k)$ such that, if we $k$-colour the points in a rank-$\mathop{\mathrm{HJ}}_q(t,k)$ affine geometry over $\mathop{\mathrm{GF}}(q)$, then there will be a monochromatic rank-$t$ flat.* A consequence of this is that coextensions of huge affine geometries contain large affine geometries. **Lemma 9**. *For a prime-power $q$ and integers $k,m\ge 2$ and $n=\mathop{\mathrm{HJ}}_q(k,q^m)$, if $J$ is an $m$-element independent set in a $\mathop{\mathrm{GF}}(q)$-representable matroid $M$ and $M/J$ has an $\mathop{\mathrm{AG}}(n-1,q)$-restriction, then $M$ has an $\mathop{\mathrm{AG}}(k-1,q)$-restriction.* *Proof.* We may assume that $M/J$ is isomorphic to $\mathop{\mathrm{AG}}(n-1,q)$. The matroid $M$ has a $\mathop{\mathrm{GF}}(q)$-representation of the form $$A=\bordermatrix{ & J & \cr & I & B \cr & 0& D},$$ where $I$ denotes the $J\times J$ identity matrix and $D$ is a representation of $\mathop{\mathrm{AG}}(n-1,q)$. We may further assume that the first row of $D$ is the all-ones vector. Note that $B$ has at most $q^m$ distinct columns, and we assign each element of $M/J$ a colour according to the associated column of $B$. By the Geometric Hales-Jewett Theorem, there is a rank-$k$ flat $F$ of $(M/J)$ which is monochromatic under this colouring. By adding linear combinations of the first row of $D$ to the rows of $B$, we may assume that the restriction of $B$ to $F$ is the all-zero matrix. Therefore $M|F=(M/J)|F\cong \mathop{\mathrm{AG}}(k-1,q)$, as desired. ◻ Lemma [Lemma 9](#affine){reference-type="ref" reference="affine"} allows us to refine Lemma [Lemma 8](#kelly2){reference-type="ref" reference="kelly2"}. **Lemma 10**. *For integers $k \geq 2$, $n \geq t \geq 2$ and a prime $p$, there is an integer $N_{p}'(k,t,n)$ such that, if $M$ is a simple $\mathop{\mathrm{GF}}(p)$-representable matroid with $r(M) \geq N_{p}'(k,t,n)$, then $M$ has either an $\mathop{\mathrm{AG}}(k-1,p)$-restriction or a rank-$t$ flat which is either disconnected or has at most $n$ points.* *Proof.* Let $t \geq 2$, and observe that, for $n \geq \frac{p^{t}-1}{p-1}$, the integer $N_{p}'(k,t,n)$ exists for all $k \geq 2$. Now let $k \geq 2$, $t \leq n < \frac{p^{t}-1}{p-1}$, and assume that $N_{p}'(i,t,n+1)$ exists for all $i \geq 2$. Let $M$ be a simple $\mathop{\mathrm{GF}}(p)$-representable matroid with $$r(M) \geq t+k-1 + N_{p}'(\mathop{\mathrm{HJ}}_{p}(k,p^{k-1}),t,n+1).$$ We may assume that every rank-$t$ flat in $M$ is connected and has at least $n+1$ points. By Lemma [Lemma 8](#kelly2){reference-type="ref" reference="kelly2"}, there is a rank-$(k-1)$ flat $F$ in $M$ such that every rank-$t$ flat in $M /F$ has at least $n+2$ points. Moreover, every rank-$t$ flat in $M /F$ is connected. By induction, $M /F$ has an $\mathop{\mathrm{AG}}(\mathop{\mathrm{HJ}}_{p}(k,p^{k-1})-1,p)$-restriction. The result now follows by Lemma [Lemma 9](#affine){reference-type="ref" reference="affine"}.
◻ We can now prove Theorem [Theorem 2](#unavoidable){reference-type="ref" reference="unavoidable"}; to facilitate induction, we prove a slightly stronger result. **Theorem 11**. *For every prime $p$ and integers $k,t \geq 1$ there is an integer $N_{p}(k,t)$ such that, if $M$ is a simple $\mathop{\mathrm{GF}}(p)$-representable matroid with $r(M) \geq N_{p}(k,t)$, then $M$ contains either a rank-$t$ independent flat or a flat $F$ such that $M|F \cong \mathop{\mathrm{AG}}(k-1,p)$ or $M|F \cong \mathop{\mathrm{PG}}(k-1,p)$.* *Proof.* For $p$ prime and $k \geq 1$, we prove the result for $N_{p}(k,1)=1$, and, for $t \geq 2$, we recursively define $$N_{p}(k,t)=N_{p}'(R_{p}(k-1)+1, \, 2N_{p}(k,t-1),\, 2N_{p}(k,t-1)),$$ where, by convention, we take $R_{p}(0)=0$. We proceed by induction on $t$, where the base case holds because any point in a matroid forms a rank-$1$ independent flat. Now let $t \geq 2$, and assume that the assertion holds for $N_{p}(k,t-1)$ for every integer $k \geq 1$. For $k \geq 1$, suppose $M$ is a simple $\mathop{\mathrm{GF}}(p)$-representable matroid with $r(M) \geq N_{p}(k,t)$. Let $m=2N_{p}(k,t-1)$. Note that, if $F$ is a rank-$m$ flat with exactly $m$ points, then $M|F$ is not connected. Therefore, applying Lemma [Lemma 10](#restriction){reference-type="ref" reference="restriction"} gives one of the following two cases. **Case 1.** *There is a set $A \subseteq E(M)$ such that $M|A \cong \mathop{\mathrm{AG}}(R_{p}(k-1),p)$.* Let $F = \mathop{\mathrm{cl}}_{M}(A)$, and let $G \cong \mathop{\mathrm{PG}}(R_{p}(k-1),p)$ be a matroid with $F \subseteq E(G)$ and $G|F = M|F$. Then $E(G) \setminus A$ is a hyperplane in $G$, and so $G \setminus A \cong \mathop{\mathrm{PG}}(R_{p}(k-1)-1,p)$. By the Geometric Ramsey Theorem, there is a rank-$(k-1)$ flat $H$ in $G \setminus A$ such that either $H \subseteq E(M)$ or $H \subseteq E(G) \setminus E(M)$. Let $e \in A$ and let $F = E(M) \cap \mathop{\mathrm{cl}}_{G}(H \cup \{e\})$. Then $F$ is a rank-$k$ flat in $M$, and $M|F$ is either an affine or projective geometry. **Case 2.** *There is a rank-$m$ flat $F$ of $M$ such that $M|F$ is not connected.* Let $N$ be a smallest component of $M|F$, and let $F'=F \setminus E(N)$. Then $r_{M}(F') \geq N_{p}(k,t-1)$, and so, by induction, we may assume that $M|F'$ has a rank-$(t-1)$ independent flat $I$. Then for any $e \in E(N)$, $I \cup \{e\}$ is a rank-$t$ independent flat in $M$. ◻ B. Barak, Z. Dvir, A. Wigderson, A. Yehudayoff, Rank bounds for design matrices with applications to combinatorial geometry and locally correctable codes. Proceedings of the 43rd annual ACM symposium on theory of computing, STOC 2011, San Jose, CA, ACM, New York (2011) 519-528. A. Bhattacharyya, Z. Dvir, A. Shpilka, S. Saraf, tight lower bounds for 2-query lccs over finite fields. Proceedings of FOCS 2011 (2011) 638-647. P. Borwein, W.O.J Moser, A survey of Sylvester's problem and its generalizations, *Aequationes Mathematicae* 40 (1990) 111-135. W. Bonnice, M. Edelstein, Flats associated with the finite sets in $\mathbb{P}^d$, *Nieuw Archief voor Wiskunde* 15 (1967) 11-14. Z. Dvir, S. Saraf, A. Wigderson, Improved rank bounds for design matrices and a new proof of Kelly's theorem. Forum of Mathematics, Sigma (Vol. 2), Cambridge University Press (2014). T. Gallai, Solution to Problem 4065, *American Mathematical Monthly* 51 (1944) 169--171. J. Geelen and M.E. Kroeker, A Sylvester-Gallai-type theorem for complex-representable matroids. arXiv preprint arXiv:2212.03307 (2022). R.L. Graham, K. Leeb, B.L. 
Rothschild, Ramsey's theorem for a class of categories, *Advances in Mathematics*, 8(3) (1972) 417-433. A.W. Hales and R.I. Jewett, Regularity and positional games, *Transactions of the American Mathematical Society*, 106(2) (1963) 222-229. S. Hansen, A generalization of a theorem of Sylvester on the lines determined by a finite point set, *Mathematica Scandinavica* 16 (1965) 175-180. F. Hirzebruch, Arrangements of lines and algebraic surfaces. In Arithmetic and Geometry (Papers dedicated to I. R. Shafarevich), vol. 2, 113-140, Birkhäuser (1983). L.M. Kelly, A resolution of the Sylvester-Gallai problem of J.-P. Serre, *Discrete and Computational Geometry* 1 (1986) 101-104. J.P.S. Kung, The long-line graph of a combinatorial geometry. II. Geometries representable over two fields of different characteristics, *Journal of Combinatorial Theory, Series B* 50(1) (1990) 41-53. U.S.R. Murty, Matroids with Sylvester property, *Aequationes Mathematicae* 4 (1970) 44-50. J.J. Sylvester, Mathematical Question 11851, *Educational Times* 59 (1893) 98. [^1]: This research was partially supported by grants from the Office of Naval Research \[N00014-10-1-0851\] and NSERC \[203110-2011\] and by an NSERC Postgraduate Scholarship \[Application No. PGSD3 - 547508 - 2020\].
arxiv_math
{ "id": "2309.15185", "title": "Unavoidable flats in matroids representable over prime fields", "authors": "Jim Geelen, Matthew E. Kroeker", "categories": "math.CO", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | In a recent breakthrough Kelley and Meka proved a quasipolynomial upper bound for the density of sets of integers without non-trivial three-term arithmetic progressions. We present a simple modification to their method that strengthens their conclusion, in particular proving that if $A\subseteq\{1,\ldots,N\}$ has no non-trivial three-term arithmetic progressions then $$\lvert A\rvert \leq \exp(-c(\log N)^{1/9})N$$ for some $c>0$. author: - Thomas F. Bloom and Olof Sisask title: An improvement to the Kelley-Meka bounds on three-term arithmetic progressions --- The question of how large a subset of $\{1,\ldots,N\}$ without three-term arithmetic progressions can be is one of the most central in additive combinatorics. Recently Kelley and Meka [@KM] achieved a breakthrough new bound, proving that such a set must have size at most $$\exp(-c(\log N)^{1/12})N$$ for some constant $c>0$. For contrast, it is known by a result of Behrend [@Be] that there are such sets of size at least $\exp(-c'(\log N)^{1/2})N$ for some constant $c'>0$ (with small improvements by Elkin [@El] and Green and Wolf [@GW], which do not change the bound's essential shape). The bound achieved by Kelley and Meka is a dramatic improvement over any bounds previously available (for example in [@BSa]), which were all of the shape $N/(\log N)^{O(1)}$. In this note we observe that a small modification to Kelley and Meka's argument (more precisely in the application of almost-periodicity) yields a slight quantitative improvement. **Theorem 1**. *If $A\subseteq \{1,\ldots,N\}$ contains only trivial three-term arithmetic progressions, then $$\left\lvert A\right\rvert \leq \exp(-c(\log N)^{1/9})N$$ for some constant $c>0$.* A more elaborate version of the idea in this note allows for further improvement of the exponent to $5/41$ (see the remarks after the proof of Lemma [Lemma 6](#lemma:symmetry_subspace){reference-type="ref" reference="lemma:symmetry_subspace"}), but the necessary technical overheads obscure the essential idea. Since we expect other ideas to render such lengthy technical optimisation redundant anyway, in this note we just present the relatively clean modification that allows for $1/9$. Similar quantitative improvements are available for other applications of Kelley and Meka's method. For example, the new argument in the 'model setting' of $\mathbb{F}_q^n$ yields the following. **Theorem 2**. *If $q$ is an odd prime and $A\subseteq \mathbb{F}_q^n$ has no non-trivial three-term progressions, then $$\left\lvert A\right\rvert \ll q^{n-cn^{1/7}}$$ for some constant $c>0$.* Kelley and Meka [@KM] proved a similar bound with $1/9$ in place of $1/7$. For this problem much better bounds (of the shape $q^{(1-c)n}$) were proved by Ellenberg and Gijswijt [@EG] using the polynomial method of Croot, Lev, and Pach [@CLP]. As usual in this area, however, $\mathbb{F}_q^n$ is useful as a simpler setting than $\{1,\ldots,N\}$ that still displays most of the important ideas. Another application of the (quantitatively improved) Kelley-Meka argument yields the following. Even in the model setting of $\mathbb{F}_q^n$, this type of result does not follow from the polynomial method. **Theorem 3**.
*If $A\subseteq \mathbb{F}_q^n$ has density $\alpha=\left\lvert A\right\rvert/q^n$ and $\gamma\in(0,1]$ then there is some affine subspace $V\leq \mathbb{F}_q^n$ of codimension[^1] $O(\mathcal{L}(\alpha)^5\mathcal{L}(\gamma)^2)$ such that $$\left\lvert (A+A)\cap V\right\rvert\geq (1-\gamma)\left\lvert V\right\rvert.$$* For comparison, Kelley-Meka [@KM] proved this with codimension $O(\mathcal{L}(\alpha)^5\mathcal{L}(\gamma)^4)$, and Sanders [@Sa2] proved this with codimension $O(\mathcal{L}(\alpha)^4\gamma^{-2})$. There is a recent application of our improvement to the Kelley-Meka machinery, in the setting of $\mathbb{F}_2^n$, to Ramsey theory: Hunter and Pohoata [@HP23] use essentially Theorem [Theorem 3](#th-ff){reference-type="ref" reference="th-ff"} to improve the known bounds for the Ramsey problem of finding monochromatic subspaces in $2$-colourings of the $1$-dimensional subspaces of $\mathbb{F}_2^n$. Finally, we present a new bound for long arithmetic progressions in $A+A+A$. **Theorem 4**. *If $A\subseteq \{1,\ldots,N\}$ has size $\alpha N$ then $A+A+A$ contains an arithmetic progression of length at least $$\exp(-O(\mathcal{L}(\alpha)^2))N^{\Omega(1/\mathcal{L}(\alpha)^{7})}.$$* The authors proved a weaker version of this, with $9$ in place of $7$, in [@BS] as an application of a technically 'smoothed' version of the Kelley-Meka method. A construction due to Freiman, Halberstam, and Ruzsa [@FHR] shows that no exponent better than $\Omega(1/\mathcal{L}(\alpha))$ is possible. Since our new contribution is only a small modification of the argument of Kelley and Meka, we will be relatively brief, and just describe the changes required. In particular we assume that the reader is familiar with the simplified form of the Kelley-Meka argument as presented in [@BS]. We will use the same notation and conventions as given in [@BS Section 2], which we briefly recall below for the convenience of the reader. In Section [1](#sec-boot){reference-type="ref" reference="sec-boot"} we present the novel contribution of this paper, a quantitatively improved bootstrapping of almost-periodicity. In Section [2](#sec-sumup){reference-type="ref" reference="sec-sumup"} we explain how this improved almost-periodicity should be inserted into the Kelley-Meka argument (in the form presented in [@BS]) to prove our main results. ## Acknowledgements {#acknowledgements .unnumbered} The first author is supported by a Royal Society University Research Fellowship. ## Notational conventions {#notational-conventions .unnumbered} Logarithmic factors will appear often, and so in this paper we use the convenient abbreviation $\mathcal{L}(\alpha)$ to denote $\log(2/\alpha)$. In statements which refer to $G$, this can be taken to be any finite abelian group (although for the applications this will always be either $\mathbb{F}_q^n$ or $\mathbb{Z}/N\mathbb{Z}$). 
We use the normalised counting measure on $G$, so that $$\langle f,g\rangle = \mathop{\mathbb{E}}_{x\in G}f(x)\overline{g(x)}\textrm{ and }\left\lVert f\right\rVert_p=\left( \mathop{\mathbb{E}}_{x\in G}\left\lvert f(x)\right\rvert^p\right)^{1/p}\textrm{ for }1\leq p<\infty,$$ where $\mathop{\mathbb{E}}_{x\in G}=\frac{1}{\lvert G\rvert}\sum_{x\in G}$. For any $f,g:G\to \mathbb{C}$ we define the convolution and the difference convolution[^2] as $$f\ast g(x)= \mathop{\mathbb{E}}_y f(y)g(x-y)\quad\textrm{and}\quad f\circ g(x)= \mathop{\mathbb{E}}_yf(x+y)\overline{g(y)}.$$ For some purposes it is conceptually cleaner to work relative to other non-negative functions on $G$, so that if $\mu:G\to\mathbb{R}_{\geq 0}$ has $\left\lVert \mu\right\rVert_1=1$ we write $$\left\lVert f\right\rVert_{p(\mu)}=\left( \mathop{\mathbb{E}}_{x\in G}\mu(x)\left\lvert f(x)\right\rvert^p\right)^{1/p}\textrm{ for }1\leq p<\infty.$$ (The special case above is the case when $\mu\equiv 1$.) We write $\mu_A=\alpha^{-1}1_{A}$ for the normalised indicator function of $A$ (so that $\left\lVert \mu_A\right\rVert_1=1$). We will sometimes speak of $A\subseteq B$ with relative density $\alpha=\left\lvert A\right\rvert/\left\lvert B\right\rvert$. The Fourier transform of $f:G\to\mathbb{R}$ is $\widehat{f}:\widehat{G}\to \mathbb{C}$ defined for $\gamma\in\widehat{G}$ as $$\widehat{f}(\gamma)= \mathop{\mathbb{E}}_{x\in G}f(x)\overline{\gamma(x)},$$ where $\widehat{G} = \{ \gamma : G \to \mathbb{C}^\times : \text{$\gamma$ a homomorphism} \}$ is the dual group of $G$. Finally, we use the Vinogradov notation $X \ll Y$ to mean $X = O(Y)$, that is, there exists some constant $C>0$ such that $\left\lvert X\right\rvert\leq CY$. We write $X\asymp Y$ to mean $X\ll Y$ and $Y\ll X$. The appearance of parameters as subscripts indicates that this constant may depend on these parameters (in some unspecified fashion).

# An improved bootstrapping procedure {#sec-boot}

The new contribution of this paper is to note that the 'bootstrapping procedure', in which a set of almost-periods is converted into a subspace (or, more generally, a Bohr set) of almost-periods, can be made more efficient, at least in the applications relevant to the Kelley-Meka argument.

## The $\mathbb{F}_q^n$ case

We will first present the new idea in the technically simpler model case of $\mathbb{F}_q^n$. For this we will use the following form of almost-periodicity, a special case of [@SS Theorem 3.2], which is sufficient for our purposes. **Theorem 5** ($L^\infty$ almost-periodicity). *Let $\epsilon>0$ and $k\geq 1$. Let $S\subseteq G$ and $A_1,A_2\subseteq G$ with densities $\alpha_1,\alpha_2$ respectively.
There is a set $X\subseteq G$ of size $$\left\lvert X\right\rvert\gg \exp(-O(\epsilon^{-2}k^2\mathcal{L}(\alpha_1)\mathcal{L}(\alpha_2)))\left\lvert G\right\rvert$$ such that $$\|\mu_X^{(k)}\ast\mu_{A_1}\circ\mu_{A_2}\ast 1_{S}-\mu_{A_1}\circ\mu_{A_2}\ast 1_{S}\|_{\infty}\leq \epsilon.$$* Bootstrapping refers to the process where the almost-period factor of $\mu_X^{(k)}$ is replaced by a more algebraically structured factor of $\mu_V$, where $V$ is a subspace. This is achieved by passing to Fourier space and considering the subspace of elements which annihilate (or approximately annihilate) those characters where $\lvert \widehat{\mu_X}\rvert$ is large. The problem is that to control the error term in such a replacement we need to 'cancel out' the quantity $$\sum_\gamma \lvert \widehat{\mu_{A_1}}(\gamma)\rvert\lvert \widehat{\mu_{A_2}}(\gamma)\rvert\lvert \widehat{1_{S}}(\gamma)\rvert.$$ Using the trivial bound $\lvert \widehat{\mu_{A_2}}(\gamma)\rvert\leq 1$, the Cauchy-Schwarz inequality, and Parseval's identity, we can bound this above by $$\begin{aligned} \sum_\gamma \lvert \widehat{\mu_{A_1}}(\gamma)\rvert\lvert \widehat{1_{S}}(\gamma)\rvert &\leq \left( \sum_\gamma \lvert \widehat{\mu_{A_1}}(\gamma)\rvert^2\right)^{1/2}\left( \sum_\gamma \lvert \widehat{1_{S}}(\gamma)\rvert^2\right)^{1/2}\\ &=\alpha_1^{-1/2}\mu(S)^{1/2}\\ &\leq \alpha_1^{-1/2}.\end{aligned}$$ This is multiplied by a factor of $\left\lvert \widehat{\mu_X}(\gamma)\right\rvert^k$, which we can take to be $\leq 2^{-k}$ (since we can discard the contribution from those $\gamma$ with $\left\lvert \widehat{\mu_X}(\gamma)\right\rvert\geq 1/2$ by passing a subspace of small codimension). In particular to 'cancel out' the contribution from this sum we need to take $k\approx \mathcal{L}(\alpha_1)$. For many applications of almost-periodicity, when $S$ is an arbitrary set, this is the best that we can do. In the Kelley-Meka application, however, $S$ is a structured set, and we can exploit that here. Essentially, we know that $\mu_A\circ \mu_A$ is large pointwise on $S$ (for some set $A$ which is denser than both $A_1$ and $A_2$), and therefore at a crucial stage of the argument we can replace $1_{S}$ by $\mu_A\circ \mu_A$ before bootstrapping. This leads to the use of the alternative bound $$\sum_\gamma \lvert \widehat{\mu_{A_1}}(\gamma)\rvert\lvert \widehat{\mu_{A_2}}(\gamma)\rvert\lvert \widehat{\mu_A}(\gamma)\rvert^2\leq \alpha^{-1}.$$ Thus we have a $\mathcal{L}(\alpha)$ term in place of a $\mathcal{L}(\alpha_1)$ term, and since $\mathcal{L}(\alpha_1)\approx \mathcal{L}(\alpha)^2$ in the Kelley-Meka method this leads to an improvement in the final bounds. The following lemma and proof is a precise statement of the above idea suitable for our applications. **Lemma 6**. *Let $\epsilon\in(0,1/8)$. Let $S\subseteq \mathbb{F}_q^n$ and $A,A_1,A_2\subseteq \mathbb{F}_q^n$ be sets with densities $\alpha,\alpha_1,\alpha_2$ respectively, such that* 1. *$\langle \mu_{A_1}\circ \mu_{A_2},1_{S}\rangle \geq 1-\epsilon$ and* 2. 
*$\mu_A\circ \mu_A(x) \geq 1+4\epsilon$ for any $x\in S$.* *There exists a subspace $V\leq \mathbb{F}_q^n$ of codimension $$\ll_\epsilon \mathcal{L}(\alpha)^2\mathcal{L}(\alpha_1)\mathcal{L}(\alpha_2)$$ such that $\left\lVert \mu_V\ast \mu_A\right\rVert_\infty \geq 1+\epsilon/2$.* *Proof.* Let $k\geq 2$ be chosen later and $X$ be as in Theorem [Theorem 5](#th-liap){reference-type="ref" reference="th-liap"}, so that $$\langle \mu_X^{(k)}\ast \mu_{A_1}\circ \mu_{A_2},1_{S}\rangle \geq 1-2\epsilon$$ and $$\left\lvert X\right\rvert \gg \exp(-O_\epsilon(k^2\mathcal{L}(\alpha_1)\mathcal{L}(\alpha_2)))\lvert \mathbb{F}_q^n\rvert.$$ It follows that $$\langle \mu_X^{(k)}\ast \mu_{A_1}\circ \mu_{A_2},\mu_A\circ \mu_A\rangle \geq (1+4\epsilon)(1-2\epsilon)\geq 1+\epsilon.$$ Let $V\leq \mathbb{F}_q^n$ be the subspace orthogonal to all those characters in $$\Delta_{1/2}(X)=\{\gamma : \lvert \widehat{\mu_X}(\gamma)\rvert\geq 1/2\}.$$ By Chang's lemma (as given in [@TV Lemma 4.36], for example), $V$ has codimension $$\ll \log(\lvert \mathbb{F}_q^n\rvert/\left\lvert X\right\rvert)\ll_\epsilon k^2\mathcal{L}(\alpha_1)\mathcal{L}(\alpha_2).$$ Furthermore, if we let $F=\mu_{A_1}\circ \mu_{A_2}\ast \mu_A\circ \mu_A$ for brevity, for all $t\in V$ we have $$\begin{aligned} \| \tau_t(\mu_X^{(k)}\ast F)-\mu_X^{(k)}\ast F\|_\infty &\leq \sum_\gamma \lvert \widehat{\mu_X}(\gamma)\rvert^k\lvert \widehat{F}(\gamma)\rvert\left\lvert \gamma(t)-1\right\rvert\\ &\leq 2\sum_{\gamma\not\in \Delta_{1/2}(X)} \lvert \widehat{\mu_X}(\gamma)\rvert^k\lvert \widehat{F}(\gamma)\rvert\\ &\leq 2^{1-k}\sum_\gamma \lvert \widehat{F}(\gamma)\rvert.\end{aligned}$$ We now note that $$\sum_\gamma \lvert \widehat{F}(\gamma)\rvert\leq \sum_\gamma \lvert \widehat{\mu_A}(\gamma)\rvert^2 \leq \alpha^{-1}.$$ In particular, we can choose $k\ll_\epsilon \mathcal{L}(\alpha)$ so that, for any $t \in V$, $$\| \tau_t(\mu_X^{(k)}\ast F)-\mu_X^{(k)}\ast F\|_\infty\leq \epsilon/2.$$ It follows that $$\langle \mu_V\ast \mu_X^{(k)}\ast \mu_{A_1}\circ \mu_{A_2},\mu_A\circ \mu_A\rangle \geq 1+\epsilon/2,$$ whence $\left\lVert \mu_V\ast \mu_A\right\rVert_\infty \geq 1+\epsilon/2$ as required. ◻ Note that in this proof we used a relatively trivial bound of $$\sum_\gamma \left\lvert \widehat{\mu_{A_1}}(\gamma)\widehat{\mu_{A_2}}(\gamma)\right\rvert\left\lvert\widehat{\mu_{A}}(\gamma)\right\rvert^2\leq \sum_\gamma \left\lvert \widehat{\mu_A}(\gamma)\right\rvert^2.$$ We could instead retain the $\mu_{A_i}$ factors, resulting in an upper bound of $$\langle \mu_{A_1}\circ \mu_{A_1},\mu_A\circ \mu_A\rangle^{1/2}\langle \mu_{A_2}\circ \mu_{A_2},\mu_A\circ \mu_A\rangle^{1/2}.$$ In particular, if both of these inner products are small (e.g. $\ll \mathcal{L}(\alpha)^{O(1)}$) then we could attain a sharper form of Lemma [Lemma 6](#lemma:symmetry_subspace){reference-type="ref" reference="lemma:symmetry_subspace"}, with $k\approx \log \mathcal{L}(\alpha)$. If not, say $$\langle \mu_{A_1}\circ \mu_{A_1},\mu_A\circ \mu_A\rangle\geq \mathcal{L}(\alpha),$$ then this is a large discrepancy over the 'expected value' of this inner product, which is $1$. This in turn can be fed back into the Kelley-Meka machinery to produce another density increment. This is not an immediate win, since the density of $A_1$ is much smaller than that of $A$, so it is not clear that we have gained more than we lost. Nonetheless a small improvement can be attained this way, optimising carefully, but this requires taking apart the Kelley-Meka machinery and a lengthy technical detour.
Again, since we expect future ideas to make the gains from such an optimisation redundant anyway, we have chosen to present only the simpler version. Nonetheless, the possibility of improved bounds should be kept in mind, and the reader interested in applying an improved bootstrapping similar to Lemma [Lemma 6](#lemma:symmetry_subspace){reference-type="ref" reference="lemma:symmetry_subspace"} to other problems should explore whether a good upper bound on something like $$\langle \mu_{A_1}\circ \mu_{A_1},\mu_A\circ \mu_A\rangle$$ is available in their application. ## The general case We now present the general case of the improved bootstrapping procedure described in the previous subsection, required for the integer case. We will assume that the reader is familiar with the vocabulary and basic properties of Bohr sets (see, for example, [@BS Appendix 1]). In this section $G$ denotes any finite abelian group. We will use the following more general form of almost-periodicity, which is proved as [@SS Theorem 5.1]. **Theorem 7** ($L^\infty$ almost-periodicity). *Let $\epsilon>0$ and $k,K\geq 2$. Let $A_1,A_2,S,B\subseteq G$ and $\left\lvert A_2+B\right\rvert\leq K\left\lvert A_2\right\rvert$. Let $\eta=\left\lvert A_1\right\rvert/\left\lvert S\right\rvert$. There is a set $X\subseteq B$ of size $$\left\lvert X\right\rvert\gg \exp(-O(\epsilon^{-2}k^2\mathcal{L}(\eta)\log K))\left\lvert B\right\rvert$$ such that $$\|\mu_X^{(k)}\ast\mu_{A_1}\circ\mu_{A_2}\ast 1_{S}-\mu_{A_1}\circ\mu_{A_2}\ast 1_{S}\|_{\infty}\leq \epsilon.$$* The more general form of Lemma [Lemma 6](#lemma:symmetry_subspace){reference-type="ref" reference="lemma:symmetry_subspace"}, required for the application to the integers, is more complicated in technicalities only. It is important to note, however, that there is an additional loss in the size of the Bohr set comparable to $d\mathcal{L}(\alpha)$ (where $d$ is the rank of the Bohr set) -- it is ultimately this which is responsible for 'losing two logs' between the $\mathbb{F}_p^n$ and the integer case. **Lemma 8**. *There is a constant $c>0$ such that the following holds. Let $\epsilon\in (0,1/10)$ and $B,B',B''\subseteq G$ be regular Bohr sets of rank $d$. Suppose that $A\subseteq B$, $A_1\subseteq B'$, and $A_2\subseteq B''-x$ (for some $x$) with densities $\alpha,\alpha_1,\alpha_2$ respectively. Let $S$ be any set with $\left\lvert S\right\rvert\leq 2\left\lvert B'\right\rvert$ such that* 1. *$\langle \mu_{A_1}\circ \mu_{A_2},1_{S}\rangle \geq 1-\epsilon$ and* 2. *$\mu_A\circ \mu_A(x) \geq (1+2\epsilon)\mu(B)^{-1}$ for any $x\in S$.* *Let $L=\mathcal{L}(\alpha/d\mathcal{L}(\alpha_1)\mathcal{L}(\alpha_2))$. There is a regular Bohr set $B'''\subseteq B''$ of rank at most* *$$\leq d+O_\epsilon(\mathcal{L}(\alpha)^2\mathcal{L}(\alpha_1)\mathcal{L}(\alpha_2))$$ and $$\left\lvert B'''\right\rvert\geq \exp(-O_\epsilon(L(d +\mathcal{L}(\alpha)^2\mathcal{L}(\alpha_1)\mathcal{L}(\alpha_2))))\left\lvert B''\right\rvert$$ such that $\left\lVert \mu_{B'''}\ast \mu_A\right\rVert_\infty \geq (1+\epsilon/4)\mu(B)^{-1}$.* Note that in our application we have $\alpha_1,\alpha_2 \geq \exp(-O(\mathcal{L}(\alpha)^2))$ and $d\leq \alpha^{-O(1)}$, and hence the parameter $L$ is $O(\mathcal{L}(\alpha))$. *Proof.* Let $k\geq 2$ be chosen later and $X$ be as in Theorem [Theorem 7](#th-liap-gen){reference-type="ref" reference="th-liap-gen"}, applied with $B$ replaced by $B''_\rho$, where $\rho=c/100d$ for a constant $c\in (1/2,1)$ chosen so that $B''_\rho$ is regular. 
By regularity of $B''$ $$\left\lvert A_2+B''_\rho\right\rvert\leq \left\lvert B''+B''_\rho\right\rvert\leq 2\left\lvert B''\right\rvert\leq 2\alpha_2^{-1}\left\lvert A_2\right\rvert,$$ and so we can take $K=2\alpha_2^{-1}$ in Theorem [Theorem 7](#th-liap-gen){reference-type="ref" reference="th-liap-gen"}. We also have $\eta=\left\lvert A_1\right\rvert/\left\lvert S\right\rvert\geq \alpha_1/2$. We can thus find some $X\subseteq B_{\rho}''$ such that $$\langle \mu_X^{(k)}\ast \mu_{A_1}\circ \mu_{A_2},1_{S}\rangle \geq 1-\tfrac{5}{4}\epsilon$$ and $$\left\lvert X\right\rvert\gg \exp(-O_\epsilon(k^2\mathcal{L}(\alpha_1)\mathcal{L}(\alpha_2)))\lvert B''_\rho\rvert.$$ It follows that $$\langle \mu_X^{(k)}\ast \mu_{A_1}\circ \mu_{A_2},\mu_A\circ \mu_A\rangle \geq (1+\epsilon/2)\mu(B)^{-1}.$$ By Chang's lemma (for example as given in [@SS Proposition 5.3]) there is a regular Bohr set $B'''\subseteq B''_{\rho}$ of rank $$\leq d+O_\epsilon(k^2\mathcal{L}(\alpha_1)\mathcal{L}(\alpha_2))$$ and $$\left\lvert B'''\right\rvert\geq \exp(-O_\epsilon(L(d +k^2\mathcal{L}(\alpha_1)\mathcal{L}(\alpha_2))))\left\lvert B''\right\rvert$$ such that $\left\lvert \gamma(t)-1\right\rvert\leq \epsilon\alpha/10$ for all $\gamma\in \Delta_{1/2}(X)$ and $t \in B'''$. Writing $F=\mu_{A_1}\circ \mu_{A_2}\ast \mu_A\circ \mu_A$ for brevity, it follows that for all $t\in B'''$ we have $$\begin{aligned} \| \tau_t(\mu_X^{(k)}\ast F)-\mu_X^{(k)}\ast F\|_\infty &\leq \sum_\gamma \lvert \widehat{\mu_X}(\gamma)\rvert^k\lvert \widehat{F}(\gamma)\rvert\left\lvert \gamma(t)-1\right\rvert\\ &\leq (\epsilon\alpha/10+2^{1-k})\sum_\gamma \lvert \widehat{F}(\gamma)\rvert.\end{aligned}$$ By the Cauchy-Schwarz inequality $$\sum_\gamma \lvert \widehat{F}(\gamma)\rvert\leq \sum_\gamma \lvert \widehat{\mu_A}(\gamma)\rvert^2\leq \alpha^{-1}\mu(B)^{-1}.$$ In particular, we choose $k\ll_\epsilon \mathcal{L}(\alpha)$ so that, for each $t \in B'''$ $$\| \tau_t(\mu_X^{(k)}\ast F)-\mu_X^{(k)}\ast F\|_\infty\leq \tfrac{1}{4}\epsilon \mu(B)^{-1}.$$ It follows that $$\langle \mu_{B'''}\ast \mu_X^{(k)}\ast \mu_{A_1}\circ \mu_{A_2},\mu_A\circ \mu_A\rangle \geq (1+\epsilon/4)\mu(B)^{-1},$$ whence $\left\lVert \mu_{B'''}\ast \mu_A\right\rVert_\infty \geq (1+\epsilon/4)\mu(B)^{-1}$ as required. ◻ # Modifying the Kelley-Meka argument {#sec-sumup} ## The $\mathbb{F}_q^n$ case Both Theorems [Theorem 2](#th-main-ff){reference-type="ref" reference="th-main-ff"} and [Theorem 3](#th-ff){reference-type="ref" reference="th-ff"} follow by an iterative application of the following quantitative improvement of [@BS Proposition 12]. **Proposition 9**. *Let $q$ be any prime and $n\geq 1$. If $A,C\subseteq \mathbb{F}_q^n$, where $A$ has density $\alpha$ and $C$ has density $\gamma$, then for any $\epsilon\in(0,1)$, either* 1. *$\left\lvert \langle \mu_A\ast \mu_A,\mu_C\rangle -1\right\rvert\leq \epsilon$ or* 2. *there is a subspace $V$ of codimension $$\ll_\epsilon \mathcal{L}(\alpha)^4\mathcal{L}(\gamma)^2$$ such that $\left\lVert 1_{A}\ast \mu_V\right\rVert_\infty \geq (1+\epsilon/64)\alpha$.* *Proof.* By the argument of Kelley and Meka (such as a combination of [@BS Lemma 7, Corollary 9, Lemma 11], as described in the proof of [@BS Proposition 13]) if the first alternative fails then there are sets $A_1,A_2$, both of density $$\geq \exp(-O_\epsilon(\mathcal{L}(\alpha)\mathcal{L}(\gamma))),$$ such that $\langle \mu_{A_1}\circ \mu_{A_2},1_{S}\rangle\geq 1-\epsilon/32$ where $S=\{ x : \mu_A\circ \mu_A(x)\geq 1+\epsilon/8\}$. 
The result now follows from Lemma [Lemma 6](#lemma:symmetry_subspace){reference-type="ref" reference="lemma:symmetry_subspace"}. ◻ ## The general case Similarly, Theorems [Theorem 1](#th-main-int){reference-type="ref" reference="th-main-int"} and [Theorem 4](#th-3A){reference-type="ref" reference="th-3A"} follow from the following quantitative improvement of [@BS Proposition 14]. The deduction in this case is less routine, but is unchanged from the argument in [@BS], so we will not reproduce the details here. To summarise, however, the method of Kelley and Meka allows one to show that if there are too few three-term arithmetic progressions in $A\subseteq B$ then the hypothesis of the below holds with $\epsilon \gg 1$ and $p\asymp \mathcal{L}(\alpha)$. The density increment in the conclusion can hold only $\mathcal{L}(\alpha)$ many times. Beginning with the trivial rank $0$ Bohr set and iterating, therefore, we arrive at some Bohr set $B$ with rank $d\ll \mathcal{L}(\alpha)^7$ and density $\left\lvert B\right\rvert\gg \exp(-O(\mathcal{L}(\alpha)^9))\left\lvert G\right\rvert$ on which we have the 'expected' number of three-term arithmetic progressions. That is, there is some $A'\subseteq (A-x)\cap B$ for some $x$ which has $\gg \left\lvert B\right\rvert^2$ many arithmetic progressions. By assumption $A'$ only contains trivial three-term arithmetic progressions, and so this forces $\left\lvert B\right\rvert\ll 1$. Rearranging and using our lower bound on $\left\lvert B\right\rvert$ this implies $\alpha \leq \exp(-c(\log \left\lvert G\right\rvert)^{1/9})$ for some $c>0$ as required. **Proposition 10**. *There is a constant $c>0$ such that the following holds. Let $\epsilon>0$ and $p,k\geq 1$ be integers such that $(k,\left\lvert G\right\rvert)=1$ and $p\leq \alpha^{-O(1)}$. Let $B,B',B''\subseteq G$ be regular Bohr sets of rank $d\leq \alpha^{-O(1)}$ such that $B''\subseteq B'_{c/d}$ and $A\subseteq B$ with relative density $\alpha$. If $$\left\lVert \mu_{A}\circ \mu_{A}\right\rVert_{p(\mu_{k\cdot B'}\circ\mu_{k\cdot B'}\ast \mu_{k\cdot B''}\circ \mu_{k\cdot B''})} \geq \left(1+\epsilon\right) \mu(B)^{-1}$$ then there is a regular Bohr set $B'''\subseteq B''$ of rank at most $$\mathop{\mathrm{rk}}(B''')\leq d+O_{\epsilon}(\mathcal{L}(\alpha)^4p^2)$$ and $$\left\lvert B'''\right\rvert\geq \exp(-O_{\epsilon}(d\mathcal{L}(\alpha)+\mathcal{L}(\alpha)^5p^2))\left\lvert B''\right\rvert$$ such that $$\left\lVert \mu_{B'''}*\mu_A \right\rVert_\infty \geq (1+\epsilon/16)\mu(B)^{-1}.$$* *Proof.* As in the proof of [@BS Proposition 15], there exist $A_1\subseteq k\cdot B'$ and $A_2\subseteq k\cdot B''-x$ such that, with $S=\{x\in A_1-A_2 : \mu_{A}\circ \mu_A(x)\geq (1+\epsilon/2)\mu(B)^{-1}\}$, $$\langle \mu_{A_1}\circ \mu_{A_2},1_{S}\rangle \geq 1-\epsilon/4$$ and $$\min\left( \mu_{k\cdot B'}(A_1),\mu_{k\cdot B''-x}(A_2)\right)\gg \alpha^{p+O_{\epsilon}(1)}.$$ We now apply Lemma [Lemma 8](#lemma-genimp){reference-type="ref" reference="lemma-genimp"} (with $k\cdot B'$ and $k\cdot B''$ playing the roles of $B$ and $B'$ respectively), noting that $$\left\lvert S\right\rvert\leq \left\lvert B'+B''\right\rvert\leq 2\left\lvert B'\right\rvert,$$ and the conclusion follows. ◻ 0 F. A. Behrend "On sets of integers which contain no three terms in arithmetical progression" *Proc. Nat. Acad. Sci. U. S. A.* 32 (1946): 331-332. T. F. Bloom and O. Sisask "Breaking the logarithmic barrier in Roth's theorem on arithmetic progressions" *submitted* arXiv:2007.03528. T. F. Bloom and O. 
Sisask "The Kelley-Meka bounds for sets free of three-term arithmetic progressions" arXiv 2302.07211. E. Croot and V. Lev and P. Pach "Progression-free sets in $\mathbb{Z}_4^n$ are exponentially small" *Ann. of Math.* (2) 185, 1 (2017): 331-337. M. Elkin, "An improved construction of progression-free sets" *Israel J. Math.* 184 (2011), 93--128. J. S. Ellenberg and D. Gijswijt. "On large subsets of $\Bbb F^n_q$ with no three-term arithmetic progression" *Ann. of Math.* (2) 185, 1 (2017): 339-343. G.A. Freiman, H. Halberstam, and I.Z. Ruzsa "Integer sum sets containing long arithmetic progressions" *J. London Math. Soc.* (2) 46 (1992): 193--201. B. Green and J. Wolf, "A note on Elkin's improvement of Behrend's construction" Additive number theory, 141--144, Springer, New York, 2010. Z. Hunter and C. Pohoata, "A note on off-diagonal Ramsey numbers for vector spaces over $\mathbb{F}_2$.", arXiv. Z. Kelley and R. Meka, "Strong bounds for $3$-progressions", arXiv:2302.05537. T. Sanders "On the Bogolyubov-Ruzsa lemma" *Anal. PDE* 5(3) (2012): 627--655. T. Schoen and O. Sisask "Roth's theorem for four variables and additive structures in sums of sparse sets" *Forum Math. Sigma* 4 (2016), Paper No. e5. T. Tao and V. Vu "Additive Combinatorics" *Cambridge University Press* (2006). [^1]: *We recall our notational convention from [@BS] that $\mathcal{L}(\delta)=\log(2/\delta)$ when $\delta\in(0,1]$.* [^2]: We caution that, while convolution is commutative and associative, difference convolution is in general neither.
arxiv_math
{ "id": "2309.02353", "title": "An improvement to the Kelley-Meka bounds on three-term arithmetic\n progressions", "authors": "Thomas F. Bloom and Olof Sisask", "categories": "math.NT math.CO", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We estimate the frequency of singular matrices and of matrices of a given rank whose entries are parametrised by arbitrary polynomials over the integers and modulo a prime $p$. In particular, in the integer case, we improve a recent bound of V. Blomer and J. Li (2022). address: - School of Mathematics and Statistics, University of New South Wales, Sydney NSW 2052, Australia - School of Mathematics and Statistics, University of New South Wales, Sydney NSW 2052, Australia - School of Mathematics and Statistics, University of New South Wales, Sydney NSW 2052, Australia author: - Ali Mohammadi - Alina Ostafe - Igor E. Shparlinski title: On some matrix counting problems --- = = = # Introduction ## Background and motivation Given an $m \times n$ matrix $$\mathbf f=\left(f_{i,j}\left(X_{i,j}\right)\right)_{\substack{1\leqslant i\leqslant m \\ 1 \leqslant j\leqslant n}}$$ of univariate polynomials $f_{i,j}\left(X_{i,j}\right)\in{\mathbb Z}[X_{i,j}]$ we consider the family ${\mathcal M}_\mathbf f$ of matrices with polynomial entries of the form $${\mathcal M}_{\mathbf f} = \left\{ \left(f_{i,j}\left(x_{i,j}\right)\right)_{1\leqslant i,j\leqslant n}:~ x_{i,j} \in {\mathbb Z}, \ 1\leqslant i\leqslant m, \ 1 \leqslant j\leqslant n\right\}.$$ Furthermore, given an integer $H$, we consider the set ${\mathcal M}_{\mathbf f}(H)$ of $(2H+1)^{mn}$ matrices from ${\mathcal M}_\mathbf f$ with $x_{i,j} \in [-H,H]$, $1\leqslant i\leqslant m$, $1 \leqslant j\leqslant n$. Here we are interested in counting matrices from ${\mathcal M}_{\mathbf f}(H)$ which are of a given rank $r$ and denote the number of such matrices by $L_{\mathbf f,r}(H)$. Similarly, given a prime $p$ we also consider the number $L_{\mathbf f, r}(H,p)$ of matrices from ${\mathcal M}_{\mathbf f}(H)$ whose reduction modulo $p$ is of a given rank $r$ over the finite field ${\mathbb F}_p$ of $p$ elements. In the case of square matrices, that is, for $m=n$, the questions of counting singular matrices $$\label{eq:Count Sing} \begin{split} & N_{\mathbf f}(H) = \# \{\mathbf X\in {\mathcal M}_{\mathbf f}(H):~\det \mathbf X=0\}, \\ & N_{\mathbf f}(H, p) = \# \{\mathbf X\in {\mathcal M}_{\mathbf f}(H):~\det \mathbf X\equiv 0 \pmod p\}, \end{split}$$ are of special interest. These questions are partially motivated by recent work of Blomer and Li [@BlLi] who have introduced and estimated the quantity $L_{\mathbf f,r}(H)$ in the special case $$\label{eq:Mon} f_{i,j}(X_{i,j}) = X_{i,j}^d, \qquad 1\leqslant i\leqslant m, \ 1 \leqslant j\leqslant n,$$ for a fixed integer $d\geqslant 1$. In fact in [@BlLi] the entries of $A \in {\mathcal M}_{\mathbf f}$ belong to a dyadic interval $x_{i,j} \in [H/2,H]$, but the method can be extended to matrices with $x_{i,j} \in [-H,H]$. ## Previous results First we recall that in the special case of linear polynomials $f_{i,j}(X_{i,j})=X_{i,j}$, $1\leqslant i\leqslant m$, $1 \leqslant j\leqslant n$, in which case we write $L_{m,n, r}(H)$ instead of $L_{\mathbf f, r}(H)$, a result of Katznelson [@Katz Theorem 1], used in a very crude form, implies that $$\label{eq:Katz} L_{m,n, r}(H) = H^{nr + o(1)}.$$ Furthermore, in the monomial case ([\[eq:Mon\]](#eq:Mon){reference-type="ref" reference="eq:Mon"}), it is shown in the proof of [@BlLi Lemma 3] that $$\label{eq: BL-Bound} L_{\mathbf f,r}(H) \leqslant H^{mr + (n-r)(r-1) + o(1)}$$ for any fixed integers $n \geqslant m \geqslant r > 0$. 
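To make the counted quantities concrete, the following toy computation is added here only as an illustration; it uses parameters far below the asymptotic regime of the results discussed in this paper. It enumerates the rank distribution of the matrices in ${\mathcal M}_{\mathbf f}(H)$ in the monomial case ([\[eq:Mon\]](#eq:Mon){reference-type="ref" reference="eq:Mon"}) with $d=3$ and $m=n=2$ for a few small values of $H$; the function name `rank_distribution` and the chosen values of $H$ are ours.

```python
import numpy as np
from itertools import product

def rank_distribution(H, d=3, n=2):
    """Brute-force rank counts for the n-by-n matrices (x_ij**d) with
    x_ij in [-H, H]; the count at r is the toy analogue of L_{f,r}(H)."""
    counts = {r: 0 for r in range(n + 1)}
    for entries in product(range(-H, H + 1), repeat=n * n):
        A = np.array(entries, dtype=float).reshape(n, n) ** d
        counts[int(np.linalg.matrix_rank(A))] += 1
    return counts

for H in (2, 3, 4):
    print(f"H = {H}:", rank_distribution(H))
```

Already at these tiny values one sees that most of the $(2H+1)^{4}$ matrices have full rank, while the counts for smaller ranks grow far more slowly with $H$.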
In fact, one can easily see that the bound ([\[eq: BL-Bound\]](#eq: BL-Bound){reference-type="ref" reference="eq: BL-Bound"}) extends to many other choices of polynomials in $\mathbf f$, not necessarily monomials as in ([\[eq:Mon\]](#eq:Mon){reference-type="ref" reference="eq:Mon"}). Note that for large $n=m$ and $r$ close to $n$ the exponent in ([\[eq: BL-Bound\]](#eq: BL-Bound){reference-type="ref" reference="eq: BL-Bound"}) is not too far from the exponent in ([\[eq:Katz\]](#eq:Katz){reference-type="ref" reference="eq:Katz"}). The quantity $L_{\mathbf f, r}(H,p)$ has not been studied prior to this work except for the special case of linear polynomials $f_{i,j}(X_{i,j})=X_{i,j}$, $1\leqslant i\leqslant m$, $1 \leqslant j\leqslant n$, in which case we write $L_{m,n, r}(H,p)$ instead of $L_{\mathbf f, r}(H,p)$. In this case, the asymptotic formula of [@AhmShp Theorem 9] asserts that, for $r \leqslant\min\{m,n\}$, the number $L_{m,n,r}(H, p)$ of such matrices satisfies $$\label{eq:AhmShp} \begin{split} & \left|L_{m,n,r}(H, p) -\frac{1}{p^{(m-r)(n-r)}} (2H+1)^{mn} \right|\leqslant\\ &\qquad\qquad\qquad\qquad \left(p^{r(m+n-r)/2} + H^{r(m+n-r)-1}p^{1/2} \right)p^{o(1)}. \end{split}$$ In fact, [@AhmShp Theorem 9] gives a more precise error term with some logarithmic factors instead of $p^{o(1)}$. Furthermore, El-Baz, Lee and Strömbergsson [@E-BLS] have given matching upper and lower bounds for $L_{m,n,r}(H,p)$, which for $n \geqslant m \geqslant r > 0$ and $1 \leqslant H \leqslant p/2$ can be written as $$\label{eq:E-BLS-bound} \begin{split} \max\{H^{mr} & , H^{mn} p^{-(m-r)(n-r)}\} \\ &\quad \ll L_{m,n,r}(H,p) \ll \max\{H^{mr} , H^{mn} p^{-(m-r)(n-r)}\} , \end{split}$$ where the notations $U\ll V$ and $V \gg U$ are both equivalent to the statement $|U|\leqslant c V$, for some constant $c> 0$, which throughout this work may depend on the positive integer parameters $m$, $n$ and $r$ and also, where obvious, on the polynomials in $\mathbf f$.

## Description of our results

Here we use a combination of analytic and algebraic arguments to study $L_{\mathbf f,r}(H)$ and $L_{\mathbf f, r}(H,p)$. First, we modify an argument of Blomer and Li [@BlLi] and augment it with several new ideas to obtain a substantially stronger version of their bound ([\[eq: BL-Bound\]](#eq: BL-Bound){reference-type="ref" reference="eq: BL-Bound"}); see, for example, ([\[eq:m = n\]](#eq:m = n){reference-type="ref" reference="eq:m = n"}). This new bound is readily available to be used in the proof of [@BlLi Lemma 3]. It remains to be seen, however, whether our stronger bound leads to improvements of the main results of Blomer and Li [@BlLi]. We also use a similar approach to get an upper bound on $L_{\mathbf f, r}(H,p)$. For $H \geqslant p^{3/4+ \varepsilon}$ with some fixed $\varepsilon>0$, using a new result on absolute irreducibility of determinantal varieties, coupled with a result of Fouvry [@Fouv00], we obtain an asymptotic formula for $N_{\mathbf f}(H, p)$. We also obtain a similar result for vanishing immanants, which are broad generalisations of determinants and permanents. Finally, we pose an open Problem [Problem 10](#prob: AsymForm){reference-type="ref" reference="prob: AsymForm"} concerning an asymptotic formula for $L_{\mathbf f, r}(H,p)$.

# Main results

## Results over ${\mathbb Z}$

We start with the following improvement and generalisation of the bound ([\[eq: BL-Bound\]](#eq: BL-Bound){reference-type="ref" reference="eq: BL-Bound"}). Here we assume that $d\geqslant 3$.
Note that our approach works for $d=2$ as well, in which case it becomes of the same strength as ([\[eq: BL-Bound\]](#eq: BL-Bound){reference-type="ref" reference="eq: BL-Bound"}), while still extending it to more general matrices. It is convenient to introduce the parameter $s_t$, which for $t = 3, \ldots, 10$ is given by Table [1](#tab:s_t){reference-type="ref" reference="tab:s_t"}, while for $t \geqslant 11$ we define $s_t$ as the largest integer $s\leqslant d$ with $s(s+1)\leqslant t+1$.

| **$t$** | $3$ | $4$   | $5$   | $6$    | $7$ | $8$ | $9$ | $10$ |
|---------|-----|-------|-------|--------|-----|-----|-----|------|
| $s_t$   | $2$ | $9/4$ | $5/2$ | $11/4$ | $3$ | $3$ | $3$ | $3$  |

Next, we define $$\Delta(d,m,n,r) = \max_{t =3, \ldots, r}\left\{0,\, \left(t-1\right)m -n\left(s_t-1\right) - r\left(t -s_t\right)\right\},$$ where $s_t$ is as defined above (by Table [1](#tab:s_t){reference-type="ref" reference="tab:s_t"} for $3 \leqslant t \leqslant 10$ and as the largest integer $s\leqslant d$ with $s(s+1)\leqslant t+1$ for $t \geqslant 11$). Note that we use the convention that for $r\leqslant 2$ the range $t = 3, \ldots, r$ is empty and the last term in the maximum is omitted (that is, $\Delta(d,m,n,r)=0$). **Theorem 1**. *Let $n\geqslant m \geqslant r \geqslant 4$. Fix an $m\times n$ matrix $\mathbf f$ of non-constant polynomials $f_{i,j}(X_{i,j})\in {\mathbb Z}[X_{i,j}]$ of degrees $\deg f_{i,j} \geqslant d\geqslant 3$, $1\leqslant i\leqslant m$, $1 \leqslant j\leqslant n$. Then, $$L_{\mathbf f, r}(H) \leqslant H^{m+nr-r+\Delta(d,m,n,r)+o(1)}$$ as $H\to \infty$.* To see that Theorem [Theorem 1](#thm: rank poly matr fij Z 1){reference-type="ref" reference="thm: rank poly matr fij Z 1"} improves the bound ([\[eq: BL-Bound\]](#eq: BL-Bound){reference-type="ref" reference="eq: BL-Bound"}) in a very broad range of parameters $n\geqslant m>r\geqslant 4$ and $d \geqslant 3$ we observe that $s_t \geqslant 9/4$ for $t \geqslant 4$ and so we have $$\begin{aligned} \left(t-1\right)m -n\left(s_t-1\right) - r\left(t -s_t\right) & = n-m +(m-r)t - (n-r) s_t \\ & \leqslant n-m +(m-r)r - 9(n-r)/4 \\ &= (m-r)r - m - 5n/4 +9r/4.\end{aligned}$$ Hence, considering the term corresponding to $t=3$ separately, we see that Theorem [Theorem 1](#thm: rank poly matr fij Z 1){reference-type="ref" reference="thm: rank poly matr fij Z 1"}, used in a very crude form, implies $$\label{eq:Simple form} \begin{split} L_{\mathbf f, r}(H) \leqslant H^{m+nr-r+o(1)} &+ H^{3m +n(r-1) - 2r +o(1)} \\ & \qquad \quad + H^{mr + (n-r)(r-5/4) + o(1)}. \end{split}$$ **Remark 2**. *In the setting of [@BlLi], proportional rows are excluded from consideration. This means that in this scenario, the bound ([\[eq:t in \[0,1\]\]](#eq:t in [0,1]){reference-type="ref" reference="eq:t in [0,1]"}) can be dropped from the final bound in the proof of Theorem [Theorem 1](#thm: rank poly matr fij Z 1){reference-type="ref" reference="thm: rank poly matr fij Z 1"} in Section [5.1](#sec:T1 2){reference-type="ref" reference="sec:T1 2"}. Hence for such matrices we can replace $\Delta(d,m,n,r)$ with just $\max_{t =3, \ldots, r}\left\{\left(t-1\right)m -n\left(s_t-1\right) - r\left(t -s_t\right)\right\}$.
In particular, for $L_{\mathbf f, r}^\sharp (H)$, defined fully analogously to $L_{\mathbf f, r}(H)$, but for matrices with this additional non-proportionality restriction, instead of ([\[eq:Simple form\]](#eq:Simple form){reference-type="ref" reference="eq:Simple form"}) we have $$L_{\mathbf f, r}^\sharp(H) \leqslant H^{3m +n(r-1) - 2r +o(1)} + H^{mr + (n-r)(r-5/4) + o(1)},$$ which is always stronger than the bound ([\[eq: BL-Bound\]](#eq: BL-Bound){reference-type="ref" reference="eq: BL-Bound"}).* In the most interesting case $m=n$, for $r \geqslant 4$, the bound in Theorem [Theorem 1](#thm: rank poly matr fij Z 1){reference-type="ref" reference="thm: rank poly matr fij Z 1"} becomes $$\label{eq:m = n} L_{\mathbf f, r}(H) \leqslant H^{nr+(n-r)(r-s_r+1)+o(1)}.$$ Note that here we have used that $$\max_{t =3, \ldots, r}(t-s_t) = r-s_r,$$ since $t-s_t$ grows monotonically, which follows from the observation $s_{t+1}-s_t \leqslant 1 = (t+1)-t$. In particular, for $N_\mathbf f(H)$, given by ([\[eq:Count Sing\]](#eq:Count Sing){reference-type="ref" reference="eq:Count Sing"}), we have $$\label{eq:Sing Matr} N_\mathbf f(H) \leqslant H^{n^2-s_{n-1}+o(1)},$$ while ([\[eq: BL-Bound\]](#eq: BL-Bound){reference-type="ref" reference="eq: BL-Bound"}) gives $N_\mathbf f(H) \leqslant H^{n^2-2+o(1)}$. Thus if $d\geqslant n^{1/2}$ then we save about $n^{1/2}$ against the trivial bound. **Remark 3**. *While we compare the exponents in ([\[eq: BL-Bound\]](#eq: BL-Bound){reference-type="ref" reference="eq: BL-Bound"}) and ([\[eq:m = n\]](#eq:m = n){reference-type="ref" reference="eq:m = n"}) with that in ([\[eq:Katz\]](#eq:Katz){reference-type="ref" reference="eq:Katz"}), we do not have any convincing argument to suggest that the rank statistics of matrices with non-linear polynomials has to resemble that of matrices with linear polynomials. Note that in the case of equal polynomials $f_{i,j}(X) = f(X)$, $1\leqslant i\leqslant m$, $1 \leqslant j\leqslant n$, one can easily show that $$L_{\mathbf f,r}(H) \gg H^{nr}$$ by first choosing $x_{i,j}$ such that $\left(f_{i,j}\left(x_{i,j}\right)\right)_{1\leqslant i,j\leqslant r}$ is non-singular (in $(2H+1) ^{r^2} + O\left(H^{r^2-1}\right)\gg H^{r^2}$ ways); choosing the remaining entries in the first $r$ rows arbitrarily in $(2H+1)^{(n-r)r}$ ways, and setting $x_{h,j} = x_{1,j}$ for all $h =r +1, \ldots, m$ and $j=1, \ldots, n$. A similar comment also applies to the comparison between the bound ([\[eq:E-BLS-bound\]](#eq:E-BLS-bound){reference-type="ref" reference="eq:E-BLS-bound"}) and our bound ([\[eq:m = n Fp\]](#eq:m = n Fp){reference-type="ref" reference="eq:m = n Fp"}) below.* **Remark 4**. *Among other ingredients, our proof of Theorem [Theorem 1](#thm: rank poly matr fij Z 1){reference-type="ref" reference="thm: rank poly matr fij Z 1"} relies on a result of Pila [@Pila]. Under various additional assumptions on the polynomials $f_{i,j}$, $1\leqslant i\leqslant m$, $1 \leqslant j\leqslant n$, one can use stronger bounds such as those of Browning and Heath-Brown [@BrHB1; @BrHB2] or Salberger [@Salb]. However, this does not change the final result as other bounds dominate this part of the argument. On the other hand, for $d\geqslant 3$ some further improvements are possible as described in Section [6](#sec:improve){reference-type="ref" reference="sec:improve"}.* **Remark 5**.
*A more general, although quantitatively weaker version of Theorem [Theorem 1](#thm: rank poly matr fij Z 1){reference-type="ref" reference="thm: rank poly matr fij Z 1"} concerning counting matrices with entries from some rather general convex sets (rather than polynomial images), may be obtained through the use of [@BrHaRu Theorem 3] and [@Shk Theorem 27] instead of our Lemmas [Lemma 23](#lem:SepVars){reference-type="ref" reference="lem:SepVars"}, [Lemma 25](#lem:Small k){reference-type="ref" reference="lem:Small k"} and [Lemma 26](#lem:Pila){reference-type="ref" reference="lem:Pila"}.* Now for $a \in {\mathbb Z}$ we denote $$N_{\mathbf f}(H;a) = \# \{\mathbf X\in {\mathcal M}_{\mathbf f}(H):~\det \mathbf X=a\}.$$ Thus $N_{\mathbf f}(H) = N_{\mathbf f}(H;0)$. **Corollary 6**. *Let $n \geqslant 4$. Fix an $n\times n$ matrix $\mathbf f$ of non-constant polynomials $f_{i,j}(X_{i,j})\in {\mathbb Z}[X_{i,j}]$ of degrees $\deg f_{i,j} \geqslant d\geqslant 3$, $1\leqslant i, j\leqslant n$. Then, uniformly over $a \in {\mathbb Z}$ we have $$N_{\mathbf f}(H;a) \leqslant H^{n^2-s_{n-1}+o(1)}$$ as $H\to \infty$.* ## Results over ${\mathbb F}_p$ {#sec: Res Fp} We first remark that using bounds of [@Chang] and [@KMS] instead of the bounds in Section [4.1](#sec: sol box Z){reference-type="ref" reference="sec: sol box Z"} one can derive analogues of Theorem [Theorem 1](#thm: rank poly matr fij Z 1){reference-type="ref" reference="thm: rank poly matr fij Z 1"} for polynomial matrices over a finite field. Furthermore, in some ranges of $H$ the bounds from [@Chang; @KMS] can be augmented with bounds of exponential sums with polynomials based on the Vinogradov mean value theorem, see, for example, [@Bourg Theorem 5] or, depending on the range of $H$, the classical Weil's bound, see, for example, [@Li Chapter 6, Theorem 3] or [@LN Theorem 5.38]. For such $H$ this leads to a stronger version of the trivial inequality ([\[eq:Triv Jk\]](#eq:Triv Jk){reference-type="ref" reference="eq:Triv Jk"}). To show the ideas and to avoid the unnecessary clutter, we only consider the case of small $H$, where the result takes the simplest form. We recall that $L_{\mathbf f, r}(H,p)$ is the number of matrices from ${\mathcal M}_{\mathbf f}(H)$ whose reduction modulo $p$ is of a given rank $r$ over the finite field ${\mathbb F}_p$ of $p$ elements. In fact, in this case, the result is uniform with respect to the polynomials in $\mathbf f$, which can now be assumed to be defined over ${\mathbb F}_p$ rather than over ${\mathbb Z}$ (as we need in Theorems [Theorem 8](#thm: sing poly matr fij Fp){reference-type="ref" reference="thm: sing poly matr fij Fp"} and [Theorem 9](#thm: imm poly matr fij Fp){reference-type="ref" reference="thm: imm poly matr fij Fp"} below). **Theorem 7**. *Let $n\geqslant m \geqslant r \geqslant 3$. Fix an $m\times n$ matrix $\mathbf f$ of non-constant polynomials $f_{i,j}(X_{i,j})\in {\mathbb F}_p[X_{i,j}]$, of degrees $e \geqslant\deg f_{i,j} \geqslant 2$, $1\leqslant i\leqslant m$, $1 \leqslant j\leqslant n$. 
Then, for $$H \leqslant p^{2/(e(e+1))}$$ we have $$L_{\mathbf f, r}(H,p) \leqslant H^{m+nr-r+\Gamma(m,n,r)+o(1)},$$ where $$\Gamma(m,n,r) = \max\{0, \, m- (n+r)/2, \, m(r-1) - n -r(r-2)\},$$ as $H\to \infty$.* For $m=n$, the bound in Theorem [Theorem 7](#thm: rank poly matr fij Fp){reference-type="ref" reference="thm: rank poly matr fij Fp"} becomes $$\label{eq:m = n Fp} L_{\mathbf f, r}(H,p) \ll H^{n(r+1) -r+(n-r)(r-2)+o(1)} = H^{nr+(n-r)(r-1)+o(1)}.$$ As in the case of polynomial matrices over ${\mathbb Z}$ we note that in the corresponding range of $H$, for $r$ close to $n$, the exponent in ([\[eq:m = n Fp\]](#eq:m = n Fp){reference-type="ref" reference="eq:m = n Fp"}) is not too far from the exponent in ([\[eq:E-BLS-bound\]](#eq:E-BLS-bound){reference-type="ref" reference="eq:E-BLS-bound"}), see however Remark [Remark 3](#rem:Pseudocomparison){reference-type="ref" reference="rem:Pseudocomparison"}. Next, for $m=n$ we present an asymptotic formula for the number of singular matrices $N_{\mathbf f}(H, p)$ given by ([\[eq:Count Sing\]](#eq:Count Sing){reference-type="ref" reference="eq:Count Sing"}). Similarly to the proof of ([\[eq:AhmShp\]](#eq:AhmShp){reference-type="ref" reference="eq:AhmShp"}) our result is based on a result of Fouvry [@Fouv00] on the distribution of rational points on rather general algebraic varieties over prime finite fields. Our main result is as follows. **Theorem 8**. *Let $n\geqslant 3$. Fix an $n\times n$ matrix $\mathbf f$ of non-constant polynomials $f_{i,j}(X_{i,j})\in {\mathbb Z}[X_{i,j}]$, $1\leqslant i, j\leqslant n$. Let $p$ be a sufficiently large prime. Then, for a positive integer $H \leqslant p/2$ we have $$N_{\mathbf f}(H, p)= \frac{1}{p} (2H+1)^{n^2}+ O\left(p^{(n^2-1)/2 +o(1)} + H^{n^2-2}p^{1/2+o(1)} \right),$$ as $p\to \infty$.* We remark that Theorem [Theorem 8](#thm: sing poly matr fij Fp){reference-type="ref" reference="thm: sing poly matr fij Fp"} is nontrivial for $H\geqslant p^{3/4+\varepsilon}$ for any fixed $\varepsilon > 0$ and sufficiently large prime $p$. Clearly, Theorems [Theorem 7](#thm: rank poly matr fij Fp){reference-type="ref" reference="thm: rank poly matr fij Fp"} and [Theorem 8](#thm: sing poly matr fij Fp){reference-type="ref" reference="thm: sing poly matr fij Fp"} can be used to derive analogues of Corollary [Corollary 6](#cor:det){reference-type="ref" reference="cor:det"}. Next, in the special case when $\mathbf f$ consists of polynomials of the same degree, we obtain a very broad generalisation of Theorem [Theorem 8](#thm: sing poly matr fij Fp){reference-type="ref" reference="thm: sing poly matr fij Fp"} to the much wider class of matrix functions known as *immanants* which are expressions of the form $${\mathrm {imm}\,}_\chi \mathbf X= \sum_{\sigma \in {\mathcal S}_n} \chi(\sigma) \prod_{i=1}^n x_{i,\sigma(i)},$$ where $\mathbf X= \left(x_{i,j}\right)_{1 \leqslant i,j \leqslant n}$ is an $n\times n$ matrix (over an arbitrary ring) and $\chi: {\mathcal S}_n \to {\mathbb C}$ is an irreducible character of the symmetric group ${\mathcal S}_n$. In particular, the trivial character $\chi(\sigma) = 1$ corresponds to the *permanent* ${\mathrm {per}\,}\mathbf X$, the alternating character $\chi(\sigma) = \mathop{\mathrm{sign}}\sigma$ corresponds to the *determinant* $\det \mathbf X$. 
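To make the definition above concrete, here is a minimal, self-contained Python sketch (purely illustrative, not part of the paper's argument) that evaluates ${\mathrm {imm}\,}_\chi \mathbf X$ directly from the sum over ${\mathcal S}_n$; only the trivial and the alternating characters are used, so the two calls below recover the permanent and the determinant of a sample $3\times 3$ integer matrix.

```python
# Illustrative only: evaluate an immanant directly from its definition.
from itertools import permutations
from math import prod

def perm_sign(p):
    # parity of a permutation given as a tuple of images of 0, ..., n-1
    n, seen, s = len(p), [False] * len(p), 1
    for i in range(n):
        if not seen[i]:
            j, cycle_len = i, 0
            while not seen[j]:
                seen[j] = True
                j = p[j]
                cycle_len += 1
            if cycle_len % 2 == 0:   # each even-length cycle flips the sign
                s = -s
    return s

def immanant(X, chi):
    # sum over all sigma in S_n of chi(sigma) * prod_i X[i][sigma(i)]
    n = len(X)
    return sum(chi(p) * prod(X[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

X = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(immanant(X, perm_sign))     # alternating character: det X = -3
print(immanant(X, lambda p: 1))   # trivial character: per X = 463
```

For any other character of ${\mathcal S}_n$ one would only replace `chi` by the corresponding class function; the counting quantities discussed below do not depend on this implementation in any way.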
This motivates us to define the following extension of $N_{\mathbf f}(H, p)$: $$N_{\mathbf f, \chi}(H, p) = \# \{\mathbf X\in {\mathcal M}_{\mathbf f}(H):~ {\mathrm {imm}\,}_\chi \mathbf X\equiv 0 \pmod p\},$$ where $\chi$ is an arbitrary character of ${\mathcal S}_n$. **Theorem 9**. *Let $n\geqslant 3$. Fix an $n\times n$ matrix $\mathbf f$ of non-constant polynomials $f_{i,j}(X_{i,j})\in {\mathbb Z}[X_{i,j}]$, $1\leqslant i, j\leqslant n$, of the same degree $d\geqslant 1$. Let $p$ be a sufficiently large prime. Then, for any character $\chi$ of ${\mathcal S}_n$, for a positive integer $H \leqslant p/2$ we have $$N_{\mathbf f, \chi}(H, p)= \frac{1}{p} (2H+1)^{n^2}+ O\left(p^{(n^2-1)/2 +o(1)} + H^{n^2-2}p^{1/2+o(1)} \right),$$ as $p\to \infty$.* We conclude with the following. **Problem 10**. *Obtain analogues of the asymptotic formulas of Theorems [Theorem 8](#thm: sing poly matr fij Fp){reference-type="ref" reference="thm: sing poly matr fij Fp"} and [Theorem 9](#thm: imm poly matr fij Fp){reference-type="ref" reference="thm: imm poly matr fij Fp"} for $L_{\mathbf f, r}(H,p)$.* The main obstacle towards a resolution of Problem [Problem 10](#prob: AsymForm){reference-type="ref" reference="prob: AsymForm"} is the lack of an absolute irreducibility result for the corresponding algebraic variety, similar to Lemma [Lemma 15](#lem:detpolyirr){reference-type="ref" reference="lem:detpolyirr"}, which is an interesting question in its own right. # Absolute irreducibility of some polynomials ## Preparations We require the following result of Tverberg [@Tve] (see also [@Sch Corollary 2, Section 1.7]). **Lemma 11**. *Let ${\mathbb K}$ be an algebraically closed field of characteristic zero, $k\geqslant 3$ and let $f_{i}\in {\mathbb K}[X_{i}]$, $i=1,\ldots,k$, be non-constant polynomials. Then the polynomial $$H(X_1, \ldots, X_k)=f_1(X_1)+\cdots+f_k(X_k)$$ is absolutely irreducible.* **Remark 12**. *If $\operatorname{char}{\mathbb K}=p>0$, it is known by the work of Schinzel [@Sch Corollary 3, Section 1.7] that the polynomial $$H(X_1, \ldots, X_k)=f_1(X_1)+\cdots+f_k(X_k)$$ is absolutely irreducible if and only if at least one polynomial $f_i(X_i)$ is not of the form $h_i(X_i)^p+c h_i(X_i)$, for some $c\in{\mathbb K}$ and some $h_i\in{\mathbb K}[X_i]$. This condition is indeed needed as the following example shows: for any $c\in{\mathbb K}$ let $f_i(X_i)=X_i^p+cX_i$, $i=1,\ldots,k$; then obviously the polynomial $$H(X_1,\ldots,X_k)=(X_1+\ldots+X_k)^p+c(X_1+\ldots+X_k)$$ is reducible over ${\mathbb K}$.* For us it will be sufficient to have a result as in Lemma [Lemma 11](#lem:diagonalpolyn){reference-type="ref" reference="lem:diagonalpolyn"} when $\operatorname{char}{\mathbb K}=p>0$ is a sufficiently large prime $p$ and the polynomials are defined over ${\mathbb Z}$, and thus we need Ostrowski's theorem (see [@Schmidt Corollary 2B]), which we state below. **Lemma 13**. *Let $f(X_1, \ldots, X_k)\in {\mathbb Z}[X_1, \ldots, X_k]$ be an absolutely irreducible polynomial of degree $d$ and let $p$ denote a prime with $$p> (4\|f\|)^{M^{2^M}},$$ where $\|f\|$ denotes the sum of the absolute values of the coefficients of $f$ and $M=\binom{k+d-1}{k}$. Then the reduction of $f$ modulo $p$ is absolutely irreducible over ${\mathbb F}_p$.* Therefore, we have the following direct consequence of Lemma [Lemma 11](#lem:diagonalpolyn){reference-type="ref" reference="lem:diagonalpolyn"} and Lemma [Lemma 13](#lem:Ostrowski){reference-type="ref" reference="lem:Ostrowski"}. **Corollary 14**.
*Let $k\geqslant 3$ and let $f_{i}\in {\mathbb Z}[X_{i}]$, $i=1,\ldots,k$, be non-constant polynomials. Then, for any sufficiently large prime $p$, the polynomial $$H(X_1, \ldots, X_k)=f_1(X_1)+\cdots+f_k(X_k)$$ is absolutely irreducible over ${\mathbb F}_p$.* ## Absolute irreducibility of determinant varieties We believe that the following result is of independent interest and is our main tool in establishing Theorem [Theorem 8](#thm: sing poly matr fij Fp){reference-type="ref" reference="thm: sing poly matr fij Fp"}. **Lemma 15**. *Let $n\geqslant 3$ and let $f_{i,j}\in {\mathbb Z}[X_{i,j}]$, $i,j=1,\ldots,n$, be non-constant polynomials. Then the determinant $\det\left(f_{i,j}(X_{i,j})\right)_{1\leqslant i,j\leqslant n}$, viewed as an element of ${\mathbb Z}[X_{1,1}, \ldots, X_{n,n}]$, is absolutely irreducible over ${\mathbb Q}$ and over ${\mathbb F}_p$ for any sufficiently large prime $p$.* *Proof.* Let us denote $$D\left(\left(X_{i,j}\right)_{1\leqslant i,j\leqslant n}\right) = \det\left(f_{i,j}(X_{i,j})\right)_{1\leqslant i,j\leqslant n}$$ and $$d_{i,j}=\deg f_{i,j}, \qquad i,j=1,\ldots,n.$$ We prove first that $D$ is irreducible over ${\mathbb C}$ and then we apply Corollary [Corollary 14](#cor:abs irred p){reference-type="ref" reference="cor:abs irred p"} to conclude the absolute irreducibility modulo any sufficiently large prime $p$. Assume now that $D = fg$ for some $f,g\in {\mathbb C}[X_{1,1},\ldots X_{n,n}]$. We fix a specialisation $$(\alpha_{i,j}, ~ i=2,\ldots,n, \ j=1,\ldots,n)\in{\mathbb C}^{n(n-1)}$$ of the last $n(n-1)$ indeterminates $X_{2,1},\ldots, X_{n,n}$ such that we obtain $$\begin{aligned} D&(X_{1,1},\ldots, X_{1,n}, \alpha_{2,1}, \ldots, \alpha_{n,n}) \\ &\qquad\qquad\qquad\qquad\quad= \begin{vmatrix} f_{1,1}(X_{1,1}) & f_{1,2}(X_{1,2}) & \ldots & f_{1,n}(X_{1,n}) \\ 1 & & & \\ \vdots & & I_{n-1} & \\ 1 & & & \end{vmatrix}.\end{aligned}$$ We write $D_*, f_*, g_*$ for the resulting specialised $n$-variable polynomials. To compute $D_*$, let $M_j$ denote the $(n-1)\times(n-1)$ matrix resulting from removing the first row and $j$-th column of the matrix above, so that $$\label{eqn:D*1} D_*(X_{1,1},\ldots, X_{1,n}) = \sum_{j=1}^n (-1)^{j+1}\cdot \det M_j \cdot f_{1,j}(X_{1,j}).$$ Clearly, $\det M_1=1$. To compute $\det M_j$, for $2\leqslant j\leqslant n$, write $K_j$ for the matrix resulting from replacing the $j$-th column of $I_{n-1}$ by $[1,1, \ldots, 1]^t$ and note that $\det K_j = \det I_{n-1}=1$. Furthermore, for $2\leqslant j\leqslant n$, one gets $K_j$ by swapping columns of $M_j$, $j-2$ consecutive times and so $\det M_j= (-1)^{j-2}\det K_j = (-1)^{j-2}$. Hence, going back to ([\[eqn:D\*1\]](#eqn:D*1){reference-type="ref" reference="eqn:D*1"}), we have $$\begin{aligned} D_*(X_{1,1},\ldots, &X_{1,n})\\ & = f_{1,1}(X_{1,1}) + \sum_{j=2}^n (-1)^{2j-1}\cdot f_{1,j}(X_{1,j}) \\ &= f_{1,1}(X_{1,1}) - f_{1,2}(X_{1,2}) - f_{1,3}(X_{1,3})- \ldots - f_{1,n}(X_{1,n}).\end{aligned}$$ By Lemma [Lemma 11](#lem:diagonalpolyn){reference-type="ref" reference="lem:diagonalpolyn"}, $D_*$ is an absolutely irreducible polynomial, which, together with the assumption $D_* = f_* g_*$ implies $$\begin{aligned} {d_{1,j}}& \leqslant\max\{\deg_{X_{1,j}} f_* , \deg_{X_{1,j}}g_*\} \\ & \leqslant\max\{\deg_{X_{1,j}} f, \deg_{X_{1,j}} g\}\leqslant{d_{1,j}}\end{aligned}$$ for all $1\leqslant j\leqslant n$. 
That is, $$\label{eqn:degx11f} \max\{\deg_{X_{1,j}} f, \deg_{X_{1,j}} g\} = {d_{1,j}}, \qquad 1\leqslant j\leqslant n.$$ Next, we use ([\[eqn:degx11f\]](#eqn:degx11f){reference-type="ref" reference="eqn:degx11f"}) to show that $D$ is absolutely irreducible. In particular, we use the following two basic observations: (i) If $h$ is a monomial appearing in $D$, such that $X_{i,j} \mid h$ for some $1\leqslant i,j\leqslant n$, then $X_{i,k} \nmid h$ for every $k \ne j$ and $X_{k,j} \nmid h$ for every $k \ne i$, $1\leqslant k\leqslant n$. Indeed, one can see this by using the determinant formula $$D=\sum_{\sigma\in {\mathcal S}_n} (-1)^{\pi(\sigma )} f_{1,\sigma(1)}(X_{1,\sigma(1)})\cdots f_{n,\sigma(n)}(X_{n,\sigma(n)}),$$ where the sum is over all permutations $\sigma$ of the set $\{1,\ldots,n\}$ and $\pi(\sigma)$ is the parity of $\sigma$. (ii) We have $\deg_{X_{i,j}} f + \deg_{X_{i,j}} g =d_{i,j}$ for $1\leqslant i,j \leqslant n$. By ([\[eqn:degx11f\]](#eqn:degx11f){reference-type="ref" reference="eqn:degx11f"}), suppose without loss of generality that $\deg_{X_{1,1}} f = d_{1,1}$, which by (ii) implies $\deg_{X_{1,1}} g = 0$. Then, the indeterminates $X_{1,j}$ do not appear in $g$ for any $2\leqslant j\leqslant n$ as otherwise this would contradict (i). To see this, writing $f=AX_{1,1}^{d_{1,1}} + B$ for some polynomials $A, B$, with $\deg_{X_{1,1}} B<d_{1,1}$, we conclude that the coefficient of $X_{1,1}^{d_{1,1}}$, in $fg$, is precisely $Ag$. Now, if $\deg_{X_{1,j}} g>0$, we have $\deg_{X_{1,j}} Ag>0$, and thus $X_{1,1}$ and $X_{1,j}$ would divide the same monomial in $D$, contradicting (i). Finally, suppose $g$ involves some indeterminate $X_{i,j}$. Then since $\deg_{X_{1,j}} g=0$ for all $1\leqslant j\leqslant n$, as above, writing $f=AX_{1,j}^{d_{1,j}} + B$, for some polynomials $A, B$, with $\deg_{X_{1,j}} B<d_{1,j}$, we conclude that the coefficient of $X_{1,j}^{d_{1,j}}$, in $fg$, is precisely $Ag$. This shows again that $X_{1,j}$ and $X_{i,j}$ divide the same monomial in $D$, contradicting (i). Since this applies for any variable $X_{i,j}$, we obtain that $g$ is a nonzero constant, which concludes the absolute irreducibility over ${\mathbb Q}$. Applying now Corollary [Corollary 14](#cor:abs irred p){reference-type="ref" reference="cor:abs irred p"}, we conclude the proof. ◻ **Remark 16**. *We note that Lemma [Lemma 15](#lem:detpolyirr){reference-type="ref" reference="lem:detpolyirr"} holds over any algebraically closed field ${\mathbb K}$ and without any condition on the characteristic $p$ if we impose some extra condition on $f_{i,j}$ for some $i,j=1,\ldots,n$, as noted in Remark [Remark 12](#rem:abs irred){reference-type="ref" reference="rem:abs irred"}. More precisely, one has the following statement, for which the proof is exactly the same, applying Remark [Remark 12](#rem:abs irred){reference-type="ref" reference="rem:abs irred"} instead of Corollary [Corollary 14](#cor:abs irred p){reference-type="ref" reference="cor:abs irred p"}:* *Let ${\mathbb K}$ be an algebraically closed field, $n\geqslant 3$ and $f_{i,j}\in {\mathbb K}[X_{i,j}]$, $i,j=1,\ldots,n$, non-constant polynomials. If $\operatorname{char}{\mathbb K}=p>0$, assume also that for some $i,j=1,\ldots,n$, the polynomial $f_{i,j}$ is not of the form $h_{i,j}^p+ch_{i,j}$ for some $c\in{\mathbb K}^*$ and $h_{i,j}\in{\mathbb K}[X_{i,j}]$.* *Then the determinant $\det\left(f_{i,j}(X_{i,j})\right)_{1\leqslant i,j\leqslant n}$, viewed as an element of ${\mathbb K}[X_{1,1}, \ldots, X_{n,n}]$, is absolutely irreducible.* **Remark 17**.
*We note that Lemma [Lemma 15](#lem:detpolyirr){reference-type="ref" reference="lem:detpolyirr"} does not necessarily hold for $n=2$, since for example for $f_{i,j}=X_{i,j}^2$, $i,j=1,2$, we have $$\begin{aligned} \det\left(X_{i,j}^2\right)_{1\leqslant i,j\leqslant 2} &= X_{1,1}^2X_{2,2}^2 - X_{1,2}^2X_{2,1}^2 \\ &= (X_{1,1}X_{2,2} - X_{1,2}X_{2,1})(X_{1,1}X_{2,2} + X_{1,2}X_{2,1}).\end{aligned}$$* **Remark 18**. *It is certainly interesting to obtain a version of Lemma [Lemma 15](#lem:detpolyirr){reference-type="ref" reference="lem:detpolyirr"} for the variety of matrices from ${\mathcal M}_{\mathbf f}$ of a given rank $r \leqslant n$. For matrices with linear entries, that is, when $f_{i,j}(X_{i,j})=X_{i,j}$, $i,j=1,\ldots,n$, such results are known. Indeed, this follows from the observation that the set of such matrices is the epimorphic image of the irreducible variety $\operatorname{GL}_n\times \operatorname{GL}_n$, under the regular mapping $(A, B) \mapsto AMB^{-1}$, for any fixed $n\times n$ matrix $M$ of rank $r$. See, for example, [@Abh] or [@BruVet Proposition 1.1].* In the case of polynomials of the same degree we have the following broad generalisation of Lemma [Lemma 15](#lem:detpolyirr){reference-type="ref" reference="lem:detpolyirr"} on absolute irreducibility of arbitrary linear combinations of minors. **Lemma 19**. *Let $d\geqslant 1$, $n,r\geqslant 3$, with $r\leqslant n$ and let $$s= \binom{n}{r}^2.$$ Given non-constant polynomials $f_{i,j}\in {\mathbb Z}[X]$, $i,j=1, \ldots, n$ of the same degree, write $D_h$, $1\leqslant h\leqslant s$, for the $r\times r$ minors of the matrix $\mathbf f= \left(f_{i,j}(X_{i,j})\right)_{1\leqslant i,j \leqslant n}$. Then any non-trivial linear combination $$\label{eqn:linComb} \sum_{1\leqslant h \leqslant s} c_h D_h,\qquad \left(c_1, \ldots, c_s\right)\in {\mathbb K}^s \setminus \{(0, \ldots, 0)\},$$ is absolutely irreducible over the field ${\mathbb K}$ where $${\mathbb K}= {\mathbb Q}\qquad \text{or} \qquad {\mathbb K}= {\mathbb F}_p$$ for any sufficiently large prime $p$.* *Proof.* Let $\deg f_{i,j} = d\geqslant 1$, $1 \leqslant i,j \leqslant n$. We begin by noting that since linear combinations of the corresponding minors of the matrix $\left(X_{i,j}^d\right)_{1\leqslant i,j \leqslant n}$ appear as the homogeneous part of top degree of ([\[eqn:linComb\]](#eqn:linComb){reference-type="ref" reference="eqn:linComb"}), their absolute irreducibility implies that of ([\[eqn:linComb\]](#eqn:linComb){reference-type="ref" reference="eqn:linComb"}). Thus it suffices to consider only the case $f_{i,j}= X_{i,j}^d$. The proof is by induction on the number of nonzero terms appearing in ([\[eqn:linComb\]](#eqn:linComb){reference-type="ref" reference="eqn:linComb"}), denoted by $t$, noting that the case $t=1$ has been settled by Lemma [Lemma 15](#lem:detpolyirr){reference-type="ref" reference="lem:detpolyirr"} (this is why we need $p$ to be sufficiently large). Renumbering, we can assume that $c_1, \ldots, c_t \ne 0$ and write ${\mathcal S}= \{ D_\nu:~1\leqslant\nu \leqslant t\}$ for the minors appearing in ([\[eqn:linComb\]](#eqn:linComb){reference-type="ref" reference="eqn:linComb"}). Let $t\geqslant 2$ and suppose the desired result holds for $t-1$. Note that one may choose a row or column of the matrix $\left(X_{i,j}^d\right)_{1\leqslant i,j \leqslant n}$ giving a non-trivial partition ${\mathcal S}= {\mathcal S}_1 \sqcup {\mathcal S}_2$, according to whether a given minor belonging to ${\mathcal S}$ takes entries from that row/column or not.
This follows from the fact that two distinct minors cannot coincide in all of their rows and columns. Write $$R = \sum_{1\leqslant h \leqslant t} c_h D_h =R_1 + R_2,$$ such that $R_\nu$ corresponds to the sum of terms appearing in ${\mathcal S}_\nu$ for $\nu=1,2$. Suppose, without loss of generality, that the minors appearing in $R_1$ and $R_2$ differ in the row $i_0$ and set $X_{i_0, j} = 1$, for $1\leqslant j\leqslant n$. Write $$R^* = R_1 + R_2^*$$ for this specialisation and note that $R_1$ and $R_2^*$ are homogeneous polynomials of degrees $rd$ and $(r-1)d$, respectively. Let $$H(X_{1,1}, \ldots, X_{n,n}, Z) = Z^{rd}\cdot R^*\left(\frac{X_{1,1}}{Z}, \ldots, \frac{X_{n,n}}{Z}\right).$$ That is, $H$ represents the homogenised form of $R^*$. Note that $$H = R_1 + R_2^* \cdot Z^d.$$ By the induction hypothesis, $R_1$ is irreducible over the algebraic closure $\overline {\mathbb K}$ of ${\mathbb K}$ (for sufficiently large $p$, if ${\mathbb K}={\mathbb F}_p)$, and given that $\deg(R_1)>\deg(R_2^*)$, clearly we have $R_1 \nmid R_2^*$. Thus irreducibility of $H$ follows by an application of Eisenstein's criterion. In turn, this implies the irreducibility of $R^*$, as a polynomial is irreducible over $\overline {\mathbb K}$ if and only if its homogenised form is irreducible (see [@CoxLitShe Exercise 9, p. 392]). Now, suppose that $R = fg$ for some $f,g\in \overline {\mathbb K}[X_{1,1},\ldots, X_{n,n}]$ and write $f^*$ and $g^*$ for the polynomials resulting from setting $X_{i_0, j} = 1$, for $1\leqslant j\leqslant n$. Thus $R^* = f^*g^*$. Similarly to the arguments of Lemma [Lemma 15](#lem:detpolyirr){reference-type="ref" reference="lem:detpolyirr"}, irreducibility of $R^*$ implies $$\begin{aligned} {d} \leqslant\max\{\deg_{X_{i,j}} f^* , \deg_{X_{i,j}}g^*\} & \leqslant\max\{\deg_{X_{i,j}} f, \deg_{X_{i,j}} g\}\leqslant{d},\\ 1\leqslant i,j\leqslant n, & \quad i\not=i_0.\end{aligned}$$ That is, $$\label{eqn:degx11f2} \max\{\deg_{X_{i,j}} f, \deg_{X_{i,j}} g\} = d, \qquad 1\leqslant i,j\leqslant n, \quad i\not=i_0.$$ Furthermore, clearly both observations (i) and (ii) in the proof of Lemma [Lemma 15](#lem:detpolyirr){reference-type="ref" reference="lem:detpolyirr"} hold for the polynomial $R$. Thus the remainder of the proof is essentially a repetition of the arguments of Lemma [Lemma 15](#lem:detpolyirr){reference-type="ref" reference="lem:detpolyirr"}, and we only sketch it. By ([\[eqn:degx11f2\]](#eqn:degx11f2){reference-type="ref" reference="eqn:degx11f2"}), without loss of generality, let $\deg_{X_{1,1}} f = d$. Then, by (ii) in the proof of Lemma [Lemma 15](#lem:detpolyirr){reference-type="ref" reference="lem:detpolyirr"}, we have $\deg_{X_{1,1}} g = 0$ and thus by observation (i) in the proof of Lemma [Lemma 15](#lem:detpolyirr){reference-type="ref" reference="lem:detpolyirr"}, $g$ cannot involve the indeterminates $X_{1,j}$ for any $2\leqslant j\leqslant n$. Furthermore, if $g$ involves some indeterminate $X_{i,j}$, since $\deg_{X_{1,j}} g=0$ for all $1\leqslant j\leqslant n$, this again contradicts (i) in the proof of Lemma [Lemma 15](#lem:detpolyirr){reference-type="ref" reference="lem:detpolyirr"}. Consequently, we obtain that $g$ is a nonzero constant, concluding the absolute irreducibility over $\overline {\mathbb K}$.
◻ # Point counting on some hypersurfaces ## Solutions to polynomial equations in a box {#sec: sol box Z} For a polynomial $f(X)\in {\mathbb Z}[X]$ and a real $\kappa> 0$ we denote $$I_{\kappa}(f,H) = \int_0^1 \left| \sum_{x=-H}^H {\mathbf{\,e}}\left(\alpha f(x)\right) \right|^{\kappa} \, d\alpha ,$$ where ${\mathbf{\,e}}(z) = \exp(2\pi i z)$. We now recall the following result of Wooley [@Wool Corollary 14.2]. **Lemma 20**. *Let $f(X)\in {\mathbb Z}[X]$ be a fixed polynomial of degree $d\geqslant 1$. Then for each integer $s$ with $1 \leqslant s \leqslant d$, we have $$I_{s(s+1)}(f,H) \leqslant H^{s^2 + o(1)}, \qquad H \to \infty.$$* Furthermore, for some parameters the following result of Hua [@Hua] gives better estimates. **Lemma 21**. *Let $f(X)\in {\mathbb Z}[X]$ be a fixed polynomial of degree $d\geqslant 1$. Then for each integer $s$ with $1 \leqslant s \leqslant d$, we have $$I_{2^s}(f,H) \leqslant H^{2^s - s + o(1)}, \qquad H \to \infty.$$* We are now able to summarise our bound on $I_{k}(f,H)$ for integers $k \leqslant 11$. It is convenient to define $$\label{eq: sigma_k} \sigma_{k} = s_{k-1}, \qquad k = 4, \ldots, 11,$$ which corresponds to the values of $s_t$ in Table [1](#tab:s_t){reference-type="ref" reference="tab:s_t"}. **Lemma 22**. *Let $f(X)\in {\mathbb Z}[X]$ be a fixed polynomial of degree $d\geqslant 3$. Then for $k = 4, \ldots, 11$ we have $$I_{k} (f,H) \leqslant H^{k -\sigma_k + o(1)}, \qquad H \to \infty.$$* *Proof.* Clearly, for $k=4$ and $k=8$, the result follows from Lemma [Lemma 21](#lem:Hua){reference-type="ref" reference="lem:Hua"} (recalling that $d\geqslant 3$), taken with $s=2$ and $s =3$, respectively. For $k =9,10, 11$ we simply use the trivial bound $$I_{k}(f,H) \ll H^{k-8} I_{8}(f,H) .$$ Next we consider $k=5$ and note that by the Hölder inequality $$I_{5}(f,H) \leqslant\left(I_{4}(f,H)\right)^{3/4} \left(I_{8}(f,H) \right)^{1/4} \leqslant H^{11/4 + o(1)} .$$ Similarly $$I_{6}(f,H) \leqslant\left(I_{4}(f,H)\right)^{1/2} \left(I_{8}(f,H) \right)^{1/2} \leqslant H^{7/2 + o(1)} ,$$ and $$I_{7}(f,H) \leqslant\left(I_{4}(f,H)\right)^{1/4} \left(I_{8}(f,H) \right)^{3/4} \leqslant H^{17/4 + o(1)},$$ which concludes the proof. ◻ It is easy to see that by the orthogonality of exponential functions, $I_{2k}(f,H)$ is the number of solutions to the Diophantine equation $$\sum_{i=1}^k f(x_i) =\sum_{i=1}^k f(y_i) , \qquad -H \leqslant x_i, y_i \leqslant H, \ i =1, \ldots, k.$$ We use a generalised form of this observation, together with Lemma [Lemma 20](#lem:Wool){reference-type="ref" reference="lem:Wool"} to estimate the number of solutions of a more general equation. Given $k$ polynomials $f_i(X)\in {\mathbb Z}[X]$ and an integer vector $\mathbf a=(a_1, \ldots, a_k)\in {\mathbb Z}^k$, for an integer $H\geqslant 1$ we denote by $T_\mathbf a(f_1, \ldots, f_k; H)$ the number of solutions to the Diophantine equation $$\sum_{i=1}^k a_i f_i(x_i) =0 , \qquad -H \leqslant x_i \leqslant H, \ i =1, \ldots, k.$$ **Lemma 23**. *Let $f_i(X)\in {\mathbb Z}[X]$, $i=1,\ldots,k$, be $k$ fixed polynomials of degrees $\deg f_i \geqslant d\geqslant 2$, and let $\mathbf a=(a_1, \ldots, a_k)\in {\mathbb Z}^k$ be an arbitrary integer vector with nonzero components $a_i\ne 0$, $i =1, \ldots, k$. 
Then for each positive integer $s$ such that $s \leqslant d$ and $s(s+1) \leqslant k$ we have $$T_\mathbf a(f_1, \ldots, f_k; H) \leqslant H^{k-s+o(1)}, \qquad H \to \infty.$$* *Proof.* As in the above, by the orthogonality of exponential functions, we write $$\begin{aligned} T_\mathbf a(f_1, \ldots, f_k; H) & = \int_0^1 \prod_{i=1}^k \sum_{x_i=-H}^H {\mathbf{\,e}}\left(\alpha a_i f_i(x_i)\right) \, d\alpha\\ & \leqslant\int_0^1 \prod_{i=1}^k \left| \sum_{x_i=-H}^H {\mathbf{\,e}}\left(\alpha a_i f_i(x_i)\right) \right| \, d\alpha . \end{aligned}$$ Hence, by the Hölder inequality $$\label{eq: T and I} T_\mathbf a(f_1, \ldots, f_k; H) \leqslant\prod_{i=1}^k \left(\int_0^1 \left| \sum_{x_i=-H}^H {\mathbf{\,e}}\left(\alpha a_i f_i(x_i)\right) \right|^k \, d\alpha\right)^{1/k}.$$ Since the function ${\mathbf{\,e}}(z)$ is periodic with period $1$, and $a_i \ne 0$, for each $i =1, \ldots, k$, we have $$\begin{aligned} \int_0^1 \left| \sum_{x_i=-H}^H {\mathbf{\,e}}\left(\alpha a_i f_i(x_i)\right) \right|^k \, d\alpha & = \frac{1}{a_i} \int_0^1 \left| \sum_{x_i=-H}^H {\mathbf{\,e}}\left(\alpha a_i f_i(x_i)\right) \right|^k \, d(\alpha a_i)\\ & = \frac{1}{a_i} \int_0^{a_i} \left| \sum_{x_i=-H}^H {\mathbf{\,e}}\left(\beta f_i(x_i)\right) \right|^k \, d\beta\\ & = \int_0^1 \left| \sum_{x_i=-H}^H {\mathbf{\,e}}\left(\alpha f_i(x_i)\right) \right|^k \, d \alpha\\ &= I_k(f_i,H) \end{aligned}$$ (we remark that the above calculation holds for both positive and negative values of $a_i$). Hence we derive from ([\[eq: T and I\]](#eq: T and I){reference-type="ref" reference="eq: T and I"}) that $$\label{eq:Tk vs Ik} T_\mathbf a(f_1, \ldots, f_k; H) \leqslant\prod_{i=1}^k I_{k}(f_i,H)^{1/k}.$$ For $k \geqslant s(s+1)$ we can use the trivial bound $$\label{eq: Ik - triv} I_{k}(f_i, H) \leqslant H^{k - s(s+1)} I_{s(s+1)}(f_i, H)$$ and since $s \leqslant d \leqslant\deg f_i$, $i =1, \ldots, k$, Lemma [Lemma 20](#lem:Wool){reference-type="ref" reference="lem:Wool"} applies, and after simple calculations implies the desired result. ◻ **Remark 24**. *Clearly instead of ([\[eq: Ik - triv\]](#eq: Ik - triv){reference-type="ref" reference="eq: Ik - triv"}), assuming that  $$s(s+1) \leqslant k < (s+1)(s+2),$$ one can use the Hölder inequality as in the proof of Lemma [Lemma 22](#lem:Ik_small k){reference-type="ref" reference="lem:Ik_small k"}, and estimate $$I_{k}(f, H) \leqslant\left(I_{s(s+1)}\left(f, H\right)\right)^{1/\alpha} \left(I_{(s+1)(s+2)}\left(f, H\right)\right)^{1-1/\alpha},$$ with $$\alpha = \frac{2 (s+1)}{\left(s+1\right)\left(s+2\right) - k}.$$ However, for large $k$ this leads to somewhat cluttered formulas, while providing only marginal improvements.* For the small values $k=4,\ldots, 11$, we obtain better bounds on $T_\mathbf a(f_1, \ldots, f_k; H)$ than in Lemma [Lemma 23](#lem:SepVars){reference-type="ref" reference="lem:SepVars"}, namely, using Lemma [Lemma 22](#lem:Ik_small k){reference-type="ref" reference="lem:Ik_small k"} in the inequality ([\[eq:Tk vs Ik\]](#eq:Tk vs Ik){reference-type="ref" reference="eq:Tk vs Ik"}), we obtain the following bound. **Lemma 25**. *Let $f_i(X)\in {\mathbb Z}[X]$, $i=1,\ldots,k$, be $k$ fixed polynomials of degrees $\deg f_i \geqslant d\geqslant 3$, and let $\mathbf a=(a_1, \ldots, a_k)\in {\mathbb Z}^k$ be an arbitrary integer vector with nonzero components $a_i\ne 0$, $i =1, \ldots, k$. 
Then for $k=4,\ldots, 11$ we have $$T_\mathbf a(f_1, \ldots, f_k; H) \leqslant H^{k-\sigma_k+o(1)}, \qquad H \to \infty,$$ where $\sigma_k$ is given by ([\[eq: sigma_k\]](#eq: sigma_k){reference-type="ref" reference="eq: sigma_k"}).* Next, we can also use a general bound of Pila [@Pila Theorem A], which applies by Lemma [Lemma 11](#lem:diagonalpolyn){reference-type="ref" reference="lem:diagonalpolyn"} (we only use it for $k=3$ but present it in full generality). **Lemma 26**. *Let $f_i(X)\in {\mathbb Z}[X]$, $i=1,\ldots,k$, be $k$ fixed polynomials of degrees $\deg f_i \geqslant d\geqslant 1$, and let $\mathbf a=(a_1, \ldots, a_k)\in {\mathbb Z}^k$ be an arbitrary integer vector with nonzero components $a_i\ne 0$, $i =1, \ldots, k$. Then $$T_\mathbf a(f_1, \ldots, f_k; H) \leqslant H^{k-2+1/d+o(1)}, \qquad H \to \infty.$$* It is important to observe that for $d \geqslant 3$ all bounds of this section are of the form $$T_\mathbf a(f_1, \ldots, f_k; H) \leqslant H^{k - \rho+ o(1)},$$ which is uniform with respect to the coefficients $a_1, \ldots, a_k$, and - for $k \geqslant 12$, we use Lemma [Lemma 23](#lem:SepVars){reference-type="ref" reference="lem:SepVars"} (with some $s \geqslant 3$ but with $s(s+1) \leqslant k$), which allows us to take $\rho \geqslant 3$; - for $11\geqslant k \geqslant 4$, we use Lemma [Lemma 25](#lem:Small k){reference-type="ref" reference="lem:Small k"} which allows us to take $\rho =\sigma_k$; - for $k=3$, we use Lemma [Lemma 26](#lem:Pila){reference-type="ref" reference="lem:Pila"}, which allows us to take $\rho= 2-1/d$. ## Solutions to polynomial congruences in a box {#sec: sol box Fp} As we have mentioned in Section [2.2](#sec: Res Fp){reference-type="ref" reference="sec: Res Fp"}, some analogues of the results from Section [4.1](#sec: sol box Z){reference-type="ref" reference="sec: sol box Z"} can be extracted from [@Chang; @KMS]. This leads to a large variety of results. We concentrate on the simplest (at least in typographic sense) case of small boxes. Since the argument is a discrete version of that of Section [4.1](#sec: sol box Z){reference-type="ref" reference="sec: sol box Z"} we are rather brief in our exposition here. Note that below we freely switch between the language of congruences and the language of finite fields. For a polynomial $f(X)\in {\mathbb F}_p[X]$, we denote $$J_k(f,H,p) = \frac{1}{p} \sum_{\alpha \in {\mathbb F}_p} \left| \sum_{x=-H}^H {\mathbf{\,e}}_p\left(\alpha f(x)\right) \right|^k,$$ where ${\mathbf{\,e}}_p(z) = \exp(2\pi i z/p)$. The following bound is a special case of [@KMS Theorem 1.3]. **Lemma 27**. *Let $f(X)\in {\mathbb Z}[X]$ be a fixed polynomial of degree $d\geqslant 2$. Then, for $$H \leqslant p^{2/(d(d+1))}$$ we have $$J_4(f,H,p) \leqslant H^{2+ o(1)}, \qquad H \to \infty.$$* Given $k$ polynomials $f_i(X)\in {\mathbb F}_p[X]$ and a vector $\mathbf a=(a_1, \ldots, a_k)\in {\mathbb F}_p^k$, for an integer $H\geqslant 1$ we denote by $T_\mathbf a(f_1, \ldots, f_k; H,p)$ the number of solutions to the congruence $$\sum_{i=1}^k a_i f_i(x_i) \equiv 0 \pmod p , \qquad -H \leqslant x_i \leqslant H, \ i =1, \ldots, k.$$ **Lemma 28**. *Let $f_i(X)\in {\mathbb F}_p[X]$, $i=1,\ldots,k$, be $k$ fixed polynomials of degrees $\deg f_i \geqslant d\geqslant 2$, and let $\mathbf a=(a_1, \ldots, a_k)\in {\mathbb F}_p^k$ be an arbitrary vector with nonzero components $a_i\ne 0$, $i =1, \ldots, k$. 
Then for $H \leqslant p^{2/(d(d+1))}$, we have $$T_\mathbf a(f_1, \ldots, f_k; H,p ) \leqslant \begin{cases} H^{3/2+o(1)} & \text{if}\ k = 3,\\ H^{k-2+o(1)} & \text{if}\ k \geqslant 4, \end{cases} \qquad H \to \infty.$$* *Proof.* As in the proof of Lemma [Lemma 23](#lem:SepVars){reference-type="ref" reference="lem:SepVars"}, using the orthogonality of exponential sums and the Hölder inequality, one obtains the analogue of ([\[eq: T and I\]](#eq: T and I){reference-type="ref" reference="eq: T and I"}), that is, $$\label{eq:T and J} T_\mathbf a(f_1, \ldots, f_k; H,p )\leqslant\prod_{i=1}^k J_k(f_i,H,p)^{1/k}.$$ For each $i=1,\ldots,k$, using the Hölder inequality for $k = 3$ gives us $J_3(f_i,H,p) \leqslant J_4(f_i,H,p)^{3/4}$ while for $k \geqslant 4$, we have the trivial bound $$\label{eq:Triv Jk} J_k(f_i,H,p) \leqslant J_4(f_i,H,p) H^{k-4+ o(1)}.$$ We now see that Lemma [Lemma 27](#lem:KMS){reference-type="ref" reference="lem:KMS"} implies $$J_k(f_i,H,p) \leqslant \begin{cases} H^{3/2+o(1)} & \text{if}\ k = 3,\\ H^{k-2+o(1)} & \text{if}\ k \geqslant 4, \end{cases} \qquad H \to \infty,$$ provided $H \leqslant p^{2/(d(d+1))}$. Plugging these in ([\[eq:T and J\]](#eq:T and J){reference-type="ref" reference="eq:T and J"}), we derive the desired bound. ◻ We now present another important technical tool, given by the work of Fouvry [@Fouv00]. For a polynomial in $k \geqslant 2$ variables $$F(X_1, \ldots, X_k) \in {\mathbb Z}[X_1, \ldots, X_k]$$ and an integer $H\geqslant 1$, we denote by $T_F(H, p)$ the number of solutions to the congruence $$F(x_1, \ldots, x_k) \equiv 0 \pmod p, \qquad (x_1, \ldots, x_k) \in [-H, H]^k,$$ modulo a prime $p$. We also define $T_F(p) = T_F((p-1)/2, p)$, that is, $T_F(p)$ is the number of solutions to the above congruence with unrestricted variables from ${\mathbb F}_p$. Then, the main result of [@Fouv00 Theorem], in the case of one polynomial takes the following form. For a polynomial $F\in{\mathbb Z}[X_1,\ldots,X_k]$, we denote by ${\mathcal Z}(F)$ the algebraic subset of ${\mathbb C}^k$ of all zeros of $F$ in ${\mathbb C}^k$. **Lemma 29**. *Let $$F(X_1, \ldots, X_k) \in {\mathbb Z}[X_1, \ldots, X_k]$$ be an irreducible over ${\mathbb C}$ polynomial in $k \geqslant 2$ variables, such that the hypersurface ${\mathcal Z}(F)$ is not contained in any hyperplane of ${\mathbb C}^k$. Then for any prime $p$, we have $$T_F(H, p) = \left(\frac{H}{p}\right)^k T_F(p) + O\left(p^{(k-1)/2 + o(1)} + H^{k-2} p^{1/2 + o(1)}\right)$$ as $p \to \infty$.* # Proofs of main results ## Proof of Theorem [Theorem 1](#thm: rank poly matr fij Z 1){reference-type="ref" reference="thm: rank poly matr fij Z 1"} {#sec:T1 2} As in [@BlLi] we observe that it is enough to estimate the number $L_{\mathbf f, r}^*(H)$ of matrices $\mathbf X\in {\mathcal M}_{\mathbf f}(H)$ which are of rank $r$ and such that the top left $r \times r$ minor $\mathbf X_r = \left(f_{i,j}\left(x_{i,j}\right)\right)_{1\leqslant i,j\leqslant r}$ is non-singular. We now fix the values of such $x_{i,j}$, $i,j=1, \ldots, r$, in $$\label{eq:contrib Xr} {\mathfrak A}\ll H^{r^2}$$ ways. We now observe that for every integer $h$, $r < h \leqslant m$, once the minor $\mathbf X_r$ is fixed, the $h$-th row of every matrix $U$ which is counted by $L_{\mathbf f, r}^*(H)$ is a unique linear combination of the first $r$ rows with coefficients $\left(\rho_{1}(h) , \ldots , \rho_{r}(h)\right) \in {\mathbb Q}^r$. 
We say that the $(m-r)\times r$ matrix $$\mathbf Y_r =\left(f_{h,j}\left(x_{h,j}\right)\right)_{\substack{r+1\leqslant h\leqslant m \\ 1 \leqslant j\leqslant r}}$$ (which is directly under $\mathbf X_r$ in $\mathbf X$) is of type $t\geqslant 0$ if $t$ is the largest number of non-zeros among the coefficients $\left(\rho_{1}(h) , \ldots , \rho_{r}(h)\right) \in {\mathbb Q}^r$ taken over all $h = r+1, \ldots, m$. In particular type $t = 0$ corresponds to the zero matrix $\mathbf Y_r$. Clearly if the $h$-th row is of type $t$, that is, $t$ of the coefficients in $\left(\rho_{1}(h) , \ldots , \rho_{r}(h)\right) \in {\mathbb Q}^r$ are non-zero, say $\rho_{1}(h) , \ldots , \rho_{t}(h) \ne 0$, and thus we can choose a $t \times t$ non-singular sub-matrix of the matrix $\left(f_{i,j}\left(x_{i,j}\right)\right)_{\substack{1\leqslant i\leqslant t \\ 1 \leqslant j\leqslant r}}$ . Again, without loss of generality we can assume that this is $$\mathbf X_t = \left(f_{i,j}\left(x_{i,j}\right)\right)_{1\leqslant i, j\leqslant t}.$$ This means that each of $O(H^t)$ possible choices of $X_{h,1},\ldots,X_{h,t}$ defines the coefficients $$\left( \rho_{1}(h) , \ldots , \rho_{r}(h)\right) = \left( \rho_{1}(h) , \ldots , \rho_{t}(h), 0, \ldots, 0\right)$$ and hence the rest of the values $f_{h,j}(x_{h,j})$, $j = t+1, \ldots, r$. Note that this bound is monotonically increasing with $t$ and thus applies to every row of matrices $\mathbf Y_r$ of type $t$. Therefore, for each fixed $\mathbf X_r$, there are $$\label{eq:contrib Yr-t} {\mathfrak B}_t \ll H^{t(m-r)}$$ matrices $\mathbf Y_r$ of type $t$ (note that this is also true for $t = 0$). Let now a matrix $\mathbf X_r$ and a matrix $\mathbf Y_r$ of type $t$ be both fixed. Hence there is an $h$, $r+1\leqslant h \leqslant m$, such that the $h$-th row can be written as a linear combination of the top $r$ rows. As before, without loss of generality we can assume that the vector $\left( \rho_{1}(h) , \ldots , \rho_{r}(h)\right)$ contains exactly $t$ non-zero components, which for each $j = r+1, \ldots, n$ leads to an equation $$\label{eq:lin rel} \rho_{1}(h) f_{1,j}\left(x_{1,j}\right) + \ldots + \rho_{r}(h) f_{r,j}\left(x_{r,j}\right) = f_{h,j}\left(x_{h,j} \right)$$ with exactly $t+1$ non-zero coefficients. After this, all other elements $f_{i,j}\left(x_{i,j}\right)$, $i \in \{r+1, \ldots, m \} \setminus \{h\}$ and $j=r+1,\ldots,n$, are uniquely defined by an analogue of the relation ([\[eq:lin rel\]](#eq:lin rel){reference-type="ref" reference="eq:lin rel"}), since now for every $i \in \{r+1, \ldots, m \} \setminus \{h\}$, the left hand side is fixed. Let ${\mathfrak C}_{t}$, $j = r+1, \ldots, n$, be the largest (taken over all choices of $h\in \{r+1, \ldots, m \}$ and $j\in \{r+1, \ldots, n \}$) number of solutions to ([\[eq:lin rel\]](#eq:lin rel){reference-type="ref" reference="eq:lin rel"}) in variables $\left(x_{1,j}, \ldots , x_{r,j}, x_{h,j}\right) \in [-H, H]^{r+1}$. 
Then we can summarise the above discussion as the bound $$\label{eq: L vs ABC} L_{\mathbf f, r}^*(H) \ll {\mathfrak A}\sum_{t =0}^r {\mathfrak B}_t {\mathfrak C}_t^{n-r}.$$ Under the condition of Theorem [Theorem 1](#thm: rank poly matr fij Z 1){reference-type="ref" reference="thm: rank poly matr fij Z 1"}, to estimate ${\mathfrak C}_t$, we apply: - the trivial bound ${\mathfrak C}_t \ll H^r$ if $t \in\{0,1\}$; - the bound ${\mathfrak C}_t \ll H^{r-1+1/d+o(1)}$ of Lemma [Lemma 26](#lem:Pila){reference-type="ref" reference="lem:Pila"} if $t =2$; - the bound ${\mathfrak C}_t \ll H^{r+1-s_t+o(1)}$, which combines Lemma [Lemma 25](#lem:Small k){reference-type="ref" reference="lem:Small k"} if $3 \leqslant t \leqslant 10$ and Lemma [Lemma 23](#lem:SepVars){reference-type="ref" reference="lem:SepVars"} (used with $s=s_t$ if $t \geqslant 11$, where, as before, $s_t\geqslant 3$ is the largest integer $s\leqslant d$ with $s(s+1)\leqslant t+1$. Hence, combining the above bounds with ([\[eq:contrib Xr\]](#eq:contrib Xr){reference-type="ref" reference="eq:contrib Xr"}) and ([\[eq:contrib Yr-t\]](#eq:contrib Yr-t){reference-type="ref" reference="eq:contrib Yr-t"}) and substituting in ([\[eq: L vs ABC\]](#eq: L vs ABC){reference-type="ref" reference="eq: L vs ABC"}) we obtain the following contributions to $L_{\mathbf f, r}^*(H)$ for $t =0, \ldots, r$. For $t \in\{0,1\}$ the total contribution ${\mathfrak L}_{0,1}$ satisfies $$\label{eq:t in [0,1]} \begin{split} {\mathfrak L}_{0,1} &\ll H^{r^2} \left(\left( H^r\right)^{n-r}+H^{m-r}\left( H^r\right)^{n-r}\right) \\ & \ll H^{r^2} H^{m-r}\left( H^r\right)^{n-r} = H^{m +nr - r} . \end{split}$$ For $t = 2$ the total contribution ${\mathfrak L}_{2}$ satisfies $$\label{eq:t = 2} \begin{split} {\mathfrak L}_{2} &\leqslant H^{r^2+o(1) }H^{2(m-r)} \left( H^{r-1+1/d}\right)^{n-r} \\ & = H^{2m +n(r-1+1/d) - r (1+1/d)+o(1)} . \end{split}$$ Finally, for $t \geqslant 3$, the total contribution ${\mathfrak L}_{\geqslant 3}$ satisfies $$\label{eq:t ge 3} \begin{split} {\mathfrak L}_{\geqslant 3}&\leqslant H^{r^2+o(1) }\sum_{t =3}^r H^{t(m-r)} \left( H^{r+1-s_t}\right)^{n-r} \\ & =\max_{t = 3, \ldots, r} H^{tm +n(r+1-s_t) - r(t +1-s_t) +o(1)} . \end{split}$$ Substituting the bounds ([\[eq:t in \[0,1\]\]](#eq:t in [0,1]){reference-type="ref" reference="eq:t in [0,1]"}), ([\[eq:t = 2\]](#eq:t = 2){reference-type="ref" reference="eq:t = 2"}), and ([\[eq:t ge 3\]](#eq:t ge 3){reference-type="ref" reference="eq:t ge 3"}) in the inequality $$L_{\mathbf f, r}(H) \ll L_{\mathbf f, r}^*(H) \leqslant{\mathfrak L}_{0,1} + {\mathfrak L}_{2} + {\mathfrak L}_{\geqslant 3} ,$$ which implies that $$L_{\mathbf f, r}(H) \leqslant H^{m+nr-r+\widetilde \Delta(d,m,n,r)+o(1)}$$ with $$\begin{aligned} \widetilde \Delta(d,m,n,r) & = \max \bigl\{0, \, m -n + (n-r)/d,\\ & \qquad \qquad \qquad \max_{t =3, \ldots, r}\{ (t-1)m -n(s_t-1) - r(t -s_t)\}\bigr\}.\end{aligned}$$ It remains to show that the term $m -n + (n-r)/d$ in $\widetilde \Delta(d,m,n,r)$ never dominates and thus can be dropped leading to $$\label{eq:Deltas} \widetilde \Delta(d,m,n,r) = \Delta(d,m,n,r)$$ for $d \geqslant 3$ and $r \geqslant 4$. Since $m -n + (n-r)/d \leqslant m -n + (n-r)/3$, it is sufficient to consider the case of $d=3$. For this we notice that we can clearly assume that $m -n + (n-r)/3 \geqslant 0$ as otherwise ([\[eq:Deltas\]](#eq:Deltas){reference-type="ref" reference="eq:Deltas"}) is trivial. 
Thus $$\label{eq:large m} m \geqslant n - (n-r)/3 = (2n+r)/3.$$ It is sufficient to show that the term $3m -5n/4 - 7r/4$, which corresponds to $t = 4$ in $\widetilde \Delta(d,m,n,r)$, satisfies $$\label{eqT2 vs T3} 3m -5n/4 - 7r/4 \geqslant m -n + (n-r)/3,$$ which is equivalent to $$2m \geqslant n/4 + 7r/4 + (n-r)/3 = 7n/12 + 17r/12,$$ which in turn follows from ([\[eq:large m\]](#eq:large m){reference-type="ref" reference="eq:large m"}) since for $n > r$ we have $$2(2n+r)/3 \geqslant 7n/12 + 17r/12.$$ This implies ([\[eqT2 vs T3\]](#eqT2 vs T3){reference-type="ref" reference="eqT2 vs T3"}), which means that ([\[eq:Deltas\]](#eq:Deltas){reference-type="ref" reference="eq:Deltas"}) holds, and this concludes the proof. ## Proof of Corollary [Corollary 6](#cor:det){reference-type="ref" reference="cor:det"} {#proof-of-corollary-cordet} We can assume that $a \ne 0$ as otherwise the result follows from ([\[eq:Sing Matr\]](#eq:Sing Matr){reference-type="ref" reference="eq:Sing Matr"}). We now write $$\det \left(f_{i,j}\left(X_{i,j}\right)\right)_{1\leqslant i, j\leqslant n} = \sum_{h=1}^{n} (-1)^{h-1}f_{1,h}\left(X_{1,h}\right) D_h,$$ where $D_h$ are the determinants of the minors supported on the variables $X_{i,j}$, $2 \leqslant i \leqslant n$, $1\leqslant j \leqslant n$, $j \ne h$. We see from Lemmas [Lemma 23](#lem:SepVars){reference-type="ref" reference="lem:SepVars"} and [Lemma 25](#lem:Small k){reference-type="ref" reference="lem:Small k"} that the solutions with $D_h \ne 0$, $h=1, \ldots, n$, contribute at most $$H^{n(n-1)} H^{n-s_{n-1} + o(1)} = H^{n^2-s_{n-1}+o(1)}.$$ On the other hand, by ([\[eq:Sing Matr\]](#eq:Sing Matr){reference-type="ref" reference="eq:Sing Matr"}), using that $a \ne0$, we see that the contribution from other solutions can be estimated as $$H^{(n-1)^2-s_{n-2}+o(1)} H^{n-1} H^{n-1} = H^{n^2-s_{n-2} - 1+o(1)}.$$ Since $s_{n-1}\leqslant s_{n-2}+1$, the result follows. ## Proof of Theorem [Theorem 7](#thm: rank poly matr fij Fp){reference-type="ref" reference="thm: rank poly matr fij Fp"} {#proof-of-theorem-thm-rank-poly-matr-fij-fp} We mimic the proof of Theorem [Theorem 1](#thm: rank poly matr fij Z 1){reference-type="ref" reference="thm: rank poly matr fij Z 1"}, but we use the bounds of Lemma [Lemma 28](#lem:Energy Fp){reference-type="ref" reference="lem:Energy Fp"} instead of the bounds of Section [4.1](#sec: sol box Z){reference-type="ref" reference="sec: sol box Z"}. In particular, we introduce the same quantities ${\mathfrak A}$, ${\mathfrak B}_t$ and ${\mathfrak C}_t$, but defined for matrices over ${\mathbb F}_p$. Then, under the condition of Theorem [Theorem 7](#thm: rank poly matr fij Fp){reference-type="ref" reference="thm: rank poly matr fij Fp"}, to estimate ${\mathfrak C}_t$, we apply: - the trivial bound ${\mathfrak C}_t \ll H^r$ if $t \in\{0,1\}$; - the bound ${\mathfrak C}_t \ll H^{r-1/2+o(1)}$ of Lemma [Lemma 28](#lem:Energy Fp){reference-type="ref" reference="lem:Energy Fp"} if $t =2$; - the bound ${\mathfrak C}_t \ll H^{r-1+o(1)}$ of Lemma [Lemma 28](#lem:Energy Fp){reference-type="ref" reference="lem:Energy Fp"} if $t \geqslant 3$. Hence, instead of the bounds ([\[eq:t in \[0,1\]\]](#eq:t in [0,1]){reference-type="ref" reference="eq:t in [0,1]"}), ([\[eq:t = 2\]](#eq:t = 2){reference-type="ref" reference="eq:t = 2"}), and ([\[eq:t ge 3\]](#eq:t ge 3){reference-type="ref" reference="eq:t ge 3"}) we now have the following estimates.
For $t \in\{0,1\}$ the total contribution ${\mathfrak L}_{0,1}$ satisfies $$\label{eq:t in [0,1] p} {\mathfrak L}_{0,1} \ll H^{m +nr - r} .$$ (exactly as ([\[eq:t in \[0,1\]\]](#eq:t in [0,1]){reference-type="ref" reference="eq:t in [0,1]"})). For $t = 2$ the total contribution ${\mathfrak L}_{2}$ now satisfies $$\label{eq:t = 2 p} \begin{split} {\mathfrak L}_{2} &\leqslant H^{r^2+o(1)}H^{2(m-r)} \left( H^{r-1/2}\right)^{n-r} \\ & = H^{2m +n(r-1/2) - 3r/2+o(1)} . \end{split}$$ For $t \geqslant 3$, the total contribution ${\mathfrak L}_{\geqslant 3}$ satisfies $$\label{eq:t ge 3 p} \begin{split} {\mathfrak L}_{\geqslant 3}&\leqslant H^{r^2+o(1) } \sum_{t =3}^{r} H^{t(m-r)} \left( H^{r-1}\right)^{n-r} \\ & \leqslant H^{r^2+o(1) } H^{r(m-r)} \left( H^{r-1}\right)^{n-r} \\ & = H^{rm +n(r-1) - r(r-1) +o(1)} . \end{split}$$ Substituting the bounds ([\[eq:t in \[0,1\] p\]](#eq:t in [0,1] p){reference-type="ref" reference="eq:t in [0,1] p"}), ([\[eq:t = 2 p\]](#eq:t = 2 p){reference-type="ref" reference="eq:t = 2 p"}), and ([\[eq:t ge 3 p\]](#eq:t ge 3 p){reference-type="ref" reference="eq:t ge 3 p"}) in the inequality $$L_{\mathbf f, r}(H,p) \ll L_{\mathbf f, r}^*(H,p) \leqslant{\mathfrak L}_{0,1} + {\mathfrak L}_{2} + {\mathfrak L}_{\geqslant 3},$$ where $L_{\mathbf f, r}^*(H,p)$ is defined as in Section [5.1](#sec:T1 2){reference-type="ref" reference="sec:T1 2"}, we conclude the proof. ## Proofs of Theorems [Theorem 8](#thm: sing poly matr fij Fp){reference-type="ref" reference="thm: sing poly matr fij Fp"} and [Theorem 9](#thm: imm poly matr fij Fp){reference-type="ref" reference="thm: imm poly matr fij Fp"} {#proofs-of-theorems-thm-sing-poly-matr-fij-fp-and-thm-imm-poly-matr-fij-fp} Recall that by Lemma [Lemma 15](#lem:detpolyirr){reference-type="ref" reference="lem:detpolyirr"}, the polynomial $$D\left( \left(X_{i,j}\right)_{1\leqslant i,j\leqslant n}\right) =\det\left(f_{i,j}(X_{i,j})\right)_{1\leqslant i,j\leqslant n},$$ is absolutely irreducible over ${\mathbb F}_p$ for sufficiently large prime $p$. Hence by the famous result of Lang and Weil [@LaWe], we have $$\label{eq:LW} T_D(p) = p^{n^2-1} + O(p^{n^2-3/2}),$$ where we recall that $T_D(p)$ is the number of solutions to the congruence $$D\left((x_{i,j})_{1\leqslant i,j\leqslant n}\right)\equiv 0 \pmod p$$ with unrestricted variables from ${\mathbb F}_p$. To apply Lemma [Lemma 29](#lem:box){reference-type="ref" reference="lem:box"}, we need to show that the algebraic variety ${\mathcal V}={\mathcal Z}(D)$, where ${\mathcal Z}(D)$ denotes the set of zeros of $D$ in ${\mathbb C}^{n^2}$, is not contained in any hyperplane. Let us assume that ${\mathcal V}\subseteq {\mathcal H}$, where ${\mathcal H}={\mathcal Z}(L)$ is the hyperplane defined by an affine polynomial $L \in {\mathbb C}[X_{1,1},\ldots,X_{n,n}]$. Since both $D$ (by Lemma [Lemma 15](#lem:detpolyirr){reference-type="ref" reference="lem:detpolyirr"}) and $L$ are absolutely irreducible, the ideals generated by them, $I$ and $J$, respectively, are radical ideals (in fact, they are prime). Therefore, the inclusion ${\mathcal V}\subseteq {\mathcal H}$, implies, via the (strong) Nullstellensatz, the inclusion of ideals $J\subseteq I$, see for example [@CoxLitShe Theorems 6 and 7, Chapter 4]. Thus, we have $L \in I$, which means that $D$ is a divisor of $L$ in ${\mathbb C}[X_{1,1},\ldots,X_{n,n}]$, which is a contradiction. Hence, ${\mathcal V}$ is not contained in any hyperplane. 
Thus, since by Lemma [Lemma 15](#lem:detpolyirr){reference-type="ref" reference="lem:detpolyirr"}, $D$ is also irreducible over ${\mathbb C}$, the result follows by an application of Lemma [Lemma 29](#lem:box){reference-type="ref" reference="lem:box"} and ([\[eq:LW\]](#eq:LW){reference-type="ref" reference="eq:LW"}), after one verifies that $$\frac{p^{n^2-3/2}}{p^{n^2}} H^{n^2} = H^{n^2} p^{-3/2} \leqslant H^{n^2-2}p^{1/2}$$ for $H \leqslant p$, which give Theorem [Theorem 8](#thm: sing poly matr fij Fp){reference-type="ref" reference="thm: sing poly matr fij Fp"}. The proof of Theorem [Theorem 9](#thm: imm poly matr fij Fp){reference-type="ref" reference="thm: imm poly matr fij Fp"} is fully analogous, except using polynomials $f_{i,j}(X_{i,j})\in {\mathbb Z}[X_{i,j}]$, $i,j=1,\ldots,n$ of the same degree, we are now able to use Lemma [Lemma 19](#lem:lcminorsirr){reference-type="ref" reference="lem:lcminorsirr"} instead of Lemma [Lemma 15](#lem:detpolyirr){reference-type="ref" reference="lem:detpolyirr"}. Indeed, we also note that the same argument as above, applied with the nonlinear polynomial $$\sum_{\sigma \in {\mathcal S}_n} \chi(\sigma) \prod_{i=1}^n f_{i,\sigma(i)}(X_{i,\sigma(i)}),$$ which is irreducible by Lemma [Lemma 19](#lem:lcminorsirr){reference-type="ref" reference="lem:lcminorsirr"}, shows that its zero set in ${\mathbb C}^{n^2}$ is not contained in any hyperplane. # Further improvements and generalisations {#sec:improve} First we recall that Remark [Remark 24](#rem:Better I_k){reference-type="ref" reference="rem:Better I_k"} outlines a possibility for deriving slightly stronger bounds. Next, we observe that for polynomials of large degree, one can also use the bound $$I_{6}(f,H) \leqslant H^{3+ o(1)}\left(H^{1/3} + H^{2/\sqrt{d} + 1/(d-1)}\right)$$ of Browning [@Brow Theorem 2], see also [@BrHB0], which is stronger than the bound on $I_{6}(f,H)$ in Lemma [Lemma 25](#lem:Small k){reference-type="ref" reference="lem:Small k"} for $d \geqslant 20$. Then, using the Hölder inequality one can also estimate $I_{5}(f,H) \leqslant I_{6}(f,H)^{5/6}$. We also note that in the case when the polynomials $f_{i,j}$ are monomials $f_{i,j}(X_{i,j}) = X_{i,j}^d$, $1\leqslant i\leqslant m$, $1 \leqslant j\leqslant n$, of the same degree $d\geqslant 3$ then using [@Wool Corollary 14.7] one can for some parameters obtain a stronger version of Lemma [Lemma 20](#lem:Wool){reference-type="ref" reference="lem:Wool"} and thus of Theorem [Theorem 1](#thm: rank poly matr fij Z 1){reference-type="ref" reference="thm: rank poly matr fij Z 1"}. This corresponds to the scenario of [@BlLi]. Lemma [Lemma 19](#lem:lcminorsirr){reference-type="ref" reference="lem:lcminorsirr"} also allows us to get an asymptotic formula of the type of Theorem [Theorem 8](#thm: sing poly matr fij Fp){reference-type="ref" reference="thm: sing poly matr fij Fp"} for the number of singular matrices of the form $$\begin{pmatrix} f_{1,1}\left(x_{1,1}\right) & \ldots & f_{1,n}\left(x_{1,n}\right) \\ \vdots & \ldots& \vdots\\ f_{n-1,1}\left(x_{n-1,1}\right) & \ldots & f_{n-1,n}\left(x_{n-1,n}\right) \\ a_1 & \ldots & a_n \end{pmatrix},$$ with integers $x_{i,j} \in [-H,H]$, $1\leqslant i\leqslant n$, $1 \leqslant j\leqslant n-1$, for a fixed vector $\mathbf a= (a_1, \ldots, a_n) \in {\mathbb F}_p^n$. This has an interpretation as the number of polynomial vectors containing $\mathbf a$ in their span. 
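As an illustration of the count just described, the following small Python brute force (a toy sketch only; the prime $p=7$, the row $\mathbf a=(1,2,3)$ and the polynomials $f_{i,j}(X)=X^3+i+j$ are hypothetical choices, not taken from the paper) tallies the bordered matrices above that are singular modulo $p$ and compares the result with the heuristic proportion $1/p$ of all $(2H+1)^{n(n-1)}$ matrices.

```python
# Toy brute force: singular bordered matrices with polynomial entries mod p.
from itertools import product

p, n, H = 7, 3, 3
a = (1, 2, 3)                        # fixed last row (arbitrary toy choice)

def f(i, j, x):
    return (x ** 3 + i + j) % p      # hypothetical non-constant polynomials

def det(M):
    # cofactor expansion along the first row; adequate for small n
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

count = 0
for xs in product(range(-H, H + 1), repeat=n * (n - 1)):
    M = [[f(i + 1, j + 1, xs[i * n + j]) for j in range(n)] for i in range(n - 1)]
    M.append(list(a))
    if det(M) % p == 0:
        count += 1

print(count, (2 * H + 1) ** (n * (n - 1)) / p)   # observed vs. heuristic main term
```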
Furthermore, using several recent results coming from additive combinatorics, such as of Bradshaw, Hanson and Rudnev [@BrHaRu Theorem 5] and of Mudgal [@Mud Theorem 1.6], one can obtain analogues of our results for matrices with elements of arbitrary but sufficiently quickly growing sequences. In the case of matrices defined over ${\mathbb F}_p$, we note that if the polynomials $\mathbf f$ in Theorem [Theorem 7](#thm: rank poly matr fij Fp){reference-type="ref" reference="thm: rank poly matr fij Fp"} are fixed of degree at most $e$ and defined over ${\mathbb Z}$ then for $H \leqslant c(\mathbf f) p^{1/e}$ with some constant $c(\mathbf f)>0$ depending only on $\mathbf f$, one can get results of the same strength as in Theorem [Theorem 1](#thm: rank poly matr fij Z 1){reference-type="ref" reference="thm: rank poly matr fij Z 1"}. Indeed, in this case the congruence $$\sum_{i=1}^k f(x_i) \equiv \sum_{i=1}^k f(y_i) \pmod p, \qquad -H \leqslant x_i, y_i \leqslant H, \ i =1, \ldots, k,$$ whose number of solutions is given by $J_{2k}(f,H,p)$, becomes an equation. Therefore, we now have $J_{2k}(f,H,p) = I_{2k}(f,H)$ and hence the bounds of Section [4.1](#sec: sol box Z){reference-type="ref" reference="sec: sol box Z"} apply. We also note that by [@Go-PSh Lemma 6] one can multiply all coefficients of $$f(X) = a_e X^e + \ldots + a_1X + a_0 \in {\mathbb Z}[X]$$ by some integer $\lambda \not \equiv 0 \pmod p$ such that their smallest by absolute value residues $b_j \equiv \lambda a_j \pmod p$ satisfy $$b_j \ll p^{1-2j/(e(e+1))}, \qquad j=0, \ldots, e.$$ Hence for $H \leqslant c(e) p^{2/(e(e+1))}$ with some constant $c(e)$ depending only on $e$, we have $J_{2k}(f,H,p) = I_{2k}(g,H)$, where $$g(X) = b_e X^e + \ldots + b_1X + b_0 \in {\mathbb Z}[X].$$ It remains to recall that many results obtained via the determinant method, such as [@Pila], are uniform with respect to the polynomials involved. Finally, we mention that in the same fashion as Theorems [Theorem 1](#thm: rank poly matr fij Z 1){reference-type="ref" reference="thm: rank poly matr fij Z 1"} and [Theorem 7](#thm: rank poly matr fij Fp){reference-type="ref" reference="thm: rank poly matr fij Fp"}, one may obtain similar results allowing the polynomials $f_{i,j}(X_{i,j})$ in the matrix $\mathbf f$ to take values from sets ${\mathcal A}\subseteq {\mathbb Z}$, or ${\mathbb F}_p$, of cardinality $A$, with small sum set ${\mathcal A}+{\mathcal A}= \{a + b:~a,b\in {\mathcal A}\}$. That is, if $\#({\mathcal A}+{\mathcal A}) \leqslant KA$ with $K= A^{1-\varepsilon}$, for some $\varepsilon>0$. For instance, given a quadratic polynomial $f\in {\mathbb F}_p[X]$, for sets ${\mathcal A}\subset {\mathbb F}_p$, with $A\leqslant p^{2/3}$ and $\#({\mathcal A}+{\mathcal A})\ll A$, it follows from [@ShkShp Lemma 2.10] that the number of solutions to the equation $$\begin{aligned} f(u) + f(v) +f(w) & = f(x) + f(y) + f(z), \\(u,v,w,x&,y,z) \in {\mathcal A}^6,\end{aligned}$$ is $O\left(A^{5-1/2}\right)$. See also [@ShkShp Corollaries 2.10 and 2.11]. We further point out that the relevant bounds of [@ShkShp] also hold for subsets of ${\mathbb Z}$, due to the generality of their underlying result from [@Rud]. Perhaps the approach of [@KMS] can also be adapted to equations of the above type with 6 or more variables. We have already mentioned possible generalisations of our results from matrices with polynomial entries to matrices with elements coming from sequences with some additive properties. 
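Bounds of this additive flavour are also easy to explore numerically: for a set with small doubling, for instance an arithmetic progression in ${\mathbb Z}$ (recall that the relevant bounds also hold over ${\mathbb Z}$), one can count the solutions of the above equation directly. The quadratic $f$ and the set ${\mathcal A}$ in the following sketch are arbitrary illustrative choices rather than part of the cited results.

```python
from collections import Counter

def sixth_moment(f, A):
    # Number of solutions to f(u)+f(v)+f(w) = f(x)+f(y)+f(z) with all six
    # variables in A, computed from the distribution of threefold sums.
    vals = [f(a) for a in A]
    sums3 = Counter(x + y + z for x in vals for y in vals for z in vals)
    return sum(m * m for m in sums3.values())

if __name__ == "__main__":
    f = lambda t: t * t + t        # an arbitrary quadratic polynomial over Z
    A = list(range(1, 41))         # an arithmetic progression, so #(A+A) < 2#A
    E = sixth_moment(f, A)
    # The bound discussed above is of the shape O(A^{5-1/2}) = O(A^{4.5}).
    print(E, len(A) ** 4.5)
```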
Another possible generalisation is for sequences with some multiplicative properties. First we recall that Alon and Solymosi [@AlSol Theorem 1] have shown that matrices with entries from finitely generated subgroups of ${\mathbb C}^*$ have a rank growing with their dimension $n$. Using bounds on the number of solutions to so-called *$S$-unit equations*, see, for example, [@AmVia Theorem 6.2] and our argument, one can obtain various counting versions of [@AlSol Theorem 1]. # Acknowledgements {#acknowledgements .unnumbered} The authors are very grateful to Valentin Blomer and Junxian Li for useful comments on the preliminary versions of this paper and to Akshat Mudgal for several important suggestions and references, which helped to improve some of the results. During the preparation of this work, the authors were supported in part by the Australian Research Council Grants DP200100355 and DP230100530. www S. S. Abhyankar, 'Combinatoire des tableaux de Young, variétés déterminantielles et calcul de fonctions de Hilbert', *Rend. Sem. Mat. Univ. Politec. Torino*, **42** (1984), 65--88. O. Ahmadi and I. E. Shparlinski, 'Distribution of matrices with restricted entries over finite fields', *Indag. Mathem.* **18** (2000), 327--337. N. Alon and J. Solymosi, 'Rank of matrices with entries from a multiplicative group', *Intern. Math. Res. Notices*, **2023** (2003), 12383--12399. F. Amoroso and E. Viada, 'Small points on subvarieties of a torus', *Duke Math. J.* **150** (2009), 407--442. V. Blomer and J. Li, 'Correlations of values of random diagonal forms' *Intern. Math. Res. Notices* (to appear). P. J. Bradshaw, B. Hanson and M. Rudnev, 'Higher convexity and iterated second moment estimates', *Electron. J. Comb.*, **29** (2022), P.3.6. J. Bourgain, 'On the Vinogradov mean value', *Proc. Steklov Math. Inst.*, **296** (2017), 30--40. T. D. Browning, 'Equal sums of like polynomials', *Bull. London Math. Soc.*, **37** (2005), 801--808. T. D. Browning and R. Heath-Brown, 'Equal sums of powers', *Invent. Math.*, **157** (2004), 553--573. T. D. Browning and R. Heath-Brown, 'The density of rational points on non-singular hypersurfaces, I', *Bull. London Math. Soc.*, **38** (2006), 401--410. T. D. Browning and R. Heath-Brown, 'The density of rational points on non-singular hypersurfaces, II', *Proc. London Math. Soc.*, **93** (2006), 273--303. W. Bruns and U. Vetter, *Determinantal rings*, Lecture Notes in Math, Vol. 1327. Springer-Verlag, Berlin Heidelberg 1988. M.-C. Chang, 'Sparsity of the intersection of polynomial images of an interval', *Acta Arith.*, **165** (2014), 243--249. D. A. Cox, J. Little and D. O'Shea, *Ideals, varieties, and algorithms: An introduction to computational algebraic geometry and commutative algebra*, Springer International Publishing Switzerland 2015. D. El-Baz, M. Lee and A. Strömbergsson, 'Effective equidistribution of primitive rational points on expanding horospheres', *Preprint*, 2022 (available from <http://arxiv.org/abs/2212.07408>). E. Fouvry, 'Consequences of a result of N. Katz and G. Laumon concerning trigonometric sums', *Isr. J. Math..*, **120** (2000), 81--96. D. Gómez-Pérez and I. E. Shparlinski, 'Subgroups generated by rational functions in finite fields', *Monat. Math.*, **176** (2015), 241--253. L.-K. Hua, 'On Waring's problem', *Q. J. Math. Oxford.*, **9** (1938), 199--202. Y. R. Katznelson, 'Integral matrices of fixed rank', *Proc. Amer. Math. Soc.*, **120** (1994), 667--675. B. Kerr, A. Mohammadi and I. E. 
Shparlinski, 'Additive energy of polynomial images', *Preprint*, 2023, (available from <https://arxiv.org/abs/2306.10677>). S. Lang and A. Weil, 'Number of points of varieties in finite fields', *Amer. J. Math.*, **76** (1954), 819--827. W.-C. W. Li, *Number theory with applications*, World Scientific, Singapore, 1996. R. Lidl and H. Niederreiter, *Finite fields*, Cambridge Univ. Press, Cambridge, 1997. A. Mudgal, 'Energy estimates in sum-product and convexity problems', *Preprint*, 2021 (available from <https://arxiv.org/abs/2109.04932>). J. Pila, 'Density of integral and rational points on varieties', *Astérique*, **228** (1995), 183--187. M. Rudnev, 'On the number of incidences between points and planes in three dimensions', *Combinatorica*,**38** (2018), 219--238. P. Salberger, 'Counting rational points on projective varieties', *Proc. London Math. Soc.*, **126** (2023), 1092--1133. A. Schinzel, *Polynomials with special regard to reducibility*, Encycl. Math. Appl., vol. 7, Cambridge Univ. Press, Cambridge, 2000. W. M. Schmidt, *Equations over finite fields: An elementary approach*, Springer-Verlag, Berlin-Heidelberg-New York, 1976. I. Shkredov, 'Some new results on higher energies', *Trudy Mosk. Mat. Obs.*, **74** (2013), 35--73, (in Russian). I. D. Shkredov and I. E. Shparlinski, 'Double character sums with intervals and arbitrary sets', *Proc. Steklov Inst. Math.*, **303** (2018), 239--258. H. Tverberg, 'A remark on Ehrenfeucht's criterion for irreducibility of polynomials', *Prace Matematyczne Warszawa*, **8** (1964), 117--118. T. D. Wooley, 'Nested efficient congruencing and relatives of Vinogradov's mean value theorem', *Proc. London Math. Soc.* **118** (2019), 942--1016.
--- abstract: | Semi-standard graded rings are a generalized notion of standard graded rings. In this paper, we compare generalized notions of the Gorenstein property in semi-standard graded rings. We discuss the commonalities between standard graded rings and semi-standard graded rings, as well as elucidating distinctive phenomena present in semi-standard graded rings that are absent in standard graded rings. address: Department of Pure And Applied Mathematics, Graduate School Of Information Science And Technology, Osaka University, Suita, Osaka 565-0871, Japan author: - Sora Miyashita title: Comparing generalized Gorenstein properties in semi-standard graded rings --- # Introduction With the development of non-Gorenstein Cohen-Macaulay analysis, various generalized properties of Gorenstein rings have been defined. Notable examples include nearly Gorenstein, almost Gorenstein, and level property. In particular, comparisons of these properties have been done in [@herzog2019trace; @higashitani2022levelness; @miyashita2022levelness; @moscariello2021nearly]. Nearly Gorenstein and almost Gorenstein rings have been studied in various classes such as standard graded rings [@higashitani2016almost; @miyashita2022levelness], numerical semigroup rings [@kumashiro2023nearly; @moscariello2021nearly], and Ehrhart rings [@hall2023nearly; @miyazaki2022gorenstein]. In particular, Higashitani [@higashitani2016almost] developed the theory of almost Gorenstein standard graded domains in terms of their $h$-vector. It is known that the almost Gorenstein property in standard graded domains can be determined by using their $h$-vector and Cohen-Macaulay type (see [@higashitani2016almost Corollary 2.7]). Moreover, the author proved that in standard graded affine semigroup rings, the last term of its $h$-vector of non-Gorenstein nearly Gorenstein rings is always at least 2 (see [@miyashita2022levelness Theorem 4.4]). This is a useful tool when we compare nearly Gorenstein property with other properties in standard graded affine semigroup rings. Furthermore, the level ring, a prominent generalization of Gorenstein rings, is a notion defined for graded rings. Its behavior of standard graded rings has been extensively studied (see [@geramita2007hilbert; @hibi1988level; @yanagawa1995castelnuovo]). On the other hand, for the class of rings generalized from standard graded rings, known as semi-standard graded rings, various studies have been conducted regarding level property, especially in relation to their $h$-vector (see [@haase2020levelness; @higashitani2018non; @stanley1991hilbert]). Let us recall the definitions of standard graded and semi-standard graded here. **Definition 1**. Let $R=\bigoplus_{i \in \mathbb{N}} R_i$ be a graded noetherian $\Bbbk$-algebra over a field $R_0=\Bbbk$. If $R =\Bbbk[R_1]$, that is, $R$ is generated by $R_1$ as a $\Bbbk$-algebra, then we say $R$ is *standard graded*. If $R$ is finitely generated as a $\Bbbk[R_1]$-module, then we say $R$ is *semi-standard graded*. The *Ehrhart rings* of lattice polytopes and the face rings of simplicial posets (see [@stanley1991f]) are typical classes of semi-standard graded rings. From perspective of combinatorial commutative algebra, the concept of semi-standard graded rings naturally arises in this context. 
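A toy example may help to fix ideas. Consider the affine semigroup ring $R=\Bbbk[S]$ with $S=\langle (0,1),\,(2,1),\,(1,2)\rangle \subseteq \mathbb{N}^2$, graded by the second coordinate. Then $\Bbbk[R_1]$ is the polynomial ring generated by $\mathbf{x}^{(0,1)}$ and $\mathbf{x}^{(2,1)}$, and $\mathbf{x}^{(1,2)}\notin \Bbbk[R_1]$, while $$\left(\mathbf{x}^{(1,2)}\right)^2=\mathbf{x}^{(2,1)}\left(\mathbf{x}^{(0,1)}\right)^3 \in \Bbbk[R_1], \qquad\text{hence}\qquad R = \Bbbk[R_1] \oplus \Bbbk[R_1]\,\mathbf{x}^{(1,2)}$$ as graded $\Bbbk[R_1]$-modules (the two summands are separated by the parity of the first coordinate). Thus $R$ is Cohen--Macaulay (being free over the polynomial ring $\Bbbk[R_1]$) and semi-standard graded but not standard graded; its Hilbert series is $(1+t^2)/(1-t)^2$, so in the terminology recalled below its $h$-vector is $(1,0,1)$, a pattern that cannot occur for Cohen--Macaulay standard graded rings.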
If $R$ is a semi-standard graded ring of dimension $d$, its Hilbert series is of the form $$\sum_{i \in \mathbb{N}} (\dim_\Bbbk R_i) t^i =\frac{h_0+h_1 t + \cdots +h_st^s}{(1-t)^d}$$ for some integers $h_0, h_1, \cdots, h_s$ with $\sum_{i=0}^s h_i \ne 0$ and $h_s \ne 0$. We call the integer sequence $(h_0,h_1,\cdots,h_s)$ the *$h$-vector* of $R$ and denote it as $h(R)$. Moreover, we call $s$ the *socle degree* of $R$ and denote it as $s(R)$. We always have $h_0=1$. If a semi-standard graded ring $R$ is Cohen--Macaulay, its $h$-vector satisfies $h_i \geqq 0$ for every $i$. If further $R$ is standard graded, we have $h_i > 0$ for every $i$. For further information on the $h$-vectors of Cohen--Macaulay semi-standard (resp. standard) graded rings, see [@stanley1991hilbert] (resp. [@stanley2007combinatorics]). Goto, Takahashi, and Taniguchi [@goto2015almost] compare almost Gorenstein and level properties in standard graded rings, while the author [@miyashita2022levelness] compares nearly Gorenstein and level properties in standard graded rings. These studies have gradually revealed the compatibility of these properties in standard graded rings. However, there are still many unknown things in the context of semi-standard graded rings. Therefore, the main theme of this paper is to compare nearly Gorenstein, almost Gorenstein, and level properties in semi-standard graded rings. In this paper, we apply the techniques developed for standard graded rings, as seen in [@higashitani2016almost; @miyashita2022levelness], to the case of semi-standard graded rings. We establish the theory for semi-standard graded rings and extend several well-known results about standard graded rings to the case of semi-standard graded rings. For instance, we extend the statement regarding the last term of its $h$-vector of standard graded rings, as introduced earlier, to the semi-standard case (Corollary [Corollary 16](#NGhvector){reference-type="ref" reference="NGhvector"}). We extend this further and demonstrate the following statement regarding trace ideals of semi-standard graded rings. This is applicable even in the non-Cohen-Macaulay case. ****Theorem [Theorem 15](#TraceIdealAffine){reference-type="ref" reference="TraceIdealAffine"}** 1**. *Let $R=\Bbbk[S]$ be a semi-standard graded affine semigroup ring. Let $I$ be a non-principle ideal of $R$ and let $b=\min \{i: I_i\neq 0\}$. If $\textrm{depth}R \geqq 2$ and $$(\mathbf{x}^\mathbf{e}: \mathbf{e}\in E_S)R \subseteq \textrm{tr}(I),$$ then we have $\dim_\Bbbk I_b \geqq 2$.* ****Corollary [Corollary 16](#NGhvector){reference-type="ref" reference="NGhvector"}** 1**. *Let $R=\Bbbk[S]$ be a semi-standard graded Cohen-Macaulay affine semigroup ring. If $R$ is not Gorenstein and $(\mathbf{x}^\mathbf{e}: \mathbf{e}\in E_S)R \subseteq \textrm{tr}(\omega_{R})$, then $h_{s(R)} \geqq 2$. In particular, if $R$ is non-Gorenstein nearly Gorenstein, then $h_{s(R)} \geqq 2$.* The following theorem regarding the necessary and sufficient condition to be level and almost Gorenstein was also known in the case of standard graded rings, as shown in [@goto2015almost]. We give a new proof by using Stanley's inequalities (see Corollary [Corollary 27](#prop1){reference-type="ref" reference="prop1"}), and extend their results to the case of semi-standard graded rings. ****Theorem [Theorem 31](#AGandLevel){reference-type="ref" reference="AGandLevel"}** 1**. *Let $R$ be a Cohen--Macaulay semi-standard graded ring with $\dim R>0$. Suppose that $R$ is not Gorenstein. 
Then the following conditions are equivalent:* - *$R$ is almost Gorenstein and level;* - *$R$ is generically Gorenstein and $s(R)=1$.* At the same time, we investigate specific properties that do not hold in standard graded rings but hold for semi-standard graded rings. For instance, there are intriguing differences if we consider nearly Gorenstein affine semigroup rings with projective dimension 2. In the case of standard graded affine semigroup rings, there exist non-Gorenstein yet nearly Gorenstein rings with projective dimension 2, and their characterization is provided for the case of projective monomial curves (see [@miyashita2023nearly Theorem A]). However, in the case of non-standard semi-standard graded rings, it turns out that there is no non-Gorenstein nearly Gorenstein ring with projective dimension 2. ****Theorem [Theorem 21](#pd2NGsemi){reference-type="ref" reference="pd2NGsemi"}** 1**. *Let $R$ be a non-standard semi-standard graded Cohen-Macaulay affine semigroup ring with projective dimension 2. Then the following conditions are equivalent:* - *$R$ is nearly Gorenstein;* - *$R$ is Gorenstein.* Moreover, in standard graded affine semigroup rings, it is known that there is no instance that is non-level almost Gorenstein and nearly Gorenstein(see Theorem [Theorem 31](#AGandLevel){reference-type="ref" reference="AGandLevel"} and Theorem [Theorem 36](#AGandNG!){reference-type="ref" reference="AGandNG!"}). However, in the case of semi-standard graded affine semigroup rings, such a special family is known to exist when the socle degree is 2. For the case of dimension 2, this family can be characterized as follows. The proof of this assertion relies significantly on the proof presented in [@higashitani2018non Theorem 3.5]. ****Theorem [Theorem 38](#nonlevelAGwithSocleDeg2){reference-type="ref" reference="nonlevelAGwithSocleDeg2"}** 1**. *Let $R=\Bbbk[S]$ be a Cohen--Macaulay semi-standard graded affine semigroup ring with $\dim R=s(R)=2$. Then the following conditions are equivalent:* - *$R$ is non-level and almost Gorenstein;* - *$S \cong \langle \{(2i,2n-2i): 0 \leqq i \leqq n \} \bigcup \{ (2j+2k-1,4n-2j-2k+1): 0 \leqq j \leqq n-1 \} \rangle$ for some $n \geqq 2$ and $1 \leqq k \leqq n+1$.* *Moreover, if this is the case, then $R$ is always nearly Gorenstein and $h(R)=(1,n-1,n)$.* Furthermore, it is known that every $1$-dimensional almost Gorenstein ring is nearly Gorenstein ring (see [@herzog2019trace Proposition 6.1]). If we consider non-standard semi-standard graded affine semigroup rings with socle degree 2, we can establish that a 2-dimensional version of this statement holds true. ****Theorem [Theorem 37](#AGisNGw){reference-type="ref" reference="AGisNGw"}** 1**. *Let $R$ be a non-standard semi-standard graded Cohen-Macaulay affine semigroup ring with $\dim R=s(R)=2$. If $R$ is almost Gorenstein, then it is nearly Gorenstein.* The structure of this paper is as follows. In Section 2, we prepare some definitions and facts for the discussions later. In Section 3, we first organize the general theory concerning nearly Gorenstein semi-standard graded affine semigroup rings based on [@miyashita2022levelness]. We extend the results for nearly Gorenstein standard graded affine semigroup rings from [@miyashita2022levelness] to the case of semi-standard graded. Moreover, we prove a statement concerning the trace ideal of (not necessarily Cohen-Macaulay) semi-standard graded affine semigroup rings with depth greater than or equal to 2. 
Additionally, we discuss special properties of nearly Gorenstein semi-standard graded rings with small projective dimensions and provide a special family of nearly Gorenstein with socle degree 3. In Section 4, we establish the general theory for almost Gorenstein semi-standard graded rings based on [@higashitani2016almost]. We demonstrate that the theory regarding $h$-vector can be directly applied to the case of semi-standard graded rings. In Section 5, we compare between almost Gorenstein rings and level rings. Utilizing the framework organized in Section 4, and considering cases with small socle degrees, we explore relationships between almost Gorenstein and level properties by using Stanley's inequalities. In Section 6, we compare almost Gorenstein rings with nearly Gorenstein rings. For semi-standard graded affine semigroup rings with socle degree and dimension both equal to 2, we completely determine the structure of non-level and almost Gorenstein rings. Furthermore, we prove that this structure coincides with the special family of nearly Gorenstein introduced in Section 3. ## Acknowledgement {#acknowledgement .unnumbered} I am grateful to professor Akihiro Higashitani for his very helpful comments and instructive discussions. In particular, I could discover Theorem [Theorem 15](#TraceIdealAffine){reference-type="ref" reference="TraceIdealAffine"} for him. # Preliminaries Let $\Bbbk$ be a field, and let $R$ be an $\mathbb{N}$-graded $\Bbbk$-algebra with a unique graded maximal ideal $\mathbf{m}$. Apart from Section 2, we always assume that $R$ is Cohen-Macaulay and admits a canonical module $\omega_R$. - Let $\omega_R$ denote a canonical module of $R$. Let $a(R)$ denote the $a$-invariant of $R$, i.e., $a(R)=-\min\{n:(\omega_R)_n \neq 0\}$. - Let $r(R)$ be the Cohen-Macaulay type of $R$, and let $\textrm{pd}(R)$ be the projective dimension of $R$. - For a graded $R$-module $M$ and $N$, we use the following notation: - Let $\mu(M)$ denote the number of minimal generators of $M$ as an $R$-module. - Let $e(M)$ denote the multiplicity of $M$. Then the inequality $\mu(M) \leq e(M)$ always holds. - Fix an integer $k$. Let $M(-k)$ denote the $R$-module whose grading is given by $M(-k)_n=M_{n-k}$ for any $n \in \mathbb{Z}$. Moreover, if $k>0$, we write $M^{\oplus k} = M \oplus M \oplus \cdots \oplus M$ ($k$ times). - Let $\textrm{tr}_R(M)$ be the sum of the ideals $\phi(M)$ with $\phi \in \textrm{Hom}^*_R(M,R)$. Thus, $$\textrm{tr}_R(M)=\sum_{\phi \in \textrm{Hom}^*_R(M,R)}\phi(M)$$ where $\textrm{Hom}^*_R(M,R)=\{ \phi \in \textrm{Hom}_R(M,R): \phi \text{\;is\;a\;graded\;homomorphism\;} \}.$ When there is no risk of confusion about the ring we simply write $\textrm{tr}(M)$. If $M$ and $N$ are isomorphic as graded $R$-module, then $\textrm{tr}(M)=\textrm{tr}(N)$. Let us recall the definitions and facts of the nearly Gorenstein property and level property of graded rings. **Definition 2** (see [@stanley2007combinatorics Chapter III, Proposition 3.2]). We say that $R$ is$\textit{level}$ if all the degrees of the minimal generators of $\omega_R$ are the same. **Definition 3** (see [@herzog2019trace Definition 2.2]). We say that $R$ is $\textit{nearly Gorenstein}$ if $\textrm{tr}(\omega_R) \supseteq \mathbf{m}$. In particular, $R$ is nearly Gorenstein but not Gorenstein if and only if $\textrm{tr}(\omega_R) = \mathbf{m}$. Let $R$ be a ring and $I$ an ideal of $R$ containing a non-zero divisor of $R$. 
Let $Q(R)$ be the total quotient ring of fractions of $R$ and set $\displaystyle I^{-1}:=\{x \in Q(R):xI\subset R\}.$ Then $$\numberwithin{equation}{section}\label{traceformula} \displaystyle \textrm{tr}(I)=II^{-1}$$ (see [@herzog2019trace Lemma 1.1]). If $R$ is an $\mathbb{N}$-graded ring, then $\omega_R$ is isomorphic to an ideal $I_R$ of $R$ as an $\mathbb{N}$-graded module up to degree shift if and only if $R_{\mathfrak{p}}$ is Gorenstein for every minimal prime ideal $\mathfrak{p}$ (for example, if $R$ is a domain). We call $I_R$ the canonical ideal of $R$. **Remark 4** (see [@stanley2007combinatorics Chapter I, Section 12]). Fix an integer $n$ with $n \geqq 2$. If $R$ is an $\mathbb{N}^n$-graded domain, then $\omega_R$ is isomorphic to an ideal of $R$ as an $\mathbb{N}^n$-graded module up to degree shift. We recall some definitions about affine semigroups. An *affine semigroup* $S$ is a finitely generated sub-semigroup of $\mathbb{Z}^d.$ For $X \subseteq S$, we denote by $\langle X \rangle$ the smallest sub-semigroup of $S$ containing $X$. We denote the group generated by $S$ by $\mathbb{Z}S$, the convex cone generated by $S$ by $\mathbb{R}_{\geqq 0} S\subseteq \mathbb{R}^d$ and the normalization by $\overline{S}=\mathbb{Z}{S} \cap \mathbb{R}_{\geqq 0} S$. The affine semigroup $S$ is pointed if $S\cap(-S)=\{0\}$. We can check easily that every semi-standard graded affine semigroup is pointed. It is known that $S$ is pointed if and only if the associated cone $C=\mathbb{R}_{\geqq0} S$ is pointed (see [@miller2005combinatorial Lemma 7.12]). Moreover, every pointed affine semigroup $S$ has a unique finite minimal generating set (see [@miller2005combinatorial Proposition 7.15]). Thus $C=\mathbb{R}_{\geqq0} S$ is a finitely generated cone. A face $F \subseteq S$ of $S$ is a subset such that for every $a,b \in S$ the following holds: $$a+b \in F \Leftrightarrow a \in F \text{\;and\;} b \in F.$$ The dimension of the face $F$ equals the rank of $\mathbb{Z}F$. The $1$-dimensional faces of a pointed semigroup $S$ are called its extremal rays. We prepare the following basic lemma for proving Theorem 4.4. We denote $(\mathbf{a},\mathbf{b})$ as inner product of $\mathbf{a}, \mathbf{b}\in \mathbb{R}^d$. **Lemma 5** (see [@miyashita2022levelness Lemma 2.6]). *Let $d \geqq 2$ and let $S$ be a $d$-dimensional pointed affine semigroup, and let $C=\mathbb{R}_{\geqq 0} S$. Let $E$ be the set of extremal rays of $C$. If $\mathbf{x}\notin C$, then there exists $l \in E$ such that $(\mathbf{x}+ l) \cap C = \emptyset$.* **Theorem 6** (see [@katthan2015non Theorem 3.1]). *Let $S$ be a pointed affine semigroup. There exists a (not necessarily disjoint) decomposition $$\label{hole}\overline{S} \setminus S = \bigcup_{i=1}^l(s_i+\mathbb{Z}F_i) \cap \mathbb{R}_{\geqq 0}S$$ with $s_i \in \overline{S}$ and faces $F_i$ of $S$.* A set $s_i+\mathbb{Z}F_i$ from ([\[hole\]](#hole){reference-type="ref" reference="hole"}) is called a $j$-dimensional family of holes, where $j$ is the dimension of $F_i$. **Theorem 7** (see [@katthan2015non Theorem 5.2]). *Let $S$ be a pointed affine semigroup of dimension $d \geqq 2$. Then $\textrm{depth}\Bbbk[S] \geqq 2$ if and only if every family of holes has dimension at least $1$.* **Theorem 8** (see [@herzog2019trace Corollary 3.2 and Corollary 3.5]). 
*Let $S=\Bbbk[x_1,\cdots,x_n]$ be a polynomial ring, let $\mathbf{n}=(x_1,\cdots,x_n)$ be the graded maximal ideal of $S$ and let $$\mathbb{F}:0 \rightarrow F_p \xrightarrow{\phi_p} F_{p-1} \rightarrow \cdots \rightarrow F_1 \rightarrow F_0 \rightarrow R \rightarrow 0$$ be a graded minimal free $S$-resolution of the Cohen-Macaulay ring $R=S/J$ with $J \subseteq \mathbf{n}^2$. Let $I_1(\phi_p)$ be the ideal of $R$ generated by all components of a representation matrix of $\phi_p$. Then the following hold.* **$(a)$* Let $e_1,\cdots,e_t$ be a basis of $F_p$. Suppose that for $i=1,\cdots,s$ the elements $\sum_{j=1}^t r_{ij}e_j$ generate the kernel of $$\psi_p : F_p \otimes R \longrightarrow F_{p-1} \otimes R,$$ where $$\psi_p=\phi_p \otimes R.$$ Then $\textrm{tr}(\omega_R)$ is generated by the elements $r_{ij}$ with $i=1,\cdots,s$ and $j=1,\cdots,t$.* **$(b)$* If $r(R)=2$ and $R$ is a domain, then $\textrm{tr}(\omega_R)=I_1(\phi_p)$.* We state the necessary results about the minimal free resolution of the codimension 2 lattice ideal based on [@peeva1998syzygies]. **Definition 9**. Let $S=\Bbbk[x_1,\cdots,x_n]$ be a polynomial ring and let $L$ be any sublattice of $\mathbb{Z}^n$. We put $\mathbf{x}^{\mathbf{a}} := {x_1}^{a_1}{x_2}^{a_2}\cdots {x_n}^{a_n}$ where $\mathbf{a}=(a_1,a_2, \cdots, a_n) \in \mathbb{N}^n.$ Then its associated *lattice ideal* in $S$ is $$I_L := (\mathbf{x}^\mathbf{a}-\mathbf{x}^\mathbf{b}: \mathbf{a},\mathbf{b}\in \mathbb{N}^n \;\;\textit{and} \;\;\mathbf{a}-\mathbf{b}\in L ).$$ Prime lattice ideals are called *toric ideals*. Prime binomial ideals and toric ideals are identical (see [@miller2005combinatorial Theorem 7.4]). **Proposition 10** (see [@peeva1998syzygies Comments 5.9 (a) and Theorem 6.1 (ii)]). *Let $S=\Bbbk[x_1,\cdots,x_n]$ be a polynomial ring. If $I$ is a codimension $2$ lattice ideal of $S$ and the number of minimal generators of $I$ is 3, then $R=S/I$ is Cohen-Macaulay and the graded minimal free resolution of $R$ is the following form. $$0\rightarrow S^{2} \xrightarrow {\left[ \begin{array}{cc} u_1 & u_4 \\ u_2 & u_5 \\ u_3 & u_6 \end{array} \right] } S^{3} \rightarrow S \rightarrow R \rightarrow 0,$$ where $u_i$ is a monomial of $S$ for all $1 \leqq i \leqq 6$.* Note that a codimension 2 toric ideal $I$ is Cohen-Macaulay but not Gorenstein if and only if the number of minimal generators of $I$ is 3 (see [@peeva1998syzygies Remark 5.8 and Theorem 6.1]). # Nearly Gorenstein property versus level property on semi-standard graded affine semigroup rings In this section, we will establish the theory of nearly Gorenstein affine semigroup rings and generalize the results of [@miyashita2022levelness Section 4] to the case of semi-standard graded affine semigroup rings. Moreover, we prove a statement concerning the trace ideal of affine semigroup rings with depth greater than or equal to 2. This result holds even in the case of non-Cohen-Macaulay rings (Theorem 3.5). Proposition [Proposition 14](#extremal){reference-type="ref" reference="extremal"} is the key to extend the results of [@miyashita2022levelness Section 4] to our case. Let $S$ be a semi-standard graded affine semigroup, and let $G_S=\{ \mathbf{a}_1,\cdots,\mathbf{a}_s \} \subseteq \mathbb{N}^d$ be the minimal generators of $S$. Fix the affine semigroup ring $R=\Bbbk[S]$. For any $\mathbf{a}\in \mathbb{N}^d$, we set $$\notag R_\mathbf{a} = \begin{cases} \Bbbk\mathbf{x}^\mathbf{a}& (\mathbf{a}\in S)\\ 0 & (\mathbf{a}\notin S). 
\end{cases}$$ Then $R = \bigoplus_{\mathbf{a}\in \mathbb{N}^d }R_\mathbf{a}$ is a direct sum decomposition as an abelian group, and $R_\mathbf{a}R_\mathbf{b}\subseteq R_{\mathbf{a}+\mathbf{b}}$ for any $\mathbf{a},\mathbf{b}\in \mathbb{N}^d$. Thus we can regard $R$ as an $\mathbb{N}^d$-graded ring. For an $\mathbb{N}^d$-graded ideal $I$ of $R$, we put $$V_I=\left \{\mathbf{v}_1,\cdots,\mathbf{v}_r : \{\mathbf{x}^\mathbf{v}_1,\cdots,\mathbf{x}^\mathbf{v}_r\} \text{\;is the minimal generating system of\;} I \right \} \subseteq S.$$ If $R$ is Cohen-Macaulay, then $\omega_R$ is isomorphic to an ideal $I_R \subseteq R$ and denote $V_{\omega_R}$ as $V_{I_R}$. Moreover, we use the following notations; $({G_S})_{\min} = \{\mathbf{v}\in G_S : \deg \mathbf{x}^\mathbf{v}\leqq \deg \mathbf{x}^{\mathbf{v}_i} \;\text{for all} \; 1 \leqq i \leqq r\}$, $(V_I)_{\min} = \{\mathbf{v}\in V_I : \deg \mathbf{x}^\mathbf{v}\leqq \deg \mathbf{x}^{\mathbf{v}_i} \;\text{for all} \; 1 \leqq i \leqq r\}$, $S-V_I = \left \{\mathbf{u}\in \mathbb{Z}S : \mathbf{x}^{\mathbf{u}} \in I^{-1} \right \} = \{ \mathbf{u}\in \mathbb{Z}S : \mathbf{u}+ \mathbf{v}\in S \; \text{for all}\; \mathbf{v}\in V_I\},$ $E_S=\{\mathbf{a}\in G_S : \mathbb{N}\mathbf{a}\text{\;is a $1$-dimensional face of $S$}\}$. Thus the following holds. **Proposition 11**. *Let $S$ be a pointed affine semigroup, let $R=\Bbbk[S]$ and let $\mathbf{a}\in G_S$. The following are equivalent:* **$(a)$* $\mathbf{x}^\mathbf{a}\in \textrm{tr}(I)$;* **$(b)$* There exist $\mathbf{v}\in V_I$ and $\mathbf{u}\in S-V_I$ such that $\mathbf{a}= \mathbf{u}+ \mathbf{v}$.* **Proof.* $(b) \Rightarrow (a)$ follows from equality (2.[\[traceformula\]](#traceformula){reference-type="ref" reference="traceformula"}). We show $(a) \Rightarrow (b)$. Assume that $\mathbf{x}^\mathbf{a}\in \textrm{tr}(I)$. Then we know from equality (2.[\[traceformula\]](#traceformula){reference-type="ref" reference="traceformula"}) that ${\mathbf{x}}^{\mathbf{a}_i} \in II^{-1}$. Since $II^{-1}$ is an $\mathbb{N}^d$-graded homogeneous ideal and $R$ is a domain, we can write $\mathbf{x}^{\mathbf{a}}=c_1\mathbf{x}^{\mathbf{v}_1}\mathbf{x}^{\mathbf{u}_1}+\cdots+c_r\mathbf{x}^{\mathbf{v}_r}\mathbf{x}^{\mathbf{u}_r}$, where $c_i \in \Bbbk$ and $\mathbf{u}_i \in \mathbb{Z}^d$. Thus we have $\mathbf{x}^{\mathbf{v}_i+\mathbf{u}_i}=\mathbf{x}^{\mathbf{a}}$ for at least one $1 \leqq i \leqq r$, as desired. ◻* **Corollary 12**. *Let $S$ be a Cohen-Macaulay pointed affine semigroup. The following are equivalent: *$(a)$* $R=\Bbbk[S]$ is nearly Gorenstein;* **$(b)$* For any $\mathbf{a}\in G_S$, there exist $\mathbf{v}\in V_{\omega_R}$ and $\mathbf{u}\in S-V_{\omega_R}$ such that $\mathbf{a}= \mathbf{u}+ \mathbf{v}$.* **Lemma 13**. *Let $S$ be a pointed affine semigroup and let $\mathbf{a}\in ({G_S})_{\min}$. If there exist $\mathbf{v}\in V_I$ and $\mathbf{u}\in S-V_I$ such that $\mathbf{a}= \mathbf{u}+ \mathbf{v}$, then $\mathbf{v}\in (V_I)_{\min}$.* **Proof.* If $(V_I)_{\min}=V_I$, it is obvious. Assume $(V_I)_{\min} \neq V_I$ and $\mathbf{v}\notin (V_I)_{\min}$. Then there exists $\mathbf{v}' \in (V_I)_\min$ such that $\deg \mathbf{x}^{\mathbf{v}'-\mathbf{v}} <0$. Since $\mathbf{u}\in S-V_I$, Thus we get $\mathbf{a}+ \mathbf{v}'-\mathbf{v}=\mathbf{u}+ \mathbf{v}' \in S \setminus \{0\}$. Then $\deg \mathbf{x}^{\mathbf{a}+ \mathbf{v}'-\mathbf{v}} \geqq \deg \mathbf{x}^{\mathbf{a}}$. 
Therefore, we have $\deg \mathbf{x}^{\mathbf{v}'-\mathbf{v}}=\deg \mathbf{x}^{\mathbf{a}+\mathbf{v}'-\mathbf{v}}-\deg{ \mathbf{x}^{\mathbf{a}} } \geqq 0$, which yields a contradiction. ◻*

**Proposition 14**. *Let $S$ be a pointed affine semigroup. If $R=\Bbbk[S]$ is semi-standard graded, then $(\mathbf{x}^\mathbf{e}: \mathbf{e}\in E_S)R \subseteq R_1$.*

**Proof.* Take $\mathbf{e}\in E_S$. Since $\mathbf{x}^\mathbf{e}\in R$ and $R$ is a finitely generated $\Bbbk[R_1]$-module, we can write $\mathbf{e}=\mathbf{a}+\mathbf{b}$ for some $\mathbf{a}\in G_S$ and $\mathbf{b}\in S$ with $\deg \mathbf{x}^\mathbf{a}=1$. Then $F=\mathbb{N}\mathbf{e}$ is a face of $S$, so $$\mathbf{e}=\mathbf{a}+\mathbf{b}\in F \iff \mathbf{a}\in F \text{\;and\;} \mathbf{b}\in F.$$ Thus we get $\mathbf{b}=\mathbf{0}$ and $\mathbf{x}^\mathbf{e}=\mathbf{x}^\mathbf{a}\in R_1$, as desired. ◻*

**Theorem 15**. *Let $R=\Bbbk[S]$ be a semi-standard graded affine semigroup ring. Let $I$ be a non-principal ideal of $R$ and let $b=\min \{i: I_i\neq 0\}$. If $\textrm{depth}R \geqq 2$ and $$(\mathbf{x}^\mathbf{e}: \mathbf{e}\in E_S)R \subseteq \textrm{tr}(I),$$ then we have $\dim_\Bbbk I_b \geqq 2$.*

*Proof.* Assume that $\textrm{depth}R \geqq 2$, $(\mathbf{x}^\mathbf{e}: \mathbf{e}\in E_S)R \subseteq \textrm{tr}(I)$ and $\dim_\Bbbk I_b = 1$. Then $(V_I)_\min=\{\mathbf{v}\}$ and we can take $\mathbf{v}' \in V_I$ such that $\deg \mathbf{x}^\mathbf{v}< \deg \mathbf{x}^{\mathbf{v}'}$. Since $\mathbf{v}, \mathbf{v}' \in V_I$ and $\mathbf{v}\neq \mathbf{v}'$, we get $\mathbf{v}'-\mathbf{v}\in \mathbb{Z}{S}\setminus S$. We claim $\mathbf{v}'-\mathbf{v}\in \mathbb{R}_{\geqq 0}S$. Assume $\mathbf{v}'-\mathbf{v}\notin \mathbb{R}_{\geqq 0}S$. Then by Lemma [Lemma 5](#extremalpoly){reference-type="ref" reference="extremalpoly"}, there exists $\mathbf{a}\in E_S$ such that $\mathbf{v}' - \mathbf{v}+ \mathbf{a}\notin S$. On the other hand, since $\mathbf{a}\in (G_S)_\min$ and $\mathbf{x}^\mathbf{a}\in \textrm{tr}(I)$, there exists $\mathbf{u}\in S-V_I$ such that $\mathbf{a}=\mathbf{u}+\mathbf{v}$ by Proposition [Proposition 11](#NGtokuchouAffine){reference-type="ref" reference="NGtokuchouAffine"} and Proposition [Proposition 14](#extremal){reference-type="ref" reference="extremal"}. Thus we get $\mathbf{v}' - \mathbf{v}+ \mathbf{a}= \mathbf{u}+\mathbf{v}' \in S$, which yields a contradiction. Hence $\mathbf{v}'-\mathbf{v}\in \mathbb{R}_{\geqq 0}S$, and we get $\mathbf{v}'-\mathbf{v}\in \overline{S} \setminus S$. Since $\mathbf{v}'-\mathbf{v}\in \overline{S}\setminus S$, there exist $s_i \in \overline{S}$ and a face $F$ of $S$ such that $\mathbf{v}'-\mathbf{v}\in s_i +\mathbb{Z}F$ and $(s_i+\mathbb{Z}F) \cap S= \emptyset$ by Theorem [Theorem 6](#holedecomp){reference-type="ref" reference="holedecomp"}. Since $\textrm{depth}R \geqq 2$, every family of holes has dimension at least $1$ by Theorem [Theorem 7](#S2){reference-type="ref" reference="S2"}. So we have $\dim F \geqq 1$. Since $\mathbf{v}'-\mathbf{v}\in s_i +\mathbb{Z}F$, we can take $\mathbf{x}\in \mathbb{Z}F$ and write $\mathbf{v}'-\mathbf{v}=s_i+\mathbf{x}$. Thus, we get $(\mathbf{v}'-\mathbf{v}+ \mathbb{Z}F)\cap S=(s_i+\mathbb{Z}F) \cap S=\emptyset$. In particular, taking an extremal ray $l$ of the face $F$, we get $(\mathbf{v}'-\mathbf{v}+ \mathbb{Z}l) \cap S = \emptyset$. On the other hand, for the generator $\mathbf{e}\in E_S$ of $l$ we have $\mathbf{x}^{\mathbf{e}} \in \textrm{tr}(I)$, and arguing as above we get $\mathbf{v}'-\mathbf{v}+ \mathbf{e}\in S$. This yields a contradiction. ◻

**Corollary 16**.
*Let $R=\Bbbk[S]$ be a semi-standard graded Cohen-Macaulay affine semigroup ring. If $R$ is not Gorenstein and $(\mathbf{x}^\mathbf{e}: \mathbf{e}\in E_S)R \subseteq \textrm{tr}(\omega_{R})$, then $h_{s(R)} \geqq 2$. In particular, if $R$ is non-Gorenstein nearly Gorenstein, then $h_{s(R)} \geqq 2$.* *Proof.* Recall that $\omega_R$ is isomorphic to an ideal $I_R$ of $R$ as graded $R$-module. Since $\textrm{tr}(\omega_R)=\textrm{tr}(I_R)$ and $h_s= \dim_\Bbbk(\omega_R)_{-a(R)}$, the assertion follows from Theorem [Theorem 15](#TraceIdealAffine){reference-type="ref" reference="TraceIdealAffine"}. ◻ **Corollary 17**. *For any semi-standard graded Cohen-Macaulay affine semigroup ring, if it is nearly Gorenstein with Cohen-Macaulay type 2, then it is level.* *Proof.* If $R$ is not level, then $h_s=1$. This contradicts Corollary [Corollary 16](#NGhvector){reference-type="ref" reference="NGhvector"}. ◻ **Examples 18**. By using Theorem [Theorem 8](#trace){reference-type="ref" reference="trace"} and $\mathtt{Macaulay2}$ $($[@M2]$)$, we can check the following: - Both of $R_1=\mathbb{Q}[t,s^2t,s^4t,s^6t,s^8t,st^2,s^3t^2]$ and $R_2=\mathbb{Q}[t,s^3t,s^{6}t,s^{9}t,st^2,s^4t^2]$ are non-level nearly Gorenstein, and non-standard semi-standard graded affine semigroup ring with $r(R_i)=2i$, where $\deg s^at^b=b$ for any $a,b \in \mathbb{N}$. - The affine semigroup ring $$R=\Bbbk[x_1,x_2,x_3,x_4]/(x_2x_3^2-x_1x_4,x_2^3-x_3x_4,x_1x_2^2-x_3^3)$$ is nearly Gorenstein with $r(R)=\textrm{pd}(R)=2$, where $\deg x_1=\deg x_2=\deg x_3=1$ and $\deg x_4 = 2$. This is not semi-standard graded by Theorem [Theorem 21](#pd2NGsemi){reference-type="ref" reference="pd2NGsemi"}. Example [Examples 18](#exNG1){reference-type="ref" reference="exNG1"} (b) shows that nearly Gorenstein property does not imply $\dim_\Bbbk(\omega_R)_{-a(R)} \geqq 2$ for non-semi-standard graded affine semigroup ring $S$ in general. When the projective dimension is greater than or equal to 3, there are many examples of non-standard graded semi-standard graded nearly Gorenstein affine semigroup rings. The following family is valuable example of semi-standard graded affine semigroup ring that are non-level, almost Gorenstein, and nearly Gorenstein (In the case of standard graded affine semigroup rings, such examples do not exist! Refer to Theorems [Corollary 32](#s2ikahalevel){reference-type="ref" reference="s2ikahalevel"} and [Theorem 36](#AGandNG!){reference-type="ref" reference="AGandNG!"}). While we prove here that they are non-level and nearly Gorenstein, the proof of their almost Gorenstein property will be presented in the next section (see Theorem [Theorem 38](#nonlevelAGwithSocleDeg2){reference-type="ref" reference="nonlevelAGwithSocleDeg2"}). **Proposition 19**. *Fix $n \geqq 2$, $1 \leqq k \leqq n+1$ and define the affine semigroup $S$ as $$S=\langle \{(2i,2n-2i): 0 \leqq i \leqq n \} \cup \{ (2j+2k-1,4n-2j-2k+1): 0 \leqq j \leqq n-1 \} \rangle.$$ Then $R=\Bbbk[S]$ is nearly Gorenstein with $h(R)=(1,n-1,n)$ and $r(R)=2n-1$.* **Proof.* Put $F_1 = \mathbb{N}(2n,0)$, $F_2 = \mathbb{N}(0,2n)$ and put $$C_i=\{w \in \mathbb{Z}S_\mathbf{a}\; : \;w+g \notin S_\mathbf{a}\; \textit{for any}\; g \in F_i\}$$ for $i=1,2$, respectively. Denote by $\Bbbk[\omega_S]$ the $R$-submodule of $R$ generated by $\{ \mathbf{x}^{v} : v \in \omega_{S} \}$, where $\omega_S=-(C_1 \cap C_2)$. By applying [@goto1976affine Theorem 3] to our case, $\Bbbk[\omega_S]$ is the canonical module of $R$ if $R$ is Cohen-Macaulay. First we show $R$ is Cohen-Macaulay. 
To check this, it is enough to check $\mathbb{Z}S= S \cup C_1 \cup C_2$ (see [@goto1976affine Theorem 1]). $$\omega_S=\langle \{(1-2i,-1+2i) : k-n \leqq i \leqq k-1 \} \cup \{(2i,2n-i): 1 \leqq i \leqq n-1 \} \rangle.$$ We put $$\notag X_1 = \begin{cases} \{(-1+2i,1-2i): k-n \leqq i \leqq 0\} & (1 \leqq k \leqq n)\\ \emptyset & (k=n+1), %a_{i-m}b_m+a_{i-m+1}b_{m-1}+\cdots+a_{n}b_{i-n} & (n \leqq i \leqq n+m) \end{cases}$$ $$\notag X_2 = \begin{cases} \{(-1+2i,1-2i) : 1 \leqq i \leqq k-1\} & (2 \leqq k \leqq n+1)\\ \emptyset & (k=1), %a_{i-m}b_m+a_{i-m+1}b_{m-1}+\cdots+a_{n}b_{i-n} & (n \leqq i \leqq n+m) \end{cases}$$ $Y_1=X_1 \cup \{(n,-n): n \in \mathbb{Z}_{>0} \}$ and $Y_2=X_2 \cup \{(-n,n) : n \in \mathbb{Z}_{>0} \}$. Then we have $$C_1=\bigcup_{a\in Y_1} (a+\mathbb{Z}(2n,0)), \;\;\;\;C_2=\bigcup_{a\in Y_2} (a+\mathbb{Z}(0,2n)).$$ Thus, $\mathbb{Z}S=\{(i,2nj-i): i,j \in \mathbb{Z}\}=S \cup C_1 \cup C_2$ so $S$ is Cohen-Macaulay. Moreover, we can calculate the canonical module as follows: $$\begin{aligned} \omega_S&=-(C_1 \cap C_2) \nonumber\\ &=\bigcup_{a\in X_1} (-a+\mathbb{N}(2n,0)) \cup \bigcup_{a\in X_2} (-a+\mathbb{N}(0,2n)) \cup \{(i,2nj-i): \; 1 \leqq i \leqq 2n-1 \;\textit{and\;} j \geqq 1 \} \nonumber\\ &=\langle \{(1-2i,-1+2i) : k-n \leqq i \leqq k-1 \} \cup \{(2i,2n-i): 1 \leqq i \leqq n-1 \} \rangle \nonumber.\end{aligned}$$ Then $r(R)=2n-1$. Moreover, we can check $R$ is nearly Gorenstein by Proposition [Corollary 12](#NGsemigroup){reference-type="ref" reference="NGsemigroup"}. Lastly, we get $h(R)=(1,n-1,n)$ from the following graded isomorphism as $R_1$-module: $$R \cong R_1 \oplus R_1\langle x_1^{2j+2k-1}x_2^{4n-2j-2k+1}: 0 \leqq j \leqq n-1 \rangle.$$ ◻* The family of nearly Gorenstein rings $R$ for Proposition [Proposition 19](#NGfamily){reference-type="ref" reference="NGfamily"} satisfies $\textrm{pd}(R)=3$. There is also another example of a nearly Gorenstein semi-standard graded affine semigroup ring $R$ with $\textrm{pd}(R)=3$, such as the following. **Example 20**. $\mathbb{Q}[u,s^2u,t^2u,s^2t^2u,su^2,s^3u^2]$ is nearly Gorenstein semi-standard graded affine semigroup ring, where $\deg s^at^bu^c=c$ for any $a,b,c \in \mathbb{N}$. We can check it in the same way as Examples [Examples 18](#exNG1){reference-type="ref" reference="exNG1"}. A non-Gorenstein nearly Gorenstein standard graded affine semigroup ring with projective dimension 2 does exist, and that its characterization is known in the context of projective monomial curves (see [@miyashita2023nearly Theorem A]). However, for non-standard graded semi-standard graded affine semigroup rings, there are no examples with projective dimension 2 that are nearly Gorenstein, except for those that are Gorenstein. **Theorem 21**. *Let $R$ be a non-standard graded semi-standard graded Cohen-Macaulay affine semigroup ring with $\textrm{pd}(R) = 2$. Then the following conditions are equivalent:* - *$R$ is nearly Gorenstein;* - *$R$ is Gorenstein.* **Proof.* It is enough to show that (1) implies (2). Assume that $R$ is not Gorenstein. We put $R=\Bbbk[S]$ and $d=\dim R$. Note that $n=d+2$ by Auslander-Buchsbaum formula. Since $R$ is semi-standard graded affine semigroup ring, we may assume $d\geqq 2$. By the assumption, there exists a codimension 2 homogeneous prime binomial ideal $I$ such that $I$ is minimally generated by three elements and $R \cong A/I$, where $A=\Bbbk[x_1,\cdots,x_n]$ is a polynomial ring. 
Since $I$ is a codimension $2$ lattice ideal and the number of minimal generators of $I$ is 3, the graded minimal free resolution of $R$ is one of the following form by Proposition [Proposition 10](#codim2peeva){reference-type="ref" reference="codim2peeva"}. Note that $R$ is level by Corollary [Corollary 17](#type2NG){reference-type="ref" reference="type2NG"} since $r(R)=2$. $$\notag 0\rightarrow A(-\deg f_1-\deg u_1)^2 \xrightarrow {X =\left[ \begin{array}{cc} u_1 & -u_4 \\ -u_2 & u_5 \\ u_3 & -u_6 \end{array} \right] } A(-\deg f_1) \oplus A(-\deg f_2) \oplus A(-\deg f_3) \rightarrow A \rightarrow R \rightarrow 0. \small$$ Here, $u_i$ is a monomial of $A$ for all $1 \leqq i \leqq 6$ and $f_1=u_1u_5-u_2u_4, f_2=u_3u_4-u_1u_6$ and $f_3=u_2u_6-u_3u_5$. Since $I$ is a graded prime ideal, we have $\gcd(u_1u_5,u_2u_4)=\gcd(u_3u_4,u_1u_6)=\gcd(u_2u_6,u_3u_5)=1$ and $$\deg u_1u_5=\deg u_2u_4, \;\;\; \deg u_3u_4=\deg u_1u_6, \;\;\; \deg u_2u_6=\deg u_3u_5. \label{ddd}$$* *Moreover, since $R$ is level, we have $$\deg u_1=\deg u_4, \;\;\;\deg u_2=\deg u_5, \;\;\;\deg u_3=\deg u_6. \label{ccc}$$* - *Let $d=2$. Since $R$ is nearly Gorenstein, $X$ may be assumed to have one of the following forms by Theorem [Theorem 8](#trace){reference-type="ref" reference="trace"}.* *(i) $X =\left[ \begin{array}{cc} x_1 & -x_4 \\ -x_2 & u_5 \\ x_3 & -u_6 \end{array} \right]$ or (ii) $X =\left[ \begin{array}{cc} x_1 & -x_3 \\ -u_2 & x_4 \\ x_2 & -u_6 \end{array} \right]$ or (iii) $X =\left[ \begin{array}{cc} x_1 & -x_3 \\ -x_2 & x_4 \\ u_3 & -u_6 \end{array} \right]$.* *(For example, there is also a possibility that $X =\left[ \begin{array}{cc} u_1 & -x_2 \\ -u_2 & x_3 \\ x_1 & -x_4 \end{array} \right]$, but this can be regarded to be the same as (i).)* - *We can write $X=\left[ \begin{array}{cc} x_1 & -x_4 \\ -x_2 & x_1^ax_3^b \\ x_3 & -x_1^cx_2^d \end{array} \right]$ for some $a,b,c,d \in \mathbb{N}$ with $ac=0$. Moreover, we have $x_{i},x_{j} \in E_S$ for some $1 \leqq i<j \leqq 4$ by Proposition [Proposition 14](#extremal){reference-type="ref" reference="extremal"}.* - *If $(i,j)=(1,2)$, then we have $(u_5,u_6)=(x_1,x_2^{\deg x_3})$ or $(x_3,x_1)$ or $(x_3,x_2)$. $(u_5,u_6)=(x_1,x_2^{\deg x_3})$ implies $x_2^{\deg x_3+1}=x_1x_3$. Since $x_2 \in E_S$, this yields a contradiction. $(u_5,u_6)=(x_3,x_1)$ or $(x_3,x_2)$ implies $\deg x_i=1$ for all $1 \leqq i \leqq 4$, this leads to a contradiction since $R$ is standard graded.* - *If $(i,j)=(1,3)$, then we have $(u_5,u_6)=(x_3^{\deg x_2},x_1)$ or $(x_3,x_2)$. By the same argument as above, this yields a contradiction.* - *If $(i,j)=(1,4)$, according to ([\[ddd\]](#ddd){reference-type="ref" reference="ddd"}), we obtain $\deg x_2=a+b \deg x_3$ and $\deg x_3=c+d \deg x_2$, so we have $(bd-1)\deg x_3=-(c+ad) \leqq 0.$ Thus $bd=0$ or $1$. If $bd=0$, then we get $x_1^{a+1}=x_2x_4$ or $x_1^{c+1}=x_3x_4$. this leads to a contradiction since $x_1 \in E_S$. If $bd=1$, then we get $a=c=0$ and $x_2^{d+1}-x_3^{b+1}=0$. This yields a contradiction since $I$ is prime.* - *If $(i,j)=(2,3)$, then we have $(a,b)=(1,0)$ or $(0,1)$. Thus we obtain $\deg x_1=\deg x_4=1$ or $x_2^2-x_3^2=0$ so this yields a contradiction.* - *If $(i,j)=(2,4)$, we have $\deg x_1=1$ and $(a,b)=(1,0)$ or $(0,1)$. If $(a,b)=(1,0)$, we get $(c,d)=(0,1)$ and $x_2^2=x_1x_3$ so this contradicts to $x_2 \in E_S$. 
If $(a,b)=(0,1)$, we have $\deg x_i=1$ for all $1\leqq i \leqq 4$, this is a contradiction.* - *If $(i,j)=(3,4)$, by the same discussion as above, we get a contradiction.* - *We can write $X=\left[ \begin{array}{cc} x_1 & -x_3 \\ -x_3^a & x_4 \\ x_2 & -x_1^b \end{array} \right]$ for some $a,b\in \mathbb{Z}_{>0}$. Moreover, we have $x_{i},x_{j} \in E_S$ for some $1 \leqq i<j \leqq 4$ by Proposition [Proposition 14](#extremal){reference-type="ref" reference="extremal"}.* - *If $(i,j)=(1,2)$ or $(1,3)$ or $(1,4)$ or $(2,3)$ or $(3,4)$, then we have $x_1^{b+1}=x_2x_3$ or $x_3^{a+1}=x_1x_4$. This contradicts either $x_1 \in E_S$ or $x_3 \in E_S$.* - *If $(i,j)=(2,4)$, then $R$ is standard graded, this is a contradiction.* - *We can write $X=\left[ \begin{array}{cc} x_1 & -x_3 \\ -x_2 & x_4 \\ x_3^ax_4^b & -x_1^cx_2^d \end{array} \right]$ for some $a,b\in \mathbb{Z}_{>0}$. Moreover, we have $x_{i},x_{j} \in E_S$ for some $1 \leqq i<j \leqq 4$ by Proposition [Proposition 14](#extremal){reference-type="ref" reference="extremal"}.* - *If $(i,j)$ equals $(1,2)$, $(1,4)$, $(2,3)$ or $(3,4)$, then $R$ is standard graded, this is a contradiction.* - *If $(i,j)=(1,3)$, we get $R/(x_1,x_3)R \cong \Bbbk$. Since $x_1,x_3 \in E_S$ is a regular sequence of $R$, we get $R \cong A$. This is a contradiction.* - *If $(i,j)=(2,4)$, by the same discussion as above, we get a contradiction.* - *Let $d=3$. Since $R$ is nearly Gorenstein, We can write $X=\left[ \begin{array}{cc} x_1 & -x_4 \\ -x_2 & x_5 \\ x_3 & -x_1^ax_2^b \end{array} \right]$ for some $(a,b) \in \mathbb{N}^2 \setminus \{(0,0)\}$ by Theorem [Theorem 8](#trace){reference-type="ref" reference="trace"}. Moreover, we have $x_{i},x_{j},x_{k} \in E_S$ for some $1 \leqq i<j<k \leqq 5$ by Proposition [Proposition 14](#extremal){reference-type="ref" reference="extremal"}.* - *If $(i,j,k)=(1,2,3)$, then $R$ is standard graded, this is a contradiction.* - *If $(i,j,k)=(1,2,4)$, we have $x_1x_5=x_2x_4$ and $F=\mathbb{N}\mathbf{a}_2+ \mathbb{N}\mathbf{a}_4$ is a 2-dimensional face of $S$ where $x_i=\mathbf{x}^{\mathbf{a}_i} \in \Bbbk[S]$ for $i=2,4$. Thus we get $x_1 \in (x_2,x_4)R$. This yields a contradiction.* - *If $(i,j,k)=(1,2,5)$ or $(1,4,5)$ or $(2,4,5)$, by the same discussion as above, we get a contradiction.* - *If $(i,j,k)=(1,3,4)$ or $(1,3,5)$ or $(2,3,4)$ or $(2,3,5)$ or $(3,4,5)$, by the same argument as in $d=2$, it contradicts in this case as well.* - *Let $d=4$. By the same argument as in $d=3$, it contradicts in this case as well.* - *Let $d\geqq 5$, $R$ cannot be nearly Gorenstein by Theorem [Theorem 8](#trace){reference-type="ref" reference="trace"}.* * ◻* # Almost Gorenstein semi-standard graded rings Let us recall the definition of the almost Gorenstein *graded* ring. **Definition 22** ([@goto2015almost Definition 1.5]). We say that a Cohen--Macaulay graded ring $R$ is *almost Gorenstein* if there exists an exact sequence $$\begin{aligned} \label{ex_seq} 0 \rightarrow R \xrightarrow{\phi} \omega_R(-a) \rightarrow C \rightarrow 0\end{aligned}$$ of graded $R$-modules with $\mu(C)=e(C)$, where $\phi$ is an injection of degree 0. From now, we will apply the discussion in [@higashitani2016almost] below to semi-standard graded rings. First we consider the condition $$\begin{aligned} \label{condition} \text{there exists an injection $\phi : R \rightarrow \omega_R(-a)$ of degree 0}. \end{aligned}$$ This is a necessary condition for $R$ to be almost Gorenstein. Let $C=\cok(\phi)$. 
Then $C$ is a Cohen--Macaulay $R$-module of dimension $d-1$ if $C\not=0$ (see [@goto2015almost Lemma 3.1]). The condition [\[condition\]](#condition){reference-type="eqref" reference="condition"} is satisfied if $R$ is a domain or generically Gorenstein and a level ring. To prove this, we use the following well-known result. **Proposition 23**. *Let $R$ be a semi-standard graded Cohen-Macaulay ring. If $R$ is a domain, or generically Gorenstein and a level ring, then there exists a homogeneous element $\omega_R$ of degree $-a(R)$ such that $R \cong Rx(-a(R))$.* **Proof.* While the proof for standard graded rings is given in [@bruns1998cohen Theorem 4.4.9], it also works for semi-standard graded rings. ◻* **Proposition 24**. *When $R$ is a domain, or generically Gorenstein and a level ring, $R$ always satisfies the condition [\[condition\]](#condition){reference-type="eqref" reference="condition"}.* *Proof.* By Proposition [Proposition 23](#nzd){reference-type="ref" reference="nzd"}, we can pick a homogeneous element $x \in (\omega_R)_{-a}$ such that $R$-homomorphism $\phi : R \xrightarrow{x} \omega_R(-a)$ is an injection of degree $0$, as desired. ◻ **Remark 25**. Let $\Bbbk$ be a finite field and let $R$ be a semi-standard graded ring with $R_0=\Bbbk$. Assume that $R$ satisfies the condition [\[condition\]](#condition){reference-type="eqref" reference="condition"}. Put $K=\Bbbk(x)$, then $$\begin{aligned} 0 \rightarrow R \otimes_\Bbbk K \rightarrow \omega_R(-a) \otimes_\Bbbk K \rightarrow C \otimes_\Bbbk K \rightarrow 0 \nonumber\end{aligned}$$ is also exact. Moreover, we have $\omega_R(-a) \otimes_\Bbbk K=\omega_{R\otimes_\Bbbk K}(-a)$ ([@bruns1998cohen see Exercise 3.3.31]), $\dim C = \dim C \otimes_\Bbbk K$, $\textrm{depth}C = \textrm{depth}C \otimes_\Bbbk K$ and the Hilbert series of $C$ and $C \otimes_\Bbbk K$ are equal. **Theorem 26**. *Assume that $R$ satisfies [\[condition\]](#condition){reference-type="eqref" reference="condition"} and let $(h_0,h_1,\ldots,h_s)$ be the $h$-vector of $R$. Then the following is true.* - *We have $\mu(C)=r(R)-1$.* - *The Hilbert series of $C$ is $$\begin{aligned} \label{ccc} \frac{\sum_{j=0}^{s-1}((h_s+\cdots+h_{s-j})-(h_0+\cdots+h_j))t^j}{(1-t)^{\dim R-1}}.\end{aligned}$$ In particular, we have $e(C)=\sum_{j=0}^{s-1}((h_s+\cdots+h_{s-j})-(h_0+\cdots+h_j))$.* **Proof.* (1) follows from the same proof of [@higashitani2016almost Proposition 2.3].* *(2) We may assume $R$ is not Gorenstein. Then there is the short exact sequence of graded $R$-module of degree $0$ as follows: $$\begin{aligned} 0 \rightarrow R \rightarrow \omega_R(-a) \rightarrow C \rightarrow 0.\end{aligned}$$ By Remark 3.4, we may assume $\Bbbk$ is an infinite field. Then we can show the statement in the same way as [@stanley1991hilbert Theorem 2.1]. ◻* From this Proposition, we have the Stanley's inequality ([@stanley1991hilbert Theorem 2.1]): **Corollary 27**. *Assume that $R$ satisfies [\[condition\]](#condition){reference-type="eqref" reference="condition"}. Let $(h_0,h_1,\ldots,h_s)$ be the $h$-vector of $R$. Then we have the inequality $$h_s+\cdots+h_{s-j} \geqq h_0+\cdots+h_j$$ for each $j=0,1,\ldots,\lfloor s/2 \rfloor$.* As the same proof of [@higashitani2016almost Corollary 2.7, Theorem 3.1 and Theorem 4.1], we can prove the following. **Corollary 28**. *Assume that $R$ satisfies [\[condition\]](#condition){reference-type="eqref" reference="condition"}. 
The following conditions are equivalent:* - *there exists an injection $\phi : R \rightarrow \omega_R(-a)$ of degree 0 such that $C=\cok(\phi)$ satisfies $\mu(C)=e(C)$, namely, $R$ is almost Gorenstein;* - *every injection $\phi : R \rightarrow \omega_R(-a)$ of degree 0 satisfies $\mu(C)=e(C)$;* - *$$r(R)-1=\sum_{j=0}^{s-1}((h_s+\cdots+h_{s-j})-(h_0+\cdots+h_j));$$* *In particular, $\phi$ does not matter for the almost Gorenstein property of $R$.* **Corollary 29**. *[\[Higashitani\]]{#Higashitani label="Higashitani"} Let $R$ be a Cohen--Macaulay semi-standard graded ring with $h(R)=(h_0,h_1,\ldots,h_s)$ where $h_s \neq 0$. Then the following is true.* - *If $R$ is domain and $h_i=h_{s-i}$ for $i=0,1,\cdots, \lfloor \frac{s}{2} \rfloor -1$, then $R$ is almost Gorenstein.* - *If $R$ satisfies [\[condition\]](#condition){reference-type="eqref" reference="condition"} and $s(R)=1$, then $R$ is always almost Gorenstein.* Note that $R$ is generically Gorenstein if and only if $Q(R)$ is Gorenstein for Cohen-Macaulay ring $R$, where $Q(R)$ is the total ring of fractions of $R$. It is known that every almost Gorenstein ring is generically Gorenstein. **Lemma 30** ([@goto2015almost Lemma 3.1(1)]). *Let $R \xrightarrow{\phi} \omega_R \rightarrow C \rightarrow 0$ be an exact sequence of $R$-modules. If $\dim C \leqq d-1$, then $\phi$ is injective and $R$ is a generically Gorenstein ring. In particular, if $R$ is almost Gorenstein, then $R$ is generically Gorenstein.* # Almost Gorenstein property versus level property When $R$ is a standard graded ring, the next is known by [@goto2015almost]. Actually, this result holds true even when $R$ is a semi-standard graded ring. Here, we give another proof of [@goto2015almost] by using Stanley's enequality (Corollary [Corollary 27](#prop1){reference-type="ref" reference="prop1"}). **Theorem 31**. *Let $R$ be a semi-standard Cohen--Macaulay graded ring with $\dim R>0$. Suppose that $R$ is not Gorenstein. Then the following conditions are equivalent:* - *$R$ is almost Gorenstein and level;* - *$R$ is generically Gorenstein and $s(R)=1$.* **Proof.* First we show (2) implies (1). Since $R$ is semi-standard graded ring, we can check $s(R)=1$ implies $R$ is level. Thus $R$ is almost Gorenstein by Proposition [Proposition 24](#x){reference-type="ref" reference="x"} and Corollary [\[Higashitani\]](#Higashitani){reference-type="ref" reference="Higashitani"}(b). We show (1) implies (2). Since $R$ is generically Gorenstein by Lemma [Lemma 30](#AGisGG){reference-type="ref" reference="AGisGG"}, it is enough to show $s(R)=1$. We assume $s(R) \geqq 2$. Since $R$ is generically Gorenstein and level and almost Gorenstein, we have $e(C)=r(A)-1=h_s-1$ by Proposition [Proposition 24](#x){reference-type="ref" reference="x"} and Corollary [Corollary 28](#tokuchou){reference-type="ref" reference="tokuchou"}. Therefore, we have $$(s-1)(h_s-1) =\sum_{j=1}^{\lfloor \frac{s}{2} \rfloor}(s-2j)(h_j-h_{s-j}).$$ Moreover, we have the following enequalities by Proposition [Proposition 24](#x){reference-type="ref" reference="x"} and Corollary [Corollary 27](#prop1){reference-type="ref" reference="prop1"}. 
$$\begin{aligned} h_s-1 &\geqq 0 \tag{*}\\ h_s-1 &\geqq (h_1-h_{s-1}) \tag{*1*}\\ h_s-1 &\geqq (h_1-h_{s-1}) + (h_2-h_{s-2}) \tag{*2*}\\ \vdots \notag \\ h_s-1 &\geqq (h_1-h_{s-1}) + (h_2-h_{s-2}) + \cdots + (h_{\lfloor \frac{s}{2} \rfloor}-h_{s-\lfloor \frac{s}{2} \rfloor}) \tag{*$\left \lfloor \frac{s}{2} \right \rfloor$*}\end{aligned}$$ By $2\times \left({(\text{*}1\text{*})}+{(\text{*}2\text{*})}+\cdots+(\text{*}\left \lfloor \frac{s}{2} \right \rfloor-1\text{*}) \right)+\left(s-2 \lfloor \frac{s}{2} \rfloor\right) \times \left(\text{*}\lfloor \frac{s}{2} \rfloor\text{*}\right)$, we have $$\begin{aligned} (s-2)(h_s-1) \geqq \sum_{j=1}^{\lfloor \frac{s}{2} \rfloor}(s-2j)(h_j-h_{s-j})=(s-1)(h_s-1) \iff 0 \geqq h_s-1. \tag{**}\end{aligned}$$ By (\*) and (\*\*), we get $h_s=1$. So $R$ is Gorenstein. This yields a contradiction. ◻* Next, we will discuss non-level and almost Gorenstein semi-standard graded domains $R$ with small socle degree. For $s(R)=2$, the following result is known. **Corollary 32** (see [@yanagawa1995castelnuovo Corollary 3.11] and [@higashitani2016almost Corollary 4.3]). *Let $R$ be a standard graded domain. If $s(R)\leqq 2$, then $R$ is level.* In the case of semi-standard graded domain, the condition $s(R)=2$ does not implies level property in general. The following is known about the Cohen-Macaulay type. **Proposition 33** ([@higashitani2018non Proposition 3.6]). *Let $R$ be a Cohen--Macaulay semi-standard graded ring with the $h$-vector $(h_0,h_1,h_2)$. If $R$ is not level and $\Bbbk[R_1]$ is a domain, then the Cohen-Macaulay type $r(R)$ of $R$ is equal to $h_1+h_2$.* By using this, we have the following. **Proposition 34**. *Let $R$ be a Cohen--Macaulay semi-standard graded domain with $s(R)=2$. The following conditions are equivalent:* - *$R$ is non-level and almost Gorenstein;* - *$R$ is almost Gorenstein and $h(R)=(1,a,a+1)$ for some $a>0$;* - *$R$ is non-level and $h(R)=(1,a,a+1)$ for some $a>0$;* **Proof.* First we show (1) implies (2). Since $R$ is almost Gorenstein, then we have $2(h_2-1)=(h_1+h_2)-1$ by Corollary [Corollary 28](#tokuchou){reference-type="ref" reference="tokuchou"} and [Proposition 33](#type){reference-type="ref" reference="type"}. Thus $h(R)=(1,h_1,h_1+1)$ and $h_1>0$ since $R$ is non-level. Next we show (2) implies (3). If $R$ is level, then we have $a=0$ since $R$ is almost Gorenstein. Lastly, we show (3) implies (1). Since $R$ is non-level, we get $r(R)=2a+1$ by Proposition [Proposition 33](#type){reference-type="ref" reference="type"}. Then $R$ is almost Gorenstein by Proposition [Corollary 28](#tokuchou){reference-type="ref" reference="tokuchou"}. ◻* **Remark 35**. Even if $R$ satisfies the condition $h(R)=(1,a,a+1)$ for some $a>0$, $R$ is not necessarily non-level and almost Gorenstein. Indeed, consider semi-standard graded ring $R=\Bbbk[s,st,st^2,st^6,s^2t^5]$ with $\deg s=\deg st=\deg st^2=\deg st^6=1$ and $\deg s^2t^5=2$. Then $h(R)=(1,2,3)$ but $R$ is level and non-almost Gorenstein. # Almost Gorenstein property versus nearly Gorenstein property Lastly, we discuss the relation between almost Gorenstein property and nearly Gorenstein property. The following theorem follows from [@miyashita2022levelness Theorem 4.4] and [@higashitani2016almost Theorem 4.7]. Note that [@miyashita2022levelness Theorem 4.4] is standard graded version of Corollary [Corollary 16](#NGhvector){reference-type="ref" reference="NGhvector"}. **Theorem 36**. *Let $R$ be a standard graded Cohen-Macaulay affine semigroup ring with $s(R) \geqq 2$. 
The following conditions are equivalent:* - *$R$ is almost Gorenstein and nearly Gorenstein;* - *$R$ is Gorenstein.* Moreover, it is known that every $1$-dimensional almost Gorenstein ring is nearly Gorenstein(see [@herzog2019trace Proposition 6.1]). Therefore, we consider the comparison of nearly Gorenstein and almost Gorenstein properties in semi-standard graded affine semigroup rings when the socle degree and dimension are small. In this Section, we show the following. **Theorem 37**. *Let $R$ be a non-standard semi-standard graded Cohen-Macaulay affine semigroup ring with $\dim R=s(R)=2$. If $R$ is almost Gorenstein, then it is nearly Gorenstein.* To show this statement, we prove the following. **Theorem 38**. *Let $R=\Bbbk[S]$ be a Cohen--Macaulay semi-standard graded affine semigroup ring with $\dim R=s(R)=2$. Then the following conditions are equivalent:* - *$R$ is non-level and almost Gorenstein;* - *$S \cong \langle \{(2i,2n-2i): 0 \leqq i \leqq n \} \bigcup \{ (2j+2k-1,4n-2j-2k+1): 0 \leqq j \leqq n-1 \} \rangle$ for some $n \geqq 2$ and $1 \leqq k \leqq n+1$.* *Moreover, if this is the case, then $R$ is always nearly Gorenstein and $h(R)=(1,n-1,n)$.* **Proof.* By using Lemma 3.3, it is enough to show that (1) implies (2). From the proof of [@higashitani2018non Theorem 3.5], we get $R \cong \Bbbk[R_1] \oplus C$ as $\Bbbk[R_1]$-module. Moreover, $B=\Bbbk[R]_1\cong \bigoplus_{i \in \mathbb{N}} T_{ni}$ and $C(2) \cong \bigoplus_{i \in \mathbb{N}} T_{n-1 + ni}$ for some $n \geqq 2$ where $T=\Bbbk[x,y]$ and $n=h_1+1=h_2$. Thus there exist $V=\{ (id,(n-i)d): 0 \leqq i \leqq n \}$ such that $B=\Bbbk[\langle V \rangle]$. Note that $C$ has a minimal genearating system consisting of $n$ elements as a $B$-module, and all of its generators have degree 2. Furthermore, since $R$ is semi-standard graded $B$-module, the subset $W \subseteq S$ corresponding to the minimal generating system of $C$ satisfies $W \subseteq \mathbb{R}_{\geqq 0}\langle V \rangle$. Therefore, there exists $W=\{(a+ik,2nd-a-ik) : 0 \leqq i \leqq n-1 \}$ such that $C=B \langle{x_1}^{u_1}{x_2}^{u_2} : (u_1,u_2) \in W \rangle$ and $\Bbbk[S] \cong \Bbbk[\langle V \cup W\rangle]$ where $k>0$ and $0 < a < (n+1)d$ with $a \not\equiv 0\; (\textrm{mod}\; d$). First we show $d=k$. Note that $$C_3 \supseteq \{ x_1^{a+d}x_2^{(3n-1)d-a}\} \cup \{x_1^{a+(i-1)k}x_2^{3nd-a-(i-1)k},\; x_1^{a+(n-1)k+id}x_2^{(3n-i)d-a-(n-1)k} : 1 \leqq i \leqq n\}.$$ If $d>k$, then $2n=\dim_\Bbbk T_{2n-1}=\dim_\Bbbk C_3 \geqq 2n+1$, this yields a contradiction. If $d \geqq k$, then we can check $x_1^{a+d}x_2^{(3n-1)d-a} \in \bigoplus_{2 \leqq i \leqq n}x_1^{a+(i-1)k}x_2^{3nd-a-(i-1)k}$. Thus $d=jk$ for some $1 \leqq j \leqq n-2$. Moreover, since $$\left(x_1^{a+(n-2)k}x_2^{2nd-a-(n-2)d} \right)x_1^{nd} \in \bigoplus_{1 \leqq i \leqq n-1} x_1^{a+(n-1)k+id}x_2^{(3n-i)d-a-(n-1)k},$$ we get $k=(n-i)d$ for some $1 \leqq i \leqq n-1$. Thus $d=k$. Then we get $$B_4 = \bigoplus_{0 \leqq i \leqq 4n} \Bbbk x_1^{id}x_2^{(4n-i)d}, \;C_4 = \Bbbk x_1^{a+(i-1)d}x_2^{(4n-i+1)d-a}\;\;\textit{and}\;\;R_4 = B_4 \oplus C_4.$$ For $x_1^{a}x_2^{2nd-a} \in R_2$, we can check $(x_1^{a}x_2^{2nd-a})^2 \in B_4$. Note that $0 < a < (n+1)d$ and $a \not\equiv 0\; (\textrm{mod}\; d$). Then there exists $1 \leqq l \leqq 2n+1$ such that $2a=ld$. Moreover, we can write $d=2m$ and $l=2k-1$ for some $m>0$ and $1 \leqq k \leqq n+1$. On the other hand, $\Bbbk[S] \cong \Bbbk[\langle V \cup W\rangle]$ implies $S \cong \langle V \cup W\rangle$ (see [@gubeladze1998isomorphism Theorem 2.1 (b)]). 
Thus, $$\begin{aligned} \begin{split}\nonumber S &\cong \langle \{(id,nd-id): 0 \leqq i \leqq n \} \cup \{ (a+jd,2nd-jd-a): 0 \leqq j \leqq n-1 \} \rangle\\ &\cong \langle \{(2i,2n-2i)): 0 \leqq i \leqq n \} \cup \{ (2j+l,4n-2j-2k+1): 0 \leqq j \leqq n-1 \} \rangle \end{split}\end{aligned}$$ for some $n \geqq 2$ and $1 \leqq k \leqq n+1$. ◻* *Proof of Theorem [Theorem 37](#AGisNGw){reference-type="ref" reference="AGisNGw"}.* Since $R$ is not level by Theorem [Theorem 31](#AGandLevel){reference-type="ref" reference="AGandLevel"}, it is nearly Gorenstein by Theorem [Theorem 38](#nonlevelAGwithSocleDeg2){reference-type="ref" reference="nonlevelAGwithSocleDeg2"}. ◻ **Examples 39**. For semi-standard graded affine semigroup rings where either the socle degree or the dimension is greater than 2, almost Gorenstein property does not imply nearly Gorenstein property in general. We can check the following is true in the same way as Examples [Examples 18](#exNG1){reference-type="ref" reference="exNG1"}. - $R=\mathbb{Q}[t,st,s^5t,s^4t^2]$ is non-nearly Gorenstein almost Gorenstein semi-standard graded affine semigroup ring with $\dim R=2$ and $s(R)=3$ where $\deg s^at^b=b$ for any $a,b \in \mathbb{N}$. From $h_3=1$, we can also confirm that $R$ is not nearly Gorenstein by Corollary [Corollary 16](#NGhvector){reference-type="ref" reference="NGhvector"}. - $R=\mathbb{Q}[u,s^2u,t^2u,t^4u,tu^2,t^3u^2]$ is non-nearly Gorenstein almost Gorenstein semi-standard graded affine semigroup ring with $\dim R=3$ and $h(R)=(1,1,2)$ where $\deg s^at^bu^c=c$ for any $a,b,c \in \mathbb{N}$. - $R= %\QQ[wyz^2,wy^2z,wx,wxyz,wxy^2z^2,w^2xy^2z^2,w^2xy^3z^3]\cong \mathbb{Q}[P_{1,1}] \cong \mathbb{Q}[wyz^2,wy^2z,wx,wxyz,wxy^2z^2,w^2xy^2z^2,w^2xy^3z^3]$ $($see [@higashitani2018non Theorem 4.5]$)$ is non-nearly Gorenstein almost Gorenstein Ehrhart ring with $\dim R=4$ and $h(R)=(1,1,2)$. 10 Winfried Bruns and H Jürgen Herzog. . Number 39. Cambridge university press, (1998). Anthony V. Geramita. . American Mathematical Soc., (2007). Shiro Goto, Naoyoshi Suzuki, and Keiichi Watanabe. On affine semigroup rings. , 2(1):1--12, (1976). Shiro Goto, Ryo Takahashi, and Naoki Taniguchi. Almost gorenstein rings--towards a theory of higher dimension. , 219(7):2666--2712, (2015). Daniel R. Grayson and Michael E. Stillman. Macaulay2, a software system for research in algebraic geometry. Available at <http://www2.macaulay2.com>. Joseph Gubeladze. The isomorphism problem for commutative monoid rings. , 129(1):35--65, (1998). Christian Haase, Florian Kohl, and Akiyoshi Tsuchiya. Levelness of order polytopes. , 34(2):1261--1280, (2020). Thomas Hall, Max Kölbl, Koji Matsushita, and Sora Miyashita. Nearly gorenstein polytopes. , (2023). Jürgen Herzog, Takayuki Hibi, and Dumitru I Stamate. The trace of the canonical module. , 233:133--165, (2019). Takayuki Hibi. Level rings and algebras with straightening laws. , 117(2):343--362, (1988). Akihiro Higashitani. Almost gorenstein homogeneous rings and their $h$-vectors. , 456:190--206, (2016). Akihiro Higashitani and Koji Matsushita. Levelness versus almost gorensteinness of edge rings of complete multipartite graphs. , 50(6):2637--2652, (2022). Akihiro Higashitani and Kohji Yanagawa. Non-level semi-standard graded cohen--macaulay domain with $h$-vector $(h_0, h_1, h_2)$. , 222(1):191--201, (2018). Lukas Katthän. Non-normal affine monoid algebras. , 146:223--233, (2015). Shinya Kumashiro, Naoyuki Matsuoka, and Taiga Nakashima. 
Nearly gorenstein local rings defined by maximal minors of a $2 \times n$ matrix. , (2023). Ezra Miller and Bernd Sturmfels. , volume 227. Springer, (2005). Sora Miyashita. Levelness versus nearly Gorensteinness of homogeneous domains. , (2022). Sora Miyashita. Nearly Gorenstein projective monomial curves of small codimension. , (2023). Mitsuhiro Miyazaki. Gorenstein on the punctured spectrum and nearly gorenstein property of the ehrhart ring of the stable set polytope of an $h$-perfect graph. , (2022). Alessio Moscariello and Francesco Strazzanti. Nearly gorenstein vs almost gorenstein affine monomial curves. , 18(4):127, (2021). Irena Peeva and Bernd Sturmfels. Syzygies of codimension 2 lattice ideals. , 229(1):163, (1998). Richard P. Stanley. $f$-vectors and $h$-vectors of simplicial posets. , 71(2-3):319--331, (1991). Richard P. Stanley. On the hilbert function of a graded cohen-macaulay domain. , 73(3):307--314, (1991). Richard P. Stanley. , volume 41. Springer Science & Business Media, (2007). Kohji Yanagawa. Castelnuovo's lemma and $h$-vectors of cohen-macaulay homogeneous domains. , 105(1):107--116, (1995).
--- author: - Pakanun Dokyeesun title: Maker-Breaker domination game on Cartesian products of graphs --- ------------------------------------------------------------------------ Submission for publication in [Opuscula Mathematica]{.smallcaps} ------------------------------------------------------------------------ **Abstract.** The Maker-Breaker domination game is played on a graph $G$ by two players, called Dominator and Staller. They alternately select an unplayed vertex in $G$. Dominator wins the game if he forms a dominating set while Staller wins the game if she claims all vertices from a closed neighborhood of a vertex. If Dominator is the winner in the D-game (or the S-game), then $\gamma_{\rm MB}(G)$ (or $\gamma_{\rm MB}'(G)$) is defined by the minimum number of moves of Dominator to win the game under any strategy of Staller. Analogously, when Staller is the winner, $\gamma_{\rm SMB}(G)$ and $\gamma_{\rm SMB}'(G)$ can be defined in the same way. We determine the winner of the game on the Cartesian product of paths, stars, and complete bipartite graphs, and how fast the winner wins. We prove that Dominator is the winner on $P_m \square P_n$ in both the D-game and the S-game, and $\gamma_{\rm MB}(P_m \square P_n)$ and $\gamma_{\rm MB}'(P_m \square P_n)$ are determined when $m=3$ and $3 \le n \le 5$. Dominator also wins on $G \square H$ in both games if $G$ and $H$ admit nontrivial path covers. Furthermore, we establish the winner in the D-game and the S-game on $K_{m,n} \square K_{m',n'}$ for every positive integers $m, m',n,n'$. We prove the exact formulas for $\gamma_{\rm MB}(G)$, $\gamma_{\rm MB}'(G)$, $\gamma_{\rm SMB}(G)$, and $\gamma_{\rm SMB}'(G)$ where $G$ is a product of stars. **Keywords:** domination game; Maker-Breaker game; Maker-Breaker domination game; hypergraph; Cartesian product of graphs. **Mathematics Subject Classification (2020):** 91A24, 05C57, 05C65, 05C69. # Introduction For a positive integer $n$, we set $[n] = \{1, \ldots, n\}$. Let $G =(V(G), E(G))$ be a graph. The order of $G$ is denoted by $n(G)$. A graph $H$ is a *subgraph* of $G$ if $V(H) \subseteq V(G)$ and $E(H)\subseteq E(G)$. For any $v \in V(G)$, the *open neighborhood* $N_G(v)$ of $v$ is the set of all vertices adjacent to $v$ and the *closed neighborhood* of $v$ is $N_G[v] = N_G(v) \cup \{v\}$. If $S \subseteq V(G)$, then $N_G(S)=\bigcup_{v \in S} N_G(v)$ and $N_G[S]=\bigcup_{v \in S} N_G[v]$. A set $D \subseteq V(G)$ is a *dominating set* if $N_G[D]=V(G)$ and the minimum cardinality of dominating sets is the *domination number* $\gamma(G)$ of $G$. A set $M \subseteq E(G)$ is a *matching* if no two edges share a vertex in $M$. A matching $M$ is a *perfect matching* if $M$ covers every vertex in $G$. A *path cover* of $G$ is a set of pairwise vertex-disjoint paths which cover $V(G)$. A path cover of $G$ in which every path has length at least $1$ is called a *nontrivial path cover*. The *Cartesian product* $G\,\square\,H$ of graphs $G$ and $H$ is defined on the vertex set $V(G)\times V(H)$ such that two vertices $(g,h)$ and $(g',h')$ are adjacent if either $gg'\in E(G)$ and $h=h'$, or $g=g'$ and $hh'\in E(H)$. If $h\in V(H)$, then the subgraph of $G\,\square\,H$ induced by the vertex set $\{(g,h):\ g\in V(G)\}$ is a *$G$-layer*, and denoted by $G^h$. Analogously the $H$-layers are defined and denoted by $^g\!H$ for a fixed vertex $g\in V(G)$. 
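To keep the product construction concrete, here is a minimal Python sketch (our own illustration, not part of the original text) that builds the Cartesian product of two graphs given as (vertex set, edge set) pairs and checks the layer structure on $P_3\,\square\,P_4$; the helper names `cartesian_product` and `path` are ours and only the definitions above are assumed.

```python
from itertools import product

def cartesian_product(G, H):
    """Cartesian product of two graphs given as (vertex set, edge set) pairs.

    Vertices are pairs (g, h); an edge joins (g, h) and (g', h') when
    gg' is an edge of G and h = h', or g = g' and hh' is an edge of H.
    Edges are stored as frozensets, so orientation does not matter.
    """
    (VG, EG), (VH, EH) = G, H
    V = set(product(VG, VH))
    E = set()
    for e in EG:                      # copy every edge of G inside each H-coordinate
        g, gp = tuple(e)
        for h in VH:
            E.add(frozenset({(g, h), (gp, h)}))
    for e in EH:                      # copy every edge of H inside each G-coordinate
        h, hp = tuple(e)
        for g in VG:
            E.add(frozenset({(g, h), (g, hp)}))
    return V, E

def path(n):
    """The path P_n with vertex set {1, ..., n}."""
    return set(range(1, n + 1)), {frozenset({i, i + 1}) for i in range(1, n)}

# Example: P_3 times P_4 (a 3 x 4 grid) has 12 vertices and 3*3 + 2*4 = 17 edges.
V, E = cartesian_product(path(3), path(4))
print(len(V), len(E))                 # 12 17

# A G-layer is induced by the vertices with a fixed second coordinate;
# every P_3-layer of this product is again a path on 3 vertices (2 edges).
layer = {v for v in V if v[1] == 2}
print(len(layer), len({e for e in E if e <= layer}))   # 3 2
```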
A *hypergraph* ${\cal H}=(V({\cal H}), E({\cal H}))$ consists of the vertex set $V({\cal H})$ and the *(hyper)edge* set $E({\cal H})$ containing nonempty subsets of $V({\cal H})$ of any cardinality. In other words, $E({\cal H}) \subseteq 2^{V({\cal H})}$. A vertex set $T \subseteq V({\cal H})$ is a *transversal* (or vertex cover) in ${\cal H}$ if every (hyper)edge in ${\cal H}$ contains at least one vertex in $T$. Note that a loopless graph is a hypergraph where each (hyper)edge contains exactly two vertices. Many of the fundamental definitions associated with graphs can be extended to hypergraphs (for more details, see [@berge]). The Maker-Breaker game is a positional game introduced in 1973 by Erdős and Selfridge [@erdos-1973] and was widely studied (see [@beck-1981; @hefetz-2014]). The game is played on a hypergraph ${\cal H}$ by two players, named Maker and Breaker, where the hyperedges of ${\cal H}$ are the *winning sets*. During the game, the players alternately select unplayed vertices in the hypergraph and Maker wins if he occupies all the vertices of a winning set, otherwise Breaker wins. In 2020, the *Maker-Breaker domination game* (MBD game) was introduced by Duchêne, Gledel, Parreau, and Renault in [@duchene-2020]. The game is played on a graph $G$ by two players, called Dominator and Staller. Both players alternately select an unplayed vertex in $G$. Dominator wins the game if he can form a dominating set while Staller wins if she can prevent Dominator from forming a dominating set. In other words, Staller wins if she claims a closed neighborhood of a vertex in $G$. The total version of the game was studied in [@forcan-2022; @gledel-2020]. The MBD game can be considered as a variation of the Maker-Breaker game where the winning sets are minimal dominating sets, Dominator is Maker and Staller is Breaker. Conversely, if the closed neighborhoods of the vertices are considered to be the winning sets, Dominator is Breaker and Staller is Maker, see [@bujtas-2021; @bujtas-2023]. The names of the players are selected to be consistent with the domination game. For more details and results on the domination game, see [@bresar-2010; @book-2021; @jacobson-1984; @kinnersley-2013]. An MBD game is referred to as *D-game* if Dominator is the first player in the game and the game is called *S-game* when Staller starts the game. The sequence $d_1, s_1, d_2, s_2, \ldots$ denotes the played vertices in a D-game, and the sequence $s'_1, d'_1, s'_2, d'_2, \ldots$ denotes the played vertices in an S-game. In [@gledel-2019], the *Maker-Breaker domination number* (MBD-number) was introduced in the following way. If Dominator has a winning strategy in the D-game, the MBD-number $\gamma_{\rm MB}(G)$ represents the minimum number of moves Dominator needs to win the game when both players play optimally. Otherwise, when Dominator cannot win the game, $\gamma_{\rm MB}(G) = \infty$. Similarly, $\gamma_{\rm MB}'(G)$ is defined in the same way for the S-game. In [@bujtas-2021], the *Staller-Maker-Breaker domination number* (SMBD-number) was defined analogously to the MBD-number. That is, $\gamma_{\rm SMB}(G)$ is the minimum number of moves Staller needs to win the D-game if both players play optimally. If Staller does not have a winning strategy in the D-game, we set $\gamma_{\rm SMB}(G)= \infty$. For the S-game the corresponding invariant is denoted by $\gamma_{\rm SMB}'(G)$. Recently, Forcan and Qi [@forcan-qi-2022] studied the Maker-Breaker domination game on Cartesian products of graphs.
It was shown that if Dominator wins on a graph $G$ in the D-game and the S-game, then Dominator also wins on $G \,\square\,H$ in both games for every graph $H$. In particular, Dominator always wins on $P_2 \,\square\,H$ in the D-game and the S-game. This inspired us to consider the winner of the game on $P_3 \,\square\,H$ for any graph $H$. To approach the problem, we use nontrivial path covers to investigate the winner on Cartesian products of graphs. #### Structure of the paper. In this paper, we study the Maker-Breaker domination game on Cartesian products of paths and complete bipartite graphs. In the next section, we first provide basic properties of the game and recall results which are used in the rest of the paper. Then, in Section $3$, we determine the winner of the D-game and the S-game on $P_m \,\square\,P_n$ and $G \,\square\,H$, where $G$ and $H$ admit nontrivial path covers. Moreover, formulas for $\gamma_{\rm MB}(P_3 \,\square\,P_n)$ and $\gamma_{\rm MB}'(P_3 \,\square\,P_n)$ are proved for $n \in \{3,4\}$. Not all graphs admit a nontrivial path cover, for example, stars. In Section $4$, the outcome of games on products of complete bipartite graphs is studied and it is determined how fast the winner can win the game on products of stars. # Preliminaries By the definition of the MBD game exactly one player wins in the D-game (or the S-game). The *outcome* is defined in [@duchene-2020] according the possible winners of the game depending on who is the first player. **Definition 1**. The outcome $o(G)$ of $G$ is one of the following: - $\cal D$, if Dominator has a winning strategy in the D-game and the S-game, - $\cal S$, if Staller has a winning strategy in the D-game and the S-game, - $\cal N$, if the first player has a wining strategy in the game. These are all possible outcomes of the MBD game because the remaining case is not possible as it was shown in [@hefetz-2014]. The disjoint union of graphs $G$ and $H$ is denoted by $G \cup H$. The following theorem shows the outcome on $G \cup H$. **Theorem 2** ([@duchene-2020]). *Let $G$ and $H$ be graphs. Then* - *If $o(G)= \cal S$ or $o(H)= \cal S$, then $o(G \cup H)= \cal S$.* - *If $o(G)=o(H)= \cal N$, then $o(G \cup H)= \cal S$.* - *If $o(G)=o(H)= \cal D$, then $o(G \cup H)= \cal D$.* - *Otherwise, $o(G \cup H)= \cal N$.* Note that if $e \in E(G)$, then, for every $x \in V(G)$ that $N_{G-e}[x] \subseteq N_G[x]$. By Proposition 2.2 in [@bujtas-2021], it implies that deleting $e$ is not a disadvantage for Staller. Thus, if Staller wins the D-game and the S-game on $G$, then she also wins the games in $G-e$. On the other hand, if Dominator wins the D-game and the S-game on $G-e$, then he also wins the games in $G$. This result implies the following lemma which will be used often later. **Lemma 3**. *Let $G$ be a graph and $e \in E(G)$.* - *If $o(G-e) = \cal D$ then $o(G)= \cal D$. Moreover, $\gamma_{\rm MB}(G-e) \ge \gamma_{\rm MB}(G)$ and $\gamma_{\rm MB}'(G-e) \ge \gamma_{\rm MB}'(G)$.* - *If $o(G) = \cal S$ then $o(G-e)= \cal S$. Moreover, $\gamma_{\rm SMB}(G-e) \le \gamma_{\rm SMB}(G)$ and $\gamma_{\rm SMB}'(G-e) \le \gamma_{\rm SMB}'(G)$.* The pairing strategy for Breaker, defined in [@hefetz-2014], ensures that she can win the Maker-Breaker game. It implies for the MBD game that Dominator has a winning strategy in both the D-game and the S-game if the graph has a perfect matching. The next lemma gives a more general result. **Lemma 4** ([@bujtas-2021]). 
*Consider an MBD game on $G$ and let $X$ and $Y$ be the sets of vertices played by Dominator and Staller, respectively, until a moment during the game. If there exists a matching $M$ in $G-(X\cup Y)$ such that $V(G) \setminus V(M) \subseteq N_G[X]$, then Dominator has a strategy to win the continuation of the game, no matter who plays the next vertex.* **Remark 5**. To win the continuation of the game, Dominator applies the following strategy. If Staller claims an unplayed vertex $v$ such that $uv \in E(M)$, Dominator responds by playing vertex $u$ it it is unplayed. Otherwise Dominator plays an arbitrary vertex. By this strategy, the number of moves of Dominator is at most $|X|+|M|$. **Lemma 6** (No-Skip Lemma [@gledel-2019; @bujtas-2021]). *Let $G$ be a graph.* - *In an optimal strategy of Dominator to achieve $\gamma_{\rm MB}(G)$ or $\gamma_{\rm MB}'(G)$ it is never an advantage for him to skip a move. Moreover, if Staller skips a move it can never disadvantage Dominator.* - *In an optimal strategy of Staller to achieve $\gamma_{\rm SMB}(G)$ or $\gamma_{\rm SMB}'(G)$ it is never an advantage for her to skip a move. Moreover, if Dominator skips a move it can never disadvantage Staller.* As a further consequence of No-skip Lemma, S-game is the D-game when Dominator skips the first move. Similarly, D-game is the S-game when Staller skips the first move. These facts imply the following consequence of No-Skip Lemma. **Corollary 7** ([@gledel-2019; @bujtas-2021]). *If $G$ is a graph, then* - *$\gamma_{\rm MB}(G) \le \gamma_{\rm MB}'(G).$* - *$\gamma_{\rm SMB}'(G) \le \gamma_{\rm SMB}(G).$* In [@gledel-2019], sharp bounds for $\gamma_{\rm MB}(G)$ and $\gamma_{\rm MB}'(G)$ were determined. As each vertex is played at most once during the game, $\gamma_{\rm MB}(G) < \infty$ implies $$\begin{aligned} \label{bound1} 1 \le \gamma_{\rm MB}(G) \le \left\lceil \frac{n(G)}{2} \right\rceil. \end{aligned}$$ Similarly, if $\gamma_{\rm MB}'(G) < \infty$, then $$\begin{aligned} \label{bound2} 1 \le \gamma_{\rm MB}'(G) \le \left\lfloor \frac{n(G)}{2} \right\rfloor. \end{aligned}$$ Forcan and Qi investigated the MBD-number on the Cartesian product of two graphs when Dominator is the winner on at least one of these two graphs in the D-game and the S-game. We can rewrite this result as follows. **Theorem 8** ([@forcan-qi-2022]). *Let $G$ and $H$ be two graphs. If $o(G) = \cal D$ or $o(H) = \cal D$, then $o(G\,\square\,H) = \cal D$.* This result implies that $o(P_{2m} \,\square\,P_n)=D$ for every positive integer $m, n$. The main result of [@forcan-qi-2022] asserts the following; **Theorem 9** ([@forcan-qi-2022]). *$\gamma_{\rm MB}'(P_2 \,\square\,P_n) = n$ for $n \ge 1$ and $\gamma_{\rm MB}(P_2 \,\square\,P_n) = n-2$ for $n \ge 13$.* To further investigate the outcome of the game on Cartesian products of paths and complete bipartite graphs, we will use nontrivial path covers. **Theorem 10** ([@Lavasz-1970]). *A graph $G$ has a nontrivial path cover if and only if $i(G-S) \le 2 |S|$ for every $S \subseteq V(G)$ where $i(G-S)$ is the number of isolated vertices in $G-S$.* By Theorem [Theorem 10](#thm: factor){reference-type="ref" reference="thm: factor"}, we can conclude that $K_{r,s}$ does not have a nontrivial path cover if $1 \le 2r <s$. # Grid graphs In this section, we set $Z = P_m\,\square\,P_n$, where $m, n$ are positive integers, $V(Z) = \{(i,j):\ i\in [m], j\in [n]\}$, and $E(Z) = \{(i,j)(i,j+1):\ i\in [m], j\in [n-1]\} \cup \{(i,j)(i+1,j):\ i\in [m-1], j\in [n]\}$. 
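The next minimal Python sketch (again our own illustration, with hypothetical helper names `grid` and `row_matching`) enumerates $V(Z)$ and $E(Z)$ exactly as in the formulas above and, for even $n$, exhibits a perfect matching of $Z$; by the pairing strategy (Lemma 4 with $X=Y=\emptyset$, see Remark 5), such a matching already gives Dominator a winning strategy in both the D-game and the S-game.

```python
from itertools import product

def grid(m, n):
    """V(Z) and E(Z) for Z = P_m x P_n, written exactly as in the formulas above."""
    V = set(product(range(1, m + 1), range(1, n + 1)))
    E = ({frozenset({(i, j), (i, j + 1)}) for i in range(1, m + 1) for j in range(1, n)}
         | {frozenset({(i, j), (i + 1, j)}) for i in range(1, m) for j in range(1, n + 1)})
    return V, E

V, E = grid(3, 4)
print(len(V), len(E))            # 12 and 3*3 + 2*4 = 17

def row_matching(m, n):
    """For even n, pairing (i, 2j-1) with (i, 2j) gives a perfect matching of Z."""
    assert n % 2 == 0
    return {frozenset({(i, 2 * j - 1), (i, 2 * j)})
            for i in range(1, m + 1) for j in range(1, n // 2 + 1)}

# With X = Y = empty, Lemma 4 applied to this matching shows that Dominator
# wins both games on P_m x P_n whenever n is even.
M = row_matching(3, 4)
print(M <= E, len(M) == len(V) // 2)   # True True
```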
We consider the outcome of the MBD game on $Z$ and the MBD-number of some small grids. The following result round off previous partial result on the outcome in the game on grids. **Theorem 11**. *If $n \ge m \ge 2$, then $o(P_m \,\square\,P_n)= \cal D$.* Assume that $n \ge m \ge 2$ and now $Z = P_m \,\square\,P_n$. We will show that Dominator has a winning strategy in the D-game and the S-game on $Z$. By Corollary [Corollary 7](#cor:skip){reference-type="ref" reference="cor:skip"}, it suffices to show that Dominator wins the S-game on $Z$. #### Case 1. $m=2$. Then Dominator wins the S-game by Theorem [Theorem 8](#thm:cart){reference-type="ref" reference="thm:cart"}. #### Case 2. $m=n=3$. Consider the possible first move $s'_1$ of Staller. By symmetry, it is enough to consider the following cases. - If $s'_1 \in \{(1,1)$, $(2,2)\}$, then Dominator replies by playing $d'_1 =(1,2)$. We can find a matching $M$ in $Z-\{s'_1, (1,2)\}$ such that $V(Z)\setminus V(M) \subseteq N_Z[(1,2)]$. By Lemma [Lemma 4](#lem:pairing){reference-type="ref" reference="lem:pairing"}, Dominator has a winning strategy in the game. - If $s'_1=(1,2)$, then Dominator replies by playing $d'_1=(2,1)$. In the second turn, if $s'_2 \in \{(1,1), (2,2), (2,3), (3,1)\}$, then Dominator plays $d'_2=(1,3)$. One can see that only $(3,2)$ and $(3,3)$ remain undominated unplayed vertices by $\{d'_1,d'_2\}$. Then Dominator will win the game by playing one of these two vertices in his next move. Suppose that $s'_2 \in \{(1,3), (3,2), (3,3)\}$. Then Dominator replies by playing $d'_2=(2,3)$. Thus only vertices $(1,2)$ and $(3,2)$ remain undominated $\{d'_1,d'_2\}$. If $s'_3 \ne (2,2)$, then Dominator replies $d'_3 = (2,2)$ and he wins the game. Assume that $s'_3=(2,2)$. Dominator plays $d'_3 \in \{ (1,1), (1,3)\}$ and then he will play an unplayed vertex in the layer $^3 P_3$ in his next move. In all cases, Dominator wins the game within his next two moves. Hence, Dominator has a winning strategy in the S-game on $P_3\,\square\,P_3$. #### Case 3. $m \ge 3$ and $n \ge 4$. Observe that each path $P_{\ell}$, where $\ell \ge 3$ admits a path cover obtained from only copies of $P_2$ and $P_3$. Let $X$ and $Y$ be path covers of $P_m$ and $P_n$ containing only copies of $P_2$ and $P_3$, respectively. Let $P_m'$ and $P_n'$ be the disjoint union of paths from the path covers $X$ and $Y$, respectively. Set $Z'= P_m' \,\square\,P_n'$. Then $Z'$ is a disjoint union of copies of $P_2 \,\square\,P_2$, $P_2 \,\square\,P_3$, $P_3 \,\square\,P_2$, and $P_3 \,\square\,P_3$. By Case 1. and Case 2., Dominator wins the S-game on every component of $Z'$. By Theorem [Theorem 2](#thm:union){reference-type="ref" reference="thm:union"}, Dominator can win the game on $Z'$. Since $Z'$ can be spanning subgraph obtained from $Z=P_m\,\square\,P_n$ by deleting some edges and Dominator has a winning strategy in the game on $Z'$, Dominator can win the S-game on $Z$ by Lemma [Lemma 3](#lem:o_edgedelete){reference-type="ref" reference="lem:o_edgedelete"}. 0◻ According to Theorem [Theorem 11](#thm:pmpn){reference-type="ref" reference="thm:pmpn"}, we can conclude the outcome of the MBD game on Cartesian products of graphs which admit nontrivial path covers as follows. **Theorem 12**. *If $G$ and $H$ are graphs which admit nontrivial path covers, then $o(G \,\square\,H) = \cal D$.* Let $X$ and $Y$ be nontrivial path covers of $G$ and $H$, respectively. Let $G'$ and $H'$ be the disjoint union of paths from the path covers $X$ and $Y$, respectively. 
Then $G' \,\square\,H'$ is a disjoint union of copies of Cartesian products of nontrivial paths. By Theorem [Theorem 11](#thm:pmpn){reference-type="ref" reference="thm:pmpn"}, Dominator wins the MBD games on every component in $G' \,\square\,H'$. This implies that Dominator wins the games on $G' \,\square\,H'$ by Theorem [Theorem 2](#thm:union){reference-type="ref" reference="thm:union"}. Observe that $G' \,\square\,H'$ is a spanning subgraph of $G \,\square\,H$ obtained by deleting some edges. By Lemma [Lemma 3](#lem:o_edgedelete){reference-type="ref" reference="lem:o_edgedelete"}, it follows that Dominator wins the games on $G \,\square\,H$. 0◻ **Proposition 13**. $\gamma_{\rm MB}(P_3 \,\square\,P_3) = \gamma_{\rm MB}' (P_3 \,\square\,P_3) =4$. Now $Z = P_3 \,\square\,P_3$. By Theorem [Theorem 11](#thm:pmpn){reference-type="ref" reference="thm:pmpn"}, we know that Dominator has a winning strategy in the D-game and the S-game. By Corollary [Corollary 7](#cor:skip){reference-type="ref" reference="cor:skip"} (i) and inequality ([\[bound2\]](#bound2){reference-type="ref" reference="bound2"}), $\gamma_{\rm MB}(Z) \le \gamma_{\rm MB}'(Z) \le \lfloor\frac{9}{2} \rfloor = 4$. It remains to show that $\gamma_{\rm MB}(Z) \ge 4$, that is, Staller has a strategy to ensure that Dominator cannot win the game within three moves. By symmetry, it is enough to consider the following three cases. - If $d_1=(1,1)$, then Staller replies by playing $s_1=(3,2)$. If $d_2 \notin \{(2,1), (3,1)\}$, then Staller responds by choosing $s_2=(3,1)$. If $d_2 \in \{(2,1), (3,1)\}$, then Staller selects $s_2=(2,3)$. In any case, Dominator needs to play at least two more vertices to dominate $Z$. - If $d_1=(1,2)$, then Staller replies by playing $s_1=(3,1)$. If $d_2 \in \{(1,3), (2,3), (3,3)\}$, then Staller responds by choosing $s_2=(2,1)$. If $d_2 \in \{(1,1), (2,1)\}$, then Staller replies by playing $s_2=(3,3)$. If $d_2 =(3,2)$, then Staller selects $s_2=(2,2)$. Otherwise, Staller plays $s_2=(3,2)$. By this strategy, Dominator needs to play two more vertices to dominate $Z$. - If $d_1=(2,2)$, then Staller replies by playing $s_1=(1,2)$. In the second move of Staller, she plays $s_2=(2,1)$ if it is possible. Otherwise, she will play $s_2=(2,3)$. Then Dominator cannot dominate $Z$ within three moves. Thus Dominator needs to play at least four moves to win the game which means that $\gamma_{\rm MB}(Z) \ge 4$. We conclude that $\gamma_{\rm MB}(Z) = \gamma_{\rm MB}' (Z) =4$. 0◻ **Proposition 14**. $\gamma_{\rm MB}(P_3 \,\square\,P_4) =5$ and $\gamma_{\rm MB}'(P_3 \,\square\,P_4) =6$. Now $Z = P_3 \,\square\,P_4$. We will prove that $\gamma_{\rm MB}(Z) \ge 5$ by providing a strategy for Staller which ensures that Dominator cannot form a dominating set within four moves in the D-game. Consider the possible first move $d_1$ of Dominator. - If $d_1 = (1,1)$, then Staller plays $s_1=(3,2)$. In her second move, she plays $s_2 \in \{(1,4), (3,4)\}$. See Figure [\[fig:Staller for P3P4\]](#fig:Staller for P3P4){reference-type="ref" reference="fig:Staller for P3P4"} (1.1), (1.2). - If $d_1 = (1,2)$, then Staller replies by playing $s_1=(3,1)$. After that, she plays $s_2=(3,3)$ if it is possible, otherwise she plays $s_2=(2,1)$. See Figure [\[fig:Staller for P3P4\]](#fig:Staller for P3P4){reference-type="ref" reference="fig:Staller for P3P4"} (2.1), (2.2). - If $d_1 = (2,1)$, then Staller responds by playing $s_1=(1,3)$. Then she plays $s_2=(3,3)$ if it is possible, otherwise she plays $s_2=(1,4)$.
See Figure [\[fig:Staller for P3P4\]](#fig:Staller for P3P4){reference-type="ref" reference="fig:Staller for P3P4"} (3.1), (3.2). - If $d_1 = (2,2)$, then Staller replies by playing $s_1=(2,1)$. In her second move, she plays $s_2 \in \{(1,4), (3,4)\}$. See Figure [\[fig:Staller for P3P4\]](#fig:Staller for P3P4){reference-type="ref" reference="fig:Staller for P3P4"} (4.1), (4.2). From the above strategies, one can see that Dominator cannot form a dominating set within four moves. Therefore $\gamma_{\rm MB}(Z) \ge 5.$ Next, we will provide a strategy for Dominator to win the D-game with five moves. Assume that Dominator starts the game by playing $d_1=(2,1)$. By symmetry, it is enough to consider the following cases. - If $s_1 \in \{(1,1), (3,1), (3,2), (3,4), (2,2),(2,3)\}$, then Dominator replies by playing $d_2 =(3,3)$. Then there is a matching $M \{ (1,2)(1,3), (1,4)(2,4)\}$ of size 2 in $Z - \{d_1,s_1,d_2\}$ such that $V(G)\setminus V(M) \subseteq N_G[\{d_1,d_2\}]$. By Lemma [Lemma 4](#lem:pairing){reference-type="ref" reference="lem:pairing"} and Remark [Remark 5](#re:pairing){reference-type="ref" reference="re:pairing"}, Dominator wins the game with four moves. See Figure [\[fig:Dom for P3P4\]](#fig:Dom for P3P4){reference-type="ref" reference="fig:Dom for P3P4"} (1). - If $s_1 =(3,3)$, then Dominator replies by playing $d_2 =(1,3)$. If $s_2 \ne (3,4)$, then Dominator plays $d_3 =(3,4)$ and wins the game in the next move by dominate vertex $(3,2)$. Otherwise, $s_2 = (3,4)$, Dominator will play $(2,4)$ and he needs at most two more moves to dominate $(3,2)$ and $(3,3)$. See Figure [\[fig:Dom for P3P4\]](#fig:Dom for P3P4){reference-type="ref" reference="fig:Dom for P3P4"} (2.1), (2.2). - If $s_1 =(2,4)$, then Dominator replies by playing $d_2 =(1,3)$. If $s_2 \ne (3,4)$, then Dominator plays $d_3 =(3,4)$ and he can win the game in the next move. Otherwise, Dominator plays $d_3 =(3,3)$ and wins the game in the next move by dominating $(2,4)$. See Figure [\[fig:Dom for P3P4\]](#fig:Dom for P3P4){reference-type="ref" reference="fig:Dom for P3P4"} (3.1), (3.2). Thus Dominator has a strategy to win the game within five moves which implies $\gamma_{\rm MB}(Z) \le 5$. Since Dominator is the winner in the S-game, $\gamma_{\rm MB}'(Z) \le 6$ by $(2)$. It remains to show that $\gamma_{\rm MB}'(Z) \ge 6$. We will provide a strategy for Staller to ensure that Dominator needs to play at least six moves. Staller starts the game with $s_1 =(2,1)$. #### Case 1. Dominator plays $d_1 \in [3]\times \{3,4\}$. Then Staller responds by playing $s_2=(3,1)$. It forces Dominator to reply $d_2=(3,2)$, otherwise Staller will win in the next move. After that Staller plays $s_3=(1,1)$ and she will win in her next move by playing either $(1,2)$ or $(2,2)$. #### Case 2. Dominator plays $d_1 =(2,2)$. Then Staller replies by playing $s_2=(1,2)$ and it forces Dominator to play $d_2 =(1,1)$. Then Staller plays $s_3=(3,2)$ and it forces Dominator to play $d_3 =(3,1)$, otherwise Staller will win in her next move. After that Staller plays $s_4=(3,4)$. - If $d_4 =(3,3)$, then Staller responds by selecting $s_5=(1,4)$. - If $d_4 =(2,3)$, then Staller responds by selecting $s_5=(2,4)$. - If $d_4 =(2,4)$, then Staller responds by selecting $s_5=(2,3)$. - If $d_4 =(1,3)$, then Staller responds by selecting $s_5=(2,4)$. - If $d_4 =(1,4)$, then Staller responds by selecting $s_5=(3,3)$. By above strategy, Dominator cannot dominate $Z$ with five moves if his first move is $d_1=(2,2)$. 
See Figure [\[fig:Stall Sgame\]](#fig:Stall Sgame){reference-type="ref" reference="fig:Stall Sgame"} (2.1)-(2.5). #### Case 3. Dominator plays $d_1 =(3,1)$. Then Staller plays $s_2=(1,2)$ and Dominator must select $d_2=(1,1)$, otherwise Staller will win in the next move. After that Staller plays $s_3=(2,3)$. - If $d_3 =(3,2)$, then Staller selects $s_4=(2,4)$. If $d_4=(1,3)$, then Staller plays $s_5=(3,4)$. If $d_4=(1,4)$, then Staller plays $s_5=(3,3)$. If $d_4=(3,3)$, then Staller plays $s_5=(1,4)$. If $d_4=(3,4)$, then Staller plays $s_5=(1,3)$. In each case, Dominator cannot dominate $Z$ within five moves. - If $d_3 =(3,3)$, then Staller responds by selecting $s_4=(1,4)$. If $d_4 \neq (1,3)$, then Staller plays $s_5=(1,3)$ and wins the game. So, Dominator needs to play $d_4 = (1,3)$ and he also needs two more moves to dominate $Z$. Thus he needs to play at least six moves. - If $d_3 =(2,2)$, then Staller responds by selecting $s_4=(1,4)$ and Dominator has to play $d_4=(1,3)$. Thus Staller plays $s_5=(3,4)$ and Dominator needs two more moves to win the game. - If $d_3 =(2,4)$, then Staller responds by selecting $s_4=(3,2)$. Thus Dominator needs three more moves to dominate $Z$. - If $d_3 =(1,3)$, then Staller responds by selecting $s_4=(3,4)$. Then Staller plays $s_5=(2,4)$ if it is possible, otherwise $s_5=(3,2)$. Thus Dominator needs to play at least six moves. - If $d_3 =(1,4)$, then Staller responds by selecting $s_4=(2,2)$ and it forces Dominator to play $d_4=(3,2)$. Then Staller selects $s_5 = (1,3)$ and Dominator needs two more moves to win the game. Therefore, Dominator needs at least six moves to win the game if he starts the game with $d_1=(3,1)$. See Figure [\[fig:Stall Sgame\]](#fig:Stall Sgame){reference-type="ref" reference="fig:Stall Sgame"} (3.1)-(3.6). #### Case 4. Dominator plays $d_1 =(3,2)$. Then Staller replies by playing $s_2=(2,4)$. - If $d_2 \in \{(3,1), (2,2), (1,1), (1,2)\}$, then Staller replies by playing $s_3=(3,4)$ and Dominator has to play $d_3=(1,3)$, otherwise Staller will win in the next move. After that Staller plays $s_4=(3,3)$ and she will win in her next turn. - If $d_2= (3,3)$, then Staller replies by playing $s_3=(1,4)$ and Dominator has to play $d_3=(1,3)$, otherwise Staller will win in the next move. After that Staller plays $s_3=(1,1)$ and Dominator has to play $d_3=(1,2)$, otherwise Staller will win in the next move. Later Staller plays $s_4=(1,4)$ and Dominator needs three more moves to win the game. - If $d_2= (3,4)$, then Staller replies by playing $s_3=(1,1)$ and Dominator has to play $d_3=(1,2)$, otherwise Staller will win in the next move. Later Staller plays $s_4=(2,2)$ and it forces Dominator to play $d_4=(3,1)$. Thus Dominator needs two more moves to win the game. - If $d_2= (2,3)$, then Staller replies by playing $s_3=(3,4)$ and Dominator needs to play $d_3=(3,3)$. After that Staller plays $s_4=(1,4)$ an it forces Dominator to play $d_4=(1,3)$. Thus Staller plays $s_5=(1,1)$ and Dominator needs two more moves to win the game. - If $d_2= (1,3)$, then Staller replies by playing $s_3=(3,4)$ and Dominator needs to play $d_3=(3,3)$. After that Staller plays $(1,1)$ and Dominator needs to play $d_3=(1,2)$. Thus Dominator needs to play two more moves to win the game. - If $d_2= (1,4)$, then Staller replies by playing $s_3=(1,1)$ and it forces Dominator to reply $d_3=(1,2)$. Then Staller plays $s_4=(3,3)$ and it forces Dominator to play $d_4=(3,4)$. Later Staller plays $s_5=(2,2)$. Thus Dominator need two more moves to win the game. 
By above strategy, Dominator needs at least six moves to win the game if his first move is $d_1=(3,2)$. See Figure [\[fig:Stall Sgame\]](#fig:Stall Sgame){reference-type="ref" reference="fig:Stall Sgame"} (4.1)-(4.6). By all cases, we conclude that $\gamma_{\rm MB}' (Z) \ge 6$. 0◻ In [@jacobson-1984], the domination number of $P_m \,\square\,P_n$ was established for $1 \le m \le 4$ and all $n \ge 1$. In particular, it was shown that $$\begin{aligned} \label{gam_grid} \gamma(P_3 \,\square\,P_n) = \left\lfloor\frac{3n+4}{4}\right\rfloor,\ n \ge 1. \end{aligned}$$ **Theorem 15**. *If $n \ge 2$, then $\lfloor\frac{3n+4}{4}\rfloor \le \gamma_{\rm MB}(P_3 \,\square\,P_n) \le \lceil\frac{4n}{3}\rceil$.* By ([\[gam_grid\]](#gam_grid){reference-type="ref" reference="gam_grid"}), $\gamma_{\rm MB}(P_3 \,\square\,P_n) \ge \gamma(P_3 \,\square\,P_n) = \lfloor\frac{3n+4}{4}\rfloor$. It remains to show that $\gamma_{\rm MB}(P_3 \,\square\,P_n) \le \lceil\frac{4n}{3}\rceil$. It is easy to see that $\gamma_{\rm MB}(P_3 \,\square\,P_2) = 3$ and by Proposition [Proposition 13](#prop:p3p3){reference-type="ref" reference="prop:p3p3"}, we have $\gamma_{\rm MB}(P_3 \,\square\,P_3) = 4$. For $n \ge 4$ we consider the following cases. #### Case 1. $n=3k$ for some positive integer $k$. Let $G$ be a disjoint union of $k$ copies of $P_3 \,\square\,P_3$. By Proposition [Proposition 13](#prop:p3p3){reference-type="ref" reference="prop:p3p3"} and Theorem [Theorem 2](#thm:union){reference-type="ref" reference="thm:union"}, $\gamma_{\rm MB}(G) \le 4k$. Since $G$ is a subgraph of $P_3 \,\square\,P_n$ by deleting some edges, $\gamma_{\rm MB}(P_3 \,\square\,P_n) \le \gamma_{\rm MB}(G) \le 4k$ as a result of Lemma [Lemma 3](#lem:o_edgedelete){reference-type="ref" reference="lem:o_edgedelete"} (i). Thus $\gamma_{\rm MB}(P_3 \,\square\,P_n) \le 4k =\lceil\frac{4n}{3}\rceil$. #### Case 2. $n=3k+1$ for some positive integer $k$. Let $G$ be a union of $k-1$ copies of $P_3 \,\square\,P_3$ and two copies of $P_3 \,\square\,P_2$. By Proposition [Proposition 13](#prop:p3p3){reference-type="ref" reference="prop:p3p3"} and Theorem [Theorem 2](#thm:union){reference-type="ref" reference="thm:union"}, $\gamma_{\rm MB}(G) \le 4(k-1)+6$. Similar to the proof of Case 1., $\gamma_{\rm MB}(P_3 \,\square\,P_n) \le 4k+2=\frac{4(n-1)}{3}+2 =\lceil\frac{4n}{3}\rceil$. #### Case 3. $n=3k+2$ for some positive integer $k$. Let $G$ be a union of $k$ copies of $P_3 \,\square\,P_3$ and a copy of $P_3 \,\square\,P_2$. By Proposition [Proposition 13](#prop:p3p3){reference-type="ref" reference="prop:p3p3"} and Theorem [Theorem 2](#thm:union){reference-type="ref" reference="thm:union"}, $\gamma_{\rm MB}(G) \le 4k+3$. Similar to the proof of Case 1., $\gamma_{\rm MB}(P_3 \,\square\,P_n) \le 4k+3=\frac{4(n-2)}{3}+3 =\lceil\frac{4n}{3}\rceil$. We conclude that $\gamma_{\rm MB}(P_3 \,\square\,P_n) \le \lceil\frac{4n}{3}\rceil$ for every $n \ge 2$. 0◻ # Products of complete bipartite graphs In this section we consider the outcome of products of complete bipartite graph and obtain their exact MBD-numbers and SMBD-numbers. **Proposition 16**. If $n \ge m \ge 1$, then $$\begin{aligned} o(K_{m,n}) &= \begin{cases} \cal D; & n=m=1,\text{or\ }n, m \ge 2 \\ \cal N; & n > m =1. \end{cases} \end{aligned}$$ Since $K_{1,1} = P_2$, Dominator wins the game in his first move when he is the first or the second player. Now consider star $K_{1,n}$, $n \ge 2$. In the D-game, Dominator plays the central vertex and wins the game in his first move. 
In the S-game, Staller also plays the central vertex in her first move. No matter where Dominator replies, Staller wins in her next move by playing an unplayed leaf. Thus $o(K_{1,n}) = \cal N$. Next, assume that $n \ge 2$ and $m \ge 2$. Let $X$, $Y$ be the bipartition sets of $K_{m,n}$. It suffices to show that Dominator can win the S-game. Without loss of generality, suppose that Staller starts the game by playing a vertex in $X$. Then Dominator plays an unplayed vertex in $X$ and dominates all vertices in $Y$. After the second move of Staller, Dominator plays an unplayed vertex in $Y$ and wins the game. By the No-Skip Lemma, Dominator also wins the D-game. Therefore $o(K_{m,n}) = \cal D$ where $n, m \ge 2$. 0◻ By Theorem [Theorem 8](#thm:cart){reference-type="ref" reference="thm:cart"} and Proposition [Proposition 16](#prop:o_Kmn){reference-type="ref" reference="prop:o_Kmn"}, we obtain the outcomes of the MBD game on the Cartesian product of complete bipartite graphs as follows. **Corollary 17**. *Let $m, m', n, n'$ be positive integers. If $n \ge m \ge 2$ and $n' \ge m' \ge 2$, then $o(K_{m,n} \,\square\,K_{m',n'}) = \cal D$.* Note that Theorem [Theorem 8](#thm:cart){reference-type="ref" reference="thm:cart"} does not help us to find the outcome of products of two stars. Moreover, a star $K_{1,n}$, where $n \ge 3$, does not admit a nontrivial path cover by Theorem [Theorem 10](#thm: factor){reference-type="ref" reference="thm: factor"}. Hence, we need to consider the outcome of products of stars as follows. Let $Z$ be the graph $K_{1,m} \,\square\,K_{1,n}$, and $V(K_{1,m} \,\square\,K_{1,n}) = \{(i,j):\ i\in [m], j\in [n]\} \cup \{(a,j):\ j\in [n]\} \cup \{(i,b):\ i\in [m]\} \cup \{(a,b)\}$ where $a$ and $b$ are the central vertices of $K_{1,m}$ and $K_{1,n}$, respectively. **Theorem 18**. *If $n \ge m \ge 2$, then $$\begin{aligned} o(K_{1,m} \,\square\,K_{1,n}) &= \begin{cases} \cal D; & m=n=2,\ \\ \cal N; & m=2,\ n \ge 3,\\ \cal S; & n \ge m \ge 3. \end{cases} \end{aligned}$$* Assume that $n \ge m \ge 2$ and $Z = K_{1,m} \,\square\,K_{1,n}$. We consider the following cases. #### Case 1. If $m=n=2$, then $Z=K_{1,2} \,\square\,K_{1,2} = P_3 \,\square\,P_3$. By Theorem [Theorem 11](#thm:pmpn){reference-type="ref" reference="thm:pmpn"}, $o(Z) = \cal D$. #### Case 2. If $m=2$ and $n \ge 3$, we will show that the first player has a winning strategy on $Z = K_{1,2} \,\square\,K_{1,n}$ in both games. In the D-game, Dominator starts the game by playing $(1,b)$. Then there is a matching $M$ in $Z-\{(1,b)\}$ such that $V(Z)\setminus V(M) \subseteq N_Z[(1,b)]$. By Lemma [Lemma 4](#lem:pairing){reference-type="ref" reference="lem:pairing"}, Dominator wins the game. In the S-game, Staller plays $(1,b)$ in her first move. After Dominator replies by playing $v$ in his first move, we consider the following strategies. - $v=(2,b)$. Then Staller continues the game by playing $(1,1)$, which forces Dominator to play $(a,1)$; otherwise, Staller can win in the next move. In each turn, Staller uses the same strategy to play $(1,i)$ for each $i \in [n]$ until she has played all the vertices of the layer $^1(K_{1,n})$, and then she wins the game by playing $(a,b)$ in her next move. - $v\neq(2,b)$. Then Staller responds by playing $(2,b)$. After the second move of Dominator, there is an unplayed layer $(K_{1,2})^j$ and Staller will play $(a,j)$. Thus Staller can win the game in her next move. #### Case 3. Assume that $m=3$ and $n = 3$.
To show that $o(Z) = \cal S$, it suffices to show that Staller has a winning strategy in the D-game when $Z =K_{1,3} \,\square\,K_{1,3}$. Consider the following possible cases. *Case 3.1:* $d_1=(a,1)$. Then there are two unplayed layers $(K_{1,3})^2$ and $(K_{1,3})^3$. Then Staller plays $s_1=(a,2)$. - If $d_2 \ne (a,3)$, then Staller replies $s_2=(a,3)$. After Dominator plays $d_3$, there exists $i\in[3]$ such that $(i,b), (i,2), (i,3)$ are unplayed. Then Staller plays $(i,b)$, and she will win her next move by claiming the closed neighborhood of $(i,2)$ or $(i,3)$. - If $d_2=(a,3)$, then Staller replies $s_2=(1,2)$. It forces Dominator to play $d_3=(1,b)$. Then Staller can play all neighbors of $(a,2)$, and she will win the game. *Case 3.2:* $d_1 = (1,b)$. Then Staller replies $s_1=(2,b)$. By symmetry, we can see $(1,b)$ as $(a,1)$ and $(2,b)$ as $(2,a)$. Then we can apply the above strategy which implies the same outcome. *Case 3.3:* $d_1=(1,1)$. Then Staller plays $s_1=(2,b)$. The continuation of the game Staller applies the same strategy as Case 2.1. *Case 3.4:* $d_1=(a,b)$. There are three unplayed layers $(K_{1,3})^1$, $(K_{1,3})^2$, and $(K_{1,3})^3$. Then Staller plays $s_1=(a,1)$ and plays $s_2=(a,j)$ where $j = 2,3$. After the third move of Dominator, there exists $i\in[3]$ such that $(i,b), (i,1), (i,j)$ are unplayed. Thus Staller plays $(i,b)$ and she will win her next move by claiming the closed neighborhood of $(i,1)$ or $(i,j)$. Therefore, Staller has a strategy to win the game on $Z$. #### Case 4. $m \ge 3,\ n \ge 4$. We will show that Staller has a winning strategy in the D-game on $Z$ that implies $o(Z) = \cal S$. Since $m \ge 3$, after the first move of Dominator, there are two unplayed layers $^i(K_{1,n})$ and $^{i'}(K_{1,n})$ where $i,i' \in [m]$. Then Staller plays $(i,b)$ in her first move. Since $n \ge 4$, after the second move of Dominator, there are two unplayed layers $(K_{1,m})^j$ and $(K_{1,m})^{j'}$ where $j,j' \in [n]$. Then Staller replies $(a,j)$ in her second move and it makes Dominator play $(i,j)$, otherwise, Staller will win in her next move. By this strategy, $(K_{1,3})^{j'}$ is still unplayed at this moment, so, Staller plays $(a,j')$. If Dominator does not reply by playing $(i,j')$, then Staller will win the game by playing $(i,j')$. If Dominator replies $(i,j')$, then Staller responds at $(i',b)$ and she will win the game in her next turn. 0◻ **Theorem 19**. *If $n \ge 3$, then $$\begin{aligned} \gamma_{\rm MB}(P_3 \,\square\,K_{1,n}) = \gamma_{\rm SMB}' (P_3 \,\square\,K_{1,n}) = n+2. \end{aligned}$$* Assume that $n \ge 3$ and $Z = P_3 \,\square\,K_{1,n}$. Recall that $a, b$ are the central vertices of $P_3$ and $K_{1,n}$, respectively. First, we consider the D-game. We will show that $\gamma_{\rm MB}(Z) \le n+2$. Dominator starts the game by playing $(1,b)$. Then there is a matching $M$ in $Z-\{(1,b)\}$ such that $V(Z)\setminus V(M) \subseteq N_Z[(1,b)]$ and $|M| = n+1$. By Lemma [Lemma 4](#lem:pairing){reference-type="ref" reference="lem:pairing"} and Remark [Remark 5](#re:pairing){reference-type="ref" reference="re:pairing"}, Dominator has a strategy to win the game within $n+2$ moves. Thus $\gamma_{\rm MB}(Z) \le n+2$. Now we will show that Staller has a strategy ensure that Dominator needs to play at least $n+2$ moves. By symmetry, we consider the following cases. - If $d_1=(1,b)$, then Staller plays $s_1=(2,1)$. - If $d_1= (a,b)$, then Staller plays $s_1=(a,1)$. - If $d_1=(1,1)$, then Staller plays $s_1=(2,b)$. 
- If $d_1=(a,1)$, then Staller plays $s_1=(a,b)$. In the second turn, Staller selects an unplayed vertex $s_2$ in the layer $(P_3)^b$. For the continuation of the game, she will try to play a vertex $(a,j)$ for $j \in [n]$ if it is possible, otherwise she plays an arbitrary unplayed vertex. One can see that Dominator needs at least two moves to dominate the layers $(P_3)^b$ and $(P_3)^1$, and $n-1$ moves to dominate the rest of the graph. Thus $\gamma_{\rm MB}(Z) \ge n+2$ and hence $\gamma_{\rm MB}(Z) = n+2$. Next, we consider the S-game. By the proof of Theorem [Theorem 18](#prop:out_star){reference-type="ref" reference="prop:out_star"}, Case 2, Staller has a strategy to win the game within $n+2$ moves. It remains to show that $\gamma_{\rm SMB}' (Z) \ge n+2$. Notice that closed neighborhoods in $Z$ are of size $3$, $4$, $n+2$, and $n+3$. Assume that Staller plays $(u,v)$ in her first move. Then Dominator plays an unplayed neighbor $(i,v)$ of $(u,v)$ and he uses this strategy for the continuation of the game. Then Staller cannot claim a closed neighborhood before playing at least $n+2$ moves. 0◻ **Theorem 20**. *If $n \ge m \ge3$, then $$\begin{aligned} \gamma_{\rm SMB}(K_{1,m} \,\square\,K_{1,n}) &= \begin{cases} 5; & m=3,\\ 4; & m \ge 4, \end{cases} \end{aligned}$$* *and $$\begin{aligned} \gamma_{\rm SMB}' (K_{1,m} \,\square\,K_{1,n}) = 4. \end{aligned}$$* Assume that $m \ge n \ge3$ and $Z=K_{1,m} \,\square\,K_{1,n}$. Recall that $a, b$ are the central vertices of $K_{1,m}$ and $K_{1,n}$, respectively. Observe that the closed neighborhoods of the vertices of $Z$ have sizes $3$, $m+2$, $n+2$, and $m+n+1$. We first consider the D-game. #### Case 1. $n=3$. We show that $\gamma_{\rm SMB}(K_{1,m} \,\square\,K_{1,3}) =5$. By the proof of Theorem [Theorem 18](#prop:out_star){reference-type="ref" reference="prop:out_star"} Case 3 and Case 4, Staller has a strategy to win the D-game within five moves. It implies that $\gamma_{\rm SMB}(Z) \le 5$. It remains to show that Staller needs to play at least five moves to win the game. Assume Dominator plays $(a,1)$ in his first move. Assume that Staller plays $v$ in her first move. - $v= (a,2)$. Then Dominator replies by playing $(a,3)$ in his second move. In each turn, if Staller plays $(i,j)$, then Dominator replies by selecting $(i, b)$ if it is possible, otherwise he plays $(i, 2)$ if it is available or any arbitrary vertex if $(i,2)$ was played earlier. Thus Staller cannot claim a closed neighborhood of size $3$. Hence, Staller needs to play at least five moves. - $v\ne (a,2)$. Then Dominator plays $(a,2)$ in his second move. After that Dominator will play $(a,3)$ if it is unplayed, otherwise he will play a neighbor of the vertex played by Staller which is in a closed neighborhood of size $3$. Thus Staller cannot win the game by claiming a closed neighborhood of size $3$. So, she needs to play at least five moves. Therefore, $\gamma_{\rm SMB}(Z) \ge 5$. #### Case 2. $n \ge 4$. We show that $\gamma_{\rm SMB}(K_{1,m} \,\square\,K_{1,n}) =4$. Notice that each vertex, except $(a,b)$, is in only one closed neighborhood of size $3$. For each turn, when Staller plays $v$, Dominator can play a vertex in $N_Z[v]$ which is in the closed neighborhood of size $3$ if it is possible. Then Staller cannot claim a closed neighborhood within three moves. It remains to show that $\gamma_{\rm SMB}(Z) \le 4$ by providing a strategy for Staller to win the D-game in four moves. After the first move of Dominator, there exists $j \in [3]$ such that $(a, j)$ is unplayed.
Then Staller plays $(a,j)$. Similarly, after the second move of Dominator, there exists $j' \in [3]$ such that $j' \ne j$ and $(a,j')$ is unplayed. Then Staller plays $(a,j')$. Since $m \ge n \ge 4$, after the third move of Dominator, there is $(i,b)$ such that $(i,b), (i,j), (i,j')$ are unplayed. So, Staller plays $(i,b)$ and she can win the game in her next move. Thus $\gamma_{\rm SMB}(Z) \le 4$. For the S-game, we first provide a strategy for Staller to win the game within four moves. Assume that Staller plays $(a,1)$ in her first move. After Dominator replies, there is $j = 2,3$ such that $(a,j)$ is unplayed. Then Staller responds by choosing $(a,j)$ in her second move. Then in her third move, there exists $i\in[3]$ such that $(i,b), (i,1), (i,j)$ are unplayed. Thus Staller plays $(i,b)$, and she will win her next move by claiming the closed neighborhood of $(i,1)$ or $(i,j)$. Thus $\gamma_{\rm SMB}'(Z)\le 4$. Next, we will show that Staller cannot win the game in three moves. Observe that every vertex is in a closed neighborhood of size $3$ except $(a,b)$. By symmetry, we consider the four following cases. - If Staller plays $(a,b)$ in her first move, then she needs to play at least four moves to claim a closed neighborhood. - If Staller plays $(a,1)$ in her first move, then Dominator replies by playing $(1,1)$. Thus Staller needs to play at least three more moves to claim a closed neighborhood. - If Staller plays $(1,b)$ in her first move, then Dominator replies by playing $(1,1)$. Thus Staller needs to play at least three more moves to claim a closed neighborhood. - If Staller plays $(1,1)$ in her first move, then Dominator replies by playing $(1,a)$. Thus Staller needs to play at least three more moves to claim a closed neighborhood. It implies that $\gamma_{\rm SMB}'(Z) \ge 4$. Therefore, $\gamma_{\rm SMB}'(Z) = 4$. 0◻ # Acknowledgements {#acknowledgements .unnumbered} The author would like to express our sincere gratitude to Prof. Sandi Klavžar and Assoc. Prof. Csilla Bujtás, for their invaluable guidance and support. 99 J. Beck, *On positional games*, J. Combin. Theory Ser. A **30** (1981), 117--133. C. Berge, *Hypergraphs: Combinatorics of Finite Sets*, North Holland, 1989. B. Brešar, S. Klavžar, D.F. Rall, *Domination game and an imagination strategy*, SIAM J. Discrete Math. **24** (2010), 979--991. B. Brešar, M.A. Henning, S. Klavžar, D.F. Rall, *Domination Games Played on Graphs*, SpringerBriefs in Mathematics, 2021. Cs. Bujtás, P. Dokyeesun, *Fast winning strategies for Staller in the Maker-Breaker domination game*, `arXiv:2206.12812 [math.CO]`. Cs. Bujtás, P. Dokyeesun, S. Klavžar, *Maker-Breaker domination game on trees when staller wins*, Discrete. Math. Theor. Comput. Sci. **25(2)** (2023), 12. E. Duchêne, V. Gledel, A. Parreau, G. Renault, *Maker-Breaker domination game*, Discrete Math. **343** (2020), 111955. P. Erdős, J.L. Selfridge, *On a combinatorial game*, J. Combin. Theory Ser. A **14** (1973), 298--301. J. Forcan, M. Mikalački, *Maker-Breaker total domination game on cubic graphs*, Discrete Math. Theor. Comput. Sci. **24(1)** (2022), 20. J. Forcan, J. Qi, *Maker-Breaker domination number for $P_2\,\square\,P_n$*, [arXiv:2004.13126 \[math.CO\]](arXiv:2004.13126 [math.CO]). V. Gledel, M.A. Henning, V. Iršič, S. Klavžar, *Maker-Breaker total domination game*, Discrete Appl. Math. **282** (2020), 96--107. V. Gledel, V. Iršič, S. Klavžar, *Maker-Breaker domination number*, Bull. Malays. Math. Sci. Soc. **42** (2019), 1773--1789. D. Hefetz, M. Krivelevich, M. 
Stojaković, T. Szabó, *Fast winning strategies in Maker-Breaker games*, J. Combin. Theory Ser. B **99** (2009), 39--47. D. Hefetz, M. Krivelevich, M. Stojaković, T. Szabó, *Positional Games*, Birkhäuser/Springer, Basel, 2014. M. S. Jacobson, L. F. Kinch, *On the domination number of products of graphs, I*, Ars. Combin. **18** (1984), 33--44. M. Kano, A. Saito, *$[a,b]$-factors of graphs*, Discrete Math. **47** (1983), 113--116. W.B. Kinnersley, D.B. West, R. Zamani, *Extremal problems for game domination number*, SIAM J. Discrete Math. **27** (2013), 2090--2107. L. Lovász, *Subgraphs with prescribed valencies*, J. Combin. Theory **8** (1970), 391--416. D.B. West, *Introduction to Graph Theory, 2nd ed.*, Prentice-Hall, NJ, 2001. Pakanun Dokyeesun\ papakanun\@gmail.com\
--- abstract: | Let $S=K[x_1,\ldots,x_n]$, where $K$ is a field, and $t_i(S/I)$ denotes the maximal shift in the minimal graded free $S$-resolution of the graded algebra $S/I$ at degree $i$, where $I$ is an edge ideal. In this paper, we prove that if $t_b(S/I)\geq \lceil \frac{3b}{2} \rceil$ for some $b\geq 0$, then the subadditivity condition $t_{a+b}(S/I)\leq t_a(S/I)+t_b(S/I)$ holds for all $a\geq 0$. In addition, we prove that $t_{a+4}(S/I)\leq t_a(S/I)+t_4(S/I)$ for all $a\geq 0$ (the case $b=0,1,2,3$ is known). We conclude that if the projective dimension of $S/I$ is at most $9$, then $I$ satisfies the subadditivity condition. address: Department of Mathematics, Braude College of Engineering, 2161002 Karmiel, Israel author: - Abed Abedelfatah title: On the subadditivity condition of edge ideal --- # Introduction Let $S=K[x_1,\ldots,x_n]$ denote the polynomial ring with $n$ variables over the fixed field $K$, graded by setting $\deg(x_i)=1$ for each variable. Since Hilbert's syzygy theorem, minimal free resolutions of graded finitely generated $S$-modules, and particularly their graded Betti numbers, which carry most of the numerical data about them, became central invariants of study in commutative algebra, with applications in other areas, e.g., in algebraic geometry, hyperplane arrangements, and combinatorics. Restricting to $S$-modules $S/I$ for monomial ideals $I$, and particularly to edge ideals, makes combinatorial, and particularly graph theoretical, tools available. This perspective proved to be very useful in recent decades. Assume that $I$ is a graded ideal of $S$ and suppose $S/I$ has minimal graded free $S$-resolution $$0\rightarrow F_p=\bigoplus_{j\in \mathbb{N}}S(-j)^{\beta _{p,j}}\rightarrow\cdots\rightarrow F_1=\bigoplus_{j\in \mathbb{N}}S(-j)^{\beta _{1,j}}\rightarrow F_0=S\rightarrow S/I\rightarrow0.$$ The numbers $\beta_{i,j}=\beta_{i,j}(S/I)$, where $i,j\geq0$, are called the *graded Betti numbers* of $I$, which count the elements of degree $j$ in a minimal generator of ($i+1$)-th *syzygy*:  $\mathrm{Syz}_{i+1}(S/I)=\ker~(F_i\rightarrow F_{i-1})$. Let $t_i$ denote the maximal shifts in the minimal graded free $S$-resolution of $S/I$, namely $$t_i=t_i(S/I):= \max\{j:\ \beta_{i,j}(S/I)\neq 0\}.$$ We say that $I$ satisfies the *subadditivity condition* if $$\label{eq2} t_{a+b}\leq t_a+t_b$$ for all $a,b\geq0$ and $a+b\leq \mathrm{pd}_S(S/I)$, where $\mathrm{pd}_S(S/I)$ is the projective dimension of $S/I$. It is known that graded ideals may not satisfy the subadditivity condition as shown by the counter example in [@Conca-Sub Section 6]. However, no counter examples are known for monomial ideals. Note that if $I$ is a monomial ideal, then by polarization, we can reduce the problem to a squarefree monomial ideal. In the case of monomial ideals, there are special cases for which the subadditivity condition holds: Herzog and Srinivasan [@Herzog-Srinivasan Corollary 4] proved ([\[eq2\]](#eq2){reference-type="ref" reference="eq2"}) for $b=1$ which was proved earlier for edge ideals in [@Oscar Theorem 4.1]. The author and Nevo proved ([\[eq2\]](#eq2){reference-type="ref" reference="eq2"}) when $b=2,3$ for edge ideals [@Abed-Eran Theorem 1.3]. Bigdeli and Herzog proved ([\[eq2\]](#eq2){reference-type="ref" reference="eq2"}), for edge ideal of a chordal graph or a whisker graph[@mina-herzog Theorem 1]. 
The author proved ([\[eq2\]](#eq2){reference-type="ref" reference="eq2"}) when $I=I_{\Delta}$ where $\Delta$ is a simplicial complex such that $\dim(\Delta)< t_b-b$ [@Abed Theorem 3.3]. Jayanthan and Kumar proved ([\[eq2\]](#eq2){reference-type="ref" reference="eq2"}) for several classes of edge ideals of graphs and path ideals of rooted trees [@Jayanthan-Kumer]. Some more results regarding the subadditivity condition have been obtained by Khoury and Srinivasan [@Khoury-Srin Theorem 2.3], Faridi and Shahada [@Faridi-Shahada Corollary 5.3] and Faridi [@Faridi Theorem 3.7]. Let $I$ be the edge ideal of a graph $G$. By using topological-combinatorial arguments, we prove that, over any field, if $t_b(S/I)\geq \lceil \frac{3b}{2} \rceil$ for some $b\geq 0$, then for all $a\geq0$, $t_{a+b}(S/I)\leq t_a(S/I)+t_b(S/I)$. Note that in general $b<t_b\leq 2b$. In addition, we prove the subadditivity condition ([\[eq2\]](#eq2){reference-type="ref" reference="eq2"}) for $0\leq b\leq 4$ and all $a\geq0$. We conclude that if $t_j\geq \lceil \frac{3j}{2} \rceil$ for all $5\leq j\leq \lfloor \frac{\mathrm{pd}_S(S/I)}{2}\rfloor$, then $I$ satisfies the subadditivity condition. In particular, if $\mathrm{pd}_S(S/I)\leq 9$, then $I$ satisfies the subadditivity condition.\ # Preliminaries Fix a field $K$. Let $S=K[x_1,\dots,x_n]$ be the graded polynomial ring with $\deg(x_i)=1$ for all $i$, and let $M$ be a graded $S$-module. The integer $\beta_{i,j}^S(M)=\dim_K\mathrm{Tor}_i^S(M,K)_j$ is called the $(i,j)$-th *graded Betti number of* $M$. Note that if $I$ is a graded ideal of $S$, then $\beta_{i+1,j}^S(S/I)=\beta_{i,j}^S(I)$ for all $i,j\geq 0$. For a simplicial complex $\Delta$ on the vertex set $\Delta_0=[n]=\{1,\dots,n\}$, its *Stanley-Reisner ideal* $I_{\Delta}\subset S$ is the ideal generated by the squarefree monomials $x_F=\prod_{i\in F}x_i$ with $F\notin \Delta$, $F\subset [n]$. The dimension of a face $F$ is $|F|-1$ and the *dimension of $\Delta$* is $\max\{\dim F~:~F\in\Delta\}$. Let $G$ be a simple graph on the set $[n]$ and denote by $E(G)$ the set of its edges. We define the *edge ideal* of $G$ to be the ideal $$I(G)=\langle x_ix_j~:~\{i,j\}\in E(G)\rangle\subset S.$$ So if $\Delta$ is a flag simplicial complex and $H$ is the graph of minimal non-faces of $\Delta$, then $I_\Delta=I(H)$. For $W\subset [n]$, we write $$\Delta[W]=\{F\in\Delta~:~F\subset W\}$$ for the induced subcomplex of $\Delta$ on $W$. We denote by $\beta_i(\Delta)=\dim_K \widetilde{H}_i(\Delta;K)$ the dimension of the $i$-th reduced homology group of $\Delta$ with coefficients in $K$. The following result is known as Hochster's formula for graded Betti numbers. **Theorem 1** (Hochster). *Let $\Delta$ be a simplicial complex on $[n]$. Then $$\beta_{i,i+j}(S/I_{\Delta})=\sum_{W\subset[n],~|W|=i+j}\beta_{j-1}(\Delta[W];K)$$ for all $i,j\geq0$.* If $\Delta_1$ and $\Delta_2$ are two subcomplexes of $\Delta$ such that $\Delta=\Delta_1\cup \Delta_2$, then there is a long exact sequence of reduced homologies, called the *Mayer-Vietoris sequence* $$\begin{aligned} \cdots \rightarrow \widetilde{H}_i(\Delta_1\cap\Delta_2;K)\rightarrow \widetilde{H}_i(\Delta_1;K)\oplus \widetilde{H}_i(\Delta_2;K)&\rightarrow \widetilde{H}_i(\Delta;K)\\\rightarrow \widetilde{H}_{i-1}(\Delta_1\cap\Delta_2;K)\rightarrow\cdots\end{aligned}$$ Using the Mayer-Vietoris sequence, Fernández-Ramos and Gimenez proved the following lemma that we will use in the main results. **Lemma 2**. 
**([@Oscar Theorem 2.1])*[\[thm-corner\]]{#thm-corner label="thm-corner"} For an edge ideal $I=I(G)$, over any field, if $\beta_{i,i+j}(S/I)=0=\beta_{i,i+j+1}(S/I)$ then $\beta_{i+1,i+j+2}(S/I)=0$.* Finally, we state the following results of the author and Nevo on vanishing patterns in the Betti table of edge ideals and the subadditivity condition. **Lemma 3**. **([@Abed-Eran Theorem 3.4])*[\[thm-vanishing\]]{#thm-vanishing label="thm-vanishing"} For an edge ideal $I=I(G)$, over any field, if $\beta_{i,i+2}(S/I)\neq 0$ and $\beta_{i+k,i+2+k}(S/I)\neq 0$ where $i\geq 0$ and $k>0$, then $$\beta_{i+m,i+2+m}(S/I)\neq 0$$ for all $0\leq m\leq k$.* **Lemma 4**. **([@Abed-Eran Theorem 1.3])*[\[thm-abederan\]]{#thm-abederan label="thm-abederan"} For any edge ideal over any field, the subadditivity condition ([\[eq2\]](#eq2){reference-type="ref" reference="eq2"}) holds for $b=1,2,3$ and any $a\geq 0$.* # The main results We start with the following lemma that we will use in the main results. **Lemma 5**. *If $I=I(G)$ is the edge ideal of a graph $G$ and $t_a<2a$ for some $a\geq 2$, then for any $r$ edges $\{v_1,v_2\},\dots,\{v_{2r-1},v_{2r}\}$ in $G$ with $r>a$, the induced subgraph $H$ on the vertices $v_1,\dots,v_{2r}$ has a vertex $v$ with $$\deg_H(v)\geq2.$$* *Proof.* Assume on the contrary that $\deg_H(v)=1$ for all $v\in H$. It follows that $\beta_{a,2a}(S/I(H))\neq 0$, since $\beta_{a,2a}(S/I(H))$ is the number of induced subgraphs of $H$ which consist of $a$ pairwise disjoint edges. By Hochster's formula $$\beta_{a,2a}(S/I(H))\leq \beta_{a,2a}(S/I(G)),$$ so $\beta_{a,2a} (S/I(G))\neq 0$, contradicting the assumption $t_a<2a$. ◻ **Proposition 6**. *Over any field, if $I=I(G)$ is the edge ideal of a graph $G$ and $t_a<2a$ for some $a\geq 2$, then for all $b\geq0$, $$t_{a+b}\leq t_a+\lceil \frac{3b}{2} \rceil.$$* *Proof.* We prove the proposition by induction on $b$. If $b=0$ or $b=1$, then we are done. Let $b>1$. The Taylor resolution of $S/I$ shows that $G$ has $a+b$ edges on the vertices $v_1,\dots,v_{t_{a+b}}$ such that $\beta_{a+b,t_{a+b}}(S/I(H))\neq 0$, where $H$ is the induced subgraph on these vertices. Let $W \subseteq [n]$ be such that $H=G[W]$ and denote $N=\Delta[W]$, where $\Delta$ is the simplicial complex such that $I_{\Delta}=I$. Since $t_a(S/I(H))<2a$, it follows by ([Lemma 5](#lem 1){reference-type="ref" reference="lem 1"}) that the graph $H$ has a vertex $v$ with $\deg_H(v)\geq2$. Let $x_1,x_2$ be two neighbors of $v$ in $H$. Clearly, $N=(N-v)\cup(N-\{x_1,x_2\})$. Set $\Delta_1=N-v$ and $\Delta_2=N-\{x_1,x_2\}$. Consider the long exact sequence of reduced homologies $$\begin{aligned} \cdots \rightarrow \widetilde{H}_j(\Delta_1\cap\Delta_2;K)\rightarrow \widetilde{H}_j(\Delta_1;K)\oplus \widetilde{H}_j(\Delta_2;K)&\rightarrow \widetilde{H}_j(N;K)\\\rightarrow \widetilde{H}_{j-1}(\Delta_1\cap\Delta_2;K)\rightarrow\cdots\end{aligned}$$ where $j=t_{a+b}-(a+b)-1$.\ If $\widetilde{H}_{j}(\Delta_1)\neq 0$, then $\beta_{a+b-1,t_{a+b}-1}(S/I(H))\neq0$ and so $$t_{a+b}(S/I)=t_{a+b}(S/I(H))\leq t_{a+b-1}(S/I(H))+1.$$ Our induction hypothesis implies that $$t_{a+b}(S/I)\leq t_a(S/I(H))+\lceil \frac{3(b-1)}{2} \rceil+1\leq t_a(S/I)+\lceil \frac{3b}{2}\rceil.$$ So we may assume that $\widetilde{H}_{j}(\Delta_1)=0$.\ If $\widetilde{H}_{j}(\Delta_2)\neq 0$, then $\beta_{a+b-2,t_{a+b}-2}(S/I(H))\neq0$, and so $$t_{a+b}(S/I)=t_{a+b}(S/I(H))\leq t_{a+b-2}(S/I(H))+2.$$ If $b=2$, then $$t_{a+2}(S/I)\leq t_a(S/I(H))+2\leq t_a(S/I)+2.$$ Let $b>2$. 
The induction hypothesis implies that $$t_{a+b}(S/I)\leq t_a(S/I(H))+\lceil \frac{3(b-2)}{2} \rceil+2\leq t_a(S/I)+\lceil \frac{3b}{2}\rceil.$$ So we may also assume that $\widetilde{H}_{j}(\Delta_2)=0$. Since $\widetilde{H}_j(N;K)\neq 0$, it follows that $\widetilde{H}_{j-1}(\Delta_1\cap\Delta_2;K)\neq 0$, and so $\beta_{a+b-2,t_{a+b}-3}(S/I(H))\neq0$. Then $$t_{a+b}(S/I)=t_{a+b}(S/I(H))\leq t_{a+b-2}(S/I(H))+3.$$ If $b=2$, then $$t_{a+2}(S/I)\leq t_a(S/I(H))+3\leq t_a(S/I)+3.$$ Let $b>2$. Our induction hypothesis implies that $$t_{a+b}(S/I)\leq t_a(S/I(H))+\lceil \frac{3(b-2)}{2} \rceil+3\leq t_a(S/I)+\lceil \frac{3b}{2}\rceil.$$ This completes the proof. ◻ **Theorem 7**. *Over any field, if $I=I(G)$ is the edge ideal of a graph $G$ and $t_b\geq \lceil \frac{3b}{2} \rceil$ for some $b\geq 0$, then for all $a\geq0$, $$t_{a+b}\leq t_a+t_b.$$* *Proof.* If $a=0$ or $a=1$, then we are done. Assume that $a\geq 2$. If $t_a=2a$, then, applying the case $b=1$ repeatedly and using $t_1=2$, $$t_{a+b}\leq at_1+t_b\leq 2a+t_b=t_a+t_b.$$ So assume that $t_a<2a$. By ([Proposition 6](#pro1){reference-type="ref" reference="pro1"}), $$t_{a+b}\leq t_a+\lceil \frac{3b}{2} \rceil\leq t_a+t_b,$$ as desired. ◻ **Theorem 8**. *Over any field, if $I=I(G)$ is the edge ideal of a graph $G$, then for all $a\geq0$, $$t_{a+4}\leq t_a+t_4.$$* *Proof.* By ([Theorem 7](#thm1){reference-type="ref" reference="thm1"}) we may assume $t_4=5$ (note that $5\leq t_4\leq 8$). If $t_2=4$, then, combined with $t_4=5$, lemma ([\[thm-vanishing\]](#thm-vanishing){reference-type="ref" reference="thm-vanishing"}) says that $\beta_{3+k,5+k}(S/I)=0$ for any $k>0$. Further, by lemma ([\[thm-corner\]](#thm-corner){reference-type="ref" reference="thm-corner"}) we conclude that $t_k\leq k+1$ for all $k\geq 4$. So $$t_{a+4}\leq a+5\leq t_a+t_4$$ as desired. Thus, we may assume $t_2=3$, and then $G$ has no two disjoint edges which form an induced subgraph. Similarly, we may assume $t_3=4$. If not, then by ([\[thm-corner\]](#thm-corner){reference-type="ref" reference="thm-corner"}), $t_3=5$ and then $\beta_{3+k,5+k}(S/I)=0$ for any $k>0$ by ([\[thm-vanishing\]](#thm-vanishing){reference-type="ref" reference="thm-vanishing"}). As before, we obtain that $t_{a+4}\leq a+5\leq t_a+t_4$. By ([\[thm-abederan\]](#thm-abederan){reference-type="ref" reference="thm-abederan"}), we may also assume $a\geq4$. If $t_{a+4}\leq a+6$, then we are done: $$t_a+t_4\geq a+1+5=a+6\geq t_{a+4}.$$ Let $t_{a+4}\geq a+7$. The Taylor resolution of $S/I$ shows that $G$ has $a+4$ edges on the vertices $v_1,\dots,v_{t_{a+4}}$ such that $\beta_{a+4,t_{a+4}}(S/I(H))\neq 0$, where $H$ is the induced subgraph on these vertices. Let $W \subseteq [n]$ be such that $H=G[W]$ and denote $N=\Delta[W]$, where $\Delta$ is the simplicial complex such that $I_{\Delta}=I$. Let $W'\subset W$ be such that $|W'|=5$ and $H'=G[W']$ is the (induced) subgraph of $H$ with $$\label{eq1} \small t_2(S/I(H'))=3,~t_3(S/I(H'))=4,~t_4(S/I(H'))=5,~\mathrm{and}~\deg_{H'}(v)\geq1 ~\mathrm{for~all}~v\in W'.$$ First, we claim that there exists a vertex $v$ in $H'$ such that $\deg_{H'}(v)\geq3$. If not, then $\deg_{H'}(v)\leq 2$ for every vertex $v$ of $H'$ and so, since $H'$ contains no two disjoint edges forming an induced subgraph, $H'$ is either an induced $C_5$ or a path with $4$ edges. This contradicts ([\[eq1\]](#eq1){reference-type="ref" reference="eq1"}). Second, we show that the graph $H$ has a vertex of degree at least $4$. Assume on the contrary that $\deg_{H}(w)\leq3$ for all $w$ in $H$. 
After suitable renumbering of the vertices, $H'$ has the following proper subgraph: Note that the graph in figure ([\[subgraph\]](#subgraph){reference-type="ref" reference="subgraph"}) does not satisfy ([\[eq1\]](#eq1){reference-type="ref" reference="eq1"}). If $\{2,4\}\in E(H)$ or $\{3,4\}\in E(H)$, then there is necessarily an edge $\{i,j\}\in E(H)$ such that $i,j\notin \{1,\dots,5\}$, since $1\leq \deg_{H}(w)\leq3$ for all $w$ in $H$ and $H$ has at least $11$ vertices. Then the two disjoint edges $\{i,j\}$ and $\{1,4\}$ form an induced subgraph. This contradicts the assumption $t_2=3$. So assume that $\{2,4\}\notin E(H)$ and $\{3,4\}\notin E(H)$. If $\{2,3\}\in E(H)$, then since $t_2=3$, $\{2,5\}\in E(H)$ or $\{3,5\}\in E(H)$. In both cases, we must have an edge $\{i,j\}\in E(H)$ such that $i,j\notin \{1,\dots,5\}$. Such an edge, together with $\{1,2\}$ (or $\{1,3\}$), gives two disjoint edges that form an induced subgraph, a contradiction. So we may also assume that $\{2,3\}\notin E(H)$. We conclude that the only remaining possibility for the graph $H'$ is: In this case, there are two distinct edges $\{i_1,j_1\}\in E(H)$ and $\{i_2,j_2\}\in E(H)$ such that $i_1,j_1,i_2,j_2\notin \{1,\dots,5\}$. Since $t_2=3$, each vertex of the set $\{2,3,4\}$ is connected to $i_1$ or $j_1$ by an edge. Similarly, each vertex of the set $\{2,3,4\}$ is connected to $i_2$ or $j_2$ by an edge. This contradicts the assumption $\deg_{H}(w)\leq3$ for all $w$ in $H$. We conclude that there is a vertex $v$ in $H$ such that $\deg_{H}(v)\geq4$. Let $v\in W$ with $\deg_H(v)\geq4$. Let $x_1,x_2,x_3,x_4$ be four neighbors of $v$ in $H$. Clearly, for $N=\Delta[W]$, $N=(N-v)\cup(N-\{x_1,x_2,x_3,x_4\})$. Set $\Delta_1=N-v$ and $\Delta_2=N-\{x_1,x_2,x_3,x_4\}$. We may assume that $\widetilde{H}_{j}(\Delta_1)=0$, where $j=t_{a+4}-(a+4)-1$. For otherwise, we obtain that $\beta_{a+3,t_{a+4}-1}(S/I)\neq0$ and so $$t_{a+4}\leq t_{a+3}+1\leq t_a+t_3+1=t_a+5=t_a+t_4$$ as desired. Using the Mayer-Vietoris sequence and $\widetilde{H}_j(N;K)\neq 0$, we have $\widetilde{H}_j(\Delta_2)\neq0$ or $\widetilde{H}_{j-1}(\Delta_1\cap\Delta_2)\neq0$. If $\widetilde{H}_j(\Delta_2)\neq0$, then $\beta_{a,t_{a+4}-4}(S/I)\neq0$, and so $$t_{a+4}\leq t_{a}+4 < t_a+t_4.$$ Finally, if $\widetilde{H}_{j-1}(\Delta_1\cap\Delta_2)\neq0$, then $\beta_{a,t_{a+4}-5}(S/I)\neq0$, and so $$t_{a+4}\leq t_a+5=t_a+t_4.$$ ◻ Combining theorem ([Theorem 8](#a=4){reference-type="ref" reference="a=4"}) with lemma ([\[thm-abederan\]](#thm-abederan){reference-type="ref" reference="thm-abederan"}) we obtain the following. **Corollary 9**. *For any edge ideal over any field, the subadditivity condition holds for $0\leq b\leq 4$ and any $a\geq 0$.* Finally, we state the following corollary. **Corollary 10**. *Over any field, if $I=I(G)$ is the edge ideal of a graph $G$ and $t_j\geq \lceil \frac{3j}{2} \rceil$ for all $5\leq j\leq \lfloor \frac{\mathrm{pd}_S(S/I)}{2}\rfloor$, then $I$ satisfies the subadditivity condition. In particular, the subadditivity condition holds if $\mathrm{pd}_S(S/I)\leq 9$.* *Proof.* By lemma ([\[thm-abederan\]](#thm-abederan){reference-type="ref" reference="thm-abederan"}) and theorem ([Theorem 8](#a=4){reference-type="ref" reference="a=4"}) we may assume that $a>4$ and $b>4$. We may also assume that $a\geq b$. Since $a+b\leq \mathrm{pd}_S(S/I)$, we obtain that $5\leq b\leq \lfloor \frac{\mathrm{pd}_S(S/I)}{2}\rfloor$. Hence the assertion follows from ([Theorem 7](#thm1){reference-type="ref" reference="thm1"}). 
◻ *Data sharing not applicable to this article as no datasets were generated or analysed during the current study.* A. Abedelfatah, E. Nevo, *On vanishing patterns in $j$-strands of edge ideals*, J. Algebraic Comb. 46(2), 287-295 (2017). A. Abedelfatah, *Some results on the subadditivity condition of syzygies*, Collect. Math. 32 (2021). L. L. Avramov, A. Conca, S. Iyengar, *Subadditivity of syzygies of Koszul algebras*, Math. Ann. 361, 511-534 (2015). M. Bigdeli and J. Herzog, *Betti diagrams with special shape*, Homological and Computational Methods in Commutative Algebra, Springer INdAM Series 20, 33-52 (2017). S. Faridi, *Lattice complements and the subadditivity of syzygies of simplicial forests*, http://arxiv.org/abs/1605.07727 (2016). S. Faridi, M. Shahada, *Breaking up simplicial homology and subadditivity of syzygies*, J. Algebraic Comb. 55(2), 277-295 (2022). O. Fernández-Ramos, P. Gimenez, *Regularity 3 in edge ideals associated to bipartite graphs*, J. Algebraic Comb. 39, 919-937 (2014). J. Herzog, H. Srinivasan, *A note on the subadditivity problem for maximal shifts in free resolutions*, Commutative Algebra and Noncommutative Algebraic Geometry, II, MSRI Publications, vol. 68, 245-250 (2015). A. V. Jayanthan, A. Kumar, *Subadditivity, strand connectivity and multigraded Betti numbers of monomial ideals*, arXiv:2007.15319 \[math.AC\] (2020). S. E. Khoury, H. Srinivasan, *A note on the subadditivity of syzygies*, J. Algebra Appl. 1750177 (2016).
arxiv_math
{ "id": "2309.14990", "title": "On the subadditivity condition of edge ideal", "authors": "Abed Abedelfatah", "categories": "math.AC math.CO", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | Erdős and Graham asked whether, for any coloring of the Euclidean plane $\mathbb{R}^2$ in finitely many colors, some color class contains the vertices of a rectangle of every given area. We give the negative answer to this question and its higher-dimensional generalization: there exists a finite coloring of the Euclidean space $\mathbb{R}^n$, $n\geqslant 2$, such that no color class contains the $2^n$ vertices of a rectangular box of volume $1$. The present note is a very preliminary version of a longer treatise on similar problems. address: Department of Mathematics, Faculty of Science, University of Zagreb, Bijenička cesta 30, 10000 Zagreb, Croatia author: - Vjekoslav Kovač bibliography: - monorectangles.bib title: Monochromatic boxes of unit volume --- # Introduction Systematic study of the Euclidean Ramsey theory was initiated by Erdős, Graham, Montgomery, Rothschild, Spencer, and Straus [@Eetal1; @Eetal2; @Eetal3] in the 1970s. Graham [@Gra80] answered positively a question of Gurevich by showing that for any finite coloring of the Euclidean plane some color class contains the vertices of a triangle of any given area. In fact, he could choose the triangles to be right-angled with axis-aligned legs. Erdős and Graham [@EG79 p. 331] (in a paper which is also published as a chapter in the well-known problem book [@EG80]), after mentioning Graham's result on triangles, posed a natural follow-up question: > *Is this also true for rectangles?* The following precise wording of this problem is literally taken from Thomas Bloom's website *Erdős problems* [@EP]. **Problem 1** ([@EP \#189]). *If $\mathbb{R}^2$ is finitely coloured then must there exist some colour class which contains the vertices of a rectangle of every area?* To the best of author's knowledge, Problem [Problem 1](#prob:main){reference-type="ref" reference="prob:main"} has not been addressed in the literature so far. It was explicitly mentioned as being open in *Mathematical Reviews* MR0558877 by Karsten Steffens and MR2106573 by Sheila Oates-Williams. Finally, Problem [Problem 1](#prob:main){reference-type="ref" reference="prob:main"} (for rectangles, but also for a few other configurations) was stated as an open problem in the 2015 edition of the book *Rudiments of Ramsey theory* by Graham and Butler [@GB15 p. 56]. Here we show that the question in Problem [Problem 1](#prob:main){reference-type="ref" reference="prob:main"} has the negative answer. **Theorem 2**. *It is possible to partition $\mathbb{R}^2$ into $25$ color classes such that none of them contains the vertices of a rectangle of area $1$.* We have singled out the two-dimensional case in Theorem [Theorem 2](#thm:colorR2){reference-type="ref" reference="thm:colorR2"} above, because its proof is particularly elegant. However, with a bit more work one can show a natural generalization to higher dimensions. **Theorem 3**. *For every integer $n\geqslant 2$ there exists a finite coloring of the Euclidean space $\mathbb{R}^n$ such that there is no rectangular box of volume $1$ with all of its $2^n$ vertices colored the same.* In the proofs of these theorems, i.e., for any coloring constructed in this paper, we will choose each color class to be a countable union of Jordan-measurable sets. For this reason our constructions can also disprove some density theorems that could potentially hold for subsets $A\subseteq\mathbb{R}^n$ of positive upper density; see Remark [Remark 5](#rem:hypercubes){reference-type="ref" reference="rem:hypercubes"} for an example. 
Namely, in any partition of $\mathbb{R}^n$ into finitely many Lebesgue-measurable color classes, at least one of the classes needs to have strictly positive upper density. The purpose of this note is to motivate further questions related to Problem [Problem 1](#prob:main){reference-type="ref" reference="prob:main"}, which actually have positive answers and whose proofs require more substantial tools. Some of these questions will be addressed by the author in the next version of this manuscript, which is still in preparation. # Proof of Theorem [Theorem 2](#thm:colorR2){reference-type="ref" reference="thm:colorR2"} {#sec:R2} We are about to give a coloring of $\mathbb{R}^2$ that uses $25$ colors and has a slightly stronger property: no color class will contain the vertices of a parallelogram such that the product of lengths of its two consecutive sides equals $1$. For rectangles this clearly specializes to the property of their area being equal to $1$. ![Coordinatization of a parallelogram.](colfig1.eps){#fig:colfig1 width="0.6\\linewidth"} *Proof of Theorem [Theorem 2](#thm:colorR2){reference-type="ref" reference="thm:colorR2"}.* Let us place a (possibly degenerate) parallelogram $\mathcal{P}=ABCD$ in the complex plane, so that its vertices $A,B,C,D$ are respectively coordinatized by the complex numbers $z_A,z_B,z_C,z_D$ as in Figure [1](#fig:colfig1){reference-type="ref" reference="fig:colfig1"}. Consider a complex quantity $\mathscr{I}(\mathcal{P})$ defined as $$\label{eq:defofI} \mathscr{I}(\mathcal{P}) := z_A^2 - z_B^2 + z_C^2 - z_D^2.$$ In this definition we specify the vertex $A$ to be the one with the smallest coordinate $z_A$ in the lexicographic ordering of $\mathbb{C}\equiv\mathbb{R}^2$. Otherwise, $\mathscr{I}(\mathcal{P})$ would have only been determined up to multiplication by $\pm1$. There exist $u,v,z\in\mathbb{C}$ such that the vertices of $\mathcal{P}$ have complex coordinates $$z_A=z, \quad z_B=z+u, \quad z_C=z+u+v, \quad z_D=z+v;$$ see Figure [1](#fig:colfig1){reference-type="ref" reference="fig:colfig1"} again. The quantity $\mathscr{I}(\mathcal{P})$ now simplifies as $$\mathscr{I}(\mathcal{P}) = z^2 - (z+u)^2 + (z+u+v)^2 - (z+v)^2 = 2uv.$$ Consecutive side lengths of $\mathcal{P}$ are $|u|$ and $|v|$, so we have $$|\mathscr{I}(\mathcal{P})| = 2$$ whenever their product equals $1$. As we have already mentioned, this holds in particular if $\mathcal{P}$ is a rectangle of area $1$. 
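Before constructing the coloring, a quick numerical sanity check of the computation above may be useful. The following sketch is written in plain Python with ad hoc names and floating-point arithmetic only; it evaluates $\mathscr{I}(\mathcal{P})$ from the four vertices and confirms that it equals $2uv$, hence has modulus $2$ whenever $|u|\,|v|=1$.

```python
import cmath
import random

def script_I(zA, zB, zC, zD):
    """Alternating sum of squared vertex coordinates, as in the definition of I(P).
    The ordering convention only affects the sign, not the modulus."""
    return zA**2 - zB**2 + zC**2 - zD**2

random.seed(0)
for _ in range(1000):
    z = complex(random.uniform(-5, 5), random.uniform(-5, 5))           # vertex A
    u = cmath.rect(random.uniform(0.1, 10), random.uniform(0, 2 * cmath.pi))
    # choose v with |u| * |v| = 1; for a rectangle one would take v = 1j * u / abs(u)**2
    v = cmath.rect(1 / abs(u), random.uniform(0, 2 * cmath.pi))
    I = script_I(z, z + u, z + u + v, z + v)
    assert abs(I - 2 * u * v) < 1e-9        # I(P) = 2uv
    assert abs(abs(I) - 2) < 1e-9           # |I(P)| = 2 when the side lengths multiply to 1
print("checked 1000 random parallelograms")
```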
Therefore, it remains to find a coloring of $\mathbb{C}$ such that, if all vertices of $\mathcal{P}$ are assigned the same color, then the complex number $\mathscr{I}(\mathcal{P})$ does not lie on the circle $$\label{eq:circle} \{w\in\mathbb{C} : |w|=2\}.$$ ![The circle misses the squares.](colfig2.eps){#fig:colfig2 width="0.6\\linewidth"} For each pair $(j,k)\in\{0,1,2,3,4\}^2$ define a color class $\mathscr{C}_{j,k}$ as $$\mathscr{C}_{j,k} := \bigg\{ z\in\mathbb{C} \,:\, z^2 \in \frac{10}{3} \bigg( \mathbb{Z}+ \mathbbm{i}\mathbb{Z}+ \frac{j + \mathbbm{i}k}{5} + \Big[0,\frac{1}{5}\Big) + \mathbbm{i}\Big[0,\frac{1}{5}\Big) \bigg) \bigg\}.$$ If the four vertices of $\mathcal{P}=ABCD$ belonged to the same color class, then, by the definition [\[eq:defofI\]](#eq:defofI){reference-type="eqref" reference="eq:defofI"}, we would clearly have $$\mathscr{I}(\mathcal{P}) \in \frac{10}{3} \bigg( \mathbb{Z}+ \mathbbm{i}\mathbb{Z}+ \Big(-\frac{2}{5},\frac{2}{5}\Big) + \mathbbm{i}\Big(-\frac{2}{5},\frac{2}{5}\Big) \bigg).$$ The above set does not intersect the circle [\[eq:circle\]](#eq:circle){reference-type="eqref" reference="eq:circle"}; see Figure [2](#fig:colfig2){reference-type="ref" reference="fig:colfig2"}. Indeed, the central square lies fully inside [\[eq:circle\]](#eq:circle){reference-type="eqref" reference="eq:circle"} because of $4\sqrt{2}/3<2$, while all remaining open squares clearly belong to its exterior. ◻ We can say that the above solution relies on the invariant quantity $|\mathscr{I}(\mathcal{P})|$ assigned to rectangles of area $1$. It does not generalize to all higher dimensions, since it uses multiplication of complex numbers, so we will resort to an "almost invariant" quantity in the next section. Construction of the above coloring could be thought of as a complex modification of the approach of Erdős et al. [@Eetal1 §3], who used $|z|^2$ in place of $z^2$. Let us illustrate the coloring constructed in the previous proof. Boundaries of the color classes are given in the $(x,y)$-coordinate system by the equations $$x^2-y^2 = \frac{2a}{3} \quad\text{and}\quad xy = \frac{b}{3}$$ for arbitrary $a,b\in\mathbb{Z}$. These are two mutually orthogonal families of hyperbolas (including degenerate ones for $a=0$ or $b=0$), depicted in Figure [3](#fig:colfig3){reference-type="ref" reference="fig:colfig3"}. ![Boundaries of color classes $\mathscr{C}_{j,k}$.](colfig3.eps){#fig:colfig3 width="0.5\\linewidth"} # Proof of Theorem [Theorem 3](#thm:colorRn){reference-type="ref" reference="thm:colorRn"} {#sec:Rn} The idea is based on the following simple observation. Let us first take an axes-aligned box in $\mathbb{R}^n$ with edge lengths $a_1,a_2,\ldots,a_n\in(0,\infty)$. Its vertices can be enumerated by subsets $T$ of $\{1,2,\ldots,n\}$ as $$\label{eq:verticesofR0} \Big( \mathbf{q} + \sum_{j\in T} a_j \mathbf{e}_j : T\subseteq\{1,2,\ldots,n\} \Big),$$ where $\mathbf{q}=(q_1,\ldots,q_n)\in\mathbb{R}^n$ is some point. Let us compute the alternating sum of the product of coordinates of the box vertices, $$\sum_{T\subseteq\{1,2,\ldots,n\}} (-1)^{n-|T|} \Big(\prod_{j\in T^c} q_j\Big) \Big(\prod_{j\in T} (q_j+a_j)\Big) = \prod_{j=1}^{n} (-q_j + q_j+a_j) = a_1 a_2 \cdots a_n,$$ and notice that we have obtained precisely the box volume. The crucial part of the proof will be to show that the same quantity is almost invariant for slightly rotated rectangular boxes, where the "slightness" can be prescribed uniformly over all box eccentricities. 
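As a small numerical illustration of this observation (a Python sketch with ad hoc names and floating-point arithmetic, not part of the proof), the alternating sum below returns the volume exactly for an axes-aligned box and stays close to the volume when the edges are tilted by a small rotation.

```python
import itertools
import math
import random

def alternating_sum(signed_vertices):
    """Sum of (-1)^(n-|T|) * (product of coordinates) over the 2^n vertices."""
    total = 0.0
    for sign, vertex in signed_vertices:
        prod = 1.0
        for coord in vertex:
            prod *= coord
        total += sign * prod
    return total

def box_vertices(q, edges):
    """Vertices q + sum_{j in T} edges[j], each paired with the sign (-1)^(n-|T|)."""
    n = len(edges)
    out = []
    for T in itertools.product((0, 1), repeat=n):
        vertex = [q[k] + sum(T[j] * edges[j][k] for j in range(n)) for k in range(n)]
        out.append(((-1) ** (n - sum(T)), vertex))
    return out

random.seed(1)
n = 3
a = [random.uniform(0.2, 3.0) for _ in range(n)]     # edge lengths a_1, ..., a_n
q = [random.uniform(-2.0, 2.0) for _ in range(n)]    # base vertex q
axis_edges = [[a[j] if k == j else 0.0 for k in range(n)] for j in range(n)]
volume = math.prod(a)

print(abs(alternating_sum(box_vertices(q, axis_edges)) - volume))   # ~0: exact when axes-aligned

theta = 0.01                                          # a small rotation in the (1,2)-plane
R = [[math.cos(theta), -math.sin(theta), 0.0],
     [math.sin(theta),  math.cos(theta), 0.0],
     [0.0, 0.0, 1.0]]
rotated_edges = [[sum(R[k][l] * e[l] for l in range(n)) for k in range(n)] for e in axis_edges]
print(abs(alternating_sum(box_vertices(q, rotated_edges)) - volume) / volume)  # small relative deviation
```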
Note that this property is not quite obvious and it is sensitive to the pattern shape, as the uniformity will fail already for parallelograms in $\mathbb{R}^2$. After we construct a coloring that prohibits those slightly tilted boxes, in the last step we will rotate it by finitely many matrices $U_1,\ldots,U_m\in\textup{SO}(n)$, thanks to compactness of the rotation group. Before the proof we need a simple identity. **Lemma 4**. *The identity $$\label{eq:identity} \sum_{T\subseteq\{1,2,\ldots,n\}} (-1)^{n-|T|} \prod_{k=1}^{n} \Big(p_k + \sum_{j\in T} v_{j,k}\Big) = \sum_{\sigma\in S_n} v_{1,\sigma(1)} v_{2,\sigma(2)} \cdots v_{n,\sigma(n)}$$ holds for real numbers $(p_k)_{1\leqslant k\leqslant n}$ and $(v_{j,k})_{1\leqslant j,k\leqslant n}$.* It is peculiar to notice that the right hand side $\textup{RHS}$ of [\[eq:identity\]](#eq:identity){reference-type="eqref" reference="eq:identity"} is the permanent of the matrix $(v_{j,k})_{j,k}$. *Proof of Lemma [Lemma 4](#lm:identity){reference-type="ref" reference="lm:identity"}.* We will prove the identity by induction on $n$. The basis case $n=1$ is trivial as then the identity reads $$- p_1 + (p_1 + v_{1,1}) = v_{1,1}.$$ Take a positive integer $n\geqslant 2$. Observe that the left hand side $\textup{LHS}$ of [\[eq:identity\]](#eq:identity){reference-type="eqref" reference="eq:identity"} is a homogeneous polynomial of degree $n$ in $n^2+n$ variables $p_k$ and $v_{j,k}$, but the degree of each of the variables in it is at most $1$. Differentiating it with respect to the variable $v_{n,l}$ for some $l\in\{1,\ldots,n\}$ we obtain $$\begin{aligned} \frac{\partial}{\partial v_{n,l}} \textup{LHS} & = \sum_{T\subseteq\{1,\ldots,n-1\}} (-1)^{n-1-|T|} \prod_{\substack{1\leqslant k\leqslant n\\k\neq l}} \Big(p_k + v_{n,k} + \sum_{j\in T} v_{j,k}\Big) \\ & = \sum_{\substack{\sigma\in S_n\\\sigma(n)=l}} v_{1,\sigma(1)} v_{2,\sigma(2)} \cdots v_{n-1,\sigma(n-1)},\end{aligned}$$ where in the last equality we applied the inductions hypothesis to a smaller collection of numbers $$(p_k + v_{n,k})_{1\leqslant k\leqslant n, k\neq l},\quad (v_{j,k})_{1\leqslant j\leqslant n-1,1\leqslant k\leqslant n,k\neq l}$$ and renamed the indices. Therefore $$\textup{LHS} - \sum_{l=1}^{n} v_{n,l} \frac{\partial}{\partial v_{n,l}} \textup{LHS} = \textup{LHS} - \textup{RHS}$$ and this quantity can be evaluated by plugging $v_{n,1}=\cdots=v_{n,n}=0$ into the left hand side: $$\sum_{T\subseteq\{1,\ldots,n\}} (-1)^{n-|T|} \prod_{k=1}^{n} \Big(p_k + \sum_{\substack{1\leqslant j\leqslant n-1\\j\in T}} v_{j,k}\Big) = 0.$$ The last sum equals $0$ because its terms can be paired to cancel each other: adding/subtracting the element $n$ to/from the set $T$ gives exactly the same term, only with the opposite sign $(-1)^{n-|T|}$. ◻ Now we turn to the proof of the announced result. *Proof of Theorem [Theorem 3](#thm:colorRn){reference-type="ref" reference="thm:colorRn"}.* To each rectangular box $\mathcal{R}$ in $\mathbb{R}^n$ we assign a real quantity $\mathscr{J}(\mathcal{R})$ defined as $$\label{eq:defofJ} \mathscr{J}(\mathcal{R}) := \sum_{\mathbf{x}=(x_1,\ldots,x_n) \text{ is a vertex of }\mathcal{R}} (-1)^{n-\text{par}(\mathbf{x})} \,x_1\cdots x_n.$$ Here $\text{par}(\mathbf{x})$ denotes the *parity* of the vertex $\mathbf{x}$, which is computed as follows. Choose the base vertex of $\mathcal{R}$ to be the one with the smallest coordinate representation in the lexicographic ordering of $\mathbb{R}^n$. 
The parity of any vertex of $\mathcal{R}$ is defined to be the parity of its distance from the base vertex in the $1$-skeleton graph of $\mathcal{R}$. This number is either $0$ or $1$ and it changes as we move from a vertex to its neighbor along the $1$-edge of $\mathcal{R}$. Clearly, the expression [\[eq:defofJ\]](#eq:defofJ){reference-type="eqref" reference="eq:defofJ"} has $2^n$ terms, half of them get the $+$ sign and half of them come with the $-$ sign; see the illustration of these signs in Figure [4](#fig:colfig4){reference-type="ref" reference="fig:colfig4"}. ![Signs attached to box vertices.](colfig4.eps){#fig:colfig4 width="0.5\\linewidth"} Every rectangular box $\mathcal{R}$ can be obtained by rotating an axes-aligned box $\mathcal{R}_0$ about the origin, where the rotation is given by a special orthogonal transformation, $U\in\textup{SO}(n)$: $$\label{eq:recimage} \mathcal{R} = U \mathcal{R}_0.$$ Suppose that the vertices of $\mathcal{R}_0$ are [\[eq:verticesofR0\]](#eq:verticesofR0){reference-type="eqref" reference="eq:verticesofR0"}, while the vertices of $\mathcal{R}$ are given by $$\Big( \mathbf{p} + \sum_{j\in T} \mathbf{v}_j : T\subseteq\{1,2,\ldots,n\} \Big),$$ where $\mathbf{p}=U\mathbf{q}$ and $\mathbf{v}_j=a_j U\mathbf{e}_j$ for each index $1\leqslant j\leqslant n$. Also, write coordinate-wise: $$\mathbf{p} = (p_k)_{1\leqslant k\leqslant n}, \quad \mathbf{v}_j = (v_{j,k})_{1\leqslant k\leqslant n}$$ and note that Lemma [Lemma 4](#lm:identity){reference-type="ref" reference="lm:identity"} gives $$\label{eq:repofJ} \mathscr{J}(\mathcal{R}) = \pm \sum_{\sigma\in S_n} v_{1,\sigma(1)} v_{2,\sigma(2)} \cdots v_{n,\sigma(n)}.$$ First, suppose that the rotation $U$ satisfies $\|U-I\|_{\textup{op}}<\varepsilon$, where $$\varepsilon := \frac{1}{2^{n+2}n!}.$$ In particular, $$\Big| \frac{1}{a_j}\mathbf{v}_j - \mathbf{e}_j \Big| = |(U-I)\mathbf{e}_j| < \varepsilon,$$ so that $$| v_{j,k} - a_j \delta_{j,k} | < \varepsilon a_j.$$ Consequently, $$\begin{aligned} & \Big| \sum_{\sigma\in S_n} v_{1,\sigma(1)} v_{2,\sigma(2)} \cdots v_{n,\sigma(n)} - a_1 a_2 \cdots a_n \Big| \nonumber \\ & \leqslant| v_{1,1} v_{2,2} \cdots v_{n,n} - a_1 a_2 \cdots a_n | + \sum_{\substack{\sigma\in S_n\\ \sigma\neq\textup{id}}} | v_{1,\sigma(1)} v_{2,\sigma(2)} \cdots v_{n,\sigma(n)} | \nonumber \\ & < n \varepsilon (1+\varepsilon)^{n-1} a_1 a_2 \cdots a_n + (n!-1) \varepsilon (1+\varepsilon)^{n-1} a_1 a_2 \cdots a_n \nonumber \\ & \leqslant 2^n n! \varepsilon a_1 a_2 \cdots a_n \leqslant\frac{1}{4} a_1 a_2 \cdots a_n. \label{repofJaux}\end{aligned}$$ Let us now additionally assume that $\mathcal{R}$ has volume $1$. Then [\[eq:repofJ\]](#eq:repofJ){reference-type="eqref" reference="eq:repofJ"} combined with [\[repofJaux\]](#repofJaux){reference-type="eqref" reference="repofJaux"} and $a_1 a_2 \cdots a_n=1$ gives $$\label{eq:Jisinint1} \mathscr{J}(\mathcal{R}) \in \Big(-\frac{5}{4},-\frac{3}{4}\Big) \cup \Big(\frac{3}{4},\frac{5}{4}\Big).$$ Partition $\mathbb{R}^n$ into the sets $$\mathscr{S}_l := \bigg\{ (x_1,x_2,\ldots,x_n) \in \mathbb{R}^n : x_1 x_2 \cdots x_n \in \frac{3}{2}\bigg(\mathbb{Z} + \Big[\frac{l}{3\cdot2^{n}},\frac{l+1}{3\cdot2^{n}}\Big)\bigg) \bigg\}$$ for $0\leqslant l\leqslant 3\cdot2^{n}-1$. We claim that the vertices of $\mathcal{R}$ cannot all belong to the same set $\mathscr{S}_l$. 
Namely, if they did, then the definition of $\mathscr{J}(\mathcal{R})$ would give $$\label{eq:Jisinint2} \mathscr{J}(\mathcal{R}) \in \frac{3}{2}\mathbb{Z} + \Big(-\frac{1}{4},\frac{1}{4}\Big) = \cdots \cup \Big(-\frac{7}{4},-\frac{5}{4}\Big) \cup \Big(-\frac{1}{4},\frac{1}{4}\Big) \cup \Big(\frac{5}{4},\frac{7}{4}\Big) \cup \cdots.$$ However, [\[eq:Jisinint1\]](#eq:Jisinint1){reference-type="eqref" reference="eq:Jisinint1"} and [\[eq:Jisinint2\]](#eq:Jisinint2){reference-type="eqref" reference="eq:Jisinint2"} together lead to a contradiction as the sets on their right hand sides are disjoint. Finally, we handle completely arbitrary rectagular boxes $\mathcal{R}$ of unit volume. Consider an open neighborhood $\mathcal{O}$ of the identity $I$ in the rotation group $\textup{SO}(n)$ defined as $$\mathcal{O} := \{ V\in\textup{SO}(n) \,:\, \|V-I\|_{\textup{op}}<\varepsilon \}.$$ Then the family $\{U\mathcal{O} : U\in\textup{SO}(n)\}$ constitutes an open cover of the compact space $\textup{SO}(n)$, so it can be reduced to a finite subcover $\{U_1\mathcal{O}, U_2\mathcal{O}, \ldots, U_m\mathcal{O}\}$. The color classes of the desired coloring of $\mathbb{R}^n$ can now be defined as $$\mathscr{C}_{l_1,l_2,\ldots,l_m} := (U_1 \mathscr{S}_{l_1}) \cap (U_2 \mathscr{S}_{l_2}) \cap \cdots \cap (U_m \mathscr{S}_{l_m}),$$ where $(l_1,l_2,\ldots,l_m)$ run over all $m$-tuples of elements from $\{0,1,2,\ldots,3\cdot 2^n-1\}$. Suppose that the vertices of $\mathcal{R}$ belong to the same color class $\mathscr{C}_{l_1,l_2,\ldots,l_m}$. Let $U\in\textup{SO}(n)$ be as in [\[eq:recimage\]](#eq:recimage){reference-type="eqref" reference="eq:recimage"}, but without any assumption on the norm of $U-I$. Take an index $i\in\{1,\ldots,m\}$ such that $U\in U_i\mathcal{O}$. Then the box $$\mathcal{R}' := U_i^{-1} \mathcal{R}$$ satisfies $$\mathcal{R}' = U_i^{-1} U \mathcal{R}_0, \quad \|U_i^{-1} U-I\|_{\textup{op}}<\varepsilon$$ and all of its vertices are in the set $$U_i^{-1} \mathscr{C}_{l_1,l_2,\ldots,l_m} \subseteq U_i^{-1} U_i \mathscr{S}_{l_i} = \mathscr{S}_{l_i}.$$ This contradicts the previous part of the proof. ◻ The number of colors needed in the above proof grows superexponentially in $n$. The trick of using compactness of the underlying transformation group (which is $\textup{SO}(n)$ in our case) is essentially due to Straus [@Str75]. However, note that its applicability is far from automatic; see Remark [Remark 6](#rem:parallelograms){reference-type="ref" reference="rem:parallelograms"}. # Other configurations Here we give a couple of simple observations on further applicability of constructions from Sections [2](#sec:R2){reference-type="ref" reference="sec:R2"} and [3](#sec:Rn){reference-type="ref" reference="sec:Rn"}. **Remark 5**. Let us conveniently work in the complex plane again. Take a positive integer $n\geqslant 2$, positive numbers $a_1,\ldots,a_n$, and complex numbers $z,u_1,\ldots,u_n$. If $2^n$ points $$\label{eq:1skeleton} z + r_1 u_1 + r_2 u_2 + \cdots + r_n u_n \quad\text{for } (r_1,r_2,\ldots,r_n)\in\{0,1\}^n$$ are mutually distinct and $|u_j|=a_j$ for $1\leqslant j\leqslant n$, then we can say that [\[eq:1skeleton\]](#eq:1skeleton){reference-type="eqref" reference="eq:1skeleton"} is an *embedding* in the plane of a $1$-skeleton of an $n$-dimensional box $$[0,a_1]\times[0,a_2]\times\cdots\times[0,a_n];$$ see Figure [5](#fig:colfig5){reference-type="ref" reference="fig:colfig5"}. 
![Embedding of a $1$-skeleton of an $n$-box.](colfig5.eps){#fig:colfig5 width="0.65\\linewidth"} Predojević and the author [@KP23] showed that a measurable subset $A\subseteq\mathbb{R}^2$ of positive upper density contains all sufficiently large dilates of a fixed $1$-skeleton of an $n$-box; for instance we can take $a_1=\cdots=a_n=\lambda$ for all sufficiently large numbers $\lambda\in(0,\infty)$. On the other extreme, a fixed $1$-skeleton, such as the one with $a_1=\cdots=a_n=1$ need not be embeddable in $A$, simply because there is no reason why $A$ should contain two point at distance $1$ apart. Moreover, by a simple modification of the construction from Section [2](#sec:R2){reference-type="ref" reference="sec:R2"} we can even find a measurable finite coloring of $\mathbb{R}^2$ such that no color class contains a $1$-skeleton of an $n$-box satisfying $a_1\cdots a_n=1$. Namely, let us first observe that the identity [\[eq:identity\]](#eq:identity){reference-type="eqref" reference="eq:identity"} remains to hold for complex numbers $p_k$, $v_{j,k}$, simply by acknowledging the same proof given in Section [2](#sec:R2){reference-type="ref" reference="sec:R2"}. By taking $p_k=z$ and $v_{j,k}=u_j$ for each index $k$, we obtain a simpler identity $$\sum_{T\subseteq\{1,2,\ldots,n\}} (-1)^{n-|T|} \Big(z + \sum_{j\in T} u_j\Big)^n = n! \,u_1 u_2 \cdots u_n$$ for $z,u_1,u_2,\ldots,u_n\in\mathbb{C}$. (Actually, its analogous inductive proof is even notationally slightly simpler.) It is now easy to color $\mathbb{C}$ appropriately, according to where $z^n$ lies with respect to the Gaussian integers $\mathbb{Z}+\mathbbm{i}\mathbb{Z}$. **Remark 6**. In this paper we do not study parallelograms, which were also mentioned by Erdős and Graham [@EG79 p. 331], but we comment on them rather briefly. The same quantity $\mathscr{J}$ from Section [3](#sec:Rn){reference-type="ref" reference="sec:Rn"} is actually an invariant for parallelograms with one side parallel to the horizontal axis. Thus, a finite coloring of $\mathbb{R}^2$ can certainly avoid axis-parallel parallelograms. Using the same trick of compactness of the group of rotations $\textup{SO}(2)$ one can easily also construct a finite coloring of $\mathbb{R}^2$ that avoids monochromatic parallelograms with a fixed angle. For these reasons the author believes that the variant of Problem [Problem 1](#prob:main){reference-type="ref" reference="prob:main"} for parallelograms is quite difficult. There is another reason adding to this belief. Erdős remarked [@Erd83:open p. 324] (without a reference) that he and Mauldin constructed a set $S\subseteq\mathbb{R}^2$ of infinite measure that does not contains vertices of a parallelogram of area $1$. Even if this has no implications to finite colorings of $\mathbb{R}^2$ or to positive density subsets of $\mathbb{R}^2$, it still hints that sets that avoid the vertices of parallelograms of unit area can be quite large. # Acknowledgment {#acknowledgment .unnumbered} The author is grateful to Rudi Mrazović for a useful discussion.
arxiv_math
{ "id": "2309.09973", "title": "Monochromatic boxes of unit volume", "authors": "Vjekoslav Kova\\v{c}", "categories": "math.CO math.CA", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We study a Markov decision problem in which the state space is the set of finite marked point configurations in the plane, the actions represent thinnings, the reward is proportional to the mark sum which is discounted over time, and the transitions are governed by a birth-death-growth process. We show that thinning points with large marks is optimal when births follow a Poisson process and marks grow logistically. Explicit values for the thinning threshold and the discounted total expected reward over finite and infinite horizons are also provided. When the points are required to respect a hard core distance, upper and lower bounds on the discounted total expected reward are derived.\ *2020 Mathematics Subject Classification:* 60G55, 90C40. --- **Optimal decision rules for marked point process models**\ M.N.M. van Lieshout\ *CWI, P.O. Box 94079, NL-1090 GB Amsterdam, The Netherlands\ Department of Applied Mathematics, University of Twente, P.O. Box 217\ NL-7500 AE Enschede, The Netherlands*\ # Introduction {#S:intro} The classic Markov decision process [@Bert95; @FeinSchw02; @Pute94] on a finite state space ${\cal{X}}$ and action set $A$ is defined as follows. Write $A(x)$ for the subset of $A$ which contains all actions that may be taken in state $x\in{\cal{X}}$. Then, a policy $\phi$ is a procedure for the selection of an action at each decision epoch $i\in {\mathbb N}_0$. Such a policy could be random or deterministic, and in principle take into account the entire history of the process. Often though, one may restrict attention to the class of deterministic Markov policies. Such a policy $\phi = (\phi_i)_{i=0}^\infty$ is a sequence of mappings $\phi_i: {\cal{X}} \to A$ that, at time $i$, assign an action $a = \phi_i(x) \in A(x)$ to the current state $x$. In doing so, a direct reward $r(x,a)$ is earned and a probability mass function $p(\cdot|x,a)$ on ${\cal{X}}$ governs the next state of the process. Being Markovian, only the current state and action are important; the past history is irrelevant. A policy $(\phi_i)_{i}$ is said to be stationary if its members $\phi_i$ do not depend on the time $i$. Let $(X_i, Y_i)$ denote the stochastic process of states $X_i$ and actions $Y_i$. Write ${\mathbb E}_x^\phi$ for its expectation when the initial state $X_0=x$ and the transitions are driven by policy $\phi$. Then an optimal policy maximises the discounted total expected reward $$v_\alpha^\phi(x) = {\mathbb E}_x^\phi\left[ \sum_{i=0}^{\infty} \alpha^i r(X_i, Y_i) \right], \label{e:discount}$$ $0 \leq \alpha < 1$. The reward function is usually assumed to be bounded, in which case ([\[e:discount\]](#e:discount){reference-type="ref" reference="e:discount"}) is well-defined. When the state and action spaces are both finite, it is well known [@Pute94 Theorem 5.5.3b] that it suffices to consider only Markov policies and, by [@Pute94 Theorem 6.2.10], one may restrict oneself even further to the class of Markov policies that are deterministic and stationary. The maximal discounted total expected reward can be found by policy iteration [@Pute94 Theorem 6.4.2] or value iteration, also known as successive approximation or dynamic programming. When the cardinality of the state or action space is infinite, policy iteration is not guaranteed to converge in a finite number of steps. The dynamic programming approach on the other hand is amenable to generalisation to more general state and action spaces. Results in this direction include [@BertShrev78; @FeinLewi07; @HernLass96; @Scha93]. 
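For orientation, the classical finite-state value iteration described above can be written in a few lines. The sketch below is illustrative only: the two-state transition probabilities and rewards are made-up toy numbers, not part of any model studied in this paper.

```python
# Minimal value-iteration sketch for a finite Markov decision problem (toy data).
import numpy as np

states, actions = [0, 1], [0, 1]
r = {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 2.0}            # direct rewards r(x, a)
p = {(0, 0): [0.9, 0.1], (0, 1): [0.2, 0.8],
     (1, 0): [0.5, 0.5], (1, 1): [0.1, 0.9]}                        # transition kernels p(.|x, a)
alpha = 0.9                                                          # discount factor

v = np.zeros(len(states))
for _ in range(1000):                                                # successive approximation
    v_new = np.array([max(r[(x, a)] + alpha * np.dot(p[(x, a)], v) for a in actions)
                      for x in states])
    if np.max(np.abs(v_new - v)) < 1e-10:
        break
    v = v_new
policy = [max(actions, key=lambda a: r[(x, a)] + alpha * np.dot(p[(x, a)], v)) for x in states]
print(v, policy)   # approximate optimal discounted values and a stationary deterministic policy
```

For the point process models considered below, neither the state space nor the transition kernel can be tabulated in this way, which is why the more general results cited above are needed.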
The tutorial by Feinberg [@Fein16] provides an exhaustive overview with particular emphasis on inventory control problems. In this paper, we concentrate on the case where the state space ${\cal{X}}$ consists of finite simple marked point patterns in two-dimensional Euclidean space. Markov decision theory using spatial point process models has found many applications in mobile network optimisation. However, the role of the point process is auxiliary in that it is used to model the spatial distribution of users, base stations and so on, from which coverage probabilities and other performance characteristics of the network can be calculated [@BaccBart09; @Leeetal20; @Luetal21; @Khloetal15]. Spatial point process models are also convenient in multi-target tracking [@Lies08] and their void probabilities or divergence measures can form the basis for observer trajectory optimisation [@Bearetal17]. Our focus of interest here is to assume that the actions operate directly on the point process. More precisely, we assume that, at decision epoch $i$, an action $\phi_i(\mathbf{x})$ maps $\mathbf{x}$ into a subset of $\mathbf{x}$. In other words, the action set $A(\mathbf{x})$ is the finite power set of $\mathbf{x}$. When the decision to retain a point or not is based on the mark or the inter-point distances, it can be interpreted as a (mark-)dependent thinning [@Mate86; @Myll09]. The set-up described above is appropriate for harvesting problems in forestry [@Pret09]. Here, the classical strategy is to use discretised stand based growth tables and dynamic programming [@Ronn03]. Point pattern based policies have been rarer due to 'a lack of models and to difficulties in selecting trees to be removed' [@PukkMiin98] and tend to be simulation based [@Franetal20; @Pukketal15; @RensSark01; @Rensetal09]. One example is German thinning, which enhances natural selection by picking trees whose diameter at breast height is at most $d$ and fells a fraction of them. More formally, if each $X_i$ consists of tree locations marked by diameter at breast height, German thinning fells a fraction of the set $\{ (x,m) \in X_i: m \leq d \}$ and the Markov transition kernel governs the growth of the remaining trees (for example using the logistic growth curve or extensions such as the Richards curve [@Rich59]) as well as natural births and deaths (e.g. a hardcore model [@KellRipl76], the asymmetric soft core models of [@Lies09] or the dynamic models of [@RensSark01]). French thinning is similar, except that a fraction of trees with large rather than small sizes is removed to stimulate forest rejuvenation. In either case, picking a policy amounts to choosing the level $d$. Simulations suggest that French thinning might be the better strategy [@Franetal20]. The paper's plan is as follows. In Section [2](#S:Verhulst){reference-type="ref" reference="S:Verhulst"}, we study a decision process in which the actions consist of deleting a subset of the current points and the reward is proportional to the marks. The stochastic process that governs the dynamics is a birth-and-death process with independent deaths and a Poisson process of births; the marks grow logistically. We calculate the discounted total expected reward function over finite and infinite horizons and show that French thinning is an optimal policy. An explicit expression for the mark threshold $d$ is derived too. 
In Section [3](#S:hardcore){reference-type="ref" reference="S:hardcore"}, we move on to allow interaction between the points and replace the Poisson birth process by one in which no point is allowed to come too close to another point. In this setting, we provide upper and lower bounds on the discounted total expected reward function over finite and infinite horizons. The tightness of the bounds is investigated by means of some simulated examples. We conclude by mentioning some topics for further research. # Marked Poisson process model with logistic growth {#S:Verhulst} ## Definition of the model {#S:Defs} To define a Markov decision process [@Pute94 Section 2.3.2], let the state space ${\cal{X}}$ consist of finite simple marked point patterns on a compact set $W \subset {\mathbb R}^2$ with marks in $L =[0,K]$ for some $K > 0$. When ${\cal{X}}$ is equipped with the Borel $\sigma$-algebra of the weak topology, by the discussion below [@DaleVere88 Prop 9.1.IV], ${\cal{X}}$ is Polish. When at time $i \in {\mathbb N}_0$ the process is in state $\mathbf{x}$, a thinning action is carried out, resulting in a new state $\mathbf{a}$ that consists of all retained points $\mathbf{a}\subset \mathbf{x}$. Thus, the action space $A(\mathbf{x})$ is finite and contains all subsets of $\mathbf{x}$. Define a stationary reward function $r_i(\mathbf{x}, \mathbf{a}) = r(\mathbf{x}, \mathbf{a})$ by $$r(\mathbf{x}, \mathbf{a}) = R \sum_{ (x,m) \in \mathbf{x}\setminus \mathbf{a}} m, \quad \mathbf{x}\in {\cal{X}}, \mathbf{a}\subset \mathbf{x}. \label{e:reward}$$ Thus, the reward is proportional to the sum of the marks of all removed points. When $R > 0$, $r(\cdot, \cdot) \geq 0$. Since the mark content in an ${\mathbb R}^+$-marked point process is a random variable [@DaleVere88 Proposition 6.4.V], $r$ is well-defined. Upon taking action $\mathbf{a}$ in state $\mathbf{x}$, the dynamics that lead to the next state are modelled as a birth-death-growth process. Specifically, the marks of the retained points $(x,m) \in \mathbf{a}$ grow according to the well-known logistic model that was proposed around 1840 by Verhulst and Quetelet [@Rich59]. In this model, when the mark at time $0$ is $m_0 > 0$, the mark at time $n \in {\mathbb N}\cup \{ 0 \}$ is $$\label{e:gn} g^{(n)}(m_0) = \frac{K}{ 1 + \left( \frac{K}{m_0} - 1 \right) e^{-\lambda n} }.$$ By convention, $g^{(n)}(0) = 0$. The parameter $\lambda > 0$ governs the rate of growth and $K$, with $K \geq m_0 \geq 0$, is an upper bound on the size. In combination with independent births and deaths, the next state is defined by the following dynamics: - delete $\mathbf{x}\setminus \mathbf{a}$; - independently of other points, let each $(x_i, m_i) \in \mathbf{a}$ die with probability $p_d \in (0,1)$ (natural deaths) and otherwise grow to $(x_i, g^{(1)}(m_i))$ as in ([\[e:gn\]](#e:gn){reference-type="ref" reference="e:gn"}); - add a Poisson process on $W$ with intensity $\beta > 0$ and mark its points independently according to a probability measure $\nu$ on $[0,K]$. Write $(X_i, Y_i)_{i=0}^\infty$ for the sequence of successive states $X_i$ and actions $Y_i$. A randomised policy $\phi = (\phi_i)_{i=0}^\infty$ is a sequence of conditional probability kernels $\phi_i(\cdot | X_0$, $Y_0, \dots$, $X_{i-1}$, $Y_{i-1}$, $X_i)$ on $A$ to generate $Y_i$ based on the history of the process such that $\phi_i( A(\mathbf{x}_i) | \mathbf{x}_0, \mathbf{a}_0, \dots, \mathbf{x}_i) = 1$. If the policy is Markov and deterministic, $Y_i$ is simply a function of $X_i$, and one may write $Y_i= \phi_i(X_i)$. 
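The dynamics just described are straightforward to simulate. The following sketch (Python; the window $W=[0,1]^2$, the parameter values, the uniform choice of the mark distribution $\nu$, the empty initial pattern and the illustrative mark-threshold thinning rule are all ad hoc assumptions, not prescriptions of the model) runs the birth-death-growth process under a stationary threshold policy and accumulates the discounted rewards over a long but finite horizon by Monte Carlo.

```python
# Monte Carlo sketch of the birth-death-growth dynamics under a mark-threshold
# (French-type) thinning rule.  All numerical values are illustrative assumptions.
import math
import random

K, lam, p_d, beta, R, alpha = 1.0, 0.3, 0.05, 20.0, 1.0, 0.95
W_area = 1.0                                   # area of the assumed window W = [0,1]^2

def grow(m):
    """One step of logistic growth, i.e. g^(1)(m)."""
    return 0.0 if m == 0.0 else K / (1.0 + (K / m - 1.0) * math.exp(-lam))

def poisson(mean):
    """Simple Poisson sampler by inversion; adequate for moderate means."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def step(points, threshold):
    """One decision epoch: thin large marks, let survivors die/grow, add Poisson births."""
    removed = [(x, m) for (x, m) in points if m >= threshold]
    retained = [(x, m) for (x, m) in points if m < threshold]
    reward = R * sum(m for (_, m) in removed)
    survivors = [(x, grow(m)) for (x, m) in retained if random.random() > p_d]
    births = [((random.random(), random.random()), K * random.random())   # nu ~ Uniform[0,K]
              for _ in range(poisson(beta * W_area))]
    return survivors + births, reward           # locations do not affect the reward here

def discounted_reward(threshold, horizon=200):
    points, total = [], 0.0                     # start from the empty pattern
    for i in range(horizon):
        points, reward = step(points, threshold)
        total += alpha ** i * reward
    return total

random.seed(2)
print([sum(discounted_reward(d) for _ in range(50)) / 50 for d in (0.3, 0.6, 0.9)])
```

Comparing such estimates for a few thresholds gives a first impression of how the choice of the thinning level influences the discounted reward; the optimal level is derived analytically below.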
Then, for $0 \leq \alpha < 1$, the infinite horizon $\alpha$-discounted total expected reward function ([\[e:discount\]](#e:discount){reference-type="ref" reference="e:discount"}) under policy $\phi = (\phi_i)_{i=0}^\infty$ with initial state $X_0 = \mathbf{x}$ is $$v_\alpha^\phi(\mathbf{x}) = {\mathbb E}^\phi \left[ \sum_{i=0}^\infty \alpha^i \left( R \sum_{(x,m)\in X_i \setminus Y_i} m \right) \mid X_0 = \mathbf{x}\right]. \label{e:value}$$ The following lemma shows that the model is well-defined for the birth-death-growth dynamics defined above. **Lemma 1**. *The infinite horizon $\alpha$-discounted total expected reward function $v_\alpha^\phi(\mathbf{x})$, $\mathbf{x}\in {\cal{X}}$, defined in ([\[e:value\]](#e:value){reference-type="ref" reference="e:value"}) is finite for all $0 \leq \alpha < 1$, all $R>0$ and all policies $\phi$.* Pick $\mathbf{x}\in {\cal{X}}$ and write $n(\mathbf{x}) < \infty$ for its cardinality. Since the growth function ([\[e:gn\]](#e:gn){reference-type="ref" reference="e:gn"}) is bounded by $K$, $${\mathbb E}\left[ \sum_{(x,m)\in X_0 \setminus Y_0} m \mid X_0 = \mathbf{x}\right] \leq K n(\mathbf{x}).$$ For $i > 0$, $X_i$ is the union of survivors from $\mathbf{x}$, from subsequent generations starting with $X_1\setminus X_0$ up to $X_{i-1} \setminus X_{i-2}$ and points born in the last decision epoch. Therefore, recalling the birth and death dynamics, $${\mathbb E}\left[ \sum_{(x,m)\in X_i \setminus Y_i} m | X_0 = \mathbf{x}\right] \leq K n(\mathbf{x}) (1-p_d)^i + K \beta |W| \sum_{k=0}^{i-1} (1-p_d)^k$$ where $|W|$ denotes the area of $W$. Hence $$v_\alpha^\phi(\mathbf{x}) \leq R K n(\mathbf{x}) \sum_{i=0}^\infty \alpha^i (1-p_d)^i + R K \beta |W| \sum_{i=1}^\infty \alpha^i \sum_{k=0}^{i-1} (1-p_d)^k.$$ For all $p_d \in (0,1)$, the first series in the right hand side converges to $1 / ( 1 - \alpha (1-p_d))$. Since $$\sum_{i=1}^\infty \alpha^i \sum_{k=0}^{i-1} (1-p_d)^k = \sum_{i=1}^\infty \alpha^i \frac{1-(1-p_d)^i}{p_d} \leq \frac{1}{p_d} \sum_{i=1}^\infty \alpha^i < \infty$$ for all $p_d \in (0,1)$, $v_\alpha^\phi(\mathbf{x})$ is finite. $\square$\ The reward function $r$ itself is not bounded, so the (N) regime of [@BertShrev78 Chapter 9] applies. ## Optimal policy and reward {#S:French} The optimal $\alpha$-discounted total expected reward $v^*_\alpha(\mathbf{x})$ is defined as the supremum of the $v_\alpha^\phi(\mathbf{x})$ over all policies, including randomised ones. In this section, we will show that French thinning is optimal and give an explicit expression for the corresponding reward. By [@BertShrev78 Proposition 9.1], the supremum in the definition of $v^*_\alpha(\mathbf{x})$ may be taken over the class of Markov policies, and, by [@BertShrev78 Proposition 9.8], $v^*_\alpha$ satisfies the equation $$\label{e:Bellman} v^*_\alpha(\mathbf{x}) = \max_{\mathbf{a}\subset \mathbf{x}} \left\{ R \sum_{(x,m) \in \mathbf{x}\setminus \mathbf{a}} m + \alpha {\mathbb E}\left[ v^*_\alpha(X) \mid \mathbf{x}, \mathbf{a}\right] \right\}$$ where $X$ is distributed according to the one step birth-death-growth dynamics from state $\mathbf{x}$ under action $\mathbf{a}$. Observe that the optimality equations ([\[e:Bellman\]](#e:Bellman){reference-type="eqref" reference="e:Bellman"}), $\mathbf{x}\in {\cal{X}}$, are not sufficient conditions for $v^*_\alpha$. Nevertheless, $v^*_\alpha(\mathbf{x})$ can be calculated as the limit of an iterative procedure [@BertShrev78 Proposition 9.14] known as the dynamic programming algorithm. 
Set $v_0(\mathbf{x}) = 0$ for all $\mathbf{x}\in {\cal{X}}$ and set $n=1$. Define, for every $\mathbf{x}\in {\cal{X}}$, $$v_n(\mathbf{x}) = \max_{\mathbf{a}\subset \mathbf{x}} \left\{ R \sum_{(x,m)\in \mathbf{x}\setminus \mathbf{a}} m + \alpha {\mathbb E}\left[ v_{n-1}(X) \mid \mathbf{x}, \mathbf{a}\right] \right\}.$$ Then set $n = n + 1$ and repeat. This algorithm converges to $v^*_\alpha(\mathbf{x})$ as $n\to \infty$ by [@BertShrev78 Proposition 9.14] but -- in general -- is of little help in constructing an optimal policy, let alone a stationary one. Given a stationary policy $\phi$, a necessary and sufficient condition for it to be optimal is [@BertShrev78 Prop. 9.13] $$v_{\alpha}^{\phi}(\mathbf{x}) = \max_{\mathbf{a}\subset \mathbf{x}} \left\{ R \sum_{(x,m) \in \mathbf{x}\setminus \mathbf{a}} m + \alpha {\mathbb E}\left[ v_{\alpha}^{\phi}(X) | \mathbf{x}, \mathbf{a}\right] \right\}. \label{e:optimalDiscount}$$ For our model, the dynamic programming algorithm does suggest an optimal deterministic and stationary Markov policy. **Theorem 1**. *Consider the Markov decision process with state space ${\cal{X}}$, action spaces $A(\mathbf{x}) = \{ \mathbf{y}\in {\cal{X}}: \mathbf{y}\subset \mathbf{x}\}$, $\mathbf{x}\in {\cal{X}}$, reward function ([\[e:reward\]](#e:reward){reference-type="ref" reference="e:reward"}) with $R > 0$, and birth-death-growth dynamics based on independent deaths with probability $p_d \in (0,1)$, a Poisson birth process with intensity $\beta > 0$ marked independently according to probability measure $\nu$ on $[0,K]$ for $K > 0$ and logistic growth function ([\[e:gn\]](#e:gn){reference-type="ref" reference="e:gn"}). Then, for $0\leq \alpha < 1$, $$v^*_\alpha(\mathbf{x}) = R \beta | W| \sum_{k=1}^\infty \alpha^k \int_0^K s(m) \, d\nu(m) + R \sum_{(x,m)\in \mathbf{x}} s(m),$$ where $|W|$ is the area of $W$ and $$s(m) = \sup_{n\in {\mathbb N}_0} \left\{ \frac{ K \alpha^n ( 1 - p_d)^n }{ 1 + \left( \frac{K}{m} - 1 \right) e^{-\lambda n} } \right\}, \quad m \in [0, K].$$ Furthermore, the optimal $\alpha$-discounted total expected reward corresponds to a French thinning that removes all points with a mark that is at least $$d^*_\alpha = \sup_{n\in {\mathbb N}_0} \left\{ \frac{K}{1 - e^{-n\lambda}} \left( \alpha^n ( 1 - p_d)^n - e^{-n \lambda} \right) \right\}.$$* For $\alpha = 1$, the total expected reward $v^*_1(\mathbf{x})$ is infinite. After initialising $v_0(\mathbf{x}) = 0$ for all $\mathbf{x}\in {\cal{X}}$, clearly the optimal expected reward at time $0$ is $v_1(\mathbf{x}) = R \sum_{(x,m)\in \mathbf{x}} m,$ which is attained for action $\mathbf{a}= \emptyset$, or, in other words, by removing all points with mark greater than or equal to $d_1 = 0$. The proof proceeds by induction. 
Set, for $n\in{\mathbb N}$, $$d_n = \max\left\{ 0, K \frac{ \alpha ( 1- p_d) - e^{-\lambda}} {1 - e^{- \lambda}}, \, \dots, \, K \frac{ \alpha^{n-1} ( 1- p_d)^{n-1} - e^{-(n-1)\lambda}} {1 - e^{- (n-1)\lambda}} \right\} \label{e:an}$$ and suppose that the optimal $\alpha$-discounted expected reward over $n$ steps is attained by French thinning at level $d_n$ and given by $$v_n(\mathbf{x}) = R \beta |W| \sum_{k=1}^{n-1} \alpha^k \int_0^K s_{n-k}(m) \, d\nu(m) + R \sum_{(x,m)\in \mathbf{x}} s_n(m) \label{e:vn}$$ where, for $1\leq k\leq n$, $$s_k(m) = \max\left\{ m, \alpha \left(1-p_d \right) g^{(1)}(m), \, \dots, \, \alpha^{k-1} \left(1-p_d \right)^{k-1} g^{(k-1)}(m) \right\}.$$ Now, for $n+1$, the optimal finite horizon $\alpha$-discounted expected reward is $$v_{n+1}(\mathbf{x}) = \max_{ \mathbf{a}\subset \mathbf{x}} \left\{ R \sum_{(x,m)\in \mathbf{x}\setminus \mathbf{a}} m + \alpha {\mathbb E}\left[ v_n(X) \mid \mathbf{x}, \mathbf{a}\right] \right\}.$$ By the induction assumption, the discounted expectation $\alpha {\mathbb E}\left[ v_n(X) \mid \mathbf{x}, \mathbf{a}\right]$ is the sum of $$\alpha R \beta |W| \sum_{k=1}^{n-1} \alpha^k \int_0^K s_{n-k}(m) \, d\nu(m) = R \beta |W| \sum_{k=2}^{n} \alpha^k \int_0^K s_{n+1-k}(m) \, d\nu(m)$$ and contributions from the points in $\mathbf{a}$ that survive a decision epoch as well as from points born in the interval between time $n$ and $n+1$. These contributions are, respectively, $$\alpha R \sum_{(x,m) \in \mathbf{a}} \left(1 - p_d \right) s_n( g(m) )$$ and, using the Campbell--Mecke formula [@DaleVere88 Section 6.1], $$\alpha R \beta |W| \int_0^K s_n(m) \, d\nu(m).$$ The optimal policy assigns a point $(x,m) \in \mathbf{x}$ to $\mathbf{x}\setminus \mathbf{a}$ if and only if $m \geq \alpha(1-p_d) s_n ( g^{(1)}(m) ).$ By the induction assumption and ([\[e:gn\]](#e:gn){reference-type="ref" reference="e:gn"}), this is the case if and only if $$m \geq \alpha ^k( 1-p_d)^k g^{(k)}(m) \Leftrightarrow m \geq K \frac{\alpha^k (1-p_d)^k - e^{-k\lambda}}{1 - e^{-k\lambda}} \label{e:threshold}$$ for all integers $1 \leq k \leq n$. Consequently, $d_{n+1}$ has the required form. For this allocation rule, the reward is $\max\left\{ m, \alpha \left( 1 - p_d \right) s_n( g^{(1)}(m) ) \right \} = s_{n+1}(m)$ and the induction step is complete. Next, let $n$ go to infinity and fix $m \in [0, K]$. Then $s(m)$ is finite for all $p_d \in (0,1)$ and $0\leq \alpha < 1$. Additionally, $\lim_{n\to\infty} s_n(m) = s(m)$. Thus, for any $\mathbf{x}\in {\cal{X}}$, $$R \sum_{(x,m)\in\mathbf{x}} s_n(m) \to R \sum_{(x,m)\in\mathbf{x}} s(m)$$ as $n\to \infty$. Furthermore, $$\sum_{k=1}^{n-1} \alpha^k \int_0^K s_{n-k}(m) \, d\nu(m) \to \sum_{k=1}^\infty \alpha^k \int_0^K s(m) \, d\nu(m), \quad n\to \infty,$$ because of dominated convergence applied to the doubly indexed sequence $a_{k,n}$ defined by ${\mathbf{1}}\left\{ k \leq n-1 \right\} \alpha^k \int s_{n-k} \, d\nu.$ In conclusion, for each $\mathbf{x}\in {\cal{X}}$, $\lim_{n\to\infty} v_n(\mathbf{x}) = v^*_\alpha(\mathbf{x})$, the optimal $\alpha$-discounted total expected reward [@BertShrev78 Proposition 9.14], and $v^*_\alpha(\mathbf{x})$ has the claimed form. To complete the proof, we need to show that $v^*(\mathbf{x})$ is attained by the stationary deterministic policy that retains all points with mark smaller than $d^*_\alpha$. 
Denote its infinite horizon $\alpha$-discounted total expected reward by $$v_\alpha^{d^*}(\mathbf{x}) = {\mathbb E}\left[ R \sum_{i=0}^\infty \alpha^i \sum_{(x,m)\in X_i} m \, {\mathbf{1}}\{ m \geq d^*_\alpha \} \mid X_0 = \mathbf{x}\right]$$ and focus on the contributions of each generation of points. A point $(x,m) \in \mathbf{x}$, the initial generation, yields a reward $R \, \alpha^n (1-p_d)^n g^{(n)}(m)$ precisely when $g^{(n-1)}(m)$ is less than $d^*_\alpha$ but $g^{(n)}(m) \geq d^*_\alpha$. Since, as in ([\[e:threshold\]](#e:threshold){reference-type="ref" reference="e:threshold"}), $g^{(n)}(m) \geq d^*_\alpha$ if and only if $$g^{(n)}(m) \geq \alpha^k ( 1 - p_d )^k g^{(n+k)}(m)$$ for all $k \in {\mathbb N}_0$, we conclude that every point of $\mathbf{x}$ contributes $R \, s(m)$. The points that are born in the first decision epoch (generation $1$) yield the same total expected reward, but this is discounted by $\alpha$ due to the later birth date. Similarly, the total expected reward of points belonging to the second generation is discounted by $\alpha^2$, and so on. Tallying up, the $\alpha$-discounted total expected reward of generations $k = 1, 2, \dots$ is $$R \beta |W| \sum_{k=1}^\infty \alpha^k \int_0^K s(m) \, d\nu(m)$$ on application of the Campbell--Mecke formula. Finally add the contribution from the initial generation to conclude that the threshold $d^*_\alpha$ defines an optimal policy. Condition ([\[e:optimalDiscount\]](#e:optimalDiscount){reference-type="ref" reference="e:optimalDiscount"}) is readily verified. $\square$\ As a by-product, the proof of Theorem [Theorem 1](#t:simple){reference-type="ref" reference="t:simple"} derives the optimal $\alpha$-discounted total expected reward ([\[e:vn\]](#e:vn){reference-type="ref" reference="e:vn"}) for finite time horizons too, and French thinning with threshold ([\[e:an\]](#e:an){reference-type="ref" reference="e:an"}) is an optimal policy. The suprema in $s(m)$ and $d^*_\alpha$ are attained, which can be seen by considering the limit for $n\to\infty$. # Hard core models with logistic growth {#S:hardcore} ## Bounds for the optimal discounted total expected reward {#S:bounds} In this section, we refine the Poisson model of the previous section to the case where births are governed by a hard core process. Thus, the state space ${\cal{X}}_K$ consists of all finite simple marked point patterns on a compact set $W$ in the plane that contain no pair $\{ x_1, x_2 \}$ such that $|| x_1 - x_2 || \leq K$ with marks in $L = [0, K]$. For the motivating example from forestry in which the marks correspond to the diameter at breast height, the condition ensures that all trees can grow to their maximal size. As in Section [2.1](#S:Defs){reference-type="ref" reference="S:Defs"}, when at time $i \in {\mathbb N}_0$ the process is in state $\mathbf{x}$, a thinning action is carried out, resulting in a new state $\mathbf{a}$ that consists of all retained points. The reward is defined in ([\[e:reward\]](#e:reward){reference-type="ref" reference="e:reward"}). The dynamics are modified in such a way that the hard core is respected. Specifically, suppose that action $\mathbf{a}$ is taken in state $\mathbf{x}\in {\cal{X}}_K$. 
The next state is then governed by the following birth-death-growth process: - delete $\mathbf{x}\setminus \mathbf{a}$; - independently of other points, let each $(x_i, m_i) \in \mathbf{a}$ die with probability $p_d \in (0,1)$ and otherwise grow to $(x_i, g(m_i) )$ for some bounded, continuous function $g: [0,K] \to [0,K]$ satisfying $m \leq g(m)$ for $m \in [0,K]$; - add a hard core process on $W$ with hard core distance $K$ and intensity $\beta > 0$; mark its points independently according to a probability measure $\nu$ on $[0,K]$ and remove all points that fall within distance $K$ to a point in $\mathbf{a}$. In this framework, the reward function is bounded since the hard core condition implies an upper bound on the number of points that can be alive at any time. We are therefore in the (D) regime of [@BertShrev78 Chapter 9]. For $\mathbf{x}\in {\cal{X}}_K$, define $v^*_\alpha(\mathbf{x})$ as the supremum of ([\[e:value\]](#e:value){reference-type="ref" reference="e:value"}) over all policies $\phi$. By [@BertShrev78 Proposition 9.1] it suffices to consider Markov policies only, and $v^*_\alpha(\mathbf{x})$ is the limit of the dynamic programming algorithm [@BertShrev78 Proposition 9.14]. The optimality condition ([\[e:optimalDiscount\]](#e:optimalDiscount){reference-type="ref" reference="e:optimalDiscount"}) applies. Moreover, since the action sets are finite, Corollary 9.17.1 in [@BertShrev78] guarantees the existence of an optimal deterministic stationary policy. An explicit expression seems hard to obtain. However, the following bounds are available. **Theorem 2**. *Consider the Markov decision process with state space ${\cal{X}}_K$, action spaces $A(\mathbf{x}) = \{ \mathbf{y}\in {\cal{X}}_K: \mathbf{y}\subset \mathbf{x}\}$, $\mathbf{x}\in {\cal{X}}_K$, reward function ([\[e:reward\]](#e:reward){reference-type="ref" reference="e:reward"}) with $R > 0$, and birth-death-growth dynamics based on independent deaths with probability $p_d \in (0,1)$, a hard core birth process with intensity $\beta > 0$ marked independently according to probability measure $\nu$ on $[0,K]$ for $K > 0$ and growth function $g$. Write $g^{(n)}(m)$ for the $n$-fold composition of $g$.* *For $\alpha \in [0,1)$, initialise $v_0(\mathbf{x}) = 0$ for all $\mathbf{x}\in {\cal{X}}_K$. 
Define, for $n\in{\mathbb N}$ and $\mathbf{x}\in{\cal{X}}_K$, $$v_n(\mathbf{x}) = \max_{\mathbf{a}\subset \mathbf{x}} \left\{ R \sum_{(x,m)\in\mathbf{x}\setminus\mathbf{a}} m + \alpha {\mathbb E}\left[ v_{n-1}(X) \mid \mathbf{x}, \mathbf{a}\right] \right\}.$$ Then $\tilde v_n(\mathbf{x}) \leq v_n(\mathbf{x}) \leq \hat v_n(\mathbf{x})$ where $$\begin{aligned} \tilde v_n(\mathbf{x}) & = & R \sum_{(x,m)\in \mathbf{x}} \tilde s_n(x,m) + R \beta \sum_{k=1}^{n-1} \alpha^k \int_{W} \int_0^K \tilde s_{n-k}(w,l) \, d\nu(l) dw \\ \hat v_n(\mathbf{x}) & = & R \sum_{(x,m)\in \mathbf{x}} \hat s_n(m) + R \beta |W| \sum_{k=1}^{n-1} \alpha^k \int_0^K \hat s_{n-k}(l) \, d\nu(l) \\\end{aligned}$$ with $\tilde s_0 = \hat s_0 = 0$ and, for $n\in{\mathbb N}$, $$\hat s_n(m) = \max \left\{ m, \alpha (1-p_d) g^{(1)}(m), \,\dots, \, \alpha^{n-1} (1-p_d)^{n-1} g^{(n-1)}(m) \right\}$$ and, writing $b(x,K)$ for the closed ball centred at $x$ with radius $K$, $$\tilde s_n(x,m) = \max\{ m, \alpha (1-p_d) g^{(1)}(m) - \alpha K \beta | b(x,K) \cap W |, \, \dots,$$ $$\alpha^{n-1} (1-p_d)^{n-1} g^{(n-1)}(m) - \alpha K \beta | b(x,K) \cap W | \sum_{i=0}^{n-2} \alpha^i (1-p_d)^i \}.$$ [\[t:hardcore\]]{#t:hardcore label="t:hardcore"}* When the growth function is logistic, $$\begin{aligned} \tilde s_n(x,m) & = & \max_{i=0, \dots, n-1} \left\{ \frac{K \alpha^i ( 1 - p_d )^i}{1 + \left(\frac{K}{m} - 1 \right) e^{-\lambda i}} -\alpha K \beta | W \cap b(x,K) | \frac{ 1 - \alpha^{i}(1-p_d)^i }{ 1 - \alpha ( 1 - p_d) } \right\}; \\ \hat s_n(m) & = & \max_{ i = 0, \dots, n-1} \left\{ \frac{ K \alpha^i ( 1 - p_d )^i } {1 + \left(\frac{K}{m} - 1 \right) e^{-\lambda i}} \right\}.\end{aligned}$$ The proof proceeds by induction. For $n=0$, evidently $v_0 \leq \tilde v_0$. Assume that $\tilde v_k(\mathbf{x}) \leq v_k(\mathbf{x}) \leq \hat v_k(\mathbf{x})$ for all $k \leq n$ and all $\mathbf{x}\in{\cal{X}}_K$ and that $\tilde v_k$, $\hat v_k$ have the required form. Since $$v_{n+1}(\mathbf{x}) = \max_{ \mathbf{a}\subset \mathbf{x}} \left\{ R \sum_{(x,m) \in \mathbf{x}\setminus \mathbf{a}} m + \alpha {\mathbb E}\left[ v_n(X) \mid \mathbf{x}, \mathbf{a}\right] \right\} \label{e:vnext}$$ and $v_n(X) \geq \tilde v_n(X)$, let us consider the expectation of $\tilde v_n(X)$ under the hard core birth-death-growth dynamics when action $\mathbf{a}$ is taken in state $\mathbf{x}$. By the definition of $\tilde v_n$ and distinguishing between surviving and new-born points, $$\begin{aligned} {\mathbb E}\left[ \tilde v_n(X) \mid \mathbf{x}, \mathbf{a}\right] & = & R \, {\mathbb E}\left[ \sum_{(x,m) \in X} \tilde s_n(x,m) \mid \mathbf{x}, \mathbf{a}\right] + R \beta \sum_{k=1}^{n-1} \alpha^k \int_{W} \int_0^K \tilde s_{n-k}(w,l) \, d\nu(l) dw \\ & = & R \sum_{(x,m) \in \mathbf{a}} \left( 1-p_d \right) \tilde s_n(x,g^{(1)}(m) ) + R \beta \sum_{k=1}^{n-1} \alpha^k \int_{W} \int_0^K \tilde s_{n-k}(w,l) \, d\nu(l) dw \\ & & + R \beta \int_{W} \int_0^K \tilde s_{n}(w,l) \, {\mathbf{1}}\{ w \not \in U_K(\mathbf{a}) \} \, d\nu(l) dw \end{aligned}$$ where the symbol $U_K(\mathbf{a})$ signifies the union of closed balls with radius $K$ around the points in $\mathbf{a}$. The calculation of the last term above relies on the Campbell--Mecke formula [@DaleVere88 Section 6.1]. 
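In the logistic case, the summands $\hat s_n$ and $\tilde s_n$ given above can be evaluated directly. A minimal Python sketch follows; the parameter values are hypothetical (chosen close to those of the simulation study in Section [3.2](#S:simu){reference-type="ref" reference="S:simu"}), $R = 1$, and $\vert b(x,K)\cap W\vert$ is simply set to $\pi K^2$, i.e. the point $x$ is assumed to lie well inside $W$.

```python
import math

# Hypothetical parameter values, R = 1; compare the simulation study below.
K, lam, p_d, alpha, beta = 0.1, 2.0, 0.05, 0.9, 1.0

def factor(m, i):
    """alpha^i (1 - p_d)^i g^{(i)}(m) for the logistic growth map."""
    g_i = K / (1.0 + (K / m - 1.0) * math.exp(-lam * i)) if m > 0 else 0.0
    return (alpha * (1.0 - p_d)) ** i * g_i

def s_hat(m, n):
    """Upper-bound summand of Theorem 2 in the logistic case."""
    return max(factor(m, i) for i in range(n))

def s_tilde(m, n, ball_cap_area):
    """Lower-bound summand; ball_cap_area plays the role of |b(x, K) cap W|."""
    pen = alpha * K * beta * ball_cap_area
    return max(factor(m, i)
               - pen * (1.0 - (alpha * (1.0 - p_d)) ** i) / (1.0 - alpha * (1.0 - p_d))
               for i in range(n))

m, area = 0.03, math.pi * K ** 2   # a point well inside W, so b(x, K) lies in W
for n in (1, 2, 5, 10, 20):
    print(n, round(s_tilde(m, n, area), 5), round(s_hat(m, n), 5))
# the gap between the two columns is the price paid for ignoring the hard core
```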
Now, the last integral in the expectation ${\mathbb E}\left[ \tilde v_n(X) \mid \mathbf{x}, \mathbf{a}\right]$ above can be written as $$R \beta \int_{W} \int_0^K \tilde s_n(w,l) \, d\nu(l) dw - R \beta \int_{W} \int_0^K \tilde s_n (w, l) \, {\mathbf{1}}\{ w \in U_K(\mathbf{a}) \} \, d\nu(l) dw$$ and is bounded from below by $$R \beta \int_{W} \int_0^K \tilde s_n(w,l) \, d\nu(l) dw - R K \beta \sum_{(x,m)\in \mathbf{a}} \int_{W} \int_0^K {\mathbf{1}}\{ w \in b(x,K) \} \, d\nu(l) dw \label{e:lowerbound}$$ where the induction assumption is invoked for the inequality $\tilde s_n \leq K$. Next, return to ([\[e:vnext\]](#e:vnext){reference-type="ref" reference="e:vnext"}). The bound on ${\mathbb E}\left[ \tilde v_n(X) \mid \mathbf{x}, \mathbf{a}\right]$ implies $$v_{n+1}(\mathbf{x}) \geq \max_{\mathbf{a}\subset \mathbf{x}} \left\{ R \sum_{(x,m)\in\mathbf{x}\setminus\mathbf{a}} m + \alpha {\mathbb E}\left[ \tilde v_{n}(X) \mid \mathbf{x}, \mathbf{a}\right] \right\} \\ \geq \max_{\mathbf{a}\subset \mathbf{x}} \{ R \sum_{(x,m) \in \mathbf{x}\setminus \mathbf{a}} m +$$ $$\alpha R \sum_{(x,m) \in \mathbf{a}} \left[ ( 1-p_d ) \tilde s_n(x,g^{(1)}(m) ) - K \beta | b(x,K) \cap W | \right] + R \beta \sum_{k=1}^{n} \alpha^k \int_{W} \int_0^K \tilde s_{n+1-k}(w,l) \, d\nu(l) dw \}.$$ The policy that assigns $(x,m)$ to $\mathbf{x}\setminus \mathbf{a}$ if and only if $$m \geq \alpha\left[ \left( 1-p_d \right) \tilde s_n(x, g^{(1)}(m)) - K \beta | b(x,K) \cap W| \right]$$ optimises the right hand side and, with $$\tilde s_{n+1}(x,m) = \max\left\{ m, \alpha \left( 1-p_d \right) \tilde s_{n}(x, g^{(1)}(m)) - \alpha K \beta | b(x,K) \cap W | \right\},$$ one sees that $$v_{n+1}(\mathbf{x}) \geq \tilde v_{n+1}(\mathbf{x}) = R \sum_{(x,m)\in \mathbf{x}} \tilde s_{n+1}(x,m) + R \beta \sum_{k=1}^{n} \alpha^k \int_{W} \int_0^K \tilde s_{n+1-k}(w,l) \, d\nu(l) dw,$$ an observation that completes the induction argument and therefore the proof of the lower bound. For the upper bound $v_n \leq \hat v_n$, as in the proof of Theorem [Theorem 1](#t:simple){reference-type="ref" reference="t:simple"}, an induction proof applies based on $\hat s_n$ but with ([\[e:lowerbound\]](#e:lowerbound){reference-type="ref" reference="e:lowerbound"}) replaced by the upper bound $$R \beta \int_{W} \int_0^K \hat s_n(l) \, d\nu(l) dw.$$ $\square$\ Over an infinite time horizon, the optimal $\alpha$-discounted total expected reward is bounded by the same functional forms, which coincide if $\alpha = 0$. **Corollary 1**. *The functions $\hat s_n$ and $\tilde s_n$ defined in Theorem [\[t:hardcore\]](#t:hardcore){reference-type="ref" reference="t:hardcore"} take values in $[0,K]$ and increase monotonically to $$\hat s(m) = \sup_{n\in {\mathbb N}_0} \left\{ \alpha^n (1-p_d)^n g^{(n)}(m) \right\}, \quad m \in [0,K],$$ and, for $x\in W$ and $m\in [0,K]$, $$\tilde s(x,m) = \sup_{n\in {\mathbb N}_0} \left\{ \alpha^n (1-p_d)^n g^{(n)}(m) - \alpha K \beta | b(x,K) \cap W | \sum_{i=0}^{n-1} \alpha^i (1-p_d)^i \right\}.$$* ## Simulation study {#S:simu} To assess the tightness of the bounds in Theorem [\[t:hardcore\]](#t:hardcore){reference-type="ref" reference="t:hardcore"}, we calculated $\hat v_n(\mathbf{x})$ and $\tilde v_n(\mathbf{x})$ in two regimes, a dense one and a sparse one. For the initial pattern $\mathbf{x}$, a sample from a Strauss process [@KellRipl76] on $W = [0,5]^2$ with interaction parameter set to zero was chosen. The activity parameter was set to give the required intensity: $\beta = 1.0$ in the sparse regime and $\beta = 4.3$ in the dense regime. 
For the mark dynamics, we used a logistic growth function with $\lambda = 2$ and maximal size $K=0.1$; the initial marks were sampled from a Beta distribution with shape parameters $\lambda_1 = 2$ and $\lambda_2 = 20$. The death rate was set to $p_d = 0.05$. Finally, we used discount factor $\alpha = 0.9$ and reward parameter $R=1$. The results are plotted in Figure [1](#F:simu){reference-type="ref" reference="F:simu"}. The left panels show the pattern $\mathbf{x}$. In the right panels, the solid lines are the graphs of $\hat v_n(\mathbf{x})$ as a function of $n$, the dotted lines show $\tilde v_n(\mathbf{x})$ plotted against $n$. Integrals were estimated by the Monte Carlo method with $1,000$ samples. In the sparse regime, the approximation is quite good, for the denser regime, the gap between the two graphs is quite wide except for very small $n$. In both cases, the dynamic programming algorithm converges rapidly. ![Left panels: samples $\mathbf{x}$ from a Strauss hard core process with intensity $\beta = 1.0$ (top) and $\beta = 4.3$ (bottom) on $[0,5]^2$. Right panels: graphs of $\hat v_n(\mathbf{x})$ (solid lines) and $\tilde v_n(\mathbf{x})$ (dotted lines) against $n$ for the birth-death-growth dynamics of Section [3.2](#S:simu){reference-type="ref" reference="S:simu"}.](fig3.ps){#F:simu width="5in"} # Conclusion In this paper we considered optimal policies for Markov decision problems inspired by forest harvesting. We proved that French thinning is optimal when births follow a Poisson process and marks grow logistically. When the points are required to respect a hard core distance, we derived upper and lower bounds on the discounted total expected reward for general birth-death-growth dynamics. Although we focused on a homogeneous birth process, the results carry over to the case where the birth process is governed by some spatially varying intensity function. In future it would be of interest to study configuration dependent asymmetric birth and growth models [@Lies08; @Lies09; @Rensetal09]. Indeed, in a forestry setting, the growth of well-established, large trees may hardly be hampered by the emergence of saplings close by, while it would be harder for young and small trees to flourish near large ones. Moreover, the natural environment, such as the availability of nutrients, might play a role. Finally, refinements of the action space that allow for different thresholds in different mark strata could be investigated. 99 Baccelli, F. and Blaszczyszyn, B. (2009). *Stochastic geometry and wireless networks*, in two volumes. NOW. Beard, M., Vo, B.T., Vo, B.N. and Arulampalam, S. (2017). Void probabilities and Cauchy--Schwarz divergence for generalized labeled multi-Bernoulli models. *IEEE Trans. Signal Process.*, **65**, 5047--5061. Bertsekas, D.P. (1995). *Dynamic programming and optimal control.* Prentice and Hall. Bertsekas, D.P. and Shreve, S.E. (1978). *Stochastic optimal control: The discrete time case.* Academic Press. Daley, D.J. and Vere--Jones, D. (2003, 2008). *An introduction to the theory of point processes*, second edition in two volumes. Springer. Feinberg, E.A. (2016). *Optimality conditions for inventory control*. Tutorials in Operations Research, INFORMS 2016, pp. 14--44. Feinberg, E.A. and Lewis, M.E. (2007). Optimality inequalities for average cost Markov decision processes and the stochastic cash balance problem. *Math. Oper. Res.*, **32**, 769--783. Feinberg, E.A. and Schwartz, A. (2002). *Handbook of Markov decision processes*. Springer. 
Fransson, P., Franklin, O., Lindroos, O., Nilsson, U. and Brännström, Å. (2020). A simulation-based approach to a near optimal thinning strategy: Allowing for individual harvesting times for individual trees. *Can. J. For. Res.*, **50**, 320--331. Hernández--Lerma, O. and Lasserre, B.J. (1996). *Discrete-time Markov control processes: Basic optimality criteria*. Springer. Khloussy, E., Gelabert, X. and Jiang, Y. (2015). Investigation on MDP-based radio access technology selection in heterogeneous wireless networks. *Comput. Netw.*, **91**, 57--67. Kelly, F.P. and Ripley, B.D. (1976). On Strauss's model for clustering. *Biometrika*, **63**, 357--360. Lee, W., Jung, B.C. and Lee, H. (2020). DeCoNet: Density clustering-based base station control for energy-efficient cellular IoT networks. *IEEE Access*, **8**, 120881. Lieshout, M.N.M. van (2008). Depth map calculation for a variable number of moving objects using Markov sequential object processes. *IEEE Trans. Pattern Anal. Mach. Intell.*, **30**, 1308--1312. Lieshout, M.N.M. van (2009). Sequential spatial processes for image analysis. In *Stereology and Image Analysis. ECS10--Proceedings of the 10th European Congress of ISS*, V. Capasso *et al.* (Eds.), 6 pages. Bologna. Lu, X., Salehi, M., Haenggi, M., and Hossain, E. (2021). Stochastic geometry analysis of spatial-temporal performance in wireless networks: A tutorial. *IEEE Commun. Surveys & Tutorials*, **23**, 2753--2801. Matérn, B. (1986). *Spatial variation*. Springer. Myllymäki, M. (2009). *Statistical models and inference for spatial point patterns with intensity-dependent marks*. PhD thesis, University of Jyväskylä. Pretzsch, H. (2009). *Forest dynamics, growth and yield*. Springer. Pukkala, T. and Miina, J. (1998). Tree-selection algorithms for optimizing thinning using a distance-dependent growth model. *Can. J. For. Res.*, **28**, 693--702. Pukkala, T., Lähde, E. and Laiho, O. (2015). Which trees should be removed in thinning treatments? *For. Ecosyst.*, **2**, 1--12. Puterman, M.L. (1994). *Markov decision processes*. Wiley. Renshaw, E. and Särkkä, A. (2001). Gibbs point processes for studying the development of spatial-temporal stochastic processes. *Comput. Stat. Data Anal.*, **36**, 85--105. Renshaw, E., Comas, C. and Mateu, J. (2009). Analysis of forest thinning strategies through the development of space-time growth-interaction simulation models. *Stoch. Environ. Res. Risk Assess.*, **23**, 275--288. Richards, F.J. (1959). A flexible growth function for empirical use. *J. Exp. Bot.*, **10**, 290--300. Rönnqvist, M. (2003). Optimization in forestry. *Math. Program. Ser. B*, **97**, 267--284. Schäl, M. (1993). Average optimality in dynamic programming with general state space. *Math. Oper. Res.*, **18**, 163--172.
arxiv_math
{ "id": "2309.03752", "title": "Optimal decision rules for marked point process models", "authors": "M.N.M. van Lieshout", "categories": "math.PR", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | In this paper we prove Poincaré inequalities for the Discrete de Rham (DDR) sequence on a general connected polyhedral domain $\Omega$ of $\mathbb{R}^3$. We unify the ideas behind the inequalities for all three operators in the sequence, deriving new proofs for the Poincaré inequalities for the gradient and the divergence, and extending the available Poincaré inequality for the curl to domains with arbitrary second Betti numbers. A key preliminary step consists in deriving "mimetic" Poincaré inequalities giving the existence and stability of the solutions to topological balance problems useful in general discrete geometric settings. As an example of application, we study the stability of a novel DDR scheme for the magnetostatics problem on domains with general topology.\ **Key words.** Discrete de Rham complex, polytopal methods, Poincaré inequalities\ **MSC2020.** 65N30, 65N99, 14F40 author: - Daniele A. Di Pietro - Marien-Lorenzo Hanot bibliography: - ddr-poincare.bib title: Uniform Poincaré inequalities for the Discrete de Rham complex on general domains --- # Introduction Poincaré inequalities are a key tool to prove the well-posedness of many common partial differential equation problems. Mimicking them at the discrete level is typically required for the stability of numerical approximations. Poincaré inequalities for conforming Finite Element de Rham complexes can be derived through bounded cochain projections as described, e.g., in [@Arnold:18 Chapter 5]; see also [@Christiansen.Licht:20] for a recent generalisation. In the context of Virtual Element de Rham complexes [@Beirao-da-Veiga.Brezzi.ea:18], similar results typically hinge on non-trivial norm comparison results, examples of which can be found in [@Beirao-da-Veiga.Dassi.ea:22*1]. Discrete Poincaré-type inequalities in the context of the (non-compatible) Hybrid High-Order methods have been derived, e.g., in [@Di-Pietro.Droniou:20] (gradient), [@Botti.Di-Pietro.ea:17] (symmetric gradient) and [@Chave.Di-Pietro.ea:22; @Lemaire.Pitassi:23] (curl). The focus of the present work is on the derivation of Poincaré inequalities for the Discrete de Rham (DDR) sequence of [@Di-Pietro.Droniou:23*1] on domains with general topology. Unlike Finite and Virtual Elements, DDR formulations are fully discrete, with spaces spanned by vectors of polynomials and continuous vector calculus operators replaced by discrete counterparts. Discrete Poincaré inequalities thus require to bound $L^2$-like norms of vectors of polynomials with $L^2$-like norms of suitable discrete operators applied to them. To establish such bounds, we take inspiration from [@Di-Pietro.Droniou.ea:23], where it was noticed that the topological information is fully contained in the lowest-order DDR subsequence, and [@Di-Pietro.Droniou:21], where a Poincaré inequality for the curl on topologically trivial domains of $\mathbb{R}^3$ was derived. The lowest-order DDR sequence is strongly linked to Mimetic Finite Differences and related methods [@Brezzi.Lipnikov.ea:05; @Brezzi.Buffa.ea:09; @Beirao-da-Veiga.Lipnikov.ea:14; @Bonelle.Ern:14; @Bonelle.Di-Pietro.ea:15; @Codecasa.Specogna.ea:10]. 
The first step to prove discrete Poincaré inequalities in DDR spaces is thus precisely to establish the mimetic counterparts stated in Theorems [Theorem 4](#thm:Whitney.V){reference-type="ref" reference="thm:Whitney.V"}, [Theorem 6](#thm:Whitney.E){reference-type="ref" reference="thm:Whitney.E"}, and [Theorem 7](#thm:Whitney.F){reference-type="ref" reference="thm:Whitney.F"} below. Their proofs require working at the global level, with conditions accounting for the topology of the domain appearing for the curl. The discrete Poincaré inequalities for arbitrary-order DDR spaces collected in Section [2.7](#sec:main.results){reference-type="ref" reference="sec:main.results"} below are then obtained by combining the mimetic Poincaré inequalities with local estimates of the higher-order components. We next briefly discuss the links between the present work and previous results for DDR methods. Fully general Poincaré inequalities for the gradient and the divergence had already been obtained, respectively, in [@Di-Pietro.Droniou:23*1 Theorem 3] and [@Di-Pietro.Droniou:21] using different techniques. The main novelty of the proofs provided here is that they are better suited to generalisations in the framework of the Polytopal Exterior Calculus recently introduced in [@Bonaldi.Di-Pietro.ea:23]. A Poincaré inequality for the curl on topologically trivial domains had been obtained in [@Di-Pietro.Droniou:21 Theorem 20]. The main novelty with respect to this result consists in the extension to domains encapsulating voids. An additional interest of the material in this paper is that it contains preliminary results to establish discrete Poincaré inequalities for advanced complexes, such as the three-dimensional discrete div-div complex recently introduced in [@Di-Pietro.Hanot:23]. The rest of the paper is organized as follows. The definitions of the relevant DDR spaces and operators are briefly recalled in Section [2](#sec:DDRconstruction){reference-type="ref" reference="sec:DDRconstruction"}. Mimetic Poincaré inequalities are derived in Section [3](#sec:proof.poincaré){reference-type="ref" reference="sec:proof.poincaré"}, and then used to prove discrete Poincaré inequalities for the DDR complex in Section [4](#sec:proof.DDR.poincare){reference-type="ref" reference="sec:proof.DDR.poincare"}. The latter are used in Section [5](#sec:vectorLaplace){reference-type="ref" reference="sec:vectorLaplace"} to carry out the stability analysis of a DDR scheme for the magnetostatics problem on domains with general topology. Some arguments in the proofs of mimetic Poincaré inequalities rely on specific shape functions for Finite Element spaces on a submesh, whose definitions and properties are summarised in Appendix [6](#sec:simplicial.de-rham){reference-type="ref" reference="sec:simplicial.de-rham"}. # Discrete de Rham construction {#sec:DDRconstruction} ## Domain and mesh {#sec:setting:domain.mesh} Let $\Omega\subset\mathbb{R}^3$ denote a connected polyhedral domain. We consider a polyhedral mesh $\mathcal{M}_h\coloneq\mathcal{T}_{h}\cup\mathcal{F}_{h}\cup\mathcal{E}_{h}\cup\mathcal{V}_{h}$, where $\mathcal{T}_{h}$ gathers the elements, $\mathcal{F}_{h}$ the faces, $\mathcal{E}_{h}$ the edges, and $\mathcal{V}_{h}$ the vertices. For all $Y\in\mathcal{M}_h$, we denote by $h_Y$ its diameter and set $h\coloneq\max_{T\in\mathcal{T}_{h}}h_T$. For each face $F\in\mathcal{F}_{h}$, we fix a unit normal $\boldsymbol{n}_F$ to $F$ and, for each edge $E\in\mathcal{E}_{h}$, a unit tangent $\boldsymbol{t}_E$. 
For $T\in\mathcal{T}_{h}$, $\mathcal{F}_{T}$ gathers the faces on the boundary $\partial T$ of $T$ and $\mathcal{E}_{T}$ the edges in $\partial T$; if $F\in\mathcal{F}_{h}$, $\mathcal{E}_{F}$ is the set of edges contained in the boundary $\partial F$ of $F$. For $F\in\mathcal{F}_{T}$, $\omega_{TF}\in\{-1,+1\}$ is such that $\omega_{TF}\boldsymbol{n}_F$ is the outer normal on $F$ to $T$. Each face $F\in\mathcal{F}_{h}$ is oriented counter-clockwise with respect to $\boldsymbol{n}_F$ and, for $E\in\mathcal{E}_{F}$, we let $\omega_{FE}\in\{-1,+1\}$ be such that $\omega_{FE}=+1$ if $\boldsymbol{t}_E$ points along the boundary $\partial F$ of $F$ in the clockwise sense, and $\omega_{FE}=-1$ otherwise; we also denote by $\boldsymbol{n}_{FE}$ the unit normal vector to $E$, in the plane spanned by $F$, such that $\omega_{FE}\boldsymbol{n}_{FE}$ points outside $F$. We denote by $\mathop{\mathrm{\bf grad}}_F$ and $\mathop{\mathrm{div}}_F$ the tangent gradient and divergence operators acting on smooth enough functions. Moreover, for any $r:F\to\mathbb{R}$ and $\boldsymbol{z}:F\to\mathbb{R}^2$ smooth enough, we let $\mathop{\mathrm{\bf rot}}_F r\coloneq (\mathop{\mathrm{\bf grad}}_F r)^\perp$ and $\mathop{\mathrm{rot}}_F\boldsymbol{z}=\mathop{\mathrm{div}}_F(\boldsymbol{z}^\perp)$, with $\perp$ denoting the rotation of angle $-\frac\pi2$ in the oriented tangent space to $F$. We further assume that $(\mathcal{T}_{h},\mathcal{F}_{h})$ belongs to a regular mesh sequence in the sense of [@Di-Pietro.Droniou:20 Definition 1.9], with mesh regularity parameter $\varrho>0$. This implies that, for each $Y\in\mathcal{T}_{h}\cup\mathcal{F}_{h}\cup\mathcal{E}_{h}$, there exists a point $\boldsymbol{x}_{Y}\in Y$ such that the ball centered at $\boldsymbol{x}_Y$ and of radius $\varrho h_Y$ is contained in $Y$. Throughout the paper, $a\lesssim b$ (resp., $a\gtrsim b$) stands for $a\le Cb$ (resp., $a\ge Cb$) with $C$ depending only on $\Omega$, the mesh regularity parameter and, when polynomial functions are involved, the corresponding polynomial degree. We also write $a \simeq b$ when both $a \lesssim b$ and $b \lesssim a$ hold. ## Polynomial spaces and $L^2$-orthogonal projectors For any $Y\in\mathcal{M}_h$ and an integer $\ell\ge 0$, we denote by $\mathcal{P}_{}^{\ell}(Y)$ the space spanned by the restriction to $Y$ of polynomial functions of the space variables. Let, for $Y \in \mathcal{T}_{h}\cup \mathcal{F}_{h}$, $\boldsymbol{\mathcal{P}}_{}^{\ell}(Y)\coloneq\mathcal{P}_{}^{\ell}(Y)^n$ with $n$ denoting the dimension of $Y$. 
We have the following direct decompositions: For all $F\in\mathcal{F}_{h}$, $$\text{% $\boldsymbol{\mathcal{P}}_{}^{\ell}(F) = \boldsymbol{\mathcal{R}}^{\ell}(F) \oplus \boldsymbol{\mathcal{R}}^{\mathrm{c},\ell}(F)$ with $\boldsymbol{\mathcal{R}}^{\ell}(F)\coloneq\mathop{\mathrm{\bf rot}}_F\mathcal{P}_{}^{\ell+1}(F)$ and $\boldsymbol{\mathcal{R}}^{\mathrm{c},\ell}(F)\coloneq(\boldsymbol{x}-\boldsymbol{x}_F)\mathcal{P}_{}^{\ell-1}(F)$ }$$ and, for all $T\in\mathcal{T}_{h}$, $$\begin{alignedat}{4} \boldsymbol{\mathcal{P}}_{}^{\ell}(T) &= \boldsymbol{\mathcal{G}}^{\ell}(T) \oplus \boldsymbol{\mathcal{G}}^{\mathrm{c}, \ell}(T) &\enspace& \text{% with $\boldsymbol{\mathcal{G}}^{\ell}(T)\coloneq\mathop{\mathrm{\bf grad}}\mathcal{P}_{}^{\ell+1}(T)$ and $\boldsymbol{\mathcal{G}}^{\mathrm{c}, \ell}(T)\coloneq(\boldsymbol{x}-\boldsymbol{x}_T)\times \boldsymbol{\mathcal{P}}_{}^{\ell-1}(T)$% } \\ \label{eq:vPoly=Roly+cRoly} &= \boldsymbol{\mathcal{R}}^{\ell}(T) \oplus \boldsymbol{\mathcal{R}}^{\mathrm{c},\ell}(T) &\enspace& \text{% with $\boldsymbol{\mathcal{R}}^{\ell}(T)\coloneq\mathop{\mathrm{\bf curl}}\boldsymbol{\mathcal{P}}_{}^{\ell+1}(T)$ and $\boldsymbol{\mathcal{R}}^{\mathrm{c},\ell}(T)\coloneq(\boldsymbol{x}-\boldsymbol{x}_T)\mathcal{P}_{}^{\ell-1}(T)$.% } \end{alignedat}$$ We extend the above notations to negative exponents $\ell$ by setting all the spaces appearing in the decompositions equal to the trivial vector space $\{\boldsymbol{0}\}$. Given a polynomial (sub)space $\mathcal{X}^\ell(Y)$ on $Y\in\mathcal{M}_h$, the corresponding $L^2$-orthogonal projector is denoted by $\pi_{\mathcal{X},Y}^\ell$. Boldface font will be used when the elements of $\mathcal{X}^\ell(Y)$ are vector-valued, and $\boldsymbol{\pi}_{\boldsymbol{\mathcal{X}},Y}^{\mathrm{c},\ell}$ will denote the $L^2$-orthogonal projector on $\boldsymbol{\mathcal{X}}^{{\rm c},\ell}(Y)$. 
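The dimensions of the spaces in these decompositions add up as they should. The short Python sketch below checks this for the first few polynomial degrees; the kernel dimension counts it relies on (constants for $\mathop{\mathrm{\bf rot}}_F$ and $\mathop{\mathrm{\bf grad}}$, gradients of polynomials of one degree higher for $\mathop{\mathrm{\bf curl}}$, and $(\boldsymbol{x}-\boldsymbol{x}_T)\mathcal{P}_{}^{\ell-2}(T)$ for the cross-product map) are classical and are taken as given here.

```python
from math import comb

def dim_P3(l):   # dim P^l(T), T a polyhedron of R^3 (0 if l < 0)
    return comb(l + 3, 3) if l >= 0 else 0

def dim_P2(l):   # dim P^l(F), F a planar face (0 if l < 0)
    return comb(l + 2, 2) if l >= 0 else 0

for l in range(0, 6):
    # Face decomposition: P^l(F)^2 = R^l(F) + R^{c,l}(F)
    dim_R_F  = dim_P2(l + 1) - 1                        # rot_F of P^{l+1}(F)
    dim_Rc_F = dim_P2(l - 1)                            # (x - x_F) P^{l-1}(F)
    assert dim_R_F + dim_Rc_F == 2 * dim_P2(l)

    # Element decompositions: P^l(T)^3 = G^l(T) + G^{c,l}(T) = R^l(T) + R^{c,l}(T)
    dim_G_T  = dim_P3(l + 1) - 1                        # grad of P^{l+1}(T)
    dim_Gc_T = 3 * dim_P3(l - 1) - dim_P3(l - 2)        # (x - x_T) x P^{l-1}(T)^3
    dim_R_T  = 3 * dim_P3(l + 1) - (dim_P3(l + 2) - 1)  # curl of P^{l+1}(T)^3
    dim_Rc_T = dim_P3(l - 1)                            # (x - x_T) P^{l-1}(T)
    assert dim_G_T + dim_Gc_T == 3 * dim_P3(l)
    assert dim_R_T + dim_Rc_T == 3 * dim_P3(l)

print("all dimension counts consistent")
```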
## DDR spaces {#sec:DDRconstruction:spaces} The discrete counterparts of the spaces appearing in the continuous de Rham complex are defined as follows: $$\underline{X}_{\mathop{\mathrm{\bf grad}},h}^k\coloneq\Big\{ \begin{aligned}[t] \underline{q}_h &=\big((q_T)_{T\in\mathcal{T}_{h}},(q_F)_{F\in\mathcal{F}_{h}}, (q_E)_{E\in\mathcal{E}_{h}}, (q_V)_{V\in\mathcal{V}_{h}}\big)\,:\, \\ &\qquad \text{$q_T\in \mathcal{P}_{}^{k-1}(T)$ for all $T\in\mathcal{T}_{h}$, $q_F\in\mathcal{P}_{}^{k-1}(F)$ for all $F\in\mathcal{F}_{h}$,} \\ &\qquad \text{$q_E\in\mathcal{P}_{}^{k-1}(E)$ for all $E\in\mathcal{E}_{h}$, and $q_V \in \mathbb{R}$ for all $V\in\mathcal{V}_{h}$} \Big\}, \end{aligned}$$ $$\underline{\boldsymbol{X}}_{\mathop{\mathrm{\bf curl}},h}^k\coloneq\Big\{ \begin{aligned}[t] \underline{\boldsymbol{v}}_h &=\big( (\boldsymbol{v}_{\boldsymbol{\mathcal{R}},T},\boldsymbol{v}_{\boldsymbol{\mathcal{R}},T}^\mathrm{c})_{T\in\mathcal{T}_{h}},(\boldsymbol{v}_{\boldsymbol{\mathcal{R}},F},\boldsymbol{v}_{\boldsymbol{\mathcal{R}},F}^\mathrm{c})_{F\in\mathcal{F}_{h}}, (v_E)_{E\in\mathcal{E}_{h}} \big)\,:\, \\ &\qquad\text{$\boldsymbol{v}_{\boldsymbol{\mathcal{R}},T}\in\boldsymbol{\mathcal{R}}^{k-1}(T)$ and $\boldsymbol{v}_{\boldsymbol{\mathcal{R}},T}^\mathrm{c}\in\boldsymbol{\mathcal{R}}^{\mathrm{c},k}(T)$ for all $T\in\mathcal{T}_{h}$,} \\ &\qquad\text{$\boldsymbol{v}_{\boldsymbol{\mathcal{R}},F}\in\boldsymbol{\mathcal{R}}^{k-1}(F)$ and $\boldsymbol{v}_{\boldsymbol{\mathcal{R}},F}^\mathrm{c}\in\boldsymbol{\mathcal{R}}^{\mathrm{c},k}(F)$ for all $F\in\mathcal{F}_{h}$,} \\ &\qquad\text{and $v_E\in\mathcal{P}_{}^{k}(E)$ for all $E\in\mathcal{E}_{h}$}\Big\}, \end{aligned}$$ $$\underline{\boldsymbol{X}}_{\mathop{\mathrm{div}},h}^k\coloneq\Big\{ \begin{aligned}[t] \underline{\boldsymbol{w}}_h &=\big((\boldsymbol{w}_{\boldsymbol{\mathcal{G}},T},\boldsymbol{w}_{\boldsymbol{\mathcal{G}},T}^\mathrm{c})_{T\in\mathcal{T}_{h}}, (w_F)_{F\in\mathcal{F}_{h}}\big)\,:\, \\ &\qquad\text{$\boldsymbol{w}_{\boldsymbol{\mathcal{G}},T}\in\boldsymbol{\mathcal{G}}^{k-1}(T)$ and $\boldsymbol{w}_{\boldsymbol{\mathcal{G}},T}^\mathrm{c}\in\boldsymbol{\mathcal{G}}^{\mathrm{c}, k}(T)$ for all $T\in\mathcal{T}_{h}$,} \\ &\qquad\text{and $w_F\in\mathcal{P}_{}^{k}(F)$ for all $F\in\mathcal{F}_{h}$} \Big\}, \end{aligned}$$ and $$\mathcal{P}_{}^{k}(\mathcal{T}_{h})\coloneq\left\{ q_h\in L^2(\Omega)\,:\,\text{$(q_h)_{|T}\in\mathcal{P}_{}^{k}(T)$ for all $T\in\mathcal{T}_{h}$} \right\}.$$ ## Local vector calculus operators and potentials ### Gradient For any $E\in\mathcal{E}_{h}$, the edge gradient $G_E^k:\underline{X}_{\mathop{\mathrm{\bf grad}},E}^k\to\mathcal{P}_{}^{k}(E)$ is such that, for all $\underline{q}_E\in\underline{X}_{\mathop{\mathrm{\bf grad}},E}^k$, $$\label{eq:GE} \int_E G_E^k\underline{q}_E\, r = -\int_E q_E\, r' + \llbracket q_V\, r\rrbracket_{E},$$ with derivative taken in the direction of $\boldsymbol{t}_E$ and with $\llbracket\cdot\rrbracket_{E}$ denoting the difference between vertex values on an edge such that, for any function $\phi\in C^0(\overline{E})$ and any family $\{w_{V_1},w_{V_2}\}$ of vertex values such that $\boldsymbol{t}_E$ points from $V_1$ to $V_2$, $$\llbracket w_V\phi\rrbracket_{E}\coloneq w_{V_2} \phi(\boldsymbol{x}_{V_2}) - w_{V_1} \phi(\boldsymbol{x}_{V_1}).$$ For any $F\in\mathcal{F}_{h}$, the face gradient $\boldsymbol{G}_F^k:\underline{X}_{\mathop{\mathrm{\bf grad}},F}^k\to\boldsymbol{\mathcal{P}}_{}^{k}(F)$ and the scalar trace $\gamma_F^{k+1}:\underline{X}_{\mathop{\mathrm{\bf 
grad}},F}^k\to\mathcal{P}_{}^{k+1}(F)$ are such that, for all $\underline{q}_F\in\underline{X}_{\mathop{\mathrm{\bf grad}},F}^k$, $$\begin{aligned} {4}\label{eq:GF} \int_F\boldsymbol{G}_F^k\underline{q}_F\cdot\boldsymbol{v} &= -\int_F q_F\mathop{\mathrm{div}}_F\boldsymbol{v} + \sum_{E\in\mathcal{E}_{F}}\omega_{FE}\int_E q_E~(\boldsymbol{v}\cdot\boldsymbol{n}_{FE}) &\quad&\forall\boldsymbol{v}\in\boldsymbol{\mathcal{P}}_{}^{k}(F), \\ \nonumber \int_F\gamma_F^{k+1}\underline{q}_F\mathop{\mathrm{div}}_F\boldsymbol{v} &= -\int_F\boldsymbol{G}_F^k\underline{q}_F\cdot\boldsymbol{v} + \sum_{E\in\mathcal{E}_{F}}\omega_{FE}\int_E q_E~(\boldsymbol{v}\cdot\boldsymbol{n}_{FE}) &\quad&\forall\boldsymbol{v}\in\boldsymbol{\mathcal{R}}^{\mathrm{c},k+2}(F).\end{aligned}$$ Similarly, for all $T\in\mathcal{T}_{h}$, the element gradient $\boldsymbol{G}_T^k:\underline{X}_{\mathop{\mathrm{\bf grad}},T}^k\to\boldsymbol{\mathcal{P}}_{}^{k}(T)$ is defined such that, for all $\underline{q}_T\in\underline{X}_{\mathop{\mathrm{\bf grad}},T}^k$, $$\label{eq:GT} \int_T\boldsymbol{G}_T^k\underline{q}_T\cdot\boldsymbol{v} = -\int_T q_T\mathop{\mathrm{div}}\boldsymbol{v} + \sum_{F\in\mathcal{F}_{T}}\omega_{TF}\int_F\gamma_F^{k+1}\underline{q}_F~(\boldsymbol{v}\cdot\boldsymbol{n}_F) \qquad\forall\boldsymbol{v}\in\boldsymbol{\mathcal{P}}_{}^{k}(T),$$ ### Curl For all $F\in\mathcal{F}_{h}$, the face curl $C_F^k:\underline{\boldsymbol{X}}_{\mathop{\mathrm{\bf curl}},F}^k\to\mathcal{P}_{}^{k}(F)$ and tangential trace $\boldsymbol{\gamma}_{{\rm t},F}^k:\underline{\boldsymbol{X}}_{\mathop{\mathrm{\bf curl}},F}^k\to\boldsymbol{\mathcal{P}}_{}^{k}(F)$ are such that, for all $\underline{\boldsymbol{v}}_F\in\underline{\boldsymbol{X}}_{\mathop{\mathrm{\bf curl}},F}^k$, $$\label{eq:CF} \int_FC_F^k\underline{\boldsymbol{v}}_F\,r = \int_F\boldsymbol{v}_{\boldsymbol{\mathcal{R}},F}\cdot\mathop{\mathrm{\bf rot}}_F r - \sum_{E\in\mathcal{E}_{F}}\omega_{FE}\int_E v_E\,r \qquad\forall r\in\mathcal{P}_{}^{k}(F)$$ and, for all $(r,\boldsymbol{w})\in\mathcal{P}_{0}^{k+1}(F) \times\boldsymbol{\mathcal{R}}^{\mathrm{c},k}(F)$, $$\int_F\boldsymbol{\gamma}_{{\rm t},F}^k\underline{\boldsymbol{v}}_F\cdot(\mathop{\mathrm{\bf rot}}_F r + \boldsymbol{w}) = \int_FC_F^k\underline{\boldsymbol{v}}_F\,r + \sum_{E\in\mathcal{E}_{F}}\omega_{FE}\int_E v_E\,r + \int_F\boldsymbol{v}_{\boldsymbol{\mathcal{R}},F}^\mathrm{c}\cdot\boldsymbol{w}.$$ For all $T\in\mathcal{T}_{h}$, the element curl $\boldsymbol{C}_T^k:\underline{\boldsymbol{X}}_{\mathop{\mathrm{\bf curl}},T}^k\to\boldsymbol{\mathcal{P}}_{}^{k}(T)$ is defined such that, for all $\underline{\boldsymbol{v}}_T\in\underline{\boldsymbol{X}}_{\mathop{\mathrm{\bf curl}},T}^k$, $$\label{eq:CT} \int_T\boldsymbol{C}_T^k\underline{\boldsymbol{v}}_T\cdot\boldsymbol{w} = \int_T\boldsymbol{v}_{\boldsymbol{\mathcal{R}},T}\cdot\mathop{\mathrm{\bf curl}}\boldsymbol{w} + \sum_{F\in\mathcal{F}_{T}}\omega_{TF}\int_F\boldsymbol{\gamma}_{{\rm t},F}^k\underline{\boldsymbol{v}}_F\cdot(\boldsymbol{w}\times\boldsymbol{n}_F) \qquad\forall\boldsymbol{w}\in\boldsymbol{\mathcal{P}}_{}^{k}(T).$$ ### Divergence For all $T\in\mathcal{T}_{h}$, the element divergence $D_T^k:\underline{\boldsymbol{X}}_{\mathop{\mathrm{div}},T}^k\to\mathcal{P}_{}^{k}(T)$ is defined by: For all $\underline{\boldsymbol{w}}_T\in\underline{\boldsymbol{X}}_{\mathop{\mathrm{div}},T}^k$, $$\label{eq:DT} \int_TD_T^k\underline{\boldsymbol{w}}_T\,q = -\int_T\boldsymbol{w}_{\boldsymbol{\mathcal{G}},T}\cdot\mathop{\mathrm{\bf grad}}q + 
\sum_{F\in\mathcal{F}_{T}}\omega_{TF}\int_F w_F\,q \qquad\forall q\in\mathcal{P}_{}^{k}(T).$$ ## DDR complex {#sec:ddr.complex} The DDR complex reads: $$\begin{tikzcd} 0\arrow{r} & \underline{X}_{\mathop{\mathrm{\bf grad}},h}^k\arrow{r}{\underline{\boldsymbol{G}}_h^k} & \underline{\boldsymbol{X}}_{\mathop{\mathrm{\bf curl}},h}^k\arrow{r}{\underline{\boldsymbol{C}}_h^k} & \underline{\boldsymbol{X}}_{\mathop{\mathrm{div}},h}^k\arrow{r}{D_h^k} & \mathcal{P}_{}^{k}(\mathcal{T}_{h})\arrow{r}{0} & \{0\}, \end{tikzcd}$$ where, for all $(\underline{q}_h,\underline{\boldsymbol{v}}_h,\underline{\boldsymbol{w}}_h)\in\underline{X}_{\mathop{\mathrm{\bf grad}},h}^k\times\underline{\boldsymbol{X}}_{\mathop{\mathrm{\bf curl}},h}^k\times\underline{\boldsymbol{X}}_{\mathop{\mathrm{div}},h}^k$, $$\begin{aligned} \label{eq:uGh} \underline{\boldsymbol{G}}_h^k\underline{q}_h &\coloneq \big( ( \boldsymbol{\pi}_{\boldsymbol{\mathcal{R}},T}^{k-1}\boldsymbol{G}_T^k\underline{q}_T,\boldsymbol{\pi}_{\boldsymbol{\mathcal{R}},T}^{\mathrm{c},k}\boldsymbol{G}_T^k\underline{q}_T )_{T\in\mathcal{T}_{h}}, ( \boldsymbol{\pi}_{\boldsymbol{\mathcal{R}},F}^{k-1}\boldsymbol{G}_F^k\underline{q}_F,\boldsymbol{\pi}_{\boldsymbol{\mathcal{R}},F}^{\mathrm{c},k}\boldsymbol{G}_F^k\underline{q}_F )_{F\in\mathcal{F}_{h}}, ( G_E^k\underline{q}_E )_{E\in\mathcal{E}_{h}} \big), \\ \label{eq:uCh} \underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{v}}_h &\coloneq\big( ( \boldsymbol{\pi}_{\boldsymbol{\mathcal{G}},T}^{k-1}\boldsymbol{C}_T^k\underline{\boldsymbol{v}}_T,\boldsymbol{\pi}_{\boldsymbol{\mathcal{G}},T}^{\mathrm{c},k}\boldsymbol{C}_T^k\underline{\boldsymbol{v}}_T )_{T\in\mathcal{T}_{h}}, ( C_F^k\underline{\boldsymbol{v}}_F )_{F\in\mathcal{F}_{h}} \big), \\ \nonumber (D_h^k\underline{\boldsymbol{w}}_h)_{|T} &\coloneq D_T^k\underline{\boldsymbol{w}}_T\qquad\forall T\in\mathcal{T}_{h}.\end{aligned}$$ ## Component norms We endow the discrete spaces defined in Section [2.3](#sec:DDRconstruction:spaces){reference-type="ref" reference="sec:DDRconstruction:spaces"} with the $L^2$-like norms defined as follows: For all $(\underline{q}_h,\underline{\boldsymbol{v}}_h,\underline{\boldsymbol{w}}_h)\in\underline{X}_{\mathop{\mathrm{\bf grad}},h}^k\times\underline{\boldsymbol{X}}_{\mathop{\mathrm{\bf curl}},h}^k\times\underline{\boldsymbol{X}}_{\mathop{\mathrm{div}},h}^k$, $$\begin{alignedat}{4}%% \label{eq:tnorm.grad.h} \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{q}_h\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf grad}},h}^2 &\coloneq \sum_{T\in\mathcal{T}_{h}} \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{q}_T\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf grad}},T}^2\text{ with} \\ %% \label{eq:tnorm.grad.T} \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{q}_T\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf grad}},T}^2 &\coloneq \|q_T\|_{L^2(T)}^2 + h_T \sum_{F\in\mathcal{F}_{T}} \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{q}_F\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf grad}},F}^2 &\qquad& \forall T \in \mathcal{T}_{h}, \\ %% \label{eq:tnorm.grad.F} \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{q}_F\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf grad}},F}^2 &\coloneq \|q_F\|_{L^2(F)}^2 + h_F \sum_{E\in\mathcal{E}_{F}} \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{q}_E\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf grad}},E}^2 &\qquad& \forall F \in \mathcal{F}_{h}, \\ %% \label{eq:tnorm.grad.E} \vert\kern-0.25ex\vert\kern-0.25ex\vert 
q_E\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf grad}},E}^2 &\coloneq \|q_E\|_{L^2(E)}^2 + h_E \sum_{V\in\mathcal{V}_{E}} |q_V|^2 &\qquad& \forall E \in \mathcal{E}_{h}, \end{alignedat}$$ $$\label{eq:tnorm.curl.h} \begin{alignedat}{4} \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{\boldsymbol{v}}_h\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf curl}},h}^2 &\coloneq \sum_{T\in\mathcal{T}_{h}} \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{\boldsymbol{v}}_T\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf curl}},T}^2\text{ with} \\ \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{\boldsymbol{v}}_T\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf curl}},T}^2 &\coloneq \|\boldsymbol{v}_{\boldsymbol{\mathcal{R}},T}\|_{\boldsymbol{L}^2(T;\mathbb{R}^3)}^2 + \|\boldsymbol{v}_{\boldsymbol{\mathcal{R}},T}^\mathrm{c}\|_{\boldsymbol{L}^2(T;\mathbb{R}^3)}^2 + h_T \sum_{F\in\mathcal{F}_{T}} \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{\boldsymbol{v}}_F\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf curl}},F}^2 &\qquad& \forall T \in \mathcal{T}_{h}, \\ \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{\boldsymbol{v}}_F\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf curl}},F}^2 &\coloneq \|\boldsymbol{v}_{\boldsymbol{\mathcal{R}},F}\|_{\boldsymbol{L}^2(F;\mathbb{R}^2)}^2 + \|\boldsymbol{v}_{\boldsymbol{\mathcal{R}},F}^\mathrm{c}\|_{\boldsymbol{L}^2(F;\mathbb{R}^2)}^2 + h_F \sum_{E\in\mathcal{E}_{F}} \|v_E\|_{L^2(E)}^2 &\qquad& \forall F \in \mathcal{F}_{h}, \end{alignedat}$$ and $$\label{eq:tnorm.div.h} \begin{aligned} \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{\boldsymbol{w}}_h\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{div}},h}^2 &\coloneq \sum_{T\in\mathcal{T}_{h}} \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{\boldsymbol{w}}_T\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{div}},T}^2\text{ with} \\ \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{\boldsymbol{w}}_T\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{div}},T}^2 &\coloneq \|\boldsymbol{w}_{\boldsymbol{\mathcal{G}},T}\|_{\boldsymbol{L}^2(T;\mathbb{R}^3)}^2 + \|\boldsymbol{w}_{\boldsymbol{\mathcal{G}},T}^\mathrm{c}\|_{\boldsymbol{L}^2(T;\mathbb{R}^3)}^2 + h_T \sum_{F\in\mathcal{F}_{T}} \|v_F\|_{L^2(F)}^2 \qquad \forall T \in \mathcal{T}_{h}. \end{aligned}$$ ## Main results {#sec:main.results} **Theorem 1** (Poincaré inequality for the gradient). *For all $\underline{p}_h \in \underline{X}_{\mathop{\mathrm{\bf grad}},h}^k$, it holds $$%% \begin{equation}\label{eq:poincare:grad} \inf_{\underline{r}_h \in \mathop{\mathrm{Ker}}\underline{\boldsymbol{G}}_h^k} \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{p}_h - \underline{r}_h\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf grad}},h} \lesssim \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{\boldsymbol{G}}_h^k\underline{p}_h\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf curl}},h},$$ with hidden constant only depending on $\Omega$, the mesh regularity parameter, and $k$.* *Proof.* See Section [4.1](#sec:uGh.poincare){reference-type="ref" reference="sec:uGh.poincare"}. ◻ **Theorem 2** (Poincaré inequality for the curl). 
*For all $\underline{\boldsymbol{v}}_h \in \underline{\boldsymbol{X}}_{\mathop{\mathrm{\bf curl}},h}^k$, it holds $$%% \begin{equation}\label{eq:poincare:curl} \inf_{\underline{\boldsymbol{z}}_h \in \mathop{\mathrm{Ker}}\underline{\boldsymbol{C}}_h^k} \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{\boldsymbol{v}}_h - \underline{\boldsymbol{z}}_h\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf curl}},h} \lesssim \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{v}}_h\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{div}},h},$$ with hidden constant only depending on $\Omega$, the mesh regularity parameter, and $k$.* *Proof.* See Section [4.2](#sec:uCh.poincare){reference-type="ref" reference="sec:uCh.poincare"}. ◻ **Theorem 3** (Poincaré inequality for the divergence). *For all $\underline{\boldsymbol{w}}_h \in \underline{\boldsymbol{X}}_{\mathop{\mathrm{div}},h}^k$, it holds $$%% \begin{equation}\label{eq:poincare:div} \inf_{\underline{\boldsymbol{z}}_h \in \mathop{\mathrm{Ker}}D_h^k} \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{\boldsymbol{w}}_h - \underline{\boldsymbol{z}}_h\vert\kern-0.25ex\vert\kern-0.25ex\vert_{D_h^k,h} \lesssim \|D_h^k\underline{\boldsymbol{w}}_h\|_{L^2(\Omega)},$$ with hidden constant only depending on $\Omega$, the mesh regularity parameter, and $k$.* *Proof.* See Section [4.3](#sec:Dh.poincare){reference-type="ref" reference="sec:Dh.poincare"}. ◻ # Mimetic Poincaré inequalities {#sec:proof.poincaré} This section contains Poincaré inequalities in mimetic spaces that are instrumental in proving the main results stated in the previous section. Their proofs rely on the use of a tetrahedral submesh $\mathfrak{M}_h= \mathfrak{T}_{h}\cup \mathfrak{F}_{h}\cup \mathfrak{E}_{h}\cup \mathfrak{V}_{h}$ in the sense of [@Di-Pietro.Droniou:20 Definition 1.8], with $\mathfrak{T}_{h}$ collecting the tetrahedral subelements and $\mathfrak{F}_{h}$, $\mathfrak{E}_{h}$, and $\mathfrak{V}_{h}$ their faces, edges, and vertices, respectively. We assume, for the sake of simplicity, that this submesh can be obtained adding as new vertices only centers of the faces and elements of $\mathcal{M}_h$. As a result of the assumptions in [@Di-Pietro.Droniou:20 Definition 1.8], the regularity parameter of the submesh only depends on that of $\mathcal{M}_h$ and, for a given element $T \in \mathcal{T}_{h}$, the diameters of the submesh entities contained in $\overline{T}$ are comparable to $h_T$ uniformly in $h$. ## Mimetic Poincaré inequality for collections of vertex values **Theorem 4** (Mimetic Poincaré inequality for collections of vertex values). *Let $(\alpha_V)_{V\in\mathcal{V}_{h}}\in\mathbb{R}^{\mathcal{V}_{h}}$ be a collection of values at vertices. Then, there is $C \in \mathbb{R}$ such that $$\label{eq:Whitney.V:poincare} \sum_{T\in\mathcal{T}_{h}} h_T^3 \sum_{V\in\mathcal{V}_{T}} (\alpha_V - C)^2 \lesssim \sum_{T\in\mathcal{T}_{h}} h_T \sum_{E\in\mathcal{E}_{T}} \vert \llbracket\alpha_V\rrbracket_{E} \vert^2,$$ with hidden constant only depending on $\Omega$ and the mesh regularity parameter.* *Proof.* We extend the collection $(\alpha_V)_{V\in\mathcal{V}_{h}}$ to $\mathfrak{V}_{h}$ setting the values at face/element centers equal to the value taken at an arbitrary vertex of the face/element in question. 
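As a sanity check, the inequality ([\[eq:Whitney.V:poincare\]](#eq:Whitney.V:poincare){reference-type="ref" reference="eq:Whitney.V:poincare"}) can also be observed numerically. The minimal Python sketch below evaluates both sides on a toy mesh of the unit cube split into $n^3$ cubic elements, with random vertex values and $C$ taken as their plain average (a convenient, though not optimal, choice on such a quasi-uniform mesh); the ratio of the left- to the right-hand side remains bounded as $n$ grows.

```python
import random

def mimetic_poincare_check(n=8, seed=0):
    """Both sides of the vertex-value inequality on the unit cube split into
    n^3 cubic elements (h_T = 1/n), with i.i.d. Gaussian vertex values and
    C taken as their plain average."""
    random.seed(seed)
    h = 1.0 / n
    # vertex values indexed by integer coordinates (i, j, k), 0 <= i, j, k <= n
    alpha = {(i, j, k): random.gauss(0.0, 1.0)
             for i in range(n + 1) for j in range(n + 1) for k in range(n + 1)}
    C = sum(alpha.values()) / len(alpha)
    lhs = rhs = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                verts = [(i + a, j + b, k + c)
                         for a in (0, 1) for b in (0, 1) for c in (0, 1)]
                lhs += h ** 3 * sum((alpha[v] - C) ** 2 for v in verts)
                # the 12 edges of the element: vertex pairs differing in one index
                edges = [(u, v) for u in verts for v in verts
                         if u < v and sum(x != y for x, y in zip(u, v)) == 1]
                rhs += h * sum((alpha[u] - alpha[v]) ** 2 for u, v in edges)
    return lhs, rhs, lhs / rhs

for n in (4, 8, 16):
    print(n, mimetic_poincare_check(n))
```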
For any simplex $S \in \mathfrak{T}_{h}$ and any vertex $V\in\mathfrak{V}_{S}$ (with $\mathfrak{V}_{S}$ collecting the vertices of $S$), let $\phi_{S,V}$ denote the restriction to $S$ of the piecewise affine "hat" function associated with $V$ given by [\[eq:S.DR.0\]](#eq:S.DR.0){reference-type="eqref" reference="eq:S.DR.0"}, and let $\phi_h \in H^1(\Omega)$ be the piecewise polynomial function defined by setting $$\label{eq:Whitney.V:phi.h} (\phi_h)_{\vert S} \coloneq \sum_{V\in\mathfrak{V}_{S}} (\alpha_V-C) \phi_{S,V} \qquad\forall S \in \mathfrak{T}_{h},$$ where $C\in\mathbb{R}$ is chosen so that the zero-average condition $\int_\Omega \phi_h = 0$ is satisfied. We next prove the following norm equivalences: $$\begin{aligned} \label{eq:W0.P1} \|\phi_h\|_{L^2(\Omega)}^2 &\simeq \sum_{T\in\mathcal{T}_{h}} h_T^3 \sum_{V\in\mathcal{V}_{T}} (\alpha_V - C)^2, \\ \label{eq:W0.P2} \|\mathop{\mathrm{\bf grad}}\phi_h\|_{\boldsymbol{L}^2(\Omega; \mathbb{R}^3)}^2 &\simeq \sum_{T\in\mathcal{T}_{h}} h_T \sum_{E\in\mathcal{E}_{T}} \vert \llbracket\alpha_V\rrbracket_{E} \vert^2. \end{aligned}$$ The conclusion follows from the above relations writing $$\sum_{T\in\mathcal{T}_{h}} h_T^3 \sum_{V\in\mathcal{V}_{T}} (\alpha_V - C)^2 \overset{\eqref{eq:W0.P1}}\lesssim \|\phi_h\|_{L^2(\Omega)}^2 \lesssim \|\mathop{\mathrm{\bf grad}}\phi_h\|_{\boldsymbol{L}^2(\Omega;\mathbb{R}^3)}^2 \overset{\eqref{eq:W0.P2}}\lesssim \sum_{T\in\mathcal{T}_{h}} h_T \sum_{E\in\mathcal{E}_{T}} \vert \llbracket\alpha_V\rrbracket_{E} \vert^2,$$ where the second inequality follows from the continuous Poincaré--Wirtinger inequality, which holds since $\int_\Omega \phi_h = 0$.\ [(i) *Proof of [\[eq:W0.P1\]](#eq:W0.P1){reference-type="eqref" reference="eq:W0.P1"}.*]{.ul} For any $T \in \mathcal{T}_{h}$, by regularity of the submesh, we have $h_S \simeq h_T$ for all $S \in \mathfrak{T}_{T}$ (with $\mathfrak{T}_{T}$ collecting the subelements contained in $T$). It holds $$\label{eq:W0.P4} \begin{aligned} \|\phi_h\|_{L^2(\Omega)}^2 \overset{\eqref{eq:Whitney.V:phi.h}} &= \sum_{T\in\mathcal{T}_{h}} \sum_{S\in\mathfrak{T}_{T}} \int_S \bigg(\sum_{V\in\mathfrak{V}_{S}} (\alpha_V-C) \phi_{S,V}\bigg)^2 \\ &\simeq \sum_{T\in\mathcal{T}_{h}} \sum_{S\in\mathfrak{T}_{T}} \sum_{V\in\mathfrak{V}_{S}} (\alpha_V-C)^2 \|\phi_{S,V}\|_{L^2(S)}^2 \\ \overset{\eqref{eq:W.norm.0}}&\simeq \sum_{T\in\mathcal{T}_{h}} h_T^3 \sum_{S\in\mathfrak{T}_{T}} \sum_{V\in\mathfrak{V}_{S}} (\alpha_V-C)^2 \\ &\simeq \sum_{T\in\mathcal{T}_{h}} h_T^3 \sum_{V\in\mathcal{V}_{T}} (\alpha_V - C)^2, \end{aligned}$$ where the second equivalence follows from the fact that $\mathop{\mathrm{card}}(\mathfrak{V}_{S}) \lesssim 1$, in the third one we have additionally used $h_S \simeq h_T$ for all $T \in \mathcal{T}_{h}$ and all $S \in \mathfrak{T}_{T}$, and the last one is justified by the choice we made at the beginning for $\alpha_V$, $V \in \mathfrak{V}_{h}\setminus \mathcal{V}_{h}$. This readily gives [\[eq:W0.P1\]](#eq:W0.P1){reference-type="eqref" reference="eq:W0.P1"}.\ [(ii) *Proof of [\[eq:W0.P2\]](#eq:W0.P2){reference-type="eqref" reference="eq:W0.P2"}.*]{.ul} The key argument to obtain [\[eq:W0.P2\]](#eq:W0.P2){reference-type="eqref" reference="eq:W0.P2"} lies in the de Rham theorem: Let $(\boldsymbol{\psi}_{S,E})_{E\in\mathfrak{E}_{S}}$ (with $\mathfrak{E}_{S}$ collecting the edges of $S$) be the basis for the edge Nédélec space given by [\[eq:S.DR.1\]](#eq:S.DR.1){reference-type="eqref" reference="eq:S.DR.1"}. 
Then, summing [\[eq:W.diff.0\]](#eq:W.diff.0){reference-type="eqref" reference="eq:W.diff.0"}, we have $$\label{eq:W0.W1} \mathop{\mathrm{\bf grad}}\left(\sum_{V\in\mathfrak{V}_{S}} \alpha_V \phi_{S,V} \right) = \sum_{E\in\mathfrak{E}_{S}} \llbracket\alpha_V\rrbracket_{E} \boldsymbol{\psi}_{S,E} .$$ Starting from [\[eq:W0.W1\]](#eq:W0.W1){reference-type="eqref" reference="eq:W0.W1"} and proceeding in a similar way as in [\[eq:W0.P4\]](#eq:W0.P4){reference-type="eqref" reference="eq:W0.P4"} with [\[eq:W.norm.1\]](#eq:W.norm.1){reference-type="eqref" reference="eq:W.norm.1"} replacing [\[eq:W.norm.0\]](#eq:W.norm.0){reference-type="eqref" reference="eq:W.norm.0"}, we have $$\label{eq:W0.P5} \int_{\Omega} \Vert\mathop{\mathrm{\bf grad}}\phi_h\Vert^2 \simeq \sum_{T\in\mathcal{T}_{h}} h_T \sum_{S\in\mathfrak{T}_{T}} \sum_{E\in\mathfrak{E}_{S}} \vert \llbracket\alpha_V\rrbracket_{E} \vert^2,$$ with, for any $\boldsymbol{v} \in \mathbb{R}^3,$ $\|\boldsymbol{v}\|_{}$ denoting the Euclidean norm of $\boldsymbol{v}$. Now, for any edge $E \in \mathfrak{E}_{S}$ of any simplex $S \in \mathfrak{T}_{T}$, either $E \in \mathcal{E}_{T}$ or, by the choice made at the beginning of this proof for $\alpha_V$ with $V$ face or element center, $\llbracket\alpha_V\rrbracket_{E}$ can be computed as the sum of jumps along the boundary of $T$ (i.e. $\llbracket\alpha_V\rrbracket_{E} = \sum_{E'\in\mathcal{E}_{T}} \omega_{E'E} \llbracket\alpha_V\rrbracket_{E'}$ for $\omega_{E'E} \in \lbrace -1, 0, 1 \rbrace$). Therefore, $$\sum_{S\in\mathfrak{T}_{T}} \sum_{E\in\mathfrak{E}_{S}} \vert \llbracket\alpha_V\rrbracket_{E} \vert^2 \simeq \sum_{E\in\mathcal{E}_{T}} \vert \llbracket\alpha_V\rrbracket_{E} \vert^2,$$ and we infer [\[eq:W0.P2\]](#eq:W0.P2){reference-type="eqref" reference="eq:W0.P2"} from [\[eq:W0.P5\]](#eq:W0.P5){reference-type="eqref" reference="eq:W0.P5"}. ◻ ## Mimetic Poincaré inequality for collections of edge values If the topology of the domain is non-trivial, a suitable condition for each void must be satisfied in order to establish a mimetic Poincaré inequality for collections of edge values. Denote by $b_2$ the second Betti number, i.e., the number of voids encapsulated by $\Omega$. Let $(\mathcal{F}_{\gamma_i})_{1\leq i \leq b_2}$ denote collections of boundary faces such that $\gamma_i \coloneq \bigcup_{F \in \mathcal{F}_{\gamma_i}} \overline{F}$ is the boundary of the $i$th void. We start by proving a necessary and sufficient condition under which a function in the lowest-order Raviart--Thomas--Nédélec face space on the tetrahedral submesh is the curl of a function in the edge Nédélec space on the same mesh. **Lemma 5** (Condition on the cohomology). *Denote by $\mathcal{RT}^{1}(\mathfrak{T}_{h})$ and $\mathcal{N}^{1}(\mathfrak{T}_{h})$ the lowest-order face and edge finite element spaces on the submesh. 
Then, for all $\boldsymbol{\phi}_h \in \mathcal{RT}^{1}(\mathfrak{T}_{h})$, there exists $\boldsymbol{\chi}_h \in \mathcal{N}^{1}(\mathfrak{T}_{h})$ such that $\boldsymbol{\phi}_h = \mathop{\mathrm{\bf curl}}\boldsymbol{\chi}_h$ if and only if $$\label{eq:im.curl.NE1} \text{% $\mathop{\mathrm{div}}\boldsymbol{\phi}_h = 0$ and $\sum_{F\in\mathcal{F}_{\gamma_i}} \int_F \boldsymbol{\phi}_h \cdot \boldsymbol{n}_\Omega = 0$ for all integer $i$ such that $1 \leq i \leq b_2$, }$$ with $\boldsymbol{n}_\Omega$ denoting the unit normal vector to the boundary of $\Omega$ pointing out of $\Omega$.* *Proof.* To check that [\[eq:im.curl.NE1\]](#eq:im.curl.NE1){reference-type="eqref" reference="eq:im.curl.NE1"} is necessary, notice that the first condition comes from the identity $\mathop{\mathrm{div}}\mathop{\mathrm{\bf curl}}= 0$, while the second one follows from Green's theorem along with the fact that the flux of the curl of any function across a closed boundary is zero. We next prove that condition [\[eq:im.curl.NE1\]](#eq:im.curl.NE1){reference-type="eqref" reference="eq:im.curl.NE1"} is sufficient using a counting argument. From the de Rham Theorem, we know that that the dimension of the space of harmonic forms is precisely the number of voids $b_2$. For all integer $i$ such that $1 \leq i \leq b_2$, we define the linear form $L_i$ such that, for any vector-valued function $\boldsymbol{\phi}$ smooth enough, $$L_i(\boldsymbol{\phi}) \coloneq -\sum_{F\in\mathcal{F}_{\gamma_i}} \int_F \boldsymbol{\phi} \cdot \boldsymbol{n}_\Omega .$$ For any $1 \leq i \leq b_2$, let $\boldsymbol{x}_i \in \mathbb{R}^3$ be any point inside the $i$th void and consider the function $\boldsymbol{\phi}_i: \mathbb{R}^3 \ni \boldsymbol{x}\mapsto \frac{\boldsymbol{x}-\boldsymbol{x}_i}{\Vert \boldsymbol{x} - \boldsymbol{x}_i \Vert^3} \in \mathbb{R}^3$. Noticing that $\mathop{\mathrm{div}}\boldsymbol{\phi}_i = 0$ and applying the divergence theorem inside the $j$th void, we infer that $L_j(\boldsymbol{\phi}_i) = 0$ if $j \neq i$. Let us now show that $L_i(\boldsymbol{\phi}_i) > 0$. Let $r > 0$ be the distance between $\boldsymbol{x}_i$ and $\Omega$. Denoting by $\mathcal{S}_i$ the sphere of radius $\frac{r}2$ centred in $\boldsymbol{x}_i$, we have that $\int_{\mathcal{S}_i} \boldsymbol{\phi}_i \cdot \frac{(\boldsymbol{x} - \boldsymbol{x}_i)}{\Vert \boldsymbol{x}- \boldsymbol{x}_i \Vert} = \int_{\mathcal{S}_i} 4 r^{-2} = 4 \pi > 0$. Applying once again the divergence theorem to $\boldsymbol{\phi}_i$ on the volume $\mathcal{V}_i$ enclosed between $\mathcal{S}_i$ and $\gamma_i$, we have that $0 = \int_{\mathcal{V}_i} \mathop{\mathrm{div}}\boldsymbol{\phi}_i = L_i(\boldsymbol{\phi}_i) - \int_{\mathcal{S}_i} \boldsymbol{\phi}_i \cdot \frac{(\boldsymbol{x} - \boldsymbol{x}_i)}{\Vert \boldsymbol{x}- \boldsymbol{x}_i \Vert}$. Therefore, $$\label{eq:Li.phii>0} L_i(\boldsymbol{\phi}_i) = \int_{\mathcal{S}_i} \boldsymbol{\phi}_i \cdot \frac{(\boldsymbol{x} - \boldsymbol{x}_i)}{\Vert \boldsymbol{x}- \boldsymbol{x}_i \Vert} > 0.$$ Denoting by $\boldsymbol{\pi}_{\mathcal{RT},h}^{1}$ the canonical interpolator onto $\boldsymbol{\mathcal{RT}}^1(\mathfrak{T}_{h})$, we know that, for any function $\boldsymbol{\phi}$, $\mathop{\mathrm{div}}(\boldsymbol{\pi}_{\mathcal{RT},h}^{1} \boldsymbol{\phi}) = \pi_{\mathcal{P},0}^{h} (\mathop{\mathrm{div}}\boldsymbol{\phi})$ and $L_i(\boldsymbol{\pi}_{\mathcal{RT},h}^{1} \boldsymbol{\phi}) = L_i(\boldsymbol{\phi})$ by definition of the interpolator. 
Therefore, for any integer $i$ such that $1 \leq i \leq b_2$, $\boldsymbol{\pi}_{\mathcal{RT},h}^{1} \boldsymbol{\phi}_i$ is a discrete harmonic form and, by a counting argument, the linearly independent family $(\boldsymbol{\pi}_{\mathcal{RT},h}^{1} \boldsymbol{\phi}_i)_{1 \leq i \leq b_2}$ spans the space of discrete harmonic forms. Let now $\boldsymbol{\boldsymbol{\phi}}_h$ be such that $\mathop{\mathrm{div}}\boldsymbol{\boldsymbol{\phi}}_h = 0$. Then, $\boldsymbol{\boldsymbol{\phi}}_h = \mathop{\mathrm{\bf curl}}\boldsymbol{\chi}_h + \sum_{i = 1}^{b_2} \lambda_i\, \boldsymbol{\pi}_{\mathcal{RT},h}^{1} \boldsymbol{\phi}_i$ for some $\boldsymbol{\chi}_h \in \mathcal{N}^{1}(\mathfrak{T}_{h})$ and $(\lambda_i)_{1 \leq i \leq b_2}\in \mathbb{R}^{b_2}$. We prove that the condition $L_i(\boldsymbol{\phi}) = 0$ for all $1\le i\le b_2$ is sufficient to ensure that $\boldsymbol{\boldsymbol{\phi}}_h$ is in the range of $\mathop{\mathrm{\bf curl}}$ by contradiction. As a matter of fact, if this were not the case, then there would be $i_0$ such that $\lambda_{i_0} \neq 0$. However, by [\[eq:Li.phii\>0\]](#eq:Li.phii>0){reference-type="eqref" reference="eq:Li.phii>0"}, this would also imply $L_{i_0}(\boldsymbol{\boldsymbol{\phi}}_h) = \lambda_{i_0} L_{i_0}(\boldsymbol{\pi}_{\mathcal{RT},h}^{1} \boldsymbol{\phi}_{i0}) \neq 0$, which is the sought contradiction. ◻ **Theorem 6** (Mimetic Poincaré inequality for collections of edge values). *Let $(\alpha_F)_{F\in\mathcal{F}_{h}}\in\mathbb{R}^{\mathcal{F}_{h}}$ be a collection of values at faces satisfying $$\begin{gathered} \label{eq:Whitney.E:condition.1} \text{% $\sum_{F\in\mathcal{F}_{T}} \omega_{TF} \alpha_F = 0$ for all $T \in \mathcal{T}_{h}$ and } \\ \nonumber %% \label{eq:Whitney.E:condition.2} \text{% $\sum_{F\in\mathcal{F}_{\gamma_i}} \omega_{\Omega F}\alpha_F = 0$ for all integer $i$ such that $1\le i\le b_2$, } \end{gathered}$$ where $\omega_{\Omega F}\in \lbrace -1, 1 \rbrace$ is such that $\omega_{\Omega F}\boldsymbol{n}_F$ points outside the domain $\Omega$. Then, there is a collection $(\alpha_E)_{E\in\mathcal{E}_{h}}\in\mathbb{R}^{\mathcal{E}_{h}}$ of values at edges such that, for all $F\in\mathcal{F}_{h}$, $$\label{eq:Whitney.E:poincare} \text{% $\sum_{E\in\mathcal{E}_{F}}\omega_{FE}\alpha_E = \alpha_F$ and $\sum_{T\in\mathcal{T}_{h}} h_T\sum_{E\in\mathcal{E}_{T}} \alpha_E^2 \lesssim \sum_{T\in\mathcal{T}_{h}} h_T^{-1} \sum_{F\in\mathcal{F}_{T}} \alpha_F^2$ }$$ with hidden constant only depending on $\Omega$ and the mesh regularity parameter.* *Proof.* Let $(\boldsymbol{\phi}_{E,S})_{S\in\mathfrak{T}_{h},E\in\mathfrak{E}_{S}}$ and $(\boldsymbol{\psi}_{F,S})_{S\in\mathfrak{T}_{h},F\in\mathfrak{F}_{S}}$ (with $\mathfrak{F}_{S}$ denoting the set of triangular faces of $S$) be the families of basis functions respectively given by [\[eq:S.DR.1\]](#eq:S.DR.1){reference-type="eqref" reference="eq:S.DR.1"} and [\[eq:S.DR.2\]](#eq:S.DR.2){reference-type="eqref" reference="eq:S.DR.2"} below. The main difficulty is to extend the family $(\alpha_F)_{F\in\mathcal{F}_{h}}$ to a family $(\alpha_F)_{F\in \mathfrak{F}_{h}}$ satisfying, for all $S\in\mathfrak{T}_{h}$, $$\sum_{F\in\mathfrak{F}_{S}} \omega_{SF} \alpha_F = 0.$$ We perform the construction locally on each element $T\in\mathcal{T}_{h}$. 
Let $g \in L^2(T)$ be the piecewise constant function on $\mathfrak{T}_{T}$ such that $$\label{eq:Whitney.E:g} g_{|S} \coloneq - \frac{1}{\vert S \vert}\sum_{F\in\mathfrak{F}_{S}\cap\mathcal{F}_{T}} \omega_{TF}\alpha_F \qquad\forall S \in \mathfrak{T}_{T}.$$ Recalling [\[eq:Whitney.E:condition.1\]](#eq:Whitney.E:condition.1){reference-type="eqref" reference="eq:Whitney.E:condition.1"}, by definition we have $\int_T g = 0$. Hence, using Lions' lemma [@Amrouche.Ciarlet.ea:15 Theorem 3.1.e], we infer the existence of $\boldsymbol{u} \in \boldsymbol{H}^1_0(T;\mathbb{R}^3)$ such that $$\label{eq:lions.u} \text{% $\mathop{\mathrm{div}}\boldsymbol{u} = g$ and $|\boldsymbol{u}|_{\boldsymbol{H}^1(T;\mathbb{R}^3)} \lesssim \|g\|_{L^2(T)} \lesssim h_T^{-\frac32} \sum_{F\in\mathcal{F}_{T}} \vert \alpha_F \vert$, }$$ where the last inequality follows from the definition of $g$ after observing that $|S| \simeq h_T^3$ for all $S \in \mathfrak{T}_{T}$ by mesh regularity. Setting $\alpha_F \coloneq \int_F \boldsymbol{u} \cdot \boldsymbol{n}_F$ (with $\boldsymbol{n}_F$ denoting the unit normal vector to $F$, with orientation consistent with that of $F$) for all $S \in \mathfrak{T}_{T}$ and all $F \in \mathfrak{F}_{S}\setminus \mathcal{F}_{T}$, we infer that, for all $S \in \mathfrak{T}_{T}$, $\sum_{F\in\mathfrak{F}_{S}} \omega_{SF} \alpha_F = 0$, noticing that $$\begin{aligned} \sum_{F\in\mathfrak{F}_{S}} \omega_{SF} \alpha_F &= \sum_{F\in \mathfrak{F}_{S}\setminus \mathcal{F}_{T}} \omega_{SF} \alpha_F + \sum_{F\in \mathfrak{F}_{S}\cap \mathcal{F}_{T}} \omega_{SF} \alpha_F\\ &= \sum_{F\in \mathfrak{F}_{S}} \omega_{SF} \int_F \boldsymbol{u} \cdot \boldsymbol{n}_F+ \sum_{F\in \mathfrak{F}_{S}\cap \mathcal{F}_{T}} \omega_{SF} \alpha_F\\ &= \int_S \mathop{\mathrm{div}}\boldsymbol{u} + \sum_{F\in \mathfrak{F}_{S}\cap \mathcal{F}_{T}} \omega_{SF} \alpha_F\\ \overset{\eqref{eq:lions.u},\,\eqref{eq:Whitney.E:g}}&= -\sum_{F\in \mathfrak{F}_{S}\cap \mathcal{F}_{T}} \omega_{SF} \alpha_F + \sum_{F\in \mathfrak{F}_{S}\cap \mathcal{F}_{T}} \omega_{SF} \alpha_F = 0, \end{aligned}$$ where we have used the fact that $\boldsymbol{u} \cdot \boldsymbol{n}_F= 0$ on every face $F \in \mathfrak{F}_{S}\cap\mathcal{F}_{T}$ (such faces lie on the boundary of $T$, where $\boldsymbol{u}$ vanishes) to obtain the second equality. Let $\overline{\boldsymbol{u}}_T \coloneq \frac{1}{|T|}\int_T \boldsymbol{u}$ and, for all integers $i$ such that $1\le i\le 3$, denote by $u_i$ and $\overline{u}_i$ the $i$th components of $\boldsymbol{u}$ and $\overline{\boldsymbol{u}}_T$, respectively.
It holds $$\begin{aligned} |T|\,\overline{u}_i = \int_T u_i = \int_T \boldsymbol{u}\cdot \boldsymbol{e}_i &= \int_T \boldsymbol{u}\cdot \mathop{\mathrm{\bf grad}}(x_i - \boldsymbol{x}_T \cdot \boldsymbol{e}_i) \\ &= - \int_T \mathop{\mathrm{div}}\boldsymbol{u} ~ (x_i - \boldsymbol{x}_T \cdot \boldsymbol{e}_i) = - \int_T g ~ (x_i - \boldsymbol{x}_T \cdot \boldsymbol{e}_i) \lesssim h_T \sum_{F\in\mathcal{F}_{T}} \vert \alpha_F \vert, \end{aligned}$$ so that, recalling that $|T| \simeq h_T^3$, $$\label{eq:norm.bar.u} \|\overline{\boldsymbol{u}}_T\|_{\boldsymbol{L}^2(T;\mathbb{R}^3)} \lesssim h_T^{-\frac12} \sum_{F\in\mathcal{F}_{T}} \vert \alpha_F \vert.$$ Therefore, we can use the Poincaré inequality on the domain $T$ to write $$\begin{aligned} \|\boldsymbol{u}\|_{\boldsymbol{L}^2(T;\mathbb{R}^3)} &\leq \|\boldsymbol{u} - \overline{\boldsymbol{u}}_T\|_{\boldsymbol{L}^2(T;\mathbb{R}^3)} + \|\overline{\boldsymbol{u}}_T\|_{\boldsymbol{L}^2(T;\mathbb{R}^3)} \\ &\lesssim h_T |\boldsymbol{u}|_{\boldsymbol{H}^1(T;\mathbb{R}^3)} + \|\overline{\boldsymbol{u}}_T\|_{\boldsymbol{L}^2(T;\mathbb{R}^3)} \overset{\eqref{eq:lions.u},\,\eqref{eq:norm.bar.u}}\lesssim h_T^{-\frac12} \sum_{F\in\mathcal{F}_{T}} \vert \alpha_F \vert. \end{aligned}$$ Combining this result with the continuous trace inequality we have, for all $S \in \mathfrak{T}_{T}$ and all $F \in \mathfrak{F}_{S}\setminus \mathcal{F}_{T}$, $$\vert \alpha_F \vert \lesssim h_F \|\boldsymbol{u}\|_{\boldsymbol{L}^2(F;\mathbb{R}^3)} \lesssim h_T^{\frac12} \|\boldsymbol{u}\|_{\boldsymbol{L}^2(T;\mathbb{R}^3)} + h_T^{\frac32} % |\boldsymbol{u}|_{\boldsymbol{H}^1(T; \mathbb{R}^3)} \lesssim \sum_{F\in\mathcal{F}_{T}} \vert \alpha_F \vert .$$ Therefore, summing over all tetrahedra inside $T$ and all tetrahedral faces, we obtain $$\label{eq:PW1.alpha} \sum_{S\in\mathfrak{T}_{T}} \sum_{F\in\mathfrak{F}_{S}} \alpha_F^2 \lesssim \sum_{F\in\mathcal{F}_{T}} \alpha_F^2 .$$ We next define the following piecewise polynomial function: $$\label{eq:W1.P0} \boldsymbol{\psi}_h \coloneq \sum_{S\in\mathfrak{T}_{h}} \sum_{F\in\mathfrak{F}_{S}} \alpha_F \boldsymbol{\psi}_{F,S} \in\boldsymbol{H}(\mathop{\mathrm{div}};\Omega).$$ Since $\mathop{\mathrm{div}}\boldsymbol{\psi}_h = 0$ and, for all $1 \leq i \leq b_2$, $\sum_{F\in\mathcal{F}_{\gamma_i}} \omega_{\Omega F}\int_F \boldsymbol{\psi}_h \cdot \boldsymbol{n}_F= 0$, we can use Lemma [Lemma 5](#lem:condition.cohom.div){reference-type="ref" reference="lem:condition.cohom.div"} to infer from the uniform Poincaré inequality on the simplicial de Rham complex [@Arnold:18] the existence of $$\boldsymbol{\phi}_h \coloneq \sum_{S\in\mathfrak{T}_{h}}\sum_{E\in\mathfrak{E}_{S}} \alpha_E \boldsymbol{\phi}_{E,S} \in \boldsymbol{H}(\mathop{\mathrm{\bf curl}};\Omega)$$ such that $$\label{eq:curl.surjectivity} \text{% $\mathop{\mathrm{\bf curl}}\boldsymbol{\phi}_h = \boldsymbol{\psi}_h$ and $\|\boldsymbol{\phi}_h\|_{\boldsymbol{L}^2(\Omega;\mathbb{R}^3)} \lesssim \|\boldsymbol{\psi}_h\|_{\boldsymbol{L}^2(\Omega;\mathbb{R}^3)}$. 
}$$ Summing [\[eq:W.diff.1\]](#eq:W.diff.1){reference-type="eqref" reference="eq:W.diff.1"}, we have $$\label{eq:W1.P1} \mathop{\mathrm{\bf curl}}\boldsymbol{\phi}_h = \sum_{S\in\mathfrak{T}_{h}}\sum_{F\in\mathfrak{F}_{S}} \left( \sum_{E\in\mathcal{E}_{F}} \omega_{FE}\alpha_E \right) \boldsymbol{\psi}_{F,S}.$$ Hence, equating [\[eq:W1.P0\]](#eq:W1.P0){reference-type="eqref" reference="eq:W1.P0"} and [\[eq:W1.P1\]](#eq:W1.P1){reference-type="eqref" reference="eq:W1.P1"}, we infer that, for all $F\in\mathcal{F}_{h}\subset\mathfrak{F}_{h}$, $\sum_{E\in\mathcal{E}_{F}}\omega_{FE}\alpha_E = \alpha_F$. Moreover, noticing that both $\boldsymbol{\phi}_{E,S}$ and $\boldsymbol{\psi}_{F,S}$ are only supported in $S$, we have $$\label{eq:PW1.norm.E} \begin{aligned} \|\boldsymbol{\phi}_h\|_{\boldsymbol{L}^2(\Omega;\mathbb{R}^3)}^2 &= \sum_{T\in\mathcal{T}_{h}}\sum_{S\in\mathfrak{T}_{T}} \int_S \left( \sum_{E\in\mathfrak{E}_{S}} \alpha_E \boldsymbol{\phi}_{E,S}\right)^2 \\ &\simeq \sum_{T\in\mathcal{T}_{h}}\sum_{S\in\mathfrak{T}_{T}} \sum_{E\in\mathfrak{E}_{S}} \alpha_E^2 \|\boldsymbol{\phi}_{E,S}\|_{\boldsymbol{L}^2(S;\mathbb{R}^3)}^2 \\ \overset{\eqref{eq:W.norm.1}}&\simeq \sum_{T\in\mathcal{T}_{h}}\sum_{S\in\mathfrak{T}_{T}} h_S \sum_{E\in\mathfrak{E}_{S}} \alpha_E^2 \\ &\gtrsim \sum_{T\in\mathcal{T}_{h}} h_T \sum_{E\in\mathcal{E}_{T}} \alpha_E^2 \end{aligned}$$ and $$\label{eq:PW1.norm.F} \begin{aligned} \|\boldsymbol{\psi}_h\|_{\boldsymbol{L}^2(\Omega;\mathbb{R}^3)}^2 &= \sum_{T\in\mathcal{T}_{h}}\sum_{S\in\mathfrak{T}_{T}} \int_S \left( \sum_{F\in\mathfrak{F}_{S}} \alpha_F \boldsymbol{\psi}_{F,S}\right)^2 \\ &\simeq \sum_{T\in\mathcal{T}_{h}}\sum_{S\in\mathfrak{T}_{T}} \sum_{F\in\mathfrak{F}_{S}} \alpha_F^2 \|\boldsymbol{\psi}_{F,S}\|_{\boldsymbol{L}^2(S;\mathbb{R}^3)}^2 \\ \overset{\eqref{eq:W.norm.2}}&\simeq \sum_{T\in\mathcal{T}_{h}}\sum_{S\in\mathfrak{T}_{T}} h_S^{-1} \sum_{F\in\mathfrak{F}_{S}} \alpha_F^2 \\ \overset{\eqref{eq:PW1.alpha}}&\lesssim \sum_{T\in\mathcal{T}_{h}} h_T^{-1} \sum_{F\in\mathcal{F}_{T}} \alpha_F^2, \end{aligned}$$ where, in the last passage, we have additionally used the fact that $h_S^{-1} \lesssim h_T^{-1}$ for all $S \in \mathfrak{T}_{T}$ by mesh regularity. Finally, to prove [\[eq:Whitney.E:poincare\]](#eq:Whitney.E:poincare){reference-type="eqref" reference="eq:Whitney.E:poincare"}, it suffices to write $$\sum_{T\in\mathcal{T}_{h}} h_T \sum_{E\in\mathcal{E}_{T}} \alpha_E^2 \overset{\eqref{eq:PW1.norm.E}}\lesssim \|\boldsymbol{\phi}_h\|_{\boldsymbol{L}^2(\Omega;\mathbb{R}^3)}^2 \overset{\eqref{eq:curl.surjectivity}}\lesssim \|\boldsymbol{\psi}_h\|_{\boldsymbol{L}^2(\Omega;\mathbb{R}^3)}^2 \overset{\eqref{eq:PW1.norm.F}}\simeq \sum_{T\in\mathcal{T}_{h}} h_T^{-1} \sum_{F\in\mathcal{F}_{T}} \alpha_F^2 .\qedhere$$ ◻ ## Mimetic Poincaré inequality for collections of face values **Theorem 7** (Mimetic Poincaré inequality for collections of face values). *Let $(\alpha_T)_{T\in\mathcal{T}_{h}}\in\mathbb{R}^{\mathcal{T}_{h}}$ be a collection of values at elements. 
Then, there is a collection $(\alpha_F)_{F\in\mathcal{F}_{h}}\in\mathbb{R}^{\mathcal{F}_{h}}$ of values at faces such that, for all $T\in\mathcal{T}_{h}$, $$\label{eq:Whitney.F:poincare} \text{% $\sum_{F\in\mathcal{F}_{T}}\omega_{TF} \alpha_F = \alpha_T$ and $\sum_{T\in\mathcal{T}_{h}} h_T^{-1} \sum_{F\in\mathcal{F}_{T}} \alpha_F^2 \lesssim \sum_{T\in\mathcal{T}_{h}} h_T^{-3} \alpha_T^2$ }$$ with hidden constant only depending on $\Omega$ and the mesh regularity parameter.* *Proof.* Let $(\boldsymbol{\phi}_F)_{F\in\mathfrak{F}_{h}}$ and $(\psi_S)_{S\in\mathfrak{T}_{h}}$ be the basis functions of the face Raviart--Thomas--Nédélec and of the fully discontinuous piecewise affine spaces on the tetrahedral submesh, respectively given by [\[eq:S.DR.2\]](#eq:S.DR.2){reference-type="eqref" reference="eq:S.DR.2"} and [\[eq:S.DR.3\]](#eq:S.DR.3){reference-type="eqref" reference="eq:S.DR.3"} below. Define the following piecewise polynomial function: $$\label{eq:W2.P0} \psi_h \coloneq \sum_{T\in\mathcal{T}_{h}}\sum_{S\in\mathfrak{T}_{T}} \alpha_T \frac{\vert S \vert}{\vert T \vert} \psi_S .$$ We infer from the uniform Poincaré inequality on the simplicial de Rham complex [@Arnold:18] the existence of $\boldsymbol{\phi}_h \coloneq \sum_{F\in\mathfrak{F}_{h}} \alpha_F \boldsymbol{\phi}_F\in \mathcal{RT}^{1}(\mathfrak{T}_{h}) \subset \boldsymbol{H}(\mathop{\mathrm{div}};\Omega)$ such that $$\label{eq:div.surjectivity} \text{% $\mathop{\mathrm{div}}\boldsymbol{\phi}_h = \psi_h$ and $\|\boldsymbol{\phi}_h\|_{\boldsymbol{L}^2(\Omega;\mathbb{R}^3)} \lesssim \|\psi_h\|_{L^2(\Omega)}$. }$$ Summing [\[eq:W.diff.2\]](#eq:W.diff.2){reference-type="eqref" reference="eq:W.diff.2"}, on the other hand, we obtain $$\label{eq:W2.P3} \mathop{\mathrm{div}}\boldsymbol{\phi}_h = \sum_{T\in\mathcal{T}_{h}}\sum_{S\in\mathfrak{T}_{T}} \sum_{F\in\mathfrak{F}_{S}}\omega_{SF} \alpha_F \psi_S,$$ where $\omega_{SF}$ is the orientation of $F$ relative to $S$. Since each $\psi_S$ is supported in $S$, we infer equating [\[eq:W2.P0\]](#eq:W2.P0){reference-type="eqref" reference="eq:W2.P0"} and [\[eq:W2.P3\]](#eq:W2.P3){reference-type="eqref" reference="eq:W2.P3"} that, for all $T \in \mathcal{T}_{h}$ and all $S \in \mathfrak{T}_{T}$, $$\alpha_T \frac{\vert S \vert}{\vert T \vert} = \sum_{F\in\mathfrak{F}_{S}}\omega_{SF} \alpha_F.$$ For all $T\in\mathcal{T}_{h}$, summing this relation over $S \in \mathfrak{T}_{T}$, we have $$\sum_{S\in\mathfrak{T}_{T}} \alpha_T \frac{\vert S \vert}{\vert T \vert} = \sum_{S\in\mathfrak{T}_{T}} \sum_{F\in\mathfrak{F}_{S}}\omega_{SF} \alpha_F \implies \alpha_T \cancelto{1}{\frac{\sum_{S\in\mathfrak{T}_{T}} \vert S \vert}{\vert T \vert}} = \sum_{F\in\mathcal{F}_{T}}\omega_{TF} \alpha_F,$$ where we have used the fact that the contributions from the simplicial faces internal to $T$ cancel out in the right-hand side. It remains to check that [\[eq:Whitney.F:poincare\]](#eq:Whitney.F:poincare){reference-type="eqref" reference="eq:Whitney.F:poincare"} holds. 
Noticing that the only $\boldsymbol{\phi}_F$ supported in $S$ are those associated with its faces $F$ collected in the set $\mathfrak{F}_{S}$, we have $$\label{eq:est.norm.phih} \begin{aligned} \|\boldsymbol{\phi}_h\|_{\boldsymbol{L}^2(\Omega;\mathbb{R}^3)}^2 &= \sum_{T\in\mathcal{T}_{h}} \sum_{S\in\mathfrak{T}_{T}} \int_S \bigg( \sum_{F\in\mathfrak{F}_{S}} \alpha_F \boldsymbol{\phi}_F\bigg)^2 \\ &\simeq \sum_{T\in\mathcal{T}_{h}} \sum_{S\in\mathfrak{T}_{T}} \sum_{F\in\mathfrak{F}_{S}} \alpha_F^2 \|\boldsymbol{\phi}_F\|_{\boldsymbol{L}^2(S;\mathbb{R}^3)}^2 \\ \overset{\eqref{eq:W.norm.2}}&\simeq \sum_{T\in\mathcal{T}_{h}} h_T^{-1} \sum_{S\in\mathfrak{T}_{T}} \sum_{F\in\mathfrak{F}_{S}} \alpha_F^2 \\ &\gtrsim \sum_{T\in\mathcal{T}_{h}} h_T^{-1} \sum_{F\in\mathcal{F}_{T}} \alpha_F^2, \end{aligned}$$ where we have used $h_S^{-1} \simeq h_T^{-1}$ (consequence of mesh regularity) in the third line. Likewise, we have $$\label{eq:est.norm.psih} \|\psi_h\|_{L^2(\Omega)}^2 = \sum_{T\in\mathcal{T}_{h}} \sum_{S\in\mathfrak{T}_{T}} \alpha_T^2 \frac{\vert S \vert^2}{\vert T \vert^2} \|\psi_S\|_{L^2(S)}^2 \overset{\eqref{eq:W.norm.3}}\simeq \sum_{T\in\mathcal{T}_{h}} h_T^{-3} \alpha_T^2,$$ where we have used $h_S \simeq h_T$ and $\vert S \vert \simeq \vert T \vert$. Finally, to prove [\[eq:Whitney.F:poincare\]](#eq:Whitney.F:poincare){reference-type="eqref" reference="eq:Whitney.F:poincare"}, we write $$\sum_{T\in\mathcal{T}_{h}} h_T^{-1} \sum_{F\in\mathcal{F}_{T}} \alpha_F^2 \overset{\eqref{eq:est.norm.phih}}\lesssim \|\boldsymbol{\phi}_h\|_{\boldsymbol{L}^2(\Omega;\mathbb{R}^3)}^2 \overset{\eqref{eq:div.surjectivity}}\lesssim \|\psi_h\|_{L^2(\Omega)}^2 \overset{\eqref{eq:est.norm.psih}}\simeq \sum_{T\in\mathcal{T}_{h}} h_T^{-3} \alpha_T^2 .\qedhere$$ ◻ # Proofs of Poincaré inequalities in DDR spaces {#sec:proof.DDR.poincare} ## Poincaré inequality for the gradient {#sec:uGh.poincare} We start with the following preliminary lemma. **Lemma 8** (Continuous inverse of the discrete gradient). *For all $\underline{p}_h \in \underline{X}_{\mathop{\mathrm{\bf grad}},h}^k$, there is $\underline{q}_h \in \underline{X}_{\mathop{\mathrm{\bf grad}},h}^k$ such that $$\label{eq:uGh.inverse} \text{% $\underline{\boldsymbol{G}}_h^k\underline{q}_h = \underline{\boldsymbol{G}}_h^k\underline{p}_h$ and $\vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{q}_h\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf grad}},h} \lesssim \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{\boldsymbol{G}}_h^k\underline{p}_h\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf curl}},h}$. }$$* *Proof.* We provide an explicit definition of $\underline{q}_h$ and check that [\[eq:uGh.inverse\]](#eq:uGh.inverse){reference-type="eqref" reference="eq:uGh.inverse"} holds. 
Specifically, we let $\underline{q}_h \in \underline{X}_{\mathop{\mathrm{\bf grad}},h}^k$ be such that, for all $E\in\mathcal{E}_{h}$, $$\begin{gathered} \label{eq:qV} \llbracket q_V\rrbracket_{E} = \int_E G_E^k\underline{p}_E \qquad \forall E\in\mathcal{E}_{h}, \\ \label{eq:qE} \int_E q_E\, r' = - \int_E G_E^k\underline{p}_E\, r + \llbracket q_V\, r\rrbracket_{E} \qquad \forall r \in \mathcal{P}_{0}^{k}(E), \end{gathered}$$ for all $F\in\mathcal{F}_{h}$, $$\label{eq:qF} \int_F q_F\,\mathop{\mathrm{div}}_F \boldsymbol{v} = -\int_F \boldsymbol{G}_F^k\underline{p}_F \cdot \boldsymbol{v} + \sum_{E\in\mathcal{E}_{F}}\omega_{FE}\int_E q_E\,(\boldsymbol{v}\cdot\boldsymbol{n}_{FE}) \qquad \forall \boldsymbol{v} \in \boldsymbol{\mathcal{R}}^{\mathrm{c},k}(F),$$ and, for all $T\in\mathcal{T}_{h}$, $$\label{eq:qT} \int_T q_T\,\mathop{\mathrm{div}}\boldsymbol{v} = -\int_T \boldsymbol{G}_T^k\underline{p}_T \cdot \boldsymbol{v} + \sum_{F\in\mathcal{F}_{T}} \omega_{TF}\int_F q_F\,(\boldsymbol{v}\cdot\boldsymbol{n}_F) \qquad \forall \boldsymbol{v} \in \boldsymbol{\mathcal{R}}^{\mathrm{c},k}(T).$$ Notice that the vertex values $(q_V)_{V \in \mathcal{V}_{h}}$ are only defined up to a global constant and that $q_F$ (resp., $q_T$) is well-defined by condition [\[eq:qF\]](#eq:qF){reference-type="eqref" reference="eq:qF"} (resp., [\[eq:qT\]](#eq:qT){reference-type="eqref" reference="eq:qT"}) since $\mathop{\mathrm{div}}_F:\boldsymbol{\mathcal{R}}^{\mathrm{c},k}(F)\to\mathcal{P}_{}^{k-1}(F)$ (resp., $\mathop{\mathrm{div}}:\boldsymbol{\mathcal{R}}^{\mathrm{c},k}(T)\to\mathcal{P}_{}^{k-1}(T)$) is an isomorphism.\ [1. *Equality of the discrete gradient.*]{.ul} Let us first briefly show that $$\label{eq:uGh.q=uGh.p} \underline{\boldsymbol{G}}_h^k\underline{q}_h = \underline{\boldsymbol{G}}_h^k\underline{p}_h.$$ By [\[eq:qV\]](#eq:qV){reference-type="eqref" reference="eq:qV"} and [\[eq:qE\]](#eq:qE){reference-type="eqref" reference="eq:qE"} along with the definition [\[eq:GE\]](#eq:GE){reference-type="eqref" reference="eq:GE"} of $G_E^k$, it holds $$\label{eq:GE.q=GE.p} G_E^k\underline{q}_E = G_E^k\underline{p}_E \qquad \forall E \in \mathcal{E}_{h}.$$ Let now $F \in \mathcal{F}_{h}$. By [@Bonaldi.Di-Pietro.ea:23 Lemma 14] for $(k,d) = (0,2)$, $\int_F \boldsymbol{G}_F^k\underline{p}_F \cdot \mathop{\mathrm{\bf rot}}_F r = -\sum_{E\in\mathcal{E}_{F}}\omega_{FE}\int_E G_E^k\underline{p}_E\, r$ for all $r \in \mathcal{P}_{0}^{k+1}(F)$, so that, by [\[eq:GE.q=GE.p\]](#eq:GE.q=GE.p){reference-type="eqref" reference="eq:GE.q=GE.p"}, $\boldsymbol{\pi}_{\boldsymbol{\mathcal{R}},F}^{k} \boldsymbol{G}_F^k\underline{q}_F = \boldsymbol{\pi}_{\boldsymbol{\mathcal{R}},F}^{k} \boldsymbol{G}_F^k\underline{p}_F$. On the other hand, plugging the definition [\[eq:qF\]](#eq:qF){reference-type="eqref" reference="eq:qF"} of $q_F$ into [\[eq:GF\]](#eq:GF){reference-type="eqref" reference="eq:GF"}, we readily infer that $\boldsymbol{\pi}_{\boldsymbol{\mathcal{R}},F}^{\mathrm{c},k} \boldsymbol{G}_F^k\underline{q}_F = \boldsymbol{\pi}_{\boldsymbol{\mathcal{R}},F}^{\mathrm{c},k} \boldsymbol{G}_F^k\underline{p}_F$. The above relations imply $$\label{eq:GF.q=GF.p} \boldsymbol{G}_F^k\underline{q}_F = \boldsymbol{G}_F^k\underline{p}_F \qquad \forall F\in\mathcal{F}_{h}.$$ The equality of the components associated with an element $T \in \mathcal{T}_{h}$ is proved in a similar way.
First, using again [@Bonaldi.Di-Pietro.ea:23 Lemma 14], this time with $(k,d) = (0,3)$ (which corresponds to [@Di-Pietro.Droniou:23*1 Proposition 1]), we infer that $\int_T \boldsymbol{G}_T^k\underline{q}_T \cdot \mathop{\mathrm{\bf curl}}\boldsymbol{v} = - \sum_{F\in\mathcal{F}_{T}} \int_F \boldsymbol{G}_F^k\underline{q}_F \cdot (\boldsymbol{v} \times \boldsymbol{n}_F)$ for all $\boldsymbol{v} \in \boldsymbol{\mathcal{G}}^{\mathrm{c}, k+1}(T)$. Accounting for [\[eq:GF.q=GF.p\]](#eq:GF.q=GF.p){reference-type="eqref" reference="eq:GF.q=GF.p"}, this yields $\boldsymbol{\pi}_{\boldsymbol{\mathcal{R}},T}^{k} \boldsymbol{G}_T^k\underline{q}_T = \boldsymbol{\pi}_{\boldsymbol{\mathcal{R}},T}^{k} \boldsymbol{G}_T^k\underline{p}_T$. Then, plugging the definition [\[eq:qT\]](#eq:qT){reference-type="eqref" reference="eq:qT"} of $q_T$ into [\[eq:GT\]](#eq:GT){reference-type="eqref" reference="eq:GT"}, we get $\boldsymbol{\pi}_{\boldsymbol{\mathcal{R}},T}^{\mathrm{c},k} \boldsymbol{G}_T^k\underline{q}_T = \boldsymbol{\pi}_{\boldsymbol{\mathcal{R}},T}^{\mathrm{c},k} \boldsymbol{G}_T^k\underline{p}_T$. These equalities give $$\label{eq:GT.q=GT.p} \boldsymbol{G}_T^k\underline{q}_T = \boldsymbol{G}_T^k\underline{p}_T \qquad \forall T\in\mathcal{T}_{h}.$$ Gathering [\[eq:GE.q=GE.p\]](#eq:GE.q=GE.p){reference-type="eqref" reference="eq:GE.q=GE.p"}, [\[eq:GF.q=GF.p\]](#eq:GF.q=GF.p){reference-type="eqref" reference="eq:GF.q=GF.p"}, and [\[eq:GT.q=GT.p\]](#eq:GT.q=GT.p){reference-type="eqref" reference="eq:GT.q=GT.p"}, and recalling the definition [\[eq:uGh\]](#eq:uGh){reference-type="eqref" reference="eq:uGh"} of the discrete gradient, [\[eq:uGh.q=uGh.p\]](#eq:uGh.q=uGh.p){reference-type="eqref" reference="eq:uGh.q=uGh.p"} follows.\ [2. *Continuity.*]{.ul} Using the fact that, for all $T \in \mathcal{T}_{h}$, $h_Y \lesssim h_T$ for all $Y \in \mathcal{F}_{T}\cup \mathcal{E}_{T}$ and that the number of faces of each element and of edges of each face is $\lesssim 1$ by mesh regularity, we have $$\sum_{T\in\mathcal{T}_{h}} h_T \sum_{F\in\mathcal{F}_{T}} h_F \sum_{E\in\mathcal{E}_{F}} h_E \sum_{V\in\mathcal{V}_{E}} |q_V|^2 \lesssim \sum_{T\in\mathcal{T}_{h}} h_T^3 \sum_{V\in\mathcal{V}_{T}} |q_V|^2 \overset{\eqref{eq:Whitney.V:poincare}}\lesssim \sum_{T\in\mathcal{T}_{h}} h_T \sum_{E\in\mathcal{E}_{T}} |\llbracket q_V\rrbracket_{E}|^2,$$ where, to apply Theorem [Theorem 4](#thm:Whitney.V){reference-type="ref" reference="thm:Whitney.V"}, we have used the fact that vertex values are defined up to a constant. Taking absolute values in [\[eq:qV\]](#eq:qV){reference-type="eqref" reference="eq:qV"} and using a Cauchy--Schwarz inequality in the right-hand side, on the other hand, we obtain $|\llbracket q_V\rrbracket_{E}| \lesssim h_E^{\frac12} \|G_E^k\underline{p}_E\|_{L^2(E)} \le h_T^{\frac12} \|G_E^k\underline{p}_E\|_{L^2(E)}$ for all $T \in \mathcal{T}_{h}$ such that $E \in \mathcal{E}_{T}$.
Plugging this bound into the previous expression, we obtain $$\label{eq:inverse.uGh:estimate.V} \sum_{T\in\mathcal{T}_{h}} h_T \sum_{F\in\mathcal{F}_{T}} h_F \sum_{E\in\mathcal{E}_{F}} h_E \sum_{V\in\mathcal{V}_{E}} |q_V|^2 \lesssim \sum_{T\in\mathcal{T}_{h}} h_T^2 \sum_{E\in\mathcal{E}_{T}} \|G_E^k\underline{p}_E\|_{L^2(E)}^2 \lesssim \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{\boldsymbol{G}}_h^k\underline{p}_h\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf curl}},h}^2,$$ where the last inequality is obtained recalling the definition [\[eq:tnorm.curl.h\]](#eq:tnorm.curl.h){reference-type="eqref" reference="eq:tnorm.curl.h"} of $\vert\kern-0.25ex\vert\kern-0.25ex\vert{\cdot}\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf curl}},h}$ with $\underline{\boldsymbol{v}}_h = \underline{\boldsymbol{G}}_h^k\underline{p}_h$ and using mesh regularity. Taking in [\[eq:qE\]](#eq:qE){reference-type="eqref" reference="eq:qE"} $r$ such that $r' = q_E$, using Cauchy--Schwarz and trace inequalities in the right-hand side, simplifying, and squaring, we obtain $$\|q_E\|_{L^2(E)}^2 \lesssim h_E^2 \|G_E^k\underline{p}_E\|_{L^2(E)}^2 + h_E \sum_{V\in\mathcal{V}_{E}} |q_V|^2,$$ so that, using the fact that $h_E \le h$ for all $E\in\mathcal{E}_{h}$ in the first term, $$\label{eq:inverse.uGh:estimate.E} \begin{aligned} \sum_{T\in\mathcal{T}_{h}} h_T \sum_{F\in\mathcal{F}_{T}} h_F \sum_{E\in\mathcal{E}_{F}} \|q_E\|_{L^2(E)}^2 &\lesssim h^2 \sum_{T\in\mathcal{T}_{h}} h_T \sum_{F\in\mathcal{F}_{T}} h_F \sum_{E\in\mathcal{E}_{F}} \|G_E^k\underline{p}_E\|_{L^2(E)}^2 \\ &\quad + \sum_{T\in\mathcal{T}_{h}} h_T \sum_{F\in\mathcal{F}_{T}} h_F \sum_{E\in\mathcal{E}_{F}} h_E \sum_{V\in\mathcal{V}_{E}} |q_V|^2 \\ \overset{\eqref{eq:tnorm.curl.h},\,\eqref{eq:inverse.uGh:estimate.V}}&\lesssim \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{\boldsymbol{G}}_h^k\underline{p}_h\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf curl}},h}^2, \end{aligned}$$ where, in the last inequality, we have additionally used the fact that $h \le \mathop{\mathrm{diam}}(\Omega) \lesssim 1$. To estimate the face components, we first take in [\[eq:qF\]](#eq:qF){reference-type="eqref" reference="eq:qF"} $\boldsymbol{v}$ such that $\mathop{\mathrm{div}}_F\boldsymbol{v} = q_F$ (this is possible since $\mathop{\mathrm{div}}_F:\boldsymbol{\mathcal{R}}^{\mathrm{c},k}(F)\to\mathcal{P}_{}^{k-1}(F)$ is an isomorphism by [@Arnold:18 Corollary 7.3]) and use Cauchy--Schwarz and trace inequalities in the right-hand side to obtain $$\|q_F\|_{L^2(F)}^2 \lesssim h_F^2 \|\boldsymbol{G}_F^k\underline{p}_F\|_{\boldsymbol{L}^2(F;\mathbb{R}^2)}^2 + h_F \sum_{E\in\mathcal{E}_{F}} \|q_E\|_{L^2(E)}^2,$$ so that, using the fact that $h_F \le h$ for all $F\in\mathcal{F}_{h}$ in the first term, $$\label{eq:inverse.uGh:estimate.F} \begin{aligned} \sum_{T\in\mathcal{T}_{h}} h_T \sum_{F\in\mathcal{F}_{T}} \|q_F\|_{L^2(F)}^2 &\lesssim h^2 \sum_{T\in\mathcal{T}_{h}} h_T \sum_{F\in\mathcal{F}_{T}} \|\boldsymbol{G}_F^k\underline{p}_F\|_{\boldsymbol{L}^2(F;\mathbb{R}^2)}^2 \\ &\quad + \sum_{T\in\mathcal{T}_{h}} h_T \sum_{F\in\mathcal{F}_{T}} h_F \sum_{E\in\mathcal{E}_{F}} \|q_E\|_{L^2(E)}^2 \overset{\eqref{eq:tnorm.curl.h},\,\eqref{eq:inverse.uGh:estimate.E}}\lesssim \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{\boldsymbol{G}}_h^k\underline{p}_h\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf curl}},h}^2, \end{aligned}$$ where we have again used the fact that $h \lesssim 1$ for the first term.
The estimate of the element components is entirely similar: taking in [\[eq:qT\]](#eq:qT){reference-type="eqref" reference="eq:qT"} $\boldsymbol{v} \in \boldsymbol{\mathcal{R}}^{\mathrm{c},k}(T)$ such that $\mathop{\mathrm{div}}\boldsymbol{v} = q_T$ (which is possible since $\mathop{\mathrm{div}}: \boldsymbol{\mathcal{R}}^{\mathrm{c},k}(T) \to \mathcal{P}_{}^{k-1}(T)$ is an isomorphism again by [@Arnold:18 Corollary 7.3]), we obtain $\|q_T\|_{L^2(T)}^2 \lesssim h_T^2 \|\boldsymbol{G}_T^k\underline{p}_T\|_{\boldsymbol{L}^2(T;\mathbb{R}^3)}^2 + h_T \sum_{F\in\mathcal{F}_{T}} \|q_F\|_{L^2(F)}^2$. Summing over $T\in\mathcal{T}_{h}$, using the fact that $h_T \le h\lesssim 1$ for all $T\in\mathcal{T}_{h}$ for the first term and [\[eq:inverse.uGh:estimate.F\]](#eq:inverse.uGh:estimate.F){reference-type="eqref" reference="eq:inverse.uGh:estimate.F"} for the second, and recalling the definition [\[eq:tnorm.curl.h\]](#eq:tnorm.curl.h){reference-type="eqref" reference="eq:tnorm.curl.h"} of $\vert\kern-0.25ex\vert\kern-0.25ex\vert{\cdot}\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf curl}},h}$, we arrive at $$\label{eq:inverse.uGh:estimate.T} \sum_{T\in\mathcal{T}_{h}} \|q_T\|_{L^2(T)}^2 \lesssim \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{\boldsymbol{G}}_h^k\underline{p}_h\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf curl}},h}^2.$$ Summing [\[eq:inverse.uGh:estimate.V\]](#eq:inverse.uGh:estimate.V){reference-type="eqref" reference="eq:inverse.uGh:estimate.V"}, [\[eq:inverse.uGh:estimate.E\]](#eq:inverse.uGh:estimate.E){reference-type="eqref" reference="eq:inverse.uGh:estimate.E"}, [\[eq:inverse.uGh:estimate.F\]](#eq:inverse.uGh:estimate.F){reference-type="eqref" reference="eq:inverse.uGh:estimate.F"}, and [\[eq:inverse.uGh:estimate.T\]](#eq:inverse.uGh:estimate.T){reference-type="eqref" reference="eq:inverse.uGh:estimate.T"}, the continuity bound in [\[eq:uGh.inverse\]](#eq:uGh.inverse){reference-type="eqref" reference="eq:uGh.inverse"} follows. ◻ *Proof of Theorem [Theorem 1](#thm:uGh.poincare){reference-type="ref" reference="thm:uGh.poincare"}.* Let $\underline{q}_h \in \underline{X}_{\mathop{\mathrm{\bf grad}},h}^k$ be given by Lemma [Lemma 8](#lem:uGh.inverse){reference-type="ref" reference="lem:uGh.inverse"}. Noticing that $\underline{p}_h - \underline{q}_h \in \mathop{\mathrm{Ker}}\underline{\boldsymbol{G}}_h^k$, we write $$\inf_{\underline{r}_h \in \mathop{\mathrm{Ker}}\underline{\boldsymbol{G}}_h^k} \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{p}_h - \underline{r}_h\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf grad}},h} \le \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{p}_h - (\underline{p}_h - \underline{q}_h)\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf grad}},h} = \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{q}_h\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf grad}},h} \overset{\eqref{eq:uGh.inverse}}\lesssim \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{\boldsymbol{G}}_h^k\underline{p}_h\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf curl}},h}.\qedhere$$ ◻ ## Poincaré inequality for the curl {#sec:uCh.poincare} The proof of Theorem [Theorem 2](#thm:uCh.poincare){reference-type="ref" reference="thm:uCh.poincare"} is analogous to that of Theorem [Theorem 1](#thm:uGh.poincare){reference-type="ref" reference="thm:uGh.poincare"} provided we establish the following result. **Lemma 9** (Continuous inverse of the discrete curl). 
*For any $\underline{\boldsymbol{v}}_h \in \underline{\boldsymbol{X}}_{\mathop{\mathrm{\bf curl}},h}^k$, there is $\underline{\boldsymbol{z}}_h \in \underline{\boldsymbol{X}}_{\mathop{\mathrm{\bf curl}},h}^k$ such that $$\label{eq:uCh.inverse} \text{% $\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{z}}_h = \underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{v}}_h$ and $\vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{\boldsymbol{z}}_h\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf curl}},h} \lesssim \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{v}}_h\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{div}},h}$. }$$* *Proof.* Let $$\label{eq:inverse.uCh:alpha.F} \alpha_F \coloneq \int_F C_F^k\underline{\boldsymbol{v}}_F \qquad\forall F \in \mathcal{F}_{h}.$$ Recalling that $D_h^k\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{v}}_h = 0$ and using the definition [\[eq:DT\]](#eq:DT){reference-type="eqref" reference="eq:DT"} of $D_T^k$ it holds, for all $T \in \mathcal{T}_{h}$, $$0 = \int_T D_T^k\underline{\boldsymbol{C}}_T^k\underline{\boldsymbol{v}}_T = \sum_{F \in \mathcal{F}_{T}} \omega_{TF} \int_F C_F^k\underline{\boldsymbol{v}}_F = \sum_{F \in \mathcal{F}_{T}} \omega_{TF} \alpha_F,$$ showing that [\[eq:Whitney.E:condition.1\]](#eq:Whitney.E:condition.1){reference-type="eqref" reference="eq:Whitney.E:condition.1"} is met. If the domain encapsulates $b_2 > 0$ voids, we infer using [\[eq:CF\]](#eq:CF){reference-type="eqref" reference="eq:CF"} for $r = 1$, that, for all integers $1 \leq i \leq b_2$, $$\sum_{F\in\mathcal{F}_{\gamma_i}} \omega_{\Omega F}\alpha_F = \sum_{F\in\mathcal{F}_{\gamma_i}} \omega_{\Omega F}\int_F C_F^k\underline{\boldsymbol{v}}_F \\ = -\sum_{F\in\mathcal{F}_{\gamma_i}} \omega_{\Omega F}\sum_{E\in\mathcal{E}_{F}} \omega_{FE}\int_E v_E = 0,$$ where we have used the fact that each edge in the summation appears exactly twice with opposite coefficients, and therefore cancels out. We can thus invoke Theorem [Theorem 6](#thm:Whitney.E){reference-type="ref" reference="thm:Whitney.E"} to infer the existence of a collection of values at edges $(\alpha_E) \in \mathbb{R}^{\mathcal{E}_{h}}$ satisfying [\[eq:Whitney.E:poincare\]](#eq:Whitney.E:poincare){reference-type="eqref" reference="eq:Whitney.E:poincare"}. 
We then take $z_E \in \mathcal{P}_{}^{0}(E)$ such that $$\label{eq:z.E} \int_E z_E = h_E\, z_E = \alpha_E \qquad \forall E \in \mathcal{E}_{h}.$$ For all $F \in \mathcal{F}_{h}$, the face components $\boldsymbol{z}_{\boldsymbol{\mathcal{R}},F} \in \boldsymbol{\mathcal{R}}^{k-1}(F)$ and $\boldsymbol{z}_{\boldsymbol{\mathcal{R}},F}^{\mathrm{c}} \in \boldsymbol{\mathcal{R}}^{\mathrm{c},k}(F)$ are selected such that $$\label{eq:z.RF} \int_F \boldsymbol{z}_{\boldsymbol{\mathcal{R}},F}\cdot\mathop{\mathrm{\bf rot}}_F r = \int_F C_F^k\underline{\boldsymbol{v}}_F\, r + \sum_{E \in \mathcal{E}_{F}} \int_E z_E\, r \qquad\forall r \in \mathcal{P}_{0}^{k}(F)$$ and $$\label{eq:z.cRF} \boldsymbol{z}_{\boldsymbol{\mathcal{R}},F}^{\mathrm{c}} = \boldsymbol{0}.$$ Similarly, for any $T \in \mathcal{T}_{h}$, the element components $\boldsymbol{z}_{\boldsymbol{\mathcal{R}},T} \in \boldsymbol{\mathcal{R}}^{k-1}(T)$ and $\boldsymbol{z}_{\boldsymbol{\mathcal{R}},T}^{\mathrm{c}} \in \boldsymbol{\mathcal{R}}^{\mathrm{c},k}(T)$ satisfy $$\label{eq:z.RT} \int_T \boldsymbol{z}_{\boldsymbol{\mathcal{R}},T} \cdot \mathop{\mathrm{\bf curl}}\boldsymbol{w} = \int_T \boldsymbol{C}_T^k\underline{\boldsymbol{v}}_T \cdot \boldsymbol{w} + \sum_{F \in \mathcal{F}_{T}} \omega_{TF} \int_F \boldsymbol{\gamma}_{{\rm t},F}^k\underline{\boldsymbol{z}}_F \cdot (\boldsymbol{w} \times \boldsymbol{n}_F) \qquad \forall \boldsymbol{w} \in \boldsymbol{\mathcal{G}}^{\mathrm{c}, k}(T)$$ and $$\label{eq:z.cRT} \boldsymbol{z}_{\boldsymbol{\mathcal{R}},T}^{\mathrm{c}} = \boldsymbol{0}.$$ Notice that [\[eq:z.RT\]](#eq:z.RT){reference-type="eqref" reference="eq:z.RT"} defines $\boldsymbol{z}_{\boldsymbol{\mathcal{R}},T}$ uniquely since $\mathop{\mathrm{\bf curl}}: \boldsymbol{\mathcal{G}}^{\mathrm{c}, k}(T) \to \boldsymbol{\mathcal{R}}^{k-1}(T)$ is an isomorphism by [@Arnold:18 Corollary 7.3].\ [1. *Equality of the discrete curl.*]{.ul} By definition of $z_E$, it holds $\pi_{\mathcal{P},F}^{0} C_F^k\underline{\boldsymbol{z}}_F = \pi_{\mathcal{P},F}^{0} C_F^k\underline{\boldsymbol{v}}_F$ for all $F \in \mathcal{F}_{h}$. The equality of the higher-order components is obtained by plugging [\[eq:z.RF\]](#eq:z.RF){reference-type="eqref" reference="eq:z.RF"} into the definition [\[eq:CF\]](#eq:CF){reference-type="eqref" reference="eq:CF"} of $C_F^k\underline{\boldsymbol{z}}_F$, which leads to $\int_F C_F^k\underline{\boldsymbol{z}}_F\, r = \int_F C_F^k\underline{\boldsymbol{v}}_F\, r$ for all $r \in \mathcal{P}_{0}^{k}(F)$. By [@Di-Pietro.Droniou:23*1 Proposition 4] (which corresponds to [@Bonaldi.Di-Pietro.ea:23 Lemma 14] with $(d,k) = (3,1)$), the equality of the face curls implies $\boldsymbol{\pi}_{\boldsymbol{\mathcal{G}},T}^{k}(\boldsymbol{C}_T^k\underline{\boldsymbol{z}}_T) = \boldsymbol{\pi}_{\boldsymbol{\mathcal{G}},T}^{k} (\boldsymbol{C}_T^k\underline{\boldsymbol{v}}_T)$. The equality of the projections on $\boldsymbol{\mathcal{R}}^{\mathrm{c},k}(T)$ is obtained by plugging [\[eq:z.RT\]](#eq:z.RT){reference-type="eqref" reference="eq:z.RT"} into the definition [\[eq:CT\]](#eq:CT){reference-type="eqref" reference="eq:CT"} of $\boldsymbol{C}_T^k\underline{\boldsymbol{z}}_{T}$ with test function $\boldsymbol{w} \in \boldsymbol{\mathcal{G}}^{\mathrm{c}, k}(T)$.
Gathering the above results, we obtain $\boldsymbol{C}_T^k\underline{\boldsymbol{z}}_T = \boldsymbol{C}_T^k\underline{\boldsymbol{v}}_{T}$ for all $T \in \mathcal{T}_{h}$, from which $\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{z}}_h = \underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{v}}_h$ follows recalling the definition [\[eq:uCh\]](#eq:uCh){reference-type="eqref" reference="eq:uCh"} of the discrete curl.\ [2. *Continuity.*]{.ul} Let us now show the bound in [\[eq:uCh.inverse\]](#eq:uCh.inverse){reference-type="eqref" reference="eq:uCh.inverse"}. Concerning edge components, we write $$\label{eq:est.z.E} \begin{aligned} \sum_{T\in\mathcal{T}_{h}} h_T \sum_{F\in\mathcal{F}_{T}} h_F \sum_{E\in\mathcal{E}_{F}} \|z_E\|_{L^2(E)}^2 &\lesssim \sum_{T\in\mathcal{T}_{h}} h_T \sum_{E\in\mathcal{E}_{T}} \alpha_E^2 \\ &\lesssim \sum_{T\in\mathcal{T}_{h}} h_T^{-1} \sum_{F\in\mathcal{F}_{T}} \left( \int_F C_F^k\underline{\boldsymbol{v}}_F \right)^2 \lesssim \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{v}}_h\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{div}},h}^2, \end{aligned}$$ where we have used [\[eq:z.E\]](#eq:z.E){reference-type="eqref" reference="eq:z.E"} along with the fact that each edge $E \in \mathcal{E}_{T}$ is shared by exactly two faces in $\mathcal{F}_{T}$, the mimetic Poincaré inequality [\[eq:Whitney.E:poincare\]](#eq:Whitney.E:poincare){reference-type="eqref" reference="eq:Whitney.E:poincare"} together with the definition [\[eq:inverse.uCh:alpha.F\]](#eq:inverse.uCh:alpha.F){reference-type="eqref" reference="eq:inverse.uCh:alpha.F"} of $\alpha_F$ in the second passage, and $\left|\int_F C_F^k\underline{\boldsymbol{v}}_F\right| \lesssim |F|^{\frac12} \|C_F^k\underline{\boldsymbol{v}}_F\|_{L^2(F)} \lesssim h_F \|C_F^k\underline{\boldsymbol{v}}_F\|_{L^2(F)} \le h_T \|C_F^k\underline{\boldsymbol{v}}_F\|_{L^2(F)}$ for all $F \in \mathcal{F}_{h}$ and all $T \in \mathcal{T}_{h}$ to which $F$ belongs together with the definition [\[eq:tnorm.div.h\]](#eq:tnorm.div.h){reference-type="eqref" reference="eq:tnorm.div.h"} of $\vert\kern-0.25ex\vert\kern-0.25ex\vert{\cdot}\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{div}},h}$ to conclude. To estimate the face component, we let $r$ in [\[eq:z.RF\]](#eq:z.RF){reference-type="eqref" reference="eq:z.RF"} be such that $\mathop{\mathrm{\bf rot}}_F r = \boldsymbol{z}_{\boldsymbol{\mathcal{R}},F}$ and use Cauchy--Schwarz, inverse, and trace inequalities in the right-hand side to infer $\|\boldsymbol{z}_{\boldsymbol{\mathcal{R}},F}\|_{\boldsymbol{L}^2(F;\mathbb{R}^2)} \lesssim h_F \|C_F^k\underline{\boldsymbol{v}}_F\|_{L^2(F)} + h_F^{\frac12}\sum_{E\in\mathcal{E}_{F}} \|z_E\|_{L^2(E)}$. 
Squaring the above relation and using standard inequalities for the square of a finite sum of terms, we obtain, after noticing that $h_F \le h$ for all $F \in \mathcal{F}_{h}$, $$\label{eq:est.z.RF} \begin{aligned} \sum_{T\in\mathcal{T}_{h}} h_T \sum_{F\in\mathcal{F}_{T}} \|\boldsymbol{z}_{\boldsymbol{\mathcal{R}},F}\|_{\boldsymbol{L}^2(F;\mathbb{R}^2)}^2 &\lesssim h^2 \sum_{T\in\mathcal{T}_{h}} h_T \sum_{F\in\mathcal{F}_{T}} \|C_F^k\underline{\boldsymbol{v}}_F\|_{L^2(F)}^2 \\ &\quad + \sum_{T\in\mathcal{T}_{h}} h_T \sum_{F\in\mathcal{F}_{T}} h_F \sum_{E\in\mathcal{E}_{F}} \|z_E\|_{L^2(E)}^2 \lesssim \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{v}}_h\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{div}},h}^2, \end{aligned}$$ where the conclusion follows noticing that $h \le \mathop{\mathrm{diam}}(\Omega) \lesssim 1$, recalling the definition [\[eq:tnorm.div.h\]](#eq:tnorm.div.h){reference-type="eqref" reference="eq:tnorm.div.h"} of $\vert\kern-0.25ex\vert\kern-0.25ex\vert{\cdot}\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{div}},h}$ for the first term, and invoking [\[eq:est.z.E\]](#eq:est.z.E){reference-type="eqref" reference="eq:est.z.E"} for the second one. The estimate of the element component is obtained in a similar way, starting from [\[eq:z.RT\]](#eq:z.RT){reference-type="eqref" reference="eq:z.RT"} with $\boldsymbol{w}$ such that $\mathop{\mathrm{\bf curl}}\boldsymbol{w} = \boldsymbol{z}_{\boldsymbol{\mathcal{R}},T}$, leading to $$\label{eq:est.z.RT} \sum_{T\in\mathcal{T}_{h}} \|\boldsymbol{z}_{\boldsymbol{\mathcal{R}},T}\|_{\boldsymbol{L}^2(T;\mathbb{R}^3)}^2 \lesssim \vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{v}}_h\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{div}},h}^2.$$ Summing [\[eq:est.z.E\]](#eq:est.z.E){reference-type="eqref" reference="eq:est.z.E"}, [\[eq:est.z.RF\]](#eq:est.z.RF){reference-type="eqref" reference="eq:est.z.RF"}, and [\[eq:est.z.RT\]](#eq:est.z.RT){reference-type="eqref" reference="eq:est.z.RT"}, recalling the definition [\[eq:tnorm.curl.h\]](#eq:tnorm.curl.h){reference-type="eqref" reference="eq:tnorm.curl.h"} of $\vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{\boldsymbol{z}}_h\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf curl}},h}$ as well as [\[eq:z.cRF\]](#eq:z.cRF){reference-type="eqref" reference="eq:z.cRF"} and [\[eq:z.cRT\]](#eq:z.cRT){reference-type="eqref" reference="eq:z.cRT"}, the bound in [\[eq:uCh.inverse\]](#eq:uCh.inverse){reference-type="eqref" reference="eq:uCh.inverse"} follows. ◻ ## Poincaré inequality for the divergence {#sec:Dh.poincare} Theorem [Theorem 3](#thm:Dh.poincare){reference-type="ref" reference="thm:Dh.poincare"} is established in the same way as Theorem [Theorem 1](#thm:uGh.poincare){reference-type="ref" reference="thm:uGh.poincare"} (see the end of Section [4.1](#sec:uGh.poincare){reference-type="ref" reference="sec:uGh.poincare"}) starting from the following result. **Lemma 10** (Continuous inverse of the discrete divergence). 
*For any $\underline{\boldsymbol{w}}_h \in \underline{\boldsymbol{X}}_{\mathop{\mathrm{div}},h}^k$, there is $\underline{\boldsymbol{z}}_h \in \underline{\boldsymbol{X}}_{\mathop{\mathrm{div}},h}^k$ such that $$\label{eq:Dh.inverse} \text{% $D_h^k\underline{\boldsymbol{z}}_h = D_h^k\underline{\boldsymbol{w}}_h$ and $\vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{\boldsymbol{z}}_h\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{div}},h} \lesssim \|D_h^k\underline{\boldsymbol{w}}_h\|_{L^2(\Omega)}$. }$$* *Proof.* The face components of $\underline{\boldsymbol{z}}_h$ are obtained applying Theorem [Theorem 7](#thm:Whitney.F){reference-type="ref" reference="thm:Whitney.F"} with $\alpha_T = \int_TD_T^k\underline{\boldsymbol{w}}_T$ for all $T\in\mathcal{T}_{h}$ and letting $z_F \in \mathcal{P}_{}^{0}(F)$ be such that $$\label{eq:z.F} \int_F z_F = |F|\, z_F =\alpha_F \qquad \forall F\in\mathcal{F}_{h}.$$ For all $T\in\mathcal{T}_{h}$, the element component $\boldsymbol{z}_{\boldsymbol{\mathcal{G}},T} \in \boldsymbol{\mathcal{G}}^{k-1}(T)$ is defined by the following relation: $$\label{eq:z.GT} \int_T \boldsymbol{z}_{\boldsymbol{\mathcal{G}},T}\cdot\mathop{\mathrm{\bf grad}}q = - \int_T D_T^k\underline{\boldsymbol{w}}_T\,q + \sum_{F\in\mathcal{F}_{T}}\omega_{TF}\int_F z_F\,q \qquad\forall q\in\mathcal{P}_{0}^{k}(T).$$ Finally, we set $$\label{eq:z.cGT} \boldsymbol{z}_{\boldsymbol{\mathcal{G}},T}^{\mathrm{c}} = \boldsymbol{0} \qquad \forall T\in\mathcal{T}_{h}.$$\ [1. *Equality of the discrete divergence.*]{.ul} To check the first condition in [\[eq:Dh.inverse\]](#eq:Dh.inverse){reference-type="eqref" reference="eq:Dh.inverse"}, it suffices to show that $D_T^k\underline{\boldsymbol{z}}_T = D_T^k\underline{\boldsymbol{w}}_T$ for a generic $T\in\mathcal{T}_{h}$. To this end, we start by noticing that $$\int_T D_T^k\underline{\boldsymbol{w}}_T = \alpha_T = \sum_{F\in\mathcal{F}_{T}}\omega_{TF}\alpha_F = \sum_{F\in\mathcal{F}_{T}}\omega_{TF}\int_F z_F = \int_T D_T^k\underline{\boldsymbol{z}}_T,$$ showing that $\pi_{\mathcal{P},T}^{0}( D_T^k\underline{\boldsymbol{z}}_T ) = \pi_{\mathcal{P},T}^{0}( D_T^k\underline{\boldsymbol{w}}_T )$. To show the equivalence of the higher-order components, it suffices to use [\[eq:z.GT\]](#eq:z.GT){reference-type="eqref" reference="eq:z.GT"} in [\[eq:DT\]](#eq:DT){reference-type="eqref" reference="eq:DT"} written for $\underline{\boldsymbol{z}}_T$ to infer $\int_T D_T^k\underline{\boldsymbol{z}}_T\,q = \int_T D_T^k\underline{\boldsymbol{w}}_T\,q$ for all $q\in\mathcal{P}_{0}^{k}(T)$.\ [2. *Continuity.*]{.ul} Let us now show the continuity bound in [\[eq:Dh.inverse\]](#eq:Dh.inverse){reference-type="eqref" reference="eq:Dh.inverse"}. 
Observing that, by [\[eq:z.F\]](#eq:z.F){reference-type="eqref" reference="eq:z.F"}, for all $F \in \mathcal{F}_{h}$ it holds $\|z_F\|_{L^2(F)}^2 = |F|\, z_F^2 = |F|^{-1} \alpha_F^2 \lesssim h_T^{-2} \alpha_F^2$ for all $T \in\mathcal{T}_{h}$ such that $F \in \mathcal{F}_{T}$ (the last inequality being a consequence of mesh regularity), we have $$\label{eq:est.z.F} \begin{aligned} \sum_{T\in\mathcal{T}_{h}} h_T \sum_{F\in\mathcal{F}_{T}} \|z_F\|_{L^2(F)}^2 &\lesssim \sum_{T\in\mathcal{T}_{h}} h_T^{-1} \sum_{F\in\mathcal{F}_{T}} \alpha_F^2 \overset{\eqref{eq:Whitney.F:poincare}}\lesssim \sum_{T\in\mathcal{T}_{h}} h_T^{-3} \alpha_T^2 \\ &\lesssim \sum_{T\in\mathcal{T}_{h}} h_T^{-3}\, |T|\, \|\pi_{\mathcal{P},T}^{0}(D_T^k\underline{\boldsymbol{w}}_T)\|_{L^2(T)}^2 \lesssim \|D_h^k\underline{\boldsymbol{w}}_h\|_{L^2(\Omega)}^2, \end{aligned}$$ where, to pass to the second line, we have used the mesh regularity assumption to infer $h_T^{-3}\, |T| \lesssim 1$. To estimate the element components, we take $q$ in [\[eq:z.GT\]](#eq:z.GT){reference-type="eqref" reference="eq:z.GT"} such that $\mathop{\mathrm{\bf grad}}q = \boldsymbol{z}_{\boldsymbol{\mathcal{G}},T}$, use Cauchy--Schwarz, trace, and inverse inequalities in the right-hand side, pass to the square and use standard inequalities for the square of a finite sum of terms to obtain $$\|\boldsymbol{z}_{\boldsymbol{\mathcal{G}},T}\|_{\boldsymbol{L}^2(T;\mathbb{R}^3)}^2 \lesssim h_T^2 \|D_T^k\underline{\boldsymbol{w}}_T\|_{L^2(T)}^2 + h_T \sum_{F\in\mathcal{F}_{T}} \|z_F\|_{L^2(F)}^2.$$ Summing the above relation over $T\in\mathcal{T}_{h}$, using the fact that $h_T \le \mathop{\mathrm{diam}}(\Omega) \lesssim 1$ for the first term in the right-hand side and [\[eq:est.z.F\]](#eq:est.z.F){reference-type="eqref" reference="eq:est.z.F"} for the second, we obtain $$\label{eq:est.z.GT} \sum_{T\in\mathcal{T}_{h}} \|\boldsymbol{z}_{\boldsymbol{\mathcal{G}},T}\|_{\boldsymbol{L}^2(T;\mathbb{R}^3)}^2 \lesssim \|D_h^k\underline{\boldsymbol{w}}_h\|_{L^2(\Omega)}^2.$$ Summing [\[eq:est.z.F\]](#eq:est.z.F){reference-type="eqref" reference="eq:est.z.F"} to [\[eq:est.z.GT\]](#eq:est.z.GT){reference-type="eqref" reference="eq:est.z.GT"} and recalling the definition [\[eq:tnorm.div.h\]](#eq:tnorm.div.h){reference-type="eqref" reference="eq:tnorm.div.h"} of $\vert\kern-0.25ex\vert\kern-0.25ex\vert{\cdot}\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{div}},h}$ along with [\[eq:z.cGT\]](#eq:z.cGT){reference-type="eqref" reference="eq:z.cGT"} yields the inequality in [\[eq:Dh.inverse\]](#eq:Dh.inverse){reference-type="eqref" reference="eq:Dh.inverse"}. ◻ # Stability analysis of a DDR scheme for the magnetostatics problem {#sec:vectorLaplace} We apply the Poincaré inequalities stated in Section [2.7](#sec:main.results){reference-type="ref" reference="sec:main.results"} to the stability analysis of a DDR scheme for the magnetostatics problem which generalises the one presented in [@Di-Pietro.Droniou:21] to domains with non-trivial topology. 
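Before detailing the scheme, we record a purely illustrative sketch of the mechanism underlying the continuous right inverses of Lemmas 8--10 and the argument at the end of Section [4.1](#sec:uGh.poincare){reference-type="ref" reference="sec:uGh.poincare"}: any right inverse of a discrete differential operator that is continuous for the relevant norms yields a Poincaré inequality. In the sketch below, a generic matrix $G$ stands in for a discrete gradient and Euclidean norms replace the discrete $L^2$-like norms; all names are illustrative assumptions and do not refer to any actual DDR implementation.

```python
# Minimal sketch (toy linear algebra, not the DDR operators): a continuous
# right inverse of a "discrete gradient" G yields a Poincare inequality.
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((3, 5))   # wide matrix, hence with a nontrivial kernel
p = rng.standard_normal(5)

# Right inverse: the minimum-norm solution q of G q = G p satisfies
# ||q|| <= ||G p|| / s_min, with s_min the smallest nonzero singular value of G.
q = np.linalg.pinv(G) @ (G @ p)
s_min = np.linalg.svd(G, compute_uv=False).min()
assert np.allclose(G @ q, G @ p)
assert np.linalg.norm(q) <= np.linalg.norm(G @ p) / s_min + 1e-10

# Poincare inequality, as in the proof of Theorem 1: p - q lies in ker(G), so
#   inf_{r in ker G} ||p - r|| <= ||p - (p - q)|| = ||q|| <= C ||G p||.
assert np.allclose(G @ (p - q), np.zeros(G.shape[0]))
print(np.linalg.norm(q), np.linalg.norm(G @ p) / s_min)
```

In this Euclidean setting the minimum-norm solution is simply the orthogonal projection of $p$ onto the orthogonal complement of the kernel; in the DDR setting the right inverses are built component by component, and only the one-sided bound of the infimum over the kernel by $\vert\kern-0.25ex\vert\kern-0.25ex\vert\underline{q}_h\vert\kern-0.25ex\vert\kern-0.25ex\vert_{\mathop{\mathrm{\bf grad}},h}$ is used.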
We introduce the space of discrete harmonic forms $$\underline{\boldsymbol{\mathfrak{H}}}_{\mathop{\mathrm{div}},h}^k\coloneq \left\lbrace \underline{\boldsymbol{w}}_h \in \underline{\boldsymbol{X}}_{\mathop{\mathrm{div}},h}^k \,:\, \text{% $D_h^k\underline{\boldsymbol{w}}_h = 0$ and $(\underline{\boldsymbol{w}}_h, \underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{v}}_h)_{\mathop{\mathrm{div}},h} = 0$ for all $\underline{\boldsymbol{v}}_h \in \underline{\boldsymbol{X}}_{\mathop{\mathrm{\bf curl}},h}^k$ } \right\rbrace.$$ For a given source term $\boldsymbol{f} \in \boldsymbol{H}^1(\Omega;\mathbb{R}^3)$, we consider the following DDR approximation of the magnetostatics problem: Find $(\underline{\boldsymbol{\sigma}}_h,\underline{\boldsymbol{u}}_h,\underline{\boldsymbol{p}}_h) \in \underline{\boldsymbol{X}}_{\mathop{\mathrm{\bf curl}},h}^k \times \underline{\boldsymbol{X}}_{\mathop{\mathrm{div}},h}^k \times \underline{\boldsymbol{\mathfrak{H}}}_{\mathop{\mathrm{div}},h}^k$ such that, for all $(\underline{\boldsymbol{\tau}}_h,\underline{\boldsymbol{v}}_h,\underline{\boldsymbol{q}}_h) \in \underline{\boldsymbol{X}}_{\mathop{\mathrm{\bf curl}},h}^k \times \underline{\boldsymbol{X}}_{\mathop{\mathrm{div}},h}^k \times \underline{\boldsymbol{\mathfrak{H}}}_{\mathop{\mathrm{div}},h}^k$, $$%% \begin{equation} \label{eq:HL.defPb} A_h((\underline{\boldsymbol{\sigma}}_h,\underline{\boldsymbol{u}}_h,\underline{\boldsymbol{p}}_h), (\underline{\boldsymbol{\tau}}_h,\underline{\boldsymbol{v}}_h,\underline{\boldsymbol{q}}_h)) = (\underline{\boldsymbol{I}}^k_{\mathop{\mathrm{div}},h} \boldsymbol{f}, \underline{\boldsymbol{v}}_h)_{\mathop{\mathrm{div}},h},$$ where the bilinear form $A_h:\big[\underline{\boldsymbol{X}}_{\mathop{\mathrm{\bf curl}},h}^k \times \underline{\boldsymbol{X}}_{\mathop{\mathrm{div}},h}^k \times \underline{\boldsymbol{\mathfrak{H}}}_{\mathop{\mathrm{div}},h}^k\big]^2 \to \mathbb{R}$ is given by: $$\label{eq:HL.defA} \begin{aligned} A_h((\underline{\boldsymbol{\sigma}}_h,\underline{\boldsymbol{u}}_h,\underline{\boldsymbol{p}}_h), (\underline{\boldsymbol{\tau}}_h,\underline{\boldsymbol{v}}_h,\underline{\boldsymbol{q}}_h)) &\coloneq (\underline{\boldsymbol{\sigma}}_h, \underline{\boldsymbol{\tau}}_h)_{\mathop{\mathrm{\bf curl}},h} - (\underline{\boldsymbol{u}}_h, \underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\tau}}_h)_{\mathop{\mathrm{div}},h} + (\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\sigma}}_h, \underline{\boldsymbol{v}}_h)_{\mathop{\mathrm{div}},h} \\ &\quad + (D_h^k\underline{\boldsymbol{u}}_h, D_h^k\underline{\boldsymbol{v}}_h)_{L^2(\Omega)} + (\underline{\boldsymbol{p}}_h,\underline{\boldsymbol{v}}_h)_{\mathop{\mathrm{div}},h} + (\underline{\boldsymbol{u}}_h,\underline{\boldsymbol{q}}_h)_{\mathop{\mathrm{div}},h}, \end{aligned}$$ with discrete $L^2$-like products $(\cdot,\cdot)_{\mathop{\mathrm{\bf curl}},h}$ and $(\cdot,\cdot)_{\mathop{\mathrm{div}},h}$ defined as in [@Di-Pietro.Droniou:23*1 Section 4.4]. 
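To fix ideas on the algebraic structure of [\[eq:HL.defPb\]](#eq:HL.defPb){reference-type="eqref" reference="eq:HL.defPb"}, the following sketch assembles the corresponding saddle-point block matrix with small random matrices standing in for the DDR objects (symmetric positive definite Gram matrices for the discrete products and generic matrices for $\underline{\boldsymbol{C}}_h^k$ and $D_h^k$; the constraint that the third component belongs to $\underline{\boldsymbol{\mathfrak{H}}}_{\mathop{\mathrm{div}},h}^k$ is ignored). It also checks the diagonal identity $A_h((\underline{\boldsymbol{\sigma}}_h,\underline{\boldsymbol{u}}_h,\underline{\boldsymbol{p}}_h),(\underline{\boldsymbol{\sigma}}_h,\underline{\boldsymbol{u}}_h,\underline{\boldsymbol{p}}_h)) = (\underline{\boldsymbol{\sigma}}_h,\underline{\boldsymbol{\sigma}}_h)_{\mathop{\mathrm{\bf curl}},h} + \|D_h^k\underline{\boldsymbol{u}}_h\|_{L^2(\Omega)}^2 + 2(\underline{\boldsymbol{u}}_h,\underline{\boldsymbol{p}}_h)_{\mathop{\mathrm{div}},h}$, which follows directly from [\[eq:HL.defA\]](#eq:HL.defA){reference-type="eqref" reference="eq:HL.defA"}, shows that $A_h$ is not coercive, and motivates the inf-sup analysis below. The sketch is purely illustrative and does not reflect an actual implementation.

```python
# Illustrative block form of the DDR magnetostatics system: toy SPD Gram
# matrices and random operator matrices replace the actual DDR objects.
import numpy as np

rng = np.random.default_rng(1)

def spd(n):
    # random symmetric positive definite matrix (toy discrete L2-like product)
    B = rng.standard_normal((n, n))
    return B @ B.T + n * np.eye(n)

nc, nd, n0 = 6, 5, 4                    # toy dimensions of X_curl, X_div, P^k(T_h)
Mc, Md, M0 = spd(nc), spd(nd), spd(n0)  # Gram matrices of (.,.)_curl, (.,.)_div, L2
C = rng.standard_normal((nd, nc))       # stand-in for the discrete curl C_h^k
D = rng.standard_normal((n0, nd))       # stand-in for the discrete divergence D_h^k

A = np.block([
    [Mc,                 -C.T @ Md,     np.zeros((nc, nd))],
    [Md @ C,             D.T @ M0 @ D,  Md                ],
    [np.zeros((nd, nc)), Md,            np.zeros((nd, nd))],
])

# Diagonal identity: the sigma-u couplings cancel, leaving
#   A_h(x, x) = ||sigma||_curl^2 + ||D_h^k u||^2 + 2 (u, p)_div.
s, u, p = rng.standard_normal(nc), rng.standard_normal(nd), rng.standard_normal(nd)
x = np.concatenate([s, u, p])
assert np.isclose(x @ A @ x, s @ Mc @ s + (D @ u) @ M0 @ (D @ u) + 2 * u @ Md @ p)
```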
We define the graph norm on this product space as $$\label{eq:norm.h} \vert\kern-0.25ex\vert\kern-0.25ex\vert(\underline{\boldsymbol{\sigma}}_h,\underline{\boldsymbol{u}}_h,\underline{\boldsymbol{p}}_h)\vert\kern-0.25ex\vert\kern-0.25ex\vert_{h} \coloneq \left( \|\underline{\boldsymbol{\sigma}}_h\|_{\mathop{\mathrm{\bf curl}},h}^2 + \|\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\sigma}}_h\|_{\mathop{\mathrm{div}},h}^2 + \|\underline{\boldsymbol{u}}_h\|_{\mathop{\mathrm{div}},h}^2 + \|D_h^k\underline{\boldsymbol{u}}_h\|_{L^2(\Omega)}^2 + \|\underline{\boldsymbol{p}}_h\|_{\mathop{\mathrm{div}},h}^2 \right)^{\frac12},$$ with norms $\|{\cdot}\|_{\mathop{\mathrm{\bf curl}},h} \coloneq (\cdot,\cdot)_{\mathop{\mathrm{\bf curl}},h}^{\frac12}$ on $\underline{\boldsymbol{X}}_{\mathop{\mathrm{\bf curl}},h}^k$ and $\|{\cdot}\|_{\mathop{\mathrm{div}},h} \coloneq (\cdot,\cdot)_{\mathop{\mathrm{div}},h}^{\frac12}$ on $\underline{\boldsymbol{X}}_{\mathop{\mathrm{div}},h}^k$ induced by the corresponding discrete $L^2$-products. **Theorem 11** (Stability of the discrete bilinear form). *The discrete bilinear form [\[eq:HL.defA\]](#eq:HL.defA){reference-type="eqref" reference="eq:HL.defA"} is inf-sup stable for the graph norm $\vert\kern-0.25ex\vert\kern-0.25ex\vert{\cdot}\vert\kern-0.25ex\vert\kern-0.25ex\vert_{h}$, i.e., for all $(\underline{\boldsymbol{\sigma}}_h,\underline{\boldsymbol{u}}_h,\underline{\boldsymbol{p}}_h) \in \underline{\boldsymbol{X}}_{\mathop{\mathrm{\bf curl}},h}^k \times \underline{\boldsymbol{X}}_{\mathop{\mathrm{div}},h}^k \times \underline{\boldsymbol{\mathfrak{H}}}_{\mathop{\mathrm{div}},h}^k$, it holds $$\label{eq:inf-sup} \vert\kern-0.25ex\vert\kern-0.25ex\vert(\underline{\boldsymbol{\sigma}}_h,\underline{\boldsymbol{u}}_h,\underline{\boldsymbol{p}}_h)\vert\kern-0.25ex\vert\kern-0.25ex\vert_{h} \lesssim \sup_{(\underline{\boldsymbol{\tau}}_h,\underline{\boldsymbol{v}}_h,\underline{\boldsymbol{q}}_h) \in \underline{\boldsymbol{X}}_{\mathop{\mathrm{\bf curl}},h}^k \times \underline{\boldsymbol{X}}_{\mathop{\mathrm{div}},h}^k \times \underline{\boldsymbol{\mathfrak{H}}}_{\mathop{\mathrm{div}},h}^k\setminus \{0\}}\frac{% A_h((\underline{\boldsymbol{\sigma}}_h,\underline{\boldsymbol{u}}_h,\underline{\boldsymbol{p}}_h), (\underline{\boldsymbol{\tau}}_h,\underline{\boldsymbol{v}}_h,\underline{\boldsymbol{q}}_h)) }{% \vert\kern-0.25ex\vert\kern-0.25ex\vert(\underline{\boldsymbol{\tau}}_h,\underline{\boldsymbol{v}}_h,\underline{\boldsymbol{q}}_h)\vert\kern-0.25ex\vert\kern-0.25ex\vert_{h} }.$$* *Proof.* Let $(\underline{\boldsymbol{\sigma}}_h,\underline{\boldsymbol{u}}_h,\underline{\boldsymbol{p}}_h)$ be given, and let $C_{\rm P} > 0$ be the maximum of the hidden constants in the continuity estimates [\[eq:Dh.inverse\]](#eq:Dh.inverse){reference-type="eqref" reference="eq:Dh.inverse"} for the divergence and [\[eq:uCh.inverse\]](#eq:uCh.inverse){reference-type="eqref" reference="eq:uCh.inverse"} for the curl. Let $\underline{\boldsymbol{z}}_h \in \underline{\boldsymbol{X}}_{\mathop{\mathrm{div}},h}^k$ be given by [\[eq:Dh.inverse\]](#eq:Dh.inverse){reference-type="eqref" reference="eq:Dh.inverse"}, i.e., such that $$\label{eq:zh} \text{% $D_h^k\underline{\boldsymbol{z}}_h = D_h^k\underline{\boldsymbol{u}}_h$ and $\|\underline{\boldsymbol{z}}_h\|_{\mathop{\mathrm{div}},h} \leq C_{\rm P} \|D_h^k\underline{\boldsymbol{u}}_h\|_{L^2(\Omega)} $. 
}$$ Since $D_h^k(\underline{\boldsymbol{u}}_h - \underline{\boldsymbol{z}}_h) = 0$, there exists $(\underline{\boldsymbol{\alpha}}_h, \underline{\boldsymbol{\beta}}_h) \in \underline{\boldsymbol{X}}_{\mathop{\mathrm{\bf curl}},h}^k \times \underline{\boldsymbol{\mathfrak{H}}}_{\mathop{\mathrm{div}},h}^k$ such that $\underline{\boldsymbol{u}}_h - \underline{\boldsymbol{z}}_h = \underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\alpha}}_h + \underline{\boldsymbol{\beta}}_h$. Using [\[eq:uCh.inverse\]](#eq:uCh.inverse){reference-type="eqref" reference="eq:uCh.inverse"} applied to $\underline{\boldsymbol{\alpha}}_h$, we infer the existence of $$\label{eq:alpha'h} \text{ $\underline{\boldsymbol{\alpha}}_h'$ such that $\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\alpha}}_h' = \underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\alpha}}_h$ and $\|\underline{\boldsymbol{\alpha}}_h'\|_{\mathop{\mathrm{\bf curl}},h} \leq C_{\rm P} \|\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\alpha}}_h\|_{\mathop{\mathrm{div}},h}$. }$$ Noticing that ${\|\underline{\boldsymbol{u}}_h - \underline{\boldsymbol{z}}_h\|_{\mathop{\mathrm{div}},h}^2} = {\|\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\alpha}}_h\|_{\mathop{\mathrm{div}},h}^2} + {\|\underline{\boldsymbol{\beta}}_h\|_{\mathop{\mathrm{div}},h}^2}$ by orthogonality, using a triangle inequality we infer that $$ \label{eq:HL.bound.Cp} \|\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\alpha}}_h\|_{\mathop{\mathrm{div}},h}^2 + \|\underline{\boldsymbol{\beta}}_h\|_{\mathop{\mathrm{div}},h}^2 \lesssim \|\underline{\boldsymbol{u}}_h\|_{\mathop{\mathrm{div}},h}^2 + \|\underline{\boldsymbol{z}}_h\|_{\mathop{\mathrm{div}},h}^2 \overset{\eqref{eq:zh}}\le \|\underline{\boldsymbol{u}}_h\|_{\mathop{\mathrm{div}},h}^2 + C_{\rm P}^2 \|D_h^k\underline{\boldsymbol{u}}_h\|_{L^2(\Omega)}^2.$$ We define $$ \label{eq:HL.def.TF} \underline{\boldsymbol{\tau}}_h \coloneq 2 C_{\rm P}^2 \underline{\boldsymbol{\sigma}}_h - \underline{\boldsymbol{\alpha}}_h', \quad \underline{\boldsymbol{v}}_h \coloneq \underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\sigma}}_h + \underline{\boldsymbol{p}}_h + 2 C_{\rm P}^2 \underline{\boldsymbol{u}}_h , \quad \underline{\boldsymbol{q}}_h \coloneq \underline{\boldsymbol{\beta}}_h - 2 C_{\rm P}^2 \underline{\boldsymbol{p}}_h.$$ The following bound is readily inferred using triangle inequalities along with the facts that $\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\alpha}}_h' = \underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\alpha}}_h$, $D_h^k\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\sigma}}_h = 0$, and $D_h^k\underline{\boldsymbol{p}}_h = 0$: $$ \label{eq:HL.bound.TF} \begin{aligned} &\|\underline{\boldsymbol{\tau}}_h\|_{\mathop{\mathrm{\bf curl}},h}^2 + \|\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\tau}}_h\|_{\mathop{\mathrm{div}},h}^2 + \|\underline{\boldsymbol{v}}_h\|_{\mathop{\mathrm{div}},h}^2 + \|D_h^k\underline{\boldsymbol{v}}_h\|_{L^2(\Omega)}^2 + \|\underline{\boldsymbol{q}}_h\|_{\mathop{\mathrm{div}},h}^2\\ &\quad \begin{aligned}[t] &\lesssim \|\underline{\boldsymbol{\sigma}}_h\|_{\mathop{\mathrm{\bf curl}},h}^2 + \|\underline{\boldsymbol{\alpha}}_h'\|_{\mathop{\mathrm{\bf curl}},h}^2 + \|\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\sigma}}_h\|_{\mathop{\mathrm{div}},h}^2 + \|\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\alpha}}_h\|_{\mathop{\mathrm{div}},h}^2 + \|\underline{\boldsymbol{p}}_h\|_{\mathop{\mathrm{div}},h}^2 + \|\underline{\boldsymbol{u}}_h\|_{\mathop{\mathrm{div}},h}^2 + \|D_h^k\underline{\boldsymbol{u}}_h\|_{L^2(\Omega)}^2 + \|\underline{\boldsymbol{\beta}}_h\|_{\mathop{\mathrm{div}},h}^2 \\ \overset{\eqref{eq:alpha'h},\,\eqref{eq:HL.bound.Cp},\,\eqref{eq:norm.h}}&\lesssim \vert\kern-0.25ex\vert\kern-0.25ex\vert(\underline{\boldsymbol{\sigma}}_h,\underline{\boldsymbol{u}}_h,\underline{\boldsymbol{p}}_h)\vert\kern-0.25ex\vert\kern-0.25ex\vert_{h}^2. 
\end{aligned} \end{aligned}$$ Plugging the test functions [\[eq:HL.def.TF\]](#eq:HL.def.TF){reference-type="eqref" reference="eq:HL.def.TF"} into the expression [\[eq:HL.defA\]](#eq:HL.defA){reference-type="eqref" reference="eq:HL.defA"} of $A_h$ gives $$\label{eq:HL.P.INFSUP.0} \begin{aligned} &A_h((\underline{\boldsymbol{\sigma}}_h,\underline{\boldsymbol{u}}_h,\underline{\boldsymbol{p}}_h), (\underline{\boldsymbol{\tau}}_h,\underline{\boldsymbol{v}}_h,\underline{\boldsymbol{q}}_h)) \\ &\quad= 2 C_{\rm P}^2 \|\underline{\boldsymbol{\sigma}}_h\|_{\mathop{\mathrm{\bf curl}},h}^2 - (\underline{\boldsymbol{\sigma}}_h, \underline{\boldsymbol{\alpha}}_h')_{\mathop{\mathrm{\bf curl}},h} \\ &\qquad - \bcancel{2 C_{\rm P}^2 (\underline{\boldsymbol{u}}_h, \underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\sigma}}_h)_{\mathop{\mathrm{div}},h}} + (\underline{\boldsymbol{u}}_h, \underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\alpha}}_h)_{\mathop{\mathrm{div}},h} \\ &\qquad + \|\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\sigma}}_h\|_{\mathop{\mathrm{div}},h}^2 + \cancel{(\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\sigma}}_h, \underline{\boldsymbol{p}}_h)_{\mathop{\mathrm{div}},h}} + \bcancel{2 C_{\rm P}^2 (\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\sigma}}_h, \underline{\boldsymbol{u}}_h)_{\mathop{\mathrm{div}},h}} \\ &\qquad - (D_h^k\underline{\boldsymbol{u}}_h, \cancel{D_h^k\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\sigma}}_h})_{L^2(\mathcal{T}_{h})} - (D_h^k\underline{\boldsymbol{u}}_h, \cancel{D_h^k\underline{\boldsymbol{p}}_h})_{L^2(\mathcal{T}_{h})} + 2 C_{\rm P}^2 \|D_h^k\underline{\boldsymbol{u}}_h\|_{L^2(\Omega)}^2 \\ &\qquad + \cancel{(\underline{\boldsymbol{p}}_h,\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\sigma}}_h)_{\mathop{\mathrm{div}},h} } + \|\underline{\boldsymbol{p}}_h\|_{\mathop{\mathrm{div}},h}^2 + \bcancel{2 C_{\rm P}^2 (\underline{\boldsymbol{p}}_h,\underline{\boldsymbol{u}}_h)_{\mathop{\mathrm{div}},h}} \\ &\qquad + (\underline{\boldsymbol{u}}_h,\underline{\boldsymbol{\beta}}_h)_{\mathop{\mathrm{div}},h} - \bcancel{2 C_{\rm P}^2 (\underline{\boldsymbol{u}}_h,\underline{\boldsymbol{p}}_h)_{\mathop{\mathrm{div}},h}} \\ &\quad= 2 C_{\rm P}^2 \|\underline{\boldsymbol{\sigma}}_h\|_{\mathop{\mathrm{\bf curl}},h}^2 - (\underline{\boldsymbol{\sigma}}_h, \underline{\boldsymbol{\alpha}}_h')_{\mathop{\mathrm{\bf curl}},h} + \|\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\sigma}}_h\|_{\mathop{\mathrm{div}},h}^2 + \|\underline{\boldsymbol{p}}_h\|_{\mathop{\mathrm{div}},h}^2 \\ &\qquad + 2 C_{\rm P}^2 \|D_h^k\underline{\boldsymbol{u}}_h\|_{L^2(\Omega)}^2 + (\underline{\boldsymbol{u}}_h, \underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\alpha}}_h + \underline{\boldsymbol{\beta}}_h)_{\mathop{\mathrm{div}},h}. 
\end{aligned}$$ Using Cauchy--Schwarz and generalised Young inequalities, we have $$\label{eq:HL.P.IN.1} (\underline{\boldsymbol{\sigma}}_h, \underline{\boldsymbol{\alpha}}_h')_{\mathop{\mathrm{\bf curl}},h} \overset{\eqref{eq:alpha'h}}\leq C_{\rm P} \|\underline{\boldsymbol{\sigma}}_h\|_{\mathop{\mathrm{\bf curl}},h} \|\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\alpha}}_h\|_{\mathop{\mathrm{div}},h} \leq \frac{3 C_{\rm P}^2}{4} \|\underline{\boldsymbol{\sigma}}_h\|_{\mathop{\mathrm{\bf curl}},h}^2 + \frac{1}{3} \|\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\alpha}}_h\|_{\mathop{\mathrm{div}},h}^2.$$ Moreover, the decomposition $\underline{\boldsymbol{u}}_h = \underline{\boldsymbol{z}}_h + \underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\alpha}}_h + \underline{\boldsymbol{\beta}}_h$ gives $(\underline{\boldsymbol{u}}_h, \underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\alpha}}_h + \underline{\boldsymbol{\beta}}_h)_{\mathop{\mathrm{div}},h} = \|\underline{\boldsymbol{u}}_h\|_{\mathop{\mathrm{div}},h}^2 - (\underline{\boldsymbol{u}}_h, \underline{\boldsymbol{z}}_h)_{\mathop{\mathrm{div}},h}$ with $$\label{eq:HL.P.IN.2} (\underline{\boldsymbol{u}}_h, \underline{\boldsymbol{z}}_h)_{\mathop{\mathrm{div}},h} \leq \|\underline{\boldsymbol{u}}_h\|_{\mathop{\mathrm{div}},h} C_{\rm P}\|D_h^k\underline{\boldsymbol{u}}_h\|_{L^2(\Omega)} \leq \frac13 \|\underline{\boldsymbol{u}}_h\|_{\mathop{\mathrm{div}},h}^2 + \frac{3C_{\rm P}^2}{4} \|D_h^k\underline{\boldsymbol{u}}_h\|_{L^2(\Omega)}^2.$$ Plugging [\[eq:HL.P.IN.1\]](#eq:HL.P.IN.1){reference-type="eqref" reference="eq:HL.P.IN.1"} and [\[eq:HL.P.IN.2\]](#eq:HL.P.IN.2){reference-type="eqref" reference="eq:HL.P.IN.2"} into [\[eq:HL.P.INFSUP.0\]](#eq:HL.P.INFSUP.0){reference-type="eqref" reference="eq:HL.P.INFSUP.0"}, we have $$\begin{aligned} &A_h((\underline{\boldsymbol{\sigma}}_h,\underline{\boldsymbol{u}}_h,\underline{\boldsymbol{p}}_h), (\underline{\boldsymbol{\tau}}_h,\underline{\boldsymbol{v}}_h,\underline{\boldsymbol{q}}_h)) \\ &\quad\geq \frac{5 C_{\rm P}^2}{4} \|\underline{\boldsymbol{\sigma}}_h\|_{\mathop{\mathrm{\bf curl}},h}^2 + \|\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\sigma}}_h\|_{\mathop{\mathrm{div}},h}^2 + \|\underline{\boldsymbol{p}}_h\|_{\mathop{\mathrm{div}},h}^2 + \frac{5 C_{\rm P}^2}{4} \|D_h^k\underline{\boldsymbol{u}}_h\|_{L^2(\Omega)}^2 + \frac23\|\underline{\boldsymbol{u}}_h\|_{\mathop{\mathrm{div}},h}^2 - \frac13\|\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\alpha}}_h\|_{\mathop{\mathrm{div}},h}^2 \\ &\quad\geq \frac{5 C_{\rm P}^2}{4} \|\underline{\boldsymbol{\sigma}}_h\|_{\mathop{\mathrm{\bf curl}},h}^2 + \|\underline{\boldsymbol{C}}_h^k\underline{\boldsymbol{\sigma}}_h\|_{\mathop{\mathrm{div}},h}^2 + \|\underline{\boldsymbol{p}}_h\|_{\mathop{\mathrm{div}},h}^2 + \frac{11 C_{\rm P}^2}{12} \|D_h^k\underline{\boldsymbol{u}}_h\|_{L^2(\Omega)}^2 + \frac13\|\underline{\boldsymbol{u}}_h\|_{\mathop{\mathrm{div}},h}^2 \\ &\quad\gtrsim \vert\kern-0.25ex\vert\kern-0.25ex\vert(\underline{\boldsymbol{\sigma}}_h,\underline{\boldsymbol{u}}_h,\underline{\boldsymbol{p}}_h)\vert\kern-0.25ex\vert\kern-0.25ex\vert_{h}^2. 
\end{aligned}$$ Denoting by $\$$ the supremum in [\[eq:inf-sup\]](#eq:inf-sup){reference-type="eqref" reference="eq:inf-sup"}, we then use the previous bound to write $$\vert\kern-0.25ex\vert\kern-0.25ex\vert(\underline{\boldsymbol{\sigma}}_h,\underline{\boldsymbol{u}}_h,\underline{\boldsymbol{p}}_h)\vert\kern-0.25ex\vert\kern-0.25ex\vert_{h}^2 \lesssim A_h((\underline{\boldsymbol{\sigma}}_h,\underline{\boldsymbol{u}}_h,\underline{\boldsymbol{p}}_h), (\underline{\boldsymbol{\tau}}_h,\underline{\boldsymbol{v}}_h,\underline{\boldsymbol{q}}_h)) \le \$\, \vert\kern-0.25ex\vert\kern-0.25ex\vert(\underline{\boldsymbol{\tau}}_h,\underline{\boldsymbol{v}}_h,\underline{\boldsymbol{q}}_h)\vert\kern-0.25ex\vert\kern-0.25ex\vert_{h} \overset{\eqref{eq:HL.bound.TF}}\lesssim \$\, \vert\kern-0.25ex\vert\kern-0.25ex\vert(\underline{\boldsymbol{\sigma}}_h,\underline{\boldsymbol{u}}_h,\underline{\boldsymbol{p}}_h)\vert\kern-0.25ex\vert\kern-0.25ex\vert_{h}.$$ Simplifying, the conclusion follows. ◻ # Results on the trimmed finite element sequence on tetrahedral meshes {#sec:simplicial.de-rham} In this section we provide the explicit expression of the polynomial basis functions used in Section [3](#sec:proof.poincaré){reference-type="ref" reference="sec:proof.poincaré"}. These bases can be easily described on a reference element (see [@Arnold.Logg:14]). However, in order the compute their norms, we need to know their expression on the physical element. The transformation from the reference to the physical element is given by the pullback of the mapping between the two. We consider a simplex $S$ with vertices $V_0$, $V_1$, $V_2$, and $V_3$ ordered so that, denoting by $\boldsymbol{x}_{V_i}$ the coordinate vector of $V_i$, $\left\{\boldsymbol{x}_{V_1} - \boldsymbol{x}_{V_0},\, \boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_0},\, \boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_0}\right\}$ forms a direct basis of $\mathbb{R}^3$. The basis for the local affine Lagrange space $\mathcal{P}_{1}^{{-}}\Lambda^0(S) \cong \mathcal{P}_{}^{1}(S)$ spanned by "hat" functions is $$\label{eq:S.DR.0} \begin{aligned} \phi_0(\boldsymbol{x}) \coloneq \frac{\big[(\boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_1}) \times (\boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_1}) \big] \cdot (\boldsymbol{x}_{V_3} - \boldsymbol{x})} {\det\big(\boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_1}, \boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_1}, \boldsymbol{x}_{V_3}-\boldsymbol{x}_{V_0}\big)} &,\; \phi_1(\boldsymbol{x}) \coloneq \frac{\big[(\boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_0}) \times (\boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_0}) \big] \cdot (\boldsymbol{x} - \boldsymbol{x}_{V_0})} {\det\big(\boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_1}-\boldsymbol{x}_{V_0}\big)} ,\\ \phi_2(\boldsymbol{x}) \coloneq \frac{\big[(\boldsymbol{x}_{V_1} - \boldsymbol{x}_{V_0}) \times (\boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_0}) \big] \cdot (\boldsymbol{x} - \boldsymbol{x}_{V_0})} {\det\big(\boldsymbol{x}_{V_1} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_2}-\boldsymbol{x}_{V_0}\big)} &,\; \phi_3(\boldsymbol{x}) \coloneq \frac{\big[(\boldsymbol{x}_{V_1} - \boldsymbol{x}_{V_0}) \times (\boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_0}) \big] \cdot (\boldsymbol{x} - \boldsymbol{x}_{V_0})} {\det\big(\boldsymbol{x}_{V_1} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_3}-\boldsymbol{x}_{V_0}\big)}. 
\end{aligned}$$ The basis of the lowest-order local edge Nédélec space $\mathcal{P}_{1}^{{-}}\Lambda^1(S) \cong \mathcal{N}^{1}(S)$ is $$\label{eq:S.DR.1} \begin{aligned} \boldsymbol{\phi}_{23}(\boldsymbol{x}) \coloneq \frac{(\boldsymbol{x}_{V_1} - \boldsymbol{x}_{V_0}) \times (\boldsymbol{x} - \boldsymbol{x}_{V_0})} {\det\big(\boldsymbol{x}_{V_1} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_3}-\boldsymbol{x}_{V_2}\big)} &,\; \boldsymbol{\phi}_{13}(\boldsymbol{x}) \coloneq \frac{(\boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_0}) \times (\boldsymbol{x} - \boldsymbol{x}_{V_0})} {\det\big(\boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_1} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_3}-\boldsymbol{x}_{V_1}\big)} ,\\ \boldsymbol{\phi}_{12}(\boldsymbol{x}) \coloneq \frac{(\boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_0}) \times (\boldsymbol{x} - \boldsymbol{x}_{V_0})} {\det\big(\boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_1} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_2}-\boldsymbol{x}_{V_1}\big)} &,\; \boldsymbol{\phi}_{03}(\boldsymbol{x}) \coloneq \frac{(\boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_1}) \times (\boldsymbol{x} - \boldsymbol{x}_{V_1})} {\det\big(\boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_1}, \boldsymbol{x}_{V_0} - \boldsymbol{x}_{V_1}, \boldsymbol{x}_{V_3}-\boldsymbol{x}_{V_0}\big)} ,\\ \boldsymbol{\phi}_{02}(\boldsymbol{x}) \coloneq \frac{(\boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_1}) \times (\boldsymbol{x} - \boldsymbol{x}_{V_1})} {\det\big(\boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_1}, \boldsymbol{x}_{V_0} - \boldsymbol{x}_{V_1}, \boldsymbol{x}_{V_2}-\boldsymbol{x}_{V_0}\big)} &,\; \boldsymbol{\phi}_{01}(\boldsymbol{x}) \coloneq \frac{(\boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_2}) \times (\boldsymbol{x} - \boldsymbol{x}_{V_2})} {\det\big(\boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_2}, \boldsymbol{x}_{V_0} - \boldsymbol{x}_{V_2}, \boldsymbol{x}_{V_1}-\boldsymbol{x}_{V_0}\big)}, \end{aligned}$$ where the function $\boldsymbol{\phi}_{ij}$ is associated to the edge $E_{ij}$ with vertices $V_i$ and $V_j$. The basis of the lowest-order local face Raviart--Thomas--Nédélec space $\mathcal{P}_{1}^{{-}}\Lambda^2(S) \cong \mathcal{RT}^{1}(S)$ is $$\label{eq:S.DR.2} \begin{aligned} \boldsymbol{\phi}_{123}(\boldsymbol{x}) \coloneq \frac{2(\boldsymbol{x} - \boldsymbol{x}_{V_0})} {\det\big(\boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_1}, \boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_1}, \boldsymbol{x}_{V_1}-\boldsymbol{x}_{V_0}\big)} &,\; \boldsymbol{\phi}_{023}(\boldsymbol{x}) \coloneq \frac{2(\boldsymbol{x} - \boldsymbol{x}_{V_1})} {\det\big(\boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_1}-\boldsymbol{x}_{V_0}\big)} ,\\ \boldsymbol{\phi}_{013}(\boldsymbol{x}) \coloneq \frac{2(\boldsymbol{x} - \boldsymbol{x}_{V_2})} {\det\big(\boldsymbol{x}_{V_1} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_2}-\boldsymbol{x}_{V_0}\big)} &,\; \boldsymbol{\phi}_{012}(\boldsymbol{x}) \coloneq \frac{2(\boldsymbol{x} - \boldsymbol{x}_{V_3})} {\det\big(\boldsymbol{x}_{V_1} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_3}-\boldsymbol{x}_{V_0}\big)}, \end{aligned}$$ with function $\boldsymbol{\phi}_{ijk}$ associated to the simplicial face $F_{ijk}$ with vertices $V_i$, $V_j$, and $V_k$. 
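The formulas above are easy to mistype, so it can be useful to check them numerically. The following Python sketch (ours, not part of the original development; it relies only on `numpy`, and the vertex coordinates are arbitrary placeholders chosen with positive orientation) transcribes [\[eq:S.DR.0\]](#eq:S.DR.0){reference-type="eqref" reference="eq:S.DR.0"} and [\[eq:S.DR.1\]](#eq:S.DR.1){reference-type="eqref" reference="eq:S.DR.1"} verbatim and verifies the vertex and edge duality relations established in Lemma 12 below; the analogous check of the face-normal moments of [\[eq:S.DR.2\]](#eq:S.DR.2){reference-type="eqref" reference="eq:S.DR.2"} works the same way.

```python
# Sanity check (ours, not from the paper): transcribe the basis formulas
# (S.DR.0)-(S.DR.1) on a generic tetrahedron and verify the vertex/edge
# duality relations of Lemma 12. Vertex coordinates are placeholders.
import numpy as np

x = [np.array([0.0, 0.0, 0.0]),   # x_{V_0}
     np.array([1.1, 0.2, 0.1]),   # x_{V_1}
     np.array([0.3, 1.4, 0.2]),   # x_{V_2}
     np.array([0.2, 0.1, 1.3])]   # x_{V_3}

def det(a, b, c):
    return float(np.linalg.det(np.column_stack([a, b, c])))

assert det(x[1] - x[0], x[2] - x[0], x[3] - x[0]) > 0  # direct (positively oriented) basis

# Hat functions phi_0, ..., phi_3 from (S.DR.0)
phi = [
    lambda p: np.dot(np.cross(x[2] - x[1], x[3] - x[1]), x[3] - p) / det(x[2] - x[1], x[3] - x[1], x[3] - x[0]),
    lambda p: np.dot(np.cross(x[2] - x[0], x[3] - x[0]), p - x[0]) / det(x[2] - x[0], x[3] - x[0], x[1] - x[0]),
    lambda p: np.dot(np.cross(x[1] - x[0], x[3] - x[0]), p - x[0]) / det(x[1] - x[0], x[3] - x[0], x[2] - x[0]),
    lambda p: np.dot(np.cross(x[1] - x[0], x[2] - x[0]), p - x[0]) / det(x[1] - x[0], x[2] - x[0], x[3] - x[0]),
]
for i in range(4):
    for ip in range(4):
        assert abs(phi[i](x[ip]) - (1.0 if i == ip else 0.0)) < 1e-10  # vertex duality

# Edge functions phi_{ij} from (S.DR.1), keyed by the edge they are associated with
phi_E = {
    (2, 3): lambda p: np.cross(x[1] - x[0], p - x[0]) / det(x[1] - x[0], x[2] - x[0], x[3] - x[2]),
    (1, 3): lambda p: np.cross(x[2] - x[0], p - x[0]) / det(x[2] - x[0], x[1] - x[0], x[3] - x[1]),
    (1, 2): lambda p: np.cross(x[3] - x[0], p - x[0]) / det(x[3] - x[0], x[1] - x[0], x[2] - x[1]),
    (0, 3): lambda p: np.cross(x[2] - x[1], p - x[1]) / det(x[2] - x[1], x[0] - x[1], x[3] - x[0]),
    (0, 2): lambda p: np.cross(x[3] - x[1], p - x[1]) / det(x[3] - x[1], x[0] - x[1], x[2] - x[0]),
    (0, 1): lambda p: np.cross(x[3] - x[2], p - x[2]) / det(x[3] - x[2], x[0] - x[2], x[1] - x[0]),
}
for edge, f in phi_E.items():
    for (i, j) in phi_E:
        # The tangential component of f is constant along the segment [x_{V_i}, x_{V_j}],
        # so one midpoint evaluation dotted with the edge vector gives the exact line integral.
        moment = np.dot(f((x[i] + x[j]) / 2), x[j] - x[i])
        assert abs(moment - (1.0 if edge == (i, j) else 0.0)) < 1e-10  # edge duality
print("vertex and edge duality checks passed")
```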
Finally, the basis of $\mathcal{P}_{1}^{{-}}\Lambda^3(S) \cong \mathcal{P}_{}^{0}(S)$ is $$\label{eq:S.DR.3} \phi_{0123}(\boldsymbol{x}) \coloneq \frac{6} {\det\big(\boldsymbol{x}_{V_1} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_3}-\boldsymbol{x}_{V_0}\big)}$$ **Lemma 12** (Dual basis). *The following identities hold:* *$$\begin{aligned} {4} \phi_i(\boldsymbol{x}_{i'}) &= \delta^i_{i'} &\qquad& \forall i, i' \in \lbrace 0, 1, 2, 3 \rbrace \label{eq:W.dual.0} \\ \int_{E_{(ij)'}} \boldsymbol{\phi}_{ij} \cdot \boldsymbol{t}_E &= \delta^{ij}_{(ij)'} &\qquad& \forall (ij), (ij)' \in \lbrace 23, 13, 12, 03, 02, 01 \rbrace \label{eq:W.dual.1}\\ \int_{F_{(ijk)'}} \boldsymbol{\phi}_{ijk} \cdot \boldsymbol{n}_F &= \delta^{ijk}_{(ijk)'} &\qquad& \forall (ijk),(ijk)' \in \lbrace 123, 023, 013, 012 \rbrace,\label{eq:W.dual.2} \end{aligned}$$* *where $\delta_a^b = 1$ if $a = b$, $\delta_a^b = 0$ otherwise.* *Proof.* The proof of [\[eq:W.dual.0\]](#eq:W.dual.0){reference-type="eqref" reference="eq:W.dual.0"} readily follows from the orthogonality of the cross product. Let us check [\[eq:W.dual.1\]](#eq:W.dual.1){reference-type="eqref" reference="eq:W.dual.1"} for $\boldsymbol{\phi}_{23}$, the other being similar. For $(ij) \in \lbrace 23, 13, 12, 03, 02, 01 \rbrace$, we have $$\int_{E_{ij}} \boldsymbol{\phi}_{23} \cdot \boldsymbol{t}_E = \int_{t=0}^1 \frac{ \cancel{t [(\boldsymbol{x}_{V_1} - \boldsymbol{x}_{V_0}) \times (\boldsymbol{x}_{V_j} - \boldsymbol{x}_{V_i}) ] \cdot (\boldsymbol{x}_{V_j} - \boldsymbol{x}_{V_i})} + [(\boldsymbol{x}_{V_1} - \boldsymbol{x}_{V_0}) \times (\boldsymbol{x}_{V_i} - \boldsymbol{x}_{V_0})] \cdot (\boldsymbol{x}_{V_j} - \boldsymbol{x}_{V_i}) } {\det\big(\boldsymbol{x}_{V_1} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_3}-\boldsymbol{x}_{V_2}\big)}.$$ If $i = 0$ or $i = 1$, then $(\boldsymbol{x}_{V_1} - \boldsymbol{x}_{V_0}) \times (\boldsymbol{x}_{V_i} - \boldsymbol{x}_{V_0}) = 0$. If $i = 2$ and $j = 3$, we can use the vector triple product to write $(\boldsymbol{x}_{V_1} - \boldsymbol{x}_{V_0}) \times (\boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_0}) \cdot(\boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_2}) = \det\big(\boldsymbol{x}_{V_1} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_3}-\boldsymbol{x}_{V_2}\big)$, so that $\int_{E_{23}} \boldsymbol{\phi}_{23} \cdot \boldsymbol{t}_E = 1$. Finally, let us check [\[eq:W.dual.2\]](#eq:W.dual.2){reference-type="eqref" reference="eq:W.dual.2"} for $\boldsymbol{\phi}_{123}$. 
Using the change of variable induced by $\boldsymbol{\psi}: (\lambda_j,\lambda_k) \mapsto \lambda_j (\boldsymbol{x}_{V_j} - \boldsymbol{x}_{V_i}) + \lambda_k (\boldsymbol{x}_{V_k} - \boldsymbol{x}_{V_i}) + \boldsymbol{x}_{V_i}$, along with the fact that $\boldsymbol{n}_F = \frac{(\boldsymbol{x}_{V_j} - \boldsymbol{x}_{V_i})\times (\boldsymbol{x}_{V_k} - \boldsymbol{x}_{V_i})}{2 |F_{ijk}|}$, we have $$\begin{aligned} &\int_{F_{ijk}} \boldsymbol{\phi}_{123} \cdot \boldsymbol{n}_F \\ &\quad = \int_{\lambda_j = 0}^1 \int_{\lambda_k = 0}^{1 - \lambda_j} 2\frac{ (\boldsymbol{x}_{V_i} - \boldsymbol{x}_{V_0}) + \lambda_j (\boldsymbol{x}_{V_j} - \boldsymbol{x}_{V_i}) + \lambda_k (\boldsymbol{x}_{V_k} - \boldsymbol{x}_{V_i}) }{ \det\big(\boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_1}, \boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_1}, \boldsymbol{x}_{V_1}-\boldsymbol{x}_{V_0}\big) } \cdot \big[(\boldsymbol{x}_{V_j} - \boldsymbol{x}_{V_i}) \times (\boldsymbol{x}_{V_k} - \boldsymbol{x}_{V_i})\big] \\ &\quad= 2 \int_{\lambda_j = 0}^1 \int_{\lambda_k = 0}^{1 - \lambda_j} \frac{ (\boldsymbol{x}_{V_i} - \boldsymbol{x}_{V_0}) \cdot \big[(\boldsymbol{x}_{V_j} - \boldsymbol{x}_{V_i}) \times (\boldsymbol{x}_{V_k} - \boldsymbol{x}_{V_i})\big] }{ \det\big(\boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_1}, \boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_1}, \boldsymbol{x}_{V_1}-\boldsymbol{x}_{V_0}\big) } \\ &\quad= \frac{ \det\big(\boldsymbol{x}_{V_j} - \boldsymbol{x}_{V_i}, \boldsymbol{x}_{V_k} - \boldsymbol{x}_{V_i},\boldsymbol{x}_{V_i} - \boldsymbol{x}_{V_0}\big) }{ \det\big(\boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_1}, \boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_1}, \boldsymbol{x}_{V_1}-\boldsymbol{x}_{V_0}\big) } = \delta_{123}^{ijk}. \end{aligned}$$ ◻ **Lemma 13** (Norm of the basis function). *The functions given by [\[eq:S.DR.0\]](#eq:S.DR.0){reference-type="eqref" reference="eq:S.DR.0"}-[\[eq:S.DR.3\]](#eq:S.DR.3){reference-type="eqref" reference="eq:S.DR.3"} have the following $L^2$-norms: For all $i \in \lbrace 0, 1, 2, 3 \rbrace$, all $(ij) \in \lbrace 23, 13, 12, 03, 02, 01 \rbrace$, and all $(ijk) \in \lbrace 123, 023, 013, 012 \rbrace$, $$\begin{aligned} \label{eq:W.norm.0} \|\phi_i\|_{L^2(S)}^2 &= \frac{\vert S \vert}{10} \simeq h_S^3, \\ \label{eq:W.norm.1} \|\boldsymbol{\phi}_{ij}\|_{\boldsymbol{L}^2(S;\mathbb{R}^3)}^2 &= \frac{ \vert F_{kli} \vert^2 + \vert F_{klj} \vert^2 + c_{kl}\vert F_{kli} \vert\vert F_{klj} \vert }{ 90 \vert S \vert }\simeq h_S, \\ \label{eq:W.norm.2} \|\boldsymbol{\phi}_{ijk}\|_{\boldsymbol{L}^2(S;\mathbb{R}^3)}^2 &= \frac{\vert E_{li} \vert^2 + \vert E_{lj} \vert^2 + \vert E_{lk} \vert^2 + 2 \vert F_{lij} \vert + 2 \vert F_{lik} \vert + 2 \vert F_{ljk}\vert}{90 \vert S \vert}\simeq h_S^{-1}, \\ \label{eq:W.norm.3} \|\phi_{0123}\|_{L^2(S)}^2 &= \frac{1}{\vert S \vert} \simeq h_S^{-3}, \end{aligned}$$ where, for $\lbrace i,j,k,l \rbrace = \lbrace 0, 1, 2, 3 \rbrace$, $c_{kl} = \boldsymbol{n}_{kli} \cdot \boldsymbol{n}_{klj}$ is the cosine of the dihedral angle associated with the edge $E_{kl}$.* *Proof.* We will only show the computation for one function of each space, the others being similar. In order to integrate over the simplex $S$, we consider the change of variable induced by $\boldsymbol{\psi}: (\lambda_1,\lambda_2,\lambda_3) \mapsto \lambda_1 \boldsymbol{x}_{V_1} + \lambda_2 \boldsymbol{x}_{V_2} + \lambda_3 \boldsymbol{x}_{V_3} + (1 - \lambda_1 - \lambda_2 - \lambda_3) \boldsymbol{x}_{V_0}$. Notice that $\vert \det D \boldsymbol{\psi} \vert = 6 \vert S \vert$. 
Let us first consider the family given by [\[eq:S.DR.0\]](#eq:S.DR.0){reference-type="eqref" reference="eq:S.DR.0"}. Using the orthogonality of the cross product, and the identity $(\boldsymbol{a} \times \boldsymbol{b}) \cdot \boldsymbol{c} = \det(\boldsymbol{a},\boldsymbol{b},\boldsymbol{c})$, we notice that $$\phi_3 (\boldsymbol{\psi}) = \lambda_3 \frac{\big[(\boldsymbol{x}_{V_1} - \boldsymbol{x}_{V_0}) \times (\boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_0}) \big] \cdot (\boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_0})} {\det\big(\boldsymbol{x}_{V_1} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_3}-\boldsymbol{x}_{V_0}\big)} = \lambda_3 .$$ Hence, we have $$\int_S \phi_3^2 = \int_{\lambda_1 = 0}^1 \int_{\lambda_2 = 0}^{1 - \lambda_1} \int_{\lambda_3 = 0}^{1 - \lambda_1 - \lambda_2} (\lambda_3)^2 6 \vert S \vert = \frac{\vert S \vert}{10}.$$ Then, we proceed with the family [\[eq:S.DR.1\]](#eq:S.DR.1){reference-type="eqref" reference="eq:S.DR.1"}. We have $$\begin{aligned} \boldsymbol{\phi}_{23}(\boldsymbol{\psi}) =& \frac{\lambda_2 (\boldsymbol{x}_{V_1} - \boldsymbol{x}_{V_0}) \times (\boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_0}) + \lambda_3 (\boldsymbol{x}_{V_1} - \boldsymbol{x}_{V_0}) \times (\boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_0})} {\det\big(\boldsymbol{x}_{V_1} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_0}, \boldsymbol{x}_{V_3}-\boldsymbol{x}_{V_2}\big)} \\ =& \frac{1}{6 \vert S \vert} \left( \lambda_2 2 \vert F_{012} \vert \boldsymbol{n}_{012} + \lambda_3 2 \vert F_{013} \vert \boldsymbol{n}_{013} \right). \end{aligned}$$ Expanding the product, we obtain $$\begin{aligned} \int_S \boldsymbol{\phi}_{23} \cdot \boldsymbol{\phi}_{23} ={}& \int_{\lambda_1 = 0}^1 \int_{\lambda_2 = 0}^{1 - \lambda_1} \int_{\lambda_3 = 0}^{1 - \lambda_1 - \lambda_2} \left(\frac{1}{3 \vert S \vert}\right)^2 \left( \lambda_2^2 \vert F_{012} \vert^2 + \lambda_3^2 \vert F_{013} \vert^2 + 2 \lambda_2 \lambda_3 c_{01}\vert F_{012} \vert\vert F_{013} \vert \right) 6 \vert S \vert \\ ={}& \frac{2}{3 \vert S \vert} \frac{\vert F_{012} \vert^2 + \vert F_{013} \vert^2 + c_{01}\vert F_{012} \vert\vert F_{013} \vert}{60} \end{aligned}$$ Finally, we prove that [\[eq:W.norm.2\]](#eq:W.norm.2){reference-type="eqref" reference="eq:W.norm.2"} holds for $\boldsymbol{\phi}_{123}$ given by [\[eq:S.DR.2\]](#eq:S.DR.2){reference-type="eqref" reference="eq:S.DR.2"}. 
We have $$\boldsymbol{\phi}_{123}(\boldsymbol{\psi}) = 2 \frac{\lambda_1 (\boldsymbol{x}_{V_1} - \boldsymbol{x}_{V_0})+ \lambda_2 (\boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_0}) + \lambda_3 (\boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_0})}{\det\big(\boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_1}, \boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_1}, \boldsymbol{x}_{V_1}-\boldsymbol{x}_{V_0}\big)} .$$ Noticing that $(\boldsymbol{x}_{V_i} - \boldsymbol{x}_{V_0}) \cdot (\boldsymbol{x}_{V_j} - \boldsymbol{x}_{V_0}) = 2 \vert F_{0ij} \vert$, we have $$\begin{aligned} \int_S \boldsymbol{\phi}_{123}\cdot\boldsymbol{\phi}_{123} ={}& \int_{\lambda_1 = 0}^1 \int_{\lambda_2 = 0}^{1 - \lambda_1} \int_{\lambda_3 = 0}^{1 - \lambda_1 - \lambda_2} \left(\frac{2}{6\vert S \vert}\right)^2 \big(\lambda_1^2 \vert E_{01} \vert^2 + \lambda_2^2 \vert E_{02} \vert^2 + \lambda_3^2 \vert E_{03} \vert^2 \\ &\qquad\qquad + 4 \lambda_1\lambda_2 \vert F_{012} \vert + 4 \lambda_1\lambda_3 \vert F_{013} \vert + 4 \lambda_2\lambda_3 \vert F_{023} \vert\big) 6 \vert S \vert \\ =& \frac{2}{3\vert S \vert} \frac{\vert E_{01} \vert^2 + \vert E_{02} \vert^2 + \vert E_{03} \vert^2 + 2 \vert F_{012} \vert + 2 \vert F_{013} \vert + 2 \vert F_{023}\vert}{60}. \end{aligned}$$ ◻ **Lemma 14** (Link with the differential operators). *For every face $F$ of $S$, we define $\omega_{SF}\in \lbrace-1,1\rbrace$ such that $\omega_{SF}\boldsymbol{n}_F$ is outward pointing. Then, the following identities hold: $$\begin{aligned} {4} \label{eq:W.diff.0} \mathop{\mathrm{\bf grad}}\phi_i &= \sum_{j < i} \boldsymbol{\phi}_{ji} - \sum_{j > i} \boldsymbol{\phi}_{ij} &\qquad& \forall i \in \lbrace 0,1,2,3 \rbrace, \\ \label{eq:W.diff.1} \mathop{\mathrm{\bf curl}}\boldsymbol{\phi}_{ij} &= \omega_{SF_{\hat{k}}} \omega_{F_{\hat{k}}E_{ij}} \boldsymbol{\phi}_{\hat{k}} + \omega_{SF_{\hat{l}}} \omega_{F_{\hat{l}}E_{ij}} \boldsymbol{\phi}_{\hat{l}} &\qquad& \forall (ij) \in \lbrace 23, 13, 12, 03, 02, 01 \rbrace, \\ \label{eq:W.diff.2} \mathop{\mathrm{div}}\boldsymbol{\phi}_{ijk} &= \omega_{SF_{ijk}} \phi_{0123}, &\qquad& \forall (ijk) \in \lbrace 123, 023, 013, 012 \rbrace, \end{aligned}$$ where $\hat{i}$ denotes the complement of $i$ in $\lbrace 0,1,2,3 \rbrace$ and, in [\[eq:W.diff.1\]](#eq:W.diff.1){reference-type="eqref" reference="eq:W.diff.1"}, $(kl)$ is such that $\lbrace i,j,k,l \rbrace = \lbrace 0,1,2,3 \rbrace$.* *Proof.* First, let us prove [\[eq:W.diff.2\]](#eq:W.diff.2){reference-type="eqref" reference="eq:W.diff.2"}. Noticing that $\mathop{\mathrm{div}}\boldsymbol{x} = 3$, it only remains to check that $$\label{eq:P.W.diff.2} \mathop{\mathrm{sgn}}\det\big(\boldsymbol{x}_{V_j} - \boldsymbol{x}_{V_i}, \boldsymbol{x}_{V_k} - \boldsymbol{x}_{V_i}, \boldsymbol{x}_{V_i}-\boldsymbol{x}_{V_l}\big) = \omega_{SF_{ijk}}.$$ This holds, since $(\boldsymbol{x}_{V_j} - \boldsymbol{x}_{V_i}) \times (\boldsymbol{x}_{V_k} - \boldsymbol{x}_{V_i}) = 2 \vert F_{ijk} \vert \boldsymbol{n}_F$, and $\boldsymbol{x}_{V_i}-\boldsymbol{x}_{V_l}$ is always outward pointing. 
Then, to prove [\[eq:W.diff.1\]](#eq:W.diff.1){reference-type="eqref" reference="eq:W.diff.1"}, we use the identity $\mathop{\mathrm{\bf curl}}(\boldsymbol{A}\times \boldsymbol{B}) = \mathop{\mathrm{\bf div}}(\boldsymbol{B} \otimes \boldsymbol{A}^\top - \boldsymbol{A} \otimes \boldsymbol{B}^\top)$ in [\[eq:S.DR.1\]](#eq:S.DR.1){reference-type="eqref" reference="eq:S.DR.1"} to write $$\label{eq:P.W.diff.1.0} \mathop{\mathrm{\bf curl}}\boldsymbol{\phi}_{ij} = \frac{3 (\boldsymbol{x}_{V_l} - \boldsymbol{x}_{V_k}) - (\boldsymbol{x}_{V_l} - \boldsymbol{x}_{V_k})} {\det \big(\boldsymbol{x}_{V_l} - \boldsymbol{x}_{V_k}, \boldsymbol{x}_{V_i} - \boldsymbol{x}_{V_k}, \boldsymbol{x}_{V_j}-\boldsymbol{x}_{V_i}\big)},$$ where $(kl)$ is such that $\lbrace i,j,k,l \rbrace = \lbrace 0,1,2,3 \rbrace$ and $k < l$. Noticing that $\boldsymbol{x}_{V_l} - \boldsymbol{x}_{V_k}$ is inward pointing with respect to the face $F_{\hat{k}}$, and $\boldsymbol{x}_{V_i} - \boldsymbol{x}_{V_k}$ is outward pointing in the plane of $F_{\hat{k}}$, we have $$\label{eq:P.W.diff.1.1} \det \big(\boldsymbol{x}_{V_l} - \boldsymbol{x}_{V_k}, \boldsymbol{x}_{V_i} - \boldsymbol{x}_{V_k}, \boldsymbol{x}_{V_j}-\boldsymbol{x}_{V_i}\big) = 6 \vert S \vert \, \omega_{F_{\hat{k}}E_{ij}} = -6 \vert S \vert \, \omega_{F_{\hat{l}}E_{ij}},$$ where we inserted $\boldsymbol{x}_{V_k} - \boldsymbol{x}_{V_i}$ in the second argument of the determinant to get the second equality. Inserting $\boldsymbol{x} - \boldsymbol{x}$ in the numerator of [\[eq:P.W.diff.1.0\]](#eq:P.W.diff.1.0){reference-type="eqref" reference="eq:P.W.diff.1.0"} (that is, writing $\boldsymbol{x}_{V_l} - \boldsymbol{x}_{V_k} = (\boldsymbol{x} - \boldsymbol{x}_{V_k}) - (\boldsymbol{x} - \boldsymbol{x}_{V_l})$) and replacing the denominator according to [\[eq:P.W.diff.1.1\]](#eq:P.W.diff.1.1){reference-type="eqref" reference="eq:P.W.diff.1.1"}, we obtain $\mathop{\mathrm{\bf curl}}\boldsymbol{\phi}_{ij} = \frac{1}{6 \vert S \vert} \left( \omega_{F_{\hat{k}}E_{ij}} 2 (\boldsymbol{x} - \boldsymbol{x}_{V_k}) + \omega_{F_{\hat{l}}E_{ij}} 2 (\boldsymbol{x} - \boldsymbol{x}_{V_l}) \right)$. We infer [\[eq:W.diff.1\]](#eq:W.diff.1){reference-type="eqref" reference="eq:W.diff.1"} recalling [\[eq:P.W.diff.2\]](#eq:P.W.diff.2){reference-type="eqref" reference="eq:P.W.diff.2"}. Finally, let us prove [\[eq:W.diff.0\]](#eq:W.diff.0){reference-type="eqref" reference="eq:W.diff.0"}. We only prove the equality for $\phi_0$, the other three being similar. By the assumption on the basis, we have $\det \big(\boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_1}, \boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_1}, \boldsymbol{x}_{V_3}-\boldsymbol{x}_{V_0}\big) = 6 \vert S \vert$. Then, a direct computation shows that $$\begin{aligned} \mathop{\mathrm{\bf grad}}\phi_0 =& -\frac{(\boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_1})\times(\boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_1})}{6 \vert S \vert}\\ =& -\frac{(\boldsymbol{x}_{V_2} - \boldsymbol{x})\times(\boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_2}) + (\boldsymbol{x}_{V_2} - \boldsymbol{x})\times(\boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_1}) + (\boldsymbol{x} - \boldsymbol{x}_{V_1})\times(\boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_1}) }{6 \vert S \vert}. 
\end{aligned}$$ Noticing that $$\begin{gathered} \boldsymbol{\phi}_{03}(\boldsymbol{x}) = \frac{(\boldsymbol{x}_{V_2} - \boldsymbol{x}_{V_1}) \times (\boldsymbol{x} - \boldsymbol{x}_{V_1})} {6 \vert S \vert} ,\quad \boldsymbol{\phi}_{02}(\boldsymbol{x}) = -\frac{(\boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_1}) \times (\boldsymbol{x} - \boldsymbol{x}_{V_1})} {6 \vert S \vert} , \\ \boldsymbol{\phi}_{01}(\boldsymbol{x}) = \frac{(\boldsymbol{x}_{V_3} - \boldsymbol{x}_{V_2}) \times (\boldsymbol{x} - \boldsymbol{x}_{V_2})} {6 \vert S \vert}, \end{gathered}$$ we infer that $\mathop{\mathrm{\bf grad}}\phi_0 = - \boldsymbol{\phi}_{01} - \boldsymbol{\phi}_{03} - \boldsymbol{\phi}_{02}$. ◻ # Acknowledgements {#acknowledgements .unnumbered} Daniele Di Pietro acknowledges the partial support of *Agence Nationale de la Recherche* through the grant "HIPOTHEC". Both authors acknowledge the partial support of *Agence Nationale de la Recherche* through the grant ANR-16-IDEX-0006 "RHAMNUS".
arxiv_math
{ "id": "2309.15667", "title": "Uniform Poincar\\'e inequalities for the Discrete de Rham complex on\n general domains", "authors": "Daniele A. Di Pietro and Marien-Lorenzo Hanot", "categories": "math.NA cs.NA", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We define Hitomezashi patterns and loops on a torus and provide several structural results for such loops. For a given pattern, our main theorems give optimal residual information regarding the Hitomezashi loop length, loop count, as well as possible homology classes of such loops. Special attention is paid to toroidal Hitomezashi patterns that are symmetric with respect to the diagonal $x = y$. address: - Department of Mathematics, University of California, Berkeley, Berkeley, CA 94720, USA - Department of Mathematics, Stanford University, Stanford, CA 94305, USA author: - Qiuyu Ren - Shengtong Zhang bibliography: - bib.bib title: Toroidal Hitomezashi Patterns --- # Introduction Hitomezashi, a type of Japanese style embroidery, has recently attracted attention due to its interesting mathematical properties. The mathematics of Hitomezashi was first studied by Pete back in 2004 [@Pete2008] under a different name. After Numberphile popularized the mathematical definition of Hitomezashi patterns in their YouTube video [@numberphile21], Defant, Kravitz and Tenner [@defant2022loops; @defant2023] discovered many interesting mathematical properties for loops in Hitomezashi patterns ("Hitomezashi loops\"). For example, the length of any Hitomezashi loops is congruent to $4$ modulo $8$, and the area enclosed is congruent to $1$ modulo $4$. Thus far, the Hitomezashi patterns studied in [@defant2022loops; @defant2023; @numberphile21; @Pete2008] are on the planar grid $\mathbb Z^2$. In this paper, we suggest a natural generalization of Hitomezashi patterns on a toroidal grid $\mathbb Z/ M\mathbb Z\times \mathbb Z/ N \mathbb Z$. We show that such patterns and loops enjoy some non-trivial combinatorial properties. Interestingly, our study of these properties explores a novel connection between Hitomezashi and knot theory. Following [@defant2022loops], we recall the definition of Hitomezashi patterns on $\mathbb Z^2$. **Definition 1**. Consider the graph $\mathsf{Cloth}_{\mathbb Z}$ on $\mathbb Z\times \mathbb Z$ with $(i, j)$ adjacent to $(i, j \pm 1)$ and $(i \pm 1, j)$. A **planar Hitomezashi pattern** is a subgraph of $\mathsf{Cloth}_{\mathbb Z}$ defined by two infinite sequences $\epsilon, \eta \in \{0, 1\}^{\mathbb Z}$, with edge set $$\{\{(i, j), (i + 1, j): i \equiv \eta_j \bmod{2}\} \bigcup \{\{(i, j), (i, j + 1)\}: j \equiv \epsilon_i \bmod{2}\}.$$ A **planar Hitomezashi loop** is a cycle in a planar Hitomezashi pattern. A **planar Hitomezashi path** is a path in a planar Hitomezashi pattern. This definition generalizes to $\mathbb Z/ M\mathbb Z\times \mathbb Z/ N \mathbb Z$ when $M$ and $N$ are both even. If either $M$ or $N$ is odd, the definition does not work since the parity of the coordinates is undefined. Instead, we use an approach based on Lemma 2.3 of [@defant2022loops], which states that if we orient a Hitomezashi path $P$, then all edges of $P$ on the same horizontal / vertical line point in the same direction. This motivates the following definition. **Definition 2**. Let $M, N\geq 3$ be integers. Consider the graph $\mathsf{Cloth}_{M, N}$ with vertex set $\mathbb Z/ M\mathbb Z\times \mathbb Z/ N\mathbb Z$, where a vertex $(i, j)$ is adjacent to $(i \pm 1, j)$ and $(i, j \pm 1)$. Let $x \in \{-1, 1\}^N, y \in \{-1, 1\}^M$ be binary strings. 
The **toroidal Hitomezashi pattern** given by $x, y$, denoted $\mathsf{Cloth}_{M, N}(x, y)$, is defined as the following orientation of $\mathsf{Cloth}_{M, N}$: an edge $\{(i, j), (i + 1, j)\}$ is oriented $(i, j) \to (i + 1, j)$ if $x_j = 1$, and oriented $(i + 1, j) \to (i, j)$ if $x_j = -1$. Symmetrically, an edge $\{(i, j), (i, j + 1)\}$ is oriented $(i, j) \to (i, j + 1)$ if $y_i = 1$, and oriented $(i, j + 1) \to (i, j)$ if $y_i = -1$. A **toroidal Hitomezashi loop** is a circuit in the oriented graph $\mathsf{Cloth}_{M, N}(x,y)$ whose edges alternate between vertical and horizontal edges.[^1] A toroidal Hitomezashi pattern is **symmetric** if $M = N$ and $x = y$. ![The symmetric toroidal Hitomezashi pattern $\mathsf{Cloth}_{8,8}(x,x)$ for $x=---+++++$, with Hitomezashi loops distinguished by different colors. The red and orange loops are nontrivial with homology class $(1,1)$, while the other six loops are trivial. The black dotted square represents a fundamental domain for the torus. ](Figures/00011111.png){#fig:00011111 width="45%"} Let us explain the relationship between [Definition 1](#defn:hitomezashi-original){reference-type="ref" reference="defn:hitomezashi-original"} and [Definition 2](#defn:hitomezashi-toroidal){reference-type="ref" reference="defn:hitomezashi-toroidal"} when $M, N$ are both even. Let $C$ be a toroidal Hitomezashi pattern as in [Definition 2](#defn:hitomezashi-toroidal){reference-type="ref" reference="defn:hitomezashi-toroidal"}. Let $A$ be the union of Hitomezashi loops in $C$ whose coordinates modulo $2$ follow the pattern $(0, 0) \to (0, 1) \to (1, 1) \to (1, 0) \to (0, 0) \to \cdots$, and let $B$ be the union of Hitomezashi loops whose coordinates modulo $2$ follow the pattern $(0, 0) \to (1, 0) \to (1, 1) \to (0, 1) \to (0, 0) \to \cdots$. When we forget the orientation, $A, B$ are Hitomezashi patterns in the sense of [Definition 1](#defn:hitomezashi-original){reference-type="ref" reference="defn:hitomezashi-original"}, and they are duals of each other in the sense of [@defant2022loops Section 7.1]. Thus, the toroidal Hitomezashi pattern in [Definition 2](#defn:hitomezashi-toroidal){reference-type="ref" reference="defn:hitomezashi-toroidal"} decomposes into a Hitomezashi pattern in the sense of [Definition 1](#defn:hitomezashi-original){reference-type="ref" reference="defn:hitomezashi-original"} and its dual. See [3](#fig:decomposition){reference-type="ref" reference="fig:decomposition"} for an illustration. ![Decomposition of [1](#fig:00011111){reference-type="ref" reference="fig:00011111"} into two Hitomezashi patterns as in [Definition 1](#defn:hitomezashi-original){reference-type="ref" reference="defn:hitomezashi-original"}.](Figures/00011111A.png){#fig:decomposition width="90%"} ![Decomposition of [1](#fig:00011111){reference-type="ref" reference="fig:00011111"} into two Hitomezashi patterns as in [Definition 1](#defn:hitomezashi-original){reference-type="ref" reference="defn:hitomezashi-original"}.](Figures/00011111B.png){#fig:decomposition width="90%"} Perhaps the most interesting difference between planar and toroidal Hitomezashi loops is that some toroidal Hitomezashi loops are not contractible. On the torus, the isotopy class of a noncontractible simple closed curve is classified by its homology class. In our setup, the homology class of a Hitomezashi loop can be defined combinatorially. **Definition 3** (Combinatorial Definition of homology). 
Given a toroidal Hitomezashi loop on $\mathsf{Cloth}_{M, N}(x, y)$, define its $x$-shift $\Delta x$ as the number of edges $(i, j) \to (i + 1, j)$ minus the number of edges $(i + 1, j) \to (i, j)$ on the loop, and its $y$-shift $\Delta y$ analogously. The **homology class** of a loop is defined as $(\Delta x / M, \Delta y / N)$. We call a loop **trivial** if its homology class is $(0,0)$, and **nontrivial** otherwise. We shall see that trivial toroidal Hitomezashi loops are essentially the same as planar Hitomezashi loops, while nontrivial toroidal Hitomezashi loops enjoy somewhat different properties. We begin with some topological observations, to be used throughout the rest of the paper. **Observation 4**.   [\[obs\]]{#obs label="obs"} 1. In a toroidal Hitomezashi pattern, each vertex has exactly one vertical in-edge and one horizontal in-edge. Thus, the pattern is partitioned into toroidal Hitomezashi loops, and a toroidal Hitomezashi loop never crosses itself transversely. 2. Since every toroidal Hitomezashi loop has no transversal self-intersection, and every two different toroidal Hitomezashi loops are disjoint, it is a topological fact that the homology class of any nontrivial toroidal Hitomezashi loop is $\pm(u,v)$ for some coprime integers $u,v$ independent of the chosen loop. 3. Consider the case when the pattern is symmetric. In this case, any toroidal Hitomezashi loop cannot transversely cross the diagonal $\{(a,a)\colon a\in\mathbb R/N\mathbb Z\}$ in the torus. Together with (2), we see that its homology class is either $(0,0)$ or $\pm(1,1)$. In the rest of the paper, we establish some properties concerning toroidal Hitomezashi patterns and loops. Let $k(x)$ denote the difference between the number of $1$'s and the number of $(-1)$'s in $x$. Let $\gcd(x, y)$ denote $\gcd(\left\vert k(x) \right \vert, \left\vert k(y) \right \vert)$. The first property states that, when $k(x), k(y)$ are both non-zero, all nontrivial Hitomezashi loops have the same homology class, which can be expressed in terms of $k(x)$ and $k(y)$. **Theorem 1** (Homology class). *  [\[prop:homology\]]{#prop:homology label="prop:homology"}* 1. *In a symmetric toroidal Hitomezashi pattern $\mathsf{Cloth}_{N, N}(x, x)$, every nontrivial toroidal Hitomezashi loop has homology class $(1,1)$ if $k(x) > 0$, and homology class $(-1,-1)$ if $k(x) < 0$. No nontrivial toroidal Hitomezashi loop exists if $k(x) = 0$.* 2. *In a general toroidal Hitomezashi pattern $\mathsf{Cloth}_{M, N}(x, y)$, the possible homology classes of nontrivial toroidal Hitomezashi loops are*

|                 | *$k(x) \neq 0$*                             | *$k(x) = 0$*                   |
|-----------------|---------------------------------------------|--------------------------------|
| *$k(y) \neq 0$* | *$(k(x) / \gcd(x, y), k(y) / \gcd(x, y))$*  | *$(0, \pm 1)$*                 |
| *$k(y) = 0$*    | *$(\pm 1, 0)$*                              | *$(0, \pm 1)$ or $(\pm 1, 0)$* |

This property has two interesting consequences. First, when $k(x)$ and $k(y)$ are both nonzero, all the nontrivial Hitomezashi loops must travel in the same direction.[^2] Second, flipping a single bit of $x$ or $y$ could have a tremendous effect on the picture of toroidal Hitomezashi loops. See [5](#fig:flip_bit){reference-type="ref" reference="fig:flip_bit"} for an illustration. ![Comparison between $\mathsf{Cloth}_{7,7}(x, x)$ and $\mathsf{Cloth}_{7,7}(x, x')$, where $x = --+++++$ and $x' = ---++++$. 
On the right, all green edges form a single toroidal Hitomezashi loop of homology class $(3, 1)$.](Figures/0011111.png){#fig:flip_bit width="90%"} ![Comparison between $\mathsf{Cloth}_{7,7}(x, x)$ and $\mathsf{Cloth}_{7,7}(x, x')$, where $x = --+++++$ and $x' = ---++++$. On the right, all green edges form a single toroidal Hitomezashi loop of homology class $(3, 1)$.](Figures/0011111_0001111.png){#fig:flip_bit width="90%"} The second property is analogous to one of the main theorems in [@defant2022loops]. It determines the length of toroidal Hitomezashi loops modulo $8$. **Theorem 2** (Loop length). *  [\[prop:length\]]{#prop:length label="prop:length"}* 1. *Every trivial toroidal Hitomezashi loop has length $4$ modulo $8$.* 2. *In a symmetric pattern $\mathsf{Cloth}_{N, N}(x, x)$, every nontrivial toroidal Hitomezashi loop has length $2\left\vert k(x) \right \vert$ modulo 8.* 3. *In a general pattern $\mathsf{Cloth}_{M, N}(x, y)$, if a nontrivial toroidal Hitomezashi loop has homology class $(\lambda, \mu)$, then its length is congruent to $2(\mu N + \lambda M - \mu k(x))$ modulo $8$.[^3]* The last property is a counting of toroidal Hitomezashi loops on $\mathsf{Cloth}_{M, N}(x, y)$. **Theorem 3** (Loop count). * * 1. *In $\mathsf{Cloth}_{M, N}(x, y)$, if $k(x), k(y)$ are both non-zero, then the number of nontrivial toroidal Hitomezashi loops is $\gcd(x, y)$.* 2. *The number of trivial toroidal Hitomezashi loops in any toroidal Hitomezashi pattern is even.* 3. *In a symmetric pattern $\mathsf{Cloth}_{N, N}(x, x)$, the number of all toroidal Hitomezashi loops is congruent to $N$ modulo $4$.* We will use different approaches for each result. Let us summarize these approaches here. [\[prop:homology\]](#prop:homology){reference-type="ref" reference="prop:homology"} is proven using the idea of "heights\" introduced in [@Pete2008]. We lift the toroidal Hitomezashi loops to infinite planar Hitomezashi paths, and use height to analyze two such paths that are adjacent. [\[prop:length\]](#prop:length){reference-type="ref" reference="prop:length"} is proven using the tool of "excursions\" as in [@shortproof23]. We first establish a connection between toroidal Hitomezashi loops and two types of planar Hitomezashi excursions, then use the method of induction in [@shortproof23] to determine the length of these excursions. [Theorem 3](#prop:loop-count){reference-type="ref" reference="prop:loop-count"}(1) is a corollary of [\[prop:homology\]](#prop:homology){reference-type="ref" reference="prop:homology"}. [Theorem 3](#prop:loop-count){reference-type="ref" reference="prop:loop-count"}(2) can either be derived from [\[prop:length\]](#prop:length){reference-type="ref" reference="prop:length"} by considering the sum of lengths of all loops, or derived from the fact that the number of clockwise trivial Hitomezashi loops is equal to the number of counterclockwise trivial Hitomezashi loops. The proof of [Theorem 3](#prop:loop-count){reference-type="ref" reference="prop:loop-count"}(3) is more involved. It is based on a novel connection between symmetric Hitomezashi patterns $\mathsf{Cloth}_{N, N}(x, x)$ and knots. By applying some "triple-point moves," motivated by the Reidemeister III move in the knot theory, we can show that [Theorem 3](#prop:loop-count){reference-type="ref" reference="prop:loop-count"}(3) is equivalent for $\mathsf{Cloth}_{N, N}(x, x)$ and $\mathsf{Cloth}_{N, N}(x', x')$, where $x'$ is $x$ with two adjacent bits flipped. 
This allows us to reduce $x$ to consist of consecutive $1$'s followed by consecutive $(-1)$'s, where the property can be easily verified. *Remark 1*. One can check via examples that the moduli in the above theorems are optimal. For example, the number of loops in $\mathsf{Cloth}_{N, N}(x, x)$ modulo any $K$ with $K\nmid4$ cannot be written as a function of $k(x)$ and $N$. The rest of the paper is structured as follows. Section [2](#sec:homology){reference-type="ref" reference="sec:homology"}, [3](#sec:length){reference-type="ref" reference="sec:length"}, [4](#sec:knot-theory){reference-type="ref" reference="sec:knot-theory"} are devoted to the proof of [\[prop:homology\]](#prop:homology){reference-type="ref" reference="prop:homology"}, [\[prop:length\]](#prop:length){reference-type="ref" reference="prop:length"}, [Theorem 3](#prop:loop-count){reference-type="ref" reference="prop:loop-count"}, respectively. In Section [5](#sec:problems){reference-type="ref" reference="sec:problems"}, we present some futher directions of research. # Acknowledgement {#acknowledgement .unnumbered} Shengtong Zhang is supported by the Craig Franklin Fellowship in Mathematics at Stanford University. # Homology class and Height {#sec:homology} ## Height For $x, y \in \{-1, 1\}^{\mathbb Z}$, let $\mathsf{Cloth}_{\mathbb Z}(x, y)$ denote the orientation of $\mathsf{Cloth}_{\mathbb Z}$ defined verbatim as [Definition 2](#defn:hitomezashi-toroidal){reference-type="ref" reference="defn:hitomezashi-toroidal"}. Following [@Pete2008], we define the height[^4] for regions, edges, and Hitomezashi paths of $\mathsf{Cloth}_\mathbb Z(x,y)$. Here a region means a unit square region on the plane divided by the graph $\mathsf{Cloth}_\mathbb Z$. For every oriented edge in $\mathsf{Cloth}_\mathbb Z(x,y)$, we demand the region on its left to be one unit higher than the region on its right, and the edge to be of the average height of these two regions. This uniquely defines the height on regions and edges up to an additive constant. More concretely, one may define the height of the region containing $(i+1/2,j+1/2)$ to be $\sum_{k=0}^jx_k+\sum_{k=0}^iy_k$, where $\sum_{k=0}^r$ is interpreted as $-\sum_{k=r}^{-1}$ if $r<0$. If two regions are connected by a path on the plane that intersects $\mathsf{Cloth}_\mathbb Z$ only discretely and does not transversely intersect any Hitomezashi path, then they have the same height. In particular, by perturbing a Hitomezashi path to its left (resp. right) except at vertices, we see all regions on the immediate left (resp. right) of a Hitomezashi path have the same height, an observation first made in [@Pete2008 Page 5-6]. Therefore all edges on a Hitomezashi path have the same height, which is defined to be the height of this Hitomezashi path. Finally, for a toroidal Hitomezashi pattern $\mathsf{Cloth}_{M, N}(x,y)$ with $k(x)=k(y)=0$, we may define the height of its regions, edges, and toroidal Hitomezashi loops exactly as above. ## Proof of [\[prop:homology\]](#prop:homology){reference-type="ref" reference="prop:homology"} \(1\) is a consequence of (2) together with [\[obs\]](#obs){reference-type="ref" reference="obs"}(3). Thus we only prove (2). The sum of homology classes of all Hitomezashi loops in $\mathsf{Cloth}_{M, N}(x,y)$ equals to $(k(x),k(y))$. Together with [\[obs\]](#obs){reference-type="ref" reference="obs"}(2), we see if $(k(x),k(y))\ne(0,0)$, the homology class of any nontrivial toroidal Hitomezashi loop must be $\pm(k(x)/\gcd(x,y),k(y)/\gcd(x,y))$. 
This proves (2) for the cases $k(x)=0$, $k(y)\ne0$ and $k(x)\ne0$, $k(y)=0$. Below we deal with the other two cases. **Case 1**: $k(x),k(y)\ne0$. By symmetry we may assume $k(x)>0$. Assume for the sake of contradiction that there exists a toroidal Hitomezashi loop $\gamma$ with homology class $-(k(x)/\gcd(x,y),k(y)/\gcd(x,y)).$ Necessarily there also exists a loop $\gamma'$ with homology class $(k(x)/\gcd(x,y),$ $k(y)/\gcd(x,y))$. The loops $\gamma,\gamma'$ cobound two annular regions on the torus. We can choose $\gamma,\gamma'$ so that $\gamma'$ is immediately below $\gamma$ in the sense that the annular region below $\gamma$ and above $\gamma'$ contains no nontrivial toroidal Hitomezashi loops (we may talk about above/below thanks to $k(x)\ne0$). Pullback the toroidal Hitomezashi pattern $\mathsf{Cloth}_{M,N}(x,y)$ to the planar graph $\mathsf{Cloth}_\mathbb Z$, i.e. define infinite strings $\tilde x,\tilde y\in\{-1,1\}^\mathbb Z$ by repeating $x,y$ periodically, and consider the planar Hitomezashi pattern $\mathsf{Cloth}_\mathbb Z(\tilde x,\tilde y)$, which comes with a covering map onto $\mathsf{Cloth}_{M,N}(x,y)$. Let $\tilde\gamma$, $\tilde\gamma'$ be lifts of $\gamma$, $\gamma'$ with $\tilde\gamma'$ immediately below $\tilde\gamma$. Then there is a planar path between them that does not transversely intersect any Hitomezashi path. Therefore, the regions on the left of $\tilde\gamma$ has the same height with the regions on the left of $\tilde\gamma'$. Hence $\tilde\gamma$ and $\tilde\gamma'$ have the same height. Since $k(x)>0$, by cyclically permuting $x$ we may assume all nonempty partial sums of $x$ are positive, i.e. $\sum_{k=0}^j\tilde x_k>0$ for all $j\ge0$. Since the second component of the homology class of $\gamma'$ is nonzero, $\tilde\gamma'$ contains an edge $e_1$ of the form $(i,0)\to(i+1,0)$. Since the first component of the homology class of $\gamma$ is negative and $\tilde\gamma$ is above $\tilde\gamma'$, $\tilde\gamma$ contains an edge $e_2$ of the form $(i+1,j)\to(i,j)$ with $j>0$. Now, the region on the right to $e_2$ and the region on the right to $e_1$ have the same height because $\tilde\gamma,\tilde\gamma'$ have the same height. On the other hand, their height difference equals to $\sum_{k=0}^j\tilde x_k>0$. This is a contradiction. **Case 2**: $k(x)=k(y)=0$. By cyclically permuting $x,y$, we may assume all partial sums of $x$ are nonpositive and all partial sums of $y$ are nonnegative. Then the height of $(-1/2,-1/2)$ is maximal among all $(-1/2,j-1/2)$ and minimal among all $(i-1/2,-1/2)$. Consequently, all edges of the form $\{(-1,j),(0,j)\}$ have strictly lower height than all edges of the form $\{(i,-1),(i,0)\}$. It follows that for a fixed toroidal Hitomezashi loop, it either passes no edge of the former type, or passes no edge of the latter type. This forbids the existence of a Hitomezashi loop with homology class $(u,v)$, $uv\ne0$.0◻ # Loop length and Excursion {#sec:length} In this section, we prove Theorem [\[prop:length\]](#prop:length){reference-type="ref" reference="prop:length"} based on the excursion arguments in [@shortproof23]. ## Excursions Let $\gamma$ be a nontrivial Hitomezashi loop on $\mathsf{Cloth}_{M, N}(x, y)$, and let $\widetilde{\gamma}$ be its lift to $\mathsf{Cloth}_{\mathbb Z}(\Tilde{x}, \Tilde{y})$, where $\Tilde{x}, \Tilde{y} \in \{\pm 1\}^{\mathbb Z}$ are periodic extensions of $x, y$. Note that $\widetilde{\gamma}$ is an infinite planar Hitomezashi path. 
The goal of this subsection is to introduce "Hitomezashi excursions\", and chop $\widetilde{\gamma}$ up into these excursions.[^5] **Definition 5**. Let $a,b$ be integers. An **(Hitomezashi) $a$-excursion** is a planar Hitomezashi path with at least three vertices, start vertex $(a-1, i)$ for some $i \in \mathbb Z$, end vertex $(a-1, j)$ for some $j > i$, and all other vertices lying in the half plane $\mathcal H_a = \{(x, y): x \geq a\}$. An **(Hitomezashi) $(a, b)$-excursion** is a planar Hitomezashi path with start vertex $(p, b - 1)$ for some $p \geq a$, end vertex $(a - 1, q)$ for some $q \geq b$, and all other vertices lying in the quadrant $\mathcal H_{a, b} = \{(x, y): x \geq a, y \geq b\}$. Aside from the restriction $j > i$, our definition of $a$-excursion is the same as in [@shortproof23]. Note that the first and last edge in an $a$-excursion must be horizontal. In an $(a, b)$-excursion, the first edge is vertical and the last edge is horizontal. The next lemma explains how $\widetilde{\gamma}$ can be chopped up into Hitomezashi excursions. **Lemma 6**. *Let $L$ be the length of $\gamma$. Let $(\lambda, \mu)$ be the homology class of $\gamma$. Denote the vertices of $\widetilde{\gamma}$ by $\cdots, \widetilde{v_{-1}}, \widetilde{v_0}, \widetilde{v_1}, \cdots$.* 1. *If $\lambda < 0, \mu > 0$, there exists $i < 0, j > 0$ such that for any $k, \ell \geq 0$, the subpath of $\widetilde{\gamma}$ formed by $\{\widetilde{v_t}: t \in [i - kL, j + \ell L]\}$ is an $(a, b)$-excursion for some $a, b \in \mathbb Z$.* 2. *If $\lambda = 0, \mu > 0$, there exists an $a \in \mathbb Z$ such that $\widetilde{\gamma}$ can be partitioned into a disjoint union of $a$-excursions and upward-pointing edges lying on the line $x = a - 1$.* *Proof.* Let the coordinates of $\widetilde{v_i}$ be $(a_i, b_i)$. \(1\) Assume $\lambda < 0, \mu > 0$. As $b_{m + L} = b_m + \mu N$ for any $m \in \mathbb Z$, the quantity $\overline{b}_m = \inf_{n \geq m} b_n$ is finite and satisfies $\lim_{a \to -\infty} \overline{b}_a = -\infty$. Thus, there exists some $i < 0$ such that $\overline{b}_i < \overline{b}_{i + 1}$. For this $i$, we have $b_i < \inf_{n \geq i + 1} b_n$. Similarly, there exists some $j > 0$ such that $a_j < \inf_{n \leq j - 1} a_n$. Take $a = a_j + 1$ and $b = b_i + 1$. By definition, for any $i + 1 \leq n \leq j - 1$, we have $a_n > a_j = a - 1$ and $b_n > b_i = b - 1$, so $\widetilde{v_i}$ lies in $\mathcal H_{a, b}$. Furthermore, $b_i > b_j = b - 1$ and $a_j > a_i = a - 1$. So $\{v_t: t \in [i, j]\}$ form an $(a, b)$-excursion. For any $k, \ell \geq 0$, we have $b_{m - k L} = b_m - k \mu N$ for any $m \in \mathbb Z$, so $b_{i - kL} < \inf_{n \geq i - kL + 1} b_n$. Similarly, we have $a_{j + \ell L} < \inf_{n \leq j + \ell L - 1} a_n$. So by the same reasoning as the previous paragraph $\{v_t: t \in [i - kL, j + \ell L]\}$ is an $(a, b)$-excursion for $a = a_{j + \ell L } + 1, b = b_{i - kL} + 1$. \(2\) Assume $\lambda = 0, \mu > 0$. As $a_{m + L} = a_m + \mu N = a_m$, the quantity $a = 1 + \inf_{m \in \mathbb Z} a_m$ is well-defined. Removing all edges on $x = a - 1$ partitions $\widetilde{\gamma}$ into finite Hitomezashi paths. We first show that the edges of $\widetilde{\gamma}$ lying on the line $x = a - 1$ point upwards. Let $(\widetilde{v_{i}}, \widetilde{v_{i + 1}})$ be such an edge. 
The subpaths $\{\widetilde{v_{n}}: i - L + 1\leq n \leq i\}$ and $\{\widetilde{v_{n}}: i + 1\leq n \leq i + L\}$ are disjoint paths in $\mathcal H_{a - 1}$, with endpoints $\{(a - 1, b_{i + 1} - \mu N), (a - 1, b_{i})\}$ and $\{(a - 1, b_{i + 1}), (a - 1, b_{i} + \mu N)\}$ respectively. If $b_i > b_{i + 1}$, then $$b_{i + 1} - \mu N < b_{i + 1} < b_i < b_{i} + \mu N$$ contradicting Lemma 2.3 of [@shortproof23]. Thus the edge $(\widetilde{v_{i}}, \widetilde{v_{i + 1}})$ must point upward. Let $(\widetilde{v_{i}}, \widetilde{v_{i + 1}})$ and $(\widetilde{v_{j}}, \widetilde{v_{j + 1}})$ be consecutive edges of $\widetilde{\gamma}$ lying on $x = a - 1$. The subpaths $\{\widetilde{v_{n}}: i + 1 \leq n \leq j\}$ and $\{\widetilde{v_{n}}: j + 1\leq n \leq i + L + 1\}$ are disjoint paths in $\mathcal H_{a - 1}$, with endpoints $\{(a - 1, b_{i + 1}), (a - 1, b_{j})\}$ and $\{(a - 1, b_{j + 1}), (a - 1, b_{i + 1} + \mu N)\}$. If $b_{j + 1} < b_{i + 1}$, then we have $$b_{j} < b_{j + 1} < b_{i + 1} < b_{i + 1} + \mu N$$ contradicting Lemma 2.3 of [@shortproof23]. Thus $b_{j + 1} > b_{i + 1}$, which implies $b_j > b_{i + 1}$ since $b_j \neq b_{i + 1}$. Therefore, the segment of $\widetilde{\gamma}$ between consecutive edges on $x = a - 1$ form an $a$-excursion. ◻ ## Length of excursions To prove our claim on the length of toroidal Hitomezashi loops, we prove congruences for the lengths of excursions. Two claims we need have already been established. **Lemma 7** ([@defant2022loops]). *The length of a planar Hitomezashi loop is congruent to $4$ modulo $8$.* **Lemma 8** ([@shortproof23]). *The length of an $a$-excursion from $(a - 1, i)$ to $(a - 1, j)$ is congruent to $2(j - i) + 1$ modulo $8$.* Here we prove an additional result on the length of $(a,b)$-excursions. **Lemma 9**. *On $\mathsf{Cloth}_{\mathbb Z}(x, y)$, let $\mathcal C$ be an $(a, b)$-excursion from $(p, b - 1)$ to $(a - 1, q)$. Let $k(\mathcal C)$ denote $$k(\mathcal C) = -\sum_{k = b}^q x_k.$$ Then the length of $\mathcal C$ is congruent to $2(q - b - p + a + k(\mathcal C))$ modulo $8$.* *Proof.* We induct on the length of $\mathcal C$. When the length of $\mathcal C$ is $2$, the result is trivial. For the induction step, we proceed analogous to [@shortproof23]. Let $(p_1, b), (p_2, b), \cdots, (p_t, b)$ be the starting vertex of every edge of $\mathcal C$ that lies on the horizontal line $y = b$, in the order they appear in $\mathcal C$. Clearly, $p_1 = p$. Using a similar argument as in [@shortproof23], all $p_i$ have the same parity. Furthermore, one of two cases must hold. **Case 1**: $x_b = -1$, and $p_1 > p_2 > \cdots > p_t$. Then we can partition $\mathcal C$ into the following subpaths: the edge $(p, b - 1) \to (p, b)$, the subpaths $\mathcal C_i$ from $(p_i - 1, b)$ to $(p_{i + 1}, b)$ for each $1 \leq i \leq t - 1$, the edges $(p_i, b) \to (p_i - 1, b)$ for each $1 \leq i \leq t$, and the subpath $\mathcal C'$ from $(p_t - 1, b)$ to $(a - 1, q)$. 
Each $\mathcal C_i$ is a rotated $b$-excursion, and $\mathcal C'$ is an $(a, b + 1)$-excursion, so we can apply [Lemma 8](#thm:half-excursion-length){reference-type="ref" reference="thm:half-excursion-length"} and the induction hypothesis to conclude that $$\begin{aligned} \left\vert \mathcal C \right \vert &= 2 + \sum_{i = 1}^{t - 1} (\left\vert \mathcal C_i \right \vert + 1) + \left\vert \mathcal C' \right \vert \\ &\equiv 2 + \sum_{i = 1}^{t - 1} 2(p_i - p_{i + 1}) + 2(q - (b + 1) - (p_t - 1) + a + k(\mathcal C')) \\ &\equiv 2 + 2(p - p_t) + 2(q - (b + 1) - (p_t - 1) + a + (k(\mathcal C) - 1)) \\ &\equiv 2(q - b - p + a + k(\mathcal C)) \pmod{8}.\end{aligned}$$ **Case 2**: $x_b = 1$, and $p_1 < p_2 < \cdots < p_t$. Then we can partition $\mathcal C$ into the following subpaths: the edge $(p, b - 1) \to (p, b)$, the segments $\mathcal C_i$ from $(p_i + 1, b)$ to $(p_{i + 1}, b)$ for each $1 \leq i \leq t - 1$, the edges $(p_i, b) \to (p_i + 1, b)$ for each $1 \leq i \leq t$, and the segments $\mathcal C'$ from $(p_t + 1, b)$ to $(a - 1, q)$. Each $\mathcal C_i$ is a rotated $b$-excursion, and $\mathcal C'$ is an $(a, b + 1)$-excursion, so we can apply [Lemma 8](#thm:half-excursion-length){reference-type="ref" reference="thm:half-excursion-length"} and the induction hypothesis to conclude that $$\begin{aligned} \left\vert \mathcal C \right \vert &= 2 + \sum_{i = 1}^{t - 1} (\left\vert \mathcal C_i \right \vert + 1) + \left\vert \mathcal C' \right \vert \\ &\equiv 2 + \sum_{i = 1}^{t - 1} 2(p_{i + 1} - p_i) + 2(q - (b + 1) - (p_t + 1) + a + k(\mathcal C')) \\ &\equiv 2 + 2(p_t - p) + 2(q - (b + 1) - (p_t + 1) + a + (k(\mathcal C) + 1)) \\ &\equiv 2(q - b - p + a + k(\mathcal C)) \pmod{8}.\end{aligned}$$ In either cases the induction hypothesis holds. So the induction is complete. ◻ We can finally prove [\[prop:length\]](#prop:length){reference-type="ref" reference="prop:length"}. *Proof of [\[prop:length\]](#prop:length){reference-type="ref" reference="prop:length"}.* (1) If $\gamma$ is a trivial toroidal Hitomezashi loop, then its lifting $\widetilde{\gamma}$ is a planar Hitomezashi loop of the same length, which is congruent to $4$ modulo $8$ by [Lemma 7](#thm:loop-length){reference-type="ref" reference="thm:loop-length"}. \(2\) This follows from (3) and [\[prop:homology\]](#prop:homology){reference-type="ref" reference="prop:homology"}(1). \(3\) First note that if $\gamma$ is a nontrivial toroidal Hitomezashi loop in $\mathsf{Cloth}_{M, N}(x, y)$, then the reflection $\gamma^{\circ}$ about the $x$-axis is a nontrivial toroidal Hitomezashi loop in $\mathsf{Cloth}_{M, N}(x^{\circ}, -y)$, where $x^{\circ}_k = x_{-k}.$ If $\gamma$ has homology class $(\lambda, \mu)$, then $\gamma^{\circ}$ has homology class $(\lambda, -\mu)$. As $k(x)$ and $N$ have the same parity, we have $$2(\mu N + \lambda M) - 2 \mu k(x) \equiv 2(-\mu N + \lambda M) + 2 \mu k(x) \pmod{8}.$$ Hence, if the result holds for $\gamma$, it holds for its reflection about the $x$-axis. By the symmetry observed earlier, the same is true for reflection about the $y$-axis and the diagonal $y = x$. Hence, it suffices to prove the result in the cases $\lambda < 0, \mu > 0$ and $\lambda = 0, \mu > 0.$ First assume $\lambda < 0, \mu > 0$. 
By [Lemma 6](#lem:excursion-in-lifting){reference-type="ref" reference="lem:excursion-in-lifting"}, there exists $i < 0, j > 0$ such that for any $k, \ell \geq 0$, the subpath of $\widetilde{\gamma}$ formed by $\{\widetilde{v_{t}}: t \in [i - kL, j + \ell L]\}$ is an $(a, b)$-excursion for some $a, b \in \mathbb Z$. Let $\widetilde{v_{i}} = (p, b - 1)$ and $\widetilde{v_{j}} = (a - 1, q)$. By [Lemma 9](#lem:quarter-excursion-length){reference-type="ref" reference="lem:quarter-excursion-length"} applied to the subpath of $\widetilde{\gamma}$ from $\widetilde{v_{i}}$ to $\widetilde{v_{j}}$, we have $$j - i \equiv 2(q - b - p + a) - 2 \sum_{k = b}^q \Tilde{x}_k \pmod{8}.$$ We also have $\widetilde{v_{j + L}} = (a + \lambda M - 1, q + \mu N )$. By [Lemma 9](#lem:quarter-excursion-length){reference-type="ref" reference="lem:quarter-excursion-length"} applied to the subpath of $\widetilde{\gamma}$ from $\widetilde{v_{i}}$ to $\widetilde{v_{j + L}}$, we have $$j - i + L \equiv 2(q - b - p + a + \lambda M + \mu N) - 2 \sum_{k = b}^{q + \mu N} \Tilde{x}_k \pmod{8}.$$ Subtracting the two equations, we conclude that $$L \equiv 2(\lambda M + \mu N) - 2 \mu k(x) \pmod{8}$$ where we used the fact that $k(x) < 0$ from [\[prop:homology\]](#prop:homology){reference-type="ref" reference="prop:homology"}. In the case $\lambda = 0, \mu > 0$, note that $k(x) = 0$ by [\[prop:homology\]](#prop:homology){reference-type="ref" reference="prop:homology"}. Let $\widetilde{\gamma}$ be the lifting of $\gamma$. By [Lemma 6](#lem:excursion-in-lifting){reference-type="ref" reference="lem:excursion-in-lifting"}, there exists an $a \in \mathbb Z$ such that $\widetilde{\gamma}$ can be partitioned into a disjoint union of $a$-excursions and upward-pointing edges lying on the line $x = a - 1$. Let $(a - 1, b_0), (a - 1, b_1), \cdots$ be the starting points of the upward-pointing edges lying on the line $x = a - 1$. There must exist some $j$ such that $b_j = b_0 + \mu N.$ For any $i$, the subpath of $\widetilde{\gamma}_i$ from $(a - 1, b_i + 1)$ to $(a - 1, b_{i + 1})$ is an $a$-excursion, so [Lemma 8](#thm:half-excursion-length){reference-type="ref" reference="thm:half-excursion-length"} gives $$\left\vert \widetilde{\gamma}_i \right \vert \equiv 2b_{i + 1} - 2b_i - 1 \bmod{8}.$$ Summing for $i = 0, 1, \cdots, j - 1$, we get $$L = \sum_{i = 0}^{j - 1} (\left\vert \widetilde{\gamma}_i \right \vert + 1) \equiv 2(b_j - b_0) \equiv 2 \mu N \bmod{8}.$$ This completes the proof of [\[prop:length\]](#prop:length){reference-type="ref" reference="prop:length"}. ◻ # Loop count; Symmetric Hitomezashi patterns and knot theory {#sec:knot-theory} In this section we give a proof to [Theorem 3](#prop:loop-count){reference-type="ref" reference="prop:loop-count"} motivated by knot theory. Since the total homology class of all toroidal Hitomezashi loops on $\mathsf{Cloth}_{M,N}(x,y)$ is equal to $(k(x),k(y))$, (1) is immediate from [\[prop:homology\]](#prop:homology){reference-type="ref" reference="prop:homology"}(2). We prove (2) as follows. At each vertex, each loop turns $90^{\circ}$ clockwise or counterclockwise. By elementary geometry, the quantity $(\#\text{clockwise turn} - \#\text{counterclockwise turn})$ equals $4$ for clockwise trivial Hitomezashi loops, $-4$ for counterclockwise trivial Hitomezashi loops, and $0$ for nontrivial Hitomezashi loops. 
On the other hand, the total number of clockwise turns across all loops is equal to the total number of counterclockwise turns, as each vertex in $\mathsf{Cloth}_{M, N}(x, y)$ contributes one clockwise and one counterclockwise turn. So the number of clockwise trivial Hitomezashi loops is equal to the number of counterclockwise trivial Hitomezashi loops in $\mathsf{Cloth}_{M, N}(x, y)$, which implies the desired statement. For (3), it suffices to perform the proof in the following two steps. **Lemma 10**. *[Theorem 3](#prop:loop-count){reference-type="ref" reference="prop:loop-count"}(3) holds if $x$ only has at most two blocks, i.e. (up to cyclic permutation) $x$ is some number of $1$'s followed by some number of $-1$'s. In fact, the loop count is exactly $N$ in this case.* **Lemma 11**. *If [Theorem 3](#prop:loop-count){reference-type="ref" reference="prop:loop-count"}(3) holds for $x$, then it also holds for any $x'$ obtained by switching two adjacent bits of $x$.* [Lemma 10](#lem:two_blocks){reference-type="ref" reference="lem:two_blocks"} is a direct verification. Instead of carrying out the details, we refer the readers to [1](#fig:00011111){reference-type="ref" reference="fig:00011111"} and the figure on the left in [5](#fig:flip_bit){reference-type="ref" reference="fig:flip_bit"}. We devote the rest of this section to proving [Lemma 11](#lem:switch){reference-type="ref" reference="lem:switch"}. Since cyclically permuting $x$ does not affect the statement of [Theorem 3](#prop:loop-count){reference-type="ref" reference="prop:loop-count"}(3), we may assume $x'$ is obtained by switching the first two bits in $x$. ## Redraw the grid onto the annulus {#sec:annulus} In our setup, since any toroidal Hitomezashi loop does not cross the diagonal, we can redraw the grid onto the annulus as in [\[fig:10110_move\]](#fig:10110_move){reference-type="ref" reference="fig:10110_move"}, where the open segments on the left and right are pairwisely identified to close the grid up. Here, we think of the grid as the union of $N$ closed strands, each consists of one horizontal and one vertical side both of length $N$. Then the orientation given by the binary string $x$ corresponds to an orientation to each of the strands. The central circle of the annulus has homology class $\pm(1,1)$. We generalize the notion of Hitomezashi loops a bit. In a $4$-regular planar graph, any two edges sharing the same vertex are either adjacent or opposite to each other. A **link-like graph** is a directed $4$-regular planar graph whose any two opposite edges at any vertex point to the same direction in the plane. Such a graph is an oriented link diagram (in the sense of knot theory) with the overpass/underpass information forgotten. A **Hitomezashi loop** of a link-like graph $G$ is a directed circuit in $G$ whose any two consecutive edges are adjacent. Then, our grid on the annulus, upon forgetting the $2N$ corners, is such a graph $G'(x)$ (it is planar because the annulus embeds naturally into the plane), and its Hitomezashi loops correspond exactly to those in the sense of our previous definition. *Remark 2*. (1) In the knot theoretic context, our definition of Hitomezashi loops corresponds to the notion of *Seifert circles* of a link diagram.\ (2) The graph $G'(x)$ defined above is the underlying graph of the link diagram for the torus link $T(N,N)$ drawn as the braid closure of the full twist $(\sigma_1\cdots\sigma_{N-1})^N$ in the braid group $B_N$. 
Every link-like graph admits a checkerboard coloring, from which the following lemma is immediate by considering the color on the left of a Hitomezashi loop. **Lemma 12**. *A Hitomezashi loop of a link-like graph passes any vertex at most once. ◻* Lastly we remark that the annulus admits a $\mathbb Z/2$-symmetry given by reflecting across the central circle and then performing a half rotation along it (which exchanges the horizontal and vertical sides of each of the $N$ strands). It will be of use that our graph $G'(x)$, and thus the whole Hitomezashi pattern, is invariant under this $\mathbb Z/2$-symmetry. ## The triple point move The triple point move, also known as the Reidemeister III move in the knot theoretic context, is the local move that changes one link-like graph to another as shown in [\[fig:triple\]](#fig:triple){reference-type="ref" reference="fig:triple"}, thought of as passing the top strand across the bottom intersection point. The orientations of the strands do not change under the move. Note that this move is reversible and cyclically symmetric. Label the strands of $G'(x)$ from bottom to top by $1,2,\cdots,N$ as in Figure [\[fig:10110_move\]](#fig:10110_move){reference-type="ref" reference="fig:10110_move"}. Performing a sequence of triple point moves that passes the first strand across the $2N-2$ intersection points of the second strand with the remaining strands (marked blue in [\[fig:10110_move\]](#fig:10110_move){reference-type="ref" reference="fig:10110_move"}), bottom to top, left to right, we arrive at a link-like graph isomorphic to $G'(x')$, where $x'$ is the binary string obtained by exchanging the first two bits of $x$. To prove Lemma [Lemma 11](#lem:switch){reference-type="ref" reference="lem:switch"}, it is thus important for us to examine the change in the number of Hitomezashi loops, and in their homology classes, under a general triple point move. The three strands involved in a triple point move have either adjacent ([\[fig:adjacent\]](#fig:adjacent){reference-type="ref" reference="fig:adjacent"}) or alternating orientations ([\[fig:alternating\]](#fig:alternating){reference-type="ref" reference="fig:alternating"}). **Lemma 13** (Triple point move for adjacent orientations). *If the orientations of the three strands in a triple point move are adjacent, then the number of Hitomezashi loops does not change under the triple point move.* *Proof.* The local picture does not change. See [\[fig:adjacent\]](#fig:adjacent){reference-type="ref" reference="fig:adjacent"}. Note that by Lemma [Lemma 12](#lem:vertex_once){reference-type="ref" reference="lem:vertex_once"} the three colored paths as shown do belong to different Hitomezashi loops, although we do not need this fact for our argument. ◻ If the orientations of the three strands in a triple point move are alternating, complete the local fragments involved in the move into Hitomezashi loops. Up to overall orientation reversal, cyclic permutation, and passing strands across $\infty$, there are three configurations, as shown in [\[fig:alternating\]](#fig:alternating){reference-type="ref" reference="fig:alternating"}. From left to right, we call them Configurations I, II, III, respectively. Then the triple point move switches Configurations I and III and leaves Configuration II invariant. As a corollary we obtain the following. **Lemma 14** (Triple point move for alternating orientations).
*If the orientations of the three strands in a triple point move are alternating, then under the move, the number of Hitomezashi loops decreases/increases by two for Configurations I, III, and does not change for Configuration II. ◻* ## Proof of [Lemma 11](#lem:switch){reference-type="ref" reference="lem:switch"} {#proof-of-lemswitch} We perform the sequence of triple point moves switching the first two strands, as described in the previous section. More carefully, we pass the first strand alternatingly across the vertical array (from bottom to top) and the horizontal array (from left to right) of the intersection points (as marked blue in [\[fig:10110_move\]](#fig:10110_move){reference-type="ref" reference="fig:10110_move"}). Now [Lemma 11](#lem:switch){reference-type="ref" reference="lem:switch"} is implied by the following claim. **Claim:** The number of Hitomezashi loops modulo $4$ does not change under every pair of triple point moves (i.e. the two moves that pass the first strand across the two intersection points of the second strand with one of the remaining strands) performed. *Proof of claim.* Note that the $\mathbb Z/2$-symmetry mentioned at the end of [4.1](#sec:annulus){reference-type="ref" reference="sec:annulus"} is preserved under each pair of moves. Therefore, the two sets of orientations for a pair of triple point moves are both adjacent or both alternating. If they are both adjacent, we are done by Lemma [Lemma 13](#lem:adjacent){reference-type="ref" reference="lem:adjacent"}. Suppose now they are both alternating. Then the two triple point moves have the same configuration type as shown in [\[fig:alternating\]](#fig:alternating){reference-type="ref" reference="fig:alternating"}. Each of the two local pictures has six endpoints, and these twelve endpoints are paired up by six Hitomezashi paths (see [\[fig:six_pairs\]](#fig:six_pairs){reference-type="ref" reference="fig:six_pairs"}). By the $\mathbb Z/2$-symmetry, among these six, there must be an even number of *cross* Hitomezashi paths, i.e. paths that pair up two endpoints from different local pictures. We divide into three cases. **Case 1**: There is no cross Hitomezashi path. In this case, performing one of the triple point moves does not change the configuration type of the other move. Therefore the number of Hitomezashi loops changes by $\pm4$ or $0$ under the pair of moves by [Lemma 14](#lem:alternating){reference-type="ref" reference="lem:alternating"}. **Case 2**: There are exactly two cross Hitomezashi paths. Then in each local picture, four of the six endpoints are paired up by non-cross paths. With this observation, we conclude verbatim as in Case 1. **Case 3**: There are at least four cross Hitomezashi paths. Label the six endpoints by $1,2,\cdots,6$ clockwise in one local picture, and the other six also by $1,2,\cdots,6$ so that the labels are $\mathbb Z/2$-invariant. By a purely topological consideration, if there are cross paths between the $1$'s and the $2$'s or between the $1$'s and the $6$'s, then there must be exactly two cross paths, which contradicts our assumption. Therefore if the $1$'s are on cross paths, there must be two cross paths between the $1$'s and the $4$'s. Cyclic permutations of this conclusion also hold. If two of the pairs of endpoints $(1,4)$, $(2,5)$, $(3,6)$ are joined by cross paths, then so is the third. Therefore there is exactly one possible configuration (at least when forgetting about how the loops embed onto the annulus), as shown on the left of [\[fig:six_pairs\]](#fig:six_pairs){reference-type="ref" reference="fig:six_pairs"}.
In this case, performing the pair of triple point moves does not change the number of Hitomezashi loops. ◻ # Open problems {#sec:problems} We point out some potentially interesting directions for future study. **Problem 15**. *When $k(x) = k(y) = 0$, we know from [\[prop:homology\]](#prop:homology){reference-type="ref" reference="prop:homology"} that the possible homology classes of nontrivial Hitomezashi loops in $\mathsf{Cloth}_{M, N}(x, y)$ are $(\pm 1, 0)$ and $(0, \pm 1)$. However, by [\[obs\]](#obs){reference-type="ref" reference="obs"}(2), loops with homology class $(\pm 1, 0)$ and $(0, \pm 1)$ cannot coexist in the same pattern. Is there a simple criterion on $x, y$ that tells us which homology classes can exist in this case?* **Problem 16**. *We could not formulate a generalization of [Theorem 3](#prop:loop-count){reference-type="ref" reference="prop:loop-count"}(2) for non-symmetric Hitomezashi patterns. In particular, the total number of loops modulo $4$ is not a function of $M, N, k(x), k(y)$, as illustrated in [7](#fig:loop_count){reference-type="ref" reference="fig:loop_count"}. This is still the case if we assume $k(x) \neq 0, k(y) \neq 0$, as the number of loops is congruent to $0$ modulo $4$ in $\mathsf{Cloth}_{8,8}(x,x)$ and $2$ modulo $4$ in $\mathsf{Cloth}_{8,8}(x, y)$ for $x = +-++--++$ and $y = +-++-+-+$.* *Could we find a formula for the number of loops modulo $4$ in a general Hitomezashi pattern $\mathsf{Cloth}_{M, N}(x, y)$?* ![When $x = ++--$ and $x' = +-+-$, $\mathsf{Cloth}_{4,4}(x, x)$ has $4$ toroidal Hitomezashi loops, while $\mathsf{Cloth}_{4,4}(x, x')$ has $6$.](Figures/1100_1100.png){#fig:loop_count width="70%"} ![When $x = ++--$ and $x' = +-+-$, $\mathsf{Cloth}_{4,4}(x, x)$ has $4$ toroidal Hitomezashi loops, while $\mathsf{Cloth}_{4,4}(x, x')$ has $6$.](Figures/1100_1010.png){#fig:loop_count width="70%"} **Problem 17**. *As explained in the introduction, in the case when $M,N$ are both even, for a fixed Hitomezashi pattern $\mathsf{Cloth}_{M,N}(x,y)$, the set of Hitomezashi loops decompose into two parts that are dual to each other. What can one say about the loop count of each part?* **Problem 18**. *Generalize our results to Hitomezashi patterns on an infinite cylindrical grid $\mathbb Z\times\mathbb Z/N\mathbb Z$.* **Problem 19**. *Colin, Defant, and Tenner [@defant2023] sketched some reasonable definitions for Hitomezashi patterns on the standard planar triangular grid whose vertex set is given by $T=\{i+j\omega+k\omega^2\colon i,j,k\in\mathbb Z\}\subset\mathbb C$, where $\omega=e^{2\pi i/3}$. Can one find a reformulation of their definitions in the spirit of Definition [Definition 2](#defn:hitomezashi-toroidal){reference-type="ref" reference="defn:hitomezashi-toroidal"}? What can be said about planar triangular Hitomezashi patterns and toroidal triangular Hitomezashi patterns? A reasonable toroidal triangular grid to consider is $T/NT$ for some integer $N\ge2$.* [^1]: Note that when $M$ or $N$ is odd, a loop may pass a vertex twice. See the figure on the right in [5](#fig:flip_bit){reference-type="ref" reference="fig:flip_bit"}. [^2]: When $k(x) = 0$, loops with homology class $(0, 1)$ and $(0, -1)$ can coexist in the same Hitomezashi pattern. See the orange and cyan loop in the second figure of [7](#fig:loop_count){reference-type="ref" reference="fig:loop_count"}. 
[^3]: *As $\mu k(x) = \lambda k(y)$, this expression can also be written as $2(\mu N + \lambda M) - \mu k(x) - \lambda k(y)$, so it is invariant if we switch $x$ and $y$.* [^4]: Note that our definition of height differs from the "height" in [@Pete2008] by a factor of $2$. [^5]: Note that [@Pete2008] introduces a different definition of "excursion".
--- abstract: | Given a closed, genus $g$ surface $S$, we consider $Aut(\mathcal{ML})$, the group of homeomorphisms of $\mathcal{ML}$ that preserve the intersection number. We prove that, except in a few special cases, $Aut(\mathcal{ML})$ is isomorphic to the extended mapping class group. The theorem is a special case of Ivanov's *meta conjecture*, which states that any "sufficiently rich" object naturally associated to a surface has automorphism group isomorphic to the extended mapping class group. Some of the results in this paper can be generalized to the setting of geodesic currents. To illustrate the challenges encountered when extending the theorem to the context of currents, we construct an infinite family of pairs of closed curves that have the same simple marked length spectra and self intersection number. author: - Meenakshy Jyothis bibliography: - paperdraft.bib title: Ivanov's meta-conjecture in the context of measured laminations --- # Introduction Let $S$ be a closed, orientable, finite type surface of genus $g \geq 2$ and let $Mod^{\displaystyle \pm}(S)$ denote the extended mapping class group of $S$. Ivanov's meta conjecture states that when $g \geq 3$, every object naturally associated to a surface $S$ that has a "sufficiently rich" structure has $Mod^{\displaystyle \pm}(S)$ as its group of automorphisms, and it makes a similar claim for $g = 2$ [@Iva06]. In his seminal work, Ivanov proved that the automorphism group of the curve complex of $S$ is $Mod^{\displaystyle \pm}(S)$ [@Iva97]. Ivanov's theorem inspired a number of results in the following years ([@Irm06], [@IK07], [@Di12], [@AACLOSX], [@BM19]). Many of these results considered different complexes associated to a surface and showed that their automorphism group is $Mod^{\displaystyle \pm}(S)$, and their proofs use Ivanov's original theorem. In this paper, we consider the space of measured laminations on a surface, $\mathcal{ML}(S)$. Given a hyperbolic structure on $S$, a measured lamination is a closed subset of $S$ foliated by geodesics, equipped with a transverse measure. We aim to prove Ivanov's meta conjecture for a specific automorphism group of $\mathcal{ML}(S)$. Weighted simple closed multicurves are dense in $\mathcal{ML}(S)$. By work of Kerckhoff, the geometric intersection number on simple closed curves extends to an 'intersection form' $i(.,.)$ on $\mathcal{ML}(S)$. In particular, the intersection form is a continuous bilinear map $i(.,.): \mathcal{ML}(S) \times \mathcal{ML}(S) \rightarrow \mathbb{R}$ such that for any two simple closed curves $\gamma$ and $\delta$, the intersection form $i(\gamma, \delta)$ agrees with their geometric intersection number. In this paper we will be considering the automorphism group $Aut(\mathcal{ML})$ that consists of homeomorphisms on $\mathcal{ML}(S)$ that preserve the intersection form. Let $$Aut(\mathcal{ML}) = \{ \phi: \mathcal{ML}(S) \rightarrow \mathcal{ML}(S) \text{ homeo } \mid i(\lambda, \mu) = i(\phi(\lambda) , \phi(\mu)) \text{ for all } \lambda, \mu \in \mathcal{ML}(S) \}$$ We show that in most cases $Aut(\mathcal{ML})$ is isomorphic to the extended mapping class group. **Theorem 1**. *Let $S_{g}$ be a closed, orientable, finite type surface of genus $g \geq 2$ and let $Aut(\mathcal{ML})$ denote the group of homeomorphisms on $\mathcal{ML}(S_{g})$ that preserve the intersection form.
Then, for all $g \neq 2$, $$Aut(\mathcal{ML}) \cong Mod^{\displaystyle \pm} (S_{g})$$ For the surface of genus 2, $$Aut(\mathcal{ML}) \cong \displaystyle{Mod^{ \pm} (S_{2})/H}$$ where $H$ is the order two subgroup generated by the hyperelliptic involution.* **Remark 2**. *The above theorem is equivalent to the statement that $Aut(\mathcal{ML})$ is isomorphic to the automorphism group of the curve complex for all surfaces $S_{g}$ with $g \geq 2$.* ## The case of geodesic currents The space of geodesic currents on a surface, $\mathscr{C}(S)$, is an extension of $\mathcal{ML}$ that contains weighted *non-simple* closed curves as a dense subset in the same way as $\mathcal{ML}$ contains weighted simple closed multicurves as a dense subspace. By work of Bonahon, the intersection form also extends to $\mathscr{C}(S)$ [@Bon86]. The space of geodesic currents comes equipped with the weak\* topology. Let $Aut(\mathscr{C})$ denote the group of homeomorphisms on $\mathscr{C}(S)$ that preserve the intersection form. It is already known that $\displaystyle{Mod^{\pm}(S)}$ embeds in $Aut(\mathscr{C})$. For $g \geq 3$, the theorem in this paper also gives us a surjection: $$\displaystyle{Mod^{\pm}(S) \hookrightarrow Aut(\mathscr{C}) \xtwoheadrightarrow{f} Aut(\mathcal{ML}) \xrightarrow{\cong} Mod^{\pm}(S)}$$ We would like to know whether Ivanov's theorem holds for $Aut(\mathscr{C})$. After we finished writing, we discovered that Ken'ichi Ohshika and Athanase Papadopoulos have also proved that $Aut(\mathcal{ML})$ is isomorphic to $Mod^{\pm}(S)$ [@OHSHIKA2018899]. However, they use quite different techniques from us. In particular, many of the results in this paper also apply to the general setting of geodesic currents. But Ivanov's meta conjecture for $Aut(\mathscr{C})$ does not follow immediately from these results. One reason why this is not immediate is that it is hard to show that the surjective map $f:Aut(\mathscr{C}) \twoheadrightarrow Aut(\mathcal{ML})$ is also injective. In fact, we construct an infinite family of pairs of closed curves $(\gamma_{n}, \gamma_{n}')$ with the same self intersection number and the same simple marked length spectra (defined in [2.2](#sec2.2){reference-type="ref" reference="sec2.2"}). The kernel of $f$ could contain automorphisms of currents that map $\gamma_{n}$ to $\gamma_{n}'$. **Theorem 3**. *Let $S$ be a surface of genus at least 2. We can find infinitely many pairs of closed curves $\gamma_n$ and $\gamma_n'$ on $S$ such that $\gamma_n$ and $\gamma_n'$ have the same self intersection number and the same simple marked length spectra.* There are results similar to [Theorem 3](#thm0.3){reference-type="ref" reference="thm0.3"} in the literature. Two non-isotopic closed curves $\alpha$ and $\beta$ are said to be $k$-equivalent if they intersect all the closed curves with self intersection $k$ the same number of times [@kavi2019]. For any given $k > 0$, Parlier and Xu construct closed curves $\alpha$ and $\beta$ that are not $k$-equivalent, but are $k'$-equivalent for any $k'< mk^2$ different from $k$ [@HpBx2023]. In particular, the curves $\alpha$ and $\beta$ constructed have the same simple marked length spectra. However, the curves $\alpha$ and $\beta$ have different self intersection numbers. This means that any map $\phi$ in the kernel of $f$ cannot map $\alpha$ to $\beta$. For the purpose of this paper we are interested in pairs of closed curves that share the same self intersection number and the same simple marked length spectrum.
## Plan of the paper The paper is organized as follows. Section 2 provides background on measured laminations and geodesic currents. In section 3 we prove some properties of elements in $Aut(\mathcal{ML})$. The proof of [Theorem 1](#thm0.1){reference-type="ref" reference="thm0.1"} follows immediately from these properties. In particular, in section 3 we prove that any $\phi \in Aut(\mathcal{ML})$ is linear, when $\mathcal{ML}$ is viewed as a subset of $\mathscr{C}(S)$. In this section, we also show that any such $\phi$ maps simple closed curves to weighted simple closed curves. As a consequence of these properties, we get that any $\phi \in Aut(\mathcal{ML})$ maps simple closed curves to simple closed curves, and therefore has an action on the curve complex. In section 4, we prove [Theorem 1](#thm0.1){reference-type="ref" reference="thm0.1"} using Ivanov's original theorem and a density argument. In section 5, we talk about some obstructions that arise when we try to generalize [Theorem 1](#thm0.1){reference-type="ref" reference="thm0.1"} to the setting of currents. In this section we construct examples of pairs of closed curve that have the same self intersection number and simple marked length spectrum. ## Acknowledgements. The author would like to thank her advisor Eugenia Sapir for her support throughout this project and for the many insightful conversations that greatly contributed to this work. The author would also like to thank Didac Martinez Granado for bringing Ken'ichi Ohshika's and Athanase Papadopoulos' work and Hugo Parlier's and Binbin Xu's work to her attention. # Background Let S be a hyperbolic surface with a complete metric defined by $\mathbb{H}^2/ \Gamma$, for $\Gamma \leq$ PSL(2,$\mathbb{R}$). We can identify the universal cover of the surface $S$ with $\mathbb{H}^2$. The spaces $\mathcal{ML}(S)$ and $\mathscr{C}(S)$, that were discussed in the introduction, are independent of the choice of hyperbolic metric on $S$ [@Mar2016]. However, in order to define a geodesic current or a measured lamination we will need to fix a hyperbolic metric. Throughout the paper, subsurfaces of $S$ and closed curves will be considered up to isotopy. The complexity of a closed surface of genus $g$ is defined to be 3$g$ - 3. ## Laminations: A *geodesic lamination* $\lambda$ is a closed subset of $S$ foliated by simple, complete geodesics. An example of a geodesic lamination is a multicurve consisting of pairwise disjoint, simple closed geodesics on $S$. Another fundamental example arises from considering a set of disjoint geodesics in $\mathbb{H}^2$ that are invariant under $\Gamma$, whose union is closed. Projecting this set onto S gives us a geodesic lamination. We say a geodesic lamination is *minimal* if it contains no proper non-empty sublamination. A *complementary region* of a geodesic lamination $\lambda$ is a connected component of the open complement $S \backslash \lambda$. A geodesic lamination that *fills* a subsurface $S'$ of $S$ intersects every essential, non-peripheral simple closed curve on $S'$. The complementary regions of such a filling lamination are either ideal polygons on $S'$ or crowns homotopic to a boundary curve of $S'$. Some geodesic laminations can be equipped with a transverse measure, which assigns a positive measure to each arc $\tau$ that intersects the lamination $\lambda$ transversely. This measure is invariant under homotopy transverse to $\lambda$ and is supported on $\tau \cap \lambda$. 
A *measured lamination* is a geodesic lamination with a transverse measure that has full support. Abusing notation, we will use $\lambda$ to denote the transverse measure. It is important to note that a measured lamination $\lambda$ can come equipped with different transverse measures. To avoid confusion between measures and their supports we have used notation more carefully later in this paper. A description of this can be found under 'Notational Conventions' in [2.3](#NC){reference-type="ref" reference="NC"}. ## Geodesic currents: {#sec2.2} Measured laminations can be thought of as a subset of a larger space of measures called geodesic currents defined as follows. Let $\mathcal{G}$ be the set of all unparameterized, unoriented complete geodesics in $\mathbb{H}^2$. Any geodesic in $\mathbb{H}^2$ can be determined by its extreme points on the boundary of $\mathbb{H}^2$. Hence there is a natural bijection $\mathcal{G} \cong \displaystyle{( S^1 \times S^1 \setminus \Delta)/ \sim}$, where $\Delta$ is the diagonal in $S^1 \times S^1$ and the equivalence relation $\sim$ identifies the coordinates $(a,b)$ and $(b,a)$. We assign $\mathcal{G}$ the topology of $\displaystyle{( S^1 \times S^1 \setminus \Delta)/ \sim}$. The fundamental group, $\pi_1(S)$ embeds as a subgroup $\Gamma$ of PSL(2,$\mathbb R$). The isometry group PSL(2,$\mathbb R$) of $\mathbb{H}^2$ acts naturally on $\mathcal{G}$. A geodesic current is a $\Gamma$-invariant, locally finite, positive, Borel measure on $\mathcal{G}$. The space of all geodesic currents endowed with a weak\* topology is denoted by $\mathscr{C}(S)$. The set of closed curves on $S$ embeds into the space of currents. To see this consider a closed curve $\gamma$ on $S$. The lifts of $\gamma$ are a discrete subset of $\mathcal{G}$. The Dirac measure on this discrete set is a geodesic current, and by abuse of notation we will denote the geodesic current by $\gamma$ as well. Since positive measures are closed under addition and scalar multiplication by positive reals, weighted multicurves are also examples of geodesic currents. By work of Bonahon, it is known that weighted closed curves are dense in $\mathscr{C}(S)$ [@Bon86]. Bonahon also shows that geometric intersection number $i(.,.)$ defined on a pair of closed curve can be extended bilinearly and continuously to intersection form on a pair of currents. For two any two currents $\mu$ and $\nu$, the intersection form $i(\mu, \nu)$ is defined as $\mu \times \nu (\mathcal{J}/ \Gamma)$, where $\mathcal{J}$ is a subset of $\mathcal{G} \times \mathcal{G}$ consisting of all pairs of incident distinct geodesics in $\mathbb{H}^2$. Any geodesic current $\mu$ satisfying $i(\mu, \mu) = 0$ is supported on a $\Gamma$-invariant geodesic lamination on $\mathbb{H}^2$, and $\mu$ induces a transverse measure on its support. Because of this, $\mathcal{ML}$ embeds into $\mathscr{C}(S)$. For a geodesic current $\mu$, one can consider the intersection of $\mu$ with every closed curve on $S$. The infinite coordinate $\{i(\mu, \gamma) \}$, where $\gamma$ is any closed curve on $S$ is the marked length spectrum of $\mu$. Otal proved that every geodesic current $\mu$ can be identified using its unique marked length spectrum. [@Ot90]. The simple marked length spectrum of $\mu$ is the infinite coordinate $\{i(\mu, s) \}$, where $s$ is any simple closed curve on $S$. ## Notational Convention: {#NC} For a geodesic current $\mu$, consider the support of $\mu$ in $\mathcal{G}$ and take its projection onto $S$. 
We will denote this set by $|\mu|$. We will often use this notation to avoid confusion between the current and its support. For instance, the geodesic current corresponding to a closed curve $\gamma$ will be denoted by $\gamma$, but the curve itself will be denoted by $|\gamma|$. Similarly, the transverse measure on a measured lamination will be denoted by $\lambda$, and the lamination itself will be denoted by $|\lambda|$. For example, if a measured lamination is not uniquely ergodic and supports two distinct measures we will denote them using distinct symbols $\lambda$ and $\eta$. But, we will have $|\lambda| = |\eta|$. ## Curve Complex and Ivanov's theorem: The curve complex, $CC$ on a surface $S$ is a simplicial complex whose vertices correspond to isotopy classes of essential simple closed curves on $S$, and whose edges correspond to pairs of simple closed curves that have geometric intersection number zero. Let us denote the vertex set of the curve complex by $V(S)$. An automorphism of a curve complex is a bijection on $V(S)$ that takes simplices to simplices. We will use $Aut(CC)$ to denote the automorphism group of the curve complex on a surface $S$ . Ivanov's theorem states that $Aut(\mathcal{CC}) \cong Mod^{\displaystyle \pm} (S)$ for surfaces of genus at least three. For $S_2$, the closed, orientable surface of genus 2, $Aut(\mathcal{CC}) \cong \displaystyle{Mod^{ \pm} (S_{2})/H}$, where, $H$ denotes the subgroup generated by the hyperelliptic involution. # Properties of automorphisms that preserve the intersection form In this section we prove two properties of elements in $Aut(\mathcal{ML})$ that play a key role in proving $Aut(\mathcal{ML}) \cong Mod^{\displaystyle \pm} (S)$. Namely, we show that any $\phi \in Aut(\mathcal{ML})$ is linear and that any such $\phi$ maps simple closed curves to simple closed curves. Most of the content in this section remains true in the setting of geodesic currents. In fact, the statements of Propositions 3.1, 3.3 and 3.13 hold true for any $\phi \in Aut(\mathscr{C})$. The remarks in this section concern about how the results generalizes to the space of currents. ## Linearity **Proposition 4**. *Let $\phi \in Aut(\mathcal{ML})$ and let $\lambda , \nu \in \mathcal{ML}$ such that $\lambda + \nu \in \mathcal{ML}$. Let $c$ be a positive real number. Then $$\phi( \lambda + \textup{c} \hspace{0.02cm} \nu) = \phi(\lambda) + \textup{c} \hspace{0.02cm} \phi(\nu)$$* *Proof.* Observe that $Aut(\mathcal{ML})$ is a group, and if $\phi$ preserves the intersection form then $\phi^{-1}$ preserves the intersection form as well. Now, for any simple closed curve $\gamma$, we have the following equality $$\begin{aligned} i( \gamma, \phi(\lambda + \textup{c} \hspace{0.02cm} \nu)) & = i(\phi^{-1}(\gamma), \lambda + \textup{c} \hspace{0.02cm} \nu)\\ & = i(\phi^{-1}(\gamma), \lambda) + \textup{c} \hspace{0.02cm} i(\phi^{-1}(\gamma), \nu)\\ & = i( \gamma, \phi(\lambda)) + \textup{c} \hspace{0.02cm} i(\gamma, \phi(\nu))\\ & = i( \gamma, \phi(\lambda)) + i(\gamma, \textup{c} \hspace{0.02cm} \phi(\nu))\\ & = i(\gamma, \phi(\lambda) + \textup{c} \hspace{0.02cm} \phi(\nu))\end{aligned}$$ This implies $\phi(\lambda + \textup{c} \hspace{0.02cm}\nu)$ and $\phi(\lambda) + \textup{c} \hspace{0.02cm}\phi(\nu)$ have the same simple marked length spectrum, and therefore $\phi(\lambda + \textup{c} \hspace{0.02cm}\nu) = \phi(\lambda) + \textup{c} \hspace{0.02cm}\phi(\nu)$ [@Primer]. ◻ **Remark 5**. 
*If $\gamma$ is allowed to be any closed curve, then the same proof can be used to see that any $\phi \in Aut(\mathscr{C})$ is linear. In this case, we will be looking at the marked length spectrum of a current instead of its simple marked length spectrum. Indeed, from Otal's work on currents it is known that a geodesic current is uniquely determined by its marked length spectrum [@Ot90].* ## Mapping simple closed curves to weighted simple closed curves **Proposition 6**. *Automorphisms of the space of measured laminations that preserve the intersection form map simple closed curves to weighted simple closed curves.* We prove this proposition after establishing several preliminary lemmas. **Lemma 7**. *Let $\lambda_1$ and $\lambda_2$ be any two geodesic currents satisfying $|\lambda_1| = |\lambda_2|$. Then for any geodesic current $\mu$, $i(\lambda_1, \mu ) \neq 0 \Longleftrightarrow i(\lambda_2, \mu) \neq 0$.* *In particular, if $\lambda_1$ and $\lambda_2$ are two measured laminations satisfying $|\lambda_1| = |\lambda_2|$, then for any measured lamination $\mu$, $i(\lambda_1, \mu ) \neq 0 \Longleftrightarrow i(\lambda_2, \mu) \neq 0$.* *Proof.* Let $\mathcal{J}$ be the set defined in [2.2](#sec2.2){reference-type="ref" reference="sec2.2"}. The support of $\lambda_{1} \times \mu$ in $\mathcal{J}$ is the set of ordered pairs of geodesics $(g_{1}, g_{2})$ such that $g_{1} \in |\lambda_{1}|$ and $g_{2} \in |\mu|$ [@JACL]. That means the support of $\lambda_{1} \times \mu$ in $\mathcal{J}$ is the same as the support of $\lambda_{2} \times \mu$ in $\mathcal{J}$. Therefore, the supports of $\lambda_{1} \times \mu$ and $\lambda_{2} \times \mu$ in $\mathcal{J}/ \Gamma$ are the same. But then $i(\lambda_1, \mu ) \neq 0$ implies $i(\lambda_2, \mu) \neq 0$ and vice versa. ◻ The next lemma was previously known (see for example [@DS03]), but we include the proof here for completeness. **Lemma 8**. *Let $\lambda$ be a minimal measured lamination that fills a subsurface $S'$ of $S$ and let $\mu$ be any other measured lamination with non-empty support satisfying:* 1. *$|\mu|$ is contained in the interior of $S'$.* 2. *$i(\lambda, \mu) = 0$* *Then $|\mu| = |\lambda|$.* *Proof.* Let $\omega$ be a measured lamination consisting of leaves which are common to $|\lambda|$ and $|\mu|$. Since $\lambda$ is minimal, $|\omega|$ has to be either empty or all of $|\lambda|$. **Case 1.** If $\omega$ is empty, then the geodesics in $|\lambda|$ and $|\mu|$ neither coincide nor intersect. Also, observe that $|\mu|$ cannot intersect any of the boundary curves of $S'$. This implies that, in the universal cover, any lift of $|\mu|$ has to live in the complementary regions formed by the lifts of $|\lambda|$ and the lifts of the boundary curves of $S'$. As $\lambda$ is filling, such complementary regions will either be ideal polygons in $\mathbb{H}^2$ bounded by geodesics in the lift of $|\lambda|$ or they will be crowns bounded by both the geodesics in the lifts of $|\lambda|$ and the geodesics in the lifts of boundary curves of $S'$ (see Figure 1). ![image](PNGimage.png){width="\\linewidth"} Figure 1: Two types of complementary region of $|\lambda|$. Here, $\gamma$ is a peripheral curve of $S'$. Any leaf in $\mu$ can only go from one ideal vertex of such an ideal polygon to another. But any such leaf is an isolated open leaf and cannot be in the support of a measured lamination [@CB88]. This contradicts our assumption that $\mu$ is a measured lamination with non-empty support.
**Case 2.** Now, if $\omega$ is all of $|\lambda|$, we get $|\lambda| = |\omega| \subseteq |\mu|$. Let $\mu_{\omega}$ denote the restriction of $\mu$ to $\omega$. Notably, $|\omega|$ is closed. Using the decomposition of laminations into minimal components, we can write $\mu = \mu_{\omega} + \mu^{\prime}$. Here $\mu^{\prime}$ represents the measured lamination that encompasses the portion of $\mu$ disjoint from $\omega$. Observe that both conditions (1) and (2) in the statement of the lemma are fulfilled by $\mu^{\prime}$. By the argument we have already given in Case 1, $\mu^{\prime}$ must have empty support. Consequently, we conclude that $|\lambda|$ is equal to $|\mu|$. ◻ Before proving more results, we want to define the set $\mathcal{E}_{\lambda}^{\mathcal{ML}}$, a generalized version of the set $\mathcal{E}_{\lambda}$ constructed in [@BIPP] [@BIPP2]. For a measured lamination $\lambda$, the set $\mathcal{E}_{\lambda}$ is defined as the set of all closed geodesics $c$ on $S$ that satisfy the following two conditions. 1. $i(\lambda, c) = 0$ 2. For every closed curve $c'$ with $i(c, c') \neq 0$, $i(\lambda, c') \neq 0$. If $\lambda$ is a measured lamination that fills a subsurface $S'$ of $S$, then $\mathcal{E}_{\lambda}$ consists of all the simple closed geodesics that form the boundary of $S'$ [@BIPP]. We will call such curves peripheral curves of $S'$. The set $\mathcal{E}_{\lambda}^{\mathcal{ML}}$ generalizes this construction to measured laminations and is defined as follows. For a measured lamination $\lambda$, the set $\mathcal{E}_{\lambda}^{\mathcal{ML}}$ is the set of all *measured laminations* $\mu$ on $S$ that satisfy the following two conditions. 1. $i(\lambda, \mu) = 0$ 2. For every measured lamination $\kappa$ with $i(\mu, \kappa) \neq 0$, $i(\lambda, \kappa) \neq 0$. We make the following claims about $\mathcal{E}_{\lambda}^{\mathcal{ML}}$. **Claim 9**. *$\mathcal{E}_{\lambda}^{\mathcal{ML}}$ only consists of pairwise disjoint measured laminations.* *Proof.* Any two measured laminations that intersect cannot simultaneously satisfy both conditions in the definition of $\mathcal{E}_{\lambda}^{\mathcal{ML}}$. Indeed, let $\mu_{1}$ and $\mu_{2}$ be two measured laminations such that $\mu_{1} \in \mathcal{E}_{\lambda}^{\mathcal{ML}}$ and $i(\mu_1, \mu_2) \neq 0$. Then by the second condition in the definition of $\mathcal{E}_{\lambda}^{\mathcal{ML}}$, we get $i(\lambda, \mu_2) \neq 0$. Any such $\mu_2$ fails to belong to $\mathcal{E}_{\lambda}^{\mathcal{ML}}$. ◻ **Claim 10**. *The set $\mathcal{E}_{\lambda}^{\mathcal{ML}}$ is closed under addition and scalar multiplication by positive reals.* *Proof.* The proof readily follows from the bilinearity of the intersection form. ◻ Now, we will discuss what elements are contained in the set $\mathcal{E}_{\lambda}^{\mathcal{ML}}$ for some specific cases of $\lambda$. **Lemma 11**. *If $\lambda$ is a simple closed curve, then $\mathcal{E}_{\lambda}^{\mathcal{ML}}$ consists of all the positive scalar multiples of $\lambda$. $$\mathcal{E}_{\lambda}^{\mathcal{ML}} = \{c\lambda : c \in \mathbb{R}_{+}\}.$$* *Proof.* It is easy to see that all measured laminations $c \lambda$ with $c \in \mathbb{R}_{+}$ belong to $\mathcal{E}_{\lambda}^{\mathcal{ML}}$. The set $\mathcal{E}_{\lambda}^{\mathcal{ML}}$ does not contain any other simple closed curve, as they all fail to satisfy the second condition in the definition of $\mathcal{E}_{\lambda}^{\mathcal{ML}}$.
For every simple closed curve $|\gamma| \neq |\lambda|$, we can find a simple closed curve that intersects $|\gamma|$ and not $|\lambda|$. The set $\mathcal{E}_{\lambda}^{\mathcal{ML}}$ also cannot contain any measured lamination $\mu$ that fills a subsurface $S'$ of $S$. Any subsurface $S'$ that $\mu$ fills will have complexity at least one. This means that we can find a curve $|\gamma|$ intersecting $\mu$ and contained entirely in $S'$. Now, if $\mu$ satisfies $i(\lambda, \mu) = 0$, then the curve $|\lambda|$ lies entirely outside of $S'$ and has zero intersection with $\gamma$. Therefore, $\mu$ fails to satisfy the second condition in the definition of $\mathcal{E}_{\lambda}^{\mathcal{ML}}$. ◻ **Lemma 12**. *If $\lambda$ is a minimal measured lamination that fills a subsurface $S'$ of $S$, then the set $\mathcal{E}_{\lambda}^{\mathcal{ML}}$ is the smallest set that is closed under addition and scalar multiplication (by positive reals) and contains:* 1. *the peripheral curves that form the boundary of the subsurface $S'$,* 2. *the measured laminations supported on $|\lambda|$.* *Proof.* The proof is divided into two parts. The first part shows that the peripheral curves of $S'$ are precisely the simple closed curves in $\mathcal{E}_{\lambda}^{\mathcal{ML}}$. The second part focuses on measured laminations supported on $|\lambda|$. **Peripheral curves of** $\mathbf{S'}$**:** To see that the peripheral curves of $S'$ are the only simple closed curves in $\mathcal{E}_{\lambda}^{\mathcal{ML}}$, consider the complementary subsurface $S \backslash S'$. We will denote this subsurface by $S''$. If $S''$ is a pair of pants, then it contains no essential simple closed curves other than its boundary curves. If it is a surface of higher complexity, then for every simple closed curve $\delta$ in the interior of $S''$ we can find another simple closed curve in the interior of $S''$ that also intersects $\delta$. So, a simple closed curve in the interior of $S''$ cannot satisfy both conditions in the definition of $\mathcal{E}_{\lambda}^{\mathcal{ML}}$. Also, any closed curve contained entirely in $S'$ will intersect $\lambda$ and therefore cannot be contained in $\mathcal{E}_{\lambda}^{\mathcal{ML}}$. ![image](PNGimage8.png){width="\\linewidth"} Figure 2: Example of a lamination $k$ that intersects $\gamma$ but not $\lambda$. Now, we show that all the peripheral curves of $S'$ belong to $\mathcal{E}_{\lambda}^{\mathcal{ML}}$. Consider $\gamma$, a peripheral curve of $S'$. As $\gamma \in \mathcal{E}_{\lambda}$, we know that $i(\lambda, \gamma) = 0$ and that if $\gamma$ intersects any closed curve then $\lambda$ intersects the same closed curve as well. It remains to check that every measured lamination that intersects $\gamma$ also intersects $\lambda$. To see that, consider the lifts of $\gamma$ and $|\lambda|$ in $\mathbb{H}^2$. The complementary region bounded by these geodesics gives us a crown as in Figure 2. If there exists a lamination $k$ that intersects $\gamma$ but not $\lambda$, then the lift of $k$ will contain a leaf that has one endpoint at an ideal vertex of the crown, as shown in Figure 2. But any such leaf will be an isolated leaf and cannot be in the support of a measured lamination [@CB88]. Therefore, no such $k$ exists. **Measured laminations supported on** $\mathbf{|\lambda|}$ **:** If $\mu$ is a measured lamination that fills a subsurface $\Sigma$ of $S$, and if $\Sigma \setminus S'$ is non-empty, then $\mu$ will intersect a closed curve that does not intersect $\lambda$.
This implies that any filling measured lamination in $\mathcal{E}_{\lambda}^{\mathcal{ML}}$ must have its support inside $S'$. But since these measured lamination also do not intersect $\lambda$, by [Lemma 8](#lma3.5){reference-type="ref" reference="lma3.5"} its support will be $|\lambda|$. ◻ Since the criteria we used for defining $\mathcal{E}_{\lambda}^{\mathcal{ML}}$ only depends on the intersection form, $\mathcal{E}_{\lambda}^{\mathcal{ML}}$ itself depends on only the support $|\lambda|$ . That is, if $|\lambda_{1}| = |\lambda_2|$, then $\mathcal{E}_{\lambda_1}^{\mathcal{ML}} = \mathcal{E}_{\lambda_2}^{\mathcal{ML}}$. However, the converse is not true in general. **Lemma 13**. *Let $\lambda$ be a minimal measured lamination and $\mu$ be any measured lamination. Then $\mathcal{E}_{\lambda}^{\mathcal{ML}} = \mathcal{E}_{\mu}^{\mathcal{ML}}$ if and only if $\mu = \lambda' + c_1\gamma_1+ c_2 \gamma_2 + \dots + c_n \gamma_n$; where $\lambda'$ is a measured lamination supported on $|\lambda|$, $\gamma_{i}$ is a boundary curve of the subsurface filled by $\lambda$ and $c_{i} \geq 0$ for all $i$.* *Consequently, if $\lambda$ and $\mu$ are minimal measured laminations, then $\mathcal{E}_{\lambda}^{\mathcal{ML}} = \mathcal{E}_{\mu}^{\mathcal{ML}}$ if and only if $|\lambda| = |\mu|$.* *Proof.* If $\lambda$ is a measure supported on a simple closed curve, then by [Lemma 11](#lma3.6){reference-type="ref" reference="lma3.6"} $$\mathcal{E}_{\lambda}^{\mathcal{ML}} = \{c\lambda : c \in \mathbb{R}_{+}\}= \mathcal{E}_{\mu}^{\mathcal{ML}}$$ if and only if $|\lambda| = |\mu|.$ Now, assume $\lambda$ is a minimal lamination that fills a subsurface $S'$ of $S$ and $\mu = \lambda' + c_1\gamma_1+ c_2 \gamma_2 + \dots + c_n \gamma_n$; where $\gamma_{i}'s$ are the boundary curves of $S'$ and $|\lambda'| = |\lambda|$. For any measured lamination $\delta$, $$i(\mu, \delta) = i(\lambda', \delta) + c_{1}i(\gamma_{1}, \delta) + \dots + c_{n}i(\gamma_n, \delta).$$ Any measured lamination that has zero intersection with $\mu$ must have a zero intersection with $\lambda'$, and therefore a zero intersection with $\lambda$. If $\delta$ belonged to $\mathcal{E}_{\mu}^{\mathcal{ML}}$, then for any measured lamination $c$, $$i(\delta, c) \neq 0 \Rightarrow i(\mu, c) \neq 0.$$ This would mean at least one of the summand in $$i(\mu, c) = i(\lambda', c) + c_{1}i(\gamma_{1}, c) + \dots + c_{n}i(\gamma_n, c)$$ must be non zero. Since $\lambda'$ and the peripheral curves $\gamma_{i}$ all belong in $\mathcal{E}_{\lambda}^{\mathcal{ML}}$, any one of those summand being non zero implies $i( \lambda, c) \neq 0$. This gives us, $$\mathcal{E}_{\mu}^{\mathcal{ML}} \subseteq \mathcal{E}_{\lambda}^{\mathcal{ML}}$$ To see the other inclusion, observe that $\mathcal{E}_{\mu}^{\mathcal{ML}}$ contains measured laminations that are supported on its minimal components, $|\lambda|$, $|\gamma_1|$, $|\gamma_{2}|$, $\dots$ and $|\gamma_{n}|$. But, from [Lemma 12](#lma3.7){reference-type="ref" reference="lma3.7"} we get $$\mathcal{E}_{\lambda}^{\mathcal{ML}} \subseteq \mathcal{E}_{\mu}^{\mathcal{ML}} .$$ To see the converse, assume that $\mathcal{E}_{\mu}^{\mathcal{ML}}$ = $\mathcal{E}_{\lambda}^{\mathcal{ML}}.$ The lamination $\mu$ cannot have a minimal component $\lambda_1$, that fills some subsurface of $S$ but is not supported in $|\lambda|$. If it does, then $\lambda_1$ will be contained in $\mathcal{E}_{\mu}^{\mathcal{ML}}$, but not in $\mathcal{E}_{\lambda}^{\mathcal{ML}}$. 
For the same reason, $|\mu|$ cannot contain any simple closed curve that is not a boundary curve of the subsurface $S'$. This means that $|\lambda|$ and $|\mu|$ can only differ by the simple closed curves that form the boundary curves of the subsurface filled by $\lambda$. ◻ **Remark 14**. *The construction $\mathcal{E}_{\lambda}$ can be generalized even further to the setting of geodesic currents. We will denote it by $\mathcal{E}^{c}_{\lambda}$, and it is defined as follows: If $\lambda$ is a geodesic current $$\mathcal{E}^{c}_{\lambda} := \begin{cases} \mu \in \mathscr{C} : & i(\lambda, \mu) = 0 \text{ and } \\ & i(\lambda, k) \neq 0 \text{ for every geodesic current } k \text{ with } i(\mu, k) \neq 0. \end{cases}$$ It can be shown that $\mathcal{E}^{c}_{\lambda}$ only consists of measured laminations. In fact, for a minimal measured lamination $\lambda$, the sets $\mathcal{E}^{c}_{\lambda}$ and $\mathcal{E}^{\mathcal{ML}}_{\lambda}$ are the same. For a closed curve $\gamma$, the sets $\mathcal{E}^{c}_{\gamma}$ and $\mathcal{E}^{\mathcal{ML}}_{\gamma}$ are the same as well. For this reason, replacing $\mathcal{E}_{\lambda}^{\mathcal{ML}}$ by $\mathcal{E}^{c}_{\lambda}$ and $\mathcal{E}_{\mu}^{\mathcal{ML}}$ by $\mathcal{E}^{c}_{\mu}$ in the statement of [Lemma 13](#lma3.8){reference-type="ref" reference="lma3.8"} still gives us a true statement. The proof of this statement closely follows the proof of [Lemma 13](#lma3.8){reference-type="ref" reference="lma3.8"}.* In the following lemma we prove that for any measured lamination $\lambda$ the operations $\mathcal{E}_{-}^{\mathcal{ML}}$ and $\phi$ commute for each $\phi \in Aut(\mathcal{ML})$. **Lemma 15**. *Let $\lambda$ be a measured lamination and $\phi \in Aut(\mathcal{ML})$. Then $\phi(\mathcal{E}_{\lambda}^{\mathcal{ML}}) = \mathcal{E}_{\phi(\lambda)}^{\mathcal{ML}}$.* *Proof.* Let $\mu$ be a measured lamination. It is enough to prove that $\mu \in \mathcal{E}_{\phi(\lambda)}^{\mathcal{ML}}$ if and only if $\phi^{-1}(\mu) \in \mathcal{E}_{\lambda}^{\mathcal{ML}}$. Observe that $i(\mu, \phi(\lambda)) = 0 \Leftrightarrow i(\phi^{-1}(\mu), \lambda) = 0.$ Let $\mu \in \mathcal{E}_{\phi(\lambda)}^{\mathcal{ML}}$. This gives $i(\mu, c') \neq 0 \Rightarrow i(\phi(\lambda), c') \neq 0$ for any measured lamination $c'.$ But now, $$\begin{aligned} {2} & i(\phi^{-1}(\mu), c') && \neq 0\\ \Rightarrow \hspace{0.2cm} & i(\mu, \phi(c')) && \neq 0\\ \Rightarrow \hspace{0.2cm} & i(\phi(\lambda), \phi(c')) && \neq 0\\ \Rightarrow \hspace{0.2cm} & i(\lambda, c') && \neq 0 \end{aligned}$$ So, $\phi^{-1}(\mu) \in \mathcal{E}_{\lambda}^{\mathcal{ML}}$. A very similar argument will give the other implication, that is, if $i(\phi^{-1}(\mu), c') \neq 0 \Rightarrow i(\lambda, c') \neq 0$, then $i(\mu, c') \neq 0 \Rightarrow i(\phi(\lambda), c') \neq 0.$ This gives us $\mu \in \mathcal{E}_{\phi(\lambda)}^{\mathcal{ML}} \Leftrightarrow \phi^{-1}(\mu) \in \mathcal{E}_{\lambda}^{\mathcal{ML}} \Leftrightarrow \mu \in \phi(\mathcal{E}_{\lambda}^{\mathcal{ML}}).$ ◻ **Remark 16**. *The same proof can be used to show that for a geodesic current $\lambda$ and $\phi \in Aut(\mathscr{C})$, $\phi(\mathcal{E}^{c}_{\lambda}) = \mathcal{E}^{c}_{\phi(\lambda)}$.* **Lemma 17**. *Automorphisms of measured laminations that preserve the intersection form map minimal measured laminations to minimal measured laminations.* *Proof.* Let $\lambda$ be a minimal measured lamination on $S$ and let $\phi \in Aut(\mathcal{ML})$.
Let $$\phi(\lambda) = \lambda_1 + \lambda_2 + \dots + \lambda_n$$ be the decomposition of the measured lamination $\phi(\lambda)$ into minimal measured laminations. Since $\phi^{-1} \in Aut(\mathcal{ML})$ is linear, we get $$\lambda = \phi^{-1}(\lambda_1) + \phi^{-1}(\lambda_2) + \dots + \phi^{-1}(\lambda_n)$$ Thus, the support $|\phi^{-1}(\lambda_i)|$ is contained in $|\lambda|$ for all $i$. But since $\lambda$ is minimal, $|\phi^{-1}(\lambda_i)| = |\lambda|$ for all $i$. Therefore, for any two minimal components $\lambda_{i}$ and $\lambda_{j}$, $$\begin{aligned} {2} & \hspace{0.3cm} |\phi^{-1}(\lambda_{i})| &&= |\phi^{-1}(\lambda_{j})|\\ \Rightarrow \hspace{0.2cm} &\hspace{0.6cm} \mathcal{E}_{\phi^{-1}(\lambda_{i})}^{\mathcal{ML}} &&= \mathcal{E}_{\phi^{-1}(\lambda_{j})}^{\mathcal{ML}}\\ \Rightarrow \hspace{0.2cm} & \phi( \mathcal{E}_{\phi^{-1}(\lambda_{i})}^{\mathcal{ML}}) &&= \phi(\mathcal{E}_{\phi^{-1}(\lambda_{j})}^{\mathcal{ML}})\\ \Rightarrow \hspace{0.2cm} &\hspace{1.4cm} \mathcal{E}_{\lambda_{i}}^{\mathcal{ML}} &&= \mathcal{E}_{\lambda_{j}}^{\mathcal{ML}}\\\end{aligned}$$ Since $\lambda_{i}$ and $\lambda_{j}$ are minimal laminations, by [Lemma 13](#lma3.8){reference-type="ref" reference="lma3.8"} we can conclude that $|\lambda_{i}| = |\lambda_{j}|$. Therefore, all the minimal components of $\phi(\lambda)$ have the same support and hence $\phi(\lambda)$ is a minimal measured lamination. ◻ **Remark 18**. *Any $\phi \in Aut(\mathscr{C})$ also maps minimal measured laminations to minimal measured laminations. To see this, replace the set $\mathcal{E}_{-}^{\mathcal{ML}}$ in the above proof with $\mathcal{E}^{c}_{-}$.* *A similar alteration to the proof of [Proposition 6](#prop3.2){reference-type="ref" reference="prop3.2"} will show that any $\phi \in Aut(\mathscr{C})$ maps simple closed curves to weighted simple closed curves.* **Proof of [Proposition 6](#prop3.2){reference-type="ref" reference="prop3.2"}:** *Proof.* Let $\gamma$ be a simple closed curve on $S$. Then, $$\mathcal{E}_{\gamma}^{\mathcal{ML}} = \{c\gamma: c \in \mathbb{R}_{+}\}.$$ This implies $$\phi(\mathcal{E}_{\gamma}^{\mathcal{ML}}) = \{c\phi(\gamma): c \in \mathbb{R}_{+}\}$$ and so $$\mathcal{E}_{\phi(\gamma)}^{\mathcal{ML}} = \{c\phi(\gamma): c \in \mathbb{R}_{+}\}.$$ From [Lemma 17](#lma3.12){reference-type="ref" reference="lma3.12"}, we know that $\phi(\gamma)$ is minimal. By [Lemma 11](#lma3.6){reference-type="ref" reference="lma3.6"} and [Lemma 12](#lma3.7){reference-type="ref" reference="lma3.7"}, $\mathcal{E}_{\phi(\gamma)}^{\mathcal{ML}} = \{c\phi(\gamma): c \in \mathbb{R}_{+}\}$ if and only if $|\phi(\gamma)|$ satisfies one of the following two scenarios: 1. $|\phi(\gamma)|$ is a simple closed curve. 2. $|\phi(\gamma)|$ is a uniquely ergodic minimal measured lamination that fills the entire surface $S$. Assume that case 2 holds. Now, consider any simple closed curve $\alpha$ in $S$ such that $i(\gamma, \alpha) = 0$. This gives us $i(\phi(\gamma), \phi(\alpha)) = 0$. Since $\phi(\gamma)$ is a minimal lamination that fills all of $S$, [Lemma 8](#lma3.5){reference-type="ref" reference="lma3.5"} implies that $|\phi(\gamma)| = |\phi(\alpha)|$. Let us now pick another simple closed curve $\beta$, such that $i(\gamma, \beta) = 0$ and $i(\alpha, \beta) \neq 0$. This implies $i(\phi(\gamma), \phi(\beta)) = 0$ and $i(\phi(\alpha), \phi(\beta)) \neq 0$. But earlier we concluded that $|\phi(\gamma)| = |\phi(\alpha)|$. By [Lemma 7](#lma3.4){reference-type="ref" reference="lma3.4"}, this is a contradiction.
The only possible case is case 1: $|\phi(\gamma)|$ is a simple closed curve. Therefore, $\phi(\gamma)$ is a weighted simple closed curve. ◻ ## Preserving weights of simple closed curves We can now go one step further and show that any $\phi \in Aut(\mathcal{ML})$ also preserves the weights of simple closed curves. That is, if $\phi \in Aut(\mathcal{ML})$, then $\phi$ maps a simple closed curve with unit weight to a simple closed curve with unit weight. In order to prove this, we consider the action of $\phi$ on the curve complex on $S$. **Lemma 19**. *For any $\phi \in Aut(\mathcal{ML})$, we can find $\phi^{\prime} \in Aut(\mathcal{ML})$ so that for any simple closed curve $\gamma$, there is some $k$ so that $\phi^{\prime} \circ \phi(\gamma) = k \gamma$.* *Proof.* Let $\left[ \gamma \right]$ denote the vertex in the curve complex that corresponds to the simple closed curve $\gamma$. Consider a $\phi \in Aut(\mathcal{ML})$. Let's say $\phi(\gamma) = k \gamma'$, where $k$ depends on the choice of $\gamma$. We define the action of $\phi$ on the curve complex by $$\phi^{*}(\left[ \gamma \right]) := \left[ \gamma' \right].$$ This action is well defined as $\phi$ preserves disjointness: if $i(\gamma_1, \gamma_2) = 0$, then $i(\phi(\gamma_1), \phi(\gamma_2)) = 0$. By Ivanov's theorem we can find an $f \in Mod^{\pm}(S)$ such that its action on the curve complex agrees with $\phi^*$. That is, $$f^*([\gamma]) = \phi^*([\gamma]) \text{ , for any simple closed curve } \gamma.$$ But then $(f^{-1})^{*}(\left[ \gamma' \right]) = \left[ \gamma \right]$. This means that the action of $f^{-1}$ on $\mathcal{ML}(S)$ maps $\gamma'$ to $\gamma$. Let us use $\phi'$ to denote the action of $f^{-1}$ on $\mathcal{ML}(S)$. We have $$\phi'(\gamma') = \gamma$$ The map $\phi'$ satisfies the desired relation $$\phi' \circ \phi (\gamma) = \phi' (k \gamma') = k\gamma$$ ◻ **Proposition 20**. *Automorphisms of measured laminations that preserve the intersection form map simple closed curves to simple closed curves.* *Proof.* Let $\phi \in Aut(\mathcal{ML})$. From [Proposition 6](#prop3.2){reference-type="ref" reference="prop3.2"}, we know that $\phi$ maps simple closed curves to weighted simple closed curves. By [Lemma 19](#lemma3.9){reference-type="ref" reference="lemma3.9"}, we can find a $\phi' \in Aut(\mathcal{ML})$ such that $\phi' \circ \phi(\gamma) = k \gamma$ for all simple closed curves $\gamma$, with $k$ depending on $\gamma$. We will now show that the weight $k$ has to be one for any simple closed curve. Fix a simple closed curve $\gamma_{1}$ on our surface. Find simple closed curves $\gamma_2,\gamma_3$ so that $\gamma_1, \gamma_2$ and $\gamma_3$ all pairwise intersect. All of this can be summarized in the following notation: $$\phi' \circ \phi(\gamma_{i}) = k_{i} \gamma_{i} \text{ for all }i \in \{ 1,2,3 \}$$ and $$i(\gamma_{i}, \gamma_{j}) \neq 0 \text{ for all distinct }i,j \in \{1,2,3\}$$ Observe that $$i(\gamma_{i}, \gamma_{j}) = i(\phi' \circ \phi (\gamma_{i}), \phi' \circ \phi(\gamma_{j})) = i(k_{i} \gamma_{i}, k_{j} \gamma_{j}) = k_{i} k_{j} \cdot i (\gamma_{i}, \gamma_{j})$$ for all distinct $i, j \in \{1,2,3\}$. This implies $$\begin{aligned} {1} &k_{1}k_{2} = k_{1}k_{3} = 1\\ \Rightarrow \hspace{0.2cm} & k_{1}(k_{2} - k_{3}) = 0\\ \Rightarrow \hspace{0.2cm} & k_{1} = 0 \text{ or } k_{2} = k_{3}\\\end{aligned}$$ Since $k_{1}k_{2} = 1$, the weight $k_{1}$ does not equal $0$. This gives us $k_{2} = k_{3}$. Furthermore, $k_{2}k_{3} = 1$ and the weights $k_{2}$ and $k_{3}$ are both positive.
This proves that $k_{2} = k_{3} = 1$ and therefore $k_{1} = 1$. Since the choice of our simple closed curve $\gamma_{1}$ was arbitrary, it follows that $\phi' \circ \phi(\gamma) = \gamma$ for every simple closed curve $\gamma$. The proof of [Lemma 19](#lemma3.9){reference-type="ref" reference="lemma3.9"} establishes that $\phi'$ is induced by an element belonging to $Mod^{\pm}(S)$. Consequently, $\phi(\gamma) = (\phi')^{-1}(\gamma)$ is a simple closed curve (with weight one) for every simple closed curve $\gamma$. ◻ # Proof of the theorem ** 1**. *Let $S_{g}$ be a closed, orientable, finite type surface of genus $g \geq 2$ and let $Aut(\mathcal{ML})$ denote the group of homeomorphisms on $\mathcal{ML}(S_{g})$ that preserve the intersection form. Then $$Aut(\mathcal{ML}) \cong Mod^{\displaystyle \pm} (S_{g})$$ for all $g \neq 2$. For the surface of genus 2, $$Aut(\mathcal{ML}) \cong \displaystyle{Mod^{ \pm} (S_{2})/H}$$ where $H$ is the order two subgroup generated by the hyperelliptic involution.* *Proof.* Let us denote the automorphism group of the curve complex on a surface $S_{g}$ by $Aut(CC)$. From Ivanov's theorem we know that $Aut(CC) \cong Mod^{\displaystyle \pm} (S_{g})$ when $g \neq 2$, and for the surface of genus 2, $Aut(CC) \cong \displaystyle{Mod^{ \pm} (S_{2})/H}$. We will show that $Aut(\mathcal{ML}) \cong Aut(CC)$. Consider the map $\psi: Aut(\mathcal{ML}) \rightarrow Aut(CC)$ that sends an automorphism of $\mathcal{ML}$ to its action on the curve complex, $$\psi(\phi) = \phi^{*}$$ To see that $\psi$ is injective, consider any element $\phi$ in the kernel of $\psi$. By [Proposition 20](#prop3.17){reference-type="ref" reference="prop3.17"}, any such $\phi$ fixes all the simple closed curves. Because $\phi$ is linear by [Proposition 4](#prop3.1){reference-type="ref" reference="prop3.1"}, it also fixes all the weighted multicurves. But weighted multicurves are dense in $\mathcal{ML}$. Therefore, $\phi$ is the identity on $\mathcal{ML}$, and consequently $\psi$ is injective. Observe that $Mod^{\pm}(S_{g})$ has a natural action on $\mathcal{ML}(S_{g})$ that also preserves the intersection form. For every $f \in Mod^{\pm}(S_{g})$, this action gives an element $\widetilde{f} \in Aut(\mathcal{ML})$ with $\psi(\widetilde{f}) = f^{*}$. Since, by Ivanov's theorem, every automorphism of the curve complex arises as $f^{*}$ for some $f \in Mod^{\pm}(S_{g})$, this shows that $\psi$ is surjective. ◻ # Obstruction when generalizing to currents For a surface $S$ of genus at least 3, [Theorem 1](#thm0.1){reference-type="ref" reference="thm0.1"} gives us a surjection from $Aut(\mathscr{C})$ to $Mod^{\pm}(S)$. $$Aut(\mathscr{C}) \xtwoheadrightarrow{f} Aut(\mathcal{ML}) \cong Aut(CC) \cong Mod^{\pm}(S)$$ We would like to know whether the map $f$ is injective. The kernel of $f$ consists of the maps in $Aut(\mathscr{C})$ that induce the identity automorphism of the curve complex. In other words, any $\phi$ in the kernel of $f$ fixes all simple closed curves. Now, if $f$ is injective, then the kernel of $f$ will only contain the identity map on currents. This means that $f$ being injective will imply the following statement: Any automorphism of currents that preserves the intersection form and fixes all simple closed curves is the identity on currents. Clearly, if $\phi$ is in the kernel of $f$ then it preserves the simple marked length spectrum of any current $\mu$. However, this is not enough to prove $f$ is injective. In fact, in this section we construct an infinite family of pairs of closed curves that have the same self intersection number and simple marked length spectrum. Let $\gamma$ and $\delta$ be such a pair of closed curves. It is not immediate why there cannot exist a $\phi \in ker(f)$ that maps $\gamma$ to $\delta$.
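To spell out the claim above about kernel elements and simple marked length spectra: if $\phi$ is in the kernel of $f$, then $\phi$ fixes every simple closed curve $s$, and since $\phi$ preserves the intersection form, for any geodesic current $\mu$ we have $$i(\phi(\mu), s) = i(\phi(\mu), \phi(s)) = i(\mu, s).$$ Hence $\phi(\mu)$ and $\mu$ have the same simple marked length spectrum.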
The remainder of this section will be focused on constructing these examples.

**Theorem 2**. *Let $S$ be a surface of genus at least 2. We can find infinitely many pairs of closed curves $\gamma_n$ and $\gamma_n'$ on $S$ such that $\gamma_n$ and $\gamma_n'$ have the same self intersection number and the same simple marked length spectrum.*

*Proof.* Let $P$ be a pair of pants on $S$ and let $\partial P$ be its boundary consisting of oriented curves $X, Y$ and $Z$ labeled below. We consider on $P$ the family of closed curves given in Figure 3.

![image](PNGimage24.jpg){width="\\linewidth"}

Figure 3: Closed curve with numbers of half twists $a =15$, $b = 11$ and $c = 3$. The figure shows the curve $\gamma_{(15,11,3)}$ intersecting itself minimally.

For each triple $(a,b,c)$, where $a$, $b$, $c$ are positive odd integers with $a \geq b \geq c$, we consider the closed curve $\gamma_{(a,b,c)}$. The path of the curve $\gamma_{(a,b,c)}$ is described as follows: $(1)$ one full twist about $X$, $(2)$ $b$ half twists about $Y$, $(3)$ one full twist about $Z$, $(4)$ $a$ half twists about $Y$ in the orientation opposite to $Y$, $(5)$ one full twist about $Z$ in the orientation opposite to $Z$, $(6)$ $c$ half twists about $Y$ in the orientation opposite to $Y$, and then the curve closes. Any two curves in this family are identical except for the numbers of half twists $a$, $b$ and $c$. The curve $\gamma_{(a,b,c)}$ is in the minimal position that realizes its self intersection number as long as $a \geq b \geq c$. Counting the number of intersections we get $$i(\gamma_{(a,b,c)}, \gamma_{(a,b,c)}) = \displaystyle{ \Big( \frac{a-1}{2} \Big) + 3 \Big(\frac{b-1}{2} \Big) + 5 \Big( \frac{c-1}{2} \Big) + 5 }.$$ Any two curves $\gamma_{(a,b,c)}$ and $\gamma_{(a',b',c')}$ with the same self intersection number will satisfy $$\displaystyle{ \Big( \frac{a-1}{2} \Big) + 3 \Big(\frac{b-1}{2} \Big) + 5 \Big( \frac{c-1}{2} \Big) + 5 } = \displaystyle{ \Big( \frac{a'-1}{2} \Big) + 3 \Big(\frac{b'-1}{2} \Big) + 5 \Big( \frac{c'-1}{2} \Big) + 5 },$$ that is, $$(a - a') + 3(b - b') + 5(c - c') = 0. \tag{2}$$ Now, we will show that for $\gamma_{(a,b,c)}$ and $\gamma_{(a',b',c')}$ to have the same simple marked length spectra it is enough that the following equation is satisfied: $$(a - a') + (b - b') + (c - c') = 0. \tag{3}$$ A simple closed curve in $S$ intersects $\gamma_{(a,b,c)}$ or $\gamma_{(a',b',c')}$ if and only if it passes through $P$. Any such simple closed curve will intersect $P$ in an essential arc with its end points in $\partial P$. For any essential arc $s$ in $P$ with its end points in $\partial P$, we want: $$i(s, \gamma_{(a,b,c)}) = i(s,\gamma_{(a',b',c')}).$$ Here, we let $i(\cdot, \cdot)$ denote the minimum number of intersections between $\gamma_{(a,b,c)}$ and any arc in the free homotopy class of $s$, where the endpoints of $s$ remain in $\partial P$ throughout the homotopy; the end points of $s$ need not be fixed. Figure 4 shows all possible essential simple arcs on $P$ up to homotopy. It is straightforward to see that in Figure 4 all three arcs $s_1$, $s_2$ and $s_3$ intersect $\gamma_{(a,b,c)}$ minimally.

![image](PNGimage20.png){width="\\linewidth"}

Figure 4: Arcs $s_1$, $s_2$ and $s_3$ are homotoped to intersect $\gamma_{(a,b,c)}$ a minimal number of times.
Counting the number of intersections we get $$\displaystyle{i(\gamma_{(a,b,c)}, s_1) = \Big( \frac{a-1}{2} \Big) + \Big( \frac{b-1}{2} \Big) + \Big( \frac{c-1}{2} \Big) + 2 },$$ $$\displaystyle{i(\gamma_{(a,b,c)}, s_2) = a + b + c },$$ $$\displaystyle{ i(\gamma_{(a,b,c)}, s_3) = \Big( \frac{a-1}{2} \Big) + \Big( \frac{b-1}{2} \Big) + \Big( \frac{c-1}{2} \Big) + 3 }.$$ Replacing $a$, $b$, $c$ in the above equations by $a'$, $b'$ and $c'$ respectively gives us the intersection numbers between $\gamma_{(a',b',c')}$ and the arcs. The condition $$i(s_j, \gamma_{(a,b,c)}) = i(s_j,\gamma_{(a',b',c')})$$ for $j = 1, 2$ and $3$ is equivalent to equation (3). For $k$ an even positive integer and $t$ an odd integer greater than or equal to 3, the choice $$\begin{aligned} &c = t &c'= k + t\\ &b = 4k +t &b'= 2k + t\\ &a = 6k +t &a'= 7k + t\\ \end{aligned}$$ satisfies equations (2) and (3). It also satisfies $a \geq b \geq c$ and $a' \geq b' \geq c'$. The conditions on $k$ and $t$ make certain that the numbers of half twists are odd and at least equal to 3. This gives us an infinite collection of pairs of curves $\gamma_{(a,b,c)}$ and $\gamma_{(a',b',c')}$ that have the same self intersection number and the same simple marked length spectrum. ◻
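The arithmetic behind this family is easy to check mechanically. The following short Python sketch (our own illustration, not part of the original argument) takes the counting formulas above as given and verifies, for several admissible values of $k$ and $t$, that $\gamma_{(a,b,c)}$ and $\gamma_{(a',b',c')}$ share the same self intersection number and the same intersection numbers with the arcs $s_1$, $s_2$, $s_3$, which is exactly the data to which the simple marked length spectrum comparison reduces.

```python
# Sanity check of the counting formulas from the proof of Theorem 2, for the
# parametric family c = t, b = 4k + t, a = 6k + t and c' = k + t, b' = 2k + t,
# a' = 7k + t (k even and positive, t odd, t >= 3).

def self_intersection(a, b, c):
    # i(gamma_{(a,b,c)}, gamma_{(a,b,c)}) as counted in the proof
    return (a - 1) // 2 + 3 * ((b - 1) // 2) + 5 * ((c - 1) // 2) + 5

def arc_intersections(a, b, c):
    # intersection numbers with the essential arcs s1, s2, s3 of Figure 4
    half_sum = (a - 1) // 2 + (b - 1) // 2 + (c - 1) // 2
    return (half_sum + 2, a + b + c, half_sum + 3)

for k in range(2, 12, 2):            # k even and positive
    for t in range(3, 13, 2):        # t odd and at least 3
        a, b, c = 6 * k + t, 4 * k + t, t
        ap, bp, cp = 7 * k + t, 2 * k + t, k + t
        assert a >= b >= c and ap >= bp >= cp
        assert self_intersection(a, b, c) == self_intersection(ap, bp, cp)
        assert arc_intersections(a, b, c) == arc_intersections(ap, bp, cp)

print("all checked pairs agree on self intersection and arc intersections")
```

For instance, $k = 2$ and $t = 3$ give the pair $\gamma_{(15,11,3)}$ (the curve of Figure 3) and $\gamma_{(17,7,5)}$; according to the formulas above, both have self intersection number $32$.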
--- abstract: | As a significant application of the duality theory between real Hardy spaces $H^1(\mathbb{R}^{n})$ and $\operatorname{BMO}(\mathbb{R}^{n})$, Fefferman and Stein developed a representation theorem for $\operatorname{BMO}(\mathbb{R}^{n})$ by utilizing the Riesz transforms $(n\geq 1)$. L. Carleson provided a constructive proof for the case $n=1$. In this article, we propose a representation theorem for $\operatorname{VMO}(\mathbb S)$ using Carleson's construction and demonstrate a representation theorem for $\operatorname{VMO}(\mathbb{R}^{n})$ through an iterative process. Additionally, we provide a brand-new characterization of $\operatorname{VMO}$ as an application of our results. address: - $1.$ Beijing International Center for Mathematical Research, Peking University, Beijing 100871, People's Republic of China - $2.$ Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100049, People's Republic of China - $3.$ School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China author: - Zheng-yi Lu ^3^ - Fei Tao ^1^ - Yaosong Yang ^2,3^ bibliography: - vv.bib title: Representation theorems for functions of vanishing mean oscillation --- # Introduction Fefferman and Stein developed a new perspective about the Hardy classes and they further generalized the classical Hardy spaces to $n$-dimensional theory in a decisive way [@stein1972hp] which brought to light the real variable meaning of $H^p$. Hereby, the classical theory of the Hardy spaces $H^p$ is a mixture of real and complex analysis. Let $\Delta$ denote the unit disk in the extended complex plane $\hat{\mathbb{C}}=\mathbb C\cup \{\infty\}$, and $\mathbb S=\partial \Delta$ be the unit circle. For $0<p<\infty$, the Hardy space $H^p(\Delta)$ is the class of holomorphic functions $f$ on the open unit disk $\Delta$ satisfying $$\sup _{0 \leq r<1}\left(\frac{1}{2 \pi} \int_0^{2 \pi}\left|f\left(r e^{i \theta}\right)\right|^p d \theta\right)^{\frac{1}{p}}<\infty.$$ The space $H^{\infty}(\Delta)$ is defined as the vector space of bounded holomorphic functions on the unit disk $\Delta$ such that $$\sup_{z\in \Delta}|f(z)|<\infty.$$ On the real vector space $\mathbb{R}^{n}$, the Hardy space $H^p(\mathbb{R}^{n})(0<p\leq \infty)$ contains all tempered distributions $f$ such that for some Schwartz function $\Phi$ with $\int_{\mathbb{R}^{n}}\Phi(x)dx=1$, the maximal function $$(M_{\Phi}f)(x)=\sup_{t>0}|(f*\Phi_t)|(x)$$ is in $L^{p}(\mathbb{R}^{n})$, where $*$ is the convolution and $\Phi_t(x) = t^{-n}\Phi(x/t)$. Furthermore, Fefferman and Stein first showed the duality between the real Hardy space $H^1(\mathbb{R}^{n})$ and $\operatorname{BMO}(\mathbb{R}^{n})$, the space of functions of bounded mean oscillation on $\mathbb{R}^{n}$ (see Section [2](#pre){reference-type="ref" reference="pre"} for precise definition). **Theorem 1** (Theorem 2, [@stein1972hp]). *The dual of $H^{1}(\mathbb{R}^{n})$ is $\operatorname{BMO}(\mathbb{R}^{n})$. 
More precisely,*

(i) *Suppose $\varphi\in\operatorname{BMO}(\mathbb{R}^{n})$. Then the linear functional $$f\mapsto \int_{\mathbb{R}^{n}}f(x)\varphi(x)dx,\ f\in H_0^{1}(\mathbb{R}^{n}),$$ admits a bounded extension to $H^1(\mathbb{R}^{n})$, where $H_0^1(\mathbb{R}^{n})$ is a dense subspace of $H^1(\mathbb{R}^{n})$ (see [@stein1970singular]).*

(ii) *Conversely, every continuous linear functional on $H^1(\mathbb{R}^{n})$ arises as in $(i)$ with a unique element $\varphi$ of $\operatorname{BMO}(\mathbb{R}^{n})$.*

To a large extent, it is the duality theorem and the conformal invariance that make $\operatorname{BMO}$ play a crucial role in many fields, such as univalent function theory, quasiconformal mappings, partial differential equations, probability theory, and so on. The duality of $H^1(\mathbb R)$ and $\operatorname{BMO}(\mathbb R)$ suggests the Fefferman-Stein decomposition: any $\varphi \in \operatorname{BMO}(\mathbb R)$ can be written as $$\varphi=f+Hg, \ f,g\in L^{\infty}(\mathbb R),$$ where $H$ is the Hilbert transform defined by $$Hg(x)=P.V. \frac{1}{\pi}\int_{\mathbb R}g(t)\left(\frac{1}{x-t}-\frac{1}{-t}\chi_{|t|\geq 1}(t)\right)dt.$$ It is worth noting that we may deduce the Fefferman-Stein decomposition by combining the work of Helson and Szegö [@helson1960problem] with the results of Hunt, Muckenhoupt and Wheeden [@hunt1973weighted]. However, all of this work relied on existence theorems from functional analysis, such as the Hahn-Banach theorem. To understand $\operatorname{BMO}$ more deeply, it is desirable to have a constructive approach that produces an explicit decomposition.

Let $h$ be a continuously differentiable function on $\mathbb{R}^{n}$ with $\int_{\mathbb{R}^{n}}h(x)dx=1$ for which there exists $\alpha>0$ such that $$|\nabla h(x)|<\alpha(1+|x|)^{-n-1}.$$ By a constructive argument, Carleson gave the following theorem [@carleson1976two].

**Theorem 2** (Theorem 2, [@carleson1976two]). *Let $h$ be as above, and let $\varphi\in \operatorname{BMO}(\mathbb R^{n})$ have compact support. Then there exist a constant $C(n)>0$ and a sequence of functions $b_i(x)$ such that $$\sum_{i=1}^{\infty}\left\|b_i\right\|_{\infty}\leq C(n)\left\|\varphi\right\|_{*},$$ and $t_i(y)>0$ such that $$\varphi(x)=\sum_{i=1}^{\infty} \int h_{t_i(y)}(x-y) b_i(y) d y+b_0(x),$$ where $\left\|\cdot\right\|_{*}$ denotes the $\operatorname{BMO}$-norm (see Section [2](#pre){reference-type="ref" reference="pre"}).*

Once the basic importance of $\operatorname{BMO}$ in harmonic analysis had been exhibited, a certain natural subspace of $\operatorname{BMO}$ attracted the attention of Sarason [@sarason1975functions]. This subspace is called $\operatorname{VMO}$, the space of functions of vanishing mean oscillation; roughly speaking, it occupies the same position in $\operatorname{BMO}$ as the space of bounded uniformly continuous functions on $\mathbb{R}^{n}$ occupies in $L^{\infty}$. The functions in $\operatorname{VMO}$ are those with the additional property of having uniformly small mean oscillation over small cubes or intervals. Let $\mathrm{C}$ denote the space of continuous functions on $\mathbb S$. The space $H^{\infty}+\mathrm{C}$ is a closed subalgebra of $L^{\infty}$ (with respect to Lebesgue measure on $\mathbb S$). The algebra of functions that belong together with their complex conjugates to $H^{\infty}+\mathrm{C}$ is denoted by $Q C$. Sarason proved that $Q C=\mathrm{VMO} \cap L^{\infty}$ [@sarason1975functions].
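To make the distinction between $\operatorname{BMO}$ and $\operatorname{VMO}$ concrete, consider the standard example $\log|x|$ (this illustration is ours and is not taken from the works cited above): a direct computation shows that its mean oscillation over the interval $(0,h)$ equals $2/e$ for every $h>0$, so it belongs to $\operatorname{BMO}(\mathbb R)$, but its mean oscillation over small intervals does not vanish and hence it is not in $\operatorname{VMO}(\mathbb R)$. The following short Python sketch confirms the computation numerically.

```python
# Numerical illustration (not from the cited papers): the mean oscillation of
# log|x| over (0, h) is the constant 2/e for every h > 0, so log|x| lies in
# BMO but fails the vanishing mean oscillation condition as h -> 0.
import numpy as np

def mean_oscillation(h, samples=200_000):
    x = (np.arange(samples) + 0.5) * h / samples   # midpoint rule on (0, h)
    f = np.log(x)
    return np.mean(np.abs(f - f.mean()))

for h in [1.0, 1e-2, 1e-4, 1e-6]:
    print(f"h = {h:8.0e}   mean oscillation = {mean_oscillation(h):.6f}")
print(f"2/e = {2 / np.e:.6f}")
```

For a bounded uniformly continuous function the same quantity tends to $0$ with $h$, in line with the description of $\operatorname{VMO}$ above.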
Motivated by influential earlier work of Sarason, Chang and Marshall proved a conjecture of Douglas that every closed algebra between $H^{\infty}$ and $L^{\infty}$ actually is a Douglas algebra [@chang2006some]. Later on, applications arose in connection with parabolic and elliptic PDEs with $\operatorname{VMO}$ coefficients (see [@iwaniec1998riesz; @kim2007elliptic; @krylov2007parabolic; @krylov2007parabolic1]).

One key purpose of the present paper is to give representations for $\operatorname{VMO}(\mathbb S)$ and $\operatorname{VMO}(\mathbb{R}^{n})$, respectively. To drop the assumption that $\varphi$ has compact support in Theorem [Theorem 2](#carleson){reference-type="ref" reference="carleson"} and to avoid using additional hypotheses at infinity, we first consider the compact case $\mathbb S$. Let $\mu$ be a finite signed measure on $\Delta$. The *balayage* or *sweep* of $\mu$ is the function $$S\mu(\zeta)=\int_{\Delta}P_{z}(\zeta)d\mu(z), \ \ \zeta=e^{i\theta}\in\mathbb S,$$ where $$P_{z}(\zeta)=\frac{1}{2\pi}\frac{1-|z|^2}{|1-\bar{\zeta}{z}|^2}=\frac{1}{2\pi}\frac{1-r^2}{1-2r\cos(\theta-\varphi)+r^2}:=P_{r}(\theta-\varphi),\ z=re^{i\varphi}.$$ Fubini's theorem asserts that $S\mu(\zeta)$ exists almost everywhere and $$\int_{\mathbb S}|S\mu(\zeta)||d\zeta|\leq \int_{\Delta}d|\mu|(z):=\left\|\mu\right\|.$$ With Carleson's construction, we demonstrate a representation theorem for $\operatorname{VMO}(\mathbb S)$ as follows.

**Theorem 3**. *Let $f\in \operatorname{VMO}(\mathbb S)$. Then there exist a vanishing Carleson measure $\mu \in CM_0(\Delta)$ and a continuous function $g\in C(\mathbb S)$ such that $$\begin{aligned} \label{de} f(\zeta)=g(\zeta)+S\mu(\zeta),\ \zeta=e^{i\theta}\in \mathbb S,\end{aligned}$$ where $C(\mathbb S)$ is the function space of all continuous functions defined on $\mathbb S$ and $CM_0(\Delta)$ denotes the set of vanishing Carleson measures on $\Delta$ (see Section [2](#pre){reference-type="ref" reference="pre"} for precise definitions).*

It is natural to ask whether a representation theorem for $\operatorname{VMO}$ exists in higher dimensions. Comparing with the Fefferman-Stein decomposition in dimension one and applying the Riesz transforms, we obtain a representation theorem for $\operatorname{VMO}(\mathbb{R}^{n})$. Let $\mathrm{UC(\mathbb{R}^{n})}$ be the function space of uniformly continuous functions on $\mathbb{R}^{n}$, and write $\mathrm{BUC(\mathbb{R}^{n})}$ for $L^\infty(\mathbb{R}^{n}) \cap \mathrm{UC(\mathbb{R}^{n})}.$ We have

**Theorem 4**. *Assume $\varphi \in \operatorname{BMO}\left(\mathbb{R}^n\right)$. Then the following conditions are equivalent.*

(i) *$\varphi \in \operatorname{VMO}\left(\mathbb{R}^n\right)$.*

(ii) *there exist $\varphi_j \in \mathrm{BUC}\left(\mathbb{R}^n\right)$, $j=0,\cdots,n$, such that $$\begin{aligned} \varphi(x)=\varphi_0(x)+\sum_{j=1}^{n} R_j(\varphi_j)(x), \end{aligned}$$ where $R_j$ is the $j$-th Riesz transform (see Definition [Remark 2](#rieszinfinity){reference-type="ref" reference="rieszinfinity"}).*

The paper is structured as follows: in Section [2](#pre){reference-type="ref" reference="pre"}, we provide some basic definitions and facts. In Section [3](#section3){reference-type="ref" reference="section3"}, we utilize an iteration argument to show a representation theorem for $\operatorname{VMO}(\mathbb S)$. Conversely, we also show that functions with such a representation must have vanishing mean oscillation.
Section [4](#application){reference-type="ref" reference="application"} is devoted to some applications of Theorem [Theorem 3](#vmo){reference-type="ref" reference="vmo"}. With various equivalent descriptions of $\operatorname{VMO}(\mathbb{R}^{n})$, we demonstrate a representation theorem for $\operatorname{VMO}(\mathbb{R}^{n})$ in the final section.

# Preliminaries and lemmas {#pre}

Let $Q$ be a cube in $\mathbb{R}^{n}$ with sides parallel to the axes. A locally integrable function $f \in L^{1}_{loc}\left(\mathbb{R}^{n}\right)$ is said to have *bounded mean oscillation* (abbreviated to $\operatorname{BMO}$) if $$\left\|f\right\|_{*} := \sup_{Q} \frac{1}{|Q|} \int_{Q}\left|f(x)-f_{Q}\right| d x <\infty,$$ where the supremum is taken over all possible cubes of $\mathbb{R}^{n}$ and $|\cdot|$ denotes the Lebesgue measure. Here $f_{Q}$ is the average of $f$ over $Q$, namely, $$f_{Q}=\frac{1}{|Q|} \int_{Q} f(x)d x.$$ We denote the *$\operatorname{BMO}$-norm* of $f$ by $\left\|f\right\|_{*}$. The set of all $\operatorname{BMO}$ functions on $\mathbb{R}^{n}$ is denoted by $\operatorname{BMO}(\mathbb{R}^{n})$. Strictly speaking, $\Vert \cdot \Vert_{*}$ is not a true norm, since the $\operatorname{BMO}$-norm of any constant function is $0$; in fact, $\operatorname{BMO}(\mathbb{R}^{n})$ is regarded as a Banach space with norm $\Vert \cdot \Vert_{*}$ modulo constants. It is worthwhile to notice that $f_Q$ can be substituted by any constant in the definition of $\operatorname{BMO}$ (see [@girela2001analytic; @garnett2007bounded]). Moreover, if $f$ also satisfies the condition $$\lim _{|Q| \rightarrow 0} \frac{1}{|Q|} \int_{Q}\left|f(x)-f_{Q}\right|d x= 0,$$ we say $f$ has *vanishing mean oscillation* (abbreviated to $\operatorname{VMO}$). Similarly, the set of all $\operatorname{VMO}$ functions on $\mathbb{R}^{n}$ is denoted by $\operatorname{VMO}(\mathbb{R}^{n})$. Functions of bounded mean oscillation were first introduced by John and Nirenberg in [@john1961functions]. If $f\in L^{1}(\mathbb S)$, replacing the cubes with intervals on the unit circle, we can define $\operatorname{BMO}(\mathbb S)$ and $\operatorname{VMO}(\mathbb S)$ in the same way as before. As recalled in the introduction, $\operatorname{VMO}$ is a natural closed subspace of $\operatorname{BMO}$.

Let us recall that a measure $\mu$ on $\mathbb{R}^{n+1}_{+}$ is said to be a *Carleson measure* (we denote this by $\mu\in CM(\mathbb{R}^{n+1}_+)$), or to satisfy the Carleson measure condition, if there exists a constant $C>0$ such that $$\sup_{Q}\frac{|\mu|(\widetilde{Q})}{|Q|}<C,$$ where $Q$ is as before, $|\mu|$ is the total variation of $\mu$, and $$\widetilde{Q}=\{(x,y)\in\mathbb{R}^{n+1}_+:x\in Q, 0<y<\ell(Q)\},$$ where $\ell(Q)$ is the length of the side of $Q$ (see [@carleson1962interpolations; @carleson1958interpolation; @stein1993harmonic]). A measure $\mu$ on $\Delta$ is said to be a *Carleson measure* (we denote this by $\mu\in CM(\Delta)$) if $\widetilde{Q}$ is replaced by the Carleson square $$\widetilde{I}=\{re^{i\theta}:\theta\in I\subset[0,2\pi], \ 1-|I|/2\pi<r<1\},$$ where $I$ is any subinterval of $[0,2\pi]$. Roughly speaking, a Carleson measure on a domain is a measure that does not put too much mass near the boundary when compared with the surface measure on the boundary. The infimum of the set of constants $C > 0$ for which the Carleson condition holds is known as the *Carleson norm* of the measure $\mu$, denoted by $\left\|\cdot\right\|_{\mathcal{C}}$.
Furthermore, if a Carleson measure also satisfies $$\lim_{|Q|\to 0}\frac{|\mu|(\widetilde{Q})}{|Q|}=0\ \text{ or }\ \lim_{|I|\to 0}\frac{|\mu|(\widetilde{I})}{|I|}=0,$$ we say $\mu$ is a *vanishing Carleson measure*, and we write $\mu\in CM_{0}(\mathbb{R}^{n+1}_+)$ or $\mu\in CM_{0}(\Delta)$. Note that nothing essential changes if we replace the cube and the Carleson square by $\Omega\cap B(x_0,r)$ in the definition of a Carleson measure, where $x_0\in \partial\Omega$ and $\Omega=\Delta$ or $\mathbb{R}^{n+1}_{+}$.

Recall that the *balayage* or *sweep* of a finite signed measure $\mu$ is $$S\mu(\zeta)=\int_{\Delta}P_{z}(\zeta)d\mu(z), \ \ \zeta=e^{i\theta}\in\mathbb S.$$ When $\mu$ is a Carleson measure, $S\mu\in \operatorname{BMO}$, as the following theorem shows.

**Theorem 5** (Garnett, [@garnett2007bounded]). *If $\mu$ is a Carleson measure on $\Delta$, then $S\mu\in \operatorname{BMO}(\mathbb S)$ and $$\left\|S\mu\right\|_{*}\leq C\left\|\mu\right\|_{\mathcal{C}}$$ for some universal constant $C$.*

More remarkably, Theorem [Theorem 5](#sbmo){reference-type="ref" reference="sbmo"} has a converse. That is to say, any $f\in \operatorname{BMO}(\mathbb S)$ has a representation as follows.

**Theorem 6** ([@carleson1976two; @uchiyama1980remark]). *Let $f\in \operatorname{BMO}(\mathbb S)$. Then there exist a Carleson measure $\mu \in CM(\Delta)$ and a function $g\in L^{\infty}(\mathbb S)$ such that $$f(\zeta)=g(\zeta)+S\mu(\zeta),\ \zeta=e^{i\theta}\in \mathbb S.$$*

**Remark 1**. The theorem above is an immediate consequence of the duality between the real Banach space $H^1$ and $\operatorname{BMO}$. It should be noted that, conversely, one can easily obtain the duality theorem between $\operatorname{BMO}$ and $H^1$ from this representation of $\operatorname{BMO}$. A constructive proof of such a representation was investigated by Carleson in [@carleson1976two] and Fefferman (unpublished, see [@garnett2007bounded]). Precisely, for any given ascending sequence $\{r_n\}\to 1$, there exist $h_n\in L^{\infty}(r_n\mathbb S)$, $n=1,2,\cdots$, such that $$\mu(z)=\sum_{n=1}^{\infty}\delta_{r_n}(r)h_n(r_{n}e^{i\varphi}),\ z=re^{i\varphi},$$ where $\delta_{r_n}$ is the Dirac measure supported at the point $r_n$. Furthermore, we have $\left\|g\right\|_{\infty}+\sum_{n=1}^{\infty}\left\|h_{n}\right\|_{\infty}\leq C\left\|f\right\|_{*}$ (see page 152 in [@stein1972hp]), where $C$ is independent of the function $f$.

For $\varphi\in L^p(\mathbb R^n)$, $1\leq p <\infty$, the Riesz transforms of $\varphi$ are defined as follows.

**Definition 1**. Assume $1\leq p <\infty$, $\varphi\in L^p(\mathbb R^n)$, $j=1,\cdots,n$. We define $n$ transforms: $$R_j(\varphi)(x)=P.V.\int_{\mathbb R^n}K(x-y)\varphi(y)dy,$$ where $$K(x)=\frac{\Gamma\left(\frac{n+1}{2}\right)}{\pi^{\frac{n+1}{2}}}\frac{x_j}{|x|^{n+1}}=c_n\frac{x_j}{|x|^{n+1}}.$$ The family $\{R_j\}$, $j=1, \cdots,n$, is called the *Riesz transform family*. In the context of harmonic analysis, the Riesz transform family is a natural generalization of the Hilbert transform to Euclidean spaces of dimension $n$. Being given by convolution with a kernel that has a singularity at the origin, the Riesz transform $R_j$ is easily verified to be a particular type of singular integral operator.

**Remark 2**. Let $B_r$ denote the open ball in $\mathbb{R}^{n}$ with radius $r$ and centered at the origin.
To make the Riesz transform of $\varphi \in L^{\infty}$ well defined, we set $$R_j(\varphi)(x)=P.V.\int_{\mathbb R^n}\left(K(x-y)-K(-y)\chi_{\mathbb R^n\backslash \overline{B}_1}(y)\right)\varphi(y)dy.$$

Fefferman and Stein established the following theorem, whose condition $(ii)$ is the well-known Fefferman-Stein decomposition.

**Theorem 7** (Theorem 3, [@stein1972hp]). *The following three conditions on $\varphi$ are equivalent:*

(i) *$\varphi\in \operatorname{BMO}(\mathbb{R}^{n})$.*

(ii) *there exist $\varphi_j \in L^{\infty}(\mathbb{R}^{n})$, $j=0,\cdots, n$, such that $$\begin{aligned} \varphi(x)=\varphi_0(x)+\sum_{j=1}^nR_j(\varphi_j)(x).\end{aligned}$$*

(iii) *$\varphi$ satisfies $$\begin{aligned} \label{bmopoisson} \int_{\mathbf{R}^n} \frac{|\varphi(x)| }{1+|x|^{n+1}}d x<\infty,\end{aligned}$$ and $$\sup _{x_0 \in \mathbf{R}^n} \int_{T\left(x_0, h\right)} t|\nabla \varphi(x,t)|^2 d x d t \leq A h^n,\ 0<h<\infty,$$ where $$|\nabla \varphi(x,t)|^2=\left|\frac{\partial \varphi}{\partial t}\right|^2+\sum_{j=1}^n\left|\frac{\partial \varphi}{\partial x_j}\right|^2,$$ $$T\left(x_0, h\right)=\left\{(x, t): 0<t<h,\left|x-x_0\right|<h\right\}.$$*

**Remark 3**. The function $\varphi(x, t)$ is the Poisson integral of $\varphi$. As in the one-dimensional case, condition [\[bmopoisson\]](#bmopoisson){reference-type="eqref" reference="bmopoisson"} guarantees the existence of the Poisson integral of $\varphi$.

**Remark 4**. We note that the original proof by Fefferman and Stein in [@stein1972hp] also shows that the decomposition in $(ii)$ of Theorem [Theorem 7](#bmornequ){reference-type="ref" reference="bmornequ"} can be chosen with $$\begin{aligned} \sum_{j=0}^{n}\left\|\varphi_j\right\|_{\infty}\leq C\left\|\varphi\right\|_{*},\end{aligned}$$ where the constant $C$ is independent of $\varphi$.

**Theorem 8**. *$R_j$ is a bounded linear operator from $L^{\infty}(\mathbb{R}^{n})$ to $\operatorname{BMO}(\mathbb{R}^{n})$.*

*Proof.* For the sake of simplicity, we will not explicitly indicate principal values; all singular integrals below are understood in the principal value sense. Fix $x_0$, let $Q$ be a cube in $\mathbb{R}^n$ with center $x_0$, and let $Q'$ be the concentric cube with triple side length. Write $\varphi=\varphi_1+\varphi_2$, where $\varphi_1=\varphi \chi_{Q'},\ \varphi_2=\varphi-\varphi \chi_{Q'}$. Thus, $$R_j(\varphi)=R_j(\varphi_1)+R_j(\varphi_2).$$ Accordingly, we shall estimate $R_j(\varphi_1)$ and $R_j(\varphi_2)$ separately. For $R_j(\varphi_1)$, the integral $\int_{\mathbb R^n}K(-y)\chi_{\mathbb R^n\backslash \overline{B}_1}(y)\varphi_1(y)dy$ is a constant bounded by $c\left\|\varphi\right\|_{\infty}$ for some universal constant $c$; we denote this constant by $C(\varphi)$.
Then $$\begin{aligned} R_j(\varphi_1)=&\int_{\mathbb R^n}K(x-y)\varphi_1(y)dy+C(\varphi).\end{aligned}$$ and hence $$\begin{aligned} \label{rj_1} \notag &\frac{1}{|Q|}\int_{Q}\left|R_j(\varphi_1)-\left(R_j(\varphi_1)\right)_{Q}\right|dx\\ = & \frac{1}{|Q|}\int_{Q}\left|\int_{\mathbb R^n}K(x-y)\varphi_1(y)dy-\left(\int_{\mathbb R^n}K(x-y)\varphi_1(y)dy\right)_{Q}\right|dx\notag\\ \leq &\frac{2}{|Q|}\int_{Q}\left|\int_{\mathbb R^n}K(x-y)\varphi_1(y)dy\right|dx.\end{aligned}$$ By Hölder's inequality, we obtain then $$\begin{aligned} \label{rest1} \notag\frac{1}{|Q|}\int_{Q}\left|\int_{\mathbb R^n}K(x-y)\varphi_1(y)dy\right|dx&\leq \left(\frac{1}{|Q|}\int_{Q}\left|\int_{\mathbb R^n}K(x-y)\varphi_1(y)dy\right|^2dx \right)^{\frac{1}{2}}\\\notag &= \left(\frac{1}{|Q|}\int_{Q}\left|R_j(\varphi_1)-C(\varphi)\right|^2dx \right)^{\frac{1}{2}}\\ &\leq \left(\frac{2}{|Q|}\int_{Q}\left|R_j(\varphi_1)\right|^2dx \right)^{\frac{1}{2}}+2c\left\|\varphi\right\|_{\infty}.\end{aligned}$$ Note that $\varphi_1\in L^{2}$, the right-hand side of the above inequality is estimated as follows. $$\begin{aligned} \label{rj11} \left(\int_Q\left|R_j(\varphi_1)\right|^2dx \right)^{\frac{1}{2}} %=&\left(\int_Q\left|\int_{\rr^n}K(x-y)\varphi_1(y)dy\right|^2dx \right)^{\frac{1}{2}}\notag\\\notag %\leq &\left(\int_{\rr^n}\left|\int_{\rr^n}K(x-y)\varphi_1(y)dy\right|^2dx \right)^{\frac{1}{2}}\\ \leq\left\|R_j(\varphi_1)\right\|_{2}\leq \left\|\varphi_1\right\|_{2}\leq |Q'|^{\frac{1}{2}}\left\|\varphi\right\|_{\infty}.\end{aligned}$$ The following fact explains why the second inequality persists, if $\varphi\in L^{2}$, for the corresponding Riesz transforms, we have $$\sum_{j=1}^n\left\|R_j(\varphi)\right\|_{2}^{2}=\left\|\varphi\right\|_{2}^2.$$ Combined with [\[rj_1\]](#rj_1){reference-type="eqref" reference="rj_1"}, [\[rest1\]](#rest1){reference-type="eqref" reference="rest1"} and [\[rj11\]](#rj11){reference-type="eqref" reference="rj11"}, we conclude that $$\begin{aligned} \label{rj111} \frac{1}{|Q|}\int_{Q}\left|R_j(\varphi_1)-\left(R_j(\varphi_1)\right)_{Q}\right|dx \leq 4(c+3^{\frac{n}{2}})\left\|\varphi\right\|_{\infty}.\end{aligned}$$ On the other hand, it should be noticed that $$\begin{aligned} \label{rj2-x0} \left|R_j(\varphi_2)(x)-R_j(\varphi_2)(x_0)\right|\leq c_n\int_{\mathbb{R}^n \backslash \overline{Q'}}\left|\frac{x_j-y_j}{|x-y|^{n+1}}-\frac{x_{0,j}-y_j}{|x_0-y|^{n+1}}\right||\varphi (y)|dy.\end{aligned}$$ For all $x \in Q$, $y \in \mathbb{R}^n \backslash \overline{Q'}$, it is verified that $|x-y|>\ell(Q)$, $|x_0-y|>\ell(Q)$. 
Therefore, $$\begin{aligned} |x_0-y|&\leq |x-x_0|+|x-y|\leq \frac{\sqrt{n}}{2}\ell(Q)+|x-y|\leq \left(1+\frac{\sqrt{n}}{2}\right)|x-y|,\\ |x-y|&\leq |x-x_0|+|x_0-y|\leq \frac{\sqrt{n}}{2}\ell(Q)+|x_0-y|\leq \left(1+\frac{\sqrt{n}}{2}\right)|x_0-y|.\end{aligned}$$ This of course yields that $|x-y|$ and $|x_0-y|$ are comparable, thus $$\begin{aligned} \label{x0-yx-y} \left||x-y|^{n+1}-|x_0-y|^{n+1}\right|&\leq \left||x-y|-|x_0-y|\right|\sum_{k=1}^{n}|x-y|^k|x_0-y|^{n-k}\notag\\ &\leq C(n)|x-x_0||x_0-y|^{n}.\end{aligned}$$ From [\[x0-yx-y\]](#x0-yx-y){reference-type="eqref" reference="x0-yx-y"} and triangle inequality, we have $$\begin{aligned} \label{x0-y-x-y} \notag&\left| \frac{x_j-y_j}{|x-y|^{n+1}}-\frac{x_{0,j}-y_j}{|x_0-y|^{n+1}}\right| \\ % =&\frac{\left|(x_j-y_j)|x_0-y|^{n+1}-(x_{0,j}-y_j)|x-y|^{n+1}\right|}{|x_0-y|^{n+1}|x-y|^{n+1}}\\ \leq & \frac{|x_j-x_{0,j}||x_0-y|^{n+1}+|x_{0,j}-y_j|\left||x-y|^{n+1}-|x_0-y|^{n+1}\right|}{|x_0-y|^{n+1}|x-y|^{n+1}}\notag\\ \leq & \frac{C(n)\ell(Q)}{|x_0-y|^{n+1}}.\end{aligned}$$ We shall continue our estimate on [\[rj2-x0\]](#rj2-x0){reference-type="eqref" reference="rj2-x0"}, according to [\[x0-y-x-y\]](#x0-y-x-y){reference-type="eqref" reference="x0-y-x-y"}, we then have $$\begin{aligned} \label{rj-r0} \left|R_j(\varphi_2)(x)-R_j(\varphi_2)(x_0)\right|&\leq C(n)\ell(Q)\int_{\mathbb{R}^n \backslash B_{\ell(Q)}}\frac{|\varphi (y)|}{|x_0-y|^{n+1}}dy\notag\\\notag & \leq C(n)\ell(Q)\left\|\varphi\right\|_{\infty}\int_{\mathbb{R}^n \backslash B_{\ell(Q)}}\frac{1}{|x_0-y|^{n+1}}dy\\ &= C(n)\ell(Q)\left\|\varphi\right\|_{\infty}\int_{\ell(Q)}^{\infty}\frac{1}{r^2}dr=C(n)\left\|\varphi\right\|_{\infty}.\end{aligned}$$ It is easy to see that $R_j(\varphi_2)(x_0)$ is a constant depending only on $\varphi$ and $Q$. By [\[rj-r0\]](#rj-r0){reference-type="eqref" reference="rj-r0"}, we know that for any cube $Q\subset\mathbb{R}^n$, $$\begin{aligned} \label{rj2} &\frac{1}{|Q|}\int_{Q}\left|R_j(\varphi_2)(x)-\left(R_j(\varphi_2)\right)_Q\right|dx \notag\\ =&\frac{1}{|Q|}\int_{Q}\left|\left(R_j(\varphi_2)(x)-R_j(\varphi_2)(x_0)\right)-\left(R_j(\varphi_2)-R_j(\varphi_2)(x_0)\right)_Q\right|dx \notag\\ \leq &\frac{2}{|Q|}\int_{Q}\left|R_j(\varphi_2)(x)-R_j(\varphi_2)(x_0)\right|dx\leq C(n)\left\|\varphi\right\|_{\infty},\end{aligned}$$ which together with [\[rj111\]](#rj111){reference-type="eqref" reference="rj111"} implies that $$\begin{aligned} \left\|R_j(\varphi)\right\|_{*}\leq \left\|R_j(\varphi_1)\right\|_{*}+\left\|R_j(\varphi_2)\right\|_{*}\leq C(n)\left\|\varphi\right\|_{\infty}\end{aligned}$$ and completes the proof of the theorem, that is to say, $R_j$ is a bounded linear operator from $L^\infty(\mathbb R^n)$ to $\operatorname{BMO}(\mathbb R^n)$. ◻ # Proof of Theorem [Theorem 3](#vmo){reference-type="ref" reference="vmo"} {#section3} We give the proof by an iteration argument. 
Suppose $f\in \operatorname{VMO}(\mathbb S)$, since $\operatorname{VMO}$ is the closed subspace of $\operatorname{BMO}$, Remark [Remark 1](#rmk1){reference-type="ref" reference="rmk1"} indicates that there exists $g_1(\zeta)\in L^{\infty}(\mathbb S),$ $\mu_{1}(z)\in CM(\Delta)$ and a constant $C>0$ (irrelevant to $f$) such that $$f(\zeta)=g_{1}(\zeta)+S\mu_{1}(\zeta),$$ $$\left\|g_1\right\|_{\infty}+\sum_{n=1}^{\infty}\left\|h_{n}^{(1)}\right\|_{\infty}\leq C\left\|f\right\|_{*},$$ where $$\mu_{1}(z)=\sum_{n=1}^{\infty}\delta_{r_n}(r)h_n^{(1)}(r_{n}e^{i\varphi}),\ h_n^{(1)} \in L^{\infty}(r_n\mathbb S),\ z=re^{i\varphi}.$$ From the Theorem 5.1 in [@garnett2007bounded] and [@sarason1975functions], if $f\in \operatorname{VMO}(\mathbb S)$, then there exists $r^{(1)}>0$ such that $$\left\|f-P_{r^{(1)}}*f\right\|_{*}<\left\|f\right\|_{*}/2.$$ Applying Fubini's theorem to the following equality, it is not hard to find that $$P_{r^{(1)}}*f=P_{r^{(1)}}*g_1+P_{r^{(1)}}*S\mu_{1}:=\tilde{g}_1+S\tilde{\mu}_{1},$$ where $\tilde{\mu}_1(z)=\sum_{n=1}^{\infty}\delta_{r_n}(r)\tilde{h}_n^{(1)}(r_{n}e^{i\varphi}),\ \tilde{h}_n^{(1)}=P_{r^{(1)}}*h_n^{(1)}$. Since $g_1\in L^{\infty}(\mathbb S),\ h_{n}^{(1)}\in L^{\infty}(r_{n}\mathbb S)$, it is easy to see from the properties of convolution that $\tilde{g}_1\in C(\mathbb S)$ and $P_{r^{(1)}}*h_{n}^{(1)}\in C(r_n\mathbb S)$. Moreover, we have $$\left\|\tilde{g}_1\right\|_{\infty}+\sum_{n=1}^{\infty}\left\|\tilde{h}_{n}^{(1)}\right\|_{\infty}\leq\left\|g_1\right\|_{\infty}+\sum_{n=1}^{\infty}\left\|h_{n}^{(1)}\right\|_{\infty}\leq C\left\|f\right\|_{*}.$$ It must be made clear that the interchange of the order of integration without any explanation in the following proof can be verified by dedicated readers for the uniformity of the convergence. Set $f^{(1)}=f-P_{r^{(1)}}*f$, $\left\|f^{(1)}\right\|_{*}\leq {\left\|f\right\|_{*}}/{2}$. By a slight calculation, we claim that $f^{(1)}\in\operatorname{VMO}(\mathbb S)$. First, we can represent the mean integral of $f^{(1)}$ over $I\subset \mathbb S$ as follows, $$\begin{aligned} f_{I}^{(1)}=\frac{1}{|I|}\int_{I}f^{(1)}(e^{i\theta})d\theta&=\frac{1}{|I|}\int_{I}\left(\int_{0}^{2\pi}\left[f(e^{i\theta})-f(e^{i(\theta-\varphi)})\right]P_{r^{(1)}}(\varphi)d\varphi\right)d\theta\\ &=\int_{0}^{2\pi}P_{r^{(1)}}(\varphi)(f-f_{\varphi})_{I}d\varphi,\end{aligned}$$ where $f_\varphi=f(e^{i(\theta-\varphi)})$ is the translation of $f(e^{i\theta})$. 
Then, for any $I$ with small enough arclength $\delta$, we obtain that $$\begin{aligned} &\frac{1}{|I|} \int_I\left|f^{(1)}(e^{i \theta})-f_I^{(1)}\right| d \theta\\ =&\frac{1}{|I|} \int_I \left| \int_0^{2 \pi} P_{r^{(1)}}(\varphi) \left(f(e^{i \theta})-f(e^{i(\theta-\varphi)})\right)d \varphi-\int_0^{2 \pi} P_{r^{(1)}}(\varphi)\left(f-f_{\varphi}\right)_I d \varphi\right| d\theta\\ =& \frac{1}{|I|} \int_I\left|\int_0^{2 \pi} P_{r^{(1)}}(\varphi)\left[\left(f(e^{i \theta})-f_I\right)-\left(f_{\varphi}-\left(f_{\varphi}\right)_I\right)\right] d \varphi\right| d \theta \end{aligned}$$ Again by Fubini's theorem, we shall get $$\begin{aligned} &\frac{1}{|I|} \int_I\left|f^{(1)}(e^{i \theta})-f_I^{(1)}\right| d \theta\\ \leq &\int_0^{2 \pi} P_{r^{(1)}}(\varphi)\left(\frac{1}{|I|} \int_I\left|f(e^{i \theta})-f_I\right| d \theta+\frac{1}{|I|} \int_I\left|f_{\varphi}-(f_{\varphi})_{I}\right| d \theta\right) d \varphi\\ \leq &\ 2 \sup_{|I|=\delta}\frac{1}{|I|}\int_{I}|f-f_{I}|d\theta.\end{aligned}$$ Let $\delta \rightarrow 0, f \in \operatorname{VMO}(\mathbb S)$ implies that $f^{(1)} \in \operatorname{VMO}(\mathbb S)$. Now, we shall claim that $\tilde{\mu}_{1}$ is a (vanishing) Carleson measure. It holds that $$\begin{aligned} \label{carleson measure} \notag\frac{1}{|I|}\int_{\widetilde{I}}d|\tilde{\mu}_1|(z)&\leq \frac{1}{|I|}\int_{I}\sum_{\{n:r_{n}\geq 1-|I|/2\pi\}}^{\infty}|\tilde{h}_{n}^{(1)}(r_{n}e^{i\varphi})|d\varphi\\ %&\leq \sum_{\{n:r_{n}\geq 1-|I|/2\pi\}}^{\infty}\frac{1}{|I|}\int_{I}|\tilde{h}_{n}^{(1)}(r_{n}e^{i\varphi})|d\varphi\\ &\leq \sum_{\{n:r_{n}\geq 1-|I|/2\pi\}}^{\infty}\left\|\tilde{h}_{n}^{(1)}\right\|_{\infty}\leq\sum_{\{n:r_{n}\geq 1-|I|/2\pi\}}^{\infty}\left\|h_{n}^{(1)}\right\|_{\infty},\end{aligned}$$ for any Carleson square $$\widetilde{I}=\{z=re^{i\varphi}:\varphi\in I,\ r\in (1-|I|/2\pi,1)\},\ I\subset [0, 2\pi].$$ It follows then $\tilde{\mu}_{1}$ is a Carleson measure. Then repeating the above argument on $f^{(1)}$, there exists $r^{(2)}>0$ such that $$\left\|f^{(1)}-P_{r^{(2)}} * f^{(1)}\right\|_* \leq \frac{\left\|f^{(1)}\right\|_{*}}{2} \leq \frac{\left\|f\right\|_*}{4},$$ and there exists $g_2 \in L^{\infty}(\mathbb S),\ \mu_2(z)=\sum_{n=1}^{\infty} \delta_{r_n}(r) h_{n}^{(2)}(r_n e^{i\varphi})$, $h_{n}^{(2)}\in L^{\infty}(r_n\mathbb S)$ such that $$f^{(1)}=g_2+S \mu_2,$$ and $$\left\|g_2\right\|_{\infty}+\sum_{n=1}^{\infty}\left\|h_{n}^{(2)}\right\|_{\infty}\leq C\left\|f^{(1)}\right\|_{*}\leq \frac{C}{2}\left\|f\right\|_{*}.$$ We get $\tilde{g}_2$ and $\tilde{\mu}_2$ as before, and we have $$\begin{aligned} \left\|\tilde{g}_2\right\|_{\infty}+\sum_{n=1}^{\infty}\left\|\tilde{h}_n^{(2)}\right\|_{\infty} \leq\left\|g_2\right\|_{\infty}+\sum_{n=1}^{\infty}\left\|h_n^{(2)}\right\|_{\infty} \leq C\|f\|_*.\end{aligned}$$ One does this infinitely often by iterating, obtaining the limit function $g$ and the limit measure $\mu$. Therefore, $$f=\sum_{k=1}^{\infty}\tilde{g}_k+\int_{\Delta}P_{z}(\zeta)\left(\sum_{k=1}^{\infty}d\tilde{\mu}_k(z)\right):=g+S\mu.$$ Since $\tilde{g}_{k}$ is continuous on the unit circle $\mathbb S$, and $$\left\|\sum_{k=1}^{\infty}\tilde{g}_k\right\|_{\infty}\leq \sum_{k=1}^{\infty}\left\|\tilde{g}_k\right\|_{\infty}\leq \sum_{k=1}^{\infty}\frac{C\left\|f\right\|_{*}}{2^{k-1}}=2C\left\|f\right\|_{*}.$$ The convergence of $\sum_{k=1}^{\infty}\tilde{g}_k$ is uniform, which indicates $g\in C(\mathbb S)$. Now it remains to prove that $\mu$ is a vanishing Carleson measure. 
From the estimate [\[carleson measure\]](#carleson measure){reference-type="eqref" reference="carleson measure"}, it follows therefore that $$\begin{aligned} \lim_{|I|\to 0}\frac{1}{|I|}\int_{\widetilde{I}}d|\mu|(z)\leq& \lim_{|I|\to 0}\frac{1}{|I|}\int_{\widetilde{I}}\sum_{k=1}^{\infty}d|\tilde{\mu}_k|(z)\\ \leq&\lim_{|I|\to 0}\sum_{k=1}^{\infty}\sum_{\{n:r_n\geq 1-|I|/2\pi\}}\left\|\tilde{h}_{n}^{(k)}\right\|_{\infty}.\end{aligned}$$ On the other hand, $$\begin{aligned} %&\lim_{|I|\to 0}\sum_{k=1}^{\infty}\sum_{\{n:r_n\geq 1-|I|/2\pi\}}\left\|h_{n}^{(k)}\right\|_{\infty}\\ %=&\sum_{k=1}^{\infty}\lim_{|I|\to 0}\sum_{\{n:r_n\geq 1-|I|/2\pi\}}\left\|h_{n}^{(k)}\right\|_{\infty}=0.\\ &\sum_{k=1}^{\infty}\sum_{\{n:r_n\geq 1-|I|/2\pi\}}\left\|\tilde{h}_{n}^{(k)}\right\|_{\infty}\leq\sum_{k=1}^{\infty}\sum_{n=1}^{\infty}\left\|\tilde{h}_{n}^{(k)}\right\|_{\infty}\leq\sum_{k=1}^{\infty}\frac{C\left\|f\right\|_{*}}{2^{k-1}}=2C\left\|f\right\|_{*}.\end{aligned}$$ It readily follows from Tannery's theorem that $$\begin{aligned} 0\leq \lim_{|I|\to 0}\frac{1}{|I|}\int_{\widetilde{I}}d|\mu|(z)\leq\sum_{k=1}^{\infty}\lim_{|I|\to 0}\sum_{\{n:r_n\geq 1-|I|/2\pi\}}\left\|\tilde{h}_{n}^{(k)}\right\|_{\infty}=0.\end{aligned}$$ That is to say, for any given $\varepsilon>0$, there exists $\delta>0$ such that for $|I|<\delta$, $$\frac{1}{|I|}\int_{\widetilde{I}}d|\mu|(z)\leq\varepsilon,$$ which gives the required result that $\mu$ is a vanishing Carleson measure. Here we finished the proof of Theorem [Theorem 3](#vmo){reference-type="ref" reference="vmo"}. Conversely, any function $f$ which has a representation of the form [\[de\]](#de){reference-type="eqref" reference="de"} has vanishing mean oscillation. Since $\operatorname{VMO}(\mathbb S)$ is the closure of $C(\mathbb S)$ in $\operatorname{BMO}(\mathbb S)$, it suffices to show that **Proposition 9**. *If $\mu$ is a vanishing Carleson measure on $\Delta$, then $S\mu\in \operatorname{VMO}(\mathbb S)$.* *Proof.* A slight modification of the proof of Theorem [Theorem 5](#sbmo){reference-type="ref" reference="sbmo"} can be used to show this proposition. A proof will now be given for the convenience of the reader. Writing $\mu=\mu^{+}-\mu^{-}$, where $|\mu|=\mu^{+}+\mu^{-}$, $\mu^{+}\geq 0$, $\mu^{-}\geq 0$. Without loss of generality, we can suppose $\mu\geq 0$. Let $I_0$ be an any fixed interval on $[0,2\pi]$ with $|I_0|=h$, let $I_k$ be the concentric interval with length $2^kh$, $I_{N_0+1}=2\pi$, where $N_0$ is the natural number such that $2^{N_0}\leq 2\pi<2^{N_0+1}$. Set $$c_h=\sup_{|I|\leq h}\frac{\mu(\widetilde{I})}{|I|},$$ we have $\lim_{h\to 0}c_{h}=0$ uniformly. 
For any $e^{i\theta},e^{i\theta_0}\in e^{iI_{0}}$, $z\in\widetilde{I}_{k}\backslash\widetilde{I}_{k-1}$, $2\leq k\leq N_0+1$, a simple geometric argument (for details, see [@girela2001analytic; @garnett2007bounded]) can be made to show that there exists a positive constant $C$ (independent of $I_0$) such that $$\begin{aligned} \label{pz} |P_{z}(e^{i\theta})-P_{z}(e^{i\theta_0})|=\left|\frac{1-|z|^2}{|e^{i\theta}-z|^2}-\frac{1-|z|^2}{|e^{i\theta_0}-z|^2}\right|\leq \frac{C}{2^{2k}|I_0|}.\end{aligned}$$ By dividing the unit disk into the union of different Carleson squares, we obtain $$\begin{aligned} \label{su} S\mu(e^{i\theta})=\int_{\Delta}P_{z}(e^{i\theta})d\mu(z)=\left(\int_{\widetilde{I}_{1}}+\sum_{k=2}^{N_0+1}\int_{\widetilde{I}_{k}\backslash\widetilde{I}_{k-1}}\right)P_{z}(e^{i\theta})d\mu(z).\end{aligned}$$ Replacing $(S\mu)_{I_0}$ with a constant $\sum_{k=2}^{N_0+1}\int_{\widetilde{I}_{k}\backslash\widetilde{I}_{k-1}}P_{z}(e^{i\theta_0})d\mu(z)$ and combing with [\[su\]](#su){reference-type="eqref" reference="su"}, we deduce that $$\begin{aligned} \label{1} \notag&\frac{1}{|I_0|}\int_{I_0}\left|S\mu(e^{i\theta})-\sum_{k=2}^{N_0+1}\int_{\widetilde{I}_{k}\backslash\widetilde{I}_{k-1}}P_{z}(e^{i\theta_0})d\mu(z)\right|d\theta\\\notag \leq &\frac{1}{|I_0|}\int_{I_0}\left(\int_{\widetilde{I}_1}P_{z}(e^{i\theta})d\mu(z)\right)d\theta\\ &+\sum_{k=2}^{N_0+1}\frac{1}{|I_0|}\int_{I_0}\left(\int_{\widetilde{I}_{k}\backslash\widetilde{I}_{k-1}}\left|P_{z}(e^{i\theta})-P_{z}(e^{i\theta_0})\right|d\mu(z)\right)d\theta.\end{aligned}$$ On the one hand, Fubini's theorem suggests that $$\begin{aligned} \label{dybf} \frac{1}{\left|I_0\right|} \int_{I_0}\left(\int_{\widetilde{I}_1} P_z\left(e^{i \theta}\right) d \mu(z)\right) d \theta \leq \frac{2}{\left|\widetilde{I}_1\right|} \int_{\widetilde{I}_1} d \mu(z) \leq 2 c_{2 h} .\end{aligned}$$ On the other hand, by [\[pz\]](#pz){reference-type="eqref" reference="pz"} it is easy to see that $$\begin{aligned} \label{2} &\sum_{k=2}^{N_0+1}\frac{1}{|I_0|}\int_{I_0}\left(\int_{\widetilde{I}_{k}\backslash\widetilde{I}_{k-1}}\left|P_{z}(e^{i\theta})-P_{z}(e^{i\theta_0})\right|d\mu(z)\right)d\theta\notag\\ \leq& \sum_{k=2}^{N_0+1}\int_{\widetilde{I}_{k}\backslash\widetilde{I}_{k-1}}\frac{C}{2^{2k}|I_0|}d\mu(z)\leq C\sum_{k=2}^{N_0+1}\frac{\mu(\widetilde{I}_k)}{2^{2k}|I_0|}\leq C\sum_{k=2}^{N_0+1}\frac{c_{2^kh}}{2^{k}}.\end{aligned}$$ For any $\varepsilon>0$ fixed, there exists $N>0$ such that $\sum_{k=N}^{\infty}\frac{\left\|\mu\right\|_{\mathcal{C}}}{2^k}<\varepsilon$. Then fix this $N$, there exists $\delta>0$ such that $\forall\ $ $h\leq \delta2^{-N}$, we have $c_{2^{k}h}<\varepsilon$ for any $k< N$. Here we finished the proof of the proposition. More specifically speaking, for arbitrary $\varepsilon>0$, let us assume that $N<N_0+2$, (otherwise the following third term falls away). 
In combination with [\[1\]](#1){reference-type="eqref" reference="1"}, [\[dybf\]](#dybf){reference-type="eqref" reference="dybf"} and [\[2\]](#2){reference-type="eqref" reference="2"}, there exists $\delta>0$ such that for any $|I_0|=h<\delta$, we have then clearly: $$\begin{aligned} &\frac{1}{|I_0|}\int_{I_0}\left|S\mu(e^{i\theta})-\sum_{k=2}^{N_0+1}\int_{\widetilde{I}_{k}\backslash\widetilde{I}_{k-1}}P_{z}(e^{i\theta_0})d\mu(z)\right|d\theta\\ \leq&\ 2c_{2h}+\sum_{k=2}^{N-1}\frac{c_{2^kh}}{2^k}+\sum_{k=N}^{N_0+1}\frac{c_{2^kh}}{2^k}\\ \leq& \ 2\varepsilon+\sum_{k=2}^{\infty}\frac{\varepsilon}{2^k}+\sum_{k=N}^{\infty}\frac{\left\|\mu\right\|_{\mathcal{C}}}{2^k}< 4\varepsilon,\end{aligned}$$ which yields that $S\mu\in\operatorname{VMO}(\mathbb S)$. ◻ Combined with Theorem [Theorem 3](#vmo){reference-type="ref" reference="vmo"} and Proposition [Proposition 9](#vmo1){reference-type="ref" reference="vmo1"}, a representation theorem of $\operatorname{VMO}(\mathbb S)$ will be given. **Corollary 10**. *Let $f\in L^{1}(\mathbb S)$. Then $f\in \operatorname{VMO}(\mathbb S)$ if and only if there exists a vanishing Carleson measure $\mu \in CM_0(\Delta)$ and a continuous function $g\in C(\mathbb S)$ such that $$f(\zeta)=g(\zeta)+S\mu(\zeta),\ \zeta=e^{i\theta}\in \mathbb S.$$* # Applications of Theorem [Theorem 3](#vmo){reference-type="ref" reference="vmo"} {#application} The proof of the duality theorem would have been considerably easier had it been the case that $|\nabla \varphi| dxdy$ were a Carleson measure whenever $\varphi\in \operatorname{BMO}(\mathbb R)$. However, this is not the case even when $\varphi$ is a Blaschke product. The Littlewood--Paley expression $|\nabla\varphi|^2y dxdy$ can overcome this difficulty. Another approach was proposed by Varopoulos [@varopoulos1977bmo; @varopoulos1978remark] when he attempted to study the $\bar{\partial}$-equation associated with the Corona problem and the relation of that equation with $\operatorname{BMO}$ functions on the boundary. Varopoulos considered a smooth function $F$ on $\mathbb{R}^{n+1}_{+}$ with $|\nabla F| dxdy\in CM(\mathbb{R}^{n+1}_{+})$ such that the boundary value of $F$ deviated from $\varphi$ by a bounded function $g$. Inspired by his results, we derive the following proposition. **Proposition 11**. *Let $f\in\operatorname{VMO}(\mathbb S)$. Then there exists a smooth function $F\in C^{\infty}(\Delta)$ that satisfies the following conditions.* (i) *$\lim_{r\to 1}F(re^{i\theta})-f(e^{i\theta})\in C(\mathbb S)$;* (ii) *The measure $|\nabla F|dxdy$ is a vanishing Carleson measure on $\Delta$.* Before we prove this proposition, we now define an auxiliary function as follows: $$\widetilde{P}_z(u)=P_z(\zeta) \chi_{(r, 1)}(\rho),$$ where $$z=r e^{i \varphi} \in \Delta,\ u=\rho \zeta \in \Delta,\ 0<r, \rho<1,\ \zeta=e^{i\theta} \in \mathbb S,$$ and $\chi_{(r, 1)}$ denotes the characteristic function of the interval $(r, 1)$. 
Let $z\in\Delta$ be fixed, and denote by $$\begin{aligned} \nu_z(u)=\frac{\partial \widetilde{P}_z}{\partial \rho}(u),\ \lambda_z(u)=\frac{\partial \widetilde{P}_z}{\partial \theta}(u),\ u=\rho e^{i\theta},\ 0<\rho<1,\end{aligned}$$ It is perfectly clear that $\nu_z$ is the Lebesgue linear measure on the circle $\{u:|u|=|z|=r\}$ multiplied by the Poisson kernel $P_z(\zeta)$ and $$\lambda_z(u)=\frac{d P_z(\zeta)}{d\theta} \chi_{(r, 1)}(\rho).$$ It follows from Lemma 1.3.2 and Lemma 1.3.3 in [@varopoulos1977bmo] that $\left\|\nu_z\right\|\leq 1$ and $\left\|\lambda_z\right\|\leq C$ for some numerical constant, here $\left\|\nu_z\right\|=\int_{z\in\Delta}d|\nu_z|$ and $\left\|\lambda_z\right\|$ is defined similarly. Let us now suppose that $f \in \mathrm{VMO}(\mathbb S)$ be some $\operatorname{VMO}$ function on the unit circle and let $\mu$ be some vanishing Carleson measure that satisfies [\[de\]](#de){reference-type="eqref" reference="de"}. We set $$\begin{aligned} F(u) &=\int_{\Delta} \widetilde{P}_z(u) d \mu(z), \quad u\in\Delta, \\ g(\zeta) &=\int_{\Delta} P_z(\zeta) d|\mu|(z) \quad \zeta \in \mathbb S.\end{aligned}$$ Theorem [Theorem 5](#sbmo){reference-type="ref" reference="sbmo"} implies that $g(\zeta)<+\infty$ almost everywhere. For any $\zeta\in\mathbb S$ such that $g(\zeta)<+\infty$, by Lebesgue's dominated theorem we easily see that $$\lim_{\rho\to 1}F(\rho\zeta)=\int_{\Delta}P_{z}(\zeta)d\mu(z).$$ The definition of $F$ immediately follows that $$\begin{aligned} \label{df} \frac{\partial F}{\partial \rho}(u)=\int_{\Delta}\nu_z(u)d\mu(z),\ \frac{\partial F}{\partial \theta}(u)=\int_{\Delta}\lambda_z(u)d\mu(z).\end{aligned}$$ We continue our proof in the following: *Proof of Proposition [Proposition 11](#app1){reference-type="ref" reference="app1"}.* We shall first show that $\partial F/\partial \rho$, $\partial F/\partial \theta$ are vanishing Carleson measures. It follows from Lemma 1.3.1 in [@varopoulos1977bmo] that $\partial F/\partial \rho$, $\partial F/\partial \theta$ are Carleson measures. Let $h$ be small enough, and $I_1$ be a fixed interval on $[0,2\pi]$ with $|I_1|=h$. As before, let $I_m$ be the concentric interval with length $mh$, $I_{N_1+1}=2\pi$, where $N_1$ is the natural number such that ${N_1h}< 2\pi\leq {(N_1+1)h}$. Let $\zeta$ be any point on the arc $e^{iI_1}$. Define $$\widehat{I}_m=\{z=re^{i\varphi}:\varphi\in I_m,\ r\in (1-h/2\pi,1)\},\ I_m\subset [0, 2\pi].$$ For any $z=e^{i\varphi} \in \widehat{I}_{m}\backslash\widehat{I}_{m-1}$ we observe that $$|z-\zeta|=|e^{i\left(\theta-\varphi\right)-1}|\asymp|\theta-\varphi|\asymp mh, 3\leq m\leq N_1+1,$$ where $A\asymp B$ means that there exists a universal constant $C\geq 1$ such that $B/C\leq A\leq CB.$ Here we notice that $\widehat{I}_{m}\backslash\widehat{I}_{m-1}$ consists of two Carleson squares with length $h$ for each $m$. 
It is easily verified that there exists a universal constant $C>0$ such that $$\begin{aligned} |\nu_z|(\widehat{I}_1)&\leq C h \frac{1-r^2}{|z-\zeta|^2}\leq \frac{C}{m^2}, \ &\forall\ z=re^{i\varphi}\in\widehat{I}_{m}\backslash\widehat{I}_{m-1},\ 1-r\leq h;\\ |\nu_z|(\widehat{I}_1)&=0,\ &\forall\ z=re^{i\varphi},\ 1-r> h.\end{aligned}$$ From this fact, we conclude that $$\begin{aligned} \label{dfdp} \left|\frac{\partial F}{\partial\rho}\right|(\widehat{I}_1)&\leq\int_{\Delta}|\nu_z|(\widehat{I}_1)d|\mu|(z)\notag\\\notag &=\int_{\{z:|z|>1-h\}}|\nu_z|(\widehat{I}_1)d|\mu|(z)\\ &\leq \int_{\widehat{I}_2}\left\|\nu_z\right\|d|\mu|(z)+\sum_{m=3}^{N_1+1}\int_{\widehat{I}_{m}\backslash\widehat{I}_{m-1}}\frac{C}{m^2}d|\mu|(z)\notag\\ &\leq 2|\mu|(\widetilde{I}_h)+2C\sum_{m=1}^{\infty}\frac{|\mu|(\widetilde{I}_h)}{m^2}\leq C c_h h.\end{aligned}$$ It should be pointed out that all Carleson squares of $\widehat{I}_{m}\backslash\widehat{I}_{m-1}$ are denoted by $\widetilde{I}_h$ for the simplicity of notations. Since $\mu$ is a vanishing Carleson measure, it now follows that $\partial F/\partial \rho$ is also a vanishing Carleson measure. Let us now deal with $\partial F/\partial \theta$. We see that $$\begin{aligned} \label{dfdtheta} \left|\frac{\partial F}{\partial \theta}(u)\right|\leq \int_{\Delta}\left|\frac{\partial \widetilde{P}_z(u)}{\partial \theta}\right| d|\mu|(z)=\int_{\Delta}\left|\frac{dP_z(e^{i\theta})}{d \theta}\right|\chi_{(\rho,1)} d|\mu|(z),\ u=\rho e^{i\theta}.\end{aligned}$$ We have the estimate $$\begin{aligned} \label{dtheta} \left|\frac{dP_z(e^{i\theta})}{d \theta}\right|\leq \frac{C}{|z-e^{i\theta}|^2}\end{aligned}$$ valid for all $z=re^{i\varphi}$ and all $\theta\in [0,2\pi]$ (see [@varopoulos1977bmo]). From estimates [\[df\]](#df){reference-type="eqref" reference="df"}, [\[dfdtheta\]](#dfdtheta){reference-type="eqref" reference="dfdtheta"} and [\[dtheta\]](#dtheta){reference-type="eqref" reference="dtheta"}, it is obtained that $$\begin{aligned} \label{dfdi} \notag\left|\frac{\partial F}{\partial \theta}\right|(\widehat{I}_1)&=\int_{\Delta}\int_{\widehat{I}_1}\left|\frac{dP_z(e^{i\theta})}{d \theta}\right|\chi_{(\rho,1)}dud|\mu|(z)\\ &=\int_{\{z:|z|>1-h\}}\int_{\widehat{I}_1}\left|\frac{dP_z(e^{i\theta})}{d \theta}\right|\chi_{(\rho,1)}dud|\mu|(z)\notag\\ &\leq \int_{\widehat{I}_2}\left\|\lambda_z\right\|d|\mu|(z)+\sum_{m=3}^{N_1+1}\int_{\widehat{I}_{m}\backslash\widehat{I}_{m-1}}\int_{\widehat{I}_1}\frac{C}{|z-e^{i\theta}|^2}dud|\mu|(z)\end{aligned}$$ It follows again from the fact that $|z-\zeta|$ is comparable to $m h$ for any $z\in\widehat{I}_{m}\backslash\widehat{I}_{m-1}$, $3\leq m\leq N_1+1$, $\zeta=e^{i\theta}\in e^{iI_{1}}$, we can see that $$\begin{aligned} \int_{\widehat{I}_1}\frac{C}{|z-e^{i\theta}|^2}du\leq \frac{C}{m^2}.\end{aligned}$$ Therefore, let us continue the estimate of [\[dfdi\]](#dfdi){reference-type="eqref" reference="dfdi"} $$\begin{aligned} \label{df1} \left|\frac{\partial F}{\partial \theta}\right|(\widehat{I}_1)\leq C|\mu|(\widetilde{I}_h)+\sum_{m=3}^{N_1+1}\frac{|\mu|(\widetilde{I}_h)}{m^2}\leq C c_h h,\end{aligned}$$ which implies that $\partial F/\partial\theta$ is a vanishing Carleson measure. From what we have shown above, the fact that $|\nabla F|dxdy$ is a vanishing Carleson measure is an immediate consequence of [\[dfdp\]](#dfdp){reference-type="eqref" reference="dfdp"} and [\[df1\]](#df1){reference-type="eqref" reference="df1"}. We shall explain it in detail. 
By computing the gradient in polar coordinates and using the Chain rule, we have $$\begin{aligned} \label{grad} |\nabla F|=\sqrt{\left(\frac{\partial F}{\partial \rho}\right)^2+\left(\frac{1}{\rho}\frac{\partial F}{\partial \theta}\right)^2},\end{aligned}$$ and for small enough $h$, $\frac{1}{\rho}$ is bounded above $2$. Thus $$|\nabla F| \leq 2 \sqrt{\left(\frac{\partial F}{\partial \rho}\right)^{2}+\left(\frac{\partial F}{\partial \theta}\right)^{2}} \leq 2\left(\left|\frac{\partial F}{\partial \rho}\right|+\left|\frac{\partial F}{\partial \theta}\right|\right).$$ Finally, we will smooth the constructed function $F$ by truncating $P_z$ with a smooth function rather than the characteristic function: $$\widetilde{P}_z(u)=P_z(\zeta) \varphi\left(\frac{1-|u|}{1-|z|}\right) \ \forall\ u\in \Delta,$$ where $\varphi(t)$ is some nonnegative $C^{\infty}$ function which is chosen such that $$\varphi(t)=\begin{cases}0, &t\geq 2;\\ 1, &0\leq t\leq1. \end{cases}$$ By a dedicated check, it is quite clear that $F(u)$ satisfies all two conditions of Proposition [Proposition 11](#app1){reference-type="ref" reference="app1"}, which gives us the required result. ◻ Hopefully, we wonder whether the converse statement of Proposition [Proposition 11](#app1){reference-type="ref" reference="app1"} is valid. Indeed it is. **Theorem 12**. *Let $F(x, y)=F(re^{i\theta}) \in C^1(\Delta)$ be a once continuously differentiable function such that $|\nabla F| d x d y$ is a vanishing Carleson measure on $\Delta$ and such that the limit $$\lim _{r \rightarrow 1} F(re^{i\theta})=f(e^{i\theta})$$ exists for almost all $e^{i\theta} \in \mathbb S$. Then $f \in \operatorname{VMO}(\mathbb S)$.* *Proof.* According to Theorem 1.1.2 in [@varopoulos1977bmo], $f\in\operatorname{BMO}(\mathbb S)$, it is sufficient to show that $$\lim _{|I| \rightarrow 0} \int_{I}\left|f(\zeta)-f_{I}\right||d z|=0.$$ Consider any Carleson square $\widetilde{I}_h$ defined as before ($0<h<1/10$), $$\widetilde{I}_h=\{z=re^{i\varphi}:\varphi\in I_h,\ r\in (1-h/2\pi,1)\},\ I_h=[\theta_0,\theta_0+h]\subset [0, 2\pi].$$ By the formula [\[grad\]](#grad){reference-type="eqref" reference="grad"}, $|\nabla F|(x,y)$ are comparable to $$\sqrt{|\partial F/\partial r|^2+|\partial F/\partial \theta|^2}=|\nabla_{r,\theta}F|(re^{i\theta}),$$ thus $$\int_{\widetilde{I}_h}|\nabla F|(x,y)dxdy\asymp\int_{\widetilde{I}_h}|\nabla_{r,\theta}F|(re^{i\theta})drd\theta\leq c_h h,$$ Then from Fubini's theorem, there exists some $r_0\in (1-h/2\pi,1)$ such that $$\begin{aligned} \int_{I_h}|\nabla_{r,\theta}F|(r_0e^{i\theta})d\theta\leq c_h.\end{aligned}$$ Otherwise, for all $1-h/2\pi<r<1$, we have $$\int_{I_h}|\nabla_{r,\theta}F|(re^{i\theta})d\theta> c_h.$$ Hence, $$\int_{\widetilde{I}_h}|\nabla_{r,\theta}F|(re^{i\theta})drd\theta> c_h h.$$ This is a contradiction by the fact that $|\nabla_{r,\theta}F|(re^{i\theta})drd\theta$ is a Carleson measure. 
From [\[grad\]](#grad){reference-type="eqref" reference="grad"} it follows that $$\begin{aligned} \label{11} |F(r_0e^{i\theta})-F(r_0e^{i\theta_0})|\leq c_h, \ \forall\ \theta\in I_h.\end{aligned}$$ It is an easy matter to see that we always have $$\begin{aligned} \label{22} \left|\lim_{r\to 1}F(re^{i\theta})-F(r_0e^{i\theta})\right|\leq \int_{1-r_0}^{1}|\nabla_{r,\theta}F|(re^{i\theta})dr\end{aligned}$$ Combing with [\[11\]](#11){reference-type="eqref" reference="11"} and [\[22\]](#22){reference-type="eqref" reference="22"}, we conclude that $$\begin{aligned} \label{33} \left|f(e^{i\theta})-F(r_0e^{i\theta_0})\right|\leq c_h+\int_{1-\frac{h}{2\pi}}^{1}|\nabla_{r,\theta}F|(re^{i\theta})dr\end{aligned}$$ Integrating [\[33\]](#33){reference-type="eqref" reference="33"} over $I_h$, then $$\begin{aligned} \int_{I_h}\left|f(e^{i\theta})-F(r_0e^{i\theta_0})\right|d\theta\leq c_h h+\int_{\widetilde{I}_h}|\nabla_{r,\theta}F|(re^{i\theta})drd\theta\leq 2c_h h,\end{aligned}$$ which proves that $f\in \operatorname{VMO}(\mathbb S)$, and this completes the proof. ◻ # A representation theorem of $\operatorname{VMO}(\mathbb{R}^{n})$ ## $\operatorname{VMO}$ of several variables To prove Theorem [Theorem 4](#vmobuc){reference-type="ref" reference="vmobuc"}, we first give the following equivalent description of $\varphi \in \operatorname{VMO}\left(\mathbb{R}^n\right)$. **Theorem 13**. *Assume $\varphi \in \operatorname{BMO}\left(\mathbb{R}^n\right)$, the following conditions are equivalent.* (i) *$\varphi \in \operatorname{VMO}\left(\mathbb{R}^n\right)$.* (ii) *$\lim_{|y|\to 0}\left\|\varphi_y-\varphi\right\|_{*}=0$, where $\varphi_y(x)=\varphi(x-y)$ is the translation of $\varphi$ by $y$.* (iii) *$\lim _{t \rightarrow 0}\left\|\varphi(x)-\varphi(x,t)\right\|_*=0$, where $$\varphi(x,t)=(P_t*\varphi)(x)=\int_{\mathbb{R}^n} P_t(x-y)\varphi(y)dy,$$ $$P_t(x)=\frac{c_n t}{(t^{2}+|x|^{2})^{\frac{n+1}{2}}},\ t>0.$$* (iv) *$\varphi$ is in the $\operatorname{BMO}$-closure of $\mathrm{U C}\left(\mathbb{R}^n\right) \cap \operatorname{BMO}\left(\mathbb{R}^n\right)$.* **Remark 5**. It is noticeable that if $\varphi \in \operatorname{BMO}(\mathbb{R}^n), \varphi(x, t)$ is well-defined, and $$\int_{\mathbb{R}^n}\frac{|\varphi(x)| }{1+|x|^{n+1}} d x<\infty.$$ *Proof of Theorem [Theorem 13](#vmoequivalent){reference-type="ref" reference="vmoequivalent"}.* We verify the circle of implications $$(1)\Rightarrow(2)\Rightarrow(3)\Rightarrow(4)\Rightarrow(1).$$ First, we show $(1)\Rightarrow(2)$. Assume $\varphi\in \operatorname{VMO}\left(\mathbb{R}^n\right)$, fix $\delta>0$, set $$M_{\delta}(\varphi)=\sup_{\ell(Q)<\delta}\frac{1}{|Q|}\int_{Q}|\varphi(x)-\varphi_{Q}|dx.$$ let $\mathcal{L}$ be the natural subdivision of $\mathbb{R}^n$ into half closed cubes of side length $\delta$ such that the set of vertices of the cubes of $\mathcal{L}$ is $\mathbb{Z}^n \delta$. Set $\mathcal{L}=\bigcup_{j\in \mathbb{Z}} Q_j$, where $Q_j$ is half closed cube of side length $\delta$. Define $$h(x)=\sum_{j \in \mathbb{Z}} \varphi_{Q_j} \chi_{Q_j}(x),$$ where $\varphi_{Q_j}=\frac{1}{\left|Q_j\right|} \int_{Q_j} \varphi(x) d x$. Ideally, we require the existence of a constant $C(n)$ that depends only on $n$ to provide us with the following estimate: $$\left\|\varphi-h\right\|_{*} \leq C(n)M_{2\delta}(\varphi).$$ For the case that $\ell(Q) \leq \delta$, it follows $Q$ is contained in a cube $Q^{\prime}$ of length $2 \delta$ which is the union of $2^n Q_{i_k} s,\ k=1, \cdots, 2^n$. 
Note that $$\begin{aligned} \left|\varphi_{Q_{j_{k}}}-\varphi_{Q^{\prime}}\right| &\leq \frac{1}{\left|Q_{j_k}\right|} \int_{Q_{j_{k}}}\left|\varphi(x)-\varphi_{Q^{\prime}}\right| d x\\ & \leq \frac{2^n}{\left|Q^{\prime}\right|} \int_{Q^{\prime}}\left|\varphi(x)-\varphi_{Q^{\prime}}\right| d x \leq 2^n M_{2 \delta}(\varphi).\end{aligned}$$ Therefore, $$\begin{aligned} \label{jn12} \left|\varphi_{Q_{j_{k_1}}}-\varphi_{Q_{j_{k_2}}}\right| \leq\left|\varphi_{Q_{j_{k_1}}}-\varphi_{Q^{\prime}}\right|+\left|\varphi_{Q_{j_{k_2}}}-\varphi_{Q^{\prime}}\right| \leq 2^{n+1} M_{2 \delta}(\varphi). \end{aligned}$$ Compute the mean value integral $h_{Q}$ directly, we have $$\begin{aligned} \label{pjzd} h_Q=\frac{1}{|Q|} \int_Q h(x) d x =\frac{1}{|Q|} \sum_{k=1}^{2^n} \int_{Q_{j_k} \cap Q} \varphi_{Q_{j_k}} d x =\sum_{k=1}^{2^n} \frac{\left|Q_{j_k} \cap Q\right|}{|Q|} \varphi_{Q_{j_k}}.\end{aligned}$$ Using [\[pjzd\]](#pjzd){reference-type="eqref" reference="pjzd"} we deduce the mean oscillation of $h$ as follows. $$\begin{aligned} \frac{1}{|Q|} \int_Q\left|h(x)-h_Q\right| d x =&\frac{1}{|Q|} \sum_{k=1}^{2^n} \int_{Q_{j_k} \cap Q}\left|\varphi_{Q_{j_k}}-\sum_{m=1}^{2^n} \frac{\left|Q_{j_m} \cap Q\right|}{|Q|} \varphi_{Q_{j_m}}\right| d x \\ =&\sum_{k=1}^{2^n} \frac{\left|Q_{j_k} \cap Q\right|}{|Q|} \left| \varphi_{Q_{j_k}}-\sum_{m=1}^{2^n} \frac{\left|Q_{j_m} \cap Q\right|}{|Q|} \varphi_{Q_{j_m}}\right|.\end{aligned}$$ From the identity $$\sum_{m=1}^{2^n} \frac{\left|Q_{j_m} \cap Q\right|}{|Q|}=1$$ and inequality [\[jn12\]](#jn12){reference-type="eqref" reference="jn12"}, it then follows that $$\begin{aligned} \label{pjzd1} \notag\frac{1}{|Q|} \int_{Q}\left|h(x)-h_{Q}\right| d x & \leq \sum_{k=1}^{2^{n}} \frac{\left|Q_{j_{k}} \cap Q\right|}{|Q|} \sum_{m=1}^{2^{n}}\left|\varphi_{Q_{j_{k}}}-\varphi_{Q_{j_{m}}}\right| \frac{\left|Q_{j_{m}} \cap Q\right|}{|Q|} \\ \notag & \leq 2^{n+1} M_{2 \delta}(\varphi) \quad \sum_{k=1}^{2^{n}} \frac{\left|Q_{j_{k}} \cap Q\right|}{|Q|} \sum_{m=1}^{2^{n}} \frac{\left|Q_{j_{m}} \cap Q\right|}{|Q|} \\ & =2^{n+1} M_{2 \delta}(\varphi) .\end{aligned}$$ It is clear from [\[pjzd1\]](#pjzd1){reference-type="eqref" reference="pjzd1"} we obtain the mean oscillation of $\varphi-h$ as follows. $$\begin{aligned} \label{3.4.3} \notag& \frac{1}{|Q|} \int_Q\left|(\varphi(x)-h(x))-(\varphi-h)_Q\right| d x \\ \leq &\frac{1}{|Q|} \int_Q\left|\varphi(x)-\varphi_Q\right| d x+\frac{1}{|Q|} \int_Q\left|h(x)-h_Q\right| d x\notag \\ \leq & M_{2\delta}(\varphi)+2^{n+1}M_{2\delta}(\varphi)\leq 2^{n+2}M_{2\delta}(\varphi).\end{aligned}$$ For the second case $\ell(Q)>\delta$, set $Q^{\prime \prime}=\left\{\bigcup Q_j: Q_j \cap Q \neq \phi\right\}$, it is perfectly clear that $Q^{\prime \prime}$ is a cube such that $$\ell\left(Q^{\prime \prime}\right) \leq \ell(Q)+2 \delta<3 \ell(Q).$$ It indicates that $$\begin{aligned} \label{qq} \frac{1}{\ell(Q)} \leq \frac{3}{\ell \left( Q^{\prime \prime}\right)}\end{aligned}$$ Write $Q^{\prime \prime}=\bigcup_{k=1}^{2^N} Q_{j_k}$, note that $\left|Q^{\prime \prime}\right|=2^N| Q_{j_k}|$ for any $k=1,\cdots, 2^N$. 
To estimate the mean oscillation of $\varphi-h$, first note that $$\begin{aligned} \frac{1}{|Q|} \int_Q\left|(\varphi(x)-h(x))-(\varphi-h)_Q\right| d x \leq \frac{2}{|Q|} \int_Q|\varphi(x)-h(x)| d x. \end{aligned}$$ Inequality [\[qq\]](#qq){reference-type="eqref" reference="qq"} gives that $$\begin{aligned} \frac{2}{|Q|} \int_{Q}|\varphi(x)-h(x)| d x & \leq \frac{2 \cdot 3^{n}}{\left|Q^{\prime \prime}\right|} \int_{Q^{\prime \prime}}|\varphi(x)-h(x)| d x \\ & \leq \frac{2 \cdot 3^{n}}{\left|Q^{\prime \prime}\right|} \sum_{k=1}^{2^{N}} \int_{Q_{j_{k}}}\left|\varphi(x)-\varphi_{Q_{j_{k}}}\right| d x \\ & =\frac{2 \cdot 3^{n}}{2^{N}} \sum_{k=1}^{2^{N}} \frac{1}{\left|Q_{j_{k}}\right|} \int_{Q_{j_{k}}}\left|\varphi(x)-\varphi_{Q_{j_{k}}}\right| d x,\end{aligned}$$ from which it follows that $$\begin{aligned} \frac{1}{|Q|} \int_{Q}\left|(\varphi(x)-h(x))-(\varphi-h)_{Q}\right| d x \leq 2 \cdot 3^{n} M_{\delta}(\varphi) \leq 2 \cdot 3^{n} M_{2 \delta}(\varphi).\end{aligned}$$ Together with [\[3.4.3\]](#3.4.3){reference-type="eqref" reference="3.4.3"}, this gives $$\begin{aligned} \left\|\varphi-h\right\|_* \leq\left(2 \cdot 3^n+2^{n+2}\right) M_{2 \delta}(\varphi).\end{aligned}$$ If $|y|<\delta$, for each $x$ in some $Q_j$, we have $|h(x)-h(x-y)|=|\varphi_{Q_j}-\varphi_{Q_m} |\leq 2^{n+1} M_{2 \delta}(\varphi)$ for some $m$, where $Q_j$ and $Q_m$ are adjacent (or coincide). Therefore, $$\left\|h(x)-h(x-y)\right\|_{\infty} \leq 2^{n+1} M_{2 \delta}(\varphi).$$ By the inequalities above, it then follows that $$\begin{aligned} \left\|\varphi_y-\varphi\right\|_* & \leq\left\|\varphi-h\right\|_*+\left\|h_y-h\right\|_*+\left\|\varphi_y-h_y\right\|_* \\ & \leq 2\left\|\varphi-h\right\|_*+\left\| h_y-h\right\|_{\infty} \\ & \leq\left(2^{n+1}+2\left(2 \cdot 3^n+2^{n+2}\right)\right) M_{2 \delta}(\varphi) \\ &=C(n) M_{2 \delta}(\varphi).\end{aligned}$$ Sending $\delta$ to $0$, we obtain $(2)$. Next, we shall show that $(2) \Rightarrow (3)$. By a discussion similar to the case of $\mathbb{S}$, we have $$\begin{aligned} & \frac{1}{| Q |} \int_Q \left|(\varphi(x)-\varphi(x, t))-(\varphi(x)-\varphi(x, t))_Q \right| d x \\ \leq & \int_{\mathbb{R}^n} P_t(y)\left(\frac{1}{|Q|} \int_Q\left|\varphi(x)-\varphi_y(x)-\left(\varphi-\varphi_y\right)_Q\right| d x\right) d y.\end{aligned}$$ For any given $\varepsilon>0$, there exists $\delta>0$ such that $\left\|\varphi-\varphi_y\right\|_*<\varepsilon$ for any $|y|<\delta$. Fixing this $\delta$, the right-hand side above is bounded as follows $$\begin{aligned} & \int_{|y|<\delta} P_t(y)\left(\frac{1}{|Q|} \int_Q\left|\varphi(x)-\varphi_y(x) -\left(\varphi-\varphi_y\right)_Q \right| d x\right) d y \\ &+\int_{|y|>\delta} P_{t}(y)\left(\frac{1}{|Q|} \int_Q \left|\left(\varphi(x)-\varphi_Q\right)-\left(\varphi_y-\left(\varphi_y\right)_Q\right) \right| d x\right) d y\\ \leq & \int_{|y|<\delta}\left\|\varphi-\varphi_y\right\|_* P_t(y) d y+2\left\|\varphi\right\|_* \int_{|y|>\delta} P_t(y) d y.\end{aligned}$$ The first term is clearly less than $\varepsilon$.
For the second term, a simple calculation yields that $$\begin{aligned} \int_{|y|>\delta} P_t(y) d y &=\int_{|y|>\delta} \frac{c_n t}{\left(t^2+|y|^2\right)^{\frac{n+1}{2}}} d y \leq 2^{\frac{n+1}{2}}\int_{|y|>\delta} \frac{c_n t}{(t+|y|)^{n+1}} d y \\ &=2^{\frac{n+1}{2}} c_n \sigma_{n-1} t\int_{\delta}^{\infty}\frac{r^{n-1}}{(t+r)^{n+1}}d r=2^{\frac{n+1}{2}} c_n \sigma_{n-1}\left(1-\delta^n(\delta+t)^{-n}\right)/n,\end{aligned}$$ where $\sigma_{n-1}$ denotes the surface measure of the unit sphere in $\mathbb{R}^n$. There exists $t_0>0$ such that for this fixed $\delta$, $2^{\frac{n+1}{2}} c_n \sigma_{n-1}\left(1-\delta^n(\delta+t)^{-n}\right)/n<\varepsilon$ for all $0<t<t_0$. Therefore, for any $\varepsilon>0$, there exists $t_0>0$ such that for any $0<t<t_0$, $$\left\|\varphi(x)-\varphi(x, t)\right\|_*<\varepsilon+2\left\|\varphi\right\|_*\varepsilon.$$ Now, we prove $(3)\Rightarrow(4)$. An elementary computation gives the identity $$\begin{aligned} \left|\nabla_x P_t(x-y)\right|=\frac{(n+1) P_t(x-y)|x-y|}{t^2+|x-y|^2}. \end{aligned}$$ Since $2t|x-y|\le t^2+|x-y|^2$, we have $$\begin{aligned} \label{tp} t\left|\nabla_x P_t(x-y)\right| \leq \frac{n+1}{2} P_t(x-y).\end{aligned}$$ It is not hard to see that $$\nabla_x \varphi(x, t)=\int_{\mathbb{R}^n}(\varphi(y)-\varphi(x, t)) \nabla_x P_t(x-y) d y.$$ By this and inequality [\[tp\]](#tp){reference-type="eqref" reference="tp"}, we obtain $$\begin{aligned} \label{tpt} t\left|\nabla_x \varphi(x, t)\right| &\leq\int_{\mathbb{R}^n} t\left|\varphi(y)-\varphi(x,t)\right|\left|\nabla_x P_t(x-y)\right| d y \notag\\ & \leq\frac{n+1}{2} \int_{\mathbb{R}^n}\left|\varphi(y)-\varphi(x, t)\right| P_t(x-y) d y. \end{aligned}$$ To establish $(4)$, we need the following estimate: $$\int_{\mathbb{R}^n}\left|\varphi(y)-\varphi(x, t)\right| P_t(x-y) d y \leq C(n)\left\|\varphi\right\|_*.$$ Indeed, set $B_k=\left\{y:\left|x-y\right|<2^k t\right\}$, $k=0,1,\cdots$, then $$\begin{aligned} & P_t(x-y) \leq \frac{c_n}{t^n},\quad y \in B_0 \\ & P_t(x-y) \leq \frac{c_n}{2^{k(n+1)} t^n},\quad y \in B_{k+1} \backslash B_k,\ k=0,1,\cdots.\end{aligned}$$ Using the triangle inequality and $\int_{\mathbb{R}^n}P_t(x-y)dy=1$, we get $$\begin{aligned} & \int_{\mathbb{R}^{n}}|\varphi(y)-\varphi(x, t)| P_{t}(x-y) d y \\ \leq & \int_{\mathbb{R}^{n}}\left|\varphi(y)-\varphi_{B_{0}}\right| P_{t}(x-y) d y+\left|\varphi(x, t)-\varphi_{B_{0}}\right| \\ \leq & 2 \int_{\mathbb{R}^{n}}\left|\varphi(y)-\varphi_{B_{0}}\right| P_{t}(x-y) d y.\end{aligned}$$ Here $\varphi_{B_k}=\frac{1}{\left|B_k\right|} \int_{B_k} \varphi(y) d y$. Decomposing $\mathbb{R}^n$ into $B_0$ and the annuli $B_{k+1} \backslash B_k$, and using that the integrand is nonnegative, it follows that $$\begin{aligned} \label{dvd} \notag& \int_{\mathbb{R}^{n}}\left|\varphi(y)-\varphi_{B_{0}}\right| P_{t}(x-y) d y \\ \notag\leq & \frac{c_{n}}{t^{n}} \int_{B_{0}}\left|\varphi(y)-\varphi_{B_{0}}\right| d y+\sum_{k=0}^{\infty} \frac{c_{n}}{2^{k(n+1)} t^{n}} \int_{B_{k+1} \backslash B_{k}}\left|\varphi(y)-\varphi_{B_{0}}\right| d y \\ \notag \leq & C c_{n}\|\varphi\|_{*}+C c_{n} \sum_{k=0}^{\infty} \frac{2^{(k+1) n}}{2^{k(n+1)}} \frac{1}{\left|B_{k+1}\right|} \int_{B_{k+1}}\left|\varphi(y)-\varphi_{B_{k+1}}\right| d y \\ & +c_{n} \sum_{k=0}^{\infty} \frac{2^{(k+1) n}}{2^{k(n+1)}}\left|\varphi_{B_{k+1}}-\varphi_{B_{0}}\right|,\end{aligned}$$ where $C$ is a constant depending only on $n$.
Note that $$\begin{aligned} \left|\varphi_{B_{k+1}}-\varphi_{B_k}\right| & \leq\frac{1}{\left|B_k\right|} \int_{B_k}\left|\varphi(y)-\varphi_{B_{k+1}}\right| d y \\ & \leq \frac{2^n}{\left|B_{k+1}\right|} \int_{B_{k+1}}\left|\varphi(y)-\varphi_{B_{k+1}}\right| d y \\ & \leq 2^n \left\| \varphi\right\|_{*},\end{aligned}$$ which implies that $$\begin{aligned} \label{kn} \left|\varphi_{B_{k+1}}-\varphi_{B_0}\right|\leq \sum_{j=0}^{k}\left|\varphi_{B_{j+1}}-\varphi_{B_{j}}\right| \leq (k+1)2^n\left\|\varphi\right\|_{*}.\end{aligned}$$ From inequalities [\[dvd\]](#dvd){reference-type="eqref" reference="dvd"} and [\[kn\]](#kn){reference-type="eqref" reference="kn"}, we obtain $$\begin{aligned} &\int_{\mathbb{R}^n}\left|\varphi(y)-\varphi_{B_0}\right| P_t(x-y) d y\\ \leq & Cc_n \left\| \varphi\right\|_{*}+Cc_n \sum_{k=0}^{\infty} \frac{2^n}{2^{k}}\left\| \varphi\right\|_{*}+c_n \sum_{k=0}^{\infty} \frac{(k+1)2^{2n}}{2^k} \left\| \varphi\right\|_{*} \leq C(n)\left\|\varphi\right\|_*,\end{aligned}$$ which gives the desired estimate. Combining this with inequality [\[tpt\]](#tpt){reference-type="eqref" reference="tpt"}, we conclude that $$\begin{aligned} t\left|\nabla_x \varphi(x, t)\right| \leq C(n)\left\|\varphi\right\|_{*}.\end{aligned}$$ Therefore, for any fixed $t > 0$, $\varphi(x, t)$ is uniformly continuous with respect to $x$. As in the proof of Theorem [Theorem 3](#vmo){reference-type="ref" reference="vmo"}, if $\varphi \in \operatorname{BMO}(\mathbb{R}^{n})$ then $\varphi-\varphi(x, t) \in \operatorname{BMO}(\mathbb{R}^{n})$, and hence $\varphi(x, t) \in \operatorname{BMO}(\mathbb{R}^{n})$. Accordingly, for any given fixed $t$, $\varphi(x, t) \in \operatorname{BMO}(\mathbb{R}^{n})\cap\mathrm{UC}(\mathbb{R}^n)$. Since $\lim\limits_{t \rightarrow 0} \left\|\varphi(x)-\varphi(x, t)\right\|_*=0$, it follows that $\varphi\in\overline{\operatorname{BMO}(\mathbb{R}^n) \cap \mathrm{UC}(\mathbb{R}^n)}$. So we have proved $(3)\Rightarrow(4).$ Finally, we shall show that $(4)\Rightarrow(1)$. Assume $\varphi \in \operatorname{BMO}(\mathbb{R}^n) \cap \mathrm{UC}(\mathbb{R}^n)$, that is to say, for any given $\varepsilon>0$, there exists $\delta>0$ such that for all $y \in Q=\{y: |x-y|<\delta\}$, we have $\ \left|\varphi\left(y\right)-\varphi\left(x\right)\right|<\varepsilon$. It clearly follows that $$\begin{aligned} \left|\varphi_Q-\varphi(x)\right|&=\left|\frac{1}{|Q|} \int_Q\varphi(y) d y-\varphi\left(x\right)\right|\leq \frac{1}{|Q|} \int_Q\left|\varphi(y)-\varphi(x)\right| d y<\varepsilon.\end{aligned}$$ Thus $$\begin{aligned} \frac{1}{|Q|} \int_Q\left|\varphi(x)-\varphi_Q\right| d x<\varepsilon,\end{aligned}$$ which implies $\varphi\in \operatorname{VMO}(\mathbb{R}^n)$ and hence $\operatorname{BMO}(\mathbb{R}^n) \cap \mathrm{UC}(\mathbb{R}^n)\subset \operatorname{VMO}(\mathbb{R}^n)$. Since $\operatorname{VMO}(\mathbb{R}^n)$ is a closed subspace of $\operatorname{BMO}(\mathbb{R}^n)$, as we mentioned before, this completes the proof. ◻ ## Proof of Theorem [Theorem 4](#vmobuc){reference-type="ref" reference="vmobuc"} {#proof-of-theorem-vmobuc} This section mainly focuses on the proof of Theorem [Theorem 4](#vmobuc){reference-type="ref" reference="vmobuc"}. We assume first that $\varphi\in \operatorname{VMO}(\mathbb{R}^n)$.
By Theorem [Theorem 7](#bmornequ){reference-type="ref" reference="bmornequ"}, there exist $\varphi_{j}^{(1)}\in L^{\infty}(\mathbb{R}^{n})$, $j=0,\cdots, n$, satisfying $$\begin{aligned} \varphi(x)=:\varphi^{(1)}(x)=\varphi_0^{(1)}(x)+\sum_{j=1}^nR_j\left(\varphi_j^{(1)}\right)(x).\end{aligned}$$ According to Remark [Remark 4](#norm){reference-type="ref" reference="norm"}, there also exists a constant $C$ independent of $\varphi$ such that $$\begin{aligned} \left\|\varphi_0^{(1)}\right\|_{\infty}+\sum_{j=1}^n\left\|\varphi_j^{(1)}\right\|_{\infty}\leq C\left\|\varphi^{(1)}\right\|_{*}= C\left\|\varphi\right\|_{*}.\end{aligned}$$ With the help of Theorem [Theorem 13](#vmoequivalent){reference-type="ref" reference="vmoequivalent"}, there exists $t_1>0$ such that $$\left\|\varphi-P_{t_1}*\varphi\right\|_{*}<\frac{\left\|\varphi\right\|_{*}}{2}.$$ Set $\tilde{\varphi}^{(1)}(x)=P_{t_1}*\varphi(x)$ and $\tilde{\varphi}_j^{(1)}(x)=P_{t_1}*\varphi_j^{(1)}(x)$; then $\tilde{\varphi}_j^{(1)}\in\mathrm{BUC}(\mathbb{R}^{n})$, $j=0,\cdots,n$. Since the Riesz transforms commute with translations, they commute with convolution operators. Hence $$\begin{aligned} \tilde{\varphi}^{(1)}(x)=\tilde{\varphi}_0^{(1)}(x)+\sum_{j=1}^nR_j\left(\tilde{\varphi}_j^{(1)}\right)(x).\end{aligned}$$ Similarly, let $\varphi^{(2)}(x)=\varphi(x)-P_{t_1}*\varphi(x)=\varphi(x)-\tilde{\varphi}^{(1)}(x)$; then $\left\|\varphi^{(2)}\right\|_*<\frac{\left\|\varphi\right\|_*}{2}$. By Theorem [Theorem 3](#vmo){reference-type="ref" reference="vmo"}, $\varphi^{(2)}\in\operatorname{VMO}(\mathbb{R}^{n})$. Thus there exist $\varphi_j^{(2)}\in L^{\infty}(\mathbb{R}^{n})$, $j=0,\cdots, n$, satisfying $$\begin{aligned} \varphi^{(2)}(x)=\varphi_0^{(2)}(x)+\sum_{j=1}^nR_j\left(\varphi_j^{(2)}\right)(x).\end{aligned}$$ Similarly, we obtain $$\begin{aligned} \left\|\varphi_0^{(2)}\right\|_{\infty}+\sum_{j=1}^n\left\|\varphi_j^{(2)}\right\|_{\infty}\leq C\left\|\varphi^{(2)}\right\|_{*}<\frac{C\left\|\varphi\right\|_{*}}{2},\end{aligned}$$ and there exists $t_2>0$ such that $$\left\|\varphi^{(2)}-P_{t_2}*\varphi^{(2)}\right\|_{*}<\frac{\left\|\varphi^{(2)}\right\|_{*}}{2}<\frac{\left\|\varphi\right\|_{*}}{2^2}.$$ Define $\tilde{\varphi}^{(2)}(x)=P_{t_2}*\varphi^{(2)}(x)$ and $\tilde{\varphi}_j^{(2)}(x)=P_{t_2}*\varphi_j^{(2)}(x)$; then $\tilde{\varphi}_j^{(2)}\in \mathrm{BUC}(\mathbb{R}^{n})$, $j=0,\cdots,n$.
From the above argument, we know that $$\begin{aligned} \tilde{\varphi}^{(2)}(x)=\tilde{\varphi}_0^{(2)}(x)+\sum_{j=1}^nR_j\left(\tilde{\varphi}_j^{(2)}\right)(x).\end{aligned}$$ Repeating this process, we obtain $\tilde{\varphi}_j^{(k)}\in \mathrm{BUC}(\mathbb{R}^{n})$, $j=0,\cdots,n$, $k=1,2,\cdots$, and $$\begin{aligned} \label{diedai} \left\|\tilde{\varphi}_0^{(k)}\right\|_{\infty}+\sum_{j=1}^n\left\|\tilde{\varphi}_j^{(k)}\right\|_{\infty}\leq \left\|\varphi_0^{(k)}\right\|_{\infty}+\sum_{j=1}^n\left\|\varphi_j^{(k)}\right\|_{\infty}\leq \frac{C}{2^{k-1}}\left\|\varphi\right\|_{*}.\end{aligned}$$ Therefore, $$\begin{aligned} \varphi(x)&=\sum_{k=1}^{\infty}\tilde{\varphi}_0^{(k)}(x)+\sum_{j=1}^nR_j\left(\sum_{k=1}^{\infty}\tilde{\varphi}_j^{(k)}\right)(x)\\ &=:\varphi_0(x)+\sum_{j=1}^nR_j\left(\varphi_j\right)(x).\end{aligned}$$ Combining Remark [Remark 4](#norm){reference-type="ref" reference="norm"} with [\[diedai\]](#diedai){reference-type="eqref" reference="diedai"}, we see that $$\begin{aligned} \label{summ} \left\|\sum_{k=1}^{\infty}\tilde{\varphi}_j^{(k)}\right\|_{\infty}\leq \sum_{k=1}^{\infty}\left\|\tilde{\varphi}_j^{(k)}\right\|_{\infty}\leq C\sum_{k=1}^{\infty}\frac{\left\|\varphi\right\|_{*}}{2^{k-1}}=2C\left\|\varphi\right\|_{*}, \ j=0,1, \cdots,n.\end{aligned}$$ By inequality [\[summ\]](#summ){reference-type="eqref" reference="summ"}, the series converges uniformly and each $\varphi_j(x)$ is bounded. So we get $$\begin{aligned} \varphi_j(x)=\sum_{k=1}^{\infty}\tilde{\varphi}_j^{(k)}(x)\in \mathrm{BUC}(\mathbb{R}^{n}), \ j=0,1, \cdots,n.\end{aligned}$$ This proves the implication $(1)\Rightarrow(2)$. Next, we show $(2)\Rightarrow (1)$. Assume that there exist $\varphi_j\in \mathrm{BUC}(\mathbb{R}^{n})$, $j=0,\cdots, n$, such that $$\begin{aligned} \varphi(x)=\varphi_0(x)+\sum_{j=1}^nR_j(\varphi_j)(x).\end{aligned}$$ From Theorem [Theorem 13](#vmoequivalent){reference-type="ref" reference="vmoequivalent"}, it is easy to see that $\mathrm{BUC}(\mathbb{R}^{n})\subset \operatorname{VMO}(\mathbb{R}^{n})$. So what remains to show is that for any $\varphi\in \mathrm{BUC}(\mathbb{R}^{n})$, we have $R_j(\varphi)\in \operatorname{VMO}(\mathbb{R}^{n})$. For any $\varepsilon>0$, there exists $\delta>0$ such that for any $|y|<\delta$, we have $$\left\|\varphi-\varphi_y\right\|_{\infty}<\varepsilon.$$ By Theorem [Theorem 8](#rieszLtB){reference-type="ref" reference="rieszLtB"}, $R_j$ is a bounded linear operator from $L^{\infty}$ to $\operatorname{BMO}$, and thus $$\begin{aligned} \left\|R_j(\varphi)-R_j(\varphi_y)\right\|_{*}=\left\|R_j(\varphi-\varphi_y)\right\|_{*}\leq C\left\|\varphi-\varphi_y\right\|_{\infty}<C\varepsilon.\end{aligned}$$ According to Theorem [Theorem 13](#vmoequivalent){reference-type="ref" reference="vmoequivalent"} again, we obtain that $R_j(\varphi)\in \operatorname{VMO}(\mathbb{R}^{n})$, which completes the proof.
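The construction above also yields a quantitative form of the representation, which may be worth recording as a supplementary observation; it follows directly from [\[diedai\]](#diedai){reference-type="eqref" reference="diedai"} and [\[summ\]](#summ){reference-type="eqref" reference="summ"}. The functions $\varphi_j$ produced in the proof satisfy $$\left\|\varphi_0\right\|_{\infty}+\sum_{j=1}^n\left\|\varphi_j\right\|_{\infty}\leq \sum_{k=1}^{\infty}\left(\left\|\tilde{\varphi}_0^{(k)}\right\|_{\infty}+\sum_{j=1}^n\left\|\tilde{\varphi}_j^{(k)}\right\|_{\infty}\right)\leq \sum_{k=1}^{\infty}\frac{C\left\|\varphi\right\|_{*}}{2^{k-1}}=2C\left\|\varphi\right\|_{*},$$ where $C$ is the constant of Remark [Remark 4](#norm){reference-type="ref" reference="norm"}.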
--- abstract: | The *$m$-neighbor complex* of a graph is the simplicial complex in which faces are sets of vertices with at least $m$ common neighbors. We consider these complexes for Erdős-Rényi random graphs and find that for certain explicit families of parameters the resulting complexes are with high probability $(t-1)$-dimensional with all $(t-2)$-faces and each $(t-1)$-face present with a fixed probability. Unlike the Linial-Meshulam measure on the same complexes there can be correlations between pairs of $(t-1)$-faces but we conjecture that the two measures converge in total variation for certain parameter sequences. address: - Department of Mathematics, University of California at Davis, California 95616, U.S.A. - Faculty of Mathematics and Information Science, Warsaw University of Technology, Koszykowa 75, 00-662 Warsaw, Poland author: - Eric Babson - Jan Spaliński title: From Erdős--Rényi graphs to Linial-Meshulam complexes via the multineighbor construction --- # Introduction The *$m$-neighbor complex* $N_m(G)$ of a graph $G$ is the simplicial complex in which faces are sets of vertices with at least $m$ common neighbors. This construction is studied in detail in [@MS]. We consider these complexes for Erdős-Rényi random graphs [@ER] and find that for certain explicit families of parameters the resulting complexes are with high probability $(t-1)$-dimensional with all $(t-2)$-faces and each $(t-1)$-face is present with a fixed probability. Unlike the Linial-Meshulam ([@LM1], [@ALLM]) measure on the same complexes there can be correlations between pairs of $(t-1)$-faces but we conjecture that the two measures converge in total variation for certain parameter sequences. # Preliminaries We recall some well known facts and fix notation. Write $[n]=\{1,2,\ldots, n\}$. If $A$ is a finite set write $|A|$ for its cardinality, $\mathcal P A$ for the set of its subsets and ${A\choose c}\subseteq \mathcal P A$ for those with cardinality $c$. If $X$ and $W$ are both graphs or both simplicial complexes write $Z_XW$ for the set of injective maps from $W$ to $X$, $z_xw=z_XW=|Z_XW|$ for their number which depends only on the shapes $x$ and $w$ of $X$ and $W$ as defined in section 4 below and if $Z_XW\not=\emptyset$ say that $X$ contains a copy of $W$. If $G$ is a graph write $VG$ and $EG$ for its vertices and edges. If $X$ is a simplicial complex write $X_{\mathcal F}$ for the set of facets and $X_0$ for the set of vertices. A random variable $B$ is said to have a binomial distribution $B\sim \textbf{Bin}_{n,q}$ if $$\mathbb P(B=k) = {n \choose k} q^k (1-q)^{(n-k)},\qquad k=0,\dots,n.$$ The mean and variance are given by $\mu = nq$ and $\sigma^2 = nq(1-q)$. We will use the following bounds which are proven in the final section. **Hoeffding's Inequalities:**  *If $B\sim\textbf{Bin}_{n,q}$ then:* - $\mathbb{P}(B\leq m)\leq \exp[-\frac{2}{n}(nq-m)^2]$ if $m<nq$ and - $\mathbb{P}(B\geq m)\leq \exp[-\frac{2}{n}(nq-m)^2]$ if $m>nq$. As an illustration, we include images of two small graphs and the 1-skeletons of the associated $1$- and $2$-neighbor complexes. 
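These small examples can be produced directly from the definition. The following minimal sketch is included only as an illustration and is not the authors' implementation; it assumes the `networkx` package, which is also used for the simulations reported in the final section, and enumerates the faces of $N_m(G)$ with at most three vertices.

```python
import itertools
import networkx as nx

def m_neighbor_faces(G, m, max_size=3):
    """Faces of N_m(G) with at most max_size vertices: a vertex set S is a
    face when the vertices of S have at least m common neighbors in G."""
    faces = []
    for size in range(1, max_size + 1):
        for S in itertools.combinations(G.nodes, size):
            common = set(G.nodes) - set(S)
            for v in S:
                common &= set(G[v])          # intersect with the neighbors of v
            if len(common) >= m:
                faces.append(S)
    return faces

# Two small m-neighbor complexes of one Erdos-Renyi graph (illustrative values).
G = nx.gnp_random_graph(12, 0.5, seed=1)
for m in (1, 2):
    F = m_neighbor_faces(G, m)
    print(m, sum(len(f) == 2 for f in F), "edges,",
          sum(len(f) == 3 for f in F), "triangles")
```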
![A graph and the 1-skeleta of the $m$-neighbor complexes for $m=1$ and $m=2$.](Graph_and_multineighbor_cplx_ex_1.png){width="1\\linewidth"} ![A graph and the 1-skeleta of the $m$-neighbor complexes for $m=1$ and $m=2$.](Graph_and_multineighbor_cplx_ex_2.png){width="1\\linewidth"} # Support of $N_m(G(n,p))$ Take $n$ to be a positive integer and $p\in (0,1)$ a probability and consider the Erdős--Rényi probability measure $G(n,p)$ on graphs with vertex set $[n]$ and each edge introduced independently with probability $p$. Consider $\Gamma_{n,m,p}=N_m G(n,p)$ the probability measure on simplicial complexes which is the $m$-neighbor complex of a random graph from $G(n,p)$. Below is a picture of an Erdős--Rényi graph with parameters $n=100$ and $p=0.31$ on the left and the 1-skeleton of its $14$-neighbor complex on the right. ![An Erdős--Rényi graph from G(100,0.31) and the 1-skeleton of the 14-neighbor complex.](Erdos_Renyi_and_multineighbor_4.png){width="1\\linewidth"} Given $n$, $m$ and $p$ let $t$ be the number defined as follows: $$t = \left[\!\left[ \frac{\ln(n)-\ln(m)}{-\ln(p)} \right]\!\right] = \left[\!\left[ \log_p\left(\frac{m}{n}\right)\right]\!\right]$$ with $[[\cdot]]$ meaning a closest integer, taking the smaller choice if there are two possibilities. Next, take $\tau$ with $|\tau|\leq\frac{1}{2}$ to be the difference between $t$ and the expression being rounded: $$\tau = t + \frac{\ln(n)-\ln(m)}{\ln(p)} = t + \log_p \left( \frac{n}{m} \right)$$ so that $$p^t = p^{ \log_p \left( \frac{m}{n} \right) +\tau } = \left(\frac{m}{n}\right)p^\tau.$$ Let $Y_{n,k-1}$ be the set of simplicial complexes with: - vertex set $[n]$, - all faces with $k-1$ vertices and - no faces with $k+1$ vertices. Hence all but one complex in $Y_{n,k-1}$ is $(k-1)$-dimensional. Let $Y_{n,k-1,q}$ be the Linial-Meshulam probability distribution on $Y_{n,k-1}$ so that: - faces with $k$ vertices occur independently with probability $q$. We show that for a large range of choices of $m$ and $n$, with high probability the complexes drawn from $\Gamma_{n,m,p}$ belong to $Y_{n,t-1}$ with $t=\left[\!\left[ \log_p\left(\frac{m}{n}\right)\right]\!\right]$ as above and further the probability that such a complex contains any particular $(t-1)$-face is the probability that $B\sim\textbf{Bin}_{n-t,p^t}$ takes a value of at least $m$. Call this face probability $q$. The distributions $\Gamma_{n,m,p}$ and $Y_{n,t-1,q}$ differ in that in $\Gamma$ face occurances may be correlated while in $Y$ they are not. **Lemma 1**. *If $p\in (0,1)$ there is $c>0$ (with $c=\frac{1}{2}$ if $p\le\frac{1}{4}$) so that for any $n$, $m$ and $f\in{[n]\choose t+1}$ with $t=\left[\!\left[\log_p\left(\frac{m}{n}\right)\right]\!\right]$ as above $\mathbb{P}_{K\in \Gamma_{n,m,p}}(f\in K)\leq\exp\left[ -c\frac{m^2}{n} \right]$.* *Proof.* If $\Gamma$ is a graph with $V\Gamma=[n]$ and $f\in{[n]\choose t+1}$ write $\beta_f\Gamma=|\{v\in V\Gamma-f| \{v\}\times f\subseteq E\Gamma\}|$ for the number of common neighbors so $\beta_fG(n,p)=B\sim \textbf{Bin}_{n-t-1,p^{t+1}}$ and $\mathbb{P}_{K\in \Gamma_{n,m,p}}(f\in K)=\mathbb{P}(B\geq m)$. First consider the case $p\leq\frac{1}{4}$ and take $c=\frac{1}{2}$. 
In order to apply Hoeffding's inequality, we take $|\tau|\leq\frac{1}{2}$ as above so $p^{t} = \frac{m}{n} p^{\tau}$ and verify that if $|f|=t+1$ then $\mu=\mathbb{E\,}B<m$: $$\mu=(n-(t+1))p^{t+1} = (n-(t+1))\cdot\frac{m}{n}\cdot p^{\tau+1} = m \cdot\frac{n-(t+1)}{n}\cdot p^{\tau+1} < m.$$ Moreover, since $p\in(0,\frac{1}{4}]$, we have $$0\le \frac{m}{n} p^{\tau+1}\le \frac{1}{2}\left( \frac{m}{n-(t+1)} \right).$$ Hence we have: $$\begin{aligned} \mathbb{P}_{K\in \Gamma_{n,m,p}}(f\in K) &= \mathbb{P}(B\ge m) \\ &\le \exp\left[-2(n-t-1)\left(p^{t+1}-\frac{m}{n-(t+1)}\right)^2\right] \\ &\le \exp\left[-2(n-t-1)\frac{1}{4}\left(\frac{m}{n-(t+1)}\right)^2\right] \\ &\le \exp\left[-\frac{1}{2}\left(\frac{m^2}{n-(t+1)}\right)\right] \\ &\le \exp\left[-\frac{1}{2}\left(\frac{m^2}{n}\right)\right]. \\\end{aligned}$$ Finally for the case $p>\frac{1}{4}$ take $c=2(1-p^{\frac{1}{2}})^2$. The argument is analogous to the $p\leq\frac{1}{4}$ case upon noting that $0<p^{\tau+1}<p^{\frac{1}{2}}$ and $\frac{m}{n-(t+1)} \ge \frac{m}{n}$ and hence $$\frac{m}{n-(t+1)} - \frac{m}{n} p^{\tau+1} \ge \frac{m}{n-(t+1)} - \frac{m}{n-(t+1)} p^{\tau+1} \ge\frac{m}{n-(t+1)} \left( 1-p^{\frac{1}{2}} \right).$$ ◻ **Lemma 2**. *If $p\in (0,1)$ there is $c>0$ (with $c=\frac{1}{3}$ if $p\le\frac{1}{4}$) so that for any $n\ge 9$, $m$ and $f\in{[n]\choose t-1}$ with $t=\left[\!\left[\log_p\left(\frac{m}{n}\right)\right]\!\right]$ as above $\mathbb{P}_{K\in \Gamma_{n,m,p}}(f\not\in K)\leq\exp\left[ -c\frac{m^2}{n} \right]$.* *Proof.* Similarly to the previous proof if $f\in{[n]\choose t-1}$ then $\mathbb{P}_{K\in \Gamma_{n,m,p}}(f\not\in K)=\mathbb{P}(B<m)$ with $B\sim \textbf{Bin}_{n-(t-1),p^{t-1}}$. Once again consider first the case $p\leq\frac{1}{4}$ and take $c=\frac{1}{3}$. Note that $$t-1 = \left[\!\left[ \frac{\ln(n)-\ln(m)}{-\ln(p)} \right]\!\right] -1 \le \frac{\ln(n)}{\ln(p^{-1})} \le \frac{\ln(n)}{\ln 4} \le \ln(n) \le \sqrt{n}.$$ Hence $$\frac{n}{n-(t-1)} \le \frac{n}{n-\sqrt{n}} \le \frac{n(n+\sqrt{n})}{n^2-n} \le \frac{n+\sqrt{n}}{n-1}. \label{lemma_two_inequality}$$ The function on the right hand side above is clearly decreasing (for $n>1$), and has value $\frac{3}{2}$ for $n=9$, hence the left hand side above is bounded by $\frac{3}{2}$ for all $n\ge 9$. Moreover with $|\tau|\leq\frac{1}{2}$ as above, $p^{\tau-1}\ge 2$. Hence we have $$\begin{aligned} p^{\tau-1} - \frac{n}{n-(t-1)} &\ge \frac{1}{2}, \\ \frac{p^{\tau-1}}{n} - \frac{1}{n-(t-1)} &\ge \frac{1}{2n}. \\\end{aligned}$$ In order to apply Hoeffding's inequality, we verify that $\mu=\mathbb{E\,}B>m$. $$\mu=(n-(t-1))p^{t-1} = (n-(t-1))\cdot\frac{m}{n}\cdot p^{\tau-1} = m \cdot\frac{n-(t-1)}{n}\cdot p^{\tau-1} > m,$$ where the last inequality follows from the estimates of the factors in the previous paragraph. Hence we have. $$\begin{aligned} \mathbb{P}_{K\in\Gamma_{n,m,p}}(f\not\in K) & = \mathbb{P}(B\le m-1) \\ & \le \mathbb{P}(B\le m) \\ &\le \exp\left[-2(n-(t-1))\left(p^{t-1}-\frac{m}{n-(t-1)}\right)^2\right] \\ &= \exp\left[-2(n-(t-1))\left(\frac{m}{n} p^{\tau-1}-\frac{m}{n-(t-1)}\right)^2\right] \\ &= \exp\left[-2m^2(n-(t-1))\left(\frac{1}{n} p^{\tau-1}-\frac{1}{n-(t-1)}\right)^2\right] \\ &\le \exp\left[-2m^2(n-(t-1))\frac{1}{4n^2}\right] \\ &\le \exp\left[-\frac{1}{3} \frac{m^2}{n}\right]. \\\end{aligned}$$ The last inequality follows from the fact that $\frac{n-(t-1)}{n} \ge \frac{2}{3}$. Finally for the case $p>\frac{1}{4}$ take $c=\frac{1}{3}(p^{-\frac{1}{2}}-1)^2$. 
Choose $\varepsilon=p^{-\frac{1}{2}}-1$ so that $$p^{\tau-1} \ge \frac{1}{\sqrt{p}} = 1 + \varepsilon.$$ From ([\[lemma_two_inequality\]](#lemma_two_inequality){reference-type="ref" reference="lemma_two_inequality"}) there exists an $N$ such that for $n>N$ we have $$\frac{n}{n-(t-1)} \le 1+\frac{\varepsilon}{2}.$$ Hence for $n>N$ we have $$\begin{aligned} p^{t-1} - \frac{m}{n-(t-1)} &= \frac{m}{n} p^{\tau-1} - \frac{m}{n-(t-1)} \\ &= \frac{m}{n} \left( p^{\tau-1} - \frac{n}{n-(t-1)} \right) \\ &\ge \frac{m}{n} \cdot \frac{\varepsilon}{2}.\end{aligned}$$ Using this and $n-(t-1)\ge \frac{2}{3}n$ an argument analogous to the $p\leq\frac{1}{4}$ case yields (for $n>N$): $$\mathbb{P}(B< m) \le \exp\left[-\frac{\varepsilon^2}{3}\left(\frac{m^2}{n}\right)\right].$$ ◻ For the example in figure 3 with $n=100$ and $p=.31$ there is $t=2$, $\tau=.32$, $c=.39$ for lemma 1 and $c=.21$ for lemma 2. Thus lemma 1 implies that the chance of each triple of vertices of the graph to not be a triangle in the complex is at least $.53$ so the chance that there are no triangles is at least $0$ as these events are not independent, while lemma 2 implies that the chance of each vertex of the graph to not be a vertex of the complex is at most $.66$, so each is a vertex with probability at least $.34$ and the chance that they all are is at least $.34^{100}\approx 10^{-47}$, which is nonzero only because these events are essentially independent. Thus these parameters are not in the regime addressed in the following theorem with $t=2$ in which the complexes are guaranteed to have high probability of having every vertex of the graph as a vertex and no triangles. The first two parts of the following theorem use these two lemmas while the third part follows from lemmas 4 and 5 in the next section. **Theorem 1**. *If $p_n\in(0,1)$ and $m_n\in \mathbb{N}$ are sequences for which any of the following three pairs of conditions holds:* - *$p_n$ is constant and $\lim_{n\to\infty} \frac{m_n^2}{n(\ln n)^2}=\infty$,* - *$\lim_{n\to\infty}p_n=0$ and $\lim_{n\to\infty}\frac{-(\ln p_n) m_n^2}{n(\ln n)^2}>4$ or* - *$m_n=m$ is constant and there are an integer $t<\sqrt{2m+1}$ and a constant $b\in (t-1,\frac{m(t+1)}{m+t+1})$ with $p_n=n^{\frac{-1}{b}}$,* *then $$\lim_{n\to\infty} \mathbb{P}_{K\in\Gamma}(K\in Y) = 1$$ where $\Gamma=\Gamma_{n,m_n,p_n}$ and $Y=Y_{n,t_n-1}$ with $t_n=\left[\!\left[\log_{p_n}\left(\frac{m_n}{n}\right)\right]\!\right]$.* **Definition 1**. Write *$\Gamma_m$ has property $P$ asymptotically almost surely (aas)* if $\lim_{n\to\infty} \mathbb{P}_{K\in\Gamma}(K\hbox{ has property }P) = 1$. Thus the conclusion of the theorem is that $\Gamma\in Y$ aas. Figures [1](#fig:First_Conditions){reference-type="ref" reference="fig:First_Conditions"} and [2](#fig:Second_Conditions){reference-type="ref" reference="fig:Second_Conditions"} illustrate the first two sets of conditions, with the value of $t=2$, the size $n$ of the Erdős--Rényi graph given on the horizontal axis and the proportion of simplices in the $m$-neighbor complex to the maximal possible displayed on the vertical axis. ![First set of conditions. Here $p=0.5$, $m=\textrm{Ceiling\,}(n/4)$ and $t=2$. Here $q\approx 0.49$.](Theorem_1_First_Conditions.png){#fig:First_Conditions width="0.6\\linewidth"} ![Second set of conditions. Here $p=1/\ln(\ln(n))$, $m=\textrm{round}(n p^2)$ and $t=2$. Here $q\approx 0.48$.](Theorem_1_Second_Conditions.png){#fig:Second_Conditions width="0.6\\linewidth"} *Proof.* This is a proof of the first two parts. The third follows immediately from lemmas 4 and 5 of the next section.
First use the second lemma above and the first moment method to see that aas every $(t_n-1)$-element subset of $[n]$ is a face of a complex drawn from $\Gamma$. Let $K$ be a complex drawn from $\Gamma$ as described in the statement of the theorem, and let $N$ be the random variable counting the number of $(t_n-1)$-element subsets of $K_0=[n]$ which are not faces of $K$. By the First Moment Method (see [@FK], Lemma 22.2), we have $$\mathbb{P}(N>0)\le \mathbb{E\,}N.$$ Let $\kappa_n = \frac{m_n^2(-\ln p_n)}{n(\ln n)^2}$. By the second lemma above there is a constant $c>0$ depending on $p$ with $$\begin{aligned} \mathbb{E}N &= {n\choose {t_n-1}} \mathbb{P}_{K\in\Gamma_{n,m_n,p_n}}\left(\textrm{a fixed $(t_n-1)$-tuple is not a face of $K$}\right) \\ &\le n^{t_n-1} \exp\left[ -c\frac{m_n^2}{n} \right] \\ &\le \exp\left[\left(t_n-\frac{1}{2}\right) \ln(n)\right] \exp\left[ -c\frac{m_n^2}{n} \right] \\ &\le \exp\left[\log_{p_n}\left(\frac{m_n}{n}\right)\ln(n) -c\frac{m_n^2}{n} \right] \\ &\le \exp\left[\frac{(\ln n)^2}{-\ln p_n}\left(1- c\kappa_n \right)\right]. \end{aligned}$$ In the first case of the theorem $\lim \kappa_n = \infty$ and $c$ depends only on $p$ so the limit of the last expression is equal to zero. In the second case of the theorem $\lim \kappa_n > 4$ and, since $p_n\to 0$, eventually $p_n\le\frac{1}{4}$ so that $c=\frac{1}{3}$; hence $c\kappa_n>1$ eventually and again the limit of the last expression is equal to zero. Next use the first lemma above and the first moment method again to show that aas $K$ has no faces with $t_n+1$ vertices, that is, $K$ has dimension at most $t_n-1$. The argument is very similar to that above but also uses the bound $$t_n+1 \le 2 \frac{\ln(n)-\ln(m_n)}{-\ln(p_n)}.$$ ◻ # Asymptotics of $N_m(G(n,p))$ In this section we consider the hypotheses from the third part of theorem 1. That is, $\Gamma=\Gamma_{n,m,p}$ with $p=n^{\frac{-1}{\beta}}$ for a fixed density parameter $\beta>0$, a fixed number $m$ and a growing number $n$ of vertices. This makes the parameter $t-\tau$ from the previous section converge to $\beta$. We then fix a finite simplicial complex $X$ and study the limiting probability that $X$ is isomorphic to a subcomplex of a complex $K$ chosen from $\Gamma$. This analysis includes a proof of the last part of theorem 1 but does not give total variation convergence which we conjecture below. **Definition 2**. Call $\beta$ a *threshold* for a property of a complex in $\Gamma_m$ if a complex drawn from $\Gamma_{m,n,p}$ with $p=n^{\frac{-1}{b}}$ has the property aas as $n$ grows if $b>\beta$ and aas does not have it as $n$ grows if $b<\beta$. This ignores the behavior at $b=\beta$. Similarly, call $\beta$ a *threshold* for a property of a graph $H$ drawn from $G(n,p)$, where $p=n^{\frac{-1}{b}}$, if $H$ aas has the property as $n$ grows if $b>\beta$ and aas does not have it as $n$ grows if $b<\beta$. The second part of the above definition is consistent with Definition 1.6 in [@FK], for the choice of the threshold function $p^*(n) = n^{\frac{-1}{\beta}}$. The first part of this section sets up notation to define the $m$-density of a complex $X$ and shows that it is a threshold for $\Gamma_m$ to contain $X$ as a subcomplex.
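Before turning to these asymptotics we note that the quantities $t$, $\tau$ and the face probability $q$ of the previous section are easy to evaluate numerically. The following sketch is only an illustration (it is not the authors' code and assumes the `scipy` package); its output can be compared with the estimates of $q$ reported in the captions of Figures [1](#fig:First_Conditions){reference-type="ref" reference="fig:First_Conditions"} and [2](#fig:Second_Conditions){reference-type="ref" reference="fig:Second_Conditions"} and in the table of the section on computer simulations.

```python
import math
from scipy.stats import binom

def t_tau_q(n, m, p):
    """t = [[log_p(m/n)]] (nearest integer, smaller choice on a tie),
    tau = t - log_p(m/n), and q = P(Bin(n - t, p**t) >= m)."""
    x = math.log(m / n) / math.log(p)      # log_p(m/n)
    t = math.ceil(x - 0.5)                 # nearest integer, smaller on ties
    tau = t - x
    q = binom.sf(m - 1, n - t, p ** t)     # P(B >= m) for B ~ Bin(n-t, p^t)
    return t, tau, q

print(t_tau_q(100, 14, 0.31))   # the 14-neighbor example: t = 2, tau about 0.32
print(t_tau_q(200, 50, 0.5))    # first set of conditions (p = 0.5, m = n/4) at n = 200
```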
![Copies of pure simplicial complexes in m-neighbor constructions on Erdős-- Rényi graphs](Correlations_n_28_m_2_Ex_1.png){#fig:Correlations width="1.14\\linewidth"} ![Copies of pure simplicial complexes in m-neighbor constructions on Erdős-- Rényi graphs](Correlations_n_50_m_2_Ex_2.png){#fig:Correlations width="1.14\\linewidth"} ![Copies of pure simplicial complexes in m-neighbor constructions on Erdős-- Rényi graphs](Correlations_n_50_m_2_Ex_3.png){#fig:Correlations width="1.14\\linewidth"} The result is based on one for subgraphs of random graphs which Erdős and Rényi proved for balanced graphs (see below) and Bollobás stated in the form we use. The account given in Frieze--Karoński [@FK] is particularly useful for our purposes. Define the density of a nonempty graph $H$ as the ratio of the number of edges to the number of vertices: $$d_H=\frac{e_H}{v_H}$$ and the related maximum subgraph density: $$\bar{d}_H = \max\{ d_K: \emptyset\not=K\subseteq H \}.$$ A graph is balanced if $\bar{d}_G=d_G$ and strictly balanced if $d_G>d_H$ for all proper subgraphs $H$ in $G$. **Theorem 5.3 in [@FK]** If $H$ is a graph with $d_H>0$, then $\bar{d_H}$ is a threshold for the appearence of $H$ in $G(n,p)$ with $p=n^{\frac{-1}{b}}$. To study the probability of finding a copy of a finite complex $X$ with facets $F=X_{\mathcal F}$ in a complex drawn from $\Gamma_{n,m,p}$, we will consider functions $$W: F \rightarrow \mathcal \mathcal P X_0$$ along with the functions they induce on the power set $$W^{\cap}, W^{!}, W^{\cup}: \mathcal PF \rightarrow \mathcal P X_0\ $$ which take a set of facets respectively to the intersection, exclusive intersection or union of the images of $W$. Write $W^*_A=W^*(A)$, $w^*_A=|W^*_A|$ and by convention $W^\cap_\emptyset=X_0$. Call each $W^*$ a version of the $F$-set $W$ and each $w^*$ a version of the $F$-shape $w$. **Example 1.** Let $X$ be the pure $(3-1)$ dimensional simplicial complex with facets $F=\{f,g,h\}$ where $$f=\{\alpha,\gamma,\delta\}, \quad g=\{\gamma,\delta,\theta\}, \quad\textrm{and}\quad h=\{\delta,\kappa, \lambda\}.$$ The complex and the geometric realization are displayed in Figure [7](#fig:Pure_Complex_1){reference-type="ref" reference="fig:Pure_Complex_1"}. ![The pure simplicial complex $X$ and its geometric realization](Venn_diagram.png){#fig:Pure_Complex_1 width="1.7\\linewidth"} ![The pure simplicial complex $X$ and its geometric realization](Pure_Complex_X.png){#fig:Pure_Complex_1 width="1.7\\linewidth"} Hence $$\mathcal P F =\{ \emptyset, \ \{f\},\ \{g\},\ \{h\},\ \{f,g\},\ \{f,h\},\ \{g,h\},\ \{f,g,h\}\}$$ and $X$ is itself an $F$-set with shape $x$ with the values of $x^{\cap }$, $x^{!}$ and $x^{\cup}$ on the subsets $A$ of $F$ given in the following table. $A$ $\emptyset$ $\{f\}$ $\{g\}$ $\{h\}$ $\{f,g\}$ $\{f,h\}$ $\{g,h\}$ $\{f,g,h\}$ ----------------- ------------- --------- --------- --------- ----------- ----------- ----------- ------------- $x^{\cap }_{A}$ $6$ $3$ $3$ $3$ $2$ $1$ $1$ $1$ $x^{!}_{A}$ $0$ $1$ $1$ $2$ $1$ $0$ $0$ $1$ $x^{\cup }_{A}$ $0$ $3$ $3$ $3$ $4$ $5$ $5$ $6$ Note that any one of the three versions $x^{\cap}$, $x^{!}$ and $x^{\cup}$ can be explicity expressed in terms of any other as described in the following proposition. **Proposition 1**. 
*If $x$ is an $F$-shape and $A \subseteq F$ then:* *$$\begin{aligned} &(a)\quad x^{\cup}_{A}=\sum_{\emptyset\not= B\subseteq A}(-1)^{|B|-1} x^{\cap}_{B}\\ &(b)\quad x^{!}_{A}=\sum_{B\supseteq A}(-1)^{|B|-|A|} x^{\cap}_{B}\\ &(c)\quad x^{\cup}_{A}=\sum_{B\cap A\not=\emptyset} x^{!}_{B} \\ &(d)\quad x^{\cap}_{A}=\sum_{B\supseteq A} x^{!}_{B}\\ &(e)\quad x^{\cap}_{A} = \left.\begin{cases}\displaystyle\sum_{ B\subseteq A} (-1)^{|B|-1} x^{\cup}_{B}, \qquad &A\neq \emptyset, \\ \quad x^{\cup}_{F} \qquad &A =\emptyset\end{cases} \right. \\ &(f)\quad x^{!}_{A}=\sum_{B\subseteq A\neq \emptyset} (-1)^{|A|-|B|+1} x^{\cup}_{F \setminus B}.\end{aligned}$$* *Proof.* Formulas $(a)$ and $(b)$ follow directly from the inclusion and exclusion formula (see M. Aigner [@Aigner], Chapter 5, Sieve Methods, Section 1: Inclusion-Exclusion). Formula $(c)$ follows from the fact that each element of $X_0$ increases $t_A^{!}$ by $1$ for a unique $A\subseteq F$. ◻ More generally a triple $x=( x^{\cap}, x^!, x^{\cup})$ of integer valued functions on $\mathcal PF$ related via the above summations is called an $F$*-shape* while a set valued one is called an $F$*-set*. The three entries $x^*$ are called versions of $x$. Write $x_0=x^{\cap}_{\emptyset}=x^{\cup}_F$. If $x$ is the shape of a simplicial complex then $x_0$ is the number of vertices. A final useful construction is the pointwise $\cap$-product of $F$-shapes defined by $(wx)^{\cap}_{A}=w^{\cap}_{A}x^{\cap}_{A}$. Note that it is the $\cap$-versions which multiply pointwise while the effects on the $!$- and $\cup$-versions are more complicated. Call an $F$-shape $x$ *nonnegative* if $x^!_A\geq 0$ for every $A\subseteq F$ and *$k$-pure* (*pure*) if $x^\cap_{\{f\}}=k$ for every $f\in F$. In the latter case write $\bar{x}=k$. Note that the shape of any simplicial complex is nonnegative and the shape is $k$-pure exactly if the complex is $(k-1)$-pure. Given $F$-shapes $z$ and $x$ we say that $z\le x$ if $x-z$ is nonnegative. A key quantity for our considerations is a measure of density required for the appearence of $X$ in $\Gamma_{n,m,p}$. **Definition 3** ($m$-density of $X$). For a pair of pure $F$-shapes $x$ and $w$, let $b(x,w)=\frac{(xw)_0}{x_0+w_0}$. We define the *$m$-density of $X$* as $$b_m(x) = \min_{\bar{w}=m} \left\{\max_{ z, v > 0 }\left\{ b(z,v) \mid z\le x,\, v\le w \right\} \right\}$$ Write also $b_m(X)=b_m(x)$ if $X$ has shape $x$. These arise in the next theorem when studying whether a complex $K$ drawn from $\Gamma_{n,m,p}$ contains a copy of a given pure finite simplicial complex $X$ with shape $x$ by considering an $m$-pure $F$-set $W$ with shape $w$. Write $H=H(G,X,W)$ for the set of all injective maps $\rho:X_0\cup W_0\rightarrow VG$ for which $\cup_{f\in F}\left[\rho(X_{\{f\}}^\cap)\times \rho(W_{\{f\}}^\cap)\right]\subseteq EG$. Thus if $\rho\in H$ then $\rho|_{X_0}:X\rightarrow N_mG$ induces an injective map of simplicial complexes. If $G$ is drawn from $G(n,p)$ then the log base $n$ of the expected value of $|H|$ is positive if $\beta>b(x,w)$ for $n$ sufficiently large, as the following calculation shows. $$\begin{aligned} \lim_{n\to\infty}\log_n \mathbb{E}|H| &= \lim_{n\to\infty} \log_n {n \choose {x_0+w_0}} p^{(xw)_0} \\ &= x_0+w_0 -\frac{b}{\beta}(x+w)_0 = (x_0+w_0)\left(1-\frac{b}{\beta}\right). 
\end{aligned}$$ **Example 2.** Consider the pure (3-1)-dimensional simplicial complex $X$ of shape $x$ with facets $F=\{f,g,h\}$ where: $$f=\{\alpha,\gamma,\delta \}, g=\{ \gamma,\delta, \theta \}, h=\{\theta, \kappa, \lambda\}$$ so $x_0=6$, $\bar{x}=3$, $x_F=3$ and $\phi_x=\frac{3}{2}$. The complex and the geometric realization are displayed in Figure [9](#fig:Pure_Complex_2){reference-type="ref" reference="fig:Pure_Complex_2"}. ![The pure simplicial complex $X$ and its geometric realization](Venn_diagram_TWO.png){#fig:Pure_Complex_2 width="1.4\\linewidth"} ![The pure simplicial complex $X$ and its geometric realization](Pure_Complex_X_TWO.png){#fig:Pure_Complex_2 width="1.4\\linewidth"} If a copy of the complex $X$ appears in a complex $K=N_2G$ drawn from $\Gamma_{n,2,p}$ via $\rho:X_0\rightarrow VG=K_0$ then - $X$ and $\rho X$ are $F$-sets with the same shape and - for each facet $f\in F$ the associated vertices $\rho X^{\cup}_{\{f\}}\subseteq VG$ have at least two common neighbors in $G$. Choose any such pair to be $W^{\cup}_{\{f\}}$ and call the resulting $F$-set $W$ (which by construction is $2$-pure) a *$2$-witness* to the copy $\rho X$ of $X$. In the example the $\cap$ version of the shape of $X$ is the vector: $$x^\cap = (x^\cap_{\{f\}},x^\cap_{\{g\}},x^\cap_{\{h\}},x^\cap_{\{f,g\}},x^\cap_{\{f,h\}},x^\cap_{\{g,h\}},x^\cap_{\{f,g,h\}})=(3,3,3,2,0,1,0).$$ Here are some possibilities for the shape $w$ of a $2$-witness $W$ to a copy of $X$: A\) If $w_0$ takes its largest possible value of $2|F|=6$ then $w^\cap_A= w^!_A=0$ for every $A$ with $|A|\geq 2$ $$w^{\cap}= (2,2,2,0,0,0,0).$$ This extreme case appears later as the shape $r$. Each element of $W_0$ is connected to the $3$ vertices of the face of $X$ over which it lies in Figure [10](#fig:First_Witness_to_Y){reference-type="ref" reference="fig:First_Witness_to_Y"}. ![First witness to $X$ with $(wx)_0=18$ edges](First_Witness_to_Y.png){#fig:First_Witness_to_Y width="1\\linewidth"} The density of the associated union of three complete bipartite graphs is $$b(x,w)=\frac{(x w)_0}{x_0+w_0} = \frac{18}{6+6} = \frac{3}{2}.$$ B\) One possibility with $w_0=5$ has each element of $W_0$ connected to the vertices of the face of $X$ over which it lies in Figure [11](#fig:Second_Witness_to_Y){reference-type="ref" reference="fig:Second_Witness_to_Y"} and $$w^{\cap}= (2,2,2,1,0,0,0).$$ ![Second witness to $X$ with $(wx)_0=16$ edges](Second_Witness_to_Y.png){#fig:Second_Witness_to_Y width="1\\linewidth"} The density of this associated union of three complete bipartite graphs is $$b(x,w)=\frac{(x w)_0}{x_0+w_0} = \frac{16}{6+5} = \frac{16}{11}.$$ C\) If $w_0=2$ which is the smallest possible value then each element of $W_0$ is connected to every vertex of $X$ as in Figure [12](#fig:Third_Witness_to_Y){reference-type="ref" reference="fig:Third_Witness_to_Y"} and $$w^\cap= (2,2,2,2,2,2,2).$$ ![Third witness to $X$ with $(wx)_0=12$ edges](Third_Witness_to_Y.png){#fig:Third_Witness_to_Y width="1\\linewidth"} The density of this associated complete bipartite graph is $$b(x,w)=\frac{(x w)_0}{x_0+w_0} = \frac{12}{6+2} = \frac{3}{2}.$$ **Theorem 2**. *If $X$ is a finite simplicial complex and $m\geq 1$ then the $m$-density of $X$ is a threshold for the appearance of $X$ in $\Gamma_m$.* *Proof.* Write $F=X_{\mathcal F}$. For an $m$-pure $F$-set $W$ with shape $w$, by Theorem 5.3 of [@FK] above, the formula $$\max_{ Z\subseteq X, V\subseteq W}\, \frac{(z v)_0}{z_0+v_0}$$ gives a threshold for $H(G,X,W)$ to be nonempty in $\Gamma_m$. 
The threshold for the appearance of $X$ then only requires selecting the $W$ for which this is minimal. ◻ Figures [13](#fig:Asymptotics1){reference-type="ref" reference="fig:Asymptotics1"}--[15](#fig:Asymptotics3){reference-type="ref" reference="fig:Asymptotics3"} display the average number of copies of $X$ from Examples 1 and 2 observed in 10 draws from $\Gamma_{n,m,p}$ using various values of the relevant parameters. ![Average number of copies of $X$ from Example 1 in $\Gamma_{n,m,p}$, where $n=50$, $m=2$, $b\approx 1.4$, $\beta$ is in {1.45--1.8} ](Asymptotics_Ex_1_m2.png){#fig:Asymptotics1 width="0.5\\linewidth"} ![Average number of copies of $X$ from Example 2 in $\Gamma_{n,m,p}$, where $n=50$, $m=2$, $b\approx 1.4$, $\beta$ is in {1.45--1.8} ](Asymptotics_Ex_2_m2.png){#fig:Asymptotics2 width="0.5\\linewidth"} ![Average number of copies of $X$ from Example 1 in $\Gamma_{n,m,p}$, where $m=4$, $b\approx 2$, $\beta=2.2$, and $n$ is in {50--130} ](Asymptotics_Ex_1_m4.png){#fig:Asymptotics3 width="0.5\\linewidth"} Write $r$ for the $m$-pure $F$-shape with $r^{\cap}_{A}=0$ for every $A\subseteq F$ with $|A|\geq 2$, so $\bar{r}=r^{\cap}_{\{f\}}=m$ and $r_0=m|F|$, and note that if $x$ is also a pure $F$-shape then $$b(x,r)=\frac{(xr)_0}{x_0+r_0}=\frac{\bar{x}}{\frac{x_0}{m|F|}+1}.$$ **Lemma 3**. *If $X$ is a finite $(k-1)$-pure simplicial complex with $\phi$ facets and $m>kx_0\phi$ then any $m$-pure $X_{\mathcal F}$-shape $w\not=r$ has $b(x,w)>b(x,r)$.* *Proof.* Write $F=X_{\mathcal F}$ and without loss of generality, assume that $\phi\geq 2$. Since $w\not=r$ and $r^!_A=0$ for every $|A|\geq 2$ there is some $A\subseteq F$ with $|A|\geq 2$ and $w^!_A\geq 1$. Fix such a set $A$ and write $v$ for the $F$-shape with $v^!_A=1$, every $a\in A$ has $v^!_{\{a\}}=-1$ and otherwise $v^!_{B}=0$. Hence if $X$ is the complex from Example 1, $v$ is the $F$-shape described by Figure [16](#fig:V_Venn_diagram){reference-type="ref" reference="fig:V_Venn_diagram"}. ![The $F$-shape $v$](V_Venn_diagram.png){#fig:V_Venn_diagram width="0.8\\linewidth"} Hence if $u=w-v$ is another $m$-pure $F$-shape it suffices to check that $b(x,w)>b(x,u)$. Note that $v^\cap_B=1$ if $B\subseteq A$ and $|B|\geq 2$ and $v^\cap_B=0$ otherwise and that $v_0=1-|A|$ while $(xv)_0=x^\cup_A-|A|k$. The last equality follows from the fact that $v^{\cap} = (0,\dots,0,1,1,\dots 1)$ with zeros in the first $|A|$ slots and ones elsewhere. Compute: $$\begin{aligned} &\left[(x_0+w_0)(x_0+u_0)\right]\left[b(x,w)-b(x,u)\right]\\ &=(xw)_0(x_0+u_0)-(xu)_0(x_0+w_0)\\ &=(xv)_0(x_0+w_0)-(xw)_0v_0\\ &=(x^\cup_A-|A|k)(x_0+w_0)+(xw)_0(|A|-1)\\ &=(x^\cup_A-k)w_0 + (|A|-1)[(xw)_0-kw_0] - (|A|k-x^\cup_A)x_0\\ &> m +0 - \phi k x_0\\ &\geq 0.\end{aligned}$$ For the strict inequality $x^\cup_A$ is the number of vertices in a union of at least two $(k-1)$-faces and hence at least $k+1$. Since $w$ is $m$-pure, $w_0$ is at least $m$, so the first term is also at least $m$. The second term is positive since $(xw)_0$ is the number of edges connecting a witness of shape $w$ to a copy of $X$ and $w_0$ is the number of vertices in the witness each of which is contained in at least $k$ edges and since $w\not=r$ some vertex is contained in more than $k$ edges. For the third term $x^\cup_A$ is nonnegative so discard it and $|A|$ is at most $\phi$. ◻ **Lemma 4**. *The property of having every size $k$ subset of the vertices as a face has $k$ as a threshold in $\Gamma_m$.* *Proof.* Write $\Delta$ for the $(k-1)$-pure simplicial complex which is just a single simplex and $\Delta_0=[k]$.
Consider a complex $K$ drawn from $\Gamma_{n,m,p}$ with $p=n^{\frac{-1}{b}}$. Consider the number of simplex witnesses $N=|\{\rho\in Z_K\Delta|(\forall i\leq k)\rho i=i\}|$ and compute $$\begin{aligned} \log_n \Bbb E(N) &\le \log_n \left[ {n \choose m} p^{mk} \right] \le \log_n \left[ n^m n^{\frac{-1}{b} mk} \right] \\ &\le \log_n \left[ n^{m -\frac{1}{b} mk} \right] \le m\left(1-\frac{k}{b}\right).\end{aligned}$$ The last expression is less than zero for $b<k$, and hence the expected number itself has limit zero as $n$ goes to infinity. We then apply the First Moment Method (see [@FK], Lemma 22.2): If $X$ is a nonnegative integer valued random variable, then $$\mathbb{P}(X>0) \le \mathbb{E}X.$$ We conclude that the probability that a given set of $k$ vertices is a face is aas equal to zero giving one of the threshold directions. For the other direction take $b>k$. For each $f\in {[n]\choose k}$ the distribution of the number of common neighbors is a binomial variable $X_f\sim B_{n-k}$ with probability $p^k=n^{\frac{-k}{b}}$ and $n-k$ samples. If $f\cap g=\emptyset$ then $X_f$ and $X_g$ are independent and they are positively correlated otherwise. Hence the probability that all ${n \choose k}$ such subsets have at least $m$ common neighbors is at least the ${n \choose k}$ power of $\mathbb{P}(B_{n-k}\ge m)$. By the following argument this is aas equal to one. We apply the following version of the Bernstein--Chernoff bound (see A. Klenke [@AK], Exercise 5.2.1, pg. 110): If $X_i,\dots, X_n$ are i.i.d. Bernoulli variables, and $S_n=X_1+\dots+X_n$ with $\mu =\mathbb{E}S_n$, then for any $\delta$ $$\mathbb{P}[S_n\le (1-\delta)\mu ] \le \exp \left( - \frac{\delta^2 \mu}{2} \right)$$ Note that $\mu_n=\mathbb{E}(B_{n-k}) = (n-k) n^{-\frac{k}{b}} = n^{1-\frac{k}{b}} - k n^{-\frac{k}{b}}$. Choose $\delta_n$ so that $(1-\delta_n)\mu_n = m-1$. The probability $P_n$ that all $k$ element subsets have at least $m$ neighbors is bounded from below as follows: $$\begin{aligned} P_n&\ge \left(1- \exp \left( - \frac{\delta_n^2\ \mu_n}{2} \right) \right)^{n \choose k}\\ &\ge 1-{n \choose k} \exp \left( - \frac{\delta_n^2\ \mu_n}{2} \right)\\ &\ge 1- n^k \exp \left( - \frac{\delta_n^2\ \mu_n}{2} \right)\\ &\ge 1-\exp \left( k\ln n- \left[1-\frac{m}{\mu_n} +\frac{1}{\mu_n}\right]^2 \frac{\mu_n}{2} \right)\\\end{aligned}$$ The last bound has limit 1 as $$\lim_{n\to\infty} \frac {\mu_n}{\ln n} = \infty$$ ◻ **Lemma 5**. *The property of having some size $k$ subset of the vertices as a face has $\frac{mk}{m+k}$ as a threshold in $\Gamma_m$.* *Proof.* First consider the case $b< \frac{mk}{m+k}$. If $K=N_mG$ is drawn from $\Gamma_{n,m,p}$ with $p$ as above write $M=z_GK_{m,k}$ for the number of $m$-witnesses to $(k-1)$-simplices and compute $$\begin{aligned} \log_n \Bbb E(M) &= \log_n \left[ {n \choose k}{{n-k} \choose m} p^{mk} \right]\\ & = \log_n \left[ \frac{n!}{k!(n-k)!}\frac{(n-k)!}{(n-k-m)!m!} p^{mk} \right] \\ &\le \log_n \left[ n^{k+m -\frac{1}{b} mk} \right] - log_n(k! m!)\\ & = -\varepsilon - \log_n(k! m!)\end{aligned}$$ where $\varepsilon = \frac{mk}{b} - (k+m) >0$ so $M$ is aas zero and using the first moment method as above there are aas no $k$ faces in $K$. Next consider the case $b>\frac{mk}{m+k}$. Here we apply the Second Moment Method (see e.g. 
Lemma 22.5 in [@FK]): If $X$ is a nonnegative integer valued random variable, then $$\mathbb{P}(X=0) \le \frac{\textrm{Var\,}X}{\mathbb{E}(X^2)} = 1 - \frac{(\mathbb{E}X)^2}{\mathbb{E}(X^2)}.$$ We apply this to $M$ to obtain a lower bound on $\mathbb{P}(M>0)=\mathbb{P}(M\ge 1)$: $$\frac{(\mathbb{E}M)^2}{\mathbb{E}(M^2)} \le \mathbb{P}(M>0)$$ and then show that $$\lim_n\frac{(\mathbb{E}M)^2}{\mathbb{E}(M^2)}=1.$$ This is achieved by writing $M^2=\sum_aM_a^2$, a finite sum with the number of terms independent of $n$, and then computing that the limit as $n$ grows without bound of $$\log_n \frac{(\mathbb{E}M)^2}{\mathbb{E}(M_a^2)} = 2 \log_n \mathbb{E}M - \log_n \mathbb{E}(M_a^2) \label{difference}$$ is zero for one choice of $a$ and positive for each of the others. Minor modifications of the argument in the first part of the proof show that $$\lim_n\log_n(\Bbb EM)^2=2\big(k+m-\frac{km}{b}\big).$$ ![](Lemma_5.png){width="1\\linewidth"} Bounds on the second term of ([\[difference\]](#difference){reference-type="ref" reference="difference"}) are obtained by considering the possible intersection patterns for pairs of $k$-sets $K_1$ and $K_2$ in ${[n]\choose k}$ with $m$-witnesses $M_1$ and $M_2$ in ${[n]\choose m}$, which are indexed by four nonnegative parameters $$\begin{split} &a_{kk}=|K_1\cap K_2|\\ &a_{km}=|K_1\cap M_2|\\ &a_{mk}=|K_2\cap M_1|\\ &a_{mm}=|M_1\cap M_2|\\ \end{split} \hskip 1.4 cm \begin{split} &a_{kk} + a_{km} \le k\\ &a_{kk} + a_{mk} \le k\\ &a_{mm} + a_{km} \le m\\ &a_{mm} + a_{mk} \le m\\ \end{split}$$ It is then possible to compute $\Bbb E(M^2)$ as a sum over the possible $a_{..}$ values and check that the Expression ([\[difference\]](#difference){reference-type="ref" reference="difference"}) is nonnegative. Specifically $\Bbb EM^2$ is the sum over finitely many quadruples $a=\{a_{..}\}$ of $\Bbb EM^2_a$ and $\lim_n\log_n\Bbb EM^2_a=2(k+m-\frac{km}{b})-(a_{kk}+a_{mm}+a_{mk}+a_{km}-\frac{a_{kk}a_{mm}+a_{mk}a_{km}}{b})$. Since the number of choices for $a$ is a function of $k$ and $m$ independent of $n$, it suffices to check that $(a_{kk}+a_{mm}+a_{mk}+a_{km}-\frac{a_{kk}a_{mm}+a_{mk}a_{km}}{b})>0$ if $a\not=0$ which is an easy check if $b>\frac{mk}{m+k}$. Specifically, we can apply 1-variable calculus to the function $f(x)= x + y - \frac{xy}{b}$, where $y$ is held fixed in $[0,m]$ and $x$ ranges over $[0,k]$, to see that the function takes positive values on $(0,k)$. It follows that the order of magnitude (as a power of $n$) of the expectation of $M^2$ does not exceed that of the square of the expectation of $M$. It follows that the limit of Expression ([\[difference\]](#difference){reference-type="ref" reference="difference"}) is equal to zero and aas $\mathbb{P}(M>0)=1$. ◻ **Corollary 1**. *If $k<\sqrt{2m+1}$ there is an interval of $\beta$ values for which $\Gamma_{m,n,p}$ with $p=n^{\frac{-1}{\beta}}$ is aas in $Y_{n,k-1}$ but the resulting complexes are not aas all the same, while if $k^2+k<m$ and $\max\{k,\frac{mk}{m+k}\}< \beta < \min\{k+1,\frac{mk+m}{m+k+1}\}$ then such a complex is aas the complex with every set of vertices of size $k$ a face and none of size $k+1$.* **Theorem 3**.
*If $X$ is a finite $(k-1)$-pure simplicial complex with $\phi$ facets, $m>kx_0\phi$ and $k>\beta>\frac{km\phi}{x_0+m\phi}$ then with $p=n^{\frac{-1}{\beta}}$ and $q=n^{m(1-\frac{k}{\beta})}$ we have $\lim_{n\rightarrow\infty}\frac{\mathbb{E}_{K\in\Gamma_{n,m,p}}(z_K X)}{\mathbb{E}_{K\in Y_{n,k,q}}(z_K X)}=1$.* Write $R$ for the extreme $m$-pure $F$-set of shape $r$ with every $r^\cap_A=0$ if $A\in {F\choose 2}$. *Proof.* Write $F$ for the facets of $X$ and let $a=1-\frac{k}{\beta}$. Note that $$\begin{aligned} \lim_{n\rightarrow\infty}\log_n\mathbb{E}_{K\in Y_{n,k,q}}(z_KX) &= \lim_{n\rightarrow\infty}\log_n {n \choose x_0} q^{\phi} \\ & = \lim_{n\rightarrow\infty}\log_n {n \choose x_0} n^{m \phi a} = x_0+am\phi\end{aligned}$$ For the lower bound there are enough copies of $X$ with witnesses of shape $r$. If $G$ is a graph and $W$ is an $m$-pure $F$-set of shape $w$, write $h(G,w,X)=|H(G,X,W)|$ for the number of copies of $X$ in $N_mG$ together with a witness of shape $w$; this number depends only on the shape $w$. For any $m$-pure $F$-shape $w\not=r$ the following inequalities are equivalent: $$\begin{aligned} (x_0+m\phi)\left(km\phi-(xw)_0\right) &< km\phi (m \phi - w_0) \\ x_0 k m \phi - x_0(xw)_0 - m \phi (xw)_0 &< - w_0 k m \phi \\ (x_0 + w_0) k m \phi &< (x_0 + m \phi) (xw)_0 \\ b(x,r)=\frac{ k m \phi}{(x_0 + m \phi)} &< \frac{(xw)_0}{(x_0 + w_0)} = b(x,w)\end{aligned}$$ By Lemma 3 if $r\not= w$, $b(x,r)<b(x,w)$, hence the final inequality holds. Since $\beta>b(x,r)$ and $w_0\le m\phi$, the first of these inequalities gives $x_0+w_0-\frac{(xw)_0}{\beta}\le x_0+am\phi$ for $w\not=r$, and for $w=r$ this holds with equality. Since the expected number $\mathbb{E}_{G\in G(n,p)}(h(G,w,x))$ of copies of $X$ in $N_mG$ together with a witness of shape $w$ is approximately $n^{x_0+w_0-\frac{(xw)_0}{\beta}}$, writing $\Omega$ for the finite set of $m$-pure $F$-shapes we have $$\begin{aligned} \lim_{n\rightarrow\infty}\log_n\mathbb{E}_{K\in\Gamma_{n,m,p}}(z_K X) &= \lim_{n\rightarrow\infty}\log_n \sum_{w\in \Omega} \mathbb{E}_{G\in G(n,p)}(h(G,w, X))\\ &= \lim_{n\rightarrow\infty}\log_n \sum_{w\in \Omega} n^{x_0+w_0 - \frac{(xw)_0}{\beta}}\\ &\le \lim_{n\rightarrow\infty}\log_n \sum_{w\in \Omega} n^{x_0+a m \phi}\\ &= \lim_{n\rightarrow\infty}\log_n \left( |\Omega| \cdot n^{x_0+a m \phi}\right)\\ &= x_0+am\phi\\\end{aligned}$$ ◻ **Conjecture 1**. *For every $k$ there is a sequence $n_m$ for which the total variation distance between the $\Gamma_{n,m,p}$ and $Y_{n,k,q}$ distributions tends to zero as $m$ tends to infinity if $n=n_m$, $\beta=k-\frac{1}{n_m}$, $p=n^{\frac{-1}{\beta}}$ and $q=n^{\frac{-km}{\beta}}$.* **Conjecture 2**. *There is a choice of $m$, $\beta$ and a positive $k$-pure shape $x$ for which if $p=n^{\frac{-1}{\beta}}$ then the support of $\Gamma_{n,m,p}$ is aas in $Y_{n,k}$ but if also $q=n^{\frac{-km}{\beta}}$ then $\lim_{n\rightarrow\infty}\frac{\mathbb{E}_{K\in\Gamma_{n,m,p}}(z_Kx)}{\mathbb{E}_{K\in Y_{n,k,q}}(z_Kx)}\not=1$.* In an effort to find an example for the second conjecture or disprove it, consider the following reduction. If $W$ is an $m$-witness for a $(k-1)$-pure complex $X$ write $$\begin{aligned} \begin{split} \bar w &= m, \\ \bar x &= k , \\ x_w &= \frac{x_0\bar w}{w_0\bar x},\\ \pi^w_x &= \frac{(xw)_0}{x_0\bar w}\geq 1,\\ \pi^x_w &= \frac{(xw)_0}{w_0\bar x}\geq 1,\\ \end{split} \hskip 2 cm \begin{split} \phi &= x_F,\\ \phi_x &= \frac{\phi \bar x}{x_0}\geq 1, \\ \phi_w &= \frac{\phi \bar w}{w_0}\geq 1,\\ b &= \frac{(xw)_0}{x_0+w_0}=\frac{\bar x\bar w}{\bar x(\pi^w_x)^{-1}+\bar w(\pi^x_w)^{-1}}.
\end{split}\end{aligned}$$ The above lemmas imply that there is a choice of $\beta$ in the conjecture if $$\begin{aligned} \bar x-1<b \qquad\textrm{ (so all $k-2$ faces occur aas), }\\ b<\frac{\bar w(\bar x+1)}{\bar w+\bar x+1} \qquad\textrm{(so no $k$ faces occur aas) and }\\ b<\frac{\phi \bar x\bar w}{x_0+\phi \bar w} \qquad\textrm{ (so $X$ does not occur in $Y_{n,k,q}$).} \end{aligned}$$ Substituting the definition of $b$, cross multiplying and collecting terms involving $\bar w$ in these three inequalities allows them to be rewritten as $$\frac{x_w(\bar x-1)}{1+\bar x(\pi^x_w-1)}<\frac{\bar w}{\bar x},$$ $$\frac{(\pi^x_w-x_w)(\bar x+1)}{1-\bar x(\pi^x_w-1)}<\frac{\bar w}{\bar x},$$ $$\frac{\bar w}{\bar x}<\frac{\phi_w-\pi^x_w}{\phi_x(\pi^x_w-1)}$$ respectively. Finally the first and third and second and third yield inequalities by ignoring the intervening $\frac{\bar w}{\bar x}$ and again crossmultiplying and collecting $\bar x$ terms yields equivalent inequalities $$\begin{aligned} \bar x&<\frac{\phi_w-1}{\pi^x_w-1}, \label{Conjecture2_inequality1} \\ \bar x&<\frac{1}{\pi^x_w-1}(1-\phi_w\frac{\pi^w_x-1}{\phi_x-1})\label{Conjecture2_inequality2} \end{aligned}$$ respectively. Perhaps this formulation is easier to work with. **Example 2, continued.** Returning to the complex $X$ of shape $x$ with facets $F=\{f,g,h\}$: $$f=\{\alpha,\gamma,\delta \}, g=\{ \gamma,\delta, \theta \}, h=\{\theta, \kappa, \lambda\}$$ we have $x_0=6$, $\bar{x}=3$, $x_F=3$ and $\phi_x=\frac{3}{2}$. For the 2-witnesses W mentioned earlier, the above parameters take the following form. A\) We have $(wx)_0=18$, $\phi_w=1$, $\pi^x_w=1$, $\pi^w_x=\frac{3}{2}$ and the inequalities ([\[Conjecture2_inequality1\]](#Conjecture2_inequality1){reference-type="ref" reference="Conjecture2_inequality1"}) and ([\[Conjecture2_inequality2\]](#Conjecture2_inequality2){reference-type="ref" reference="Conjecture2_inequality2"}) that would be needed for a counterexample do not apply since $W=R$. In both cases the left hand side equals 3, and the right hand sides are fractions with zero in both the numerator and denominator. B\) We have $(wx)_0=16$, $\phi_w=\frac{6}{5}$, $\pi^x_w=\frac{16}{15}$, $\pi^w_x=\frac{4}{3}$ and the inequalities ([\[Conjecture2_inequality1\]](#Conjecture2_inequality1){reference-type="ref" reference="Conjecture2_inequality1"}) and ([\[Conjecture2_inequality2\]](#Conjecture2_inequality2){reference-type="ref" reference="Conjecture2_inequality2"}) that would be needed for a counterexample are almost satisfied (both turn out to be $3<3$) . C\) We have $(wx)_0=12$, $\phi_w=3$, $\pi^x_w=2$, $\pi^w_x=1$ and the inequalities ([\[Conjecture2_inequality1\]](#Conjecture2_inequality1){reference-type="ref" reference="Conjecture2_inequality1"}) and ([\[Conjecture2_inequality2\]](#Conjecture2_inequality2){reference-type="ref" reference="Conjecture2_inequality2"}) that would be needed for a counterexample are $3<2$ and $3<1$ respectively. A simple case to study is that in which $X$ has only two facets which each have $k$ vertices and share $\ell$ of these. In this case if $m<\frac{\ell(2k-\ell)}{2(k-\ell)}$ then $\Gamma_{n,m,n^{\frac{-1}{\beta}}}$ aas contains all $k-1$ faces if $\beta> \frac{(k+1)m}{k+1+m}$ and aas contains no pair of $k-1$ faces which intersect in $\ell$ vertices if $\beta< \frac{(k+1)m}{k+1+m}$. If on the other hand $m>\frac{\ell(2k-\ell)}{2(k-\ell)}$ there is a third phenomenon. 
A simple case to study is that in which $X$ has only two facets which each have $k$ vertices and share $\ell$ of these. In this case if $m<\frac{\ell(2k-\ell)}{2(k-\ell)}$ then $\Gamma_{n,m,n^{\frac{-1}{\beta}}}$ aas contains all $k-1$ faces if $\beta> \frac{(k+1)m}{k+1+m}$ and aas contains no pair of $k-1$ faces which intersect in $\ell$ vertices if $\beta< \frac{(k+1)m}{k+1+m}$. If on the other hand $m>\frac{\ell(2k-\ell)}{2(k-\ell)}$ there is a third phenomenon. If $\frac{2km}{2k+2m-\ell}<\beta<\frac{(k+1)m}{k+1+m}$ then $\Gamma$ aas contains pairs of $k-1$ faces which share $\ell$ vertices but no $k$ faces. In this case the expected number of such pairs is approximately $n^{2k+2m-\ell-\frac{1}{\beta}2km}$.

# Computer Simulations

In this section we briefly summarize the results of computer simulations which are consistent with the results presented in the earlier section. Using the Networkx and Gudhi Python libraries, we have implemented the Erdős–Rényi graphs and the $m$-neighbor construction. For $n=150$ vertices, the probability $p=0.2$ and the values of $m$ in the set $\{1,2,4,8,12\}$ we have obtained random complexes whose numbers of simplices are presented in the following table.

| Number of neighbors                                            | $m=1$  | $m=2$  | $m=4$ | $m=8$ | $m=12$ |
|----------------------------------------------------------------|--------|--------|-------|-------|--------|
| Value of $t$                                                   | 3      | 3      | 2     | 2     | 2      |
| Number of simplices in dimension $t-2$                         | 11160  | 11097  | 150   | 150   | 150    |
| Number of simplices in dimension $t-1$                         | 389580 | 219430 | 9515  | 2882  | 207    |
| Number of simplices in dimension $t$                           | 0      | 0      | 0     | 0     | 0      |
| Ratio of simplices in dimension $t-2$ to the maximal possible  | 0.999  | 0.993  | 1     | 1     | 1      |
| Ratio of simplices in dimension $t-1$ to the maximal possible  | 0.707  | 0.398  | 0.851 | 0.257 | 0.0185 |
| Estimate of $q$ via the binomial distribution                  | 0.693  | 0.329  | 0.847 | 0.242 | 0.0162 |
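For reference, here is a minimal sketch of the kind of experiment summarized in the table. It is illustrative only: the parameters are much smaller than those used above, only simplices of dimension at most $2$ are built, and it encodes the assumption that a vertex set spans a simplex of the $m$-neighbor complex $N_m(G)$ precisely when it has at least $m$ common neighbors in $G$.

```python
import itertools
import networkx as nx
import gudhi

# Illustrative parameters only; the table above uses n = 150 and p = 0.2.
n, p, m = 40, 0.2, 2
G = nx.erdos_renyi_graph(n, p, seed=0)
nbrs = {v: set(G[v]) for v in G}

# Assumption: a vertex set spans a simplex of N_m(G) iff it has at least m
# common neighbors in G; only simplices of dimension <= 2 are inserted here.
st = gudhi.SimplexTree()
for v in G:
    st.insert([v])
for size in (2, 3):
    for s in itertools.combinations(G.nodes, size):
        common = set.intersection(*(nbrs[v] for v in s)) - set(s)
        if len(common) >= m:
            st.insert(list(s))

for dim in range(st.dimension() + 1):
    count = sum(1 for simplex, _ in st.get_skeleton(dim) if len(simplex) == dim + 1)
    print(f"simplices in dimension {dim}: {count}")
```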
# Proofs

**Hoeffding's Inequalities.**  *If $B\sim\textbf{Bin}_{n,q}$ then:*

- $\mathbb{P}(B\leq m)\leq \exp[-\frac{2}{n}(nq-m)^2]$ if $m<nq$ and

- $\mathbb{P}(B\geq m)\leq \exp[-\frac{2}{n}(nq-m)^2]$ if $m>nq$.

*Proof.* We start with the original theorem of Hoeffding:

*Theorem 2 [@H]:* If $X_1,\dots,X_n$ are independent, $a_i\le X_i\le b_i$ for $i=1,\dots,n$, and $\mu=\mathbb{E}\bar X$ with $\bar X=\frac{1}{n}\sum_{i=1}^n X_i$ then for $t>0$ $$\mathbb{P}\{\bar X - \mu \ge t \} \le \textrm{e}^{-2n^2t^2/\sum_{i=1}^n (b_i-a_i)^2}.$$

If we assume that the $X_i \sim \textbf{Bin}_{1,q}$ are copies of the Bernoulli random variable with success probability $q$, then $B=n \bar X$ has the binomial distribution $B \sim \textbf{Bin}_{n,q}$. The above inequality can be written in the form: $$\mathbb{P}\{B - nq \ge nt \} \le \exp\left[-2n t^2 \right].$$ If $m>n\mu=nq$ and we set $c=\frac{m-n\mu}{n}$ then: $$\begin{aligned} \mathbb{P}(B \ge m) &= \mathbb{P}(B \ge n\mu+nc) \\ &= \mathbb{P}(B - n\mu \ge nc) \\ & \le \exp\left[-2n c^2 \right] \\ & = \exp\left[-2n \left( \frac{m-n\mu}{n} \right)^2 \right]\\ & = \exp\left[-\frac{2}{n} ( m -nq) ^2 \right].\end{aligned}$$ This finishes the proof of the second inequality above.

Next, let $m<n\mu$. We have $$\begin{aligned} \mathbb{P}(B \le m) &= \sum_{i=0}^m {n\choose i} q^i (1-q)^{n-i}\qquad (\textrm{set } j= n-i) \\ &= \sum_{j=n-m}^n {n\choose n-j} q^{n-j} (1-q)^{j} \\ &= \sum_{j=n-m}^n {n\choose j} (1-q)^{j} q^{n-j}\\ &=\mathbb{P}(\hat B \ge n- m), \qquad\textrm{where}\quad \hat B \sim \textbf{Bin}_{n,1-q}.\end{aligned}$$ Note that since $m<n\mu=nq$, we have $n-m>n-nq=n(1-q)$, which is the expected value of $\hat B$. Applying the second inequality proved above, we have $$\begin{aligned} \mathbb{P}(\hat B \ge n- m) &\le \exp\left[-2n \left( \frac{n-m}{n} -(1-q)\right) ^2 \right] \\ &= \exp\left[-\frac{2}{n} ( nq - m) ^2 \right].\end{aligned}$$ This finishes the proof of the first inequality. ◻

- M. Aigner, *A Course in Enumeration*, Graduate Texts in Mathematics, Springer, 2007.
- L. Aronshtam, N. Linial, T. Łuczak and R. Meshulam, *Collapsibility and Vanishing of Top Homology in Random Simplicial Complexes*, Discrete Comput. Geom. 49 (2013), 317--334.
- G. Carlsson, *Topology and Data*, Bull. Amer. Math. Soc. 46 (2009), 255--308.
- P. Erdős and A. Rényi, *On random graphs, I*, Publ. Math. Debrecen 6 (1959), 290--297.
- A. Frieze and M. Karoński, *Introduction to Random Graphs*, Cambridge University Press, rev. Feb. 5, 2023.
- W. Hoeffding, *Probability Inequalities for Sums of Bounded Random Variables*, J. Amer. Statist. Assoc. 58 (1963), 13--30.
- M. Kahle, *The neighborhood complex of a random graph*, J. Combin. Theory Ser. A 114 (2007), 380--387.
- A. Klenke, *Probability Theory: A Comprehensive Course*, Second ed., Springer, 2014.
- N. Linial and R. Meshulam, *Homological connectivity of random 2-complexes*, Combinatorica 26 (2006), 475--487.
- W. Matysiak and J. Spaliński, *The asymptotic topology of the multineighbor complex of a random graph*, arXiv:2302.09017.
arxiv_math
{ "id": "2309.05149", "title": "From Erdos-Renyi graphs to Linial-Meshulam complexes via the\n multineighbor construction", "authors": "Eric Babson, Jan Spali\\'nski", "categories": "math.CO", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | The generalized Schwarzschild spacetimes are introduced as warped manifolds where the base is an open subset of $\mathbb R^2$ equipped with a Lorentzian metric and the fiber is a Riemannian manifold. This family includes physically relevant spacetimes closely related to models of black holes. The generalized Schwarzschild spacetimes are endowed with involutive distributions which provide foliations by lightlike hypersurfaces. In this paper, we study spacelike submanifolds immersed in the generalized Schwarzschild spacetimes, mainly, under the assumption that such submanifolds lie in a leaf of the above foliations. In this scenario, we provide an explicit formula for the mean curvature vector field and establish relationships between the extrinsic and intrinsic geometry of the submanifolds. We have derived several characterizations of the slices, and we delve into the specific case where the warping function is the radial coordinate in detail. This subfamily includes the Schwarzschild and Reissner-Nordström spacetimes. author: - Rodrigo Morón[^1],   Francisco J. Palomo title: | Spacelike immersions in certain Lorentzian manifolds\ with lightlike foliations --- # Introduction The $(m+2)$-dimensional exterior Schwarzschild spacetime with mass $\mathbf{M}\geq 0$ is equipped with the Lorentzian metric $$\widetilde{g}=-\Big(1- \frac{2\mathbf{M}}{r^{m-1}} \Big)dt^{2}+ \frac{1}{\Big(1- \frac{2\mathbf{M}}{r^{m-1}} \Big)}dr^{2}+ r^{2}g_{\mathbb S^{m}},$$ where $(t,r)\in \mathbb R\times \mathbb R_{+}$ with $r^{m-1}> 2\mathbf{M}$ and $g_{\mathbb S^{m}}$ denotes the usual round metric of constant sectional curvature $1$ on the $m$-dimensional sphere $\mathbb S^{m}.$ This spacetime satisfies the vacuum Einstein equation and is spherically symmetric. Even more, these properties characterizes the Schwarzschild spacetime. For $\mathbf{M}= 0$, the Schwarzschild metric reduces to the Minkowski metric in spherical terms, Example [Example 9](#280723A){reference-type="ref" reference="280723A"}. The exterior Schwarzschild spacetime has several remarkable properties which we are interested in. 1. For every lightlike vector field $\xi$ in the $tr$-half plane, the Levi-Civita connection $\widetilde{\nabla}$ of the Schwarzschild metric satisfies $\widetilde{\nabla}\xi= \alpha \otimes \xi$ for some $1$-form $\alpha.$ 2. The $(m+2)$-dimensional exterior Schwarzschild spacetime admits two foliations by lightlike hypersurfaces. In fact, for every lightlike vector field $\xi$ in the $tr$-half plane, the distribution given by the vector fields $\widetilde{g}$-orthogonal to $\xi$ is integrable and every leaf inherits a degenerate metric from $\widetilde{g},$ Lemma [Lemma 2](#190723A){reference-type="ref" reference="190723A"}. 3. The vector field $\partial t$ is a timelike Killing vector field. That is, the exterior Schwarzschild spacetime is static. These properties rely on the following facts. The metric $\widetilde{g}$ is a warped product metric given by a Lorentzian metric on an open subset of the $tr$-plane and a Riemannian metric, [@One83 Chap. 7]. Moreover, the metric on the $tr$-plane admits a globally defined lightlike vector field and the function $f^2(r):=\Big(1- \frac{2\mathbf{M}}{r^{m-1}} \Big)$ does depend only on $r$. These facts lead us to consider the following class of Lorentzian warped product manifolds. **Definition 1**. 
*A Lorentzian warped product manifold $(\widetilde{M}, \widetilde{g})=B\times_{\lambda} F$ is said to be an $(m+2)$-dimensional generalized (exterior) Schwarzschild spacetime when $B$ is an open subset of $\mathbb R^{2}$ with canonical coordinates $(t,r)$ and metric $$\label{17112022A} g_B=-f^2(r)dt^2+\dfrac{1}{f^2(r)}dr^2,$$ where $f(r)> 0$, $(F,g_F)$ is an $m$-dimensional connected Riemannian manifold and $\lambda\in C^{\infty}(B)$ with $\lambda>0$ is the warping function. That is, $\widetilde{M}= B \times F$ and $$\widetilde{g}=\pi_B^*(g_B)+(\lambda\circ\pi_B)^2\pi_F^*(g_F),$$ where $\pi_B$ and $\pi_F$ are the natural projections on $B$ and $F$, respectively [@One83 Chap. 7].*

Note that the Minkowski spacetime can be described in two ways from this setting: as was mentioned above (taking $\mathbf{M}=0$), and also with $f(r)^{2}=1$, $\lambda=1$ and $F=\mathbb E^{m}$. For $\lambda(t,r)=r$ and $F=\mathbb S^{m}$, this class includes relevant spacetimes with spherical symmetry. Namely, we set $$\label{app} f^{2}(r)=1-\frac{2 \mathbf{M}}{r^{m-1}}+ \frac{q^2}{r^{2m-2}}- \frac{2 \Lambda r^2}{m(m+1)},$$ where $\mathbf{M}$ is called the mass parameter, $q$ is the charge and $\Lambda$ is the cosmological constant; when $\Lambda$ is negative one writes $\frac{2 \Lambda}{m(m+1)}=-1/R^2$, where $R$ is the anti-de Sitter radius. For $q= \Lambda=0$, we get the Schwarzschild metric and for $q \neq 0$, $\Lambda= 0$, the Reissner-Nordström metric with total charge $q$. The de Sitter and anti-de Sitter versions correspond to $\Lambda >0$ and $\Lambda <0$, respectively. Karl Schwarzschild discovered in 1916 the point-mass solution to the Einstein equations that bears his name. Historically, this solution was the first and most important nontrivial solution of the vacuum Einstein equations. This is the reason why we have named this class of spacetimes generalized Schwarzschild spacetimes.

The generalized Schwarzschild spacetimes admit two lightlike vector fields $\xi, \eta \in \mathfrak{X}(B)\subset \mathfrak{X}(\widetilde{M})$, ([\[11102022B\]](#11102022B){reference-type="ref" reference="11102022B"}). The distributions $D_{\xi}$ and $D_{\eta}$ defined by the $\widetilde{g}$-orthogonal vector fields to $\xi$ and $\eta$, respectively, are involutive, Lemma [Lemma 2](#190723A){reference-type="ref" reference="190723A"}. Therefore, we have two transverse foliations by lightlike hypersurfaces of $\widetilde{M}$. It is worth pointing out that the vector field $\partial t\in \mathfrak{X}(\widetilde{M})$ is Killing if and only if the warping function $\lambda$ depends only on the (radial) coordinate $r$. The integral curves of $\xi$ and $\eta$ are called lightlike geodesic generators of the corresponding lightlike hypersurfaces. Recall that we can scale $\xi$ and $\eta$ so that they are geodesic vector fields, [@Gall]. As was mentioned, the Minkowski spacetime $\mathbb L^{m+2}$ admits two descriptions as a generalized Schwarzschild spacetime. Each description provides different foliations by lightlike hypersurfaces, see details in Example [Example 9](#280723A){reference-type="ref" reference="280723A"}.
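Purely as an illustration of the family of functions $f^2(r)$ above, and not used anywhere in what follows, the short Python sketch below evaluates $f^2$ on a grid for sample (arbitrary) parameter values and reports the largest zero found, so that the region beyond it is the exterior region on which the metric is defined.

```python
import numpy as np

def f2(r, m=2, M=1.0, q=0.0, Lam=0.0):
    # f^2(r) = 1 - 2M/r^(m-1) + q^2/r^(2m-2) - 2*Lam*r^2/(m*(m+1));
    # m = 2 corresponds to the four-dimensional spacetime.
    return 1 - 2*M/r**(m - 1) + q**2/r**(2*(m - 1)) - 2*Lam*r**2/(m*(m + 1))

r = np.linspace(0.05, 10.0, 4000)
examples = {"Schwarzschild (M=1)": dict(M=1.0),
            "Reissner-Nordstrom (M=1, q=0.6)": dict(M=1.0, q=0.6)}
for label, kw in examples.items():
    vals = f2(r, **kw)
    nonpositive = r[vals <= 0]
    r_h = round(float(nonpositive.max()), 2) if nonpositive.size else None
    print(label, "-> exterior region approximately r >", r_h)
```

For these sample values it locates the expected horizons $r=2\mathbf{M}$ and $r=\mathbf{M}+\sqrt{\mathbf{M}^2-q^2}$, respectively.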
The main aim of this paper is the study of $n$-dimensional spacelike submanifolds $M$ immersed in generalized Schwarzschild spacetimes. The research on spacelike submanifolds has been developed from both physical and geometric interest. For instance, the Cauchy problem for the Einstein equations is formulated as an initial data problem on a Riemannian manifold which becomes a Cauchy hypersurface in the solution spacetime, see [@Lee Chap. 7]. Recall also the Penrose incompleteness theorem, which relates the existence of a trapped codimension two spacelike submanifold with the singularities of certain spacetimes [@Lee Chap. 7]. The notion of trapped submanifold is usually given in terms of the mean curvature vector field of the submanifold, Definition [Definition 44](#110823C){reference-type="ref" reference="110823C"}.

Most of our results are focused on the particular situation in which the spacelike submanifold $M$ is contained in a leaf of the above mentioned foliations by lightlike hypersurfaces. As is well known, lightlike hypersurfaces inherit a degenerate metric from the Lorentzian ambient metric and play an important role in General Relativity as event horizons of black holes [@Gourgoulhon]. The classical theory of submanifolds fails for these hypersurfaces. We think that the study of codimension two spacelike submanifolds which factor through a lightlike hypersurface can provide a tool to understand the geometry of such hypersurfaces. A different approach to lightlike manifolds from the Cartan geometries has been developed in [@Pal21]. The study of codimension two spacelike submanifolds in lightlike hypersurfaces has been previously developed in [@PaPaRo], [@PaRoRo] and [@PaRo] for the case of compact submanifolds into the lightlike cone in the Minkowski spacetime. The non-compact case is considered in [@ACR2] and the study of trapped submanifolds into lightlike hypersurfaces of the de Sitter spacetime appears in [@ACR1]. This approach has also been applied to Brinkmann spacetimes. Recall that Brinkmann spacetimes admit a parallel lightlike vector field and hence they have a foliation by lightlike hypersurfaces. Spacelike submanifolds which lie in such hypersurfaces have been studied in [@CaPaRo] for the compact case and in [@PPR] for more general settings.

We would like to emphasize that the study of spacelike submanifolds in generalized Friedmann-Lemaître-Robertson-Walker spacetimes goes back to the seminal work [@AlRoSan]. Since then, multiple researchers have developed this topic. These spacetimes are written as $I\times_{\lambda}F$ with metric $-dt^2 + \lambda^2(t) g_F$. From this point of view, the submanifolds in generalized Schwarzschild spacetimes can be seen as the next natural step to shed light on the theory of spacelike submanifolds. As far as we know, there are not many works devoted to this problem. For instance, in the setting of stationary spacetimes, the study of the prescribed mean curvature problem in Schwarzschild and Reissner-Nordström spacetimes appears in [@DRT]. On the other hand, the results in [@WWZ] have been enriching and have given us a better approach for the development of our paper.

The plan of the paper is as follows. Section 2 exhibits basic notions and notations and also includes several technical results to be used later. Section 3 focuses on the class of Lorentzian manifolds we are interested in, Definition [Definition 1](#190723B){reference-type="ref" reference="190723B"}. Since this class of Lorentzian manifolds consists of warped product manifolds, we particularize the formulas for their Levi-Civita connections from [@One83 Chap. 7].
For a spacelike immersion in a generalized Schwarzschild spacetime $\Psi:M\rightarrow B\times_{\lambda} F,$ we have written $\Psi=(\Psi_B,\Psi_F)$, $u:=t\circ \Psi_{B}$ and $v:=r\circ \Psi_{B}.$ Lemma [Lemma 8](#020323A){reference-type="ref" reference="020323A"} states that a spacelike immersion factors through an integral hypersurface of $D_{\xi}$ if and only if $\nabla v=(f\circ \Psi_{B})^2 \nabla u ,$ where $\nabla$ denotes the gradient operator corresponding to the induced metric $g$ on $M$. In order to make the presentation of the results more fluid, in the Introduction we specialize our results and discussions to the distribution $D_{\xi}$ and its integral lightlike hypersurfaces. Almost all the results admit a similar version for the other distribution $D_{\eta}$. Section 4 exhibits fundamental equations for spacelike immersions in generalized Schwarzschild spacetimes. As a consequence, we obtain an integral characterization of compact spacelike immersions through leaves of $D_{\xi}$, Theorem [Theorem 13](#070323C){reference-type="ref" reference="070323C"}.

> Assume $\Psi:M\rightarrow B\times_{\lambda}F$ is a compact spacelike immersion in a generalized Schwarzschild spacetime with $f'>0$ (resp. $f'<0$). Then $$\int_{M} \Big[ n\,\widetilde{g}(\mathbf{H},\xi^{\bot}) +\Big(\dfrac{\xi\lambda}{\lambda}\circ \Psi_{B}\Big)\left[n+2g(\xi^{\top}, \eta^{\top})\right]\Big]\, d\mu_{g} \geq 0 \quad (\textrm{resp.} \leq 0),$$ where the superscripts $\top$ and $\bot$ denote the tangent and normal parts of the indicated vector fields, respectively. The equality holds if and only if $M$ factors through an integral hypersurface of $D_{\xi}.$

Our main results are in Sections 5, 6 and 7, where we will focus on the case of spacelike submanifolds factoring through a lightlike integral hypersurface of $D_{\xi}$ or $D_{\eta}$. First, in Section 5 we deal with the case of arbitrary codimension. Our main aim here is to find several conditions which ensure that the immersion factors through a slice of the generalized Schwarzschild spacetime. That is, the functions $u$ and $v$ are constants. Thus, the setting is a spacelike immersion $\Psi:M\rightarrow B\times_{\lambda} F$ through an integral hypersurface of $D_{\xi}$.

> - Assume $\eta \lambda$ signed and $M$ compact with $\mathbf{H}=0$. Then $M$ factors through a slice and the immersion of $M$ in such a slice is minimal, Corollary [Corollary 22](#020623A){reference-type="ref" reference="020623A"}.
>
> - Assume $M$ compact with $\mathrm{Ric}^g(\nabla v, \nabla v)\leq 0$. The normal vector field $\eta^{\bot}$ is an umbilical direction if and only if $M$ factors through a slice, Theorem [Theorem 23](#20032023A){reference-type="ref" reference="20032023A"}.
>
> - Assume $\xi \lambda$ never vanishes. Then $M$ factors through a slice if and only if $\nabla^{\bot}\xi=0$, Theorem [Theorem 29](#250523A){reference-type="ref" reference="250523A"}.

The assumptions in Corollary [Corollary 22](#020623A){reference-type="ref" reference="020623A"} and Theorem [Theorem 29](#250523A){reference-type="ref" reference="250523A"} hold for a wide family of generalized Schwarzschild spacetimes. For Theorem [Theorem 23](#20032023A){reference-type="ref" reference="20032023A"}, it is a key fact that if $\eta^{\bot}$ is an umbilical direction then $\nabla v$ is a conformal vector field ([\[22032023A\]](#22032023A){reference-type="ref" reference="22032023A"}). In our notion of generalized Schwarzschild spacetime, the geometry of the Riemannian part $F$ is arbitrary.
Nevertheless, the case of spherical symmetry is the most relevant from the physical point of view. If we assume $M$ compact and $\eta^{\perp}$ an umbilical direction, Theorem [Theorem 24](#260723A){reference-type="ref" reference="260723A"} states a condition on the Ricci tensor which shows that, when $\nabla v$ is a nonzero vector field, $M$ is isometric to a sphere $\mathbb S^{n}(c)$ of constant sectional curvature $c$. This result is a consequence of [@DAL Theor. 1]. Therefore, in case the codimension of $M$ is two and $F$ is simply connected, by means of Proposition [Proposition 31](#020623B){reference-type="ref" reference="020623B"}, the manifold $F$ must be a topological sphere.

Section 6 is devoted to the study of codimension two ($n=m$) immersions through these lightlike integral hypersurfaces. In the terminology of black holes, such an immersion $M$ is called a cross-section when every lightlike geodesic generator intersects $M$ at most once, [@Gourgoulhon]. At the topological level, every codimension two immersion through a lightlike integral hypersurface is a covering space of the fiber $F$ but not necessarily a Riemannian covering, Proposition [Proposition 31](#020623B){reference-type="ref" reference="020623B"} and Remark [Remark 33](#140823A){reference-type="ref" reference="140823A"}. The mean curvature vector field of these immersions is obtained in Proposition [Proposition 36](#010623B){reference-type="ref" reference="010623B"} and Corollary [Corollary 37](#280723D){reference-type="ref" reference="280723D"} as follows

> $$\mathbf{H}=\left[\dfrac{\eta\lambda}{\lambda}\circ \Psi_{B}-\Big(\dfrac{\xi\lambda}{2\lambda}\circ \Psi_{B}\Big)\|\nabla v\|^2+\dfrac{1}{m}\Delta v\right]\xi+\Big(\dfrac{\xi\lambda}{\lambda}\circ \Psi_{B}\Big)\ell^{\xi},$$ where $\ell^{\xi}$ is the normal lightlike vector field to $M$ with $\widetilde{g}(\xi, \ell^{\xi})=-1$.

Furthermore, we compute that $$\|\mathbf{H}\|^{2}=\frac{1}{v^2}\left( (f\circ \Psi_{B})^2- \frac{S^{\Psi_{F}^{*}(g_{F})}- v^2 S^{g}}{m(m-1)}\right),$$ where $S^{g}$ and $S^{\Psi_{F}^{*}(g_{F})}$ are the scalar curvatures of the induced metric $g$ and $\Psi_{F}^{*}(g_{F})$ on $M$, respectively. These results extend previous ones in [@ACR1], [@ACR2], [@PaPaRo] and [@PaRo], see details in Remark [Remark 38](#140823B){reference-type="ref" reference="140823B"}. The second formula shows a relation between the intrinsic and extrinsic geometry of the codimension two immersions through lightlike integral hypersurfaces. Such a relation has been previously pointed out for the case of the lightlike cone in the Minkowski spacetime in [@PaPaRo] and [@PaRo]. Section 6 also contains a characterization of marginally trapped immersions when the warping function $\lambda$ agrees with the radial coordinate, Corollary [Corollary 46](#140823C){reference-type="ref" reference="140823C"}. This result extends [@ACR2 Cor. 6.3], where the case of the lightlike cone in the Minkowski spacetime was studied. In Section 7 we proceed with the study of immersions with parallel mean curvature vector field. That is, we consider the condition $\nabla^{\perp}\mathbf{H}=0.$ Under a technical condition, Theorem [Theorem 51](#130623B){reference-type="ref" reference="130623B"} provides an intrinsic characterization of the slices as the unique codimension two immersions through lightlike integral hypersurfaces with parallel mean curvature vector field.

# Preliminaries {#Prel}

Let $(\widetilde{M}, \widetilde{g})$ be an $(m+2)$-dimensional Lorentzian manifold.
A smooth immersion $\Psi:M\rightarrow (\widetilde{M},\widetilde{g})$ of a (connected) $n$-dimensional manifold $M$ is said to be spacelike when the induced metric $g:= \Psi^{*}(\widetilde{g})$ is Riemannian. We assume $n\geq 2$ along this paper. For any point $x\in M$, we denote $\Psi(x)\in \widetilde{M}$ by the same letter $x$ if there is no danger of confusion. We also ignore the differential map of the immersion $\Psi$. Thus $T_{x}M$ is considered as subspace of $T_{x}\widetilde{M}$ and its orthogonal complement $T_{x}M^{\perp} \subset T_{x}\widetilde{M}$ is called the normal space of $M$ at $x$. We write $\overline{\mathfrak{X}}(M)$ for the $C^{\infty}(M)$-module of vector fields along the immersion $\Psi$. The set of vector fields $\mathfrak{X}(M)$ may be seen as a $C^{\infty}(M)$-submodule of $\overline{\mathfrak{X}}(M)$ in a natural way and for every $E\in \mathfrak{X}(\widetilde{M})$ we have its restriction $E\mid_{\Psi} \in \overline{\mathfrak{X}}(M)$. For $V\in \overline{\mathfrak{X}}(M)$, we have the orthogonal decomposition $$V_x= V_x^{\top}+V_x^{\bot},$$ where $V^{\top}_{x}\in T_{x}M$ and $V^{\bot}_{x}\in T_{x}M^{\bot}$ for all $x\in M$. The vector fields $V^{\top}$ and $V^{\bot}$ are called the tangent part of $V$ and the normal part of $V$, respectively. The $C^{\infty}(M)$-submodule of $\overline{\mathfrak{X}}(M)$ of all normal vector fields along $\Psi$ is denoted by $\mathfrak{X}^{\perp}(M)$, that is, $$\mathfrak{X}^{\perp}(M)=\{V \in \overline{\mathfrak{X}}(M): V^{\top}=0\}.$$ Now, let us write $\nabla$ and $\widetilde{\nabla}$ for the Levi-Civita connections of $M$ and $\widetilde{M}$, respectively. As usual, we also denote by $\widetilde{\nabla}$ the induced connection and by $\nabla^{\perp}$ the normal connection on $M$. The decomposition of the induced connection $\widetilde{\nabla}$, into tangent and normal parts, leads to the Gauss and Weingarten formulas of $\Psi$ as follows $$\label{shape} \widetilde{\nabla}_VW= \nabla_VW + \mathrm{II}(V,W) \quad \quad \mathrm{and} \quad \quad \widetilde{\nabla}_V\zeta=-A_{\zeta}V+\nabla^{\perp}_V\,\zeta,$$ for every tangent vector fields $V,W\in\mathfrak{X}(M)\subset \overline{\mathfrak{X}}(M)$ and $\zeta\in\mathfrak{X}^{\perp}(M)$. Here $\mathrm{II}$ denotes the second fundamental form and $A_{\zeta}$ the shape operator (or Weingarten endomorphism) associated to $\zeta$. Every shape operator $A_{\zeta}$ is self-adjoint and the second fundamental form is symmetric, they are also related by the following formula $$\label{230321C} g\left(A_{\zeta}V,W\right) = \widetilde{g} \left(\mathrm{II}(V,W), \zeta \right).$$ The mean curvature vector field is defined by $\mathbf{H}=\frac{1}{n}\,\mathrm{trace}_{g}\mathrm{II}.$ From ([\[230321C\]](#230321C){reference-type="ref" reference="230321C"}) we have $$\mathrm{trace}(A_{\zeta})=n\widetilde{g}(\mathbf{H},\zeta).$$ Although, we are interested here in the class of spacetimes given in Definition [Definition 1](#190723B){reference-type="ref" reference="190723B"}, there are several properties which can be stated in a more general setting. Let $(B,g_B)$ be a two dimensional oriented Lorentzian manifold and $(F,g_F)$ a $m$-dimensional connected Riemannian manifold. 
Fix $\lambda\in C^{\infty}(B)$ with $\lambda>0$, we are interested in the Lorentzian warped product manifold given by the product manifold $\widetilde{M}=B\times F$ endowed with the Lorentzian metric $$\widetilde{g}=\pi_B^*(g_B)+(\lambda\circ\pi_B)^2\pi_F^*(g_F),$$ where $\pi_B$ and $\pi_F$ are the natural projections on $B$ and $F$, respectively [@One83 Chap. 7]. As usual, we denote the Lorentzian manifold $(\widetilde{M},\widetilde{g})$ as $B\times_{\lambda} F$ and $\lambda$ is called the warping function. The set of vector fields $\mathfrak{X}(B)$ and $\mathfrak{X}(F)$ can be lifted to $\mathfrak{X}(\widetilde{M})$. Typically, we use the same notation for a vector field and its lift and then, every vector field $E\in \mathfrak{X}(\widetilde{M})$ has a unique expression as $E=X+V$ where $X\in \mathfrak{X}(B)$ and $V\in \mathfrak{X}(F)$. For our aims here, we assume there exists a global lightlike vector field $\xi\in\mathfrak{X}(B)\subset\mathfrak{X}(\widetilde{M})$. That is, we have $g_{B}(\xi, \xi)=0$ and $\xi_{x}\neq 0$ for all $x\in B$. Taking into account that $\mathrm{dim}\, B=2$, it is not difficult to show that there exists a $1$-form $\alpha\in \Omega^{1}(B, \mathbb R)$ such that $$\label{250223B} \nabla^{B}\xi=\alpha \otimes \xi$$ where $\nabla^{B}$ is the Levi-Civita connection of $B$. The assumption on the existence of the vector field $\xi$ has the following key consequence. **Lemma 2**. *The distribution $D_{\xi}=\{E\in\mathfrak{X}(\widetilde{M}):\widetilde{g}(E,\xi)=0\}$ on $\widetilde{M}$ is involutive.* For $X+V, Y+ W \in D_{\xi}$ a straightforward computation gives $$\widetilde{g}([X+V, Y+ W], \xi)=g_{B}([X,Y], \xi)\circ \pi_{B}.$$ Taking into account that $X+V\in D_{\xi}$ if and only if $g_{B}(X,\xi)=0$, we obtain from ([\[250223B\]](#250223B){reference-type="ref" reference="250223B"}) that $$g_{B}([X,Y], \xi)=-g_{B}(\nabla^{B}_{X}\xi, Y)+ g_{B}(\nabla^{B}_{Y}\xi, X)=0.$$ $\blacksquare$ Therefore, through every point $(x,p)\in \widetilde{M}$ passes a maximal integral submanifold $\mathcal{L}$ of the distribution $D_{\xi}$ and we have a foliation of the manifold $\widetilde{M}$ by hypersurfaces. If we write $\gamma\colon I\to B$ for the maximal integral curve of the vector field $\xi$ with initial condition $\gamma(0)=x\in B$, then the hypersurface $\mathcal{L}$ is given by $$\mathcal{L}=\{(\gamma(t), p)\in B\times F: t\in I\, \, , p\in F\}.$$ $\mathcal{L}$ inherits a degenerate metric tensor from $\widetilde{g}$ whose radical is spanned by the vector field $\xi\mid_{\mathcal{L}}$. Every smooth section $\sigma= (\sigma_{B}, \mathrm{Id}_{F})$ of the natural projection $\mathcal{L}\to F$ provides a spacelike immersion in $\widetilde{M}$ with induced metric $g=\sigma^{*}(\widetilde{g})=(\lambda\circ \sigma_{B})^{2}g_{F}$. Hence the induced metric $g$ belongs to the same conformal class of $g_{F}$. In particular, $\mathcal{L}$ is a subset of the bundle of scales of $F$ for the conformal class of $g_{F}$ [@CS09 Chap. 1]. **Remark 3**. *The projection $B\times_{\lambda}F \to B$ is a semi-Riemannian submersion [@One66] (see also [@One83 Chap. 7]). In the terminology of semi-Riemannian submersions, every maximal integral submanifold $\mathcal{L}$ of the distribution $D_{\xi}$ is the horizontal lift of a maximal integral curve of the vector field $\xi$ on $B$. * Now let $\Psi:M\rightarrow B\times_{\lambda} F$ be an arbitrary spacelike immersion. The immersion $\Psi$ can be written $$\Psi=(\Psi_B,\Psi_F),$$ where $\Psi_B= \pi_{B}\circ \Psi$ and $\Psi_F = \pi_{F}\circ \Psi$. 
For the vector field $\xi |_{\Psi}$ we have $$0=\widetilde{g}(\xi |_{\Psi},\xi |_{\Psi})=\widetilde{g}(\xi^{\top},\xi^{\top})+\widetilde{g}(\xi^{\perp},\xi^{\perp}).$$ Hence, $\xi^{\perp}$ does not vanish at any point of $M$ and $\widetilde{g}(\xi^{\perp}, \xi^{\perp})\leq 0$; in other words, $\xi^{\perp}$ is a causal normal vector field. On the other hand, the vector field $\xi^{\top}$ vanishes identically if and only if $M\subset \mathcal{L}$. In this case, we say that $M$ factors through the integral hypersurface $\mathcal{L}$ of $D_{\xi}$.

**Remark 4**. * Let us recall that the two dimensional manifold $B$ is assumed to be orientable. Thus, the existence of the lightlike vector field $\xi$ implies that there is another lightlike vector field $\eta \in \mathfrak{X}(B)$ which is uniquely determined by the normalization condition $g_{B}(\xi, \eta)=-1$. As for $\xi$, we have $\nabla^{B}\eta= -\alpha \otimes \eta$ and the corresponding distribution $D_{\eta}$ is also involutive. Each maximal integral submanifold $\mathcal{N}$ of $D_{\eta}$ inherits a degenerate metric tensor from $\widetilde{g}$ whose radical is now spanned by the restriction of $\eta$ to $\mathcal{N}.$ *

In order to be used later, let us recall the notion of parabolic Riemannian manifold. A (not necessarily complete) Riemannian manifold is parabolic if the only subharmonic functions bounded from above that it admits are the constants. That is, a Riemannian manifold $M$ is parabolic when, for a smooth function $v\in C^{\infty}(M)$, the conditions $\Delta v \geq 0$ and $\sup_{M} v< +\infty$ imply that $v$ is constant (see, for instance, [@Ka] and [@Gr]). From a physical point of view, parabolicity is equivalent to the recurrence of the Brownian motion on a Riemannian manifold [@Gr]. Let us also recall that every complete Riemannian surface with non-negative Gaussian curvature is parabolic [@H]. Even more, every complete Riemannian surface with finite total curvature is parabolic [@H]. In arbitrary dimension there is no clear relation between parabolicity and sectional curvature. Nevertheless, there exist sufficient conditions to ensure the parabolicity of a Riemannian manifold of arbitrary dimension based on the volume growth of its geodesic balls [@AMR].

# Generalized Schwarzschild spacetimes

From now on $B\times_{\lambda} F$ is a generalized Schwarzschild spacetime, Definition [Definition 1](#190723B){reference-type="ref" reference="190723B"}. We can give two lightlike vector fields in $B$ as follows $$\label{11102022B} \xi=\dfrac{1}{f^2}\partial t+\partial r\,\text{ and }\,\eta=\frac{1}{2}(\partial t-f^2\partial r)$$ with $\widetilde{g}(\xi,\eta)=-1.$ The Levi-Civita connection $\nabla^B$ is directly computed from ([\[17112022A\]](#17112022A){reference-type="ref" reference="17112022A"}) as follows $$\nabla^B_{\partial t}\partial t=f^3f'\partial r,\quad\quad \nabla^B_{\partial t}\partial r=\dfrac{f'}{f}\partial t\quad\quad\text{and}\quad\quad \nabla^B_{\partial r}\partial r=-\dfrac{f'}{f}\partial r$$ where $f'=\partial_{r}f$. The Gauss curvature of the metric $g_B$ is $K^{B}=-\left((f')^2 + ff''\right).$ The $1$-form defined in ([\[250223B\]](#250223B){reference-type="ref" reference="250223B"}) satisfies $$\label{15112022C} \alpha= f'\left(fdt-\dfrac{1}{f}dr\right).$$ In particular, $\alpha(\xi)=0$ and $\alpha(\eta)=ff'$, so that $\xi$ is a geodesic vector field, whereas $\eta$ is pregeodesic and becomes geodesic after a suitable rescaling.
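The identities above are elementary but easy to get wrong by hand. The following sympy sketch, included only as a verification aid and not as part of the proofs, recomputes the Christoffel symbols of $g_B$ and checks that $\nabla^{B}\xi=\alpha\otimes\xi$, $\nabla^{B}\eta=-\alpha\otimes\eta$, $\alpha(\xi)=0$ and $\alpha(\eta)=ff'$.

```python
import sympy as sp

t, r = sp.symbols('t r')
f = sp.Function('f', positive=True)(r)
x = (t, r)
g = sp.Matrix([[-f**2, 0], [0, 1/f**2]])        # g_B = -f^2 dt^2 + f^{-2} dr^2
ginv = g.inv()

# Christoffel symbols Gamma[k][i][j] of g_B
Gamma = [[[sum(ginv[k, l]*(sp.diff(g[l, i], x[j]) + sp.diff(g[l, j], x[i])
                           - sp.diff(g[i, j], x[l])) for l in range(2))/2
           for j in range(2)] for i in range(2)] for k in range(2)]

def nabla(X):
    # (nabla_i X)^k as a 2x2 matrix (row i, column k)
    return sp.Matrix(2, 2, lambda i, k: sp.diff(X[k], x[i])
                     + sum(Gamma[k][i][j]*X[j] for j in range(2)))

xi    = sp.Matrix([1/f**2, 1])                  # components of xi
eta   = sp.Matrix([sp.Rational(1, 2), -f**2/2]) # components of eta
alpha = sp.Matrix([f.diff(r)*f, -f.diff(r)/f])  # components of alpha = f'(f dt - dr/f)

print((nabla(xi) - alpha*xi.T).applyfunc(sp.simplify))    # zero matrix
print((nabla(eta) + alpha*eta.T).applyfunc(sp.simplify))  # zero matrix
print(sp.simplify(alpha.dot(xi)), sp.simplify(alpha.dot(eta)))  # 0 and f*f'
```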
Let us recall that the Levi-Civita connection of $\widetilde{g}$ is given in [@One83 Prop. 7.35] as follows. For $X,Y\in \mathfrak{X}(B)\subset \mathfrak{X}(\widetilde{M})$ and $V,W\in \mathfrak{X}(F)\subset \mathfrak{X}(\widetilde{M})$, we have $$\label{250223A} \widetilde{\nabla}_{X}Y=\nabla^{B}_{X}Y, \quad \widetilde{\nabla}_{X}V=\widetilde{\nabla}_{V}X= \frac{X\lambda}{\lambda}V,\quad \widetilde{\nabla}_{V}W=-\frac{\widetilde{g}(V,W)}{\lambda}\nabla^{B}\lambda+ \nabla^{F}_{V}W,$$ where $\nabla^{B}$ and $\nabla^{F}$ are the Levi-Civita connections of $B$ and $F$, respectively. For every $h\in \mathcal{C}^{\infty}(B)$, we write $\nabla^{B}h$ for the gradient of $h$ with respect to the metric $g_{B}$. Moreover, $\widetilde{\nabla}(h\circ \pi_{B})= \nabla^{B}h \circ \pi_{B}$, [@One83 Lemma. 7.34]. Straightforward computations show for the natural coordinates $t$ and $r$ on $B$ and for the warping function $\lambda$ that $$\nabla^B t=-\dfrac{1}{f^2}\partial t,\quad \nabla^B r=f^2\partial r,\quad \nabla^B f=f^2f'\partial r\quad\text{and}\quad \nabla^B\lambda=-\dfrac{\lambda_t}{f^2}\partial t+f^2\lambda_r\partial r.$$

**Remark 5**. *Let $\mathcal{L}$ be an integral hypersurface of the distribution $D_{\xi}$. Recall that the null-Weingarten map $b_{\xi}$ is defined for every point $x\in \mathcal{L}$ as $$b_{\xi}\colon T_{x}\mathcal{L}/\xi_{x}\to T_{x}\mathcal{L}/\xi_{x}, \quad [w]\mapsto [\widetilde{\nabla}_{w}\xi],$$ where $[\,\,]$ denotes the class in the quotient vector space $T_{x}\mathcal{L}/\xi_{x}$, see [@Gall]. The lightlike manifold $\mathcal{L}$ is said to be totally geodesic when $b_{\xi}=0$. A direct computation from ([\[250223A\]](#250223A){reference-type="ref" reference="250223A"}) gives $b_{\xi}([w])= \frac{\xi \lambda}{\lambda}[w]$. This formula implies that $\mathcal{L}$ is a totally geodesic lightlike hypersurface if and only if $\xi\lambda=0$. In a similar way, the integral hypersurfaces of $D_{\eta}$ are totally geodesic lightlike hypersurfaces if and only if $\eta\lambda=0$. *

**Remark 6**. * Lorentzian manifolds admitting a global parallel and lightlike vector field $\xi$ were introduced in [@B]. Such Lorentzian manifolds are called Brinkmann spacetimes. Hence, a generalized Schwarzschild spacetime is a Brinkmann spacetime if and only if $\alpha=0$. Recall that a Lorentzian manifold $(\bar{M},\bar{g})$ is said to be static when it admits a Killing vector field $K$ with $\bar{g}(K,K)<0$. The timelike vector field $\partial_{t}$ in a generalized Schwarzschild spacetime is Killing if and only if $\lambda_{t}=0.$ This is the case of the classical Schwarzschild spacetime. *

**Remark 7**. *In a general setting, given a semi-Riemannian manifold $(M,g)$, a vector field $X\in \mathfrak{X}(M)$ is said to be recurrent when there is a $1$-form $\alpha$ on $M$ such that $\nabla X=\alpha \otimes X$, where $\nabla$ is the Levi-Civita connection of $g$. In particular, equations ([\[250223B\]](#250223B){reference-type="ref" reference="250223B"}) and ([\[250223A\]](#250223A){reference-type="ref" reference="250223A"}) imply that the vector fields $\xi$ and $\eta$ are recurrent. This property widely generalizes that of Brinkmann spacetimes.
Lorentzian manifolds with recurrent lightlike vector fields have been studied in [@Leistner].*

For $\Psi:M\rightarrow B\times_{\lambda} F$ a spacelike immersion in a generalized Schwarzschild spacetime, we have the smooth functions on $M$ given by $u=t\circ \Psi_{B}$ and $v=r\circ \Psi_{B}.$ We have for the gradients of these functions with respect to the induced metric $g$ that $$\label{270223D} \nabla u=-\frac{1}{(f\circ \Psi_{B})^2}\partial t^{\top}\quad\text{and}\quad \nabla v=(f\circ \Psi_{B})^2\partial r^{\top}$$ and therefore $$\label{21112022A} \xi^{\top}=\dfrac{1}{(f\circ \Psi_{B})^2}\nabla v-\nabla u\quad\text{and} \quad \eta^{\top}=-\dfrac{1}{2}\left(\nabla v+(f\circ \Psi_{B})^2\nabla u\right).$$ These formulas ([\[21112022A\]](#21112022A){reference-type="ref" reference="21112022A"}) lead to the following characterization for spacelike immersions through integral submanifolds of the distributions $D_{\xi}$ or $D_{\eta}.$

**Lemma 8**. *A spacelike immersion $\Psi:M\rightarrow B\times_{\lambda} F$ in a generalized Schwarzschild spacetime factors through an integral hypersurface $\mathcal{L}$ (resp. $\mathcal{N}$) of the distribution $D_{\xi}$ $($resp. $D_{\eta})$ if and only if $$\nabla v=(f\circ \Psi_{B})^2 \nabla u \quad (\mathrm{resp.}\, \nabla v=- (f\circ \Psi_{B})^2 \nabla u ).$$*

The spacelike immersion $F\hookrightarrow B\times_{\lambda} F$ given by $x\mapsto (t_{0}, r_{0}, x)$ is called the slice at level $(t_{0}, r_{0}).$ From [@One83 Prop. 7.35], its mean curvature vector field is $$\label{110923A} \mathbf{H}= -\frac{\nabla^{B}\lambda}{\lambda}(t_{0}, r_{0})$$ and for $\lambda(t,r)=r$, this formula reduces to $\mathbf{H}= -\frac{f^{2}(r_{0})}{ r_{0}}\partial r\mid_{(t_{0}, r_{0})}.$ Recall that slices are totally umbilical spacelike embedded submanifolds, [@One83 Chap. 7], and note that every spacelike immersion which factors through an integral hypersurface of $D_{\xi}$ and, at the same time, through an integral hypersurface of $D_{\eta}$ must factor through a slice.

**Example 9**. *As was mentioned in the Introduction section, the $(m+2)$-dimensional Minkowski spacetime $\mathbb L^{m+2}$ can be described in two ways as a generalized Schwarzschild spacetime. The first one is $B=\mathbb R^{2}$, $f(r)=1$, $\lambda(t,r)=1$ and $(F,g)=\mathbb E^{m}$. The lightlike vector fields in ([\[11102022B\]](#11102022B){reference-type="ref" reference="11102022B"}) are $\xi=\partial t+ \partial r$ and $\eta=\frac{1}{2}(\partial t-\partial r).$ Hence, the leaves of the lightlike foliations are lightlike hyperplanes in $\mathbb L^{m+2}.$ The second one is obtained by taking $B=\mathbb R\times \mathbb R_{+}$, $f(r)=1$, $\lambda(t,r)=r$ and $(F,g)=\mathbb S^{m}$. The lightlike vector fields in ([\[11102022B\]](#11102022B){reference-type="ref" reference="11102022B"}) are $\xi=\partial t + \partial r$ and $\eta=\frac{1}{2}(\partial t-\partial r).$ The smooth map $$\label{16082023-1} \varphi:(\mathbb R\times \mathbb R_{+})\times_{r} \mathbb S^{m}\to \mathbb L^{m+2}, \quad (t,r, x)\mapsto (t, rx)$$ provides an isometry onto the open subset $\{(t, p)\in \mathbb L^{m+2}: p\neq 0\}$.
The foliations by lightlike hypersurfaces given in Lemma [Lemma 2](#190723A){reference-type="ref" reference="190723A"} correspond via this isometry with the lightlike cones with vertex at the points $(t,0)\in \mathbb L^{m+2}.$* # Immersions in generalized Schwarzschild spacetimes Along this section $$\Psi:M\rightarrow B\times_{\lambda} F$$ is a fixed spacelike immersion which does not necessary factors through an integral submanifold of $D_{\xi}$ or $D_{\eta}$. For every vector field $V\in \mathfrak{X}(M)$ and $x\in M$, we denote $$V^{B}_{x}= T_{x}\Psi_{B}\cdot V_{x} \quad \textrm{and }\quad V^{F}_{x}= T_{x}\Psi_{F}\cdot V_{x}.$$ Since we agree to ignore the differential map of $\Psi$, this means that $V=V^{B}+ V^{F}.$ We get that $$\label{090523A} V^{B}=g(V, \nabla u)\partial t+ g(V, \nabla v)\partial r,$$ and from ([\[270223D\]](#270223D){reference-type="ref" reference="270223D"}), we have $$\label{17112022E} (V^{B})^{\top}=-(f\circ \Psi_{B})^2V(u)\nabla u + \dfrac{1}{(f\circ\Psi_{B})^2}V(v)\nabla v.$$ As a consequence of ([\[250223A\]](#250223A){reference-type="ref" reference="250223A"}), we get $$\widetilde{\nabla}_{ V}(\xi\mid_{\Psi})=\alpha(V^{B})\xi\mid_{\Psi}+\Big(\dfrac{\xi\lambda}{\lambda}\circ \Psi_{B}\Big) V^F.$$ The Gauss and Weingarten formulas ([\[shape\]](#shape){reference-type="ref" reference="shape"}) imply that $$\label{17112022B} \nabla_V\xi^{\top}+\mathrm{II}(V,\xi^{\top})- A_{\xi^{\bot}}V+\nabla^{\perp}_V\,\xi^{\bot}=\alpha(V^{B})\xi\mid_{\Psi}+\Big(\dfrac{\xi\lambda}{\lambda}\circ \Psi_{B}\Big) V^F.$$ In particular for the tangent parts to $M$, we get $$\label{270223B} \nabla_V\xi^{\top}-A_{\xi^{\bot}}V=\alpha(V^{B})\xi^{\top}+\Big(\dfrac{\xi\lambda}{\lambda}\circ \Psi_{B}\Big)( V^{F})^{\top}.$$ From ([\[17112022E\]](#17112022E){reference-type="ref" reference="17112022E"}) and taking into account that $( V^{F})^{T}= V-( V^{B})^{T}$, the equation ([\[270223B\]](#270223B){reference-type="ref" reference="270223B"}) reduces to $$\nabla_V\xi^{\top}-A_{\xi^{\bot}}V=\alpha(V^{B})\xi^{\top}+\Big(\dfrac{\xi\lambda}{\lambda}\circ \Psi_{B}\Big)\Big(V+(f\circ \Psi_{B})^2V(u)\nabla u - \dfrac{1}{(f\circ\Psi_{B})^2}V(v)\nabla v\Big)$$ and then, $$\label{17112022G} \begin{split} \mathrm{div}(\xi^{\top}) - n\,\widetilde{g}(\mathbf{H},\xi^{\bot}) & = \Big(\dfrac{\xi\lambda}{\lambda}\circ \Psi_{B}\Big)\left[n+(f\circ \Psi_{B})^2\|\nabla u\|^2 -\dfrac{1}{(f\circ \Psi_{B})^2}\|\nabla v\|^2\right] \\ & + (\Psi_{B}^{*}\alpha) (\xi^{\top}). \end{split}$$ Note that $(\Psi_{B}^{*}\alpha) (\xi^{\top})=\alpha((\xi^{\top})^{B})$. A straightforward computation from ([\[21112022A\]](#21112022A){reference-type="ref" reference="21112022A"}) and ([\[090523A\]](#090523A){reference-type="ref" reference="090523A"}) gives $$(\xi^{\top})^{B}=\Big(\frac{1}{(f\circ \Psi_{B})^2}g(\nabla u, \nabla v)- \| \nabla u\|^2\Big)\partial t +\Big(\frac{1}{(f\circ \Psi_{B})^2}\| \nabla v\|^2 -g(\nabla u, \nabla v) \Big)\partial r$$ and then, from ([\[15112022C\]](#15112022C){reference-type="ref" reference="15112022C"}), we have $$(\Psi_{B}^{*}\alpha) (\xi^{\top})= - (ff'\circ \Psi_{B}) \| \xi^{\top}\|^{2}.$$ On the other hand, one can compute that $$\mathrm{trace}_{g}(\Psi_{B}^{*}g_{B})=-(f\circ \Psi_{B})^2\|\nabla u\|^2+ \dfrac{1}{(f\circ \Psi_{B})^2}\|\nabla v\|^2= -2 g(\xi^{\top}, \eta^{\top}),$$ where $\mathrm{trace}_{g}(\Psi_{B}^{*}g_{B})=\sum_{i=1}^n (\Psi_{B}^{*}g_{B})(E_{i}, E_{i})$ for a local orthonormal frame in $M$. **Remark 10**. 
*Taking into account that $n=\mathrm{trace}_{g}(g)=\mathrm{trace}_{g}(\Psi_{B}^{*}g_{B})+(\lambda\circ \Psi_{B})^2 \mathrm{trace}_{g}(\Psi_{F}^{*}g_{F}),$ we get $g(\xi^{\top}, \eta^{\top})>-n/2$. * From ([\[17112022G\]](#17112022G){reference-type="ref" reference="17112022G"}), the above computations give the following result. **Lemma 11**. *Let $\Psi:M\rightarrow B\times_{\lambda} F$ be a spacelike immersion in a generalized Schwarzschild spacetime. Then the following formula holds $$\mathrm{div}(\xi^{\top}) - n\,\widetilde{g}(\mathbf{H},\xi^{\bot}) = \Big(\dfrac{\xi\lambda}{\lambda}\circ \Psi_{B}\Big)\left[n+2g(\xi^{\top}, \eta^{\top})\right] - (ff'\circ \Psi_{B}) \| \xi^{\top}\|^{2}.$$* **Remark 12**. *We know that $\xi^{\top}=0$ for a spacelike immersion which factors through an integral hypersurface $\mathcal{L}$ of $D_{\xi}$. In this case, the above Lemma reduces to $$\widetilde{g}(\mathbf{H},\xi) = -\dfrac{\xi\lambda}{\lambda}\circ \Psi_{B}.$$ * The following result provides an integral characterization for compact spacelike immersions through integral hypersurfaces of $D_{\xi}$. **Theorem 13**. *Assume $M$ is compact and $\Psi:M\rightarrow B\times_{\lambda} F$ is a spacelike immersion in a generalized Schwarzschild spacetime with $f'> 0$ (resp. $f'<0$). Then $$\label{160523C} \int_{M} \Big[ n\,\widetilde{g}(\mathbf{H},\xi^{\bot}) +\Big(\dfrac{\xi\lambda}{\lambda}\circ \Psi_{B}\Big)\left[n+2g(\xi^{\top}, \eta^{\top})\right]\Big]\, d\mu_{g} \geq 0.\quad (\textrm{resp.} \leq 0).$$ The equality holds if and only if $M$ factors through an integral hypersurface $\mathcal{L}$ of $D_{\xi}.$* Suppose the case $f'> 0$. The inequality ([\[160523C\]](#160523C){reference-type="ref" reference="160523C"}) is a direct consequence of Lemma [Lemma 11](#160523A){reference-type="ref" reference="160523A"} and the classical divergence theorem. Furthermore, ([\[160523C\]](#160523C){reference-type="ref" reference="160523C"}) becomes an equality if and only if $$\int_{M}(ff'\circ \Psi_{B})\| \xi^{\top}\|^{2}\, d\mu_{g}=0.$$ Since $f'>0$ and the immersion is spacelike, we get $\xi^{\top}=0$ and this fact ends the proof. The proof for $f'<0$ works in a similar way. $\blacksquare$ **Remark 14**. *We have for the family of functions given in ([\[app\]](#app){reference-type="ref" reference="app"}) that $$(ff')(r)=\dfrac{-2\Lambda r^{2m}+ m(m^2-1)(\mathbf{M}r^{m-1}- q^2)}{m(m+1)r^{2m-1}}.$$ Therefore the assumption $f'>0$ is satisfied when $\Lambda\leq 0$ and $\mathbf{M}v^{m-1} >q^2$. In particular, it holds for the exterior Schwarzschild spacetime.* We just sketch the proof of Lemma [Lemma 11](#160523A){reference-type="ref" reference="160523A"} for the lightlike vector field $\eta$. 
The condition $\widetilde{g}(\xi, \eta)=-1$ implies $\nabla^{B}\eta= -\alpha \otimes \eta$ (see Remark [Remark 4](#300523A){reference-type="ref" reference="300523A"}) and then $$\widetilde{\nabla}_{ V}(\eta\mid_{\Psi})=-\alpha(V^{B})\eta\mid_{\Psi}+\Big(\dfrac{\eta\lambda}{\lambda}\circ \Psi_{B}\Big) V^F.$$ From the Gauss and Weingarten formulas we have $$\label{240523A} \nabla_V\eta^{\top}+\mathrm{II}(V,\eta^{\top})- A_{\eta^{\bot}}V+\nabla^{\perp}_V\,\eta^{\bot}=-\alpha(V^{B})\eta\mid_{\Psi}+\Big(\dfrac{\eta\lambda}{\lambda}\circ \Psi_{B}\Big) V^F,$$ and taking tangent parts, we get $$\label{210523C} \nabla_V\eta^{\top}-A_{\eta^{\bot}}V=-\alpha(V^{B})\eta^{\top}+\Big(\dfrac{\eta\lambda}{\lambda}\circ \Psi_{B}\Big) (V^F)^{\top}.$$ Hence we have $$\begin{split} \mathrm{div}(\eta^{\top}) - n\,\widetilde{g}(\mathbf{H},\eta^{\bot}) & = \Big(\dfrac{\eta\lambda}{\lambda}\circ \Psi_{B}\Big)\left[n+(f\circ \Psi_{B})^2\|\nabla u\|^2 -\dfrac{1}{(f\circ \Psi_{B})^2}\|\nabla v\|^2\right] \\ & -(\Psi_{B}^{*}\alpha) (\eta^{\top}). \end{split}$$ A straightforward computation from ([\[15112022C\]](#15112022C){reference-type="ref" reference="15112022C"}) shows that $$(\Psi_{B}^{*}\alpha)(\eta^{\top})=\Big(\frac{ff'}{2}\circ \Psi_{B}\Big)\mathrm{trace}_{g}(\Psi_{B}^{*}g_{B}).$$ Therefore, the corresponding version of Lemma [Lemma 11](#160523A){reference-type="ref" reference="160523A"} for the vector field $\eta$ reads as follows.

**Lemma 15**. *Let $\Psi:M\rightarrow B\times_{\lambda} F$ be a spacelike immersion in a generalized Schwarzschild spacetime. Then the following formula holds*

*$$\label{070323B} \mathrm{div}(\eta^{\top}) - n\,\widetilde{g}(\mathbf{H},\eta^{\bot}) = \Big(\dfrac{\eta\lambda}{\lambda}\circ \Psi_{B}\Big)\left[n+2 g(\xi^{\top}, \eta^{\top})\right]+ (ff'\circ \Psi_{B})g( \xi^{\top}, \eta^{\top}).$$ For a submanifold $M$ which factors through an integral hypersurface $\mathcal{N}$ of $D_{\eta}$ we have $$\widetilde{g}(\mathbf{H},\eta) = -\dfrac{\eta\lambda}{\lambda}\circ \Psi_{B}.$$*

**Remark 16**. *From ([\[070323B\]](#070323B){reference-type="ref" reference="070323B"}) and for $M$ compact, we get $$\int_{M} \Big[ n\,\widetilde{g}(\mathbf{H},\eta^{\bot}) +\Big(\dfrac{\eta\lambda}{\lambda}\circ \Psi_{B}\Big)\left[n+ 2 g(\xi^{\top}, \eta^{\top})\right]\Big]\, d\mu_{g} = -\int_{M}(ff'\circ \Psi_{B})g( \xi^{\top}, \eta^{\top})\, d \mu_{g}.$$ The right-hand side of this integral formula has no prescribed sign although we impose $f'$ to be signed. Hence, a similar result to Theorem [Theorem 13](#070323C){reference-type="ref" reference="070323C"} does not hold for the vector field $\eta$. *

# Immersions through lightlike integral hypersurfaces

In this section, we will focus on spacelike immersions $\Psi:M\rightarrow B\times_{\lambda} F$ through lightlike integral hypersurfaces of the distributions $D_{\xi}$ or $D_{\eta}.$ We write $\mathcal{L}$ (resp. $\mathcal{N}$) for a general integral hypersurface of $D_{\xi}$ (resp. $D_{\eta}$). The following results specialize several formulas of the above sections to these cases; for example, the formula ([\[21112022A\]](#21112022A){reference-type="ref" reference="21112022A"}) simplifies by means of Lemma [Lemma 8](#020323A){reference-type="ref" reference="020323A"}. **Lemma 17**. *Let $\Psi:M\rightarrow B\times_{\lambda} F$ be a spacelike immersion through an integral hypersurface $\mathcal{L}$ of $D_{\xi}$. The following formulas hold* 1. *$\alpha(V^B)=0$ for every $V\in \mathfrak{X}(M).$* 2. *$\eta^{\top}=- \nabla v$.* 3. *$V^{B}= g(V, \nabla v)\xi$.
In particular, we get $(V^B)^{\top}=0$ and $(V^{F})^{\top}=V.$* 4. *When $M$ has codimension two, the normal tangent bundle $TM^{\perp}$ is spanned by the normal lightlike vector fields $\xi$ and $$\ell^{\xi}=-\frac{\| \nabla v\|^{2}}{2}\xi +\eta^{\perp}.$$ We also have $\widetilde{g}(\xi, \ell^{\xi})=-1$.* **Lemma 18**. *Let $\Psi:M\rightarrow B\times_{\lambda} F$ be a spacelike immersion through an integral hypersurface $\mathcal{N}$ of $D_{\eta}$. The following formulas hold* 1. *$\alpha(V^B)=-\left(\dfrac{2f'}{f}\circ \Psi_{B}\right) g(V, \nabla v)$ for every $V\in \mathfrak{X}(M).$* 2. *$\xi^{\top}=\left(\dfrac{2}{f^2}\circ \Psi_{B}\right) \nabla v$.* 3. *$V^{B}=-\left(\dfrac{2}{f^2}\circ \Psi_{B}\right) g(V,\nabla v)\eta$. In particular, we get $(V^B)^{\top}=0$ and $(V^{F})^{\top}=V.$* 4. *When $M$ has codimension two, the normal tangent bundle $TM^{\perp}$ is spanned by the normal null vector fields $\eta$ and $$\ell^{\eta}=\xi^{\perp}-\frac{2\| \nabla v\|^{2}}{(f\circ \Psi_{B})^4}\eta.$$ We also have $\widetilde{g}(\eta, \ell^{\eta})=-1$.* **Remark 19**. *In these cases, the projection $\Psi_{F}\colon M \to F$ is also an immersion. Indeed, from Lemma [Lemma 17](#210523D){reference-type="ref" reference="210523D"}, the equality $T_{x}\Psi_{F}\cdot v=0$ for $x\in M$ with $v\in T_{x}M$ and $\Psi(M)\subset \mathcal{L}$ give $g(v,v)=0$ and then $v=0$ . The same argument works for $\mathcal{N}$. For spacelike immersions $\Psi:M\rightarrow B\times_{\lambda} F$ through these lightlike integral hypersurfaces the induced metric is $g=(\lambda\circ \Psi_{B})^{2}\Psi_{F}^{*}(g_{F}).$ * **Lemma 20**. *Let $\Psi:M\rightarrow B\times_{\lambda} F$ be a spacelike immersion through an integral hypersurface $\mathcal{L}$ of $D_{\xi}$. For every $V\in \mathfrak{X}(M)$ we have $$A_{\xi}V=-\Big(\frac{\xi \lambda}{\lambda}\circ \Psi_{B}\Big)V\,\, \textrm{ and }\,\,A_{\eta^{\perp}}V=-\Big(\frac{\eta \lambda}{\lambda}\circ \Psi_{B}\Big)V- \, \nabla_{V }\nabla v.$$ In particular, we get $\widetilde{g}(\mathbf{H}, \eta^{\perp})=-\frac{\eta \lambda}{\lambda}\circ \Psi_{B} - \frac{1}{n} \Delta v.$* Under our assumptions $\xi^{\top}=0$ and by means of Lemma [Lemma 17](#210523D){reference-type="ref" reference="210523D"}, the equation ([\[270223B\]](#270223B){reference-type="ref" reference="270223B"}) reduces to the announced formula for $A_{\xi}$. From Lemma [Lemma 17](#210523D){reference-type="ref" reference="210523D"} we have $\eta^{\top}=-\nabla v$. Hence, formula ([\[210523C\]](#210523C){reference-type="ref" reference="210523C"}) ends the proof. $\blacksquare$ In a similar way we have. **Lemma 21**. *Let $\Psi:M\rightarrow B\times_{\lambda} F$ be a spacelike immersion through an integral hypersurface $\mathcal{N}$ of $D_{\eta}$. For every $V\in \mathfrak{X}(M)$ we have $$A_{\eta}V=-\Big(\dfrac{\eta\lambda}{\lambda}\circ \Psi_{B}\Big)V \quad \textrm{ and }\quad A_{\xi^{\perp}}V=-\Big(\frac{\xi \lambda}{\lambda}\circ \Psi_{B}\Big)V + \frac{2}{(f \circ \Psi_{B})^2} \nabla_{V }\nabla v.$$ In particular, we get $\widetilde{g}(\mathbf{H}, \xi^{\bot})=-\frac{\xi \lambda}{\lambda}\circ \Psi_{B}+ \frac{2}{n (f\circ\Psi_{B})^2} \Delta v.$* In order to avoid ambiguities, we add the following terminology. A function $f$ with values in $\mathbb R$ is said to be signed when $f\geq 0$ or $f \leq 0$ on whole domain. The assumptions on the functions $\eta \lambda$ and $\xi\lambda$ in the statements of the following results are only required along the immersions $\Psi$. **Corollary 22**. *Assume $\eta \lambda$ $($resp. 
$\xi\lambda)$ signed and let $\Psi:M\rightarrow B\times_{\lambda} F$ be a compact spacelike immersion through an integral hypersurface $\mathcal{L}$ (resp. $\mathcal{N}$) of the distribution $D_{\xi}$ $($resp. $D_{\eta})$ with $\mathbf{H}=0$. Then $M$ factors through a slice with $u=t_{0}$ and $v=r_{0}$ such that $\nabla^{B}\lambda (t_{0}, r_{0})=0$ and $M$ is minimal in $F$.*

We give the proof only for the case of $D_{\xi}.$ From Lemma [Lemma 20](#16032023A){reference-type="ref" reference="16032023A"}, we have $$\frac{\eta \lambda}{\lambda}\circ \Psi_{B} + \frac{1}{n} \Delta v=0.$$ The assumption on the sign of $\eta \lambda$ implies that $\Delta v$ is also signed. Since $M$ is compact, the divergence theorem then forces $\Delta v=0$, so $v$ is a constant function $r_{0}$, and Lemma [Lemma 8](#020323A){reference-type="ref" reference="020323A"} implies that $u$ is also a constant function $t_{0}$. Now, consider the string of smooth maps $$M\stackrel{\Psi_{F}}{\longrightarrow } F\hookrightarrow B\times_{\lambda} F,$$ where the second map is the slice at level $(t_{0}, r_{0}).$ From [@Chen Chap. 3], we know that for an orthonormal local frame $(E_{1}, \cdots , E_{n})$ of the induced metric $g$, we have $$\mathbf{H}= \mathbf{H}'+\frac{1}{n}\sum_{i=1}^{n} \overline{\mathrm{II}}(T\Psi_F\cdot E_{i},T\Psi_F\cdot E_{i}),$$ where $\mathbf{H}'$ denotes the mean curvature vector field of $\Psi_{F}$ and $\overline{\mathrm{II}}$ is the second fundamental form of the slice at $(t_{0}, r_{0}).$ Therefore, as a consequence of ([\[110923A\]](#110923A){reference-type="ref" reference="110923A"}) and Remark [Remark 19](#110923B){reference-type="ref" reference="110923B"}, we get that $$\mathbf{H}= \mathbf{H}'-\dfrac{\nabla^{B} \lambda}{\lambda}(t_{0}, r_{0}).$$ Hence $\mathbf{H}=0$ gives that $\mathbf{H}'=0$ and $\nabla^{B} \lambda(t_{0}, r_{0})=0.$ $\blacksquare$

As a direct consequence of Lemmas [Lemma 20](#16032023A){reference-type="ref" reference="16032023A"} and [Lemma 21](#16032023B){reference-type="ref" reference="16032023B"}, for a spacelike submanifold through an integral hypersurface of $D_{\xi}$ (resp. $D_{\eta})$, the normal vector field $\eta^{\perp}$ (resp. $\xi^{\perp}$) is umbilic if and only if there is $h\in\mathcal{C}^{\infty}(M)$ such that $$\label{22032023A} \nabla_V\nabla v=hV,$$ for every $V\in\mathfrak{X}(M)$. Then, in both situations, we have $\mathcal{L}_{\nabla v}g=2hg$ and therefore $\nabla v$ is a conformal vector field on $(M,g)$.

**Theorem 23**. *Let $\Psi:M\rightarrow B\times_{\lambda} F$ be a compact spacelike immersion through an integral hypersurface $\mathcal{L}$ (resp. $\mathcal{N}$) of $D_{\xi}$ $($resp. $D_{\eta})$ with $\mathrm{Ric}^g(\nabla v,\nabla v)\leq 0$. Then $\eta^{\perp}$ $($resp. $\xi^{\perp})$ is an umbilical direction if and only if $M$ factors through a slice.*

Since $\nabla v$ is a conformal vector field and $\textrm{Ric}^g(\nabla v,\nabla v)\leq 0$, it follows that $\nabla v$ is a Killing vector field (see [@Poor Chap. 5]). This necessarily implies that $h=0$ in ([\[22032023A\]](#22032023A){reference-type="ref" reference="22032023A"}) and therefore $\Delta v=0$. Taking into account that $M$ is assumed to be compact, we get that $v$ is a constant function. Furthermore, from Lemma [Lemma 8](#020323A){reference-type="ref" reference="020323A"} the function $u$ must also be a constant. The converse is obvious. $\blacksquare$

Equation ([\[22032023A\]](#22032023A){reference-type="ref" reference="22032023A"}) implies that $\nabla v$ is a conformal gradient vector field on the Riemannian manifold $(M,g)$.
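A concrete instance may help to fix ideas: on the unit round sphere the restriction $v=\cos\theta$ of a linear function satisfies $\mathrm{Hess}\,v=-v\,g$, so $\nabla v$ is a conformal gradient vector field with $h=-v$, and $\Delta v+2v=0$, the first-eigenvalue equation that reappears in Remark 26 below. The following sympy sketch, purely illustrative, verifies this on $\mathbb S^{2}$.

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
x = (th, ph)
g = sp.diag(1, sp.sin(th)**2)                   # round metric of S^2 (c = 1)
ginv = g.inv()
Gamma = [[[sum(ginv[k, l]*(sp.diff(g[l, i], x[j]) + sp.diff(g[l, j], x[i])
                           - sp.diff(g[i, j], x[l])) for l in range(2))/2
           for j in range(2)] for i in range(2)] for k in range(2)]

v = sp.cos(th)                                  # restriction of a linear function
hess = sp.Matrix(2, 2, lambda i, j: sp.diff(v, x[i], x[j])
                 - sum(Gamma[k][i][j]*sp.diff(v, x[k]) for k in range(2)))

print((hess + v*g).applyfunc(sp.simplify))      # zero matrix: Hess v = -v g
lap = sum(ginv[i, j]*hess[i, j] for i in range(2) for j in range(2))
print(sp.simplify(lap + 2*v))                   # zero: Delta v = -2 v
```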
From a classic result by Obata [@Ob], the existence of such vector fields on compact Riemannian manifolds has been addressed in [@DAL Theor. 1]. As a direct consequence of this result, we have the following.

**Theorem 24**. *Let $\Psi:M\rightarrow B\times_{\lambda} F$ be a compact spacelike immersion through an integral hypersurface $\mathcal{L}$ (resp. $\mathcal{N}$) of $D_{\xi}$ $($resp. $D_{\eta})$ with $\eta^{\perp}$ $($resp. $\xi^{\perp})$ an umbilical direction. Assume the Ricci tensor of the induced metric $g$ satisfies $$0< \mathrm{Ric}^{g}\leq (n-1)\left( 2- \frac{nc}{\lambda_{1}}\right)c,$$ for a constant $c$, where $\lambda_{1}$ is the first non-trivial eigenvalue of the Laplace operator of the metric $g$. If $\nabla v$ is a nonzero vector field, then $(M,g)$ is isometric to the sphere $\mathbb S^{n}(c)$ of constant sectional curvature $c$.*

**Remark 25**. *A careful reading of [@DAL Theor. 1] shows that we only need the above condition on the Ricci tensor for the vector field $\nabla (\frac{\Delta v}{n}+c v)$.*

**Remark 26**. *From [@Y Chap. 1], we know that for every conformal vector field $V$ in an $n$-dimensional Riemannian manifold, the following formula holds $$V(S^g)=-\frac{2(n-1)}{n}\Delta(\mathrm{div} V)- \frac{2}{n}\mathrm{div}V\cdot S^g,$$ where $S^g$ is the scalar curvature of $g$. Under the assumptions of Theorem [Theorem 24](#260723A){reference-type="ref" reference="260723A"}, the vector field $\nabla v$ is conformal and the manifold $(M,g)$ is isometric to the sphere $\mathbb S^{n}(c)$. Therefore, the above formula reduces to $$0=\Delta(\Delta v)+n c\Delta v.$$ This implies that $\Delta v+ nc\, v=k$ for some constant $k\in \mathbb R$, since harmonic functions on the compact manifold $M$ are constant. Hence for $w:=v-\frac{k}{nc}$, we have that $\Delta w+ nc w=0$ and either $v=\frac{k}{nc}$ or $w$ is an eigenfunction of the Laplace operator of the sphere $\mathbb S^n(c)$ corresponding to $n c$. This is the well-known value of the first non-trivial eigenvalue $\lambda_1$ of the Laplace operator of the sphere $\mathbb S^n(c)$, [@Cha Chap. 2]. The space of homogeneous harmonic polynomials of $\mathbb R^{n+1}$ of degree $1$ restricted to $\mathbb S^{n}(c)$ constitutes the eigenspace corresponding to $\lambda_{1}$. In other words, there is $a\in \mathbb R^{n+1}$ such that $v(x)=\langle x, a\rangle+ \frac{k}{nc},$ where $\langle \,,\,\rangle$ is the usual Euclidean inner product. *

**Proposition 27**. *Let $\Psi:M\rightarrow B\times_{\lambda} F$ be a spacelike immersion through an integral hypersurface $\mathcal{L}$ of $D_{\xi}$. For every $V\in\mathfrak{X}(M)$ we have $$\nabla_V^{\perp}\xi=-\left(\dfrac{\xi\lambda}{\lambda}\circ\Psi_B\right)g(\nabla v,V)\xi\,\, \textrm{ and }\,\, \nabla_V^{\perp}\eta^{\perp}=-\left(\dfrac{\eta\lambda}{\lambda}\circ\Psi_B\right)g(\nabla v,V)\xi+\mathrm{II}(\nabla v,V).$$*

The first assertion is a direct computation from equation ([\[17112022B\]](#17112022B){reference-type="ref" reference="17112022B"}), using Lemmas [Lemma 17](#210523D){reference-type="ref" reference="210523D"} and [Lemma 20](#16032023A){reference-type="ref" reference="16032023A"} and taking into account that $V^{F}-V=-V^{B}$.
On the other hand, Lemmas [Lemma 17](#210523D){reference-type="ref" reference="210523D"} and [Lemma 20](#16032023A){reference-type="ref" reference="16032023A"} reduce equation ([\[240523A\]](#240523A){reference-type="ref" reference="240523A"}) to $$-\mathrm{II}(V,\nabla v)+\Big( \frac{\eta \lambda}{\lambda}\circ \Psi_{B}\Big) V+\nabla^{\perp}_V\,\eta^{\bot}=\Big(\dfrac{\eta\lambda}{\lambda}\circ \Psi_{B}\Big) V^F.$$ and again from $V^{F}-V=-V^{B}$, the above formula ends the proof. $\blacksquare$ In a similar way we obtain the following lemma. **Proposition 28**. *Let $\Psi:M\rightarrow B\times_{\lambda} F$ be a spacelike immersion through an integral hypersurface $\mathcal{N}$ of $D_{\eta}$. For every $V\in\mathfrak{X}(M)$ we have $$\nabla_V^{\perp}\eta=\dfrac{2}{(f\circ\Psi_B)^2}\left(ff'\circ\Psi_B+\dfrac{\eta\lambda}{\lambda}\circ\Psi_B\right)g(\nabla v,V)\eta$$ and $$\nabla_V^{\perp}\xi^{\perp}=\dfrac{2}{(f\circ\Psi_B)^2}\left[g(\nabla v,V)\left(-\left(ff'\circ\Psi_B\right)\xi^{\perp}+\left(\dfrac{\xi\lambda}{\lambda}\circ\Psi_B\right)\eta\right)-\mathrm{II}(\nabla v,V)\right].$$* As a direct consequence of Propositions [Proposition 27](#28032023A){reference-type="ref" reference="28032023A"} and [Proposition 28](#28032023B){reference-type="ref" reference="28032023B"} we have. **Theorem 29**. *Let $\Psi:M\rightarrow B\times_{\lambda} F$ be a spacelike immersion through an integral hypersurface $\mathcal{L}$ (resp. $\mathcal{N}$) of $D_{\xi}$ (resp. $D_{\eta}$). Assume $\xi\lambda\neq 0$ $($resp. $ff'+\frac{\eta\lambda}{\lambda}\neq 0$), then the following assertions are equivalent* 1. *$\nabla_V^{\perp}\xi=0$ $($resp. $\nabla_V^{\perp}\eta=0)$ for every $V\in\mathfrak{X}(M)$.* 2. *$M$ factors through a slice.* **Remark 30**. * In order to study the applicability of this result to the case $B\times_{r}\mathbb S^{m}$ where $f^{2}(r)$ is given in ([\[app\]](#app){reference-type="ref" reference="app"}), recall $\xi \lambda=1$ and a direct computation shows that $ff'+\frac{\eta \lambda}{\lambda}\neq 0$ if and only if $$-2\Lambda r^{m+1}+m(m+1)(2m\mathbf{M}-r^{m-1}-(2m-1)q^2r^{1-m})\neq 0.$$ For the exterior Schwarzschild spacetime with mass $\mathbf{M}$, we have $$ff'+\frac{\eta\lambda}{\lambda}=\frac{2m\mathbf{M}-r^{m-1}}{2r^m}.$$ Therefore, the assumption in Theorem [Theorem 29](#250523A){reference-type="ref" reference="250523A"} is always satisfied for the distribution $D_{\xi}$ but the condition for $D_{\eta}$ holds if and only if the value $2m\mathbf{M}$ is not achieved for the function $v^{m-1}\in C^{\infty}(M)$. * # Codimension two spacelike immersions From now on, we assume $n=m$, that is, the spacelike immersion $\Psi:M\rightarrow B\times_{\lambda} F$ has codimension two. We begin this section with a topological result on such spacelike immersions. **Proposition 31**. *Let $\Psi:M\rightarrow B\times_{\lambda} F$ be a codimension two compact spacelike immersion through an integral hypersurface $\mathcal{L}$ (resp. $\mathcal{N}$) of $D_{\xi}$ $($resp. $D_{\eta})$. Then the map $\Psi_{F}\colon M \to F$ is a covering map. In particular, $F$ is also compact and when $F$ is simply-connected, $\Psi_{F}$ is a diffeomorphism.* We give the proof only for the case of the distribution $D_{\xi}$. We claim that the map $\Psi_{F}\colon M \to F$ is a local diffeomorphism. Indeed, we know that $\Psi_{F}\colon M \to F$ is an immersion between manifolds of the same dimension. 
The compactness of $M$ and the connectedness of $F$ imply that $\Psi_F$ is a covering map (see [@Carmo1 Proposition 5.6.1] for details). $\blacksquare$ **Remark 32**. *Under the assumptions of Theorem [Theorem 24](#260723A){reference-type="ref" reference="260723A"}, if $F$ is simply-connected, then it must necessarily be a topological sphere.* **Remark 33**. *The map $\Psi_{F}\colon M \to F$ is not a Riemannian covering, in general. In fact, as was mentioned, for $\Psi=(\Psi_{B}, \Psi_{F})$ the induced metric on $M$ is given by $g=(\lambda \circ \Psi_{B})^{2}\Psi_{F}^{*}(g_{F}).$ Taking into account the well-known relation between the scalar curvatures of two conformally related metrics, see for instance [@Besse Chap. 1], we have that $$S^{\Psi_{F}^{*}(g_{F})}=(\lambda \circ \Psi_{B})^2 \left(S^{g}+2(m-1)\Delta\log(\lambda\circ\Psi_B)-(m-2)(m-1)\|\nabla\log(\lambda\circ\Psi_B)\|^2\right),$$ where $S^{\Psi_{F}^{*}(g_{F})}$ and $S^g$ are the scalar curvatures of $\Psi_{F}^{*}(g_{F})$ and $g$, respectively. For $\lambda(t,r)=r$, we have $\lambda \circ \Psi_{B}=v$, and from a straightforward computation the above formula reads as follows $$\label{030623A} S^{\Psi_{F}^{*}(g_{F})}=v^2 \left(S^{g}+\dfrac{2(m-1)}{v}\Delta v-\dfrac{m(m-1)}{v^2}\|\nabla v \|^2\right).$$* **Proposition 34**. *Assume $\lambda(t,r)=r$ and let $\Psi:M\rightarrow B\times_{\lambda} F$ be a compact codimension two spacelike immersion through an integral hypersurface of $D_{\xi}$ or $D_{\eta}$. Then $$\int_{M}\left(S^{\Psi_{F}^{*}(g_{F})}- v^2S^g\right) d\mu_{g} \leq 0$$ and the equality holds if and only if $M$ factors through a slice.* From ([\[030623A\]](#030623A){reference-type="ref" reference="030623A"}), a direct computation gives that $$\int_{M}\left(S^{\Psi_{F}^{*}(g_{F})}- v^2S^g\right) d\mu_{g}=(m-1)\int_{M}(2v\Delta v-m \| \nabla v\|^{2})d\mu_{g}.$$ Taking into account that $\Delta v^2= 2v \Delta v+ 2 \| \nabla v\|^{2}$, the above formula reduces to $$\int_{M}\left(S^{\Psi_{F}^{*}(g_{F})}- v^2S^g\right) d\mu_{g}=-(m-1)(m+2)\int_{M} \| \nabla v\|^{2}d\mu_{g}.$$ Then, Lemma [Lemma 8](#020323A){reference-type="ref" reference="020323A"} ends the proof. $\blacksquare$ As a direct consequence of Corollary [Corollary 22](#020623A){reference-type="ref" reference="020623A"} and Proposition [Proposition 31](#020623B){reference-type="ref" reference="020623B"} we have. **Corollary 35**. *Assume $\eta \lambda$ $($resp. $\xi \lambda )$ signed. Then every codimension two compact spacelike immersion through an integral hypersurface $\mathcal{L}$ (resp. $\mathcal{N}$) of $D_{\xi}$ $($resp. $D_{\eta})$ with $\mathbf{H}=0$ factors through a slice and $\Psi_{F}\colon (M, g)\to (F, \lambda(t_{0}, r_{0})^{2}g_{F})$ is a Riemannian covering map.* **Proposition 36**. *Let $\Psi:M\rightarrow B\times_{\lambda} F$ be a codimension two spacelike immersion through an integral hypersurface $\mathcal{L}$ (resp. $\mathcal{N}$) of $D_{\xi}$ $($resp. $D_{\eta})$. Then the normal tangent bundle $TM^{\perp}$ is spanned by the vector fields $\xi^{\perp}$ and $\eta^{\perp}$.
When $M$ factors through an integral hypersurface $\mathcal{L}$ of $D_{\xi}$, the mean curvature vector field is $$\label{Hxi} \mathbf{H}=\left[\dfrac{\eta\lambda}{\lambda}\circ \Psi_{B}-\Big(\dfrac{\xi\lambda}{\lambda}\circ \Psi_{B}\Big)\|\nabla v\|^2+\dfrac{1}{m}\Delta v\right]\xi+\Big(\dfrac{\xi\lambda}{\lambda}\circ \Psi_{B}\Big)\eta^{\perp}$$ and when $M$ factors through an integral hypersurface $\mathcal{N}$ of $D_{\eta}$ we have $$\mathbf{H}=\Big(\dfrac{\eta\lambda}{\lambda}\circ \Psi_{B}\Big)\xi^{\bot}+\left[\dfrac{\xi\lambda}{\lambda}\circ \Psi_{B}-\Big(\dfrac{4\,\eta\lambda}{\lambda f^4}\circ \Psi_{B}\Big)\|\nabla v\|^2-\dfrac{2}{m(f\circ \Psi_{B})^2}\Delta v\right]\eta.$$* The assertion on the normal tangent bundle is a direct consequence of $\xi^{\bot}=\xi$ (resp. $\eta^{\bot}=\eta$) and $\widetilde{g}(\xi, \eta)=-1.$ Hence, there are smooth functions $a,b\in C^{\infty}(M)$ such that $$\mathbf{H}=a\xi+b\eta^{\perp}$$ where $b=-\widetilde{g}(\mathbf{H}, \xi)$ and from Lemma [Lemma 17](#210523D){reference-type="ref" reference="210523D"} we can compute that $a=\widetilde{g}(\mathbf{H}, \|\nabla v\|^2\xi-\eta^{\perp}).$ Now formula ([\[230321C\]](#230321C){reference-type="ref" reference="230321C"}) and Lemma [Lemma 20](#16032023A){reference-type="ref" reference="16032023A"} imply that $$\widetilde{g}(\mathbf{H}, \xi)=-\dfrac{\xi\lambda}{\lambda}\circ \Psi_{B}\quad \textrm{ and } \quad \widetilde{g}(\mathbf{H}, \eta^{\perp})=-\dfrac{\eta\lambda}{\lambda}\circ \Psi_{B}-\dfrac{1}{m}\Delta v.$$ This completes the proof for the case of spacelike submanifolds through integral hypersurfaces of $D_{\xi}$. Slight changes in the proof show the formula for the mean curvature vector field in the case $D_{\eta}.$ $\blacksquare$ Under the same assumptions of Proposition [Proposition 36](#010623B){reference-type="ref" reference="010623B"} and from Lemmas [Lemma 17](#210523D){reference-type="ref" reference="210523D"} and [Lemma 18](#080623A){reference-type="ref" reference="080623A"}, we have. **Corollary 37**. *For a spacelike submanifold $M$ which factors through an integral hypersurface $\mathcal{L}$ of $D_{\xi}$, we have $$\mathbf{H}=\left[\dfrac{\eta\lambda}{\lambda}\circ \Psi_{B}-\Big(\dfrac{\xi\lambda}{2\lambda}\circ \Psi_{B}\Big)\|\nabla v\|^2+\dfrac{1}{m}\Delta v\right]\xi+\Big(\dfrac{\xi\lambda}{\lambda}\circ \Psi_{B}\Big)\ell^{\xi}.$$ In case that $M$ factors through an integral hypersurface $\mathcal{N}$ of $D_{\eta}$, $$\mathbf{H}=\Big(\dfrac{\eta\lambda}{\lambda}\circ \Psi_{B}\Big)\ell^{\eta}+\left[\dfrac{\xi\lambda}{\lambda}\circ \Psi_{B}-\Big(\dfrac{2\,\eta\lambda}{\lambda f^4}\circ \Psi_{B}\Big)\|\nabla v\|^2-\dfrac{2}{m(f\circ \Psi_{B})^2}\Delta v\right]\eta.$$* **Remark 38**. *This Corollary extends the formulas for the mean curvature vector field of codimension two spacelike submanifolds through lightlike hyperplanes and cones in the Minkowski spacetime, [@ACR2] and [@PaRo]. For submanifolds through lightlike hyperplanes, we particularize Corollary [Corollary 37](#280723D){reference-type="ref" reference="280723D"} for $f(r)=1$ and $\lambda(t,r)=1$ (see Example [Example 9](#280723A){reference-type="ref" reference="280723A"}), then $\mathbf{H}=\frac{1}{m}\Delta v\, \xi$ for $M$ in the lightlike hyperplane $\Pi_{\xi}:= \{x\in \mathbb L^{m+2}: \langle x, \xi\rangle=0\}$. Similarly, we obtain $\mathbf{H}=-\frac{2}{m}\Delta v \, \eta$ when $M$ factors through the lightlike hyperplane $\Pi_{\eta}$. 
It can be easily seen that our formulas for $\mathbf{H}$ coincide with the formula (8.2) given in [@ACR2 Sect. 8].* *For submanifolds through lightlike cones we need to take $f(r)=1$ and $\lambda(t,r)=r$, and then we compute the mean curvature as follows $$\mathbf{H}=\Big( \frac{-1-\|\nabla v\|^2}{2v}+ \dfrac{\Delta v}{m}\Big)\xi + \frac{1}{v}\ell^{\xi},\quad\textrm{{$D_{\xi}$ case}}$$ and $$\mathbf{H}=- \frac{1}{2v}\ell^{\eta}+\Big( \dfrac{1+\|\nabla v\|^2}{v}-\frac{2\Delta v}{m}\Big)\eta, \quad\textrm{{$D_{\eta}$ case}}.$$ These formulas agree with [@PaRo] and [@ACR2 Sect. 6]. In fact, let us denote by $\Lambda^+\subset\mathbb L^{m+2}$ the future lightlike cone with vertex at $0\in \mathbb L^{m+2}.$ We know from ([\[16082023-1\]](#16082023-1){reference-type="ref" reference="16082023-1"}) that $\hat{\xi}:=r\xi$ satisfies that $T\varphi\cdot\hat{\xi}$ is the position vector field in $\Lambda^+$. Therefore, if we rescale $\hat{\ell}:=\frac{1}{r}\ell^{\xi}$, the first formula of $\mathbf{H}$ expressed in terms of $\hat{\xi}$ and $\hat{\ell}$ coincides with the formula given in [@PaRo] and [@ACR2 Sect. 6].* **Remark 39**. *Assume the warping function $\lambda$ depends only on the radial coordinate $r$. According to ([\[Hxi\]](#Hxi){reference-type="ref" reference="Hxi"}), the existence of a codimension two spacelike immersion with $\mathbf{H}=0$ through an integral hypersurface $\mathcal{L}$ of $D_{\xi}$ implies $\lambda_{r} \circ \Psi_{B}=0.$ In particular, there are no such spacelike immersions in the exterior Schwarzschild spacetime with mass $\mathbf{M}$. The same result remains true for spacelike immersions through an integral hypersurface $\mathcal{N}$ of $D_{\eta}.$* **Remark 40**. *Let $\Psi:M\rightarrow B\times_{\lambda} F$ be a codimension two spacelike immersion through an integral hypersurface $\mathcal{L}$ of $D_{\xi}$ with $\lambda(t,r)=r$. Proposition [Proposition 27](#28032023A){reference-type="ref" reference="28032023A"} and Corollary [Corollary 37](#280723D){reference-type="ref" reference="280723D"} give that the normal lightlike vector field $\ell := v \,\xi$ satisfies $$\label{110823A} \widetilde{g}(\ell, \mathbf{H})=-1\quad \textrm{and} \quad \nabla^{\perp}\ell=0.$$ In the case that the spacelike immersion lies in an integral hypersurface $\mathcal{N}$ of $D_{\eta}$, the lightlike normal vector field $\ell:=\frac{-2v}{(f\circ \Psi_{B})^2}\eta$ satisfies $\widetilde{g}(\ell, \mathbf{H})=-1$ and $\nabla^{\perp}\ell=0.$ Assuming that $M$ is compact and $F$ is the round sphere, the existence of a normal lightlike vector field $\ell$ such that ([\[110823A\]](#110823A){reference-type="ref" reference="110823A"}) holds implies that $M$ lies in an integral lightlike hypersurface of $D_{\xi}$ or $D_{\eta}$, [@WWZ]. The authors of [@WWZ] call these hypersurfaces null hypersurfaces of symmetry; in other words, they are the null hypersurfaces generated by the round sphere.* From formulas $(2)$ in Lemma [Lemma 17](#210523D){reference-type="ref" reference="210523D"} and Lemma [Lemma 18](#080623A){reference-type="ref" reference="080623A"} and Proposition [Proposition 36](#010623B){reference-type="ref" reference="010623B"}, we get. **Corollary 41**.
*Under the same assumptions of Proposition [Proposition 36](#010623B){reference-type="ref" reference="010623B"}, for a spacelike immersion through an integral hypersurface $\mathcal{L}$ of $D_{\xi}$ we have that $$\|\mathbf{H}\|^{2}=-\frac{2\, \eta \lambda\, \xi \lambda}{\lambda^{2}}\circ \Psi_{B}- \Big(\frac{2\, \xi \lambda}{m\, \lambda}\circ \Psi_{B}\Big)\Delta v+\Big(\frac{\xi \, \lambda}{\lambda}\circ \Psi_{B}\Big)^{2}\|\nabla v\|^{2}$$ and for a spacelike immersion through an integral hypersurface $\mathcal{N}$ of $D_{\eta}$ $$\|\mathbf{H}\|^{2}=-\frac{2\, \eta \lambda\, \xi \lambda}{\lambda^{2}}\circ \Psi_{B}+ \Big(\frac{4\, \eta \lambda}{m\, \lambda f^{2}}\circ \Psi_{B}\Big)\Delta v+4\Big(\frac{\eta \, \lambda}{\lambda f^{2}}\circ \Psi_{B}\Big)^{2}\|\nabla v\|^{2}$$ where as usual $\|\mathbf{H}\|^{2}$ denotes $\widetilde{g}(\mathbf{H}, \mathbf{H})$.* **Corollary 42**. *Assume $\xi \lambda$ $($resp. $\eta \lambda )$ signed. Then every codimension two compact spacelike immersion through an integral hypersurface $\mathcal{L}$ (resp. $\mathcal{N}$) of $D_{\xi}$ (resp. $D_{\eta}$) with $$\|\mathbf{H}\|^{2}=-\frac{2\, \eta \lambda\, \xi \lambda}{\lambda^{2}}\circ \Psi_{B}$$ factors through a slice.* We give the proof only for the case of $D_{\xi}$. As a consequence of Corollary [Corollary 41](#16032023E){reference-type="ref" reference="16032023E"}, we have $$\Big(\frac{2\, \xi \lambda}{m\lambda}\circ \Psi_{B}\Big)\Delta v=\Big(\frac{\xi \, \lambda}{\lambda}\circ \Psi_{B}\Big)^{2}\|\nabla v\|^{2}.$$ Since $\xi\lambda\circ\Psi_B$ is signed, we deduce that $\Delta v$ is also signed. We can now proceed analogously to the proof of Theorem [Theorem 23](#20032023A){reference-type="ref" reference="20032023A"}. $\blacksquare$ **Remark 43**. *Corollary [Corollary 41](#16032023E){reference-type="ref" reference="16032023E"} provides a formula which relates the mean curvature vector field and the scalar curvatures $S^{\Psi_{F}^{*}(g_{F})}$ and $S^g$. In fact, for a codimension two spacelike immersion through an integral hypersurface of $D_{\xi}$ or $D_{\eta}$, one computes from ([\[030623A\]](#030623A){reference-type="ref" reference="030623A"}) that $$\label{080823A} \|\mathbf{H}\|^{2}=\frac{1}{v^2}\left( (f\circ \Psi_{B})^2- \frac{S^{\Psi_{F}^{*}(g_{F})}- v^2 S^{g}}{m(m-1)}\right).$$ If we specialize this formula for the case $m=2$, we get that $$\|\mathbf{H}\|^{2}=\frac{1}{v^2}\left( (f\circ \Psi_{B})^2- K^{\Psi_{F}^{*}(g_{F})}+ v^2 K^g\right).$$ Hence, if we assume $M$ compact, the Gauss-Bonnet formula implies $$\int_{M^2}\|\mathbf{H}\|^{2} \, d\mu_{g}=\int_{M^2}\frac{(f\circ \Psi_{B})^2}{v^2}\, d\mu_{g},$$ where $d\mu_{g}$ is the canonical measure associated to the metric $g$.* Let us recall the following terminology in General Relativity. **Definition 44**. *A codimension two spacelike submanifold $M$ in a timelike orientable Lorentzian manifold is said to be future (resp. past) trapped when $\mathbf{H}$ is timelike and future pointing (resp. past pointing). $M$ is called marginally (resp. weakly) trapped if $\mathbf{H}$ is lightlike (resp. causal) on $M$. The notions of future and past for marginally and weakly are obviously adapted.* **Remark 45**. * Assume $\lambda(t,r)=r$. 
Under the hypotheses of Corollary [Corollary 41](#16032023E){reference-type="ref" reference="16032023E"}, we have $\|\mathbf{H}\|^{2}\leq 0$ for a spacelike immersion through an integral hypersurface of $D_{\xi}$ or $D_{\eta}$ if and only if $$\frac{2}{n }\Delta v\geq \frac{ (f\circ \Psi_{B})^2+\|\nabla v\|^{2}}{v}.$$ In particular, there are no compact weakly trapped submanifolds in this case. Taking into account that for this choice of $\lambda$, the manifold $B\times_{\lambda} F$ is stationary, this result is only a particular case of [@MS Theor. 2]. In fact, recall that [@MS Theor. 2] states that there is no compact weakly trapped submanifold in a stationary spacetime. * From Remark [Remark 39](#060823B){reference-type="ref" reference="060823B"}, there is no point on $M$ where $\mathbf{H}=0$ and then, from ([\[030623A\]](#030623A){reference-type="ref" reference="030623A"}), we have. **Corollary 46**. *Assume $\lambda (t,r)=r$ and let $\Psi:M\rightarrow B\times_{\lambda} F$ be a codimension two spacelike immersion through an integral hypersurface of $D_{\xi}$ or $D_{\eta}$. Then the following assertions are equivalent.* 1. *$M$ is marginally trapped.* 2. *The function $v$ satisfies the equation $$2v \Delta v- m\Big[ (f\circ \Psi_{B})^2+\|\nabla v\|^{2}\Big]=0.$$* 3. *The scalar curvature of $M$ satisfies $$S^{\Psi_{F}^{*}(g_{F})}=v^2 S^{g}+ m(m-1)(f\circ \Psi_{B})^2.$$* **Definition 47**. **An immersion $\Psi:F\rightarrow B\times_{\lambda} F$ is said to be an (entire) spacelike graph on $F$ when $$\Psi(x)=(\Psi_B(x),x)$$ and the induced metric $\Psi^{*}(\widetilde{g})$ is Riemannian.** Recall that if a spacelike graph on $F$ factors through an integral hypersurface of $D_{\xi}$ or $D_{\eta}$, the induced metric is $g=(\lambda\circ \Psi_{B})^{2}g_{F}$. Assume $\lambda(t,r)=r$. Taking into account that $\Psi_{F}= \mathrm{Id}_{F}$ for spacelike graphs, formula ([\[080823A\]](#080823A){reference-type="ref" reference="080823A"}) implies that for every graph factoring through an integral hypersurface of $D_{\xi}$ or $D_{\eta}$, the mean curvature vector field $\mathbf{H}$ and the scalar curvatures $S^{g_{F}}$ and $S^g$ are related by $$\label{04092023} \|\mathbf{H}\|^{2}=\frac{1}{v^2}\left( (f\circ \Psi_{B})^2- \frac{S^{g_{F}}- v^2 S^{g}}{m(m-1)}\right).$$ **Remark 48**. *In the particular case of the exterior Schwarzschild spacetime with mass $\mathbf{M}$, the above formula reduces to $$\|\mathbf{H}\|^{2}= \frac{S^g}{m(m-1)}- \frac{2\mathbf{M}}{v^{m+1}}.$$ Then, a spacelike graph factoring through a lightlike hypersurface of $D_{\xi}$ or $D_{\eta}$ is marginally trapped if and only if $S^{g}=\frac{2\mathbf{M}m(m-1)}{v^{m+1}}.$ Also, taking into account the description of the Minkowski spacetime in Example [Example 9](#280723A){reference-type="ref" reference="280723A"}, a spacelike graph factoring through a lightlike cone in the Minkowski spacetime is marginally trapped if and only if $S^{g}=0.$ * **Theorem 49**. *Assume $\lambda$ depends only on the radial coordinate $r$ and $F$ is a non-compact parabolic Riemannian manifold. Let $\Psi:F\rightarrow B\times_{\lambda} F$ be a spacelike graph through an integral hypersurface $\mathcal{L}$ (resp. $\mathcal{N}$) of $D_{\xi}$ (resp. $D_{\eta}$) with $\mathbf{H}=0$. Then, $\Psi(F)$ is a totally geodesic slice.* We give the proof only for $\mathcal{L}$ since the case of $\mathcal{N}$ is similar. From formula ([\[Hxi\]](#Hxi){reference-type="ref" reference="Hxi"}), we get that $\lambda\circ\Psi_B$ is a constant $k\in \mathbb R_{+}$ and $\Delta v=0$. 
Then ([\[04092023\]](#04092023){reference-type="ref" reference="04092023"}) implies that $$\dfrac{(k^2- v^2)S^{g}}{m(m-1)}= (f \circ \Psi_{B})^2.$$ There are two possibilities. The first one is $S^{g}>0$ and $v^2< k^2$, then the parabolicity of $F$ gives that $v$ is constant. The other possibility is $S^g <0$ and $k^{2}< v^2$. Therefore, $-v<-k$ or $v<-k$ with $\Delta v=0$, again the parabolicity of $F$ shows that $v$ is constant. Now, formula ([\[110923A\]](#110923A){reference-type="ref" reference="110923A"}) ends the proof. $\blacksquare$ # Parallel mean curvature **Lemma 50**. *Let $\Psi:M\rightarrow B\times_{\lambda} F$ be a codimension two spacelike immersion through an integral hypersurface $\mathcal{L}$ of $D_{\xi}$. For every $V\in\mathfrak{X}(M)$, we have $$\widetilde{g}(\nabla^{\perp}_V\mathbf{H},\xi)=-g(\nabla v,V)\left(\dfrac{\xi(\xi \lambda)}{\lambda}\circ \Psi_{B}\right).$$ For the case of an integral hypersurface $\mathcal{N}$ of $D_{\eta}$, we have $$\widetilde{g}(\nabla^{\perp}_V\mathbf{H},\eta)=2g(\nabla v,V)\left( \frac{\eta (\eta \lambda)+f' f \, \eta \lambda}{\lambda f^2}\circ \Psi_{B}\right).$$ When $\lambda(t,r)=r$, the above formulas reduce to $\widetilde{g}(\nabla^{\perp}_V\mathbf{H},\xi)=0$ and $\widetilde{g}(\nabla^{\perp}_V\mathbf{H},\eta)=0,$ respectively.* From Propositions [Proposition 27](#28032023A){reference-type="ref" reference="28032023A"} and [Proposition 36](#010623B){reference-type="ref" reference="010623B"}, we derive $$\label{130623A} \widetilde{g}(\nabla^{\perp}_V\mathbf{H},\xi)=-V\left(\dfrac{\xi\lambda}{\lambda}\circ\Psi_B\right)-\left(\dfrac{\xi\lambda}{\lambda}\circ\Psi_B\right)^2g(\nabla v,V).$$ A direct computation from Lemma [Lemma 17](#210523D){reference-type="ref" reference="210523D"} shows that $$V\left(\dfrac{\xi\lambda}{\lambda}\circ\Psi_B\right)= V^{B}\left(\dfrac{\xi\lambda}{\lambda}\right)=g(\nabla v, V)\Big(\xi\left(\dfrac{\xi \lambda} {\lambda}\right)\circ \Psi_B\Big).$$ Substituting this formula in ([\[130623A\]](#130623A){reference-type="ref" reference="130623A"}), we get $$\widetilde{g}(\nabla^{\perp}_V\mathbf{H},\xi)=-V\left(\dfrac{\xi\lambda}{\lambda}\circ\Psi_B\right)-\left(\dfrac{\xi\lambda}{\lambda}\circ\Psi_B\right)^2g(\nabla v,V)=-g(\nabla v,V)\Big(\dfrac{\xi(\xi \lambda)}{\lambda}\circ \Psi_B\Big).$$ In a similar way, from Propositions [Proposition 28](#28032023B){reference-type="ref" reference="28032023B"} and [Proposition 36](#010623B){reference-type="ref" reference="010623B"}, we have $$\widetilde{g}(\nabla^{\perp}_V\mathbf{H},\eta)=-V\left(\dfrac{\eta\lambda}{\lambda}\circ\Psi_B\right)+\dfrac{2}{(f\circ\Psi_B)^2}\left(\dfrac{\eta\lambda}{\lambda}\circ\Psi_B\right)\left(ff'\circ\Psi_B+\dfrac{\eta\lambda}{\lambda}\circ\Psi_B\right)g(\nabla v,V)$$ and Lemma [Lemma 18](#080623A){reference-type="ref" reference="080623A"} ends the proof. $\blacksquare$ **Theorem 51**. *Assume the warping function satisfies $\xi (\xi \lambda)\neq 0$ at every point and let $\Psi:M\rightarrow B\times_{\lambda} F$ be a codimension two spacelike immersion through an integral hypersurface $\mathcal{L}$ of $D_{\xi}$. Then the following assertions are equivalent* 1. *$\widetilde{g}(\nabla^{\perp}_{V}\mathbf{H}, \xi)=0$ for every $V\in\mathfrak{X}(M)$.* 2. *$M$ factors through a slice.* 3. *$\nabla^{\perp}\mathbf{H}=0$.* Assume that $\widetilde{g}(\nabla^{\perp}_{V}\mathbf{H}, \xi)=0$. Then from Lemma [Lemma 50](#040623A){reference-type="ref" reference="040623A"} it is directly follows that $v$ is a constant function. 
From Lemma [Lemma 8](#020323A){reference-type="ref" reference="020323A"}, the function $u$ is also constant and then $M$ factors through a slice. The mean curvature vector field of a submanifold $M$ which factors through a slice is computed from ([\[Hxi\]](#Hxi){reference-type="ref" reference="Hxi"}) as follows $$\mathbf{H}=\Big(\dfrac{\eta\lambda}{\lambda}\circ \Psi_{B}\Big)\xi+\Big(\dfrac{\xi\lambda}{\lambda}\circ \Psi_{B}\Big)\eta.$$ Hence as a direct consequence of Lemma [Lemma 17](#210523D){reference-type="ref" reference="210523D"} and Proposition [Proposition 27](#28032023A){reference-type="ref" reference="28032023A"}, we get $\nabla^{\perp}\mathbf{H}=0$. The rest of the proof is obvious. $\blacksquare$ In a similar way we have. **Theorem 52**. *Assume $\eta (\eta \lambda)+ f f' \eta \lambda \neq 0$ at every point and let $\Psi:M\rightarrow B\times_{\lambda} F$ be a codimension two spacelike immersion through an integral hypersurface $\mathcal{N}$ of $D_{\eta}$. Then the following assertions are equivalent* 1. *$\widetilde{g}(\nabla^{\perp}_{V}\mathbf{H}, \eta)=0$ for every $V\in\mathfrak{X}(M)$.* 2. *$M$ factors through a slice.* 3. *$\nabla^{\perp}\mathbf{H}=0$.* **Remark 53**. *The proof of Theorems [Theorem 51](#130623B){reference-type="ref" reference="130623B"} and [Theorem 52](#130623C){reference-type="ref" reference="130623C"} does not work for $\lambda(t,r)=r$. In this case, Lemma [Lemma 50](#040623A){reference-type="ref" reference="040623A"} gives $\widetilde{g}(\nabla^{\perp}_{V}\mathbf{H}, \xi)=0$ for a codimension two spacelike immersion through an integral hypersurface $\mathcal{L}$ of $D_{\xi}$. Hence, the mean curvature vector field is parallel if and only if $\widetilde{g}(\nabla^{\perp}_V\mathbf{H},\eta^{\perp})=0$. A similar result is achieved for codimension two spacelike immersions through an integral hypersurface $\mathcal{N}$ of $D_{\eta}$.* 999 L.J. Alías, V.L. Canovas and M. Rigoli, Trapped submanifolds contained into a null hypersurface of the de Sitter spacetime, *Commum. Contemp. Math. ,* **20**, No. 08, (2018), 23 pp. L.J. Alías, V.L. Canovas and M. Rigoli, Codimension two spacelike submanifolds into the light cone of Lorentz-Minkowski space, *Proceedings of the Royal Society of Edinburgh Section A: Mathematics*, **149**(6), (2018), 1523--1553. L.J. Alías, P. Mastrolia and M. Rigoli, *Maximum principles and geometric applications*, Springer, 2016. L.J. Alías, A. Romero and M. Sánchez, Uniqueness of complete spacelike hypersurfaces of constant mean curvature in generalized Robertson-Walker spacetimes, *Gen. Relativity Gravitation* **27** (1995), no.1, 71--84. A.L. Besse, *Einstein Manifolds*, A Series of Modern Surveys in Mathematics, Springer, 1986. H.W. Brinkmann, Einstein spaces which are mapped conformally on each other, *Math. Ann.* **94** (1925), 119--145. V. Cánovas, F.J. Palomo and A. Romero, Mean curvature of spacelike submanifolds in a Brinkmann spacetime, *Classical Quantum Gravity* **38** (2021), Paper No. 195013, 18 pp. A. Čap and J. Slovák, *Parabolic Geometries I*. Background and General Theory, Mathematical Surveys and Monographs **154**, AMS 2009. I. Chavel, *Eigenvalues in Riemannian geometry*, Academic Press, INC, Orlando,1984. B.Y. Chen, *Geometry of Submanifolds*, Marcel Dekker, New York, 1973. D. de la Fuente, A. Romero and P.J. Torres, Entire spherically symmetric spacelike graphs with prescribed mean curvature function in Schwarzschild and Reissner-Nordström spacetimes, *Class. Quantum Grav.* **32** (2015), 17pp. S. Deshmukh and F. 
Al-Solamy, Conformal gradient vector fields on a compact Riemannian manifold, *Colloquium Mathematicum* **112** (2008) 157--161. M. P. Do Carmo. *Differential geometry of curves and surfaces* (Englewood Cliffs, NJ: Prentice Hall, 1976). G. J. Galloway, Maximum Principles for Null Hypersurfaces and Null Splitting Theorems, *Ann. Henri Poincaré* (2000), 543--567. E. Gourgoulhon, Geometry and physics of black holes, *Lecture notes* <https://relativite.obspm.fr/blackholes/bholes.pdf>. A. Grigor'yan, Analytic and geometric background of recurrence and non-explosion of the Brownian motion on Riemannian manifolds, *B. Am. Math. Soc.*, **36** (1999), 135--249. A. Huber, On subharmonic functions and differential geometry in the large, *Comment. Math. Helv.*, **32** (1958), 13--72. J.L. Kazdan, Parabolicity and the Liouville property on complete Riemannian manifolds, *Aspects of Math.*, **10** (1987), 153--166. D. D.Lee, *Geometric Relativity*, Graduates Studies in Mathematics. AMS, **201**, Providence, 2019. T. Leistner, Screen bundles of Lorentzian manifolds and some generalisations of pp-waves, *Journal of Geometry and Physics* **56** (2006), 2117--2134. M. Mars. and J.M.M. Senovilla, Trapped surfaces and symmetries, *Classical and Quantum Gravity* **20** (2003), 293-300. M. Obata, Certain conditions for a Riemannian manifold to be isometric with a sphere, *J. Math. Soc. Japan* **14** (1962), 333--340. B. O'Neill, The fundamental equations of a submersion, *Michigan Math. J.* **13** (1966), 459--469. B. O'Neill, *Semi-Riemannian Geometry with Applications to Relativity*, Academic Press, New York, 1983. O. Palmas, F.J. Palomo and A. Romero, On the total mean curvature of a compact space-like submanifold in Lorentz--Minkowski spacetime, *Proceedings of the Royal Society of Edinburgh,* **148A** (2018), 199--210. F.J. Palomo, Lightlike manifolds and Cartan geometries, *Anal. Math. Phys.,* **11** (2021), no.3, Paper No. 112, 39 pp. F.J. Palomo, J.A. Pelegrín and A. Romero, Rigidity results for complete spacelike submanifolds in plane fronted waves, *Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM* **116** (2022), Paper No. 179, 10 pp. F.J. Palomo,, F. J. Rodríguez and A. Romero, New Characterizations of Compact Totally Umbilical Spacelike Surfaces in $4$-dimensional Lorentz--Minkowski Spacetime through a Lightcone, *Mediterr. J. Math.* , **11** (2014), 1229--1240. F.J. Palomo and A. Romero, On spacelike surfaces in $4$-dimensional Lorentz-Minkowski spacetime through a lightcone, *Proc. Roy. Soc. Edinb. A Mat.*, **143A** (2013), 881--892. W. A. Poor, *Differential geometric structures*, McGraw-Hill Book Co., New York, 1981. M-T. Wang, Y-K Wang and X. Zhang, Minkowski formulae and Alexandrov theorems in Spacetime, *J. Differential Geometry*, **105** (2017), 249--290. K. Yano, *Integral formulas in Riemannian geometry*, Pure and Applied Mathematics, No. 1 Marcel Dekker, Inc., New York, 1970. Departamento de Matemática Aplicada, Universidad de Málaga, 29071-Málaga (Spain).\ E-mails: rodrigometalica_94\@hotmail.com (Rodrigo Morón) and fpalomo\@uma.es (Francisco J. Palomo).\ [^1]: Corresponding author.\ Both authors are partially supported by Spanish MICINN project PID2020-118452GB-100.\ 2020 *Mathematics Subject Classification*.  Primary 53B25, 53C40, 53C42. Secondary 53B30, 53C50.\ *Key words and phrases*  Lorentzian geometry, Spacelike submanifolds, Lightlike hypersurfaces, Generalized Schwarzschild spacetimes.
arxiv_math
{ "id": "2309.12749", "title": "Spacelike immersions in certain Lorentzian manifolds with lightlike\n foliations", "authors": "Rodrigo Mor\\'on and Francisco J. Palomo", "categories": "math.DG", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We give an algorithm for computing the knot Floer homology of a $(1,1)$ knot from a particular presentation of its fundamental group. address: - LMAM, School of Mathematics Sciences, Peking University, Beijing, 100871, P. R. China - LMAM, School of Mathematics Sciences, Peking University, Beijing, 100871, P. R. China author: - Jiajun Wang - Xiliu Yang title: Knot Floer homology and the fundamental group of $(1,1)$ knots --- # Introduction Heegaard Floer homology (introduced by Peter Ozsváth and Zoltán Szabó [@OS01a; @OS01b]) provides various topological invariants for three- and four-manifolds. A null homologous knot in a three-manifold induces a filtration on the Heegaard Floer chain complex and its homology, called the knot Floer homology, which was introduced by Peter Ozsváth and Zoltán Szabó [@OS04a] and Jacob Rasmussen [@Ras03] independently. The fundamental group is an important invariant for three-manifolds. The geometrization theorem (proposed by William Thurston [@Thu82] and proved by Grisha Perelman [@Per02; @Per03a; @Per03b]) implies that the fundamental group completely determines a closed, orientable, irreducible, three-manifold up to orientation except for lens spaces (see [@AFW15] for details). Hence the (hat version) Heegaard Floer homology was completely determined by the fundamental group of the three-manifold. Heegaard Floer homology has a close relationship with the fundamental group. For an integer homology three-sphere $Y$, the Euler characteristic of $HF^+_{\rm red}(Y)$ minus half its *correction term* equals Casson's invariant $\lambda(Y)$ under certain normalization ([@OS03]), and Casson's invariant is the algebraic counting of $SU(2)$ representations of $\pi_1(Y)$. On the homology level, the instanton Floer homology is a categorification of Casson's invariant ([@Flo88a]) and its generators are the $SU(2)$ representations of the fundamental group ([@Flo88a; @KM11]). Seiberg-Witten Floer homology is isomorphic to Heegaard Floer homology (by the work of Hutchings [@Hut02], Hutchings-Taubes [@HT07; @HT09], Taubes [@Tau10a; @Tau10b; @Tau10c; @Tau10d; @Tau10e], and Kutluhan-Lee-Taubes [@KLT10a; @KLT10b; @KLT10c; @KLT11; @KLT12] or Colin-Ghiggini-Honda [@CGH12b; @CGH12c; @CGH12a]), and Witten's conjecture ([@Wit94]) relates the Seiberg-Witten and Donaldson invariants. It is interesting to find a concrete connection between the fundamental group and the Heegaard Floer homology. In [@OS04c], Ozsváth and Szabó asked the following two closely related questions: **Question 1**. *[@OS04c Question 7][\[question:OS_knots\]]{#question:OS_knots label="question:OS_knots"} Let $K$ be a knot in $S^3$. Is there an explicit relationship between the fundamental group of the knot complement $S^3 \setminus K$ and the knot Floer homology $\widehat{HFK}(S^3, K)$?* **Question 2**. *[@OS04c Question 8][\[question:OS_three_manifold\]]{#question:OS_three_manifold label="question:OS_three_manifold"} Is there an explicit relationship between the Heegaard Floer homology and the fundamental group of a three-manifold?* We study Question [\[question:OS_knots\]](#question:OS_knots){reference-type="ref" reference="question:OS_knots"} for $(1,1)$ knots. $(1,1)$ knots are those knots which can be placed in one-bridge position with respect to a genus one Heegaard splitting of the three-sphere. 
The knot consists of two properly embedded arcs on the Heegaard surface (a torus) that meet at their endpoints, and the union of the two arcs gives the knot after pushing the interior of one arc properly into one solid torus. The fundamental group of a $(1,1)$ knot has a presentation with two generators and one relator. From the perspective of knot Floer homology, $(1,1)$ knots are particularly appealing. It was observed by Goda, Matsuda, and Morifuji in [@GMM05] that $(1,1)$ knots are exactly those knots which can be presented by a genus one doubly-pointed Heegaard diagram and that their knot Floer homology can be computed combinatorially. Our main result is the following: **Theorem 1**. *Let $K$ be a $(1,1)$ knot in $S^3$. Given a two-generator one-relator presentation $\pi_1(S^3 \setminus K)=\langle X,Y\,|\,R(X, Y)\rangle$ of its fundamental group coming from a genus one doubly-pointed Heegaard diagram, $\widehat{HFK}(S^3, K)$ can be computed directly from the relator $R(X,Y)$.* The computation is provided by Algorithm [Algorithm 21](#algo:main){reference-type="ref" reference="algo:main"}. The proof of Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} relies on the fact that the presentation from a genus one Heegaard diagram contains enough information about the universal cover of the diagram, so that the method in [@OS04a Section 6] can be employed to compute $\widehat{HFK}(S^3,K)$ ([@GMM05]). It would be interesting to generalize Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} to knots with Heegaard diagrams of higher genus, or to remove the requirement that the presentation of the fundamental group arise from a genus one Heegaard diagram. Algorithm [Algorithm 21](#algo:main){reference-type="ref" reference="algo:main"} actually applies to slightly more general group presentations of $(1,1)$ knots (Section [5](#sec:alexander){reference-type="ref" reference="sec:alexander"}), but we do not know whether it computes $\widehat{HFK}(S^3, K)$ in that generality. However, it does compute the Alexander polynomial (Corollary [Corollary 29](#cor:algorithm_alex_polynomial_pseudo_geometric){reference-type="ref" reference="cor:algorithm_alex_polynomial_pseudo_geometric"}). Algorithm [Algorithm 21](#algo:main){reference-type="ref" reference="algo:main"} applies not only to the fundamental groups of $(1,1)$ knots, but also to any *pseudo-geometric* two-generator one-relator group presentation, though we do not know whether the resulting homology is an invariant of the group, or what properties it captures. This paper is organized as follows. In Section [2](#sec:pre){reference-type="ref" reference="sec:pre"}, we briefly introduce the knot Floer homology and $(1, 1)$ knots. In Section [3](#sec:bigon){reference-type="ref" reference="sec:bigon"}, we give a method to find all (primitive) bigons and the basepoints contained in them from the presentation. In Section [4](#sec:alg){reference-type="ref" reference="sec:alg"}, we describe the algorithm from the special presentation to the knot Floer homology, and give some examples. In Section [5](#sec:alexander){reference-type="ref" reference="sec:alexander"}, we discuss our algorithm for general group presentations. ## Acknowledgement {#acknowledgement .unnumbered} We thank Cheng Chang, Matthew Hedden, Xuezhi Zhao, and Shengyu Zou for helpful discussions. We thank Cheng Chang for computer programming. The first author is partially supported by NSFC grant 12131009 and National Key R&D Program of China 2020YFA0712800.
# Preliminaries on Heegaard Floer homology and $(1,1)$ knots {#sec:pre} We recall some facts on the knot Floer homology and $(1,1)$ knots. See [@OS04a; @Ras03] for details on the knot Floer homology. ## Knot Floer homology A null-homologous knot $K$ in a closed oriented 3-manifold $Y$ can be represented by a *doubly-pointed Heegaard diagram* $\mathcal{H} := (\Sigma, \bm{\alpha}, \bm{\beta}, w, z)$, where - $\Sigma$ is a closed, oriented surface of genus $g$ in $Y$ which splits $Y$ into two handlebodies, denoted by $V_{\alpha}$ and $V_{\beta}$; - $\bm{\alpha}:= \{ \alpha_1, \cdots, \alpha_g \}$ (resp. $\bm{\beta}:= \{ \beta_1, \cdots, \beta_g \}$) is a set of attaching circles for the handlebody $V_{\alpha}$ (resp. $V_{\beta}$) that are homologically independent in $H_1(\Sigma)$. Denote by $D_{\alpha_i}$ and $D_{\beta_i}$ their attaching disks; - $w$ and $z$ are two basepoints in $\Sigma - \bm{\alpha} - \bm{\beta}$; - the knot $K$ is specified by a proper arc joining $z$ to $w$ in $V_{\alpha} - \bigcup_i D_{\alpha_i}$, and a proper arc joining $w$ to $z$ in $V_{\beta} - \bigcup_i D_{\beta_i}$. The $g$-th symmetric product ${\rm Sym}^g(\Sigma) := \Sigma^{\times g}/S_g$ of $\Sigma$, where $S_g$ is the symmetric group on $g$ elements, has a symplectic structure induced from a complex structure on $\Sigma$. The two $g$-dimensional tori $\mathbb{T}_{\alpha} = \alpha_1 \times \cdots \times \alpha_g$ and $\mathbb{T}_{\beta} = \beta_1 \times \cdots \times \beta_g$ are Lagrangian submanifolds. An intersection point ${\bf x} \in \mathbb{T}_{\alpha} \cap \mathbb{T}_{\beta}$ is a $g$-tuple $\{x_1, \cdots, x_g\}$ such that each $x_i \in \alpha_i \cap \beta_{\sigma(i)}$ for some permutation $\sigma \in S_g$. Given two intersection points ${\bf x}, {\bf y} \in \mathbb{T}_{\alpha} \cap \mathbb{T}_{\beta}$, let $\pi_2 ({\bf x}, {\bf y})$ be the set of homotopy classes of *Whitney disks* connecting ${\bf x}$ and ${\bf y}$: $$\{ u: \mathbb{D} \rightarrow {\rm Sym}^g (\Sigma) \,|\, u(-i) = {\bf x}, u(i) = {\bf y}, u(a) \subset \mathbb{T}_{\alpha}, u(b) \subset \mathbb{T}_{\beta} \},$$ where $\mathbb{D}$ is the unit disk in $\mathbb{C}$ whose boundary consists of two arcs $a = \{ z \in \partial \mathbb{D}\,|\,{\rm Re}(z) \geq 0\}$ and $b = \{ z \in \partial \mathbb{D}\,|\,{\rm Re}(z) \leq 0\}$. The *multiplicity* $n_w (\phi)$ of $\phi \in \pi_2({\bf x}, {\bf y})$ at $w \in\Sigma$ is defined to be the algebraic intersection number of $\phi$ with $\{w\} \times {\rm Sym}^{g-1}(\Sigma)$. The moduli space ${\mathcal{M}}(\phi)$ of pseudo-holomorphic representatives of $\phi$ has a natural $\mathbb{R}$ action; let $\widehat{\mathcal{M}}(\phi)$ be the unparametrized moduli space $\mathcal{M}(\phi) /\mathbb{R}$. The expected dimension of ${\mathcal{M}}(\phi)$ is determined by the *Maslov index* $\mu (\phi)$ of $\phi$. The chain complex $\widehat{CFK}(\mathcal{H})$ is a free Abelian group generated by the intersection points ${\bf x} \in \mathbb{T}_{\alpha} \cap \mathbb{T}_{\beta}$ with the differential defined by $$\label{eqn:knot_floer_differential} \widehat{\partial}_K {\bf x} = \sum_{{\bf y} \in \mathbb{T}_{\alpha} \cap \mathbb{T}_{\beta}}\sum_{\{\phi \in \pi_2({\bf x}, {\bf y}) \,|\, \mu(\phi) = 1, n_w(\phi) = n_z(\phi) = 0 \}} \# \widehat{\mathcal{M}}(\phi) \cdot {\bf y}.$$ $(\widehat{CFK}(\mathcal{H}), \widehat{\partial}_K)$ is a chain complex whose homology $\widehat{HFK}(Y,K)$ is an invariant of the knot $K$ in $Y$, called the *knot Floer homology* of $K$ ([@OS04a; @Ras03]).
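Since only genus one diagrams are used in this paper, it is worth recording informally how the above specializes in that case (this is the combinatorial description exploited in Section [3](#sec:bigon){reference-type="ref" reference="sec:bigon"}; see also [@OS04a Section 6]): $$g=1:\qquad {\rm Sym}^{1}(\Sigma)=\Sigma=T^{2},\qquad \mathbb{T}_{\alpha}=\alpha,\qquad \mathbb{T}_{\beta}=\beta,$$ so the generators are simply the intersection points of $\alpha$ and $\beta$, and, roughly speaking, the differential [\[eqn:knot_floer_differential\]](#eqn:knot_floer_differential){reference-type="eqref" reference="eqn:knot_floer_differential"} counts (with sign) the embedded bigons between them that avoid both basepoints $w$ and $z$.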
There are two gradings on $\widehat{CFK}(S^3,K)$. The *Alexander grading* is the unique function $F: \mathbb{T}_{\alpha} \cap \mathbb{T}_{\beta} \rightarrow \mathbb{Z}$ satisfying $$\label{eqn:alex_grading_relative} F({\bf x}) - F({\bf y}) = n_z(\phi) - n_w(\phi),\quad \forall\, \phi \in \pi_2({\bf x}, {\bf y})$$ and the additional symmetry $$\label{eqn:alex_grading_symmetry} \# \{ {\bf x}\,|\,F({\bf x}) = i \} \equiv \# \{ {\bf x}\,|\,F({\bf x}) = -i \} \pmod{2},\quad \forall\,i \in \mathbb{Z}.$$ For $\bf x,\bf y\in \mathbb{T}_\alpha\cap\mathbb{T}_\beta$, the *relative Maslov grading* or the *homological grading* satisfies $${\rm gr}({\bf x}, {\bf y}) = \mu(\phi) - 2n_w(\phi), \quad \forall\, \phi \in \pi_2({\bf x}, {\bf y}).$$ A Heegaard diagram of $Y$ can be obtained by removing the basepoint $z$ from a doubly-pointed Heegaard diagram of $(Y,K)$, and $\widehat{HF}(Y)$ can be obtained from $\widehat{CFK}(Y, K)$ with additional differentials. When $Y=S^3$, we have $\widehat{HF}(S^3) \cong \mathbb{Z}$, and by defining this homology to be supported in Maslov grading 0, we can define an *absolute Maslov grading* $M:\mathbb{T}_\alpha\cap\mathbb{T}_\beta\to\mathbb{Z}$ with $${\rm gr}({\bf x}, {\bf y}) = M({\bf x})- M({\bf y}).$$ It is evident that the differential [\[eqn:knot_floer_differential\]](#eqn:knot_floer_differential){reference-type="eqref" reference="eqn:knot_floer_differential"} preserves the Alexander grading and decreases the Maslov grading by one. Let $\widehat{CFK}_m(S^3, K; s)$ be the subgroup of $\widehat{CFK}(S^3, K)$ generated by those ${\bf x}\in \mathbb{T}_{\alpha} \cap \mathbb{T}_{\beta}$ with $F({\bf x}) = s$ and $M({\bf x}) = m$, then $\widehat{HFK}(S^3, K)$ can be decomposed as $$\widehat{HFK}(S^3, K) = \bigoplus_{m, s} \widehat{HFK}_m(S^3, K; s).$$ The knot Floer homology is a categorification of the Alexander polynomial. **Theorem 2**. *(Ozsváth-Szabó [@OS04a], Rasmussen [@Ras03]) Let $K$ be a knot in $S^3$ and $\Delta_K(T)$ its symmetrized Alexander polynomial, then $$\sum_{m,s} (-1)^m \cdot {\rm rank} \widehat{HFK}_m (S^3, K; s) \cdot T^s = \Delta_K(T).$$* ## $(1,1)$ knots and their fundamental groups {#subsec:11knot} **Definition 3** ([@Dol92]). A proper embedded arc $\gamma$ in a handlebody $H$ is called *trivial* if there exists an embedded disk $D$ in $H$ such that $\gamma \subseteq \partial D$ and $\partial D \cap \partial H = \partial D \setminus {\rm Int} \gamma$. A link $L$ in a 3-manifold $M$ is called a *$(g, n)$ link* if there exists a genus $g$ Heegaard splitting $M = H_1 \cup H_2$ such that $L \cap H_i, (i = 1, 2)$ is the union of $n$ mutually disjoint properly embedded trivial arcs. The decomposition $(H_1, L \cap H_1) \cup (H_2, L \cap H_2)$ is called a *$(g,n)$ decomposition* of $(M, L)$. We focus on $(1,1)$ knots in $S^3$. $(1,1)$ knots are precisely those knots which admit genus one doubly-pointed Heegaard diagrams, or *$(1,1)$ Heegaard diagram* for brevity. For more detailed discussions of Heegaard diagrams and $(1,1)$ knots see [@GMM05; @Ord06; @Ord13]. $(1, 1)$ knots form a large family of knots: torus knots and 2-bridge knots are all $(1,1)$ knots. Fujii [@Fuj96] showed that the Alexander polynomial of any knot can be realized as the Alexander polynomial of some $(1,1)$ knot. A $(1,1)$ Heegaard diagram $\mathcal{H} = (T^2,\alpha,\beta,w,z)$ for a $(1,1)$ knot $K$ gives a two-generator one-relator presentation of $\pi_1(S^3 \setminus K)$ as follows. 
Orient $\alpha$ and $\beta$ so that their intersection number $[\alpha]\cdot[\beta] = +1$ and let $t_{\alpha}$ be an oriented arc connecting $w$ to $z$ in $T^2\setminus \alpha$. Travel along $\beta$ for a full round and record its intersections with $\alpha$ and $t_\alpha$: write $X$/$X^{-1}$ for a positive/negative intersection with $\alpha$, and $Y$/$Y^{-1}$ for a positive/negative intersection with $t_\alpha$. Let $R(X,Y)$ be the resulting word; then we have $$\label{eqn:fund_group_presentation} \pi_1(S^3 \setminus K) \cong \langle X,Y\,|\,R(X,Y) \rangle.$$ To see that this is indeed a presentation of $\pi_1(S^3 \setminus K)$, we note that we can stabilize the genus one Heegaard decomposition of $S^3$ which we started with by removing a tubular neighborhood $N(\tilde{t}_{\alpha}, V_{\alpha})$ of an arc $\tilde{t}_{\alpha}$ in $V_\alpha$, where $\tilde{t}_\alpha$ is obtained by pushing $t_\alpha$ down slightly into the solid torus $V_\alpha$, and then adding this one-handle ($D^1\times D^2$) to $V_\beta$; the resulting genus two handlebodies are denoted by $V_\alpha'$ and $V_\beta'$. Thus we obtain a new Heegaard decomposition $$S^3 = V_\alpha' \bigcup_{\Sigma_2} V_\beta',$$ where $\Sigma_2 = \partial V_{\alpha}' = \partial V_{\beta}'$. Suppose $t_{\alpha} \cap \partial N(\tilde{t}_{\alpha}, V_{\alpha}) = \{z',w'\}$, and choose $t_{\alpha}''$ to be a copy of $\tilde{t}_{\alpha}$ in $\partial N(\tilde{t}_{\alpha}, V_{\alpha})$ whose ends are $\{z',w'\}$; then $\alpha_2 \triangleq t_{\alpha}'' \cup_{z',w'} t_{\alpha}'$ is a closed curve, and it bounds an embedded disk in the handlebody $V_{\alpha}'$, where $t_{\alpha}'$ is the subarc of $t_{\alpha}$ whose ends are $z'$ and $w'$. Choose $\beta_2$ to be a belt circle of $N(\tilde{t}_{\alpha}, V_{\alpha})$, i.e., a meridian of the knot $K$; then the Heegaard diagram $(\Sigma_2, \{ \alpha, \alpha_2\}, \{\beta, \beta_2\})$ specifies $S^3$; furthermore, $$\mathcal{H}' = (\Sigma_2, \{\alpha,\alpha_2\}, \{\beta\})$$ is a Heegaard diagram for the knot complement $S^3 \setminus K$, i.e., $S^3 \setminus K$ can be obtained from a genus two surface $\Sigma_2$ by first attaching two-handles $C_\alpha, C_{\alpha_2}, C_\beta$ so that the boundaries of the meridian disks of the two-handles are identified with $\alpha, \alpha_2, \beta$ respectively, and then attaching a three-ball to $\partial (\Sigma_2 \cup C_\alpha \cup C_{\alpha_2}) = S^2$. In this language, it is clear that the presentation [\[eqn:fund_group_presentation\]](#eqn:fund_group_presentation){reference-type="eqref" reference="eqn:fund_group_presentation"} specifies $\pi_1(S^3 \setminus K)$: attaching $C_\alpha$, $C_{\alpha_2}$, and the three-ball to $\Sigma_2$ yields a genus two handlebody with fundamental group free on two generators. Note that $\beta \cap \alpha_2 = \beta \cap t_{\alpha}' = \beta \cap t_{\alpha}$. According to Van Kampen's theorem, the relator obtained by attaching the two-handle $C_\beta$ is $R(X, Y)$ described as above. Throughout the paper, we use $\overline{X}$ for $X^{-1}$ and $\overline{Y}$ for $Y^{-1}$ for brevity. **Assumption 4**. *Let $K$ be a $(1,1)$ knot in $S^3$ and $\langle X, Y\,|\,R(X, Y)\rangle$ be a presentation of $\pi_1(S^3 \setminus K)$ obtained as above. We make the following assumptions:* 1. *The curves $\alpha$ and $\beta$ are oriented so that $[\alpha]\cdot[\beta] = +1$, that is $$\# \{X \,|\, X \in R(X,Y)\} - \#\{\overline{X}\,|\, \overline{X}\in R(X,Y)\} = +1;$$* 2. *$\# \{X \,|\, X \in R(X,Y)\} \geq 2$, or equivalently, $K$ is knotted;* 3.
*The relator $R(X,Y)$ is *cyclically reduced*, that is, there are no proper subwords of the form $X\overline{X}, \overline{X}X, Y\overline{Y}$ or $\overline{Y}Y$.* We write the relator as: $$R(X,Y) = R_1R_2\cdots R_i\cdots R_m,$$ where $R_i \in \{X, \overline{X}, Y, \overline{Y}\}$, and denote by $R_i^j$ the proper subword $R_iR_{i+1} \cdots R_j$, with the convention that $R_{m+k} = R_k$. We are interested in the subword $R_i^j$ with $\left\{ R_i, R_j \right\}=\left\{ X, \overline{X} \right\}$. We label the $X$-letters in $R_i^j$ from $1$ to $n$, and the $i$-th $X$-letter in $R_i^j$ is either $X_i$ or $\overline{X}_i$. Denote by $W_a^b$ ($1\leqslant a<b\leqslant n$) the subword of $R_i^j$ from $X_a$/$\overline{X}_a$ to $X_b$/$\overline{X}_b$ (hence $W_1^n = R_i^j$). We use the capital $X$ and $Y$ to denote the letters in the relator $R$ and the lowercase $x$ and $y$ to denote the intersection points in the Heegaard diagram $\mathcal{H}$. # Bigons and disk words for $(1,1)$ knots {#sec:bigon} Let $K$ be a $(1,1)$ knot in $S^3$ and $\langle X, Y\,|\,R(X, Y) \rangle$ be a presentation of $\pi_1(S^3\setminus K)$ obtained from a $(1,1)$ Heegaard diagram $\mathcal{H} = (T^2, \alpha, \beta, z, w)$ of $(S^3,K)$, satisfying Assumption [Assumption 4](#assump:relator){reference-type="ref" reference="assump:relator"}. To compute $\widehat{HFK}(S^3, K)$ from the presentation, we need to find all the information about the chain complex $\widehat{CFK}(S^3, K)$. The generators are the intersection points of the curves $\alpha$ and $\beta$ in the torus $T^2$, which correspond to $X$-letters ($X$ or $\overline{X}$) in the relator $R$. On the other hand, the differential is determined by bigons in the Heegaard diagram $\mathcal{H}$. Since the relator $R$ is cyclically reduced, all bigons contain at least one basepoint, and it follows that the differential is identically zero. Therefore it suffices to decide the Alexander and Maslov gradings of $\widehat{CFK}(S^3, K)$ from the relator $R$. In order to do so, one may wish to find all bigons from $R$, and decide the number of basepoints contained in each bigon. However, there are difficulties in achieving this (see Example [Example 9](#knot5_2){reference-type="ref" reference="knot5_2"}). Instead, we will work on a special class of bigons (Definition [Definition 5](#def:primitive_bigon){reference-type="ref" reference="def:primitive_bigon"}) that can be determined from $R$ and suffices to determine the gradings of $\widehat{CFK}(S^3, K)$. **Definition 5**. Let $D$ be a Whitney disk in $T^2$ connecting $x_1$ and $x_n$. Let $\widetilde{x}_1$ and $\widetilde{x}_n$ be lifts of $x_1$ and $x_n$ respectively in $\mathbb{C}$, and $\widetilde{D}$ be a lift of $D$ connecting $\widetilde{x}_1$ and $\widetilde{x}_n$. $\partial \widetilde{D}$ consists of an $\alpha$ arc $a$ and a $\beta$ arc $b$. Suppose $\widetilde{\alpha}$ is the lift of $\alpha$ containing $\widetilde{x}_1$. The bigon $\widetilde{D}$ is called *primitive* if $\widetilde{\alpha} \cap b = \{ \widetilde{x}_1, \widetilde{x}_n \}$, and a bigon $D$ is called *primitive* if it has a primitive lift. To describe our algorithm, we use the universal cover $\mathbb{C}$ of the torus $T^2$. We also use $\alpha$, $\beta$, $t_{\alpha}$, $z$ and $w$ to denote their lifts in $\mathbb{C}$ when there is no confusion. The key point is that bigons can lift to embedded bigons in the universal cover and it is possible to count the number of lifted basepoints inside primitive bigons from the relator $R$.
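As a purely illustrative aid (not part of Algorithm [Algorithm 21](#algo:main){reference-type="ref" reference="algo:main"}), the bookkeeping just described is easy to implement. The following minimal Python sketch uses the ad hoc convention that lowercase letters `x`, `y` encode $\overline{X}$, $\overline{Y}$; it checks Assumption [Assumption 4](#assump:relator){reference-type="ref" reference="assump:relator"} for a relator string and extracts the subwords $W_a^b$ determined by the $X$-letters. The encoding and function names are ours, chosen only for illustration.

```python
# Illustrative helper (our ad hoc encoding): 'X', 'Y' stand for the letters X, Y,
# while 'x', 'y' stand for their inverses.
INV = {'X': 'x', 'x': 'X', 'Y': 'y', 'y': 'Y'}

def is_cyclically_reduced(R):
    """No subword of the form X X^{-1}, X^{-1} X, Y Y^{-1}, Y^{-1} Y (cyclically)."""
    return all(R[(i + 1) % len(R)] != INV[R[i]] for i in range(len(R)))

def check_assumption_4(R):
    """The three conditions of Assumption 4 for a relator string R."""
    return (R.count('X') - R.count('x') == 1      # [alpha] . [beta] = +1
            and R.count('X') >= 2                 # K is knotted
            and is_cyclically_reduced(R))

def x_positions(R):
    """Indices of the X-letters of R, labelled 1, ..., n in order."""
    return [i for i, c in enumerate(R) if c in 'Xx']

def subword_W(R, a, b):
    """The subword W_a^b from the a-th to the b-th X-letter (1 <= a < b <= n)."""
    pos = x_positions(R)
    return R[pos[a - 1]: pos[b - 1] + 1]

# The relator of the knot 5_2 from Example 9 below, in this encoding:
R = "XYxYXyxyXYxYX"
assert check_assumption_4(R)
print(subword_W(R, 1, 4))   # "XYxYXyx", i.e. W_1^4 = X_1 Y X_2^{-1} Y X_3 Y^{-1} X_4^{-1}
```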
Primitive bigons are enough to determine the gradings of all generators. If $D$ is a bigon connecting two points $x_1$ and $x_n$ in $\mathbb{C}$, suppose $\alpha \cap b = \{ x_1, x_2, \cdots, x_n \}$, labeled according to their order on $b$. Then $D$ is a combination of primitive bigons $D_i, (i=1, \cdots, n-1)$ connecting adjacent points $x_i$ and $x_{i+1}$, which can be used to compute the grading difference between $x_1$ and $x_n$. Suppose $D$ is a primitive bigon. Let $n_z(D)$ (resp. $n_w(D)$) be the number of basepoints $z$ (resp. $w$) contained in $D$ and write $$\label{eqn:PD} P(D) = (n_z(D), n_w(D)).$$ In this paper we always consider the grading shift from point $x_1$ to $x_n$, so the numbers $n_z(D)$ and $n_w(D)$ may be negative. In fact, the signs of $n_z(D)$ and $n_w (D)$ depend on the orientation of $D$, as defined below: **Definition 6**. Let $D$ be a bigon in $\mathbb{C}$ whose boundary consists of an $\alpha$ arc $a$ and a $\beta$ arc $b$ ($b$ is oriented). The *orientation* of $D$ is *positive* if $D$ is on the right side of the arc $b$, and *negative* otherwise. **Definition 7**. Let $\varphi(W_1^n) = \# \{ X \,|\, X \in W_1^n\} - \# \{ \overline{X}\,|\, \overline{X}\in W_1^n\}$. A subword $W_1^n$ of the relator $R(X,Y)$ is called a *disk word* if its two ends have opposite signs and $\varphi(W_1^n) = 0$. A disk word $W_1^n$ is called *primitive* if $\varphi(W_1^k) \neq 0$ for all $1 < k < n$. **Definition 8**. Let $W_1^n$ be a disk word, and let $\hat{X}_i$ denote $X_i$ or $\overline{X}_i$. Define the *height* (relative to the endpoints) of $\hat{X}_i$ as $$h(\hat{X}_i) = \varphi (W_1^i) - \epsilon(\hat{X}_i), \ (1 \leq i \leq n),$$ where $\epsilon(\hat{X}_i) = 1$ if $\hat{X}_i$ has the same sign as $\hat{X}_1$, and zero otherwise. If a subword $W_i^j$ of $W_1^n$ is primitive and $h(\hat{X}_i) = h(\hat{X}_j)=s$, we say that the primitive disk word $W_i^j$ has *height $s$* in $W_1^n$. A primitive disk word $W_1^n$ is *elementary* if $n = 2$, *upward* if $\hat{X}_1=X_1$ and *downward* if $\hat{X}_1=\overline{X}_1$. For a primitive disk word $W_1^n$ with $n>2$, all $h(\hat{X}_i), (1 < i < n)$ have the same sign. Moreover, we have $h(\hat{X}_i) > 0$ ($1 < i < n$) if $W_1^n$ is upward, and $h(\hat{X}_i) < 0$ ($1 < i < n$) if $W_1^n$ is downward. It is clear that the corresponding word of a bigon $D$ in $\mathbb{C}$ is a disk word, since the intersection points of each lifting of $\alpha$ with the $\beta$ part of $\partial D$ occur in pairs of opposite signs. We call a disk word $W_1^n$ a *real bigon* if it corresponds to a bigon in $\mathbb{C}$. An example of a disk word that does not correspond to a bigon is given in the following **Example 9**. Figure [\[fig:knot5_2\]](#fig:knot5_2){reference-type="ref" reference="fig:knot5_2"} shows a lifting of a Heegaard diagram compatible with the knot $5_2$ in $S^3$. The curve $\beta$ gives the relator $$R(X, Y) = X_1 Y \overline{X}_2 Y X_3 \overline{Y}\overline{X}_4 \overline{Y}X_5 Y \overline{X}_6 Y X_7.$$ The two disk words $W_1^4: X_1 Y \overline{X}_2 Y X_3 \overline{Y}\overline{X}_4$ and $W_3^6: X_3 \overline{Y}\overline{X}_4 \overline{Y}X_5 Y \overline{X}_6$ are the same if we replace $Y$ by $\overline{Y}$. $W_1^4$ corresponds to a real bigon, but $W_3^6$ does not. **Lemma 10**. *A primitive disk word corresponds to a primitive bigon in $\mathbb{C}$ and vice versa.* *Proof.* Let $W_1^n$ be a primitive disk word.
Suppose it corresponds to the subarc $b$ of $\beta$ whose two endpoints $x_1, x_n$ are in the lifts $\alpha$ and $\alpha'$, respectively. Note that the two sets $\{k\,|\,x_k \in \alpha \cap b, 1<k \}$ and $\{k\,|\,x_k \in \alpha' \cap b, k<n \}$ cannot both be empty. Assume without loss of generality that the former is not empty and that $i$ is its minimum. Then the subarc of $b$ from $x_1$ to $x_i$ and $\alpha$ bounds a primitive bigon, so that $W_1^i$ is a disk word. Since $W_1^n$ is assumed to be primitive, it follows that $i = n$ so that $W_1^n$ actually corresponds to a primitive bigon in $\mathbb{C}$. Suppose $D$ is a primitive bigon. If its corresponding word is not primitive, then $\varphi(W_1^k) = 0$ for some $1<k<n$. Let $j = \min \{k \,|\, \varphi(W_1^k) = 0, 1<k<n \}$; then the subword $W_1^j$ is primitive by definition. As proved above, $W_1^j$ corresponds to a primitive bigon, so that its endpoint $x_j \in \alpha$, which cannot occur. ◻ Since the primitive disk word and the primitive bigon are in one-to-one correspondence, we will use these two terms interchangeably, by abuse of notation. All primitive bigons can be enumerated since the length of the relator $R$ is finite. What we need to do is to find the number of basepoints contained in each of them directly from $R(X,Y)$. ## Elementary bigons To begin with, we consider what the local diagram corresponding to the simplest primitive disk word $XY\overline{X}$ should look like. As illustrated in Figure [\[fig:2choices\]](#fig:2choices){reference-type="ref" reference="fig:2choices"}, there are two possibilities: Note that the two cases cannot occur simultaneously in a diagram. If both are present, consider the word that follows the letter $\overline{X}$ up to the next $X$-letter: it can only be $Y\overline{X}$ or $\overline{X}$ since the relator is reduced. Continuing the discussion with the next letter $\overline{X}$, we see that none of the $\overline{X}$ is followed by $X$, which cannot happen. When we choose $XY\overline{X}$ to correspond to the bigon containing the basepoint $z$ (resp. $w$), the word $\overline{X}YX$ must correspond to a bigon that contains the basepoint $w$ (resp. $z$), see Figure [\[fig:switch_zw\]](#fig:switch_zw){reference-type="ref" reference="fig:switch_zw"}: Thus, after exchanging the two basepoints $z$ and $w$,[^1] we can assume that the disk word $XY\overline{X}$ corresponds to the bigon containing the basepoint $w$. Thus the bigons corresponding to the four elementary disk words (called *elementary bigons*) can be drawn as in Figure [\[fig:elementary_bigons\]](#fig:elementary_bigons){reference-type="ref" reference="fig:elementary_bigons"}. Moreover, the orientation of each elementary bigon, as well as the number of basepoints it contains, can be found, as shown in Table [1](#table:elementary_bigons){reference-type="ref" reference="table:elementary_bigons"}. elementary disk word $XY\overline{X}$ $X\overline{Y}\overline{X}$ $\overline{X}YX$ $\overline{X}\overline{Y}X$ ---------------------- ------------------ ----------------------------- ------------------ ----------------------------- orientation negative positive positive negative $(n_z, n_w)$ $(0, -1)$ $(0, 1)$ $(1, 0)$ $(-1, 0)$ : Elementary bigons. *Remark 11*. By Assumption [Assumption 4](#assump:relator){reference-type="ref" reference="assump:relator"}, there always exist two adjacent $X$-letters in $R$ with opposite signs, and there exist bigons of the form $X Y^k \overline{X}$ or $\overline{X}Y^k X$.
We claim that $k$ must be $\pm 1$. If $\left| k \right|>1$ on the contrary, then there exists a covering transformation $\Gamma$ such that $\beta \cap \Gamma(\beta) \neq \emptyset$, as illustrated in Figure [\[fig:abs_k\_is_1\]](#fig:abs_k_is_1){reference-type="ref" reference="fig:abs_k_is_1"}. The argument is similar to Lemma [Lemma 13](#lemma:point_smaller1){reference-type="ref" reference="lemma:point_smaller1"}. It follows that there always exist elementary bigons containing basepoint $z$ and $w$ respectively. ## Bigons with height Now we consider general primitive bigons. Firstly, the orientation of a primitive bigon can be found from its corresponding word. **Lemma 12**. *If $D$ is a real bigon, then the number of positive and negative elementary words that it contains differs by one. Moreover, $D$ is positive if and only if $$\#\{\text{positive elementary bigons in $ D $}\} - \#\{\text{negative elementary bigons in $ D $}\} = 1.$$* *Proof.* Suppose $D$ is a real bigon whose boundary consists of an $\alpha$ arc $a$ and a $\beta$ arc $b$. Perturb $\beta$ so that it is perpendicular to $\alpha$ at all intersection points. Then the total curvature of the arc $b$ is $\pi$ (resp. $-\pi$), counted counterclockwise, if the bigon is on the left (resp. right) when walking along $b$. That is, $D$ is positive if and only if the total curvature is $-\pi$. By assuming that $\beta$ is perpendicular to $\alpha$ at all intersections, the arc $b$ will have a non-zero contribution to the total curvature only when it passes near the basepoints. As can be seen from the picture of the elementary bigons (Figure [\[fig:elementary_bigons\]](#fig:elementary_bigons){reference-type="ref" reference="fig:elementary_bigons"}), the arc $b$ on a negative (resp. positive) elementary bigon will contribute $\pi$ (resp. $-\pi$). Hence the total curvature is $$\pi \cdot \left(\#\{\text{negative elementary bigons}\} - \#\{\text{positive elementary bigons}\}\right).$$ Thus, $D$ is positive if and only if $D$ contains one more positive elementary bigons than negative ones. ◻ In the rest of the section, let $W_1^n$ be an upward positive disk word with $n > 2$. Let $D$ be a corresponding bigon of $W_1^n$ in $\mathbb{C}$. Suppose that $\partial D=a\cup b$ with $a\subseteq\widetilde{\alpha}$ and $b\subseteq\widetilde{\beta}$. It is clear that $h(x_2)=h(x_{n-1})=1$. Let $b_1$ be the subarc from $x_1$ to $x_2$, and $b_2$ be the subarc from $x_{n-1}$ to $x_n$. The corresponding subwords for $b_1$ and $b_2$ are $W_1^2 = X Y^l X$ and $W_{n-1}^n = \overline{X}\overline{Y}^{l'} \overline{X}$ for some integers $l$ and $l'$. Write the two subwords as a pair $(W_1^2, W_{n-1}^n)$ or $(X Y^l X, \overline{X}\overline{Y}^{l'} \overline{X})$. Denote by $S$ the square domain bounded by the subarcs $b_1$, $b_2$ and two lifts of the $\alpha$ curve. The key to calculating $P(D)$ is to determine the number of basepoints contained in $S$. **Lemma 13**. *$\max \left\{ \left| n_z(S) \right|, \left| n_{w} (S) \right| \right\} \leq 1$.* *Proof.* Suppose the square domain $S$ contains two lifts $w_1, w_2$ of the basepoint $w$. As in Figure [\[fig:no_two_lifts_of_basepoints\]](#fig:no_two_lifts_of_basepoints){reference-type="ref" reference="fig:no_two_lifts_of_basepoints"}, we assume that $w_2$ is on the right-hand side of $w_1$. Note that there exists an elementary bigon containing $w_1$, denoted by $D_w$. Then $D_w \subseteq S$. Let $p$ be an endpoint of $D_w$. 
Consider the covering transformation $\Gamma$ which maps $w_1$ to $w_2$, then $w_2 \in \Gamma (D_w)$, and $\Gamma (D_w) \subset S$. The orientation of the lift curve $\widetilde{\alpha}$ gives an order $$x_1 < p < \Gamma(p) < x_n.$$ Therefore $x_1 < \Gamma(x_1) < \Gamma(p) < x_n$. It implies that $\Gamma (b_1)\subset S$ and $\Gamma (D)\cap D\neq\emptyset$. It follows that $\widetilde{\beta}\cap\Gamma(\widetilde{\beta})\neq\emptyset$ and we get a contradiction. The case that $S$ contains two lifts $z_1, z_2$ of $z$ is similar: the image of $x_2$ under the covering transformation $\Gamma'$ that maps $z_1$ to $z_2$ satisfies $x_2 < \Gamma'(x_2) < x_{n-1}$, so $\Gamma'(D)$ and $D$ are overlapped. ◻ As a consequence, we have $|l - l'| \leq 1$. Since $D$ is upward and positive, we have $x_1 < x_n$, i.e., the subarc $b_2$ is on the right side of $b_1$. When $|l-l'| = 1$, all possible pairs $(W_1^2, W_{n-1}^n)$ are shown in Figure [\[fig:b_diff_word\]](#fig:b_diff_word){reference-type="ref" reference="fig:b_diff_word"}, and we have the following: **Corollary 14**. *If the two subarcs $b_1$ and $b_2$ correspond to the words $XY^l X$ and $\overline{X}\overline{Y}^{l'} \overline{X}$ with $|l - l'| = 1$, then the pair $(W_1^2, W_{n-1}^n)$ has four cases and the basepoint contained in $S$ is shown in Table [2](#table:b_diff_word){reference-type="ref" reference="table:b_diff_word"}.* *$(W_1^2, W_{n-1}^n)$* *$(XY^lX, \overline{X}\overline{Y}^{l-1} \overline{X})$* *$(XY^{l-1}X, \overline{X}\overline{Y}^l \overline{X})$* *$(X \overline{Y}^l X, \overline{X}Y^{l-1} \overline{X})$* *$(X \overline{Y}^{l-1} X, \overline{X}Y^l \overline{X})$* ------------------------ ---------------------------------------------------------- ---------------------------------------------------------- ------------------------------------------------------------ ------------------------------------------------------------ *$P(S)$* *$(1, 0)$* *$(0, 1)$* *$(0, 1)$* *$(1, 0)$* : *Four cases of the pair $(W_1^2, W_{n-1}^n)$ when $|l-l'|=1$.* It is evident that $n_z(S) = n_w(S)$ if $l = l'$. We now consider the height one primitive bigons contained in $D$. The simplest case is that there is exactly one such bigon, namely the one corresponding to $W_2^{n-1}$. **Lemma 15**. *Suppose the two subarcs $b_1$ and $b_2$ correspond to the words $XY^lX$ and $\overline{X}\overline{Y}^l \overline{X}$. If $W_2^{n-1}$ is a primitive disk word, then $$P(S)=(0,0).$$* *Proof.* Without loss of generality, suppose that $S$ contains a basepoint. Since $n_z(S) = n_w(S)$, $S$ contains both $z$ and $w$ exactly once by Lemma [Lemma 13](#lemma:point_smaller1){reference-type="ref" reference="lemma:point_smaller1"}. As illustrated in Figure [\[fig:1height1\]](#fig:1height1){reference-type="ref" reference="fig:1height1"}, since the algebraic intersection number of $\alpha_0$ and $\beta$ is one, there will be other intersection points. Without loss of generality, suppose $x' \neq x_1$ is adjacent to $x_n$, then the arcs bounded by $x_n$ and $x'$ on $\alpha_0$ and $\beta$ bound a primitive bigon $D'$, and which is on the other side of $\alpha_0$ with $D$, i.e., $D'$ is downward. Therefore, it must contain a subword of the form $\overline{X}Y^{\pm 1} X$, whose corresponding bigon contains the basepoint $z$. In fact, the first elementary bigon after the letter $\overline{X}_n$ is of this form, assumed to be $\overline{X}_N Y^{\pm 1} X_{N+1}, (N \geq n)$, and denoted by $D''$. 
By the assumption that $S$ contains the basepoint $z$, there is a covering transformation $\Gamma$ such that $\Gamma (D'')$ is contained in $D$. Furthermore, let $s$ be the subarc of $\beta$ that from the point $x_n$ to $x_{N+1}$, then $\Gamma(s) \subseteq S$. In particular, the covering transformation $\Gamma$ maps $x_n$ to a point in the interior of $D$. Since $\beta \cap \Gamma(\beta) = \emptyset$, this implies that $\Gamma (D)$ is contained in $D$. We get a contradiction since $\Gamma$ is nothing but a translation. ◻ **Corollary 16**. *$P(X^k Y \overline{X}^k) = P(X Y \overline{X})$. This equation holds for the other three types as well. We also call all of them are elementary bigons.* **Corollary 17**. *A lift of a real bigon does not contain a whole lift of $t_\alpha$.* *Proof.* Assume $D$ is a lift of a real bigon that contains a whole lift $\widetilde{t}_{\alpha}$ of $t_{\alpha}$, then $\widetilde{t}_{\alpha}$ is contained in a primitive bigon whether or not $D$ is primitive. However, it is obvious from the proof of Lemma [Lemma 15](#lemma:one_height_one){reference-type="ref" reference="lemma:one_height_one"} that this is impossible. ◻ Let $x_{k_1}, \cdots, x_{k_{d+1}}$ ($k_1 = 2, k_{d+1} = n-1$) be all points of height one. Suppose $D$ has more than one height one primitive bigons ($d \geq 2$), then the first and the last height one points $x_2$ and $x_{n-1}$ are connected by a series of primitive bigons $D_i$ corresponding to the subword $W_{k_i}^{k_{i+1}}, \ i = 1, \cdots, d$. We call $D_1$ and $D_d$ the first and the last height one primitive bigons in $W_1^n$, respectively. **Lemma 18**. *Suppose the two subarcs $b_1$ and $b_2$ correspond to the words $XY^lX$ and $\overline{X}\overline{Y}^l \overline{X}$. If $D$ contains at least two primitive bigons of height one, say $D_1, \cdots, D_d$, ($d \geq 2$). Then $P(S) = (1, 1)$ if and only if the first and last height one primitive bigons $D_1$ and $D_{d}$ have the same orientation as $D$; otherwise, $P(S) = (0, 0)$.* *Proof.* Note that $P(S) = (1, 1)$ or $(0, 0)$. Suppose $P(S) = (1, 1)$. As shown in Figure [\[fig:b_same_word1\]](#fig:b_same_word1){reference-type="ref" reference="fig:b_same_word1"}, consider the square domain $S'$ bounded by the two lifts of the curve $\alpha$, the arcs $b_1$ and $\Gamma_{1}^{-1} (b_2)$, where $\Gamma_{1}$ is a horizontal translation such that $S'$ contains no basepoints. Let $x'_1$ be a copy of $x_1$ at the height one $\alpha$-lifting and at the point on the left closest to $x_2$. There is a unique transformation that maps $x_1$ to $x'_1$, say $\Gamma_2$. One can find that $x'_1$ is on the right side of $\Gamma^{-1}_1 (x_{n-1})$, i.e., $$\Gamma^{-1}_1(x_{n-1}) < x'_1 < x_2.$$ Because otherwise, the bigons $\Gamma_2(D)$ and $\Gamma^{-1}_1 (D)$ would overlap, which cannot happen. For the first height one primitive bigon $D_1$, consider the position of its other endpoint $x_{k_2}$. Since any two different lifts of the curve $\beta$ are disjoint, if $x_{k_2} < x'_1$, it forces that $\Gamma_2 (D)$ contained in $D_1$, which cannot occur; on the other hand, $x_{k_2}$ cannot be a point between $x'_1$ and $x_2$ since there is no basepoint contained in domain $S'$. Thus $x_{k_2}$ must be on the right side of $x_2$, i.e., $D_1$ has same orientation as $D$. For the last height one primitive bigon $D_d$, consider the copy of $x_n$ at height one $\alpha$-lifting and right closest to $x_2$, say $x'_n$, there is a transformation $\Gamma_3$ that maps $x_n$ to $x'_n$. 
By the same argument, one can find that $x_{k_d}$ lies on the left side of $x_{k_{d+1}} = x_{n-1}$, so that $D_d$ has the same orientation as $D$.

Conversely, suppose $P(S) = (0, 0)$, i.e., $S$ does not contain any basepoints. If $D_1$ and $D_{d}$ had the same orientation as $D$, then one of $x_{k_2}$ and $x_{k_d}$ would lie in $S$, which is also impossible. See Figure [\[fig:b_same_word2\]](#fig:b_same_word2){reference-type="ref" reference="fig:b_same_word2"}. ◻

**Lemma 19**. *$P(D) = P(S) + \sum\limits_{i=1}^{d} P(D_i)$.*

*Proof.* Let $a_1$ be the subarc of the lift $\alpha_1$ that connects the points $x_2$ and $x_{n-1}$.

If $a_1 \subset D$, then, as illustrated in Figure [\[fig:sum_points\]](#fig:sum_points){reference-type="ref" reference="fig:sum_points"}, we cut $D$ along $a_1$ into two domains: a square domain $S$ and a real bigon (not necessarily primitive) connecting the points $x_2$ and $x_{n-1}$, say $D'$. Thus $D = S + D'$. Moreover, $D'$ can be divided into a combination of primitive bigons $D_1, \cdots, D_d$. Thus $$P(D) = P(S) + P(D') = P(S) + \sum\limits_{i=1}^{d} P(D_i).$$

If $D$ does not contain the entire subarc $a_1$, there must be a height one primitive bigon of the form $\overline{X}Y^{\pm 1} X$ contained in $S$. Consider the outer-most one, and suppose it is the $s$-th height one primitive bigon $D_s$, whose two endpoints are denoted by $x_{k_s}$ and $x_{k_{s+1}}$. Then $D_s$ has the opposite orientation to $S$, so that its corresponding word is $\overline{X}_{k_s} \overline{Y}X_{k_{s+1}}$. Let $a_2$ be the subarc of $\alpha_1$ that is bounded by the points $x_2$ and $x_{k_s}$, and $a_{n-1}$ be the subarc that is bounded by the points $x_{k_{s+1}}$ and $x_{n-1}$. Cutting the primitive bigon $D$ along the line segments $a_2$ and $a_{n-1}$, we obtain three domains: two of them are real bigons (not necessarily primitive) whose corresponding words are $W_2^{k_s}$ and $W_{k_{s+1}}^{n-1}$, denoted by $D'$ and $D''$ respectively, and the last one is a hexagon contained in $S$ that does not contain the basepoint $z$, denoted by $S_w$, see Figure [\[fig:sum_points\]](#fig:sum_points){reference-type="ref" reference="fig:sum_points"}. Thus $$D = D' + D'' + S_w, \qquad S = S_w - D_s.$$ On the other hand, the two points $x_2$ and $x_{k_s}$ are connected by a series of primitive bigons $D_1, \cdots, D_{s-1}$, and the points $x_{k_{s+1}}$ and $x_{k_{d+1}}$ are connected by the primitive bigons $D_{s+1}, \cdots, D_{d}$. Thus we obtain $$\begin{aligned} P(D) & = P(D') + P(D'') + P(S_w) \\ & = \sum_{i=1}^{s-1} P(D_i) + \sum_{i=s+1}^{d} P(D_i) + P(S_w) \\ & = \sum_{i=1}^{d} P(D_i) - P(D_s) + P(S_w) \\ & = \sum_{i=1}^{d} P(D_i) + P(S). \end{aligned}$$ ◻

We have discussed upward positive primitive bigons in detail. The results for the other three types of primitive bigons are similar. In summary, we have the following.

**Theorem 20**. *Let $W_1^n$ be a primitive disk word and $D$ be its corresponding bigon in $\mathbb{C}$. Then the number of basepoints in $D$ can be computed from the word $W_1^n$.*

*Proof.* We prove by induction on the length of the word $W_1^n$. The conclusion holds for the four elementary disk words from Figure [\[fig:elementary_bigons\]](#fig:elementary_bigons){reference-type="ref" reference="fig:elementary_bigons"}. For the case of length $n$, each height one primitive bigon $D_i$ has length less than $n$, so that all of the $P(D_i)$ can be read off from the corresponding subwords of $W_1^n$ by induction.
On the other hand, Lemmas [Corollary 14](#cor:b_diff_word){reference-type="ref" reference="cor:b_diff_word"} and [Lemma 18](#lemma:b_same_word){reference-type="ref" reference="lemma:b_same_word"} imply that $P(S)$ can be computed from the word $W_1^n$. Theorem [Theorem 20](#thm:PD_from_disk_word){reference-type="ref" reference="thm:PD_from_disk_word"} then follows by Lemma [Lemma 19](#lemma:sum_points){reference-type="ref" reference="lemma:sum_points"}. ◻ # The fundamental group and $\widehat{HFK}$ {#sec:alg} Now we describe the algorithm for computing $\widehat{HFK}(S^3, K)$ from a presentation from a $(1,1)$ Heegaard diagram. **Algorithm 21**. Let $K$ be a $(1,1)$ knot and $P=\langle X, Y \,|\, R(X, Y) \rangle$ be a presentation of $\pi_1(S^3\setminus K)$ from a $(1,1)$ Heegaard diagram. The chain complex $\widehat{CFK}(S^3,K)$ has generators corresponding to the letters $X$ and $\overline{X}$ in the relator $R$ and trivial differential. The Alexander and Maslov gradings are determined as follows: 1. [\[step_bigon\]]{#step_bigon label="step_bigon"} Enumerate all the primitive disk words from the relator $R$ and determine their orientation by Table [1](#table:elementary_bigons){reference-type="ref" reference="table:elementary_bigons"} and Lemma [Lemma 12](#lemma:orientation){reference-type="ref" reference="lemma:orientation"}. 2. [\[step_basepoint\]]{#step_basepoint label="step_basepoint"} For each primitive disk word $W_1^n$, $P(W_1^n) = \left(n_z(W_1^n), n_w(W_1^n)\right)$ is computed iteratively as follows: 1. [\[step_elementary\]]{#step_elementary label="step_elementary"} $n_z$ and $n_w$ for the elementary disk words are given in Table [1](#table:elementary_bigons){reference-type="ref" reference="table:elementary_bigons"}; 2. [\[step_basepoint_key\]]{#step_basepoint_key label="step_basepoint_key"} If $W_1^n$ is upward and positive with $n>2$. Let $x_{k_1}, \cdots, x_{k_{d+1}}$ ($k_1 = 2, k_{d+1} = n-1$) be all points of height one. Cut $W_1^n$ into primitive disk words $W_{k_i}^{k_{i+1}}, \ i = 1, \cdots, d$, and two subwords $W_1^2 = X_1 Y^l X_2$ and $W_{n-1}^n = \overline{X}_{n-1} \overline{Y}^{l'} \overline{X}_n$ ($l,l'\in\mathbb{Z}$, and $|l - l'| \leq 1$ by Lemma [Lemma 13](#lemma:point_smaller1){reference-type="ref" reference="lemma:point_smaller1"}). Then by Lemma [Lemma 19](#lemma:sum_points){reference-type="ref" reference="lemma:sum_points"} $$P(W_1^n) = P(S) + \sum_{i=1}^{d} P(W_{k_i}^{k_{i+1}}),$$ where $P(S)$ is determined by the following: 1. If $|l - l'| = 1$, $P(S)$ are shown in Table [2](#table:b_diff_word){reference-type="ref" reference="table:b_diff_word"} (Corollary [Corollary 14](#cor:b_diff_word){reference-type="ref" reference="cor:b_diff_word"}). 2. If $l = l'$, then by Lemmas [Lemma 15](#lemma:one_height_one){reference-type="ref" reference="lemma:one_height_one"} and [Lemma 18](#lemma:b_same_word){reference-type="ref" reference="lemma:b_same_word"} $$P(S) = \left\{ \begin{aligned} (1,1), & \quad \text{$ d \geq 2 $ and $ W_{2}^{k_{2}} $ and $ W_{k_d}^{n-1} $ are positive}; \\ (0, 0), & \quad \text{otherwise}. \end{aligned}\right.$$ 3. If $W_1^n$ is upward and negative. A upward positive disk word $\widetilde{W}_1^n$ can be obtained from $W_1^n$ by replacing each $Y$-letter by its inverse, then $$P(W_1^n) = - P(\widetilde{W}_1^n).$$ 4. If $W_1^n$ is downward and negative. 
An upward positive disk word $\widetilde{W}_1^n$ can be obtained from $W_1^n$ by replacing each $X$-letter by its inverse, then $$(n_z(W_1^n), n_w(W_1^n)) = - (n_w(\widetilde{W}_1^n), n_z(\widetilde{W}_1^n)).$$

5. If $W_1^n$ is downward and positive. An upward positive disk word $\widetilde{W}_1^n$ can be obtained from $W_1^n$ by replacing each $X$- and $Y$-letter by its inverse, then $$(n_z(W_1^n), n_w(W_1^n)) = (n_w(\widetilde{W}_1^n), n_z(\widetilde{W}_1^n)).$$

3. [\[step_relative\]]{#step_relative label="step_relative"} For any two points $x_1$ and $x_n$ that are connected by a primitive disk word $W_1^n$, the relative Alexander grading is determined by the equation: $$\label{eqn:alexander_relative} F(x_1) - F(x_n) = n_z(W_1^n) - n_w(W_1^n),$$ and the relative Maslov grading can be computed as: $$\label{eqn:maslov_relative} M(x_1) - M(x_n) = \left\{\begin{aligned} & 1 - 2 n_w(W_1^n), & & \text{if $ W_1^n $ is positive}; \\ & -1 + 2 n_w(W_1^n), & & \text{if $ W_1^n $ is negative}. \end{aligned} \right.$$

4. [\[step_absolute\]]{#step_absolute label="step_absolute"} The absolute Alexander grading can be obtained by requiring $$\label{eqn:alexander_grading_absolute} \# \{x \,|\, F(x) = i\} \equiv \#\{x \,|\, F(x) = -i \} \pmod{2}, \quad \forall \, i \in \mathbb{Z}.$$ To obtain the absolute Maslov grading, we forget the basepoint $z$ and find the unique intersection point which generates $\widehat{HF}(T^2, \alpha, \beta, w) \cong \mathbb{Z}$. This process can be carried out using only the relator $R(X, Y)$, as follows:

    1. Erase all primitive bigons with $n_w = 0$. (For example, all elementary bigons $\overline{X}^k Y^{\pm 1} X^k$ are removed.) The resulting relator is denoted by $R'(X, Y)$.

    2. The basepoint $w$ also gives a relative grading by the equation $$w(x_1) - w(x_n) = n_w(W_1^n).$$ For the relator $R'(X, Y)$, delete those new primitive bigons whose two endpoints have relative $w$-grading zero. (Here the relative grading is given by $R$, not $R'$.) The resulting relator is denoted by $R''(X, Y)$.

    3. There will be exactly one $X$-letter in $R''(X, Y)$; assume it is the $k$-th one. Thus we can define $M(x_k) = 0$ to obtain an absolute $M$-grading of all points.

In Step [\[step_absolute\]](#step_absolute){reference-type="ref" reference="step_absolute"}, none of the primitive bigons deleted in the above two procedures contains the basepoint $w$, so the deletion can be realized by isotopies of the $\beta$ curve in the complement of $w$, and the resulting diagram $(T^2, \alpha, \beta'', w)$ specifies $S^3$, where $\beta''$ is the curve obtained from $\beta$ by these isotopies.

We need to show that there is exactly one $X$-letter in $R''(X, Y)$. Specifically, we explain that no primitive bigon remains after these two steps. The proof is the same as the proof of Lemma [Lemma 15](#lemma:one_height_one){reference-type="ref" reference="lemma:one_height_one"}. Suppose $R''$ contains a primitive bigon $D_1$ with $n_w(D_1) \neq 0$, and let $x_1$ and $x_n$ be the two endpoints of $D_1$. Assume it is upward without loss of generality. Since the algebraic intersection number $[\alpha] \cdot [\beta''] = [\alpha] \cdot [\beta] = +1$, there will be another intersection point $x'$. Suppose $x'$ is adjacent to $x_n$, and they form a primitive bigon $D_2$. Since $D_2$ remains after the second step, the $w$-grading implies that $n_w(D_2) \neq 0$, i.e., $D_2$ also contains the basepoint $w$. On the other hand, $D_2$ is downward.
Therefore, there exists a translation $\Gamma$ such that $\Gamma(D_2)$ and $D_1$ overlap, i.e., $\Gamma(\beta'') \cap \beta'' \neq \emptyset$, which cannot happen.

To summarize, all the steps described above involve only the relator $R$. So we have the following theorem:

**Theorem 22**. *Let $K$ be a $(1, 1)$ knot in $S^3$, and let $\langle X, Y \,|\, R(X, Y) \rangle$ be a cyclically reduced presentation of $\pi_1(S^3 \setminus K)$ coming from a $(1,1)$ Heegaard diagram. The generators of the knot Floer homology of $K$ are in one-to-one correspondence with the $X$-letters in the relator $R$, with $x_i$ denoting the generator corresponding to the $i$-th $X$-letter; the Alexander and Maslov gradings of each generator can be determined by the above algorithm, which involves only the relator $R(X, Y)$. The knot Floer homology is given as follows: $$\widehat{HFK}_m(S^3, K;s) = \bigoplus_{\{ x_i \,|\, F(x_i) = s, M(x_i) = m\}} \mathbb{Z} \cdot \langle x_i \rangle.$$*

We rewrite the knot Floer homology as the Poincaré polynomial $$P_R(t, q) = \sum_{s, m} {\rm rank} \widehat{HFK}_m(S^3, K;s) \cdot t^s q^m = \sum_{i} t^{F(x_i)} q^{M(x_i)}.$$

**Proposition 23**. *Let $\langle X, Y \,|\, R \rangle$ be a presentation coming from a $(1,1)$ Heegaard diagram, and $P_R(t,q)$ be the Poincaré polynomial given by the algorithm.*

1. *Let $R'$ be the relator obtained from $R$ by applying the transformation $l_k: X \mapsto Y^kX, Y \mapsto Y$ (resp. $r_k: X \mapsto XY^k, Y \mapsto Y$) for an integer $k$, and then reducing. Then $\langle X, Y \,|\, R' \rangle$ is also a presentation coming from a $(1,1)$ Heegaard diagram, and $P_{R'}(t, q) = P_R(t, q)$.*

2. *Let $R''$ be the relator obtained by applying the transformation $\tau: X \mapsto X, Y \mapsto \overline{Y}$. Then $\langle X, Y \,|\, R'' \rangle$ is also a presentation coming from a $(1,1)$ Heegaard diagram, and $P_{R''}(t, q) = P_R(t^{-1}, q^{-1})$.*

*Proof.* The difference between the two relators $R$ and $R'$ is the choice of the arc $t_{\alpha}$ on $T^2 - \alpha$. We consider only the case where $k = 1$ because $l_k = l_1^k$ (resp. $r_k = r_1^k$). Let $t_{\alpha'}$ be the arc obtained by handlesliding $t_{\alpha}$ across $\alpha$ (with orientation), see Figure [\[knot3_1\]](#knot3_1){reference-type="ref" reference="knot3_1"} for an example on the trefoil knot. Then the intersection points of $t_{\alpha}$ and $\beta$ are in one-to-one correspondence with the intersection points of $t_{\alpha'}$ and $\beta$; and each intersection point of $\alpha$ and $\beta$ corresponds to two intersection points of $\beta$ with $\alpha$ and $t_{\alpha'}$, which are consecutive in $\beta$. Thus, $R'$ is the image of $R$ under $l_1$ or $r_1$. It is clear that $R'$ also comes from the same Heegaard diagram as $R$, and $t_{\alpha}$ and $t_{\alpha'}$ are isotopic in the solid torus $V_{\alpha}$. Therefore, $R$ and $R'$ represent the same knot.

Since we require that $t_{\alpha}$ is oriented from $w$ to $z$, the second transformation can be achieved by switching the two basepoints $z$ and $w$. Thus, the two relators represent the knot $K$ and its mirror $K^*$, respectively. Therefore, $P_{R''}(t, q) = P_R(t^{-1}, q^{-1})$ follows directly. ◻

**Example 24**. A Heegaard diagram of the knot $10_{161}$ is illustrated in Figure [\[fig:knot10_161\]](#fig:knot10_161){reference-type="ref" reference="fig:knot10_161"} (as in [@GMM05]).
The curve $\beta$ gives the relator $$R(X, Y) = X \overline{Y}X \overline{Y}\overline{X}Y \overline{X}\overline{Y}X \overline{Y}^2 X \overline{Y}\overline{X}Y \overline{X}\overline{Y}X \overline{Y}X Y \overline{X}Y X Y \overline{X}Y.$$ Label each $X$-letter in order. Consider the primitive bigon $D: X \overline{Y}^2 X \overline{Y}\overline{X}Y \overline{X}$ from $x_5$ to $x_8$ for instance. It contains only one primitive bigon $D_1: X_6 \overline{Y}\overline{X}_7$ of height one, which is the second type in Table [1](#table:elementary_bigons){reference-type="ref" reference="table:elementary_bigons"}. The square domain $S$ is the third type in Table [2](#table:b_diff_word){reference-type="ref" reference="table:b_diff_word"}. Thus $$P(D) = P(S) + P(D_1) = (0, 1) + (0, 1) = (0, 2).$$ Removing primitive bigons with $n_w = 0$ yields $$X_2 \overline{Y}\ \overline{Y}\ Y \ Y \ \overline{Y}.$$ So that we obtain $M(x_2) = 0$. Moreover, the Alexander and Maslov gradings of the generators are listed in Table [3](#table:knot_10_161){reference-type="ref" reference="table:knot_10_161"}, and the Poincaré polynomial is $$P_R(t,q) = t^{-3} + (1 + q)t^{-2} + 2qt^{-1} + 3q^2 + 2q^3t + (q^4 + q^5)t^2 + q^6t^3.$$ $x_1$ $x_2$ $x_3$ $x_4$ $x_5$ $x_6$ $x_7$ $x_8$ $x_9$ $x_{10}$ $x_{11}$ $x_{12}$ $x_{13}$ ----- ------- ------- ------- ------- ------- ------- ------- ------- ------- ---------- ---------- ---------- ---------- $F$ -2 -3 -2 -1 0 0 1 2 3 2 1 0 -1 $M$ 0 0 1 1 2 2 3 5 6 4 3 2 1 : The Alexander and Maslov gradings of the knot Floer homology of knot $10_{161}$. **Example 25**. Figure [\[fig:knotD6\]](#fig:knotD6){reference-type="ref" reference="fig:knotD6"} shows a Heegaard diagram of the knot $D_+(T_{2,3}, 6)$ (the $6$-twisted positive Whitehead double of the right-handed trefoil), as in [@HO05]. The relator is $$X \overline{Y}\overline{X}^3 Y X^3 Y \overline{X}^3 \overline{Y}X^2 \overline{Y}\overline{X}^3 Y X^3 Y \overline{X}^3 \overline{Y}X^4.$$ Label each $X$-letter in order. The corresponding word of the shaded domain $D$ in Figure [\[fig:knotD6\]](#fig:knotD6){reference-type="ref" reference="fig:knotD6"} is $$\overline{X}^3 \overline{Y}X^2 \overline{Y}\overline{X}^3 Y X^3 Y \overline{X}^3 \overline{Y}X^4,$$ which is a primitive bigon from $x_{8}$ to $x_{25}$. Determine $P(D)$ directly from the word as follows: two height $-1$ points $x_{9}$ and $x_{24}$ are connected by elementary bigons: $$\begin{aligned} D_1: ~~ & \overline{X}^2 \overline{Y}X^2, \quad & (x_{9} \rightarrow x_{12}); \\ D_2: ~~ & X \overline{Y}\overline{X}, & (x_{12} \rightarrow x_{13} ); \\ D_3: ~~ & \overline{X}^3 Y X^3, & (x_{13} \rightarrow x_{18} ); \\ D_4: ~~ & X Y \overline{X}, & (x_{18} \rightarrow x_{19} ); \\ D_5: ~~ & \overline{X}^3 \overline{Y}X^3, & (x_{19} \rightarrow x_{24} ). \end{aligned}$$ Note that $D_1$ and $D_5$ have the same orientation as $D$, thus $$\begin{aligned} P(D) & = P(S) + \sum_{i=1}^{5} P(D_i) \\ & = (-1, -1) + (-1, 0) + (0, 1) + (1, 0) + (0, -1) + (-1, 0) \\ & = (-2, -1), \end{aligned}$$ which is consistent with the diagram shown in Figure [\[fig:knotD6\]](#fig:knotD6){reference-type="ref" reference="fig:knotD6"}. The following steps are to find the unique $X$-letter that generates $\widehat{HF}(S^3, w)$: 1. Remove primitive bigons with $n_w = 0$, the resulting relator is $$X_1 \overline{Y}\ Y \overline{X}_8 \ \overline{Y}\ Y \ X_{25}.$$ 2. 
Note that the points $x_1$ and $x_8$ are connected by three primitive bigons in the original relator $R$: $$\begin{aligned} E_1: ~~ & X \overline{Y}\overline{X}, \quad & (x_{1} \rightarrow x_{2}); \\ E_2: ~~ & \overline{X}^3 Y X^3, & (x_{2} \rightarrow x_{7} ); \\ E_3: ~~ & X Y \overline{X}, & (x_{7} \rightarrow x_{8} ). \end{aligned}$$ So that the $w$-grading shift is $$w(x_1) - w(x_8) = w(x_1) - w(x_2) + w(x_2) - w(x_7) + w(x_7) - w(x_8) = 1 + 0 - 1 = 0.$$ On the other hand, $w(x_8) - w(x_{25}) = -1$, since the two points $x_8$ and $x_{25}$ are connected by the bigon $D$ in above. Therefore, the subword $X_1 \overline{Y}\ Y \overline{X}_8$ is removed in the second step, thus giving $$\overline{Y}\ Y \ X_{25}.$$ 3. We obtain $M(x_{25}) = 0$. The Poincaré polynomial is $$P_R(t, q) = (4q^{-1} + 2q)t^{-1} + (4q^2 + 9) + (2q^3 + 4q)t.$$ # On general presentation of the fundamental group {#sec:alexander} In previous sections, we always assume that the group presentation comes from a $(1,1)$ Heegaard diagram. This ensures that the curve $\beta$ and its image in the covering transformation do not intersect, a fact that we use several times when determining the number of basepoints contained in each primitive bigon. We would like to know whether the Algorithm [Algorithm 21](#algo:main){reference-type="ref" reference="algo:main"} is feasible for presentations that are not from Heegaard diagrams. It is possible that Algorithm [Algorithm 21](#algo:main){reference-type="ref" reference="algo:main"} computes the knot Floer homology from a general presentation. **Definition 26**. Let $P=\langle X, Y \,|\, R(X, Y) \rangle$ be a presentation of a group $G$ with $R(X,Y)$ cyclically reduced. $P$ is *quasi-geometric* if 1. $\# \{X \,|\, X \in R(X,Y)\} - \#\{\overline{X}\,|\, \overline{X}\in R(X,Y)\} = +1$; 2. all subwords of the form $X Y^k \overline{X}$ (or $\overline{X}Y^k X$) must have $|k| = 1$; 3. there are no two subwords of the form $X Y^l X$ (or $\overline{X}\overline{Y}^l \overline{X}$) and $X Y^{l'} X$ (or $\overline{X}\overline{Y}^{l'} \overline{X}$) that satisfy $\left| l-l' \right| > 1$, where $l, l' \in \mathbb{Z}$. For a quasi-geometric presentation, Algorithm [Algorithm 21](#algo:main){reference-type="ref" reference="algo:main"} (Step 1-3) can be applied: the primitive disk words can be enumerated; their orientations can be determined by counting the positive and negative elementary words that it contains; and the functions $n_z$ and $n_w$ on primitive disk words can be formally computed. Hence we get a relatively bigraded chain complex with trivial differential. For a quasi-geometric presentation of the fundamental group of a $(1,1)$ knot $K$, the homology of the above chain complex may not be isomorphic to the knot Floer homology $\widehat{HFK}(S^3,K)$. For example, the following presentation $$\label{eqn:trefoil_presentation} \langle X, Y\,|\, YX^3 Y \overline{X}\overline{Y}\overline{X}\rangle$$ of the fundamental group of the trefoil knot $T_{2,3}$ is quasi-geometric. However, the homology of [\[eqn:trefoil_presentation\]](#eqn:trefoil_presentation){reference-type="eqref" reference="eqn:trefoil_presentation"} from Algorithm [Algorithm 21](#algo:main){reference-type="ref" reference="algo:main"} (Step 1-3) has rank 5, while the rank of its knot Floer homology is $3$. However, we will show that its Euler characteristic agrees with the (unnormalized) Alexander polynomial. 
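Before turning to the Alexander polynomial, we note that Steps 1-3 of the algorithm are purely combinatorial, so they can be carried out mechanically from the relator. The following short Python sketch is our own illustration (it is not an implementation from the paper): it enumerates the primitive disk words of Definition 7, with the relator encoded as a string in which lowercase letters stand for the inverses of $X$ and $Y$, and with the relator treated for simplicity as a linear word (cyclic subwords are ignored).

```python
def x_letters(relator):
    """Return the signs (+1 for X, -1 for x) of the X-letters of R, in order."""
    return [1 if c == "X" else -1 for c in relator if c in "Xx"]


def primitive_disk_words(relator):
    """Return the primitive disk words of R as pairs (i, j) of X-letter labels."""
    signs = x_letters(relator)
    words = []
    for i in range(len(signs)):
        phi = signs[i]
        for j in range(i + 1, len(signs)):
            phi += signs[j]
            if phi == 0:
                # phi changes by +-1 at each X-letter, so its first return to 0
                # forces opposite end signs: this is a primitive disk word in
                # the sense of Definition 7 (1-based labels x_{i+1}, x_{j+1}).
                words.append((i + 1, j + 1))
                break
    return words


if __name__ == "__main__":
    # The quasi-geometric trefoil presentation <X, Y | Y X^3 Y Xbar Ybar Xbar>
    # discussed above, with lowercase letters standing for inverses.
    print(primitive_disk_words("YXXXYxyx"))   # prints [(2, 5), (3, 4)]
```

For the trefoil relator above, the sketch returns the two linear primitive disk words $W_2^5$ and $W_3^4$; a full implementation of Algorithm 21 would additionally track the exponents of the $Y$-letters, the orientations, and cyclic subwords of the relator.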
The Alexander polynomial of a knot $K$ can be computed from a presentation of $\pi_1(S^3 \setminus K)$ as follows ([@Rol76]): 1. For a tame knot $K$ in $S^3$, the knot group $G \triangleq \pi_1(S^3 \setminus K)$ has a presentation of the form $$\langle Y, g_1, \cdots, g_p \,|\, r_1, \cdots, r_p \rangle,$$ where $Y \mapsto 1$ and $g_i \mapsto 0$ under the abelianization $G \rightarrow G/[G: G] \cong \mathbb{Z}$. 2. The commutator subgroup $C = [G: G]$ of $G$ is generated by all words of the form: $$Y^{-k} g_i^{\pm 1} Y^k.$$ Each relator $r_i$ is conjugated to a word $r'_i$, which is a product of words of this form. 3. Let $\widetilde{X}$ be the infinite cyclic cover of the knot complement $X = S^3 \setminus K$. Then the covering map $p: \widetilde{X} \rightarrow X$ gives a $\Lambda \triangleq \mathbb{Z}[t,t^{-1}]$-module isomorphism: $$p_*: H_1(\widetilde{X};\mathbb{Z}) \rightarrow C/[C:C].$$ So we can obtain a $\Lambda$-module presentation of $C/[C:C]$ by taking the image of $g_i$ under abelianization as the generator, say $\alpha_i$, and replacing $Y^{-k} g_i^{\pm 1} Y^k$ by $\pm t^k \alpha_i$ in the relator $r'_i$ (multiplication becomes addition). Therefore, the Alexander polynomial is given by $$H_1(\widetilde{X};\mathbb{Z}) \cong \Lambda/\left(\Delta_K(t)\right).$$ When $p = 1$, $H_1 (\widetilde{X})$ is a cyclic $\Lambda$-module generated by a generator $\alpha$. Rewrite the generator $g$ of $G$ by $a$ and label each $g$-letter in the relator $r'$ by $a_i$ (signed) in sequence. An alternative description of the last step is as follows: Since the relator $r'$ is a product of words form like $Y^{-k} g^{\pm 1} Y^{k}$, we define two gradings for each $a_i$ as $$A(a_i) = k, \quad (-1)^{S(a_i)} = \pm 1 \quad {\rm for } \quad Y^{-k} a_i^{\pm 1} Y^k.$$ Then the (unnormalized) Alexander polynomial is $$\sum_i (-1)^{S(a_i)} t^{A(a_i)}.$$ **Proposition 27**. *Let $\langle X, Y\,|\,R(X, Y) \rangle$ be a quasi-geometric presentation of a $(1, 1)$ knot $K$ in $S^3$. Algorithm [Algorithm 21](#algo:main){reference-type="ref" reference="algo:main"} (Step 1-3) gives a chain complex with two relative gradings $F$ and $M$, whose Euler characteristic satisfies $$\label{eq:Euler2Alexander} \sum_{i} (-1)^{M(x_i)} t^{F(x_i)} \overset{\circ}{=} \Delta_K(t).$$ Where $f(t)\overset{\circ}{=} g(t)$ if $f(t)=\pm t^c g(t)$ for some integer $c$.* *Proof.* By assumption, the relator $R(X, Y)$ maps to $X + k Y$ for some integer $k$ under abelianization. Let $a = Y^kX$ and $R'(a, Y)$ be the reduced relator of $R(Y^{-k}a, Y)$. Consider the presentation $$\langle Y, a \,|\, R'(a, Y) \rangle.$$ It is easy to see that $Y \mapsto 1$ and $a \mapsto 0$ under abelianization, so that we can do the above procedure. Label each $a$-letter and the origin $X$-letter in sequence, and require that $a_i$ corresponds to the $i$-th $X$-letter. Claim that $$\label{eq:2grdings_are_agree} F(x_i) - F(x_j) = A(a_i) - A(a_j), \quad M(x_i) - M(x_j) \equiv S(a_i) - S(a_j) \pmod{2}.$$ In fact, it is sufficient to show that the two equations hold in the case where $x_i, x_j$ are two endpoints of a primitive disk word. Let $W_1^n$ be a primitive disk word. By definition, we have $S(a_1) - S(a_n) \equiv 1 \pmod{2}$. On the other hand, $$M(x_1) - M(x_n) \equiv 1 - 2n_w(W_1^n) \equiv 1 \pmod{2}.$$ So the second equation is automatically established. For the first equation, we proved by induction on the length of the primitive disk word $W_1^n$. 
For the four elementary bigons, see $X_1 Y \overline{X}_2$ as example, we have $F(x_1) - F(x_2) = 0 - (-1) = 1$; on the other hand, the corresponding subword in $R'(a, Y)$ is $Y^{-k} a_1 Y a^{-1}_2 Y^{k}$, thus $A(a_1) - A(a_2) = 1$ by definition. In general, we see a upward positive disk word $W_1^n = X_1 Y^l X_2 \cdots \overline{X}_{n-1} \overline{Y}^{l'} \overline{X}_n$ as example, where $|l - l'| \leq 1$. By Step [\[step_basepoint\]](#step_basepoint){reference-type="ref" reference="step_basepoint"} and [\[eqn:alexander_relative\]](#eqn:alexander_relative){reference-type="eqref" reference="eqn:alexander_relative"}, we have $$\begin{aligned} F(x_1) - F(x_n) & = n_z(S) + \sum_{i=1}^d n_z(W_{k_i}^{k_{i+1}}) - n_w(S) - \sum_{i=1}^d n_w(W_{k_i}^{k_{i+1}}) \\ & = n_z(S) - n_w(S) + \sum_{i=1}^d F(x_{k_i}) - F(x_{k_{i+1}}) \\ & = n_z(S) - n_w(S) + F(x_2) - F(x_{n-1}). \end{aligned}$$ On the other hand, the image of $W_1^n$ in the relator $R'(a, Y)$ is $$Y^{-k}a_1 Y^{l-k} a_2 \cdots a^{-1}_{n-1} Y^{k-l'} a^{-1}_n Y^{k}.$$ Thus $$\begin{aligned} A(a_1) - A(a_n) & = A(a_1) - A(a_2) + A(a_{n-1}) - A(a_n) + A(a_2) - A(a_{n-1}) \\ & = l - l' + A(a_2) - A(a_{n-1}). \end{aligned}$$ By the inductive hypothesis, we only need to show that $$n_z(S) - n_w(S) = l - l'.$$ This is the direct from the definition of $P(S)$ in Step [\[step_basepoint_key\]](#step_basepoint_key){reference-type="ref" reference="step_basepoint_key"}. Therefore, from the equations [\[eq:2grdings_are_agree\]](#eq:2grdings_are_agree){reference-type="eqref" reference="eq:2grdings_are_agree"}, we obtain $$\sum_i (-1)^{M(x_i)} t^{F(x_i)} = \sum_i (-1)^{S(a_i)} t^{A(a_i) + c},$$ for some integer $c$. ◻ The presentation [\[eqn:trefoil_presentation\]](#eqn:trefoil_presentation){reference-type="eqref" reference="eqn:trefoil_presentation"} does not come from a $(1,1)$ Heegaard diagram of the trefoil knot. Algorithm [Algorithm 21](#algo:main){reference-type="ref" reference="algo:main"} fails on Step [\[step_absolute\]](#step_absolute){reference-type="ref" reference="step_absolute"} for [\[eqn:trefoil_presentation\]](#eqn:trefoil_presentation){reference-type="eqref" reference="eqn:trefoil_presentation"}: label the $X$-letters in order, and the resulting relator after removing primitive bigons with $n_w = 0$ is $X_2 X_3 Y \overline{X}_4 \overline{Y}$, and then we cannot remove any letter since $$w(x_4) - w(x_3) = w(x_4) - w(x_2) = 1.$$ **Definition 28**. Let $P=\langle X, Y \,|\, R(X, Y) \rangle$ be a quasi-geometric presentation of a group $G$. $P$ is *pseudo-geometric* if 1. The relative Alexander grading can be normalized to satisfy [\[eqn:alexander_grading_absolute\]](#eqn:alexander_grading_absolute){reference-type="eqref" reference="eqn:alexander_grading_absolute"}. 2. The relator obtained by the process of deleting subwords in Algorithm [Algorithm 21](#algo:main){reference-type="ref" reference="algo:main"} Step [\[step_absolute\]](#step_absolute){reference-type="ref" reference="step_absolute"} has exactly one $X$-letter. We remark that [\[eqn:alexander_grading_absolute\]](#eqn:alexander_grading_absolute){reference-type="eqref" reference="eqn:alexander_grading_absolute"} automatically holds for knot groups by Proposition [Proposition 27](#prop:Euler2Alexander){reference-type="ref" reference="prop:Euler2Alexander"}, but not for general groups. 
For example, the presentation $\langle X, Y \,|\, YX^2 \overline{Y}XY \overline{X}^2 \rangle$ is quasi-geometric, but [\[eqn:alexander_grading_absolute\]](#eqn:alexander_grading_absolute){reference-type="eqref" reference="eqn:alexander_grading_absolute"} does not hold.

For a pseudo-geometric group presentation $G = \langle X, Y\,|\,R(X, Y) \rangle$, Algorithm [Algorithm 21](#algo:main){reference-type="ref" reference="algo:main"} applies to give a chain complex with trivial differential and two absolute gradings $F$ and $M$. Denote its homology by $H(R)$; its Poincaré polynomial is $$\label{eqn:poincare_poly_pseudo_geometric} P_R(t, q) = \sum_{i} t^{F(x_i)} q^{M(x_i)}.$$

**Corollary 29**. *Let $\langle X, Y\,|\,R(X, Y) \rangle$ be a pseudo-geometric presentation of a $(1, 1)$ knot $K$ in $S^3$. Then its Poincaré polynomial [\[eqn:poincare_poly_pseudo_geometric\]](#eqn:poincare_poly_pseudo_geometric){reference-type="eqref" reference="eqn:poincare_poly_pseudo_geometric"} satisfies $$P_R(t, -1) = \Delta_K(t).$$*

We have no example of a pseudo-geometric presentation of $\pi_1(S^3 \setminus K)$ for a $(1,1)$ knot that does not come from a $(1,1)$ Heegaard diagram.

**Question 3**. *Let $K$ be a $(1,1)$ knot in $S^3$. Does every pseudo-geometric presentation of $\pi_1(S^3 \setminus K)$ come from a $(1,1)$ Heegaard diagram? Does Algorithm [Algorithm 21](#algo:main){reference-type="ref" reference="algo:main"} always compute the knot Floer homology $\widehat{HFK}(S^3,K)$ from a pseudo-geometric presentation of $\pi_1(S^3 \setminus K)$?*

Algorithm [Algorithm 21](#algo:main){reference-type="ref" reference="algo:main"} does not a priori require the group to be the fundamental group of a $(1,1)$ knot. We can apply Algorithm [Algorithm 21](#algo:main){reference-type="ref" reference="algo:main"} to any pseudo-geometric two-generator one-relator group presentation. For example, the presentation $G=\langle X, Y\,|\,X^2 Y \overline{X}\overline{Y}\rangle$ is pseudo-geometric, but it is not a knot group. Applying Algorithm [Algorithm 21](#algo:main){reference-type="ref" reference="algo:main"} yields $$P_R(t, q) = 2t + q^{-1}.$$ We do not know whether $P_R(t,q)$ is an invariant of $G$, nor what properties it captures.

Matthias Aschenbrenner, Stefan Friedl, and Henry Wilton. . EMS Series of Lectures in Mathematics. European Mathematical Society (EMS), Zürich, 2015.

Helmut Doll. A generalized bridge number for links in $3$-manifolds. , 294(4):701--717, 1992.

Andreas Floer. An instanton-invariant for 3-manifolds. , 118(2):215--240, 1988.

Hirozumi Fujii. Geometric indices and the Alexander polynomial of a knot. , 124(9):2923--2933, 1996.

Hiroshi Goda, Hiroshi Matsuda, and Takayuki Morifuji. Knot Floer homology of $(1,1)$-knots. , 112:197--214, 2005.

Matthew Hedden and Philip Ording. The Ozsváth-Szabó and Rasmussen concordance invariants are not equal. , 130(2):441--453, 2008.

Michael Hutchings. An index inequality for embedded pseudoholomorphic curves in symplectizations. , 4(4):313--361, 2002.

Michael Hutchings and Clifford Henry Taubes. Gluing pseudoholomorphic curves along branched covered cylinders. I. , 5(1):43--137, 2007.

Michael Hutchings and Clifford Henry Taubes. Gluing pseudoholomorphic curves along branched covered cylinders. II. , 7(1):29--133, 2009.

Peter B. Kronheimer and Tomasz S. Mrowka. Knot homology groups from instantons. , 4(4):835--918, 2011.

Çağatay Kutluhan, Yi-Jen Lee, and Clifford Taubes. , IV: The Seiberg-Witten Floer homology and ech correspondence. , 24(7):3219--3469, 2020.
Çağatay Kutluhan, Yi-Jen Lee, and Clifford Henry Taubes. , III: holomorphic curves and the differential for the ech/Heegaard Floer correspondence. , 24(6):3013--3218, 2020. Çağatay Kutluhan, Yi-Jen Lee, and Clifford Henry Taubes. , I: Heegaard Floer homology and Seiberg-Witten Floer homology. , 24(6):2829--2854, 2020. Çağatay Kutluhan, Yi-Jen Lee, and Clifford Henry Taubes. , II: Reeb orbits and holomorphic curves for the ech/Heegaard Floer correspondence. , 24(6):2855--3012, 2020. Çağatay Kutluhan, Yi-Jen Lee, and Clifford Henry Taubes. , V: Seiberg-Witten Floer homology and handle additions. , 24(7):3471--3748, 2020. Philip Ording. . ProQuest LLC, Ann Arbor, MI, 2006. Thesis (Ph.D.)--Columbia University. Philip Ording. Constructing doubly-pointed Heegaard diagrams compatible with $(1,1)$ knots. , 22(11):1350071, 26 pp, 2013. Peter Ozsváth and Zoltán Szabó. Absolutely graded Floer homologies and intersection forms for four-manifolds with boundary. , 173(2):179--261, 2003. Peter Ozsváth and Zoltán Szabó. Heegaard diagrams and holomorphic disks. In *Different Faces of Geometry*, volume 3 of *International Series of Numerical Mathematics*, pages 301--348. Kluwer/Plenum, New York, 2004. Peter Ozsváth and Zoltán Szabó. Holomorphic disks and knot invariants. , 186(1):58--116, 2004. Peter Ozsváth and Zoltán Szabó. Holomorphic disks and topological invariants for closed three-manifolds. , 159(3):1027--1158, 2004. Peter Ozsváth and Zoltán Szabó. Holomorphic disks and three-manifold invariants: properties and applications. , 159(3):1159--1245, 2004. Grisha Perelman. The entropy formula for the ricci flow and its geometric applications. , 2002. Grisha Perelman. Finite extinction time for the solutions to the ricci flow on certain three-manifolds. , 2003. Grisha Perelman. Ricci flow with surgery on three-manifolds. , 2003. Jacob Rasmussen. . ProQuest LLC, Ann Arbor, MI, 2003. Thesis (Ph.D.)--Harvard University. Dale Rolfsen. . Mathematics Lecture Series, No. 7. Publish or Perish, Inc., Berkeley, Calif., 1976. Clifford Henry Taubes. Embedded contact homology and Seiberg-Witten Floer cohomology I. , 14(5):2497--2581, 2010. Clifford Henry Taubes. Embedded contact homology and Seiberg-Witten Floer cohomology II. , 14(5):2583--2720, 2010. Clifford Henry Taubes. Embedded contact homology and Seiberg-Witten Floer cohomology III. , 14(5):2721--2817, 2010. Clifford Henry Taubes. Embedded contact homology and Seiberg-Witten Floer cohomology IV. , 14(5):2819--2960, 2010. Clifford Henry Taubes. Embedded contact homology and Seiberg-Witten Floer cohomology V. , 14(5):2961--3000, 2010. William P. Thurston. Three-dimensional manifolds, Kleinian groups and hyperbolic geometry. , 6(3):357--381, 1982. Paolo Ghiggini Vincent Colin and Ko Honda. The equivalence of Heegaard Floer homology and embedded contact homology III: from hat to plus. , 2012. Paolo Ghiggini Vincent Colin and Ko Honda. The equivalence of Heegaard Floer homology and embedded contact homology via open book decompositions I. , 2012. Paolo Ghiggini Vincent Colin and Ko Honda. The equivalence of Heegaard Floer homology and embedded contact homology via open book decompositions II. , 2012. Edward Witten. Monopoles and four-manifolds. , Volume 1, Number 6, 769--796, 1994. [^1]: This step reverses the orientation of the arc $t_{\alpha}$. To obtain the same relator $R$, we can reverse the orientation of $T^2$ and $\alpha$ without changing the orientation of $\beta$. 
Note that the resulting diagram $(-T^2, -\alpha, \beta, w, z)$ is compatible with the mirror image $K^*$ of the knot $K$. Although the knot Floer homology can distinguish chirality, the fundamental group cannot. Our algorithm yields the knot Floer homology of $K$ or $K^*$.
arxiv_math
{ "id": "2309.14058", "title": "Knot Floer homology and the fundamental group of $ (1,1) $ knots", "authors": "Jiajun Wang and Xiliu Yang", "categories": "math.GT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | In the Monge-Kantorovich transport problem, the transport cost is expressed in terms of transport maps or transport plans, which play crucial roles there. A variant of the Monge-Kantorovich problem is the ramified (branching) transport problem that models branching transport systems via transport paths. In this article, we showed that any cycle-free transport path between two atomic measures can be decomposed into the sum of a map-compatible path and a plan-compatible path. Moreover, we showed that each stair-shaped transport path can be decomposed into the difference of two map-compatible transport paths. address: | Department of Mathematics\ University of California at Davis\ Davis, CA, 95616, USA author: - Qinglan Xia, Haotian Sun title: Map-compatible Decomposition of transport paths. --- # Introduction The aim of this article is to provide a map-compatible decomposition of transport paths. In the well-known Monge-Kantorovich transport problem (see [@villani; @Luigi; @santambrogio] and references therein), the transport cost is expressed in terms of transport maps or transport plans. The existence of optimal transport maps, especially the Brenier map in the case of quadratic cost, leads to numerous applications of optimal transportation theory in PDEs, Probability theory, Machine learning, etc. A variant of the Monge-Kantorovich transport problem is ramified (also called branched) optimal transportation (see [@xia1; @book; @xia2015motivations] and references therein). Through the lens of economy of scales, ramified optimal transportation aims at studying the branching structures that appeared in many living or non-living transport systems. In contrast to the classical Monge-Kantorovich transport problems, where the transport cost relies on transport maps and plans, the transport cost in the ramified transport problem is assessed across the entire branching transport system, referred to as transport paths. Since transport maps/plans only utilize information from the initial/target measures, knowing only transport maps/plans is insufficient for describing the transport cost that appears in ramified optimal transportation problem. In general, two transport paths (e.g. a "Y-shaped\" and a "V-shaped\" path) may have different transportation costs while sharing the same transport map/plan. Nevertheless, motivated by the significance of transport maps in the context of the Monge-Kantorovich problem, when a transport path is given, one may wonder if there exists a hidden transport map or plan that is compatible with this specific transport path. This compatible transport map/plan tells one how the initial measure is distributed to the target measure via the given transport path. For simplicity, this article only considers the case of atomic measures, deferring the exploration of other scenarios for future endeavors. We want to provide a decomposition of transport paths such that each component in the decomposition is compatible with some transport map or transport plan. Roughly speaking, our main results are : - **Theorem [Theorem 17](#thm: decomposition){reference-type="ref" reference="thm: decomposition"}:** Every cycle-free [^1] transport path $T$ can be decomposed as a sum of subcurrents $T=T_0+T_1+\cdots+T_N$ such that each $T_1,T_2,\cdots, T_N$ has a single target and $T_0$ has at most $\binom{N}{2}$ sources[^2]. 
- **Theorem [Theorem 22](#thm: compatability){reference-type="ref" reference="thm: compatability"}:** Every cycle-free transport path $T$ can be decomposed as a sum of subcurrents $T=T_\varphi+T_\pi$ such that $T_\varphi$ is compatible with some transport map $\varphi$ and $T_\pi$ is compatible with some transport plan $\pi$. - **Theorem [Theorem 30](#thm: stair shaped induced transport maps){reference-type="ref" reference="thm: stair shaped induced transport maps"}:** Every stair-shaped transport path $T$ can be decomposed as a sum of subcurrents $T=T_1+T_2$ such that both $T_1$ and $-T_2$ are compatible with some transport maps. This article is organized as follows: We first recall in $\S 2$ some related concepts in geometric measure theory, the classical Monge-Kantorovich transport problem, and the ramified optimal transport problem. In particular, the *good decomposition* (i.e., Smirnov decomposition) of acyclic normal $1$-currents. In general, the family of atoms (i.e., supporting curves) of a good decomposition is not necessarily linearly independent. This fact brings a non-unique representation of vanishing currents and causes a technical obstacle for the proof of Theorem [Theorem 17](#thm: decomposition){reference-type="ref" reference="thm: decomposition"}. To overcome this, we generalize the notion of "good decomposition\" to "better decomposition\" (Definition [Definition 2](#def: better_decom){reference-type="ref" reference="def: better_decom"}) of transport paths in $\S 3$. A better decomposition $\eta$ of a transport path $T$ prohibits combinations of any four supporting curves of $\eta$ to form a non-trivial cycle on the support of $T$. We showed in Theorem [Theorem 4](#thm: GoodS_ij){reference-type="ref" reference="thm: GoodS_ij"} that any good decomposition of a transport path has a better decomposition that is absolutely continuous with respect to the original good decomposition. In $\S 4$, we introduce the concept of cycle-free transport paths, which are transport paths with no non-trivial cycles on[^3] them. Then, we use the "better decomposition\" achieved in Theorem [Theorem 4](#thm: GoodS_ij){reference-type="ref" reference="thm: GoodS_ij"} to give a decomposition of cycle-free transport paths, described in Theorem [Theorem 17](#thm: decomposition){reference-type="ref" reference="thm: decomposition"}. In $\S 5$, we consider the concept of "compatibility\" between transport paths and transport plans/maps. This concept was first introduced in [@xia1 Definition 7.1] for cycle-free transport paths to describe whether a given transport plan is practically possible for transportation along the given transport path. We first generalize this concept, in a more general setting, to the compatibility between transport paths and transport plans/maps. Then, using Theorem [Theorem 17](#thm: decomposition){reference-type="ref" reference="thm: decomposition"}, we decompose a cycle-free transport path into the sum of a map-compatible path and a plan-compatible path, which gives Theorem [Theorem 22](#thm: compatability){reference-type="ref" reference="thm: compatability"}. In $\S 6$, we proceed to study stair-shaped transport paths. 
We first show in Theorem [Theorem 26](#thm: stairshaped-matrix){reference-type="ref" reference="thm: stairshaped-matrix"} that each matrix[^4] with non-negative entries can be transformed into a stair-shaped matrix, and in Algorithm [Algorithm 27](#rem: Stair shaped Algorithm){reference-type="ref" reference="rem: Stair shaped Algorithm"}, we provide an algorithm for calculating the stair-shaped matrix. A transport path is called stair-shaped if it has a good decomposition that is represented by a stair-shaped matrix. A stair-shaped transport path is not necessarily cycle-free, but it still has a better decomposition. Our main result for the section is Theorem [Theorem 30](#thm: stair shaped induced transport maps){reference-type="ref" reference="thm: stair shaped induced transport maps"}, which says that any stair-shaped transport path can be decomposed into the difference of two map-compatible transport paths. Note that some cycle-free transport paths are also stair-shaped. They can be decomposed not only as the sum of a map-compatible path and a plan-compatible path by Theorem [Theorem 22](#thm: compatability){reference-type="ref" reference="thm: compatability"}, but also as the sum of two map-compatible transport paths by Theorem [Theorem 30](#thm: stair shaped induced transport maps){reference-type="ref" reference="thm: stair shaped induced transport maps"}. We further investigate some sufficient conditions under which cycle-free transport paths are stair-shaped. An illustrating example is provided at the end. # Preliminaries ## Basic concepts in geometric measure theory   We first recall some related terminologies from geometric measure theory [@Simon; @Lin]. Suppose $U$ is an open set in $\mathbb{R}^{m}$ and $k \le m$, the set of all $C^\infty$ $k$-forms with compact support in $U$ is denoted by $\mathcal{D}^k(U).$ The dual space of $\mathcal{D}^k(U)$, $\mathcal{D}_k(U)$, is called the space of *$k$-currents*. The mass of $T \in \mathcal{D}_k(U)$ is defined by $$\mathbf{M}(T)= \sup \{ T(\omega) \,:\, \|\omega\| \le 1, \omega \in \mathcal{D}^k(U)\}.$$ The *boundary* of a current $T\in \mathcal{D}_k(U), \partial T \in \mathcal{D}_{k-1}(U)$ is defined by $$\partial T(\omega) = T(d\omega)\ \mathrm{for}\ \omega \in \mathcal{D}^{k-1}(U).$$ A set $M \subset \mathbb{R}^{m}$ is said to be countably *$k$-rectifiable* if $$M \subset M_0 \cup \left( \bigcup_{j=1}^\infty F_j(\mathbb{R}^k)\right),$$ where $\mathcal{H}^k(M_0)=0$ under the $k$-dimensional Hausdorff measure $\mathcal{H}^k$, and $F_j: \mathbb{R}^k \to \mathbb{R}^{m}$ are Lipschitz functions for $j=1,2,\cdots.$ For any $T\in \mathcal{D}_k(U)$, we say that $T$ is a *rectifiable $k$-current* if for each $\omega \in \mathcal{D}^k(U)$, $$T(\omega) = \int_M \langle \omega(x) , \xi(x) \rangle \theta(x) \,d\mathcal{H}^k(x),$$ where $M$ is an $\mathcal{H}^k$-measurable countably $k$-rectifiable subset of $U$, $\theta(x)$ is a locally $\mathcal{H}^k$-integrable positive function, and $\xi: M \to \Lambda_k(\mathbb{R}^{m})$ is a $\mathcal{H}^k$-measurable function such that for $\mathcal{H}^k$-a.e. $x\in M$, $\xi(x) = \tau_1 \wedge \ldots \wedge \tau_k$, where $\tau_1 \ldots \tau_k$ is an orthonormal basis for the approximate tangent space $T_xM$. We will denote $T$ by $\underline{\underline{\tau}}(M,\theta,\xi)$. When $T$ is a rectifiable $k$-current, its mass $$\mathbf{M}(T) = \int_M \theta(x) \,d\mathcal{H}^k(x).$$ A current $T\in \mathcal{D}_k(U)$ is said to be *normal* if $\mathbf{M}(T) + \mathbf{M}(\partial T) < \infty$. 
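To fix the notation just introduced, here is a standard elementary illustration (added here for convenience; it is not taken from the original text). For two distinct points $a, b \in \mathbb{R}^m$ and a constant multiplicity $\theta_0>0$, the oriented segment from $a$ to $b$ defines the rectifiable $1$-current $T=\underline{\underline{\tau}}\left([a,b],\, \theta_0,\, \frac{b-a}{|b-a|}\right)$, for which $$\mathbf{M}(T)=\theta_0\,|b-a|, \qquad \partial T=\theta_0\left(\delta_b-\delta_a\right), \qquad \mathbf{M}(\partial T)=2\theta_0,$$ so $T$ is a normal $1$-current.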
In [@Paolini], Paolini and Stepanov introduced the concept of subcurrents: For any $T, S\in \mathcal{D}_k(U)$, $S$ is called a *subcurrent* of $T$ if $$\mathbf{M}(T-S) + \mathbf{M}(S) = \mathbf{M}(T).$$ A normal current $T\in \mathcal{D}_k(\mathbb{R}^m)$ is *acyclic* if there is no non-trivial subcurrent $S$ of $T$ such that $\partial S =0$. In [@smirnov], Smirnov showed that every acyclic normal $1$-current can be written as the weighted average of simple Lipschitz curves in the following sense. Let $\Gamma$ be the space of $1$-Lipschitz curves $\gamma: [0,\infty) \to \mathbb{R}^m$, which are eventually constant. For $\gamma \in \Gamma$, we denote $$t_0(\gamma):=\sup \{t: \gamma \mbox{ is constant on } [0,t] \}, \ t_\infty(\gamma):=\inf \{t: \gamma \mbox{ is constant on } [t,\infty) \} ,$$ and $p_0(\gamma):= \gamma(0)$, $p_\infty(\gamma):= \gamma(\infty)=\lim_{t\to \infty} \gamma(t)$. A curve $\gamma \in \Gamma$ is simple if $\gamma(s)\not=\gamma(t)$ for every $t_0(\gamma) \le s < t \le t_\infty(\gamma)$. For each simple curve $\gamma \in \Gamma$, we may associate it with the following rectifiable $1$-current, $$\label{eqn: I_gamma} I_\gamma:= \underline{\underline{\tau}}\left( \mathrm{Im}(\gamma), \frac{\gamma'}{|\gamma'|}, 1 \right) ,$$ where $Im(\gamma)$ denotes the image of $\gamma$ in $\mathbb{R}^m$. **Definition 1**. Let $T$ be a normal 1-current in $\mathbb{R}^m$ and let $\eta$ be a finite positive measure on $\Gamma$ such that $$\label{eqn: T_eta} T=\int_\Gamma I_\gamma \,d\eta(\gamma)$$ in the sense that for every smooth compactly supported 1-form $\omega\in \mathcal{D}^1(\mathbb{R}^m)$, it holds that $$T(\omega)=\int_\Gamma I_\gamma(\omega) \,d\eta(\gamma).$$ We say that $\eta$ is a *good decomposition* of $T$ (see [@colombo], [@colombo2020], [@smirnov]) if $\eta$ is supported on non-constant, simple curves and satisfies the following equalities: - $\mathbf{M}(T)=\int_\Gamma \mathbf{M}(I_\gamma) d\eta(\gamma)=\int_\Gamma \mathcal{H}^1(Im(\gamma)) d\eta(\gamma)$; - $\mathbf{M}(\partial T)=\int_\Gamma \mathbf{M}(\partial I_\gamma) d\eta(\gamma)=2 \eta(\Gamma)$. Moreover, if $\eta$ is a good decomposition of $T$, the following statements hold [@colombo Proposition 3.6] : - $$\label{eqn: startingending measure} \mu^- = \int_\Gamma \delta_{\gamma (0)} \, d\eta(\gamma),\ \mu^+ = \int_\Gamma \delta_{\gamma (\infty)} \, d\eta(\gamma).$$ - If $T = \underline{\underline{\tau}}(M,\theta,\xi)$ is rectifiable, then $$\label{eqn: theta(x)} \theta(x) = \eta (\{\gamma \in \Gamma : x \in \mathrm{Im}(\gamma) \} )$$ for $\mathcal{H}^1$-a.e. $x \in M.$ - For every $\tilde{\eta} \leq \eta$, the representation $$\tilde{T}=\int_\Gamma I_\gamma d\tilde{\eta}(\gamma)$$ is a good decomposition of $\tilde{T}$. Moreover, if $T=\underline {\underline {\tau} }\left(M, \theta, \xi\right)$ is rectifiable, then $\tilde{T}$ can be written as $\tilde{T}=\underline {\underline {\tau} }(M, \tilde{\theta}, \xi )$ with $$\label{eqn: tilde_theta_x} \tilde{\theta}(x)\le \min\{\theta(x), \tilde{\eta}(\Gamma)\}$$ for $\mathcal{H}^1$-a.e. $x\in M$. In the following contexts, we adopt the notations: for any points $x,y \in \mathbb{R}^m$ and subset $A\subseteq \mathbb{R}^m$, denote $$\begin{aligned} \Gamma_x & = \{\gamma \in \Gamma : x \in \mathrm{Im}(\gamma) \}, \label{gamma_{x}}\\ \Gamma_{x,y}&=\{\gamma\in \Gamma: p_0(\gamma)=x, \ p_\infty(\gamma)=y\}, \label{gamma_{x,y}} \\ \Gamma_{A,y}&=\{\gamma\in \Gamma: p_0(\gamma)\in A, \ p_\infty(\gamma)=y\}. 
\label{gamma_{A,y}}\end{aligned}$$ ## Basic concepts in optimal transportation theory   We now recall some basic concepts in optimal transportation theory that are related to this article. Suppose $X$ is a convex compact subset of $\mathbb{R}^m$, and let the source $\mu^-$ and the target $\mu^+$ be two measures supported on $X$ of equal mass. - A map $\varphi: X\rightarrow X$ is called a transport map from $\mu^-$ to $\mu^+$ if the push-forward measure $\varphi_\# \mu^-=\mu^+$. Let $Map(\mu^-, \mu^+)$ be the set of all transport maps from $\mu^-$ to $\mu^+$. - A Borel measure $\pi$ on $X\times X$ is called a transport plan from $\mu^-$ to $\mu^+$ if $(p_1)_\# \pi= \mu^-$ and $(p_2)_\#\pi=\mu^+$, where $p_1,p_2$ are respectively the first and the second orthogonal projection maps from $X\times X$ to $X$. Let $Plan(\mu^-, \mu^+)$ be the set of all transport plans from $\mu^-$ to $\mu^+$. - A rectifiable 1-current $T$ is called a transport path from $\mu^-$ to $\mu^+$ if its boundary $\partial T=\mu^+-\mu^-$. Let $Path(\mu^-, \mu^+)$ be the set of all transport paths from $\mu^-$ to $\mu^+$. Let $C(x,y)$ be a non-negative Borel function, called the cost function, on $X\times X$. For any transport map $\varphi\in Map(\mu^-, \mu^+)$, the transport $C$-cost of $\varphi$ is $$I_C(\varphi):=\int_X C(x, \varphi(x))d\mu^-(x).$$ Similarly, for any transport plan $\pi\in Plan(\mu^-, \mu^+)$, the transport $C$-cost of $\pi$ is $$J_C(\pi):=\int_{X\times X} C(x,y) d\pi(x,y).$$ For any transport path $T= \underline{\underline{\tau}}(M,\theta,\xi)\in Path(\mu^-, \mu^+)$, and any $0 \le \alpha < 1$, the transport $\mathbf{M}_\alpha$-cost of $T$ is $$\mathbf{M}_\alpha(T) := \int_{M} \theta(x)^\alpha \,d\mathcal{H}^1.$$ The corresponding optimal transport problems are: - **Monge:** Minimize $I_C(\varphi)$ among all transport maps $\varphi\in Map(\mu^-,\mu^+)$; - **Kantorovich:** Minimize $J_C(\pi)$ among all transport plans $\pi\in Plan(\mu^-,\mu^+)$; - **Ramified/Branched:** Minimize $\mathbf{M}_\alpha(T)$ among all transport paths $T\in Path(\mu^-,\mu^+)$. For theoretical results such as existence/regularity and their applications, we refer to [@villani; @Luigi; @santambrogio] for Monge-Kantorovich transport theory and [@xia1; @book; @xia2015motivations] for ramified/branched transportation. In this article, we mainly focus on transportation between atomic measures. Let $$\label{eqn: measures} \mu^- = \sum_{i=1}^M m'_i \delta_{x_i} \text{ and }\mu^+ = \sum_{j=1}^N m_j \delta_{y_j} \text{ with } \sum_{i=1}^M m'_i = \sum_{j=1}^N m_j < \infty$$ be two finite atomic measures on $X$ of equal mass with $M,N \in \mathbb{N} \cup \{\infty\}$. In this case, the above concepts have simplified forms: - A transport map $\varphi\in Map(\mu^-,\mu^+)$ corresponds to a map $\varphi: \{1,2,\cdots, M\}\rightarrow \{1,2,\cdots, N\}$ such that for each $j=1,2,\cdots, N$, $$m_j=\sum_{i\in \varphi^{-1}(\{j\})}m_i'.$$ The corresponding transport cost is $$I_C(\varphi)=\sum_{i=1}^M C(x_i, y_{\varphi(i)})m_i'.$$ - A transport plan $\pi\in Plan(\mu^-,\mu^+)$ corresponds to an $M\times N$ matrix $\pi=[\pi_{ij}]$ such that for each $i, j$, it holds that $$\sum_i \pi_{ij}=m_j \text{ and } \sum_j \pi_{ij}=m_i'.$$ The corresponding transport cost is $$J_C(\pi)=\sum_{i=1}^M\sum_{j=1}^N c_{ij}\pi_{ij}$$ where $c_{ij}=C(x_i,y_j)$. 
- A transport path $T\in Path(\mu^-,\mu^+)$ corresponds to a weighted directed graph $T$ consisting of a vertex set $V$, a directed edge set $E$ and a weight function $w: E \rightarrow (0, +\infty)$ such that $\{x_1,x_2,\ldots,x_M \}\cup\{y_1,y_2,\ldots,y_N \} \subseteq V$ and for any vertex $v\in V$, there is a balance equation: $$\sum_{e\in E, e^-=v} w(e) \ = \sum_{e\in E, e^+=v} w(e) \ +\ \left\{ \begin{array}{ll} \ \ m'_i &\mbox{if } v=x_i \mbox{ for some } i=1,\ldots,M \\ -m_j &\mbox{if }v=y_j \mbox{ for some } j=1,\ldots,N \\ \ \ 0 &\mbox{otherwise,} \end{array} \right.$$ where $e^-$ and $e^+$ denote the starting and ending point of the edge $e\in E$. The corresponding transport $\mathbf{M}_{\alpha}$-cost of $T$ is $$\mathbf{M}_{\alpha}(T)=\sum_{e\in E}w(e)^\alpha length(e)$$ where the length $length(e)$ of the edge $e$ equals $\mathcal{H}^1(e)$. # Better decomposition of acyclic transport paths Let $\mu^-$ and $\mu^+$ be two atomic measures as given in ([\[eqn: measures\]](#eqn: measures){reference-type="ref" reference="eqn: measures"}), let $T$ be an acyclic transport path from $\mu^-$ to $\mu^+$, and let $\eta$ be a good decomposition (i.e., Smirnov decomposition) of $T$. Observe that, as shown in the following example, with respect to the good decomposition $\eta$, it is possible that the family $$\{I_\gamma:\; \eta(\{\gamma\})>0\}$$ is linearly dependent. **Example 1**. *Let $T$ be a transport path from $\mu^- = 4\delta_{x_1} + 2\delta_{x_2}$ to $\mu^+ = 3\delta_{y_1} + 3\delta_{y_2}$, as shown in the following figure.* *For each $(i,j)$, let $\gamma_{x_i, y_j}$ be the corresponding curve from $x_i$ to $y_j$ on $T$:* *Then $$\eta=2\delta_{\gamma_{x_1,y_1}}+2\delta_{\gamma_{x_1,y_2}}+\delta_{\gamma_{x_2,y_1}}+\delta_{\gamma_{x_2,y_2}}$$ is a good decomposition of $T$. But $$I_{\gamma_{x_1,y_1}}-I_{\gamma_{x_1,y_2}}-I_{\gamma_{x_2,y_1}}+I_{\gamma_{x_2,y_2}}$$ is the zero 1-current.* The linear dependence of the family $\{I_\gamma: \eta(\{\gamma\})>0\}$ leads to a non-unique representation of vanishing currents and causes an obstacle later in the proof of Theorem [Theorem 17](#thm: decomposition){reference-type="ref" reference="thm: decomposition"}. To overcome this, we introduce the concept of "better decomposition\" of $T$ as follows. For each $i=1,2,\cdots, M$, $j=1,2,\cdots, N$, as given in ([\[gamma\_{x,y}\]](#gamma_{x,y}){reference-type="ref" reference="gamma_{x,y}"}), let $\Gamma_{x_i, y_j}$ denote the set of all $1$-Lipschitz curves in $\Gamma$ from $x_i$ to $y_j$. Also, for any finite positive measure $\eta$ on $\Gamma$, denote $$\label{eqn: S_ij} S_{i,j}(\eta):= \begin{cases} \frac{1}{\eta(\Gamma_{x_i,y_j})} \int_{\Gamma_{x_i,y_j} } I_\gamma d\eta, & \text{ if }\eta(\Gamma_{x_i,y_j})>0\\ 0, &\text{ if }\eta(\Gamma_{x_i,y_j})=0. \end{cases}$$ **Definition 2**. Let $T$ be a transport path from $\mu^-$ to $\mu^+$ where $\mu^-$ and $\mu^+$ are given in ([\[eqn: measures\]](#eqn: measures){reference-type="ref" reference="eqn: measures"}). Suppose $\eta$ is a good decomposition of $T$. We say that $\eta$ is a *better decomposition* of $T$ if for any pairs $1\le i_1< i_2\le M$ and $1\le j_1<j_2\leq N$, $$S_{i_1,j_1} (\eta) - S_{i_1,j_2} (\eta)- S_{i_2,j_1} (\eta) + S_{i_2,j_2} (\eta)=0$$ implies that $$\eta(\Gamma_{ x_{i_1},y_{j_1} }) =\eta(\Gamma_{x_{i_1},y_{j_2}})=\eta(\Gamma_{x_{i_2},y_{j_1}}) = \eta(\Gamma_{x_{i_2},y_{j_2}})=0.$$ **Example 2**. 
*In Example [Example 1](#example: example 1){reference-type="ref" reference="example: example 1"}, $$\eta=2\delta_{\gamma_{x_1,y_1}}+2\delta_{\gamma_{x_1,y_2}}+\delta_{\gamma_{x_2,y_1}}+\delta_{\gamma_{x_2,y_2}}$$ is a good but not better decomposition of $T$. Indeed, $$S_{1,1} (\eta) - S_{1,2} (\eta)- S_{2,1} (\eta) + S_{2,2} (\eta)= I_{\gamma_{x_1,y_1}}-I_{\gamma_{x_1,y_2}}-I_{\gamma_{x_2,y_1}}+ I_{\gamma_{x_2,y_2}} =0,$$ but $$\eta (\Gamma_{x_1,y_1}) = 2, \eta (\Gamma_{x_1,y_2}) = 2,\eta (\Gamma_{x_2,y_1}) = 1, \text{ and }\eta (\Gamma_{x_2,y_2}) = 1.$$ To realize $T$ using $\eta$, all four routes need to be used.* *On the other hand, $$\tilde{\eta}= 3\delta_{\gamma_{x_1,y_1}} +\delta_{\gamma_{x_1,y_2}}+2\delta_{\gamma_{x_2,y_2}}$$ is a better decomposition of $T$. In this case, $$S_{1,1} (\tilde{\eta}) - S_{1,2} (\tilde{\eta})- S_{2,1} (\tilde{\eta}) + S_{2,2} (\tilde{\eta}) = I_{\gamma_{x_1,y_1}}-I_{\gamma_{x_1,y_2}} + I_{\gamma_{x_2,y_2}} \not=0$$ even though $$\tilde{\eta} (\Gamma_{x_1, y_1}) = 3, \tilde{\eta} (\Gamma_{x_1, y_2}) = 1, \tilde{\eta} (\Gamma_{x_2, y_1}) = 0, \tilde{\eta} (\Gamma_{x_2, y_2}) = 2.$$ Using this new decomposition, to realize the same $T$, one only needs to arrange three routes.* **Definition 3**. For any two finite measures $\eta$ and $\tilde{\eta}$ on $\Gamma$, we say $\tilde{\eta} \prec \hspace{-1mm}\prec \eta$ if for each pair $(i,j)$, $$\label{eqn: partial_order_eta} \int_{\Gamma_{x_i,y_j} } I_\gamma d\tilde{\eta} =a_{i,j} \int_{\Gamma_{x_i,y_j} } I_\gamma d\eta$$ for some $a_{i,j}\ge 0$. Our main result for this section is the following theorem: **Theorem 4**. *Let $T$ be a transport path from $\mu^-$ to $\mu^+$ where $\mu^-$ and $\mu^+$ are given in ([\[eqn: measures\]](#eqn: measures){reference-type="ref" reference="eqn: measures"}). For any good decomposition $\eta$ of $T$, there exists a better decomposition $\eta_\infty$ of $T$ such that $\eta_\infty \prec \hspace{-1mm}\prec \eta$.* We first give an equivalent definition of $\tilde{\eta} \prec \hspace{-1mm}\prec \eta$ as follows. **Lemma 5**. *For any two finite measures $\eta$ and $\tilde{\eta}$ on $\Gamma$, $\tilde{\eta}\prec \hspace{-1mm}\prec \eta$ if and only if they satisfy the condition $$\label{eqn: equivalent_def_precc} \text{ if } \tilde{\eta} (\Gamma_{x_i,y_j})>0 \text{ for some } (i,j), \text{ then } \eta (\Gamma_{x_i,y_j})>0 \text{ and } S_{i,j}(\tilde{\eta})=S_{i,j}(\eta).$$* **Remark 6**. By Lemma [Lemma 5](#lem: equivalent_definition_precc){reference-type="ref" reference="lem: equivalent_definition_precc"}, if $\tilde{\eta} \prec \hspace{-1mm}\prec \eta$, then $\tilde{\eta} (\Gamma_{x_i,y_j})=0$ whenever $\eta (\Gamma_{x_i,y_j})=0$. We use the notation $\tilde{\eta} \prec \hspace{-1mm}\prec \eta$ to mimic the absolute continuity notation $\ll$ of measures. *Proof.* Suppose $\tilde{\eta} \prec \hspace{-1mm}\prec \eta$. By taking the boundary operator on both sides of ([\[eqn: partial_order_eta\]](#eqn: partial_order_eta){reference-type="ref" reference="eqn: partial_order_eta"}), it follows that $$\int_{\Gamma_{x_i,y_j} } (\delta_{y_j} -\delta_{x_i})d\tilde{\eta} =a_{i,j} \int_{\Gamma_{x_i,y_j} } (\delta_{y_j} -\delta_{x_i}) d\eta.$$ That is, $$\tilde{\eta} (\Gamma_{x_i,y_j} ) (\delta_{y_j} -\delta_{x_i}) =a_{i,j} {\eta} (\Gamma_{x_i,y_j} ) (\delta_{y_j} -\delta_{x_i}),$$ which implies that $\tilde{\eta} (\Gamma_{x_i,y_j} ) = a_{i,j} {\eta} (\Gamma_{x_i,y_j} )$. 
Thus, $\tilde{\eta} (\Gamma_{x_i,y_j} ) >0$ implies $a_{i,j} >0$ and ${\eta} (\Gamma_{x_i,y_j} ) >0.$ Moreover, $$S_{i,j}(\tilde{\eta})= \frac{1}{\tilde{\eta} (\Gamma_{x_i,y_j} )}\int_{\Gamma_{x_i,y_j} } I_\gamma d\tilde{\eta} = \frac{1}{a_{i,j} {\eta} (\Gamma_{x_i,y_j} )} \cdot a_{i,j} \int_{\Gamma_{x_i,y_j} } I_\gamma d\eta = S_{i,j}(\eta).$$ On the other hand, suppose ([\[eqn: equivalent_def_precc\]](#eqn: equivalent_def_precc){reference-type="ref" reference="eqn: equivalent_def_precc"}) holds. If $\tilde{\eta} (\Gamma_{x_i,y_j} ) =0$, then $a_{i,j}=0$ will give ([\[eqn: partial_order_eta\]](#eqn: partial_order_eta){reference-type="ref" reference="eqn: partial_order_eta"}). If $\tilde{\eta} (\Gamma_{x_i,y_j} ) >0$, then ([\[eqn: equivalent_def_precc\]](#eqn: equivalent_def_precc){reference-type="ref" reference="eqn: equivalent_def_precc"}) implies $\eta (\Gamma_{x_i,y_j})>0 \text{ and } S_{i,j}(\tilde{\eta})=S_{i,j}(\eta).$ By setting $$a_{i,j} = \frac{\tilde{\eta} (\Gamma_{x_i,y_j} )}{ {\eta} (\Gamma_{x_i,y_j} ) },$$ equation ([\[eqn: S_ij\]](#eqn: S_ij){reference-type="ref" reference="eqn: S_ij"}) gives that $$\int_{\Gamma_{x_i,y_j} } I_\gamma d\tilde{\eta} =\tilde{\eta} (\Gamma_{x_i,y_j} ) S_{i,j}(\tilde{\eta}) =(a_{i,j}\eta (\Gamma_{x_i,y_j} ) )S_{i,j}(\eta) =a_{i,j} \int_{\Gamma_{x_i,y_j} } I_\gamma d\eta .$$ ◻ Note that, by using the sign function $$sgn(x)= \left\{ \begin{array}{ll} 1, & \text{ if }x>0 \\ 0, & \text{ if }x=0\\ -1, & \text{ if }x<0, \\ \end{array} \right.$$ equation ([\[eqn: S_ij\]](#eqn: S_ij){reference-type="ref" reference="eqn: S_ij"}) gives $$\label{eqn: partial_S_ij} \partial S_{i,j}(\eta) = \left\{ \begin{array}{ll} \delta_{y_j}-\delta_{x_i}, & \text{ if }\eta(\Gamma_{x_i,y_j})>0, \\ 0, & \text{ if }\eta(\Gamma_{x_i,y_j})=0 \end{array} \right. =sgn(\eta(\Gamma_{x_i,y_j}))(\delta_{y_j}-\delta_{x_i}).$$ For any pair $1\le i_1< i_2\le M$ and $1\le j_1<j_2\leq N$, define $$\label{eqn: C_def} C[(i_1,j_1), (i_2, j_2), \eta ] : = S_{i_1,j_1} (\eta) - S_{i_1,j_2} (\eta)- S_{i_2,j_1} (\eta) + S_{i_2,j_2} (\eta).$$ Direct calculation gives $$\partial C[(i_1,j_1), (i_2, j_2), \eta ] = \left( sgn(\eta(\Gamma_{x_{i_1},y_{j_2}})) - sgn(\eta(\Gamma_{x_{i_1},y_{j_1}}))\right) \delta_{x_{i_1}} + \left(sgn(\eta(\Gamma_{x_{i_2},y_{j_1}})) - sgn(\eta(\Gamma_{x_{i_2},y_{j_2}}))\right)\delta_{x_{i_2}}$$ $$\hphantom{\partial C[(i_1,j_1), (i_2, j_2), \eta ]\,} + \left(sgn(\eta(\Gamma_{x_{i_1},y_{j_1}})) - sgn(\eta(\Gamma_{x_{i_2},y_{j_1}}))\right)\delta_{y_{j_1}} + \left(sgn(\eta(\Gamma_{x_{i_2},y_{j_2}})) - sgn(\eta(\Gamma_{x_{i_1},y_{j_2}}))\right)\delta_{y_{j_2}}.$$ Hence, it follows that $\partial C[(i_1,j_1), (i_2, j_2), \eta ] =0$ if and only if $$\label{eqn: equalsgn} sgn(\eta(\Gamma_{x_{i_1},y_{j_1}}))=sgn(\eta(\Gamma_{x_{i_1},y_{j_2}}))=sgn(\eta(\Gamma_{x_{i_2},y_{j_1}}))=sgn(\eta(\Gamma_{x_{i_2},y_{j_2}}))=c,$$ where $c=0$ or $1$. We denote this common value, $c$, by $s[(i_1,j_1),(i_2,j_2), \eta]$. **Definition 7**. For any finite positive measure $\eta$ on $\Gamma$, define $$\mathcal{A}_\eta(i^*,j^*)=\{(i,j): i^*< i\le M, j^*<j\leq N, \ C[(i^*,j^*), (i, j), \eta ] =0 \text{ and } s[(i^*,j^*),(i,j), \eta]=1 \}.$$ With this definition, a good decomposition $\eta$ of $T$ is a better decomposition of $T$ if and only if $\mathcal{A}_\eta(i,j)=\emptyset$ for all pairs $(i,j)$. 
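To make the condition $\mathcal{A}_\eta(i,j)=\emptyset$ concrete in the atomic setting of Examples 1 and 2, the following Python sketch encodes each curve $\gamma_{x_i,y_j}$ by its vector of edge multiplicities and tests a candidate decomposition against Definition 7. The five-edge graph used here (edges $x_1\to a$, $x_2\to a$, $a\to b$, $b\to y_1$, $b\to y_2$) is an assumption, since the figure of Example 1 is not reproduced in the text; it is simply one graph consistent with the identity $I_{\gamma_{x_1,y_1}}-I_{\gamma_{x_1,y_2}}-I_{\gamma_{x_2,y_1}}+I_{\gamma_{x_2,y_2}}=0$.

```python
import numpy as np

# Hypothetical edge set for the graph of Example 1 (an assumption, since the
# figure is not reproduced here): x1 -> a, x2 -> a, a -> b, b -> y1, b -> y2.
EDGES = ["x1a", "x2a", "ab", "by1", "by2"]

def I(*edges):
    """Edge-multiplicity vector of the curve using each listed edge once;
    a discrete stand-in for the rectifiable 1-current I_gamma."""
    v = np.zeros(len(EDGES))
    for e in edges:
        v[EDGES.index(e)] = 1.0
    return v

# The four curves gamma_{x_i, y_j} of Example 1.
I_curve = {(1, 1): I("x1a", "ab", "by1"), (1, 2): I("x1a", "ab", "by2"),
           (2, 1): I("x2a", "ab", "by1"), (2, 2): I("x2a", "ab", "by2")}

def S(eta, i, j):
    """Normalized current S_{i,j}(eta): each Gamma_{x_i,y_j} carries a single
    curve here, so S_{i,j} is I_curve[(i,j)] when eta gives it mass, else 0."""
    return I_curve[(i, j)] if eta.get((i, j), 0) > 0 else np.zeros(len(EDGES))

def is_better(eta, M=2, N=2):
    """Check Definition 2 via Definition 7: eta is a better decomposition iff
    no quadruple with C[(i1,j1),(i2,j2),eta] = 0 has all four masses positive."""
    for i1 in range(1, M + 1):
        for i2 in range(i1 + 1, M + 1):
            for j1 in range(1, N + 1):
                for j2 in range(j1 + 1, N + 1):
                    C = S(eta, i1, j1) - S(eta, i1, j2) - S(eta, i2, j1) + S(eta, i2, j2)
                    masses = [eta.get(p, 0) for p in
                              [(i1, j1), (i1, j2), (i2, j1), (i2, j2)]]
                    if np.allclose(C, 0) and all(m > 0 for m in masses):
                        return False   # A_eta(i1, j1) is non-empty
    return True

eta_good   = {(1, 1): 2, (1, 2): 2, (2, 1): 1, (2, 2): 1}   # Example 1
eta_better = {(1, 1): 3, (1, 2): 1, (2, 2): 2}              # Example 2

# Both measures decompose the same path T (same total edge multiplicities).
T_good   = sum(m * I_curve[p] for p, m in eta_good.items())
T_better = sum(m * I_curve[p] for p, m in eta_better.items())
assert np.allclose(T_good, T_better)

print(is_better(eta_good))    # False: good, but not better
print(is_better(eta_better))  # True
```

Running the sketch reproduces the conclusion of Example 2: the decomposition of Example 1 fails the test, while $\tilde{\eta}$ passes it, and both encode the same transport path $T$.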
We now consider the graded lexicographical order on $\mathbb{N}^2$, namely $$(a,b)<(c,d) \text{ if } a+b<c+d, \text{ or } a+b=c+d \text{ and } a<c.$$ Under this order, $\mathbb{N}^2$ is listed in the order of $$\label{eqn: ordering} \{(i_n,j_n)\}_{n=1}^\infty=\{(1,1), (1,2), (2,1), (1,3),\ldots, (i_n,j_n), (i_{n+1},j_{n+1}),\ldots \}.$$ **Lemma 8**. *For any good decomposition $\eta$ of $T$, there exists a good decomposition $\tilde{\eta}$ of $T$ such that $\tilde{\eta}\prec \hspace{-1mm}\prec \eta$ and $\mathcal{A}_{\tilde{\eta}}(1,1)=\emptyset$.* *Proof.* When $\eta(\Gamma_{x_1,y_1})=0$, by ([\[eqn: equalsgn\]](#eqn: equalsgn){reference-type="ref" reference="eqn: equalsgn"}), the condition $C[(1,1), (i, j), \eta ] =0$ implies $s[(1,1),(i,j), \eta]=0$, and hence $\mathcal{A}_{\eta}(1,1)=\emptyset$. Setting $\tilde{\eta}:= \eta$ gives the desired result. When $\eta(\Gamma_{x_1,y_1})\not=0$, we inductively define a sequence of good decompositions $\{\eta_n\}$ of $T$ with $\eta_n(\Gamma_{x_1,y_1})>0$ whose limit is the desired measure $\tilde{\eta}$. Set $\eta_1=\eta$. If $\mathcal{A}_{\eta_n} (1,1)= \emptyset$ for some $n\ge 1$, set $\eta_m=\eta_n$ for all $m\ge n$ and set $\tilde{\eta}=\eta_n$ as well. If $\mathcal{A}_{\eta_n}(1,1)$ is non-empty for all $n\ge 1$, we construct $\tilde{\eta}$ from $\{\eta_n\}$ via the following steps. **Step 1: Construct a sequence of good decompositions $\{\eta_n\}$ of $T$.** For each $n\ge 1$, assume that $\eta_n$ is a good decomposition of $T$ with $\eta_n(\Gamma_{x_1,y_1})>0$. Let $(i_n,j_n)$ be the minimum element of $\mathcal{A}_{\eta_n}(1,1)$, which is a subset of $\mathbb{N}^2$ equipped with the graded lexicographical order. Define $$\eta_{n+1}:=\eta_{n}+ \min\{\eta_n(\Gamma_{x_1,y_{j_n}}), \eta_n(\Gamma_{x_{i_n},y_1})\}\left( \frac{ \eta_n\lfloor_{\Gamma_{x_1,y_1}} }{\eta_n(\Gamma_{x_1,y_1})} - \frac{\eta_n\lfloor_{\Gamma_{x_1,y_{j_n}}} }{\eta_n(\Gamma_{x_1,y_{j_n}})} - \frac{\eta_n\lfloor_{\Gamma_{x_{i_n},y_1}} }{\eta_n(\Gamma_{x_{i_n},y_1})} + \frac{\eta_n\lfloor_{\Gamma_{x_{i_n},y_{j_n}}} }{\eta_n(\Gamma_{x_{i_n},y_{j_n}})} \right).$$ Here, the denominators in the above equation are positive because $s[(1,1),(i_n,j_n), \eta_n]=1$. Without loss of generality, we may assume that $$0<\eta_n(\Gamma_{x_1,y_{j_n}})\leq\eta_n(\Gamma_{x_{i_n},y_1}).$$ Under this construction, we have for each $i, j$, $$\label{eqn: eta_propotional} \eta_{n+1}\lfloor_{\Gamma_{x_i,y_j}}=(1+\lambda_{n,i,j})\eta_{n}\lfloor_{\Gamma_{x_i,y_j}}$$ for some real number $\lambda_{n,i,j} \ge -1$. 
In particular, it follows that $$\label{eqn: n+1 inequatliy 1} \eta_{n+1}(\Gamma_{x_1,y_1}) > \eta_{n}(\Gamma_{x_1,y_1})>0, \ \eta_{n}(\Gamma_{x_1,y_{j_n} })> \eta_{n+1}(\Gamma_{x_1,y_{j_n}})=0,$$ $$\label{eqn: n+1 inequatliy 2} \eta_{n} (\Gamma_{x_{i_n},y_1})> \eta_{n+1}(\Gamma_{x_{i_n},y_1})\geq 0, \ \eta_{n+1}(\Gamma_{x_{i_n},y_{j_n} }) > \eta_{n}(\Gamma_{x_{i_n},y_{j_n} })>0,$$ and $$\label{eqn: n+1 inequatliy 3} \eta_{n+1}(\Gamma_{x_i,y_j})= \eta_{n}(\Gamma_{x_i,y_j}) \text{ for all other } i,j.$$ Since $\eta_n$ is a good decomposition of $T$, we have $$T =\int_{\Gamma}I_{\gamma}d\eta_n, \; \mathbf{M}(T)=\int_\Gamma \mathbf{M}(I_\gamma) d\eta_{n}(\gamma)\; \text{ and }\mathbf{M}(\partial T)=\int_\Gamma \mathbf{M}(\partial I_\gamma) d\eta_n(\gamma).$$ In particular, $\mathbf{M}(T)=\int_\Gamma \mathbf{M}(I_\gamma) d\eta_{n}(\gamma)$ implies that $$\mathbf{M}( S_{1,1}(\eta_{n}) + S_{i_n,j_n}(\eta_{n}) ) = \mathbf{M}( S_{1,1}(\eta_{n}) ) + \mathbf{M}( S_{i_n,j_n}(\eta_{n}) ),$$ and $$\mathbf{M}( S_{1,j_n} (\eta_{n})+ S_{i_n,1}(\eta_{n}) ) = \mathbf{M}( S_{1,j_n}(\eta_{n}) ) + \mathbf{M}( S_{i_n,1}(\eta_{n}) ).$$ By assumption, $$C[(1,1), (i_n, j_n), \eta_n] = S_{1,1}(\eta_{n})- S_{1,j_n}(\eta_{n}) - S_{i_n,1}(\eta_{n}) + S_{i_n,j_n}(\eta_{n}) =0,$$ i.e., $S_{1,1}(\eta_{n})+S_{i_n,j_n} (\eta_{n})=S_{1,j_n}(\eta_{n}) + S_{i_n,1}(\eta_{n}).$ Thus, $$\begin{aligned} && \mathbf{M}( S_{1,1}(\eta_{n}) ) + \mathbf{M}( S_{i_n,j_n}(\eta_{n}) ) = \mathbf{M}( S_{1,1}(\eta_{n}) + S_{i_n,j_n}(\eta_{n}) ) \\ &=& \mathbf{M}( S_{1,j_n}(\eta_{n}) + S_{i_n,1}(\eta_{n}) ) = \mathbf{M}( S_{1,j_n} (\eta_{n})) + \mathbf{M}( S_{i_n,1}(\eta_{n}) ).\end{aligned}$$ Now, by the construction of $\eta_{n+1}$, $$\int_{\Gamma}I_{\gamma}d \eta_{n+1}-\int_{\Gamma}I_{\gamma}d \eta_n= \min\{\eta_n(\Gamma_{x_1,y_{j_n}}), \eta_n(\Gamma_{x_{i_n},y_1})\} \cdot C[(1,1), (i_n, j_n), \eta_n] =0 ,$$ and $$\begin{aligned} & &\int_\Gamma \mathbf{M}(I_\gamma) d\eta_{n+1}(\gamma)- \int_\Gamma \mathbf{M}(I_\gamma) d\eta_{n}(\gamma) \\ &=&\min\{\eta_n(\Gamma_{x_1,y_{j_n}}), \eta_n(\Gamma_{x_{i_n},y_1})\} \left( \mathbf{M}( S_{1,1} ) - \mathbf{M} ( S_{1,j_n} ) - \mathbf{M} ( S_{i_n,1}) + \mathbf{M}( S_{i_n,j_n} ) \right)=0.\end{aligned}$$ Moreover, $$\begin{aligned} & &\int_\Gamma \mathbf{M}(\partial I_\gamma) d\eta_{n+1}(\gamma)-\int_\Gamma \mathbf{M}(\partial I_\gamma) d\eta_{n}(\gamma)\\ &=& \min\{\eta_n(\Gamma_{x_1,y_{j_n}}), \eta_n(\Gamma_{x_{i_n},y_1})\} \left( \mathbf{M}(\partial S_{1,1} ) - \mathbf{M} ( \partial S_{1,j_n} ) - \mathbf{M} (\partial S_{i_n,1}) + \mathbf{M}( \partial S_{i_n,j_n} ) \right) \\ &=& \min\{\eta_n(\Gamma_{x_1,y_{j_n}}), \eta_n(\Gamma_{x_{i_n},y_1})\} \left(2 - 2 - 2 +2 \right) =0.\end{aligned}$$ As a result, since $\eta_n$ is a good decomposition of $T$, $\eta_{n+1}$ is a good decomposition of $T$ as well. **Step 2: Show that the sequence $\{\eta_n\}$ converges to a good decomposition $\tilde{\eta}$ of $T$.** Note that for each $1\le i\le M$ and $1\le j\le N$, the sequence $\{\eta_n\lfloor_{\Gamma_{x_i,y_j}}\}_{n=1}^\infty$ is a monotonic sequence of measures with bounded mass. 
Indeed, by the construction above and by equations ([\[eqn: n+1 inequatliy 1\]](#eqn: n+1 inequatliy 1){reference-type="ref" reference="eqn: n+1 inequatliy 1"}), ([\[eqn: n+1 inequatliy 2\]](#eqn: n+1 inequatliy 2){reference-type="ref" reference="eqn: n+1 inequatliy 2"}) and ([\[eqn: n+1 inequatliy 3\]](#eqn: n+1 inequatliy 3){reference-type="ref" reference="eqn: n+1 inequatliy 3"}), - if $i=1,j=1$, then $\{\eta_n\lfloor_{\Gamma_{x_i,y_j}}\}_{n=1}^\infty$ is monotone increasing; - if $i=1,j>1$, then $\{\eta_n\lfloor_{\Gamma_{x_i,y_j}}\}_{n=1}^\infty$ is monotone decreasing; - if $i>1,j=1$, then $\{\eta_n\lfloor_{\Gamma_{x_i,y_j}}\}_{n=1}^\infty$ is monotone decreasing; - if $i>1,j>1$, then $\{\eta_n\lfloor_{\Gamma_{x_i,y_j}}\}_{n=1}^\infty$ is monotone increasing, and eventually constant. As a result, the sequence, $\{\eta_n\lfloor_{\Gamma_{x_i,y_j}}\}_{n=1}^\infty$, converges to some measure $\eta_{ij}$ for each $(i,j)$. Define $$\tilde{\eta}:=\sum_{i=1}^M\sum_{j=1}^N \eta_{ij}.$$ Hence, as $n\rightarrow \infty$, $$\eta_n = \sum_{i=1}^M \sum_{j=1}^N \eta_n \lfloor_{\Gamma_{x_i,y_j}} \longrightarrow \tilde{\eta}= \sum_{i=1}^M\sum_{j=1}^N \eta_{ij}.$$ Since each $\eta_n$ is a good decomposition of $T$, it follows that $$\begin{aligned} &&\int_{\Gamma} I_{\gamma} d\tilde{\eta} = \lim_{n\to \infty}\int_{\Gamma} I_{\gamma} d\eta_{n}= T,\\ &&\int_{\Gamma} \mathbf{M}(I_\gamma) d\tilde{\eta} = \lim_{n\to \infty}\int_{\Gamma} \mathbf{M}(I_\gamma) d\eta_{n} = \mathbf{M}(T),\\ &&\int_{\Gamma} \mathbf{M}(\partial I_\gamma) d\tilde{\eta} = \lim_{n\to \infty}\int_{\Gamma} \mathbf{M}(\partial I_\gamma) d\eta_{n} = \mathbf{M}(\partial T).\end{aligned}$$ As a result, $\tilde{\eta}$ is also a good decomposition of $T$. **Step 3: Show that $\tilde{\eta} \prec \hspace{-1mm}\prec \eta$ .** Suppose $\tilde{\eta}(\Gamma_{x_i,y_j}) >0$ for some pair $(i,j)$. Then, ${\eta}_n(\Gamma_{x_i,y_j}) >0$ when $n$ is large enough. By ([\[eqn: eta_propotional\]](#eqn: eta_propotional){reference-type="ref" reference="eqn: eta_propotional"}), $${\eta}_n \lfloor_{\Gamma_{x_i,y_j}} = \prod_{k=1}^{n-1}(1+\lambda_{k,i,j}){\eta} \lfloor_{\Gamma_{x_i,y_j}}, \text{ for some } \lambda_{k,i,j} \ge -1 \text{ for each } k.$$ That is, $${\eta}_n=\left(\prod_{k=1}^{n-1}(1+\lambda_{k,i,j})\right){\eta} \text{ on } \Gamma_{x_i,y_j}.$$ As a result, ${\eta}_n(\Gamma_{x_i,y_j}) >0$ implies ${\eta}(\Gamma_{x_i,y_j}) >0$ and $S_{i,j}(\eta_n)=S_{i,j}(\eta)$. Since $\tilde{\eta}$ is the limit of $\eta_n$, $$S_{i,j}(\tilde{\eta})=\lim_{n\rightarrow \infty}S_{i,j}(\eta_n)=S_{i,j}(\eta).$$ This proves $\tilde{\eta}\prec \hspace{-1mm}\prec \eta$. **Step 4: Show that $\mathcal{A}_{\eta_{n+1}} (1,1) \subsetneqq \mathcal{A}_{\eta_n} (1,1)$ for each $n$.** Note that $(i_n,j_n)\in \mathcal{A}_{\eta_n} (1,1) \setminus \mathcal{A}_{\eta_{n+1}} (1,1)$. Indeed, if $(i_n,j_n)\in \mathcal{A}_{\eta_{n+1}} (1,1)$, then $$C[(1,1), (i_n, j_n), \eta_{n+1} ] =0 \text{ and } s[(1,1),(i_n,j_n), \eta_{n+1}]=1 .$$ This implies $sgn(\eta_{n+1}(\Gamma_{x_1,y_{j_n}}))=1$, which contradicts with $\eta_{n+1}(\Gamma_{x_1,y_{j_n}})=0$ as given in ([\[eqn: n+1 inequatliy 1\]](#eqn: n+1 inequatliy 1){reference-type="ref" reference="eqn: n+1 inequatliy 1"}). We now show that $\mathcal{A}_{\eta_{n+1}} (1,1) \subseteq \mathcal{A}_{\eta_n} (1,1)$. 
For any $(i_0,j_0) \in \mathcal{A}_{\eta_{n+1}} (1,1)$, by definition, $$C[(1,1), (i_0, j_0), \eta_{n+1} ] =0 \text{ and } s[(1,1), (i_0, j_0), \eta_{n+1} ] =1.$$ The condition $s[(1,1), (i_0, j_0), \eta_{n+1} ] =1$ indicates that $$\eta_{n+1}(\Gamma_{x_1,y_1})>0,\eta_{n+1}(\Gamma_{x_1,y_{j_0}})>0,\eta_{n+1}(\Gamma_{x_{i_0},y_1})>0,\eta_{n+1}(\Gamma_{x_{i_0},y_{j_0}}) >0.$$ By equations ([\[eqn: n+1 inequatliy 1\]](#eqn: n+1 inequatliy 1){reference-type="ref" reference="eqn: n+1 inequatliy 1"})--([\[eqn: n+1 inequatliy 3\]](#eqn: n+1 inequatliy 3){reference-type="ref" reference="eqn: n+1 inequatliy 3"}), and $(i_0, j_0)\neq (i_n, j_n)$, $$\eta_{n}(\Gamma_{x_1,y_1}) >0, \ \eta_{n}(\Gamma_{x_1,y_{j_0} })\geq \eta_{n+1}(\Gamma_{x_1,y_{j_0}})>0,$$ $$\eta_{n} (\Gamma_{x_{i_0},y_1})\geq \eta_{n+1}(\Gamma_{x_{i_0},y_1})>0, \ \eta_{n}(\Gamma_{x_{i_0},y_{j_0} })= \eta_{n+1}(\Gamma_{x_{i_0},y_{j_0} })>0.$$ By ([\[eqn: eta_propotional\]](#eqn: eta_propotional){reference-type="ref" reference="eqn: eta_propotional"}), for each $i, j$, when both $\eta_{n}(\Gamma_{x_i,y_j})>0$ and $\eta_{n+1}(\Gamma_{x_i,y_j})>0$, then $$S_{i,j}(\eta_n)=S_{i,j}(\eta_{n+1}).$$ As a result, $$C[(1,1), (i_0, j_0), \eta_{n} ] = C[(1,1), (i_0, j_0), \eta_{n+1} ] = 0.$$ Therefore, $(i_0,j_0)\in \mathcal{A}_{\eta_n}(1,1)$ and hence $\mathcal{A}_{\eta_{n+1}} (1,1) \subseteq \mathcal{A}_{\eta_n} (1,1)$. **Step 5: Show that $\mathcal{A}_{\tilde{\eta}}(1,1)=\emptyset$.** Assume that there exists $(i',j')\in \mathcal{A}_{\tilde{\eta}} (1,1)$, i.e., $C[(1,1), (i', j'), \tilde{\eta} ] =0$ and $s[(1,1), (i', j'), \tilde{\eta} ] =1$. For any $(i,j) \in \{ (1,1),(1,j'),(i',1),(i',j')\}$, since $s[(1,1), (i', j'), \tilde{\eta} ] =1$, it follows that $$\lim_{n\rightarrow \infty} \eta_{n}(\Gamma_{x_{i},y_{j} }) = \tilde{\eta}(\Gamma_{x_{i},y_{j}})>0.$$ Thus, there exists an $N_0\in \mathbb{N}$ such that $\eta_n(\Gamma_{x_i,y_j})>0$ for all $n\ge N_0$. By ([\[eqn: eta_propotional\]](#eqn: eta_propotional){reference-type="ref" reference="eqn: eta_propotional"}), this implies that the normalized current $S_{i,j}(\eta_n)$ is independent of $n$, and hence $S_{i,j}(\eta_n)=S_{i,j}(\tilde{\eta})$ for all $n\ge N_0$. As a result, for each $n\ge N_0$, $$C[(1,1), (i', j'), \eta_{n} ]=C[(1,1), (i', j'), \tilde{\eta} ] =0 \text{ and } s[(1,1), (i', j'), \eta_{n} ] = s[(1,1), (i', j'), \tilde{\eta} ] =1.$$ This shows that $(i',j')\in\mathcal{A}_{\eta_n} (1,1)$. On the other hand, $\{ \mathcal{A}_{\eta_n} (1,1)\}$ is a sequence of nested subsets of $\mathbb{N}^2$ with $\mathcal{A}_{\eta_{n+1}} (1,1) \subsetneqq \mathcal{A}_{\eta_n} (1,1)$ for each $n$; hence, once $n$ is larger than the order of the fixed element $(i',j')$, it is not possible that $(i',j')\in\mathcal{A}_{\eta_n} (1,1)$, a contradiction. ◻ We now extend Lemma [Lemma 8](#lem: S_ij(1,1)){reference-type="ref" reference="lem: S_ij(1,1)"} to a more general case: **Lemma 9**. *For any good decomposition $\eta$ of $T$, there exists a sequence of good decompositions $\{\eta_n\}_{n=0}^\infty$ of $T$ with $\eta_0=\eta$ such that for each $n\ge 1$, $\eta_n\prec \hspace{-1mm}\prec \eta_{n-1}$ and $\mathcal{A}_{\eta_{n}}(i_k, j_k)=\emptyset$ for all $1\leq k\leq n$, where $\{(i_k, j_k)\}$ is given in ([\[eqn: ordering\]](#eqn: ordering){reference-type="ref" reference="eqn: ordering"}).* *Proof.* We prove the result by induction on $n$. Lemma [Lemma 8](#lem: S_ij(1,1)){reference-type="ref" reference="lem: S_ij(1,1)"} provides the base case when $n=1$. 
For each $n\ge 2$, assume that there exists a good decomposition $\eta_{n-1}$ of $T$ such that $\eta_{n-1}\prec \hspace{-1mm}\prec \eta_{n-2}$ and $\mathcal{A}_{\eta_{n-1}}(i_k, j_k)=\emptyset$ for all $1\leq k\leq n-1$. Using $\eta_{n-1}$, we construct $\eta_n$ as follows. Denote $$\tilde{\Gamma}_n=\bigcup_{i_n \le i, j_n \le j} \Gamma_{x_i, y_j}.$$ Let $\tilde{\eta}_n$ be the measure $\tilde{\eta}$ achieved in Lemma [Lemma 8](#lem: S_ij(1,1)){reference-type="ref" reference="lem: S_ij(1,1)"} with $\eta$ being replaced by $\eta_{n-1}\lfloor_{\tilde{\Gamma}_n}$ and $T$ being replaced by $\tilde{T}:=\int_{\tilde{\Gamma}_n} I_\gamma d\eta_{n-1}$. Define $$\eta_n:= \eta_{n-1} \lfloor_{\Gamma\setminus\tilde{\Gamma}_n} + \tilde{\eta}_n.$$ We first claim that $\eta_n$ is a good decomposition of $T$. Indeed, since both $\tilde{\eta}_n$ and $\eta_{n-1}\lfloor_{\tilde{\Gamma}_n}$ are good decompositions of $\tilde{T}$, $$\int_\Gamma I_\gamma d\eta_n-\int_\Gamma I_\gamma d\eta_{n-1}=\int_\Gamma I_\gamma d\tilde{\eta}_n-\int_{\tilde{\Gamma}_n} I_\gamma d\eta_{n-1}=0,$$ $$\begin{aligned} \int_\Gamma \mathbf{M}(I_\gamma) d\eta_{n}(\gamma)- \int_\Gamma \mathbf{M}(I_\gamma) d\eta_{n-1}(\gamma) =\int_\Gamma\mathbf{M}(I_\gamma) d\tilde{\eta}_n-\int_{\tilde{\Gamma}_n} \mathbf{M}(I_\gamma) d\eta_{n-1}=0,\end{aligned}$$ and $$\begin{aligned} \int_\Gamma \mathbf{M}(\partial I_\gamma) d\eta_{n}(\gamma)- \int_\Gamma \mathbf{M}(\partial I_\gamma) d\eta_{n-1}(\gamma) =\int_\Gamma\mathbf{M}(\partial I_\gamma) d\tilde{\eta}_n-\int_{\tilde{\Gamma}_n} \mathbf{M}(\partial I_\gamma) d\eta_{n-1}=0.\end{aligned}$$ As a result, since $\eta_{n-1}$ is a good decomposition of $T$, $\eta_{n}$ is also a good decomposition of $T$. We now show that $\eta_n \prec \hspace{-1mm}\prec \eta_{n-1}$. Suppose $\eta_n (\Gamma_{x_i,y_j}) >0$ for some $1\le i\le M, 1\leq j\leq N$. - When $i<i_n$ or $j<j_n$, definition of $\eta_n$ gives $\eta_n \lfloor_{\Gamma_{x_i,y_j}}=\eta_{n-1}\lfloor_{ \Gamma_{x_i,y_j}}$. Therefore, $$\eta_{n-1} (\Gamma_{x_i,y_j})=\eta_{n} (\Gamma_{x_i,y_j})>0 \text{ and } S_{i,j}(\eta_{n-1})=S_{i,j}(\eta_{n}).$$ - When $i\geq i_n$ and $j\geq j_n$, definition of $\eta_n$ gives $\eta_n \lfloor_{\Gamma_{x_i,y_j}}=\tilde{\eta}_{n}\lfloor_{ \Gamma_{x_i,y_j}}$, so that $$\tilde{\eta}_{n} (\Gamma_{x_i,y_j})=\eta_{n} (\Gamma_{x_i,y_j})>0.$$ Since $\tilde{\eta}_n \prec \hspace{-1mm}\prec \eta_{n-1}\lfloor_{\tilde{\Gamma}_n}$ by Lemma [Lemma 8](#lem: S_ij(1,1)){reference-type="ref" reference="lem: S_ij(1,1)"}, it follows that $$\eta_{n-1} (\Gamma_{x_i,y_j})>0 \text{ and } S_{i,j}(\eta_{n-1})=S_{i,j}(\tilde{\eta}_n)=S_{i,j}(\eta_{n}).$$ In both cases, $\eta_{n-1} (\Gamma_{x_i,y_j})>0$ and $S_{i,j}(\eta_{n-1})=S_{i,j}(\eta_{n})$. That is, $\eta_{n} \prec \hspace{-1mm}\prec \eta_{n-1}$. We now show that $\mathcal{A}_{\eta_{n}}(i_k, j_k)=\emptyset$ for all $1\leq k\leq n$. When $k=n$, $\mathcal{A}_{\eta_{n}}(i_n, j_n) =\emptyset$ by Lemma [Lemma 8](#lem: S_ij(1,1)){reference-type="ref" reference="lem: S_ij(1,1)"}. Suppose $k<n$, and for contradiction, we assume $\mathcal{A}_{\eta_{n}}(i_k, j_k) \neq\emptyset$. Thus, there exists $(i^*,j^*)\in \mathcal{A}_{\eta_{n}}(i_k, j_k)$, i.e., $$C[(i_k,j_k), (i^*, j^*), \eta_n ] =0 \text{ and } s[(i_k,j_k),(i^*,j^*), \eta_n]=1.$$ Now, for any $(i,j)\in \{(i_k, j_k), (i_k, j^*), (i^*,j_k), (i^*,j^*)\}$, since $s[(i_k,j_k),(i^*,j^*), \eta_n]=1$, it follows that $\eta_n(\Gamma_{x_{i}, y_{j}})>0$. By the definition of $\eta_n$, when $i<i_n$ or $j<j_n$, $\eta_n=\eta_{n-1}$ on $\Gamma_{x_i, y_j}$. 
Thus, $$\label{eqn: equal_eta_S} \eta_{n-1}(\Gamma_{x_{i}, y_{j}})=\eta_{n}(\Gamma_{x_{i}, y_{j}})>0 \text{ and } S_{i,j}(\eta_n)=S_{i,j}(\eta_{n-1}).$$ When $i\ge i_n$ and $j\ge j_n$, $$\tilde{\eta}_{n}(\Gamma_{x_{i}, y_{j}})=\eta_{n}(\Gamma_{x_{i}, y_{j}})>0.$$ Since $\tilde{\eta}_n \prec \hspace{-1mm}\prec \eta_{n-1}\lfloor_{\tilde{\Gamma}_n}$, then equations in ([\[eqn: equal_eta_S\]](#eqn: equal_eta_S){reference-type="ref" reference="eqn: equal_eta_S"}) still hold. As a result, $$C[(i_k,j_k), (i^*, j^*), \eta_{n-1} ]=C[(i_k,j_k), (i^*, j^*), \eta_{n} ] =0 \text{ and } s[(i_k,j_k),(i^*,j^*), \eta_{n-1}]=1.$$ Therefore, $(i^*, j^*)\in \mathcal{A}_{\eta_{n-1}}(i_k, j_k)$, which contradicts with $\mathcal{A}_{\eta_{n-1}}(i_k, j_k)=\emptyset$ whenever $k\le n-1$. ◻ We now give the proof of Theorem [Theorem 4](#thm: GoodS_ij){reference-type="ref" reference="thm: GoodS_ij"} by showing that for any good decomposition $\eta$ of $T$, there exists a good decomposition $\eta_\infty$ of $T$ such that $\eta_\infty \prec \hspace{-1mm}\prec \eta$ and $\mathcal{A}_{\eta_\infty}(i, j)=\emptyset$ for all $1\le i\le M, 1\le j\le N$. *Proof of Theorem [Theorem 4](#thm: GoodS_ij){reference-type="ref" reference="thm: GoodS_ij"}.* Let $\{\eta_n\}$ be the sequence of good decomposition of $T$ constructed in the proof of Lemma [Lemma 9](#lem: ikjk){reference-type="ref" reference="lem: ikjk"}. Observe that by the construction of the sequence $\{\eta_n\}$, it follows that for any $k\in \mathbb{N}$, $$\label{eqn: eta_n_constant} \eta_n\lfloor_{\Gamma_{x_{i_k},y_{j_k}} } =\eta_k\lfloor_{\Gamma_{x_{i_k},y_{j_k}} }$$ for all $n\geq k$. Define $\eta_\infty: \Gamma \rightarrow \mathbb{R}$ by setting $$\label{eqn: definition_eta_infinty} \eta_\infty:=\eta_{k} \quad \text{ on } \Gamma_{x_{i_k},y_{j_k}}, \forall k\in \mathbb{N}.$$ We first show that $\{\eta_n\}$ converges to $\eta_\infty$ with respect to the total variation distance $\|\cdot\|$. Indeed, by ([\[eqn: eta_n\_constant\]](#eqn: eta_n_constant){reference-type="ref" reference="eqn: eta_n_constant"}), $$\begin{aligned} \| \eta_n - \eta_\infty \| &=& \| \sum_{k\ge 1} (\eta_n- \eta_k) \lfloor_{\Gamma_{x_{i_k}, y_{j_k}}} \| = \| \sum_{k\ge n+1} (\eta_n- \eta_k) \lfloor_{\Gamma_{x_{i_k}, y_{j_k}}} \| \\ &\le & \sum_{k \ge n+1}\eta_n(\Gamma_{x_{i_k}, y_{j_k}}) + \sum_{k \ge n+1} \eta_k(\Gamma_{x_{i_k}, y_{j_k}})\\ &\le & \sum_{i_k +j_k \ge i_n + j_n }\eta_n(\Gamma_{x_{i_k}, y_{j_k}}) + \sum_{k \ge n+1 } \eta_k(\Gamma_{x_{i_k}, y_{j_k}}) \\ &\leq & \sum_{i_k \ge \sqrt{i_n j_n}} \sum_{j_k =1 }^N \eta_n(\Gamma_{x_{i_k}, y_{j_k}}) + \sum_{j_k \ge \sqrt{i_n j_n}} \sum_{i_k =1 }^M \eta_n(\Gamma_{x_{i_k}, y_{j_k}}) + \sum_{k \ge n+1 } \eta_k(\Gamma_{x_{i_k}, y_{j_k}}) \\ &=& \sum_{i_k \ge \sqrt{i_n j_n}} m'_{i_k} + \sum_{j_k \ge \sqrt{i_n j_n}} m_{j_k} + \sum_{k \ge n+1 } \eta_k(\Gamma_{x_{i_k}, y_{j_k}}),\end{aligned}$$ and $$\eta_\infty (\Gamma) = \sum_{k=1}^\infty \eta_{k}(\Gamma_{x_{i_k}, y_{j_k}})=\lim_{n\rightarrow \infty}\sum_{k=1}^n \eta_{k}(\Gamma_{x_{i_k}, y_{j_k}}) =\lim_{n\rightarrow \infty}\sum_{k=1}^n \eta_{n}(\Gamma_{x_{i_k}, y_{j_k}})\le \lim_{n\rightarrow \infty}\eta_n(\Gamma)=\eta(\Gamma)<\infty.$$ Thus, since $\lim_{n\rightarrow \infty}i_nj_n=\infty$ and $\sum_{i=1}^M m'_i = \sum_{j=1}^N m_j < \infty$, it follows that $\lim_{n\rightarrow \infty}\| \eta_n - \eta_\infty \|=0$. Since $\eta_n$ is a good decomposition for each $n$, it follows that its limit $\eta_\infty$ is also a good decomposition of $T$. 
Moreover, if $\eta_{\infty}(\Gamma_{x_{i_k},y_{j_k}})>0$ for some $k$, then $\eta_{k}(\Gamma_{x_{i_k},y_{j_k}})>0$ by ([\[eqn: definition_eta_infinty\]](#eqn: definition_eta_infinty){reference-type="ref" reference="eqn: definition_eta_infinty"}). Thus, by Lemma [Lemma 9](#lem: ikjk){reference-type="ref" reference="lem: ikjk"} and the transitivity of "$\prec \hspace{-1mm}\prec$\", we have $\eta_k \prec \hspace{-1mm}\prec \eta$, which implies $$\eta (\Gamma_{x_{i_k},y_{j_k}})>0 \text{ and } S_{i_k,j_k}(\eta_\infty)=S_{i_k,j_k}(\eta_k)=S_{i_k,j_k}(\eta).$$ Therefore, $\eta_\infty \prec \hspace{-1mm}\prec \eta$. We now show that $\mathcal{A}_{\eta_\infty}(i_k, j_k)=\emptyset$ for each $k$. Assume that for some $k$, $\mathcal{A}_{\eta_\infty}(i_k, j_k)$ contains an element $(i_n,j_n)$. Then the definition of $\mathcal{A}_{\eta_\infty}(i_k, j_k)$ implies $n> k$ and $$C[(i_k,j_k), (i_n, j_n), \eta_\infty ] =0 \text{ and } s[(i_k,j_k),(i_n,j_n), \eta_\infty]=1.$$ By ([\[eqn: eta_n\_constant\]](#eqn: eta_n_constant){reference-type="ref" reference="eqn: eta_n_constant"}) and ([\[eqn: definition_eta_infinty\]](#eqn: definition_eta_infinty){reference-type="ref" reference="eqn: definition_eta_infinty"}), since $(i_n, j_n)$ has the largest order among the elements $$\{(i_k,j_k),(i_k,j_n),(i_n,j_k),(i_n,j_n)\},$$ it follows that $\eta_\infty=\eta_n$ on $\Gamma_{x_i, y_j}$ for each of these four pairs $(i,j)$. Thus, $$C[(i_k,j_k), (i_n, j_n), \eta_n ] =0 \text{ and } s[(i_k,j_k),(i_n,j_n), \eta_n]=1.$$ This shows $(i_n,j_n) \in \mathcal{A}_{\eta_n}(i_k, j_k)$, which contradicts $\mathcal{A}_{\eta_n}(i_k, j_k)=\emptyset$ from Lemma [Lemma 9](#lem: ikjk){reference-type="ref" reference="lem: ikjk"}. ◻ # Decomposition of cycle-free transport paths In this section, we will prove the decomposition theorem, Theorem [Theorem 17](#thm: decomposition){reference-type="ref" reference="thm: decomposition"}, using the better decomposition $\eta_\infty$ obtained in Theorem [Theorem 4](#thm: GoodS_ij){reference-type="ref" reference="thm: GoodS_ij"}. We first recall a concept that was introduced in [@boundary_payoff Definition 4.6]. **Definition 10**. Let $T=\underline{\underline{\tau}}(M,\theta,\xi)$ and $S=\underline{\underline{\tau}}(N,\phi,\zeta)$ be two real rectifiable $k$-currents. We say $S$ is on $T$ if $\mathcal{H}^k(N\setminus M)=0$, and $\phi(x)\le \theta(x)$ for $\mathcal{H}^k$ almost all $x\in N$. Note that when $S=\underline{\underline{\tau}}(N,\phi,\zeta)$ is on $T=\underline{\underline{\tau}}(M,\theta,\xi)$, then $\xi(x)=\pm \zeta(x)$ for $\mathcal{H}^k$ almost all $x\in N$, since two rectifiable sets have the same tangent almost everywhere on their intersection. Using this notion, we now introduce the concept of "cycle-free\" currents as follows: **Definition 11**. Let $T$ and $S$ be two real rectifiable $k$-currents. $S$ is called a cycle on $T$ if $S$ is on $T$ and $\partial S=0$. Also, $T$ is called *cycle-free* if, except for the zero current, there is no other cycle on $T$. The zero current is called the trivial cycle on $T$. **Remark 12**. The concept of "cycle-free\" is different from "acyclic\". A cycle-free current is automatically acyclic, but not vice versa. For instance, let $T$ be a transport path (which is a 1-current) from $\mu^- = \delta_{x_1} + \delta_{x_2}$ to $\mu^+ = \delta_{y_1} + \delta_{y_2}$ as shown below. Then $T$ is acyclic but not cycle-free. As an example, we first show that each optimal transport path is cycle-free. 
To do so, we start with an analogous result to [@boundary_payoff Theorem 4.7] as follows. **Proposition 13**. *Let $T\in Path(\mu^-, \mu^+)$ with $\mathbf{M}_{\alpha}(T)<\infty$ for some $0<\alpha<1$. Suppose there exists a rectifiable 1-current $S$ such that $S$ is on $T$ and $\partial S=0$, then for any $\epsilon \in [-1,1]$, $T + \epsilon S\in Path(\mu^-, \mu^+)$ and $$\min \left\{ \mathbf{M}_\alpha(T + S), \mathbf{M}_\alpha(T - S) \right\} \le \mathbf{M}_\alpha(T)$$ with the equality holds only when $S=0$.* *Proof.* The statements clearly hold if $S=0$. Thus, in the following, we may assume that $S$ is non-zero. Since $T\in Path(\mu^-, \mu^+)$ and $\partial S=0$, it holds that $\partial (T + \epsilon S) = \partial T + \epsilon\partial S = \partial T=\mu^+-\mu^-$. That is, $T+\epsilon S \in Path(\mu^-,\mu^+)$. Let $T=\underline{\underline{\tau}}(M,\theta,\xi)$ and $S=\underline{\underline{\tau}}(N,\phi,\zeta)$. Since $S$ is on $T$, we have $\mathcal{H}^1(N\setminus M)=0$, and $\phi(x)\le \theta(x)$ for $\mathcal{H}^1$ almost all $x\in N$. One may assume that $N=M$ by extending $\phi (x)=0$ and $\zeta (x)=\xi (x)$ for $x\in M\setminus N$. For $\epsilon \in [-1,1]$, we now consider the function $$g(\epsilon)=\mathbf{M}_\alpha (T+\epsilon S) =\int_{M}\left( \theta (x)+\epsilon\phi (x)\langle \xi (x),\zeta (x)\rangle \right) ^{\alpha }d\mathcal{H}^{1}(x).$$ Here, the value of the inner product is $\langle \xi (x),\zeta (x)\rangle =\pm 1$ for $\mathcal{H}^1-a.e.\ x\in M$. Since $\mathbf{M}_{\alpha}(T)=\int_M \theta^\alpha d\mathcal{H}^1<\infty$ and $\phi(x)\le \theta(x)$ for $\mathcal{H}^1$ almost all $x\in M$, we have for any $\epsilon\in (-1,1)$, $$g'(\epsilon)=\alpha\int_{M}\left( \theta (x)+\epsilon\phi (x)\langle \xi (x),\zeta (x)\rangle \right) ^{\alpha -1}\phi (x)\langle \xi (x),\zeta (x)\rangle d\mathcal{H}^{1}(x)$$ and $$g''(\epsilon)=\alpha(\alpha-1)\int_{M}\left( \theta (x)+\epsilon\phi (x)\langle \xi (x),\zeta (x)\rangle \right) ^{\alpha -2}\phi (x)^2 d\mathcal{H}^{1}(x)<0,$$ because $0<\alpha<1$ and $S$ is non-zero. This shows that $g(\epsilon)$ is a strictly concave function on $(-1,1)$. By the lower semi-continuity of $\mathbf{M}_{\alpha}$, $g(\epsilon)$ is lower semi-continuous at $\epsilon=\pm 1$. Thus, $\min\{g(-1),g(1)\}<g(0)$. That is, $\min \{ \mathbf{M}_\alpha({T}+{S}), \mathbf{M}_\alpha({T}-{S}) \} < \mathbf{M}_\alpha({T})$ whenever $S$ is on $T$, nonzero and $\partial S=0$. ◻ **Corollary 14**. *Suppose $T$ is an $\alpha$-optimal transport path from $\mu^-$ to $\mu^+$ for $0< \alpha<1$. Then $T$ is cycle-free.* *Proof.* Since $T$ is $\alpha$-optimal, it is acyclic and hence it has a good decomposition. Suppose $S$ is on $T$ and $\partial S=0$. Assume $S$ is non-zero, then $\min \{ \mathbf{M}_\alpha({T}+{S}), \mathbf{M}_\alpha({T}-{S}) \} < \mathbf{M}_\alpha({T})$, which contradicts with the $\mathbf{M}_{\alpha}$ optimality of $T$. Therefore, $S$ must be zero. Hence, $T$ is cycle-free. ◻ To characterize cycle-free transport paths, we consider their better decomposition. **Proposition 15**. *Each cycle-free transport path $T\in Path(\mu^-, \mu^+)$ has at least a better decomposition.* *Proof.* By definition, each cycle-free transport path is acyclic and hence has a good decomposition. By Theorem [Theorem 4](#thm: GoodS_ij){reference-type="ref" reference="thm: GoodS_ij"}, it has a better decomposition. ◻ **Proposition 16**. *Let $T\in Path(\mu^-, \mu^+)$ be a cycle-free transport path, and let $\eta$ be a better decomposition of $T$. 
For each $y_j \in \{y_1,y_2,\ldots,y_N\}$, denote $$\label{eqn: X_j} X_j(\eta) := \{x_i\in X : \eta(\Gamma_{x_i,y_j})>0\}.$$ Then for each pair $1\le j_1<j_2\le N$, $$\label{eqn: X_j_intersection} |X_{j_1}(\eta)\cap X_{j_2}(\eta)|\le 1,$$ i.e., the intersection $X_{j_1}(\eta)\cap X_{j_2}(\eta)$ is either empty or a single point.* *Proof.* Assume $|X_{j_1}(\eta)\cap X_{j_2}(\eta)| > 1$. Then there exist two distinct points $x_{i_1},x_{i_2} \in X_{j_1}(\eta)\cap X_{j_2}(\eta)$ with $i_1<i_2$. Thus, $$\label{eqn: eta_positive} \eta (\Gamma_{x_{i_1},y_{j_1}}) >0,\ \eta (\Gamma_{x_{i_1},y_{j_2}}) >0,\ \eta (\Gamma_{x_{i_2},y_{j_1}}) >0,\ \text{ and }\eta (\Gamma_{x_{i_2},y_{j_2}}) >0.$$ By ([\[eqn: equalsgn\]](#eqn: equalsgn){reference-type="ref" reference="eqn: equalsgn"}), this implies that $C[(i_1,j_1),(i_2,j_2), \eta]$ defined in ([\[eqn: C_def\]](#eqn: C_def){reference-type="ref" reference="eqn: C_def"}) is a cycle. Since $\eta$ is a better decomposition of $T$, by ([\[eqn: eta_positive\]](#eqn: eta_positive){reference-type="ref" reference="eqn: eta_positive"}), it follows that $C[(i_1,j_1),(i_2,j_2), \eta]$ is non-vanishing. Pick $$0 < \epsilon_0 \le \frac{1}{4} \min\{ \eta(\Gamma_{x_{i_1}, y_{j_1}}), \eta(\Gamma_{x_{i_1}, y_{j_2} }), \eta(\Gamma_{x_{i_2}, y_{j_1}}), \eta(\Gamma_{x_{i_2}, y_{j_2}}) \},$$ and observe that $$S = \epsilon_0 \cdot C[(i_1,j_1),(i_2,j_2), \eta]$$ is a non-vanishing cycle on $T$. Indeed, writing $T = \underline{\underline{\tau}}(M,\theta,\xi)$ and $S=\underline{\underline{\tau}}(N,\phi,\zeta)$, we have $N \subseteq M$ and for $\mathcal{H}^1$-a.e. $x$, $$\begin{aligned} \phi(x) &\le & \epsilon_0\left( \frac{\eta\lfloor_{\Gamma_{x_{i_1}, y_{j_1}}}}{\eta(\Gamma_{x_{i_1}, y_{j_1}})} + \frac{\eta\lfloor_{\Gamma_{x_{i_1}, y_{j_2}}} }{\eta(\Gamma_{x_{i_1}, y_{j_2}})} + \frac{\eta\lfloor_{\Gamma_{x_{i_2}, y_{j_1}}} }{\eta(\Gamma_{x_{i_2}, y_{j_1}})} + \frac{\eta\lfloor_{\Gamma_{x_{i_2}, y_{j_2}}} }{\eta(\Gamma_{x_{i_2}, y_{j_2}})} \right) \left(\{\gamma \in \Gamma : x \in \mathrm{Im}(\gamma) \} \right) \\ & \le & \epsilon_0 \left( \frac{1}{\eta(\Gamma_{x_{i_1}, y_{j_1}})}+ \frac{1}{\eta(\Gamma_{x_{i_1}, y_{j_2}})}+ \frac{1}{\eta(\Gamma_{x_{i_2}, y_{j_1}})}+ \frac{1}{\eta(\Gamma_{x_{i_2}, y_{j_2}})} \right) \eta \left(\{\gamma \in \Gamma : x \in \mathrm{Im}(\gamma) \} \right) \\ & \le & \eta\left(\{\gamma \in \Gamma : x \in \mathrm{Im}(\gamma) \} \right) = \theta(x),\end{aligned}$$ by equation ([\[eqn: theta(x)\]](#eqn: theta(x)){reference-type="ref" reference="eqn: theta(x)"}). This shows that $S$ is a non-vanishing cycle on $T$, which contradicts the assumption that $T$ is cycle-free. ◻ **Theorem 17**. *Let $T$ be a cycle-free transport path from $\mu^-$ to $\mu^+$, where $\mu^-$ and $\mu^+$ are given in ([\[eqn: measures\]](#eqn: measures){reference-type="ref" reference="eqn: measures"}). Then there exists a decomposition $$\label{eqn: decomposition_T} T=\sum_{j=0}^NT_j$$ such that* - *The set $\{x_1,x_2,\cdots, x_M\}$ can be expressed as the disjoint union of its subsets $\{B_j\}_{j=0}^N$ with the cardinality $|B_0|\le \binom{N}{2}$;* - *For each $j=1,2,\cdots, N$, $T_j$ is a single-target transport path from $$\mu_j^-:=\mu^-\lfloor_{B_j} \text{ to } \mu_j^+=\tilde{m}_j\delta_{y_j}$$ for some $0\le \tilde{m}_j:=\mu^-(B_j) \le m_j$. 
Each $T_j$ is a subcurrent of $T$.* - *$T_0$ is a transport path from $$\mu_0^-:=\mu^-\lfloor_{B_0} \text{ to } \mu_0^+=\sum_{j=1}^N (m_j-\tilde{m}_j)\delta_{y_j}.$$ $T_0$ is also a subcurrent of $T$.* Note that, by Theorem [Theorem 17](#thm: decomposition){reference-type="ref" reference="thm: decomposition"} , it follows that $$\label{eqn: decompositions} \mu^-=\sum_{j=0}^N\mu_{j}^- \ \text{ and }\mu^+=\sum_{j=0}^N\mu_{j}^+.$$ *Proof.* Let $\eta$ be a better decomposition of $T$, and $X_j(\eta)$ be the set as defined in ([\[eqn: X_j\]](#eqn: X_j){reference-type="ref" reference="eqn: X_j"}). Denote $$\label{eqn: B_0} B_0:=\bigcup_{1\le j_1<j_2\le N}\left(X_{j_1}(\eta)\cap X_{j_2}(\eta)\right)$$ and for each $1\le j\le N$, denote $$B_j:=X_j(\eta)\setminus B_0.$$ Then $\{B_j\}_{j=0}^N$ are pairwise disjoint. Moreover, by ([\[eqn: X_j\_intersection\]](#eqn: X_j_intersection){reference-type="ref" reference="eqn: X_j_intersection"}), $|B_0|\le \binom{N}{2}$. Define $$T_0:=\sum_{j=1}^N \sum_{x_i \in B_0} \int_{\Gamma_{x_i,y_j} } I_\gamma \, d\eta,$$ and for each $1\le j\le N$, denote $$T_j:= \sum_{x_i \in B_j} \int_{\Gamma_{x_i,y_j} } I_\gamma \, d\eta.$$ Then each $T_j$ is a subcurrent of $T$ for $0\le j \le N$ and $$\begin{aligned} T &=& \sum_{j=1}^N \sum_{i=1}^M \int_{\Gamma_{x_i,y_j} } I_\gamma \, d\eta = \sum_{j=1}^N \left( \sum_{x_i \in B_j} \int_{\Gamma_{x_i,y_j} } I_\gamma \, d\eta + \sum_{x_i \in B_0} \int_{\Gamma_{x_i,y_j} } I_\gamma \, d\eta \right) \\ &=& \sum_{j=1}^N \sum_{x_i \in B_j} \int_{\Gamma_{x_i,y_j} } I_\gamma \, d\eta + \sum_{j=1}^N \sum_{x_i \in B_0} \int_{\Gamma_{x_i,y_j} } I_\gamma \, d\eta \\ &=& \sum_{j=1}^N T_j \ + T_0=\sum_{j=0}^N T_j.\end{aligned}$$ For each $1\le j\le N$, $T_j$ is a single-target transport path with $$\partial T_j=\sum_{x_i \in B_j} \int_{\Gamma_{x_i,y_j} } (\delta_{y_j}-\delta_{x_i}) \, d\eta = \left(\sum_{x_i \in B_j}\eta(\Gamma_{x_i,y_j}) \right) \delta_{y_j}- \sum_{x_i \in B_j} \eta(\Gamma_{x_i,y_j}) \delta_{x_i}.$$ Note that when $x_i \in B_j$, since $\{B_k\}$'s are pairwise disjoint, it follows that $\eta (\Gamma_{x_i,y_k})=0$ for all $k\ne j$. So, $$\sum_{x_i \in B_j} \eta(\Gamma_{x_i,y_j}) \delta_{x_i}=\sum_{x_i \in B_j} \left(\sum_{k=1}^N\eta(\Gamma_{x_i,y_k})\right)\delta_{x_i}=\sum_{x_i \in B_j} \mu^-(\{x_i\})\delta_{x_i}=\mu^-\lfloor_{B_j}=\mu_j^-,$$ and $$\left(\sum_{x_i \in B_j}\eta(\Gamma_{x_i,y_j}) \right) \delta_{y_j}=\mu^-(B_j)\delta_{y_j}=\mu_j^+.$$ As a result, $\partial T_j=\mu_j^+-\mu_j^-$. 
Moreover, we have the result, $$\begin{aligned} \label{eqn: boundary_T_0} \partial T_0 &=& \sum_{j=1}^N \sum_{x_i \in B_0} \int_{\Gamma_{x_i,y_j} } (\delta_{y_j}-\delta_{x_i}) \, d\eta \\ &=& \nonumber \sum_{j=1}^N \left(\sum_{x_i \in B_0} \eta(\Gamma_{x_i,y_j})\right) \delta_{y_j}- \sum_{x_i \in B_0}\left(\sum_{j=1}^N \eta(\Gamma_{x_i,y_j})\right) \delta_{x_i}\\ &=& \nonumber \sum_{j=1}^N \left(\sum_{x_i \in B_0\cap X_j(\eta)} \eta(\Gamma_{x_i,y_j})\right) \delta_{y_j}- \sum_{x_i \in B_0} \mu^-(\{x_i\})\delta_{x_i}\\ &=& \nonumber \sum_{j=1}^N \left(\sum_{x_i \in X_j(\eta)} \eta(\Gamma_{x_i,y_j})-\sum_{x_i \in B_j} \eta(\Gamma_{x_i,y_j})\right) \delta_{y_j}- \mu^-\lfloor_{B_0} \\ &=&\nonumber \sum_{j=1}^N \left(\sum_{i=1}^M \eta(\Gamma_{x_i,y_j})-\sum_{x_i \in B_j} \eta(\Gamma_{x_i,y_j})\right) \delta_{y_j}- \mu^-\lfloor_{B_0} \\ &=& \nonumber \sum_{j=1}^N \left(m_j- \mu^-(B_j)\right) \delta_{y_j}- \mu^-\lfloor_{B_0} \\ &=& \nonumber \mu_0^+-\mu_0^-.\end{aligned}$$ ◻ # Transport Paths induced Transport Maps and Transport Plans In this section, we will decompose a cycle-free transport path into the sum of two transport paths, the first one is induced by a compatible transport map, while the second one is induced by a compatible transport plan. We first recall the concept of compatibility introduced in [@xia1 Definition 7.1], and rewrite it in terms of our current contexts. Suppose $\mu^-$ and $\mu^+$ are two atomic measures of equal finite mass as given in ([\[eqn: measures\]](#eqn: measures){reference-type="ref" reference="eqn: measures"}). Let $Path_0(\mu^-, \mu^+)$ denote the family of all cycle-free transport paths from $\mu^-$ to $\mu^+$. **Remark 18**. In [@xia1 Definition 7.1], we used $Path_0(\mu^-, \mu^+)$ to denote the family of all "acyclic\" transport paths from $\mu^-$ to $\mu^+$. In [@xia1], a transport path $G$ is called "acyclic\" if it satisfies the following condition: *for any polyhedral 1-chain $\tilde{G}$ with the support of $\tilde{G}$ contained in the support of $G$, if $\partial \tilde{G}=0$ then $\tilde{G}=0$.* In the current context, $G$ is an "acyclic\" transport path simply means that it is cycle-free. To avoid confusion between the term "acyclic\" used in [@xia1] and the acyclic concept defined using subcurrents in [@Paolini], we opt for the term "cycle-free\" to name the term "acyclic\" used in [@xia1]. Observe that for any $G\in Path_0(\mu^-, \mu^+)$ and for each $x_i$ and $y_j$, there exists at most one directed polyhedral curve $g_{ij}$ from $x_i$ to $y_j$, supported on the support of $G$. Thus, we associate each $G\in Path_0(\mu^-, \mu^+)$ with a $M\times N$ polyhedral 1-chain valued matrix $g= \begin{bmatrix} I_{g_{ij}} \end{bmatrix}$, such that $I_{g_{ij}}=0$ when $g_{ij}$ does not exist. **Definition 19**. ([@xia1 Definition 7.1]) [\[def: atomic_compatibility\]]{#def: atomic_compatibility label="def: atomic_compatibility"} Let $G\in Path_0(\mu^-, \mu^+)$ and $q \in Plan(\mu^-, \mu^+)$ with associated matrices $\begin{bmatrix} I_{g_{ij}} \end{bmatrix}$ and $\begin{bmatrix} q_{ij} \end{bmatrix}$ respectively. The pair $(G,q)$ is called *compatible* if $q_{ij} = 0$ whenever $I_{g_{ij}} = 0$ and $$\label{eqn: comptaible_currents} G = \sum_{i=1}^M \sum_{j=1}^N q_{ij} I_{g_{ij}} \text{ and } q=\sum_{i=1}^M \sum_{j=1}^N q_{ij} \delta_{(x_i, y_j)}$$ as polyhedral 1-chains. **Example 3**. 
*For instance, let $$\mu^- = \frac{1}{4} \delta_{x_1} + \frac{3}{4} \delta_{x_2},\ \mu^+ = \frac{5}{8} \delta_{y_1} + \frac{3}{8}\delta_{y_2} ,$$ and consider the following transport plan, $$q= \frac{1}{8} \delta_{(x_1, y_1)} + \frac{1}{8} \delta_{(x_1, y_2)} +\frac{1}{2} \delta_{(x_2, y_1)} +\frac{1}{4} \delta_{(x_2, y_2)} \in Plan (\mu^-, \mu^+) .$$ Let $G_1$ and $G_2$ be two transport paths as illustrated in the following figure.* *Then $(G_1, q)$ is compatible but $(G_2, q)$ is not, since $q_{12}=\frac{1}{8}\neq 0$ and there is no directed curve $g_{12}$ from $x_1$ to $y_2$ on the support of $G_2$.* Now, we generalize the compatibility stated above for atomic measures $\mu^-, \mu^+$ to general measures. **Definition 20**. Let $\mu$ and $\nu$ be two Radon measures on $X$ of equal total mass. Given $T \in Path (\mu,\nu)$ and $\pi \in Plan (\mu,\nu)$, we say the pair $(T, \pi)$ is compatible if there exists a finite Borel measure $\eta$ on $\Gamma$ such that $$T = \int_\Gamma I_\gamma d\eta, \text{ and } \pi = \int_\Gamma \delta_{ ( p_0(\gamma), p_\infty(\gamma) ) } d\eta .$$ Moreover, given $T \in Path (\mu,\nu)$ and $\varphi \in Map (\mu,\nu)$, we say the pair $(T,\varphi)$ is compatible if $(T, \pi_\varphi)$ is compatible, where $\pi_\varphi = (id \times \varphi)_{\#}\mu$. The following proposition shows that Definition [Definition 20](#def: Radon_compatibility){reference-type="ref" reference="def: Radon_compatibility"} is a generalization of Definition [\[def: atomic_compatibility\]](#def: atomic_compatibility){reference-type="ref" reference="def: atomic_compatibility"}. **Proposition 21**. *Let $\mu^-$ and $\mu^+$ be two atomic measures of equal mass as given in ([\[eqn: measures\]](#eqn: measures){reference-type="ref" reference="eqn: measures"}). Let $G\in Path_0 (\mu^-, \mu^+)$ and $q \in Plan(\mu^-, \mu^+)$. Then $(G,q)$ is compatible in the sense of Definition [\[def: atomic_compatibility\]](#def: atomic_compatibility){reference-type="ref" reference="def: atomic_compatibility"} if and only if $(G,q)$ is compatible in the sense of Definition [Definition 20](#def: Radon_compatibility){reference-type="ref" reference="def: Radon_compatibility"}.* *Proof.* Suppose $(G, q)$ is compatible in the sense of Definition [\[def: atomic_compatibility\]](#def: atomic_compatibility){reference-type="ref" reference="def: atomic_compatibility"}. By setting $$\eta = \sum_{i=1}^M \sum_{j=1}^N q_{ij} \delta_{g_{_{ij}}},$$ where the sum is taken over all pairs $\{1\le i\le M, 1\le j\le N\}$ for which $g_{ij}$ exists, equation ([\[eqn: comptaible_currents\]](#eqn: comptaible_currents){reference-type="ref" reference="eqn: comptaible_currents"}) gives that $$G = \int_\Gamma I_\gamma d\eta \text{ and } q = \int_\Gamma \delta_{ ( p_0(\gamma), p_\infty(\gamma) ) } d\eta .$$ Therefore, $(G,q)$ is also compatible in the sense of Definition [Definition 20](#def: Radon_compatibility){reference-type="ref" reference="def: Radon_compatibility"}. On the other hand, suppose $(G,q)$ is compatible in the sense of Definition [Definition 20](#def: Radon_compatibility){reference-type="ref" reference="def: Radon_compatibility"}. Then there exists a Borel measure $\eta$ on $\Gamma$ such that $$G = \int_\Gamma I_\gamma d\eta \text{ and } q = \int_\Gamma \delta_{ ( p_0(\gamma), p_\infty(\gamma) ) } d\eta.$$ Since $q\in Plan (\mu^-, \mu^+)$, we may write $$q= \sum_{i=1}^M \sum_{j=1}^N q_{ij} \delta_{(x_i, y_j)}$$ for some $q_{ij}\ge 0$. 
Denote $$J_q:=\{(i,j): 1\le i\le M, 1\le j \le N, \text{ with } q_{ij}>0 \}.$$ and $$\tilde{\Gamma}:=\bigcup_{(i,j)\in J_q} \Gamma_{x_i, y_j}.$$ Since $$\int_{\Gamma\setminus \tilde{\Gamma}}\delta_{ ( p_0(\gamma), p_\infty(\gamma) ) } d\eta+\int_{ \tilde{\Gamma}} \delta_{ ( p_0(\gamma), p_\infty(\gamma) ) } d\eta = \int_\Gamma \delta_{ ( p_0(\gamma), p_\infty(\gamma) ) } d\eta= q= \sum_{i=1}^M \sum_{j=1}^N q_{ij} \delta_{(x_i, y_j)}=\sum_{(i,j)\in J_q} q_{ij} \delta_{(x_i, y_j)} ,$$ it follows that $$\int_{\Gamma\setminus \tilde{\Gamma}}\delta_{ ( p_0(\gamma), p_\infty(\gamma) ) } d\eta=0 \text{ and } \int_{ \tilde{\Gamma}} \delta_{ ( p_0(\gamma), p_\infty(\gamma) ) } d\eta= \sum_{(i,j)\in J_q} q_{ij} \delta_{(x_i, y_j)}.$$ Thus, $\eta (\Gamma\setminus \tilde{\Gamma} )=0$ and $$q=\sum_{(i,j)\in J_q} \int_{ \Gamma_{x_i, y_j}} \delta_{ ( p_0(\gamma), p_\infty(\gamma) ) } d\eta= \sum_{(i,j)\in J_q} q_{ij} \delta_{(x_i, y_j)}.$$ Hence for each $1\le i\le M, 1\le j \le N$, $$\eta(\Gamma_{x_i, y_j})=q_{ij} \text{ if } (i,j)\in J_q \text{ and } \eta(\Gamma_{x_i, y_j})=0 \text{ if not.}$$ Now, for each $(i,j)\in J_q$, since $\eta(\Gamma_{x_i, y_j})=q_{ij}>0$ and $$G= \int_\Gamma I_\gamma d\eta=\sum_{(i,j)\in J_q}\int_{\Gamma_{x_i, y_j}} I_\gamma d\eta,$$ it follows that there exists a polyhedral 1-curve $g_{ij}$ supported on the support of $G$. Let $$\tilde{G} = \sum_{(i,j)\in J_q} q_{ij} I_{g_{_{ij}}},$$ then $$\partial(G-\tilde{G})=\partial \left(\sum_{(i,j)\in J_q}\int_{\Gamma_{x_i, y_j}} I_\gamma d\eta-\sum_{(i,j)\in J_q} q_{ij} I_{g_{_{ij}}} \right) =\sum_{(i,j)\in J_q} \left( \eta \left(\Gamma_{x_i, y_j}\right) - q_{ij} \right) \left( \delta_{y_j}-\delta_{x_i} \right)=0,$$ so that $G-\tilde{G}$ is a cycle supported on the support of $G$. Since $G\in Path_0(\mu^-, \mu^+)$, we have $G-\tilde{G}=0$. Therefore, $$G=\tilde{G}= \sum_{(i,j)\in J_q} q_{ij} I_{g_{_{ij}}}.$$ Note also that whenever $I_{g_{ij}}=0$, it follows that $(i,j)\not\in J_q$, and thus $q_{ij}=0$. As a result, $(G, q)$ is compatible in the sense of Definition [\[def: atomic_compatibility\]](#def: atomic_compatibility){reference-type="ref" reference="def: atomic_compatibility"}. ◻ By Theorem [Theorem 17](#thm: decomposition){reference-type="ref" reference="thm: decomposition"}, we now have the following theorem: **Theorem 22**. *Let $T\in Path(\mu^-,\mu^+)$ be a cycle-free transport path, where $\mu^-$ and $\mu^+$ are given in ([\[eqn: measures\]](#eqn: measures){reference-type="ref" reference="eqn: measures"}). Then there exist* - *decomposition $$\mu^- = \mu_\pi^- + \mu_\varphi^-, \ \mu^+ = \mu_\pi^+ + \mu_\varphi^+,\text{ with } \mu_\pi^-(X)=\mu_\pi^+(X),\ \mu_\varphi^-(X)=\mu_\varphi^+(X)\ $$ where $\mu_\pi^-$ and $\mu_\varphi^-$ have disjoint supports and $|spt(\mu_\pi^-)|\le \binom{N}{2}$ with $|A|$ denoting the cardinality of the set $A$;* - *$T=T_{\pi}+T_{\varphi}$ for some $T_\pi \in Path\left(\mu_\pi^-, \mu_\pi^+\right)$ and $T_\varphi \in Path\left(\mu_\varphi^-, \mu_\varphi^+ \right)$. 
Both $T_\pi$ and $T_\varphi$ are subcurrents of $T$;* - *a transport map $\varphi \in Map\left(\mu_\varphi^{-}, \mu_\varphi^+\right)$ such that $(T_\varphi, \varphi)$ is compatible;* - *a transport plan $\pi\in Plan\left(\mu_\pi^-, \mu_\pi^+\right)$ such that $(T_\pi, \pi)$ is compatible;* - *For each $x_i$ with $\mu_\pi^{-} (\{x_i\}) >0$, there are at least two $y_{j_1},y_{j_2}$, such that $$\pi(\{x_i\} \times \{y_{j_1}\})>0, \pi(\{x_i\} \times \{y_{j_2}\})>0.$$* *Proof.* We continue with the same notations used in Theorem [Theorem 17](#thm: decomposition){reference-type="ref" reference="thm: decomposition"}. Part (a),(b) follows from ([\[eqn: decomposition_T\]](#eqn: decomposition_T){reference-type="ref" reference="eqn: decomposition_T"}) and ([\[eqn: decompositions\]](#eqn: decompositions){reference-type="ref" reference="eqn: decompositions"}) by setting $$\mu_\pi^- := \mu_0^-, \ \mu_\varphi^- := \sum_{j=1}^N \mu_j^-,\ \mu_\pi^+ := \mu_0^+, \ \mu_\varphi^+ := \sum_{j=1}^N \mu_j^+,\ T_\pi := T_0,\ T_\varphi := \sum_{j=1}^N T_j .$$ For part (c), we define $$\varphi := \sum_{j=1}^N y_j \chi_{_{B_j}},$$ where $B_j$'s are subsets of $\{x_1,x_2,\cdots, x_M\}$ given in Theorem [Theorem 17](#thm: decomposition){reference-type="ref" reference="thm: decomposition"}. Since $\mu_j^- = \mu^- \lfloor_{B_j}, \ \mu_j^+ = \tilde{m}_j \delta_{y_j}$, and $B_j$'s are pairwise disjoint for $j=1,2,\ldots,N$, we get $$\varphi_{\#} (\mu_\varphi^-) = \varphi_{\#} \left(\sum_{j=1}^N \mu_j^- \right)=\varphi_{\#} \left(\sum_{j=1}^N \mu^- \lfloor_{B_j}\right) = \sum_{j=1}^N \mu^- (B_j)\delta_{y_j} = \sum_{j=1}^N \tilde{m}_j\delta_{y_j}=\mu_\varphi^+.$$ Therefore, $\varphi$ is a transport map from $\mu_\varphi^-$ to $\mu_\varphi^+$. We now show that $(T_\varphi, \varphi)$ is compatible. Since $$T_\varphi := \sum_{j=1}^N T_j = \sum_{j=1}^N\sum_{x_i \in B_j} \int_{\Gamma_{x_i,y_j} } I_\gamma \, d\eta,$$ it is sufficient to show that $$\label{eqn: pi_varphi} \pi_\varphi := \left(id \times \varphi\right)_{\#}\mu^- = \sum_{j=1}^N\sum_{x_i \in B_j} \int_{\Gamma_{x_i,y_j} } \delta_{ (p_0(\gamma) , p_\infty(\gamma) )}\, d\eta.$$ Indeed, for any measurable rectangle $Q\times R$ in $X\times X$, $$\begin{aligned} \pi_\varphi(Q\times R) &=& (id \times \varphi )_{\#} \mu^-(Q\times R)= \mu^-(\{x: x\in Q, \varphi(x)\in R\})\\ &=& \sum_{j=1}^N \mu^-(\{x: x\in Q, \varphi(x)=y_j, y_j\in R\}) = \sum_{j=1}^N \chi_R(y_j)\mu^-(\{x: x\in Q, \varphi(x)=y_j\})\\ &=& \sum_{j=1}^N \chi_R(y_j)\mu^-(\{x: x\in Q, x\in B_j\}) = \sum_{j=1}^N \chi_R(y_j)\mu^-(Q\cap B_j)\\ &=& \sum_{j=1}^N \chi_R(y_j)\left((p_0)_\#\eta\right)(Q\cap B_j) = \sum_{j=1}^N \chi_R(y_j)\eta( p_0^{-1}(Q\cap B_j) )\\ &=& \sum_{j=1}^N \chi_R(y_j)\eta(\{\gamma\in \Gamma, p_0(\gamma)\in Q\cap B_j\})= \sum_{j=1}^N \chi_R(y_j) \sum_{x_i \in B_j} \int_{\Gamma_{x_i,y_j} }\chi_Q(p_0(\gamma)) d \eta\\ &=&\sum_{j=1}^N \sum_{x_i \in B_j} \int_{\Gamma_{x_i,y_j} } \chi_Q(p_0(\gamma)) \cdot \chi_R(y_j)d \eta = \sum_{j=1}^N \sum_{x_i \in B_j} \int_{\Gamma_{x_i,y_j} } \chi_Q(p_0(\gamma)) \cdot \chi_R(p_\infty(\gamma))d \eta\\ &=& \sum_{j=1}^N \sum_{x_i \in B_j} \int_{\Gamma_{x_i,y_j}, x_i \in Q, y_j \in R } \delta_{ p_0(\gamma)} \cdot \delta_{p_\infty(\gamma)} d \eta = \sum_{j=1}^N\sum_{x_i \in B_j} \int_{\Gamma_{x_i,y_j} } \delta_{ (p_0(\gamma) , p_\infty(\gamma) )}\, d\eta(Q\times R).\end{aligned}$$ Therefore, ([\[eqn: pi_varphi\]](#eqn: pi_varphi){reference-type="ref" reference="eqn: pi_varphi"}) holds and hence $(T_\varphi, \varphi)$ is compatible. 
For part (d), we define $$\pi := \sum_{x_i\in B_0} \sum_{j=1}^N \eta\left( \Gamma_{x_i,y_j} \right) \delta_{(x_i,y_j)} .$$ As shown in ([\[eqn: boundary_T\_0\]](#eqn: boundary_T_0){reference-type="ref" reference="eqn: boundary_T_0"}), $$\mu_\pi^+-\mu_\pi^-=\mu_0^+-\mu_0^-= \sum_{j=1}^N \left(\sum_{x_i \in B_0} \eta(\Gamma_{x_i,y_j})\right) \delta_{y_j}- \sum_{x_i \in B_0}\left(\sum_{j=1}^N \eta(\Gamma_{x_i,y_j})\right) \delta_{x_i}.$$ Since the two marginals of $\pi$ are precisely $\sum_{x_i \in B_0}\left(\sum_{j=1}^N \eta(\Gamma_{x_i,y_j})\right) \delta_{x_i}$ and $\sum_{j=1}^N \left(\sum_{x_i \in B_0} \eta(\Gamma_{x_i,y_j})\right) \delta_{y_j}$, this shows that $\pi$ is a transport plan from $\mu_\pi^-$ to $\mu_\pi^+$. Note that since $$T_0=\sum_{j=1}^N \sum_{x_i \in B_0} \int_{\Gamma_{x_i,y_j} } I_\gamma \, d\eta$$ and $$\pi = \sum_{x_i\in B_0} \sum_{j=1}^N \eta \left( \Gamma_{x_i,y_j} \right) \delta_{(x_i,y_j)} = \sum_{j=1}^N \sum_{x_i \in B_0} \int_{\Gamma_{x_i,y_j} } \delta_{ ( p_0(\gamma), p_\infty(\gamma) ) } \, d\eta,$$ we conclude that $(T_{\pi}, \pi)$ is compatible. For part (e), by definition of $\mu_\pi^-$, every $x_i$ with $\mu_\pi^{-} (\{x_i\}) >0$ belongs to the set $B_0$ defined in Theorem [Theorem 17](#thm: decomposition){reference-type="ref" reference="thm: decomposition"}. The result in (e) then follows from the definition of $B_0$ given in ([\[eqn: B_0\]](#eqn: B_0){reference-type="ref" reference="eqn: B_0"}). ◻ # Stair-shaped matrices and Decomposition of stair-shaped transport paths In Theorem [Theorem 22](#thm: compatability){reference-type="ref" reference="thm: compatability"}, we decomposed a cycle-free transport path as the sum of a map-compatible path and a plan-compatible path. In this section, we aim to decompose some transport paths as the difference of two map-compatible paths. The transport paths that we are interested in here are the stair-shaped transport paths. To do this, we start with the study of stair-shaped matrices. ## Stair-shaped matrices   Given $M, N\in \mathbb{N}\cup \{\infty\}$, let $\mathcal{A}_{M,N}$ denote the collection of all $M\times N$ matrices with non-negative entries. **Definition 23**. A matrix $A \in \mathcal{A}_{M,N}$ is called stair-shaped if there exist two non-decreasing sequences of natural numbers $\{r_1,r_2,\cdots,r_{M+N-1}\}$ and $\{c_1,c_2,\cdots, c_{M+N-1}\}$ with $r_k+c_k=k+1$ for each $k=1,2,\cdots, M+N-2$, such that all entries of $A$ that are not located at the positions $\{(r_k, c_k)\}_{k=1}^{M+N-1}$ are equal to zero. Note that when $A \in \mathcal{A}_{M,N}$ is stair-shaped, then $(r_1,c_1)=(1,1)$ and $(r_{M+N-1},c_{M+N-1})=(M,N)$. **Definition 24**. For each $k=1,2,\cdots, M+N-1$, a matrix $A\in \mathcal{A}_{M,N}$ is called $k$-stairable if it is in the form of $$A = \begin{bmatrix} a_{11} & \cdots &a_{1,c -1} &a_{1,c} &0& \cdots & 0 & \cdots \\ \vdots & &\vdots &\vdots& \vdots & & \vdots \\ a_{r-1,1} & \cdots &a_{r-1,c -1} &a_{r-1,c} &0& \cdots & 0 & \cdots \\ a_{r,1} & \cdots &a_{r, c-1}&a_{r, c} &a_{r, c+1}& \cdots & a_{r, j} & \cdots \\ 0 & \cdots &0 &a_{r+1, c} &a_{r+1, c+1}& \cdots & a_{r+1, j} & \cdots \\ \vdots & &\vdots & \vdots & \vdots & & \vdots \\ 0 & \cdots &0 &a_{i, c} &a_{i, c+1} &\cdots & a_{i, j} & \cdots \\ \vdots & &\vdots & \vdots & \vdots & & \vdots \\ \end{bmatrix},$$ where the leading (i.e., upper left corner) sub-matrix $$\begin{bmatrix} a_{11} & \cdots &a_{1,c -1} &a_{1,c} \\ \vdots & &\vdots &\vdots \\ a_{r-1,1} & \cdots &a_{r-1,c -1} &a_{r-1,c} \\ a_{r,1} & \cdots &a_{r,c-1}&a_{r, c} \\ \end{bmatrix}$$ is stair-shaped and $k=r+c-1$. In particular, each matrix $A\in \mathcal{A}_{M,N}$ is at least $1$-stairable, and each stair-shaped matrix $A\in \mathcal{A}_{M,N}$ is $(M+N-1)$-stairable.
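To illustrate these two definitions with a small concrete instance: for $M=N=3$, any matrix of the form $$\begin{bmatrix} a_{11} & a_{12} & 0 \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{bmatrix}$$ with non-negative entries is stair-shaped, as witnessed by the positions $(r_k,c_k)_{k=1}^{5}=(1,1),(1,2),(2,2),(2,3),(3,3)$, and it is therefore $5$-stairable.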
For each $1\le i_1< i_2\le M$ and $1\le j_1< j_2 \le N$, denote $E [ (i_1,j_1),(i_2,j_2 ) ]$ as the $M \times N$ matrix with $1$ at $(i_1,j_1)$ and $(i_2,j_2)$ entries, with $-1$ at $(i_1,j_2)$ and $(i_2,j_1)$ entries, and $0$ at all other entries. Each $E [ (i_1,j_1),(i_2,j_2 ) ]$ is called an elementary matrix. **Definition 25**. For any two matrices $A, B\in \mathcal{A}_{M,N}$, we say $A\cong B$ if there exists a list of real numbers $\{t_k\}_{k=1}^K$ and a list of elementary matrices $\{E_k\}_{k=1}^K$ such that $B = A + \sum_{k=1}^K t_k E_k$ for some $K\in \mathbb{N} \cup \{ \infty \}$. **Theorem 26**. *For any matrix $A\in \mathcal{A}_{M,N}$, there exists a stair-shaped matrix $B\in \mathcal{A}_{M,N}$ such that $A \cong B$.* *Proof.* **Step 1:** Let $$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1j} & \cdots \\ a_{21} & a_{22} & \cdots & a_{2j} & \cdots \\ \vdots & \vdots & & \vdots & \\ a_{i1} & a_{i2} & \cdots & a_{ij} & \cdots \\ \vdots & \vdots & & \vdots & \\ \end{bmatrix},$$ and $$u_1 = \sum_{i=2}^M a_{i 1} \text{ and } v_1 = \sum_{j=2}^N a_{1 j} .$$ If $u_1 = 0$, and since all entries in $A$ are non-negative, then we get $$A_1 = A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1j} & \cdots \\ 0 & a_{22} & \cdots & a_{2j} & \cdots \\ \vdots & \vdots & & \vdots & \\ 0 & a_{i2} & \cdots & a_{ij} & \cdots \\ \vdots & \vdots & & \vdots & \\ \end{bmatrix} .$$ If $u_1 \not=0$, and $u_1 \ge v_1$ then we do the following transformation and denote $$A_1 = A + \sum_{i=2}^\infty \sum_{j=2}^\infty \frac{a_{i1} a_{1j}}{u_1} E [ (1,1),(i,j ) ] .$$ This implies $$\begin{aligned} A_1 &=& \begin{bmatrix} a_{11} + \sum_{i=2}^\infty \sum_{j=2}^\infty \frac{a_{i1} a_{1j}}{u_1} & a_{12} - \sum_{i=2}^\infty \frac{a_{i1} a_{12}}{u_1}& \cdots & a_{1j} - \sum_{i=2}^\infty \frac{a_{i1} a_{1j}}{u_1}& \cdots \\ a_{21} - \sum_{j=2}^\infty \frac{a_{21} a_{1j}}{u_1} & a_{22} + \frac{a_{21} a_{12}}{u_1} & \cdots & a_{2j} + \frac{a_{21} a_{1j}}{u_1} & \cdots \\ \vdots & \vdots & & \vdots & \\ a_{i1} - \sum_{j=2}^\infty \frac{a_{i1} a_{1j}}{u_1} & a_{i2} + \frac{a_{i1} a_{12}}{u_1}& \cdots & a_{ij}+ \frac{a_{i1} a_{1j}}{u_1} & \cdots \\ \vdots & \vdots & & \vdots & \\ \end{bmatrix} \\ &=& \begin{bmatrix} a_{11} + v_1 & 0 & \cdots & 0 & \cdots \\ \left(1-\frac{v_1}{u_1}\right) a_{21} & a_{22} + \frac{a_{21} a_{12}}{u_1} & \cdots & a_{2j} + \frac{a_{21} a_{1j}}{u_1} & \cdots \\ \vdots & \vdots & & \vdots & \\ \left(1-\frac{v_1}{u_1}\right)a_{i1} & a_{i2} + \frac{a_{i1} a_{12}}{u_1}& \cdots & a_{ij}+ \frac{a_{i1} a_{1j}}{u_1} & \cdots \\ \vdots & \vdots & & \vdots & \\ \end{bmatrix} . 
\\\end{aligned}$$ If $u_1 \not= 0$, and $u_1 \le v_1$, we consider the following transformation: $$A_1 = A + \sum_{i=2}^\infty \sum_{j=2}^\infty \frac{a_{i1} a_{1j}}{v_1} E [ (1,1),(i,j ) ] ,$$ and $$\begin{aligned} A_1 &=& \begin{bmatrix} a_{11} + \sum_{i=2}^\infty \sum_{j=2}^\infty \frac{a_{i1} a_{1j}}{v_1} & a_{12} - \sum_{i=2}^\infty \frac{a_{i1} a_{12}}{v_1}& \cdots & a_{1j} - \sum_{i=2}^\infty \frac{a_{i1} a_{1j}}{v_1}& \cdots \\ a_{21} - \sum_{j=2}^\infty \frac{a_{21} a_{1j}}{v_1} & a_{22} + \frac{a_{21} a_{12}}{v_1} & \cdots & a_{2j} + \frac{a_{21} a_{1j}}{v_1} & \cdots \\ \vdots & \vdots & & \vdots & \\ a_{i1} - \sum_{j=2}^\infty \frac{a_{i1} a_{1j}}{v_1} & a_{i2} + \frac{a_{i1} a_{12}}{v_1}& \cdots & a_{ij}+ \frac{a_{i1} a_{1j}}{v_1} & \cdots \\ \vdots & \vdots & & \vdots & \\ \end{bmatrix} \\ &=& \begin{bmatrix} a_{11} + u_1 & \left(1-\frac{u_1}{v_1}\right)a_{12} & \cdots & \left(1-\frac{u_1}{v_1}\right)a_{1j} & \cdots \\ 0 & a_{22} + \frac{a_{21} a_{12}}{v_1} & \cdots & a_{2j} + \frac{a_{21} a_{1j}}{v_1} & \cdots \\ \vdots & \vdots & & \vdots & \\ 0 & a_{i2} + \frac{a_{i1} a_{12}}{v_1}& \cdots & a_{ij}+ \frac{a_{i1} a_{1j}}{v_1} & \cdots \\ \vdots & \vdots & & \vdots & \\ \end{bmatrix} . \\\end{aligned}$$ Hence, $A \cong A_1$ where $A_1$ is of the form: $$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1j} & \cdots \\ 0 & a_{22} & \cdots & a_{2j} & \cdots \\ \vdots & \vdots & & \vdots & \\ 0 & a_{i2} & \cdots & a_{ij} & \cdots \\ \vdots & \vdots & & \vdots & \\ \end{bmatrix} \text{ or } \begin{bmatrix} a_{11} & 0 & \cdots & 0 & \cdots \\ a_{21} & a_{22} & \cdots & a_{2j} & \cdots \\ \vdots & \vdots & & \vdots & \\ a_{i1} & a_{i2} & \cdots & a_{ij} & \cdots \\ \vdots & \vdots & & \vdots & \\ \end{bmatrix}$$ and $(r_1,c_1) = (1,1)$. Here and in the following steps, for simplicity of notations, we continue using the same notation, $a_{ij}$'s, to denote non-negative entries. **Step 2:** Set $A_1 = f(A)$, note that $A_1 \cong A$ is $1$-stairable. For each $k \in \mathbb{N}$, if $A_k\cong A$ is $k$-stairable, we construct a $(k+1)$-stairable matrix $A_{k+1}\cong A$ as follows. 
Given $$A_{k} = \begin{bmatrix} a_{11} & \cdots &a_{1,c_k -1} &a_{1,c_k} &0& \cdots & 0 & \cdots \\ \vdots & &\vdots &\vdots& \vdots & & \vdots \\ a_{r_k-1,1} & \cdots &a_{r_k-1,c_k -1} &a_{r_k-1,c_k} &0& \cdots & 0 & \cdots \\ a_{r_k1} & \cdots &a_{r_k c_k-1}&a_{r_k c_k} &a_{r_k,c_k+1}& \cdots & a_{r_k,j} & \cdots \\ 0 & \cdots &0 &a_{r_k+1, c_k} &a_{r_k+1,c_k+1}& \cdots & a_{r_k+1,j} & \cdots \\ \vdots & &\vdots & \vdots & \vdots & & \vdots \\ 0 & \cdots &0 &a_{i,c_k} &a_{i,c_k+1} &\cdots & a_{ij} & \cdots \\ \vdots & &\vdots & \vdots & \vdots & & \vdots \\ \end{bmatrix},$$ where the upper left corner sub-matrix $$S = \begin{bmatrix} a_{11} & \cdots &a_{1,c_k -1} &a_{1,c_k} \\ \vdots & &\vdots &\vdots \\ a_{r_k-1,1} & \cdots &a_{r_k-1,c_k -1} &a_{r_k-1,c_k} \\ a_{r_k1} & \cdots &a_{r_k c_k}&a_{r_k c_k} \\ \end{bmatrix}$$ is stair-shaped (which implies that $r_k+c_k-1=k$), $S \in \mathcal{A}_{r_k,c_k}$, and let $$B= f \left( \begin{bmatrix} a_{r_k c_k} &a_{r_k,c_k+1}& \cdots & a_{r_k,j} & \cdots \\ a_{r_k+1, c_k} &a_{r_k+1,c_k+1}& \cdots & a_{r_k+1,j} & \cdots \\ \vdots & \vdots & & \vdots \\ a_{i,c_k} &a_{i,c_k+1} &\cdots & a_{ij} & \cdots \\ \vdots & \vdots & & \vdots \\ \end{bmatrix} \right) = \begin{bmatrix} b_{r_k c_k} &b_{r_k,c_k+1}& \cdots & b_{r_k,j} & \cdots \\ b_{r_k+1, c_k} &b_{r_k+1,c_k+1}& \cdots & b_{r_k+1,j} & \cdots \\ \vdots & \vdots & & \vdots \\ b_{i,c_k} &b_{i,c_k+1} &\cdots & b_{ij} & \cdots \\ \vdots & \vdots & & \vdots \\ \end{bmatrix} .$$ Then we define $$A_{k+1} = \begin{bmatrix} a_{11} & \cdots &a_{1,c_k -1} &a_{1,c_k} &0& \cdots & 0 & \cdots \\ \vdots & &\vdots &\vdots& \vdots & & \vdots \\ a_{r_k-1,1} & \cdots &a_{r_k-1,c_k -1} &a_{r_k-1,c_k} &0& \cdots & 0 & \cdots \\ a_{r_k1} & \cdots &a_{r_k c_k-1}&b_{r_k c_k} &b_{r_k,c_k+1}& \cdots & b_{r_k,j} & \cdots \\ 0 & \cdots &0 &b_{r_k+1, c_k} &b_{r_k+1,c_k+1}& \cdots & b_{r_k+1,j} & \cdots \\ \vdots & &\vdots & \vdots & \vdots & & \vdots \\ 0 & \cdots &0 &b_{i,c_k} &b_{i,c_k+1} &\cdots & b_{ij} & \cdots \\ \vdots & &\vdots & \vdots & \vdots & & \vdots \\ \end{bmatrix}.$$ By definition of $f$, two sequences $(r_k)_{k=1}^\infty$ and $(c_k)_{k=1}^\infty$ can be constructed as follows: - If $$\begin{bmatrix} b_{r_k,c_k+1} & \ldots& b_{r_k,j}&\ldots \end{bmatrix} \not= \begin{bmatrix} 0 & \ldots& 0&\ldots \end{bmatrix},$$ then $(r_{k+1},c_{k+1})= (r_{k},c_{k} + 1)$; - If $$\begin{bmatrix} b_{r_k,c_k+1} & \ldots& b_{r_k,j}&\ldots \end{bmatrix}= \begin{bmatrix} 0 & \ldots& 0&\ldots \end{bmatrix}$$ and $$\begin{bmatrix} b_{r_{k}+1,c_k} & \ldots & b_{i,c_k} & \ldots \end{bmatrix}^T \not= \begin{bmatrix} 0 & \ldots & 0 & \ldots \end{bmatrix}^T ,$$ then $(r_{k+1},c_{k+1})= (r_{k}+1,c_{k})$; - If $$\begin{bmatrix} b_{r_k,c_k+1} & \ldots& b_{r_k,j}&\ldots \end{bmatrix} = \begin{bmatrix} 0 & \ldots& 0&\ldots \end{bmatrix}$$ and $$\begin{bmatrix} b_{r_{k}+1,c_k} & \ldots & b_{i,c_k} & \ldots \end{bmatrix}^T = \begin{bmatrix} 0 & \ldots & 0 & \ldots \end{bmatrix}^T ,$$ then $(r_{k+1},c_{k+1})= (r_{k},c_{k} + 1)$. This gives $(r_k)_{k=1}^\infty$, $(c_k)_{k=1}^\infty$ are non-decreasing sequences with $r_{k+1} + c_{k+1} = r_k +c_k +1=k+2$. By doing so, we get a $(k+1)$-stairable matrix $A_{k+1}$ with $A \cong A_{k} \cong A_{k+1}$. Note that in this construction we have $$\label{eqn: matrix_comparsion} A_{k+1}(i,j) = A_{k}(i,j), \quad \text{ for } i<r_k \text{ or } j<c_k.$$ Moreover, $$A_{k+1} = A_{k} + \sum_{l=1}^\infty t_{k,l} E_{k,l}$$ for some $t_{k,l} \in \mathbb{R}$, and $E_{k,l}$'s are elementary matrices. 
Set $$A_\infty := A_1 + \sum_{k=1}^\infty \sum_{l=1}^\infty t_{k,l} E_{k,l},$$ then $$A_\infty \cong A_1 \cong A.$$ Note that for each $i=1,\ldots, M$ and $j=1, \dots, N$, by ([\[eqn: matrix_comparsion\]](#eqn: matrix_comparsion){reference-type="ref" reference="eqn: matrix_comparsion"}) and $r_k+c_k-1=k$, the sequence $A_k(i,j)$ is eventually constant when $k$ is large enough. Thus, $A_\infty (i,j) = \lim_{k\to \infty} A_k(i,j)$ is well defined, stair-shaped, with non-negative entries. ◻ After knowing the existence of the stair-shaped matrix $B$ using Theorem [Theorem 26](#thm: stairshaped-matrix){reference-type="ref" reference="thm: stairshaped-matrix"}, one may use the following algorithm to recursively find its entries. **Algorithm 27**.   *Input:* A matrix $A=[a_{ij}]\in \mathcal{A}_{M,N}$. *Output:*  A stair-shaped matrix $B=[b_{ij}]\in \mathcal{A}_{M,N}$ with $B\cong A$. *Algorithm:* One may recursively calculate the entries of $B$ as follows: - Step 1: Start with $i_0=1, j_0=1$, set $$R=\sum_{j=1}^N a_{1 j} \text{ and } C=\sum_{i=1}^M a_{i 1}.$$ If $R\leq C$, then $b_{1 1}=R,\ b_{1 j}=0 \text{ for all }j>1$. Otherwise, $b_{1 1}=C$ and $b_{i 1}=0$ for all $i>1$.   - Step 2: For each $(i_0,j_0)$ with $b_{i_0, j_0}$ unknown and $b_{ij}$ is known for all $i<i_0$ and $j<j_0$, let $$R=\sum_{j=1}^N a_{i_0 , j}-\sum_{j<j_0}b_{i_0 ,j},\; C=\sum_{i=1}^M a_{i , j_0}-\sum_{i<i_0}b_{i ,j_0}.$$ If $R\leq C$, set $$b_{i_0 , j_0}=R,\ b_{i_0 ,j}=0 \text{ for all }j>j_0.$$ Otherwise, when $R>C$, set $$b_{i_0 , j_0}=C,\ b_{i, j_0}=0 \text{ for all }i>i_0.$$ Using Step 2 recursively, one can calculate all entries of the stair-shaped matrix $B$. ## Stair-shaped good decomposition **Definition 28**. Let $\eta$ be a finite measure on $\Gamma$ with $(p_0)_\#\eta=\mu^-$ and $(p_\infty)_\#\eta=\mu^+$. The representing matrix of $\eta$ is the matrix $A=[a_{ij}]\in \mathcal{A}_{M,N}$ such that $a_{ij}=\eta(\Gamma_{x_i, y_j})$ for each $i, j$. We say that $\eta$ is stair-shaped if its representing matrix $A$ is stair-shaped. A transport path $T\in Path(\mu^-, \mu^+)$ is called stair-shaped if there exists a good decomposition $\eta$ of $T$ such that $\eta$ is stair-shaped. **Proposition 29**. *Any stair-shaped good decomposition $\eta$ of $T$ is a better decomposition of $T$.* *Proof.* By Definition [Definition 2](#def: better_decom){reference-type="ref" reference="def: better_decom"}, suppose there exist $1\le i_1< i_2\le M$ and $1\le j_1<j_2\leq N$, with $$S_{i_1,j_1} (\eta) - S_{i_1,j_2} (\eta)- S_{i_2,j_1} (\eta) + S_{i_2,j_2} (\eta)=0,$$ then direct calculation from ([\[eqn: equalsgn\]](#eqn: equalsgn){reference-type="ref" reference="eqn: equalsgn"}) gives either $$\eta(\Gamma_{i_1,j_1}) =\eta(\Gamma_{i_1,j_2})=\eta(\Gamma_{i_2,j_1}) = \eta(\Gamma_{i_2,j_2})=0,$$ or $$\eta(\Gamma_{i_1,j_1})> 0, \eta(\Gamma_{i_1,j_2})> 0, \eta(\Gamma_{i_2,j_1})> 0, \eta(\Gamma_{i_2,j_2})> 0.$$ The latter case cannot appear since $\eta$ is stair-shaped and there is no way to align the indexes $$(i_1,j_1), (i_1,j_2),(i_2,j_1),(i_2,j_2),$$ such that both two coordinates are non-decreasing sequences. As a result, $\eta$ is a better decomposition. ◻ A stair-shaped path is not necessarily cycle-free. For instance, the transport path $T$ given in Remark [Remark 12](#rmk: cycle-free){reference-type="ref" reference="rmk: cycle-free"} is stair-shaped because $\eta= \delta_{\gamma_{x_1,y_1}} + \delta_{\gamma_{x_2,y_2}}$ is a stair-shaped good decomposition of $T$. However, $T$ is not cycle-free. **Example 4**. 
*Let $T$ be a transport path from $$\mu^-=9\delta_{x_1}+9\delta_{x_2}+9\delta_{x_3}+27\delta_{x_4}+27\delta_{x_5} \text{ to } \mu^+=36\delta_{y_1} +9\delta_{y_2} + 18\delta_{y_3} +9\delta_{y_4} +9\delta_{y_5}$$ given as shown in the following figure.* *For each $(i,j)$, let $\gamma_{x_i, y_j}\in \Gamma$ be the unique polyhedral curve from $x_i$ to $y_j$ on $T$, and $a_{i,j}$ be the $(i,j)$-entry of the matrix $$A = \begin{bmatrix} 4& 1 & 2 & 1 & 1 \\ 4& 1 & 2 & 1 & 1 \\ 4& 1 & 2 & 1 & 1 \\ 12& 3 & 6 & 3 & 3 \\ 12& 3 & 6 & 3 & 3 \\ \end{bmatrix}.$$ Then $$\eta_A:=\sum_{ i,j =1}^5 a_{ij}\delta_{\gamma_{x_i, y_j}}$$ is a good but not a better decomposition of $T$. Using Algorithm [Algorithm 27](#rem: Stair shaped Algorithm){reference-type="ref" reference="rem: Stair shaped Algorithm"}, the corresponding stair-shaped matrix of $A$ is given by $$B = \begin{bmatrix} 9& 0 & 0 & 0 & 0 \\ 9& 0 & 0 & 0 & 0 \\ 9& 0 & 0 & 0 & 0 \\ 9& 9 & 9 & 0 & 0 \\ 0& 0 & 9 & 9 & 9 \end{bmatrix}.$$ The corresponding measure $$\eta_B:=\sum_{ i,j =1}^5 b_{ij}\delta_{\gamma_{x_i, y_j}}$$ on $\Gamma$ is a stair-shaped good decomposition of $T$, which is automatically a better decomposition of $T$.* The following theorem says that any stair-shaped transport path can be decomposed as the sum of two subcurrents generated by two transport maps. **Theorem 30**. *Let $T\in Path(\mu^-,\mu^+)$ be a stair-shaped transport path, where $\mu^-$ and $\mu^+$ are given in ([\[eqn: measures\]](#eqn: measures){reference-type="ref" reference="eqn: measures"}). Then there exist decomposition $$\mu^-=\mu_1^-+\mu_2^-, \mu^+=\mu_1^++\mu_2^+, \text{ and } T=T_1+T_2$$ such that* - *for each $i=1,2$, $T_i$ is a subcurrent of $T$ and $T_i\in Path(\mu_i^-, \mu_i^+)$,* - *there exists transport maps $\varphi \in Map(\mu_1^-, \mu_1^+)$ and $\psi\in Map(\mu_2^+, \mu_2^-)$ such that both $( T_1,\varphi)$ and $(-T_2,\psi)$ are compatible.* *Proof.* Since $T$ is stair-shaped, there exists a good decomposition $\eta$ whose representing matrix $A=[a_{ij}]$ is a stair-shaped matrix. We now write $A$ as the sum of $B=[b_{ij}]$ and $C=[c_{ij}]$ as follows. For each $i$ and $j$, if $a_{ij}=0$, set $b_{ij}=0$ and $c_{ij}=0$. When $a_{ij}>0$, - if $a_{ij}$ is the last non-zero entry in the $i$-th row of $A$, (i.e., $a_{ij'}=0$ for all $j' \ge j+1$,) we set $b_{ij}=a_{ij}$ and $c_{ij}=0$; - if $a_{ij}$ is not the last non-zero entry in the $i$-th row of $A$, since $A$ is stair-shaped, $a_{ij}$ is the last non-zero entry in the $j$-th column of $A$. In this case, we set $b_{ij}=0$ and $c_{ij}=a_{ij}$. By doing so, we write $A=B+C$ such that each row of $B=[b_{ij}]$ and each column of $C=[c_{ij}]$ contain at most one non-zero entry. Note that for each $(i,j)$, $a_{ij}=b_{ij}+c_{ij}$ and $a_{ij}>0$ means either $b_{ij}>0$ or $c_{ij}>0$ but not both. Define $$\mu_1^- = \sum_{i} \left( \sum_j b_{ij} \right) \delta_{x_i}, \ \mu_1^+ = \sum_{j} \left( \sum_i b_{ij} \right) \delta_{y_j}, \ \mu_2^- = \sum_{i} \left( \sum_j c_{ij} \right) \delta_{x_i}, \ \mu_2^+ = \sum_{j} \left( \sum_i c_{ij} \right) \delta_{y_j}. \ $$ Then $\mu^- = \mu_1^- + \mu_2^-$ and $\mu^+ = \mu_1^+ + \mu_2^+$. 
Let $$T_1:=\int_{\{\gamma \in \Gamma_{x_i,y_j}:\; b_{ij}>0\}} I_\gamma \,d\eta,\text{ and } T_2:=\int_{\{ \gamma \in \Gamma_{x_i,y_j}:\; c_{ij}>0\}} I_\gamma \,d\eta.$$ Both $T_1$ and $T_2$ are subcurrents of $T$, and $$\partial T_1 = \int_{\{ \gamma \in \Gamma_{x_i,y_j}:\; b_{ij}>0\}} ( \delta_{y_j}-\delta_{x_i} ) \,d\eta = \sum_{i,j} b_{ij} ( \delta_{y_j}-\delta_{x_i} ) = \mu_1^+ - \mu_1^- ,$$ $$\partial T_2 = \int_{\{ \gamma \in \Gamma_{x_i,y_j}:\; c_{ij}>0\}} ( \delta_{y_j}-\delta_{x_i} ) \,d\eta = \sum_{i,j} c_{ij} ( \delta_{y_j}-\delta_{x_i} ) = \mu_2^+ - \mu_2^- ,$$ which gives $T_i \in Path(\mu_i^-,\mu_i^+)$ for $i=1,2$. Then, $$T= \int_\Gamma I_\gamma \,d\eta =\int_{\{ \gamma \in \Gamma_{x_i,y_j}:\; a_{ij}>0\}} I_\gamma \,d\eta = \int_{\{ \gamma \in \Gamma_{x_i,y_j}:\; b_{ij}>0\}} I_\gamma \,d\eta + \int_{\{ \gamma \in \Gamma_{x_i,y_j}:\; c_{ij}>0\}} I_\gamma \,d\eta=T_1 + T_2.$$ Denote $$X_1 =\{x_i \in X : \mu_1^-(\{x_i\}) > 0\},\ Y_1 =\{y_j \in X : \mu_1^+(\{y_j\}) > 0\},$$ $$X_2 =\{x_i \in X : \mu_2^-(\{x_i\}) > 0\},\ Y_2 =\{y_j \in Y : \mu_2^+(\{y_j\}) > 0\}.$$ Observe that since $A$ is stair-shaped, by the construction of $b_{ij}$, for each $i$, there exists at most one $j$ (i.e. the largest $j$ with $a_{ij}>0$) such that $b_{ij}>0$. This leads to a map: $\varphi: X_1 \to Y_1$ given by $$\varphi(x_i)=y_j \text{ if } b_{ij}>0.$$ Similarly, for each $j$, there exists at most one $i$ (i.e. the largest $i$ with $a_{ij}>0$) such that $c_{ij}>0$. This leads to a map: $\psi: Y_2 \to X_2$ given by $$\psi (y_j)=x_i \text{ if } c_{ij}>0.$$ By definition of $\varphi$, for each $y_j \in Y_1$, $$\varphi_{\#}\mu_1^-(\{y_j\}) = \mu_1^-(\varphi^{-1}(y_j)) = \mu_1^-(\{x_i : b_{ij} >0\}) =\sum_{b_{ij >0}}\mu_1^-(\{x_i\}) = \sum_{i} b_{ij} = \mu_1^+ (\{y_j\}).$$ Therefore, $\varphi_{\#}\mu_1^- = \mu_1^+$, and similarly, $\mu_2^- = \psi_{\#}\mu_1^+$. Also, direct calculation gives $$\pi_\varphi: = (id \times \varphi)_{\#} \mu_1^- = \int_{\{ \gamma \in \Gamma_{x_i,y_j}:\; b_{ij}>0\}} \delta_{(x_i,y_j)} \,d\eta, \text{ and } \pi_\psi: = (id \times \psi)_{\#} \mu_2^+ = \int_{\{ \gamma \in \Gamma_{x_i,y_j}:\; c_{ij}>0\}} \delta_{(y_j,x_i)} \,d\eta.$$ Hence, $( T_1,\varphi)$ and $(-T_2,\psi)$ are compatible. ◻ We now provide an example to illustrate Theorem [Theorem 30](#thm: stair shaped induced transport maps){reference-type="ref" reference="thm: stair shaped induced transport maps"}. **Example 5**. *Let $T$, $\mu^-$, $\mu^+$, $A$, $B$, $\eta_A$, $\eta_B$ be the same values as defined in Example [Example 4](#ex: stair shaped decomposition and measures){reference-type="ref" reference="ex: stair shaped decomposition and measures"}. By Theorem [Theorem 30](#thm: stair shaped induced transport maps){reference-type="ref" reference="thm: stair shaped induced transport maps"}, we have $$B_1 = \begin{bmatrix} 9& 0 & 0 & 0 & 0 \\ 9& 0 & 0 & 0 & 0 \\ 9& 0 & 0 & 0 & 0 \\ 0& 0 & 9 & 0 & 0 \\ 0& 0 & 0 & 0 & 9 \end{bmatrix}, \quad B_2 = \begin{bmatrix} 0& 0 & 0 & 0 & 0 \\ 0& 0 & 0 & 0 & 0 \\ 0& 0 & 0 & 0 & 0 \\ 9& 9 & 0 & 0 & 0 \\ 0& 0 & 9 & 9 & 0 \end{bmatrix},$$ so that $B=B_1 + B_2$. 
By matrix $B_1$, we get a transport path $T_1$, with $$\mu_1^- = 9 \delta_{x_1} + 9 \delta_{x_2} + 9 \delta_{x_3} + 9 \delta_{x_4} + 9 \delta_{x_5}, \ \mu_1^+ = 27 \delta_{y_1} + 9 \delta_{y_3} + 9 \delta_{y_5},$$ and $\varphi: \{x_1,x_2,x_3,x_4,x_5\} \to \{y_1,y_3,y_5\},$ such that $$\varphi(x_1)=\varphi(x_2)=\varphi(x_3)=y_1,\ \varphi(x_4)=y_3,\ \varphi(x_5)=y_5.$$* *By matrix $B_2$, we get a transport path $T_2$, with $$\mu_2^- = 18 \delta_{x_4} + 18 \delta_{x_5}, \ \mu_2^+ = 9 \delta_{y_1} + 9 \delta_{y_2} + 9 \delta_{y_3} + 9 \delta_{y_4},$$ and $\psi:\{y_1,y_2,y_3,y_4\} \to \{x_4,x_5\}$, such that $$\psi(y_1)=\psi(y_2)=x_4,\ \psi(y_3)=\psi(y_4)=x_5$$* *Then, $T$ is decomposed as the sum of $T_1$ and $T_2$.* ## Cycle-free stair-shaped transport paths   To use Theorem [Theorem 30](#thm: stair shaped induced transport maps){reference-type="ref" reference="thm: stair shaped induced transport maps"}, for a given transport path, one may want to find a stair-shaped good decomposition of it. However, the stair-shaped matrix generated by Algorithm [Algorithm 27](#rem: Stair shaped Algorithm){reference-type="ref" reference="rem: Stair shaped Algorithm"} does not necessarily correspond to a good decomposition, even if we start with a good decomposition, as demonstrated by the following example. **Example 6**. *Let $T$ be the graph given in the following figure, and $\gamma_{i,j}$ be the curve on $T$ from $x_i$ to $y_j$ for each $i,j$.* *Then, $$\eta=\delta_{\gamma_{1,1}} + \delta_{\gamma_{1,2}} + \delta_{\gamma_{2,1}}$$ is a good decomposition of $T$ with the representing matrix $$A=[a_{ij}] = \begin{bmatrix} 1& 1 \\ 1& 0 \end{bmatrix}.$$ Algorithm [Algorithm 27](#rem: Stair shaped Algorithm){reference-type="ref" reference="rem: Stair shaped Algorithm"} gives the stair-shaped matrix $$B=[b_{ij}] = \begin{bmatrix} 2& 0 \\ 0& 1 \end{bmatrix}.$$ However, the corresponding measure, $$\eta_B:= 2\delta_{\gamma_{1, 1}} + \delta_{\gamma_{2, 2} }$$ is not a good decomposition of $T$ anymore.* To overcome this issue, we introduce the following concepts: **Definition 31**. Given $A\in \mathcal{A}_{M,N}$, an elementary matrix $E [ (i_1,j_1),(i_2,j_2 ) ]$ is called admissible to $A$ if $a_{ij}>0$ for all $(i,j)\in \{(i_1,j_1),(i_2,j_2 ), (i_1,j_2),(i_2,j_1 )\}$. For any two matrices $A, B\in \mathcal{A}_{M,N}$, we say $A\triangleq B$ if there exists a list of real numbers $\{t_k\}_{k=1}^K$ and a list of elementary matrices $\{E_k\}_{k=1}^K$ admissible to $A$ such that $B = A + \sum_{k=1}^K t_k E_k$ for some $K\in \mathbb{N} \cup \{ \infty \}$. **Lemma 32**. *Suppose $A$ is the representing matrix of a finite measure $\eta_A$ on $\Gamma$ satisfying $(p_0)_\#\eta_A=\mu^-$ and $(p_\infty)_\#\eta_A=\mu^+$. For any matrix $B=[b_{ij}]$ with $A\triangleq B$, define $$\label{eqn: eta_B} \eta_B:= \sum_{ \substack{i,j \\ \text{ with } a_{ij}>0} } \frac{b_{ij}}{a_{ij}}\eta_A\lfloor_{\Gamma_{x_i,y_j}}.$$ Then $\eta_B$ is a finite measure on $\Gamma$ with $(p_0)_\#\eta_B=\mu^-$ and $(p_\infty)_\#\eta_B=\mu^+$. Moreover, $B$ is the representing matrix of $\eta_B$ and $\eta_B \prec \hspace{-1mm}\prec \eta_A$.* *Proof.* The condition $A \triangleq B$ gives $$B = A + \sum_{k} t_k E_k,$$ for some real numbers $t_k$ and elementary matrices $E_k = E[(i_k,j_k),(i'_k,j'_k)]$ that are *admissible* to $A$. 
Note that $$\begin{aligned} \eta_B(\Gamma) &=& \sum_{ \substack{i,j \\ \text{ with } a_{ij}>0} } \frac{b_{ij}}{a_{ij}}\eta_A\lfloor_{\Gamma_{x_i,y_j}}(\Gamma) = \sum_{ \substack{i,j \\ \text{ with } a_{ij}>0} } \frac{b_{ij}}{a_{ij}}\eta_A(\Gamma_{x_i,y_j}) = \sum_{ \substack{i,j \\ \text{ with } a_{ij}>0} } b_{ij} \\ &=& \sum_{ \substack{i,j \\ \text{ with } a_{ij}>0} } \left(a_{ij}+t_k (E_k)_{ij} \right) = \sum_{ \substack{i,j \\ \text{ with } a_{ij}>0} } a_{ij} = \eta_A(\Gamma)<\infty.\end{aligned}$$ Moreover, $$\begin{aligned} (p_0)_{\#}\eta_B &=& \sum_{ \substack{i,j \\ \text{ with } a_{ij}>0} } \frac{b_{ij}}{a_{ij}}\eta_A(\Gamma_{x_i,y_j})\delta_{x_i} = \sum_{ \substack{i,j \\ \text{ with } a_{ij}>0} } b_{ij}\delta_{x_i} = \sum_{i}\left(\sum_{ \substack{j \\ \text{ with } a_{ij}>0} } \left(a_{ij}+ \sum_{k} t_k (E_k)_{ij} \right)\right)\delta_{x_i} \\ &=& \sum_{i}\left(\sum_{ \substack{j \\ \text{ with } a_{ij}>0} } a_{ij}\right)\delta_{x_i} = \sum_{ \substack{i,j \\ \text{ with } a_{ij}>0} } a_{ij}\delta_{x_i}=(p_0)_{\#}\eta_A=\mu^-.\end{aligned}$$ Similarly, $(p_\infty)_{\#}\eta_B=\mu^+$. We now show that $B$ is the representing matrix of $\eta_B$, i.e., $\eta_B(\Gamma_{x_{i'},y_{j'}})=b_{i'j'}$ for each pair $(i', j')$. If $a_{i'j'}=0$, then $\eta_B(\Gamma_{x_{i'},y_{j'}}) =0$ since the sum is over all $a_{ij}>0$. Also, since $E_k$'s are admissible to $A$, this gives $(E_k)_{i' j'} =0$ for all $k$, so that $b_{i'j'}=0=\eta_B(\Gamma_{x_{i'},y_{j'}})$. If $a_{i'j'}>0$, then since $\eta_A(\Gamma_{x_{i'},y_{j'}})=a_{i'j'}$, $$\eta_B(\Gamma_{x_{i'},y_{j'}}) = \sum_{ \substack{i,j \\ \text{ with } a_{ij}>0} } \frac{b_{ij}}{a_{ij}}\eta_A\lfloor_{\Gamma_{x_i,y_j}}(\Gamma_{x_{i'},y_{j'}}) = b_{i'j'}.$$ Therefore, $B$ is the representing matrix of $\eta_B$. In the end, we show $\eta_B \prec \hspace{-1mm}\prec \eta_A$ by using Lemma [Lemma 5](#lem: equivalent_definition_precc){reference-type="ref" reference="lem: equivalent_definition_precc"}. Suppose $\eta_B(\Gamma_{x_{i'},y_{j'}}) = b_{i'j'}>0$, then previous argument gives $a_{i'j'}>0$. Also, by definition of $\eta_B$, $$\int_{\Gamma_{x_{i'},y_{j'}} } I_\gamma d\eta_B = \frac{b_{i'j'}}{a_{i'j'}} \int_{\Gamma_{x_{i'},y_{j'}} } I_\gamma d\eta_A, \text{ and hence } \frac{1}{b_{i'j'}}\int_{\Gamma_{x_{i'},y_{j'}} } I_\gamma d\eta_B = \frac{1}{a_{i'j'}} \int_{\Gamma_{x_{i'},y_{j'}} } I_\gamma d\eta_A.$$ As a result, $S_{i'j'}(\eta_B) = S_{i'j'}(\eta_A)$ as desired. ◻ **Proposition 33**. *Let $T$ be a cycle-free transport path from $\mu^-$ to $\mu^+$. Suppose $\eta_A$ is a good decomposition of $T$, then for any matrix $B=[b_{ij}]$ with $A\triangleq B$, $\eta_B$ given in ([\[eqn: eta_B\]](#eqn: eta_B){reference-type="ref" reference="eqn: eta_B"}) is also a good decomposition of $T$.* *Proof.* Let $A=[a_{ij}] \in \mathcal{A}_{M,N}$, $B=[b_{ij}]\in \mathcal{A}_{M,N}$, then $A \triangleq B$ gives $$B = A + \sum_{k} t_k E_k,$$ for some real numbers $t_k$ and elementary matrices $E_k = E[(i_k,j_k),(i'_k,j'_k)]$ that are *admissible* to $A$. 
Using $S_{i,j}(\eta)$ defined in ([\[eqn: S_ij\]](#eqn: S_ij){reference-type="ref" reference="eqn: S_ij"}), we have $$\begin{aligned} \int_{\Gamma}I_\gamma d(\eta_B-\eta_A) &=& \int_{\Gamma}I_\gamma \,d \left(\sum_{i,j}\frac{b_{ij}}{a_{ij}}\eta_A \lfloor_{\Gamma_{x_i,y_j}} - \sum_{i,j}\eta_A \lfloor_{\Gamma_{x_i,y_j}} \right) \\ &=& \sum_{i,j} \frac{b_{ij}-a_{ij}}{a_{ij}}\int_{\Gamma_{x_i,y_j}}I_\gamma d\eta_A \\ &=& \sum_{i,j}(b_{ij}-a_{ij})S_{i,j}(\eta_A) = \sum_{k} t_k\sum_{i,j} (E_k)_{ij} S_{i,j}(\eta_A)\\ &=& \sum_k t_k \cdot \left(S_{i_k,j_k}(\eta_A) - S_{i_k,j'_k}(\eta_A) -S_{i'_k,j_k}(\eta_A) + S_{i'_k,j'_k}(\eta_A) \right).\end{aligned}$$ Since $E_k$'s are admissible to $A$, then $a_{ij} >0$ for $(i,j) \in \{(i_k,j_k)), (i_k,j'_k)), (i'_k,j_k)), (i'_k,j'_k)) \}$. Since $$S_{i_k,j_k}(\eta_A) - S_{i_k,j'_k}(\eta_A) -S_{i'_k,j_k}(\eta_A) + S_{i'_k,j'_k}(\eta_A)$$ is on $T$ and $a_{ij}>0$, direct calculation gives $$\partial \left(S_{i_k,j_k} (\eta_A)- S_{i_k,j'_k}(\eta_A) -S_{i'_k,j_k}(\eta_A) + S_{i'_k,j'_k}(\eta_A) \right) =0.$$ By Definition [Definition 11](#def: cycle-free current){reference-type="ref" reference="def: cycle-free current"}, $T$ is a cycle-free transport path implies $$S_{i_k,j_k}(\eta_A) - S_{i_k,j'_k}(\eta_A) -S_{i'_k,j_k}(\eta_A) + S_{i'_k,j'_k}(\eta_A) =0.$$ Hence, $$\int_\Gamma I_\gamma d \eta_B =\int_\Gamma I_\gamma d \eta_A.$$ By using an analogous argument as in the proof of Step 1 in Lemma [Lemma 8](#lem: S_ij(1,1)){reference-type="ref" reference="lem: S_ij(1,1)"}, it follows that $\eta_B$ is also a good decomposition of $T$. ◻ Given a matrix $A$ with non-negative entries, Theorem [Theorem 26](#thm: stairshaped-matrix){reference-type="ref" reference="thm: stairshaped-matrix"} gives a stair-shaped matrix $B$, such that $A \cong B$, which by definition says $B= A + \sum_{k} t_k E_k$ for some elementary matrices $E_k$. In general, $A \cong B$ does not imply $A \triangleq B$, since it is possible that some $E_k$'s are not admissible to $A$. However, when each entries of $A$ is positive (as illustrated in Example [Example 4](#ex: stair shaped decomposition and measures){reference-type="ref" reference="ex: stair shaped decomposition and measures"}), $A \cong B$ implies $A \triangleq B$. In general, when $A$ satisfies certain conditions as stated in the following corollary, we have both $A \cong B$ and $A \triangleq B$, so that the $\eta_B$ in ([\[eqn: eta_B\]](#eqn: eta_B){reference-type="ref" reference="eqn: eta_B"}) is a stair-shaped good decomposition. Suppose $A=[a_{ij}]$, let $A[(i_0,j_0),(i'_0,j'_0)]$ be the "sub-matrix\" of $A$ with entries $a_{ij}$'s such that $i_0 \le i \le i'_0$, $j_0 \le j \le j'_0$. **Corollary 34**. *Let $T$ be a cycle-free transport path from $\mu^-$ to $\mu^+$. Let $A =[a_{ij}]$ be the representing matrix of a good decomposition $\eta_A$ of $T$. If there exist a list of sub-matrices $A_k = A [(i_k,j_k), (i'_{k},j'_{k})]$ of $A$ such that* - *$(i_1,j_1) = (1,1)$ and $i'_k \le i_{k+1} \le i'_k+1, \ j'_k \le j_{k+1} \le j'_k+1$ for each $k$,* - *all elements of the sub-matrix $A_k$ are positive for each $k$,* - *all elements of $A$ not in any of the sub-matrices are $0$,* *then there exists a stair-shaped good decomposition $\eta_B$ of $T$ with $\eta_B \prec \hspace{-1mm}\prec \eta_A$. Hence, $T$ is stair-shaped.* *Proof.* We construct the desired stair-shaped matrix by using induction. 
We first apply Theorem [Theorem 26](#thm: stairshaped-matrix){reference-type="ref" reference="thm: stairshaped-matrix"} to the sub-matrix $$A_1 = A [(i_1,j_1), (i'_1,j'_1)]$$ and get a stair-shaped $A'_1$. Then replace entries in $A$ with entries in $A'_1$ in their corresponding original positions in $A$, and denote this new matrix as $B_1$. Inductively, for each $k \ge 1$, apply Theorem [Theorem 26](#thm: stairshaped-matrix){reference-type="ref" reference="thm: stairshaped-matrix"} to the sub-matrix $$B_k [(i_{k+1},j_{k+1}), (i'_{k+1},j'_{k+1})]$$ of $B_k$ and get a stair-shaped $A'_{k+1}$. Then replace entries in $B_k$ with entries in $A'_{k+1}$ in their corresponding original positions in $B_{k}$, and denote this matrix as $B_{k+1}$. Note that for each $k$, by condition (a), the sub-matrix $B_{k} [(i_{1},j_{1}), (i'_{k},j'_{k})]$ is stair-shaped and $$\label{eqn: B_k} B_{K} [(i_{1},j_{1}), (i'_{k},j'_{k})]= B_{k} [(i_{1},j_{1}), (i'_{k},j'_{k})], \text{ for each } K\ge k+2.$$ As a result, for each $(i,j)$, the limit $\lim_{k \to \infty} B_k(i,j)$ exists and equals the value of $B_k(i,j)$ when $k$ is large enough. Let $B$ be the limit matrix of $\{B_k\}$ whose $(i,j)$-entry $B (i,j) = \lim_{k \to \infty} B_k(i,j)$ for each $(i,j)$. By ([\[eqn: B_k\]](#eqn: B_k){reference-type="ref" reference="eqn: B_k"}), $B[(i_{1},j_{1}), (i'_{k},j'_{k})]= B_{k} [(i_{1},j_{1}), (i'_{k},j'_{k})]$ for each $k$. Since $B_{k} [(i_{1},j_{1}), (i'_{k},j'_{k})]$ is stair-shaped, $B$ is also stair-shaped. Since $B$ is a stair-shaped matrix, its corresponding measure $\eta_B$ as defined in ([\[eqn: eta_B\]](#eqn: eta_B){reference-type="ref" reference="eqn: eta_B"}) is stair-shaped. By $(b)$ and definition of *admissible matrices*, we have $A \triangleq B$. Therefore, Proposition [Proposition 33](#prop: matrix_good_decomp){reference-type="ref" reference="prop: matrix_good_decomp"} gives $\eta_B$ is a good decomposition with $\eta_B \prec \hspace{-1mm}\prec \eta_A$. ◻ In the end, we provide a typical matrix of finite size satisfying conditions $(a),(b),(c)$ in Corollary [Corollary 34](#cor: stair shaped blocks){reference-type="ref" reference="cor: stair shaped blocks"}, and see how to decompose the corresponding cycle-free stair-shaped transport path into the difference of two map-compatible paths. **Example 7**. 
*Let $$\mu^- = 4 \delta_{x_1} +11 \delta_{x_2} + 14 \delta_{x_3} +11 \delta_{x_4} +17 \delta_{x_5} +10 \delta_{x_6} +3 \delta_{x_7} +6 \delta_{x_8} + 2 \delta_{x_9} + \delta_{x_{10}} +5 \delta_{x_{11}},$$ $$\mu^+ = 4 \delta_{y_1} + 3 \delta_{y_2} + 14 \delta_{y_3} + 11 \delta_{y_4} + 12 \delta_{y_5} + 7 \delta_{y_6} +7 \delta_{y_7} + 9 \delta_{y_8} + 3 \delta_{y_9} + 3 \delta_{y_{10}} + 11 \delta_{y_{11}},$$ and $T$ be a cycle-free transport path from $\mu^-$ to $\mu^+$ illustrated by the following diagram:* *Let $$A = \left[ \begin{array}{cccccccccccccccccccc} 1&1&2&0&0&0&0&0&0&0&0 \\ 3&2&1&2&3&0&0&0&0&0&0 \\ 0&0&6&7&1&0&0&0&0&0&0 \\ 0&0&5&2&4&0&0&0&0&0&0 \\ 0&0&0&0&1&3&6&7&0&0&0 \\ 0&0&0&0&3&4&1&2&0&0&0 \\ 0&0&0&0&0&0&0&0&1&2&0 \\ 0&0&0&0&0&0&0&0&2&1&3 \\ 0&0&0&0&0&0&0&0&0&0&2 \\ 0&0&0&0&0&0&0&0&0&0&1 \\ 0&0&0&0&0&0&0&0&0&0&5 \\ \end{array} \right].$$ Then, $A=[a_{ij}]$ is the corresponding matrix of a good decomposition $\eta_A$ of $T$, namely $$\eta_A:=\sum_{i,j}a_{ij}\delta_{\gamma_{x_i,y_j}}.$$ Here, $A$ satisfies conditions $(a),(b),(c)$ in Corollary [Corollary 34](#cor: stair shaped blocks){reference-type="ref" reference="cor: stair shaped blocks"} with $$A_1= \begin{bmatrix} 1&1&2 \\ 3&2&1 \end{bmatrix},\ A_2= \begin{bmatrix} 1&2&3 \\ 6&7&1 \\ 5&2&4 \end{bmatrix},\ A_3 = \begin{bmatrix} 1&3&6&7 \\ 3&4&1&2 \\ \end{bmatrix},\ A_4 = \begin{bmatrix} 1&2 \\ 2&1 \end{bmatrix} \text{, and } A_5 = \begin{bmatrix} 3 \\ 2 \\ 1 \\ 5 \\ \end{bmatrix}.$$* *Using algorithm [Algorithm 27](#rem: Stair shaped Algorithm){reference-type="ref" reference="rem: Stair shaped Algorithm"}, we have $$A'_1= \begin{bmatrix} 4&0&0 \\ 0&3&3 \end{bmatrix},\ A'_2= \begin{bmatrix} 8&0&0 \\ 6&8&0 \\ 0&3&8 \end{bmatrix},\ A'_3 = \begin{bmatrix} 4&7&6&0 \\ 0&0&1&9 \\ \end{bmatrix},\ A'_4 = \begin{bmatrix} 3&0 \\ 0&3 \end{bmatrix} \text{, and } A'_5 = \begin{bmatrix} 3 \\ 2 \\ 1 \\ 5 \\ \end{bmatrix}.$$ By Corollary [Corollary 34](#cor: stair shaped blocks){reference-type="ref" reference="cor: stair shaped blocks"}, $$\eta_B:=\sum_{i,j} b_{ij}\delta_{\gamma_{x_i,y_j}}$$ is a stair-shaped good decomposition of $T$ with $\eta_B\prec \hspace{-1mm}\prec \eta_A$, where the matrix $$B =[b_{ij}]= \left[ \begin{array}{cccccccccccccccccccc} 4&0&0&0&0&0&0&0&0&0&0 \\ 0&3&8&0&0&0&0&0&0&0&0 \\ 0&0&6&8&0&0&0&0&0&0&0 \\ 0&0&0&3&8&0&0&0&0&0&0 \\ 0&0&0&0&4&7&6&0&0&0&0 \\ 0&0&0&0&0&0&1&9&0&0&0 \\ 0&0&0&0&0&0&0&0&3&0&0 \\ 0&0&0&0&0&0&0&0&0&3&3 \\ 0&0&0&0&0&0&0&0&0&0&2 \\ 0&0&0&0&0&0&0&0&0&0&1 \\ 0&0&0&0&0&0&0&0&0&0&5 \\ \end{array} \right]$$ is stair-shaped.* *Now, by the proof of Theorem [Theorem 30](#thm: stair shaped induced transport maps){reference-type="ref" reference="thm: stair shaped induced transport maps"}, one may decompose the stair-shaped matrix $B$ into $B=B_1+B_2$ where $$B_1 = \left[ \begin{array}{cccccccccccccccccccc} 4&0&0&0&0&0&0&0&0&0&0 \\ 0&0&8&0&0&0&0&0&0&0&0 \\ 0&0&0&8&0&0&0&0&0&0&0 \\ 0&0&0&0&8&0&0&0&0&0&0 \\ 0&0&0&0&0&0&6&0&0&0&0 \\ 0&0&0&0&0&0&0&9&0&0&0 \\ 0&0&0&0&0&0&0&0&3&0&0 \\ 0&0&0&0&0&0&0&0&0&0&3 \\ 0&0&0&0&0&0&0&0&0&0&2 \\ 0&0&0&0&0&0&0&0&0&0&1 \\ 0&0&0&0&0&0&0&0&0&0&5 \\ \end{array} \right] \text{ and } B_2 = \left[ \begin{array}{cccccccccccccccccccc} 0&0&0&0&0&0&0&0&0&0&0 \\ 0&3&0&0&0&0&0&0&0&0&0 \\ 0&0&6&0&0&0&0&0&0&0&0 \\ 0&0&0&3&0&0&0&0&0&0&0 \\ 0&0&0&0&4&7&0&0&0&0&0 \\ 0&0&0&0&0&0&1&0&0&0&0 \\ 0&0&0&0&0&0&0&0&0&0&0 \\ 0&0&0&0&0&0&0&0&0&3&0 \\ 0&0&0&0&0&0&0&0&0&0&0 \\ 0&0&0&0&0&0&0&0&0&0&0 \\ 0&0&0&0&0&0&0&0&0&0&0 \\ \end{array} \right].$$* *From matrix $B_1$ and the transport path $T$, we may 
construct the corresponding transport path $T_1 \in Path(\mu_1^-,\mu_1^+)$ illustrated below, where $$\mu_1^- = 4 \delta_{x_1} +8 \delta_{x_2} + 8 \delta_{x_3} +8 \delta_{x_4} +6 \delta_{x_5} +9 \delta_{x_6} +3 \delta_{x_7} +3 \delta_{x_8} + 2 \delta_{x_9} + \delta_{x_{10}} +5 \delta_{x_{11}},$$ and $$\mu_1^+ = 4 \delta_{y_1} + 8 \delta_{y_3} + 8 \delta_{y_4} + 8 \delta_{y_5} + 6 \delta_{y_7} + 9 \delta_{y_8} + 3 \delta_{y_9} + 11 \delta_{y_{11}}.$$* *Note that from the non-zero entries of $B_1$, there exists a transport map $$\varphi_1: \{x_1, x_2,x_3,x_4,x_5,x_6,x_7,x_8,x_9,x_{10},x_{11}\} \longrightarrow \{y_1,y_3,y_4,y_5,y_7,y_8,y_9,y_{11}\},$$ where $$\begin{aligned} && \varphi_1(x_1)=y_1,\ \varphi_1(x_2)=y_3 ,\ \varphi_1(x_3)=y_4 ,\ \varphi_1(x_4)=y_5 ,\ \varphi_1(x_5)=y_7 ,\ \varphi_1(x_6)=y_8,\\ && \varphi_1(x_7)=y_9 ,\ \varphi_1(x_8)=y_{11} ,\ \varphi_1(x_9)=y_{11} ,\ \varphi_1(x_{10})=y_{11} ,\ \varphi_1(x_{11})=y_{11} .\end{aligned}$$ Here, $\varphi_{1\#} \mu_1^- = \mu_1^+$, and $(T_1, \varphi_1)$ is compatible.* *Similarly, using matrix $B_2$ and transport path $T$, we may construct the corresponding transport path $T_2\in Path(\mu_2^-,\mu_2^+)$ as illustrated below, where $$\mu_2^- = 3 \delta_{x_2} + 6 \delta_{x_3} + 3 \delta_{x_4} + 11 \delta_{x_5} + \delta_{x_6} + 3 \delta_{x_8},$$ and $$\mu_2^+ = 3 \delta_{y_2} + 6 \delta_{y_3} + 3 \delta_{y_4} + 4 \delta_{y_5} + 7 \delta_{y_6} + \delta_{y_7} + 3 \delta_{y_{10}} .$$* *Again, using the non-zero entries of $B_2$, there exists a transport map $$\varphi_2: \{y_2,y_3,y_4,y_5,y_6,y_7,y_{10}\} \longrightarrow \{x_2,x_3,x_4,x_5,x_6,x_8\},$$ with $$\varphi_2(y_2)=x_2 ,\ \varphi_2(y_3)=x_3 ,\ \varphi_2(y_4)=x_4 ,\ \varphi_2(y_5)=x_5 ,\ \varphi_2(y_6)=x_5 ,\ \varphi_2(y_7)=x_6 ,\ \varphi_2(y_{10})=x_8 ,\ $$ Here, $\mu_2^- = \varphi_{2_\#}\mu_1^+$, and $(-T_2, \varphi_2)$ is compatible.* *As a result, we decompose the cycle-free stair-shaped transport path $T=T_1-T_2$ as the difference of two map-compatible paths $T_1$ and $T_2$.* L. Ambrosio, E. Brué, and D. Semola, *Lectures on Optimal Transport*, Unitext, Volume 130, Springer, 2021. M. Bernot, V. Caselles, and J.-M. Morel. *Optimal transportation networks. Models and theory*. Lecture Notes in Mathematics, 1955. Springer, Berlin, 2009. M. Colombo, A De Rosa, A, Marchese, Improved stability of optimal traffic paths, *Calc Var Partial Differ Equ*, 57:28, 2018 M. Colombo, A. De Rosa, and A. Marchese. On the well-posedness of branched transportation. *Comm. Pure Appl. Math.* 74 (2021), 833-864. F. Lin, X. Yang, *Geometric Measure Theory: An Introduction*, Science Press & International Press, 2002. E. Paolini, E. Stepanov. Decomposition of acyclic normal currents in a metric space, *J Funct Anal*, Vol. 263, Issue 11, (2012), 3358-3390. L. Simon, *Introduction to Geometric Measure Theory*. 2014. F. Santambrogio, *Optimal Transport for Applied Mathematicians*, Springer, 2015. S. K. Smirnov. Decomposition of solenoidal vector charges into elementary solenoids, and the structure of normal one-dimensional flows. *Algebra i Analiz*, 5 (1993), 206-238. C. Villani, *Optimal transport old and new*, Springer, 2009. Q. Xia, Optimal paths related to transport problems. *Commun. Contemp. Math.* Vol.5, No. 2 (2003), 251-279. Q. Xia and S.F. Xu, Ramified optimal transportation with payoff on the boundary. *SIAM J. MATH. ANAL.* Vol 55, No. 1 (2023), 186-209. Q. Xia, Motivations, ideas and applications of ramified optimal transportation, *ESAIM Math. Model. Numer. Anal.* Vol. 49 No. 6 (2015), 1791--1832. 
[^1]: A transport path $T$ is called cycle-free if there are no nonzero cycles on $T$. See Definition [Definition 11](#def: cycle-free current){reference-type="ref" reference="def: cycle-free current"}. [^2]: Here, $N$ is the number of targets in the target measure $\mu^+$. [^3]: The concept of being cycle-free is different from the concept of being "acyclic", which is defined using subcurrents. As in Definition [Definition 10](#def: S_on_T){reference-type="ref" reference="def: S_on_T"}, the fact that a current $S$ is "on" another current $T$ does not mean that $S$ is a subcurrent of $T$. When $S$ is on $T$, unlike being a subcurrent, it is possible that $S$ has a reverse orientation with respect to $T$ on their intersections. [^4]: The size of this matrix may be countably infinite.
--- abstract: | We prove essentially sharp bounds for Ramsey numbers of ordered hypergraph matchings, inroduced recently by Dudek, Grytczuk, and Ruciński. Namely, for any $r \ge 2$ and $n \ge 2$, we show that any collection $\mathcal{H}$ of $n$ pairwise disjoint subsets in $\mathbb{Z}$ of size $r$ contains a subcollection of size $\lfloor n^{1/(2^r-1)}/2\rfloor$ in which every pair of sets are in the same relative position with respect to the linear ordering on $\mathbb{Z}$. This improves previous bounds of Dudek--Grytczuk--Ruciński and of Anastos--Jin--Kwan--Sudakov and is sharp up to a factor of $2$. For large $r$, we even obtain such a subcollection of size $\lfloor (1-o(1))\cdot n^{1/(2^r-1)}\rfloor$, which is asymptotically tight (here, the $o(1)$-term tends to zero as $r \to \infty$, regardless of the value of $n$). Furthermore, we prove a multiparameter extension of this result where one wants to find a clique of prescribed size $m_P$ for each relative position pattern $P$. Our bound is sharp for all choices of parameters $m_P$, up to a constant factor depending on $r$ only. This answers questions of Anastos--Jin--Kwan--Sudakov and of Dudek--Grytczuk--Ruciński. author: - Lisa Sauermann[^1]  - "Dmitrii Zakharov[^2]" title: A sharp Ramsey theorem for ordered hypergraph matchings --- # Introduction Ramsey Theory is one of the most active areas of combinatorics, with a rich history spanning an entire century and many exciting recent results (e.g. [@Campos; @Lee; @MV; @RR], to mention just a few). The basic philosophy behind the results in this area is that any sufficiently large structure, even if completely disordered, must contain a homogeneous-looking substructure of a prescribed size. Such Ramsey results have been established in a variety of contexts, and improving the quantitative bounds in these results is of great interest (see the books [@graham-rothschild-spencer] and [@promel] for a general overview of the area of Ramsey theory). Here, we are interested in a Ramsey problem about ordered hypergraph matching. An $r$-uniform hypergraph matching $\mathcal{H}$ is a finite collection of pairwise disjoint sets, each of size $r$. These sets are called hyperedges, and their union is called the vertex set of the hypergraph. An *ordered* $r$-uniform hypergraph matching $\mathcal{H}$ is such a $r$-uniform hypergraph matching equipped with a total ordering on the vertex set. Now, given an ordered hypergraph matching $\mathcal{H}$, we want to find a large submatching in $\mathcal{H}$ which looks homogeneous with respect to the linear order on $\mathbb{Z}$. More precisely, if $\mathcal{H}$ is an ordered $r$-uniform hypergraph matching, then for any two edges $e,f\in\mathcal{H}$ we can consider the relative order of the vertices in $e$ and $f$. In other words, we can consider the ordered $r$-uniform hypergraph matching $\{ e,f\}$. This can be represented by a string of length $2r$ consisting of $r$ times the letter $\rm A$ and $r$ times the letter $\rm B$. For example if $e=\{ 1,2,5\}$ and $f=\{ 3,4,6\}$ and the vertices in $e\cup f=\{1,2,\dots,6\}$ are ordered in the natural way, we obtain the pattern $\rm AABBAB$ encoding the relative order of the vertices $e$ and $f$. Note that this pattern is equivalent to the pattern $\rm BBAABA$ upon interchanging the roles of $e$ and $f$. To make the representation unique, let us impose the condition that the string needs to start with the letter $\rm A$. 
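For concreteness, the relative-order pattern of a pair of disjoint finite sets of integers can be computed in a few lines of code; the following short Python sketch is included purely as an illustration of the encoding just described.

```python
def pair_pattern(e, f):
    # Label the elements of e by 'A' and those of f by 'B', read the labels in
    # increasing order of the underlying integers, and normalize the resulting
    # string so that it starts with 'A' (i.e. interchange the roles of the two
    # edges if necessary).
    labels = {x: 'A' for x in e}
    labels.update({x: 'B' for x in f})
    word = ''.join(labels[x] for x in sorted(labels))
    if word[0] == 'B':
        word = word.translate(str.maketrans('AB', 'BA'))
    return word

# The example above: e = {1, 2, 5} and f = {3, 4, 6} give the pattern AABBAB.
print(pair_pattern({1, 2, 5}, {3, 4, 6}))  # prints AABBAB
```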
We can now define an *$r$-pattern* to be a string in $\{{\rm A},{\rm B}\}^{2r}$ starting with $\rm A$ and consisting of $r$ times the letter $\rm A$ and $r$ times the letter $\rm B$. For any ordered $r$-uniform hypergraph matching $\mathcal{H}$, for any two edges $e,f\in\mathcal{H}$ we can consider the corresponding $r$-pattern described by $\{ e,f\}$. If there is some $r$-pattern $P$ such that for every pair of edges $e,f\in\mathcal{H}$ the $r$-pattern described by $\{ e,f\}$ is $P$, then we call the $r$-uniform hypergraph matching a $P$-clique. We say that $\mathcal{H}$ contains a $P$-clique of size $m$, if there is some subset $\mathcal{H}'\subseteq\mathcal{H}$ of $|\mathcal{H}'|=m$ edges forming a $P$-clique. Now we have a natural Ramsey-type question: given a hypergraph matching $\mathcal{H}$ what is the size of the largest $P$-clique in $\mathcal{H}$ over all choices of the pattern $P$? In the graph case $r=2$, this question was introduced and solved by Dudek, Grytczuk, and Ruciński [@DGR22]. They showed that any matching of size $n^3+1$ contains a $P$-clique of size $n+1$, where $P$ is one of the three possible 2-patterns: AABB, ABAB, or ABBA. Moreover, they constructed a matching of size $n^3$ without $P$-cliques of size $n+1$ for any pattern $P$. In a later work, the same authors [@DGR] systematically studied the higher uniformity version of the problem and showed that any ordered $r$-uniform hypergraph with $C_r n^{3^{r-1}}$ edges contains a $P$-clique of size $n+1$ for some $r$-pattern $P$. On the other hand, they could only construct a matching of size $n^{2^{r-1}+2}$ without any $P$-cliques of size $n+1$. These bounds were substantially improved by Anastos, Jin, Kwan, and Sudakov [@AJKS]: they reduced the upper bound to $C_r n^{(r+1) 2^{r-2}}$ and constructed an example with $n^{2^{r}-1}$ edges, thus, showing that the correct exponent of $n$ is roughly $2^r$ rather than $3^r$. They also got a sharp exponent of $n$ in the cases $r=3$ and $r=4$. However, their techniques required a substantial case analysis already for $r=4$ and do not seem to generalize to large $r$. In this paper, we determine the correct behavior of the Ramsey problem for ordered $r$-uniform hypergraph matchings for all values of $r$. **Theorem 1**. *Let $r$ and $m$ be positive integers. Let $\mathcal{H}$ be an ordered $r$-uniform hypergraph matching. For every $r$-pattern $P$, assume that every $P$-clique contained in $\mathcal{H}$ has size at most $m$. Then the number of edges in $\mathcal{H}$ is bounded by $$|\mathcal{H}|\le C_r\cdot m^{2^r-1},$$ where $C_r=2^{r-1}(r-1)!$.* By the aforementioned construction of Anastos, Jin, Kwan, and Sudakov [@AJKS Theorem 1.7], the upper bound in Theorem [Theorem 1](#thm-main){reference-type="ref" reference="thm-main"} is tight up to the constant factor $C_r$. Theorem [Theorem 1](#thm-main){reference-type="ref" reference="thm-main"} can also be restated in the notation of [@AJKS] as follows. For an ordered matching $\mathcal{H}$, let $L(\mathcal{H})$ be the size of the largest $P$-clique in $\mathcal{H}$ for any pattern $P$. Define a Ramsey function $L_r(n)$ to be the minimum of $L(\mathcal{H})$ over all $r$-uniform ordered hypergraph matchings $\mathcal{H}$ with $n$ edges. Then our Theorem [Theorem 1](#thm-main){reference-type="ref" reference="thm-main"} can be restated as $L_r(n) \ge C_r^{-1/(2^r-1)}\cdot n^{1/(2^r-1)}$. 
Comparing this with the construction in [@AJKS] giving $L_r(n) \le \lceil n^{1/(2^r-1)} \rceil$, we obtain $$\lceil n^{1/(2^r-1)} \rceil \ge L_r(n) \ge C_r^{-1/(2^r-1)}\cdot n^{1/(2^r-1)} > \frac{1}{2}n^{1/(2^r-1)}.$$ So this determines $L_r(n)$ up to a constant of at most $2$ for all $r$ and $n$ (and the constant factor $C_r^{-1/(2^r-1)}=(2^{r-1}(r-1)!)^{-1/(2^r-1)}$ actually approaches 1 as $r$ tends to infinity). In Theorem [Theorem 1](#thm-main){reference-type="ref" reference="thm-main"}, for every $r$-pattern $P$, the size of every $P$-clique contained in $\mathcal{H}$ is bounded by $m$. Dudek--Grytczuk--Ruciński and Anastos--Jin--Kwan--Sudakov also asked for the maximum size of $|\mathcal{H}|$ when bounding the sizes of the $P$-cliques in $\mathcal{H}$ by different numbers for different $r$-patterns $P$. In other words, how large can $|\mathcal{H}|$ be, if for each $r$-pattern $P$ the size of each $P$-clique in $\mathcal{H}$ is at at most a given number $m_P$? We also answer this more general question up to a constant factor depending on $r$. To state the answer to this multi-parameter version of the problem, we need some more notation. First, it turns out that not for every $r$-pattern $P$ it is actually possible to form a large $P$-clique at all. One can show that this is only possible if $P$ has a particular block structure, where the string $P\in\{{\rm A},{\rm B}\}^{2r}$ can be divided into (consecutive) blocks of even size such that within each block the string reads either $\rm{AA}\dots\rm{A}\rm {BB}\dots\rm{B}$ with equally many $\rm{A}$'s and $\rm{B}$'s or $\rm{BB}\dots\rm{B}\rm {AA}\dots\rm{A}$ with equally many $\rm{A}$'s and $\rm{B}$'s. Following [@AJKS], we call such a decomposition of an $r$-pattern $P$ a *block partition*. For example, the $5$-pattern $\rm{AABBBABBAA}$ has the block partition $\rm{AABB|BA|BBAA}$, whereas the $5$-pattern $\rm{AABABBABBA}$ does not have a block partition. It is not hard to see that every $r$-pattern $P$ has at most one block decomposition. Following [@DGR], we call an $r$-pattern with a block decomposition *collectable* (then it turns out that the collectable $r$-patterns are precisely the $r$-patterns $P$ for which there exist arbitrarily large $P$-cliques, this was shown in [@DGR]). Given a positive integer $r$, we call a sequence $\lambda=(\lambda_1,\dots,\lambda_s)$ of positive integers with $\lambda_1+\dots+\lambda_s=r$ an *ordered partition of $r$*. One may think of $\lambda$ as a partition of the set $\{1,\dots,r\}$ into non-empty intervals. We call $s$ the *number of parts* of the ordered partition $\lambda=(\lambda_1,\dots,\lambda_s)$. For two ordered partitions $\lambda=(\lambda_1,\dots,\lambda_s)$ and $\lambda'=(\lambda'_1,\dots,\lambda'_{s'})$ of $r$, we write $\lambda \succ \lambda'$ if the identity $\lambda'_1+\dots+\lambda'_{s'}=r$ can be obtained from $\lambda_1+\dots+\lambda_s=r$ by splitting some summands into smaller sub-summands (without changing the order or re-combining any summands). In the language of partitions of $\{1,\dots,r\}$ into intervals, we have $\lambda \succ \lambda'$ if the partition corresponding to $\lambda'$ is a refinement of the partition corresponding to $\lambda$. For any ordered partition $\lambda=(\lambda_1,\dots,\lambda_s)$ of $r$, let $\mathcal{P}(\lambda)$ be the set of collectable $r$-patterns $P$ such that the block partition of $P$ consists of $s$ blocks of sizes $2\lambda_1,2\lambda_2,\dots,2\lambda_s$ (in this order). 
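For example, for $r=3$ the ordered partitions are $(3)$, $(2,1)$, $(1,2)$ and $(1,1,1)$, and we have for instance $(3)\succ (2,1)\succ (1,1,1)$, since $2+1$ is obtained from $3$ by splitting the summand $3$, and $1+1+1$ is obtained from $2+1$ by splitting the summand $2$. Moreover, since every $r$-pattern starts with the letter $\rm A$, the first block of a block partition always reads $\rm{AA}\dots\rm{A}\rm{BB}\dots\rm{B}$, and so $$\mathcal{P}((3))=\{\rm{AAABBB}\},\qquad \mathcal{P}((2,1))=\{\rm{AABBAB},\ \rm{AABBBA}\},\qquad \mathcal{P}((1,2))=\{\rm{ABAABB},\ \rm{ABBBAA}\},$$ $$\mathcal{P}((1,1,1))=\{\rm{ABABAB},\ \rm{ABABBA},\ \rm{ABBAAB},\ \rm{ABBABA}\}.$$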
Note that the sets $\mathcal{P}(\lambda)$ for all ordered partitions $\lambda$ or $r$ form a partition of the set of all collectable $r$-patterns. With this notation, our multi-parameter generalization of Theorem [Theorem 1](#thm-main){reference-type="ref" reference="thm-main"} can be stated as follows. **Theorem 2**. *Let $r$ be a positive integer, and consider a positive integer $m_P$ for every collectable $r$-pattern $P$. Let $\mathcal{H}$ be an ordered $r$-uniform hypergraph matching, and assume that for every collectable $r$-pattern $P$, every $P$-clique contained in $\mathcal{H}$ has size at most $m_P$. Then the number of edges in $\mathcal{H}$ is bounded by $$\label{eq_main_upper} |\mathcal{H}|\le 2^{r-1} \sum_{\lambda^{(1)} \succ \lambda^{(2)} \succ \dots \succ \lambda^{(r)}}\ \prod_{P\in \mathcal{P}(\lambda^{(1)})\cup\dots\cup \mathcal{P}(\lambda^{(r)})} m_{P},$$ where the sum is over all sequences $\lambda^{(1)} \succ \lambda^{(2)} \succ \dots \succ \lambda^{(r)}$ of ordered partitions of $r$ (note that then for $s=1,\dots,r$, the ordered partition $\lambda^{(s)}$ must automatically have $s$ parts).* Note that Theorem [Theorem 2](#thm-paramteters-upper-bound){reference-type="ref" reference="thm-paramteters-upper-bound"} directly implies Theorem [Theorem 1](#thm-main){reference-type="ref" reference="thm-main"} by setting $m_P=m$ for all collectable $r$-patterns $P$. Indeed, for every ordered partition $\lambda=(\lambda_1,\dots,\lambda_s)$ of $r$, we have $|\mathcal{P}(\lambda)|=2^{s-1}$. There are $(r-1)!$ different sequences $\lambda^{(1)} \succ \lambda^{(2)} \succ \dots \succ \lambda^{(r)}$ of ordered partitions of $r$, and for each of them we have $|\mathcal{P}(\lambda^{(1)})\cup\dots\cup \mathcal{P}(\lambda^{(r)})|=1+2+\dots+2^{r-1}=2^r-1$. Hence the right-hand side of ([\[eq_main_upper\]](#eq_main_upper){reference-type="ref" reference="eq_main_upper"}) simplifies to $2^{r-1}(r-1)!\cdot m^{2^r-1}$ if $m_P=m$ for all $P$. Our next result shows that the bound in Theorem [Theorem 2](#thm-paramteters-upper-bound){reference-type="ref" reference="thm-paramteters-upper-bound"} is tight up to a constant factor depending on $r$. **Theorem 3**. *Let $r$ be a positive integer, and consider a positive integer $m_P$ for every collectable $r$-pattern $P$. Then there exists an ordered $r$-uniform hypergraph matching $\mathcal{H}$, such that for every collectable $r$-pattern $P$, every $P$-clique contained in $\mathcal{H}$ has size at most $m_P$, and such that $$\label{eq_main_lower} |\mathcal{H}|= \max_{\lambda^{(1)} \succ \lambda^{(2)} \succ \dots \succ \lambda^{(r)}}\ \prod_{P\in \mathcal{P}(\lambda^{(1)})\cup\dots\cup \mathcal{P}(\lambda^{(r)})} m_{P},$$ where the maximum is over all sequences $\lambda^{(1)} \succ \lambda^{(2)} \succ \dots \succ \lambda^{(r)}$ of ordered partitions of $r$.* Note that the right-hand side of [\[eq_main_lower\]](#eq_main_lower){reference-type="eqref" reference="eq_main_lower"} differs from the right-hand side of [\[eq_main_upper\]](#eq_main_upper){reference-type="eqref" reference="eq_main_upper"} by a factor of at most $2^{r-1}(r-1)!$. Indeed, there are exactly $(r-1)!$ different sequences $\lambda^{(1)} \succ \lambda^{(2)} \succ \dots \succ \lambda^{(r)}$, and [\[eq_main_lower\]](#eq_main_lower){reference-type="eqref" reference="eq_main_lower"} describes the largest of the $(r-1)!$ summands appearing on the right-hand side of [\[eq_main_upper\]](#eq_main_upper){reference-type="eqref" reference="eq_main_upper"}. 
Thus, Theorems [Theorem 2](#thm-paramteters-upper-bound){reference-type="ref" reference="thm-paramteters-upper-bound"} and [Theorem 3](#thm-paramteters-lower-bound){reference-type="ref" reference="thm-paramteters-lower-bound"} determine, up to a factor of at most $2^{r-1}(r-1)!$, the maximum size of $|\mathcal{H}|$ if for every collectable $r$-pattern every $P$-clique in $\mathcal{H}$ has at most some given size $m_P$. After discussing some preliminaries in Section [2](#sect-preliminaries){reference-type="ref" reference="sect-preliminaries"}, we will prove Theorem [Theorem 2](#thm-paramteters-upper-bound){reference-type="ref" reference="thm-paramteters-upper-bound"} (which implies Theorem [Theorem 1](#thm-main){reference-type="ref" reference="thm-main"}) in Section [3](#sect-upper-bound){reference-type="ref" reference="sect-upper-bound"}, and Theorem [Theorem 3](#thm-paramteters-lower-bound){reference-type="ref" reference="thm-paramteters-lower-bound"} in Section [4](#sect-lower-bound){reference-type="ref" reference="sect-lower-bound"}. # Preliminaries {#sect-preliminaries} The Erdős--Szekeres Theorem [@ES] states that any sequence of $n^2+1$ distinct integers contains a monotone subsequence of length $n+1$. This is a classical result in combinatorics with a rich family of generalizations and connections, see for example [@ExpES; @GL; @LS; @MS; @ST]. We will need a multi-dimensional version of the the classical (one-dimensional) Erdős--Szekeres Theorem. For our purposes, this multi-dimensional version is most convenient to state in the following way. For a function $\tau:\{1,\dots,s\}\to \{1,-1\}$ with $\tau(1)=1$, let us say that a sequence of points $z_1,\dots,z_m\in \mathbb{R}^s$ is *$\tau$-monotone* if for every $i\in \{1,\dots,s\}$ with $\tau(i)=1$ the $i$-th coordinates of the points $z_1,\dots,z_m$ form a strictly increasing sequence and for every $i\in \{1,\dots,s\}$ with $\tau(i)=-1$ the $i$-th coordinates of the points $z_1,\dots,z_m$ form a strictly decreasing sequence. **Theorem 4** (Multi-dimensional Erdős--Szekeres Theorem). *Let $s$ be a positive integer. Furthermore, consider a positive integer $m_\tau$ for every function $\tau:\{1,\dots,s\}\to \{1,-1\}$ with $\tau(1)=1$. Let $Z\subseteq\mathbb{R}^s$ be a collection of points, whose $|Z|\cdot s$ coordinates are all distinct. For each $\tau:\{1,\dots,s\}\to \{1,-1\}$ with $\tau(1)=1$, assume that every $\tau$-monotone sequence of points in $Z$ has length at most $m_\tau$. Then the number of points in $Z$ is bounded by $$|Z|\le \prod_{\tau} m_{\tau},$$ where the product is over all functions $\tau:\{1,\dots,s\}\to \{1,-1\}$ with $\tau(1)=1$.* *Furthermore, for any choice of positive integers $m_\tau$ for all functions $\tau:\{1,\dots,s\}\to \{1,-1\}$ with $\tau(1)=1$, there exists a collection of points $Z\subseteq\mathbb{R}^s$ of size $|Z|=\prod_\tau m_\tau$ satisfying the conditions above.* This result was essentially proven in [@DGR] and used to construct the ordered hypergraph matchings with cliques of bounded sizes. We include a simple (and fairly standard) proof for the reader's convenience. *Proof.* Note that each function $\tau:\{1, \ldots, s\}\rightarrow \{1,-1\}$ with $\tau(1) = 1$ naturally defines a partial order $\prec_{\tau}$ on $Z$. Let $\tau_1, \ldots, \tau_{2^{s-1}}$ be the list of all such functions in an arbitrary order. Then we are given positive integers $m_{\tau_i}$ for $i=1,\dots,2^{s-1}$. Let $Z \subseteq \mathbb R^s$ be as in the first part of the theorem statement. 
By the assumption, for $i=1,\dots,2^{s-1}$, the length of any chain in $(Z, \prec_{\tau_i})$ is at most $m_{\tau_i}$. By Dilworth's Theorem [@Dil] applied to $(Z, \prec_{\tau_1})$, there exists a $\prec_{\tau_1}$-antichain $Z_1 \subseteq Z$ of size at least $|Z| / m_{\tau_1}$. Then by Dilworth's Theorem applied to $(Z_1, \prec_{\tau_2})$, there exists a $\prec_{\tau_2}$-antichain $Z_2 \subseteq Z_1$ of size at least $|Z_1| / m_{\tau_2} \ge |Z|/(m_{\tau_1}m_{\tau_2})$. Continuing this process for all $i=1, \ldots, 2^{s-1}$, we obtain a subset $$Z' = Z_{2^{s-1}} \subseteq \ldots \subseteq Z_1\subseteq Z$$ of size $|Z'|\ge |Z|/\prod_{i=1}^{2^{s-1}} m_{\tau_i}$ which is an antichain with respect to $\prec_{\tau_i}$ for $i=1,\dots,2^{s-1}$. However, any two vectors in $\mathbb R^s$ with distinct coordinates are comparable with respect to exactly one of the partial orders $\prec_{\tau_1},\dots,\prec_{\tau_{2^{s-1}}}$. Thus, we must have $|Z'|\le 1$, implying $|Z|\le \prod_{i=1}^{2^{s-1}} m_{\tau_i}=\prod_\tau m_{\tau}$, as desired. Now we prove the second part of the theorem, i.e. we construct a collection of points $Z\subseteq\mathbb{R}^s$ achieving equality in the bound above. Let $N$ be a sufficiently large integer such that $N\ge 3m_{\tau_i}$ for $i=1,\dots,2^{s-1}$. For a sequence of numbers $a = (a_1, \ldots, a_{2^{s-1}})$, where $a_i \in \{1, \ldots, m_{\tau_i}\}$ for $i=1, \ldots, 2^{s-1}$, we define a vector $z(a) = (z(a)_1, \ldots, z(a)_s) \in \mathbb R^s$ by setting $$\label{eq_z} z(a)_j = \sum_{i=1}^{2^{s-1}} N^{i} \tau_i(j) a_{i}$$ for $j=1,\dots,s$. Let $Z \subseteq \mathbb R^s$ be the collection of all points $z(a)$ for all tuples $a = (a_1, \ldots, a_{2^{s-1}})$ with $a_i \in \{1, \ldots, m_{\tau_i}\}$ for $i=1, \ldots, 2^{s-1}$. First note that by the choice of $N$, all the sums of the form ([\[eq_z\]](#eq_z){reference-type="ref" reference="eq_z"}) are pairwise distinct for all possible choices of sequences $a$ and all $j=1,\dots,s$. So the $|Z|\cdot s$ coordinates of the points in $Z$ are all distinct, and we have $|Z| = \prod_{i=1}^{2^{s-1}} m_{\tau_i}=\prod_\tau m_\tau$. It remains to show that for any $i = 1, \ldots, 2^{s-1}$, any $\prec_{\tau_i}$-chain in $Z$ has length at most $m_{\tau_i}$. So let us fix some $k \in \{1, \ldots, 2^{s-1}\}$, and suppose that for some sequences $a^{(1)}, \ldots, a^{(t)}$ as above, the points $z(a^{(1)}), \ldots, z(a^{(t)}) \in Z$ form a $\tau_k$-monotone sequence. That means that for each $j \in \{1, \ldots, s\}$ we have $\tau_k(j) z(a^{(1)})_j<\dots<\tau_k(j) z(a^{(t)})_j$. Pick an arbitrary pair of indices $1\le \ell < \ell' \le t$ and let $k' \in \{1, \ldots, 2^{s-1}\}$ be the maximum index such that $a^{(\ell)}_{k'} \neq a^{(\ell')}_{k'}$. Note that by ([\[eq_z\]](#eq_z){reference-type="ref" reference="eq_z"}), for any $j\in \{1, \ldots, s\}$ we have $$z(a^{(\ell)})_j - z(a^{(\ell')})_j = \sum_{i=1}^{k'} N^i \tau_i(j) (a^{(\ell)}_i - a^{(\ell')}_i).$$ The $k'$-th term in the sum above is larger in absolute value than all the smaller terms combined, which implies that the numbers $z(a^{(\ell)})_j - z(a^{(\ell')})_j$ and $\tau_{k'}(j)(a^{(\ell)}_{k'} - a^{(\ell')}_{k'})$ have the same sign. On the other hand, we know that $\tau_k(j) z(a^{(\ell)})_j < \tau_k(j) z(a^{(\ell')})_j$. So we conclude that $$\label{taus} \tau_{k}(j) \tau_{k'}(j) a^{(\ell)}_{k'} < \tau_{k}(j) \tau_{k'}(j) a^{(\ell')}_{k'}$$ holds for every $j \in \{1, \ldots, s\}$. 
Letting $j = 1$ and using $\tau_k(1) = \tau_{k'}(1)=1$ implies that $a^{(\ell)}_{k'} < a^{(\ell')}_{k'}$. Combining this with ([\[taus\]](#taus){reference-type="ref" reference="taus"}), for $j=2,\dots,s$ we get $\tau_k(j) = \tau_{k'}(j)$, meaning that the functions $\tau_k$ and $\tau_{k'}$ coincide and we in fact have $k = k'$. We conclude that for any $1\le \ell<\ell' \le t$, the last coordinate where the sequences $a^{(\ell)}$ and $a^{(\ell')}$ differ is always equal to $k$. In particular, all $k$-th coordinates of the sequences $a^{(1)}, \ldots, a^{(t)}$ are pairwise distinct. Since these coordinates take values in the set $\{1, \ldots, m_{\tau_k}\}$, we can conclude that $t \le m_{\tau_k}$. This shows that the longest $\tau_k$-monotone sequence in $Z$ has length at most $m_{\tau_k}$, as desired. ◻ In order to make use of Theorem [Theorem 4](#thm_es){reference-type="ref" reference="thm_es"} in our context, the following notation will be helpful. Consider an ordered partition $\lambda=(\lambda_1,\dots,\lambda_s)$ of $r$. Then for any function $\tau:\{1,\dots,s\}\to \{1,-1\}$ with $\tau(1)=1$, we define a collectable $r$-pattern $P(\lambda,\tau)\in \mathcal{P}(\lambda)$ as follows. The block partition of $P$ consists of $s$ blocks of sizes $2\lambda_1,\dots,2\lambda_s$ (in this order). For each $i\in\{1,\dots,s\}$ with $\tau(i)=1$, in the $i$-th block we take $\lambda_i$ times the letter $\rm A$ followed by $\lambda_i$ times the letter $\rm B$. And for each $i\in\{1,\dots,s\}$ with $\tau(i)=-1$, in the $i$-th block we take $\lambda_i$ times the letter $\rm B$ followed by $\lambda_i$ times the letter $\rm A$. This gives a valid block decomposition of an $r$-pattern, and we call the resulting $r$-pattern $P(\lambda,\tau)$. Note that by definition, we have $P(\lambda,\tau)\in \mathcal{P}(\lambda)$ and in particular the $r$-pattern $P(\lambda,\tau)$ is collectable. Furthermore, every $r$-pattern $P\in \mathcal{P}(\lambda)$ is of the form $P=P(\lambda,\tau)$ for some function $\tau:\{1,\dots,s\}\to \{1,-1\}$ with $\tau(1)=1$. # Proof of the upper bound {#sect-upper-bound} Every ordered $r$-uniform hypergraph matching can be realized by assigning (positive) integers to the vertices and taking the total ordering on the vertex set to be the natural ordering of the integers. In order to prove Theorem [Theorem 2](#thm-paramteters-upper-bound){reference-type="ref" reference="thm-paramteters-upper-bound"}, we can therefore assume without loss of generality that our ordered $r$-uniform hypergraph matching $\mathcal{H}$ is given in this way. In other words, we can take $\mathcal{H}$ to be a finite collection of pairwise disjoint subsets $e\subseteq\mathbb{Z}$, each of size $|e|=r$. This somewhat simplifies the notation in our proof. As observed for example in [@AJKS], for an ordered $r$-uniform hypergraph matching $\mathcal{H}$ with a certain $r$-partite structure, one can bound the size $|\mathcal{H}|$ in terms of the maximum sizes of the $P$-cliques contained in $\mathcal{H}$ for each $r$-pattern $P$, just by using the multi-dimensional Erdős--Szekeres Theorem. The following definition describes a more general condition under which this is possible. **Definition 5**. *Let $r$ be a positive integer, and consider a finite collection $\mathcal{H}$ of pairwise disjoint subsets $e\subseteq\mathbb{Z}$, each of size $|e|=r$. 
For an ordered partition $\lambda=(\lambda_1,\dots,\lambda_s)$ of $r$, we say that $\mathcal{H}$ is *$\lambda$-partite* if there are $x_0,x_1,\dots,x_{s}\in\mathbb{R}\setminus\mathbb{Z}$ with $x_0<x_1<\dots<x_{s}$ such that we have $|e\cap (x_{i-1},x_i)|=\lambda_i$ for every $e\in \mathcal{H}$ and every $i=1,\dots,s$. We say that $\mathcal{H}$ is *interval-wise $\lambda$-partite* if $x_0<x_1<\dots<x_{s}$ can be chosen such that in addition to the previous condition, for each $i=1,\dots,s$, the intervals $\operatorname{conv}(e\cap (x_{i-1},x_i))$ are pairwise disjoint for all $e\in \mathcal{H}$.* Intuitively speaking, for an ordered partition $\lambda=(\lambda_1,\dots,\lambda_s)$ of $r$, saying that $\mathcal{H}$ is $\lambda$-partite means that there are $s$ intervals $(x_0,x_1),(x_1,x_2),\dots,(x_{s-1},x_s)$ such that every hyperedge $e\in \mathcal{H}$ has exactly $\lambda_1$ elements in the first interval $(x_0,x_1)$, exactly $\lambda_2$ elements in the second interval $(x_1,x_2)$, and so on. For $\mathcal{H}$ to be interval-wise $\lambda$-partite means that the sub-intervals of $(x_0,x_1)$ spanned by the intersections $e\cap (x_0,x_1)$ are pairwise disjoint for all $e\in \mathcal{H}$, and similarly for the sub-intervals of $(x_1,x_2)$ spanned by the intersections $e\cap (x_1,x_2)$, and so on. Note that $\mathcal{H}$ is always $\lambda$-partite for the ordered partition $\lambda=(r)$ of $r$ into a single part (taking $(x_0,x_1)$ to be some interval containing all elements of all $e\in \mathcal{H}$). Also note that if $\mathcal{H}$ is $\lambda$-partite for some ordered partition $\lambda$, then $\mathcal{H}$ is automatically also $\lambda'$-partite for all $\lambda'$ with $\lambda'\succ \lambda$. Furthermore, for the ordered partition $\lambda=(1,1,\dots,1)$ of $r$ into $r$ parts, it is equivalent for $\mathcal{H}$ to be $\lambda$-partite or to be interval-wise $\lambda$-partite. Finally, if $\mathcal{H}$ is interval-wise $\lambda$-partite for some ordered partition $\lambda$ of $r$, then any two edges $e,f\in\mathcal{H}$ form some $r$-pattern in $\mathcal{P}(\lambda)$. Which of the $r$-patterns in $\mathcal{P}(\lambda)$ is formed by $e$ and $f$ depends on the order of the intervals $\operatorname{conv}(e\cap (x_{i-1},x_i))$ and $\operatorname{conv}(f\cap (x_{i-1},x_i))$ appearing in Definition [Definition 5](#def-s-partite){reference-type="ref" reference="def-s-partite"} (for each $i$, there are two possibilities for which of the two intervals comes first). If $\mathcal{H}$ is interval-wise $\lambda$-partite for some ordered partition $\lambda$ of $r$, then a bound on $|\mathcal{H}|$ under the assumptions in Theorem [Theorem 2](#thm-paramteters-upper-bound){reference-type="ref" reference="thm-paramteters-upper-bound"} follows directly from the multi-dimensional Erdős--Szekeres Theorem (see Theorem [Theorem 4](#thm_es){reference-type="ref" reference="thm_es"}). **Observation 6**. *Let $r$ be a positive integer, and let $\lambda=(\lambda_1,\dots,\lambda_s)$ be an ordered partition of $r$. Furthermore, consider a positive integer $m_P$ for every $r$-pattern $P\in \mathcal{P}(\lambda)$. Let $\mathcal{H}$ be a finite collection of pairwise disjoint subsets $e\subseteq\mathbb{Z}$, each of size $|e|=r$, such that $\mathcal{H}$ is interval-wise $\lambda$-partite. For every $P\in \mathcal{P}(\lambda)$, assume that every $P$-clique contained in $\mathcal{H}$ has size at most $m_{P}$. 
Then we have $$|\mathcal{H}|\le \prod_{P\in \mathcal{P}(\lambda)} m_{P}.$$* *Proof.* As $\mathcal{H}$ is interval-wise $\lambda$-partite, there exist $x_0<x_1<\dots<x_{s}$ such that for each $i=1,\dots,s$, we have $|e\cap (x_{i-1},x_i)|=\lambda_i>0$ for every $e\in \mathcal{H}$ and the intervals $\operatorname{conv}(e\cap (x_{i-1},x_i))$ are pairwise disjoint for all $e\in \mathcal{H}$. Now, for every $e\in \mathcal{H}$, let us define a point $z(e)\in \mathbb{R}^s$ by taking $$z(e)=(\min(e\cap (x_{0},x_1)), \min(e\cap (x_{1},x_2)), \dots, \min(e\cap (x_{s-1},x_s))),$$ and define $Z=\{ z(e)\mid e\in \mathcal{H}\}\subseteq\mathbb{R}^s$. As the sets $e\in \mathcal{H}$ are pairwise disjoint, all the coordinates of the points $z(e)$ for $e\in \mathcal{H}$ are distinct. For every function $\tau:\{1,\dots,s\}\to \{1,-1\}$ with $\tau(1)=1$, let us define $m_\tau=m_{P(\lambda,\tau)}$ (recall that $P(\lambda,\tau)\in \mathcal{P}(\lambda)$ was defined at the end of Section [2](#sect-preliminaries){reference-type="ref" reference="sect-preliminaries"}). We claim that then every $\tau$-monotone sequence of points in $Z\subseteq\mathbb{R}^s$ has length at most $m_\tau$. Indeed, fix such a function $\tau$, and consider $e^{(1)},e^{(2)},\dots,e^{(t)}\in \mathcal{H}$ such that the point sequence $z(e^{(1)}),z(e^{(2)}),\dots,z(e^{(t)})$ is a $\tau$-monotone sequence in $Z \subseteq\mathbb{R}^s$. Then for any index $i\in \{1,\dots,s\}$ with $\tau(i)=1$ we have $\min(e^{(1)}\cap (x_{i-1},x_i))< \dots <\min(e^{(t)}\cap (x_{i-1},x_i))$, and in particular $\min(e^{(\ell)}\cap (x_{i-1},x_i))<\min(e^{(\ell')}\cap (x_{i-1},x_i))$ for any $1\le \ell<\ell'\le t$. Since furthermore the intervals $\operatorname{conv}(e^{(\ell)}\cap (x_{i-1},x_i))$ and $\operatorname{conv}(e^{(\ell')}\cap (x_{i-1},x_i))$ are disjoint, this means that all elements of $e^{(\ell)}\cap (x_{i-1},x_i)$ are smaller than all elements of $e^{(\ell')}\cap (x_{i-1},x_i)$. Similarly, for any $i\in \{1,\dots,s\}$ with $\tau(i)=-1$ and any $1\le \ell<\ell'\le t$, all elements of $e^{(\ell)}\cap (x_{i-1},x_i)$ are larger than all elements of $e^{(\ell')}\cap (x_{i-1},x_i)$. Since $x_0<x_1<\dots<x_{s}$ and $|e^{(\ell)}\cap (x_{i-1},x_i)|=|e^{(\ell')}\cap (x_{i-1},x_i)|=\lambda_i$ for all $i=1,\dots,s$, this shows that $e^{(\ell)}$ and $e^{(\ell')}$ form the pattern $P(\lambda,\tau)$ for all $1\le \ell<\ell'\le t$. Hence $e^{(1)},e^{(2)},\dots,e^{(t)}\in \mathcal{H}$ form a $P(\lambda,\tau)$-clique and therefore $t\le m_{P(\lambda,\tau)}=m_\tau$. So every $\tau$-monotone sequence in $Z$ indeed has length at most $m_\tau$. Applying Theorem [Theorem 4](#thm_es){reference-type="ref" reference="thm_es"} now gives $$|\mathcal{H}|=|Z|\le \prod_{\substack{\tau:\{1,\dots,s\}\to \{1,-1\}\\ \tau(1)=1}} m_{\tau}=\prod_{\substack{\tau:\{1,\dots,s\}\to \{1,-1\}\\ \tau(1)=1}} m_{P(\lambda,\tau)}=\prod_{P\in \mathcal{P}(\lambda)} m_{P},$$ as desired. ◻ The main step in our proof of Theorem [Theorem 2](#thm-paramteters-upper-bound){reference-type="ref" reference="thm-paramteters-upper-bound"} is showing the following lemma, which roughly speaking states that for every large $\lambda$-partite $\mathcal{H}$ we can find a large $\mathcal{H}^*\subseteq\mathcal{H}$ which is interval-wise $\lambda$-partite or we can find a large $\mathcal{H}'\subseteq\mathcal{H}$ which is $\lambda'$-partite for some ordered partition $\lambda'$ of $r$ with $\lambda\succ \lambda'$. 
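As a computational aside (our own illustration, not part of the argument, with hypothetical helper names), the reduction in the proof of Observation [Observation 6](#obs-interval-wise){reference-type="ref" reference="obs-interval-wise"} is easy to spell out in code: each edge is replaced by the vector of per-interval minima, and the size of the collection is then bounded by the product, over the sign patterns $\tau$ with $\tau(1)=1$, of the lengths of the longest $\tau$-monotone sequences. The sketch below assumes the edges are given as disjoint sets of integers together with thresholds $x_0<\dots<x_s$ as in Definition [Definition 5](#def-s-partite){reference-type="ref" reference="def-s-partite"}.

```python
from itertools import product as sign_patterns

def min_vectors(edges, thresholds):
    """For each edge (a set of integers), the vector of minima of its intersections
    with the intervals (thresholds[i-1], thresholds[i]); assumes each intersection
    is non-empty, as in the lambda-partite setting."""
    return [tuple(min(v for v in e if lo < v < hi)
                  for lo, hi in zip(thresholds, thresholds[1:]))
            for e in edges]

def longest_tau_monotone(vectors, tau):
    """Longest sequence of vectors increasing in coordinates with tau[i] == 1 and
    decreasing in coordinates with tau[i] == -1 (quadratic LIS-style DP)."""
    # Since tau[0] == 1, every tau-monotone sequence is increasing in the first
    # coordinate, so it is a subsequence of the vectors sorted by that coordinate.
    vectors = sorted(vectors, key=lambda v: v[0])
    def precedes(u, v):
        return all(u[i] < v[i] if t == 1 else u[i] > v[i] for i, t in enumerate(tau))
    best = [1] * len(vectors)
    for b in range(len(vectors)):
        for a in range(b):
            if precedes(vectors[a], vectors[b]):
                best[b] = max(best[b], best[a] + 1)
    return max(best, default=0)

def product_bound(edges, thresholds):
    """Product of the longest tau-monotone sequence lengths over all tau with tau(1) = 1;
    by Theorem 4 this is an upper bound for the number of edges."""
    vecs = min_vectors(edges, thresholds)
    s = len(thresholds) - 1
    bound = 1
    for rest in sign_patterns((1, -1), repeat=s - 1):
        bound *= longest_tau_monotone(vecs, (1,) + rest)
    return bound

# A tiny interval-wise (2, 1)-partite example with r = 3 and s = 2.
edges = [{1, 2, 11}, {3, 4, 12}, {5, 6, 10}]
thresholds = (0.5, 8.5, 12.5)
assert len(edges) <= product_bound(edges, thresholds)
```

In this small example the two relevant sign patterns give longest monotone lengths $2$ and $2$, so the resulting bound is $4\ge 3$.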
In the first of these two cases, we can then apply Observation [Observation 6](#obs-interval-wise){reference-type="ref" reference="obs-interval-wise"}, whereas in the second case we can repeat the argument with $\lambda'$ instead of $\lambda$. **Lemma 7**. *Let $r$ be a positive integer, and let $\lambda=(\lambda_1,\dots,\lambda_s)$ be an ordered partition of $r$ into $s<r$ parts. For every ordered partition $\lambda'$ of $r$ into $s+1$ parts with $\lambda\succ\lambda'$, consider some positive integer $M_{\lambda'}$, and let $M=\sum_{\lambda'} M_{\lambda'}$ be the sum of all these positive integers. Let $\mathcal{H}$ be a finite collection of pairwise disjoint subsets $e\subseteq\mathbb{Z}$, each of size $|e|=r$, such that $\mathcal{H}$ is $\lambda$-partite. Then at least one of the following two statements holds:* - *There is a sub-collection $\mathcal{H}^*\subseteq\mathcal{H}$ of size $|\mathcal{H}^*|\ge |\mathcal{H}|/(2M)$ such that $\mathcal{H}^*$ is interval-wise $\lambda$-partite.* - *For some ordered partition $\lambda'$ of $r$ into $s+1$ parts with $\lambda\succ\lambda'$, there is a sub-collection $\mathcal{H}'\subseteq\mathcal{H}$ of size $|\mathcal{H}'|>M_{\lambda'}$ such that $\mathcal{H}'$ is $\lambda'$-partite.* *Proof.* Suppose that $\mathcal{H}$ does not have a subcollection $\mathcal{H}'$ as in (b). We need to show that then (a) holds. Let $x_0<x_1<\dots<x_s$ be as in Definition [Definition 5](#def-s-partite){reference-type="ref" reference="def-s-partite"}. For every $e\in \mathcal{H}$, we consider the sets $e\cap (x_{i-1},x_i)$ for $i=1,\dots,s$. Our goal is to find a subcollection $\mathcal{H}^*\subseteq\mathcal{H}$ of size $|\mathcal{H}^*|\ge |\mathcal{H}|/(2M)$ such that for each $i=1,\dots,s$, the intervals $\operatorname{conv}(e\cap (x_{i-1},x_i))$ are pairwise disjoint for all $e\in \mathcal{H}^*$. Note that for any $i=1,\dots,s$ and any $e\in \mathcal{H}$, the interval $\operatorname{conv}(e\cap (x_{i-1},x_i))$ is a closed sub-interval of $(x_{i-1},x_i)$. Furthermore, the endpoints of all these sub-intervals are distinct (as all $e\in \mathcal{H}$ are pairwise disjoint subsets of $\mathbb{Z}$). Let us consider an auxiliary graph $G$ with vertex set $\mathcal{H}$, where for $e,e'\in\mathcal{H}$ we draw an edge between $e$ and $e'$ if $\operatorname{conv}(e\cap (x_{i-1},x_i))\cap \operatorname{conv}(e'\cap (x_{i-1},x_i))\ne \emptyset$ for some $i\in\{1,\dots,s\}$. Our goal is then to show that the graph $G$ has an independent set of size at least $|\mathcal{H}|/(2M)$. Note that for any two closed intervals $[a,b]$ and $[c,d]$ with $[a,b]\cap [c,d]\ne \emptyset$ we have $a\in [c,d]$ or $c\in [a,b]$. Hence we can orient the edges of $G$ in such a way that for any $e,e'\in\mathcal{H}$ with an edge from $e$ to $e'$ there is an index $i\in\{1,\dots,s\}$ with $\min(e\cap (x_{i-1},x_i))\in \operatorname{conv}(e'\cap (x_{i-1},x_i))$. In order to show that the graph $G$ contains an independent set of size at least $|\mathcal{H}|/(2M)$, it suffices to show that $G$ has average degree at most $2M-1$ (by a well-known result of Caro [@Caro] and Wei [@Wei]). Considering the orientation of the edges of $G$, it therefore suffices to show that for every $e\in \mathcal{H}$ there are at most $M-1$ outgoing edges at $e$ in the graph $G$ (then there are at most $(M-1)|\mathcal{H}|$ total edges in $G$ and hence the average degree is at most $2M-2$). So let us now fix some $e\in \mathcal{H}$. 
We need to show that there are at most $M-1$ different $e'\in\mathcal{H}\setminus\{e\}$ such that $\min(e\cap (x_{i-1},x_i))\in \operatorname{conv}(e'\cap (x_{i-1},x_i))$ for some index $i\in\{1,\dots,s\}$. For every index $i\in\{1,\dots,s\}$ with $\lambda_i=1$, we have $|e'\cap (x_{i-1},x_i)|=\lambda_i=1$ for every $e'\in \mathcal{H}$, and the singleton sets $e'\cap (x_{i-1},x_i)$ are all distinct for all $e'\in \mathcal{H}$ (as all $e'\in \mathcal{H}$ are pairwise disjoint). Hence we cannot have $\min(e\cap (x_{i-1},x_i))\in \operatorname{conv}(e'\cap (x_{i-1},x_i))$ for any $e'\in \mathcal{H}\setminus\{e\}$ when $\lambda_i=1$. We claim that for each $i\in\{1,\dots,s\}$ with $\lambda_i\ge 2$, there can be at most $\sum_{\lambda'} M_{\lambda'}-1$ different $e'\in\mathcal{H}\setminus\{e\}$ with $\min(e\cap (x_{i-1},x_i))\in \operatorname{conv}(e'\cap (x_{i-1},x_i))$, where the sum is over all ordered partitions $\lambda'$ of $r$ into $s+1$ parts that can be obtained from $\lambda$ by dividing the $i$-th part $\lambda_i$ into two pieces. This suffices to finish the proof, since then the total number of $e'\in\mathcal{H}\setminus\{e\}$ with $\min(e\cap (x_{i-1},x_i))\in \operatorname{conv}(e'\cap (x_{i-1},x_i))$ for some $i\in\{1,\dots,s\}$ is at most $$\sum_{\substack{i\in\{1,\dots,s\}\\\lambda_i\ge 2}}\left(\left(\sum_{\substack{\lambda'\text{ obtained from }\lambda\text{ by}\\\text{dividing }\lambda_i\text{ into two parts}}}\!\!M_{\lambda'}\right)-1\right)=\sum_{\lambda'} M_{\lambda'}-|\{i\in\{1,\dots,s\}\mid \lambda_i\ge 2\}|\le \sum_{\lambda'} M_{\lambda'}-1=M-1,$$ where in the second and third term the sum is over all ordered partitions $\lambda'$ of $r$ into $s+1$ parts with $\lambda\succ\lambda'$ (note that each of these ordered partitions is obtained by dividing some part $\lambda_i$ of $\lambda$ with $\lambda_i\ge 2$ into two parts). Also note that for the inequality sign we have used that there must be at least one index $i\in \{1,\dots,s\}$ with $\lambda_i\ge 2$, since $s<r$. It remains to prove the above claim, so fix some index $i\in\{1,\dots,s\}$ with $\lambda_i\ge 2$. There are $\lambda_i-1$ different ordered partitions $\lambda'$ that can be obtained from $\lambda$ by dividing $\lambda_i$ into two parts, namely the ordered partitions $\mu^{(1)}=(\lambda_1,\dots,\lambda_{i-1},1,\lambda_i-1,\lambda_{i+1},\dots,\lambda_s), \ \mu^{(2)}=(\lambda_1,\dots,\lambda_{i-1},2,\lambda_i-2,\lambda_{i+1},\dots,\lambda_s),\ \dots,\ \mu^{(\lambda_i-1)}=(\lambda_1,\dots,\lambda_{i-1},\lambda_i-1,1,\lambda_{i+1},\dots,\lambda_s)$. Defining $y=\min(e\cap (x_{i-1},x_i))\in \mathbb{Z}$, we need to show that there are at most $\sum_{j=1}^{\lambda_i-1} M_{\mu^{(j)}}-1$ different $e'\in\mathcal{H}\setminus\{e\}$ with $y\in \operatorname{conv}(e'\cap (x_{i-1},x_i))$. Note that for every such $e'\in\mathcal{H}\setminus\{e\}$, we also have $y+0.1\in \operatorname{conv}(e'\cap (x_{i-1},x_i))$, since $\operatorname{conv}(e'\cap (x_{i-1},x_i))$ is a closed interval with endpoints in $\mathbb{Z}\setminus\{y\}$. We furthermore also have $y+0.1\in \operatorname{conv}(e\cap (x_{i-1},x_i))$ (recalling that $y=\min(e\cap (x_{i-1},x_i))$ and that $e\cap (x_{i-1},x_i)\subseteq\mathbb{Z}$ is a set of size $\lambda_i\ge 2$), which in particular implies $x_{i-1}<y+0.1<x_i$. It now suffices to show that there are at most $\sum_{j=1}^{\lambda_i-1} M_{\mu^{(j)}}$ different $e'\in\mathcal{H}$ with $y+0.1\in \operatorname{conv}(e'\cap (x_{i-1},x_i))$. 
For every such $e'$, we have $1\le |e'\cap (x_{i-1},y+0.1)|\le \lambda_i-1$ (recalling that $e'\subseteq\mathbb{Z}$ and $|e'\cap (x_{i-1},x_i)|=\lambda_i$). For each $j=1,\dots,\lambda_i-1$, there can be at most $M_{\mu^{(j)}}$ different $e'\in\mathcal{H}$ with $|e'\cap (x_{i-1},y+0.1)|=j$. Indeed, for each such $e'\in\mathcal{H}$ we have $|e'\cap (y+0.1,x_i)|=|e'\cap (x_{i-1},x_i)|-|e'\cap (x_{i-1},y+0.1)|=\lambda_i-j$. Hence, by considering $x_0,\dots,x_{i-1}, y+0.1,x_i,\dots,x_s\in \mathbb{R}\setminus\mathbb{Z}$, we can observe that the collection $\mathcal{H}'\subseteq\mathcal{H}$ of all $e'\in\mathcal{H}$ with $|e'\cap (x_{i-1},y+0.1)|=j$ is $\mu^{(j)}$-partite. So by our assumption that (b) does not hold, there can be at most $M_{\mu^{(j)}}$ different such $e'\in\mathcal{H}$ (recall that $\mu^{(j)}$ is an ordered partition of $r$ into $s+1$ parts with $\lambda\succ \mu^{(j)}$). Thus, in total there can be at most $\sum_{j=1}^{\lambda_i-1} M_{\mu^{(j)}}$ different $e'\in\mathcal{H}$ with $1\le |e'\cap (x_{i-1},y+0.1)|\le \lambda_i-1$, meaning that there are at most $\sum_{j=1}^{\lambda_i-1} M_{\mu^{(j)}}$ different $e'\in\mathcal{H}$ with $y+0.1\in \operatorname{conv}(e'\cap (x_{i-1},x_i))$. This finishes the proof of the lemma. ◻ We are now ready to prove Theorem [Theorem 2](#thm-paramteters-upper-bound){reference-type="ref" reference="thm-paramteters-upper-bound"} using Observation [Observation 6](#obs-interval-wise){reference-type="ref" reference="obs-interval-wise"} and Lemma [Lemma 7](#lem-two-options){reference-type="ref" reference="lem-two-options"}. In order to do so, we inductively show the following generalization of the theorem. **Theorem 8**. *Let $r$ be a positive integer, and let $\lambda=(\lambda_1,\dots,\lambda_s)$ be an ordered partition of $r$. For every collectable $r$-pattern $P$, consider a positive integer $m_P$. Let $\mathcal{H}$ be a finite collection of pairwise disjoint subsets $e\subseteq\mathbb{Z}$, each of size $|e|=r$, such that $\mathcal{H}$ is $\lambda$-partite. Furthermore, assume that for every collectable $r$-pattern $P$, every $P$-clique contained in $\mathcal{H}$ has size at most $m_P$. Then the number of edges in $\mathcal{H}$ is bounded by $$|\mathcal{H}|\le 2^{r-s} \sum_{\lambda^{(s)} \succ \lambda^{(s+1)} \succ \dots \succ \lambda^{(r)}}\ \prod_{ P\in \mathcal{P}(\lambda^{(s)})\cup\dots\cup \mathcal{P}(\lambda^{(r)}) } m_{P},$$ where the sum is over all sequences $\lambda^{(s)} \succ \lambda^{(s+1)} \succ \dots \succ \lambda^{(r)}$ of ordered partitions of $r$ with $\lambda^{(s)}=\lambda$ (note that then for $t=s,\dots,r$, the ordered partition $\lambda^{(t)}$ must automatically have $t$ parts).* Note that Theorem [Theorem 8](#thm-induction-upper-bound){reference-type="ref" reference="thm-induction-upper-bound"} directly implies Theorem [Theorem 2](#thm-paramteters-upper-bound){reference-type="ref" reference="thm-paramteters-upper-bound"} by taking $\lambda=(r)$ to be the ordered partition of $r$ into a single part. Also recall that Theorem [Theorem 2](#thm-paramteters-upper-bound){reference-type="ref" reference="thm-paramteters-upper-bound"} implies Theorem [Theorem 1](#thm-main){reference-type="ref" reference="thm-main"}. *Proof of Theorem [Theorem 8](#thm-induction-upper-bound){reference-type="ref" reference="thm-induction-upper-bound"}.* We prove the theorem by induction on $r-s$ (note that we always have $s\le r$). If $s=r$, i.e. 
if $\lambda=(1,1,\dots,1)$ is the unique ordered partition of $r$ into $r$ parts, then $\mathcal{H}$ is automatically interval-wise $\lambda$-partite. Thus, by Observation [Observation 6](#obs-interval-wise){reference-type="ref" reference="obs-interval-wise"} we have $$|\mathcal{H}|\le \prod_{P\in \mathcal{P}(\lambda)} m_{P}=2^{r-r} \sum_{\lambda^{(s)} \succ \dots \succ \lambda^{(r)}}\ \prod_{P\in \mathcal{P}(\lambda^{(s)})\cup\dots\cup \mathcal{P}(\lambda^{(r)})} m_{P},$$ where the sum has only one summand corresponding to the unique length-1 sequence $\lambda^{(s)} \succ \dots \succ \lambda^{(r)}$ with $\lambda^{(s)}=\lambda^{(r)}=\lambda$. So let us now assume that $s<r$ and that we already proved the desired statement for all smaller values of $r-s$. For every ordered partition $\lambda'$ of $r$ into $s+1$ parts with $\lambda\succ\lambda'$, let us define $$M_{\lambda'}=2^{r-s-1} \sum_{\lambda^{(s+1)} \succ \lambda^{(s+2)} \succ \dots \succ \lambda^{(r)}}\ \prod_{P\in \mathcal{P}(\lambda^{(s+1)})\cup\dots\cup \mathcal{P}(\lambda^{(r)})} m_{P},$$ where the sum is over all sequences $\lambda^{(s+1)} \succ \lambda^{(s+2)} \succ \dots \succ \lambda^{(r)}$ of ordered partitions of $r$ with $\lambda^{(s+1)}=\lambda'$. As in Lemma [Lemma 7](#lem-two-options){reference-type="ref" reference="lem-two-options"}, let $M=\sum_{\lambda'}M_{\lambda'}$ be the sum of these $M_{\lambda'}$ for all ordered partitions $\lambda'$ of $r$ into $s+1$ parts with $\lambda\succ\lambda'$. Then we have $$M=\sum_{\lambda'} M_{\lambda'}=2^{r-s-1} \sum_{\lambda^{(s)}\succ \lambda^{(s+1)}\succ \dots \succ \lambda^{(r)}}\ \prod_{ P\in \mathcal{P}(\lambda^{(s+1)})\cup\dots\cup \mathcal{P}(\lambda^{(r)}) } m_{P},$$ where on the right-hand side the sum is over all sequences $\lambda^{(s)} \succ \lambda^{(s+1)} \succ \dots \succ \lambda^{(r)}$ of ordered partitions of $r$ with $\lambda^{(s)}=\lambda$. For every ordered partition $\lambda'$ of $r$ into $s+1$ parts with $\lambda\succ\lambda'$, we can apply the inductive assumption for $\lambda'$, and conclude that for every $\lambda'$-partite sub-collection $\mathcal{H}'\subseteq\mathcal{H}$, we must have $|\mathcal{H}'|\le M_{\lambda'}$. Thus, when applying Lemma [Lemma 7](#lem-two-options){reference-type="ref" reference="lem-two-options"} to $\mathcal{H}$ and $\lambda$, with the numbers $M_{\lambda'}$ as defined above, option (b) cannot happen. Hence option (a) must happen, and we can find a sub-collection $\mathcal{H}^*\subseteq\mathcal{H}$ of size $|\mathcal{H}^*|\ge |\mathcal{H}|/(2M)$ such that $\mathcal{H}^*$ is interval-wise $\lambda$-partite. But now by Observation [Observation 6](#obs-interval-wise){reference-type="ref" reference="obs-interval-wise"}, we have $$|\mathcal{H}|/(2M)\le |\mathcal{H}^*|\le \prod_{P\in \mathcal{P}(\lambda)} m_{P}.$$ This gives $$|\mathcal{H}|\le 2M\cdot \prod_{P\in \mathcal{P}(\lambda)} m_{P}=2^{r-s} \sum_{\lambda^{(s)} \succ \lambda^{(s+1)} \succ \dots \succ \lambda^{(r)}}\ \prod_{ P\in \mathcal{P}(\lambda^{(s)})\cup\dots\cup \mathcal{P}(\lambda^{(r)}) } m_{P},$$ as desired, where the sum on the right-hand side is again over all sequences $\lambda^{(s)} \succ \lambda^{(s+1)} \succ \dots \succ \lambda^{(r)}$ of ordered partitions of $r$ with $\lambda^{(s)}=\lambda$. 
◻ # Lower bound construction {#sect-lower-bound} In this section, we prove Theorem [Theorem 3](#thm-paramteters-lower-bound){reference-type="ref" reference="thm-paramteters-lower-bound"} by giving a construction of a large ordered hypergraph matching where all $P$-cliques have at most some given sizes $m_P$. The construction is an appropriate multi-parameter generalization of the argument from [@AJKS Section 4]. We will actually prove the following stronger form of Theorem [Theorem 3](#thm-paramteters-lower-bound){reference-type="ref" reference="thm-paramteters-lower-bound"}. **Theorem 9**. *Let $r$ be a positive integer, and let $\lambda^{(1)} \succ \lambda^{(2)} \succ \dots \succ \lambda^{(r)}$ be a sequence of ordered partitions of $r$. For every $r$-pattern $P\in \mathcal{P}(\lambda^{(1)})\cup\dots\cup \mathcal{P}(\lambda^{(r)})$, consider a positive integer $m_P$. Then there exists an ordered $r$-uniform hypergraph matching $\mathcal{H}$ of size $$|\mathcal{H}|=\prod_{P\in \mathcal{P}(\lambda^{(1)})\cup\dots\cup \mathcal{P}(\lambda^{(r)})} m_{P},$$ such that any two edges $e,f\in \mathcal{H}$ form an $r$-pattern in $\mathcal{P}(\lambda^{(1)})\cup\dots\cup \mathcal{P}(\lambda^{(r)})$, and for each $r$-pattern $P\in \mathcal{P}(\lambda^{(1)})\cup\dots\cup \mathcal{P}(\lambda^{(r)})$ every $P$-clique contained in $\mathcal{H}$ has size at most $m_P$.* Note that Theorem [Theorem 9](#thm-paramteters-lower-bound-stronger){reference-type="ref" reference="thm-paramteters-lower-bound-stronger"} immediately implies Theorem [Theorem 3](#thm-paramteters-lower-bound){reference-type="ref" reference="thm-paramteters-lower-bound"} by taking $\lambda^{(1)} \succ \lambda^{(2)} \succ \ldots \succ \lambda^{(r)}$ to be a sequence of ordered partitions of $r$ achieving the maximum in ([\[eq_main_lower\]](#eq_main_lower){reference-type="ref" reference="eq_main_lower"}). Following [@AJKS], for an ordered $r$-uniform hypergraph matching $\mathcal{H}$ and an $r$-pattern $P$, let $L_P(\mathcal{H})$ denote the size of the largest $P$-clique contained in $\mathcal{H}$. Then the conditions on $\mathcal{H}$ in Theorem [Theorem 9](#thm-paramteters-lower-bound-stronger){reference-type="ref" reference="thm-paramteters-lower-bound-stronger"} can be rephrased as $L_P(\mathcal{H})\le m_P$ for all $r$-patterns $P\in \mathcal{P}(\lambda^{(1)})\cup\dots\cup \mathcal{P}(\lambda^{(r)})$ and $L_P(\mathcal{H})=1$ for all other $r$-patterns $P\not\in \mathcal{P}(\lambda^{(1)})\cup\dots\cup \mathcal{P}(\lambda^{(r)})$. An ordered $r$-uniform hypergraph matching $\mathcal{H}$ is *$r$-partite* if the underlying ordering of the vertex set is such that all the first vertices of all the edges come before all the second vertices of all the edges, which come before all the third vertices of all the edges, and so on. More formally, we can say that $\mathcal{H}$ is $r$-partite if, as an ordered $r$-uniform hypergraph matching, $\mathcal{H}$ is isomorphic to a finite collection of pairwise disjoint subsets $e\subseteq\mathbb{Z}$, each of size $|e|=r$, which is $\lambda^{(r)}$-partite according to Definition [Definition 5](#def-s-partite){reference-type="ref" reference="def-s-partite"} for the unique ordered partition $\lambda^{(r)}=(1,\dots,1)$ of $r$ into $r$ parts. For our construction proving Theorem [Theorem 9](#thm-paramteters-lower-bound-stronger){reference-type="ref" reference="thm-paramteters-lower-bound-stronger"}, we will use the notion of a *blow-up* introduced in [@DGR22] and later also used in [@AJKS]. **Definition 10**. 
*Let $\mathcal{H}$ and $\mathcal{H}_r$ be ordered $r$-uniform hypergraph matchings such that $\mathcal{H}_r$ is $r$-partite. Then we define the blow-up $\mathcal{H}[\mathcal{H}_r]$ to be the ordered $r$-uniform hypergraph matching of size $|\mathcal{H}[\mathcal{H}_r]|=|\mathcal{H}|\cdot |\mathcal{H}_r|$ defined as follows. For the vertex set of $\mathcal{H}[\mathcal{H}_r]$, starting with $\mathcal{H}$, let us replace every vertex of $\mathcal{H}$ with $|\mathcal{H}_r|$ copies of itself (right after each other as a contiguous interval in the ordering of the vertex set). Then, for every edge $e\in \mathcal{H}$, let us take a matching of $|\mathcal{H}_r|$ edges between these copies of the vertices of $e$, in the shape prescribed by $\mathcal{H}_r$.* *More formally, this can be described when modelling $\mathcal{H}$ and $\mathcal{H}_r$ by pairwise disjoint size-$r$ subsets of $\mathbb{Z}$. As $\mathcal{H}_r$ is $r$-partite, there exists a sequence $x_0 < x_1 < \ldots < x_r$ of real numbers such that $|e\cap (x_{i-1},x_i)|=1$ for all $i=1,\dots,r$ and all $e\in \mathcal{H}_r$. Take a large integer $N$ such that $N>x_i-x_{i-1}$ for $i=1,\dots,r$, and define the blow-up $\mathcal{H}[\mathcal{H}_r]$ to be the collection of size-$r$ subsets of $\mathbb{Z}$ of the form $$\{ N v_1 + y_1, N v_2 + y_2, \ldots, N v_r + y_r \}$$ for all edges $\{v_1,\dots,v_r\}\in \mathcal{H}$ with $v_1<\dots<v_r$ and all edges $\{y_1,\dots,y_r\}\in \mathcal{H}_r$ with $y_1<\dots<y_r$.* Note that up to an isomorphism of ordered $r$-uniform hypergraph matchings, the second description in the definition above does not depend on the choice of $N$ or the choices made when modelling $\mathcal{H}$ and $\mathcal{H}_r$ as collections of subsets of $\mathbb{Z}$. For ordered $r$-uniform hypergraph matchings $\mathcal{H}$ and $\mathcal{H}_r$ as in Definition [Definition 10](#def-blow-up){reference-type="ref" reference="def-blow-up"}, we have $$\label{Lp} L_P(\mathcal{H}[\mathcal{H}_r]) \le L_P(\mathcal{H}) L_P(\mathcal{H}_r)$$ for every $r$-pattern $P$. Indeed, for any $P$-clique in $\mathcal{H}[\mathcal{H}_r]$, the corresponding edges in $\mathcal{H}$ must also form a $P$-clique (indeed, any two edges in $\mathcal{H}[\mathcal{H}_r]$ coming from different edges in $\mathcal{H}$ form the same $r$-pattern as the corresponding two edges in $\mathcal{H}$). Furthermore, for every edge $e\in \mathcal{H}$, the size of the largest $P$-clique among the edges in $\mathcal{H}[\mathcal{H}_r]$ corresponding to $e$ is bounded by $L_P(\mathcal{H}_r)$ (indeed, such a $P$-clique in $\mathcal{H}[\mathcal{H}_r]$ corresponds to a $P$-clique in $\mathcal{H}_r$). The other ingredient in our proof of Theorem [Theorem 9](#thm-paramteters-lower-bound-stronger){reference-type="ref" reference="thm-paramteters-lower-bound-stronger"} is the following statement, giving a construction of an $r$-partite ordered $r$-uniform hypergraph matching $\mathcal{H}$ with prescribed bounds for the clique sizes $L_P(\mathcal{H})$. This statement is a direct consequence of the second part of Theorem [Theorem 4](#thm_es){reference-type="ref" reference="thm_es"}. **Observation 11**. *Let $r$ be a positive integer, and let $\lambda^{(r)}=(1,\dots,1)$ be the unique ordered partition of $r$ into $r$ parts. For every $r$-pattern $P\in \mathcal{P}(\lambda^{(r)})$, consider a positive integer $m_P$. 
Then there exists an $r$-partite ordered $r$-uniform hypergraph matching $\mathcal{H}$ of size $$|\mathcal{H}|=\prod_{P\in \mathcal{P}(\lambda^{(r)})} m_{P},$$ such that $L_P(\mathcal{H})\le m_P$ for all $r$-patterns $P\in \mathcal{P}(\lambda^{(r)})$ and $L_P(\mathcal{H})=1$ for all other $r$-patterns $P\not\in \mathcal{P}(\lambda^{(r)})$.* We note that for any $r$-partite ordered $r$-uniform hypergraph matching $\mathcal{H}$ we automatically have $L_{P}(\mathcal{H})=1$ for any $P\not\in \mathcal{P}(\lambda^{(r)})$. Indeed, for any two edges $e,f\in \mathcal{H}$, the first vertices of $e$ and $f$ are both before the second vertices of $e$ and $f$, which are both before the third vertices of $e$ and $f$ and so on. Hence the $r$-pattern formed by $e$ and $f$ consists of $r$ blocks that each have one $\rm A$ and one $\rm B$, so the $r$-pattern has a block partition into $r$ parts and therefore belongs to $\mathcal{P}(\lambda^{(r)})$. *Proof of Observation [Observation 11](#obs-construction-partite){reference-type="ref" reference="obs-construction-partite"}.* Recall from the discussion at the end of Section [2](#sect-preliminaries){reference-type="ref" reference="sect-preliminaries"} that any $r$-pattern $P\in \mathcal{P}(\lambda^{(r)})$ is of the form $P=P(\lambda^{(r)},\tau)$ for a unique function $\tau:\{1,\dots,r\}\to \{1,-1\}$ with $\tau(1)=1$. For every such function $\tau$, let us define $m_\tau=m_{P(\lambda^{(r)},\tau)}$. By the second part of Theorem [Theorem 4](#thm_es){reference-type="ref" reference="thm_es"}, there exists a collection of points $Z\subseteq\mathbb{R}^r$ of size $|Z|=\prod_{\tau} m_{\tau}=\prod_{P\in \mathcal{P}(\lambda^{(r)})} m_{P}$ such that the $|Z|\cdot r$ coordinates of all the points in $Z$ are all distinct, and such that for every function $\tau:\{1,\dots,r\}\to \{1,-1\}$ with $\tau(1)=1$, every $\tau$-monotone sequence in $Z$ has length at most $m_\tau=m_{P(\lambda^{(r)},\tau)}$. Without loss of generality, we may assume that all points in $Z$ have positive coordinates (otherwise, we can translate the set $Z$ accordingly). Let $N>0$ be a number larger than all coordinates of all points in $Z$. Now, for every point $z=(z_1,\dots,z_r)\in Z\subseteq\mathbb{R}^r$, let us define $e(z)=\{N+ z_1, 2N + z_2, \ldots, r N+z_r \}\subseteq\mathbb{R}$, and note that the sets $e(z)$ for $z\in Z$ are pairwise disjoint. Taking $\mathcal{H}=\{e(z)\mid z\in Z\}$ to be the collection of the sets $e(z)$ for all $z\in Z$, we obtain an ordered $r$-uniform hypergraph matching $\mathcal{H}$ by considering the natural ordering of $\mathbb{R}$ on the vertex set. Note that $\mathcal{H}$ has size $|\mathcal{H}|=|Z|=\prod_{P\in \mathcal{P}(\lambda^{(r)})} m_{P}$. Furthermore, $\mathcal{H}$ is $r$-partite (by the choice of $N$) and so in particular we have $L_{P}(\mathcal{H})=1$ for any $P\not\in \mathcal{P}(\lambda^{(r)})$. It remains to check $L_{P}(\mathcal{H})\le m_P$ for every $P\in \mathcal{P}(\lambda^{(r)})$. So let $P\in \mathcal{P}(\lambda^{(r)})$ and let $\tau:\{1,\dots,r\}\to \{1,-1\}$ with $\tau(1)=1$ be such that $P=P(\lambda^{(r)},\tau)$; then we have $m_\tau=m_P$. Let $z^{(1)},\dots,z^{(t)}\in Z$ be such that the edges $e(z^{(1)}),\dots,e(z^{(t)})$ form a $P$-clique in $\mathcal{H}$. Then for any $1\le \ell<\ell'\le t$, for any $i\in \{1,\dots,r\}$ with $\tau(i)=1$, the $i$-th vertex $iN+z^{(\ell)}_i$ of $e(z^{(\ell)})$ comes before the $i$-th vertex $iN+z^{(\ell')}_i$ of $e(z^{(\ell')})$ and so we have $z^{(\ell)}_i<z^{(\ell')}_i$. 
Similarly, for any $i\in \{1,\dots,r\}$ with $\tau(i)=-1$, the $i$-th vertex of $e(z^{(\ell)})$ comes after the $i$-th vertex of $e(z^{(\ell')})$ and so we have $z^{(\ell)}_i>z^{(\ell')}_i$. Thus, for every $i\in \{1,\dots,r\}$ with $\tau(i)=1$, the $i$-th coordinates of the points $z^{(1)},\dots,z^{(t)}$ form an increasing sequence $z^{(1)}_i<\dots<z^{(t)}_i$, and for every $i\in \{1,\dots,r\}$ with $\tau(i)=-1$, they form a decreasing sequence $z^{(1)}_i>\dots>z^{(t)}_i$. Thus, $z^{(1)},\dots,z^{(t)}$ is a $\tau$-monotone sequence of points in $Z$ and hence $t\le m_\tau= m_P$. This shows that $L_P(\mathcal{H})\le m_P$, as desired. ◻ Finally, we are ready to prove Theorem [Theorem 9](#thm-paramteters-lower-bound-stronger){reference-type="ref" reference="thm-paramteters-lower-bound-stronger"} (implying Theorem [Theorem 3](#thm-paramteters-lower-bound){reference-type="ref" reference="thm-paramteters-lower-bound"}). *Proof of Theorem [Theorem 9](#thm-paramteters-lower-bound-stronger){reference-type="ref" reference="thm-paramteters-lower-bound-stronger"}.* We prove the theorem by induction on $r$. Note that the base case $r=1$ is trivial, so let us assume $r\ge 2$ and that we already proved the theorem for $r-1$. In the sequence $\lambda^{(1)} \succ \lambda^{(2)} \succ \dots \succ \lambda^{(r)}$, each ordered partition $\lambda^{(s)}$ for $s=1,\dots,r$ must have precisely $s$ parts. In particular, the partition $\lambda^{(r-1)}$ has the form $\lambda^{(r-1)}=(1,\dots,1,2,1,\dots,1)$. Let $j\in \{1,\dots,r-1\}$ be the unique index such that $\lambda^{(r-1)}_j=2$. If we imagine $\lambda^{(1)} \succ \lambda^{(2)} \succ \dots \succ \lambda^{(r)}$ as partitions of the set $\{1,\dots,r\}$ into intervals, then the partition $\lambda^{(r-1)}$ consists of the set $\{j,j+1\}$ and singleton sets for all the remaining elements. Hence $j$ and $j+1$ are in the same set for each of the partitions $\lambda^{(1)} \succ \lambda^{(2)} \succ \dots \succ \lambda^{(r-1)}$. If we ignore $j$ and relabel the remaining elements accordingly (i.e. $j+1$ is relabeled $j$, $j+2$ is relabeled $j+1$, and so on), $\lambda^{(1)} \succ \lambda^{(2)} \succ \dots \succ \lambda^{(r-1)}$ turns into a sequence of ordered partitions $\mu^{(1)} \succ \mu^{(2)} \succ \dots \succ \mu^{(r-1)}$ of $r-1$. For $s=1,\dots,r-1$ we obtain $\lambda^{(s)}$ from $\mu^{(s)}$ by doubling the element $j$ (keeping both copies in the same set of the partition), and relabeling the ground set into $\{1,\dots,r\}$ accordingly. Now, for $s=1,\dots,r-1$, every $r$-pattern $P\in \mathcal{P}(\lambda^{(s)})$ turns into an $(r-1)$-pattern $P'\in \mathcal{P}(\mu^{(s)})$ when omitting the $j$-th $\rm A$ and the $j$-th $\rm B$ in $P$ (note that these occur in the same block, i.e. right adjacent to the $(j+1)$-th $\rm A$ and the $(j+1)$-th $\rm B$, respectively). Conversely, $P$ can be recovered from $P'\in \mathcal{P}(\mu^{(s)})$ by doubling the $j$-th $\rm A$ in $P'$ into $\rm{AA}$ and doubling the $j$-th $\rm B$ in $P'$ into $\rm{BB}$. Thus, these operations give natural bijections between $\mathcal{P}(\lambda^{(s)})$ and $\mathcal{P}(\mu^{(s)})$. For every $(r-1)$-pattern $P'\in \mathcal{P}(\mu^{(1)})\cup\dots\cup \mathcal{P}(\mu^{(r-1)})$, let us define $m_{P'}=m_P$, where $P\in \mathcal{P}(\lambda^{(1)})\cup\dots\cup \mathcal{P}(\lambda^{(r-1)})$ is the $r$-pattern obtained from $P'$ by doubling the $j$-th $\rm A$ and the $j$-th $\rm B$. 
Then, applying the induction hypothesis to the sequence $\mu^{(1)} \succ \mu^{(2)} \succ \dots \succ \mu^{(r-1)}$ of ordered partitions of $r-1$, we obtain an ordered $(r-1)$-uniform hypergraph matching $\mathcal{H}'$ of size $$|\mathcal{H}'|=\prod_{P'\in \mathcal{P}(\mu^{(1)})\cup\dots\cup \mathcal{P}(\mu^{(r-1)})} m_{P'}=\prod_{P\in \mathcal{P}(\lambda^{(1)})\cup\dots\cup \mathcal{P}(\lambda^{(r-1)})} m_{P},$$ such that $L_{P'}(\mathcal{H}')\le m_{P'}$ for all $(r-1)$-patterns $P'\in \mathcal{P}(\mu^{(1)})\cup\dots\cup \mathcal{P}(\mu^{(r-1)})$ and $L_{P'}(\mathcal{H}')=1$ for all other $(r-1)$-patterns $P'\not\in \mathcal{P}(\mu^{(1)})\cup\dots\cup \mathcal{P}(\mu^{(r-1)})$. From $\mathcal{H}'$, we can now obtain an ordered $r$-uniform hypergraph matching $\mathcal{H}^*$ with $|\mathcal{H}^*|=|\mathcal{H}'|$ by doubling the $j$-th element of every edge $e'\in \mathcal{H}'$. More formally, for every edge $e'\in \mathcal{H}'$, we add a new element to $e'$ right before the $j$-th element of $e'$ (in the given ordering of the underlying vertex set of $\mathcal{H}'$). For the resulting edge $e\in \mathcal{H}^*$, we can recover the original edge $e'\in \mathcal{H}'$ by deleting the $j$-th element of $e$. If $e,f\in \mathcal{H}^*$ form some $r$-pattern $P$, then the $(r-1)$-pattern formed by the corresponding edges $e',f'\in \mathcal{H}'$ is the $(r-1)$-pattern $P'$ obtained from $P$ by omitting the $j$-th $\rm A$ and the $j$-th $\rm B$. Thus, by the properties of $\mathcal{H}'$, we have $L_P(\mathcal{H}^*)\le m_P$ for all $r$-patterns $P\in \mathcal{P}(\lambda^{(1)})\cup\dots\cup \mathcal{P}(\lambda^{(r-1)})$ and $L_P(\mathcal{H}^*)=1$ for all other $r$-patterns $P\not\in \mathcal{P}(\lambda^{(1)})\cup\dots\cup \mathcal{P}(\lambda^{(r-1)})$. Finally, recall that $\lambda^{(r)}=(1,\dots,1)$ is the unique ordered partition of $r$ into $r$ parts. Let $\mathcal{H}_r$ be an $r$-partite ordered $r$-uniform hypergraph matching as in Observation [Observation 11](#obs-construction-partite){reference-type="ref" reference="obs-construction-partite"}. Now, the blow-up $\mathcal{H}=\mathcal{H}^*[\mathcal{H}_r]$ has size $$|\mathcal{H}|=|\mathcal{H}^*|\cdot |\mathcal{H}_r|=|\mathcal{H}'|\cdot |\mathcal{H}_r|=\prod_{P\in \mathcal{P}(\lambda^{(1)})\cup\dots\cup \mathcal{P}(\lambda^{(r-1)})} m_{P}\cdot \prod_{P\in \mathcal{P}(\lambda^{(r)})} m_{P}=\prod_{P\in \mathcal{P}(\lambda^{(1)})\cup\dots\cup \mathcal{P}(\lambda^{(r)})} m_{P},$$ and by ([\[Lp\]](#Lp){reference-type="ref" reference="Lp"}) for every $r$-pattern $P$ we have $L_P(\mathcal{H})\le L_P(\mathcal{H}^*)\cdot L_P(\mathcal{H}_r)$. Hence $L_P(\mathcal{H})\le m_P\cdot 1=m_P$ for all $P\in \mathcal{P}(\lambda^{(1)})\cup\dots\cup \mathcal{P}(\lambda^{(r-1)})$, and $L_P(\mathcal{H})\le 1\cdot m_P=m_P$ for all $P\in \mathcal{P}(\lambda^{(r)})$, and also $L_P(\mathcal{H})\le 1\cdot 1=1$ for all $P\not\in \mathcal{P}(\lambda^{(1)})\cup\dots\cup \mathcal{P}(\lambda^{(r)})$. Thus, we have constructed the desired ordered $r$-uniform hypergraph matching $\mathcal{H}$. ◻ Michael Anastos, Zhihan Jin, Matthew Kwan, and Benny Sudakov. "Extremal, enumerative and probabilistic results on ordered hypergraph matchings." arXiv preprint arXiv:2308.12268 (2023). Marcelo Campos, Simon Griffiths, Robert Morris, and Julian Sahasrabudhe. "An exponential improvement for diagonal Ramsey." arXiv preprint arXiv:2303.09521 (2023). Yair Caro. "New Results on the Independence Number." Technical Report, Tel-Aviv University, 1979. 
Recep Altar Çiçeksiz, Zhihan Jin, Eero Räty, and István Tomon. "Exponential Erdős--Szekeres theorem for matrices." arXiv preprint arXiv:2305.07003 (2023). Robert P. Dilworth. "A Decomposition Theorem for Partially Ordered Sets." Annals of Mathematics (1950): 161--166. Andrzej Dudek, Jarosław Grytczuk, and Andrzej Ruciński. "Ordered unavoidable sub-structures in matchings and random matchings." arXiv preprint arXiv:2210.14042 (2022). Andrzej Dudek, Jarosław Grytczuk, and Andrzej Ruciński. "Erdős--Szekeres type theorems for ordered uniform matchings." arXiv preprint arXiv:2301.02936 (2023). Paul Erdős and George Szekeres. "A combinatorial problem in geometry." Compositio Mathematica 2 (1935): 463--470. Timothy W. Gowers and Jason Long. "The length of an s-increasing sequence of r-tuples." Combinatorics, Probability and Computing 30, no. 5 (2021): 686--721. Ronald L. Graham, Bruce L. Rothschild, and Joel H. Spencer. Ramsey theory. Vol. 20. John Wiley & Sons, 1991. Choongbum Lee. "Ramsey numbers of degenerate graphs." Annals of Mathematics 185, no. 3 (2017): 791--829. Nathan Linial and Michael Simkin. "Monotone subsequences in high-dimensional permutations." Combinatorics, Probability and Computing 27, no. 1 (2018): 69--83. Sam Mattheus and Jacques Verstraete. "The asymptotics of $r(4,t)$." arXiv preprint arXiv:2306.04007 (2023). Leon Mirsky. "A dual of Dilworth's decomposition theorem." Amer. Math. Monthly 78, no. 8 (1971): 876--877. Guy Moshkovitz and Asaf Shapira. "Ramsey theory, integer partitions and a new proof of the Erdős--Szekeres theorem." Advances in Mathematics 262 (2014): 1107--1129. Hans Jürgen Prömel. Ramsey theory for discrete structures. Vol. 3. New York: Springer, 2013. Christian Reiher and Vojtěch Rödl. "The girth Ramsey theorem." arXiv preprint arXiv:2308.15589 (2023). Tibor Szabó and Gábor Tardos. "A multidimensional generalization of the Erdős--Szekeres lemma on monotone subsequences." Combinatorics, Probability and Computing 10, no. 6 (2001): 557--565. Victor K. Wei. "A Lower Bound on the Stability Number of a Simple Graph." Technical memorandum, TM 81-11217-9, Bell Laboratories, 1981. [^1]: Institute for Applied Mathematics, University of Bonn, Bonn, Germany. Email: [`sauermann@iam.uni-bonn.de`](sauermann@iam.uni-bonn.de). Research supported by NSF Award DMS-2100157 and a Sloan Research Fellowship. [^2]: Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA. Research supported by a Jane Street Graduate Research Fellowship. Email: [`zakhdm@mit.edu`](zakhdm@mit.edu).
arxiv_math
{ "id": "2309.04813", "title": "A sharp Ramsey theorem for ordered hypergraph matchings", "authors": "Lisa Sauermann, Dmitrii Zakharov", "categories": "math.CO", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | An infinite family of $(q^2+q+1)$-ovoids of $\mathcal{Q}^+(7,q)$, $q\equiv 1\pmod{3}$, admitting the group $\mathrm{PGL}(3,q)$, is constructed. The main tool is the general theory of generalized hexagons. address: - Francesco Pavese, Department of Mechanics, Mathematics and Management, Polytechnic University of Bari, Via Orabona 4, 70125 Bari, Italy; - Hanlin Zou, School of Mathematical Sciences, Zhejiang University, Hangzhou 310058, China author: - Francesco Pavese, Hanlin Zou title: An infinite family of $m$-ovoids of the hyperbolic quadrics $\mathcal{Q}^+(7,q)$ --- # Introduction {#sec1} Let $q$ be a prime power, $\mathbb{F}_q$ the finite field of order $q$, and $V$ a finite dimensional vector space over $\mathbb{F}_q$. Let $f$ be a non-degenerate reflexive sesquilinear form or a non-singular quadratic form on $V$. The finite classical polar space $\mathcal{S}$ associated with $(V, f)$ is the geometry consisting of the totally singular or totally isotropic subspaces with respect to $f$ of the ambient projective space $\mathrm{PG}(V)$, according to whether $f$ is a quadratic or sesquilinear form. The totally singular or totally isotropic one-dimensional subspaces are the *points* of $\mathcal{S}$, and the collection of all the points of $\mathcal{S}$ will be denoted by $\mathcal{P}$. The totally singular or totally isotropic subspaces of maximum dimension are called the *generators* of $\mathcal{S}$. The *rank* of $\mathcal{S}$ is the vector space dimension of its generators. A finite classical polar space of rank 2 is a point-line geometry, and is also called a *finite generalized quadrangle*. Polar spaces over finite fields are very interesting geometric structures because they possess large automorphism groups, namely the finite classical groups. In this context it is natural to investigate combinatorial objects embedded in polar spaces admitting a fairly large group. Here, along these lines, we are interested in particular substructures of polar spaces called $m$-ovoids, having many symmetries. The notion of an $m$-ovoid came from "ovoids" of the projective space $\mathrm{PG}(3,q)$. It was first defined in generalized quadrangles by Thas [@Thas89], and was later extended naturally to finite classical polar spaces of higher rank in the work of Shult and Thas [@ST94], where an *$m$-ovoid* was defined to be a set of points having exactly $m$ points in common with each generator (a 1-ovoid is simply called an ovoid). In [@BLP09] the authors found that $m$-ovoids and tight sets of generalized quadrangles share the property that they have two intersection numbers with respect to perps of points (one for points in the set and the other for points not in the set), and they coined such sets intriguing. Here, the perp of a point $P$ is the set of the points of the generalized quadrangle that are collinear with $P$. In a subsequent paper [@BKLP07], the authors extended the concept of an intriguing set to finite classical polar spaces of higher rank. Over the past two decades, intriguing sets have been extensively studied and they are known to have close connection to many geometric/combinatorial objects, such as translation planes, strongly regular graphs, two-weight codes, Boolean degree one functions, completely regular codes of strength 0 and covering radius one, and Cameron-Liebler line classes. We refer the reader to [@BLMX; @CP18; @FI19; @FMRQZ; @FT20; @FWX20; @KNS; @LN20] for some recent results on intriguing sets of polar spaces. 
There are some straightforward methods for constructing $m$-ovoids. Let $\mathcal{S}$ be a polar space of rank $r$ over the field $\mathbb{F}_q$. As an example, the whole point set of $\mathcal{S}$ is a $\frac{q^r-1}{q-1}$-ovoid. Let $A$ and $B$ be an $m$-ovoid and $n$-ovoid of $\mathcal{S}$, respectively. If $A\subseteq B$, then $B\setminus A$ is an $(n-m)$-ovoid. In particular, the complement of $A$ is a $(\frac{q^r-1}{q-1}-m)$-ovoid. Similarly, if $A$ and $B$ are disjoint, then $A\cup B$ is an $(m+n)$-ovoid. In hyperbolic polar spaces $\mathcal{S}=\mathcal{Q}^+(2r-1,q)$, there are some less straightforward ways to construct $m$-ovoids. First of all, a non-degenerate hyperplane section of $\mathcal{S}$ which is an embedded $\mathcal{Q}(2r-2,q)$ is a $\frac{q^{r-1}-1}{q-1}$-ovoid, and an $m$-ovoid of the embedded $\mathcal{Q}(2r-2,q)$ is also an $m$-ovoid of $\mathcal{S}$. Furthermore, new $m$-ovoids can be obtained from old ones by derivation (see [@Kelly07]). There are other constructions in hyperbolic spaces of different ranks. If $r=2$, then $\mathcal{S}=\mathcal{Q}^+(3,q)$, which is a grid. It is easy to see that $\mathcal{Q}^+(3,q)$ can be partitioned into ovoids and so $m$-ovoids exist for all possible $m$. If $r=3$, then $\mathcal{S}=\mathcal{Q}^+(5,q)$ and an ovoid of $\mathcal{Q}^+(5,q)$ is equivalent to a spread of $\mathrm{PG}(3,q)$ under the Klein correspondence. It is known that the lines of $\mathrm{PG}(3,q)$ can be partitioned into spreads (such a partition is called a packing of $\mathrm{PG}(3,q)$) [@Den73]. This means that $m$-ovoids of $\mathcal{Q}^+(5,q)$ exist for all possible $m$. The situation is quite different when $r$ is greater than 3, and only a few results are known. We will be focusing on $\mathcal{Q}^+(7,q)$ in this work. It is known that ovoids exist in $\mathcal{Q}^+(7,q)$ when $q$ is even, $q$ is an odd prime, or $q\equiv 0$ or $2\pmod{3}$ [@HT16 Table 7.3]. As mentioned above, $m$-ovoids of $\mathcal{Q}(6,q)$ are $m$-ovoids of $\mathcal{Q}^+(7,q)$. Constructions of $m$-ovoids of $\mathcal{Q}(6,q)$ can be found in [@BKLP07; @CP16; @LN20]. There are further known $m$-ovoids of $\mathcal{Q}^+(7,q)$, which we present in Section [4](#concluding){reference-type="ref" reference="concluding"} when dealing with the isomorphism issue. In this paper, we construct an infinite family of $(q^2+q+1)$-ovoids of $\mathcal{Q}^+(7,q)$, $q\equiv 1\pmod{3}$, admitting $\mathrm{PGL}(3,q)$ as an automorphism group and not contained in a hyperplane section. The promised $(q^2+q+1)$-ovoid will be described as a set $\mathcal O$ of points covered by the planes spanned by two incident lines of a thin hexagon embedded in $\mathcal{Q}^+(7,q)$. The advantage is that in this way one can apply basic results from the theory of generalized hexagons. The properties of the hexagon that will be needed are established in Section [2](#sec_pre){reference-type="ref" reference="sec_pre"}. In Section [3](#sec_main){reference-type="ref" reference="sec_main"} we define the set $\mathcal O$ precisely and show that it is a $(q^2+q+1)$-ovoid. Finally, we show that our example is new in Section [4](#concluding){reference-type="ref" reference="concluding"}, and we pose an open problem at the end. For the rest of this paper, we shall use the following definition of $m$-ovoids rather than the one given by Shult and Thas as mentioned in the second paragraph. **Definition 1**. *Let $\mathcal{P}$ be the point set of $\mathcal{Q}^+(2r-1,q)$. 
A subset $\mathcal{M}$ of $\mathcal{P}$ is called an *$m$-ovoid* if there is an integer $m>0$ such that for all $P\in \mathcal{P}$, $$\label{def} |P^\perp\cap \mathcal{M}|=\begin{cases} m\theta_{r-1}-\theta_{r-1}+1,&\textup{if~} P\in \mathcal{M},\\ m\theta_{r-1},&\textup{if~}P\notin\mathcal{M}, \end{cases}$$ where $\theta_r=q^{r-1}+1$ and $P^{\perp}$ is the set of points in $\mathcal{Q}^+(2r-1,q)$ that are collinear with $P$.* # Preliminaries {#sec_pre} Let $p$ be a prime, and $q=p^e$, where $e\ge 1$ is an integer. The multiplicative group of $\mathbb{F}_q$ will be denoted by $\mathbb{F}_q^*$. For an integer $n\ge 1$, the *relative trace* $\mathrm{Tr}_{q^n/q}$ from $\mathbb{F}_{q^n}$ to $\mathbb{F}_q$ is defined by $$\mathrm{Tr}_{q^n/q}(z)=z+z^q+z^{q^2}+\cdots+z^{q^{n-1}},\; \forall z\in\mathbb{F}_{q^n}.$$ In particular, if $q=p$, then $\mathrm{Tr}_{q^n/q}$ is called the *absolute trace*. ## Cubic polynomials over $\mathbb{F}_q$ Let $q=p^e$ be a prime power, where $p\ne 3$ is a prime. Let $f(x)=x^3+cx+d$ be a cubic polynomial over $\mathbb{F}_q$, and let $\gamma_1,\gamma_2,\gamma_3$ be its roots in some extension field of $\mathbb{F}_q$. The discriminant of $f$ is defined by $$\Delta_f:=(\gamma_1-\gamma_2)^2(\gamma_2-\gamma_3)^2(\gamma_3-\gamma_1)^2,$$ which equals $-4c^3-27d^2$ for all $q$. In particular, when $q$ is even, we have $\Delta_f=d^2$. **Lemma 2**. *If $q\equiv 1\pmod{3}$ with $q$ odd, then $-3$ is a nonzero square in $\mathbb{F}_q$.* *Proof.* Let $q=p^e$. In the case $p\equiv 2 \pmod{3}$, we have that $e$ is even, and so $-3$ is a nonzero square in $\mathbb{F}_q$. In the case $p\equiv 1\pmod{3}$, by the quadratic reciprocity law we have $$\left(\frac{-3}{p}\right)=\left(\frac{-1}{p}\right)\left(\frac{3}{p}\right)=(-1)^{(p-1)/2}(-1)^{(3-1)/2\cdot(p-1)/2}\left(\frac{p}{3}\right)=1.$$ Here $(\frac{\cdot}{p})$ is the Legendre symbol. Hence $-3$ is a square in $\mathbb{F}_p$ and also a square in $\mathbb{F}_q$. ◻ We shall need the following theorem giving the nonexistence condition for the roots of $f$ in $\mathbb{F}_q$ in various situations. **Theorem 3** ([@Dixon; @Williams1975]). *Let $q$ be a prime power with $q\equiv 1\pmod{3}$. Suppose that $f(x)=x^3+cx+d$ is a polynomial over $\mathbb{F}_q$ with discriminant $\Delta_f\ne 0$.* 1. *If $q$ is odd and we write $-3=\alpha^2$ for some $\alpha\in\mathbb{F}_{q}$, then $f$ has no roots in $\mathbb{F}_q$ if $\Delta_f$ is a square in $\mathbb{F}_q$, say $\Delta_f=81\beta^2$ for some $\beta\in\mathbb{F}_q$, and $2^{-1}(-d+\alpha\beta)$ is not a cube in $\mathbb{F}_{q}$.* 2. *If $q$ is even, then $f$ has no roots in $\mathbb{F}_q$ if $\mathrm{Tr}_{q/2}(c^3d^{-2})=\mathrm{Tr}_{q/2}(1)$, and the roots $t_1,t_2$ of $t^2+dt+c^3=0$ are not cubes in $\mathbb{F}_q$.* **Lemma 4**. *Let $q$ be a prime power with $q\equiv 1\pmod{3}$ and $\theta$ a non-cubic element of $\mathbb{F}_q^*$. Then the equation $x^3+\theta y^3+\theta^2z^3-3\theta xyz=0$ has only the trivial solution $x=y=z=0$ in $\mathbb{F}_q$.* *Proof.* If $z=0$, then the equation is reduced to $x^3+\theta y^3=0$. Since $\theta$ is not a cube, the above equation has solutions in $\mathbb{F}_q$ if and only if $x=y=0$. Similarly, if $x=0$ or $y=0$, the original equation has solutions in $\mathbb{F}_q$ for $x,y,z$ if and only if $x=y=z=0$. So it is enough to show that the polynomial $$f(x):=x^3-3\theta yzx+\theta y^3+\theta^2z^3$$ has no roots in $\mathbb{F}_q$ for any $y,z \in \mathbb{F}_q^*$. *Case 1. 
Assume $q$ is odd.* The discriminant of $f$ is $$\begin{aligned} \Delta_f=&-4(-3\theta yz)^3-27(\theta y^3+\theta^2z^3)^2=(-3)\cdot 9(\theta y^3-\theta^2z^3)^2.\end{aligned}$$ By Lemma [Lemma 2](#-3s){reference-type="ref" reference="-3s"}, we see that $\Delta_f$ is a nonzero square in $\mathbb{F}_q$. Furthermore, if we write $-3=\alpha^2$, and $\Delta_f=81\beta^2=-27\alpha^2\beta^2$, then $\alpha\beta=\pm(\theta y^3-\theta^2z^3)$. This implies that $2^{-1}(\alpha\beta-(\theta y^3+\theta^2z^3))=-\theta^2z^3$ or $-\theta y^3$; since $-1=(-1)^3$ is a cube, neither of these is a cube in $\mathbb{F}_q$. Therefore $f(x)$ has no roots in $\mathbb{F}_q$ by Theorem [Theorem 3](#cubic){reference-type="ref" reference="cubic"}. *Case 2. Assume $q$ is even.* Then $q=2^n$ and $n$ is even. We have $$\mathrm{Tr}_{q/2}\left(\frac{(\theta yz)^3}{(\theta y^3+\theta^2z^3)^2}\right)=\mathrm{Tr}_{q/2}\left(\frac{\theta y^3}{\theta y^3+\theta^2 z^3}+\frac{\theta^2y^6}{(\theta y^3+\theta^2 z^3)^2}\right)=0=\mathrm{Tr}_{q/2}(1).$$ Moreover, the equation $t^2+(\theta y^3+\theta^2 z^3)t+(\theta y z)^3=0$ has solutions $t_1=\theta y^3$ and $t_2=\theta^2 z^3$, neither of which is a cube in $\mathbb{F}_q$. Using Theorem [Theorem 3](#cubic){reference-type="ref" reference="cubic"} again, we conclude that $f(x)$ has no roots in $\mathbb{F}_q$. ◻ ## An embedded generalized hexagon in $\mathrm{PG}(7,q)$ Let $\mathcal{I}$ be a point-line incidence geometry with point set $\mathcal{P}$, line set $\mathcal{L}$ and incidence relation $\textbf{I}\subseteq \mathcal{P}\times \mathcal{L}$. The *incidence graph* of $\mathcal{I}$ is the graph with vertex set $\mathcal{P}\cup\mathcal{L}$, where adjacency is given by the incidence relation **I**. A *generalized hexagon* is a point-line geometry such that its incidence graph has diameter 6 and girth 12. A generalized hexagon is said to have *order* $(s,t)$ if every line is incident with exactly $s+1$ points and if every point is incident with precisely $t+1$ lines. A quick example of a generalized hexagon is the *flag geometry* $\mathcal{I}$ of a projective plane $\Pi$ of order $s$ defined as follows. The points of $\mathcal{I}$ are the *flags* of $\Pi$ (i.e., the incident point-line pairs); the lines of $\mathcal{I}$ are the points and lines of $\Pi$. Incidence between points and lines of $\mathcal{I}$ is reverse containment, that is, a flag $(p,\ell)$ of $\Pi$ is incident in $\mathcal{I}$ precisely with the elements $p$ and $\ell$. It follows that $\mathcal{I}$ is a generalized hexagon of order $(s,1)$. We refer the reader to [@polygon] for more background information about generalized hexagons. Let $\mathcal S_{2,2}$ be the Segre variety of $\mathrm{PG}(8, q)$ consisting of the $(q^2+q+1)^2$ points represented by $$\begin{aligned} & (a_1b_1, a_1b_2, a_1b_3, a_2b_1, a_2b_2, a_2b_3, a_3b_1, a_3b_2, a_3b_3), \label{point}\end{aligned}$$ where $a_i, b_i \in \mathbb{F}_q$, $i = 1,2,3$, with $(a_1, a_2, a_3) \ne (0,0,0)$ and $(b_1, b_2, b_3) \ne (0,0,0)$. By means of the map $$\rho: M = (m_{i j}) \in \mathcal M_{3,3}(q) \mapsto (m_{11}, m_{12}, m_{13}, m_{21}, m_{22}, m_{23}, m_{31}, m_{32}, m_{33}) \in \mathbb{F}_q^9, \label{map}$$ points of $\mathcal S_{2,2}$ correspond to rank one $3 \times 3$ matrices over $\mathbb{F}_q$. Moreover, $\mathcal S_{2, 2}$ is contained in the parabolic quadric $\mathcal Q(8, q)$ given by $$\begin{aligned} & X_2 X_4 - X_1 X_5 + X_3 X_7 - X_1 X_9 + X_6 X_8 - X_5 X_9 = 0.\end{aligned}$$ There is a group, say $G$, isomorphic to $\mathrm{PGL}(3, q)$ fixing both $\mathcal S_{2, 2}$ and $\mathcal Q(8, q)$.
This can be seen by the map $$\rho^{-1}(X)\mapsto A\rho^{-1}(X)A^{-1}, \;\forall A\in \mathrm{GL}(3,q), X\in \mathcal S_{2, 2}\cap\mathcal Q(8, q).$$ Such a group fixes the point of $\mathrm{PG}(8, q)$ corresponding to the $3 \times 3$ identity matrix and the hyperplane $\Pi: X_1+X_5+X_9 = 0$. Note that $\Pi \cap \mathcal Q(8, q)$ is hyperbolic, elliptic or a cone, according to whether $q \equiv 1$, $-1$, or $0 \pmod{3}$. Assume that $q \equiv 1 \pmod{3}$ and denote by $\mathcal Q^+(7, q)$ the hyperbolic quadric obtained by intersecting $\mathcal Q(8, q)$ with $\Pi$. Let $\perp$ be the polarity of $\Pi$ associated with $\mathcal Q^+(7, q)$. By [@TV pp. 99--100], the set $\Pi \cap \mathcal S_{2, 2}$ consists of $(q+1)(q^2+q+1)$ points and $2(q^2+q+1)$ lines of a *thin hexagon*, say $\Gamma$; the hexagon $\Gamma$ corresponds to the point-line flag geometry of $\mathrm{PG}(2, q)$. Every line of $\Gamma$ has $q+1$ points of $\Gamma$. Through a point $x$ of $\Gamma$ there pass two lines of $\Gamma$ spanning a plane and we will denote this plane by $\pi_x$. It is readily seen that $\pi_x \subset \mathcal Q^+(7, q)$, for $x \in \Gamma$. The incidence graph of $\Gamma$ is the girth $12$ bipartite graph where the two parts are the points and the lines of $\Gamma$ and adjacency is given by incidence. The *distance* between points and lines of $\Gamma$ is that between two vertices of the incidence graph of $\Gamma$. **Lemma 5**. *The set of $2(q+1)$ lines of $\Gamma$ having distance $1$ or $3$ from a point $x \in \Gamma$ span the hyperplane $x^\perp$ of $\Pi$.* *Proof.* By [@TV Lemma 1], it is enough to show that these $2(q+1)$ lines are contained in the hyperplane $x^\perp$. To see this fact observe that if $r_1, r_2$ are the two lines of $\Gamma$ through $x$, then $r_1, r_2 \subset \pi_x \subset x^\perp$. If $\ell$ is a line of $\Gamma$ and $d(x, \ell) = 3$, then $\ell$ is incident with $r_1$ or $r_2$ in a point $z$ and $\ell \subset \pi_z \subset x^\perp$. ◻ It follows from the previous lemma that the lines of $\Gamma$ at distance $5$ from $x$ and the points of $\Gamma$ at distance $6$ from $x$ are not contained in $x^\perp$. Let $U_i$ be the point of $\mathrm{PG}(8, q)$ having $1$ in the $i$-th position and $0$ elsewhere. **Lemma 6**. *If $x$ is a point of $\Gamma$, then $\pi_{x}^\perp$ meets $\Gamma$ precisely in the two lines of $\Gamma$ through $x$.* *Proof.* Since the group $G$ acts transitively on points of $\Gamma$ (by Lemma [Lemma 8](#lem_orb){reference-type="ref" reference="lem_orb"}), we may assume without loss of generality that $x$ is the point $U_3$. Then some easy calculations show that the $4$-dimensional projective space $X_4 = X_7 = X_8 = 0$ of $\Pi$, i.e. $\pi_{U_3}^\perp$, meets $\Gamma$ precisely in the two lines of $\Gamma$ through $U_3$. ◻ An *apartment* of $\Gamma$ consists of $6q$ points and $6$ lines of $\Gamma$ forming an ordinary $6$-gon. By [@TV Lemma 2] an apartment of $\Gamma$ spans a $5$-dimensional projective space of $\Pi$. **Lemma 7**. *If $x, y$ are distinct points of $\Gamma$, then $\pi_x \cap \pi_y$ is either a line of $\Gamma$ or a point of $\Gamma$ or empty, according to whether $d(x, y)$ equals $2$, $4$ or $6$, respectively.* *Proof.* Let $x, y$ be distinct points of $\Gamma$ and let $\ell_1$, $\ell_2$ be the two lines of $\Gamma$ through $y$. If $d(x, y) = 2$, then $\pi_x \cap \pi_y = \langle x, y \rangle$. If $d(x, y) = 4$, then we may assume that $d(x, \ell_1) = 3$ and $d(x, \ell_2) = 5$. Hence there is a point $z \in \ell_1$ such that $d(x, z) = 2$. 
Since $\ell_2 \not\subset x^\perp$, it follows that $\pi_y \cap x^\perp = \ell_1$ and therefore $\pi_x \cap \pi_y = \{z\}$. If $d(x, y) = 6$, then $d(x, \ell_i) = 5$, $i = 1,2$, and hence there are four points $z_1, z_2, z_3, z_4$ of $\Gamma$ such that $z_i \in \ell_i$, with $d(x, z_i) = 4$, $i = 1,2$, and $d(x, z_3) = d(z_1, z_3) = d(x, z_4) = d(z_2, z_4) = 2$. The points $x, z_4, z_2, y, z_1, z_3$ are the vertices of an apartment of $\Gamma$. Since $\langle x, z_4, z_2, y, z_1, z_3 \rangle \simeq \mathrm{PG}(5, q)$, it follows that $|\pi_x \cap \pi_y| = 0$. ◻ # The construction {#sec_main} Assume $q\equiv 1 \pmod{3}$ and take the notation introduced in the previous section. Set $$\begin{aligned} & \mathcal O= \bigcup_{x \in \Gamma} \pi_x .\end{aligned}$$ By Lemma [Lemma 7](#lem_cap){reference-type="ref" reference="lem_cap"} if $x, y \in \Gamma$, $x \ne y$, and $|\pi_x \cap \pi_y| \ne 0$, then $\pi_x \cap \pi_y \subset \Gamma$. Hence $|\mathcal O| = (q^2-q)(q+1)(q^2+q+1)+(q+1)(q^2+q+1) = (q^2+q+1)(q^3+1)$. This shows that $\mathcal O$ has the correct size of a $(q^2+q+1)$-ovoid of $\mathcal{Q}^+(7,q)$ and we will show it is indeed a $(q^2+q+1)$-ovoid. We first describe the orbits of the group $G$ on the point set of $\mathcal{Q}^+(7,q)$. Let $\omega$ be a primitive element of $\mathbb{F}_q$, i.e., $\mathbb{F}_q^*=\langle\omega\rangle$. **Lemma 8**. *The group $G$ has $5$ orbits on points of $\mathcal Q^+(7, q)$.* 1. *The orbit $\mathcal O_1$ consists of the $(q+1)(q^2+q+1)$ points of $\Gamma$; a representative for $\mathcal O_1$ is $P_1 = U_3$ and $U_3^\perp$ contains exactly $2(q+1)$ lines of $\Gamma$.* 2. *The orbit $\mathcal O_2$ consists of the $(q^3-q)(q^2+q+1)$ points of $\mathcal O\setminus \Gamma$; a representative for $\mathcal O_2$ is $P_2 = U_2+U_6$ and $P_2^\perp$ contains exactly $2$ lines of $\Gamma$.* 3. *The orbit $\mathcal O_3$ has size $q^3(q+1)(q^2+q+1)/3$; a representative for $\mathcal O_3$ is $P_3 = U_1 + \omega^{\frac{q-1}{3}} U_5 + \omega^{\frac{2(q-1)}{3}} U_9$ and $P_3^\perp$ contains exactly $6$ lines of $\Gamma$.* 4. *The orbit $\mathcal O_i$, $i = 4, 5$, has $q^3(q^2-1)(q-1)/3$ points; a representative for $\mathcal O_4$ is $P_4 = U_2 + U_6 + \omega U_7$ and for $\mathcal O_5$ is $P_5 = U_3 + \omega U_4 + \omega U_8$; $P_i^\perp$, $i = 4, 5$, contains no line of $\Gamma$.* *Proof.* We shall find it helpful to work with the elements of $\mathrm{PGL}(3,q)$ as matrices in $\mathrm{GL}(3,q)$ and the points of $\mathrm{PG}(7,q)$ as $3\times 3$ matrices with trace zero. 1. Let $A\in \mathrm{GL}(3,q)$ and assume that $A\rho^{-1}(U_3)A^{-1}=\lambda \rho^{-1}(U_3)$ for some $\lambda\in\mathbb{F}_{q}^*$. This can be written explicitly as $$\left(\begin{matrix} 0&0& a_{11} \\ 0 &0& a_{21} \\ 0 &0& a_{31} \end{matrix}\right)=\lambda\left(\begin{matrix} a_{31}& a_{32} &a_{33}\\ 0&0&0\\ 0&0&0 \end{matrix}\right),$$ which implies that $a_{11}=\lambda a_{33}$ and $a_{21}=a_{31}=a_{32}=0$. Since $A\in \mathrm{GL}(3,q)$, then $a_{22}a_{33}\neq 0$. It follows that the stabilizer of $P_1$ in $\mathrm{GL}(3,q)$ has size $q^3(q-1)^3$ and so $$|\mathcal{O}_1|=\frac{|\mathrm{GL}(3,q)|}{q^3(q-1)^3}=(q+1)(q^2+q+1).$$ The fact that $U_3^\perp$ contains $2(q+1)$ lines of $\Gamma$ follows from Lemma [Lemma 5](#l_0){reference-type="ref" reference="l_0"}. 2. Let $A\in \mathrm{GL}(3,q)$ with $A\rho^{-1}(P_2)A^{-1}=\lambda \rho^{-1}(P_2)$ for some $\lambda\in\mathbb{F}_q^*$. 
Then $a_{21}=a_{31}=a_{32}=0$, $a_{12}=\lambda a_{23}$, $a_{11}=\lambda a_{22}$, $a_{22}=\lambda a_{33}$, and $a_{33}\neq 0$. Thus $$|\mathcal{O}_2|=\frac{|\mathrm{GL}(3,q)|}{q^2(q-1)^2}=q(q^3-1)(q+1).$$ The point $P_2$ belongs to $\pi_{U_3}$ and hence $P_2^\perp$ contains the two lines $\ell_1, \ell_2$ of $\Gamma$ through $U_3$. Let $\ell$ be a line of $\Gamma$. If $d(U_3, \ell) = 3$ and $\ell \subset P_2^\perp$, then $|\ell \cap \ell_1| = 1$ and $\langle \ell, \ell_1 \rangle$, $\langle P_2, \ell \rangle$, $\langle \ell_1, \ell_2 \rangle$ are three planes of $\mathcal Q^+(7, q)$ spanning a solid and pairwise intersecting in a line. It follows that the solid is contained in $\mathcal Q^+(7, q)$. Therefore $\ell \subset \pi_{U_3}^\perp$, contradicting Lemma [Lemma 6](#l_1){reference-type="ref" reference="l_1"}. If $d(U_3, \ell) = 5$ and $\ell \subset P_2^\perp$, then there is a line $\ell'$ incident with $\ell$ and with $\ell_1$. Hence $d(U_3, \ell') = 3$ and $\ell' \subset P_2^\perp$. Thus, as above, $\ell' \subset \pi_{U_3}^\perp$, contradicting Lemma [Lemma 6](#l_1){reference-type="ref" reference="l_1"}. 3. Let $A\in \mathrm{GL}(3,q)$ with $A\rho^{-1}(P_3)A^{-1}=\lambda\rho^{-1}(P_3)$ for some $\lambda\in\mathbb{F}_q^*$. If $\lambda=1$, then $a_{12}=a_{13}=a_{21}=a_{23}=a_{31}=a_{32}=0$ and $a_{11}a_{22}a_{33}\neq 0$. If $\lambda=\omega^{\frac{q-1}{3}}$, then $a_{11}=a_{13}=a_{21}=a_{22}=a_{32}=a_{33}=0$ and $a_{12}a_{23}a_{31}\neq 0$. If $\lambda=\omega^{\frac{2(q-1)}{3}}$ then $a_{11}=a_{12}=a_{22}=a_{23}=a_{31}=a_{33}=0$ and $a_{13}a_{21}a_{32}\neq 0$. If $\lambda\notin\{1,\omega^{\frac{q-1}{3}},\omega^{\frac{2(q-1)}{3}}\}$, then $A=0$ which is not in $\mathrm{GL}(3,q)$. Therefore, $$|\mathcal{O}_3|=\frac{|\mathrm{GL}(3,q)|}{3(q-1)^3}=\frac{q^3(q+1)(q^2+q+1)}{3}.$$ The hyperplane $P_3^\perp$ of $\Pi$ contains the $5$-dimensional projective space $X_1 = X_5 = X_9 = 0$ which meets $\mathcal Q^+(7, q)$ in a $\mathcal Q^+(5, q)$ and intersects $\Gamma$ precisely in the apartment $\mathcal A$ given by the $6$ lines $\langle U_2, U_3 \rangle, \langle U_3, U_6 \rangle, \langle U_6, U_4 \rangle, \langle U_4, U_7 \rangle, \langle U_7, U_8 \rangle, \langle U_8, U_2 \rangle$. A line $s$ of $\Gamma$ disjoint from $\mathcal A$ cannot be contained in $P_3^\perp$, otherwise $s \cap \mathcal Q^+(5, q)$ would be a point of $\Gamma$ not contained in $\mathcal A$. Similarly, a line $s$ of $\Gamma$ incident with a line $r$ of $\mathcal A$ cannot be contained in $P_3^\perp$, otherwise through $r$ there would be three planes of $\mathcal Q^+(7, q)$ and necessarily two of these three planes would span a solid of $\mathcal Q^+(7, q)$. In this case it follows that there exists a line $r'$ of $\mathcal A$ incident with $r$ such that $s \subset \langle r, r' \rangle^\perp$, contradicting Lemma [Lemma 6](#l_1){reference-type="ref" reference="l_1"}. 4. Assume $A\in \mathrm{GL}(3,q)$ with $A\rho^{-1}(P_4)A^{-1}=\lambda \rho^{-1}(P_4)$ for some $\lambda\in\mathbb{F}_{q}^*$. Then $\lambda^3=1$, $a_{11}=\lambda^2 a_{33}$, $a_{22}=\lambda a_{33}$, $a_{12}=\lambda a_{23}$, $a_{31}=\lambda^2 \omega a_{23}$, $a_{21}=\lambda^2 \omega a_{13}$, and $a_{32}=\lambda \omega a_{13}$. With these conditions, we have $$\det(A)=a_{33}^3+\omega a_{23}^3+\omega^2a_{13}^3-3\omega a_{13}a_{23}a_{33}.$$ Since $\det(A)\neq 0$, by Lemma [Lemma 4](#xyz){reference-type="ref" reference="xyz"}, the only restriction on $\{a_{13}, a_{23}, a_{33}\}$ is $(a_{13},a_{23},a_{33})\neq (0,0,0)$. 
Therefore, $$|\mathcal{O}_{4}|=\frac{|\mathrm{GL}(3,q)|}{3(q^3-1)}=\frac{q^3(q^2-1)(q-1)}{3}.$$ The computation for the orbit of $P_5$ is similar and we omit it. To see that $P_4$ and $P_5$ are not in the same orbit, assume $A\in\mathrm{GL}(3,q)$ such that $A\rho^{-1}(P_4)=\lambda \rho^{-1}(P_5)A$ for some $\lambda\in\mathbb{F}_q^*$. Then $\lambda^3\omega=1$ which is impossible. We conclude that $G$ has five orbits as described above since $|\mathcal{O}_1|+\cdots+|\mathcal{O}_5|=(q^3+1)\frac{q^4-1}{q-1}$ which equals the number of points of $\mathcal{Q}^+(7,q)$. The line $u$ joining $P_4$ and $P_5$ is secant to $\mathcal Q^+(7, q)$ and $u^\perp$ is the $5$-dimensional projective space of $\Pi$ given by $\omega X_3 + X_4 + X_8 = \omega X_2 + \omega X_6 + X_7 = 0$. Furthermore, $u^\perp$ is disjoint from $\Gamma$. Indeed, the point given in [\[point\]](#point){reference-type="eqref" reference="point"} belongs to $u^\perp$ if and only if $$\begin{aligned} & a_1b_1 + a_2b_2 + a_3b_3 = 0 \nonumber \\ & \omega a_1b_2 + \omega a_2b_3 + a_3b_1 = 0 \label{sys} \\ & \omega a_1b_3 + a_2b_1 + a_3b_2 = 0. \nonumber\end{aligned}$$ The system [\[sys\]](#sys){reference-type="eqref" reference="sys"} has no non-trivial solutions since $$\begin{aligned} & \det \begin{pmatrix} b_1 & b_2 & b_3 \\ \omega b_2 & \omega b_3 & b_1 \\ \omega b_3 & b_1 & b_2 \\ \end{pmatrix} = 3 \omega b_1 b_2 b_3 - b_1^3 - \omega b_2^3 - \omega^2 b_3^3 = 0 ,\end{aligned}$$ which implies that $(b_1, b_2, b_3) = (0,0,0)$ by Lemma [Lemma 4](#xyz){reference-type="ref" reference="xyz"}. It follows that no line of $\Gamma$ is contained in $P_4^\perp$ or in $P_5^\perp$.  ◻ **Remark 9**. *[By Lemma [Lemma 8](#lem_orb){reference-type="ref" reference="lem_orb"} and [\[map\]](#map){reference-type="eqref" reference="map"}, the set $\mathcal O$ consists precisely of the points of $\Pi$ corresponding to $3\times 3$ matrices over $\mathbb{F}_q$ having rank at most $2$.]{.upright}* We are now ready to prove the main theorem. **Theorem 10**. *The set $\mathcal O$ is a $(q^2+q+1)$-ovoid of $\mathcal Q^+(7, q)$.* *Proof.* We show that $|P_i^\perp \cap \mathcal O|$ equals $q^4+q^3+q^2+q+1$ if $i = 1,2$ or $(q^2+1)(q^2+q+1)$ if $i = 3,4,5$. The hyperplane $P_1^\perp$ of $\Pi$ contains $2(q+1)$ lines of $\Gamma$ and hence $$\begin{aligned} & |P_1^\perp \cap \Gamma| = 2q^2+2q+1. \end{aligned}$$ Let $x$ be a point of $\Gamma$. If $x = P_1$ or $d(x, P_1) = 2$, then $\pi_{x} \subset P_1^\perp$. If $d(x, P_1) = 4$, then $\pi_x \cap P_1^\perp$ is a line of $\Gamma$ contained in $P_1^\perp$, whereas if $d(x, P_1) = 6$, then $\pi_x \cap P_1^\perp$ is a line with two points of $\Gamma$. Hence $$\begin{aligned} & |P_1^\perp \cap (\mathcal O\setminus \Gamma)| = (2q+1) \times (q^2-q) + q^3 \times (q-1) = q^4 + q^3 - q^2 - q\end{aligned}$$ and therefore $|P_1^\perp \cap \mathcal O| = q^4+q^3+q^2+q+1$. Assume that the hyperplane $P_2^\perp$ of $\Pi$ contains the two lines of $\Gamma$ contained in $\pi_y$. It follows that $$\begin{aligned} & |P_2^\perp \cap \Gamma| = (q+1)^2. \end{aligned}$$ Let $x$ be a point of $\Gamma$. If $x = y$, then $\pi_{x} \subset P_2^\perp$. If $d(x, y) = 2$, then $\pi_x \cap P_2^\perp$ is a line of $\Gamma$ contained in $P_2^\perp$. If $d(x, y) \ge 4$, then either $x \in P_2^\perp$ and $\pi_x \cap P_2^\perp$ is a line with one point of $\Gamma$, or $x \notin P_2^\perp$ and $\pi_x \cap P_2^\perp$ is a line with two points of $\Gamma$. 
Hence $$\begin{aligned} & |P_2^\perp \cap (\mathcal O\setminus \Gamma)| = (q^2-q) + q^2 \times q + (q^3 + q^2) \times (q-1) = q^4 + q^3 - q\end{aligned}$$ and therefore $|P_2^\perp \cap \mathcal O| = q^4+q^3+q^2+q+1$. If the hyperplane $P_3^\perp$ of $\Pi$ contains the apartment $\mathcal A$ of $\Gamma$, then $$\begin{aligned} & |P_3^\perp \cap \Gamma| = (q-1)^2 + 6q = q^2+4q+1.\end{aligned}$$ Indeed there are exactly $2(q-1)^2$ lines of $\Gamma$ intersecting $P_3^\perp$ in a point not belonging to $\mathcal A$. Let $x$ be a point of $\Gamma$. If $x \in \mathcal A$, then either $x$ is a vertex of $\mathcal A$ and $\pi_{x} \subset P_3^\perp$ or $x$ is not a vertex of $\mathcal A$ and $\pi_x \cap P_3^\perp$ is a line of $\mathcal A$ that hence is contained in $P_3^\perp$. If $x \notin \mathcal A$, then either $x \in P_3^\perp$ and $\pi_x \cap P_3^\perp$ is a line with one point of $\Gamma$, or $x \notin P_3^\perp$ and $\pi_x \cap P_3^\perp$ is a line with two points of $\Gamma$. Hence $$\begin{aligned} & |P_3^\perp \cap (\mathcal O\setminus \Gamma)| = 6 \times (q^2-q) + (q-1)^2 \times q + (q^3 + q^2 - 2q) \times (q-1) = q^4 + q^3 + q^2 - 3q\end{aligned}$$ and therefore $|P_3^\perp \cap \mathcal O| = (q^2+1)(q^2+q+1)$. If $i \in \{4, 5\}$, then no line of $\Gamma$ is contained in the hyperplane $P_i^\perp$ of $\Pi$. Hence $$\begin{aligned} & |P_i^\perp \cap \Gamma| = q^2+q+1.\end{aligned}$$ Let $x$ be a point of $\Gamma$. In this case either $x \in P_i^\perp$ and $\pi_x \cap P_i^\perp$ is a line with one point of $\Gamma$, or $x \notin P_i^\perp$ and $\pi_x \cap P_i^\perp$ is a line with two points of $\Gamma$. Hence $$\begin{aligned} & |P_i^\perp \cap (\mathcal O\setminus \Gamma)| = (q^2+q+1) \times q + (q^3 + q^2 + q) \times (q-1) = q^4 + q^3 + q^2\end{aligned}$$ and therefore $|P_i^\perp \cap \mathcal O| = (q^2+1)(q^2+q+1)$. ◻ # Concluding remarks {#concluding} ## Other known examples of $m$-ovoids of $\mathcal{Q}^+(7,q)$ To the best of our knowledge, there are further known $m$-ovoids of $\mathcal Q^+(7, q)$ not mentioned in Section [1](#sec1){reference-type="ref" reference="sec1"}. - $m$-ovoids, $m \in \{q^2+q+1, q+1\}$, of $\mathcal Q^+(7, q)$ arising from [@CPS Construction 5.1]: let $\mathcal C$ be a cone of a $\mathcal Q(6, q) \subset \mathcal Q^+(7, q)$ such that $\mathcal C\subset \mathcal Q(6, q)$ and $\mathcal C$ has either as vertex a point of $\mathcal Q(6, q)$ and base a $\mathcal Q(4, q)$ or as vertex a line and base a $\mathcal Q(2, q)$. Let $\mathcal C'$ be a cone having as vertex the same vertex of $\mathcal C$ and as base either a $\mathcal Q(4, q)$ or a $\mathcal Q(2, q)$ and such that $\mathcal C' \subset (\mathcal Q^+(7, q) \setminus \mathcal Q(6, q))$. Then $(\mathcal Q(6, q) \setminus \mathcal C) \cup \mathcal C'$ is a $(q^2+q+1)$-ovoid of $\mathcal Q^+(7, q)$. A line of $\mathcal Q^+(7, q)$ meets $(\mathcal Q(6, q) \setminus \mathcal C) \cup \mathcal C'$ in $0, 1, 2, q$ or $q+1$ points. By replacing $\mathcal Q(6, q)$ with $\mathcal Q^-(5, q)$, a similar construction yields $(q+1)$-ovoids of $\mathcal Q^+(7, q)$. - $m$-ovoids of $\mathcal Q^+(7, q)$, $1 \le m \le q+1$, obtained by applying a group of order $q+1$ to an ovoid of $\mathcal Q^+(7, q)$: it can be deduced from [@D], that there is a group of order $q+1$, say $S$, fixing $\mathcal Q^+(7, q)$ and a set $\mathcal L$ consisting of $(q^2+1)(q^3+1)$ pairwise disjoint lines of $\mathcal Q^+(7, q)$. 
In particular $\mathcal L$ is a line-spread of $\mathcal Q^+(7, q)$ and $S$ stabilizes each of the members of $\mathcal L$, permuting the $q+1$ points of each line of $\mathcal L$ in a single orbit. Therefore, if $\mathcal O$ is an ovoid of $\mathcal Q^+(7, q)$ then it turns out that $\mathcal O^S$ is a set of $q+1$ pairwise disjoint ovoids of $\mathcal Q^+(7, q)$. ## The isomorphism issue We have constructed a $(q^2+q+1)$-ovoid $\mathcal O$ in $\mathcal{Q}^+(7,q)$ for $q\equiv 1\pmod{3}$. As mentioned in Section [1](#sec1){reference-type="ref" reference="sec1"}, a non-degenerate hyperplane intersects $\mathcal{Q}^+(7,q)$ in a parabolic quadric $\mathcal{Q}(6,q)$, which is also a $(q^2+q+1)$-ovoid of $\mathcal{Q}^+(7,q)$. One may ask whether $\mathcal{O}$ is contained in a non-degenerate hyperplane. We explain below that the answer is no. **Proposition 11**. *There are lines of $\mathcal Q^+(7, q)$ intersecting $\mathcal O$ in exactly three points.* *Proof.* Consider the plane $\sigma$ spanned by $U_2$, $U_6$, $U_7$. Then $\sigma$ is a plane of $\mathcal Q^+(7, q)$ and by Remark [Remark 9](#remark){reference-type="ref" reference="remark"}, $\sigma \cap \mathcal O$ consists of the $3q$ points of the three lines joining the three points $U_2$, $U_6$, $U_7$. Hence a line of $\sigma$ not containing $U_2$, $U_6$, $U_7$ meets $\mathcal O$ in three points. ◻ Note that $U_1-U_2+U_4-U_5, U_6-U_5+U_9-U_8, U_2, U_3, U_4, U_6, U_7, U_8\in\mathcal O$ and they span $\mathrm{PG}(7,q)$. It follows that $\mathcal O$ is not contained in a hyperplane. Furthermore, we see from Proposition [Proposition 11](#prop_int){reference-type="ref" reference="prop_int"} that $\mathcal O$ is not isomorphic to $(\mathcal Q(6, q) \setminus \mathcal C) \cup \mathcal C'$ defined in the previous subsection. Therefore the $(q^2+q+1)$-ovoid $\mathcal O$ is not equivalent to any known $m$-ovoid of $\mathcal{Q}^+(7,q)$. ## $m$-ovoids in $\mathcal{Q}(8,q)$ Consider the parabolic quadric $\mathcal{Q}(8,q)$ given by $$X_2 X_4 - X_1 X_5 + X_3 X_7 - X_1 X_9 + X_6 X_8 - X_5 X_9=0.$$ There is a group isomorphic to $\mathrm{PGL}(3,q)$ fixing $\mathcal{Q}(8,q)$, as mentioned in Section [2](#sec_pre){reference-type="ref" reference="sec_pre"}. With the aid of Magma [@Magma], we found $(q^2+q+1)$-ovoids of $\mathcal{Q}(8,q)$ for $q=2,3,4,5$. When $q=2$, the $(q^2+q+1)$-ovoid found by the computer is actually an embedded $\mathcal{Q}^-(7,q)$, which is not interesting. However, for $q=3,4,5$, a Magma computation shows that there exists a $(q^2+q+1)$-ovoid not contained in a hyperplane, and we conjecture that this is true for all larger $q$. **Conjecture 12**. *There exists a $(q^2+q+1)$-ovoid of $\mathcal{Q}(8,q)$ which is not contained in a hyperplane for any prime power $q>2$.* J. Bamberg, S. Kelly, M. Law, T. Penttila, Tight sets and $m$-ovoids of finite polar spaces, *J. Combin. Theory Ser. A* **114** (2007), 1293--1314. J. Bamberg, M. Law, T. Penttila, Tight sets and $m$-ovoids of generalized quadrangles, *Combinatorica* **29** (2009), 1--17. J. Bamberg, M. Lee, K. Momihara, Q. Xiang, A new infinite family of hemisystems of the Hermitian surface, *Combinatorica* **38** (2018), 43--66. W. Bosma, J. Cannon, C. Fieker, A. Steel, *Handbook of Magma Functions*, 2017. A. Cossidente, F. Pavese, Hemisystems of $\mathcal{Q}(6,q)$, $q$ odd, *J. Combin. Theory Ser. A* **140** (2016), 112--122. A. Cossidente, F. Pavese, On intriguing sets of finite symplectic spaces, *Des. Codes Cryptogr.* **86** (2018), 1161--1174. D. Crnković, F.
Pavese, A. Švob, Intriguing sets of strongly regular graphs and their related structures, to appear in *Contrib. Discrete Math.* R.H.F. Denniston, Packings of $\mathrm{PG}(3,q)$, in: *Finite geometric structures and their applications* (Centro Internaz. Mat. Estivo (C.I.M.E.), II Ciclo, Bressanone, 1972), Edizioni Cremonese, Rome, 1973, pp. 193--199. L. E. Dickson, Criteria for the irreducibility of functions in a finite field, *Bull. Amer. Math. Soc.* **13** (1906), 1--8. R.H. Dye, Maximal subgroups of finite orthogonal groups stabilizing spreads of lines, *J. London Math. Soc. (2)* **33** (1986), 279--293. T. Feng, K. Momihara, M. Rodgers, Q. Xiang, H. Zou, Cameron--Liebler line classes with parameter $x=\frac{(q+1)^2}{3}$, *Adv. Math.* **385** (2021), 107780. T. Feng, R. Tao, An infinite family of $m$-ovoids of $\mathcal{Q}(4,q)$, *Finite Fields Appl.* **63** (2020), 101644. T. Feng, Y. Wang, Q. Xiang, On $m$-ovoids of symplectic polar spaces, *J. Combin. Theory Ser. A* **175** (2020), 105279. Y. Filmus, F. Ihringer, Boolean degree 1 functions on some classical association schemes, *J. Combin. Theory Ser. A* **162** (2019), 241--270. J.W.P. Hirschfeld, J.A. Thas, *General Galois Geometries*, Springer Monographs in Mathematics, Springer, London, 2016. S. Kelly, Constructions of intriguing sets of polar spaces from field reduction and derivation, *Des. Codes Cryptogr.* **43** (2007), 1--8. G. Korchmáros, G.P. Nagy, P. Speziali, Hemisystems of the Hermitian surface, *J. Combin. Theory Ser. A* **165** (2019), 408--439. J. Lansdown, A.C. Niemeyer, A family of hemisystems on the parabolic quadrics, *J. Combin. Theory Ser. A* **175** (2020), 105280. E.E. Shult, J.A. Thas, $m$-systems of polar spaces, *J. Combin. Theory Ser. A* **68** (1994), 184--204. J.A. Thas, Interesting pointsets in generalized quadrangles and partial geometries, *Linear Algebra Appl.* **114/115** (1989), 103--131. J.A. Thas, H. Van Maldeghem, On embeddings of the flag geometries of projective planes in finite projective spaces, *Des. Codes Cryptogr.* **17** (1999), 97--104. H. Van Maldeghem, *Generalized Polygons*, Birkhäuser, Basel, 1998. K. S. Williams, Note on cubics over $GF(2^n)$ and $GF(3^n)$, *J. Number Theory* **7** (1975), 361--365.
{ "id": "2309.06821", "title": "An infinite family of $m$-ovoids of the hyperbolic quadrics\n $\\mathcal{Q}^+(7,q)$", "authors": "Francesco Pavese, Hanlin Zou", "categories": "math.CO", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | The paper analyses a spectral approach to reconstructing a scalar field on the sphere, given only information about a masked version of the field together with precise information about the (smooth) mask. The theory is developed for a general mask, and later specialised to the case of an axially symmetric mask. Numerical experiments are given for the case of an axial mask motivated by the cosmic microwave background, assuming that the underlying field is a realization of a Gaussian random field with an artificial angular power spectrum of moderate degree ($\ell \le 100$). The recovery is highly satisfactory in the absence of noise and even in the presence of moderate noise. address: - School of Physics, UNSW Sydney, Australia. - School of Mathematics and Statistics, UNSW Sydney, Australia. author: - Jan Hamann - Quoc T. Le Gia - Ian H. Sloan - Robert S. Womersley bibliography: - reference.bib title: Removing the mask -- reconstructing a scalar field on the sphere from a masked field --- # Introduction In this paper we study the reconstruction of a scalar field (for example temperature or pressure) on the unit sphere, given (possibly noisy) data on a masked version of the field, together with precise knowledge of the mask. The underlying motivation is the cosmic microwave background (CMB) for which the temperature observations, so important for the modern understanding of the early universe, are obscured over substantial portions of the sky by our own Milky Way, creating the need for masking some portions before attempting reconstruction. The paper aims at a proof-of-concept for a new spectral approach to such problems. While the theory is general, the numerical experiments are restricted to the case of an axially symmetric mask, and limit the field's angular power spectrum to polynomial degree $\ell \le 100$, corresponding to an angular resolution of approximately $2^{\degree}$. Within these limitations, the recovery is shown to be highly satisfactory in the no-noise case, and also in the case of moderate noise. There is a rich literature on the inpainting problem for the particular case of the CMB [@Abrial:2008mz; @Starck:2012gd], with techniques based on harmonic methods [@Bielewicz:2004en; @Inoue:2008qf], iterative methods [@Nishizawa:2013uwa; @Gruetjen:2015sta], constrained Gaussian realisations [@Kim:2012iq; @Bucher:2011nf], group sparse optimization methods [@LiChen2023] or neural networks [@Yi_2020; @Puglisi:2020deh; @Sadr:2020rje; @Montefalcone:2020fkb; @Wang:2022ybb]. Closest to the present approach is the work of Alonso et al. [@Alonso:2018jzx], which however differs in aim (which was to recover the angular power spectrum from a knowledge of the "pseudo $C_\ell$", through pooling together Fourier coefficients of different degrees). The problem is formulated in the next section, by reducing the problem to that of solving a large ill-posed linear system. Section [3](#sec:PropE){reference-type="ref" reference="sec:PropE"} establishes properties of the matrix in that linear system. Section [4](#sec:Solving){reference-type="ref" reference="sec:Solving"} outlines our stochastic approach to the solution of the linear system. Because the full linear system is currently beyond our resources, in Section [5](#sec:axsym){reference-type="ref" reference="sec:axsym"} we obtain a large reduction in difficulty by specialising to the case of an axially symmetric mask. The final section is devoted to numerical experiments in both the no-noise and noisy cases. 
# The problem setting {#sec:Setting} Taking ${\boldsymbol{r}}$ to be any point in the unit sphere $\mathbb{S}^2$ in $\mathbb{R}^3$ (i.e. ${\boldsymbol{r}}$ is a unit vector in $\mathbb{R}^3$), the underlying real scalar field $a({\boldsymbol{r}})$ is assumed to be partially obscured by a known mask $v = v({\boldsymbol{r}})$ to give a masked field $a^v = a^v({\boldsymbol{r}}) := a({\boldsymbol{r}}) v({\boldsymbol{r}})$. We assume that the incompletely known field $a$ is a spherical polynomial of degree at most $L$, and hence expressible as an expansion in terms of orthonormal (complex) spherical harmonics $Y_{\ell,m}({\boldsymbol{r}})$ of degree $\ell \le L$, $$\label{eq:a_and_v} a({\boldsymbol{r}}) = \sum_{\ell=0}^L \sum_{m = -\ell}^\ell a_{\ell,m} Y_{\ell,m}({\boldsymbol{r}}), \qquad {\boldsymbol{r}}\in\mathbb{S}^2,$$ where $$a_{\ell,m} := \int_{\mathbb{S}^2} a({\boldsymbol{r}})\overline{Y_{\ell,m}({\boldsymbol{r}})} {\mathrm{d}}{\boldsymbol{r}},$$ following from the orthonormality of the spherical harmonics, $$\int_ {\mathbb{S}^2}Y_{\ell,m}({\boldsymbol{r}}) \overline{Y_{\ell',m'}({\boldsymbol{r}})} {\mathrm{d}}{\boldsymbol{r}}= \delta_{\ell, \ell'} \delta_{m,m'}, \quad \ell, \ell' \ge 0,\; m = -\ell,\ldots,\ell,\; m' = -\ell',\ldots,\ell'.$$ We assume that the mask is expressible as[^1] $$\label{eq:v} v({\boldsymbol{r}}) = \sum_{k=0}^K \sum_{\nu = -k}^k v_{k,\nu} Y_{k,\nu}({\boldsymbol{r}}),$$ where $$v_{k,\nu} := \int_{\mathbb{S}^2} v({\boldsymbol{r}})\overline{Y_{k,\nu}({\boldsymbol{r}})} {\mathrm{d}}{\boldsymbol{r}}.$$ Thus the masked signal $a^v({\boldsymbol{r}}) = a({\boldsymbol{r}}) v({\boldsymbol{r}})$ is expressible as a spherical harmonic expansion of degree $L + K$, $$\label{eq:avfun} a^v({\boldsymbol{r}}) = a({\boldsymbol{r}}) v({\boldsymbol{r}}) = \sum_{j=0}^{L+K}\sum_{\mu=-j}^{j}a_{j,\mu}^{v} Y_{j,\mu}({\boldsymbol{r}}), \qquad {\boldsymbol{r}}\in\mathbb{S}^2,$$ where $$\begin{aligned} \label{eq:av} a_{j,\mu}^v &:= \int_{\mathbb{S}^2} {a}^v_J({\boldsymbol{r}})\overline{Y_{j,\mu}({\boldsymbol{r}})} {\mathrm{d}}{\boldsymbol{r}} = \int_{\mathbb{S}^2} a({\boldsymbol{r}})v({\boldsymbol{r}})\overline{Y_{j,\mu}({\boldsymbol{r}})} {\mathrm{d}}{\boldsymbol{r}}\\ & = \int_{\mathbb{S}^2} \left(\sum_{\ell=0}^{L} \sum_{m=-\ell}^{\ell} a_{\ell,m}Y_{\ell,m}({\boldsymbol{r}})\right) \left(\sum_{k=0}^{K}\sum_{\nu=-k}^{k} v_{k,\nu}Y_{k,\nu}({\boldsymbol{r}})\right) \overline{Y_{j,\mu}({\boldsymbol{r}})}{\mathrm{d}}{\boldsymbol{r}}\nonumber\\ &= \sum_{\ell=0}^{L} \sum_{m=-\ell}^{\ell} \sum_{k=0}^{K} \sum_{\nu=-k}^{k}a_{\ell,m}v_{k,\nu} \int_{\mathbb{S}^2} Y_{\ell,m}({\boldsymbol{r}}) Y_{k,\nu}({\boldsymbol{r}}) \overline{Y_{j,\mu}({\boldsymbol{r}})}{\mathrm{d}}{\boldsymbol{r}}\nonumber\\ &= \sum_{\ell=0}^{L} \sum_{m=-\ell}^{\ell} E_{j,\mu; \ell,m} a_{\ell,m}, \nonumber\end{aligned}$$ where, for $j = 0, \ldots, L+K$, $\mu = -j, \ldots, j$ and $\ell = 0,\ldots,L$, $m = -\ell,\ldots,\ell$, $$\label{eq:Edefined} E_{j,\mu; \ell,m} = \sum_{k=0}^{K} \sum_{\nu=-k}^{k} \int_{\mathbb{S}^2} Y_{\ell,m}({\boldsymbol{r}}) Y_{k,\nu}({\boldsymbol{r}}) \overline{Y_{j,\mu}({\boldsymbol{r}})}{\mathrm{d}}{\boldsymbol{r}}\; v_{k,\nu}.$$ Thus the essential task in the reconstruction is to solve as accurately as possible the large linear system $$\label{eq:aequalsEa} \sum_{\ell=0}^{L} \sum_{m=-\ell}^{\ell} E_{j,\mu;\ell,m}a_{\ell,m} = a_{j,\mu}^v,$$ where $E$ is defined by [\[eq:Edefined\]](#eq:Edefined){reference-type="eqref" reference="eq:Edefined"}. 
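As an illustration only, the assembly of $E$ from the mask coefficients can be checked by brute force at very small degrees. The following minimal sketch (Python, assuming NumPy and SymPy are available; the degrees and the toy mask coefficients are purely illustrative and are not those used later in the paper) evaluates the triple products of spherical harmonics through SymPy's Gaunt coefficient, using $\overline{Y_{j,\mu}} = (-1)^\mu Y_{j,-\mu}$.

```python
# Minimal brute-force assembly of the matrix E at tiny, illustrative degrees.
# Assumes NumPy and SymPy; an illustration, not the computation used in the paper.
import numpy as np
from sympy.physics.wigner import gaunt

L, K = 2, 1          # illustrative degrees only
J = L + K

def triple(l, m, k, nu, j, mu):
    # int_{S^2} Y_{l,m} Y_{k,nu} conj(Y_{j,mu}) dr = (-1)^mu * Gaunt(l, k, j; m, nu, -mu)
    sign = -1.0 if mu % 2 else 1.0
    return sign * float(gaunt(l, k, j, m, nu, -mu))

rng = np.random.default_rng(0)
v = {}               # toy mask coefficients with v_{k,-nu} = (-1)^nu * conj(v_{k,nu})
for k in range(K + 1):
    v[(k, 0)] = complex(rng.standard_normal())
    for nu in range(1, k + 1):
        z = complex(rng.standard_normal(), rng.standard_normal())
        v[(k, nu)] = z
        v[(k, -nu)] = (-1) ** nu * z.conjugate()

rows = [(j, mu) for j in range(J + 1) for mu in range(-j, j + 1)]
cols = [(l, m) for l in range(L + 1) for m in range(-l, l + 1)]
E = np.array([[sum(triple(l, m, k, nu, j, mu) * v[(k, nu)]
                   for k in range(K + 1) for nu in range(-k, k + 1))
               for (l, m) in cols] for (j, mu) in rows])
```

For degrees in the hundreds a vectorised evaluation of the Wigner $3j$ symbols would of course be needed; the sketch above is only meant to make the indexing in [\[eq:Edefined\]](#eq:Edefined){reference-type="eqref" reference="eq:Edefined"} concrete.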
Equation [\[eq:aequalsEa\]](#eq:aequalsEa){reference-type="eqref" reference="eq:aequalsEa"} can be considered as an overdetermined (but possibly not full-rank) set of linear equations for the $a_{\ell, m}$. We will find it convenient to replace the upper limit $L+K$ in [\[eq:aequalsEa\]](#eq:aequalsEa){reference-type="eqref" reference="eq:aequalsEa"} by a more flexible upper limit $J$ with $L \leq J \leq L+K$. Then the equation can be written as $$\label{eq:aeqav_math} E {\boldsymbol{a}}= {\boldsymbol{a}}^v,$$ where $E$ is a $(J+1)^2 \times (L+1)^2$ matrix. In Section [4](#sec:Solving){reference-type="ref" reference="sec:Solving"} we come to the most challenging part of the paper, which is the approximate solution of the ill-posed linear system, and before it the computation of the matrix $E$. Before then, however, it is useful to establish properties of the matrix $E$. # Properties of $E$ {#sec:PropE} This section summarizes useful properties of the matrix $E$ defined in [\[eq:Edefined\]](#eq:Edefined){reference-type="eqref" reference="eq:Edefined"}. We first note that the product of three spherical harmonics (known as a Gaunt coefficient) can be evaluated in terms of Wigner 3j symbols, see eg.  [@NIST:DLMF Eq. 34.4.22] or [@Alonso:2018jzx]. Explicitly, $$\begin{aligned} \label{eq:D} D_{ \ell,m; k, \nu; j,\mu} &:= \int_{\mathbb{S}^2} Y_{\ell,m}({\boldsymbol{r}}) Y_{k,\nu}({\boldsymbol{r}}) \overline{Y_{j,\mu}({\boldsymbol{r}})}{\mathrm{d}}{\boldsymbol{r}}\\ &= (-1)^\mu \sqrt{\frac{(2\ell+1)(2k+1)(2j+1)}{4\pi}}\nonumber\\ & \hspace{.3cm}\times \begin{pmatrix} \ell & k & j\\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} \ell & k & j\\ m & \nu & -\mu \end{pmatrix} .\nonumber\end{aligned}$$ As important special cases, $$\label{eq:Dprops} D_{ \ell,m; k, \nu; j,\mu} = 0 \, \mbox{ if}\, \begin{cases} j+\ell+k \quad \mbox{is odd, or}\\ k < |j - \ell|,\mbox{ or}\\ k > j + \ell,\mbox{ or}\\ m + \nu \ne \mu. \end{cases}$$ The following lemma gives several elementary properties of the matrix $E$, beginning with an explicit integral expression in terms of the mask function $v$. **Lemma 1**. *The elements of the matrix $E$ satisfy $$\label{eq:Eint} E_{j,\mu;\ell,m} = \int_{\mathbb{S}^2} \overline{Y_{j, \mu}({\boldsymbol{r}})} Y_{\ell,m }({\boldsymbol{r}}) v({\boldsymbol{r}})\, {\mathrm{d}}{\boldsymbol{r}},$$ and, for a real mask $v$, $$\begin{aligned} E_{\ell,m; j, \mu} & = & \overline{E_{j, \mu; \ell, m}} \, , \label{eq:Eherm} \\ E_{j,\mu;\ell,m} & = & \sum_{k=0}^{K} \left[ D_{ \ell,m; k, 0; j,\mu} v_{k,0} + 2 \sum_{\nu=1}^{k} \Re(D_{ \ell,m; k, \nu; j,\mu} v_{k,\nu}) \right], \label{eq:Ereal} \\ E_{j, \mu; \ell,-m} & = & (-1)^{m-\mu} \overline{E_{j,-\mu;\ell,m}} \, . 
\label{eq:Enegm}\end{aligned}$$* *Proof.* Firstly, from the definition of $E$ in [\[eq:Edefined\]](#eq:Edefined){reference-type="eqref" reference="eq:Edefined"} we have $$\begin{aligned} E_{j,\mu;\ell,m} &= %\sum_{k=0}^{K}\sum_{\nu = -k}^k % D_{\ell,m;k,\nu; j,\mu} v_{k,\nu}\\ \sum_{k=0}^{K} \sum_{\nu = -k}^k \int_{\mathbb{S}^2} Y_{\ell, m}({\boldsymbol{r}}) Y_{k, \nu}({\boldsymbol{r}}) \overline{Y_{j, \mu}({\boldsymbol{r}})} \, {\mathrm{d}}{\boldsymbol{r}}\: v_{k, \nu}\nonumber\\ &= \int_{\mathbb{S}^2} \overline{Y_{j, \mu}({\boldsymbol{r}})} Y_{\ell, m}({\boldsymbol{r}}) \sum_{k=0}^{K}\sum_{\nu = -k}^k v_{k, \nu} Y_{k, \nu}({\boldsymbol{r}}) \,{\mathrm{d}}{\boldsymbol{r}}\nonumber\\ &= \int_{\mathbb{S}^2} \overline{Y_{j, \mu}({\boldsymbol{r}})} Y_{\ell,m }({\boldsymbol{r}}) v({\boldsymbol{r}}) \, {\mathrm{d}}{\boldsymbol{r}},\nonumber\end{aligned}$$ establishing [\[eq:Eint\]](#eq:Eint){reference-type="eqref" reference="eq:Eint"}. From the definition of $D_{\ell,m;k,\nu;j,\mu}$ in [\[eq:D\]](#eq:D){reference-type="eqref" reference="eq:D"} as an integral, together with the spherical harmonic property $$Y_{\ell,-m}({\boldsymbol{r}}) = (-1)^m \overline{Y_{\ell,m}({\boldsymbol{r}})},$$ it follows that $$D_{j,\mu; k,\nu;\ell,m;} = (-1)^\nu \overline{D_{\ell,m;k,-\nu;j,\mu}}.$$ Because both the mask $v$ and the field $a$ are real we have $$\label{eq:a_symm} a_{\ell,m} = (-1)^m \overline{a_{\ell,-m}},\quad v_{k,\mu} = (-1)^\mu \overline{v_{k,-\mu}},$$ for all relevant values of $\ell, m, k$ and $\mu$, and [\[eq:Eherm\]](#eq:Eherm){reference-type="eqref" reference="eq:Eherm"} then follows from [\[eq:Edefined\]](#eq:Edefined){reference-type="eqref" reference="eq:Edefined"}. Also, from [\[eq:D\]](#eq:D){reference-type="eqref" reference="eq:D"} and [\[eq:a_symm\]](#eq:a_symm){reference-type="eqref" reference="eq:a_symm"}, $$D_{ \ell,-m; k, \nu; j,\mu} = (-1)^m \overline{D_{ \ell,m; k, \nu; j,\mu}}, \quad D_{ \ell,m; k, -\nu; j,\mu} = (-1)^\nu \overline{D_{ \ell,m; k, \nu; j,\mu}},$$ so [\[eq:a_symm\]](#eq:a_symm){reference-type="eqref" reference="eq:a_symm"} gives $$D_{ \ell,m; k, -\nu; j,\mu}v_{k,-\nu} = \overline{D_{ \ell,m; k, \nu; j,\mu} v_{k,\nu}},$$ and [\[eq:Edefined\]](#eq:Edefined){reference-type="eqref" reference="eq:Edefined"} then yields [\[eq:Ereal\]](#eq:Ereal){reference-type="eqref" reference="eq:Ereal"}. Finally, for a real mask [\[eq:Enegm\]](#eq:Enegm){reference-type="eqref" reference="eq:Enegm"} follows easily from [\[eq:Eint\]](#eq:Eint){reference-type="eqref" reference="eq:Eint"}. ◻ ## Singular values of $E$ This subsection gives upper bounds on the singular values of the rectangular matrix $E$ in terms of the real mask $v$. We use $E^*$ to denote the complex conjugate transpose of the matrix $E$. **Theorem 2**. *Assume that the mask $v({\boldsymbol{r}})$ is of degree $K\geq 1$ and that $L \leq J\le L+K$ is the degree of the approximation ${a}^v_J({\boldsymbol{r}})$ to the masked field $a^v({\boldsymbol{r}})$, so that $E$ is a $(J+1)^2$ by $(L+1)^2$ matrix. The singular values $\sigma$ of $E$ satisfy $$\label{eq:svbnd} 0 \leq \sigma \leq v_{{\rm max}},$$ where $$v_{{\rm max}}:= \max_{{\boldsymbol{r}}\in \mathbb{S}^2}\; |v({\boldsymbol{r}})| .$$* *Proof.* Let ${\boldsymbol{u}}\neq \boldsymbol{0}$ be an eigenvector of the positive semi-definite Hermitian matrix $E^*E$ corresponding to the non-negative real eigenvalue $\sigma^2$, so $E^* E {\boldsymbol{u}}= \sigma^2 {\boldsymbol{u}}$. 
Then, using [\[eq:Eint\]](#eq:Eint){reference-type="eqref" reference="eq:Eint"}, and writing the elements of ${\boldsymbol{u}}$ as $u_{\ell,m}, \ell = 0,\ldots,L,\; m = -\ell,\ldots, \ell$, we have $$(E {\boldsymbol{u}})_{j, \mu} = \sum_{\ell=0}^L \sum_{m=-\ell}^\ell u_{\ell,m} \int_{\mathbb{S}^2} \overline{Y_{j,\mu}({\boldsymbol{r}})} Y_{\ell,m}({\boldsymbol{r}})v({\boldsymbol{r}}) {\mathrm{d}}{\boldsymbol{r}} = \int_{\mathbb{S}^2} \overline{Y_{j,\mu}({\boldsymbol{r}})} u({\boldsymbol{r}}) v({\boldsymbol{r}}) {\mathrm{d}}{\boldsymbol{r}},$$ for $j = 0,\ldots,J$, $\mu = -j,\ldots,j$, where $$\ u({\boldsymbol{r}}) := \sum_{\ell=0}^L \sum_{m=-\ell}^\ell u_{\ell,m} Y_{\ell,m} ({\boldsymbol{r}}),$$ giving $${\boldsymbol{u}}^* E^* E {\boldsymbol{u}}= \|E{\boldsymbol{u}}\|_{\ell_2}^2 = \sum_{j = 0}^J \sum_{\mu=-j}^j \left| \int_{\mathbb{S}^2} u({\boldsymbol{r}}) v({\boldsymbol{r}}) \overline{Y_{j,\mu}({\boldsymbol{r}})} {\mathrm{d}}{\boldsymbol{r}}\right|^2.$$ Thus, using Parseval's identity for $u\, v$ and then $u$, $$\begin{aligned} \label{eq:svsq} \sigma^2 \| {\boldsymbol{u}}\|_{\ell_2}^2 = {\boldsymbol{u}}^* (\sigma^2 {\boldsymbol{u}}) = {\boldsymbol{u}}^* (E^* E {\boldsymbol{u}}) & = \sum_{j = 0}^J \sum_{\mu=-j}^j \left| \int_{\mathbb{S}^2} u({\boldsymbol{r}}) v({\boldsymbol{r}}) \overline{Y_{j,\mu}({\boldsymbol{r}})} {\mathrm{d}}{\boldsymbol{r}}\right|^2 \\ & = \| u \; v \|^2_{L^2} \nonumber \\ & \leq (v_{{\rm max}})^2 \| u \|_{L^2}^2 = (v_{{\rm max}})^2 \| {\boldsymbol{u}}\|_{\ell^2}^2 . \nonumber\end{aligned}$$ This gives the upper bound [\[eq:svbnd\]](#eq:svbnd){reference-type="eqref" reference="eq:svbnd"} on the singular values $\sigma$ of $E$. ◻ ## Eigenvalues of $E$ In this subsection we take $J = L$, making the matrix $E$ square. From the second statement in Lemma [Lemma 1](#lem:Eprop){reference-type="ref" reference="lem:Eprop"}, $E$ is Hermitian thus its eigenvalues are real, and eigenvectors belonging to distinct eigenvalues are orthogonal. Let $\lambda\in \mathbb{R}$ be an eigenvalue of $E$ and let ${\boldsymbol{q}}\neq \boldsymbol{0}$ be a corresponding eigenvector, thus $$E {\boldsymbol{q}}= \lambda{\boldsymbol{q}}.$$ The following result provides both lower and upper bounds on $\lambda$, in terms of the minimum and maximum values of the mask $v$. **Theorem 3**. *Assume that the mask $v$ is of degree $K\geq 1$. Assume also that $J = L$, so that the matrix $E$ is square. Then the eigenvalues of $E$ lie in the interval $(v^{{\rm min}}, v^{{\rm max}})$, where $$v^{{\rm min}}:= \min_{{\boldsymbol{r}}\in \mathbb{S}^2}\;v({\boldsymbol{r}}), \qquad v^{{\rm max}}:= \max_{{\boldsymbol{r}}\in \mathbb{S}^2}\; v({\boldsymbol{r}}).$$* *Proof.* Let $\lambda \in \mathbb{R}$ be an eigenvalue of $E$, with corresponding eigenvector ${\boldsymbol{q}}\in \mathbb{C}^{(L+1)^2}$. 
Then from [\[eq:Eint\]](#eq:Eint){reference-type="eqref" reference="eq:Eint"}, $$\begin{aligned} \label{eq:xEx} {\boldsymbol{q}}^* E {\boldsymbol{q}} &= \sum_{j=0}^{L}\sum_{\mu=-j}^j\sum_{\ell=0}^L \sum_{m = -\ell}^\ell \overline{q_{j,\mu}}E_{j,\mu;\ell,m}q_{\ell,m}\\ &= \sum_{j=0}^{L}\sum_{\mu=-j}^j\sum_{\ell=0}^L \sum_{m = -\ell}^\ell \overline{q_{j,\mu}} q_{\ell,m} \int_{\mathbb{S}^2} \overline{Y_{j, \mu}({\boldsymbol{r}})}Y_{\ell, m}({\boldsymbol{r}}) v({\boldsymbol{r}})\, {\mathrm{d}}{\boldsymbol{r}}\nonumber\\ &= \int_{\mathbb{S}^2} \overline{q({\boldsymbol{r}})} q({\boldsymbol{r}}) v({\boldsymbol{r}}){\mathrm{d}}{\boldsymbol{r}},\nonumber % &= \int_{\Sph} |u(\br)|^2 v_{K}(\br)\dd \br\nonumber\\ % &\le \|z_J\|_{L_2}^2 v_{K}^{\rm max}\nonumber,\end{aligned}$$ where $$\label{eq:z_J} q({\boldsymbol{r}}) := \sum_{\ell=0}^L \sum_{m= -\ell}^\ell q_{\ell,m} Y_{\ell,m}({\boldsymbol{r}}).$$ It follows that $$\begin{aligned} \label{eq:lambda_argument} \lambda \|{\boldsymbol{q}}\|_{\ell_2}^2 &= {\boldsymbol{q}}^*(\lambda {\boldsymbol{q}}) = {\boldsymbol{q}}^* E {\boldsymbol{q}} =\int_{\mathbb{S}^2} |q({\boldsymbol{r}})|^2 v({\boldsymbol{r}})\, {\mathrm{d}}{\boldsymbol{r}}\\ &< \| q \|_{L_2}^2 \, v^{{\rm max}} = \sum_{j=0}^{J}\sum_{\mu = -j}^j |q_{j,\mu}|^2 \,v^{{\rm max}} = \|{\boldsymbol{q}}\|_{\ell_2}^2\, v^{{\rm max}}\nonumber,\end{aligned}$$ where the inequality is strict because $v$, being a spherical polynomial of non-zero degree, cannot be identically equal to either its maximum or minimum value. Similarly, we have a lower bound $$\lambda \|{\boldsymbol{q}}\|_{\ell_2}^2 >\|{\boldsymbol{q}}\|_{\ell_2}^2\, v^{{\rm min}},$$ together proving $\lambda \in (v^{{\rm min}}, v^{{\rm max}})$. ◻ Note that even if the true (non-polynomial) mask lies in $[0, 1]$ for all ${\boldsymbol{r}}\in\mathbb{S}^2$, the Gibbs phenomenon will typically produce oscillations in $v$, making $v^{{\rm min}}< 0$ and $v^{{\rm max}}> 1$. We also note in passing that $q$ is an eigenvector belonging to $\lambda$ for the integral equation $$\int_{\mathbb{S}^2}{\mathcal K}_{L}({\boldsymbol{r}},{\boldsymbol{r}}') q({\boldsymbol{r}}') \,{\mathrm{d}}{\boldsymbol{r}}' = \lambda\, q({\boldsymbol{r}}),$$ where ${\mathcal K}_{L}({\boldsymbol{r}},{\boldsymbol{r}}')$ is the integral kernel given by $$\begin{aligned} \label{eq:kernel} {\mathcal K}_{L}({\boldsymbol{r}},{\boldsymbol{r}}') &= \sum_{\ell=0}^{L}\sum_{m = -\ell}^\ell Y_{\ell, m}({\boldsymbol{r}})\overline{Y_{\ell, m}({\boldsymbol{r}}')} v({\boldsymbol{r}}')\\ &= \sum_{\ell=0}^{L}\frac{2\ell+1}{4\pi} P_\ell({\boldsymbol{r}}\cdot {\boldsymbol{r}}') v({\boldsymbol{r}}'),\nonumber\end{aligned}$$ and in the last step we used the addition theorem for spherical harmonics. Here $P_\ell$ is the Legendre polynomial of degree $\ell$, normalised so that $P_\ell(1) = 1$. # Solving $E{\boldsymbol{a}}= {\boldsymbol{a}}^{v}$ {#sec:Solving} As with any ill-posed system, it is essential to build in *a priori* knowledge of the solution. Neumaier [@neumaier1998solving Section 8], knowing that the true solution of an ill-posed problem is generally smooth, controls the smoothness through a smoothing operator $S$. But in this problem a smoothing operator would not be appropriate because the solution is the opposite of smooth, since for each $\ell,m$ the unknown quantity $a_{\ell,m}$ is a realisation of an independent random variable. That is a property we must build into the solution. 
Accordingly, we assume, in accordance with the usual assumptions for the CMB, that the $a_{\ell,m}$ are mean-zero uncorrelated random variables with covariance $(C_\ell)_{\ell=0}^L$, where $C_\ell$ is real. Define the vector of variables $${\boldsymbol{a}}= (a_{\ell,m}, \ell = 0,\ldots,L, m = -\ell,\ldots,\ell) \in \mathbb{C}^{(L+1)^2}.$$ We allow general $J$ in the range $L \le J \le L+K$, giving a linear system with $(J+1)^2$ equations, so typically an over-determined linear system with more equations than unknowns, implying that an exact solution does not in general exist. Moreover, we assume that the original field coefficients $a_{\ell,m}$ are corrupted by noise, so the actual model is $$\label{eq:perturbed} %E\ba_\beps = \ba^{v} + \beps, E{\boldsymbol{a}}_{\boldsymbol{\varepsilon}}= {\boldsymbol{a}}^{v} + {\boldsymbol{\varepsilon}}^{v} = ({\boldsymbol{a}}+{\boldsymbol{\varepsilon}})^{v} % = (\ba + \beps)v ,$$ where ${\boldsymbol{\varepsilon}}$ is a vector of independent mean-zero random variables $\varepsilon_{\ell, m}$ with a diagonal covariance matrix $\Upsilon$, and ${\boldsymbol{a}}_{\boldsymbol{\varepsilon}}$ is an approximation to ${\boldsymbol{a}}$. We also assume that the $\varepsilon_{\ell,m}$ and the $a_{\ell,m}$ are all statistically independent, so that in terms of expected values we have $$\begin{aligned} \label{eq:expect_quad} &{\left\langle \varepsilon_{\ell,m} \right\rangle} = 0, \quad {\left\langle a_{\ell,m} \right\rangle} = 0 , \nonumber \\ &{\left\langle \varepsilon_{\ell, m} \overline{a_{\ell',m'}} \right\rangle} = 0, \quad {\left\langle a_{\ell,m}\overline{a_{\ell',m'}} \right\rangle} = C_{\ell}\delta_{\ell, \ell'}\delta_{m,m'} \, ,\\ &{\left\langle \varepsilon_{\ell, m} \overline{\varepsilon_{\ell',m'}} \right\rangle} = \Upsilon_{\ell,m} \delta_{\ell, \ell'}\delta_{m,m'} \nonumber.\end{aligned}$$ We deduce the following expectations of quadratic forms: $$\begin{aligned} \label{eq:expect_q} {\left\langle {\boldsymbol{a}}{\boldsymbol{a}}^* \right\rangle} & = \Omega\nonumber\\ {\left\langle {\boldsymbol{a}}^{v} {\boldsymbol{a}}^* \right\rangle}&= {\left\langle (E{\boldsymbol{a}}){\boldsymbol{a}}^* \right\rangle} = E \nonumber \Omega\\ {\left\langle {\boldsymbol{a}}^v({\boldsymbol{a}}^{v})^{*} \right\rangle} &= {\left\langle (E{\boldsymbol{a}})({\boldsymbol{a}}^{*}E^{*}) \right\rangle} = E \Omega E^*\\ {\left\langle {\boldsymbol{\varepsilon}}{\boldsymbol{a}}^* \right\rangle} & = \boldsymbol{0}\nonumber \\ {\left\langle {\boldsymbol{\varepsilon}}{\boldsymbol{\varepsilon}}^* \right\rangle} & = \Upsilon, \nonumber\\ {\left\langle {\boldsymbol{\varepsilon}}^v ({\boldsymbol{\varepsilon}}^v)^* \right\rangle} & = E \Upsilon E^{*}, \nonumber\end{aligned}$$ where $$\label{eq:Cmat} \Omega_{\ell,m;\ell',m'} = C_{\ell}\delta_{\ell,\ell'}\delta_{m,m'} .$$ Let $\Lambda\in\mathbb{C}^{(L+1)^2 \times (L+1)^2}$ be a real symmetric-positive definite matrix, with associated norm $\| {\boldsymbol{a}}\|_\Lambda = \left({\boldsymbol{a}}^* \Lambda {\boldsymbol{a}}\right)^\frac{1}{2}$ defined by $$\label{E:Norm} \| {\boldsymbol{a}}\|_\Lambda^2 = {\boldsymbol{a}}^* \Lambda {\boldsymbol{a}}= \mathop{\mathrm{tr}}\left[ {\boldsymbol{a}}{\boldsymbol{a}}^* \Lambda\right] = \mathop{\mathrm{tr}}\left[ \Lambda {\boldsymbol{a}}{\boldsymbol{a}}^* \right],$$ where we used the matrix property $\mathrm{tr}(AB)= \mathrm{tr}(BA)$. 
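Before turning to the minimisation result, we remark as an aside that the statistical model [\[eq:expect_quad\]](#eq:expect_quad){reference-type="eqref" reference="eq:expect_quad"} is easy to simulate, which is convenient for testing: one may draw $a_{\ell,0}$ real with variance $C_\ell$, draw independent real and imaginary parts of variance $C_\ell/2$ for $m>0$, and complete negative $m$ via the reality condition [\[eq:a_symm\]](#eq:a_symm){reference-type="eqref" reference="eq:a_symm"}. A minimal sketch (Python, assuming NumPy; the power spectrum values are illustrative only) is:

```python
# Minimal sketch of one realisation of the coefficients a_{l,m} with
# <|a_{l,m}|^2> = C_l and the reality condition a_{l,-m} = (-1)^m conj(a_{l,m}).
# Assumes NumPy; the power spectrum C is illustrative only.
import numpy as np

def sample_alm(C, rng):
    L = len(C) - 1
    a = {}
    for l in range(L + 1):
        a[(l, 0)] = complex(rng.normal(scale=np.sqrt(C[l])))          # m = 0: real
        for m in range(1, l + 1):
            re, im = rng.normal(scale=np.sqrt(C[l] / 2.0), size=2)
            a[(l, m)] = complex(re, im)                               # E|a_{l,m}|^2 = C_l
            a[(l, -m)] = (-1) ** m * a[(l, m)].conjugate()
    return a

rng = np.random.default_rng(1)
C = [1.0 / (l + 1.0) ** 2 for l in range(5)]   # toy spectrum with L = 4
a = sample_alm(C, rng)
```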
The following theorem gives a condition for minimising the expected squared $\Lambda$-norm error of an approximate solution of [\[eq:perturbed\]](#eq:perturbed){reference-type="eqref" reference="eq:perturbed"}. It is an extension/specialisation of [@neumaier1998solving Theorem 8], which that author attributes to [@Bertero1980]. **Theorem 4**. *Consider the over-determined linear system $E {\boldsymbol{a}}_{\boldsymbol{\varepsilon}}= {\boldsymbol{a}}^v + {\boldsymbol{\varepsilon}}^v$, where ${\boldsymbol{a}}^v = E{\boldsymbol{a}}$, ${\boldsymbol{\varepsilon}}^v = E {\boldsymbol{\varepsilon}}$ and ${\boldsymbol{a}}$ and ${\boldsymbol{\varepsilon}}$ have the stochastic properties in [\[eq:expect_q\]](#eq:expect_q){reference-type="eqref" reference="eq:expect_q"}. Assume that the $(J+1)^2 \times (L+1)^2$ matrix $E$ has full rank $(L+1)^2$. Among all approximations of the form ${\boldsymbol{a}}_{\boldsymbol{\varepsilon}}\approx Q({\boldsymbol{a}}^{v}+ {\boldsymbol{\varepsilon}}^v)$, where $Q$ is a non-random $(L+1)^2 \times (J+1)^2$ matrix, the expected squared error ${\left\langle \|{\boldsymbol{a}}-Q ({\boldsymbol{a}}^v+{\boldsymbol{\varepsilon}}^v)\|_\Lambda^2 \right\rangle}$ is minimized by any solution $\widehat{Q}$ of the equation $$\label{eq:Qmin} \widehat{Q}E (\Omega+ \Upsilon) = \Omega.$$ The resulting minimum expected squared error is $$\label{eq:msegen} {\left\langle \| {\boldsymbol{a}}-\widehat{{\boldsymbol{a}}}\|_\Lambda^2 \right\rangle} = \mathop{\mathrm{tr}}\left[\Lambda \bigl( \Omega-\Omega(\Omega+\Upsilon)^{-1}\Omega\bigr)\right],$$ where $\widehat{{\boldsymbol{a}}}:= \widehat{Q}({\boldsymbol{a}}^v+{\boldsymbol{\varepsilon}}^v)$.* *Remark 1*. The minimizer $\widehat{{\boldsymbol{a}}}$ is in general not unique. *Proof.* Writing ${\boldsymbol{y}}_{\boldsymbol{\varepsilon}}:= {\boldsymbol{a}}^v + {\boldsymbol{\varepsilon}}^v$, a general linear approximation can be written as $Q{\boldsymbol{y}}_{\boldsymbol{\varepsilon}}= \widehat{Q}{\boldsymbol{y}}_{\boldsymbol{\varepsilon}}+ R {\boldsymbol{y}}_{\boldsymbol{\varepsilon}}$, where $\widehat{Q}$ is an as yet unknown minimizer, and $R = Q - \widehat{Q}$ is a matrix in $\mathbb{C}^{(L+1)^2 \times (J+1)^2}$. 
The mean square error can now be written as $$\begin{aligned} \|{\boldsymbol{a}}-(\widehat{Q}+R){\boldsymbol{y}}_{\boldsymbol{\varepsilon}}\|_\Lambda^2 & = \|{\boldsymbol{a}}-\widehat{Q}{\boldsymbol{y}}_{\boldsymbol{\varepsilon}}\|_\Lambda^2 + \|R{\boldsymbol{y}}_{\boldsymbol{\varepsilon}}\|_\Lambda^2 - 2\Re\left[\left({\boldsymbol{a}}-\widehat{Q}{\boldsymbol{y}}_{\boldsymbol{\varepsilon}}\right)^{*} \Lambda R{\boldsymbol{y}}_{\boldsymbol{\varepsilon}}\right]\\ & = \|{\boldsymbol{a}}-\widehat{Q}{\boldsymbol{y}}_{\boldsymbol{\varepsilon}}\|_\Lambda^2 + \|R{\boldsymbol{y}}_{\boldsymbol{\varepsilon}}\|_\Lambda^2 - 2\Re\left[\mathop{\mathrm{tr}}\left(\Lambda R{\boldsymbol{y}}_{\boldsymbol{\varepsilon}}({\boldsymbol{a}}-\widehat{Q}{\boldsymbol{y}}_{\boldsymbol{\varepsilon}})^{*}\right)\right].\end{aligned}$$ On taking expected values and using [\[eq:expect_q\]](#eq:expect_q){reference-type="eqref" reference="eq:expect_q"} we have $$\begin{aligned} {\left\langle \mathop{\mathrm{tr}}\left(\Lambda R {\boldsymbol{y}}_{\boldsymbol{\varepsilon}}({\boldsymbol{a}}- \widehat{Q}{\boldsymbol{y}}_{\boldsymbol{\varepsilon}})^*\right) \right\rangle} &= \mathop{\mathrm{tr}}\left(\Lambda R {\left\langle {\boldsymbol{y}}_{\boldsymbol{\varepsilon}}({\boldsymbol{a}}- \widehat{Q}{\boldsymbol{y}}_{\boldsymbol{\varepsilon}})^* \right\rangle} \right) \\ &=\mathop{\mathrm{tr}}\left(\Lambda R {\left\langle ({\boldsymbol{a}}^{v} + {\boldsymbol{\varepsilon}}^v) ({\boldsymbol{a}}- \widehat{Q}({\boldsymbol{a}}^v + {\boldsymbol{\varepsilon}}^v))^* \right\rangle} \right) \\ &=\mathop{\mathrm{tr}}\left( \Lambda R \left(E {\left\langle {\boldsymbol{a}}{\boldsymbol{a}}^* \right\rangle} - E{\left\langle {\boldsymbol{a}}{\boldsymbol{a}}^* + {\boldsymbol{\varepsilon}}{\boldsymbol{\varepsilon}}^* \right\rangle} E^* \widehat{Q}^* \right)\right)\\ &=\mathop{\mathrm{tr}}\left( \Lambda R \left( E \Omega- E(\Omega+ \Upsilon) E^* \widehat{Q}^* \right) \right).\end{aligned}$$ So $$\begin{split} {\left\langle \|{\boldsymbol{a}}-(\widehat{Q}+R){\boldsymbol{y}}_{\boldsymbol{\varepsilon}}\|_\Lambda^2 \right\rangle} &= {\left\langle \|{\boldsymbol{a}}-\widehat{Q}{\boldsymbol{y}}_{\boldsymbol{\varepsilon}}\|_\Lambda^2 \right\rangle} + {\left\langle \|R{\boldsymbol{y}}_{\boldsymbol{\varepsilon}}\|_\Lambda^2 \right\rangle} \\ &\quad \qquad -2\Re \mathop{\mathrm{tr}}\left( \Lambda R \left( E \Omega- E(\Omega+ \Upsilon) E^* \widehat{Q}^* \right) \right) . \end{split}$$ By definition, $\widehat{Q}$ is a minimizer of ${\left\langle \| {\boldsymbol{a}}- Q {\boldsymbol{y}}_{\boldsymbol{\varepsilon}}\|_\Lambda^2 \right\rangle}$, so the linear term must vanish for all $R$. More precisely, we must have $$\label{eq:EOmega} E \Omega= E (\Omega+ \Upsilon) E^* \widehat{Q}^*, \quad \mbox{or equivalently} \quad \widehat{Q}E(\Omega+\Upsilon) E^* = \Omega E^*,$$ since otherwise by taking $R$ to be $\bigl(E \Omega- E (\Omega+ \Upsilon)E^* \widehat{Q}^*\bigr)^*$ we obtain a contradiction. It is easily seen that the second equality in [\[eq:EOmega\]](#eq:EOmega){reference-type="eqref" reference="eq:EOmega"} is equivalent to [\[eq:Qmin\]](#eq:Qmin){reference-type="eqref" reference="eq:Qmin"}: starting from [\[eq:Qmin\]](#eq:Qmin){reference-type="eqref" reference="eq:Qmin"}, right multiplication by $E^*$ gives [\[eq:EOmega\]](#eq:EOmega){reference-type="eqref" reference="eq:EOmega"}; starting from [\[eq:EOmega\]](#eq:EOmega){reference-type="eqref" reference="eq:EOmega"}, right multiplication by $E$ and the invertibility of $E^* E$, a square matrix of full rank $(L+1)^2$, give back [\[eq:Qmin\]](#eq:Qmin){reference-type="eqref" reference="eq:Qmin"}.
If [\[eq:Qmin\]](#eq:Qmin){reference-type="eqref" reference="eq:Qmin"} holds then we have $$\begin{aligned} {\left\langle \| {\boldsymbol{a}}-(\widehat{Q}+R){\boldsymbol{y}}_{\boldsymbol{\varepsilon}}\|_\Lambda^2 \right\rangle} & = {\left\langle \| {\boldsymbol{a}}-\widehat{Q}{\boldsymbol{y}}_{\boldsymbol{\varepsilon}}\|_\Lambda^2 \right\rangle} + {\left\langle \| R{\boldsymbol{y}}_{\boldsymbol{\varepsilon}}\|_\Lambda^2 \right\rangle}\\ & \geq {\left\langle \| {\boldsymbol{a}}-\widehat{Q}{\boldsymbol{y}}_{\boldsymbol{\varepsilon}}\|_\Lambda^2 \right\rangle}\end{aligned}$$ with equality for $R = 0$, corresponding to $\widehat{{\boldsymbol{a}}}= \widehat{Q}{\boldsymbol{y}}_{\boldsymbol{\varepsilon}}$. The expected squared error is $$\begin{aligned} {\left\langle \|{\boldsymbol{a}}-\widehat{{\boldsymbol{a}}}\|_\Lambda^2 \right\rangle} & = {\left\langle \| {\boldsymbol{a}}-\widehat{Q}{\boldsymbol{y}}_{\boldsymbol{\varepsilon}}\|_\Lambda^2 \right\rangle}\\ &= {\left\langle \bigl({\boldsymbol{a}}-\widehat{Q}{\boldsymbol{y}}_{\boldsymbol{\varepsilon}}\bigr)^* \Lambda \bigl({\boldsymbol{a}}-\widehat{Q}{\boldsymbol{y}}_{\boldsymbol{\varepsilon}}\bigr) \right\rangle}\\ &= \mathop{\mathrm{tr}}\left[ \Lambda {\left\langle \bigl({\boldsymbol{a}}-\widehat{Q}{\boldsymbol{y}}_{\boldsymbol{\varepsilon}}\bigr) \bigl({\boldsymbol{a}}-\widehat{Q}{\boldsymbol{y}}_{\boldsymbol{\varepsilon}}\bigr)^* \right\rangle}\right]\\ &= \mathop{\mathrm{tr}}\left[\Lambda\left({\left\langle {\boldsymbol{a}}{\boldsymbol{a}}^* \right\rangle}+{\left\langle \widehat{Q}{\boldsymbol{y}}_{\boldsymbol{\varepsilon}}{\boldsymbol{y}}_{\boldsymbol{\varepsilon}}^{*}\widehat{Q}^* \right\rangle}-2\Re\bigl(\widehat{Q}{\left\langle {\boldsymbol{y}}_{\boldsymbol{\varepsilon}}{\boldsymbol{a}}^* \right\rangle}\bigr)\right)\right]\\ &= \mathop{\mathrm{tr}}\left[\Lambda\left( \Omega+\widehat{Q} (E \Omega E^*+ E\Upsilon E^*)\widehat{Q}^* - 2\Re\bigl(\widehat{Q}E \Omega\bigr)\right)\right]. \end{aligned}$$ Now by [\[eq:EOmega\]](#eq:EOmega){reference-type="eqref" reference="eq:EOmega"} $$\widehat{Q}E \Omega= \widehat{Q}(E \Omega E^*+ E \Upsilon E^*)\widehat{Q}^*,$$ and hence $$\begin{aligned} \mathop{\mathrm{tr}}[ \Lambda \widehat{Q}E \Omega] &= \mathop{\mathrm{tr}}[\Lambda \widehat{Q}E (\Omega+\Upsilon) E^*\widehat{Q}^*]\\ &= \mathop{\mathrm{tr}}[(\Omega+\Upsilon)^{1/2}E^*\widehat{Q}^*\Lambda \widehat{Q}E (\Omega+\Upsilon)^{1/2}]\\ &= \|\widehat{Q}E (\Omega+\Upsilon)^{1/2}\|_{\Lambda}^2,\end{aligned}$$ which is real, implying $$\begin{aligned} {\left\langle \|{\boldsymbol{a}}-\widehat{{\boldsymbol{a}}}\|_\Lambda^2 \right\rangle} &= \mathop{\mathrm{tr}}\left[\Lambda\left( \Omega-\widehat{Q}E \Omega\right)\right]\\ &=\mathop{\mathrm{tr}}\left[\Lambda\left( \Omega- \Omega(\Omega+\Upsilon)^{-1}\Omega\right)\right].\end{aligned}$$ ◻ *Remark 2*. Note that the equation [\[eq:EOmega\]](#eq:EOmega){reference-type="eqref" reference="eq:EOmega"} determining the minimizer $\widehat{Q}{\boldsymbol{y}}_{\boldsymbol{\varepsilon}}$ of the expected mean-square error does not depend on the matrix $\Lambda$, i.e. on the choice of quadratic norm. For example, using $\Lambda = I$ or $\Lambda = \Omega$ does not change $\widehat{Q}$. **Corollary 5**. Under the conditions of Theorem [Theorem 4](#thm:bagen){reference-type="ref" reference="thm:bagen"}, let $\Gamma$ be an arbitrary positive definite matrix of size $(J+1)^2 \times (J+1)^2$.
Then a vector $\widehat{{\boldsymbol{a}}}\in \mathbb{R}^{(L+1)^2}$ that achieves the minimal error given in [\[eq:msegen\]](#eq:msegen){reference-type="eqref" reference="eq:msegen"} is $$\label{eq:afromalpha} \widehat{{\boldsymbol{a}}} := \Omega(\Omega+ \Upsilon)^{-1}{\boldsymbol{\alpha}},$$ where ${\boldsymbol{\alpha}}\in \mathbb{R}^{(L+1)^2}$ is the unique solution of $$\label{eq:alpha} E^* \Gamma E {\boldsymbol{\alpha}}= E^* \Gamma ({\boldsymbol{a}}^v + {\boldsymbol{\varepsilon}}^v).$$ *Proof.* The matrix $\widehat{Q}$ defined by $$\label{eq:Qmin_example} \widehat{Q}:= \Omega(\Omega+\Upsilon)^{-1} (E^* \Gamma E)^{-1} E^* \Gamma$$ is easily seen to satisfy the condition [\[eq:Qmin\]](#eq:Qmin){reference-type="eqref" reference="eq:Qmin"} in Theorem [Theorem 4](#thm:bagen){reference-type="ref" reference="thm:bagen"}. Equally, it is easily seen that the corresponding minimizer $$\widehat{{\boldsymbol{a}}} := \widehat{Q}({\boldsymbol{a}}^v + {\boldsymbol{\varepsilon}}^v)$$ can be written exactly as stated in the corollary. ◻ *Remark 3*. The corollary gives our prescription for computing the coefficient vector $\widehat{{\boldsymbol{a}}}$. Note that the postprocessing step in [\[eq:afromalpha\]](#eq:afromalpha){reference-type="eqref" reference="eq:afromalpha"} is easily carried out given that the matrices $\Omega$ and $\Upsilon$ are diagonal, since each element of ${\boldsymbol{\alpha}}$ is by this step merely reduced by a known factor. Note also that equation [\[eq:alpha\]](#eq:alpha){reference-type="eqref" reference="eq:alpha"} is just the normal equation for the linear system if, as we shall assume in practice, $\Gamma$ is the identity matrix. Formation of the normal equations can greatly increase the condition number of an already ill-conditioned system. In practice we shall address the ill-conditioning either by QR factorisation of the matrix $E$, or (less desirably) by adding a regularising term to the coefficient matrix, to obtain $$\label{eq:alphareg} (E^* \Gamma E + \Sigma) {\boldsymbol{\alpha}}= E^* \Gamma ({\boldsymbol{a}}^v + {\boldsymbol{\varepsilon}}^v),$$ where $\Sigma$ is an empirically chosen positive definite $(L+1)^2 \times (L+1)^2$ matrix. The following proposition shows that if the elements of ${\boldsymbol{a}}^v$ and ${\boldsymbol{\varepsilon}}^v$ have the correct symmetry for real-valued fields $a^v = a v$ and $\varepsilon^v = \varepsilon v$, then so too do the computed values of $\widehat{{\boldsymbol{a}}}$. The practical importance of this result is that the symmetry property, since it occurs naturally, does not need to be enforced. **Proposition 6**. Assume that the components of ${\boldsymbol{a}}^v$ and ${\boldsymbol{\varepsilon}}^v$ satisfy $$a^v_{j,\mu}= (-1)^\mu \overline{a^v_{j,-\mu}} \quad\mbox{and}\quad \varepsilon^v_{j,\mu} = (-1)^\mu \overline{\varepsilon^v_{j,-\mu}}, \quad \mu = -j, \ldots, j, \; j\ge 0.$$ Assume also that the positive definite matrices $\Omega, \Upsilon$ and $\Gamma$, and also $\Sigma$ if present, are all diagonal, and that their diagonal elements are positive numbers independent of the second label $\mu$ or $m$.
Then $\widehat{{\boldsymbol{a}}}$ given by [\[eq:alpha\]](#eq:alpha){reference-type="eqref" reference="eq:alpha"} and [\[eq:afromalpha\]](#eq:afromalpha){reference-type="eqref" reference="eq:afromalpha"} satisfies $$\widehat{{\boldsymbol{a}}}_{\ell,m} = (-1)^m \overline{\widehat{{\boldsymbol{a}}}_{\ell, -m}},\quad m = -\ell, \ldots,\ell,\; \ell \ge 0.$$ *Proof.* We first show that the components of ${\boldsymbol{b}}:= E^* \Gamma({\boldsymbol{a}}^v +{\boldsymbol{\varepsilon}}^v)$ satisfy $$b_{\ell,m} = (-1)^m\overline{b_{\ell,-m}}, \quad m = -\ell,\ldots,\ell,\; \ell \ge 0.$$ We have, using [\[eq:Ereal\]](#eq:Ereal){reference-type="eqref" reference="eq:Ereal"}, $$\begin{aligned} \overline{b_{\ell,-m}} &= \overline{\sum_{j}\sum_\mu (E^*)_{\ell,-m;j,\mu} \Gamma_j (a^v_{j,\mu}+\varepsilon^v_{j,\mu})}\\ &= \sum_{j}\sum_\mu (-1)^{m - \mu}(E^*)_{\ell,m;j,-\mu} \Gamma_j \overline{(a^v_{j,\mu}+\varepsilon^v_{j,\mu})}\\ &= (-1)^m\sum_{j}\sum_\mu (E^*)_{\ell,m;j,-\mu} \Gamma_j (a^v_{j,-\mu}+ \varepsilon^v_{j,-\mu})\\ &= (-1)^m b_{\ell,m},\end{aligned}$$ as required. A similar argument shows that $$(E^* \Gamma E + \Sigma)_{\ell,-m; \ell', m'} =(-1)^{m-m'}\overline{(E^* \Gamma E + \Sigma)_{\ell,m; \ell', -m'}}.$$ Since ${\boldsymbol{\alpha}}$ is the unique solution of $$(E^* \Gamma E + \Sigma) {\boldsymbol{\alpha}}= {\boldsymbol{b}},$$ by taking the $\ell, -m$ component of this equation we obtain $$\sum_{\ell'}\sum_{m'}(E^* \Gamma E + \Sigma)_{\ell, -m ; \ell', m'} \alpha_{\ell',m'} =b_{\ell, -m},$$ which with the above symmetry properties leads to $$\sum_{\ell'}\sum_{m'}(-1)^{m-m'}\overline{(E^* \Gamma E + \Sigma)_{\ell, m ; \ell', -m'} }\alpha_{\ell',m'} =(-1)^m \overline{b_{\ell, m}}.$$ On taking the complex conjugate and dividing by $(-1)^m$ this gives us $$\label{eq:c_eqn} (E^* \Gamma E + \Sigma) {\boldsymbol{c}}= {\boldsymbol{b}},$$ where $$\label{eq:c_symm} c_{\ell', -m'} := (-1)^{m'} \overline{ \alpha_{\ell', m'}},\quad m' = -\ell',\ldots,\ell',\; \ell' \ge 0.$$ We see by uniqueness of the solution of [\[eq:c_eqn\]](#eq:c_eqn){reference-type="eqref" reference="eq:c_eqn"} that ${\boldsymbol{c}}={\boldsymbol{\alpha}}$, thus by [\[eq:c_symm\]](#eq:c_symm){reference-type="eqref" reference="eq:c_symm"} the vector ${\boldsymbol{\alpha}}$ has the desired symmetry. Multiplication by $\Omega(\Omega+ \Upsilon)^{-1}$ clearly preserves the symmetry, thus the proof is complete. ◻

# Axially symmetric masks {#sec:axsym}

A general mask $v({\boldsymbol{r}})$ leads to a large dense matrix $E$, of size $(J+1)^2 \times (L+1)^2$, see [\[eq:Edefined\]](#eq:Edefined){reference-type="eqref" reference="eq:Edefined"}, a size beyond present resources if $J$ and $L$ are in the hundreds. In this section we consider the more tractable special case in which the mask $v$ is axially symmetric, i.e. $$v({\boldsymbol{r}}) = v(\theta, \phi) = v(\theta),$$ with $v$ independent of the azimuthal angle $\phi$.
For this case we have $$v_{k,\nu} = w_{k } \delta_{\nu,0},$$ where $$\label{eq:w} w_{k}:= v_{k,0} = \int_{\mathbb{S}^2} v({\boldsymbol{r}})\overline{Y_{k,0}({\boldsymbol{r}})}{\mathrm{d}}{\boldsymbol{r}} = 2\pi\sqrt{\frac{2 k + 1}{4\pi}} \int_0^\pi v(\theta) P_{k}(\cos(\theta))\sin(\theta)\,{\mathrm{d}}\theta.$$ Note that $w_{k}$ is real, and that $$w_{k}= 0 \quad\mbox{if }k \mbox{ is odd and also } v(-{\boldsymbol{r}}) = v({\boldsymbol{r}}).$$ In this case it follows from [\[eq:Eint\]](#eq:Eint){reference-type="eqref" reference="eq:Eint"} that $E_{j,\mu;\ell,m}=0$ unless $\mu = m$. Thus it is convenient to introduce a new notation, $$\label{eq:Emlj} E_{j,\ell}^{(m)} := E_{j,m;\ell,m}.$$ Equation [\[eq:aequalsEa\]](#eq:aequalsEa){reference-type="eqref" reference="eq:aequalsEa"} now becomes $$\label{eq:axial} \sum_{\ell=0}^L E_{j,\ell}^{(m)}a_{\ell,m}= a_{j,m}^v,$$ in which the coefficients belonging to different values of $m$ are completely decoupled. This can be seen as just a special case of [\[eq:aeqav_math\]](#eq:aeqav_math){reference-type="eqref" reference="eq:aeqav_math"}, albeit with uncoupled values of $m$, thus all of the analysis in Sections [3](#sec:PropE){reference-type="ref" reference="sec:PropE"} and [4](#sec:Solving){reference-type="ref" reference="sec:Solving"} remains applicable. Note that from Lemma [Lemma 1](#lem:Eprop){reference-type="ref" reference="lem:Eprop"} $E^{(m)}_{j,\ell}$ is real and symmetric, $E^{(m)}_{j,\ell}= E^{(m)}_{\ell,j}$. Moreover $$E^{(m)}_{j,\ell} = 0 \,\mbox{ if } j + \ell \,\mbox{ is odd and } \, v(-{\boldsymbol{r}}) = v({\boldsymbol{r}}) \mbox{, or if } \ell < |m| \mbox{ or }j < |m| .$$ We can treat $E^{(m)}_{j,\ell}$ as a $(J-|m|+1) \times (L-|m|+1)$ matrix, but how should we choose $J$? The choice $J = L$ inevitably leads to a poorly conditioned linear system. There would seem to be considerable benefit, at least in theory, in taking the largest value $J = L + K$, to ensure that the resulting overdetermined linear system makes use of all available information. The equation to be solved in practice is, instead of [\[eq:alpha\]](#eq:alpha){reference-type="eqref" reference="eq:alpha"}, now $$\label{eq:eqtosolveaxial} \bigl( (E^{(m)})^* \Gamma_a E^{(m)}\bigr) {\boldsymbol{\alpha}}= (E^{(m)})^* \Gamma_a ({\boldsymbol{a}}^v + {\boldsymbol{\varepsilon}}^v).$$ If regularisation is desired, then, instead of [\[eq:alphareg\]](#eq:alphareg){reference-type="eqref" reference="eq:alphareg"}, the equation to be solved becomes $$\label{eq:eqtosolveaxialReg} \bigl( (E^{(m)})^* \Gamma_a E^{(m)} + \Sigma_a\bigr) {\boldsymbol{\alpha}}= (E^{(m)})^* \Gamma_a ({\boldsymbol{a}}^v + {\boldsymbol{\varepsilon}}^v).$$ Here $\Gamma_a$ and $\Sigma_a$ have the same diagonal values as $\Gamma$ and $\Sigma$, but the second label on rows and columns has now disappeared, and for each $m$ the new matrices are of size $(J-|m|+1) \times (J-|m|+1)$ and $(L-|m|+1) \times (L-|m|+1)$ respectively.
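To make the two-step procedure above concrete, here is a minimal sketch in Python/NumPy of the per-$m$ computation: a QR-based least-squares solve of [\[eq:eqtosolveaxial\]](#eq:eqtosolveaxial){reference-type="eqref" reference="eq:eqtosolveaxial"} with $\Gamma_a = I$, followed by the postprocessing step [\[eq:afromalpha\]](#eq:afromalpha){reference-type="eqref" reference="eq:afromalpha"}. All variable names are illustrative, and the matrix $E^{(m)}$ is replaced by a random stand-in (assembling the true matrix requires the mask coefficients $w_k$), so the snippet only mirrors the structure of the computation, not the actual experimental pipeline.

```python
import numpy as np

# Minimal sketch of the per-m reconstruction; all names are illustrative and
# E_m is a random stand-in for the true matrix E^{(m)}.
rng = np.random.default_rng(0)
L, K, m = 20, 40, 3          # toy sizes (the experiments below use L = 100, K = 900)
J = L + K
n_rows, n_cols = J - m + 1, L - m + 1

E_m = rng.standard_normal((n_rows, n_cols))   # stand-in for E^{(m)}
omega = np.ones(n_cols)                       # diagonal of Omega (prior angular power spectrum)
tau = 1e-2
upsilon = tau * omega                         # diagonal of Upsilon (noise spectrum)

a_true = rng.standard_normal(n_cols)          # stand-in "true" coefficients a_{ell,m} for this m
y = E_m @ a_true + 1e-3 * rng.standard_normal(n_rows)   # data a^v + eps^v for this m

# Least-squares step with Gamma_a = I: a thin QR of E^(m) avoids forming the
# normal equations; then solve the triangular system R1 alpha = Q1^T y.
Q1, R1 = np.linalg.qr(E_m)
alpha = np.linalg.solve(R1, Q1.T @ y)

# Postprocessing: a_hat = Omega (Omega + Upsilon)^{-1} alpha, elementwise for
# diagonal matrices (division by 1 + tau when Upsilon = tau * Omega).
a_hat = omega / (omega + upsilon) * alpha

print("relative l2 error:", np.linalg.norm(a_hat - a_true) / np.linalg.norm(a_true))
```

Since $\Omega$ and $\Upsilon$ are diagonal, the last step is an elementwise shrinkage of ${\boldsymbol{\alpha}}$; with $\Upsilon = \tau\,\Omega$ it reduces to division by $1+\tau$, as used in the experiments below.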
# Numerical experiments {#sec:NumExp}

Recall that our goal is to reconstruct a scalar random field on the sphere given only spherical harmonic coefficients from a masked and noisy version. We have seen in previous sections that the problem can be reduced to the solution of the overdetermined linear system $$\label{eq:model} E {\boldsymbol{a}}= {\boldsymbol{b}}^v ,$$ where the given data ${\boldsymbol{b}}^v = ({\boldsymbol{a}}+ {\boldsymbol{\varepsilon}})^v = {\boldsymbol{a}}^v + {\boldsymbol{\varepsilon}}^v$ are the spherical harmonic coefficients of the masked noisy map, with the coefficients ${\boldsymbol{a}}$ corrupted by independent Gaussian noise ${\boldsymbol{\varepsilon}}$ with mean ${\left\langle {\boldsymbol{\varepsilon}} \right\rangle} = \boldsymbol{0}$ and variance ${\left\langle {\boldsymbol{\varepsilon}}{\boldsymbol{\varepsilon}}^* \right\rangle} = \Upsilon$. To illustrate the potential of the method we consider numerical experiments where we know the "true" solution ${\boldsymbol{a}}$, so we can calculate errors to test performance and the effect of model parameters, including taking ${\boldsymbol{\varepsilon}}= \boldsymbol{0}$. ![Original Gaussian random field](Linear_Nside2048_instance1.png){#fig:originalRF} In the experiments, a specified angular power spectrum $C_\ell, \ell = 2,\ldots,L$ is used to generate an instance of a Gaussian random field with known spherical harmonic coefficients ${\boldsymbol{a}}$ at the $N_{\textrm{pix}} = 50,331,648$ `HEALPix`[^2] [@2005ApJ...622..759G] points ($N_{\textrm{side}} = 2048$), using the `HealPy`[^3] package [@Zonca2019]. Noise is then added as described in Subsection [6.3](#ss:Experiments){reference-type="ref" reference="ss:Experiments"}. The mask is then applied point-wise to the noisy map, and the masked noisy map is used to calculate the spherical harmonic coefficients ${\boldsymbol{b}}^v$, again using the `HealPy` package. We consider a Gaussian random field with angular power spectrum $$C_\ell = g\left(\frac{\ell}{L+1}\right),$$ with $$g(x) = \begin{cases} 1 & \text{ for } 0 \le x \le 1/2 \\ -2x + 2 & \text{ for } 1/2 \le x \le 1. \end{cases}$$ In the experiments, we assume that $L=100$ and $K=900$. A realisation of such a random field is shown in Figure [1](#fig:originalRF){reference-type="ref" reference="fig:originalRF"}.

## An axially symmetric mask {#sec:axsymact}

When the mask applied to the noisy data is axially symmetric, the problem decomposes into independent problems for each value of $m$, as in Section [5](#sec:axsym){reference-type="ref" reference="sec:axsym"}. To construct the mask we first define the following non-decreasing function $p\in C^3(\mathbb{R})$: $$\label{eq:maskp} p(x)= \begin{cases} 0 & \text { for } x\le 0,\\ x^4(35-84x+70x^2-20x^3) & \text{ for } 0<x<1,\\ 1 & \text{ for } x \ge 1. \end{cases}$$ Then, as a function of the Cartesian coordinate $z \in [-1, 1]$ of a point on the sphere, our mask is, for $0 < a_z < b_z < 1$, $$\label{eq:maskz} v(z) = p\left(\frac{|z|-a_z}{b_z-a_z}\right).$$ In Figure [2](#fig:mask_kappa3){reference-type="ref" reference="fig:mask_kappa3"}, an axially symmetric mask with $a_z = \cos(\frac{\pi}{2} - \frac{10\pi}{180})$ and $b_z = \cos(\frac{\pi}{2}- \frac{20\pi}{180})$ is plotted on the unit sphere. ![An axially symmetric $C^3$ mask with $a_z = \cos(\frac{\pi}{2} - \frac{10\pi}{180})$ and $b_z = \cos(\frac{\pi}{2}- \frac{20\pi}{180})$.
](kapp3_Mask_Nside2048.png){#fig:mask_kappa3} This mask has the value $1$ (and hence has no masking effect) for points on the sphere more than $20^{\degree}$ from the equator, the value 0 (complete masking) within $10^{\degree}$ of the equator, and smooth variation in between, through the function $p$, see [\[eq:maskp\]](#eq:maskp){reference-type="eqref" reference="eq:maskp"}, with an argument expressed as a piecewise linear function of the $z$ coordinate of a point on the sphere. (We do not use a discontinuous mask because a discontinuous function has slow convergence of its Fourier series, leading to Gibbs' phenomenon for the truncated Fourier series.) ![The $C^3$ mask $v(\theta,\phi)$ with $a = \frac{10\pi}{180}$, $b = \frac{20\pi}{180}$.](mask_theta.png){#fig:mask_theta} The transformation $z := \cos(\theta) = \cos(\frac{\pi}{2} - \varphi)$, with latitude $\varphi = \frac{\pi}{2} - \theta\in [-\frac{\pi}{2}, \frac{\pi}{2}]$, gives $$v(\varphi, \phi) = p\left(\frac{|\cos(\frac{\pi}{2} - \varphi)|-a_z}{b_z-a_z}\right).$$ In terms of latitude the transition region is $|\varphi| \in [a, b]$ where $a = \frac{10\pi}{180}$ and $b = \frac{20\pi}{180}$, as illustrated in Figure [3](#fig:mask_theta){reference-type="ref" reference="fig:mask_theta"}. An instance of the masked random field on the sphere, including the addition of noise as described in Section [6.3](#ss:Experiments){reference-type="ref" reference="ss:Experiments"}, is illustrated in Figure [4](#fig:maskedRF){reference-type="ref" reference="fig:maskedRF"}. ![The masked noisy random field for $\tau = 10^{-4}$](Masked_noisy__1e_4_Nside2048_instance1.png){#fig:maskedRF} Using the axially symmetric mask described in Section [6.1](#sec:axsymact){reference-type="ref" reference="sec:axsymact"}, all the rectangular matrices $E=E^{(m)}$ for $m=0,\ldots,L$ of sizes $(J+1-m,L+1-m)$ with $J=L+K$ are pre-computed in parallel.

## Numerical condition of the problem {#sec:NumCond}

![Smallest and largest singular values](svminmaxE_L1max100_L2max900.png){#fig:svEmL100K900 width="\\textwidth"} ![Condition numbers](svcondE_L1max100_L2max900.png){#fig:condEmL100K900 width="\\textwidth"} Figures [5](#fig:svEmL100K900){reference-type="ref" reference="fig:svEmL100K900"} and [6](#fig:condEmL100K900){reference-type="ref" reference="fig:condEmL100K900"} illustrate the condition of the matrices $E^{(m)}$ for $L = 100$, $K = 900$ and the mask in Figure [3](#fig:mask_theta){reference-type="ref" reference="fig:mask_theta"}. The largest singular values in Figure [5](#fig:svEmL100K900){reference-type="ref" reference="fig:svEmL100K900"} are consistent with the upper bound on the singular values in Theorem [Theorem 2](#thm:svbnd){reference-type="ref" reference="thm:svbnd"}. Even though the mask takes values in $[0, 1]$, the polynomial approximation of the mask may have values slightly outside this interval due to the Gibbs phenomenon. The ill-conditioning of the matrices $E^{(m)}$, illustrated in Figure [6](#fig:condEmL100K900){reference-type="ref" reference="fig:condEmL100K900"}, can be severe, especially for small $m$, and arises from the smallness of the smallest singular value. To avoid the squaring of the condition number of $E$ when solving the equations [\[eq:alpha\]](#eq:alpha){reference-type="eqref" reference="eq:alpha"} with coefficient matrix $E^* \Gamma E$, we can use, as is standard, the QR factorization. Consider the generic case where $E$ is a $(J+1)^2$ by $(L+1)^2$ matrix.
Assume that the positive definite matrix $\Gamma$ has a readily available factorization $\Gamma = \Theta^* \Theta$ (for example if $\Gamma$ is diagonal) and let $$\label{eq:Q1R1} \Theta E = Q_1 R_1$$ where $Q_1$ is a $(J+1)^2$ by $(L+1)^2$ matrix with orthonormal columns and $R_1$ is an $(L+1)^2$ by $(L+1)^2$ upper triangular matrix. The solution to equation [\[eq:alpha\]](#eq:alpha){reference-type="eqref" reference="eq:alpha"}, assuming $R_1$ is non-singular, is obtained by solving $$R_1 {\boldsymbol{\alpha}}= Q_1^* \Theta ({\boldsymbol{a}}^v + {\boldsymbol{\varepsilon}}^v)$$ by back-substitution. If $R_1$, which has the same condition number as $\Gamma^{\frac{1}{2}} E$, has diagonal elements which are too small, the regularized equation [\[eq:alphareg\]](#eq:alphareg){reference-type="eqref" reference="eq:alphareg"} can be used.

## Experiments {#ss:Experiments}

Initially we assume there is no noise, that is ${\boldsymbol{\varepsilon}}=0$. In particular the noise covariance matrix $\Upsilon= 0$. We can still interpret [\[eq:model\]](#eq:model){reference-type="eqref" reference="eq:model"} as a least squares problem, which is solved using the QR factorization. The reconstructed field $\widehat{a}({\boldsymbol{r}}), {\boldsymbol{r}}\in \mathbb{S}^2$ and the corresponding error field $\widehat{a}({\boldsymbol{r}}) - a({\boldsymbol{r}}), {\boldsymbol{r}}\in \mathbb{S}^2$ are shown in Figure [\[fig:No noise\]](#fig:No noise){reference-type="ref" reference="fig:No noise"}. The rows of Tables [1](#tab:l2_errors){reference-type="ref" reference="tab:l2_errors"} and [2](#tab:l2_errors 0 and 1){reference-type="ref" reference="tab:l2_errors 0 and 1"} with $\Upsilon_\ell = 0$ correspond to the no-noise experiment. ![Reconstructed field](reconstructed_Gaussian_field_no_noise_QR.png){#fig:reconstructedRF_QR} \ ![Error field](error_field_no_noise_QR.png){#fig:reconstructedRF_QR_error} We now generate noise fields as another Gaussian random field with angular power spectrum $\Upsilon_{\ell} = \tau C_{\ell}, \; \ell=0,\ldots,L$ with $\tau = 10^{-4}; 10^{-3}; 10^{-2}; 10^{-1}$. We then add the noise field to the original Gaussian random field, mask it with an axially symmetric mask, then reconstruct the field using the two approaches below. i\) Using the QR factorization [\[eq:Q1R1\]](#eq:Q1R1){reference-type="eqref" reference="eq:Q1R1"}, we can simplify [\[eq:alpha\]](#eq:alpha){reference-type="eqref" reference="eq:alpha"}, assuming $R_1$ is non-singular ($\Theta E$ has full column rank), as $$R_1 {\boldsymbol{\alpha}}= Q_1^* \Gamma^{\frac12}({\boldsymbol{a}}^v + {\boldsymbol{\varepsilon}}^v).$$ The coefficient vector $\widehat{{\boldsymbol{a}}}$ is then obtained from equation [\[eq:afromalpha\]](#eq:afromalpha){reference-type="eqref" reference="eq:afromalpha"}. Note that in our experiments, since $\Upsilon= \tau \Omega$, and both matrices $\Upsilon$ and $\Omega$ are diagonal, [\[eq:afromalpha\]](#eq:afromalpha){reference-type="eqref" reference="eq:afromalpha"} is just $$\label{eq:a al simp} \widehat{{\boldsymbol{a}}} = \frac{1}{1+\tau} {\boldsymbol{\alpha}}.$$ The reconstructed fields $\widehat{a}({\boldsymbol{r}})$ and error fields $\widehat{a}({\boldsymbol{r}}) - a({\boldsymbol{r}})$ using this approach are plotted for ${\boldsymbol{r}}\in\mathbb{S}^2$ in Figures [\[fig:Noise sig 1e-4 QR\]](#fig:Noise sig 1e-4 QR){reference-type="ref" reference="fig:Noise sig 1e-4 QR"} and [\[fig:Noise sig 1e-2 QR\]](#fig:Noise sig 1e-2 QR){reference-type="ref" reference="fig:Noise sig 1e-2 QR"} for the two noise levels $\tau = 10^{-4}$ and $\tau = 10^{-2}$.
ii\) In this approach, we use the regularised equation [\[eq:alphareg\]](#eq:alphareg){reference-type="eqref" reference="eq:alphareg"} and [\[eq:afromalpha\]](#eq:afromalpha){reference-type="eqref" reference="eq:afromalpha"} with $\Gamma$ being the $(J+1)\times(J+1)$ identity matrix and $\Sigma = \nu I$ with $I$ being the $(L+1)\times (L+1)$ identity matrix. The optimal value of $\nu$ is found by minimising the $\ell_2$ error $$\label{eq:l2err} \|{\boldsymbol{a}}(0) - \widehat{{\boldsymbol{a}}}(0)\|^2 + 2 \sum_{m=1}^L \| \widehat{{\boldsymbol{a}}}(m) - {\boldsymbol{a}}(m) \|^2.$$ By Parseval's theorem, minimising [\[eq:l2err\]](#eq:l2err){reference-type="eqref" reference="eq:l2err"} is equivalent to minimising $\|\widehat{a} - a\|^2_{L_2}$. The optimal value of $\nu$ is estimated via a grid search procedure over the values $\nu_0 = 10^{-15}; 1; 10; 10^2; 10^3; 10^4; 10^5$, then, for $\nu_0 = 10^{-15}$, zooming in to a finer grid $$\nu_k = \{ 10^{-15}+k\times 10^{-16}: k=0,5,10,15,20,25\}.$$ For our particular experiments, we found that the optimal value for $\nu$ is $2\times 10^{-15}$; even so, the reconstructed fields have lower quality, and the errors are larger, than those obtained with the QR factorisation approach. Let us define $$\|a\|_{\ell_2} = \left( \frac{4\pi}{N_{\rm pix}} \sum_{i=1}^{N_{\rm pix}} |a({\boldsymbol{r}}_i)|^2 \right)^{1/2} .$$ The root mean square error of a reconstructed map is defined to be $${\rm RMSerr}(\widehat{a}) = \left ( \frac{4\pi}{N_{\rm pix}}\sum_{i=1}^{N_{\rm pix}} |\widehat{a}(\boldsymbol{r}_i) - a(\boldsymbol{r}_i)|^2 \right)^{1/2},$$ where $N_{\rm pix}$ is the number of pixels of the map in `HEALPix` format. In our numerical experiments, $N_{\rm pix}=50,331,648$. The relative errors are defined to be $${\rm relative\;\; error}:= {\rm RMSerr}(\widehat{a})/\|a\|_{\ell_2}.$$ Table [1](#tab:l2_errors){reference-type="ref" reference="tab:l2_errors"} shows the root mean square errors and relative errors between the reconstructed maps and the original map using the QR factorisation approach.

  $\Upsilon_{\ell}$    RMSerr$(\widehat{a})$   relative errors
  -------------------- ----------------------- -----------------
  $0$                  $3.913$                 $0.078$
  $10^{-4} C_{\ell}$   $3.949$                 $0.079$
  $10^{-3} C_{\ell}$   $4.213$                 $0.084$
  $10^{-2} C_{\ell}$   $6.364$                 $0.127$

  : RMS errors of reconstructed maps and relative errors using QR factorization. Here $\|a\|_{\ell_2} = 50.265$.

We also consider two regions of the sphere: $${\mathcal R}_0 := \{ {\boldsymbol{r}}\in \mathbb{S}^2: v({\boldsymbol{r}}) = 0\}, \qquad {\mathcal R}_1 := \{ {\boldsymbol{r}}\in \mathbb{S}^2: v({\boldsymbol{r}}) > 0\}.$$ In ${\mathcal R}_0$ the mask is zero, so no information is available at these points, while in ${\mathcal R}_1$ the mask is non-zero, so some information is available at these points. Let $N_j := |{\mathcal R}_j|, \; j=0, 1$. In our experiments, $N_0 = 8,740,864$ and $N_1 = 41,590,784$. In the following, we let, for $j = 0, 1$, $$\|a\|_{\ell_2({\mathcal R}_j)} = \left( \frac{4\pi}{N_j} \sum_{\boldsymbol{r} \in {\mathcal R}_j} |a(\boldsymbol{r})|^2 \right)^{1/2}, \quad {\rm RMSerr_j} (\widehat{a} ) = \left( \frac{4\pi}{N_j}\sum_{{\boldsymbol{r}}\in {\mathcal R}_j} |\widehat{a}(\boldsymbol{r}) - a(\boldsymbol{r})|^2 \right)^{1/2}.$$
Note that from the definitions and from the fact that $\mathcal{R}_0 \cup \mathcal{R}_1 = \mathbb{S}^2$, $$N_{\rm pix}\|a\|^2_{\ell_2} = N_0 \|a\|^2_{\ell_2({\mathcal R}_0)} + N_1 \|a\|^2_{\ell_2({\mathcal R}_1)}.$$ The relative errors on each region ${\mathcal R}_j$ for $j=0,1$ are defined by $$\label{eq:errRj} {\rm relative\; err}_j = {\rm RMSerr}_j(\widehat{a})/\|a\|_{\ell_2({\mathcal R}_j)}.$$

  $\Upsilon_{\ell}$    RMSerr$_0(\widehat{a})$   relative err$_0$   RMSerr$_1(\widehat{a})$   relative err$_1$
  -------------------- ------------------------- ------------------ ------------------------- ---------------------
  $0$                  $9.390$                   $0.184$            $9.7 \cdot 10^{-5}$       $1.9 \cdot 10^{-6}$
  $10^{-4} C_{\ell}$   $9.409$                   $0.184$            $0.519$                   $0.010$
  $10^{-3} C_{\ell}$   $9.470$                   $0.185$            $1.622$                   $0.032$
  $10^{-2} C_{\ell}$   $10.457$                  $0.205$            $5.102$                   $0.102$

  : RMS errors of reconstructed maps for different regions of the mask using QR factorisation, where the relative error on each region is defined in [\[eq:errRj\]](#eq:errRj){reference-type="eqref" reference="eq:errRj"} with $\|a\|_{\ell_2(\mathcal{R}_0)}=51.071$ and $\|a\|_{\ell_2(\mathcal{R}_1)}=50.094$.

As can be seen from Table [2](#tab:l2_errors 0 and 1){reference-type="ref" reference="tab:l2_errors 0 and 1"}, the reconstruction is better than the uninformed guess of zero in the masked region. The relative errors in the unmasked region are consistent with the level of noise. ![Reconstructed field](reconstr_field_QR_gauss_noise_1e-4.png){#fig:reconstructedRF2 Sig 1e-4 QR} \ ![Error field](error_field_QR_gauss_noise_1e-4.png){#fig:error2 Sig 1e-4 QR} ![Reconstructed field](reconstr_field_QR_gauss_noise_1e-2.png){#fig:reconstructedRF2 Sig 1e-2 QR} \ ![Error field](error_field_QR_gauss_noise_1e-2.png){#fig:error2 Sig 1e-2 QR factorization}

# Acknowledgements {#acknowledgements .unnumbered}

The assistance of Yu Guang Wang in the early stages of the project is also gratefully acknowledged. This research includes computations using the computational cluster Katana supported by Research Technology Services at UNSW Sydney [@Katana]. Some of the results in this paper have been derived using the `healpy` and `HEALPix` packages.

[^1]: The mask $v$ is typically identically $1$ or $0$ over some parts of the sphere, and hence not polynomial, but expressible as a uniformly convergent infinite sum of the same form as in [\[eq:v\]](#eq:v){reference-type="eqref" reference="eq:v"}.

[^2]: `http://healpix.sf.net`

[^3]: `https://pypi.org/project/healpy/`
{ "id": "2309.14815", "title": "Removing the mask -- reconstructing a scalar field on the sphere from a\n masked field", "authors": "Jan Hamann, Quoc Thong Le Gia, Ian H. Sloan, Robert S. Womersley", "categories": "math.NA cs.NA gr-qc", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
---
abstract: |
  We consider a scaled Navier--Stokes--Fourier system describing the motion of a compressible, heat-conducting, viscous fluid driven by inhomogeneous boundary temperature distribution together with the gravitational force of a massive object placed outside the fluid. We identify the limit system in the low Mach/low Froude number regime for ill-prepared initial data. The fluid is confined to a bounded cavity with acoustically hard boundary enhancing reflection of acoustic waves.
author:
- "Francesco Fanelli [^1]"
- "Eduard Feireisl [^2]"
title: "**Thermally driven fluid convection in the incompressible limit regime**"
---

BCAM -- Basque Center for Applied Mathematics Alameda de Mazarredo 14, E-48009 Bilbao, Basque Country, Spain

Ikerbasque -- Basque Foundation for Science Plaza Euskadi 5, E-48009 Bilbao, Basque Country, Spain

Univ. Lyon, Université Claude Bernard Lyon 1, CNRS UMR 5208, Institut Camille Jordan, F-69622 Villeurbanne, France

Email address: `ffanelli@bcamath.org`

Institute of Mathematics of the Academy of Sciences of the Czech Republic Žitná 25, CZ-115 67 Praha 1, Czech Republic

Email address: `feireisl@math.cas.cz`

**2020 Mathematics Subject Classification:** 35B40 (primary); 35Q30, 76N15, 35B25 (secondary).

**Keywords:** Navier--Stokes--Fourier system, Dirichlet boundary conditions, incompressible limit, Oberbeck--Boussinesq system.

# Introduction {#i}

Thermally driven convective flows are ubiquitous in many real-world phenomena, notably in geophysics, meteorology and stellar dynamics or, in general, plasma motion, to name only a few. The dynamics is enhanced by the mutual interaction of thermal expansion of a fluid with other volume forces, typically gravitation. The fluid mass density changes during the process, and a correct mathematical description is therefore based on models of *compressible* fluids. Still, the simplified *incompressible* formulation is frequently used, notably in numerical simulations; see Barletta et al. [@barletta2022Bous], [@barletta2022use], Fröhlich [@FrLaPe], or the survey by Klein et al. [@KBSMRMHS]. A natural idea is to justify the incompressible *target* problem as a singular limit of the complete *primitive* system. In the context of compressible viscous fluids, the primitive model is represented by the Navier--Stokes--Fourier system specified below, while the *target* problem is usually identified with the Oberbeck--Boussinesq approximation. The limit process is described in detail by Zeytounian [@ZEY], and the rigorous justification was obtained in [@FN5], [@FeNo6A Chapter 5] in the case of *conservative* boundary conditions. A suitable theoretical platform based on a new concept of weak solution to attack problems involving thermally driven fluid convection has been developed only recently in [@ChauFei], [@FeiNovOpen]. The singular limit problem in this new framework has then been revisited in [@BelFeiOsch]. Rather surprisingly and in contrast with the previously obtained results, thermally driven flows give rise to an additional "unexpected" non--local term in the boundary conditions for the target problem, see [@BelFeiOsch Theorem 4.1]. The result of [@BelFeiOsch] relies on the relative energy method and yields convergence to the target system in a strong topology associated with the energy norm. In particular, the convergence rate can be estimated in terms of the initial data.
However, this approach imposes two major restrictions: - *well--prepared* initial data for the primitive system; - the existence of smooth solutions for the target system; in particular, the convergence result can be stated only on a short time interval in the natural 3--d setting, cf. [@Abb-Feir]. Our goal is to consider *ill--prepared* initial data and to show convergence to a global--in--time solution of the target problem in the framework of the weak topology. On the one hand, the ill--prepared data allow for relatively large deviations from the equilibrium state. On the other hand, thermally driven systems operate in the far from equilibrium regime, as observed *e.g.* in the well-known Rayleigh--Bénard convection model. In addition, analogously to the conservative case (Neumann boundary conditions for the temperature), for which we refer to [@FN5 Section 5.5], the presence of the acoustically hard boundary enhances oscillations of acoustic waves. In particular, the convergence to the target system can be shown only in the weak Lebesgue topology, which entails non--trivial difficulties completely absent whenever the initial data are well--prepared. The benefits obtained by this approach are: - a large class of initial data allowing the system to stay out of thermodynamic equilibrium; - convergence to global--in--time weak solutions of the target problem. More details will be discussed in Section [1.4](#ss:idea-proof){reference-type="ref" reference="ss:idea-proof"} below.

## Primitive Navier--Stokes--Fourier system {#ss:primitive}

We adopt the *Navier--Stokes--Fourier (NSF) system* as the exact -- primitive -- model. The state of the fluid is characterized by its mass density $\varrho= \varrho(t,x)$, the absolute temperature $\vartheta= \vartheta(t,x)$, and the velocity ${\bf u}= {\bf u}(t,x)$. Assuming the Mach number is proportional to a small parameter $\varepsilon>0$, the scaled system of field equations written in the Eulerian coordinate frame reads: $$\begin{aligned} \partial_t \varrho+ {\rm div}_x(\varrho{\bf u}) &= 0, \label{i1}\\ \partial_t (\varrho{\bf u}) + {\rm div}_x(\varrho{\bf u}\otimes {\bf u}) + \frac{1}{\varepsilon^2} \nabla_xp(\varrho, \vartheta) &= {\rm div}_x\mathbb{S}(\vartheta, \mathbb{D}_x{\bf u}) + \frac{1}{\varepsilon} \varrho\nabla_xG, \label{i2} \\ \partial_t (\varrho s(\varrho, \vartheta)) + {\rm div}_x(\varrho s (\varrho, \vartheta) {\bf u}) + {\rm div}_x\left( \frac{ {\bf q} (\vartheta, \nabla_x\vartheta) }{\vartheta} \right) &= \frac{1}{\vartheta} \left( \varepsilon^2 \mathbb{S} : \mathbb{D}_x{\bf u}- \frac{{\bf q} (\vartheta, \nabla_x\vartheta) \cdot \nabla_x\vartheta}{\vartheta} \right). \label{i3} \end{aligned}$$ The quantity $s = s(\varrho, \vartheta)$ in [\[i3\]](#i3){reference-type="eqref" reference="i3"} is the entropy of the system, related to the pressure $p = p(\varrho, \vartheta)$ and the internal energy $e = e(\varrho, \vartheta)$ through Gibbs' equation $$\label{i10} \vartheta D s = D e + p D \left( \frac{1}{\varrho} \right).$$ The viscous stress tensor is given by Newton's rheological law $$\label{i7} \mathbb{S}(\vartheta, \mathbb{D}_x{\bf u}) = 2\mu(\vartheta) \left( \mathbb{D}_x{\bf u}- \frac{1}{3} {\rm div}_x{\bf u}\mathbb{I} \right) + \eta(\vartheta) {\rm div}_x{\bf u}\mathbb{I},$$ where $\mathbb{D}_x{\bf u}= \frac{1}{2}(\nabla_x{\bf u}+ \nabla_x^t {\bf u})$ is the symmetric part of the velocity gradient $\nabla_x{\bf u}$.
The internal energy flux is given by Fourier's law $$\label{i8} {\bf q}(\vartheta, \nabla_x\vartheta) = - \kappa (\vartheta) \nabla_x\vartheta.$$ The potential $G = G(x)$ represents the gravitational force imposed by a massive object placed outside the fluid cavity. Besides the Mach number proportional to $\varepsilon$, there is another scaled quantity, namely $\frac{1}{\varepsilon} \varrho\nabla_xG$, where $\sqrt{\varepsilon}$ is the so--called Froude number. The fluid motion is therefore almost incompressible and mildly stratified, see [@FeNo6A Chapter 4]. The fluid is confined to a *bounded* physical domain $\Omega \subset R^3$ with acoustically hard boundary. Specifically, the velocity satisfies the complete slip boundary conditions, $$\begin{aligned} {\bf u}\cdot {\bf n}|_{\partial \Omega} = 0,\qquad \left( \mathbb{S}(\vartheta, \mathbb{D}_x{\bf u}) \cdot {\bf n} \right) \times {\bf n}|_{\partial \Omega} = 0. \label{i5} \end{aligned}$$ Finally, we impose appropriately scaled Dirichlet boundary conditions for the temperature, $$\label{i6} \vartheta|_{\partial \Omega} = \vartheta_B\equiv \overline{\vartheta} + \varepsilon\mathfrak{T}_B,$$ where $\overline{\vartheta}$ is a positive constant. As we shall see below, condition [\[i6\]](#i6){reference-type="eqref" reference="i6"} corresponds to *ill--prepared* boundary data for the temperature.

## Static solutions {#S}

In accordance with the scaling of [\[i2\]](#i2){reference-type="eqref" reference="i2"}, [\[i3\]](#i3){reference-type="eqref" reference="i3"}, the zero--th order terms in the (formal) asymptotic expansion in powers of the small parameter $\varepsilon$ are determined by the stationary (static) problem $$\label{S1} \nabla_xp(\widetilde\varrho_{\varepsilon}, \overline{\vartheta} + \varepsilon\mathfrak{T}_B ) = \varepsilon\widetilde\varrho_{\varepsilon} \nabla_xG,$$ where $$\label{T_B-harm} \text{$\mathfrak{T}_B$ is the unique harmonic extension of the boundary datum inside $\Omega$.}$$ Observe that equation [\[i1\]](#i1){reference-type="eqref" reference="i1"} conserves the total mass, meaning that $$\frac{\rm d}{\,{\rm d} t }\int_{\Omega} \varrho_\varepsilon(t,\cdot) \ \,{\rm d} {x} = 0.$$ Therefore, given some positive number $\overline{\varrho}>0$, we supplement equation [\[i1\]](#i1){reference-type="eqref" reference="i1"} with the condition that $$\forall\ \varepsilon>0,\qquad \frac{1}{|\Omega|} \int_{\Omega} \varrho_\varepsilon \ \,{\rm d} {x} = \overline{\varrho}.$$ Then, the solution $\widetilde\varrho_{\varepsilon}$ to equation [\[S1\]](#S1){reference-type="eqref" reference="S1"} is normalized so that $$\label{S3} \int_{\Omega} (\overline{\varrho}- \widetilde\varrho_\varepsilon) \ \,{\rm d} {x} = 0,$$ which implies in particular that $$\label{S2} \int_{\Omega} (\varrho_\varepsilon(t, \cdot) - \widetilde\varrho_{\varepsilon}) \ \,{\rm d} {x} = 0 \qquad \mbox{ for any } \ t \geq 0.$$ Finally, as $G = G(x)$ is given and smooth, arguing as in [@DS-F-S-WK], one gathers that $$\label{S4} \| \widetilde\varrho_\varepsilon- \overline{\varrho} \|_{L^\infty(\Omega)} \lesssim \varepsilon.$$ Here and hereafter, the symbol $a \lesssim b$ means there is a constant $c > 0$, *independent* of the scaling parameter $\varepsilon$, such that $a \leq cb$.

## Target Oberbeck--Boussinesq system {#ss:target}

The asymptotic limit for well--prepared data has been identified in [@BelFeiOsch Theorem 4.1]. Our goal is to show that the ill--prepared data give rise to convergence to the same limit in the weak topology.
Specifically, we show $$\begin{aligned} \frac{\varrho_\varepsilon- \overline{\varrho}}{\varepsilon} \to \mathfrak{R}, \qquad \frac{\vartheta_\varepsilon- \overline{\vartheta}}{\varepsilon} \to \mathfrak{T}, \qquad {\bf u}_\varepsilon\to {\bf U}, \nonumber\end{aligned}$$ in a suitable weak topology, where the limit quantities $$\mathfrak{R}, \quad {\bf U}, \quad \mbox{and} \quad \Theta := \mathfrak{T} - \frac{\lambda(\overline{\varrho}, \overline{\vartheta})}{|\Omega|} \int_{\Omega} \mathfrak{T} \ \,{\rm d} {x}$$ solve the *Oberbeck--Boussinesq (OB) system*: $$\begin{aligned} {\rm div}_x{\bf U}&= 0, \nonumber \\ \overline{\varrho} \Big( \partial_t {\bf U}+ {\bf U}\cdot \nabla_x{\bf U}\Big) + \nabla_x\Pi &= {\rm div}_x\mathbb{S}(\overline{\vartheta}, \nabla_x{\bf U}) + \mathfrak{R} \nabla_xG, \nonumber \\ \overline{\varrho} c_p(\overline{\varrho}, \overline{\vartheta} ) \Big( \partial_t \Theta + {\bf U}\cdot \nabla_x\Theta \Big) - \overline{\varrho} \ \overline{\vartheta} \alpha(\overline{\varrho}, \overline{\vartheta} ) {\bf U}\cdot \nabla_xG &= \kappa(\overline{\vartheta}) \Delta_x\Theta, \nonumber \\ \frac{\partial p(\overline{\varrho}, \overline{\vartheta} ) }{\partial \varrho} \nabla_x\mathfrak{R} + \frac{\partial p(\overline{\varrho}, \overline{\vartheta} ) }{\partial \vartheta} \nabla_x\Theta &= \overline{\varrho} \nabla_xG \label{ObBs1} \end{aligned}$$ with the complete slip boundary conditions for the velocity $$\label{i18bis} {\bf U}\cdot {\bf n}|_{\partial \Omega} = 0,\ (\mathbb{S}(\overline{\vartheta}, \mathbb{D}_x{\bf U}) \cdot {\bf n}) \times {\bf n}|_{\partial \Omega} = 0$$ and a *non--local* boundary condition for the temperature deviation $$\label{i14B} \Theta|_{\partial \Omega} = \mathfrak{T}_B - \frac{\lambda(\overline{\varrho}, \overline{\vartheta})}{1 - \lambda(\overline{\varrho}, \overline{\vartheta})} \frac{1}{|\Omega|}\int_{\Omega} \Theta \ \,{\rm d} {x}.$$ Here, the material parameters are: - the thermal expansion coefficient $$\alpha(\overline{\varrho}, \overline{\vartheta} ) \equiv \frac{1}{\overline{\varrho}} \frac{\partial p(\overline{\varrho}, \overline{\vartheta} ) }{\partial \vartheta} \left( \frac{\partial p(\overline{\varrho}, \overline{\vartheta} ) }{\partial \varrho} \right)^{-1};$$ - the specific heat at constant pressure $$c_p (\overline{\varrho}, \overline{\vartheta} ) \equiv \frac{\partial e(\overline{\varrho}, \overline{\vartheta} ) }{\partial \vartheta} + \overline{\varrho}^{-1} \overline{\vartheta} \alpha(\overline{\varrho}, \overline{\vartheta} ) \frac{\partial p(\overline{\varrho}, \overline{\vartheta} ) }{\partial \vartheta};$$ - the coefficient $$\lambda(\overline{\varrho}, \overline{\vartheta}) = \frac{\overline{\vartheta} \alpha (\overline{\varrho}, \overline{\vartheta} ) } {\overline{\varrho} c_p(\overline{\varrho}, \overline{\vartheta}) } \frac{\partial p(\overline{\varrho}, \overline{\vartheta} )}{\partial \vartheta} \ \in (0,1).$$ Our working hypothesis specified below asserts $\alpha > 0$, meaning, in particular, $\partial_\vartheta p > 0$.
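For orientation, it may help to evaluate these quantities in the simplest model case $p = \varrho\vartheta$, $e = \frac{3}{2} \vartheta$ (recorded here purely as an illustration; as discussed in Section 2 below, this particular state equation is not covered by the existence theory we use). Then $\partial_\varrho p = \overline{\vartheta}$, $\partial_\vartheta p = \overline{\varrho}$, $\partial_\vartheta e = \frac{3}{2}$, and the definitions above give $$\alpha(\overline{\varrho}, \overline{\vartheta}) = \frac{1}{\overline{\varrho}}\, \overline{\varrho}\, \frac{1}{\overline{\vartheta}} = \frac{1}{\overline{\vartheta}}, \qquad c_p(\overline{\varrho}, \overline{\vartheta}) = \frac{3}{2} + \overline{\varrho}^{-1}\, \overline{\vartheta}\, \frac{1}{\overline{\vartheta}}\, \overline{\varrho} = \frac{5}{2}, \qquad \lambda(\overline{\varrho}, \overline{\vartheta}) = \frac{\overline{\vartheta}\, \frac{1}{\overline{\vartheta}}}{\overline{\varrho}\, \frac{5}{2}}\, \overline{\varrho} = \frac{2}{5},$$ consistently with the requirement $\lambda \in (0,1)$.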
## Singular limit, main ideas of the proof {#ss:idea-proof}

Our goal is to identify the asymptotic limit of the family $(\varrho_\varepsilon, \vartheta_\varepsilon, {\bf u}_\varepsilon)_{\varepsilon> 0}$ of weak solutions to the scaled (NSF) system emanating from the initial data $$\begin{aligned} \varrho_\varepsilon(0,x) &= \widetilde\varrho_\varepsilon+ \varepsilon\varrho_{0,\varepsilon}, \nonumber \\ \vartheta_\varepsilon(0,x) & = \overline{\vartheta} + \varepsilon\vartheta_{0,\varepsilon}, \nonumber \\ {\bf u}_\varepsilon(0,x) & ={\bf u}_{0,\varepsilon}, \label{wpd} \end{aligned}$$ where we assume that $$\label{wpd1} \| \varrho_{0, \varepsilon} \|_{L^\infty(\Omega)} + \| \vartheta_{0, \varepsilon} \|_{L^\infty(\Omega)} + \| {\bf u}_{0, \varepsilon} \|_{L^\infty(\Omega; R^3)} \lesssim 1.$$ The bounds [\[wpd1\]](#wpd1){reference-type="eqref" reference="wpd1"} characterize the ill--prepared nature of the data and can be slightly relaxed to suitable $L^p$-norms. As announced in the preceding section, our goal is to show that the scaled quantities $$\frac{\varrho_\varepsilon- \overline{\varrho}}{\varepsilon},\ \ \frac{\vartheta_\varepsilon- \overline{\vartheta}}{\varepsilon} \ \mbox{ and }\ {\bf u}_\varepsilon$$ converge weakly to a (weak) solution of the Oberbeck--Boussinesq system [\[ObBs1\]](#ObBs1){reference-type="eqref" reference="ObBs1"}, [\[i18bis\]](#i18bis){reference-type="eqref" reference="i18bis"}, [\[i14B\]](#i14B){reference-type="eqref" reference="i14B"}. The precise result is stated in Section [3](#m){reference-type="ref" reference="m"}, see Theorem [Theorem 3](#TM1){reference-type="ref" reference="TM1"}. Although the target dynamics has already been identified in [@BelFeiOsch], the strategy of the proof of the convergence in the case of ill--prepared data is rather different. More precisely, the main steps of the proof of Theorem [Theorem 3](#TM1){reference-type="ref" reference="TM1"} can be summarised as follows. (i) In Section [2](#L){reference-type="ref" reference="L"}, we introduce the class of weak solutions to the primitive (NSF) system relevant for the Dirichlet boundary conditions for the temperature. In comparison with [@ChauFei], [@FeiNovOpen], the entropy production rate is explicitly specified and controlled by bounds that are uniform with respect to the scaling parameter $\varepsilon> 0$. (ii) In Section [3](#m){reference-type="ref" reference="m"}, we state our main result. (iii) In Section [4](#e){reference-type="ref" reference="e"}, we derive the necessary uniform bounds independent of the scaling parameter $\varepsilon>0$, the main tool being the relative energy inequality. (iv) The weak convergence towards a weak solution of the target Oberbeck--Boussinesq system is performed in Section [5](#w){reference-type="ref" reference="w"}, except for the convective term ${\bf u}_\varepsilon\otimes{\bf u}_\varepsilon$ appearing in [\[i2\]](#i2){reference-type="eqref" reference="i2"}. (v) Finally, in Section [6](#A){reference-type="ref" reference="A"} we tackle the proof of the convergence of the convective term to the expected limit ${\bf U}\otimes{\bf U}$. In general, non--linear compositions do not commute with weak convergence; however, the special structure of the convective term allows us to conclude. Let us give more details about point (v) above. In our case, the Dirichlet boundary conditions imposed on the temperature destroy the structure of the acoustic equation obtained in the case of conservative boundary conditions in [@FN5].
The main reason is that the average of the temperature perturbation $$\int_{\Omega} \frac{\vartheta_\varepsilon- \overline{\vartheta}}{\varepsilon} \ \,{\rm d} {x}$$ is no longer a conserved quantity. This fact is, of course, completely harmless in the case of well--prepared data, where acoustic waves are not created. On the contrary, when considering ill--prepared initial data, such a problem cannot be circumvented. Therefore, for studying propagation of acoustic waves, a cut--off must be performed in a neighbourhood of the boundary $\partial \Omega$. Now, this cut--off operation produces extra driving terms in the associated wave equation, which are "out of scaling" with respect to the low Mach number regime of equations [\[i1\]](#i1){reference-type="eqref" reference="i1"}-[\[i3\]](#i3){reference-type="eqref" reference="i3"}. The situation is similar to those appearing in the multiscale problems considered in [@DS-F-S-WK; @Fan], and can be handled by resorting to a compensated compactness argument, in the spirit of the so--called local method proposed by Lions and Masmoudi [@LIMA6; @LIMA-JMPA], cf. also the survey by Masmoudi [@MAS1] (see *e.g.* [@Fan; @Fa-Ga; @Fa-Za; @F-GV-G-N] and references therein for recent applications of that method in different contexts).

# Weak solutions to the primitive NSF system {#L}

Our analysis is carried out in the framework of the class of weak solutions to the NSF system introduced in [@ChauFei], cf. also [@FeiNovOpen]. **Definition 1** (**Weak solution to the NSF system**). We say that a trio $(\varrho, \vartheta, {\bf u})$ is a *weak solution* of the (scaled) NSF system [\[i1\]](#i1){reference-type="eqref" reference="i1"}--[\[i8\]](#i8){reference-type="eqref" reference="i8"}, with the boundary conditions [\[i5\]](#i5){reference-type="eqref" reference="i5"}, [\[i6\]](#i6){reference-type="eqref" reference="i6"}, and the initial data $$\varrho(0, \cdot) = \varrho_0,\ \varrho{\bf u}(0, \cdot) = \varrho_0 {\bf u}_0,\ \varrho s(0, \cdot) = \varrho_0 s(\varrho_0, \vartheta_0),$$ if the following holds: - the solution belongs to the **regularity class**: $$\begin{aligned} \varrho&\in L^\infty(0,T; L^\gamma(\Omega)) \ \mbox{for some}\ \gamma > 1,\ \varrho\geq 0 \ \mbox{a.a.~in}\ (0,T) \times \Omega, \nonumber \\ {\bf u}&\in L^2(0,T; W^{1,2} (\Omega; R^3)), \ {\bf u}\cdot {\bf n}|_{\partial \Omega} = 0, \nonumber \\ \vartheta^{\beta/2} ,\ \log(\vartheta) &\in L^2(0,T; W^{1,2}(\Omega)) \ \mbox{for some}\ \beta \geq 2,\ \vartheta> 0 \ \mbox{a.a.~in}\ (0,T) \times \Omega, \nonumber \\ (\vartheta- \vartheta_B) &\in L^2(0,T; W^{1,2}_0 (\Omega)); \label{Lw6} \end{aligned}$$ - the **equation of continuity** [\[i1\]](#i1){reference-type="eqref" reference="i1"} is satisfied in the sense of distributions, namely $$\begin{aligned} \int_0^T \int_{\Omega} \Big[ \varrho\partial_t \varphi + \varrho{\bf u}\cdot \nabla_x\varphi \Big] \ \,{\rm d} {x} \,{\rm d} t &= - \int_{\Omega} \varrho_0 \varphi(0, \cdot) \ \,{\rm d} {x} \label{Lw4} \end{aligned}$$ for any $\varphi \in C^1_c([0,T) \times \overline{\Omega} )$; - the **momentum equation** [\[i2\]](#i2){reference-type="eqref" reference="i2"} is satisfied in the sense of distributions, specifically $$\begin{aligned} \int_0^T &\int_{\Omega} \left[ \varrho{\bf u}\cdot \partial_t \boldsymbol{\varphi}+ \varrho{\bf u}\otimes {\bf u}: \nabla_x\boldsymbol{\varphi}+ \frac{1}{\varepsilon^2} p(\varrho, \vartheta) {\rm div}_x\boldsymbol{\varphi}\right] \ \,{\rm d} {x} \,{\rm d} t \nonumber \\ &= \int_0^T \int_{\Omega} \left[
\mathbb{S}(\vartheta, \mathbb{D}_x{\bf u}) : \nabla_x\boldsymbol{\varphi}- \frac{1}{\varepsilon} \varrho\nabla_xG \cdot \boldsymbol{\varphi}\right] \ \,{\rm d} {x} \,{\rm d} t - \int_{\Omega} \varrho_0 {\bf u}_0 \cdot \boldsymbol{\varphi}(0, \cdot) \ \,{\rm d} {x} \label{Lw5} \end{aligned}$$ for any $\boldsymbol{\varphi}\in C^1_c([0, T) \times \overline{\Omega}; R^3)$ such that $\boldsymbol{\varphi}\cdot {\bf n}|_{\partial \Omega} = 0$; - the **entropy balance** [\[i3\]](#i3){reference-type="eqref" reference="i3"} is replaced by $$\begin{aligned} - \int_0^T &\int_{\Omega} \left[ \varrho s(\varrho, \vartheta) \partial_t \varphi + \varrho s (\varrho,\vartheta) {\bf u}\cdot \nabla_x\varphi + \frac{{\bf q} (\vartheta, \nabla_x\vartheta)}{\vartheta} \cdot \nabla_x\varphi \right] \ \,{\rm d} {x} \,{\rm d} t \nonumber \\ &= \int_0^T \int_{\Omega}{ \varphi \ {\rm d}\sigma(t,x)} + \int_{\Omega} \varrho_0 s(\varrho_0, \vartheta_0) \varphi (0, \cdot) \ \,{\rm d} {x} \label{Lw7} \end{aligned}$$ for any $\varphi \in C^1_c([0, T) \times \Omega)$, where the entropy production rate $\sigma$ is a non--negative Radon measure, $\sigma \in \mathcal{M}^+([0,T] \times \overline{\Omega})$, satisfying $$\label{Lw7a} \sigma \geq \frac{1}{\vartheta} \left( \varepsilon^2 \mathbb{S}(\vartheta, \mathbb{D}_x{\bf u}) : \mathbb{D}_x{\bf u}- \frac{{\bf q}(\vartheta,\nabla_x\vartheta)\cdot \nabla_x\vartheta}{\vartheta}\right);$$ - the **ballistic energy balance** $$\begin{aligned} - &\int_0^T \partial_t \psi \int_{\Omega} \left[ \varepsilon^2 \frac{1}{2} \varrho|{\bf u}|^2 + \varrho e(\varrho, \vartheta) - \vartheta_B\varrho s(\varrho, \vartheta) \right] \ \,{\rm d} {x} \,{\rm d} t \nonumber \\ &+ \int_0^T \int_{\overline{\Omega}} \psi \vartheta_B\ {\rm d}\sigma(t,x) \nonumber \\ &\leq \int_0^T \psi \int_{\Omega} \left[ \varepsilon\varrho{\bf u}\cdot \nabla_xG - \varrho s(\varrho, \vartheta) \partial_t \vartheta_B - \varrho s(\varrho, \vartheta) {\bf u}\cdot \nabla_x\vartheta_B- \frac{{\bf q}(\vartheta, \nabla_x\vartheta)}{\vartheta} \cdot \nabla_x\vartheta_B\right] \ \,{\rm d} {x} \,{\rm d} t \nonumber \\ &+ \psi(0) \int_{\Omega} \left[ \frac{1}{2} \varepsilon^2 \varrho_0 |{\bf u}_0|^2 + \varrho_0 e(\varrho_0, \vartheta_0) - \vartheta_B(0, \cdot) \varrho_0 s(\varrho_0, \vartheta_0) \right] \ \,{\rm d} {x} \label{Lw8} \end{aligned}$$ holds true for any $\psi \in C^1_c ([0, T))$, with $\psi \geq 0$, and any smooth extension of the boundary datum $\vartheta_B$. Definition [Definition 1](#DL1){reference-type="ref" reference="DL1"}, in the spirit of [@FeNo6A Chapter 3], provides more information than its counterpart introduced in [@ChauFei], [@FeiNovOpen Chapter 12]. Indeed, the entropy production rate $\sigma$ appears in both [\[Lw7\]](#Lw7){reference-type="eqref" reference="Lw7"}, [\[Lw8\]](#Lw8){reference-type="eqref" reference="Lw8"}; in particular, the ballistic energy dissipation in [\[Lw8\]](#Lw8){reference-type="eqref" reference="Lw8"} controls the entropy dissipation modulo a multiplicative factor related to the boundary temperature $\vartheta_B$. It can be shown that the measure $\sigma$ is identified in the course of the existence proof; in particular, all results obtained in [@FeiNovOpen] remain valid in the new framework.
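To make the last assertion more explicit, note that for the natural extension $\vartheta_B= \overline{\vartheta} + \varepsilon\mathfrak{T}_B$ of the boundary datum [\[i6\]](#i6){reference-type="eqref" reference="i6"} one has $\vartheta_B\geq \overline{\vartheta} - \varepsilon\| \mathfrak{T}_B\|_{L^\infty(\Omega)}$, so that, for any $\psi \geq 0$ and all $\varepsilon>0$ small enough, $$\int_0^T \int_{\overline{\Omega}} \psi\, \vartheta_B\ {\rm d}\sigma(t,x) \geq \left( \overline{\vartheta} - \varepsilon\| \mathfrak{T}_B\|_{L^\infty(\Omega)} \right) \int_0^T \int_{\overline{\Omega}} \psi \ {\rm d}\sigma(t,x) \geq \frac{\overline{\vartheta}}{2} \int_0^T \int_{\overline{\Omega}} \psi \ {\rm d}\sigma(t,x).$$ We record this elementary observation here only as a heuristic guide; the uniform bounds are derived in detail in Section [4](#e){reference-type="ref" reference="e"}.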
## Relative energy inequality In addition to Gibbs' equation [\[i10\]](#i10){reference-type="eqref" reference="i10"}, we impose the hypothesis of thermodynamic stability written in the form $$\label{HTS} \frac{\partial p(\varrho, \vartheta) }{\partial \varrho} > 0,\quad \frac{\partial e(\varrho, \vartheta) }{\partial \vartheta} > 0 \qquad \mbox{ for all }\quad \varrho, \vartheta> 0.$$ Following [@ChauFei], we introduce the scaled *relative energy* $$\begin{aligned} E_\varepsilon&\left( \varrho, \vartheta, {\bf u}\Big| \widetilde\varrho, \widetilde\vartheta, {\widetilde{\bf u}}\right) \nonumber \\ &= \frac{1}{2}\varrho|{\bf u}- {\widetilde{\bf u}}|^2 + \frac{1}{\varepsilon^2} \left[ \varrho e - \widetilde\vartheta\Big(\varrho s - \widetilde\varrho s(\widetilde\varrho, \widetilde\vartheta) \Big)- \Big( e(\widetilde\varrho, \widetilde\vartheta) - \widetilde\vartheta s(\widetilde\varrho, \widetilde\vartheta) + \frac{p(\widetilde\varrho, \widetilde\vartheta)}{\widetilde\varrho} \Big) (\varrho- \widetilde\varrho) - \widetilde\varrho e (\widetilde\varrho, \widetilde\vartheta) \right] . \nonumber\end{aligned}$$ Now, the hypothesis of thermodynamic stability [\[HTS\]](#HTS){reference-type="eqref" reference="HTS"} can be equivalently rephrased as (strict) convexity of the total energy expressed with respect to the conservative entropy variables $$E_\varepsilon\Big( \varrho, S = \varrho s(\varrho, \vartheta), {\bf m}= \varrho{\bf u}\Big) \equiv \frac{1}{2} \frac{|{\bf m}|^2}{\varrho} + \frac{1}{\varepsilon^2} \varrho e(\varrho, S),$$ whereas the relative energy can be written as $$\begin{aligned} E_\varepsilon&\left( \varrho, S, {\bf m}\Big| \widetilde\varrho, \widetilde{S}, \widetilde{{\bf m}}\right) = E_\varepsilon(\varrho, S, {\bf m}) - \left< \partial_{\varrho, S, {\bf m}} E_\varepsilon(\widetilde\varrho, \widetilde{S}, \widetilde{{\bf m}}) ; (\varrho- \widetilde\varrho, S - \widetilde{S}, {\bf m}- \widetilde{{\bf m}}) \right> - E_\varepsilon(\widetilde\varrho, \widetilde{S}, \widetilde{{\bf m}}). 
\nonumber\end{aligned}$$ Finally, as observed in [@ChauFei], any weak solution of the NSF system in the sense of Definition [Definition 1](#DL1){reference-type="ref" reference="DL1"} satisfies the *relative energy inequality* in the form: $$\begin{aligned} &\left[ \int_{\Omega} E_\varepsilon\left(\varrho, \vartheta, {\bf u}\Big| \widetilde\varrho, \widetilde\vartheta, {\widetilde{\bf u}}\right) \ \,{\rm d} {x} \right]_{t = 0}^{t = \tau} + \frac{1}{\varepsilon^2}\int_0^\tau \int_{\overline{\Omega}} \widetilde\vartheta\ {\rm d}\sigma(t,x) \nonumber \\ &\leq - \frac{1}{\varepsilon^2} \int_0^\tau \int_{\Omega} \left( \varrho(s(\varrho, \vartheta) - s(\widetilde\varrho, \widetilde\vartheta)) \partial_t \widetilde\vartheta+ \varrho(s(\varrho,\vartheta) - s(\widetilde\varrho, \widetilde\vartheta)) {\bf u}\cdot \nabla_x\widetilde\vartheta- \frac{\kappa (\vartheta) \nabla_x\vartheta}{\vartheta} \cdot \nabla_x\widetilde\vartheta\right) \ \,{\rm d} {x} \,{\rm d} t \nonumber \\ &- \int_0^\tau \int_{\Omega} \Big[ \varrho({\bf u}- {\widetilde{\bf u}}) \otimes ({\bf u}- {\widetilde{\bf u}}) + \frac{1}{\varepsilon^2} p(\varrho, \vartheta) \mathbb{I} - \mathbb{S}(\vartheta, \mathbb{D}_x{\bf u}) \Big] : \mathbb{D}_x{\widetilde{\bf u}} \ \,{\rm d} {x} \,{\rm d} t \nonumber \\ &+ \int_0^\tau \int_{\Omega} \varrho\left[ \frac{1}{\varepsilon} \nabla_xG - \partial_t {\widetilde{\bf u}}- ({\widetilde{\bf u}}\cdot \nabla_x) {\widetilde{\bf u}}\right] \cdot ({\bf u}- {\widetilde{\bf u}}) \ \,{\rm d} {x} \,{\rm d} t \nonumber \\ &+ \frac{1}{\varepsilon^2} \int_0^\tau \int_{\Omega} \left[ \left( 1 - \frac{\varrho}{\widetilde\varrho} \right) \partial_t p(\widetilde\varrho, \widetilde\vartheta) - \frac{\varrho}{\widetilde\varrho} {\bf u}\cdot \nabla_xp(\widetilde\varrho, \widetilde\vartheta) \right] \ \,{\rm d} {x} \,{\rm d} t \label{L4}\end{aligned}$$ for a.a. $\tau > 0$ and any trio of continuously differentiable functions $(\widetilde\varrho, \widetilde\vartheta, {\widetilde{\bf u}})$ satisfying $$\label{L5} \widetilde\varrho> 0,\quad \widetilde\vartheta> 0,\quad \widetilde\vartheta|_{\partial \Omega} = \vartheta_B, \quad {\widetilde{\bf u}}\cdot {\bf n}|_{\partial \Omega} = 0.$$ ## Constitutive relations {#CR} The existence theory developed in [@ChauFei] requires certain restrictions to be imposed on the constitutive relations (state equations) similar to those introduced in the monograph [@FeNo6A Chapters 1,2]. Specifically, the equation of state reads $$p(\varrho, \vartheta) = p_{\rm m} (\varrho, \vartheta) + p_{\rm rad}(\vartheta),$$ where $p_{\rm m}$ is the pressure of a general *monoatomic* gas, $$\label{con1} p_{\rm m} (\varrho, \vartheta) = \frac{2}{3} \varrho e_{\rm m}(\varrho, \vartheta),$$ enhanced by the radiation pressure $$p_{\rm rad}(\vartheta) = \frac{a}{3} \vartheta^4,\qquad a > 0.$$ As observed in [@DF1], the radiation pressure prevents uncontrolled temperature oscillations in the (hypothetical) vacuum zones and, as such, is indispensable for the global existence theory to be valid. The pressure $p_m$ can be more general in the sense specified in [@FeNo6A Chapter 1, Section 1.4]. Accordingly, the internal energy reads $$e(\varrho, \vartheta) = e_{\rm m}(\varrho, \vartheta) + e_{\rm rad}(\varrho, \vartheta),\qquad e_{\rm rad}(\varrho, \vartheta) = \frac{a}{\varrho} \vartheta^4.$$ The specific form of [\[con1\]](#con1){reference-type="eqref" reference="con1"} is another issue to be discussed, obviously. 
In the context of gases, the natural candidate is provided by the standard Boyle--Mariotte law $p = \varrho\vartheta$. Unfortunately, this *ansatz* fails to provide even the expected energy estimates as long as the boundary temperature is prescribed. To circumvent this difficulty, more physics must be taken into account, cf. [@FeNo6A Chapter 1], as specified here below. - **Gibbs' relation** together with [\[con1\]](#con1){reference-type="eqref" reference="con1"} yield $$p_{\rm m} (\varrho, \vartheta) = \vartheta^{\frac{5}{2}} P \left( \frac{\varrho}{\vartheta^{\frac{3}{2}} } \right)$$ for a certain $P \in C^1[0,\infty)$. Consequently, $$\label{w9} p(\varrho, \vartheta) = \vartheta^{\frac{5}{2}} P \left( \frac{\varrho}{\vartheta^{\frac{3}{2}} } \right) + \frac{a}{3} \vartheta^4,\quad e(\varrho, \vartheta) = \frac{3}{2} \frac{\vartheta^{\frac{5}{2}} }{\varrho} P \left( \frac{\varrho}{\vartheta^{\frac{3}{2}} } \right) + \frac{a}{\varrho} \vartheta^4, \qquad a > 0.$$ - **Hypothesis of thermodynamic stability** [\[HTS\]](#HTS){reference-type="eqref" reference="HTS"} expressed in terms of $P$ gives rise to $$\label{w10} P(0) = 0,\qquad P'(Z) > 0 \ \mbox{ for }\ Z \geq 0,\qquad \frac{ \frac{5}{3} P(Z) - P'(Z) Z }{Z} > 0 \ \mbox{ for }\ Z > 0.$$ In particular, the function $Z \mapsto P(Z)/ Z^{\frac{5}{3}}$ is decreasing, and we suppose $$\label{w11} \lim_{Z \to \infty} \frac{ P(Z) }{Z^{\frac{5}{3}}} = p_\infty > 0.$$ - The associated **entropy** takes the form $$\label{w12} s(\varrho, \vartheta) = s_{\rm m}(\varrho, \vartheta) + s_{\rm rad}(\varrho, \vartheta),\qquad s_{\rm m} (\varrho, \vartheta) = \mathcal{S} \left( \frac{\varrho}{\vartheta^{\frac{3}{2}} } \right),\qquad s_{\rm rad}(\varrho, \vartheta) = \frac{4a}{3} \frac{\vartheta^3}{\varrho},$$ where one has $$\label{w13} \mathcal{S}'(Z) = -\frac{3}{2} \frac{ \frac{5}{3} P(Z) - P'(Z) Z }{Z^2} < 0.$$ Finally, we impose the **Third law of thermodynamics**, cf. Belgiorno [@BEL1], [@BEL2], requiring the entropy to vanish when the absolute temperature approaches zero, namely $$\label{w14} \lim_{Z \to \infty} \mathcal{S}(Z) = 0.$$ As for the transport coefficients, we suppose they are continuously differentiable functions of the temperature satisfying $$\begin{aligned} 0 < \underline{\mu}(1 + \vartheta) &\leq \mu(\vartheta),\qquad |\mu'(\vartheta)| \leq \overline{\mu}, \nonumber \\ 0 &\leq \underline{\eta} (1 + \vartheta) \leq \eta (\vartheta) \leq \overline{\eta}(1 + \vartheta), \nonumber \\ 0 < \underline{\kappa} (1 + \vartheta^\beta) &\leq \kappa (\vartheta) \leq \overline{\kappa}(1 + \vartheta^\beta), \quad \mbox{ where }\ \beta > 6. \label{w16}\end{aligned}$$ As a consequence of the hypotheses [\[w10\]](#w10){reference-type="eqref" reference="w10"}, [\[w11\]](#w11){reference-type="eqref" reference="w11"}, [\[w13\]](#w13){reference-type="eqref" reference="w13"}, and [\[w14\]](#w14){reference-type="eqref" reference="w14"}, we get the following estimates, for which we refer to [@FeNo6A Chapter 3, Section 3.2]: $$\begin{aligned} \varrho^{\frac{5}{3}} + \vartheta^4 \lesssim \varrho e(\varrho, \vartheta) &\lesssim 1+ \varrho^{\frac{5}{3}} + \vartheta^4, \label{L5b} \\ s_{\rm m}(\varrho, \vartheta) &\lesssim \left( 1 + |\log(\varrho)| + [\log(\vartheta)]^+ \right). 
\label{L5a}\end{aligned}$$ ## Coercivity of the dissipative stress In order to obtain uniform energy bounds, we need to control the $L^2$-norm of ${\bf u}$ in terms of the dissipation potential $$\begin{aligned} \int_{\Omega} \frac{1}{\vartheta} \mathbb{S}(\vartheta, \mathbb{D}_x{\bf u}): \mathbb{D}_x{\bf u} \ \,{\rm d} {x} &\approx \int_{\Omega} \frac{\mu(\vartheta)}{\vartheta} |\mathbb{D}_x{\bf u}- \frac{1}{3} {\rm div}_x{\bf u}\mathbb{I} |^2 \ \,{\rm d} {x} + \int_{\Omega} \frac{\eta(\vartheta)}{\vartheta} |{\rm div}_x{\bf u}|^2 \ \,{\rm d} {x} \nonumber \\ &\gtrsim \underline{\mu} \int_{\Omega} |\mathbb{D}_x{\bf u}- \frac{1}{3} {\rm div}_x{\bf u}\mathbb{I} |^2 \ \,{\rm d} {x} + \underline{\eta} \int_{\Omega} |{\rm div}_x{\bf u}|^2 \ \,{\rm d} {x}. \label{est:coerc}\end{aligned}$$ This can be done only far away from the set $$\mathcal{N} = \left\{ {\bf w} \in W^{1,2}(\Omega; R^3) \Big| \ \underline{\mu} \left( \mathbb{D}_x{\bf w} - \frac{1}{3} {\rm div}_x{\bf w} \mathbb{I} \right) = 0,\quad \underline{\eta} {\rm div}_x{\bf w} = 0 \ \mbox{ in }\ \Omega, \qquad {\bf w} \cdot {\bf n}|_{\partial \Omega} = 0 \right\}.$$ As a matter of fact, let $\Pi_{\mathcal{N}}$ denote a projection onto the space $\mathcal{N}$. Then Korn--Poincaré inequality, see *e.g.* Lewintan, Müller, Neff [@LewMulNef], gives $$\label{KoPo} \| {\bf v} - \Pi_{\mathcal{N}} {\bf v} \|_{W^{1,2}(\Omega; R^3)} \stackrel{<}{\sim}\left( \underline{\mu}\left\| \mathbb{D}_x{\bf v} - \frac{1}{3} {\rm div}_x{\bf v} \mathbb{I} \right\|_{L^2(\Omega; R^{3 \times 3})} + \underline{\eta} \| {\rm div}_x{\bf v} \|_{L^2(\Omega)} \right)$$ for any ${\bf v} \in W^{1,2}(\Omega;R^3)$, ${\bf v} \cdot {\bf n}|_{\partial \Omega} = 0$. Notice that, from Schirra [@Schi Proposition 2.5], it follows that, if $\underline{\mu} > 0$ and $\underline{\eta} = 0$, the set $\mathcal{N}$ contains all conformal Killing vectors with vanishing normal trace. If instead $\underline{\mu}>0$ and $\underline{\eta} > 0$, then the set $\mathcal{N}$ consists of all *rigid motions* ${\bf w} = \boldsymbol{\omega}\times x + {\bf a}$ with zero normal trace. In this latter case, $\mathcal{N} \ne \{ 0 \}$ if and only if $\Omega$ is rotationally symmetric. Finally, we point out that, if the velocity field ${\bf u}$ satisfies the complete slip boundary conditions [\[i5\]](#i5){reference-type="eqref" reference="i5"}, uniform energy estimates can be obtained provided the harmonic extension $\vartheta_B$ of the boundary condition [\[i6\]](#i6){reference-type="eqref" reference="i6"} inside $\Omega$ satisfies $$\label{COC} \nabla_x\vartheta_B\cdot {\bf w} = 0 \qquad \mbox{ for any }\quad {\bf w} \in \mathcal{N}.$$ We refer to Section [4.3](#EE){reference-type="ref" reference="EE"} below for more details. Note that [\[COC\]](#COC){reference-type="eqref" reference="COC"} is satisfied, for instance, in the case when $\Omega$ is an annulus bounded by two concentric spheres, $$\Omega = \left\{ x \in R^3 \ \Big|\ 0 < r_1 < |x| < r_2 \right\}\qquad \mbox{ and } \qquad \vartheta_B= \Theta_1 \ \mbox{ if } \ |x| = r_1,\quad \vartheta_B= \Theta_2 \ \mbox{ if }\ |x| = r_2,$$ provided $\underline{\mu}, \underline{\eta} > 0$, and $\Theta_1$, $\Theta_2$ are positive constants. 
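For the reader's convenience, let us sketch why [\[COC\]](#COC){reference-type="eqref" reference="COC"} holds in this radially symmetric configuration; the constants $c_1$, $c_2$ below are introduced only for this illustration. Any rigid motion with vanishing normal trace on the two spheres must have zero translation part, since $(\boldsymbol{\omega}\times x + {\bf a}) \cdot x = {\bf a} \cdot x$ cannot vanish on a whole sphere unless ${\bf a} = 0$; hence $\mathcal{N} = \{ \boldsymbol{\omega}\times x \}$. On the other hand, the harmonic extension of the boundary data is radially symmetric, $$\vartheta_B(x) = c_1 + \frac{c_2}{|x|}, \qquad c_1 + \frac{c_2}{r_i} = \Theta_i, \quad i = 1,2, \qquad \nabla_x\vartheta_B(x) = - \frac{c_2}{|x|^3}\, x .$$ Consequently, $$\nabla_x\vartheta_B\cdot {\bf w} = - \frac{c_2}{|x|^3}\, x \cdot (\boldsymbol{\omega}\times x) = 0 \qquad \mbox{ for any }\quad {\bf w} = \boldsymbol{\omega}\times x \in \mathcal{N},$$ which is exactly [\[COC\]](#COC){reference-type="eqref" reference="COC"}.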
## Existence of global in time weak solutions Under the hypotheses [\[w9\]](#w9){reference-type="eqref" reference="w9"}--[\[w16\]](#w16){reference-type="eqref" reference="w16"} and [\[COC\]](#COC){reference-type="eqref" reference="COC"}, the NSF system [\[i1\]](#i1){reference-type="eqref" reference="i1"}--[\[i8\]](#i8){reference-type="eqref" reference="i8"}, together with the boundary conditions [\[i5\]](#i5){reference-type="eqref" reference="i5"}, [\[i6\]](#i6){reference-type="eqref" reference="i6"}, admits a global in time weak solution in the sense of Definition [Definition 1](#DL1){reference-type="ref" reference="DL1"} for any finite energy initial datum and any sufficiently smooth boundary datum $\vartheta_B$, see [@FeiNovOpen Chapter 12, Theorem 18]. **Remark 2**. Strictly speaking, the existence theory in [@FeiNovOpen] is formulated for the Dirichlet boundary conditions for the velocity. The extension to the complete slip conditions [\[i5\]](#i5){reference-type="eqref" reference="i5"} is straightforward. In addition, the weak solutions in the sense of Definition [Definition 1](#DL1){reference-type="ref" reference="DL1"} enjoy the *weak--strong uniqueness* property; they coincide with the strong solution associated to the same initial/boundary data as long as the latter exists, see [@FeGwKwSG Section 4]. # Main result: the singular limit problem {#m} Having collected all preliminary material, we are able to state the main result of this paper. **Theorem 3** (**Singular limit**). *Let $\Omega \subset R^3$ be a bounded domain of class $C^3$. Let the constitutive relations for $p$, $e$, $s$, $\mu$, $\lambda$, and $\kappa$ comply with hypotheses [\[con1\]](#con1){reference-type="eqref" reference="con1"}--[\[w16\]](#w16){reference-type="eqref" reference="w16"}. Let $(\varrho_\varepsilon, {\bf u}_\varepsilon, \vartheta_\varepsilon)_{\varepsilon> 0}$ be a family of weak solutions to the Navier--Stokes--Fourier system, in the sense specified in Definition [Definition 1](#DL1){reference-type="ref" reference="DL1"}, emanating from the initial data $$\begin{aligned} \varrho_\varepsilon(0, \cdot) = \overline{\varrho} + \varepsilon\varrho_{0,\varepsilon},\ \int_{\Omega} \varrho_{0,\varepsilon} \ \,{\rm d} {x} = 0,\ \overline{\varrho} > 0, \ \varrho_{0,\varepsilon} \to \mathfrak{R}_0 \ \mbox{weakly-(*) in}\ L^\infty (\Omega), \nonumber \\ \vartheta_\varepsilon(0, \cdot) = \overline{\vartheta} + \varepsilon\vartheta_{0,\varepsilon},\ \overline{\vartheta} > 0, \int_{\Omega} \vartheta_{0,\varepsilon} \ \,{\rm d} {x} = 0,\ \vartheta_{0,\varepsilon} \to \mathfrak{T}_0 \ \mbox{weakly-(*) in}\ L^\infty (\Omega), \nonumber \\ {\bf u}_\varepsilon(0, \cdot) = {\bf u}_{0,\varepsilon} \to {\bf u}_0 \ \mbox{weakly-(*) in}\ L^\infty(\Omega; R^3).
\label{dt1} \end{aligned}$$ In addition, the Dirichlet boundary conditions for the temperature are given, $$\label{dt2} \vartheta_\varepsilon|_{\partial \Omega} = \overline{\vartheta} + \varepsilon\mathfrak{T}_B,$$ where the harmonic extension of $\mathfrak{T}_B$ satisfies the coercivity hypothesis [\[COC\]](#COC){reference-type="eqref" reference="COC"}.* *Then there is a subsequence (not relabelled) such that:* *$$\begin{aligned} \frac{\varrho_\varepsilon- \overline{\varrho}}{\varepsilon} &\to \mathfrak{R} \ \mbox{weakly-(*) in} \ L^\infty(0,T; L^{\frac 5 3}(\Omega)), \nonumber \\ \frac{\vartheta_\varepsilon- \overline{\vartheta}}{\varepsilon} &\to \mathfrak{T} \ \mbox{weakly-(*) in} \ L^\infty(0,T; L^{2}(\Omega)), \ \mbox{and weakly in}\ L^2(0,T; W^{1,2}(\Omega)), \nonumber \\ {\bf u}_\varepsilon&\to {\bf U}\ \mbox{weakly in}\ L^2((0,T) \times \Omega; R^3), \label{dt3}\end{aligned}$$ where $\mathfrak{R}$, ${\bf U}$, and $$\Theta := \mathfrak{T} - \frac{\lambda(\overline{\varrho}, \overline{\vartheta})}{|\Omega|} \int_{\Omega} \mathfrak{T} \ \,{\rm d} {x}$$ represent a weak solution of the Oberbeck--Boussinesq system [\[ObBs1\]](#ObBs1){reference-type="eqref" reference="ObBs1"}--[\[i14B\]](#i14B){reference-type="eqref" reference="i14B"}, emanating from the initial data $$\begin{aligned} {\bf U}(0, \cdot) &= {\bf H}[{\bf u}_0], \nonumber \\ \overline{\varrho} c_p (\overline{\varrho}, \overline{\vartheta}) \Theta(0, \cdot) &= \overline{\vartheta} \left( \frac{\partial s (\overline{\varrho}, \overline{\vartheta})}{\partial \varrho} \mathfrak{R}_0 + \frac{\partial s (\overline{\varrho}, \overline{\vartheta})}{\partial \vartheta} \mathfrak{T}_0 + \alpha (\overline{\varrho}, \overline{\vartheta}) G \right), \label{dt4}\end{aligned}$$ where ${\bf H}$ denotes the Helmholtz projection onto the space of solenoidal functions.* The rest of the paper is devoted to the proof of Theorem [Theorem 3](#TM1){reference-type="ref" reference="TM1"}. As already mentioned, the next section is devoted to the derivation of suitable uniform bounds for the family of weak solutions $(\varrho_\varepsilon, \vartheta_\varepsilon, {\bf u}_\varepsilon)_{\varepsilon> 0}$ to the scaled NSF system. In Section [5](#w){reference-type="ref" reference="w"}, we establish weak convergence properties and compute the limit of most of the terms appearing in equations [\[i1\]](#i1){reference-type="eqref" reference="i1"}-[\[i3\]](#i3){reference-type="eqref" reference="i3"}. Finally, in Section [6](#A){reference-type="ref" reference="A"} we compute the limit of the convective term appearing in the momentum equation [\[i2\]](#i2){reference-type="eqref" reference="i2"}, thus completing the proof to Theorem [Theorem 3](#TM1){reference-type="ref" reference="TM1"}. # Uniform bounds {#e} Given a global--in--time weak solution $(\varrho_\varepsilon, \vartheta_\varepsilon, {\bf u}_\varepsilon)_{\varepsilon> 0}$ to the scaled NSF system, our first goal is to establish uniform bounds, which are independent of the scaling parameter $\varepsilon$. ## Mass conservation As ${\bf u}\cdot {\bf n}|_{\partial \Omega} = 0$, the total mass of the fluid is a conserved quantity. Consequently, in accordance with [\[S2\]](#S2){reference-type="eqref" reference="S2"}, [\[S3\]](#S3){reference-type="eqref" reference="S3"}, one has $$\label{ee1} \int_{\Omega} \varrho_\varepsilon(t,\cdot) \ \,{\rm d} {x} = \int_{\Omega} \varrho_{0,\varepsilon} \ \,{\rm d} {x} = \overline{\varrho}|\Omega|.$$ ## Essential vs. 
residual component Following [@FeNo6A], we consider the *essential* and *residual* components of a measurable function $g=g(t,x)$. For a compact set $$K \subset \left\{ (\varrho, \vartheta) \in \mathbb{R}^2 \ \Big| \ \varrho> 0, \vartheta> 0 \right\}$$ and $\varepsilon> 0$, we introduce the functions $$[g]_{\rm ess} = g \mathds{1}_{(\varrho_\varepsilon, \vartheta_\varepsilon) \in K},\ [g]_{\rm res} = g - [g]_{\rm ess} = g \mathds{1}_{(\varrho_\varepsilon, \vartheta_\varepsilon) \in \mathbb{R}^2 \setminus K},$$ where $\mathds{1}_A$ denotes the characteristic function of a set $A\subset\mathbb{R}^2$. More specifically, we fix $\mathcal{U} (\overline{\varrho}, \overline{\vartheta} )\subset (0, \infty)^2$ to be an open neighborhood of $(\overline{\varrho}, \overline{\vartheta})$. Note that $\mathcal{U}$ contains the range of $(\widetilde\varrho_\varepsilon, \overline{\vartheta} + \varepsilon\mathfrak{T}_B)$ for all $\varepsilon> 0$ small enough. Then, the set $K$ is chosen so that $$\label{ee2} K = \overline{\mathcal{U}(\overline{\varrho}, \overline{\vartheta})}. %\subset (0, \infty)^2,\ \mathcal{U} (\Ov{\vr}, \Ov{\vt} ) \ \mbox{- an open neighborhood of}\ (\Ov{\vr}, \Ov{\vt}).$$ Next, we record the following bounds shown in [@FeNo6A Chapter 5, Lemma 5.1]: $$\begin{aligned} \left[E_{\varepsilon} \left( \varrho_\varepsilon, \vartheta_\varepsilon, {\bf u}_\varepsilon\Big| \widetilde\varrho, \widetilde\vartheta, {\widetilde{\bf u}}\right)\right]_{\rm ess} &\geq C \left[ \frac{ |\varrho_\varepsilon- \widetilde\varrho|^2 }{\varepsilon^2} + \frac{ |\vartheta_\varepsilon- \widetilde\vartheta|^2 }{\varepsilon^2} + |{\bf u}_\varepsilon- {\widetilde{\bf u}}|^2 \right]_{\rm ess}, \label{BB1} \\ \left[ E_{\varepsilon} \left( \varrho_\varepsilon, \vartheta_\varepsilon, {\bf u}_\varepsilon\Big| \widetilde\varrho, \widetilde\vartheta, {\widetilde{\bf u}}\right) \right]_{\rm res} &\geq C \left[ \frac{1}{\varepsilon^2} + \frac{1}{\varepsilon^2} \varrho_\varepsilon e(\varrho_\varepsilon, \vartheta_\varepsilon) + \frac{1}{\varepsilon^2} \varrho_\varepsilon|s(\varrho_\varepsilon, \vartheta_\varepsilon)| + \varrho_\varepsilon|{\bf u}_\varepsilon|^2 \right]_{\rm res}, \label{ee3} \end{aligned}$$ whenever $K$ is given by [\[ee2\]](#ee2){reference-type="eqref" reference="ee2"} and $(\widetilde\varrho, \widetilde\vartheta) \in \mathcal{U}(\overline{\varrho}, \overline{\vartheta})$. 
The constant $C$ depends on $K$ and on the distance $$\sup_{t,x} {\rm dist} \left[ (\widetilde\varrho(t,x), \widetilde\vartheta(t,x) ) ; \partial K \right].$$ ## Energy estimates {#EE} Using the ansatz $\widetilde\varrho= \widetilde\varrho_\varepsilon$, $\widetilde\vartheta= \overline{\vartheta} + \varepsilon\mathfrak{T}_B$, ${\widetilde{\bf u}}= 0$ in the relative energy inequality [\[L4\]](#L4){reference-type="eqref" reference="L4"}, we obtain $$\begin{aligned} &\left[ \int_{\Omega} E_\varepsilon\left(\varrho_\varepsilon, \vartheta_\varepsilon, {\bf u}_\varepsilon\Big| \widetilde\varrho_\varepsilon, \overline{\vartheta} + \varepsilon\mathfrak{T}_B, 0 \right) \ \,{\rm d} {x} \right]_{t = 0}^{t = \tau} + \frac{1}{\varepsilon^2} \int_0^\tau \int_{\overline{\Omega}} (\overline{\vartheta} + \varepsilon\mathfrak{T}_B) {\rm d}\sigma_\varepsilon(t,x) \nonumber \\ &\leq - \frac{1}{\varepsilon} \int_0^\tau \int_{\Omega_\varepsilon} \left( \varrho_\varepsilon\Big(s(\varrho_\varepsilon, \vartheta_\varepsilon) - s(\widetilde\varrho_\varepsilon, \overline{\vartheta} + \varepsilon\mathfrak{T}_B) \Big) {\bf u}_\varepsilon\cdot \nabla_x\mathfrak{T}_B - \left( \frac{\kappa (\vartheta_\varepsilon) \nabla_x\vartheta_\varepsilon}{\vartheta_\varepsilon} \right) \cdot \nabla_x\mathfrak{T}_B \right) \ \,{\rm d} {x} \,{\rm d} t \nonumber \\ &+ \int_0^\tau \int_{\Omega} \frac{1}{\varepsilon} \varrho_\varepsilon\nabla_xG \cdot {\bf u}_\varepsilon \ \,{\rm d} {x} \,{\rm d} t - \frac{1}{\varepsilon^2} \int_0^\tau \int_{\Omega} \frac{\varrho_\varepsilon}{\widetilde\varrho_\varepsilon} {\bf u}_\varepsilon\cdot \nabla_xp(\widetilde\varrho_\varepsilon, \overline{\vartheta} + \varepsilon\mathfrak{T}_B) \ \,{\rm d} {x} \,{\rm d} t , \label{e1}\end{aligned}$$ where, according to [\[Lw7a\]](#Lw7a){reference-type="eqref" reference="Lw7a"}, one has $$\label{e2} \sigma_\varepsilon\geq \frac{1}{\vartheta_\varepsilon} \left( \varepsilon^2 \mathbb{S}(\vartheta_\varepsilon, \mathbb{D}_x{\bf u}_\varepsilon) : \mathbb{D}_x{\bf u}_\varepsilon+ \frac{\kappa (\vartheta_\varepsilon) |\nabla_x\vartheta_\varepsilon|^2}{\vartheta_\varepsilon}\right).$$ Observe that, owing to [\[S1\]](#S1){reference-type="eqref" reference="S1"} and [\[T_B-harm\]](#T_B-harm){reference-type="eqref" reference="T_B-harm"}, we get $$\label{e3} \int_0^\tau \int_{\Omega} \frac{1}{\varepsilon} \varrho_\varepsilon\nabla_xG \cdot {\bf u}_\varepsilon \ \,{\rm d} {x} \,{\rm d} t - \frac{1}{\varepsilon^2} \int_0^\tau \int_{\Omega_\varepsilon} \frac{\varrho_\varepsilon}{\widetilde\varrho_\varepsilon} {\bf u}_\varepsilon\cdot \nabla_xp(\widetilde\varrho_\varepsilon, \overline{\vartheta} + \varepsilon\mathfrak{T}_B) \ \,{\rm d} {x} \,{\rm d} t = 0.$$ Thus, inequality [\[e1\]](#e1){reference-type="eqref" reference="e1"} reduces to $$\begin{aligned} &\left[ \int_{\Omega} E_\varepsilon\left(\varrho_\varepsilon, \vartheta_\varepsilon, {\bf u}_\varepsilon\Big| \widetilde\varrho_\varepsilon, \overline{\vartheta} + \varepsilon\mathfrak{T}_B, 0 \right) \ \,{\rm d} {x} \right]_{t = 0}^{t = \tau} + \frac{1}{\varepsilon^2} \int_0^\tau \int_{\overline{\Omega}} (\overline{\vartheta} + \varepsilon\mathfrak{T}_B) {\rm d}\sigma_\varepsilon(t,x) \nonumber \\ &\leq - \frac{1}{\varepsilon} \int_0^\tau \int_{\Omega_\varepsilon} \left( \varrho_\varepsilon\Big(s(\varrho_\varepsilon, \vartheta_\varepsilon) - s(\widetilde\varrho_\varepsilon, \overline{\vartheta} + \varepsilon\mathfrak{T}_B) \Big) {\bf u}_\varepsilon\cdot \nabla_x\mathfrak{T}_B - \left( \frac{\kappa (\vartheta_\varepsilon) 
\nabla_x\vartheta_\varepsilon}{\vartheta_\varepsilon} \right) \cdot \nabla_x\mathfrak{T}_B \right) \ \,{\rm d} {x} \,{\rm d} t . \label{e4}\end{aligned}$$ We are now going to bound each term appearing in the right--hand side of the previous relation. ### Entropy-dependent term Let us start by considering the entropy-dependent term. We resort to the decomposition into essential and residual sets to write $$\begin{aligned} \frac{1}{\varepsilon} &\int_{\Omega} \left| \varrho_\varepsilon(s(\varrho_\varepsilon, \vartheta_\varepsilon) - s(\widetilde\varrho_\varepsilon, \overline{\vartheta} + \varepsilon\mathfrak{T}_B)) {\bf u}_\varepsilon\cdot \nabla_x\mathfrak{T}_B \right| \ \,{\rm d} {x} \nonumber \\ &\lesssim \frac{1}{\varepsilon} \int_{\Omega} \left| \left[ \varrho_\varepsilon(s(\varrho_\varepsilon, \vartheta_\varepsilon) - s(\widetilde\varrho_\varepsilon, \overline{\vartheta} + \varepsilon\mathfrak{T}_B)){\bf u}_\varepsilon\right]_{\rm ess} \right| \ \,{\rm d} {x} \nonumber \\ &+ \frac{1}{\varepsilon} \int_{\Omega} \left| \left[ \varrho_\varepsilon(s(\varrho_\varepsilon, \vartheta_\varepsilon) - s(\widetilde\varrho_\varepsilon, \overline{\vartheta} + \varepsilon\mathfrak{T}_B)) {\bf u}_\varepsilon\cdot \nabla_x\mathfrak{T}_B \right]_{\rm res} \right| \ \,{\rm d} {x}. \nonumber\end{aligned}$$ As for the essential part term, we have $$\begin{aligned} \frac{1}{\varepsilon} &\int_{\Omega} \left| \left[ \varrho_\varepsilon(s(\varrho_\varepsilon, \vartheta_\varepsilon) - s(\widetilde\varrho_\varepsilon, \overline{\vartheta} + \varepsilon\mathfrak{T}_B)) {\bf u}_\varepsilon\right]_{\rm ess} \right| \ \,{\rm d} {x} \nonumber \\ &\lesssim \frac{1}{\varepsilon^2} \int_{\Omega} \left| \left[ (s(\varrho_\varepsilon, \vartheta_\varepsilon) - s(\widetilde\varrho_\varepsilon, \overline{\vartheta} + \varepsilon\mathfrak{T}_B)) \right]_{\rm ess} \right|^2 \ \,{\rm d} {x} + \int_{\Omega} \varrho_\varepsilon|{\bf u}_\varepsilon|^2 \ \,{\rm d} {x} \nonumber \\ &\lesssim \int_{\Omega} E_\varepsilon\left( \varrho_\varepsilon, \vartheta_\varepsilon, {\bf u}_\varepsilon\Big| \widetilde\varrho_\varepsilon, \overline{\vartheta} + \varepsilon\mathfrak{T}_B, 0 \right) \ \,{\rm d} {x} \label{e5}\end{aligned}$$ by virtue of [\[S4\]](#S4){reference-type="eqref" reference="S4"} and [\[BB1\]](#BB1){reference-type="eqref" reference="BB1"}, while the residual part term can be controlled as $$\begin{aligned} \frac{1}{\varepsilon} &\int_{\Omega} \left| \left[ \varrho_\varepsilon(s(\varrho_\varepsilon, \vartheta_\varepsilon) - s(\widetilde\varrho_\varepsilon, \overline{\vartheta} + \varepsilon\mathfrak{T}_B)) {\bf u}_\varepsilon\cdot \nabla_x\mathfrak{T}_B \right]_{\rm res} \right| \ \,{\rm d} {x} \nonumber \\ &\lesssim \frac{1}{\varepsilon} \int_{\Omega} \left[ \varrho_\varepsilon|{\bf u}_\varepsilon| \right]_{\rm res} \ \,{\rm d} {x} + \frac{1}{\varepsilon} \int_{\Omega} \left[ \varrho_\varepsilon s_{\rm m}(\varrho_\varepsilon, \vartheta_\varepsilon) |{\bf u}_\varepsilon| \right]_{\rm res} \ \,{\rm d} {x} + \frac{1}{\varepsilon} \int_{\Omega} \left[ \vartheta_\varepsilon^3 |{\bf u}_\varepsilon\cdot \nabla_x\mathfrak{T}_B | \right]_{\rm res} \ \,{\rm d} {x}. 
\nonumber \end{aligned}$$ In view of [\[ee1\]](#ee1){reference-type="eqref" reference="ee1"}, the total mass is conserved and we may infer that $$\label{e6} \frac{1}{\varepsilon} \int_{\Omega} \left[ \varrho_\varepsilon|{\bf u}_\varepsilon| \right]_{\rm res} \ \,{\rm d} {x} \lesssim \frac{1}{\varepsilon^2} \int_{\Omega} [\varrho_\varepsilon]_{\rm res} \ \,{\rm d} {x} + \int_{\Omega} \varrho_\varepsilon|{\bf u}_\varepsilon|^2 \ \,{\rm d} {x} \lesssim \int_{\Omega} E_\varepsilon\left( \varrho_\varepsilon, \vartheta_\varepsilon, {\bf u}_\varepsilon\Big| \widetilde\varrho_\varepsilon, \overline{\vartheta} + \varepsilon\mathfrak{T}_B, 0 \right) \ \,{\rm d} {x}.$$ Furthermore, by virtue of [\[L5b\]](#L5b){reference-type="eqref" reference="L5b"} and [\[L5a\]](#L5a){reference-type="eqref" reference="L5a"}, we have $$\begin{aligned} \frac{1}{\varepsilon} \int_{\Omega} \left[ \varrho_\varepsilon s_{\rm m}(\varrho_\varepsilon, \vartheta_\varepsilon) |{\bf u}_\varepsilon| \right]_{\rm res} \ \,{\rm d} {x} &\lesssim \frac{1}{\varepsilon^2} \int_{\Omega} \left[ \varrho_\varepsilon s^2_{\rm m}(\varrho_\varepsilon, \vartheta_\varepsilon) \right]_{\rm res} \ \,{\rm d} {x} + \int_{\Omega} \varrho_\varepsilon|{\bf u}_\varepsilon|^2 \ \,{\rm d} {x} \nonumber \\ &\lesssim \int_{\Omega} E_\varepsilon\left( \varrho_\varepsilon, \vartheta_\varepsilon, {\bf u}_\varepsilon\Big| \widetilde\varrho_\varepsilon, \overline{\vartheta} + \varepsilon\mathfrak{T}_B, 0 \right) \ \,{\rm d} {x}. \label{e7} \end{aligned}$$ Finally, we introduce the following refinement of the residual component of a function: for a given $\theta_*\gg1$ to be fixed later, we define $$%[g]_{\rm res} = [g]_{\rm res,small} + [g]_{\rm res,large}, \qquad [g]_{\rm res,S} = g \mathds{1}_{(\varrho_\varepsilon, \vartheta_\varepsilon) \in \mathbb{R}^2 \setminus K, \vartheta_\varepsilon\leq \theta_*}, \qquad [g]_{\rm res,L} = g \mathds{1}_{(\varrho_\varepsilon, \vartheta_\varepsilon) \in \mathbb{R}^2 \setminus K, \vartheta_\varepsilon\geq \theta_*},$$ where the index $S$ stands for "small" and the index $L$ for "large". Using that $$[g]_{\rm res} = [g]_{\rm res,S} + [g]_{\rm res,L},$$ by simple computations and Cauchy--Schwarz inequality we get $$\begin{aligned} \frac{1}{\varepsilon} \int_{\Omega} \left[ \vartheta_\varepsilon^3 |{\bf u}_\varepsilon\cdot \nabla_x\mathfrak{T}_B| \right]_{\rm res} \ \,{\rm d} {x} &\lesssim 2\delta \int_{\Omega} |{\bf u}_\varepsilon\cdot \nabla_x\mathfrak{T}_B |^2 \ \,{\rm d} {x} \nonumber \\ &\quad + \frac{C(\delta,\theta_*)}{\varepsilon^2}\int_{\Omega} [1]_{\rm res} \ \,{\rm d} {x} + \frac{ C(\delta) }{\varepsilon^2} \int_{\Omega} [\vartheta_\varepsilon^6]_{\rm res,L} \ \,{\rm d} {x} \nonumber \\ &\lesssim 2\delta \int_{\Omega} |{\bf u}_\varepsilon\cdot \nabla_x\mathfrak{T}_B |^2 \ \,{\rm d} {x} \nonumber \\ &\quad + C(\delta,\theta_*) \int_{\Omega} E_\varepsilon\left( \varrho_\varepsilon, \vartheta_\varepsilon, {\bf u}_\varepsilon\Big| \widetilde\varrho_\varepsilon, \overline{\vartheta} + \varepsilon\mathfrak{T}_B, 0 \right) \ \,{\rm d} {x} \nonumber \\ &\qquad\qquad+ \frac{ C(\delta) }{\varepsilon^2} \int_{\Omega} [\vartheta_\varepsilon^6]_{\rm res,L} \ \,{\rm d} {x} \label{e8}\end{aligned}$$ for any $\delta > 0$. Observe that we have also used [\[ee3\]](#ee3){reference-type="eqref" reference="ee3"} for passing from the first to the second inequality.
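To make the first inequality in [\[e8\]](#e8){reference-type="eqref" reference="e8"} explicit, here is one possible computation, based on the elementary inequality $ab \leq \delta a^2 + \frac{1}{4\delta} b^2$: pointwise, for any $\delta > 0$, $$\frac{1}{\varepsilon} \left[ \vartheta_\varepsilon^3 |{\bf u}_\varepsilon\cdot \nabla_x\mathfrak{T}_B| \right]_{\rm res,S} \leq \delta |{\bf u}_\varepsilon\cdot \nabla_x\mathfrak{T}_B |^2 + \frac{1}{4 \delta}\, \frac{[\vartheta_\varepsilon^6]_{\rm res,S}}{\varepsilon^2} \leq \delta |{\bf u}_\varepsilon\cdot \nabla_x\mathfrak{T}_B |^2 + \frac{\theta_*^6}{4 \delta}\, \frac{[1]_{\rm res}}{\varepsilon^2},$$ since $\vartheta_\varepsilon\leq \theta_*$ wherever the indicator of the "small" residual part does not vanish, and, similarly, $$\frac{1}{\varepsilon} \left[ \vartheta_\varepsilon^3 |{\bf u}_\varepsilon\cdot \nabla_x\mathfrak{T}_B| \right]_{\rm res,L} \leq \delta |{\bf u}_\varepsilon\cdot \nabla_x\mathfrak{T}_B |^2 + \frac{1}{4 \delta}\, \frac{[\vartheta_\varepsilon^6]_{\rm res,L}}{\varepsilon^2}.$$ Summing the two contributions and integrating over $\Omega$ produces the factor $2\delta$ and the first inequality in [\[e8\]](#e8){reference-type="eqref" reference="e8"}.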
First, we rewrite $$\int_{\Omega} |{\bf u}_\varepsilon\cdot \nabla_x\mathfrak{T}_B |^2 \ \,{\rm d} {x} = \int_{\Omega} |({\bf u}_\varepsilon- \Pi_{\mathcal{N}} {\bf u}_\varepsilon) \cdot \nabla_x\mathfrak{T}_B |^2 \ \,{\rm d} {x} + \int_{\Omega} | \Pi_{\mathcal{N}} {\bf u}_\varepsilon\cdot \nabla_x\mathfrak{T}_B |^2 \ \,{\rm d} {x},$$ where, in accordance with hypothesis [\[COC\]](#COC){reference-type="eqref" reference="COC"}, one has $$\Pi_{\mathcal{N}} {\bf u}_\varepsilon\cdot \nabla_x\mathfrak{T}_B = 0.$$ Second, by virtue of Korn--Poincaré inequality [\[KoPo\]](#KoPo){reference-type="eqref" reference="KoPo"} and [\[est:coerc\]](#est:coerc){reference-type="eqref" reference="est:coerc"}, we get $$\int_{\Omega} |({\bf u}_\varepsilon- \Pi_{\mathcal{N}} {\bf u}_\varepsilon) \cdot \nabla_x\mathfrak{T}_B |^2 \ \,{\rm d} {x} \lesssim \int_{\Omega} \frac{1}{\vartheta_\varepsilon} \mathbb{S}(\vartheta_\varepsilon, \mathbb{D}_x{\bf u}_\varepsilon): \mathbb{D}_x{\bf u}_\varepsilon \ \,{\rm d} {x}.$$ Consequently, the first integral on the right--hand side of [\[e8\]](#e8){reference-type="eqref" reference="e8"} can be absorbed by the left--hand side of [\[e4\]](#e4){reference-type="eqref" reference="e4"} as soon as $\delta > 0$ is fixed small enough. Let us now deal with the last integral in [\[e8\]](#e8){reference-type="eqref" reference="e8"}. We start by writing the bound $$\label{est:theta^6} \frac{ 1 }{\varepsilon^2} \int_{\Omega} [\vartheta_\varepsilon^6]_{\rm res,L} \ \,{\rm d} {x} \lesssim \frac{1}{\varepsilon^2} \int_{\Omega} \left( [\vartheta_\varepsilon^3 - \theta_*^3 ]^+ \right)^2 \ \,{\rm d} {x} + \frac{1}{\varepsilon^2} \int_{\Omega} [\theta_*^6]_{\rm res} \ \,{\rm d} {x}, %\frac{ C(\de) }{\ep^2} \intO{ [\vte]^4_{\rm res} } + \frac{ \de }{\ep^2} \intO{ [\vte]^8_{\rm res} },$$ where, by virtue of [\[ee3\]](#ee3){reference-type="eqref" reference="ee3"}, one has $$\label{est:theta-*} \frac{1}{\varepsilon^2} \int_{\Omega} [\theta_*^6]_{\rm res} \ \,{\rm d} {x} \lesssim \int_{\Omega} E_\varepsilon\left( \varrho_\varepsilon, \vartheta_\varepsilon, {\bf u}_\varepsilon\Big| \widetilde\varrho_\varepsilon, \overline{\vartheta} + \varepsilon\mathfrak{T} _B, 0 \right) \ \,{\rm d} {x}.$$ Next, by virtue of Poincaré inequality, one has $$\frac{1}{\varepsilon^2} \int_{\Omega} \left( [\vartheta_\varepsilon^3 - \theta_*^3 ]^+ \right)^2 \ \,{\rm d} {x} %\leq \frac{1}{\ep^2} \intO{ | \vte^4 - \Ov{\vt}^4 |^2 \lesssim \frac{1}{\varepsilon^2} \int_{\{\vartheta_\varepsilon\geq \theta_*\}}{|\nabla_x\vartheta_\varepsilon^3|^2 }\,{\rm d} {x}%+ \frac{1}{\ep^2} \int_{\partial \Omega} \left| (\Ov{\vt} + \ep \mathfrak{T}_B)^4 - \Ov{\vt}^4 \right|^2 \D S,$$ where no boundary term appears on the right-hand side provided we choose (say) $\theta_*\geq 2\overline{\vartheta}$. At this point, as $\beta > 6$, we can bound $$\begin{aligned} \frac{1}{\varepsilon^2} \int_{\{\vartheta_\varepsilon\geq \theta_*\}}{|\nabla_x\vartheta_\varepsilon^3|^2 }\,{\rm d} {x}&\approx \frac{1}{\varepsilon^2} \int_{\{\vartheta_\varepsilon\geq \theta_*\}}{ \vartheta_\varepsilon^4 |\nabla_x\vartheta_\varepsilon|^2 }\,{\rm d} {x}\\ &\leq \frac{\theta_*^{-(\beta-6)}}{\varepsilon^2} \int_{\Omega} \frac{\kappa (\vartheta_\varepsilon) |\nabla_x\vartheta_\varepsilon|^2 }{\vartheta_\varepsilon^2} \ \,{\rm d} {x}. 
%+ C(\delta)\intO{ E_\ep \left( \vre, \vte, \vue \Big| \tvr_\ep, \Ov{\vt} + \ep \mathfrak{T}_B, 0 \right) } \end{aligned}$$ In particular, inserting all these bounds into [\[est:theta\^6\]](#est:theta^6){reference-type="eqref" reference="est:theta^6"} and then into [\[e8\]](#e8){reference-type="eqref" reference="e8"} and finally choosing $\theta_*\gg1$ large enough, owing to [\[e2\]](#e2){reference-type="eqref" reference="e2"} the integral on the right--hand side of the previous relation can be absorbed by the left--hand side of [\[e4\]](#e4){reference-type="eqref" reference="e4"}. Summing up the previous estimates, we may rewrite inequality [\[e4\]](#e4){reference-type="eqref" reference="e4"} in the form $$\begin{aligned} &\left[ \int_{\Omega} E_\varepsilon\left(\varrho_\varepsilon, \vartheta_\varepsilon, {\bf u}_\varepsilon\Big| \widetilde\varrho_\varepsilon, \overline{\vartheta} + \varepsilon\mathfrak{T}_B, 0 \right) \ \,{\rm d} {x} \right]_{t = 0}^{t = \tau} + \frac{1}{\varepsilon^2} \int_0^\tau \int_{\overline{\Omega}} (\overline{\vartheta} + \varepsilon\mathfrak{T}_B) {\rm d}\sigma_\varepsilon(t,x) \nonumber \\ &\lesssim \left( 1 + \frac{1}{\varepsilon} \left| \int_0^\tau \int_{\Omega} \frac{\kappa (\vartheta_\varepsilon) \nabla_x\vartheta_\varepsilon}{\vartheta_\varepsilon} \cdot \nabla_x\mathfrak{T}_B \ \,{\rm d} {x} \,{\rm d} t \right| + \int_0^\tau \int_{\Omega} E_\varepsilon\left(\varrho_\varepsilon, \vartheta_\varepsilon, {\bf u}_\varepsilon\Big| \widetilde\varrho_\varepsilon, \overline{\vartheta} + \varepsilon\mathfrak{T}_B, 0 \right) \ \,{\rm d} {x} \,{\rm d} t \right). \label{e9}\end{aligned}$$ ### Heat flux dependent term We are now going to bound the second term appearing in the right--hand side of [\[e9\]](#e9){reference-type="eqref" reference="e9"}. Invoking once more hypothesis [\[w16\]](#w16){reference-type="eqref" reference="w16"} we get $$\frac{1}{\varepsilon} \left| \int_{\Omega} \frac{\kappa(\vartheta_\varepsilon) }{\vartheta_\varepsilon} \nabla_x\vartheta_\varepsilon\cdot \nabla_x\mathfrak{T}_B \ \,{\rm d} {x} \right| \lesssim \frac{1}{\varepsilon} \int_{\Omega} |\nabla_x(\log(\vartheta_\varepsilon))| + \vartheta_\varepsilon^{\beta - 1} |\nabla_x\vartheta_\varepsilon| \ \,{\rm d} {x},$$ where we can bound $$\begin{aligned} \frac{1}{\varepsilon} \int_{\Omega} \left| \nabla_x(\log(\vartheta_\varepsilon)) \right| \ \,{\rm d} {x} \lesssim \frac{\delta}{\varepsilon^2} \int_{\Omega} \left| \nabla_x(\log(\vartheta_\varepsilon)) \right|^2 \ \,{\rm d} {x} + C(\delta) \nonumber\end{aligned}$$ for any $\delta> 0$ to be fixed later. Thus, the integral on the right--hand side is controlled by the left--hand side of [\[e9\]](#e9){reference-type="eqref" reference="e9"}, provided we choose $\delta>0$ small enough. Next, we compute $$\begin{aligned} \frac{1}{\varepsilon} \int_{\Omega} \vartheta_\varepsilon^{\beta - 1} |\nabla_x\vartheta_\varepsilon| \ \,{\rm d} {x} &= \frac{1}{\varepsilon} \int_{\Omega} \vartheta_\varepsilon^{\frac{\beta}{2}} \vartheta_\varepsilon^{\frac{\beta}{2}-1} |\nabla_x\vartheta_\varepsilon| \ \,{\rm d} {x}\leq \frac{\delta}{\varepsilon^2} \int_{\Omega} \frac{\kappa (\vartheta_\varepsilon) |\nabla_x\vartheta_\varepsilon|^2 }{\vartheta_\varepsilon^2} \ \,{\rm d} {x} + C(\delta) \int_{\Omega} |\vartheta_\varepsilon^{\frac{\beta}{2} }|^2 \ \,{\rm d} {x}, \nonumber\end{aligned}$$ where the first term is absorbed by the left--hand side of [\[e9\]](#e9){reference-type="eqref" reference="e9"}.
Finally, using Poincaré inequality as done above, we infer $$\begin{aligned} \int_{\Omega} |\vartheta_\varepsilon^{\frac{\beta}{2} } |^2 \ \,{\rm d} {x} \lesssim \int_{\Omega} |\nabla_x\vartheta_\varepsilon^{\frac{\beta}{2}} |^2 \ \,{\rm d} {x} + \int_{\partial \Omega} ( \overline{\vartheta} + \varepsilon\mathfrak{T}_B )^\beta \ {\rm d}S, \nonumber\end{aligned}$$ where one has $$\int_{\Omega} |\nabla_x\vartheta_\varepsilon^{\frac{\beta}{2}} |^2 \ \,{\rm d} {x} = \int_{\Omega} \vartheta_\varepsilon^{\beta-2} |\nabla_x\vartheta_\varepsilon|^2 \ \,{\rm d} {x} \leq \int_{\Omega} \frac{\kappa (\vartheta_\varepsilon) |\nabla_x\vartheta_\varepsilon|^2 }{\vartheta_\varepsilon^2} \ \,{\rm d} {x}.$$ In the end, we may apply Grönwall's lemma to [\[e9\]](#e9){reference-type="eqref" reference="e9"} to conclude that, for any time $T>0$ fixed, there holds $$\label{e10} \sup_{t \in [0,T]} \int_{\Omega} E_\varepsilon\left(\varrho_\varepsilon, \vartheta_\varepsilon, {\bf u}_\varepsilon\Big| \widetilde\varrho_\varepsilon, \overline{\vartheta} + \varepsilon\mathfrak{T}_B, 0 \right) \ \,{\rm d} {x} + \frac{1}{\varepsilon^2} \int_0^T \int_{\overline{\Omega}} (\overline{\vartheta} + \varepsilon\mathfrak{T}_B) {\rm d}\sigma_\varepsilon(t,x) \lesssim 1$$ as long as $$\label{e11} \int_{\Omega} E_\varepsilon\left(\varrho_\varepsilon, \vartheta_\varepsilon, {\bf u}_\varepsilon\Big| \widetilde\varrho_\varepsilon, \overline{\vartheta} + \varepsilon\mathfrak{T}_B, 0 \right) (0, \cdot) \ \,{\rm d} {x} \lesssim 1,$$ meaning the initial data are ill--prepared, cf. hypotheses [\[wpd\]](#wpd){reference-type="eqref" reference="wpd"}, [\[wpd1\]](#wpd1){reference-type="eqref" reference="wpd1"}. We point out that the (implicit) multiplicative constant in [\[e10\]](#e10){reference-type="eqref" reference="e10"} depends on the fixed time $T>0$. 
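For clarity, let us also record the (schematic) form in which Grönwall's lemma is applied; the constants $c$, $C$ below are not tracked, but are independent of $\varepsilon$. Setting $$y(\tau) = \int_{\Omega} E_\varepsilon\left(\varrho_\varepsilon, \vartheta_\varepsilon, {\bf u}_\varepsilon\Big| \widetilde\varrho_\varepsilon, \overline{\vartheta} + \varepsilon\mathfrak{T}_B, 0 \right) (\tau, \cdot) \ \,{\rm d} {x}, \qquad D(\tau) = \frac{1}{\varepsilon^2} \int_0^\tau \int_{\overline{\Omega}} (\overline{\vartheta} + \varepsilon\mathfrak{T}_B) {\rm d}\sigma_\varepsilon(t,x),$$ the absorption arguments above turn [\[e9\]](#e9){reference-type="eqref" reference="e9"} into $$y(\tau) + c\, D(\tau) \leq C \left( 1 + y(0) + \int_0^\tau y(t) \,{\rm d} t \right) \qquad \mbox{ for a.a. }\quad \tau \in (0,T),$$ whence Grönwall's lemma yields $y(\tau) + c\, D(\tau) \leq C (1 + y(0)) e^{C \tau}$. This is [\[e10\]](#e10){reference-type="eqref" reference="e10"} under the assumption [\[e11\]](#e11){reference-type="eqref" reference="e11"}, and the factor $e^{CT}$ accounts for the dependence on $T$ mentioned above.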
## Uniform bounds -- summary The bounds established in [\[e10\]](#e10){reference-type="eqref" reference="e10"}, [\[e11\]](#e11){reference-type="eqref" reference="e11"}, together with the structural hypotheses [\[w9\]](#w9){reference-type="eqref" reference="w9"}--[\[w16\]](#w16){reference-type="eqref" reference="w16"}, yield the following uniform estimates for $\varepsilon\to 0$, see [@FeNo6A Chapter 5, Proposition 5.1]: $$\begin{aligned} {\rm ess} \sup_{t \in (0,T)} \int_{\Omega} \mathds{1}_{\rm res}(t, \cdot) \ \,{\rm d} {x} &\lesssim \varepsilon^2 \label{ub1}, \\ {\rm ess} \sup_{t \in (0,T)} \left\| \left[\frac{\varrho_\varepsilon- \overline{\varrho}}{\varepsilon} \right]_{\rm ess} (t, \cdot) \right\|_{L^2(\Omega)} &\lesssim 1, \label{ub2}\\ {\rm ess} \sup_{t \in (0,T)} \left\| \left[\frac{\vartheta_\varepsilon- \overline{\vartheta}}{\varepsilon} \right]_{\rm ess} (t, \cdot) \right\|_{L^2(\Omega)} &\lesssim 1, \label{ub3}\\ {\rm ess} \sup_{t \in (0,T)} \int_{\Omega} \left( [\varrho]_{\rm res}^{\frac{5}{3}} + [\vartheta]_{\rm res}^{4} \right)(t, \cdot) \ \,{\rm d} {x} &\lesssim \varepsilon^2, \label{ub4} \\ {\rm ess} \sup_{t \in (0,T)} \left\| \sqrt{\varrho_\varepsilon} {\bf u}_\varepsilon(t, \cdot) \right\|_{L^2(\Omega;R^3)} &\lesssim 1, \label{ub5} \\ \int_0^T \int_{\overline{\Omega}} 1 \ {\rm d}\sigma (t,x) &\lesssim \varepsilon^2, \label{ub5a} \\ \int_0^T \| {\bf u}_\varepsilon(t, \cdot) \|^2_{W^{1,2}(\Omega; R^3)} \,{\rm d} t &\lesssim 1, \label{ub6}\\ \int_0^T \left( \left\| \frac{\vartheta_\varepsilon- \overline{\vartheta} }{\varepsilon} (t, \cdot) \right\|_{W^{1,2}(\Omega)} + \left\| \frac{\log(\vartheta_\varepsilon) - \log(\overline{\vartheta}) }{\varepsilon} (t, \cdot) \right\|_{W^{1,2}(\Omega)} \right) &\lesssim 1, \label{ub7} \\ \int_0^T \left\| \left[ \frac{\varrho_\varepsilon s(\varrho_\varepsilon, \vartheta_\varepsilon)}{\varepsilon} \right]_{\rm res} (t, \cdot) \right\|_{L^q(\Omega)}^q \,{\rm d} t &\lesssim 1 \ \mbox{for a certain}\ q > 1, \label{ub8}\\ \int_0^T \left\| \left[ \frac{\varrho_\varepsilon s(\varrho_\varepsilon, \vartheta_\varepsilon)}{\varepsilon} \right]_{\rm res} {\bf u}_\varepsilon(t, \cdot) \right\|_{L^q(\Omega;R^3)}^q \,{\rm d} t &\lesssim 1 \ \mbox{for a certain}\ q > 1, \label{ub9}\\ \int_0^T \left\| \left[ \frac{\kappa (\vartheta_\varepsilon)}{\vartheta_\varepsilon}\right]_{\rm res} \left( \frac{\nabla_x\vartheta_\varepsilon}{\varepsilon}\right) (t, \cdot) \right\|_{L^q(\Omega;R^3)}^q \,{\rm d} t &\lesssim 1 . \label{ub10} \end{aligned}$$ # Weak convergence towards the target system {#w} Making use of the uniform bounds [\[ub1\]](#ub1){reference-type="eqref" reference="ub1"}--[\[ub10\]](#ub10){reference-type="eqref" reference="ub10"} we perform the limit in the weak topologies. This part of the proof, with the exception of the limit in the momentum equation, is almost identical with [@FeNo6A Chapter 5, Section 5.3]. We therefore only state the results referring to [@FeNo6A] for details. 
First, it follows from the uniform bounds [\[ub2\]](#ub2){reference-type="eqref" reference="ub2"}, [\[ub4\]](#ub4){reference-type="eqref" reference="ub4"} that $$\label{w1} \frac{\varrho_\varepsilon- \overline{\varrho}}{\varepsilon} \to \mathfrak{R} \qquad \mbox{ weakly-(*) in } \quad L^\infty(0,T; L^{\frac{5}{3} }(\Omega)),$$ in particular we have the strong convergence $$\label{w2} \varrho_\varepsilon\to \overline{\varrho}\qquad \mbox{ in }\quad L^\infty(0,T; L^{\frac{5}{3} }(\Omega)).$$ Similarly, we gather $$\begin{aligned} \frac{\vartheta_\varepsilon- \overline{\vartheta}}{\varepsilon} &\to \mathfrak{T} \qquad \mbox{ weakly-(*) in } \quad L^\infty(0,T; L^{2}(\Omega)), \quad \mbox{ weakly in }\quad L^2(0,T; W^{1,2}(\Omega)), \label{w3}\\ \vartheta_\varepsilon&\to \overline{\vartheta} \qquad \mbox{ in }\quad L^\infty(0,T; L^{2}(\Omega)). \label{w4} \end{aligned}$$ Finally, by virtue of [\[ub6\]](#ub6){reference-type="eqref" reference="ub6"}, we get $$\label{w5} {\bf u}_\varepsilon\to {\bf U}\qquad \mbox{ weakly in }\quad L^2(0,T; W^{1,2}(\Omega;R^3) ).$$ In all cases, we have to extract a suitable subsequence as the case may be. In accordance with the boundary conditions [\[i5\]](#i5){reference-type="eqref" reference="i5"}, [\[i6\]](#i6){reference-type="eqref" reference="i6"}, we have $$\begin{aligned} \mathfrak{T}|_{\partial \Omega} &= \mathfrak{T}_B , \label{w6}\\ {\bf U}\cdot {\bf n}|_{\partial \Omega} &=0 . \label{w7}\end{aligned}$$ Furthermore, it follows from [\[ee1\]](#ee1){reference-type="eqref" reference="ee1"} that $$\label{w8} \int_{\Omega} \mathfrak{R}(t, \cdot) \ \,{\rm d} {x} = 0 .$$ ## Equation of continuity With [\[w2\]](#w2){reference-type="eqref" reference="w2"}, [\[w5\]](#w5){reference-type="eqref" reference="w5"} at hand, we may let $\varepsilon\to 0$ in the weak formulation of the equation of continuity [\[Lw4\]](#Lw4){reference-type="eqref" reference="Lw4"} to obtain $$\label{ww9} {\rm div}_x{\bf U}= 0 \qquad \mbox{ a.a. in }\quad (0,T) \times \Omega.$$ ## Entropy equation Passing to the limit in the entropy equation [\[Lw7\]](#Lw7){reference-type="eqref" reference="Lw7"} is a bit lengthy but, fortunately, the same as in [@FeNo6A Chapter 5, Section 5.3.2]. Thus we only report the result: $$\begin{aligned} \int_0^T &\int_{\Omega} \overline{\varrho} \left( \frac{\partial s(\overline{\varrho}, \overline{\vartheta} ) }{\partial \varrho} \mathfrak{R} + \frac{\partial s(\overline{\varrho}, \overline{\vartheta} ) }{\partial \vartheta} \mathfrak{T} \right) \left( \partial_t \varphi + {\bf U}\cdot \nabla_x\varphi \right) \ \,{\rm d} {x} \,{\rm d} t - \int_0^T \int_{\Omega} \frac{\kappa(\overline{\vartheta})}{\overline{\vartheta}} \nabla_x\mathfrak{T} \cdot \nabla_x\varphi \ \,{\rm d} {x} \,{\rm d} t \nonumber \\ &= - \int_{\Omega} \overline{\varrho} \left( \frac{\partial s(\overline{\varrho}, \overline{\vartheta} ) }{\partial \varrho} \mathfrak{R}_0 + \frac{\partial s(\overline{\varrho}, \overline{\vartheta} ) }{\partial \vartheta} \mathfrak{T}_0 \right)\varphi (0, \cdot) \ \,{\rm d} {x}, \label{ww10}\end{aligned}$$ for any $\varphi \in C^1_c ([0, T) \times \Omega)$, where we have set $$\label{ww11} \varrho_{0,\varepsilon} \to \mathfrak{R}_0,\quad \vartheta_{0,\varepsilon} \to \mathfrak{T}_0 \qquad \mbox{ weakly-(*) in }\quad L^\infty (\Omega).$$ Here, in order to conclude, we need a relation between the limit $\mathfrak{R}$ and $\mathfrak{T}$. This issue will be discussed in the forthcoming section. 
## Momentum equation The asymptotic limit of the momentum equation [\[Lw5\]](#Lw5){reference-type="eqref" reference="Lw5"} is exactly the same as in [@FeNo6A Chapter 5, Section 5.5.3], specifically, $$\begin{aligned} \int_0^T &\int_{\Omega} \Big( \overline{\varrho} {\bf U}\cdot \partial_t \boldsymbol{\varphi}+ \overline{\varrho{\bf U}\otimes {\bf U}}: \nabla_x\boldsymbol{\varphi}- \mathbb{S}(\overline{\vartheta}, \mathbb{D}_x{\bf U}) : \nabla_x\boldsymbol{\varphi}+ \mathfrak{R} \nabla_xG \cdot \boldsymbol{\varphi}\Big) \ \,{\rm d} {x}\,{\rm d} t \nonumber \\ &= - \int_{\Omega} \overline{\varrho} {\bf U}_0 \cdot \boldsymbol{\varphi}(0, \cdot) \ \,{\rm d} {x} \label{ww12} \end{aligned}$$ for any test function $$\boldsymbol{\varphi}\in C^1_c([0,T) \times \overline{\Omega}; R^3),\quad {\rm div}_x\boldsymbol{\varphi}= 0,\quad \boldsymbol{\varphi}\cdot {\bf n}|_{\partial \Omega} = 0.$$ Here we have used the weak limits $$\begin{aligned} {\bf u}_{0, \varepsilon} &\to {\bf U}_0 \qquad \mbox{ weakly in } \quad L^\infty(\Omega; R^3), \nonumber \\ \varrho_\varepsilon{\bf u}_\varepsilon\otimes {\bf u}_\varepsilon&\to \overline{\varrho{\bf U}\otimes {\bf U}} \qquad \mbox{ weakly in }\quad L^2(0,T; L^q(\Omega;R^3)),\ \mbox{for a certain}\ q > 1. \label{ww13}\end{aligned}$$ ### Pressure term The limit equation [\[ww12\]](#ww12){reference-type="eqref" reference="ww12"} holds thanks to solenoidality of the test function. Multiplying the momentum equation by $\varepsilon$ and performing the limit for a general test function $\boldsymbol{\varphi}$ we deduce, exactly as in [@FeNo6A Chapter 5, Section 5.5.3], the Boussinesq relation $$\label{ww14} \frac{\partial p(\overline{\varrho}, \overline{\vartheta})}{\partial \varrho} \nabla_x\mathfrak{R} + \frac{\partial p(\overline{\varrho}, \overline{\vartheta})}{\partial \vartheta} \nabla_x\mathfrak{T} = \overline{\varrho} \nabla_xG.$$ Hereafter, we normalize the potential $G$ so that $$\label{ww15} \int_{\Omega} G \ \,{\rm d} {x} = 0.$$ Accordingly, using [\[w8\]](#w8){reference-type="eqref" reference="w8"} we may integrate [\[ww14\]](#ww14){reference-type="eqref" reference="ww14"} obtaining the desired relation between $\mathfrak{R}$ and $\mathfrak{T}$, namely $$\label{ww16} \frac{\partial p(\overline{\varrho}, \overline{\vartheta})}{\partial \varrho} \mathfrak{R} + \frac{\partial p(\overline{\varrho}, \overline{\vartheta})}{\partial \vartheta} \mathfrak{T} = \overline{\varrho} G + \frac{\partial p(\overline{\varrho}, \overline{\vartheta})}{\partial \vartheta} \frac{1}{|\Omega|} \int_{\Omega} \mathfrak{T} \ \,{\rm d} {x}.$$ ### Final form of the limit entropy equation The relation [\[ww16\]](#ww16){reference-type="eqref" reference="ww16"} plugged in the limit entropy equation [\[ww10\]](#ww10){reference-type="eqref" reference="ww10"} gives rise to the final form of the limit heat equation. 
Setting $$\label{ww17} \Theta = \mathfrak{T} - \frac{\lambda(\overline{\varrho}, \overline{\vartheta})}{|\Omega|} \int_{\Omega} \mathfrak{T} \ \,{\rm d} {x}$$ we conclude that $$\begin{aligned} \overline{\varrho} c_p(\overline{\varrho}, \overline{\vartheta} ) \Big( \partial_t \Theta + {\bf U}\cdot \nabla_x\Theta \Big) - \overline{\varrho} \ \overline{\vartheta} \alpha(\overline{\varrho}, \overline{\vartheta} ) {\bf U}\cdot \nabla_xG &= \kappa(\overline{\vartheta}) \Delta_x\Theta, \nonumber \\ \Theta|_{\partial \Omega} &= \mathfrak{T}_B - \frac{\lambda(\overline{\varrho}, \overline{\vartheta})}{1 - \lambda(\overline{\varrho}, \overline{\vartheta})} \frac{1}{|\Omega|}\int_{\Omega} \Theta \ \,{\rm d} {x} . \label{ww18}\end{aligned}$$ The initial value of $\Theta$ can be evaluated by means of [\[ww10\]](#ww10){reference-type="eqref" reference="ww10"}: $$\left( \frac{\partial s(\overline{\varrho}, \overline{\vartheta} ) }{\partial \varrho} \mathfrak{R} + \frac{\partial s(\overline{\varrho}, \overline{\vartheta} ) }{\partial \vartheta} \mathfrak{T} \right)(0, \cdot) = \left( \frac{\partial s(\overline{\varrho}, \overline{\vartheta} ) }{\partial \varrho} \mathfrak{R}_0 + \frac{\partial s(\overline{\varrho}, \overline{\vartheta} ) }{\partial \vartheta} \mathfrak{T}_0 \right),$$ where, by virtue of [\[ww16\]](#ww16){reference-type="eqref" reference="ww16"}, one has $$\mathfrak{R} (0, \cdot) = \left( \frac{\partial p(\overline{\varrho}, \overline{\vartheta})}{\partial \varrho} \right)^{-1} \left(\overline{\varrho} G + \frac{\partial p(\overline{\varrho}, \overline{\vartheta})}{\partial \vartheta} \left( \frac{1}{|\Omega|} \int_{\Omega} \mathfrak{T} (0, \cdot) \ \,{\rm d} {x} - \mathfrak{T}(0, \cdot) \right) \right).$$ To simplify, we suppose $$\label{ww19} \int_{\Omega} \mathfrak{R}_0 \ \,{\rm d} {x} = \int_{\Omega} \mathfrak{T}_0 \ \,{\rm d} {x} = 0$$ yielding $$\int_{\Omega} \mathfrak{T} (0,\cdot) \ \,{\rm d} {x} = 0.$$ Consequently, a bit lengthy but straightforward calculation yields $$\begin{aligned} \overline{\varrho} c_p (\overline{\varrho}, \overline{\vartheta}) \Theta(0, \cdot) = \overline{\vartheta} \left( \frac{\partial s (\overline{\varrho}, \overline{\vartheta})}{\partial \varrho} \mathfrak{R}_0 + \frac{\partial s (\overline{\varrho}, \overline{\vartheta})}{\partial \vartheta} \mathfrak{T}_0 - \alpha (\overline{\varrho}, \overline{\vartheta}) G \right). \label{ww20}\end{aligned}$$ # Propagation of acoustic waves {#A} To complete the proof of convergence, we have to identify the "convective" term $\overline{\varrho{\bf U}\otimes {\bf U}}$ in the momentum equation [\[ww12\]](#ww12){reference-type="eqref" reference="ww12"}. Specifically, our goal is to show that $$\label{A1} \int_0^T \int_{\Omega} \overline{\varrho{\bf U}\otimes {\bf U}} : \nabla_x\boldsymbol{\varphi} \ \,{\rm d} {x} \,{\rm d} t = \int_0^T \int_{\Omega} \overline{\varrho} ({\bf U}\otimes {\bf U}): \nabla_x\boldsymbol{\varphi} \ \,{\rm d} {x} \,{\rm d} t$$ for any test function $$\label{cond:test-f} \boldsymbol{\varphi}\in C^1_c([0,T) \times \overline{\Omega}; R^3),\quad {\rm div}_x\boldsymbol{\varphi}= 0,\quad \boldsymbol{\varphi}\cdot {\bf n}|_{\partial \Omega} = 0.$$ To begin with, let ${\bf H}$ be the Helmholtz projection onto the space of solenoidal functions with vanishing normal trace and consider the Helmholtz decomposition $${\bf v} = {\bf H}[{\bf v}] + {\bf H}^\perp [{\bf v}] ,$$ for $\mathbb{R}^3$-valued vector fields on $\Omega$.
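For later use, let us briefly recall how these projections can be realized on the bounded domain $\Omega$; this is a standard construction, recorded here only to fix notation. For ${\bf v} \in L^2(\Omega; R^3)$, the component ${\bf H}^\perp [{\bf v}] = \nabla_x\Psi_{\bf v}$ is a perfect gradient, where $\Psi_{\bf v} \in W^{1,2}(\Omega)$ solves the weak Neumann problem $$\int_{\Omega} \nabla_x\Psi_{\bf v} \cdot \nabla_x\phi \ \,{\rm d} {x} = \int_{\Omega} {\bf v} \cdot \nabla_x\phi \ \,{\rm d} {x} \qquad \mbox{ for all }\quad \phi \in W^{1,2}(\Omega),$$ while ${\bf H}[{\bf v}] = {\bf v} - \nabla_x\Psi_{\bf v}$ is solenoidal with vanishing normal trace, ${\rm div}_x{\bf H}[{\bf v}] = 0$, ${\bf H}[{\bf v}] \cdot {\bf n}|_{\partial \Omega} = 0$, both understood in the weak sense.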
As observed in [@FeNo6A Section 5.4.2], the component ${\bf H}[\varrho_\varepsilon{\bf u}_\varepsilon]$ is compact (in a suitable topology), in particular it converges almost everywhere on any set $(0,T)\times\Omega$. Hence, the desired relation [\[A1\]](#A1){reference-type="eqref" reference="A1"} follows as soon as we show $$\label{A2} \int_0^T \int_{\Omega} {\bf H}^\perp [\varrho_\varepsilon{\bf u}_\varepsilon] \otimes {\bf H}^\perp [{\bf u}_\varepsilon] : \nabla_x\boldsymbol{\varphi} \ \,{\rm d} {x} \,{\rm d} t \to 0 \qquad \mbox{ as }\quad \varepsilon\to 0$$ for any test function $\boldsymbol{\varphi}$ satisfying [\[cond:test-f\]](#cond:test-f){reference-type="eqref" reference="cond:test-f"}. ## Acoustic equation The main problem in showing [\[A2\]](#A2){reference-type="eqref" reference="A2"} are rapid *time oscillations* related to acoustic waves. The equation describing their evolution -- the acoustic equation -- has been derived in [@FeNo6A Chapter 5, Section 5.4.7]. Here, the main difficulty is the entropy equation holds only for all test functions which are compactly supported in $\Omega$, see [\[Lw7\]](#Lw7){reference-type="eqref" reference="Lw7"}. Thus, our first goal is to extend the validity of the entropy equation [\[i2\]](#i2){reference-type="eqref" reference="i2"} to test functions which may be non-zero at the boundary $\partial\Omega$. As we are going to see in a while, the price to pay is the appearing of additional (small) terms in the weak formulation [\[Lw7\]](#Lw7){reference-type="eqref" reference="Lw7"}. Let $\varphi (t,x) \in C^1_c ((0,T) \times \overline{\Omega})$ be a given test function. We consider its approximation $$\label{A3} \varphi_\varepsilon(t,x) = \chi_\varepsilon(x) \varphi(t,x),\qquad \chi_\varepsilon\in C^\infty_c(\Omega),\ 0 \leq \chi_\varepsilon\leq 1,\ \chi_\varepsilon(x) = 1 \ \mbox{ whenever }\ {\rm dist}[x, \partial \Omega] > \varepsilon.$$ Obviously, $\varphi_\varepsilon$ is a legitimate test function for the entropy balance [\[Lw7\]](#Lw7){reference-type="eqref" reference="Lw7"}: using it in that relation yields $$\begin{aligned} - \int_0^T &\int_{\Omega} \left[ \varepsilon\varrho_\varepsilon\frac{ s(\varrho_\varepsilon, \vartheta_\varepsilon) - s(\overline{\varrho}, \overline{\vartheta}) }{\varepsilon} \chi_\varepsilon\partial_t \varphi + \varepsilon\varrho_\varepsilon\frac{s (\varrho_\varepsilon,\vartheta_\varepsilon) -s(\overline{\varrho}, \overline{\vartheta}) }{\varepsilon} {\bf u}_\varepsilon\cdot \nabla_x\varphi + \frac{{\bf q} (\vartheta_\varepsilon, \nabla_x\vartheta_\varepsilon)}{\vartheta_\varepsilon} \cdot \nabla_x\varphi \right] \ \,{\rm d} {x} \,{\rm d} t \nonumber \\ &= \int_0^T \int_{\Omega}{ \varphi_\varepsilon\ {\rm d}\sigma_\varepsilon(t,x)} + \varepsilon\int_0^T \int_{\Omega} \varrho_\varepsilon\frac{s (\varrho_\varepsilon,\vartheta_\varepsilon) -s(\overline{\varrho}, \overline{\vartheta}) }{\varepsilon} {\bf u}_\varepsilon\cdot \nabla_x(\varphi - \varphi_\varepsilon) \ \,{\rm d} {x} \nonumber \\ &+ \int_0^T \int_{\Omega} \frac{{\bf q} (\vartheta_\varepsilon, \nabla_x\vartheta_\varepsilon)}{\vartheta_\varepsilon} \cdot \nabla_x(\varphi - \varphi_\varepsilon) \ \,{\rm d} {x}. \label{A4} \end{aligned}$$ Consequently, the entropy balance [\[A4\]](#A4){reference-type="eqref" reference="A4"}, compared to its "conservative" counterpart in [@FeNo6A Chapter 5], contains extra error terms represented by the last two integrals in [\[A4\]](#A4){reference-type="eqref" reference="A4"}. 
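The smallness of these extra terms ultimately relies on elementary properties of the cut-off functions $\chi_\varepsilon$, which we record here; we tacitly assume, as we may, that $\chi_\varepsilon$ is chosen with $|\nabla_x\chi_\varepsilon| \lesssim \varepsilon^{-1}$. Since $1 - \chi_\varepsilon$ and $\nabla_x\chi_\varepsilon$ are supported in the boundary layer $\{ x \in \Omega \ |\ {\rm dist}[x, \partial \Omega] \leq \varepsilon\}$, whose Lebesgue measure is $\lesssim \varepsilon$ for a domain of class $C^3$, we get, for any finite $p, q \geq 1$, $$\| 1 - \chi_\varepsilon\|^p_{L^p(\Omega)} \lesssim \varepsilon, \qquad \| \nabla_x\chi_\varepsilon\|^q_{L^q(\Omega; R^3)} \lesssim \varepsilon \cdot \varepsilon^{-q} = \varepsilon^{1-q},$$ which is precisely the scaling quantified in [\[A6\]](#A6){reference-type="eqref" reference="A6"} below.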
It is easy to check that $$\label{A5} \nabla_x(\varphi - \varphi_\varepsilon) = (1 - \chi_\varepsilon) \nabla_x\varphi - \varphi \nabla_x\chi_\varepsilon,$$ where one has the bounds $$\label{A6} \| \nabla_x\chi_\varepsilon\|_{L^q(\Omega; R^3)} \lesssim \varepsilon^{\frac{1 - q}{q}}, \ 1 \leq q \leq \infty , \qquad \| 1 - \chi_\varepsilon\|_{L^p(\Omega)} \lesssim \varepsilon^{\frac1 p} ,\ 1 \leq p \leq \infty.$$ Using the error estimates [\[A6\]](#A6){reference-type="eqref" reference="A6"}, we may repeat step by step the arguments of [@FeNo6A Chapter 5, Section 5.4.7], to deduce the following acoustic equation: $$\begin{aligned} \int_0^T \int_{\Omega} \Big( \varepsilon Z_\varepsilon\partial_t \varphi + \varrho_\varepsilon{\bf u}_\varepsilon\cdot \nabla_x\varphi \Big) \ \,{\rm d} {x} \,{\rm d} t &= \varepsilon\int_0^T \int_{\Omega} {\bf h}^1_\varepsilon\cdot \nabla_x\varphi \ \,{\rm d} {x} + \varepsilon^\gamma\int_0^T \int_{\Omega} h^2_\varepsilon\varphi \ \,{\rm d} {x} \,{\rm d} t \nonumber \\ &\mbox{for any}\ \varphi \in C^1_c((0,T) \times \overline{\Omega}), \label{A7} \\ %\end{align} %and %\begin{align} \int_0^T \int_{\Omega} \Big( \varepsilon\varrho_\varepsilon{\bf u}_\varepsilon\cdot \partial_t \boldsymbol{\varphi}+ \omega Z_\varepsilon{\rm div}_x\boldsymbol{\varphi}\Big) \ \,{\rm d} {x} \,{\rm d} t &= \varepsilon^\gamma\int_0^T \int_{\Omega} \mathbb{H}^3_\varepsilon: \nabla_x\boldsymbol{\varphi} \ \,{\rm d} {x} \,{\rm d} t + \varepsilon^\gamma\int_0^T \int_{\Omega} {\bf h}^4_\varepsilon\cdot \boldsymbol{\varphi} \ \,{\rm d} {x} \nonumber \\ &\mbox{for any} \ \boldsymbol{\varphi}\in C^1_c((0,T) \times \overline{\Omega}),\ \boldsymbol{\varphi}\cdot {\bf n}|_{\partial \Omega} = 0. \label{A8}\end{aligned}$$ Here we have $0<\gamma<1$ and we have defined $$\begin{aligned} Z_\varepsilon&= \frac{1}{\omega} \left(\omega \frac{\varrho_\varepsilon- \overline{\varrho}}{\varepsilon} + A \varrho_\varepsilon\chi_\varepsilon\frac{s(\varrho_\varepsilon, \vartheta_\varepsilon) - s (\overline{\varrho}, \overline{\vartheta})}{\varepsilon} - \overline{\varrho} G + \frac{A}{\varepsilon} \Sigma_\varepsilon\right), \nonumber \\ \Sigma_\varepsilon(\tau, \cdot ) &= \int_0^\tau \chi_\varepsilon\sigma_\varepsilon(t, \cdot) \,{\rm d} t , \nonumber\end{aligned}$$ with the parameters $A$ and $\omega$ defined by $$\begin{aligned} A &= \frac{1}{\overline{\varrho}} \frac{\partial p(\overline{\varrho}, \overline{\vartheta})} {\partial \vartheta} \left(\frac{\partial s(\overline{\varrho}, \overline{\vartheta})} {\partial \vartheta} \right)^{-1} \quad \mbox{ and }\quad \omega &= \frac{\partial p(\overline{\varrho}, \overline{\vartheta})} {\partial \varrho} + \frac{1}{\overline{\varrho}^2} \left(\frac{\partial s(\overline{\varrho}, \overline{\vartheta})} {\partial \vartheta} \right)^{-1} \left| \frac{\partial p(\overline{\varrho}, \overline{\vartheta})} {\partial \vartheta} \right|^2. 
%, \br %Z_\ep &= \frac{1}{\omega} \left(\omega \frac{\vre - \Ov{\vr}}{\ep} + A \vre \theta_\ep \frac{s(\vre, \vte) - s (\Ov{\vr}, \Ov{\vt})}{\ep} %- \Ov{\vr} G + \frac{A}{\ep} \Sigma_\ep\right), \ %\Sigma_\ep (\tau, \cdot ) = \int_0^\tau \theta_\ep \sigma_\ep (t, \cdot) \dt, %\label{A9} \nonumber\end{aligned}$$ In addition, we have the following bounds: $$\begin{aligned} \| {\bf h}^1_\varepsilon\|_{L^q(0,T; L^1(\Omega;R^3))} + \| {h}^2_\varepsilon\|_{L^q(0,T; L^1(\Omega))} + \| {\bf h}^4_\varepsilon\|_{L^q(0,T; L^1(\Omega;R^3))} & \stackrel{<}{\sim}1, \nonumber \\ \| \mathbb{H}^3_\varepsilon\|_{L^q(0,T; L^1(\Omega; R^{3 \times 3}))} & \stackrel{<}{\sim}1 \ \mbox{for some}\ q > 1. \label{A10} \end{aligned}$$ Notice that equations [\[A7\]](#A7){reference-type="eqref" reference="A7"}, [\[A8\]](#A8){reference-type="eqref" reference="A8"} correspond to the weak formulation of the following wave system: $$\label{eq:wave} \left\{ \begin{array}{l} \varepsilon\partial _tZ_\varepsilon+ {\rm div}_x{\bf m}_\varepsilon= \varepsilon{\rm div}_x{\bf h}^1_\varepsilon+ \varepsilon^\gamma h^2_\varepsilon\\[1ex] \varepsilon\partial_t{\bf m}_\varepsilon+ \omega\nabla_xZ_\varepsilon= \varepsilon^\gamma{\rm div}_x\mathbb{H}^3_\varepsilon+ \varepsilon^\gamma{\bf h}^4_\varepsilon\,, \end{array} \right.$$ where we have set ${\bf m}_\varepsilon:= \varrho_\varepsilon{\bf u}_\varepsilon$. Observe that, in contrast with in [@FeNo6A Section 5.4.7], the error terms represented by the right--hand side of [\[eq:wave\]](#eq:wave){reference-type="eqref" reference="eq:wave"} are larger, of order $\varepsilon^\gamma$ with $\gamma\in(0,1)$, whereas $\gamma= 1$ in [@FeNo6A Section 5.4.7]. Notice that a similar situation appeared in [@DS-F-S-WK; @Fan], for instance, in the context of the multiscale analysis for geophysical flows. We are going to apply a strategy similar to the one employed in those papers (see also [@FeNo6A Sections 5.4.5--5.4.7]), based on Lions-Masmoudi compensated compactness [@LIMA6; @LIMA-JMPA], in order to prove the sought convergence [\[A2\]](#A2){reference-type="eqref" reference="A2"}. ## Convergence of the convective term {#ss:conv-conv} First of all, we notice that, owing to [\[w1\]](#w1){reference-type="eqref" reference="w1"}, for proving [\[A2\]](#A2){reference-type="eqref" reference="A2"} it is enough to show that $$\label{A2_bis} \int_0^T \int_{\Omega} {\bf H}^\perp [{\bf m}_\varepsilon] \otimes {\bf H}^\perp [{\bf m}_\varepsilon] : \nabla_x\boldsymbol{\varphi} \ \,{\rm d} {x} \,{\rm d} t \to 0 \qquad \mbox{ as }\quad \varepsilon\to 0$$ for any test function $\boldsymbol{\varphi}$ satisfying [\[cond:test-f\]](#cond:test-f){reference-type="eqref" reference="cond:test-f"}. Next, omitting an approximation procedure, based on the spectral decomposition of the wave operator (see the details in *e.g.* [@FeNo6A Sections 5.4.5, 5.4.6]), we may assume that all the quantities appearing in [\[A2_bis\]](#A2_bis){reference-type="eqref" reference="A2_bis"} and in the wave system [\[eq:wave\]](#eq:wave){reference-type="eqref" reference="eq:wave"} are smooth with respect to the space variable. 
In light of the previous consideration, we can perform an integration by parts in [\[A2_bis\]](#A2_bis){reference-type="eqref" reference="A2_bis"} and then compute $$\begin{aligned} {\rm div}_x( {\bf H}^\perp [{\bf m}_\varepsilon] \otimes {\bf H}^\perp [{\bf m}_\varepsilon] ) &= {\rm div}_x({\bf H}^\perp [{\bf m}_\varepsilon] )\ {\bf H}^\perp [{\bf m}_\varepsilon] + {\bf H}^\perp [{\bf m}_\varepsilon]\cdot\nabla_x{\bf H}^\perp [{\bf m}_\varepsilon] \\ &= {\rm div}_x{\bf m}_\varepsilon\ {\bf H}^\perp [{\bf m}_\varepsilon] + \frac{1}{2} \nabla_x\left| {\bf H}^\perp [{\bf m}_\varepsilon] \right|^2 + {\bf curl}_x({\bf H}^\perp [{\bf m}_\varepsilon]) \times {\bf H}^\perp [{\bf m}_\varepsilon]\,.\end{aligned}$$ Observe that the gradient term identically vanishes, whenever integrated against a test function $\boldsymbol{\varphi}$ as in [\[cond:test-f\]](#cond:test-f){reference-type="eqref" reference="cond:test-f"}. Similarly, ${\bf curl}_x({\bf H}^\perp [{\bf m}_\varepsilon]) \equiv 0$, as ${\bf H}^\perp [{\bf m}_\varepsilon]$ is a perfect gradient. Therefore, it remains to deal with the first term appearing in the right--hand side of the previous relation: for this, we use the wave system [\[eq:wave\]](#eq:wave){reference-type="eqref" reference="eq:wave"} to write $$\begin{aligned} {\rm div}_x{\bf m}_\varepsilon\ {\bf H}^\perp [{\bf m}_\varepsilon] &= -\varepsilon\partial_tZ_\varepsilon\ {\bf H}^\perp [{\bf m}_\varepsilon] + \varepsilon{\rm div}_x{\bf h}^1_\varepsilon\ {\bf H}^\perp [{\bf m}_\varepsilon] + \varepsilon^\gamma h^2_\varepsilon\ {\bf H}^\perp [{\bf m}_\varepsilon].\end{aligned}$$ Owing to the presence of the small factors $\varepsilon$ and $\varepsilon^\gamma$ and the smoothness of all the involved quantities, it is clear that the last two terms on the right do not contribute to the limit [\[A2_bis\]](#A2_bis){reference-type="eqref" reference="A2_bis"}, in the sense that $$\int_0^T \int_{\Omega} \left( \varepsilon{\rm div}_x{\bf h}^1_\varepsilon\ {\bf H}^\perp [{\bf m}_\varepsilon] + \varepsilon^\gamma h^2_\varepsilon\ {\bf H}^\perp [{\bf m}_\varepsilon] \right) \cdot \boldsymbol{\varphi} \ \,{\rm d} {x} \,{\rm d} t \to 0 \qquad \mbox{ as }\quad \varepsilon\to 0.$$ Finally, we use the second equation in [\[eq:wave\]](#eq:wave){reference-type="eqref" reference="eq:wave"} to compute $$\begin{aligned} -\varepsilon\partial_tZ_\varepsilon\ {\bf H}^\perp [{\bf m}_\varepsilon] &= - \varepsilon\partial_t\left(Z_\varepsilon\ {\bf H}^\perp [{\bf m}_\varepsilon]\right) + \varepsilon Z_\varepsilon\ \partial_t{\bf H}^\perp [{\bf m}_\varepsilon] \nonumber \\ &= - \varepsilon\partial_t\left(Z_\varepsilon\ {\bf H}^\perp [{\bf m}_\varepsilon]\right) + \varepsilon^\gamma Z_\varepsilon\left(\mathbb{H}^3_\varepsilon+ {\bf h}^4_\varepsilon\right) - \omega Z_\varepsilon\nabla_xZ_\varepsilon\,.\end{aligned}$$ In particular, since $Z_\varepsilon\nabla_xZ_\varepsilon= \frac{1}{2} \nabla_x(Z_\varepsilon)^2$, the previous computations show that the convergence property [\[A2_bis\]](#A2_bis){reference-type="eqref" reference="A2_bis"} holds true, so in turn we have proved [\[A2\]](#A2){reference-type="eqref" reference="A2"}. The proof of Theorem [Theorem 3](#TM1){reference-type="ref" reference="TM1"} is now completed. A. Abbatiello, E. Feireisl. The Oberbeck-Boussinesq system with non-local boundary conditions. , **81**(2):297--306, 2023. A. Barletta. The Boussinesq approximation for buoyant flows. , **124**, 2022. A. Barletta, M. Celli, and D.A.S. Rees.
On the use and misuse of the Oberbeck-Boussinesq approximation. , **5**(1):298--309, 2023. F. Belgiorno. Notes on the third law of thermodynamics, I. , **36**:8165--8193, 2003. F. Belgiorno. Notes on the third law of thermodynamics, ii. , **36**:8195--8221, 2003. P. Bella, E. Feireisl, and F. Oschmann. Rigorous Derivation of the Oberbeck--Boussinesq Approximation Revealing Unexpected Term. , accepted for publication, 2023. N. Chaudhuri and E. Feireisl. Navier-Stokes-Fourier system with Dirichlet boundary conditions. , **101**(12):4076--4094, 2022. D. Del Santo, F. Fanelli, G. Sbaiz and A. Wróblewska-Kamińska. A multiscale problem for viscous heat-conducting fluids in fast rotation. , **31**(1), Paper No. 21, 63 pp, 2021. B. Ducomet and E. Feireisl. A regularizing effect of radiation in the equations of fluid dynamics. , **28**:661--685, 2005. F. Fanelli. Incompressible and fast rotation limit for barotropic Navier-Stokes equations at large Mach numbers. , **428**, Paper No. 133049, 20 pp, 2021. F. Fanelli and I. Gallagher. Asymptotics of fast rotating density-dependent incompressible fluids in two space dimensions. , **35**(6):1763--1807, 2019 F. Fanelli and E. Zatorska. Low Mach number limit for the degenerate Navier-Stokes equations in presence of strong stratification. **400**(3):1463--1506, 2023. E. Feireisl, D. Gérard-Varet, I. Gallagher, and A. Novotný. Multi-scale analysis of compressible viscous and rotating fluids. **314**(3):641--670, 2012. E. Feireisl, P. Gwiazda, Y.-S. Kwon, and A. Świerczewska-Gwiazda. Mathematical theory of compressible magnetohydrodynamics driven by non--conservative boundary conditions. . In preparation. E. Feireisl and A. Novotný. The Oberbeck-Boussinesq approximation as a singular limit of the full Navier-Stokes-Fourier system. , **11**(2):274--302, 2009. E. Feireisl and A. Novotný. . Advances in Mathematical Fluid Mechanics. Birkhäuser/Springer, Cham, 2017. Second edition. E. Feireisl and A. Novotný. . Birkhäuser--Verlag, Basel, 2022. J. Fröhlich, P. Laure, and R. Peyret. Large departures from Boussinesq approximation in the Rayleigh-Bénard problem. , **4**:1355, 1992. R. Klein, N. Botta, T. Schneider, C.D. Munz, S. Roller, A. Meister, L. Hoffmann, and T. Sonar. Asymptotic adaptive methods for multi-scale problems in fluid mechanics. , **39**:261--343, 2001. P. Lewintan, S Müller, and P. Neff. Korn inequalities for incompatible tensor fields in three space dimensions with conformally invariant dislocation energy. , **60**(4):Paper No. 150, 46, 2021. P.-L. Lions and N. Masmoudi. Incompressible limit for a viscous compressible fluid. , **77**(6):585--627, 1998. P.-L. Lions and N. Masmoudi. Une approche locale de la limite incompressible. , **329**(5):387--392, 1999. N. Masmoudi. Examples of singular limits in hydrodynamics. , 2006. O. D. Schirra. New Korn-type inequalities and regularity of solutions to linear elliptic systems and anisotropic variational problems involving the trace-free part of the symmetric gradient. , **43**(1-2):147--172, 2012. R. Kh. Zeytounian. . Springer-Verlag, Berlin, 2004. [^1]: The work of F.F. has been partially supported by the project CRISIS (ANR-20-CE40-0020-01), operated by the French National Research Agency (ANR). [^2]: The work of E.F. was partially supported by the Czech Sciences Foundation (GAČR), Grant Agreement 21--02411S. The Institute of Mathematics of the Academy of Sciences of the Czech Republic is supported by RVO:67985840.
---
abstract: |
  The intention of this article is to introduce a generalization of Proinov-type contractions via simulation functions. We call this generalized contraction map a Proinov-type $\mathcal{Z}$-contraction. This article establishes the existence and uniqueness of fixed points for these contraction mappings in quasi-metric spaces and also includes explanatory examples with graphical interpretation. As an application, we generate a new iterated function system (IFS) consisting of Proinov-type $\mathcal{Z}$-contractions in quasi-metric spaces. At the end of the paper, we prove the existence of a unique attractor for the IFS consisting of Proinov-type $\mathcal{Z}$-contractions.
---

Generalized Proinov-type contractions using simulation functions with applications to fractals

*Athul Puthusseri $^{a}$, D. Ramesh Kumar$^{b,*}$*

$^{a,b}$*Department of Mathematics, School of Advanced Sciences, Vellore Institute of Technology,\
Vellore-632014, TN, India*\
Mathematics Subject Classification(2020): Primary 47H09, 47H10; Secondary 28A80\
*Keywords: Quasi-metric space, Fixed point, Proinov-type $\mathcal{Z}$-contraction, Simulation functions, Iterated function system.*

------------------------------------------------------------------------

# Introduction

The Banach contraction principle is the most famous and widely used fixed point theorem. It was stated and proved by the renowned Polish mathematician Stefan Banach in 1922. Its applications go well beyond the boundary of mathematics, to other branches of science, engineering, technology, economics and so on. Many exciting results in fixed point theory came out as extensions of the Banach contraction principle. Recently, in 2020, P. D. Proinov [@21] proved a fixed-point result for a map $T$ from a complete metric space $(X,d)$ to itself satisfying the contraction-type condition $$\label{eq:1.1} \zeta\left(d\left(Tx, Ty\right)\right)\leq \eta\left(d\left(x, y\right)\right),\ \ \ \text{for all } x,y\in X \text{ with } d\left(Tx, Ty\right)> 0,$$ where $\zeta, \eta :(0,\infty)\rightarrow \mathbb{R}$ are two functions satisfying the condition $\eta(t)<\zeta(t)$ for $t>0$.\
The main fixed point result given by P. D. Proinov is:

**Theorem 1**. *[@21] Let $(X,d)$ be a complete metric space and $T:X\rightarrow X$ be a mapping satisfying condition ([\[eq:1.1\]](#eq:1.1){reference-type="ref" reference="eq:1.1"}), where the functions $\zeta, \eta :(0, \infty)\rightarrow\mathbb{R}$ satisfy the following conditions:*

1. *$\zeta$ is nondecreasing;*

2. *$\eta(t)<\zeta(t)$ for any $t>0$;*

3.
*$\limsup\limits_{t\rightarrow \epsilon+}\eta(t)< \zeta(\epsilon+).$*

*Then $T$ has a unique fixed point $x^*\in X$ and the iterative sequence $\{T^nx\}$ converges to $x^*$ for every $x\in X$.*

He showed that this result extends several well-known fixed point results in the literature, including those of Amini-Harandi and Petrusel [@1], Moradi [@22], Geraghty [@11], Jleli and Samet [@14], Wardowski and Van Dung [@5], Secelean [@17], etc.\
In 2015, Khojasteh et al. [@8] introduced a new method for the study of fixed points using simulation functions. They came up with a new kind of contraction map called a $\mathcal{Z}$-contraction.

**Definition 1**. *[@8] A simulation function is a mapping $\xi:[0,\infty)\times[0,\infty)\rightarrow\mathbb{R}$ which satisfies the following conditions:*

1. *$\xi(0,0)= 0$;*

2. *$\xi(s, t)< t-s$ for all $s, t>0$;*

3. *for any two sequences $\{s_n\}, \{t_n\}$ in $(0, \infty)$ with the property $\lim\limits_{n\rightarrow\infty}s_n=\lim\limits_{n\rightarrow\infty}t_n>0$, it is true that $\limsup\limits_{n\rightarrow\infty}\xi\left(s_n,t_n\right)<0$.*

We use the notation $\mathcal{Z}$ to represent the set of all simulation functions. Here are a few illustrations of simulation functions.

**Example 1**. *[@8] Let $\xi_i:[0,\infty)\times[0,\infty)\rightarrow\mathbb{R}$ for $i=1,2,3$ be defined by*

1. *$\xi_1(s,t)= p(t)-q(s)$ for all $s,t\in [0,\infty)$, where $p,q:[0, \infty)\rightarrow[0,\infty)$ are continuous functions such that $p(t)=q(t)=0$ if and only if $t=0$ and $p(t)<t\leq q(t)$ for all $t>0$.*

2. *$\xi_2(s, t)= t-\frac{f(s,t)}{g(s,t)}s$ for all $s,t\in [0,\infty)$, where $f,g:[0, \infty)\times[0,\infty)\rightarrow[0,\infty)$ are continuous functions with respect to each variable such that $f(s, t)>g(s,t)$ for all $s, t>0$.*

3. *$\xi_3(s,t)=t-h(t)-s$ for all $s,t\in[0,\infty)$ where $h:[0,\infty)\rightarrow[0,\infty)$ is a continuous function satisfying $h(t)=0$ if and only if $t=0$.*

*Then $\xi_i\in\mathcal{Z}$ for $i=1,2,3$.*

We define the $\mathcal{Z}$-contraction as follows:

**Definition 2**. *[@8] Let $(X, d)$ be a metric space, and $T:X\rightarrow X$. Then $T$ is said to be a $\mathcal{Z}$-contraction with respect to some $\xi\in\mathcal{Z}$ if $\xi\left(d\left(Tx, Ty\right), d(x, y)\right)\geq 0$ for all $x, y\in X$.*

The following theorem shows that a $\mathcal{Z}$-contraction has a unique fixed point.

**Theorem 2**. *[@8] Let $T:X\rightarrow X$ be a $\mathcal{Z}$-contraction with respect to $\xi\in \mathcal{Z}$, where $(X,d)$ is a complete metric space. Then there exists a unique fixed point, say $x^*\in X$, of $T$. Furthermore, the iterated sequence $\{T^nx\}$ converges to $x^*$ for every $x\in X$.*

A quasi-metric is a generalization of a metric that does not require the symmetry condition. This notion was introduced in the literature by W. A. Wilson [@23].

**Definition 3**. *[@23] Let $X$ be a nonempty set. Define a function $q:X\times X\rightarrow\mathbb{R}$.
Then $q$ is a quasi-metric on $X$ if it satisfies the following conditions:*

1. *$q(x,y)\geq 0$ for every $x,y\in X$.*

2. *$q(x, y)= 0$ if and only if $x=y$ for every $x,y\in X$.*

3. *$q(x,y)\leq q(x,z)+q(z,y)$ for any $x,y,z\in X$.*

*The set $X$ along with $q$ is called a quasi-metric space and is denoted as $(X,q)$.*

Since there is no symmetry, $q(x,y)$ need not be equal to $q(y,x)$ for $x,y\in X$. Thus, in quasi-metric spaces, we have two topologies, called the forward topology and the backward topology. Consequently, concepts such as convergence of sequences, continuity of functions, compactness and completeness each come in two versions, forward and backward. By adding a weaker symmetry condition called $\delta$-symmetry we obtain a subclass of quasi-metric spaces, namely the $\delta$-symmetric quasi-metric spaces, which have nicer properties than general quasi-metric spaces.

**Definition 4**. *A quasi-metric space $(X,q)$ is said to be a $\delta$-symmetric quasi-metric space if there exists $\delta>0$ such that $q(x,y)\leq \delta q(y,x)$ for all $x,y\in X$.*

In a $\delta$-symmetric quasi-metric space, one can easily observe that forward convergence implies backward convergence and vice versa. In this article, we introduce new types of contraction mappings called $f$-Proinov-type $\mathcal{Z}$-contractions and $b$-Proinov-type $\mathcal{Z}$-contractions in $\delta$-symmetric quasi-metric spaces by using simulation functions. We prove the existence and uniqueness of fixed points for these newly introduced contraction mappings. These fixed point theorems are extended to fractal spaces obtained from $\delta$-symmetric quasi-metric spaces in the last section. We construct an iterated function system consisting of $f$-Proinov-type $\mathcal{Z}$-contractions towards the end of the paper. Further, we prove the existence of a unique attractor for this iterated function system.

# Preliminaries

This section includes some basic definitions and results on quasi-metric spaces which are required in the later sections of this paper. Suppose $(X, q)$ is a quasi-metric space. Then it need not always be the case that $q(x, y)= q(y, x)$ for $x,y\in X$. So the open balls $B_f(x, r)= \{y\in X: q(x,y)<r\}$ and $B_b(x,r)=\{y\in X: q(y,x)<r\}$, for $x\in X$ and $r>0$, can be two different sets; they are called the forward and backward open balls, centered at $x$ with radius $r$, respectively. These two families of basic open balls lead to the following two topologies on $X$.

**Definition 5**. *[@23] The topology $\tau_f$, whose basis is the collection of all forward open balls $B_f(x, r)= \{y\in X: q(x, y)< r\}$ for $x\in X$ and $r> 0$, is called the forward topology on $X$.\
Analogously, the topology $\tau_b$, whose basis consists of all backward open balls $B_b(x, r)= \{y\in X: q(y, x)< r\}$ for $x\in X$ and $r> 0$, is called the backward topology on $X$.*

The following are some examples of quasi-metric spaces:

**Example 2**. *Let $X=\mathbb{R}$ and $q:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}$ be defined by $$q(\alpha,\beta)= \begin{cases} \beta-\alpha & \text{if } \beta\geq \alpha\\ 1 & \text{if } \beta<\alpha. \end{cases}$$This $q$ is a quasi-metric on $X$, known as the Sorgenfrey quasi-metric.
Here $\tau_f$ is the lower-limit topology and $\tau_b$ is the upper-limit topology on $\mathbb{R}$.*

**Example 3**. *For any $\lambda>0$, define $q:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}$ by $$q(\alpha,\beta)= \begin{cases} \alpha-\beta &\text{if }\alpha\geq \beta\\ \lambda(\beta-\alpha) &\text{if }\alpha<\beta. \end{cases}$$ Here $q$ is a $\delta$-symmetric quasi-metric on $\mathbb{R}$ with $\delta=\max\{\lambda, 1/\lambda\}$. Both the forward and backward topologies here coincide with the usual topology on $\mathbb{R}$.*

These two topologies give rise to two different notions of convergence in the space $X$, namely forward convergence (or $f$-convergence) and backward convergence (or $b$-convergence). Here, $f$-convergence is convergence in the topology $\tau_f$ and $b$-convergence is convergence in $\tau_b$. Equivalently, they can be defined as follows:

**Definition 6**. *Let $\{a_n\}$ be a sequence in the quasi-metric space $(X,q)$. Then,*

1. *$\{a_n\}$ is said to $f$-converge to $a\in X$ if $q(a, a_n)\rightarrow 0$ as $n\rightarrow\infty$. In this case we write $a_n\xrightarrow[]{f}a$.*

2. *$\{a_n\}$ is said to $b$-converge to $a\in X$ if $q(a_n, a)\rightarrow 0$ as $n\rightarrow\infty$. In this case we write $a_n\xrightarrow[]{b}a$.*

We have different notions of continuity in quasi-metric spaces since continuity always depends on the underlying topology.

**Definition 7**. *[@16] Let $(X,q)$ and $(Y, \rho)$ be two quasi-metric spaces. Then a function $g:X\rightarrow Y$ is $ff$-continuous at $x\in X$ if for any sequence $x_n\xrightarrow[]{f}x$ in $(X,q)$, one has $g(x_n)\xrightarrow[]{f}g(x)$ in $(Y,\rho)$. Furthermore, $g$ is $ff$-continuous on $X$ if it is $ff$-continuous at each point $x\in X$. If $Y=\mathbb{R}$ with the usual topology, then $g$ is said to be $f$-continuous. Analogously, we have the notions of $fb$-continuity, $bf$-continuity, $bb$-continuity and $b$-continuity.*

The next proposition concerns the continuity of a quasi-metric.

**Proposition 1**. *[@16] If $f$-convergence implies $b$-convergence in a quasi-metric space $(X,q)$, then $q$ is $f$-continuous.*

**Remark 1**. *Let $\{x_n\}$ be a sequence in $(X, q)$, a $\delta$-symmetric quasi-metric space. Then $\{x_n\}$ is $f$-convergent if and only if it is $b$-convergent in $X$. Therefore, the map $(x,y)\mapsto q(x,y)$ is $f$-continuous.*

*Proof.* Suppose that $\{x_n\}$ $f$-converges to $x\in X$. Then we have $\lim\limits_{n\rightarrow\infty}q(x, x_n)= 0$. Since $q$ is $\delta$-symmetric, we have $q(x_n, x)\leq \delta q(x, x_n)$ for all $n\in\mathbb{N}$. Thus, we get $\lim\limits_{n\rightarrow\infty}q(x_n, x)\leq \delta \lim\limits_{n\rightarrow\infty}q(x, x_n)= 0$, which implies that $\{x_n\}$ $b$-converges to $x$. The converse follows in the same way.\
The second part follows directly from Proposition [Proposition 1](#prop:2.1){reference-type="ref" reference="prop:2.1"}. ◻

Analogous to compactness in metric spaces we have forward and backward compactness in quasi-metric spaces.

**Definition 8**. *[@16] A compact subset of the topological space $(X, \tau_f)$ is called a forward compact subset or simply an $f$-compact subset of $X$.
Similarly, a compact subset of the topological space $(X, \tau_b)$ is called a backward compact or $b$-compact subset of $X$.*

# Main Results

The results on the existence and uniqueness of fixed points of Proinov-type $\mathcal{Z}$-contractions on quasi-metric spaces are presented in this section.

## Auxiliary results

Here we state some definitions and prove some results that will be used in the proof of our main theorem.

**Definition 9**. *Let $(X, q)$ be a quasi-metric space. A mapping $T: X\rightarrow X$ is said to be a forward Proinov-type $\mathcal{Z}$-contraction or $f$-Proinov-type $\mathcal{Z}$-contraction with respect to $\xi\in\mathcal{Z}$ if $$\label{eq:1} \xi\left( \zeta\left(q\left(Tx, Ty\right)\right), \eta\left(q\left(x, y\right)\right)\right)\geq 0$$for all $x, y\in X$, where $\zeta, \eta: (0, \infty)\rightarrow \mathbb{R}$ are two control functions with $\eta(t) < \zeta(t)$ for all $t\in Im(q)\setminus\{0\}$.*

**Definition 10**. *Let $T$ be a self-mapping on a quasi-metric space $(X, q)$. Then $T$ is said to be a backward Proinov-type $\mathcal{Z}$-contraction or $b$-Proinov-type $\mathcal{Z}$-contraction with respect to $\xi\in\mathcal{Z}$ if $$\label{eq:2} \xi\left( \zeta\left(q\left(Tx, Ty\right)\right), \eta\left(q\left(y, x\right)\right)\right)\geq 0$$for all $x, y\in X$, where $\zeta, \eta: (0, \infty)\rightarrow \mathbb{R}$ are two control functions with $\eta(t) < \zeta(t)$ for all $t\in Im(q)\setminus\{0\}$.*

**Proposition 2**. *An $f$-Proinov-type $\mathcal{Z}$-contraction is both $ff$-continuous and $bb$-continuous if the control function $\zeta$ is nondecreasing.*

*Proof.* Consider a quasi-metric space $(X,q)$ and an $f$-Proinov-type $\mathcal{Z}$-contraction $T:X\rightarrow X$ with respect to the simulation function $\xi$. Let $x\in X$. Consider a sequence $\{x_n\}$ in $X$ which $f$-converges to $x$, that is, $q(x,x_n)\rightarrow 0$ as $n\rightarrow\infty$. Then by inequality ([\[eq:1\]](#eq:1){reference-type="ref" reference="eq:1"}) and condition $(z_2)$ in Definition [Definition 1](#defn:1.2){reference-type="ref" reference="defn:1.2"} we get the following: $$\begin{split} 0&\leq\xi\left(\zeta\left(q\left(Tx, Tx_n\right)\right),\eta\left(q\left(x,x_n\right)\right)\right)\\&<\eta\left(q\left(x,x_n\right)\right)-\zeta\left(q\left(Tx, Tx_n\right)\right). \end{split}$$ This implies $\zeta\left(q\left(Tx, Tx_n\right)\right)<\eta\left(q\left(x,x_n\right)\right)$. Since $\eta(t)<\zeta(t)$ for all $t\in Im(q)\setminus\{0\}$, we obtain $\zeta\left(q\left(Tx, Tx_n\right)\right)<\eta\left(q\left(x,x_n\right)\right)<\zeta\left(q\left(x,x_n\right)\right)$. As $\zeta$ is nondecreasing, we get $q\left(Tx, Tx_n\right)<q\left(x,x_n\right)\rightarrow 0$, which implies $Tx_n\xrightarrow[]{f}Tx$. Hence $T$ is $ff$-continuous.\
The proof of $bb$-continuity follows by a similar argument. ◻

**Proposition 3**. *A $b$-Proinov-type $\mathcal{Z}$-contraction is both $bf$-continuous and $fb$-continuous if the control function $\zeta$ is nondecreasing.*

*Proof.* The proof is analogous to that of Proposition [Proposition 2](#prop:3.1){reference-type="ref" reference="prop:3.1"}. ◻

**Proposition 4**.
*In a $\delta$-symmetric quasi-metric space $(X,q)$, both $f$-Proinov-type $\mathcal{Z}$-contractions and $b$-Proinov-type $\mathcal{Z}$-contractions satisfy all four types of continuity if the control function $\zeta$ is nondecreasing.*

*Proof.* Since $(X,q)$ is a $\delta$-symmetric quasi-metric space, $f$-convergence implies $b$-convergence and vice versa in $X$. Then the result follows from Propositions [Proposition 2](#prop:3.1){reference-type="ref" reference="prop:3.1"} and [Proposition 3](#prop:3.2){reference-type="ref" reference="prop:3.2"}. ◻

The notion of asymptotic regularity was brought into the literature by Browder and Petryshyn in [@7].

**Definition 11**. *[@7] Let $(X, d)$ be a metric space and $T$ be a self-mapping on $X$. Then $T$ is said to be asymptotically regular at a point $x\in X$ if $\lim\limits_{n\rightarrow\infty}d\left(T^nx, T^{n+1}x\right)= 0$.\
Furthermore, $T$ is asymptotically regular on $X$ if it is asymptotically regular at each $x\in X$.*

Inspired by this definition, Hamed H. Alsulami et al. [@9] introduced the idea of asymptotic regularity in quasi-metric spaces as follows:

**Definition 12**. *[@9] Let $T$ be a self-map on a quasi-metric space $(X, q)$. Then $T$ is said to be*

1. *asymptotically forward regular or asymptotically $f$-regular at a point $x\in X$ if $\lim\limits_{n\rightarrow\infty}q\left(T^nx, T^{n+1}x\right)= 0$, and asymptotically $f$-regular on $X$ if it is asymptotically $f$-regular at every point of $X$;*

2. *asymptotically backward regular or asymptotically $b$-regular at a point $x\in X$ if $\lim\limits_{n\rightarrow\infty}q\left(T^{n+1}x, T^{n}x\right)= 0$, and asymptotically $b$-regular on $X$ if it is asymptotically $b$-regular at every point of $X$;*

3. *asymptotically regular if it is both asymptotically $f$-regular and asymptotically $b$-regular.*

The following lemma provides conditions for an $f$-Proinov-type $\mathcal{Z}$-contraction to be asymptotically regular.

**Lemma 1**. *Let $T$ be an $f$-Proinov-type $\mathcal{Z}$-contraction with respect to $\xi\in\mathcal{Z}$ on a quasi-metric space $(X, q)$. If the control functions $\zeta \text{ and }\eta$ satisfy the following conditions:*

1. *$\zeta$ is nondecreasing;*

2. *$\eta(t)< \zeta(t)$ for every $t\in Im(q)\setminus\{0\}$;*

3. *$\lim\limits_{n\rightarrow\infty}\zeta\left(x_n\right)= \lim\limits_{n\rightarrow\infty}\zeta\left(y_n\right)> 0$ for any two sequences $\{x_n\} \text{ and }\{y_n\}$ in $(0,\infty)$ with $\lim\limits_{n\rightarrow\infty}x_n= \lim\limits_{n\rightarrow\infty}y_n>0$.*

*Then $T$ is asymptotically regular in $X$.*

*Proof.* Let $x\in X$. Consider the sequence $\{T^nx\}$. If one can find an $N\in\mathbb{N}$ such that $T^nx= T^{n+1}x$ for every $n\geq N$, then the lemma follows.
If not, suppose that $T^nx\neq T^{n+1}x$ for all $n\in\mathbb{N}$. Then, $$\begin{split} 0 &\leq\xi\left(\zeta\left(q\left(T^nx, T^{n+1}x\right)\right), \eta\left(q\left(T^{n-1}x, T^nx\right)\right)\right)\\ &\leq\eta\left(q\left(T^{n-1}x, T^nx\right)\right)- \zeta\left(q\left(T^nx, T^{n+1}x\right)\right). \end{split}$$ Then by condition $(ii)$ in the hypothesis, we get $$\zeta\left(q\left(T^nx, T^{n+1}x\right)\right)\leq\eta\left(q\left(T^{n-1}x, T^nx\right)\right)< \zeta\left(q\left(T^{n-1}x, T^nx\right)\right).$$ From condition $(i)$ in the hypothesis, it follows that $q\left(T^nx, T^{n+1}x\right)\leq q\left(T^{n-1}x, T^nx\right)$. Thus, the sequence $\{q\left(T^nx, T^{n+1}x\right)\}$ is decreasing and bounded below. Hence it converges to a limit, say $r\geq 0$. Let $r>0$. Then we have $$\begin{split} 0 &\leq\xi\left(\zeta\left(q\left(T^nx, T^{n+1}x\right)\right), \eta\left(q\left(T^{n-1}x, T^nx\right)\right)\right)\\ &\leq\eta\left(q\left(T^{n-1}x, T^nx\right)\right)- \zeta\left(q\left(T^nx, T^{n+1}x\right)\right)\\ &< \zeta\left(q\left(T^{n-1}x, T^nx\right)\right)- \zeta\left(q\left(T^nx, T^{n+1}x\right)\right). \end{split}$$ From condition $(iii)$ in the hypothesis, letting $n\rightarrow\infty$ we get $$\lim\limits_{n\rightarrow\infty}\eta\left(q\left(T^{n-1}x, T^nx\right)\right)=\lim\limits_{n\rightarrow\infty}\zeta\left(q\left(T^nx, T^{n+1}x\right)\right)> 0.$$ Now, applying condition $(z_3)$ of the simulation function, we obtain $$\limsup\limits_{n\rightarrow\infty}\xi\left(\zeta\left(q\left(T^nx, T^{n+1}x\right)\right), \eta\left(q\left(T^{n-1}x, T^nx\right)\right)\right)< 0.$$ This leads to a contradiction. Therefore $r=0$, which proves that $T$ is asymptotically $f$-regular. We can show that $T$ is asymptotically $b$-regular in a similar way. Therefore, it follows that $T$ is asymptotically regular in $X$. ◻

The next lemma provides conditions for a $b$-Proinov-type $\mathcal{Z}$-contraction to be asymptotically regular. Its proof differs slightly from the proof of the previous lemma.

**Lemma 2**. *Let $T$ be a $b$-Proinov-type $\mathcal{Z}$-contraction, on a quasi-metric space $(X, q)$, with respect to $\xi\in\mathcal{Z}$. Let the control functions $\zeta, \eta$ satisfy the following conditions:*

1. *$\zeta$ is nondecreasing;*

2. *$\eta(t)< \zeta(t)$ for all $t\in Im(q)\setminus\{0\}$;*

3. *if $\{x_n\} \text{ and } \{y_n\}$ are two sequences in $(0, \infty)$ such that $\lim\limits_{n\rightarrow\infty}x_n= \lim\limits_{n\rightarrow\infty}y_n> 0$ then $\lim\limits_{n\rightarrow\infty}\zeta(x_n)= \lim\limits_{n\rightarrow\infty}\zeta(y_n)> 0$.*

*Then $T$ is asymptotically regular in $X$.*

*Proof.* Let $x\in X$. Define $x_n= q(T^nx, T^{n +1}x)$. If one can find an $N\in \mathbb{N}$ such that $T^nx= T^{n+1}x$ for all $n\geq N$, then the lemma follows. If not, suppose that $T^nx\neq T^{n+1}x$ for all $n\in\mathbb{N}$. Then, $$\begin{split} 0 &\leq \xi\left(\zeta\left(q\left(T^nx, T^{n+1}x\right)\right), \eta\left(q\left(T^nx, T^{n-1}x\right)\right)\right)\\ &\leq\eta\left(q\left(T^nx, T^{n-1}x\right)\right)-\zeta\left(q\left(T^nx, T^{n+1}x\right)\right), \end{split}$$ which implies $\zeta\left(q\left(T^nx, T^{n+1}x\right)\right)\leq\eta\left(q\left(T^nx, T^{n-1}x\right)\right)$.
Then it follows from this, condition $(ii)$ in the hypothesis, and one more application of the contraction condition that $\zeta\left(q\left(T^nx, T^{n+1}x\right)\right)\leq\eta\left(q\left(T^nx, T^{n-1}x\right)\right)<\zeta\left(q\left(T^nx, T^{n-1}x\right)\right)\leq\eta\left(q\left(T^{n-2}x, T^{n-1}x\right)\right)<\zeta\left(q\left(T^{n-2}x, T^{n-1}x\right)\right)$. Therefore, from condition $(i)$ in the hypothesis we get $q\left(T^nx, T^{n+1}x\right)\leq q\left(T^{n-2}x, T^{n-1}x\right)$, that is, $x_n\leq x_{n-2}$ for all $n\in\mathbb{N}$. This implies that the sequences $\{x_{2n}\} \text{ and }\{x_{2n+1}\}$ are decreasing. We claim that both sequences $\{x_{2n}\}\text{ and }\{x_{2n+1}\}$ converge to zero. If not, let $x_{2n}\rightarrow r>0$. Then, $$\begin{split} 0 &\leq\xi\left(\zeta\left(q\left(T^{2n}x, T^{2n+1}x\right)\right), \eta\left(q\left(T^{2n}x, T^{2n-1}x\right)\right)\right)\\ &\leq \eta\left(q\left(T^{2n}x, T^{2n-1}x\right)\right)- \zeta\left(q\left(T^{2n}x, T^{2n+1}x\right)\right)\\&\leq\zeta\left(q\left(T^{2n}x, T^{2n-1}x\right)\right)- \zeta\left(q\left(T^{2n}x, T^{2n+1}x\right)\right) \end{split}$$ From condition $(iii)$ in the hypothesis, we get $$\lim\limits_{n\rightarrow\infty}\zeta\left(q\left(T^{2n}x, T^{2n-1}x\right)\right)=\lim\limits_{n\rightarrow\infty} \zeta\left(q\left(T^{2n}x, T^{2n+1}x\right)\right)>0.$$ This implies that $\lim\limits_{n\rightarrow\infty}\eta\left(q\left(T^{2n}x, T^{2n-1}x\right)\right)=\lim\limits_{n\rightarrow\infty}\zeta\left(q\left(T^{2n}x, T^{2n+1}x\right)\right)>0$. Then by condition $(z_3)$ of the simulation function we get $$\limsup\limits_{n\rightarrow\infty}\xi\left(\zeta\left(q\left(T^{2n}x, T^{2n+1}x\right)\right), \eta\left(q\left(T^{2n}x, T^{2n-1}x\right)\right)\right)< 0,$$ which gives a contradiction. Thus $\{x_{2n}\}$ converges to zero. Similarly, we can prove that $\{x_{2n+1}\}$ also converges to zero. Since both sequences $\{x_{2n}\} \text{ and }\{x_{2n+1}\}$ decrease and converge to zero, we get that $\{x_n\}$ also converges to zero. Hence $T$ is asymptotically $f$-regular. Similarly, we can prove that $T$ is asymptotically $b$-regular, and hence it follows that $T$ is asymptotically regular. ◻

The following lemmas are crucial for demonstrating our key findings.

**Lemma 3**. *[@2] Let $\{x_n\}$ be a sequence such that $\lim\limits_{n\rightarrow\infty}q\left(x_n, x_{n+1}\right)= 0$ in a $\delta$-symmetric quasi-metric space $(X, q)$. If $\{x_n\}$ is not $f$-Cauchy, then one can find an $\epsilon> 0$ and two subsequences $\{x_{n_k}\} \text{ and }\{x_{m_k}\} \text{ of }\{x_n\}$ such that $k< m_k< n_k$ and $\lim\limits_{k\rightarrow\infty}q\left(x_{m_k}, x_{n_k}\right)= \lim\limits_{k\rightarrow\infty}q\left(x_{m_k+1}, x_{n_k}\right)= \lim\limits_{k\rightarrow\infty}q\left(x_{m_k}, x_{n_k+1}\right)= \lim\limits_{k\rightarrow\infty}q\left(x_{m_k+1}, x_{n_k+1}\right)= \epsilon$.*

**Lemma 4**. *Let $\{x_n\}$ be a sequence such that $\lim\limits_{n\rightarrow\infty}q\left(x_n, x_{n+1}\right)= 0$ in a $\delta$-symmetric quasi-metric space $(X, q)$.
If the sequence $\{x_n\}$ is not $f$-Cauchy, then there exist $\epsilon> 0$ and two subsequences $\{x_{n_k}\} \text{ and }\{x_{m_k}\} \text{ of }\{x_n\}$ such that $k< m_k< n_k$ and $\lim\limits_{k\rightarrow\infty}q\left(x_{m_k}, x_{n_k}\right)= \lim\limits_{k\rightarrow\infty}q\left(x_{m_k-1}, x_{n_k}\right)= \lim\limits_{k\rightarrow\infty}q\left(x_{m_k-1}, x_{n_k-1}\right)= \epsilon$.*

*Proof.* Since $(X, q)$ is a $\delta$-symmetric quasi-metric space, we can always write $q\left(x_{n+1}, x_n\right)\leq\delta q\left(x_n, x_{n+1}\right)$. Therefore, the sequence $\{q\left(x_{n+1}, x_n\right)\}$ also converges to zero. If $\{x_n\}$ is not $f$-Cauchy, then we can find an $\epsilon> 0$ and two subsequences $\{x_{m_k}\}\text{ and }\{x_{n_k}\}$ of $\{x_n\}$ with $k< m_k< n_k$ such that $q\left(x_{m_k}, x_{n_k}\right)\geq\epsilon$ and $q\left(x_{m_k-1}, x_{n_k}\right)<\epsilon$. Then, $$\epsilon\leq q\left(x_{m_k}, x_{n_k}\right)\leq q\left(x_{m_k}, x_{m_k-1}\right)+q\left(x_{m_k-1}, x_{n_k}\right)< q\left(x_{m_k}, x_{m_k-1}\right)+\epsilon.$$ Since $\lim\limits_{n\rightarrow\infty}q\left(x_{n+1}, x_n\right)= 0$, we get $$\lim\limits_{k\rightarrow\infty}q\left(x_{m_k}, x_{n_k}\right)= \lim\limits_{k\rightarrow\infty}q\left(x_{m_k-1}, x_{n_k}\right)= \epsilon.$$ Now we have $$q\left(x_{m_k-1}, x_{n_k}\right)\leq q\left(x_{m_k-1}, x_{n_k-1}\right)+q\left(x_{n_k-1}, x_{n_k}\right)\leq q\left(x_{m_k-1}, x_{n_k}\right)+q\left(x_{n_k}, x_{n_k-1}\right)+q\left(x_{n_k-1}, x_{n_k}\right).$$ Since $\lim\limits_{k\rightarrow\infty}q\left(x_{m_k-1}, x_{n_k}\right)=\epsilon$ and $\lim\limits_{k\rightarrow\infty}q\left(x_{n_k}, x_{n_k-1}\right)= \lim\limits_{k\rightarrow\infty}q\left(x_{n_k-1}, x_{n_k}\right)= 0$, letting $k\rightarrow\infty$ we get $\lim\limits_{k\rightarrow\infty}q\left(x_{m_k-1}, x_{n_k-1}\right)=\epsilon$. ◻

## Fixed point theorems for forward and backward Proinov-type $\mathcal{Z}$-contractions

We are now ready to establish the existence and uniqueness of a fixed point for $f$-Proinov-type $\mathcal{Z}$-contractions and $b$-Proinov-type $\mathcal{Z}$-contractions.

**Theorem 3**. *Let $(X, q)$ be an $f$-complete $\delta$-symmetric quasi-metric space. Let $T:X\rightarrow X$ be an $f$-Proinov-type $\mathcal{Z}$-contraction with respect to $\xi\in\mathcal{Z}$. If the control functions $\zeta \text{ and }\eta$ satisfy the following conditions:*

1. *$\zeta$ is nondecreasing;*

2. *$\eta(t)< \zeta(t)$ for every $t\in Im(q)\setminus\{0\}$;*

3. *if $\{x_n\} \text{ and }\{y_n\}$ are two sequences in $(0,\infty)$ such that $\lim\limits_{n\rightarrow\infty}x_n= \lim\limits_{n\rightarrow\infty}y_n>0$, then $\lim\limits_{n\rightarrow\infty}\zeta\left(x_n\right)= \lim\limits_{n\rightarrow\infty}\zeta\left(y_n\right)> 0$.*

*Then $T$ has a unique fixed point in $X$.
Moreover, the iterative sequence $\{T^nx\}$ $f$-converges to the fixed point for any $x\in X$.*

*Proof.* Let $x\in X$. By Lemma [Lemma 1](#lem:3.1){reference-type="ref" reference="lem:3.1"} it is clear that $T$ is asymptotically $f$-regular. Thus, the sequence $\{q\left(T^nx, T^{n+1}x\right)\}$ converges to zero. Define, for each $n\in\mathbb{N}$, $x_n=T^nx$. We claim that $\{x_n\}$ is $f$-Cauchy. If not, then by Lemma [Lemma 3](#lem:3.3){reference-type="ref" reference="lem:3.3"} one can find an $\epsilon> 0$ and subsequences $\{x_{m_k}\}, \{x_{n_k}\}$ of $\{x_n\}$ with $k< m_k< n_k$ such that $\lim\limits_{k\rightarrow\infty}q\left(x_{m_k}, x_{n_k}\right)= \lim\limits_{k\rightarrow\infty}q\left(x_{m_k+1}, x_{n_k+1}\right)= \epsilon> 0$. Now, from the contraction condition, we have $$\begin{split} 0 &\leq\xi\left(\zeta\left(q\left(x_{m_k+1}, x_{n_k+1}\right)\right), \eta\left(q\left(x_{m_k}, x_{n_k}\right)\right)\right)\\ &< \eta\left(q\left(x_{m_k}, x_{n_k}\right)\right)- \zeta\left(q\left(x_{m_k+1}, x_{n_k+1}\right)\right)\\ &< \zeta\left(q\left(x_{m_k}, x_{n_k}\right)\right)- \zeta\left(q\left(x_{m_k+1}, x_{n_k+1}\right)\right). \end{split}$$ As $k\rightarrow\infty$, by condition $(iii)$ in the hypothesis, we get $$\lim\limits_{k\rightarrow\infty}\zeta\left(q\left(x_{m_k}, x_{n_k}\right)\right)= \lim\limits_{k\rightarrow\infty}\zeta\left(q\left(x_{m_k+1}, x_{n_k+1}\right)\right)> 0.$$ This implies that $\lim\limits_{k\rightarrow\infty}\eta\left(q\left(x_{m_k}, x_{n_k}\right)\right)= \lim\limits_{k\rightarrow\infty}\zeta\left(q\left(x_{m_k+1}, x_{n_k+1}\right)\right)> 0$. Hence, from condition $(z_3)$ of simulation functions, we get $$\limsup\limits_{k\rightarrow\infty}\xi\left(\zeta\left(q\left(x_{m_k+1}, x_{n_k+1}\right)\right), \eta\left(q\left(x_{m_k}, x_{n_k}\right)\right)\right)< 0,$$ which gives a contradiction. Thus $\{x_n\}$ is $f$-Cauchy. Since $X$ is $f$-complete, $\{x_n\}$ $f$-converges in $X$, say to $w$. Now, we claim that $w$ is a fixed point of $T$. Indeed, we have $$\begin{split} 0 &\leq\xi\left(\zeta\left(q\left(Tw, T^nx\right)\right), \eta\left(q\left(w, T^{n-1}x\right)\right)\right)\\&< \eta\left(q\left(w, T^{n-1}x\right)\right)- \zeta\left(q\left(Tw, T^nx\right)\right), \end{split}$$ which implies $\zeta\left(q\left(Tw, T^nx\right)\right)< \eta\left(q\left(w, T^{n-1}x\right)\right)$. Now, using condition $(ii)$ followed by $(i)$ from the hypothesis, we get $$\zeta\left(q\left(Tw, T^nx\right)\right)< \eta\left(q\left(w, T^{n-1}x\right)\right)< \zeta\left(q\left(w, T^{n-1}x\right)\right),$$ which implies $q\left(Tw, T^nx\right)< q\left(w, T^{n-1}x\right)$. Then we get $$0\leq\lim\limits_{n\rightarrow\infty}q\left(Tw, T^nx\right)\leq \lim\limits_{n\rightarrow\infty}q\left(w, T^{n-1}x\right)= 0,$$ which implies $Tw= \lim\limits_{n\rightarrow\infty}T^nx= w$. Hence $w$ is a fixed point of $T$.\
To prove uniqueness, let $w'\in X$ be another fixed point of $T$.
Then, $$\begin{split} 0 &\leq\xi\left(\zeta\left(q\left(Tw, Tw'\right)\right),\eta\left(q\left(w, w'\right)\right)\right)\\&< \eta\left(q\left(w, w'\right)\right)- \zeta\left(q\left(Tw, Tw'\right)\right)\\&< \zeta\left(q\left(w, w'\right)\right)- \zeta\left(q\left(Tw, Tw'\right)\right)\\&= \zeta\left(q\left(w, w'\right)\right)- \zeta\left(q\left(w, w'\right)\right)\\&= 0, \end{split}$$ which gives a contradiction. Thus, the fixed point of $T$ is unique. ◻

Next, we give an example that illustrates our theorem.

**Example 4**. *Consider $X=[0, 1]$. Define $q: X\times X\rightarrow \mathbb{R}$ by $$q(x, y)= \begin{cases} 2x & \text{ if } x>y\\ y & \text{ if } x<y\\ 0 & \text{ if } x=y. \end{cases}$$ It is easy to see that $q$ is a $2$-symmetric quasi-metric on $X$. Also, $X$ is $f$-complete under the quasi-metric $q$. Define $T: X\rightarrow X$ by $T(x)= \frac{x^2}{4x^2+3}$. Clearly, $T$ is an increasing map. Also consider the control functions $\zeta, \eta:(0, \infty)\rightarrow \mathbb{R}$ given by $\zeta(t)= t \text{ and } \eta(t)= \frac{t^2}{3}$. One can easily verify that both functions $\zeta\text{ and }\eta$ satisfy conditions $(i)$-$(iii)$ in the hypothesis of Theorem [Theorem 3](#thm:3.1){reference-type="ref" reference="thm:3.1"}. Next we define another function $\xi:[0, \infty)\times[0, \infty)\rightarrow\mathbb{R}$ by $\xi(s, t)= \frac{t}{t+1}- s$. Then $\xi\in \mathcal{Z}$.\
**Case 1:** If $x> y$, then $q(x, y)= 2x, T(x)= \frac{x^2}{4x^2+3}\text{ and }T(y)=\frac{y^2}{4y^2+3}$. Since $T$ is increasing, we get $Tx> Ty$. Hence, $q(Tx, Ty)= \frac{2x^2}{4x^2+3}$. Then we have $\zeta\left(q\left(Tx, Ty\right)\right)= q(Tx, Ty)= \frac{2x^2}{4x^2+3} \text{ and }\eta\left(q\left(x, y\right)\right)= \frac{4x^2}{3}$. Therefore, we get the following: $$\begin{split} \xi\left(\zeta\left(q\left(Tx, Ty\right)\right), \eta\left(q\left(x, y\right)\right)\right)&= \xi\left(\frac{2x^2}{4x^2+3}, \frac{4x^2}{3}\right)\\&=\frac{\frac{4x^2}{3}}{\frac{4x^2}{3}+1}-\frac{2x^2}{4x^2+3}\\&=\frac{4x^2}{4x^2+3}-\frac{2x^2}{4x^2+3}\\&=\frac{2x^2}{4x^2+3}\geq 0. \end{split}$$ **Case 2:** If $x< y$, then $q(x, y)= y$ and $Tx< Ty$. Hence, $q\left(Tx, Ty\right)= Ty= \frac{y^2}{4y^2+3}$. Then $\zeta\left(q\left(Tx, Ty\right)\right)= \frac{y^2}{4y^2+3}\text{ and }\eta\left(q\left(x, y\right)\right)= \frac{y^2}{3}$. Therefore, $$\begin{split} \xi\left(\zeta\left(q\left(Tx, Ty\right)\right), \eta\left(q\left(x, y\right)\right)\right)&= \xi\left(\frac{y^2}{4y^2+3}, \frac{y^2}{3}\right)\\&=\frac{\frac{y^2}{3}}{\frac{y^2}{3}+1}-\frac{y^2}{4y^2+3}\\&=\frac{y^2}{y^2+3}-\frac{y^2}{4y^2+3}\\&=\frac{y^2\left(4y^2+3-\left(y^2+3\right)\right)}{\left(y^2+3\right)\left(4y^2+3\right)}\\&=\frac{3y^4}{\left(y^2+3\right)\left(4y^2+3\right)}\geq 0. \end{split}$$ **Case 3:** If $x= y$, then we have $Tx= Ty$ and therefore $q\left(x, y\right)= q\left(Tx, Ty\right)= 0$. Therefore, $$\xi\left(\zeta\left(q\left(Tx, Ty\right)\right), \eta\left(q\left(x, y\right)\right)\right)= \xi(0, 0)= 0.$$ Hence, in each case, we get $\xi\left(\zeta\left(q\left(Tx, Ty\right)\right), \eta\left(q\left(x, y\right)\right)\right)\geq 0$. Thus the map $T$ is an $f$-Proinov-type $\mathcal{Z}$-contraction on $X$.
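As a quick numerical cross-check of the case analysis above (an illustrative sketch, not part of the original example; it assumes NumPy and uses a grid and initial points chosen here only for illustration), one can evaluate $\xi\left(\zeta\left(q\left(Tx, Ty\right)\right), \eta\left(q\left(x, y\right)\right)\right)$ over a grid of $[0,1]^2$ and iterate $T$ from a few starting points:

```python
# Numerical check of Example 4 (illustrative sketch, not from the paper).
import numpy as np

def q(x, y):                                   # the 2-symmetric quasi-metric of Example 4
    if x > y:
        return 2.0 * x
    if x < y:
        return y
    return 0.0

T    = lambda x: x**2 / (4.0 * x**2 + 3.0)     # the map T
zeta = lambda t: t                             # control function zeta
eta  = lambda t: t**2 / 3.0                    # control function eta
xi   = lambda s, t: t / (t + 1.0) - s          # simulation function xi

# The contraction functional should be nonnegative at every pair of points.
grid = np.linspace(0.0, 1.0, 101)
worst = min(xi(zeta(q(T(x), T(y))), eta(q(x, y)))
            for x in grid for y in grid if x != y)
print(f"minimum of the contraction functional on the grid: {worst:.6e}")   # expected >= 0

# Orbits of T collapse quickly to the unique fixed point 0.
for x0 in (1.0, 0.75, 0.5, 0.25):
    orbit = [x0]
    for _ in range(5):
        orbit.append(T(orbit[-1]))
    print(x0, "->", [f"{v:.6f}" for v in orbit])
```

On such a grid the reported minimum is strictly positive, in line with Cases 1-3 above, and each orbit is essentially zero after three iterations, matching the convergence study that follows.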
It can be easily observed that $x= 0$ is the unique fixed point of $T$ in $X$.\
\
Now we study the convergence behaviour of the iterated sequence $\{T^n(x_0)\}$ for the map $T$. We plot the graph of convergence of $\{T^n(x_0)\}$ for different initial points $x_0$ in $[0, 1]$. Here we have chosen the points $1,\ 0.75,\ 0.5 \text{ and } 0.25$ as the initial points. The data used to plot the graph is given in Table 1. Figure [1](#fig1){reference-type="ref" reference="fig1"} displays the rate of convergence of $\{T^n(x_0)\}$.*

![image](imges/Table1.jpg){width="100%"}

*Here we can observe that, after the third iterate, the values of $T^n(x_0)$ are zero or so close to zero that they can be approximated by zero. Thus, the closer the initial point is to zero, the faster $\{T^n(x_0)\}$ converges.\
\
*

![Rate of convergence of the iterated sequence $\{T^n(x)\}$](imges/RC.jpg){#fig1 width="100%"}

Our next theorem establishes the fixed point theorem for $b$-Proinov-type $\mathcal{Z}$-contractions.

**Theorem 4**. *Let $(X, q)$ be a $\delta$-symmetric quasi-metric space and $T$ be a $b$-Proinov-type $\mathcal{Z}$-contraction on $X$ with respect to $\xi\in\mathcal{Z}$. Let the control functions $\zeta, \eta$ satisfy the following conditions:*

1. *$\zeta$ is nondecreasing;*

2. *$\eta(t)< \zeta(t)$ for all $t\in Im(q)\setminus\{0\}$;*

3. *if $\{x_n\} \text{ and } \{y_n\}$ are two sequences in $(0, \infty)$ such that $\lim\limits_{n\rightarrow\infty}x_n= \lim\limits_{n\rightarrow\infty}y_n> 0$ then $\lim\limits_{n\rightarrow\infty}\zeta(x_n)= \lim\limits_{n\rightarrow\infty}\zeta(y_n)> 0$.*

*Then $T$ has a unique fixed point in $X$, provided the space $X$ is $f$-complete. In this case, the sequence $\{T^nx\}$ $f$-converges to the fixed point for any $x\in X$.*

*Proof.* Let $x\in X$. By Lemma [Lemma 2](#lem:3.2){reference-type="ref" reference="lem:3.2"}, it is clear that $T$ is asymptotically $f$-regular. Thus, the sequence $\{q\left(T^nx, T^{n+1}x\right)\}$ converges to zero. Let $x_n=T^nx\text{ for each }n\in\mathbb{N}$. We assert that $\{x_n\}$ is $f$-Cauchy. If not, then by Lemma [Lemma 3](#lem:3.3){reference-type="ref" reference="lem:3.3"} and Lemma [Lemma 4](#lem:3.4){reference-type="ref" reference="lem:3.4"} one can find an $\epsilon> 0$ and subsequences $\{x_{m_k}\}, \{x_{n_k}\}$ of $\{x_n\}$ with $k< m_k< n_k$ such that $\lim\limits_{k\rightarrow\infty}q\left(x_{m_k}, x_{n_k}\right)= \lim\limits_{k\rightarrow\infty}q\left(x_{m_k+1}, x_{n_k+1}\right)= \lim\limits_{k\rightarrow\infty}q\left(x_{m_k-1}, x_{n_k-1}\right)= \epsilon> 0$.
Then by condition $(iii)$ in the hypothesis we obtain $$\lim\limits_{k\rightarrow\infty}\zeta\left(q\left(x_{m_k-1}, x_{n_k-1}\right)\right)= \lim\limits_{k\rightarrow\infty}\zeta\left(q\left(x_{m_k+1}, x_{n_k+1}\right)\right)> 0.$$ Now, from the contraction condition and condition $(i)$ in the hypothesis we have $$\begin{split} 0 &\leq\xi\left(\zeta\left(q\left(x_{m_k+1}, x_{n_k+1}\right)\right), \eta\left(q\left(x_{n_k}, x_{m_k}\right)\right)\right)\\ &< \eta\left(q\left(x_{n_k}, x_{m_k}\right)\right)- \zeta\left(q\left(x_{m_k+1}, x_{n_k+1}\right)\right)\\ &< \zeta\left(q\left(x_{n_k}, x_{m_k}\right)\right)- \zeta\left(q\left(x_{m_k+1}, x_{n_k+1}\right)\right)\\&\leq\zeta\left(q\left(x_{m_k-1}, x_{n_k-1}\right)\right)- \zeta\left(q\left(x_{m_k+1}, x_{n_k+1}\right)\right). \end{split}$$ As $k\rightarrow\infty$, we get $$\begin{split} 0&\leq\lim\limits_{k\rightarrow\infty}\left(\eta\left(q\left(x_{n_k}, x_{m_k}\right)\right)- \zeta\left(q\left(x_{m_k+1}, x_{n_k+1}\right)\right)\right)\\&\leq \lim\limits_{k\rightarrow\infty}\left(\zeta\left(q\left(x_{m_k-1}, x_{n_k-1}\right)\right)- \zeta\left(q\left(x_{m_k+1}, x_{n_k+1}\right)\right)\right)= 0. \end{split}$$ This implies that $\lim\limits_{k\rightarrow\infty}\eta\left(q\left(x_{n_k}, x_{m_k}\right)\right)= \lim\limits_{k\rightarrow\infty}\zeta\left(q\left(x_{m_k+1}, x_{n_k+1}\right)\right)> 0$. Hence, from condition $(z_3)$ of simulation functions, we get $$\limsup\limits_{k\rightarrow\infty}\xi\left(\zeta\left(q\left(x_{m_k+1}, x_{n_k+1}\right)\right), \eta\left(q\left(x_{n_k}, x_{m_k}\right)\right)\right)< 0,$$ which gives a contradiction. Thus $\{x_n\}$ is $f$-Cauchy. Since $X$ is $f$-complete, $\{x_n\}$ $f$-converges in $X$, say to $w$.\
The remaining part of the proof mimics the proof of Theorem [Theorem 3](#thm:3.1){reference-type="ref" reference="thm:3.1"}. ◻

# Application

## Fractals Generated by Proinov-type $\mathcal{Z}$-contractions

As an application of our fixed point results, we extend them to fractal theory.\
M. F. Barnsley [@12; @13] mathematically described fractals as fixed points of set-valued maps. The concept of fractals was extended to quasi-metric spaces by Nicolae Adrian Secelean et al. [@16]\
For a quasi-metric space $(X, q)$, we denote by $\mathcal{H}_f(X)$ the collection of all nonempty $f$-compact subsets of $X$.\
For two $b$-bounded subsets $A, B$ of $X$, we define $Q(A, B)= \sup\limits_{x\in A}\inf\limits_{y\in B}q(x, y)$ and $h_q(A,B)= \max\{Q(A, B), Q(B, A)\}$.

**Remark 2**. *[@16] The requirement that $A$ and $B$ be $b$-bounded is needed to ensure $Q(A, B)< \infty$; this may fail if $A$ and $B$ are only assumed to be $f$-bounded.*

**Proposition 5**. *[@16] If $(X, q)$ is a quasi-metric space in which $f$-convergence implies $b$-convergence, then every $f$-compact subset of $X$ is $b$-bounded.*

Combining the above fact with Proposition [Proposition 1](#prop:2.1){reference-type="ref" reference="prop:2.1"}, we obtain the following result.

**Proposition 6**.
*If $(X, q)$ is a $\delta$-symmetric quasi-metric space, then every $f$-compact subset of $X$ is $b$-bounded.*

*Proof.* By Remark [Remark 1](#rmk:2.1){reference-type="ref" reference="rmk:2.1"}, $f$-convergence implies $b$-convergence in $X$. Then the result is immediate from Proposition [Proposition 5](#prop:4.1){reference-type="ref" reference="prop:4.1"}. ◻

**Theorem 5**. *[@16] If $(X, q)$ is a quasi-metric space in which a sequence is $f$-convergent if and only if it is $b$-convergent, then $(\mathcal{H}_f(X), h_q)$ is a complete metric space.*

**Corollary 1**. *If $(X, q)$ is a $\delta$-symmetric quasi-metric space, then $(\mathcal{H}_f(X), h_q)$ is a complete metric space.*

*Proof.* The proof follows from Remark [Remark 1](#rmk:2.1){reference-type="ref" reference="rmk:2.1"} and Theorem [Theorem 5](#thm:4.1){reference-type="ref" reference="thm:4.1"}. ◻

**Lemma 5**. *[@16] If $(X, q)$ is a $\delta$-symmetric quasi-metric space, then the metric $h_q$ on $\mathcal{H}_f(X)$ satisfies the following condition: $$h_q\left(\bigcup\limits_{i=1}^nA_i, \bigcup\limits_{i=1}^nB_i\right)\leq\max\limits_{1\leq i\leq n}h_q\left(A_i, B_i\right),$$ where $A_i, B_i\in \mathcal{H}_f(X)$ for $i= 1,2, \dots, n$ and $n\in \mathbb{N}.$*

The above metric $h_q$ on $\mathcal{H}_f(X)$ is called the $f$-Hausdorff-Pompeiu metric. Here, the complete metric space $(\mathcal{H}_f(X), h_q)$ is called the fractal space.\
Before turning to the application, we prove a fixed point theorem for Proinov-type $\mathcal{Z}$-contractions in complete metric spaces, which will be useful later. First, we recall a lemma.

**Lemma 6**. *[@21] Let $\{x_n\}$ be a sequence in a metric space $(X, d)$ such that $\lim\limits_{n\rightarrow\infty}d\left(x_n, x_{n+1}\right)= 0$. If $\{x_n\}$ is not Cauchy, then one can find an $\epsilon> 0$ and two subsequences $\{x_{n_k}\}$ and $\{x_{m_k}\}$ such that $$\lim\limits_{k\rightarrow\infty}d\left(x_{n_k}, x_{m_k}\right)= \lim\limits_{k\rightarrow\infty}d\left(x_{n_k+1}, x_{m_k+1}\right)= \epsilon.$$*

Now, we can prove the fixed point result for Proinov-type $\mathcal{Z}$-contractions.

**Lemma 7**. *Let $T: X\rightarrow X$ be a Proinov-type $\mathcal{Z}$-contraction, on a metric space $(X, d)$, with respect to $\xi\in\mathcal{Z}$. If the control functions $\zeta, \eta$ satisfy the following conditions:*

1. *$\zeta$ is nondecreasing;*

2. *$\eta(t)< \zeta(t)$ for all $t\in Im(d)\setminus\{0\}$;*

3.
*if $\{x_n\} \text{ and } \{y_n\}$ are two sequences in $(0, \infty)$ such that $\lim\limits_{n\rightarrow\infty}x_n= \lim\limits_{n\rightarrow\infty}y_n> 0$ then $\lim\limits_{n\rightarrow\infty}\zeta(x_n)= \lim\limits_{n\rightarrow\infty}\zeta(y_n)> 0$,*

*then $T$ is asymptotically regular in $X$.*

*Proof.* The proof mimics the proof of the first part ($f$-asymptotic regularity) of Lemma [Lemma 1](#lem:3.1){reference-type="ref" reference="lem:3.1"}. ◻

**Theorem 6**. *Let $T: X\rightarrow X$ be a Proinov-type $\mathcal{Z}$-contraction, on a metric space $(X, d)$, with respect to $\xi\in\mathcal{Z}$. If the control functions $\zeta, \eta$ satisfy the following conditions:*

1. *$\zeta$ is nondecreasing;*

2. *$\eta(t)< \zeta(t)$ for all $t\in Im(d)\setminus\{0\}$;*

3. *if $\{x_n\} \text{ and } \{y_n\}$ are two sequences in $(0, \infty)$ such that $\lim\limits_{n\rightarrow\infty}x_n= \lim\limits_{n\rightarrow\infty}y_n> 0$ then $\lim\limits_{n\rightarrow\infty}\zeta(x_n)= \lim\limits_{n\rightarrow\infty}\zeta(y_n)> 0$,*

*then $T$ has a unique fixed point in $X$, say $w$. Moreover, the sequence $\{T^nx\}$ converges to $w$ for any $x\in X$.*

*Proof.* Let $x\in X$. Then, according to Lemma [Lemma 7](#lem:4.3){reference-type="ref" reference="lem:4.3"}, $T$ is asymptotically regular. Hence the sequence $\{d\left(T^nx, T^{n+1}x\right)\}$ converges to zero. Let us denote $x_n= T^nx$ for all $n\in \mathbb{N}$. We claim that $\{x_n\}$ is Cauchy. If not, by Lemma [Lemma 6](#lem:4.2){reference-type="ref" reference="lem:4.2"}, there exist $\epsilon>0$ and two subsequences $\{x_{n_k}\}$ and $\{x_{m_k}\}$ of $\{x_n\}$ such that $\lim\limits_{k\rightarrow\infty}d\left(x_{n_k}, x_{m_k}\right)= \lim\limits_{k\rightarrow\infty}d\left(x_{n_k+1}, x_{m_k+1}\right)= \epsilon.$ Then by condition $(iii)$ in the hypothesis, we get $$\lim\limits_{k\rightarrow\infty}\zeta\left(d\left(x_{n_k}, x_{m_k}\right)\right)= \lim\limits_{k\rightarrow\infty}\zeta\left(d\left(x_{n_k+1}, x_{m_k+1}\right)\right)>0.$$ From the contraction condition of $T$, we get $$\begin{split} 0 &\leq \xi\left(\zeta\left(d\left(x_{n_k+1}, x_{m_k+1}\right)\right), \eta\left(d\left(x_{n_k}, x_{m_k}\right)\right)\right)\\ &< \eta\left(d\left(x_{n_k}, x_{m_k}\right)\right)- \zeta\left(d\left(x_{n_k+1}, x_{m_k+1}\right)\right)\\&< \zeta\left(d\left(x_{n_k}, x_{m_k}\right)\right)- \zeta\left(d\left(x_{n_k+1}, x_{m_k+1}\right)\right).
\end{split}$$ Taking the limit $k\rightarrow\infty$ in the above inequality, by condition $(iii)$ in the hypothesis, we get $$\lim\limits_{k\rightarrow\infty}\left(\zeta\left(d\left(x_{n_k}, x_{m_k}\right)\right)- \zeta\left(d\left(x_{n_k+1}, x_{m_k+1}\right)\right)\right)= 0.$$ This implies that $\lim\limits_{k\rightarrow\infty}\eta\left(d\left(x_{n_k}, x_{m_k}\right)\right)= \lim\limits_{k\rightarrow\infty}\zeta\left(d\left(x_{n_k+1}, x_{m_k+1}\right)\right)> 0.$ Hence, from condition $(z_3)$ of simulation functions, we get $$\limsup\limits_{k\rightarrow\infty}\xi\left(\zeta\left(d\left(x_{n_k+1}, x_{m_k+1}\right)\right), \eta\left(d\left(x_{n_k}, x_{m_k}\right)\right)\right)< 0,$$ which contradicts the condition of a $\mathcal{Z}$-contraction. Thus $\{x_n\}$ is a Cauchy sequence. Since $X$ is a complete metric space, the sequence $\{x_n\}$ converges in $X$, say to $w\in X$.\
The remaining part of the proof is similar to the proof of Theorem [Theorem 3](#thm:3.1){reference-type="ref" reference="thm:3.1"}. ◻

Let $(X, q)$ be a $\delta$-symmetric quasi-metric space and $T: X\rightarrow X$ be an $f$-Proinov-type $\mathcal{Z}$-contraction. Define a map $\hat{T}: \mathcal{H}_f(X)\rightarrow \mathcal{P}(X)$ by $\hat{T}(A)= T(A)= \{T(x): x\in A\}$ for $A\in\mathcal{H}_f(X)$. Since $T$ is $ff$-continuous, $T(A)$ belongs to $\mathcal{H}_f(X)$. Thus, $\hat{T}$ is a self-mapping of $\mathcal{H}_f(X)$.\
The next lemma shows that the map $\hat{T}$ is a Proinov-type $\mathcal{Z}$-contraction on $\mathcal{H}_f(X)$.

**Lemma 8**. *Let $(X, q)$ be a $\delta$-symmetric quasi-metric space and $T: X\rightarrow X$ be an $f$-Proinov-type $\mathcal{Z}$-contraction with respect to $\xi\in\mathcal{Z}$, where the simulation function $\xi(s, t)$ is decreasing in the first variable and increasing in the second variable. Suppose that the control functions $\zeta \text{ and } \eta$ are nondecreasing. Then the map $\hat{T}: \mathcal{H}_f(X)\rightarrow\mathcal{H}_f(X)$ defined by $\hat{T}(A)= T(A)$ for $A\in\mathcal{H}_f(X)$ is a Proinov-type $\mathcal{Z}$-contraction on the complete metric space $(\mathcal{H}_f(X), h_q)$ with respect to $\xi\in\mathcal{Z}$.*

*Proof.* Let $A, B\in\mathcal{H}_f(X)$. Then $h_q(A, B)= \max\{Q(A, B), Q(B, A)\}$.\
Without loss of generality, let $h_q(A, B)= Q(A, B)$. Since $q\text{ and } T$ are continuous and $f$-convergence implies $b$-convergence, there exists $\alpha\in A$ such that $$\begin{split} \zeta\left(h_q\left(\hat{T}(A), \hat{T}(B)\right)\right)&= \zeta\left(Q\left(\hat{T}(A), \hat{T}(B)\right)\right)\\&= \zeta\left(\inf\limits_{y\in B}q\left(T(\alpha), T(y)\right)\right)\\&\leq \zeta\left(q\left(T(\alpha), T(y)\right)\right), \end{split}$$for any $y\in B$.
On the other hand, let $\beta\in B$ be such that $q(\alpha, \beta)= \inf\limits_{y\in B}q(\alpha, y)$. Since $\eta$ is nondecreasing, we get $$\begin{split} \eta\left(q(\alpha, \beta)\right)&= \eta\left(\inf\limits_{y\in B}q(\alpha, y)\right)\\&\leq \eta\left(\sup\limits_{x\in A}\inf\limits_{y\in B}q(x, y)\right)\\&=\eta\left(Q(A, B)\right)\\&\leq \eta\left(h_q(A, B)\right). \end{split}$$ That is, we have $\zeta\left(h_q\left(\hat{T}(A), \hat{T}(B)\right)\right)\leq \zeta\left(q\left(T(\alpha), T(\beta)\right)\right) \text{ and }\eta\left(q(\alpha, \beta)\right)\leq \eta\left(h_q\left(A, B\right)\right).$\
Since the simulation function $\xi$ is decreasing in the first variable and increasing in the second variable, we get $$\begin{split} 0&\leq\xi\left(\zeta\left(q\left(T(\alpha), T(\beta)\right)\right), \eta\left(q(\alpha, \beta)\right)\right)\\&\leq \xi\left(\zeta\left(h_q\left(\hat{T}(A), \hat{T}(B)\right)\right), \eta\left(h_q\left(A, B\right)\right)\right). \end{split}$$This implies that $\hat{T}$ is a Proinov-type $\mathcal{Z}$-contraction on $\mathcal{H}_f(X)$. ◻

**Theorem 7**. *Let $(X, q)$ be a $\delta$-symmetric quasi-metric space and $T: X\rightarrow X$ be an $f$-Proinov-type $\mathcal{Z}$-contraction with respect to $\xi\in\mathcal{Z}$. Suppose that the following conditions hold:*

1. *$\xi(s, t)$ is decreasing in the first variable and increasing in the second variable;*

2. *$\zeta, \eta$ are nondecreasing;*

3. *$\eta(t)< \zeta(t)$ for all $t\in Im(q)\setminus\{0\}$;*

4. *if $\{x_n\} \text{ and } \{y_n\}$ are two sequences in $(0, \infty)$ such that $\lim\limits_{n\rightarrow\infty}x_n= \lim\limits_{n\rightarrow\infty}y_n> 0$ then $\lim\limits_{n\rightarrow\infty}\zeta(x_n)= \lim\limits_{n\rightarrow\infty}\zeta(y_n)> 0$.*

*Then there exists a unique attractor, say $A^*$ in $\mathcal{H}_f(X)$, for $T$. Moreover, the sequence $A_n= T^n(A)$ converges to $A^*$ for any $A\in\mathcal{H}_f(X)$.*

*Proof.* By Lemma [Lemma 8](#lem:4.4){reference-type="ref" reference="lem:4.4"}, it is clear that $\hat{T}$ is a Proinov-type $\mathcal{Z}$-contraction on the complete metric space $\mathcal{H}_f(X)$. Then the result follows from Theorem [Theorem 6](#thm:4.2){reference-type="ref" reference="thm:4.2"}. ◻

## Iterated Function System consisting of Proinov-type $\mathcal{Z}$-contractions

Now we consider an iterated function system (IFS) $\{X; w_1, w_2,\dots, w_N\}$, where $N\in\mathbb{N}$ and each $w_i$ is an $f$-Proinov-type $\mathcal{Z}$-contraction. We define a function $W: \mathcal{H}_f(X)\rightarrow\mathcal{H}_f(X)$ by $W(A)= \bigcup\limits_{i=1}^Nw_i(A)$ for any $A\in \mathcal{H}_f(X)$. This map $W$ is called the fractal operator generated by the IFS.
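To make the fractal operator concrete, here is a small computational sketch (an illustration, not from the paper): it iterates $W(A)=w_1(A)\cup w_2(A)$ for the maps $w_1(x)=x/3$ and $w_2(x)=x/3+2/3$ on $[0,1]$ with the usual metric (the $1$-symmetric case), approximating compact sets by finite samples. The maps and the tolerance are chosen purely for illustration, and these particular $w_i$ are ordinary Banach contractions, a special case of the framework.

```python
# Iterating the fractal operator W(A) = w1(A) U w2(A) on finite point samples
# (illustrative sketch; w1, w2 and the tolerance are assumptions for this demo).

w = [lambda x: x / 3.0, lambda x: x / 3.0 + 2.0 / 3.0]      # the IFS maps

def W(A):
    """One application of the fractal operator to a finite sample A."""
    return sorted({round(f(x), 12) for f in w for x in A})

def hausdorff(A, B):
    """Hausdorff distance between two finite subsets of the real line."""
    one_sided = lambda S, T: max(min(abs(s - t) for t in T) for s in S)
    return max(one_sided(A, B), one_sided(B, A))

A = [0.0, 1.0]                        # any nonempty finite sample of a compact set works
for n in range(20):
    A_next = W(A)
    if hausdorff(A, A_next) < 1e-4:   # successive iterates are already very close
        break
    A = A_next

print(f"{n + 1} applications of W, {len(A)} sample points of the Cantor-like attractor")
```

Theorem 8 below guarantees that, under its hypotheses, the iterates $W^n(A)$ converge in $(\mathcal{H}_f(X), h_q)$ to a unique fixed point of $W$, the attractor defined next, for every initial $A\in\mathcal{H}_f(X)$.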
A set $A\in\mathcal{H}_f(X)$ that is a fixed point of $W$, that is, $W(A)= \bigcup\limits_{i=1}^Nw_i(A)= A$, is called an attractor of the IFS $\{X; w_1, w_2,\dots, w_N\}$. The next lemma shows that the fractal operator $W$ defined above is a Proinov-type $\mathcal{Z}$-contraction in $\mathcal{H}_f(X)$. **Lemma 9**. *Let $(X, q)$ be a $\delta$-symmetric quasi-metric space and $w_i: X\rightarrow X$, $i=1, 2,\dots, N$ where $N\in\mathbb{N}$, be $f$-Proinov-type $\mathcal{Z}$-contractions with respect to a simulation function $\xi$, where $\xi(s, t)$ is decreasing in the first variable. If the control functions $\zeta\text{ and } \eta$ are nondecreasing, then the fractal operator $W$, generated by the IFS $\{X; w_1, w_2, \dots, w_N\}$, is a Proinov-type $\mathcal{Z}$-contraction in $\mathcal{H}_f(X)$.* *Proof.* Define $W:\mathcal{H}_f(X)\rightarrow \mathcal{H}_f(X)$ by $W(A)= \bigcup\limits_{i=1}^Nw_i(A)$ for any $A\in\mathcal{H}_f(X)$. Since each $w_i$ is an $f$-Proinov-type $\mathcal{Z}$-contraction, by Lemma [Lemma 8](#lem:4.4){reference-type="ref" reference="lem:4.4"} $\hat{w_i}$ is a Proinov-type $\mathcal{Z}$-contraction in $\mathcal{H}_f(X)$. Hence $\xi\left(\zeta\left(h_q\left(\hat{w_i}(A), \hat{w_i}(B)\right)\right), \eta\left(h_q\left(A, B\right)\right)\right)\geq 0$. By Lemma [Lemma 5](#lem:4.1){reference-type="ref" reference="lem:4.1"} we have $$\begin{split} h_q\left(W(A), W(B)\right)&= h_q\left(\bigcup\limits_{i=1}^Nw_i(A), \bigcup\limits_{i=1}^Nw_i(B)\right)\\&\leq\max\limits_{1\leq i\leq N}h_q\left(w_i(A), w_i(B)\right)\\&= h_q\left(w_j(A), w_j(B)\right)\\&= h_q\left(\hat{w_j}(A), \hat{w_j}(B)\right), \end{split}$$for some $j\in \{1, 2, \dots, N\}$. Since $\xi(s, t)$ is decreasing in $s$, we get $$\begin{split} 0&\leq \xi\left(\zeta\left(h_q\left(\hat{w_j}(A), \hat{w_j}(B)\right)\right), \eta\left(h_q\left(A, B\right)\right)\right)\\&\leq \xi\left(\zeta\left(h_q\left(W(A), W(B)\right)\right), \eta\left(h_q\left(A, B\right)\right)\right). \end{split}$$Hence the fractal operator $W$ is a Proinov-type $\mathcal{Z}$-contraction. ◻ The existence and uniqueness of an attractor for an IFS consisting of $f$-Proinov-type $\mathcal{Z}$-contractions are proved in the next theorem. **Theorem 8**. *Let $(X, q)$ be a $\delta$-symmetric quasi-metric space and $w_i: X\rightarrow X$, $i=1, 2,\dots, N$ where $N\in\mathbb{N}$, be $f$-Proinov-type $\mathcal{Z}$-contractions with respect to a simulation function $\xi$ and control functions $\zeta\text{ and } \eta$. Suppose that the following conditions hold:* 1. *$\xi(s, t)$ is decreasing in the first variable;* 2. *$\zeta, \eta$ are nondecreasing;* 3. *$\eta(t)< \zeta(t)$ for all $t\in Im(q)\setminus\{0\}$;* 4.
*if $\{x_n\} \text{ and } \{y_n\}$ are two sequences in $(0, \infty)$ such that $\lim\limits_{n\rightarrow\infty}x_n= \lim\limits_{n\rightarrow\infty}y_n> 0$ then $\lim\limits_{n\rightarrow\infty}\zeta(x_n)= \lim\limits_{n\rightarrow\infty}\zeta(y_n)> 0$.* *Then there exists a unique attractor, say $A^*\in\mathcal{H}_f(X)$, for the fractal operator $W$, generated by the IFS $\{X; w_1, w_2, \dots, w_N\}$. Moreover, the iterated sequence $\{W^n(A)\}$ converges to the attractor $A^*$ for any $A\in \mathcal{H}_f(X)$.* *Proof.* From Lemma [Lemma 9](#lem:4.5){reference-type="ref" reference="lem:4.5"}, it is clear that the fractal operator $W$ generated by the given IFS is a Proinov-type $\mathcal{Z}$-contraction in the complete metric space $\mathcal{H}_f(X)$. Then the result follows from Theorem [Theorem 6](#thm:4.2){reference-type="ref" reference="thm:4.2"}. ◻ Next, we generalize Theorem [Theorem 8](#thm:4.4){reference-type="ref" reference="thm:4.4"}. For that, we consider an IFS consisting of $f$-Proinov-type $\mathcal{Z}$-contractions, each having a different simulation function and control functions. That is, we take $w_i$ to be an $f$-Proinov-type $\mathcal{Z}$-contraction with respect to $\xi_i\in\mathcal{Z}$ and control functions $\zeta\text{ and }\eta_i$ for each $i= 1, 2,\dots,N$.\ Before moving to the main results, we prove the following lemma about simulation functions. **Lemma 10**. *Let $\xi_i$, for $i=1, 2, \dots, N$ where $N\in\mathbb{N}$, be a finite collection of simulation functions. Define $\xi(s, t)= \max\limits_{1\leq i\leq N}\xi_i(s, t)$. Then the function $\xi$ is also a simulation function.* *Proof.* From the definition of $\xi(s, t)$, it is clear that $\xi$ is a map from $[0, \infty)\times [0, \infty)$ to $\mathbb{R}$. Since $\xi_i(0, 0)= 0$ for all $i$, we get $\xi(0, 0)= 0$. We have $\xi_i(s, t)< t-s$ for all $s,t>0$ and $i= 1, 2, \dots,N$. Thus, it is clear that $\xi(s, t)= \max\limits_{1\leq i\leq N}\xi_i(s, t)< t-s$ for all $s,t> 0$. So, $\xi$ satisfies the properties $(z_1) \text{ and }(z_2)$ of a simulation function. Now we have to prove property $(z_3)$. Let $\{s_n\}, \{t_n\}$ be two sequences in $(0, \infty)$ such that $\lim\limits_{n\rightarrow\infty}s_n= \lim\limits_{n\rightarrow\infty}t_n> 0$. Then we have $\limsup\limits_{n\rightarrow\infty}\xi_i(s_n, t_n)< 0$ for each $i$. We claim that $\limsup\limits_{n\rightarrow\infty}\xi(s_n, t_n)< 0$. We prove this by mathematical induction on $N$. The case $N= 1$ is trivial. For $N= 2$, let $\xi(s, t)= \max\{\xi_1(s, t), \xi_2(s, t)\}$. Let $a_n= \xi_1(s_n, t_n)$, $b_n= \xi_2(s_n, t_n)$ and $c_n= \xi(s_n, t_n)$. Then we have three real sequences $\{a_n\}, \{b_n\}$ and $\{c_n\}$ such that $c_n= \max\{a_n, b_n\}$. Let $c= \limsup\limits_{n\rightarrow\infty}c_n$. Then there exists a subsequence $\{c_{n_k}\}$ of $\{c_n\}$ such that $\lim\limits_{k\rightarrow\infty}c_{n_k}= c$. We have three possibilities for $c_{n_k}$:\ Case 1: There exists $K\in\mathbb{N}$ such that $c_{n_k}= a_{n_k}$ for each $k\geq K$.
Then we get $\lim\limits_{k\rightarrow\infty}a_{n_k}= c.$ This implies $c\leq \limsup\limits_{n\rightarrow\infty}a_n$.\ Case 2: There exists $K\in\mathbb{N}$ such that $c_{n_k}= b_{n_k}$ for each $k\geq K$. Then, by an argument similar to that in Case 1, we get $c\leq \limsup\limits_{n\rightarrow\infty}b_n$.\ Case 3: For each $i\in\mathbb{N}$ there exist $n_{k_i}, n_{l_i}> i$ such that $c_{n_{k_i}}= a_{n_{k_i}}$ and $c_{n_{l_i}}= b_{n_{l_i}}$. That is, there exist two subsequences $\{c_{n_{k_i}}\}$ and $\{c_{n_{l_i}}\}$ of $\{c_{n_k}\}$ such that $c_{n_{k_i}}= a_{n_{k_i}}$ and $c_{n_{l_i}}= b_{n_{l_i}}$. Thus, $\lim\limits_{i\rightarrow\infty}a_{n_{k_i}}= \lim\limits_{i\rightarrow\infty}b_{n_{l_i}}= c$. Hence $c\leq \limsup\limits_{n\rightarrow\infty}a_n$ and $c\leq \limsup\limits_{n\rightarrow\infty}b_n$.\ In each case we get $c\leq\max\{\limsup\limits_{n\rightarrow\infty}a_n, \limsup\limits_{n\rightarrow\infty}b_n\}$.\ Now, suppose the result is true for $N$. Suppose that $\xi(s, t)= \max\limits_{1\leq i\leq N+1}\xi_i(s, t)$. Then, $$\begin{aligned} \limsup\limits_{n\rightarrow\infty}\xi(s_n, t_n)&=\limsup\limits_{n\rightarrow\infty}\max\limits_{1\leq i\leq N+1}\xi_i(s_n, t_n)\\&= \limsup\limits_{n\rightarrow\infty}\left(\max\left\{\max\limits_{1\leq i\leq N}\xi_i(s_n, t_n), \xi_{N+1}(s_n, t_n)\right\}\right)\\&\leq \max\left\{\limsup\limits_{n\rightarrow\infty}\max\limits_{1\leq i\leq N}\xi_i(s_n, t_n), \limsup\limits_{n\rightarrow\infty}\xi_{N+1}(s_n, t_n)\right\}\\&\leq \max\left\{\max\limits_{1\leq i\leq N}\limsup\limits_{n\rightarrow\infty}\xi_i(s_n, t_n), \limsup\limits_{n\rightarrow\infty}\xi_{N+1}(s_n, t_n) \right\}\\&= \max\limits_{1\leq i \leq N+1}\limsup\limits_{n\rightarrow\infty}\xi_i(s_n, t_n)\\&<0. \end{aligned}$$ Hence the result is true for any $N\in\mathbb{N}$. Thus, $\xi$ satisfies property $(z_3)$. Therefore, it is a simulation function. ◻ Next, we prove a lemma that generalizes the fractal operator given in Lemma [Lemma 8](#lem:4.4){reference-type="ref" reference="lem:4.4"}. **Lemma 11**. *Let $\{X; w_1, w_2, \dots,w_N\}$ be an IFS where each $w_i$ is an $f$-Proinov-type $\mathcal{Z}$-contraction with respect to the simulation function $\xi_i$ and control functions $\zeta$ and $\eta_i$. That is, each $w_i$ satisfies the contraction condition $$0\leq\xi_i\left(\zeta\left(q\left(w_i(x), w_i(y)\right)\right), \eta_i\left(q\left(x, y\right)\right)\right).$$ Suppose that each simulation function $\xi_i$ is decreasing in the first variable and increasing in the second variable. Also, let the control functions $\zeta\text{ and }\eta_i$ be nondecreasing for $i= 1, 2, \dots, N$. Then the fractal operator $W$ generated by the IFS $\{X; w_1, w_2,\dots,w_N\}$ is a Proinov-type $\mathcal{Z}$-contraction in $\mathcal{H}_f(X)$ with respect to the simulation function $\xi(s, t)=\max\limits_{1\leq i\leq N}\xi_i(s, t)$ and control functions $\zeta, \eta$, where $\eta(t)= \max\limits_{1\leq i\leq N}\eta_i(t)$.* *Proof.* Define $W:\mathcal{H}_f(X)\rightarrow\mathcal{H}_f(X)$ by $W(A)= \bigcup\limits_{i=1}^Nw_i(A)$ for $A\in \mathcal{H}_f(X)$. Let $A, B \in \mathcal{H}_f(X)$. For each $i=1, 2,\dots, N$, we have $0\leq \xi_i\left(\zeta\left(q\left(w_i(x), w_i(y)\right)\right), \eta_i\left(q\left(x, y\right)\right)\right)$ for any $x, y\in X$.
By Lemma [Lemma 8](#lem:4.4){reference-type="ref" reference="lem:4.4"} we get $0\leq \xi_i\left(\zeta\left(h_q\left(\hat{w_i}(A), \hat{w_i}(B)\right)\right), \eta_i\left(h_q\left(A, B\right)\right)\right).$ From the proof of Lemma [Lemma 9](#lem:4.5){reference-type="ref" reference="lem:4.5"}, we have $h_q\left(W(A), W(B)\right)\leq h_q\left(\hat{w_j}(A), \hat{w_j}(B)\right)$ for some $j\in\{1,2,\dots, N\}$. Since each $\xi_i$ is decreasing in the first variable and increasing in the second variable, the function $\xi(s, t)= \max\limits_{1\leq i\leq N}\xi_i(s, t)$ is also decreasing in the first variable and increasing in the second variable. Then, $$\begin{aligned} 0&\leq\xi_j\left(\zeta\left(h_q\left(\hat{w_j}(A), \hat{w_j}(B)\right)\right), \eta_j\left(h_q\left(A, B\right)\right)\right)\\&\leq\xi_j\left(\zeta\left(h_q\left(W(A), W(B)\right)\right), \eta_j\left(h_q\left(A, B\right)\right)\right)\\&\leq\xi_j\left(\zeta\left(h_q\left(W(A), W(B)\right)\right), \eta\left(h_q\left(A, B\right)\right)\right)\\&\leq\xi\left(\zeta\left(h_q\left(W(A), W(B)\right)\right), \eta\left(h_q\left(A, B\right)\right)\right). \end{aligned}$$Thus the fractal operator $W$ generated by the IFS is a Proinov-type $\mathcal{Z}$-contraction in $\mathcal{H}_f(X)$. ◻ The next theorem establishes the existence and uniqueness of an attractor for this generalized IFS of $f$-Proinov-type $\mathcal{Z}$-contractions. **Theorem 9**. *Let $\{X; w_1, w_2, \dots,w_N\}$ be an IFS where each $w_i$ is an $f$-Proinov-type $\mathcal{Z}$-contraction with respect to the simulation function $\xi_i$ and control functions $\zeta$ and $\eta_i$. That is, each $w_i$ satisfies the contraction condition $$0\leq\xi_i\left(\zeta\left(q\left(w_i(x), w_i(y)\right)\right), \eta_i\left(q\left(x, y\right)\right)\right).$$ Suppose that the following conditions hold:* 1. *Each $\xi_i(s, t)$ is decreasing in the first variable and increasing in the second variable for $i=1, 2,\dots, N$;* 2. *$\zeta, \eta_i$ are nondecreasing for each $i=1,2,\dots,N$;* 3. *$\eta_i(t)< \zeta(t)$ for all $t\in Im(q)\setminus\{0\}$ and $i=1, 2,\dots, N$;* 4. *if $\{x_n\} \text{ and } \{y_n\}$ are two sequences in $(0, \infty)$ such that $\lim\limits_{n\rightarrow\infty}x_n= \lim\limits_{n\rightarrow\infty}y_n> 0$ then $\lim\limits_{n\rightarrow\infty}\zeta(x_n)= \lim\limits_{n\rightarrow\infty}\zeta(y_n)> 0$.* *Then the fractal operator $W$, generated by the IFS $\{X; w_1, w_2,\dots, w_N\}$, has a unique attractor, say $A^*\in\mathcal{H}_f(X)$. Moreover, the iterated sequence $\{W^n(A)\}$ converges to the attractor $A^*$ for any $A\in\mathcal{H}_f(X)$.* *Proof.* It follows from Lemma [Lemma 11](#lem:4.7){reference-type="ref" reference="lem:4.7"} that the fractal operator $W$ is a Proinov-type $\mathcal{Z}$-contraction in the complete metric space $\mathcal{H}_f(X)$. Then the result follows immediately from Theorem [Theorem 6](#thm:4.2){reference-type="ref" reference="thm:4.2"}. ◻ The following example illustrates Theorem [Theorem 9](#thm:4.5){reference-type="ref" reference="thm:4.5"}. **Example 5**. *Let $X=[0, 1]$. Define $q:[0, 1]\times[0, 1]\rightarrow \mathbb{R}$ as $$q(x, y)= \begin{cases} 8x & \text{if } x>y\\ 4y & \text{if } x<y\\ 0 & \text{if } x=y.
\end{cases}$$ It can be easily verified that $q$ is a $2$-symmetric quasi-metric on $[0, 1]$. Now define $\zeta, \eta:[0, \infty)\rightarrow \mathbb{R}$ as $\zeta(t)= t$ and $\eta(t)=t^2$. Both $\zeta \text{ and }\eta$ satisfy conditions $(ii)$-$(iv)$ of the hypothesis. Consider two simulation functions $\xi_1$ and $\xi_2$ defined as $\xi_1(s, t)=\frac{t}{t+1}-s$ and $\xi_2(s, t)= \frac{16t}{t+16}-s$. Clearly, both $\xi_1$ and $\xi_2$ satisfy condition $(i)$ in the hypothesis. Define $w_1, w_2:[0, 1]\rightarrow [0, 1]$ by $w_1(x)=\frac{x^3}{66x^2+3}$ and $w_2(x)=\frac{4x^2}{4x^2+1}$. We will prove that $w_1$ and $w_2$ are $f$-Proinov-type $\mathcal{Z}$-contractions with respect to the simulation functions $\xi_1$ and $\xi_2$, respectively.\ First, we consider the function $w_1$ and the simulation function $\xi_1$.\ **Case 1:** If $x>y$, then $q(x, y)=8x$ and $q(w_1(x),w_1(y))=\frac{8x^3}{66x^2+3}$. Then $$\xi_1\left(\zeta\left(q\left(w_1(x),w_1(y)\right)\right), \eta\left(q\left(x,y\right)\right)\right)=\frac{64x^2}{64x^2+1}-\frac{8x^3}{66x^2+3} \geq\frac{64x^2}{64x^2+1}-\frac{8x^3}{64x^2+1} \geq 0.$$ **Case 2:** If $x<y$, then $q(x,y)= 4y$ and $q(w_1(x),w_1(y))=\frac{4y^3}{66y^2+3}$. Then, $$\xi_1\left(\zeta\left(q\left(w_1(x),w_1(y)\right)\right),\eta\left(q\left(x,y\right)\right)\right)=\frac{16y^2}{16y^2+1}-\frac{4y^3}{66y^2+3}\geq\frac{16y^2}{16y^2+1}-\frac{4y^3}{16y^2+1}\geq 0.$$ From both cases, it can be observed that the self-mapping $w_1$ is an $f$-Proinov-type $\mathcal{Z}$-contraction on $[0,1]$ with respect to the simulation function $\xi_1$.\ Next, we consider the self-mapping $w_2$ and the simulation function $\xi_2$.\ **Case 1:** If $x>y$, then $q(x,y)= 8x$ and $q\left(w_2(x), w_2(y)\right)=\frac{32x^2}{4x^2+1}$. Thus, $$\xi_2\left(\zeta\left(q\left(w_2(x),w_2(y)\right)\right),\eta\left(q(x,y)\right)\right)=\frac{64x^2}{4x^2+1}-\frac{32x^2}{4x^2+1}\geq 0.$$ **Case 2:** If $x<y$, then $q(x,y)=4y$ and $q\left(w_2(x),w_2(y)\right)=\frac{16y^2}{4y^2+1}$. Then, $$\xi_2\left(\zeta\left(q\left(w_2(x),w_2(y)\right)\right),\eta\left(q(x,y)\right)\right)=\frac{16y^2}{y^2+1}-\frac{16y^2}{4y^2+1}\geq\frac{16y^2}{y^2+1}-\frac{16y^2}{y^2+1}=0.$$ Hence, $w_2$ is an $f$-Proinov-type $\mathcal{Z}$-contraction on $[0,1]$ with respect to the simulation function $\xi_2$.\ Hence the collection $\left\{[0,1]; w_1,w_2\right\}$ forms an IFS, and the fractal operator $W$ generated by this IFS is a Proinov-type $\mathcal{Z}$-contraction on the complete metric space $\mathcal{H}_f\left([0,1]\right)$ with respect to the simulation function $\xi(s,t)=\max\left\{\xi_1(s,t),\xi_2(s,t)\right\}= \max\left\{\frac{t}{t+1}-s, \frac{16t}{t+16}-s\right\}$. Theorem [Theorem 9](#thm:4.5){reference-type="ref" reference="thm:4.5"} therefore guarantees the existence of a unique attractor of this IFS. Here, we can observe that $w_1\left([0,\frac{1}{2}]\right)=[0,\frac{1}{156}]$ and $w_2\left([0,\frac{1}{2}]\right)=[0,\frac{1}{2}]$. Thus, $W\left([0,\frac{1}{2}]\right)=w_1\left([0,\frac{1}{2}]\right)\cup w_2\left([0,\frac{1}{2}]\right)=[0,\frac{1}{2}]$. In addition, we can observe that $[0,\frac{1}{2}]$ is the unique attractor of this IFS.*
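The arithmetic in Example 5 can also be double-checked numerically. The following Python sketch is ours and not part of the original argument; the helper names are illustrative. It samples points of $[0,1]$, verifies the two contraction inequalities for $w_1$ and $w_2$, and evaluates $w_1(1/2)=1/156$, which confirms $w_1\left([0,\frac{1}{2}]\right)=[0,\frac{1}{156}]$ since $w_1$ is increasing.

```python
import random

def q(x, y):
    # the 2-symmetric quasi-metric of Example 5
    if x > y:
        return 8 * x
    if x < y:
        return 4 * y
    return 0.0

zeta = lambda t: t                        # zeta(t) = t
eta = lambda t: t ** 2                    # eta(t) = t^2
xi1 = lambda s, t: t / (t + 1) - s        # xi_1(s, t) = t/(t+1) - s
xi2 = lambda s, t: 16 * t / (t + 16) - s  # xi_2(s, t) = 16t/(t+16) - s

w1 = lambda x: x ** 3 / (66 * x ** 2 + 3)
w2 = lambda x: 4 * x ** 2 / (4 * x ** 2 + 1)

random.seed(0)
for _ in range(100_000):
    x, y = random.random(), random.random()
    if x == y:
        continue
    # contraction condition: 0 <= xi_i(zeta(q(w_i(x), w_i(y))), eta(q(x, y)))
    assert xi1(zeta(q(w1(x), w1(y))), eta(q(x, y))) >= 0
    assert xi2(zeta(q(w2(x), w2(y))), eta(q(x, y))) >= 0

print(w1(0.5), 1 / 156)  # both print 0.00641..., so w1([0, 1/2]) = [0, 1/156]
```

Running the script raises no assertion errors and prints matching values for $w_1(1/2)$ and $1/156$.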
--- abstract: | Fix a suitable link on the disk. Recently, F. Morabito associated a braid type to each Hamiltonian symplectomorphism preserving the link. Based on this construction, Morabito defines a family of pseudometrics on the braid groups by using the Hofer metric. In this note, we show that two Hamiltonian symplectomorphisms define the same braid type provided that their Hofer distance is sufficiently small. As a corollary, the pseudometrics defined by Morabito are nondegenerate. author: - Guanheng Chen title: Stability of the braid types defined by the symplectomorphisms preserving a link --- # Introduction and main results #### Preliminaries Let $\widetilde{Conf^k}(\mathbb{D}): =\{(x_1,..,x_k) \in \mathbb{D}^k: x_i \ne x_j \mbox{ if } i \ne j \}$, where $\mathbb{D}$ is the disk. The configuration space is $Conf^k(\mathbb{D}): =\widetilde{Conf^k}(\mathbb{D}) /S_k$, where $S_k$ is the permutation group. Then the **braid group** is $\mathcal{B}_k := \pi_1(Conf^k(\mathbb{D}))$. To see the elements in $\mathcal{B}_k$ more geometrically, we fix a base point $\mathbf{x} = (x_1, ...x_k) \in \widetilde{Conf^k}(\mathbb{D})$. Then a **braid** is a collection of paths $\{\gamma_i: [0,1] \to \mathbb{D} \}_{i=1}^k$ such that 1. $\gamma_i(0)= x_i$ and $\gamma_i(1) = x_{\sigma(i)}$, where $\sigma \in S_k$. 2. $\gamma_i(t) \ne \gamma_j(t)$ for $i \ne j$. Obviously, this gives a loop $\gamma: S^1 \to Conf^k(\mathbb{D})$. Conversely, we can easily construct a braid from a loop in $Conf^k(\mathbb{D})$. So we do not distinguish these two concepts throughout this note. The homotopy class $[\gamma] \in \mathcal{B}_k$ is called a **braid type**. For more details about the braid group, we refer the readers to [@KT]. The braid group is a classical object in low-dimensional topology and it has been studied by mathematicians for decades. Recently, some progress has been made in studying the braid group from the viewpoint of symplectic geometry [@AM; @FM; @MK; @H3]. To describe these results, we first quickly review some definitions from symplectic geometry. Let $\Sigma$ be a surface with a volume form $\omega$. A function $H: [0,1]_t \times \Sigma \to \mathbb{R}$ is called a **Hamiltonian function**. We can associate to it a vector field $X_H$, called the **Hamiltonian vector field**, by the relation $\omega(X_H, \cdot) =d_{\Sigma }H$. Let $\varphi_H^t$ be the flow generated by $X_H$. A diffeomorphism $\varphi$ of $\Sigma$ is called a **Hamiltonian symplectomorphism** if $\varphi =\varphi^1_H$ for some $H$. Let $Ham(\Sigma, \omega)$ be the set of all Hamiltonian symplectomorphisms. In fact, $Ham(\Sigma, \omega)$ is a group. There is a distance function on $Ham(\Sigma, \omega)$ called the **Hofer metric** [@HH]. It is defined as follows. Given a Hamiltonian function $H$, the Hofer norm of $H$ is $$|H|_{(1, \infty)} : =\int_0^1 (\max_{\Sigma} H_t - \min_{\Sigma} H_t) dt.$$ The **Hofer norm** of a Hamiltonian symplectomorphism is defined by $$|\varphi|_{Hofer} : = \inf \{ |H|_{(1, \infty)} : \varphi = \varphi^1_H \}.$$ Given $\varphi_1, \varphi_2 \in Ham(\Sigma, \omega)$, the Hofer metric is $d_{Hofer}(\varphi_1, \varphi_2) :=|\varphi_1 \circ \varphi_2 ^{-1}|_{Hofer}.$ #### Main results Given a union of distinct 1-periodic orbits $\alpha$ of $\varphi_H^1$, one can view them as a braid in $\Sigma$ and hence we get a braid type $b(\alpha)$ associated with these periodic orbits. M.R.R. Alves and M.
Meiwes show that if another Hamiltonian symplectomorphism $\varphi^1_{H'}$ is sufficiently close to $\varphi_H^1$ with respect to the Hofer metric, then there is a union of distinct 1-periodic orbits $\alpha'$ of $\varphi_{H'}^1$ such that $b(\alpha) =b(\alpha')$ [@AM]. Recently, M. Hutchings generalized this result to the general case (without the Hamiltonian assumption, and with $\alpha$ allowed to contain periodic orbits of arbitrary periods) by using holomorphic curve methods [@H3]. In [@FM], F. Morabito constructs braid types for the Hamiltonian symplectomorphisms preserving a fixed link. These braid types come from the Hamiltonian chords rather than from periodic orbits. In this note, we follow Alves and Meiwes's idea to prove a parallel result for the braid types defined by Morabito. Since the Hamiltonian symplectomorphisms are degenerate in our situation, the idea is closer to M. Khanevsky's generalization [@MK]. Before giving the statement of our result, let us review Morabito's construction [@FM]. Let $\underline{L}=\cup_{i=1}^k L_i$ be a link (a disjoint union of embedded circles) on the disk $\mathbb{D}$. Consider the group $$Ham_{\underline{L}}(\mathbb{D}, \omega): =\{\varphi \in Ham_c(\mathbb{D}, \omega) \vert \varphi(\underline{L}) = \underline{L}\},$$ where $Ham_c(\mathbb{D}, \omega)$ consists of the Hamiltonian symplectomorphisms with compact support. Let $\mathbf{x} =(x_1, ...x_k) \in \underline{L}$ be a fixed base point, where $x_i \in L_i$. Given $\varphi \in Ham_{\underline{L}}(\mathbb{D}, \omega)$, let $H$ be a Hamiltonian function generating $\varphi$. Let $\gamma_{\varphi} =\{\varphi_H^t(x_i)\}_{i=1}^k$. Note that $\varphi(x_i) \in L_{\sigma(i)}$ because $\varphi(\underline{L}) = \underline{L}$, where $\sigma$ is a permutation. Choose paths $\{p_i(t)\}_{i=1}^k$ such that $p_i \subset L_{\sigma(i)}$ and $p_i$ connects $\varphi(x_i)$ and $x_{\sigma(i)}$. Concatenating $\gamma_{\varphi}$ and $\{p_i(t)\}_{i=1}^k$, we get a braid $\bar{\gamma}_{\varphi}$. The braid type of $\bar{\gamma}_{\varphi}$ is denoted by $b(\varphi, \underline{L})$. Note that $b(\varphi, \underline{L})$ is independent of the choice of the paths $\{p_i(t)\}_{i=1}^k$ and the base point. Because $\pi_1(Ham_c(\mathbb{D}, \omega)) = 0$, $b(\varphi, \underline{L})$ is also independent of the choice of the Hamiltonian function $H$. To apply the Floer homology, we view the disk $\mathbb{D}$ as a domain in $\mathbb{S}^2-\{p\},$ where $p$ is the south pole. Define $Ham_{\underline{L}}(\mathbb{S}^2, \omega): =\{\varphi \in Ham(\mathbb{S}^2, \omega) \vert \varphi(\underline{L}) = \underline{L}\}.$ Then $Ham_{\underline{L}}(\mathbb{D}, \omega) \subset Ham_{\underline{L}}(\mathbb{S}^2, \omega)$ is the subgroup consisting of Hamiltonian symplectomorphisms supported compactly in $\mathbb{D}$. The main result is as follows. **Theorem 1**. *Suppose that the link $\underline{L}$ is $\eta$-admissible (see Definition [Definition 2](#def1){reference-type="ref" reference="def1"}) with $k$ components. Then there exists a positive constant $\varepsilon_{\underline{L}}$ depending on $\underline{L}$ such that the following property holds. Given $\varphi \in Ham_{\underline{L}}(\mathbb{D}, \omega)$, for any $\varphi' \in Ham_{\underline{L}}(\mathbb{D}, \omega)$ with $d_{Hofer}(\varphi, \varphi')< \varepsilon_{\underline{L}} /k$, we have $b(\varphi, \underline{L}) = b(\varphi', \underline{L})$.* We prove Theorem [Theorem 1](#thm1){reference-type="ref" reference="thm1"} by using the quantitative Heegaard Floer homology introduced by D.
Cristofaro-Gardiner, V. Humilière, C. Mak, S. Seyfaddini and I. Smith [@CHMSS]. Alternatively, one may use Hutchings's holomorphic curve methods to prove the same result. Morabito defines a pseudonorm on $\mathcal{B}_k$ by $$|b|_{\underline{L}} : = \inf \{ |\varphi|_{Hofer} \vert \varphi \in Ham_{\underline{L}}(\mathbb{D}, \omega), b(\varphi, \underline{L}) =b \}.$$ It is remarked by Morabito that the morphism $Ham_{\underline{L}}(\mathbb{D}, \omega) \to \mathcal{B}_k, \varphi \mapsto b(\varphi, \underline{L})$ is surjective (see page 4 of [@FM]). Therefore, the definition makes sense for every $b \in \mathcal{B}_k.$ For $g,h \in \mathcal{B}_k$, the pseudodistance induced by $|\cdot|_{\underline{L}}$ is $d_{\underline{L}}(g, h) := |g h^{-1}|_{\underline{L}}$. Morabito shows that $|\cdot|_{\underline{L}}$ is indeed a norm when $k=2$ [@FM]. As an application of Theorem [Theorem 1](#thm1){reference-type="ref" reference="thm1"}, we show that $| \cdot |_{\underline{L}}$ is actually a norm for every $k$. **Corollary 1**. *Suppose that the link $\underline{L}$ is $\eta$-admissible with $k$ components. Then the pseudonorm $| \cdot |_{\underline{L}}$ on $\mathcal{B}_k$ is nondegenerate. Also, for any two different elements $g, h \in \mathcal{B}_k$, we have $d_{\underline{L}}(g, h) \ge \varepsilon_{\underline{L}} /k$. Moreover, if $\eta>0$, then $\mathcal{B}_k$ is unbounded with respect to $| \cdot |_{\underline{L}}$.* *Proof.* Suppose that $| b |_{\underline{L}} =0$. By definition, we have a sequence of Hamiltonian symplectomorphisms $\varphi_n \in Ham_{\underline{L}}(\mathbb{D}, \omega)$ such that $b= b(\varphi_n, \underline{L})$ and $\lim_{n \to \infty}|\varphi_n|_{Hofer}= \lim_{n \to \infty}d_{Hofer}(\varphi_n, id)=0$. By Theorem [Theorem 1](#thm1){reference-type="ref" reference="thm1"}, we have $b= b(\varphi_n, \underline{L}) = b(id, \underline{L})$ provided that $n$ is sufficiently large. Since $b(id, \underline{L})$ is represented by the constant braid, it is the unit in $\mathcal{B}_k$; hence $b$ is the unit element, which proves nondegeneracy. The same argument implies that $d_{\underline{L}}(g, h) \ge \varepsilon_{\underline{L}} /k$ for $g \ne h.$ The unboundedness of $\mathcal{B}_k$ is a direct consequence of Theorem 1.4 of [@FM]. ◻ # Quantitative Heegaard Floer homology In this section, we briefly review the quantitative Heegaard Floer homology and its filtered version. We assume that $\int_{\mathbb{S}^2} \omega =1$ throughout. **Definition 2**. *Fix $\eta\ge 0$. A link $\underline{L}=\cup_{i=1}^k L_i$ on the two-sphere is called $\eta$-admissible if it satisfies the following conditions:* 1. *$\mathbb{S}^2 -\underline{L} = \cup_{i=1}^{k+1} \mathring{B}_i$, where $\{\mathring{B}_i\}_{i=1}^k$ is a disjoint union of open disks. Let $B_i$ denote the closure of $\mathring{B}_i$. Then $\partial B_i = L_i$ for $1\le i\le k$, and ${B}_{k+1}$ is a planar domain with $k$ boundary components.* 2. *$\lambda = \int_{B_i} \omega$ for $1\le i\le k$ and $\lambda = 2\eta (k-1) + \int_{B_{k+1}} \omega.$* We assume that the link is $\eta$-admissible throughout. **Remark 1**. *The arguments in this note still work if we drop the second condition. However, if, say, $\int_{B_i} \omega \ne \int_{B_j} \omega$ for $i \ne j$, then we must have $\varphi(B_i) =B_i$ because $\varphi \in Ham_{\underline{L}}(\mathbb{D}, \omega)$ preserves the area. Then the Hamiltonian chords $\{\gamma_i(t)\}_{i=1}^k$ satisfy $\gamma_i(0), \gamma_i(1) \in L_i$. As a result, the morphism $Ham_{\underline{L}}(\mathbb{D}, \omega) \to \mathcal{B}_k, \varphi \mapsto b(\varphi, \underline{L})$ is not surjective.
This point is pointed out by Morabito on page 4 of [@FM].* Let $X:=Sym^k \mathbb{S}^2$ and $\Delta :=\{[x_1,..., x_k] \in X : \exists \ x_i =x_j \mbox{ for } i \ne j\}$ be the **diagonal**. Let $D_i: =\{z_i\} \times Sym^{k-1} \mathbb{S}^2$ for a fixed point $z_{i} \in \mathring{B}_{i}$. There is a symplectic form $\omega_X$ such that $\omega_X =Sym^k \omega$ outside a neighborhood of the diagonal. Let $V$ be a small neighborhood of $\Delta\cup D_{k+1}$. Fix $\varphi=\varphi^1_H \in Ham(\mathbb{S}^2, \omega)$. A Hamiltonian function $\underline{H} \in C^{\infty}([0,1] \times X, \mathbb{R})$ is **compatible with $H$** if it satisfies the following conditions: 1. $\underline{H}_t =Sym^k H_t$ outside $V$. 2. $\underline{H}_t$ is a ($t$-dependent) constant near $\Delta$. 3. $\underline{H}_t$ is a ($t$-dependent) constant near $D_{k+1}$. 4. $\min_X Sym^kH_t\le \underline{H}_t \le \max_X Sym^kH_t$. The last two conditions are not necessary for defining the quantitative Heegaard Floer homology. The purpose of the third item is to find holomorphic curves contained in $Sym^k(\mathbb{S}^2 -\{z_{k+1}\})$. If the fourth item holds, then $|\underline{H}|_{(1, \infty)} \le k|H|_{(1, \infty)}$. We use this property in the energy estimate. Note that a function $\underline{H}$ satisfying the four conditions always exists (see Remark 6.8 of [@CHMSS]). **Remark 2**. *Since we will consider $\varphi_H^t$ with compact support in $\mathbb{D}$ and take $z_{k+1}$ outside $\mathbb{D}$, we can choose $V$ sufficiently small such that $Sym^k \varphi_{{H}}^t( \underline{L}) \cap V =\emptyset$ for $t \in [0, 1]$. Therefore, the first condition implies that $\varphi_{\underline{H}}^t(Sym^k \underline{L}) = Sym^k \varphi^t_H(\underline{L})$. We assume that this is true throughout.* Suppose that $\varphi$ is **nondegenerate** in the sense that $Sym^k\varphi(\underline{L})$ intersects $Sym^k\underline{L}$ transversely. Fix a base point $\mathbf{x} \in \underline{L}$. Let $\mathbf{y}: [0,1] \to X$ be a **Hamiltonian chord**, i.e., $\partial_t \mathbf{y}(t) =X_{\underline{H}} \circ \mathbf{y}(t)$ and $\mathbf{y}(0), \mathbf{y}(1) \in Sym^k \underline{L}$. A capping is a smooth map $\hat{\mathbf{y}} : [0,1]_s \times [0,1]_t \to X$ such that $\hat{\mathbf{y}} (0, t) ={\mathbf{y}}(t)$, $\hat{\mathbf{y}}(1, t) ={\mathbf{x}}$ and $\hat{\mathbf{y}}(s, i) \in Sym^k \underline{L}$, where $i=0, 1$. Let $\pi_2(\mathbf{x}, \mathbf{y})$ denote the set of cappings. We abuse the same notation $\hat{\mathbf{y}}$ to denote its equivalence class in $\pi_2(\mathbf{x}, \mathbf{y}) / \ker \omega_X.$ Define a complex ${CF}( \underline{L}, \underline{H}_t)$ to be the set of formal sums of equivalence classes of cappings in $\pi_2(\mathbf{x}, \mathbf{y}) / \ker \omega_X$ $$\sum_{(\mathbf{y}, \hat{\mathbf{y}})} a_{(\mathbf{y}, \hat{\mathbf{y}})}(\mathbf{y}, \hat{\mathbf{y}})$$ such that $a_{(\mathbf{y}, \hat{\mathbf{y}})} \in \mathbb{Z}_2$ and, for any $C\in \mathbb{R}$, there are only finitely many $(\mathbf{y}, \hat{\mathbf{y}})$ with $-\int \hat{\mathbf{y}}^*\omega_X >C$ and $a_{(\mathbf{y}, \hat{\mathbf{y}})} \ne 0$. **Definition 3**. *An $\omega_X$-tame almost complex structure $J$ is called nearly-symmetric if $J =Sym^k j$ near $\Delta \cup \cup_{i=1}^{k+1}D_i$, where $j$ is a fixed complex structure on the sphere.* Fix a generic path of nearly-symmetric almost complex structures $\{J_t\}_{t\in [0,1]}$.
An $X_{\underline{H}_t }$-perturbed holomorphic strip is a solution to the following Floer equations: $$\begin{aligned} \label{eq1} \begin{split} \left \{ \begin{array}{ll} \partial_s u + J_{t}(u)(\partial_t u -X_{\underline{H}_t } \circ u) =0\\ u(s, 0), u(s, 1) \in Sym^k \underline{L}\\ \lim_{s \to \pm \infty} u(s, t) =\mathbf{y}_{\pm}(t). \end{array} \right. \end{split}\end{aligned}$$ The moduli space of index-$i$ $X_{\underline{H}_t }$-perturbed holomorphic strips in the homotopy class $\beta$ is denoted by $\mathcal{M}_i^{J_t}(\mathbf{y}_+ , \mathbf{y}_-, \beta)$. Later, we will replace the vector field $X_{\underline{H}_t }$ in Equations ([\[eq1\]](#eq1){reference-type="ref" reference="eq1"}) by a Hamiltonian vector field $X$ which depends on $s$ or other parameters. The solutions are called **$X$-perturbed holomorphic strips**. Then the differential is defined by $$\label{eq2} \partial ({\mathbf{y}}_+, \hat{\mathbf{y}}_+) := \sum_{\beta \in \pi_2(X, \mathbf{y}_+, \mathbf{y}_-)} \#_2 \left(\mathcal{M}_1^{J_t}(\mathbf{y}_+ , \mathbf{y}_-, \beta) /\mathbb{R} \right) (\mathbf{y}_-, \hat{\mathbf{y}}_+ \# \beta ).$$ The quantitative Heegaard Floer homology is denoted by $HF(\underline{L}, H).$ #### Continuous morphisms A **homotopy** from $(\underline{H}_+, J_{t}^+)$ to $(\underline{H}_-, J_t^-)$ is a pair consisting of a function $\underline{H}: \mathbb{R}_s \times [0,1]_t \times X \to \mathbb{R}$ and a family of almost complex structures $\{J_{s,t}\}_{s \in \mathbb{R}, t\in [0,1]}$ such that - $(\underline{H}_s, J_{s,t}) =(\underline{H}_+, J_+^t)$ when $s \ge 1$ and $(\underline{H}_s, J_{s,t}) =(\underline{H}_-, J_t^-)$ when $s\le -1$, - $\underline{H}_s$ is an ($s, t$-dependent) constant near the diagonal and $D_{k+1}$, - For each $(s,t) \in \mathbb{R} \times [0,1]$, $J_{s, t}$ is nearly-symmetric. A typical way of constructing $\underline{H}_s$ is to define $$\underline{H}_s =\chi(s) \underline{H}_+ + (1-\chi(s)) \underline{H}_-,$$ where $\chi: \mathbb{R} \to \mathbb{R}$ is a cut-off function such that $\chi(s) = 1$ when $s \ge 1$ and $\chi(s) =0$ when $s \le -1$. Let $\mathcal{M}_i^{J_{s,t}}(\mathbf{y}_+ , \mathbf{y}_-, \beta)$ denote the moduli space of index-$i$ $X_{\underline{H}_s}$-perturbed holomorphic strips. Define a morphism $\phi(\underline{H}_s, J_{s,t}): CF(\underline{L}, \underline{H}_+) \to CF(\underline{L}, \underline{H}_-)$ on the chain level by $$\phi(\underline{H}_s, J_{s,t}) ({\mathbf{y}}_+, \hat{\mathbf{y}}_+) := \sum_{\beta \in \pi_2(X, \mathbf{y}_+, \mathbf{y}_-)} \#_2 \mathcal{M}_0^{J_{s,t}}(\mathbf{y}_+ , \mathbf{y}_-, \beta) (\mathbf{y}_-, \hat{\mathbf{y}}_+ \# \beta ).$$ The above map induces a homomorphism $\Phi(\underline{H}_+, \underline{H}_-): HF(\underline{L}, H_+) \to HF(\underline{L}, H_-)$. Moreover, the homomorphism $\Phi(\underline{H}_+, \underline{H}_-)$ only depends on $(\underline{H}_+, \underline{H}_-)$. $\Phi(\underline{H}_+, \underline{H}_-)$ is called a **continuous morphism.** #### Filtration The action functional on $CF(\underline{L}, \underline{H})$ is defined by $$\mathcal{A}_{\underline{H}} ({\mathbf{y}}, \hat{\mathbf{y}}) := -\int \hat{\mathbf{y}}^* \omega_X + \int \underline{H}_t (\mathbf{y}(t))dt.$$ Since the differential decreases the action, $CF^{<a}(\underline{L}, \underline{H})$ is a subcomplex. For $a<b$, define the quotient complex $CF^{(a, b)}(\underline{L}, \underline{H}) :=CF^{<b}(\underline{L}, \underline{H})/CF^{<a}(\underline{L}, \underline{H})$.
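For the reader's convenience, we record the standard energy identity behind the fact that the differential decreases the action; this is a sketch with the sign conventions of the computation in the proof of Lemma [Lemma 7](#lem1){reference-type="ref" reference="lem1"} below, in which the Hamiltonian is $s$-independent so that $\partial_s\underline{H}_s=0$. For a strip $u$ contributing to $\partial$ from $({\mathbf{y}}_+, \hat{\mathbf{y}}_+)$ to $({\mathbf{y}}_-, \hat{\mathbf{y}}_+ \# \beta)$, one has $$\mathcal{A}_{\underline{H}} ({\mathbf{y}}_+, \hat{\mathbf{y}}_+) - \mathcal{A}_{\underline{H}} ({\mathbf{y}}_-, \hat{\mathbf{y}}_+ \# \beta) = \int |\partial_s u|^2 \, ds \wedge dt \ \ge\ 0,$$ which is the precise sense in which the action does not increase along the differential.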
There is another filtration on $CF(\underline{L}, \underline{H})$ induced by the intersection numbers with $\Delta$ and $D_{k+1}$. Define $$\partial_{ij} ({\mathbf{y}}_+, \hat{\mathbf{y}}_+) := \sum_{\beta \in \pi_2(X, \mathbf{y}_+, \mathbf{y}_-), \beta \cdot D_{k+1} =i, \beta \cdot \Delta = j} \#_2 \left(\mathcal{M}_1^{J_t}(\mathbf{y}_+ , \mathbf{y}_-, \beta) /\mathbb{R} \right) (\mathbf{y}_-, \hat{\mathbf{y}}_+ \# \beta ).$$ Note that $u$ is $Sym^k j$-holomorphic near $\Delta$ and $D_{k+1}$ because $X_{\underline{H}_t }=0$ there. Also, $\Delta$ and $D_{k+1}$ are codimension-two complex subvarieties. Therefore, $u\cdot \Delta \ge 0$ and $u\cdot D_{k+1} \ge 0$. By definition, we have $$\partial= \partial_{00} + \partial_{10}+\partial_{01} + \cdots.$$ Then $0=\partial^2=\partial_{00}^2 + \partial_{10} \partial_{00}+ \partial_{00} \partial_{10} +\cdots$ implies that $\partial_{00}^2=0$, $\partial_{10} \partial_{00}+ \partial_{00} \partial_{10} =0$, etc. Define $\widehat{HF}(\underline{L}, \underline{H})$ to be the homology of $(CF(\underline{L}, \underline{H}), \partial_{00})$. Roughly speaking, $\widehat{HF} (\underline{L}, \underline{H})$ is the Lagrangian Floer homology of $Sym^k \underline{L}$ in $Sym^k(\mathbb{S}^2-\{z_{k+1}\})- \Delta$. It is easy to show that $\partial_{00}$ decreases the action functional. The homology of $(CF^{(a, b)}(\underline{L}, \underline{H}), \partial_{00})$ is denoted by $\widehat{HF}^{(a, b)}(\underline{L}, H)$. Similarly, we have $\phi(\underline{H}_s, J_{s,t})= \phi_{00}(\underline{H}_s, J_{s,t}) + \phi_{10}(\underline{H}_s, J_{s,t}) +\phi_{01}(\underline{H}_s, J_{s,t}) + \cdots$. The chain map condition $\partial \circ \phi =\phi \circ \partial$ implies that $$\sum_{i+i'=k} \sum_{j+j' =l} \left(\partial_{ij} \circ \phi_{i'j'}(\underline{H}_s, J_{s,t}) - \phi_{i'j'}(\underline{H}_s, J_{s,t}) \circ \partial_{ij} \right) =0$$ for all nonnegative integers $k, l$. Here $i, j, i', j'$ are nonnegative. In particular, we have $\partial_{00} \circ \phi_{00}(\underline{H}_s, J_{s,t}) =\phi_{00} (\underline{H}_s, J_{s,t})\circ \partial_{00}.$ The following lemma tells us that $\phi_{00}(\underline{H}_s, J_{s,t})$ induces a homomorphism $$\Phi_0(\underline{H}_+, \underline{H}_-): \widehat{HF}(\underline{L}, H_+) \to \widehat{HF}(\underline{L}, H_-)$$ that only depends on $(\underline{H}_+, \underline{H}_-)$. Moreover, $\widehat{HF}(\underline{L}, H)$ is independent of the choice of $H$ and of the almost complex structures. **Lemma 4**. *The continuous morphisms satisfy the following properties:* 1. *(Chain homotopy) Let $(\underline{H}^0_s, J^0_{s, t})$ and $(\underline{H}^1_s, J^1_{s, t})$ be two generic homotopies from $(\underline{H}_+, J^+_t)$ to $(\underline{H}_-, J^-_t)$. Then there exists a homomorphism $K_{00}: CF(\underline{L}, \underline{H}_+) \to CF(\underline{L}, \underline{H}_-)$ such that $$\phi_{00}(\underline{H}_s^1, J_{s,t}^1)- \phi_{00}(\underline{H}_s^0, J_{s,t}^0) =\partial_{00} \circ K_{00} + K_{00} \circ \partial_{00}.$$* 2. *(Composition rule) Let $(\underline{H}^0_s, J^0_{s, t})$ be a homotopy from $(\underline{H}_+, J^+_t)$ to $(\underline{H}_0, J^0_t)$ and $(\underline{H}^1_s, J^1_{s, t})$ be a homotopy from $(\underline{H}_0, J^0_t)$ to $(\underline{H}_-, J^-_t)$. Concatenating $(\underline{H}^0_s, J^0_{s, t})$ and $(\underline{H}^1_s, J^1_{s, t})$ produces a homotopy $(\underline{H}^1_s\circ \underline{H}^0_s, J_{s,t}^{1} \circ J^{0}_{s,t})$ from $(\underline{H}_+, J^+_t)$ to $(\underline{H}_-, J^-_t)$.
Then there exists a homomorphism $K_{00}: CF(\underline{L}, H_+) \to CF(\underline{L}, H_-)$ such that $$\phi_{00}(\underline{H}^1_s, J_{s,t}^1 )\circ \phi_{00}(\underline{H}^0_s, J_{s,t}^0) =\phi_{00}( \underline{H}^1_s\circ \underline{H}^0_s, J_{s,t}^{1} \circ J^{0}_{s,t}) + \partial_{00} \circ K_{00} + K_{00} \circ \partial_{00}.$$* *Proof.* Let $\{(\underline{H}_s^{\tau}, J_{s,t}^{\tau})\}_{\tau \in [0,1]}$ be a generic homotopy between $(\underline{H}^0_s, J^0_{s, t})$ and $(\underline{H}^1_s, J^1_{s, t})$ such that $\underline{H}_s^{\tau}$ is constant and $J^{\tau}_{s,t} =Sym^k j$ near $\Delta$ and $D_{k+1}$. In the usual Lagrangian Floer theory, we have $$\label{eq3} \phi(\underline{H}_s^1, J_{s,t}^1)- \phi(\underline{H}_s^0, J_{s,t}^0) =\partial \circ K + K \circ \partial,$$ where $K$ is defined by counting the $X_{\underline{H}^{\tau}_s}$-perturbed strips with index $-1$. Since $X_{\underline{H}^{\tau}_s} =0$ and $J_{s,t}^{\tau} =Sym^k j$ near $\Delta$ and $D_{k+1}$, we still have $u \cdot \Delta \ge 0$ and $u \cdot D_{k+1} \ge 0$. Therefore, we can decompose $K =K_{00} + K_{10} + K_{01} + \cdots$. Then Equation ([\[eq3\]](#eq3){reference-type="ref" reference="eq3"}) implies that $$\phi_{kl}(\underline{H}_s^1, J_{s,t}^1)- \phi_{kl}(\underline{H}_s^0, J_{s,t}^0) = \sum_{i+i'=k} \sum_{j+j'=l} \left(\partial_{ij} \circ K_{i'j'} - K_{i'j'} \circ \partial_{ij} \right).$$ The first statement is just the special case $k=l=0$. The proof of the second statement is similar. ◻ **Remark 3**. *By P. Biran and O. Cornea's criteria (Proposition 6.1.4 of [@BC]) and the computations in [@CHMSS], one should expect that $\widehat{HF}(\underline{L}, H)=0.$ But we do not need this result because we only use the filtered version $\widehat{HF}^{(a, b)}(\underline{L}, H)$.* **Remark 4**. *In [@GHC], the author gives an alternative formulation of $HF(\underline{L}, H)$ by counting holomorphic curves in $\mathbb{R} \times [0,1] \times \Sigma$, which are called HF curves. Under the tautological correspondence, the intersection number $u \cdot \Delta$ corresponds to the $J_0$ index of HF curves in $\mathbb{R} \times [0,1] \times \Sigma$ (see Proposition 4.2 of [@GHC]). The $J_0$ index is the counterpart of Hutchings's index in the ECH setting, which measures the Euler characteristic of holomorphic curves [@H1; @H2]. If an HF curve has zero $J_0$ index, then it is a disjoint union of holomorphic strips.* # Proof of Theorem [Theorem 1](#thm1){reference-type="ref" reference="thm1"} {#proof-of-theorem-thm1} To apply the quantitative Heegaard Floer homology, we need to perturb $\varphi^1_{\underline{H}}$ so that it is nondegenerate. Fix a perfect Morse function $f_{\underline{L}}: Sym^k \underline{L} \to \mathbb{R}$. Extend $f_{\underline{L}}$ to a function $f$ on $X$ such that - $f$ is Morse outside a small neighborhood of $\Delta$ and $D_{k+1}$; - A critical point of $f_{\underline{L}}$ is also a critical point of $f$; - $f$ is a constant near $\Delta$ and $D_{k+1}$. Denote $\underline{H} \# \epsilon f = \underline{H} + \epsilon f\circ (\varphi^t_{\underline{H} })^{-1}$ by $\underline{H}^{\epsilon}$. Note that $\underline{H}^{\epsilon}$ generates $\varphi^t_{\underline{H}} \circ \varphi_{\epsilon f}^t$ and $\underline{H}^{\epsilon}$ is constant near $\Delta$ and $D_{k+1}$. The next two lemmas tell us that the chain complex $(CF(\underline{L}, \underline{H}^{\epsilon}), \partial)$ is identical to $(CF(\underline{L}, {\epsilon} f), \partial)$ provided that $\epsilon$ is sufficiently small. **Lemma 5**.
*There exists a constant $\epsilon_{\underline{L}, f}^1>0$ depending only on the link and $f$ such that for any $0<\epsilon \le \epsilon_{\underline{L}, f}^1$, $\varphi_{\underline{H}}^1 \circ \varphi_{\epsilon f}^1$ is nondegenerate. Also, the Hamiltonian chords of $\underline{H}^{\epsilon}$ are of the form $\mathbf{y}(t) =\varphi^t_{\underline{H}}(\mathbf{x})$, where $\mathbf{x} \in Crit(f_{\underline{L}})$.* *Proof.* For any $\mathbf{x}\in Sym^k \underline{L}$, $\varphi_{\underline{H}}^1 \circ \varphi_{\epsilon f}^1(\mathbf{x}) \in Sym^k \underline{L}$ if and only if $\mathbf{x} \in Sym^k\underline{L} \cap (\varphi_{\epsilon f}^1)^{-1}(Sym^k\underline{L})$ because $\varphi_H^1(\underline{L}) =\underline{L}$. Also, note that $\varphi_{\underline{H}}^1 \circ \varphi_{\epsilon f}^1$ is nondegenerate if and only if $(\varphi_{\epsilon f}^1)^{-1}(Sym^k\underline{L})$ intersects $Sym^k\underline{L}$ transversally. By Lemma 6.1 of [@GHC], there exists $\epsilon_{\underline{L}, f}^1>0$ such that if $0< \epsilon \le \epsilon_{\underline{L}, f}^1$, then $(\varphi_{\epsilon f}^1)^{-1}(Sym^k\underline{L})$ intersects $Sym^k\underline{L}$ transversally and every intersection point $\mathbf{x}$ lies in $Crit(f_{\underline{L}})$. Since $\mathbf{x}$ is also a critical point of $f$, we have $\varphi_{\epsilon f}^t(\mathbf{x}) =\mathbf{x}$. Therefore, the Hamiltonian chord satisfies $\varphi^t_{\underline{H}} \circ \varphi_{\epsilon f}^t(\mathbf{x}) =\varphi^t_{\underline{H}}(\mathbf{x}).$ ◻ Let $\mathbf{x} \in Sym^k \underline{L}$ be a critical point of $f_{\underline{L}}$. Using the Hamiltonian chord $\varphi_{\underline{H}}^t \circ \varphi_{\epsilon f}^t(\mathbf{x})$, we construct a braid $\bar{\gamma}_{\varphi \circ \varphi_{\epsilon f}^1}$ as before. Because $\varphi_{\underline{H}}^t \circ \varphi_{\epsilon f}^t(\mathbf{x}) = \varphi_{\underline{H}}^t (\mathbf{x})$, the braid type of $\bar{\gamma}_{\varphi \circ \varphi_{\epsilon f}^1}$ is still $b(\varphi, \underline{L})$. **Lemma 6**. *Assume that $0<\epsilon \le \epsilon_{\underline{L}, f}^1$. Let $\mathbf{y}_{\pm}(t) =\varphi^t_{\underline{H}}(\mathbf{x}_{\pm})$, where $\mathbf{x}_{\pm} \in Crit(f_{\underline{L}})$. Fix a nearly symmetric almost complex structure $J_0$. Let $J_t := (\varphi_{\underline{H}}^t)_* \circ J_0 \circ (\varphi_{\underline{H}}^t)_*^{-1}$. Then there is a bijection $$\Psi: \mathcal{M}^{J_t}(\mathbf{y}_+, \mathbf{y}_-) \to \mathcal{M}^{J_0}(\mathbf{x}_+, \mathbf{x}_-),$$ where $\mathcal{M}^{J_t}(\mathbf{y}_+, \mathbf{y}_-)$ is the moduli space of $X_{\underline{H}^{\epsilon}}$-strips and $\mathcal{M}^{J_0}(\mathbf{x}_+, \mathbf{x}_-)$ is the moduli space of $X_{\epsilon f}$-strips. Moreover, $\Psi$ preserves the energy in the sense that $$\int u^*\omega_X + \int \underline{H}^{\epsilon}(t, \mathbf{y}_+(t))dt - \int \underline{H}^{\epsilon}(t, \mathbf{y}_-(t))dt = \int \Psi(u)^*\omega_X + \epsilon f(\mathbf{x}_+) - \epsilon f(\mathbf{x}_-).$$* *Proof.* For $u \in \mathcal{M}^{J_t}(\mathbf{y}_+, \mathbf{y}_-)$, define the map $\Psi$ by sending $u$ to $v(s,t) : =(\varphi_{\underline{H}}^t)^{-1}(u(s,t ))$. Note that $\lim_{s \to \pm \infty }v(s, t) =(\varphi_{\underline{H}}^t)^{-1}( \mathbf{y}_{\pm}(t)) = \mathbf{x}_{\pm},$ $v(s, 0) = u(s, 0) \subset Sym^k{\underline{L}}$, and $v(s, 1) =(\varphi_{\underline{H}}^1)^{-1} (u(s, 1)) \subset (\varphi_{\underline{H}}^1)^{-1}(Sym^k{\underline{L}}) = Sym^k{\underline{L}}$.
By a direct computation, we have $$\begin{split} & \partial_s v = (\varphi_{\underline{H}}^t)^{-1}_*(\partial_su), \quad \partial_t v = (\varphi_{\underline{H}}^t)^{-1}_*(\partial_t u - X_{\underline{H}} \circ u), \\ &X_{\underline{H}^{\epsilon }} = X_{\underline{H} } + (\varphi_{\underline{H}}^t)_*(X_{\epsilon f} \circ (\varphi_{\underline{H}}^t)^{-1}). \end{split}$$ Therefore, $\partial_s v + J_0(v)(\partial_t v -X_{\epsilon f} \circ v) = (\varphi_{\underline{H}}^t)^{-1}_*( \partial_s u + J_t(u) (\partial_t u -X_{ \underline{H}^{\epsilon}} \circ u)) = 0$. In sum, $\Psi$ is well-defined. Obviously, it is a bijection. Let $v=\Psi(u)$. We have $$\begin{split} \int u^* \omega_X& =\int \omega_X \left( (\varphi_{\underline{H}}^t)_*(\partial_s v), (\varphi_{\underline{H}}^t)_*(\partial_t v) + X_{\underline{H}} \circ u \right) ds \wedge dt \\ &= \int v^* \omega_X + \int \omega_X( (\varphi_{\underline{H}}^t)_*(\partial_s v), X_{\underline{H}^{\epsilon }} \circ u- (\varphi_{\underline{H}}^t)_*(X_{\epsilon f} \circ v ) ) ds \wedge dt \\ &= \int v^* \omega_X + \int \omega_X( \partial_su, X_{\underline{H}^{\epsilon }} \circ u) ds \wedge dt - \int \omega_X(\partial_s v, X_{\epsilon f} \circ v) ds \wedge dt \\ &= \int v^* \omega_X - \int d{\underline{H}^{\epsilon }} (\partial_s u) ds \wedge dt - \int d({\epsilon f} ) (\partial_s v) ds \wedge dt \\ &= \int v^* \omega_X - \int \underline{H}^{\epsilon}(t, \mathbf{y}_+(t))dt + \int \underline{H}^{\epsilon}(t, \mathbf{y}_-(t))dt + \epsilon f(\mathbf{x}_+) - \epsilon f(\mathbf{x}_-). \end{split}$$ So $\Psi$ preserves the energy. ◻ **Lemma 7**. *Fix $a< b$. Let $H_{\pm}$ be Hamiltonian functions with compact support. Suppose that $|H_+-H_-|_{(1, \infty)} < \varepsilon/2k$ and $\epsilon|f|_{C^0} < \varepsilon/4$. Then the continuous morphism $\phi_{00}(\underline{H}_s, J_{s,t})$ descends to the following maps: $$\begin{split} &\phi_{00}(\underline{H}_s, J_{s,t}): CF^{<a}(\underline{L}, \underline{H}_+^{\epsilon}) \to CF^{<a+ \varepsilon}(\underline{L}, \underline{H}^{\epsilon}_-) \\ &\phi_{00}(\underline{H}_s, J_{s,t}): CF^{(a, b)}(\underline{L}, \underline{H}^{\epsilon}_+) \to CF^{(a+ \varepsilon, b + \varepsilon)}(\underline{L}, \underline{H}^{\epsilon}_-).\\ \end{split}$$* *Proof.* We only prove the first statement because the second statement is a consequence of the first one. Let $u \in \mathcal{M}^{J_{s, t}}(\mathbf{y}_+ , \mathbf{y}_-, \beta)$ be an $X_{\underline{H}_s}$-perturbed holomorphic strip that contributes to the continuous morphism.
Then we have $$\begin{split} &\mathcal{A}_{\underline{H}^{\epsilon}_+} ({\mathbf{y}}_+, \hat{\mathbf{y}}_+) -\mathcal{A}_{\underline{H}^{\epsilon}_-} ({\mathbf{y}}_-, \hat{\mathbf{y}}_-) \\ =& \int u^*\omega_X + \int \underline{H}^{\epsilon}_+(t, \mathbf{y}_+(t))dt - \int \underline{H}^{\epsilon}_-(t, \mathbf{y}_-(t))dt\\ =& \int \omega_X(\partial_s u , \partial_t u ) ds \wedge dt + \int \underline{H}^{\epsilon}_+(t, \mathbf{y}_+(t))dt - \int \underline{H}^{\epsilon}_-(t, \mathbf{y}_-(t))dt\\ =& \int (\omega_X(\partial_s u , J_{s,t}(\partial_s u) ) + \omega_X(\partial_su, X_{\underline{H}_s}))ds \wedge dt + \int \underline{H}^{\epsilon}_+(t, \mathbf{y}_+(t))dt - \int \underline{H}^{\epsilon}_-(t, \mathbf{y}_-(t))dt\\ =& \int (|\partial_s u|^2 - d{\underline{H}}_s(\partial_s u))ds \wedge dt + \int \underline{H}^{\epsilon}_+(t, \mathbf{y}_+(t))dt - \int \underline{H}^{\epsilon}_-(t, \mathbf{y}_-(t))dt\\ =&\int |\partial_s u|^2 ds \wedge dt -\int d(\underline{H}_s \circ u \, dt) + \int (\partial_s \underline{H}_s) \circ u \, ds \wedge dt+ \int \underline{H}^{\epsilon}_+(t, \mathbf{y}_+(t))dt - \int \underline{H}^{\epsilon}_-(t, \mathbf{y}_-(t))dt\\ =&\int |\partial_s u|^2 ds \wedge dt + \int ( \partial_s\underline{H}_s ) \circ u \, ds \wedge dt \\ \ge& \int (\partial_s\underline{H}_s) \circ u \, ds \wedge dt \ge \int_0^1 \min_{X}(\underline{H}^{\epsilon}_+ -\underline{H}^{\epsilon}_-)\, dt \\ \ge& -k |H_+ -H_-|_{(1, \infty)} -2\epsilon|f|_{C^0} >-\varepsilon. \end{split}$$ The above estimates imply that $\phi_{00}(\underline{H}_s, J_{s,t})$ maps $CF^{<a}(\underline{L}, \underline{H}_+^{\epsilon})$ to $CF^{<a+ \varepsilon}(\underline{L}, \underline{H}^{\epsilon}_-)$. ◻ The following definition is an analogue of Definition 6 in [@MK]. **Definition 8**. *Suppose that $\varepsilon>2\epsilon |f|_{C^0}$. We say that the spectrum of **$\mathcal{A}_{\underline{H}^{\epsilon}}$ is $\varepsilon$-admissible** for $(\epsilon f, J_t)$ if we have* - *either $|\mathcal{A}_{\underline{H}^{\epsilon}} ({\mathbf{y}}, \hat{\mathbf{y}}) - \mathcal{A}_{\underline{H}^{\epsilon}} ({\mathbf{y}}', \hat{\mathbf{y}}') | \ge \varepsilon$ or $|\mathcal{A}_{\underline{H}^{\epsilon}} ({\mathbf{y}}, \hat{\mathbf{y}}) - \mathcal{A}_{\underline{H}^{\epsilon}} ({\mathbf{y}}', \hat{\mathbf{y}}') | \le 2\epsilon |f|_{C^0}$;* - *in the case that $|\mathcal{A}_{\underline{H}^{\epsilon}} ({\mathbf{y}}, \hat{\mathbf{y}}) - \mathcal{A}_{\underline{H}^{\epsilon}} ({\mathbf{y}}', \hat{\mathbf{y}}') | \le 2\epsilon |f|_{C^0}$, we have $\#_2 (\mathcal{M}^{J_t}(\mathbf{y}, \mathbf{y}', \beta) / \mathbb{R}) =0$, where $\beta$ is the homotopy class such that $\hat{\mathbf{y}} \# \beta \# (-\hat{\mathbf{y}}') = 0$ in $\pi_2(X, Sym^k \underline{L})/\ker \omega_X$.* **Lemma 9**. *Let $J_t = (\varphi_{\underline{H}}^t)_* \circ J_0 \circ (\varphi_{\underline{H}}^t)_*^{-1}$, where $J_0$ is a generic nearly symmetric almost complex structure. There exist positive constants $\varepsilon^2_{\underline{L}, f, J_0}$ (depending on the link, $f$ and $J_0$) and $\lambda_{\underline{L}}$ (depending on $\underline{L}$) such that if $0<\epsilon \le \varepsilon^2_{\underline{L}, f, J_0}$, the spectrum of $\mathcal{A}_{\underline{H}^\epsilon }$ is $\lambda_{\underline{L}}$-admissible for $(\epsilon f, J_t)$.
Moreover, given $({\mathbf{y}}= \varphi_{\underline{H}}^t(\mathbf{x}), \hat{\mathbf{y}})$, we have $$\# \{ ({\mathbf{y}}', \hat{\mathbf{y}}') :|\mathcal{A}_{\underline{H}^{\epsilon}} ({\mathbf{y}}, \hat{\mathbf{y}}) - \mathcal{A}_{\underline{H}^{\epsilon}} ({\mathbf{y}}', \hat{\mathbf{y}}') | \le 2\epsilon |f|_{C^0} \} = 2^{k}.$$* *Proof.* Let $\mathbf{y}(t) =\varphi^t_{\underline{H}^{\epsilon}} (\mathbf{x})$ and $\mathbf{y}'(t) =\varphi^t_{\underline{H}^{\epsilon}} (\mathbf{x}')$ be Hamiltonian chords, where $\mathbf{x}, \mathbf{x}' \in Crit(f_{\underline{L}})$. Let $\eta: [0,1] \to Sym^k \underline{L}$ be a path from $\mathbf{x}'$ to $\mathbf{x}$. Define $u(s, t) : =\varphi_{\underline{H}}^t(\eta(s))$. Note that $u(1, t) = \mathbf{y}(t)$, $u(0, t) = \mathbf{y}'(t)$, $u(s, 0) =\eta(s) \subset Sym^k \underline{L}$ and $u(s, 1) =\varphi_{\underline{H} }^1(\eta(s)) \subset Sym^k \underline{L}$ because $\varphi_H^1(\underline{L}) = \underline{L}$. Then $$\int u^* \omega_X = \int \omega_X(\partial_s u, X_{\underline{H}} \circ u) ds\wedge dt = -\int d \underline{H} (\partial_s u) ds \wedge dt = \int_0^1 \underline{H}(\mathbf{y}'(t))dt -\int_0^1 \underline{H}(\mathbf{y}(t))dt.$$ Therefore, we have $$\begin{split} \mathcal{A}_{\underline{H}^{\epsilon}}(\mathbf{y}, \hat{\mathbf{y}}) - \mathcal{A}_{\underline{H}^{\epsilon}}(\mathbf{y}', \hat{\mathbf{y}}') = -\int_A \omega_X + \epsilon f(\mathbf{x}) - \epsilon f(\mathbf{x}'). \end{split}$$ Here the class $A := \hat{\mathbf{y}} \# [u] \# (-\hat{\mathbf{y}}')$ belongs to $\pi_2(X, Sym^k \underline{L})/\ker \omega_X$. By Corollary 4.10 of [@CHMSS], $\pi_2(X, Sym^k \underline{L})$ is freely generated by $\{u_i\}_{i=1}^{k+1}$ plus some torsion terms, where the $u_i$ are the tautological correspondents of the $B_i$. By the construction, we have $\int u_i^* \omega_X = \int_{B_i} \omega$. Define $$\lambda_{\underline{L}}:= \frac{1}{2} \min\{ \varepsilon>0 \ \vert \ \exists (a_1, \dots, a_{k+1}) \in \mathbb{Z}^{k+1} \mbox{ s.t. } |\sum_{i=1}^{k+1} a_i \int_{B_i} \omega | = \varepsilon\}>0.$$ Then $|\int_A \omega_X| \ge 2 \lambda_{\underline{L}}$ if $\int_A \omega_X \ne 0.$ Let $\varepsilon'_{\underline{L}, f} :=\min\{ \varepsilon_{\underline{L}, f}^1, \lambda_{\underline{L}}/10|f|_{C^0} \}$. Assume that $0< \epsilon\le \varepsilon'_{\underline{L}, f}.$ If $\int_A \omega_X \ne 0$, then $$|\mathcal{A}_{\underline{H}^{\epsilon}}(\mathbf{y}, \hat{\mathbf{y}})- \mathcal{A}_{\underline{H}^{\epsilon}}(\mathbf{y}', \hat{\mathbf{y}}') | \ge 2\lambda_{\underline{L}} -2\epsilon|f|_{C^0} \ge \lambda_{\underline{L}}.$$ If $\int_A \omega_X =0$, then $$|\mathcal{A}_{\underline{H}^{\epsilon}}(\mathbf{y}, \hat{\mathbf{y}}) - \mathcal{A}_{\underline{H}^{\epsilon}}(\mathbf{y}', \hat{\mathbf{y}}')| = |\epsilon f(\mathbf{x}) - \epsilon f(\mathbf{x}')| \le 2\epsilon |f|_{C^0}.$$ By Y.-G. Oh's results (Propositions 4.1 and 4.6 of [@Oh]), there exists a constant $\varepsilon''_{\underline{L}, f, J_0}$ such that for any $0<\epsilon \le \varepsilon_{\underline{L}, f, J_0}''$, the $X_{\epsilon f}$-strips with energy smaller than $\lambda_{\underline{L}}$ are in one-to-one correspondence with the index-one Morse flow lines of $f_{\underline{L}}$.
By Lemma [Lemma 6](#lem5){reference-type="ref" reference="lem5"}, we have $\#_2 (\mathcal{M}^{J_t}(\mathbf{y}_+, \mathbf{y}_-, \beta) / \mathbb{R}) =0$, where $\beta$ is the homotopy class such that $\hat{\mathbf{y}} \# \beta \# (-\hat{\mathbf{y}}') = 0$. Here we use the assumption that $f_{\underline{L}}$ is perfect. Set $\varepsilon^2_{\underline{L}, f, J_0} := \min\{\varepsilon'_{\underline{L}, f}, \varepsilon''_{\underline{L}, f, J_0}\}$. This is the constant in the statement. Fix $(\mathbf{y}, \hat{\mathbf{y}})$. For any ${\mathbf{y}}'$, there exists a unique $\hat{\mathbf{y}}' \in \pi_2( {\mathbf{x}}, {\mathbf{y}}') / \ker \omega_X$ such that $A=0$. Therefore, we can deduce the second statement. ◻ *Proof of Theorem [Theorem 1](#thm1){reference-type="ref" reference="thm1"}.* Let $\varepsilon = \lambda_{\underline{L}}/100$ and choose $0<\epsilon \le \varepsilon^2_{\underline{L}, f, J_0}$ such that $\epsilon|f|_{C^0} < \varepsilon/4$. Fix $\varphi_{+} \in Ham_{\underline{L}}(\mathbb{D}, \omega)$. Let $H_+$ be a compactly supported Hamiltonian function generating $\varphi_+$. Fix $({\mathbf{y}}_+, \hat{\mathbf{y}}_+)$, where $\mathbf{y}_+(t)= \varphi^t_{\underline{H}_+}(\mathbf{x})$ for $\mathbf{x} \in Crit(f_{\underline{L}})$. Let $$a=\mathcal{A}_{\underline{H}^{\epsilon}_+} ({\mathbf{y}}_+, \hat{\mathbf{y}}_+) -5\varepsilon \mbox{ and } b=\mathcal{A}_{\underline{H}^{\epsilon}_+} ({\mathbf{y}}_+, \hat{\mathbf{y}}_+) + 5\varepsilon.$$ By Lemma [Lemma 9](#lem4){reference-type="ref" reference="lem4"}, we have $${CF}^{(a, b)}(\underline{L}, \underline{H}^{\epsilon}_+) = \mathbb{Z}_2 \{ ({\mathbf{y}}', \hat{\mathbf{y}}') \vert |\mathcal{A}_{\underline{H}^{\epsilon}_+} ({\mathbf{y}}_+, \hat{\mathbf{y}}_+) -\mathcal{A}_{\underline{H}^{\epsilon}_+} ({\mathbf{y}}', \hat{\mathbf{y}}') | \le 2\epsilon |f|_{C^0} \}.$$ The $\lambda_{\underline{L}}$-admissibility conditions also imply that $\partial_{00}=0$ on $CF^{(a, b)}(\underline{L}, \underline{H}^{\epsilon}_+)$. By Lemma [Lemma 9](#lem4){reference-type="ref" reference="lem4"}, we have $$\widehat{HF}^{(a, b)}(\underline{L}, H^{\epsilon}_+) = \mathbb{Z}_2 \{ ({\mathbf{y}}', \hat{\mathbf{y}}') \vert |\mathcal{A}_{\underline{H}^{\epsilon}_+} ({\mathbf{y}}_+, \hat{\mathbf{y}}_+) -\mathcal{A}_{\underline{H}^{\epsilon}_+} ({\mathbf{y}}', \hat{\mathbf{y}}') | \le 2\epsilon |f|_{C^0} \}=\mathbb{Z}_2^{2^k}.$$ By the same argument, we have $\widehat{HF}^{(a+2\varepsilon, b+2\varepsilon)}(\underline{L}, H^{\epsilon}_+) = \mathbb{Z}_2^{2^k}$. Set $\varepsilon_{\underline{L}}: = \varepsilon/3 =\lambda_{\underline{L}} / 300$. Let $\varphi_- \in Ham_{\underline{L}}(\mathbb{D}, \omega)$ be such that $d_{Hofer}(\varphi_+, \varphi_-)< \varepsilon_{\underline{L}}/ k$. 
For any $0<\delta< \varepsilon/ 6k$, we can find a compactly supported Hamiltonian function $H_-$ generating $\varphi_-$ such that $$|H_+ - H_-|_{(1, \infty)} \le d_{Hofer}(\varphi_+, \varphi_-) + \delta < \varepsilon/2k.$$ By Lemma [Lemma 7](#lem1){reference-type="ref" reference="lem1"}, we have $$\mathbb{Z}_2^{2^k}= \widehat{HF}^{(a, b)}(\underline{L}, H^{\epsilon}_+) \xrightarrow{\Phi_0(\underline{H}^{\epsilon}_+, \underline{H}^{\epsilon}_-)} \widehat{HF}^{(a+\varepsilon, b+\varepsilon)}(\underline{L}, H^{\epsilon}_-) \xrightarrow{\Phi_0(\underline{H}^{\epsilon}_-, \underline{H}^{\epsilon}_+)} \widehat{HF}^{(a+2\varepsilon, b+2\varepsilon)}(\underline{L}, {H}^{\epsilon}_+) =\mathbb{Z}_2^{2^k}.$$ By the filtered version of Lemma [Lemma 4](#lem2){reference-type="ref" reference="lem2"}, we have $\Phi_0(\underline{H}^{\epsilon}_+, \underline{H}^{\epsilon}_-)\circ \Phi_0(\underline{H}^{\epsilon}_-, \underline{H}^{\epsilon}_+) = \Phi_0(\underline{H}^{\epsilon}_+, \underline{H}^{\epsilon}_+)$. We can use $({\underline{H}^{\epsilon}_+}, J_t)$ to define $\Phi_0(\underline{H}^{\epsilon}_+, \underline{H}^{\epsilon}_+)$. For index reasons, the $X_{\underline{H}^{\epsilon}_+}$-perturbed holomorphic strips contributing to $\phi_{00}({\underline{H}^{\epsilon}_+}, J_t)$ must be the trivial strips. Therefore, $\Phi_0({\underline{H}^{\epsilon}_+}, \underline{H}^{\epsilon}_+)$ is the identity. As a result, $\Phi_0(\underline{H}^{\epsilon}_+, \underline{H}^{\epsilon}_-)$ is an injection. In particular, we obtain an $X_{\underline{H}_s}$-strip $u$ such that $\lim_{s \to \pm \infty} u(s, t) = \mathbf{y}_{\pm}(t)$ and $u \cdot \Delta =u\cdot D_{k+1}=0$. Recall that we chose $J_{s,t} =Sym^k j$ and $\underline{H}_s$ to be constant near $\Delta$ and $D_{k+1}$; hence $u \cdot \Delta = u \cdot D_{k+1} =0$ implies that $u$ is disjoint from $\Delta$ and $\{z_{k+1}'\} \times Sym^{k-1} \mathbb{S}^2$, where $z_{k+1}'$ is sufficiently close to $z_{k+1}.$ Hence, $u$ lies inside $Sym^k (\mathbb{S}^2 -U_{z_{k+1}})$, where $U_{z_{k+1}}$ is a sufficiently small neighborhood of $z_{k+1}$. Let $\mathbf{p}_{\pm}$ be paths in $Sym^k \underline{L}$ connecting $\mathbf{y}_{\pm}(0)$ and $\mathbf{y}_{\pm}(1)$. Then we have braids $\bar{\gamma}_{\pm} = \mathbf{y}_{\pm} \cup (-\mathbf{p}_{\pm}).$ To produce a homotopy, we need to cap off $u \cup \mathbf{p}_+ \cup (-\mathbf{p}_-)$. Recall that $\pi_2(X, Sym^k \underline{L})$ is freely generated by $\{u_i\}_{i=1}^{k+1}$ plus some torsion part. Also, we have the following facts (see Corollary 4.10 of [@CHMSS]): 1. $\{\partial u_i\}_{i=1}^k$ generates $\pi_1(Sym^k \underline{L})$. 2. For $1 \le i \le k$, we have $u_i \cdot \Delta =0$. 3. $u_i \cdot D_j =\delta_{ij}$. In particular, $u_i \cdot D_{k+1}=0$ for any $1\le i\le k$. Identify $\mathbb{S}^2 -U_{z_{k+1}}$ with $\mathbb{D}$ topologically. We claim that there is a disk $u': (\mathbb{D}, \partial \mathbb{D}) \to (Sym^k \mathbb{D}, Sym^k \underline{L})$ such that $\partial u' =\mathbf{p}_{+} \cup \partial u \cup (-\mathbf{p}_{-})$ and $u'$ is disjoint from $\Delta$. Note that the claim is not a consequence of the Whitney trick because $\pi_1(X - \Delta) \to \pi_1(X) =0$ is not injective. Suppose that $[\mathbf{p}_{+} \cup \partial u \cup (-\mathbf{p}_{-})] = \Pi_{j} (\partial u_{i_j})^{\pm 1}$ in $\pi_1(Sym^k \underline{L})$, where $1\le i_j \le k$. Here we may have $i_j = i_{j'}$ for $j \ne j'.$ Let $\varphi_i: \mathbb{D} \to B_i$ be a diffeomorphism for $1\le i \le k$. 
Define maps $v_{i_j}(z) =[\varphi_{i_j}(z), z^1_{i_j}, \dots, z_{i_j}^{k-1}]$, where $z^1_{i_j}, \dots, z_{i_j}^{k-1}$ are fixed points in the circles other than $L_{i_j}$. Then $v_{i_j}: (\mathbb{D}, \partial \mathbb{D}) \to (Sym^k \mathbb{D}, Sym^k \underline{L})$ is a disk representing the class $u_{i_j}$. We also require that $[z^1_{i_j}, \dots, z_{i_j}^{k-1}]$ are distinct for different $j$. Then the $v_{i_j}$ are distinct maps. Concatenating these disks produces a disk $\Pi_j v^{\pm 1}_{i_j} : (\mathbb{D}, \partial \mathbb{D}) \to (Sym^k \mathbb{D}, Sym^k \underline{L})$ such that $\partial ([\Pi_j v^{\pm 1}_{i_j}] )= [\mathbf{p}_{+} \cup \partial u \cup (-\mathbf{p}_{-})]$. Note that the $v_{i_j}$ are disjoint from $\Delta$, and hence so is $\Pi_j v^{\pm 1}_{i_j}$. Take a homotopy $v'$ between $\partial ( \Pi_j v^{\pm 1}_{i_j} )$ and $\mathbf{p}_{+} \cup \partial u \cup (-\mathbf{p}_{-})$ in $Sym^k \underline{L}$. Gluing $\Pi_j v^{\pm 1}_{i_j}$ and $v'$ along $\partial ( \Pi_j v^{\pm 1}_{i_j} )$, we obtain a disk $u'$ that satisfies our requirements. Gluing $u$ and $u'$ along $\mathbf{p}_{+} \cup \partial u \cup (-\mathbf{p}_{-})$, we get a cylinder $\bar{u}: [- 1, 1]\times S^1 \to Sym^k \mathbb{D}$ (after reparametrization) such that $\bar{u}(\pm 1, t) = \bar{ \gamma}_{\pm}(t)$ and $\bar{u}$ is disjoint from $\Delta$. This gives a homotopy between $\bar{\gamma}_+$ and $\bar{\gamma}_-$. Therefore, $b(\varphi_+, \underline{L}) = b(\varphi_-, \underline{L}).$ ◻

M.R.R. Alves and M. Meiwes, Braid stability and the Hofer metric, arXiv:2112.11351, 2021.

P. Biran and O. Cornea, Quantum structures for Lagrangian submanifolds, arXiv:0708.4221, 2018.

D. Cristofaro-Gardiner, V. Humilière, C. Mak, S. Seyfaddini, and I. Smith, Quantitative Heegaard Floer cohomology and the Calabi invariant, Forum of Mathematics, Pi 10 (2022), E27. doi:10.1017/fmp.2022.18

G. Chen, Closed-open morphisms on periodic Floer homology, arXiv:2111.11891, 2021.

H. Hofer, On the topological properties of symplectic maps, Proc. R. Soc. Edinb., Sect. A, Math. **115** (1990), no. 1-2, 25--38.

M. Hutchings, An index inequality for embedded pseudoholomorphic curves in symplectizations, J. Eur. Math. Soc. **4** (2002), 313--361.

M. Hutchings, The embedded contact homology index revisited, New perspectives and challenges in symplectic field theory, 263--297, CRM Proc. Lecture Notes 49, Amer. Math. Soc., 2009.

M. Hutchings, Braid stability for periodic orbits of area-preserving surface diffeomorphisms, arXiv:2303.07133, 2023.

C. Kassel and V. Turaev, Braid groups. With the graphical assistance of Olivier Dodane, Grad. Texts in Math. 247, Springer, New York, 2008.

M. Khanevsky, A gap in the Hofer metric between integral and autonomous Hamiltonian diffeomorphisms of surfaces, arXiv:2205.03492, 2022.

F. Morabito, Link Floer homology and a Hofer pseudometric on braids, arXiv:2301.08105, 2023.

Y.-G. Oh, Floer cohomology, spectral sequences, and the Maslov class of Lagrangian embeddings, Internat. Math. Res. Notices 1996, no. 7, 305--346.

Shenzhen University [`ghchen@szu.edu.cn`](mailto: ghchen@szu.edu.cn)
arxiv_math
{ "id": "2309.03845", "title": "Stability of the braid types defined by the symplecticmorphisms\n preserving a link", "authors": "Guanheng Chen", "categories": "math.SG math.GT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | For the Lagrange spectrum and other applications, we determine the smallest accumulation point of binary sequences that are maximal in their shift orbits. This problem is trivial for the lexicographic order, and its solution is the fixed point of a substitution for the alternating lexicographic order. For orders defined by cylinders, we show that the solutions are $S$-adic sequences, where $S$ is a certain infinite set of substitutions that includes Sturmian morphisms. We also consider a similar problem for symmetric ternary shifts, which is applicable to the multiplicative version of the Markoff--Lagrange spectrum. address: - Institute of Mathematics / Research Core for Mathematical Sciences, University of Tsukuba, Tsukuba, Japan - Université Paris Cité, CNRS, IRIF, F-75006 Paris, France author: - Hajime Kaneko - Wolfgang Steiner bibliography: - order.bib title: Markoff--Lagrange spectrum of one-sided shifts --- # Introduction Let $A^\infty$ be the set of infinite words on a finite alphabet $A$, equipped with a total order $\le$ and the ultrametric $d$ given by $d(a_1a_2\cdots,b_1b_2\cdots) = 2^{-\min\{n\ge 1\,:\,a_n\ne b_n\}}$ for $a_1a_2\cdots \ne b_1b_2\cdots$. We study properties of the set of *$sup$-words* $$\mathcal{M}_\le := \{s_\le(\mathbf{a}) \,:\, \mathbf{a}\in A^\infty\}, \quad \mbox{with} \quad s_\le(a_1a_2\cdots) := \sup\nolimits_{n\ge1} a_na_{n+1}\cdots,$$ for a large class of orders on $A^\infty$. In particular, we are interested in the smallest accumulation point $\mathbf{m}_\le$ of $\mathcal{M}_\le$. For the lexicographic order $\le_{\mathrm{lex}}$, words in $\mathcal{M}_{\le_{\mathrm{lex}}}$ occur as (quasi-greedy) $\beta$-expansions of 1 for real bases $\beta > 1$ (see [@Parry60]), with $\mathbf{m}_{\le_{\mathrm{lex}}} = 1000\cdots$ being the limit as $\beta \to 1$. For the alternating lexicographic order $\le_{\mathrm{alt}}$, most elements of $\mathcal{M}_{\le_{\mathrm{alt}}}$ are $(-\beta)$-expansions of $\frac{-\beta}{\beta+1}$ in the sense of [@Ito-Sadahiro09; @Steiner13], with $\mathbf{m}_{\le_{\mathrm{alt}}}$ being the limit as $\beta \to 1$. An image of $\mathcal{M}_{\le_{\mathrm{alt}}}$ occurs in a multiplicative version of the (Markoff--)Lagrange spectrum w.r.t. an integer base, which is defined in terms of well approximable numbers [@Dubickas06; @Akiyama-Kaneko21]; see Proposition [Proposition 10](#p:Lbeta){reference-type="ref" reference="p:Lbeta"}. Below the image of $\mathbf{m}_{\le_{\mathrm{alt}}}$, which is the fixed point of a substitution [@Allouche83; @Allouche-Cosnard83; @Dubickas07], we find the discrete part of this spectrum. The classical Markoff and Lagrange spectra are given by two-sided versions of $\mathcal{M}_{\le_{\mathrm{alt}}}$ (and the Lagrange spectrum is defined by $\limsup$ instead of $\sup$). The unimodal order $\le_{\mathrm{uni}}$ yields kneading sequences of unimodal maps [@Milnor-Thurston88], and $\mathbf{m}_{\le_{\mathrm{uni}}}$ is the fixed point of the period-doubling (or Feigenbaum) substitution. Sup-words are also closely related to infinite Lyndon words, which are defined by $a_1 a_2 \cdots < a_n a_{n+1} \cdots$ for all $n \ge 2$; see e.g. [@Postic-Zamboni20]. 
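To make the notion of a sup-word concrete, the following minimal Python sketch (our code and naming, not part of the paper) computes $s_{\le_{\mathrm{lex}}}(\mathbf{a})$ for a purely periodic word $\mathbf{a}=(b)^\infty$: every shift of $(b)^\infty$ is generated by a rotation of the block $b$, so the supremum is attained at the lexicographically largest rotation.

```python
# Minimal sketch (not from the paper): the sup-word of a purely periodic
# binary word under the lexicographic order.  A purely periodic word
# (b)^infty is represented by its period block b; its shifts are the
# rotations of b, so sup_{n>=1} a_n a_{n+1}... is the periodic word
# generated by the lexicographically largest rotation.

def sup_word_lex(block):
    """Return the period block of s_lex((block)^infty)."""
    rotations = [block[i:] + block[:i] for i in range(len(block))]
    # two rotations of equal length compare like the periodic words
    # they generate, so a plain string comparison suffices
    return max(rotations)

# (10^n)^infty is maximal in its shift orbit, hence lies in M_lex;
# these words accumulate at 1000... = m_lex as n grows.
assert sup_word_lex("010") == "100"
print([sup_word_lex("1" + "0" * n) for n in range(1, 5)])
```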
We consider orders satisfying that $$\label{e:cylinderorder} \mathbf{a}\le \mathbf{b}\le \mathbf{c}\quad \mbox{implies} \quad d(\mathbf{a},\mathbf{b}) \le d(\mathbf{a},\mathbf{c}) \quad \mbox{for all $\mathbf{a}, \mathbf{b}, \mathbf{c}\in A^\infty$}$$ (note that $d(\mathbf{a},\mathbf{b}) \le d(\mathbf{a},\mathbf{c})$ is equivalent to $d(\mathbf{b},\mathbf{c}) \le d(\mathbf{a},\mathbf{c})$ by the strong triangle inequality), and we call them *cylinder orders* because the elements of each cylinder of words are contiguous. Here, the *cylinder* (of length $n$) given by $a_1\cdots a_n \in A^n$ is $$[a_1\cdots a_n] := \{a'_1a'_2\cdots \in A^\infty \,:\, a'_1\cdots a'_n= a_1\cdots a_n\},$$ and we write $[a_1\cdots a_n] < [b_1\cdots b_n]$ if $(a_1\cdots a_n)^\infty < (b_1\cdots b_n)^\infty$ (or, equivalently, $\mathbf{a}< \mathbf{b}$ for all $\mathbf{a}\in [a_1\cdots a_n]$, $\mathbf{b}\in [b_1\cdots b_n]$). Note that [\[e:cylinderorder\]](#e:cylinderorder){reference-type="eqref" reference="e:cylinderorder"} is equivalent to the condition (\*) of [@Postic-Zamboni20], and it includes generalized lexicographic orders as considered in [@Reutenauer06]. In Section [2](#sec:mark-lagr-spectr){reference-type="ref" reference="sec:mark-lagr-spectr"}, we give basic properties of $\mathcal{M}_{\le}$. Section [3](#sec:small-accum-point){reference-type="ref" reference="sec:small-accum-point"} contains our main result, an algorithm for determining the smallest accumulation point $\mathbf{m}_\le$ of $\mathcal{M}_\le$, for any cylinder order $(A^\infty, \le)$. We also give a complete description, in terms of $S$-adic sequences, of all words $\mathbf{m}_\le$ obtained by cylinder orders and of the discrete part of $\mathcal{M}_{\le}$, i.e., all its elements below $\mathbf{m}_\le$. Since the words $\mathbf{m}_\le$ have linear factor complexity, real numbers having such $\beta$-expansions are in $\mathbb{Q}(\beta)$ or transcendental, for all Pisot or Salem bases $\beta \ge 2$. In Section [4](#sec:examples){reference-type="ref" reference="sec:examples"}, we determine $\mathbf{m}_\le$ for some classical examples of cylinder orders and show that all maximal Sturmian sequences can occur. We consider cylinder orders on symmetric alphabets in Section [5](#sec:symm-shift-spac){reference-type="ref" reference="sec:symm-shift-spac"} and apply our results to the multiplicative Lagrange spectrum and other problems in Section [6](#sec:exampl-orders-symm){reference-type="ref" reference="sec:exampl-orders-symm"}. # Markoff--Lagrange spectrum {#sec:mark-lagr-spectr} We first show that the set of periodic words in $\mathcal{M}_\le$ is dense, and that $\mathcal{M}_\le$ is equal to $$\mathcal{L}_\le := \{\ell_\le(\mathbf{a}) \,:\, \mathbf{a}\in A^\infty\}, \quad \mbox{with} \quad \ell_\le(a_1a_2\cdots) := \limsup\nolimits_{n\to\infty} a_na_{n+1}\cdots.$$ Note that $\mathcal{M}_\le$ and $\mathcal{L}_\le$ can be seen as generalizations of the *Markoff* and *Lagrange spectrum* respectively. In the classical case, these spectra are defined by two-sided sequences, and the Lagrange spectrum is a strict subset of the Markoff spectrum [@Freiman68; @Cusick-Flahive89]. **Theorem 1**. *Let $\le$ be a cylinder order on $A^\infty$. Then $$\mathcal{L}_\le = \mathcal{M}_\le = \mathrm{cl}\{s_\le(\mathbf{a}) \,:\, \mathbf{a}\in A^\infty\, \mbox{purely periodic}\}.$$ In particular, the set $\mathcal{M}_\le$ is closed.* In the proof of the theorem, we use the following characterization of cylinder orders. **Lemma 2**. 
*An order $\le$ on $A^\infty$ is a cylinder order if and only if $$\label{e:cylinderorder2} \mathbf{a}\le \mathbf{b}\ \ \mbox{implies} \ \ \mathbf{a}' \le \mathbf{b}' \quad \mbox{for all $\mathbf{a},\mathbf{b},\mathbf{a}', \mathbf{b}' \in A^\infty$ with $\max(d(\mathbf{a},\mathbf{a}'), d(\mathbf{b},\mathbf{b}')) < d(\mathbf{a},\mathbf{b})$}.$$* *Proof.* Let $\le$ be a cylinder order, and $\mathbf{a},\mathbf{b},\mathbf{a}',\mathbf{b}' \in A^\infty$ such that $\mathbf{a}\le \mathbf{b}$, $d(\mathbf{a},\mathbf{a}') < d(\mathbf{a},\mathbf{b})$, $d(\mathbf{b},\mathbf{b}') < d(\mathbf{a},\mathbf{b})$. Then both $\mathbf{a}\le \mathbf{b}\le \mathbf{a}'$ and $\mathbf{b}' \le \mathbf{a}' \le \mathbf{b}$ are impossible by [\[e:cylinderorder\]](#e:cylinderorder){reference-type="eqref" reference="e:cylinderorder"}, using that $d(\mathbf{a}',\mathbf{b}') = d(\mathbf{a},\mathbf{b})$ by the strong triangle inequality. This implies that $\mathbf{b}> \mathbf{a}'$ and thus $\mathbf{b}' > \mathbf{a}'$, i.e., [\[e:cylinderorder2\]](#e:cylinderorder2){reference-type="eqref" reference="e:cylinderorder2"} holds. Let now $\le$ be an order satisfying [\[e:cylinderorder2\]](#e:cylinderorder2){reference-type="eqref" reference="e:cylinderorder2"}. Then $\mathbf{a}\le \mathbf{b}$ and $d(\mathbf{a},\mathbf{c}) < d(\mathbf{a},\mathbf{b})$ imply that $\mathbf{c} < \mathbf{b}$, thus $\mathbf{a}\le \mathbf{b}\le \mathbf{c}$ with $d(\mathbf{a},\mathbf{c}) < d(\mathbf{a},\mathbf{b})$ is impossible, i.e., [\[e:cylinderorder\]](#e:cylinderorder){reference-type="eqref" reference="e:cylinderorder"} holds. ◻ *Proof of Theorem [Theorem 1](#t:ML){reference-type="ref" reference="t:ML"}.* We show first that $\mathcal{M}_\le$ is the closure of $s_\le(\mathbf{a}')$ with purely periodic $\mathbf{a}' \in A^\infty$. Since $s_\le(s_\le(\mathbf{a})) = s_\le(\mathbf{a})$, it suffices to consider $\mathbf{a}\in A^\infty$ with $s_\le(\mathbf{a}) = \mathbf{a}$. We have $(a_1 \cdots a_n)^\infty \in \mathcal{M}_\le$ whenever $$\label{e:rn} 2^{-n} d(\mathbf{a}, a_{n+1}a_{n+2}\cdots) < 2^{-i} d(\mathbf{a}, a_{i+1}a_{i+2}\cdots) \quad \mbox{for all}\ 1 \le i < n.$$ Indeed, we have, for all $1 \le i < n$, that $a_{i+1} a_{i+2} \cdots \le \mathbf{a}$ (because $s_\le(\mathbf{a}) = \mathbf{a}$) and $$\begin{aligned} d(\mathbf{a}, a_{i+1} a_{i+2} \cdots) & > 2^{i-n} d(\mathbf{a}, a_{n+1}a_{n+2}\cdots) = 2^{i-n} d((a_1\cdots a_n)^\infty, a_{n+1}a_{n+2}\cdots) \\ & = 2^i d((a_1\cdots a_n)^\infty, \mathbf{a}) = d(a_{i+1} \cdots a_n (a_1\cdots a_n)^\infty, a_{i+1} a_{i+2} \cdots), \end{aligned}$$ hence [\[e:cylinderorder2\]](#e:cylinderorder2){reference-type="eqref" reference="e:cylinderorder2"} gives that $a_{i+1} \cdots a_n (a_1\cdots a_n)^\infty \le (a_1\cdots a_n)^\infty$, thus $(a_1 \cdots a_n)^\infty \in \mathcal{M}_\le$. Since $\lim_{n\to\infty} 2^{-n} d(\mathbf{a}, a_{n+1}a_{n+2}\cdots) = 0$, we have either $d(\mathbf{a}, a_{n+1}a_{n+2}\cdots) = 0$ for some $n \ge 1$, i.e., $\mathbf{a}$ is purely periodic, or infinitely many $n$ such that [\[e:rn\]](#e:rn){reference-type="eqref" reference="e:rn"} holds and thus $(a_1\cdots a_n)^\infty \in \mathcal{M}_\le$ infinitely often. Therefore, $\mathcal{M}_\le \subseteq \mathrm{cl}\{s_\le(\mathbf{a}) : \mathbf{a}\in A^\infty\, \mbox{purely periodic}\}$. For the opposite inclusion, we have to show that $\mathcal{M}_\le$ is closed. Consider $\mathbf{a}= \lim_{k\to\infty} \mathbf{a}^{(k)}$ with $\mathbf{a}^{(k)} \in \mathcal{M}_\le$. 
If $a_{n+1} a_{n+2} \cdots > \mathbf{a}$ for some $n \ge 1$, then we also had $a_{n+1}^{(k)} a_{n+2}^{(k)} \cdots > \mathbf{a}^{(k)}$ for all large enough $k$, contradicting that $\mathbf{a}^{(k)} \in \mathcal{M}_\le$. This implies that $s_\le(\mathbf{a}) = \mathbf{a}$, i.e., $\mathbf{a}\in \mathcal{M}_\le$. Since $s_\le(\ell_\le(\mathbf{a})) = \ell_\le(\mathbf{a})$ for all $\mathbf{a}\in A^\infty$, we have $\mathcal{L}_\le \subseteq \mathcal{M}_\le$. For the opposite inclusion, let $\mathbf{a}= \lim_{k\to\infty} \mathbf{a}^{(k)}$ for some purely periodic words $\mathbf{a}^{(k)} \in \mathcal{M}_\le$, let $(p_k)$ be an increasing sequence satisfying $\mathbf{a}^{(k)} = (a_1^{(k)}\cdots a_{p_k}^{(k)})^\infty$, and let $\mathbf{b}= a_1^{(1)}\cdots a_{p_1}^{(1)} a_1^{(2)}\cdots a_{p_2}^{(2)} \cdots$. Then $\ell_\le(\mathbf{b}) \ge \mathbf{a}$. If $a_{i+1}^{(k)}\cdots a_{p_k}^{(k)} a_1^{(k+1)}\cdots a_{p_{k+1}}^{(k+1)} \cdots > \mathbf{a}$ for some $k \ge 1$, $0 \le i < p_k$, then $$\label{e:dab} \delta_{i,k} := d(a_{i+1}^{(k)}\cdots a_{p_k}^{(k)} a_1^{(k+1)}\cdots a_{p_{k+1}}^{(k+1)} \cdots, \mathbf{a}) \le \max(d(\mathbf{a}^{(k)}, \mathbf{a}), d(\mathbf{a}^{(k)}, \mathbf{a}^{(k+1)}), 2^{-p_k}).$$ Indeed, for $1 \,{\le}\, j \,{\le}\, p_k{-}i$, we cannot have $[a_{i+1}^{(k)} \cdots a_{i+j}^{(k)}] \,{>}\, [a_1 \cdots a_j] \,{=}\, [a_1^{(k)} \cdots a_j^{(k)}]$ because this contradicts $\mathbf{a}^{(k)} \,{\in}\, \mathcal{M}_\le$, thus $\delta_{i,k} \,{=}\, 2^{-j}$ implies $d(\mathbf{a}^{(k)}, \mathbf{a}) \,{\ge}\, 2^{-j}$; similarly, for $p_k{-}i \,{<}\, j \,{\le}\, 2p_k{-}i$, $$[a_{i+1}^{(k)} \cdots a_{p_k}^{(k)}a_1^{(k)} \cdots a_{i+j-p_k}^{(k)}] = [a_{i+1}^{(k)} \cdots a_{p_k}^{(k)}a_1^{(k+1)} \cdots a_{i+j-p_k}^{(k+1)}] > [a_1 \cdots a_j] = [a_1^{(k)} \cdots a_j^{(k)}]$$ is impossible, thus $\delta_{i,k} = 2^{-j}$ implies $d(\mathbf{a}^{(k)}, \mathbf{a}^{(k+1)}) \ge 2^{p_k-i-j}$ or $d(\mathbf{a}^{(k)}, \mathbf{a}) \ge 2^{-j}$; hence, we have $\delta_{i,k} < 2^{i-2p_k}$ or $\delta_{i,k} \le 2^{i-p_k} d(\mathbf{a}^{(k)}, \mathbf{a}^{(k+1)})$ or $\delta_{i,k} \le d(\mathbf{a}^{(k)}, \mathbf{a})$, which implies [\[e:dab\]](#e:dab){reference-type="eqref" reference="e:dab"}. Since the right hand side of [\[e:dab\]](#e:dab){reference-type="eqref" reference="e:dab"} tends to 0 as $k \to \infty$, we have $\ell_\le(\mathbf{b}) \le \mathbf{a}$, thus $\mathbf{a}= \ell_\le(\mathbf{b}) \in \mathcal{L}_\le$. ◻ # Smallest accumulation point of $\mathcal{M}_\le$ {#sec:small-accum-point} For determining the smallest accumulation point $\mathbf{m}_{\le}$ of $\mathcal{M}_\le$ for a cylinder order $(A^\infty,\le)$, we can restrict to two-letter alphabets, w.l.o.g., $A = \{0,1\}$. Indeed, if $\{0,1\} \subseteq A$, $[0] < [1]$, then $\mathcal{M}_\le$ has the accumulation point $10^\infty$ (because $(10^n)^\infty \in \mathcal{M}_\le$ for all $n \ge 1$), and we clearly have $s_\le(\mathbf{a}) > 10^\infty$ if $\mathbf{a}\in A^\infty$ contains a letter $a_n \in A$ with $[a_n] > [1]$. We use substitutions (also called word morphisms) and limit words (or $S$-adic sequences). Let $A^*$ be the monoid of finite words over the alphabet $A$, with concatenation as operation. A *substitution* $\sigma: A^* \to A^*$ satisfies $\sigma(vw) = \sigma(v) \sigma(w)$ for all $v,w \in A^*$ and is extended naturally to $A^\infty$; it suffices to give $\sigma(a)$ for $a \in A$ to define $\sigma$. 
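As a concrete illustration of the preceding paragraph, here is a small Python sketch (our code, with names of our choosing) of a substitution given by its images on the letters, iterated on a seed word; the substitution $0\mapsto 1$, $1\mapsto 100$ used in the example is the map $\tau_{0,2}$ from the family $S$ introduced below.

```python
# Sketch (our notation): a substitution is determined by the images of
# the letters and acts on words by concatenation, sigma(vw) = sigma(v)sigma(w).

def substitute(images, word):
    """Apply the substitution given by the dict `images` to a finite word."""
    return "".join(images[a] for a in word)

def iterate(images, seed, n):
    """Return sigma^n(seed), the n-fold application of the substitution."""
    word = seed
    for _ in range(n):
        word = substitute(images, word)
    return word

# tau_{0,2}: 0 -> 1, 1 -> 100 (an element of the family S defined below);
# iterating it on the seed 1 yields longer and longer prefixes of its
# fixed point 100111001001001110011...
tau02 = {"0": "1", "1": "100"}
print(iterate(tau02, "1", 5))
```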
For a sequence $\boldsymbol{\sigma}= (\sigma_n)_{n\ge1}$ of substitutions on the alphabet $A$ and an infinite word $\mathbf{a}\in A^\infty$, the *limit word* is $$\boldsymbol{\sigma}(\mathbf{a}) := \lim_{n\to\infty} \sigma_{[1,n]}(\mathbf{a}),$$ if this limit exists; we use the notation $\sigma_{[1,n]} := \sigma_1 \circ \sigma_2 \circ \cdots \circ \sigma_n$ for $n \ge 0$, with $\sigma_{[1,0]}$ being the identity map. For a set of substitutions $S$, we denote the monoid generated by the composition of substitutions in $S$ by $S^*$. We use the set of substitutions $$S = \{\tau_{j,k} \,:\, 0 \le j < k\}, \quad \mbox{with} \quad \begin{array}{rl}\tau_{j,k}: & 0 \mapsto 10^j, \\[.5ex] & 1 \mapsto 10^k.\end{array}$$ Our main result is the following characterization of the smallest accumulation point $\mathbf{m}_\le$ and the discrete part of $\mathcal{M}_{\le}$ for cylinder orders $\le$. **Theorem 3**. *Let $\mathbf{m}\in \{0,1\}^\infty$. Then $\mathbf{m}= \mathbf{m}_\le$ for some cylinder order $\le$ on $\{0,1\}^\infty$ with $0^\infty < 1^\infty$ if and only if $\mathbf{m}= \sigma(10^\infty)$ for some $\sigma \in S^*$ or $\mathbf{m}= \boldsymbol{\sigma}(1^\infty)$ for some $\boldsymbol{\sigma}\in S^\infty$.* *If $\mathbf{m}_\le = \boldsymbol{\sigma}(1^\infty)$, $\boldsymbol{\sigma}= (\sigma_n)_{n\ge1} \in S^\infty$, then $$\label{e:discrete} \{\mathbf{a}\in \mathcal{M}_\le \,:\, \mathbf{a}< \mathbf{m}_\le\} = \{(\sigma_{[1,n]}(0))^\infty \,:\, n \ge 0\}.$$* *If $\mathbf{m}_\le = \sigma_{[1,h]}(10^\infty)$, $\sigma_1,\dots,\sigma_h \in S$, $h \ge 0$, then $$\label{e:discrete2} \begin{aligned} \{\mathbf{a}\in \mathcal{M}_\le \,:\, \mathbf{a}< \mathbf{m}_\le\} & = \{(\sigma_{[1,n]}(0))^\infty \,:\, 0 \le n \le h\} \\ & \quad \cup \{\sigma_{[1,h]}((10^j)^\infty) \,:\, j \ge 0,\, \sigma_{[1,h]}((10^j)^\infty) < \mathbf{m}_\le\}, \end{aligned}$$ and there is at most one $j \ge 0$ such that $\sigma_{[1,h]}((10^j)^\infty) < \mathbf{m}_\le$.* The following proposition constitutes the core of the proof of Theorem [Theorem 3](#t:main){reference-type="ref" reference="t:main"} and provides an algorithm for calculating $\mathbf{m}_\le$. **Proposition 4**. *Let $\le$ be a cylinder order on $\{0,1\}^\infty$ with $[0] < [1]$.* *If $[10^j1] < [10^j0]$ holds for at most one $j \ge 0$, then $\mathbf{m}_\le = 10^\infty$, and $\mathbf{a}< \mathbf{m}_\le$ implies that $\mathbf{a}= 0^\infty$ or $\mathbf{a}= (10^j)^\infty$ (in case $[10^j1] < [10^j0]$).* *Otherwise, we have $\mathbf{m}_\le = \tau_{j,k}(\mathbf{m}_\preceq)$ for the cylinder order $\preceq$ on $\{0,1\}^\infty$ defined by $\mathbf{a}\preceq \mathbf{b}$ if $\tau_{j,k}(\mathbf{a}) \le \tau_{j,k}(\mathbf{b})$, where $j,k$ are minimal such that $0 \le j < k$, $[10^j1] < [10^j0]$, $[10^k1] < [10^k0]$; we have $[0] \prec [1]$ and $\{\mathbf{a}\in \mathcal{M}_\le \,:\, \mathbf{a}< \mathbf{m}_\le\} = \{0^\infty\} \cup \{\tau_{j,k}(\mathbf{a}) \,:\, \mathbf{a}\in \mathcal{M}_\preceq,\, \mathbf{a}\prec \mathbf{m}_\preceq\}$.* *Proof.* We have $\mathbf{m}_\le \in [1]$ because $\mathcal{M}_\le \cap [0] = \{0^\infty\}$, and $\mathbf{m}_\le \le 10^\infty$ because $(10^n)^\infty \in \mathcal{M}_\le$ for all $n \ge 0$. If $[10^j0] < [10^j1]$ for all $j \ge 0$, i.e., $\min\, [1] = 10^\infty$, then $\mathbf{m}_\le = 10^\infty$. Otherwise, let $j$ be minimal such that $[10^j1] < [10^j0]$, i.e., $\min [1] \in [10^j]$. Then $\mathcal{M}_\le \cap [10^j1] = \{(10^j)^\infty\}$ implies that $\mathbf{m}_\le \in [10^j0]$. 
If $[10^k0] < [10^k1]$ for all $k > j$, i.e., $\min\, [10^j0] = 10^\infty$, then $\mathbf{m}_\le = 10^\infty$. Otherwise, let $k > j$ be minimal such that $[10^k1] < [10^k0]$, i.e., $\min [10^j0] \in [10^k1]$. We have $\mathbf{m}_\le \le \tau_{j,k}(10^\infty)$ because $\tau_{j,k}((10^n)^\infty) \in \mathcal{M}_\le$ for all $n \ge 0$, thus $\mathbf{m}_\le \in [10^k1]$. Each word in $\mathcal{M}_\le \cap [10^k1]$ is a concatenation of blocks $10^j$ and $10^k$, thus $\mathcal{M}_\le \cap [10^k1] \subseteq \tau_{j,k}([1])$. This proves that $\mathbf{m}_\le = \tau_{j,k}(\mathbf{m}_\preceq)$, where $\mathbf{a}\preceq \mathbf{b}$ if $\tau_{j,k}(\mathbf{a}) \le \tau_{j,k}(\mathbf{b})$; note that $\mathbf{m}_\preceq \in [1]$ because $[0] \prec [1]$. Since $\tau_{j,k}([w0]) \subseteq [\tau_{j,k}(w0)1]$ and $\tau_{j,k}([w1]) \subseteq [\tau_{j,k}(w0)0]$ for all $w \in \{0,1\}^*$, $\preceq$ is a cylinder order. If $\mathbf{a}\in \mathcal{M}_\le$ with $\mathbf{a}< \mathbf{m}_\le$, then $\mathbf{a}= 0^\infty$ or $\mathbf{a}= (10^j)^\infty = \tau_{j,k}(0^\infty)$ or $\mathbf{a}= \tau_{j,k}(\mathbf{a}')$ with $\mathbf{a}' \in \mathcal{M}_\preceq \cap [1]$, $\mathbf{a}' \prec \mathbf{m}_\preceq$. ◻ The following lemma is used in the construction of a cylinder order $\le$ such that $\mathbf{m}_\le = \mathbf{m}$ for a given word $\mathbf{m}$. Here, $\varepsilon$ denotes the empty word. **Lemma 5**. *Let $\boldsymbol{\sigma}= (\sigma_n)_{n\ge1} \in S^\infty$, $w_n = \sigma_{[1,n]}(0) \cdots \sigma_{[1,1]}(0)$ for $n \ge 0$, with $w_0 = \varepsilon$.* *For all even $n \ge 0$, we have $\sigma_{[1,n]}([0]) \subseteq [w_n0]$, $\sigma_{[1,n]}([1]) \subseteq [w_n1]$.* *For all odd $n \ge 1$, we have $\sigma_{[1,n]}([0]) \subseteq [w_n1]$, $\sigma_{[1,n]}([1]) \subseteq [w_n0]$.* *Proof.* Since $\sigma_{[1,0]}$ is the identity, the statement is trivial for $n = 0$. Suppose that it is true for all $\boldsymbol{\sigma}\in S^\infty$ for some even $n \ge 0$. Then $$\begin{aligned} \sigma_{[1,n+1]}([0]) & = \sigma_1 \circ \sigma_{[2,n+1]}([0]) \subseteq \sigma_1([\sigma_{[2,n+1]}(0) \cdots \sigma_{[2,2]}(0) 0]) \subseteq [\sigma_{[1,n+1]}(0) \cdots \sigma_{[1,2]}(0) \sigma_1(0) 1], \\ \sigma_{[1,n+1]}([1]) & = \sigma_1 \circ \sigma_{[2,n+1]}([1]) \subseteq \sigma_1([\sigma_{[2,n+1]}(0) \cdots \sigma_{[2,2]}(0) 1]) \subseteq [\sigma_{[1,n+1]}(0) \cdots \sigma_{[1,2]}(0) \sigma_1(0) 0], \end{aligned}$$ thus the statement is true for all $\boldsymbol{\sigma}\in S^\infty$ for $n{+}1$. The case of odd $n$ is similar. ◻ *Proof of Theorem [Theorem 3](#t:main){reference-type="ref" reference="t:main"}.* Let first $\le$ be a cylinder order with $[0] < [1]$. By iterating Proposition [Proposition 4](#p:main){reference-type="ref" reference="p:main"}, we obtain a finite sequence $\sigma_1, \dots, \sigma_h \in S$ such that $\mathbf{m}_\le = \sigma_{[1,h]}(10^\infty)$ or an infinite sequence $\boldsymbol{\sigma}= (\sigma_n)_{n\ge1} \in S^\infty$ such that $\mathbf{m}_\le \in \sigma_{[1,n]}([1])$ for all $n \ge 1$. Since $\sigma_{[1,n+1]}(1)$ starts with $\sigma_{[1,n]}(1)$ and is longer than $\sigma_{[1,n]}(1)$ for all $n \ge 1$, we have $\bigcap_{n\ge1} \sigma_{[1,n]}([1]) = \{\boldsymbol{\sigma}(1^\infty)\}$. Equations [\[e:discrete\]](#e:discrete){reference-type="eqref" reference="e:discrete"} and [\[e:discrete2\]](#e:discrete2){reference-type="eqref" reference="e:discrete2"} respectively follow from Proposition [Proposition 4](#p:main){reference-type="ref" reference="p:main"}. 
Let now $\boldsymbol{\sigma}= (\sigma_n)_{n\ge1} = (\tau_{j_n,k_n})_{n\ge1} \in S^\infty$. By Proposition [Proposition 4](#p:main){reference-type="ref" reference="p:main"} and Lemma [Lemma 5](#l:wn){reference-type="ref" reference="l:wn"}, we have $\boldsymbol{\sigma}(1^\infty) = \mathbf{m}_\le$ for all cylinder orders $\le$ satisfying $$\label{e:wnorder} \begin{array}{cl} [\sigma_{[1,n]}(10^i)w_n0] < [\sigma_{[1,n]}(10^i)w_n1] & \text{for all even $n\ge0$, $j_{n+1} \ne i < k_{n+1}$}, \\ & \text{and for all odd $n\ge1$, $i \in \{j_{n+1}, k_{n+1}\}$}, \\[.5ex] {}[\sigma_{[1,n]}(10^i)w_n1] < [\sigma_{[1,n]}(10^i)w_n0] & \text{for all odd $n\ge1$, $j_{n+1} \ne i < k_{n+1}$}, \\ & \text{and for all even $n\ge0$, $i \in \{j_{n+1}, k_{n+1}\}$}. \end{array}$$ Such cylinder orders exist since $\sigma_{[1,n+1]}(1)w_{n+1}$ is longer than $\sigma_{[1,n]}(10^{k_{n+1}})w_n = \sigma_{[1,n+1]}(1)w_n$ for all $n \ge 0$. To obtain $\mathbf{m}_\le = \sigma_{[1,h]}(10^\infty)$, $h \ge 0$, we use cylinder orders $\le$ such that [\[e:wnorder\]](#e:wnorder){reference-type="eqref" reference="e:wnorder"} holds only for $n < h$ and such that, for all $i \ge 0$, $[\sigma_{[1,h]}(10^i)w_h0] < [\sigma_{[1,h]}(10^i)w_h1]$ if $h$ is even, $[\sigma_{[1,h]}(10^i)w_h1] < [\sigma_{[1,h]}(10^i)w_h0]$ if $h$ is odd. ◻ The sequence $\mathbf{m}_{\le}$ is thus either eventually periodic or an $S$-adic sequence. A word $\mathbf{a}= a_1a_2\cdots$ is eventually periodic if and only if the factor complexity $$p_{a_1a_2\cdots}(n) := \#\{a_{k+1} a_{k+2} \cdots a_{k+n} \,:\, k \ge 0\}$$ is bounded; see e.g. [@Lothaire02 Theorem 1.3.13]. The smallest complexity for an aperiodic sequence is $p_{\mathbf{a}}(n) = n{+}1$, which is attained precisely by Sturmian sequences; see e.g. [@Lothaire02 Theorem 2.1.5]. By [@Creutz-Pavlov23 Proposition 2.1], all aperiodic words with $\limsup \frac{p_{\mathbf{a}}(n)}{n} < \frac{4}{3}$ are essentially equal to $\boldsymbol{\sigma}(1^\infty)$ with $\boldsymbol{\sigma}= (\tau_{j_n,k_n})_{n\ge1} \in S^\infty$ (and $k_n \le 2j_n{+}1$ or $(j_n,k_n) = (0,2)$). Without conditions on $j_n,k_n$, we get the following upper bound for $p_{\boldsymbol{\sigma}(1^\infty)}(n)$, which is optimal since $p_{\tau_{0,k}^\infty(1^\infty)}(n) = 3n{-}2$ for all $k \ge 2$, $2 \le n \le k$. **Proposition 6**. *Let $\le$ be a cylinder order. Then $p_{\mathbf{m}_\le}(n) \le 3n{-}2$ for all $n \ge 2$.* *Proof.* The proof is similar to that of [@Balkova06 Theorem 17]; see also [@Creutz-Pavlov23 Proposition 4.1]. Recall that the set of factors of a word $\mathbf{a}= a_1a_2\cdots \in \{0,1\}^\infty$ is $\{a_{k+1} a_{k+2} \cdots a_{k+n} : k,n \ge 0\}$, a factor $v$ of $\mathbf{a}$ is *strong bispecial* if all four words $0v0, 0v1, 1v0, 1v1$ are factors of $\mathbf{a}$, *weak bispecial* if $0v1, 1v0$ are factors and $0v0, 1v1$ are not factors of $\mathbf{a}$. Then $v$ is a strong/weak bispecial factor of $\boldsymbol{\sigma}(1^\infty)$, $\boldsymbol{\sigma}= (\sigma_n)_{n\ge1} = (\tau_{j_n,k_n})_{n\ge1} \in S^\infty$, if and only if $v = 0^{j_1} /\, 0^{k_1-1}$, $k_1 \ge j_1{+}2$, or $v = 0^{j_1}\sigma_1(v'0)$, where $v'$ is a strong/weak bispecial factor of $\lim_{n\to\infty}\boldsymbol{\sigma}_{[2,n]}(1^\infty)$. 
By iterating, we obtain that all strong/weak bispecial factors of $\boldsymbol{\sigma}(1^\infty)$ are of the form $$w_{\ell,h} = \sigma_{[1,0]}(0^{j_1}) \cdots \sigma_{[1,h-1]}(0^{j_h}) \sigma_{[1,h]}(0^\ell) \sigma_{[1,h]}(0) \cdots \sigma_{[1,1]}(0),$$ with $h \ge 0$ such that $k_{h+1} \ge j_{h+1}{+}2$, where $\ell = j_{h+1}$ for a strong bispecial factor, $\ell = k_{h+1}{-}1$ for a weak one; here, $w_{\ell,0} = 0^\ell$. For any recurrent word $\mathbf{a}\in \{0,1\}^\infty$, the difference of $p_{\mathbf{a}}(n{+}2)-p_{\mathbf{a}}(n{+}1)$ and $p_{\mathbf{a}}(n{+}1)-p_{\mathbf{a}}(n)$ equals the difference of the number of strong and weak bispecial factors of $\mathbf{a}$ of length $n$; see [@Cassaigne97 Proposition 3.2]. By telescoping and since $p_{\mathbf{a}}(1)-p_{\mathbf{a}}(0)=1$, $p_{\mathbf{a}}(n{+}2)-p_{\mathbf{a}}(n{+}1)-1$ is equal to the difference of the number of strong and weak bispecial factors of $\mathbf{a}$ up to length $n$. Since $|w_{k_{h+1}-1,h}| < |\sigma_{[1,h+1]}(1)| \le |\sigma_{[1,h+2]}(0)| < |w_{j_{h+3},h+2}|$ for all $h \ge 0$, this difference for $\mathbf{a}= \boldsymbol{\sigma}(1^\infty)$ is at most $2$ for all $n \ge 0$; note that $\boldsymbol{\sigma}(1^\infty)$ is recurrent since $\sigma_{[n,n+1]}(1)$ starts with $10^{k_n}1$ for all $n \ge 1$. Since $p_{\mathbf{a}}(2) \le 4$, we have thus $p_{\boldsymbol{\sigma}(1^\infty)} \le 3n{-}2$ for all $n \ge 2$. Since $p_{\sigma_{[1,h]}(10^\infty)}(n) \le p_{\boldsymbol{\sigma}(1^\infty)}(n)$ if $k_{h+1}$ is sufficiently large, we also have $p_{\sigma_{[1,h]}(10^\infty)}(n) \le 3n{-}2$ for all $n \ge 2$, which proves the proposition by Theorem [Theorem 3](#t:main){reference-type="ref" reference="t:main"}. ◻ Since the factor complexity is bounded by a linear function, we can apply the results of [@Adamczewsk-Bugeaud07]. For $\beta > 1$, let $$\pi_\beta(a_1a_2\cdots) := \sum_{n=1}^\infty \frac{a_n}{\beta^n}.$$ Recall that Pisot and Salem numbers are algebraic integers $\beta > 1$ with all Galois conjugates (except $\beta$ itself) having absolute value $\le 1$; $\beta$ is a Salem number if a conjugate lies on the unit circle, a Pisot number otherwise. In particular, all integers $\beta \ge 2$ are Pisot numbers. **Proposition 7**. *Let $\beta \ge 2$ be a Pisot or Salem number, and let $\le$ be a cylinder order on $\{0,1\}^\infty$. Then $\pi_\beta(\mathbf{m}_\le)$ is in $\mathbb{Q}(\beta)$ or transcendental.* *Proof.* This is a direct consequence of Proposition [Proposition 6](#p:linear){reference-type="ref" reference="p:linear"} and [@Adamczewsk-Bugeaud07 Theorem 1A]. ◻ # Examples {#sec:examples} ## Lexicographic order The classical order on the set of infinite words $A^\infty$ with an ordered alphabet $(A,\le)$ is the lexicographic order, defined by $[wa] <_{\mathrm{lex}} [wb]$ for all $w \in A^*$, $a,b \in A$ with $a < b$. For $A = \{0,1\}$, we have $\mathbf{m}_{\le_{\mathrm{lex}}} = 10^\infty = \min \mathcal{M}_{\le_{\mathrm{lex}}} \setminus \{0^\infty\}$. By [@Parry60 §2], a sequence $\mathbf{a}\in \{0,1\}^\infty$ is the greedy $\beta$-expansion of $1$ for some $\beta \in (1,2)$ if and only if $\mathbf{a}\in \mathcal{M}_{\le_{\mathrm{lex}}} \setminus \{10^\infty\}$ and $\mathbf{a}$ is not purely periodic; it is the quasi-greedy $\beta$-expansion of $1$ for some $\beta \in (1,2]$ if and only if $\mathbf{a}\in \mathcal{M}_{\le_{\mathrm{lex}}}$ and $\mathbf{a}$ does not end with $0^\infty$. 
Here, the greedy $\beta$-expansion of $1$ is the lexicographically largest sequence $\mathbf{a}\in \{0,1\}^\infty$ with $\pi_\beta(\mathbf{a}) = 1$, the quasi-greedy $\beta$-expansion of $1$ is the largest such sequence that does not end with $0^\infty$. By [@Hubbard-Sparrow90 Theorem 1], we also have that $(0^\infty,\mathbf{a})$ is the pair of kneading sequences of a Lorenz map if and only if $\mathbf{a}\in \mathcal{M}_{\le_{\mathrm{lex}}}$ does not end with $0^\infty$. ## Alternating lexicographic order {#sec:altern-lexic-order} The alternating lexicographic order is defined by $[wa] <_{\mathrm{alt}} [wb]$ if $a < b$ and $|w|$ is even, or $a > b$ and $|w|$ is odd, where $|w|$ denotes the length of a word $w \in A^*$. For $A = \{0,1\}$ with $0 < 1$, we have $[11] <_{\mathrm{alt}} [10]$, $[101] >_{\mathrm{alt}} [100]$, $[1001] <_{\mathrm{alt}} [1000]$, thus $\mathbf{m}_{\le_{\mathrm{alt}}} = \tau_{0,2}(\mathbf{m}_{\preceq})$ by Proposition [Proposition 4](#p:main){reference-type="ref" reference="p:main"}, with $\mathbf{a}\preceq \mathbf{b}$ if $\tau_{0,2}(\mathbf{a}) \le_{\mathrm{alt}} \tau_{0,2}(\mathbf{b})$. Since $\preceq$ is equal to $\le_{\mathrm{alt}}$, we obtain that $\mathbf{m}_{\le_{\mathrm{alt}}}$ is the fixed point of $\tau_{0,2}$, i.e., $$\mathbf{m}_{\le_{\mathrm{alt}}} = \tau_{0,2}(\mathbf{m}_{\le_{\mathrm{alt}}}) = 100111001001001110011\cdots;$$ see also [@Allouche83; @Allouche-Cosnard83; @Dubickas07]. By Theorem [Theorem 3](#t:main){reference-type="ref" reference="t:main"}, we have $$\label{e:discretealt} \{\mathbf{a}\in \mathcal{M}_{\le_{\mathrm{alt}}} \,:\, \mathbf{a}<_{\mathrm{alt}} \mathbf{m}_{\le_{\mathrm{alt}}}\} = \{(\tau_{0,2}^n(0))^\infty \,:\, n \ge 0\} = \{0^\infty, 1^\infty, (100)^\infty, (10011)^\infty, \dots\}.$$ According to [@Ito-Sadahiro09], the $(-\beta)$-expansion of $x \in \big[\frac{-\beta}{\beta+1},\frac{1}{\beta+1}\big)$, $\beta > 1$, is the sequence $a_1a_2\cdots$ with $a_n = \big\lfloor \frac{\beta}{\beta+1}-\beta T_{-\beta}^{n-1}(x)\big\rfloor$, given by the $(-\beta)$-transformation $T_{-\beta}(y) := -\beta y - \big\lfloor \frac{\beta}{\beta+1}-\beta y\big\rfloor$, and the set of $(-\beta)$-expansions is characterized by that of $\frac{-\beta}{\beta+1}$. By [@Steiner13 Theorem 2], a sequence $\mathbf{a}$ is the $(-\beta)$-expansion of $\frac{-\beta}{\beta+1}$ for some $\beta \in (1,2)$ if and only if $\mathbf{a}\in \mathcal{M}_{\le_{\mathrm{alt}}} \setminus \{(10)^\infty\}$, $\mathbf{a}>_{\mathrm{alt}} \mathbf{m}_{\le_{\mathrm{alt}}}$, and $\mathbf{a}\notin \{w1,w00\}^\infty \setminus \{(w1)^\infty\}$ for all $w \in \{0,1\}^*$ such that $(w1)^\infty >_{\mathrm{alt}} \mathbf{m}_{\le_{\mathrm{alt}}}$. Note that continued fractions are also ordered by the alternating lexicographic order on the sequences of partial quotients, and $\mathbf{m}_{\le_{\mathrm{alt}}}$ occurs e.g. in [@Kraaikamp-Schmidt-Steiner12 Remark 11.1]. ## Unimodal maps Let $A = \{0,1\}$ and define the *unimodal order* by $[w0] <_{\mathrm{uni}} [w1]$ if $|w|_1$ is even, $[w0] >_{\mathrm{uni}} [w1]$ if $|w|_1$ is odd, where $|w|_1$ denotes the number of occurrences of 1 in $w \in \{0,1\}^*$. Then we have $[11] <_{\mathrm{uni}} [10]$, $[101] <_{\mathrm{uni}} [100]$, and $$\mathbf{m}_{\le_{\mathrm{uni}}} = \tau_{0,1}(\mathbf{m}_{\le_{\mathrm{alt}}}) = 10111010101110111011101010111010\cdots.$$ This is the fixed point of the period-doubling (or Feigenbaum) substitution $0 \mapsto 11$, $1 \mapsto 10$. 
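The identification just stated can be checked on finite prefixes; the following Python sketch (our code, not from the paper) compares $\tau_{0,1}(\mathbf{m}_{\le_{\mathrm{alt}}})$, with $\mathbf{m}_{\le_{\mathrm{alt}}}$ generated as the fixed point of $\tau_{0,2}$, against the fixed point of the period-doubling substitution $0\mapsto 11$, $1\mapsto 10$.

```python
# Sketch (our code): check on a finite prefix that tau_{0,1}(m_alt)
# agrees with the fixed point of the period-doubling substitution.

def substitute(images, word):
    return "".join(images[a] for a in word)

def fixed_point_prefix(images, seed="1", iterations=10):
    """Iterate the substitution on the seed; the iterates are nested prefixes."""
    word = seed
    for _ in range(iterations):
        word = substitute(images, word)
    return word

tau02 = {"0": "1", "1": "100"}            # m_alt is its fixed point
tau01 = {"0": "1", "1": "10"}
period_doubling = {"0": "11", "1": "10"}

lhs = substitute(tau01, fixed_point_prefix(tau02))
rhs = fixed_point_prefix(period_doubling)
n = min(len(lhs), len(rhs))
assert lhs[:n] == rhs[:n]
print(lhs[:32])   # 10111010101110111011101010111010
```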
The set $\mathcal{M}_{\le_{\mathrm{uni}}}$ is the set of kneading sequences of unimodal maps [@Collet-Eckmann80; @Milnor-Thurston88]. We define the *flipped unimodal order* by $[w0] <_{\mathrm{flip}} [w1]$ if $|w|_0$ is even, $[w0] >_{\mathrm{flip}} [w1]$ if $|w|_0$ is odd, where $|w|_0$ denotes the number of occurrences of 0 in $w \in \{0,1\}^*$. Then we have $[11] >_{\mathrm{flip}} [10]$, $[101] <_{\mathrm{flip}} [100]$, $[1001] >_{\mathrm{flip}} [1000]$, $[10001] >_{\mathrm{flip}} [10000]$, and $$\mathbf{m}_{\le_{\mathrm{flip}}} = \tau_{1,3}(\mathbf{m}_{\le_{\mathrm{alt}}}) = 100010101000100010001010100010101000101010001000\cdots.$$ Note that $0 \mathbf{m}_{\le_{\mathrm{flip}}} = F(\mathbf{m}_{\le_{\mathrm{uni}}})$, where $F(a_1a_2\cdots) := (1{-}a_1)(1{-}a_2)\cdots$, and we have $$\mathcal{M}_{\le_{\mathrm{uni}}} = 1F(\mathcal{M}_{\le_{\mathrm{flip}}}) \cup \{0^\infty\}.$$ ## Sturmian sequences {#sec:sturmian-sequences} The set of substitutions $\{\theta_k : k \ge 1\}$ defined by $\theta_k(0) = 0^{k-1}1$, $\theta_k(1) = 0^{k-1}10$, generates the standard Sturmian words; see [@Lothaire02 Corollary 2.2.22]. Since $\tau_{k-1,k}$ is rotationally conjugate to $\theta_k$, more precisely $\theta_k(w)0^{k-1} = 0^{k-1}\tau_{k-1,k}(w)$ for all $w \in \{0,1\}^*$, the set of substitutions $\{\tau_{k-1,k} : k \ge 1\}$ generates the same shifts as $\{\theta_k : k \ge 1\}$. Therefore, the limit words of sequences in $\{\tau_{k-1,k} : k \ge 1\}^\infty$ provide elements of all Sturmian shifts. For example, the limit word of the sequence $(\tau_{0,1})^\infty$ is the Fibonacci word. # Symmetric alphabets {#sec:symm-shift-spac} For a real number $q > 1$, the set $$\label{e:Lbeta} \tilde{\mathcal{L}}_q := \big\{\limsup_{n\to\infty} \|x q^n\| \,:\, x \in \mathbb{R}\big\},$$ where $\|.\|$ denotes the distance to the nearest integer, is a multiplicative version of the Lagrange spectrum and was studied in [@Dubickas06; @Akiyama-Kaneko21]. If $q$ is an integer, then representing $x = \sum_{k=-\infty}^\infty a_k q^{-k}$ with $a_k \in \mathbb{Z}$, $a_k \ne 0$ for finitely many $k \le 0$, $|\sum_{k=n+1}^\infty a_k q^{n-k}| \le 1/2$, gives that $\|x q^n\| = |\sum_{k=n+1}^\infty a_k q^{n-k}|$; see also Proposition [Proposition 10](#p:Lbeta){reference-type="ref" reference="p:Lbeta"} below. This leads us to consider $$\mathcal{M}^{\mathrm{abs}}_\le = \{s^{\mathrm{abs}}_\le(\mathbf{a}) \,:\, \mathbf{a}\in \{0,\pm1\}^\infty\} \quad \mbox{with} \quad s^{\mathrm{abs}}_\le(a_1a_2\cdots) = \sup\nolimits_{n\ge1} \mathrm{abs}(a_na_{n+1}\cdots),$$ where $$\mathrm{abs}(\mathbf{a}) = \begin{cases}\mathbf{a}& \mbox{if}\ \mathbf{a}\ge_{\mathrm{lex}} 0^\infty, \\ -\mathbf{a}& \mbox{if}\ \mathbf{a}\le_{\mathrm{lex}} 0^\infty,\end{cases} \quad -(a_1a_2\cdots) = (-a_1)(-a_2)\cdots.$$ We denote the smallest accumulation point of $\mathcal{M}^{\mathrm{abs}}_\le$ by $\mathbf{m}^{\mathrm{abs}}_\le$. The same proof as for Theorem [Theorem 1](#t:ML){reference-type="ref" reference="t:ML"} shows for all cylinder orders $\le$ on $\{0,\pm1\}^\infty$ that $$\mathcal{L}^{\mathrm{abs}}_\le = \mathcal{M}^{\mathrm{abs}}_\le = \mathrm{cl}\{s^{\mathrm{abs}}_\le(\mathbf{a}) \,:\, \mathbf{a}\in \{0,\pm1\}^\infty\, \mbox{purely periodic}\},$$ where $\mathcal{L}^{\mathrm{abs}}_\le := \{\limsup_{n\to\infty} \mathrm{abs}(a_na_{n+1}\cdots) \,:\, a_1a_2\cdots \in \{0,\pm1\}^\infty\}$. 
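To make the definitions of $\mathrm{abs}$ and $s^{\mathrm{abs}}_\le$ concrete, here is a minimal Python sketch (our code and naming) that evaluates them for purely periodic words over $\{0,\pm1\}$ under the lexicographic order; as for sup-words, the supremum over shifts of a periodic word reduces to a maximum over rotations of its period block.

```python
# Sketch (our notation): abs and the sup-word s^abs for a purely periodic
# word over {-1, 0, 1}, with the lexicographic order (so -1 < 0 < 1).

def abs_block(block):
    """Period block of abs((block)^infty): negate if the word is <_lex 0^infty."""
    for a in block:
        if a > 0:
            return tuple(block)
        if a < 0:
            return tuple(-x for x in block)
    return tuple(block)   # the zero word

def sup_abs_lex(block):
    """Period block of s^abs_lex((block)^infty)."""
    rotations = [tuple(block[i:]) + tuple(block[:i]) for i in range(len(block))]
    # equal-length period blocks compare like the periodic words they generate
    return max(abs_block(r) for r in rotations)

# (1 0 -1)^infty equals its own sup-word, so it lies in M^abs_lex.
assert sup_abs_lex([1, 0, -1]) == (1, 0, -1)
print(sup_abs_lex([0, -1, 1, 0]))
```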
In the following, we assume that a cylinder order on $\{0,\pm1\}^\infty$ is *consistent* (with the natural order on $\{0,\pm1\}$), which means that, for each $w \in \{0,\pm1\}^*$, we have $[w(-1)] < [w0] < [w1]$ or $[w(-1)] > [w0] > [w1]$. In order to describe $\mathbf{m}^{\mathrm{abs}}_\le$, we define maps $\varrho_0, \varrho_1, \varrho_2$ from $\{0,1\}^*$ to $\{0,\pm1\}^*$ by $\varrho_0(\varepsilon) = \varrho_1(\varepsilon) = \varrho_2(\varepsilon) = \varepsilon$ for the empty word $\varepsilon$, and $$\begin{aligned} \varrho_0(w0) & = \begin{cases}\varrho_0(w) 1 & \mbox{if $|w|_0$ is even}, \\ \varrho_0(w) (-1) & \mbox{if $|w|_0$ is odd},\end{cases} & \varrho_0(w1) & = \begin{cases}\varrho_0(w) 10 & \mbox{if $|w|_0$ is even}, \\ \varrho_0(w) (-1)0 & \mbox{if $|w|_0$ is odd},\end{cases} \\ \varrho_1(w0) & = \begin{cases}\varrho_1(w) 1 & \mbox{if $|w|_1$ is even}, \\ \varrho_1(w) (-1) & \mbox{if $|w|_1$ is odd},\end{cases} & \varrho_1(w1) & = \begin{cases}\varrho_1(w) 10 & \mbox{if $|w|_1$ is even}, \\ \varrho_1(w) (-1)0 & \mbox{if $|w|_1$ is odd},\end{cases} \\ \varrho_2(w0) & = \begin{cases}\varrho_2(w) 1 & \mbox{if $|w|$ is even}, \\ \varrho_2(w) (-1) & \mbox{if $|w|$ is odd},\end{cases} & \varrho_2(w1) & = \begin{cases}\varrho_2(w) 10 & \mbox{if $|w|$ is even}, \\ \varrho_2(w) (-1)0 & \mbox{if $|w|$ is odd},\end{cases} \label{e:rho2} \end{aligned}$$ for all $w \in \{0,1\}^*$, where $|w|$ is the length of a word $w$ and $|w|_i$ the number of occurrences of the letter $i$ in $w$. As for substitutions, the maps $\varrho_i$ are extended naturally to $\{0,1\}^\infty$. **Theorem 8**. *Let $\mathbf{m}\in \{0,\pm1\}^\infty$. Then $\mathbf{m}= \mathbf{m}^{\mathrm{abs}}_\le$ for some consistent cylinder order $\le$ on $\{0,\pm1\}^\infty$ with $0^\infty < 1^\infty$ if and only if $\mathbf{m}= \sigma(\mathbf{m}_\preceq)$ for some $\sigma \in \{\varrho_0, \varrho_1, \varrho_2, \tau_{0,1}\}$ and some cylinder order $\preceq$ on $\{0,1\}^\infty$ with $0^\infty \prec 1^\infty$.* *If $\mathbf{m}^{\mathrm{abs}}_\le = \sigma(\mathbf{m}_\preceq)$, then we can assume that $\mathbf{a}\preceq \mathbf{b}$ if and only if $\sigma(\mathbf{a}) \le \sigma(\mathbf{b})$, and we have $$\label{e:discabs} \{\mathbf{a}\in \mathcal{M}^{\mathrm{abs}}_\le : \mathbf{a}< \mathbf{m}^{\mathrm{abs}}_\le\} = \{0^\infty\} \cup \{\sigma(\mathbf{a}) : \mathbf{a}\in \mathcal{M}_\preceq,\, \mathbf{a}\prec \mathbf{m}_\preceq\}.$$* *Proof.* Let first $\le$ be a consistent cylinder order on $\{0,\pm1\}^\infty$. Then the order $\preceq$ defined by $\mathbf{a}\preceq \mathbf{b}$ if $\sigma(\mathbf{a}) \le \sigma(\mathbf{b})$ is a cylinder order for all $\sigma \in \{\varrho_0, \varrho_1, \varrho_2, \tau_{0,1}\}$. Indeed, for any $w\in\{0,1\}^*$, we have $\sigma([w0])\subset[\sigma(w)xy], \ \sigma([w1])\subset[\sigma(w)x0]$, where $x,y\in \{\pm1\}$. Assume first that $[1\bar{1}] < [10]$; here and in the following, we use the notation $\bar{1} = -1$. Then $\mathcal{M}^{\mathrm{abs}}_\le \cap [1\bar{1}] = \{(1\bar{1})^\infty\}$, thus $\mathbf{m}^{\mathrm{abs}}_\le \in [10]$. 
If $[10\bar{1}] < [100]$, then each $1$ in a word in $\mathcal{M}^{\mathrm{abs}}_\le \cap [10\bar{1}]$ is followed by $\bar{1}$ or $0\bar{1}$, and each $\bar{1}$ is followed by $1$ or $01$, i.e., $$\mathcal{M}^{\mathrm{abs}}_\le \cap [10\bar{1}] \subseteq 10\,(\{\bar{1},\bar{1}0\}\{1,10\})^\infty = \varrho_2([1]).$$ Therefore, $\mathbf{m}^{\mathrm{abs}}_\le = \varrho_2(\mathbf{m}_\preceq)$ for the cylinder order $\preceq$ defined by $\mathbf{a}\preceq \mathbf{b}$ if $\varrho_2(\mathbf{a}) \le \varrho_2(\mathbf{b})$. If $[101] < [100]$, then each $1$ in a word in $\mathcal{M}^{\mathrm{abs}}_\le \cap [101]$ is followed by $\bar{1}$ or $01$, and each $\bar{1}$ is followed by $1$ or $0\bar{1}$, i.e., $$\begin{aligned} \mathcal{M}^{\mathrm{abs}}_\le \cap [101] & \subseteq 10((10)^*1(\bar{1}0)^*\bar{1})^\infty \cup 10((10)^*1(\bar{1}0)^*\bar{1})^*(10)^\infty \cup 10((10)^*1(\bar{1}0)^*\bar{1})^*(10)^*1(\bar{1}0)^\infty \\ & = \varrho_0(1(1^*01^*0)^\infty) \cup \varrho_0(1(1^*01^*0)^* 1^\infty) \cup \varrho_0(1(1^*01^*0)^* 1^*0 1^\infty) = \varrho_0([1]). \end{aligned}$$ Therefore, we have $\mathbf{m}^{\mathrm{abs}}_\le = \varrho_0(\mathbf{m}_\preceq)$ for the cylinder order $\preceq$ defined by $\mathbf{a}\preceq \mathbf{b}$ if $\varrho_0(\mathbf{a}) \le \varrho_0(\mathbf{b})$. Assume now $[11] < [10]$. Then $\mathcal{M}^{\mathrm{abs}}_\le \cap [11] = \{1^\infty\}$, thus $\mathbf{m}^{\mathrm{abs}}_\le \in [10]$. If $[10\bar{1}] < [100]$, then $$\mathcal{M}^{\mathrm{abs}}_\le \cap [10\bar{1}] \subseteq 10(\bar{1}^*\bar{1}01^*10)^\infty \cup 10(\bar{1}^*\bar{1}01^*10)^* \bar{1}^\infty \cup 10(\bar{1}^*\bar{1}01^*10)^* \bar{1}^*\bar{1}0 1^\infty = \varrho_1([1]),$$ thus $\mathbf{m}^{\mathrm{abs}}_\le = \varrho_1(\mathbf{m}_\preceq)$, with $\preceq$ defined by $\mathbf{a}\preceq \mathbf{b}$ if $\varrho_1(\mathbf{a}) \le \varrho_1(\mathbf{b})$. If $[101] < [100]$, then $$\mathcal{M}^{\mathrm{abs}}_\le \cap [101] \subseteq 10\{1,10\}^\infty = \tau_{0,1}([1]),$$ thus $\mathbf{m}^{\mathrm{abs}}_\le = \tau_{0,1}(\mathbf{m}_\preceq)$ for the cylinder order $\preceq$ defined by $\mathbf{a}\preceq \mathbf{b}$ if $\tau_{0,1}(\mathbf{a}) \le \tau_{0,1}(\mathbf{b})$. Since $\varrho_0(0^\infty) = \varrho_2(0^\infty) = (1\overline{1})^\infty$ and $\varrho_1(0^\infty) = \tau_{0,1}(0^\infty) = 1^\infty$, equation [\[e:discabs\]](#e:discabs){reference-type="eqref" reference="e:discabs"} holds. Let now $\preceq$ be a cylinder order on $\{0,1\}^\infty$ with $0^\infty \prec 1^\infty$ and $\sigma \in \{\varrho_0, \varrho_1, \varrho_2, \tau_{0,1}\}$. Then there exists a consistent cylinder order $\le$ on $\{0,\pm1\}^\infty$ satisfying $\sigma(\mathbf{a}) \le \sigma(\mathbf{b})$ if $\mathbf{a}\preceq \mathbf{b}$ and $0^\infty < 1^\infty$. Indeed, for $w \in \{0,\pm1\}^*$ and distinct $a, b \in \{0,\pm1\}$, we set $[wa] < [wb]$ if $\mathbf{a}' \prec \mathbf{b}'$ for some $\mathbf{a}', \mathbf{b}' \in \{0,1\}^\infty$ with $\sigma(\mathbf{a}') \in [wa]$, $\sigma(\mathbf{b}') \in [wb]$; by Lemma [Lemma 2](#l:cylinderorder2){reference-type="ref" reference="l:cylinderorder2"}, this does not depend on the choice of $\mathbf{a}', \mathbf{b}'$. Moreover, since $\sigma(\mathbf{a}) \in [w\bar{1}]$ and $\sigma(\mathbf{b}) \in [w1]$ is impossible, we have no obstruction to a consistent cylinder order. Since $0^\infty \prec 1^\infty$, we have $[1\overline{1}] < [10]$ in case $\sigma \in \{\varrho_0, \varrho_2\}$, $[11] < [10]$ in case $\sigma \in \{\varrho_1, \tau_{0,1}\}$. 
We can set $[10\bar{1}] < [100]$ in case $\sigma \in \{\varrho_1, \varrho_2\}$ because $([100] \cup [101]) \cap \sigma(\{0,1\}^\infty) = \emptyset$; similarly we can set $[101] < [100]$ in case $\sigma \in \{\varrho_0, \tau_{0,1}\}$. Then we have $\mathbf{m}^{\mathrm{abs}}_\le = \sigma(\mathbf{m}_\preceq)$. ◻ **Proposition 9**. *Let $\beta \ge 3$ be a Pisot or Salem number, and let $\le$ be a consistent cylinder order on $\{0,\pm1\}^\infty$. Then $\pi_\beta(\mathbf{m}^{\mathrm{abs}}_\le)$ is in $\mathbb{Q}(\beta)$ or transcendental.* *Proof.* Let $G(a_1a_2\cdots) = |a_1|\,|a_2|\cdots$. Then $G \circ \varrho_i = \tau_{0,1}$ for all $i \in \{0,1,2\}$, thus $p_{G(\mathbf{m}^{\mathrm{abs}}_\le)}(n) \le 3n{-}2$ by Proposition [Proposition 6](#p:linear){reference-type="ref" reference="p:linear"}, Theorems [Theorem 3](#t:main){reference-type="ref" reference="t:main"} and [Theorem 8](#t:sym){reference-type="ref" reference="t:sym"}. Moreover, the map $G$ is 2-to-1 from the set of factors of $\mathbf{m}^{\mathrm{abs}}_\le$ to the set of factors of $G(\mathbf{m}^{\mathrm{abs}}_\le)$, thus $p_{\mathbf{m}^{\mathrm{abs}}_\le}(n) \le 6n{-}4$. By [@Adamczewsk-Bugeaud07 Theorem 1A] and by adding 1 to each digit of $\mathbf{m}^{\mathrm{abs}}_\le$, we obtain that $\pi_\beta(\mathbf{m}^{\mathrm{abs}}_\le) + \frac{1}{\beta-1}$ is in $\mathbb{Q}(\beta)$ or transcendental, thus also $\pi_\beta(\mathbf{m}^{\mathrm{abs}}_\le)$ is in $\mathbb{Q}(\beta)$ or transcendental. ◻ # Examples of orders on symmetric shift spaces {#sec:exampl-orders-symm} ## Lexicographic order For the lexicographic order on $\{0,\pm1\}^\infty$ (with $-1 < 0 < 1$), we have $[1\bar{1}] < [10]$, $[10\bar{1}] < [100]$, and we obtain that $$\mathbf{m}^{\mathrm{abs}}_{\le_{\mathrm{lex}}} = \varrho_2(\mathbf{m}_{\le_{\mathrm{alt}}}) = 10\bar{1}1\bar{1}010\bar{1}01\bar{1}10\bar{1}1\bar{1}01\bar{1}10\bar{1}010\bar{1}1\bar{1}010\cdots.$$ The following proposition relates the Lagrange spectrum $\tilde{\mathcal{L}}_q$, defined in [\[e:Lbeta\]](#e:Lbeta){reference-type="eqref" reference="e:Lbeta"}, and its smallest accumulation point $\tilde{\mathbf{m}}_q$ to $\mathcal{M}^{\mathrm{abs}}_{\le_{\mathrm{lex}}}$ and $\mathbf{m}^{\mathrm{abs}}_{\le_{\mathrm{lex}}}$; it slightly improves results of [@Dubickas06; @Akiyama-Kaneko21]. **Proposition 10**. *We have $$\begin{gathered} \tilde{\mathcal{L}}_2 = \pi_2(\mathcal{M}^{\mathrm{abs}}_{\le_{\mathrm{lex}}}) \cap \big[0,\tfrac{1}{2}\big] \neq \pi_2(\mathcal{M}^{\mathrm{abs}}_{\le_{\mathrm{lex}}}), \quad \tilde{\mathcal{L}}_3 = \pi_3(\mathcal{M}^{\mathrm{abs}}_{\le_{\mathrm{lex}}}), \label{e:23} \\ \pi_q(\mathcal{M}^{\mathrm{abs}}_{\le_{\mathrm{lex}}}) = \tilde{\mathcal{L}}_q \cap \big[0,\tfrac{1}{q-1}\big] \neq \tilde{\mathcal{L}}_q \quad \mbox{for all integers}\ q \ge 4. \label{e:4}\end{gathered}$$ For all integers $q \ge 2$, we have $\tilde{\mathbf{m}}_q = \pi_q(\mathbf{m}^{\mathrm{abs}}_{\le_{\mathrm{lex}}}) = \pi_q(\varrho_2(\mathbf{m}_{\le_{\mathrm{alt}}}))$ and $$\tilde{\mathcal{L}}_q \cap [0,\tilde{\mathbf{m}}_q) = \pi_q(\mathcal{M}^{\mathrm{abs}}_{\le_{\mathrm{lex}}}) \cap [0,\tilde{\mathbf{m}}_q) = \{0\} \cup \{\pi_q(\varrho_2(\tau_{0,2}^n(0^\infty))) \,:\, n \ge 0\}.$$* *Proof.* As mentioned at the beginning of Section [5](#sec:symm-shift-spac){reference-type="ref" reference="sec:symm-shift-spac"}, for integer $q \ge 2$, $\|x q^n\|$ can be determined by a symmetric $q$-expansion of $x$. We can assume w.l.o.g. $|x| \le \frac{1}{2}$. 
Let $$\mathcal{A}_q := \big\{a_1a_2\cdots \in A_q^\infty \,:\, |\pi_q(a_ka_{k+1}\cdots)| \le \tfrac{1}{2} \ \mbox{for all}\ k \ge 1\big\},\ \mbox{with}\ A_q := \{0, \pm1, \dots, \pm\lfloor q/2\rfloor\}.$$ For each $x \in [-\frac{1}{2},\frac{1}{2}]$, we obtain a sequence $\mathbf{a}= a_1a_2\cdots \in \mathcal{A}_q$ satisfying $x = \pi_q(\mathbf{a})$ by taking $a_k = \lfloor q \tilde{T}_q^{k-1}(x) + \frac{1}{2}\rfloor$ where $\tilde{T}_q(y) := q y - \lfloor q y + \frac{1}{2}\rfloor$. Then $$\|x q^n\| = |\pi_q(a_{n+1}a_{n+2}\cdots)| = \pi_q(\mathrm{abs}(a_{n+1}a_{n+2}\cdots)).$$ Note that $\mathbf{a}\le_{\mathrm{lex}} \mathbf{b}$ implies $\pi_q(\mathbf{a}) \le \pi_q(\mathbf{b})$ for all $\mathbf{a}, \mathbf{b}\in \mathcal{A}_q$. Since $\mathcal{L}^{\mathrm{abs}}_{\le_{\mathrm{lex}}} = \mathcal{M}^{\mathrm{abs}}_{\le_{\mathrm{lex}}}$ (and a similar relation holds for larger alphabets), we obtain that $$\tilde{\mathcal{L}}_q = \{\pi_q(s^{\mathrm{abs}}_{\le_{\mathrm{lex}}}(\mathbf{a})) \,:\, \mathbf{a}\in \mathcal{A}_q\}.$$ For $q \in \{2,3\}$, we have $A_q = \{0,\pm1\}$, thus $\tilde{\mathcal{L}}_q \subseteq \pi_q(\mathcal{M}^{\mathrm{abs}}_{\le_{\mathrm{lex}}})$. For $q \ge 3$, we have $\{0,\pm1\}^\infty \subseteq \mathcal{A}_q$, thus $\pi_q(\mathcal{M}^{\mathrm{abs}}_{\le_{\mathrm{lex}}}) \subseteq \tilde{\mathcal{L}}_q$. Since $\pi_2(\mathcal{M}^{\mathrm{abs}}_{\le_{\mathrm{lex}}}) \cap [0,\frac{1}{2}] \subseteq \tilde{\mathcal{L}}_2$ and $1 = \pi_2(1^\infty) \in \pi_2(\mathcal{M}^{\mathrm{abs}}_{\le_{\mathrm{lex}}}) \setminus \tilde{\mathcal{L}}_2$, this proves [\[e:23\]](#e:23){reference-type="eqref" reference="e:23"}. For $q \ge 4$, we have $\pi_q(s^{\mathrm{abs}}_{\le_{\mathrm{lex}}}(\mathbf{a})) \ge \pi_q((2\overline{2})^\infty) = \frac{2}{q+1} > \frac{1}{q-1}$ for all $\mathbf{a}\in A_q^\infty \setminus \{0,\pm1\}^\infty$, thus $\tilde{\mathcal{L}}_q \cap [0,\frac{1}{q-1}] \subseteq\pi_q(\mathcal{M}^{\mathrm{abs}}_{\le_{\mathrm{lex}}})$. Together with $\frac{2}{q+1} \in \tilde{\mathcal{L}}_q \setminus \pi_q(\mathcal{M}^{\mathrm{abs}}_{\le_{\mathrm{lex}}})$, we obtain [\[e:4\]](#e:4){reference-type="eqref" reference="e:4"}. Since $\pi_q$ is order-preserving on $\mathcal{A}_q$, we obtain that $\tilde{\mathbf{m}}_q = \pi_q(\mathbf{m}^{\mathrm{abs}}_{\le_{\mathrm{lex}}})$ and that $\tilde{\mathcal{L}}_q$ and $\pi_q(\mathcal{M}^{\mathrm{abs}}_{\le_{\mathrm{lex}}})$ agree on $[0,\tilde{\mathbf{m}}_q)$. Since $\{\mathbf{a}\in \mathcal{M}^{\mathrm{abs}}_{\le_{\mathrm{lex}}} \,:\, \mathbf{a}< \mathbf{m}^{\mathrm{abs}}_{\le_{\mathrm{lex}}}\}$ is equal to $\{0^\infty\} \cup \{\varrho_2(\tau_{0,2}^n(0^\infty)) \,:\, n \ge 0\}$ by Theorem [Theorem 8](#t:sym){reference-type="ref" reference="t:sym"} and [\[e:discretealt\]](#e:discretealt){reference-type="eqref" reference="e:discretealt"}, this completes the proof of the proposition. ◻ ## Alternating lexicographic order {#alternating-lexicographic-order} For the alternating lexicographic order on $\{0,\pm1\}^\infty$ (with $-1 < 0 < 1$), we have $[11] <_{\mathrm{alt}} [10]$ and $[10\bar{1}] <_{\mathrm{alt}} [100]$, $$\mathbf{m}^{\mathrm{abs}}_{\le_{\mathrm{alt}}} = \varrho_1(\mathbf{m}_{\le_{\mathrm{alt}}}) = 10\bar{1}\bar{1}\bar{1}010\bar{1}01110\bar{1}\bar{1}\bar{1}01110\bar{1}010\bar{1}\bar{1}\bar{1}010\cdots.$$ ## Bimodal order Similarly to the unimodal order, we define the *bimodal order* on $\{0,\pm1\}^\infty$ by $[wa] <_{\mathrm{bi}} [wb]$ if $a < b$ (with $-1 < 0 < 1$) and $|w|_1 + |w|_{-1}$ is even, or $a > b$ and $|w|_1 + |w|_{-1}$ is odd. 
Then $\mathbf{m}^{\mathrm{abs}}_{\le_{\mathrm{bi}}} = \tau_{0,1}(\mathbf{m}_{\le_{\mathrm{alt}}}) = \mathbf{m}_{\le_{\mathrm{uni}}}$. We get the same result for the order defined by $[wa] < [wb]$ if $a < b$ and $|w|_1$ is even, or $a > b$ and $|w|_1$ is odd. We also define the *flipped bimodal order* on $\{0,\pm1\}^\infty$ by $[wa] <_{\mathrm{biflip}} [wb]$ if $a < b$ and $|w|_0$ is even, or $a > b$ and $|w|_0$ is odd. Then $$\mathbf{m}^{\mathrm{abs}}_{\le_{\mathrm{biflip}}} = \varrho_0(\mathbf{m}_{\le_{\mathrm{alt}}}) = 101\bar{1}1010101\bar{1}101\bar{1}101\bar{1}1010101\bar{1}1010\cdots.$$ ## Other orders For $\mathbf{e}\in \{\pm1\}^{\infty}$, we define a cylinder order $\le_{\mathbf{e}}$ on $\{0,\pm1\}^\infty$ by $$\mathbf{a}\le_{\mathbf{e}} \mathbf{b}\quad \mbox{if}\ \mathbf{e}\cdot \mathbf{a}\le_{\mathrm{lex}} \mathbf{e}\cdot \mathbf{b},$$ where $(e_1e_2\cdots) \cdot (a_1a_2\cdots) = (e_1a_1) (e_2a_2) \cdots$. We know from Proposition [Proposition 7](#p:transcendental){reference-type="ref" reference="p:transcendental"} that $\pi_\beta(\mathbf{m}^{\mathrm{abs}}_{\le_{\mathbf{e}}})$ is in $\mathbb{Q}(\beta)$ or transcendental for all Pisot or Salem numbers $\beta$. However, here the value of $\pi_\beta(\mathbf{e}\cdot \mathbf{m}^{\mathrm{abs}}_{\le_{\mathbf{e}}})$, which is the smallest accumulation point of $\{\limsup_{n\to\infty} |\sum_{k=n+1}^\infty \frac{e_ka_k}{\beta^{k-n}}| \,:\, a_1a_2\cdots \in \{0,\pm1\}^\infty\}$ when $e_1=1$ and $\beta \ge 3$, is more relevant. If $\mathbf{e}$ is periodic with period $k$, then $p_{\mathbf{e}\cdot\mathbf{a}}(n) \le k\, p_{\mathbf{a}}(n)$, hence $\pi_\beta(\mathbf{e}\cdot \mathbf{m}^{\mathrm{abs}}_{\le_{\mathbf{e}}})$ is also in $\mathbb{Q}(\beta)$ or transcendental for all Pisot or Salem numbers $\beta$. We do not know whether the same result holds when $\mathbf{e}$ is aperiodic.
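We conclude this section with a small worked instance of the symmetric $q$-expansions used in the proof of Proposition 10; the particular values $q=3$ and $x=\frac{1}{4}$ are chosen here only for illustration. Since $\tilde{T}_3(\frac{1}{4})=\frac{3}{4}-\lfloor\frac{3}{4}+\frac{1}{2}\rfloor=-\frac{1}{4}$ and $\tilde{T}_3(-\frac{1}{4})=-\frac{3}{4}-\lfloor-\frac{3}{4}+\frac{1}{2}\rfloor=\frac{1}{4}$, the digits $a_k=\lfloor 3\tilde{T}_3^{k-1}(\frac{1}{4})+\frac{1}{2}\rfloor$ form the sequence $\mathbf{a}=(1\bar{1})^\infty\in\mathcal{A}_3$, and indeed $$\big\|\tfrac{1}{4}\,3^n\big\| = \big|\pi_3(a_{n+1}a_{n+2}\cdots)\big| = \pi_3\big((1\bar{1})^\infty\big) = \frac{1/3-1/9}{1-1/9} = \frac{1}{4} \quad \mbox{for all}\ n\ge 0.$$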
{ "id": "2310.05454", "title": "Markoff-Lagrange spectrum of one-sided shifts", "authors": "Hajime Kaneko, Wolfgang Steiner (IRIF (UMR\\_8243))", "categories": "math.DS math.NT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | The famous theorem of Matsumura-Oort states that if $X$ is a proper scheme, then the automorphism group functor $\mathfrak{Aut}(X)$ of $X$ is a locally algebraic group scheme. In this paper we generalize this theorem to the category of superschemes, that is if $\mathbb{X}$ is a proper superscheme, then the automorphism group functor $\mathfrak{Aut}(\mathbb{X})$ of $\mathbb{X}$ is a locally algebraic group superscheme. Moreover, we also show that if $H^1(X, \mathcal{T}_X)=0$, where $X$ is the geometric counterpart of $\mathbb{X}$ and $\mathcal{T}_X$ is the tangent sheaf of $X$, then $\mathfrak{Aut}(\mathbb{X})$ is a smooth group superscheme. address: - - Department of Mathematical Science, UAEU, Al-Ain, United Arabic Emirates; Sobolev Institute of Mathematics, Omsk Branch, Pevtzova 13, 644043 Omsk, Russian Federation author: - A. N. Zubkov title: Automorphism group functors of algebraic superschemes --- # introduction {#introduction .unnumbered} The purpose of this paper is to generalize the fundamental result from [@mats-oort] that automorphism group functors of proper schemes are representable by locally algebraic group schemes to superschemes. A close result has been recently proven in [@brp]. More precisely, they showed that if $X$ is superscheme that is superprojective and flat over a Noetherian superscheme $Y$, then the automorphism group functor of $X$ (over $Y$) is representable by a group superscheme over $Y$. This result is a super analog of the well-known theorem of Grothendieck (cf. [@gr]) and their approach is parallel to the Grothendieck's one, i.e. it is based on the proof of the existence of Hilbert (super)scheme in the given setting. Note that both Grothendieck's theorem and its superized version from [@brp] do not imply the main result of [@mats-oort] and our theorem respectively, since a proper (super)scheme is not necessary (super)projective. In addition, the paper [@brp] covers a much wider range of problems than just the representability of the automorphism group functor of (superprojective) superscheme. In this paper, all superschemes are defined over a field $\Bbbk$ of zero or odd characteristic. All (associative and unital) superalgebras are assumed to be super-commutative, unless stated otherwise. The category of superalgebras with graded morphisms is denoted by $\mathsf{SAlg}_{\Bbbk}$. Let $X$ be a superscheme. Let $\mathbb{X}$ be a $\Bbbk$-functor of points of $X$, i.e. $\mathbb{X}(A)=\mathrm{Mor}(\mathrm{SSpec}(A), X)$ for arbitrary $A\in\mathsf{SAlg}_{\Bbbk}$. Then the automorphism group functor $\mathfrak{Aut}(\mathbb{X})$ can be defined by two equivalent ways. The first one is to determine $$\mathfrak{Aut}(\mathbb{X})(A)=\{\mbox{invertible natural transformations of the functor} \ \mathbb{X}_A \},$$ where $\mathbb{X}_A$ is the restriction of $\mathbb{X}$ on the subcategory of $A$-superalgebras with graded $A$-linear morphisms (cf. [@jan]). The second one is to determine $$\mathfrak{Aut}(\mathbb{X})(A)=\{\mbox{invertible endomorphisms of the superscheme} \ X\times\mathrm{SSpec}(A)$$ $$\mbox{over the affine superscheme} \ \mathrm{SSpec}(A)\},$$ similarly to [@mats-oort]. Our approach to the problem of representability of $\mathfrak{Aut}(\mathbb{X})$ is inspired by the description of locally algebraic group superschemes, that was recently obtained in [@maszub2]. 
Recall that if $\mathbb{G}$ is a locally algebraic group superscheme, then $\mathbb{G}=\mathbb{G}_{ev}\mathbb{N}_e(\mathbb{G})$, where $\mathbb{G}_{ev}$ is the largest purely even group supersubscheme of $\mathbb{G}$, and $\mathbb{N}_e(\mathbb{G})$ is a normal group subfunctor of $\mathbb{G}$, called the *formal neighborhood of the identity*, which is strictly pro-representable by a complete local Noetherian Hopf superalgebra. In the ninth section we give an equivalent description of the group subfunctor $\mathbb{N}_e(\mathbb{G})$ as $$\mathbb{N}_e(\mathbb{G})(A)=\ker(\mathbb{G}(A)\to\mathbb{G}(\widetilde{A}))$$ for a superalgebra $A$, where $\widetilde{A}=A/\mathsf{nil}(A)$ (see Proposition [Proposition 38](#final about neighborhood){reference-type="ref" reference="final about neighborhood"} and Corollary [Corollary 40](#normality of N_e){reference-type="ref" reference="normality of N_e"} below). Further, as a superscheme, $\mathbb{G}$ is isomorphic to $\mathbb{G}_{ev}\times \mathrm{SSp}(\Lambda(V))$, where $V=\mathfrak{g}_1^*$ is dual to the odd component of the Lie superalgebra $\mathfrak{g}$ of $\mathbb{G}$. The group structure of $\mathbb{G}$ is uniquely defined by the adjoint action of $\mathbb{G}_{ev}$ on $\mathfrak{g}$, whence on $\mathfrak{g}_1^*$, and by a list of commutator relations on certain \"generators\" of $\mathrm{SSpec}(\Lambda(V))$ (see [@maszub2 Section 12] for more details). Returning to the group functor $\mathfrak{Aut}(\mathbb{X})$, let $\mathfrak{Aut}(\mathbb{X})_{ev}$ denote the largest purely even group subfunctor of $\mathfrak{Aut}(\mathbb{X})$, i.e. $\mathfrak{Aut}(\mathbb{X})_{ev}(A)=\mathfrak{Aut}(\mathbb{X})(A_0)$ for any superalgebra $A$. If $X$ is proper, then using the representability criteria from [@mats-oort] we show that $\mathfrak{Aut}(\mathbb{X})_{ev}$ is a locally algebraic group scheme. Note that there is a natural morphism $\mathfrak{Aut}(\mathbb{X})_{ev}\to \mathfrak{Aut}(\mathbb{X}_{ev})$ of group schemes, which is not an isomorphism in general. We prove that the kernel of this morphism is a nilpotent locally algebraic group scheme and its connected component is a unipotent algebraic group. Further, one can define the formal neighborhood of the identity in $\mathfrak{Aut}(\mathbb{X})$ as (see Proposition [Proposition 44](#nice property of T){reference-type="ref" reference="nice property of T"} below) $$\mathfrak{T}(A)=\ker(\mathfrak{Aut}(\mathbb{X})(A)\to \mathfrak{Aut}(\mathbb{X})(\widetilde{A})), A\in\mathsf{SAlg}_{\Bbbk}.$$ We show that $\mathfrak{T}$ is strictly pro-representable by a complete local Noetherian Hopf superalgebra, provided $X$ is proper. We also prove that $\mathfrak{T}\simeq \mathfrak{T}_{ev}\times\mathrm{SSp}(\Lambda(V))$, where $V$ is dual to the odd component of the Lie superalgebra of $\mathfrak{T}$. Since $\mathfrak{T}_{ev}$ is the formal neighborhood of the identity in $\mathfrak{Aut}(\mathbb{X})_{ev}$, it follows that $$\mathfrak{Aut}(\mathbb{X})_{la}=\mathfrak{Aut}(\mathbb{X})_{ev}\mathfrak{T}\simeq \mathfrak{Aut}(\mathbb{X})_{ev}\times \mathrm{SSp}(\Lambda(V))$$ is the largest locally algebraic group superscheme in $\mathfrak{Aut}(\mathbb{X})$.
In particular, $\mathfrak{Aut}(\mathbb{X})$ is a locally algebraic group superscheme if and only if $\mathfrak{Aut}(\mathbb{X})=\mathfrak{Aut}(\mathbb{X})_{la}$ if and only if the following condition holds: for any superalgebra $A$ and for any $\phi\in \mathfrak{Aut}(\mathbb{X})(A)$ there is $\psi\in \mathfrak{Aut}(\mathbb{X})_{ev}(A)$ such that $\mathfrak{Aut}(\mathbb{X})(\pi_A)(\phi)=\mathfrak{Aut}(\mathbb{X})_{ev}(\pi_A)(\psi),$ where $\pi_A : A\to\widetilde{A}$ is the canonical epimorphism. Superizing [@mats-oort Lemma (1.2)], we show that one needs to check the latter condition for finite dimensional local superalgebras only. It remains to note that if $A$ is a finite dimensional local superalgebra, then $\widetilde{A}$ is a finite field extension of $\Bbbk$, and by Cohen's theorem $\pi_A$ is split. Thus we immediately derive that $\mathfrak{Aut}(\mathbb{X})=\mathfrak{Aut}(\mathbb{X})_{la}$ is a locally algebraic group superscheme, whenever $X$ is proper. The paper is organized as follows. In the first section we recall the definitions of superschemes as $\Bbbk$-functors and as geometric superspaces. Due to the *Comparison Theorem* (see [@maszub1 Theorem 5.14], or [@gabdem I, §1, 4.4]) these categories are equivalent to each other, and we freely use them throughout. In the second section we recall the definitions and the standard properties of separated, proper and smooth morphisms of superschemes. In the third, fourth and fifth sections we develop a fragment of the theory of Noetherian formal superschemes. In most cases the proofs are similar to the purely even case, but in some cases the so-called *even reducibility* is used. For example, one can prove that a Noetherian formal superscheme $\mathfrak{X}$ is proper over another Noetherian formal superscheme $\mathfrak{Y}$ if and only if the Noetherian formal scheme $\mathfrak{X}_0$ is proper over the Noetherian formal scheme $\mathfrak{Y}_0$. In the sixth section we prove some auxiliary properties of $\mathcal{O}_X$-supermodules that will be needed later. In the seventh section we introduce a superized version of Čech cohomologies and prove some standard theorems about them. In the eighth section we recall Levelt's theorem on pro-representable functors and apply it to the category of superalgebras. In the ninth section we recall the definition of the *formal neighborhood* $\mathbb{N}_x(\mathbb{X})$ of a $\Bbbk$-point $x$ of a superscheme $\mathbb{X}$, regarded as a $\Bbbk$-subfunctor of $\mathbb{X}$. It is nothing else but the functor of points of the formal completion of $X$ along the closed supersubscheme $Y\simeq\mathrm{SSpec}(\Bbbk)$, such that $|Y|=\{x\}$. The tenth section is devoted to a (probably) folklore result that the automorphism group functor of a functor on a site is again a functor on the same site. In the eleventh section we start to study the properties of the automorphism group functor $\mathfrak{Aut}(\mathbb{X})$ of a superscheme $\mathbb{X}$. In particular, we prove that $\mathfrak{Aut}(\mathbb{X})_{ev}$ is a group subfunctor of $\mathfrak{Aut}(\mathbb{X})$, and we obtain a preliminary description of the *formal neighborhood of the identity* $\mathfrak{T}$ of $\mathfrak{Aut}(\mathbb{X})$. Our strategy is to prove that $\mathfrak{Aut}(\mathbb{X})$ satisfies the super analogs of the conditions $P_1-P_5$ from [@mats-oort], referred to as super-$P_i, 1\leq i\leq 5$. In particular, if these conditions hold, then $\mathfrak{Aut}(\mathbb{X})_{ev}$ satisfies $P_1-P_5$.
Furthermore, using an elementary trick we show that $\mathfrak{Aut}(\mathbb{X})_{ev}$ satisfies $P_{orb}$, hence it is a locally algebraic group scheme. This is carried out in Sections 12, 13 and 14. In the fifteenth section we describe the structure of $\mathfrak{Aut}(\mathbb{X})_{ev}$. More precisely, there is a natural morphism $\mathfrak{Aut}(\mathbb{X})_{ev}\to \mathfrak{Aut}(\mathbb{X}_{ev})$ of group schemes and we show that its kernel $\mathfrak{R}$ is a nilpotent and almost unipotent locally algebraic group scheme. The word \"almost\" means that its connected component $\mathfrak{R}^0$ is unipotent, but the etale group scheme $\mathfrak{R}/\mathfrak{R}^0$ is not necessarily finite. In the sixteenth section the structure of $\mathfrak{T}$, as a pro-representable group functor, is completely described. Indeed, we prove that $\mathfrak{T}\simeq\mathfrak{T}_{ev}\times\mathrm{SSpec}(\Lambda(V))$, where $\mathfrak{T}_{ev}$ acts on $\mathrm{SSpec}(\Lambda(V))$ by conjugations, and we uniquely determine its group structure by the list of commutator relations on certain \"generators\" of $\mathrm{SSpec}(\Lambda(V))$. Using the results of the sixteenth section, in the seventeenth section we show that if $\mathbb{X}$ is a proper superscheme, then $\mathfrak{Aut}(\mathbb{X})$ contains the largest locally algebraic group supersubscheme $\mathfrak{Aut}(\mathbb{X})_{la}=\mathfrak{Aut}(\mathbb{X})_{ev}\mathfrak{T}$. Besides, we give a sufficient condition for them to be the same. Finally, using the fact that $\mathfrak{Aut}(\mathbb{X})$ and $\mathfrak{Aut}(\mathbb{X})_{la}$ satisfy the conditions super-$P_i, 1\leq i\leq 5,$ and Lemma [Lemma 78](#Lemma 1.2){reference-type="ref" reference="Lemma 1.2"}, we conclude that they are indeed the same. In the last section we prove that if $X$ is proper and $\mathrm{H}^1(X, \mathcal{T}_X)=0$, where $\mathcal{T}_X$ is the *tangent sheaf* of $X$, then the group superscheme $\mathfrak{Aut}(\mathbb{X})$ is smooth. The proof is based on the superized version of [@sga; @1 Proposition III.5.1] and the results of the seventh section. # Superschemes and geometric superschemes For the content of this section we refer to [@brp; @CCF; @jan; @Man; @maszub1; @zub1; @zub2]. Let $\Bbbk$ be a field of zero or odd characteristic. Recall that $\mathsf{SAlg}_{\Bbbk}$ denotes the category of supercommutative $\Bbbk$-superalgebras with graded morphisms. Let $\mathsf{Alg}_{\Bbbk}$ denote the full subcategory of $\mathsf{SAlg}_{\Bbbk}$ consisting of purely even superalgebras. If $A\in\mathsf{SAlg}_{\Bbbk}$, then the superideal $AA_1$ is denoted by $J_A$. The algebra $A/J_A\simeq A_0/A_1^2$ is denoted by $\overline{A}$. Any *prime* (*maximal*) superideal of $A$ has the form $\mathfrak{P}=\mathfrak{p}\oplus A_1$, where $\mathfrak{p}$ is a prime (respectively, maximal) ideal of $A_0$. Let $\mathsf{gr}(A)$ denote the *graded* superalgebra $\oplus_{n\geq 0} J_A^n/J_A^{n+1}$. Then the $0$-th and $1$-st components of $\mathsf{gr}(A)$ are $\overline{A}$ and $J_A/J_A^2\simeq A_1/A_1^3$ respectively. If $A$ is a local superalgebra with the maximal superideal $\mathfrak{M}=\mathfrak{m}\oplus A_1$, then its *residue field* $A/\mathfrak{M}\simeq A_0/\mathfrak{m}$ is denoted by $\Bbbk(A)$. A $\Bbbk$-functor $\mathbb{X}$ is a functor from the category $\mathsf{SAlg}_{\Bbbk}$ to the category $\mathsf{Sets}$. Let $\mathcal{F}$ denote the category of $\Bbbk$-functors. The morphisms in $\mathcal{F}$ are denoted by bold letters $\bf f, g, \ldots$.
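As a simple illustration of this notation (the superalgebra below is chosen only as an example), let $A=\Bbbk[t]\otimes\Lambda(z_1, z_2)$, where $t$ is even and $z_1, z_2$ are odd. Then $$J_A=\Bbbk[t]z_1\oplus\Bbbk[t]z_2\oplus\Bbbk[t]z_1z_2, \quad J_A^2=\Bbbk[t]z_1z_2, \quad J_A^3=0,$$ so that $$\overline{A}=A/J_A\simeq\Bbbk[t]\simeq A_0/A_1^2, \quad J_A/J_A^2\simeq \Bbbk[t]z_1\oplus\Bbbk[t]z_2\simeq A_1/A_1^3,$$ and $\mathsf{gr}(A)=\overline{A}\oplus J_A/J_A^2\oplus J_A^2/J_A^3\simeq A$.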
A $\Bbbk$-functor $\mathbb{X}$ that is representable by a superalgebra $A$ is called an *affine superscheme*. We denote $\mathbb{X}$ by $\mathrm{SSp}(A)$, so that $$\mathrm{SSp}(A)(B)=\mathrm{Hom}_{\mathsf{SAlg}_{\Bbbk}}(A, B), B\in \mathsf{SAlg}_{\Bbbk}.$$ A *closed supersubscheme* $\mathbb{Y}$ of $\mathrm{SSp}(A)$ is defined as $$\mathbb{Y}(B)=\{ \phi\in \mathrm{SSp}(A)(B)\mid \phi(I)=0\}, B\in\mathsf{SAlg}_{\Bbbk},$$ where $I$ is a superideal of $A$. It is clear that $\mathbb{Y}\simeq\mathrm{SSp}(A/I)$ is again an affine superscheme. Similarly, an *open supersubscheme* $\mathbb{U}$ of $\mathrm{SSp}(A)$ is defined as $$\mathbb{U}(B)= \{ \phi\in \mathrm{SSp}(A)(B)\mid \phi(I)B=B\}, B\in\mathsf{SAlg}_{\Bbbk},$$ where $I$ is a superideal of $A$. Note that if $I=Aa, a\in A_0$, then the corresponding open supersubscheme is isomorphic to $\mathrm{SSp}(A_a)$, and it is called a *principal open supersubscheme*. More generally, a subfunctor $\mathbb{Y}$ of a $\Bbbk$-functor $\mathbb{X}$ is called closed (open) if for any morphism ${\bf f} : \mathrm{SSp}(A)\to \mathbb{X}$, the subfunctor ${\bf f}^{-1}(\mathbb{Y})$ is closed (open) in $\mathrm{SSp}(A)$. Finally, a collection of open subfunctors $\mathbb{Y}_i$ of $\mathbb{X}$ forms an *open covering* (of $\mathbb{X}$) if for any field extension $\Bbbk\subseteq L$ we have $\mathbb{X}(L)=\cup_{i\in I} \mathbb{Y}_i(L)$. Let $\mathcal{A}$ denote the full subcategory of $\mathcal{F}$ consisting of affine superschemes. Note that $\mathsf{SAlg}_{\Bbbk}^{op}$ is naturally isomorphic to $\mathcal{A}$. The category $\mathcal{A}$ can be regarded as a site (see Section 6 below, or [@fundalg Definition 2.24]) with respect to the topology of *local coverings* $$\{\mathrm{SSp}(A_{a_i})\to\mathrm{SSp}(A)\}, \sum_i A_0 a_i=A_0,$$ and to the topology of *fpqc coverings* $$\{\mathrm{SSp}(A_i)\to\mathrm{SSp}(A)\}, \ \prod_i A_i \ \mbox{is a faithfully flat} \ A-\mbox{supermodule}.$$ A $\Bbbk$-functor $\mathbb{X}$ is called *local* if it is a sheaf on $\mathsf{SAlg}_{\Bbbk}\simeq\mathcal{A}^{op}$ with respect to the topology of local coverings. A local $\Bbbk$-functor $\mathbb{X}$ is said to be a *superscheme*, provided $\mathbb{X}$ has an open covering by affine supersubschemes. The obvious examples of superschemes are affine superschemes and their open supersubschemes. Note that any superscheme is also a sheaf on $\mathsf{SAlg}_{\Bbbk}\simeq\mathcal{A}^{op}$ with respect to the topology of fpqc coverings. The category of superschemes is equivalent to the category of *geometric superschemes*. More precisely, a *geometric superspace* $X$ consists of a topological space $|X|$ and a sheaf of super-commutative superalgebras $\mathcal{O}_X$ on $|X|$, such that all stalks $\mathcal{O}_{X, x}$ for $x\in |X|$ are local superalgebras. A morphism of superspaces $f : X\to Y$ is a pair $(|f|, f^{\sharp})$, where $|f|: |X|\to |Y|$ is a morphism of topological spaces and $f^{\sharp} : \mathcal{O}_Y\to |f|_*\mathcal{O}_X$ is a morphism of sheaves such that the induced morphism of stalks $f^{\sharp}_x : \mathcal{O}_{Y,|f|(x)} \to \mathcal{O}_{X, x}$ is local for any $x\in |X|$. In what follows, the residue field $\Bbbk(\mathcal{O}_{X, x})$ is denoted by $\Bbbk(x), x\in |X|$. For a field extension $\Bbbk\subseteq L$ we say that a point $x\in |X|$ is an *$L$-point*, if $\Bbbk(x)\simeq L$. A geometric superspace $X$ is called a *geometric superscheme* if it can be covered by open supersubspaces, each of which is isomorphic to an *affine geometric superscheme* $\mathrm{SSpec}(A)$.
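A basic example, included here only for illustration: let $\Bbbk[t|z]$ be the polynomial superalgebra on one even generator $t$ and one odd generator $z$. A graded morphism $\Bbbk[t|z]\to B$ is uniquely determined by the images of $t$ and $z$, hence $$\mathrm{SSp}(\Bbbk[t|z])(B)=\mathrm{Hom}_{\mathsf{SAlg}_{\Bbbk}}(\Bbbk[t|z], B)\simeq B_0\times B_1, \quad B\in\mathsf{SAlg}_{\Bbbk}.$$ The superideal $I=\Bbbk[t|z]z$ defines the closed supersubscheme $\mathrm{SSp}(\Bbbk[t|z]/I)\simeq\mathrm{SSp}(\Bbbk[t])$, while $I=\Bbbk[t|z]t$ defines the principal open supersubscheme $\mathrm{SSp}(\Bbbk[t, t^{-1}|z])$.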
Recall that the underlying topological space of $\mathrm{SSpec}(A)$ coincides with the prime spectrum of $A_0$ and for any open subset $U\subseteq |\mathrm{Sp}(A_0)|$, the super-ring $\mathcal{O}_{\mathrm{SSpec}(A)}(U)$ consists of all locally constant functions $h : U\to \sqcup_{\mathfrak{p}\in U} A_{\mathfrak{p}}$ such that $h(\mathfrak{p})\in A_{\mathfrak{p}}, \mathfrak{p}\in U$. A morphism $f : X\to Y$ of geometric superschemes is called *open (closed) immersion*, if $|f|$ is a homeomorphism of $|X|$ onto an open subset $U\subseteq |Y|$ (respectively, a homeomorphism onto a closed subset $Z\subseteq |Y|$), such that $f^{\sharp}$ is an isomorphism of sheaves (respectively, $f^{\sharp}_x$ is surjective for any $x\in |X|$). A composition of closed and open immersions is called just *immersion*. The equivalence of categories is given by $X\mapsto\mathbb{X}$, where for any superalgebra $A$ there is $\mathbb{X}(A)=\mathrm{Mor}(\mathrm{SSpec}(A), X)$. This equivalence takes open (closed) immersions in the category of geometric superschemes to the open (closed) embeddings in the category of superschemes. In what follows we use the term *superscheme* for both geometric superschemes and just superschemes, but we distinguish them by notations, i.e. geometric superschemes and their morphisms are denoted by $X, Y, \ldots$, and $f, g, \ldots$ , and just superschemes and their morphisms (natural transformations of $\Bbbk$-functors) are denoted by $\mathbb{X}, \mathbb{Y}, \ldots$, and ${\bf f}, {\bf g}, \ldots$, respectively. Let $\mathcal{P}$ be a property of geometric superscheme or a property of morphisms of geometric superschemes, such that for any $X\simeq X'$ or any commutative diagram $$\begin{array}{ccc} X & \stackrel{f}{\to} & Y \\ \downarrow & & \downarrow \\ X' & \stackrel{f'}{\to} & Y' \end{array},$$ whose vertical arrows are isomorphisms, $X$ or $f$ satisfy $\mathcal{P}$ if and only if $X'$ or $f'$ do. Due the above equivalence, the property $\mathcal{P}$ can be translated to to category of just superschemes (and vice versa). In other words, a superscheme $\mathbb{X}$ or a morphism ${\bf f} : \mathbb{X}\to\mathbb{Y}$ satisfy $\mathcal{P}$, provided the corresponding $X$ or $f : X\to Y$ do. Any superscheme $X$ contains a largest purely even, closed supersubscheme $X_{ev}$, such that $|X_{ev}|=|X|$ and $\mathcal{O}_{X_{ev}}=\mathcal{O}_X/\mathcal{J}_X$, where $\mathcal{J}_X$ is the superideal sheaf of $\mathcal{O}_X$, generated by $(\mathcal{O}_X)_1$. The corresponding closed supersubscheme of $\mathbb{X}$ can be defined as $\mathbb{X}_{ev}(A)=\mathbb{X}(i)(A_0)$, where $A\in\mathsf{SAlg}_{\Bbbk}$ and $i : A_0\to A$ is the natural algebra embedding. Similarly, one can define a purely even superscheme $X_0$, such that $|X_0|=|X|$ and $\mathcal{O}_{X_0}=(\mathcal{O}_X)_0$. It is clear that there is a natural morphism $X\to X_0$ of geometric superschemes, such that any morphism from $X$ to a purely even superscheme $Y$ factors through $X\to X_0$. Translating this property to the category of superschemes, we define the superscheme $\mathbb{X}_0$. Recall that a morphism $f : X\to Y$ is said to be *locally of finite type*, if there is a covering of $Y$ by open supersubschemes $V_i\simeq \mathrm{SSpec}(B_i)$ such that for every index $i$, the open supersubscheme $f^{-1}(V_i)$ is covered by open supersubschemes $U_{ij}\simeq \mathrm{SSpec}(A_{ij})$, where each $A_{ij}$ is a finitely generated $B_i$-superalgebra. 
If each $f^{-1}(V_i)$ can be covered by a finite number of $U_{ij}$, then $f$ is said to be a morphism of finite type (cf. [@hart II.3]). For example, if $Y=\mathrm{SSpec}(\Bbbk)$, then $X$ is called a *locally algebraic* or *algebraic* superscheme respectively. Observe that any algebraic superscheme is *Noetherian* (cf. [@hart; @zub2]). Finally, if $S$ is a superscheme and $X\to S$ is a morphism of superschemes, then $X$ is said to be an *$S$-superscheme*, or *superscheme over* $S$. Let $Y$ be another $S$-superscheme. Then a morphism $X\to Y$ is called an *$S$-morphism*, if the diagram $$\begin{array}{ccc} X & \to & Y \\ \searrow & & \swarrow \\ & S & \end{array}$$ is commutative. # Proper and smooth morphisms of superschemes For the content of this section we refer to [@brp A.5, A.6], [@maszub2], [@maszub3 Appendix A] and [@maszub4; @zub2]. Recall that a superscheme morphism $f : X\to Y$ is said to be *separated*, if the diagonal morphism $\delta_f : X\to X\times_Y X$ is a closed immersion. Respectively, a superscheme is called *separated*, if the canonical morphism $X\to\mathrm{SSpec}(\Bbbk)$ is separated. Both properties are *even reducible*, i.e. $f : X\to Y$ or $X$ are separated if and only if the induced scheme morphism $f_{ev} : X_{ev}\to Y_{ev}$ or the scheme $X_{ev}$ are (see [@brp Proposition A.13], [@maszub2 Proposition 2.4] or Lemma [Lemma 2](#reducibility){reference-type="ref" reference="reducibility"} below). The following property of superschemes, which are separated over an affine superscheme, will be in use throughout. **Lemma 1**. *(see [@hart Exercise II.4.3]) Let $X$ be a superscheme, separated over an affine superscheme $S$. Then for any open affine supersubschemes $U$ and $V$ in $X$, the superscheme $U\cap V$ is affine as well.* *Proof.* Let $\mathrm{pr}_1$ and $\mathrm{pr}_2$ denote the canonical projections of $X\times_S X$ to the first and second factors respectively. Since $X$ is identified with a closed supersubscheme in $X\times_S X$, we have $$U\cap V=X\cap \mathrm{pr}_1^{-1}(U)\cap\mathrm{pr}_2^{-1}(V),$$ and $\mathrm{pr}_1^{-1}(U)\cap\mathrm{pr}_2^{-1}(V)\simeq U\times_S V$ is an open affine supersubscheme of $X\times_S X$. ◻ Further, a morphism $f : X\to Y$ is said to be *proper*, if it is of finite type, separated and universally closed. Respectively, a superscheme $X$ is called *proper*, if the canonical morphism $X\to\mathrm{SSpec}(\Bbbk)$ is proper. The *Valuative Criteria of Separatedness and Properness* (see [@hart Theorem II.4.3 and Theorem II.4.7]) are still valid for superschemes (cf. [@brp Corollary A.14]). **Lemma 2**. *Let $f : X\to Y$ be a morphism of superschemes. Assume that $Z$ and $T$ are closed supersubschemes of $X$ and $Y$ respectively, their defining superideal sheaves are locally nilpotent, and $f$ takes $Z$ to $T$. Then $X$ is separated/proper over $Y$ if and only if $Z$ is separated/proper over $T$. In particular, $X$ is separated/proper over $Y$ if and only if $X_{ev}$ is separated/proper over $Y_{ev}$ (as a scheme) if and only if $X_0$ is separated/proper over $Y_0$ (as a scheme as well).* *Proof.* Just observe that any morphism $\mathrm{SSpec}(D)\to X$ (or $\mathrm{SSpec}(D)\to Y$), where $D$ is a purely even integral domain, factors through the closed immersion $Z\to X$ (respectively, through $T\to Y$). ◻ We say that a morphism $f : X\to Y$, locally of finite type, is *smooth*, whenever it is *formally smooth*.
The latter means that for any affine superscheme $\mathrm{SSpec}(A)$ over $Y$ and for any nilpotent superideal $I$ in $A$, the natural map $\mathbb{X}_{\mathbb{Y}}(A)\to \mathbb{X}_{\mathbb{Y}}(A/I)$ is surjective, where $\mathbb{X}_{\mathbb{Y}}(R)$ denote the set of all $Y$-morphism $\mathrm{SSpec}(R)\to X$. Respectively, a locally algebraic superscheme $X$ is smooth, if the canonical morphism $X\to\mathrm{SSpec}(\Bbbk)$ is smooth (for more details, see [@maszub3 Appendix A] or [@brp A.6]). With any superscheme $X$ one can associate a superscheme $\mathsf{gr}(X)$ such that $|\mathsf{gr}(X)|=|X|$ and $\mathcal{O}_{\mathsf{gr}(X)}$ is isomorphic to the sheafification of the presheaf $$U\mapsto \oplus_{n\geq 0}\mathcal{J}_X(U)^n/\mathcal{J}_X(U)^{n+1}, \ U\subseteq |X|.$$ Obviously, $X\mapsto\mathsf{gr}(X)$ is an endofunctor of the category of locally algebraic superschemes. Note also that if $X\simeq\mathrm{SSpec}(A)$, then $\mathsf{gr}(X)\simeq\mathrm{SSpec}(\mathsf{gr}(A))$. Furthermore, for a superalgebra morphism $\phi : A\to B$ the induced superscheme morphism $\mathsf{gr}(\mathrm{SSpec}(\phi))$ is naturally identified with $\mathrm{SSpec}(\mathsf{gr}(\phi))$. As in [@zubbov] we call a superscheme $X$ *graded*, provided $X\simeq\mathsf{gr}(Y)$ for some superscheme $Y$. Note that $X$ is Noetherian if and only if $\mathsf{gr}(X)$ is. Moreover, in the latter case $\mathcal{J}_X^t=0$ for sufficiently large $t$, hence $\mathcal{O}_{\mathsf{gr}(X)}=\oplus_{n\geq 0}\mathcal{J}_X^n/\mathcal{J}_X^{n+1}$. In general, $\mathcal{O}_{\mathsf{gr}(X)}$ is a certain subsheaf of the sheaf $\prod_{n\geq 0}\mathcal{J}_X^n/\mathcal{J}_X^{n+1}$ that is not equal to $\oplus_{n\geq 0}\mathcal{J}_X^n/\mathcal{J}_X^{n+1}$. # Noetherian formal superschemes For the content of this section we refer to [@EGA; @I Section 0.7 and Section I.10], [@EGA; @III Section III.5] and [@hart Section II.9]. Let $X$ be a Noetherian superscheme and $Y$ be a closed supersubscheme of $X$, defined by a superideal sheaf $\mathcal{I}_Y$. Recall that the *formal completion of $X$ along $Y$* is defined as the geometric superspace $\widehat{X}=(|Y|, \varprojlim_{n\geq 0} \mathcal{O}_X/\mathcal{I}_Y^{n+1})$. Note that $\widehat{X}$ is covered by open supersubspaces $\widehat{U}$ (formal completion of $U$ along $Y\cap U$), where $U$ runs over an open covering of $X$. If $X=\mathrm{SSpec}(A)$ and $Y=\mathrm{SSpec}(A/I)$, then $\widehat{X}$ is denoted by $\mathrm{SSpecf}(\widehat{A})$, where $\widehat{A}=\varprojlim_n A/I^n$. The topological space $|\mathrm{SSpecf}(\widehat{A})|$ can be identified with the set of open prime superideals of $\widehat{A}$. Indeed, an open prime superideal of $\widehat{A}$ contains $\widehat{I}=\widehat{A} I$. Since $\widehat{A}/\widehat{I}\simeq A/I$, it has a form $\widehat{\mathfrak{P}}=\widehat{A}\mathfrak{P}$, where $\mathfrak{P}\in |Y|$ (cf. [@maszub4 Lemma 1.7]). Since $\widehat{\mathfrak{P}}\cap A=\mathfrak{P}$, the map $\mathfrak{P}\to\widehat{\mathfrak{P}}$ is bijective. Note also that $\mathcal{O}(\mathrm{SSpecf}(\widehat{A}))\simeq\widehat{A}$. Let $B$ be a Noetherian superalgebra and $J$ be a superideal of $B$. If $B$ is Hausdorff and complete in the $J$-adic topology, then we say that $B$ is *$J$-adic*, or just *adic*, superalgebra (cf. [@EGA; @I Definition 0.7.1.9]). The superideal $J$ is said to be a *superideal of definition* of $B$. For any multiplicative set $S\subseteq B_0$, let $B\{S^{-1}\}$ denote the completion of $S^{-1}B$ in the $S^{-1}J$-adic topology. 
If $S=\{g^n\mid n\geq 0\}$, then $B\{S^{-1}\}$ is denoted by $B_{(g)}$. **Lemma 3**. *Let $\widehat{X}$ be a completion of $X=\mathrm{SSpec}(A)$ along $Y=\mathrm{SSpec}(A/I)$, where $A$ is a Noetherian superalgebra. For any open superideal $\widehat{\mathfrak{P}}\in\mathrm{SSpecf}(\widehat{A})$ the stalk $\mathcal{O}_{\widehat{X}, \widehat{\mathfrak{P}}}$ is a local superalgebra, such that its residue field is isomorphic to $\Bbbk(A_{\mathfrak{P}})$.* *Proof.* Set $S=A_0\setminus\mathfrak{p}$. By [@maszub4 Lemma 1.5] the completion of $A$ in the $I$-adic topology coincides with the completion of $A$ in the $I_0$-adic topology. Thus the even component of $\mathcal{O}_{\widehat{X}, \widehat{\mathfrak{P}}}=\varinjlim_{f\in S} \widehat{A_f}$ is isomorphic to $\varinjlim_{f\in S}\widehat{(A_0)_f}$. For any $g\in \widehat{A_0}\setminus\widehat{\mathfrak{p}}$ there is $f\in S$ such that $g\equiv f\pmod{\widehat{I_0}}$. Since $\frac{g}{f}=1+x$, where $x$ is topologically nilpotent in $\widehat{(A_0)_f}$, we have $\widehat{(A_0)_f}\simeq \widehat{A_0}_{(g)}$ and [@EGA; @I Corollary 0.7.6.3 and Proposition 0.7.6.17] imply the statement. ◻ This lemma shows that the formal completion of arbitrary Noetherian superscheme along its closed supersubscheme is a geometric superspace indeed. A geometric superspace $\mathfrak{X}$ is said to be a *Noetherian formal superscheme* (briefly, N.f. superscheme), if there is a finite open covering $\{\mathfrak{X}_i\}$ of $\mathfrak{X}$, such that each $\mathfrak{X}_i$ is isomorphic to the completion of some Noetherian superscheme $X_i$ along its closed supersubscheme $Y_i$. By the above remark, any N.f. superscheme has a finite open covering by the affine N.f. supersubschemes $\mathfrak{X}_i\simeq\mathrm{SSpecf}(A_i)$, where each $A_i$ is a Noetherian superalgebra, that is complete in the $J_i$-adic topology for some superideal $J_i$ of $A_i$. N.f. superschemes form a full subcategory of the category of geometric superspaces, that contains all Noetherian superschemes. Recall that a sheaf $\mathfrak{F}$ of $\mathcal{O}_{\mathfrak{X}}$-supermodules is said to be *coherent*, if for some open covering $\{\mathfrak{X}_i\}$ as above, each sheaf $\mathfrak{F}|_{\mathfrak{X}_i}$ is isomorphic to $\widehat{\mathcal{F}_i}=\varprojlim_{n\geq 0}\mathcal{F}_i/\mathcal{F}_i \mathcal{I}^{n+1}_{Y_i}$, the completion of some coherent sheaf of $\mathcal{O}_{X_i}$-supermodules $\mathcal{F}_i$ along $Y_i$. Assume that $\mathfrak{X}=\widehat{X}$ is the formal completion of $X=\mathrm{SSpec}(A)$ along $Y=\mathrm{SSpec}(A/I)$, i.e. $\mathfrak{X}\simeq\mathrm{SSpecf}(\widehat{A})$. Let $M$ be a finitely generated $A$-supermodule. Following [@hart Section II.9], we denote by $M^{\triangle}$ the completion of the coherent $\mathcal{O}_X$-supermodule $\widetilde{M}$ along $Y$. The following proposition superizes [@hart Proposition II.9.4]. **Proposition 4**. *We have :* 1. *$\mathfrak{J}=I^{\triangle}$ is a superideal sheaf in $\mathcal{O}_{\mathfrak{X}}$, such that $\mathcal{O}_{\mathfrak{X}}/\mathfrak{I}^n$ is isomorphic to $\widetilde{A/I^n}$ as a sheaf on $|Y|$, for any $n\geq 0$;* 2. *if $M$ is a fintely generated $A$-supermodule, then $M^{\triangle}\simeq \widetilde{M}\otimes_{\mathcal{O}_X}\mathcal{O}_{\mathfrak{X}}$;* 3. *the functor $M\mapsto M^{\triangle}$ is an exact functor from the category of finitely generated $A$-supermodules to the category of coherent $\mathcal{O}_{\mathfrak{X}}$-supermodules;* 4. 
*for any integer $n> 0$ there is $(I^{\triangle})^n=(I^n)^{\triangle}$.* *Proof.* If $U$ is an affine supersubscheme of $X$, then [@zub2 Proposition 2.1(1), Proposition 3.1 and Lemma 3.2] imply that the functor $M\mapsto \widetilde{M}(|U|)$ is exact, where $N=\widetilde{M}(|U|)$ is a finitely generated supermodule over the Noetherian superalgebra $\mathcal{O}_X(|U|)$. Let $J$ denote the superideal $\widetilde{I}(|U|)$. Then $$M^{\triangle}(|U|)\simeq\widehat{N}=\varprojlim_{n\geq 0} N/J^{n+1}N$$ and [@kolzub Lemma 6.1] implies $(3)$. Now, the same arguments as in [@hart Proposition II.9.4], combined with [@kolzub Lemma 6.1(1)], prove $(1)$ and $(2)$. Finally, $(I^{\triangle})^n$ is the sheafification of the presheaf $$U\mapsto \widetilde{I}(U)^n\otimes_{\mathcal{O}_X(U)}\mathcal{O}_{\mathfrak{X}}(U),$$ and $\widetilde{I}^n=\widetilde{I^n}$ is the sheafification of the presheaf $U\mapsto \widetilde{I}(U)^n, U\subseteq |Y|$. Then [@stack Lemma 17.16.1] concludes the proof. ◻ **Proposition 5**. *A geometric superspace $\mathfrak{X}$ is a N.f. superscheme if and only if $\mathfrak{X}_0$ is a N.f. scheme and $(\mathcal{O}_{\mathfrak{X}})_1$ is a coherent sheaf of $\mathcal{O}_{\mathfrak{X}_0}$-modules.* *Proof.* Note that if $X$ is a Noetherian superscheme and $\mathcal{I}$ is a coherent superideal sheaf of $\mathcal{O}_X$, then there is a nonnegative integer $n$ such that $\mathcal{I}^n\subseteq \mathcal{O}_X\mathcal{I}_0$ (use [@maszub4 Lemma 1.5]). Let $\widehat{X}$ be a completion of $X$ along a closed supersubscheme $Y$. Let $Y'$ denote $(|Y|, \mathcal{O}_X/\mathcal{O}_X (\mathcal{I}_Y)_0)$. The above remark immediately implies that $\widehat{X}$ and $(\widehat{X})_0$ are the completions of $X$ and $X_0$ along $Y'$ and $Y_0$ respectively, hence the part \"only if\" follows. Assume now that $\mathfrak{X}$ is a geometric superspace that satisfies the conditions of the proposition. Without loss of generality, one can assume that $\mathfrak{X}_0\simeq\mathrm{Specf}(A)$, where $A$ is an adic Noetherian algebra. By [@hart Proposition II.9.4] the $\mathcal{O}_{\mathfrak{X}_0}$-module $(\mathcal{O}_{\mathfrak{X}})_1$ is isomorphic to $M^{\triangle}\simeq \widetilde{M}\otimes_{\mathcal{O}_{X_0}} \mathcal{O}_{\mathfrak{X}_0}$, where $X_0=\mathrm{Spec}(A)$ and $M$ is a finitely generated $A$-module. The superalgebra sheaf structure of $\mathcal{O}_{\mathfrak{X}}$ is uniquely defined by an $\mathcal{O}_{\mathfrak{X}_0}$-linear morphism $$M^{\triangle}\otimes_{\mathcal{O}_{\mathfrak{X}_0}} M^{\triangle}\to \mathcal{O}_{\mathfrak{X}_0}\simeq A^{\triangle}.$$ We have $$M^{\triangle}\otimes_{\mathcal{O}_{\mathfrak{X}_0}} M^{\triangle}\simeq (\widetilde{M}\otimes_{\mathcal{O}_{X_0}}\widetilde{M})\otimes_{\mathcal{O}_{X_0}}\mathcal{O}_{\mathfrak{X}_0} \simeq \widetilde{M\otimes_A M}\otimes_{\mathcal{O}_{X_0}} \mathcal{O}_{\mathfrak{X}_0}\simeq (M\otimes_A M)^{\triangle},$$ hence by [@hart Theorem II.9.7] the above $\mathcal{O}_{\mathfrak{X}_0}$-linear morphism is uniquely defined by a morphism $M\otimes_A M\to A$ of $A$-modules. Thus $B=A\oplus M$ has the natural superalgebra structure, and $\mathfrak{X}\simeq\mathrm{SSpecf}(B)$. ◻ Let $\mathfrak{X}$ be a N.f. superscheme. A superideal sheaf $\mathfrak{I}$ of $\mathcal{O}_{\mathfrak{X}}$ is said to be a *superideal of definition*, if $$\mathrm{Supp}(\mathcal{O}_{\mathfrak{X} }/\mathfrak{I})=\{x\in |\mathfrak{X}| \mid (\mathcal{O}_{\mathfrak{X} }/\mathfrak{I})_x\neq 0 \}=|\mathfrak{X}|$$ and $(|\mathfrak{X}|, \mathcal{O}_{\mathfrak{X} }/\mathfrak{I})$ is a Noetherian superscheme.
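For instance (an elementary example, included only for illustration), let $\mathfrak{X}=\mathrm{SSpecf}(A)$ with $A=\Bbbk[[t]][z]$, where $t$ is even, $z$ is odd and $A$ is complete in the $At$-adic topology. Then $|\mathfrak{X}|$ consists of a single point, and both $$\mathfrak{I}_1=(At)^{\triangle} \quad \mbox{and} \quad \mathfrak{I}_2=(At+Az)^{\triangle}$$ are superideals of definition: the corresponding quotients are the Noetherian superschemes $\mathrm{SSpec}(\Bbbk[z])$ and $\mathrm{SSpec}(\Bbbk)$, both supported on all of $|\mathfrak{X}|$. Moreover, $\mathfrak{I}_2^2\subseteq\mathfrak{I}_1\subseteq\mathfrak{I}_2$, in accordance with the first statement of Proposition [Proposition 6](#superideal of definition){reference-type="ref" reference="superideal of definition"} below.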
**Proposition 6**. *(compare with [@hart Proposition II.9.5]) Let $\mathfrak{X}$ be a N.f. superscheme. Then :* 1. *if $\mathfrak{I}_1$ and $\mathfrak{I}_2$ are two superideals of definition, then there are positive integers $m, n$, such that $\mathfrak{I}^m_1\subseteq \mathfrak{I}_2$ and $\mathfrak{I}^n_2\subseteq \mathfrak{I}_1$;* 2. *if $\mathfrak{X}\simeq\mathrm{SSpecf}(A)$, where $A$ is an $I$-adic Noetherian superalgebra, and $\mathfrak{J}$ is a superideal of definition of $\mathfrak{X}$, then there is a superideal of definition $J$ of $A$, such that $\mathfrak{J}=J^{\triangle}$;* 3. *if $\mathfrak{I}$ is a superideal of definition, then for any positive integer $n$, $\mathfrak{I}^n$ is as well;* 4. *if $\mathfrak{J}$ is an ideal of definition of $\mathfrak{X}_0$, then $\mathcal{O}_{\mathfrak{X}}\mathfrak{J}$ is a superideal of definition of $\mathfrak{X}$.* *Proof.* The proof of (a) can be copied from [@hart Proposition II.9.5(a)] verbatim. Thus, under the conditions of (b) there is $(I^n)^{\triangle}= (I^{\triangle})^n\subseteq\mathfrak{J}$ for some integer $n>0$. Without loss of generality, one can assume that $I^{\triangle}\subseteq\mathfrak{J}$. Since $$\mathcal{O}_{\mathrm{SSpec}(A/I)}\simeq\mathcal{O}_{\mathfrak{X}}/I^{\triangle}\to\mathcal{O}_{\mathfrak{X}}/\mathfrak{J}$$ is a surjective morphism of superalgebra sheaves, $Z=(|\mathfrak{X}|, \mathcal{O}_{\mathfrak{X} }/\mathfrak{J})$ is a closed supersubscheme of $\mathrm{SSpec}(A/I)$, defined by a superideal $J\supseteq I$. Moreover, $|Z|=|\mathrm{SSpec}(A/I)|$ implies $J^m\subseteq I$ for some integer $m>0$, and $$J^{\triangle}/I^{\triangle}\simeq (J/I)^{\triangle}\simeq\widetilde{J/I}\simeq\widetilde{J}/\widetilde{I}$$ infers $\mathfrak{J}=J^{\triangle}$. The statement (d) is obviously local, i.e. one can assume that $\mathfrak{X}\simeq\mathrm{SSpecf}(A)$ and $\mathfrak{J}=J^{\triangle}$ for some ideal of definition of $A_0$. Note that $$(\mathcal{O}_{\mathfrak{X}})_1\mathfrak{J}\subseteq (\mathcal{O}_{\mathfrak{X}})_1\simeq \widetilde{A_1}\otimes_{\mathcal{O}_{X_0}}\mathcal{O}_{\mathfrak{X}_0}$$ is the natural image of the sheaf $\widetilde{A_1}\otimes_{\mathcal{O}_{X_0}}\mathfrak{J}$. By [@stack Lemma 17.16.3] we have the exact sequence $$0\to (\mathcal{O}_{\mathfrak{X}})_1\mathfrak{J}\to \widetilde{A_1}\otimes_{\mathcal{O}_{X_0}}\mathcal{O}_{\mathfrak{X}_0}\to \widetilde{A_1}\otimes_{\mathcal{O}_{X_0}}(\mathcal{O}_{\mathfrak{X}_0}/\mathfrak{J})\to 0.$$ Since $\mathcal{O}_{\mathfrak{X}_0}/\mathfrak{J}\simeq\widetilde{A/J}$, $\widetilde{A_1}\otimes_{\mathcal{O}_{X_0}}(\mathcal{O}_{\mathfrak{X}_0}/\mathfrak{J})$ is isomorphic to $\widetilde{A_1/JA_1}$ as $\mathcal{O}_{\mathrm{Spec}(A/J)}$-module, thus (d) follows. Finally, the statement (c) is also local, hence it immediately follows by Proposition [Proposition 4](#some standard prop-s of coherent sheaves over affine){reference-type="ref" reference="some standard prop-s of coherent sheaves over affine"}(1, 4). ◻ **Lemma 7**. *Let $A$ be a Noetherian superalgebra and $I$ be a superideal of $A$. If $\{ M_n, \phi_{n, m} : M_n\to M_m, 0\leq m\leq n\}$ is an inverse system, where each $M_n$ is a finitely generated $A/I^n$-supermodule and each map $\phi_{n, m}$ is surjective with $\ker\phi_{n, m}=I^m M_n$, then $M=\varprojlim_n M_n$ is a finitely generated $\widehat{A}$-supermodule, such that $M/I^n M\simeq M_n$ for any $n\geq 0$.* *Proof.* Assume that the elements $m^{(1)}_1, \ldots, m^{(1)}_s$ generate $M_1$. 
Since $M_1\simeq M_2/IM_2$, the elements $m^{(2)}_1, \ldots, m^{(2)}_s$, such that $\phi_{2, 1}(m^{(2)}_i)=m^{(1)}_i$ for any $i$, generate $M_2$. Similarly, $M_1\simeq M_3/IM_3$ infers that $m^{(3)}_1, \ldots, m^{(3)}_s$, such that $\phi_{3, 2}(m^{(3)}_i)=m^{(2)}_i$ for any $i$, generate $M_3$, and so on. Set $m_i=\varprojlim_n m_i^{(n)}, 1\leq i\leq s$. Then $M=\sum_{1\leq i\leq s}\widehat{A}m_i$ by [@maszub4 Lemma 1.7]. ◻ The following proposition superizes [@hart Proposition II.9.6]. **Proposition 8**. *Let $\mathfrak{X}$ be a N.f. superscheme. Assume that $\mathfrak{I}$ is a superideal of definition of $\mathfrak{X}$. Let $X_n$ denote the Noetherian superscheme $(|\mathfrak{X}|, \mathcal{O}_{\mathfrak{X}}/\mathfrak{I}^n), n\geq 0$. Then :* 1. *if $\mathfrak{F}$ is a coherent sheaf of $\mathcal{O}_{\mathfrak{X}}$-supermodules, then for any $n\geq 0$, $\mathcal{F}_n=\mathfrak{F}/\mathfrak{I}^n\mathfrak{F}$ is a coherent sheaf of $X_n$-supermodules, so that $\mathfrak{F}\simeq\varprojlim_n \mathcal{F}_n$;* 2. *if $\{\mathcal{F}_n, \phi_{n, m} : \mathcal{F}_n\to\mathcal{F}_m, n>m\geq 0\}$ is an inverse system, where each $\mathcal{F}_n$ is a coherent sheaf of $\mathcal{O}_{X_n}$-supermodules and each morphism $\phi_{n, m}$ is surjective with $\ker\phi_{n, m}=\mathfrak{I}^m\mathcal{F}_n$, then $\mathfrak{F}=\varprojlim_n\mathcal{F}_n$ is a coherent sheaf of $\mathcal{O}_{\mathfrak{X}}$-supermodules, such that $\mathcal{F}_n\simeq\mathfrak{F}/\mathfrak{I}^n\mathfrak{F}$ for any $n\geq 0$;* 3. *a sheaf $\mathfrak{F}$ of $\mathcal{O}_{\mathfrak{X}}$-supermodules is coherent if and only if the sheaf $\mathfrak{F}|_{\mathfrak{X}_0}$ of $\mathcal{O}_{\mathfrak{X}_0}$-modules is.* **Proof.* For the first two statements use Lemma [Lemma 7](#theorem II.9.3){reference-type="ref" reference="theorem II.9.3"} and copy the proof of [@hart Proposition II.9.6]. To prove the third one, let $\mathfrak{J}=\mathcal{O}_{\mathfrak{X} }\mathfrak{J}_0$. Then by [@zub2 Proposition 3.1] each $\mathcal{F}_n\simeq \mathfrak{F}/\mathfrak{J}_0^n\mathfrak{F}$ is a coherent $\mathcal{O}_{X_n}$-supermodule if and only if each $\mathcal{F}_n|_{\mathcal{O}_{(X_n)_0}}$ is a coherent $\mathcal{O}_{(X_n)_0}$-module if and only if $\mathfrak{F}|_{\mathfrak{X}_0}$ is a coherent $\mathcal{O}_{\mathfrak{X}_0}$-module. ◻* **Proposition 9**. *Let $\mathfrak{X}\simeq\mathrm{SSpecf}(A)$, where $A$ is an adic Noetherian superalgebra. Then $M\mapsto M^{\triangle}$ is an equivalence of the category of finitely generated $A$-supermodules to the category of coherent $\mathcal{O}_{\mathfrak{X}}$-supermodules, and its quasi-inverse is the exact functor $\mathfrak{F}\mapsto \mathfrak{F}(|\mathfrak{X}|)$.* *Proof.* Proposition [Proposition 4](#some standard prop-s of coherent sheaves over affine){reference-type="ref" reference="some standard prop-s of coherent sheaves over affine"}(2) implies $M^{\triangle}(|\mathfrak{X}|)=M$. Conversely, assume that $\mathfrak{F}$ is a coherent sheaf of $\mathcal{O}_{\mathfrak{X}}$-supermodules. Set $\mathfrak{I}=I^{\triangle}$, where $I$ is a superideal of definition of $A$. By Proposition [Proposition 8](#supermodule over formal as a proj limit){reference-type="ref" reference="supermodule over formal as a proj limit"}, for each $n\geq 1$ we have $\mathfrak{F}/\mathfrak{I}^n\mathfrak{F}\simeq \widetilde{M_n}$, where $M_n$ is a finitely generated $A/I^n$-supermodule. Moreover, $M=\varprojlim_n M_n$ is a finitely generated $A$-supermodule and $\mathfrak{F}\simeq M^{\triangle}$. 
The proof of the exactness of the functor $\mathfrak{F}\mapsto \mathfrak{F}(|\mathfrak{X}|)$ can be copied from [@hart Theorem II.9.7]. ◻ Let $\mathsf{Top-SAlg}_{\Bbbk}$ denote the subcategory of topological superalgebras with graded continuous morphisms. **Lemma 10**. *(see [@EGA; @I Proposition I.10.4.6]) The natural map $$\mathrm{Mor}(\mathfrak{X}, \mathrm{SSpecf}(A))\to\mathrm{Hom}_{\mathsf{Top-SAlg}_{\Bbbk}}(A, \mathcal{O}_{\mathfrak{X}}(|\mathfrak{X}|))$$ is bijective.* *Proof.* Without loss of a generality, one can assume that $\mathfrak{X}\simeq\mathrm{SSpecf}(B)$. Suppose that $I$ and $J$ are superideals of definition in $A$ and $B$ respectively. Let $f : \mathrm{SSpecf}(B)\to \mathrm{SSpecf}(A)$ be a morphism of formal superschemes. The dual superalgebra morphism $A\to B$ is denoted by $\phi$. Since $|\mathrm{SSpecf}(B)|=|\mathrm{Specf}(B_0)|$ as well as $|\mathrm{SSpecf}(A)|=|\mathrm{Specf}(A_0)|$, and the morphism $f_0$ is uniquely defined by $\phi_0 : A_0\to B_0$, there is $|f|(\mathfrak{Q})=\phi^{-1}(\mathfrak{Q})$ for any $\mathfrak{Q}\in |\mathrm{SSpecf}(B)|$. Furthermore, the superideal $\cap_{\mathfrak{Q}\in |\mathrm{SSpecf}(B)|}\mathfrak{Q}$ is nilpotent over $J$, that implies $\phi(I)^n\subseteq J$ for a positive integer $n$, whence $\phi$ is continuous. Any open supersubspace $\mathfrak{U}$ in $\mathrm{SSpecf}(A)$ is covered by some $\mathrm{SSpecf}(\widehat{A_x}), x\in A_0$. Since $f^{-1}(\mathrm{SSpecf}(\widehat{A_x}))=\mathrm{SSpecf}(\widehat{B_{\phi(x)}})$, and the superalgebra morphism $f^{\sharp}(\mathfrak{U}) : \mathcal{O}(\mathfrak{U})\to \mathcal{O}(f^{-1}(\mathfrak{U}))$ is uniquely defined by its restrictions $$f^{\sharp}(\mathrm{SSpecf}(\widehat{A_x})) : \widehat{A_x}\simeq \mathcal{O}(\mathrm{SSpecf}(\widehat{A_x}))\to \mathcal{O}(\mathrm{SSpecf}(\widehat{B_{\phi(x)}}))\simeq \widehat{B_{\phi(x)}},$$ which are, in turn, uniquely defined by $\phi$, thus our lemma follows. ◻ **Corollary 11**. *Assume that $\mathrm{SSpecf}(A)$ and $\mathrm{SSpecf}(B)$ are affine N.f. superschemes over an affine N.f. superscheme $\mathrm{SSpecf}(C)$. Then $\mathrm{SSpecf}(\widehat{A\otimes_C B})$ is isomorphic to the fiber product $\mathrm{SSpecf}(A)\times_{\mathrm{SSpecf}(C)} \mathrm{SSpecf}(B)$ in the category of N.f. superschemes.* **Proposition 12**. *The fiber products exist in the category of N.f. superschemes.* *Proof.* We briefly outline the main steps of the proof. For a couple of morphisms $f : \mathfrak{X}\to \mathfrak{Z}$ and $g : \mathfrak{Y}\to \mathfrak{Z}$ of N.f. superschemes, we choose open affine coverings $\{\mathfrak{U}_{ij}\}, \{\mathfrak{V}_{ik}\}$ and $\{\mathfrak{W}_i\}$ of $\mathfrak{X}, \mathfrak{Y}$ and $\mathfrak{Z}$ respectively, such that $f(\mathfrak{U}_{ij})\subseteq \mathfrak{W}_i$ and $f(\mathfrak{V}_{ik})\subseteq \mathfrak{W}_i$ for each triple of indices $i, j, k$. Then by Corollary [Corollary 11](#fiber product for affine){reference-type="ref" reference="fiber product for affine"}, for each triple of indices $i, j, k$ there exists the fiber product $\mathfrak{U}_{ij}\times_{\mathfrak{W}_{i}} \mathfrak{V}_{ik}$, so that $\mathfrak{X}\times_{\mathfrak{Z}}\mathfrak{Y}$ can be glued from these affine N.f. superschemes (for more details, see [@EGA; @I Theorem I.3.2.6 and Proposition I.10.7.3], or [@hart Theorem II.3.3]). ◻ # Separated and proper morphisms of formal superschemes **Lemma 13**. *Let $\mathfrak{X}$ be a N.f. superscheme and $\mathfrak{J}$ be a coherent superideal sheaf in $\mathcal{O}_{\mathfrak{X}}$. 
Set $Y=\mathrm{Supp}(\mathcal{O}_{\mathfrak{X}}/\mathfrak{J})$. Then $Y$ is a closed subset of $|\mathfrak{X}|$ and $\mathfrak{Y}=(Y, \mathcal{O}_{\mathfrak{X}}/\mathfrak{J})$ is a N.f. superscheme.* *Proof.* Since $\mathrm{Supp}(\mathcal{O}_{\mathfrak{X} }/\mathfrak{J})=\mathrm{Supp}(\mathcal{O}_{\mathfrak{X}_0}/\mathfrak{J}_0)$, $Y$ is a closed subset of $|\mathfrak{X}|$ and $\mathfrak{X}_0/\mathfrak{J}_0$ is a Noetherian formal scheme (see [@EGA; @I I.10.14.1]). Moreover, $(\mathcal{O}_{\mathfrak{X}})_1/\mathfrak{J}_1$ is a coherent $\mathcal{O}_{\mathfrak{X}_0}$-module by [@hart Corollary II.9.9], and Proposition [Proposition 5](#characterization of formal via even){reference-type="ref" reference="characterization of formal via even"} concludes the proof. ◻ The formal superscheme $\mathfrak{Y}$ is said to be a *closed formal supersubscheme* of $\mathfrak{X}$ (briefly, c.f. supersubscheme). A morphism $f : \mathfrak{Z}\to\mathfrak{X}$ of (Noetherian) formal superschemes is called a *closed immersion*, if $f$ is an isomorphism onto a c.f. supersubscheme $\mathfrak{Y}$ of $\mathfrak{X}$. Let $f : \mathfrak{X}\to\mathfrak{Y}$ be a morphism of N.f. superschemes. Let $\delta_f$ denote the corresponding diagonal morphism $\mathfrak{X}\to \mathfrak{X}\times_{\mathfrak{Y}}\mathfrak{X}$. Then $f$ is said to be *separated*, provided $|\delta_f|$ is a homeomorphism of $|X|$ onto a closed subset of $|\mathfrak{X}\times_{\mathfrak{Y}}\mathfrak{X}|$. Later, we will show that $f$ is separated if and only if $\delta_f$ is a closed immersion. A morphism of N.f. superschemes $f : \mathfrak{X}\to\mathfrak{Y}$ is called *adic*, if there is a superideal of definition $\mathfrak{J}$ of $\mathfrak{Y}$ such that $\mathcal{O}_{\mathfrak{X}} f^{\sharp}(\mathfrak{J})$ is a superideal of definition of $\mathfrak{X}$. We also say that $\mathfrak{X}$ is adic over $\mathfrak{Y}$. For any integer $n>0$ the morphism $f$ induces the morphism $f_n : X_n\to Y_n$ of Noetherian superschemes. Note that by Proposition [Proposition 6](#superideal of definition){reference-type="ref" reference="superideal of definition"}, if $f$ is adic, then $f$ is adic with respect to arbitrary superideal of definition of $\mathfrak{Y}$. **Lemma 14**. *Let $f : \mathfrak{X}\to\mathfrak{Y}$ be a morphism of affine N.f. superschemes. Then $f$ is a closed immersion if and only if the superalgebra morphism $f^{\sharp} : \mathcal{O}(\mathfrak{Y})\to \mathcal{O}(\mathfrak{X})$ is surjective.* *Proof.* The part \"if\" is obvious by Proposition [Proposition 9](#equivalence over affine formal){reference-type="ref" reference="equivalence over affine formal"}. Conversely, if $f^{\sharp}$ is surjective and $I$ is a superidel of definition of $\mathcal{O}(\mathfrak{Y})$, then $IB$ is the superideal of definition of $\mathcal{O}(\mathfrak{X})$. Again, by Proposition [Proposition 9](#equivalence over affine formal){reference-type="ref" reference="equivalence over affine formal"}, $\mathcal{O}_{\mathfrak{X}}$ is naturally isomorphic to $(\mathcal{O}(\mathfrak{Y})/J)^{\triangle}\simeq\mathcal{O}_{\mathfrak{Y}}/J^{\triangle}$ as an $\mathcal{O}_{\mathfrak{Y}}$-supermodule. ◻ **Lemma 15**. *Let $f : \mathfrak{X}\to\mathfrak{Y}$ be an adic morphism of N.f. superschemes. Then $f$ is separated if and only if $f_1$ is.* *Proof.* Let $\mathfrak{J}$ be a superideal of definition of $\mathfrak{Y}$, such that $\mathfrak{I}=\mathcal{O}_{\mathfrak{X}}f^{\sharp}(\mathfrak{J})$ is a superideal of definition of $\mathfrak{X}$. 
Note that both projections and the structural morphism $\mathfrak{X}\times_{\mathfrak{Y}}\mathfrak{X}\to\mathfrak{Y}$ are adic as well. More precisely, $\mathcal{O}_{\mathfrak{X}\times_{\mathfrak{Y}}\mathfrak{X}}\delta_f^{\sharp}(\mathfrak{J})$ is a superideal of definition of $\mathfrak{X}\times_{\mathfrak{Y}}\mathfrak{X}$. The induced morphism $(\delta_f)_1$ is naturally identified with $\delta_{f_1}$. In fact, there is a canonical bijection between morphisms $Z\to X_1$ (of Noetherian superschemes) over $Y_1$, and morphisms $Z\to\mathfrak{X}$ of (N.f. superschemes) over $\mathfrak{Y}$, such that $Z\to \mathfrak{Y}$ factors through $Z\to Y_1$. Thus for any couple of such morphisms from $Z$ to $X_1$, there is the unique morphism $Z\to \mathfrak{X}\times_{\mathfrak{Y}}\mathfrak{X}$, that obviously factors through $(\mathfrak{X}\times_{\mathfrak{Y}}\mathfrak{X})_1\to \mathfrak{X}\times_{\mathfrak{Y}}\mathfrak{X}$, hence $(\mathfrak{X}\times_{\mathfrak{Y}}\mathfrak{X})_1\simeq X_1\times_{Y_1} X_1$. It remains to note that $|\mathfrak{S}|=|S_1|$ for any N.f. superscheme $\mathfrak{S}$. ◻ **Corollary 16**. *All standard properties of separated morphisms of N.f. schemes, formulated in [@EGA; @I Proposition I.10.15.13], are valid for adic separated morphisms of N.f. superschemes.* *Proof.* Use the above isomorphism $(\mathfrak{X}\times_{\mathfrak{Y}}\mathfrak{X})_1\simeq X_1\times_{Y_1} X_1$ and refer to [@maszub2 Corollary 2.5]. ◻ Let $A$ be an adic Noetherian superalgebra, and let $I$ be a superideal of definition of $A$. Let $A[t_1, \ldots, t_n| z_1, \ldots , z_m]$ be a *polynomial superalgebra over $A$*, freely generated by the even indeterminants $t_1, \ldots, t_n$ and by the odd indeterminants $z_1, \ldots, z_m$. Then the superalgebra $$A\{t_1, \ldots, t_n| z_1, \ldots , z_m\}=\varprojlim_{k} A/I^k[t_1, \ldots, t_n| z_1, \ldots , z_m]$$ is said to be a *superalgebra of restricted formal series* over $A$. It is clear that $A\{t_1, \ldots, t_n| z_1, \ldots , z_m\}$ is supersubalgebra of $$A[[t_1, \ldots, t_n| z_1, \ldots , z_m]]\simeq A[[t_1, \ldots, t_n]][z_1, \ldots , z_m]$$ consisting of all formal series $$\sum_{\alpha, M}c_{\alpha, M}t^{\alpha} z^M, \alpha\in\mathbb{N}^n, M\subseteq \{1, \ldots, m \},$$ such that for any $k\geq 0$ almost all coefficients $c_{\alpha, M}$ belongs to $I^k$. We also use the shorter notation $A\{\underline{t}\mid\underline{z} \}$, instead of $A\{t_1, \ldots, t_n| z_1, \ldots , z_m\}$. Finally, a topological superalgebra $B$ is said to be *formally finite* over an $I$-adic superalgebra $A$, if $B$ is isomorphic to the quotient of some $A\{\underline{t} \mid \underline{z} \}$ over a closed superideal. Equivalently, $B$ is formally finite over $A$ if and only if $B$ is $IB$-adic, and $B/IB$ is finite over $A/I$. In fact, the part \"if\" is trivial. Conversely, we choose a finite set of $n$ even and $m$ odd generators of $B/IB$ over $A/IA$. Then for any positive integers $k\geq l$ we have a commutative diagram (of $A$-superalgebras) $$\begin{array}{ccccccccc} 0 & \to & R_k & \to & A/I^k A[\underline{t}\mid \underline{z}] & \to & B/I^k B & \to & 0 \\ & & \downarrow & & \downarrow & & \downarrow & & \\ 0 & \to & R_l & \to & A/I^l A[\underline{t}\mid \underline{z}] & \to & B/I^l B & \to & 0 \end{array},$$ where $\underline{t}=\{t_1, \ldots, t_n\}, \underline{z}=\{z_1, \ldots, z_m\}$. 
Since $\ker(B/I^k B\to B/I^l B)=I^l(B/I^k B)$ and the preimage of $I^l(B/I^k B)$ is equal to $I^l(A/I^k A[\underline{t}\mid \underline{z}])+R_k$, each superalgebra morphism $R_k\to R_l$ is surjective, hence the inverse system $\{ R_k\to R_l, k\geq l\}$ satisfies the *Mittag-Leffler condition*. Thus $B\simeq A\{\underline{t}\mid \underline{z}\}/R, R=\varprojlim_k R_k$. **Lemma 17**. *The following statements hold :* 1. *the property to be formally finite is transitive;* 2. *if $B$ is an $A'$-superalgebra, $A'$ is an $A$-superalgebra, $A$ is $I$-adic and $A'$ is $IA'$-adic, and $B$ is formally finite over $A$, then $B$ is formally finite over $A'$.* *Proof.* Assume that $C$ is formally finite over $B$ and $B$ is formally finite over $A$. Then there are superideals of definition $I$ and $J$ of $A$ and $B$ respectively, such that $B/IB$ is finite over $A/I$ and $C/JC$ is finite over $B/J$. There is an integer $n>0$, such that $J^n\subseteq IB$. Thus, if the (homogeneous) elements $c_1, \ldots, c_l$ generate $B/J$-superalgebra $C/JC$, then they obviously generate $C/IC$ over $B/IB$, and $(1)$ follows. The second statement is obvious. ◻ Superizing [@EGA; @I Definition I.10.13.3], we say that a morphism $f : \mathfrak{X}\to\mathfrak{Y}$ of N.f. superschemes is of *finite type*, if there are finite open coverings $\{\mathfrak{V}_i\}$ and $\{\mathfrak{U}_{ij}\}$ of $\mathfrak{Y}$ and $\mathfrak{X}$ respectively, such that for each couple of indices $i, j$ there is $f(\mathfrak{U}_{ij})\subseteq \mathfrak{V}_i$, and $\mathcal{O}(\mathfrak{U}_{ij})$ is formally finite over $\mathcal{O}(\mathfrak{V}_i)$. In particular, any morphism of finite type is obviously adic. **Lemma 18**. *In the above notations, $\mathfrak{X}$ is of finite type over $\mathfrak{Y}$ if and only if $\mathfrak{X}_0$ is of finite type over $\mathfrak{Y}_0$ if and only if $\mathfrak{X}_{ev}$ is of finite type over $\mathfrak{Y}_{ev}$ if and only if the morphism $f$ is adic and $X_1$ is of finite type over $Y_1$.* *Proof.* Note that a Noetherian superalgebra $C$ is a finitely generated $C_0$-supermodule (cf. [@maszub4 Lemma 1.4]). Therefore, if $C$ is $I$-adic, then $C$ is formally finite over $C_0$ (regarded as an $I_0$-adic algebra). Furthermore, if $B$ is formally finite over $A$, then $B$ is formally finite over $A_0$ by Lemma [Lemma 17](#transitive){reference-type="ref" reference="transitive"}(1), hence $B_0$ is formally finite over $A_0$. Conversely, if $B_0$ is formally finite over $A_0$, then $B$ is formally finite over $A_0$, and Lemma [Lemma 17](#transitive){reference-type="ref" reference="transitive"}(2), combined with [@maszub4 Lemma 1.5], imply the first equivalence. If $\overline{B}$ is formally finite over $\overline{A}$, then there are superideals of definition $J$ and $I$ of $B$ and $A$ respectively, such that $J^n+J_B\subseteq IB+J_B$ and $I^m B+J_B\subseteq J+J_B$ for some integers $m, n>0$. On the other hand, $J_B$ is nilpotent, say $J_B^l=0$, that infers $J^{nl}\subseteq IB$ and $I^{ml} B\subseteq J$, hence $B$ is $IB$-adic. It remains to note that if $C$ is a Noetherian superalgebra over a superalgebra $D$, and $\overline{C}$ is finite over $\overline{D}$, then each quotient $J_C^k/J_C^{k+1}$ is a finitely generated supermodule over a polynomial superalgebra $A[\underline{t}]$, hence $C$ is finite over $D$. The third equivalence is obvious. ◻ **Proposition 19**. *Let $f : \mathfrak{X}\to\mathfrak{Y}$ be a morphism of N.f. superschemes over a N.f. superscheme $\mathfrak{Z}$. 
Assume that both $\mathfrak{X}$ and $\mathfrak{Y}$ are of finite type and adic over $\mathfrak{Z}$, and $\mathfrak{Y}$ is also separated over $\mathfrak{Z}$. Then the graph morphism $\Gamma_f : \mathfrak{X}\to \mathfrak{X}\times_{\mathfrak{Z}} \mathfrak{Y}$, induced by $\mathrm{id}_{\mathfrak{X}}$ and $f$, is a closed immersion.* *Proof.* Arguing as in Lemma [Lemma 15](#a criteria for a morphism to be separated){reference-type="ref" reference="a criteria for a morphism to be separated"}, $|\mathfrak{X}|=|X_1|, |\mathfrak{X}\times_{\mathfrak{Z}} \mathfrak{Y}|=|X_1\times_{Z_1} Y_1|$ and $|\Gamma_f|$ can be naturally identified with $|\Gamma_{f_1}|$. Combining [@maszub2 Lemma 2.3 and Proposition 2.4] with [@EGA; @I Proposition I.10.15.4], one sees that $|\Gamma_f|$ is a homeomorphism onto a closed subset of $|\mathfrak{X}\times_{\mathfrak{Z}} \mathfrak{Y}|$. Further, $\Gamma_f$ can be glued from the morphisms $$\Gamma_f|_{\mathfrak{U}}=\Gamma_{f|_{\mathfrak{U}}} : \mathfrak{U}\to \mathfrak{U}\times_{\mathfrak{W}} \mathfrak{V},$$ where $f(\mathfrak{U})\subseteq \mathfrak{V}$, $\mathfrak{U}, \mathfrak{V}$ and $\mathfrak{W}$ are open affine N.f. supersubschemes of $\mathfrak{X}, \mathfrak{Y}$ and $\mathfrak{Z}$ respectively, such that $f|_{\mathfrak{U}} : \mathfrak{U}\to\mathfrak{V}$ is a morphism over $\mathfrak{W}$. The same arguments as in [@EGA; @I Lemma I.10.14.4] show that $\Gamma_f$ is a closed immersion if and only if each $\Gamma_{f|_{\mathfrak{U}}}$ is, so that Lemma [Lemma 14](#closed immersion for affine){reference-type="ref" reference="closed immersion for affine"} concludes the proof. ◻ Similarly to [@EGA; @III III.3.4.1], a morphism $f : \mathfrak{X}\to\mathfrak{Y}$ of N.f. superschemes is called *proper*, if $f$ is of finite type and for some superideal of definition $\mathfrak{J}$ of $\mathfrak{Y}$, the induced morphism $X_1=(|\mathfrak{X}|, \mathcal{O}_{\mathfrak{X}}/\mathcal{O}_{\mathfrak{X}}f^{\sharp}(\mathfrak{J}))\to Y_1=(|\mathfrak{Y}|, \mathcal{O}_{\mathfrak{Y}}/\mathfrak{J})$ is proper. Thus, by Lemma [Lemma 2](#reducibility){reference-type="ref" reference="reducibility"} and Proposition [Proposition 6](#superideal of definition){reference-type="ref" reference="superideal of definition"}(a), this property does not depend on the choice of $\mathfrak{J}$ and $X_n$ is proper over $Y_n$ for arbitrary $n\geq 1$. **Proposition 20**. *Let $f : \mathfrak{X}\to \mathfrak{Y}$ be a morphism of N.f. superschemes. Then $f$ is proper if and only if $f_0$ is if and only if $f_{ev}$ is if and only if the morphism $f$ is adic and $f_1$ is proper.* *Proof.* The third statement is trivial. The first and second statements follow by Lemma [Lemma 2](#reducibility){reference-type="ref" reference="reducibility"}, Proposition [Proposition 6](#superideal of definition){reference-type="ref" reference="superideal of definition"}(d) and Lemma [Lemma 18](#finite type for formal){reference-type="ref" reference="finite type for formal"}. ◻ # Comparison of superscheme morphisms and their formal prolongations Let $A$ be a Noetherian superalgebra and $I$ be its superideal. Let $S$ denote the affine superscheme $\mathrm{SSpec}(A)$ and $S'\simeq\mathrm{SSpec}(A/I)$ is its closed supersubscheme, defined by $I$. Assume that $X$ is a superscheme of finite type over $S$. In partiular, $X$ is Noetherian. The preimage of $S'$ in $X$ is a closed supersubscheme $X'$. Let $\widehat{X}$ and $\widehat{S}$ denote the formal completions of $X$ and $S$ along $X'$ and $S'$ respectively. 
Observe that for any open afine supersubscheme $U\simeq\mathrm{SSpec}(B)$ of $X$ there is $U'=U\cap X'\simeq\mathrm{SSpec}(B/B I)$. Thus $\mathcal{O}_{\widehat{X}}\simeq \varprojlim_{n\geq 0} \mathcal{O}_X/\mathcal{O}_X I^{n+1}$ and $\widehat{X}$ is a N.f. superscheme of finite type over $\widehat{S}=\mathrm{SSpecf}(\widehat{A})$. Besides, $X\mapsto \widehat{X}$ is a functor from the category of superschemes of finite type over $S$ to the category of N.f. superschemes of finite type over $\widehat{S}$. Assume that $Y$ is another superscheme of finite type over $S$. Let $Z$ denote the superscheme $X\times_S Y$. Then $Z$ is of finite type over $S$ as well, hence Noetherian. The proof of the following lemma is similar to the proof of Proposition [Proposition 12](#fibered product for fromal){reference-type="ref" reference="fibered product for fromal"}. **Lemma 21**. *The superspace $\widehat{Z}$ is isomorphic to the fiber product $\widehat{X}\times_{\widehat{S}}\widehat{Y}$ in the category of N.f. superschemes, so that, the canonical projections are identified with $\widehat{\mathrm{pr}_1}$ and $\widehat{\mathrm{pr}_2}$ respectively.* **Proposition 22**. *Assume that the above morphism $X\to S$ is separated and the superalgebra $A$ is $I$-adic. Then the map $T\mapsto\widehat{T}$ is a bijection from the set of closed supersubschemes of $X$, proper over $S$, to the set of closed N.f. supersubschemes of $\widehat{X}$, proper over $\widehat{S}$.* *Proof.* Without loss of generality, one can assume that $I=I_0A$. If $T$ satisfies the conditions of lemma, then $T_0$ is closed subscheme of $X_0$, proper over $S_0$, and by [@EGA; @III Corollary III.5.1.8], $\widehat{T_0}\simeq \widehat{T}_0$ is proper over $\widehat{S_0}\simeq\widehat{S}_0$. By Proposition [Proposition 20](#properness over formal){reference-type="ref" reference="properness over formal"}, $\widehat{T}$ is proper over $\widehat{S}$. Moreover, if $\mathcal{O}_T=\mathcal{O}_X/\mathcal{J}$, where $\mathcal{J}$ is a coherent sheaf of $\mathcal{O}_X$-supermodules, then $\mathcal{O}_{\widehat{T}}=\widehat{\mathcal{O}_X}/\widehat{\mathcal{J}}$. Since $\mathcal{J}|_{\mathcal{O}_{X_0}}$ is a coherent $\mathcal{O}_{X_0}$-module, [@EGA; @III Corollary III.5.1.6] implies that $\widehat{\mathcal{J}}|_{\mathcal{O}_{\widehat{X}_0}}$ is a coherent $\mathcal{O}_{\widehat{X}_0}$-module, hence by Proposition [Proposition 8](#supermodule over formal as a proj limit){reference-type="ref" reference="supermodule over formal as a proj limit"}(3), $\widehat{\mathcal{J}}$ is a coherent $\mathcal{O}_{\widehat{X}}$-supermodule and $\widehat{T}$ is closed in $\widehat{X}$. Conversely, if $\mathfrak{T}$ is a closed supersubscheme of $\widehat{X}$, proper over $\widehat{S}$, then $\mathcal{O}_{\mathfrak{T}}=\mathcal{O}_{\widehat{X}}/\mathfrak{J}$, where $\mathfrak{J}$ is a coherent $\mathcal{O}_{\widehat{X}}$-supermodule, hence a coherent $\mathcal{O}_{\widehat{X}_0}$-module as well. Again, by [@EGA; @III Corollary III.5.1.6], $\mathfrak{J}=\widehat{\mathcal{J}}$ for some coherent $\mathcal{O}_{X_0}$-supermodule of $\mathcal{O}_X$. Note that $\widehat{\mathcal{O}_X\mathcal{J}}=\mathfrak{J}$, that is $\mathcal{J}$ is a coherent $\mathcal{O}_X$-superideal. Set $T=(\mathrm{Supp}(\mathcal{O}_X/\mathcal{J}), \mathcal{O}_X/\mathcal{J})$. It is clear that $\widehat{T}=\mathfrak{T}$, and since $\mathfrak{T}_0$ is proper over $\widehat{S_0}$, $T_0$ is proper over $S_0$, and in its turn, $T$ is proper over $S$. 
◻ Let $\mathrm{Mor}_S(X, Y)$ (respectively, $\mathrm{Mor}_{\widehat{S}}(\widehat{X}, \widehat{Y})$) denote the set of morphisms $X\to Y$ of $S$-superschemes (respectively, the set of morphisms $\widehat{X}\to\widehat{Y}$ of N.f. $\widehat{S}$-superschemes). It is clear that all morphisms from $\mathrm{Mor}_{\widehat{S}}(\widehat{X}, \widehat{Y})$ are adic. From now on $A$ is supposed to be an $I$-adic superalgebra. Let also $I=I_0 A$. The following proposition is a superization of [@EGA; @III Theorem III.5.4.1]. **Proposition 23**. *The natural map $\mathrm{Mor}_S(X, Y)\to \mathrm{Mor}_{\widehat{S}}(\widehat{X}, \widehat{Y})$ is bijective, provided $X$ is proper over $S$ and $Y$ is separated over $S$.* The proof will be given in two lemmas. **Lemma 24**. *The map $\mathrm{Mor}_S(X, Y)\to \mathrm{Mor}_{\widehat{S}}(\widehat{X}, \widehat{Y})$ is injective.* *Proof.* Assume that $\widehat{\phi}=\widehat{\psi}$ for some $\phi, \psi\in \mathrm{Mor}_S(X, Y)$. Then $|\phi||_{|X'|}=|\psi||_{|X'|}$. We show that there is an open supersubscheme $U$ of $X$ such that $X'\subseteq U$ and $\phi|_U=\psi|_U$. Since $X_{ev}\to S_{ev}$ is a proper morphism of schemes and $(X')_{ev}\subseteq U_{ev}$ coincides with the preimage of $(S')_{ev}=S'\cap S_{ev}=(S_{ev})'$ in $X_{ev}$, [@EGA; @III Lemma III.5.1.3.1] imlplies $X_{ev}=U_{ev}$. From $|X|=|X_{ev}|=|U_{ev}|=|U|$ we immediately derive that $U=X$. Choose finite open affine coverings $\{U_i\}, \{V_{ij}\}$ of $Y$ and $X$ respectively, such that $$\phi^{-1}(Y'\cap U_i)=\cup_{j} (V_{ij}\cap X') =\psi^{-1}(Y'\cap U_i)$$ for each index $i$. Then for arbitrary couple of indices $i, j$, we have $\widehat{\phi|_{V_{ij}}}=\widehat{\psi|_{V_{ij}}}$. In other words, one can assume that $X$ and $Y$ are affine, say $X\simeq\mathrm{SSpec}(B), Y\simeq\mathrm{SSpec}(C)$. Let $u : C\to B$ and $v : C\to B$ denote the superalgebra morphisms, those are dual to $\phi$ and $\psi$ respectively. Then the induced superalgebra morphisms $\widehat{u}$ and $\widehat{v}$ are equal each to other. Thus it follows that $(u-v)(c)\in \cap_{n\geq 0} C I^{n+1}$ for any $c\in C$. Since $C$ is a finitely generated $A$-superalgebra, by *Krull intersection theorem* (cf. [@maszub4 Proposition 1.10]) there is $x\in C_0 I_0+C_1 I_1$, such that for any $c\in C$ there is $n\geq 0$ with $(1-x)^n(u-v)(c)=0$. In other words, $\phi|_{X_{1-x_0}}= \psi|_{X_{1-x_0}}$, where $X_{1-x_0}\simeq\mathrm{SSpec}(C_{1-x_0})$ and $X'\subseteq X_{1-x_0}$. ◻ **Lemma 25**. *The map $\mathrm{Mor}_S(X, Y)\to \mathrm{Mor}_{\widehat{S}}(\widehat{X}, \widehat{Y})$ is surjective.* *Proof.* Let $h\in \mathrm{Mor}_{\widehat{S}}(\widehat{X}, \widehat{Y})$. By Lemma [Lemma 15](#a criteria for a morphism to be separated){reference-type="ref" reference="a criteria for a morphism to be separated"} the N.f. superscheme $\widehat{Y}$ is separated over $\widehat{S}$, and by Proposition [Proposition 19](#graph morphism){reference-type="ref" reference="graph morphism"} the graph morphism $\Gamma_h : \widehat{X}\to \widehat{Z}$ is a closed immersion, such that $\widehat{\mathrm{pr}_1}\Gamma_h=\mathrm{id}_{\widehat{X}}$. Let $\mathfrak{T}$ denote the closed N.f. supersubscheme of $\widehat{Z}$, such that $\Gamma_h$ induces the isomorphism $\widehat{X}\simeq\mathfrak{T}$ (over $\widehat{S}$). In particular, $\mathfrak{T}$ is proper over $\widehat{S}$. 
By Proposition [Proposition 22](#closed supersubschemes in completion){reference-type="ref" reference="closed supersubschemes in completion"} there is a closed supersubscheme $T$ in $Z$, such that $\widehat{T}=\mathfrak{T}$. Let $i$ denote the canonical embedding $T\to Z$. Then $\widehat{\mathrm{pr}_1}\widehat{i}$ is an isomorphism $\widehat{T}\to\widehat{X}$, induced by $u=\mathrm{pr}_1 i : T\to X$. By [@EGA; @III Proposition III.4.6.8], $u_0 : T_0\to X_0$ is an isomorphism. Identifying $\mathcal{O}_{T_0}$ with $\mathcal{O}_{X_0}$, and using again [@EGA; @III Corollary III.5.1.6], one sees that the morphism $u^{\sharp} : \mathcal{O}_X\to \mathcal{O}_T$ of coherent $\mathcal{O}_{X_0}$-(super)modules is an isomorphism, whence $u$ is. The rest of the proof can be copied from [@EGA; @III Theorem III.5.4.1]. ◻ # Some auxiliary properties of $\mathcal{O}_X$-supermodules Recall that if $\mathcal{F}$ and $\mathcal{G}$ are sheaves of vector $\Bbbk$-superspaces on a topological space $X$, then $\mathrm{Hom}_{\Bbbk}(\mathcal{F}, \mathcal{G})$ is a sheaf $$U\mapsto \{\mbox{morphisms of} \ \Bbbk-\mbox{sheaves} \ \mathcal{F}|_U\to\mathcal{G}|_U \},$$ where $U$ runs over open subsets of $X$. Furthermore, $\mathrm{Hom}_{\Bbbk}(\mathcal{F}, \mathcal{G})$ has a natural $\mathbb{Z}_2$-grading by $$\mathrm{Hom}_{\Bbbk}(\mathcal{F}, \mathcal{G})_{i}=\oplus_{j\in\mathbb{Z}_2}\mathrm{Hom}_{\Bbbk}(\mathcal{F}_j, \mathcal{G}_{i+j}), i\in\mathbb{Z}_2,$$ i.e. it is again a sheaf of $\Bbbk$-superspaces. If $\mathcal{F}=\mathcal{G}$, then $\mathrm{Hom}_{\Bbbk}(\mathcal{F}, \mathcal{F})$ is denoted by $\mathrm{End}_{\Bbbk}(\mathcal{F})$. Let $X$ be a superscheme. In what follows a sheaf of $\mathcal{O}_X$-supermodules is a sheaf of left $\mathcal{O}_X$-supermodules, unless stated otherwise. If $\mathcal{F}$ and $\mathcal{G}$ are sheaves of $\mathcal{O}_X$-supermodules, then $\mathrm{Hom}_{\mathcal{O}_X}(\mathcal{F}, \mathcal{G})$ is a subsheaf of $\mathrm{Hom}_{\Bbbk}(\mathcal{F}, \mathcal{G})$ such that for any open subsets $V\subseteq U\subseteq |X|$ a section $s\in \mathrm{Hom}_{\Bbbk}(\mathcal{F}, \mathcal{G})(U)$ belongs to $\mathrm{Hom}_{\mathcal{O}_X}(\mathcal{F}, \mathcal{G})(U)$ if and only if $$fs(V)(g)=(-1)^{|s||f|} s(V)(fg), f\in\mathcal{O}(V), g\in\mathcal{F}(V).$$ **Remark 26**. *The category of left $\mathcal{O}_X$-supermodules is equivalent to the category of right $\mathcal{O}_X$-supermodules. More pecisely, if $\mathcal{F}$ is a left $\mathcal{O}_X$-supermodule, then it is the right $\mathcal{O}_X$-supermodule via $$fa=(-1)^{|f||a|} af, a\in\mathcal{O}_X(U), f\in\mathcal{F}(U), U\subseteq |X|.$$ Moreover, if $s\in\mathrm{Hom}_{\mathcal{O}_X}(\mathcal{F}, \mathcal{G} )(U)$, then $s$ is the morphism of the right $\mathcal{O}_X|_U$-supermodules via $$(f)s(V)=(-1)^{|s||f|}s(V)(f), \mathcal{F}(V), V\subseteq U\subseteq |X|.$$* **Lemma 27**. *Assume that $X$ is a Noetherian superscheme. If $\mathcal{F}$ and $\mathcal{G}$ are coherent sheaves of $\mathcal{O}_X$-supermodule, then $\mathrm{Hom}_{\mathcal{O}_X}(\mathcal{F}, \mathcal{G})$ is coherent as well.* *Proof.* Choose a finite covering of $X$ by affine open supersubschemes $U$. Then $\mathcal{F}(U)\simeq\widetilde{M}$ and $\mathcal{G}(U)\simeq\widetilde{N}$ for each $U$ (see the proof of [@zub2 Proposition 3.1]), where $M$ and $N$ are finitely generated $\mathcal{O}(U)$-supermodule. Then [@zub2 Proposition 2.1(1)] implies $$\mathrm{Hom}_{\mathcal{O}_X}(\mathcal{F}, \mathcal{G})(U)\simeq \widetilde{\mathrm{Hom}_{\mathcal{O}(U)}(M, N) }$$ for each $U$. 
Since $\mathcal{O}(U)$ is a Noetherian superalgebra, it remains to note that $\mathrm{Hom}_{\mathcal{O}(U)}(M, N)$ can be naturally identified with an $\mathcal{O}(U)$-supersubmodule of the supermodule $N^{m|n}\oplus \Pi N^{m|n}\simeq N^{m+n|m+n}$, where $m$ and $n$ are the numbers of even and odd generators of $M$. ◻ **Proposition 28**. *(compare with [@brp Corollary 2.36]) Let $X$ be a proper superscheme and let $\mathcal{F}$ be a coherent sheaf of $\mathcal{O}_X$-supermodules. Then $\mathrm{H}^i(X, \mathcal{F})$ is a finite dimensional superspace for any $i\geq 0$.* *Proof.* The superideal sheaf $\mathcal{J}_X$ is nilpotent. Thus there is an nonnegative integer $k$ such that $\mathcal{J}_X^k\mathcal{F}\neq 0$ but $\mathcal{J}_X^{k+1}\mathcal{F}=0$. We call $k$ a *nilpotency depth* of $\mathcal{F}$. The sheaves $\mathcal{J}_X\mathcal{F}$ and $\mathcal{F}/\mathcal{J}_X\mathcal{F}$ are coherent (cf. [@zub2 Corollary 3.2]) and both are of nilpotency depth strictly less than $k$. The long exact sequence of cohomologies $$\ldots \to \mathrm{H}^i(X, \mathcal{J}_X\mathcal{F}) \to \mathrm{H}^i(X, \mathcal{F}) \to \mathrm{H}^i(X, \mathcal{F}/\mathcal{J}_X\mathcal{F}) \to \ldots$$ and the induction on $k$ allows us to consider the case $k=0$ only. In this case $\mathcal{F}$ is a direct sum of two coherent $\mathcal{O}_{X_0}$-modules $\mathcal{F}_0$ and $\mathcal{F}_1$, hence [@EGA; @III Theorem III.3.2.1] concludes the proof. ◻ **Lemma 29**. *Let $\mathcal{F}$ and $\mathcal{G}$ be coherent sheaves of $\mathcal{O}_N$-supermodules, where $N$ is a proper superscheme. Then the functor $$A\mapsto \mathrm{Hom}_{\mathcal{O}_{N\times\mathrm{Spec}(A)}}(\mathrm{pr}_1^*\mathcal{F}, \mathrm{pr}_1^*\mathcal{G})(|N\times\mathrm{Spec}(A)|), A\in\mathsf{Alg}_{\Bbbk},$$ where $\mathrm{pr}_1$ denote the canonical projection $N\times\mathrm{Spec}(A)\to N$, is isomorphic to a functor $A\mapsto V\otimes A$ for some finite dimensional vector superspace $V$.* *Proof.* Let $\{U_i\}$ be a finite covering of $N$ by open affine supersubschemes. For each couple of indices $i, j$ we also choose a finite covering $\{V_{ijk}\}$ of $U_i\cap U_j$ by open affine supersubschemes. Then any global section from $\mathrm{Hom}_{\mathcal{O}_{N\times\mathrm{Spec}(A)}}(\mathrm{pr}^*_1\mathcal{F}, \mathrm{pr}^*_1\mathcal{G}_A)$ is uniquely defined by its restrictions on $$\mathrm{Hom}_{\mathcal{O}_{N\times\mathrm{Spec}(A)}}(\mathrm{pr}_1^*\mathcal{F}, \mathrm{pr}_1^*\mathcal{G})(|U_i\times\mathrm{Spec}(A)|)$$$$\simeq\mathrm{Hom}_{\mathcal{O}_{U_i\times\mathrm{Spec}(A)}}(\mathrm{pr}_1^*\mathcal{F}|_{|U_i\times\mathrm{Spec}(A)|}, \mathrm{pr}_1^*\mathcal{G}|_{|U_i\times\mathrm{Spec}(A)|})\simeq$$ $$\mathrm{Hom}_{\mathcal{O}(U_i)\otimes A}(\mathcal{F}(U_i)\otimes A, \mathcal{G}(U_i)\otimes A)\simeq \mathrm{Hom}_{\mathcal{O}(U_i)}(\mathcal{F}(U_i), \mathcal{G}(U_i)\otimes A) ,$$ those are compatible being restricted on the corresponding $V_{ijk}$. Since any $\mathcal{F}(U_i)$ is a finitely generated $\mathcal{O}(U_i)$-supermodule, the latter $A$-supermodules are isomorphic to $\mathrm{Hom}_{\mathcal{O}(U_i)}(\mathcal{F}(U_i), \mathcal{G}(U_i))\otimes A$. Moreover, these isomorphisms are obviously compatible with the restrictions to $V_{ijk}$. 
Thus it follows easily that $$\mathrm{Hom}_{\mathcal{O}_{N\times\mathrm{Spec}(A)}}(\mathrm{pr}_1^*\mathcal{F}, \mathrm{pr}_1^*\mathcal{G})(|N\times\mathrm{Spec}(A)|)\simeq \mathrm{Hom}_{\mathcal{O}_N}(\mathcal{F}, \mathcal{G})(|N|)\otimes A.$$ Proposition [Proposition 28](#global sections are finite){reference-type="ref" reference="global sections are finite"} concludes the proof. ◻ # Čech cohomology Let $X$ be a topological space and let $\mathcal{F}$ be a sheaf of $\mathbb{Z}_2$-graded abelian groups. Following [@hart III.4], with any open covering $\mathfrak{U}=\{U_i \}_{i\in I}$ of $X$ one can associate a complex of $\mathbb{Z}_2$-graded abelian groups $C^{\bullet}(\mathfrak{U}, \mathcal{F})=\{ C^p(\mathfrak{U}, \mathcal{F})\}_{p\geq 0}$. Without loss of generality, one can asume that $I$ is totally ordered. Then $$C^p(\mathfrak{U}, \mathcal{F})=\prod_{i_0< \ldots < i_p} \mathcal{F}(U_{i_0}\cap\ldots\cap U_{i_p})$$ and the coboundary map $d : C^p(\mathfrak{U}, \mathcal{F})\to C^{p+1}(\mathfrak{U}, \mathcal{F})$ is defined as $$d(\alpha)_{i_0 , \ldots ,i_{p+1}}=\sum_{0\leq k\leq p+1}(-1)^k\alpha_{i_0, \ldots \widehat{i_k}, \ldots , i_{p+1} }|_{U_{i_0}\cap\ldots\cap U_{i_{p+1}} },$$ where $\alpha=\prod_{i_0< \ldots < i_p}\alpha_{i_0, \ldots, i_p}$. Since $d$ preserves $\mathbb{Z}_2$-grading, the *Čech cohomology groups* $\mathrm{\check{H}}^p(\mathfrak{U}, \mathcal{F})$ are $\mathbb{Z}_2$-graded as well. We have a natural resolution $0\to\mathcal{F}\stackrel{\epsilon}{\to} \mathcal{C}^{\bullet}(\mathfrak{U}, \mathcal{F})$, i.e. the exact sequence of sheaves $$0\to\mathcal{F}\stackrel{\epsilon}{\to}\mathcal{C}^0(\mathfrak{U}, \mathcal{F})\to\mathcal{C}^1(\mathfrak{U}, \mathcal{F})\to\ldots ,$$ where $\mathcal{C}^p(\mathfrak{U}, \mathcal{F})(V)=\prod_{i_0<\ldots < i_p}\mathcal{F}(U_{i_0}\cap \ldots \cap U_{i_p}\cap V)$ for any open subset $V\subseteq X$. The proof of exactness is the same as in the non-graded case (cf. [@hart Lemma III.4.2]). Note also that $\mathcal{C}^{\bullet}(\mathfrak{U}, \mathcal{F})(X)=C^{\bullet}(\mathfrak{U}, \mathcal{F})$. Recall that a sheaf $\mathcal{F}$ is said to be *flat* (or *flabby*), if for any open subsets $V\subseteq U\subseteq X$ the \"restriction\" map $\mathcal{F}(U)\to\mathcal{F}(V)$ is surjective. Using [@zub2 Lemma 1.2], one can mimic the proof of [@hart Proposition III.4.3] to show that $\mathrm{\check{H}}^p(\mathfrak{U}, \mathcal{F})=0$ for any $p> 0$, provided $\mathcal{F}$ is flat. If $0\to\mathcal{F}\to\mathcal{J}^{\bullet}$ is an injective resolution of a sheaf $\mathcal{F}$, then [@slang Lemma XX.5.2] implies that there is a morphism of of $\mathbb{Z}_2$-graded complexes $\mathcal{C}^{\bullet}(\mathfrak{U}, \mathcal{F})\to \mathcal{J}^{\bullet}$, that extends the identity morphism $\mathrm{id}_{\mathcal{F}}$, and it is unique up to a homotopy preserving $\mathbb{Z}_2$-grading. Thus this morphism induces the natural maps $\mathrm{\check{H}}^p(\mathfrak{U}, \mathcal{F})\to \mathrm{H}^p(X, \mathcal{F})$ of $\mathbb{Z}_2$-graded abelian groups for each $p\geq 0$, functorial in $\mathcal{F}$ (cf. [@hart Lemma III.4.4]). **Proposition 30**. *If $X$ is a Noetherian separated superscheme, $\mathfrak{U}$ is an open covering of $X$ by affine supersubschemes, and $\mathcal{F}$ is a quasi-coherent sheaf of $\mathcal{O}_X$-supermodules, then each map $\mathrm{\check{H}}^p(\mathfrak{U}, \mathcal{F})\to \mathrm{H}^p(X, \mathcal{F})$ is an isomorphism.* *Proof.* The proof can be copied from [@hart Theorem III.4.5] verbatim. 
Indeed, all we need is [@zub2 Lemma 3.2] and Lemma [Lemma 1](#Ex.II.4.3){reference-type="ref" reference="Ex.II.4.3"}. ◻ **Corollary 31**. *(see [@hart Exercise III.4.1]) Let $f : X\to Y$ be an affine morphism of Noetherian separated superschemes. Let $\mathcal{F}$ be a quasi-coherent sheaf of $\mathcal{O}_X$-supermodules. Then $$\mathrm{H}^i(X, \mathcal{F})\simeq \mathrm{H}^i(Y, f_*\mathcal{F}).$$ for any $i\geq 0$.* *Proof.* Let $\mathfrak{V}=\{V_i\}_{i\in I}$ be an open covering of $Y$ by affine supersubschemes. Then $\mathfrak{U}=\{f^{-1}(V_i)\}_{i\in I}$ is an open covering of $X$ by affine supersubschemes also. Since $\mathcal{C}^{\bullet}(\mathfrak{U}, \mathcal{F})=\mathcal{C}^{\bullet}(\mathfrak{V}, f_*\mathcal{F})$, Proposition [Proposition 30](#isomorphism of cohomologies){reference-type="ref" reference="isomorphism of cohomologies"} concludes the proof. ◻ # Pro-representable functors Let $\mathcal{C}$ be a category with finite products and fibered products as well. Let $F : \mathcal{C}\to\mathsf{Sets}$ be a covariant functor. The functor $F$ is said to be *left exact* in the sense of [@lev], provided the following conditions hold : 1. if $A$ is a final object in $\mathcal{C}$, then $|F(A)|=1$; 2. $F(A\times B)\simeq F(A)\times F(B)$ for any objects $A, B$ from $\mathcal{C}$; 3. $F$ takes any equalizer $A\to B\rightrightarrows C$ in $\mathcal{C}$ to the equalizer $F(A)\to F(B)\rightrightarrows F(C)$ in $\mathsf{Sets}$. The equivalent definitions are listed in [@lev Proposition 1]. Assume that $\mathcal{C}$ is also an *Artinian* category, that is for any object $A\in\mathcal{C}$ any collection of subobjects of $A$ has a minimal element. As it has been proved in [@lev Proposition 2], a functor $F : \mathcal{C}\to\mathsf{Sets}$ is left exact if and only if there is a *projective system* $\{ A_i, \phi_{ij}\}_{i, j\in I}$ in $\mathcal{C}$, such that $F(A)\simeq \varinjlim \mathsf{Mor}_{\mathcal{C}}(A_i, A)$. Let $\mathsf{SL}_{\Bbbk}$ denote the category of finite dimensional augmented local superalgebras over a field $\Bbbk$, with local morphisms between them. It is clear that $\mathsf{SL}_{\Bbbk}$ is Artinian, and it has finite and fibered products. If $A\in \mathsf{SL}_{\Bbbk}$, then its maxmal superideal is denoted by $\mathfrak{M}_A$ and the even component of $\mathfrak{M}_A$ is denoted by $\mathfrak{m}_A$, i.e. $\mathfrak{M}_A=\mathfrak{m}_A\oplus A_1$. The augmentation map $A\to\Bbbk$ is denoted by $\epsilon_A$, respectively. The full subcategory of $\mathsf{SL}_{\Bbbk}$ consisting of all $A$ with $\mathfrak{M}_A^{n+1}=0$ is denoted by $\mathsf{SL}_{\Bbbk}(n)$. The full subcategories of $\mathsf{SL}_{\Bbbk}$ and $\mathsf{SL}_{\Bbbk}(n)$, consisting of purely even superalgebras, are denoted by $\mathsf{L}_{\Bbbk}$ and $\mathsf{L}_{\Bbbk}(n)$ respectively. A topological superalgebra $A$ is called *profinite augmented local* superalgebra (p.a.l. superalgebra), if $A\simeq\varprojlim A_i$, where $\{A_i, \phi_{ij}\}$ is a projective system of superalgebras in $\mathsf{SL}_{\Bbbk}$. The maximal superideal $\mathfrak{M}_A$ is isomorphic to $\varprojlim\mathfrak{M}_{A_i}$ and $A$ is Noetherian if and only if $\dim\mathfrak{M}_A/\mathfrak{M}^2_A< \infty$. By the above, a functor $F : \mathsf{SL}_{\Bbbk}\to \mathsf{Sets}$ is left exact if and only if it is pro-representable, i.e. there is a p.a.l. superalgebra $A\simeq\varprojlim A_i$ such that $F(B)$ is the set of all continuous morphisms from $A$ to the discrete superalgebra $B$. 
Let $\Bbbk[\epsilon_0, \epsilon_1]$ be a superalgebra of *dual supernumbers* (cf. [@maszub2]), where $|\epsilon_0|=0, |\epsilon_1|=1, \epsilon_i\epsilon_j=0, i, j=0, 1$. Then $F(\Bbbk[\epsilon_0, \epsilon_1])\simeq (\dim\mathfrak{M}_A/\mathfrak{M}^2_A )^*$. Thus the functor $F$ is pro-representable by a *complete local augmented Noetherian* superalgebra (c.l.a.N. superalgebra), that is $F$ is *strictly pro-representable* in the terminology of [@mats-oort], if and only if $F(\Bbbk[\epsilon_0, \epsilon_1])$ is a finite dimensional vector (super)space. One can extend the above notion of strict pro-representability for arbitrary $\Bbbk$-functor $\mathbb{X}$. In other words, $\mathbb{X}$ is strictly pro-representable by a c.l.a.N. superalgebra $A$, if for any $B\in\mathsf{SAlg}_{\Bbbk}$ the set $\mathbb{X}(B)$ consists of all continuous morphisms from $A$ to the discrete superalgebra $B$. # Formal neighborhoods of $\Bbbk$-points Let $\mathbb{X}$ be a $\Bbbk$-functor. If $x\in\mathbb{X}(\Bbbk)$, then we have the functors $\mathbb{X}_{n, x} : \mathsf{SL}_{\Bbbk}(n)\to\mathsf{Sets}$, defined as $$\mathbb{X}_{n, x}(A)=\{y\in\mathbb{X}(A)\mid \mathbb{X}(\epsilon_A)(y)=x \}, A\in\mathsf{SL}_{\Bbbk}(n),$$ and $\mathbb{X}_x : \mathsf{SL}_{\Bbbk}\to \mathsf{Sets}$, defined as $$\mathbb{X}_{x}(A)=\{y\in\mathbb{X}(A)\mid \mathbb{X}(\epsilon_A)(y)=x \}, A\in\mathsf{SL}_{\Bbbk},$$ respectively. It is clear that $\mathbb{X}_{x}(A)=\mathbb{X}_{n, x}(A)$ provided $A\in\mathsf{SL}_{\Bbbk}(n)$. For exmple, if $\mathbb{G}$ is a group $\Bbbk$-functor and $e$ is the unit element of the group $\mathbb{G}(\Bbbk)$, then $\mathbb{G}_e$ is a group functor as well, analogous to the group functor $P_e$ from [@mats-oort]. For any $A\in\mathsf{SAlg}_{\Bbbk}$ set $$<\mathbb{X}_{n, x}>(A)=\cup_{B\in\mathsf{SL}_{\Bbbk}(n), \phi\in\mathrm{SSp}(B)(A)}\mathbb{X}(\phi)(\mathbb{X}_{n, x}(B))$$ and $$<\mathbb{X}_x>(A)=\cup_{B\in\mathsf{SL}_{\Bbbk}, \phi\in\mathrm{SSp}(B)(A)}\mathbb{X}(\phi)(\mathbb{X}_x(B)),$$ respectively. It is clear that all $<\mathbb{X}_{n, x}>$ and $<\mathbb{X}_x>$ are subfunctors of the functor $\mathbb{X}$. Let $\mathbb{X}$ be a superscheme. For a point $x\in \mathbb{X}(\Bbbk)$ and an open affine supersubscheme $\mathbb{U}$ of $\mathbb{X}$, such that $x\in\mathbb{U}(\Bbbk)$, let $\mathbb{N}_{n, x}(\mathbb{X})$ denote the closed supersubscheme of $\mathbb{U}$ that isomorphic to $\mathrm{SSp}(\mathcal{O}(\mathbb{U})/\mathfrak{M}^{n+1}_x)$, where $\mathfrak{M}_x$ is the kernel of $x : \mathcal{O}(\mathbb{U})\to\Bbbk$. Note that $\mathcal{O}(\mathbb{U})/\mathfrak{M}^{n+1}_x\in\mathsf{SL}_{\Bbbk}(k)$. **Lemma 32**. *The definition of $\mathbb{N}_{n, x}(\mathbb{X})$ does not depend on the choice of $\mathbb{U}$. In particular, $\mathbb{N}_{n, x}(\mathbb{X})$ is a closed supersubscheme of $\mathbb{X}$.* *Proof.* Let $\mathbb{V}$ be another affine supersubscheme of $\mathbb{X}$, such that $x\in\mathbb{V}(\Bbbk)$. Since there is an open affine supersubscheme $\mathbb{W}\subseteq\mathbb{U}\cap\mathbb{V}$ with $x\in\mathbb{W}(\Bbbk)$, one can assume that $\mathbb{V}\subseteq\mathbb{U}$. The latter embedding is dual to a superalgebra morphism $\phi : A\to B$, where $A=\mathcal{O}(\mathbb{U})$ and $B=\mathcal{O}(\mathbb{V})$. Note that the maximal superideal $\mathfrak{M}_x=\ker(x : A\to\Bbbk)$ equals $\phi^{-1}(\mathfrak{N}_x)$, where $\mathfrak{N}_x$ is the maximal superideal $\ker(x : B\to\Bbbk)$. 
By [@maszub1 Lemma 3.5] there are elements $a_1, \ldots, a_k\in A_0$ such that $\sum_{1\leq i\leq k} B_0\phi(a_i)=B_0$ and $\phi$ induces an isomorphism $A_{a_i}\simeq B_{\phi(a_i)}$ for each $i$. Then the natural embedding $\mathbb{N}_{n, x}(\mathbb{V})\to \mathbb{N}_{n, x}(\mathbb{U})$ is dual to the induced local morphism $\overline{\phi} : A/\mathfrak{M}^{n+1}_x\to B/\mathfrak{N}^{n+1}_x$ of local superalgebras. There is $a_i$ that does not belong to $\mathfrak{M}_x$, hence for its residue $\overline{a_i}$ modulo $\mathfrak{M}^{n+1}_x$ we have the isomorphism $$A/\mathfrak{M}^{n+1}_x=(A/\mathfrak{M}^{n+1}_x)_{\overline{a_i}}\to (B/\mathfrak{N}^{n+1}_x)_{\overline{\phi}(\overline{a_i})}=B/\mathfrak{N}^{n+1}_x .$$ Finally, $\mathbb{N}_{n, x}(\mathbb{X})$ is an affine superscheme, hence a local functor. The obvious superization of [@jan Lemma I.1.13] implies the second statement. Lemma is proven. ◻ **Remark 33**. *If $\mathbb{X}$ is represented by a geometric superscheme $X$, then the closed supersubscheme $\mathbb{N}_{n, x}(\mathbb{X})$ is represented by the **$n$-th neighborhood** of the superscheme morphism $\mathrm{SSpec}(\Bbbk)\to X$ that sends the unique point of the underlying topological space of $\mathrm{SSpec}(\Bbbk)$ to $x$ (see [@maszub2 Section 1.4]).* We have an ascending series of closed supersubschemes $$\mathbb{N}_{1, x}(\mathbb{X})\subseteq \mathbb{N}_{2, x}(\mathbb{X})\subseteq\ldots$$ and hence the subfunctor $\mathbb{N}_{x}(\mathbb{X})=\cup_{n\geq 1}\mathbb{N}_{n, x}(\mathbb{X})$. **Lemma 34**. *If $\mathbb{X}$ is a superscheme, then for any $x\in\mathbb{X}(\Bbbk)$ and $n\geq 1$ we have $\mathbb{N}_x(\mathbb{X})=<\mathbb{X}_x>$ and $\mathbb{N}_{n, x}(\mathbb{X})=<\mathbb{X}_{n, x}>$.* *Proof.* Let $B\in\mathsf{SL}_{\Bbbk}(n)$. An element $y\in\mathbb{X}(B)$ belongs to $\mathbb{X}_{n, x}(B)$ if and only if $y$ sends the inique point of the underlying topological space $|\mathrm{SSpec}(B)|$ to $x$. If $U$ is an open supersubscheme of $X$ such that $x\in |U|$, then $y\in\mathbb{U}(B)$, hence $\mathbb{X}_{n, x}(B)\subseteq\mathbb{U}(B)$. Thus $<\mathbb{X}_{n, x}>\subseteq \mathbb{U}$ for any $n$, hence $<\mathbb{X}_x>\subseteq\mathbb{U}$ as well. It remains to show that our statement holds for $\mathbb{X}$ to be affine. As above, $x\in\mathbb{X}(\Bbbk)$ is a superalgebra morphism $\mathcal{O}(\mathbb{X})\to\Bbbk$ and the superalgebra morphism $y : \mathcal{O}(\mathbb{X})\to A$ belongs to $\mathbb{N}_{n, x}(\mathbb{X})(A)$ if and only if $(\ker x)^{n+1}\subseteq \ker y\subseteq\ker x$. Lemma is proven. ◻ For any open affine supersubscheme $\mathbb{U}$ in $\mathbb{X}$ such that $x\in\mathbb{U}(\Bbbk)$, the stalk $\mathcal{O}_{X, x}=\mathcal{O}_x$ is isomorphic to $\mathcal{O}(\mathbb{U})_{\mathfrak{M}_x}$. Moreover, if $\mathbb{X}$ is locally algebraic, then $\widehat{\mathcal{O}_x}\simeq \varprojlim_k \mathcal{O}(\mathbb{U})/\mathfrak{M}^{n+1}_x$ is a c.l.a.N. superalgebra. As in [@maszub2 Section 9] one can note that $\mathbb{N}_x(\mathbb{X})$ is strictly pro-representable, provided $\mathbb{X}$ is a locally algebraic superscheme. Indeed, for any $A\in\mathsf{SAlg}_{\Bbbk}$ the set $\mathbb{N}_x(\mathbb{X})(A)$ can be naturally identified with the set of all continuous superalgebra morphisms $\widehat{\mathcal{O}_x}\to A$, where $A$ is regarded as a discrete superalgebra. Note that an element $a$ of a superalgebra $A$ is nilpotent if and only if its even component $a_0$ is. 
Thus one can easily derive that the set of all nilpotent elements of $A$ is the largest locally nilpotent superideal. We denote this superideal by $\mathrm{nil}(A)$. Then $\mathrm{nil}(A)=\mathrm{nil}(A_0)\oplus A_1$. The following lemma is a folklore. **Lemma 35**. *Let $\phi : A\to B$ be an injective superalgebra morphism, such that the induced superscheme morphism $\mathrm{SSpec}(\phi)$ is surjective on the underlying topological spaces. Then for any (not necessary of locally finite type) superscheme $\mathbb{X}$ the map $\mathbb{X}(A)\to\mathbb{X}(B)$ is injective.* *Proof.* Let $\pi$ denote $\mathrm{SSpec}(\phi)$. The statement of lemma is equivalent to the following : if $x, y\in\mathbb{X}(A)$ such that $x\pi=y\pi$, then $x=y$. It is clear that for any open affine supersubscheme $U$ of $X$ there holds $x^{-1}(U)=y^{-1}(U)$. Thus all we need is to show that the restrictions of $x$ and $y$ on any open affine supersubscheme $\mathrm{SSpec}(A_a)\subseteq x^{-1}(U)=y^{-1}(U)$, where $a\in A_0$, coincide to each other. Since $A_a\to B_a$ satisfies the conditions of lemma, the proof reduces to the case when $X$ is affine and the statement of lemma is obvious. ◻ The following lemma is a partial superization of [@mur I.3]. **Lemma 36**. *Let $\mathbb{X}$ be a locally algebraic superscheme. Then for any inductive system of superalgebras $\{A_{\alpha}\}_{\alpha\in I}$ the canonical map $p : \varinjlim\mathbb{X}(A_{\alpha})\to \mathbb{X}(\varinjlim A_{\alpha})$ is bijective.* *Proof.* Let $A$ denote $\varinjlim A_{\alpha}$. Assume that we have two morphisms $x : \mathrm{SSpec}(A_{\alpha})\to X$ and $y : \mathrm{SSpec}(A_{\beta})\to X$ such that the diagram $$\begin{array}{ccc} \mathrm{SSpec}(A) & \to & \mathrm{SSpec}(A_{\alpha}) \\ \downarrow & & \downarrow x\\ \mathrm{SSpec}(A_{\beta}) & \stackrel{y}{\to} & X \end{array}$$ is commutative. There are a finite collection of affine open supersubschemes $U_i$ in $X$ and two finite collections of elements $x_{ij}\in A_{\alpha}, y_{is}\in A_{\beta}$, where $1\leq i\leq k, 1\leq j\leq m, 1\leq s\leq n$, such that 1. $\mathrm{SSpec}((A_{\alpha})_{x_{ij}})\subseteq x^{-1}(U_i)$ and $\mathrm{SSpec}((A_{\beta})_{y_{is}})\subseteq y^{-1}(U_i)$ for each triple $i, j, s$; 2. $\cup_{1\leq i\leq k, 1\leq j\leq m}\mathrm{SSpec}((A_{\alpha})_{x_{ij}})=\mathrm{SSpec}(A_{\alpha})$ and $\cup_{1\leq i\leq k, 1\leq s\leq n}\mathrm{SSpec}((A_{\beta})_{y_{is}})=\mathrm{SSpec}(A_{\beta})$. The commutativity of the above diagram is equivalent to the commutativity of the diagram $$\begin{array}{ccc} A_{x_{ij}y_{is}} & \leftarrow & (A_{\alpha})_{x_{ij}} \\ \uparrow & & \uparrow \\ (A_{\beta})_{y_{is}} & \leftarrow & \mathcal{O}(U_i) \end{array}$$ for any triple $i, j, s$. Since each superalgebra $\mathcal{O}(U_i)$ is finitely generated, there is $\gamma\geq\alpha, \beta$ such that these diagrams remain commutative even if we replace $A$ by $A_{\gamma}$. In other words, the images of $x$ and $y$ in $\mathbb{X}(A_{\gamma})$ coincide to each other, hence $p$ is injective. Further, if $x\in\mathbb{X}(A)$, then $x$ is uniquely defined by its restrictions $\mathrm{SSpec}(A_{x_{ij}})\to U_i$, where $\mathrm{SSpec}(A_{x_{ij}})$ form a finite covering of $\mathrm{SSpec}(A)$ by open affine supersubschemes. Again, since each $\mathcal{O}(U_i)$ is finitely generated, all the corresponding superalgebra morphism $\mathcal{O}(U_i)\to A_{x_{ij}}$ factor through some $(A_{\alpha})_{x_{ij}}$, such that any $x_{ij}$ belongs to $A_{\alpha}$ as well. Thus $p$ is surjective. 
◻ For any superalgebra $A$ we define a local supersubalgebra $\mathsf{n}(A)=\Bbbk\oplus\mathrm{nil}(A)$. The natural augmentation map $\mathsf{n}(A)\to\Bbbk$ is denoted by the same symbol $\epsilon_A$. The latter does not lead to confusion with the previous notation. Let $\pi_A$ and $\iota_A$ also denote the quotient superalgebra morphism $A\to A/\mathrm{nil}(A)=\widetilde{A}$ and the natural injection $\mathsf{n}(A)\to A$ respectively. **Proposition 37**. *Let $\mathbb{X}$ be a locally algebraic superscheme. For any $A\in\mathsf{SAlg}_{\Bbbk}$ we define $\mathbb{X}_x(\mathsf{n}(A))=\{y\in\mathbb{X}(\mathsf{n}(A))\mid \mathbb{X}(\epsilon_A)(y)=x\}$. Then the injection $\iota_A$ induces the bijection $\mathbb{X}_x(\mathsf{n}(A))\to \mathsf{N}_x(\mathbb{X})(A)$.* *Proof.* It is clear that the collection of all (augmented) finitely generated supersubalgebras $C_{\alpha}$ of $\mathsf{n}(A)$ form an inductive system, such that $\mathsf{n}(A)=\varinjlim C_{\alpha}=\cup_{\alpha\in I}C_{\alpha}$. Besides, each $C_{\alpha}$ belongs to $\mathsf{SL}_{\Bbbk}$. Lemma [Lemma 35](#folklore){reference-type="ref" reference="folklore"} implies that $\mathbb{X}_x(\mathsf{n}(A))\to\mathbb{X}(A)$ and each $\mathbb{X}_x(C_{\alpha})\to \mathbb{X}(A)$ are injective. Since $\mathbb{X}_x(\mathsf{n}(A))\simeq\varinjlim\mathbb{X}_x(C_{\alpha})$, the map $\mathbb{X}_x(\mathsf{n}(A))\to\mathbb{X}(A)$ is the direct limit of the maps $\mathbb{X}_x(C_{\alpha})\to \mathbb{X}(A)$, hence the image $\mathbb{X}_x(\mathsf{n}(A))$ is contained in $\mathsf{N}_x(\mathbb{X})(A)$. Conversely, for any $B\in\mathsf{SL}_{\Bbbk}(n)$ and any superalgebra morphism $B\to A$ the map $\mathbb{X}_{n, x}(B)\to\mathbb{X}(A)$ obviously factors through $\mathbb{X}_x(\mathsf{n}(A))\to\mathbb{X}(A)$. Proposition is proven. ◻ Let $x_A$ denote the image of $x$ in $\mathbb{X}(A)$ with respect to the natural map $\mathbb{X}(\Bbbk)\to \mathbb{X}(A)$. In particular, $x_{\Bbbk}=x$. **Proposition 38**. *Let $\mathbb{X}$ be a locally algebraic superscheme. Then $\mathbb{N}_x(\mathbb{X})(A)=\mathbb{X}(\pi_A)^{-1}(x_{\widetilde{A}})$.* *Proof.* Recall that an element $y\in\mathbb{X}(A)$ is interpreted as a superscheme morphism $\mathrm{SSpec}(A)\to X$. Then its image in $\mathbb{X}(\widetilde{A})$ coincides with $z=y \mathrm{SSpec}(\pi_A)$. Moreover, $z=x_{\widetilde{A}}$ if and only if the diagram $$\begin{array}{ccc} X & \stackrel{x}{\leftarrow} & \mathrm{SSpec}(\Bbbk) \\ y \uparrow & & \uparrow \\ \mathrm{SSpec}(A) & \stackrel{\mathrm{SSpec}(\pi_A)}{\leftarrow} & \mathrm{SSpec}(\widetilde{A}) \end{array}$$ is commutative. Similarly, $y$ belongs to $\mathbb{N}_x(\mathbb{X})(A)=\mathbb{X}_x(\mathsf{n}(A))$ if and only if the diagram $$\begin{array}{ccc} X & \stackrel{x}{\leftarrow} & \mathrm{SSpec}(\Bbbk) \\ y \uparrow & \nwarrow & \downarrow \\ \mathrm{SSpec}(A) & \stackrel{\mathrm{SSpec}(\iota_A)}{\rightarrow}& \mathrm{SSpec}(\mathsf{n}(A)) \end{array}$$ is commutative for some morphism $\mathrm{SSpec}(\mathsf{n}(A))\to X$. Therefore, all we need is to prove that $y$ makes the first diagram commutative if and only if $y$ does the second one. Since $\mathsf{n}(A)\stackrel{\iota_A}{\to} A\stackrel{\pi_A}{\to}\widetilde{A}$ factors through the natural injection $\Bbbk\to\widetilde{A}$, Proposition [Proposition 37](#another characterization of neighborhood){reference-type="ref" reference="another characterization of neighborhood"} implies $\mathbb{N}_x(\mathbb{X})(A)\subseteq \mathbb{X}(\pi_A)^{-1}(x_{\widetilde{A}})$. It remains to prove \"only if\" part. 
Since $|\mathrm{SSpec}(\pi_A)|$ is a homeomorphism of underlying topological spaces, the commutativity of the first diagram implies that $y$ sends all points of $|\mathrm{SSpec}(A)|$ to $x$. Without loss of a generality, one can replace $X$ by any open affine supersubscheme $U$ such that $x\in\mathbb{U}(\Bbbk)$. Then we have a commutative diagram $$\begin{array}{ccc} \mathcal{O}(U) & \stackrel{x}{\rightarrow} & \Bbbk \\ y \downarrow & & \downarrow \\ A & \stackrel{\pi_{A}}{\rightarrow} & \widetilde{A} \end{array}.$$ This immediately implies that $y$ factors through $\iota_A$, so that the diagram $$\begin{array}{ccc} \mathcal{O}(U) & \stackrel{x}{\rightarrow} & \Bbbk \\ y \downarrow & \searrow & \uparrow \\ A & \stackrel{\iota_{A}}{\leftarrow} & \mathsf{n}(A) \end{array},$$ is commutative. Proposition is proven. ◻ **Remark 39**. *In terms of Section $3$, $\mathbb{N}_x(\mathbb{X})$ is just the functor of points of the formal completion of $X$ along the closed supersubscheme $Y\simeq \mathrm{SSpec}(\Bbbk)$, such that $|Y|=\{ x\}$. In fact, it has been already explained in the penultimate paragraph before Lemma [Lemma 35](#folklore){reference-type="ref" reference="folklore"}. Proposition [Proposition 38](#final about neighborhood){reference-type="ref" reference="final about neighborhood"} gives a constructive description of $\mathbb{N}_x(\mathbb{X})$ in terms of $\Bbbk$-functors only.* **Corollary 40**. *(see [@maszub2 Lemma 9.5]) Let $\mathbb{G}$ be a locally algebraic group superscheme. Then **the formal neighborhood of the identity** $\mathbb{N}_e(\mathbb{G})$ is a normal group subfunctor of $\mathbb{G}$, strictly pro-representable by the c.l.a.N. Hopf superalgebra $\widehat{\mathcal{O}_{G, e}}$ (cf. [@hmt Definition 3.3]).* # Automorphism group functor of a sheaf on site Let $\mathcal{C}$ be a *site*, that is a category equipped with a Grothendieck topology. More precisely, for any object $U$ of $\mathcal{C}$ there is a collection of sets of arrows $\{U_i\to U \}$, called *coverings* of $U$, such that the following conditions hold : 1. if $V\to U$ is an isomorphism, then $\{V\to U \}$ is a covering; 2. if $\{U_i\to U \}$ is a covering and $V\to U$ is any arrow, then all fibered products $\{ U_i\times_U V\}$ exist and $\{U_i\times_U V\to V \}$ is a covering; 3. if $\{U_i\to U \}$ is a covering, and for any $i$ we have a covering $\{V_{ij}\to U_i\}$, then $\{V_{ij}\to U_i\to U \}$ is a covering. A *sheaf* on site $\mathcal{C}$ is a (contravariant) functor such that for any covering $\{ U_i\to U \}$ the natural sequence $$X(U) \to \prod_{i} X(U_i)\rightrightarrows\prod_{i, j} X(U_i\times_U U_j)$$ is exact (cf. [@fundalg], 2.3.3). Let $\mathcal{C}$ be a category and $X : \mathcal{C}^{op}\to\mathsf{Sets}$ be a covariant functor. For an object $A\in\mathcal{C}$ let $\mathcal{C}_A$ denote a category of pairs $(A', \phi)$, where $A'\in\mathcal{C}, \phi\in\mathsf{Mor}_{\mathcal{C}}(A', A)$, with morphisms $(A', \phi)\to (A'', \gamma)$ are just morphisms $\xi : A'\to A''$ in $\mathcal{C}$ such that $\gamma\xi=\phi$. Any object from $\mathcal{C}_A$ is called an *$A$-object* as well as any morphism in $\mathcal{C}_A$ is called an *$A$-morphism*. The functor $X$ induces a functor $X_A : \mathcal{C}_A^{op}\to\mathsf{Sets}$, such that $X((A', \phi))=X(A')$. 
We define the *automorphism group functor* $$\mathfrak{Aut}(X) : \mathcal{C}^{op}\to \mathsf{Gr}$$ of the functor $X$ as $$\mathfrak{Aut}(X)(A)=\{f\in\mathfrak{Mor}_{\mathcal{C}_A}(X_A, X_A)\mid f \ \mbox{is \ invertible}\}, A\in\mathcal{C}.$$ Observe that for any morphism $\alpha : A\to B$ in $\mathcal{C}$, there is a functor $\alpha_* : \mathcal{C}_A\to\mathcal{C}_{B}$ that sends a pair $(A', \phi)$ to $(A', \alpha\phi)$. In particular, any functor morphism $f : X_B\to X_B$ can be \"restricted\" to a functor morphism $f_A : X_A\to X_A$, and the map $f\mapsto f_A$ defines a group morphism $\mathfrak{Aut}(X)(B)\to\mathfrak{Aut}(X)(A)$. **Theorem 41**. *Assume that $\mathcal{C}$ is a site and $X$ is a sheaf on $\mathcal{C}$. Then the group functor $\mathfrak{Aut}(X)$ is a sheaf on $\mathcal{C}$ with respect to the same Grothendieck topology.* *Proof.* Consider an open covering $\{\alpha_i : A_i\to A\}$ of an object $A$. Assume that $\prod_{i\in I} f_i\in \prod_{i\in I}\mathfrak{Aut}(X)(A_i)$ belongs to the kernel of the map $$\prod_{i\in I}\mathfrak{Aut}(X)(A_i)\rightrightarrows\prod_{i, j\in I}\mathfrak{Aut}(X)(A_i\times_A A_j),$$ that is for any couple of indexes $i, j$ there holds $(f_i)_{A_i\times_A A_j}=(f_j)_{A_i\times_A A_j}$. More precisely, for any $A_i\times_A A_j$-object $B$, regarded as $A_i$-object and $A_j$-object via the canonical \"projections\" $A_i\times_A A_j\to A_i$ and $A_i\times_A A_j\to A_j$ respectively, $f_i(B)=f_j(B)$. Let $B$ be an $A$-object. We have an open covering $\{\beta_i : B_i\to B\}$, where $B_i=B\times_A A_i, i\in I$. For any $x\in X(B)$ set $X(\beta_i)(x)=x_i\in X(B_i)$. Then the element $\prod_{i\in I} f_i(x_i)$ belongs to the kernel of the map $$\prod_{i\in I}X(B_i)\rightrightarrows\prod_{i, j\in I, i\neq j}X(B_i\times_B B_j).$$ Indeed, the universal property of fibred products implies that there is a canonical morphism $B_i\times_B B_j\to A_i\times_A A_j$ that makes the diagrams $$\begin{array}{ccc} B_i\times_B B_j & \stackrel{\pi_i}{\to} & B_i \\ \downarrow & & \downarrow \\ A_i\times_A A_j & \to & A_i \end{array}$$ and $$\begin{array}{ccc} B_i\times_B B_j & \stackrel{\pi_j}{\to} & B_j \\ \downarrow & & \downarrow \\ A_i\times_A A_j & \to & A_j \end{array}$$ commutative, that is $\pi_i$ and $\pi_j$ are $A_i$-morphism and $A_j$-morphism correspondingly. Thus $$X(\pi_i)(f_i(x_i))=f_i(X(\pi_i)(x_i))=(f_i)_{A_i\times_A A_j}(X(\pi_i)(x_i))=$$ $$(f_j)_{A_i\times_A A_j}(X(\pi_j)(x_j))=f_j(X(\pi_j)(x_j))=X(\pi_j)(f_j(x_j)).$$ Since $X$ is a sheaf, there is the unique element $z\in X(B)$ such that for each $i$ we have $z_i=f_i(x_i)$. Set $f(x)=z$. First, we claim that $f_{A_i}=f_i$ for each index $i$. Let $B$ be an $A_i$-object, hence an $A$-object as well, via morphism $\alpha_i$. For any indices $i, j$ the composition of $\beta_j$ and the morphism $B\to A_i$ defines a morphism $B_j\to A_i$, which makes the diagram $$\begin{array}{ccc} B_j & \to & A_j \\ \downarrow & & \downarrow \\ A_i & \to & A \end{array}$$ commutative. Therefore, both morphisms $B_j\to A_j$ and $B_j\to A_i$ factor through the unique morphism $B_j\to A_i\times_A A_j$, and thus we have $$f_j(x_j)=f_i(x_j)=X(\beta_j)(f_i(x)),$$ whence $f(x)=f_i(x)$. Second, for any $A$-morphism $\gamma : B\to B'$ and any index $i$, there is the unique $A_i$-morphism $\gamma_i : B_i\to B'_i$ such that the diagram $$\begin{array}{ccc} B_i & \stackrel{\gamma_i}{\to} & B'_i \\ \downarrow & & \downarrow \\ B & \to & B' \end{array}$$ commutative. 
Continuing as above, one can find the unique morphism $$\gamma_{ij} : B_i\times_B B_j\to B'_i\times_{B'} B'_j$$ that makes the diagrams $$\begin{array}{ccc} B_i\times_B B_j & \to & B'_i\times_{B'} B'_j \\ \downarrow & & \downarrow \\ B_i & \to & B'_i \end{array}$$ and $$\begin{array}{ccc} B_i\times_B B_j & \to & B'_i\times_{B'} B'_j \\ \downarrow & & \downarrow \\ B_j & \to & B'_j \end{array}$$ commutative. Combining all, we obtain the commutative diagram $$\begin{array}{ccccccc} X(B) & \to & \prod_{i\in I} X(B_i) & \stackrel{\prod_{i\in I} f_i}{\to} & \prod_{i\in I} X(B_i) & \rightrightarrows & \prod_{i, j\in I, i\neq j} X(B_i\times_B B_j) \\ \downarrow & & \downarrow & & \downarrow & & \downarrow \\ X(B') & \to & \prod_{i\in I} X(B'_i) & \stackrel{\prod_{i\in I} f_i}{\to} & \prod_{i\in I} X(B'_i) & \rightrightarrows & \prod_{i, j\in I, i\neq j} X(B'_i\times_{B'} B'_j), \end{array}$$ where the vertical arrows are the maps $X(\gamma), \prod_{i\in I} X(\gamma_i)$ and $\prod_{i, j\in I, i\neq j}\gamma_{ij}$ respectively. It obviously follows that $X(\gamma)(f(x))=f(X(\gamma)(x))$. We left for the reader to check that $f$ is invertible. Theorem is proved. ◻ # First properties of $\mathfrak{Aut}(\mathbb{X})$ **Lemma 42**. *Let $X$ be a superscheme and $A$ be a superalgebra. Let $V$ be an open affine supersubscheme of $X\times\mathrm{SSpec}(A)$. Then for any superideal $I$ of $A$ the closed, hence affine, supersubscheme $V\cap (X\times \mathrm{SSpec}(A/I))$ of $V$ is defined by the superideal $\mathcal{O}(V)I$.* *Proof.* Assume that $\{ U\}$ is a covering of $X$ by open affine supersubschemes. Then $X\times\mathrm{SSpec}(A)$ is covered by open affine supersubschemes $U\times\mathrm{SSpec}(A)$ as well as $X\times\mathrm{SSpec}(A/I)$ is covered by open affine supersubschemes $$U\times\mathrm{SSpec}(A/I)=(U\times\mathrm{SSpec}(A))\cap (X\times\mathrm{SSpec}(A/I)).$$ Each open supersubscheme $V\cap (U\times\mathrm{SSpec}(A))$ is covered by open affine supersubschemes $\mathrm{SSpec}(\mathcal{O}(V)_g), g\in\mathcal{O}(V)_0$. Collecting all such $\mathrm{SSpec}(\mathcal{O}(V)_g)$ we obtain a covering of $V$, hence $\sum_g \mathcal{O}(V)_0 g=\mathcal{O}(V)_0$. Furthermore, there is $$W=V\cap (X\times\mathrm{SSpec}(A/I))=\mathrm{SSpec}(\mathcal{O}(V)/J)$$ for some superideal $J$ of $\mathcal{O}(V)$, and $W$ is covered by (finitely many!) open affine supersubschemes $(\mathrm{SSpec}(\mathcal{O}(V)_g))\cap (U\times\mathrm{SSpec}(A/I))$. All we need is to show that $$(\mathrm{SSpec}(\mathcal{O}(V)_g))\cap(U\times\mathrm{SSpec}(A/I))= \mathrm{SSpec}((\mathcal{O}(V)/I)_g)$$ for each couple $g, U$ and refer to [@zub1 Corollary 1.1]. In other words, it remains to prove that if $V$ is an open affine supersubscheme of some affine superscheme $\mathrm{SSpec}(B)$ and $I$ is a superideal of $B$, then we have $$V\cap\mathrm{SSpec}(B/I)=\mathrm{SSpec}(\mathcal{O}(V)/\mathcal{O}(V)I).$$ This statement is an obvious consequence of [@maszub1 Lemma 3.5] and [@zub1 Corollary 1.1] (see also the proof of Lemma [Lemma 32](#it does not depend on U){reference-type="ref" reference="it does not depend on U"}). ◻ **Lemma 43**. *Let $X$ be a superscheme over $\mathrm{SSpec}(A)$. Let $A\to B$ be a superalgebra morphism such that $\mathrm{SSpec}(B)\to\mathrm{SSpec}(A)$ is surjective on the underlying topological spaces. 
If $V$ and $V'$ are open supersubschemes in $X$ such that $$V\times_{\mathrm{SSpec}(A)}\mathrm{SSpec}(B)=V'\times_{\mathrm{SSpec}(A)}\mathrm{SSpec}(B),$$ then $V=V'$.* *Proof.* Translating to the category of $\Bbbk$-functors we have $$\mathbb{V}\times_{\mathrm{SSp}(A)}\mathrm{SSp}(B)=\mathbb{V}'\times_{\mathrm{SSp}(A)}\mathrm{SSp}(B).$$ In particular, for any field extension $\Bbbk\subseteq L$ there holds $$\mathbb{V}(L)\times_{\mathrm{SSp}(A)(L)}\mathrm{SSp}(B)(L)=\mathbb{V}'(L)\times_{\mathrm{SSp}(A)(L)}\mathrm{SSp}(B)(L).$$ The condition of lemma implies that $\mathrm{SSp}(B)(L)\to \mathrm{SSp}(A)(L)$ is surjective, thus $\mathbb{V}(L)=\mathbb{V}'(L)$. Superizing [@jan I.1.7(4)] we obtain $\mathbb{V}=\mathbb{V}'$, or $V=V'$. ◻ For any $A\in\mathsf{SAlg}_{\Bbbk}$ let $\mathfrak{T}(A)$ denote $\ker(\mathfrak{Aut}(\mathbb{X})(\pi_A))$, where $\pi_A : A\to \widetilde{A}$ is the canonical epimorphism. **Proposition 44**. *The following statements hold :* 1. *for any superalgebra monomorphism $A\to B$ such that $\mathrm{SSpec}(B)\to\mathrm{SSpec}(A)$ is surjective on the underlying topological spaces, the map $\mathfrak{Aut}(\mathbb{X})(A)\to \mathfrak{Aut}(\mathbb{X})(B)$ is injective;* 2. *there is $\mathfrak{T}(A)=\mathfrak{Aut}(\mathbb{X})_e(\mathsf{n}(A))=\ker(\mathfrak{Aut}(\mathbb{X})(\epsilon_A))$.* *Proof.* We start by proving the second assertion, the proof of the first one is similar and will be briefly outlined. Recall that the group $\mathfrak{Aut}(\mathbb{X})(A)$ can be identified with the group of $\mathrm{SSpec}(A)$-automorphisms of $X\times \mathrm{SSpec}(A)$, regarded as an $\mathrm{SSpec}(A)$-superscheme via the projection $\mathrm{pr}_2$. Choose a covering of $X$ by open affine supersubschemes $U_i$. The automorphism $\phi\in \mathfrak{Aut}(\mathbb{X})(A)$ is uniquely defined by its restrictions on affine supersubschemes $V_i=\phi^{-1}(U_i\times \mathrm{SSpec}(A))$, i.e. by isomorphisms $\mathcal{O}(U_i)\otimes A\to \mathcal{O}(V_i)$ of $A$-superalgebras. Respectively, Lemma [Lemma 42](#specific intersection){reference-type="ref" reference="specific intersection"} implies that the image of $\widetilde{\phi}$ in $\mathfrak{Aut}(\mathbb{X})(\widetilde{A})$ is defined by the induced morphisms $\mathcal{O}(V_i)/\mathcal{O}(V_i)\mathrm{nil}(A)\to \mathcal{O}(U_i)\otimes\widetilde{A}$. If $\widetilde{\phi}$ is the identity automorphism, then $$\mathrm{SSpec}(\mathcal{O}(V_i)/\mathcal{O}(V_i)\mathrm{nil}(A))=\mathrm{SSpec}(\mathcal{O}(U_i))\otimes\widetilde{A}).$$ On the other hand, since for any $A$-superalgebra $B$ there is $\mathrm{nil}(A)B\subseteq\mathrm{nil}(B)$, it obviously implies that $|V_i|=|U_i\times\mathrm{SSpec}(A)|$, hence $V_i= U_i\times\mathrm{SSpec}(A)$. Therefore, $\phi$ is defined by isomorphisms $\mathcal{O}(U_i)\otimes A\to \mathcal{O}(U_i)\otimes A$ of $A$-superalgebras. Assume that $\mathcal{O}(U_i)$ is generated by the elements $f_1, \ldots, f_t$. Then the isomorphism $\mathcal{O}(U_i)\otimes A\to \mathcal{O}(U_i)\otimes A$ is determined on generators as $$f_l\mapsto\sum_k f_{lk}\otimes a_{lk}, f_{lk}\in\mathcal{O}(U_i), a_{lk}\in A, 1\leq l\leq t.$$ Let $\widetilde{a_{lk}}$ denote the images of $a_{lk}$ in $\widetilde{A}$. Without loss of a generality one can assume that the nonzero $\widetilde{a_{lk}}$ are linearly independent for each $l$. 
The condition $\widetilde{\phi}=\mathrm{id}_{X\times \mathrm{SSpec}(\widetilde{A})}$ is equivalent to $$f_l\otimes 1= \sum_k f_{lk}\otimes\widetilde{a_{lk}}, 1\leq l\leq t .$$ In other words, for each $l$ there is only one $k$ such that $\widetilde{a_{lk}}\neq 0$. Moreover, for this index $k$ we have $\widetilde{a_{lk}}=1$ and $f_{lk}=f_l$. Thus it obviously follows that $\phi\in\mathfrak{Aut}(\mathbb{X})_e(\mathsf{n}(A))$ and (ii) is proven. Recall that $X\times\mathrm{SSpec}(B)$ is canonically isomorphic to the fibered product $$(X\times\mathrm{SSpec}(A))\times_{\mathrm{SSpec}(A)}\mathrm{SSpec}(B),$$ so that the image of $\phi$ in $\mathfrak{Aut}(\mathbb{X})(B)$ coincides with $\phi\times_{ \mathrm{SSpec}(A)} \mathrm{id}_{\mathrm{SSpec}(B)}$. If this image is the identity morphism, then $$V_i\times_{\mathrm{SSpec}(A)}\mathrm{SSpec}(B)=(U_i\times\mathrm{SSpec}(A))\times_{\mathrm{SSpec}(A)}\mathrm{SSpec}(B)$$ for each index $i$. Lemma [Lemma 43](#cancellation){reference-type="ref" reference="cancellation"} infers $V_i=U_i\times\mathrm{SSpec}(A)$, and repeating the above arguments with the generators of each $\mathcal{O}(U_i)$, one immediately sees that $\phi=\mathrm{id}_{X\times\mathrm{SSpec}(A)}$. ◻ Proposition [Proposition 44](#nice property of T){reference-type="ref" reference="nice property of T"} implies that the natural group morphism $\mathfrak{Aut}(\mathbb{X})(A_0)\to \mathfrak{Aut}(\mathbb{X})(A)$ is injective for any superalgebra $A$. Therefore, the group functor $A\mapsto \mathfrak{Aut}(\mathbb{X})_{ev}(A)=\mathfrak{Aut}(\mathbb{X})(A_0)$ can be regarded as a group subfunctor of $\mathfrak{Aut}(\mathbb{X})$. # $\mathfrak{Aut}(\mathbb{X})$ commutes with direct limits of superalgebras The following lemma is a folklore (see [@hart II, Exercise 3.3]). **Lemma 45**. *If $Y$ is of finite type over $\mathrm{SSpec}(A)$ and $V$ is an open affine supersubscheme of $Y$, then $\mathcal{O}(V)$ is a finitely generated $A$-superalgebra.* *Proof.* By [@maszub2 Lemma 1.10] there is a finite covering of $Y$ by open affine supersubschemes $U$ such that each $\mathcal{O}(U)$ is a finitely generated $A$-superalgebra. Then each $V\cap U$ can be covered by open affine supersubschemes $\mathrm{SSpec}(\mathcal{O}(U)_h), h\in\mathcal{O}(U)_0$. Since any affine superscheme is quasi-compact, $V$ can be covered by finitely many such supersubschemes (in general, for various $U$ and $h$). Applying [@maszub2 Lemma 1.10] again, we obtain that the induced morphism $V\to\mathrm{SSpec}(A)$ is of finite type, hence the $A$-superalgebra $\mathcal{O}(V)$ is finitely generated. ◻ **Lemma 46**. *Let $Y$ be a superscheme of finite type over $\mathrm{SSpec}(A)$. For any superalgebra morphism $A\to B$ and for arbitrary open quasi-compact supersubschemes $V$ and $V'$ of $Y$, such that $$V\times_{\mathrm{SSpec}(A)} \mathrm{SSpec}(B)=V'\times_{\mathrm{SSpec}(A)} \mathrm{SSpec}(B),$$ there is a finitely generated $A$-supersubalgebra $B'$ of $B$, that satisfies $$V\times_{\mathrm{SSpec}(A)} \mathrm{SSpec}(B')=V'\times_{\mathrm{SSpec}(A)} \mathrm{SSpec}(B').$$* *Proof.* Choose a covering of $Y$ by open affine supersubschemes $U$ as in Lemma [Lemma 45](#finite type property){reference-type="ref" reference="finite type property"}. Since $$V\times_{\mathrm{SSpec}(A)} \mathrm{SSpec}(B)=V'\times_{\mathrm{SSpec}(A)} \mathrm{SSpec}(B)$$ if and only if $$(U\cap V)\times_{\mathrm{SSpec}(A)} \mathrm{SSpec}(B)=(U\cap V')\times_{\mathrm{SSpec}(A)} \mathrm{SSpec}(B)$$ for each $U$, one can assume that $Y$ is affine. 
Then both $V$ and $V'$ are covered by finitely many open supersubschemes $\mathrm{SSpec}(\mathcal{O}(Y)_g)$ and $\mathrm{SSpec}(\mathcal{O}(Y)_h)$ respectively, where $g, h\in\mathcal{O}(V)_0$. Again, the original equality is equivalent to the finitely many equalities $$\mathrm{SSpec}(\mathcal{O}(Y)_g)\times_{\mathrm{SSpec}(A)} \mathrm{SSpec}(B)=(\cup_h \mathrm{SSpec}(\mathcal{O}(Y)_{gh}))\times_{\mathrm{SSpec}(A)} \mathrm{SSpec}(B)$$ for each element $g$. Thus it remains to consider the case when $V=Y$ and $V'=\cup_h \mathrm{SSpec}(\mathcal{O}(Y)_{h})$. We have $$\mathrm{SSpec}(\mathcal{O}(Y)\otimes_A B)=\cup_h \mathrm{SSpec}((\mathcal{O}(Y)\otimes_A B)_{h\otimes 1}),$$ that is equivalent to $$\sum_h (h\otimes 1)(\sum_i f_{i h}\otimes b_{i h})=\sum_{i, h} hf_{i h}\otimes b_{i h}=1\otimes 1 ,$$ where $f_{i h}\in\mathcal{O}(V), b_{i h}\in B, |f_{i h}|+|b_{i h}|=0$ for any $i, h$. By setting $B'=A[b_{i h}\mid i, h]$ our lemma follows. ◻ **Proposition 47**. *Let $X$ be an algebraic superscheme and let $\{ V_i\}_{1\leq i\leq m}$ be a finite covering of $X\times\mathrm{SSpec}(A)$ by open affine supersubschemes. Then there are a finitely generated $\Bbbk$-supersubalgebra $A'$ of $A$ and a finite covering $\{V'_i \}_{1\leq i\leq m}$ of $X\times\mathrm{SSpec}(A')$ by open affine supersubschemes, such that each diagram $$\begin{array}{ccc} V_i & \to & X\times\mathrm{SSpec}(A) \\ \downarrow & & \downarrow \\ V'_i\times_{\mathrm{SSpec}(A')}\mathrm{SSpec}(A) & \to & (X\times\mathrm{SSpec}(A'))\times_{\mathrm{SSpec}(A')}\mathrm{SSpec}(A) \end{array}$$ is commutative and the left vertical arrow is the isomorphism induced by the natural isomorphism on the right.* *Proof.* Let $\{U_j\}_{1\leq j\leq n}$ be a finite covering of $X$ by open affine supersubschemes. For each couple of indices $i, j$ we choose a finite covering $\{V_{ij k}\}_{1\leq k\leq l}$ of the open supersubscheme $V_i\cap (U_j\times\mathrm{SSpec}(A))$ by open affine supersubschemes. Since the statement of lemma is invariant with respect to the replacements of $A'$ by a larger (finitely generated) supersubalgebra $A''$ and $V'$ by $V'\times_{\mathrm{SSpec}(A')}\mathrm{SSpec}(A'')$ respectively, all we need is to prove our statement for the covering $\{V_{ijk} \}_{1\leq i\leq m, 1\leq k\leq l}$ of $U_j\times\mathrm{SSpec}(A), 1\leq j\leq n$. In other words, one can assume that $X$ is affine. Let $\phi_i$ denote the dual superalgebra morphism $\mathcal{O}(X)\otimes A\to \mathcal{O}(V_i), 1\leq i\leq m$. By [@maszub1 Lemma 3.5], there are elements $x_{i 1}, \ldots, x_{i t}\in (\mathcal{O}(X)\otimes A)_0$ and $b_{i1}, \ldots, b_{it}\in\mathcal{O}(V)_0$, such that $\sum_{1\leq s\leq t} b_{i s}\phi_i(x_{i s})=1$ and $\phi_i$ induces $(\mathcal{O}(X)\otimes A)_{x_{is}}\simeq \mathcal{O}(V)_{\phi_i(x_{is})}$ for any $1\leq i\leq m, 1\leq s\leq t$. Moreover, there are elements $d_{is}\in (\mathcal{O}(X)\otimes A)_0, 1\leq i\leq m, 1\leq s\leq t,$ such that $\sum_{1\leq i\leq m, 1\leq s\leq t}x_{is}d_{is}=1$. For arbitrary indices $1\leq i\leq m, 1\leq s, k\leq t,$ let $\frac{y_{iks}}{x_{ik}^{n_{iks}}}$ be the unique preimage of $\frac{b_{is}}{1}\in\mathcal{O}(V)_{\phi_i(x_{ik})}$, where $y_{iks}\in (\mathcal{O}(X)\otimes A)_0$. There is a finitely generated $\Bbbk$-supersubalgebra $A'$ of $A$ such that all elements $x_{is}, d_{is}$ and $y_{iks}$ belong to $(\mathcal{O}(X)\otimes A')_0$. Set $B_i=\phi_i(\mathcal{O}(X)\otimes A'), 1\leq i\leq m$. 
By the same [@maszub1 Lemma 3.5], each $\mathrm{SSpec}(B_i)$ is isomorphic to an open supersubscheme $V'_i$ of $X\times\mathrm{SSpec}(A')$ and the induced morphism $V_i\to V'_i\times_{\mathrm{SSpec}(A')}\mathrm{SSpec}(A)$ is an isomorphism. It is also clear that $\cup_{1\leq i\leq m} V_i'=X\times\mathrm{SSpec}(A')$. ◻ **Proposition 48**. *Let $\mathbb{X}$ be an algebraic superscheme and let $\{A_{\lambda} \}_{\lambda\in I}$ be an inductive system of $\Bbbk$-superalgebras. Set $A=\varinjlim A_{\lambda}$. Then the natural map $$q : \varinjlim \mathfrak{Aut}(\mathbb{X})(A_{\lambda})\to \mathfrak{Aut}(\mathbb{X})(A)$$ is a group isomorphism.* *Proof.* More generally, let $\mathfrak{End}(\mathbb{X})$ denote the functor from $\mathsf{SAlg}_{\Bbbk}$ to the category of monoids, where for any $A\in\mathsf{SAlg}_{\Bbbk}$ the monoid $\mathfrak{End}(\mathbb{X})(A)$ consists of all endomorphisms of the superscheme $X\times\mathrm{SSpec}(A)$ over $\mathrm{SSpec}(A)$. As above, we have the natural map $$p : \varinjlim \mathfrak{End}(\mathbb{X})(A_{\lambda})\to \mathfrak{End}(\mathbb{X})(A).$$ It is clear that $\mathfrak{Aut}(\mathbb{X})$ is a submonoid functor of $\mathfrak{End}(\mathbb{X})$ and $p|_{\mathfrak{Aut}(\mathbb{X})}=q$. Let $\phi\in \mathfrak{End}(\mathbb{X})(A_{\lambda})$ and its equivalence class $[\phi]\in \varinjlim \mathfrak{End}(\mathbb{X})(A_{\lambda})$ satisfies $p([\phi])=\mathrm{id}_{X\times\mathrm{SSpec}(A)}$. Arguing as in Proposition [Proposition 44](#nice property of T){reference-type="ref" reference="nice property of T"} and applying Lemma [Lemma 46](#variation on Lemma 5.3){reference-type="ref" reference="variation on Lemma 5.3"}, one sees that there is a superalgebra $A_{\beta}, \beta\geq\gamma,$ such that $\phi'=\phi\times_{\mathrm{SSpec}(A_{\gamma}) }\mathrm{id}_{\mathrm{SSpec}(A_{\beta})}$ maps each $$U\times\mathrm{SSpec}(A_{\beta})\simeq (U\times\mathrm{SSpec}(A_{\gamma}))\times_{\mathrm{SSpec}(A_{\gamma})} \mathrm{SSpec}(A_{\beta})$$ to itself, where $U$ runs over a finite covering of $X$ by open affine supersubschemes. Moreover, $\phi'\times_{\mathrm{SSpec}(A_{\beta}) }\mathrm{id}_{\mathrm{SSpec}(A) }=\mathrm{id}_{X\times\mathrm{SSpec}(A) }$. Assume that $f_1, \ldots, f_l$ generate a superalgebra $\mathcal{O}(U)$. Then the superalgebra automorphism of $\mathcal{O}(U)\otimes A_{\beta}$, dual to $\phi'|_{U\times\mathrm{SSpec}(A_{\beta})}$, acts on these generators as $$f_i\otimes 1\mapsto \sum_j h_{ij}\otimes a_{ij}, h_{ij}\in\mathcal{O}(U), a_{ij}\in A_{\beta}, 1\leq i\leq l .$$ Besides, for each index $i$ the elements $h_{ij}$ can be chosen linearly independent. We have $$\sum_j h_{ij}\otimes [a_{ij}]=f_i\otimes 1, 1\leq i\leq l,$$ where $[a]$ is the equivalence class of $a\in A_{\beta}$ in $A$. Since the covering $\{ U\}$ is finite, for sufficiently large $\beta'$ there is $\phi'\times_{\mathrm{SSpec}(A_{\beta}) }\mathrm{id}_{\mathrm{SSpec}(A_{\beta'}) }=\mathrm{id}_{X\times\mathrm{SSpec}(A_{\beta'}) }$. In particular, $q$ is an injective group homomorphism. It remains to show that $q$ is surjective. Condier an element $\phi\in \mathfrak{Aut}(\mathbb{X})(A)$. Let $\{U_i \}$ be a finite covering of $X$ by open affine supersubschemes. For each couple of indices we choose a covering of $U_i\cap U_j$ by open affine supersubschemes $U_{ij k}$. Then $X\times\mathrm{SSpec}(A)$ is covered by open affine supersubschemes $U_i\times\mathrm{SSpec}(A)$ and $V_i=\phi^{-1}(U_i\times\mathrm{SSpec}(A))$ respectively. 
Moreover, each $V_i\cap V_j=\phi^{-1}(U_i\cap U_j)$ is covered by open affine supersubschemes $V_{ij k}=\phi^{-1}(U_{ij k})$ and $\phi$ is uniquely defined by the superalgebra isomorphisms $$\phi^{\sharp}_i : \mathcal{O}(U_i)\otimes A\to \mathcal{O}(V_i) \ \mbox{and} \ \phi_{ijk}^{\sharp} : \mathcal{O}(U_{ijk})\otimes A\to \mathcal{O}(V_{ijk}),$$ such that for any triple $i, j, k$ we have a commutative diagram $$\begin{array}{ccc} \mathcal{O}(U_i)\otimes A & \to &\mathcal{O}(V_i) \\ \downarrow & & \downarrow \\ \mathcal{O}(U_{ij k})\otimes A & \to & \mathcal{O}(V_{ij k}) \end{array}.$$ If it does not lead to confusion, we omit the indices $i, j, k$ in the notations $\phi_i^{\sharp}$ and $\phi_{ijk}^{\sharp}$. Proposition [Proposition 47](#a typical affine in a product){reference-type="ref" reference="a typical affine in a product"} implies that there are an index $\gamma$ and open affine supersubschemes $V'_i, V'_{ijk}$ in $X\times\mathrm{SSpec}(A_{\gamma})$ such that $$\cup_i V_i'=X\times\mathrm{SSpec}(A_{\gamma})$$ and $$V_i\simeq V'_i\times_{\mathrm{SSpec}(A_{\gamma})} \mathrm{SSpec}(A), \ \ V_{ijk}\simeq V'_{ijk}\times_{\mathrm{SSpec}(A_{\gamma})} \mathrm{SSpec}(A)$$ for any triple $i, j, k$. Besides, $V_i'\cap V'_j=\cup_k V'_{ijk}$ for each couple of indices $i, j$. Note that if there is an $A$-superalgebra morphism $\psi : B\otimes_{A'} A\to C\otimes_{A'} A$, where $B$ and $C$ are finitely generated $A'$-superalgebras, then there is a finitely generated $A'$-supersubalgebra $A''$ of $A$, such that $\psi$ takes $B\otimes_{A'} A''$ to $C\otimes_{A'} A''$. Thus there is an index $\beta\geq\gamma$ such that $\phi^{\sharp}$ takes each $\mathcal{O}(U_i)\otimes A_{\beta}$ to $\mathcal{O}(V'_i)\otimes_{A_{\gamma}} A_{\beta}$ as well as each $\mathcal{O}(U_{ijk})\otimes A_{\beta}$ to $\mathcal{O}(V'_{ijk})\otimes_{A_{\gamma}} A_{\beta}$, so that all diagrams $$\begin{array}{ccc} \mathcal{O}(U_i)\otimes A_{\beta} & \to & \mathcal{O}(V'_i)\otimes_{A_{\gamma}} A_{\beta} \\ \downarrow & & \downarrow \\ \mathcal{O}(U_{ij k})\otimes A_{\beta} & \to & \mathcal{O}(V'_{ij k})\otimes_{A_{\gamma}} A_{\beta} \end{array}$$ are commutative. Gluing all together, one sees that there is an endomorphism $\phi'\in \mathfrak{End}(\mathbb{X})(A_{\beta})$ such that $\phi'\times_{\mathrm{SSpec}(A_{\beta})} \mathrm{id}_{\mathrm{SSpec}(A)}=\phi$. If $\psi$ is the inverse of $\phi$, then increasing $\beta$ one can construct the similar $\psi'\in \mathfrak{End}(\mathbb{X})(A_{\beta})$ such that $\psi'\times_{\mathrm{SSpec}(A_{\beta})} \mathrm{id}_{\mathrm{SSpec}(A)}=\psi$. We have $$\phi'\psi'\times_{\mathrm{SSpec}(A_{\beta})} \mathrm{id}_{\mathrm{SSpec}(A)}=\phi\psi=$$ $$\mathrm{id}_{X\times\mathrm{SSpec}(A)}=\psi\phi=\psi'\phi'\times_{\mathrm{SSpec}(A_{\beta})} \mathrm{id}_{\mathrm{SSpec}(A)}.$$ By the above, for some $\alpha\geq\beta$ the morphisms $\phi'\times_{\mathrm{SSpec}(A_{\beta})} \mathrm{id}_{\mathrm{SSpec}(A_{\alpha})}$ and $\psi'\times_{\mathrm{SSpec}(A_{\beta})} \mathrm{id}_{\mathrm{SSpec}(A_{\alpha})}$ are mutually inverse to each other. Proposition is proved. ◻ In terms of [@mats-oort] this proposition states that $\mathfrak{Aut}(\mathbb{X})$ satisfies the super analog of the condition $P_3$. In what follows we call the corresponding super analogs of all conditions $P_i$ from [@mats-oort] as *super-$P_i$*. # Property super-$P_2$ Recall that $X_A$ denote $X\times\mathrm{SSpec}(A)$ for any superalgebra $A$. **Lemma 49**. *Let $X$ be a superscheme and $A$ be a finite dimensional local superalgebra. 
Then any open supersubscheme $V$ of $X\times\mathrm{SSpec}(A)\simeq X_{\Bbbk(A)}\times_{\mathrm{SSpec}(\Bbbk(A))}\mathrm{SSpec}(A)$ has a form $U\times_{\mathrm{SSpec}(\Bbbk(A))}\mathrm{SSpec}(A)$, where $U$ is an open supersubscheme of $X_{\Bbbk(A)}$.* *Proof.* Choose an open covering of $X$ by open affine supersubschemes $U_i$. Since $V=\cup_i (V\cap (U_i\times\mathrm{SSpec}(A)))$, one can assume that $X$ is affine. We have $V=\cup_g \mathrm{SSpec}((\mathcal{O}(X)\otimes A)_g)$ for a finite set of elements $g\in (\mathcal{O}(X)\otimes A)_0$. It remains to note that $g=\bar{g}\otimes 1 +y$, where $\bar{g}\in\mathcal{O}(X)\otimes\Bbbk(A), y\in\mathcal{O}(X)\otimes\mathfrak{M}_A$, and since $y$ is nilpotent, $(\mathcal{O}(X)\otimes A)_g\simeq (\mathcal{O}(X)\otimes\Bbbk(A))_{\bar{g}}\otimes A$ for each $g$. In other words, $V=(\cup_g \mathrm{SSpec}((\mathcal{O}(X)\otimes\Bbbk(A))_{\bar{g}}))\times\mathrm{SSpec}(A)$. ◻ **Proposition 50**. *If $\mathbb{X}$ is a proper superscheme, then $\mathfrak{Aut}(\mathbb{X})$ satisfies the super-$P_2$.* *Proof.* Let $A$ be a c.l.N. superalgebra with the maximal superideal $\mathfrak{M}_A$. The superscheme $X_A=X\times\mathrm{SSpec}(A)$ is proper (and separated) over $S=\mathrm{SSpec}(A)$. Let $S'=\mathrm{SSpec}(A/\mathfrak{M}_A)$. By Proposition [Proposition 23](#endomorphisms of completions){reference-type="ref" reference="endomorphisms of completions"}, the natural map $\mathrm{End}_S(X_A)\to \mathrm{End}_{\widehat{S}}(\widehat{X_A})$ is an isomorphism of monoids. In particular, we have $$\mathfrak{Aut}(\mathbb{X})(A)=\mathrm{Aut}_S(X_A)\simeq \mathrm{Aut}_{\widehat{S}}(\widehat{X_A}),$$ and even more, we have a commutative diagram $$\begin{array}{ccc} \mathfrak{Aut}(\mathbb{X})(A) & \simeq & \mathrm{Aut}_{\widehat{S}}(\widehat{X_A}) \\ \searrow & & \nearrow \\ & \varprojlim_n \mathfrak{Aut}(\mathbb{X})(A/\mathfrak{M}_A^n) & \end{array},$$ where $q : \varprojlim_n \mathfrak{Aut}(\mathbb{X})(A/\mathfrak{M}_A^n)\to \mathrm{Aut}_{\widehat{S}}(\widehat{X_A})$ is the canonical group morphism, defined as follows. Let $\varprojlim \phi_n\in \varprojlim \mathfrak{Aut}(\mathbb{X})(A/\mathfrak{M}^{n})$. Lemma [Lemma 49](#open in a specific product){reference-type="ref" reference="open in a specific product"} implies that $|X\times\mathrm{SSpec}(A/\mathfrak{M}^n)|=|X_{\Bbbk(A)}|$ and $|\phi_m|=|\phi_n|$ for any $m\geq n\geq 1$. Besides, there is a finite open covering of $X_{\Bbbk(A)}$ by affine supersubschemes $U_{ijk}$ and $V_{ijk}$, such that : 1. $U_i=\cup_{j, k} U_{ijk}$ and $V_i=\cup_{j, k} V_{ijk}$ are open affine for each index $i$; 2. 
any $\phi_n^{\sharp}$ is uniquely defined by the collection of superalgebra isomorphisms $$\psi_{i, n} : \mathcal{O}(U_i)\otimes_{\Bbbk(A)} A/\mathfrak{M}^n\to \mathcal{O}(V_i)\otimes_{\Bbbk(A)} A/\mathfrak{M}^n$$ and $$\psi_{i, j, k, n} : \mathcal{O}(U_{ijk})\otimes_{\Bbbk(A)} A/\mathfrak{M}^n\to \mathcal{O}(V_{ijk})\otimes_{\Bbbk(A)} A/\mathfrak{M}^n,$$ so that all diagrams $$\begin{array}{ccc} \mathcal{O}(U_i)\otimes_{\Bbbk(A)} A/\mathfrak{M}^n & \stackrel{\psi_{i, n}}{\to} & \mathcal{O}(V_i)\otimes_{\Bbbk(A)} A/\mathfrak{M}^n\\ \downarrow & & \downarrow \\ \mathcal{O}(U_{ijk})\otimes_{\Bbbk(A)} A/\mathfrak{M}^n & \stackrel{\psi_{i, j, k, n}}{\to} & \mathcal{O}(V_{ijk})\otimes_{\Bbbk(A)} A/\mathfrak{M}^n \end{array},$$ $$\begin{array}{ccc} \mathcal{O}(U_i)\otimes_{\Bbbk(A)} A/\mathfrak{M}^m & \stackrel{\psi_{i, m}}{\to} & \mathcal{O}(V_i)\otimes_{\Bbbk(A)} A/\mathfrak{M}^m\\ \downarrow & & \downarrow \\ \mathcal{O}(U_i)\otimes_{\Bbbk(A)} A/\mathfrak{M}^n & \stackrel{\psi_{i, n}}{\to} & \mathcal{O}(V_i)\otimes_{\Bbbk(A)} A/\mathfrak{M}^n\\ \end{array} ,$$ and $$\begin{array}{ccc} \mathcal{O}(U_{ijk})\otimes_{\Bbbk(A)} A/\mathfrak{M}^m & \stackrel{\psi_{i, j, k, m}}{\to} & \mathcal{O}(V_{ijk})\otimes_{\Bbbk(A)} A/\mathfrak{M}^m\\ \downarrow & & \downarrow \\ \mathcal{O}(U_{ijk})\otimes_{\Bbbk(A)} A/\mathfrak{M}^n & \stackrel{\psi_{i, j, k, n}}{\to} & \mathcal{O}(V_{ijk})\otimes_{\Bbbk(A)} A/\mathfrak{M}^n\\ \end{array}$$ are commutative. Note that each $|\phi_n|$ can be uniquely glued from the maps $|\mathrm{SSpec}(\psi_{i, n})|$, compatible on the intersections. Determine $q(\varprojlim \phi_n)=\phi\in\mathrm{Aut}_{\widehat{S}}(\widehat{X_A})$ by the collection of isomorphisms $\phi^{\sharp}|_{\widehat{\mathcal{O}(U_i)\otimes A}}=\varprojlim \psi_{i, n}$ and $\phi^{\sharp}|_{\widehat{\mathcal{O}(U_{ijk})\otimes A}}=\varprojlim \psi_{i, j, k, n}$. Besides, $|\phi|=|\phi_1|$. Since $q$ is obviously injective, then $q$ is bijective, and our proposition obviously follows. ◻ **Corollary 51**. *If $\mathbb{X}$ is proper, then $\mathfrak{Aut}(\mathbb{X})_{ev}$ satisfies the condition $P_2$.* # More properties of the group functors $\mathfrak{Aut}(\mathbb{X}), \mathfrak{Aut}(\mathbb{X})_{ev}$ and $\mathfrak{T}$ **Lemma 52**. *The functor $\mathfrak{Aut}(\mathbb{X})$ satisfies the conditions super-$P_4$ and super-$P_5$. In particular, the functor $\mathfrak{Aut}(\mathbb{X})_{ev}$ satisfies the conditions $P_4$ and $P_5$.* *Proof.* Theorem [Theorem 41](#category_locality){reference-type="ref" reference="category_locality"} implies that $\mathfrak{Aut}(\mathbb{X})$ is a sheaf on $\mathsf{SAlg}^{opp}_{\Bbbk}$, regarded as a site with respect to fpqc coverings, thus it satisfies super-$P_4$ and super-$P_5$ (use [@zub1 Corollary 1.1 and the discussion on p.721]). In particular, $\mathfrak{Aut}(\mathbb{X})_{ev}$ is a sheaf on $\mathsf{Alg}_{\Bbbk}$ with respect to the same Grothendieck topology on $\mathsf{Alg}_{\Bbbk}^{opp}$, that is $\mathfrak{Aut}(\mathbb{X})_{ev}$ satisfies the conditions $P_4$ and $P_5$. ◻ **Lemma 53**. *If $\mathbb{X}$ is algebraic, then the functor $\mathfrak{Aut}(\mathbb{X})_{ev}$ satisfies the condition $P_3$ from [@mats-oort].* *Proof.* Since $\mathfrak{Aut}(\mathbb{X})(A)=\mathfrak{Aut}(\mathbb{X})_{ev}(A)$ for any algebra $A$, Proposition [Proposition 48](#Property P_3){reference-type="ref" reference="Property P_3"} concludes the proof. ◻ **Lemma 54**. *If $\mathbb{X}$ is algebraic, then the functor $\mathfrak{T}$ satisfies the super-$P_3$. 
In particular, it is strictly pro-representable if and only if $\mathfrak{T}|_{\mathsf{SL}_{\Bbbk}}$ is.* *Proof.* For any inductive system of $\Bbbk$-superalgebra $\{A_{\lambda} \}_{\lambda\in I}$, let $A$ denote $\varinjlim A_{\lambda}$. Then $\mathrm{nil}(A)=\varinjlim \mathrm{nil}(A_{\lambda})$ and $\widetilde{A}\simeq \varinjlim \widetilde{A_{\lambda}}$. We have a commutative diagram $$\begin{array}{ccccccccc} 1 & \to & \varinjlim\mathfrak{T}(A_{\lambda}) & \to & \varinjlim\mathfrak{Aut}(\mathbb{X})(A_{\lambda}) & \to & \varinjlim\mathfrak{Aut}(\mathbb{X})(\widetilde{A_{\lambda}}) & \to & 1 \\ & & \downarrow & & \downarrow & & \downarrow & \\ 1 & \to & \mathfrak{T}(A) &\to & \mathfrak{Aut}(\mathbb{X})(A) & \to & \mathfrak{Aut}(\mathbb{X})(\widetilde{A}) & \to & 1 \end{array}$$ By Proposition [Proposition 48](#Property P_3){reference-type="ref" reference="Property P_3"} the middle and right vertical arrows are isomorphisms, hence the left one is an isomorphism as well. An alternative proof can be given by combining Proposition [Proposition 44](#nice property of T){reference-type="ref" reference="nice property of T"}(ii) with Proposition [Proposition 48](#Property P_3){reference-type="ref" reference="Property P_3"}, and with the fact that $\mathsf{n}(A)=\varinjlim \mathsf{n}(A_{\lambda})$ . The second statement also follows by Proposition [Proposition 44](#nice property of T){reference-type="ref" reference="nice property of T"}(ii). ◻ For any integer $n\geq 0$, let $\mathfrak{T}_n$ denote the subfunctor of $\mathfrak{T}$ defined as $\mathfrak{T}_n(A)=\varinjlim_{B\subseteq A, B\in\mathsf{SAlg}_{\Bbbk}(n)} \mathfrak{T}(B), A\in\mathsf{SAlg}_{\Bbbk}$. **Lemma 55**. *We still assume that $\mathbb{X}$ is algebraic. An element $\phi\in\mathfrak{T}(A)$ belongs to $\mathfrak{T}_n(A)$ if and only if the following conditions hold :* 1. *$|\phi|$ is the identity map of the topological space $|X\times\mathrm{SSpec}(A)|$;* 2. *there is a supersubalgebra $B\subseteq A$ such that $B\in\mathsf{SL}_{\Bbbk}(n)$, and $\phi^{\sharp}$ induces an identity map of $\mathcal{O}(V)$ modulo $\mathcal{O}(V)\mathfrak{M}_B$ for arbitrary affine supersubscheme $V$ of $X\times\mathrm{SSpec}(A)$.* *Proof.* By the definition, an element $\phi\in\mathfrak{T}(A)$ belongs to $\mathfrak{T}_n(A)$ if and only if $|\phi|$ is an identity map of the topological space $|X\times\mathrm{SSpec}(A)|$. Moreover, there is a supersubalgebra $B\subseteq A$ such that $B\in\mathsf{SL}_{\Bbbk}(n)$, and $\phi^{\sharp}$ induces the identity map of $\mathcal{O}(U)\otimes B$ modulo $\mathcal{O}(U)\otimes\mathfrak{M}_B$ for arbitrary affine superscheme $U$ from a finite open covering of $X$. In turn, for arbitrary open affine supersubscheme $V\subseteq X\times\mathrm{SSpec}(A)$ each open supersubscheme $V\cap U\times\mathrm{SSpec}(A)$ can be covered by open affine supersubschemes $\mathrm{SSpec}((\mathcal{O}(U)\otimes A)_g), g\in (\mathcal{O}(U)\otimes A)_0$. Moreover, $\phi^{\sharp}$ induces the identity map of $(\mathcal{O}(U)\otimes A)_g$ modulo $(\mathcal{O}(U)\otimes A)_g\mathfrak{M}_B$. Using [@maszub1 Lemma 3.5] one can cover each $\mathrm{SSpec}((\mathcal{O}(U)\otimes A)_g)$ by open supersubschemes $\mathrm{SSpec}(\mathcal{O}(V)_h), h\in \mathcal{O}(V)_0$, so that $\sum_h \mathcal{O}(V)_0 h=\mathcal{O}(V)_0$ and $\mathcal{O}(V)_h\simeq ((\mathcal{O}(U)\otimes A)_g)_{\overline{h}}$, where $\overline{h}$ is the image of $h$ in the corresponding $(\mathcal{O}(U)\otimes A)_g$. 
Thus $\phi^{\sharp}$ induces the identity map of each $\mathcal{O}(V)_h$ modulo $\mathcal{O}(V)_h\mathfrak{M}_B$. Therefore, for any $t\in\mathcal{O}(V)$ there is a nonnegative integer $l$ such that $(t-\phi^{\sharp}(t))h^l\in \mathcal{O}(V)\mathfrak{M}_B$ for each $h$. Since $\sum_h a_h h^l=1$ for some $a_h\in \mathcal{O}(V)_0$, it follows that $$t-\phi^{\sharp}(t)=\sum_h (t-\phi^{\sharp}(t))h^l a_h\in \mathcal{O}(V)\mathfrak{M}_B.$$ ◻ The following corollary is now obvious (compare with [@maszub2 Lemma 9.5]). **Corollary 56**. *Each subfunctor $\mathfrak{T}_n$ is invariant under the action of $\mathfrak{Aut}(\mathbb{X})$ by conjugations.* If $X$ is a superscheme (not necessary algebraic), let $\mathfrak{RDer}(\mathcal{O}_X)$ denote the sheaf of *right superderivations* of the superalgebra sheaf $\mathcal{O}_X$. In other words, for any open subsets $V\subseteq U\subseteq |X|$ we have $$\mathfrak{RDer}(\mathcal{O}_X)_i(U)=\{d\in \mathrm{End}_{\Bbbk}(\mathcal{O}_X)_i(U)\mid (fg)d=f (g)d+(-1)^{i |g|}(f)d g, f, g\in\mathcal{O}(V)\},$$ where $i\in\mathbb{Z}_2$. Note that $\mathfrak{RDer}(\mathcal{O}_X)$ has the natural structure of a right $\mathcal{O}_X$-supermodule, hence it has the structure of left $\mathcal{O}_X$-supermodule as well. Symmetrically, one can define the sheaf of *left superderivations* $\mathfrak{LDer}(\mathcal{O}_X)$. There is a canonical isomorphism $\mathfrak{RDer}(\mathcal{O}_X)\simeq \mathfrak{LDer}(\mathcal{O}_X)$ of $\mathcal{O}_X$-supermodules, say $d\mapsto d'$, where $d'(f)=(-1)^{|d||f|} (f)d, f\in\mathcal{O}_X$ (cf. Remark [Remark 26](#left is right){reference-type="ref" reference="left is right"}). Assume that $X$ is proper. Let $\delta : X\to X\times X$ be a diagonal closed immersion. Let $\mathcal{I}_X$ be the superideal sheaf that defines the closed supersubscheme $\delta(X)$. The universality of $\mathcal{O}_X$-supermodule of Kahler superdifferentials $\Omega^1_{X/\Bbbk}\simeq \delta^*(\mathcal{I}_X/\mathcal{I}_X^2)$ (see [@maszub4 Section 3] for more details) imply that $\mathfrak{LDer}(\mathcal{O}_X)$ is isomorphic to the $\mathcal{O}_{X}$-supermodule sheaf $\mathcal{T}_X=\mathrm{Hom}_{\mathcal{O}_X} (\mathcal{I}_X/\mathcal{I}_X^2, \mathcal{O}_X)$, that is called the *tangent* sheaf of $X$. **Lemma 57**. *If $X$ is a proper superscheme, then $\mathfrak{LDer}(\mathcal{O}_X)(|X|)$ is finite dimensional.* *Proof.* The sheaf $\mathcal{I}_X/\mathcal{I}_X^2$ is coherent (see [@maszub4 Section 3]). By Lemma [Lemma 27](#dual of coherent){reference-type="ref" reference="dual of coherent"} the sheaf $\mathcal{T}_X$ is coherent. Proposition [Proposition 28](#global sections are finite){reference-type="ref" reference="global sections are finite"} concludes the proof. ◻ Note that by Cohen's theorem any c.l.N. superalgebra $A$ contains a *field of representatives*, that is isomorphic to $\Bbbk(A)$. In particular, it is valid for any finite dimensional local superalgebra. Let $\mathbb{F}$ denote the functor $\mathfrak{Aut}(\mathbb{X})_e : \mathsf{SL}_{\Bbbk}\to \mathsf{Gr}$. **Lemma 58**. *If $\mathbb{X}$ is a proper superscheme, then the functor $\mathbb{F}$ is strictly pro-representable. In other words, $\mathfrak{Aut}(\mathbb{X})$ satisfies the condition super-$P_1$.* *Proof.* If $A\in\mathsf{SL}_{\Bbbk}$, then Lemma [Lemma 49](#open in a specific product){reference-type="ref" reference="open in a specific product"} infers $|X\times\mathrm{SSpec}(A)|=|X|$. Thus for any $\phi\in\mathbb{F}(A)$ there holds $|\phi|=\mathrm{id}_{|X\times\mathrm{SSpec}(A)|}$. 
By [@maszub2 Lemma 9.6] we have $\mathcal{O}(U\times\mathrm{SSpec}(A))\simeq\mathcal{O}(U)\otimes A$ for any open supersubscheme $U$ of $X$. Therefore, $\mathbb{F}(A)$ can be naturally identified with the group of $A$-linear automorphisms $\phi$ of the superalgebra sheaf $\mathcal{O}_X\otimes A$, such that $(\mathrm{id}_{\mathcal{O}_X}\otimes\epsilon_A)\phi |_{\mathcal{O}_X}= \mathrm{id}_{\mathcal{O}_X}$. In partcular, $\phi\mapsto \phi|_{\mathcal{O}_X\otimes 1}$ is an injective map of $\mathbb{F}(A)$ into the set of all superalgebra sheaf morphisms $\psi : \mathcal{O}_X\to \mathcal{O}_X\otimes A$ such that $(\mathrm{id}_{\mathcal{O}_X}\otimes\epsilon_A)\psi=\mathrm{id}_{\mathcal{O}_X}$. Moreover, this map is also surjective. In fact, just set $\phi=\psi\otimes\mathrm{id}_A$ and note that $\phi$ induces the identity morphism of the graded superalgebra sheaf $\oplus_{n\geq 0}\mathcal{O}_X\otimes \mathfrak{M}_A^n /\mathcal{O}_X\otimes \mathfrak{M}_A^{n+1}$. As in [@mats-oort Lemma 3.4], it is easy to see that $\mathbb{F}$ is left exact, hence pro-representable. It remains to show that $\mathbb{F}(\Bbbk[\epsilon_0, \epsilon_1])$ is a finite dimensional $\Bbbk$-vector space. Since any $\psi\in \mathbb{F}(\Bbbk[\epsilon_0, \epsilon_1])$ obviously has a form $\mathrm{id}_{\mathcal{O}_X}+ d_0\otimes\epsilon_0+d_1\otimes\epsilon_1$, where $d\in\mathfrak{RDer}(\mathcal{O}_X)_i(|X|), i=0, 1$, Lemma [Lemma 57](#Der){reference-type="ref" reference="Der"} concludes the proof. ◻ **Corollary 59**. *If $\mathbb{X}$ is a proper superscheme, then $\mathfrak{Aut}(\mathbb{X})_{ev}$ satisfies the condition $P_1$.* Recall that a group functor $\mathbb{G}$ satisfies the condition $P_{orb}$, if for every algebra $A$ over an algebraically closed field extension $L$ of $\Bbbk$, and any $g\in \mathbb{G}(A)\setminus 1$, there is an algebraic scheme $\mathbb{Y}$ (over $L$) on which $\mathbb{G}$ acts on the left, and $y\in\mathbb{Y}(L)$, such that $gy_A\neq y_A$, where $y_A$ is the image of $y$ in $\mathbb{Y}(A)$ (cf. [@mats-oort]). It is clear that if $\mathbb{H}\leq \mathbb{G}$ and $\mathbb{G}$ satisfies $P_{orb}$, then $\mathbb{H}$ does. Let $X$ be an algebraic superscheme. Let $M$ and $\mathcal{T}$ denote $X_0$ and $(\mathcal{O}_X)_1$ respectively. We define the *Grassman inflation* $\Lambda(X)$ of $X$ as $(|M|, \Lambda_{\mathcal{O}_M}(\mathcal{T}))$. More precisely, there is a nonnegative integer $t$ such that $\mathcal{T}^{t+1}=0$ but $\mathcal{T}^t\neq 0$. Then $\Lambda_{\mathcal{O}_M}(\mathcal{T})$ is the sheafification of the superalgebra presheaf $$U\mapsto \oplus_{0\leq k\leq t}\wedge^k_{\mathcal{O}_M(U)}(\mathcal{T}(U)), U\subseteq |X|.$$ If $X$ is algebraic or proper, then $\Lambda(X)$ is. Since $\mathrm{id}_{\mathcal{T}}$ is naturally extended to the surjective sheaf morphism $\Lambda_{\mathcal{O}_M}(\mathcal{T})\to\mathcal{O}_X$, $X$ is a closed supersubscheme of $\Lambda(X)$. Furthermore, for any algebra $A\in\mathsf{Alg}_{\Bbbk}$ the superscheme $X\times\mathrm{SSpec}(A)$ is isomorphic to $(|M\times\mathrm{Spec}(A)|, \mathcal{O}_{M\times\mathrm{Spec}(A)}\oplus\mathcal{T}_A )$, where $\mathcal{T}_A=\mathrm{pr}_1^*\mathcal{T}$ and $\mathrm{pr}_1 : M\times\mathrm{Spec}(A)\to M$ is the canonical projection. Thus each $\phi\in\mathfrak{Aut}(\mathbb{X})(A)$ is uniquely extended to an automorphism $\Lambda(\phi)\in \mathfrak{Aut}(\Lambda(\mathbb{X}))(A)$ and the map $\phi\mapsto\Lambda(\phi)$ is a group monomorphism, functorial in $A$. 
More precisely, $|\Lambda(\phi)|=|\phi|$ and $\Lambda(\phi)^{\sharp}$ acts on $|\phi|_*\wedge_{\mathcal{O}_{M\times\mathrm{Spec}(A)}} (\mathcal{T}_A)$ as $\wedge\phi^{\sharp}$. In particular, $\mathfrak{Aut}(\mathbb{X})_{ev}$ is a subgroup functor of $\mathfrak{Aut}(\Lambda(\mathbb{X}))_{ev}$. Consider a closed (and proper) supersubscheme $Z$ of $\Lambda(X)$, defined by the superideal sheaf $\Lambda_{\mathcal{O}_M}(\mathcal{T})\mathcal{T}^2$. As above, for arbitrary $A\in\mathsf{Alg}_{\Bbbk}$ the superscheme $Z\times\mathrm{SSpec}(A)$ is defined by the superideal sheaf $\Lambda_{\mathcal{O}_{M\times\mathrm{Spec}(A)}}(\mathcal{T}_A)\mathcal{T}_A^2$. Since each automorphism $\phi\in \mathfrak{Aut}(\Lambda(\mathbb{X}))_{ev}(A)$ takes $|\phi|_*\mathcal{T}_A$ to $\Lambda_{\mathcal{O}_{M\times\mathrm{Spec}(A)}}(\mathcal{T}_A)\mathcal{T}_A$, it induces the automorphism $\overline{\phi}\in \mathfrak{Aut}(\mathbb{Z})_{ev}(A)$. Besides, $\phi\mapsto\overline{\phi}$ is a group homomorphism, functorial in $A$. Moreover, the induced morphism $\mathfrak{Aut}(\mathbb{X})_{ev}\to \mathfrak{Aut}(\mathbb{Z})_{ev}$ of group functors is obviously injective (not necessary isomorphism!). **Theorem 60**. *If $\mathbb{X}$ is proper, then $\mathfrak{Aut}(\mathbb{X})_{ev}$ is a locally algebraic group scheme.* *Proof.* Since $\mathfrak{Aut}(\mathbb{X})_{ev}$ satisfies the conditions $P_1-P_5$, by [@mats-oort Corollary 2.5] all we need is to show that $\mathfrak{Aut}(\mathbb{X})_{ev}$ satisfies $P_{orb}$. By the above remarks, all we need is to show that $\mathfrak{Aut}(\mathbb{Z})_{ev}$ satisfies $P_{orb}$. The superscheme $Z$ can be regarded as an ordinary (proper) scheme. In fact, the sheaf $\mathcal{O}_Z\simeq \mathcal{O}_M\oplus\mathcal{T}$ is a sheaf of commutative algebras, since $\mathcal{T}^2=0$ in $\mathcal{O}_Z$. Note also that $\mathfrak{Aut}(\mathbb{Z})_{ev}$ is a group subfunctor of the automorphism group functor of $\mathbb{Z}$, regarded as such a scheme. By [@mats-oort Theorem 3.7] the latter group functor is isomorphic to a locally algebraic group scheme, hence by [@mats-oort Corollary 2.5] it satisfies $P_{orb}$. Theorem is proven. ◻ # The structure of $\mathfrak{Aut}(\mathbb{X})_{ev}$ We still assume that $X$ is a proper algebraic superscheme. Since any automorphism of $X\times\mathrm{SSpec}(A)$ over $\mathrm{SSpec}(A)$ induces the automorphism of $X_{ev}\times\mathrm{Spec}(A)$ over $\mathrm{Spec}(A)$, provided $A$ is purely even, thus there is the natural morphism $\mathfrak{Aut}(\mathbb{X})_{ev}\to\mathfrak{Aut}(\mathbb{X}_{ev})$ of group schemes. Set $\mathfrak{N}_t=\ker(\mathfrak{Aut}(\mathbb{X})_{ev}\to\mathfrak{Aut}(\mathbb{X}_{ev}))$ (remind that $\mathcal{T}^{t+1}=0$). As a closed group subfunctor of $\mathfrak{Aut}(\mathbb{X})_{ev}$, $\mathfrak{N}_t$ is a locally algebraic group scheme. Each group $\mathfrak{N}_t(A)$ can be identified with the group of $\mathcal{O}_{\mathrm{Spec}(A)}$-linear automorphisms of superalgebra sheaf $\mathcal{O}_{X\times\mathrm{SSpec}(A)}\simeq \mathcal{O}_{M\times\mathrm{Spec}(A)}\oplus \mathcal{T}_A$, those induce identity morphisms modulo $\mathcal{T}_A^2\oplus\mathcal{T}_A$ . For any $1\leq l\leq t$, let $X^{\leq l}$ denote the superscheme $(|X|, \mathcal{O}_X/\mathcal{O}_X\mathcal{T}^{l+1})$. Set $\mathfrak{N}_l=\ker(\mathfrak{Aut}(\mathbb{X}^{\leq l})_{ev}\to\mathfrak{Aut}(\mathbb{X}_{ev}))$. Note that this is compatible with the definition of $\mathfrak{N}_t$. 
Moreover, the natural morphism $\mathfrak{Aut}(\mathbb{X})_{ev}\to\mathfrak{Aut}(\mathbb{X}^{\leq l})_{ev}$ of group schemes makes the diagram $$\begin{array}{ccc} \mathfrak{Aut}(\mathbb{X})_{ev} & \to & \mathfrak{Aut}(\mathbb{X}^{\leq l})_{ev} \\ \searrow & & \swarrow \\ & \mathfrak{Aut}(\mathbb{X}_{ev}) & \end{array}$$ to be commutative, hence it induces a group scheme morphism $\mathfrak{N}_t\to\mathfrak{N}_l$. Set $\mathfrak{N}_{t, l}=\ker(\mathfrak{N}_t\to\mathfrak{N}_l)$. **Lemma 61**. *The series of normal locally algebraic group subschemes $$\mathfrak{N}_t=\mathfrak{N}_{t, 1}\geq \mathfrak{N}_{t, 2} \geq \ldots \geq \mathfrak{N}_{t, t}=1$$ is central, i.e. for any $1\leq l <t$ there is $[\mathfrak{N}_t, \mathfrak{N}_{t, l}]\leq \mathfrak{N}_{t, l+1}$.* *Proof.* Consider $g\in\mathfrak{N}_t(A)$ and $h\in\mathfrak{N}_{t, l}(A)$. If $l=2m+1$, then for any $a\in \mathcal{O}_{M\times \mathrm{Spec}(A)}$ and $f\in\mathcal{T}_A$ we have $$h(a)\equiv a\pmod{\mathcal{T}_A^{l+1}}, \ h(f)\equiv f\pmod{\mathcal{T}_A^{l+2} }.$$ Thus $h(f)\equiv f\pmod{\mathcal{T}_A^{l+3}}$ for any $f\in\mathcal{T}_A^2$, that implies $$gh(a)\equiv g(a) \pmod{\mathcal{T}_A^{l+1} },$$ and $$hg(a)=h(a+(g(a)-a))\equiv a+(g(a)-a)=g(a)\pmod{\mathcal{T}_A^{l+1} }.$$ Similarly, we have $$gh(f)\equiv g(f)\pmod{\mathcal{T}_A^{l+2} }$$ and $$hg(f)=h(f+(g(f)-f))\equiv g(f)\pmod{\mathcal{T}_A^{l+2} }.$$ We leave the case $l=2m$ to the reader. ◻ Consider the central group subscheme $\mathfrak{N}_{t, t-1}$. If $t=2m+1$, then any $g\in\mathfrak{N}_{t, t-1}$ is characterized by the the conditions $g(a)=a, g(f)\equiv f\pmod{\mathcal{T}_A^t}, a\in \mathcal{O}_{M\times\mathrm{Spec}(A)}, f\in\mathcal{T}_A$. Since $g$ is an automorphism of superalgebra sheaf $\mathcal{O}_{X\times\mathrm{SSpec}(A)}$, $g-1$ is just $\mathcal{O}_{M\times\mathrm{Spec}(A)}$-linear sheaf morphism $\mathcal{T}_A\to\mathcal{T}_A^t$. In other words, the group $\mathfrak{N}_{t, t-1}(A)$ is isomorphic to the abelian group of global sections of the sheaf $$\mathrm{Hom}_{\mathcal{O}_{M\times\mathrm{Spec}(A)}}(\mathcal{T}_A, \mathcal{T}_A^t)\simeq \mathrm{Hom}_{\mathcal{O}_{X_{ev}\times\mathrm{Spec}(A) }}((\mathcal{J}_X/\mathcal{J}_X^2)_A, (\mathcal{J}_X^t)_A).$$ Finally, if $t=2m$, then any $g\in\mathfrak{N}_{t, t-1}$ is characterized by the the conditions $g(a)\equiv a\pmod{\mathcal{T}_A^t}, g(f)=f$. Moreover, $g-1$ is an $\mathcal{O}_{\mathrm{Spec}(A)}$-linear derivation $\mathcal{O}_{M\times\mathrm{Spec}(A)}\to \mathcal{T}_A^t$. Since any such derivation vanishes on $\mathcal{T}_A^2$, it implies that $\mathfrak{N}_{t, t-1}$ can be identified with the sheaf of $\mathcal{O}_{\mathrm{Spec}(A)}$-linear derivations $\mathcal{O}_{X_{ev}\times\mathrm{Spec}(A)}\to (\mathcal{J}^t_X)_A$. Using [@hart Proposition II.8.10], one sees that $\mathfrak{N}_{t, t-1}(A)$ is isomorphic to the abelian group of global sections of the sheaf $\mathrm{Hom}_{\mathcal{O}_{X_{ev}\times\mathrm{Spec}(A)}}((\mathcal{I}_{X_{ev}}/\mathcal{I}_{X_{ev}}^2)_A, (\mathcal{J}_X^t)_A)$. Recall that $\mathbb{G}_a$ denote the one dimensional unipotent group, i.e. $\Bbbk[\mathbb{G}_a]\simeq \Bbbk[t]$, where $t$ is a primitive element of Hopf algebra $\Bbbk[t]$. **Lemma 62**. 
*The group scheme $\mathfrak{N}_{t, t-1}$ is isomorphic to the finite direct product of several copies of $\mathbb{G}_a$.* *Proof.* Since $\mathcal{J}_X/\mathcal{J}_X^2, \mathcal{J}_X^t$ and $\mathcal{I}_{X_{ev}}/\mathcal{I}_{X_{ev}}^2$ are coherent sheaves of $\mathcal{O}_{X_{ev}}$-(super)modules, Lemma [Lemma 29](#represented by a space){reference-type="ref" reference="represented by a space"} implies the statement. ◻ **Proposition 63**. *$\mathfrak{N}^0_t$ is an (affine) algebraic unipotent group.* *Proof.* By Lemma [Lemma 62](#N_{t, t-1}){reference-type="ref" reference="N_{t, t-1}"} the group subscheme $\mathfrak{N}_{t, t-1}$ is connected, hence $\mathfrak{N}_{t, t-1}\leq \mathfrak{N}_t^0$. Moreover, $\mathfrak{N}_t^0$ is a clopen affine algebraic subgroup of $\mathfrak{N}_t$ (cf. [@mbrion Theorem 2.4.1]). It is clear that the group scheme morphism $\mathfrak{N}_t\to \mathfrak{N}_{t-1}$ takes $\mathfrak{N}_t^0$ to $\mathfrak{N}_{t-1}^0$. Then [@mbrion Proposition 2.7.1(1)] implies that the faisceau quotient $\mathfrak{N}_t^0/\mathfrak{N}_{t, t-1}$ is a closed algebraic subgroup of $\mathfrak{N}^0_{t-1}$. By the induction on $t$, $\mathfrak{N}^0_{t-1}$ is afine and unipotent, hence $\mathfrak{N}_t^0/\mathfrak{N}_{t, t-1}$ is. The latter infers the statement. ◻ **Question 64**. *Proposition [Proposition 63](#R is almost unipotent){reference-type="ref" reference="R is almost unipotent"} rises the natural question whether $\mathfrak{N}_t$ is algebraic for any $t\geq 1$?* # Strictly pro-representable group $\Bbbk$-functors Let $\mathbb{X}$ be a strictly pro-representable $\Bbbk$-functor, and let $A$ be a c.l.a.N. superalgebra, that represents $\mathbb{X}$. The maximal superideal of $A$ is denoted by $\mathfrak{M}_A$. We also call $A$ the *coordinate superalgebra* of $\mathbb{X}$. **Lemma 65**. *The category of strictly representable $\Bbbk$-functors is anti-equivalent to the category of c.l.a.N. superalgebras (with continuous local morphisms between them).* *Proof.* We outline the proof and leave details for the reader. Let $\mathbb{X}$ and $\mathbb{Y}$ be $\Bbbk$-functors, represented by superalgebras $A$ and $B$ respectively. Let ${\bf f} : \mathbb{X}\to\mathbb{Y}$ be a morphism of $\Bbbk$-functors. For any couple of nonnegative integers $n< t$, any $C\in\mathsf{SL}_{\Bbbk}(n)$, there is $$\mathbb{X}(C)=\mathrm{Hom}_{\mathsf{SL}_{\Bbbk}}(A/\mathfrak{M}_A^{n+1} , C)=\mathrm{Hom}_{\mathsf{SL}_{\Bbbk}}(A/\mathfrak{M}_A^{t+1} , C),$$ and the similar statement holds for $\mathbb{Y}$. We have a commutative diagram $$\begin{array}{ccc} \mathrm{Hom}_{\mathsf{SL}_{\Bbbk}}(A/\mathfrak{M}_A^{t+1} , A/\mathfrak{M}_A^{t+1}) & \to & \mathrm{Hom}_{\mathsf{SL}_{\Bbbk}}(B/\mathfrak{M}_B^{t+1} , A/\mathfrak{M}_A^{t+1})\\ \downarrow & & \downarrow \\ \mathrm{Hom}_{\mathsf{SL}_{\Bbbk}}(A/\mathfrak{M}_A^{n+1} , A/\mathfrak{M}_A^{n+1}) & \to & \mathrm{Hom}_{\mathsf{SL}_{\Bbbk}}(B/\mathfrak{M}_B^{n+1} , A/\mathfrak{M}_A^{n+1}) \end{array}$$ Let $\phi_n$ denote the image of $\mathrm{id}_{A/\mathfrak{M}_A^{n+1}}$ in $\mathrm{Hom}_{\mathsf{SL}_{\Bbbk}}(B/\mathfrak{M}_B^{n+1} , A/\mathfrak{M}_A^{n+1}), n\geq 1$. Then the collection $\{\phi_n\}_{n\geq 1}$ defines the dual morphism $\phi : B\to A$. ◻ This lemma allows us to define a Zariski topology on arbitrary strictly pro-representable functor $\mathbb{X}$, so that a closed subfunctor $\mathbb{Y}\subseteq\mathbb{X}$ is again strictly pro-representable by a c.l.a.N. superalgebra $A/I$, where $I$ is a closed superideal of $A$. 
For example, $\mathbb{X}_{ev}$ is defined by the superideal $J_A$. **Lemma 66**. *If $\mathbb{X}$ is strictly pro-representable, then it is a group functor if and only if its coordinate superalgebra $A$ is a c.l.a.N. Hopf superalgebra. Thus the category of strictly pro-representable group $\Bbbk$-functors is anti-equivalent to the category of c.l.a.N. Hopf superalgebras (with continuous local Hopf superalgebra morphisms between them).* *Proof.* Observe that if $\mathbb{X}$ is represented by $A$, then $\mathbb{X}\times\mathbb{X}$ is represented by $\widehat{A\otimes A}$, the completion of $A\otimes A$ in the $(A\otimes \mathfrak{M}_A + \mathfrak{M}_A\otimes A)$-adic topology. The second statement is obvious. ◻ From now on $\mathbb{X}$ is a strictly pro-representable group $\Bbbk$-functor. Let $\Delta=\Delta_A$ and $S=S_A$ denote the coproduct and the antipode of its coordinate superalgebra $A$. We use the *Sweedler*'s notations, i.e. $\Delta(a)=\sum a_{(0)}\otimes a_{(1)}$ is a convergent series in the $(A\otimes\mathfrak{M}_A+\mathfrak{M}_A\otimes A)$-adic topology. Following [@maszub2 Section 9], one can construct a Hopf superalgebra $\mathrm{hyp}(\mathbb{X})=\cup_{n\geq 1}\mathrm{hyp}_n(\mathbb{X})$, where $\mathrm{hyp}_n(\mathbb{X})=(A/\mathfrak{M}^{n+1}_A)^*$. In other words, $\mathrm{hyp}(\mathbb{X})$ coincides with the vector space $A^{\underline{*}}$ consisting of all continuous linear maps $A\to\Bbbk$, and $\Bbbk$ is regarded as a discrete vector space. The product and coproduct of $\mathrm{hyp}(\mathbb{X})$ are defined as $$\phi\psi(a)=\sum (-1)^{|\psi||a_{(0)}|}\phi(a_{(0)})\psi(a_{(1)}),$$ and $\Delta^*(\phi)=\sum \phi_{(0)}\otimes\phi_{(1)}$ if $$\phi(ab)=\sum (-1)^{|\phi_{(1)}||a|}\phi_{(0)}(a)\phi_{(1)}(b), a, b\in A, \phi, \psi\in\mathrm{hyp}(\mathbb{X}).$$ Besides, $S^*(\phi)(a)=\phi(S(a))$. The following lemma is a copy of [@maszub2 Lemma 9.2]. **Lemma 67**. *For any nonnegative integers $k$ and $t$ we have :* 1. *$\mathrm{hyp}_k(\mathbb{X})\mathrm{hyp}_t(\mathbb{X})\subseteq \mathrm{hyp}_{k+t}(\mathbb{X})$;* 2. *$\Delta^*(\mathrm{hyp}_k(\mathbb{X}))\subseteq \sum_{0\leq s\leq k} \mathrm{hyp}_s(\mathbb{X})\otimes \mathrm{hyp}_{k-s}(\mathbb{X})$;* 3. *$S^*(\mathrm{hyp}_k(\mathbb{X}))\subseteq \mathrm{hyp}_k(\mathbb{X})$.* Let $R$ be a superalgebra. The $R$-superalgebras $A\otimes R$ and $\mathrm{hyp}(\mathbb{X})\otimes R$ are Hopf $R$-superalgebras, i.e. their products, coproducts, counits and antipodes are just $R$-linear extensions of products, coproducts, counits and antipodes of $A$ and $\mathrm{hyp}(\mathbb{X})$ respectively. Moreover, we have the pairing $$(\mathrm{hyp}(\mathbb{X})\otimes R)\times (A\otimes R)\to R, \ <\phi\otimes r, a\otimes r'>=(-1)^{|r||a|}\phi(a)rr',$$ that is the Hopf superalgebra pairing in the sense of [@hmt]. The proof of the following lemma can be copied from [@maszub2 Lemma 9.3] **Lemma 68**. *The group functor $\mathbb{X}$ is isomorphic to $R\mapsto \mathsf{Gpl}(\mathrm{hyp}(\mathbb{X})\otimes R)$, where $$\mathsf{Gpl}(\mathrm{hyp}(\mathbb{X})\otimes R)=\{ u\in (\mathrm{hyp}(\mathbb{X})\otimes R)^{\times} \mid (\Delta^*\otimes\mathrm{id}_R)(u)=u\otimes_R u\}.$$* The Hopf superalgebra $\mathrm{hyp}(\mathbb{X})$ has another filtration with $\mathrm{hyp}^{(k)}(\mathbb{X})=\{\phi\in\mathrm{hyp}(\mathbb{X})\mid \phi(J_A^{k+1})=0 \}, k\geq 0$. Since $A_1$ is a finitely generated $A_0$-module, this filtration is finite. 
The proof of the following lemma is similar to the proof of Lemma [Lemma 67](#Lemma 9.2){reference-type="ref" reference="Lemma 9.2"}. **Lemma 69**. *For any nonnegative interegrs $k, l$ there is :* 1. *$\mathrm{hyp}^{(k)}(\mathbb{X})\mathrm{hyp}^{(l)}(\mathbb{X})\subseteq \mathrm{hyp}^{(k+l)}(\mathbb{X})$;* 2. *$\Delta^*(\mathrm{hyp}^{(k)}(\mathbb{X}))\subseteq \sum_{0\leq s\leq k} \mathrm{hyp}^{(s)}(\mathbb{X})\otimes \mathrm{hyp}^{(k-s)}(\mathbb{X})$;* 3. *$S^*(\mathrm{hyp}^{(k)}(\mathbb{X}))\subseteq \mathrm{hyp}^{(k)}(\mathbb{X})$.* Lemma [Lemma 69](#Lemma 11.4){reference-type="ref" reference="Lemma 11.4"} implies that $\mathsf{gr}(\mathrm{hyp}(\mathbb{X}))=\oplus_{k\geq 0}\mathrm{hyp}^{(k)}(\mathbb{X})/\mathrm{hyp}^{(k+1)}(\mathbb{X})$ is a *graded* Hopf superalgebra (cf. [@maszub2]). Next, $\mathsf{gr}(A)$ is a c.l.N. Hopf superalgebra with the maximal superideal $$\mathfrak{M}_{\mathsf{gr}(A)}=\overline{\mathfrak{m}_A}\oplus (\oplus_{k\geq 1} J_A^k/J_A^{k+1}),$$ where $\overline{\mathfrak{m}_A}=\mathfrak{m}_A/A_1^2$. Note that $\mathfrak{M}_{\mathsf{gr}(A)}$-adic topology of $\mathsf{gr}(A)$ coincides with the $\overline{\mathfrak{m}_A}$-adic one (cf. [@maszub4 Section 1.6]). The comultiplication of $\mathsf{gr}(A)$ is defined as follows. For any $x\in J_A^k$ there is $\Delta(x)=\sum_{0\leq s\leq k} x_{(0)}^{(s)}\otimes x_{(1)}^{(k-s)}$, where $x_{(0)}^{(s)}, x_{(1)}^{(s)}\in J_A^s, 0\leq s\leq k$. Then $$\mathsf{gr}(\Delta)(x+J_A^{k+1})=\sum_{0\leq s\leq k} (x_{(0)}^{(s)}+J_A^{s+1})\otimes (x_{(1)}^{(k-s)}+J_A^{k+1-s}),$$ so that $\mathsf{gr}(A)$ is a graded Hopf superalgebra as well. The antipode of $\mathsf{gr}(A)$ is defined as $\mathsf{gr}(S)(x+J_A^{k+1})=S(x)+J_A^{k+1}$. The superalgebra $\mathsf{gr}(A)$ represents a strictly pro-representable group $\Bbbk$-functor, that is denoted by $\mathsf{gr}(\mathbb{X})$. Moreover, $\mathbb{X}\to\mathsf{gr}(\mathbb{X})$ is an endofunctor of the category of strictly pro-representable group $\Bbbk$-functors, such that ${\bf f} : \mathbb{X}\to\mathbb{Y}$ is an isomorphism if and only if $\mathsf{gr}({\bf f})$ is. Recall that the c.l.a.N. Hopf algebra $\overline{A}$ represents the group subfunctor $\mathbb{X}_{ev}$. Moreover, we have (local) Hopf superalgebra morphisms $q : \overline{A}\to \mathsf{gr}(A)$ and $i : \mathsf{gr}(A)\to \overline{A}$ such that $iq=\mathrm{id}_{\overline{A}}$. Dualizing, the embedding $\mathbb{X}_{ev}\to\mathsf{gr}(\mathbb{X})$ is the right inverse of the morphism ${\bf q} : \mathsf{gr}(\mathbb{X})\to\mathbb{X}_{ev}$, induced by $q$. Therefore, $\mathsf{gr}(\mathbb{X})$ is isomorphic to the semi-direct product $\mathbb{X}_{ev}\ltimes\ker{\bf q}$. **Lemma 70**. *The group $\Bbbk$-functor $\ker{\bf q}$ is isomorphic to the purely odd unipotent group superscheme, represented by a (finite dimensional) local Hopf superalgebra $\Lambda(V)$, where $V=(\mathfrak{M}_A/\mathfrak{M}_A^2)_1\simeq A_1/\mathfrak{m}_A A_1$ and all elements of $V$ are primitive.* *Proof.* We have $\ker{\bf q}(C)=\{\phi\in \mathsf{gr}(\mathbb{X})(C)\mid \phi|_{\overline{A}}=\epsilon_A \}$ for arbitrary superalgebra $C$. Thus $\ker{\bf q}$ is represented by the finite dimensional local Hopf superalgebra $B=\mathsf{gr}(A)/\mathsf{gr}(A)\overline{\mathfrak{m}_A}$. The Hopf superalgebra $B$ is generated by its (purely odd) degree one component $(A_1/A_1^3)/(\mathfrak{m}_A A_1/A_1^3)\simeq A_1/\mathfrak{m}_A A_1$. Since $B$ is also a graded Hopf superalgebra, the elements of $V$ are primitive. 
The final arguments from [@maszub2 Proposition 11.1] conclude the proof. ◻ **Remark 71**. *In the notations from [@maszub3], $\ker{\bf q}$ is isomorphic to the direct product of $\dim V$ copies of the purely odd unipotent group $\mathbb{G}^-_a$ (of superdimension $0|1$). Recall that $\Bbbk[\mathbb{G}^-_a]\simeq\Bbbk[t]$, where $t$ is an odd primitive element.* Combining Lemma [Lemma 70](#Prop 11.1){reference-type="ref" reference="Prop 11.1"} with Lemma [Lemma 66](#Hopf){reference-type="ref" reference="Hopf"}, one sees that $\mathsf{gr}(A)\simeq\overline{A}\otimes \Lambda(V)$. Equivalently, there is a minimal generating set $v_1+J_A^2, \ldots , v_l+J_A^2$ of $\overline{A}$-module $J_A/J_A^2$, such that any element $f\in A$ has a \"canonical\" form $$f=\sum_{0\leq s\leq l, 1\leq i_1<\ldots < i_s\leq l} f_{i_1, \ldots , i_s} v_{i_1}\ldots v_{i_s},$$ where the even coefficients $f_{i_1, \ldots , i_s}$ are uniquely defined by $f$, modulo $A_1^2$. With this remark in mind, the proof of the following lemma can be copied from [@maszub2 Proposition 11.5]. We outline the principal steps only. **Lemma 72**. *There is a natural isomorphism $\mathsf{gr}(\mathrm{hyp}(\mathbb{X}))\simeq\mathrm{hyp}(\mathsf{gr}(\mathbb{X}))$ of graded Hopf superalgebras.* *Proof.* As it has been already observed, one can work in the $\overline{\mathfrak{m}_A}$-adic topology. In particular, $\mathrm{hyp}(\mathsf{gr}(\mathbb{X}))$ is isomorphic to $\oplus_{0\leq k\leq l} (J_A^k/J_A^{k+1})^{\underline{*}}$ as a superspace. The above Hopf superalgebra pairing $< , >$ induces an embedding of $\Bbbk$-vector (super)spaces $$\mathrm{hyp}^{(k)}(\mathbb{X})/\mathrm{hyp}^{(k+1)}(\mathbb{X})\to (J_A^k/J_A^{k+1})^{\underline{*}}.$$ For each $t\geq 0$ we choose a basis $B_t$ of $\overline{\mathfrak{m}_A}^t/\overline{\mathfrak{m}_A}^{t+1}$. Set $B=\cup_{t\geq 0} B_t$. Using a \"canonical\" basis of $\mathsf{gr}(A)$ consisting of the elements $\overline{f} v_{i_1}\ldots v_{is}$, where $\overline{f}\in B$ and $0\leq s\leq l$, one can show that the above emebedding is an isomorphism. In particular, we obtain a pairing $$\mathsf{gr}(\mathrm{hyp}(\mathbb{X}))\times\mathsf{gr}(A)\to\Bbbk,$$ that is a Hopf superalalgebra pairing due the definition of Hopf superalgebra structure of $\mathsf{gr}(A)$. ◻ The definition of $\Delta^*$ implies that a (homogeneous) element $\phi\in\mathrm{hyp}(\mathbb{X})$ is primitive if and only if $\phi$ is a $\Bbbk$-superderivation of $A$, i.e. $$\phi(ab)=\phi(a)\epsilon_A(b) +(-1)^{|\phi||a|}\epsilon_A(a)\phi(b), a, b\in A.$$ Thus the superspace of primitive elements of $\mathrm{hyp}(\mathbb{X})$ coincides with $\mathrm{hyp}_1(\mathbb{X})^+=\{\phi\in \mathrm{hyp}_1(\mathbb{X}) \mid \phi(1)=0\}$. In partcular, $\mathrm{hyp}_1(\mathbb{X})^+$ is a Lie superalgebra with respect to the standard Lie superbracket $[\phi, \psi]=\phi\psi -(-1)^{|\phi||\psi|}\psi\phi, \phi, \psi\in \mathrm{hyp}_1(\mathbb{X})^+$. We call it a *Lie superalgebra* of $\mathbb{X}$ and denote by $\mathrm{Lie}(\mathbb{X})$. Choose a basis $\gamma_1, \ldots, \gamma_l$ of $\mathrm{Lie}(\mathbb{X})_1\simeq (A_1/\mathfrak{m}_A A_1)^*$. Note also that $\mathrm{hyp}^{(0)}(\mathbb{X})\simeq \mathrm{hyp}(\mathbb{X}_{ev})$. **Lemma 73**. 
*Every element $\phi\in\mathrm{hyp}_k(\mathbb{X})$ has the unique form $$\phi=\sum_{0\leq s\leq k, 1\leq i_1<\ldots <i_s}\phi_{i_1, \ldots , i_s}\gamma_{i_1}\ldots \gamma_{i_s}, \phi_{i_1, \ldots , i_s}\in\mathrm{hyp}_{k-s}(\mathbb{X}_{ev}).$$* *Proof.* Use Lemma [Lemma 72](#Prop 11.5){reference-type="ref" reference="Prop 11.5"} and copy the proof of [@maszub2 Proposition 11.6]. ◻ Following [@maszub2], we define the group-like elements $$f(b, x)=\epsilon_A\otimes 1+x\otimes b, \ e(a, v)=\epsilon_A\otimes 1+v\otimes a,$$ where $x\in \mathrm{Lie}(\mathbb{X})_0, v\in \mathrm{Lie}(\mathbb{X})_1, b\in A_0, b^2=0, a\in A_1$. Mimic the proof of [@masshib Lemma 4.2] we obtain the following relations : 1. $[e(a, v), e(a', v')]=f(-aa', [v, v'])$; 2. $[f(b, x), e(a, v)]=e(ba, [x, v])$; 3. $[f(b, x), f(b', x')]=f(bb, [x, x'])$; 4. $e(a, v)e(a', v)=f(-aa', \frac{1}{2}[v, v])e(a+a', v)$ . Further, let $\Sigma_{\mathbb{X}}$ denote the group subfunctor of $\mathbb{X}$ such that $\Sigma_{\mathbb{X}}(R)$ is generated by all elements $f(b, x)$ and $e(a, v), a, b\in R$, for arbitrary superalgebra $R$. **Lemma 74**. *We have the following :* 1. *if $\Sigma_{\mathbb{X}, ev}$ denotes $\Sigma_{\mathbb{X}}\cap\mathbb{X}_{ev}$, then for any superalgebra $R$ the group $\Sigma_{\mathbb{X}, ev}(R)=\Sigma_{\mathbb{X}}(R_0)$ is generated by the elements $f(b, x), b\in R$;* 2. *each element of $\Sigma_{\mathbb{X}}(R)$ is uniquely expressed in the form $$f e(a_1, \gamma_1)\ldots e(a_l, \gamma_l), f\in\Sigma_{\mathbb{X}, ev}(R), a_1, \ldots, a_l\in R_1.$$* *Proof.* Use Lemma [Lemma 73](#Prop 11.6){reference-type="ref" reference="Prop 11.6"} and copy the proof of [@masshib Proposition 4.3]. ◻ **Lemma 75**. *The group functor $\mathbb{X}$ acts linearly on $\mathrm{Lie}(\mathbb{X})$ by conjugations.* *Proof.* For any $\phi\in \mathrm{Lie}(\mathbb{X})$ and arbitrary $x\in \mathsf{Gpl}(\mathrm{hyp}(\mathbb{X})\otimes R)$, there is $$\Delta^*(x(\phi\otimes 1)x^{-1})=$$ $$(x\otimes_R x)((\phi\otimes 1)\otimes_R (\epsilon_A\otimes 1)+((\epsilon_A\otimes 1)\otimes_R (\phi\otimes 1))(x^{-1}\otimes_R x^{-1})=$$ $$(x(\phi\otimes 1)x^{-1})\otimes _R(\epsilon_A\otimes 1)+(\epsilon_A\otimes 1)\otimes_R (x(\phi\otimes 1)x^{-1}).$$ The lineariry is obvious by the definition of the product in $\mathrm{hyp}(\mathbb{X})\otimes R$. ◻ Define the subfunctor ${\bf E}_{\mathbb{X}}$ of $\mathbb{X}$ such that $${\bf E}_{\mathbb{X}}(R)=\{e(a_1, \gamma_1)\ldots e(a_l, \gamma_l)\mid a_1, \ldots , a_l\in R_1 \}, R\in\mathsf{SAlg}_{\Bbbk}.$$ Note that any morphism ${\bf g} : \mathbb{X}\to\mathbb{Y}$ of strictly pro-representable group $\Bbbk$-functors induces a morphism of filtred Hopf superalgebras $\mathrm{hyp}({\bf g}) : \mathrm{hyp}(\mathbb{X})\to \mathrm{hyp}(\mathbb{X})$, hence a Lie superalgebra morphism $\mathrm{d}{\bf g} : \mathrm{Lie}(\mathbb{X})\to \mathrm{Lie}(\mathbb{Y})$. The latter is dual to the induced morphism $\mathfrak{M}_B/\mathfrak{M}_B^2\to \mathfrak{M}_A/\mathfrak{M}_A^2$ of $\Bbbk$-superspaces. Furthermore, ${\bf g}$ induces the morphism of group functors $\Sigma_{\mathbb{X}}\to \Sigma_{\mathbb{Y}}$, defined on the generators by $$f(b, x)\mapsto f(b, \mathrm{d}{\bf g}(x)), \ e(a, v)\mapsto e(a, \mathrm{d}{\bf g}(v)).$$ If we choose the basis of $\mathrm{Lie}(\mathbb{X})$ so that some of its vectors form a basis of the kernel of $\mathrm{d}{\bf g}$, then ${\bf g}$ induces morphism ${\bf E}_{\mathbb{X}}\to {\bf E}_{\mathbb{Y}}$ of $\Bbbk$-functors. 
Lemma [Lemma 75](#action of X){reference-type="ref" reference="action of X"} implies that $\mathbb{Y}=\mathbb{X}_{ev}{\bf E}_{\mathbb{X}}$ is a group subfunctor of $\mathbb{X}$. Moreover, Lemma [Lemma 74](#Lemma 12.2){reference-type="ref" reference="Lemma 12.2"} infers that each element of $\mathbb{Y}$ can be uniquely expressed as $x=x' e, x'\in\mathbb{X}_{ev}, e\in {\bf E}_{\mathbb{X}}$, i.e. $\mathbb{Y}\simeq \mathbb{X}_{ev}\times {\bf E}_{\mathbb{X}}$. **Lemma 76**. *We have $\mathbb{X}=\mathbb{Y}$.* *Proof.* The functor $\mathbb{Y}$ is strictly pro-representable as a direct product of two strictly pro-representable functors. Its coordinate superalgebra $B$ is isomorphic to $\overline{A}\otimes\Lambda(V)$. The composition of two embeddings $\mathbb{X}_{ev}\to \mathbb{Y}$ and $\mathbb{Y}\to\mathbb{X}$ coincides with the canonical embedding $\mathbb{X}_{ev}\to\mathbb{X}$, hence the dual morphism $A\to B$ induces the isomorphism $\overline{A}\to \overline{B}$. Similarly, since ${\bf E}_{\mathbb{Y}}={\bf E}_{\mathbb{X}}$, the above remarks and Lemma [Lemma 73](#Prop 11.6){reference-type="ref" reference="Prop 11.6"} imply that $\mathrm{Lie}(\mathbb{Y})\to \mathrm{Lie}(\mathbb{X})$ is an isomorphism of Lie superalgebras. Thus the induced morphism $\mathsf{gr}(A)\to\mathsf{gr}(B)$ is an isomorphism of graded superalgebras, hence $A\to B$ is. Lemma is proved. ◻ # $\mathfrak{Aut}(\mathbb{X})$ is a locally algebraic supergroup We still assume that $\mathbb{X}$ is a proper superscheme. Since $\mathfrak{T}|_{\mathsf{SL}_{\Bbbk} }=\mathbb{F}$, Lemma [Lemma 58](#even P_1){reference-type="ref" reference="even P_1"} and Lemma [Lemma 54](#pro-representability){reference-type="ref" reference="pro-representability"} imply that $\mathfrak{T}$ is strictly pro-representable. **Proposition 77**. *The following statements hold :* 1. *the group subfunctor $\mathfrak{Aut}(\mathbb{X})_{ev}\mathfrak{T}$ is the largest locally algebraic group superscheme in $\mathfrak{Aut}(\mathbb{X})$;* 2. *the group functor $\mathfrak{Aut}(\mathbb{X})$ is a locally algebraic group superscheme if and only if for any superalgbra $R$ and for any $\phi\in \mathfrak{Aut}(\mathbb{X})(R)$ there is $\psi\in \mathfrak{Aut}(\mathbb{X})_{ev}(R)$ such that $\mathfrak{Aut}(\mathbb{X})(\pi_R)(\phi)=\mathfrak{Aut}(\mathbb{X})_{ev}(\pi_R)(\psi)$.* *Proof.* Since $\mathfrak{Aut}(\mathbb{X})_{ev}\cap\mathfrak{T}=\mathfrak{T}_{ev}$, Lemma [Lemma 76](#equality){reference-type="ref" reference="equality"} implies that the group subfunctor $\mathfrak{Aut}(\mathbb{X})_{ev}\mathfrak{T}$ is isomorphic to $\mathfrak{Aut}(\mathbb{X})_{ev}\times {\bf E}_{\mathfrak{T}}$. By Theorem [Theorem 60](#even is a group scheme){reference-type="ref" reference="even is a group scheme"}, $\mathfrak{Aut}(\mathbb{X})_{ev}$ is a locally algebraic group scheme, and since ${\bf E}_{\mathfrak{T}}$ is a purely odd algebraic superscheme, $\mathfrak{Aut}(\mathbb{X})_{ev}\mathfrak{T}$ is a locally algebraic group superscheme. Conversely, let $\mathbb{G}\leq \mathfrak{Aut}(\mathbb{X})$ be a locally algebraic group superscheme. Then $\mathbb{G}_{ev}\leq \mathfrak{Aut}(\mathbb{X})_{ev}$. Furthermore, Proposition [Proposition 37](#another characterization of neighborhood){reference-type="ref" reference="another characterization of neighborhood"} and Proposition [Proposition 44](#nice property of T){reference-type="ref" reference="nice property of T"} imply $\mathbb{N}_e(\mathbb{G})\leq \mathfrak{T}$, and [@maszub2 Theorem 12.5] infers the first statement. 
The second statement is now obvious. ◻ **Lemma 78**. *Let ${\bf f} : \mathbb{G}\to\mathbb{H}$ be a morphism of $\Bbbk$-functors. Assume that both $\mathbb{G}$ and $\mathbb{H}$ satisfy the conditions super-$P_i, i=2, 3, 4, 5$. Then ${\bf f}$ is an isomorphism if and only if for any finitely dimensional local $\Bbbk$-algebra $A$ the map ${\bf f}(A)$ is bijective.* *Proof.* The proof of [@mats-oort Lemma 1.2] can be copied per verbatim. However, some parts of this proof require additional comments in the wider super-context. First, by [@kolzub Lemma 6.1(1)] if $A$ is a local Noetherian superalgebra, then its completion $\widehat{A}$ in the $\mathfrak{M}_A$-adic topology is a faithfully flat $A$-supermodule. Second, if ${\bf f}(B)$ is injective for any local Noetherian superalgebra $B$, such that the extension $\Bbbk\subseteq\Bbbk(B)$ is finite, then ${\bf f}(A)$ is injective for any finitely generated superalgebra $A$. In fact, let $\mathrm{Max}(A)$ denote the set of all maximal superideals of $A$, and let $B$ denote the superalgebra $$\times_{\mathfrak{M}\in\mathrm{Max}(A)} A_{\mathfrak{M}}=\varinjlim_{S\subseteq\mathrm{Max}(A), |S|<\infty}\times_{\mathfrak{M}\in S} A_{\mathfrak{M}}.$$ The condition super-$P_3$ implies that ${\bf f}(B)$ is injective. On the other hand, by [@zub1 Lemma 1.4] the superalgebra $B$ is a faithfully flat $A$-supermodule, hence the induced maps $\mathbb{G}(A)\to\mathbb{G}(B)$ and $\mathbb{H}(A)\to\mathbb{H}(B)$ are injective, and in its turn, ${\bf f}(A)$ is. ◻ **Theorem 79**. *Let $X$ be a proper superscheme. Then the group functor $\mathfrak{Aut}(\mathbb{X})$ is a locally algebraic group superscheme.* *Proof.* The both group functors $\mathfrak{Aut}(\mathbb{X})$ and $\mathfrak{Aut}(\mathbb{X})_{ev}\mathfrak{T}$ satisfy the conditions super-$P_i$, $i=2, 3, 4, 5$. By Lemma [Lemma 78](#Lemma 1.2){reference-type="ref" reference="Lemma 1.2"} it remains to check the sufficent condition in Proposition [Proposition 77](#one step to){reference-type="ref" reference="one step to"}(ii) for $R$ to be a finite dimensional local superalgebra. Since $\mathfrak{M}_R$ is nilpotent, then by Cohen's theorem $\pi_R$ is split, whence both $\mathfrak{Aut}(\mathbb{X})(\pi_R)$ and $\mathfrak{Aut}(\mathbb{X})_{ev}(\pi_R)$ are surjective. Theorem is proved. ◻ Let us give few comments on the structure of $\mathfrak{Aut}(\mathbb{X})$, for which we refer to [@maszub2]. As we have alreay noted, the formal neighborhood of the identity of $\mathfrak{Aut}(\mathbb{X})$ coincides with $\mathfrak{T}$. By [@maszub2 Lemma 9.7] the conjugation action of $\mathfrak{Aut}(\mathbb{X})$ on $\mathfrak{T}$ factors through the *largest affine quotient* $\mathfrak{Aut}(\mathbb{X})^{aff}$ of $\mathfrak{Aut}(\mathbb{X})$. In the strict sense of the word, this is not a quotient, but the universal morphism $\pi_{aff} : \mathfrak{Aut}(\mathbb{X})\to \mathfrak{Aut}(\mathbb{X})^{aff}$ of group superschemes, such that any morphism of $\mathfrak{Aut}(\mathbb{X})$ to an affine group superscheme factors through $\pi_{aff}$. For any superalgebra $R$, the group $\mathfrak{Aut}(\mathbb{X})^{aff}(R)$ acts on $A\otimes R$ by $R$-linear Hopf superalgebra automorphsms, preserving the $\mathfrak{M}_A\otimes R$-adic filtration. In turn, the Hopf superalgebra pairing $< , >$ induces the action of $\mathfrak{Aut}(\mathbb{X})^{aff}(R)$ on $\mathrm{hyp}(\mathfrak{T})\otimes R$ by $R$-linear Hopf superalgebra automorphisms, preserving the filtration $\{\mathrm{hyp}_k(\mathfrak{T}) \}_{k\geq 0}$ as well (see [@maszub2 Lemma 9.8]). 
In particular, the adjoint action of $\mathfrak{Aut}(\mathbb{X})(R)$ on $\mathrm{Lie}(\mathfrak{Aut}(\mathbb{X}))(R)$ is naturally identified with the conjugation action of $\mathfrak{Aut}(\mathbb{X})(R)$ on $\mathrm{hyp}_1(\mathfrak{T})^+\otimes R\simeq \mathrm{Lie}(\mathfrak{Aut}(\mathbb{X}))(R)$ (cf. [@maszub2 Corollary 9.9]). # Extensions of morphisms The aim of this section is to superize some fundamental results from [@sga; @1 III]. Based on these results, we formulate a sufficient condition for $\mathfrak{Aut}(\mathbb{X})$ to be a smooth group superscheme. Let $Y$ be superscheme. With any quasi-coherent sheaf of $\mathcal{O}_Y$-superalgebras $\mathcal{A}$ one can associate a $Y$-superscheme $\mathrm{SSpec}(\mathcal{A})$, called a *spectrum* of $\mathcal{A}$ (see [@hart Exercise II.5.17(c)]). More precisely, $|Y|=|\mathrm{SSpec}(\mathcal{A})|$ and for any affine open supersubscheme $U$ in $Y$ set $\mathrm{SSpec}(\mathcal{A})|_U=\mathrm{SSpec}(\mathcal{A}(U))$. It is easy to check that the data $\{\mathrm{SSpec}(\mathcal{A}(U)), \mathrm{SSpec}(\mathcal{A}(U\cap U'))\}_{U, U'}$ satisfies the conditions of *Glueing Lemma* (cf. [@hart Exercise II.2.12]), hence there is a superscheme $\mathrm{SSpec}(\mathcal{A})$, that is glued from all $\mathrm{SSpec}(\mathcal{A}(U))$. Finally, gluing together the affine morphisms $\mathrm{SSpec}(\mathcal{A}(U))\to U$ we obtain the unique affine morphism $\mathrm{SSpec}(\mathcal{A})\to Y$. **Lemma 80**. *For any morphism $f : X\to Y$ of superscheme and for arbitrary quasi-coherent sheaf of $\mathcal{O}_Y$-superalgebras $\mathcal{A}$, there is $\mathrm{SSpec}(\mathcal{A})\times_Y X\simeq \mathrm{SSpec}(f^*\mathcal{A})$.* *Proof.* The superscheme $\mathrm{SSpec}(\mathcal{A})\times_Y X$ is covered by open affine supersubschemes $\mathrm{SSpec}(\mathcal{A})|_V\times_V W\simeq \mathrm{SSpec}(\mathcal{A}(V))\times_V W$, where $V$ runs over open affine supersubschemes of $Y$ and $W$ runs over open affine supersubschemes of $f^{-1}(V)$. It remains to note that $$\mathrm{SSpec}(\mathcal{A}(V))\times_V W\simeq \mathrm{SSpec}(\mathcal{A}(V)\otimes_{\mathcal{O}(V)} \mathcal{O}(W))\simeq \mathrm{SSpec}(f^*\mathcal{A}(W)).$$ ◻ Let $X$ and $Y$ be superschemes over a superscheme $S$. Let $Y_0$ be a closed supersubscheme of $Y$, determined by a superideal sheaf $\mathcal{J}$ such that $\mathcal{J}^2=0$. For any $S$-morphism $g_0 : Y_0\to X$ define a sheaf $\mathcal{P}(g_0)$ on the topological space $|Y|$, such that for any $U\subseteq |Y|$ the set $\mathcal{P}(g_0)(U)$ consists of all $S$-morphism $g : U\to X$ such that $g|_{U\cap Y_0}=g_0|_{U\cap Y_0}$. The following proposition is a superization of [@sga; @1 Proposition III.5.1]. Since the proof is just outlined therein, for the reader convenience we reproduce it with few more comments and in the wider super-context. For the notion of a *principal homogeneous sheaf* or *torsor* we refer to the introduction of [@sga; @1 III.5]. **Proposition 81**. *$\mathcal{P}(g_0)$ is a principal homogeneous sheaf with respect to the natural action of the additive group sheaf $\mathcal{G}=\mathrm{Hom}_{\mathcal{O}_{Y_0}}(g^*_0\Omega^1_{X/S}, \mathcal{J})$.* *Proof.* Without loss of generality, one can suppose that $U=|Y|$. Fix some $g\in\mathcal{P}(g_0)(|Y|)$. 
The $S$-morphisms $g'\in\mathcal{P}(g_0)(|Y|)$ are in one-to-one correspondence with $S$-morphisms $h : Y\to X\times_S X$ such that $\mathrm{pr}_1 h=g$ and $hi=(g_0, g_0)$, where $i : Y_0\to Y$ is the corresponding closed immersion and $(g_0, g_0)$ is the unique morphism $Y_0\to X\times_S X$ with $\mathrm{pr}_j(g_0, g_0)=g_0, j=1,2$. This data can be represented locally by commutative diagrams $$\begin{array}{ccc} \mathcal{O}(U)\otimes_{\mathcal{O}(V)}\mathcal{O}(U) & \stackrel{(g_0, g_0)^{\sharp}}{\to} & \mathcal{O}(W)/\mathcal{J}(W) \\ \uparrow & \searrow & \uparrow \\ \mathcal{O}(U) & \stackrel{g^{\sharp}}{\to} & \mathcal{O}(W) \end{array},$$ where $(g_0, g_0)^{\sharp}(a\otimes b)=h^{\sharp}(a\otimes b)+\mathcal{J}(W)$, the middle diagonal map is defined by $h^{\sharp}(a\otimes b)=g^{\sharp}(a)g'^{\sharp}(b), a, b\in \mathcal{O}(U)$, $U$ is an open affine supersubscheme of $X$, $W=|g|^{-1}(U)\cap |g'|^{-1}(U)$ and $V$ is the preimage of $U$ in $S$. Since $g^{\sharp}=g'^{\sharp}$ modulo $\mathcal{J}(W)$, we have $h^{\sharp}(\mathcal{I}_X(U\times_V U))\subseteq\mathcal{J}(W)$, which implies $\mathcal{I}_X(U\times_V U)^2\subseteq\ker h^{\sharp}$; that is, $h^{\sharp}$ and $(g_0, g_0)^{\sharp}$ factor through the first neighborhood of the diagonal morphism $\delta : X\to X\times_S X$. Moreover, the induced morphism $$\mathcal{O}(U)\otimes\mathcal{O}(U)/\mathcal{I}(U\times_V U)^2\to\mathcal{O}(U)$$ splits (via $\mathrm{pr}_1^{\sharp}$), hence this neighborhood can be naturally identified with $X'=\mathrm{SSpec}(\mathcal{O}_X\oplus\Omega^1_{X/S})$. Set $Y'=X'\times_X Y$ and $Y'_0=X'\times_X Y_0$. Any $S$-morphism $h : Y\to X'$ such that $\mathrm{pr}_1 h=g$ and $hi=(g_0, g_0)$ extends uniquely to a section $h' : Y\to Y'$, that is, $\mathrm{pr}_2 h'=\mathrm{id}_Y$, with the additional property that $h' i$ coincides with the similar unique section $Y_0\to Y'$ induced by $hi$. The inverse of the map $h\mapsto h'$ is given by $h'\mapsto \mathrm{pr}_1 h'$. By Lemma [Lemma 80](#extension of spectrum){reference-type="ref" reference="extension of spectrum"} we have $Y'\simeq\mathrm{SSpec}(g^*(\mathcal{O}_X\oplus\Omega^1_{X/S}))=\mathrm{SSpec}(\mathcal{O}_Y\oplus g^*\Omega^1_{X/S})$ and, similarly, $Y'_0\simeq \mathrm{SSpec}(\mathcal{O}_{Y_0}\oplus g_0^*\Omega^1_{X/S})$. The above sections are in one-to-one correspondence with the superalgebra sheaf morphisms $\mathcal{O}_Y\oplus g^*\Omega^1_{X/S}\to \mathcal{O}_Y$ whose restriction to $\mathcal{O}_Y$ is the identity map. Additionally, they induce the canonical augmentation $\mathcal{O}_{Y_0}\oplus g_0^*\Omega^1_{X/S}\to \mathcal{O}_{Y_0}$. The concluding arguments are trivial and can be copied verbatim from [@sga1 Proposition III.5.1]. ◻

Let $X$ be a proper superscheme. Let $A$ be a superalgebra and let $I$ be a square-zero superideal in $A$. Consider $\phi\in \mathfrak{Aut}(\mathbb{X})(A/I)$. The composition of $\phi$ with the closed embedding $X\times\mathrm{SSpec}(A/I)\to X\times\mathrm{SSpec}(A)$ is denoted by $\phi_0$. The morphism $\phi_0$ can be extended to a $\mathrm{SSpec}(A)$-endomorphism of $X\times\mathrm{SSpec}(A)$ if and only if $\mathcal{P}(\phi_0)(|X\times\mathrm{SSpec}(A)|)\neq\emptyset$, if and only if $\mathcal{P}(\phi_0)$ is a trivial $\mathcal{G}$-torsor, where $\mathcal{G}=\mathrm{Hom}_{\mathcal{O}_{X\times\mathrm{SSpec}(A/I)}}(\phi^*_0\Omega^1_{X\times\mathrm{SSpec}(A)/\mathrm{SSpec}(A)}, \mathcal{J})$ and $\mathcal{J}=\mathcal{O}_{X\times\mathrm{SSpec}(A)}I$.
Note that $\mathcal{J}$ has a natural structure of an $\mathcal{O}_{X\times\mathrm{SSpec}(A/I)}$-supermodule. Recall that $\mathrm{pr}_1$ denotes the canonical projection $X\times\mathrm{SSpec}(A)\to X$. Assume that $A$ is finitely generated. Then the superscheme $X\times\mathrm{SSpec}(A)$ is Noetherian and separated (use [@maszub2 Corollary 2.5]). Since the superscheme $X\times\mathrm{SSpec}(A)$ is of finite type over $\mathrm{SSpec}(A)$, the sheaf $\Omega^1_{X\times\mathrm{SSpec}(A)/\mathrm{SSpec}(A)}$ of $\mathcal{O}_{X\times\mathrm{SSpec}(A)}$-supermodules is coherent (cf. [@maszub4 Section 3]). By remark (2) at the end of [@zub2 Section 3], $\phi_0^*\Omega^1_{X\times\mathrm{SSpec}(A)/\mathrm{SSpec}(A)}$ is a coherent sheaf of $\mathcal{O}_{X\times\mathrm{SSpec}(A/I)}$-supermodules. In turn, Lemma [Lemma 27](#dual of coherent){reference-type="ref" reference="dual of coherent"} implies that $\mathcal{G}$ is a coherent sheaf of $\mathcal{O}_{X\times\mathrm{SSpec}(A)}$-supermodules as well. Finally, the morphism $\mathrm{pr}_1$ is obviously affine, hence Corollary [Corollary 31](#affine trick){reference-type="ref" reference="affine trick"} implies $$\mathrm{H}^1(X\times\mathrm{SSpec}(A), \mathcal{G})\simeq \mathrm{H}^1(X, (\mathrm{pr}_1)_*\mathcal{G}).$$ By [@stack Lemma 21.4.3], if $\mathrm{H}^1(X\times\mathrm{SSpec}(A), \mathcal{G})=0$, then all $\mathcal{G}$-torsors are trivial. Therefore, the above remarks imply that if $\mathrm{H}^1(X, (\mathrm{pr}_1)_*\mathcal{G})=0$, then $\phi_0$ can be extended to a $\mathrm{SSpec}(A)$-endomorphism of $X\times\mathrm{SSpec}(A)$.

For an arbitrary open affine supersubscheme $U$ in $X$ we have $(\mathrm{pr}_1)_*\mathcal{G}|_{U}\simeq \widetilde{M}$, where $$M=\mathrm{Hom}_{\mathcal{O}(U\times\mathrm{SSpec}(A))}(\phi_0^*\Omega^1_{X\times\mathrm{SSpec}(A)/\mathrm{SSpec}(A)}(U\times\mathrm{SSpec}(A)), \mathcal{O}(U)\otimes I)|_{\mathcal{O}(U)}.$$ Note that $V=\phi(U\times \mathrm{SSpec}(A/I))$ is an affine supersubscheme of $X\times\mathrm{SSpec}(A/I)$. Since $|X\times\mathrm{SSpec}(A)|=|X\times\mathrm{SSpec}(A/I)|$, there is an open supersubscheme $W$ in $X\times\mathrm{SSpec}(A)$ such that $V=W\cap (X\times\mathrm{SSpec}(A/I))$ and $|V|=|W|$. In other words, $V$ is defined in $W$ by the square-zero superideal sheaf $\mathcal{J}\cap \mathcal{O}_W$, and $V_{ev}$ is defined in $W_{ev}$ by the square-zero superideal sheaf $(\mathcal{J}\cap \mathcal{O}_W + \mathcal{J}_W)/\mathcal{J}_W$. Since $V_{ev}$ is affine, [@hart Proposition II.5.9] and [@gabdem I, section 2, Theorem 6.1] imply that $W_{ev}$ is affine, hence by [@zub2 Theorem 3.1] the superscheme $W$ is affine as well. Observe that if $|\phi_0|(|U\times\mathrm{SSpec}(A/I)|)\subseteq T$ for an open subset $T\subseteq |X\times\mathrm{SSpec}(A)|$, then $|W|\subseteq T$. It follows that $$\phi_0^*\Omega^1_{X\times\mathrm{SSpec}(A)/\mathrm{SSpec}(A)}(U\times\mathrm{SSpec}(A))\simeq \Omega^1_{X\times\mathrm{SSpec}(A)/\mathrm{SSpec}(A)}(W)\otimes_{\mathcal{O}(W)} (\mathcal{O}(U)\otimes A/I).$$ Next, $\mathcal{O}(W)/\mathcal{O}(W) I\simeq \mathcal{O}(V)$ is isomorphic to $\mathcal{O}(U)\otimes A/I$ via the superalgebra morphism $(\phi^{\sharp})^{-1}$, whence $\mathcal{O}(V)\simeq (\phi^{\sharp})^{-1}(\mathcal{O}(U)\otimes 1)\otimes A/I$. Let $C$ denote $(\phi^{\sharp})^{-1}(\mathcal{O}(U)\otimes 1)$.
We have $$\Omega^1_{X\times\mathrm{SSpec}(A)/\mathrm{SSpec}(A)}(W)\otimes_{\mathcal{O}(W)} (\mathcal{O}(U)\otimes A/I)=\Omega^1_{\mathcal{O}(W)/A}\otimes_{\mathcal{O}(W)} (\mathcal{O}(U)\otimes A/I).$$ By [@maszub4 Lemma 3.6] the latter supermodule is isomorphic to $$\Omega^1_{\mathcal{O}(V)/(A/I)}\otimes_{\mathcal{O}(V)} (\mathcal{O}(U)\otimes A/I)\simeq (\Omega^1_{C/\Bbbk}\otimes A/I)\otimes_{(C\otimes A/I)} (\mathcal{O}(U)\otimes A/I).$$ Thus the $\mathcal{O}(U)$-supermodule $M$ is isomorphic to $\mathrm{Hom}_{\mathcal{O}(U)}(\Omega^1_{C/\Bbbk}, \mathcal{O}(U)\otimes I)$, where $\Omega^1_{C/\Bbbk}$ has a structure of an $\mathcal{O}(U)$-supermodule via the isomorphism $(\phi^{\sharp})^{-1}$, so that $$M\simeq \mathrm{Hom}_{\mathcal{O}(U)}(\Omega^1_{C/\Bbbk}, \mathcal{O}(U)\otimes I)\simeq \mathrm{Hom}_{\mathcal{O}(U)}(\Omega^1_{\mathcal{O}(U)/\Bbbk}, \mathcal{O}(U))\otimes I .$$ In other words, $M=(\mathrm{pr}_1)_*\mathcal{G}(U)\simeq \mathcal{T}_X(U)\otimes I$. Moreover, if we choose a finite open covering $\mathfrak{U}$ of $X$ by affine supersubschemes, then for arbitrary $p\geq 0$ there is an isomorphism $C^p(\mathfrak{U}, (\mathrm{pr}_1)_*\mathcal{G})\simeq C^p(\mathfrak{U}, \mathcal{T}_X)\otimes I$, hence $$\mathrm{H}^1(X, (\mathrm{pr}_1)_*\mathcal{G})\simeq \mathrm{\check{H}}^1(\mathfrak{U}, (\mathrm{pr}_1)_*\mathcal{G})\simeq \mathrm{\check{H}}^1(\mathfrak{U}, \mathcal{T}_X)\otimes I\simeq \mathrm{H}^1(X, \mathcal{T}_X)\otimes I.$$

**Theorem 82**. *If $\mathrm{H}^1(X, \mathcal{T}_X)=0$, then $\mathfrak{Aut}(\mathbb{X})(A)\to \mathfrak{Aut}(\mathbb{X})(A/I)$ is a surjective group morphism for an arbitrary superalgebra $A$ and any nil-superideal $I$ in $A$. In particular, $\mathfrak{Aut}(\mathbb{X})$ is a smooth locally algebraic group superscheme.*

*Proof.* Since $A$ is a direct limit of its finitely generated supersubalgebras, the same arguments as in Lemma [Lemma 54](#pro-representability){reference-type="ref" reference="pro-representability"} show that the general case can be reduced to the case when $A$ is finitely generated and $I$ is nilpotent. Moreover, the obvious induction on the nilpotency degree of $I$ allows us to suppose that $I$ is square-zero. The above discussion implies that for any $\phi\in \mathfrak{Aut}(\mathbb{X})(A/I)$ the corresponding $\phi_0$ can be extended to some $\psi\in\mathfrak{End}(\mathbb{X})(A)$. Similarly, one can find $\psi'\in \mathfrak{End}(\mathbb{X})(A)$ that corresponds to $\phi^{-1}$. The image of $\psi\psi'$ (as well as the image of $\psi'\psi$) in $\mathfrak{End}(\mathbb{X})(A/I)$ is equal to $\mathrm{id}_{X\times\mathrm{SSpec}(A/I)}$, hence both $\psi\psi'$ and $\psi'\psi$ are invertible. Thus our statement follows. ◻

# References

Barbara Fantechi et al., *Fundamental Algebraic Geometry: Grothendieck FGA explained*, Mathematical Surveys and Monographs, no. 123.

V. A. Bovdi, A. N. Zubkov, *Orbits of actions of group superschemes*, Journal of Pure and Applied Algebra, 227 (2023), 107404.

U. Bruzzo, D. H. Ruiperez, A. Polishchuk, *Notes on fundamental algebraic supergeometry. Hilbert and Picard superschemes*, Advances in Mathematics, 415 (2023), 108890.

M. Brion, *Some structure theorems for algebraic groups*, Proc. Sympos. Pure Math., 94, American Mathematical Society, Providence, RI, 2017, 53--126.

C. Carmeli, L. Caston, R. Fioresi, *Mathematical Foundations of Supersymmetry*, EMS Ser. Lect. Math., European Math. Soc., Zurich, 2011.

M. Demazure, P. Gabriel, *Algebraic Groups*, Vol. I: Algebraic Geometry. Generalities. Commutative Groups, North-Holland, Amsterdam, 1970.
A. Grothendieck, *Elements de geometrie algebrique (rediges avec la collaboration de Jean Dieudonne). I. Le langage des schemas* (French), Inst. Hautes Études Sci. Publ. Math. No. 4 (1960), 5--228.

A. Grothendieck, *Elements de geometrie algebrique (rediges avec la collaboration de Jean Dieudonne). III. Etude cohomologique des faisceaux coherents* (French), Inst. Hautes Études Sci. Publ. Math. No. 11 (1961), 5--167.

A. Grothendieck, M. Raynaud, *Revetements etales et groupe fondamental (SGA1)*, Soc. Math. de France, Paris, 2003.

A. Grothendieck, *Technique de descente, I, II, III, IV*, Seminaire Bourbaki, exposes 190 (1959), 195 (1960), 212 (1961), 221 (1961).

R. Hartshorne, *Algebraic Geometry*, Graduate Texts in Mathematics, No. 52, Springer-Verlag, New York-Heidelberg, 1977.

M. Hoshi, A. Masuoka, Y. Takahashi, *Hopf-algebraic techniques applied to super Lie groups over complete fields*, J. Algebra 562 (2020), 28--93.

J. C. Jantzen, *Representations of Algebraic Groups*, second edition, Mathematical Surveys and Monographs, 107, American Mathematical Society, Providence, RI, 2003.

S. Lang, *Algebra*, Graduate Texts in Mathematics, 211, third edition, Springer, 2002.

A. H. M. Levelt, *Foncteurs exacts a gauche*, Invent. Math., 8 (1969), 114--140.

Yu. I. Manin, *Gauge Field Theory and Complex Geometry*, translated from the 1984 Russian original by N. Koblitz and J. R. King, with an appendix by Sergei Merkulov, second edition, Grundlehren der Mathematischen Wissenschaften (Fundamental Principles of Mathematical Sciences), vol. 289, Springer-Verlag, Berlin, 1997, xii+346 pp.

A. Zubkov, P. Kolesnikov, *On dimension theory of supermodules, super-rings and superschemes*, Commun. in Algebra, Vol. 50 (2022), No. 12, 5387--5409.

J. P. Murre, *On contravariant functors from the category of pre-schemes over a field into the category of abelian groups (with an application to the Picard functor)*, Inst. Hautes Études Sci. Publ. Math. No. 23 (1964), 5--43.

H. Matsumura, *Commutative Algebra*, 2nd ed., Math. Lec. Notes Series **56**, Benjamin/Cummings Publishing Company, 1980.

H. Matsumura, F. Oort, *Representability of group functors, and automorphisms of algebraic schemes*, Invent. Math. 4 (1967), 1--25.

A. Masuoka, T. Shibata, *On functor points of affine supergroups*, J. Algebra 503 (2018), 534--572.

A. Masuoka, A. N. Zubkov, *Quotient sheaves of algebraic supergroups are superschemes*, J. Algebra, 348 (2011), 135--170.

A. Masuoka, A. N. Zubkov, *Group superschemes*, Journal of Algebra, 605 (2022), 89--145.

A. Masuoka, A. N. Zubkov, *Solvability and nilpotency for algebraic supergroups*, Journal of Pure and Applied Algebra, 221 (2017), no. 2, 339--365.

A. Masuoka, A. N. Zubkov, *On the notion of Krull super-dimension*, Journal of Pure and Applied Algebra, 224 (2020), no. 5, 106245, 30 pp.

Various authors, *Stacks project*, https://stacks.math.columbia.edu/browse.

A. N. Zubkov, *Affine quotients of supergroups*, Transform. Groups, 14 (2009), no. 3, 713--745.

A. N. Zubkov, *Some properties of Noetherian superschemes*, Algebra and Logic, Vol. 57 (2018), No. 2, 130--140 (Russian original: Vol. 57, No. 2, March--April 2018).